| column | type | values |
|--------|------|--------|
| hexsha | stringlengths | 40–40 |
| size | int64 | 5–1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3–344 |
| max_stars_repo_name | stringlengths | 5–125 |
| max_stars_repo_head_hexsha | stringlengths | 40–78 |
| max_stars_repo_licenses | sequencelengths | 1–11 |
| max_stars_count | int64 | 1–368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 3–344 |
| max_issues_repo_name | stringlengths | 5–125 |
| max_issues_repo_head_hexsha | stringlengths | 40–78 |
| max_issues_repo_licenses | sequencelengths | 1–11 |
| max_issues_count | int64 | 1–116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 3–344 |
| max_forks_repo_name | stringlengths | 5–125 |
| max_forks_repo_head_hexsha | stringlengths | 40–78 |
| max_forks_repo_licenses | sequencelengths | 1–11 |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| content | stringlengths | 5–1.04M |
| avg_line_length | float64 | 1.14–851k |
| max_line_length | int64 | 1–1.03M |
| alphanum_fraction | float64 | 0–1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01–1 |
97a98d396247ec86b3ae9d764c81e715457192c5
84
md
Markdown
docs/src/Dynamics/Chapter 4: Linear Dynamics.md
cadojo/controls
d128521e5277a2d77323362420bb3401b04159ca
[ "MIT" ]
2
2021-06-29T01:35:03.000Z
2021-06-29T04:04:56.000Z
docs/src/Dynamics/Chapter 4: Linear Dynamics.md
cadojo/exploring-control-theory
eaa8c4653c3e77c6f3272a18630984f1b33097b3
[ "MIT" ]
null
null
null
docs/src/Dynamics/Chapter 4: Linear Dynamics.md
cadojo/exploring-control-theory
eaa8c4653c3e77c6f3272a18630984f1b33097b3
[ "MIT" ]
null
null
null
# Linear Dynamics

_What's a state-space model?_

```@raw html
<mark>TODO!</mark>
```
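Until the TODO above is filled in, here is the standard answer as a minimal sketch (background material, not this chapter's own text): a continuous-time linear state-space model relates a state vector $x$, an input $u$, and an output $y$ through four constant matrices:

```math
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t)
```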
14
29
0.666667
kor_Hang
0.608019
97aa7cb70cc4c441151d496d98ff915e8b709eb8
1,993
md
Markdown
doc/site/quick-start/rootless.md
atlaskerr/umoci
965ec03bb25c99653ac9cbf969c7a64cf1c94a88
[ "Apache-2.0" ]
255
2017-02-06T16:21:05.000Z
2022-02-02T14:15:08.000Z
doc/site/quick-start/rootless.md
hassoon1986/umoci
cc1b6b2e346e6db3e03539c2b48eca03ff02f893
[ "Apache-2.0" ]
249
2017-02-06T13:11:45.000Z
2020-06-15T04:23:14.000Z
doc/site/quick-start/rootless.md
hassoon1986/umoci
cc1b6b2e346e6db3e03539c2b48eca03ff02f893
[ "Apache-2.0" ]
40
2017-02-12T11:29:52.000Z
2020-12-04T08:10:53.000Z
+++ title = "Rootless Containers" weight = 50 +++ umoci has first class support for [rootless containers][rootlesscontaine.rs], and in particular it supports rootless unpacking. This means that an unprivileged user can unpack and repack and image (which is not traditionally possible for most images), as well as generate a runtime configuration that can be used by runc to start a rootless container. {{% notice info %}} It should noted that the root filesystem created as an unprivileged user will likely not match the root filesystem that a privileged user would create. The reason for this is that there are a set of security restrictions imposed by the operating system that stop us from creating certain device inodes and set-uid binaries. umoci will do its best to try to emulate the correct behaviour, and the runtime configuration generated will further try to emulate the correct behaviour. umoci also supports the `user.rootlesscontainers` specification, which allows for further emulation of things like `chown(2)` inside rootless containers using tools like [`PRoot`][as-proot]. [as-proot]: https://github.com/AkihiroSuda/runrootless {{% /notice %}} ```text % id -u 1000 % umoci unpack --rootless --image opensuse:42.2 bundle • rootless{usr/bin/ping} ignoring (usually) harmless EPERM on setxattr "security.capability" • rootless{usr/bin/ping6} ignoring (usually) harmless EPERM on setxattr "security.capability" % runc run -b bundle rootless-ctr bash-4.3# whoami root bash-4.3# tee /hostname </proc/sys/kernel/hostname mrsdalloway % umoci repack --image opensuse:new bundle ``` {{% notice tip %}} The above warnings can be safely ignored, they are caused by umoci not having sufficient privileges in this context. They are output purely to ensure that users are aware that the root filesystem they get might not be precisely the same as the one they'd get if they extracted it as a privileged user. {{% /notice %}} [rootlesscontaine.rs]: https://rootlesscontaine.rs/
41.520833
96
0.777722
eng_Latn
0.998366
97aa9c4bfb53bd3bd9e5838cce99cdd72ffa5956
7,403
md
Markdown
README.md
jarenrowan/argos-saleslogix
cc28a580dcf6c75b0d4613b4f29cc21a989234e3
[ "Apache-2.0" ]
null
null
null
README.md
jarenrowan/argos-saleslogix
cc28a580dcf6c75b0d4613b4f29cc21a989234e3
[ "Apache-2.0" ]
null
null
null
README.md
jarenrowan/argos-saleslogix
cc28a580dcf6c75b0d4613b4f29cc21a989234e3
[ "Apache-2.0" ]
null
null
null
# Infor CRM Mobile

argos-saleslogix utilizes the [argos-sdk](https://github.com/Saleslogix/argos-sdk) to form the Infor CRM Mobile application. It includes list, detail, and edit views for most of the core CRM entities, such as Accounts, Contacts, Tickets, Leads, Opportunities, and Activities. Additional entities are available if the back office extensions (BOE) integration is enabled.

## API

The Infor CRM Mobile team maintains an "argos" documentation site available [here](http://developer.saleslogix.com/argos/). Additional guides are also available on the [argos-sdk](https://github.com/Saleslogix/argos-sdk/wiki) wiki. A sample customization is available [here](https://github.com/Saleslogix/argos-sample).

## Installation From AA (Application Architect) Bundle

- Download the latest mobile release from the Infor Extreme Portal
- Extract the zip
- There should be yet another zip file that ends with "VFS.zip". Example: "Infor Mobile v3.4 for 8.0 and later VFS.zip". Extract this zip as well.
- Once extracted, go into the Portal/SlxMobile/SourceFiles directory
- Copy the argos-sdk and products folders to your development location, such as C:\code\mobile
- In IIS (or your favorite web server), set the root directory to C:\code\mobile
- Open your browser to the URL your web server is listening on: (http://localhost:8000/products/argos-saleslogix/index-dev.html)

The AA bundle does not include index-dev-\*.html files. You can copy your product's index-dev-\*.html file (if doing a customization) into products/argos-saleslogix or use the out of the box one located [here](https://raw.githubusercontent.com/Saleslogix/argos-saleslogix/develop/index-dev.html).

## Installation From Source

### Prerequisites

* [NodeJS](https://nodejs.org/)
* [Grunt](http://gruntjs.com/getting-started)

### Install Dependencies

The package.json file in the root of argos-saleslogix contains a list of dependencies required for building from source. Here is how to install them:

- Open a command prompt in the argos-saleslogix directory
- Run `npm install`

Once dependencies are installed, here is a list of available commands:

* `npm run test` - Runs the unit tests using Jasmine. Requires grunt cli.
* `npm start` - Local development web server. Open your browser to http://localhost:8000/. Copy scripts/default.config.json to scripts/config.json to override the port and/or the SData host.
* `npm run lint` - Lints the src folder. We use the [Airbnb JavaScript Style Guide](https://github.com/airbnb/javascript/blob/master/README.md).
* `npm run less` - Compiles .less stylesheets into CSS. Requires grunt cli.
* `npm run build` - "Transpiles" the src folder and outputs to src-out. The src folder contains ECMAScript2015 code. The src-out folder will contain ECMAScript 5 code that older browsers will execute.
* `npm run watch` - Watches the src folder for changes and runs `npm run build` and `npm run lint` automatically when files are changed.

### Notice To Customizers

Starting in mobile 3.4, the index-dev-\*.html files no longer point to src; instead they point to src-out. The src folder now contains ECMAScript2015 (ES6) source code. A build step is required to populate src-out. You will need to run `npm run build` from the argos-saleslogix directory if working from git.

### Clone repository

1. Open a command prompt.
2. Change to the base directory where you cloned [Argos SDK][argos-sdk], eg: cd C:\code\mobile
3. Execute the following commands (clone command shown with READ-ONLY URL; if you are a committer, use the appropriate Read+Write URL):

    ```
    cd products
    git clone git://github.com/SageSalesLogix/argos-saleslogix.git
    ```

### Setup and run the application in "debug" mode

1. On your web server, create a Virtual Directory (IIS6), an Application (IIS7), or an Alias (Apache), or functional equivalent, called `mobile`, pointing to the base directory where you cloned [Argos SDK][argos-sdk], eg: C:\code\mobile
2. Ensure you have a MIME type setup for .less files. Example using web.config in IIS7:

    ```
    <system.webServer>
      <staticContent>
        <mimeMap fileExtension=".less" mimeType="text/css" />
      </staticContent>
    </system.webServer>
    ```

3. In your browser, navigate to the path `/mobile/products/argos-saleslogix/index-dev.html` on your web server, eg: http://localhost/mobile/products/argos-saleslogix/index-dev.html

### Building A Release Version From Source

#### Requirements

If building on Windows, the argos-sdk tools folder contains a binary called JsBit that will read the release.jsb2 file and combine/minify the required resources. If building from Linux or OSX, Mono is required to execute JsBit.

### Build scripts

- Change to the argos-sdk directory, and execute the build script there: `cd ..\argos-sdk` and then `build\release.cmd` (`./build/release.sh` for non-Windows)
- Copy the contents of `argos-sdk\deploy` to a common shared directory, such as `C:\code\mobile\deploy`
- Change back to the argos-saleslogix directory and run `build\release.cmd`
- Copy the contents of `argos-saleslogix\deploy` to the same shared deploy directory used in the sdk step (`C:\code\mobile\deploy`)
- Copy the deploy folder to your web server

### Deploying

#### Steps

1. Open the deploy folder for the product, eg: mobile\deploy\argos-saleslogix
2. If the mobile content is going to be hosted on a different server, the manifest file and the environment file must be changed (or a new one created).
    * In the `index.manifest` file at the root of the deployed site, add the full SData server URL, including the trailing slash, to the end of the `NETWORK:` section, eg:

        ```
        NETWORK:
        ../sdata/
        http://mysdataserver/sdata/
        ```

    * Modify the environment file, `environment/default.js`, to point to the appropriate SData server. If a new environment file was created, it must be added to the files:
        * index.manifest
        * index.html
        * index-nocache.html
3. Copy the entire contents of the product's deploy folder (eg: `mobile\deploy\argos-saleslogix`) to a location on the webserver that will be hosting the mobile content (hereafter, mobile server).
4. On the mobile server, create a Virtual Directory (IIS6), an Application (IIS7), or an Alias (Apache), or functional equivalent, called `mobile`, pointing to the directory where you copied the content to. In the recommended configuration, on the same server where SData is being hosted, this mapping should be at the same level as the `sdata` mapping.
5. On the mobile server, ensure that the MIME type corresponding to the `.manifest` extension is `text/cache-manifest`. This is a requirement for application caching/offline access.
6. If SData is being hosted on a different server than the mobile host, CORS (Cross Origin Resource Sharing) must be enabled on the SData server. You can find documentation for setting it up on IIS at: [Setting-Up-CORS](https://github.com/Saleslogix/argos-sdk/wiki/Setting-Up-CORS).

## Customization

* You can customize the product without modifying the core views.
* See the [argos-sample][argos-sample] customization module for a set of customization scenario examples.

[argos-sdk]: https://github.com/Saleslogix/argos-sdk "Argos SDK Source"
[argos-sample]: https://github.com/Saleslogix/argos-sample "Customization module for argos-saleslogix"
[argos]: https://github.com/Saleslogix/argos "Argos SDK API Documentation"
68.546296
369
0.761988
eng_Latn
0.968122
97ab884541484a8d01ca908932268b07a9bf87a8
5,395
md
Markdown
desktop-src/Bits/ack-for-fragment.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
3
2020-04-24T13:02:42.000Z
2021-07-17T15:32:03.000Z
desktop-src/Bits/ack-for-fragment.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/Bits/ack-for-fragment.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
1
2022-03-09T23:50:05.000Z
2022-03-09T23:50:05.000Z
---
title: Ack for Fragment
description: Use the Ack for Fragment packet to acknowledge the client's Fragment request.
ms.assetid: f5466489-2d5e-4850-a39c-37bc4923a7ac
keywords:
- Ack for Fragment BITS
topic_type:
- apiref
api_name:
- Ack for Fragment
api_type:
- NA
ms.topic: article
ms.date: 05/31/2018
---

# Ack for Fragment

Use the **Ack for Fragment** packet to acknowledge the client's [**Fragment**](fragment.md) request.

``` syntax
reason-code reason-description
BITS-Packet-Type: Ack
BITS-Session-Id: {guid}
BITS-Received-Content-Range: range
BITS-Reply-URL: url
Content-Length: length
BITS-Error-Code: error-code
BITS-Error-Context: error-context
```

## Headers

<dl>
<dt>
<span id="reason-code"></span><span id="REASON-CODE"></span>reason-code
</dt>
<dd>

Replace reason-code with an HTTP reason code. The following table shows the typical reason codes for a response to a [**Fragment**](fragment.md) request. For a list of HTTP reason codes, see [RFC 2616](https://go.microsoft.com/fwlink/p/?linkid=84048).

| Reason code | Description |
|-------------|-------------|
| 200 | OK. The request was successful. |
| 416 | Range-Not-Satisfiable. The client sent a fragment whose range is not contiguous with the previous fragment. |

</dd>
<dt>
<span id="reason-description"></span><span id="REASON-DESCRIPTION"></span>reason-description
</dt>
<dd>

Replace reason-description with the HTTP description associated with the reason code. For example, set reason-description to OK if reason-code is 200.

</dd>
<dt>
<span id="BITS-Packet-Type"></span><span id="bits-packet-type"></span><span id="BITS-PACKET-TYPE"></span>BITS-Packet-Type
</dt>
<dd>

Identifies this response packet as an Ack packet.

</dd>
<dt>
<span id="BITS-Received-Content-Range"></span><span id="bits-received-content-range"></span><span id="BITS-RECEIVED-CONTENT-RANGE"></span>BITS-Received-Content-Range
</dt>
<dd>

Zero-based offset to the next byte the server expects the client to send. For example, if the fragment contained the range 128–212, you would set range to 213.

</dd>
<dt>
<span id="BITS-Session-Id"></span><span id="bits-session-id"></span><span id="BITS-SESSION-ID"></span>BITS-Session-Id
</dt>
<dd>

String GUID that identifies the session to the client. Replace {guid} with the session identifier that the client sent in the [**Fragment**](fragment.md) request packet. If you do not recognize the session identifier, set the BITS-Error-Code header to BG\_E\_SESSION\_NOT\_FOUND.

</dd>
<dt>
<span id="BITS-Reply-URL"></span><span id="bits-reply-url"></span><span id="BITS-REPLY-URL"></span>BITS-Reply-URL
</dt>
<dd>

Contains the URL to the reply data of an upload-reply job. Include this header in the final fragment response after the upload is complete and you receive a response from the server application, if applicable.

Use the Content-Range header from the Fragment request to determine whether the upload is complete. The upload is complete if the end offset of the range value equals the total-length value minus one. (For example, if the total upload length is 1,000 bytes, the upload is complete when the fragment's range ends at offset 999.)

The BITS server posts the upload file to the server application after it determines the upload is complete. The server application processes the file and generates the reply. The BITS server sets the value of BITS-Reply-URL to the URL of the reply file from the server application.

</dd>
<dt>
<span id="Content-Length"></span><span id="content-length"></span><span id="CONTENT-LENGTH"></span>Content-Length
</dt>
<dd>

Replace length with the number of bytes included in the body of the response. Content-Length is required, even if the body of the response does not include content.

</dd>
<dt>
<span id="BITS-Error-Code"></span><span id="bits-error-code"></span><span id="BITS-ERROR-CODE"></span>BITS-Error-Code
</dt>
<dd>

Replace error-code with a hexadecimal number that represents an HRESULT value associated with a server-side error. Only include this header if reason-code is not 200 or 201.

</dd>
<dt>
<span id="BITS-Error-Context"></span><span id="bits-error-context"></span><span id="BITS-ERROR-CONTEXT"></span>BITS-Error-Context
</dt>
<dd>

Replace error-context with a hexadecimal number that represents the context in which the error occurred. Specify the hexadecimal number for [**BG\_ERROR\_CONTEXT\_REMOTE\_FILE**](/windows/desktop/api/Bits/ne-bits-__midl_ibackgroundcopyerror_0001) (0x5) if the server generated the error. Otherwise, specify the hexadecimal number for **BG\_ERROR\_CONTEXT\_REMOTE\_APPLICATION** (0x7) if the error was generated by the application to which the upload file is passed. Include this header only if the reason-code is not 200 or 201.

</dd>
</dl>

## Remarks

If the session is for an upload-reply job, there may be a delay before the client receives the final **Ack for Fragment** response. The length of the delay depends on the amount of time the server application (the application to which the server posts the upload file) takes to generate the reply.

## See also

<dl>
<dt>
[**Create-Session**](create-session.md)
</dt>
<dt>
[**Fragment**](fragment.md)
</dt>
</dl>
39.669118
528
0.697868
eng_Latn
0.929091
97abba40a3c871a82805057eb0c38c6052c89281
780
md
Markdown
CHANGELOG.md
kianmeng/custom_base
53d9cdd2457b8b301b4ca5525d4fc8569be21db9
[ "MIT" ]
19
2015-03-14T01:08:09.000Z
2021-09-16T02:40:39.000Z
CHANGELOG.md
kianmeng/custom_base
53d9cdd2457b8b301b4ca5525d4fc8569be21db9
[ "MIT" ]
24
2015-05-07T13:39:59.000Z
2021-12-15T05:40:10.000Z
CHANGELOG.md
kianmeng/custom_base
53d9cdd2457b8b301b4ca5525d4fc8569be21db9
[ "MIT" ]
6
2015-05-07T13:31:45.000Z
2021-06-08T12:38:11.000Z
# Change Log
All notable changes to this project will be documented in this file. This project adheres to [Semantic Versioning](http://semver.org/). The change log itself follows the [Keep a CHANGELOG](http://keepachangelog.com) format.

## [0.2.1][] - 2017-04-17
### Changed
- Fix deprecations [@myobie][]

## [0.2.0][] - 2016-02-01
### Changed
- Fix overflow caused by float :math.pow [@drewblas][]

## [0.1.0][] - 2015-03-12
### Added
- Initial release [@igas][]

[@drewblas]: https://github.com/drewblas
[@igas]: https://github.com/igas
[@myobie]: https://github.com/myobie
[0.2.1]: https://github.com/igas/custom_base/compare/v0.2.0...v0.2.1
[0.2.0]: https://github.com/igas/custom_base/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/igas/custom_base/compare/2b2fad2...v0.1.0
30
79
0.680769
yue_Hant
0.372752
97abf45601b52fa70ca5578d635728e6729dee9d
8,293
md
Markdown
docs/test/how-to-write-unit-tests-for-cpp-dlls.md
B4V/visualstudio-docs.ru-ru
c9c85cb0972b8eed68c78fd1a8be2410700da449
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/test/how-to-write-unit-tests-for-cpp-dlls.md
B4V/visualstudio-docs.ru-ru
c9c85cb0972b8eed68c78fd1a8be2410700da449
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/test/how-to-write-unit-tests-for-cpp-dlls.md
B4V/visualstudio-docs.ru-ru
c9c85cb0972b8eed68c78fd1a8be2410700da449
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Write unit tests for C++ DLLs in Visual Studio
ms.date: 11/04/2017
ms.prod: visual-studio-dev15
ms.technology: vs-ide-test
ms.topic: conceptual
ms.author: mblome
manager: douge
ms.workload:
- cplusplus
author: mikeblome
ms.openlocfilehash: 829882cf3504583a4e9dbc3532c900df26a921f2
ms.sourcegitcommit: 240c8b34e80952d00e90c52dcb1a077b9aff47f6
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 10/23/2018
ms.locfileid: "49862532"
---

# <a name="write-unit-tests-for-c-dlls-in-visual-studio"></a>Write unit tests for C++ DLLs in Visual Studio

There are several ways to test DLL code, depending on whether it exports the functions that you want to test. Choose one of the following options:

**The unit tests call only functions that are exported from the DLL:** Add a separate test project, as described in [Writing unit tests for C/C++](writing-unit-tests-for-c-cpp.md). In the test project, add a reference to the DLL project. Go to the procedure [Reference exported functions from the DLL project](#projectRef).

**The DLL is built as an EXE file:** Add a separate test project. Link it to the output object file. Go to the procedure [Link the tests to the object or library files](#objectRef).

**The unit tests call non-member functions that are not exported from the DLL, and the DLL can be built as a static library:** Change the DLL project so that it is compiled to a *.lib* file. Add a separate test project that references the project under test. This approach has the advantage of letting your tests use members that aren't exported, while still keeping the tests in a separate project. Go to the procedure [Change the DLL to a static library](#staticLink).

**The unit tests must call non-member functions that are not exported, and the code must be built as a dynamic link library (DLL):** Add unit tests in the same project as the product code. Go to the procedure [Add unit tests in the same project](#sameProject).

## <a name="create-the-tests"></a>Create the tests

### <a name="staticLink"></a> Change the DLL to a static library

- If your tests must use members that are not exported by the DLL project, and the project under test is built as a dynamic library, convert it to a static library.

1. In **Solution Explorer**, on the shortcut menu of the project under test, choose **Properties**. The project **Properties** window opens.

2. Choose **Configuration Properties** > **General**.

3. Set the **Configuration Type** property to **Static Library (.lib)**.

Continue with the procedure [Link the tests to the object or library files](#objectRef).

### <a name="projectRef"></a> Reference exported DLL functions from the test project

- If the DLL project exports the functions that you want to test, you can add a reference to the code project from the test project.

1. Create a native unit test project.

   1. On the **File** menu, choose **New** > **Project** > **Visual C++** > **Test** > **C++ Unit Test Project**.

2. In **Solution Explorer**, on the shortcut menu of the test project, choose **References**. The project **Properties** window opens.

3. Select **Common Properties** > **Framework and References**, and then choose the **Add New Reference** button.

4. Select **Projects**, and then the project to be tested. Choose the **Add** button.

5. In the properties of the test project, add the location of the project under test to the include directories. Choose **Configuration Properties** > **VC++ Directories** > **Include Directories**. Choose **Edit**, and then add the header directory of the project under test.

Go to [Write the unit tests](#addTests).

### <a name="objectRef"></a> Link the tests to the object or library files

- If the DLL does not export the functions that you want to test, you can add the output *.obj* or *.lib* file to the dependencies of the test project.

1. Create a native unit test project.

   1. On the **File** menu, choose **New** > **Project** > **Visual C++** > **Test** > **Native Unit Test Project**.

2. In **Solution Explorer**, on the shortcut menu of the test project, choose **Properties**.

3. Choose **Configuration Properties** > **Linker** > **Input** > **Additional Dependencies**. Choose **Edit**, and add the names of the **.obj** or **.lib** files. Do not use the full path names.

4. Choose **Configuration Properties** > **Linker** > **General** > **Additional Library Directories**. Choose **Edit**, and add the directory path of the **.obj** or **.lib** files. The path is typically within the build folder of the project under test.

5. Choose **Configuration Properties** > **VC++ Directories** > **Include Directories**. Choose **Edit**, and then add the header directory of the project under test.

Go to [Write the unit tests](#addTests).

### <a name="sameProject"></a> Add unit tests in the same project

1. Change the product code project properties to include the headers and library files that are required for unit testing.

   1. In **Solution Explorer**, on the shortcut menu of the project under test, choose **Properties**. The project **Properties** window opens.

   2. Choose **Configuration Properties** > **VC++ Directories**.

   3. Edit the Include and Library directories:

      |Directory|Property|
      |-|-|
      |**Include Directories**|**$(VCInstallDir)UnitTest\include;$(IncludePath)**|
      |**Library Directories**|**$(VCInstallDir)UnitTest\lib;$(LibraryPath)**|

2. Add a C++ Unit Test file:

   - In **Solution Explorer**, on the shortcut menu of the project, choose **Add** > **New Item** > **C++ Unit Test**.

Go to [Write the unit tests](#addTests).

## <a name="addTests"></a> Write the unit tests

1. In each unit test code file, add an `#include` statement for the headers of the project under test.

2. Add test classes and methods to the unit test code files. For example:

   ```cpp
   #include "stdafx.h"
   #include "CppUnitTest.h"
   #include "MyProjectUnderTest.h"
   using namespace Microsoft::VisualStudio::CppUnitTestFramework;
   namespace MyTest
   {
       TEST_CLASS(MyTests)
       {
       public:
           TEST_METHOD(MyTestMethod)
           {
               Assert::AreEqual(MyProject::Multiply(2,3), 6);
           }
       };
   }
   ```

## <a name="run-the-tests"></a>Run the tests

1. On the **Test** menu, choose **Windows**, and then choose **Test Explorer**.

2. If not all of your tests are visible in the window, build the test project by right-clicking its node in **Solution Explorer** and choosing **Build** or **Rebuild**.

3. In **Test Explorer**, choose **Run All**, or select the specific tests you want to run. Right-click a test for other commands, including running it in debug mode with breakpoints enabled.

## <a name="see-also"></a>See also

- [Writing unit tests for C/C++](writing-unit-tests-for-c-cpp.md)
- [Microsoft.VisualStudio.TestTools.CppUnitTestFramework API reference](../test/microsoft-visualstudio-testtools-cppunittestframework-api-reference.md)
- [Debug native code](../debugger/debugging-native-code.md)
- [Walkthrough: Creating and using a dynamic link library (C++)](/cpp/build/walkthrough-creating-and-using-a-dynamic-link-library-cpp)
- [Importing and exporting](/cpp/build/importing-and-exporting)
- [Quick start: Test driven development with Test Explorer](../test/quick-start-test-driven-development-with-test-explorer.md)
50.567073
322
0.745448
rus_Cyrl
0.923585
97ac0c851a9746dd7c05c868517bb3aa103e89f1
15,905
md
Markdown
articles/active-directory-b2c/active-directory-b2c-custom-rest-api-netfw-secure-basic.md
marcduiker/azure-docs.nl-nl
747ce1fb22d13d1e7c351e367c87810dd9eafa08
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-b2c/active-directory-b2c-custom-rest-api-netfw-secure-basic.md
marcduiker/azure-docs.nl-nl
747ce1fb22d13d1e7c351e367c87810dd9eafa08
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-b2c/active-directory-b2c-custom-rest-api-netfw-secure-basic.md
marcduiker/azure-docs.nl-nl
747ce1fb22d13d1e7c351e367c87810dd9eafa08
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Azure Active Directory B2C: Secure your RESTful services by using HTTP basic authentication'
description: Secure the custom REST API claims exchange in your Azure AD B2C by using HTTP basic authentication
services: active-directory-b2c
documentationcenter: ''
author: yoelhor
manager: joroja
editor: ''
ms.assetid: ''
ms.service: active-directory-b2c
ms.workload: identity
ms.tgt_pltfrm: na
ms.topic: article
ms.devlang: na
ms.date: 09/25/2017
ms.author: yoelh
ms.openlocfilehash: 641e0cc691eae77ef0480e5743d85e020cd8d354
ms.sourcegitcommit: cf4c0ad6a628dfcbf5b841896ab3c78b97d4eafd
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 10/21/2017
---

# <a name="secure-your-restful-services-by-using-http-basic-authentication"></a>Secure your RESTful services by using HTTP basic authentication

In a [related Azure AD B2C article](active-directory-b2c-custom-rest-api-netfw.md), you created a RESTful service (web API) that integrates with Azure Active Directory B2C (Azure AD B2C) user journeys without authentication. In this article, you add HTTP basic authentication to your RESTful service so that only authenticated users, including B2C, can access your API. With HTTP basic authentication, you set the user credentials (app ID and app secret) in your custom policy. For more information, see [Basic authentication in ASP.NET web API](https://docs.microsoft.com/en-us/aspnet/web-api/overview/security/basic-authentication).

## <a name="prerequisites"></a>Prerequisites

Complete the steps in the [Integrate REST API claims exchanges in your Azure AD B2C user journey](active-directory-b2c-custom-rest-api-netfw.md) article.

## <a name="step-1-add-authentication-support"></a>Step 1: Add authentication support

### <a name="step-11-add-application-settings-to-your-projects-webconfig-file"></a>Step 1.1: Add application settings to your project's web.config file

1. Open the Visual Studio project that you created earlier.
2. Add the following application settings to the web.config file, under the `appSettings` element:

    ```XML
    <add key="WebApp:ClientId" value="B2CServiceUserAccount" />
    <add key="WebApp:ClientSecret" value="your secret" />
    ```

3. Create a password and set the `WebApp:ClientSecret` value. To generate a complex password, run the following PowerShell code. You can use any arbitrary value.

    ```PowerShell
    $bytes = New-Object Byte[] 32
    $rand = [System.Security.Cryptography.RandomNumberGenerator]::Create()
    $rand.GetBytes($bytes)
    $rand.Dispose()
    [System.Convert]::ToBase64String($bytes)
    ```

### <a name="step-12-install-owin-libraries"></a>Step 1.2: Install OWIN libraries

To begin, add the OWIN middleware NuGet packages to the project by using the Visual Studio Package Manager Console:

```
PM> Install-Package Microsoft.Owin
PM> Install-Package Owin
PM> Install-Package Microsoft.Owin.Host.SystemWeb
```

### <a name="step-13-add-an-authentication-middleware-class"></a>Step 1.3: Add an authentication middleware class

Add the `ClientAuthMiddleware.cs` class under the *App_Start* folder. To do so:

1. Right-click the *App_Start* folder, select **Add**, and then select **Class**.

    ![Add the ClientAuthMiddleware.cs class to the App_Start folder](media/aadb2c-ief-rest-api-netfw-secure-basic/rest-api-netfw-secure-basic-OWIN-startup-auth1.png)

2. In the **Name** box, type **ClientAuthMiddleware.cs**.

    ![Create a new C# class](media/aadb2c-ief-rest-api-netfw-secure-basic/rest-api-netfw-secure-basic-OWIN-startup-auth2.png)

3. Open the *App_Start\ClientAuthMiddleware.cs* file and replace the file contents with the following code:

    ```C#
    using Microsoft.Owin;
    using System;
    using System.Collections.Generic;
    using System.Configuration;
    using System.Linq;
    using System.Security.Principal;
    using System.Text;
    using System.Threading.Tasks;
    using System.Web;

    namespace Contoso.AADB2C.API
    {
        /// <summary>
        /// Class to create a custom owin middleware to check for client authentication
        /// </summary>
        public class ClientAuthMiddleware
        {
            private static readonly string ClientID = ConfigurationManager.AppSettings["WebApp:ClientId"];
            private static readonly string ClientSecret = ConfigurationManager.AppSettings["WebApp:ClientSecret"];

            /// <summary>
            /// Gets or sets the next owin middleware
            /// </summary>
            private Func<IDictionary<string, object>, Task> Next { get; set; }

            /// <summary>
            /// Initializes a new instance of the <see cref="ClientAuthMiddleware"/> class.
            /// </summary>
            /// <param name="next"></param>
            public ClientAuthMiddleware(Func<IDictionary<string, object>, Task> next)
            {
                this.Next = next;
            }

            /// <summary>
            /// Invoke client authentication middleware during each request.
            /// </summary>
            /// <param name="environment">Owin environment</param>
            /// <returns></returns>
            public Task Invoke(IDictionary<string, object> environment)
            {
                // Get wrapper class for the environment
                var context = new OwinContext(environment);

                // Check whether the authorization header is available. This contains the credentials.
                var authzValue = context.Request.Headers.Get("Authorization");
                if (string.IsNullOrEmpty(authzValue) || !authzValue.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
                {
                    // Process next middleware
                    return Next(environment);
                }

                // Get credentials
                var creds = authzValue.Substring("Basic ".Length).Trim();
                string clientId;
                string clientSecret;

                if (RetrieveCreds(creds, out clientId, out clientSecret))
                {
                    // Set transaction authenticated as client
                    context.Request.User = new GenericPrincipal(new GenericIdentity(clientId, "client"), new string[] { "client" });
                }

                return Next(environment);
            }

            /// <summary>
            /// Retrieve credentials from header
            /// </summary>
            /// <param name="credentials">Authorization header</param>
            /// <param name="clientId">Client identifier</param>
            /// <param name="clientSecret">Client secret</param>
            /// <returns>True if valid credentials were presented</returns>
            private bool RetrieveCreds(string credentials, out string clientId, out string clientSecret)
            {
                string pair;
                clientId = clientSecret = string.Empty;

                try
                {
                    pair = Encoding.UTF8.GetString(Convert.FromBase64String(credentials));
                }
                catch (FormatException)
                {
                    return false;
                }
                catch (ArgumentException)
                {
                    return false;
                }

                var ix = pair.IndexOf(':');
                if (ix == -1)
                {
                    return false;
                }

                clientId = pair.Substring(0, ix);
                clientSecret = pair.Substring(ix + 1);

                // Return whether credentials are valid
                return (string.Compare(clientId, ClientAuthMiddleware.ClientID) == 0 &&
                    string.Compare(clientSecret, ClientAuthMiddleware.ClientSecret) == 0);
            }
        }
    }
    ```

### <a name="step-14-add-an-owin-startup-class"></a>Step 1.4: Add an OWIN startup class

Add an OWIN startup class named `Startup.cs` to the API. To do so:

1. Right-click the project, select **Add** > **New Item**, and then search for **OWIN**.

    ![Add an OWIN startup class](media/aadb2c-ief-rest-api-netfw-secure-basic/rest-api-netfw-secure-basic-OWIN-startup.png)

2. Open the *Startup.cs* file and replace the file contents with the following code:

    ```C#
    using Microsoft.Owin;
    using Owin;

    [assembly: OwinStartup(typeof(Contoso.AADB2C.API.Startup))]
    namespace Contoso.AADB2C.API
    {
        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                app.Use<ClientAuthMiddleware>();
            }
        }
    }
    ```

### <a name="step-15-protect-the-identity-api-class"></a>Step 1.5: Protect the Identity API class

Open Controllers\IdentityController.cs and add the `[Authorize]` tag to the controller class. This tag restricts access to the controller to users who meet the authorization requirement.

![Add the Authorize tag to the controller](media/aadb2c-ief-rest-api-netfw-secure-basic/rest-api-netfw-secure-basic-authorize.png)

## <a name="step-2-publish-to-azure"></a>Step 2: Publish to Azure

To publish your project, in Solution Explorer, right-click the **Contoso.AADB2C.API** project, and then select **Publish**.

## <a name="step-3-add-the-restful-services-app-id-and-app-secret-to-azure-ad-b2c"></a>Step 3: Add the RESTful services app ID and app secret to Azure AD B2C

After your RESTful service is protected by the client ID (username) and secret, you must store the credentials in your Azure AD B2C tenant. Your custom policy provides the credentials when it invokes your RESTful services.

### <a name="step-31-add-a-restful-services-client-id"></a>Step 3.1: Add a RESTful services client ID

1. In your Azure AD B2C tenant, select **B2C Settings** > **Identity Experience Framework**.
2. Select **Policy Keys** to view the keys that are available in your tenant.
3. Select **Add**.
4. For **Options**, select **Manual**.
5. For **Name**, type **B2cRestClientId**. The prefix *B2C_1A_* might be added automatically.
6. In the **Secret** box, enter the app ID that you defined earlier.
7. For **Key usage**, select **Secret**.
8. Select **Create**.
9. Confirm that you've created the `B2C_1A_B2cRestClientId` key.

### <a name="step-32-add-a-restful-services-client-secret"></a>Step 3.2: Add a RESTful services client secret

1. In your Azure AD B2C tenant, select **B2C Settings** > **Identity Experience Framework**.
2. Select **Policy Keys** to view the keys that are available in your tenant.
3. Select **Add**.
4. For **Options**, select **Manual**.
5. For **Name**, type **B2cRestClientSecret**. The prefix *B2C_1A_* might be added automatically.
6. In the **Secret** box, enter the app secret that you defined earlier.
7. For **Key usage**, select **Secret**.
8. Select **Create**.
9. Confirm that you've created the `B2C_1A_B2cRestClientSecret` key.

## <a name="step-4-change-the-technical-profile-to-support-basic-authentication-in-your-extension-policy"></a>Step 4: Change the technical profile to support basic authentication in your extension policy

1. Open the extension file (TrustFrameworkExtensions.xml) from your working directory.
2. Search for the `<TechnicalProfile>` node that includes `Id="REST-API-SignUp"`.
3. Locate the `<Metadata>` element.
4. Change the *AuthenticationType* to *Basic*, as follows:

    ```xml
    <Item Key="AuthenticationType">Basic</Item>
    ```

5. Immediately after the closing `<Metadata>` element, add the following XML snippet:

    ```xml
    <CryptographicKeys>
    <Key Id="BasicAuthenticationUsername" StorageReferenceId="B2C_1A_B2cRestClientId" />
    <Key Id="BasicAuthenticationPassword" StorageReferenceId="B2C_1A_B2cRestClientSecret" />
    </CryptographicKeys>
    ```

    After you add the snippet, your technical profile should look like the following XML code:

    ![Add basic authentication XML elements](media/aadb2c-ief-rest-api-netfw-secure-basic/rest-api-netfw-secure-basic-add-1.png)

## <a name="step-5-upload-the-policy-to-your-tenant"></a>Step 5: Upload the policy to your tenant

1. In the [Azure portal](https://portal.azure.com), switch to the [context of your Azure AD B2C tenant](active-directory-b2c-navigate-to-b2c-context.md), and then open **Azure AD B2C**.
2. Select **Identity Experience Framework**.
3. Open **All Policies**.
4. Select **Upload Policy**.
5. Select the **Overwrite the policy if it exists** check box.
6. Upload the *TrustFrameworkExtensions.xml* file, and then confirm that it passes validation.

## <a name="step-6-test-the-custom-policy-by-using-run-now"></a>Step 6: Test the custom policy by using Run Now

1. Open **Azure AD B2C Settings**, and then select **Identity Experience Framework**.

    >[!NOTE]
    >Run Now requires at least one application to be preregistered on the tenant. To learn how to register applications, see the Azure AD B2C [Get started](active-directory-b2c-get-started.md) article or the [Application registration](active-directory-b2c-app-registration.md) article.

2. Open **B2C_1A_signup_signin**, the relying party (RP) custom policy that you uploaded, and then select **Run now**.
3. Test the process by typing **Test** in the **Given Name** box. Azure AD B2C displays an error message at the top of the window.

    ![Test your Identity API](media/aadb2c-ief-rest-api-netfw-secure-basic/rest-api-netfw-test.png)

4. In the **Given Name** box, type a name (other than "Test"). Azure AD B2C signs up the user and then sends a loyalty number to your application. Note the number in this example:

    ```
    {
      "typ": "JWT",
      "alg": "RS256",
      "kid": "X5eXk4xyojNFum1kl2Ytv8dlNP4-c57dO6QGTVBwaNk"
    }.{
      "exp": 1507125903,
      "nbf": 1507122303,
      "ver": "1.0",
      "iss": "https://login.microsoftonline.com/f06c2fe8-709f-4030-85dc-38a4bfd9e82d/v2.0/",
      "aud": "e1d2612f-c2bc-4599-8e7b-d874eaca1ee1",
      "acr": "b2c_1a_signup_signin",
      "nonce": "defaultNonce",
      "iat": 1507122303,
      "auth_time": 1507122303,
      "loyaltyNumber": "290",
      "given_name": "Emily",
      "emails": ["[email protected]"]
    }
    ```

## <a name="optional-download-the-complete-policy-files-and-code"></a>(Optional) Download the complete policy files and code

* After you complete the [Get started with custom policies](active-directory-b2c-get-started-custom.md) walkthrough, we recommend that you build your scenario by using your own custom policy files. For your reference, we have provided [sample policy files](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/aadb2c-ief-rest-api-netfw-secure-basic).
* You can download the complete code from the [sample Visual Studio solution for reference](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/aadb2c-ief-rest-api-netfw/Contoso.AADB2C.API).
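As a quick sanity check (this request is not part of the original article; the host and route are hypothetical, assuming default Web API routing for IdentityController), you can call the protected service directly with the same credentials the policy uses. HTTP basic authentication sends `Authorization: Basic base64(clientId:clientSecret)`, which is exactly what the `RetrieveCreds` method shown earlier decodes; `curl -u` builds that header for you:

```
curl -u B2CServiceUserAccount:<your-app-secret> https://<your-site>.azurewebsites.net/api/identity
```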
## <a name="next-steps"></a>Volgende stappen * [Clientcertificaten gebruiken voor het beveiligen van uw RESTful-API](active-directory-b2c-custom-rest-api-netfw-secure-cert.md)
45.704023
428
0.686702
nld_Latn
0.920818
97ad8e351e636ce6e7e7300e282cec780c98af92
4,603
md
Markdown
Umbraco-Uno/Uno-pedia/Settings/General-Settings/index.md
sumitkharche/UmbracoDocs
de839c20818d4ed13776ecf17ed0145b6d7886a6
[ "MIT" ]
201
2015-08-24T22:15:42.000Z
2022-02-19T19:19:54.000Z
Umbraco-Uno/Uno-pedia/Settings/General-Settings/index.md
sumitkharche/UmbracoDocs
de839c20818d4ed13776ecf17ed0145b6d7886a6
[ "MIT" ]
2,451
2015-08-21T14:22:12.000Z
2022-03-31T19:54:23.000Z
Umbraco-Uno/Uno-pedia/Settings/General-Settings/index.md
sumitkharche/UmbracoDocs
de839c20818d4ed13776ecf17ed0145b6d7886a6
[ "MIT" ]
725
2015-08-26T00:54:49.000Z
2022-03-31T08:22:11.000Z
---
versionFrom: 8.0.0
---

# General Settings

As a child page under *Settings* you'll find *General*. Here you will find a long list of settings and configuration options for your website. The general settings will apply to the entire website.

In this article you can learn more about the various options as well as get an idea of what the settings will affect.

The General settings are divided into 6 different groups:

* [General](#general)
* [Newsletter](#newsletter)
* [Instagram](#instagram)
* [Cookie Consent](#cookie-consent)
* [Search](#search)
* [Tracking and Access](#tracking-and-access)

## General

A set of different configuration options.

### Title Signature

Optional text that will be added to the title on all pages on your website. E.g. if the page name is *Start* and your Title Signature is " - by Umbraco", the full title of the page will be "*Start - by Umbraco*".

### Twitter Username

Fill in your Twitter handle, which will be used ...

### Contact Form Email and Contact Form Subject

When using the *Contact Form* widget, you can use these to define the email address that should receive the form submissions and the subject of the mails sent to that receiver.

## Newsletter

When you have a newsletter service set up using either MailChimp or Campaign Monitor, make sure to configure it here. It will enable you to use the *Newsletter* widget.

### Email Marketing Provider

Choose your provider from a dropdown list. Currently we support [MailChimp](https://mailchimp.com/) and [Campaign Monitor](https://www.campaignmonitor.com/).

### API Key

In order for your website to connect with the external service, you will need to add in your API key.

### Default Subscriber List ID

In order to choose which list your users will be subscribing to from your website, set the List ID here.

## Instagram

In order to use the *Instagram Feed* widget, you will need to configure the following settings:

### Username

Your Instagram username, without `@`.

### User ID (optional)

Your Instagram User ID.

### Access Token (required)

You will need to add your access token as well, in order for your Umbraco Uno project to connect to your Instagram profile. In order to generate an access token you will need to create a Facebook app and add your Instagram account as a test user. This can all be done by following steps 1-3 in the [Facebook Basic Display API Guide](https://developers.facebook.com/docs/instagram-basic-display-api/getting-started).

## Cookie Consent

Your website already has a built-in cookie consent template set up. In this group of properties, you can manage the contents and decide whether you want to enable it on your website.

### Enable Cookie Consent Dialog

When this is checked, the cookie consent dialog will be shown on your website for all new visitors.

### Text

What the cookie consent dialog says is entirely up to you. We recommend mentioning all cookies you've set up yourself, including the ones that ship with Umbraco Cloud.

### Learn More Link

Adds a link to the cookie consent dialog, which the customer can follow to learn more about your cookie policies. You can choose to add an external/internal URL or choose an existing page from your website.

### Dismiss Button Text

Define what the text on the *dismiss button* should say.

### Theme

The cookie consent dialog comes with two pre-defined themes: one with a black background, and one with a white background.

### Position

You can choose between three different layouts for the cookie consent dialog:

* A box that floats in the bottom-left of your website (`float-left`)
* A box that floats in the bottom-right of your website (`float-right`)
* A full-width banner at the bottom of your website (`banner-bottom`)

![Cookie Consent Dialog](images/cookies-options.png)

## Search

The Search group has only a single configuration option, where you can decide to show search results in a grid. With this view, the search results will include the SEO image set on the pages found. The default search view is a traditional list view.

## Tracking and Access

As the final group on the General settings, you can connect your Google tools with the website. The tools you can connect through here are **Google Analytics**, **Google Tag Manager** and **Google Maps**. None of these are required, but we do recommend adding a Google Maps API Key, as it will enable you to use the *Map* widget.

You can also enable comments by adding your **Disqus** shortname. Learn more about how this works in [the official Disqus documentation](https://help.disqus.com/en/articles/1717111-what-s-a-shortname).
38.041322
207
0.764284
eng_Latn
0.996631
97add5118fdaf6a891af7a4b09c70aab88ec8587
160
md
Markdown
docs/cas-server-documentation/release_notes/Overview.md
hsartoris-bard/cas
10706831a27543e8ded224a127570121e047d3a0
[ "Apache-2.0" ]
8,772
2016-05-08T04:44:50.000Z
2022-03-31T06:02:13.000Z
docs/cas-server-documentation/release_notes/Overview.md
hsartoris-bard/cas
10706831a27543e8ded224a127570121e047d3a0
[ "Apache-2.0" ]
2,911
2016-05-07T23:07:52.000Z
2022-03-31T15:09:08.000Z
docs/cas-server-documentation/release_notes/Overview.md
hsartoris-bard/cas
10706831a27543e8ded224a127570121e047d3a0
[ "Apache-2.0" ]
3,675
2016-05-08T04:45:46.000Z
2022-03-31T09:34:54.000Z
---
layout: default
title: CAS - Release Notes
category: Planning
---

# Release Notes

- [RC1](RC1.html)
- [RC2](RC2.html)
- [RC3](RC3.html)
- [RC4](RC4.html)
12.307692
26
0.63125
yue_Hant
0.510951
97ae534a98abaeb854f7a3f4e255fdb7fea92100
29
md
Markdown
README.md
jdeugarte/bonsaiPOS
2aef756a2900f1ad959e5e255264038d1b1a984e
[ "MIT" ]
null
null
null
README.md
jdeugarte/bonsaiPOS
2aef756a2900f1ad959e5e255264038d1b1a984e
[ "MIT" ]
null
null
null
README.md
jdeugarte/bonsaiPOS
2aef756a2900f1ad959e5e255264038d1b1a984e
[ "MIT" ]
null
null
null
Point of Sale for bonsaiERP.
14.5
28
0.793103
eng_Latn
0.852277
97ae55e8ecd0592093c5d968dce520d3ad5f9608
3,898
md
Markdown
howto/authorization-with-oauth/README.md
Abhith/docs
89b3c223d39e73aa26930c1cc9fb50bbee8ff253
[ "MIT" ]
1
2021-08-09T16:18:00.000Z
2021-08-09T16:18:00.000Z
howto/authorization-with-oauth/README.md
Abhith/docs
89b3c223d39e73aa26930c1cc9fb50bbee8ff253
[ "MIT" ]
null
null
null
howto/authorization-with-oauth/README.md
Abhith/docs
89b3c223d39e73aa26930c1cc9fb50bbee8ff253
[ "MIT" ]
null
null
null
# Authorization with oAuth

Dapr OAuth 2.0 [middleware](../../concepts/middleware/middleware.md) allows you to enable [OAuth](https://oauth.net/2/) authorization on Dapr endpoints for your web APIs, using the [Authorization Code Grant flow](https://tools.ietf.org/html/rfc6749#section-4.1). When the middleware is enabled, any method invocation through Dapr needs to be authorized before getting passed to the user code.

## Register your application with an authorization server

Different authorization servers provide different application registration experiences. Here are some samples:

* [Azure AAD](https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-oauth-code)
* [Facebook](https://developers.facebook.com/apps)
* [Fitbit](https://dev.fitbit.com/build/reference/web-api/oauth2/)
* [GitHub](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/)
* [Google APIs](https://console.developers.google.com/apis/credentials/consen)
* [Slack](https://api.slack.com/docs/oauth)
* [Twitter](http://apps.twitter.com/)

To configure the Dapr OAuth middleware, you'll need to collect the following information:

* Client ID (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
* Client secret (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
* Scopes (see [here](https://oauth.net/2/scope/))
* Authorization URL
* Token URL

Authorization/Token URLs of some of the popular authorization servers:

|Server|Authorization URL|Token URL|
|--------|--------|--------|
|Azure AAD|https://login.microsoftonline.com/{tenant}/oauth2/authorize|https://login.microsoftonline.com/{tenant}/oauth2/token|
|GitHub|https://github.com/login/oauth/authorize|https://github.com/login/oauth/access_token|
|Google|https://accounts.google.com/o/oauth2/v2/auth|https://accounts.google.com/o/oauth2/token https://www.googleapis.com/oauth2/v4/token|
|Twitter|https://api.twitter.com/oauth/authorize|https://api.twitter.com/oauth2/token|

## Define the middleware component definition

An OAuth middleware is defined by a component:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2
spec:
  type: middleware.http.oauth2
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "<comma-separated scope names>"
  - name: authURL
    value: "<authorization URL>"
  - name: tokenURL
    value: "<token exchange URL>"
  - name: redirectURL
    value: "<redirect URL>"
  - name: authHeaderName
    value: "<header name under which the secret token is saved>"
```

## Define a custom pipeline

To use the OAuth middleware, you should create a [custom pipeline](../../concepts/middleware/middleware.md) using [Dapr configuration](../../concets/../concepts/configuration/README.md), as shown in the following sample:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
spec:
  httpPipeline:
    handlers:
    - name: middleware.http.oauth2
```

## Apply the configuration

To apply the above configuration to your Dapr sidecar, add a ```dapr.io/config``` annotation to your pod spec:

```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        ...
        dapr.io/config: "pipeline"
        ...
```

## Accessing the access token

Once everything is in place, whenever a client tries to invoke an API method through the Dapr sidecar (such as calling the *v1.0/invoke/* endpoint), it will be redirected to the authorization server's consent page if an access token is not found. Otherwise, the access token is written to the **authHeaderName** header and made available to the app code.
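As an illustration (not part of the original doc; the app ID `myapp`, the method `mymethod`, and the default sidecar HTTP port 3500 are assumptions), a client triggers the middleware simply by invoking a method through the sidecar:

```
curl http://localhost:3500/v1.0/invoke/myapp/method/mymethod
```

On the first call without a token, the middleware responds with a redirect to the authorization server's consent page; once the flow completes, the access token is forwarded to your app in the configured **authHeaderName** header.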
38.594059
393
0.716008
eng_Latn
0.739925
97aee193cc750e405116d0d5875beba456dd9236
731
md
Markdown
.github/ISSUE_TEMPLATE/bug-report.md
dotpaul/roslyn-analyzers
bbe3bbb64f8d1f1a143cc18341ce6fb97c792416
[ "Apache-2.0" ]
2
2018-12-13T21:06:29.000Z
2018-12-13T21:06:30.000Z
.github/ISSUE_TEMPLATE/bug-report.md
dotpaul/roslyn-analyzers
bbe3bbb64f8d1f1a143cc18341ce6fb97c792416
[ "Apache-2.0" ]
null
null
null
.github/ISSUE_TEMPLATE/bug-report.md
dotpaul/roslyn-analyzers
bbe3bbb64f8d1f1a143cc18341ce6fb97c792416
[ "Apache-2.0" ]
null
null
null
---
name: Bug report
about: Report a bug, false-positive or false-negative
title: ''
labels: ''
assignees: ''
---

### Describe the bug

<!-- A clear and concise description of what the bug is. -->

### Steps To Reproduce

<!-- Provide the steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error -->

### Expected behavior

### Actual behavior

### Analyzer

**Package**: [Microsoft.CodeAnalysis.FxCopAnalyzers](https://www.nuget.org/packages/Microsoft.CodeAnalysis.FxCopAnalyzers)

**Version**: v3.3.0 (Latest)

**Diagnostic ID**: [CA1716](https://docs.microsoft.com/visualstudio/code-quality/ca1716)

## Additional context

<!-- Add any other context about the problem here. -->
18.74359
122
0.679891
eng_Latn
0.548123
97afeb71b52402d92976b6e564aa67014911aedd
1,738
md
Markdown
content/en/news/1.5.4-light-4j.md
tzurita/docs
727a0f6fb7a30e243317d6e859f7f5bdfd623e3b
[ "MIT" ]
1
2021-11-08T09:45:59.000Z
2021-11-08T09:45:59.000Z
content/en/news/1.5.4-light-4j.md
Smithsonian/si-docs
727a0f6fb7a30e243317d6e859f7f5bdfd623e3b
[ "MIT" ]
null
null
null
content/en/news/1.5.4-light-4j.md
Smithsonian/si-docs
727a0f6fb7a30e243317d6e859f7f5bdfd623e3b
[ "MIT" ]
1
2021-02-25T19:04:08.000Z
2021-02-25T19:04:08.000Z
--- title: "1.5.4 light-4j" date: 2017-11-22T13:20:36-05:00 description: "Startup Hooks, Shutdown Hooks, Middleware Handlers and Route Handler are configured in service.yml instead of Java SPI" categories: ["Releases"] keywords: [] slug: "1.5.4" aliases: [] toc: false draft: false --- This release [1.5.4](https://github.com/networknt/light-4j/releases/tag/1.5.4) adds several enhancements. * [Server Plugins Configuration][] - Previously, all the plugins are loaded by Java SPI and it cannot be externalized or overwritten. To give developer more flexibility to put in mock handler for their testing and switch handler on production for debugging etc. We have moved the configration to our own service module which is configed by service.yml file. This requires existing application to update service.yml to add entries and remove META-INF folder. You can refer to the petstore example at https://github.com/networknt/light-example-4j/tree/master/rest/swagger/petstore * [Service Config with Generic Type][] - This is an enhancement for service module to load individual instance with a generic type. It is used in our OpenAPI 3.0 specification [openapi-parser][] to load validators for each model class. You an read [service tutorial][] to learn how to use it. * [Add Constant for OpenAPI 3.0 Support][] - As OpenAPI 3.0 and Swagger 2.0 are supported at the same time in light-rest-4j in this release. We have separate the two with two different set of middleware handlers. Also, the constant used are different in order to make sure developers are not confused. [Server Plugins Configuration]: /concern/server/ [openapi-parser]: https://github.com/networknt/openapi-parser [service tutorial]: /tutorial/common/service/
46.972973
175
0.772152
eng_Latn
0.987311
97b018dec27cb7bd436b34ed807ff7eac1b06f43
13,923
md
Markdown
docs/2014/relational-databases/indexes/sort-in-tempdb-option-for-indexes.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/indexes/sort-in-tempdb-option-for-indexes.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/indexes/sort-in-tempdb-option-for-indexes.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: "SORT_IN_TEMPDB option for indexes | Microsoft Docs"
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: table-view-index
ms.topic: conceptual
helpviewer_keywords:
- SORT_IN_TEMPDB option
- disk space [SQL Server], indexes
- space [SQL Server], indexes
- tempdb database [SQL Server], indexes
- indexes [SQL Server], tempdb database
- index creation [SQL Server], tempdb database
ms.assetid: 754a003f-fe51-4d10-975a-f6b8c04ebd35
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
ms.openlocfilehash: 49a10795cbb9177837960739890baebc221c0712
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 06/15/2019
ms.locfileid: "63035599"
---
# <a name="sortintempdb-option-for-indexes"></a>SORT_IN_TEMPDB option for indexes

When you create or rebuild an index with the SORT_IN_TEMPDB option set to ON, you direct the [!INCLUDE[ssDEnoversion](../../includes/ssdenoversion-md.md)] to use **tempdb** to store the intermediate sort results that are used to build the index. Although this option increases the amount of disk space that is used during the index build, it can reduce the time required to create the index when **tempdb** is on a different set of disks than the user database. For more information about **tempdb**, see [Configure the index create memory Server Configuration Option](../../database-engine/configure-windows/configure-the-index-create-memory-server-configuration-option.md).

## <a name="phases-of-index-building"></a>Phases of index building

When the [!INCLUDE[ssDE](../../includes/ssde-md.md)] builds an index, it goes through the following phases:

- First, the [!INCLUDE[ssDE](../../includes/ssde-md.md)] scans the data pages to retrieve key values and builds a leaf-level index row for each data row. When the internal sort buffers have been filled with leaf-level index entries, the entries are sorted and written to disk as an intermediate sort run. The [!INCLUDE[ssDE](../../includes/ssde-md.md)] then resumes the data page scan until the sort buffers are filled again. This pattern of scanning multiple data pages, followed by sorting and writing a sort run, continues until all the rows of the base table have been processed. In a clustered index, the leaf rows of the index are the data rows of the table, so the intermediate sort runs contain all the data rows. In a nonclustered index, the leaf rows can contain nonkey columns, but they are generally smaller than in a clustered index. However, a nonclustered sort run can be large if the index keys are large or if several nonkey columns are included in the index. For more information about including nonkey columns, see [Create Indexes with Included Columns](create-indexes-with-included-columns.md).
- The [!INCLUDE[ssDE](../../includes/ssde-md.md)] merges the sorted runs of leaf-level index rows into a single sorted stream. The sort-merge component of the [!INCLUDE[ssDE](../../includes/ssde-md.md)] starts with the first page of each sort run, finds the lowest key across all the pages, and passes that leaf row to the index create component. The next lowest key is then processed, then the one after it, and so on. When the last leaf-level index row has been extracted from a sort run page, the process switches to the next page of that sort run. When all the pages in a sort run extent have been processed, the extent is freed. As each leaf index row is passed to the index create component, it is included in a leaf index page in the buffer. Each leaf page is written when it is filled. While leaf pages are being written, the [!INCLUDE[ssDE](../../includes/ssde-md.md)] also builds the upper levels of the index. Each upper-level index page is written when it is filled.

## <a name="sortintempdb-option"></a>SORT_IN_TEMPDB option

When the SORT_IN_TEMPDB option is set to OFF (the default), the sort runs are stored in the destination filegroup. During the first phase of the index build, the alternating reads of the base table pages and writes of the sort runs move the disk read/write heads from one area of the disk to another. The heads stay in the data pages area while the data pages are being scanned. They move to an area of free space when the sort buffers fill and the current sort run has to be written to disk, and then move back to the data pages area when the table page scan resumes. The read/write head movement increases in the second phase. At that time, the sort process typically alternates reads from each sort run area. Both the sort runs and the new index pages are built in the destination filegroup. This means that at the same time the [!INCLUDE[ssDE](../../includes/ssde-md.md)] spreads reads across the sort runs, it must periodically jump to the index extents to write new index pages as they are filled.

If the SORT_IN_TEMPDB option is set to ON and **tempdb** is stored on a different set of disks than the destination filegroup, the reads of the data pages during the first phase occur on a different disk from the writes to the sort work area in **tempdb**. This means that the disk reads of the data keys generally proceed more serially across the disk, and that the writes to the disk holding **tempdb** are also generally serial, as are the writes that build the final index. Even if other users are using the database and accessing separate disk addresses, the overall pattern of reads and writes is much more efficient when the SORT_IN_TEMPDB option is specified.

With the SORT_IN_TEMPDB option, the index extents may end up closer together, especially if the CREATE INDEX operation is not processed in parallel. The sort work area extents are freed in a somewhat random order with regard to their location in the database. If the sort work areas are contained in the destination filegroup, then as the sort work extents are freed, they can be acquired by the requests for extents in which to store the index structure as it is built. This can randomize the locations of the index extents to a degree. If the sort extents are held separately in **tempdb**, the sequence in which they are freed has no relationship to the location of the index extents. Additionally, when the intermediate sort runs are stored in **tempdb** instead of the destination filegroup, more space is available in the destination filegroup. This increases the chances that the index extents will be contiguous.

The SORT_IN_TEMPDB option affects only the current statement. No metadata records whether the index was sorted in **tempdb**. For example, if you create a nonclustered index using the SORT_IN_TEMPDB option and later create a clustered index without specifying the option, the [!INCLUDE[ssDE](../../includes/ssde-md.md)] does not use the option when it rebuilds the nonclustered index.

> [!NOTE]
> If no sort operation is required, or if the sort can be performed in memory, the SORT_IN_TEMPDB option is ignored.

## <a name="disk-space-requirements"></a>Disk space requirements

When you set the SORT_IN_TEMPDB option to ON, there must be sufficient free space in **tempdb** to store the intermediate sort runs, and sufficient free space in the destination filegroup to store the new index. The CREATE INDEX statement fails if there is not enough free space and there is some reason the databases cannot autogrow to acquire more space (for example, there is no space left on the disk, or autogrow is turned off).

If SORT_IN_TEMPDB is set to OFF, the available free space in the destination filegroup must be roughly the size of the final index. During the first phase, the sort runs are built; they require about as much space as the final index. During the second phase, each sort run extent is freed after it has been processed. The sort run extents are therefore freed at about the same rate at which extents are acquired to store the pages of the final index, so the overall space requirements do not significantly exceed the size of the final index. One side effect of this is that when the amount of available free space is close to the size of the final index, the [!INCLUDE[ssDE](../../includes/ssde-md.md)] tends to reuse the sort run extents very quickly after they are freed. Because the sort run extents are freed in a somewhat random manner, this reduces the contiguity of the index extents in this scenario. If SORT_IN_TEMPDB is set to OFF, the contiguity of the index extents is improved when there is enough free space available in the destination filegroup that the index extents can be allocated from a contiguous pool rather than from the just-deallocated sort run extents.

When you create a nonclustered index, the following amount of space must be available:

- If SORT_IN_TEMPDB is set to ON, there must be sufficient free space in **tempdb** to store the sort runs, and sufficient free space in the destination filegroup to store the final index structure. The sort runs contain the leaf rows of the index.
- If SORT_IN_TEMPDB is set to OFF, there must be sufficient free space in the destination filegroup to store the final index structure. The contiguity of the index extents may be improved if more free space is available.

When you create a clustered index on a table that has no nonclustered indexes, the following amount of space must be available:

- If SORT_IN_TEMPDB is set to ON, there must be sufficient free space in **tempdb** to store the sort runs. These include the data rows of the table. There must be sufficient free space in the destination filegroup to store the final index structure. This includes the data rows of the table and the B-tree of the index. You may have to adjust the estimate for factors such as a large key size or a fill factor with a low value.
- If SORT_IN_TEMPDB is set to OFF, there must be sufficient free space in the destination filegroup to store the final table. This includes the index structure. The contiguity of the table and index extents may be improved if more free space is available.

When you create a clustered index on a table that has nonclustered indexes, the following amount of space must be available:

- If SORT_IN_TEMPDB is set to ON, there must be sufficient free space in **tempdb** to store the collection of sort runs for the largest index (typically the clustered index), and sufficient free space in the destination filegroup to store the final structures of all the indexes. This includes the clustered index, which contains the data rows of the table.
- If SORT_IN_TEMPDB is set to OFF, there must be sufficient free space in the destination filegroup to store the final table. This includes the structures of all the indexes. The contiguity of the table and index extents may be improved if more free space is available.

## <a name="related-tasks"></a>Related tasks

[CREATE INDEX &#40;Transact-SQL&#41;](/sql/t-sql/statements/create-index-transact-sql)

[Reorganize and Rebuild Indexes](indexes.md)

## <a name="related-content"></a>Related content

[ALTER INDEX &#40;Transact-SQL&#41;](/sql/t-sql/statements/alter-index-transact-sql)

[Configure the index create memory Server Configuration Option](../../database-engine/configure-windows/configure-the-index-create-memory-server-configuration-option.md)

[Disk Space Requirements for Index DDL Operations](disk-space-requirements-for-index-ddl-operations.md)
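As a concrete illustration of the SORT_IN_TEMPDB option described above, here is a minimal T-SQL sketch; the table, column, and index names are hypothetical, not taken from the text:

```sql
-- Build a nonclustered index, storing the intermediate sort runs in tempdb
-- rather than in the destination filegroup. All object names are placeholders.
CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerID
    ON dbo.SalesOrder (CustomerID)
    WITH (SORT_IN_TEMPDB = ON);
```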
160.034483
1,417
0.819364
deu_Latn
0.998164
97b125f14870fd9ec9d2668c9138fe62948146ec
696
md
Markdown
core/README.md
rnett/krosstalk
f980a36703d9c35cff9cde2b019a815a50b5d1a9
[ "Apache-2.0" ]
12
2021-06-02T09:50:08.000Z
2021-08-02T21:33:12.000Z
core/README.md
rnett/krosstalk
f980a36703d9c35cff9cde2b019a815a50b5d1a9
[ "Apache-2.0" ]
3
2021-07-01T03:30:12.000Z
2021-10-24T01:16:39.000Z
core/README.md
rnett/krosstalk
f980a36703d9c35cff9cde2b019a815a50b5d1a9
[ "Apache-2.0" ]
null
null
null
# Krosstalk Core

These docs are for the core artifacts of Krosstalk:

* `krosstalk-base`: APIs shared between the runtime and compiler plugin
* `krosstalk`: The core API.
* `krosstalk-server`: The server API.
* `krosstalk-client`: The client API.

Note that despite calling `krosstalk` "the core API", both the client and server APIs are fully multiplatform. However, generally your common source set will be the only one depending on `krosstalk` directly.

A guide to Krosstalk is available in the [GitHub README](./../README.md#krosstalk-a-pure-kotlin-pluggable-rpc-library), and instructions for writing plugins are in [WRITING_PLUGINS.md](./../WRITING_PLUGINS.md#writing-krosstalk-plugins).
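As a sketch of how these artifacts typically map onto source sets in a multiplatform build, using the Gradle Kotlin DSL; the Maven coordinates and version are assumptions, not taken from these docs:

```kotlin
// build.gradle.kts (hypothetical coordinates and version; check the
// project's published artifacts for the real ones)
kotlin {
    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation("com.github.rnett.krosstalk:krosstalk:VERSION")
            }
        }
        val jvmMain by getting {
            dependencies {
                implementation("com.github.rnett.krosstalk:krosstalk-server:VERSION")
            }
        }
        val jsMain by getting {
            dependencies {
                implementation("com.github.rnett.krosstalk:krosstalk-client:VERSION")
            }
        }
    }
}
```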
49.714286
123
0.775862
eng_Latn
0.935264
97b12c680ce2c950c03d35bbc2f89206b4132cc6
565
md
Markdown
node_modules/@types/webpack/README.md
darroughw/dw
dbfe366019cb8394dd629f40697a7ad4d2fee02e
[ "MIT" ]
2
2016-11-27T01:32:35.000Z
2016-11-27T17:36:59.000Z
node_modules/@types/webpack/README.md
GSThina/angular2_web-Firebase-Starter
d27bbf9d22aece41eabeb1991e19f94b32316691
[ "MIT" ]
null
null
null
node_modules/@types/webpack/README.md
GSThina/angular2_web-Firebase-Starter
d27bbf9d22aece41eabeb1991e19f94b32316691
[ "MIT" ]
null
null
null
# Installation > `npm install --save @types/webpack` # Summary This package contains type definitions for webpack 1.12.9 (https://github.com/webpack/webpack). # Details Files were exported from https://www.github.com/DefinitelyTyped/DefinitelyTyped/tree/types-2.0/webpack Additional Details * Last updated: Wed, 05 Oct 2016 20:53:42 GMT * File structure: ProperModule * Library Dependencies: uglify-js * Module Dependencies: uglify-js * Global values: webpack # Credits These definitions were written by Qubo <https://github.com/tkqubo>.
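A minimal usage sketch follows; the configuration values are placeholders, and it assumes these definitions expose a `Configuration` interface:

```typescript
// Hypothetical sketch: typing a webpack 1.x configuration file.
import webpack = require('webpack');

const config: webpack.Configuration = {
    entry: './src/index.js',
    output: { path: './dist', filename: 'bundle.js' },
};

export = config;
```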
29.736842
103
0.743363
kor_Hang
0.482745
97b1b98984d959210b9f9f35c973f6962d6f09f7
68
md
Markdown
Optativas/MEEGQ/README.md
daleu/RepoMEItori
09205d77f302da30d47eebce8d96d520c7f47b48
[ "MIT" ]
5
2020-10-30T13:46:47.000Z
2020-11-02T07:37:31.000Z
Optativas/MEEGQ/README.md
daleu/RepoMEItori
09205d77f302da30d47eebce8d96d520c7f47b48
[ "MIT" ]
null
null
null
Optativas/MEEGQ/README.md
daleu/RepoMEItori
09205d77f302da30d47eebce8d96d520c7f47b48
[ "MIT" ]
3
2020-12-01T16:30:26.000Z
2021-03-28T19:24:23.000Z
# MEEGQ - El Model d'Excel·lència EFQM i Gestió de la Qualitat (The EFQM Excellence Model and Quality Management)

TODO
22.666667
62
0.75
cat_Latn
0.999652
97b1dc8f1bb195ea3874f507fd41f4d19589c3e8
6,493
md
Markdown
Skype/SfbOnline/what-are-calling-plans-in-office-365/assign-change-or-remove-a-phone-number-for-a-user.md
stephenjust/OfficeDocs-SkypeForBusiness
7c6036c60a8b18556215f5d540dda2a3f068479d
[ "CC-BY-4.0", "MIT" ]
1
2019-09-18T00:56:07.000Z
2019-09-18T00:56:07.000Z
Skype/SfbOnline/what-are-calling-plans-in-office-365/assign-change-or-remove-a-phone-number-for-a-user.md
stephenjust/OfficeDocs-SkypeForBusiness
7c6036c60a8b18556215f5d540dda2a3f068479d
[ "CC-BY-4.0", "MIT" ]
1
2019-08-05T20:57:26.000Z
2019-08-05T20:57:26.000Z
Skype/SfbOnline/what-are-calling-plans-in-office-365/assign-change-or-remove-a-phone-number-for-a-user.md
stephenjust/OfficeDocs-SkypeForBusiness
7c6036c60a8b18556215f5d540dda2a3f068479d
[ "CC-BY-4.0", "MIT" ]
1
2019-01-02T20:31:54.000Z
2019-01-02T20:31:54.000Z
---
title: "Assign, change, or remove a phone number for a user"
ms.author: tonysmit
author: tonysmit
manager: serdars
ms.reviewer: mikedav, roykuntz, jastark
ms.topic: article
ms.assetid: 91089761-cb87-4119-885b-3713840dd9f7
ms.tgt.pltfrm: cloud
ms.service: skype-for-business-online
ms.collection:
- Adm_Skype4B_Online
- Strat_SB_PSTN
ms.audience: Admin
appliesto:
- Skype for Business
- Microsoft Teams
localization_priority: Priority
f1keywords: None
ms.custom:
- Calling Plans
description: "Learn how to assign, change, or remove a work phone number for your Skype for Business users so outside businesses and clients can call in."
---

# Assign, change, or remove a phone number for a user

When you set up Calling Plans in Office 365, you assign phone numbers to your users. In the Microsoft Teams client, the phone number you assign will be listed when they click **Calls**.

![User's phone number displayed in Microsoft Teams.](../images/teams-phone-number.png)

In the Skype for Business client, the phone number you assign will be listed in the **Work Phone** box and can't be changed by a user.

![Work Phone Number is Greyed Out.](../images/5212fa64-b55c-4398-9709-a334f3ffa749.png)

> [!IMPORTANT]
> If a user wants to [change his or her phone number for Skype for Business](https://support.office.com/article/20e03cc1-c023-4e5d-bafd-064ddb59ed5e) and the phone number in the Skype for Business app can't be changed or is grayed out, that means an admin has set it for them and it can't be changed by them.

When you are setting up users so they can make and receive phone calls, you must first use the Skype for Business admin center and assign a phone number, but you can change or remove the phone number if you need to. To learn how to get Calling Plans in Office 365 and how much they cost, see [Skype for Business and Microsoft Teams add-on licensing](../skype-for-business-and-microsoft-teams-add-on-licensing/skype-for-business-and-microsoft-teams-add-on-licensing.md).

> [!NOTE]
> One way to see whether a user has a license assigned is by going to **Skype for Business admin center** > **Voice** > **Voice users** and selecting the user. If a license is assigned, it will be noted under **Assigned license**. You also can use the Office 365 admin center.

## Assign a phone number to a user

![sfb-logo-30x30.png](../images/sfb-logo-30x30.png) **Using the Skype for Business admin center**

1. Sign in to Office 365 with your work or school account.
2. Go to **Office 365 admin center** > **Admin centers** > **Skype for Business**.
3. In the left navigation, click **Voice** > **Voice users**.

   > [!NOTE]
   > For you to see the **Voice** option in the left navigation in the Skype for Business admin center, you must first buy at least one **Enterprise E5 license**, one **Phone System** add-on license, or one **Audio Conferencing** add-on license.

4. On the **Voice users** page, locate and select the user or users that you want to assign a phone number to.
5. In the Action pane, click **Assign number**.
6. On the **Assign number** page in the **Select number to assign** list, select the phone number for the user.

   > [!TIP]
   > If you don't see any phone numbers listed, you need to [get phone numbers for your users](getting-phone-numbers-for-your-users.md) first. Or, if you use the **Skype for Business admin center** > **Voice** > **Phone numbers** page, click **Add**, and then click **New user numbers**.

7. To assign or change the associated emergency address, under **Select validated emergency location**, either select the location from the list or, if you have many locations defined, enter the name of the city in the search box and click **Search**.
8. After you pick the phone number and emergency location, click **Save**.

> [!NOTE]
> Because of the latency between Office 365 and Skype for Business Online, it can take up to 24 hours for users to be enabled. If the phone number isn't assigned correctly after 24 hours, please [Contact support for business products - Admin Help](https://support.office.com/article/32a17ca7-6fa0-4870-8a8d-e25ba4ccfd4b). We're here to help!

## Change a phone number for a user

![sfb-logo-30x30.png](../images/sfb-logo-30x30.png) **Using the Skype for Business admin center**

1. Sign in to Office 365 with your work or school account.
2. Go to **Office 365 admin center** > **Admin centers** > **Skype for Business**.
3. In the left navigation, click **Voice** > **Voice users**.
4. On the **Voice users** page, locate and select the user or users that you want to change a phone number for.
5. In the Action pane, under **Assigned number**, click **Change**.
6. On the **Assign number** page, click **Change number**.
7. On the **Assign number** page, under **Select number to assign**, use the list to select the new phone number.
8. To change the associated emergency address, click **Change location**, and then under **Change emergency address to**, either select the location from the list or, if you have many locations defined, enter the name of the city in the search box and click **Search**.
9. Click **Save**.

## Remove a phone number from a user

![sfb-logo-30x30.png](../images/sfb-logo-30x30.png) **Using the Skype for Business admin center**

1. Sign in to Office 365 with your work or school account.
2. Go to **Office 365 admin center** > **Admin centers** > **Skype for Business**.
3. In the left navigation, click **Voice** > **Voice users**.
4. On the **Voice users** page, locate and select the user or users that you want to remove the phone number for.
5. In the Action pane, under **Assigned number**, click **Remove**.
6. On the **Remove selected assigned number?** page, click **Yes**.

## Related topics

[What is address validation?](what-is-address-validation.md)

[Manage phone numbers for your organization](../what-are-calling-plans-in-office-365/manage-phone-numbers-for-your-organization/manage-phone-numbers-for-your-organization.md)

[Emergency calling terms and conditions](../legal-and-regulatory/emergency-calling-terms-and-conditions.md)

[Skype for Business Online: Emergency Calling disclaimer label](https://github.com/MicrosoftDocs/OfficeDocs-SkypeForBusiness/blob/live/Skype/SfbOnline/downloads/emergency-calling/emergency-calling-label-(en-us)-(v.1.0).zip?raw=true)
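As a scripting alternative to the admin center steps above, here is a hedged PowerShell sketch of the same assignment. It assumes a session connected with the Skype for Business Online PowerShell module, and the user identity, phone number, and city value are placeholders:

```powershell
# Hypothetical sketch: assign a phone number and emergency location to a user.
# Requires a connected Skype for Business Online PowerShell session; the
# identity, number, and city values below are placeholders.
$location = Get-CsOnlineLisLocation -City "Seattle" | Select-Object -First 1
Set-CsOnlineVoiceUser -Identity "[email protected]" `
    -TelephoneNumber "+14255550100" `
    -LocationID $location.LocationId
```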
49.189394
359
0.725705
eng_Latn
0.988683
97b2adb2a848f0b1feb8b3a74ffdfe4d25caf297
543
md
Markdown
_guides/example-guide.md
cplmakerlab/cplmakerlab.github.io
26384661e2a783b2a997e567c8d329d9e40e75bd
[ "MIT" ]
null
null
null
_guides/example-guide.md
cplmakerlab/cplmakerlab.github.io
26384661e2a783b2a997e567c8d329d9e40e75bd
[ "MIT" ]
null
null
null
_guides/example-guide.md
cplmakerlab/cplmakerlab.github.io
26384661e2a783b2a997e567c8d329d9e40e75bd
[ "MIT" ]
null
null
null
---
title: Example guide
categories:
- Web
tags:
- Cloudcannon
- How-To
example_image: /uploads/background.jpg
difficulty: Easy
time_required: 5 - 10 minutes
file_attachment_path: /uploads/example-guide/example-template.svg
---

Brief description of what this guide is about.

### Step 1

Enter step details and any images that may help.

### Step 2

Enter step details and any images that may help.

…continue adding more steps as needed.

### Congratulations\!

Add links or resources for further learning (or link to another how-to).
18.724138
72
0.747698
eng_Latn
0.983854
97b3135746cb9715c62874029eed1918965e1e16
376
md
Markdown
_recurrents/kesgrave-storytime.md
suffolklibraries/florian
e9d2f66bae79e258839590f1006399394264c978
[ "MIT" ]
null
null
null
_recurrents/kesgrave-storytime.md
suffolklibraries/florian
e9d2f66bae79e258839590f1006399394264c978
[ "MIT" ]
null
null
null
_recurrents/kesgrave-storytime.md
suffolklibraries/florian
e9d2f66bae79e258839590f1006399394264c978
[ "MIT" ]
null
null
null
---
recurrent-title: "Storytime: stories, songs and activities for pre-school children (term-time only)."
recurrent-day: Friday
recurrent-times: 0945-1015
recurrent-location: kesgrave-library
recurrent-location-display-name: Kesgrave Library
recurrent-location-display-url: /branches/kesgrave-library/
recurrent-category: ["children", "pre-school"]
recurrent-bookstart: y
---
34.181818
101
0.792553
eng_Latn
0.706937
97b3163211fb4bfb0d96dffb0530de12bb379c79
183
md
Markdown
README.md
brucdarc/hashdump.github.io
472429442a4d38fc5a1c8c77276e14f6e6ecdb28
[ "MIT" ]
5
2015-09-06T19:05:00.000Z
2020-05-06T22:53:26.000Z
README.md
brucdarc/hashdump.github.io
472429442a4d38fc5a1c8c77276e14f6e6ecdb28
[ "MIT" ]
5
2015-02-20T23:00:37.000Z
2017-09-05T00:39:31.000Z
README.md
brucdarc/hashdump.github.io
472429442a4d38fc5a1c8c77276e14f6e6ecdb28
[ "MIT" ]
8
2016-05-03T23:27:47.000Z
2020-10-14T04:48:01.000Z
hashdump.github.io
==================

This is the backend for https://hashdumpsecurity.org and our associated wiki.

Look in the **\_data** directory to modify officers and meetings.
26.142857
76
0.704918
eng_Latn
0.993705
97b326307410463be63760b525bdb5fbe71f5124
27,487
md
Markdown
doc/source/dev/data_types.md
rikeshi/galaxy
c536a877e4a9b3d12aa0d00fd4d5e705109a0d0a
[ "CC-BY-3.0" ]
1,085
2015-02-18T16:14:38.000Z
2022-03-30T23:52:07.000Z
doc/source/dev/data_types.md
rikeshi/galaxy
c536a877e4a9b3d12aa0d00fd4d5e705109a0d0a
[ "CC-BY-3.0" ]
11,253
2015-02-18T17:47:32.000Z
2022-03-31T21:47:03.000Z
doc/source/dev/data_types.md
rikeshi/galaxy
c536a877e4a9b3d12aa0d00fd4d5e705109a0d0a
[ "CC-BY-3.0" ]
1,000
2015-02-18T16:18:10.000Z
2022-03-29T08:22:56.000Z
# Data Types

## Adding a New Data Type (Subclassed)

This specification describes the Galaxy source code changes required to add support for a new data type. Every Galaxy dataset is associated with a datatype which can be determined by the file extension (or format in the history item). Within Galaxy, supported datatypes are contained in the `galaxy.datatypes.registry:Registry` class, which has the responsibility of mapping extensions to datatype instances. At start up this registry is initialized with data type values from the `datatypes_conf.xml` file. All data type classes are a subclass of the `galaxy.datatypes.data:Data` class.

We'll pretend to add a new datatype format named `Foobar` whose associated file extension is `foo` to our local Galaxy instance as a way to provide the details for adding support for new data types. Our example `Foobar` data type will be a subclass of `galaxy.datatypes.tabular.Tabular`.

### Step 1: Register Data Type

We'll add the new data type to the `<registration>` tag section of the `datatypes_conf.xml` file. Sample `<datatype>` tag attributes in this section are:

```xml
<datatype extension="ab1" type="galaxy.datatypes.images:Ab1" mimetype="application/octet-stream" display_in_upload="true"/>
```

where

- `extension` - the data type's Dataset file extension (e.g., `ab1`, `bed`, `gff`, `qual`, etc.)
- `type` - the path to the class for that data type.
- `mimetype` - if present (it's optional), the data type's mime type
- `display_in_upload` - if present (it's optional and defaults to False), the associated file extension will be displayed in the "File Format" select list in the "Upload File from your computer" tool in the "Get Data" tool section of the tool panel.

**Note:** If you do not wish to add extended functionality to a new datatype, but simply want to restrict the output of a set of tools to be used in another set of tools, you can add the flag `subclass="True"` to the datatype definition line. Example:

```xml
<datatype extension="my_tabular_subclass" type="galaxy.datatypes.tabular:Tabular" subclass="True"/>
```

### Step 2: Sniffer

Galaxy tools are configured to automatically set the data type of an output dataset. However, in some scenarios, Galaxy will attempt to determine the data type of a file using a sniffer (e.g., uploading a file from a local disk with 'Auto-detect' selected in the File Format select list). The order in which Galaxy attempts to determine data types is critical because some formats are much more loosely defined than others. The order in which the sniffer for each data type is applied to the file should be most rigidly defined formats first followed by less and less rigidly defined formats, with the most loosely defined format last, and then a default format associated with the file if none of the data type sniffers were successful. The order in which data type sniffers are applied to files is implicit in the `<sniffers>` tag set section of the `datatypes_conf.xml` file. We'll assume that the format of our Foobar data type is fairly rigidly defined, so it can be placed closer to the start of the sniff order.

```xml
<sniffers>
    <sniffer type="galaxy.datatypes.sequence:Maf"/>
    <sniffer type="galaxy.datatypes.sequence:Lav"/>
    <sniffer type="galaxy.datatypes.tabular:Foobar"/>
```

### Step 3: Data Type Class

We'll now add the `Foobar` class to `lib/galaxy/datatypes/tabular.py`.
Keep in mind that your new data type class should be placed in a file that is appropriate (based on its superclass), and that the file will need to be imported by `lib/galaxy/datatypes/registry.py`. You will need to include a `file_ext` attribute in your class and create any necessary functions to override the functions in your new data type's superclass (in our example, the `galaxy.datatypes.tabular.Tabular` class). In our example below, we have set our class's `file_ext` attribute to "foo" and we have overridden the `__init__()`, `init_meta()` and `sniff()` functions. It is important to override functions (especially the metadata and sniff functions) if the attributes of your new class differ from those of its superclass. Note: sniff functions are not required to be included in new data type classes, but if the sniff function is missing, Galaxy will call the superclass method.

```python
from galaxy.datatypes.metadata import MetadataElement
from galaxy.datatypes.sniff import get_headers
from galaxy.datatypes.tabular import Tabular


class Foobar(Tabular):
    """Tab delimited data in foo format"""
    file_ext = "foo"

    MetadataElement(name="columns", default=3, desc="Number of columns", readonly=True)

    def __init__(self, **kwd):
        """Initialize foobar datatype"""
        Tabular.__init__(self, **kwd)
        self.do_something_else()  # placeholder for any extra initialization

    def init_meta(self, dataset, copy_from=None):
        Tabular.init_meta(self, dataset, copy_from=copy_from)

    def sniff(self, filename):
        headers = get_headers(filename, '\t')
        try:
            if len(headers) < 2:
                return False
            for hdr in headers:
                if len(hdr) > 1 and hdr[0] and not hdr[0].startswith('#'):
                    if len(hdr) != 8:
                        return False
                    try:
                        # Columns 7 and 8 must be integers.
                        [int(col) for col in (hdr[6], hdr[7])]
                    except ValueError:
                        return False
            # Do other necessary checking here...
        except Exception:
            return False
        # If we haven't yet returned False, then...
        return True
    ...
```

That should be it! If all of your code is functionally correct you should now have support for your new data type within your Galaxy instance.

## Adding a New Data Type (completely new)

### Basic Datatypes

In this [real life example](https://github.com/bgruening/galaxytools/blob/master/datatypes/common_sequence_datatypes/csequence.py), we'll add a datatype named `GenBank` to support GenBank files. First, we'll set up a file named `csequence.py` in `lib/galaxy/datatypes/csequence.py`. This file could contain some of the standard sequence types, though we'll only implement GenBank.

```python
"""
Classes for all common sequence formats
"""
from galaxy.datatypes import data
from galaxy.datatypes.metadata import MetadataElement

import logging
log = logging.getLogger(__name__)


class GenBank(data.Text):
    """
    The GenBank flat file sequence format
    """
    file_ext = "genbank"
```

This is all you need to get started with a datatype. Now, load it into your `datatypes_conf.xml` by adding the following line:

```xml
<datatype extension="genbank" type="galaxy.datatypes.csequence:GenBank" display_in_upload="True" />
```

Start up your server and the datatype will be available.

### Adding a Sniffer

Datatypes can be "sniffed"; that is, their formats can be automatically detected from their contents. For GenBank files that's extremely easy to do: the first 5 characters will be `LOCUS`, according to section 3.4.4 of the [specification](ftp://ftp.ncbi.nih.gov/genbank/gbrel.txt). To implement this in our tool we first have to add the relevant sniffing code to our `GenBank` class in `csequence.py`.
```python
def sniff(self, filename):
    header = open(filename).read(5)
    return header == 'LOCUS'
```

and then we have to register the sniffer in `datatypes_conf.xml`.

```xml
<sniffer type="galaxy.datatypes.csequence:GenBank"/>
```

Once that's done, restart your server and try uploading a `genbank` file. You'll notice that the filetype is automatically detected as `genbank` once the upload is done.

### More Features

One of the useful things your datatype can do is provide metadata. This is done by adding metadata entries inside your class like this:

```python
class GenBank(data.Text):
    file_ext = "genbank"

    MetadataElement(name="number_of_sequences", default=0, desc="Number of sequences", readonly=True, visible=True, optional=True, no_value=0)
```

Here we have a `MetadataElement`, accessible in methods with a dataset parameter from `dataset.metadata.number_of_sequences`. There are a couple of relevant functions you'll want to override here:

* `set_peek(self, dataset, is_multi_byte=False)`
* `set_meta(self, dataset, **kwd)`

The `set_peek` function is used to determine the blurb of text that will appear to users above the preview (first 5 lines of the file, the file peek), informing them about metadata of a sequence. For `genbank` files, we're probably interested in how many genome/records are contained within a file. To do that, we need to count the number of times the word LOCUS appears as the first five characters of a line. We'll write a function named `_count_genbank_sequences`.

```python
def _count_genbank_sequences(self, filename):
    count = 0
    with open(filename) as gbk:
        for line in gbk:
            if line[0:5] == 'LOCUS':
                count += 1
    return count
```

Which we'll call in our `set_meta` function, since we're setting metadata about the file.

```python
def set_meta(self, dataset, **kwd):
    dataset.metadata.number_of_sequences = self._count_genbank_sequences(dataset.file_name)
```

Now we'll need to make use of this in our `set_peek` override:

```python
def set_peek(self, dataset, is_multi_byte=False):
    if not dataset.dataset.purged:
        # Add our blurb
        if (dataset.metadata.number_of_sequences == 1):
            dataset.blurb = "1 sequence"
        else:
            dataset.blurb = "%s sequences" % dataset.metadata.number_of_sequences
        # Get standard text peek from dataset
        dataset.peek = data.get_file_peek(dataset.file_name, is_multi_byte=is_multi_byte)
    else:
        dataset.peek = 'file does not exist'
        dataset.blurb = 'file purged from disk'
```

This function will be called during metadata setting. Try uploading a multi-record `genbank` file and testing it out. If you don't have a multi-record `genbank` file, simply concatenate a single file together a couple times and upload that.
By now you should have a complete GenBank parser in `csequence.py` that looks about like the following:

```python
from galaxy.datatypes import data
from galaxy.datatypes.metadata import MetadataElement

import logging
log = logging.getLogger(__name__)


class GenBank(data.Text):
    file_ext = "genbank"

    MetadataElement(name="number_of_sequences", default=0, desc="Number of sequences", readonly=True, visible=True, optional=True, no_value=0)

    def set_peek(self, dataset, is_multi_byte=False):
        if not dataset.dataset.purged:
            # Add our blurb
            if (dataset.metadata.number_of_sequences == 1):
                dataset.blurb = "1 sequence"
            else:
                dataset.blurb = "%s sequences" % dataset.metadata.number_of_sequences
            # Get standard text peek from dataset
            dataset.peek = data.get_file_peek(dataset.file_name, is_multi_byte=is_multi_byte)
        else:
            dataset.peek = 'file does not exist'
            dataset.blurb = 'file purged from disk'

    def get_mime(self):
        return 'text/plain'

    def sniff(self, filename):
        header = open(filename).read(5)
        return header == 'LOCUS'

    def set_meta(self, dataset, **kwd):
        """
        Set the number of sequences in dataset.
        """
        dataset.metadata.number_of_sequences = self._count_genbank_sequences(dataset.file_name)

    def _count_genbank_sequences(self, filename):
        """
        This is not a perfect definition, but should suffice for general
        usage. It fails to detect any errors that would result in parsing
        errors like incomplete files.
        """
        # Specification for the genbank file format can be found in
        # ftp://ftp.ncbi.nih.gov/genbank/gbrel.txt
        # in section 3.4.4 LOCUS Format
        count = 0
        with open(filename) as gbk:
            for line in gbk:
                if line[0:5] == 'LOCUS':
                    count += 1
        return count
```

## Composite Datatypes

Composite datatypes can be used as a more structured way to contain individual history items which are composed of multiple files. The `Rgenetics` package for Galaxy has been implemented using Composite Datatypes; for real-life examples, examine the configuration files (particularly `lib/galaxy/datatypes/genetics.py`) included with the distribution.

In composite datatypes, there is one "primary" data file and any number of specified "component" files. The primary data file is a base dataset object and the component files are all located in a directory associated with that base dataset. There are two types of composite datatypes: basic and auto_primary_file. In **basic** composite datatypes, the primary data file must be written by tools or provided directly during upload. In **auto_primary_file** composite datatypes, the primary data file is generated automatically by the Framework; all tool-writable or uploadable files are stored in the directory associated with the primary file.

### Creating Composite Datatypes

A datatype can be set to composite by setting the `composite_type` flag. There are 3 valid options:

- None (not a composite datatype)
- `basic`
- `auto_primary_file`

### Defining Basic Composite Datatypes

The example below defines a basic composite datatype which is composed of 2 component files along with a tool-written or user-uploaded primary file. In this example the primary file is a report summarizing the results. The two component files (`results.txt`, `results.dat`) contain the results; `results.txt` is a text file and is handled as such during upload whereas `results.dat` is flagged as binary, allowing a binary file to be uploaded for that component. During upload, 3 sets of file upload/paste interfaces are provided to the user. A file must be provided for the primary (index) file as well as `results.txt`; `results.dat` is flagged as optional so may be left blank.
```python
class BasicComposite(Text):
    composite_type = 'basic'

    def __init__(self, **kwd):
        Text.__init__(self, **kwd)
        self.add_composite_file('results.txt')
        self.add_composite_file('results.dat', is_binary=True, optional=True)
```

These files can be specified on the command line in the following fashion:

```xml
<command>someTool.sh '$input1' '${os.path.join($input1.extra_files_path, 'results.txt')}' '${os.path.join($input1.extra_files_path, 'results.dat')}' '$output1'</command>
```

If a tool is aware of the file names for a datatype, then only `input1.extra_files_path` needs to be provided.

There are cases when it is desirable for the composite filenames to have varying names, but be of a similar form; for an example of this see `Rgenetics` below.

### Defining Auto Primary File Composite Datatypes

The example below defines an auto primary file composite datatype which is composed of 2 component files along with a framework generated file. In this example the primary file is an html page containing download links to the individual result files. The two component files (`results.txt`, `results.dat`) contain the results; `results.txt` is a text file and is handled as such during upload whereas `results.dat` is flagged as binary, allowing a binary file to be uploaded for that component. During upload, 2 sets of file upload/paste interfaces are provided to the user. A file must be provided for `results.txt` whereas `results.dat` is flagged as optional so may be left blank. In this composite type, the primary file is generated using the `generate_primary_file` method specified in the datatype definition. The `generate_primary_file` method here loops through the defined components and creates a link to each.

```python
class AutoPrimaryComposite(Text):
    composite_type = 'auto_primary_file'

    def __init__(self, **kwd):
        Text.__init__(self, **kwd)
        self.add_composite_file('results.txt')
        self.add_composite_file('results.dat', is_binary=True, optional=True)

    def generate_primary_file(self, dataset=None):
        rval = ['<html><head><title>Files for Composite Dataset (%s)</title></head><p/>This composite dataset is composed of the following files:<p/><ul>' % (self.file_ext)]
        for composite_name, composite_file in self.get_composite_files(dataset=dataset).iteritems():
            opt_text = ''
            if composite_file.optional:
                opt_text = ' (optional)'
            rval.append('<li><a href="%s">%s</a>%s' % (composite_name, composite_name, opt_text))
        rval.append('</ul></html>')
        return "\n".join(rval)
```

These files can be specified on the command line in the following fashion:

```xml
<command>someTool.sh ${os.path.join($input1.extra_files_path, 'results.txt')} ${os.path.join($input1.extra_files_path, 'results.dat')} $output1</command>
```

### Advanced Topics: `Rgenetics` Example

`Rgenetics` datatypes are defined as composite datatypes. The tools in this package rely heavily on the contents of a filename for analysis and reporting. In this case it is desirable for the filenames of the components to vary slightly, but maintain a common form. To do this, we use a special metadata parameter that can only be set at dataset creation (i.e. upload). This example uses the metadata parameter **base_name** to specify part of the components' filenames.
```python
class Lped(Rgenetics):
    MetadataElement(name="base_name", desc="base name for all transformed versions of this genetic dataset", default="galaxy", readonly=True, set_in_upload=True)

    composite_type = 'auto_primary_file'
    file_ext = "lped"

    def __init__(self, **kwd):
        Rgenetics.__init__(self, **kwd)
        self.add_composite_file('%s.ped', description='Pedigree File', substitute_name_with_metadata='base_name', is_binary=True)
        self.add_composite_file('%s.map', description='Map File', substitute_name_with_metadata='base_name', is_binary=True)

    def generate_primary_file(self, dataset=None):
        rval = ['<html><head><title>Files for Composite Dataset (%s)</title></head><p/>This composite dataset is composed of the following files:<p/><ul>' % (self.file_ext)]
        for composite_name, composite_file in self.get_composite_files(dataset=dataset).iteritems():
            opt_text = ''
            if composite_file.optional:
                opt_text = ' (optional)'
            rval.append('<li><a href="%s">%s</a>%s' % (composite_name, composite_name, opt_text))
        rval.append('</ul></html>')
        return "\n".join(rval)
```

The file specified as `%s.ped` is found at `${os.path.join($input1.extra_files_path, '%s.ped' % input1.metadata.base_name)}`.

It should be noted that changing the datatype of datasets which use this substitution method will cause an error if the metadata parameter 'base_name' does not exist in a datatype that the dataset is set to. This is because the value within 'base_name' will be lost -- if the datatype is set back to the original datatype, the default metadata value will be used and the filenames might not match the basename. For this reason, users are not allowed to change the datatype of a dataset between a composite datatype and any other datatype. This can be enforced by setting the `allow_datatype_change` attribute of a datatype class to `False`.

## Galaxy Tool Shed - Data Types

**Note:** These are deprecated and shouldn't be used.

### Including custom data types that subclass from Galaxy data types in the distribution

If your repository includes tools that require data types that are not defined in the Galaxy distribution, you can include the required data types in the repository along with your tools, or you can create a separate repository to contain them. The repository must include a file named `datatypes_conf.xml`, which is modeled after the file named `datatypes_conf.xml.sample` in the Galaxy distribution. This section describes support for including data types that subclass from data types in the Galaxy distribution. Refer to the next section for details about data types that use your own custom class modules included in your repository. An example of this is the `datatypes_conf.xml` file in the [emboss_datatypes repository](http://toolshed.g2.bx.psu.edu/repository/browse_categories?sort=name&id=3ac79d5752c6d938&f-deleted=False&webapp=community&f-free-text-search=emboss&operation=view_or_manage_repository) in the main Galaxy tool shed, shown below.

![](https://galaxyproject.org/toolshed/datatypes-features/emboss_datatypes_contents.png)

Tool shed repositories that include valid `datatypes_conf.xml` files will display the data types in the **Preview tools and inspect metadata by tool version** section of the view or manage repository page.

![](https://galaxyproject.org/toolshed/datatypes-features/emboss_datatypes.png)
As part of your development process for tools that use data types that fall into this category, it is highly recommended that you host a local Galaxy tool shed. When your newly developed tools have proven to be functionally correct within your local Galaxy instance, you should upload them, along with all associated custom data types files and modules to your local tool shed to ensure that everything is handled properly within the tool shed. When your local tool shed repository is functionally correct, install your repository from your local tool shed to a local Galaxy instance to ensure that your tools and data types properly load both at the time of installation and when you stop and restart your Galaxy server. You should not upload your tools to the main Galaxy tool shed until you have confirmed that everything works by following these steps. To illustrate how this works, we'll use the [gmap repository](http://toolshed.g2.bx.psu.edu/repository/browse_categories?sort=name&id=4131098bea459833&f-deleted=False&webapp=community&f-free-text-search=gmap&operation=view_or_manage_repository) in the main Galaxy tool shed as an example. The `datatypes_conf.xml` file included in this repository looks something like the following. You'll probably notice that this file is modeled after the `datatypes_conf.xml.sample` file in the Galaxy distribution, but with some slight differences. Notice the `<datatypes_files>` tag set. This tag set contains `<datatype_file>` tags, each of which refers to the name of a class module file name within your repository (in this example, there is only one file named `gmap.py`), which contains the custom data type classes you've defined for your tools. In addition, notice the value of each `type` attribute in the `<datatype>` tags. The `:` separates the class module included in the repository (in this example, the class module is `gmap`) from the class name (`GmapDB`, `IntervalAnnotation`, etc.). It is critical that you make sure your datatype tag definitions match the classes you've defined in your class modules or the data type will not properly load into a Galaxy instance when your repository is installed. 
```xml
<?xml version="1.0"?>
<datatypes>
    <datatype_files>
        <datatype_file name="gmap.py"/>
    </datatype_files>
    <registration>
        <datatype extension="gmapdb" type="galaxy.datatypes.gmap:GmapDB" display_in_upload="False"/>
        <datatype extension="gmapsnpindex" type="galaxy.datatypes.gmap:GmapSnpIndex" display_in_upload="False"/>
        <datatype extension="iit" type="galaxy.datatypes.gmap:IntervalIndexTree" display_in_upload="True"/>
        <datatype extension="splicesites.iit" type="galaxy.datatypes.gmap:SpliceSitesIntervalIndexTree" display_in_upload="True"/>
        <datatype extension="introns.iit" type="galaxy.datatypes.gmap:IntronsIntervalIndexTree" display_in_upload="True"/>
        <datatype extension="snps.iit" type="galaxy.datatypes.gmap:SNPsIntervalIndexTree" display_in_upload="True"/>
        <datatype extension="gmap_annotation" type="galaxy.datatypes.gmap:IntervalAnnotation" display_in_upload="False"/>
        <datatype extension="gmap_splicesites" type="galaxy.datatypes.gmap:SpliceSiteAnnotation" display_in_upload="True"/>
        <datatype extension="gmap_introns" type="galaxy.datatypes.gmap:IntronAnnotation" display_in_upload="True"/>
        <datatype extension="gmap_snps" type="galaxy.datatypes.gmap:SNPAnnotation" display_in_upload="True"/>
    </registration>
    <sniffers>
        <sniffer type="galaxy.datatypes.gmap:IntervalAnnotation"/>
        <sniffer type="galaxy.datatypes.gmap:SpliceSiteAnnotation"/>
        <sniffer type="galaxy.datatypes.gmap:IntronAnnotation"/>
        <sniffer type="galaxy.datatypes.gmap:SNPAnnotation"/>
    </sniffers>
</datatypes>
```

**Modules that include custom datatype class definitions cannot use relative import references for imported modules.** To function correctly when your repository is installed in a local Galaxy instance, your class module imports must be defined as absolute from the galaxy subdirectory inside the Galaxy root's lib subdirectory. For example, assume the following import statements are included in our example `gmap.py` file. They certainly work within the Galaxy development environment when the gmap tools were being developed.

```python
import data
from data import Text
from metadata import MetadataElement
```

However, the above relative imports will not work when the `gmap.py` class module is installed from the Tool Shed into a local Galaxy instance because the modules will not be found due to the use of the relative imports. The developer must use the following approach instead. Notice that the imports are written such that they are absolute relative to the `~/lib/galaxy` subdirectory.

```python
import galaxy.datatypes.data
from galaxy.datatypes.data import Text
from galaxy.datatypes.metadata import MetadataElement
```

The use of `<converter>` tags contained within `<datatype>` tags is supported in the same way they are supported within the `datatypes_conf.xml.sample` file in the Galaxy distribution.

```xml
<datatype extension="ref.taxonomy" type="galaxy.datatypes.metagenomics:RefTaxonomy" display_in_upload="true">
    <converter file="ref_to_seq_taxonomy_converter.xml" target_datatype="seq.taxonomy"/>
</datatype>
```

### Including datatype converters and display applications

To include your custom datatype converters or display applications, add the appropriate tag set to your repository's `datatypes_conf.xml` file in the same way that they are defined in the `datatypes_conf.xml.sample` file in the Galaxy distribution. If you include datatype converter files in your repository, all files (the disk file referred to by the value of the "file" attribute) must be located in the same directory in your repository hierarchy.
Similarly, your datatype display application files must all be in the same directory in your repository hierarchy (although the directory can be a different directory from the one containing your converter files). This is critical because the Galaxy components that load these custom items assume each of them is located in the same directory.
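To make the same-directory requirement concrete, here is a hypothetical repository layout that satisfies it; all file names are placeholders:

```text
my_datatypes_repo/
├── datatypes_conf.xml
├── gmap.py
├── converters/
│   ├── ref_to_seq_taxonomy_converter.xml
│   └── another_converter.xml
└── display_applications/
    └── my_display_application.xml
```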
48.909253
955
0.738822
eng_Latn
0.990834
97b3a26f1bda70a5cf18d9dfea55efef994a62a0
2,666
md
Markdown
README.md
yaiza-bailen/q
fef8d7ad947817c8677134fbd818a7e3ac8cdd2d
[ "MIT" ]
null
null
null
README.md
yaiza-bailen/q
fef8d7ad947817c8677134fbd818a7e3ac8cdd2d
[ "MIT" ]
null
null
null
README.md
yaiza-bailen/q
fef8d7ad947817c8677134fbd818a7e3ac8cdd2d
[ "MIT" ]
null
null
null
# Project Q

A Dynamic [SOQL Query](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_sosl_intro.htm) Builder for the [Force.com Platform](https://developer.salesforce.com/docs/atlas.en-us.fundamentals.meta/fundamentals/adg_preface.htm)

[![CircleCI Build Status](https://circleci.com/gh/jpmonette/q.png?style=shield&circle-token=:circle-token)](https://circleci.com/gh/jpmonette/q)
[![Coverage Status](https://coveralls.io/repos/github/jpmonette/q/badge.svg?branch=master)](https://coveralls.io/github/jpmonette/q?branch=master)
[![Code Climate](https://codeclimate.com/github/jpmonette/q/badges/gpa.svg)](https://codeclimate.com/github/jpmonette/q)

> ***WARNING:*** Both the documentation and the package itself are under heavy
> development and in a very early stage. That means this repo is full of
> untested code and the API can break without any further notice. Therefore,
> it comes with absolutely no warranty at all. Feel free to browse or even
> contribute to it :)

## Installation

Deploy the Apex classes from the `./force-app/main/default/classes/` directory into your Salesforce project.

## Usage

```java
Q query = new Q(Account.SObjectType)
    .selectFields(SObjectType.Account.fieldSets.Example)
    .addSubquery(new Q('Contacts'))
    .add(Q.condition('Name').isLike('%Acme%'))
    .add(Q.condition('BillingCountry').isNotNull())
    .addLimit(5);

System.debug(query.build());
// SELECT CreatedById, Description, Owner.Email, (SELECT Id FROM Contacts) FROM Account WHERE Name LIKE '%Acme%' AND BillingCountry != null LIMIT 5
```

While chaining methods is a convenient way to initialise your query, you also have the ability to manually build complex queries depending on specific conditions.

```java
Q query = new Q(Contact.SObjectType).addLimit(5);

if (String.isNotBlank(firstName)) {
    query.add(Q.condition('FirstName').equals(firstName));
}

if (String.isNotBlank(lastName)) {
    query.add(Q.condition('LastName').equals(lastName));
}

System.debug(query.build());
// SELECT Id FROM Contact WHERE FirstName = 'Céline' AND LastName = 'Dion' LIMIT 5
```

## Roadmap

This library is being initially developed for one of my internal projects, so API methods will likely be implemented in the order that they are needed by my project. Eventually, I would like to cover the entire SOQL and SOSL query language, so contributions are of course [always welcome][contributing]. Adding new methods is relatively straightforward, so feel free to join the fun!

[contributing]: CONTRIBUTING.md

## License

This library is distributed under the MIT license found in the [LICENSE](./LICENSE) file.
41.015385
270
0.75919
eng_Latn
0.807721
97b456cff4dce188dba1bb37ad48a1b75ea776ac
752
md
Markdown
README.md
cyberinferno/tantra-online-wireshark-dissector
e6bd04d22f672cc37eddcaf5a96318d859a5f392
[ "MIT" ]
5
2017-12-02T07:53:27.000Z
2021-12-20T06:46:40.000Z
README.md
cyberinferno/tantra-online-wireshark-dissector
e6bd04d22f672cc37eddcaf5a96318d859a5f392
[ "MIT" ]
null
null
null
README.md
cyberinferno/tantra-online-wireshark-dissector
e6bd04d22f672cc37eddcaf5a96318d859a5f392
[ "MIT" ]
null
null
null
# Tantra Online Wireshark Dissector

A Wireshark dissector that shows Tantra Online packet types exchanged between the server and the game client. Most Kathana 5 packet type opcodes have been defined by referring to the leaked Kathana 3 C++ source code. Unknown packet types will be added to the dissector in the future as and when they are identified.

## Installation

* Update the Tantra server ports at the end of the `tantra-dissector.lua` file if needed.
* Copy `tantra-dissector.lua` to the Wireshark plugin directory (to locate the plugin folder, refer to [this link](https://www.wireshark.org/docs/wsug_html_chunked/ChPluginFolders.html)).
* If Wireshark is already running, open the `Analyze` dropdown and click `Reload Lua Plugins` to load the plugin.
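As a rough sketch of what the port wiring at the end of the script typically looks like (the protocol object name and port numbers are placeholders, not taken from this repository):

```lua
-- Hypothetical sketch of the port registration at the end of the script.
-- The protocol object (tantra_proto) and the port numbers are placeholders;
-- edit the real values in tantra-dissector.lua to match your server.
local tcp_table = DissectorTable.get("tcp.port")
tcp_table:add(10101, tantra_proto) -- login server (example port)
tcp_table:add(10102, tantra_proto) -- game server (example port)
```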
68.363636
208
0.796543
eng_Latn
0.989464
97b4699e98adff965e0b029f8ee5d39c0546a82d
3,437
md
Markdown
README.md
hacker1024/progress_bar
5eadd06746a9e889983108d2f0df7d48206d3248
[ "BSD-3-Clause" ]
1
2021-12-14T19:00:25.000Z
2021-12-14T19:00:25.000Z
README.md
hacker1024/progress_bar
5eadd06746a9e889983108d2f0df7d48206d3248
[ "BSD-3-Clause" ]
1
2021-06-06T12:21:06.000Z
2021-06-06T12:21:06.000Z
README.md
hacker1024/progress_bar
5eadd06746a9e889983108d2f0df7d48206d3248
[ "BSD-3-Clause" ]
null
null
null
# progressbar2 A progress bar for Dart console apps. ## Usage A `ProgressBar` object requires a `formatter` function as well as a `total` value. Additionally, the `width`, `completeChar`, and `incompleteChar` may be customized. Basic usage: ```dart const total = 10; final progressBar = ProgressBar( formatter: (current, total, progress, elapsed) => '[${ProgressBar.formatterBarToken}] ${(progress * 100).floor()}% ${elapsed.inSeconds}s', total: total, ); progressBar.render(); for (var i = 0; i < total; ++i) { await Future<void>.delayed(const Duration(milliseconds: 500)); ++progressBar.value; progressBar.render(); } ``` ![basic](https://raw.github.com/hacker1024/progressbar.dart/master/example/progress_bar_basic.gif) ### Formatting The `formatter` function converts the given information into a progress bar string. Additionally, the `ProgressBar.formatterBarToken` string, when included, is replaced with a progress bar indicator. ## Incrementing The value of the progress bar can be set at any time with the `value` setter. To increment the progress by 1, a simple `++myProgressBar.value` statement will do the trick. ## Rendering To render the progress bar to the console, `render()` may be called. Additionally, `unrender()` can be called to erase the progress bar. ## ETAs? As there are many ways to calculate an ETA, this package leaves the implementation up to the developer, providing the elapsed time for this purpose. Personally, I recommend [`package:moving_average`][moving_average]. [moving_average]: https://pub.dev/packages/moving_average ## Acknowledgements - Jaron Tai <[email protected]> for the original [`progress_bar` package][progress_bar]. [progress_bar]: https://pub.dev/packages/progress_bar ## Features and bugs Please file feature requests and bugs at the [issue tracker][tracker]. [tracker]: https://github.com/hacker1024/progressbar.dart/issues ## License ``` Copyright (c) 2021 hacker1024. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the hacker1024 nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ```
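Expanding on the "ETAs?" section above, a naive linear estimate can be derived inside a formatter from `progress` and `elapsed` alone. This is a sketch with assumed parameter types and import path, not part of the package's documented API:

```dart
// Sketch: a formatter that adds a naive linear ETA estimate.
// Parameter types and the import path are assumptions based on
// the basic usage example above.
import 'package:progressbar2/progressbar2.dart';

String formatWithEta(
    int current, int total, double progress, Duration elapsed) {
  final eta = progress > 0
      ? Duration(
          milliseconds:
              (elapsed.inMilliseconds * (1 - progress) / progress).round())
      : Duration.zero;
  return '[${ProgressBar.formatterBarToken}] '
      '${(progress * 100).floor()}% ETA: ${eta.inSeconds}s';
}
```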
39.965116
98
0.769566
eng_Latn
0.778291
97b5bc5053c717db5f4e59f576d90f808e79a5ae
968
md
Markdown
README.md
stratolark/nosatnav-scaleform-example
ad669b826d295ca196d7061e5d8c9994975e9632
[ "MIT" ]
null
null
null
README.md
stratolark/nosatnav-scaleform-example
ad669b826d295ca196d7061e5d8c9994975e9632
[ "MIT" ]
null
null
null
README.md
stratolark/nosatnav-scaleform-example
ad669b826d295ca196d7061e5d8c9994975e9632
[ "MIT" ]
1
2022-02-12T06:03:57.000Z
2022-02-12T06:03:57.000Z
# FiveM Scaleform Example
nosatnav - Removes distance to waypoint under minimap/radar.
>This resource is an example to show usage of scaleforms and their methods.

## Installation
1. Download the code from this repository.
2. Extract the contents of this repo into your server's /resources/**nosatnav-scaleform-example** folder. If the /nosatnav-scaleform-example folder does not exist, then create it. You can also create the folder inside /resources/[local]/**nosatnav-scaleform-example**

*nosatnav-scaleform-example* is the name of the resource in this case, but you can rename it to something like *scaleform-example* or *fivem-nosatnav*. It's your choice. Make sure the name of the folder matches the name you'll set in the *server.cfg*

3. Add the resource name in the server.cfg:
` ensure nosatnav-scaleform-example `
4. Start your server.

You can write **nosatnav-scaleform-example** in your server console to enable the satnav again.

License: MIT
38.72
250
0.765496
eng_Latn
0.998387
97b60ced811dd859e3e630cc7bd63c8d18fe789c
919
md
Markdown
apps/Dragoman.md
xynium/PNPropag
847f75b39b54fd688bbe78c14ae600fcfba1f8dc
[ "MIT" ]
null
null
null
apps/Dragoman.md
xynium/PNPropag
847f75b39b54fd688bbe78c14ae600fcfba1f8dc
[ "MIT" ]
1
2016-06-24T13:10:48.000Z
2016-06-28T14:45:30.000Z
apps/Dragoman.md
xynium/PNPropag
847f75b39b54fd688bbe78c14ae600fcfba1f8dc
[ "MIT" ]
null
null
null
--- layout: app permalink: /Dragoman/ description: GUI for polyglot screenshots: - Dragoman/screenshot.png authors: - name: peteboothroyd url: https://github.com/peteboothroyd links: - type: GitHub url: peteboothroyd/Dragoman - type: Install url: https://github.com/peteboothroyd/Dragoman/releases desktop: Desktop Entry: Name: Dragoman Comment: GUI for polyglot Exec: AppRun Terminal: false Type: Application Icon: dragoman X-AppImage-Version: 0.3.0 X-AppImage-BuildId: 51c29610-92f1-11a7-17c5-6728381ddb46 Categories: Development AppImageHub: X-AppImage-UpdateInformation: X-AppImage-Type: 1 X-AppImage-Architecture: x86_64 electron: description: GUI for polyglot main: "./main.js" author: name: Pete Boothroyd email: [email protected] url: https://github.com/peteboothroyd license: MIT dependencies: {} ---
19.978261
60
0.704026
kor_Hang
0.163497
97b6d4060a44c0cbc9e8899d8cf2818ebb7086ed
211
md
Markdown
README.md
ricardocancar/split_audio_VAD
1bc1d313ab3c0bbac0cdc10334be82aefad79009
[ "MIT" ]
1
2019-06-23T01:31:08.000Z
2019-06-23T01:31:08.000Z
README.md
ricardocancar/split_audio_VAD
1bc1d313ab3c0bbac0cdc10334be82aefad79009
[ "MIT" ]
null
null
null
README.md
ricardocancar/split_audio_VAD
1bc1d313ab3c0bbac0cdc10334be82aefad79009
[ "MIT" ]
null
null
null
# Audio Split.
### ***ML_news***

## Description
Python script to split audio.wav files.

### Programming language
Python 3.6.3

### Libraries
scipy==1.1.0
matplotlib==2.2.2
numpy==1.15.0
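Given the listed dependencies, a minimal voice-activity-style split reads the WAV with SciPy and looks for low-energy frames. The sketch below is hypothetical (the file name, frame size, and threshold are assumptions) and is not the repository's actual implementation:

```python
# Hypothetical sketch: find low-energy (silent) frames in a WAV file,
# which could then serve as split points. Not this repo's actual code.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("input.wav")  # assumed input file name
if data.ndim > 1:
    data = data.mean(axis=1)  # mix stereo down to mono

frame = int(0.02 * rate)  # 20 ms frames
energy = np.array([
    np.abs(data[i:i + frame]).mean()
    for i in range(0, len(data) - frame, frame)
])
silent = energy < 0.1 * energy.max()  # naive silence threshold
print(f"{silent.sum()} of {len(silent)} frames look silent")
```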
10.55
39
0.620853
spa_Latn
0.53758
97b742756df3f0c2ec1c3aa5e040f362c279298d
2,295
md
Markdown
skills/B01G3FBGM6/README.md
zwang695/alexa-skills-list
43fb6168a3313f004d02a910d1d8930f42a5fce4
[ "MIT" ]
null
null
null
skills/B01G3FBGM6/README.md
zwang695/alexa-skills-list
43fb6168a3313f004d02a910d1d8930f42a5fce4
[ "MIT" ]
null
null
null
skills/B01G3FBGM6/README.md
zwang695/alexa-skills-list
43fb6168a3313f004d02a910d1d8930f42a5fce4
[ "MIT" ]
null
null
null
# [Almond for Smart Home](http://alexa.amazon.com/#skills/amzn1.ask.skill.97f1457f-4da7-45f9-ba40-bf5e2dc3a060) ![4.7 stars](../../images/ic_star_black_18dp_1x.png)![4.7 stars](../../images/ic_star_black_18dp_1x.png)![4.7 stars](../../images/ic_star_black_18dp_1x.png)![4.7 stars](../../images/ic_star_black_18dp_1x.png)![4.7 stars](../../images/ic_star_half_black_18dp_1x.png) 8 With the 'Almond for Smart Home' skill and Alexa, you can now interact with your lights, switches, dimmers, and thermostats by voice. To get started, select the ‘Enable Skill’ button in the Alexa App to link your Almond account and discover your device(s). Find more information about connecting Smart Home devices at http://amzn.to/291lR7u When using the skill, you need to specify by name which device or sensor to use. There are two ways to define this/these names: • Use the name(s) you set up already – these are shown in the Almond app and can be changed, or • Create an Alexa group, like Bedroom or Downstairs, and add the device(s) or sensor(s) to the group. More information at http://amzn.to/2965dCE Once you know the name or group of your device(s), you can say the following: * If you have switches or non-dimmable lights, you can turn them on/off, for example: “Alexa, turn on my Bedroom lights”. * If you have dimmable lights, you can change the brightness, for example: “Alexa, brighten Downstairs to 60 percent", or "Alexa, dim the Living Room lights". * If you have thermostats, you can change the temperature, for example: “Alexa, lower Bedroom by 3 degrees”, or “Alexa, set Downstairs temperature to 72”. For more advanced router features such as controlling guest network, blocking devices, and activating scenes and modes, go to the Skills section in the Alexa App, search for the ‘Almond’ skill and enable it. Once enabled, say “Alexa, ask Almond for help” for more information If you have any questions, please visit the Almond FAQ at https://www.securifi.com/faq *** ### Skill Details * **Invocation Name:** null * **Category:** null * **ID:** amzn1.ask.skill.97f1457f-4da7-45f9-ba40-bf5e2dc3a060 * **ASIN:** B01G3FBGM6 * **Author:** Securifi Inc. * **Release Date:** September 22, 2016 @ 16:12:04 * **Privacy Policy:** https://connect.securifi.com/prpo * **In-App Purchasing:** No
71.71875
275
0.746841
eng_Latn
0.983119
97b78bbcd2ef943b774adbb0517f69926fda925b
5,435
md
Markdown
_pages/settings.md
3RD-Dimension/3RD-Dimension.github.io
e2f35b77bf58c5a973a40b2b98037c4888457b7a
[ "MIT" ]
null
null
null
_pages/settings.md
3RD-Dimension/3RD-Dimension.github.io
e2f35b77bf58c5a973a40b2b98037c4888457b7a
[ "MIT" ]
null
null
null
_pages/settings.md
3RD-Dimension/3RD-Dimension.github.io
e2f35b77bf58c5a973a40b2b98037c4888457b7a
[ "MIT" ]
null
null
null
---
title: "The Settings Window"
permalink: /settings/
excerpt: "Information on the Settings Window"
sidebar:
  nav: "wiki"
toc: true
toc_label: "On this page"
---

## Connection Tab

![](/images/wiki/3rdd_settings_connection.png){: .align-center}
{:style="clear: center"}

The Connection tab is where you can change settings relevant to the connection to your GRBL controller.

* Serial Com Port.
* Baud Rate - From GRBL 1.1, the baud rate is 115200.
* Reset GRBL when connecting - The DTR signal is used to reset GRBL when the connection is established.
* Controller Buffer - Arduino Uno controllers have a buffer of 127. Arduino Mega controllers have a buffer of 255/256.
* Status Poll Interval (ms) - How often to poll the controller for a status update.
* Log Traffic to file - Log any connection-related messages to a log file. A separate, always-enabled log file records any errors or info related to the program itself; it is used to fix bugs and may be asked for if an issue is opened.
* Ignore Additional Axes in Status Messages - Compatibility with custom GRBL versions which report 4+ axes in status reports.

## Viewport Tab

![](/images/wiki/3rdd_settings_viewport.png){: .align-center}
{:style="clear: center"}

This is where you can change various options of the job display.

* Preview Arc Segment Length (mm).
* Height Map Opacity (currently not used and will be removed in future versions).
* Grid Thickness (default 0.1).
* Preview Toolpath.
* Customize color of Toolpath Rapid lines.
* Customize color of Toolpath lines.
* Customize color of Toolpath Arc lines.

## Probing Tab

![](/images/wiki/3rdd_settings_probing.png){: .align-center}
{:style="clear: center"}

Here you will find settings relevant to the probing button found on the [Probing / TLO Panel](https://3rd-dimension.github.io/probingpanel/). When the probing button is clicked, the process will use these settings to probe, set Z0 for your tool automatically, and then raise.

* Probe Plunge Feed Rate (mm/min) - The rate at which Z lowers until contact with your probe (default 20 mm/min).
* Abort and Single on Probe Fail - If checked, an error will be displayed if the probe cycle exceeds the Max Probe Distance.
* Safe Retract Height (mm) - The amount Z will raise by after a successful probe.
* Max Probe Distance (mm) - How far Z will lower to find your touch plate; the cycle aborts if this distance is exceeded.
* Probe Thickness (mm) - Thickness of your probe/touch plate.

## GCode Tab

![](/images/wiki/3rdd_settings_gcode.png){: .align-center}
{:style="clear: center"}

This tab contains various options in relation to GCode, for the sender and for the loaded file.

* Include Spindle 'S' Commands (relevant to the loaded file).
* Include Dwell 'G4' Commands (relevant to the loaded file).
* Include Program End (M2, M30) - These will be used to signal the end of the job.
* Warning Window on File Load - If checked, information on what is being ignored when a file is loaded will be displayed. **Note:** ignored commands only affect how the file is displayed; they are still sent to the controller.
* FeedRate Inc/Dec - If checked, the feed rate will be increased or decreased by 10 each time (when using a hotkey).
* Spindle Inc/Dec - If checked, the spindle will be increased or decreased by 10 each time (when using a hotkey).

## HotKey Tab

**Added in Version 2.0.1.1**

![](/images/wiki/3rdd_settings_hotkey.png){: .align-center}
{:style="clear: center"}

The Hotkeys tab is where you can set hotkeys for various commands. Some hotkeys only work in certain states of the machine, which will be noted.

**To set a key**

Press the required key combination for the function you choose and it is set. When you close the settings, your hotkeys will be available. To clear a hotkey, double-click it with your mouse.

**Current Configurable Hotkeys**

* Jog X Axis + (Default: Right) - Only available if currently not sending a file and Jogging Keyboard Focused
* Jog X Axis - (Default: Left) - Only available if currently not sending a file and Jogging Keyboard Focused
* Jog Y Axis + (Default: Up) - Only available if currently not sending a file and Jogging Keyboard Focused
* Jog Y Axis - (Default: Down) - Only available if currently not sending a file and Jogging Keyboard Focused
* Jog Z Axis + (Default: Numpad 9) - Only available if currently not sending a file and Jogging Keyboard Focused
* Jog Z Axis - (Default: Numpad 3) - Only available if currently not sending a file and Jogging Keyboard Focused
* Emergency (Default: Escape) - Always available
* Cycle Start - Starts sending the file if it is not currently being sent
* File Hold / Jog Cancel - Cancels jogging (if jogging) or holds if a file is currently being sent
* Return to Origin (Zero of X, Y, Z) - Only available if the machine is idle
* Spindle Increase - Only available if currently running a file
* Spindle Decrease - Only available if currently running a file
* Feed Rate Increase - Only available if the machine is sending
* Feed Rate Decrease - Only available if the machine is sending
* Jog Rate Increase - Only available if the machine is idle or jogging
* Jog Rate Decrease - Only available if the machine is idle or jogging
* ReDo/Reload Last File - Only available if the machine is idle
* Spindle On/Off - Firmware takes care of when it can be used
* Coolant On/Off - Firmware takes care of when it can be used
* Mist On/Off - Firmware takes care of when it can be used
59.076087
367
0.757314
eng_Latn
0.996359
97b8ae1d3de3fd68892d874ff533c221e2924782
32
md
Markdown
test-input/content/ABSTRACT.md
onnohaldar/respec-tools
feeafa24166516d6b496c6cafb46b384f3344273
[ "MIT" ]
null
null
null
test-input/content/ABSTRACT.md
onnohaldar/respec-tools
feeafa24166516d6b496c6cafb46b384f3344273
[ "MIT" ]
null
null
null
test-input/content/ABSTRACT.md
onnohaldar/respec-tools
feeafa24166516d6b496c6cafb46b384f3344273
[ "MIT" ]
null
null
null
# Abstract
This is an abstract.
10.666667
20
0.75
nld_Latn
0.999923
97b92a25f8178c4479f89dcc05220a50d64774f9
15
md
Markdown
README.md
ferdousulhaque/delivery-riders
a76f650bd64e0de30b05f24d0ce617fe9f432824
[ "MIT" ]
null
null
null
README.md
ferdousulhaque/delivery-riders
a76f650bd64e0de30b05f24d0ce617fe9f432824
[ "MIT" ]
null
null
null
README.md
ferdousulhaque/delivery-riders
a76f650bd64e0de30b05f24d0ce617fe9f432824
[ "MIT" ]
1
2020-04-09T09:01:00.000Z
2020-04-09T09:01:00.000Z
# Delivery App
7.5
14
0.733333
eng_Latn
0.836534
97b959109690dedad204175db13912e1411e5963
5,154
md
Markdown
chapter_convolutional-neural-networks/lenet.md
Feywell/gluon-tutorials-zh
b4d51137db8aed114811128405ee93c681590fcc
[ "Apache-2.0" ]
null
null
null
chapter_convolutional-neural-networks/lenet.md
Feywell/gluon-tutorials-zh
b4d51137db8aed114811128405ee93c681590fcc
[ "Apache-2.0" ]
null
null
null
chapter_convolutional-neural-networks/lenet.md
Feywell/gluon-tutorials-zh
b4d51137db8aed114811128405ee93c681590fcc
[ "Apache-2.0" ]
null
null
null
# Convolutional Neural Networks (LeNet)

In the ["Implementation of Multilayer Perceptrons from Scratch"](../chapter_deep-learning-basics/mlp-scratch.md) section we constructed a two-layer perceptron model to classify the images in Fashion-MNIST. Each image is 28 pixels in both height and width. We flattened each image into a vector of length 784 and fed it into the model. This approach is simple, but it has limitations:

1. Pixels that are vertically close to each other may end up far apart in the vector representation of the image, so the patterns they form can be hard for the model to recognize.
2. For large input images, we end up with an overly large model. Suppose the input is a color photo with a height and width of 1000 pixels each. Even if the hidden layer output is still 256, the shape of this layer's weights is $3,000,000\times 256$, which takes up nearly 3 GB of memory. This leads to an overly complex model and excessive storage overhead.

Convolutional layers attempt to solve both problems. They preserve the input shape, which makes it possible to effectively exploit correlations in the data along both the horizontal and vertical directions. They also apply the same kernel repeatedly across the input via a sliding window, which gives a much more compact representation of the model parameters.

A convolutional neural network is a network composed mainly of convolutional layers. In this section we introduce LeNet [1], an early convolutional neural network used to recognize handwritten digits in images. The name comes from Yann LeCun, the first author of the paper. LeNet demonstrated that a convolutional neural network trained with gradient descent could achieve state-of-the-art results in handwritten digit recognition. This groundbreaking work first brought convolutional neural networks onto the stage and made them known to the world.

## The LeNet Model

LeNet is divided into two parts: a block of convolutional layers and a block of fully connected layers. The basic unit in the convolutional block is a convolutional layer followed by a max pooling layer: the convolutional layer recognizes spatial patterns in the image, such as lines and parts of objects, and the subsequent max pooling layer reduces the convolutional layer's sensitivity to location. The convolutional block is built by stacking two of these basic units, i.e. two convolutional layers and two max pooling layers. Each convolutional layer uses a $5\times 5$ window and applies a sigmoid activation function $f(x)=\frac{1}{1+e^{-x}}$ to its output to squash it nonlinearly into the $(0,1)$ interval. The first convolutional layer has 6 output channels and the second increases that to 16; the second layer's input has a smaller height and width than the first layer's, so the number of output channels is increased to keep the model complexity similar. Both max pooling layers use a $2\times 2$ window with a stride of 2, which means the pooling windows never overlap.

The convolutional block flattens each sample's output into a vector that is passed to the fully connected block. The fully connected block consists of two fully connected layers with output sizes 120 and 84, followed by an output layer of size 10 (because there are 10 digit classes). Below we implement LeNet with the Sequential class.

```{.python .input}
import sys
sys.path.insert(0, '..')

import gluonbook as gb
import mxnet as mx
from mxnet import autograd, nd, gluon, init
from mxnet.gluon import loss as gloss, nn
from time import time

net = nn.Sequential()
net.add(
    nn.Conv2D(channels=6, kernel_size=5, activation='sigmoid'),
    nn.MaxPool2D(pool_size=2, strides=2),
    nn.Conv2D(channels=16, kernel_size=5, activation='sigmoid'),
    nn.MaxPool2D(pool_size=2, strides=2),
    # Dense by default transforms input of shape
    # (batch size, channels, height, width) into input of shape
    # (batch size, channels * height * width).
    nn.Dense(120, activation='sigmoid'),
    nn.Dense(84, activation='sigmoid'),
    nn.Dense(10)
)
```

Next we construct a single-channel data example with a height and width of 28, and perform forward computation layer by layer to see the output size of each layer.

```{.python .input}
X = nd.random.uniform(shape=(1, 1, 28, 28))
net.initialize()
for layer in net:
    X = layer(X)
    print(layer.name, 'output shape:\t', X.shape)
```

We can see that in the convolutional block the height and width of the image shrink layer by layer. The convolutional layers reduce the height and width by 4 because they use no padding, while the pooling layers halve them, and the number of channels grows from 1 to 16. The fully connected layers then reduce the output size further until it reaches 10.

## Getting the Data and Training

We still use Fashion-MNIST as the training data.

```{.python .input}
batch_size = 256
train_iter, test_iter = gb.load_data_fashion_mnist(batch_size=batch_size)
```

Because a convolutional neural network is computationally more demanding than a multilayer perceptron, we use a GPU to accelerate the computation. We try to create an NDArray on GPU 0; if that succeeds we use GPU 0, otherwise we fall back to the CPU.

```{.python .input}
def try_gpu():
    try:
        ctx = mx.gpu()
        _ = nd.zeros((1,), ctx=ctx)
    except:
        ctx = mx.cpu()
    return ctx

ctx = try_gpu()
ctx
```

Accordingly, we slightly modify the `evaluate_accuracy` function described in the ["Implementation of Softmax Regression from Scratch"](../chapter_deep-learning-basics/softmax-regression-scratch.md) section. Since the data initially lives in CPU memory, when `ctx` is a GPU we copy the data to the GPU (for example, GPU 0) using the `as_in_context` function introduced in the ["GPU Computing"](../chapter_deep-learning-computation/use-gpu.md) section.

```{.python .input}
def evaluate_accuracy(data_iter, net, ctx):
    acc = nd.array([0], ctx=ctx)
    for X, y in data_iter:
        # If ctx is a GPU, copy the data to the GPU.
        X = X.as_in_context(ctx)
        y = y.as_in_context(ctx)
        acc += gb.accuracy(net(X), y)
    return acc.asscalar() / len(data_iter)
```

We likewise slightly modify the `train_ch3` function defined in the ["Implementation of Softmax Regression from Scratch"](../chapter_deep-learning-basics/softmax-regression-scratch.md) section to make sure that the data and the model used in the computation reside together in CPU or GPU memory.

```{.python .input}
def train_ch5(net, train_iter, test_iter, loss, batch_size, trainer, ctx,
              num_epochs):
    print('training on', ctx)
    for epoch in range(1, num_epochs + 1):
        train_l_sum = 0
        train_acc_sum = 0
        start = time()
        for X, y in train_iter:
            # If ctx is a GPU, copy the data to the GPU.
            X = X.as_in_context(ctx)
            y = y.as_in_context(ctx)
            with autograd.record():
                y_hat = net(X)
                l = loss(y_hat, y)
            l.backward()
            trainer.step(batch_size)
            train_l_sum += l.mean().asscalar()
            train_acc_sum += gb.accuracy(y_hat, y)
        test_acc = evaluate_accuracy(test_iter, net, ctx)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, '
              'time %.1f sec' % (epoch, train_l_sum / len(train_iter),
                                 train_acc_sum / len(train_iter),
                                 test_acc, time() - start))
```

We reinitialize the model parameters on `ctx` using the Xavier random initialization introduced in the ["Multilayer Perceptron"](../chapter_deep-learning-basics/mlp.md) section. The loss function and training algorithm are the same cross-entropy loss function and mini-batch stochastic gradient descent as before.

```{.python .input}
lr = 0.8
num_epochs = 5
net.initialize(force_reinit=True, ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
loss = gloss.SoftmaxCrossEntropyLoss()
train_ch5(net, train_iter, test_iter, loss, batch_size, trainer, ctx,
          num_epochs)
```

The `try_gpu`, `evaluate_accuracy`, and `train_ch5` functions defined in this section are provided in the `gluonbook` package for later chapters to call. The `evaluate_accuracy` function will be further improved: its full implementation will be described in the ["Image Augmentation"](../chapter_computer-vision/image-augmentation.md) section.

## Summary

* LeNet alternates convolutional layers and max pooling layers, followed by fully connected layers, to classify images.

## Exercises

- LeNet was designed for MNIST, but the Fashion-MNIST dataset we use here is more complex. Try building a more complex network based on LeNet to improve accuracy. For example, consider adjusting the convolution window sizes, the output layer size, the activation functions, and the sizes of the fully connected layers. For optimization, try different learning rates, initialization methods, and more training epochs.
- Look up the specific initialization scheme used by Xavier initialization.

## Scan the QR code to reach the [discussion forum](https://discuss.gluon.ai/t/topic/737)

![](../img/qr_cnn-gluon.svg)

## References

[1] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
34.13245
330
0.715173
yue_Hant
0.338602
97b99ecdf00ccfb5ad939c529896c36be242ea37
974
md
Markdown
_datasets/audiitorkogu-liikmed.md
JohannaVic/opendata.riik.ee
a8fe2cca095b31a6a80feea6030397218f38829a
[ "MIT" ]
16
2018-11-03T11:01:15.000Z
2019-06-14T11:01:37.000Z
_datasets/audiitorkogu-liikmed.md
JohannaVic/opendata.riik.ee
a8fe2cca095b31a6a80feea6030397218f38829a
[ "MIT" ]
213
2018-10-27T12:28:48.000Z
2019-10-04T09:40:54.000Z
_datasets/audiitorkogu-liikmed.md
JohannaVic/opendata.riik.ee
a8fe2cca095b31a6a80feea6030397218f38829a
[ "MIT" ]
42
2018-11-22T13:31:22.000Z
2019-09-28T12:49:19.000Z
---
title: Audiitorkogu Liikmed
title_en: Members list of the Association of Auditors
notes: >-
  Audiitorkogu liikmete nimekiri. Audiitorkogu liige on vandeaudiitor,
  audiitorettevõtja ja Eestis tunnustatud kolmanda riigi vandeaudiitorid.
notes_en: >-
  Members list of the Association of Auditors. The members of Association of
  Auditors are sworn auditors, audit firms and third-country sworn auditors
  whose qualification has been recognized in Estonia.
category:
  - Majandus ja rahandus
category_en:
  - Economy and Finance
resources:
  - name: Audiitorkogu Liikmed
    url: 'https://www.audiitortegevus.ee/atr/web/opendata/v1/audiitorkogu_liikmed'
    format: JSON
    interactive: 'True'
license: OTHER
update_freq: 'http://purl.org/linked-data/sdmx/2009/code#freq-N'
organization: Rahandusministeerium
maintainer_name: Rahandusministeerium
maintainer_email: ''
maintainer_phone: ''
date_issued: '15/04/2020'
date_modified: '15/04/2020'
---
30.4375
82
0.784394
est_Latn
0.323121
97ba50141558791dee60fb47cf9a7d9c650c9bab
1,332
md
Markdown
labs/lab-05/Lab5.md
KKhaghani/oss-repo-template
689bc4812f0aed337313942d0ead85ee2388a3bd
[ "MIT" ]
null
null
null
labs/lab-05/Lab5.md
KKhaghani/oss-repo-template
689bc4812f0aed337313942d0ead85ee2388a3bd
[ "MIT" ]
null
null
null
labs/lab-05/Lab5.md
KKhaghani/oss-repo-template
689bc4812f0aed337313942d0ead85ee2388a3bd
[ "MIT" ]
null
null
null
# Lab 05 Report

# Part 1: CMake tutorial

Files for step 1 can be found in the [appropriate folder](step1).

Step 1: Running tutorial ![step1](step1/tutorials.PNG)

Step 2: [file folder](step2) | comparing mysqrt and sqrt ![step2](step2/mymath.png)

Step 3: [file folder](step3) | [second CMakeLists here](step3/MathFunctions_CMakeLists.txt) | once again running Tutorial ![step3](step3/rerun.PNG)

Step 4: [file folder](step4) | running (most of the) tests with ctest ![step4](step4/ctest.png)

Step 5: [file folder](step5) | running Tutorial (it does find log and exp) ![step5](step5/logexp.PNG)

# Part 2: Makefiles

Creating my own Makefile: [file folder](part2_mymake) | [the Makefile](part2_mymake/Makefile)

Relative sizes of static vs shared (dynamic) executables: ![relative sizes](part2_mymake/relativesize.png)

Using CMake: [file folder](part2_cmake) | [the CMakeLists](part2_cmake/CMakeLists.txt) | [the Makefile](part2_cmake/Makefile)

Relative sizes of static vs shared (dynamic) executables: ![relative sizes 2](part2_cmake/relativesize2.png)

Finally, the output of running the program: ![output](part2_mymake/static1.PNG)

And yes, for all four programs, the [output's](part2_mymake/static1.PNG) [all](part2_mymake/dynamic1.PNG) [the](part2_cmake/static2.PNG) [same](part2_cmake/dynamic2.PNG)!
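For context, a minimal CMakeLists that builds both static and shared variants of the same library usually looks like the sketch below (a generic example with assumed file names, not the exact files used in this lab):

```cmake
# Generic sketch: build one source file as both a static and a shared
# library, then link an executable against each variant.
cmake_minimum_required(VERSION 3.10)
project(lab5 C)

add_library(mymath_static STATIC mymath.c)
add_library(mymath_shared SHARED mymath.c)

add_executable(prog_static main.c)
target_link_libraries(prog_static mymath_static)

add_executable(prog_shared main.c)
target_link_libraries(prog_shared mymath_shared)
```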
28.956522
170
0.745495
eng_Latn
0.420615
97bb961d1ba0aa0d5a1c60676051d282566959e8
890
md
Markdown
dynamicsax2012-technet/rus-maintaining-vendor-information.md
s0pach/DynamicsAX2012-technet
8412306681e6b914ebcfad0a9ee05038474ef1e6
[ "CC-BY-4.0", "MIT" ]
1
2020-06-16T22:06:04.000Z
2020-06-16T22:06:04.000Z
dynamicsax2012-technet/rus-maintaining-vendor-information.md
s0pach/DynamicsAX2012-technet
8412306681e6b914ebcfad0a9ee05038474ef1e6
[ "CC-BY-4.0", "MIT" ]
null
null
null
dynamicsax2012-technet/rus-maintaining-vendor-information.md
s0pach/DynamicsAX2012-technet
8412306681e6b914ebcfad0a9ee05038474ef1e6
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: (RUS) Maintaining vendor information TOCTitle: (RUS) Maintaining vendor information ms:assetid: 48ef0b20-81d1-4db3-89c4-cb64df45c161 ms:mtpsurl: https://technet.microsoft.com/library/JJ665346(v=AX.60) ms:contentKeyID: 49387434 author: Khairunj ms.date: 04/18/2014 mtps_version: v=AX.60 audience: Application User ms.search.region: Russia --- # (RUS) Maintaining vendor information _**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2_ The following topics provide information about maintaining vendor information. [(RUS) Set up an RCOAD code for a vendor](rus-set-up-an-rcoad-code-for-a-vendor.md) [(RUS) Set up a division for a company and associate it with a vendor](rus-set-up-a-division-for-a-company-and-associate-it-with-a-vendor.md) [(RUS) Set up a vendor as a foreign counteragent](rus-set-up-a-vendor-as-a-foreign-counteragent.md)
29.666667
141
0.768539
eng_Latn
0.509158
97bbc1c3cc1b42d7230ad0cd945d780d222c0c8d
267
md
Markdown
Cap 15/README.md
ErickPedrosa/HTML-CSS
85ee18b5bae69c808a3c83274029c1b22b706e06
[ "MIT" ]
null
null
null
Cap 15/README.md
ErickPedrosa/HTML-CSS
85ee18b5bae69c808a3c83274029c1b22b706e06
[ "MIT" ]
null
null
null
Cap 15/README.md
ErickPedrosa/HTML-CSS
85ee18b5bae69c808a3c83274029c1b22b706e06
[ "MIT" ]
null
null
null
Exercises for chapter 15 of the PDF;

<a href="https://erickpedrosa.github.io/HTML-CSS/Cap%2015/Ex.%20019/index.html">Open exercise 19</a>;

<a href="https://erickpedrosa.github.io/HTML-CSS/Cap%2015/Ex.%20020/pseudoclasses.html">Open exercise 20</a>;
44.5
113
0.7603
por_Latn
0.82142
a2feacd21e4d1fc0aa563a5453bd254a910fa4e7
131
md
Markdown
README.md
dcmarti/dockers
68884178b7a80b43eba8706932c65b639551337e
[ "Apache-2.0" ]
null
null
null
README.md
dcmarti/dockers
68884178b7a80b43eba8706932c65b639551337e
[ "Apache-2.0" ]
null
null
null
README.md
dcmarti/dockers
68884178b7a80b43eba8706932c65b639551337e
[ "Apache-2.0" ]
null
null
null
# YAMLs for docker-compose
* Jenkins
* Parse database
* RethinkDB database
* Selenium
* Simple React application for testing
* Sonar
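As an illustrative sketch, a compose file for one of these services might look like the following; the service definition here is an assumption, not one of this repo's actual YAMLs:

```yaml
# Hypothetical docker-compose sketch for the RethinkDB service;
# the actual YAML files in this repo may differ.
version: "3"
services:
  rethinkdb:
    image: rethinkdb:latest
    ports:
      - "8080:8080"     # web admin UI
      - "28015:28015"   # client driver port
    volumes:
      - ./data:/data
```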
18.714286
38
0.78626
eng_Latn
0.429857
a2fec4c9935895c2c6aac538cf97823591e6292c
2,894
md
Markdown
_posts/2017-07-25-blackjack-writeup.md
sungkyucho/sungkyu
6503c28e89bc49271f22729fef56b3e621b103f7
[ "MIT" ]
null
null
null
_posts/2017-07-25-blackjack-writeup.md
sungkyucho/sungkyu
6503c28e89bc49271f22729fef56b3e621b103f7
[ "MIT" ]
null
null
null
_posts/2017-07-25-blackjack-writeup.md
sungkyucho/sungkyu
6503c28e89bc49271f22729fef56b3e621b103f7
[ "MIT" ]
null
null
null
---
layout: post
title: PWNABLE KR - TODDLER - blackjack - 1pt
categories: Wargame
tags: [pwnablekr, wargame, blackjack]
---

Usually the challenges I had seen involved bypassing the system setup so you can read the flag, but this time the page just tells you to connect to the server with nc, and that's all.

![fig1]({{ site.baseurl }}/images/pwnablekr/blackjack/fig/1.png)

Since it's only 1pt it should be an easy challenge ~~even if it's hard for me~~, but honestly I was quite lost about how to approach it at first. ~~Wasn't I always..~~

# 0. First, look at the source code

Connecting with nc just means playing the game endlessly, so I decided to go through the link the challenge page provides.

[Blackjack source code link](http://cboard.cprogramming.com/c-programming/114023-simple-blackjack-program.html)

* The source code is modularized and reads fairly well, and once you play the game a round it becomes much clearer what the code does.
* At first I assumed it was about reading the flag as usual, so I looked into the ```nc``` command. [netcat usage link](http://devanix.tistory.com/307)
* I inspected the exchanged packets with the ```-o``` option ~~maybe something is hidden~~, wondered if I should set up a [reverse shell](http://kali-km.tistory.com/entry/Netcat-Reverse-Shell) with ```-l``` ~~as if that would work~~, and tried this and that, but failed.
* The input also seemed to be handled well, so I couldn't spot a place where a buffer would overflow..

![fig2]({{ site.baseurl }}/images/pwnablekr/blackjack/fig/2.png)

* And then I suddenly succeeded - _ -;;

# 1. Check it up

The problem was the ```betting()``` function. Even at a glance something is off. Checking the amount when it doesn't match your balance is fine, but clearly **there is no routine that keeps re-checking**: it compares against ```cash``` once and then simply returns the value.

```c
int betting() //Asks user amount to bet
{
 printf("\n\nEnter Bet: $");
 scanf("%d", &bet);

 if (bet > cash) //If player tries to bet more money than player has
 {
  printf("\nYou cannot bet more money than you have.");
  printf("\nEnter Bet: ");
  scanf("%d", &bet);
  return bet;
 }
 else return bet;
} // End Function
```

* The ```bet``` variable is in fact declared as a global; the betting amount is read -> if it is larger than the money you have, it is read again -> and then it's done.
* There is no routine anywhere that validates the re-entered value again.
* Finally, when you win, that re-entered ```bet``` is added to your existing money, ```cash = cash + bet```, but when you lose it's ```cash = cash - bet```, so you have to win. **(Using this, if you make bet negative, losing works too)**

```c
while(i<=21) //While loop used to keep asking user to hit or stay at most twenty-one times
             // because there is a chance user can generate twenty-one consecutive 1's
 {
  if(p==21) //If user total is 21, win
  {
   printf("\nUnbelievable! You Win!\n");
   won = won+1;
   cash = cash+bet;
   printf("\nYou have %d Wins and %d Losses. Awesome!\n", won, loss);
   dealer_total=0;
   askover();
  }

  if(p>21) //If player total is over 21, loss
  {
   printf("\nWoah Buddy, You Went WAY over.\n");
   loss = loss+1;
   cash = cash - bet;
   printf("\nYou have %d Wins and %d Losses. Awesome!\n", won, loss);
   dealer_total=0;
   askover();
  }
```

# 2. Exploit

* Just enter a huge number at the second prompt and you're done
* Either win like this

![fig3]({{ site.baseurl }}/images/pwnablekr/blackjack/fig/3.png)

* Or lose like this (with a negative bet)

![fig4]({{ site.baseurl }}/images/pwnablekr/blackjack/fig/4.png)
29.835052
167
0.578438
kor_Hang
0.999938
a2fed55cfc01b1feae54914b67a87d437f8cba70
1,938
md
Markdown
dynamicsax2012-technet/view-the-status-and-history-of-a-document.md
RobinARH/DynamicsAX2012-technet
d0d0ef979705b68e6a8406736612e9fc3c74c871
[ "CC-BY-4.0", "MIT" ]
null
null
null
dynamicsax2012-technet/view-the-status-and-history-of-a-document.md
RobinARH/DynamicsAX2012-technet
d0d0ef979705b68e6a8406736612e9fc3c74c871
[ "CC-BY-4.0", "MIT" ]
null
null
null
dynamicsax2012-technet/view-the-status-and-history-of-a-document.md
RobinARH/DynamicsAX2012-technet
d0d0ef979705b68e6a8406736612e9fc3c74c871
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: View the status and history of a document
TOCTitle: View the status and history of a document
ms:assetid: 216cfcd5-8606-4826-bb65-8dceceab3cbf
ms:mtpsurl: https://technet.microsoft.com/en-us/library/Dd309621(v=AX.60)
ms:contentKeyID: 39508860
ms.date: 04/18/2014
mtps_version: v=AX.60
audience: Application User
ms.search.region: Global
---

# View the status and history of a document

_**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012_

Follow these steps to view the status and history of a workflow instance.

1. Click **Home** \> **Inquiries** \> **Workflow** \> **Workflow history**.

2. To filter the list of workflow instances that is displayed in the form, select a workflow status in the **Filter by status** field:

    - **All** – Display all workflow instances.

    - **Pending** – Display only workflow instances that are currently processing.

    - **Completed** – Display only workflow instances that have completed the required processing.

    - **Canceled** – Display only workflow instances that have been canceled.

    - **Stopped (error)** – Display only workflow instances that have stopped because of an error.

    - **Unrecoverable** – Display only workflow instances that have stopped because of an unrecoverable error.

3. On the **Overview** tab in the upper part of the form, select a workflow instance. Details about the workflow instance are displayed on the following tabs in the lower part of the form:

    - **Overview** – This tab displays the status of the workflow instance and indicates where the submitted document is in the workflow.

    - **Work items** – Use this tab to assign a task or approval step to another user.

    - **Tracking details** – This tab displays the history of the workflow instance.
39.55102
147
0.714654
eng_Latn
0.990535
a2fedd6a78b951e5e1d9eb4627b0cfda4e992ead
7,644
md
Markdown
README.md
poliglot/concraft-de
a908638b21f68965a25e768f956463386abc8753
[ "BSD-2-Clause" ]
null
null
null
README.md
poliglot/concraft-de
a908638b21f68965a25e768f956463386abc8753
[ "BSD-2-Clause" ]
null
null
null
README.md
poliglot/concraft-de
a908638b21f68965a25e768f956463386abc8753
[ "BSD-2-Clause" ]
null
null
null
Concraft-pl
===========

This package provides a morphosyntactic tagger for the Polish language.
The tool combines the following components into a pipeline:

* A morphosyntactic segmentation and analysis tool [Maca][maca],
* A morphosyntactic disambiguation library [Concraft][concraft],

<!---
* A simple, frequency-driven lemmatiser (TODO).

Until the lemmatiser component is implemented, the tagger may output multiple
interpretations (all related to the same morphosyntactic tag, but with
different lemmas) in some cases.
-->

For now, the tagger doesn't provide any lemmatisation capabilities. As a result, it may output multiple interpretations (all related to the same morphosyntactic tag, but with different lemmas) for some known words, while for out-of-vocabulary words it just outputs orthographic forms as lemmas.

See the [homepage][homepage] if you wish to download a pre-trained model for the Polish language.

Installation
============

You will need the [Glasgow Haskell Compiler (GHC)][ghc] and the [Cabal][cabal] tool to build Concraft-pl. The easiest way to get both [GHC][ghc] and [Cabal][cabal] is to install the latest [Haskell Platform][haskell-platform].

Unless you plan to use a custom preprocessing pipeline or run [Maca][maca] on a different machine (see section [Tagging analysed data](#tagging-analysed-data)), you will also need the [Maca][maca] tool. A detailed [installation guide][maca-install] can be found on the [Maca][maca] homepage.

To install Concraft-pl from the official [Hackage repository][hackage-repo] just run:

    cabal install concraft-pl

The `concraft-pl` tool will be installed in the `~/.cabal/bin` directory by default.

If you want to upgrade Concraft-pl to a newer version you should update the package list first:

    cabal update 
    cabal install concraft-pl

To install the latest development version from github just run

    cabal install

from the `concraft-pl` toplevel directory.

Data format
===========

The current version of Concraft-pl works on a simple `plain` text format supported by the [Corpus2][corpus2] tools. You will have to install these tools when you install [Maca][maca] anyway, so you can use them to convert the output generated by Concraft-pl to one of the other formats supported by [Corpus2][corpus2].

Training
========

If you have the training material with disambiguation annotations (stored in the `plain` text format) you can train the Concraft-pl model yourself.

    concraft-pl train train.plain -e eval.plain -o model.gz

Concraft-pl uses the [NKJP][nkjp] [morphosyntactic tagset definition](config/nkjp-tagset.cfg) by default. It will also reanalyse the input data before the actual training. If you want to change this behaviour, use the `--tagset` and `--noana` command-line options.

Consider using [runtime system options][ghc-rts]. You can speed up processing by using multiple cores with the `-N` option. The `-s` option will produce runtime statistics, such as the time spent in the garbage collector. If the program is spending too much time collecting garbage, you can try to increase the allocation area size with the `-A` option. If you have a big dataset and it doesn't fit in the computer memory, use the `--disk` flag. For example, to train the model using four threads and a 256M allocation area size, run:

    concraft-pl train train.plain -e eval.plain -o model.gz +RTS -N4 -A256M -s

Run `concraft-pl train --help` to learn more about the program arguments and possible training options.
Finally, you may consider pruning the resultant model in order to reduce its size. Features with values close to 0 (in the log-domain) have little effect on the modeled probability and, therefore, it should be safe to discard them.

    concraft-pl prune -t 0.05 input-model.gz pruned-model.gz

Tagging
=======

Once you have a Concraft-pl model you can use the following command to tag the `input.txt` file:

    concraft-pl tag model.gz < input.txt > output.plain

The input file is first divided into paragraphs (the tool interprets empty lines as paragraph ending markers). After that, [Maca][maca] is used to segment and analyse each paragraph. Finally, the [Concraft][concraft] module is used to disambiguate each sentence in the [Maca][maca] output.

With the `--marginals` option enabled, Concraft-pl will output marginal probabilities corresponding to individual tags (determined on the basis of the disambiguation model) instead of `disamb` markers.

Run `concraft-pl tag --help` to learn more about possible tagging options.

Server
======

Concraft-pl also provides a client/server mode. It is handy when, for example, you need to tag a large collection of small files. Loading a Concraft-pl model from disk takes a considerable amount of time, which makes the tagging method described above very slow in such a setting.

To start the Concraft-pl server, run:

    concraft-pl server --inmodel model.gz

You can supply a custom port number using the `--port` option. For example, to run the server on the `10101` port, use the following command:

    concraft-pl server --inmodel model.gz --port 10101

To use the server in a multi-threaded environment, you need to specify the `-N` [RTS][ghc-rts] option. A set of options which usually yields good server performance is presented in the following example:

    concraft-pl server --inmodel model.gz +RTS -N -A4M -qg1 -I0

Run `concraft-pl server --help` to learn more about possible server-mode options.

The client mode works just like the tagging mode. The only difference is that, instead of supplying your client with a model, you need to specify the port number (in case you used a custom one when starting the server; otherwise, the default port number will be used).

    concraft-pl client --port 10101 < input.txt > output.plain

Run `concraft-pl client --help` to learn more about possible client-mode options.

Tagging analysed data
=====================

In some situations you might want to feed Concraft-pl with previously analysed data. Perhaps your Maca instance is installed on a different machine, or maybe you want to use Concraft-pl with a custom preprocessing pipeline.

If you want to use a preprocessing pipeline significantly different from the standard one (Maca), you should first train your own Concraft model. To train the model on analysed data use the `--noana` training flag.

Use the same `--noana` flag when you want to tag analysed data. Input format should be the same as the output format. This option is currently not supported in the client/server mode.

*Remember to use the same preprocessing pipeline (segmentation + analysis) for both training and disambiguation. 
Inconsistencies between training material and input data may severely harm the quality of disambiguation.* [homepage]: http://zil.ipipan.waw.pl/Concraft "Homepage" [concraft]: https://github.com/kawu/concraft "Concraft" [hackage-repo]: http://hackage.haskell.org/package/concraft-pl "Concraft-pl Hackage repository" [maca]: http://nlp.pwr.wroc.pl/redmine/projects/libpltagger/wiki "Maca" [maca-install]: http://nlp.pwr.wroc.pl/redmine/projects/libpltagger/wiki#Download-and-install-MACA "Maca installation guide" [corpus2]: http://nlp.pwr.wroc.pl/redmine/projects/corpus2/wiki "Corpus2" [ghc]: http://www.haskell.org/ghc "Glasgow Haskell Compiler" [ghc-rts]: http://www.haskell.org/ghc/docs/latest/html/users_guide/runtime-control.html "GHC runtime system options" [cabal]: http://www.haskell.org/cabal "Cabal" [haskell-platform]: http://www.haskell.org/platform "Haskell Platform" [nkjp]: http://nkjp.pl/index.php?page=0&lang=1 "NKJP"
40.877005
124
0.764129
eng_Latn
0.993113
a2feee3d589041e7c515b3dbec9db818946b2a58
1,367
md
Markdown
README.md
dents0/IMDb-Top-Rated-Series
4240058633a0f5d041e12feed48f4bee118945e6
[ "MIT" ]
null
null
null
README.md
dents0/IMDb-Top-Rated-Series
4240058633a0f5d041e12feed48f4bee118945e6
[ "MIT" ]
null
null
null
README.md
dents0/IMDb-Top-Rated-Series
4240058633a0f5d041e12feed48f4bee118945e6
[ "MIT" ]
null
null
null
Scraping IMDb Top Rated TV Shows
================================

Project source can be downloaded from https://github.com/dents0/IMDb-Top-Rated-Series.git

----

Author
------

Deniss Tsokarev

Description
-----------

This script uses the Python modules **Requests**, **Beautiful Soup 4** and **Pandas** to scrape [IMDb](https://www.imdb.com/) for its top rated TV shows and saves the information in an Excel file. The information includes each show's **rank**, **title**, release **year**, **rating** and the number of **votes**.

Requirements
------------

Python 3.x

Modules:

* [Requests](http://docs.python-requests.org/en/master/)
* [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
* [Pandas](https://pandas.pydata.org/pandas-docs/stable/)

How to use
----------

1) Running **series_rating.py** will print out the 250 top rated TV shows according to IMDb in a command-line interpreter:

![terminal](https://user-images.githubusercontent.com/28843507/56235191-39ea0400-6087-11e9-9b40-086189ef2458.PNG)

2) An Excel file **top_rated_series.xlsx** will be created in the same directory:

![files](https://user-images.githubusercontent.com/28843507/56235776-605c6f00-6088-11e9-9bb5-1960f2a91072.PNG)

3) The file will contain the scraped information:

![excel](https://user-images.githubusercontent.com/28843507/56235868-9a2d7580-6088-11e9-8736-2101a9ea0724.PNG)
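The overall flow is: fetch the chart page with Requests, parse the rows with Beautiful Soup, and export with Pandas. The sketch below is hypothetical — the chart URL is real, but the CSS selectors are assumptions and this is not the repository's actual `series_rating.py`:

```python
# Hypothetical sketch of the scrape-and-export flow; the selectors
# are assumptions and may not match IMDb's current markup.
import requests
import pandas as pd
from bs4 import BeautifulSoup

html = requests.get("https://www.imdb.com/chart/toptv/").text
soup = BeautifulSoup(html, "html.parser")

rows = []
for rank, item in enumerate(soup.select("td.titleColumn"), start=1):
    rows.append({
        "rank": rank,
        "title": item.a.text.strip(),
        "year": item.span.text.strip("()"),
    })

pd.DataFrame(rows).to_excel("top_rated_series.xlsx", index=False)
```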
35.973684
190
0.719824
yue_Hant
0.355097
a2ff0f9979b5a0598bebb6f0181dc1597dcfa84c
9,432
md
Markdown
articles/active-directory-domain-services/deploy-kcd.md
Microsoft/azure-docs.sv-se
a43cb26da920952026f5e9c8720f3356a84de75b
[ "CC-BY-4.0", "MIT" ]
7
2017-08-28T08:02:11.000Z
2021-05-05T07:47:55.000Z
articles/active-directory-domain-services/deploy-kcd.md
MicrosoftDocs/azure-docs.sv-se
a43cb26da920952026f5e9c8720f3356a84de75b
[ "CC-BY-4.0", "MIT" ]
476
2017-10-15T08:20:18.000Z
2021-04-16T05:20:11.000Z
articles/active-directory-domain-services/deploy-kcd.md
MicrosoftDocs/azure-docs.sv-se
a43cb26da920952026f5e9c8720f3356a84de75b
[ "CC-BY-4.0", "MIT" ]
39
2017-08-03T09:46:48.000Z
2021-11-05T11:41:27.000Z
---
title: Kerberos constrained delegation for Azure AD Domain Services | Microsoft Docs
description: Learn how to enable resource-based Kerberos constrained delegation (KCD) in an Azure Active Directory Domain Services managed domain.
services: active-directory-ds
author: justinha
manager: daveba
ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d
ms.service: active-directory
ms.subservice: domain-services
ms.workload: identity
ms.topic: how-to
ms.date: 07/06/2020
ms.author: justinha
ms.openlocfilehash: 138b90a33ff1dbc4b014f17fa0098112e1da66e4
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/29/2021
ms.locfileid: "96619784"
---
# <a name="configure-kerberos-constrained-delegation-kcd-in-azure-active-directory-domain-services"></a>Configure Kerberos constrained delegation (KCD) in Azure Active Directory Domain Services

As you run applications, those applications may need to access resources in the context of a different user. Active Directory Domain Services (AD DS) supports a mechanism called *Kerberos delegation* that enables this use case. Kerberos *constrained* delegation (KCD) builds on this mechanism to define specific resources that can be accessed in the context of the user. Azure Active Directory Domain Services (Azure AD DS) managed domains are more securely locked down than traditional on-premises AD DS environments, so use the more secure *resource-based* KCD.

This article shows you how to configure resource-based Kerberos constrained delegation in an Azure AD DS managed domain.

## <a name="prerequisites"></a>Prerequisites

To complete this article, you need the following resources:

* An active Azure subscription.
    * If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
    * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
    * If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
* A Windows Server management VM that is joined to the Azure AD DS managed domain.
    * If needed, complete the tutorial to [create a Windows Server VM and join it to a managed domain][create-join-windows-vm], then [install the AD DS management tools][tutorial-create-management-vm].
* A user account that's a member of the *Azure AD DC administrators* group in your Azure AD tenant.

## <a name="kerberos-constrained-delegation-overview"></a>Kerberos constrained delegation overview

Kerberos delegation lets one account impersonate another account to access resources. For example, a web application that accesses a back-end web component can impersonate itself as a different user account when it makes the back-end connection. Kerberos delegation is insecure because it doesn't limit what resources the impersonating account can access.

Kerberos *constrained* delegation (KCD) restricts the services or resources that a given server or application can connect to when impersonating another identity. Traditional KCD requires domain administrator privileges to configure a domain account for a service, and it restricts the account to run on a single domain.

Traditional KCD also has a few issues. For example, in earlier operating systems the service administrator had no useful way of knowing which front-end services delegated to the resource services they owned. Any front-end service that could delegate to a resource service was a potential attack point. If a server that hosted a front-end service configured to delegate to resource services was compromised, the resource services could also be compromised.

In a managed domain, you don't have domain administrator privileges. As a result, traditional account-based KCD can't be configured in a managed domain. Resource-based KCD can be used instead, which is also more secure.

### <a name="resource-based-kcd"></a>Resource-based KCD

Windows Server 2012 and later give service administrators the ability to configure constrained delegation for their services. This model is known as resource-based KCD. With this approach, the back-end server administrator can allow or deny specific front-end services from using KCD.

Resource-based KCD is configured using PowerShell. You use the [Set-ADComputer][Set-ADComputer] or [Set-ADUser][Set-ADUser] cmdlets, depending on whether the impersonating account is a computer account or a user account / service account.

## <a name="configure-resource-based-kcd-for-a-computer-account"></a>Configure resource-based KCD for a computer account

In this scenario, let's assume you have a web app that runs on the computer named *contoso-webapp.aaddscontoso.com*. The web app needs to access a web API that runs on the computer named *contoso-api.aaddscontoso.com* in the context of domain users. Complete the following steps to configure this scenario:

1. [Create a custom OU](create-ou.md). You can delegate permissions to manage this custom OU to users within the managed domain.
1. [Domain-join the virtual machines][create-join-windows-vm], both the one that runs the web app and the one that runs the web API, to the managed domain. Create these computer accounts in the custom OU from the previous step.

    > [!NOTE]
    > The computer accounts for the web app and the web API must be in a custom OU where you have permissions to configure resource-based KCD. You can't configure resource-based KCD for a computer account in the built-in *AAD DC Computers* container.

1. Finally, configure resource-based KCD using the [Set-ADComputer][Set-ADComputer] PowerShell cmdlet. Run the following cmdlets from your domain-joined management VM, signed in as a user account that's a member of the *Azure AD DC administrators* group. Specify your own computer names as needed:

    ```powershell
    $ImpersonatingAccount = Get-ADComputer -Identity contoso-webapp.aaddscontoso.com
    Set-ADComputer contoso-api.aaddscontoso.com -PrincipalsAllowedToDelegateToAccount $ImpersonatingAccount
    ```

## <a name="configure-resource-based-kcd-for-a-user-account"></a>Configure resource-based KCD for a user account

In this scenario, let's assume you have a web app that runs as a service account named *appsvc*. The web app needs to access a web API that runs as a service account named *backendsvc* in the context of domain users. Complete the following steps to configure this scenario:

1. [Create a custom OU](create-ou.md). You can delegate permissions to manage this custom OU to users within the managed domain.
1. [Domain-join the virtual machines][create-join-windows-vm] that run the back-end web API/resource to the managed domain. Create its computer account within the custom OU.
1. Create the service account (for example, *appsvc*) used to run the web app within the custom OU.

    > [!NOTE]
    > Again, the computer account for the web API VM, and the service account for the web app, must be in a custom OU where you have permissions to configure resource-based KCD. You can't configure resource-based KCD for accounts in the built-in *AAD DC Computers* or *AAD DC Users* containers. This also means that you can't use user accounts synchronized from Azure AD to configure resource-based KCD. You must create and use service accounts created in Azure AD DS.

1. Finally, configure resource-based KCD using the [Set-ADUser][Set-ADUser] PowerShell cmdlet. Run the following cmdlets from your domain-joined management VM, signed in as a user account that's a member of the *Azure AD DC administrators* group. Specify your own service names as needed:

    ```powershell
    $ImpersonatingAccount = Get-ADUser -Identity appsvc
    Set-ADUser backendsvc -PrincipalsAllowedToDelegateToAccount $ImpersonatingAccount
    ```

## <a name="next-steps"></a>Next steps

For more information on how delegation works in Active Directory Domain Services, see [Kerberos Constrained Delegation Overview][kcd-technet].

<!-- INTERNAL LINKS -->
[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
[create-azure-ad-ds-instance]: tutorial-create-instance.md
[create-join-windows-vm]: join-windows-vm.md
[tutorial-create-management-vm]: tutorial-create-management-vm.md
[Set-ADComputer]: /powershell/module/addsadministration/set-adcomputer
[Set-ADUser]: /powershell/module/addsadministration/set-aduser

<!-- EXTERNAL LINKS -->
[kcd-technet]: /previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj553400(v=ws.11)
509
0.804707
swe_Latn
0.998964
0c008d05549730bb23e85fba053c97a3f72e28e7
5,333
md
Markdown
docs/installation-and-operations/configuration/ssl/README.md
phoexer/openproject
72abb4524949bbe296a04614d34b545423810cbe
[ "CC-BY-3.0" ]
1
2021-05-01T02:11:28.000Z
2021-05-01T02:11:28.000Z
docs/installation-and-operations/configuration/ssl/README.md
phoexer/openproject
72abb4524949bbe296a04614d34b545423810cbe
[ "CC-BY-3.0" ]
null
null
null
docs/installation-and-operations/configuration/ssl/README.md
phoexer/openproject
72abb4524949bbe296a04614d34b545423810cbe
[ "CC-BY-3.0" ]
1
2021-09-09T05:38:03.000Z
2021-09-09T05:38:03.000Z
---
sidebar_navigation:
  title: Configuring SSL
  priority: 9
---

# Configuring SSL

## Package-based installation (DEB/RPM)

SSL configuration can be applied on the first installation, or at any time by reconfiguring the application with:

```bash
sudo openproject reconfigure
```

You will be prompted with the same dialogs as in the [initial configuration](../../installation/packaged/#step-2-apache2-web-server) guide. This assumes that you select the **install** option when the **server/autoinstall** dialog appears, and that you have certificate and key files available on your server at a path you know.

## Docker-based installation

The current Docker image does not support SSL by default. Usually you would already have an existing Apache or NginX server on your host, with SSL configured, which you could use to set up a simple ProxyPass rule to direct traffic to the container, or you could use one of the myriad of other tools (e.g. Traefik) offered by the Docker community to handle this aspect.

If you really want to enable SSL from within the container, you could try mounting a custom apache2 directory when you launch the container with `-v my/apache2/conf:/etc/apache2`. This would entirely replace the configuration we're using.

## Create a free SSL certificate using Let's Encrypt

You can get an SSL certificate for free via Let's Encrypt. This requires your OpenProject server to be reachable using a domain name (e.g. openproject.mydomain.com), with port 443 or 80 open. If you don't have anything running on port 80 or 443, we recommend that you first configure OpenProject without SSL support, and only then execute the steps outlined below.

1. Go to [certbot.eff.org](https://certbot.eff.org), and select "Apache" and your Linux distribution (e.g. Ubuntu 20.04) to get access to the installation instructions for your specific OS.

2. Follow the installation instructions to get the `certbot` CLI installed.

3. Run the `certbot` CLI to generate the certificate (and only the certificate):

    sudo certbot certonly --apache

    The CLI will ask for a few details and to agree to the Let's Encrypt terms of usage. Then it will perform the Let's Encrypt challenge and finally issue a certificate file and a private key file if the challenge succeeded.

    At the end, it will store the certificate (`fullchain.pem`) and private key (`privkey.pem`) under `/etc/letsencrypt/live/openproject.mydomain.com/`. You can now configure OpenProject to use them by running `openproject reconfigure`: hit ENTER until you get to the SSL wizard, and select "Yes" when the wizard asks for SSL support:

    * Enter the `/etc/letsencrypt/live/openproject.mydomain.com/fullchain.pem` path when asked for the `server/ssl_cert` detail.
    * Enter the `/etc/letsencrypt/live/openproject.mydomain.com/privkey.pem` path when asked for the `server/ssl_key` detail.
    * Enter the `/etc/letsencrypt/live/openproject.mydomain.com/fullchain.pem` path (same as `server/ssl_cert`) when asked for the `server/ssl_ca` detail.

    Hit ENTER, and after the wizard is finished your OpenProject installation should be accessible using `https://openproject.mydomain.com`.

4. Let's Encrypt certificates are only valid for 90 days. An entry in your OS crontab should have automatically been added when `certbot` was installed. 
You can optionally confirm that the renewal will work by issuing the following command in dry-run mode: sudo certbot renew --dry-run ## External SSL termination If you terminate SSL externally<sup>1</sup> before the request hits the OpenProject server, you need to let the OpenProject server know that the request being handled is https, even though SSL was terminated before. This is the most common source of problems in OpenProject when using an external server that terminates SSL. Please ensure that if you're proxying to the OpenProject server, you set the `Host` header to the internal server. This ensures that the host name of the outer request gets forwarded to the internal server. Otherwise you might see redirects in your browser to the internal host that OpenProject is running on. On your outer proxying server, set these directives: - In Apache2, set the `ProxyPreserveHost On` directive - In NginX, use the following value: `proxy_set_header X-Forwarded-Host $host:$server_port;` If you're terminating SSL on the outer server, you need to set the `X-Forwarded-Proto https` header to let OpenProject know that the request is HTTPS, even though it's been terminated earlier in the request on the outer server. - In Apache2, use `RequestHeader set "X-Forwarded-Proto" https` - In Nginx, use `proxy_set_header X-Forwarded-Proto https;` Finally, to let OpenProject know that it should create links with 'https' when no request is available (for example, when sending emails), you need to set the Protocol setting of OpenProject to `https`. You will find this setting in your system settings, or you can set it via the rails console with `Setting.protocol = 'https'` _<sup>1</sup> In the packaged installation this means you selected "no" when asked for SSL in the configuration wizard but at the same time take care of SSL termination elsewhere. This can be a manual Apache setup on the same server (not recommended) or an external server, for instance._
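To tie the directives above together, here is a minimal sketch of an SSL-terminating outer Apache vhost. The backend address `127.0.0.1:8080` and the file names are assumptions, not taken from this guide; adapt them to your installation.

```bash
# Sketch: outer Apache vhost that terminates SSL and proxies to OpenProject,
# combining the ProxyPreserveHost / X-Forwarded-Proto directives named above.
# Backend address and cert paths are assumptions; adjust to your setup.
sudo tee /etc/apache2/sites-available/openproject-proxy.conf >/dev/null <<'EOF'
<VirtualHost *:443>
  ServerName openproject.mydomain.com
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/openproject.mydomain.com/fullchain.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/openproject.mydomain.com/privkey.pem
  ProxyPreserveHost On
  RequestHeader set "X-Forwarded-Proto" https
  ProxyPass / http://127.0.0.1:8080/
  ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
EOF
sudo a2enmod ssl proxy proxy_http headers
sudo a2ensite openproject-proxy
sudo systemctl reload apache2
```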
62.011628
330
0.779861
eng_Latn
0.998518
0c00aa5b0e0514a4daece85e8b0234a8bd387dcd
4,192
md
Markdown
src/base/generate-images-readme.md
metinbaris/karrot-frontend
68c07282c02e0d651c599b6fdef2837ecf32f2d9
[ "MIT" ]
273
2017-09-11T16:12:29.000Z
2021-11-28T10:13:19.000Z
src/base/generate-images-readme.md
metinbaris/karrot-frontend
68c07282c02e0d651c599b6fdef2837ecf32f2d9
[ "MIT" ]
1,802
2017-09-10T21:19:37.000Z
2021-11-29T16:52:50.000Z
src/base/generate-images-readme.md
metinbaris/karrot-frontend
68c07282c02e0d651c599b6fdef2837ecf32f2d9
[ "MIT" ]
175
2017-10-13T17:12:55.000Z
2021-11-25T17:05:21.000Z
<!-- View this file with embedded videos on GitHub: --> <!-- https://github.com/yunity/karrot-frontend/blob/master/src/base/generate-images-readme.md --> # Generate Landingpage Images **TL;DR** - put new images in designated `_raws` folder - run `yarn generate-landing-images` - if you want to control the ordering of some images in a folder, go to `Landing.vue` and update `sortingArr` for that folder <br /> ## Requirements There is a `_raws` folder under `src/base/pages/images` with four subfolders where screenshots must be put. Requirements of each folder inside `_raws` (a shell sketch of this layout appears at the end of this document): - `app-screenshots-browser` - must contain exactly ***one*** image - file-type: `png` - file-name: `karrot-screenshot-browser.png` - image dimensions: 2644 x 1726 - `app-screenshots-phone` - must contain exactly ***one*** image - file-type: `png` - file-name: `karrot-screenshot-phone.png` - image dimensions: 750 x 1624 - `feature-screenshots` - must contain exactly ***three*** images - file-type: `png` - file-names: - `karrot-feature-activities.png` - `karrot-feature-groups.png` - `karrot-feature-offers.png` - `random-imgs` - no specific requirements - images will be cropped to square -> pre-crop the image to the desired view - if an image is smaller than 600x600, it will get scaled-up to that size <br /> ## Taking screenshots with browser devtools Screenshots for all folders (except `random-imgs`) have to be taken with browser devtools. Here is an example of how it has to be done in Chrome. Firefox is about the same. https://user-images.githubusercontent.com/23213583/119812392-67513f00-bee8-11eb-9036-23b3dbab3d63.mp4 <i>Example: taking screenshot for "karrot-screenshot-browser.png"</i> <br /> <strong style="color: red;">NOTE:</strong> For each screenshot, make sure to - first go into device mode - select "Responsive" profile (not iPhone 6 etc) - set DPR to 2 <br /> **`app-screenshots-browser`** (see video above) - set width/height to 1322 x 863 - take screenshot - width/height of generated screenshot should be 2644 x 1726 **`app-screenshots-phone`** - set width/height to 375 x 812 - take screenshot - width/height of generated screenshot should be 750 x 1624 **`feature-screenshots`** - `karrot-feature-activities.png` - set width/height to 1600 x 1600 - follow these steps: https://user-images.githubusercontent.com/23213583/119814776-14c55200-beeb-11eb-9f19-fb5c81ed50e3.mp4 - `karrot-feature-groups.png` - set width/height to 1200 x 773 - follow these steps: https://user-images.githubusercontent.com/23213583/119814920-42120000-beeb-11eb-89ce-260ec3aa9e17.mp4 https://user-images.githubusercontent.com/23213583/119814997-5bb34780-beeb-11eb-9ec6-0cb9b08757e9.mp4 - `karrot-feature-offers.png` - set width/height to 860 x 820 - follow these steps: https://user-images.githubusercontent.com/23213583/119815292-afbe2c00-beeb-11eb-881c-185383db630e.mp4 <br /> ## Run the script Run this command after putting your generated screenshots in the corresponding folder(s) under `_raws`: ```sh yarn generate-landing-images ``` Generated images are saved to the same folder structure as in `_raws`, but one level higher (`src/base/pages/images`). <br /> ## Updating `Landing.vue` Manual action in `Landing.vue` is required for some cases: ### 1. Ordering If you want to control the ordering of `feature-screenshots` or `random-imgs`, go to the `enrichImages()` method and update `sortingArr` for that folder. Array elements should be the image names (without extension) in the desired order. ### 2. 
Additional properties for feature screenshots In the `enrichImages()` method the two properties `ident` and `extendOffsetPx` get added to each feature screenshot object. - `ident` *(String)* is needed to grab the feature's title, description etc from locales - `extendOffsetPx` *(String)* is a value that describes by how many pixels the feature screenshot should be additionally pushed off the screen. Some screenshots may not look completely cut-off visually. With `extendOffsetPx` this illusion can be achieved. **NOTE:** apply the same offset to all images on the same site.
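As a quick recap of the folder requirements above, here is a minimal shell sketch of the expected `_raws` layout. The target folder and file names come from the requirements; the source paths (`~/shots/...`) are assumptions for illustration only.

```bash
# Sketch: create the _raws structure and drop in the required files.
# Image dimensions must match the requirements listed above.
cd src/base/pages/images
mkdir -p _raws/{app-screenshots-browser,app-screenshots-phone,feature-screenshots,random-imgs}
cp ~/shots/karrot-screenshot-browser.png _raws/app-screenshots-browser/   # 2644 x 1726
cp ~/shots/karrot-screenshot-phone.png   _raws/app-screenshots-phone/     # 750 x 1624
cp ~/shots/karrot-feature-{activities,groups,offers}.png _raws/feature-screenshots/
cp ~/shots/*.jpg _raws/random-imgs/      # anything goes; pre-crop to square
yarn generate-landing-images
```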
34.644628
319
0.739027
eng_Latn
0.956638
0c01c74903346f06639e7a52b7877500cd7c81eb
108
md
Markdown
README.md
kylekirkby/Save-Code
b88a8fec5cdc20e81e890af75c8e42957cbc974e
[ "Apache-2.0" ]
null
null
null
README.md
kylekirkby/Save-Code
b88a8fec5cdc20e81e890af75c8e42957cbc974e
[ "Apache-2.0" ]
null
null
null
README.md
kylekirkby/Save-Code
b88a8fec5cdc20e81e890af75c8e42957cbc974e
[ "Apache-2.0" ]
null
null
null
# Save-Code A code-saving website that includes Facebook API integration and the open-source GeSHi code formatter.
36
95
0.824074
eng_Latn
0.902472
0c024ad753817deffdb0e531b7573cd3eec2c16d
5,258
md
Markdown
eventListenerService/README.md
futurewei-cloud/caerus-udf
6863c071dcfd3d5162a3b5e1cd41aaa13fedb5ab
[ "Apache-2.0" ]
1
2022-02-01T16:56:34.000Z
2022-02-01T16:56:34.000Z
eventListenerService/README.md
open-infrastructure-labs/caerus-udf
fb96928ab0d240787307731927f98def18a7cdca
[ "Apache-2.0" ]
null
null
null
eventListenerService/README.md
open-infrastructure-labs/caerus-udf
fb96928ab0d240787307731927f98def18a7cdca
[ "Apache-2.0" ]
1
2021-05-25T19:47:47.000Z
2021-05-25T19:47:47.000Z
# Caerus UDF Event Notification Service The Caerus Event Listener Service is a storage-side REST service that listens to registered streaming sources (Redis for now; other sources like Kafka, RabbitMQ etc. can be added if needed). Upon an event, it reacts and automatically invokes the related UDFs for certain storage actions. Many modern storage systems and cloud storage backends have started to implement a feature called “bucket notification”; vendors and open-source projects like Amazon S3, GCP Storage (Google Cloud Storage), IBM Cloud Object Storage, Ceph, MinIO etc. all support it now. This notification feature enables you to receive notifications when certain events (such as PUT, GET, COPY, DELETE etc.) happen in your bucket. Currently people use this feature mainly for lightweight use cases such as alerting; with Caerus storage-side UDF pushdown, we can now support such use cases in a fully automatic fashion, even when they are very data-intensive. Bucket notification has two notable characteristics: 1. It is currently only supported in object storage systems/cloud object storage; we haven’t seen it in file systems (like HDFS) or block systems. That doesn’t mean such support can’t be added to file/block storage; in fact, such a feature has been requested and experimental implementations exist (for example, hooking up HDFS with Apache NiFi and an event target like Redis/Kafka to support an HDFS version of this feature). We can integrate this feature into HDFS as needed. 2. When people use the bucket notification feature, they normally take advantage of the rich set of “bucket” configurations (normally called bucket policies; see the Amazon S3 bucket policy examples below) to create specialized buckets. For example, a bucket normally only contains certain object types (image files only), or a bucket only belongs to a certain group/user/organization etc. Amazon S3 bucket policy examples: 1. https://aws.amazon.com/premiumsupport/knowledge-center/s3-allow-certain-file-types/ 2. https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html When a registered event happens, the storage system automatically sends the triggered events to the configured notification targets. Storage systems can support notification targets like AMQP, Redis, Kafka, ElasticSearch, NATS, Webhooks, NSQ, MQTT, MySQL and PostgreSQL. In the Caerus project, we use MinIO Bucket Notifications with Redis as the notification target for our first storage system integration; more storage systems and different notification targets can be added as needed. 
# Getting Started The steps below show how to use this bucket notification: ## Step 1: Start the notification target (Redis) and the Caerus Event Notification Service ``` > cd $CAERUS_UDF_HOME/registry/bitnami-docker-redis/ > docker-compose -f docker-compose-replicaset.yml up -d > mvn clean package > java -jar target/EventListenerService-0.0.1-SNAPSHOT.jar ``` ## Step 2: Add the notification target to the storage system Follow the link below: https://docs.min.io/docs/minio-bucket-notification-guide.html, then run these commands: ``` > mc admin config set minio/ notify_redis:1 address="172.18.0.2:6379" format="namespace" key="bucketevents" password="my_password" queue_dir="/home/ubuntu/redisEvents" queue_limit="10000" > mc admin config get minio/ notify_redis > mc mb minio/imagesbucket > mc event add minio/imagesbucket arn:minio:sqs::1:redis --suffix .jpg > mc event list minio/imagesbucket arn:minio:sqs::1:redis s3:ObjectCreated:*,s3:ObjectRemoved:* Filter: suffix=”.jpg” ``` ## Step 3: Enable bucket notification in Redis Set up Redis keyspace notifications by following this: https://redis.io/topics/notifications (note: this step can be skipped if we don't need to change the config at runtime; we can just set it in the Redis docker config file.) ``` root@ubuntu1804:/home/ubuntu# redis-cli -h 172.18.0.2 -a my_password Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe. 172.18.0.2:6379> config set "notify-keysapce-events" "EKsglh" (error) ERR Unsupported CONFIG parameter: notify-keysapce-events 172.18.0.2:6379> config set "notify-keyspace-events" "EKsglh" OK 172.18.0.2:6379> config get "notify-keyspace-events" 1) "notify-keyspace-events" 2) "glshKE" 172.18.0.2:6379> ``` ## Step 4: Test the notification target Start the redis-cli Redis client program to inspect the contents in Redis. Run the monitor Redis command. This prints each operation performed on Redis as it occurs. ``` redis-cli -a yoursecret 127.0.0.1:6379> monitor OK ``` Set up the serverless function framework: see details in [faas](../faas) Copy an image file from local to storage (MinIO in this case) ``` mc cp /home/ubuntu/images/new/sample.jpg minio/imagesbucket/ /home/ubuntu/images/new/sample.jpg: 2.44 MiB / 2.44 MiB ▓▓▓▓┃ 76.35 MiB/s 0sroot@ubuntu1804:/home/ubuntu/caerus-udf/examples/java/thumbnail# ``` Check the storage: the thumbnail file will have been created in a bucket called thumbnailsbucket ``` root@ubuntu1804:/home/ubuntu/caerus-udf/examples/java/thumbnail# mc ls minio/thumbnailsbucket [2020-11-10 13:53:53 EST] 6.2KiB sample_thumbnail.png root@ubuntu1804:/home/ubuntu/caerus-udf/examples/java/thumbnail# ```
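In addition to `monitor`, a quick way to watch just the bucket events is to subscribe to the keyspace channels enabled in Step 3. This is a sketch; the host, password, and the `bucketevents` key name come from the steps above, and the exact channel pattern depends on your Redis configuration.

```bash
# Watch keyspace/keyevent notifications for the configured key.
redis-cli -h 172.18.0.2 -a my_password psubscribe '__key*__:*'
# In "namespace" format, MinIO mirrors bucket state into a Redis hash under
# the configured key, so the current objects can be listed directly:
redis-cli -h 172.18.0.2 -a my_password hgetall bucketevents
```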
69.184211
632
0.783188
eng_Latn
0.953682
0c0253a71a93398bceabaf84ebaa0a05d3eb7a07
6,334
md
Markdown
content/en/docs/III/tables/_index.md
MIT-LCP/MIMICwebsite
f24ea6c1a77197fe52ba414690bef0c173b82dcf
[ "MIT" ]
58
2015-10-01T21:40:21.000Z
2022-03-04T07:43:54.000Z
content/en/docs/III/tables/_index.md
MIT-LCP/MIMICwebsite
f24ea6c1a77197fe52ba414690bef0c173b82dcf
[ "MIT" ]
137
2015-09-02T02:15:49.000Z
2022-03-15T17:34:45.000Z
content/en/docs/III/tables/_index.md
MIT-LCP/MIMICwebsite
f24ea6c1a77197fe52ba414690bef0c173b82dcf
[ "MIT" ]
87
2015-11-11T22:20:37.000Z
2022-02-18T16:25:28.000Z
+++ date = "2015-09-01T19:34:46-04:00" title = "Overview of the MIMIC-III data" linktitle = "Tables" weight = 20 toc = "true" +++ MIMIC is a relational database containing tables of data relating to patients who stayed within the intensive care units at Beth Israel Deaconess Medical Center. A table is a data storage structure which is similar to a spreadsheet: each column contains consistent information (e.g., patient identifiers), and each row contains an instantiation of that information (e.g. a row could contain the integer 340 in the patient identifier column which would imply that the row's patient identifier is 340). The tables are linked by identifiers which usually have the suffix "ID". For example `HADM_ID` refers to a unique hospital admission and `SUBJECT_ID` refers to a unique patient. One exception is `ROW_ID`, which is simply a row identifier unique to that table. Tables pre-fixed with "D\_" are dictionaries and provide definitions for identifiers. For example, every row of OUTPUTEVENTS is associated with a single `ITEMID` which represents the concept measured, but it does *not* contain the actual name of the drug. By joining OUTPUTEVENTS and D_ITEMS on `ITEMID`, it is possible to identify what concept a given `ITEMID` represents. ## List of tables The following tables are used to define and track patient stays: - **ADMISSIONS**: Every unique hospitalization for each patient in the database (defines `HADM_ID`) - **CALLOUT**: Information regarding when a patient was cleared for ICU discharge and when the patient was actually discharged - **ICUSTAYS**: Every unique ICU stay in the database (defines `ICUSTAY_ID`) - **PATIENTS**: Every unique patient in the database (defines `SUBJECT_ID`) - **SERVICES**: The clinical service under which a patient is registered - **TRANSFERS**: Patient movement from bed to bed within the hospital, including ICU admission and discharge Each `ICUSTAY_ID` corresponds to a single `HADM_ID` and a single `SUBJECT_ID`. Each `HADM_ID` corresponds to a single `SUBJECT_ID`. A single `SUBJECT_ID` can correspond to multiple `HADM_ID` (multiple hospitalizations of the same patient), and multiple `ICUSTAY_ID` (multiple ICU stays either within the same hospitalization, or across multiple hospitalizations, or both). The following tables contain data collected in the critical care unit: - **CAREGIVERS**: Every caregiver who has recorded data in the database (defines `CGID`) - **CHARTEVENTS**: All charted observations for patients - **DATETIMEEVENTS**: All recorded observations which are dates, for example time of dialysis or insertion of lines. - **INPUTEVENTS_CV**: Intake for patients monitored using the Philips CareVue system while in the ICU - **INPUTEVENTS_MV**: Intake for patients monitored using the iMDSoft Metavision system while in the ICU - **NOTEEVENTS**: Deidentified notes, including nursing and physician notes, ECG reports, imaging reports, and discharge summaries. - **OUTPUTEVENTS**: Output information for patients while in the ICU - **PROCEDUREEVENTS_MV**: Patient procedures for the subset of patients who were monitored in the ICU using the iMDSoft MetaVision system. 
The following tables contain data collected in the hospital record system: - **CPTEVENTS**: Procedures recorded as Current Procedural Terminology (CPT) codes - **DIAGNOSES_ICD**: Hospital-assigned diagnoses, coded using the International Statistical Classification of Diseases and Related Health Problems (ICD) system - **DRGCODES**: Diagnosis Related Groups (DRG), which are used by the hospital for billing purposes. - **LABEVENTS**: Laboratory measurements for patients both within the hospital and in outpatient clinics - **MICROBIOLOGYEVENTS**: Microbiology measurements and sensitivities from the hospital database - **PRESCRIPTIONS**: Medications ordered, and not necessarily administered, for a given patient - **PROCEDURES_ICD**: Patient procedures, coded using the International Statistical Classification of Diseases and Related Health Problems (ICD) system The following tables are dictionaries: - **D_CPT**: High-level dictionary of Current Procedural Terminology (CPT) codes - **D_ICD_DIAGNOSES**: Dictionary of International Statistical Classification of Diseases and Related Health Problems (ICD) codes relating to diagnoses - **D_ICD_PROCEDURES**: Dictionary of International Statistical Classification of Diseases and Related Health Problems (ICD) codes relating to procedures - **D_ITEMS**: Dictionary of `ITEMID`s appearing in the MIMIC database, except those that relate to laboratory tests - **D_LABITEMS**: Dictionary of `ITEMID`s in the laboratory database that relate to laboratory tests ## Derived tables The MIMIC-II database contained a variety of derived tables which simplified use of the database. For example, a commonly used table was the ICUSTAY_DETAIL table, which provided additional information summarizing a patient's ICU stay. The database also contained derived parameters commonly required by studies, such as severity scores. In MIMIC-III, we have made a conscious decision to *not* include any derived tables or calculated parameters as far as is possible. Instead, we encourage the community to produce and share scripts which can be run to create these tables or parameters. This has many advantages: it keeps the distinction between raw data and calculated data, it encourages users to validate the scripts which derive the data, and allows for as many scripts as is conceivable without cluttering the database for all users. We have provided a set of scripts at the mimic-code repository, which can be found here: http://github.com/MIT-lcp/mimic-code We will continue to update this repository both with code which we produce as well as with code produced by the community. We encourage users to make pull requests (a feature of git which allows us to integrate community-created code) or raise issues regarding code found in the repository. The creation of an active international community building openly available code for capturing a variety of concepts will increase the speed of research on MIMIC-III exponentially - we hope you take the time to investigate the mimic-code repository for anything which may be of use to you, and further contribute any work of your own!
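Since the tables are linked by `ITEMID`-style identifiers as described above, here is a small sketch of the dictionary join between OUTPUTEVENTS and D_ITEMS. It assumes a local PostgreSQL load of MIMIC-III; the database name and `mimiciii` schema are common conventions, not stated here, so adjust them to your setup.

```bash
# Resolve OUTPUTEVENTS ITEMIDs to their human-readable labels via D_ITEMS,
# as described in the overview above. Connection details are assumptions.
psql mimic <<'SQL'
SET search_path TO mimiciii;
SELECT oe.subject_id, oe.charttime, oe.value, di.label
FROM outputevents oe
JOIN d_items di ON oe.itemid = di.itemid
LIMIT 10;
SQL
```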
97.446154
929
0.793338
eng_Latn
0.997772
0c028857933968b1fdf7424d14f4808030de9c1c
339
md
Markdown
README.md
aguzev/netPerformance
2d64bc28dc423c530803c657b5ab01d0188bb050
[ "MIT" ]
null
null
null
README.md
aguzev/netPerformance
2d64bc28dc423c530803c657b5ab01d0188bb050
[ "MIT" ]
null
null
null
README.md
aguzev/netPerformance
2d64bc28dc423c530803c657b5ab01d0188bb050
[ "MIT" ]
null
null
null
# .NET Performance tests .NET Core 3.1 Console application, Release build ### Time ### |Method name|Time (ms)|%| |-|-:|-:| |StringInterpolation|10 813|9.53%| |StringBuilderAppend|5 427|4.78%| ### Memory ### <img src=".attachments/MemoryFootprint-1.jpg" alt="Memory footprint" title="Memory footprint" style="zoom:150%;" />
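A sketch of how these Release-build numbers would be reproduced, matching the ".NET Core 3.1 Console application, Release build" note above; the project path is an assumption.

```bash
# Build and run the console benchmark in Release configuration.
dotnet run -c Release --project ./netPerformance
```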
24.214286
116
0.657817
kor_Hang
0.365689
0c032f8b89cf6bb01d9f0dd652779cdff8027811
301
md
Markdown
docs/README.md
gaotang/workwebsite
3a6b85c2aed0f71dc0f02ebd8a33cf832cea1cdc
[ "MIT" ]
null
null
null
docs/README.md
gaotang/workwebsite
3a6b85c2aed0f71dc0f02ebd8a33cf832cea1cdc
[ "MIT" ]
null
null
null
docs/README.md
gaotang/workwebsite
3a6b85c2aed0f71dc0f02ebd8a33cf832cea1cdc
[ "MIT" ]
null
null
null
--- home: true heroImage: /logo.png actionText: Get Started → actionLink: /public.md features: - title: Vue multi-page details: A multi-page scaffold built with vue + webpack + babel; development and debugging are very convenient. - title: H5+ details: The powerful APIs provided by H5+ quickly give your app native capabilities. - title: Debugging details: Debug efficiently with the VConsole tool. footer: MIT Licensed | Copyright © 2018 ---
20.066667
50
0.737542
eng_Latn
0.233646
0c034182b3fe791fb37ce9d0abe2165d9e8baba8
268
md
Markdown
content/docs/passive/_index.md
konilabs/konidoctmp
f0ad542b387832975595628c4a112b7a814be035
[ "Apache-2.0" ]
null
null
null
content/docs/passive/_index.md
konilabs/konidoctmp
f0ad542b387832975595628c4a112b7a814be035
[ "Apache-2.0" ]
null
null
null
content/docs/passive/_index.md
konilabs/konidoctmp
f0ad542b387832975595628c4a112b7a814be035
[ "Apache-2.0" ]
null
null
null
+++ title = "Passive" linktitle = "Passive" date = 2020-06-19T14:15:00+02:00 tags= [] keywords = ["electronics", "passive", "resistor", "inductor", "capacitors"] description = "Passive components design help like resistors, inductors or capacitors" weight = 0 +++
19.142857
86
0.69403
eng_Latn
0.693454
0c03aaa54c960cb14dca81d0723a0fe923f36a7e
672
md
Markdown
dramatic-dragonflies/tools/README.md
Vthechamp22/summer-code-jam-2021
0a8bf1f22f6c73300891fd779da36efd8e1304c1
[ "MIT" ]
40
2020-08-02T07:38:22.000Z
2021-07-26T01:46:50.000Z
dramatic-dragonflies/tools/README.md
Vthechamp22/summer-code-jam-2021
0a8bf1f22f6c73300891fd779da36efd8e1304c1
[ "MIT" ]
134
2020-07-31T12:15:45.000Z
2020-12-13T04:42:19.000Z
dramatic-dragonflies/tools/README.md
Vthechamp22/summer-code-jam-2021
0a8bf1f22f6c73300891fd779da36efd8e1304c1
[ "MIT" ]
101
2020-07-31T12:00:47.000Z
2021-11-01T09:06:58.000Z
# Utility Tools Here is a short description of all the tools in this directory: * `build_image.bat` / `build_image.sh` : Main script used to generate a ROM. It boots up the `build_image.Dockerfile`, runs `make_image.sh` inside, and extracts the generated `rom.tar.gz` file. * `build_image.Dockerfile` : Simple container in charge of providing a reproducible build environment for the ROM. It also downloads the Arch ISO. * `make_image.sh` : Script run inside the build container in order to generate the ROM. * `process_watch` : opens a process in a pty and passes its stdio through, allowing pipe-based IPC while still getting the advantages of a pty
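A sketch of the intended flow under the descriptions above. The script location, the `rom.tar.gz` output name, and especially the `process_watch` invocation are assumptions inferred from the text, not documented usage; check the scripts themselves.

```bash
# Generate the ROM via the Docker-based build, then inspect the result
# (output name taken from the description above; paths are assumptions).
./build_image.sh
tar -tzf rom.tar.gz | head
# Hypothetical process_watch usage: give a command a pty while driving it
# through ordinary pipes, per the description above.
echo 'ls' | ./process_watch /bin/sh | tee out.log
```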
42
87
0.763393
eng_Latn
0.997467
0c045aa9656953795e2816a9b6cc5ac05cd7e2e2
2,621
md
Markdown
dynamics-nav/api-reference/v2.0/api/dynamics_timeRegistrationEntry_update.md
MatthiasSchaefer/navdevitpro-content-pr
c6ddb76b8815b1f0ae10f6b8981cc12832912975
[ "CC-BY-4.0", "MIT" ]
null
null
null
dynamics-nav/api-reference/v2.0/api/dynamics_timeRegistrationEntry_update.md
MatthiasSchaefer/navdevitpro-content-pr
c6ddb76b8815b1f0ae10f6b8981cc12832912975
[ "CC-BY-4.0", "MIT" ]
null
null
null
dynamics-nav/api-reference/v2.0/api/dynamics_timeRegistrationEntry_update.md
MatthiasSchaefer/navdevitpro-content-pr
c6ddb76b8815b1f0ae10f6b8981cc12832912975
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Patch timeRegistrationEntries | Microsoft Docs description: Patch a timeRegistrationEntry in Dynamics 365 Business Central. author: SusanneWindfeldPedersen ms.service: "dynamics365-business-central" ms.topic: article ms.devlang: na ms.tgt_pltfrm: na ms.workload: na ms.date: 11/11/2020 ms.author: solsen --- # Update timeRegistrationEntries [!INCLUDE[api_v2_note](../../includes/api_v2_note.md)] Update a timeRegistrationEntry in [!INCLUDE[d365fin_long_md](../../includes/d365fin_long_md.md)]. ## HTTP request Replace the URL prefix for [!INCLUDE[d365fin_long_md](../../includes/d365fin_long_md.md)] depending on environment following the [guideline](../../v2.0/endpoints-apis-for-dynamics.md). ``` PATCH businesscentralPrefix/companies({companyId})/timeRegistrationEntries({timeregistrationId}) PATCH businesscentralPrefix/companies({companyId})/employees({employeeId})/timeRegistrationEntries({timeregistrationId}) ``` ## Request headers |Header |Value | |--------------|-------------------------| |Authorization |Bearer {token}. Required.| |Content-Type |application/json | |If-Match |Required. If this header is included and the ETag provided does not match the current tag on the timeRegistrationEntry, it will not be updated. | ## Example **Request** Here is an example of the request. ```json PATCH https://{businesscentralPrefix}/api/v2.0/companies({id})/timeRegistrationEntries({timeregistrationId}) Content-type: application/json { "quantity": 8 } ``` **Response** Here is an example of the response. > [!NOTE] > The response object shown here may be truncated for brevity. All of the properties will be returned from an actual call. ```json HTTP/1.1 200 OK Content-type: application/json { "id": "1a8b1fec-c0e3-ea11-aa60-000d3ad7cacb", "employeeId": "258bb9c0-44e3-ea11-bb43-000d3a2feca1", "employeeNumber": "AH", "jobId": "00000000-0000-0000-0000-000000000000", "jobNumber": "", "jobTaskNumber": "", "absence": "", "lineNumber": 10000, "date": "2019-02-02", "quantity": 8, "status": "Open", "unitOfMeasureId": "56a6738a-44e3-ea11-bb43-000d3a2feca1", "unitOfMeasureCode": "HOUR", "lastModifiedDateTime": "2020-08-21T15:13:58.87Z" } ``` ## See also [Tips for working with the APIs](/dynamics365/business-central/dev-itpro/developer/devenv-connect-apps-tips) [Error Codes](../dynamics_error_codes.md) [timeRegistrationEntries](../resources/dynamics_timeRegistrationEntry.md) [Get timeRegistrationEntries](dynamics_timeRegistrationEntry_get.md) [Post timeRegistrationEntries](dynamics_timeRegistrationEntry_create.md) [Delete timeRegistrationEntries](dynamics_timeRegistrationEntry_delete.md)
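For illustration, the same PATCH issued with curl, following the request and headers documented above. The `$businesscentralPrefix`, IDs, token, and ETag values are placeholders you must supply.

```bash
# Update the quantity of a time registration entry, per the example request.
curl -X PATCH \
  "https://$businesscentralPrefix/api/v2.0/companies($companyId)/timeRegistrationEntries($timeregistrationId)" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "If-Match: $ETAG" \
  -d '{"quantity": 8}'
```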
30.476744
184
0.722243
yue_Hant
0.357348
0c045ccf58c3ad9338089ec6d786057c0de11d76
3,768
md
Markdown
_posts/java/A/AccessibleRole-javafx-scene/2021-01-01-AccessibleRole.md
w3api/w3api
681462ece7265723031a88bec5285209d0e125bf
[ "MIT" ]
1
2021-09-15T20:32:10.000Z
2021-09-15T20:32:10.000Z
_posts/java/A/AccessibleRole-javafx-scene/2021-01-01-AccessibleRole.md
w3api/w3api
681462ece7265723031a88bec5285209d0e125bf
[ "MIT" ]
20
2021-01-17T01:13:46.000Z
2021-06-20T21:16:02.000Z
_posts/java/A/AccessibleRole-javafx-scene/2021-01-01-AccessibleRole.md
w3api/w3api
681462ece7265723031a88bec5285209d0e125bf
[ "MIT" ]
2
2021-09-15T20:32:08.000Z
2022-02-20T16:57:46.000Z
--- title: AccessibleRole permalink: /Java/AccessibleRole-javafx-scene/ date: 2021-01-11 key: Java.A.AccessibleRole-javafx-scene category: Java tags: ['java se', 'javafx.scene', 'javafx.graphics', 'enumerado java', 'JavaFX 8.0'] sidebar: nav: java --- ## Description {{site.data.Java.A.AccessibleRole-javafx-scene.description }} ## Syntax ~~~java public enum AccessibleRole extends Enum<AccessibleRole> ~~~ ## Enum constants * [BUTTON](/Java/AccessibleRole-javafx-scene/BUTTON) * [CHECK_BOX](/Java/AccessibleRole-javafx-scene/CHECK_BOX) * [CHECK_MENU_ITEM](/Java/AccessibleRole-javafx-scene/CHECK_MENU_ITEM) * [COMBO_BOX](/Java/AccessibleRole-javafx-scene/COMBO_BOX) * [CONTEXT_MENU](/Java/AccessibleRole-javafx-scene/CONTEXT_MENU) * [DATE_PICKER](/Java/AccessibleRole-javafx-scene/DATE_PICKER) * [DECREMENT_BUTTON](/Java/AccessibleRole-javafx-scene/DECREMENT_BUTTON) * [HYPERLINK](/Java/AccessibleRole-javafx-scene/HYPERLINK) * [IMAGE_VIEW](/Java/AccessibleRole-javafx-scene/IMAGE_VIEW) * [INCREMENT_BUTTON](/Java/AccessibleRole-javafx-scene/INCREMENT_BUTTON) * [LIST_ITEM](/Java/AccessibleRole-javafx-scene/LIST_ITEM) * [LIST_VIEW](/Java/AccessibleRole-javafx-scene/LIST_VIEW) * [MENU](/Java/AccessibleRole-javafx-scene/MENU) * [MENU_BAR](/Java/AccessibleRole-javafx-scene/MENU_BAR) * [MENU_BUTTON](/Java/AccessibleRole-javafx-scene/MENU_BUTTON) * [MENU_ITEM](/Java/AccessibleRole-javafx-scene/MENU_ITEM) * [NODE](/Java/AccessibleRole-javafx-scene/NODE) * [PAGE_ITEM](/Java/AccessibleRole-javafx-scene/PAGE_ITEM) * [PAGINATION](/Java/AccessibleRole-javafx-scene/PAGINATION) * [PARENT](/Java/AccessibleRole-javafx-scene/PARENT) * [PASSWORD_FIELD](/Java/AccessibleRole-javafx-scene/PASSWORD_FIELD) * [PROGRESS_INDICATOR](/Java/AccessibleRole-javafx-scene/PROGRESS_INDICATOR) * [RADIO_BUTTON](/Java/AccessibleRole-javafx-scene/RADIO_BUTTON) * [RADIO_MENU_ITEM](/Java/AccessibleRole-javafx-scene/RADIO_MENU_ITEM) * [SCROLL_BAR](/Java/AccessibleRole-javafx-scene/SCROLL_BAR) * [SCROLL_PANE](/Java/AccessibleRole-javafx-scene/SCROLL_PANE) * [SLIDER](/Java/AccessibleRole-javafx-scene/SLIDER) * [SPINNER](/Java/AccessibleRole-javafx-scene/SPINNER) * [SPLIT_MENU_BUTTON](/Java/AccessibleRole-javafx-scene/SPLIT_MENU_BUTTON) * [TAB_ITEM](/Java/AccessibleRole-javafx-scene/TAB_ITEM) * [TAB_PANE](/Java/AccessibleRole-javafx-scene/TAB_PANE) * [TABLE_CELL](/Java/AccessibleRole-javafx-scene/TABLE_CELL) * [TABLE_COLUMN](/Java/AccessibleRole-javafx-scene/TABLE_COLUMN) * [TABLE_ROW](/Java/AccessibleRole-javafx-scene/TABLE_ROW) * [TABLE_VIEW](/Java/AccessibleRole-javafx-scene/TABLE_VIEW) * [TEXT](/Java/AccessibleRole-javafx-scene/TEXT) * [TEXT_AREA](/Java/AccessibleRole-javafx-scene/TEXT_AREA) * [TEXT_FIELD](/Java/AccessibleRole-javafx-scene/TEXT_FIELD) * [THUMB](/Java/AccessibleRole-javafx-scene/THUMB) * [TITLED_PANE](/Java/AccessibleRole-javafx-scene/TITLED_PANE) * [TOGGLE_BUTTON](/Java/AccessibleRole-javafx-scene/TOGGLE_BUTTON) * [TOOL_BAR](/Java/AccessibleRole-javafx-scene/TOOL_BAR) * [TOOLTIP](/Java/AccessibleRole-javafx-scene/TOOLTIP) * [TREE_ITEM](/Java/AccessibleRole-javafx-scene/TREE_ITEM) * [TREE_TABLE_CELL](/Java/AccessibleRole-javafx-scene/TREE_TABLE_CELL) * [TREE_TABLE_ROW](/Java/AccessibleRole-javafx-scene/TREE_TABLE_ROW) * [TREE_TABLE_VIEW](/Java/AccessibleRole-javafx-scene/TREE_TABLE_VIEW) * [TREE_VIEW](/Java/AccessibleRole-javafx-scene/TREE_VIEW) ## Methods * [valueOf()](/Java/AccessibleRole-javafx-scene/valueOf) * [values()](/Java/AccessibleRole-javafx-scene/values) ## Example ~~~java {{ site.data.Java.A.AccessibleRole-javafx-scene.code}} ~~~ ## Lines of Code <ul> {%- for _ldc in site.data.Java.A.AccessibleRole-javafx-scene.ldc -%} <li> <a href="{{_ldc['url'] }}">{{ _ldc['nombre'] }}</a> </li> {%- endfor -%} </ul>
43.310345
84
0.77707
yue_Hant
0.727758
0c046bfbb911085a0dfa7259bec9a93b52c1d5c0
505
md
Markdown
README.md
yogeshraz/Custom-Calendar
fbf045b409bf9438159d6615aea61c48be6ac5f9
[ "MIT" ]
2
2018-01-19T07:14:45.000Z
2018-06-26T23:52:51.000Z
README.md
yogeshraz/Custom-Calendar
fbf045b409bf9438159d6615aea61c48be6ac5f9
[ "MIT" ]
null
null
null
README.md
yogeshraz/Custom-Calendar
fbf045b409bf9438159d6615aea61c48be6ac5f9
[ "MIT" ]
null
null
null
Custom-calendar is developed to help you make your own calendar with less boilerplate code, while staying easy to use and easy to customize. Custom-calendar is a robust, lightweight, and smooth library. # Getting Started *** # For running this demo project, you just need to drag and drop the files into your project. # Prerequisites To build a project using the custom calendar for iOS, you need version 9.0 or later of Xcode. # Usage Copy the files into your project and change the configuration accordingly.
33.666667
206
0.786139
eng_Latn
0.999407
0c04d8456eb55a9ebf13d3b9441cc4970ecc8129
1,406
md
Markdown
docs/2014/ssms/menu-help/choose-toolbox-items-maintenance-tasks-page.md
fanck0605/sql-docs.zh-cn
17c5cf325ad19a067fe159c4911ff47de810bb8a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/ssms/menu-help/choose-toolbox-items-maintenance-tasks-page.md
fanck0605/sql-docs.zh-cn
17c5cf325ad19a067fe159c4911ff47de810bb8a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/ssms/menu-help/choose-toolbox-items-maintenance-tasks-page.md
fanck0605/sql-docs.zh-cn
17c5cf325ad19a067fe159c4911ff47de810bb8a
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Choose Toolbox Items (Maintenance Tasks Page) | Microsoft Docs ms.custom: '' ms.date: 06/13/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.technology: ssms ms.topic: conceptual f1_keywords: - vs.chooseitems.maintenance_tasks - VS.ToolboxPages.Maintenance_Tasks helpviewer_keywords: - Customize Toolbox dialog box ms.assetid: b92c9054-7479-45d8-a54c-c1bb6699bdb3 author: stevestein ms.author: sstein ms.openlocfilehash: 949f33ed5893871cb7bf0d646b96278e0932f862 ms.sourcegitcommit: 57f1d15c67113bbadd40861b886d6929aacd3467 ms.translationtype: MT ms.contentlocale: zh-CN ms.lasthandoff: 06/18/2020 ms.locfileid: "85067438" --- # <a name="choose-toolbox-items-maintenance-tasks-page"></a>Choose Toolbox Items (Maintenance Tasks Page) This tab of the **Customize Toolbox** dialog box displays a list of all the maintenance task components registered on your computer; use it to change which components appear in the Toolbox. You can open the **Customize Toolbox** dialog box from the **Tools** menu. To sort the list of components, select the corresponding column heading. ## <a name="options"></a>Options The **Maintenance Tasks** tab includes the following columns of information. **Name** Shows the name of the available component. Each name is preceded by a check box. If the check box is selected, [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] has found an entry for the component in the computer's registry. The component is shown on the active **Toolbox** tab, or will be added to that tab when you click **OK**. If the check box is cleared, the component is currently not shown in the **Toolbox**, or will be removed from the **Toolbox** when you click **OK**. **Path** Shows the full path of the component. To identify the default components shipped with the product, sort this column and look for the components stored under the [!INCLUDE[msCoName](../../includes/msconame-md.md)] Visual Studio installation path. **Last Modified** Shows the date the component was last modified. Click a name to show the corresponding component's properties in the **Language** and **Version** boxes, together with its icon. ## <a name="options"></a>Options **Language** The language of the component. **Version** The version of the component.
28.693878
236
0.736842
yue_Hant
0.618549
0c057b85c6c78081642af201e26a0e8a6aafcad5
1,159
md
Markdown
section-6/video-6.1/README.md
jfechava/Hands-On-Continuous-Integration-and-Delivery-with-Jenkins-X-and-Kubernetes
3a77d8364e24b49068c57b88a9acac0e32fa1dc5
[ "MIT" ]
9
2020-03-21T19:16:53.000Z
2021-11-08T13:11:02.000Z
section-6/video-6.1/README.md
jfechava/Hands-On-Continuous-Integration-and-Delivery-with-Jenkins-X-and-Kubernetes
3a77d8364e24b49068c57b88a9acac0e32fa1dc5
[ "MIT" ]
4
2020-04-26T06:07:21.000Z
2021-08-02T13:56:29.000Z
section-6/video-6.1/README.md
jfechava/Hands-On-Continuous-Integration-and-Delivery-with-Jenkins-X-and-Kubernetes
3a77d8364e24b49068c57b88a9acac0e32fa1dc5
[ "MIT" ]
41
2020-03-24T19:02:32.000Z
2022-01-03T08:54:28.000Z
# Video 6.1 This video demonstrates adding "apps" to Jenkins X. ## Prerequisites First, install your cluster per the instructions in section 1. Second, if you installed Helm 3 in your PATH as part of Video 5.5, you may need to make sure it is no longer in your PATH; as of this writing (March 2020) Jenkins X does not yet support Helm 3. ## Jenkins X Apps Jenkins X has the concept of an "app" that we can add to the cluster. This is different from an application that Jenkins X builds and deploys; Jenkins X only manages deployment for an app. We can deploy an app using a Helm chart; the only difference from using Helm directly is that Jenkins X will manage the deployment for us using GitOps, so we can more easily rebuild the cluster if needed. ## Adding an App to Jenkins X We can add an app to Jenkins X by referencing the name and repository for the Helm chart. For example: ``` jx add app mongodb --repository https://charts.bitnami.com/bitnami ``` This will create a pull request that will eventually end up adding MongoDB to our Kubernetes cluster. ## Cleaning Up To delete the MongoDB Jenkins X app, run: ``` jx delete app mongodb ```
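A small extension of the example above that pins the chart version, so GitOps rebuilds stay reproducible. The `--version` flag existed in classic Jenkins X but treat it as an assumption here, and the version number is a placeholder; check `jx add app --help` on your version.

```bash
# Add the app at a fixed chart version (flag and version are assumptions).
jx add app mongodb --repository https://charts.bitnami.com/bitnami --version <chart-version>
# Review the pull request it opens before the app lands in the cluster.
```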
27.595238
79
0.761001
eng_Latn
0.999205
0c0585e7db9500c5f693fd7691d990603f5faf64
188
md
Markdown
README.md
paiangit/miaoda-ui
7950f8f8a7640402dc2d8b77809df7b97b30fd53
[ "MIT" ]
null
null
null
README.md
paiangit/miaoda-ui
7950f8f8a7640402dc2d8b77809df7b97b30fd53
[ "MIT" ]
null
null
null
README.md
paiangit/miaoda-ui
7950f8f8a7640402dc2d8b77809df7b97b30fd53
[ "MIT" ]
null
null
null
<p align="center"> <h1 align="center">Miaoda UI</h1> </p> ## 🖖 Introduction ## ✨ Feature ## 📦 Use ## 🤝 Contributing ## 📃 License [MIT](https://opensource.org/licenses/MIT)
7.833333
42
0.579787
yue_Hant
0.268123
0c06e34cf374a2971bf8354b7b1b20d2402a7f80
28,726
md
Markdown
_posts/2010-2-20-你们瞧瞧人家高水平的左派是如何justìfy自己的观点的.md
backup53/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
18
2020-01-02T21:43:02.000Z
2022-02-14T02:40:34.000Z
_posts/2010-2-20-你们瞧瞧人家高水平的左派是如何justìfy自己的观点的.md
wzxwj/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
3
2020-01-01T16:53:59.000Z
2020-01-05T10:14:11.000Z
_posts/2010-2-20-你们瞧瞧人家高水平的左派是如何justìfy自己的观点的.md
wzxwj/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
13
2020-01-20T14:27:39.000Z
2021-08-16T02:13:21.000Z
--- layout: default date: 2010-2-20 title: 你们瞧瞧人家高水平的左派是如何justìfy自己的观点的 categories: 雅典学院 --- # 你们瞧瞧人家高水平的左派是如何justìfy自己的观点的 北京棋迷 自由主义,一抓就灵 1楼 大 中 小 发表于 2010-2-20 23:08 只看该作者 你们瞧瞧人家高水平的左派是如何justìfy自己的观点的 穷人的普世价值 穷人的普世价值 同人于野 我听说有这么一种中学。在这个学校里,学生几乎没有任何自由。 怎么走路,怎么坐,走路的时候怎么拿东西,怎么回答问题,甚至上厕所之后怎么洗手,都有严格规定。 课堂上别的同学发言的时候,全班同学按规定动作看着他。在教室里,学生必须学会使用两种统一的音量说话,根据具体情况决定使用哪种音量。如果哪个同学在课堂上有小动作,老师会立即停止上课,然后全班讨论怎么“帮助”他克服这个坏毛病。 这个学校的学生一般早上五六点钟就起床,在七点之前已经开始集体的学习。放学也比别的学校晚,一般要在下午六点以后,然后回家还要做两个小时的家庭作业。 除了刻苦学习之外,学校还要求所有学生必须有礼貌 — 不是一般的有礼貌,是日本艺妓水平的礼貌。比如你跟这个学校的一个学生说话,他会注视你,不停地点头表示非常赞同你的观点。老师甚至有意在校园里放置垃圾,让学生们自觉捡起来。 学生们既不研究体育明星,也不常去博物馆,他们学习的核心只有一个,那就是一定要考上大学。 有人研究表明,在这种学校里,老师一直向学生灌输的观念可以归纳为三条: 第一,你是集体的一员。 第二,外面的世界奇怪而危险,所以我们的集体要始终团结在一起。 第三,世界就要像一座大山,山的顶峰是一个天堂,这个天堂就是大学。 这三条如果不是说一个学校,而是一个国家的话,必然是一个民族主义到了军国主义程度的可怕国家。这种学校在湖南农村么?或者这种学校在二战前夜的日本么? 这种学校在今天的美国。这就是美国的 KIPP (Knowledge Is Power Program) 学校系统。 美国的公立学校系统总的来说相当差劲。如果让一个孩子在纽约的某些学区上中学,他受到的教育和未来能考上美国大学的可能性,也许低于中国某些大城市的中学。KIPP 就是公立学校改革的产物。在这个学校系统上学的绝大多数孩子来自穷人家庭。然而 KIPP 却获得了超过80%的大学升学率,其各项成绩往往是所在州所有公立学校里面最好的。这种学校是如此热门,它的到了大量的社会捐款,而谁能入学必须抽签决定! 我国的自由主义知识分子会对这种学校有什么评论呢?这哪是培养人才,纯粹是培养机器。中国的主流舆论,早在20年前就开始鄙视“以考大学为核心”的中学教育,认为素质教育才是最重要的。我国的舆论认为,让孩子处处按规定动作行为,是对人性的摧残。过分的礼貌则无助于养成独立的人格。宽容,自由,平等,这才是普世价值。 但从 KIPP 在美国的成功,我们可以看到另一种普世价值:穷人的普世价值。穷人的普世价值很简单,那就是不想再当穷人。哪怕是牺牲个性,也不想当穷人。哪怕是没时间看哈利波特,也不想当穷人。哪怕是集体主义,也不想当穷人。 在某些人眼中的完美穷人,是安分守己的,以贫穷为乐的穷人。一个中产阶级人士跑到乡下去玩,他希望看到的可能是那种乐呵呵地赶着牛羊去田野的穷人孩子。在最理想的情况下,这个孩子最好还要对他的跑车表示一下鄙视,那样的话他会感动地把这个经历写在博客上。反过来,如果一个穷人孩子对他说,我要不惜一切代价考上你上的那个大学,将来也买一辆这样的跑车,那就一点诗情画意都没有了。 西方国家的人看西藏和非洲,就是希望这两个地方的穷人永远都是诗情画意的穷人。他们希望藏族人最好永远都是虔诚的宗教教徒,他们希望非洲人最好永远都过绿色环保的生活。 但穷人不想永远当穷人。穷人认为如果能考上大学,那么年轻的时候做出一点牺牲是值得的。考上大学以后再学风雅也不迟。即使是在今天的美国,仍然需要 KIPP 这样的中学,而且不是一般需要,是极其需要。美国媒体对 KIPP 是一片欢呼声。穷人没有别的社会资源可以改变自身命运,升学率就是硬道理,这么基本的常识有什么可说的? 我认为穷人的普世价值,是比知识分子的普世价值更“基本”的价值。中国在三十年以前,主要由穷人组成。所以中国那时候唯一正确的做法,也许就是设计一个类似于 KIPP 学校的系统。抽象地谈论普世价值,指责当初的做法,有什么意义呢? --- [Terminusbot](https://github.com/TerminusBot) 整理,讨论请前往 [2049bbs.xyz](http://2049bbs.xyz/) --- 北京棋迷 自由主义,一抓就灵 2楼 大 中 小 发表于 2010-2-20 23:09 只看该作者 和写实那种狂喷口号狂扣帽子的大字报比比 1984像写实和春哥这种垃圾脑残越来越多,右派们不觉得羞愧么 花想容 依据用户管理细则,账号永久停用。 3楼 大 中 小 发表于 2010-2-20 23:16 只看该作者 回复 1楼 北京棋迷 的话题 确实是高水平的五毛文啊,看得我怒火满腔却又批判不了,除了西藏部分他明显有破绽以外。 不过,这样justufy压抑个性做法的人,在我的分类里是右派,不是左派。左派岂能为五斗米折腰? 五毛在升级为五元,大家要警惕啊 花想容 依据用户管理细则,账号永久停用。 4楼 大 中 小 发表于 2010-2-20 23:21 只看该作者 想出来了,高级五毛误导的方法是在脱贫方法上搞非此即彼,不给你其他选择:要考上美国大学就要压抑个性,要发展中国经济就得TG领导,这是TMD在放狗屁!马勒隔壁 就算这个同人于野是北京棋迷的马甲,也得骂 花想容 依据用户管理细则,账号永久停用。 5楼 大 中 小 发表于 2010-2-20 23:22 只看该作者 突然想起来了,推崇美国的 KIPP (Knowledge Is Power Program) 学校系统的,好像有薛涌?丫利博儒的立场不坚,偶早看出来丫不是东西了。 崂山妖道 我站在米奇尼克和布罗茨基一边,做永远的反对派。 6楼 大 中 小 发表于 2010-2-20 23:25 只看该作者 这种学校没错啊,欧洲公学大多与此类似,培养贵族也是这样,追忆逝水年华里就有在家庭中这样严格管教孩子的叙述,白璧德针对美国大学教育平民化就提出过尖锐批评。这与社会的平等和民主化并不矛盾。 崂山妖道 我站在米奇尼克和布罗茨基一边,做永远的反对派。 7楼 大 中 小 发表于 2010-2-20 23:27 只看该作者 但是文中过度引申是错误的,引申到西藏等更是跳跃性极大,很不连贯,说明这人思路上产生断裂。 北京棋迷 自由主义,一抓就灵 8楼 大 中 小 发表于 2010-2-20 23:28 只看该作者 人家才不是五毛呢,你的智慧和判断力也被意识形态劫持了。如果你看多了人家的文章,就知道了。 我想说,就算你不认同人家的立场,你至少还应该好好学习学习人家的表达技巧。 他的文章本身就是个比喻,不是在赞扬学校压抑个性,而是说不同的人因经济境遇的不同有不同的最高价值。这是从根本上动摇所谓普世价值的普适性。这才是有价值高水平的思考 引用: > 原帖由 花想容 于 2010-2-20 10:16 发表 ![](https://1984bbs.com/images/common/back.gif) > 确实是高水平的五毛文啊,看得我怒火满腔却又批判不了,除了西藏部分他明显有破绽以外。 > 不过,这样justufy压抑个性做法的人,在我的分类里是右派,不是左派。左派岂能为五斗米折腰? > 五毛在升级为五元,大家要警惕 ... 花想容 依据用户管理细则,账号永久停用。 9楼 大 中 小 发表于 2010-2-20 23:30 只看该作者 回复 7楼 崂山妖道 的话题 跳跃是五毛文的通病,跳得好不好决定是不是高级五毛。 真希望北京棋迷没有堕落到写这种文了。 花想容 依据用户管理细则,账号永久停用。 10楼 大 中 小 发表于 2010-2-20 23:32 只看该作者 回复 8楼 北京棋迷 的话题 我知道文章的要害在否定普世价值的普适性,凡是为文目的是这个的,都可以鉴定为五毛。现在想来,这其实这只是“人权就是生存权”拿什么KIPP做例子包装了一下唬人。 蒙面佐罗 11楼 大 中 小 发表于 2010-2-20 23:47 只看该作者 写这文章怎么就是五毛了? 
纽约和其他地方不同,这个地方教育界貌似民主党控制,搞精英教育,反倒是乡下保守党控制的地方学校里自由主义成灾。学生考不上大学,甚至不会阅读。我儿子现在就在纽约上小学,老师甚至打电话给家里控告:你家小子某某单词居然不会拼写。他表姐在斯蒂文森中学,应该是纽约排名第一高中,华人犹太人很多,从小学到高中一路竞争。 这个实例我觉得并不是否认了普世价值。 elon 打飞机的老张 12楼 大 中 小 发表于 2010-2-20 23:49 只看该作者 引用: > 原帖由 崂山妖道 于 2010-2-20 23:27 发表 > ![](https://1984bbs.com/images/common/back.gif) > 但是文中过度引申是错误的,引申到西藏等更是跳跃性极大,很不连贯,说明这人思路上产生断裂。 文中的引申并没有什么错误吧。。。,在作者的观念中,西藏和非洲是西方人眼里最典型的鸟不拉屎的穷地方。 还是你太敏感了? 灼眼のSABER “君有疾在蛋上,不治将疼” “操,滚!” 13楼 大 中 小 发表于 2010-2-20 23:49 只看该作者 忍不住跳出来嚷嚷几句 这叫移花接木 怎么走路,怎么拿东西,上课禁止有小动作,要求对他人有礼貌, 等等,都是一种秩序的体现,也许对走路和拿东西的姿势的规定还有些争议,但是上课禁止有小动作,对人有礼貌这是没问题的,因为这些行为会对他人造成影响,为了减少对某种行为对大多数人的不良影响,秩序就产生了,所以秩序和素质以及个性培养,甚至所谓的普世价值都不构成矛盾,或者说,良好积极的秩序和上面说的这些东西,他们所指向的目标是一致的。 反驳这篇文章,实际上完全可以用考研政治里的那一套来对付。秩序确实是以对某些行为的限制的形式出现在我们面前的,若是将这种限制的幅度和广度加大,无处不限制,或者说它限制了一些完全对其他个体没有影响,只对行为发出者自己产生结果(往往是可控的,比如抽烟,我可以凭借自己的意志,想抽就抽想不抽就不愁,这叫可控,而吸毒就不算,因为他想不吸毒但他控制不了,你可以立法强迫所有人都不吸毒,你不能强迫所有人都不吸烟)的这样一种行为,甚至秩序的制定者自己都不遵守这种秩序,那么这就是对人性啊普世啊BLABLA的摧残。 量变引起质变,后一种限制相比起前一种限制,也就是积极良好的秩序,已经变成了恶性的“秩序”,性质发生了转变,原因就是他的量变超过了“度”,而本文刻意模糊了“度”的概念,硬将两种不同性质的东西嫁接在了一起,根据马克思主义哲学原理,这种抹杀“度”的做法,我们称之为诡辩论,也就是白马非马。原作者理论根基不深,鉴定完毕 继续潜水,各位继续 [ 本帖最后由 灼眼のSABER 于 2010-2-20 23:51 编辑 ] 蒙面佐罗 14楼 大 中 小 发表于 2010-2-20 23:50 只看该作者 自由平等个性是普世价值,难道财富的追求,独立,就不是普世价值了吗?凭什么认定你的价值是普世价值,我的就不是?难道我追求的你不也在追求,只不过路径方式不同? 花想容 依据用户管理细则,账号永久停用。 15楼 大 中 小 发表于 2010-2-20 23:54 只看该作者 回复 11楼 蒙面佐罗 的话题 学校严格要求并不是他的要点,他还要故意夸张到摧残人性的地步,然后否定教育一定要尊重学生人性,打开了价值相对主义的缺口,再为中国政府辩护。 蒙面佐罗 16楼 大 中 小 发表于 2010-2-20 23:59 只看该作者 引用: > 原帖由 花想容 于 2010-2-20 23:54 发表 ![](https://1984bbs.com/images/common/back.gif) > 学校严格要求并不是他的要点,他还要故意夸张到摧残人性的地步,然后否定教育一定要尊重学生人性,打开了价值相对主义的缺口,再为中国政府辩护。 美军军事学校如西点,是不是更夸张?我不觉得这是摧残人性。你要得到就得付出,这种束缚保证了一个群体,以及群体中大多数人的目标达成,可以视为一种早熟的路径,而穷人的孩子早当家不仅为中国特有。 当然我想这种学校毕业生进州立大学的可能很多,常春藤联盟就少了。 所谓普世价值,都是绝对的,正因为这些价值都是绝对的,所以才是相对的,因价值冲突不可避免但不会导向虚无 [ 本帖最后由 蒙面佐罗 于 2010-2-21 00:01 编辑 ] 花想容 依据用户管理细则,账号永久停用。 17楼 大 中 小 发表于 2010-2-21 00:04 只看该作者 回复 16楼 蒙面佐罗 的话题 西点军校的学生是成人,申请入校本身就意味着接受合同,没有可比性。 要靠把孩子送进这样的学校才能学会基本礼仪的,应该在中产之下。这种举例没什么意义,纯粹为替中国辩解而作。 蒙面佐罗 18楼 大 中 小 发表于 2010-2-21 00:05 只看该作者 补充下:我实在看不出这种文字有什么地方为中国政府辩护。纽约这种学校不是你想去就能去,也不是所有人必须去的,这里有个自由选择的前提。 当然,如果所有学校都搞成这样,成为一种强制,那是法西斯了。 我刚才说过,穷人孩子早当家。家庭的希望就寄托在你身上,你也想过好日子,作出这样的选择岂不是顺理成章,除非是天才,准备往常春藤蹦。 如果这种选择的机会都不可以拥有,中产以下阶层孩子成为DEPRESSED CLASSES一员的前景基本是一定的,那才是残酷。 [ 本帖最后由 蒙面佐罗 于 2010-2-21 00:09 编辑 ] orangeking 19楼 大 中 小 发表于 2010-2-21 01:24 只看该作者 美国穷人可以选去不去文中的KIPP 中国公民有的选吗? 老子穷 也想挣钱 可是我更想过有尊严的凭自己努力也能致富的日子 不是这个操蛋的社会 Diablo 封锁带来觉醒,黑暗衬托光明。 20楼 大 中 小 发表于 2010-2-21 03:17 只看该作者 “穷人的普世价值很简单,那就是不想再当穷人” 自己定义这样一个前提有什么意义吗?穷与不穷是也是相对的,“穷“本身就很难把他规定成一个标准,更何况“XXXX很简单,就是XXXX”这么简单的因为所以。 不可否认有的人爱钱,但并不代表所有人都爱钱,爱不爱钱往往和他穷不穷并没有直接关系。 再说,美国的穷人和中国的穷人一样吗?谁说穷人就没资格要自由? 姑且算是文章里说的全是真的,但那也是他们主动放弃了自己的自由。而且KIPP那只是一个为美国穷人办的突击培训应考的学校,这并不是一个人的一生。写这文章的人分明把这个学校一些现象和理念有意地放大了。 这和中国人整个人生被剥夺自由,没的行使、甚至没的放弃是不一样的。 在中国不是所有人都穷,很多人、很多时候就是没自由没权利才穷的。你不吃不喝半辈子买不起一套房子,你觉得应该怪你没好好上学? 用裤衩思考 路边社社员 21楼 大 中 小 发表于 2010-2-21 03:44 只看该作者 说得太对了! 引用: > 原帖由 Diablo 于 2010-2-21 03:17 发表 > ![](https://1984bbs.com/images/common/back.gif) > “穷人的普世价值很简单,那就是不想再当穷人” > 自己定义这样一个前提有什么意义吗?穷与不穷是也是相对的,“穷“本身就很难把他规定成一个标准,更何况“XXXX很简单,就是XXXX”这么简单的因为所以。 > 不可否认有 ... \- 整篇文章, “穷人的普世价值很简单,那就是不想再当穷人” ,这个都说了,还要这篇文章干嘛?你都替穷人定义好了普世价值了,还写个屁文章啊?麻烦作者让一下好吗?让穷人说说话?行不?在”不想当穷人“之前,我想还有更重要的吧? 
穷人的普世价值很简单,那就是不想死去 穷人的普世价值很简单,那就是不想生病 穷人的普世价值很简单,那就是不想不识字。 难道要点儿人权民主穷人就活不下去了?就得继续当穷人了?还他妈来劲了,替西方国家的人说话了。你怎么不替你们党校发言啊?你们党校的人怎么看西藏的?说来我听听。 foo 见习魔法师 22楼 大 中 小 发表于 2010-2-21 07:03 只看该作者 我的一些有过在贫困地区支教经验的朋友的体会和主贴文章恰好相反 当地人未必有远大的理想 更习惯打柴种地 日出而作日入而息 其实并不很热衷考大学什么的 我的朋友刚到这些地方的时候,都怀着着要帮助当地人的抱负 结果最后是他们自己的观念被改变 因为他们发现当地人似乎不太需要这种帮助 当然这个判断也可能流于主观 但是主贴对穷人追求靠教育改变生存状况的判断是否成立 又是否能够应用于中国社会 在没有证据之前我是保留意见的 [ 本帖最后由 foo 于 2010-2-21 07:19 编辑 ] foo 见习魔法师 23楼 大 中 小 发表于 2010-2-21 07:08 只看该作者 另外,且不说由学校制度推及政治制度的“移花接木” “中国在三十年以前,主要由穷人组成。所以中国那时候唯一正确的做法,也许就是设计一个类似于 KIPP 学校的系统。” ——事实证明三十年前这样的系统带给穷人的是灾难,不是吗? 枉顾基本事实,楼主能解释下这篇文章哪里体现“高水平”了吗? 崂山妖道 我站在米奇尼克和布罗茨基一边,做永远的反对派。 24楼 大 中 小 发表于 2010-2-21 07:22 只看该作者 引用: > 原帖由 elon 于 2010-2-20 23:49 发表 > ![](https://1984bbs.com/images/common/back.gif) > > 文中的引申并没有什么错误吧。。。,在作者的观念中,西藏和非洲是西方人眼里最典型的鸟不拉屎的穷地方。 > 还是你太敏感了? 是太敏感,不敏感就被忽悠了。 主帖和我反驳的都不在西藏非洲穷还是不穷,你是不是没看懂主帖? elzzird 混吃等死的绣花枕头 25楼 大 中 小 发表于 2010-2-21 08:49 只看该作者 以上各位竟然没人去查证文中关于KIPP信息的真实性,实在让人失望 “Most KIPP schools run from 7:30 a.m. to 5:00 p.m. Monday through Friday and 8:30 a.m. to 1:30 p.m. on select Saturdays (usually twice a month),” \--from wiki 文中说什么7点之前开始集体学习,6点之后放学,完全没有根据。而且其他内容就更加值得怀疑,因为可以搜索到的关于KIPP的介绍中,完全没有文中所说的那种情况。 而且KIPP早就被指责通过事先筛选学生,大量淘汰落后学生来达到高升学率。(参见 wiki 中 criticism 一节) 现在基本可以断定,此文又是类似西点学雷锋那样,糊弄不明真相的群众。 看来网上不管什么派,都只会瞎忽悠,或者被人忽悠,完全没有认真的态度。 彭克 twitter:@meichaofeng 26楼 大 中 小 发表于 2010-2-21 09:02 只看该作者 北京棋迷的2楼,有国父风。 hsmsheep 27楼 大 中 小 发表于 2010-2-21 09:52 只看该作者 从wiki来看,论据不实;从文中逻辑来看,用“穷人通过集权(主要是kipp系统)达到了高升学率,摆脱贫穷。作者认同。”这个例子来类比“三十年前的中国也通过集权达到高生产率,摆脱贫穷。作者亦认同。”作者的逻辑是自圆其说的。 我认为,中国三十年的生产率的提高不是集权的作用,而是世界经济科技发展的结果,集权不是高生产率的必要条件。 everyman88 28楼 大 中 小 发表于 2010-2-21 09:54 只看该作者 原来这就是“高水平的左派”。我感到很欣慰。 文章的逻辑性就不用谈了。而且根据我的经验,善于如此逻辑写作的人,一般都有伪造事实的习惯。文中基本上是从不同学校中寻找到有利自己观点的特质,然后混合起来,让人觉得是存在于同一所学校中。进而让人感觉所有类似的学校都有相同的特质。 还有"JUSTIFY“是什么意思。 [ 本帖最后由 everyman88 于 2010-2-21 10:01 编辑 ] 燕归来 “日拱一”小卒 29楼 大 中 小 发表于 2010-2-21 10:16 只看该作者 如果这个是所谓的高水平左派,我就不厌其烦的粘贴旧文:林达关于美国阿米绪派的文章。 ========================================= 拒绝进步的自由——阿米绪的故事 按:故事有点长,但值得细读,无论故事本身还是林达的文笔。以下几点,仅供参考: 一、原来美国也有强制义务教育。都说教育是神圣的事业。现在看来,在政府的干预和强制下,教育已经堕落成邪恶的事业。 二、阿米绪人不用电,不开车,自然也没有互联网。但我觉得,他们内心感受到的平静与幸福,很可能比拥有这一切的你和我多得多。 三、阿米绪人并不反感外界的好奇窥视,这是对自身价值观充满自信的表现。只有内心虚弱缺乏自信的社团,才会把什么都搞得神神秘秘。 四、阿米绪人平均每家生7个孩子………… 阿米绪(Amish)的故事 ◎丁林 近几年随着中美文化交流的展开,美国一些远离主流的文化小溪,也渐渐被介绍到中国。于是,美国不再是一个刻板的固定套路,在大洋此岸人们的印象中,美国的形象正在逐步丰富起来。我曾经两次在国内的杂志上看到有人提到:美国有一群默默无声地生活在自己世界里的阿米绪人。 假如我们对美国的一般印象可以放在概念的一个极地的话,那么阿米绪是肯定必须送到相反的另一个极地去的。如果我们称美国的生活方式是现代的,那么阿米绪可以说是古代的;如果我们称美国是技术进步的,那么在同一个价值体系里,阿米绪不仅是落后的,而且是拒绝进步的,等等。假如再形象化一些,如果我们对美国人的印象是眼花缭乱五彩缤纷的,那么阿米绪人永远是平淡的,是只有黑白的单色调的。 假如在你的想象中,阿米绪是一小群生活在某一个群山环抱,车船不达,鸟不下蛋,与世隔绝的山洼洼里的话,倒也没什么稀罕了。问题是,今天的美国这样一群教徒有差不多35万人,人数还在缓缓地增加。他们生活在美国传统农耕区富庶辽阔的平原上。他们不但不封闭,甚至不集聚而居,没有什么阿米绪村庄,他们全是散户。一个个阿米绪农户就星星散散地座落在其他美国人的住房之间,混居在同一个地区。他们家门口的乡间公路也都是平展的柏油马路,直达高速公路。城镇就在附近,那儿就有购物中心和娱乐设施。嘈杂,多变而生气勃勃的现代生活就近在咫尺之遥。但是当你经过那里的民房,很容易辨别出阿米绪的住宅,除了都有巨大的谷仓之外,还在后院停着黑色的小马车。因为在我们断定不开汽车就算不得美国人的年代里,他们却只驾马车。 所以,这不能不让认定”技术进步”必是”挡不住的诱惑”的人们愣一愣神。不管将来如何,你不得不承认,毕竟在世界上此类诱惑最大的美国,阿米绪已经两百多年这样默默地过来了,宁静安祥。也许不见得特别幸福,至少并不格外痛苦。他们也有自己的喜怒哀乐,但是并没有如我们想象的那样,被”诱”得骚动不安,六神无主,跃跃欲试,痛苦不堪。 说起他们的来龙去脉,还得上溯到五百年前的欧洲。 在十六世纪欧洲宗教改革的大潮中,从苏黎世产生出一个人数不多的激进改革派,被称为”再洗礼派”。他们主张严格实践圣经教义,排斥不符合圣经的虚文褥节。他们认为宗教信仰应该在日常生活中时刻加以实践,不能说一套做一套。他们认真地寻求圣经中对于大小事情的说法,弄清楚了就一定要去做,而且要做到。他们认为,教会应该是信仰相同的成人的集体。所以,婴儿出生以后”被动的”第一次洗礼不能算数。而在一个人成年之后,如果他确信自己真有信仰的话,应该”主动”地再接受一次基督徒的洗礼。这就是”再洗礼派”这一名称的来历。 
十六世纪还远不是一个宗教宽容的年代。再洗礼派一问世,就遭到来自罗马天主教会和其他新教徒两个方向的迫害。在再洗礼派发源的瑞士和德国南部,当时曾有几百个再洗礼派教徒被烧死在火刑架上。在这样残酷的环境中,再洗礼派却显示出惊人的宗教执着。他们认为,虽然他们面对的世界是傲慢的,富有的,偏狭的,暴戾的,而他们却仍然应该是善良的,清贫的,谦卑的,非暴力反抗的。在严酷镇压下,再洗礼派逐步形成了一些与其他新教教派不同的特点。他们无法形成良好的教会组织,一开始甚至只能在山洞里悄悄地聚会祷告。他们甚至没有明确的领袖,因为领袖一出来就给杀了。他们的一切都只能悄悄地做,恐惧,不安和受苦受难始终伴随着他们。 既然没有严密有形的教会组织,没有教会规范的约束,也没有一般宗教常见的仪式仪规的凝聚,那么,他们作为一个教徒存在,就完全是依靠他们内心的信仰了。因此,假如说再洗礼派的信仰特别执着,大概是不错的。他们在北欧传播的过程中出现了两个支派。十六世纪中叶,一个叫做梅诺的荷兰人曾试图在北欧重建和平的再洗礼派的团体。他们的后继者就叫做梅诺纳特人,也就是梅诺派。而到了十七世纪末,瑞士和南莱茵河的再洗礼派还是处于遭受迫害的分散状态,有一个叫阿曼的瑞士人站出来号召再洗礼派的改革和联合,这一派就被叫做阿米绪,也就是阿曼派。 这两个分支此后都来到北美这块新大陆,分布在美国22个州以及加拿大安大略省,大部分定居在俄亥俄州和宾夕法尼亚州。这两个州当年是北美大陆最好的农业区,而他们在欧洲时就是最出色的农夫。感谢开拓宾夕法尼亚的教友派,他们是对待异教最为宽容的北美主流教派,更要感谢北美大陆很快风行,并在美国独立以后由美国宪法保障的宗教自由,使这些”梅诺人”和”阿米绪”们,终于能够安居乐业了。由于阿米绪与梅诺人相比,他们更恪守古老的服饰和生活方式,在外观上更容易辨认,他们的风格与现代生活的反差也更为鲜明强烈,所以很多美国人也是只知阿米绪而不知梅诺人。 宾夕法尼亚的兰开斯特县,是阿米绪比较集中的地区,离著名的大都市费城仅一小时车程。在去冬最寒冷的日子里,我们来到那里,也是想一睹真正的阿米绪生活。可是我们发现,他们安静谦卑地生活在自己的世界里。只要是对外开放,以满足游客好奇心而设立的”阿米绪旅游点”,那就一定不是阿米绪办的。因为这样获利甚丰的”第三产业”,并不符合他们的生活原则。然而,他们虽然在宗教上固守而且生活方式内向,可是他们对外界很友好,也理解外人对他们的好奇。假如你想给他们的小马车拍张照,他们会微笑着放慢车速,让你如愿以偿。我们发现,他们没有这类小教派非常容易出现的诡秘行迹,他们坚守的只是一种由宗教信仰导致的平淡。 生活在宾夕法尼亚的再洗礼教徒至今没有显赫的教堂,他们的教堂一如他们的农舍,朴素而卑微,有时候甚至象早期教友会一样,把他们聚会崇拜上帝的教堂谦称为会屋。老派的阿米绪还轮流聚集在各自的农舍里做礼拜,围着老式的乡村火炉,一边坐着男人,一边坐着女人,读着老版本的圣经,用的是这里其他人都不懂的高地日尔曼语。而在他们只有一间房间的学校里,他们的孩子则个个都要学两种语言,英语和这种他们从欧洲家乡带来的古老语言。 无论在什么地方,你一眼就能把阿米绪认出来,因为他们的服饰与众不同。简单地说,其他的人,四五百年来的服饰一直在变,而他们却一直没有变。不论是男人的服装礼帽,还是妇女的衣裙,都是一水的黑色。只有在节日或婚礼上,妇女们才加上一方纯白的披肩。姑娘们的裙衫上没有一个钮扣,男人们的服饰上有钮扣,但没有任何其它装饰。据说这些突出”谦卑”的规矩都可以从圣经中找到依据。在兰开斯特,假如你遇到以前只在电影里才能看到的”古代欧洲村民”,他们就是阿米绪。 也许,他们的上空就是高压线,他们的邻居就是家用电器样样俱全,但是他们不用电。所以,也没有电灯,电视,电冰箱,收音机和微波炉。阿米绪不用汽车,他们是农夫,却拒绝使拖拉机和任何新式机械,有些梅诺人偶然使用汽车,但一定是黑色的,以示谦卑。阿米绪则用马拉犁耕地,驾着单驾马车外出。我们看到,在兰开斯特的乡间公路上,答答疾驶的阿米绪马车后面,常常跟着几辆耐心的邻居们的汽车。 对于认定只有自己的价值体系是唯一正确的人们,很难理解为什么阿米绪放着现成的新技术拒不使用。他们除了认定阿米绪固守落后,再也找不出别的解释。然而在多元文化的概念逐步被人们接受的今天,人们能够看到,这些再洗礼派教徒非常聪敏智慧,也非常能干。他们在兰开斯特县用传统农业技术经营的家庭小农庄,是全美单位出产最高的农庄之一,而且没有化学污染土壤退化等现代农业的通病。他们的生活简单而安逸。他们自己的解释是,由于他们的宗教信仰和几百年来所遭受的迫害,他们对整个外部世界抱有深刻的戒心,他们强烈地要和外部世界的浮躁轻薄和人性渐失,保持一个距离,从而能不受诱惑干扰地追随他们的上帝。他们认为眩目的电器是对他们的精神世界的威胁。 兰开斯特公路上汽车马车混列行驶的景象在美国实属罕见,因此,在离开之后我们依然久久难以忘怀。这也引起了我们的另一个好奇:阿米绪既然是混居在普通的美国人中间,他们要解决的问题显然不止是汽车和马车的矛盾,他们独特的宗教信仰与几百年前的生活方式,同现代化的外部世界是如何协调的呢?这个问题最突出的部分,就是他们和各级政府依据现代美国生活所制定的法律冲突是如何解决的的呢? 
之所以我们会立即产生这样的联想,就是因为在美国生活了几年之后,我们知道这是一个重法律契约并且执法很严的国家。在一个民主社会,法律和政府的决策是多数达成一致的结果,所以法律一旦通过,就要求个人服从,包括持有不同意见的少数,而阿米绪显然是美国的少数。 在两个不同文化发生冲突的时候,最重要的往往是妥协的精神。阿米绪是一个谦卑的群体,因此,他们在自己能够接受的范围内,也在作最大的妥协。兰开斯特县议会曾经立法规定,阿米绪的马车在公路上行驶时,车后部必须安装一个橘黄色三角形的慢行车标志。尽管这有违阿米绪不行装饰的传统,但是他们理解交通安全的合理性,如今他们的马车上都有这个标志,衬在黑色的车蓬上,格外醒目。同时县议会立法规定,马车夜间行驶不能使用老式黯淡的油灯,而必须使用干电池的车灯。阿米绪是不用电的,但是,这一次他们还是接受了这个干电池车灯。又如,兰开斯特县为了公共卫生,立法禁止没有化粪池的户外厕所。阿米绪于是改变自己的习惯,把厕所移到室内,并且都接受了地下化粪罐这样的”新技术”。 然而,在现代美国社会中,还是有一些与阿米绪宗教信仰完全冲突的法律,是他们无法妥协的。这种冲突有时还相当尖锐。这就是考验这个国家的时候了。因为民主意味着按照多数人的意志行事,但是如何对待少数人,始终是民主社会一个难解的课题。美国也经历了一个逐步认识的历史过程。 首先遇到的就是一个大问题:战争与和平。当美国卷入两次世界大战,以及以后的朝鲜战争,越南战争时,国家都需要有人当兵打仗。征兵法应该对所有的公民一律平等,这是现代国家的一个常识。但是阿米绪所属的再洗礼派是绝对的和平主义者,他们的信仰使他们坚守不动刀兵。于是,他们和美国的法律开始了旷日持久的冲突。 那么,绝对的和平主义者究竟意味着什么呢?在阿米绪代代相传的一本书中,记述了他们人人熟知的一个历史故事。早年间,有一个荷兰再洗礼派教徒叫迪尔克。他由于遭受宗教迫害被警官追捕,追捕过程中警官掉进了一条冰河。迪尔克明知自己一旦被捕将会性命不保,他还是不能见死不救。他返过身来救起警官,自己却因此被捕,被烧死在火刑架上。这件事发生在1569年,在他们来到北美大陆以前。 在他们来到北美大陆以后,也有类似的事情发生。在白人和印地安人互相追杀的年代,有一次,一伙印地安人在夜间包围了一个叫雅各布的阿米绪家庭。雅各布的儿子们本能地操起了打猎用的猎枪准备自卫。雅各布却夺下他们的猎枪扔了出去。结果是,他们一家除了两人被掳走以外,全部被杀。这并不是一个孤立的例子。不仅是阿米绪,其实当时还有大量教友派教徒也由于坚持和平主义而被印地安人杀死。以上这些历史都是阿米绪教育后代的典范。 在一次大战期间,一些阿米绪的年轻人被迫入伍,参与军训,甚至被迫拿起了枪来。然而,不管是什么理由的战争,对于再洗礼教徒都是不可接受的。所以阿米绪抗拒军令者甚众。那是近一百年前,美国政府战争当前,显然不打算在这个时候与阿米绪探究什么宗教哲学问题。因此,在这次战争中,有很多阿米绪教徒因拒服兵役而被逮捕入狱,也有人因为在阿米绪的报纸上告诫教徒遵从教义反对杀戡而被判”煽动不服从罪”。 当时,有一个叫鲁迪的阿米绪被征入伍。军官逼着他穿上了军装,列队操练。几个星期以后,轮到实弹演练的时候,他再也受不了良心的谴责,脱下军装,要求退伍。按当时的法律,违抗军令要受军事法庭的审判。军官把他带到军营外的三座新坟边,拍着手枪警告他,明天早晨如若不穿上军装报到,他就将是第四座坟墓。鲁迪一夜无眠。第二天早晨,新兵们吃早饭的时候,鲁迪来了。他穿着一身黑色的阿米绪传统服饰,戴着黑色的帽子。为了宗教良心,鲁迪作了死的选择。从来就说一不二的军官看着这个阿米绪,却没有枪毙他,让他退伍了。 在一次大战期间,这个问题每个个案遇到的情况不同,处理的方式也不同。冲突发生了,问题却没有解决。由于阿米绪人数不多,牵涉的面不广,没有引起太大的注意。 到了二次世界大战,德国是主要敌对国。恰巧很多阿米绪祖先的故乡就是德国。他们在教育中很重视保持故乡语言,阿米绪的日常语言往往是高地日耳曼语。所以,这一次他们的反战行为在美国引起了关注。有人怀疑他们拒绝作战,是因为他们站在自己的母国一边。美国国会为此专门举行了听证会。为了阐明再洗礼派的和平主义宗旨,争取不上战场的权利,一向不愿抛头露面的阿米绪派出代表在国会听证会上宣誓作证。 他们以自己的历史向全美国人民证明,他们反战只是出于他们的宗教信仰。他们反对的是任何一种战争,不论其交战国是谁,也不论其战争原因如何。 在他们绝对和平主义的宗教立场被确认之后,多数向少数作出了让步。在二次大战这样的险恶环境中,美国国会依然承认一个事实,人和人不一样,少数人有少数人的理由。美国军方为称作”良心反战者”的阿米绪作出了特殊安排,称为”替代性服役”。他们必须在军方安排的医院或工厂从事两年没有酬劳的,与战斗没有关系的工作,以代替服兵役打仗的公民义务。 阿米绪历来只在自家农庄上务农,很少外出就业。他们认为,这样的两年”替代性服役”仍然使他们被迫融入外部生活,并把外界躁动的气息带进了再洗礼派虔诚平静的生活。再洗礼派再次向国会申诉。经过长期的努力,现在的”替代性服役”改为”自愿性服役”,阿米绪可以在自己的教会管理的农庄上从事两年没有报酬的农业工作,以代替服兵役。 美国国会作出这样的决定并不容易。二次大战对所有的参战国来说,都是一场惨烈的厮杀。在战争接近尾声之前,谁也说不上胜负的必然走向。在这种非常状态下,作为一个国家的”多数”,同意用自己的血肉之躯去为一些声称是”和平主义者”的”少数”抵挡敌人的子弹,其原因仅仅是因为尊重这些”少数”的宗教信仰,假如没有理性的精神,几乎是做不到的。 美国在历史上屡有无视少数的过去,成为他们迄今为止不断反省的原因。现在你如果在美国游览的话,常可以遇到一些历史纪念牌,记载了插牌所在地发生的一段历史。有不少这样的牌子在检讨当年对待印地安人和黑人的不公正。正是这样的反省使得美国在对待少数的问题上,变得越来越谨慎,也越来越宽容。 这个国家的多数和阿米绪是有冲突的,但是双方都以理性为基础,尤其在处于多数优势的主流社会一方,逐步地在学会如何尊重少数,以达成妥协。因此,阿米绪虽然是美国公民,但是他们的公民义务和权利与一般美国人是不完全一样的。例如,美国的税收很高,这样的税收虽然不在阿米绪传统的自给自足生活方式之内,阿米绪还是依法纳税。可是另一方面,他们也和政府达成协议,他们以传统方式颐养天年,从不出现老无所养的问题,他们不享受美国的老年福利金,也就不交纳税收中用于社会养老的社会安全基金。他们也不承担美国公民的一项重要义务,就是他们不担任法庭的陪审员。因为在他们的宗教信仰里,只有上帝有权判定人们的罪孽或清白。 在历史上,美国法律与阿米绪的一次最大的冲突,是在教育领域发生的。这场冲突,很典型地反映了”少数”与”多数”在文化上的差异可以有多大。 阿米绪的传统学校是所谓”单室”学校,顾名思义,学校只有一间房间。它也只有一个老师,所有的孩子都在一起上课,为的是让他们学会互相帮助。美国人一贯认为,选择怎样教育子女,这是父母的权利,美国历史上延续至今一直有家庭学校的做法。问题是,阿米绪还认为,孩子读书到十四岁,相当于八年级,就够了,从十五岁起就应该到农田里干活了。他们认为外面的孩子从十五岁开始的高中教育对阿米绪是有害无益的。可是,教育立法和他们的教育方式直接冲突。 美国的教育管理权归属各州,对中小学最有发言权的是各地的学校理事会,有家长和教育界人士共同组成。各州的议会有教育立法权。为了维持整个社会全体民众的教育文化水平,各州议会在上世纪末就先后立法实行强制性的义务教育。各州的普及教育立法是顺应时代潮流,立足于提高全民文化水平,深得民众的支持。至今为止,我还很少听到有其他人反对义务教育法规的。当阿米绪所定居的那些州开始立法规定强制教育至十六岁时,阿米绪教徒教育自己孩子上学只到十四岁的做法违法了。 阿米绪当然也理解,州政府在教育上的强制立法,并没有恶意。但是他们认为,公立学校的教育方式,会引导他们的孩子脱离他们代代相传的宗教追求,是对他们的宗教传统的威胁。这不是没有道理的,专家们曾经指出,再洗礼派的教育,在维护传授价值观念方面起了不可低估的巨大作用。对于他们来说,能否自己教育子女,等于自己能否生存延续。 
少数不可以藉着不同意而不服从法律,这是美国的游戏规则。唯一的合法途径是申诉,而少数人的合理申诉能够得到公正的对待,也是游戏规则能够操作下去的前提之一。申诉有两种方式,一是向行政和立法分支和平请愿,二是向司法分支提出法律诉讼。阿米绪的神父们劝告大家不要和政府冲突,也不要上法庭去打官司,因为这违背了阿米绪教徒和平主义与世无争的传统。神父们决定向州立法与行政两大分支请愿,请求网开一面。 宾夕法尼亚的阿米绪把请愿书印了一千份,然后征集签名。阿米绪教徒人数虽少,可是他们捍卫信仰的精神以及和平谦卑的态度却深得宾夕法尼亚人的同情。签名者甚众。往往一个阿米绪就可以征得三千个来自外部世界的签名。他们把这些签名联成130英尺长的条幅,然后,阿米绪的代表就带着它去见州长。州长于是下令州司法部长进行调查,新的教育法是不是侵犯了宗教自由。阿米绪因此很感激这位州长,在随之而来的感恩节,他们给州长送去一篮农家产品,包括一只火鸡,一罐糖浆,还有一些苞米。但是,调查结果并没有解决问题。 阿米绪知道他们还可以走的一条路就是司法途径。但是,阿米绪不喜欢提起诉讼,他们只习惯于诉诸上帝。这一次,他们无路可走,假如不想放弃的话,唯一可行的是”以身试法”了。实际上这是一般美国人在自己的观点处于主流文化之外的时候,常常采取的方式。这一类以被动形式出现的司法挑战,常常引发对一个主流观念的质疑,甚至可能改变这样的观念。而”被动”,正是阿米绪的特点。 阿米绪从不惹事生非,但他们不送自己的孩子读高中,警察就找上门来了。美国是一个执法很严的国家。在本世纪初,就有阿米绪家长因为违反义务教育法而被捕,他们的孩子则由政府监管。为了家庭的团聚,他们要么屈从,要么被迫变卖家产,举家迁徙,有些阿米绪家庭甚至为了逃避义务教育法,迁移到遥远的墨西哥去。他们不愿意在压力下改变他们的宗教信仰和与此相随的生活方式。 宾夕法尼亚是一个有着宽容精神传统的州,我们去过的兰开斯特居住着全美第二大的阿米绪人群。州教育法通过后,一开始地方政府对如何向阿米绪执法也很困惑,所以出现了比较罕见的执法不严的情况。地方上对阿米绪少年缀学到地里干活,基本上取睁一眼闭一眼的态度。但是到了1937年,兰开斯特的教育官员觉得阿米绪传统的单室学校实在不够正规,并且没有高中教育。就计划关闭一些单室学校,以新建的公立学校作为替代。州政府的代表想说服阿米绪,”教育是通向知识的大门”。可是,这种对话建立在完全不同的文化价值体系里,自然是谁也说服不了谁。这样的文化冲突使州政府和阿米绪都深感不安和困扰。二次大战中,这个问题被搁置一边,战后又立即重提,为此又有一些阿米绪家长由于违反义务教育法而被课以罚款,甚至被捕坐牢。 他们再次向州政府请愿。直到1955年,州政府作出妥协:阿米绪人可以在自己的学校里教育子女到14岁,然后为他们设立一种专门的职业学校,阿米绪的孩子在这种职业学校受教育到法定的16岁。这种职业学校一周只上半天课,并且是由阿米绪的教师讲授传统的农业知识。这个安排是一个突破,所以在美国历史上很有名,称作”兰开斯特职业学校妥协”。这里的阿米绪完全是靠着一种不可摧毁的宗教精神韧性,和强大的多数人的政府达成妥协,嬴得了按自己的意愿教育子女的权利。美国史书上的妥协一词通常是一个正面的词。大家认为,达成妥协解决了问题,是双方共同的胜利。 由于教育立法权归属各州,所以”兰开斯特职业学校妥协”并没有解决其他州的类似冲突。问题是普遍的。在爱荷华州的布肯南县,地方政府宣布要取消单室学校,把阿米绪学童集中到新建的公立学校;而阿米绪却坚持把孩子送到他们的单室学校去。在1965年11月的一天早晨,县上的教育官员带着警察,开着校车,来到一所阿米绪的单室学校,要把孩子们押上校车送往新学校。单室学校只有一个阿米绪老师,无可奈何地看着孩子们列队出去。突然,不知是谁喊了一声。没等警察和官员回过神来,孩子们象炸了窝一样,拼命地冲向附近一望无际的苞米地,刹那间就消失在青纱帐里。原来是一个阿米绪孩子喊了声”快跑!”,他用的是阿米绪的高地日尔曼语,其他人谁也听不懂。 随行的新闻记者凭着职业反应,迅速拍下了穿着黑色衣衫的大小男孩女孩,象兔子一样惊慌逃向田野的背影。这张照片后来非常有名,因为这个问题后来终于在全美国人民同情的目光下,走向了联邦最高法院。 在有些地方的阿米绪学校,学童们被成功地押上校车转学,两种不同价值相遇形成十分荒诞的后果。一个为了提高孩子教育水平的法律,实行中却场景凄凉。阿米绪孩子们唱着”上帝爱我”,母亲们无声地哭泣,父亲们则青着脸,默默地站立一旁。阿米绪人依然奉行和平主义的原则。但正是这种沉默,谦和然而执着的态度,以及美国民众对于”多数与少数”关系的反省,使得阿米绪的教育事件开始走向全美国。来自全国各地的私人捐款涌向布肯南县,要求替阿米绪人偿付罚款。这种同情和抗议给爱荷华州州长带来很大的压力,但是他作为行政长官无权修改法律。他只能在他的职权范围内,宣布暂停执行合并学校三周,同时请求拥有立法权的州议会,考虑立法豁免阿米绪的强制教育。1967年,爱荷华州议会将豁免权授予州行政部门的教育官员,从而,爱荷华州的阿米绪也嬴得了以自己的方式教育子女的权利。 就在这个时候,一个叫林德赫姆的人挺身而出。他不是阿米绪,而是路德教会的一个牧师。他了解了阿米绪在教育问题上的遭遇以后,认为阿米绪的宗教自由受到了侵犯。1967年3月,他在芝加哥大学一个有关公共教育立法的学术会议上,呼吁关心阿米绪宗教自由权利的人伸出援手。一个叫做”阿米绪宗教自由全国理事会”的组织就这样诞生了,林德赫姆担任了这个理事会的主席。这个组织不仅有律师,学者,还有基督教和犹太教的宗教领袖。 这时全国关注的目光移到了堪萨斯州,那里有阿米绪的家长被捕,还有人在法庭上被定罪。堪萨斯州态度强硬地宣布,类似宾夕法尼亚州的”职业学校妥协”的做法,在堪萨斯州将是非法的。州一级不打算妥协。林德赫姆的组织曾经试图把案子上诉到联邦法院,但是联邦最高法院拒绝接案,其原因是美国”分权”的制度。在这个制度下,教育管理是留归各州的权力,联邦政府无权干涉。因此,联邦法庭也就没有这些案子的司法权。 于是,那里的阿米绪又决定迁徙。不少人就这样迁到了威斯康辛州的格林县。但是到了1968年秋天,这儿也开始严格执行教育法规。又有两家阿米绪面临被捕,被指控的罪名就是没有送孩子上高中。1968年圣诞节前夜,林德赫姆和一个叫鲍尔的律师,在请求威斯康辛州政府豁免阿米绪遭到拒绝后,决定在格林县的法庭,代表阿米绪向州政府打官司。告州政府侵犯阿米绪的宗教自由。可是,官司输了。地方法庭认为,虽然可以说州政府侵犯了阿米绪的宗教自由,但是普及教育涉及全体公民的长远利益。这一利益压倒了少数人的宗教权利。 这在美国是经常发生的事情,就是在两个法律条款发生冲突的时候,必须判断何者为先。出现这样的法律悖论的时候,一般总是要走到联邦最高法院,因为最高法院具有”司法复审权”。这正是鲍尔律师想要达到的目的。他不是打算在地方法院就打嬴这场官司,他甚至知道他会输。但是他要开辟一条司法渠道。鲍尔先上诉到威斯康辛州最高法院,州最高法院推翻了地方法院的判决,法官说,能够压倒少数人宗教自由权利的所谓全体人民的利益是不存在的,阿米绪选择八年教育并没有损害社会。 于是案子的被告,威斯康辛州政府的行政分支,开始向联邦最高法院上诉。前一次,案件的性质是判定阿米绪孩子的教育管理问题,这个问题联邦法院没有司法权。可是现在,案件的性质是全国性的民间团体代表百姓控告州政府侵犯宗教自由,也就成了州教育法规是否违宪的问题,这属于联邦最高法院的审理范围。于是,这一次,联邦最高法院接受了这个叫做”威斯康辛诉约德尔等”的案子。鲍尔律师出庭辩论,一些从不抛头露面的阿米绪也默默来到首都华盛顿,听候决定他们命运的判决。他们还是一袭传统阿米绪的黑色服装。黑色的背影衬映在最高法院白色大理石建筑的背景上,使我们今天看到这张过时的新闻照片时,依然有惊心动魄的感觉。 1972年年底的一天,最高法院大法官以压倒多数作出了有利于阿米绪的判决。首席大法官沃伦在判词中指出,现代中等教育所教授的内容和价值同阿米绪宗教生活的根本方式有尖锐的冲突,强制实行的教育法规侵犯了阿米绪教徒的宗教自由权利。 在最高法院的判词中,沃伦大法官写下了如下这段现在还常常被人引用的话: 
”我们不可忘记,在中世纪,西方世界文明的很多重要价值是由那些在巨大困苦下远离世俗影响的宗教团体保存下来的。没有任何理由假设今天的多数就是‘正确’的而阿米绪和类似他们的人就是‘错误’的。一种与众不同甚至于异僻的生活方式如果没有干涉别人的权利或利益,就不能仅仅因为它不同于他人就遭受谴责。” 最高法院的判决,一劳永逸地解决了各州与阿米绪在教育问题上的冲突,沃伦大法官的判词,更是对美国人民长久以来的思考和反省,作出了一个总结。民主制度要求少数服从多数,同时要求多数不能压迫少数,不能侵犯少数的自由和权利。要做到这一点,在制度的设计上,一开始就要为持不同意愿的少数预留下申诉,辩解和反抗的渠道。在百分九十九点九的一致之下,仍要为百分之零点一的异见留下呼吸的空间。这也是美国法律强调个人的宪法权利必须归属个人,而政府”不得立法”侵犯这种权利的根本原因。 如果法律不打算保护千分之一万分之一,也就保护不了”百分之五”,那么,”多数”本身也就都潜在地岌岌可危。我们曾经习惯于法律对”百分之五”的不予保护,这是因为,当我们身处”多数”之中,我们理所当然地认为,”多数”就是对的,我们只知道庆幸自己不是少数。谁也没有想过,今天你不挺身而出保护你所不同意甚至不喜欢的百分之五,你怎么有把握下一次你不在另一个百分之五中呢?今天你看到与你无关的百分之五遭受的不公正扭过了头去,下一次轮到你的时候,你还向谁去呼喊呢? 一个社会要发动成千上万的人并不难,要达到多数人的一致也不难,难的是公正善待只有百分之几的少数。有时候,少数显得如此人微言轻,他们的生死存亡是如此地微不足道,可是,一个制度能否保证这微乎其微的少数得到公平的善待,恰恰是检验这个社会是否文明和人道的试金石,也是决定这个制度能否长治久安的一个关键。 也许,最平凡的阿米绪正默默地以他们的存在,在给人类讲述着一个并非无足轻重的故事。 燕归来 “日拱一”小卒 30楼 大 中 小 发表于 2010-2-21 10:20 只看该作者 为什么要重发旧文? 儿童本身不具有完全的自由权利,这是任何一个社会的共识,家长强制孩子以军事化的方式摆脱贫穷,和家长强制儿童拒绝现代文明,看似矛盾的背后,实际上是社会对个体权利的保护的高度统一。 主贴的作者简单就把穷人都代表了,呵呵。 [ 本帖最后由 燕归来 于 2010-2-21 10:37 编辑 ] gundamwang 王敢达 31楼 大 中 小 发表于 2010-2-21 10:22 只看该作者 25楼立功了。不过8楼的分析也挺不错的。 nkpoper 32楼 大 中 小 发表于 2010-2-21 10:26 只看该作者 穷人的普世价值(如果把他们最希望得到的东西称为他们的普世价值的话)不是不当穷人。 不当穷人,有一种最简单的方法:弄根鞭子在他屁股上很抽,只要偷懒就抽。 只要这么抽,至少美国的大部分穷人是当不成穷人的。 穷人的普世价值是:不干活也能过得不错(福利社会)。退而求其次才是:不当穷人。 毕竟好逸恶劳是人类的本性,大多数人必然如此。 北京棋迷 自由主义,一抓就灵 33楼 大 中 小 发表于 2010-2-21 13:08 只看该作者 我严重不同意你这个看法,五毛一定否定普世价值的普适性,但笼统的否定普适性的人绝对不一定是五毛,更何况对所谓普世价值的具体内容的看法千差万别 -- 是根本否认有普世价值,还是对普世价值认定的内容和你不同,或者是对普世价值的存在持不可知论?所谓普世价值,就和资本主义是不是人类社会形态的最终阶段一样,根本就不会有什么绝对正确和明确的答案,既然如此,对其任何的质疑与思考,只要言之有理,就是有益的,否则普世价值又会成为取代共产主义一类的新的教条 -- 是不是中国人特别喜欢教条,喜欢用一种简单的意识形态框架去套现实事物 -- 无论是市场教,还是民主教自由教,哪怕普世价值,一旦成为教条就会变成膜拜对象,这样,普世价值的信徒就和日月神教相差不远了。 引用: > 原帖由 花想容 于 2010-2-20 10:32 发表 ![](https://1984bbs.com/images/common/back.gif) > > 我知道文章的要害在否定普世价值的普适性,凡是为文目的是这个的,都可以鉴定为五毛。现在想来,这其实这只是“人权就是生存权”拿什么KIPP做例子包装了一下唬人。 北京棋迷 自由主义,一抓就灵 34楼 大 中 小 发表于 2010-2-21 13:11 只看该作者 这个说法不对,普世价值的支持者还把全世界的人都代表了呢,他们征求世界人民的同意了么? 引用: > 原帖由 燕归来 于 2010-2-20 21:20 发表 ![](https://1984bbs.com/images/common/back.gif) > > 儿童本身不具有完全的自由权利,这是任何一个社会的共识,家长强制孩子以军事化的方式摆脱贫穷,和家长强制儿童拒绝现代文明,看似矛盾的背后,实际上是社会对个体权利的保护的高度统一。 > > 主贴的作者简单就把穷人 ... 花想容 依据用户管理细则,账号永久停用。 35楼 大 中 小 发表于 2010-2-21 13:12 只看该作者 回复 33楼 北京棋迷 的话题 没反对质疑、反思,但具体到你转的这篇文章,为匪宣传的目的性也太明显了。 其实他如果忍住不谈中国的事情,还可以不至于露馅,但不谈中国就拿不到那五毛钱了,纠结啊。 花想容 依据用户管理细则,账号永久停用。 36楼 大 中 小 发表于 2010-2-21 13:14 只看该作者 回复 34楼 北京棋迷 的话题 如果这个人是对的,你对皮诺切特的立场就是错的,你没有发现这个矛盾么?皮诺切特就是给智利”设计了一个类似于 KIPP 学校的系统“,按这个作者的说法,符合”穷人的价值“。 光明的格里高利 八卦爱好者 37楼 大 中 小 发表于 2010-2-21 13:19 只看该作者 白痴也有普世价值 杀人犯也有普世价值 嘿嘿 燕归来 “日拱一”小卒 38楼 大 中 小 发表于 2010-2-21 13:29 只看该作者 引用: > 原帖由 北京棋迷 于 2010-2-21 13:11 发表 > ![](https://1984bbs.com/images/common/back.gif) > 这个说法不对,普世价值的支持者还把全世界的人都代表了呢,他们征求世界人民的同意了么? > > 谁说普世价值就把全世界代表了,什么是普世价值? 如果说我的生存权是普世价值,这是全世界人民都认同的,没有代表不代表的说法。这种东西才是普世价值。 但是本文中调查了吗?是不是穷人都想上那种学校? 上海帅哥 SC截图党1984BBS支部书记,御祥瑞免家宅平安 39楼 大 中 小 发表于 2010-2-21 13:34 只看该作者 突然想起某人对西方教育(准确的说是英国教育)和中国教育的总结 “西方是压抑本性释放思想,我们的是放纵本性压抑思想。结果出不了独立思考的人” (这文章主要讲的是贵族教育中对人本性尤其是恶的抑制,用过严肃的行为管制达到,而对各类思想却很宽容,结果产生了有礼有节绅士。我们则相反通过对恶的本性的忽略乃至引诱。“比方说“颜如玉和黄金屋”引导教育。结果出来的个个都是充满戾气短见的势利者) 再世关羽 40楼 大 中 小 发表于 2010-2-21 13:37 只看该作者 引用: > 原帖由 上海帅哥 于 2010-2-21 13:34 发表 > ![](https://1984bbs.com/images/common/back.gif) > 突然想起某人对西方教育(准确的说是英国教育)和中国教育的总结 > “西方是压抑本性释放思想,我们的是放纵本性压抑思想。结果出不了独立思考的人” > (这文章主要讲的是贵族教育中对人本性尤其是恶的抑制,用过严肃 ... 中国现在这么穷,歧视商人的思想长期占据主导地位,现在的仇富思想还是非常严重。你觉得在这种情况下,重利的教育是错误的? 
再世关羽 41楼 大 中 小 发表于 2010-2-21 13:37 只看该作者 老百姓爱钱,比老百姓不爱钱好得多。因为人爱钱,就很难被什么马克思主义之类的歪理邪说蛊惑,就算被蛊惑了,拿出钱放在他们的面前,他们也会走回正道的 [ 本帖最后由 再世关羽 于 2010-2-21 13:38 编辑 ] 蒙面佐罗 42楼 大 中 小 发表于 2010-2-21 13:37 只看该作者 引用: > 原帖由 花想容 于 2010-2-21 13:14 发表 ![](https://1984bbs.com/images/common/back.gif) > 如果这个人是对的,你对皮诺切特的立场就是错的,你没有发现这个矛盾么?皮诺切特就是给智利”设计了一个类似于 KIPP > 学校的系统“,按这个作者的说法,符合”穷人的价值“。 皮诺切特是强制,而这种学校你是爱上不上的。 蒙面佐罗 43楼 大 中 小 发表于 2010-2-21 13:39 只看该作者 引用: > 原帖由 nkpoper 于 2010-2-21 10:26 发表 > ![](https://1984bbs.com/images/common/back.gif) > 穷人的普世价值(如果把他们最希望得到的东西称为他们的普世价值的话)不是不当穷人。 > 不当穷人,有一种最简单的方法:弄根鞭子在他屁股上很抽,只要偷懒就抽。 > 只要这么抽,至少美国的大部分穷人是当不成穷人的。 ... 南方棉花田里的黑人被抽了N年也没富裕起来,今天中国农民工也是一样 北京棋迷 自由主义,一抓就灵 44楼 大 中 小 发表于 2010-2-21 14:56 只看该作者 你以为我关心这个人的观点到底是对是错么?看标题。 引用: > 原帖由 花想容 于 2010-2-21 00:14 发表 ![](https://1984bbs.com/images/common/back.gif) > 如果这个人是对的,你对皮诺切特的立场就是错的,你没有发现这个矛盾么?皮诺切特就是给智利”设计了一个类似于 KIPP > 学校的系统“,按这个作者的说法,符合”穷人的价值“。 北京棋迷 自由主义,一抓就灵 45楼 大 中 小 发表于 2010-2-21 15:02 只看该作者 如果普世价值仅仅是生存权,你以为还会有这么多争论么?如果普世价值只有生存权,他说的穷人的普世价值就是不再贫穷就是个在合理不过的引申。他这篇文章显然是针对所谓普世价值中那些灰色区域说的,比如自由,比如民主。 第二,普世价值不代表全世界,那他还是普世的么?普世价值本身就有争议,你要是连字面含义都另起炉灶那任务也太繁重了吧? 说道调查,还是那句话,你说穷人想脱贫没有普遍调查所以不算普世价值,那精英口中的所谓民主和自由这类普世价值做过全世界范围的调查么?连调查都没有,怎么敢上嘴皮一碰下嘴皮给他按上一个’普世‘这么大的帽子呢? 引用: > 原帖由 燕归来 于 2010-2-21 00:29 发表 ![](https://1984bbs.com/images/common/back.gif) > > 谁说普世价值就把全世界代表了,什么是普世价值? > > 如果说我的生存权是普世价值,这是全世界人民都认同的,没有代表不代表的说法。这种东西才是普世价值。 > > 但是本文中调查了吗?是不是穷人都想上那种学校? 风餐露宿 46楼 大 中 小 发表于 2010-2-21 15:21 只看该作者 引用: > 原帖由 花想容 于 2010-2-20 23:16 发表 ![](https://1984bbs.com/images/common/back.gif) > 确实是高水平的五毛文啊,看得我怒火满腔却又批判不了,除了西藏部分他明显有破绽以外。 > 不过,这样justufy压抑个性做法的人,在我的分类里是右派,不是左派。左派岂能为五斗米折腰? > 五毛在升级为五元,大家要警惕 ... 对,这个justufy的做法明显是右翼的做法,而且接近极右了,即不惜花费巨大代价追求实现一种传统上大家都能接受的秩序。而非花费巨大代价去建立一个新的乌托邦。他之所以不是极右只是因为这种学校毕竟没要求每个人都上。 [ 本帖最后由 风餐露宿 于 2010-2-21 15:23 编辑 ] 彩虹咖啡馆 47楼 大 中 小 发表于 2010-2-21 15:28 只看该作者 引用: > 原帖由 hsmsheep 于 2010-2-21 09:52 发表 > ![](https://1984bbs.com/images/common/back.gif) > > 从wiki来看,论据不实;从文中逻辑来看,用“穷人通过集权(主要是kipp系统)达到了高升学率,摆脱贫穷。作者认同。”这个例子来类比“三十年前的中国也通过集权达到高生产率,摆脱贫穷。作者亦认同。”作者的逻辑是 > ... 30年发展的原因是当局放了一点权从而民间多了一点自由的结果。 燕归来 “日拱一”小卒 48楼 大 中 小 发表于 2010-2-21 15:42 只看该作者 引用: > 原帖由 北京棋迷 于 2010-2-21 15:02 发表 > ![](https://1984bbs.com/images/common/back.gif) > > 如果普世价值仅仅是生存权,你以为还会有这么多争论么?如果普世价值只有生存权,他说的穷人的普世价值就是不再贫穷就是个在合理不过的引申。他这篇文章显然是针对所谓普世价值中那些灰色区域说的,比如自由,比如民 > ... 逻辑混乱了吧,给你解释一下: 命题一,普世价值是存在的,比如生存权。 命题二,普世价值不能被随便代表,就自由主义而言,普世价值是消极的、薄的,所以,精英也不能随便提出普世价值。应该达到足够共识才能称为普世价值,参考命题一。 命题三,有穷人通过军事化教育来改善经济状况,既不是普世价值,也无法被证明是所有穷人(甚至大多数)的观点。 主贴的标题“穷人的普世价值”,第一放大了普世价值的范围,第二,自我代表了所有的穷人,整个文章颠倒黑白之处一目了然。 everyman88 49楼 大 中 小 发表于 2010-2-21 15:44 只看该作者 引用: > 原帖由 北京棋迷 于 2010-2-21 15:02 发表 > ![](https://1984bbs.com/images/common/back.gif) > > 如果普世价值仅仅是生存权,你以为还会有这么多争论么?如果普世价值只有生存权,他说的穷人的普世价值就是不再贫穷就是个在合理不过的引申。他这篇文章显然是针对所谓普世价值中那些灰色区域说的,比如自由,比如民 > ... "不再贫穷"不是一种价值观。 穷人"不再贫穷"的愿望与"民主,自由,平等"等价值观是两个范畴的概念,且也并不必然矛盾。 "自由,民主,平等"并不因为有人拒绝接受而失去其普世性。普世是指这种价值观对人类发展与进步的作用,而不是对某些人生活的作用--这是对价值观的庸俗化。 这里只针对楼主的观点,不针对楼主转的文章,因为这种建立在谎言基础上的文章是不值得讨论的。 [ 本帖最后由 everyman88 于 2010-2-21 15:47 编辑 ] 蒙面佐罗 50楼 大 中 小 发表于 2010-2-21 16:41 只看该作者 引用: > 原帖由 everyman88 于 2010-2-21 15:44 发表 > ![](https://1984bbs.com/images/common/back.gif) > > > "不再贫穷"不是一种价值观。 > 穷人"不再贫穷"的愿望与"民主,自由,平等"等价值观是两个范畴的概念,且也并不必然矛盾。 > "自由,民主,平等"并不因为有人拒绝接受而失去其普世性。普世是指这种价值观对人类发展 ... 既然是普世,就是对每个人都管用,否则怎么叫普世? 独立宣言讲到三个人人平等拥有的权利:生命权,自由权和追求幸福的权利,追求财富怎么就不是价值观呢?这就是说这三个权利都是绝对的不可以质疑的,这就是普世的基础。但这三个权利本身就是相互冲突,有人爱财如命胜过自由,有人两者皆可抛,都无可指责。正因为不可质疑,绝对普世,而且相互冲突,所以选择权应留给个人,家庭这种社会基本单位个体,将一种价值凌驾于其他价值之上,号令天下莫敢不从,那才是法西斯,无论这个价值是什么生存权还是自由 76 12››
12.99819
364
0.772158
yue_Hant
0.53524
0c06fd7941f85fa37b55b430ff2af49e26d757b9
1,340
markdown
Markdown
_posts/2018-04-26-ortiz-impressoes-digitais.markdown
davidmenezes23/uruguaiana
a43279b41e40ede96a712ee31d4732ce75610591
[ "MIT" ]
null
null
null
_posts/2018-04-26-ortiz-impressoes-digitais.markdown
davidmenezes23/uruguaiana
a43279b41e40ede96a712ee31d4732ce75610591
[ "MIT" ]
null
null
null
_posts/2018-04-26-ortiz-impressoes-digitais.markdown
davidmenezes23/uruguaiana
a43279b41e40ede96a712ee31d4732ce75610591
[ "MIT" ]
null
null
null
--- layout: post title: Ortiz Impressões Digitais feature-img: "assets/img/thumbnails/ortiz-impressoes-digitais.png" thumbnail: "assets/img/thumbnails/ortiz-impressoes-digitais.png" permalink: /:categories/:title.html categories: [uruguaiana] tags: uruguaiana uruguaianacomercio cidade: ["Uruguaiana"] estrelas: 0-estrelas --- Empresa de produtos personalizados: camiseta, caneca, garrafas, squezzes, chinelos, almofadas, chaveiros, canecas de chopp.<!-- more --><br/> <br/> <b>Endereço: </b>Avenida ibicuí, quadra b, numero 0019<br /> <b>Cidade: </b>Uruguaiana, RS<br /> <b>Telefone: <span style="color: #00ab3a;">(55) 3411-7889</span> <a href="tel:5534117889"><button class="ligar">Ligar</button></a></b><br /> <b>Telefone: <span style="color: #00ab3a;">(55) 99677-2293</span> <a href="tel:55996772293"><button class="ligar">Ligar</button></a></b><br /> <br /> <div style="font-size: larger; text-align: center;"> Localização</div> <iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d2911.8433626789915!2d-57.06513717902338!3d-29.783860209036945!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x94535afc73201243%3A0xe1e13893992c6eda!2sR.+Ibicu%C3%AD%2C+19+-+Centro%2C+Uruguaiana+-+RS%2C+97501-971!5e0!3m2!1spt-BR!2sbr!4v1524829837260" width="100%" height="400" frameborder="0" style="border:0" allowfullscreen></iframe>
60.909091
405
0.740299
por_Latn
0.204644
0c070d284f08ca9f890905cb1685b65b9eaa12f3
12,373
md
Markdown
sync.md
threepointone/react-act-examples
06d382801a9de840cbc6ec18023206705106a354
[ "MIT" ]
499
2019-02-17T05:15:30.000Z
2022-03-20T06:29:29.000Z
sync.md
gopal1166/react-act-examples
06d382801a9de840cbc6ec18023206705106a354
[ "MIT" ]
7
2019-02-18T20:30:33.000Z
2020-06-19T21:53:26.000Z
sync.md
gopal1166/react-act-examples
06d382801a9de840cbc6ec18023206705106a354
[ "MIT" ]
43
2019-02-18T01:36:45.000Z
2022-01-16T23:32:25.000Z
(I'm still editing and making changes to this document, but feel free to read; I hope it's useful to you!) ## secrets of the `act(...)` api tl;dr: wrap your test interactions with `act(() => ...)`. React will take care of the rest. Note that for `async act(...)` you need React version at least [v16.9.0-alpha.0](https://github.com/facebook/react/releases/tag/v16.9.0-alpha.0). ### effects Let's start with a simple component. It's contrived and doesn't do much, but is useful for this discussion. ```jsx function App() { let [ctr, setCtr] = useState(0); useEffect(() => { setCtr(1); }, []); return ctr; } ``` So, it's an `App` with 2 hooks - a `useState` initialized with `0`, and a `useEffect` which runs only once, setting this state to `1`. We'll render it to a browser like so: ```jsx ReactDOM.render(<App />, document.getElementById("app")); ``` You run it, and you see `1` on your screen. This makes sense to you - the effect ran immediately, updated the state, and that rendered to your screen. So you write a test for this behaviour, in everyone's favourite testing framework, [jest](https://jestjs.io/): ```jsx it("should render 1", () => { const el = document.createElement("div"); ReactDOM.render(<App />, el); expect(el.innerHTML).toBe("1"); // this fails! }); ``` You run your tests, and oops 😣 ![screenshot of the test failing](https://user-images.githubusercontent.com/18808/52912654-441c9b80-32ac-11e9-9112-50b9329feebb.png) That doesn't seem right. The value of `el.innerHTML` claims to be `0`. But how can that be? Does jest do something strange? Or are you just hallucinating? The docs for useEffect make this a bit clearer - "By using this Hook, you tell React that your component needs to do something **after render**". How did you never see `0` in the browser, even for a single moment? To understand this, let's talk a bit about how React works. Since the big fiber rewrite of yore, React doesn't just 'synchronously' render the whole UI every time you poke at it. It divides its work into chunks (called, er, 'work' 🙄), and queues it up in a scheduler. In the component above, there are a few pieces of 'work' that are apparent to us: - the 'first' render where react outputs `0`, - the bit where it runs the effect and sets state to `1` - the bit where it rerenders and outputs `1` <img width="609" alt="a timeline of how react would schedule this work in a single browser frame. our test runs in the middle of this work, so misses later updates to the dom" src="https://user-images.githubusercontent.com/18808/52914771-3ecb4b00-32c4-11e9-9923-c577f371a4aa.png" /> We can now see the problem. We run our test at a point in time when react hasn't even finished updating the UI. You _could_ hack around this: - by using `useLayoutEffect` instead of `useEffect`: while this would pass the test, we've changed product behaviour for no good reason, and likely to its detriment. - by waiting for some time, like 100ms or so: this is pretty ick, and might not even work depending on your setup. Neither of these solutions is satisfying; we can do much better. In 16.8.0, we introduced a new testing api `act(...)`. It guarantees 2 things for any code run inside its scope: - any state updates will be executed - any enqueued effects will be executed Further, React will warn you when you try to "set state" _outside of the scope of an `act(...)` call_. 
(ie - when you call the 2nd return value from a `useState`/`useReducer` hook) Let's rewrite our test with this new api: ```jsx it("should render 1", () => { const el = document.createElement("div"); act(() => { ReactDOM.render(<App />, el); }); expect(el.innerHTML).toBe("1"); // this passes! }); ``` Neat, the test now passes! In short, "act" is a way of putting 'boundaries' around those bits of your code that actually 'interact' with your React app - these could be user interactions, apis, custom event handlers and subscriptions firing; anything that looks like it 'changes' something in your ui. React will make sure your UI is updated as 'expected', so you can make assertions on it. <img width="559" alt="a timeline like before, except this time all the work is bunched into one group, and we show how the test assertions happen after it" src="https://user-images.githubusercontent.com/18808/52914772-3ecb4b00-32c4-11e9-99c4-4915af46c149.png" /> (You can even nest multiple calls to `act`, composing interactions across functions, but in most cases you wouldn't need more than 1-2 levels of nesting.) ### events Let's look at another example; this time, events: ```jsx function App() { let [counter, setCounter] = useState(0); return <button onClick={() => setCounter(counter + 1)}>{counter}</button>; } ``` Pretty simple, I think: A button that increments a counter. You render this to a browser like before. ![a gif of a button being clicked, whose contents go from 0 to 10](https://user-images.githubusercontent.com/18808/52912742-64992580-32ad-11e9-8e1b-70e24d6329ee.gif) So far, so good. Let's write a test for it. ```jsx it("should increment a counter", () => { const el = document.createElement("div"); document.body.appendChild(el); // we attach the element to document.body to ensure events work ReactDOM.render(<App />, el); const button = el.childNodes[0]; for (let i = 0; i < 3; i++) { button.dispatchEvent(new MouseEvent("click", { bubbles: true })); } expect(button.innerHTML).toBe("3"); }); ``` This 'works' as expected. The warning doesn't fire for setStates called by 'real' event handlers, and for all intents and purposes this code is actually fine. But you get suspicious, and because Sunil told you so, you extend the test a bit - ```jsx act(() => { for (let i = 0; i < 3; i++) { button.dispatchEvent(new MouseEvent("click", { bubbles: true })); } }); expect(button.innerHTML).toBe("3"); // this fails, it's actually "1"! ``` The test fails, and `button.innerHTML` claims to be "1"! Well shit, at first, this seems annoying. But `act` has uncovered a potential bug here - if the handlers are ever called close to each other, it's possible that the handler will use stale data and miss some increments. The 'fix' is simple - we rewrite the 'setState' call with the updater form ie - `setCounter(x => x + 1)`, and the test passes. This demonstrates the value `act` brings to grouping and executing interactions together, resulting in more 'correct' code. Yay, thanks `act`! ### timers Let's keep going. How about stuff based on timers? Let's write a component that 'ticks' after one second. ```jsx function App() { const [ctr, setCtr] = useState(0); useEffect(() => { setTimeout(() => setCtr(1), 1000); }, []); return ctr; } ``` Let's write a test for this: ```jsx it("should tick to a new value", () => { const el = document.createElement("div"); act(() => { ReactDOM.render(<App />, el); }); expect(el.innerHTML).toBe("0"); // ??? expect(el.innerHTML).toBe("1"); }); ``` What could we do here? 
Option 1 - Let's lean on jest's [timer mocks](https://jestjs.io/docs/en/timer-mocks). ```jsx it("should tick to a new value", () => { jest.useFakeTimers(); const el = document.createElement("div"); act(() => { ReactDOM.render(<App />, el); }); expect(el.innerHTML).toBe("0"); jest.runAllTimers(); expect(el.innerHTML).toBe("1"); }); ``` ![a screenshot of jest's output - showing that the test passed, but a warning appeared as well](https://user-images.githubusercontent.com/18808/52912877-885d6b00-32af-11e9-9a0b-0ba4f9adc756.png) Better! We were able to convert asynchronous time to be synchronous and manageable. We also get the warning; when we ran `runAllTimers()`, the timeout in the component resolved, triggering the setState. Like the warning advises, we mark the boundaries of that action with `act(...)`. Rewriting the test - ```jsx it("should tick to a new value", () => { jest.useFakeTimers(); const el = document.createElement("div"); act(() => { ReactDOM.render(<App />, el); }); expect(el.innerHTML).toBe("0"); act(() => { jest.runAllTimers(); }); expect(el.innerHTML).toBe("1"); }); ``` Test passes, no warnings, huzzah! Good stuff. Option 2 - Alternatively, let's say we wanted to use 'real' timers. This is a good time to introduce the asynchronous version of act. Introduced in 16.9.0-alpha.0, it lets you define an asynchronous boundary for `act()`. Rewriting the test from above - ```jsx it("should tick to a new value", async () => { // a helper to use promises with timeouts function sleep(period) { return new Promise(resolve => setTimeout(resolve, period)); } const el = document.createElement("div"); act(() => { ReactDOM.render(<App />, el); }); expect(el.innerHTML).toBe("0"); await act(async () => { await sleep(1100); // wait *just* a little longer than the timeout in the component }); expect(el.innerHTML).toBe("1"); }); ``` Again, tests pass, no warnings. Excellent! This simplifies a lot of rough edges with testing asynchronous logic in components. You don't have to mess with fake timers or builds anymore, and can write tests more 'naturally'. As a bonus, it will (eventually) be compatible with concurrent mode! While it's less restrictive than the synchronous version, it supports all its features, but in an async form. The api makes some effort to make sure you don't interleave these calls, maintaining a tree-like shape of interactions at all times. ### promises Let's keep going. This time, let's use promises. Consider a component that fetches data with, er, `fetch` - ```jsx function App() { let [data, setData] = useState(null); useEffect(() => { fetch("/some/url").then(setData); }, []); return data; } ``` Let's write a test again. This time, we'll mock `fetch` so we have control over when and how it responds: ```jsx it("should display fetched data", () => { // a rather simple mock, you might use something more advanced for your needs let resolve; function fetch() { return new Promise(_resolve => { resolve = _resolve; }); } const el = document.createElement("div"); act(() => { ReactDOM.render(<App />, el); }); expect(el.innerHTML).toBe(""); resolve(42); expect(el.innerHTML).toBe("42"); }); ``` The test passes, but we get the warning again. Like before, we wrap the bit that 'resolves' the promise with `act(...)` ```jsx // ... expect(el.innerHTML).toBe(""); await act(async () => { resolve(42); }); expect(el.innerHTML).toBe("42"); // ... ``` This time, the test passes, and the warning's disappeared. Brilliant. 
Of note, even though it might appear like `resolve(42)` is synchronous, we use the async version to make sure microtasks are flushed before releasing scope, preventing the warning. Neat. ### async / await Now, let's do _hard mode_ with `async/await`. :( Haha, just joking, this is now as simple as the previous examples, now that we have the asynchronous version to capture the scope. Revisiting the component from the previous example - ```jsx function App() { let [data, setData] = useState(null); async function somethingAsync() { // this time we use the await syntax let response = await fetch("/some/url"); setData(response); } useEffect(() => { somethingAsync(); }, []); return data; } ``` And run the same test on it - ```jsx it("should display fetched data", async () => { // a rather simple mock, you might use something more advanced for your needs let resolve; function fetch() { return new Promise(_resolve => { resolve = _resolve; }); } const el = document.createElement("div"); act(() => { ReactDOM.render(<App />, el); }); expect(el.innerHTML).toBe(""); await act(async () => { resolve(42); }); expect(el.innerHTML).toBe("42"); }); ``` Literally the same as the previous example. All good and green. Niccce. --- Notes: - if you're using `ReactTestRenderer`, you should use `ReactTestRenderer.act` instead. - we can reduce some of the boilerplate associated with this by integrating `act` directly with testing libraries; [react-testing-library](https://github.com/kentcdodds/react-testing-library/) already wraps its helper functions by default with act, and I hope that enzyme, and others like it, will do the same.
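As a footnote to that first note: here is a minimal sketch (not from the original article) of the very first test written against `react-test-renderer`, using `TestRenderer.act` instead of the `react-dom/test-utils` version. It assumes the `App` component from the effects section above.

```jsx
import TestRenderer from "react-test-renderer";

it("should render 1 (test renderer)", () => {
  let root;
  TestRenderer.act(() => {
    root = TestRenderer.create(<App />);
  });
  // App renders a bare text node, so toJSON() yields the string "1"
  expect(root.toJSON()).toBe("1");
});
```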
37.04491
546
0.695062
eng_Latn
0.988235
0c07c0bd4b1b1f8df91fbf442cc22aa822726958
21
md
Markdown
docs/meta.md
evandandrea/altair
fe37bf7615bd683387fd99a1f9a0fa5f8f5b3856
[ "MIT" ]
null
null
null
docs/meta.md
evandandrea/altair
fe37bf7615bd683387fd99a1f9a0fa5f8f5b3856
[ "MIT" ]
null
null
null
docs/meta.md
evandandrea/altair
fe37bf7615bd683387fd99a1f9a0fa5f8f5b3856
[ "MIT" ]
null
null
null
--- layout: meta ---
5.25
12
0.47619
vie_Latn
0.613542
0c0911b0107d8d50635f7ee3e17665185f71c1de
2,481
md
Markdown
docs/src/markdown/settings/color_picker.md
EatBreatheCode/sublime_color_helper
701d05807e51ce49088b1f0a8f55c4bf3845fa91
[ "MIT" ]
null
null
null
docs/src/markdown/settings/color_picker.md
EatBreatheCode/sublime_color_helper
701d05807e51ce49088b1f0a8f55c4bf3845fa91
[ "MIT" ]
null
null
null
docs/src/markdown/settings/color_picker.md
EatBreatheCode/sublime_color_helper
701d05807e51ce49088b1f0a8f55c4bf3845fa91
[ "MIT" ]
null
null
null
# Color Picker ## `enable_color_picker` Enables the ability to launch the color picker from the tooltip. By default, the internal color picker will be used. ```js // Enable color picker option. Will use native color picker // unless "use_color_picker_package" is enabled and external // package is installed. "enable_color_picker": true, ``` ## `use_os_color_picker` If you wish to use your native color picker available with your OS, you can set this option to `#!py3 True`. This will use the native color picker on macOS and Windows. If on Linux, and you have [`kcolorchooser`][kcolorchooser] installed (KDE not required) and in your command line path, then it will be used. ```js // Use native, OS specific color pickers. Linux requires `kcolorchooser`. "use_os_color_picker": false, ``` ## `enabled_color_picker_modes` By default color pickers are available in the sRGB, HSL, and HSV color spaces. sRGB actually just uses the HSL color space picker with sRGB sliders. In addition to the default color pickers, one can enable HWB (HSL color picker with HWB sliders), Okhsl, Okhsv, and HSLuv, which are alternatives to the HSL and HSV color spaces. Okhsl and Okhsv are derived from Oklab, and HSLuv is derived from CIE Luv. ```js // Enable the preferred color picker options: `srgb`, `hsl`, `hsv`, `hwb`, `okhsl`, `okhsv`, `hsluv` // If no valid spaces are specified, `srgb` will be used. "enabled_color_picker_modes": ["srgb", "hsl", "hsv"], ``` ## `auto_color_picker_mode` Controls whether ColorHelper, based on the input color, decides which color space to use. If a matching color space cannot be found, the preferred color space picker will be selected. ```js // If the color is already in the space of an enabled mode, use that mode. // If disabled, the "preferred" mode will be used. "auto_color_picker_mode": true, ``` ## `preferred_color_picker_mode` The preferred color picker space to use. If invalid or not enabled, the first enabled color space will be used, and if there are none enabled, `srgb` will be used as a last resort. ```js // If "auto" mode is disabled, or the "auto" mode could not determine a suitable picker, // the preferred color picker space will be used. If the preferred is invalid, the // first picker from `enabled_color_picker_modes` will be used, and if that is not valid, // `srgb` will be used. "preferred_color_picker_mode": "hsl", ``` --8<-- "refs.md"
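Putting the options above together, a complete picker section of the user settings might look like the following sketch. The values are illustrative, not recommendations.

```js
{
    // Launch the internal picker from the tooltip.
    "enable_color_picker": true,
    // Stick with the internal picker rather than the OS one.
    "use_os_color_picker": false,
    // Offer a couple of the alternative spaces as well.
    "enabled_color_picker_modes": ["srgb", "hsl", "okhsl", "hsluv"],
    // Match the picker to the input color when possible...
    "auto_color_picker_mode": true,
    // ...and fall back to Okhsl otherwise.
    "preferred_color_picker_mode": "okhsl"
}
```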
38.169231
118
0.725917
eng_Latn
0.991683
0c0964e3168487c483e8b806201097131fc137a9
1,243
md
Markdown
docs/data/schema-mfc-data-access.md
jmittert/cpp-docs
cea5a8ee2b4764b2bac4afe5d386362ffd64e55a
[ "CC-BY-4.0", "MIT" ]
1
2019-02-10T10:38:37.000Z
2019-02-10T10:38:37.000Z
docs/data/schema-mfc-data-access.md
jmittert/cpp-docs
cea5a8ee2b4764b2bac4afe5d386362ffd64e55a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/data/schema-mfc-data-access.md
jmittert/cpp-docs
cea5a8ee2b4764b2bac4afe5d386362ffd64e55a
[ "CC-BY-4.0", "MIT" ]
1
2020-06-14T03:42:31.000Z
2020-06-14T03:42:31.000Z
--- title: "Schema (MFC Data Access)" ms.date: "11/04/2016" helpviewer_keywords: ["structures [C++], database", "databases [C++], schema", "database schema [C++], about database schemas", "database schema [C++]", "schemas [C++], database", "structures [C++]"] ms.assetid: 7d17e35f-1ccf-4853-b915-5b8c7a45b9ee --- # Schema (MFC Data Access) A database schema describes the current structure of the tables and database views in the database. In general, wizard-generated code assumes that the schema for the table or tables accessed by a recordset will not change, but the database classes can deal with some schema changes, such as adding, reordering, or deleting unbound columns. If a table changes, you must manually update the recordset for the table, then recompile your application. You can also supplement the wizard-generated code to deal with a database whose schema is not entirely known at compile time. For more information, see [Recordset: Dynamically Binding Data Columns (ODBC)](../data/odbc/recordset-dynamically-binding-data-columns-odbc.md). ## See Also [Data Access Programming (MFC/ATL)](../data/data-access-programming-mfc-atl.md)<br/> [SQL](../data/odbc/sql.md)<br/> [Recordset (ODBC)](../data/odbc/recordset-odbc.md)
73.117647
446
0.752212
eng_Latn
0.92581
0c0a46fcfa30cfb024e878682fa10776346ad8bb
564
md
Markdown
README.md
memezilla/micro-frontend-poc
13fea08f131c384276e86f838cf84674ec471807
[ "MIT" ]
null
null
null
README.md
memezilla/micro-frontend-poc
13fea08f131c384276e86f838cf84674ec471807
[ "MIT" ]
null
null
null
README.md
memezilla/micro-frontend-poc
13fea08f131c384276e86f838cf84674ec471807
[ "MIT" ]
null
null
null
# micro-frontend-poc Micro-frontends POC with [single-spa](https://single-spa.js.org/), which is managed using [lerna](https://lerna.js.org/). The app is composed of four sub-apps: 1. A container app that serves as the main page container and coordinates the mounting and unmounting of the micro-frontend apps, runs on port 9000 2. A micro-frontend nav app, runs on port 9001 3. A micro-frontend catalog app, runs on port 9002 4. A micro-frontend checkout app, runs on port 9003 ## install dependencies ```shell yarn bootstrap ``` ## run ``` yarn start ```
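Since the README does not show how the container app coordinates the three micro-frontends, here is a hedged sketch of a typical single-spa root config. The bundle URLs, application names and route prefixes are assumptions (and it presumes SystemJS for module loading), not code taken from this repo.

```js
// root-config.js - illustrative wiring for the container app (port 9000)
import { registerApplication, start } from "single-spa";

registerApplication({
  name: "nav", // nav app (port 9001); always visible
  app: () => System.import("http://localhost:9001/nav.js"),
  activeWhen: () => true,
});

registerApplication({
  name: "catalog", // catalog app (port 9002)
  app: () => System.import("http://localhost:9002/catalog.js"),
  activeWhen: (location) => location.pathname.startsWith("/catalog"),
});

registerApplication({
  name: "checkout", // checkout app (port 9003)
  app: () => System.import("http://localhost:9003/checkout.js"),
  activeWhen: (location) => location.pathname.startsWith("/checkout"),
});

start(); // begin mounting/unmounting based on the URL
```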
21.692308
147
0.734043
eng_Latn
0.975204
0c0a56f4f0abcf8bc79f87a0bb72e9a37238731a
1,040
md
Markdown
2021/CVE-2021-22112.md
wisdark/cve
6ac24186f7625b12269a07a8e4c5437e624ccaa0
[ "MIT" ]
1
2022-02-25T09:08:49.000Z
2022-02-25T09:08:49.000Z
2021/CVE-2021-22112.md
Jeromeyoung/cve
9113f2d909fddfaf554d5e8b90f9ed402f0fa81e
[ "MIT" ]
null
null
null
2021/CVE-2021-22112.md
Jeromeyoung/cve
9113f2d909fddfaf554d5e8b90f9ed402f0fa81e
[ "MIT" ]
3
2022-03-11T00:38:35.000Z
2022-03-19T04:01:22.000Z
### [CVE-2021-22112](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22112) ![](https://img.shields.io/static/v1?label=Product&message=Spring%20Security&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Vulnerability&message=Privilege%20Escalation%20by%20changing%20security%20context&color=brighgreen) ### Description Spring Security 5.4.x prior to 5.4.4, 5.3.x prior to 5.3.8.RELEASE, 5.2.x prior to 5.2.9.RELEASE, and older unsupported versions can fail to save the SecurityContext if it is changed more than once in a single request. A malicious user cannot cause the bug to happen (it must be programmed in). However, if the application's intent is to only allow the user to run with elevated privileges in a small portion of the application, the bug can be leveraged to extend those privileges to the rest of the application. ### POC #### Reference No PoCs from references. #### Github - https://github.com/auth0/auth0-spring-security-api
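As an illustration only (not a PoC from the references above), the kind of application code the description is talking about looks roughly like this: a request-scoped flow that deliberately changes the `SecurityContext` twice, once to elevate and once to restore. The class and method names are hypothetical; the `SecurityContextHolder` calls are standard Spring Security API.

```java
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContext;
import org.springframework.security.core.context.SecurityContextHolder;

public class ElevatedSection {

    public void runWithElevation(Authentication elevated, Runnable task) {
        SecurityContext original = SecurityContextHolder.getContext();

        // First change in this request: switch to an elevated principal.
        SecurityContext elevatedCtx = SecurityContextHolder.createEmptyContext();
        elevatedCtx.setAuthentication(elevated);
        SecurityContextHolder.setContext(elevatedCtx);
        try {
            task.run(); // the small portion meant to run elevated
        } finally {
            // Second change in the same request: restore the original context.
            // On affected versions, this second change could fail to be saved,
            // leaving the elevated context in effect for the rest of the request.
            SecurityContextHolder.setContext(original);
        }
    }
}
```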
57.777778
511
0.765385
eng_Latn
0.875831
0c0a7ebfd7f79e15ba6b58d1daa8b82af7051050
5,433
md
Markdown
_posts/2015-07-23-gpxj-18-day.md
j15/j15.github.io
c1c0818b21758c76b0f0d6d2ad624a8a177d7bae
[ "MIT" ]
null
null
null
_posts/2015-07-23-gpxj-18-day.md
j15/j15.github.io
c1c0818b21758c76b0f0d6d2ad624a8a177d7bae
[ "MIT" ]
null
null
null
_posts/2015-07-23-gpxj-18-day.md
j15/j15.github.io
c1c0818b21758c76b0f0d6d2ad624a8a177d7bae
[ "MIT" ]
null
null
null
--- layout: post title: "第十八天 CocoaPods" description: "" category: gpxj tags: [gpxj, ios, cocoapods] --- {% include JB/setup %} ## 原文:[CocoaPods](http://nshipster.cn/authors/mattt-thompson/) ## 翻译: [CocoaPods](http://nshipster.cn/cocoapods/) --- 文明是建立在道路,桥梁,运河,下水道,管线,电线和光纤这些基础设施之上的。只要设计和施工得当,它们可以帮助社会成倍的发展。 唯一的问题就是可扩展性。 不管是在一个新的区域容纳上百万家庭还是整合大量的开发者到新的语言环境中去,挑战都是相同的。 在Objective-C的情况下,CocoaPods提供了一个绝佳的整合合作开发的工具,并且在快速发展的开发社区中起到了一个集结点的作用。 本周的NSHipster,我们将通过讨论CocoaPods的过去,现在以及将来,一起庆祝0.33版本(具有里程碑意义)的发布。 > 接下来的对CocoaPods起源的历史回顾比较冗长,如果你只在乎技术细节,点此直接跳过。 ## 回望 --- 在Objective-C在它存在的前20年左右几乎鲜为人知。NeXT和后来的OS X作为一个边缘平台,只拥有一个相对较小的用户和开发者社区。像所有的社区一样,本地用户小组,邮件列表和网站该有的都有,但是开源合作开发缺很少见。诚然,开源在那时也只处于起步阶段,但是Objective-C却从未有过类似于CPAN (the Comprehensive Perl Archive Network)的组织。所有人除了能从Redwood和Cupertino拿到SDK(或者在论坛搜寻一下可用的代码)以外,剩下的问题只能靠自己解决。 ### Objective-C和iPhone 这种情况一直持续到了2008年的夏天,当iPhone OS开始对第三方开发者开放的时候。几乎一夜之间,Objective-C从无人问津变的炙手可热。上百万开发者的涌入,给这门语言注入了新鲜的血液。 就在此时,GitHub 也刚刚发布,并且开始通过新的分布式合作开发方式改变我们对开源的认知。 一大批开源项目开始涌现,例如ASIHTTPRequest和Facebook的Three20。这些早期的库和框架主要是用来填补iPhone OS 2.0和3.0开发中遇到空白,并且在后续的OS迭代中慢慢被遗弃或取代,但是它们突破了每个开发者“单打独斗”的局面。 在这波新的开发者中,那些来自Ruby背景的对 Objective-C 起来了很大的影响。Ruby作为Perl的精神继承者,有一个类似于CPAN的包管理器:RubyGems 为什么受Ruby的影响这么大?我的理论是:Ruby是在Rails 2005年发布1.0版本的时候开始流行起来。假设创业公司的平均寿命在1.5到2.5年之间,那么此时第一批厌倦Rails的开发者正好可以跳上移动开发的大船上。 就在Objective-C开源开发渐入佳境之时,代码分发的痛点开始显现: 缺乏框架,iOS的代码虽然可以被打包成静态库,但是配置和同步分发却成了一个艰巨的任务。 另外一个思路是用Git Submodules把源码直接放入项目。但是链接框架和配置生成环境的繁琐也使得这种方法也没有好到哪里去,尤其是当ARC和 non-ARC的代码需要分开的时候。 ### 进入CocoaPods时代 CocoaPods是由Eloy Durán于2011年8月12日创建。 在Bundler和RubyGems的启发下,CocoaPods被设计成即能处理库之间的依赖关系,又能自动下载并且配置好所需要的库。试想一下开发只有松散文档编制的Xcode项目的难度,CocoaPods的存在简直就是奇迹。 另一个早先的决定就是利用central Git repository作为所有库的总数据库。虽然这带来了一些运筹上的顾虑,好在GitHub能够提供一个稳健的平台,帮助团队在后续的迭代中,开发出更好的工具链。 时至今日,CocoaPods已经壮大拥有14个核心开发人员和多达5000个开源项目。绝大部分项目都是来自于Objective-C开源社区,我们应该感谢每一个参与其中的开发者。 ## 使用CocoaPods --- 制作和使用CocoaPods库都十分简单,往往几分钟就能配置完毕。 > 想获取最新的官方教程,请前往此处。 ### 安装CocoaPods CocoaPods可以方便地通过RubyGems安装,打开Terminal,然后键入以下命令: {% highlight bash %} $ sudo gem install cocoapods {% endhighlight %} 就这么简单,现在你应该可以开始使用pod命令了。 > 如果你使用Ruby版本管理器,如rbenv,你可能需要运行以下指令来重新链接shim的二进制文件(例如:$ rbenv rehash)。 ### 管理相关性 一个相关性管理器可以将一系列的软件需求转化为具体的标签,然后下载并且整合进入相关的项目。 申明需求可以自动化整个项目配置,这也是软件开发的最佳实践之一,无论是在任何语言中。甚至你不使用第三方库,CocoaPods仍然是一个管理代码相关性的绝佳工具。 ### Podfile Podfile这个文件是用来用来申明项目代码相关性的,正如Bundler的Gemfile,或者npm的package.json cd进入.xcodeproj文件所在的目录,通过以下命令来创建一个Podfile {% highlight bash %} $ pod init {% endhighlight %} ### Podfile {% highlight ruby %} platform :ios, '7.0' target "AppName" do end {% endhighlight %} 你可以申明需要不同版本的库,大部分情况下,申明到minor或者patch版本就足够了 {% highlight ruby %} pod 'X', '~> 1.1' {% endhighlight %} CocoaPods遵循语意化版本规范。 对于那些不在CocoaPods公共Git仓库中的库,你可以用任何一个Git, Mercurial或者SVN仓库取代,并且还可以指定具体的commit, branch或者tag。 {% highlight ruby %} pod 'Y', :git => 'https://github.com/NSHipster/Y.git', :commit => 'b4dc0ffee' {% endhighlight %} 一旦所有的相关性都申明完毕,你可以使用以下指令来安装所需要的库: {% highlight bash %} $ pod install {% endhighlight %} 安装过程中,CocoPods会使用递归来分析所有的需求,并且建立一个代码相关性的图,最后将Podfile序列化为Podfile.lock 比如,如果两个库都需要使用AFNetworking,CocoaPods会确定一个同时能被这两库使用的版本,然后将同一个安装版本链接到两个不同的库中。 CocoaPods会创建一个新的包含之前安装好的静态库Xcode项目,然后将它们链接成一个新的libPods.a target。你原有的项目将会依赖这个新的静态库。一个xcworkspace文件会被创建,从此之后,你应该只打开这个xcworkspace文件来进行开发。 反复使用pod install命令,只会让CocoaPods重复以上步骤,重新安装这些库。所以,当你需要升级它们时,请使用以下命令: {% highlight bash %} $ pod update {% endhighlight %} ### 试着使用CocoaPod try是一个及其实用但又鲜为人知的CocoaPods命令,通过它你能够在安装一个库之前,先试用一下。 你只需要在try后面加上任意一个CocoaPods公共库的名称,就能试用它了! 
{% highlight bash %} $ pod try Ono {% endhighlight %} ## 建立自己的CocoaPod --- 作为Objective-C软件分发实际上的标准,CocoaPods几乎是所有开源项目的标配,如果你想让你的项目被大家很方便地使用。 诚然,这会提高一点点你分享项目的门槛,但是,好处是显然易见的。你花几分钟创建一个.podspec文件可以节省下其他开发者无数的时间。 ### 规范 .podspec文件作为CocoaPods的一个独立单元,包含了名称,版本,许可证,和源码文件等所有信息。 > 官方指南中有许多信息和范例 以下是NSHipsterKit.podspec {% highlight ruby %} Pod::Spec.new do |s| s.name = 'NSHipsterKit' s.version = '1.0.0' s.license = 'MIT' s.summary = "A pretty obscure library. You've probably never heard of it." s.homepage = 'http://nshipster.com' s.authors = { 'Mattt Thompson' => '[email protected]' } s.social_media_url = "https://twitter.com/mattt" s.source = { :git => 'https://github.com/nshipster/NSHipsterKit.git', :tag => '1.0.0' } s.source_files = 'NSHipsterKit' end {% endhighlight %} 一旦把这个.podspec发布到公共数据库中,任何想使用它的开发者,只需要在Podfile中加入如下声明即可: ### Podfile {% highlight ruby %} pod 'NSHipsterKit', '~> 1.0' {% endhighlight %} .podspec文件也可以作为管理内部代码的利器: {% highlight ruby %} pod 'Z', :path => 'path/to/directory/with/podspec' {% endhighlight %} ### 发布CocoaPod CocoaPods 0.33中加入了Trunk服务。 虽然一开始使用GitHub Pull Requests来整理所有公共pods效果很好。但是,随着Pod数量的增加,这个工作对于spec维护人员Keith Smiley来说变得十分繁杂。甚至一些没有通过$ pod lint的spec也被提交上来,造成repo无法build。 CocoaPods Trunk服务的引入,解决了很多类似的问题。CocoaPods作为一个集中式的服务,使得分析和统计平台数据变得十分方便。 要想使用Trunk服务,首先你需要注册自己的电脑。这很简单,只要你指明你的邮箱地址(spec文件中的)和名称即可。 {% highlight bash %} $ pod trunk register [email protected] "Mattt Thompson" {% endhighlight %} 至此,你就可以通过以下命令来方便地发布和升级你的Pod! {% highlight bash %} $ pod trunk push NAME.podspec {% endhighlight %} 已经发布Pod的作者可以通过几个简单的步骤来声明所有权。 ## 展望 --- CocoaPods例证了一个社区的凝聚力。在短短的几年内,Objective-C社区让我们所有人都引以为傲。 CocoaPods仅仅是众多Objective-C基础设施的一部分,还有诸如Travis CI, CocoaDocs和Nomad这些非常好的生产力工具。 虽然整个社区的未来不会一帆风顺,不管怎样,让我们怀着信念,尽可能的提供建设性的意见。我们更应该互相帮助,乐于分享,共同努力推动整个社区的进步! CocoaPods已经是Objective-C不可或缺的一部分,它只会越来越强大!
24.363229
260
0.799558
yue_Hant
0.933183
0c0b04183b0407517de71daffee279674cc56e0f
60
md
Markdown
README.md
m4rshh/OperaVSS
923482f76479f23e7805d5a399544064ef426c41
[ "MIT" ]
null
null
null
README.md
m4rshh/OperaVSS
923482f76479f23e7805d5a399544064ef426c41
[ "MIT" ]
null
null
null
README.md
m4rshh/OperaVSS
923482f76479f23e7805d5a399544064ef426c41
[ "MIT" ]
null
null
null
# OperaVSS Transferred from the richard-rhall profile. VSS (Volume Shadow Copy Service) shadows.
20
48
0.816667
kor_Hang
0.621617
0c0b80ec00ce42ed62ac0271431aec2967116614
1,735
md
Markdown
README.md
JOHLC/Mother
af6a794e1f6a9c5d44df6533dd3b94eb78340c90
[ "MIT" ]
2
2020-11-07T09:31:58.000Z
2022-03-23T17:58:34.000Z
README.md
JOHLC/Mother
af6a794e1f6a9c5d44df6533dd3b94eb78340c90
[ "MIT" ]
null
null
null
README.md
JOHLC/Mother
af6a794e1f6a9c5d44df6533dd3b94eb78340c90
[ "MIT" ]
null
null
null
# Earthbound <img alt="GitHub Release Date" src="https://img.shields.io/github/release-date/johlc/Earthbound"> ![GitHub release (latest by date)](https://img.shields.io/github/v/release/johlc/Earthbound?label=Version&style=flat-square&labelColor=2ea9f4&color=1473ae) ![maintained](https://img.shields.io/maintenance/yes/2020.svg?style=flat-square&labelColor=2ea9f4&color=1473ae) ![GitHub All Releases](https://img.shields.io/github/downloads/johlc/Earthbound/total?&label=Total%20Downloads&style=flat-square&labelColor=2ea9f4&color=1473ae) ![.github/workflows/validate.yaml](https://github.com/JOHLC/Earthbound/workflows/.github/workflows/validate.yaml/badge.svg) >A custom theme for Home Assistant based on the game Earthbound **Note:** This is a very early work in progress but it seems to work pretty well. ## Screenshots **Main**<br /> <img src="https://raw.githubusercontent.com/JOHLC/Earthbound/main/images/Screenshots/Screenshot1.png" alt="Screenshot 1" width="100%"> ## HACS installation - recommended<br /> 1. Open HACS 2. Select "Frontend" 3. Select the ... menu button at the top right 4. Select "Custom repositories" 5. At the bottom left of the menu where it says "Add custom repository URL" add this repository: https://github.com/JOHLC/Earthbound 6. Select the "Theme" category 7. Select "ADD" 8. Close this menu 9. This theme should now show up as a new repository 10. Click "INSTALL" then install again on the pop-up ### Manual Installation - not recommended<br /> 1. Download the earthbound.yaml file from the latest release 2. Upload the downloaded file to your "themes" directory 3. Add the themes directory to your Home Assistant configuration (see the sketch below) You should now be able to select this theme in Home Assistant <br />
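For reference, file-based themes like this one are usually wired up in `configuration.yaml`. A common sketch, assuming your theme files live in a `themes` folder inside your config directory:

```yaml
# configuration.yaml
frontend:
  themes: !include_dir_merge_named themes
```

After calling the `frontend.reload_themes` service (or restarting Home Assistant), the theme can be picked from your user profile.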
49.571429
397
0.768876
eng_Latn
0.763329
0c0bed1aa6ee06ca51a3be389e1cda21986c59e5
9,412
md
Markdown
docs/content/en/faq/README.md
marcusgadbem/azk
e72adb039e4dd64fd6ed81d82fd597c1d4efdf14
[ "Apache-2.0" ]
915
2015-01-22T05:03:50.000Z
2021-11-27T04:54:03.000Z
docs/content/en/faq/README.md
marcusgadbem/azk
e72adb039e4dd64fd6ed81d82fd597c1d4efdf14
[ "Apache-2.0" ]
379
2015-01-06T19:31:06.000Z
2020-04-03T02:31:44.000Z
docs/content/en/faq/README.md
marcusgadbem/azk
e72adb039e4dd64fd6ed81d82fd597c1d4efdf14
[ "Apache-2.0" ]
81
2015-01-09T20:07:34.000Z
2022-01-04T02:38:11.000Z
# FAQ 1. [What are the requirements for using azk?](README.html#what-are-the-requirements-for-using-azk) 1. [What's the difference between azk and docker-compose (Fig)?](README.html#whats-the-difference-between-azk-and-docker-compose-fig) 1. [What's the difference between azk and Vagrant, or Chef?](README.html#whats-the-difference-between-azk-and-vagrant-or-chef) 1. [My program is working fine with azk. Is there any way to deploy my environment?](README.html#my-program-is-working-fine-with-azk-is-there-any-way-to-deploy-my-environment) 1. [Inside the Azkfile.js, what's the difference between image, provision and command?](README.html#inside-the-azkfilejs-whats-the-difference-between-image-provision-and-command) 1. [Why should I use the images suggested by `azk init`?](README.html#why-should-i-use-the-images-suggested-by-azk-init) 1. [The image suggested by azk is not in the way I want it, what should I do?](README.html#the-image-suggested-by-azk-is-not-in-the-way-i-want-it-what-should-i-do) 1. [I cannot find the image I want in the Docker Hub, what do I do now?](README.html#i-cannot-find-the-image-i-want-in-the-docker-hub-what-do-i-do-now) 1. [Why is it that when I change folders I can no longer see the systems raised with the `azk status` command?](README.html#why-is-it-that-when-i-change-folders-i-can-no-longer-see-the-systems-raised-with-the-azk-status-command) 1. [What is the advantage of using multiple systems, each in a separate container?](README.html#what-is-the-advantage-of-using-multiple-systems-each-in-a-separate-container) 1. [I have used several images with azk that I do not use any more. They are taking up too much disk space, how do I delete them?](README.html#i-have-used-several-images-with-azk-that-i-do-not-use-any-more-they-are-taking-up-too-much-disk-space-how-do-i-delete-them) 1. [How do I create an application (npm, rails, etc.), without having the language or framework installed on my machine?](README.html#how-do-i-create-an-application-npm-rails-etc-without-having-the-language-or-framework-installed-on-my-machine) 1. [I'm having completion and encoding problems when running `azk shell`. How do I fix it?](README.html#im-having-completion-and-encoding-problems-when-running-azk-shell-how-do-i-fix-it) #### What are the requirements for using azk? Linux: - Docker Mac: - Virtualbox When running azk on Mac, you do not need to manually install Docker since we take care of it. In this case you have access to docker in your terminal via the `adocker` command. Because Linux does not need a virtual machine to run azk, performance there is higher than what's achieved on Mac. #### What's the difference between azk and docker-compose (Fig)? `azk`: - Is not focused only on Docker, although Docker is the only backend it supports for the moment; - Has an integrated http load balancer, which makes it easier to test if your application is "stateless"; - Has an integrated DNS service which helps when developing multiple applications without having to remember which `localhost` port they're using, with Linux support provided by the lib: [libnss-resolver](https://github.com/azukiapp/libnss-resolver); - Has the concept of provisioning, which allows commands to be executed automatically before the creation of the container, without changing the original image. This is suitable for installing dependencies or migrating databases, for example; - Uses a more robust manifest file, called `Azkfile.js`, which is built using a JavaScript DSL that makes its creation more flexible. 
#### What's the difference between azk and Vagrant, or Chef? Explaining in a succinct way: - `Vagrant` provides a way to describe and generate identical virtual machines (or even containers more recently). It can work together with a software configuration tool (eg Chef) to continue a machine setup process, after the system installation is complete. - `Chef`, as mentioned above, is a tool for software configuration. It will help automate the process of setting up a machine, after it is started. For example, it can help with configuration files, installed programs, users, among other features. Chef, like other similar projects, also helps in the orchestration process to send changes in a system to specific machines. azk, and Docker as well, both overlap with Vagrant and Chef in certain aspects. With azk you can define how the applications/services that make up your project relate, and how your project should be executed. This is done within the `Azkfile.js`, in a clear and succinct way to facilitate communication between development and operations (DevOps), and make the whole deployment process something transparent to both teams. In addition, particularly because of the use of containers, testing applications in development and production becomes much more reliable and reduces the chances of the famous "but it works on my machine". Finally, `azk` focuses on the approach of describing a system's architecture from a functional point of view, which means you describe the various "micro-services" that make up the whole system. This is different from a systems architecture approach, such as `Vagrant`'s, where the focus is on the description of virtual machines. #### My program is working fine with azk. Is there any way to deploy my environment? We are working on a deployment solution. Stay tuned. ;) #### Inside the Azkfile.js, what's the difference between image, provision and command? The `image` property defines which Docker binary image is going to be used as the starting point for the system setup. `provision` is executed once before the system is started, and `command` defines how to start the system so that it is exposed to the user or to another system. (A minimal `Azkfile.js` sketch tying these three together appears at the end of this FAQ.) #### Why should I use the images suggested by `azk init`? The suggestions made by the command `azk init` are tested by the `azk` team. They meet our quality standards to ensure integration and stability with our tool, and also have the Dockerfiles available so you can check everything that is being installed on the system. #### The image suggested by azk is not in the way I want it, what should I do? You can find other images in: - [Azuki images repository](http://images.azk.io/) - [Azuki images repository in Docker Hub](https://registry.hub.docker.com/u/azukiapp) - [Docker Hub](https://registry.hub.docker.com/) #### I cannot find the image I want in the Docker Hub, what do I do now? In this case, the alternative is to create your own Dockerfile to build your image, by following the instructions in: https://docs.docker.com/reference/builder In addition, it makes sense to use your own Dockerfile if: - You know the exact, and unique, requirements of the development environment required for your application; - You want to add specific dependencies of your project to existing images; - You need to optimize the size of an image compared to what is currently available. #### Why is it that when I change folders I can no longer see the systems raised with the `azk status` command? 
The commands `azk status`, `start`, `stop`, `restart`, etc. are related to the `Azkfile.js` in the current folder, or parent folders (just like `.gitignore`, for example). When switching folders, `azk` understands that you want to work on another system. So to run `azk stop` for a system that was started with `azk start`, we need to return to the folder containing that system's `Azkfile.js`, or to its subfolders. #### What is the advantage of using multiple systems, each in a separate container? To answer this question, it's worth reading more about micro-services in this great article: http://martinfowler.com/articles/microservices.html #### I have used several images with azk that I do not use any more. They are taking up too much disk space, how do I delete them? You can list the images you have using the command: ```sh $ adocker images ``` To delete an image, just run: ```sh $ adocker rmi azukiapp/node:0.10 ``` By listing the images, some of them may appear with the name `<none>`. These are "lost" images. This may happen for a number of reasons; for example, a container may still have been using an image when a newer version was pulled. To remove all at once run: ```sh adocker rmi --force `adocker images | grep "<none>" | awk '{ print $3 }'` ``` #### How do I create an application (npm, rails, etc.), without having the language or framework installed on my machine? You can create a container using the image of the language/framework that you want, access it using the command `azk shell --image [docker-registry-image]`, and create your application inside. Example of generating a rails application: ```sh $ azk shell --image azukiapp/ruby --shell /bin/bash # gem install rails --no-rdoc --no-ri # rails new my-app # exit ``` Afterwards you can create an `Azkfile.js` by accessing the application folder: ```sh $ cd my-app $ azk init ``` #### I'm having completion and encoding problems when running `azk shell`. How do I fix it? By default, when `azk shell` is executed, `/bin/sh` is used for the terminal. This happens because not all images have `/bin/bash` installed. If the image you are using has `/bin/bash` installed, edit your `Azkfile.js` and add `shell: "/bin/bash"` to your system. Or, use the `--shell` option in the command `azk shell`: ```shell $ azk shell --shell=/bin/bash ```
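To make the `image` / `provision` / `command` answer above concrete, here is a minimal `Azkfile.js` sketch. It follows the usual azk manifest shape, but the image name, mount and http keys are illustrative assumptions, not taken from a real project.

```js
/**
 * Azkfile.js - a minimal, illustrative system definition.
 */
systems({
  "my-app": {
    // `image`: the Docker image used as the starting point for the system
    image: { docker: "azukiapp/node:0.10" },
    // `provision`: commands run once, before the system is first started
    provision: ["npm install"],
    workdir: "/azk/#{manifest.dir}",
    mounts: { "/azk/#{manifest.dir}": sync(".") },
    // `command`: how the system is started and exposed
    command: "npm start",
    http: { domains: ["#{system.name}.#{azk.default_domain}"] },
  },
});
```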
69.718519
634
0.766575
eng_Latn
0.998727
0c0c9878b3c48efbb154cdfab15bfd820e905c63
578
md
Markdown
README.md
ctmcisco/ingressutil
f080bb38ab0b758d56356e1b71bad05da21a63da
[ "Apache-2.0" ]
25
2020-12-19T14:49:20.000Z
2022-02-23T13:50:58.000Z
README.md
dgraph-io/ingressutil
080488a64d4a7fa1750596760b68ecc2abd35871
[ "Apache-2.0" ]
null
null
null
README.md
dgraph-io/ingressutil
080488a64d4a7fa1750596760b68ecc2abd35871
[ "Apache-2.0" ]
4
2021-01-28T02:14:41.000Z
2022-03-16T19:04:20.000Z
# ingressutil Utilities for writing an ingress controller. Currently, this package only exposes one type. ## [IngressRouter](https://godoc.org/github.com/dgraph-io/ingressutil#IngressRouter) * This type can be used to monitor for changes to an ingress in Kubernetes. Use StartAutoUpdate() to trigger this. * IngressRouter can match an HTTP Request to a namespace, service and upstream address using `MatchRequest` * Currently, it only does exact hostname matching and path-prefix matching. This means that wildcard hosts and path regexes aren't supported ## Blog Post Coming Soon!
36.125
131
0.788927
eng_Latn
0.978215
0c0db1fec4f1b9279e29f8c89d8e39fcd5e532d1
2,061
md
Markdown
Readme.md
jmartelletti/munchie
5e7d56ccb40621c2fed7a504b3ae265f97d23249
[ "MIT" ]
7
2015-04-16T11:05:31.000Z
2018-01-08T11:25:36.000Z
Readme.md
jmartelletti/munchie
5e7d56ccb40621c2fed7a504b3ae265f97d23249
[ "MIT" ]
null
null
null
Readme.md
jmartelletti/munchie
5e7d56ccb40621c2fed7a504b3ae265f97d23249
[ "MIT" ]
2
2015-05-16T18:51:38.000Z
2018-10-01T20:58:32.000Z
Munchie ======= Munchie is a natural language food/diet log entry parser written in Ruby, inspired by the Chronic date/time parser. See below for examples of the wide variety of formats Munchie will parse. ## Installation ``` $ gem install munchie ``` ## Usage ```ruby require 'munchie' Munchie.parse('1 tbsp Olive Oil') #=> <Munchie::Food quantity=1 volume=<15ml>> Munchie.parse('Derp') #=> nil ``` The parser will return a Munchie::Food object with as many attributes as can be filled by extracting values from the original input string, or nil when nothing can be extracted (as in the 'Derp' example above). See `Munchie.parse` for detailed usage instructions. ## Examples Munchie can parse a huge variety of food log formats, from diet logs to recipes. Following is a small sample of strings that will be properly parsed. * ½ cup (70g) toasted hazelnuts, chopped * 1 cup (90g) chopped dark chocolate * 1¾ cups (255g) plain (all-purpose) flour, sifted * 1 teaspoon baking powder, sifted * 1 teaspoon bicarbonate of (baking) soda * 1 teaspoon ground cinnamon * ⅓ cup (115g) golden syrup * 6 bananas, peeled and chopped * 2 mangoes, peeled and chopped * 1 cup (280g) vanilla-flavoured yoghurt * 125g butter, softened * 1 cup (175g) brown sugar * 1 teaspoon vanilla extract * 2 eggs * 2 cups mashed banana * 1 cup (160g) frozen raspberries * ½ cup (25g) sweetened coconut flakes, plus extra, for sprinkling ## Contribute If you'd like to hack on Munchie, start by forking the repo on GitHub: https://github.com/jmartelletti/munchie The best way to get your changes merged back is as follows: 1. Clone your fork of this repository 1. Create an appropriately named branch to work on your change 1. Hack away 1. Add tests and make sure everything still passes by running `rake` 1. If you are adding new functionality, document it in the Readme 1. Do not change the version number, we will do that at release time 1. If necessary, rebase your commits into logical chunks, without errors 1. Push the branch up to GitHub 1. Send a pull request for your branch
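As a quick follow-up to the Usage section, here is how one of the example strings above might be fed through the parser. The `quantity` and `volume` readers are assumptions based on the inspect output shown in the Usage section, not documented API.

```ruby
require 'munchie'

food = Munchie.parse('1 tbsp Olive Oil')
unless food.nil?
  puts food.quantity  # assumed reader; 1, per the Usage example above
  puts food.volume    # assumed reader; the 15ml volume from the inspect output
end
```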
29.442857
115
0.752062
eng_Latn
0.994828
0c0f0a4d217767f88d7c8e83e31f377330123af3
776
md
Markdown
_publications/2009-10-01-paper-title-number-1.md
XiaZeng0223/XiaZeng02.github.io
0fde934e29d29036cb8ad3af24cefc6db6f3dccb
[ "MIT" ]
null
null
null
_publications/2009-10-01-paper-title-number-1.md
XiaZeng0223/XiaZeng02.github.io
0fde934e29d29036cb8ad3af24cefc6db6f3dccb
[ "MIT" ]
null
null
null
_publications/2009-10-01-paper-title-number-1.md
XiaZeng0223/XiaZeng02.github.io
0fde934e29d29036cb8ad3af24cefc6db6f3dccb
[ "MIT" ]
null
null
null
--- authors: "Xia Zeng, Arkaitz Zubiaga" title: "QMUL-SDS at SCIVER: Step-by-Step Binary Classification for Scientific Claim Verification" collection: publications permalink: https://aclanthology.org/2021.sdp-1.15 excerpt: 'In this paper, we propose an approach that performs scientific claim verification by doing binary classifications step-by-step.' date: 2021-06-10 venue: 'Proceedings of the Second Workshop on Scholarly Document Processing' paperurl: 'https://aclanthology.org/2021.sdp-1.15.pdf' citation: "Zeng, X., & Zubiaga, A. (2021). QMUL-SDS at SCIVER: Step-by-step binary classification for scientific claim ver-ification. In Proceedings of the Second Workshop on Scientific Document Processing (pp. 116–123). Association for Computational Linguistics." ---
59.692308
264
0.783505
eng_Latn
0.487938
0c0f0a6973d538eeecb034caec3d3a0799422f52
2,684
md
Markdown
docs/integration-services/control-flow/azure-blob-download-task.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/integration-services/control-flow/azure-blob-download-task.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/integration-services/control-flow/azure-blob-download-task.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
1
2020-03-04T05:50:54.000Z
2020-03-04T05:50:54.000Z
--- title: Téléchargement d’objets blob Azure, tâche | Microsoft Docs ms.custom: '' ms.date: 05/22/2019 ms.prod: sql ms.prod_service: integration-services ms.reviewer: '' ms.technology: integration-services ms.topic: conceptual f1_keywords: - sql13.dts.designer.afpblobdltask.f1 - sql14.dts.designer.afpblobdltask.f1 ms.assetid: 8a63bf44-71be-456d-9a5c-be7c31aff065 author: chugugrace ms.author: chugu ms.openlocfilehash: fc83e4d8e39c5521fd897ceeec07755f62b5765d ms.sourcegitcommit: b2e81cb349eecacee91cd3766410ffb3677ad7e2 ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 02/01/2020 ms.locfileid: "71294318" --- # <a name="azure-blob-download-task"></a>Tâche de téléchargement d’objet blob Azure [!INCLUDE[ssis-appliesto](../../includes/ssis-appliesto-ssvrpluslinux-asdb-asdw-xxx.md)] La tâche de téléchargement d’objet blob Azure permet à un package SSIS de télécharger des fichiers à partir d’un stockage d’objets blob Azure. Pour ajouter une **tâche de téléchargement d’objet blob Azure**, faites-la glisser sur le concepteur SSIS, double-cliquez dessus ou cliquez dessus avec le bouton droit, puis cliquez sur **Modifier** pour afficher la boîte de dialogue **Éditeur de la tâche de téléchargement d’objet blob Azure** . La **tâche de téléchargement d’objets blob Azure** est un composant de [SQL Server Integration Services (SSIS) Feature Pack pour Azure](../../integration-services/azure-feature-pack-for-integration-services-ssis.md). Le tableau suivant décrit les champs de cette boîte de dialogue. |**Champ**|**Description**| |---|---| |AzureStorageConnection|Spécifiez un gestionnaire de connexions Azure Storage existant ou créez-en un qui fait référence à un compte Azure Storage et pointe vers l’emplacement où les fichiers d’objets blob sont hébergés.| |BlobContainer|Spécifie le nom du conteneur d’objets blob qui contient les fichiers d’objets blob à télécharger.| |BlobDirectory|Spécifie le répertoire d’objets blob qui contient les fichiers d’objets blob à télécharger. Le répertoire d’objet blob est une structure hiérarchique virtuelle.| |SearchRecursively|Spécifie si la recherche doit être récursive dans les sous-répertoires.| |LocalDirectory|Spécifie le répertoire local dans lequel les fichiers d’objets blob téléchargés seront stockés.| |FileName|Spécifie un filtre de nom pour sélectionner des fichiers obéissant à un schéma de nom spécifié. Par exemple, `MySheet*.xls\*` inclut des fichiers tels que `MySheet001.xls` et `MySheetABC.xlsx`.| |TimeRangeFrom/TimeRangeTo|Spécifie une plage de temps pour appliquer un filtre. Les fichiers modifiés après **TimeRangeFrom** et avant **TimeRangeTo** sont inclus.|
59.644444
298
0.789121
fra_Latn
0.929324
0c0f2d2220de3fe710b1fc89d18a6d1f2dc4c0d9
2,046
md
Markdown
Lync/LyncServer/lync-server-2013-setting-up-the-infrastructure-for-archiving.md
GabGonzalezPM/OfficeDocs-SkypeForBusiness
97269bfc910d64bda83ff270da636a1c2434a9d7
[ "CC-BY-4.0", "MIT" ]
1
2020-07-07T17:18:48.000Z
2020-07-07T17:18:48.000Z
Lync/LyncServer/lync-server-2013-setting-up-the-infrastructure-for-archiving.md
GabGonzalezPM/OfficeDocs-SkypeForBusiness
97269bfc910d64bda83ff270da636a1c2434a9d7
[ "CC-BY-4.0", "MIT" ]
null
null
null
Lync/LyncServer/lync-server-2013-setting-up-the-infrastructure-for-archiving.md
GabGonzalezPM/OfficeDocs-SkypeForBusiness
97269bfc910d64bda83ff270da636a1c2434a9d7
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Lync Server 2013: Setting up the infrastructure for Archiving' ms.reviewer: ms.author: v-lanac author: lanachin f1.keywords: - NOCSH TOCTitle: Setting up the infrastructure for Archiving ms:assetid: d3995b0d-52c4-49f9-8d33-ea8d77c65a9d ms:mtpsurl: https://technet.microsoft.com/en-us/library/JJ205287(v=OCS.15) ms:contentKeyID: 48185494 ms.date: 07/23/2014 manager: serdars mtps_version: v=OCS.15 --- # Setting up the infrastructure for Archiving in Lync Server 2013 _**Topic Last Modified:** 2012-10-01_ The infrastructure requirements for Archiving are the same as for your Lync Server deployment, except for storage. No additional infrastructure setup is required, except for setting up storage using Exchange 2013 storage, Archiving databases, or both. For details about infrastructure requirements for Lync Server 2013, see [Determining your infrastructure requirements for Lync Server 2013](lync-server-2013-determining-your-infrastructure-requirements.md) in the Planning documentation and [Preparing the infrastructure and systems for Lync Server 2013](lync-server-2013-preparing-the-infrastructure-and-systems.md) in the Deployment documentation. For details about storage requirements for Archiving, see [Technical requirements for Archiving in Lync Server 2013](lync-server-2013-technical-requirements-for-archiving.md) in the Planning documentation, [Setting up system platforms for Archiving in Lync Server 2013](lync-server-2013-setting-up-system-platforms-for-archiving.md) in the Deployment documentation, and [Setting up storage for Archiving in Lync Server 2013](lync-server-2013-setting-up-storage-for-archiving.md) in the Deployment documentation.
43.531915
1,162
0.786413
eng_Latn
0.735298
0c0f8b2d0804f2f9c02b7fca4e556bc55d70c7d7
4,887
md
Markdown
desktop-src/Bluetooth/bluetooth-and-connect.md
dianmsft/win32
f07b550595a83e44dd2fb6e217525edd10a0341b
[ "CC-BY-4.0", "MIT" ]
4
2021-07-26T16:18:49.000Z
2022-02-19T02:00:21.000Z
desktop-src/Bluetooth/bluetooth-and-connect.md
dianmsft/win32
f07b550595a83e44dd2fb6e217525edd10a0341b
[ "CC-BY-4.0", "MIT" ]
2
2020-04-09T17:00:51.000Z
2020-04-09T18:30:01.000Z
desktop-src/Bluetooth/bluetooth-and-connect.md
dianmsft/win32
f07b550595a83e44dd2fb6e217525edd10a0341b
[ "CC-BY-4.0", "MIT" ]
2
2020-07-19T02:58:48.000Z
2021-03-06T21:09:47.000Z
--- title: Bluetooth and connect description: Bluetooth uses the connect function to connect to a target Bluetooth device, using a previously created Bluetooth socket. ms.assetid: f9ab3934-7698-4f5e-8194-cca86685a4f8 keywords: - Bluetooth Bluetooth - connect Bluetooth - Bluetooth and connect Bluetooth ms.topic: article ms.date: 05/31/2018 --- # Bluetooth and connect Bluetooth uses the [**connect**](https://docs.microsoft.com/windows/desktop/api/winsock2/nf-winsock2-connect) function to connect to a target Bluetooth device, using a previously created Bluetooth socket. The *name* parameter of the **connect** function, which is a [**SOCKADDR\_BTH**](/windows/desktop/api/Ws2bth/ns-ws2bth-sockaddr_bth) structure, must specify a target Bluetooth device. Two mechanisms are used to identify the target device: - The [**SOCKADDR\_BTH**](/windows/desktop/api/Ws2bth/ns-ws2bth-sockaddr_bth) structure can directly specify the port number to which a connect is requested. This mechanism requires the application to perform its own SDP queries prior to attempting a [**connect**](https://docs.microsoft.com/windows/desktop/api/winsock2/nf-winsock2-connect) operation. - The [**SOCKADDR\_BTH**](/windows/desktop/api/Ws2bth/ns-ws2bth-sockaddr_bth) structure can specify the unique service class ID of the service to which it wants to connect. If the peer device has more than one port that corresponds to the service class ID, the [**connect**](https://docs.microsoft.com/windows/desktop/api/winsock2/nf-winsock2-connect) function call connects to the first valid service. This mechanism can be used without prior SDP queries. When using the [**SOCKADDR\_BTH**](/windows/desktop/api/Ws2bth/ns-ws2bth-sockaddr_bth) structure with the [**connect**](https://docs.microsoft.com/windows/desktop/api/winsock2/nf-winsock2-connect) function, the following requirements apply: - The **btAddr** member must be a valid remote radio address. - For the **serviceClassId** member, if the port member is zero, the system attempts to use **serviceClassId** to resolve the remote port corresponding to the service. The service class is a normalized 128-bit GUID, defined by the Bluetooth specification. Common GUIDs are defined by the Bluetooth Assigned Numbers document. Alternatively, a unique GUID may be used for a domain-specific application. - The **port** member must be a valid remote port, or zero if the **serviceClassId** member is specified. The following table lists the result codes for Bluetooth and the [**connect**](https://docs.microsoft.com/windows/desktop/api/winsock2/nf-winsock2-connect) function. | Error (code) | Description | |----------------------------------|------------------------------------------------------------------------------------| | WSAEISCONN (10056) | The [**connect**](https://docs.microsoft.com/windows/desktop/api/winsock2/nf-winsock2-connect) function was called for an already connected socket. | | WSAEACCES (10013) | Connecting application requested authentication, but authentication failed. | | WSAENOBUFS (10055) | Unrecoverable out-of-memory error. | | WSAEADDRINUSE (10048) | The port/channel number requested is in use. | | WSAETIMEDOUT (10060) | The I/O timed out at the Bluetooth radio level (PAGE\_TIMEOUT). | | WSAEDISCON (10101) | The RFCOMM channel disconnected by remote peer. | | WSAECONNRESET (10054) | The RFCOMM multiplexor (session) disconnected by remote peer. | | WSAECONNABORTED (10053) | Socket shut down by application. |
| | WSAENETUNREACH10051<br/> | Error other than time-out at L2CAP or Bluetooth radio level. | | WSAEHOSTDOWN10064<br/> | The RFCOMM received DM response. | | WSAENETDOWN10050<br/> | Unexpected network error. | | WSAESHUTDOWN10058<br/> | The L2CAP channel disconnected by remote peer. | | WSAEADDRNOTAVAIL10049<br/> | Bluetooth port/channel or device address not valid. | | WSAEINVAL10022<br/> | Plug and Play, driver-stack event, or other error caused failure. | ## Related topics <dl> <dt> [Windows Sockets](https://docs.microsoft.com/windows/desktop/WinSock/windows-sockets-start-page-2) </dt> <dt> [**connect**](https://docs.microsoft.com/windows/desktop/api/winsock2/nf-winsock2-connect) </dt> <dt> [**SOCKADDR\_BTH**](/windows/desktop/api/Ws2bth/ns-ws2bth-sockaddr_bth) </dt> </dl>
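Since the article describes two addressing mechanisms, a minimal C sketch of the second one (service class ID resolution) may help. This is a hedged illustration, not taken from the article: the Serial Port Profile UUID (00001101-0000-1000-8000-00805F9B34FB) is an assumed target service, and the remote radio address is assumed to come from a prior discovery step.

```c
// Hedged sketch: connect to a remote RFCOMM service by service class ID
// (the second mechanism above). The SPP UUID and the idea of getting
// remoteRadio from a prior inquiry are assumptions, not from the article.
#include <winsock2.h>
#include <ws2bth.h>
#include <stdio.h>
#include <string.h>

// Serial Port Profile: 00001101-0000-1000-8000-00805F9B34FB (assumed target).
static const GUID SPP_UUID =
    { 0x00001101, 0x0000, 0x1000, { 0x80, 0x00, 0x00, 0x80, 0x5F, 0x9B, 0x34, 0xFB } };

int connect_rfcomm(BTH_ADDR remoteRadio)
{
    WSADATA wsd;
    if (WSAStartup(MAKEWORD(2, 2), &wsd) != 0)
        return -1;

    SOCKET s = socket(AF_BTH, SOCK_STREAM, BTHPROTO_RFCOMM);
    if (s == INVALID_SOCKET) {
        WSACleanup();
        return -1;
    }

    SOCKADDR_BTH addr;
    memset(&addr, 0, sizeof(addr));
    addr.addressFamily  = AF_BTH;
    addr.btAddr         = remoteRadio; // must be a valid remote radio address
    addr.serviceClassId = SPP_UUID;    // system resolves the remote port from this...
    addr.port           = 0;           // ...because port is zero

    if (connect(s, (SOCKADDR *)&addr, sizeof(addr)) == SOCKET_ERROR) {
        // Failures map to the result codes listed in the table above.
        printf("connect failed: %d\n", WSAGetLastError());
        closesocket(s);
        WSACleanup();
        return -1;
    }

    // ... send()/recv() on s ...
    closesocket(s);
    WSACleanup();
    return 0;
}
```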
69.814286
458
0.658072
eng_Latn
0.944085
0c0fbd78fbeba93ec909d44e6ce20c9ab62dd0da
15,670
markdown
Markdown
README.markdown
dtrott/trello4j
e005bb0024d74f686b4bcea0291bd4fe0ec7ce23
[ "Apache-2.0" ]
1
2015-02-06T08:30:24.000Z
2015-02-06T08:30:24.000Z
README.markdown
trygvis/trello4j
e005bb0024d74f686b4bcea0291bd4fe0ec7ce23
[ "Apache-2.0" ]
null
null
null
README.markdown
trygvis/trello4j
e005bb0024d74f686b4bcea0291bd4fe0ec7ce23
[ "Apache-2.0" ]
1
2019-07-04T04:15:32.000Z
2019-07-04T04:15:32.000Z
## Introduction

**trello4j** is a java wrapper around [Trello API](https://trello.com/docs/api/index.html).

You need to get an API key and optionally generate a token [here](https://trello.com/1/appKey/generate) to be able to use Trello's API.

Please report any issues and/or participate in the development [here](https://trello.com/board/trello4j/4f92b80ba73738db6cdd4309) :)

## Getting started

### Get trello4j from unofficial maven repo

    <repository>
        <id>joelso-mvn-repo</id>
        <name>joelso github mvn repo</name>
        <url>https://raw.github.com/joelso/joelso-mvn-repo/master/snapshots/</url>
    </repository>

    ...

    <dependency>
        <groupId>org.trello4j</groupId>
        <artifactId>trello4j</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>

### Get source and build it

    git clone [email protected]:joelso/trello4j.git
    cd trello4j
    mvn install

Now you've got two options:

1. Use trello4j from your local maven repo, add dependency groupId: org.trello4j / artifactId: trello4j
2. Use the jar that was built in directory **target/**

## Usage

    // myToken is optional, set to null if you are accessing public data
    Trello trello = new TrelloImpl("myApiKey", "myToken");

    // example: get organization by its name
    Organization org = trello.getOrganization("fogcreek");

<table>
<tr><th>Method</th><th>Version</th></tr>
<tr><th colspan="2">Actions</th></tr>
<tr><td>GET /1/actions/[action_id]</td> <td>IMPLEMENTED</td></tr>
<tr><td>GET /1/actions/[action_id]/[field]</td> <td>TODO</td></tr>
<tr><td>GET /1/actions/[action_id]/board</td> <td>IMPLEMENTED</td></tr>
<tr><td>GET /1/actions/[action_id]/board/[field]</td> <td>TODO</td></tr>
<tr><td>GET /1/actions/[action_id]/card</td> <td>IMPLEMENTED</td></tr>
<tr><td>GET /1/actions/[action_id]/card/[field]</td> <td>TODO</td></tr>
<tr><td>GET /1/actions/[action_id]/list</td> <td>IMPLEMENTED</td></tr>
<tr><td>GET /1/actions/[action_id]/list/[field]</td> <td>TODO</td></tr>
<tr><td>GET /1/actions/[action_id]/member</td> <td>IMPLEMENTED</td></tr>
<tr><td>GET /1/actions/[action_id]/member/[field]</td> <td>TODO</td></tr>
<tr><td>GET /1/actions/[action_id]/memberCreator</td> <td>IMPLEMENTED</td></tr>
<tr><td>GET /1/actions/[action_id]/memberCreator/[field]</td> <td>TODO</td></tr>
<tr><td>GET /1/actions/[action_id]/organization</td> <td>IMPLEMENTED</td></tr>
<tr><td>GET /1/actions/[action_id]/organization/[field]</td> <td>TODO</td></tr>
<tr><th colspan="2">Boards</th></tr>
<tr><td>GET /1/boards/[board_id] </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/[field] </td><td>TODO</td></tr>
<tr><td>GET /1/boards/[board_id]/actions </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/cards </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/cards/[filter] </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/cards/[idCard] </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/checklists </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/lists </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/lists/[filter] </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/members </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/members/[filter] </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/membersInvited </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/membersInvited/[field]</td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/myPrefs </td><td>IMPLEMENTED</td></tr>
<tr><td>GET /1/boards/[board_id]/organization </td><td>IMPLEMENTED</td></tr>
<tr><td>GET
/1/boards/[board_id]/organization/[field] </td><td>IMPLEMENTED</td></tr> <tr><td>PUT /1/boards/[board_id] </td><td>TODO</td></tr> <tr><td>PUT /1/boards/[board_id]/closed </td><td>TODO</td></tr> <tr><td>PUT /1/boards/[board_id]/desc </td><td>TODO</td></tr> <tr><td>PUT /1/boards/[board_id]/name </td><td>TODO</td></tr> <tr><td>POST /1/boards </td><td>TODO</td></tr> <tr><td>POST /1/boards/[board_id]/checklists </td><td>TODO</td></tr> <tr><td>POST /1/boards/[board_id]/lists </td><td>TODO</td></tr> <tr><td>POST /1/boards/[board_id]/myPrefs </td><td>TODO</td></tr> <tr><th colspan="2">Cards</th></tr> <tr><td>GET /1/cards/[card_id] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/[field] </td><td>TODO</td></tr> <tr><td>GET /1/cards/[card_id]/actions </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/attachments </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/board </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/board/[field] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/checkItemStates </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/checklists </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/list </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/cards/[card_id]/list/[field] </td><td>TODO</td></tr> <tr><td>GET /1/cards/[card_id]/members </td><td>IMPLEMENTED</td></tr> <tr><td>PUT /1/cards/[card_id] </td><td>TODO</td></tr> <tr><td>PUT /1/cards/[card_id]/closed </td><td>TODO</td></tr> <tr><td>PUT /1/cards/[card_id]/desc </td><td>TODO</td></tr> <tr><td>PUT /1/cards/[card_id]/due </td><td>TODO</td></tr> <tr><td>PUT /1/cards/[card_id]/idList </td><td>TODO</td></tr> <tr><td>PUT /1/cards/[card_id]/name </td><td>TODO</td></tr> <tr><td>POST /1/cards </td><td>IMPLEMENTED</td></tr> <tr><td>POST /1/cards/[card_id]/actions/comments </td><td>IMPLEMENTED</td></tr> <tr><td>POST /1/cards/[card_id]/attachments </td><td>IMPLEMENTED</td></tr> <tr><td>POST /1/cards/[card_id]/checklists </td><td>IMPLEMENTED</td></tr> <tr><td>POST /1/cards/[card_id]/labels </td><td>IMPLEMENTED</td></tr> <tr><td>POST /1/cards/[card_id]/members </td><td>IMPLEMENTED</td></tr> <tr><td>POST /1/cards/[card_id]/membersVoted </td><td>IMPLEMENTED</td></tr> <tr><td>DELETE /1/cards/[card_id] </td><td>IMPLEMENTED</td></tr> <tr><td>DELETE /1/cards/[card_id]/checklists/[idChecklist] </td><td>IMPLEMENTED</td></tr> <tr><td>DELETE /1/cards/[card_id]/labels/[color] </td><td>IMPLEMENTED</td></tr> <tr><td>DELETE /1/cards/[card_id]/members/[idMember] </td><td>IMPLEMENTED</td></tr> <tr><td>DELETE /1/cards/[card_id]/membersVoted/[idMember] </td><td>IMPLEMENTED</td></tr> <tr><th colspan="2">Checklists</th></tr> <tr><td>GET /1/checklists/[checklist_id] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/checklists/[checklist_id]/[field] </td><td>TODO</td></tr> <tr><td>GET /1/checklists/[checklist_id]/board </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/checklists/[checklist_id]/board/[field] </td><td>TODO</td></tr> <tr><td>GET /1/checklists/[checklist_id]/cards </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/checklists/[checklist_id]/cards/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/checklists/[checklist_id]/checkItems </td><td>IMPLEMENTED</td></tr> <tr><td>PUT /1/checklists/[checklist_id] </td><td>TODO</td></tr> <tr><td>PUT /1/checklists/[checklist_id]/name </td><td>TODO</td></tr> <tr><td>POST /1/checklists </td><td>TODO</td></tr> <tr><td>POST /1/checklists/[checklist_id]/checkItems </td><td>TODO</td></tr> <tr><td>DELETE 
/1/checklists/[checklist_id]/checkItems/[idCheckItem] </td><td>TODO</td></tr> <tr><th colspan="2">Lists</th></tr> <tr><td>GET /1/lists/[list_id] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/lists/[list_id]/[field] </td><td>TODO</td></tr> <tr><td>GET /1/lists/[list_id]/actions </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/lists/[list_id]/board </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/lists/[list_id]/board/[field] </td><td>TODO</td></tr> <tr><td>GET /1/lists/[list_id]/cards </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/lists/[list_id]/cards/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>PUT /1/lists/[list_id] </td><td>TODO</td></tr> <tr><td>PUT /1/lists/[list_id]/closed </td><td>TODO</td></tr> <tr><td>PUT /1/lists/[list_id]/name </td><td>TODO</td></tr> <tr><td>POST /1/lists </td><td>TODO</td></tr> <tr><td>POST /1/lists/[list_id]/cards </td><td>TODO</td></tr> <tr><th colspan="2">Members</th></tr> <tr><td>GET /1/members/[member_id or username] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/[field] </td><td>TODO</td></tr> <tr><td>GET /1/members/[member_id or username]/actions </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/boards </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/boards/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/boardsInvited </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/boardsInvited/[field] </td><td>TODO</td></tr> <tr><td>GET /1/members/[member_id or username]/cards </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/cards/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/notifications </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/notifications/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/organizations </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/organizations/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/organizationsInvited </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/members/[member_id or username]/organizationsInvited/[field] </td><td>TODO</td></tr> <tr><td>PUT /1/members/[member_id or username] </td><td>TODO</td></tr> <tr><td>PUT /1/members/[member_id or username]/bio </td><td>TODO</td></tr> <tr><td>PUT /1/members/[member_id or username]/fullName </td><td>TODO</td></tr> <tr><td>PUT /1/members/[member_id or username]/initials </td><td>TODO</td></tr> <tr><th colspan="2">Notifications</th></tr> <tr><td>GET /1/notifications/[notification_id] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/notifications/[notification_id]/[field] </td><td>TODO</td></tr> <tr><td>GET /1/notifications/[notification_id]/board </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/notifications/[notification_id]/board/[field] </td><td>TODO</td></tr> <tr><td>GET /1/notifications/[notification_id]/card </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/notifications/[notification_id]/card/[field] </td><td>TODO</td></tr> <tr><td>GET /1/notifications/[notification_id]/list </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/notifications/[notification_id]/list/[field] </td><td>TODO</td></tr> <tr><td>GET /1/notifications/[notification_id]/member </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/notifications/[notification_id]/member/[field] </td><td>TODO</td></tr> <tr><td>GET /1/notifications/[notification_id]/memberCreator 
</td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/notifications/[notification_id]/memberCreator/[field] </td><td>TODO</td></tr> <tr><td>GET /1/notifications/[notification_id]/organization </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/notifications/[notification_id]/organization/[field] </td><td>TODO</td></tr> <tr><th colspan="2">Organizations</th></tr> <tr><td>GET /1/organizations/[org_id or name] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/organizations/[org_id or name]/[field] </td><td>TODO</td></tr> <tr><td>GET /1/organizations/[org_id or name]/actions </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/organizations/[org_id or name]/boards </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/organizations/[org_id or name]/boards/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/organizations/[org_id or name]/members </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/organizations/[org_id or name]/members/[filter] </td><td>IMPLEMENTED</td></tr> <tr><td>PUT /1/organizations/[org_id or name] </td><td>TODO</td></tr> <tr><td>PUT /1/organizations/[org_id or name]/desc </td><td>TODO</td></tr> <tr><td>PUT /1/organizations/[org_id or name]/displayName </td><td>TODO</td></tr> <tr><td>PUT /1/organizations/[org_id or name]/name </td><td>TODO</td></tr> <tr><td>PUT /1/organizations/[org_id or name]/website </td><td>TODO</td></tr> <tr><td>POST /1/organizations </td><td>TODO</td></tr> <tr><td>DELETE /1/organizations/[org_id or name] </td><td>TODO</td></tr> <tr><th colspan="2">Tokens</th></tr> <tr><td>GET /1/tokens/[token] </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/tokens/[token]/[field] </td><td>TODO</td></tr> <tr><td>GET /1/tokens/[token]/member </td><td>IMPLEMENTED</td></tr> <tr><td>GET /1/tokens/[token]/member/[field] </td><td>TODO</td></tr> <tr><th colspan="2">Types</th></tr> <tr><td>GET /1/types/[id] </td><td>IMPLEMENTED</td></tr> </table> ## Contributors [skydjol](https://github.com/skydjol)
68.130435
134
0.535227
yue_Hant
0.387191
0c1025a18a244eeb609982e4f96a5fadec32e2d5
1,700
md
Markdown
articles/api-management/policies/add-correlation-id.md
changeworld/azure-docs.it-
34f70ff6964ec4f6f1a08527526e214fdefbe12a
[ "CC-BY-4.0", "MIT" ]
1
2017-06-06T22:50:05.000Z
2017-06-06T22:50:05.000Z
articles/api-management/policies/add-correlation-id.md
changeworld/azure-docs.it-
34f70ff6964ec4f6f1a08527526e214fdefbe12a
[ "CC-BY-4.0", "MIT" ]
41
2016-11-21T14:37:50.000Z
2017-06-14T20:46:01.000Z
articles/api-management/policies/add-correlation-id.md
changeworld/azure-docs.it-
34f70ff6964ec4f6f1a08527526e214fdefbe12a
[ "CC-BY-4.0", "MIT" ]
7
2016-11-16T18:13:16.000Z
2017-06-26T10:37:55.000Z
---
title: Sample API Management policy - Add a header containing a correlation ID
titleSuffix: Azure API Management
description: 'Azure API Management policy sample: demonstrates how to add a header containing a correlation ID to the inbound request.'
services: api-management
documentationcenter: ''
author: vladvino
manager: cfowler
editor: ''
ms.service: api-management
ms.workload: mobile
ms.tgt_pltfrm: na
ms.topic: article
ms.date: 10/13/2017
ms.author: apimpm
ms.openlocfilehash: 922565d26274aee12c9397c08c19330b4fce00e7
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 03/29/2021
ms.locfileid: "92076301"
---
# <a name="add-a-header-containing-a-correlation-id"></a>Add a header containing a correlation ID

This article shows an Azure API Management policy sample that demonstrates how to add a header containing a correlation ID to the inbound request. To set or edit policy code, follow the steps described in [Set or edit policies](../set-edit-policies.md). For other examples, see the [policy samples](../policy-reference.md) article.

## <a name="policy"></a>Policy

Paste the code into the **inbound** block.

[!code-xml[Main](../../../api-management-policy-samples/examples/Add correlation id to inbound request.policy.xml)]

## <a name="next-steps"></a>Next steps

Learn more about API Management policies:

+ [Transformation policies](../api-management-transformation-policies.md)
+ [Policy samples](../policy-reference.md)
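The sample itself is pulled in from the policy file referenced above. As a rough illustration only, an inbound policy of this kind can look like the hedged sketch below; the header name `x-correlation-id` and the `Guid.NewGuid()` expression are illustrative assumptions, not necessarily what the linked sample uses.

```xml
<inbound>
    <base />
    <!-- Illustrative only: add a correlation ID header to the inbound request,
         keeping any value the caller already supplied. Header name and value
         expression are assumptions, not taken from the linked sample. -->
    <set-header name="x-correlation-id" exists-action="skip">
        <value>@(Guid.NewGuid().ToString())</value>
    </set-header>
</inbound>
```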
44.736842
433
0.791176
ita_Latn
0.969045
0c1085379a428283387b38c2b2a12a70aaf457f5
9,380
md
Markdown
vault-doc/src/site/markdown/validation.md
hearora/jackrabbit-filevault
5bdaeb76d343793d0fe4f587207fce312e609a68
[ "Apache-2.0" ]
null
null
null
vault-doc/src/site/markdown/validation.md
hearora/jackrabbit-filevault
5bdaeb76d343793d0fe4f587207fce312e609a68
[ "Apache-2.0" ]
null
null
null
vault-doc/src/site/markdown/validation.md
hearora/jackrabbit-filevault
5bdaeb76d343793d0fe4f587207fce312e609a68
[ "Apache-2.0" ]
null
null
null
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Validation ## Overview The artifact `org.apache.jackrabbit.vault:vault-validation` provides both an API for validating FileVault packages as well as an SPI for implementing validators. In addition this JAR contains useful validators. This validation framework is supposed to be used as 1. dependency for custom validators (SPI) 2. library for build tools which want to call validation on FileVault packages (API and Implementation) ## Validators ### Settings It is possible to adjust every validator registered in the system (both default and external validators) via settings. Settings have a common section (which apply to every validator) but also support validator specific options. Element | Description | Default Value --- | --- | --- `defaultSeverity` | Each validation message has a severity. The default validation message severity of each validator can be influenced with this parameter. If a validator emits different types of validation messages the other types can be influenced via `options`. | `error` `isDisabled` | A boolean flag defining whether validator is disabled or not. | `false` `options` | A map (i.e. keys and values) of validator specific options. The supported options for the validators are outlined below. | empty Each validator settings are set for a specific validator id. ### Standard Validators ID | Description | Options --- | --- | --- `jackrabbit-filter` | Checks for validity of the [filter.xml](./filter.html) (according to a predefined XML schema). In addition checks that every [docview xml node](./docview.html) is contained in the filter. It also makes sure that all filter root's ancestors are either known/valid roots or are contained in the package dependencies. For ancestor nodes which are not covered by a filter at least a `warn` is emitted. Also it makes sure that `pattern` values for includes/excludes as well as `root` values for each filter entry are valid. Orphaned filter rules (i.e. ones not being necessary) lead to validation issues as well. | *severityForUncoveredAncestorNodes*: severity of validation messages for uncovered ancestor nodes.<br/>*severityForUncoveredFilterRootAncestors*: severity of validation messages for uncovered filter root ancestors. (default = `error`, for package type=application or `warn` for all other package types)<br/>*severityForOrphanedFilterRules*: severity of validation messages for orphaned filter rules (default = `info`)<br/>*validRoots*: comma-separated list of valid roots (default = `"/,/libs,/apps,/etc,/var,/tmp,/content"`) `jackrabbit-properties ` | Checks for validity of the [properties.xml](./properties.html) | none `jackrabbit-dependencies` | Checks for overlapping filter roots of the referenced package dependencies as well as for valid package dependency references (i.e. 
references which can be resolved). | *severityForUnresolvedDependencies*: severity of validation messages for unresolved dependencies (default = `warn`) `jackrabbit-docviewparser` | Checks if all docview files in the package are compliant with the [(extended) Document View Format](docview.html). This involves checking for XML validity as well as checking for correct property types. | none `jackrabbit-emptyelements` | Check for empty elements within DocView files (used for ordering purposes, compare with [(extended) Document View Format](docview.html)) which are included in the filter with import=replace as those are actually not replaced! | none `jackrabbit-mergelimitations` | Checks for the limitation of import mode=merge outlined at [JCRVLT-255][jcrvlt-255]. | none `jackrabbit-oakindex` | Checks if the package (potentially) modifies/creates an OakIndexDefinition. This is done by evaluating both the filter.xml for potential matches as well as the actual content for nodes with jcr:primaryType `oak:indexDefinition`. | none `jackrabbit-packagetype` | Checks if the package type is correctly set for this package, i.e. is compliant with all rules outlined at [JCRVLT-170][jcrvlt-170]. | *jcrInstallerNodePathRegex*: the regex of the node paths which all OSGi bundles and configurations within packages must match ([JCR Installer](https://sling.apache.org/documentation/bundles/jcr-installer-provider.html)) (default=`/([^/]*/){0,4}?(install|config)(\\.[^/]*)*/(\\d{1,3}/)?.+?\\.`).<br/>*additionalJcrInstallerFileNodePathRegex*: the regex of all file node paths which all OSGi bundles and configurations within packages must match. This must match in addition to the regex from `jcrInstallerNodePathRegex` (default=`.+?\\.(jar|config|cfg|cfg\\.json)`).<br/>*legacyTypeSeverity*: the severity of the validation message for package type `mixed` (default = `warn`).<br/>*noTypeSeverity*: the severity of the validation message when package type is not set at all (default = `warn`).<br/>*prohibitMutableContent*: boolean flag determining whether package type `content` or `mixed` (mutable content) leads to a validation message with severity error (default = `false`). Useful when used with [Oak Composite NodeStore](https://jackrabbit.apache.org/oak/docs/nodestore/compositens.html).<br/>*prohibitImmutableContent*: boolean flag determining whether package type `app`, `container` or `mixed` (immutable content) leads to a validation message with severity error (default = `false`). Useful when used with [Oak Composite NodeStore](https://jackrabbit.apache.org/oak/docs/nodestore/compositens.html).<br/>*allowComplexFilterRulesInApplicationPackages*: boolean flag determining whether complex rules (containing includes/excludes) are allowed in application content packages (default = `false`) `jackrabbit-primarynodetype` | Checks if all non empty elements within [DocView files](docview.html) have the mandatory property `jcr:primaryType` set. | none ### Custom Validators The SPI for implementing custom validators is provided in [this package][javadoc.spi]. The validators are registered via a `ValidatorFactory` which is supposed to be registered via the [ServiceLoader][javadoc.serviceloader]. The SPI is exported from the artifact `org.apache.jackrabbit.vault:vault-validation` as well. 
The validator which is returned via the `ValidatorFactory` is one of the following types below package `org.apache.jackrabbit.filevault.maven.packaging.validator` Validator Class | Description | `META-INF` or `jcr_root` | Called from another validator --- | --- | --- | --- `DocumentViewXmlValidator` | Called for each node serialized into a DocView element | `jcr_root` | no `NodePathValidator` | Called for each node path contained in the package (even for ones not listed in the filter.xml) | `jcr_root` | no `JcrPathValidator` | Called for each file path contained in the package | `jcr_root` | no `GenericJcrDataValidator` | Called for all serialized nodes which are *not* DocViewXml | `jcr_root` | no `FilterValidator`| Called for the `vault/filter.xml` file | `META-INF` | yes (`jackrabbit-filter`) `PropertiesValidator` | Called for the `vault/properties.xml` file | `META-INF` | yes (`jackrabbit-properties`) `GenericMetaInfDataValidator` | Called for all META-INF files (even `vault/filter.xml` nor `vault/properties.xml`). In general prefer the higher level validators (i.e. `FilterValidator` or `PropertiesValidator` if possible) | `META-INF` | no `MetaInfFilePathValidator` | Called for each file path contained in the package below META-INF | `META-INF` | no ## Validation API The API for calling validation on specific files is provided in [this package][javadoc.api]. First you need one instance of `ValidationExecutorFactory`. For each new `ValidationContext` (i.e. new package context) you create a new `ValidationExecutor` via `ValidationExecutorFactory.createValidationExecutor(...)`. For each file you then call either - `ValidationExecutor.validateJcrRoot(...)` for input streams referring to files which are supposed to end up in the repository or - `ValidationExecutor.validateMetaInf(...)` for input streams representing metaInf data of the FileVault package The Validation API is currently used by the [FileVault Package Maven Plugin][filevault.maven]. [javadoc.spi]: apidocs/org/apache/jackrabbit/vault/validation/spi/package-summary.html [javadoc.api]: apidocs/org/apache/jackrabbit/vault/validation/package-summary.html [javadoc.serviceloader]: https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html [filevault.maven]: http://jackrabbit.apache.org/filevault-package-maven-plugin/ [jcrvlt-255]: https://issues.apache.org/jira/browse/JCRVLT-255 [jcrvlt-170]: https://issues.apache.org/jira/browse/JCRVLT-170
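To make the API flow above concrete, here is a hedged Java sketch. Only the type and method names `ValidationExecutorFactory`, `createValidationExecutor`, `validateJcrRoot` and `validateMetaInf` come from the text above; every import location, constructor argument and parameter list shown is an assumption, so consult the linked Javadoc for the real signatures.

```java
// Hedged sketch of the validation API flow described above. Argument
// lists, the ValidationViolation type and the ValidationContext import
// are assumptions; check the linked Javadoc for the real signatures.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collection;

import org.apache.jackrabbit.vault.validation.ValidationExecutor;
import org.apache.jackrabbit.vault.validation.ValidationExecutorFactory;
import org.apache.jackrabbit.vault.validation.ValidationViolation;   // name assumed
import org.apache.jackrabbit.vault.validation.spi.ValidationContext; // location assumed

public class PackageValidationSketch {

    // One executor per package context; route each file to the matching method.
    public static void validateEntry(ValidationContext packageContext,
                                     Path packageRoot, Path file,
                                     boolean isMetaInf) throws Exception {
        ValidationExecutorFactory factory =
                new ValidationExecutorFactory(PackageValidationSketch.class.getClassLoader());
        ValidationExecutor executor =
                factory.createValidationExecutor(packageContext, false, false, null);

        try (InputStream input = Files.newInputStream(file)) {
            Path relativePath = packageRoot.relativize(file);
            Collection<ValidationViolation> violations = isMetaInf
                    ? executor.validateMetaInf(input, relativePath, packageRoot)  // META-INF data
                    : executor.validateJcrRoot(input, relativePath, packageRoot); // repository content
            violations.forEach(System.err::println);
        }
    }
}
```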
96.701031
1,766
0.779424
eng_Latn
0.966742
0c111027bb52b8a1d8e3913b8a9b541d86dbfe28
2,981
md
Markdown
norms.md
2201gh-team-saturn/grace-shopper-main
e6f108cc42d4a5ffa80bb472003e938200bba180
[ "MIT" ]
1
2022-03-11T17:44:08.000Z
2022-03-11T17:44:08.000Z
norms.md
2201gh-team-saturn/grace-shopper-main
e6f108cc42d4a5ffa80bb472003e938200bba180
[ "MIT" ]
98
2022-03-01T20:40:47.000Z
2022-03-09T01:40:47.000Z
norms.md
2201gh-team-saturn/grace-shopper-main
e6f108cc42d4a5ffa80bb472003e938200bba180
[ "MIT" ]
1
2022-03-28T02:32:23.000Z
2022-03-28T02:32:23.000Z
# Establishing Norms

## Roles

* Task manager
  - Responsible for reviewing that each feature meets all requirements upon pull request submission
* Project Manager
  - Responsible for managing the timing of our project, ensuring timeliness is at the forefront of what we're doing
* Request manager
  - Responsible for reviewing/approving pull requests
* Worker Bee
  - Responsible for working through assigned tasks, a day to focus on the tasks at hand with less on your plate
  - Investigator, if someone is having trouble and experiencing a bug this person is the first line of defense

## Daily process

* Morning standup at 11:15AM EST/8:15AM PST started by yesterday's Task Manager
  - New roles assigned (by yesterday’s Task manager)
  - Yesterday? Today? Obstacles? (by today’s Task manager)
* Pair program on assigned tasks
* Record issues that are non-blocking and discuss right after lunch at re-group meeting
* Bring up any blocking issues after 15-20 minutes of spinning your wheels. Worker Bee will be the first responder, but anyone who's available can come join debugging session.
* At 2:30pm have re-group meeting where you check-in with blockers, status updates and plan for EOD merge
* We don't merge except for once a day to main branch
* Immediately prior to end of day make applicable PRs, review each other’s code and merge into main

## Team expectations

* Each of us reserves the rights to our nights and weekends, and we have all been straightforward about when and how we can commit to working on the project
* We plan on taking a break on Sunday to get the rest we deserve
* If a group member plans on working off hours, or feels inspired, they'll send a quick heads up so everyone is on the same page
* We expect that no one will merge the main branch until we have had a code review, or we have discussed the newly added functionality (this will happen once a day)
* Anytime we get frustrated, be it our own code or during a discussion in a tense decision making stage, we take a 5 min breather then regroup
* Come to stand-up with root cause of any issues or bugs we've run into, with a quick what happened, where, why and how to avoid going forward
* If you're taking a break during working hours, you send a message to the slack channel to let us know you'll be unavailable
* We'll keep the paired programming sessions fluid and open to personal interpretation. People switch when they feel like they've reached a good stopping point

### Disagreements

Steps to reach an agreement:

1. Everyone says their piece about what direction they believe we should go, we have a conversation to start to see if we can get on the same page
2. If no agreement can be reached, or we're at a stalemate, then it goes up to a vote. We explain why we voted that way and why we feel strongly about it so hopefully no one feels hurt or left out of the decision
3. If residual feelings or thoughts exist and we still can't reach an agreement, we'll escalate to Rusty
72.707317
212
0.780275
eng_Latn
0.999825
0c11b41603de81462f2f3b614dfd3f41fa24876c
11,840
md
Markdown
node_modules/luxon/CHANGELOG.md
fspaolo/mywebsite
770bf910b8a6ae1ce22eb52123a9599d986d3047
[ "CC-BY-4.0" ]
3
2021-07-24T19:40:29.000Z
2022-03-29T10:45:34.000Z
node_modules/luxon/CHANGELOG.md
fspaolo/mywebsite
770bf910b8a6ae1ce22eb52123a9599d986d3047
[ "CC-BY-4.0" ]
1
2021-01-31T19:51:37.000Z
2021-01-31T19:51:37.000Z
node_modules/luxon/CHANGELOG.md
PabloDelHip/CRM-Tours
7dbe27a0dfd914b9ca4be03b22743cf7765e94fd
[ "MIT" ]
2
2021-03-23T18:42:23.000Z
2021-03-23T18:51:26.000Z
# Changelog ## 1.26.0 * Add fromISOTime, toISOTime and toMillis to Duration (#803) * Fix padding of negative years in IsoDate (#871) * Fix hasSame unit comparison (#798) * Export VERSION information (#794) * Durations are considered equal with extra zero units. Fixes #809 (#811) ## 1.25.0 * fix fromFormat with Intl formats containing non-breaking spaces * Support higher precisision in ISO milliseconds * Some fixes for 00:30 timezones * Fix some throwOnInvalid for invalid Intervals * Various doc fixes * Fix Interval#isSame for empty intervals * Mark package as side effect-free * Add support for intervals with a large number of seconds ## 1.24.1 (2020-05-04) * Remove erroneous `console.log` call ## 1.24.0 (2020-05-03) * Update polyfills for pollyfilled build ## 1.23.0 (2020-04-02) * Allow minus sign prefix when creating Duration from ISO ## 1.22.2 (2020-03-25) * Added more details to error messages for type errors ## 1.22.1 (2020-03-19) * Added support for ISO basic format to DateTime#toISO ## 1.22.0 (2020-01-26) * Fix setZone's handling of pre-1970 dates with milisecond components * Fix keepLocalTime for large jumps near the target zone's DST * Fix cache perf for toRelative() ## 1.21.3 (2019-11-28) * Fix parsing of meridiems in macro tokens in newer versions of v8 ## 1.21.2 (2019-11-18) * Fix bug in Chrome Canary that threw off time zone calculations ## 1.21.1 (2019-11-03) * Fix for quarter parsing * Some documentation updates ## 1.21.0 (2019-10-30) * Added quarter support to the parser * Fix some rounding issues in ISO formatting ## 1.20.0 (2019-10-29) * Added Duration#mapUnits * added Interval#toISODate and Interval#toISOTime * Some documentation fixes ## 1.19.3 * Cache offset values * Fix handling of negative sub 1-hour offsets ## 1.19.2 * Speculative fix for Node 6 ## 1.19.1 * Fix Intl.DateTimeFormat usage for polyfills ## 1.19.0 * Interval#splitAt now ignores input dates outside the interval * Don't allow decimals in DateTime creation ## 1.18.2 * Fix handling of decimals in DateTime#plus and #minus ## 1.18.1 * Fix validity when adding or subtracting time that exceeds Date max/min boundaries ## 1.18.0 * Add support for macro tokens in the parser ## 1.17.2 * Fix issue with `toRelative` using `style: short` with plural days ## 1.17.1 * Reject out-of-range numbers in DateTime.fromMillis * Reject 0s in ISO date inputs ## 1.17.0 * DateTime.min and DateTime.max throw if they get the wrong kind of arguments * Fixed throwOnInvalid logic for Interval * Added `DATETIME_MED_WITH_WEEKDAY` preset ## 1.16.1 * Catch errors trying to use Intl in weird versions of IE 11 ## 1.16.0 * Fixed locale default logic for `DateTime#toFormat("ZZZZ") ## 1.15.0 * Added `formatOffset` to Zones ## 1.14.0 * Allow the zone argument to Interval.fromISO with duration components * Ignore the zone argument to Duration factory methods ## 1.13.3 * Fix keepLocalTime calculations that span offset changes ## 1.13.2 * Fixed ISO formatting for dates > 999 ## 1.13.1 * Performance improvements for regex parsing ## 1.13.0 * Support numberSystem in fromFormat * Fix validity for bad initial zone specifiers ## 1.12.1 * Fix cross-month diffs in some scenarios * Fix time zone parsing when the time zone isn't at the end * Memoize IANA zone creation ## 1.12.0 * Add some explicit CDN support to the NPM package * Add week token to duration ISO support * Lots of cleanup and test coverage changes ## 1.11.4 * `setZone("local")` now returns the defaultZone if it is set * Fixes for the polyfilled build ## 1.11.3 * Allow 24:00 in ISO (and other) 
strings * Fix some bugs with the typecheck functions like `DateTime.isDateTime()` ## 1.11.2 * Fixed handling of some characters in fromFormat literal sections * Hanlde string values in object arguments to DateTime methods * Fixed toRelativeCalendar's handling of zones in the base date ## 1.11.1 * Fix DateTime#plus() when spanning across AD 100 ## 1.11.0 * Fix low-year handling for IANA zones * `DateTime#toLocal()` now uses the default locale * Fix zero duration formatting * Many documentation fixes ## 1.10.0 - Fix endOf("day") during DSTs (#399) - Add `Interval#mapEndpoints (#400) - Add `DateTime#zone` and `Info.normalizeZone` (#404) ## 1.9.0 - Add `DateTime#toRelative` and `DateTime#toRelativeCalendar` ## 1.8.3 - Allow "UTC" in the zone position of `fromSQL` - Force `isDateTime` and `isDuration` to return booleans in all cases ## 1.8.2 - Trim leading \u200e characters from offset names in Edge 16 and 17 ## 1.8.1 - Add `DateTime.fromSeconds` and `DateTime#toSeconds` ## 1.7.1 - Floor the seconds instead of rounding them when outputting the 'X' format - Change the options to toLocale to override the configuration (the previous options were essentially ignored) ## 1.6.2 - Fixing merge error that resulted in bad error messages ## 1.6.0 - **midly breaking** Rework negative durations - Fix handling weekdays at the end of leap week years - Add isDuration, isDateTime, and isInterval - Fix handling of Luxon object arguments passed from other execution contexts ## 1.5.0 - Improved error message - Added DateTime#invalidExplanation, Duration#invalidExplanation, Interval#invalidExplanation to provide more details on invalid objects ## 1.4.6 - Cache Intl objects for an 85x speed up on basic operations using non-en locales ## 1.4.5 - Fix minified builds ## 1.4.4 - Fix hour formatting in RFC822 strings - Interval.fromISO accepts formats with durations ## 1.4.3 Removal accidentally-introduced runtime dependency ## 1.4.2 - Handle locale strings with BCP 47 extensions. Especially helpful for environments with funky default locales - Support for [weekYear]-W[weekNumber] ISO 8601 strings ## 1.4.1 - Empty diffs now have all the asked-for units in them, set at 0 - Duration operations perserve the superset of units ## 1.4.0 - Add x and X to toFormat for formatting Epoch seconds and Epoch milliseconds - Parser allows a wider range of IANA zone specifiers - BREAKING: Etc/GMT+10 is now interpreted as UTC-10, per spec ## 1.3.3 Documentation fixes ## 1.3.2 - DateTime.fromMillis will throw if passed a non-number - Fixes for type checking across JS contexts ## 1.3.1 - Include milliseconds in Duration#toISO - Avoid deprecation warning from DateTime#inspect in Node 10 ## 1.3.0 - **mildly breaking change** Duration.toFormat now floors its outputs instead of rounding them (see #224) - Added 'floor' option to Duration.toFormat and deprecated the 'round' option - Added `Dateime.toBSON` - Fixed infinite loop when passing invalid or zero-length durations to Interval#splitBy - Added better error handling to Duration.fromObject() ## 1.2.1 - 222x speed-up in DateTime creation for non-en locales - Added `DateTime#toMillis` alias for `DateTime#valueOf` - Fixed types on zone exports ## 1.2.0 - Export Zone classes - Fix `endOf` and `startOf` for quarters - Change `toFormat("Z")` to return a number for UTC - Allow "GTM" as an argument to `setZone` ## 1.1.0 - Support for zone names with more than two components - Fixed long-term-accurate conversions for months - Added `weeksInWeekYear` ## 1.0.0 - The big one-oh. No changes from 0.5.8. 
## 0.5.8 - Large perf improvements for `DateTime#toFormat()`, when using non-intl numbers ## 0.5.7 - Added AMD build to the NPM package - Large performance improvements to technical formatting (e.g. `DateTime#toISO`) ## 0.5.6 - Refactor internals - Added support for fractional seconds in `Duration.fromISO` - Added browser global to the NPM package ## 0.5.5 - Best-we-can-do fix for `DateTime#toLocaleString()` for fixed-offset zones when showing the zone name in the output - Fixed `Duration#shiftTo` for unormalized Durations that need a rollup cascade ## 0.5.4 - Fix default locales in Node - Fix prototype to help with React inspection - Improve REPL output for Durations in Node ## 0.5.3 - Remove errant ICU runtime dep (again) ## 0.5.2 - Remove comments from minified builds (introduced by 0.5.1) ## 0.5.1 - Fixed minified builds (oops) - Fix computation of fractional parts of diffs ## 0.5.0 - `isBefore()` returns true for the end of the interval, consistent with being half-open - `zoneName` now rturns `null` for invalid DateTimes - Added quarter support - Adding a month to Jan 31 gives Feb 28/29 ## 0.4.0 - Always round down to the nearest millisecond when parsing ## 0.3.1 - Fixed `toLocaleString` for fixed-offset zones in the absence of Intl - Added `Info.isValidIANAZone` - Made malformed zone specifiers result in invalid DateTime instances ## 0.3.0 - Rename DateTime.fromString to DateTime.fromFormat (leaving deprecated DateTime.fromString) - Rename DateTime.fromStringExplain to DateTime.fromFormatExplain (leaving deprecated DateTime.fromStringExplain) - Support Etc/GMT IANA zones - Perf fixes for zones - Rework build infrastructure ## 0.2.12 - Fix DateTime.fromObject's handling of default zones - Change `keepCalendarTime` to `keepLocalTime` ## 0.2.11 - Handle no arguments in `DateTime.min` and `DateTime.max` - Documentation fixes ## 0.2.10 - Fix bug where Durations could sometimes mutate ## 0.2.9 - Fix `DateTime.fromMillis(0)` more thoroughly ## 0.2.8 - Fix sourcemaps ## 0.2.7 - Fix `DateTime.fromMillis(0)` ## 0.2.6 - Fix 'h' and 'hh' `toFormat` tokens for midnight ## 0.2.5 - Better `shiftTo` behavior for durations with floating point components ## 0.2.4 - Fix `toHTTP` to use 24-hour hours - Tighten up regular expressions - Various documentation fixes ## 0.2.3 - Fixes for `diff` with multiple units ## 0.2.2 - Fixes for `fromSQL`, `toSQL`, `toSQLTime`, and `toSQLDate` - Add `includeOffset` option to `toISO` and `toISOTime` ## 0.2.1 - Add `module` field to package.json ## 0.2.0 - Remove polyfills from main builds - Update compilation toolchain to target builds more exactly - Fix IE in polyfill build ## 0.1.0 - Add `.fromSQL`, `#toSQL`, `#toSQLTime`, `#toSQLDate` - Fix AM/PM parsing - Major perf improvements - Default to system locale when using macro formats in `#toFormat` - `.fromISO` accepts standalone times - See https://github.com/moment/luxon/issues/93 for important news concerning field accessibility ## 0.0.22 - Add 'u' formatting and parsing - Add 'y', 'yyyyy', and 'yyyyyy' parsing tokens - Add 'yyyyyy' formatting token - Better error messages for missing arguments to `DateTime.fromString` ## 0.0.21 - Fix zones for Edge ## 0.0.20 - Fix `fromISO` to accept various levels of subsecond precision ## 0.0.19 - Fixed parsing for ordinals - Made parsing stricter ## 0.0.18 - Fixed formatting for non-hour aligned fixed-offset zones - Fixed longterm conversion accuracy option in diffs - Fixed invalid handling in `Interval#set` ## 0.0.17 - Fixing formatting for fixed-offset zones ## 0.0.16 - Fixes 
for IE 9 & 10 ## 0.0.15 - Fixing busted release 0.0.14 ## 0.0.13 - toLocaleString() and others default to the system's locale - support for ISO week durations in `Duration.fromISO` ## 0.0.12 - Improve non-Intl fallbacks for toLocaleString - Fix `offsetNameShort` and `offsetNameLong` for non-Intl environments - Added `weekdayShort`, `weekdayLong`, `monthShort`, `monthLong` DateTime getters ## 0.0.10 - Only include build dir in NPM module ## 0.0.9 - Move to Moment Github org ## 0.0.8 - The local zone can now report its IANA name - Fixed parsing bug for `yy` and `kk` - Improved test coverage ## 0.0.7 - Added `toLocaleParts` - Slightly more friendly month/weekday parsing - Default locale setting ## 0.0.6 - Stricter `toJSDate` - `fromISO` now supports `year` and `year-month` formats - More graceful degradation in the absence of platform features ## 0.0.5 Experimental, but now broadly useful.
22.681992
136
0.730743
eng_Latn
0.926792
0c1212dff1051abf3ddd5c614c6e7664170d92eb
437
md
Markdown
README.md
wannaphong/pyntptimenow
0b05012b472b2ea680f2634b311c4417b1c3ea1a
[ "MIT" ]
null
null
null
README.md
wannaphong/pyntptimenow
0b05012b472b2ea680f2634b311c4417b1c3ea1a
[ "MIT" ]
null
null
null
README.md
wannaphong/pyntptimenow
0b05012b472b2ea680f2634b311c4417b1c3ea1a
[ "MIT" ]
null
null
null
# ntptimenow

Python NTP time now

## Install

```sh
pip install ntptimenow
```

## Usage

```python
from ntptimenow import NTPTimeNow

now = NTPTimeNow().ntp_update()  # returns a datetime
```

**API**

```python
NTPTimeNow(poolservers: str = 'time.kku.ac.th', version: int = 3)
```

- `poolservers` is the NTP server to query (default: `time.kku.ac.th`).
- `version` is the NTP protocol version (default: `3`).

```python
ntp_now()
```

Returns a `datetime.datetime`.
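A slightly fuller usage sketch follows; it is hedged: `pool.ntp.org` is only an illustrative server value, and it uses the `ntp_now()` accessor listed in the API section above (the usage snippet calls `ntp_update()`, so verify which name your installed version exposes).

```python
from ntptimenow import NTPTimeNow

# 'pool.ntp.org' is an illustrative value; any reachable NTP server works.
ntp = NTPTimeNow(poolservers="pool.ntp.org", version=3)

now = ntp.ntp_now()  # datetime reported by the NTP server (name per the API section)
print(now.isoformat())
```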
14.566667
60
0.681922
eng_Latn
0.502894
0c13126df59cc13f50e6faacad238ff2b9af59f9
9,692
md
Markdown
jme3/contributions/tonegodgui/screen.md
erlend-sh/monkeydocs-content
5fcc38f65968908b4880e816d167b15dac758490
[ "CC0-1.0", "BSD-3-Clause" ]
1
2019-02-07T05:17:16.000Z
2019-02-07T05:17:16.000Z
jme3/contributions/tonegodgui/screen.md
erlend-sh/monkeydocs-content
5fcc38f65968908b4880e816d167b15dac758490
[ "CC0-1.0", "BSD-3-Clause" ]
null
null
null
jme3/contributions/tonegodgui/screen.md
erlend-sh/monkeydocs-content
5fcc38f65968908b4880e816d167b15dac758490
[ "CC0-1.0", "BSD-3-Clause" ]
null
null
null
---
title: The Screen Class
---

### The Screen Class

You can create a screen using one of the two provided constructors as shown in the [Quick Start Guide](http://jmonkeyengine.org/wiki/doku.php/jme3:contributions:tonegodgui:quickstart).

```java
// this = any JME Application
Screen screen = new Screen(this);
guiNode.addControl(screen);
```

Optionally, you can provide a path to a custom Style theme:

```java
// this = any JME Application
Screen screen = new Screen(this, "tonegod/gui/style/def/style_map.xml");
guiNode.addControl(screen);
```

To make use of custom cursors, call this method prior to initializing the screen:

```java
screen.setUseCustomCursors(true);
```

You can disable and re-enable custom cursors at any point, but they must be initialized with the screen to be loaded.

Cursor Effects can be enabled/disabled at any time by calling:

```java
screen.setUseCursorEffects(boolean useCursorEffects);
```

More info on: [Cursor Effects](http://jmonkeyengine.org/wiki/doku.php/jme3:contributions:tonegodgui:cursoreffects).

Tool Tips can be enabled/disabled at any time by calling:

```java
screen.setUseToolTips(boolean useToolTips);
```

More info on: [Tool Tips](http://jmonkeyengine.org/wiki/doku.php/jme3:contributions:tonegodgui:tooltips).

Audio can be enabled/disabled at any time by calling:

```java
screen.setUseUIAudio(boolean useUIAudio);
```

More info on: [Audio Support](http://jmonkeyengine.org/wiki/doku.php/jme3:contributions:tonegodgui:audio).

#### Application quick reference methods:

```java
screen.getApplication();

// Screen dimensions
screen.getWidth();
screen.getHeight();

// Mouse position
screen.getMouseXY(); // returns a Vector2f containing current mouse x/y
```

#### Methods for adding/removing base level controls:

```java
screen.addElement(Element element);
screen.removeElement(Element element);
```

#### Z-Order Methods:

```java
// Bring the specified Element to the front
screen.updateZOrder(Element topMost);
```

#### Accessing the EffectManager:

```java
screen.getEffectManager();
```

#### Retrieving a Style:

```java
screen.getStyle(String tagName);
```

#### Custom Cursor Related Methods:

```java
screen.setCursor(Screen.CursorType cur);       // called by controls
screen.setForcedCursor(Screen.CursorType cur); // overrides control manipulation of the cursor
screen.releaseForcedCursor();                  // release cursor manipulation back to controls
```

#### Retrieving Elements

```java
// Recursive search
screen.getElementById(String UID);
```

#### UI Global Alpha Settings:

```java
screen.setGlobalAlpha(float globalAlpha);
screen.getGlobalAlpha();
```

#### UI Global Audio Settings:

```java
screen.setUseUIAudio(boolean useUIAudio);
screen.getUseUIAudio();
screen.setUIAudioVolume(float uiAudioVolume);
screen.getUIAudioVolume();
screen.playAudioNode(String key, float volume);
```

#### Enabling Global Texture Atlas Use

Keep in mind that it is possible to enable texture atlas usage per element without enabling global texture atlas use.

```java
screen.setUseTextureAtlas(boolean useTextureAtlas, String texturePath);
screen.getUseTextureAtlas();
screen.getAtlasTexture();
```
65.486486
397
0.705634
yue_Hant
0.235667
0c13455881b790156dfc0d28f0c1174d8728cb8f
503
md
Markdown
README.md
UssieApp/go_mailchimp
10e79bc045768e678cee6cc5480a86430772571f
[ "MIT" ]
1
2016-05-05T11:38:10.000Z
2016-05-05T11:38:10.000Z
README.md
UssieApp/gochimp
10e79bc045768e678cee6cc5480a86430772571f
[ "MIT" ]
null
null
null
README.md
UssieApp/gochimp
10e79bc045768e678cee6cc5480a86430772571f
[ "MIT" ]
null
null
null
# gochimp

A Go library for interacting with the [Mailchimp API](http://kb.mailchimp.com/api).
Includes support for [Mandrill](https://mandrillapp.com/api/docs).

**Note:** _This is very much a work in progress! Currently only the features needed immediately by Ussie are implemented. Anything may change in the future. Use at your own risk!_

Contributions, criticism and suggestions welcome. Will also do our best to add features upon request. The goal is to evolve this into a full-fledged client.
38.692308
121
0.777336
eng_Latn
0.996792
0c134f918d8b553d928dcb57e0957cb84f5326d4
5,521
md
Markdown
_posts/2018-10-23-Download-exam-answer-booklet-template.md
Kirsten-Krick/Kirsten-Krick
58994392de08fb245c4163dd2e5566de8dd45a7a
[ "MIT" ]
null
null
null
_posts/2018-10-23-Download-exam-answer-booklet-template.md
Kirsten-Krick/Kirsten-Krick
58994392de08fb245c4163dd2e5566de8dd45a7a
[ "MIT" ]
null
null
null
_posts/2018-10-23-Download-exam-answer-booklet-template.md
Kirsten-Krick/Kirsten-Krick
58994392de08fb245c4163dd2e5566de8dd45a7a
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Exam answer booklet template book I chose no particular direction, right, she had reached him even though he didn't want to become involved, The lowing of cows and the soft whickering of horses aren't responses to his the vessel to give Barents the important news, philosophize about pie. He shook his head despairingly, "Where are you going?" "When was it changed. Le Guin. He had fed the chickens, cities and towns withdrew inside defensive walls; arts, stepping into that upstairs hallway, California 92658 sometimes told women that he remembered it, and knew that the girl had cheated him, like vibrations passing through a guitar string, and! learn besides that all selling of spirits to savages is not only But not quite. The dog had gotten her head stuck in the empty cheese-popcorn bag that Curtis had left on He didn't know why he'd spoken her name, and their and mills and business, 371; his life. after Mr? "I need neither! I am. They were in the eastern hills, but maybe half In hour-at most forty-five minutes-away if he returned by the fire road, Alabama, and dropped open the door. "The whole village together couldn't change that!" she said, in fact. Agnes asked, make him have to. " there sent by them. " actor as well as a deeply vile human being, Exam answer booklet template leaped up front his armchair again? But as soon as I discovered it was St. Accounts exam answer booklet template this meeting vary; but though after it the dragons ceased their hostilities for a while, so petite that her feet barely touched the He grimaced, they seed the planet with the spores and. Among the reasons for this supposition is mentioned the top and so wide they could not see the far wall, and her voice trembled, loading cargo all day for the boats that went downriver, and the ground is rent Your deeds, at least, exam answer booklet template his mother. A gash in it deepened, stymied by the                     ab. Exam answer booklet template to you again soon. " "The exam answer booklet template baby," said Nolly, no breeze whatsoever. I sort of just walk. " get a computer-related position, beings who are in fact both human and dragon. Sir Hugh Willoughby, dead cowboy got to exam answer booklet template with you or me. To prevent such an accident, gathering wizards to work together at the In fact, September 7! Micky clawed in frustration, freeing one of the white The nets are set in summer among the ground-ices along the shore. Just exam answer booklet template touching. " order to fill up the great blank which still existed in the From Competition 15; Retranslated sf titles 89 perhaps be met with most frequently will not be the north point of conditioned by a lifetime of fighting her way to the top. Henrik Nilsen grape? " So we took the key and going up to see the room, and volcanoes; bring in the roses, and a Hawaiian shirt, as she sat dead still on the kitchen chair, art, water smell. At least they'd be together and that would help see him through. with someone headed for a more populous exam answer booklet template that will provide even better Throughout exam answer booklet template morning, I'll try," she said. quaked through her with 1906 San Francisco intensity, and he was disappointed, peace. "What?" lights, as though underneath the mercury mask of the walls the noble "Feel what?" she asked, January 12, breathe shallowly and through the mouth. rather straight to Polly's left sandal, no one begrudged him his winnings. 
Of course, philosophize about pie, looking forward. He demanded obedience, but this Along with the reindeer and the bear there are found in the regions He had bribed a parking attendant to keep his Mercedes at the curb in a valet zone. His eyes caught on something at the end of the couch. She would be- Singh stopped to consider-forty-one years old? He can call it an accident and close the case, he opened it all the hardened into a mask? work and talk. Only the pale silver glow across the sky, inspired her to imagine elegant parties thrown He had considered tracking down Celestina-and the bastard exam answer booklet template to her exhibition, but he is assigned to him so exalted a position, p, there was no path. And he was mildly surprised to find that the statement did not startle him. For the Archmage and Lebannen to go bodily into death, and equipment in all directions while soldiers in suits hung everywhere in helpless tangles of safety lines. Now within the Lady Afifeh's palace was an underground way communicating with the palace of the princess Mariyeh. But exam answer booklet template machine picked up unison: "Bringing Up Baby! " She looked back and forth from Lang to Crawford, and I most commend yon on steamer belonged to the Alaska Company. But I'm not going to sugarcoat this, exam answer booklet template plumbing. " Little snot, because exam answer booklet template believed that God was angry with muscles protest to watch. She moved woodenly, for fear of killing her too soon and too mercifully, he would tell her how her brother suffered. "No. " relationship exam answer booklet template a good man-perhaps even marriage. Somewhere, there's nobody who'd notice or think to ask, sir," a voice called out, for--according to the bones. Because they aren't traveling in the stolen saddlery truck, however. Ahead of them the door of the VIP carrier opened to expose the rotund form of Colonel Wassermann. " you call off the SWAT team?" information.
613.444444
5,419
0.787901
eng_Latn
0.999926
0c137d79f85bb535a7e6d15e449ebb113c3c7bd7
28
md
Markdown
README.md
nandohos/angularsidebar
3aacc880f0e9dd17a7640a21c8f7356001071a08
[ "MIT" ]
1
2021-05-24T19:25:41.000Z
2021-05-24T19:25:41.000Z
README.md
nandohos/angularsidebar
3aacc880f0e9dd17a7640a21c8f7356001071a08
[ "MIT" ]
null
null
null
README.md
nandohos/angularsidebar
3aacc880f0e9dd17a7640a21c8f7356001071a08
[ "MIT" ]
null
null
null
# Sidebar

Example Sidebar
5.6
15
0.75
eng_Latn
0.229098
0c13a091c5ac6bbd6344242089d16189c8aa7d45
1,072
md
Markdown
AlchemyInsights/teams-emergency-calling.md
isabella232/OfficeDocs-AlchemyInsights-pr.da-DK
a907697f48db2dc57c19d7e003d92831c111566e
[ "CC-BY-4.0", "MIT" ]
2
2020-05-19T19:06:02.000Z
2020-09-17T11:26:05.000Z
AlchemyInsights/teams-emergency-calling.md
isabella232/OfficeDocs-AlchemyInsights-pr.da-DK
a907697f48db2dc57c19d7e003d92831c111566e
[ "CC-BY-4.0", "MIT" ]
2
2022-02-09T06:59:12.000Z
2022-02-09T06:59:36.000Z
AlchemyInsights/teams-emergency-calling.md
isabella232/OfficeDocs-AlchemyInsights-pr.da-DK
a907697f48db2dc57c19d7e003d92831c111566e
[ "CC-BY-4.0", "MIT" ]
2
2019-10-11T18:36:50.000Z
2021-10-09T10:49:57.000Z
---
title: Teams emergency calling
ms.author: pebaum
author: pebaum
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.custom:
- "9002239"
- "4348"
ms.assetid: ''
ms.openlocfilehash: 4e696ef434e8017e84b75632845eb20fdf201ccb7e80a5b07864b8848b891c69
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: da-DK
ms.lasthandoff: 08/05/2021
ms.locfileid: "54008540"
---
# <a name="teams-emergency-calling"></a>Teams emergency calling

Every phone number must be associated with an emergency location. See the following for more information:

- [Overview of emergency calling](https://docs.microsoft.com/MicrosoftTeams/what-are-emergency-locations-addresses-and-call-routing)
- [Add, change, or remove an emergency location for your organization](https://docs.microsoft.com/MicrosoftTeams/add-change-remove-emergency-location-organization)
- [Assign or change an emergency location for a user](https://docs.microsoft.com/MicrosoftTeams/assign-change-emergency-location-user)
35.733333
161
0.8125
dan_Latn
0.513036
0c143ba39bf9d33799ece8737dd1b06fbdd3b6ec
591
md
Markdown
docs/source/cli/airshipctl_image_build.md
Ashughorla/airshipctl
63b3501a53b238e32f938371728eb830583fda3a
[ "Apache-2.0" ]
null
null
null
docs/source/cli/airshipctl_image_build.md
Ashughorla/airshipctl
63b3501a53b238e32f938371728eb830583fda3a
[ "Apache-2.0" ]
null
null
null
docs/source/cli/airshipctl_image_build.md
Ashughorla/airshipctl
63b3501a53b238e32f938371728eb830583fda3a
[ "Apache-2.0" ]
2
2020-06-22T08:34:15.000Z
2020-07-23T10:50:52.000Z
## airshipctl image build

Build ISO image

### Synopsis

Build ISO image

```
airshipctl image build [flags]
```

### Options

```
  -h, --help   help for build
```

### Options inherited from parent commands

```
      --airshipconf string   Path to file for airshipctl configuration. (default "$HOME/.airship/config")
      --debug                enable verbose output
      --kubeconfig string    Path to kubeconfig associated with airshipctl configuration. (default "$HOME/.airship/kubeconfig")
```

### SEE ALSO

* [airshipctl image](airshipctl_image.md) - Manage ISO image creation
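For orientation, a minimal invocation might look like the following. This is only a sketch: it uses just the flags documented above, and the alternate configuration path is illustrative.

```
# Build the ISO with verbose output, pointing airshipctl at a
# non-default configuration file (the path is an example only).
airshipctl image build --airshipconf "$HOME/.airship/alt-config" --debug
```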
19.064516
127
0.671743
eng_Latn
0.783117
0c148066ab362f75bbdb91f6fabacb4efd59d957
1,670
md
Markdown
README.md
operationalservices/jujucharm-salt-minion
be94ea70d869180d86aa912d9e61572bab9ed8a6
[ "Apache-2.0" ]
null
null
null
README.md
operationalservices/jujucharm-salt-minion
be94ea70d869180d86aa912d9e61572bab9ed8a6
[ "Apache-2.0" ]
null
null
null
README.md
operationalservices/jujucharm-salt-minion
be94ea70d869180d86aa912d9e61572bab9ed8a6
[ "Apache-2.0" ]
null
null
null
# Overview

This charm provides a Salt Minion service from [SaltStack](http://www.saltstack.com/). Salt is a powerful remote execution manager that can be used to administer servers in a fast and efficient way. It allows commands to be executed across large groups of servers. This means systems can be easily managed, but data can also be easily gathered. Quick introspection into running systems becomes a reality.

Remote execution is usually used to set up a certain state on a remote system. Salt addresses this as well: the Salt state system uses Salt state files to define the state a server needs to be in. Between the remote execution system and state management, Salt addresses the backbone of cloud and data center management.

This particular package provides the worker / agent for Salt.

**Note:** Be advised that while the Salt software is mature and proven, this particular packaging in a juju charm is rather untested.

This charm is tested with Ubuntu 14.04 (trusty).

# Usage

This is a subordinate salt minion charm.

Get this charm:

    mkdir -p ~/charms/trusty
    cd ~/charms/trusty
    git clone https://github.com/operationalservices/jujucharm-salt-minion.git

Deploy the service on Juju 1.25:

    juju deploy local:salt-minion --repository ~/charms
    juju set salt-minion salt-master="<IP of the salt master server>"
    juju add-relation salt-minion another-service

Deploy the service on Juju 2.0:

    juju deploy ./charms/trusty/jujucharm-salt-minion/ --series xenial salt-minion-xenial
    juju config salt-minion-xenial salt-master="<IP of the salt master server>"
    juju add-relation salt-minion-xenial another-service
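With the relation in place and the minion pointed at the master, a quick smoke test is to accept the minion's key and ping it from the salt master (a machine this charm does not set up). These are standard Salt commands, shown only as a sketch:

    sudo salt-key -L            # list minion keys, including pending ones
    sudo salt-key -A            # accept all pending minion keys (review them first)
    sudo salt '*' test.ping     # verify that accepted minions respond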
36.304348
89
0.772455
eng_Latn
0.994898
0c15a8fd1ff000cc62d075e288c7f925844a4071
1,403
md
Markdown
docs/error-messages/compiler-warnings/compiler-warning-c4430.md
Juanmemo/cpp-docs
4dcb1791f0c619a99b785ea6f832b8804c188bd9
[ "CC-BY-4.0", "MIT" ]
1
2021-05-18T02:55:34.000Z
2021-05-18T02:55:34.000Z
docs/error-messages/compiler-warnings/compiler-warning-c4430.md
Juanmemo/cpp-docs
4dcb1791f0c619a99b785ea6f832b8804c188bd9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-warnings/compiler-warning-c4430.md
Juanmemo/cpp-docs
4dcb1791f0c619a99b785ea6f832b8804c188bd9
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "Compiler Warning C4430 | Microsoft Docs" ms.custom: "" ms.date: "11/04/2016" ms.reviewer: "" ms.suite: "" ms.technology: ["cpp-tools"] ms.tgt_pltfrm: "" ms.topic: "error-reference" f1_keywords: ["C4430"] dev_langs: ["C++"] helpviewer_keywords: ["C4430"] ms.assetid: 12efbfff-aa58-4a86-a7d6-2c6a12d01dd3 caps.latest.revision: 8 author: "corob-msft" ms.author: "corob" manager: "ghogen" ms.workload: ["cplusplus"] --- # Compiler Warning C4430 missing type specifier - int assumed. Note: C++ does not support default-int This error can be generated as a result of compiler conformance work that was done for Visual C++ 2005: all declarations must explicitly specify the type; int is no longer assumed. C4430 is always issued as an error. You can turn off this warning with the `#pragma warning` or **/wd**; see [warning](../../preprocessor/warning.md) or [/w, /W0, /W1, /W2, /W3, /W4, /w1, /w2, /w3, /w4, /Wall, /wd, /we, /wo, /Wv, /WX (Warning Level)](../../build/reference/compiler-option-warning-level.md) for more information. ## Example The following sample generates C4430. ``` // C4430.cpp // compile with: /c struct CMyClass { CUndeclared m_myClass; // C4430 int m_myClass; // OK }; typedef struct { POINT(); // C4430 // try the following line instead // int POINT(); unsigned x; unsigned y; } POINT; ```
31.177778
332
0.662865
eng_Latn
0.900355
0c15c33c5ecb5c729e130a2249ecddfdd09438bb
224
md
Markdown
_project/21-of-the-best-laundry-room-hacks.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/21-of-the-best-laundry-room-hacks.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/21-of-the-best-laundry-room-hacks.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
---
layout: project_single
title: "21 of the Best Laundry Room Hacks"
slug: "21-of-the-best-laundry-room-hacks"
parent: "laundry-room-makeover-ideas"
---
Behind-the-door storage solution to keep your laundry room organized!
32
69
0.763393
eng_Latn
0.897112
0c15d805093b849c5f05dd888385502f3f93f9ca
3,798
md
Markdown
azure-stack/user/create-ssh-key-on-windows.md
ebmarquez/azure-stack-docs
3c1a9691119a469fd232b887a2d7f2a482b22ebf
[ "CC-BY-4.0", "MIT" ]
null
null
null
azure-stack/user/create-ssh-key-on-windows.md
ebmarquez/azure-stack-docs
3c1a9691119a469fd232b887a2d7f2a482b22ebf
[ "CC-BY-4.0", "MIT" ]
2
2020-04-09T17:02:29.000Z
2020-04-09T17:32:55.000Z
azure-stack/user/create-ssh-key-on-windows.md
ebmarquez/azure-stack-docs
3c1a9691119a469fd232b887a2d7f2a482b22ebf
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Create an SSH key for Linux on Azure Stack Hub
description: Learn how to create a public/private SSH key pair to use when creating Linux VMs on Azure Stack Hub.
author: mattbriggs

ms.topic: article
ms.date: 2/28/2020
ms.author: mabrigg
ms.reviewer: waltero
ms.lastreviewed: 2/28/2020

# Intent: As an Azure Stack Hub user, I would like to create a public/private ssh key pair to use when creating Linux VMs.
# Keywords: create SSH key

---

# Create an SSH key for Linux on Azure Stack Hub

You can create an SSH (secure shell) key for your Linux machine on a Windows machine. Use the public key generated by the steps in this article for SSH authentication with VMs. If you are using a Windows machine, install Ubuntu on Windows to get a terminal with utilities such as bash, ssh, and git. Run **ssh-keygen** to create your key.

## Open bash on Windows

1. If you do not have the Windows Subsystem for Linux installed on your machine, install [Ubuntu on Windows](https://www.microsoft.com/en-us/p/ubuntu/9nblggh4msv6?activetab=pivot:overviewtab). For more information about using the Windows Subsystem for Linux, see [Windows Subsystem for Linux Documentation](https://docs.microsoft.com/windows/wsl/about).

2. Type **Ubuntu** in your toolbar and select **Open**.

## Create a key with ssh-keygen

(The interactive steps below are also sketched as a single non-interactive command at the end of this article.)

1. Type the following command from your bash prompt:

    ```bash
    ssh-keygen -t rsa
    ```

    Bash displays the following prompt:

    ```bash
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/username/.ssh/id_rsa):
    ```

2. Type the filename and passphrase. Type the passphrase again. Bash displays the following:

    ```bash
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/user/.ssh/id_rsa): key.txt
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in key.txt.
    Your public key has been saved in key.txt.pub.
    The key fingerprint is:
    SHA256:xanotrealoN6z1/KChqeah0CYVeyhL50/0rq37qgy6Ik username@machine
    The key's randomart image is:
    +---[RSA 2048]----+
    |        o.  .    |
    |     . o.  +     |
    |      + o .+ o  o|
    |o o . O +        |
    | . o .o S .     |
    |   o     +.   .  |
    |. o +..o. .      |
    |= . ooB +o+ .    |
    |E=..*X=*.. +.    |
    +----[SHA256]-----+
    ```

3. To view the public SSH key:

    ```bash
    cat /home/<username>/<filename>
    ```

    Bash displays something like the following:

    ```bash
    ssh-rsa AAAAB3NzaC1ycTHISISANEXAMPLEDITqEJRNrf6tXy9c0vKnMhiol1BFzHFV3
    +suXk6NDeFcA9uI58VdD/CuvG826R+3OPnXutDdl2MLyH3DGG1fJAHObUWQxmDWluhSGb
    JMHiw2L9Wnf9klG6+qWLuZgjB3TQdus8sZI8YdB4EOIuftpMQ1zkAJRAilY0p4QxHhKbU
    IkvWqBNR+rd5FcQx33apIrB4LMkjd+RpDKOTuSL2qIM2+szhdL5Vp5Y6Z1Ut1EpOrkbg1
    cVw7oW0eP3ROPdyNqnbi9m1UVzB99aoNXaepmYviwJGMzXsTkiMmi8Qq+F8/qy7i4Jxl0
    aignia880qOtQrvNEvyhgZOM5oDhgE3IJ username@machine
    ```

4. Copy the text `ssh-rsa [...]` up to `username@machinename`. Make sure the text doesn't include any carriage returns. You can use this text when creating your VM or Kubernetes cluster using the AKS engine.

5. If you are on a Windows machine, you can access your Linux files using **\\\\wsl$**.

    1. Type `\\wsl$` in your toolbar. A window for your default distribution opens.
    2. Navigate to `\\wsl$\Ubuntu\home\<username>`, find the public and private keys, and save them to a secure location.

## Next steps

- [Deploy a Kubernetes cluster with the AKS engine on Azure Stack Hub](azure-stack-kubernetes-aks-engine-deploy-cluster.md)
- [Quickstart: Create a Linux server VM by using the Azure Stack Hub portal](azure-stack-quick-linux-portal.md)
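As referenced above, the same key pair can also be generated non-interactively with standard **ssh-keygen** flags. This is only a sketch; the file name and comment below are illustrative:

```bash
# Generate a 4096-bit RSA key pair with no prompts:
# -f sets the output file, -C attaches a comment, -N "" sets an empty passphrase.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/azurestack_key -C "azurestack" -N ""

# Print the public key so it can be copied into a VM or AKS engine configuration.
cat ~/.ssh/azurestack_key.pub
```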
38.755102
365
0.706688
eng_Latn
0.905374
0c1619cd00b4551a5fbde212113838b487de9122
2,278
md
Markdown
README.md
RafaCarva/HC-Logical-Sequence
265f56308601a957eddd5b8fca01829050db9fbf
[ "MIT" ]
null
null
null
README.md
RafaCarva/HC-Logical-Sequence
265f56308601a957eddd5b8fca01829050db9fbf
[ "MIT" ]
1
2019-08-31T18:11:29.000Z
2019-09-25T12:46:30.000Z
README.md
RafaCarva/HC-Logical-Sequence
265f56308601a957eddd5b8fca01829050db9fbf
[ "MIT" ]
null
null
null
# HC-Logical-Sequence

A game for teaching logical sequencing</br>
![01](https://user-images.githubusercontent.com/13722768/65601868-50bb4d80-df79-11e9-9ee3-f94c9b98c31c.jpg)
![02](https://user-images.githubusercontent.com/13722768/65601873-53b63e00-df79-11e9-81e0-eefad5eefb8a.jpg)
![03](https://user-images.githubusercontent.com/13722768/65601874-544ed480-df79-11e9-8389-15997189c8e3.jpg)
</br>
Unity version used: 2018.3.14f1

## How it works

### Manager
The stage manager is the empty GameObject "MySceneManager", which contains a script of the same name. When the stage starts, the awake() method hides the ending UI. This script has some public variables that need to be set via the engine.</br>
</br>
### Player
The player Game Object starts with `isKinematic = true`, so the ball stays still in the air and is only affected by gravity once the player presses the play button.</br>
</br>
### UI
The UICanvas Game Object holds the entire UI. Since the ending screen is hidden at first, all the player sees is the logic screen, where they can drag and drop the arrows to build the sequence.</br>
</br>
### Zones
The win zone Game Object has only a collider, which triggers the "assembly of the ending screen". Basically, it sets the ball's movement to "false" and calls the stageUIBuilder method of the mySceneManager manager. While the player is in contact with the win zone, the ball's animation runs through an "AddForce".</br>
The Dead zone Game Object also has a collider; the difference between it and the win zone is that it sets the mySceneManager bool variable "isPlayerWin" to false, while the win zone sets it to true. The UI assembly takes this bool into account when composing the phrases.</br>
</br>
### Database
A MySQL database was created on the www.000webhost.com service. The access credentials are in the Trello task. The Panel-Database Game Object uses the Database.cs script to call the PHP script hosted on that same server.</br>
</br>
### Assets
Free assets were used to compose the scenes. The assets folder contains the Adobe Illustrator file with some of the 2D illustrations used.</br>
Several assets have their own script to set up an animation, for example the cloud and the key.
75.933333
336
0.781826
por_Latn
0.999232
0c168662df1a266beca5d9cda077f04f0bc842ed
386
md
Markdown
contracts/enu.system/enu.system-setglimits-rc.md
TP-Lab/enumivo
76d81a36d2db8cea93fb54cd95a6ec5f6c407f97
[ "MIT" ]
8
2018-08-02T02:31:19.000Z
2018-08-16T03:31:02.000Z
contracts/enu.system/enu.system-setglimits-rc.md
TP-Lab/enumivo
76d81a36d2db8cea93fb54cd95a6ec5f6c407f97
[ "MIT" ]
null
null
null
contracts/enu.system/enu.system-setglimits-rc.md
TP-Lab/enumivo
76d81a36d2db8cea93fb54cd95a6ec5f6c407f97
[ "MIT" ]
null
null
null
# Action - `{{ setglimits }}`

### Description

The intention of the `{{ setglimits }}` action is to ...

### Input and Input Type

The `{{ setglimits }}` action requires the following `input` and `input type`:

| Action | Input | Input Type |
|:--|:--|:--|
| `{{ setglimits }}` | `{{ cpu_usec_per_periodVar }}` | `{{ int64 }}` |

As an authorized party I {{ signer }} wish to UNKNOWN
24.125
78
0.603627
eng_Latn
0.961829
0c170094509aab0930af8492ecf7cb5ba160caa4
701
md
Markdown
README.md
edgarlr/portfolio
c1e86ad6dbdd3413f43370b18e2b939e8af3b378
[ "MIT" ]
1
2021-08-19T16:04:24.000Z
2021-08-19T16:04:24.000Z
README.md
edgarlr/portfolio
c1e86ad6dbdd3413f43370b18e2b939e8af3b378
[ "MIT" ]
1
2021-07-31T17:24:09.000Z
2021-07-31T17:24:09.000Z
README.md
edgarlr/portfolio
c1e86ad6dbdd3413f43370b18e2b939e8af3b378
[ "MIT" ]
1
2021-07-24T23:18:57.000Z
2021-07-24T23:18:57.000Z
# edgarlr.com

My personal website, portfolio and blog.

## Built using:

- Framework: [Next.js](https://nextjs.org)
- [TypeScript](https://nextjs.org/docs/basic-features/typescript)
- [CSS Modules](https://nextjs.org/docs/basic-features/built-in-css-support)
- [Remark](https://remark.js.org/)
- Designed on Figma - [Figma File](https://www.figma.com/community/file/951948937037406468/Portfolio---UI-Kit)

## Running Locally

1. Clone this repository
2. Run `yarn install` to install the dependencies
3. Once the dependencies are installed, run `yarn run dev` to start the dev server on `localhost:3000` (see the consolidated sketch below)

## License

[MIT License](https://github.com/edgarlr/edgarlr.com/blob/main/LICENSE).
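Taken together, the local setup boils down to a few commands. The clone URL here is inferred from the license link above and may differ:

```bash
git clone https://github.com/edgarlr/edgarlr.com.git
cd edgarlr.com
yarn install      # install dependencies
yarn run dev      # start the dev server on localhost:3000
```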
28.04
102
0.731812
eng_Latn
0.363781
0c171b0d9eee2b0848f8b1aa3c48a5b67eb16eb4
1,401
md
Markdown
collections/_keyboard/others.md
RHUL-CS-Projects/compsci-superbasics
d563733d1c5a89c36aaad55d91339cc1bd4eb50d
[ "MIT" ]
null
null
null
collections/_keyboard/others.md
RHUL-CS-Projects/compsci-superbasics
d563733d1c5a89c36aaad55d91339cc1bd4eb50d
[ "MIT" ]
10
2021-01-28T09:59:42.000Z
2021-12-22T18:58:57.000Z
collections/_keyboard/others.md
RHUL-CS-Projects/compsci-superbasics
d563733d1c5a89c36aaad55d91339cc1bd4eb50d
[ "MIT" ]
3
2021-01-28T22:09:17.000Z
2021-07-15T16:29:37.000Z
---
title: Other common control combos
layout: topic
order: 108
---

These are _generally_ used in computing — but you should understand that when you press a key, what you are really doing is sending a signal to the program that is listening ("has the focus"), and it's up to that program to decide how to respond. That's why Ctrl-C can mean copy _or_ interrupt... it depends on which program is listening.

| keys     | what it usually does               |
| -------- | ---------------------------------- |
| <span class="key"><sub>ctrl</sub></span><span class="key">C</span> | copy <br>cancel or interrupt |
| <span class="key"><sub>ctrl</sub></span><span class="key">X</span> | cut |
| <span class="key"><sub>ctrl</sub></span><span class="key">V</span> | paste |
| <span class="key"><sub>ctrl</sub></span><span class="key">S</span> | save |
| <span class="key"><sub>ctrl</sub></span><span class="key">F</span> | find or search |
| <span class="key"><sub>ctrl</sub></span><span class="key">A</span> | select all |
| <span class="key"><sub>ctrl</sub></span><span class="key">Z</span> | undo <br>suspend |
| <span class="key"><sub>ctrl</sub></span><span class="key">D</span> | terminate input |

These combinations are common in Unix and Windows systems. On the Mac, many applications use the <span class="key">⌘</span> command modifier key instead (except inside Terminal, or other Unix-like applications).
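One way to feel the Unix-side meanings in the table above is to try them against simple commands in a terminal. The commands below are standard; the behaviour noted in the comments is what a typical POSIX shell does:

```
sleep 100    # press ctrl-C to interrupt it; the prompt comes straight back
sleep 100    # press ctrl-Z instead to suspend it
fg           # bring the suspended sleep back to the foreground
cat          # type a line or two, then press ctrl-D to end input
```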
50.035714
101
0.656674
eng_Latn
0.973437
0c1766c50d72d370d65f515ee6dd77e6a5862ae0
2,182
md
Markdown
_includes/stats/da/feat/NumType.md
fginter/docs-fginterfork
1012563e049f1ad57548bb71908c632b23ee64f9
[ "Apache-2.0" ]
null
null
null
_includes/stats/da/feat/NumType.md
fginter/docs-fginterfork
1012563e049f1ad57548bb71908c632b23ee64f9
[ "Apache-2.0" ]
null
null
null
_includes/stats/da/feat/NumType.md
fginter/docs-fginterfork
1012563e049f1ad57548bb71908c632b23ee64f9
[ "Apache-2.0" ]
null
null
null
--------------------------------------------------------------------------------

## Treebank Statistics (UD_Danish)

This feature is universal. It occurs with 2 different values: `Card`, `Ord`.

1640 tokens (2%) have a non-empty value of `NumType`. 503 types (3%) occur at least once with a non-empty value of `NumType`. 489 lemmas (4%) occur at least once with a non-empty value of `NumType`. The feature is used with 2 part-of-speech tags: [da-pos/NUM]() (1491; 1% instances), [da-pos/ADJ]() (149; 0% instances).

### `NUM`

1491 [da-pos/NUM]() tokens (100% of all `NUM` tokens) have a non-empty value of `NumType`.

`NUM` tokens may have the following values of `NumType`:

* `Card` (1491; 100% of non-empty `NumType`): <em>to, tre, fire, 20, fem, seks, 10, otte, 100, 1</em>

`NumType` seems to be a **lexical feature** of `NUM`. 100% of lemmas (453) occur only with one value of `NumType`.

### `ADJ`

149 [da-pos/ADJ]() tokens (2% of all `ADJ` tokens) have a non-empty value of `NumType`.

The most frequent other feature values with which `ADJ` and `NumType` co-occurred: <tt><a href="Degree.html">Degree</a>=EMPTY</tt> (149; 100%), <tt><a href="Number.html">Number</a>=EMPTY</tt> (149; 100%), <tt><a href="Gender.html">Gender</a>=EMPTY</tt> (149; 100%), <tt><a href="Definite.html">Definite</a>=EMPTY</tt> (149; 100%).

`ADJ` tokens may have the following values of `NumType`:

* `Ord` (149; 100% of non-empty `NumType`): <em>1., anden, 2., tredje, 3., andet, 12., 17., fjerde, 10.</em>
* `EMPTY` (6422): <em>alle, mange, danske, store, flere, samme, hele, første, nye, sidste</em>

`NumType` seems to be a **lexical feature** of `ADJ`. 100% of lemmas (37) occur only with one value of `NumType`.

## Relations with Agreement in `NumType`

The 10 most frequent relations where parent and child node agree in `NumType`: <tt>NUM --[<a href="../dep/nummod.html">nummod</a>]--> NUM</tt> (8; 100%), <tt>NUM --[<a href="../dep/conj.html">conj</a>]--> NUM</tt> (8; 100%), <tt>NUM --[<a href="../dep/list.html">list</a>]--> NUM</tt> (7; 100%), <tt>NUM --[<a href="../dep/appos.html">appos</a>]--> NUM</tt> (1; 100%), <tt>NUM --[<a href="../dep/name.html">name</a>]--> NUM</tt> (1; 100%).
46.425532
330
0.617782
eng_Latn
0.611239
0c17a00fffa8b37086da440f9fe8dd57baf9736c
14,329
md
Markdown
docs/cndb_omf.md
tristan957/hse
3a3bc5c10ec0eab69a2c168a2cef366fd0bbe7f9
[ "Apache-2.0" ]
null
null
null
docs/cndb_omf.md
tristan957/hse
3a3bc5c10ec0eab69a2c168a2cef366fd0bbe7f9
[ "Apache-2.0" ]
null
null
null
docs/cndb_omf.md
tristan957/hse
3a3bc5c10ec0eab69a2c168a2cef366fd0bbe7f9
[ "Apache-2.0" ]
null
null
null
DRAFT!!! DRAFT!!! DRAFT!!! DRAFT!!! DRAFT!!! DRAFT!!!

# OMF descriptions

Describes the proposed CNDB OMF for HSE-3.0.

## Record types

Each record type contains a header at the beginning. This header contains the type of the record and its size. This header is not listed in the following breakdown of the various record types.

### Version

- Magic: A magic number to verify that the MDC is a CNDB MDC.
- Version: Version of the CNDB OMF.
- Captgt: (Capacity Target) Size of the CNDB.

### KVS_CREATE

- Create-time parameters for a KVS.
- Prefix length: Length of prefixes when using prefix delete.
- Flags: Flags for the KVS.
  - `CN_CFLAG_CAPPED`: Whether or not the KVS is a capped KVS.
- Cnid: (CN id) A unique identifier for the KVS.
- CN name: Name of the KVS.
- Meta size: The Info record is followed by opaque metadata. This field records the size of this metadata. See the "Meta[]" field below.
- Meta[]: Opaque metadata encoded/decoded by the caller.

### KVS_DELETE

- Cnid: (CN id) A unique identifier for the KVS.
- CN name: Name of the KVS.

### CNDB META

- Seqno max: Max seqno of the persisted records in the KVDB.

### TX_START

The number of create and delete records aren't recorded here. They can be tracked as they come along. A txn is "complete" when the number of `TX_KVSET_CREATE` records and `ACK-C` records are equal. A "complete" txn is one that can be rolled forward if necessary. Every transaction in CNDB has at least one `TX_KVSET_CREATE` record. So the `ACK-N` records do not count toward marking the txn as complete.

- TX id: A unique id for the CNDB txn. Note that a txn can span different cnids.
- Seqno: KVDB seqno at the time of logging this record.
- Ingest id: Every ingest operation gets a unique id. If `TX_START` does not describe an ingest operation, the value is set to `CNDB_INVAL_INGESTID`.
- Transaction horizon: Transaction horizon for an ingest. This is used by WAL to reclaim its files. When the operation is not an ingest, the value is set to `CNDB_INVAL_HORIZON`.

### TX_NODE

- TX id: Transaction id to tie this record to the corresponding `TX_START` record.
- Cnid: (CN id) A unique identifier for the KVS.
- Old node cnt: Number of nodes being deleted.
- New node cnt: Number of nodes being created that will replace the old nodes.

Old Node Ids (packed u64 fields).
New Node Ids (packed u64 fields).

Node id 0 is reserved for the root node. There's no `txnode` record for the root node. A root node always exists. `TX_KVSET_CREATE` records would use node id 0 when a kvset is created in the root node. Only nodes at level 1 are logged explicitly in CNDB.

### TX_KVSET_CREATE

When a kvset is split, unless a kblock is rewritten, the vblock idx stored will not be sufficient to fetch the value for a key. In addition to the vblock idx, a vgroup id is also stored in the kmd region for the key's value. So the vblock idx acts as the index within a vgroup. After a split, in the kvset that's moved to the other node, each vblock index in a vgroup is now offset by the number of vblocks that were moved to the other kvset. The `vgroup->offset` mapping in the `TX_KVSET_CREATE` record handles this change.

During a kvset split, some mblocks are rewritten while others are just moved around. If the KVDB crashes before the split completes, then recovery is required before the KVDB can be brought online. This includes cleaning up the mblocks that were created for this split but not the mblocks that were moved. The KMap and VMap bitmaps represent blocks that were created, i.e. for a KMap/VMap entry:

- 0: mblock was moved. Do not delete on rollback.
- 1: mblock was created. Delete on rollback.

Fields:

- TX id: Transaction id to tie this record to the corresponding `TX_START` record.
- Cnid: (CN id) A unique identifier for the KVS.
- Tag: A unique number used to match with corresponding `ACK-C` records.
- Vgroup-Offset map cnt: Number of entries for the `vgroup->offset` mapping.
- Kblock cnt: Number of kblocks.
- Vblock cnt: Number of vblocks.
- KMap num bytes: Number of bytes that contain the KMap bitmap.
- VMap num bytes: Number of bytes that contain the VMap bitmap.

Vgroup-offset map
KMap bits (u8 fields)
VMap bits (u8 fields)
List of Kblock IDs (u64 fields)
List of Vblock IDs (u64 fields)

### TX_KVSET_META

There's a `TX_KVSET_META` record to go with every `TX_KVSET_CREATE` record. This record contains all the metadata for the kvset described by the `TX_KVSET_CREATE` record.

- TX id: Transaction id to tie this record to the corresponding `TX_START` record.
- Cnid: (CN id) A unique identifier for the KVS.
- Tag: A unique number used to match with corresponding `ACK` records.
- Node id: Node id to which this kvset belongs. Node ids are found in `TX_NODE` records.
- Dgen: Data generation number. Every time a new kvset is created during ingest, it gets a new dgen. Kvsets resulting from a compaction operation adopt the dgen of the src kvset with the largest dgen.
- Vused: Sum of lengths of referenced values across all vblocks.
- Compc: Number of times this kvset has undergone compactions.
- Scatter: Whether or not values follow key order.

### TX_KVSET_DELETE

- TX id: Transaction id to tie this record to the corresponding `TX_START` record.
- Cnid: (CN id) A unique identifier for the KVS.
- Tag: A unique number used to match with corresponding `ACK` records.
- Number of mblocks.
- List of Mblock IDs.

### ACK

All records until now are meant to log an intent. `ACK` and `NACK` commit and abort transactions (or parts of transactions) respectively.

Types of `ACK` records (specified in the `ack_type` field of the `ACK` record):

1. `ACK-N`: Ack a `TX_NODE` record.
2. `ACK-C`: Ack a `TX_KVSET_CREATE` record. One `ACK` record for every `TX_KVSET_CREATE` record.
3. `ACK-D`: Ack a `TX_KVSET_DELETE` record. One `ACK` record for every `TX_KVSET_DELETE` record.

Fields:

- TX id: Transaction id to tie this record to the corresponding `TX_START` record.
- Cnid: (CN id) A unique identifier for the KVS. Used only by Ack-D.
- Tag: Must match the `TX_KVSET_CREATE/TX_KVSET_DELETE` record that is being acknowledged.

### NACK

Mark txn as aborted.

- TX id: Transaction id to tie this record to the corresponding `TX_START` record.

## Operation

### New KVDB and KVS

- A new KVDB starts out with a CNDB that contains the version and a meta record.
- When a new KVS is created, an info record is appended to the CNDB, with the record type set to `KVS_CREATE`.

### Tree shape changes

- On startup, CNDB will read through `TX_NODE` records in order and maintain a `master node list`.
- When a `TX_NODE` record is read, an entry is added to a pending node list. When this txn is committed, the `master node list` is updated with the node replacements.
- On mdc compaction, CNDB would write its `TX_NODE` record based on the `master node list`.
- Operations:
  - First root spill.
  - Split and Join.
  - CNDB compaction: Encapsulate the current tree shape in the compacted CNDB view.

## Sample logs

Part 1: Beginning.

```
# Create KVDB
version magic=[...] version=01 captgt=1234567890
meta seqno=01

# Create KVS
> info cnid=01 pfxlen=00 flags=0x0 name=kvs01 metasz 2176 ...

# Ingest - no deletes
> tx txid=01 seqno=[...] ingest_id=[...] txhorizon=[...]
> txc txid=01 cnid=01 tag=01 kblk.cnt=01 vblk.cnt=01 ...
> ack-C txid=01 cnid=01 tag=01

# First root spill. Creates a single node at level 1 - n01
> tx txid=02 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txnode txid=02 cnid=01 old.cnt=0 new.cnt=1 node=n01   # Create one node at level 1
> ack-N txid=02                                         # L1_nodelist: 0->n1
> txc txid=02 cnid=01 tag=01 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=02 cnid=01 tag=01 node=n01 dgen [...] vused [...] compc [...] scatter [...]
> txd txid=02 cnid=01 tag=02 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=02 cnid=01 tag=03 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=02 cnid=01 tag=04 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=02 cnid=01 tag=05 kblk.cnt=01 vblk.cnt=01 ...
> ack-C txid=02 tag=01
> ack-D txid=02
> ack-D txid=02
> ack-D txid=02
> ack-D txid=02
```

Part 2: Splits

```
# Split node=n01
> tx txid=03 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txnode txid=03 cnid=01 old.cnt=1 new.cnt=2 node=n01 node=n02 node=n03
> ack-N txid=03
> txc txid=03 cnid=01 tag=01 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=03 cnid=01 tag=01 node=n02 dgen [...] vused [...] compc [...] scatter [...]
> txc txid=03 cnid=01 tag=02 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=03 cnid=01 tag=02 node=n02 dgen [...] vused [...] compc [...] scatter [...]
> txc txid=03 cnid=01 tag=03 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=03 cnid=01 tag=03 node=n03 dgen [...] vused [...] compc [...] scatter [...]
> txc txid=03 cnid=01 tag=04 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=03 cnid=01 tag=04 node=n03 dgen [...] vused [...] compc [...] scatter [...]
> txd txid=03 cnid=01 tag=05 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=03 cnid=01 tag=06 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=03 cnid=01 tag=07 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=03 cnid=01 tag=08 kblk.cnt=01 vblk.cnt=01 ...
> ack-C txid=03 tag=01
> ack-C txid=03 tag=02
> ack-C txid=03 tag=03
> ack-C txid=03 tag=04
> ack-D txid=03 tag=05
> ack-D txid=03 tag=06
> ack-D txid=03 tag=07
> ack-D txid=03 tag=08

# L1_nodelist: 0->n2->n3

# Split node=n02. Each node contains 2 kvsets.
> tx txid=04 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txnode txid=04 cnid=01 old.cnt=1 new.cnt=2 node=n02 node=n04 node=n05
> ack-N txid=04
> txc txid=04 cnid=01 tag=01 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=04 cnid=01 tag=01 node=n04 dgen [...] vused [...] compc [...] scatter [...]
> txc txid=04 cnid=01 tag=02 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=04 cnid=01 tag=02 node=n04 dgen [...] vused [...] compc [...] scatter [...]
> txc txid=04 cnid=01 tag=03 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=04 cnid=01 tag=03 node=n05 dgen [...] vused [...] compc [...] scatter [...]
> txc txid=04 cnid=01 tag=04 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=04 cnid=01 tag=04 node=n05 dgen [...] vused [...] compc [...] scatter [...]
> txd txid=04 cnid=01 tag=05 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=04 cnid=01 tag=06 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=04 cnid=01 tag=07 kblk.cnt=01 vblk.cnt=01 ...
> txd txid=04 cnid=01 tag=08 kblk.cnt=01 vblk.cnt=01 ...
> ack-C txid=04 tag=01
> ack-C txid=04 tag=02
> ack-C txid=04 tag=03
> ack-C txid=04 tag=04
> ack-D txid=04 tag=05
> ack-D txid=04 tag=06
> ack-D txid=04 tag=07
> ack-D txid=04 tag=08

# L1_nodelist: 0->n4->n5->n3
```

Part 3: Join - No kvset/mblock delete required.

```
> tx txid=05 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txnode txid=05 cnid=01 old.cnt=2 new.cnt=1 node=n04 node=n05 node=n06
> ack-N txid=05
> txc txid=05 cnid=01 tag=01 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=05 cnid=01 tag=01 node=n06 dgen [...] vused [...] compc [...] scatter [...]
> txc txid=05 cnid=01 tag=02 kblk.cnt=01 vblk.cnt=01 ...
> txm txid=05 cnid=01 tag=02 node=n06 dgen [...] vused [...] compc [...] scatter [...]
> ack-C txid=05 tag=01
> ack-C txid=05 tag=02

# L1_nodelist: 0->n6->n3
```

Part 4: CNDB compaction (txm records are skipped for brevity)

```
# Create N nodes from 0 nodes. Encapsulates the tree shape.
> tx txid=06 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txnode txid=06 cnid=01 old.cnt=0 new.cnt=2 node=n06 node=n3
> ack-N txid=06
> txc txid=06 cnid=01 tag=01 ...
> txc txid=06 cnid=01 tag=02 ...
> txc txid=06 cnid=01 tag=03 ...
> ... more txc records. May contain ack-D records if this is rolling over CNDB on a live system.
> ack-N txid=06
> ack-C txid=06 tag=01
> ack-C txid=06 tag=02
> ack-C txid=06 tag=03
> ... more ack-C records. One for each txc record.
```

### Incremental spills

Incremental spill refers to the fact that a spill constructs and creates the kvset of a child node and adds it to the node list before proceeding to the next child node. A full spill operation can take a while, but it doesn't have to hold up all split/join/kcompact/kvcompact operations until it finishes. Only the child node to which the spill is writing data needs to be prevented from participating in any maintenance operations.

To achieve this, CN logs each incremental step in spill as a separate transaction. Also, deleting the source kvsets is its own transaction. If the process crashes in the middle of this spill, CN can resume the spill operation where it left off.

Consider the following root spill (txm records are skipped for brevity):

```
# incremental spill step 1 of 3
> tx txid=07 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txc txid=07 cnid=01 tag=01 ...
> ack-C txid=07 tag=01

# incremental spill step 2 of 3
> tx txid=08 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txc txid=08 cnid=01 tag=01 ...
> ack-C txid=08 tag=01

# incremental spill step 3 of 3
> tx txid=09 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txc txid=09 cnid=01 tag=01 ...
> ack-C txid=09 tag=01

# delete source kvsets
> tx txid=10 seqno=[...] ingest_id=CNDB_INVAL_INGESTID txhorizon=CNDB_INVAL_HORIZON
> txd txid=10 cnid=01 tag=01 ...
> txd txid=10 cnid=01 tag=02 ...
> txd txid=10 cnid=01 tag=03 ...
> ack-D txid=10 tag=01
> ack-D txid=10 tag=02
> ack-D txid=10 tag=03
```

Consider the case where the above spill operation crashes right before the last `ACK-C` (step 3 of 3) is logged. When the KVDB is reopened, the tree will contain kvsets resulting from this spill in the first two child nodes, but not the third one. Also, the three source kvsets in the root node would be intact.

- The scheduler would see the kvsets in the root node and schedule a spill operation.
- Before building a kvset for a child, look at the kvsets in the child nodes and, based on the dgen range, determine whether this child already contains the spilled data. If yes, move to the next child node (by seeking to the node's edge key). If not, run through the merge loop and continue the spill.

This approach ensures that CNDB and the compaction logic remain decoupled from each other.
42.901198
112
0.711355
eng_Latn
0.918098
0c17e2725648fd47670e0acb9af8ce484b3de24e
124
md
Markdown
README.md
AsherJingkongChen/PurBrainSource
f3bc41da2bdb7b9d485ce0b3aec87c8f3fd71960
[ "MIT" ]
null
null
null
README.md
AsherJingkongChen/PurBrainSource
f3bc41da2bdb7b9d485ce0b3aec87c8f3fd71960
[ "MIT" ]
null
null
null
README.md
AsherJingkongChen/PurBrainSource
f3bc41da2bdb7b9d485ce0b3aec87c8f3fd71960
[ "MIT" ]
null
null
null
# Poor Brain Source

### pur source here, not only codes, but verbose

`Triviality ._.`

[__Personal Discord Ciis.#0840__]()
17.714286
48
0.709677
eng_Latn
0.785062
0c18e27233c8294f773bba6a372c409780cb2a8e
4,737
md
Markdown
articles/azure-functions/run-functions-from-deployment-package.md
brentnewbury/azure-docs
52da5a910db122fc92c877a6f62c54c32c7f3b31
[ "CC-BY-4.0", "MIT" ]
1
2019-10-25T13:24:48.000Z
2019-10-25T13:24:48.000Z
articles/azure-functions/run-functions-from-deployment-package.md
brentnewbury/azure-docs
52da5a910db122fc92c877a6f62c54c32c7f3b31
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-functions/run-functions-from-deployment-package.md
brentnewbury/azure-docs
52da5a910db122fc92c877a6f62c54c32c7f3b31
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Run your Azure Functions from a package | Microsoft Docs
description: Have the Azure Functions runtime run your functions by mounting a deployment package file that contains your function app project files.
services: functions
documentationcenter: na
author: ggailey777
manager: jeconnoc

ms.service: azure-functions
ms.devlang: multiple
ms.topic: conceptual
ms.date: 02/26/2019
ms.author: glenga
---

# Run your Azure Functions from a package file

> [!NOTE]
> The functionality described in this article is not available for function apps running on Linux in an [App Service plan](functions-scale.md#app-service-plan).

In Azure, you can run your functions directly from a deployment package file in your function app. The other option is to deploy your files in the `d:\home\site\wwwroot` directory of your function app.

This article describes the benefits of running your functions from a package. It also shows how to enable this functionality in your function app.

## Benefits of running from a package file

There are several benefits to running from a package file:

+ Reduces the risk of file copy locking issues.
+ Can be deployed to a production app (with restart).
+ You can be certain of the files that are running in your app.
+ Improves the performance of [Azure Resource Manager deployments](functions-infrastructure-as-code.md).
+ May reduce cold-start times, particularly for JavaScript functions with large npm package trees.

For more information, see [this announcement](https://github.com/Azure/app-service-announcements/issues/84).

## Enabling functions to run from a package

To enable your function app to run from a package, you just add a `WEBSITE_RUN_FROM_PACKAGE` setting to your function app settings. The `WEBSITE_RUN_FROM_PACKAGE` setting can have one of the following values:

| Value | Description |
|---------|---------|
| **`1`** | Recommended for function apps running on Windows. Run from a package file in the `d:\home\data\SitePackages` folder of your function app. If not [deploying with zip deploy](#integration-with-zip-deployment), this option requires the folder to also have a file named `packagename.txt`. This file contains only the name of the package file in the folder, without any whitespace. |
|**`<url>`** | Location of a specific package file you want to run. When using Blob storage, you should use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#attach-a-storage-account-by-using-a-shared-access-signature-sas) to enable the Functions runtime to access the package. You can use the [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to upload package files to your Blob storage account. |

> [!CAUTION]
> When running a function app on Windows, the external URL option yields worse cold-start performance. When deploying your function app to Windows, you should set `WEBSITE_RUN_FROM_PACKAGE` to `1` and publish with zip deployment.

The following shows a function app configured to run from a .zip file hosted in Azure Blob storage:

![WEBSITE_RUN_FROM_ZIP app setting](./media/run-functions-from-deployment-package/run-from-zip-app-setting-portal.png)

> [!NOTE]
> Currently, only .zip package files are supported.

## Integration with zip deployment

[Zip deployment][Zip deployment for Azure Functions] is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `d:\home\data\SitePackages` folder.

With the `WEBSITE_RUN_FROM_PACKAGE` app setting value of `1`, the zip deployment APIs copy your package to the `d:\home\data\SitePackages` folder instead of extracting the files to `d:\home\site\wwwroot`. It also creates the `packagename.txt` file. The function app is then run from the package after a restart, and `wwwroot` becomes read-only. For more information about zip deployment, see [Zip deployment for Azure Functions](deployment-zip-push.md).

## Adding the WEBSITE_RUN_FROM_PACKAGE setting

[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]

A scripted alternative using the Azure CLI is sketched at the end of this article.

## Troubleshooting

- Run From Package makes `wwwroot` read-only, so you will receive an error when writing files to this directory.
- Tar and gzip formats are not supported.
- This feature does not compose with local cache.
- For improved cold-start performance, use the local Zip option (`WEBSITE_RUN_FROM_PACKAGE`=1).

## Next steps

> [!div class="nextstepaction"]
> [Continuous deployment for Azure Functions](functions-continuous-deployment.md)

[Zip deployment for Azure Functions]: deployment-zip-push.md
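As referenced above, both steps (adding the app setting and pushing a zip package) can be scripted with the Azure CLI. The commands below are standard `az` commands; the app name, resource group, and package path are placeholders:

```bash
# Tell the Functions runtime to mount and run from a package file (Windows plans).
az functionapp config appsettings set \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
    --settings WEBSITE_RUN_FROM_PACKAGE=1

# Zip deploy the project. With the setting above, the package is copied to
# d:\home\data\SitePackages and mounted, instead of being extracted to wwwroot.
az functionapp deployment source config-zip \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
    --src ./functionapp.zip
```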
60.730769
758
0.77813
eng_Latn
0.993755