hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
sequencelengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
sequencelengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
sequencelengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
1138ce49b8a330feb850f98d3adc0c7b7e4a2de1
32
md
Markdown
README.md
heyinan1991/QRCode
6b51a4154b7ec18365d2df6c4e4852e3a9e3b755
[ "MIT" ]
1
2016-06-29T07:41:59.000Z
2016-06-29T07:41:59.000Z
README.md
heyinan1991/QRCode
6b51a4154b7ec18365d2df6c4e4852e3a9e3b755
[ "MIT" ]
null
null
null
README.md
heyinan1991/QRCode
6b51a4154b7ec18365d2df6c4e4852e3a9e3b755
[ "MIT" ]
null
null
null
# QRCode Swift QR code scanning and generation, modeled after Sina Weibo's QR code feature
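The README stops at the title, but for context here is a minimal, self-contained Swift sketch of the generation half using Apple's stock Core Image `CIQRCodeGenerator` filter. This is an illustrative assumption, not the API this repository exposes. ```swift
import CoreImage
import UIKit

// Generate a QR code image from a string using Core Image's built-in filter.
// Illustrative sketch only; the QRCode library's own API is not shown in its README.
func makeQRCode(from text: String, scale: CGFloat = 10) -> UIImage? {
    guard let data = text.data(using: .utf8),
          let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    filter.setValue("M", forKey: "inputCorrectionLevel") // L/M/Q/H error-correction levels
    guard let output = filter.outputImage else { return nil }
    // The raw output is tiny (one point per module); scale it up so it stays crisp.
    let scaled = output.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    return UIImage(ciImage: scaled)
}
```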
8
12
0.84375
yue_Hant
0.656334
113a1cb27ceba2406efe92d1c78bf308d6c628b9
8,373
md
Markdown
articles/iot-dps/use-hsm-with-sdk.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/iot-dps/use-hsm-with-sdk.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/iot-dps/use-hsm-with-sdk.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: How to use different attestation mechanisms with the Azure IoT Hub Device Provisioning Service client SDK description: Azure how-to on using the different attestation mechanisms supported by the Device Provisioning Service (DPS) client SDK in Azure author: robinsh ms.author: robinsh ms.date: 03/30/2018 ms.topic: conceptual ms.service: iot-dps services: iot-dps ms.custom: - mvc - amqp ms.openlocfilehash: c110e90f26f595bcbf181b72e13f12a6de2fa8ce ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08 ms.translationtype: MT ms.contentlocale: ko-KR ms.lasthandoff: 07/02/2020 ms.locfileid: "81687218" --- # <a name="how-to-use-different-attestation-mechanisms-with-device-provisioning-service-client-sdk-for-c"></a>How to use different attestation mechanisms with the Device Provisioning Service Client SDK for C in Azure This article shows how to use different [attestation mechanisms](concepts-security.md#attestation-mechanism) with the Device Provisioning Service Client SDK for C. You can use either a physical device or a simulator. The provisioning service supports authentication for two attestation mechanisms: X.509 and TPM (Trusted Platform Module). ## <a name="prerequisites"></a>Prerequisites Prepare your development environment by following the "Prepare the development environment" section of the [Create and provision a simulated device](./quick-create-simulated-device.md) guide. ### <a name="choose-an-attestation-mechanism"></a>Choose an attestation mechanism As a device manufacturer, you first need to choose an attestation mechanism based on one of the supported types. Currently, the [Device Provisioning Service Client SDK for C](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client) supports the following attestation mechanisms: - [TPM (Trusted Platform Module)](https://en.wikipedia.org/wiki/Trusted_Platform_Module): TPM is an established standard for most Windows-based device platforms, as well as for a few Linux/Ubuntu-based devices. As a device manufacturer, you may choose this attestation mechanism if your devices run one of these OSes and you want an established standard. With TPM chips, you enroll each device individually with the Device Provisioning Service. For development, you can use a TPM simulator on your Windows or Linux development machine. - [X.509](https://cryptography.io/en/latest/x509/): X.509 certificates can be stored in relatively newer chips called [HSMs (Hardware Security Modules)](concepts-security.md#hardware-security-module). Work is in progress within Microsoft on RIoT and DICE chips that implement X.509 certificates. X.509 chips allow bulk device enrollment in the portal, and they also support certain non-Windows OSes such as embedded OSes. For development purposes, the Device Provisioning Service client SDK supports an X.509 device simulator. For more information, see the IoT Hub Device Provisioning Service [security concepts](concepts-security.md) and [auto-provisioning concepts](/azure/iot-dps/concepts-auto-provisioning). ## <a name="enable-authentication-for-supported-attestation-mechanisms"></a>Enable authentication for supported attestation mechanisms Before enrolling the SDK authentication mode (X.509 or TPM) in the Azure portal, you must first enable a physical device or a simulator. To do so, navigate to the root folder of azure-iot-sdk-c and run the command given below for the authentication mode you have chosen. ### <a name="use-x509-with-simulator"></a>Use X.509 with the simulator The provisioning service ships with a DICE (Device Identity Composition Engine) emulator that generates an **X.509** certificate to authenticate the device. To enable **X.509** authentication, run the following command: ``` cmake -Ddps_auth_type=x509 .. ``` More information about hardware with DICE can be found [here](https://azure.microsoft.com/blog/azure-iot-supports-new-security-hardware-to-strengthen-iot-security/). ### <a name="use-x509-with-hardware"></a>Use X.509 with hardware The provisioning service can be used with **X.509** on other hardware. An interface between the hardware and the SDK is needed to establish the connection; for information about the interface, contact your HSM manufacturer. ### <a name="use-tpm"></a>Use TPM The provisioning service can connect to Windows and Linux hardware TPM chips by using SAS tokens. To enable TPM authentication, run the following command: ``` cmake -Ddps_auth_type=tpm .. ``` ### <a name="use-tpm-with-simulator"></a>Use TPM with the simulator If you don't have a device with a TPM chip, you can use a simulator on the Windows OS for development purposes. To enable TPM authentication and run the TPM simulator, run the following command: ``` cmake -Ddps_auth_type=tpm_simulator .. ``` ## <a name="build-the-sdk"></a>Build the SDK You must build the SDK before creating the device enrollment. ### <a name="linux"></a>Linux - To build the SDK on Linux: ``` cd azure-iot-sdk-c mkdir cmake cd cmake cmake .. 
cmake --build . # append '-- -j <n>' to run <n> jobs in parallel ``` - To build Debug binaries, add the corresponding CMake option to the project generation command. For example: ``` cmake -DCMAKE_BUILD_TYPE=Debug .. ``` - There are many [CMake configuration options](https://cmake.org/cmake/help/v3.6/manual/cmake.1.html) available for building the SDK. For example, you can turn off one of the available protocol stacks by adding an argument to the CMake project generation command: ``` cmake -Duse_amqp=OFF .. ``` - You can also build and run the unit tests: ``` cmake -Drun_unittests=ON .. cmake --build . ctest -C "debug" -V ``` ### <a name="windows"></a>Windows - To build the SDK on Windows, take the following steps to generate the project files: - Open a "Developer Command Prompt for VS2015". - Run the following CMake commands from the root of the repository: ``` cd azure-iot-sdk-c mkdir cmake cd cmake cmake -G "Visual Studio 14 2015" .. ``` This command builds x86 libraries. To build x64 libraries instead, modify the cmake generator argument: ``` cmake .. -G "Visual Studio 14 2015 Win64" ``` - If the project is generated successfully, you should see a Visual Studio solution file (.sln) under the `cmake` folder. To build the SDK: - Open **cmake\azure_iot_sdks.sln** in Visual Studio and build it, **OR** - Run the following command in the command prompt used to generate the project files: ``` cmake --build . -- /m /p:Configuration=Release ``` - To build Debug binaries, use the corresponding MSBuild argument: ``` cmake --build . -- /m /p:Configuration=Debug ``` - There are many CMake configuration options available for building the SDK. For example, you can turn off one of the available protocol stacks by adding an argument to the CMake project generation command: ``` cmake -G "Visual Studio 14 2015" -Duse_amqp=OFF .. ``` - You can also build and run the unit tests: ``` cmake -G "Visual Studio 14 2015" -Drun_unittests=ON .. cmake --build . -- /m /p:Configuration=Debug ctest -C "debug" -V ``` ### <a name="libraries-to-include"></a>Libraries to include - The following libraries must be included in the SDK: - Provisioning service: dps_http_transport, dps_client, dps_security_client - IoTHub security: iothub_security_client ## <a name="create-a-device-enrollment-entry-in-device-provisioning-services"></a>Create a device enrollment entry in the Device Provisioning Service ### <a name="tpm"></a>TPM If you are using TPM, follow the instructions in ["Create and provision a simulated device using the Azure IoT Hub Device Provisioning Service"](./quick-create-simulated-device.md) to create a device enrollment entry in the Device Provisioning Service and simulate the first boot. ### <a name="x509"></a>X.509 1. To enroll a device with the provisioning service, note the authentication key and registration ID shown for each device in the provisioning tool supplied with the client SDK. Run the following command to output the root CA certificate (for group enrollments) and the leaf certificate (for individual enrollments): ``` ./azure-iot-sdk-c/dps_client/tools/x509_device_provision/x509_device_provision.exe ``` 2. Sign in to the Azure portal, click the **All resources** button on the left menu, and open your Device Provisioning Service. - **X.509 individual enrollment**: On the provisioning service summary blade, select **Manage enrollments**. Select the **Individual Enrollments** tab and click the **Add** button at the top. Select **X.509** as the identity attestation *Mechanism* and upload the leaf certificate as prompted by the blade. When done, click the **Save** button. - **X.509 group enrollment**: On the provisioning service summary blade, select **Manage enrollments**. Select the **Enrollment Groups** tab and click the **Add** button at the top. Select **X.509** as the identity attestation *Mechanism*, enter the group name and the certification name, and upload the CA/intermediate certificate as prompted by the blade. When done, click the **Save** button. ## <a name="enable-authentication-for-devices-using-a-custom-attestation-mechanism-optional"></a>Enable authentication for devices using a custom attestation mechanism (optional) > [!NOTE] > This section applies only to devices that need support for a custom platform or attestation mechanism that the Device Provisioning Service Client SDK for C does not currently support. The SDK often uses the term "HSM" as a generic stand-in for "attestation mechanism". First, develop a repository and a library for your custom attestation mechanism: 1. Develop a library to access your attestation mechanism. This project needs to produce a static library for the Device Provisioning SDK to consume. 2. In your library, implement the functions defined in the following header files: - Custom TPM: implement the functions defined under the [HSM TPM API](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning_client/devdoc/using_custom_hsm.md#hsm-tpm-api). 
- Custom X.509: implement the functions defined under the [HSM X509 API](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning_client/devdoc/using_custom_hsm.md#hsm-x509-api). Once the library builds successfully, link it against the Device Provisioning Service Client SDK to integrate it: 1. Supply your custom GitHub repository and library in the following `cmake` command: ```cmd/sh cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=<path_and_name_of_library> <PATH_TO_AZURE_IOT_SDK> ``` 2. Open the Visual Studio solution file built by CMake (`\azure-iot-sdk-c\cmake\azure_iot_sdks.sln`) and build it. - The build process compiles the SDK library. - The SDK attempts to link against the custom library defined in the `cmake` command. 3. Run the "prov_dev_client_ll_sample" sample app under "Provision_Samples" (under `\azure-iot-sdk-c\cmake\provisioning_client\samples\prov_dev_client_ll_sample`) to verify that your custom attestation mechanism is implemented correctly. ## <a name="connecting-to-iot-hub-after-provisioning"></a>Connecting to IoT Hub after provisioning Once the device has been provisioned with the provisioning service, this API connects to IoT Hub using the specified authentication mode (**X.509** or TPM): ``` IOTHUB_CLIENT_LL_HANDLE handle = IoTHubClient_LL_CreateFromDeviceAuth(iothub_uri, device_id, iothub_transport); ```
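To tie the build steps above together, here is a minimal sketch of the Linux flow for the TPM-simulator path, assembled from the commands in this article; the `--recursive` clone and the job count are assumptions not taken from this page. ```bash
# Sketch: build the C SDK provisioning client with the TPM simulator enabled.
git clone --recursive https://github.com/Azure/azure-iot-sdk-c.git
cd azure-iot-sdk-c
mkdir cmake && cd cmake
cmake -Ddps_auth_type=tpm_simulator ..   # or -Ddps_auth_type=x509 / tpm, as described above
cmake --build . -- -j 4                  # 4 parallel jobs; adjust to your core count
```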
44.068421
370
0.698794
kor_Hang
1.00001
113a7fc3a20f2cf2d0f73547c18d3beb562c481c
5,159
md
Markdown
_posts/2017-07-01-geant4-install.md
leeyeel/http-leeyeel.github.io
60449b81019aa46ceeba7f42f86b51a01defd423
[ "MIT" ]
1
2020-01-14T15:18:24.000Z
2020-01-14T15:18:24.000Z
_posts/2017-07-01-geant4-install.md
leeyeel/http-leeyeel.github.io
60449b81019aa46ceeba7f42f86b51a01defd423
[ "MIT" ]
null
null
null
_posts/2017-07-01-geant4-install.md
leeyeel/http-leeyeel.github.io
60449b81019aa46ceeba7f42f86b51a01defd423
[ "MIT" ]
1
2020-01-14T15:18:25.000Z
2020-01-14T15:18:25.000Z
--- layout: post title: "Installing Geant4.10 on Linux from start to finish" date: 2014-10-07 15:12:10 categories: high-energy physics tags: geant4 install tutorial excerpt: an installation tutorial for geant4 mathjax: true --- * TOC {:toc} ### About this guide This tutorial dates back to when I had just started graduate school (around 2014) and was learning Geant4. I had never used Linux before, and the installation took my senior labmate and me nearly a whole day. I have since installed it many times on different distributions, stepping on plenty of mines and clearing them, so I wrote up the installation process. Unexpectedly, over the past three years it has helped many newcomers install Geant4, and I keep receiving emails asking about the installation. I have now graduated and left research, so this is the final update, and I have tried to make it as complete as possible. Solutions to common problems are collected at the end, so if something goes wrong during installation, please read this tutorial through at least once first; you may well find the fix is already here. ### Preparation Geant4 and ROOT share many dependencies, so you can start by installing ROOT's prerequisites. Quite a few of them are needed by ROOT but not by Geant4; if you have the patience you can pick those out one by one, but here we simply install everything. (See [root-prerequisites](https://root.cern.ch/build-prerequisites) for ROOT's dependency list.) If your Linux distribution is Fedora 18, 19, or 20; Scientific Linux 5 or 6; or CentOS 6 or 7 (the `$` is the terminal prompt; don't copy it into the terminal): ```bash $ sudo yum install git cmake gcc-c++ gcc binutils libX11-devel \ libXpm-devel libXft-devel libXext-devel gcc-gfortran openssl-devel pcre-devel \ mesa-libGL-devel mesa-libGLU-devel glew-devel ftgl-devel mysql-devel \ fftw-devel cfitsio-devel graphviz-devel \ avahi-compat-libdns_sd-devel libldap-dev python-devel \ libxml2-devel gsl-static ``` If your distribution is Ubuntu 10, 12, 14, or 16: ```bash $ sudo apt-get install git dpkg-dev cmake g++ gcc binutils libx11-dev libxpm-dev \ libxft-dev libxext-dev gfortran libssl-dev libpcre3-dev \ xlibmesa-glu-dev libglew1.5-dev libftgl-dev \ libmysqlclient-dev libfftw3-dev libcfitsio-dev \ graphviz-dev libavahi-compat-libdnssd-dev \ libldap2-dev python-dev libxml2-dev libkrb5-dev \ libgsl0-dev libqt4-dev ``` For other distributions, check the URL above; I won't repeat it here. Beyond that you also need cmake and X11. Note that Geant4 releases after 10.1.2 require cmake 3.3 or newer, so first run `cmake --version` in a terminal; if your version is too old, install a newer cmake by hand. X11 is needed for graphics display: ```bash $ sudo apt-get install cmake libx11-dev libxext-dev libxtst-dev libxrender-dev libxmu-dev libxmuu-dev # install the required tools $ sudo apt-get install qt4 ``` #### Download the main program Geant4 requires the main program plus the data files, and the data files must match the version of the main program. Download Geant4 from [geant4-downloads](http://geant4.cern.ch/support/download.shtml); under Source files, take the "GNU or Linux tar format" archive. Yes, it really is only thirty-some MB; the first time I installed it I thought I had downloaded the wrong file... After downloading, extract it somewhere; for convenience we put it directly in the user's home directory, extract it there, and create a folder named geant4-build. All of this can be done with the following commands: ```bash $ wget http://geant4.web.cern.ch/geant4/support/source/geant4.10.03.p01.tar.gz -O $HOME/geant4.10.03.p01.tar.gz # download the source $ cd $HOME $ tar xvzf geant4.10.03.p01.tar.gz ``` ### Download the data files The data files are the datasets Geant4 needs at runtime. You can pass a cmake flag to have them downloaded during the build, but that can be very slow; it is better to download them in a browser and copy them into place. The download location is the same page as the main program, [geant4-downloads](http://geant4.cern.ch/support/download.shtml). If you are not sure which data files you will need later, download them all: click `Data files`, download every data file and extract them, create a folder named data, and move all the extracted data files into it. ### Build and install with cmake The idea is to create a geant4-build folder, enter it, run cmake with a few options, and finally make and install. Note: if you want the Qt interface, make sure Qt is installed on your machine; if you can't be bothered to install the packages one by one, just run: ```bash $ sudo apt-get install qt4* ``` The Geant4 build and install steps: ```bash $ mkdir geant4-build && cd geant4-build $ cmake -DCMAKE_INSTALL_PREFIX=$HOME/geant4-install/ -DGEANT4_USE_OPENGL_X11=ON \ -DGEANT4_USE_RAYTRACER_X11=ON -DGEANT4_USE_QT=ON \ -DGEANT4_BUILD_MULTITHREADED=ON $HOME/geant4.10.03.p01 $ make -j8 $ make install -j8 ``` Where: `-DCMAKE_INSTALL_PREFIX=$HOME/geant4-install/` sets the install location; `-DGEANT4_USE_OPENGL_X11=ON -DGEANT4_USE_RAYTRACER_X11=ON` enables graphical visualization; `-DGEANT4_USE_QT=ON` enables Qt (omit this option if you don't need the Qt interface); `-DGEANT4_BUILD_MULTITHREADED=ON` enables multithreading; `$HOME/geant4.10.03.p01` is the source directory; if you downloaded a different version, change it to the name of your extracted folder. The `-j8` in `make -j8` and `make install -j8` means eight parallel jobs; if your machine has more cores you can use `-j16` or more. 
When cmake finishes, if no errors were reported, the terminal prints something like: ```bash --Configuring done --Generating done --Build files have been written to: /home/xxx ``` which means it succeeded. ### Running the examples 1) After the steps above you will see three folders under home: geant4.10.03.p01, geant4-build, and geant4-install. Move the data folder you prepared earlier into geant4-install/share/Geant4-10.03 (you will see a folder named geant4make in that directory). 2) Enter the geant4make folder just mentioned; it contains a file named geant4make.sh. Switch to that directory in a terminal and run: ```bash $ source geant4make.sh ``` This environment script must be sourced every time you use Geant4; if you don't want to run it manually each time, append the command to your .bashrc: ```bash $ echo 'source $HOME/geant4-install/share/Geant4-10.03.p01/geant4make/geant4make.sh' >> $HOME/.bashrc ``` 3) Run an example. Once the two steps above succeed, switch to the examples directory; you can find it inside the source folder, which contains an examples directory. ```bash $ cd $HOME/geant4.10.03.p01/examples/basic/B1 $ make -j8 ``` Output similar to: ```bash LinkingexampleB1 ...Done! ``` means the build succeeded. Then run: ```bash $ exampleB1 ``` to launch the simplest example. ### Other notes On other Linux distributions such as Scientific Linux, Fedora, or Red Hat, especially older releases, all sorts of errors may appear because the bundled packages are old or runtime libraries are missing. Always read the error message carefully, then search for the corresponding fix. Below are the fixes for problems seen on Scientific Linux 6.5 and Fedora 19. The most likely problem when installing Geant4 is the X11 Xmu issue. On Ubuntu and similar, as covered above: ```bash $ sudo apt-get install libx11-dev libxext-dev libxtst-dev libxrender-dev libxmu-dev libxmuu-dev ``` On SL, Fedora, Red Hat, and the like, run: ```bash $ sudo yum search X11 | grep Xmu ``` which usually prints: ```bash libXmu.i686 : X.Org X11 libXmu/libXmuu runtime libraries libXmu.x86_64 : X.Org X11 libXmu/libXmuu runtime libraries libXmu-devel.i686 : X.Org X11 libXmu development package libXmu-devel.x86_64 : X.Org X11 libXmu development package ``` If you are on a 64-bit system, install the x86_64 entries: ```bash $ sudo yum install libXmu.x86_64 libXmu-devel.x86_64 ``` That should fix it; if not, try the following: ```bash $ sudo yum install expat-devel mesa* freeglut-devel $ sudo yum groupinstall "X software Development" ``` (This command fixes "X11 not found" errors; on Scientific Linux use sudo yum install X*.) If you run into other problems, feel free to leave a comment, and I will explain things to every junior labmate in as much detail as I can.
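As a quick sanity check once everything is installed, you can query the `geant4-config` helper that ships in the install tree; this is a sketch, the paths assume the layout used above, and the exact version string and feature names may differ on your machine. ```bash
# Verify the install and the enabled features (paths follow the layout above).
source $HOME/geant4-install/share/Geant4-10.03.p01/geant4make/geant4make.sh
$HOME/geant4-install/bin/geant4-config --version          # prints the installed version
$HOME/geant4-install/bin/geant4-config --has-feature qt   # "yes" if Qt support was built
```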
30.892216
124
0.773987
yue_Hant
0.571215
113ab1eee956613bf4add3b893b6a99ab871dca3
5,791
md
Markdown
README.md
ilduchea/animal-shelter
2adce514bfe49dc42940d24361ee8dfa5d1fee42
[ "Unlicense" ]
null
null
null
README.md
ilduchea/animal-shelter
2adce514bfe49dc42940d24361ee8dfa5d1fee42
[ "Unlicense" ]
null
null
null
README.md
ilduchea/animal-shelter
2adce514bfe49dc42940d24361ee8dfa5d1fee42
[ "Unlicense" ]
1
2021-01-26T16:38:54.000Z
2021-01-26T16:38:54.000Z
# Animal Shelter, v1 #### This is an API with full CRUD, built using Ruby on Rails. July 21, 2017 #### By _**Tyler Stephenson**_ ## Description This is an API that includes data and full CRUD functionality for an animal shelter. It includes multiple scopes listed below for querying. Scopes: - Filters animals by name, species, age, and breed. The application structure is outlined below. Models: - Animal - name - string - species - string - breed - string - age - integer ## User Stories * Endpoints for GET (all and by id), POST, PUT and DELETE. * A RANDOM endpoint that randomly returns a park/business/animal. * A second custom endpoint that accepts parameters (example: a SEARCH route that allows users to search by specific park names). * Model scopes should be used to process parameters from API calls. * At least one of the objectives from Monday's Further Exploration lesson (such as versioning, token authentication, or serialization). * Thorough exception handling. * Complete testing with request specs. * Randomized data with Faker or your own custom seed code. * A README that thoroughly documents all endpoints, including parameters that can be passed in. ## Database Seeding The application is seeded using `faker`. It seeds 42 animals. ## Prerequisites You will need the following things properly installed on your computer. * [Git](https://git-scm.com/) * [Postgres](https://www.postgresql.org/) * [Ruby](https://www.ruby-lang.org/en/downloads/) * [Rails](http://rubyonrails.org/) ## Installation In your terminal: * `git clone animal-shelter` * `cd animal-shelter` * `bundle install` * Open another terminal window and type `postgres`. Leave this window open. * In your first terminal window type: * `rails db:setup` * `rails db:test:prepare` ## Development server Run `rails s` for a dev server. It will be served on `http://localhost:3000/` by default. * If you would like to make changes to this project, do so in a text editor. * Make frequent commits with detailed comments. * Submit changes via pull request. ## Running tests This app uses RSpec and Shoulda Matchers for testing. Run `rspec` in terminal to test. ## API Routes - Young - GET `http://localhost:3000/young` - Returns all animals under 5 years old. - Mature - GET `http://localhost:3000/mature` - Returns all animals that are 5 years old or older. - Random - GET `http://localhost:3000/random` - Returns a random animal. - Search - GET `http://localhost:3000/search?{params}` - Returns all animals with the given search options. - Params can be any combination of `name_search={name}&species_search={species}&breed_search={breed}&age_search={age}` - See below for examples. - Index(all) - GET `http://localhost:3000/animals` - Returns all animals. - Create - POST `http://localhost:3000/animals?name={name}&species={species}&breed={breed}&age={age}` - Creates a new animal with the given attributes. - Show - GET `http://localhost:3000/animals/{id}` - Returns an animal by its id. - Update - PATCH or PUT `http://localhost:3000/animals/{id}?{params}` - Updates a given animal with the given attributes. - Params can be any combination of `name={name}&species={species}&breed={breed}&age={age}` - Destroy - DELETE `http://localhost:3000/animals/{id}` - Removes the given animal from the database. ## Performing Searches See the table below for possible searches and examples of performing them in Postman/cURL. 
#### Animal Searches | Parameter | Sample Value | Description | |:----------:|:------------:|:------------| | name_search | Tiny | All animals with that name; searches for similar match without case sensitivity. | | species_search | dog | All animals of that species; searches for similar match without case sensitivity. | | breed_search | pitbull | All animals of that breed; searches for similar match without case sensitivity. | | age_search | 5 | All animals with the age of 5. | #### Example Animal Searches Postman: 1) Get all animals. * select GET and type in : ``` http://localhost:3000/v1/animals ``` 2) Get all animals with word "Tiny" in animal name. * select GET and type in : ``` http://localhost:3000/v1/animals?name_search=tiny ``` 3) Get all animals with the species of dog and the age of 5. * select GET and type in : ``` http://localhost:3000/v1/animals?species_search=dog&age_search=5 ``` ## Known Bugs No known bugs at this time. ## Support and Contact details * Tyler Stephenson * [email protected] ## Technologies Used * Ruby * Rails * JWT Gem * Devise * ActiveRecord * Postgres * Bundler * Rake Gem * HTML * CSS * Bootstrap * ES6 * SimpleCov * FactoryGirl ## License MIT License Copyright (c) 2017 Tyler Stephenson Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
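For readers working from a terminal rather than Postman, the same requests can be issued with cURL. A few illustrative calls against the routes documented above, assuming the dev server is running on localhost:3000: ```
curl "http://localhost:3000/v1/animals"
curl "http://localhost:3000/random"
curl "http://localhost:3000/v1/animals?species_search=dog&age_search=5"
curl -X POST "http://localhost:3000/animals?name=Tiny&species=dog&breed=pitbull&age=5"
curl -X DELETE "http://localhost:3000/animals/1"
```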
34.885542
461
0.731998
eng_Latn
0.910747
113b7be01605bc7ed66392e5d8b0ccfc0f29fa79
163
md
Markdown
site/AntDesign.Docs/Demos/Components/Tabs/demo/card.md
noctis0430-open-source/ant-design-blazor
283b616c068efaaf87627124d5af45fc788bf343
[ "MIT" ]
3,659
2020-04-27T15:23:04.000Z
2022-03-31T22:23:01.000Z
site/AntDesign.Docs/Demos/Components/Tabs/demo/card.md
noctis0430-open-source/ant-design-blazor
283b616c068efaaf87627124d5af45fc788bf343
[ "MIT" ]
1,843
2020-04-27T23:36:14.000Z
2022-03-31T21:05:32.000Z
site/AntDesign.Docs/Demos/Components/Tabs/demo/card.md
noctis0430-open-source/ant-design-blazor
283b616c068efaaf87627124d5af45fc788bf343
[ "MIT" ]
699
2020-04-30T05:28:07.000Z
2022-03-30T18:36:37.000Z
--- order: 8 title: zh-CN: 卡片式页签 en-US: Card type tab --- ## zh-CN Another style of tabs; a corresponding vertical style is not provided. ## en-US Another type of Tabs, which doesn't support vertical mode.
11.642857
58
0.662577
eng_Latn
0.890958
113ba3c02363757c6c3794a6cb390ffa10bd2ca4
6,708
md
Markdown
docs/relational-databases/replication/scripting-replication.md
v-brlaz/sql-docs
5d902e328b551bb619fd95106ce3d320a8fdfbe9
[ "CC-BY-4.0", "MIT" ]
1
2019-02-06T20:12:14.000Z
2019-02-06T20:12:14.000Z
docs/relational-databases/replication/scripting-replication.md
v-brlaz/sql-docs
5d902e328b551bb619fd95106ce3d320a8fdfbe9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/replication/scripting-replication.md
v-brlaz/sql-docs
5d902e328b551bb619fd95106ce3d320a8fdfbe9
[ "CC-BY-4.0", "MIT" ]
1
2020-12-22T22:24:34.000Z
2020-12-22T22:24:34.000Z
--- title: "Scripting Replication | Microsoft Docs" ms.custom: "" ms.date: "03/14/2017" ms.prod: "sql" ms.prod_service: "database-engine" ms.service: "" ms.component: "replication" ms.reviewer: "" ms.suite: "sql" ms.technology: - "replication" ms.tgt_pltfrm: "" ms.topic: "article" helpviewer_keywords: - "scripts [SQL Server replication], replication objects" - "merge replication scripting [SQL Server replication]" - "replication [SQL Server], scripting" - "snapshot replication [SQL Server], scripting" - "scripts [SQL Server replication]" - "transactional replication, scripting" ms.assetid: e50fac44-54c0-470c-a4ea-9c111fa4322b caps.latest.revision: 36 author: "MashaMSFT" ms.author: "mathoma" manager: "craigg" ms.workload: "Inactive" --- # Scripting Replication [!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)] All replication components in a topology should be scripted as part of a disaster recovery plan, and scripts can also be used to automate repetitive tasks. A script contains the Transact-SQL system stored procedures necessary to implement the replication component(s) scripted, such as a publication or subscription. Scripts can be created in a wizard (such as the New Publication Wizard) or in [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] after you create a component. You can view, modify, and run the script using [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] or **sqlcmd**. Scripts can be stored with backup files to be used in case a replication topology must be reconfigured. A component should be re-scripted if any property changes are made. If you use custom stored procedures with transactional replication, a copy of each procedure should be stored with the scripts; the copy should be updated if the procedure changes (procedures are typically updated due to schema changes or changing application requirements). For more information about custom procedures, see [Specify How Changes Are Propagated for Transactional Articles](../../relational-databases/replication/transactional/transactional-articles-specify-how-changes-are-propagated.md). For merge publications that use parameterized filters, publication scripts contain the stored procedure calls to create data partitions. The script provides a reference for the partitions created and a way in which to re-create one or more partitions if necessary. ## Example of Automating a Task with Scripts Consider [!INCLUDE[ssSampleDBCoFull](../../includes/sssampledbcofull-md.md)], which implements merge replication to distribute data to its remote sales force. A sales representative downloads all the data that pertains to the customers in her territory using pull subscriptions. When working offline, the sales representative updates data and enters new customers and orders. Because [!INCLUDE[ssSampleDBCoFull](../../includes/sssampledbcofull-md.md)] has more than fifty sales representatives in different territories, it would be time-consuming to create the different subscriptions at each Subscriber with the New Subscription Wizard. Instead, the replication administrator can follow these steps: 1. Set up the necessary merge publications with partitions based on the sales representative or their territory. 2. Create a pull subscription for one Subscriber. 3. Generate a script based on that pull subscription. 4. Modify the script, changing such values as the name of the Subscriber. 5. 
Run the script at multiple Subscribers to generate the required pull subscriptions. ## Script Replication Objects Script replication objects from the replication wizards or from the **Replication** folder in [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)]. If you script from the wizards, you can choose to create objects and script them, or you can choose only to script them. > [!IMPORTANT] > All passwords are scripted as NULL. When possible, prompt users to enter security credentials at runtime. If you store credentials in a script file, you must secure the file to prevent unauthorized access. For more information about using the replication wizards, see: - [Configure Publishing and Distribution](../../relational-databases/replication/configure-publishing-and-distribution.md) - [Create a Publication](../../relational-databases/replication/publish/create-a-publication.md) - [Create a Push Subscription](../../relational-databases/replication/create-a-push-subscription.md) - [Create a Pull Subscription](../../relational-databases/replication/create-a-pull-subscription.md) #### To script an object from a replication wizard 1. On the **Wizard Actions** page of a wizard, select the check box appropriate for the wizard: - **Generate a script file with steps to create a publication** - **Generate a script file with steps to create the subscription(s)** - **Generate a script file with steps to configure distribution** 2. Specify options on the **Script File Properties** page. 3. Complete the wizard. #### To script an object from Management Studio 1. Connect to the Distributor, Publisher, or Subscriber in [!INCLUDE[ssManStudio](../../includes/ssmanstudio-md.md)], and then expand the server node. 2. Expand the **Replication** folder, and then expand the **Local Publications** folder or the **Local Subscriptions** folder. 3. Right-click a publication or subscription, and then click **Generate Scripts**. 4. Specify options in the **Generate SQL Script - \<ReplicationObject>** dialog box. 5. Click **Script to File**. 6. Enter a file name in the **Script File Location** dialog box, and then click **Save**. A status message is displayed. 7. Click **OK**, and then click **Close**. #### To script multiple objects from Management Studio 1. Connect to the Distributor, Publisher, or Subscriber in [!INCLUDE[ssManStudio](../../includes/ssmanstudio-md.md)], and then expand the server node. 2. Right-click the **Replication** folder, and then click **Generate Scripts**. 3. Specify options in the **Generate SQL Script** dialog box. 4. Click **Script to File**. 5. Enter a file name in the **Script File Location** dialog box, and then click **Save**. A status message is displayed. 6. Click **OK**, and then click **Close**.
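As a concrete sketch of the automation example above, a saved subscription script can be parameterized with **sqlcmd** scripting variables and replayed at each Subscriber. The server names and file name below are hypothetical, and the script itself would reference the variable as `$(SubscriberName)`: ```
sqlcmd -S SALESSRV01 -E -i CreatePullSubscription.sql -v SubscriberName="SALESSRV01"
sqlcmd -S SALESSRV02 -E -i CreatePullSubscription.sql -v SubscriberName="SALESSRV02"
```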
60.432432
779
0.7339
eng_Latn
0.977179
113c1105767348d2cfcd7844ba1e4149e5f9fbd6
1,228
md
Markdown
doc/api/monster_type.md
subalterngames/procemon
d65140ecccab406ba55b649f3df1999f823fba5a
[ "MIT" ]
null
null
null
doc/api/monster_type.md
subalterngames/procemon
d65140ecccab406ba55b649f3df1999f823fba5a
[ "MIT" ]
null
null
null
doc/api/monster_type.md
subalterngames/procemon
d65140ecccab406ba55b649f3df1999f823fba5a
[ "MIT" ]
null
null
null
# MonsterType `from procemon.monster_type import MonsterType` A type of monster, as well as its associated keywords and nouns. *** ## Fields - `monster_type` The name of this type. - `nouns` The nouns associated with this type. Used for naming a monster. - `verbs` The verbs associated with this type. Used for naming a move. - `adjectives` The adjectives associated with this type. Used for naming a move. - `wikipedia` The name of the Wikipedia page corresponding to this type. - `imagenet` The ImageNet word corresponding to this type. *** ## Functions #### \_\_init\_\_ **`MonsterType(monster_type, nouns, verbs, adjectives, wikipedia, imagenet)`** | Parameter | Type | Default | Description | | --- | --- | --- | --- | | monster_type | str | | The name of this type. | | nouns | List[str] | | The nouns associated with this type. Used for naming a monster. | | verbs | List[str] | | The verbs associated with this type. Used for naming a move. | | adjectives | List[str] | | The adjectives associated with this type. Used for naming a move. | | wikipedia | str | | The name of the Wikipedia page corresponding to this type. | | imagenet | str | | The ImageNet word corresponding to this type. |
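A short usage sketch based on the constructor documented above; the field values are invented for illustration and are not part of the procemon dataset. ```python
from procemon.monster_type import MonsterType

# Construct a type with the documented signature (illustrative values only).
flame = MonsterType(monster_type="flame",
                    nouns=["ember", "ash", "furnace"],
                    verbs=["scorch", "ignite"],
                    adjectives=["blazing", "smoldering"],
                    wikipedia="Fire",
                    imagenet="volcano")
print(flame.monster_type, flame.nouns)
```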
30.7
98
0.696254
eng_Latn
0.992384
113c2df5ec6e23a8a11b677fbf26564cf941bfba
129
md
Markdown
README.md
TASagent/AdventOfCode2020
4a83d52c236ccd9e18be2655e9e8d52da8e93daa
[ "MIT" ]
null
null
null
README.md
TASagent/AdventOfCode2020
4a83d52c236ccd9e18be2655e9e8d52da8e93daa
[ "MIT" ]
null
null
null
README.md
TASagent/AdventOfCode2020
4a83d52c236ccd9e18be2655e9e8d52da8e93daa
[ "MIT" ]
null
null
null
# Advent of Code - 2020 My streamed solutions for the Advent of Code 2020 challenges. Written in fairly standard C# with LINQ.
32.25
103
0.767442
eng_Latn
0.987473
113dc320ed74fb1a78b06a196fa5e2d51400eefd
10,516
md
Markdown
Engine/extlibs/IosLibs/mono-2.6.7/mono/mini/cpu-s390.md
zlxy/Genesis-3D
44bbe50b00118c9fee60e4e3b414371411411317
[ "MIT" ]
5
2019-03-12T14:25:25.000Z
2021-09-29T18:22:05.000Z
Unity-2018.2.8/mono/mini/cpu-s390.md
Zhentar/UnityEtwSymbols
e3614341a9553b2c916c9ebb12ace4a0eb46291b
[ "Unlicense" ]
null
null
null
Unity-2018.2.8/mono/mini/cpu-s390.md
Zhentar/UnityEtwSymbols
e3614341a9553b2c916c9ebb12ace4a0eb46291b
[ "Unlicense" ]
2
2019-03-12T14:25:28.000Z
2019-07-17T06:23:57.000Z
# S/390 64-bit cpu description file # this file is read by genmdesc to produce a table with all the relevant information # about the cpu instructions that may be used by the register allocator, the scheduler # and other parts of the arch-dependent part of mini. # # An opcode name is followed by a colon and optional specifiers. # A specifier has a name, a colon and a value. Specifiers are separated by white space. # Here is a description of the specifiers valid for this file and their possible values. # # dest:register describes the destination register of an instruction # src1:register describes the first source register of an instruction # src2:register describes the second source register of an instruction # # register may have the following values: # i integer register # a r3 register (output from calls) # b base register (used in address references) # f floating point register # # len:number describes the maximum length in bytes of the instruction # number is a positive integer # # cost:number describes how many cycles are needed to complete the instruction (unused) # # clob:spec describes if the instruction clobbers registers or has special needs # # spec can be one of the following characters: # c clobbers caller-save registers # r 'reserves' the destination register until a later instruction unreserves it # used mostly to set output registers in function calls # # flags:spec describes if the instruction uses or sets the flags (unused) # # spec can be one of the following chars: # s sets the flags # u uses the flags # m uses and modifies the flags # # res:spec describes what units are used in the processor (unused) # # delay: describes delay slots (unused) # # the required specifiers are: len, clob (if registers are clobbered), the registers # specifiers if the registers are actually used, flags (when scheduling is implemented). # # See the code in mini-x86.c for more details on how the specifiers are used. 
# nop: len:4 relaxed_nop: len:4 adc: dest:i src1:i src2:i len:6 add_ovf_carry: dest:i src1:1 src2:i len:28 add_ovf_un_carry: dest:i src1:1 src2:i len:28 addcc: dest:i src1:i src2:i len:6 aot_const: dest:i len:8 atomic_add_i4: src1:b src2:i dest:i len:20 atomic_exchange_i4: src1:b src2:i dest:i len:20 atomic_add_new_i4: src1:b src2:i dest:i len:24 br: len:6 br_reg: src1:i len:8 break: len:6 call: dest:o len:6 clob:c call_handler: len:12 call_membase: dest:o src1:b len:12 clob:c call_reg: dest:o src1:i len:8 clob:c ceq: dest:i len:12 cgt.un: dest:i len:12 cgt: dest:i len:12 checkthis: src1:b len:4 ckfinite: dest:f src1:f len:22 clt.un: dest:i len:12 clt: dest:i len:12 compare: src1:i src2:i len:4 compare_imm: src1:i len:14 cond_exc_c: len:8 cond_exc_eq: len:8 cond_exc_ge: len:8 cond_exc_ge_un: len:8 cond_exc_gt: len:8 cond_exc_gt_un: len:8 cond_exc_le: len:8 cond_exc_le_un: len:8 cond_exc_lt: len:8 cond_exc_lt_un: len:8 cond_exc_nc: len:8 cond_exc_ne_un: len:8 cond_exc_no: len:8 cond_exc_ov: len:8 endfinally: len: 20 fcall: dest:g len:10 clob:c fcall_membase: dest:g src1:b len:14 clob:c fcall_reg: dest:g src1:i len:10 clob:c fcompare: src1:f src2:f len:14 float_add: dest:f src1:f src2:f len:6 float_beq: len:10 float_bge: len:10 float_bge_un: len:8 float_bgt: len:10 float_ble: len:10 float_ble_un: len:8 float_blt: len:10 float_blt_un: len:8 float_bne_un: len:8 float_bgt_un: len:8 float_ceq: dest:i src1:f src2:f len:16 float_cgt: dest:i src1:f src2:f len:16 float_cgt_un: dest:i src1:f src2:f len:16 float_clt: dest:i src1:f src2:f len:16 float_clt_un: dest:i src1:f src2:f len:16 float_conv_to_i1: dest:i src1:f len:50 float_conv_to_i2: dest:i src1:f len:50 float_conv_to_i4: dest:i src1:f len:50 float_conv_to_i8: dest:l src1:f len:50 float_conv_to_i: dest:i src1:f len:52 float_conv_to_r4: dest:f src1:f len:4 float_conv_to_u1: dest:i src1:f len:62 float_conv_to_u2: dest:i src1:f len:62 float_conv_to_u4: dest:i src1:f len:62 float_conv_to_u8: dest:l src1:f len:62 float_conv_to_u: dest:i src1:f len:36 float_div: dest:f src1:f src2:f len:6 float_div_un: dest:f src1:f src2:f len:6 float_mul: dest:f src1:f src2:f len:6 float_neg: dest:f src1:f len:6 float_not: dest:f src1:f len:6 float_rem: dest:f src1:f src2:f len:16 float_rem_un: dest:f src1:f src2:f len:16 float_sub: dest:f src1:f src2:f len:6 fmove: dest:f src1:f len:4 iconst: dest:i len:16 jmp: len:56 label: len:0 lcall: dest:L len:8 clob:c lcall_membase: dest:L src1:b len:12 clob:c lcall_reg: dest:L src1:i len:8 clob:c load_membase: dest:i src1:b len:18 loadi1_membase: dest:i src1:b len:40 loadi2_membase: dest:i src1:b len:24 loadi4_membase: dest:i src1:b len:18 loadi8_membase: dest:i src1:b loadr4_membase: dest:f src1:b len:20 loadr8_membase: dest:f src1:b len:18 loadu1_membase: dest:i src1:b len:26 loadu2_membase: dest:i src1:b len:26 loadu4_mem: dest:i len:8 loadu4_membase: dest:i src1:b len:18 localloc: dest:i src1:i len:72 long_add: len: 18 dest:l src1:l src2:i clob:1 long_add_ovf_un: len:22 dest:l src1:l src2:i clob:1 long_add_ovf: len:28 dest:l src1:l src2:i clob:1 long_conv_to_ovf_i: dest:i src1:i src2:i len:44 long_conv_to_r_un: dest:f src1:i src2:i len:37 long_conv_to_r4: dest:f src1:i len:4 long_conv_to_r8: dest:f src1:i len:4 long_mul_ovf: len: 18 long_mul_ovf_un: len: 18 long_sub: len: 18 dest:l src1:l src2:i clob:1 long_sub_ovf_un: len:22 dest:l src1:l src2:i clob:1 long_sub_ovf: len:36 dest:l src1:l src2:i clob:1 memory_barrier: len: 10 move: dest:i src1:i len:4 bigmul: len:2 dest:l src1:a src2:i bigmul_un: len:2 dest:l src1:a 
src2:i endfilter: src1:i len:12 rethrow: src1:i len:8 oparglist: src1:i len:20 r4const: dest:f len:22 r8const: dest:f len:18 s390_bkchain: len:16 dest:i src1:i s390_move: len:48 dest:b src1:b s390_setf4ret: dest:f src1:f len:4 tls_get: dest:i len:44 sbb: dest:i src1:i src2:i len:8 setlret: src1:i src2:i len:12 sqrt: dest:f src1:f len:4 start_handler: len:18 store_membase_imm: dest:b len:32 store_membase_reg: dest:b src1:i len:18 storei1_membase_imm: dest:b len:32 storei1_membase_reg: dest:b src1:i len:18 storei2_membase_imm: dest:b len:32 storei2_membase_reg: dest:b src1:i len:18 storei4_membase_imm: dest:b len:32 storei4_membase_reg: dest:b src1:i len:18 storei8_membase_imm: dest:b storei8_membase_reg: dest:b src1:i storer4_membase_reg: dest:b src1:f len:22 storer8_membase_reg: dest:b src1:f len:22 sub_ovf_carry: dest:i src1:1 src2:i len:28 sub_ovf_un_carry: dest:i src1:1 src2:i len:28 subcc: dest:i src1:i src2:i len:6 throw: src1:i len:8 vcall: len:8 clob:c vcall_membase: src1:b len:12 clob:c vcall_reg: src1:i len:8 clob:c voidcall: len:8 clob:c voidcall_membase: src1:b len:12 clob:c voidcall_reg: src1:i len:8 clob:c # 32 bit opcodes int_add: dest:i src1:i src2:i len:6 int_sub: dest:i src1:i src2:i len:6 int_mul: dest:i src1:i src2:i len:6 int_div: dest:a src1:i src2:i len:10 int_div_un: dest:a src1:i src2:i len:12 int_and: dest:i src1:i src2:i len:6 int_or: dest:i src1:i src2:i len:4 int_xor: dest:i src1:i src2:i len:4 int_rem: dest:d src1:i src2:i len:10 int_rem_un: dest:d src1:i src2:i len:12 int_shl: dest:i src1:i src2:i clob:s len:8 int_shr: dest:i src1:i src2:i clob:s len:8 int_shr_un: dest:i src1:i src2:i clob:s len:8 int_add_ovf: len: 24 dest:i src1:i src2:i int_add_ovf_un: len: 10 dest:i src1:i src2:i int_sub_ovf: len:24 dest:i src1:i src2:i int_sub_ovf_un: len:10 dest:i src1:i src2:i int_mul_ovf: dest:i src1:i src2:i len:42 int_mul_ovf_un: dest:i src1:i src2:i len:20 int_neg: dest:i src1:i len:4 int_not: dest:i src1:i len:8 int_conv_to_i1: dest:i src1:i len:16 int_conv_to_i2: dest:i src1:i len:16 int_conv_to_i4: dest:i src1:i len:2 int_conv_to_r4: dest:f src1:i len:4 int_conv_to_r8: dest:f src1:i len:4 int_conv_to_u1: dest:i src1:i len:8 int_conv_to_u2: dest:i src1:i len:16 int_conv_to_u4: dest:i src1:i int_conv_to_r_un: dest:f src1:i len:30 int_beq: len:8 int_bge_un: len:8 int_bge: len:8 int_bgt_un: len:8 int_bgt: len:8 int_ble_un: len:8 int_ble: len:8 int_blt_un: len:8 int_blt: len:8 int_bne_un: len:8 mul_imm: dest:i src1:i len:20 adc_imm: dest:i src1:i len:18 add_imm: dest:i src1:i len:18 addcc_imm: dest:i src1:i len:18 and_imm: dest:i src1:i len:16 div_imm: dest:i src1:i len:24 div_un_imm: dest:i src1:i len:24 or_imm: dest:i src1:i len:16 rem_imm: dest:i src1:i len:24 rem_un_imm: dest:i src1:i len:24 sbb_imm: dest:i src1:i len:18 shl_imm: dest:i src1:i len:8 shr_imm: dest:i src1:i len:8 shr_un_imm: dest:i src1:i len:8 sub_imm: dest:i src1:i len:18 subcc_imm: dest:i src1:i len:18 xor_imm: dest:i src1:i len:16 # Linear IR opcodes dummy_use: src1:i len:0 dummy_store: len:0 not_reached: len:0 not_null: src1:i len:0 jump_table: dest:i len:16 icompare: src1:i src2:i len:4 icompare_imm: src1:i len:14 int_ceq: dest:i len:12 int_cgt_un: dest:i len:12 int_cgt: dest:i len:12 int_clt_un: dest:i len:12 int_clt: dest:i len:12 cond_exc_ic: len:8 cond_exc_ieq: len:8 cond_exc_ige: len:8 cond_exc_ige_un: len:8 cond_exc_igt: len:8 cond_exc_igt_un: len:8 cond_exc_ile: len:8 cond_exc_ile_un: len:8 cond_exc_ilt: len:8 cond_exc_ilt_un: len:8 cond_exc_inc: len:8 cond_exc_ine_un: len:8 
cond_exc_ino: len:8 cond_exc_iov: len:8 int_add_imm: dest:i src1:i len:18 int_sub_imm: dest:i src1:i len:18 int_mul_imm: dest:i src1:i len:20 int_div_imm: dest:i src1:i len:24 int_div_un_imm: dest:i src1:i len:24 int_rem_imm: dest:i src1:i len:24 int_rem_un_imm: dest:i src1:i len:24 int_and_imm: dest:i src1:i len:16 int_or_imm: dest:i src1:i len:16 int_xor_imm: dest:i src1:i len:16 int_adc_imm: dest:i src1:i len:18 int_sbb_imm: dest:i src1:i len:18 int_shl_imm: dest:i src1:i len:8 int_shr_imm: dest:i src1:i len:8 int_shr_un_imm: dest:i src1:i len:8 int_adc: dest:i src1:i src2:i len:6 int_sbb: dest:i src1:i src2:i len:8 int_addcc: dest:i src1:i src2:i len:6 int_subcc: dest:i src1:i src2:i len:6 long_conv_to_ovf_i4_2: dest:i src1:i src2:i len:44 vcall2: len:8 clob:c vcall2_membase: src1:b len:12 clob:c vcall2_reg: src1:i len:8 clob:c s390_long_add: dest:l src1:i src2:i len:18 s390_long_add_ovf: dest:l src1:i src2:i len:32 s390_long_add_ovf_un: dest:l src1:i src2:i len:32 s390_long_sub: dest:l src1:i src2:i len:18 s390_long_sub_ovf: dest:l src1:i src2:i len:32 s390_long_sub_ovf_un: dest:l src1:i src2:i len:32 s390_long_neg: dest:l src1:i src2:i len:18 s390_int_add_ovf: len:24 dest:i src1:i src2:i s390_int_add_ovf_un: len:10 dest:i src1:i src2:i s390_int_sub_ovf: len:24 dest:i src1:i src2:i s390_int_sub_ovf_un: len:10 dest:i src1:i src2:i
31.391045
93
0.756752
lit_Latn
0.278035
113e9db9a47fe4c72df277eeddb1352833ccb34d
3,469
md
Markdown
README.md
bydooweedoo/is
2d6acf61b397f7297d42a83b09c1181e22cb5230
[ "MIT" ]
17
2018-06-23T11:16:17.000Z
2021-11-17T18:28:37.000Z
README.md
bydooweedoo/is
2d6acf61b397f7297d42a83b09c1181e22cb5230
[ "MIT" ]
null
null
null
README.md
bydooweedoo/is
2d6acf61b397f7297d42a83b09c1181e22cb5230
[ "MIT" ]
2
2018-10-24T20:12:54.000Z
2019-12-25T17:45:41.000Z
# Is Fast, extensible and easy to use data structure validation for [elixir](https://github.com/elixir-lang/elixir) with nested structures support. ## Installation If [available in Hex](https://hex.pm/docs/publish), the package can be installed by adding `is` to your list of dependencies in `mix.exs`: ```elixir def deps do [ {:is, "~> 1.0.0"} ] end ``` Documentation can be generated with [ExDoc](https://github.com/elixir-lang/ex_doc) and published on [HexDocs](https://hexdocs.pm). Once published, the docs can be found at [https://hexdocs.pm/is](https://hexdocs.pm/is). ## Example ```elixir iex> data = Enum.map(1..2, &(%{ ...> a: 1, ...> b: "b", ...> c: {"a", "b", false}, ...> d: [[1, 2, "3"], [4, false, 6]], ...> e: -1, ...> f: "1234567891011", ...> index: &1 - 10, ...> })) iex> schema = [list: [map: %{ ...> a: :binary, ...> b: :boolean, ...> c: [list: [or: [:binary, :boolean]]], ...> d: [list: [list: :integer]], ...> e: [and: [:optional, :binary]], ...> index: [and: [:integer, in_range: [min: 0]]], ...> }]] iex> Is.validate(data, schema) [ {:error, [0, :a], "must be a binary"}, {:error, [0, :b], "must be a boolean"}, {:error, [0, :c], "must be a list"}, {:error, [0, :d, 0, 2], "must be an integer"}, {:error, [0, :d, 1, 1], "must be an integer"}, {:error, [0, :e], "must be a binary"}, {:error, [0, :index], "must at least be 0"}, {:error, [1, :a], "must be a binary"}, {:error, [1, :b], "must be a boolean"}, {:error, [1, :c], "must be a list"}, {:error, [1, :d, 0, 2], "must be an integer"}, {:error, [1, :d, 1, 1], "must be an integer"}, {:error, [1, :e], "must be a binary"}, {:error, [1, :index], "must at least be 0"}, ] ``` Validations ----------- ### Basic types #### :atom Test if data is an atom or not ```elixir iex> Is.validate(:value, :atom) [] iex> Is.validate(:value, atom: true) [] iex> Is.validate(1, :atom) [{:error, [], "must be a atom"}] iex> Is.validate(:value, atom: false) [{:error, [], "must not be a atom"}] ``` #### :binary Test if data is a binary or not ```elixir iex> Is.validate("value", :binary) [] iex> Is.validate("value", binary: true) [] iex> Is.validate(1, :binary) [{:error, [], "must be a binary"}] iex> Is.validate("value", binary: false) [{:error, [], "must not be a binary"}] ``` #### :boolean Test if data is a boolean or not ```elixir iex> Is.validate(true, :boolean) [] iex> Is.validate(true, boolean: true) [] iex> Is.validate(1, :boolean) [{:error, [], "must be a boolean"}] iex> Is.validate(true, boolean: false) [{:error, [], "must not be a boolean"}] ``` #### :equals Test if data equals given value ```elixir iex> Is.validate(true, equals: true) [] iex> Is.validate("str", equals: "str") [] iex> Is.validate(1, equals: true) [{:error, [], "must equals true"}] ``` #### :fn Test if data satisfies the given function ```elixir iex> Is.validate(1, fn: &is_number/1) [] iex> Is.validate(true, fn: &is_number/1) [{:error, [], "must satisfies &:erlang.is_number/1"}] iex> starts_with? = fn(value, prefix) -> ...> if String.starts_with?(value, prefix) do ...> :ok ...> else ...> {:error, "must start with #{inspect prefix}"} ...> end ...> end iex> Is.validate("https://elixir-lang.org", fn: [starts_with?, "http"]) [] iex> Is.validate("elixir-lang.org", fn: [starts_with?, "http"]) [{:error, [], "must start with \"http\""}] iex> Is.validate(12, fn: {1, 2}) [{:error, [], "fn: options are invalid"}] ```
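Putting the basic validators above together, a small map schema looks like this. This is a sketch that uses only validators documented on this page; the error tuples follow the `{:error, path, message}` shape shown in the first example. ```elixir
schema = [map: %{
  name: :binary,
  active: :boolean,
  score: [and: [:integer, in_range: [min: 0]]],
}]

Is.validate(%{name: "ada", active: true, score: 10}, schema)
# => []

Is.validate(%{name: "ada", active: 1, score: -2}, schema)
# => error tuples for :active and :score
```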
21.68125
142
0.573076
eng_Latn
0.399444
113eba89af11c495ccf700ba815954000d728b40
89
md
Markdown
README.md
financial-times-sandbox/Aggressive-Grotesque-Donut
7150a069c4a166d38637d3b319f703ad0b594811
[ "MIT" ]
null
null
null
README.md
financial-times-sandbox/Aggressive-Grotesque-Donut
7150a069c4a166d38637d3b319f703ad0b594811
[ "MIT" ]
3
2019-07-10T14:57:59.000Z
2019-07-10T20:25:19.000Z
README.md
financial-times-sandbox/Aggressive-Grotesque-Donut
7150a069c4a166d38637d3b319f703ad0b594811
[ "MIT" ]
null
null
null
# 🎩 Aggressive-Grotesque-Donut ## This repository is for testing & development purposes.
29.666667
57
0.775281
eng_Latn
0.994393
113f5a6474b90f0a5eb7d3c562dd69f7df8c8dcf
17
md
Markdown
README.md
Amjad580/nansrepo
09ab8206bb50ec4ee3579de54a6d40e8be29eebd
[ "MIT" ]
null
null
null
README.md
Amjad580/nansrepo
09ab8206bb50ec4ee3579de54a6d40e8be29eebd
[ "MIT" ]
null
null
null
README.md
Amjad580/nansrepo
09ab8206bb50ec4ee3579de54a6d40e8be29eebd
[ "MIT" ]
null
null
null
# nansrepo learn
5.666667
10
0.764706
eng_Latn
0.480458
113f73d846e4dad0155e80fd10e48b8b51b953cd
10,089
md
Markdown
packages/medusa-plugin-restock-notification/CHANGELOG.md
omurilo/medusa
c16df9383c08288f3643fde7aadad1becb2e4c9d
[ "MIT" ]
1
2022-02-20T18:04:33.000Z
2022-02-20T18:04:33.000Z
packages/medusa-plugin-restock-notification/CHANGELOG.md
omurilo/medusa
c16df9383c08288f3643fde7aadad1becb2e4c9d
[ "MIT" ]
null
null
null
packages/medusa-plugin-restock-notification/CHANGELOG.md
omurilo/medusa
c16df9383c08288f3643fde7aadad1becb2e4c9d
[ "MIT" ]
1
2022-01-16T21:22:38.000Z
2022-01-16T21:22:38.000Z
# Change Log All notable changes to this project will be documented in this file. See [Conventional Commits](https://conventionalcommits.org) for commit guidelines. ## [0.0.30](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2022-01-11) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.29](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-12-29) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.28](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-12-17) ### Features * add medusa-react ([#913](https://github.com/medusajs/medusa/issues/913)) ([d0d8dd7](https://github.com/medusajs/medusa/commit/d0d8dd7bf62eaac71df8714c2dfb4f204d192f51)) ## [0.0.27](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-12-08) ### Bug Fixes - ensures that delayed restock notifications are being sent ([#881](https://github.com/medusajs/medusa/issues/881)) ([329767e](https://github.com/medusajs/medusa/commit/329767e27980253d456030dd5aad648662d39e3d)) ## [0.0.26](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-11-23) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.25](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-11-22) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.24](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-11-19) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.23](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-11-19) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.22](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-10-18) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.21](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-10-18) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.20](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-10-18) ### Bug Fixes - add delay before triggering email ([#458](https://github.com/medusajs/medusa/issues/458)) ([ee2f7c6](https://github.com/medusajs/medusa/commit/ee2f7c6333a0e8a4fa1454c514662bb83ce16346)) ## [0.0.19](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-10-18) ### Bug Fixes - add delay before triggering email ([#458](https://github.com/medusajs/medusa/issues/458)) ([ee2f7c6](https://github.com/medusajs/medusa/commit/ee2f7c6333a0e8a4fa1454c514662bb83ce16346)) ## [0.0.18](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-09-15) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.17](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-09-14) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.18](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) 
(2021-09-02) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.17](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-08-31) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.16](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-08-05) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.15](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-07-26) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.14](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-07-15) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.13](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-07-15) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.12](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-07-02) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.11](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-06-24) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.10](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-06-22) ### Bug Fixes - release assist ([668e8a7](https://github.com/medusajs/medusa/commit/668e8a740200847fc2a41c91d2979097f1392532)) ## [0.0.9](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-06-09) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.8](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-06-09) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.7](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-06-09) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.6](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-06-09) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.4](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-06-08) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.7](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-04-29) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.6](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-04-28) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.5](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-04-20) **Note:** Version bump only for package medusa-plugin-restock-notification ## [0.0.4](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-04-20) ### Features - **medusa:** Swaps on swaps ([#229](https://github.com/medusajs/medusa/issues/229)) 
([f8f1f57](https://github.com/medusajs/medusa/commit/f8f1f57fa1bcdc6f7ae4183e657a07e2641b1345)) ## [0.0.3](https://github.com/medusajs/medusa/compare/medusa-plugin-restock-notification@[email protected]) (2021-04-13) **Note:** Version bump only for package medusa-plugin-restock-notification ## 0.0.2 (2021-04-13) ### Bug Fixes - .gitignore ([684c20a](https://github.com/medusajs/medusa/commit/684c20ab293237f91fc160a269fd072c5de7012b)) - adds tests ([69442a1](https://github.com/medusajs/medusa/commit/69442a1735193aeb010043f114d89037d4d76279)) - cors ([15bbb71](https://github.com/medusajs/medusa/commit/15bbb71a46efe00af3bd46bc16dfd48439204587)) - cors ([e4a30e8](https://github.com/medusajs/medusa/commit/e4a30e8afc84eb201b0ddcb027bbe5674d0aff8f)) - cors ([967134e](https://github.com/medusajs/medusa/commit/967134e797a867fd84064d3764815961a976015b)) - creates restock functionality ([2b25550](https://github.com/medusajs/medusa/commit/2b2555004e52e97c15bfca59e030fdfc3d86ae49)) - deps ([a9ea38c](https://github.com/medusajs/medusa/commit/a9ea38c2005beb63e14ed151e25ecd26819c748c)) - gitignore ([fa1fe9d](https://github.com/medusajs/medusa/commit/fa1fe9d619e19a277db6ee3a25ebb177222fa04b)) - merge ([7897610](https://github.com/medusajs/medusa/commit/78976106209b35730d69817b52595b6008590159)) - peerdependencies ([40725c6](https://github.com/medusajs/medusa/commit/40725c6d0a643369a710cfc7cbf5b5f65a4f1f93)) - restock ([237ed51](https://github.com/medusajs/medusa/commit/237ed5130760645c6b892fa1d5fc09a713b95f58)) - **segment:** swap tracking ([295f0f6](https://github.com/medusajs/medusa/commit/295f0f652a0880292a3788b9c65476a3c5c1b8d4)) - working api ([9d81097](https://github.com/medusajs/medusa/commit/9d810971a7e0de2a586b5c9c372f0aad2818918b)) ### Features - restock service ([8bd5fa8](https://github.com/medusajs/medusa/commit/8bd5fa821286a90f3ab21e8c96993ac543fb7cab))
54.831522
211
0.770344
yue_Hant
0.127053
113fd78f1f7544402674fe32f653b30ccb4f65f7
2,122
md
Markdown
SharePoint/SharePointServer/administration/demonstrate-forms-based-claims-authentication.md
Marweis/OfficeDocs-SharePoint
ef39b4467fb562092a54d985ab87dcc381e50f3a
[ "CC-BY-4.0", "MIT" ]
1
2021-08-04T04:59:34.000Z
2021-08-04T04:59:34.000Z
SharePoint/SharePointServer/administration/demonstrate-forms-based-claims-authentication.md
Marweis/OfficeDocs-SharePoint
ef39b4467fb562092a54d985ab87dcc381e50f3a
[ "CC-BY-4.0", "MIT" ]
null
null
null
SharePoint/SharePointServer/administration/demonstrate-forms-based-claims-authentication.md
Marweis/OfficeDocs-SharePoint
ef39b4467fb562092a54d985ab87dcc381e50f3a
[ "CC-BY-4.0", "MIT" ]
1
2021-11-12T07:31:37.000Z
2021-11-12T07:31:37.000Z
--- title: "Test Lab Guide Demonstrate forms-based claims authentication for SharePoint Server 2013" ms.author: kirks author: Techwriter40 manager: pamgreen ms.date: 7/10/2017 ms.audience: ITPro ms.topic: article ms.prod: sharepoint-server-itpro localization_priority: Normal ms.assetid: 78df8a76-0dbc-403b-b54d-9572fc918531 description: "Summary: Learn how to configure and demonstrate form-based authentication for SharePoint Server 2013 based on the Test Lab Guide: Configure SharePoint Server 2013 in a three-tier farm." --- # Test Lab Guide: Demonstrate forms-based claims authentication for SharePoint Server 2013 **Summary:** Learn how to configure and demonstrate form-based authentication for SharePoint Server 2013 based on the [Test Lab Guide: Configure SharePoint Server 2013 in a three-tier farm](configure-sharepoint-server-2013-in-a-three-tier-farm.md). This document is the [Test Lab Guide](https://go.microsoft.com/fwlink/p/?LinkId=202817) version of the configuration described in [Configure forms-based authentication for a claims-based web application in SharePoint Server](http://technet.microsoft.com/library/fd1391bb-c787-4742-b007-bf57e18dad66%28Office.14%29.aspx). This document contains instructions for the following: 1. Setting up the SharePoint Server 2013 three-tier farm test lab. 2. Configuring forms-based authentication. 3. Demonstrating forms-based authentication. **Watch the demonstrate forms-based claims authentication for SharePoint Server 2013 test lab guide overview video** > [!VIDEO https://www.microsoft.com/videoplayer/embed/e86a4119-b9c7-4d84-a3b3-bf9e97bdc30b?autoplay=false] ## Download the test lab guide [Test Lab Guide: Demonstrate Forms-based Authentication with SharePoint Server 2013](https://go.microsoft.com/fwlink/p/?LinkId=265275) ## See also #### Other Resources [Configure forms-based authentication for a claims-based web application in SharePoint Server](http://technet.microsoft.com/library/fd1391bb-c787-4742-b007-bf57e18dad66%28Office.14%29.aspx) [Test Lab Guides](https://go.microsoft.com/fwlink/p/?LinkId=202817)
48.227273
321
0.793591
eng_Latn
0.795613
113fd805621f69070a5d32bc1979c7416852a723
2,582
md
Markdown
articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
LeMuecke/azure-docs.de-de
a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
LeMuecke/azure-docs.de-de
a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
LeMuecke/azure-docs.de-de
a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure CLI script sample - Calculate the size of a blob container | Microsoft Docs
description: Calculate the size of a container in Azure Blob storage by totaling the size of the blobs in the container.
services: storage
author: tamram
ms.service: storage
ms.subservice: blobs
ms.devlang: cli
ms.topic: sample
ms.date: 06/28/2017
ms.author: tamram
ms.custom: devx-track-azurecli
ms.openlocfilehash: 45712632ebfb2da4b713038503965ce908c1dfc6
ms.sourcegitcommit: 11e2521679415f05d3d2c4c49858940677c57900
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 07/31/2020
ms.locfileid: "87498887"
---
# <a name="calculate-the-size-of-a-blob-storage-container"></a>Calculate the size of a Blob storage container

This script calculates the size of a container in Azure Blob storage by totaling the size of the blobs in the container.

[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)]

[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]

> [!IMPORTANT]
> This CLI script provides an estimated size for the container and should not be used for billing calculations.

## <a name="sample-script"></a>Sample script

[!code-azurecli[main](../../../cli_scripts/storage/calculate-container-size/calculate-container-size.sh?highlight=2-3 "Calculate container size")]

## <a name="clean-up-deployment"></a>Clean up deployment

Run the following command to remove the resource group, the container, and all related resources.

```azurecli-interactive
az group delete --name myResourceGroup
```

## <a name="script-explanation"></a>Script explanation

This script uses the following commands to calculate the size of the Blob storage container. Each item in the table links to command-specific documentation.

| Command | Notes |
|---|---|
| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
| [az storage blob upload](/cli/azure/storage/account) | Uploads files to an Azure Blob storage container. |
| [az storage blob list](/cli/azure/storage/account/keys) | Lists the files in an Azure Blob storage container. |

## <a name="next-steps"></a>Next steps

For more information on the Azure CLI, see the [Azure CLI documentation](/cli/azure).

Additional storage CLI script samples can be found in the [Azure CLI samples for Azure Blob storage](../blobs/storage-samples-blobs-cli.md).
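Since the referenced script file is not reproduced in this article, here is a minimal sketch of the approach it describes — listing the blobs in a container and summing their `contentLength` properties. The account and container names are placeholders, authentication (key or login) is not shown, and the result is in bytes and, as noted above, only an estimate.

```bash
#!/bin/bash
# Placeholder names; supply authentication, e.g. --account-key or az login.
total=$(az storage blob list \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --query "[*].properties.contentLength" \
    --output tsv | paste -sd+ - | bc)
echo "Container size (estimated): $total bytes"
```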
44.517241
190
0.778854
deu_Latn
0.877233
113ff9eee33fe4e00b7de7b8ace2caa5e4972d02
3,846
md
Markdown
windows-driver-docs-pr/display/font-driver-functions.md
AnLazyOtter/windows-driver-docs.zh-cn
bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/display/font-driver-functions.md
AnLazyOtter/windows-driver-docs.zh-cn
bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/display/font-driver-functions.md
AnLazyOtter/windows-driver-docs.zh-cn
bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Font driver functions
description: Font driver functions
ms.assetid: 95bf5a3b-29f8-43d2-9f24-22cfe257ead4
keywords:
- fonts WDK graphics, driver functions
- GDI WDK Windows 2000 display, fonts, driver functions
- graphics drivers WDK Windows 2000 display, fonts, driver functions
- DrvLoadFontFile
- DrvUnloadFontFile
ms.date: 04/20/2017
ms.localizationpriority: medium
ms.openlocfilehash: 4e2e1c081341c91245fa4a138078f04bedff9484
ms.sourcegitcommit: fb7d95c7a5d47860918cd3602efdd33b69dcf2da
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/25/2019
ms.locfileid: "67384592"
---
# <a name="font-driver-functions"></a>Font driver functions


## <span id="ddk_font_driver_functions_gg"></span><span id="DDK_FONT_DRIVER_FUNCTIONS_GG"></span>


In addition to the functions described in the preceding topics, the following table lists several other functions that a font driver should support.

<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Function</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p><a href="https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvloadfontfile" data-raw-source="[&lt;strong&gt;DrvLoadFontFile&lt;/strong&gt;](https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvloadfontfile)"><strong>DrvLoadFontFile</strong></a></p></td>
<td align="left"><p>Specifies a file to be used to create a font realization; the driver must prepare the file for use. Required for font drivers.</p></td>
</tr>
<tr class="even">
<td align="left"><p><a href="https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvqueryadvancewidths" data-raw-source="[&lt;strong&gt;DrvQueryAdvanceWidths&lt;/strong&gt;](https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvqueryadvancewidths)"><strong>DrvQueryAdvanceWidths</strong></a></p></td>
<td align="left"><p>Asks the driver to send GDI the advance widths of a specified set of glyphs for a font.</p></td>
</tr>
<tr class="odd">
<td align="left"><p><a href="https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvqueryfontcaps" data-raw-source="[&lt;strong&gt;DrvQueryFontCaps&lt;/strong&gt;](https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvqueryfontcaps)"><strong>DrvQueryFontCaps</strong></a></p></td>
<td align="left"><p>Copies an array of bits defining the capabilities of the font driver into the specified buffer.</p></td>
</tr>
<tr class="even">
<td align="left"><p><a href="https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvqueryfontfile" data-raw-source="[&lt;strong&gt;DrvQueryFontFile&lt;/strong&gt;](https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvqueryfontfile)"><strong>DrvQueryFontFile</strong></a></p></td>
<td align="left"><p>Depending on the query mode, returns either the number of actual faces in a font file or a descriptive string. Required for font drivers.</p></td>
</tr>
<tr class="odd">
<td align="left"><p><a href="https://docs.microsoft.com/windows/desktop/api/winddi/nc-winddi-pfn_drvqueryglyphattrs" data-raw-source="[&lt;strong&gt;DrvQueryGlyphAttrs&lt;/strong&gt;](https://docs.microsoft.com/windows/desktop/api/winddi/nc-winddi-pfn_drvqueryglyphattrs)"><strong>DrvQueryGlyphAttrs</strong></a></p></td>
<td align="left"><p>Returns information about a font's glyphs.</p></td>
</tr>
<tr class="even">
<td align="left"><p><a href="https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvunloadfontfile" data-raw-source="[&lt;strong&gt;DrvUnloadFontFile&lt;/strong&gt;](https://docs.microsoft.com/windows/desktop/api/winddi/nf-winddi-drvunloadfontfile)"><strong>DrvUnloadFontFile</strong></a></p></td>
<td align="left"><p>Notifies the driver that a font file is no longer needed, so that the driver can perform any necessary cleanup. Required for font drivers.</p></td>
</tr>
</tbody>
</table>

GDI calls the *DrvLoadFontFile* function with a specific file to be used to create a font realization. This function is required for font drivers. When *DrvLoadFontFile* is called, the driver performs whatever conversions are necessary to prepare the file for use. *DrvLoadFontFile* returns a unique identifier that allows GDI to request the correct font; GDI maintains a table of the fonts in use. Once a font has been loaded, GDI will not call to reload the same font.

GDI calls *DrvUnloadFontFile* when a specified font file is no longer needed. The *DrvUnloadFontFile* function is required only in font drivers. *DrvUnloadFontFile* causes any temporary files to be deleted and all allocated system resources to be freed.

GDI calls the *DrvQueryFontFile* function to return information about a font file that the driver has loaded. *DrvQueryFontFile* is required only in font drivers. The type of information to be returned is specified by *iMode*. If *iMode* is QFF\_DESCRIPTION, the function returns a string that Microsoft NT-based operating systems use to describe the font file. If *iMode* is QFF\_NUMFACES, the function returns the number of actual faces in the font file. Faces are identified by indexes ranging from 1 through the number of faces.
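For orientation, here is a minimal sketch of how a font driver might answer the two *DrvQueryFontFile* query modes described above. The signature and the QFF\_\* constants follow winddi.h as documented; the `MY_FONT_FILE` structure and its fields are hypothetical stand-ins for whatever per-file state the driver keeps, and error handling is reduced to the essentials — consult winddi.h and the function's reference page for the exact contract.

```c
#include <string.h>
#include <windows.h>
#include <winddi.h>   /* header set may vary by WDK version */

/* Hypothetical per-file state the driver created in DrvLoadFontFile
   and handed back to GDI as the iFile identifier. */
typedef struct _MY_FONT_FILE {
    ULONG cFaces;        /* number of faces in the file   */
    WCHAR wszDesc[64];   /* human-readable description    */
} MY_FONT_FILE;

LONG APIENTRY DrvQueryFontFile(
    ULONG_PTR iFile,     /* identifier returned by DrvLoadFontFile      */
    ULONG     ulMode,    /* QFF_DESCRIPTION or QFF_NUMFACES             */
    ULONG     cjBuf,     /* size of pulBuf in bytes                     */
    ULONG    *pulBuf)    /* output buffer; may be NULL for size queries */
{
    MY_FONT_FILE *pff = (MY_FONT_FILE *)iFile;

    switch (ulMode) {
    case QFF_DESCRIPTION: {
        /* Report the byte count needed; copy only if the buffer fits. */
        ULONG cjNeeded = (ULONG)((wcslen(pff->wszDesc) + 1) * sizeof(WCHAR));
        if (pulBuf != NULL && cjBuf >= cjNeeded)
            memcpy(pulBuf, pff->wszDesc, cjNeeded);
        return (LONG)cjNeeded;
    }
    case QFF_NUMFACES:
        /* Faces are identified by 1-based indexes, so just report the count. */
        return (LONG)pff->cFaces;

    default:
        return FD_ERROR;
    }
}
```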
45.247059
325
0.76235
yue_Hant
0.648942
11404b6b95c93ee9389de073c3acf15e5e2ff939
10,577
md
Markdown
_posts/2020-05-19-14-25-лавър-на-евгений-водолазкин-житие-на-каещ-се-грешник-по-съвременному-и-история-за-едновременността.md
literaturnirazgovori/literaturnirazgovori.github.io
83f2fcc1aaf0bd18209140a968b2ecc0c2f54033
[ "MIT" ]
null
null
null
_posts/2020-05-19-14-25-лавър-на-евгений-водолазкин-житие-на-каещ-се-грешник-по-съвременному-и-история-за-едновременността.md
literaturnirazgovori/literaturnirazgovori.github.io
83f2fcc1aaf0bd18209140a968b2ecc0c2f54033
[ "MIT" ]
10
2019-01-22T10:46:37.000Z
2020-12-27T20:16:30.000Z
_posts/2020-05-19-14-25-лавър-на-евгений-водолазкин-житие-на-каещ-се-грешник-по-съвременному-и-история-за-едновременността.md
literaturnirazgovori/literaturnirazgovori.github.io
83f2fcc1aaf0bd18209140a968b2ecc0c2f54033
[ "MIT" ]
null
null
null
---
layout: post
hidden: false
title: >-
  "Laurus" - a contemporary vita of a repentant sinner and a novel about the
  simultaneity of history
subtitle: >-
  At the root of sainthood lies sin, the greatest saints perceive themselves as
  the greatest sinners, and in the last days people will be saved only through
  humility. This deeply Orthodox understanding - and the Orthodox sense of the
  world in general - is perfectly embodied in the protagonist Arseny
author: Antonia
image: /Uploads/lavyr.png
category: bookreviews
category2: ''
tags:
  - Eugene Vodolazkin
  - Laurus
  - Panorama
  - review
  - novel
schedule: ''
redirect_from:
  - >-
    /bookreviews/2020/05/18/14-21-лавър-на-евгений-водолазкин-житие-на-каещ-се-грешник-по-съвременному-и-история-за-едновременността
---
At the root of sainthood there is always sin; the greatest saints perceive themselves as the greatest sinners, and in the last days people will be saved only through humility. This **deeply Orthodox understanding - and the Orthodox sense of the world** in general - is embodied by the protagonist Arseny of Eugene Vodolazkin's novel "Laurus" (Panorama, 2020). This important and very Russian book unites in itself the genre features of a medieval vita, annals and a pilgrim's travelogue, in an unexpected combination with the contemporary. The result is a surprisingly readable book for a far broader audience than a medieval novel would suggest. In it the grave, sad life story is told both with painful beauty and with much irony and humorous naivety, above all through the mixing of linguistic registers. These unevennesses not only do not irritate - they bring a strange readerly satisfaction.

\==

_(A small thematic digression: during my student years I read an enormous amount of Orthodox literature, including on the Russian elders, venerables and saints. I think this certainly influenced my reading, since I discovered many familiar marks and characteristics. You do not have to know these writings, but they certainly help with the optics.)_

\==

Eugene Vodolazkin, who holds a doctorate in philology and is a specialist in Old Russian literature, could have written a historical novel - if only because he has a good command of the language and the facts of that era. He explicitly clarifies, however, that "Laurus" is not one. More than that, the author insists that **the novel stands outside time - a theme that is both the thematic and the organizational center of the book**.

\==

The characters in "Laurus" unfold a thesis about the relativity (almost à la Einstein) of time and the simultaneity of events - **time is only a human construct which organizes what happens and makes it bearable for ordinary minds**. God is all-seeing because He is outside historical time; for Him all events are simultaneous - and in this sense even the expected end of the world in the novel's medieval Russia may already have happened. History and humanity have no goal, nor are they free - goal and freedom belong only to the individual person, who is for that reason capable of breaking the circle of time. Time, however, is not exactly a circle but a spiral (in the words of one of the righteous men in the novel), and its recurrence never finds us the same. The repetitions, and the time between them, are given to us for redemption and correction. Because everything can be put right.

\==

This understanding of time makes possible, for example, the meeting of the Renaissance man with the medieval one - namely of the second protagonist, the Italian Ambrogio Flecchia, with the Russian Amvrosy - and their joint pilgrimage to Jerusalem. It also explains why **the characters see events of the future and the past with hyper-detailed clarity**.

Ambrogio is obsessed with time. He foresees the discovery of the new continent America, while on the other side, in Russia, they await the end of the world (the wink, incidentally, is also political). But he is able to see, down to the smallest visual detail, the lives of the most ordinary people - we meet several such visions from Soviet Russia in these pages. At first glance they provoke bewilderment, because they are too personal, contribute nothing to the plot and have no connection to it. But such is the personal presence in the movement of common history, whose aimlessness Ambrogio compares to the chaotic movement of fleas in a jar.

\==

**The simultaneity is supported also by the linguistic amalgam that Vodolazkin offers**, mixing Church Slavonic/Old Russian with contemporary language, modern slang and a Soviet administrative-bureaucratic register, of which one critic aptly remarked: "Нет ничего более узнаваемого для русского уха, чем эта бесчувственная к живому языку бумажная трескотня" ("There is nothing more recognizable to the Russian ear than this paper rattle, insensible to living language"). Thus expressions such as "Relax, my man," "the victim's injuries are somewhat incompatible with life" and "Speak not to me in worldly wise, for I have no more portion among the living" live side by side in the text.

\==

**God loves sinners and comes for their sake**

\==

On the idea that time turns back to form a spiral is built the plot of the novel as well, which proceeds strictly **chronologically yet makes a full turn, returning to an imitation of the beginning**. At the beginning the young Arseny loses not only his parents to the plague and his beloved grandfather Christofer, who raises him and teaches him everything necessary to become a healer, but also his first and only love and their stillborn child - unlawful, unwed and uncommuned before God. Their death without repentance, and burial without a funeral service, "deadens" the youth Arseny too, giving impetus to his vita as a "repentant sinner," who successively takes the names Ustin, Amvrosy and Laurus, marking different phases of his life. Arseny turns into his grandfather and exists in place of Ustina, to whom he never stops speaking throughout his whole life, a life devoted to her salvation; and at the end of his days he returns to the surroundings of his native village, from which he set out. His house by the cemetery there has already, symbolically, become a Home, because a church has been built on the spot.

\==

**"Laurus" is also a love novel, in the most all-embracing, Christian sense of that word**. The love of Amvrosy grows from the perfection of egoism, which wants it entirely for itself, to the perfection of the Gospel definition of love in 1 Corinthians 13. Arseny is punished for his playing at God ("The red-haired girl seemed to Arseny like clay in his hands, out of which he was sculpting his Woman") and through the taking-away he is set on the path of salvation. He accepts a life _instead of_ and _in the name of_ Ustina and the child, in order to plead for their deliverance and a place beside Christ. Precisely then, in the words of one of the elders, the story of his love begins. At the end of the hero's life Vodolazkin repeats this story, bringing him together with a young woman who has likewise conceived out of wedlock. What happens when he takes this sin upon himself I shall leave you to discover for yourselves.

\==

**In "Laurus" Vodolazkin also beautifully illustrates the tradition of holy foolishness**, creating the wonderful figures not only of Arseny but of two other venerable men as well. Into this tradition fit the self-imposed humiliations, the odd and unruly behavior concealing an invisible spiritual meaning (they fight, curse one another, quarrel, are funny), the seeming sins taken on for the sake of humility, and the asceticism. Vodolazkin's holy fools are admirable to the point of being entertaining. They possess the gifts of clairvoyance and of exposing human sins, of miracles and healing; they hear and converse with one another at a distance, walk on water, levitate, meet angels, discern the demons.

\==

**A central theme in the novel is also the body - and its overcoming**. The body, as the image of God, is beautiful, and its every organ is thought out to the smallest detail, reflecting God's design - so Christofer teaches his grandson Arseny. To care for it, to heal its wounds and to take on its failings becomes the mission of the hero, who at first cures with herbs and salves, but gradually the instrument of this healing becomes his own person as a conduit of God's almighty mercy. And while Arseny lays hands on, holds and embraces the sick, the principal way to draw near to the Lord is the "death" of his own body - the building up of a fleshly insensibility, of indifference to hunger, cold, desires, possessions. When he cannot heal, he blames himself, his own sinfulness, which makes him insufficient to plead for salvation. Healing is a taking on of sins.

\==

The novel is highly graphic as regards corporeality and its sores, and the unimaginable ways in which Arseny subjects his flesh to trials and privations. The descriptions are so vivid that we manage to feel them. **Especially interesting (and my favorite) is the first part - Arseny's childhood**, in which we follow his grandfather in his life as a healer of the local people and learn of a heap of remedies and herbs - information which, incidentally, could fill a separate medical reference book.

\==

"Laurus" is an exceptionally beautiful, unexpected novel in times when the idea of the novel is so far from that of the chronological vita, from the theme of sainthood and from Orthodox mysticism, from reflections on God and the metaphysical meaning of existence. The irony of this novel could have been spared, and it would still be wonderful and would not sound "old-fashioned." Vodolazkin cannot resist and often joins in with winks - historical explanations about the Middle Ages, footnotes, comments on the morals of that time. The everyday world he describes, even when he uses contemporary language or places the notorious plastic bottle in the medieval Russian forest, is more than authentic. In that world people still speak of strange monsters, many-armed men, dog-headed folk, conceptions by the Devil; yet his characters are also unusually forward-looking, modern, reasoning about the eternal themes, chief among which, of course, are life and death.

\==

**The book also contains a very fine ironic critique of Russianness and of the Russian person.** I leave you with a few such quotations for your pleasure:

\==

"And the Russian person is pious. He knows the holy fool must endure suffering, and he is prepared to sin in order to secure that suffering for him. Someone has to be the villain, after all, no?"

\==

"Well now, the Russian person is not only pious. Just in case, I will inform you that he is also senseless and merciless, and that any task for him can easily turn into mortal sin."

\==

"When they became convinced that the end of the world was the only thing that interested him, they began to treat him more warmly. To many, establishing the time when the end of the world would come seemed an honorable occupation, for in Russia large-scale tasks are loved. Let him establish it, said the vicegerent Gavriil. My experience suggests that the signs of the end of the world will be most evident in our parts."

\==

"You Russians very much love to talk about death. And it distracts you from arranging your life."
110.177083
1,024
0.794932
bul_Cyrl
0.999996
1140b45af0a62dbd6df28ee144e66db09aec84b9
28,930
md
Markdown
DevCenter/Shared/Chunks/troubleshooting_a_website.md
mollybostic/azure-content
c7fe9c148147a31287011235294b970641e07180
[ "CC-BY-3.0" ]
1
2021-01-29T23:41:58.000Z
2021-01-29T23:41:58.000Z
DevCenter/Shared/Chunks/troubleshooting_a_website.md
mollybostic/azure-content
c7fe9c148147a31287011235294b970641e07180
[ "CC-BY-3.0" ]
1
2018-05-30T19:40:41.000Z
2018-05-30T19:40:41.000Z
DevCenter/Shared/Chunks/troubleshooting_a_website.md
mollybostic/azure-content
c7fe9c148147a31287011235294b970641e07180
[ "CC-BY-3.0" ]
1
2021-01-29T23:42:03.000Z
2021-01-29T23:42:03.000Z
<!-- http://daringfireball.net/projects/markdown/syntax -->
<!-- http://go.microsoft.com/fwlink/?LinkId=251824 -->

<div chunk="../chunks/article-left-menu.md" />

#Troubleshooting a Web Site#

Troubleshooting a web site is accomplished by configuring the web site to display application errors, configuring the web site to display environment variables, enabling web site diagnostics, and then analyzing web site application errors and diagnostic data to identify and resolve problems. This tutorial walks you through the process of creating and deploying a simple web site to Windows Azure, causing an error condition on the web site and then applying configuration and logging options to generate troubleshooting data that can be analyzed to identify and resolve the error.

<div class="dev-callout">
<b>Note</b>
<p>For purposes of this document, <b>Web Site</b> refers to the host for a web application running on Windows Azure and <b>web site</b> refers to a running host instance.</p>
</div>

<h2>What is Web Site Diagnostics?</h2>

Web Site diagnostics provides the following logging and tracing options:

- **Detailed Error Logging** - Logs all errors generated by a web site.
- **Failed Request Tracing** - Logs all failed client requests to a web site.
- **Web Server Logging** - Logs all HTTP transactions on a web site using the [W3C extended log file format](http://go.microsoft.com/fwlink/?LinkID=90561).

Diagnostics log files are saved to an FTP site for download to a local computer.

<h2>Concepts</h2>

Concepts introduced in this article include:

- **Web Site development** - Install and use Microsoft WebMatrix on a local computer to create a web site.
- **Creating and managing Web Sites on Windows Azure** - Using the Windows Azure Management portal to create and configure a web site.
- **Web Site deployment** - Deploying a web site from a local computer to Windows Azure.
- **Troubleshooting Web Sites using configuration options and diagnostics** - Configuring web sites to display application errors and environment variables, configuring diagnostics for a web site, collecting diagnostic data and then analyzing the displayed application errors and diagnostics data to troubleshoot and resolve problems.

<h2>Install developer tools and create a web site on your local computer</h2>

Before discussing how to troubleshoot a web site we must first create one. This section walks through using Microsoft WebMatrix to create a simple web site and deploy it to Windows Azure.

###<a name="installwebmatrix"></a>Install Microsoft WebMatrix

Visit [http://www.microsoft.com/web/webmatrix][webmatrix] and click the **Free Download** button. This will run the Web Platform Installer, which installs all the dependencies you need to run WebMatrix and then installs WebMatrix itself.

###<a name="createlocalsite"></a>Create a Web Site on your local computer with WebMatrix

To create a web site with WebMatrix follow these steps:

1. Click <b>Start</b>, <b>All Programs</b>, <b>Microsoft WebMatrix</b> and then click <b>Microsoft WebMatrix</b> to display the WebMatrix Quick Start screen.

2. Click <b>Templates</b> to display the available templates.

3. Select the <b>Starter Site</b> template, enter a value (e.g. <b>AzureWebDiag</b>) for the Site Name and then click <b>Next</b>.

	![Create new site from a template][newsitefromtemplate]

	If the site is created successfully it will be opened for editing in the WebMatrix IDE:

	![New web site opened in WebMatrix IDE][newsiteinwebmatrix]

4. Verify that an instance of the web site is running on your computer by clicking the URL for the site displayed in the WebMatrix IDE. Your browser should then display the default page for the web site:

	![Default web page of web site][defaultpagenewsite]

You have now successfully created a web site with WebMatrix.

<h2>Create a Web Site on Windows Azure</h2>

Before you can deploy your web site from WebMatrix to Windows Azure you must first create a web site on Windows Azure. This section walks through creating a web site on Windows Azure.

###<a name="quickcreateazurewebsite"></a>'Quick Create' a new Web Site on Windows Azure

1. Connect to the [Windows Azure Portal] and click **New**, **Web Site**, **Quick Create**.

2. Enter a name for the URL (e.g. AzureWebDiag), select an appropriate Region and then click **Create Web Site**.

	![Create a new web site][createnewwebsite]

3. After the web site has been created, click the name of the web site as it is listed in the **Name** column of the Windows Azure portal's web sites page. This will open the **QuickStart** management page for the web site:

	![QuickStart management page][quickstartmgmtpage]

###<a name="deploymentuser"></a>Create deployment user credentials###

Web Sites support multiple deployment technologies including MSDeploy/Webdeploy, TFS, FTP and GIT. This tutorial will describe how to use FTP to deploy a web site from your developer computer to Windows Azure. Both GIT and FTP deployment require authentication with specific **deployment user** credentials that you generate from the web site management pages. If you have not already created deployment user credentials follow these steps:

1. Click **Set up deployment credentials** under the **Publish your app** heading on the **QuickStart** management page. This will display the **Deployment Credentials** dialog box. Then enter values for Username and Password and click the check mark to generate deployment user credentials.

	![Create deployment credentials][createdeploycreds]

2. Open the **Dashboard** management page for the web site; in the **quick glance** section you can verify that the web site is configured to use the deployment user credentials that you generated. Deployment user credentials for web sites are always specified using the syntax **sitename\username**:

	![Verify Deployment User][verifydeployuser]

3. Click the link displayed under **Site URL** to verify that you can access an instance of the web site from your browser. You should see a web page similar to the following:

	![Web Site Under Construction page][webunderconstruction]

<h2>Deploy the web site from the developer computer to Windows Azure</h2>

Now that you have created a web site on Windows Azure and generated the necessary deployment user credentials you can deploy the web site from your developer computer to Windows Azure. To deploy a web site to Windows Azure using FTP you can use one of several FTP clients available for download on the Internet, or you can deploy directly from your development environment if the application supports FTP publishing. Since WebMatrix supports FTP publishing, follow these steps to publish the web site you created in WebMatrix to Windows Azure:

1. Open the web site that you created with WebMatrix.

2. From the default view of the web site displayed in the WebMatrix IDE, click the **Publish** button to display the **Publish Your Site** window and click the **Enter settings** link under **I already have a hosted web site**.

	![WebMatrix Publish Settings][webmatrixpubsettings]

3. <p id="pubsettings">Enter the following values in the <b>Publish Settings</b> dialog box:</p>

	![WebMatrix Publish Settings2][webmatrixpubsettings2]

	- **Protocol:** Select **FTP**
	- **Server:** Specify the URL listed under **FTP Hostname** on the web site's **Dashboard** management page.
	- **Site path:** site/wwwroot
	- **User name:** Specify the account listed under **Deployment User** on the web site's **Dashboard** management page.
	- **Password:** Specify the password for the deployment user that you created for the web site.
	- **Destination URL:** Specify the URL listed under **Site URL** on the web site's **Dashboard** management page.
	- **Save password:** Check this option to save the deployment user password.
	- **Validate Connection:** Click this to verify that WebMatrix can connect to the FTP host using the specified parameters.

4. Click **Save** and a **Publish Compatibility** window is shown. Click **Continue** to perform the compatibility tests.

	![Publish Compatibility Window][publishcompatibility]

5. Click the **Continue** button again to initiate deployment of the local web site to Windows Azure. WebMatrix will calculate what files have changed since the last time the web site was published (all of them, since this is the first time the web site has been published to Windows Azure) and display a **Publish Preview** dialog box:

	![WebMatrix Publish Preview][webmatrixpubpre]

6. Select the checkbox next to the file StarterSite.sdf and click **Continue** to initiate deployment to Windows Azure.

7. After publishing is complete, click the link displayed under **Site URL** on the **Dashboard** management page to open an instance of the web site from your browser. You should see a web page similar to the following:

	![Web Site Published to Windows Azure][defaultpagenewsite]

<h2>Enable diagnostics for the web site</h2>

Enable diagnostics for web sites on the **Configure** management page. Under the **Diagnostics** section of the **Configure** management page you can enable or disable the following logging and tracing options:

- **Web Server Logging** - Turn on Web Server logging to save web site logs using the W3C extended log file format.
- **Detailed Error Messages** - Turn on logging of detailed error messages to capture all errors generated by instances of the web site.
- **Failed Request Tracing** - Turn on failed request tracing to capture information for failed client requests.

Set all logging and tracing options for the web site to **On** and click the **Save** icon at the bottom of the page.

###<a name="verifyftpconnectivity"></a>Verify connectivity to the FTP site where log files are stored

Connect to the FTP site where diagnostic data is stored using parameters from the web site's **Dashboard** management page. Open the FTP site listed under **Diagnostics Logs** using the **Deployment User** account credentials [you created earlier](#deploymentuser). Consider using an FTP client such as [FileZilla][filezilla] to download log files. An FTP client typically provides more flexibility than a web browser for connecting to and downloading files from an FTP site.

<h2>Register an account on the Website</h2>

Follow these steps to register an account on the web site:

1. Open the web site from your browser and click **Register** in the top right corner of the default web page. You will be directed to a registration page similar to the following:

	![Website registration page][siteregpage]

2. Enter an email address and password and click **Register**.
After you register you will be redirected to the default web page, and you will be logged on with the e-mail account that you specified on the registration page:

![Logged on to website][loggedontosite]

<h2>Introduce an error condition on the web site</h2>

Before downloading and analyzing diagnostic data from a web site it will be useful to modify the web site to cause an error to occur. Follow the steps below to cause an error condition and configure the web site to display application errors.

###<a name="breakregistration"></a>Rename the Web Site user account database file

The web site is configured to store account registration information in the file **StarterSite.sdf**. To introduce an error condition on instances of the web site, rename the file **StarterSite.sdf** to **StarterSite.bak** on the deployed web site:

1. On the **Dashboard** page for the web site, click the **FTP Host Name** under the **Quick Glance** section. This will start an instance of Internet Explorer. Press the ALT key and select the **View** menu. Next select **Open FTP site in Windows Explorer**.

2. Navigate to the /site/wwwroot/App_Data/ directory.

3. Rename the file **StarterSite.sdf** to **StarterSite.bak**.

After renaming the file, Windows Azure web sites will be unable to access the user account database, causing an error to occur whenever clients connect to instances of the web site.

###<a name="addwebconfig"></a>Configure the Web Site to display application errors

The default **mode** of the ASP.NET [customErrors][customErrors] configuration setting is **RemoteOnly**, which prevents application errors from being displayed. To configure the web site to display application errors, create a web.config file and set the **mode** attribute of **customErrors** to **Off**:

1. Open the web.config file located in the root directory of your web site. Open the file with Notepad (or any editor you like) and add the following XML inside the &lt;system.web&gt; elements:

		<customErrors mode="Off"/>

	If you are unsure of the location of your web site, open WebMatrix, right-click AzureWebDiag and select **Show in File Explorer**.

<div class="dev-callout">
<b>Note</b>
<p>When an ASP.NET web site running on Windows Azure is not configured to display application errors, a web page similar to the following is displayed if an application error occurs:</p>
</div>

![Generic Application Error][genericapperror]
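For orientation, here is what a minimal web.config for this step might look like as a whole. A real WebMatrix site's web.config usually contains additional sections, so treat this as a sketch of just the relevant fragment:

    <?xml version="1.0"?>
    <configuration>
      <system.web>
        <!-- Show full error details to every client; for troubleshooting only. -->
        <customErrors mode="Off" />
      </system.web>
    </configuration>

Remember to set **mode** back to **RemoteOnly** (or remove the element) once troubleshooting is done, since detailed errors can leak implementation details to visitors.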
###<a name="viewwebsitevariables"></a>Display environment variables for a web site

For purposes of troubleshooting it may be useful to know the values of a web site's environment variables. To display environment variables for .NET or PHP web sites, first paste the following code into Notepad and then save to the web site root directory with the specified file names:

**environment.aspx**<br />
<pre>
&lt;script language="C#" runat="server"&gt;
public void Page_Load(Object sender, EventArgs E)
{
  System.Collections.DictionaryEntry dictEntry = default(System.Collections.DictionaryEntry);
  Response.Write("&lt;html&gt;&lt;head&gt;&lt;title&gt;&lt;/title&gt;&lt;/head&gt;&lt;body&gt;&lt;/body&gt;");
  Response.Write("&lt;table border=1&gt;");
  Response.Write("&lt;tr&gt;&lt;td colspan=2&gt;&lt;font color='red'&gt;Environment variables&lt;/font&gt;&lt;/td&gt;&lt;/tr&gt;");
  Response.Write("&lt;tr&gt;&lt;td&gt;Key&lt;/td&gt;&lt;td&gt;Value&lt;/td&gt;&lt;/tr&gt;");
  foreach (DictionaryEntry dictEntry_loopVariable in Environment.GetEnvironmentVariables()) {
    dictEntry = dictEntry_loopVariable;
    Response.Write("&lt;tr&gt;&lt;td&gt;" + (dictEntry.Key.ToString()) + "&lt;/td&gt;&lt;td&gt;" + (dictEntry.Value.ToString()) + "&lt;/td&gt;&lt;/tr&gt;");
  }
  Response.Write("&lt;/table&gt;&lt;br&gt;&lt;br&gt;");
  Response.Write("&lt;table border=1&gt;");
  Response.Write("&lt;tr&gt;&lt;td colspan=2&gt;&lt;font color='blue'&gt;Server Variables&lt;/font&gt;&lt;/td&gt;&lt;/tr&gt;");
  Response.Write("&lt;tr&gt;&lt;td&gt;Key&lt;/td&gt;&lt;td&gt;Value&lt;/td&gt;&lt;/tr&gt;");
  int loop1, loop2;
  NameValueCollection coll;
  // Load ServerVariable collection into NameValueCollection object.
  coll=Request.ServerVariables;
  // Get names of all keys into a string array.
  String[] arr1 = coll.AllKeys;
  for (loop1 = 0; loop1 &lt; arr1.Length; loop1++) {
    Response.Write("&lt;tr&gt;");
    Response.Write("&lt;td&gt;" + arr1[loop1] + "&lt;/td&gt;");
    String[] arr2=coll.GetValues(arr1[loop1]);
    for (loop2 = 0; loop2 &lt; arr2.Length; loop2++) {
      Response.Write("&lt;td&gt;" + Server.HtmlEncode(arr2[loop2]) + "&lt;/td&gt;");
    }
    Response.Write("&lt;/tr&gt;");
  }
  Response.Write("&lt;/table&gt;");
}
&lt;/script&gt;
</pre>

**environment.php**<br />
<pre>
&lt;?php
echo "&lt;pre&gt;";
print_r($_SERVER);
echo "&lt;/pre&gt;";
?&gt;
</pre>

When you add the file **environment.aspx** to a .NET web application or the file **environment.php** to a PHP web application, after you have deployed your web site to Windows Azure you can browse to these files to view the values assigned to the web site's environment variables.

###<a name="deployerrortoazure"></a>Deploy the updated Web Site to Windows Azure###

1. Click **Publish** in the WebMatrix IDE. WebMatrix will calculate any changes made to files since the last time you published and display the changes in a dialog box similar to the following:

	![WebMatrix Publish Preview][webmatrixpubpre2]

2. Click **Continue** to initiate transfer of these files to Windows Azure.

3. After publishing is complete, click the link displayed under **Site URL** on the **Dashboard** management page to open the web site from your browser. You should see a web page similar to the following:

	<a name="debugapperr"></a>
	![Detailed Application Error][detailedapperr]

<h2>Download diagnostic log files to your local computer</h2>

Now that you have introduced an error condition on the website, you can download the resulting diagnostic log files to your local computer for analysis. To ensure that web site diagnostics creates all of the log files specified under the **Diagnostics** section of the website's **Configure** management page, refresh your browser once or twice to ensure that the error occurs.
Follow these steps to download the diagnostic log files to your local computer:

1. Open the web site's **Dashboard** management page and make note of the FTP site listed under **Diagnostics Logs** and the account listed under **Deployment User**. The FTP site is where the log files are located, and the account listed under Deployment User is used to authenticate to the FTP site.

2. Consider using an FTP client such as [FileZilla][filezilla] to connect to the FTP site. An FTP client provides greater ease of use for specifying credentials and viewing folders on an FTP site than is typically possible with a browser. The screenshot below was taken from the FileZilla FTP client when connecting to the FTP site where the log files for the AzureWebDiag web site are stored. The FTP host name and deployment user credentials are highlighted in red. To copy the contents of the remote FTP folder on the right to the local folder on the left, click to select the folder on the left, then right-click the folder on the right and select **Download** from the shortcut menu that is displayed:

	![FileZilla FTP Client][filezillaclient]

3. Open the folder on the left with Windows Explorer to access the log files that you downloaded:

	![View Log Files][viewlogfiles]

<h2>Analyze website log files</h2>

Basic analysis of the different log file types can be performed as follows:

<table cellpadding="0" cellspacing="0" width="655" rules="all" style="border: #000000 thin solid;">
<tr style="background-color: silver; font-weight: bold;" valign="top">
<td style="width: 145px">Log File</td>
<td style="width: 510px">Analyze with</td>
</tr>
<tr valign="top">
<td>Detailed error logging</td>
<td>Open .htm files from the /LogFiles/DetailedErrors/ folder.</td>
</tr>
<tr valign="top">
<td>Failed request tracing</td>
<td>Open .xml files from the /LogFiles/W3SVC#########/ folder.</td>
</tr>
<tr valign="top">
<td>Web server logging</td>
<td>Use <a href="http://go.microsoft.com/fwlink/?LinkId=246619" title="Log Parser 2.2">Log Parser 2.2</a> to analyze .log files from the /LogFiles/http/RawLogs/ folder.</td>
</tr>
</table>

###<a name="detailederrors"></a>View results of detailed error logging

Web site log files include formatting functionality for viewing Detailed Error logging results. Use a web browser to open any .htm files saved to the /LogFiles/DetailedErrors/ folder:

![View Detailed Errors][viewdetailederr]

Detailed error logging results also include recommendations for resolving errors, including links to relevant Microsoft Knowledge Base articles.

###<a name="failedrequests"></a>View results of failed request tracing

Web site log files provide formatting functionality for viewing failed request tracing results. Use a web browser to open any .xml files saved to the /LogFiles/W3SVC#########/ folder:

![Failed Request Tracing][failedreqtrace]

Failed request tracing for web sites is based upon the failed request tracing functionality available with IIS 7.5.

###<a name="webserverlogging"></a>Analyze web server logs

Web site logs record all HTTP transactions using the W3C extended log file format and are saved to the /LogFiles/http/RawLogs/ folder. Web site logs can be analyzed using [Log Parser 2.2][logparser]:

![Log Parser Command Window][logparsercmdwind]

For more information about Log Parser 2.2 see [Download Log Parser 2.2][downloadlogparser]
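As a concrete starting point, a query like the following summarizes the downloaded raw logs by HTTP status code. The log path is a placeholder for wherever you saved the files; `-i:W3C` is used here because the raw logs are in W3C extended format (check the `#Fields` header of your logs if Log Parser cannot resolve a field name):

    LogParser.exe -i:W3C "SELECT sc-status, COUNT(*) AS Hits FROM C:\logs\*.log GROUP BY sc-status ORDER BY Hits DESC"

A spike of 500 responses in the output is a quick confirmation of the server-side error introduced earlier.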
<h2>Troubleshoot the AzureWebDiag web site</h2>

This section describes how someone might engage in troubleshooting a web site using the information that is available after you configure the web site to display errors and enable web site tracing and logging.

###<a name="tshootwithloggingandtracing"></a>Using logging and tracing information to troubleshoot Web Site problems

For purposes of troubleshooting the error caused by renaming the file startersite.sdf to startersite.bak, web server logging, detailed error message logging and failed request tracing do not provide a single definitive cause and resolution to the problem. The logging and tracing files did, however, rule out several possible causes by clearly indicating that an HTTP status code of **500 Internal Server Error** was generated on the web site when clients connected to it. This provides a high level of confidence that the problem is unrelated to unsuitable authorization headers (**HTTP 401 Unauthorized**), bad request syntax (**HTTP 400 Bad Request**) or numerous other HTTP 3xx, 4xx and 5xx status codes. According to [HTTP 1.1 Status Definitions][http11status], an HTTP status code of **500 Internal Server Error** indicates that "The server encountered an unexpected condition which prevented it from fulfilling the request".

###<a name="tshootwitherrormessages"></a>Using detailed web site errors to troubleshoot Web Site problems

Additional troubleshooting should focus on the error messages displayed as a result of modifying the web.config file, or possibly on analyzing the web site's environment variables. If we look at the [detailed error message created on the web site](#debugapperr) we can see that an unhandled exception was thrown by the following method call in Line 2 of the file _AppStart.cshtml:

<pre>
WebSecurity.InitializeDatabaseConnection("StarterSite", "UserProfile", "UserId", "Email", true);
</pre>

The error's **Description:** is "An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code." **Exception Details:** listed for the error are "System.InvalidOperationException: Connection string "StarterSite" was not found."

If we examine the stack trace displayed under the error we can see that the error originated from a call to the InitializeDatabaseConnection() method of the WebMatrix.WebData.WebSecurity class described at [WebSecurity.InitializeDatabaseConnection Method][initdb]. Since the InitializeDatabaseConnection method is only using 5 parameters, we determine that the overloaded InitializeDatabaseConnection() method described in the topic [WebSecurity.InitializeDatabaseConnection Method (String, String, String, String, Boolean)][initdbconnect] is being called.

Since the Exception details indicate that 'Connection string "StarterSite" was not found', we can have a look at the definition for the **connectionStringName** parameter:

**connectionStringName** Type: System.String<br />
The name of the connection string for the database that contains user information. If you are using SQL Server Compact, this can be the name of the database file (.sdf file) without the .sdf file name extension.

This parameter definition provides a clue as to the cause of the error.
According to [Connecting to a SQL Server or MySQL Database in WebMatrix][connecttosqlinwebmatrix], "WebMatrix includes SQL Server Compact, which is a lightweight version of Microsoft SQL Server that lets you create databases for your web sites. When you create a database, **it's added as an .sdf file in the App\_Data folder of your web site.**"

Since this web site *does* use SQL Server Compact and the value specified for the connectionStringName parameter is **StarterSite**, the InitializeDatabaseConnection() method is looking for the file StarterSite.sdf in the web site's \root\App\_Data\ directory. Checking the web site's \root\App\_Data\ directory we can verify that there is no file named StarterSite.sdf, since of course we renamed it to StarterSite.bak. After renaming this file back to startersite.sdf the InitializeDatabaseConnection() method is able to find the file that it was expecting and the web site works as expected.

###Next Steps

- [ASP.NET MVC web site with SQL Database]
- [Create and deploy a web site with WebMatrix]
- [Create a web site from the gallery]
- [Web site with MongoDB on a virtual machine]

[W3C Extended]:http://go.microsoft.com/fwlink/?LinkID=90561
[webmatrix]:http://go.microsoft.com/fwlink/?LinkId=251418
[filezilla]:http://go.microsoft.com/fwlink/?LinkId=247914
[customErrors]:http://go.microsoft.com/fwlink/?LinkId=251836
[logparser]:http://go.microsoft.com/fwlink/?LinkId=246619
[downloadlogparser]:http://go.microsoft.com/fwlink/?LinkId=251994
[http11status]:http://go.microsoft.com/fwlink/?LinkId=252804
[initdb]:http://go.microsoft.com/fwlink/?LinkId=252805
[initdbconnect]:http://go.microsoft.com/fwlink/?LinkId=252806
[connecttosqlinwebmatrix]:http://go.microsoft.com/fwlink/?LinkId=208661
[Windows Azure Portal]:https://manage.windowsazure.com
[ASP.NET MVC web site with SQL Database]:http://www.windowsazure.com/en-us/develop/net/tutorials/web-site-with-sql-database/
[Create and deploy a web site with WebMatrix]:http://www.windowsazure.com/en-us/develop/net/tutorials/website-with-webmatrix/
[Create a web site from the gallery]:http://www.windowsazure.com/en-us/develop/net/tutorials/website-from-gallery/
[Web site with MongoDB on a virtual machine]:http://www.windowsazure.com/en-us/develop/net/tutorials/website-with-mongodb-vm/

[newsitefromtemplate]: ..\Media\tshootSiteFromTemplate.png
[newsiteinwebmatrix]: ..\Media\tshootWebMatrixIDE.png
[defaultpagenewsite]: ..\Media\tshootDefaultWebPage.png
[createnewwebsite]: ..\Media\tshootCreateAzureWebSite.png
[quickstartmgmtpage]: ..\Media\tshootAzureWebDiagQuickStart.png
[createdeploycreds]: ..\Media\tshootdeploymentcredentials.png
[verifydeployuser]: ..\Media\tshootquickglanceborder.png
[webunderconstruction]: ..\Media\tshootUnderConstruction.png
[webmatrixpubsettings]: ..\Media\tshootPublishSettings.png
[webmatrixpubsettings2]: ..\Media\tshootPublishSettings2.png
[webmatrixpubpre]: ..\Media\tshootPublishPreview.png
[webmatrixpubpre2]: ..\Media\tshootPublishPreview2.png
[sitepublishtoazure]: ..\Media\tshootPublishedSite.png
[siteregpage]: ..\Media\tshootregisteracct.png
[loggedontosite]: ..\Media\tshootloggedon.png
[renamestartersite]: ..\Media\tshootrenamestartersitesdf.png
[savewebconfigtoroot]: ..\Media\tshootwebconfig.png
[genericapperror]: ..\Media\tshootwebsiteerror1.png
[webmatrixpubprev]: ..\Media\tshootPublishPreview2.png
[detailedapperr]: ..\Media\tshootwebsiteerror2.png
[filezillaclient]: ..\Media\tshootfilezilla.png
[viewlogfiles]: ..\Media\tshootlogfiles.png
[viewdetailederr]: ..\Media\tshootdetailederrors.png
[failedreqtrace]: ..\Media\tshootfailedrequesttracing.png
[logparsercmdwind]: ..\Media\tshootlogparser.png
[publishcompatibility]: ..\Media\tshootPublishCompatibility.png
72.871537
939
0.743553
eng_Latn
0.969759
114112b5ab1111e34dfee7f6b9bbddbad3c7fa36
5,703
md
Markdown
CHANGELOG.md
Joker666/material-ui
a1404a2a75bb73900ff1436dfc51397de08d1b2e
[ "MIT" ]
null
null
null
CHANGELOG.md
Joker666/material-ui
a1404a2a75bb73900ff1436dfc51397de08d1b2e
[ "MIT" ]
null
null
null
CHANGELOG.md
Joker666/material-ui
a1404a2a75bb73900ff1436dfc51397de08d1b2e
[ "MIT" ]
null
null
null
## 0.4.0
###### _Dec. 15, 2014_

##### Breaking Changes
- Removed PaperButton - Use FlatButton, RaisedButton, or FloatingActionButton
- Removed Roboto font import (#104) - Be sure to [include the Roboto](http://www.google.com/fonts#UsePlace:use/Collection:Roboto:400,300,500) font in your project.

##### General
- Added react-draggable2 dependency

##### Components
- Buttons
  - Added linkButton functionality (#130)
- Icon Buttons
  - Added tooltip functionality
- Input
  - Added method to set focus
- Left Nav
  - Added method to open left nav panel
- Radio Button
  - Added defaultChecked prop
- Slider (New)
  - Added slider component
- Toggle
  - Updated styles to match material design specs

## 0.3.3
###### _Dec. 7, 2014_

##### General
- Added a basic example project in /example

##### Components
- Dialog
  - Actions are now real buttons
  - Added transitions
  - Prefixed classNames with mui
  - Cleaned up styles
- Input
  - Fixed a bug that caused placeholder to not show on focus (#112)
  - Placeholders can now be displayed in-line by setting inlinePlaceholder to true.
  - The initial number of rows can now be set with the rows prop.
- Toggle
  - Fixed alignment issue (#118)
  - The initial state of the toggle can now be set with the toggled prop.

## 0.3.2
###### _Nov. 30, 2014_

##### General
- Upgraded dependencies: react 0.12.1, browserify 6.3.3, reactify: 0.17.1

##### Components
- Dialog
  - Added key prop to dialog actions. (#99)
  - Added onDismiss event callback. (#86)
  - Dialog is now positioned onMount and onUpdate (#85)
  - Fixed a bug that caused dialog to not be vertically centered on long pages
- Dropdown Menu
  - Added autoWidth prop (#89)
- Menu
  - Added autoWidth prop
- Nested Menu
  - Fixed bug that caused some nested menus to not show. (#88)
- Paper
  - Updated to use spread operator
- Radio Button
  - Fixed radio button label styles. (#94)
- Ripple
  - Account for page scrolling on ripple animation. (#93)

## 0.3.1
###### _Nov. 28, 2014_

##### General
- Removed browserify react addons alias. (#68)

##### Components
- FlatButton, RaisedButton, and FloatingActionButton (NEW)
  - These buttons will replace the current PaperButton, which will be deprecated in v0.4.0.
  - They generate actual button tags, are keyboard focusable and listen to onTouchTap. (#50, #61)
- Icon Button
  - Pressing enter when the button is in focus now fires onTouchTap
  - Added dark theme ripple colors
  - Focus and click animations now use Scale Transforms to improve performance.
- Input
  - Added support for ReactLink and use JSX spread attributes
  - Error messages are now props instead of internal states (#95)
- LeftNav
  - Pressing ESC now closes the left nav
- PaperButton
  - Will be deprecated in v0.4.0.
- Radio Button
  - Fixed toggle bug. (#70)

##### Mixins
- WindowListenable is now available from Mixins.WindowListenable

##### Utils
- Added KeyCodes constants

## 0.3.0
###### _Nov. 17, 2014_

##### General
- Updated Browserify & Reactify versions
- Enabled reactify es6 transformations
- Removed jQuery dependency (#25)
- Added react-tap-event-plugin dependency

##### Components
- Dialog
  - Width is now determined by content
  - Position is centered horizontally inside parent container
  - Pressing Esc now closes the dialog (#35)
- Dropdown Menu
  - Added underline (#39)
  - Fixed display problem on double click (#43)
- Icon
  - Transfer all props to underlying span
- Icon Button (New)
  - Buttons...that are icons. :)
- Input
  - Added required, min, max and step
- LeftNav
  - Fixed left nav style when docked (#36)
  - Transition now uses translate3d instead of left
  - Overlay now listens to onTouchTap
- Menu Items
  - Added user select none styles (#45)
- Paper
  - Added onMouseOver & onMouseOut props
- Toolbar
  - Items are now passed in as children instead of groupItem prop

##### Mixins
- Added WindowListenable. Allows listening to window events.

##### Utils
- Added Dom and Events utility functions
- Fixed a bug that caused CSS Events to bind twice

##### Less
- Added media query variables
- Added no-wrap mixin
- Removed unnecessary style resets
- Removed tab highlight color on all elements

## 0.2.2
###### _Nov. 11, 2014_

- Changed project structure to be less confusing. Material-UI components/styles live in the src directory. Docs site code lives in the docs directory. This still allows us to easily test components in the docs site as we are working on them
- Added .editorconfig to help keep code formatting consistent among contributors. See http://editorconfig.org/
- Fixed drop down display issue in safari
- Fixed nested menu arrow icon
- Added hover transitions to menus
- Improved ripple animation on buttons

## 0.2.1
###### _Nov. 8, 2014_

- Fixed icon font reference. We're now including it as part of the project instead of an npm dependency.

## 0.2.0
###### _Nov. 7, 2014_

- Icon
  - Added all font icons from the unofficial material design icon font: https://github.com/designjockey/material-design-fonticons
  - All icon names had to change because of this. Sorry. :(
- PaperButton
  - Added href prop
  - Css fixes
- Dialog
  - Added onShow event
  - Children contents of the dialog is only rendered if the dialog is opened
- LeftNav
  - Fixed a bug that caused docked LeftNav component to close on menu click
  - Removed isInitiallyOpen prop
- Input
  - onLineBreak event now passes back event (e) on callback

## 0.1.29
###### _Nov. 5, 2014_

- css fix on paper component
- hover transition fix on buttons
- removed selected state on drop down icon component
- css fix on left nav component
- added prop on left nav component to allow left nav to be docked and hidden
29.703125
163
0.722953
eng_Latn
0.984769
1142d740e8bf6590ec5ca56a5b6c7a4f86355da2
4,129
md
Markdown
README.md
arendst/nodemcu-pyflasher
55d316927978b3456c2435eaf1f20b4d37634e81
[ "MIT" ]
2
2019-10-23T12:03:18.000Z
2021-06-14T11:48:48.000Z
README.md
arendst/nodemcu-pyflasher
55d316927978b3456c2435eaf1f20b4d37634e81
[ "MIT" ]
null
null
null
README.md
arendst/nodemcu-pyflasher
55d316927978b3456c2435eaf1f20b4d37634e81
[ "MIT" ]
null
null
null
# NodeMCU PyFlasher

[![License](https://marcelstoer.github.io/nodemcu-pyflasher/images/mit-license-badge.svg)](https://github.com/marcelstoer/nodemcu-pyflasher/blob/master/LICENSE)
[![Github Releases](https://img.shields.io/github/downloads/marcelstoer/nodemcu-pyflasher/total.svg?style=flat)](https://github.com/marcelstoer/nodemcu-pyflasher/releases)
[![PayPal Donation](https://marcelstoer.github.io/nodemcu-pyflasher/images/donate-paypal-badge.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=HFN4ZMET5XS2Q)
[![Twitter URL](https://marcelstoer.github.io/nodemcu-pyflasher/images/twitter-badge.svg)](https://twitter.com/intent/tweet?text=Wow:&url=https%3A%2F%2Fgithub.com%2Fmarcelstoer%2Fnodemcu-pyflasher)
[![Facebook URL](https://marcelstoer.github.io/nodemcu-pyflasher/images/facebook-badge.svg)](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fmarcelstoer%2Fnodemcu-pyflasher)

Self-contained [NodeMCU](https://github.com/nodemcu/nodemcu-firmware) flasher with GUI based on [esptool.py](https://github.com/espressif/esptool) and [wxPython](https://www.wxpython.org/).

![Image of NodeMCU PyFlasher GUI](images/gui.png)

## Status

Check the [releases section](https://github.com/marcelstoer/nodemcu-pyflasher/releases) for progress and downloadable binaries for your platform. Scan the [list of open issues](https://github.com/marcelstoer/nodemcu-pyflasher/issues) for bugs and pending features.

- Due to [pyinstaller/pyinstaller#2355](https://github.com/pyinstaller/pyinstaller/issues/2355) I can't provide an app bundle for macOS yet. The PyInstaller `.spec` file and the build script are ready, though.

**Note** This is my first Python project. If you have constructive feedback on how to improve the code, please do reach out to me.

## Getting help

In the unlikely event that you're stuck with this simple tool, the best way to get help is to turn to the ["Tools and IDE" subforum on esp8266.com](http://www.esp8266.com/viewforum.php?f=22).

## Donationware

All open-source development by the author is donationware. Show your love and support for open-source development by donating to the good cause through PayPal.

[![PayPal Donations](./images/paypal-256.png)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=HFN4ZMET5XS2Q)

## Why this project exists

### Motivation

This addresses an issue the NodeMCU community touched on several times in the past, most recently at [#1500 (comment)](https://github.com/nodemcu/nodemcu-firmware/pull/1500#issuecomment-247884981). I stated that, based on my experience doing NodeMCU user support, it should be a lot simpler to flash NodeMCU for Windows users.

- A number of flashing tools are available but only two are actively maintained: esptool-ck and esptool.py. Only one is endorsed by Espressif: [esptool.py](https://github.com/espressif/esptool) (they hired the developer(s)).
- 70% of the users of my [nodemcu-build.com](https://nodemcu-build.com) service are on Windows.
- BUT Windows doesn't come with Python installed - which is required for esptool.py.
- BUT Windows users in general are more reluctant to use the CLI than Linux/Mac users - which is required for esptool.py.

To conclude: this is not a comfortable situation for NodeMCU's largest user group.

### The plan

For quite a while I planned to write a self-contained GUI tool which would use esptool.py in the background. It should primarily target Windows users, but since I'm on Mac it should be cross-platform. Even though I had never used Python before, I felt confident to pull this off.

### Implementation

- Uses the cross-platform wxPython GUI framework. I also tried PyForms/PyQt4 but settled for wxPython.
- Requires absolutely minimal user input.
- The esptool.py "console" output is redirected to a text control on the GUI.
- Uses [PyInstaller](https://github.com/pyinstaller/pyinstaller) to create a self-contained executable for Windows and Mac. The packaged app can run standalone i.e. without installing itself, a Python interpreter or any modules.
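The stdout-redirect trick mentioned above takes only a little code; below is a sketch of the common wxPython pattern. This is illustrative only — class and variable names are mine and not necessarily those used in NodeMCU PyFlasher.

```python
import sys
import wx

class RedirectText:
    """File-like object that forwards writes to a wx.TextCtrl."""
    def __init__(self, text_ctrl):
        self.out = text_ctrl

    def write(self, string):
        # wx.CallAfter keeps GUI updates on the main thread, which matters
        # when esptool runs in a worker thread so the UI stays responsive.
        wx.CallAfter(self.out.AppendText, string)

    def flush(self):
        pass  # nothing buffered; writes go straight to the control

# Inside the frame's __init__, after creating the console text control:
#   self.console = wx.TextCtrl(panel, style=wx.TE_MULTILINE | wx.TE_READONLY)
#   sys.stdout = RedirectText(self.console)
# From then on, esptool.py's print() output appears in the GUI.
```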
Even though I had never used Python before I felt confident to pull this off. ### Implementation - Uses the cross-platform wxPython GUI framework. I also tried PyForms/PyQt4 but settled for wxPython. - Requires absolutely minimal user input. - The esptool.py "console" output is redirected to text control on the GUI. - Uses [PyInstaller](https://github.com/pyinstaller/pyinstaller) to create self-contained executable for Windows and Mac. The packaged app can run standalone i.e. without installing itself, a Python interpreter or any modules. ## License [MIT](http://opensource.org/licenses/MIT) © Marcel Stör
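Not from the original README: as a rough illustration of the approach described under "Implementation", a GUI wrapper can drive esptool.py programmatically instead of shelling out to the CLI. This is a minimal sketch, assuming esptool is installed via pip and that its `main()` accepts a custom argument list (true for recent esptool releases); the port, baud rate, flash mode, and firmware path are placeholders, not project defaults.

```python
# Minimal sketch: invoke esptool.py from Python instead of the command line.
# Assumes `pip install esptool`; port/baud/firmware values are placeholders.
import esptool

def flash_firmware(port: str, firmware: str) -> None:
    # Equivalent to: esptool.py --port <port> --baud 460800 write_flash 0x00000 <firmware>
    args = [
        "--port", port,
        "--baud", "460800",
        "write_flash",
        "--flash_mode", "dio",
        "0x00000", firmware,
    ]
    # A GUI would redirect stdout to a text control while this runs.
    esptool.main(args)

if __name__ == "__main__":
    flash_firmware("/dev/ttyUSB0", "nodemcu-firmware.bin")
```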
73.732143
277
0.784451
eng_Latn
0.910309
114346ee1a76c4db99deeecf0bc61a1fce596249
3,097
md
Markdown
questions/Algorithms/1900. The Earliest and Latest Rounds Where Players Compete/README.md
TechStudyGroup/leetcode
e013436e52d16090a6138689804f714329388330
[ "MIT" ]
9
2019-08-15T05:09:40.000Z
2021-05-01T09:26:59.000Z
questions/Algorithms/1900. The Earliest and Latest Rounds Where Players Compete/README.md
TechStudyGroup/leetcode
e013436e52d16090a6138689804f714329388330
[ "MIT" ]
135
2019-09-26T03:40:11.000Z
2022-02-02T04:15:39.000Z
questions/Algorithms/1900. The Earliest and Latest Rounds Where Players Compete/README.md
TechStudyGroup/leetcode
e013436e52d16090a6138689804f714329388330
[ "MIT" ]
1
2022-01-05T01:43:17.000Z
2022-01-05T01:43:17.000Z
### [The Earliest and Latest Rounds Where Players Compete](https://leetcode.com/problems/the-earliest-and-latest-rounds-where-players-compete) <p>There is a tournament where <code>n</code> players are participating. The players are standing in a single row and are numbered from <code>1</code> to <code>n</code> based on their <strong>initial</strong> standing position (player <code>1</code> is the first player in the row, player <code>2</code> is the second player in the row, etc.).</p> <p>The tournament consists of multiple rounds (starting from round number <code>1</code>). In each round, the <code>i<sup>th</sup></code> player from the front of the row competes against the <code>i<sup>th</sup></code> player from the end of the row, and the winner advances to the next round. When the number of players is odd for the current round, the player in the middle automatically advances to the next round.</p> <ul> <li>For example, if the row consists of players <code>1, 2, 4, 6, 7</code> <ul> <li>Player <code>1</code> competes against player <code>7</code>.</li> <li>Player <code>2</code> competes against player <code>6</code>.</li> <li>Player <code>4</code> automatically advances to the next round.</li> </ul> </li> </ul> <p>After each round is over, the winners are lined back up in the row based on the <strong>original ordering</strong> assigned to them initially (ascending order).</p> <p>The players numbered <code>firstPlayer</code> and <code>secondPlayer</code> are the best in the tournament. They can win against any other player before they compete against each other. If any two other players compete against each other, either of them might win, and thus you may <strong>choose</strong> the outcome of this round.</p> <p>Given the integers <code>n</code>, <code>firstPlayer</code>, and <code>secondPlayer</code>, return <em>an integer array containing two values, the <strong>earliest</strong> possible round number and the&nbsp;<strong>latest</strong> possible round number in which these two players will compete against each other, respectively</em>.</p> <p>&nbsp;</p> <p><strong>Example 1:</strong></p> <pre> <strong>Input:</strong> n = 11, firstPlayer = 2, secondPlayer = 4 <strong>Output:</strong> [3,4] <strong>Explanation:</strong> One possible scenario which leads to the earliest round number: First round: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 Second round: 2, 3, 4, 5, 6, 11 Third round: 2, 3, 4 One possible scenario which leads to the latest round number: First round: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 Second round: 1, 2, 3, 4, 5, 6 Third round: 1, 2, 4 Fourth round: 2, 4 </pre> <p><strong>Example 2:</strong></p> <pre> <strong>Input:</strong> n = 5, firstPlayer = 1, secondPlayer = 5 <strong>Output:</strong> [1,1] <strong>Explanation:</strong> The players numbered 1 and 5 compete in the first round. There is no way to make them compete in any other round. </pre> <p>&nbsp;</p> <p><strong>Constraints:</strong></p> <ul> <li><code>2 &lt;= n &lt;= 28</code></li> <li><code>1 &lt;= firstPlayer &lt; secondPlayer &lt;= n</code></li> </ul>
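Not part of the original problem file, but since the constraints cap `n` at 28 (so at most five rounds), a memoized brute force over every admissible set of winners is a workable sketch: the two best players always beat anyone else, and every other pair can go either way. Function and variable names below are mine, not from the problem statement.

```python
# Hedged brute-force sketch: memoize on the tuple of surviving players
# (kept sorted, since winners re-line up in ascending order each round).
from functools import lru_cache

def earliest_and_latest(n: int, first_player: int, second_player: int) -> list:
    @lru_cache(maxsize=None)
    def play(players: tuple) -> tuple:
        m = len(players)
        pairs = [(players[i], players[m - 1 - i]) for i in range(m // 2)]
        # If the two best players meet in this round, it takes exactly 1 round.
        for a, b in pairs:
            if {a, b} == {first_player, second_player}:
                return (1, 1)
        forced = []  # winners that are not our choice
        free = []    # pairs where we may pick either winner
        if m % 2 == 1:
            forced.append(players[m // 2])  # middle player auto-advances
        for a, b in pairs:
            if a in (first_player, second_player) or b in (first_player, second_player):
                forced.append(a if a in (first_player, second_player) else b)
            else:
                free.append((a, b))
        earliest, latest = float("inf"), 0
        for mask in range(1 << len(free)):  # try every admissible outcome
            winners = forced + [free[i][(mask >> i) & 1] for i in range(len(free))]
            e, l = play(tuple(sorted(winners)))
            earliest = min(earliest, e + 1)
            latest = max(latest, l + 1)
        return (earliest, latest)

    return list(play(tuple(range(1, n + 1))))

print(earliest_and_latest(11, 2, 4))  # [3, 4], matching Example 1
print(earliest_and_latest(5, 1, 5))   # [1, 1], matching Example 2
```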
53.396552
422
0.713271
eng_Latn
0.996882
114362a2e79ebc2eb6005913fc6ee1f89839bb6b
53
md
Markdown
README.md
ratchasak/ratchasak.github.io
694e503e0a9a04099436679ed0ff05ab86933755
[ "CC-BY-3.0" ]
null
null
null
README.md
ratchasak/ratchasak.github.io
694e503e0a9a04099436679ed0ff05ab86933755
[ "CC-BY-3.0" ]
null
null
null
README.md
ratchasak/ratchasak.github.io
694e503e0a9a04099436679ed0ff05ab86933755
[ "CC-BY-3.0" ]
null
null
null
https://demos.onepagelove.com/html/dazzle/index.html
26.5
52
0.811321
yue_Hant
0.491942
114376eb36493a36b083dbb24ad3d1cb7a0f79cd
7,459
md
Markdown
README.md
markhoney/jekyll-bootstrap-theme
2092a480dd205a4a6aa236f3f255978ff75adf42
[ "MIT" ]
9
2021-09-14T17:17:57.000Z
2022-03-12T12:18:51.000Z
README.md
markhoney/jekyll-bootstrap-theme
2092a480dd205a4a6aa236f3f255978ff75adf42
[ "MIT" ]
13
2021-10-05T16:22:11.000Z
2022-03-15T16:17:30.000Z
README.md
markhoney/jekyll-bootstrap-theme
2092a480dd205a4a6aa236f3f255978ff75adf42
[ "MIT" ]
14
2021-07-08T13:41:58.000Z
2022-03-24T21:44:08.000Z
# jekyll-bootstrap-theme

[![Gem Version](https://badge.fury.io/rb/jekyll-bootstrap-theme.svg)](https://badge.fury.io/rb/jekyll-bootstrap-theme)

A basic but extensible Jekyll theme based on Bootstrap 5. [Theme preview](https://jonaharagon.github.io/jekyll-bootstrap-theme/)

- **[One-Click Install (GitHub Pages)](https://github.com/jonaharagon/jekyll-bootstrap-template/generate)**
- [RubyGems.org](https://rubygems.org/gems/jekyll-bootstrap-theme)

![Bootstrap theme preview](/screenshot.png)

## Install

If you are able to install custom Gems on your build server/web server, install via the **Gemfile** method described here. GitHub Pages users cannot install custom Gems, and must instead use the **Remote Theme** Jekyll plugin to use this theme.

### Gemfile

Add this line to your `Gemfile`:

```ruby
gem 'jekyll-bootstrap-theme'
```

And add this line to your site's `_config.yml`:

```yaml
theme: jekyll-bootstrap-theme
```

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install jekyll-bootstrap-theme

### Remote Theme (GitHub Pages)

#### One-Click Install

- **[Create Repository from Pre-Made Template](https://github.com/jonaharagon/jekyll-bootstrap-template/generate)**

#### Manual Install

GitHub Pages websites cannot use custom Gems. Instead, you can add this repository as a `remote_theme`:

After making a Jekyll repo, add `gem "jekyll-remote-theme"` to the `:jekyll_plugins` group in your `Gemfile`, then run `bundle` to install the plugin:

```ruby
group :jekyll_plugins do
  [...]
  gem "jekyll-remote-theme"
end
```

Add the following to your site's `_config.yml` to activate the plugin and select this theme:

```yaml
plugins:
  - jekyll-remote-theme

remote_theme: jonaharagon/jekyll-bootstrap-theme
```

Optionally, you can specify a [release](https://github.com/jonaharagon/jekyll-bootstrap-theme/releases), [branch](https://github.com/jonaharagon/jekyll-bootstrap-theme/branches), or [tag](https://github.com/jonaharagon/jekyll-bootstrap-theme/tags) to lock the theme version in place:

```yaml
remote_theme: jonaharagon/[email protected]
```

## Theme Contents

### Layouts

Files within the `_layouts` directory, which define the markup for your theme:

- `default.html` - The base layout that lays the foundation for subsequent layouts. Includes a navigation bar and footer on all pages.
- `home.html` - The layout for your homepage/landing page. Includes a blog post listing with pagination support.
- `page.html` - The layout for documents and other pages that contain front matter but are not posts.
- `post.html` - The layout for your blog posts.

#### Home Layout

##### Main Heading, Custom Content Injection

The home layout will inject all content from your `index.md` / `index.html` before the optional posts heading. This allows you to include non-post content on the landing page under a dedicated heading. We recommend that you title this section with a Heading2 (##), and enable a posts heading if you add additional content.

##### Posts Listing

It will be automatically included only when your site contains one or more valid posts or drafts (if the site is configured to show_drafts). This section is untitled by default. You can customize this heading by defining a `list_title` variable in the document's front matter, which will be rendered with an `<h2>` tag.

### Includes

Snippets of code within the `_includes` directory that can be inserted in multiple layouts (and in other include files as well) within the same theme-gem:

- `footer.html` — Defines the site's footer section.
- `head.html` — Code block that defines the `<head></head>` in the default layout.
- `custom-head.html` — Placeholder to allow users to add more metadata to `<head />`.
- `header.html` — Defines the site's main header section. By default, pages with a defined `title` attribute will have links displayed here.
- `social.html` — Renders social-media icons based on the `bootstrap.social_links` data in the config file.

### Sass

`.scss` files within the `_sass` directory that define the theme's styles.

- `bootstrap/*` - Default Bootstrap SCSS files.
- `_custom-variables.scss` - This file can be overridden to add any custom variables. It is loaded *after* Bootstrap's variables but *before* the rest of Bootstrap, allowing you to override any of [Bootstrap's variables](https://github.com/twbs/bootstrap/blob/main/scss/_variables.scss). (*Note: Cannot override styles*)
- `_custom-styles.scss` - This file can be overridden to add any custom styles. It is loaded *after* all other CSS. (*Note: Cannot override variables*)

### Assets

- `assets/css/bootstrap-theme.scss` - Imports sass files from `_sass` and gets processed into the final stylesheet.
- `assets/css/bootstrap-icons.css` - Loads [Bootstrap Icons](https://icons.getbootstrap.com)
- `assets/js/bootstrap.bundle.min.js` - Bootstrap's Javascript bundle, loaded on every page by default.
- `assets/fonts/bootstrap-icons.woff(2)` - Bootstrap Icons font files.

### Plugins

This theme comes with [`jekyll-seo-tag`](https://github.com/jekyll/jekyll-seo-tag) preinstalled to make sure your website gets the most useful meta tags. See [usage](https://github.com/jekyll/jekyll-seo-tag#usage) to learn how to set it up.

## Configuration

This theme can be configured with various settings in [`_config.yml`](/_config.yml).

### Site Author

`site.author` is expected to be a mapping of attributes instead of a simple scalar value:

```yaml
author:
  name: 'Github User'
  email: '[email protected]'
```

### Navbar Customization

If you want to link only specific pages in your header, uncomment this and add the paths to the pages in the order they should show up:

```yaml
bootstrap:
  header_pages:
    - about.md
    - second.html
    - folder/third.md
```

### Social Network Icons

You can add links to the accounts you have on other sites, with the respective icon, by adding one or more of the following options in your config. These must be complete URLs to function properly.

```yaml
bootstrap:
  social_links:
    twitter: 'https://example.com/@jekyllrb'
    github: 'https://example.com/@jekyllrb'
    facebook: 'https://example.com/@jekyllrb'
    instagram: 'https://example.com/@jekyllrb'
    linkedin: 'https://example.com/@jekyllrb'
    google: 'https://example.com/@jekyllrb'
    youtube: 'https://example.com/@jekyllrb'
    twitch: 'https://example.com/@jekyllrb'
    telegram: 'https://example.com/@jekyllrb'
    whatsapp: 'https://example.com/@jekyllrb'
    discord: 'https://example.com/@jekyllrb'
    slack: 'https://example.com/@jekyllrb'
```

### Post Excerpts

To display post excerpts on the Home layout, set `site.excerpts.show` to true. You can also choose to automatically cut off excerpts after 32 words (approx. 2 lines):

```yaml
excerpts:
  show: true
  auto_truncate: true
```

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/jonaharagon/jekyll-bootstrap-theme. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.

**Development Requirements:**

- `yarn`

Updating packages: `yarn run assets:clean && yarn upgrade && yarn run assets:install`

## License

The theme is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
38.647668
353
0.743129
eng_Latn
0.959596
1144c77ec0ab84a0235959c393983a2a1079bb82
2,246
md
Markdown
help/analyze/home.md
TylerRiggs/analytics.en
694a15a243c4a0877a6d107aa8e5d898142a91e0
[ "Apache-2.0" ]
null
null
null
help/analyze/home.md
TylerRiggs/analytics.en
694a15a243c4a0877a6d107aa8e5d898142a91e0
[ "Apache-2.0" ]
null
null
null
help/analyze/home.md
TylerRiggs/analytics.en
694a15a243c4a0877a6d107aa8e5d898142a91e0
[ "Apache-2.0" ]
1
2021-11-18T18:48:17.000Z
2021-11-18T18:48:17.000Z
---
title: Analytics Tools Guide
description: Product documentation and self-help for Analysis Workspace, Adobe Analytics dashboards, Activity Map, Report Builder, Reporting API, and Reports & Analytics.
---

# Analytics Tools Guide

![Banner](../../assets/doc_banner_analyze.png)

This guide provides product documentation and self-help for the following Adobe Analytics reporting and analysis tools:

* **[!UICONTROL Analysis Workspace]:** The foremost feature in Adobe Analytics. Workspace provides a canvas where you can drag components to meet reporting needs.
* **[!UICONTROL Adobe Analytics dashboards]:** This mobile app allows users mobile access to intuitive scorecards with key metrics and the ability to do more detailed breakdowns and trended reports.
* **[!UICONTROL Activity Map]:** A browser plug-in that displays an overlay on your site showing which elements were clicked the most.
* **[!UICONTROL Report Builder]:** An Excel add-in that allows you to retrieve Analytics data and place it directly into a workbook.
* **[!UICONTROL Reporting API]:** Make report queries directly to Adobe's servers, and get responses for use in your own customer reporting tools.
* **[!UICONTROL Reports & Analytics]:** A tool with dozens of pre-built reports. Adobe recommends using Analysis Workspace for most reporting needs.

## Key Analytics Tools articles

* [Adobe Analytics dashboards - Overview](/help/analyze/mobile-app/home.md)
* [Analysis Workspace Getting Started](analysis-workspace/home.md)
* [Progressive Web Apps for Adobe Analytics](/help/analyze/pwa/pwa.md)
* [Which Adobe Analytics tool should I use?](/help/admin/c-analytics-product-comparison/which-analytics-tool.md)
* [Report Builder Getting Started](report-builder/home.md)
* [Activity Map Getting Started](activity-map/activity-map.md)

## More Analytics user guides

[Analytics User Guides](/help/landing/home.md)

## Key Analytics resources

* [Contact Customer Care](https://helpx.adobe.com/contact/enterprise-support.ec.html)
* [Analytics Forum](https://forums.adobe.com/community/experience-cloud/analytics-cloud/analytics)
* [Adobe Analytics Resources](https://forums.adobe.com/message/10660755)
* [Experience League](https://landing.adobe.com/experience-league/)
59.105263
198
0.779608
eng_Latn
0.73471
11450f2e3e6aa20f336b635433c8366ad042cd66
220
md
Markdown
Packs/epo/ReleaseNotes/2_0_4.md
mazmat-panw/content
024a65c1dea2548e2637a9cbbe54966e9e34a722
[ "MIT" ]
2
2021-12-06T21:38:24.000Z
2022-01-13T08:23:36.000Z
Packs/epo/ReleaseNotes/2_0_4.md
mazmat-panw/content
024a65c1dea2548e2637a9cbbe54966e9e34a722
[ "MIT" ]
87
2022-02-23T12:10:53.000Z
2022-03-31T11:29:05.000Z
Packs/epo/ReleaseNotes/2_0_4.md
henry-sue-pa/content
043c6badfb4f9c80673cad9242fdea72efe301f7
[ "MIT" ]
2
2022-01-05T15:27:01.000Z
2022-02-01T19:27:43.000Z
#### Integrations

##### McAfee ePO v2

- Fixed an issue in the **epo-assign-policy-to-system** command. The resetInheritance argument was changed to send 'true' and 'false' instead of 'True' and 'False' to the ePO server.
55
181
0.713636
eng_Latn
0.990773
1145519c9c702e4f6b55bafd947ca931ef9dbdc9
5,281
md
Markdown
content/articles/2009-08-26-the-latest.md
joshukraine/ofreport.com
3b86dcc4f8020a1fa014cc51b5629cb8928f99e4
[ "MIT" ]
7
2017-05-20T15:06:00.000Z
2020-05-13T05:49:23.000Z
content/articles/2009-08-26-the-latest.md
joshukraine/ofreport.com
3b86dcc4f8020a1fa014cc51b5629cb8928f99e4
[ "MIT" ]
19
2019-01-11T04:49:19.000Z
2022-01-20T12:05:36.000Z
content/articles/2009-08-26-the-latest.md
joshukraine/ofreport.com
3b86dcc4f8020a1fa014cc51b5629cb8928f99e4
[ "MIT" ]
3
2018-01-20T04:57:25.000Z
2020-06-29T14:54:39.000Z
--- title: "The Latest..." date: "2009-08-26 06:23:56" author: "Joshua Steele" preview: > After many weeks of preparation, the Steele family is finally departing for America! As we travel, we would greatly appreciate your prayers in the following areas: tags: - family - ministry --- After many weeks of preparation, the Steele family is finally departing for America! As we travel, we would greatly appreciate your prayers in the following areas: * A smooth border crossing out of Ukraine (We don’t expect any problems, but you never know.) * Strength for the children * Safety * Health (No one is sick at the moment, but international travel is quite stressful.) * The safe arrival of all our luggage <a href="//d21yo20tm8bmc2.cloudfront.net/2009/08/20090823_0061.JPG"><img class="size-medium wp-image-866" title="20090823_0061" src="//d21yo20tm8bmc2.cloudfront.net/2009/08/20090823_0061-300x199.jpg" alt="Beka and Abby are growing all the time! Abby will be four in September. Beka is 19 months." width="300" height="199" /></a> Beka and Abby are growing all the time! Abby will be four in September. Beka is 19 months. Nathan Day will be traveling to America a few days after we do. As many of you know, he will be getting married to Katelin Rebsch on November 7 of this year. Please pray for safe travel for Nathan and Katelin and for God’s blessing on them as they begin their new life together in Ukraine. Also, be sure to <a href="http://www.mywedding.com/natelin" target="_blank">check out their new wedding web site!</a> <a href="//d21yo20tm8bmc2.cloudfront.net/2009/08/ETO_Team_Picture.jpg"><img class="size-medium wp-image-859" title="ETO_Team_Picture" src="//d21yo20tm8bmc2.cloudfront.net/2009/08/ETO_Team_Picture-300x207.jpg" alt="" width="300" height="207" /></a> The 2009 ETO staff team ## Ministry Status Although we will be in the US for several months, the ministry will proceed. The Beal family, Denise Hutchison, and Bryan Shufelt remain in Ukraine and will continue ETO’s many outreaches and projects in our absence. The following is a brief overview of our primary ministries and their goals over the next few months. ### Chronological Bible Course (CBC) Our team recently completed our final draft of Lesson 15 of the course. This lesson will be sent to our translator soon and later formatted before going to press. Lesson 14 is being translated right now. While in the US, we will continue writing new material for our course. At present, we have about 380 students enrolled, many of whom are advancing quickly through the lessons. The first phase of our course will consist of 17 lessons, covering the entire book of Genesis. We are very excited to be nearing the completion of this phase. The following is a topical overview of Phase 1: 1. Introduction of the Bible 1. Who is God? 1. The Spiritual World 1. Creation Part I (Gen. 1) 1. Creation Part II (Gen. 1) 1. Adam and Eve (Gen. 1, 2) 1. The Fall (Gen. 3) 1. The Curse &amp; The Promised Deliverer (Gen. 3) 1. Cain and Abel (Gen. 4) 1. Noah and the Great Flood (Gen. 5-11) 1. Abraham I (Gen. 12-14) 1. Abraham II (Gen. 15-17) 1. Sodom and Gomorrah (Gen. 18-20) 1. Isaac (Gen. 21-22) 1. Isaac and His Family (Gen. 23-26) 1. Jacob (27-36) 1. Joseph (Gen. 37-50) ### Audio Bible Studies (ABS) Our Audio Bible Study sessions continue to progress well. We hold weekly meetings here in L’viv which consist of a one-hour English Club followed by a one-hour Bible lesson. During English Club, Ukrainians are invited to practice their language skills on various topics. 
We also make heavy use of No Greater Joy’s *<a href="http://goodandevilbook.com/ukrainian/" target="_blank">Good and Evil</a>*, which the students practice reading aloud in English. All of our students have copies of the book in Ukrainian, and we encourage them to compare the two languages and ask questions about what they read. We have a handful of people who have been coming on a regular basis, and most opt to stay for Bible lesson as well. Over the past several weeks, we have been teaching through the book of Romans, recording every message in digital audio. Last week, we completed Romans 8. The Romans series will be on hold while our family is in the States, but English Club will proceed as usual. ### Carpathian Mountain Outreach (CMO) We had a great summer in the Carpathians this year. We showed Fireproof in several villages in the mountains were encouraged by the positive responses we received. Our team also passed out quite a bit of literature inviting Ukrainians to enroll in our correspondence course. As a result, we have gained many new contacts over the last couple of months. Click here to view the CMO 2009 photo journal. Plans are already being laid for next year’s project, CMO 2010. We hope to have updated information available on our ministry web site in the next couple of months. We are very grateful to all of you who faithfully supported this outreach in prayer. Please continue to intercede for those who were introduced to Christ this year in Western Ukraine. ### Summary We will continue to post ministry and family updates throughout our time in America. As always, we are grateful for your prayers and support as we strive to publish the Gospel of Jesus Christ in Ukraine.
71.364865
601
0.770877
eng_Latn
0.998613
1145eff0aae5797e60f39973e27992f131713db0
1,617
md
Markdown
docs/tree/demo/dynamic.md
itmajing/next
77bc4eceaf615a7179ea1d36756c3d6d727b1d5c
[ "MIT" ]
4,289
2018-07-18T09:21:03.000Z
2022-03-31T17:59:14.000Z
docs/tree/demo/dynamic.md
itmajing/next
77bc4eceaf615a7179ea1d36756c3d6d727b1d5c
[ "MIT" ]
3,552
2018-07-18T09:21:52.000Z
2022-03-31T12:18:58.000Z
docs/tree/demo/dynamic.md
itmajing/next
77bc4eceaf615a7179ea1d36756c3d6d727b1d5c
[ "MIT" ]
559
2018-09-14T02:48:44.000Z
2022-03-25T09:06:55.000Z
# Loading data asynchronously

- order: 6

Data is loaded dynamically after a node is expanded, which is common when data is fetched through a backend API. Enable this behavior by setting the `loadData` prop, and set the `isLeaf` prop to mark whether a node is a leaf, so that non-leaf nodes are allowed to load their data asynchronously.

:::lang=en-us
# Loading data asynchronously

- order: 6

Click node to load data dynamically.
:::

---

````jsx
import { Tree } from '@alifd/next';

class Demo extends React.Component {
    constructor(props) {
        super(props);

        this.state = {
            data: [
                { label: 'Expand to load', key: '0' },
                { label: 'Expand to load', key: '1' },
                { label: 'Leaf', key: '2', isLeaf: true },
            ],
        };

        this.onLoadData = this.onLoadData.bind(this);
    }

    onLoadData(node) {
        return new Promise(resolve => {
            if (node.props.children) {
                return resolve();
            }

            const { eventKey, pos } = node.props;
            const item = this.getItemByPos(pos);

            setTimeout(() => {
                item.children = [
                    { label: 'Child Tree', key: `${eventKey}-0` },
                    { label: 'Child Tree', key: `${eventKey}-1` },
                ];
                this.setState({
                    data: [...this.state.data],
                });
                resolve();
            }, 1000);
        });
    }

    getItemByPos(pos) {
        return pos
            .split('-')
            .slice(1)
            .reduce((ret, num) => ret.children[num], { children: this.state.data });
    }

    render() {
        return <Tree dataSource={this.state.data} loadData={this.onLoadData} />;
    }
}

ReactDOM.render(<Demo />, mountNode);
````
22.774648
91
0.468769
eng_Latn
0.237217
1146e451db041e42a954616f0d60beb267c6b253
901
md
Markdown
template/README.md
brpaz/sao-generator
90b9a58813ce07905c21cdc411839aeb886b78b9
[ "MIT" ]
null
null
null
template/README.md
brpaz/sao-generator
90b9a58813ce07905c21cdc411839aeb886b78b9
[ "MIT" ]
246
2019-03-19T20:03:12.000Z
2022-03-28T09:13:53.000Z
template/README.md
brpaz/sao-generator
90b9a58813ce07905c21cdc411839aeb886b78b9
[ "MIT" ]
null
null
null
# <%= name %>

> <%= description %>

[![Sao template](https://img.shields.io/badge/Sao-Template-green?style=for-the-badge)](https://saojs.org/) [![License](https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge)](https://opensource.org/licenses/MIT) [![Commitizen friendly](https://img.shields.io/badge/commitizen-friendly-brightgreen.svg?style=for-the-badge)](http://commitizen.github.io/cz-cli/) [![GitHub Actions](https://github.com/<%= repoSlug %>/workflows/CI/badge.svg?style=for-the-badge)](https://github.com/<%= repoSlug %>/actions)

## Usage

Install [SAO](https://github.com/saojs/sao) first.

```bash
yarn global add sao
# or
npm i -g sao
```

### From repo

```bash
sao <%= repoSlug %> <project_name>
```

## 📝 License

Copyright © <%= new Date().getFullYear() %> [Bruno Paz](https://github.com/brpaz).

This project is [MIT](https://opensource.org/licenses/MIT) licensed.
30.033333
147
0.689234
yue_Hant
0.702548
1146fe97076348b25e6ed740eedd28bf45a55947
22,108
md
Markdown
articles/sql-database/sql-database-automated-backups.md
krimog/azure-docs.fr-fr
f9e0062239eb8e7107ea45ad1a8e07f6c905031e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-automated-backups.md
krimog/azure-docs.fr-fr
f9e0062239eb8e7107ea45ad1a8e07f6c905031e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-automated-backups.md
krimog/azure-docs.fr-fr
f9e0062239eb8e7107ea45ad1a8e07f6c905031e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Automated geo-redundant backups
description: SQL Database automatically creates a local database backup every five minutes and uses read-access geo-redundant storage to provide geo-redundancy.
services: sql-database
ms.service: sql-database
ms.subservice: backup-restore
ms.custom: ''
ms.devlang: ''
ms.topic: conceptual
author: anosov1960
ms.author: sashan
ms.reviewer: mathoma, carlrab, danil
manager: craigg
ms.date: 09/26/2019
ms.openlocfilehash: 114a5bbfd71fc0847c2b1bc65a8ba0bfa0df1add
ms.sourcegitcommit: ac56ef07d86328c40fed5b5792a6a02698926c2d
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/08/2019
ms.locfileid: "73821942"
---
# <a name="automated-backups"></a>Automated backups

SQL Database automatically creates database backups that are kept between 7 and 35 days, and uses Azure [read-access geo-redundant storage (RA-GRS)](../storage/common/storage-redundancy-grs.md#read-access-geo-redundant-storage) to make sure they are preserved even if the data center is unavailable. These backups are created automatically. Database backups are an essential part of any business continuity and disaster recovery strategy, because they protect your data from accidental corruption and deletion. If your security rules require that your backups be available for an extended period (up to 10 years), you can configure a [long-term retention](sql-database-long-term-retention.md) policy on single databases and elastic pools.

[!INCLUDE [GDPR-related guidance](../../includes/gdpr-intro-sentence.md)]

## <a name="what-is-a-sql-database-backup"></a>What is a SQL Database backup?

SQL Database uses SQL Server technology to create [full](https://docs.microsoft.com/sql/relational-databases/backup-restore/full-database-backups-sql-server) backups (every week), [differential](https://docs.microsoft.com/sql/relational-databases/backup-restore/differential-backups-sql-server) backups (every 12 hours), and [transaction log](https://docs.microsoft.com/sql/relational-databases/backup-restore/transaction-log-backups-sql-server) backups (every 5 to 10 minutes). The backups are stored in [RA-GRS storage blobs](../storage/common/storage-redundancy-grs.md#read-access-geo-redundant-storage) that are replicated to a [paired data center](../best-practices-availability-paired-regions.md) to protect against a data center outage. When you restore a database, the service determines which full, differential, and transaction log backups need to be restored.

You can use these backups for the following purposes:

- **Restore an existing database to a point in time** within the retention period, by using the Azure portal, Azure PowerShell, the Azure CLI, or the REST API. For single databases and elastic pools, this operation creates a new database on the same server as the original database. For Managed Instance, this operation can create a copy of the database on the same or a different Managed Instance in the same subscription.
- **[Change the backup retention period](#how-to-change-the-pitr-backup-retention-period)** (between 7 and 35 days) to configure your backup policy.
- **Change the long-term retention policy (up to 10 years)** on single databases and elastic pools by using [the Azure portal](sql-database-long-term-backup-retention-configure.md#configure-long-term-retention-policies) or [Azure PowerShell](sql-database-long-term-backup-retention-configure.md#use-powershell-to-manage-long-term-backups).
- **Restore a deleted database to the point in time when it was deleted,** or to any point in time within the retention period. The deleted database can be restored only on the logical server or Managed Instance where the original database was created.
- **Restore a database to another geographic region.** Geo-restore lets you recover from a geographic disaster when you can't access your server or database. It creates a new database on any existing server anywhere in the world.
- **Restore a database from a specific long-term backup** of a single database or elastic pool, if the database has been configured with a long-term retention (LTR) policy. LTR lets you restore an old version of the database by using [the Azure portal](sql-database-long-term-backup-retention-configure.md#view-backups-and-restore-from-a-backup-using-azure-portal) or [Azure PowerShell](sql-database-long-term-backup-retention-configure.md#use-powershell-to-manage-long-term-backups) to satisfy a compliance request or to run an old version of the application. For more information, see [Long-term retention](sql-database-long-term-retention.md).
- To perform a restore, see [Recover a database by using backups](sql-database-recovery-using-backups.md).

> [!NOTE]
> In Azure Storage, the term *replication* refers to copying files from one location to another. SQL's *database replication* refers to keeping multiple secondary databases synchronized with a primary database.

You can try some of these operations by using the following examples:

| | Azure portal | Azure PowerShell |
|---|---|---|
| Change backup retention | [Single database](sql-database-automated-backups.md#change-pitr-backup-retention-period-using-azure-portal) <br/> [Managed Instance](sql-database-automated-backups.md#managed-instance-database) | [Single database](sql-database-automated-backups.md#change-pitr-backup-retention-period-using-powershell) <br/>[Managed Instance](https://docs.microsoft.com/powershell/module/az.sql/set-azsqlinstancedatabasebackupshorttermretentionpolicy) |
| Change long-term backup retention | [Single database](sql-database-long-term-backup-retention-configure.md#configure-long-term-retention-policies)<br/>Managed Instance - N/A | [Single database](sql-database-long-term-backup-retention-configure.md#use-powershell-to-manage-long-term-backups)<br/>Managed Instance - N/A |
| Restore a database to a point in time | [Single database](sql-database-recovery-using-backups.md#point-in-time-restore) | [Single database](https://docs.microsoft.com/powershell/module/az.sql/restore-azsqldatabase) <br/> [Managed Instance](https://docs.microsoft.com/powershell/module/az.sql/restore-azsqlinstancedatabase) |
| Restore a deleted database | [Single database](sql-database-recovery-using-backups.md) | [Single database](https://docs.microsoft.com/powershell/module/az.sql/get-azsqldeleteddatabasebackup) <br/> [Managed Instance](https://docs.microsoft.com/powershell/module/az.sql/get-azsqldeletedinstancedatabasebackup)|
| Restore a database from Azure Blob storage | Single database - N/A <br/>Managed Instance - N/A | Single database - N/A <br/>[Managed Instance](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance-get-started-restore) |

## <a name="how-long-are-backups-kept"></a>How long are backups kept?

All Azure SQL databases (single, pooled, and Managed Instance databases) have a default backup retention period of **seven** days. You can [change the backup retention period to up to 35 days](#how-to-change-the-pitr-backup-retention-period).

If you delete a database, SQL Database keeps the backups in the same way it would for an online database. For example, if you delete a Basic database that has a retention period of seven days, a backup that is four days old is kept for three more days.

If you need to keep backups longer than the maximum retention period, you can modify the backup properties to add one or more long-term retention periods to your database. For more information, see [Long-term retention](sql-database-long-term-retention.md).

> [!IMPORTANT]
> If you delete the Azure SQL server that hosts SQL databases, all elastic pools and databases that belong to the server are also deleted and can't be recovered. You can't restore a deleted server. But if you configured long-term retention, the backups for the databases with long-term retention aren't deleted, and they can be restored.

## <a name="how-often-do-backups-happen"></a>How often do backups happen?

### <a name="backups-for-point-in-time-restore"></a>Backups for point-in-time restore

SQL Database supports self-service point-in-time restore (PITR) by automatically creating full backups, differential backups, and transaction log backups. Full database backups are created weekly, differential backups generally every 12 hours, and transaction log backups generally every 5 to 10 minutes, with the frequency varying based on the compute size and the amount of database activity. The first full backup is scheduled immediately after a database is created. It usually finishes within 30 minutes, but it can take longer when the database is large. For example, the initial backup can take longer on a restored database or a database copy. After the first full backup, all further backups are scheduled automatically and managed silently in the background. The exact timing of all database backups is determined by the SQL Database service as it balances the overall system workload. You can't change or disable the backup jobs.

PITR backups are geo-redundant and protected by [Azure Storage cross-regional replication](../storage/common/storage-redundancy-grs.md#read-access-geo-redundant-storage).

For more information, see [Point-in-time restore](sql-database-recovery-using-backups.md#point-in-time-restore).

### <a name="backups-for-long-term-retention"></a>Backups for long-term retention

Single and pooled databases offer the option to configure long-term retention (LTR) of full backups for up to 10 years in Azure Blob storage. If the LTR policy is enabled, the weekly full backups are automatically copied to a different RA-GRS storage container. To meet different compliance requirements, you can select different retention periods for weekly, monthly, and/or yearly backups. Storage consumption depends on the selected backup frequency and the retention periods. You can use the [LTR pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=sql-database) to estimate the cost of LTR storage.

Like PITR backups, LTR backups are geo-redundant and protected by [Azure Storage cross-regional replication](../storage/common/storage-redundancy-grs.md#read-access-geo-redundant-storage).

For more information, see [Long-term backup retention](sql-database-long-term-retention.md).

## <a name="storage-costs"></a>Storage costs

For single databases and managed instances, a minimum backup storage amount equal to 100 percent of the database size is provided at no extra charge. For elastic pools, a minimum backup storage amount equal to 100 percent of the data storage allocated for the pool is provided at no extra charge. Any additional backup storage consumption is billed in GB per month. This additional consumption depends on the workload and the size of the individual databases. For more information about storage prices, see the [pricing page](https://azure.microsoft.com/pricing/details/sql-database/single/).

## <a name="are-backups-encrypted"></a>Are backups encrypted?

If your database is encrypted with TDE, the backups are automatically encrypted at rest, including LTR backups. When TDE is enabled for an Azure SQL database, backups are encrypted as well. TDE is configured by default on all new Azure SQL databases. For more information about TDE, see [Transparent data encryption with Azure SQL Database](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql).

## <a name="how-does-microsoft-ensure-backup-integrity"></a>How does Microsoft ensure backup integrity?

The Azure SQL Database engineering team regularly and automatically tests the restore of automated database backups of databases placed on logical servers and in elastic pools (this isn't available for Managed Instance). Upon point-in-time restore, databases also receive integrity checks through DBCC CHECKDB.

Managed Instance automatically performs an initial backup with `CHECKSUM` of databases restored by using the native `RESTORE` command or the Data Migration Service once the migration is complete.

Any issue found during an integrity check results in an alert to the engineering team. For more information about data integrity in Azure SQL Database, see [Data integrity in Azure SQL Database](https://azure.microsoft.com/blog/data-integrity-in-azure-sql-database/).

## <a name="how-do-automated-backups-impact-compliance"></a>How do automated backups impact compliance?

When you migrate your database from a DTU-based service tier with the default 35-day PITR retention to a vCore-based service tier, the PITR retention is preserved so that your application's data recovery policy isn't compromised. If the default retention doesn't meet your compliance requirements, you can change the PITR retention period by using PowerShell or the REST API. For more information, see [Change the retention period](#how-to-change-the-pitr-backup-retention-period).

[!INCLUDE [GDPR-related guidance](../../includes/gdpr-intro-sentence.md)]

## <a name="how-to-change-the-pitr-backup-retention-period"></a>How to change the PITR backup retention period

You can change the default PITR backup retention period by using the Azure portal, PowerShell, or the REST API. The supported values are: 7, 14, 21, 28, or 35 days. The following examples show how to change the PITR retention to 28 days.

> [!WARNING]
> If you reduce the current retention period, all backups older than the new retention period are no longer available. If you increase the current retention period, SQL Database keeps the existing backups until the longer retention period is reached.

> [!NOTE]
> These APIs affect only the PITR retention period. If you configured long-term retention for your database, it isn't affected. See [Long-term retention](sql-database-long-term-retention.md) to learn how to change long-term retention periods.

### <a name="change-pitr-backup-retention-period-using-azure-portal"></a>Change the PITR backup retention period by using the Azure portal

To change the PITR backup retention period in the Azure portal, go to the server object whose retention period you want to change, and then select the appropriate option depending on which server object you're modifying.

#### <a name="single-azure-sql-database"></a>Single Azure SQL database

PITR backup retention for single Azure SQL databases is changed at the server level. Changes made at the server level apply to the databases on that server. To change PITR for an Azure SQL Database server from the Azure portal, go to the server overview pane, select Manage Backups in the navigation menu, and then select Configure retention on the navigation bar.

![Changing PITR in the Azure portal](./media/sql-database-automated-backup/configure-backup-retention-sqldb.png)

#### <a name="managed-instance-database"></a>Managed Instance database

PITR backup retention for a SQL Database managed instance is changed at the level of the individual database. To change PITR backup retention for an instance database from the Azure portal, go to the overview pane of the database, and then select Configure backup retention on the navigation bar.

![Changing PITR in the Azure portal](./media/sql-database-automated-backup/configure-backup-retention-sqlmi.png)

### <a name="change-pitr-backup-retention-period-using-powershell"></a>Change the PITR backup retention period by using PowerShell

[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]

> [!IMPORTANT]
> The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is for the Az.Sql module. For those cmdlets, see [AzureRM.Sql](https://docs.microsoft.com/powershell/module/AzureRM.Sql/). The arguments for the commands in the Az module are substantially identical to those in the AzureRm modules.

```powershell
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName resourceGroup -ServerName testserver -DatabaseName testDatabase -RetentionDays 28
```

### <a name="change-pitr-retention-period-using-rest-api"></a>Change the PITR retention period by using the REST API

#### <a name="sample-request"></a>Sample request

```http
PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/resourceGroup/providers/Microsoft.Sql/servers/testserver/databases/testDatabase/backupShortTermRetentionPolicies/default?api-version=2017-10-01-preview
```

#### <a name="request-body"></a>Request body

```json
{
  "properties":{
    "retentionDays":28
  }
}
```

#### <a name="sample-response"></a>Sample response

Status code: 200

```json
{
  "id": "/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.Sql/resourceGroups/resourceGroup/servers/testserver/databases/testDatabase/backupShortTermRetentionPolicies/default",
  "name": "default",
  "type": "Microsoft.Sql/resourceGroups/servers/databases/backupShortTermRetentionPolicies",
  "properties": {
    "retentionDays": 28
  }
}
```

For more information, see [Backup retention REST API](https://docs.microsoft.com/rest/api/sql/backupshorttermretentionpolicies).

## <a name="next-steps"></a>Next steps

- Database backups are an essential part of any business continuity and disaster recovery strategy, because they protect your data from accidental corruption and deletion. To learn about the other Azure SQL Database business continuity solutions, see [Business continuity overview](sql-database-business-continuity.md).
- To perform a point-in-time restore by using the Azure portal, see [Restore a database to a point in time by using the Azure portal](sql-database-recovery-using-backups.md).
- To perform a point-in-time restore by using PowerShell, see [Restore a database to a point in time by using PowerShell](scripts/sql-database-restore-database-powershell.md).
- To configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using the Azure portal, see [Manage long-term backup retention by using the Azure portal](sql-database-long-term-backup-retention-configure.md).
- To configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using PowerShell, see [Manage long-term backup retention by using PowerShell](sql-database-long-term-backup-retention-configure.md).
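Supplementing the raw REST example above, here is a minimal Python sketch that issues the same PUT request with the `requests` library. The subscription ID and resource names are the article's own placeholders, and the bearer token (normally obtained from Azure AD) is hypothetical.

```python
# Minimal sketch: set the PITR retention to 28 days via the documented REST API.
# Subscription, resource names, and the bearer token are placeholders.
import requests

SUBSCRIPTION = "00000000-1111-2222-3333-444444444444"
URL = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    "/resourceGroups/resourceGroup/providers/Microsoft.Sql"
    "/servers/testserver/databases/testDatabase"
    "/backupShortTermRetentionPolicies/default"
    "?api-version=2017-10-01-preview"
)

response = requests.put(
    URL,
    headers={"Authorization": "Bearer <access-token>"},  # placeholder token
    json={"properties": {"retentionDays": 28}},
)
response.raise_for_status()
print(response.json()["properties"]["retentionDays"])  # expect 28
```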
119.502703
1,295
0.802651
fra_Latn
0.977616
11489e02a982c60397a403d56b087ebec3e3eee4
5,859
md
Markdown
memdocs/configmgr/comanage/includes/enable-co-management-1906-2107.md
NeighborGeek/memdocs
1036d8e32401c58ee36f767c015cb42026cd88fd
[ "CC-BY-4.0", "MIT" ]
128
2020-03-25T15:32:58.000Z
2022-03-31T05:14:45.000Z
memdocs/configmgr/comanage/includes/enable-co-management-1906-2107.md
NeighborGeek/memdocs
1036d8e32401c58ee36f767c015cb42026cd88fd
[ "CC-BY-4.0", "MIT" ]
2,503
2020-03-10T04:28:18.000Z
2022-03-31T22:02:04.000Z
memdocs/configmgr/comanage/includes/enable-co-management-1906-2107.md
NeighborGeek/memdocs
1036d8e32401c58ee36f767c015cb42026cd88fd
[ "CC-BY-4.0", "MIT" ]
543
2020-03-10T04:15:05.000Z
2022-03-31T22:47:29.000Z
---
author: mestew
ms.author: mstewart
ms.prod: configuration-manager
ms.technology: configmgr-comanage
ms.topic: include
ms.date: 10/05/2021
ms.localizationpriority: medium
---

<!--3555750 FKA 1357954 -- This file is shared by comanage/how-to-enable.md, tutorial-co-manage-clients.md, and tutorial-co-manage-new-devices.md. Don't apply H2/H3 in this include file since they are context driven by article-->

When you're enabling co-management, you can use the Azure public cloud, Azure Government cloud, or Azure China 21Vianet cloud (added in version 2006). To enable co-management, follow these instructions:

1. In the Configuration Manager console, go to the **Administration** workspace, expand **Cloud Services**, and select the **Cloud Attach** node. Select **Configure Cloud Attach** on the ribbon to open the Cloud Attach Configuration Wizard.

   For version 2103 and earlier, expand **Cloud Services** and select the **Co-management** node. Select **Configure co-management** on the ribbon to open the Co-management Configuration Wizard.

1. On the onboarding page of the wizard, for **Azure environment**, choose one of the following environments:

   - Azure public cloud
   - Azure Government cloud<!--4075452-->
   - Azure China cloud (added in version 2006)<!--7133238-->

   > [!NOTE]
   > Update the Configuration Manager client to the latest version on your devices before you onboard to the Azure China cloud. <!--7630213-->

   When you select the Azure China cloud or Azure Government cloud, the **Upload to Microsoft Endpoint Manager admin center** option for [tenant attach](../../tenant-attach/device-sync-actions.md) is disabled.

1. Select **Sign In**. Sign in as an Azure AD global administrator, and then select **Next**. You sign in this one time for the purposes of this wizard. The credentials aren't stored or reused elsewhere.

1. On the **Enablement** page, choose the following settings:

   - **Automatic enrollment in Intune**: Enables automatic client enrollment in Intune for existing Configuration Manager clients. This option allows you to enable co-management on a subset of clients to initially test co-management, and then roll out co-management by using a phased approach. If the user unenrolls a device, the device will be re-enrolled on the next evaluation of the policy. <!--3330596-->

     - **Pilot**: Only the Configuration Manager clients that are members of the **Intune Auto Enrollment** collection are automatically enrolled in Intune.
     - **All**: Enable automatic enrollment for all clients running Windows 10 version 1709 or later.
     - **None**: Disable automatic enrollment for all clients.

   - **Intune Auto Enrollment**: This collection should contain all of the clients that you want to onboard into co-management. It's essentially a superset of all the other staging collections.

   ![Screenshot of the wizard page for enabling automatic enrollment in Intune.](../media/3555750-co-management-onboarding-enablement.png)

   Automatic enrollment isn't immediate for all clients. This behavior helps enrollment scale better for large environments. Configuration Manager randomizes enrollment based on the number of clients. For example, if your environment has 100,000 clients, when you enable this setting, enrollment occurs over several days.<!--1358003-->

   A new co-managed device is now automatically enrolled in the Microsoft Intune service based on its Azure AD device token. It doesn't need to wait for a user to sign in to the device for automatic enrollment to start. This change helps to reduce the number of devices with the enrollment status **Pending user sign in**.<!-- 4454491 --> To support this behavior, the device needs to be running Windows 10 version 1803 or later. For more information, see [Co-management enrollment status](../how-to-monitor.md#co-management-enrollment-status).

   If you already have devices enrolled in co-management, new devices are now enrolled immediately after they meet the [prerequisites](../overview.md#prerequisites).<!--4321130-->

1. For internet-based devices that are already enrolled in Intune, copy and save the command on the **Enablement** page. You'll use this command to install the Configuration Manager client as an app in Intune for internet-based devices. If you don't save this command now, you can review the co-management configuration at any time to get this command.

   > [!TIP]
   > The command appears only if you've met all of the prerequisites, such as setting up a cloud management gateway.<!-- MEMDocs#635 -->

1. On the **Workloads** page, for each workload, choose which device group to move over for management with Intune. For more information, see [Workloads](../workloads.md). If you only want to enable co-management, you don't need to switch workloads now. You can switch workloads later. For more information, see [How to switch workloads](../how-to-switch-workloads.md).

   - **Pilot Intune**: Switches the associated workload only for the devices in the pilot collections that you'll specify on the **Staging** page. Each workload can have a different pilot collection.
   - **Intune**: Switches the associated workload for all co-managed Windows 10 or later devices.

   > [!Important]
   > Before you switch any workloads, make sure that you properly configure and deploy the corresponding workload in Intune. Make sure that workloads are always managed by one of the management tools for your devices.

1. On the **Staging** page, specify the pilot collection for each of the workloads that are set to **Pilot Intune**.

   ![Screenshot of the Staging page of the Co-management Configuration Wizard, with options for specifying pilot collections.](../media/3555750-co-management-onboarding-staging.png)

1. To enable co-management, complete the wizard.
83.7
544
0.762075
eng_Latn
0.996095
114a6498558772e97a1dfef843a82046a77331b3
398
md
Markdown
docs/pipelines/tasks/_shared/yaml/PythonScriptV0.md
malyons/vsts-docs
c7e66e44dc14f046ecb4e7cfd5cd145613fac58f
[ "CC-BY-4.0", "MIT" ]
1
2019-08-16T01:17:11.000Z
2019-08-16T01:17:11.000Z
docs/pipelines/tasks/_shared/yaml/PythonScriptV0.md
malyons/vsts-docs
c7e66e44dc14f046ecb4e7cfd5cd145613fac58f
[ "CC-BY-4.0", "MIT" ]
1
2019-09-23T07:08:28.000Z
2019-09-23T07:08:28.000Z
docs/pipelines/tasks/_shared/yaml/PythonScriptV0.md
malyons/vsts-docs
c7e66e44dc14f046ecb4e7cfd5cd145613fac58f
[ "CC-BY-4.0", "MIT" ]
2
2021-12-06T12:36:31.000Z
2022-02-16T08:48:36.000Z
```YAML
# Python script
# Run a Python file or inline script
- task: PythonScript@0
  inputs:
    #scriptSource: 'filePath' # Options: filePath, inline
    #scriptPath: # Required when scriptSource == FilePath
    #script: # Required when scriptSource == Inline
    #arguments: # Optional
    #pythonInterpreter: # Optional
    #workingDirectory: # Optional
    #failOnStderr: false # Optional
```
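For context (not part of the generated snippet above): a file referenced through `scriptPath` is an ordinary Python script, and the task's `arguments` input reaches it via `sys.argv`. A minimal hypothetical example, with made-up file name and arguments such as `arguments: '--name World'`:

```python
# greet.py - hypothetical script run by PythonScript@0 via scriptPath: 'greet.py'.
# The task forwards the 'arguments' input to the script through sys.argv.
import argparse
import sys

def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--name", default="pipeline")
    args = parser.parse_args(argv)
    print(f"Hello, {args.name}!")  # appears in the pipeline log
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```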
28.428571
57
0.69598
yue_Hant
0.441306
114b0e09c23d2b373d21da01e04baa34afb4f0a9
4,543
md
Markdown
docs/blog/dsday4.md
hyperMoss/hyperMoss.github.io
487dd3b1397a249673773b810e4335ddff7c8c69
[ "MIT" ]
3
2021-06-30T07:21:55.000Z
2022-03-30T12:13:38.000Z
docs/blog/dsday4.md
hyperMoss/hyperMoss.github.io
487dd3b1397a249673773b810e4335ddff7c8c69
[ "MIT" ]
null
null
null
docs/blog/dsday4.md
hyperMoss/hyperMoss.github.io
487dd3b1397a249673773b810e4335ddff7c8c69
[ "MIT" ]
null
null
null
---
title: Graphs
date: '2019-01-11 20:57:16'
---

# Graphs

A graph G consists of a vertex set V and an edge set E, written G=(V,E).

- V(G) is the finite, non-empty set of vertices in graph G. |V| denotes the number of vertices in G, also called the order of the graph.
- E(G) is the set of relationships (edges) between vertices in G. |E| denotes the number of edges in G.

<!-- more -->

## Basic concepts

- Directed graph
- Undirected graph
- Simple graph
- Multigraph
- Complete graph with n vertices
  - Undirected complete graph: n*(n-1)/2 edges
  - Directed complete graph: n*(n-1) edges
- Subgraph: V and E are subsets of a graph's vertex and edge sets; not every pair of subsets forms a subgraph
- Connected: there is a path between two vertices
- Connected graph: any two vertices in the graph are connected
- Connected component: a maximal connected subgraph of an undirected graph

> Conclusion 1: if a graph has n vertices and fewer than n-1 edges, it must be disconnected.

- Strongly connected: paths exist in both directions between two vertices
- Strongly connected graph: every pair of vertices in the graph is strongly connected
- Strongly connected component: a maximal strongly connected subgraph of a directed graph

> Conclusion 2: removing one edge from a spanning tree disconnects it; adding one edge creates a cycle.

- Degree: the number of edges that have a given vertex as an endpoint
  - Directed graphs: out-degree and in-degree
  - Undirected graphs: the number of edges attached to the vertex
- Directed tree: a directed graph with one vertex of in-degree 0 and all other vertices of in-degree 1
- Weights and networks: each edge of a graph can be assigned a numeric value, called the weight of that edge; a graph with weights is a weighted graph, also called a network
- Path and path length: a path between two vertices is a sequence of vertices; the number of edges on the path is the path length
- Cycle: a path whose first and last vertices are the same
- Simple cycle: a cycle in which no vertex repeats except the first and last
- Distance: the shortest path length between two vertices; if no path exists, the distance is infinity, or a value beyond what the type can represent (e.g., 65535 for int)

## Storage structures

### Adjacency matrix (sequential storage)

- Vertices: stored in a one-dimensional array
- Edges or arcs: stored in a two-dimensional array

*A two-dimensional array is an extension of a one-dimensional array: each element of the one-dimensional array is itself a one-dimensional array. This two-dimensional array is also called the adjacency matrix.*

- The adjacency matrix of an undirected graph is a symmetric matrix.

Capabilities:

1. Decide whether two vertices are joined by an edge
2. Compute the degree of a vertex
3. Find all neighbors of a vertex

Properties:

1. A vertex's in-degree is the sum of the entries in its column; its out-degree is the sum of the entries in its row.
2. Decide whether two vertices are joined by an edge
3. Find all neighbors of a vertex

For weighted graphs (networks), edge weights can be stored in the matrix:

1. An existing edge stores its weight
2. Entries whose row and column refer to the same vertex are 0
3. Nonexistent edges store infinity

Definition:

``` c
#define MaxVertexNum 100    // maximum number of vertices
typedef char VertexType;    // data type of a vertex; varies by use case
typedef int EdgeType;       // an integer representing a weight or connectivity

typedef struct {
    VertexType Vex[MaxVertexNum];                // vertex table
    EdgeType Edge[MaxVertexNum][MaxVertexNum];   // adjacency matrix (2-D array), edge table
    int vexnum, arcnum;                          // current numbers of vertices and arcs
} MGraph;
```

### Adjacency list (linked storage)

Sequential storage can waste preallocated memory; by analogy with the child representation of trees, a linked storage structure comes to mind.

- The vertices of the graph are stored in a **one-dimensional array**. Each element also stores a pointer to the vertex's first adjacent node (the head pointer of a linked list). The table storing the vertices and head pointers is called the **vertex table**.
- All adjacent vertices of each vertex form a **singly linked list**. For an undirected graph this list is called the vertex's **edge list**; for a directed graph it is called the vertex's **out-edge list**.

Data structures: two node types need to be designed: the vertices of the **vertex table**, and the nodes of the **singly linked lists**.

```c
typedef struct VNode {            // vertex table node
    VertexType data;              // vertex data
    ArcNode *firstedge;           // head pointer of the singly linked list
} VNode, AdjList[MaxVertexNum];   // AdjList is an array-of-struct type
```

``` c
#define MaxVertexNum 100          // maximum number of vertices in the graph
typedef struct ArcNode {          // edge list node
    int adjvex;                   // index of the vertex this arc points to
    struct ArcNode *next;         // pointer to the next arc
} ArcNode;
```

```c
typedef struct {
    AdjList vertices;             // adjacency list
    int vexnum, arcnum;           // numbers of vertices and arcs
} ALGraph;                        // graph type stored as an adjacency list
```

The adjacency list of a directed graph focuses on out-edges: it is easy to find a vertex's out-edges, for example by following the vertex-table entry's firstedge pointer to the first edge's vertex, then following the next pointers to each subsequent edge's vertex until a null pointer is reached.

#### Cross-linked list (orthogonal list)

The cross-linked list is a storage scheme for directed graphs: there is one node for every arc in the graph and one node for every vertex.

> The cross-linked list is really an optimization built on top of the adjacency list. It contains not only the out-edge nodes that the adjacency list already has, but also the in-edge information.

Vertex node:

- the vertex's data
- head pointer of the vertex's **in-edge list**
- head pointer of the vertex's **out-edge list**

Arc node:

- vertex-table index of the arc's **tail (start)**
- vertex-table index of the arc's **head (end)**
- pointer to the **next arc** with the same **head (end)**
- pointer to the **next arc** with the same **tail (start)**

```c
typedef struct VNode {
    VertexType data;
    ArcNode *firstin, *firstout;
} VNode;
```

```c
typedef struct ArcNode {
    int tailvex, headvex;
    struct ArcNode *hlink, *tlink;
} ArcNode;
```

```c
typedef struct {
    VNode xlist[MaxVertexNum];    // vertices still use sequential storage (an array)
    int vexnum, arcnum;           // numbers of vertices and arcs
} GLGraph;                        // graph type stored as a cross-linked list
```

#### Adjacency multilist

Modeled on the cross-linked list, the edge lists of the adjacency list are reworked to obtain the adjacency multilist, dedicated to storing undirected graphs.

Adjacency multilist edge node:

- vertex-table indices of the two endpoints of the edge
- pointer to the **next edge** attached to vertex ivex
- pointer to the **next edge** attached to vertex jvex

## Graph traversal

- Breadth-first search (BFS: Breadth-First-Search): similar to level-order traversal of a tree
- Depth-first search (DFS: Depth-First-Search): similar to pre-order traversal of a tree

BFS solves the single-source shortest path problem on unweighted graphs: it visits the vertices of the graph in order of increasing distance from the source.

## Graph applications: minimum spanning trees

- Prim's algorithm
  - (1) Choose a starting vertex v0 as the first vertex of the spanning tree; then, among all edges from this vertex to the other vertices, pick one with the smallest weight and add that edge and its other endpoint v to the spanning tree. (2) For all remaining vertices, check whether the weight between each of them and vertex v is smaller than that vertex's current value in the lowcost array; if it is smaller, update lowcost with the smaller weight. (3) From the updated lowcost array, again pick the edge with the smallest weight whose endpoint is not yet in the spanning tree, and add it. (4) Repeat (2) and (3) until every vertex has been added to the spanning tree.
  - The algorithm is a double loop: the outer loop runs **n-1** times, and the two parallel inner loops each run **n** times, so Prim's algorithm takes **O(n²)** time. The complexity depends only on n, so it suits **dense graphs**.
- Kruskal's algorithm
  - Sort the edges of the graph by weight in ascending order, then scan from the smallest edge, keeping a set that records the chosen edges. If adding an edge does not create a cycle, add it to the current spanning tree. Continue until every edge has been checked.
  - Kruskal's algorithm consists of a **sorting** step over the edge weights and a **single for loop**, which run one after the other; since sorting costs more than the single loop, **most of the time is spent on sorting**. Sorting depends on the number of edges, so the algorithm suits **sparse graphs**.
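The notes above define their structures in C; as a language-independent illustration of the Kruskal description, here is a minimal Python sketch using union-find to detect cycles. The `(u, v, weight)` edge-list format is an assumption, not part of the notes.

```python
# Minimal sketch of Kruskal's algorithm with union-find (path compression).
# Edges are (u, v, weight) tuples over vertices 0..n-1 -- an assumed format.
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    tree = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:               # adding the edge creates no cycle
            parent[ru] = rv
            tree.append((u, v, w))
            if len(tree) == n - 1:
                break
    return tree

# Example: a small weighted undirected graph.
print(kruskal(4, [(0, 1, 1), (1, 2, 4), (0, 2, 3), (2, 3, 2)]))
```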
该算法设置一个集合S记录已求得的最短路径的顶点,可用一个数组s[]来实现,初始化为0,当s[vi]=1时表示将顶点vi放入S中,初始时把源点v0放入S中。此外,在构造过程中还设置了两个辅助数组: dist[]:记录了从源点v0到其他各顶点当前的最短路径长度,dist[i]初值为arcs[v0][i]。 path[]:path[i]表示从源点到顶点i之间的最短路径的前驱结点,在算法结束时,可根据其值追溯得到源点v0到顶点vi的最短路径。 > 不断的试探加入新顶点是否使原来的路径变短 - 佛洛伊德 - 递推产生一个n阶方阵序列A(−1),A(0),…,A(k),…,A(n−1) 其中`A[k](i)[j]`表示从顶点vi到顶点vj的路径长度,**k表示绕行第k个顶点的运算步骤**。初始时,对于任意两个顶点vi和vj,若它们之间存在边,则以此边上的权值作为它们之间的最短路径长度;若它们之间不存在有向边,则以∞作为它们之间的最短路径长度。以后逐步尝试在原路径中加入顶点k(k=0,1,…,n-1)作为中间顶点。如果增加中间顶点后,得到的路径比原来的路径长度减少了,则以此新路径代替原路径。 >寻找所有结点之间的距离信息,并不断的更新。类似动态规划。 ## 图的应用 拓扑排序 如果我们把每个环节看成图中一个顶点,在这样一个有向图中,用顶点表示活动,用弧表示活动之间的优先关系,那么这样的有向图称为AOV网(Activity On Vertex) 拓扑排序算法: 从AOV网中选择一个入度为0的顶点输出,然后删去此顶点,并删除以此顶点为弧尾的弧。重复这个步骤直到输出图中全部顶点,或者找不到入度为0的顶点为止 ## 图的应用 关键路径 AOE(Activity On Edge):在一个表示工程的带权有向图中,用顶点表示事件,用有向边表示活动,用边上的权值表示活动的持续时间,这种有向图的边表示活动的网称为AOE网。 AOE网,活动是在边上,边上的权值表示的是这个活动所需要耗费的时间。AOE网是在AOE的基础上来分析工程的最少需要时间。或者是为了缩短工期,需要找出哪些活动是要加快的。
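Not part of the original notes: a minimal C sketch of the Dijkstra procedure described above, using the `MGraph` adjacency-matrix type defined earlier. The `INF` sentinel and the caller-provided `dist[]`/`path[]` arrays mirror the text's description but are illustrative assumptions.

```c
#include <stdbool.h>

#define INF 65535  // "infinity" for missing edges, as mentioned above

// Minimal sketch of Dijkstra over the MGraph adjacency matrix.
// v0 is the source; dist[]/path[]/s[] match the arrays in the text.
void Dijkstra(MGraph G, int v0, int dist[], int path[]) {
    bool s[MaxVertexNum] = {false};       // s[i] == true: shortest path to i is final
    for (int i = 0; i < G.vexnum; i++) {  // initialize from row v0 of the matrix
        dist[i] = G.Edge[v0][i];
        path[i] = (G.Edge[v0][i] < INF) ? v0 : -1;
    }
    s[v0] = true;
    dist[v0] = 0;
    for (int round = 0; round < G.vexnum - 1; round++) {
        int u = -1, min = INF;
        for (int i = 0; i < G.vexnum; i++)  // pick the closest unfinished vertex
            if (!s[i] && dist[i] < min) { u = i; min = dist[i]; }
        if (u == -1) break;                 // remaining vertices are unreachable
        s[u] = true;
        for (int i = 0; i < G.vexnum; i++)  // try to shorten paths through u
            if (!s[i] && dist[u] + G.Edge[u][i] < dist[i]) {
                dist[i] = dist[u] + G.Edge[u][i];
                path[i] = u;
            }
    }
}
```

Like Prim's algorithm, this double loop runs in O(n²), which matches the "try adding a new vertex and see whether paths shrink" description above.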
19.752174
205
0.751706
yue_Hant
0.699747
114b97b2725ad92d8bce3d09365db1c6c7c2a85c
16
md
Markdown
README.md
tishchenko/tin-crypto-bot
70a141ec6fad78727bbb50ea3ee139e5fbf4dea0
[ "MIT" ]
null
null
null
README.md
tishchenko/tin-crypto-bot
70a141ec6fad78727bbb50ea3ee139e5fbf4dea0
[ "MIT" ]
null
null
null
README.md
tishchenko/tin-crypto-bot
70a141ec6fad78727bbb50ea3ee139e5fbf4dea0
[ "MIT" ]
null
null
null
# tin-crypto-bot
16
16
0.75
ceb_Latn
0.529854
114c48e47091f14c9c7d7459577ffd08922ca368
1,161
md
Markdown
curriculum/challenges/english/02-javascript-algorithms-and-data-structures/basic-algorithm-scripting/reverse-a-string.md
Buzzfreeze/freeCodeCamp
ed383c47121d83ed612cbfbb4bd2688594089433
[ "BSD-3-Clause" ]
null
null
null
curriculum/challenges/english/02-javascript-algorithms-and-data-structures/basic-algorithm-scripting/reverse-a-string.md
Buzzfreeze/freeCodeCamp
ed383c47121d83ed612cbfbb4bd2688594089433
[ "BSD-3-Clause" ]
null
null
null
curriculum/challenges/english/02-javascript-algorithms-and-data-structures/basic-algorithm-scripting/reverse-a-string.md
Buzzfreeze/freeCodeCamp
ed383c47121d83ed612cbfbb4bd2688594089433
[ "BSD-3-Clause" ]
null
null
null
---
id: a202eed8fc186c8434cb6d61
title: Reverse a String
challengeType: 5
forumTopicId: 16043
dashedName: reverse-a-string
---

# --description--

Reverse the provided string: rearrange its characters from back to front.

You may need to turn the string into an array before you can reverse it.

Your function must return a string.

# --hints--

`reverseString("hello")` should return a string.

```js
assert(typeof reverseString('hello') === 'string');
```

`reverseString("hello")` should return the string `olleh`.

```js
assert(reverseString('hello') === 'olleh');
```

`reverseString("Howdy")` should return the string `ydwoH`.

```js
assert(reverseString('Howdy') === 'ydwoH');
```

`reverseString("Greetings from Earth")` should return the string `htraE morf sgniteerG`.

```js
assert(reverseString('Greetings from Earth') === 'htraE morf sgniteerG');
```

# --seed--

## --seed-contents--

```js
function reverseString(str) {
  return str;
}

reverseString("hello");
```

# --solutions--

```js
function reverseString(str) {
  return str.split('').reverse().join('');
}

reverseString("hello");
```
18.140625
104
0.671835
yue_Hant
0.251488
114c7e1ba278b579f19f9895061d44e6c89d3ef0
1,703
md
Markdown
content/blog/2006/xml/index.md
brianwisti/rgb-zola
60504a4ea91e72b14c120dc47b43264f2691d5cd
[ "CC-BY-4.0" ]
null
null
null
content/blog/2006/xml/index.md
brianwisti/rgb-zola
60504a4ea91e72b14c120dc47b43264f2691d5cd
[ "CC-BY-4.0" ]
null
null
null
content/blog/2006/xml/index.md
brianwisti/rgb-zola
60504a4ea91e72b14c120dc47b43264f2691d5cd
[ "CC-BY-4.0" ]
null
null
null
+++ title = "XML" date = "2006-03-17 00:00:00-08:00" updated = "2009-07-11 00:00:00-07:00" aliases = [ "/coolnamehere/2006/03/17_xml.html", "/post/2006/xml/", "/2006/03/17/xml/", "/post/2006/03/xml/",] draft = false [taxonomies] category = [ "post",] tags = [ "xml", "coolnamehere",] +++ XML is the core language of the Web. It forms the foundation for nearly everything you read with your browser. You might not know this, though, because of the great number of languages and acronyms you find. Web pages are written in XHTML, news feeds are written in RSS, and many applications communicate to each other with XML-RPC. If you use Google Talk, then you are relying on the Jabber protocol. What do each of these languages have in common? They are all XML languages. <!--more--> How is that possible? XML gives you a set of rules for defining new computer languages. Although nearly any sort of language can be created, XML is most appropriate for two tasks. + Text formatting + Data exchange "Text formatting" is probably the most familiar to you. Many Web pages are formatted with XHTML - a cleaned-up version of HTML, which you may be more familiar with. XHTML provides structure to the documents, making them easily processed by Web browsers and specialized scripts. All the headers, paragraphs, lists and links are described in XHTML. A well-formed Web document can be printed, viewed in a monitor, or read out loud by a text-to-speech program. XML provides the rules which make it that much easier for your Web page to be considered well-formed. Since XML is essentially the language of the Web, you will have no trouble finding helpful resources in getting familiar with XML.
42.575
110
0.753376
eng_Latn
0.999347
114c8ffefdb4dcc82723a8cca348fdcd76f86c6e
1,384
md
Markdown
_posts/cache/2021-03-14-cache-matter.md
cloudland/cloudland.github.io
485ad5794ba4655da4f3568bd25bed36a411c04c
[ "MIT" ]
null
null
null
_posts/cache/2021-03-14-cache-matter.md
cloudland/cloudland.github.io
485ad5794ba4655da4f3568bd25bed36a411c04c
[ "MIT" ]
null
null
null
_posts/cache/2021-03-14-cache-matter.md
cloudland/cloudland.github.io
485ad5794ba4655da4f3568bd25bed36a411c04c
[ "MIT" ]
null
null
null
---
comment: false
aside:
  toc: true
title: Cache Penetration, Cache Breakdown, and Cache Avalanche
date: 2021-03-14 14:35
tags: cache
---

## Cache penetration

Requests for data that exists in neither the cache nor the database, issued by users again and again.

> Such a user is very likely an attacker, and the attack puts excessive pressure on the database.

### Solutions

#### Cache empty objects

Set a cache entry regardless of whether the database query finds data; for misses, cache `Null`.

**Drawbacks:**

1. It only prevents repeated queries for the *same* missing key;
   > For example: member `M001` does not exist. This handles repeated lookups of `M001`, but not a stream of lookups where every key is different and missing.
2. It fills the cache with a large amount of empty data;
3. It puts pressure on memory;

#### Bloom filter `recommended`{:.success}

##### Bitmap

A key building block of a bloom filter is the bitmap, i.e. a bit array in which every position occupies a single bit with only two states, `0` and `1`. The bitarray is also called a bitmap, and its size is the size of the bloom filter.

<font color='#FF6A6A'>Suppose there are k hash functions, each with an output range larger than m; take each output modulo m to get k values in [0, m-1].</font> Because the hash functions are mutually independent, the k values are also independent; finally, mark the corresponding bitarray positions as `1`.

At query time, run the input object through the same k hash functions to get k values, then check whether all k corresponding bitarray positions are `1`. If any one of them is not set, the object is definitely not in the set — not on the blacklist. If all are set, the object is considered in the set, but this can be wrong: with too many inputs, or a set (bitarray) that is too small, most bits end up set and false positives become likely. Using a bloom filter therefore means tolerating an error rate.

1. Define the bit array, with every position initially `0`;
2. Define the hash functions, whose results fall between `0` and (array length - 1);
3. For each value, compute the hash results and set the array positions at those indexes to 1, marking the value as present;

**Drawbacks**

1. Because hash collisions happen, a membership check may report absent data as present (data reported absent is guaranteed absent), so there is some probability of false positives;
   - lengthen the array to spread the hashes and reduce collisions;
   - use more hash functions so several bits mark one value;
2. Deletion is not supported;

## Cache breakdown

Data that is missing from the cache but present in the database (usually because the cached entry expired).

> Under very high concurrency, many users miss the cache for that data at the same moment, causing an instant spike in database load.

### Solutions

1. Query the database directly, then repopulate the cache;
2. Under high concurrency, use a lock (or a distributed lock) so only one thread reads the database and sets the cache; subsequent requests read straight from the cache;

## Cache avalanche

A large amount of hot cached data suddenly expires at the same moment while query volume is huge, overwhelming the database and causing an outage.

### Solutions

1. Stagger expiration times;
2. Shard the cached data to spread database load;
3. Monitor hot data and extend its expiry automatically;
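Not in the original article: a minimal Python sketch of the bloom-filter idea described above. The bit-array size `m`, hash count `k`, and the salted-SHA-256 hashing scheme are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)  # the bitmap; every bit starts at 0

    def _positions(self, item):
        # Derive k independent positions in [0, m-1] from salted digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)  # set bit p to 1

    def might_contain(self, item):
        # False means definitely absent; True may be a false positive.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("M001")
print(bf.might_contain("M001"))  # True
print(bf.might_contain("M999"))  # almost certainly False
```

Larger `m` and a well-chosen `k` lower the false-positive rate, which is exactly the trade-off the drawbacks list above describes.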
18.453333
179
0.74422
yue_Hant
0.387978
114c9003905b3bb8668a12ae08b34ed17e51f9a6
20
md
Markdown
README.md
Dmytro-Furmaniuk/module-9
44b9ad62eb509d1901cf6196ec4b1391e9758226
[ "Unlicense" ]
null
null
null
README.md
Dmytro-Furmaniuk/module-9
44b9ad62eb509d1901cf6196ec4b1391e9758226
[ "Unlicense" ]
null
null
null
README.md
Dmytro-Furmaniuk/module-9
44b9ad62eb509d1901cf6196ec4b1391e9758226
[ "Unlicense" ]
null
null
null
# module-9 module-9
6.666667
10
0.7
kor_Hang
0.3667
114d14801236257b49aeff73c34006f3f2df5bbf
4,010
md
Markdown
_posts/2019-01-07-Download-teen-proofing-fostering-responsible-decision-making-in-your-teenager-john-rosemond.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
_posts/2019-01-07-Download-teen-proofing-fostering-responsible-decision-making-in-your-teenager-john-rosemond.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
_posts/2019-01-07-Download-teen-proofing-fostering-responsible-decision-making-in-your-teenager-john-rosemond.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Teen proofing fostering responsible decision making in your teenager john rosemond book not even when Sinsemilla is publicly to offer them my hearty thanks. As for me, the suggestion of black-satin lapels like those on a tuxedo jacket? the report being of excessive violence. "If all but us are slaves, where at first he fell the music, though, 'When this cometh about? When I was big enough and angry enough to make to say corrupt. Then he beyond the horizon. 325 your horse up teen proofing fostering responsible decision making in your teenager john rosemond see to him. And if somehow it succeeded, an insatiable satyr. Perhaps the mere threat of force would be sufficient to attain our ends --without taking it as far as an open demonstration or resorting to clamping down martial law as a first measure. From St. "You're supposed to be dead. " Mutants do teen proofing fostering responsible decision making in your teenager john rosemond cry. " tried to get to his feet he felt bonds of sorcery holding his body and mind, he sat her in a chair and let her slump forward over the breakfast table, was counting on the other to help, who collected miniature animals to in a very hospitable and friendly manner. All human lives are so profoundly and intricately entwined-those dead, nail clippers, wealth is competence!" he said, pretty young girls! This corner of hell, where streets petered           k, there was a small clearing. " So the king bade fetch the prisoner and they brought him; whereupon the viziers turned to him and said to him, gazing at him, like a hundred thunderstorms booming all at once, she was between the capital and the interior of the country. " He follows her into chambers more interesting than any he has seen since arriving on this world, but only if you learn to use it as a springboard to Finally she said, but had absorbed them as the roots of Edom's roses absorbed nutrients. dogs yammered around him. Gelluk spoke a single word impatiently, Fm kind of worried myself, so that it caused him abstain from meat and drink. The reverend droning on and on as Junior pinned the devout daughter to the mattress. "Cromwell, Russian peasant, because this is the answer they expect and the only one "How do you like it?" which brought him to 78 deg. It took Smith six weeks to increase the efficiency of the image intensifier enough to bring up the ghost me through half-closed eyes: myself. Even Gimma, put SDs around the house so that you would never have need to fear for your safety, it father would sooner or later come. "I don't fall. Well, drawn by R, they marvelled and said, I fear lest the Khalif come to know of this and be wroth with me; so tell me thou what is to be done in this wherewith I am afflicted of the affair of this damsel. "Of what?" Commercial Company of San Francisco. 405 "You provide rationality?" Teen proofing fostering responsible decision making in your teenager john rosemond rinsed the last of the dishes. ;'I suppose we just guessed lucky, till they fell down on the ground in a swoon. As he and his father were thus engaged in talk, did not hunt them at first, i, "but why are you telling me this?" "No, FRANKLIN CENTER OUTLET. I drew picket duty again this morning. She no longer appeared blurred, nail clippers. He was certain that the Hand hadn't found the money in the pay phone. Suddenly Junior wished that he had denied dreaming. "Oh, shrieking. breakdowns, just this morning on my way here, Hal. 
When he judged that he was near the porch steps, and cool in the temperature of -30 deg. But he must not hurry, after all, teen proofing fostering responsible decision making in your teenager john rosemond at me. As best he could, she'd revealed herself to be a disrespectful. "A spell of silence," she They are five against us," said the Herbal. the surface features: Syrtis Major and Thoth-Nepenthes leading in a long gooseneck to Utopia and the "I fix," she insisted.
445.555556
3,854
0.784788
eng_Latn
0.999936
114d842679d162802303a2b00cc2b9362c97f069
29
md
Markdown
README.md
cainmaila/three3d-demo
dbff448f7cefda40ad8834e03bcbd1534fdea13a
[ "MIT" ]
null
null
null
README.md
cainmaila/three3d-demo
dbff448f7cefda40ad8834e03bcbd1534fdea13a
[ "MIT" ]
null
null
null
README.md
cainmaila/three3d-demo
dbff448f7cefda40ad8834e03bcbd1534fdea13a
[ "MIT" ]
null
null
null
# three3d-demo Three.js DEMO
9.666667
14
0.758621
kor_Hang
0.795072
114e4ee438f3bc9a87b299aa8b9375c684885c92
448
md
Markdown
test/BDD/README.md
TalaoDAO/talao-wallet
ab915cccc5ea4a6365b1936c9d846e3917766da8
[ "Apache-2.0" ]
6
2021-12-02T17:36:11.000Z
2022-03-08T14:21:38.000Z
test/BDD/README.md
TalaoDAO/talao-wallet
ab915cccc5ea4a6365b1936c9d846e3917766da8
[ "Apache-2.0" ]
200
2021-09-27T16:06:25.000Z
2022-03-29T11:44:08.000Z
test/BDD/README.md
TalaoDAO/credible
c6e66831e97feac47a7a456c1ac913598f215ee1
[ "Apache-2.0" ]
2
2022-03-31T13:28:50.000Z
2022-03-31T13:29:07.000Z
# Talao wallet use cases Those use cases are described with the behavior-driven development [BDD](https://en.wikipedia.org/wiki/Behavior-driven_development) syntax. [Register a portfolio](https://github.com/TalaoDAO/talao-wallet/blob/ThierryThevenet-patch-1/test/BDD/register_portfolio.md) [Get a professional Experience Assessment](https://github.com/TalaoDAO/talao-wallet/blob/ThierryThevenet-patch-1/test/BDD/get_professional_credential.md)
49.777778
153
0.819196
yue_Hant
0.341516
114ee1bbd025ae05187129669efe0a4b47a8454d
1,066
md
Markdown
README.md
matriphe/sengkalan
adc4b94c26b20ae084b3b73ea04f7ff09782ad1f
[ "MIT" ]
null
null
null
README.md
matriphe/sengkalan
adc4b94c26b20ae084b3b73ea04f7ff09782ad1f
[ "MIT" ]
null
null
null
README.md
matriphe/sengkalan
adc4b94c26b20ae084b3b73ea04f7ff09782ad1f
[ "MIT" ]
null
null
null
# Sengkalan Generator

This ***Sengkalan Generator*** is a port of [Sengkalan Generator](https://github.com/lantip/sengkalan) from Python to Go, originally written by [Mas Lantip](https://github.com/lantip). For how it works, [read Mas Lantip's write-up on his blog](https://lantip.xyz/2020/05/membuat-sengkalan/) (in Indonesian).

## Requirements

- Go version 1.14
- Go Modules

## Building

```shell script
go build
```

## Running

```shell script
./sengkalan [year]
```

### Example

```shell script
./sengkalan 2020

Sengkalan versi 0.2

📅 Tahun Masehi: 2020
☀️ Surya Sengkala: Mesat Sikara Rusak Mata
📜 Makna Surya Sengkala:
> Mesat: pergi, menghindar, melesat
> Sikara: pengacauan, tangan, campur tangan.
> Rusak: rusak
> Mata: mata

📅 Tahun Jawa: 1953
🌙 Candra Sengkala: Brama Raseksa Muka Luwih
📜 Makna Candra Sengkala:
> Brama: api
> Raseksa: raksasa
> Muka: wajah, depan
> Luwih: lebih, luar biasa
```

(The example output above is the program's literal Indonesian/Javanese output and is kept verbatim.)

## To Do

- Convert the output to Javanese script

## License

Sengkalan is distributed under the [MIT license](LICENSE.md).
19.740741
195
0.711069
ind_Latn
0.835714
114f1e5dc689bad09e3c2bf63418cd8dbf198e03
414
md
Markdown
posts/2010/09/twaiku-found-haikus-in-public-tweets.md
atmos/tumblr.atmos.org
1865e6fe271d4c28047ac50fd4ace154be411ff1
[ "MIT" ]
null
null
null
posts/2010/09/twaiku-found-haikus-in-public-tweets.md
atmos/tumblr.atmos.org
1865e6fe271d4c28047ac50fd4ace154be411ff1
[ "MIT" ]
null
null
null
posts/2010/09/twaiku-found-haikus-in-public-tweets.md
atmos/tumblr.atmos.org
1865e6fe271d4c28047ac50fd4ace154be411ff1
[ "MIT" ]
2
2019-05-06T18:02:23.000Z
2019-05-06T18:27:47.000Z
<!-- id: 1093326637 link: http://tumblr.atmos.org/post/1093326637/twaiku-found-haikus-in-public-tweets slug: twaiku-found-haikus-in-public-tweets date: Thu Sep 09 2010 13:45:57 GMT-0700 (PDT) publish: 2010-09-09 tags: title: Twaiku - Found Haikus in Public Tweets --> Twaiku - Found Haikus in Public Tweets ====================================== [http://mrfeinberg.com/twaiku/](http://mrfeinberg.com/twaiku/)
24.352941
82
0.671498
kor_Hang
0.105192
114f878bb5aa0aea50e520fb7f13032f062a7efc
7,089
md
Markdown
UPGRADING-3.x.md
loginovma/ember-tooltips
1abb05a144de65a408375888b39e590942ef4b92
[ "MIT" ]
null
null
null
UPGRADING-3.x.md
loginovma/ember-tooltips
1abb05a144de65a408375888b39e590942ef4b92
[ "MIT" ]
null
null
null
UPGRADING-3.x.md
loginovma/ember-tooltips
1abb05a144de65a408375888b39e590942ef4b92
[ "MIT" ]
null
null
null
# Upgrading to 3.0 from 2.x ember-tooltips 3.x replaces the underlying tooltip implementation with the robust and mature [`tooltip.js`](https://popper.js.org/tooltip-examples.html) library powered by [`popper.js`](https://popper.js.org/). It has enabled a simpler ember-tooltips implementation, while providing more functionality and coverage for use cases not easily supported by earlier versions of ember-tooltips. ## Migrating existing code We've done our best to make the upgrade from 2.x to 3.x as smooth as possible, by preserving a similar component API and mostly compatible test helpers. ### 1. Update component names ember-tooltips now provides one component for tooltips, `ember-tooltip`, and one component for popovers, `ember-popover`. For the most part you can find-and-replace all uses of `tooltip-on-component` and `tooltip-on-element` with `ember-tooltip`, and `popover-on-component` and `popover-on-element` with `ember-popover`. ### 2. Remove deprecated options If you have specified these options in the past, you should remove them, as they no longer apply to ember-tooltips 3.x: * `setPin` - No longer needed * `keepInWindow` - All tooltips are now kept in the window by default. See [`popperOptions`](README.md#popper-options) for overriding this behavior via popper.js modifiers. * `enableLazyRendering` - See [What happened to `enableLazyRendering`?](#what-happened-to-enablelazyrendering) e.g. ```patch - {{#tooltip-on-element - class="user-banner__photo__tooltip js-user-photo-tooltip" - enableLazyRendering=true - side='right'}} + {{#ember-tooltip + class="js-user-photo-tooltip" + tooltipClassName="ember-tooltip user-banner__photo__tooltip" + side='right'}} <img src={{user.photo_url}} alt="User photo" /> - {{/tooltip-on-element}} + {{/ember-tooltip}} ``` ### 3. Update `class` and `tooltipClassName` When specifying `class` with an `ember-tooltip`, this will apply to the tooltip wrapper component, but will not contain the actual tooltip content. `class` may still be used for targeting tooltips using the `ember-tooltips` test helpers. For other uses where you're looking to set a class on the actual tooltip used for display (e.g. changing styling), use `tooltipClassName`, which will apply to the `popper.js` tooltip instance in the DOM. e.g. ```hbs {{!-- app/components/some-component.hbs --}} {{ember-tooltip text="Hello" class="js-my-test-tooltip" tooltipClassName="ember-tooltip tooltip-warning" }} ``` ```css /* app/styles/my-tooltips.css */ .tooltip-warning { background-color: yellow; color: black; } ``` ```javascript // tests/integration/components/some-component-test.js assertTooltipContent(assert, { contentString: 'Hello', selector: '.js-my-test-tooltip', }) ``` ### 4. Migrating test helpers The test helpers have remained with the same API. However, there are a few notable changes: #### 4.1 Updating test helper import path The test helper import paths have changed. Update `YOUR_APP_MODULE/tests/helpers/ember-tooltips` to `ember-tooltips/test-support` ```patch import { assertTooltipVisible, assertTooltipNotVisible, - findTooltip, - triggerTooltipTargetEvent -} from 'my-cool-app/tests/helpers/ember-tooltips'; + findTooltip +} from 'ember-tooltips/test-support'; ``` #### 4.2 Replace `triggerTooltipTargetEvent` test helper The `triggerTooltipTargetEvent` test helper has been removed. Please use `triggerEvent` from `@ember/test-helpers` (or `ember-native-dom-helpers`, if you're not using the latest test helpers.) 
```patch - it('shows the thing I want when condition is true', function() { + it('shows the thing I want when condition is true', async function() { await render(hbs`{{ember-tooltip text='Hello' class="my-tooltip-target"}}`); const someTooltipTarget = this.$('.my-tooltip-target'); - triggerTooltipTargetEvent(someTooltipTarget, 'mouseenter'); + await triggerEvent(someTooltipTarget[0], 'mouseenter'); ``` #### 4.3 Specifying `targetSelector` where needed While the test helper APIs remain unchanged, due to DOM structure changes in ember-tooltips, you may need to specify the [`targetSelector`](README.md#test-helper-option-targetselector) option to target the correct tooltip. Luckily, your test suite should let you know where this is needed! ### 5. Updating any CSS Overrides If you were previously overriding CSS styles of `ember-tooltips`, your rules will need to be updated to support 3.x. While by default, the tooltips should look about the same, the underlying DOM structure and CSS classes used have changed. If you were using CSS workarounds for positioning the tooltip, they may no longer be needed, as `popper.js` is smarter about positioning and provides some broader control over it. For example, position variants supported by `popper.js` may supplant the need for some custom positioning CSS. See [`side`](README.md#test-helper-option-side) option for more details. ## FAQ / Gotchas ### My tooltips appear clipped! (use within elements using `overflow: hidden`) One notable difference between the way 2.x renders versus 3.x is that 3.x now renders tooltips as siblings of their target element. Generally, this shouldn't change the appearance of the tooltips. However, when a tooltip exists inside of a parent element with `overflow: hidden` the tooltip may appear clipped in 3.x. There are two ways this can be addressed, but it may depend on your application. 1. Disable `escapeWithReference` for the `preventOverflow` popper.js modifier through [`popperOptions`](https://github.com/sir-dunxalot/ember-tooltips#popper-options) 2. Use the [`popperContainer`](https://github.com/sir-dunxalot/ember-tooltips#popper-container) option to render the tooltip as a child of another element, such as `'body'` ### What happened to `enableLazyRendering`? The use of `popper.js` in 3.x addresses performance in a couple different ways that mostly make the old lazy rendering option unnecessary. 1. It uses `requestAnimationFrame` to handle updates to the DOM, which provides smooth 60FPS updates. 2. It does not update or re-position tooltips that are not shown. 3. Tooltips are only rendered on activation & are torn down when hidden. The content will still be rendered in the DOM, but is hidden and not rendered into a tooltip/popover until it's activated. If the content inside the tooltip is what's costly to render, there is an escape hatch, e.g.: ``` {{#ember-popover as |popover|}} {{#if popover.isShown}} {{expensive-component-thing}} {{/if}} {{/ember-popover}} ``` ### My application assumed tooltips were appended to `<body>` and now all my tests/layout are breaking! In ember-tooltips 3.x, the decision was made to render tooltip content as a sibling to the target element, rather than as a direct child of `<body>`. You can restore the behavior of ember-tooltips 2.x by specifying `popperContainer='body'`, which will direct popper.js to render the tooltip or popover content as a child of `<body>`.
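Not shown in the original guide: a sketch of FAQ option 1 above (keeping a tooltip visible inside an `overflow: hidden` parent). The exact `popperOptions` shape here is an assumption based on popper.js 1.x modifier config and the README's popper-options section — verify it against the README before relying on it.

```hbs
{{!-- hypothetical sketch: disable the preventOverflow modifier's
      escapeWithReference flag so the tooltip stays within its boundary --}}
{{ember-tooltip
  text='Hello'
  popperOptions=(hash
    modifiers=(hash
      preventOverflow=(hash escapeWithReference=false)))}}
```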
37.115183
129
0.746227
eng_Latn
0.984832
115090220d1ef219a17c3f958323affb4fa16600
1,329
md
Markdown
docs/profiling/profiling-aspnet-load-tests.md
MicrosoftDocs/visualstudio-docs.ko-kr
367344fed1f3d162b028af8a41a785a2137598e8
[ "CC-BY-4.0", "MIT" ]
13
2019-10-02T05:47:05.000Z
2022-03-09T07:28:28.000Z
docs/profiling/profiling-aspnet-load-tests.md
MicrosoftDocs/visualstudio-docs.ko-kr
367344fed1f3d162b028af8a41a785a2137598e8
[ "CC-BY-4.0", "MIT" ]
115
2018-01-17T01:43:25.000Z
2021-02-01T07:27:06.000Z
docs/profiling/profiling-aspnet-load-tests.md
MicrosoftDocs/visualstudio-docs.ko-kr
367344fed1f3d162b028af8a41a785a2137598e8
[ "CC-BY-4.0", "MIT" ]
33
2018-01-17T01:25:13.000Z
2022-02-14T05:28:44.000Z
---
title: Profile ASP.NET load tests | Microsoft Docs
description: Learn how to collect profiling data from automated Microsoft Test Manager web tests that you run against an ASP.NET website project.
ms.date: 11/04/2016
ms.topic: conceptual
ms.assetid: c3f5c363-be79-40b5-bfa7-db8d21378d8d
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.technology: vs-ide-debug
monikerRange: vs-2017
ms.workload:
- aspnet
ms.openlocfilehash: 1bbd0b138687a8842c8e991f1a4bce8f9e8f234a
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 08/13/2021
ms.locfileid: "122076323"
---
# <a name="profile-aspnet-load-tests"></a>Profile ASP.NET load tests

You can collect profiling data from automated [!INCLUDE[TCMext](../misc/includes/tcmext_md.md)] web tests that you run against an ASP.NET website project. You can collect sampling and tier-interaction data. For more information, see the following topics:

- [(NIB) How to: Run a performance session on a web application from the Web Performance Test Editor](/previous-versions/ff356203(v=vs.100))
- [How to: Configure the ASP.NET profiler for load tests using test settings in Visual Studio](/previous-versions/dd504817(v=vs.140))

## <a name="see-also"></a>See also

- [Understand sampling data values](../profiling/understanding-sampling-data-values.md)
- [Analyze data by using performance rules](../profiling/using-performance-rules-to-analyze-data.md)
- [Sampling method data views](../profiling/profiler-sampling-method-data-views.md)
- [Tier interactions view](../profiling/tier-interactions-view.md)
41.53125
167
0.755455
kor_Hang
0.996986
1150a1279f254f5045a0de8ec7e05c6c11f2ac4c
1,379
md
Markdown
LeetCode/单调递增的数字.md
WindrunnerMax/EveryDay
1656eee33db9b561696bb09476ad6b81563cecce
[ "MIT" ]
699
2020-02-29T03:07:10.000Z
2022-03-31T07:47:56.000Z
LeetCode/单调递增的数字.md
WindrunnerMax/EveryDay
1656eee33db9b561696bb09476ad6b81563cecce
[ "MIT" ]
1
2021-01-16T17:29:24.000Z
2021-01-17T05:21:01.000Z
LeetCode/单调递增的数字.md
WindrunnerMax/EveryDay
1656eee33db9b561696bb09476ad6b81563cecce
[ "MIT" ]
134
2020-07-04T13:37:53.000Z
2022-03-30T12:40:22.000Z
# Monotone Increasing Digits

Given a non-negative integer `N`, find the largest integer less than or equal to `N` whose digits are monotonically increasing. An integer is monotone increasing if and only if every pair of adjacent digits `x` and `y` satisfies `x <= y`.

## Examples

```
Input: N = 10
Output: 9
```

```
Input: N = 1234
Output: 1234
```

```
Input: N = 332
Output: 299
```

## Solution

```javascript
/**
 * @param {number} N
 * @return {number}
 */
var monotoneIncreasingDigits = function(N) {
    let i = 1;
    let num = N;
    while (i * 10 <= num) {
        // console.log(i, num);
        // take two adjacent digits at a time
        let n = ~~(num / i) % 100;
        i = i * 10;
        if (~~(n / 10) > n % 10) num = ~~(num / i) * i - 1;
        // e.g. 1332: after the first pass, trunc(1332 / 10) * 10 - 1 = 1330 - 1 = 1329
        // after the second pass, 1300 - 1 = 1299
    }
    return num;
};
```

## Approach

The overall idea is to treat the number as a string and traverse it once from the last digit to the first, comparing two adjacent digits at a time. Whenever the digit in the lower position is smaller than the digit before it, decrease the earlier digit by one and turn every digit after it into `9`. For example, while scanning `1323` at the `32` position, `3 > 2` triggers the rule, so we decrease the `3` by one and set everything after it to `9`, giving `1299`; continue like this until the front of the number is reached. Usually the number would be handled as a string; the solution above works purely with numbers instead. `i` marks the position reached so far and `num` is the number being processed. The loop runs as long as two digits can still be extracted — that is its termination condition. Also, prefer multiplication over division where possible: in JS, if an `int32` division does not divide evenly, the result is automatically promoted to a 64-bit double, so in many places the value must be forced back to `int32`. Each iteration extracts two digits, where `~~` uses bitwise operations to coerce the value to an integer, and then `i * 10` advances the marker to the next position. If the digit in the higher position is greater than the one in the lower position, truncate `num` to zeros from position `i` downward and subtract `1`, which achieves the "decrease this digit by one and turn the rest into 9" step described above (see the example traced in the comments). After the loop ends, return the processed number.

## Daily Question

```
https://github.com/WindrunnerMax/EveryDay
```

## References

```
https://leetcode-cn.com/problems/monotone-increasing-digits/
```
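Not part of the original post: a minimal JavaScript sketch of the string-based variant that the approach section describes, for comparison with the pure-numeric solution above.

```javascript
var monotoneIncreasingDigitsStr = function(N) {
    const digits = [...String(N)].map(Number);
    let mark = digits.length;            // positions >= mark will become 9
    for (let i = digits.length - 1; i > 0; i--) {
        if (digits[i - 1] > digits[i]) { // violation: decrease the higher digit
            digits[i - 1] -= 1;
            mark = i;                    // everything after it turns into 9
        }
    }
    for (let i = mark; i < digits.length; i++) digits[i] = 9;
    return Number(digits.join(''));
};

console.log(monotoneIncreasingDigitsStr(332));  // 299
console.log(monotoneIncreasingDigitsStr(1234)); // 1234
console.log(monotoneIncreasingDigitsStr(10));   // 9
```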
23.372881
502
0.614213
yue_Hant
0.727573
115122721f389c6a35a695de082cb485f3dc1e7e
70,934
md
Markdown
developer-library/python-and-database-scripting/python-scripting/python-scripting.md
kamryn-v/learning-library
5da55c082ccc41772ade8cd08af6c275737925b2
[ "UPL-1.0" ]
3
2020-04-03T17:57:44.000Z
2020-05-31T06:50:06.000Z
developer-library/python-and-database-scripting/python-scripting/python-scripting.md
kamryn-v/learning-library
5da55c082ccc41772ade8cd08af6c275737925b2
[ "UPL-1.0" ]
null
null
null
developer-library/python-and-database-scripting/python-scripting/python-scripting.md
kamryn-v/learning-library
5da55c082ccc41772ade8cd08af6c275737925b2
[ "UPL-1.0" ]
1
2020-06-29T15:51:11.000Z
2020-06-29T15:51:11.000Z
# Python and Oracle Database: Scripting for the Future ## Introduction This tutorial is an introduction to using Python with Oracle Database. It contains beginner and advanced material. Sections can be done in any order. Choose the content that interests you and your skill level. Follow the steps in this document. The **tutorial** directory has scripts to run and modify. The **tutorial/solutions** directory has scripts with the suggested code changes. Estimated Lab Time: 60 minutes ### About cx\_Oracle Python extension cx\_Oracle is a Python extension module that enables access to Oracle Database. It conforms to the Python database API 2.0 specification with a considerable number of additions and a couple of exclusions. cx\_Oracle 8 has been tested with Python versions 3.6 through 3.9. Older versions of cx\_Oracle may be used with previous Python releases. You can use cx\_Oracle with Oracle 11.2, 12, 18, 19 and 21 client libraries. Oracle's standard client-server version interoperability allows connection to both older and newer databases. For example Oracle 19c client libraries can connect to Oracle Database 11.2. cx\_Oracle 8 is available. Python is open-source, cross-platform, and free of cost. There's no excuse not to give Python a try! ### Objectives * Learn how to use Python in the Oracle Database * Learn how to validate Python operations ### Prerequisites This lab assumes you have completed the following labs: * Login to Oracle Cloud * Generate SSH Key * Environment Setup using Marketplace Image ## Task 1: Install Python Python comes preinstalled on most Linux distributions, and it is available as a package on others. The Python packages can be obtained from the software repository of your Linux distribution using the package manager. 1. Open up the Oracle Cloud shell (or terminal of your choice) and ssh into your compute instance as the `opc` user if not already. ```` ssh -i ~/.ssh/<sshkeyname> opc@<Your Compute Instance Public IP Address> ```` 2. Check if python3 has been installed by running the command. ```` <copy> python -V </copy> ```` For this tutorial Python Version 3.6 (or later) is preferred. **You will typically have to update Python**. cx\_Oracle version 7.2 (or later) is needed. cx\_Oracle version 8 is available. Oracle database 19c is installed with the 19c client libraries. SQL*Plus is preinstalled. The Advanced Queuing section requires Oracle client 12.2 or later. The SODA section requires Oracle client 18.5, or later, and Oracle Database 18 or later. 3. Upgrade Python if you do not have Python 3 installed. There is no harm in running this command multiple times, the system will either install packages or let you know they are already installed. ```` <copy> sudo yum -y install python3 python3-tools </copy> ```` ![](./images/p_installPython.jpg " ") ## Task 2: Add a Sample Schema in your Database 1. Switch to the `oracle` user using the sudo command. ```` <copy> sudo su - oracle </copy> ```` 2. It is necessary to correctly set the environment variables so that we can later run `sqlplus`. Copy and paste the following. ```` <copy> . oraenv </copy> ```` When prompted with `ORACLE_SID = [orcl] ?` copy and paste the following then press enter. ```` <copy> ORCL </copy> ```` ![](./images/step2.2-oraenv.png " ") 3. Create a directory structure named `python/` and get the SQL setup scripts. 
````
<copy>
mkdir -p python
cd python
wget https://objectstorage.us-ashburn-1.oraclecloud.com/p/0mc1pGojNWLuXj0RXKtmj8-qWyRmEkNipReSOXudpIvOmM642cYOSDoxmWTT-ibY/n/c4u04/b/labfiles/o/python_setup.zip
unzip python_setup.zip
</copy>
````

![](./images/setupEnv-1.png " ")

4. Install the sample schema using the script **SetupSamples**.

````
<copy>
sqlplus sys/Ora_DB4U@localhost/orclpdb as sysdba @/home/oracle/python/sql/SetupSamples
</copy>
````

The **SetupSamples** script will create a user `pythonhol` with a password `welcome`.

![](./images/setupsample1.png " ")
![](./images/setupsample2.png " ")

To leave `sqlplus` you need to use the exit command.

````
<copy>
exit
</copy>
````

## Task 3: Install Python Oracle Module and Connect to a Database

cx\_Oracle is a Python module that enables access to Oracle databases. This module is supported by Oracle 11.2 and higher and works for both Python 2.X and 3.X.

There are various ways in which cx\_Oracle can be installed. In this example, we will use pip (installed by default for python 3.4 and up). For more ways to install cx\_Oracle (like yum) check the documentation on [https://yum.oracle.com/oracle-linux-python.html#Aboutcx_Oracle](https://yum.oracle.com/oracle-linux-python.html#Aboutcx_Oracle "documentation").

1. Install the `cx_Oracle` module using python3 and pip for the oracle user. If your terminal disconnected and you are opc again, enter the command `sudo su - oracle` to switch back to the `oracle` user. The **pip** command will install cx_Oracle 8.

````
<copy>
python3 -m pip install --user cx_Oracle
</copy>
````

![](./images/p_installcxOracle.png " " )

2. Test your install by launching the Python console and listing the available modules.

````
<copy>
python3
help('modules')
</copy>
````

This command will show you a list of installed modules that should include the cx\_Oracle module we installed in the previous step.

![](./images/p_installcxOracle-2.png " ")

3. Connect to the Oracle database and print the version of the database via Python. (This confirms you are connected to an Oracle instance and returns the database version.)

````
<copy>
import cx_Oracle
con = cx_Oracle.connect('system/Ora_DB4U@localhost:1521/orclpdb')
print(con.version)
</copy>
````

The output should be similar to 19.7.0.0.0. Copy and paste the `quit` command below; this will exit the Python command-line editor.

````
<copy>
quit()
</copy>
````

![](./images/p_python-3.png " ")

## Task 4: The Python Interpreter

There are several ways to execute Python code. In this step, we start with two examples of how to execute Python code from the command line: the first executes code at the command prompt, i.e. commands typed directly into the interpreter; the second saves your code in a .py file and invokes the interpreter to execute the file.

1. To execute code from the command line, open the Python command-line editor and type the following commands, one by one (each line is one command).

````
<copy>
python3
var1 = "Hello World"
var1
</copy>
````

The output will be `'Hello World'`.

![](./images/p_python-1.png " " )

2. Quit the python terminal.

````
<copy>
quit()
</copy>
````

3. To create a simple script, open up the nano text editor by copying and pasting the command `nano test.py`.

````
<copy>
cd ~
nano test.py
</copy>
````

Enter the following script into nano.

````
<copy>
var1 = "Hello World"
print(var1)
</copy>
````

4. If you are using nano, type **Ctrl+x** to exit the file. When prompted press **y**. Then press **ENTER** to confirm.

The file should be named **test.py** and be located in the **/home/oracle** directory.

*This process of opening and closing files in nano will be used throughout the rest of this lab. Remember: to open a file in nano, first navigate to the directory containing the file. Open the file with the command `nano FileName`. Save and close the file with `Ctrl+x`, then `y`, then `ENTER`.*

````
<copy>
python3 ~/test.py
</copy>
````

![](./images/p_python-2.png " " )

## Task 5: Connect to the Oracle Database

1. Review the connection credentials.

Review db\_config.py and db\_config.sql in the tutorial directory. These are included in other Python and SQL files in this tutorial.

To view db\_config.py copy and paste the following:

````
<copy>
cd ~/python/tutorial
cat db_config.py
</copy>
````

The output should say:

````
user = "pythonhol"
pw = "welcome"
dsn = "localhost:1521/orclpdb"
````

To view db\_config.sql copy and paste the following:

````
<copy>
cd ~/python/tutorial
cat db_config.sql
</copy>
````

The output should say:

````
def user = "pythonhol"
def pw = "welcome"
def connect_string = "localhost:1521/orclpdb"
````

2. Creating a basic connection.

Review the code contained in connect.py by issuing the command **cat connect.py**.

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
print("Database version:", con.version)
````

The cx\_Oracle module is imported to provide the API for accessing the Oracle database. Many inbuilt and third party modules can be included in this way in Python scripts.

The **connect()** method is passed the username, the password and the connection string that you configured in the **db\_config.py** module. In this case, Oracle's Easy Connect connection string syntax is used. It consists of the hostname of your machine, localhost, and the database service name **orclpdb**.

In your command terminal, change to the tutorial directory:

````
<copy>
cd /home/oracle/python/tutorial
</copy>
````

Run the Python script:

````
<copy>
python3 connect.py
</copy>
````

![](./images/python_connect.png " " )

The version number of the database should be displayed. An exception is raised if the connection fails. Adjust the username, password or connect string parameters to invalid values to see the exception.

**cx\_Oracle** also supports "external authentication", which allows connections without needing usernames and passwords to be embedded in the code. Authentication would then instead be performed by, for example, LDAP.

By default they connect to the 'orclpdb' database service on the same machine as Python. You can modify the values in both files to match the connection information for your environment.

3. Indentation indicates code structure. There are no statement terminators or begin/end keywords or braces to indicate blocks of code.

Open connect.py in an editor with the command **nano connect.py**. Indent the print statement with some spaces:

````
<copy>
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
  print("Database version:", con.version)
</copy>
````

4. Save the script (**^x**) and run it again:

````
<copy>
python3 connect.py
</copy>
````

![](./images/python_indent_error.png " " )

This raises an exception about the indentation. The number of spaces or tabs must be consistent in each block; otherwise, the Python interpreter will either raise an exception or execute code unexpectedly. Python may not always be able to distinguish accidental from deliberate indentation.
*Check your indentation is correct before running each example*. Make sure to indent all statement blocks equally. **Note the sample files use spaces, not tabs.** 5. Executing a query. Open **query.py** in an editor. If you are using nano text editor to edit the file, use command **nano query.py**. It looks like: ```` import cx_Oracle import db_config con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn) ```` Edit the file by adding the code shown in below to the end of the file: ```` <copy> cur = con.cursor() cur.execute("select * from dept order by deptno") res = cur.fetchall() for row in res: print(row) </copy> ```` Make sure the print(row) line is indented. This lab uses spaces, not tabs. The code executes a query and fetches all data. Save the file and run it: ```` <copy> python3 query.py </copy> ```` ![](./images/python_query.png " " ) In each loop iteration a new row is stored in row as a Python "tuple" and is displayed. Fetching Data is described in a later section 6. Closing connections Connections and other resources used by cx\_Oracle will automatically be closed at the end of scope. This is a common programming style that takes care of the correct order of resource closure. Resources can also be explicitly closed to free up database resources if they are no longer needed. This may be useful in blocks of code that remain active for some time. Open **query.py** in an editor and add calls to close the cursor and connection like: ```` <copy> import cx_Oracle import db_config con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn) cur = con.cursor() cur.execute("select * from dept order by deptno") res = cur.fetchall() for row in res: print(row) cur.close() con.close() </copy> ```` Running the script completes without error: ```` <copy> python3 query.py </copy> ```` ![](./images/python_query.png " " ) If you swap the order of the two close() calls you will see an error. ![](./images/python_close_error.png " " ) 7. Checking versions Review the code contained in **versions.py**: ```` import cx_Oracle import db_config con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn) print(cx_Oracle.version) ```` Run the script: ```` <copy> python3 versions.py <copy> ```` This gives the version of the cx\_Oracle interface. ```` 7.3.0 ```` Replace the text in **versions.py** with the text below to print the version of the database, and of the Oracle client libraries used by cx\_Oracle: ```` <copy> import cx_Oracle import db_config con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn) print(cx_Oracle.version) print("Database version:", con.version) print("Client version:", cx_Oracle.clientversion()) </copy> ```` When the script is run, it will display: ```` 7.3.0 Database version: 19.7.0.0.0 Client version: (19, 7, 0, 0, 0) ```` **Note** the client version is a tuple. Any cx\_Oracle installation can connect to older and newer Oracle Database versions. By checking the Oracle Database and client versions numbers, the application can make use of the best Oracle features available. ## Task 6: Connection Pooling 1. 
Connection pooling Review the code contained in **connect\_pool.py**: ```` import cx_Oracle import threading import db_config pool = cx_Oracle.SessionPool(db_config.user, db_config.pw, db_config.dsn, min = 2, max = 5, increment = 1, threaded = True) def Query(): con = pool.acquire() cur = con.cursor() for i in range(4): cur.execute("select myseq.nextval from dual") seqval, = cur.fetchone() print("Thread", threading.current_thread().name, "fetched sequence =", seqval) thread1 = threading.Thread(name='#1', target=Query) thread1.start() thread2 = threading.Thread(name='#2', target=Query) thread2.start() thread1.join() thread2.join() print("All done!") ```` The **SessionPool()** function creates a pool of Oracle connections for the user. Connections in the pool can be used by cx\_Oracle connections by calling **pool.acquire()**. The initial pool size is 2 connections. The maximum size is 5 connections. When the pool needs to grow, 1 new connection will be created at a time. The pool can shrink back to the minimum size of 2 when connections are no longer in use. The **def Query():** line creates a method that is called by each thread. In the method, the **pool.acquire()** call gets one connection from the pool (as long as less than 5 are already in use). This connection is used in a loop of 4 iterations to query the sequence myseq. At the end of the method, cx\_Oracle will automatically close the cursor and release the connection back to the pool for reuse. The **seqval, = cur.fetchone()** line fetches a row and puts the single value contained in the result tuple into the variable **seqval**. Without the comma, the value in **seqval** would be a tuple like **"(1,)"**. Two threads are created, each invoking the Query() method. In a command terminal, run: ```` <copy> cd ~/python/tutorial python3 connect_pool.py </copy> ```` ![](./images/python_conn_pool.png " " ) The output shows interleaved query results as each thread fetches values independently. The order of interleaving may vary from run to run. 2. Connection pool experiments Review **connect\_pool2.py**, which has a loop for the number of threads, each iteration invoking the Query() method: ```` import cx_Oracle import threading import db_config pool = cx_Oracle.SessionPool(db_config.user, db_config.pw, db_config.dsn, min = 2, max = 5, increment = 1, threaded = True) def Query(): con = pool.acquire() cur = con.cursor() for i in range(4): cur.execute("select myseq.nextval from dual") seqval, = cur.fetchone() print("Thread", threading.current_thread().name, "fetched sequence =", seqval) numberOfThreads = 2 threadArray = [] for i in range(numberOfThreads): thread = threading.Thread(name = '#' + str(i), target = Query) threadArray.append(thread) thread.start() for t in threadArray: t.join() print("All done!") ```` In a command terminal, run: ```` <copy> cd ~/python/tutorial python3 connect_pool2.py </copy> ```` ![](./images/step6.2-python_conn_pool1.png " " ) Experiment with different values of the pool parameters and **numberOfThreads**. Larger initial pool sizes will make the pool creation slower, but the connections will be available immediately when needed. When **numberOfThreads** exceeds the maximum size of the pool, the **acquire()** call will generate an error such as **ORA-24459: OCISessionGet() timed out waiting for the pool to create new connections**. 
Adding the additional argument **getmode = cx\_Oracle.SPOOL\_ATTRVAL\_WAIT** to the **cx\_Oracle.SessionPool()** call will prevent the exception from taking place, but will cause the thread to wait until a connection is available.

Pool configurations where min is the same as max (and increment = 0) are often recommended as a best practice. This avoids connection storms on the database server.

3. Creating a Database Resident Connection Pool (DRCP) connection

Database Resident Connection Pooling allows multiple Python processes on multiple machines to share a small pool of database server processes.

Below left is a diagram without DRCP. Every application connection has its own 'dedicated' database server process. Application connect and close calls require the expensive create and destroy of those database server processes. To avoid these costs, scripts may hold connections open even when not doing database work: these idle server processes consume database host resources.

Below right is a diagram with DRCP. Scripts can use database servers from a pre-created pool of servers and return them when they are not in use.

Without DRCP:

![](./images/python_nopool.png "Without DRCP ")

With DRCP:

![](./images/python_pool.png "With DRCP ")

DRCP is useful when the database host machine does not have enough memory to handle the number of database server processes required. However, if database host memory is large enough, then the default, 'dedicated' server process model is generally recommended. If DRCP is enabled, it is best used in conjunction with cx\_Oracle middle-tier connection pooling. Batch scripts doing long running jobs should generally use dedicated connections. Both dedicated and DRCP servers can be used in the same database for different applications.

Review the code contained in **connect\_drcp.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn + ":pooled",
                        cclass="PYTHONHOL", purity=cx_Oracle.ATTR_PURITY_SELF)
print("Database version:", con.version)
````

This is similar to **connect.py** but ":pooled" is appended to the connection string, telling the database to use a pooled server.

A Connection Class "PYTHONHOL" is also passed into the connect() method to allow grouping of database servers to applications.

The "purity" of the connection is defined as the ATTR\_PURITY\_SELF constant, meaning the session state (such as the default date format) might be retained between connection calls, giving performance benefits. Session information will be discarded if a pooled server is later reused by an application with a different connection class name.

Applications that should never share session information should use a different connection class and/or use ATTR\_PURITY\_NEW to force creation of a new session. This reduces overall scalability but prevents applications mis-using session information.

Before you run the **connect\_drcp.py** code you will need to start the default connection pool in the instance. Connect to the oracle instance as `sys`:

````
<copy>
sqlplus sys/Ora_DB4U@localhost:1521/orcl as sysdba
</copy>
````

````
<copy>
exec dbms_connection_pool.start_pool;
exit;
</copy>
````

Run **connect\_drcp.py** in a terminal window.

````
<copy>
python3 connect_drcp.py
</copy>
````

The output is simply the version of the database.

![](./images/step6.3-sysdbaconnect.png " ")

**Note** If you get an error: **"ORA-12520 TNS: Listener could not find available handler"**, you have not started the DRCP connection pool.

4. Connection pooling and DRCP

DRCP works well with cx\_Oracle's connection pooling. Edit **connect\_pool2.py**, reset any changed pool options, and modify it to use DRCP:

````
import cx_Oracle
import threading
import db_config

pool = cx_Oracle.SessionPool(db_config.user, db_config.pw, db_config.dsn + ":pooled",
                             min = 2, max = 5, increment = 1, threaded = True)

def Query():
    con = pool.acquire(cclass = "PYTHONHOL", purity = cx_Oracle.ATTR_PURITY_SELF)
    cur = con.cursor()
    for i in range(4):
        cur.execute("select myseq.nextval from dual")
        seqval, = cur.fetchone()
        print("Thread", threading.current_thread().name, "fetched sequence =", seqval)

numberOfThreads = 2
threadArray = []

for i in range(numberOfThreads):
    thread = threading.Thread(name = '#' + str(i), target = Query)
    threadArray.append(thread)
    thread.start()

for t in threadArray:
    t.join()

print("All done!")
````

The script logic does not need to be changed to benefit from DRCP connection pooling.

Run the script:

````
<copy>
python3 connect_pool2.py
</copy>
````

If you get the error **"ORA-24459: OCISessionGet() timed out waiting for pool to create new connections"** or **"ORA-24418: Cannot open further sessions"**, it is because connection requests are being made while the pool is starting or growing. Add the argument getmode = cx\_Oracle.SPOOL\_ATTRVAL\_WAIT to the cx\_Oracle.SessionPool() call so connection requests wait for pooled connections to be available.

Open a new terminal window and invoke SQL*Plus:

````
<copy>
sqlplus /nolog @drcp_query.sql
</copy>
````

This shows the number of connection requests made to the pool since the database was started ("NUM\_REQUESTS"), how many of those reused a pooled server's session ("NUM\_HITS"), and how many had to create new sessions ("NUM\_MISSES"). Typically the goal is a low number of misses. To see the pool configuration you can query DBA\_CPOOL\_INFO.

5. More DRCP investigation

To explore the behaviors of cx\_Oracle connection pooling and DRCP pooling further, you could try changing the purity to **cx\_Oracle.ATTR\_PURITY\_NEW** to see the effect on the DRCP **NUM\_MISSES** statistic.

Another experiment is to include the time module at the file top:

````
<copy>
import time
</copy>
````

and add calls to **time.sleep(1)** in the code, for example in the query loop. Then look at the way the threads execute. Use **drcp\_query.sql** to monitor the pool's behavior.

## Task 7: Fetching Data

1. A simple query

There are a number of functions you can use to query an Oracle database, but the basics of querying are always the same:

* Parse the statement for execution.
* Bind data values (optional).
* Execute the statement.
* Fetch the results from the database.

Review the code contained in **query2.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)

cur = con.cursor()
cur.execute("select * from dept order by deptno")
for deptno, dname, loc in cur:
    print("Department number: ", deptno)
    print("Department name: ", dname)
    print("Department location:", loc)
````

The cursor() method opens a cursor for statements to use.

The execute() method parses and executes the statement.

The loop fetches each row from the cursor and unpacks the returned tuple into the variables deptno, dname, loc, which are then printed.

Run the script in a terminal window:

````
<copy>
cd ~/python/tutorial
python3 query2.py
</copy>
````

![](./images/query2Output.png " " )

2. Using fetchone()

When the number of rows is large, the fetchall() call may use too much memory.

Review the code contained in **query\_one.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)

cur = con.cursor()
cur.execute("select * from dept order by deptno")

row = cur.fetchone()
print(row)

row = cur.fetchone()
print(row)
````

This uses the fetchone() method to return just a single row as a tuple. When called multiple times, consecutive rows are returned:

Run the script in a terminal window:

````
<copy>
python3 query_one.py
</copy>
````

![](./images/python_query_one.png " " )

The first two rows of the table are printed.

3. Using fetchmany()

Review the code contained in **query\_many.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)

cur = con.cursor()
cur.execute("select * from dept order by deptno")

res = cur.fetchmany(numRows = 3)
print(res)
````

The fetchmany() method returns a list of tuples. By default the number of rows returned is specified by the cursor attribute **arraysize** (which defaults to 100). Here the numRows parameter specifies that three rows should be returned.

Run the script in a terminal window:

````
<copy>
python3 query_many.py
</copy>
````

![](./images/python_query_many.png " " )

The first three rows of the table are returned as a list (Python's name for an array) of tuples.

You can access elements of the list by position index. To see this, **edit the file query\_many.py** and add:

````
<copy>
print(res[0])    # first row
print(res[0][1]) # second element of first row
</copy>
````

Run the script **query_many.py** in the terminal window to see the output:

![](./images/python_query_many_2.png " " )

4. Scrollable cursors

Scrollable cursors enable the application to move backwards as well as forwards in query results. They can be used to skip rows as well as move to a particular row.

Review the code contained in **query\_scroll.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)

cur = con.cursor(scrollable = True)
cur.execute("select * from dept order by deptno")

cur.scroll(2, mode = "absolute") # go to second row
print(cur.fetchone())

cur.scroll(-1) # go back one row
print(cur.fetchone())
````

Run the script in a terminal window:

````
<copy>
python3 query_scroll.py
</copy>
````

![](./images/step7.4-scroll.png " ")

Edit query\_scroll.py and experiment with different scroll options and orders, such as:

````
<copy>
cur.scroll(1) # go to next row
print(cur.fetchone())

cur.scroll(mode = "first") # go to first row
print(cur.fetchone())
</copy>
````

Run the **query_scroll.py** script in a terminal window:

![](./images/step7.4-scroll1.png " ")

Try some scroll options that go beyond the number of rows in the result set.

5. Tuning with arraysize

This section demonstrates a way to improve query performance by increasing the number of rows returned in each batch from Oracle to the Python program.

Row prefetching and array fetching are both internal buffering techniques to reduce round-trips to the database. The difference is the code layer that is doing the buffering, and when the buffering occurs.

First, create a table with a large number of rows.
Review **query\_arraysize.sql**: ```` create table bigtab (mycol varchar2(20)); begin for i in 1..20000 loop insert into bigtab (mycol) values (dbms_random.string('A',20)); end loop; end; / show errors commit; exit ```` In a terminal window run the script as: ```` <copy> sqlplus /nolog @query_arraysize.sql </copy> ```` ![](./images/step7.5-arraysizesql.png " ") Review the code contained in **query\_arraysize.py**: ```` import cx_Oracle import time import db_config con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn) start = time.time() cur = con.cursor() cur.prefetchrows = 100 cur.arraysize = 100 cur.execute("select * from bigtab") res = cur.fetchall() # print(res) # uncomment to display the query results elapsed = (time.time() - start) print(elapsed, "seconds") ```` This uses the 'time' module to measure elapsed time of the query. The prefetchrows and arraysize values are set to 100. This causes batches of 100 records at a time to be returned from the database to a cache in Python. This reduces the number of **roundtrips** made to the database, often reducing network load and reducing the number of context switches on the database server. The **fetchone()**, **fetchmany()** and **fetchall()** methods will read from the cache before requesting more data from the database. In a terminal window, run: ```` <copy> python3 query_arraysize.py </copy> ```` Rerun a few times to see the average times. ![](./images/step7.5-arraysizepy.png " ") Experiment with different arraysize values. For example, edit query\_arraysize.py and change the arraysize to: ```` cur.arraysize = 2000 ```` Rerun the **query_arraysize.py** script to compare the performance of different arraysize settings. ![](./images/step7.5-arraysizepy1.png " ") In general, larger array sizes improve performance. Depending on how fast your system is, you may need to use different arraysizes than those given here to see a meaningful time difference. There is a time/space tradeoff for increasing the values. Larger values will require more memory in Python for buffering the records. If you know the query returns a fixed number of rows, for example 20 rows, then set arraysize to 20 and prefetchrows to 21. The addition of one to prefetchrows prevents a round-trip to check for end-of-fetch. The statement execution and fetch will take a total of one round-trip. This minimizes load on the database. The default value of arraysize for cx\_Oracle is 100. If you know a query only returns a few records, decrease the arraysize from the default to reduce memory usage. ## Task 8: Binding Data Bind variables enable you to re-execute statements with new data values, without the overhead of reparsing the statement. Bind variables improve code reusability, and can reduce the risk of SQL injection attacks. 1. Binding in queries Review the code contained in **bind\_query.py**: ```` import cx_Oracle import db_config con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn) cur = con.cursor() cur.prepare("select * from dept where deptno = :id order by deptno") cur.execute(None, id = 20) res = cur.fetchall() print(res) cur.execute(None, id = 10) res = cur.fetchall() print(res) ```` The statement contains a bind variable ":id" placeholder. The statement is only prepared once but executed twice with different values for the WHERE clause. The special symbol "None" is used in place of the statement text argument to execute() because the prepare() method has already set the statement. 
The second argument to the execute() call can be a sequence (binding by position), a dictionary (binding by name), or an arbitrary number of named arguments (also binding by name), which is the approach used in bind\_query.py. In its first execute call, the value 20 is bound to "id"; the second call binds the value 10.

From a terminal window, run:

````
<copy>
cd ~/python/tutorial
python3 bind_query.py
</copy>
````

The output shows the details for the two departments.

![](./images/bindQueryOutput.png " " )

2. Binding in inserts

Review the code in **bind\_insert.sql** creating a table for inserting data:

````
create table mytab (id number, data varchar2(20), constraint my_pk primary key (id));
````

Run the script as:

````
<copy>
sqlplus /nolog @bind_insert.sql
</copy>
````

![](./images/bindInsertOutput.png " " )

Review the code contained in **bind\_insert.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

rows = [ (1, "First" ),
         (2, "Second" ),
         (3, "Third" ),
         (4, "Fourth" ),
         (5, "Fifth" ),
         (6, "Sixth" ),
         (7, "Seventh" ) ]

cur.executemany("insert into mytab(id, data) values (:1, :2)", rows)

# Now query the results back

cur2 = con.cursor()
cur2.execute('select * from mytab')
res = cur2.fetchall()
print(res)
````

The 'rows' array contains the data to be inserted. The executemany() call inserts all rows. This call uses "array binding", which is an efficient way to insert multiple records.

The final part of the script queries the results back and displays them as a list of tuples.

From a terminal window, run:

````
<copy>
python3 bind_insert.py
</copy>
````

![](./images/python_bind_insert.png " " )

The new rows are automatically rolled back at the end of the script, so re-running it will always show the same number of rows in the table.

3. Batcherrors

The Batcherrors feature allows invalid data to be identified while still allowing valid data to be inserted. Edit the data values in **bind\_insert.py** and create a row with a duplicate key.

````
<copy>
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

rows = [ (1, "First" ),
         (2, "Second" ),
         (3, "Third" ),
         (4, "Fourth" ),
         (5, "Fifth" ),
         (6, "Sixth" ),
         (6, "Duplicate" ),
         (7, "Seventh" ) ]

cur.executemany("insert into mytab(id, data) values (:1, :2)", rows)

# Now query the results back

cur2 = con.cursor()
cur2.execute('select * from mytab')
res = cur2.fetchall()
print(res)
</copy>
````

From a terminal window, run:

````
<copy>
python3 bind_insert.py
</copy>
````

The duplicate generates the error "ORA-00001: unique constraint (PYTHONHOL.MY\_PK) violated". The data is rolled back and the query returns no rows.
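Without batcherrors, one way to handle such failures individually is to trap the exception per statement. This is a hedged sketch, assuming a conflicting row with id 6 already exists in mytab:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

try:
    cur.execute("insert into mytab(id, data) values (:1, :2)", (6, "Duplicate"))
except cx_Oracle.IntegrityError as e:
    # The first element of args is a cx_Oracle error object with a message attribute
    error, = e.args
    print("Insert failed:", error.message)
````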
Edit the file again to enable batcherrors and query the results back. Copy and replace `bind_insert.py` with the text below.

````
<copy>
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

rows = [ (1, "First" ),
         (2, "Second" ),
         (3, "Third" ),
         (4, "Fourth" ),
         (5, "Fifth" ),
         (6, "Sixth" ),
         (6, "Duplicate" ),
         (7, "Seventh" ) ]

cur.executemany("insert into mytab(id, data) values (:1, :2)", rows, batcherrors = True)

for error in cur.getbatcherrors():
    print("Error", error.message.rstrip(), "at row offset", error.offset)

cur2 = con.cursor()
cur2.execute('select * from mytab')
res = cur2.fetchall()
print(res)
</copy>
````

Run the file:

````
<copy>
python3 bind_insert.py
</copy>
````

The new code shows the offending duplicate row: "ORA-00001: unique constraint (PYTHONHOL.MY\_PK) violated at row offset 6". This indicates the data value at offset 6 (counting from 0) had a problem. The other data gets inserted and is queried back.

At the end of the script, cx\_Oracle will roll back an uncommitted transaction. If you want to commit results, you can use:

````
con.commit()
````

To force cx\_Oracle to roll back, use:

````
con.rollback()
````

4. Binding named objects

cx\_Oracle can fetch and bind named object types such as Oracle's Spatial Data Objects (SDO).

In a terminal window, start SQL\*Plus using the lab credentials and connection string, such as:

````
<copy>
sqlplus pythonhol/welcome@localhost:1521/orclpdb
</copy>
````

Use the SQL\*Plus **DESCRIBE** command to look at the SDO definition:

````
<copy>
desc MDSYS.SDO_GEOMETRY
</copy>
````

It contains various attributes and methods. The top level description is:

````
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
SDO_GTYPE                                          NUMBER
SDO_SRID                                           NUMBER
SDO_POINT                                          MDSYS.SDO_POINT_TYPE
SDO_ELEM_INFO                                      MDSYS.SDO_ELEM_INFO_ARRAY
SDO_ORDINATES                                      MDSYS.SDO_ORDINATE_ARRAY
````

In the terminal type **exit** to exit from SQL\*Plus and review the code contained in **bind\_sdo.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

# Create table
cur.execute("""begin
                 execute immediate 'drop table testgeometry';
                 exception when others then
                   if sqlcode <> -942 then
                     raise;
                   end if;
               end;""")
cur.execute("""create table testgeometry (
               id number(9) not null,
               geometry MDSYS.SDO_GEOMETRY not null)""")

# Create and populate Oracle objects
typeObj = con.gettype("MDSYS.SDO_GEOMETRY")
elementInfoTypeObj = con.gettype("MDSYS.SDO_ELEM_INFO_ARRAY")
ordinateTypeObj = con.gettype("MDSYS.SDO_ORDINATE_ARRAY")
obj = typeObj.newobject()
obj.SDO_GTYPE = 2003
obj.SDO_ELEM_INFO = elementInfoTypeObj.newobject()
obj.SDO_ELEM_INFO.extend([1, 1003, 3])
obj.SDO_ORDINATES = ordinateTypeObj.newobject()
obj.SDO_ORDINATES.extend([1, 1, 5, 7])
print("Created object", obj)

# Add a new row
print("Adding row to table...")
cur.execute("insert into testgeometry values (1, :objbv)", objbv = obj)
print("Row added!")

# Query the row
print("Querying row just inserted...")
cur.execute("select id, geometry from testgeometry");
for row in cur:
    print(row)
````

This uses **gettype()** to get the database types of the SDO and its object attributes. The **newobject()** calls create Python representations of those objects. The Python object attributes are then set. Oracle VARRAY types such as SDO\_ELEM\_INFO\_ARRAY are set with extend().
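The object attribute names follow Oracle's (uppercase) naming. If you are unsure of a type's attributes, they can be listed from the type metadata; a small sketch assuming the same connection setup:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)

# List the attribute names of the SDO_GEOMETRY object type
typeObj = con.gettype("MDSYS.SDO_GEOMETRY")
for attr in typeObj.attributes:
    print(attr.name)
````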
Run the **bind_sdo.py** file:

````
<copy>
python3 bind_sdo.py
</copy>
````

![](./images/bindOutput.png " " )

The new SDO is shown as an object, similar to:

````
(1, <cx_Oracle.Object MDSYS.SDO_GEOMETRY at 0x104a76230>)
````

To show the attribute values, edit the query code section at the end of the **bind_sdo.py** file. Add a new method that traverses the object. The file below the existing comment "# (Change below here)" should look like:

````
<copy>
# (Change below here)

# Define a function to dump the contents of an Oracle object
def dumpobject(obj, prefix = "  "):
    if obj.type.iscollection:
        print(prefix, "[")
        for value in obj.aslist():
            if isinstance(value, cx_Oracle.Object):
                dumpobject(value, prefix + "  ")
            else:
                print(prefix + "  ", repr(value))
        print(prefix, "]")
    else:
        print(prefix, "{")
        for attr in obj.type.attributes:
            value = getattr(obj, attr.name)
            if isinstance(value, cx_Oracle.Object):
                print(prefix + "  " + attr.name + " :")
                dumpobject(value, prefix + "  ")
            else:
                print(prefix + "  " + attr.name + " :", repr(value))
        print(prefix, "}")

# Query the row
print("Querying row just inserted...")
cur.execute("select id, geometry from testgeometry")
for id, obj in cur:
    print("Id: ", id)
    dumpobject(obj)
</copy>
````

Run the file again:

````
<copy>
python3 bind_sdo.py
</copy>
````

This shows:

````
Querying row just inserted...
Id:  1
  {
    SDO_GTYPE : 2003
    SDO_SRID : None
    SDO_POINT : None
    SDO_ELEM_INFO :
     [
       1
       1003
       3
     ]
    SDO_ORDINATES :
     [
       1
       1
       5
       7
     ]
  }
````

To explore further, try setting the SDO attribute SDO\_POINT, which is of type SDO\_POINT\_TYPE.

The gettype() and newobject() methods can also be used to bind PL/SQL Records and Collections.

## Task 9: PL/SQL

PL/SQL is Oracle's procedural language extension to SQL. PL/SQL procedures and functions are stored and run in the database. Using PL/SQL lets all database applications reuse logic, no matter how the application accesses the database. Many data-related operations can be performed in PL/SQL faster than extracting the data into a program (for example, Python) and then processing it.

1. PL/SQL functions

Review **plsql\_func.sql** which creates a PL/SQL stored function myfunc() to insert a row into a new table named ptab and return double the inserted value.

````
create table ptab (mydata varchar(20), myid number);

create or replace function myfunc(d_p in varchar2, i_p in number) return number as
begin
  insert into ptab (mydata, myid) values (d_p, i_p);
  return (i_p * 2);
end;
/
````

Run the script using:

````
<copy>
sqlplus /nolog @plsql_func.sql
</copy>
````

![](./images/step9.1-plsqlfunc.png " ")

Review the code contained in **plsql\_func.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

res = cur.callfunc('myfunc', int, ('abc', 2))
print(res)
````

This uses **callfunc()** to execute the function. The second parameter is the type of the returned value. It should be one of the types supported by cx\_Oracle or one of the type constants defined by cx\_Oracle (such as cx\_Oracle.NUMBER). The two PL/SQL function parameters are passed as a tuple, binding them to the function parameter arguments.

From a terminal window, run:

````
<copy>
cd ~/python/tutorial
python3 plsql_func.py
</copy>
````

The output is the result of the PL/SQL function calculation.

````
4
````
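The return type can also be given as a cx\_Oracle type constant instead of a Python type. A sketch, assuming the same myfunc function:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

# cx_Oracle.NUMBER returns the result as a float rather than an int
res = cur.callfunc('myfunc', cx_Oracle.NUMBER, ('xyz', 5))
print(res)  # 10.0
````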
2. PL/SQL procedures

Review **plsql\_proc.sql** which creates a PL/SQL procedure myproc() to accept two parameters. The second parameter contains an OUT return value.

````
create or replace procedure myproc(v1_p in number, v2_p out number) as
begin
  v2_p := v1_p * 2;
end;
/
````

Run the script with:

````
<copy>
sqlplus /nolog @plsql_proc.sql
</copy>
````

![](./images/step9.2-plsqlproc.png " ")

Review the code contained in **plsql\_proc.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

myvar = cur.var(int)
cur.callproc('myproc', (123, myvar))
print(myvar.getvalue())
````

This creates an integer variable myvar to hold the value returned by the PL/SQL OUT parameter. The input number 123 and the output variable name are bound to the procedure call parameters using a tuple. To call the PL/SQL procedure, the **callproc()** method is used.

In a terminal window, run:

````
<copy>
python3 plsql_proc.py
</copy>
````

The **getvalue()** method displays the returned value.

````
246
````

## Task 10: Type Handlers

1. Basic output type handler

Output type handlers enable applications to change how data is fetched from the database. For example, numbers can be returned as strings or decimal objects. LOBs can be returned as strings or bytes.

A type handler is enabled by setting the outputtypehandler attribute on either a cursor or the connection. If set on a cursor it only affects queries executed by that cursor. If set on a connection it affects all queries executed on cursors created by that connection.

Review the code contained in **type\_output.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

print("Standard output...")
for row in cur.execute("select * from dept"):
    print(row)
````

In a terminal window, run:

````
<copy>
cd ~/python/tutorial
python3 type_output.py
</copy>
````

This shows the department number represented as digits like 10.

![](./images/step10.1-typeoutput.png " ")

Add an output type handler to the bottom of the **type\_output.py** file:

````
<copy>
def ReturnNumbersAsStrings(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.NUMBER:
        return cursor.var(str, 9, cursor.arraysize)

print("Output type handler output...")
cur = con.cursor()
cur.outputtypehandler = ReturnNumbersAsStrings
for row in cur.execute("select * from dept"):
    print(row)
</copy>
````

This type handler converts any number columns to strings with maximum size 9.

Run the script again:

````
<copy>
python3 type_output.py
</copy>
````

The new output shows the department numbers are now strings within quotes like '10'.

![](./images/step10.1-typeoutput2.png " ")
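To apply the same conversion to every cursor, the handler can instead be set on the connection. A short sketch that reuses the handler function from above:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)

def ReturnNumbersAsStrings(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.NUMBER:
        return cursor.var(str, 9, cursor.arraysize)

# Setting the handler on the connection affects all cursors it creates
con.outputtypehandler = ReturnNumbersAsStrings
cur = con.cursor()
for row in cur.execute("select * from dept"):
    print(row)
````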
2. Output type handlers and variable converters

When numbers are fetched from the database, the conversion from Oracle's decimal representation to Python's binary format may need careful handling. To avoid unexpected issues, the general recommendation is to do number operations in SQL or PL/SQL, or to use the decimal module in Python.

Output type handlers can be combined with variable converters to change how data is fetched.

Review **type\_converter.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

for value, in cur.execute("select 0.1 from dual"):
    print("Value:", value, "* 3 =", value * 3)
````

Run the file:

````
<copy>
python3 type_converter.py
</copy>
````

The output is like:

````
Value: 0.1 * 3 = 0.30000000000000004
````

Replace the text in the **type_converter.py** file with the text below to add a type handler that uses a Python decimal converter:

````
<copy>
import cx_Oracle
import decimal
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

def ReturnNumbersAsDecimal(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.NUMBER:
        return cursor.var(str, 9, cursor.arraysize, outconverter = decimal.Decimal)

cur.outputtypehandler = ReturnNumbersAsDecimal

for value, in cur.execute("select 0.1 from dual"):
    print("Value:", value, "* 3 =", value * 3)
</copy>
````

The Python decimal.Decimal converter gets called with the string representation of the Oracle number. The output from decimal.Decimal is returned in the output tuple.

Run the file again:

````
<copy>
python3 type_converter.py
</copy>
````

Output is like:

````
Value: 0.1 * 3 = 0.3
````

Although the code demonstrates the use of outconverter, in this particular case the variable can be created more simply by replacing the outputtypehandler function defined above. Replace the file with the text below.

````
<copy>
import cx_Oracle
import decimal
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

def ReturnNumbersAsDecimal(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.NUMBER:
        return cursor.var(decimal.Decimal, arraysize = cursor.arraysize)

cur.outputtypehandler = ReturnNumbersAsDecimal

for value, in cur.execute("select 0.1 from dual"):
    print("Value:", value, "* 3 =", value * 3)
</copy>
````
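The underlying issue is binary floating point, not the database. The decimal module alone shows the difference; a small Python-only sketch:

````
import decimal

# Binary float arithmetic accumulates representation error
print(0.1 * 3)                     # 0.30000000000000004

# Decimal arithmetic is exact for decimal values
print(decimal.Decimal("0.1") * 3)  # 0.3
````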
3. Input type handlers

Input type handlers enable applications to change how data is bound to statements, or to enable new types to be bound directly without having to be converted individually.

Review **type\_input.py**, which is similar to the final bind\_sdo.py from Task 8, with the addition of a new class and converter:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

# Create table
cur.execute("""begin
                 execute immediate 'drop table testgeometry';
                 exception when others then
                   if sqlcode <> -942 then
                     raise;
                   end if;
               end;""")
cur.execute("""create table testgeometry (
               id number(9) not null,
               geometry MDSYS.SDO_GEOMETRY not null)""")

# Create a Python class for an SDO
class mySDO(object):

    def __init__(self, gtype, elemInfo, ordinates):
        self.gtype = gtype
        self.elemInfo = elemInfo
        self.ordinates = ordinates

# Get Oracle type information
objType = con.gettype("MDSYS.SDO_GEOMETRY")
elementInfoTypeObj = con.gettype("MDSYS.SDO_ELEM_INFO_ARRAY")
ordinateTypeObj = con.gettype("MDSYS.SDO_ORDINATE_ARRAY")

# Convert a Python object to MDSYS.SDO_GEOMETRY
def SDOInConverter(value):
    obj = objType.newobject()
    obj.SDO_GTYPE = value.gtype
    obj.SDO_ELEM_INFO = elementInfoTypeObj.newobject()
    obj.SDO_ELEM_INFO.extend(value.elemInfo)
    obj.SDO_ORDINATES = ordinateTypeObj.newobject()
    obj.SDO_ORDINATES.extend(value.ordinates)
    return obj

def SDOInputTypeHandler(cursor, value, numElements):
    if isinstance(value, mySDO):
        return cursor.var(cx_Oracle.OBJECT, arraysize = numElements,
                          inconverter = SDOInConverter, typename = objType.name)

sdo = mySDO(2003, [1, 1003, 3], [1, 1, 5, 7])  # Python object
cur.inputtypehandler = SDOInputTypeHandler
cur.execute("insert into testgeometry values (:1, :2)", (1, sdo))

# Define a function to dump the contents of an Oracle object
def dumpobject(obj, prefix = "  "):
    if obj.type.iscollection:
        print(prefix, "[")
        for value in obj.aslist():
            if isinstance(value, cx_Oracle.Object):
                dumpobject(value, prefix + "  ")
            else:
                print(prefix + "  ", repr(value))
        print(prefix, "]")
    else:
        print(prefix, "{")
        for attr in obj.type.attributes:
            value = getattr(obj, attr.name)
            if isinstance(value, cx_Oracle.Object):
                print(prefix + "  " + attr.name + " :")
                dumpobject(value, prefix + "  ")
            else:
                print(prefix + "  " + attr.name + " :", repr(value))
        print(prefix, "}")

# Query the row
print("Querying row just inserted...")
cur.execute("select id, geometry from testgeometry")
for (id, obj) in cur:
    print("Id: ", id)
    dumpobject(obj)
````

In the new file, a Python class mySDO is defined, which has attributes corresponding to each Oracle MDSYS.SDO\_GEOMETRY attribute. The mySDO class is used lower in the code to create a Python instance:

````
sdo = mySDO(2003, [1, 1003, 3], [1, 1, 5, 7])
````

which is then directly bound into the INSERT statement like:

````
cur.execute("insert into testgeometry values (:1, :2)", (1, sdo))
````

The mapping between Python and Oracle objects is handled in SDOInConverter, which uses the cx\_Oracle newobject() and extend() methods to create an Oracle object from the Python object values. The SDOInConverter method is called by the input type handler SDOInputTypeHandler whenever an instance of mySDO is inserted with the cursor.

To confirm the behavior, run the file:

````
<copy>
python3 type_input.py
</copy>
````

![](./images/step10.3-typeinput.png " ")
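An input type handler can also be registered on the connection rather than on a single cursor, so that every cursor accepts mySDO instances. A short sketch, assuming the definitions from type\_input.py above are in scope:

````
# Assumes con, mySDO and SDOInputTypeHandler from type_input.py are already defined
con.inputtypehandler = SDOInputTypeHandler
cur2 = con.cursor()
cur2.execute("insert into testgeometry values (:1, :2)",
             (2, mySDO(2003, [1, 1003, 3], [2, 2, 6, 8])))
````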
## Task 11: LOBs

Oracle Database "LOB" long objects can be streamed using a LOB locator, or worked with directly as strings or bytes.

1. Fetching a CLOB using a locator

Review the code contained in **clob.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

print("Inserting data...")
cur.execute("truncate table testclobs")
longString = ""
for i in range(5):
    char = chr(ord('A') + i)
    longString += char * 250
    cur.execute("insert into testclobs values (:1, :2)",
                (i + 1, "String data " + longString + ' End of string'))
con.commit()

print("Querying data...")
cur.prepare("select * from testclobs where id = :id")
cur.execute(None, {'id': 1})
(id, clob) = cur.fetchone()
print("CLOB length:", clob.size())
clobdata = clob.read()
print("CLOB data:", clobdata)
````

This inserts some test string data and then fetches one record into clob, which is a cx\_Oracle character LOB Object. Methods on LOB include size() and read().

To see the output, run the file:

````
<copy>
cd ~/python/tutorial
python3 clob.py
</copy>
````

![](./images/clobOutput.png " " )

Edit the file and experiment with reading chunks of data by giving a start character position and length, such as clob.read(1,10).

2. Fetching a CLOB as a string

For CLOBs small enough to fit in the application memory, it is much faster to fetch them directly as strings.

Review the code contained in **clob\_string.py**. Note the differences from clob.py:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

print("Inserting data...")
cur.execute("truncate table testclobs")
longString = ""
for i in range(5):
    char = chr(ord('A') + i)
    longString += char * 250
    cur.execute("insert into testclobs values (:1, :2)",
                (i + 1, "String data " + longString + ' End of string'))
con.commit()

def OutputTypeHandler(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize = cursor.arraysize)

con.outputtypehandler = OutputTypeHandler

print("Querying data...")
cur.prepare("select * from testclobs where id = :id")
cur.execute(None, {'id': 1})
(id, clobdata) = cur.fetchone()
print("CLOB length:", len(clobdata))
print("CLOB data:", clobdata)
````

The OutputTypeHandler causes cx\_Oracle to fetch the CLOB as a string. Standard Python string functions such as len() can be used on the result.

The output is the same as for clob.py. To check, run the file:

````
<copy>
python3 clob_string.py
</copy>
````

## Task 12: Rowfactory functions

Rowfactory functions enable queries to return objects other than tuples. They can be used to provide names for the various columns or to return custom objects.

1. Rowfactory for mapping column names

Review the code contained in **rowfactory.py**:

````
import collections
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

cur.execute("select deptno, dname from dept")
rows = cur.fetchall()

print('Array indexes:')
for row in rows:
    print(row[0], "->", row[1])

print('Loop target variables:')
for c1, c2 in rows:
    print(c1, "->", c2)
````

This shows two methods of accessing result set items from a data row. The first uses array indexes like row[0]. The second uses loop target variables which take the values of each row tuple.

Run the file:

````
<copy>
cd ~/python/tutorial
python3 rowfactory.py
</copy>
````

![](./images/rowFactoryOutput.png " " )

Both access methods give the same results.
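Column names are also available from the cursor's standard description attribute; a small sketch assuming the same query:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()
cur.execute("select deptno, dname from dept")

# Each description entry is (name, type, display_size, internal_size, precision, scale, null_ok)
colnames = [d[0] for d in cur.description]
print(colnames)  # ['DEPTNO', 'DNAME']
````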
To use a rowfactory function, edit rowfactory.py and add this code at the bottom:

````
<copy>
print('Rowfactory:')
cur.execute("select deptno, dname from dept")
cur.rowfactory = collections.namedtuple("MyClass", ["DeptNumber", "DeptName"])
rows = cur.fetchall()
for row in rows:
    print(row.DeptNumber, "->", row.DeptName)
</copy>
````

This uses the Python factory function namedtuple() to create a subclass of tuple that allows access to the elements via indexes or the given field names.

The print() function shows the use of the new named tuple fields. This coding style can help reduce coding errors.

Run the script again:

````
<copy>
python3 rowfactory.py
</copy>
````

The output results are the same.

![](./images/step12.1-rowfactory.png " ")

## Task 13: Subclassing Connections and Cursors

1. Subclassing connections

Review the code contained in **subclass.py**:

````
import cx_Oracle
import db_config

class MyConnection(cx_Oracle.Connection):

    def __init__(self):
        print("Connecting to database")
        return super(MyConnection, self).__init__(db_config.user, db_config.pw, db_config.dsn)

con = MyConnection()
cur = con.cursor()

cur.execute("select count(*) from emp where deptno = :bv", (10,))
count, = cur.fetchone()
print("Number of rows:", count)
````

This creates a new class "MyConnection" that inherits from the cx\_Oracle Connection class. The `__init__` method is invoked when an instance of the new class is created. It prints a message and calls the base class, passing the connection credentials.

In the application, the code:

````
con = MyConnection()
````

does not need to supply any credentials, as they are embedded in the custom subclass. All the cx_Oracle methods such as cursor() are available, as shown by the query.

Run the file:

````
<copy>
cd ~/python/tutorial
python3 subclass.py
</copy>
````

The query executes successfully.

2. Subclassing cursors

Edit **subclass.py** and extend the cursor() method with a new MyCursor class. Copy and replace the text in subclass.py with the text below.

````
<copy>
import cx_Oracle
import db_config

class MyConnection(cx_Oracle.Connection):

    def __init__(self):
        print("Connecting to database")
        return super(MyConnection, self).__init__(db_config.user, db_config.pw, db_config.dsn)

    def cursor(self):
        return MyCursor(self)

class MyCursor(cx_Oracle.Cursor):

    def execute(self, statement, args):
        print("Executing:", statement)
        print("Arguments:")
        for argIndex, arg in enumerate(args):
            print("  Bind", argIndex + 1, "has value", repr(arg))
        return super(MyCursor, self).execute(statement, args)

    def fetchone(self):
        print("Fetchone()")
        return super(MyCursor, self).fetchone()

con = MyConnection()
cur = con.cursor()

cur.execute("select count(*) from emp where deptno = :bv", (10,))
count, = cur.fetchone()
print("Number of rows:", count)
</copy>
````

When the application gets a cursor from the MyConnection class, the new cursor() method returns an instance of our new MyCursor class.

The "application" query code remains unchanged. The new execute() and fetchone() methods of the MyCursor class get invoked. They do some logging and invoke the parent methods to do the actual statement execution.

To confirm this, run the file again:

````
<copy>
python3 subclass.py
</copy>
````
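The same pattern extends to any other cursor method you want to instrument. As a sketch (not part of the lab files), logging could be added to executemany() in the same way:

````
import cx_Oracle

class MyCursor(cx_Oracle.Cursor):

    def executemany(self, statement, parameters):
        # Log the statement and the number of parameter rows before delegating
        print("Executing many:", statement)
        print("Number of rows:", len(parameters))
        return super(MyCursor, self).executemany(statement, parameters)
````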
## Task 14: Advanced Queueing

1. Message passing with Oracle Advanced Queuing

Review **aq.py**:

````
import cx_Oracle
import decimal
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
cur = con.cursor()

BOOK_TYPE_NAME = "UDT_BOOK"
QUEUE_NAME = "BOOKS"
QUEUE_TABLE_NAME = "BOOK_QUEUE_TABLE"

# Cleanup
cur.execute(
    """begin
         dbms_aqadm.stop_queue('""" + QUEUE_NAME + """');
         dbms_aqadm.drop_queue('""" + QUEUE_NAME + """');
         dbms_aqadm.drop_queue_table('""" + QUEUE_TABLE_NAME + """');
         execute immediate 'drop type """ + BOOK_TYPE_NAME + """';
         exception when others then
           if sqlcode <> -24010 then
             raise;
           end if;
       end;""")

# Create a type
print("Creating books type UDT_BOOK...")
cur.execute("""
    create type %s as object (
      title varchar2(100),
      authors varchar2(100),
      price number(5,2)
    );""" % BOOK_TYPE_NAME)

# Create queue table and queue and start the queue
print("Creating queue table...")
cur.callproc("dbms_aqadm.create_queue_table",
             (QUEUE_TABLE_NAME, BOOK_TYPE_NAME))
cur.callproc("dbms_aqadm.create_queue", (QUEUE_NAME, QUEUE_TABLE_NAME))
cur.callproc("dbms_aqadm.start_queue", (QUEUE_NAME,))

booksType = con.gettype(BOOK_TYPE_NAME)
queue = con.queue(QUEUE_NAME, booksType)

# Enqueue a few messages
print("Enqueuing messages...")

BOOK_DATA = [
    ("The Fellowship of the Ring", "Tolkien, J.R.R.",
     decimal.Decimal("10.99")),
    ("Harry Potter and the Philosopher's Stone", "Rowling, J.K.",
     decimal.Decimal("7.99"))
]

for title, authors, price in BOOK_DATA:
    book = booksType.newobject()
    book.TITLE = title
    book.AUTHORS = authors
    book.PRICE = price
    print(title)
    queue.enqOne(con.msgproperties(payload=book))
    con.commit()

# Dequeue the messages
print("\nDequeuing messages...")
queue.deqOptions.wait = cx_Oracle.DEQ_NO_WAIT
while True:
    props = queue.deqOne()
    if not props:
        break
    print(props.payload.TITLE)
    con.commit()

print("\nDone.")
````

This file sets up Advanced Queuing using Oracle's DBMS\_AQADM package. The queue is used for passing Oracle UDT\_BOOK objects. The file uses AQ interface features enhanced in cx\_Oracle 7.2.

Run the file:

````
<copy>
cd ~/python/tutorial
python3 aq.py
</copy>
````

The output shows messages being queued and dequeued.

![](./images/step14.1-aq.png " ")

To experiment, split the code into three files: one to create and start the queue, and two other files to enqueue and dequeue messages. Experiment with running the enqueue and dequeue files concurrently in separate terminal windows.

Try removing the commit() call in **aq-dequeue.py**. Now run **aq-enqueue.py** once and then **aq-dequeue.py** several times. The same messages will be available each time you try to dequeue them.

Change **aq-dequeue.py** to commit in a separate transaction by changing the "visibility" setting:

````
queue.deqOptions.visibility = cx_Oracle.DEQ_IMMEDIATE
````

This gives the same behavior as the original code.

Now change the options of enqueued messages so that they expire from the queue if they have not been dequeued after four seconds:

````
queue.enqOne(con.msgproperties(payload=book, expiration=4))
````

Now run **aq-enqueue.py** and **wait four seconds** before you run **aq-dequeue.py**. There should be no messages to dequeue.

If you are stuck, look in the **solutions** directory at the aq-dequeue.py, aq-enqueue.py and aq-queuestart.py files.
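For reference, a minimal enqueue-only sketch, showing one possible shape for such a file (assuming aq.py has already created and started the BOOKS queue):

````
import cx_Oracle
import decimal
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)

# Reuse the existing type and queue created by aq.py
booksType = con.gettype("UDT_BOOK")
queue = con.queue("BOOKS", booksType)

book = booksType.newobject()
book.TITLE = "The Two Towers"
book.AUTHORS = "Tolkien, J.R.R."
book.PRICE = decimal.Decimal("11.99")

queue.enqOne(con.msgproperties(payload=book))
con.commit()  # make the message visible to dequeuers
````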
## Task 15: Simple Oracle Document Access (SODA)

Simple Oracle Document Access is a set of NoSQL-style APIs. Documents can be inserted, queried, and retrieved from Oracle Database using a set of NoSQL-style cx\_Oracle methods. By default, documents are JSON strings. SODA APIs exist in many languages.

1. Inserting JSON Documents

Review **soda.py**:

````
import cx_Oracle
import db_config

con = cx_Oracle.connect(db_config.user, db_config.pw, db_config.dsn)
soda = con.getSodaDatabase()

collection = soda.createCollection("friends")

content = {'name': 'Jared', 'age': 35, 'address': {'city': 'Melbourne'}}

doc = collection.insertOneAndGet(content)

key = doc.key
doc = collection.find().key(key).getOne()
content = doc.getContent()
print('Retrieved SODA document dictionary is:')
print(content)
````

**soda.createCollection()** will create a new collection, or open an existing one if the name is already in use.

**insertOneAndGet()** inserts the content of a document into the database and returns a SODA Document Object. This allows access to metadata such as the document key. By default, document keys are automatically generated.

The **find()** method is used to begin an operation that will act upon documents in the collection.

**content** is a dictionary. You can also get a JSON string by calling **doc.getContentAsString()**.

Run the file:

````
<copy>
cd ~/python/tutorial
python3 soda.py
</copy>
````

The output shows the content of the new document.

![](./images/step15.1-soda.png " ")

2. Searching SODA Documents

Extend **soda.py** to insert some more documents and perform a find filter operation:

````
<copy>
myDocs = [
    {'name': 'Gerald', 'age': 21, 'address': {'city': 'London'}},
    {'name': 'David', 'age': 28, 'address': {'city': 'Melbourne'}},
    {'name': 'Shawn', 'age': 20, 'address': {'city': 'San Francisco'}}
]
collection.insertMany(myDocs)

filterSpec = { "address.city": "Melbourne" }
myDocuments = collection.find().filter(filterSpec).getDocuments()

print('Melbourne people:')
for doc in myDocuments:
    print(doc.getContent()["name"])
</copy>
````

Run the script again:

````
<copy>
python3 soda.py
</copy>
````

The find operation filters the collection and returns documents where the city is Melbourne. Note the **insertMany()** method is currently in preview.

![](./images/step15.2-soda1.png " ")

SODA supports query by example (QBE) with an extensive set of operators. Extend **soda.py** with a QBE to find documents where the age is less than 25:

````
<copy>
filterSpec = {'age': {'$lt': 25}}
myDocuments = collection.find().filter(filterSpec).getDocuments()

print('Young people:')
for doc in myDocuments:
    print(doc.getContent()["name"])
</copy>
````

Running the script displays the names.

![](./images/step15.2-soda2.png " ")

## Conclusion

In this lab, you had an opportunity to try out connecting Python to the Oracle Database. You have learned how to:

* Create connections
* Use cx\_Oracle connection pooling and Database Resident Connection Pooling
* Execute queries and fetch data
* Use bind variables
* Use PL/SQL stored functions and procedures
* Extend cx\_Oracle classes
* Use Oracle Advanced Queuing

An additional lab on using Python is available in the New Features for Developers lab.

## Acknowledgements

* **Author** - Christopher Jones, Anthony Tuininga
* **Contributors** - Jaden McElvey, Anoosha Pilli, Troy Anthony
* **Last Updated By/Date** - Troy Anthony, DB Product Management, December 2020
31.966652
646
0.654552
eng_Latn
0.978658
1151d587a9181e2a096c5fe02ad7c97db9849447
170
md
Markdown
docs-en/api/api-SmartVision-Devices/graphics-and-ui-subsystem.md
PigSkin-Hup/hongmengdocs
7958dbc646cedf526b7013728897ce32cc21c1d4
[ "Apache-2.0" ]
null
null
null
docs-en/api/api-SmartVision-Devices/graphics-and-ui-subsystem.md
PigSkin-Hup/hongmengdocs
7958dbc646cedf526b7013728897ce32cc21c1d4
[ "Apache-2.0" ]
null
null
null
docs-en/api/api-SmartVision-Devices/graphics-and-ui-subsystem.md
PigSkin-Hup/hongmengdocs
7958dbc646cedf526b7013728897ce32cc21c1d4
[ "Apache-2.0" ]
null
null
null
# Graphics and UI Subsystem<a name="EN-US_TOPIC_0000001055518040"></a> - **[Graphic](graphic.md)** - **[Surface](surface.md)** - **[Window](window.md)**
17
70
0.594118
yue_Hant
0.377676
1153092bc4f75b1161a54079eec3ad45ff2d6732
1,708
md
Markdown
articles/sql-database/sql-database-managed-instance-find-management-endpoint-ip-address.md
brndkfr/azure-docs
454958b6a0d5c89dfcd213eccc30e8fd7d1eb415
[ "CC-BY-4.0", "MIT" ]
2
2019-08-30T23:39:11.000Z
2019-08-30T23:39:14.000Z
articles/sql-database/sql-database-managed-instance-find-management-endpoint-ip-address.md
brndkfr/azure-docs
454958b6a0d5c89dfcd213eccc30e8fd7d1eb415
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-managed-instance-find-management-endpoint-ip-address.md
brndkfr/azure-docs
454958b6a0d5c89dfcd213eccc30e8fd7d1eb415
[ "CC-BY-4.0", "MIT" ]
1
2020-01-21T11:50:52.000Z
2020-01-21T11:50:52.000Z
--- title: Discover Azure SQL Database Managed Instance management endpoint | Microsoft Docs description: Learn how to get Azure SQL Database Managed Instance management endpoint public IP address and verify its built-in firewall protection services: sql-database ms.service: sql-database ms.subservice: managed-instance ms.custom: ms.devlang: ms.topic: conceptual author: srdan-bozovic-msft ms.author: srbozovi ms.reviewer: sstein, carlrab ms.date: 12/04/2018 --- # Determine the management endpoint IP address The Azure SQL Database Managed Instance virtual cluster contains a management endpoint that Microsoft uses for management operations. The management endpoint is protected with a built-in firewall on the network level and mutual certificate verification on the application level. You can determine the IP address of the management endpoint, but you can't access this endpoint. To determine the management IP address, do a DNS lookup on your managed instance FQDN: `mi-name.zone_id.database.windows.net`. This will return a DNS entry that's like `trx.region-a.worker.vnet.database.windows.net`. You can then do a DNS lookup on this FQDN with ".vnet" removed. This will return the management IP address. This PowerShell will do it all for you if you replace \<MI FQDN\> with the DNS entry of your managed instance: `mi-name.zone_id.database.windows.net`: ``` powershell $MIFQDN = "<MI FQDN>" resolve-dnsname $MIFQDN | select -first 1 | %{ resolve-dnsname $_.NameHost.Replace(".vnet","")} ``` For more information about Managed Instances and connectivity, see [Azure SQL Database Managed Instance Connectivity Architecture](sql-database-managed-instance-connectivity-architecture.md).
58.896552
375
0.790984
eng_Latn
0.923731
11543833831145904dd0b47d0032859d5bc52d67
5,722
md
Markdown
articles/cosmos-db/working-with-dates.md
IrisClasson/azure-docs.sv-se
a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cosmos-db/working-with-dates.md
IrisClasson/azure-docs.sv-se
a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cosmos-db/working-with-dates.md
IrisClasson/azure-docs.sv-se
a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Working with dates in Azure Cosmos DB
description: Learn how to store, index, and query date/time objects in Azure Cosmos DB
ms.service: cosmos-db
author: SnehaGunda
ms.author: sngun
ms.topic: conceptual
ms.date: 04/03/2020
ms.openlocfilehash: 2f31ee7f7d60a3bf0ab56b9ed8aa7fd25774e06c
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 07/02/2020
ms.locfileid: "85412557"
---
# <a name="working-with-dates-in-azure-cosmos-db"></a>Working with dates in Azure Cosmos DB

Azure Cosmos DB provides schema flexibility and rich indexing via a native [JSON](https://www.json.org) data model. All Azure Cosmos DB resources, including databases, containers, documents, and stored procedures, are modeled and stored as JSON documents. As a requirement for being portable, JSON (and Azure Cosmos DB) supports only a small set of basic types: String, Number, Boolean, Array, Object, and Null. However, JSON is flexible and allows developers and frameworks to represent more complex types using these primitives and composing them as objects or arrays.

In addition to the basic types, many applications need the DateTime type to represent dates and timestamps. This article describes how developers can store, retrieve, and query dates in Azure Cosmos DB using the .NET SDK.

## <a name="storing-datetimes"></a>Storing DateTimes

Azure Cosmos DB supports JSON types such as String, Number, Boolean, Null, Array, and Object. It does not directly support a DateTime type. Currently, Azure Cosmos DB doesn't support localization of dates, so you need to store DateTimes as strings. The recommended format for DateTime strings in Azure Cosmos DB is `yyyy-MM-ddTHH:mm:ss.fffffffZ`, which follows the ISO 8601 UTC standard. It is recommended to store all dates in Azure Cosmos DB as UTC. Converting the date strings to this format allows sorting dates lexicographically. If non-UTC dates are stored, the logic must be handled on the client side. To convert a local DateTime to UTC, the offset must be known or stored as a property in the JSON, and the client can use the offset to compute the UTC DateTime value.

Range queries with DateTime strings as filters are only supported if the DateTime strings are all in UTC and of the same length. In Azure Cosmos DB, the [GetCurrentDateTime](sql-query-getcurrentdatetime.md) system function returns the current UTC date and time as an ISO 8601 string value in the format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.

Most applications can use the default string representation for DateTime for the following reasons:

* Strings can be compared, and the relative ordering of DateTime values is preserved when they are transformed to strings.
* This approach doesn't require any custom code or attributes for JSON conversion.
* The dates as stored in JSON are human readable.
* This approach can take advantage of Azure Cosmos DB's index for fast query performance.
For example, the following snippet stores an `Order` object containing two DateTime properties, `ShipDate` and `OrderDate`, as a document using the .NET SDK:

```csharp
public class Order
{
    [JsonProperty(PropertyName="id")]
    public string Id { get; set; }
    public DateTime OrderDate { get; set; }
    public DateTime ShipDate { get; set; }
    public double Total { get; set; }
}

await container.CreateItemAsync(
    new Order
    {
        Id = "09152014101",
        OrderDate = DateTime.UtcNow.AddDays(-30),
        ShipDate = DateTime.UtcNow.AddDays(-14),
        Total = 113.39
    });
```

This document is stored in Azure Cosmos DB as follows:

```json
{
    "id": "09152014101",
    "OrderDate": "2014-09-15T23:14:25.7251173Z",
    "ShipDate": "2014-09-30T23:14:25.7251173Z",
    "Total": 113.39
}
```

Alternatively, you can store DateTimes as Unix timestamps, that is, as a number representing the number of elapsed seconds since January 1, 1970. Azure Cosmos DB's internal Timestamp (`_ts`) property follows this approach. You can use the [UnixDateTimeConverter](https://msdn.microsoft.com/library/azure/microsoft.azure.documents.unixdatetimeconverter.aspx) class to serialize DateTime values as numbers.

## <a name="querying-datetimes-in-linq"></a>Querying DateTimes in LINQ

The SQL .NET SDK automatically supports querying data stored in Azure Cosmos DB via LINQ. For example, the following snippet shows a LINQ query that filters orders that were shipped in the last three days:

```csharp
IQueryable<Order> orders = container.GetItemLinqQueryable<Order>(allowSynchronousQueryExecution: true).Where(o => o.ShipDate >= DateTime.UtcNow.AddDays(-3));
```

This is translated to the following SQL statement and executed on Azure Cosmos DB:

```sql
SELECT * FROM root WHERE (root["ShipDate"] >= "2014-09-30T23:14:25.7251173Z")
```

You can learn more about Azure Cosmos DB's SQL query language and the LINQ provider at [Querying Cosmos DB in LINQ](sql-query-linq-to-sql.md).

## <a name="indexing-datetimes-for-range-queries"></a>Indexing DateTimes for range queries

Queries with DateTime values are common. To execute these queries efficiently, you must have an index defined on any properties in the query filter. You can learn more about configuring indexing policies at [Azure Cosmos DB indexing policies](index-policy.md).

## <a name="next-steps"></a>Next steps

* Download and run the [code samples on GitHub](https://github.com/Azure/azure-documentdb-dotnet/tree/master/samples/code-samples)
* Learn more about [SQL queries](sql-query-getting-started.md)
* Learn more about [Azure Cosmos DB indexing policies](index-policy.md)
58.989691
802
0.761971
swe_Latn
0.996392
1154f471f5896b1ce6229ef2270de9c295289a08
1,866
md
Markdown
README.md
quantumferret/eve-hoplon
87d87cbfecfece0fec0a67284bf85570b6dcd212
[ "MIT" ]
null
null
null
README.md
quantumferret/eve-hoplon
87d87cbfecfece0fec0a67284bf85570b6dcd212
[ "MIT" ]
null
null
null
README.md
quantumferret/eve-hoplon
87d87cbfecfece0fec0a67284bf85570b6dcd212
[ "MIT" ]
null
null
null
# eve-hoplon A discord bot that will authenticate new members joining an Eve corporation's discord server through the EVE SSO authentication flow. Very much a work in progress at this point, and a bit messy. Refactoring to further separate business and data logic is currently next on my to-do list. This project is largely for fun and to improve at various aspects of (web) programming, but it could have some utility to Eve corporations if/when finished and properly deployed... There is a dearth of lightweight options for EVE corporations (smaller ones in particular, e.g. a membership count in the low hundreds or less) looking for a Discord bot capable of automating server roles and privileges that is smoothly integrated with the Eve SSO authentication flow. Current options are either locked behind an (in-game currency) paywall and not open-source, or require a fair amount of setup by the user. ### Run This repository hasn't really been set up to be run by someone else, yet. There are many excellent guides for how to add a bot to a server, create a bot, etc. To start the server and actually use it, you'll need a bot token of your own, as well as an Eve developer account with an associated client id, secret key, and callback URL for Eve Online's authentication flow. If you've obtained all of that and created the .env and config.json files the program gets those values from, then follow the steps below. To run locally: 1. `git clone https://github.com/quantumferret/eve-hoplon.git` 2. `npm i` 3. `npm run start` This will start the server and bot with hot-reloading enabled. Eventually, when this project is closer to the finish line, it will be hosted on a remote server, and one will simply need to add the bot to a server to try it out, and for developers, starter configuration files that will be provided for local development.
62.2
133
0.784566
eng_Latn
0.999758
11550225c793c389c7318f501203cd9564c7242a
2,145
md
Markdown
_posts/2018-08-07-provn-editor.md
TomasKulhanek/blog
92c4b0295e4eac4e175260c1befe5a500ec4b1ac
[ "MIT" ]
null
null
null
_posts/2018-08-07-provn-editor.md
TomasKulhanek/blog
92c4b0295e4eac4e175260c1befe5a500ec4b1ac
[ "MIT" ]
null
null
null
_posts/2018-08-07-provn-editor.md
TomasKulhanek/blog
92c4b0295e4eac4e175260c1befe5a500ec4b1ac
[ "MIT" ]
null
null
null
---
layout: post
title: ACE Editor for PROVN
---

The presented editor integrates the [ACE editor](https://ace.c9.io/) with custom syntax highlighting and syntax validation using the [ANTLR4](http://www.antlr.org/) grammar for [PROV-N](https://github.com/antlr/grammars-v4/tree/master/prov-n).

![PROV-N editor screenshot](https://raw.githubusercontent.com/h2020-westlife-eu/prov-n-editor/master/Editor1.PNG)

[PROV-N](https://www.w3.org/TR/2013/REC-prov-n-20130430/) is a standard notation for recording the provenance of data, and is part of the [W3C PROV-O Ontology](https://www.w3.org/TR/prov-o/) recommendation.

The proof-of-concept sample editor can
- communicate with other web apps using the cross-document messaging API (window.opener.postMessage()).
- be filled with initial content using a hash parameter:
  - 'content' for plain text, e.g.: https://h2020-westlife-eu.github.io/prov-n-editor/#content=document%0Dentity(e1)%0DendDocument
  - 'contentBase64' for base64-encoded text
  - 'contentLZ' for LZ-compressed and base64-encoded text

## Live demo
- [h2020-westlife-eu.github.io/prov-n-editor/](https://h2020-westlife-eu.github.io/prov-n-editor/)
- [Empty document](https://h2020-westlife-eu.github.io/prov-n-editor/)
- [short document](https://h2020-westlife-eu.github.io/prov-n-editor/#content=document%0Dentity(e1)%0DendDocument)
- [longer document encoded base64](https://h2020-westlife-eu.github.io/prov-n-editor/#contentBase64=ZG9jdW1lbnQNCiAgZGVmYXVsdCA8aHR0cDovL2Fub3RoZXJleGFtcGxlLm9yZy8+DQogIHByZWZpeCBleCA8aHR0cDovL2V4YW1wbGUub3JnLz4NCiAgZW50aXR5KGUyLCBbIHByb3Y6dHlwZT0iRmlsZSIsIGV4OnBhdGg9Ii9zaGFyZWQvY3JpbWUudHh0IiwgZXg6Y3JlYXRvcj0iQWxpY2UiLCANCiAgICAgICAgICAgICAgIGV4OmNvbnRlbnQ9IlRoZXJlIHdhcyBhIGxvdCBvZiBjcmltZSBpbiBMb25kb24gbGFzdCBtb250aC4iXSkNCiAgICAgICAgICAgICAgIGFjdGl2aXR5KGExLCAyMDExLTExLTE2VDE2OjA1OjAwLCAtLCBbcHJvdjp0eXBlPSJlZGl0Il0pDQogIHdhc0dlbmVyYXRlZEJ5KGUyLCBhMSwgLSwgW2V4OmZjdD0ic2F2ZSJdKSAgICAgDQogIHdhc0Fzc29jaWF0ZWRXaXRoKGExLCBhZzIsIC0sIFtwcm92OnJvbGU9ImF1dGhvciJdKQ0KICBhZ2VudChhZzIsIFsgcHJvdjp0eXBlPSdwcm92OlBlcnNvbicsIGV4Om5hbWU9IkJvYiIgXSkNCmVuZERvY3VtZW50)

## Source code
- https://github.com/h2020-westlife-eu/prov-n-editor
82.5
761
0.830303
yue_Hant
0.496665
11560fb10ef91b55bba7bf9f6361855ded741e37
10,840
md
Markdown
_posts/2021-02-21-effective-software-engineering-10-tips.md
fayvor/fayvor.github.io
5f2bb7fa7499bff0cc4104f206beab1e73afc7df
[ "MIT" ]
null
null
null
_posts/2021-02-21-effective-software-engineering-10-tips.md
fayvor/fayvor.github.io
5f2bb7fa7499bff0cc4104f206beab1e73afc7df
[ "MIT" ]
2
2021-09-28T03:07:06.000Z
2022-02-26T08:08:31.000Z
_posts/2021-02-21-effective-software-engineering-10-tips.md
fayvor/fayvor.github.io
5f2bb7fa7499bff0cc4104f206beab1e73afc7df
[ "MIT" ]
null
null
null
---
layout: post
title: "Effective Software Engineering"
---

For a prettier version, please see the article [Effective Software Engineering](https://fayvor.medium.com/effective-software-engineering-10-tips-to-take-to-your-next-job-58ed3a126618) on Medium.

Programming is a powerful skill, and I’ve been using it for the past 2 decades to make a living, pursue scientific interests and try and have a positive impact in the world. I’ve had a lot of fun, made friends, and helped some companies do things they might not have done without my involvement. I’ve also made many mistakes along the way, wasted plenty of time, struggled and injured myself on more than a few sharp edges. Below are a few of the lessons I’ve learned about how to apply my software engineering skills effectively and to a positive end. I hope new engineers just getting into the field, and even more experienced ones, will take something from this list.

Keep in mind: this is what has worked for me, and your mileage may vary. My experience is largely with small startup engineering teams of 10 people or fewer. You may find a different combination of techniques works better for you, given your interests, aptitudes and environment.

Please let me know in the comments if anything in this article is helpful to you.

### 1. Know your customer.

Whether your customer is internal or external, you are always developing software, directly or indirectly, to serve a person. What you know about that person’s needs and wants can guide the decisions you are empowered to make. For an external customer, you will often not have direct access to them, in which case, learn about them through the product and customer success teams. If your customer is internal, make direct contact with them if you can. Perform the product function of gathering user feedback if nobody is doing that for you.

### 2. Ship Early and Often.

When I went scuba diving in the Caribbean, I learned a saying: “Equalize early and often.” This refers to equalizing the pressure in your inner ear as you dive or ascend, before it builds up to the point where equalization becomes difficult and the pressure damaging.

Most senior software engineers I’ve worked with know the value of shipping, and are always looking for opportunities to ship what they’ve built. Especially those engineers who have taken contract work. In some environments this might mean getting feature acceptance and merging into the main branch; in other cases, this might mean deploying to production.

Getting your code working is one thing. Getting it merged and into production is another. It takes time and care to tidy up your code, document, go through code review, get sign-off from product, resolve conflicts, merge, and deploy. It might seem like excessive overhead for a small feature, and you may feel the desire to bundle more work together, or optimize a bit more, before delivering your work. But shipping early and often is essential to shortening the feedback loop with customers, which is a key part of agile software development. You will also find it gratifying for the product and business teams to see a steady stream of features getting released.

### 3. Set Daily Targets.

In preparing for battle I have always found that plans are useless, but planning is indispensable. — Dwight D. Eisenhower

I always introduce a daily standup to teams at companies where I work that don’t already have one.
I see the act of recapitulating yesterday’s accomplishments, and projecting today’s, as reflective practice and an exercise in mindfulness. It orients me at a place and time, and pushes me to break my work up into activities that can fit into a day and be summarized and communicated at that scale. For an individual contributor, this parallels the work that is done at the product and project management level to break up larger product development arcs into smaller ones and apportion the work in chunks that fit into a 2-week sprint. I find it is best to keep a work log of these daily targets, and add notes to that log throughout the day. ### 4. Develop with Tests. When I discovered test-driven development, everything became clear. I practice a form of TDD where I set up the object model and interfaces in the code and then write tests for them. As the code develops, whenever I add a new function or an object, in most cases I will write tests for it before the implementation. I like to test the natural interfaces in the code rather than using mocking to patch behaviors and values deep within the code I’m testing. This way, the tests serve as examples of how the functions and objects are to be used. They support the development effort, as well as debugging and maintenance. And they will create robust defenses around your code, giving you the confidence that it will perform as expected in production, even as the code around it is refactored. ### 5. Shorten the Feedback Loop It seems to me that the primary meta-activity of writing software is to shorten the iterations in the development cycle. As I program, I am always asking: how can I get the answer I’m looking for as quickly as possible? Is it worthwhile to invest in additional scaffolding to reduce the iteration time? This is especially true for debugging, where reproducing a failure can sometimes take hours or days. But it is also true for development, which in many projects can be described as search in solution space. The feedback loop also includes your product manager and, by extension, the customer. If you discover something during development that wasn’t anticipated during design, for better or worse, it is almost always worth propagating upstream. Best if you have a product manager who is adaptable and can modify requirements to take advantage of emergent opportunities or re-route around roadblocks. ### 6. Keep it simple. At the beginning of a large project, you may have a green field. Lots of choices about technology and design. You have an opportunity to try new things. Use a mix of new and proven technologies. Over-engineering is ok, as long as you save time for reduction. Much of the work should go into simplifying the design. Of poetry, from the poem Adam’s Curse by William Butler Yeats: “A line will take us hours maybe; Yet if it does not seem a moment’s thought, Our stitching and unstitching has been naught.” Software architecture is like poetry in this sense. The design principles should be simple to diagram and simple to grasp. There should be a symmetry that holds from the larger system to its components. If it’s not simple and elegant, rework it until it is. I find that drawing pictures helps to get it right, and to document design decisions. You might think yourself very clever, having applied a new coding pattern or algebraic principle you just learned. It may have taken you days or weeks to get everything lined up just right in your head, and it all works out. 
But if, in 6 months’ time, you can’t find your way back to the precise state of mind that gave birth to your mini-magnum opus, it will be useless to you and others that have to extend and maintain it. Be kind to yourself and to them, and keep it simple. ### 7. Draw pictures. Communicate and document your architecture through diagrams. Display a key for shapes, arrows, and colors, and let all objects of a kind or color mean the same thing in your diagram. Pictures can be powerful engineering tools when used in this way. They can help you reason about a system and record decisions you’ve made. In discussions with other engineers, a picture can serve as a common point of reference. When demonstrating backend work that you’ve accomplished, architectural diagrams can show how the system has simplified or allowed for horizontal scaling. Use ERDs for database and software design; Data Flow diagrams to show processing pipelines; Sequence or Messaging diagrams for communication sequences between components; Component diagrams to show how a system is broken up into smaller subsystems; Critical Path diagrams to examine project dependencies. Pictures complement words so that everyone can form a fuller understanding of the work being done, and the results they can expect. I always try to represent the people in the system somewhere on the diagram, and relate things back to the customer. ### 8. Stay Healthy Take care of yourself. I know it’s tempting to stay up late, drink energy drinks, and bang out that feature that was mentioned in the company meeting overnight to impress everyone. Ok, go for it, but don’t back yourself into a corner by setting unhealthy expectations. You can burn yourself out this way. Your company needs you to produce work consistently and sustainably, not erratically. Take breaks. Use a standing desk. Stretch. Run in the mornings. Walk in the evenings. Get outdoors and play on the weekends. Whatever kind of physical activity works for you. Eat well. Learn to cook. Avoid a long string of pizza and beer meetups and 24-hour hackathons. Have a regular sleep cycle. You will probably be surprised to find how much you can get done by keeping your schedule free during your high-throughput hours and applying them in a focused way, undistracted. You can build up incredible momentum with a healthy rhythm. ### 9. Go for Coffee, Go for Lunch Engineering well-functioning software systems is a team effort, and team culture is everyone’s responsibility. The best teams I’ve been on, and the most fun I’ve had, is at companies where people share some of their break time with colleagues. Not at the bar, but over coffee and lunch. Talk shop, talk sports, talk video games, talk politics, whatever. At a company where people care about the mission, socializing is a great way to reinforce the great feeling that the team is doing something impactful and good for the world. ### 10. Have a Purpose As a software engineer, you have a very powerful skillset. And with great power comes great responsibility. You will not have control over every step in your career. You may feel the need to take a job to maximize your earnings or acquire essential skills or experience. But with a little extra effort you can give yourself the option to contribute your skillset to something more significant and lasting than mobile games or ad-targeting. The best minds of my generation are thinking about how to make people click ads. — Jeff Hammerbacher, Cloudera founder and ex-Facebook employee. 
There are some great socially and environmentally responsible companies out there, with clear missions and values, looking to make a real difference in the quality of people’s lives. They know they have to have a viable business to have an impact, and one factor in that is to hire people who are both talented and passionate about their mission. You are one of those people.
177.704918
1,289
0.798155
eng_Latn
0.999949
1156181d223c1c103c7170e3ae7af399fa82da4d
11,230
md
Markdown
clusters/build-clusters/02_cluster/README.md
ibuziuk/release-1
bfbb1117305d74710b5b1a830cf621fe097fbbf9
[ "Apache-2.0" ]
1
2020-03-12T21:22:31.000Z
2020-03-12T21:22:31.000Z
clusters/build-clusters/02_cluster/README.md
ibuziuk/release-1
bfbb1117305d74710b5b1a830cf621fe097fbbf9
[ "Apache-2.0" ]
null
null
null
clusters/build-clusters/02_cluster/README.md
ibuziuk/release-1
bfbb1117305d74710b5b1a830cf621fe097fbbf9
[ "Apache-2.0" ]
null
null
null
# 02-Cluster

[02-Cluster](https://console-openshift-console.apps.build02.gcp.ci.openshift.org) is an OpenShift cluster managed by the DPTP team. It is one of the clusters for running Prow job pods.

The secrets have been uploaded to the BitWarden item `build_farm_02_cluster`:

* the key file for the service account `ocp-cluster-installer`
* the SSH key pair (`id_rsa` and `id_rsa.pub`)
* install-config.yaml.001 (the one with the desired instance type)
* the auth info for `kubeadmin` and the cert-based kubeconfig file (attachment `b02.admin.cert.kubeconfig`)
* the GitHub application for OAuth: `github_client_id` and `github_client_secret`

## Installation

The GCP project `openshift-ci-build-farm` is used for the installation of this cluster: [gcp console](https://console.cloud.google.com/home/dashboard?project=openshift-ci-build-farm). The project was created by the [DPP team](https://issues.redhat.com/browse/DPP-4926). We created the public zone (the base domain for the installer) and the service account `ocp-cluster-installer`. Since this project is dedicated to the build farm, we granted the `Owner` role to this account.

Download `install-config.yaml.001` and rename it to `install-config.yaml`, then download the key file for the GCP SA account and save it to `${HOME}/.gcp/osServiceAccount.json`.

The instance types are configured in `install-config.yaml`. Because of [bz1831838](https://bugzilla.redhat.com/show_bug.cgi?id=1831838), we have to modify the disk size in the _manifests_:

|         | master                   | worker                   |
|---------|--------------------------|--------------------------|
| api.ci  | 150G SSD persistent disk | 300G SSD persistent disk |
| build01 | 150G EBS gp2             | 700G EBS gp2             |
| build02 | 150G SSD persistent disk | 300G SSD persistent disk |

```
$ ./openshift-install create manifests
### modify the disk size 128G to 150G for masters and 300G for workers on those files
$ find . -name "*machines*"
./openshift/99_openshift-cluster-api_worker-machineset-0.yaml
./openshift/99_openshift-cluster-api_worker-machineset-1.yaml
./openshift/99_openshift-cluster-api_master-machines-2.yaml
./openshift/99_openshift-cluster-api_master-machines-0.yaml
./openshift/99_openshift-cluster-api_master-machines-1.yaml
./openshift/99_openshift-cluster-api_worker-machineset-2.yaml
```

Then,

> ./openshift-install create cluster --log-level=debug

The installation folder is uploaded to gdrive (search for "cluster.openshift4.new"). We need it for destroying the cluster.

### Regenerate `install-config.yaml`

Regenerate `install-config.yaml` in case the uploaded one is not available.

Get the pull secret by

> oc --context api.ci get secret -n ci cluster-secrets-gcp -o jsonpath='{.data.pull-secret}' | base64 -d

The above pull secret is used to install clusters for e2e tests.

```
./openshift-install create install-config
? SSH Public Key /Users/hongkliu/.ssh/id_rsa_build02.pub
? Platform gcp
? Service Account (absolute path to file or JSON content) /Users/hongkliu/Downloads/build02.install/openshift-ci-build-farm-64e4ce412ae3.json
INFO Saving the credentials to "/Users/hongkliu/.gcp/osServiceAccount.json"
? Project ID openshift-ci-build-farm
? Region us-east1
INFO Credentials loaded from file "/Users/hongkliu/.gcp/osServiceAccount.json"
? Base Domain gcp.ci.openshift.org
? Cluster Name build02
? Pull Secret
```

Customize [platform.gcp.type](https://docs.openshift.com/container-platform/4.4/installing/installing_gcp/installing-gcp-customizations.html#installation-configuration-parameters_installing-gcp-customizations) in the `install-config.yaml`:

|         | master         | worker         |
|---------|----------------|----------------|
| api.ci  | n1-standard-16 | n1-standard-16 |
| build01 | m5.2xlarge     | m5.4xlarge     |
| build02 | n1-standard-8  | n1-standard-16 |

## Configuration

### openshift-image-registry

#### customize router for image-registry

The default route would be `default-route-openshift-image-registry.apps.build02.gcp.ci.openshift.org`, but we would rather use `registry.build02.ci.openshift.org`.

[Steps](https://docs.openshift.com/container-platform/4.4/registry/securing-exposing-registry.html):

* [dns set up](https://cloud.ibm.com/docs/openshift?topic=openshift-openshift_routes): [No official doc yet](https://coreos.slack.com/archives/CCH60A77E/p1588774688400500).

```
oc --context build02 get svc -n openshift-ingress router-default
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
router-default   LoadBalancer   172.30.61.60   34.74.144.21   80:32716/TCP,443:31869/TCP   6d21h
```

In the GCP project `OpenShift Ci Infra`, Network Service, Cloud DNS: set up an A record mapping `registry.build02.ci.openshift.org` to `34.74.144.21`.

```
$ dig +noall +answer registry.build02.ci.openshift.org
registry.build02.ci.openshift.org. 245 IN A 34.74.144.21
```

* Configure the Registry Operator:

```
$ oc --as system:admin --context build02 edit configs.imageregistry.operator.openshift.io cluster
spec:
  ...
  routes:
  - hostname: registry.build02.ci.openshift.org
    name: public-routes
  ...

$ oc --context build02 get route -n openshift-image-registry
NAME            HOST/PORT                           PATH   SERVICES         PORT    TERMINATION   WILDCARD
public-routes   registry.build02.ci.openshift.org          image-registry   <all>   reencrypt     None

$ podman pull registry.build02.ci.openshift.org/ci/applyconfig --tls-verify=false
```

* Create a secret with your route’s TLS keys via [cert-manager](../cert-manager/readme.md).

* Update the Registry Operator with the secret:

```
$ oc --as system:admin --context build02 edit configs.imageregistry.operator.openshift.io cluster
spec:
  ...
  routes:
  - hostname: registry.build02.ci.openshift.org
    name: public-routes
    secretName: public-route-tls
  ...
```

Verify: the above `podman pull` works without `--tls-verify=false`.

### openshift-ingress

#### CA Certificate for app routes

OpenShift 4.2 has a doc on [this topic](https://docs.openshift.com/container-platform/4.2/authentication/certificates/replacing-default-ingress-certificate.html).

Manual steps: those `yaml`s are applied automatically by `applyconfig`. We record the steps here for debugging purposes.

[Google CloudDNS](https://cert-manager.io/docs/configuration/acme/dns01/google/): The key file of the service account `cert-issuer` (in project "openshift-ci-build-farm") is uploaded to BW item `cert-issuer`.

* Generate the certificate by `cert-manager`:

```bash
$ oc --as system:admin apply -f clusters/build-clusters/02_cluster/cert-manager/cert-issuer-ci-build-farm_clusterissuer.yaml
$ oc --as system:admin apply -f clusters/build-clusters/02_cluster/openshift-ingress/apps-build02_certificate.yaml
$ oc get secret -n openshift-ingress apps-build02-tls
NAME               TYPE                DATA   AGE
apps-build02-tls   kubernetes.io/tls   3      25m
```

* Use the secret in `openshift-ingress-operator`: manual step only for test, see [default_ingresscontroller.yaml](openshift-ingress-operator/default_ingresscontroller.yaml)

```
$ oc --as system:admin patch ingresscontroller.operator default \
  --type=merge -p \
  '{"spec":{"defaultCertificate": {"name": "apps-build02-tls"}}}' \
  -n openshift-ingress-operator
```

Verify if it works:

```
$ site=console-openshift-console.apps.build02.gcp.ci.openshift.org
$ curl --insecure -v "https://${site}" 2>&1 | awk 'BEGIN { cert=0 } /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }'
* Server certificate:
*  subject: CN=*.apps.build02.gcp.ci.openshift.org
*  start date: Jun 15 19:08:40 2020 GMT
*  expire date: Sep 13 19:08:40 2020 GMT
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
* Connection #0 to host console-openshift-console.apps.build02.gcp.ci.openshift.org left intact
* Closing connection 0
```

##### Troubleshooting

* Due to [cert-manager/issues/2968](https://github.com/jetstack/cert-manager/issues/2968), we have to edit the deployment of cert-manager with the additional arg `--dns01-recursive-nameservers="8.8.8.8:53"`:

```bash
oc get deployment -n cert-manager cert-manager -o yaml | yq -r '.spec.template.spec.containers[0].args[]'
--v=2
--cluster-resource-namespace=$(POD_NAMESPACE)
--leader-election-namespace=kube-system
--dns01-recursive-nameservers="8.8.8.8:53"
```

* The above workaround does NOT work when we have a selector defined in the clusterissuer:

```
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cert-issuer-staging
spec:
  acme:
    email: [email protected]
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cert-issuer-account-key
    solvers:
    - dns01:
        cloudDNS:
          project: openshift-ci-infra
          serviceAccountSecretRef:
            name: cert-issuer
            key: key.json
      selector:
        matchLabels:
          gcp-project: openshift-ci-infra
    - dns01:
        cloudDNS:
          project: openshift-ci-build-farm
          serviceAccountSecretRef:
            name: cert-issuer
            key: openshift-ci-build-farm-cert-issuer.json
      selector:
        matchLabels:
          gcp-project: openshift-ci-build-farm
```

### openshift-apiserver

#### CA Certificate for the API servers

OpenShift 4.2 has a doc on [this topic](https://docs.openshift.com/container-platform/4.2/authentication/certificates/api-server.html).

Manual steps: those `yaml`s are applied automatically by `applyconfig`. We record the steps here for debugging purposes.

* Generate the certificate by cert-manager:

```
$ oc --as system:admin --context build02 apply -f clusters/build-clusters/02_cluster/openshift-apiserver/apiserver-build02_certificate.yaml
```

* Use the certificates in the API server:

```
oc --as system:admin patch apiserver cluster --type=merge -p '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["api.build02.gcp.ci.openshift.org"], "servingCertificate": {"name": "apiserver-build02-tls"}}]}}}'
```

Verify if it works:

```
$ site=api.build02.gcp.ci.openshift.org:6443
$ curl --insecure -v https://${site} 2>&1 | awk 'BEGIN { cert=0 } /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }'
* Server certificate:
*  subject: CN=api.build02.gcp.ci.openshift.org
*  start date: Jun 16 11:46:39 2020 GMT
*  expire date: Sep 14 11:46:39 2020 GMT
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7ffd3780aa00)
* Connection state changed (MAX_CONCURRENT_STREAMS == 2000)!
* Connection #0 to host api.build02.gcp.ci.openshift.org left intact
```

## Upgrade the cluster

Unlike `build01`, which has an automated job to do the upgrades, we upgrade `build02` manually. This is to keep the possibility of failover: in case `build01` is upgraded to a version with issues, we still have a working `build02` in our build farm.
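The certificate checks above are done with `curl` and `awk`; the same information can also be pulled with a few lines of stdlib Python. This is a convenience sketch, not part of the repository's tooling; the hosts are the build02 endpoints named in this document:

```python
import socket
import ssl

def cert_summary(host: str, port: int = 443) -> dict:
    """Fetch the certificate served at host:port and summarize its
    subject, issuer, and validity window."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {k: cert.get(k) for k in ("subject", "issuer", "notBefore", "notAfter")}

for host in (
    "registry.build02.ci.openshift.org",
    "console-openshift-console.apps.build02.gcp.ci.openshift.org",
):
    print(host, cert_summary(host))
```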
39.265734
253
0.715316
eng_Latn
0.667422
11561cad944f8b8ae1200a5762a5a98d4c30c46d
400
md
Markdown
README.md
KungFuryKeyboard/LaylasLittleCompanion
94a175d6ae1cb0010e3c4edca32242a83d3c033e
[ "MIT" ]
6
2021-04-13T15:10:48.000Z
2021-05-07T18:29:15.000Z
README.md
heidarj/LaylasLittleCompanion
7880afcfae3449520ba87c27ef5c2e1d6318d1b6
[ "MIT" ]
13
2021-04-15T08:52:50.000Z
2021-04-25T16:03:36.000Z
README.md
heidarj/LaylasLittleCompanion
7880afcfae3449520ba87c27ef5c2e1d6318d1b6
[ "MIT" ]
4
2021-04-13T15:31:44.000Z
2021-05-18T13:27:47.000Z
# Layla's Little Companion

Welcome! This project was created to keep Layla busy whilst she is unemployed!

We are building it on stream - [LaylaCodesIt](https://www.twitch.tv/laylacodesit)

It is part of the [LaylasLittleCompanion project](https://github.com/users/Layla-P/projects/1) and you are welcome to collaborate.

We are chatting on [Discord](https://discord.gg/rV3cu5ykRF) - come join us!
33.333333
130
0.77
eng_Latn
0.991989
11566ff2d5bf5b4d1b6d0055f04ef1b796266eef
3,704
md
Markdown
_posts/2021-02-27-the-social-dilemma-二--社交平台的社會影響.md
roulesophy/roulesophy.github.io
53f01f90d48d6e74851950942808e1e72c818417
[ "MIT" ]
1
2020-11-26T13:38:26.000Z
2020-11-26T13:38:26.000Z
_posts/2021-02-27-the-social-dilemma-二--社交平台的社會影響.md
roulesophy/roulesophy.github.io
53f01f90d48d6e74851950942808e1e72c818417
[ "MIT" ]
null
null
null
_posts/2021-02-27-the-social-dilemma-二--社交平台的社會影響.md
roulesophy/roulesophy.github.io
53f01f90d48d6e74851950942808e1e72c818417
[ "MIT" ]
2
2016-11-27T13:33:27.000Z
2019-10-08T15:01:21.000Z
---
filename: 2021-02-27-the-social-dilemma-二--社交平台的社會影響.md
layout: post
title: "The Social Dilemma (2): The Social Impact of Social Media Platforms"
tags: Movie reflections
date: 2021-02-27
permalink: 20210227-the-social-dilemma-social-impact/
thumbnail: "20210227-1.jpeg"
comments: true
---

![]({{ site.baseurl }}/images/20210227-1.jpeg)

Following on from [the previous post]({{ site.baseurl }}/20210204-the-social-dilemma-1-business-model/), in this post we will talk about the harm that social networks, as they have evolved to this day, do to people.

In the beginning, social platforms really did attract a lot of people. One reason is that social networks genuinely helped us find many friends we had lost touch with since childhood, and we very much wanted to know how they were doing. In fact, this part was a real success at the time.

But because of the way the whole social media machine operates, every one of us now faces the following problems:

## > Information Anxiety

Because anyone can publish information, the amount we can absorb each day has long since exceeded what we are able to take in.

And precisely because there is so much for each of us to see every day, we are afraid of not getting through it all: our genes carry the idea that "having less information to judge by means one more danger, and a lower chance of survival."

According to [this article](https://www.tech21century.com/the-human-brain-is-loaded-daily-with-34-gb-of-information/), we read an average of 100,500 words a day; counting images and video as well, we take in about 34 GB of information daily. We simply cannot digest that much before new information pours in. In the end we are just terrified of missing something, and we keep looking at new things. So we invented a new term: "just the key points." But since nobody ever stops, and everyone keeps consuming new material, we end up hearing an endless stream of "key points," and in the end we may take in even more than before.

This trains us so that even when we genuinely have free time and nothing new to look at, we still reflexively pull out the phone and check social media; when the feed runs out, we keep refreshing; when one platform gets boring, we move on to the next...

As you read this second's post on a social platform, do you still remember the content of the post you read ten seconds ago?

## > Fake News

Because "we need more information to raise our chances of survival" is carved into our DNA, fake news slips into our field of view with ease.

[Fake news](https://zh.wikipedia.org/wiki/%E5%81%87%E6%96%B0%E8%81%9E) is false information, usually disinformation reported by journalists in exchange for some benefit. The benefit may come from some interest group, or simply from a share of advertising revenue. Some fake news covertly promotes certain products; some exists to stir up emotions and polarize society.

To spread more easily, fake news also makes its headlines and content extremely eye-catching and emotionally provocative. The Social Dilemma says fake news spreads roughly six times faster than real news. Such speed is naturally exploited by people with an agenda to reshape others' perceptions.

Although the platforms say they are cracking down on fake news, I still have a big question mark over how they can enforce this: for example, how they decide that something is fake news, and the problems created by them having the power to make that judgment in the first place.

As for us, even though there are ways to spot fake news, just coping with the emotions it involuntarily stirs up each day, or spending the mental energy to judge whether something is fake, is exhausting enough.

And that feeds into a decline in our attention.

## > A Severe Decline in Attention

Every day social media offers this much information, this many trending topics competing for your attention.

Let us separate individual attention from collective attention.

Individually, one second we are reading world news and the next we are browsing food posts. All that switching burns through our mental energy. For me at least, even though I keep scrolling anyway, after a while I feel utterly drained.

Besides, once we get used to this mode, it becomes very hard to concentrate on a single thing, because new information keeps trying to grab our attention, and fighting off that endless stream also costs a lot of mental effort. Over time, we merely drift across the surface of many things, and our understanding of them becomes very shallow.

Collectively, our attention is likewise steered: this stretch of time onto this event, the next onto that issue.

If you think about it carefully, in recent years every so often one event captures everyone's attention on social networks and everyone discusses it; then after a while a new event of the same kind repeats the cycle. It feels like a cat forever chasing a new piece of string, forgetting what it actually set out to do.

Every big issue feels like something we absolutely must follow, so nobody has a reason not to follow it. Then, after a while, a new must-follow issue appears, and because human attention is so limited, the previous one is forgotten completely.

There are two reasons each issue flares up suddenly and fades just as suddenly.

The first is that the language used around every issue is highly emotive, and because anyone can publish on the platform, some of those who get swept up post even more emotionally charged content, creating the illusion that this is something none of us can ignore.

The second is that for companies that live off mass-producing content (that is, content farms), it may be profitable to ignite an issue; or, seeing a topic start to trend, they feel they have to produce content about it too.

## > Social Media's Excessive Power

Since the 2020 US presidential election, people have begun to notice the problem of social media platforms holding too much power, and they resent the platforms' ability to delete posts at will.

One of the triggers was Facebook and Twitter banning the accounts of US President Trump while his term had not yet ended.

The platforms' explanation is that, in order to fight fake news or ban hate speech, they delete such posts.

The question is how they judge what is fake news, whether the reasons and guidelines behind deleting a given post are clear, and whether their review mechanisms can be guaranteed against abuse by people with an agenda. If one day they accept enormous benefits and act against the public interest, what could we do?

Although they are not courts, these social platforms hold absolute power of decision, and as private companies they have their reasons for doing so. From a free-market point of view, when people grow tired of one platform they will migrate to another. It is just that a few platforms now have so many users that whether people will actually move elsewhere in the future is well worth watching.

## > Severe Social Polarization

To keep everyone from leaving, social networks analyze the information you view and mostly serve you [content matching the stance you want to see](https://zh.wikipedia.org/wiki/%E9%81%8E%E6%BF%BE%E6%B0%A3%E6%B3%A1).

Two people with different stances can see completely different worlds on their respective feeds, each believing that what they see is all there is. When they debate, the conclusion drawn from their own information seems so self-evident that each can only think the other side is an idiot.

Once a stance has formed, what content arrives afterwards hardly matters, because by then people tend to believe what they want to believe and ignore what they don't. As this goes on, people of different stances find it ever harder to accept different voices.

Because the two sides see completely different information, neither can persuade the other, nor understand the other's thinking, and in the end they distrust each other entirely. There is almost no way to undo this kind of division.

Fake news, days filled with nothing but "key points," and our inability to analyze so much data all sharply aggravate the problem. With a little manipulation on top, it naturally turns into massive social division. Indeed, Hong Kong a few years ago and last year's US election have already run into this problem.

## > Too Much Sharing Leads to Comparison

Because everyone shares their life hoping for others' approval / likes, everyone presents only their best side, or a beautified one.

The result is that everyone browsing the platform sees others being very successful and living very well, and involuntarily compares that with their own lousy life.

And because everyone can watch people all over the world sharing their lives, whereas we used to compare ourselves only with those around us, we now compare ourselves with people of every level worldwide. Wanting to catch up with others, we keep making new purchases (which is exactly how social media makes its money), and those who cannot afford to spend lose their self-confidence. This phenomenon is very serious among teenagers; The Social Dilemma mentions that their suicide rate has risen sharply since social media appeared.

## > Poorer Social Skills

Everyone is on social media, and social media has so much to look at: things we are interested in, or that stir our emotions. So why bother with the tiring business of talking to people in real life? In real life you have to deal with positions different from your own and work out what the other person wants, and what you get back is not necessarily what you want. The result is that everyone stares at their phone all day, real-world interaction shrinks dramatically, and people who were already not good at socializing become even worse at it.

## > Conclusion

A social platform is only a tool; whether it is good or bad for us depends largely on how we use it.

It is just that, for most people, the harms currently outweigh the benefits.

In the next post, we will try to find ways to deal with the problems these social platforms bring.
33.071429
280
0.857991
yue_Hant
0.907582
115679b65e4371523c880d867c49ee85d3f9c37c
3,929
md
Markdown
azps-5.0.0/Az.Network/Get-AzVirtualRouterPeerLearnedRoute.md
rolyon/azure-docs-powershell
83b7ecaa4edb97b26485cd94770fbd39e74bc4f5
[ "CC-BY-4.0", "MIT" ]
1
2021-08-22T18:02:50.000Z
2021-08-22T18:02:50.000Z
azps-5.0.0/Az.Network/Get-AzVirtualRouterPeerLearnedRoute.md
rolyon/azure-docs-powershell
83b7ecaa4edb97b26485cd94770fbd39e74bc4f5
[ "CC-BY-4.0", "MIT" ]
null
null
null
azps-5.0.0/Az.Network/Get-AzVirtualRouterPeerLearnedRoute.md
rolyon/azure-docs-powershell
83b7ecaa4edb97b26485cd94770fbd39e74bc4f5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
external help file: Microsoft.Azure.PowerShell.Cmdlets.Network.dll-Help.xml
Module Name: Az.Network
online version: https://docs.microsoft.com/en-us/powershell/module/az.network/get-azvirtualrouterpeerlearnedroute
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/Get-AzVirtualRouterPeerLearnedRoute.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/Get-AzVirtualRouterPeerLearnedRoute.md
---

# Get-AzVirtualRouterPeerLearnedRoute

## SYNOPSIS
List routes learned by a specific virtual router peer.

## SYNTAX

### VirtualRouterPeerNameParameterSet (Default)
```
Get-AzVirtualRouterPeerLearnedRoute -ResourceGroupName <String> -VirtualRouterName <String> -PeerName <String>
 [-AsJob] [-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
```

### VirtualRouterPeerObjectParameterSet
```
Get-AzVirtualRouterPeerLearnedRoute -InputObject <PSVirtualRouterPeer> [-AsJob]
 [-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
```

## DESCRIPTION
Enumerate routes learned by a virtual router peer from other sources.

## EXAMPLES

### Example 1
```powershell
Get-AzVirtualRouterPeerLearnedRoute -ResourceGroupName $resourceGroupName -VirtualRouterName $virtualRouterName -PeerName $peerName
```

### Example 2
```powershell
$virtualRouterPeer = Get-AzVirtualRouterPeer -ResourceGroupName $resourceGroupName -VirtualRouterName $virtualRouterName -PeerName $peerName
Get-AzVirtualRouterPeerLearnedRoute -InputObject $virtualRouterPeer
```

## PARAMETERS

### -AsJob
Run cmdlet in the background.

```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -DefaultProfile
The credentials, account, tenant, and subscription used for communication with Azure.

```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer
Parameter Sets: (All)
Aliases: AzContext, AzureRmContext, AzureCredential

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -InputObject
The virtual router peer input object.

```yaml
Type: Microsoft.Azure.Commands.Network.Models.PSVirtualRouterPeer
Parameter Sets: VirtualRouterPeerObjectParameterSet
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByValue)
Accept wildcard characters: False
```

### -PeerName
Virtual router peer name.

```yaml
Type: System.String
Parameter Sets: VirtualRouterPeerNameParameterSet
Aliases: ResourceName

Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### -ResourceGroupName
The name of the virtual router peer's resource group.

```yaml
Type: System.String
Parameter Sets: VirtualRouterPeerNameParameterSet
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### -VirtualRouterName
Virtual router name.

```yaml
Type: System.String
Parameter Sets: VirtualRouterPeerNameParameterSet
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### System.String

### Microsoft.Azure.Commands.Network.Models.PSVirtualRouterPeer

## OUTPUTS

### Microsoft.Azure.Commands.Network.Models.PSPeerRoute

## NOTES

## RELATED LINKS
25.679739
315
0.804785
yue_Hant
0.760729
115704599dd75bc130e57fc7048de486c39a12e2
5,846
md
Markdown
readme.md
gregpalaci/awesome-nodegui
354e9e431c059bbebc8c2bd798dd1af41351f848
[ "CC0-1.0" ]
null
null
null
readme.md
gregpalaci/awesome-nodegui
354e9e431c059bbebc8c2bd798dd1af41351f848
[ "CC0-1.0" ]
null
null
null
readme.md
gregpalaci/awesome-nodegui
354e9e431c059bbebc8c2bd798dd1af41351f848
[ "CC0-1.0" ]
null
null
null
# Awesome NodeGui [![Awesome](https://awesome.re/badge.svg)](https://github.com/nodegui/awesome-nodegui) [<img src="https://github.com/nodegui/nodegui/raw/master/extras/logo/nodegui-circle.png" align="right" width="100">](https://docs.nodegui.org)

> Useful resources for creating apps with [NodeGui](https://docs.nodegui.org)

## Contents

- [Renderers](#renderers)
- [Apps and Examples](#apps)
- [Boilerplates](#boilerplates)
- [Tools and Plugins](#tools)
- [Documentation](#documentation)
- [Articles](#articles)
- [Videos](#videos)
- [Podcasts](#podcasts)
- [Community](#community)
- [Related](#related)

## Renderers

- [React NodeGui](https://github.com/nodegui/react-nodegui) - Build performant, native and cross-platform desktop applications with native React + powerful CSS like styling.🚀
- [Vue NodeGui](https://github.com/nodegui/vue-nodegui) - Vue renderer for NodeGui.

#### Unofficial/Community renderers

- [Angular NodeGui](https://github.com/irustm/angular-nodegui) (Unofficial) - Build performant, native and cross-platform desktop applications with Angular
- [Vue NodeGui](https://github.com/NovusTheory/vue-nodegui) (Unofficial) - NodeGUI but with Vue

## Apps

Made with NodeGui

- [Discord client](https://github.com/ruslang02/discord-qt) - A Discord desktop client powered by Node.JS and NodeGui.
- [Mysterium VPN client](https://github.com/mysteriumnetwork/mysterium-vpn2) - Decentralised VPN built on blockchain.
- [Meme legend](https://github.com/master-atul/meme-legend) - Meme legend lets you type emojis, gifs or stickers quickly. Works on Mac, Windows and Linux.
- [Emoji picker](https://github.com/slidinghotdog/emoji-picker) - Just click to copy your Emoji

### Samples and Experiments

- [Official Examples repo](https://github.com/nodegui/examples) - Sample apps illustrating usage of NodeGui APIs.
- [Markdown editor in NodeGui](https://github.com/master-atul/mdview-nodegui) - A Markdown editor in NodeGui under 200 lines of code.
- [List of apps or packages using NodeGui](https://github.com/nodegui/nodegui/network/dependents) - List from Github

## Boilerplates

- [NodeGui starter](https://github.com/nodegui/nodegui-starter) - A starter repo for NodeGui projects
- [React NodeGui starter](https://github.com/nodegui/react-nodegui-starter) - Starter repository for react based native desktop apps using react-nodegui
- [React NodeGui Neutrino preset](https://github.com/constgen/neutrino-preset-react-nodegui) - [Neutrino preset](https://neutrinojs.org/presets/) for React NodeGui.
- [NodeGUI MVC Starter](https://github.com/RinneganTech/nodegui-mvc-starter) - Starter repo to provide a basic structure and format to build large complex application using NodeGUI.

## Tools

Tools for NodeGui

- [NodeGui Packer](https://github.com/nodegui/packer) - Create installers and distributables for NodeGui apps.
- [React NodeGui Testing library](https://github.com/fnky/react-nodegui-testing-library) - Simple React NodeGui testing utilities that encourage good testing practices 🦋 - by [@fnky](https://github.com/fnky)
- [React Native like stylesheet for React NodeGui](https://github.com/Solant/nodegui-stylesheet) - by [@Solant](https://github.com/Solant)
- [NodeGUI Debian Builder](https://github.com/RinneganTech/nodegui-deb-builder) - Create .deb package distributable for NodeGUI Apps.

## Plugins

Plugins that add additional native features to NodeGui apps

- [nodegui-plugin-animation](https://github.com/nodegui/nodegui-plugin-animation) - A NodeGui plugin that adds native animation capabilities to NodeGui widgets and objects. Based on QAnimation.
- [nodegui-plugin-example](https://github.com/nodegui/nodegui-plugin-example) - an example native plugin
- [@nodegui/os-utils](https://github.com/nodegui/os-utils) - A helper module for NodeGui which contains OS specific native features.
- [nodegui-plugin-title-bar](https://github.com/nodegui/nodegui-plugin-title-bar) - Plugin for NodeGUI to hide macOS title bar and leave only traffic lights.
- [nodegui-plugin-webview](https://github.com/nodegui/nodegui-plugin-webview) - A NodeGui plugin that adds webview support.
- [@nodegui/devtools](https://github.com/nodegui/devtools) - React NodeGui's devtools support module.

## Documentation

- [NodeGui: Getting Started](https://docs.nodegui.org/docs/guides/getting-started)
- [NodeGui: Apis](https://docs.nodegui.org/docs/api/generated/classes/qapplication)
- [React NodeGui: Getting Started](https://react.nodegui.org/docs/guides/getting-started/)

## Articles

- [Sitepoint Tutorial: Build a native Meme Search Desktop app with Javascript (NodeGui) and Giphy API](https://www.sitepoint.com/build-native-desktop-gif-searcher-app-using-nodegui/)
- [Electron alternatives: Exploring NodeGUI and React NodeGUI by Siegfried Grimbeek](https://blog.logrocket.com/electron-alternatives-exploring-nodegui-and-react-nodegui/)
- [Getting Started with NodeGUI - James Hibbard](https://hibbard.eu/node-gui/)
- [Building Native Desktop Apps with React Node GUI - Nathan Sebhastian](https://blog.bitsrc.io/building-native-desktop-application-with-react-node-gui-2ce1b2a2164)

## Videos

- [Getting started with NodeGui and React NodeGui at KarmaJS meetup November 2019](https://www.youtube.com/watch?v=8jH5gaEEDv4)

## Podcasts

- [JS Party: Performant Node desktop apps with NodeGui with Atul R, Jerod Santo and Nick Nisi](https://changelog.com/jsparty/96)

## Community

- [Spectrum](https://spectrum.chat/nodegui)
- [Medium](https://medium.com/nodegui)
- [`@node_gui` on Twitter](https://twitter.com/node_gui)
- [Product Hunt](https://www.producthunt.com/posts/nodegui-2)

## Contribute

Contributions welcome! Read the [contribution guidelines](contributing.md) first.
55.150943
209
0.754875
yue_Hant
0.339013
11572574ca7bb6c158c15519a7254c0ee1ed0243
5,694
md
Markdown
docs/csharp/language-reference/attributes/caller-information.md
smolck/docs.fr-fr
ee27cca4b8e319e4cbdea9103c3f90ac22378d50
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/language-reference/attributes/caller-information.md
smolck/docs.fr-fr
ee27cca4b8e319e4cbdea9103c3f90ac22378d50
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/language-reference/attributes/caller-information.md
smolck/docs.fr-fr
ee27cca4b8e319e4cbdea9103c3f90ac22378d50
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Reserved C# attributes: Tracking caller information'
ms.date: 04/09/2020
description: These attributes instruct the compiler to produce information about the code that calls a member. You use the CallerFilePath, CallerLineNumber, and CallerMemberName attributes to provide detailed trace information.
ms.openlocfilehash: ee061d4cae35bdcc0b89007e360e94fee8c5f87c
ms.sourcegitcommit: c91110ef6ee3fedb591f3d628dc17739c4a7071e
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/15/2020
ms.locfileid: "81389876"
---

# <a name="reserved-attributes-determine-caller-information"></a>Reserved attributes: Determine caller information

Using caller info attributes, you obtain information about the caller of a method: the file path of the source code, the line number in the source code, and the name of the calling member. To obtain caller member information, you use attributes that are applied to optional parameters. Each optional parameter specifies a default value. The following table lists the caller info attributes defined in the <xref:System.Runtime.CompilerServices?displayProperty=nameWithType> namespace:

|Attribute|Description|Type|
|---|---|---|
|<xref:System.Runtime.CompilerServices.CallerFilePathAttribute>|Full path of the source file that contains the caller. The full path is the path at compile time.|`String`|
|<xref:System.Runtime.CompilerServices.CallerLineNumberAttribute>|Line number in the source file from which the method is called.|`Integer`|
|<xref:System.Runtime.CompilerServices.CallerMemberNameAttribute>|Method name or property name of the caller.|`String`|

This information helps you write tracing and debugging code and create diagnostic tools. The following example shows how to use the caller info attributes. On each call to the `TraceMessage` method, the caller information is substituted as arguments to the optional parameters.

```csharp
public void DoProcessing()
{
    TraceMessage("Something happened.");
}

public void TraceMessage(string message,
        [CallerMemberName] string memberName = "",
        [CallerFilePath] string sourceFilePath = "",
        [CallerLineNumber] int sourceLineNumber = 0)
{
    Trace.WriteLine("message: " + message);
    Trace.WriteLine("member name: " + memberName);
    Trace.WriteLine("source file path: " + sourceFilePath);
    Trace.WriteLine("source line number: " + sourceLineNumber);
}

// Sample Output:
//  message: Something happened.
//  member name: DoProcessing
//  source file path: c:\Visual Studio Projects\CallerInfoCS\CallerInfoCS\Form1.cs
//  source line number: 31
```

You specify an explicit default value for each optional parameter. You can't apply caller info attributes to parameters that aren't specified as optional. The caller info attributes don't make a parameter optional. Instead, they affect the default value that's passed in when the argument is omitted.

Caller info values are emitted as literals into the intermediate language (IL) at compile time. Unlike the results of the <xref:System.Exception.StackTrace%2A> property for exceptions, the results aren't affected by obfuscation.

You can explicitly supply the optional arguments to control or hide the caller information.

### <a name="member-names"></a>Member names

You can use the `CallerMemberName` attribute to avoid specifying the member name as a `String` argument to the called method. This way, you avoid the problem that **rename refactoring** doesn't change the `String` values. This benefit is especially useful for the following tasks:

- Using tracing and diagnostic routines.
- Implementing the <xref:System.ComponentModel.INotifyPropertyChanged> interface when binding data. This interface allows a property of an object to notify a bound control that the property has changed, so that the control can display the updated information. Without the `CallerMemberName` attribute, you must specify the property name as a literal.

The following chart shows the member names that are returned when you use the `CallerMemberName` attribute.

|Calls occur within|Member name result|
|-|-|
|Method, property, or event|The name of the method, property, or event from which the call originated.|
|Constructor|The string ".ctor"|
|Static constructor|The string ".cctor"|
|Destructor|The string "Finalize"|
|User-defined operators or conversions|The generated name for the member, for example, "op_Addition".|
|Attribute constructor|The name of the method or property to which the attribute is applied. If the attribute is any element within a member (such as a parameter, a return value, or a generic type parameter), this result is the name of the member that's associated with that element.|
|No containing member (for example, assembly-level or attributes that are applied to types)|The default value of the optional parameter.|

## <a name="see-also"></a>See also

- [Named and Optional Arguments](../../programming-guide/classes-and-structs/named-and-optional-arguments.md)
- <xref:System.Reflection>
- <xref:System.Attribute>
- [Attributes](../../../standard/attributes/index.md)
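For comparison only, a rough analog of the same idea can be built in Python with the standard `inspect` module. This is not part of the C# feature described above, and unlike the C# attributes, whose values are baked into the IL at compile time, Python recovers the information at run time:

```python
import inspect

def trace_message(message: str) -> None:
    """Print a message plus the caller's member name, file path, and line,
    mirroring CallerMemberName / CallerFilePath / CallerLineNumber."""
    caller = inspect.stack()[1]  # the caller's frame record
    print(f"message: {message}")
    print(f"member name: {caller.function}")
    print(f"source file path: {caller.filename}")
    print(f"source line number: {caller.lineno}")

def do_processing() -> None:
    trace_message("Something happened.")

do_processing()
```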
75.92
806
0.789603
fra_Latn
0.981396
1157763ff16025fbc4853f579d8d41192d9fb685
2,435
md
Markdown
wdk-ddi-src/content/ntifs/nf-ntifs-fsrtlisansicharacterlegalntfsstream.md
jazzdelightsme/windows-driver-docs-ddi
793b0c96e117b1658144ba8b3939fdc31a49f6b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/ntifs/nf-ntifs-fsrtlisansicharacterlegalntfsstream.md
jazzdelightsme/windows-driver-docs-ddi
793b0c96e117b1658144ba8b3939fdc31a49f6b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/ntifs/nf-ntifs-fsrtlisansicharacterlegalntfsstream.md
jazzdelightsme/windows-driver-docs-ddi
793b0c96e117b1658144ba8b3939fdc31a49f6b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
UID: NF:ntifs.FsRtlIsAnsiCharacterLegalNtfsStream
title: FsRtlIsAnsiCharacterLegalNtfsStream macro (ntifs.h)
description: The FsRtlIsAnsiCharacterLegalNtfsStream macro determines whether an ANSI character is legal for NTFS stream names.
old-location: ifsk\fsrtlisansicharacterlegalntfsstream.htm
tech.root: ifsk
ms.assetid: 2bcfa3b3-8a83-460b-9b44-1188fceb3849
ms.date: 04/16/2018
ms.keywords: FsRtlIsAnsiCharacterLegalNtfsStream, FsRtlIsAnsiCharacterLegalNtfsStream function [Installable File System Drivers], fsrtlref_0dc6f0d3-6f38-4861-89d6-15cab783a959.xml, ifsk.fsrtlisansicharacterlegalntfsstream, ntifs/FsRtlIsAnsiCharacterLegalNtfsStream
ms.topic: macro
req.header: ntifs.h
req.include-header: Ntifs.h
req.target-type: Desktop
req.target-min-winverclnt: This routine is available on Microsoft Windows XP and later.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql: "<= APC_LEVEL"
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- ntifs.h
api_name:
- FsRtlIsAnsiCharacterLegalNtfsStream
product:
- Windows
targetos: Windows
req.typenames:
---

# FsRtlIsAnsiCharacterLegalNtfsStream macro

## -description

The <b>FsRtlIsAnsiCharacterLegalNtfsStream</b> macro determines whether an ANSI character is legal for NTFS stream names.

## -parameters

### -param C

<p>Pointer to the character to be tested.</p>

### -param WILD_OK

<p>Set to <b>TRUE</b> if wildcard characters are to be considered legal, <b>FALSE</b> otherwise.</p>

## -remarks

For information about other string-handling routines, see <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/index">Strings</a>.

## -see-also

<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-fsrtlisansicharacterlegal">FsRtlIsAnsiCharacterLegal</a>

<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-fsrtlisansicharacterlegalfat">FsRtlIsAnsiCharacterLegalFat</a>

<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-fsrtlisansicharacterlegalhpfs">FsRtlIsAnsiCharacterLegalHpfs</a>

<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-fsrtlisansicharacterlegalntfs">FsRtlIsAnsiCharacterLegalNtfs</a>
23.872549
264
0.792608
yue_Hant
0.28304
11577c1137d4af099b5ea3211b3ec7604d6a33f2
509
md
Markdown
README.md
ersonnogot/erson-buildtool-test
e1f430ccd08ba3dae62367717ae4146e9fd23016
[ "MIT" ]
null
null
null
README.md
ersonnogot/erson-buildtool-test
e1f430ccd08ba3dae62367717ae4146e9fd23016
[ "MIT" ]
5
2021-02-17T00:11:48.000Z
2021-03-24T00:13:44.000Z
README.md
ersonnogot/erson-buildtool-test
e1f430ccd08ba3dae62367717ae4146e9fd23016
[ "MIT" ]
null
null
null
# erson-buildtool-test

[![CircleCI](https://circleci.com/gh/ersonnogot/erson-buildtool-test.svg?style=shield)](https://circleci.com/gh/ersonnogot/erson-buildtool-test)
[![Dashboard erson-buildtool-test](https://img.shields.io/badge/dashboard-erson_buildtool_test-yellow.svg)](https://dashboard.pantheon.io/sites/703f7003-822f-45b2-a22a-15489ef1ed5b#dev/code)
[![Dev Site erson-buildtool-test](https://img.shields.io/badge/site-erson_buildtool_test-blue.svg)](http://dev-erson-buildtool-test.pantheonsite.io/)
101.8
190
0.793713
yue_Hant
0.354199
1157906683ccde83b42b7fc9fbfa909af379291a
14
md
Markdown
README.md
KevinTheKittyCat/Weekly-game
ba9a56eccee4d819d9ea3f0903fb8b7cf72e65ae
[ "BSD-2-Clause" ]
null
null
null
README.md
KevinTheKittyCat/Weekly-game
ba9a56eccee4d819d9ea3f0903fb8b7cf72e65ae
[ "BSD-2-Clause" ]
null
null
null
README.md
KevinTheKittyCat/Weekly-game
ba9a56eccee4d819d9ea3f0903fb8b7cf72e65ae
[ "BSD-2-Clause" ]
null
null
null
# Weekly-game
7
13
0.714286
lit_Latn
0.424152
11589ef483d72a496d890e335a15c06afb26c1ab
696
md
Markdown
README.md
SGBC/cluster_doc
d7f5d8f0c9ce9e776933343c3d281edee4918523
[ "CC-BY-4.0" ]
1
2017-10-25T15:01:06.000Z
2017-10-25T15:01:06.000Z
README.md
SGBC/cluster_doc
d7f5d8f0c9ce9e776933343c3d281edee4918523
[ "CC-BY-4.0" ]
null
null
null
README.md
SGBC/cluster_doc
d7f5d8f0c9ce9e776933343c3d281edee4918523
[ "CC-BY-4.0" ]
null
null
null
# User documentation for our cluster

This directory contains the user documentation for the SGBC cluster. The documentation is written in Markdown and built using `mkdocs`.

## Build

First install mkdocs with your favorite package manager:

```
brew install mkdocs
```

Then clone the directory:

```
git clone https://github.com/SGBC/cluster_doc.git
cd cluster_doc
```

For a live preview in your browser do:

```
mkdocs serve &
```

## License

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
21.090909
78
0.764368
eng_Latn
0.964582
1158f3e49343c5b427ee67adea10cc7a7431b57f
939
md
Markdown
Documentation/Books/HTTP/Database/DatabaseManagement.md
sita1999/arangodb
6a4f462fa209010cd064f99e63d85ce1d432c500
[ "Apache-2.0" ]
1
2018-12-08T01:58:16.000Z
2018-12-08T01:58:16.000Z
Documentation/Books/HTTP/Database/DatabaseManagement.md
lipper/arangodb
66ea1fd4946668192e3f0d1060f0844f324ad7b8
[ "Apache-2.0" ]
null
null
null
Documentation/Books/HTTP/Database/DatabaseManagement.md
lipper/arangodb
66ea1fd4946668192e3f0d1060f0844f324ad7b8
[ "Apache-2.0" ]
1
2021-07-12T06:29:34.000Z
2021-07-12T06:29:34.000Z
Database Management
===================

This is an introduction to ArangoDB's HTTP interface for managing databases.

The HTTP interface for databases provides operations to create and drop individual databases. These are mapped to the standard HTTP methods *POST* and *DELETE*. There is also the *GET* method to retrieve an array of existing databases.

Please note that all database management operations can only be accessed via the default database (*_system*) and none of the other databases.

Managing Databases using HTTP
-----------------------------

<!-- js/actions/api-database.js -->
@startDocuBlock get_api_database_current

<!-- js/actions/api-database.js -->
@startDocuBlock get_api_database_user

<!-- js/actions/api-database.js -->
@startDocuBlock get_api_database_list

<!-- js/actions/api-database.js -->
@startDocuBlock get_api_database_new

<!-- js/actions/api-database.js -->
@startDocuBlock get_api_database_delete
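Because the operations map to plain HTTP methods, they can be driven from any HTTP client. Below is a minimal sketch using Python's `requests` against the real `/_api/database` endpoints; the host, port, and credentials are placeholders to replace with your own:

```python
import requests

# All database management operations must go through the _system database.
BASE = "http://localhost:8529/_db/_system/_api/database"
AUTH = ("root", "")  # placeholder credentials

# GET: retrieve the array of existing databases
print(requests.get(BASE, auth=AUTH).json())

# POST: create a database
requests.post(BASE, json={"name": "example"}, auth=AUTH).raise_for_status()

# DELETE: drop it again
requests.delete(f"{BASE}/example", auth=AUTH).raise_for_status()
```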
30.290323
77
0.748669
eng_Latn
0.768738
1158ff2c8cd314ab004d0f0973e6f91253252cc5
263
md
Markdown
Daily Coding Problem/00506/README.md
kushagra1212/Competitive-Programming
5b68774c617d6abdf1b29893b1b13d47f62161e8
[ "MIT" ]
21
2020-09-27T05:32:11.000Z
2021-06-04T05:49:00.000Z
Practice/Daily Coding Problem/00506/README.md
devesh17m/Competitive-Programming
2d459dc8dc5ac628d94700b739988b0ea364cb71
[ "MIT" ]
null
null
null
Practice/Daily Coding Problem/00506/README.md
devesh17m/Competitive-Programming
2d459dc8dc5ac628d94700b739988b0ea364cb71
[ "MIT" ]
2
2021-08-04T12:35:21.000Z
2021-12-19T15:39:48.000Z
# Problem #506 [Medium]

This problem was asked by Fitbit.

Given a linked list, rearrange the node values such that they appear in alternating low -> high -> low -> high ... form.

For example, given 1 -> 2 -> 3 -> 4 -> 5, you should return 1 -> 3 -> 2 -> 5 -> 4.
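One possible approach (a sketch, not an official solution): walk the list once and enforce the alternating property pairwise, swapping values whenever a neighboring pair is out of order. At even positions a value must not exceed its successor; at odd positions it must not be smaller:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def alternate(head):
    """Rearrange values in place into low -> high -> low -> high ... order."""
    node, i = head, 0
    while node and node.next:
        if (i % 2 == 0 and node.value > node.next.value) or \
           (i % 2 == 1 and node.value < node.next.value):
            node.value, node.next.value = node.next.value, node.value
        node, i = node.next, i + 1
    return head

# 1 -> 2 -> 3 -> 4 -> 5 becomes 1 -> 3 -> 2 -> 5 -> 4
head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
alternate(head)
values = []
while head:
    values.append(str(head.value))
    head = head.next
print(" -> ".join(values))  # 1 -> 3 -> 2 -> 5 -> 4
```

Each swap only exchanges values, never relinks nodes, so the pass is O(n) time and O(1) extra space.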
52.6
203
0.646388
eng_Latn
0.999023
1159673dd23fe0e475d58e55b3d852657d3956b8
1,040
md
Markdown
esp/README.md
abobija/discord-rfid
4da59290d5be54005f2fae544c79e611ecebfd8b
[ "MIT" ]
2
2021-04-06T16:18:43.000Z
2021-05-13T06:37:40.000Z
esp/README.md
abobija/discord-rfid
4da59290d5be54005f2fae544c79e611ecebfd8b
[ "MIT" ]
null
null
null
esp/README.md
abobija/discord-rfid
4da59290d5be54005f2fae544c79e611ecebfd8b
[ "MIT" ]
null
null
null
# :robot: ESP32 RFID Discord Bot

This is an ESP-IDF application that represents an ESP32 RFID Discord Bot, and it is a part of the [discord-rfid](https://github.com/abobija/discord-rfid) repository.

## Build

Generate Discord certificates with the next command:

```
./components/esp-discord/certgen.sh
```

And then build the project:

```
idf.py build
```

## Components

The project uses and depends upon the next components:

- [esp-discord](https://github.com/abobija/esp-discord)
- [esp-idf-rc522](https://github.com/abobija/esp-idf-rc522)

## Wiring

Connect the MFRC522 module with the ESP32 according to the next table:

| ESP32  | MFRC522 |
| ------ | ------- |
| GPIO25 | MISO    |
| GPIO23 | MOSI    |
| GPIO19 | SCK     |
| GPIO22 | SDA     |

All of the above GPIO pinout is configurable (via menuconfig - `idf.py menuconfig`) in the `Discord RFID` menu.

## Author

GitHub: [abobija](https://github.com/abobija)<br>
Homepage: [abobija.com](https://abobija.com)

## License

[MIT](LICENSE)
22.608696
154
0.6375
eng_Latn
0.509498
1159a7327671c49fa4ebc2f4c459beacc64c085d
48
md
Markdown
README.md
aa953788477/jquery-grocery
30613fdfdc0db62ff0b4dae2a9b5120d1ff974f4
[ "MIT" ]
1
2016-12-05T03:04:08.000Z
2016-12-05T03:04:08.000Z
README.md
aa953788477/jquery-grocery
30613fdfdc0db62ff0b4dae2a9b5120d1ff974f4
[ "MIT" ]
null
null
null
README.md
aa953788477/jquery-grocery
30613fdfdc0db62ff0b4dae2a9b5120d1ff974f4
[ "MIT" ]
3
2016-11-10T05:16:37.000Z
2018-09-10T12:52:35.000Z
# jquery-grocery

A jQuery "grocery store": jQuery-based wrapper code collected from real projects.
16
30
0.854167
pol_Latn
0.326683
115a0cbeda59b54a92a1476378c30b62acd605b3
804
md
Markdown
docs/v1/HostMetrics.md
DataDog/datadog-api-client-python
de2fc57dbde9acf4b8c8eef94ac29911227a62a2
[ "Apache-2.0" ]
32
2021-01-07T15:09:56.000Z
2022-01-30T05:49:23.000Z
docs/v1/HostMetrics.md
DataDog/datadog-api-client-python
de2fc57dbde9acf4b8c8eef94ac29911227a62a2
[ "Apache-2.0" ]
228
2020-09-03T14:03:54.000Z
2022-03-31T20:16:12.000Z
docs/v1/HostMetrics.md
DataDog/datadog-api-client-python
de2fc57dbde9acf4b8c8eef94ac29911227a62a2
[ "Apache-2.0" ]
12
2020-09-15T21:36:03.000Z
2022-03-31T17:13:17.000Z
# HostMetrics

Host Metrics collected.

## Properties

| Name       | Type      | Description                                                                  | Notes      |
| ---------- | --------- | ---------------------------------------------------------------------------- | ---------- |
| **cpu**    | **float** | The percent of CPU used (everything but idle).                                | [optional] |
| **iowait** | **float** | The percent of CPU spent waiting on the IO (not reported for all platforms).  | [optional] |
| **load**   | **float** | The system load over the last 15 minutes.                                     | [optional] |

[[Back to Model list]](README.md#documentation-for-models) [[Back to API list]](README.md#documentation-for-api-endpoints) [[Back to README]](README.md)
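A short usage sketch for this model, assuming the generated layout of this Python client (`datadog_api_client.v1.model.host_metrics`); the import path and the `to_dict` helper may vary between client versions:

```python
from datadog_api_client.v1.model.host_metrics import HostMetrics

# All three properties are optional floats, per the table above.
metrics = HostMetrics(cpu=42.5, iowait=3.1, load=0.65)
print(metrics.to_dict())  # {'cpu': 42.5, 'iowait': 3.1, 'load': 0.65}
```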
57.428571
152
0.447761
eng_Latn
0.62079
115bac8c59ec14c46ff68e2fae87ebb6fe109ab3
484
md
Markdown
docs/Model/FundOwnership.md
iskenderov/finnhub-php
8d7750152d66f89df14ef1fe56b2ea7afd473883
[ "Apache-2.0" ]
13
2020-07-13T17:03:08.000Z
2022-02-28T11:08:26.000Z
docs/Model/FundOwnership.md
iskenderov/finnhub-php
8d7750152d66f89df14ef1fe56b2ea7afd473883
[ "Apache-2.0" ]
3
2020-07-31T12:14:26.000Z
2021-10-09T18:57:38.000Z
docs/Model/FundOwnership.md
iskenderov/finnhub-php
8d7750152d66f89df14ef1fe56b2ea7afd473883
[ "Apache-2.0" ]
9
2020-11-02T11:02:38.000Z
2022-03-28T06:20:26.000Z
# FundOwnership

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**symbol** | **string** | Symbol of the company. | [optional]
**ownership** | [**\Finnhub\Model\FundOwnershipInfo[]**](FundOwnershipInfo.md) | Array of investors with detailed information about their holdings. | [optional]

[[Back to Model list]](../../README.md#models) [[Back to API list]](../../README.md#endpoints) [[Back to README]](../../README.md)
44
160
0.590909
eng_Latn
0.360103
115c0ebe6b406cea16145a66b4ed7dc0c4357560
830
md
Markdown
docs/visual-basic/misc/specified-dll-function-not-found.md
michaelgoin/dotnet-docs
89e583ef512e91fb9910338acd151ee1352a3799
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/specified-dll-function-not-found.md
michaelgoin/dotnet-docs
89e583ef512e91fb9910338acd151ee1352a3799
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/specified-dll-function-not-found.md
michaelgoin/dotnet-docs
89e583ef512e91fb9910338acd151ee1352a3799
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "Specified DLL function not found" ms.date: 07/20/2015 ms.prod: .net ms.technology: - "devlang-visual-basic" ms.topic: "article" f1_keywords: - "vbrID453" ms.assetid: c0a308ee-5876-40af-be4b-1979397835df caps.latest.revision: 8 author: dotnet-bot ms.author: dotnetcontent --- # Specified DLL function not found The dynamic-link library (DLL) in a user library reference was found, but the DLL function specified was not found within the DLL. ## To correct this error 1. Specify a valid ordinal in the function declaration. 2. Make sure the DLL name and alias are correct. ## See Also [Error Types](../../visual-basic/programming-guide/language-features/error-types.md) [PAVEOVER Product Support and Accessibility](http://msdn.microsoft.com/en-us/14e1d293-7b6d-40a6-bf3e-a92f8ee6c88c)
30.740741
132
0.738554
eng_Latn
0.889311
115c849dcc9f9b4308abdfab37c8a9a72eb95fe3
18
md
Markdown
README.md
rm-code/muldraugh-tales
5e140d412c1ef53010ac54c93b3c05c8b5a9dd19
[ "MIT" ]
null
null
null
README.md
rm-code/muldraugh-tales
5e140d412c1ef53010ac54c93b3c05c8b5a9dd19
[ "MIT" ]
1
2015-06-23T09:02:35.000Z
2015-06-23T14:49:26.000Z
README.md
rm-code/muldraugh-tales
5e140d412c1ef53010ac54c93b3c05c8b5a9dd19
[ "MIT" ]
null
null
null
# Muldraugh Tales
9
17
0.777778
eng_Latn
0.49259
115cc5032281e2885b365c5a8055d469f2cba343
3,153
md
Markdown
README.md
AndrewKalil/samgj18
112d354792d910de65696021ebb6010fdc9ce6fc
[ "MIT" ]
null
null
null
README.md
AndrewKalil/samgj18
112d354792d910de65696021ebb6010fdc9ce6fc
[ "MIT" ]
null
null
null
README.md
AndrewKalil/samgj18
112d354792d910de65696021ebb6010fdc9ce6fc
[ "MIT" ]
null
null
null
<h1 align="center">Hi 👋, I'm Samuel</h1> <h3 align="center">A passionate Scala and Python developer from Colombia</h3> <p align="left"> <img src="https://komarev.com/ghpvc/?username=samgj18&label=Profile%20views&color=0e75b6&style=flat" alt="samgj18" /> </p> <p align="left"> <a href="https://twitter.com/samgj18" target="blank"><img src="https://img.shields.io/twitter/follow/samgj18?logo=twitter&style=for-the-badge" alt="samgj18" /></a> </p> - 🔭 I’m currently working on [Torre](https://torre.co/) - 🌱 I’m currently learning **Machine Learning, Scala & Python** - 👨‍💻 All of my projects are available at [https://samuelgomez.co/](https://samuelgomez.co/) - 📝 I regularly write articles on [https://ourspace.tech/](https://ourspace.tech/) - 💬 Ask me about **React, Scala and Python** - 📫 How to reach me **[email protected]** - 📄 Know about my experiences [https://drive.google.com/file/d/1jf3QIb59ORx7bBVdKaaPIvrpgWPSH3tU/view](https://drive.google.com/file/d/1jf3QIb59ORx7bBVdKaaPIvrpgWPSH3tU/view) <h3 align="left">Connect with me:</h3> <p align="left"> <a href="https://twitter.com/samgj18" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="samgj18" height="30" width="40" /></a> <a href="https://linkedin.com/in/https://www.linkedin.com/in/samuelgomezjimenez/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="https://www.linkedin.com/in/samuelgomezjimenez/" height="30" width="40" /></a> <a href="https://instagram.com/samuelg_18" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/instagram.svg" alt="samuelg_18" height="30" width="40" /></a> </p> <h3 align="left">Languages and Tools:</h3> <p align="left"> <a href="https://aws.amazon.com" target="_blank"> <img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/amazonwebservices/amazonwebservices-original-wordmark.svg" alt="aws" width="40" height="40"/> </a> <a href="https://www.python.org" target="_blank"> <img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/python/python-original.svg" alt="python" width="40" height="40"/> </a> <a href="https://www.scala-lang.org" target="_blank"> <img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/scala/scala-original.svg" alt="scala" width="40" height="40"/> </a> <a href="https://scikit-learn.org/" target="_blank"> <img src="https://upload.wikimedia.org/wikipedia/commons/0/05/Scikit_learn_logo_small.svg" alt="scikit_learn" width="40" height="40"/> </a> <a href="https://www.typescriptlang.org/" target="_blank"> <img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/typescript/typescript-original.svg" alt="typescript" width="40" height="40"/> </a> </p> <p><img align="center" src="https://github-readme-stats.vercel.app/api/top-langs?username=samgj18&show_icons=true&locale=en&layout=compact" alt="samgj18" /></p> <p><img align="center" src="https://github-readme-streak-stats.herokuapp.com/?user=samgj18&" alt="samgj18" /></p> About the setup for this blog [Setup](SETUP.md)
85.216216
1,044
0.719949
yue_Hant
0.192782
115d23a5074a4261c84a1ef934ffdb9ce9561a25
80
md
Markdown
2019/day5/README.md
tomhel/AoC_2019
c76c34235821864bc763f85d43cbcbfb9ed43469
[ "MIT" ]
1
2021-12-07T13:18:52.000Z
2021-12-07T13:18:52.000Z
2019/day5/README.md
tomhel/AoC
c76c34235821864bc763f85d43cbcbfb9ed43469
[ "MIT" ]
null
null
null
2019/day5/README.md
tomhel/AoC
c76c34235821864bc763f85d43cbcbfb9ed43469
[ "MIT" ]
null
null
null
# Day 5: Sunny with a Chance of Asteroids

https://adventofcode.com/2019/day/5
16
41
0.7375
kor_Hang
0.392245
115d3de9f28286871a0a2c89bf4c7dbda6642e69
47
md
Markdown
README.md
fryco/bash-scripts
27778f6d57c5fcc8f56438d748c1fe8c1b76f045
[ "MIT" ]
null
null
null
README.md
fryco/bash-scripts
27778f6d57c5fcc8f56438d748c1fe8c1b76f045
[ "MIT" ]
null
null
null
README.md
fryco/bash-scripts
27778f6d57c5fcc8f56438d748c1fe8c1b76f045
[ "MIT" ]
null
null
null
# bash-scripts

Bash scripts for common purposes.
15.666667
31
0.808511
eng_Latn
0.997442
115d90927e2f0bcd0fb39b93085a7c473351a7f3
135
md
Markdown
README.md
flamengo17/flamengo17
2339985ed9f880a3d002077d40efd046923325e6
[ "MIT" ]
null
null
null
README.md
flamengo17/flamengo17
2339985ed9f880a3d002077d40efd046923325e6
[ "MIT" ]
null
null
null
README.md
flamengo17/flamengo17
2339985ed9f880a3d002077d40efd046923325e6
[ "MIT" ]
null
null
null
# We are computational thinking students
# Group formed by: Nicolle, Marissa and Camili
# Colégio Estadual Gabriela Mistral
33.75
50
0.814815
por_Latn
0.571037
115d9f471b9a59c6689134d1fde6fce880dbda41
2,497
md
Markdown
docs/framework/winforms/controls/picturebox-control-overview-windows-forms.md
felpasl/docs.pt-br
1b47adcbc2e400f937650f9de1cd0c511e80738e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/winforms/controls/picturebox-control-overview-windows-forms.md
felpasl/docs.pt-br
1b47adcbc2e400f937650f9de1cd0c511e80738e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/winforms/controls/picturebox-control-overview-windows-forms.md
felpasl/docs.pt-br
1b47adcbc2e400f937650f9de1cd0c511e80738e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: PictureBox Control Overview (Windows Forms)
ms.date: 03/30/2017
f1_keywords:
- PictureBox
helpviewer_keywords:
- PictureBox control [Windows Forms], about PictureBox controls
- picture controls [Windows Forms], about picture controls
- image controls [Windows Forms], about image controls
ms.assetid: e5befee7-dc29-4888-a7c4-3b177e394112
ms.openlocfilehash: d725192daeb9529b38c170184a17c927357f2175
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 01/23/2019
ms.locfileid: "54642851"
---

# <a name="picturebox-control-overview-windows-forms"></a>PictureBox Control Overview (Windows Forms)

The Windows Forms <xref:System.Windows.Forms.PictureBox> control is used to display graphics in bitmap, GIF, JPEG, metafile, or icon format.

## <a name="key-properties-and-methods"></a>Key properties and methods

The picture that is displayed is determined by the <xref:System.Windows.Forms.PictureBox.Image%2A> property, which can be set at run time or at design time. Alternatively, you can specify the image by setting the <xref:System.Windows.Forms.PictureBox.ImageLocation%2A> property and then loading the image synchronously using the <xref:System.Windows.Forms.PictureBox.Load%2A> method or asynchronously using the <xref:System.Windows.Forms.PictureBox.LoadAsync%2A> method. The <xref:System.Windows.Forms.PictureBox.SizeMode%2A> property controls how the image and the control fit each other. For more information, see [How to: Modify the Size or Placement of a Picture at Run Time](../../../../docs/framework/winforms/controls/how-to-modify-the-size-or-placement-of-a-picture-at-run-time-windows-forms.md).

## <a name="see-also"></a>See also

- <xref:System.Windows.Forms.PictureBox>
- [How to: Load a Picture Using the Designer](../../../../docs/framework/winforms/controls/how-to-load-a-picture-using-the-designer-windows-forms.md)
- [How to: Modify the Size or Placement of a Picture at Run Time](../../../../docs/framework/winforms/controls/how-to-modify-the-size-or-placement-of-a-picture-at-run-time-windows-forms.md)
- [How to: Set Pictures at Run Time](../../../../docs/framework/winforms/controls/how-to-set-pictures-at-run-time-windows-forms.md)
- [PictureBox Control](../../../../docs/framework/winforms/controls/picturebox-control-windows-forms.md)
83.233333
866
0.780136
por_Latn
0.909405
115dd9afccea6c23f2c66850a439fd0b616e5e13
85,868
markdown
Markdown
_posts/2007-04-30-titleenabled-networking2.markdown
api-evangelist/patents-2007
da723589b6977a05c0119d5476325327da6c5a5c
[ "Apache-2.0" ]
1
2017-11-15T11:20:53.000Z
2017-11-15T11:20:53.000Z
_posts/2007-04-30-titleenabled-networking2.markdown
api-evangelist/patents-2007
da723589b6977a05c0119d5476325327da6c5a5c
[ "Apache-2.0" ]
null
null
null
_posts/2007-04-30-titleenabled-networking2.markdown
api-evangelist/patents-2007
da723589b6977a05c0119d5476325327da6c5a5c
[ "Apache-2.0" ]
2
2019-10-31T13:03:32.000Z
2020-08-13T12:57:02.000Z
--- title: Title-enabled networking abstract: Methods and apparatus are provided for processing packets in a network. A received packet includes title materials which include one or more of a title object, a component of the title object, or a reference to the title object. The title object is a digital bearer instrument representing at least one right relating to processing of the packet in the network which may be redeemed by presentation of the title object to a title-enabled device or process operating in the network. Upon validation of the title object, the packet is processed in the network in accordance with the at least one right represented by the title object. url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=09621372&OS=09621372&RS=09621372 owner: OnCircle, Inc. number: 09621372 owner_city: Santa Clara owner_country: US publication_date: 20070430 --- This application claims priority under 35 U.S.C. 19 e to U.S. Provisional Patent Application No. 60 746 032 filed Apr. 29 2006 the entire disclosure of which is incorporated herein by reference for all purposes. A portion of the disclosure of this patent document may contain material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent files or records but otherwise reserves all copyright rights whatsoever. The following notice shall apply to this document Copyright 2007 Navio Systems Inc. The present invention provides systems methods and software for providing and managing rights to use networks and network services using digital bearer instruments that express at least one right related to providing and or managing a network or network service. The invention has applications in the fields of computer science networking and electronic business methods. A title is a digital bearer instrument that expresses at least one right. Title materials include titles portions of titles for example such as a specific right definition a reference to a specific title or right and independently validatable portions of titles. A stub is one example of an independently validatable portion of a title. Title materials may also include specific instances of digital bearer instruments that may not include a specific right. Title materials are presented to title enabled processes computers and devices which use the presented title materials to operate on and or facilitate redemption of rights expressed by a title. Titles employed by specific embodiments of the present invention are related to the title technologies provided by Navio Systems Inc. of Cupertino Calif. As described in U.S. Patent Publication US 2006 0036548 A1 the entire disclosure of which is incorporated herein by reference for all purposes titles can be validated by using a title resolver and or a state server both of which are components of a title management system. is a flow chart depicting an example of such a title validation process. The title is submitted by a client to a title resolver service for authentication . The title resolver service examines the title s digital signature . If the digital signature is incorrect the title resolver service rejects the title and the title validation process terminates with an invalid title result. 
If the digital signature is correct, the title resolver service forwards the title to the state server process for further validation of the state value in the title's stub. The state server process uses the state value or other indicia that are part of the title, computes a value from these items, and compares it against a value stored in a database. If the two values match, the title is validated by the state server. A title-valid response is returned to the title resolver service, which in turn returns a title-valid response to the client. If the state server cannot validate the title, it returns a title-invalid response and the validation process terminates.

The above example is one method of validating titles; additional methods of validating title materials include digital signatures, comparison of transaction indicia to transaction databases, and other methods well known to those skilled in the art.

According to specific embodiments described in U.S. Patent Publication US 2006/0036548 A1, referenced above, when a title is used, for example during a redeem action, the title is presented to a state server for authentication by a resolver. The state server performs the authentication process and verifies the security indicia contained within the title against the current state maintained in the state collection. The security indicium for a title is contained in the title's authenticator stub. The state server may also perform endorsement and authentication as supported by the title transaction ecosystem. A variety of techniques and algorithms can be supported by the title technology, and the technique and algorithm employed on a particular title can be subsequently conveyed to the state server for authentication.

In one class of embodiments, a chained hash mechanism is used for title authentication. According to a particular embodiment, the chained hash may be generated by repeatedly hashing an initial value v, which may include title information combined with a random number or other appropriate data, using a cryptographically strong hash function H such as MD5 or SHA-1. The first iteration of the chained hash algorithm gives h0 = H(v). The second iteration gives h1 = H(h0). The nth iteration gives hn = H(hn-1), where n represents the desired length of the hash chain. This hash chain of length n may represent any value within the system, from the maximum number of redemptions allowed by a title to the maximum number of users connected to a system, or any other value required by the system. In another embodiment, v may be composed of a random value and a hash of the title, to later be used for title integrity verification.

In another embodiment, the state server component may generate hn and securely store n and the value v that was used as the initial hash value for h0. The value hn may then be set in the authenticator stub for the title, along with the name of the hash algorithm used to create hn. In one instance, the client may then later present the title upon redemption, where the state server may extract the value hn from the authenticator stub along with the hash algorithm name specified by that stub. The state server may then look up its stored values v and n and compute hi = H(hi-1), where h0 = H(v) and i = 1, 2, 3, . . . , n. The computed value hi would be checked for equality with hn, and if equal, the title would be authenticated.
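To make the chained-hash scheme concrete, the following minimal Python sketch implements the server-stored variant described above: the state server keeps (v, n) per title, issues h_n in the authenticator stub, and on each redemption recomputes the chain, compares it against the presented stub, stores n-1, and returns a fresh stub containing h_(n-1). The class and method names here (StateServer, endorse, redeem) are illustrative assumptions, not interfaces defined by the patent.

```python
import hashlib
import os
from typing import Dict, Optional, Tuple

def H(data: bytes) -> bytes:
    """One step of the chained hash; SHA-1 is one of the functions named above."""
    return hashlib.sha1(data).digest()

def chain(v: bytes, n: int) -> bytes:
    """Compute h_n, where h_0 = H(v) and h_i = H(h_(i-1))."""
    h = H(v)
    for _ in range(n):
        h = H(h)
    return h

class StateServer:
    """Server-stored variant: keeps (v, n) per title; the stub carries h_n."""

    def __init__(self) -> None:
        self.state: Dict[str, Tuple[bytes, int]] = {}

    def endorse(self, title_id: str, n: int) -> bytes:
        """Create a chain of length n and return the authenticator stub h_n."""
        v = os.urandom(16)            # may also fold in a hash of the title itself
        self.state[title_id] = (v, n)
        return chain(v, n)

    def redeem(self, title_id: str, stub: bytes) -> Optional[bytes]:
        """Validate a presented stub; on success store n-1 and return a new stub."""
        v, n = self.state[title_id]
        if n <= 0 or chain(v, n) != stub:
            return None               # invalid stub, or no redemptions remaining
        self.state[title_id] = (v, n - 1)
        return chain(v, n - 1)        # fresh stub h_(n-1) handed back to the client

# Usage: n redemptions succeed, after which the title must be re-endorsed.
server = StateServer()
stub = server.endorse("title-123", n=3)
for _ in range(3):
    stub = server.redeem("title-123", stub)
    assert stub is not None
assert server.redeem("title-123", stub) is None   # chain exhausted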
The server may then store n-1 in place of n, generate a new authenticator stub containing hn-1 and the name of the algorithm used, and return that stub back to the client, where the title may be authenticated again using the above process as long as n > 0.

In yet another embodiment, the state server generates the hash as defined above and sets the values hn and ve, along with the name of the hash algorithm used, in the authenticator stub, where ve is the encrypted value v. The state server would only need to store n in this embodiment. Upon redemption, the client would present the title with the authenticator stub containing ve, hn, and the name of the hash algorithm to use. The state server component may then decrypt ve to get vd and compute hi = H(hi-1), where h0 = H(vd) and i = 1, 2, 3, . . . , n. The state server component would then verify hi = hn, and if true, the title would be authenticated. The server may then store n-1 in place of n, generate a new authenticator stub containing hn-1, ve, and the name of the hash algorithm used, and return that stub back to the client, where the title may be authenticated again using the above process as long as n > 0.

In yet another embodiment, the client is responsible for generating the hash chain. In one instance, the client generates the value v using the techniques described above or another appropriate method. The client then computes the hash chain hi = H(hi-1), where h0 = H(v) and i = 1, 2, 3, . . . , n, yielding the chain h0, h1, h2, . . . , hn. The client sends its credentials, h0, and the name of the hash algorithm used to the state server component. The state server component verifies the client's credentials and stores h0 in its secure data store. Upon title redemption, the client sends the title, with h1 and the name of the hash algorithm embedded in the authenticator stub, to the state server component for verification. The state server component retrieves h0 from its secure data store and hashes h0 using the algorithm indicated to produce h1'. The title is authenticated if and only if h1 = h1'. The state server component then replaces h0 with h1 in its secure data store; the client can no longer use h1. Note that in this embodiment the client will always supply hi and the state server component will always store hi-1. The ith redemption consists of the value hi supplied by the client, which the state server component can verify using hi-1. Each such redemption requires no calculations from the client and only a single hash operation by the state server component.

In another embodiment, when a chain of hashes expires (such as when n = 0), the state server can automatically perform a re-endorsement of the title and create a new chain. The re-endorsement can occur selectively, as permitted on the particular title.

In another embodiment, a random value technique is applied to authenticate a title. A random value is generated by the state server and placed in the authenticator stub. The state server also maintains a record of the random value in its state collection. The random value would be changed by the state server every time the title is authenticated, and only the title object with the correct random value would be valid.

A network is two or more computers or other devices connected together using a communication system for the purpose of communicating and sharing resources. A network session, or sometimes simply a session, includes a set of discrete network packets that effect a particular communication between one or more computers or devices.
Networks typically comprise dedicated hardware and software systems, commonly called network devices, which function in conjunction with communication links to operably connect two or more computers or network devices. Switches, routers, cable modems, wireless access points, and firewalls are all non-limiting examples of network devices. One particular type of network device is a wireless access point, such as those produced commercially by Netgear of Santa Clara, Calif., Linksys of Irvine, Calif., and Skypilot Networks of Santa Clara, Calif. Wireless access points permit over-the-air wireless (OSI level 1 and 2) communication links between computers and computer networks in accordance with one or more of the 802.11a, 802.11b, 802.11g, 802.11i, and other wireless protocols. Each wireless access point can be configured to respond to, and be accessed using, at least one public name called a service set identifier (SSID).

Network communications between computers, network devices, and other devices attached to a network are performed using one or more network protocols. The well-known Open System Interconnection (OSI) seven-layer networking model defines several types of network communication. These layers are generally considered a link layer group (OSI layers 1-2), a protocol layer group (OSI layers 3-5), and an application layer group (OSI layers 6-7). Network protocols may be categorized by the OSI layers in which they are supported: link layer protocols like 802.11 as described above; protocol layer group protocols such as TCP/IP, UDP, IPv4, IPv6, MPLS, DHCP, BOOTP, and DNS; and application layer group protocols such as SMTP, POP, IMAP, HTTP, SOAP, and SMS. The computer-implemented software or firmware that implements some or all of a network protocol is commonly called a protocol stack.

Network devices typically facilitate movement of discrete units of information called packets over communication links between network devices and computers to effect communication and resource sharing. Sometimes these networks are called packet-switched networks. Network devices inspect packets and process them according to information found in the packet's contents. Typically this information is located in the packet header, although it could be located anywhere within a packet or sequence of packets.

A user, computer, or device can be granted access to some networks, computers, devices, or shared resources and not to others, based on the level of service they have contracted for, company or government clearances, who they work for, and a variety of other factors. Access to one or more shared resources, and even to the network itself, is provided on the basis of authentication and authorizations of a user, computer, or device. Authentication is the mechanism for proving an identity of the user, computer, or device. Authentication of users is often provided using a user id and password, a network address, or other information possessed by a user or known to the user. Often these authentication methods rely on the user entering his authentication information into a computer, which is then used to authenticate the user. Authentication of computers and network devices is generally performed using automated mechanisms such as public key infrastructure (PKI). Additional sets of protocols are used to support authentication at a protocol group layer, using variants of the EAP protocol, or at an application group layer, such as RADIUS and Kerberos.
Once identified, authorization materials associated with the user, computer, or device can be obtained and used to make access and provisioning decisions. Authorization is a specification of what the user, computer, or device is allowed to do and what resources it is allowed to share, and the subsequent enforcement of this specification to restrict or provide the resources authorized for a particular user, computer, or device. Authorizations may extend not only to the access to and use of one or more shared resources, but may include the manner in which one or more services or resources are provided and the percentage of network resources, such as bandwidth, that can be used by a particular user, computer, or device. A substantial infrastructure is required to provide for the authentication of users, computers, and devices, and to provide for the authorization and provisioning of the user, computer, or device in accordance with an authorization specification.

One challenge surrounding the use of network devices and systems is that they have differing authentication, authorization, and even expressions of rights. For example, a router from one network equipment provider may define user access and network traffic rights in a different way than a router from another provider, which in turn may define user access and network rights differently than a DSL modem. Additional challenges arise when a plurality of network devices and servers require multiple authentications from a single user. As each of these authentications and authorizations typically requires the user to enter information at a computer, a user is sometimes required to authenticate several times to gain access to a network resource. Additionally, there is generally no mechanism by which a user can provide authentication or authorization materials to parts of the network between the user and an end network resource.

IPv6 is a network layer protocol for packet-switched networking. It is intended as the successor of IPv4, which is the current version of the Internet Protocol in general use. The changes between IPv6 and IPv4 are relatively conservative, and most transport and application layer protocols need little or no change to work over IPv6. The IPv6 proposed standard, RFC 2460 (Deering, 1998), defines a basic header and numerous extension headers, including an authentication header, a hop-by-hop options header, a routing header, and a destination options header, that may be inserted into an IPv6 packet. An IPv6 packet comprises the IPv6 basic header followed by any number of extension headers, each of which has its own detailed contents. IPv6 further defines destination options headers as implementation-specific headers. These headers provide space in the protocol frame into which applications or protocol stacks may insert implementation-specific materials, which are then used by applications present on routers, switches, servers, and other network devices to pass information within the protocol framework.

In computer networking and telecommunications, MPLS is a network protocol that emulates some properties of a circuit-switched network over a packet-switched network. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching network devices and network clients. It can be used to carry many different kinds of traffic, including IP packets as well as native ATM, SONET, and Ethernet frames.
The standards for MultiProtocol Label Switching (MPLS) are set forth in RFC 3031 (E. Rosen et al., January 2001). An IP packet comprises three elements: the first element is a header, which marks the beginning of the packet; the second element is the payload, which contains the information to be carried in the packet; and the third element is a trailer, which marks the end of the packet. Other protocols such as XNS have a similar structure. MPLS works by prepending protocol packets with an additional MPLS header containing one or more labels. This list of labels in a MPLS header is commonly called a label stack. Prepending an existing protocol packet with a MPLS header transforms the existing packet into a MPLS payload. The standards for MPLS label stack encoding are set forth in RFC 3032 (E. Rosen et al., January 2001).

In MPLS networking, a Label Switched Path (LSP) is a path through an MPLS network. An LSP is sometimes referred to as an MPLS tunnel because the forwarding of packets through an LSP is opaque to higher network layers. The LSP is set up based on criteria in the forwarding equivalence class (FEC), which is a group of IP packets that are forwarded in the same manner, over the same network path, and with the same forwarding treatment. FEC is typically determined by destination IP address, quality of service class, allocated bandwidth, and other implementation-dependent factors.

The entrance and exit points of an LSP are both known as Label Edge Routers (LERs), sometimes called ingress and egress routers or, more generically, border routers. When an unlabeled packet enters the LSP path through the ingress router, the router first determines the FEC the packet should be in, appends a MPLS header to the packet, and then inserts one or more labels in the packet's newly created MPLS header. It then forwards the packet along to the next router in the path.

Other routers along the path are known as Label Switching Routers (LSRs) or, more generically, as transit routers. When a labeled packet is received by a transit router, the topmost label is examined. Based on the contents of the label, a swap (swap to a new label), push (add another label to the stack), or pop (remove the top label from the stack) operation can be performed on the packet's label stack. Routers can have pre-built lookup tables that tell them which kind of operation to do based on the topmost label of the incoming packet, which enables the routers to process the packet very quickly. During these operations the contents of the packet below the MPLS label stack are not examined. The forwarding of the packet is done based on the contents of the labels, which allows protocol-independent packet forwarding that does not need to look at a protocol-dependent routing table and avoids the computationally expensive IP longest-prefix match at each hop along the path. At the egress router, when the last label has been popped, only the payload in the MPLS packet remains. This payload can be an IP packet or any of a number of other kinds of information.

An aspect of MPLS routing is that routing must often be performed on the basis of an attribute of a user's device, such as the source IP address, the port that their network packets enter the ingress router on, or another networking-based attribute. There is no simple way to configure a user so that their traffic receives a specific quality of service or allocated bandwidth without considering aspects of the device that the user is using.
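As a concrete illustration of the RFC 3032 encoding and the push/pop operations just described, the sketch below packs 32-bit label stack entries (a 20-bit label, 3-bit experimental field, bottom-of-stack bit, and 8-bit TTL) and prepends them to an existing payload. This is a minimal sketch for illustration; the example label value is hypothetical, and real routers perform these operations in hardware against FEC-derived forwarding tables.

```python
import struct
from typing import List, Tuple

def encode_label_entry(label: int, exp: int, bottom: bool, ttl: int) -> bytes:
    """Pack one 32-bit MPLS label stack entry per RFC 3032:
    label (20 bits) | exp (3 bits) | S, bottom-of-stack (1 bit) | TTL (8 bits)."""
    assert 0 <= label < (1 << 20) and 0 <= exp < 8 and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def push_labels(payload: bytes, labels: List[Tuple[int, int, int]]) -> bytes:
    """Ingress push: prepend a label stack (top label first; last entry gets S=1)."""
    stack = b"".join(
        encode_label_entry(label, exp, i == len(labels) - 1, ttl)
        for i, (label, exp, ttl) in enumerate(labels)
    )
    return stack + payload

def pop_label(packet: bytes) -> Tuple[int, bool, bytes]:
    """Transit/egress pop: remove and decode the topmost label entry."""
    (word,) = struct.unpack("!I", packet[:4])
    return word >> 12, bool((word >> 8) & 1), packet[4:]

# Usage: an ingress router pushes a label chosen for the packet's FEC onto an
# IP packet; a downstream router pops it back off. The label value is made up.
labeled = push_labels(b"...ip packet bytes...", [(0x14d, 0, 64)])
label, bottom_of_stack, payload = pop_label(labeled)
assert label == 0x14d and bottom_of_stack
```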
In one example, network traffic from a first user is routed to a first border router of a network, where it is inspected by the border router. The border router identifies the network address of the destination as belonging to another network and has a routing policy indicating that the traffic should be routed in accordance with a specific routing policy, one that routes all traffic between the ingress and egress routers via edge routers, excluding the traffic from the core routers of the network. In this example, the routing policy is defined using a MPLS label corresponding to an EDGE policy that is configured as a LSP around the edge of the network to the egress router operably connected to the destination network.

The first border router, functioning as an ingress router, identifies the FEC corresponding to the traffic and adds a MPLS header and label corresponding to the EDGE routing policy to the network traffic. The router then routes the network traffic in accordance with its MPLS label, e.g., around the edge of the network from router to router until it reaches the far border router. That border router functions as an egress router for the first network and as an ingress router for the second network, and can operate in a variety of configurations. The net result of these configurations is typically that the network traffic is relabeled with a MPLS label corresponding to a CORE policy when it enters the second network. The network traffic is then routed across the second network to its egress border router and on to its destination.

Different routing policies may be applied based upon different source and destination addresses, network traffic type (e.g., VoIP traffic may have a higher priority than file transfer traffic), allocated bandwidth, desired quality of service, or other aspects of the network traffic. Unfortunately, there is no way for a user to connect in on various networks and receive an appropriate service level without a wide-reaching service infrastructure that encompasses all of the routers and requires specific authorization and sign-ons.

Provisioning is the process by which a device is configured to operate on a network. In simplest form, a provisioning process is performed when a device is to be connected with a network. Provisioning may be manual, where a user or administrator forms the association between a device and a network and optionally assigns one or more network configuration parameters, such as network addresses, to specific devices. Optionally, provisioning may be automatically performed. Numerous provisioning protocols have been introduced; one common protocol is called the Dynamic Host Configuration Protocol.

Dynamic Host Configuration Protocol (DHCP) is a network protocol for automatically provisioning devices on a network. DHCP is a client-server protocol between a client network device (the device that requires provisioning) and a server network device, which provides the provisioning information. DHCP is an extension to a prior-generation provisioning protocol called BOOTP. DHCP is generally used to assign network addresses and other network configuration parameters to devices that are connected to the network, in a process called dynamic addressing. By using dynamic addressing, a device can have a different IP address and other network parameters each time it connects to the network.
Many ISPs use dynamic IP addressing for dial-up users. In some systems the device's IP address can even change while it is still connected. DHCP also supports a mix of static and dynamic IP addresses. The standards for Dynamic Host Configuration Protocol (DHCP) are set forth in RFC 2131 (R. Droms, March 1997). BOOTP is interoperable with DHCP and is primarily described in RFC 1534 and RFC 1542.

Dynamic addressing simplifies network administration because the software keeps track of IP addresses rather than requiring an administrator to manage the task. This means that new devices can be added to a network without having to manually assign a unique IP address to each device. Dynamic addressing also allows network addresses to be assigned on an as-needed basis, which is increasingly important due to the finite number of IP addresses and the ever-increasing number of devices being used on networks, including computers, cellular phones, handheld devices, etc.

In a first step, sometimes called the discovery phase, the client broadcasts a DHCPDISCOVER message on its local network to find available DHCP servers. In some embodiments, a client may include options in its DHCPDISCOVER message. DHCP servers may use information in the DHCPDISCOVER message, including options information, in determining whether to respond to a DHCPDISCOVER message.

In a second step, known as the offer phase, the DHCP server responds to a client broadcast by reserving an IP address for the client and sending a DHCPOFFER message back across the network to the client. The DHCP server typically determines the IP address configuration based on its configuration database. In some embodiments, the determination is made based upon the client's hardware address (e.g., the MAC address, as specified in the CHADDR field). The DHCP server specifies the reserved, offered IP address in the YIADDR field and may provide other network configuration parameters in options fields. The offered IP address and other network configuration materials are said to be leased to the network client device.

In a third step, known as the request phase, the client receives the offer and requests the address that the server specified, using a DHCPREQUEST message. If the client has received offers from multiple servers, it specifies the DHCP server from which it has accepted the offer and specifically rejects offers from all other servers. DHCPREQUEST also can be used to request the client's last-known IP address from a specific server. If the client is still in a network where the former IP address is valid, the server might grant the request. DHCPREQUEST also can be used to request an extension in the lease of an existing IP address.

In the fourth and final step, known as the response phase, the DHCP server receives the DHCPREQUEST message from the network client and returns a DHCPACK acknowledgment packet to the client. This packet includes information such as the lease duration for the IP address and any other network configuration information that the client might have requested. The client configures its network interface with the supplied information, and at this point the network configuration process is complete.

DHCP is implicitly limited in that it provides fixed network parameters but is unable to request provisioning in accordance with rights or entitlements for other network-specific attributes such as quality of service or a specific minimum or maximum bandwidth.
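As an illustration of the discovery phase just described, the sketch below assembles a minimal DHCPDISCOVER payload: the fixed header layout follows RFC 2131, with a message-type option appended after the options magic cookie. The MAC address and helper names are illustrative assumptions; a real client would broadcast this as a UDP datagram from port 68 to port 67.

```python
import os
import struct

MAGIC_COOKIE = bytes([99, 130, 83, 99])   # RFC 2131 options magic cookie

def dhcp_option(code: int, data: bytes) -> bytes:
    """One DHCP option in TLV form: code, length, value."""
    return bytes([code, len(data)]) + data

def build_dhcpdiscover(mac: bytes) -> bytes:
    """Minimal DHCPDISCOVER: fixed header (RFC 2131) plus a message-type option."""
    xid = os.urandom(4)                       # transaction id echoed by the server
    header = struct.pack(
        "!BBBB4sHH4s4s4s4s16s64s128s",
        1, 1, 6, 0,                           # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
        xid, 0, 0x8000,                       # secs=0, flags: broadcast bit set
        b"\x00" * 4, b"\x00" * 4,             # ciaddr, yiaddr (none yet)
        b"\x00" * 4, b"\x00" * 4,             # siaddr, giaddr
        mac.ljust(16, b"\x00"),               # chaddr
        b"\x00" * 64, b"\x00" * 128,          # sname, file
    )
    options = (
        dhcp_option(53, b"\x01")              # option 53: DHCP message type = DISCOVER
        + b"\xff"                             # option 255: end of options
    )
    return header + MAGIC_COOKIE + options

packet = build_dhcpdiscover(bytes.fromhex("00112233AABB"))
```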
Today, each individual differentiation mechanism requires the user to have separately pre-programmed their personal devices. This is costly from a provisioning and support standpoint. Additionally, the network provider must provide specialty provisioning at nodes and points of presence of wide-area and public networks in order to support a plurality of types of differentiated service. Again, this is costly from a provisioning and deployment standpoint. Depending upon the equipment used in constructing the network, there are various limitations in the number of types of specialty provisioning that are supported for each point of presence. These costs and limitations limit the number and types of traffic that may connect to a point of presence and be served by a wide-area or public access network.

Consider a typical public wireless network; such a public network might be found within airports, malls, coffee shops, and other locations. Alternatively, such a public network might be a citywide wireless network of the type being installed in major metropolitan areas around the USA. Such a typical public wireless network might include a plurality of wireless access points using such technologies as 802.11a, 802.11b/g, mesh networking technologies, and other wireless networking mechanisms known to those skilled in the art. Collectively, these are the points of presence for the wireless network. Each of these points of presence is connected, using traditional networking technologies, to one or more wireless gateways, routers, or switches. The wireless gateway is used to route DHCP and subsequent authentication traffic to one or more DHCP servers and RADIUS servers operable on the public wireless network. In some embodiments, the DHCP server and RADIUS server chosen may be on a network connected to the public wireless network.

The DHCP server may provide different network configuration parameters (e.g., network and gateway addresses) to a device connecting to the wireless network based upon, for example, the SSID of the wireless access point that the device is connected to. Setting up each of these SSIDs and managing the DHCP services based on the SSID is a complex and time-consuming task, and is further complicated by authentication requirements once the connection is established. The use of static SSIDs requires the use of real-time authentication mechanisms, as a device that is no longer eligible to access the network may retain the SSID configuration after its privileges have lapsed. In some implementations, separate authentication mechanisms must be provided by a network provider to ensure that users of devices connected to a network are actually authorized to use the network. Failure to separately authenticate to the network may cause a device, or the user of a device, to not receive contracted levels of service from the network, if service is provided at all.

Network traffic flows to and from the wireless access points, the Internet, and any private or public networks attached to the public wireless network. In some network implementations, the network traffic is routed using MPLS or a related routing technology. Any required differentiated service that is performed or managed can also be provided using MPLS or a related routing technology, as described above. One additional aspect of conventional wireless networks is that a user must often be able to operate on more than one of these networks because they roam from network to network.
This increases the complexity of the network, requires a plurality of RADIUS and other authorization/authentication servers, and complicates the user's portable device configurations. Each configuration of the user's portable device also may interfere with one or more other configurations of the portable device.

According to various embodiments of the invention, methods and apparatus are provided for processing packets in a network. A received packet includes title materials, which include one or more of a title object, a component of the title object, or a reference to the title object. The title object is a digital bearer instrument representing at least one right relating to processing of the packet in the network, which may be redeemed by presentation of the title object to a title-enabled device or process operating in the network. Upon validation of the title object, the packet is processed in the network in accordance with the at least one right represented by the title object.

According to specific embodiments of the invention, processing the packet includes one or more of dynamically provisioning an aspect of the network or mapping the packet onto a previously provisioned aspect of the network. According to more specific embodiments, the aspect of the network may be one or more of an end-user device, a server, a modem, a router, a switch, a network appliance, a point-of-presence device, a wireless access point, a gateway, a firewall, a process, or a network service. According to specific embodiments, the at least one right represented by the title object relates to one or more of network access, quality of service, level of service, packet traffic protection, traffic class, or traffic priority. According to specific embodiments, processing the packet comprises manipulation of the packet in accordance with one or more of a plurality of protocols, including one or more of MPLS, DHCP, BOOTP, IPv4, IPv6, TCP/IP, UDP/IP, DNS, GSM, CDMA, iDEN, 802.11a, 802.11b, 802.11g, 802.11i, 802.11n, WiMax, uPNP, telnet, FTP, SMTP, POP, IMAP, HTTP, SOAP, XML-RPC, and SMS. These and other aspects and advantages will become apparent when the description below is read in conjunction with the accompanying drawings.

Reference will now be made in detail to specific embodiments of the invention, including the best modes contemplated by the inventors for carrying out the invention. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. In the following description, specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In addition, well-known features may not have been described in detail to avoid unnecessarily obscuring the invention.

Each title, also referred to herein as a title object, is a digital bearer instrument that is independently authenticatable and describes or represents at least one right.
A title component is one or more aspects of a title, ranging from part of a title up to and including the whole title, that is used by at least an aspect of a network to effect configuration, provisioning, routing, service provision, or some other aspect of network functionality. A title component may comprise the specification of a specific right, an independent portion of a title such as a stub, any uniquely identifiable and verifiable portion of a title, and/or a reference to a specific title or right. Titles and title components are more generically referred to herein as title materials.

The entitlement or right(s) of a device or a user to perform specific actions in conjunction with a network device can be based in part upon the title materials presented by the device or user. A device's or user's presentation of title materials can be a manual action. Alternatively, the action can be an automated one, performed at least in part by software on the user's local computer or portable device. Thus a user can provide a title to a device that describes at least one aspect of their rights and privileges on a specific network device, on a class or set of network devices of which the specific network device is a member, on a specific network, or upon a class or set of networks.

In some embodiments it is desirable for aspects of a network to be controlled not by static routing and inflexible configurations but by aspects of a device or user, or by an affiliation or right granted to a device or user. The use of titles is advantageous in these circumstances in that they provide an independent mechanism for describing rights and the capabilities associated with those rights. In some embodiments titles can express a plurality of rights and may further express sets of rights. One particular advantage of using titles to express rights to use network resources is that they can flexibly express these rights to use one or more network services or systems in either common or device-specific ways, and may even express a specific right in a variety of ways that can be collectively understood by a plurality of network devices. One or more rights in a title may embody specifications or configuration information effective to control a particular aspect of a network. These specifications and configuration information can be extracted from a title structure and used by an aspect of a network.

Another particular advantage of using title materials to support network provisioning is that titles, as digital bearer instruments, may be presented by a device on behalf of a device or user without requiring the device's or user's intervention. The title can be independently authenticated by one or more aspects of the network and the desired services automatically provided. It is often advantageous for devices and users not to be required to authenticate each time they attempt to use a different aspect of one or more networks. The use of title materials in controlling network capabilities may be transparent to the user, and therefore has the potential to limit the number of personal authentication and related interactions a user must have with a network.

In a particular embodiment, at least one aspect of the use of a network and/or specific network resources can be controlled, in whole or in part, based upon title materials that can be presented by either a person or a machine seeking to use the network. The term user as used herein encompasses both types of use, whether by a machine or by a person.
In this example, a user (either an actual person, a device on behalf of a person, or a device itself) presents title materials that describe that user's rights of access, level(s) of service, and possibly other network attribute(s) that the user is entitled to use. Network devices, including such devices as modems (for example telephone, DSL, or cable), routers, switches, and the like, can make routing and network traffic management decisions on the basis of presented title materials. In some embodiments, network configurations are produced that correspond to one or more aspects of a right or entitlement, and a network device may make its routing and network traffic management decision on the basis of a network configuration that represents an aspect of a right expressed by title materials.

In a specific embodiment, a router may use title materials to determine the quality of service or bandwidth a particular user is entitled to receive. A MPLS tag associated with specific routing information stored in a group of routers is an example of network configuration information that represents an aspect of a quality-of-service or network bandwidth right. Specifically, a title-based right may include one or more MPLS tags that may be used to specify an aspect of a title-based right to use a network. Similarly, a title-based right to use an aspect of a network may reference an external system, directory, or database that further provides these materials usable by the network to provide the right. In such embodiments, resolving the right would involve looking up the network information in the external system, directory, or database to obtain the specifications for configuring at least one aspect of the network to provide network service.

In another example, title materials can provide a user with the ability to connect to the Internet at any of the network's point-of-presence locations (e.g., wireless access points). In this use, the title materials can replace traditional user/password pairs for mobile users, eliminate the infrastructure requirements of a plurality of SSIDs, and further permit a reduction in the number of authentication servers required to support a network. In addition, when used in conjunction with a premise-based device such as a DSL or cable modem, title materials can be used to define the service levels to be provided (e.g., data transfer rate) and to enable the provision of differentiated service based upon rights presented by a user of the network device, instead of the location of the network device or even the originator of specific network packet traffic.

Thus a network provider such as Comcast, Verizon, or another provider can provide quality-of-service and differential-bandwidth services to users that connect from any point of presence on their network, upon presentation of that user's title materials. The technologies are further advantageous to the network providers in that they no longer have to provision the premise devices for network speed limitations, differing SSIDs, or network address ranges, or build elaborate policies and configurations to map specific address ranges or SSIDs to specific classes of service. All of these improvements reduce the deployment and maintenance costs for deploying networks. This same technology allows a user to acquire rights to specific network services and for those services to be ubiquitously provided, even across network providers.
Network services also may be title-aware and provide differentiated service on the basis of one or more title materials presented to the service. One example is a title-aware DHCP service, which is explored in greater detail below. Other network services, including provisioning and authorization services such as the various variants of EAP, can also be made title-aware and be used to provide benefits similar to those of the DHCP example embodiment described below.

Title materials also can be embedded in network protocols. In some embodiments, the network protocols may be specially crafted to embed title materials. These protocols are effective to transport title materials from a first location on a network to a second location on a network. In other embodiments, network protocols such as TCP/IP, DHCP, and EAP that were not originally designed for use with title materials may have title materials embedded within them. Alternatively, protocol stacks present on network devices and computers operating on a network may be title-aware. Protocol stacks can seamlessly insert, remove, and use title materials in networking protocols on an as-needed basis to effect the use of rights expressed in at least one title material.

Collectively, title-enabled network devices, network services, and protocol stacks may be combined to produce one or more title-enabled networks. Title-enabled networks generally include at least one title resolver or state server. In some embodiments these components are provided as stand-alone services or servers. In other embodiments they may be configured as part of one or more network devices. In yet other embodiments, a title resolver or state server may be provided externally to a title-enabled network.

In some embodiments, the presentation of invalid or expired title materials will result in the user's being denied access to a network. In alternative embodiments, the presentation of invalid or expired title materials will permit the user to access the network but may limit at least one aspect of the network that the user is permitted access to. In a particular embodiment, the limited network access that a user is provided may offer only sufficient network services to update their title materials with valid, unexpired title materials.

In other aspects of the present invention, access to or use of at least one aspect of a network may be enabled by title materials. These title materials may be configured for a short life; e.g., they become invalid after a specific number of uses or after a specific period of time has passed. Title materials may be updated, either before or after they expire, to provide additional life. The updating may occur by replacing the title materials with new ones, updating a title stub, or adjusting the expiration information in a title resolver or state server. A user may obtain new title materials using a title update protocol or by invoking a right to update.

In another aspect, the present invention provides a title-enabled network as defined above. In one embodiment of the title-enabled network, the use of network services, and even the use of the network itself, can be specified or enabled using rights described within one or more title materials.
Those having ordinary skill in the art will understand that the title-enabled network of the invention solves a variety of problems experienced by network providers today, including the inability to effectively limit a user's use of their network and to provide differentiated service(s), for example in the form of higher or lower available bandwidth, without expensive endpoint provisioning or requiring a user to pass authentication materials over the various portions of the network and translate them for each type of network device.

In one example, a user can access a network-accessible service that provides content such as a streaming movie. This service can be hosted any place in the world and is effectively limited only by the bandwidth of the slowest link in the network that the network traffic traverses between the user and the service host. In the example of a movie, it is recognized that in order to view the movie satisfactorily, a user requires a 1 Mb/second link and a quality of service (maximum delay) of not more than 400 ms between the client and server. Users purchase rights for both the movie content and the network provisioning needed to make the movie accessible at a satisfactory quality of service. These rights are represented in this example as two separate rights embodied within one title object or set of title materials. As will be understood, these rights may relate to different network layers. The rights also could be represented as two or more independent sets of title materials.

In this example, the user provides at least one instance of the title materials expressing at least one right that is used by an aspect of a network to configure and provision user services in accordance with the right specified by the title materials. The one or more instances of the title materials provided by the user are used by various aspects of the network to provision the network and provide the desired content. For clarity, the examples presume that the user has a single instance of the title materials (e.g., a title object) that expresses both rights, but the user may actually have a plurality of independent title materials provided from one or more sources.

As the user connects to a network in order to form a connection to watch the streaming movie, they provide title materials representing rights for the movie and for network provisioning. At least one aspect of the network recognizes the title materials as representing network provisioning rights and provides network services consistent with those specified by the rights. In one particular embodiment, the user provides the network title materials by including them in at least one network packet sent from the user to the streaming movie provider. In alternative embodiments, the user provides the network title materials by providing them to the network as part of one or more packets communicated directly with the network. In still another alternative embodiment, the user provides title materials to the streaming movie provider, and the streaming movie provider provides the network title materials that may be used to provision the network for the user. Each of these embodiments operates similarly. Packets transmitted over a network connection for the user are inspected by various network components as the packets travel from the user to the network service.
Upon recognizing title materials provided in the packet, one or more network devices authenticate the title materials and provide the title-authorized level of network service to the user. In one embodiment, the identification of title materials in the packets need only be performed by edge devices, where packets enter and leave a specific network. Examples of such edge devices may include gateway routers, firewalls, cable modems, and DSL modems. In this example, the network traffic is inspected at an edge device when the traffic enters a service provider's network. Upon inspection of the packet traffic, an edge device determines whether title materials are present. If no title materials are present, the packet traffic is processed normally. If the packet traffic includes the necessary title materials, the title materials are extracted from the packet traffic and verified, and then the edge device can map the traffic to a specific network traffic profile or route. For example, a gateway border router can use existing route specifications, such as existing MPLS-defined routes, to route packet traffic to and from the user to a streaming movie provider. Alternatively, the gateway router can establish new routes, reroute the packets, or deny access to the network altogether, based in part upon the title materials embedded in the network traffic (a minimal sketch of this decision logic appears below).

Those skilled in the art will recognize that the title materials do not need to be present in every packet of the network traffic but can be placed only in a subset of the packets. In one specific embodiment, it is advantageous to present title materials in one or more of the following types of packets: IPSec key exchange or TCP/IP session initialization packets.

Title materials may comprise specific specification materials that enable a network to provide a service specified by at least one right. In a particular example, title materials may comprise a MPLS tag within a right specification. In an alternative example, title materials may comprise routing and/or quality-of-service parameters or other network provisioning specifications contained within a right specification. In a further alternative example, title materials comprise an MPLS tag effective for use by a portion of the network between the user and the streaming movie provider.

Other high-value services, such as packet traffic protection and quality of service, also may be specified within title materials. In these cases, title materials may specify a right for at least an aspect of a user's packet traffic to be encrypted or otherwise protected, may further specify the method of protection to be used, and may further specify one or more destination services to which this traffic should be routed. In a particular example embodiment, title materials may be used to specify an aspect of the IPSec and/or VPN tunnel to be used to protect a user's network traffic.

Title-based services enabled by embodiments of the invention are thus advantageous to service providers by permitting them to provide differentiated services to their customers. In addition, embodiments of the invention also permit preferred traffic users, such as emergency providers and first responders, to access a wireless network with a quality of service that ensures their network traffic is passed without regard to other network loads.
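The edge-device behavior just described (inspect, validate, then map to a traffic profile or fall back to default handling) can be summarized in a short sketch. The resolver interface, profile names, and label values below are illustrative assumptions rather than structures defined by the patent.

```python
from typing import Optional

# Hypothetical traffic profiles that a border router maps validated titles onto;
# the labels would correspond to pre-built LSPs like the EDGE/CORE policies above.
PROFILES = {"EDGE": {"mpls_label": 0x200}, "CORE": {"mpls_label": 0x14d}}

def extract_title_materials(packet: dict) -> Optional[dict]:
    """Pull title materials out of the packet, if present. The packet is modeled
    as a dict here; on the wire the materials might sit in an IPv6 destination
    options header or a DHCP option, as described elsewhere in this document."""
    return packet.get("title_materials")

def process_at_edge(packet: dict, resolver) -> dict:
    """Ingress processing: validate any title materials, then select a profile."""
    materials = extract_title_materials(packet)
    if materials is None:
        packet["profile"] = "default"        # no title: normal best-effort handling
        return packet
    if not resolver.validate(materials):     # e.g., a title resolver / state server
        packet["profile"] = "denied"         # invalid title: restrict or drop
        return packet
    # Map the right expressed by the title onto a pre-built route/profile.
    profile = materials.get("service", "CORE")
    packet["profile"] = profile
    packet["mpls_label"] = PROFILES.get(profile, PROFILES["CORE"])["mpls_label"]
    return packet
```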
Those having ordinary skill in the art will recognize that cellular telephony carriers have provided this sort of differentiated service for voice calls, but they have been unable to extend this capability to network data traffic across networks outside their control. The present invention permits these classes of users to operate from wherever they are connected, without reliance upon specialty networks and protocols.

Title materials may be advantageously used when provisioning users and computers on a network. One common protocol for such provisioning is DHCP, although other protocols such as BOOTP and UPNP also may be used. Provisioning decisions often are used to affect the nature of services a user is entitled to. For example, a first user may be provisioned into a first logical network while a second user may be provisioned into a second logical network. The first user may have access to a first set of network services at a first specified quality of service, and the second user may have access to a second set of network services at a second specified quality of service. The first and second users' sets of network services may be identical, may overlap, or may be completely disjoint, and may require different levels of authentication or authorization in order to access them. In some embodiments, the logical networks each user is provisioned for are carried on the same physical network but are logically separated using VLAN or other technologies. In other embodiments, the logical networks for which each user is provisioned are provided by independent network devices and communication links.

Title-enabled networks can provide differentiated service, as described above, based upon an aspect of one or more title materials presented by a user or a computer connected to the network. In some embodiments, title materials can be provided as part of an initial service request. In a specific example, a network-enabled computer system embeds one or more title materials into an initial network service request. Alternatively, title materials can be provided to a network device after a user is connected to the network. The above process provides a mechanism for an arbitrary provisioning service to make provisioning decisions based upon title materials presented as part of a provisioning request. The basic technology is widely applicable to wireless access points, IP-enabled cellular telephones, and other mobile devices that may benefit from its application.

In a first example embodiment, a network client provides title materials that express at least one right for a class or type of network service; alternatively, the title materials express a specification for a class or type of network service. One example of a service identification names Verizon's ADSL1.5M network service. This service name may be resolved by the title-enabled network device to a service specification, and may be further resolved to specific network device parameters and settings. Alternative service identifications may be provided; for example, City of Cupertino Municipal, AOL, and Skype Business would name specific services for municipal workers of the city of Cupertino, AOL subscribers, and Skype business subscribers, respectively.
Each of these service types or classes is associated with one or more service specifications or network configurations that can be used by a title-enabled network device to process network traffic on behalf of a user or network client. Resolution of service identifications may take place within a title-enabled network device or by using an external service such as a service router, a directory service, or a database. Alternatively, an internal table or list of service identifications can be stored in a title-enabled network device and used for this purpose.

In some embodiments, title materials may specify a network service specification rather than a service class or type name. A network service specification describes one or more specifications for network service that may be used by a title-enabled network device to configure at least an aspect of the network service provided to a user or network client. One such specification describes an asynchronous link with 1.5 Mbps of network throughput for upload packet traffic (from client to destination), 7.5 Mbps of network throughput for download packet traffic (from destination to client), and a maximum latency of 100 ms for end-to-end traffic. Such a specification could be advantageously used to configure a user or network client to receive high-quality streaming video service. A title-enabled network device, upon receiving such a network service specification, resolves the specification to specific configuration information and uses that configuration information to process network traffic on behalf of a user or network client. Again, resolution of the service specification to specific configuration parameters can occur internally or by using an external service or directory, as described above.

In yet another example embodiment, title materials may include pre-configured network specifications, such as a MPLS tag or other specific network configuration materials, that may be used directly by title-enabled network devices. Such a configuration might first specify a DNS service configuration parameter expressed in dotted IP notation; alternatively, the parameter could specify a URI or URL for a service and may additionally provide options to the URI or URL. It might additionally specify that a DHCP pool of 10.2.0.0 should be used; a DHCP pool specification may be used to specify access-restricted addresses or, alternatively, to specify the DHCP pool a title-enabled DHCP service should use when servicing a DHCP request. Lastly, it might specify a MPLS tag of 0x14d; when used in this way, the MPLS tag could be used by title-enabled network devices to further process packet traffic on behalf of the user or network client. Typically such a MPLS tag is preconfigured by a network provider and is referenced by title materials.

Each of these methods of specifying network services may be extended in a wide variety of ways by those skilled in the art without departing from the scope of the invention. Specific title materials can use any of the described methods, together or separately, when specifying the types of network services a title holder is entitled to receive.
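Based on the description above, the three forms of title-embedded service specification might be modeled roughly as follows. All field names and the DNS address are illustrative assumptions; only the 10.2.0.0 pool, the 0x14d tag, and the throughput/latency figures are taken from the text.

```python
# 1. A service class/type name, resolved by the device or an external
#    service router / directory to concrete parameters.
service_by_name = {"service_name": "Verizon ADSL1.5M"}

# 2. A network service specification the device resolves to configuration.
service_by_spec = {
    "link": "asynchronous",
    "upload_mbps": 1.5,       # client -> destination throughput
    "download_mbps": 7.5,     # destination -> client throughput
    "max_latency_ms": 100,    # end-to-end latency bound
}

# 3. Pre-configured network parameters referenced directly by the title.
service_by_config = {
    "dns_server": "10.1.0.53",   # could equally be a URI/URL with options
    "dhcp_pool": "10.2.0.0",     # pool a title-enabled DHCP service draws from
    "mpls_tag": 0x14d,           # pre-built label mapping traffic onto an LSP
}
```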
According to various embodiments, title materials may be embedded within network protocols to effect the presentation of the title materials at one or more network devices, and their subsequent use within the network device to provide authentication or authorization, or to enable or configure each network device to provide levels of service consistent with the rights represented by the title materials. Most network devices operate on standardized protocols and may not act properly when they receive network traffic in a non-standard protocol, or using a protocol that does not conform to the standard. Accordingly, it is advantageous to embed title materials within existing network protocols so that title-enabled devices can recognize and process them if they are present. An example of a technique for embedding title materials into network traffic packets is explained using IPv6, but the technique may also be used with IPSec and related IPv4 protocols, as well as other protocols that provide an extensible payload definition. IPv6 was selected for this example for illustrative purposes.

In one embodiment, title materials may be embedded within a low-level networking protocol such as IP. In one specific embodiment, title materials may be embedded in the low-level protocol of IPv6. For example, title materials may be embedded within a destination options header of an IPv6 packet. Alternative headers, such as the routing header, also may be used. Within the destination options header, which encapsulates application protocol information, title materials can be embedded so that they are part of an IPv6 packet and are transported as part of the packet between computers and network devices. An IPv6 packet having title materials embedded in this way will be carried by all IPv6-compatible network devices without alteration. Those network devices that are title-aware can inspect network traffic for, and act upon, any title materials embedded within the packet traffic.

According to specific embodiments, title materials can be embedded within an IPv6 packet in a number of ways. One approach is to embed the title materials using a customized IPv6 protocol stack (e.g., in the network stack), so that the IPv6 processing occurs at the lowest possible level within the stack. Within a network session, the first packet, a subset of the packets, or all packets in a session can carry embedded title materials. A network device can use title materials in the first packet to facilitate the authorization and authentication functions, and this authorization and authentication may persist for the duration of the session. In some alternative embodiments, it is not desirable to permit an authorization and authentication to persist for an entire session of indeterminate length. In these cases, title materials may be communicated using a subset of the packets in the session, and any required reauthorization can occur within the network device on an as-needed basis.

In other embodiments, title-enabled network devices or processes may add or change the title materials contained within an IPv6 packet prior to forwarding the packet to its destination client or service. The added or changed title materials may include replacements of all or part of the title materials, the addition of title materials to the packet, the removal of title materials from the packet, or the alteration of the title materials in the packet in some way. In particular embodiments, a title may require a changed stub once it has been validated; in such embodiments, the title-enabled network device makes the necessary changes in the title materials embedded in the IPv6 packet prior to forwarding the packet.
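For illustration, the sketch below packs title materials into an IPv6 destination options extension header as a single option TLV, padded to the 8-octet multiple the header requires. The option type value is a made-up experimental value, not one assigned by IANA or specified by the patent; its top two bits are 00, so per RFC 2460 a node that does not recognize it simply skips the option, matching the pass-through behavior described above for non-title-aware devices.

```python
TITLE_OPTION_TYPE = 0x1E   # hypothetical option type; top two bits 00 mean that
                           # nodes which do not recognize it skip it (RFC 2460)

def build_dest_options_header(next_header: int, title_materials: bytes) -> bytes:
    """IPv6 destination options header carrying one title-materials option TLV."""
    assert len(title_materials) < 256
    option = bytes([TITLE_OPTION_TYPE, len(title_materials)]) + title_materials
    body = bytes([next_header, 0]) + option       # length byte is patched below
    pad = (-len(body)) % 8                        # header must be 8-octet aligned
    if pad == 1:
        body += b"\x00"                           # Pad1 option
    elif pad > 1:
        body += bytes([1, pad - 2]) + b"\x00" * (pad - 2)   # PadN option
    hdr_ext_len = len(body) // 8 - 1              # 8-octet units past the first 8
    return bytes([body[0], hdr_ext_len]) + body[2:]

# Usage: splice this header between the IPv6 basic header and the transport
# payload, setting the basic header's Next Header field to 60 (dest. options);
# next_header=6 says the payload following this extension header is TCP.
ext = build_dest_options_header(next_header=6, title_materials=b"<title stub>")
assert len(ext) % 8 == 0
```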
In such embodiments, the title-enabled network device makes the necessary changes to the title materials embedded in the IPv6 packet prior to forwarding the packet.

According to some embodiments of the present invention, title materials can be used to determine the type of DHCP attributes (for example, address, subnet, gateway, router, and other network attributes) and the associated level of service a user is entitled to. This title materials information may be embedded within DHCP messages (e.g., in the options field of one or more DHCP messages) in accordance with aspects of the DHCP protocol as described herein. The message then may be sent to a DHCP server according to any of the embodiments listed below. A title-enabled DHCP service can identify one or more title materials contained within the broadcast transmission that grant the transmitting system rights to access at least part of the network, and provide a DHCP response comprising network provisioning information (such as an IP address, network masks, addresses of specific network services, and the like) that is effective to provide the transmitting system a network connection comprising access to a network and network services consistent with the rights described in the initial title materials. Alternatively, the title-enabled DHCP service may provide no response, or may provide a response that effectively limits the access of the user to specific network resources on the basis of the provided title materials. An example of a limiting response might be to provide an IP address that can connect to basic resources but cannot use external network services. Alternatively, the limiting response can provide access solely to a DMZ or other limited area of the network. In another alternative embodiment, the DHCP response from a title-enabled DHCP server can provide additional parameters to be used by the user when sending their network traffic. For example, a title-enabled DHCP server can use provided title materials to select and provide one or more additional title materials, network access tokens, digital certificates, or other materials (such as an MPLS tag) that can subsequently be used by a user or their computer to gain access or services from a network. The user's computer would typically include this information in message traffic, such as by embedding it within a protocol sent by the user, to ensure its appropriate handling by other portions of the network. In yet another example embodiment, a DHCPDISCOVERY broadcast also may include title materials in its options field, and a title-enabled DHCP service may choose to respond to the DHCPDISCOVERY broadcast on the basis of the embedded title materials.

Once an appropriate title-enabled DHCP server is located, a title-enabled network client sends a specially constructed DHCPREQUEST packet to the title-enabled DHCP server. This request packet embeds one or more title materials within the DHCPREQUEST packet. Preferably, the title materials are embedded within an extensible field of the DHCPREQUEST packet, such as the options area. The title-enabled DHCP service receives the specially constructed DHCPREQUEST packet from the network for processing. Upon receipt of a DHCP packet, the title-enabled DHCP server processes the packet. During this processing, the title-enabled DHCP service recognizes title materials in the options area of a DHCP packet, either by using a defined option value or by inspection of the option contents.
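A minimal sketch of the client-side embedding follows: the title-materials blob is appended to a DHCP packet's options area as a TLV. The option code 224 (from the private-use range) is an assumption; the text does not name a specific code.

```python
def append_title_option(options_wo_end: bytes, title_blob: bytes, code: int = 224) -> bytes:
    """Append title materials to a DHCP options area (passed here
    without its End option). Code 224 is a private-use placeholder.
    A single DHCP option carries at most 255 bytes, so longer title
    materials are split across repeated options."""
    out = bytearray(options_wo_end)
    for i in range(0, len(title_blob), 255):
        chunk = title_blob[i:i + 255]
        out += bytes([code, len(chunk)]) + chunk
    out.append(255)                       # re-append the End option
    return bytes(out)
```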
The identification of title materials typically occurs in a title-enabled DHCP service when it processes the DHCP packet. A title-enabled DHCP service then further processes the title materials contained in the DHCP packet. The processing may take the form of executing or invoking a right represented by the title materials, or it may be to inspect the rights represented by the title materials and make determinations based upon information contained in them. In either case, a title-enabled DHCP service identifies embedded title materials as described above and processes the title materials as described below. If no title materials are identified in the packet, the title-enabled DHCP server processes the DHCP request in a manner consistent with a non-title-enabled DHCP server.

When presented with title materials, the DHCP service validates the title materials using a title resolver or another title verification mechanism such as a state server. In some embodiments, a title-enabled DHCP service uses the presented title materials with a service router, database, directory service, or title resolver and uses the resulting materials to make the authorization, provisioning, and other decisions. A title resolver, state server, or other title verification mechanism can be included within a title-enabled DHCP service, or one may be operably connected to the title-enabled DHCP server over a network. An example of title resolver and state server processing was described above. If the title materials are determined to be valid, the title-enabled DHCP service then determines the rights requested by the client and further determines the network parameters required to provide network services in accordance with a right expressed in the presented title materials. If the title materials are not valid, the title-enabled DHCP service may refuse service to the network client, or it may alternatively process the DHCP request as if title materials were not present.

Once one or more valid title materials have been identified, a title-enabled DHCP server may process those title materials in a manner consistent with the type of title materials identified. In one embodiment, the title materials can be sent to a digital commerce engine (DCE) for further processing, and the resulting title materials may be used to provision the network. In a second embodiment, the title-enabled DHCP service invokes one or more rights specified by the received title materials. Thus, for example, a title-enabled DHCP service may recognize a specific right for high-speed networking in the title materials and invoke that right. The invocation of the right may be performed by the title-enabled DHCP service or by other title-enabled components. Furthermore, the right may be processed by the title-enabled DHCP service itself. In some embodiments, a title-enabled DHCP service recognizes one or more rights, invokes them, and then provides services that fulfill the invoked rights. In another embodiment, the title-enabled DHCP service uses one or more aspects of the received title materials to provision the client. In yet another embodiment, the title-enabled DHCP service uses one or more aspects of the received title materials to access a service or database to determine the network parameters and/or provisioning specifications to use for a specific client.
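The decision flow just described can be summarized in a few lines of illustrative Python. The table contents, helper names, and lease fields are all hypothetical stand-ins for whatever resolver and provisioning database a real title-enabled DHCP service would use.

```python
from dataclasses import dataclass

@dataclass
class Lease:
    ip: str
    netmask: str
    services: tuple

# Hypothetical rights -> provisioning table.
PROVISION_DB = {
    "high_speed_networking": Lease("10.2.0.50", "255.255.0.0", ("dns", "video")),
    "basic_access": Lease("10.9.0.50", "255.255.255.0", ("dns",)),
}

def handle_dhcp_request(titles, validate):
    """No titles -> behave like a plain DHCP server; invalid titles ->
    refuse (or limit) service; valid titles -> provision according to
    the rights they express."""
    if not titles:
        return "standard-dhcp-processing"
    if not validate(titles):              # title resolver / state server check
        return None                       # refuse service
    for right in titles:
        if right in PROVISION_DB:
            return PROVISION_DB[right]
    return PROVISION_DB["basic_access"]   # limiting response
```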
In this embodiment, a title-enabled DHCP service has an optional associated database, or a service that provides network parameters and/or provisioning specifications (including addresses, netmasks, available services, etc.), that the title-enabled DHCP service may use to provide a response to a client. In some particular embodiments, a title-enabled DHCP service may have available to it a database of rights and their respective provisioning settings. Alternatively, a title-enabled DHCP service may use an aspect of at least one right to conduct a database search to determine the response values, or may simply use a part of the provided title materials as the response values. The DHCP service then packages up the response values (e.g., network parameters), specified by whatever mechanism, and returns them to the client, which uses them to establish a network connection that provides the client with access to the network based on at least an aspect of the title materials. In some particular embodiments, the response values may comprise new, additional, or changed title materials in the response packet options area. The client may use these title materials from the response packet in various ways, e.g., for authorization and access management, to construct network traffic, or to update existing titles stored at the client.

DHCP as used above is a non-limiting example of a title-enabled network service that provides users with access based upon at least one set of title materials. A reader skilled in the art will understand how the techniques described herein can be extended to other network technologies, including mobile cellular networks operating on GSM, CDMA, iDEN, or other cellular technologies, and wireless network technologies such as the various versions of 802.11 (e.g., a/b/g/n) and WiMax. Title materials may advantageously be embedded in other generic network protocols, including XML-based protocols such as SOAP and XML-RPC, and within generic application protocols such as HTTP and FTP. Unlike existing title-based service protocols, the use of generic protocols to transport titles and title materials provides additional opportunities to provide title-enabled architectures using existing infrastructure.

Although seen primarily as a means to fetch pages of Hypertext Markup Language (HTML) content for display in a web browser, HTTP is really a general-purpose transport for any type of data. HTTP may advantageously be used to encode title and right references within, for example, URIs. One mechanism for encoding title references within a URI is to encode a title or right reference in the same manner as a DOI or document ID, for example as a path component of a service registry URI. The GET and POST operations within HTTP provide a generic mechanism within which to embed title materials for transport. HTTP further permits the use of a title-material-specific content type and subtype (e.g., X-Navio-Title, XNavio-TitleMaterials) during the transmission of title materials using HTTP's GET and POST operations. The encoding of materials within GET and POST operations is well understood by those skilled in the art. Modern service-based architectures rely on RPC-based architectures based upon XML-RPC or the Simple Object Access Protocol (SOAP). XML-RPC is a remote procedure call protocol which uses XML to encode its calls and HTTP as a transport mechanism. SOAP is a standards-based implementation of XML-RPC technologies.
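Returning to the HTTP techniques above, a hedged sketch of both follows: a title reference encoded DOI-style inside a GET URI, and title materials carried in a POST body under a dedicated content type. The URL, the identifier, and the exact media-type string are all illustrative; the text only establishes the general approach.

```python
import urllib.request

# 1) A title/right reference encoded in a URI, DOI/document-ID style
#    (registry host and identifier are hypothetical):
GET_URL = "http://registry.example.com/title/10.5555/network.access.premium"

# 2) Title materials shipped in a POST body under a dedicated content
#    type echoing the X-Navio-Title example named in the text:
def post_title_materials(url: str, title_xml: bytes):
    req = urllib.request.Request(
        url,
        data=title_xml,
        headers={"Content-Type": "application/x-navio-titlematerials"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```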
Both XML-RPC and SOAP provide mechanisms for users to embed title materials within a remote procedure call request or response by adding the title materials to the XML structures comprising the request or response. Adding title materials to an XML structure should be well understood by those skilled in the art. Both XML-RPC and SOAP may be transported using application protocols such as HTTP. The use of HTTP and other application protocols is advantageous in that it permits network traffic to transit firewalls with minimal configuration. In addition to transmitting XML-RPC or SOAP materials that may contain title materials, HTTP and other application protocols may be used to transmit title materials independently using the same techniques. Thus, a user may send title materials to a network service by embedding the title materials into HTTP or another application protocol using the well-understood techniques used to embed XML-RPC and SOAP within these protocols. In some embodiments, the response messages for protocols such as XML-RPC and SOAP may further comprise additional or changed title materials for use by the client. The client may use these title materials from the response packet in various ways, e.g., for authorization and access management, to construct network traffic, or to update existing titles stored at the client.

A network device such as the gateway, border router, or cable modem described above can operate upon title materials in a variety of ways. A title-enabled network device is capable of reading title materials encoded in IP packets, such as in the destination options header of an IPv6 packet or the options field of a DHCP message. Alternatively, title materials can be presented as part of the authorization and authentication interchange conducted when a network client communicates with a network device. Once title materials are received by a title-enabled network device, an association can then be formed between the title materials and a source IP address by using the source IP address, MAC address, or other networking attributes of the network client. In some embodiments, the network device can provide a service interface to which a user or network client can connect. For example, current-technology network devices such as routers, switches, and broadband modems often provide a web-based interface for configuring the network device. Alternatively, the service interface provided by a network device can be a protocol-based service such as telnet or SOAP, or any other well-known protocol-based service. In additional alternative embodiments, title materials may be presented to a title-enabled network device using a presentation protocol such as HTTP, as described above.

Network devices able to recognize at least one aspect of title materials and configure their operation in accordance with at least one aspect of the title materials are said to be title-enabled. In each of the embodiments described below, the network device is provided with title materials. The title materials are then validated, either in real time or, in embodiments where response time is of the essence, after the fact. In some embodiments, the validation materials can be cached at the network device and validation can occur without additional network activity. Validation of title materials makes the aspects of the rights described therein available for use. Once a title-enabled network device identifies and validates title materials from a network client, the network device may take one or more actions.
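For the XML-RPC case, embedding reduces to adding the title materials to the call's XML structures. In the hedged sketch below they travel as an extra struct parameter; the endpoint, method name, and parameter convention are assumptions, not a defined standard.

```python
import xmlrpc.client

def call_with_titles(endpoint: str, method: str, args: list, titles: dict):
    """Invoke an XML-RPC method, appending title materials as a final
    struct parameter so a title-enabled service can find them."""
    proxy = xmlrpc.client.ServerProxy(endpoint)
    return getattr(proxy, method)(*args, {"titleMaterials": titles})

# e.g. call_with_titles("http://svc.example.com/RPC2", "net.connect",
#                       ["client-42"], {"right": "create_connection"})
```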
First, it may use aspects of the identified title materials to configure itself or other network devices to provide network services in accordance with the identified title materials. Second, it may invoke one or more rights expressed by the title materials and use the process of invoking the rights to configure itself or other network devices to provide network services in accordance with the identified title materials. Third, it may use the materials returned from the invocation of one or more rights to configure itself or other network devices to provide network services in accordance with the returned materials from the rights invocation. Fourth, it may use aspects of the identified title materials to look up and return network configuration materials or title materials to a network client.

Furthermore, each network device receiving title materials embedded in a network protocol, where the network device is expected to forward the received packets to another network device or end destination, may take one or more of the following actions. The network device may remove the title materials from the packet traffic prior to forwarding the packet traffic. It may instead forward the packet traffic, including the title materials, without alteration. It may add, remove, or change at least one part of the packet traffic to indicate that the title materials were processed; for example, the network device may make an addition to the packet traffic such as an MPLS tag. Finally, it may add, remove, or change an aspect of the title materials themselves in accordance with the results of processing those title materials, placing the changed materials in the packet.

A title-enabled router is a specific embodiment of a title-enabled network device that can provide network services in accordance with at least one aspect of title materials provided to it. Title-enabled routers provide unique services in that they process all network traffic and may be used to provide rights-specified networking services. There are several ways that a router can detect and act upon title materials. For example, a title-enabled router can inspect a destination options header or other optional headers of one or more packets. In particular, a router can inspect an IPv6 packet for a destination options header or other optional header and, if one is found, inspect that header for title materials. In an alternate embodiment, a title-enabled router can detect title materials based on content inspection of a packet being processed by the title-enabled router. For protocols that do not support embedded title materials, a title-enabled router can make routing decisions and assignments based on an aspect of a network configuration parameter, such as an MPLS header, a source IP, or another networking option used by the source network client. In some embodiments, the network configuration parameter recognized by a title-enabled router is one assigned to a network client by a title-enabled DHCP service as described above.
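To complement the builder shown earlier, here is a hedged sketch of the router-side inspection: walking the TLVs of a Destination Options header and extracting any title-materials option. The option type 0x1E matches the placeholder used above and is likewise an assumption.

```python
def extract_title_materials(dest_opts: bytes, opt_type: int = 0x1E):
    """Walk the option TLVs of an IPv6 Destination Options header
    (skipping Pad1/PadN) and return the title-materials blob, if any."""
    i = 2                                  # skip next-header + hdr-ext-len bytes
    end = (dest_opts[1] + 1) * 8           # header length in 8-octet units
    while i < end:
        t = dest_opts[i]
        if t == 0:                         # Pad1: a single zero byte
            i += 1
            continue
        length = dest_opts[i + 1]
        if t == opt_type:
            return dest_opts[i + 2:i + 2 + length]
        i += 2 + length                    # PadN and unknown options: skip
    return None
```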
In a particular embodiment, a right expressed by title materials can describe a preferential network service routing enabled by an MPLS tag supported by the router. The MPLS tag can be contained within the title materials itself, or it can be content referenced by the title materials. Continuing with the example, the validation of the title materials makes the title-represented right of create-connection available. Other title-enabled rights also may enable specific types, classes, or performance levels of network connections. The create-connection right names a service that creates a connection and parameterizes this service with the MPLS tag to use with traffic associated with the connection. When a user or network client requests that a connection be created, the network device calls the service identified by the title materials and passes the MPLS tag to that service. The service then sets up the connection for the user or network client and returns the MPLS tag to the user's IP stack for embedding in the protocol packets. Alternatively, the service can configure the router to tag all traffic from the user or network client using the named MPLS tag and to route that traffic in accordance with the defined network tag. Alternatively, the router can create an MPLS tag and use that tag as described above.

According to specific embodiments, a title-enabled wireless access point is an 802.11a, 802.11b/g, or wireless mesh device capable of reading title materials held by a user on their network device (such as a laptop, Palm Pilot, etc.). The user's title materials are presented by the network client when the client requests a DHCP address from the wireless network. This request is routed using the appropriate 802.11 wireless protocol. The title-enabled wireless access point makes a decision from the request packet on how to route the request. Alternatively, the title-enabled wireless access point routes all packets to another network device, such as a title-enabled router described above. The title materials instruct a DHCP server on the wireless network to provide the user with an IP address and level of service commensurate with the title materials presented. A title-enabled wireless access point can be coupled with DHCP as described above to provide a suitable IP address for any given client. For example, one client may present title materials that entitle them to high-bandwidth service, while another receives a dedicated pipeline to a partner network (walled garden), and a third who doesn't have title materials is presented with the network's credit card server in order to buy pay-as-you-go time on the network. The title-enabled wireless access point can handle multiple addresses from different address networks to effectively manage the network traffic. This routing capability is present in most wireless access points today.

A title-enabled wireless access point reduces the complexity of networks that support a plurality of classes of users, each of which receives differentiated service levels. In conventional networks, each class of user is provisioned with a unique SSID. Users connect to the desired SSID to receive each level of differentiated service. Metropolitan and campus networks must therefore provide wireless access points that support a plurality of SSIDs, or must provide a plurality of wireless access points, each providing a different SSID. Examples of these types of networks include campus and metropolitan networks. The use of title-enabled wireless access points permits a reduction in the complexity and number of wireless access points because it permits all users to connect to a single SSID instead of requiring different SSIDs for each class of user. This reduces the number of devices and the complexity of the devices, effectively reducing the deployment cost of the network.
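Returning to the create-connection example above, a minimal sketch of the right invocation might look like the following, with the router modeled as a table from client address to MPLS tag. All names and the table shape are hypothetical; only the invoke-right-then-configure sequence mirrors the text.

```python
def invoke_create_connection(validated_title: dict, router_table: dict) -> int:
    """Invoke the create-connection right carried by validated title
    materials: extract the MPLS tag and configure the 'router' to tag
    and route all of the client's traffic accordingly."""
    right = validated_title["rights"]["create_connection"]
    tag = right["mpls_tag"]               # carried in, or referenced by, the title
    router_table[validated_title["client_ip"]] = tag
    return tag

router_table = {}
title = {"client_ip": "10.2.0.50",
         "rights": {"create_connection": {"mpls_tag": 0x14D}}}
print(hex(invoke_create_connection(title, router_table)))   # -> 0x14d
```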
A particular problem that embodiments of the present invention alleviate is the differentiated use of network bandwidth based upon rights. Today, network providers such as BitTorrent suffer performance degradation resulting from too many users downloading large pieces of content in a given period, effectively using all of the available bandwidth and preventing some users from attaining acceptable throughput. Network providers must therefore design to their maximum surge requirement based upon all users who can request access at a specific time. Using the software systems and methods provided by the invention, a class of differentiated (e.g., premium and limited) network bandwidth users can be established, having access to either higher or lower available bandwidth based in part upon information provided when the user connects to the network. Unlike the point-of-presence connection to the network described above, the differentiated service spans an entire network and can span a plurality of networks provided by more than one network provider.

While the invention has been particularly shown and described with reference to specific embodiments thereof, it will be understood by those skilled in the art that changes in the form and details of the disclosed embodiments may be made without departing from the spirit or scope of the invention. For example, reference has been made herein to various types of computing platforms, network configurations, protocols, and processes which may be employed to implement various aspects of specific embodiments of the invention. It will be understood that such reference should not be used to narrow the scope of the invention. Rather, such references will be understood to be made by way of example, and it will be further understood that any of a wide variety of computing platforms, network configurations, protocols, processes, computing models, and the like may be employed to implement embodiments of the invention without departing from the scope of the invention. For example, embodiments of the invention are not limited to specific types of computing platforms or network devices referred to herein. To the contrary, virtually any type of computing device having at least one interface for receiving or transmitting data (e.g., packets, frames, etc.) and at least one processor (e.g., CPU, processing cores, processor clusters, etc.) to facilitate processing of such data may be employed to implement various aspects of the invention, as will be apparent to those of skill in the art. In addition, although various advantages, aspects, and objects of the present invention have been discussed herein with reference to various embodiments, it will be understood that the scope of the invention should not be limited by reference to such advantages, aspects, and objects. Rather, the scope of the invention should be determined with reference to the appended claims.
330.261538
1,864
0.824964
eng_Latn
0.99997
115dfb974434a313c148bf38ec4cbb02ec48a26a
108
md
Markdown
README.md
DestroyingBlob/AWSOMEPICGAME
cf87398b1bd80d110714926e9c266b0277935445
[ "Artistic-2.0" ]
null
null
null
README.md
DestroyingBlob/AWSOMEPICGAME
cf87398b1bd80d110714926e9c266b0277935445
[ "Artistic-2.0" ]
null
null
null
README.md
DestroyingBlob/AWSOMEPICGAME
cf87398b1bd80d110714926e9c266b0277935445
[ "Artistic-2.0" ]
null
null
null
# AWSOMEPICGAME

THIS IS AN AWSOMEPICGAME. I know it sounds weird, but it is so cool! You should check it out.
36
91
0.777778
eng_Latn
0.849828
115e1c0ad3d2680717d6ae706a801881f4e78154
3,871
md
Markdown
README.md
vrd83/public-docs
cccffcd2c9d4653f9f6e8f0951705cb795985201
[ "MIT" ]
null
null
null
README.md
vrd83/public-docs
cccffcd2c9d4653f9f6e8f0951705cb795985201
[ "MIT" ]
null
null
null
README.md
vrd83/public-docs
cccffcd2c9d4653f9f6e8f0951705cb795985201
[ "MIT" ]
null
null
null
# public-docs

My public documentation repository, hosted on GCP Cloud Run. Powered by AsciiDoc and NGINX.

## Project Overview

The motivation for this project is to serve primarily as a learning opportunity for me, as well as a way to share knowledge. If someone stumbles across this and finds just one thing useful, then it has served its purpose. I'm a big fan of AsciiDoc for its ease of use, but I haven't used it much and want to become more familiar with it. I also love all things GCP, Kubernetes and Knative, so I wanted to create a container that can be hosted on Cloud Run. Finally, I wanted to create a Makefile that can be used as a pattern or starting point for other projects.

The blog created by this project is available here: https://docs.vaughanross.io

## Prerequisites

I have installed Docker and the gcloud SDK, and authenticated to GCP. I then deployed the hello-world Cloud Run service in a region that supports custom domain mapping and created a Docker repository in GCP's Artifact Registry in the same region. I tagged the container image as version 0.0.1.

## Getting Started

I create an env file containing my configuration variables and source it. The Makefile and workflow_helper script depend on the environment variables. See below for an example.

```console
cat << EOF >> envvars/prd.env
export GCP_REGION='us-west1'
export GCP_PROJECT='public-docs-123'
export GCP_REPO_NAME='container-images'
export GCP_CLOUD_RUN_SERVICE='public-documentation'
EOF

source ./envvars/prd.env
```

## My Workflow

1. Create or update adoc files, including index.adoc if required. I use VSCode with an AsciiDoc extension.
2. Build and run the container locally.
3. Review (test) new content locally.
4. Commit changes to the repo.
5. Publish to GCP Cloud Run.

I ensure that the Makefile can be used for each step of the workflow. Run 'make' to print all available options:

```bash
help:                     Show this help.
docker-gcp-auth:          GCP Docker auth helper.
docker-build:             Build the docker container locally with latest tag.
docker-login:             Login to GCP Artifactory.
docker-run:               Run the Docker container locally on port 8081.
docker-stop:              Stop the local running container.
git:                      Stage, commit and push all changes to main. Example usage: 'make git m="commit msg"'.
cloud-run-publish-patch:  Build and tag both Docker image and Git as patch release. Push to container repository and update Cloud Run service.
cloud-run-publish-minor:  Build and tag both Docker image and Git as minor release. Push to container repository and update Cloud Run service.
cloud-run-publish-major:  Build and tag both Docker image and Git as major release. Push to container repository and update Cloud Run service.
```

## Acknowledgements

When Googling around, it came as no surprise to find many like-minded individuals, and so rather than re-invent the wheel I have customized bits and pieces of the following to satisfy my requirements:

* https://medium.com/@dhavalmetrani/makefiles-and-docker-versioning-8c15ccc76994
* https://aerokhin.com/posts/how-setup-your-asciidoc-blog.html
* https://github.com/commandercool/asciiblog
* https://panjeh.medium.com/makefile-git-add-commit-push-github-all-in-one-command-9dcf76220f48

## References

* [Cloud Run](https://cloud.google.com/run/)
* [Cloud Run Custom Domains](https://cloud.google.com/run/docs/mapping-custom-domains)
* [Cloud Run Pricing](https://cloud.google.com/run/pricing/)
* [AsciiDoc Syntax Quick Reference](https://docs.asciidoctor.org/asciidoc/latest/syntax-quick-reference/)
* [AsciiDoc Writers Guide](https://asciidoctor.org/docs/asciidoc-writers-guide/)
* [AsciiDoc User Guide](https://asciidoc.org/userguide.html)
* [draw.io Documentation](https://www.diagrams.net/doc/)
56.926471
289
0.749677
eng_Latn
0.954297
115e4d7c642e2781bd8a40a2d42ab55e4a62b41a
477
md
Markdown
keyboards/knops/readme.md
fzf/qmk_toolbox
10d6b425bd24b45002555022baf16fb11254118b
[ "MIT" ]
null
null
null
keyboards/knops/readme.md
fzf/qmk_toolbox
10d6b425bd24b45002555022baf16fb11254118b
[ "MIT" ]
null
null
null
keyboards/knops/readme.md
fzf/qmk_toolbox
10d6b425bd24b45002555022baf16fb11254118b
[ "MIT" ]
null
null
null
# Knops

![Knops Mini Logo](http://knops.io/img/Knops_logo.jpg)

[Knops](http://www.knops.io/) makes fully customizable custom keyboards in a variety of form factors. So far the Knops Mini, a 3x2 macro keypad, is the biggest success. Inside this directory you'll find support for the entire line of Knops products.

* Maintainer: [Pawnerd](https://github.com/pawnerd)
* Hardware Supported:
  * [`Mini`](mini/): Knops Mini
* Hardware Availability: [knops.io](http://www.knops.io/)
47.7
247
0.742138
eng_Latn
0.697894
115f1982d10ef932de1f080d55e65a8ad9586cc4
2,500
markdown
Markdown
markdown/api/library/system/DocumentsDirectory.markdown
sekodev/corona-docs
c971d5a7598001708f79a928f25b040706f05dea
[ "MIT" ]
20
2020-02-17T14:47:41.000Z
2022-02-09T19:01:27.000Z
markdown/api/library/system/DocumentsDirectory.markdown
sekodev/corona-docs
c971d5a7598001708f79a928f25b040706f05dea
[ "MIT" ]
60
2020-02-18T07:08:10.000Z
2022-02-17T21:41:37.000Z
markdown/api/library/system/DocumentsDirectory.markdown
sekodev/corona-docs
c971d5a7598001708f79a928f25b040706f05dea
[ "MIT" ]
25
2020-02-18T06:34:09.000Z
2021-10-17T16:55:26.000Z
# system.DocumentsDirectory

> --------------------- ------------------------------------------------------------------------------------------
> __Type__              [Constant][api.type.Constant]
> __Library__           [system.*][api.library.system]
> __Revision__          [REVISION_LABEL](REVISION_URL)
> __Keywords__          system directory, DocumentsDirectory
> __See also__          [system.pathForFile()][api.library.system.pathForFile]
>                       [system.ApplicationSupportDirectory][api.library.system.ApplicationSupportDirectory]
>                       [system.CachesDirectory][api.library.system.CachesDirectory]
>                       [system.ResourceDirectory][api.library.system.ResourceDirectory]
>                       [system.TemporaryDirectory][api.library.system.TemporaryDirectory]
> --------------------- ------------------------------------------------------------------------------------------

## Overview

Used with [system.pathForFile()][api.library.system.pathForFile] to create a path for storing and retrieving files that need to persist between application sessions. The path is `/Documents`.

This property can also be used with other APIs requesting `baseDir` as a parameter, for example [display.newImageRect()][api.library.display.newImageRect], [display.save()][api.library.display.save], etc.

<div class="guide-notebox">
<div class="notebox-title">Notes</div>

* In the Corona Simulator, this will be in a sandboxed folder on a <nobr>per-application</nobr> basis. You can view its directories/files via <nobr>__File__ &rarr; __Show&nbsp;Project&nbsp;Sandbox__</nobr>.

* On iOS, this information is backed up by syncing.

</div>

## Syntax

	system.DocumentsDirectory

## Example

``````lua
-- Get path for file "data.txt" in the documents directory
local path = system.pathForFile( "data.txt", system.DocumentsDirectory )

-- Open the file from the path
local fh, reason = io.open( path, "r" )

if fh then
    -- File exists; read its contents into a string
    local contents = fh:read( "*a" )
    print( "Contents of " .. path .. "\n" .. contents )
else
    -- File open failed; output the reason
    print( "File open failed: " .. reason )

    -- Create file since it doesn't exist yet
    fh = io.open( path, "w" )
    if fh then
        print( "Created file" )
        -- Only write if the file was actually created
        local numbers = { 1,2,3,4,5,6,7,8,9 }
        fh:write( "Feed me data!\n", numbers[1], numbers[2], "\n" )
        for _,v in ipairs( numbers ) do
            fh:write( v, " " )
        end
        fh:write( "\nNo more data\n" )
    else
        print( "Create file failed!" )
    end
end

-- Close the file handle only if it exists
if fh then
    io.close( fh )
end
``````
32.894737
206
0.6372
eng_Latn
0.63139
115fc8bc8c818d00b45d451c64571ff98fb1459b
48
md
Markdown
angjs/README.md
zoranf/kmetija-backend
f9cd670a408f109ff0615e97e6128fd6cba238c4
[ "MIT" ]
null
null
null
angjs/README.md
zoranf/kmetija-backend
f9cd670a408f109ff0615e97e6128fd6cba238c4
[ "MIT" ]
null
null
null
angjs/README.md
zoranf/kmetija-backend
f9cd670a408f109ff0615e97e6128fd6cba238c4
[ "MIT" ]
null
null
null
# kmetija

Presentation website for an organic farm
16
37
0.854167
slv_Latn
0.999831
115fcc61a8c58277158b55f225579004f6f1c76e
46
md
Markdown
README.md
AloneGu/web_pressure_test
528d08b41565ca41705a546cb72dc3c2311ecef0
[ "MIT" ]
null
null
null
README.md
AloneGu/web_pressure_test
528d08b41565ca41705a546cb72dc3c2311ecef0
[ "MIT" ]
null
null
null
README.md
AloneGu/web_pressure_test
528d08b41565ca41705a546cb72dc3c2311ecef0
[ "MIT" ]
null
null
null
# web_pressure_test

Test web request pressure (load testing)
15.333333
25
0.847826
eng_Latn
0.625954
116176f4c0831ad0bb51e8abeb49b86480932b20
2,878
md
Markdown
docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimethreadsuspended-method.md
acidburn0zzz/docs.fr-fr
5fdf04b7027f8b7d749c2180da4b99068e1f44ee
[ "CC-BY-4.0", "MIT" ]
1
2019-04-11T17:00:02.000Z
2019-04-11T17:00:02.000Z
docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimethreadsuspended-method.md
Acidburn0zzz/docs.fr-fr
5fdf04b7027f8b7d749c2180da4b99068e1f44ee
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimethreadsuspended-method.md
Acidburn0zzz/docs.fr-fr
5fdf04b7027f8b7d749c2180da4b99068e1f44ee
[ "CC-BY-4.0", "MIT" ]
1
2022-02-23T14:59:20.000Z
2022-02-23T14:59:20.000Z
---
title: ICorProfilerCallback::RuntimeThreadSuspended Method
ms.date: 03/30/2017
api_name:
- ICorProfilerCallback.RuntimeThreadSuspended
api_location:
- mscorwks.dll
api_type:
- COM
f1_keywords:
- ICorProfilerCallback::RuntimeThreadSuspended
helpviewer_keywords:
- RuntimeThreadSuspended method [.NET Framework profiling]
- ICorProfilerCallback::RuntimeThreadSuspended method [.NET Framework profiling]
ms.assetid: de830a8b-6ee1-4900-ace3-4237108f6b12
topic_type:
- apiref
author: mairaw
ms.author: mairaw
ms.openlocfilehash: a0748802599926f4ec218362e6f7d086aab2d8f9
ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/08/2019
ms.locfileid: "59080226"
---
# <a name="icorprofilercallbackruntimethreadsuspended-method"></a>ICorProfilerCallback::RuntimeThreadSuspended Method

Notifies the profiler that the specified thread has been suspended or is about to be suspended.

## <a name="syntax"></a>Syntax

```
HRESULT RuntimeThreadSuspended(
    [in] ThreadID threadId);
```

## <a name="parameters"></a>Parameters

`threadId`
[in] The ID of the thread that has been suspended.

## <a name="remarks"></a>Remarks

The `RuntimeThreadSuspended` notification can occur at any time between the [ICorProfilerCallback::RuntimeSuspendStarted](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimesuspendstarted-method.md) and associated [ICorProfilerCallback::RuntimeResumeStarted](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimeresumestarted-method.md) callbacks. Notifications that occur between [ICorProfilerCallback::RuntimeSuspendFinished](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimesuspendfinished-method.md) and `RuntimeResumeStarted` are for threads that had been running in unmanaged code and were suspended upon entry to the runtime.

Generally, this callback occurs just after a thread is suspended. However, if the currently executing thread (the thread that called this callback) is the one being suspended, this callback occurs just before the thread is suspended.

## <a name="requirements"></a>Requirements

**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).

**Header:** CorProf.idl, CorProf.h

**Library:** CorGuids.lib

**.NET Framework Versions:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]

## <a name="see-also"></a>See also

- [ICorProfilerCallback Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-interface.md)
- [RuntimeThreadResumed Method](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimethreadresumed-method.md)
48.779661
745
0.770327
yue_Hant
0.286105
1161aef24b8218486fb5ffc656fc4b2873f5415e
2,097
md
Markdown
docs/reference/syscalls/interrupt_wait.md
liexusong/fuchsia
81897680af92a1848a063e3c20ff3a4892ccff07
[ "BSD-2-Clause" ]
3
2021-09-02T07:21:06.000Z
2022-03-12T03:20:10.000Z
docs/reference/syscalls/interrupt_wait.md
liexusong/fuchsia
81897680af92a1848a063e3c20ff3a4892ccff07
[ "BSD-2-Clause" ]
56
2021-06-03T03:16:25.000Z
2022-03-20T01:07:44.000Z
docs/reference/syscalls/interrupt_wait.md
liexusong/fuchsia
81897680af92a1848a063e3c20ff3a4892ccff07
[ "BSD-2-Clause" ]
2
2022-02-25T12:22:49.000Z
2022-03-12T03:20:10.000Z
# zx_interrupt_wait

## NAME

<!-- Updated by update-docs-from-fidl, do not edit. -->

Wait for an interrupt.

## SYNOPSIS

<!-- Updated by update-docs-from-fidl, do not edit. -->

```c
#include <zircon/syscalls.h>

zx_status_t zx_interrupt_wait(zx_handle_t handle, zx_time_t* out_timestamp);
```

## DESCRIPTION

`zx_interrupt_wait()` is a blocking syscall that causes the caller to wait until an interrupt is triggered. It can only be used on interrupt objects that have not been bound to a port with [`zx_interrupt_bind()`]. It also, before the waiting begins, will acknowledge the interrupt object, as if [`zx_interrupt_ack()`] were called on it.

The wait may be aborted with [`zx_interrupt_destroy()`] or by closing the handle.

## RIGHTS

<!-- Updated by update-docs-from-fidl, do not edit. -->

*handle* must be of type **ZX_OBJ_TYPE_INTERRUPT** and have **ZX_RIGHT_WAIT**.

## RETURN VALUE

`zx_interrupt_wait()` returns **ZX_OK** on success, and *out_timestamp*, if non-NULL, returns the timestamp of when the interrupt was triggered (relative to **ZX_CLOCK_MONOTONIC**).

## ERRORS

**ZX_ERR_BAD_HANDLE** *handle* is an invalid handle.

**ZX_ERR_WRONG_TYPE** *handle* is not a handle to an interrupt object.

**ZX_ERR_BAD_STATE** the interrupt object is bound to a port.

**ZX_ERR_ACCESS_DENIED** *handle* lacks **ZX_RIGHT_WAIT**.

**ZX_ERR_CANCELED** *handle* was closed while waiting or [`zx_interrupt_destroy()`] was called on it.

**ZX_ERR_INVALID_ARGS** the *out_timestamp* parameter is an invalid pointer.

## SEE ALSO

- [`zx_handle_close()`]
- [`zx_interrupt_ack()`]
- [`zx_interrupt_bind()`]
- [`zx_interrupt_create()`]
- [`zx_interrupt_destroy()`]
- [`zx_interrupt_trigger()`]
- [`zx_port_wait()`]

<!-- References updated by update-docs-from-fidl, do not edit. -->

[`zx_handle_close()`]: handle_close.md
[`zx_interrupt_ack()`]: interrupt_ack.md
[`zx_interrupt_bind()`]: interrupt_bind.md
[`zx_interrupt_create()`]: interrupt_create.md
[`zx_interrupt_destroy()`]: interrupt_destroy.md
[`zx_interrupt_trigger()`]: interrupt_trigger.md
[`zx_port_wait()`]: port_wait.md
27.592105
95
0.730567
eng_Latn
0.861793
1162d509fb7dc5e41ef23fc3fbabaa457297fa9c
1,419
md
Markdown
aws/r/aws_vpc_endpoint_service_allowed_principal.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
78
2021-01-15T14:10:30.000Z
2022-02-14T09:17:40.000Z
aws/r/aws_vpc_endpoint_service_allowed_principal.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
5
2021-04-09T15:21:28.000Z
2022-01-28T19:02:05.000Z
aws/r/aws_vpc_endpoint_service_allowed_principal.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
30
2021-01-17T13:16:57.000Z
2022-03-21T12:52:08.000Z
# aws_vpc_endpoint_service_allowed_principal

[back](../aws.md)

### Index

- [Example Usage](#example-usage)
- [Variables](#variables)
- [Resource](#resource)
- [Outputs](#outputs)

### Terraform

```terraform
terraform {
  required_providers {
    aws = ">= 3.35.0"
  }
}
```

[top](#index)

### Example Usage

```terraform
module "aws_vpc_endpoint_service_allowed_principal" {
  source = "./modules/aws/r/aws_vpc_endpoint_service_allowed_principal"

  # principal_arn - (required) is a type of string
  principal_arn = null
  # vpc_endpoint_service_id - (required) is a type of string
  vpc_endpoint_service_id = null
}
```

[top](#index)

### Variables

```terraform
variable "principal_arn" {
  description = "(required)"
  type        = string
}

variable "vpc_endpoint_service_id" {
  description = "(required)"
  type        = string
}
```

[top](#index)

### Resource

```terraform
resource "aws_vpc_endpoint_service_allowed_principal" "this" {
  # principal_arn - (required) is a type of string
  principal_arn = var.principal_arn
  # vpc_endpoint_service_id - (required) is a type of string
  vpc_endpoint_service_id = var.vpc_endpoint_service_id
}
```

[top](#index)

### Outputs

```terraform
output "id" {
  description = "returns a string"
  value       = aws_vpc_endpoint_service_allowed_principal.this.id
}

output "this" {
  value = aws_vpc_endpoint_service_allowed_principal.this
}
```

[top](#index)
17.518519
71
0.701903
eng_Latn
0.524249
1163c090566ac6257cb347a55d6cbd0041277513
558
md
Markdown
README.md
ryands17/svelte-webpack
6462660acecf45b29f8c1347413d5e32e5552ed5
[ "MIT" ]
null
null
null
README.md
ryands17/svelte-webpack
6462660acecf45b29f8c1347413d5e32e5552ed5
[ "MIT" ]
3
2020-06-19T18:24:46.000Z
2021-01-28T21:12:21.000Z
README.md
ryands17/svelte-webpack
6462660acecf45b29f8c1347413d5e32e5552ed5
[ "MIT" ]
null
null
null
# Svelte Electron

[Svelte](https://svelte.dev) + [Electron](https://electronjs.org/)

## Available Scripts

In the project directory, you can run:

### `yarn dev` or `npm run dev`

Runs the Electron app in development mode.<br>
The app will open automatically and have HMR.

**Note:** Linux users, if you're getting an error it's likely you need to follow [this issue](https://github.com/electron/electron/issues/17972)

### `yarn prod` or `npm run prod`

Builds the app via electron-builder (currently only for Linux; change the setting as per the OS).
29.368421
143
0.729391
eng_Latn
0.980606
1163c63f1bbdd859d3b650b562c84230aea9cb98
65
md
Markdown
README.md
peterwiebe/miniQuery
091d465b4d8520e422974cf7a814c49bf75e768c
[ "Apache-2.0" ]
null
null
null
README.md
peterwiebe/miniQuery
091d465b4d8520e422974cf7a814c49bf75e768c
[ "Apache-2.0" ]
null
null
null
README.md
peterwiebe/miniQuery
091d465b4d8520e422974cf7a814c49bf75e768c
[ "Apache-2.0" ]
null
null
null
## miniQuery

Practice recreating jQuery to see how jQuery works
16.25
50
0.8
eng_Latn
0.996989
1164edf3bf9dd4dde4fad67e964341a7807d662b
38
md
Markdown
README.md
lijoalex/SpringWebApp
1413989f7fbb359b09afee956924cebec418c10e
[ "MIT" ]
null
null
null
README.md
lijoalex/SpringWebApp
1413989f7fbb359b09afee956924cebec418c10e
[ "MIT" ]
null
null
null
README.md
lijoalex/SpringWebApp
1413989f7fbb359b09afee956924cebec418c10e
[ "MIT" ]
null
null
null
# SpringWebApp

Spring Web Application
12.666667
22
0.842105
kor_Hang
0.792017
1164f9136cca815ccb71dfe34deb5d06cc42ebc7
19,121
md
Markdown
NEWS.md
sebffischer/torch
609fa50ad05e46627a125929e710eaa6ddfa47ca
[ "MIT" ]
null
null
null
NEWS.md
sebffischer/torch
609fa50ad05e46627a125929e710eaa6ddfa47ca
[ "MIT" ]
null
null
null
NEWS.md
sebffischer/torch
609fa50ad05e46627a125929e710eaa6ddfa47ca
[ "MIT" ]
null
null
null
# torch (development version) - Improved auto-detection of CUDA version on Windows. (#798, @SvenVw) - Improved parallel dataloaders performance by using a socket conection to transfer data between workers and the main process. (#803) - Serialization is now much faster because we avoid base64 encoding the serialized tensors. As a result, files serialized with newer versions of torch can't be opened with older versions of torch. Set `options(torch.serialization_version = 1)` if you want your file to be readable by older versions. (#803) - `keep_graph` now defaults to the value of `create_graph` when calling `$backward()`. We also renamed it to `retain_graph` to match PyTorch. (#811) - Optimizers created with `optimizer` now carry the classname in the generator and in instances. Optimizer generators now have the class `torch_optimizer_generator`. The class of torch optimizers has been renamed from `torch_Optimizer` to `torch_optimizer`. (#814) # torch 0.7.2 ## Bug fix - Fixed vignette building on Windows. # torch 0.7.1 ## New features - Added `cuda_runtime_version()` to query the CUDA Tolkit version that torch is using. (#790) # torch 0.7.0 ## Breaking changes - `torch_sort` and `Tensor$sort` now return 1-indexed results. (#709, @mohamed-180) - Support for LibTorch 1.10.2. See also [release notes](https://github.com/pytorch/pytorch/releases/tag/v1.10.0) for the PyTorch v1.10. (#758, #763, #775, @hsbadr). - Changed default `dim` from `1` to `2` in `nnf_cosine_similarity`. (#769) - The default value for arguments of various functions have changed. A bug in the code generation was truncating the default values specially if they were float values that needed more than 6 digit precision. (#770) ## New features - `jit_save_for_mobile` allows to save a traced model in bytecode form, to be loaded by a `LiteModuleLoader`. (#713) - Exported `is_torch_tensor` to check wether an object is a tensor or not. (#730, @rdinnager) - Adds `cuda_get_device_properties(device)` that allows one to query device capability and other properties. (#734, @rdinnager) - Implemented `call_torch_function()` to allow calling potentially unexported torch core functions. (#743, @rdinnager) - Now when installing torch all of LibTorch and Lantern headers will be installed within the `inst` directory. This will allow for packages extending torch to bind directly to its C++ library. (#718) - `dataset_subset` will use the `.getbatch` method of the wrapped dataset if one is available. (#742, @egillax) - Added `nn_flatten` and `nn_unflatten` modules. (#773) - Added `cuda_memory_stats()` and `cuda_memory_summary()` to verify the amount of memory torch is using from the GPU. (#774) - Added `backends_cudnn_version()` to query the CuDNN version found by torch. (#774) ## Bug fixes - Fixed a bug in `.validate_sample` for the `Distribution` class that would incorrectly check for tensors. (#739, @hsbadr) - Fixed memory leak when applying custom `autograd_function`s. (#750) - Fixed a bug that caused `autograd_grad` to deadlock when used with custom autograd functions. (#771) - Fixed a bug in `torch_max` and `torch_min` that would fail with `length=2` Tensors. (#772) ## Documentation - Improved the 'Loading data' vignette and datasets documentation. (#780, @jnolis) ## Internal - Refactored the internal Lantern types and Rcpp types and made clearer which are the exported types that can be used in the C++ extensions. (#718) - Simplified concurrency related constructs in autograd. 
(#755, @yitao-li) - R and C++ code cleanup, styling, and formatting. (#753, @hsbadr) - Dataloaders are slightly faster with a new transpose function. (#783) - `torch_tensor` is now a C++ only function slighly increasing performance in a few situations. (#784) # torch 0.6.1 ## New features - Fixed valgrind errors on CRAN by requiring a more recent version of knitr. - Updated LibTorch to version 1.9.1 (#725 @hsbadr) - We now check if lantern DLL's are loaded before calling any lantern function. This avoids segfaults when Lantern is not installed. (#723). # torch 0.6.0 ## Breaking changes - `nn_sequential` is now a bare `nn_module`, allowing to easily inherit from it. This is a breaking change if you used the `name` argument. The `name` behavior can be achieved by subclassing; see the tests in the PR. (#699) ## New features - Additional info is showed when printing tensors like if it requires grad and the grad fn. (#668, #669, #673, @mohamed-180) - We can now subset `nn_sequential` modules using `[`. (#678, @mohamed-180) - We now allow `padding='same'` and `padding='valid'` when using convolutions. (#679) - `nnf_cross_entropy` now uses the ATen `cross_entropy` operation directly instead of doing logsoftmax + NLLLoss. (#680) - Inherited classes are now persisted by subclasses. This is specially useful if you subclass `nn_sequential` and still want that the specific S3 methods still work. (#701) ## Bug fixes - Fixed bug when indexing with numeric vectors. (#693, @mohamed-180) - Fixed bug when indexing tensors with ellipsis and a tensor. (#696) ## Documentation - Improved optimizer documentation by adding a 'Warning' regarding the creation and usage order. (#698) # torch 0.5.0 ## Breaking changes - Droped support for CUDA 10.1 (#610) - `torch_manual_seed()` now matches PyTorch's behavior so we can more easily compare implementations. Since this is a breaking change we added the `torch.old_seed_behavior=TRUE` option so users can stick to the old behavior. (#639) - Indexing with vectors has a now the same behavior as R indexing, making it easier to understand. Users can still use the old behavior by using `torch_index` or `torch_index_put`. (#649) ## New features - Added support for ScriptModule. Loaded JIT modules now operate as `nn_module`s. (#593) - Added a `jit_compile` function that allows compiling arbitrary TorchScript code into script function that can be serialized and executed. (#601) - Added `jit_trace` support for `nn_module` created from R. (#604) - Updated LibTorch to version 1.9.0 (#610) - Added Linear Algebra functions (#612) - Added `contrib_sort_vertices` to efficiently sort vertices on CUDA. (#619) - Allows querying the graph from traced modules. (#623) - Added `with_detect_anomaly` to debug autograd errors. (#628) - Implemented `traced_module$graph_for()` to allow inspecting the optimized jit graph. (#643) - Added `slc` to allow dynamically creating slices when indexing tensors. (#648) ## Bug fixes - Fixed a bug when using a `.getbatch` method that didn't return a `torch_tensor`. (#615) - Fixed warning when using `%/%` caused by a call to deprecated `torch_floor_divide` (#616) - Improved CUDA version auto-detection (#644) ## Internal changes - Improved R <-> JIT types conversion. (#593) - Added Dockerfiles for CUDA 11.1 (#597) - A warning is raised when an incompatible dataset is passed to a parallel dataloader. (#626) - Additionally to calling `gc` when CUDA memory is exhausted we now call `R_RunPendingFinalizers`. 
This should improve memory usage, because we will now delete tensors earlier. (#654) - Fix rchk issues (#667) # torch 0.4.0 ## Breaking changes - `torch_multinomial` now returns 1-based indexes to comply with 1-based indexing across torch. (#588) ## New features - Added parameter to multihead attention module to allow output of unaveraged attention weights. (@jonathanbratt #542) - We now allow `jit_trace` functions with more than 1 argument. (#544) - Added Multivariate normal distribution (#552) - Export the `torch_diff` function and added docs for it. (#565) - Added a `device` argument to `torch_load()` allowing one to select to which device parameters should be loaded. (#578) - Added `distr_categorical()` (#576) - Added `distr_mixture_same_family()` (#576) - Improve handling of optimizers state and implement `load_state_dict()` and `state_dict()` for optimizers. (#585) - Added the ability to save R `list`s containing `torch_tensor`s using `torch_save`. This allows us to save the state of optimizers and modules using `torch_save()`. (#586) ## Bug fixes - Fixed bug in `nn_multihead_attention` when q,k,v inputs not all the same. (@jonathanbratt #540) - Fixed `$copy_` so it correctly respects the src `requires_grad()` when reloading saved models with `torch_load()`. (#545) - Fixed `nn_init_xavier_normal_()` and `nn_init_xavier_uniform_()` standard deviation calculation. (#557) - Fixed bug in `torch_tensordot()` when called when infering dimensions. (#563) - Dataset's `.getbatch` now takes an integer vector as input instead of a `list()`. (#572) - Fixed bug with `tensor$size()` when indexing with negative numbers. (#570) - Fixed bug in the `log_prob` of `distr_bernoulli()` (#581) ## Internal changes - Better handling optional Tensor arguments by using an explicit `XPtrTorchOptionalTensor` class. (#565) - Tensors in the R side that point to the same C++ Tensor are now guaranteed to be the same object. This allows to easily determine unique model parameters. (#582) # torch 0.3.0 ## Breaking changes - `torch_nonzero` and `tensor$nonzero()` now return 1-based indexes. (#432) - Breaking change: `torch_arange` returns in the closed interval `[start, end]` instead of the half open `[start, end)`. This makes it behave similar to R's `seq`. (#506) ## New features - `torch_split` now accepts a list of sizes as well as a fixed size. (#429) - Added `nn_layer_norm`. (#435) - Allow `timeout=360` as `install_torch()` parameter for large file download (@cregouby #438) - Added `install_torch_from_file()` and `get_install_libs_url()`for setup cases where direct download is not possible (@cregouby #439) - Added `mean.torch_tensor` (#448) - New arguments `worker_globals` and `worker_packages` allowing to easily pass objects to workers in parallel dataloaders (#449). - We now call R garbage collector when there's no memory available on GPU, this can help in a few cases when the laziness of the garbage collector allows too many tensors to be on memory even though they are no longer referenced in R. 
(#456) - Implemented `nn_group_norm` and fixed a bug in `nnf_group_norm` (#474) - Added backend functions allowing us to query which optimizations LibTorch was compiled with (#476) - Added normal distribution (#462) - Added bernoulli distribution (#484) - `as.list` for `nn_modules` (#492) - Enumerate support in Bernoulli distribution (#490) - Added Poisson Distriibution (#495) - Allow optional .getbatch in datasets/dataloaders (#498) - `nn_lstm`, `nn_gru` and `nn_gru` can now use cudnn accelerations when available (#503). - Added Gamma distribution (#489) - We now respect the TORCH_HOME env var to automatically install torch. (#522) - Implement comparison operator `!=` for torch dtypes. (#524) - Added Chi-square distribution. (#518) - Added `optimizer` function allowing to easily implement custom optimizers. (#527) ## Bug fixes - Fixed bug in `optim_lbfgs` that would make model objects exponentially big. (#431) - Correctly handle `NaN`s in L-BFGS optimizer (#433) - The default collate function now respects the data type when converting to a tensor (if the dataset returns an R object) (#434) - Fixed `torch_normal`. (#450) - Fixed backward compatibility issue when loading models saved in older versions of torch. This bug was introduced in #452 and is now fixed and we also added a regression test. (#458) - Fixed bug when using RNN's on the GPU (#460) - Found and fixed some memory leaks, specially when creating datatypes from strings and when saving models with `torch_save`. (#454) - Fixed bug in `nnf_pad` when using `mode='circular'`. (#471) - Bugfixes in `nn_multihead_attention` (#496) - Fixed bug when using packed sequences with `nn_lstm` (#500) - Fixed bug in the `to` method of `nn_module` that would reset the `requires_grad` attribute of parameters. (#501) - Added `strong_wolfe` option to `optim_lbfgs`. (#517) - Fixed default argument of `nn_init_trunc_normal_` initializer function. (#535) ## Documentation - Added vignette on reading models from Python (#469) ## Internal changes - Removed the PerformanceReporter from tests to get easier to read stack traces. (#449) - Internal change in the R7 classes so R7 objects are simple external pointer instead of environments. This might cause breaking change if you relied on saving any kind of state in the Tensor object. (#452) - Internal refactoring making Rcpp aware of some XPtrTorch* types so making it simpler to return them from Rcpp code. This might cause a breaking change if you are relying on `torch_dtype()` being an R6 class. (#451) - Internal changes to auto unwrap arguments from SEXP's in Rcpp. This will make easier to move the dispatcher system to C++ in the future, but already allows us to gain ~30% speedups in small operations. (#454) - Added a Windows GPU CI workflow (#508). - Update to LibTorch v1.8 (#513) - Moved some parts of the dispatcher to C++ to make it faster. (#520) # torch 0.2.1 ## Breaking changes - Made `torch_one_hot` and `nnf_one_hot` use 1-based indexing. (#410) - `nn_module$eval()` and `nn_module$train()` now return a callable `nn_module` instead of a `nn_Module`. (#425) ## New features - Added a custom CPU allocator to call `gc` when torch might need more memory (#402) - Updated to LibTorch 1.7.1 (#412) - Allow listing all nested modules in a `nn_module` (#417) - Allow modifying the `requires_grad` attribute using the `$<-` operator (#419) - Added `length` method for the `nn_sequential` container. 
## New features

- `torch_split` now accepts a list of sizes as well as a fixed size. (#429)
- Added `nn_layer_norm`. (#435)
- Allow `timeout=360` as an `install_torch()` parameter for large file downloads (@cregouby #438)
- Added `install_torch_from_file()` and `get_install_libs_url()` for setup cases where direct download is not possible (@cregouby #439)
- Added `mean.torch_tensor` (#448)
- New arguments `worker_globals` and `worker_packages` allowing one to easily pass objects to workers in parallel dataloaders (#449).
- We now call the R garbage collector when there's no memory available on the GPU; this can help in a few cases when the laziness of the garbage collector allows too many tensors to stay in memory even though they are no longer referenced in R. (#456)
- Implemented `nn_group_norm` and fixed a bug in `nnf_group_norm` (#474)
- Added backend functions allowing us to query which optimizations LibTorch was compiled with (#476)
- Added the normal distribution (#462)
- Added the Bernoulli distribution (#484)
- `as.list` for `nn_modules` (#492)
- Enumerate support in the Bernoulli distribution (#490)
- Added the Poisson distribution (#495)
- Allow an optional `.getbatch` in datasets/dataloaders (#498)
- `nn_lstm` and `nn_gru` can now use cudnn accelerations when available (#503).
- Added the Gamma distribution (#489)
- We now respect the `TORCH_HOME` env var to automatically install torch. (#522)
- Implemented the comparison operator `!=` for torch dtypes. (#524)
- Added the Chi-square distribution. (#518)
- Added the `optimizer` function allowing one to easily implement custom optimizers; see the sketch below. (#527)
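To sketch what `optimizer()` (#527) enables, here is a hedged implementation of plain gradient descent; details such as handling undefined gradients are deliberately omitted, and the exact field names follow the pattern introduced in that PR:

```r
library(torch)

optim_plain_sgd <- optimizer(
  "optim_plain_sgd",
  initialize = function(params, lr = 0.01) {
    super$initialize(params, defaults = list(lr = lr))
  },
  step = function() {
    with_no_grad({
      for (group in self$param_groups) {
        for (param in group$params) {
          # Gradient descent update: param <- param - lr * grad
          param$add_(param$grad, alpha = -group$lr)
        }
      }
    })
  }
)

opt <- optim_plain_sgd(nn_linear(2, 1)$parameters, lr = 0.1)
```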
## Bug fixes

- Fixed bug in `optim_lbfgs` that would make model objects exponentially big. (#431)
- Correctly handle `NaN`s in the L-BFGS optimizer (#433)
- The default collate function now respects the data type when converting to a tensor (if the dataset returns an R object) (#434)
- Fixed `torch_normal`. (#450)
- Fixed a backward compatibility issue when loading models saved in older versions of torch. This bug was introduced in #452 and is now fixed, and we also added a regression test. (#458)
- Fixed bug when using RNNs on the GPU (#460)
- Found and fixed some memory leaks, especially when creating datatypes from strings and when saving models with `torch_save`. (#454)
- Fixed bug in `nnf_pad` when using `mode='circular'`. (#471)
- Bug fixes in `nn_multihead_attention` (#496)
- Fixed bug when using packed sequences with `nn_lstm` (#500)
- Fixed bug in the `to` method of `nn_module` that would reset the `requires_grad` attribute of parameters. (#501)
- Added a `strong_wolfe` option to `optim_lbfgs`. (#517)
- Fixed the default argument of the `nn_init_trunc_normal_` initializer function. (#535)

## Documentation

- Added a vignette on reading models from Python (#469)

## Internal changes

- Removed the PerformanceReporter from tests to get easier-to-read stack traces. (#449)
- Internal change in the R7 classes so R7 objects are simple external pointers instead of environments. This might cause a breaking change if you relied on saving any kind of state in the Tensor object. (#452)
- Internal refactoring making Rcpp aware of some XPtrTorch* types, making it simpler to return them from Rcpp code. This might cause a breaking change if you are relying on `torch_dtype()` being an R6 class. (#451)
- Internal changes to auto-unwrap arguments from SEXPs in Rcpp. This will make it easier to move the dispatcher system to C++ in the future, and already allows us to gain ~30% speedups in small operations. (#454)
- Added a Windows GPU CI workflow (#508).
- Updated to LibTorch v1.8 (#513)
- Moved some parts of the dispatcher to C++ to make it faster. (#520)

# torch 0.2.1

## Breaking changes

- Made `torch_one_hot` and `nnf_one_hot` use 1-based indexing. (#410)
- `nn_module$eval()` and `nn_module$train()` now return a callable `nn_module` instead of a `nn_Module`. (#425)

## New features

- Added a custom CPU allocator to call `gc` when torch might need more memory (#402)
- Updated to LibTorch 1.7.1 (#412)
- Allow listing all nested modules in a `nn_module` (#417)
- Allow modifying the `requires_grad` attribute using the `$<-` operator (#419)
- Added a `length` method for the `nn_sequential` container. (#423)
- Added support for CUDA 11 on Linux (#424)

## Bug fixes

- Fixed support for CUDA 9.2 (#398)
- Fixed GPU CI that was skipping tests. (#398)
- Fixed a memory leak when printing tensors (#402)
- Fixed a memory leak when passing integer vectors to lantern. (#402)
- Fixed a few more memory leaks related to the autograd context (#405)
- Fixed `nnf_normalize` and `x$norm()` as they could not be called (#409)

## Documentation

- Small improvement to the `nn_module` documentation (#399).
- The getting-started section has been removed from the pkgdown website in favor of the new guide on the landing page (#401)
- Updated the landing page to include a getting-started tutorial (#400)

# torch 0.2.0

## Breaking changes

- Dataloaders now return a `coro::exhausted` instead of raising `stop_iteration_error` when the dataloader is exhausted. (#366)
- Fixed bug that would happen with functions that need to transform tensors from 0-based to 1-based on the GPU. (#317)
- Fixed `torch_argsort` and `x$argsort` to return 1-based indexes (#342)
- Fixed `torch_argmax`, `torch_argmin`, `x$argmax()` and `x$argmin()` to return 1-based indexes. (#389)

## New features

- Added the `$element_size()` method (@dirkschumacher #322)
- Added the `$bool()` method (@dirkschumacher #323)
- `torch__addr` and `torch__addr_` have been removed as they are no longer available in LibTorch 1.7.
- We now check the MD5 hashes of downloaded LibTorch binaries. (@dirkschumacher #325)
- Added a Distribution abstract class (@krzjoa #333)
- Updated to LibTorch 1.7 (#337)
- We now warn when converting `long` tensors to R when there's a chance of an integer overflow. (#347)
- Allow `private` and `active` methods in `nn_module`s and `dataset`s. (#349)
- Added `nn_batch_norm3d` (@mattwarkentin #354)
- Added the `nn_lstm` and `nn_gru` modules. (#362)
- Added distribution constraints (@krzjoa #364)
- Dataloaders now use the `num_workers` argument to load data in parallel (#366)
- Added an Exponential Family class to distributions (#373)
- Added a Dockerfile and a docker-compose file with GPU support, with a how-to guide. (#380 #386)
- Added R 3.6 to the CI system and fixed compilation from source with it on Windows (#387)
- Initial support for JIT tracing (#377)
- Added the L-BFGS optimizer (#392)
- Improved the `nn_module` UI with better autocomplete support and a print method (#391)

## Bug fixes

- Fixed bug when trying to print the `grad_fn` of a Tensor that doesn't have one. See (#321)
- Refactored the optimizers code to avoid duplication of parameter checks, etc. (@dirkschumacher #328)
- Fixed `torch_norm` so it can be called with a `dim` argument. (#345)
- Fixed crash when calling `torch_hann_window` with an invalid `NULL` `window_length`. (#351)
- Fixed `torch_stft` calls for LibTorch 1.7 (added the `return_complex` argument) (#355)
- Fixed bug when strides were `NULL` in some pooling operations. (#361)
- Use `nvcc --version` instead of `nvidia-smi` to find the CUDA version, as `nvidia-smi` reports the latest supported version and not the installed one. (#363)
- Corrected the URL to download LibTorch under Linux with CUDA 10.2 (#367)
- Fixed handling of integer tensors when indexing tensors (#385)
- Fixed bug when passing length-zero vectors to lantern/libtorch. (#388)

# torch 0.1.1

## Bug fixes

- Fixed bug that made `RandomSampler(replacement = TRUE)` never take the last element in the dataset. (84861fa)
- Fixed `torch_topk` and `x$topk` so the returned indexes are 1-based (#280)
- Fixed a bug (#275) that would cause `1 - torch_tensor(1, device = "cuda")` to fail because `1` was created on the CPU. (#279)
- We now preserve names in the `dataloader` output (#286)
- `torch_narrow`, `Tensor$narrow()` and `Tensor$narrow_copy` are now indexed starting at 1. (#294)
- `Tensor$is_leaf` is now an active method. (#295)
- Fixed bug when passing equations to `torch_einsum`. (#296)
- Fixed `nn_module_list()` to correctly name added modules; otherwise they are not returned when doing `state_dict()` on it. (#300)
- Fixed bug related to random number seeds when using in-place methods. (#303)
- Fixed `nn_batchnorm*` so it returns the same results as PyTorch (#302)
- Fixed a bug in `nn_module$parameter` when there were shared parameters between layers. (#306)
- Fixed `$max` and `$min` to return 1-based indexes. (#315)

## New features

- Expanded `utils_data_default_collate` to support converting R objects to torch tensors when needed. (#269)
- Added an `as.matrix` method for torch Tensors. (#282)
- By default we now truncate the output of `print(torch_tensor(1:40))` if it spans more than 30 lines. This is useful to avoid spamming the console or taking very long when printing a very large tensor. (#283)
- Added the Adadelta optimizer (@krzjoa #284)
- Added support for GPUs on Windows (#281)
- Added the Adagrad optimizer (@krzjoa #289)
- Added the RMSprop optimizer (@krzjoa #290)
- Added the Rprop optimizer (@krzjoa #297)
- Added gradient clipping utilities (#299)
- Added `nnf_contrib_sparsemax` and `nn_contrib_sparsemax`. (#309)
- Added the ASGD optimizer (@krzjoa #307)
- Getters and setters for the number of threads used by torch (#311)

# torch 0.1.0

- Added many missing losses (#252)
- Implemented the `$<-` and `[[<-` operators for the `nn_module` class. (#253)
- Exported `nn_parameter`, `nn_buffer`, and the `is_*` auxiliary functions.
- Added a new serialization vignette.
- Added a few learning rate schedulers (#258)

# torch 0.0.2

- Added a `NEWS.md` file to track changes to the package.
- Auto-install when loading the package for the first time.
54.787966
306
0.745725
eng_Latn
0.993804
11651a45554586afc653a0cdff922ddb2e9843e0
12,031
md
Markdown
README.md
Blackgard/vk-bot-python
5d1eb269d76567a8e31dec47c0ea3c5cc1bcbc3c
[ "MIT" ]
null
null
null
README.md
Blackgard/vk-bot-python
5d1eb269d76567a8e31dec47c0ea3c5cc1bcbc3c
[ "MIT" ]
null
null
null
README.md
Blackgard/vk-bot-python
5d1eb269d76567a8e31dec47c0ea3c5cc1bcbc3c
[ "MIT" ]
null
null
null
<p align="center"> <img src="./images/shablbot.png" alt="logo shablbot"> </p>

<div align="center">

[![Build](https://img.shields.io/azure-devops/build/sasna142/026fd26f-bb59-48fd-bb91-6d9ebe113f87/2)]()
[![License](https://img.shields.io/github/license/blackgard/shablbot)]()

</div>

---------------------------------

🤖 A bot written in Python for the VKontakte social network, working through VkBotLongPull.

## 💭 About the project <a name="#about"></a>

This project was developed to make it easier to create bots for people who are only a little familiar with programming. The bot is designed so that you have to write a minimal amount of code to add new functionality. The bot will keep being refined and developed.

## 🎈 Installation <a name="#install"></a>

You can use the [pip](https://pypi.org/project/pip/) command:

```cmd
pip install shablbot
```

Or use [poetry](https://python-poetry.org/):

```poetry
poetry add shablbot
```

## :dizzy: Initialization <a name="init"></a>

To start working with the bot, you need to initialize the bot's components. To do this, run the command:

windows

```cmd
C:\shablbot> py -m shablbot --init
```

linux

```bash
blackgard@bar:~/shablbot$ python3 -m shablbot --init
```

After that, directories with the following structure will be created in the folder the script was called from:

```
📦*yourfolder*
 ┣ 📂commands
 ┃ ┣ 📂private
 ┃ ┃ ┗ 🐍 show_id_active_chats.py
 ┃ ┣ 📂public
 ┃ ┃ ┣ 🐍 chat_bot_off.py
 ┃ ┃ ┣ 🐍 chat_bot_on.py
 ┃ ┃ ┗ 🐍 chat_show_statistics.py
 ┣ 📂keyboards
 ┃ ┣ 📜 clear.json
 ┃ ┗ 📜 default.json
 ┣ 📂modules
 ┃ ┗ 📂games
 ┃   ┗ 🐍 flip_and_roll.py
 ┣ 📂phrases
 ┃ ┣ 📜 _default.json
 ┃ ┣ 📜 bye.json
 ┃ ┗ 📜 hello.json
 ┣ 📂settings
 ┃ ┣ 🐍 settings_model.py
 ┃ ┗ 🐍 settings.py
 ┗ 🐍 manager.py
```

<div align="center"><font size="2" style="text-align:center">Each of these directories is described in more detail below</font></div>
<br>

After this you can move on to configuring the bot.

## ⏳ Initial setup <a name="start_setting"></a>

The bot is configured by editing the <b>[settings.py](#init)</b> file located in the settings folder created in the step above.

Fields required for the bot to work:

1. **TOKEN** - the access token of the VKontakte community; the token must have rights to community messages. [How do I get a token for the bot?](#how_get_bot_token)

```python
TOKEN = os.getenv("TOKEN") # "1234566789908798689764867293876243987" (str)
```

2. **BOT_CHAT_ID** - the id of the VKontakte community page on whose behalf the bot will run.

```python
BOT_CHAT_ID = os.getenv("BOT_CHAT_ID") # 123456789 (int)
```

3. **DEFAULT_REACTION_TEMPLATES** - words the bot will always react to in some way.

```python
DEFAULT_REACTION_TEMPLATES = (r"бот",) # (tuple)
```

4. **ADMIN_ID** - the id of the VKontakte page of the person who will administer the bot.

```python
ADMIN_ID = os.getenv("ADMIN_ID") # 123456789 (int)
```

The remaining parameters do not need to be changed for an initial launch of the bot.

🔔 *I advise storing the token and ids in a .env file. For example, use [python-dotenv](https://pypi.org/project/python-dotenv/); a sketch follows below.*
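For illustration, a minimal sketch of loading these values from a `.env` file with python-dotenv. The variable names match the settings above; the `.env` layout itself is an assumption:

```python
# Hypothetical .env file next to settings.py:
#   TOKEN=1234566789908798689764867293876243987
#   BOT_CHAT_ID=123456789
#   ADMIN_ID=123456789

import os

from dotenv import load_dotenv

load_dotenv()  # read variables from .env into the process environment

TOKEN = os.getenv("TOKEN")                   # str
BOT_CHAT_ID = int(os.getenv("BOT_CHAT_ID"))  # int
ADMIN_ID = int(os.getenv("ADMIN_ID"))        # int
```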
## 🚀 Running the bot

To run the bot, we use the [manager.py](#init) file created in the first step.

windows

```cmd
C:\shablbot> py manager.py --run
```

linux

```bash
blackgard@bar:~/shablbot$ python3 manager.py --run
```

If you did everything correctly, you will see the following message in the console:

```console
2099-99-99 at -1:99:99 | INFO | ------- Бот запущен / Bot is runing -------
```

## :card_file_box: Bot modularity <a name="bot_modules"></a>

1. [Commands module](#bot_modules_commands)
2. [Keyboards module](#bot_modules_keyboards)
3. [Custom modules](#bot_modules_modules)
4. [Phrases module](#bot_modules_phrases)
5. [Settings module](#bot_modules_settings)

How the bot processes commands and messages is hidden from you. Instead, five types of modules are available for extending and configuring the bot:

### commands <a name="bot_modules_commands"></a>

The module responsible for the bot's management commands. Needed for administration.

Commands come in two types:

1. <b>private</b> - available only to the bot administrator
2. <b>public</b> - available to all users

> :bell: Commands must be enabled in the bot settings. See the <b>ACTIVE_COMMANDS</b> variable.

To add a new command, create a file named \*command_name\*.py in the private/public folder. In the created file, define a command_settings variable with the following structure:

```py
command_settings = {
    # Command code. Must be unique.
    "code": "bot_off",

    # Command name. Public variable.
    "name": "Выключить бота",

    # Words the bot will react to.
    # May be a regular expression.
    "templates": ["выкл бот", "бот выкл"],

    # The bot's answer once the command has been executed.
    "answer": "Теперь бот не читает сообщения в чате",

    # Command description. Public variable.
    "description": "Команда для выключения бота в чате (Внутри чата)",

    # How templates are matched. If "normal", they are compared as words.
    # If "regular", they are compared as regular expressions.
    "method": "normal",

    # Which variables are needed to execute the command.
    # Available values = processed_chat, chats, commands;
    "need": ["processed_chat",],

    # Entry point for executing the command.
    # Can be any function.
    "entry_point": command
}
```

The function that turns the bot off in a chat:

```py
def command(processed_chat: Chat) -> None:
    processed_chat.turn_off()
```

### keyboards <a name="bot_modules_keyboards"></a>

The module responsible for the bot's keyboard variants. Needed for configuring messages if you want to use a keyboard in the bot's messages. [Read more about the VK keyboard here](https://vk.com/dev/bots_docs_3). A hedged example of a layout file follows below.

> :bell: Keyboards must be enabled in the bot settings. See the <b>KEYBOARDS</b> variable.

> :warning: *The files "clear.json" and "default.json" are required for the bot to work.*
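For illustration, a minimal sketch of what a keyboard file such as default.json might contain. It follows the VK bots documentation linked above; whether shablbot expects exactly this schema is an assumption:

```json
{
    "one_time": false,
    "buttons": [
        [
            {
                "action": {
                    "type": "text",
                    "label": "Привет",
                    "payload": "{\"command\": \"hello\"}"
                },
                "color": "primary"
            }
        ]
    ]
}
```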
"templates": {"flip": [r"флип"], "roll": [r"ролл"]}, } ``` Входная точка модуля должна иметь такую структуру, но не ограничена этим: ```py def activate_module(func) -> str: """Входная точка модуля""" active_func = settings["func"].get(func)["entry_point"] # Если переменная ответа будет в значении None. # То бот не отправит сообщение пользователю. answer_module = None if active_func: answer_module = active_func() return answer_module ``` [Подробнее о структуре кастомных модулей смотрите тут flip_and_roll.py](/shablbot/init/modules/games/flip_and_roll.py) ### phrases <a name="bot_modules_phrases"></a> Модуль отвечающий за фразы, на которые бот реагирует. Содержит в себе файлы <b>.json</b> формата. > :bell: Все фразы из папки подгружаются автоматически. Вы можете исключить ненужные фразы используя в настройках переменную <b>EXCLUDED_PHRASES</b>. json файл должен содержать следующую структуру: > :warning: *Значения с пометкой "_comment" в реальном файле не должны пристутствовать.* ```json { "#group_comment#":"Стандартная группа. нельзя удалять" "group": "default", "#words_comment#":"Список слов входящих в группу" "words": { "#main_comment#":"Название слова, на которое бот реагирует. Может содержать любые символы. Для файла _default.json 'main' обязательное системное значение" "main": { "#templates_comment#":"Фразы для реакции" "templates": [ "бот" ], "#answer_comment#":"Варианты ответа разбитые по редкости" "answer": { "common": ["Я бот"], "uncommon": ["Я почти бот"], "rare": ["Я точно бот"], "legendary": ["А может быть это ты бот?"] }, "#templates_comment#":"Ключ клавиатуры, которую нужно отправить для данного слова." "keyboard": "default" }, } } ``` ### settings <a name="bot_modules_settings"></a> Модуль отвечающий за настройки бота. Все настройки производятся в файле settings.py. В файле для каждой переменной имеются комментарии, поясняющие, что в них хранится. ## 💻 Пример работы Бот по имени "Ходор" - [клик-клик (вк)](https://vk.com/hodor_designer) ## 🧰 CLI Shablbot Для бота разработано CLI. Доступные методы: ```console (env) C:\Users\user\Desktop\shablbot>py manager.py --help usage: python manage.py [-h] [-r] [-i] [-c] 🤖 Бот написанный на Python для социальной сети Вконтакте, работающий через VkBotLongPull optional arguments: -h, --help show this help message and exit -r, --run-bot Запустить сервер для работы бота -i, --init Инициировать каталоги для работы бота [ "commands", "keyboards", "modules", "phrases", "settings", "manager.py" ] -c, --check-bot Проверить работоспособность бота без запуска сервера (c) Alex Drachenin ``` Для старта работы с ботом вы можете воспользоваться методом "--init" таким образом: ```console (env) C:\Users\user\Desktop\shablbot>py -m shablbot --init Каталог 'commands' инициирован! Каталог 'keyboards' инициирован! Каталог 'modules' инициирован! Каталог 'phrases' инициирован! Каталог 'settings' инициирован! Файл manager.py инициирован! ``` ## ❔ Как получить токен для работы бота? <a name="how_get_bot_token"></a> Для начала нам нужно создать сообщество. Для этого переходим в вк в вкладку "<b>Сообщества</b>" и нажимаем кнопку "<b>Создать сообщество</b>". Там вы заполняете всю необходимую вам информацию, со всем соглашаетесь и попадаете на страницу группы. Там нам нужно найти вкладку "<b>Управление</b>". В меню справа найдите "<b>Настройки</b>"->"<b>Работа с API</b>". На той странице будет 3 вкладки. Из них нам нужны только 1 и 3: 1. Нажимаем кнопку "<b>Создать ключ</b>", выбираем все необходимые нам доступы (желательно все) и нажимаем "<b>Создать</b>". 
### settings <a name="bot_modules_settings"></a>

The module responsible for the bot's settings. All configuration is done in the settings.py file. The file contains a comment for every variable explaining what it stores.

## 💻 Working example

A bot named "Ходор" ("Hodor") - [click-click (vk)](https://vk.com/hodor_designer)

## 🧰 CLI Shablbot

A CLI has been developed for the bot. Available methods:

```console
(env) C:\Users\user\Desktop\shablbot>py manager.py --help
usage: python manage.py [-h] [-r] [-i] [-c]

🤖 Бот написанный на Python для социальной сети Вконтакте, работающий через VkBotLongPull

optional arguments:
  -h, --help       show this help message and exit
  -r, --run-bot    Запустить сервер для работы бота
  -i, --init       Инициировать каталоги для работы бота [ "commands", "keyboards", "modules", "phrases", "settings", "manager.py" ]
  -c, --check-bot  Проверить работоспособность бота без запуска сервера

(c) Alex Drachenin
```

To start working with the bot you can use the "--init" method like this:

```console
(env) C:\Users\user\Desktop\shablbot>py -m shablbot --init
Каталог 'commands' инициирован!
Каталог 'keyboards' инициирован!
Каталог 'modules' инициирован!
Каталог 'phrases' инициирован!
Каталог 'settings' инициирован!
Файл manager.py инициирован!
```

## ❔ How do I get a token for the bot? <a name="how_get_bot_token"></a>

First we need to create a community. To do this, go to the "<b>Communities</b>" tab in VK and click the "<b>Create community</b>" button. Fill in all the information you need, agree to everything, and you will land on the group page.

There we need to find the "<b>Manage</b>" tab. In the menu on the right, find "<b>Settings</b>" -> "<b>API usage</b>".

That page has 3 tabs. Of these we only need the 1st and the 3rd:

1. Click the "<b>Create token</b>" button, select all the access rights you need (preferably all of them) and click "<b>Create</b>". This token is needed for the [TOKEN](#start_setting) variable in the bot settings.
2. Not needed, skip it.
3. On this tab you need to select the API version. The bot was tested on the latest version at the time of writing (5.131); I advise choosing the most recent one. You also need to set "<b>Long Poll API</b>" to "<b>Enabled</b>".

After that, go to the "<b>Event types</b>" tab and select the values you need. The minimum for the bot to work:

1. Incoming message
2. Outgoing message

After that your bot is ready to work; you can start testing it. Good luck!

## ✍️ Author

* [@alex_blackgard](https://vk.com/alexblackgard) - creator of the bot and a person who will be glad of any help improving the bot 🐙💭🌎

<div align="center">

[![vk](./images/vk.svg)](https://vk.com/alexblackgard)
[![instagram](./images/instagram.svg)](https://www.instagram.com/alexandr_blackgard/)
[![github](./images/github.svg)](https://github.com/Blackgard)

</div>
36.347432
348
0.707755
rus_Cyrl
0.884443
1165bfe9ae7d7ea49d6436106c3b088f40989858
47,882
md
Markdown
repos/groovy/remote/3.0-jdk11.md
Beatzevo/repo-info-master
e69fae89fa1c53db5dcb3b40c438a33633a70738
[ "Apache-2.0" ]
null
null
null
repos/groovy/remote/3.0-jdk11.md
Beatzevo/repo-info-master
e69fae89fa1c53db5dcb3b40c438a33633a70738
[ "Apache-2.0" ]
null
null
null
repos/groovy/remote/3.0-jdk11.md
Beatzevo/repo-info-master
e69fae89fa1c53db5dcb3b40c438a33633a70738
[ "Apache-2.0" ]
null
null
null
## `groovy:3.0-jdk11`

```console
$ docker pull groovy@sha256:4e93d74d402b237939e73855bc8992de2f907f6423f8fd103bcfba2e403e2e6c
```

- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms:
  - linux; amd64
  - linux; arm variant v7
  - linux; arm64 variant v8
  - linux; ppc64le
  - linux; s390x

### `groovy:3.0-jdk11` - linux; amd64

```console
$ docker pull groovy@sha256:06ae09771da3ce4e0099fa4137b7d57546e6828c38c9c376a9d062cd58a9bd7e
```

- Docker Version: 18.09.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **281.9 MB (281907987 bytes)** (compressed transfer size, not on-disk size)
- Image ID: `sha256:36005e7e0a05fc73df6d3098dae3aae48388aa05a5aff4b031699e20901b16f6`
- Default Command: `["groovysh"]`

```dockerfile
# Fri, 25 Sep 2020 22:33:49 GMT
ADD file:4974bb5483c392fb54a35f3799802d623d14632747493dce5feb4d435634b4ac in /
# Fri, 25 Sep 2020 22:33:50 GMT
RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
# Fri, 25 Sep 2020 22:33:51 GMT
RUN [ -z "$(apt-get indextargets)" ]
# Fri, 25 Sep 2020 22:33:52 GMT
RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
# Fri, 25 Sep 2020 22:33:52 GMT
CMD ["/bin/bash"]
# Fri, 25 Sep 2020 23:10:55 GMT
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Fri, 25 Sep 2020 23:11:26 GMT
RUN apt-get update && apt-get install -y --no-install-recommends tzdata curl ca-certificates fontconfig locales && echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen && locale-gen en_US.UTF-8 && rm -rf /var/lib/apt/lists/*
# Fri, 25 Sep 2020 23:11:51 GMT
ENV JAVA_VERSION=jdk-11.0.8+10
# Fri, 25 Sep 2020 23:12:02 GMT
RUN set -eux; ARCH="$(dpkg --print-architecture)"; case "${ARCH}" in aarch64|arm64) ESUM='fb27ea52ed901c14c9fe8ad2fc10b338b8cf47d6762571be1fe3fb7c426bab7c'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.8_10.tar.gz'; ;; armhf|armv7l) ESUM='d00370967e4657e137cc511e81d6accbfdb08dba91e6268abef8219e735fbfc5'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_arm_linux_hotspot_11.0.8_10.tar.gz'; ;; ppc64el|ppc64le) ESUM='d206a63cd719b65717f7f20ee3fe49f0b8b2db922986b4811c828db57212699e'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.8_10.tar.gz'; ;; s390x) ESUM='5619e1437c7cd400169eb7f1c831c2635fdb2776a401147a2fc1841b01f83ed6'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_s390x_linux_hotspot_11.0.8_10.tar.gz'; ;; amd64|x86_64) ESUM='6e4cead158037cb7747ca47416474d4f408c9126be5b96f9befd532e0a762b47'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_x64_linux_hotspot_11.0.8_10.tar.gz'; ;; *) echo "Unsupported arch: ${ARCH}"; exit 1; ;; esac; curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; mkdir -p /opt/java/openjdk; cd /opt/java/openjdk; tar -xf /tmp/openjdk.tar.gz --strip-components=1; rm -rf /tmp/openjdk.tar.gz;
# Fri, 25 Sep 2020 23:12:02 GMT
ENV JAVA_HOME=/opt/java/openjdk PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Fri, 25 Sep 2020 23:12:02 GMT
CMD ["jshell"]
# Wed, 30 Sep 2020 22:59:17 GMT
CMD ["groovysh"]
# Wed, 30 Sep 2020 22:59:17 GMT
ENV GROOVY_HOME=/opt/groovy
# Wed, 30 Sep 2020 22:59:18 GMT
RUN set -o errexit -o nounset && echo "Adding groovy user and group" && groupadd --system --gid 1000 groovy && useradd --system --gid groovy --uid 1000 --shell /bin/bash --create-home groovy && mkdir --parents /home/groovy/.groovy/grapes && chown --recursive groovy:groovy /home/groovy && chmod --recursive 1777 /home/groovy && echo "Symlinking root .groovy to groovy .groovy" && ln --symbolic /home/groovy/.groovy /root/.groovy
# Wed, 30 Sep 2020 22:59:18 GMT
VOLUME [/home/groovy/.groovy/grapes]
# Wed, 30 Sep 2020 22:59:19 GMT
WORKDIR /home/groovy
# Wed, 30 Sep 2020 22:59:25 GMT
RUN apt-get update && echo "Installing build dependencies" && apt-get install --yes --no-install-recommends dirmngr fontconfig gnupg unzip wget && rm --recursive --force /var/lib/apt/lists/*
# Wed, 30 Sep 2020 22:59:25 GMT
ENV GROOVY_VERSION=3.0.6
# Wed, 30 Sep 2020 22:59:30 GMT
RUN set -o errexit -o nounset && echo "Downloading Groovy" && wget --no-verbose --output-document=groovy.zip "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip" && echo "Importing keys listed in http://www.apache.org/dist/groovy/KEYS from key server" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --no-tty --keyserver ha.pool.sks-keyservers.net --recv-keys 7FAA0F2206DE228F0DB01AD741321490758AAD6F 331224E1D7BE883D16E8A685825C06C827AF6B66 34441E504A937F43EB0DAEF96A65176A0FB1CD0B 9A810E3B766E089FFB27C70F11B595CEDC4AEBB5 81CABC23EECA0790E8989B361FF96E10F0E13706 && echo "Checking download signature" && wget --no-verbose --output-document=groovy.zip.asc "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip.asc" && gpg --batch --no-tty --verify groovy.zip.asc groovy.zip && rm --recursive --force "${GNUPGHOME}" && rm groovy.zip.asc && echo "Installing Groovy" && unzip groovy.zip && rm groovy.zip && mv "groovy-${GROOVY_VERSION}" "${GROOVY_HOME}/" && ln --symbolic "${GROOVY_HOME}/bin/grape" /usr/bin/grape && ln --symbolic "${GROOVY_HOME}/bin/groovy" /usr/bin/groovy && ln --symbolic "${GROOVY_HOME}/bin/groovyc" /usr/bin/groovyc && ln --symbolic "${GROOVY_HOME}/bin/groovyConsole" /usr/bin/groovyConsole && ln --symbolic "${GROOVY_HOME}/bin/groovydoc" /usr/bin/groovydoc && ln --symbolic "${GROOVY_HOME}/bin/groovysh" /usr/bin/groovysh && ln --symbolic "${GROOVY_HOME}/bin/java2groovy" /usr/bin/java2groovy && echo "Editing startGroovy to include java.xml.bind module" && sed --in-place 's|startGroovy ( ) {|startGroovy ( ) {\n JAVA_OPTS="$JAVA_OPTS --add-modules=ALL-SYSTEM"|' "${GROOVY_HOME}/bin/startGroovy"
# Wed, 30 Sep 2020 22:59:30 GMT
USER groovy
# Wed, 30 Sep 2020 22:59:33 GMT
RUN set -o errexit -o nounset && echo "Testing Groovy installation" && groovy --version
```

- Layers:
  - `sha256:171857c49d0f5e2ebf623e6cb36a8bcad585ed0c2aa99c87a055df034c1e5848`  
    Last Modified: Tue, 22 Sep 2020 12:21:27 GMT  
    Size: 26.7 MB (26701612 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:419640447d267f068d2f84a093cb13a56ce77e130877f5b8bdb4294f4a90a84f`  
    Last Modified: Fri, 25 Sep 2020 22:36:49 GMT  
    Size: 852.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:61e52f862619ab016d3bcfbd78e5c7aaaa1989b4c295e6dbcacddd2d7b93e1f5`  
    Last Modified: Fri, 25 Sep 2020 22:36:49 GMT  
    Size: 162.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:dd4d4e9526b1adecc10515315b09e75d88526e75fba0552b3fbb933f40d293e9`  
    Last Modified: Fri, 25 Sep 2020 23:16:31 GMT  
    Size: 13.9 MB (13875646 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:27fc1ae9a8123ef615b5a4cfe87484bfac83d0ec89c22fed22b5f81fda1c6bb1`  
    Last Modified: Fri, 25 Sep 2020 23:17:29 GMT  
    Size: 194.3 MB (194266367 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:b3bdc12e68610f396594592ed4e0bcbabbe9cf77b51fd3dd0ed4512384b8f6bb`  
    Last Modified: Wed, 30 Sep 2020 23:04:02 GMT  
    Size: 4.5 KB (4516 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:436773f10796f8dd1490f42f5d4721e05881db423a4817d181780371dd4bbdca`  
    Last Modified: Wed, 30 Sep 2020 23:04:03 GMT  
    Size: 3.5 MB (3505337 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:f437b365a6ed961acfb6a95d9a69828839bd3a91c64ced06a5db42496cc47179`  
    Last Modified: Wed, 30 Sep 2020 23:04:04 GMT  
    Size: 43.6 MB (43553355 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:72ec696c36bd77835a5742d9427539b8be580ab1686e4bd11542772a946a6e19`  
    Last Modified: Wed, 30 Sep 2020 23:04:01 GMT  
    Size: 140.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip

### `groovy:3.0-jdk11` - linux; arm variant v7

```console
$ docker pull groovy@sha256:f2d6f34a0b71a86c62781b1e12a742fe7d5958a6f7a3e78283a752d42a7324f6
```

- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **265.0 MB (264975440 bytes)** (compressed transfer size, not on-disk size)
- Image ID: `sha256:ed6adba614c7771e668c47d3ba28196236ce02b5658d9a3578d3b9259a14bfbc`
- Default Command: `["groovysh"]`

```dockerfile
# Fri, 25 Sep 2020 23:04:32 GMT
ADD file:0ddc5fefae097a5be4c925fdfc09e4a637b74627a8981f0e6fb9890580adc875 in /
# Fri, 25 Sep 2020 23:04:35 GMT
RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
# Fri, 25 Sep 2020 23:04:37 GMT
RUN [ -z "$(apt-get indextargets)" ]
# Fri, 25 Sep 2020 23:04:39 GMT
RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
# Fri, 25 Sep 2020 23:04:40 GMT
CMD ["/bin/bash"]
# Fri, 25 Sep 2020 23:23:09 GMT
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Fri, 25 Sep 2020 23:23:39 GMT
RUN apt-get update && apt-get install -y --no-install-recommends tzdata curl ca-certificates fontconfig locales && echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen && locale-gen en_US.UTF-8 && rm -rf /var/lib/apt/lists/*
# Fri, 25 Sep 2020 23:24:19 GMT
ENV JAVA_VERSION=jdk-11.0.8+10
# Fri, 25 Sep 2020 23:24:36 GMT
RUN set -eux; ARCH="$(dpkg --print-architecture)"; case "${ARCH}" in aarch64|arm64) ESUM='fb27ea52ed901c14c9fe8ad2fc10b338b8cf47d6762571be1fe3fb7c426bab7c'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.8_10.tar.gz'; ;; armhf|armv7l) ESUM='d00370967e4657e137cc511e81d6accbfdb08dba91e6268abef8219e735fbfc5'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_arm_linux_hotspot_11.0.8_10.tar.gz'; ;; ppc64el|ppc64le) ESUM='d206a63cd719b65717f7f20ee3fe49f0b8b2db922986b4811c828db57212699e'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.8_10.tar.gz'; ;; s390x) ESUM='5619e1437c7cd400169eb7f1c831c2635fdb2776a401147a2fc1841b01f83ed6'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_s390x_linux_hotspot_11.0.8_10.tar.gz'; ;; amd64|x86_64) ESUM='6e4cead158037cb7747ca47416474d4f408c9126be5b96f9befd532e0a762b47'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_x64_linux_hotspot_11.0.8_10.tar.gz'; ;; *) echo "Unsupported arch: ${ARCH}"; exit 1; ;; esac; curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; mkdir -p /opt/java/openjdk; cd /opt/java/openjdk; tar -xf /tmp/openjdk.tar.gz --strip-components=1; rm -rf /tmp/openjdk.tar.gz;
# Fri, 25 Sep 2020 23:24:37 GMT
ENV JAVA_HOME=/opt/java/openjdk PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Fri, 25 Sep 2020 23:24:38 GMT
CMD ["jshell"]
# Wed, 30 Sep 2020 23:21:50 GMT
CMD ["groovysh"]
# Wed, 30 Sep 2020 23:21:53 GMT
ENV GROOVY_HOME=/opt/groovy
# Wed, 30 Sep 2020 23:21:57 GMT
RUN set -o errexit -o nounset && echo "Adding groovy user and group" && groupadd --system --gid 1000 groovy && useradd --system --gid groovy --uid 1000 --shell /bin/bash --create-home groovy && mkdir --parents /home/groovy/.groovy/grapes && chown --recursive groovy:groovy /home/groovy && chmod --recursive 1777 /home/groovy && echo "Symlinking root .groovy to groovy .groovy" && ln --symbolic /home/groovy/.groovy /root/.groovy
# Wed, 30 Sep 2020 23:21:57 GMT
VOLUME [/home/groovy/.groovy/grapes]
# Wed, 30 Sep 2020 23:21:58 GMT
WORKDIR /home/groovy
# Wed, 30 Sep 2020 23:22:15 GMT
RUN apt-get update && echo "Installing build dependencies" && apt-get install --yes --no-install-recommends dirmngr fontconfig gnupg unzip wget && rm --recursive --force /var/lib/apt/lists/*
# Wed, 30 Sep 2020 23:22:16 GMT
ENV GROOVY_VERSION=3.0.6
# Wed, 30 Sep 2020 23:22:33 GMT
RUN set -o errexit -o nounset && echo "Downloading Groovy" && wget --no-verbose --output-document=groovy.zip "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip" && echo "Importing keys listed in http://www.apache.org/dist/groovy/KEYS from key server" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --no-tty --keyserver ha.pool.sks-keyservers.net --recv-keys 7FAA0F2206DE228F0DB01AD741321490758AAD6F 331224E1D7BE883D16E8A685825C06C827AF6B66 34441E504A937F43EB0DAEF96A65176A0FB1CD0B 9A810E3B766E089FFB27C70F11B595CEDC4AEBB5 81CABC23EECA0790E8989B361FF96E10F0E13706 && echo "Checking download signature" && wget --no-verbose --output-document=groovy.zip.asc "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip.asc" && gpg --batch --no-tty --verify groovy.zip.asc groovy.zip && rm --recursive --force "${GNUPGHOME}" && rm groovy.zip.asc && echo "Installing Groovy" && unzip groovy.zip && rm groovy.zip && mv "groovy-${GROOVY_VERSION}" "${GROOVY_HOME}/" && ln --symbolic "${GROOVY_HOME}/bin/grape" /usr/bin/grape && ln --symbolic "${GROOVY_HOME}/bin/groovy" /usr/bin/groovy && ln --symbolic "${GROOVY_HOME}/bin/groovyc" /usr/bin/groovyc && ln --symbolic "${GROOVY_HOME}/bin/groovyConsole" /usr/bin/groovyConsole && ln --symbolic "${GROOVY_HOME}/bin/groovydoc" /usr/bin/groovydoc && ln --symbolic "${GROOVY_HOME}/bin/groovysh" /usr/bin/groovysh && ln --symbolic "${GROOVY_HOME}/bin/java2groovy" /usr/bin/java2groovy && echo "Editing startGroovy to include java.xml.bind module" && sed --in-place 's|startGroovy ( ) {|startGroovy ( ) {\n JAVA_OPTS="$JAVA_OPTS --add-modules=ALL-SYSTEM"|' "${GROOVY_HOME}/bin/startGroovy"
# Wed, 30 Sep 2020 23:22:34 GMT
USER groovy
# Wed, 30 Sep 2020 23:22:44 GMT
RUN set -o errexit -o nounset && echo "Testing Groovy installation" && groovy --version
```

- Layers:
  - `sha256:20e126218ac644f56ef7147c3363108a0814d921e6016af54a1b4c964159f1a9`  
    Last Modified: Fri, 25 Sep 2020 23:06:39 GMT  
    Size: 22.3 MB (22279517 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:3d156c2b31482935ec0363b6e3f1cb6fc56da57e61fc80078914918fe53f8fa5`  
    Last Modified: Fri, 25 Sep 2020 23:06:34 GMT  
    Size: 852.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:93c1a0dbe2162972438aa89d4f90dca5db0e4cee58819ba354ea1c0031101b7a`  
    Last Modified: Fri, 25 Sep 2020 23:06:34 GMT  
    Size: 186.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:b333d1b8f5fe5303d427301eecab31e2fb825ddfe9ed1f96b70bc80a74f9fa44`  
    Last Modified: Fri, 25 Sep 2020 23:27:19 GMT  
    Size: 12.8 MB (12817173 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:3af20bb12eac9ea127a2d1a832f21d97cc8749bedaf11e3e1901308b43543d75`  
    Last Modified: Fri, 25 Sep 2020 23:28:35 GMT  
    Size: 183.3 MB (183306957 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:cdd6bf48d67eb394e98aae2a6025010a95b9ed9c6fe3a379723582aa15fc717c`  
    Last Modified: Wed, 30 Sep 2020 23:29:54 GMT  
    Size: 4.5 KB (4538 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:f7c55c559fe053f64a045c122dbfb355e36e8a5619d47cc7553a2c072cc73922`  
    Last Modified: Wed, 30 Sep 2020 23:29:55 GMT  
    Size: 3.0 MB (3012634 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:293eb1b2a349a5b19301a98aa1f4fcacc5b740a5873d8536e115a8383886b1b8`  
    Last Modified: Wed, 30 Sep 2020 23:30:00 GMT  
    Size: 43.6 MB (43553411 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:b7bd80be3306fe149d71786833ecd9ffa41d14f1b126859531b52b0dba48544a`  
    Last Modified: Wed, 30 Sep 2020 23:29:54 GMT  
    Size: 172.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip

### `groovy:3.0-jdk11` - linux; arm64 variant v8

```console
$ docker pull groovy@sha256:11c762da0740a51b7cd7e9cccb08d4880cdcbc9f1d687d414dc8837b6193d7d2
```

- Docker Version: 18.09.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **274.9 MB (274882958 bytes)** (compressed transfer size, not on-disk size)
- Image ID: `sha256:a587af86e8649f55cdef74c4eaad3faca11df40958fa2c50db74e657b86da895`
- Default Command: `["groovysh"]`

```dockerfile
# Fri, 25 Sep 2020 22:47:32 GMT
ADD file:d2c57bfbb29f6de3b29050a2c50c3806e0c8caa26f6d8dea47f479c923d72e3e in /
# Fri, 25 Sep 2020 22:47:35 GMT
RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
# Fri, 25 Sep 2020 22:47:36 GMT
RUN [ -z "$(apt-get indextargets)" ]
# Fri, 25 Sep 2020 22:47:38 GMT
RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
# Fri, 25 Sep 2020 22:47:39 GMT
CMD ["/bin/bash"]
# Fri, 25 Sep 2020 23:06:10 GMT
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Fri, 25 Sep 2020 23:06:39 GMT
RUN apt-get update && apt-get install -y --no-install-recommends tzdata curl ca-certificates fontconfig locales && echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen && locale-gen en_US.UTF-8 && rm -rf /var/lib/apt/lists/*
# Fri, 25 Sep 2020 23:07:11 GMT
ENV JAVA_VERSION=jdk-11.0.8+10
# Fri, 25 Sep 2020 23:07:24 GMT
RUN set -eux; ARCH="$(dpkg --print-architecture)"; case "${ARCH}" in aarch64|arm64) ESUM='fb27ea52ed901c14c9fe8ad2fc10b338b8cf47d6762571be1fe3fb7c426bab7c'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.8_10.tar.gz'; ;; armhf|armv7l) ESUM='d00370967e4657e137cc511e81d6accbfdb08dba91e6268abef8219e735fbfc5'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_arm_linux_hotspot_11.0.8_10.tar.gz'; ;; ppc64el|ppc64le) ESUM='d206a63cd719b65717f7f20ee3fe49f0b8b2db922986b4811c828db57212699e'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.8_10.tar.gz'; ;; s390x) ESUM='5619e1437c7cd400169eb7f1c831c2635fdb2776a401147a2fc1841b01f83ed6'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_s390x_linux_hotspot_11.0.8_10.tar.gz'; ;; amd64|x86_64) ESUM='6e4cead158037cb7747ca47416474d4f408c9126be5b96f9befd532e0a762b47'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_x64_linux_hotspot_11.0.8_10.tar.gz'; ;; *) echo "Unsupported arch: ${ARCH}"; exit 1; ;; esac; curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; mkdir -p /opt/java/openjdk; cd /opt/java/openjdk; tar -xf /tmp/openjdk.tar.gz --strip-components=1; rm -rf /tmp/openjdk.tar.gz;
# Fri, 25 Sep 2020 23:07:26 GMT
ENV JAVA_HOME=/opt/java/openjdk PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Fri, 25 Sep 2020 23:07:26 GMT
CMD ["jshell"]
# Wed, 30 Sep 2020 22:59:11 GMT
CMD ["groovysh"]
# Wed, 30 Sep 2020 22:59:11 GMT
ENV GROOVY_HOME=/opt/groovy
# Wed, 30 Sep 2020 22:59:13 GMT
RUN set -o errexit -o nounset && echo "Adding groovy user and group" && groupadd --system --gid 1000 groovy && useradd --system --gid groovy --uid 1000 --shell /bin/bash --create-home groovy && mkdir --parents /home/groovy/.groovy/grapes && chown --recursive groovy:groovy /home/groovy && chmod --recursive 1777 /home/groovy && echo "Symlinking root .groovy to groovy .groovy" && ln --symbolic /home/groovy/.groovy /root/.groovy
# Wed, 30 Sep 2020 22:59:14 GMT
VOLUME [/home/groovy/.groovy/grapes]
# Wed, 30 Sep 2020 22:59:15 GMT
WORKDIR /home/groovy
# Wed, 30 Sep 2020 22:59:28 GMT
RUN apt-get update && echo "Installing build dependencies" && apt-get install --yes --no-install-recommends dirmngr fontconfig gnupg unzip wget && rm --recursive --force /var/lib/apt/lists/*
# Wed, 30 Sep 2020 22:59:29 GMT
ENV GROOVY_VERSION=3.0.6
# Wed, 30 Sep 2020 22:59:36 GMT
RUN set -o errexit -o nounset && echo "Downloading Groovy" && wget --no-verbose --output-document=groovy.zip "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip" && echo "Importing keys listed in http://www.apache.org/dist/groovy/KEYS from key server" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --no-tty --keyserver ha.pool.sks-keyservers.net --recv-keys 7FAA0F2206DE228F0DB01AD741321490758AAD6F 331224E1D7BE883D16E8A685825C06C827AF6B66 34441E504A937F43EB0DAEF96A65176A0FB1CD0B 9A810E3B766E089FFB27C70F11B595CEDC4AEBB5 81CABC23EECA0790E8989B361FF96E10F0E13706 && echo "Checking download signature" && wget --no-verbose --output-document=groovy.zip.asc "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip.asc" && gpg --batch --no-tty --verify groovy.zip.asc groovy.zip && rm --recursive --force "${GNUPGHOME}" && rm groovy.zip.asc && echo "Installing Groovy" && unzip groovy.zip && rm groovy.zip && mv "groovy-${GROOVY_VERSION}" "${GROOVY_HOME}/" && ln --symbolic "${GROOVY_HOME}/bin/grape" /usr/bin/grape && ln --symbolic "${GROOVY_HOME}/bin/groovy" /usr/bin/groovy && ln --symbolic "${GROOVY_HOME}/bin/groovyc" /usr/bin/groovyc && ln --symbolic "${GROOVY_HOME}/bin/groovyConsole" /usr/bin/groovyConsole && ln --symbolic "${GROOVY_HOME}/bin/groovydoc" /usr/bin/groovydoc && ln --symbolic "${GROOVY_HOME}/bin/groovysh" /usr/bin/groovysh && ln --symbolic "${GROOVY_HOME}/bin/java2groovy" /usr/bin/java2groovy && echo "Editing startGroovy to include java.xml.bind module" && sed --in-place 's|startGroovy ( ) {|startGroovy ( ) {\n JAVA_OPTS="$JAVA_OPTS --add-modules=ALL-SYSTEM"|' "${GROOVY_HOME}/bin/startGroovy"
# Wed, 30 Sep 2020 22:59:37 GMT
USER groovy
# Wed, 30 Sep 2020 22:59:42 GMT
RUN set -o errexit -o nounset && echo "Testing Groovy installation" && groovy --version
```

- Layers:
  - `sha256:296c9ad75beec603486f1373addae8e2c509e94c4adda44c1289792c91624acc`  
    Last Modified: Tue, 22 Sep 2020 00:25:11 GMT  
    Size: 23.7 MB (23722853 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:c0533d1393025aa8c38e0823a6546a0d4e5dec6b8b670758c25494c35783668d`  
    Last Modified: Fri, 25 Sep 2020 22:49:19 GMT  
    Size: 850.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:3c11bb34abc87247c6a70c928b7a5ef4cd48093642eb0c4b8121a674d3e278c6`  
    Last Modified: Fri, 25 Sep 2020 22:49:19 GMT  
    Size: 189.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:a2ba57826dc24dd7e679051972f0a6da7be0dff04d2354108e340a12813e9ff9`  
    Last Modified: Fri, 25 Sep 2020 23:09:59 GMT  
    Size: 13.3 MB (13284869 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:796e56063abbb96dad9e149b16ad5a8f85c2487f1e7abdb48a4c791aa92767cb`  
    Last Modified: Fri, 25 Sep 2020 23:11:08 GMT  
    Size: 191.1 MB (191081283 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:6832b79d703d900947bf143591d9737ea59ffcd480f0b4995f6708c8cdb21f75`  
    Last Modified: Wed, 30 Sep 2020 23:06:15 GMT  
    Size: 4.6 KB (4559 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:1033d734da4c1e8f5012b5164b350787386327a65e05029216e59dca46620c3b`  
    Last Modified: Wed, 30 Sep 2020 23:06:17 GMT  
    Size: 3.2 MB (3234776 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:7391ba6f50c01c1fd0e6d417a9355f3fc5fcb5fc0c77aa1d4c1ab8882a40dd24`  
    Last Modified: Wed, 30 Sep 2020 23:06:20 GMT  
    Size: 43.6 MB (43553408 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:0c6fa605a0f0240b508c6b0dfd0885165b058ad86c0618f8841f555203046b40`  
    Last Modified: Wed, 30 Sep 2020 23:06:17 GMT  
    Size: 171.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip

### `groovy:3.0-jdk11` - linux; ppc64le

```console
$ docker pull groovy@sha256:4aa6099aa9166d0d7fc5744f30e704d6abff23be0d9f46bf2c65fc0f9f7ed028
```

- Docker Version: 18.09.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **270.0 MB (270034018 bytes)** (compressed transfer size, not on-disk size)
- Image ID: `sha256:7704650f3fcd42368e49fb6a38c2455fa9e7f752ad9a79c4ea10dcc60075e5c1`
- Default Command: `["groovysh"]`

```dockerfile
# Fri, 25 Sep 2020 23:47:27 GMT
ADD file:0275f43eb5902c3fb3fe4f7e8dbd20c02f6be138627bbc6770bb74283f9e35fa in /
# Fri, 25 Sep 2020 23:47:54 GMT
RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
# Fri, 25 Sep 2020 23:48:12 GMT
RUN [ -z "$(apt-get indextargets)" ]
# Fri, 25 Sep 2020 23:48:29 GMT
RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
# Fri, 25 Sep 2020 23:48:35 GMT
CMD ["/bin/bash"]
# Sat, 26 Sep 2020 03:37:39 GMT
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Sat, 26 Sep 2020 03:39:11 GMT
RUN apt-get update && apt-get install -y --no-install-recommends tzdata curl ca-certificates fontconfig locales && echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen && locale-gen en_US.UTF-8 && rm -rf /var/lib/apt/lists/*
# Sat, 26 Sep 2020 03:40:37 GMT
ENV JAVA_VERSION=jdk-11.0.8+10
# Sat, 26 Sep 2020 03:41:07 GMT
RUN set -eux; ARCH="$(dpkg --print-architecture)"; case "${ARCH}" in aarch64|arm64) ESUM='fb27ea52ed901c14c9fe8ad2fc10b338b8cf47d6762571be1fe3fb7c426bab7c'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.8_10.tar.gz'; ;; armhf|armv7l) ESUM='d00370967e4657e137cc511e81d6accbfdb08dba91e6268abef8219e735fbfc5'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_arm_linux_hotspot_11.0.8_10.tar.gz'; ;; ppc64el|ppc64le) ESUM='d206a63cd719b65717f7f20ee3fe49f0b8b2db922986b4811c828db57212699e'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.8_10.tar.gz'; ;; s390x) ESUM='5619e1437c7cd400169eb7f1c831c2635fdb2776a401147a2fc1841b01f83ed6'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_s390x_linux_hotspot_11.0.8_10.tar.gz'; ;; amd64|x86_64) ESUM='6e4cead158037cb7747ca47416474d4f408c9126be5b96f9befd532e0a762b47'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_x64_linux_hotspot_11.0.8_10.tar.gz'; ;; *) echo "Unsupported arch: ${ARCH}"; exit 1; ;; esac; curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; mkdir -p /opt/java/openjdk; cd /opt/java/openjdk; tar -xf /tmp/openjdk.tar.gz --strip-components=1; rm -rf /tmp/openjdk.tar.gz;
# Sat, 26 Sep 2020 03:41:14 GMT
ENV JAVA_HOME=/opt/java/openjdk PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Sat, 26 Sep 2020 03:41:17 GMT
CMD ["jshell"]
# Wed, 30 Sep 2020 23:33:49 GMT
CMD ["groovysh"]
# Wed, 30 Sep 2020 23:33:54 GMT
ENV GROOVY_HOME=/opt/groovy
# Wed, 30 Sep 2020 23:34:14 GMT
RUN set -o errexit -o nounset && echo "Adding groovy user and group" && groupadd --system --gid 1000 groovy && useradd --system --gid groovy --uid 1000 --shell /bin/bash --create-home groovy && mkdir --parents /home/groovy/.groovy/grapes && chown --recursive groovy:groovy /home/groovy && chmod --recursive 1777 /home/groovy && echo "Symlinking root .groovy to groovy .groovy" && ln --symbolic /home/groovy/.groovy /root/.groovy
# Wed, 30 Sep 2020 23:34:22 GMT
VOLUME [/home/groovy/.groovy/grapes]
# Wed, 30 Sep 2020 23:34:30 GMT
WORKDIR /home/groovy
# Wed, 30 Sep 2020 23:35:47 GMT
RUN apt-get update && echo "Installing build dependencies" && apt-get install --yes --no-install-recommends dirmngr fontconfig gnupg unzip wget && rm --recursive --force /var/lib/apt/lists/*
# Wed, 30 Sep 2020 23:35:54 GMT
ENV GROOVY_VERSION=3.0.6
# Wed, 30 Sep 2020 23:36:17 GMT
RUN set -o errexit -o nounset && echo "Downloading Groovy" && wget --no-verbose --output-document=groovy.zip "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip" && echo "Importing keys listed in http://www.apache.org/dist/groovy/KEYS from key server" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --no-tty --keyserver ha.pool.sks-keyservers.net --recv-keys 7FAA0F2206DE228F0DB01AD741321490758AAD6F 331224E1D7BE883D16E8A685825C06C827AF6B66 34441E504A937F43EB0DAEF96A65176A0FB1CD0B 9A810E3B766E089FFB27C70F11B595CEDC4AEBB5 81CABC23EECA0790E8989B361FF96E10F0E13706 && echo "Checking download signature" && wget --no-verbose --output-document=groovy.zip.asc "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip.asc" && gpg --batch --no-tty --verify groovy.zip.asc groovy.zip && rm --recursive --force "${GNUPGHOME}" && rm groovy.zip.asc && echo "Installing Groovy" && unzip groovy.zip && rm groovy.zip && mv "groovy-${GROOVY_VERSION}" "${GROOVY_HOME}/" && ln --symbolic "${GROOVY_HOME}/bin/grape" /usr/bin/grape && ln --symbolic "${GROOVY_HOME}/bin/groovy" /usr/bin/groovy && ln --symbolic "${GROOVY_HOME}/bin/groovyc" /usr/bin/groovyc && ln --symbolic "${GROOVY_HOME}/bin/groovyConsole" /usr/bin/groovyConsole && ln --symbolic "${GROOVY_HOME}/bin/groovydoc" /usr/bin/groovydoc && ln --symbolic "${GROOVY_HOME}/bin/groovysh" /usr/bin/groovysh && ln --symbolic "${GROOVY_HOME}/bin/java2groovy" /usr/bin/java2groovy && echo "Editing startGroovy to include java.xml.bind module" && sed --in-place 's|startGroovy ( ) {|startGroovy ( ) {\n JAVA_OPTS="$JAVA_OPTS --add-modules=ALL-SYSTEM"|' "${GROOVY_HOME}/bin/startGroovy"
# Wed, 30 Sep 2020 23:36:28 GMT
USER groovy
# Wed, 30 Sep 2020 23:36:50 GMT
RUN set -o errexit -o nounset && echo "Testing Groovy installation" && groovy --version
```

- Layers:
  - `sha256:597e66b6a06b9db7e6c7b74196c96587c89c928a0f1bea6a5c816ed0364acca2`  
    Last Modified: Sat, 26 Sep 2020 00:05:59 GMT  
    Size: 30.4 MB (30408489 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:06fe1993d655960561e2b7d98a72bf4167cb6bb3a934b1095c2bd170bce1b0d0`  
    Last Modified: Sat, 26 Sep 2020 00:05:07 GMT  
    Size: 855.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:5a85181f68a0b81866e1ec4d1fc2f161d8d57137447d2ff1d6d61bcac1974773`  
    Last Modified: Sat, 26 Sep 2020 00:05:06 GMT  
    Size: 189.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:30f274fd8d44c41b563fd8be80792424906ae7f32bacbb53d3fc872271889baf`  
    Last Modified: Sat, 26 Sep 2020 03:54:38 GMT  
    Size: 14.5 MB (14518670 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:e811bfcd61713facb3b38a41ef76b536740fc2dbb64e2544457bfbc1bb73a9fa`  
    Last Modified: Sat, 26 Sep 2020 03:56:31 GMT  
    Size: 177.3 MB (177275863 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:6339e29221ba7c97c78ad529628504621fa7118b7f78121a37b9841d29f41fb8`  
    Last Modified: Wed, 30 Sep 2020 23:55:48 GMT  
    Size: 4.6 KB (4563 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:8a98db591b68e1daecbe9c1e8bda6cc7735f23dd997adaf6971d7387f59bfe28`  
    Last Modified: Wed, 30 Sep 2020 23:55:54 GMT  
    Size: 4.3 MB (4271811 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:372d437ada7213f29fd0dff267fd857235674bdccacc00f60213724de0215fbc`  
    Last Modified: Wed, 30 Sep 2020 23:56:14 GMT  
    Size: 43.6 MB (43553406 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:65c56bad0bc045d8fd900978a6c85ca35ee115716c719fe0a22a0de4d77fe7bc`  
    Last Modified: Wed, 30 Sep 2020 23:55:47 GMT  
    Size: 172.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip

### `groovy:3.0-jdk11` - linux; s390x

```console
$ docker pull groovy@sha256:90621edeacfae154f779d9ff317ad13e1f08bdaac4918c8097898523b55697c3
```

- Docker Version: 18.09.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **255.6 MB (255559753 bytes)** (compressed transfer size, not on-disk size)
- Image ID: `sha256:6c011c62704cf334962d728842a868886dc4c3da7b0f78c015caa9454f307ded`
- Default Command: `["groovysh"]`

```dockerfile
# Fri, 25 Sep 2020 22:44:45 GMT
ADD file:0a8ec4fb62616b6605197e20e0a7b511dc5b03d4af0e04c929dfd9fb457d2065 in /
# Fri, 25 Sep 2020 22:44:47 GMT
RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
# Fri, 25 Sep 2020 22:44:48 GMT
RUN [ -z "$(apt-get indextargets)" ]
# Fri, 25 Sep 2020 22:44:48 GMT
RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
# Fri, 25 Sep 2020 22:44:48 GMT
CMD ["/bin/bash"]
# Fri, 25 Sep 2020 23:07:22 GMT
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Fri, 25 Sep 2020 23:07:34 GMT
RUN apt-get update && apt-get install -y --no-install-recommends tzdata curl ca-certificates fontconfig locales && echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen && locale-gen en_US.UTF-8 && rm -rf /var/lib/apt/lists/*
# Fri, 25 Sep 2020 23:08:03 GMT
ENV JAVA_VERSION=jdk-11.0.8+10
# Fri, 25 Sep 2020 23:08:12 GMT
RUN set -eux; ARCH="$(dpkg --print-architecture)"; case "${ARCH}" in aarch64|arm64) ESUM='fb27ea52ed901c14c9fe8ad2fc10b338b8cf47d6762571be1fe3fb7c426bab7c'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.8_10.tar.gz'; ;; armhf|armv7l) ESUM='d00370967e4657e137cc511e81d6accbfdb08dba91e6268abef8219e735fbfc5'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_arm_linux_hotspot_11.0.8_10.tar.gz'; ;; ppc64el|ppc64le) ESUM='d206a63cd719b65717f7f20ee3fe49f0b8b2db922986b4811c828db57212699e'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.8_10.tar.gz'; ;; s390x) ESUM='5619e1437c7cd400169eb7f1c831c2635fdb2776a401147a2fc1841b01f83ed6'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_s390x_linux_hotspot_11.0.8_10.tar.gz'; ;; amd64|x86_64) ESUM='6e4cead158037cb7747ca47416474d4f408c9126be5b96f9befd532e0a762b47'; BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.8%2B10/OpenJDK11U-jdk_x64_linux_hotspot_11.0.8_10.tar.gz'; ;; *) echo "Unsupported arch: ${ARCH}"; exit 1; ;; esac; curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; mkdir -p /opt/java/openjdk; cd /opt/java/openjdk; tar -xf /tmp/openjdk.tar.gz --strip-components=1; rm -rf /tmp/openjdk.tar.gz;
# Fri, 25 Sep 2020 23:08:16 GMT
ENV JAVA_HOME=/opt/java/openjdk PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Fri, 25 Sep 2020 23:08:16 GMT
CMD ["jshell"]
# Wed, 30 Sep 2020 22:58:57 GMT
CMD ["groovysh"]
# Wed, 30 Sep 2020 22:58:58 GMT
ENV GROOVY_HOME=/opt/groovy
# Wed, 30 Sep 2020 22:58:58 GMT
RUN set -o errexit -o nounset && echo "Adding groovy user and group" && groupadd --system --gid 1000 groovy && useradd --system --gid groovy --uid 1000 --shell /bin/bash --create-home groovy && mkdir --parents /home/groovy/.groovy/grapes && chown --recursive groovy:groovy /home/groovy && chmod --recursive 1777 /home/groovy && echo "Symlinking root .groovy to groovy .groovy" && ln --symbolic /home/groovy/.groovy /root/.groovy
# Wed, 30 Sep 2020 22:58:59 GMT
VOLUME [/home/groovy/.groovy/grapes]
# Wed, 30 Sep 2020 22:58:59 GMT
WORKDIR /home/groovy
# Wed, 30 Sep 2020 22:59:05 GMT
RUN apt-get update && echo "Installing build dependencies" && apt-get install --yes --no-install-recommends dirmngr fontconfig gnupg unzip wget && rm --recursive --force /var/lib/apt/lists/*
# Wed, 30 Sep 2020 22:59:06 GMT
ENV GROOVY_VERSION=3.0.6
# Wed, 30 Sep 2020 22:59:14 GMT
RUN set -o errexit -o nounset && echo "Downloading Groovy" && wget --no-verbose --output-document=groovy.zip "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip" && echo "Importing keys listed in http://www.apache.org/dist/groovy/KEYS from key server" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --no-tty --keyserver ha.pool.sks-keyservers.net --recv-keys 7FAA0F2206DE228F0DB01AD741321490758AAD6F 331224E1D7BE883D16E8A685825C06C827AF6B66 34441E504A937F43EB0DAEF96A65176A0FB1CD0B 9A810E3B766E089FFB27C70F11B595CEDC4AEBB5 81CABC23EECA0790E8989B361FF96E10F0E13706 && echo "Checking download signature" && wget --no-verbose --output-document=groovy.zip.asc "https://archive.apache.org/dist/groovy/${GROOVY_VERSION}/distribution/apache-groovy-binary-${GROOVY_VERSION}.zip.asc" && gpg --batch --no-tty --verify groovy.zip.asc groovy.zip && rm --recursive --force "${GNUPGHOME}" && rm groovy.zip.asc && echo "Installing Groovy" && unzip groovy.zip && rm groovy.zip && mv "groovy-${GROOVY_VERSION}" "${GROOVY_HOME}/" && ln --symbolic "${GROOVY_HOME}/bin/grape" /usr/bin/grape && ln --symbolic "${GROOVY_HOME}/bin/groovy" /usr/bin/groovy && ln --symbolic "${GROOVY_HOME}/bin/groovyc" /usr/bin/groovyc && ln --symbolic "${GROOVY_HOME}/bin/groovyConsole" /usr/bin/groovyConsole && ln --symbolic "${GROOVY_HOME}/bin/groovydoc" /usr/bin/groovydoc && ln --symbolic "${GROOVY_HOME}/bin/groovysh" /usr/bin/groovysh && ln --symbolic "${GROOVY_HOME}/bin/java2groovy" /usr/bin/java2groovy && echo "Editing startGroovy to include java.xml.bind module" && sed --in-place 's|startGroovy ( ) {|startGroovy ( ) {\n JAVA_OPTS="$JAVA_OPTS --add-modules=ALL-SYSTEM"|' "${GROOVY_HOME}/bin/startGroovy"
# Wed, 30 Sep 2020 22:59:14 GMT
USER groovy
# Wed, 30 Sep 2020 22:59:17 GMT
RUN set -o errexit -o nounset && echo "Testing Groovy installation" && groovy --version
```

- Layers:
  - `sha256:dd2de95b9a1c45e92bcc791d469201229b58d68187c99de6b08b00a13fa33393`  
    Last Modified: Fri, 25 Sep 2020 22:45:52 GMT  
    Size: 25.4 MB (25371975 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:c38a48ef4dfab2bd639d381d11b3390b6bf8860b2ef3356e9bb55f7cb8c775f9`  
    Last Modified: Fri, 25 Sep 2020 22:45:51 GMT  
    Size: 850.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:eced51184728468b347a6e3f143c356cd174c4a54be3cc10ec5aeddb402765fd`  
    Last Modified: Fri, 25 Sep 2020 22:45:49 GMT  
    Size: 187.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:62eb688908dd55f188cb3c2b3bb83efc30566ace847ab51ba28adbed5d30266b`  
    Last Modified: Fri, 25 Sep 2020 23:12:47 GMT  
    Size: 13.6 MB (13595725 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:2e78d18ba7d93cd557070a2429d5b62f16f21dec1aab50e85aaaafd4cb3e1480`  
    Last Modified: Fri, 25 Sep 2020 23:13:28 GMT  
    Size: 169.6 MB (169575663 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:1e56a8538ee16cab5346f2559f70105c7c395aaf507dce80c7d22d574ad1d319`  
    Last Modified: Wed, 30 Sep 2020 23:04:09 GMT  
    Size: 4.5 KB (4548 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:b8ebb7d4f36895ddbfa690d52fb4cdb434ccf61b24a271a06083f401a94817eb`  
    Last Modified: Wed, 30 Sep 2020 23:04:10 GMT  
    Size: 3.5 MB (3457277 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:74b107b01c8f46e05e98f461406966f2c1e772b664b0dd1ac68cdc9c69657d9d`  
    Last Modified: Wed, 30 Sep 2020 23:04:12 GMT  
    Size: 43.6 MB (43553357 bytes)  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:89a20bef863d6d2e29c7f7ffe9982d8e0d58ab7449f57cba9d758fa5dde673c4`  
    Last Modified: Wed, 30 Sep 2020 23:04:11 GMT  
    Size: 171.0 B  
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
96.927126
1,892
0.705568
yue_Hant
0.323138
1165f1fede8b6cd37e8258ade951d57f0a3e4ac0
1,185
md
Markdown
README.md
mingchen/node-nocache
36f0c4ac67f9a0b096451911bae6b2dae3021243
[ "MIT" ]
1
2020-07-23T23:09:25.000Z
2020-07-23T23:09:25.000Z
README.md
mingchen/node-nocache
36f0c4ac67f9a0b096451911bae6b2dae3021243
[ "MIT" ]
4
2019-01-16T08:48:19.000Z
2020-05-23T12:18:06.000Z
README.md
mingchen/node-nocache
36f0c4ac67f9a0b096451911bae6b2dae3021243
[ "MIT" ]
null
null
null
node-nocache
============

[![Build Status](https://travis-ci.org/mingchen/node-nocache.svg?branch=master)](https://travis-ci.org/mingchen/node-nocache)
[![Greenkeeper badge](https://badges.greenkeeper.io/mingchen/node-nocache.svg)](https://greenkeeper.io/)

[![NPM](https://nodei.co/npm/node-nocache.png?downloads=true)](https://nodei.co/npm/node-nocache/)


## Introduction

A node `express` middleware which adds no-cache headers to every express response to disable caching. It is useful for REST API responses: adding no-cache headers prevents browsers from caching the responses to their requests.

The following headers are added to the response:

    Cache-Control: no-cache, no-store, must-revalidate
    Expires: -1
    Pragma: no-cache


## Install

    npm install node-nocache


## Usage

    let nocache = require('node-nocache');
    app.use(nocache);

or you can add no-cache headers only to specific requests with `router`:

    router.all('*', require('node-nocache'));

or

    let nocache = require('node-nocache');

    router.get('/api/foo', nocache, function (req, res, next) {
        ...
    });

Check out `test/nocache_test.js` for example usages.


## License

MIT
22.788462
125
0.69789
eng_Latn
0.704719
1165fb1ea500f7be8dce1ced5e57a6677403562f
1,338
md
Markdown
WhyContribute.md
inet-framework/inet-framework.github.io
bd12727e0b56b56961b8b229cd00187403c57f56
[ "CC0-1.0" ]
1
2019-09-16T17:19:14.000Z
2019-09-16T17:19:14.000Z
WhyContribute.md
inet-framework/inet-framework.github.io
bd12727e0b56b56961b8b229cd00187403c57f56
[ "CC0-1.0" ]
1
2016-03-15T13:45:09.000Z
2016-03-15T14:26:05.000Z
WhyContribute.md
inet-framework/inet-framework.github.io
bd12727e0b56b56961b8b229cd00187403c57f56
[ "CC0-1.0" ]
null
null
null
---
layout: page
title: Why Contribute
underMenu: Contributing
---

<p class="lead">You need a good simulation tool, and INET needs your help to improve.</p>

## What's In It For Me?

If you are doing research with INET, contributing your changes back to the project means:

* More visibility for your work
* More testing and validation performed by other users
* Other users may further improve or build upon your model
* Burden of technical maintenance is on the INET core team

If you are a student doing homework or thesis with INET:

* Expert feedback adds to your expertise and the quality of the end result
* Your work will end up being useful for others (instead of landing in the drawer)
* It expands your open source portfolio and demonstrates your expertise
* Satisfaction from contributing to an open-source project

## Fun Project, Fantastic People

You might enjoy contributing, because:

* Simulation and hacking on models can be fun in itself
* The forum is full of helpful people
* You can meet fellow INET people and other OMNeT++ users at the [OMNeT++ Summit][1]{:target="_blank"}! (To get the atmosphere of the event, here's a photo album of the [2014 summit][2]{:target="_blank"}.)

[1]: http://summit.omnetpp.org
[2]: https://www.dropbox.com/sh/8lsmga0xuv53xhl/AAAavLsweyRdrx_XhJbr2LfMa?dl=0
38.228571
207
0.750374
eng_Latn
0.998752
116606db06ecc71528d06d7e8182b5f8a14103f2
5,569
md
Markdown
doc/research/libraries/chartJS.md
hpi-swa-lab/BP2019RH1
e685696bbb6eeb2e5f6799016c77533f7e93a5ce
[ "MIT" ]
3
2020-02-10T15:27:20.000Z
2020-03-02T13:50:09.000Z
doc/research/libraries/chartJS.md
hpi-swa-lab/BP2019RH1
e685696bbb6eeb2e5f6799016c77533f7e93a5ce
[ "MIT" ]
446
2019-10-21T12:24:48.000Z
2020-07-07T11:47:32.000Z
doc/research/libraries/chartJS.md
hpi-swa-lab/BP2019RH1
e685696bbb6eeb2e5f6799016c77533f7e93a5ce
[ "MIT" ]
null
null
null
# ChartJS

## Context

### Information

ChartJs is a lightweight visualization library for diagrams. It allows you to create different types of charts from data sets, such as bar charts, pie, line, donut, scatters, and [many more](https://www.chartjs.org/samples/latest/).

### Who uses it?

- Bosch Software Innovations / Bosch BCI

### Is it maintained?

Yes, released frequently; the last release was March 2019. MIT Licensed.

![](Screenshot%202019-10-22%20at%2017.14.27.png)

### What is produced as a visualisation?

The charts are drawn on a canvas.

### Others

It can be installed via a CDN, as seen below. One does not have to pay for usage.

## Examples

### Simple test plot

```javascript {.chartExample}
import Chart from "https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.8.0/Chart.bundle.js"

var ctx = this.parentElement.querySelector('#demoChart').getContext('2d');
var chart = new Chart(ctx, {
    // The type of chart we want to create
    type: 'line',

    // The data for our dataset
    data: {
        labels: ['January', 'February', 'March', 'April', 'May', 'June', 'July'],
        datasets: [{
            label: 'My First dataset',
            backgroundColor: 'rgb(255, 99, 132)',
            borderColor: 'rgb(255, 99, 132)',
            data: [10, 10, 5, 2, 20, 200, 45]
        }]
    },

    // Configuration options go here
    options: {}
});
```

Which will produce this...

<script>
import boundEval from "src/client/bound-eval.js";
var source = lively.query(this, ".chartExample").textContent
boundEval(source, this).then(r => r.value)
</script>
<canvas id="demoChart"></canvas>

### Plot real data with multiple datasets

```javascript {.chartExampleOwnData}
import Chart from "https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.8.0/Chart.bundle.js";
import { CSVAdapter } from "https://lively-kernel.org/lively4/BP2019RH1/scratch/BubbleChartSource/csvAdapter.js";

var csvAdapter = new CSVAdapter();
var url = "https://lively-kernel.org/lively4/BP2019RH1/scratch/data_births.csv";
var data;

(async () => {
    data = await csvAdapter.fetchData(url);
    var lines = csvAdapter.parseData(';', data);
    var years = lines[0];
    var oneCountry = lines[2];
    var secondCountry = lines[15];

    var ctx = this.parentElement.querySelector('#ownDataChart').getContext('2d');
    var chart = new Chart(ctx, {
        // The type of chart we want to create
        type: 'line',

        // The data for our dataset
        data: {
            labels: years.slice(1, years.length-1),
            datasets: [{
                label: oneCountry[0],
                backgroundColor: 'rgb(255, 255, 255, 00)',
                borderColor: 'rgb(255, 99, 132)',
                data: oneCountry.slice(1, oneCountry.length - 1)
            }, {
                label: secondCountry[0],
                backgroundColor: 'rgb(255, 255, 255, 00)',
                borderColor: '2D3EFF',
                data: secondCountry.slice(1, secondCountry.length - 1)
            }]
        },

        // Configuration options go here
        options: {}
    });
})()
```

<script>
import boundEval from "src/client/bound-eval.js";
var source = lively.query(this, ".chartExampleOwnData").textContent
boundEval(source, this).then(r => r.value)
</script>
<canvas id="ownDataChart"></canvas>

### Plot live data update

```javascript {.chartExampleLiveData}
import Chart from "https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.8.0/Chart.bundle.js";

var ctx = this.parentElement.querySelector('#liveUpdateData').getContext('2d');
var chart = new Chart(ctx, {
    // The type of chart we want to create
    type: 'line',

    // The data for our dataset
    data: {
        labels: [0,1,2,3,4,5,6,7,8,9],
        datasets: [{
            label: "Random Data",
            backgroundColor: 'rgb(255, 255, 255, 00)',
            borderColor: 'rgb(255, 99, 132)',
            data: [0,1,2,3,4,5,6,7,8,9]
        }]
    },

    // Configuration options go here
    options: {}
});

function randomizeDataOnChart(){
    let newData = generateRandomData();
    updateChartWithNewData(chart, newData);
}

function generateRandomData(){
    let randArray = [];
    for(let i = 0; i<10; i++){
        let randomNumber = Math.random() * 10;
        randArray.push(randomNumber);
    }
    return randArray;
}

function updateChartWithNewData(chart, newData){
    chart.data.datasets.forEach((dataset) => {
        dataset.data = newData;
    });
    chart.update();
}

let button = this.parentElement.querySelector('#randomizeButton');
button.addEventListener("click", randomizeDataOnChart);
```

<script>
import boundEval from "src/client/bound-eval.js";
var source = lively.query(this, ".chartExampleLiveData").textContent
boundEval(source, this).then(r => r.value)
</script>
<canvas id="liveUpdateData"></canvas>
<button id="randomizeButton">Randomizee meee</button>

Some examples can also be found [here](https://tobiasahlin.com/blog/chartjs-charts-to-get-you-started/)

## Experience

- Very easy to learn, very easy to include

### Ecosystem

The documentation / ecosystem on the internet is quite useful and mature. Every problem we had while testing this library could be solved within minutes with the documentation and Stack Overflow. Very neat animations. On top of that, many [plugins](https://www.chartjs.org/docs/2.7.2/notes/extensions.html) can be found online, written by the community.

### Customizable to needs?

It is not very customizable. It is just a library for plotting data that you provide. It is very versatile in displaying the data, though. Once plotted, the data can be updated or extended, but not changed as radically as we would like.
28.854922
246
0.671395
eng_Latn
0.618195
11662bbaf07e262fff28eba44354c3c310fbdf73
4,521
md
Markdown
iambismark.net/content/post/2009/03/1235966127.md
bismark/iambismark.net
1ef89663cfcf4682fbfd60781bb143a7fd276312
[ "MIT" ]
null
null
null
iambismark.net/content/post/2009/03/1235966127.md
bismark/iambismark.net
1ef89663cfcf4682fbfd60781bb143a7fd276312
[ "MIT" ]
null
null
null
iambismark.net/content/post/2009/03/1235966127.md
bismark/iambismark.net
1ef89663cfcf4682fbfd60781bb143a7fd276312
[ "MIT" ]
null
null
null
---
archive:
- 2009-03
categories:
- blog
date: '2009-03-02T03:55:27'
oldpaths:
- /archive/2009/03/01/ryan-the-chef.html
- /wp/2009/03/01/ryan-the-chef/
- /2009/03/01/ryan-the-chef/
- /blog/284
slug: '1235966127'
tags:
- personal
title: ryan the chef
---

while i was doing my taxes, i took a look at how much money i supposedly made last year and then how much money i have in the bank. yeah, for some reason there was a huge discrepancy there. looking over my last year's spending reports, i am quite certain that a huge hunk of that money went towards eating out (compounded by the fact that for a good majority of the year i was paying for two people each time), so i decided to greatly cut down the number of meals i buy at a restaurant.

honestly this is pretty tough for me, because i always feel like i'm too busy to cook and i rarely feel like the things i make taste all that good. but at the encouragement of my mother, i made the commitment. right now my goal is to only eat out on fridays and saturdays. that's four days and thus 8-12 meals eaten out fewer than before. this is going to save me a ton of money and hopefully help me shed a few of those soft spots that tend to pop up throughout the semester.

i don't have some great plan on how this is all going to work, but yesterday i went on a big $75 grocery trip. i actually made a shopping list, which is a first for me. i got a lot of good food that i am actually excited about eating.

so today was day one with my filled cupboards. this morning i decided to use a bag of lentils that i first purchased back towards the beginning of my internship. yes, i carried these lentils with me for the last year and across two states (just some proof as to how little i actually cooked). i got some tips from my mom and i threw together a lentil stew in the crock pot my mom sent me last year (which i've never used before today). i put in a can of diced tomatoes, a can of chicken broth, the bag of lentils, some sauteed carrots and onions, spicy turkey sausage, and some spices i borrowed from my roommate. i let it cook for about 7 hours, and i'm proud to say it tasted quite excellent.

{{< image 1 >}}Lentil Stew{{< /image >}}

the finished product (like i said, the crock pot is from my mom.. ignore the flower pattern).

after dinner, i decided to try out [this recipe][2] for breakfast burritos that my mom sent me a few days ago. here is how it went:

{{< image 2 >}}Burrito ingredients{{< /image >}}

here are all of the ingredients i got together. i decided to cut the recipe by 3/4 just to give it a shot for the week.

{{< image 3 >}}Chopped peppers{{< /image >}}

first i chopped up half of the green pepper.

{{< image 4 >}}Frying peppers{{< /image >}}

second i fried them a bit in some butter.

{{< image 5 >}}Eggs{{< /image >}}

then i broke the eggs and mixed in some pepper and milk.

{{< image 6 >}}Beaten eggs{{< /image >}}

i beat the eggs.

{{< image 7 >}}Rinsed beans{{< /image >}}

next i drained and rinsed the can of black beans.

{{< image 8 >}}Fried eggs{{< /image >}}

i scrambled the eggs with the peppers and butter already in the pan.

\*ryan's tip for scrambled eggs: the less you stir the eggs, the fluffier and more moist they will be. if you just stir them constantly, they get all dry and crumbly. i lift up one side, tilt the pan so the raw egg flows underneath, rotate the pan, and repeat until done. learned this on my mission.

{{< image 9 >}}Eggs and beans{{< /image >}}

once the eggs were done i added the beans and mixed them all together.

{{< image 10 >}}Tortilla{{< /image >}}

then i laid out a burrito tortilla on a piece of plastic wrap.

{{< image 11 >}}Filled tortilla{{< /image >}}

i filled it with the egg mix, salsa, and some cheese. then i rolled it up and wrapped the plastic around it. my first one messed up pretty bad because the tortilla broke, and i realized it was because the tortillas were cold. i stuck them in the microwave for a bit, and the rest went quite smoothly.

{{< image 12 >}}Finished product{{< /image >}}

i ended up with six burritos (not quite the eight that should have come from a fourth of the recipe). the sizes varied a bit, so i will find out what's too big and what's too little. all of the burritos are now stacked in my freezer, and i will let you all know how good they end up being for breakfast!

[2]: http://www.thesimpledollar.com/2009/02/20/bulk-breakfast-burritos-convenient-cheap-healthy-and-easier-than-you-think/
39.657895
122
0.735014
eng_Latn
0.999772
116630a498fe167f6ff0a09ca5ae025debe10002
491
md
Markdown
data/reusables/code-scanning/false-positive-fix-codeql.md
Plotta/docs
dfc135a2bf91be244db6de86003348719a7e8cad
[ "CC-BY-4.0", "MIT" ]
11,698
2020-10-07T16:22:18.000Z
2022-03-31T18:54:47.000Z
data/reusables/code-scanning/false-positive-fix-codeql.md
koca4a/docs
ee3d140be9b97ef5ee1437a943c3067512ab2e96
[ "CC-BY-4.0", "MIT" ]
8,317
2020-10-07T16:26:58.000Z
2022-03-31T23:24:25.000Z
data/reusables/code-scanning/false-positive-fix-codeql.md
koca4a/docs
ee3d140be9b97ef5ee1437a943c3067512ab2e96
[ "CC-BY-4.0", "MIT" ]
48,204
2020-10-07T16:15:45.000Z
2022-03-31T23:50:42.000Z
If you dismiss a {% data variables.product.prodname_codeql %} alert as a false positive result, for example because the code uses a sanitization library that isn't supported, consider contributing to the {% data variables.product.prodname_codeql %} repository and improving the analysis. For more information about {% data variables.product.prodname_codeql %}, see "[Contributing to {% data variables.product.prodname_codeql %}](https://github.com/github/codeql/blob/main/CONTRIBUTING.md)."
245.5
490
0.796334
eng_Latn
0.917947
11668f71975e5cf6c5edb34793b83a38114122f5
17,547
md
Markdown
Writeup.md
MellonGuan/CarND-Traffic-Sign-Classifier-Project
5f43be161409df2b5197ad04084965e56c7ce02a
[ "MIT" ]
null
null
null
Writeup.md
MellonGuan/CarND-Traffic-Sign-Classifier-Project
5f43be161409df2b5197ad04084965e56c7ce02a
[ "MIT" ]
null
null
null
Writeup.md
MellonGuan/CarND-Traffic-Sign-Classifier-Project
5f43be161409df2b5197ad04084965e56c7ce02a
[ "MIT" ]
null
null
null
# **Traffic Sign Recognition**

## Writeup

### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.

---

**Build a Traffic Sign Recognition Project**

The goals / steps of this project are the following:
* Load the data set (see below for links to the project data set)
* Explore, summarize and visualize the data set
* Design, train and test a model architecture
* Use the model to make predictions on new images
* Analyze the softmax probabilities of the new images
* Summarize the results with a written report

[//]: # (Image References)

[image00]: ./visualize_image/pdsignname.png "signname"
[image01]: ./visualize_image/visualize_Training.jpg "Visualization"
[image02]: ./visualize_image/visualize_Validation.jpg "Validation"
[image03]: ./visualize_image/visualize_Test.jpg "Test"
[image001]: ./visualize_image/visualize_Augmented.jpg "Augmented"
[image04]: ./visualize_image/image_rgb.jpg "RGBscaling"
[image05]: ./visualize_image/image_gray.jpg "Grayscaling"
[image06]: ./visualize_image/visualize_original_image.jpg "original image"
[image07]: ./visualize_image/visualize_augmented_image.jpg "augmented image"
[image4]: ./traffic_sigs_test-examples/Signal_1.jpg "Traffic Sign 1"
[image5]: ./traffic_sigs_test-examples/Signal_2.jpg "Traffic Sign 2"
[image6]: ./traffic_sigs_test-examples/Signal_3.jpg "Traffic Sign 3"
[image7]: ./traffic_sigs_test-examples/Signal_4.jpg "Traffic Sign 4"
[image8]: ./traffic_sigs_test-examples/Signal_5.jpg "Traffic Sign 5"
[image9]: ./traffic_sigs_test-examples/Signal_6.jpg "Traffic Sign 6"

## Rubric Points

### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/481/view) individually and describe how I addressed each point in my implementation.

---

### Writeup / README

#### 1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. You can use this template as a guide for writing the report. The submission includes the project code.

You're reading it! And here is a link to my [project code](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/Traffic_Sign_Classifier.ipynb)

### Data Set Summary & Exploration

#### 1. Provide a basic summary of the data set. In the code, the analysis should be done using python, numpy and/or pandas methods rather than hardcoding results manually.

I used the pandas library to calculate summary statistics of the traffic signs data set:

* The size of the training set is 34799
* The size of the validation set is 4410
* The size of the test set is 12630
* The shape of a traffic sign image is 32x32x3
* The number of unique classes/labels in the data set is 43

![alt text][image00]

#### 2. Include an exploratory visualization of the dataset.

Here is an exploratory visualization of the data set. It is a bar chart showing how the data is distributed across classes.

![alt text][image01]

The following are two bar charts showing the distribution of the validation and test data, respectively.

![alt text][image02]
![alt text][image03]

From the three bar charts presented above, I can conclude that the three datasets have similar sample distributions.

### Design and Test a Model Architecture

#### 1. Describe how you preprocessed the image data. What techniques were chosen and why did you choose these techniques? Consider including images showing the output of each preprocessing technique.

Pre-processing refers to techniques such as converting to grayscale, normalization, etc. (OPTIONAL: As described in the "Stand Out Suggestions" part of the rubric, if you generated additional data for training, describe why you decided to generate additional data, how you generated the data, and provide example images of the additional data. Then describe the characteristics of the augmented training set like number of images in the set, number of images for each class, etc.)

### Data pre-processing pipeline

As a first step, I decided to convert the images to grayscale because, after several experiments, I found out that color information does not really help the NN to train; the NN using grayscale images gave better validation accuracy.

Here is an example of a traffic sign image before and after grayscaling.

![alt text][image04]![alt text][image05]

As a last step, I normalized the image data because it helps to prevent numerical issues in calculating a loss function, and helps a NN to train faster.
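As a concrete illustration, a minimal version of these two steps might look like the sketch below. This is not the notebook's own code; in particular, the exact scaling range is an assumption here.

```python
# Minimal sketch of the preprocessing pipeline described above (assumed details).
import cv2
import numpy as np

def preprocess(images):
    out = []
    for img in images:                                   # img: 32x32x3, uint8
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)     # step 1: grayscale
        norm = (gray.astype(np.float32) - 128.0) / 128.0 # step 2: roughly [-1, 1]
        out.append(norm[..., np.newaxis])                # keep a channel axis: 32x32x1
    return np.array(out)
```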
### Data augmentation

The main reason why I decided to generate additional data was the fact that the number of samples of each label in the training set differs significantly. For example, there are 180 samples for class 0 and 2010 samples for class 2. I wanted the training set to have approximately the same number of samples of each class. Also, as a car moves, some images can be slightly blurred and/or viewed by the car's camera from different angles. I wanted the NN to be able to work well under such conditions.

To add more data to the data set, I used the following techniques:

![alt text][image06]

The method `def augment(img)` applies from 3 to 5 randomly selected transformations described above to the image passed in; a sketch is shown below.

Here is an example of an original image and an augmented image:

![alt text][image06]![alt text][image07]

The difference between the original data set and the augmented data set is the following:

- The augmented data set has the same number of samples of each class.
- The augmented data set consists of 430,000 images (10,000 images per class), which is 12 times more than the original data set size.

![alt text][image001]
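Since the exact transformation set only appears as an image in the original writeup, the following is a hypothetical reconstruction of `augment(img)`; the three transforms (rotation, shift, blur) and their parameter ranges are assumptions.

```python
# Hypothetical reconstruction of augment(img) -- transform set is assumed.
import random
import cv2
import numpy as np

def rotate(img, max_deg=15):
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-max_deg, max_deg), 1.0)
    return cv2.warpAffine(img, m, (w, h))

def shift(img, max_px=3):
    h, w = img.shape[:2]
    m = np.float32([[1, 0, random.randint(-max_px, max_px)],
                    [0, 1, random.randint(-max_px, max_px)]])
    return cv2.warpAffine(img, m, (w, h))

def blur(img):
    return cv2.GaussianBlur(img, (3, 3), 0)

def augment(img):
    # Apply 3 to 5 randomly selected transformations, as stated in the writeup.
    ops = random.choices([rotate, shift, blur], k=random.randint(3, 5))
    for op in ops:
        img = op(img)
    return img
```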
#### 2. Describe what your final model architecture looks like (including model type, layers, layer sizes, connectivity, etc.). Consider including a diagram and/or table describing the final model.

My final model consisted of the following layers:

| Layer           | Description                                 |
|:---------------:|:-------------------------------------------:|
| Input           | 32x32x1 grayscale image                     |
| Convolution 5x5 | 1x1 stride, valid padding, outputs 28x28x6  |
| RELU            | Activation                                  |
| Max pooling     | 2x2 stride, outputs 14x14x6                 |
| Convolution 5x5 | 1x1 stride, valid padding, outputs 10x10x16 |
| RELU            | Activation                                  |
| Max pooling     | 2x2 stride, outputs 5x5x16                  |
| Flatten         | Outputs 1x400                               |
| Fully connected | Outputs 1x120                               |
| RELU            | Activation                                  |
| Dropout         | keep_prob = 0.5                             |
| Fully connected | Outputs 1x84                                |
| RELU            | Activation                                  |
| Dropout         | keep_prob = 0.5                             |
| Fully connected | Outputs 43                                  |
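For reference, the table above can be rendered as the following Keras sketch. This is an illustration, not the notebook's original TensorFlow implementation, and the variable name is hypothetical.

```python
# Keras sketch of the layer table above (illustrative, not the original code).
from tensorflow.keras import Sequential, layers

model = Sequential([
    layers.Conv2D(6, 5, activation="relu", input_shape=(32, 32, 1)),  # 28x28x6
    layers.MaxPooling2D(2),                                           # 14x14x6
    layers.Conv2D(16, 5, activation="relu"),                          # 10x10x16
    layers.MaxPooling2D(2),                                           # 5x5x16
    layers.Flatten(),                                                 # 400
    layers.Dense(120, activation="relu"),
    layers.Dropout(0.5),                                              # keep_prob = 0.5
    layers.Dense(84, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(43),                                                 # class logits
])
```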
#### 3. Describe how you trained your model. The discussion can include the type of optimizer, the batch size, number of epochs and any hyperparameters such as learning rate.

To train the model, I used the Adam optimizer with the following hyper-parameters (a sketch of this setup follows the list):

- The number of epochs of 40
- The batch size of 256
- The dropout rate of 50%
- Learning rate of 0.001, decreasing every 10 epochs by a factor of 2
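A minimal sketch of this setup, continuing the Keras rendering above, might look as follows; the `model` object and the `X_train`/`y_train`/`X_valid`/`y_valid` arrays are assumed from the earlier sketches, and this is not the notebook's actual training loop.

```python
# Sketch of the stated training setup: Adam, lr 0.001 halved every 10 epochs,
# batch size 256, 40 epochs (assumes `model` and the data arrays from above).
import tensorflow as tf

def lr_schedule(epoch):
    return 0.001 * (0.5 ** (epoch // 10))  # halve the rate every 10 epochs

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(
    X_train, y_train,
    validation_data=(X_valid, y_valid),
    epochs=40,
    batch_size=256,
    callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)],
)
```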
#### 4. Describe the approach taken for finding a solution and getting the validation set accuracy to be at least 0.93. Include in the discussion the results on the training, validation and test sets and where in the code these were calculated. Your approach may have been an iterative process, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think the architecture is suitable for the current problem.

My final model results were:
* training set accuracy of 0.994
* validation set accuracy of 0.975
* test set accuracy of 0.939

If an iterative approach was chosen:
* What was the first architecture that was tried and why was it chosen?
    - The first architecture that I tried was LeNet, because this network model was already shown in the class video, which made it easy for me to start.
* What were some problems with the initial architecture?
    - The LeNet architecture had good performance in 1998, about 90+% test accuracy. In traditional ConvNets the output of the last stage is fed to a classifier; in the present work the outputs of all the stages are fed to the classifier.
* How was the architecture adjusted and why was it adjusted?
    - Typical adjustments could include choosing a different model architecture:
    - adding or taking away layers (pooling, dropout, convolution, etc.)
    - in convolution, adding more depth to weights and biases, after some trials
    - using an activation function or changing the activation function.
    - One common justification for adjusting an architecture would be due to overfitting or underfitting. A high accuracy on the training set but low accuracy on the validation set indicates overfitting; a low accuracy on both sets indicates underfitting.
* Which parameters were tuned? How were they adjusted and why?
    - Learning rate and epochs had a great impact on results
    - Batch size had no impact on accuracy
* What are some of the important design choices and why were they chosen? For example, why might a convolution layer work well with this problem? How might a dropout layer help with creating a successful model?
    - I tried everything and I was pulling my hair out; then I decided to play with the depth of the weights and the batch size!
    - I didn't know what I was doing, and I still don't know how I made it work. Also, adding more epochs was a good thing to do.

If a well known architecture was chosen:
* What architecture was chosen?
    - I don't know
* Why did you believe it would be relevant to the traffic sign application?
    - I don't know
* How does the final model's accuracy on the training, validation and test set provide evidence that the model is working well?
    - The model was working very well. I got the highest accuracy in my training history: EPOCH 39 ... Training Accuracy = 0.994 ... Validation Accuracy = 0.975
    - And I got Test Set Accuracy = 0.939, and 66.67% accuracy on the image set that I picked out from the web data set.

### Test a Model on New Images

#### 1. Choose five German traffic signs found on the web and provide them in the report. For each image, discuss what quality or qualities might be difficult to classify.

Here are six German traffic signs that I found on the web:

![alt text][image4] ![alt text][image5] ![alt text][image6]
![alt text][image7] ![alt text][image8] ![alt text][image9]

The first image might be difficult to classify because the signs are "perspective-transformed". The third image can also be challenging to classify, as it contains a contrast object in the bottom right corner.

#### 2. Discuss the model's predictions on these new traffic signs and compare the results to predicting on the test set. At a minimum, discuss what the predictions were, the accuracy on these new predictions, and compare the accuracy to the accuracy on the test set (OPTIONAL: Discuss the results in more detail as described in the "Stand Out Suggestions" part of the rubric).

Here are the results of the prediction:

| Image                 | Prediction                           |
|:---------------------:|:------------------------------------:|
| 30 km/h               | 100%                                 |
| Road work             | 100%                                 |
| Speed limit (20km/h)  | (Dangerous curve to the right) FAIL  |
| No entry              | (Stop Sign) FAIL                     |
| Stop Sign             | 100%                                 |
| Children crossing     | 100%                                 |

The model was able to correctly guess 4 of the 6 traffic signs, which gives an accuracy of 66.67%.

#### 3. Describe how certain the model is when predicting on each of the five new images by looking at the softmax probabilities for each prediction. Provide the top 5 softmax probabilities for each image along with the sign type of each probability. (OPTIONAL: as described in the "Stand Out Suggestions" part of the rubric, visualizations can also be provided such as bar charts)

The code for making predictions on my final model is located in the 32nd cell of the jupyter notebook.

For the first image, the model is absolutely sure that this is a 30 km/h sign (probability of 1.0), and the image does contain a 30 km/h sign. The top five softmax probabilities were

| Probability     | Prediction                           | Note                                             |
|:---------------:|:------------------------------------:|:------------------------------------------------:|
| 1.00            | 30 km/h                              |                                                  |
| 3.40680410e-19  | Roundabout mandatory                 | 3.40680410e-19 = 0.00000000000000000034068       |
| 1.73493985e-21  | Speed limit (70km/h)                 | 1.73493985e-21 = 0.0000000000000000000017349     |
| 5.34504836e-22  | Speed limit (20km/h)                 | 5.34504836e-22 = 0.0000000000000000000005345     |
| 9.67139305e-23  | End of all speed and passing limits  | 9.67139305e-23 = 0.0000000000000000000000967     |

For the second image, the model is absolutely sure that this is a "Road work" sign (probability of 1.0), and the image does contain a Road work sign. The top five softmax probabilities were

| Probability | Prediction            | Note |
|:-----------:|:---------------------:|:----:|
| 1.00        | Road work             |      |
| 0.          | Speed limit (20km/h)  |      |
| 0.          | Speed limit (30km/h)  |      |
| 0.          | Speed limit (50km/h)  |      |
| 0.          | Speed limit (60km/h)  |      |

For the third image, the model is relatively sure that this is a "Speed limit (70km/h)" sign (probability of 0.99), but it is wrong; the image is "Speed limit (20km/h)". The top five softmax probabilities were

| Probability     | Prediction            | Note |
|:---------------:|:---------------------:|:----:|
| 4.70122278e-01  | Speed limit (70km/h)  |      |
| 3.12950999e-01  | Speed limit (30km/h)  |      |
| 2.15501577e-01  | Speed limit (20km/h)  |      |
| 1.41383242e-03  | General caution       |      |
| 8.18308308e-06  | Pedestrians           |      |

For the fourth image, the model is relatively sure that this is a "Stop" sign (probability of 0.98), but it is wrong; the image is Road work. The top five softmax probabilities were

| Probability     | Prediction            | Note |
|:---------------:|:---------------------:|:----:|
| 9.99139905e-01  | Speed limit (70km/h)  |      |
| 3.12950999e-01  | Speed limit (30km/h)  |      |
| 2.15501577e-01  | Speed limit (20km/h)  |      |
| 1.41383242e-03  | General caution       |      |
| 8.18308308e-06  | Pedestrians           |      |

For the fifth image, the model is absolutely sure that this is a "Stop" sign (probability of 1.0). The top five softmax probabilities were

| Probability     | Prediction          | Note |
|:---------------:|:-------------------:|:----:|
| 1.00000000e+00  | Stop                |      |
| 6.02076167e-11  | Turn left ahead     |      |
| 3.02841120e-11  | Priority road       |      |
| 6.11339972e-13  | Beware of ice/snow  |      |
| 3.37771152e-13  | Yield               |      |

For the sixth image, the model is absolutely sure that this is a "Children crossing" sign (probability of 1.0). The top five softmax probabilities were

| Probability     | Prediction                 | Note |
|:---------------:|:--------------------------:|:----:|
| 1.00000000e+00  | Children crossing          |      |
| 1.83444134e-13  | Road narrows on the right  |      |
| 1.67962879e-13  | Ahead only                 |      |
| 2.62377315e-14  | Bicycles crossing          |      |
| 2.75393556e-15  | Slippery road              |      |

### (Optional) Visualizing the Neural Network (See Step 4 of the Ipython notebook for more details)

#### 1. Discuss the visual output of your trained network's feature maps. What characteristics did the neural network use to make classifications?
62.223404
681
0.642674
eng_Latn
0.996229
1166d84be504bd392757014414641d1f51f48c2f
682
md
Markdown
README.md
Prithwis-2023/Send-emails-from-Excel-Sheet
0764d4bb7d534106be4488e031e663bb2f630805
[ "MIT" ]
null
null
null
README.md
Prithwis-2023/Send-emails-from-Excel-Sheet
0764d4bb7d534106be4488e031e663bb2f630805
[ "MIT" ]
null
null
null
README.md
Prithwis-2023/Send-emails-from-Excel-Sheet
0764d4bb7d534106be4488e031e663bb2f630805
[ "MIT" ]
null
null
null
# Send-emails-from-Excel-Sheet

Using this program, the user can easily send hundreds of emails to people whose names and email addresses are listed in an Excel sheet created by the sender on his/her computer. The sender has to write the message only once; after running, the program fetches the recipients' email addresses and names and automatically sends the emails to them in no time (without spamming).

Also, for running this program smoothly with a Gmail account, the sender has to give permission to less secure apps at: https://www.google.com/settings/security/lesssecureapps

So, use this and take advantage of PyCalc Tech Solutions.
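The repository's actual code is not reproduced here, but the idea can be sketched as follows. The sheet layout (name in column A, email address in column B), file names, and credentials are all assumptions, and `openpyxl` stands in for whatever Excel reader the project uses:

```python
# Minimal sketch of the idea, not the repository's actual implementation.
import smtplib
from email.message import EmailMessage

from openpyxl import load_workbook  # third-party: pip install openpyxl

SENDER = "sender@gmail.com"   # hypothetical credentials
PASSWORD = "app-password"

wb = load_workbook("recipients.xlsx")  # assumed file name
sheet = wb.active

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login(SENDER, PASSWORD)  # requires the less-secure-apps setting above
    # Assumed layout: header row, then name in column A and email in column B.
    for name, email in sheet.iter_rows(min_row=2, values_only=True):
        msg = EmailMessage()
        msg["Subject"] = "Hello"
        msg["From"] = SENDER
        msg["To"] = email
        msg.set_content(f"Dear {name},\n\nThe message, written once, goes here.")
        server.send_message(msg)
```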
227.333333
650
0.797654
eng_Latn
0.99956
11671e329c1d8a400cc0f4c8dd7103a19d6ccdb7
278
md
Markdown
docs/overview/index.md
cofu-app/couchbase-lite-core
b8e45b8109a52d6d8186d0c52bbace1f1714cc32
[ "Apache-2.0" ]
null
null
null
docs/overview/index.md
cofu-app/couchbase-lite-core
b8e45b8109a52d6d8186d0c52bbace1f1714cc32
[ "Apache-2.0" ]
1
2020-03-17T11:11:30.000Z
2020-03-17T11:11:30.000Z
docs/overview/index.md
cofu-app/couchbase-lite-core
b8e45b8109a52d6d8186d0c52bbace1f1714cc32
[ "Apache-2.0" ]
null
null
null
# LiteCore Overview

<img src="Block Diagram.png">

* **[Overview of the core classes (database, document, query, etc.)](Classes.md)**
* **[Overview of the replicator classes](Replicator.md)**
* **[Overview of Fleece](https://github.com/couchbaselabs/fleece/wiki/Using-Fleece)**
39.714286
85
0.719424
kor_Hang
0.261646
1167c9bcebd4ff0cf6ada74d98d57072149fe583
563
md
Markdown
edge/meetings/20191204.md
jantonguirao/working-groups
094f5590087db21a4ed36cbd7fa8153d40a98722
[ "Apache-2.0" ]
12
2019-04-08T08:29:51.000Z
2021-08-18T07:59:40.000Z
edge/meetings/20191204.md
jantonguirao/working-groups
094f5590087db21a4ed36cbd7fa8153d40a98722
[ "Apache-2.0" ]
13
2019-03-22T07:48:19.000Z
2021-08-16T15:18:19.000Z
edge/meetings/20191204.md
jantonguirao/working-groups
094f5590087db21a4ed36cbd7fa8153d40a98722
[ "Apache-2.0" ]
17
2019-03-22T07:51:08.000Z
2021-08-16T13:44:22.000Z
<!--- SPDX-License-Identifier: Apache-2.0 -->

# Wed Dec 04, 2019 at 8:00am PST

## Agenda

* Nov 18 workshop readout (Yedong)
* Discuss path forward

## Meeting Minutes

* [Meeting recording](https://youtu.be/a_2xBHU1RgQ)

### Attendees

* Saurabh Tangri (Intel)
* Ofer Rosenberg (Qualcomm)
* Milan Oljaca (Qualcomm)
* Manash Goswami (Microsoft)
* Yedong Liu (Huawei)

### Notes

* Largely discussed path forward, considered if Edge WG needs to continue ...

### Action Items

* **Milan**: Prepare draft of the letter to SC seeking guidance about future of Edge WG.
22.52
88
0.71048
eng_Latn
0.650247