# Cellranger Nextflow workflow
This Cellranger Nextflow workflow is part of the single-cell RNA-seq workflow developed for the Warren Lab. The following document outlines the input and output specifications, data formatting requirements, and runtime environment details for the workflow.
## Execution
```
nextflow run cellranger.nf --wfconfig 'config.json'
```
## Inputs:
All the inputs for the analysis are read from a config file; the structure of the config file is listed below.
```{nextflow}
params {
input {
metadata = 'ZhengSorted_metadata.csv'
gex_reference = 'refdata-cellranger-hg19-3.0.0'
vdj_reference = 'refdata-cellranger-vdj-GRCh38-alts-ensembl-3.1.0'
fastq_paths = 'ZhengSorted_10X/Fastq'
study_id = 'ZhengSorted_10X'
gex = true
vdj = false
}
output {
folder = "ZhengSorted_10X/Results"
}
count {
fastq_type = 'demux' //['mkfastq', 'demux', 'bcl2fastq']
}
aggr {
modes = "mapped" //['mapped', 'none']
}
}
```
The parameters file is divided into four sections.
1. Input
The input section contains the path to the metadata file, the GEX and VDJ reference genome paths, the path to the folders containing FASTQ files, the study name, and flags that determine whether GEX and VDJ analysis should be performed.
i. Metadata
The metadata file outlines all the library information for the given study and must contain the columns shown in the example below. Any additional columns listed in the file will be added to the SingleCellExperiment object column data in later workflows.
| repoName | sampleName | patientID | expected_cells | chemistry | nucliecAcid | locus | platform | indices | sortingCT | tissueType | VDJType |
|---|---|---|---|---|---|---|---|---|---|---|---|
| S1_Monocytes | Monocytes | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | A1 | CD14pMonocytes | PBMC | NA |
| S1_Bcells | Bcells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | A2 | CD19PBCells | PBMC | NA |
| S1_Progenitor | Progenitor | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | E1 | CD34pCells | PBMC | NA |
| S1_HelperTCells | HelperTCells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | E2 | CD4pHelperTCells | PBMC | NA |
| S1_RegulatoryTCells | RegulatoryTCells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | D1 | CD4p_CD25pRegulatoryCells | PBMC | NA |
| S1_NaiveTCells | NaiveTCells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | D2 | CD4p_CD45RAp_CD25nNaiveTCells | PBMC | NA |
| S1_MemoryTCells | MemoryTCells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | B1 | CD4p_CD45ROpMemoryTCells | PBMC | NA |
| S1_NKCells | NKCells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | B2 | CD56pNKCells | PBMC | NA |
| S1_CytotoxicTCells | CytotoxicTCells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | G1 | CD8pCytotoxicTCells | PBMC | NA |
| S1_NaiveCytotoxicTCells | NaiveCytotoxicTCells | S1 | 3000 | threeprime | cDNA | 3primeGEX | 10XGenomics | G2 | CD8p_CD45RApNaiveCytotoxicTCells | PBMC | NA |
The VDJType column is either `T cell` or `B cell` if VDJ sequencing was also performed on the sample; otherwise the column value is `NA`.
ii. GEX and VDJ reference files
The reference genome files for GEX and VDJ analysis can be downloaded from the 10X Genomics website using the links mentioned below:
```
GEX reference: http://cf.10xgenomics.com/supp/cell-exp/refdata-cellranger-GRCh38-3.0.0.tar.gz
VDJ reference: http://cf.10xgenomics.com/supp/cell-vdj/refdata-cellranger-vdj-GRCh38-alts-ensembl-3.1.0.tar.gz
```
iii. Path to folder containing FASTQ files
The folder structure of the FASTQ inputs can vary depending on the method used to generate the FASTQ files from the raw sequencing files. The current pipeline accepts FASTQ files generated using two methods: `demux` and `mkfastq`. The recommended folder structure can be found at [this link.](https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/using/fastq-input) The current version of the pipeline was developed around cellranger `mkfastq` outputs and has been partially tailored to accept cellranger `demux` output. Other options will be made available in future versions.
2. Output
All results from the cellranger Nextflow workflow will be stored within subfolders under this output path.
3. Count
Currently the only parameter specified here is the method used to generate the FASTQ files; additional run details such as sequencing chemistry and expected number of cells are obtained from the metadata file. Runtime characteristics such as memory and number of CPUs are obtained from the process information specified in the nextflow.config file (a sketch of such a process block is shown at the end of this section).
4. Aggregate
Specifies what type of normalization should be performed during the aggregation step.
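For illustration, a per-process resource block in `nextflow.config` might look like the sketch below. The process names and resource values are hypothetical placeholders, not taken from this pipeline; adjust them to your own process labels and environment.
```groovy
// Hypothetical nextflow.config excerpt: per-process runtime resources.
// Process names and values are examples only.
process {
    withName: 'cellranger_count' {
        cpus = 16          // CPUs handed to cellranger count
        memory = '64 GB'   // memory ceiling for the task
        time = '24h'       // wall-clock limit
    }
    withName: 'cellranger_aggr' {
        cpus = 8
        memory = '32 GB'
    }
}
```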
## Outputs:
The pipeline generates three folders in the output folder:
1. Counts
Contains the count matrices for each sample in the analysis. This serves as the input for most of the other Nextflow scripts for downstream analysis.
2. Metadata
The metadata folder contains the sample sheets for the VDJ and GEX analysis as well as additional information such as file paths for count matrices and VDJ clonotype and contig files. These sample metadata files serve as the input to the preprocessing Nextflow workflow that generates the SingleCellExperiment objects for each sample. All information from the sample metadata files is added to the colData of the SingleCellExperiment.
3. VDJ
The VDJ folder contains the results from the cellranger VDJ pipeline when available, including the clonotype and contig files. When available, the VDJ data is incorporated into the SingleCellExperiment object in the preprocessing Nextflow workflow.
# Source folder
April 2018
This folder contains all the source code
### **Structure of this folder**
```
├── model <- Store all functions to perform modelling
│ │
│ ├── extract_sentiment.py
│ ├── extract_topic.py
│ ├── keras_model.py
│ └── prototype_model.py
│
│
├── Preprocessing <- Store all functions to perform preprocessing steps
│ │
│ ├── financial_cols.csv
│ ├── financial_sentiment_cols.csv
│ ├── financial_sentiment_topic_cols.csv
│ ├── financial_topic_cols.csv
│ ├── vol_financial_cols.csv
│ ├── vol_financial_sentiment_cols.csv
│ ├── vol_financial_sentiment_topic_cols.csv
│ ├── vol_financial_topic_cols.csv
│ ├── sentiment_cols.csv
│ ├── sentiment_topic_cols.csv
│ └── topic_cols.csv
│
├── Scripts <- Store all scripts to run on the server
│ │
│ ├── test_years.csv
│ ├── train_years.csv
│ └── val_year.csv
│
├── fillings_count_sql_query <- Query written to obtain information from the SQL database
├── structures.py <- Define Filing object and function filing_to_mongo
└── utils.py <- Store all utility functions
```
### Links to files in this folder
|Folder|Description|Files|
| :---:| :---: |:---:|
|Models| Store all functions to perform modelling|[extract_sentiment.py](models\extract_sentiment.py) <br> [extract_topic.py](models\extract_topic.py) <br> [keras_model.py](models\keras_model.py) <br> [prototype_model.py](models\prototype_model.py) |
|Preprocessing| Store all functions to perform preprocessing steps | [feature_aggregation.py](preprocessing\feature_aggregation.py) <br> [label_feature_eng.py](preprocessing\label_feature_eng.py) <br> [subset_data.py](preprocessing\subset_data.py) <br> [text_extract.py](preprocessing\text_extract.py) |
|Scripts| Store all scripts to run on the server | [common_cik_script.py](scripts\common_cik_script.py) <br> [common_col_script](scripts\common_col_script.py) <br> [create_financial_features_script](scripts\create_financial_features_script.py) <br> [create_label_script](scripts\create_label_script.py) <br> [extract_cik_script](scripts\extract_cik_script.py) <br> [extract_filings_script](scripts\extract_filings_script.py) <br> [extract_sentiment_script](scripts\extract_sentiment_script.py) <br> [extract_uniq_col_script](scripts\extract_uniq_col_script.py) <br> [keras_nn_script](scripts\keras_nn_script.py) <br> [merge_features_script](scripts\merge_features_script.py) <br> [subset_data_script](scripts\subset_data_script.py) <br> [topic_modelling_script](scripts\topic_modelling_script.py) <br> [train_model_script](scripts\train_model_script.py)|
|[fillings_count_sql_query](fillings_count_sql_query.md)|Query written to obtain information from the SQL database| |
|[structures.py](structures.py)|Define Filing object and function filing_to_mongo| |
|[utils.py](utils.py)|Store all utility functions| |
# Lab: Implementing an Azure App Service Web App with a Staging Slot
# Student lab manual
## Lab scenario
Adatum Corporation has a number of web apps that are updated on relatively frequent basis. While Adatum has not yet fully embraced DevOps principles, it relies on Git as its version control and is exploring the options to streamline the app updates. As Adatum is transitioning some of its workloads to Azure, the Adatum Enterprise Architecture team decided to evaluate the use of Azure App Service and its deployment slots to accomplish this objective.
Deployment slots are live apps with their own host names. App content and configurations elements can be swapped between two deployment slots, including the production slot. Deploying apps to a non-production slot has the following benefits:
- It is possible to validate app changes in a staging deployment slot before swapping it with the production slot.
- Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when during app deployment. The traffic redirection is seamless, and no requests are dropped because of swap operations. This workflow can be automated by configuring auto swap when pre-swap validation is not needed.
- After a swap, the slot with previously staged app has the previous production app. If the changes swapped into the production slot need to be reversed, this simply involves another swap immediately to return to the last known good state.
Deployment slots facilitate two common deployment patterns: blue/green and A/B testing. Blue-green deployment involves deploying an update into a production environment that is separate from the live application. After the deployment is validated, traffic routing is switched to the updated version. A/B testing involves gradually routing some of the traffic to a staging site in order to test a new version of an app.
The Adatum Architecture team wants to use Azure App Service web apps with deployment slots in order to test these two deployment patterns:
- Blue/Green deployments
- A/B testing
## Objectives
After completing this lab, you will be able to:
- Implement Blue/Green deployment pattern by using deployment slots of Azure App Service web apps
- Perform A/B testing by using deployment slots of Azure App Service web apps
## Lab Environment
Estimated Time: 60 minutes
## Lab Files
None
## Instructions
### Exercise 1: Implement an Azure App Service web app
1. Deploy an Azure App Service web app
1. Create an App Service web app deployment slot
#### Task 1: Deploy an Azure App Service web app
1. From your lab computer, start a web browser, navigate to the [Azure portal](https://portal.azure.com), and sign in by providing credentials of a user account with the Owner role in the subscription you will be using in this lab.
1. In the Azure portal, open **Cloud Shell** pane by selecting on the toolbar icon directly to the right of the search textbox.
1. If prompted to select either **Bash** or **PowerShell**, select **Bash**.
>**Note**: If this is the first time you are starting **Cloud Shell** and you are presented with the **You have no storage mounted** message, select the subscription you are using in this lab, and select **Create storage**.
1. From the Cloud Shell pane, run the following to create a new directory named **az30314a1** and set it as your current directory:
```sh
mkdir az30314a1
cd ~/az30314a1/
```
1. From the Cloud Shell pane, run the following to clone a sample app repository to the **az30314a1** directory:
```sh
REPO=https://github.com/Azure-Samples/html-docs-hello-world.git
git clone $REPO
cd html-docs-hello-world
```
1. From the Cloud Shell pane, run the following to configure a deployment user:
```sh
USERNAME=az30314user$RANDOM
PASSWORD=az30314pass$RANDOM
az webapp deployment user set --user-name $USERNAME --password $PASSWORD
echo $USERNAME
echo $PASSWORD
```
1. Verify that the deployment user was created successfully. If you receive an error message indicating a conflict, repeat the previous step.
>**Note**: Make sure to record the value of the username and the corresponding password.
1. From the Cloud Shell pane, run the following to create the resource group which will host the App Service web app (replace the `<location>` placeholder with the name of the Azure region that is available in your subscription and which is closest to the location of your lab computer):
```sh
LOCATION='<location>'
RGNAME='az30314a-labRG'
az group create --location $LOCATION --resource-group $RGNAME
```
1. From the Cloud Shell pane, run the following to create a new App Service plan:
```sh
SPNAME=az30314asp$LOCATION$RANDOM
az appservice plan create --name $SPNAME --resource-group $RGNAME --location $LOCATION --sku S1
```
1. From the Cloud Shell pane, run the following to create a new, Git-enabled App Service web app:
```sh
WEBAPPNAME=az30314$RANDOM$RANDOM
az webapp create --name $WEBAPPNAME --resource-group $RGNAME --plan $SPNAME --deployment-local-git
```
>**Note**: Wait for the deployment to complete.
1. From the Cloud Shell pane, run the following to retrieve the publishing URL of the newly created App Service web app:
```sh
URL=$(az webapp deployment list-publishing-credentials --name $WEBAPPNAME --resource-group $RGNAME --query scmUri --output tsv)
```
1. From the Cloud Shell pane, run the following to set the git remote alias representing the Git-enabled Azure App Service web app:
```sh
git remote add azure $URL
```
1. From the Cloud Shell pane, run the following to push to the Azure remote with git push azure master:
```sh
git push azure master
```
>**Note**: Wait for the deployment to complete.
1. From the Cloud Shell pane, run the following to identify the FQDN of the newly deployed App Service web app.
```sh
az webapp show --name $WEBAPPNAME --resource-group $RGNAME --query defaultHostName --output tsv
```
1. Close the Cloud Shell pane.
#### Task 2: Create an App Service web app deployment slot
1. In the Azure portal, search for and select **App Services** and, on the **App Services** blade, select the newly created App Service web app.
1. In the Azure portal, navigate to the blade displaying the newly deployed App Service web app, select the **URL** link, and verify that it displays the **Azure App Service - Sample Static HTML Site**. Leave the browser tab open.
1. On the App Service web app blade, in the **Deployment** section, select **Deployment slots** and then select **+ Add Slot**.
1. On the **Add a slot** blade, specify the following settings, select **Add**, and then select **Close**.
| Setting | Value |
| --- | --- |
| Name | **staging** |
| Clone settings from | the name of the web app |
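The staging slot can also be created from the Cloud Shell with the Azure CLI instead of the portal. The following is a minimal sketch that reuses the `$WEBAPPNAME` and `$RGNAME` variables defined earlier in this lab:
```sh
# Create the "staging" slot and clone its configuration from the production slot
az webapp deployment slot create \
    --name $WEBAPPNAME \
    --resource-group $RGNAME \
    --slot staging \
    --configuration-source $WEBAPPNAME
```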
### Exercise 2: Manage App Service web app deployment slots
The main tasks for this exercise are as follows:
1. Deploy web content to an App Service web app staging slot
1. Swap App Service web app staging slots
1. Configure A/B testing
1. Remove Azure resources deployed in the lab
#### Task 1: Deploy web content to an App Service web app staging slot
1. In the Azure portal, open **Cloud Shell** pane by selecting on the toolbar icon directly to the right of the search textbox.
1. From the Cloud Shell pane, run the following to ensure that **az30314a1/html-docs-hello-world** is set as the current directory:
```sh
cd ~/az30314a1/html-docs-hello-world
```
1. In the Cloud Shell pane, run the following to start the built-in editor:
```sh
code index.html
```
1. In the Cloud Shell pane, in the code editor, replace the line:
```html
<h1>Azure App Service - Sample Static HTML Site</h1>
```
with the following line:
```html
<h1>Azure App Service - Sample Static HTML Site v1.0.1</h1>
```
1. Save the changes and close the editor window.
1. From the Cloud Shell pane, run the following to specify the required global git configuration settings:
```sh
git config --global user.email "[email protected]"
git config --global user.name "user az30314"
```
1. From the Cloud Shell pane, run the following to commit the change you applied locally to the master branch:
```sh
git add index.html
git commit -m 'v1.0.1'
```
1. From the Cloud Shell pane, run the following to retrieve the publishing URL of the newly created staging slot of the App Service web app:
```sh
RGNAME='az30314a-labRG'
WEBAPPNAME=$(az webapp list --resource-group $RGNAME --query "[?starts_with(name,'az30314')]".name --output tsv)
SLOTNAME='staging'
URLSTAGING=$(az webapp deployment list-publishing-credentials --name $WEBAPPNAME --slot $SLOTNAME --resource-group $RGNAME --query scmUri --output tsv)
```
1. From the Cloud Shell pane, run the following to set the git remote alias representing the staging slot of the Git-enabled Azure App Service web app:
```sh
git remote add azure-staging $URLSTAGING
```
1. From the Cloud Shell pane, run the following to push to the Azure remote with git push azure master:
```sh
git push azure-staging master
```
>**Note**: Wait for the deployment to complete.
1. Close the Cloud Shell pane.
1. In the Azure portal, navigate to the blade displaying the deployment slots of the App Service web app and select the staging slot.
1. On the blade displaying the staging slot overview, select the **URL** link.
#### Task 2: Swap App Service web app staging slots
1. In the Azure portal, navigate back to the blade displaying the App Service web app and select **Deployment slots**.
1. On the deployment slots blade, select **Swap**.
1. On the **Swap** blade, select **Swap** and then select **Close**.
1. Switch to the browser tab showing the App Service web app and refresh the browser window. Verify that it displays the changes you deployed to the staging slot.
1. Switch to the browser tab showing the staging slot of the App Service web app and refresh the browser window. Verify that it displays the original web page included in the original deployment.
#### Task 3: Configure A/B testing
1. In the Azure portal, navigate back to the blade displaying the deployment slots of the App Service web app.
1. In the Azure portal, on the blade displaying the App Service web app deployment slots, in the row displaying the staging slot, set the value in the **TRAFFIC %** column to 50. This will automatically set the value of **TRAFFIC %** in the row representing the production slot to 50.
1. On the blade displaying the App Service web app deployment slots, select **Save**.
1. In the Azure portal, open **Cloud Shell** pane by selecting on the toolbar icon directly to the right of the search textbox.
1. From the Cloud Shell pane, run the following to set the variables representing the resource group and the name of the target web app:
```sh
RGNAME='az30314a-labRG'
WEBAPPNAME=$(az webapp list --resource-group $RGNAME --query "[?starts_with(name,'az30314')]".name --output tsv)
```
1. From the Cloud Shell pane, run the following several times to identify the traffic distribution between the two slots.
```sh
curl -H 'Cache-Control: no-cache' https://$WEBAPPNAME.azurewebsites.net --stderr - | grep '<h1>Azure App Service - Sample Static HTML Site'
```
>**Note**: Traffic distribution is not entirely deterministic, but you should see several responses from each target site.
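As an alternative to the portal steps at the beginning of this task, the same 50/50 split can be applied from the Cloud Shell. A minimal sketch using the variables defined above:
```sh
# Route 50% of production traffic to the staging slot
az webapp traffic-routing set \
    --name $WEBAPPNAME \
    --resource-group $RGNAME \
    --distribution staging=50
```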
#### Task 4: Remove Azure resources deployed in the lab
1. From the Cloud Shell pane, run the following to list the resource group you created in this exercise:
```sh
az group list --query "[?starts_with(name,'az30314')]".name --output tsv
```
> **Note**: Verify that the output contains only the resource group you created in this lab. This group will be deleted in this task.
1. From the Cloud Shell pane, run the following to delete the resource group you created in this lab
```sh
az group list --query "[?starts_with(name,'az30314')]".name --output tsv | xargs -L1 bash -c 'az group delete --name $0 --no-wait --yes'
```
1. From the Cloud Shell pane, run the following to remove the **az30314a1** directory:
```sh
rm -r -f ~/az30314a1
```
1. Close the Cloud Shell pane.
---
aliases:
- /releases/4.2/deterministic-modules.html
date: '2020-01-08T09:59:25Z'
menu:
corda-enterprise-4-2:
identifier: corda-enterprise-4-2-deterministic-modules
parent: corda-enterprise-4-2-miscellaneous
weight: 400
tags:
- deterministic
- modules
title: Deterministic Corda Modules
---
# Deterministic Corda Modules
A Corda contract’s verify function should always produce the same results for the same input data. To that end,
Corda provides the following modules:
* `core-deterministic`
* `serialization-deterministic`
* `jdk8u-deterministic`
These are reduced versions of Corda's `core` and `serialization` modules and the OpenJDK 8 `rt.jar`, where the
non-deterministic functionality has been removed. The intention here is that all CorDapp classes required for
contract verification should be compiled against these modules to prevent them containing non-deterministic behaviour.
{{< note >}}
These modules are only a development aid. They cannot guarantee determinism without also including
deterministic versions of all their dependent libraries, e.g. `kotlin-stdlib`.
{{< /note >}}
## Generating the Deterministic Modules
`jdk8u-deterministic` is a “pseudo JDK” image that we can point the Java and Kotlin compilers to. It downloads the `rt.jar` containing a deterministic subset of the Java 8 APIs from the Artifactory.

To build a new version of this JAR and upload it to the Artifactory, see the `create-jdk8u` module. This is a standalone Gradle project within the Corda repository that will clone the `deterministic-jvm8` branch of Corda's [OpenJDK repository](https://github.com/corda/openjdk) and then build it. (This currently requires a C++ compiler, GNU Make and a UNIX-like development environment.)

`core-deterministic` and `serialization-deterministic` are generated from Corda's `core` and `serialization` modules respectively using both [ProGuard](https://www.guardsquare.com/en/proguard) and Corda's `JarFilter` Gradle plugin. Corda developers configure these tools by applying Corda's `@KeepForDJVM` and `@DeleteForDJVM` annotations to elements of `core` and `serialization` as described [here](#deterministic-annotations).

The build generates each of Corda's deterministic JARs in six steps:
* Some *very few* classes in the original JAR must be replaced completely. This is typically because the original
class uses something like `ThreadLocal`, which is not available in the deterministic Java APIs, and yet the
class is still required by the deterministic JAR. We must keep such classes to a minimum!
* The patched JAR is analysed by ProGuard for the first time using the following rule:
```groovy
keep '@interface net.corda.core.KeepForDJVM { *; }'
```
ProGuard works by calculating how much code is reachable from given “entry points”, and in our case these entry
points are the `@KeepForDJVM` classes. The unreachable classes are then discarded by ProGuard’s `shrink`
option.
* The remaining classes may still contain non-deterministic code. However, there is no way of writing a ProGuard rule
explicitly to discard anything. Consider the following class:
```kotlin
@CordaSerializable
@KeepForDJVM
data class UniqueIdentifier @JvmOverloads @DeleteForDJVM constructor(
val externalId: String? = null,
val id: UUID = UUID.randomUUID()
) : Comparable<UniqueIdentifier> {
...
}
```
While CorDapps will definitely need to handle `UniqueIdentifier` objects, all of the secondary constructors
generate a new random `UUID` and so are non-deterministic. Hence the next “determinising” step is to pass the
classes to the `JarFilter` tool, which strips out all of the elements which have been annotated as
`@DeleteForDJVM` and stubs out any functions annotated with `@StubOutForDJVM`. (Stub functions that
return a value will throw `UnsupportedOperationException`, whereas `void` or `Unit` stubs will do nothing.)
* After the `@DeleteForDJVM` elements have been filtered out, the classes are rescanned using ProGuard to remove
any more code that has now become unreachable.
* The remaining classes define our deterministic subset. However, the `@kotlin.Metadata` annotations on the compiled
Kotlin classes still contain references to all of the functions and properties that ProGuard has deleted. Therefore
we now use the `JarFilter` to delete these references, as otherwise the Kotlin compiler will pretend that the
deleted functions and properties are still present.
* Finally, we use ProGuard again to validate our JAR against the deterministic `rt.jar`:
```groovy
task checkDeterminism(type: ProGuardTask, dependsOn: jdkTask) {
injars metafix
libraryjars deterministic_rt_jar
configurations.deterministicLibraries.forEach {
libraryjars it, filter: '!META-INF/versions/**'
}
keepattributes '*'
dontpreverify
dontobfuscate
dontoptimize
verbose
keep 'class *'
}
```
[build.gradle](https://github.com/corda/corda/blob/release/os/4.1/core-deterministic/build.gradle)
This step will fail if ProGuard spots any Java API references that still cannot be satisfied by the deterministic
`rt.jar`, and hence it will break the build.
## Configuring IntelliJ with a Deterministic SDK
We would like to configure IntelliJ so that it will highlight uses of non-deterministic Java APIs as not found.
Or, more specifically, we would like IntelliJ to use the `deterministic-rt.jar` as a “Module SDK” for deterministic
modules rather than the `rt.jar` from the default project SDK, to make IntelliJ consistent with Gradle.
This is possible, but slightly tricky to configure because IntelliJ will not recognise an SDK containing only the
`deterministic-rt.jar` as being valid. It also requires that IntelliJ delegate all build tasks to Gradle, and that
Gradle be configured to use the Project’s SDK.
Gradle creates a suitable JDK image in the project's `jdk8u-deterministic/jdk` directory, and you can configure IntelliJ to use this location for this SDK. However, you should also be aware that IntelliJ SDKs are available for *all* projects to use.

To create this JDK image, execute the following:
```bash
$ gradlew jdk8u-deterministic:copyJdk
```
Now select `File/Project Structure/Platform Settings/SDKs` and add a new JDK SDK with the `jdk8u-deterministic/jdk` directory as its home. Rename this SDK to something like “1.8 (Deterministic)”.

This *should* be sufficient for IntelliJ. However, if IntelliJ realises that this SDK does not contain a full JDK then you will need to configure the new SDK by hand:
* Create a JDK Home directory with the following contents:
`jre/lib/rt.jar`
where `rt.jar` here is this renamed artifact:
```xml
<dependency>
<groupId>net.corda</groupId>
<artifactId>deterministic-rt</artifactId>
<classifier>api</classifier>
</dependency>
```
* While IntelliJ is *not* running, locate the `config/options/jdk.table.xml` file in IntelliJ’s configuration
directory. Add an empty `<jdk>` section to this file:
```xml
<jdk version="2">
<name value="1.8 (Deterministic)"/>
<type value="JavaSDK"/>
<version value="java version "1.8.0""/>
<homePath value=".. path to the deterministic JDK directory .."/>
<roots>
</roots>
</jdk>
```
* Open IntelliJ and select `File/Project Structure/Platform Settings/SDKs`. The “1.8 (Deterministic)” SDK
should now be present. Select it and then click on the `Classpath` tab. Press the “Add” / “Plus” button to
add `rt.jar` to the SDK’s classpath. Then select the `Annotations` tab and include the same JAR(s) as
the other SDKs.
* Open the root `build.gradle` file and define this property:
```gradle
buildscript {
ext {
...
deterministic_idea_sdk = '1.8 (Deterministic)'
...
}
}
```
* Go to `File/Settings/Build, Execution, Deployment/Build Tools/Gradle`, and configure Gradle’s JVM to be the
project’s JVM.
* Go to `File/Settings/Build, Execution, Deployment/Build Tools/Gradle/Runner`, and select these options:
* Delegate IDE build/run action to Gradle
* Run tests using the Gradle Test Runner
* Delete all of the `out` directories that IntelliJ has previously generated for each module.
* Go to `View/Tool Windows/Gradle` and click the `Refresh all Gradle projects` button.
These steps will enable IntelliJ’s presentation compiler to use the deterministic `rt.jar` with the following modules:
* `core-deterministic`
* `serialization-deterministic`
* `core-deterministic:testing:common`
but still build everything using Gradle with the full JDK.
## Testing the Deterministic Modules
The `core-deterministic:testing` module executes some basic JUnit tests for the `core-deterministic` and
`serialization-deterministic` JARs. These tests are compiled against the deterministic `rt.jar`, although
they are still executed using the full JDK.
The `testing` module also has two sub-modules:
`core-deterministic:testing:data`

This module generates test data such as serialised transactions and elliptic curve key pairs using the full non-deterministic `core` library and JDK. This data is all written into a single JAR which the `testing` module adds to its classpath.

`core-deterministic:testing:common`

This module provides the test classes which the `testing` and `data` modules need to share. It is therefore compiled against the deterministic API subset.
## Applying @KeepForDJVM and @DeleteForDJVM annotations
Corda developers need to understand how to annotate classes in the `core` and `serialization` modules correctly
in order to maintain the deterministic JARs.
{{< note >}}
Every Kotlin class still has its own `.class` file, even when all of those classes share the same
source file. Also, annotating the file:
```kotlin
@file:KeepForDJVM
package net.corda.core.internal
```
*does not* automatically annotate any class declared *within* this file. It merely annotates any
accompanying Kotlin `xxxKt` class.
{{< /note >}}
For more information about how `JarFilter` is processing the byte-code inside `core` and `serialization`,
use Gradle’s `--info` or `--debug` command-line options.
Classes that *must* be included in the deterministic JAR should be annotated as `@KeepForDJVM`.
```kotlin
@Target(FILE, CLASS)
@Retention(BINARY)
@CordaInternal
annotation class KeepForDJVM
```
[KeepForDJVM.kt](https://github.com/corda/corda/blob/release/os/4.1/core/src/main/kotlin/net/corda/core/KeepForDJVM.kt)
To preserve any Kotlin functions, properties or type aliases that have been declared outside of a `class`,
you should annotate the source file’s `package` declaration instead:
```kotlin
@file:JvmName("InternalUtils")
@file:KeepForDJVM
package net.corda.core.internal
infix fun Temporal.until(endExclusive: Temporal): Duration = Duration.between(this, endExclusive)
```
Elements that *must* be deleted from classes in the deterministic JAR should be annotated as `@DeleteForDJVM`.
```kotlin
@Target(
FILE,
CLASS,
CONSTRUCTOR,
FUNCTION,
PROPERTY_GETTER,
PROPERTY_SETTER,
PROPERTY,
FIELD,
TYPEALIAS
)
@Retention(BINARY)
@CordaInternal
annotation class DeleteForDJVM
```
[DeleteForDJVM.kt](https://github.com/corda/corda/blob/release/os/4.1/core/src/main/kotlin/net/corda/core/DeleteForDJVM.kt)
You must also ensure that a deterministic class’s primary constructor does not reference any classes that are
not available in the deterministic `rt.jar`. The biggest risk here would be that `JarFilter` would delete the
primary constructor and that the class could no longer be instantiated, although `JarFilter` will print a warning
in this case. However, it is also likely that the “determinised” class would have a different serialisation
signature than its non-deterministic version and so become unserialisable on the deterministic JVM.

Primary constructors that have non-deterministic default parameter values must still be annotated as
`@DeleteForDJVM` because they cannot be refactored without breaking Corda’s binary interface. The Kotlin compiler
will automatically apply this `@DeleteForDJVM` annotation - along with any others - to all of the class’s
secondary constructors too. The `JarFilter` plugin can then remove the `@DeleteForDJVM` annotation from the
primary constructor so that it can subsequently delete only the secondary constructors.

The annotations that `JarFilter` will “sanitise” from primary constructors in this way are listed in the plugin's
configuration block, e.g.
```groovy
task jarFilter(type: JarFilterTask) {
...
annotations {
...
forSanitise = [
"net.corda.core.DeleteForDJVM"
]
}
}
```
Be aware that package-scoped Kotlin properties are all initialised within a common `<clinit>` block inside
their host `.class` file. This means that when `JarFilter` deletes these properties, it cannot also remove
their initialisation code. For example:
```kotlin
package net.corda.core
@DeleteForDJVM
val map: MutableMap<String, String> = ConcurrentHashMap()
```
In this case, `JarFilter` would delete the `map` property but the `<clinit>` block would still create
an instance of `ConcurrentHashMap`. The solution here is to refactor the property into its own file and then
annotate the file itself as `@DeleteForDJVM` instead.

Sometimes it is impossible to delete a function entirely. Or a function may have some non-deterministic code
embedded inside it that cannot be removed. For these rare cases, there is the `@StubOutForDJVM`
annotation:
```kotlin
@Target(
CONSTRUCTOR,
FUNCTION,
PROPERTY_GETTER,
PROPERTY_SETTER
)
@Retention(BINARY)
@CordaInternal
annotation class StubOutForDJVM
```
[StubOutForDJVM.kt](https://github.com/corda/corda/blob/release/os/4.1/core/src/main/kotlin/net/corda/core/StubOutForDJVM.kt)
This annotation instructs `JarFilter` to replace the function’s body with either an empty body (for functions
that return `void` or `Unit`) or one that throws `UnsupportedOperationException`. For example:
```kotlin
fun necessaryCode() {
nonDeterministicOperations()
otherOperations()
}
@StubOutForDJVM
private fun nonDeterministicOperations() {
// etc
}
```
---
layout: post
title: Security Culture
---
# What It Is, Why We Need It, and How to Do It
## What Is Security Culture?
In the political context, security culture is a set of customs and practices shared by a community or movement that are designed to minimize risk.
At its most basic, security culture is rooted in the principle of sharing sensitive information only on a need-to-know basis—not an “I trust you” basis or “I really want to know” basis.
## How Do We Do It?
Do not talk about your involvement or someone else’s involvement in illegal activities—past, present, or future. It might also include not talking about someone’s immigration status or other characteristics that might make them a bigger target for the State.
The exceptions are fairly obvious: (1) you can discuss such activities with the people you are actually planning those activities with (but choose people carefully); (2) you can discuss them after you have been convicted for them, provided you don’t incriminate anyone else. Obviously, different actions pose different risks and therefore, the precise level of information that is shared might vary. But need-to-know basis is the best starting point.
Common ways that people break this aspect of security culture include:
● *lying* about things they’ve done to impress others or prove themselves;
● *bragging* about things they’ve done, also often to impress others or prove how “radical” they are;
● *gossiping* about things they know about (“I heard so and so was involved in that”), often part of an effort to feel in-the-know and well-connected;
● *asking* a lot of questions about who was involved in various illegal actions, or what previous illegal actions someone has participated in.
● *indirect bragging/gossiping* by making a big deal about how hardcore they are, how they are trying to stay super underground/anonymous, or how much they can’t tell you about because of security culture (“Well, I could tell you soooo many stories, but like, security culture you know!”)
● *pressuring/manipulating* people to be involved in things in which they don’t want to be involved, sometimes by questioning their commitment to the cause or using guilt as a manipulation tactic.
There are other related aspects of security culture as well.

<p style="font-size:10px;"> Image credit: Rob Wilson Photography </p>
__Do not talk to law enforcement or cooperate with their investigations. Ever.__
Cops never qualify as need-to-know. Everything you say to law enforcement will be misquoted, pulled out of context, and used against you and others. Nothing you say to them can help you. Often people feel that they have nothing to hide from the police or the state. Everyone has something to hide, because the cops are gathering intelligence and mapping communities. Cops lie routinely. The only three things out of your mouth should be: *“I’m going to remain silent. I want to speak to a lawyer. I do not consent to a search.”*
__Take other appropriate measures to protect sensitive information.__
This means not sharing account passwords or keys/combinations to locks with people that don’t need them; using encryption for storing or sending sensitive digital information; not storing or communicating sensitive information digitally in the first place; hiding or deleting personal information that is publicly accessible on the internet.
The appropriate security measures depend on the type of information and the particular risk. Is it information that if stolen, could jeopardize someone’s employment, or simply would inconvenience the organization? Is the security measure directed at thwarting the State and law enforcement, or non-State actors like fascists or corporate security?
__Commit to addressing breaches of security culture in a constructive way.__
Breaches will happen, and people are not perfect. But the breaches need to be addressed, constructively and without speculation as to motive. Many breaches will be a result of people who forgot, made a mistake, simply didn’t know, or some other common error. They should be reminded or educated in a way that supports their effort and growth and doesn’t judge or shame. However, sometimes people who clearly know better will make flagrant, repeated breaches. This can be extremely dangerous and must be dealt with. Unfortunately, sometimes dangerous people just have to be kicked out.
In addressing any breach, it’s bad to speculate about someone’s motives. It’s often tempting to say things like “that’s classic informant behavior,” or “they must be a snitch.” Such speculation is problematic for two reasons. One, it can easily turn into honest people getting snitch jacketed and a lot of paranoia and finger pointing, which may be more damaging than the original security culture breach was in the first place. Second, it takes the discussion away from the actual behavior and the harm or danger it created, and makes the discussion about the alleged motivation. Address the behavior, not the motivation.
__Build a movement culture that is rooted in anti-oppression.__
Many of the common security culture breaches—bragging, lying, pressuring, etc.—stem from macho behavior and macho culture in our movements and communities. People often feel a need to prove how “radical” or “down” or “committed” they are. Many circles will boost up the people who are seen as the most badass or militant. We would do well to critically examine the underlying cultures we create in our movements and communities and how they may be starkly at odds with good security culture.
## Myths, Misconceptions, and Misapplications
__Security culture is not about hiding abuse or dirty laundry.__
Sometimes people who abuse, manipulate, coerce, or deceive the people around them will try to use security culture to hide their harmful actions. This is an abuse of security culture and a danger to our movements. People need to be accountable for their harmful actions, and if someone is a danger to others—whether through malice or just carelessness—that information needs to be shared so that people can take appropriate steps to protect themselves.
__Security culture is not about excluding new-comers, being inaccessible to people with different backgrounds, or being extra cool.__
We have to be aware of how our security practices might make our movements less accessible. A movement that can’t attract new participants, help build their political education and organizing capacity, and acculturate them into the resistance will flounder in its own isolation.
__Security Culture is not about witch hunts to find suspected infiltrators.__
Investigating people is extremely time consuming and is unlikely to produce compelling evidence. False or unsupported accusations or those based on conjecture and innuendo (sometimes called “snitch jacketing” or “bad jacketing”) are extremely damaging to movements—sometimes even more damaging than the infiltrator themselves. It amplifies paranoia, distrust, and conflict.
__Security culture cannot keep you safe. Our only real safety is in liberation!__
If you trust the wrong person with whom to plan an illegal action, and that person turns out to be an informant, or later decides to cooperate with the state, you might be in a bad spot. Taking elaborate measures to encrypt all your digital information and communications might fail if the government finds a way to access your encryption key. Kicking out the macho bragger but not doing anything about the quiet abuser might help protect against one risk, but creates a dozen others.
Resisting the misery and violence of the status quo and fighting for collective liberation is dangerous work, and there are powerful forces aligned against us. Security culture will help mitigate certain risks, but it cannot eliminate them. Our best security is in our solidarity with one another, our accountability to each other, and our fight for freedom.
# tsg_toolbox
A collection of useful functions to process thermosalinograph data
### [CVE-2013-7183](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7183)



### Description
cgi-bin/reboot.cgi on Seowon Intech SWC-9100 routers allows remote attackers to (1) cause a denial of service (reboot) via a default_reboot action or (2) reset all configuration values via a factory_default action.
### POC
#### Reference
- http://www.kb.cert.org/vuls/id/431726
#### Github
No PoCs found on GitHub currently.
# Reference Apps - THEO Metadata Handling
The purpose of this app is to demonstrate how [THEOplayer] can be set up and configured for playback
of content which contains metadata.
For quick start, please proceed with the [Quick Start](#quick-start) guide.
## Guides
The guides below provide a detailed explanation of how to extract various types of metadata from a given stream.
* [THEOplayer How To's - Extracting Stream Metadata]
This app is an extension of [THEO Basic Playback] application. For help with getting started with
THEOplayer or Android Studio feel free to check related guides:
* [THEO Knowledge Base - Android Studio Setup]
* [THEO Knowledge Base - Virtual and Physical Devices]
* [THEOplayer How To's - THEOplayer Android SDK Integration]
## Quick Start
1. Obtain THEOplayer Android SDK and unzip it.
Please visit [Get Started with THEOplayer] to get the required THEOplayer Android SDK.
2. Copy **`theoplayer-android-[name]-[version]-minapi16-release.aar`** file from unzipped SDK into
application **[libs]** folder and rename it to **`theoplayer.aar`**.
The project is configured to load the SDK under this name; to use a different name, change the `implementation ':theoplayer@aar'` dependency in the [app-level build.gradle] file accordingly (see the Gradle sketch after this list).
Please check [THEOplayer How To's - THEOplayer Android SDK Integration] guide for more information
about integrating THEOplayer Android SDK.
3. Open _**THEO Metadata Handling**_ application in Android Studio.
For more information about installing Android Studio please check
[THEO Knowledge Base - Android Studio Setup] guide.
Android Studio should automatically synchronize and rebuild the project. If this does not happen, select the **File > Sync Project with Gradle Files** menu item to do it manually. Please note that in very rare cases it may be necessary to synchronize the project twice.
4. Select the **Run > Run 'app'** menu item to run the application on the device selected by default.
To change the device please select **Run > Select Device...** menu item. For more information
about working with Android devices please check [THEO Knowledge Base - Virtual and Physical Devices]
guide.
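For reference, the wiring from step 2 in the app-level `build.gradle` typically looks like the sketch below. The `flatDir` repository entry is an assumption about how the sample project resolves the local AAR and may already be present in the sample:
```groovy
// Resolve the local AAR copied into app/libs in step 2.
repositories {
    flatDir {
        dirs 'libs' // folder containing theoplayer.aar
    }
}

dependencies {
    implementation ':theoplayer@aar' // loads libs/theoplayer.aar
}
```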
## Streams/Content Rights:
The DRM streams used in this app (if any) are provided by our partner [EZ DRM], who holds all the rights to the content. These streams are DRM protected and cannot be used for any other purposes.
## License
This project is licensed under the BSD 3 Clause License - see the [LICENSE] file for details.
[//]: # (Links and Guides reference)
[THEOplayer]: https://www.theoplayer.com/
[THEO Basic Playback]: ../Basic-Playback
[THEO Knowledge Base - Android Studio Setup]: ../Basic-Playback/guides/knowledgebase-android-studio-setup/README.md
[THEO Knowledge Base - Virtual and Physical Devices]: ../Basic-Playback/guides/knowledgebase-virtual-and-physical-devices/README.md
[THEOplayer How To's - THEOplayer Android SDK Integration]: ../Basic-Playback/guides/howto-theoplayer-android-sdk-integration/README.md
[THEOplayer How To's - Extracting Stream Metadata]: guides/howto-extracting-stream-metadata/README.md
[Get Started with THEOplayer]: https://www.theoplayer.com/licensing
[EZ DRM]: https://ezdrm.com/
[//]: # (Project files reference)
[LICENSE]: LICENSE
[libs]: app/libs
[app-level build.gradle]: app/build.gradle
---
description: RecordType Property (ADO)
title: RecordType Property (ADO) | Microsoft Docs
ms.prod: sql
ms.prod_service: connectivity
ms.technology: ado
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.topic: reference
apitype: COM
f1_keywords:
- _Record::get_RecordType
- _Record::RecordType
- _Record::GetRecordType
helpviewer_keywords:
- RecordType property [ADO]
ms.assetid: 790e46a2-13d2-451e-a8be-130bd9a206a4
author: rothja
ms.author: jroth
ms.openlocfilehash: e5bf9326adeb7d1025eb011bf9474da3c77e65fa
ms.sourcegitcommit: 917df4ffd22e4a229af7dc481dcce3ebba0aa4d7
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/10/2021
ms.locfileid: "100051701"
---
# <a name="recordtype-property-ado"></a>RecordType Property (ADO)
Indicates the type of the [Record](./record-object-ado.md) object.
## <a name="return-value"></a>Return Value
Returns a [RecordTypeEnum](./recordtypeenum.md) value.
## <a name="remarks"></a>Remarks
The **RecordType** property is read-only.
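A small usage sketch in VBScript follows. The source and URL are placeholders, and the `RecordTypeEnum` constants are declared explicitly because plain VBScript does not import the ADO type library:
```vb
' Sketch: open a Record and branch on its RecordType (values from RecordTypeEnum)
Const adSimpleRecord = 0, adCollectionRecord = 1, adStructDoc = 2

Dim rec
Set rec = CreateObject("ADODB.Record")
rec.Open "reports", "URL=https://example.com/files/"   ' placeholder source

Select Case rec.RecordType
    Case adSimpleRecord
        WScript.Echo "Simple record (no child records)"
    Case adCollectionRecord
        WScript.Echo "Collection record (has children)"
    Case adStructDoc
        WScript.Echo "Structured document"
End Select

rec.Close
```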
## <a name="applies-to"></a>Applies To
[Record Object (ADO)](./record-object-ado.md)
## <a name="see-also"></a>See Also
[Type Property (ADO)](./type-property-ado.md)
[Type Property (ADO Stream)](./type-property-ado-stream.md)
# Project: Field Warning
Project: Field Warning is a community-made RTS game centered around squad and company-scale warfare.
We are currently on Unity version **2018.3.6**.
## Downloading and Installing
**We recommend using [GitHub Desktop](https://desktop.github.com) to download**;
it provides an easy way to synchronize with the latest changes. If it is
inconvenient to download, or you do not want to sign up, you can use another
Git client (like [Sourcetree](https://www.atlassian.com/software/sourcetree)), or just download a ZIP archive.
### ...with GitHub Desktop
1. Download [GitHub Desktop](https://desktop.github.com).
2. Create a [GitHub account](https://github.com/join), and sign into GitHub
Desktop.
3. Click File → Clone Repository; in the dialog box, under URL insert
`https://github.com/FieldWarning/projectFieldWarning`, and clone it into any folder you like.
#### ...with another Git client
*This guide presumes that this client is set up and configured.*
1. Clone `https://github.com/FieldWarning/projectFieldWarning` into any folder you like.
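For a plain command-line client, a single clone command is enough:
```sh
git clone https://github.com/FieldWarning/projectFieldWarning
```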
### ...without using Git
**Warning:** You may fall behind from the latest version of the game. Make sure
to check this page often, and redownload. You will also be unable to make commits to the game using this method.
1. Download the [master ZIP file](https://github.com/FieldWarning/projectFieldWarning/archive/master.zip) for the game.
2. Extract the ZIP into any folder you like.
## Running and Playing
1. Download [Unity version 2018.3.6](https://unity3d.com/get-unity/download/archive)
2. Run the Unity Installer
3. Once the Installer finishes, open Unity, and click "Open" in the top right.
4. Navigate to [Clone Destination Folder](https://github.com/FieldWarning/projectFieldWarning/tree/master/src/FieldWarning), and select the FieldWarning folder.
5. In Unity, open the Scene/full-featured-scene file (white/black cube icon).
6. Once fully loaded, click the Play button and ensure everything works correctly. Enjoy!
#### Help, it's not working!
No need to worry. You can join our [Discord Server](https://discord.gg/ExQtQX4), and ask for help in #general.
Our community is friendly and active—so don't be afraid to ask any questions or voice opinions.
If you believe something is not working because of a bug, error, or otherwise *project*-related reason then please file a bug report [here](https://github.com/FieldWarning/projectFieldWarning/issues).
# Joining the community
Our project is 100% community based and everyone contributes on their own time. Without the staff, contributors, and members, this project would not have gotten this far. Thank you, *everyone*, for all your support.
*Anyone* is welcome to join and newcomers will be welcomed with open arms. If you are interested in becoming an active or contributing member, then make sure to join the Discord server by clicking [this](https://discord.gg/ExQtQX4) invite.
Assuming you've joined successfully, read the rules, and visit (and sign up for) everything listed in #links. Fill out and submit a Developer Application form. You must have the following to complete it:
- Gmail Account
- GitHub Account
- Trello Account
- Discord Username
We use the following platforms: [Trello](https://www.trello.com), [Discord](https://www.discordapp.com), and [Google Docs](https://docs.google.com)
## Documentation
Any available documentation is available in [projectFieldWarning/documentation](https://github.com/FieldWarning/projectFieldWarning/tree/master/documentation) as well as our [Wiki](https://github.com/FieldWarning/projectFieldWarning/wiki)
Please note [How to Contribute](https://github.com/FieldWarning/projectFieldWarning/tree/master/documentation/HOW_TO_CONTRIBUTE.md), [How to Utilize Trello](https://github.com/FieldWarning/projectFieldWarning/blob/master/documentation/TRELLO.md), and [our C# coding style](https://github.com/FieldWarning/projectFieldWarning/blob/master/documentation/coding-style.md).
Ensure you've read all documents (they're not that long, don't worry) and adhere to them when contributing.
## Licensing
For the community project to work, all contributions need to be permissively licensed. By providing commits to this repository, you are licensing your contribution to the PFW contributors and the rest of the world under APv2 (see LICENSE file).
If your commit contains files which you cannot or will not license to us under the APv2, please update the foreign-asset-licenses.md file. For code files, a notice in the header is required.
| 70.9375 | 370 | 0.780176 | eng_Latn | 0.982683 |
c94855243f4868d162370d8ea97614262b25b618 | 253 | md | Markdown | _includes/about/zh.md | roife/roife.github.io | e16b9ec345f23caae6b3834d20ef620692df25f7 | [
"Apache-2.0"
] | 9 | 2020-09-01T15:56:55.000Z | 2022-01-28T08:21:55.000Z | _includes/about/zh.md | roife/roife.github.io | e16b9ec345f23caae6b3834d20ef620692df25f7 | [
"Apache-2.0"
] | 8 | 2020-09-02T00:50:47.000Z | 2021-12-29T04:57:19.000Z | _includes/about/zh.md | roife/roife.github.io | e16b9ec345f23caae6b3834d20ef620692df25f7 | [
"Apache-2.0"
] | 8 | 2021-01-28T12:24:34.000Z | 2022-03-25T14:52:21.000Z | > **R**emember **OI** **F**or **E**ver
I am roife, from Zhejiang.
- 2019 to present: Computer Science undergraduate at Beihang University (ongoing)
### About roife
- Former OIer
- Fan of fluffy things
- Things I like:
  + Compilers/Program Analysis
  + PL (Type System/FP/Verification)
  + Emacs
  + SCP Foundation
  + ACGN
### Contact
Email: [email protected]
| 12.047619 | 38 | 0.608696 | yue_Hant | 0.585602 |
c948a968230023c2c30c4391659ade5bcf47e3da | 1,480 | md | Markdown | TripAdvisor/Readme.md | may811204/SetGame | b0e7afdb2529011d0fed5d878bc209796f6024a8 | [
"Apache-2.0"
] | null | null | null | TripAdvisor/Readme.md | may811204/SetGame | b0e7afdb2529011d0fed5d878bc209796f6024a8 | [
"Apache-2.0"
] | null | null | null | TripAdvisor/Readme.md | may811204/SetGame | b0e7afdb2529011d0fed5d878bc209796f6024a8 | [
"Apache-2.0"
] | null | null | null | # Plagiarism Detector
# Project Objective
Implement a plagiarism detection algorithm using an N-tuple comparison that allows for synonyms.
# Parameters
3 required arguments, 1 optional argument.
+ File name for a list of synonyms
+ First File Name
+ Second File Name
+ N-Gram Size [Default = 3]
Input files may contain punctuation and special characters.
Words that differ only in upper/lower case are classified in the same group (e.g., Dog, dog, and dOg are considered the same).
Sample input files are included (file1.txt, file2.txt, syns.txt).
# Algorithm Overview
Includes the Rabin-Karp search algorithm for fast string pattern searching. [Rabin-Karp Search](https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm)
Includes a union-find data structure for collecting groups of similar words (e.g., run, jog, sprint -> same group with the same parent).
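As a rough sketch of the union-find idea (the class and method names below are illustrative, not the ones used in this project): each word is mapped to a canonical parent, and two words count as synonyms when they share the same representative.
```java
import java.util.HashMap;
import java.util.Map;

// Illustrative union-find over words: synonyms end up sharing one canonical parent.
class SynonymGroups {
    private final Map<String, String> parent = new HashMap<>();

    // Returns the canonical representative of a word, registering it on first use.
    String find(String word) {
        parent.putIfAbsent(word, word);
        String p = parent.get(word);
        if (!p.equals(word)) {
            p = find(p);           // path compression
            parent.put(word, p);
        }
        return p;
    }

    // Merges the groups containing the two words, e.g. union("run", "jog").
    void union(String a, String b) {
        parent.put(find(a), find(b));
    }

    // True when two words belong to the same synonym group.
    boolean sameGroup(String a, String b) {
        return find(a).equals(find(b));
    }
}
```
With a structure like this, every word in an N-tuple can be replaced by its representative before hashing, so tuples that differ only by synonyms produce the same hash.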
# Output
The program outputs the percentage of tuples in file1 that appear in file2.
# Dependencies
[Maven](https://maven.apache.org/download.cgi "Maven Build")
[Java 7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html "JDK 7")
# Usage
Before each build
```
mvn clean package
```
Compile and execute
+ Make sure the input files are in the project folder, at the same level as Main.java.
```
~$ javac Main.java
~$ java Main -f1 file1.txt -f2 file2.txt -s syns.txt
```
Sample Command Line arguments
```
java -jar $JAR.jar $FIRST_FILE $SECOND_FILE $SYN_FILE $NUM_TUPLE
```
## License
Chia-Ju, Chen [2018] | 30.204082 | 151 | 0.758108 | eng_Latn | 0.887433 |
c948ac139e249ed6719bc9242f6801eb204e9759 | 32 | md | Markdown | README.md | YehanZhou/fullstack | 768cd3c90af589bccd610fff0f5267a669a7470b | [
"MIT"
] | 1 | 2020-03-26T16:56:28.000Z | 2020-03-26T16:56:28.000Z | README.md | YehanZhou/fullstack | 768cd3c90af589bccd610fff0f5267a669a7470b | [
"MIT"
] | null | null | null | README.md | YehanZhou/fullstack | 768cd3c90af589bccd610fff0f5267a669a7470b | [
"MIT"
] | null | null | null | # fullstack
A fullstack project
| 10.666667 | 19 | 0.8125 | eng_Latn | 0.845853 |
c948e15cb89d1d9397b3d2d2565dde7ed0c11ad9 | 3,620 | md | Markdown | articles/mariadb/reference-stored-procedures.md | Gnafu/azure-docs.it-it | ffd06317c56e8145ce0c080b519a42f42e6b7527 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/mariadb/reference-stored-procedures.md | Gnafu/azure-docs.it-it | ffd06317c56e8145ce0c080b519a42f42e6b7527 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/mariadb/reference-stored-procedures.md | Gnafu/azure-docs.it-it | ffd06317c56e8145ce0c080b519a42f42e6b7527 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Stored procedures in Azure Database for MariaDB
description: This article presents the stored procedures specific to Azure Database for MariaDB.
author: ajlam
ms.author: andrela
ms.service: mariadb
ms.topic: conceptual
ms.date: 09/20/2019
ms.openlocfilehash: d9daaf619a19c0f4e4a591d4bbb4925679fd1fcb
ms.sourcegitcommit: f2771ec28b7d2d937eef81223980da8ea1a6a531
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/20/2019
ms.locfileid: "71174903"
---
# <a name="azure-database-for-mariadb-stored-procedures"></a>Azure Database for MariaDB stored procedures
Stored procedures are available on Azure Database for MariaDB servers to help manage your MariaDB server. This includes managing your server's connections and queries, and setting up Data-in Replication.
## <a name="data-in-replication-stored-procedures"></a>Data-in Replication stored procedures
Data-in Replication lets you synchronize data from a MariaDB server running on-premises, in virtual machines, or in database services hosted by other cloud providers into the Azure Database for MariaDB service.
The following stored procedures are used to set up or remove Data-in Replication between a master server and a replica.
|**Stored procedure name**|**Input parameters**|**Output parameters**|**Usage note**|
|-----|-----|-----|-----|
|*mysql.az_replication_change_master*|master_host<br/>master_user<br/>master_password<br/>master_port<br/>master_log_file<br/>master_log_pos<br/>master_ssl_ca|N/A|To transfer data with SSL mode, pass the CA certificate's context into the master_ssl_ca parameter. </br><br>To transfer data without SSL, pass an empty string into the master_ssl_ca parameter.|
|*mysql.az_replication_start*|N/A|N/A|Starts replication.|
|*mysql.az_replication_stop*|N/A|N/A|Stops replication.|
|*mysql.az_replication_remove_master*|N/A|N/A|Removes the replication relationship between the master server and the replica.|
|*mysql.az_replication_skip_counter*|N/A|N/A|Skips one replication error.|
To configure Data-in Replication between a master server and a replica in Azure Database for MariaDB, see [how to configure Data-in Replication](howto-data-in-replication.md).
## <a name="other-stored-procedures"></a>Other stored procedures
The following stored procedures are available in Azure Database for MariaDB to manage your server.
|**Stored procedure name**|**Input parameters**|**Output parameters**|**Usage note**|
|-----|-----|-----|-----|
|*mysql.az_kill*|processlist_id|N/A|Equivalent to the [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Kills the connection associated with the provided processlist_id, after terminating any statement the connection is executing.|
|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to the [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Kills the statement the connection is currently executing, while leaving the connection alive.|
|*mysql.az_load_timezone*|N/A|N/A|Loads the time zone tables so that the `time_zone` parameter can be set to named values, such as "US/Pacific".|
## <a name="next-steps"></a>Next steps
- Learn how to configure [Data-in Replication](howto-data-in-replication.md)
- Learn how to use the [time zone tables](howto-server-parameters.md#working-with-the-time-zone-parameter) | 75.416667 | 377 | 0.784254 | ita_Latn | 0.969347
c9490f7687cc906c02189d0933707ca9d256c7a9 | 5,458 | markdown | Markdown | _posts/2019/2019-03-12-real-estate-ad.markdown | Halsien/halsien.github.io | ead7b1ff4de21922c329f753091ff8e6c25d7af9 | [
"MIT"
] | 1 | 2019-02-09T00:56:55.000Z | 2019-02-09T00:56:55.000Z | _posts/2019/2019-03-12-real-estate-ad.markdown | Halsien/halsien.github.io | ead7b1ff4de21922c329f753091ff8e6c25d7af9 | [
"MIT"
] | null | null | null | _posts/2019/2019-03-12-real-estate-ad.markdown | Halsien/halsien.github.io | ead7b1ff4de21922c329f753091ff8e6c25d7af9 | [
"MIT"
] | null | null | null | ---
layout: "post"
title: "Real Estate Ads for Facebook: Getting Leads"
date: "2019-03-12 22:22"
tag:
- project
- real estate
- facebook
- tips
- ad
- lead magnet
projects: true
hidden: true
description: "Real estate ads I made, with detailed breakdown."
category: project
author: nickrowdon
externalLink: false
---
In this writeup we discuss two real estate ads. Both ads are designed to get readers' contact information and, down the road, turn them into paying customers. Both ads target different audiences, but their end goal is the same. Pretty simple, right? But how do we get that contact information?
We have to exchange something for it. No reader will give us their information unless they feel like they’re going to benefit from it. This is what lead magnets are. We provide something the reader finds valuable in exchange for their contact information.
Remember, in time we will turn these lists of contacts into paying customers.
Most people buying homes in this area are already locals. We will keep the ads local, too.
With that in mind, let’s look at the ads!
---
<div class="side-by-side">
<div class="toleft">
<picture>
<img src="{{ site.url }}/assets/images/realestate/homead1.jpg" alt="Home Ad Example">
</picture>
</div>
<div class="toright">
<picture>
<img src="{{ site.url }}/assets/images/realestate/homead2.jpg" alt="Home Ad Example">
</picture>
</div>
</div>
---
# Our First Ad
<picture>
<img src="{{ site.url }}/assets/images/realestate/homead1.jpg" alt="Home Ad Example">
</picture>
This ad is for people who are conscious about the value of their home. Not necessarily buying or selling a home yet, but will be down the road.
### Body Copy
We start the body copy off by calling out people already owning homes. This is because it's easier to sell a home to someone who's already been through the process of buying a home. If we aimed it at renters, we would have to educate them at some point down the road on why they should be owning rather than renting. We would group renters in a separate funnel than owners.
A study done by the Canadian Association of Accredited Mortgage Professionals claims the average Canadian will own 4-6 homes in their lifetime. It's in the homeowner's best interest to want to increase the value of their home. Knowing this, we tell them they can increase the value of their home by up to 50%. Now that will grab someone's attention.
We know people will be skeptical of us. People will say they can't afford to make the changes. The line "Without Breaking The Bank" preemptively answers the skeptics. The reader can't have any reason to scroll past us after reading our post.
Next we get right to the point and tell the reader to click the image to learn more.
### Image
We used a beautiful yet realistic-looking interior. We want to create an image in the reader's mind that they can get that dream home with our help. Visual imagery is really powerful.
The blurred blue bar with white text is contrasting and visually pleasing. We chose a serif font to give a more professional feel to the ad.
### The Rest
If the reader reads nothing but the headline, they will still understand what the ad is about. "Your Home Will Shine With These 7 Hacks!" The word hack is one that people have pretty strong opinions over. Some say it's overused, some say it's cliché. But results speak for themselves, and if using a word like "Tips" outperforms it, then we will change it.
A simple "Learn More" button is all we need for this ad.
---
# Our Second Ad
<picture>
<img src="{{ site.url }}/assets/images/realestate/homead2.jpg" alt="Home Ad Example">
</picture>
### Body Copy
This ad will show up for people interested in buying a new home. By telling them right away that it has 6 bedrooms and 4 bathrooms, only good leads will continue with us. Someone buying a 1 bedroom apartment will not continue through our funnel.
The line "Click Below for Price, Location + More Pics!" is really important here. In advertising we have something called "above the fold." It originates from print advertising, it's the portion of the paper the reader can see before unfolding it.
In our case, the "fold" is the "see more" button. People may not click it. Lengthy text drives people away from things. Right away we tell them to just click for more information. They don't even have to read the full ad to learn more.
Those who do want to continue reading will learn more about the home. We mention things that help homes sell and increase value. The "+ MUCH MORE" builds curiosity. What else does this magical home have?
If that wasn't enough we throw in that it's an extremely rare opportunity. If they weren't sold yet, they definitely are now.
We reiterate to the reader to click below for more info. We don't want them to forget why they're reading the ad!
### Image
We use a gorgeous photo of the home that we're selling. We could do an album or video, but in this case we want the reader to click the ad to see more images.
We could add text, but we want the focus to be on the gorgeous home.
### The Rest
The headline and description here are for people who don't read the body copy, and skim the headline. What exactly does "Fully Loaded" mean? They'll have to click through to find out.
---
### Are you looking to run real estate advertisements? Please contact me and we can find the best steps to take together.
---
| 48.300885 | 373 | 0.75339 | eng_Latn | 0.999602 |
c949990fe8367f236bb6e5f33286f008021b68df | 607 | md | Markdown | read/README.md | phatak-dev/fresher-to-expert | bbe9aef51fef0afc084a3490b57fdaf90a7a68bc | [
"Apache-2.0"
] | 3 | 2015-07-21T10:18:48.000Z | 2018-01-29T14:31:30.000Z | read/README.md | phatak-dev/fresher-to-expert | bbe9aef51fef0afc084a3490b57fdaf90a7a68bc | [
"Apache-2.0"
] | 1 | 2015-03-18T21:13:45.000Z | 2015-03-18T21:13:45.000Z | read/README.md | phatak-dev/fresher-to-expert | bbe9aef51fef0afc084a3490b57fdaf90a7a68bc | [
"Apache-2.0"
] | 2 | 2017-11-08T15:50:16.000Z | 2019-12-06T09:18:59.000Z | # Read - Keep up with the world
Textbooks become obsolete faster than you think. In a fast-moving industry like ours, where technologies come and go within a few years, staying up to date with the latest trends and technologies is very important for a developer.
People coming from an engineering background often hate to read because of the kind of books recommended for reading. The resources shared here are nowhere near boring or useless.
In the following chapter, you are going to find a variety of resources to read, watch, and enjoy. These resources will help you improve your thinking about software development.
| 46.692308 | 217 | 0.803954 | eng_Latn | 0.999934 |
c949c4878d5a445e6b746bf0f95e9cdba7f90a2f | 244 | md | Markdown | package/patterns/README.md | ouduidui/fe-study | 03999e3cbdbbfb1f2a102d8bfc4016b0d45477d2 | [
"MIT"
] | null | null | null | package/patterns/README.md | ouduidui/fe-study | 03999e3cbdbbfb1f2a102d8bfc4016b0d45477d2 | [
"MIT"
] | null | null | null | package/patterns/README.md | ouduidui/fe-study | 03999e3cbdbbfb1f2a102d8bfc4016b0d45477d2 | [
"MIT"
] | null | null | null | # 设计模式学习笔记
> All of the implementations in these notes are written in `JavaScript`.
- [Design Principles](./docs/design_principles.md)
- [Singleton Pattern](./packages/singleton/README.md)
- [Factory Pattern](./packages/factory/README.md)
- [Abstract Factory Pattern](./packages/abstract-factory/README.md)
- [Proxy Pattern](./packages/proxy/README.md)
| 24.4 | 49 | 0.709016 | yue_Hant | 0.73121 |
c94a2deca9b14406d425fcaeae79927b668dc2ad | 7,345 | md | Markdown | DeemZ/README.md | Berat-Dzhevdetov/DeemZ-Platform | 603d07d226e5462a1610ebdef560407ad7b686f1 | [
"MIT"
] | 7 | 2021-11-13T17:31:36.000Z | 2022-03-10T13:36:51.000Z | DeemZ/README.md | Berat-Dzhevdetov/DeemZ | 997e4411f345df0758134d3dd5619c32bd9dfec7 | [
"MIT"
] | 30 | 2021-07-09T12:30:56.000Z | 2021-11-08T19:03:17.000Z | DeemZ/README.md | Berat-Dzhevdetov/DeemZ-Platform | 603d07d226e5462a1610ebdef560407ad7b686f1 | [
"MIT"
] | 3 | 2021-08-16T09:27:09.000Z | 2021-11-05T04:10:46.000Z | # DeemZ

This project is made with ASP.NET Core 5. The design is taken from [SoftUni](https://softuni.bg/) for educational purposes! Don't forget to run an initial migration if you want to start the app.
An ASP.NET Core web application for online programming learning, where you can take exams after the course and receive points.
## 🛠 Built with:
- ASP.NET Core MVC
- MS SQL Server
- Cloudinary
- Font-awesome
- Bootstrap
- SignalR
## Permissions:
| Permission | Guest | Logged User | Admin |
| -------------------------------------------------------------------- | ----- | --------------------------------------------- | ----- |
| Index page | ✅ | ✅ | ✅ |
| Privacy page | ✅ | ✅ | ✅ |
| Forum | ✅ | ✅ | ✅ |
| View Course Details | ✅ | ✅ | ✅ |
| Add report to resource | ❌ | ✅ | ✅ |
| Sign up for the course by paying | ❌ | ✅ | ❌ |
| View Course Resources | ❌ | ✅ (only if the user has paid for the course) | ✅ |
| Download Course Resources | ❌ | ✅ (only if the user has paid for the course) | ✅ |
| Admin Dashboard | ❌ | ❌ | ✅ |
| Add Course | ❌ | ❌ | ✅ |
| Edit Course | ❌ | ❌ | ✅ |
| Delete Course | ❌ | ❌ | ✅ |
| Add Lecture to Course | ❌ | ❌ | ✅ |
| Add Exam to Course | ❌ | ❌ | ✅ |
| Edit Exam | ❌ | ❌ | ✅ |
| Delete Exam | ❌ | ❌ | ✅ |
| Edit Lecture | ❌ | ❌ | ✅ |
| Delete Lecture | ❌ | ❌ | ✅ |
| Upload Resource to Lecture | ❌ | ❌ | ✅ |
| Delete Resource | ❌ | ❌ | ✅ |
| Edit User | ❌ | ❌ | ✅ |
| Sign Up User to Course (basically for adding lecturer to the course) | ❌ | ❌ | ✅ |
| Remove User From Course | ❌ | ❌ | ✅ |
| Delete report | ❌ | ❌ | ✅ |
## Unit tests coverage results:

## Database Diagram:

## Pages:
### Public Pages:
**Home Page**
This is the landing page of the application. From here you can read information about the company.

**Forum Topics**
On this page, all written topics are displayed, and you can get brief information about each topic. You can also search topics by title using the search bar at the top of the page.

**Course information**
On this page you can see information about the course, such as when it starts, what will be studied during the course, etc.
Part I

Part II

### Pages for Logged Users:
**Home Page**
On this page, users can see their current courses, surveys and resources.

**Posting a Topic**
From this page, you can create a new topic. After choosing an appropriate title and description, click the Create button at the bottom of the form.

**Course resources**
When you click on a course's resource, a window will appear in which you can view the resource, but only if you are an admin or signed up for the course! If the link is to another site (Facebook, YouTube), a new window will open in your browser.

After you pass the exam with 80% or more, a PDF file is automatically generated; it is a certificate proving that you have passed the course successfully.

**Report issue with course's resource or lecture**
If you find any issue with a course's resource or lecture, you can describe your problem, and it will be sent directly to an admin for review. Every resource has a link under it for reporting.


### Admin Pages:
**Admin Dashboard**
On this page you can see information such as total users, money earned in the last 30 days, total courses, and how many users signed up for courses in the last 30 days.

**Admin Group Chat**
On this page you can easily communicate with other admins.

| 60.702479 | 254 | 0.475153 | eng_Latn | 0.894549 |
c94ad46c242f55f4cef0f2023ef48bae2167225e | 5,944 | md | Markdown | README.md | tanhauhau/manta-style | 9450d8dd13aa85ca92728eda605ce0e38e5f975f | [
"MIT"
] | null | null | null | README.md | tanhauhau/manta-style | 9450d8dd13aa85ca92728eda605ce0e38e5f975f | [
"MIT"
] | 11 | 2018-08-16T15:37:31.000Z | 2018-09-04T11:27:54.000Z | README.md | tanhauhau/manta-style | 9450d8dd13aa85ca92728eda605ce0e38e5f975f | [
"MIT"
] | null | null | null | # Manta Style [](https://circleci.com/gh/Cryrivers/manta-style) [](https://codecov.io/gh/Cryrivers/manta-style/) [](https://github.com/Cryrivers/manta-style/blob/master/LICENSE) [](https://greenkeeper.io/)
> 🚀 Futuristic API Mock Server for Frontend
[Manta Style](https://github.com/Cryrivers/manta-style/issues/1) generates API mock endpoints from TypeScript type definitions automatically.
Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Usage](#usage)
- [Plugins](#plugins)
- [Contributing](#contributing)
- [Acknowledgments](#acknowledgments)
- [License](#license)
## Installation
### CLI
```sh
npm install --save-dev @manta-style/cli
```
You could also install it globally, which adds a command line tool `ms` to your system.
### Plugins
Manta Style needs plugins to support different file types and generate mock data.
The Quick Start example below uses TypeScript, so first you might want to install TypeScript support for Manta Style.
```sh
npm install --save-dev @manta-style/plugin-builder-typescript
```
If you are new to Manta Style, please install the plugins below. We are going to use them in [Quick Start](#quick-start).
```sh
npm install --save-dev @manta-style/plugin-mock-example @manta-style/plugin-mock-faker
```
You can check [Plugins](#plugins) for the usage of the official plugins. You can make your own plugins as well.
## Quick Start
### Create mock API configuration
You can use the following configuration for testing purposes. For more information about the syntax, please check out [Syntax](./documentation/syntax.md).
```ts
interface User {
/**
* @faker {{internet.userName}}
*/
userName: string;
gender: 0 | 1 | 2;
/**
* @faker date.past
*/
birthday: number;
/**
* @faker {{address.country}}
*/
country: string;
/**
* @faker {{address.state}}
*/
state: string;
/**
* @faker {{address.city}}
*/
city: string;
}
type WithResponseSuccess<T> = {
status: 'ok';
data: T;
};
type WithResponseFailure = {
status: 'error';
/**
* @example Bad Request
*/
message: string;
};
type WithResponse<T> = WithResponseSuccess<T> | WithResponseFailure;
export type GET = {
'/user': WithResponse<User>;
};
```
### Launch Manta Style
```sh
ms -c ./config.ts
```
Manta Style launches a mock server at port 3000 by default. The example above would generate the following output in the terminal:
```
Manta Style launched at http://localhost:3000
┌────────┬────────────────────────────┬────────┬───────┐
│ Method │ Endpoint │ Mocked │ Proxy │
├────────┼────────────────────────────┼────────┼───────┤
│ GET │ http://localhost:3000/user │ Y │ │
└────────┴────────────────────────────┴────────┴───────┘
Press O to configure selective mocking
[FAKER MODE] Press S to take an instant snapshot
```
### Access endpoints in your browser
To view the mock data for the example above, just launch a browser (or `curl`, `wget`) and access `http://localhost:3000/user`. Manta Style understands your type definition and generates mock data that respects it.
As `WithResponse<User> = WithResponseSuccess<User> | WithResponseFailure`, Manta Style randomly chooses one of the types in the union type. Therefore, it could generate mock data for any of the following cases:
1. `WithResponseSuccess<User>`:
```json
{
"status": "ok",
"data": {
"userName": "Zachariah.VonRueden20",
"gender": 2,
"birthday": 646869600,
"country": "Holy See (Vatican City State)",
"state": "Massachusetts",
"city": "South Evietown"
}
}
```
2. `WithResponseFailure`:
```json
{ "status": "error", "message": "Bad Request" }
```
Press <kbd>S</kbd> to enable snapshot mode for a constant output.
Press <kbd>O</kbd> to interactively disable or proxy a mocked endpoint.
## Usage
```
$ ms --help
Usage: ms [options]
Options:
-V, --version output the version number
-c --configFile <file> the TypeScript config file to generate entry points
-p --port <i> [3000] To use a port different than 3000
--proxyUrl <url> To enable proxy for disabled endpoints
--generateSnapshot <file> To generate a API mock data snapshot (Not yet implemented.)
--useSnapshot <file> To launch a server with data snapshot
-v --verbose show debug information
-h, --help output usage information
```
## Plugins
### Mock
- [plugin-mock-example](./packages/plugins/plugin-mock-example/README.md)
- [plugin-mock-faker](./packages/plugins/plugin-mock-faker/README.md)
- [plugin-mock-iterate](./packages/plugins/plugin-mock-iterate/README.md)
- [plugin-mock-qotd](./packages/plugins/plugin-mock-qotd/README.md)
- [plugin-mock-range](./packages/plugins/plugin-mock-range/README.md)
### Builder
Manta Style supports TypeScript only at the moment via `plugin-builder-typescript`. More language support is coming soon.
## Contributing
### Getting Started
```sh
yarn install
yarn run bootstrap
yarn run build
```
## Acknowledgments
- [Zhongliang Wang](https://github.com/Cryrivers) for original idea, architecture design, initial implementation of runtime and transformers.
- [Tan Li Hau](https://github.com/tanhauhau) for the design and implementation of selective mocking, plugin system, and many official plugins.
- [Jennie Ji](https://github.com/JennieJi) for implementation of live-reload feature.
## License
Manta Style is [MIT licensed](https://github.com/Cryrivers/manta-style/blob/master/LICENSE)
| 29.869347 | 590 | 0.683715 | eng_Latn | 0.635841 |
c94b5f09aede84d4fcb10ce45a7b65988d665010 | 4,120 | md | Markdown | docs/intro/overview.md | kobs30/onomy-sdk | 4f6f5b6cafaf0afc7dd91746490145478a0d5ee6 | [
"Apache-2.0"
] | null | null | null | docs/intro/overview.md | kobs30/onomy-sdk | 4f6f5b6cafaf0afc7dd91746490145478a0d5ee6 | [
"Apache-2.0"
] | null | null | null | docs/intro/overview.md | kobs30/onomy-sdk | 4f6f5b6cafaf0afc7dd91746490145478a0d5ee6 | [
"Apache-2.0"
] | null | null | null | <!--
order: 1
-->
# High-level Overview
## What is the SDK?
The [Onomy-SDK](https://github.com/onomyprotocol/onomy-sdk) is an open-source framework for building multi-asset public Proof-of-Stake (PoS) <df value="blockchain">blockchains</df>, like the Cosmos Hub, as well as permissioned Proof-Of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**.
The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. We envision the SDK as the npm-like framework to build secure blockchain applications on top of [Tendermint](https://github.com/tendermint/tendermint). SDK-based blockchains are built out of composable [modules](../building-modules/intro.md), most of which are open source and readily available for any developers to use. Anyone can create a module for the Onomy-SDK, and integrating already-built modules is as simple as importing them into your blockchain application. What's more, the Cosmos SDK is a capabilities-based system, which allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [this section](../core/ocap.md).
## What are Application-Specific Blockchains?
One development paradigm in the blockchain world today is that of virtual-machine blockchains like Ethereum, where development generally revolves around building decentralised applications on top of an existing blockchain as a set of smart contracts. While smart contracts can be very good for some use cases like single-use applications (e.g. ICOs), they often fall short for building complex decentralised platforms. More generally, smart contracts can be limiting in terms of flexibility, sovereignty and performance.
Application-specific blockchains offer a radically different development paradigm than virtual-machine blockchains. An application-specific blockchain is a blockchain customized to operate a single application: developers have all the freedom to make the design decisions required for the application to run optimally. They can also provide better sovereignty, security and performance.
Learn more about [application-specific blockchains](./why-app-specific.md).
## Why the Cosmos SDK?
The Cosmos SDK is the most advanced framework for building custom application-specific blockchains today. Here are a few reasons why you might want to consider building your decentralised application with the Cosmos SDK:
- The default consensus engine available within the SDK is [Tendermint Core](https://github.com/tendermint/tendermint). Tendermint is the most (and only) mature BFT consensus engine in existence. It is widely used across the industry and is considered the gold standard consensus engine for building Proof-of-Stake systems.
- The SDK is open source and designed to make it easy to build blockchains out of composable [modules](../../x/). As the ecosystem of open source SDK modules grows, it will become increasingly easier to build complex decentralised platforms with it.
- The SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. This makes the Cosmos SDK a very secure environment to build blockchains.
- Most importantly, the Cosmos SDK has already been used to build many application-specific blockchains that are already in production. Among others, we can cite [Cosmos Hub](https://hub.cosmos.network), [IRIS Hub](https://irisnet.org), [Binance Chain](https://docs.binance.org/), [Terra](https://terra.money/) or [Kava](https://www.kava.io/). [Many more](https://cosmos.network/ecosystem) are building on the Cosmos SDK.
## Getting started with the Cosmos SDK
- Learn more about the [architecture of an SDK application](./sdk-app-architecture.md)
- Learn how to build an application-specific blockchain from scratch with the [SDK Tutorial](https://cosmos.network/docs/tutorial)
## Next {hide}
Learn about [application-specific blockchains](./why-app-specific.md) {hide}
| 108.421053 | 853 | 0.797087 | eng_Latn | 0.996736 |
c94b660ff6336b86b5bf6fbf50d7d69e6dcd1d80 | 2,733 | md | Markdown | CodeOfConduct.md | ionab/LadiesofCodeGlasgow | 384816673f3b65875fb75de8ebf3af15703c6232 | [
"MIT"
] | 4 | 2019-06-17T20:46:03.000Z | 2019-10-03T10:12:50.000Z | CodeOfConduct.md | ionab/LadiesofCodeGlasgow | 384816673f3b65875fb75de8ebf3af15703c6232 | [
"MIT"
] | 9 | 2017-11-03T00:35:27.000Z | 2019-10-01T17:51:48.000Z | CodeOfConduct.md | ionab/LadiesofCodeGlasgow | 384816673f3b65875fb75de8ebf3af15703c6232 | [
"MIT"
] | 22 | 2017-11-03T00:33:36.000Z | 2019-10-15T17:02:50.000Z |
# Ladies of Code Glasgow: Code of Conduct
Ladies of Code events are community events intended for collaboration in the tech community. We value the participation of each member and want all attendees to have an enjoyable and fulfilling experience. Accordingly, all attendees are expected to show respect and courtesy to other attendees throughout all Ladies of Code events.
To make clear what is expected, all attendees, speakers, exhibitors, organisers, and volunteers at any Ladies of Code event are required to conform to the following Code of Conduct. Organisers will enforce these codes throughout the event.
## Code of Conduct
Ladies of Code is dedicated to providing a harassment-free experience for everyone, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, race, age, religion, nationality or level of experience. We do not tolerate harassment of conference participants in any form nor do we tolerate any behaviour that would reasonably lead to another event participant being made to feel unsafe, insecure, or frightened for their physical or emotional well-being. All communication should be appropriate for a professional audience including people of many different backgrounds. Participants violating these rules may be sanctioned or expelled from the event at the discretion of the organisers.
Harassment includes verbal comments that reinforce social structures of domination related to gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, race, age, religion, sexual images in public spaces, deliberate intimidation, stalking, following, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact, and unwelcome sexual attention.
Participants asked to stop any harassing behaviour are expected to comply immediately. If a participant engages in harassing behaviour, the event organisers may take any action they deem appropriate, including warning the offender or expulsion from the event and community.
If you are being harassed, notice that someone else is being harassed, or have any other concerns, please contact a member of the Ladies of Code Team immediately.
Our team will be happy to help participants contact security or local law enforcement, provide escorts, or otherwise assist those experiencing harassment to feel safe for the duration of the event. We value your participation and attendance.
## Credit
Portions of this Code of Conduct are based on the example anti-harassment policy from the Geek Feminism wiki, created by the Ada Initiative and other volunteers, under a Creative Commons Zero license.
| 85.40625 | 744 | 0.821442 | eng_Latn | 0.999677 |
c94b8a98cdea37f450be8c2fb2132c1bddad65b8 | 430 | md | Markdown | CHANGELOG.md | danielvschoor/pyo3-log | 187dedeb94bc7e683fa91f81429c87b5964142bf | [
"Apache-2.0",
"MIT"
] | null | null | null | CHANGELOG.md | danielvschoor/pyo3-log | 187dedeb94bc7e683fa91f81429c87b5964142bf | [
"Apache-2.0",
"MIT"
] | null | null | null | CHANGELOG.md | danielvschoor/pyo3-log | 187dedeb94bc7e683fa91f81429c87b5964142bf | [
"Apache-2.0",
"MIT"
] | null | null | null | # 0.4.0
* Upgrade to pyo3 0.14.
# 0.3.1
* Don't confuse Trace level with NOTSET in python.
# 0.3.0
* Upgrade to pyo3 0.13.
# 0.2.2
* Fix of versioning of dependencies.
# 0.2.1
* Internal dependency update (arc-swap on 1.0).
# 0.2.0
* Bump version of pyo3 to 0.12.
# 0.1.2
* Remove confusing/irrelevant copy-pasted part of README.
# 0.1.1
* Bug: Remove stray println/dbgs from the code.
# 0.1.0
* Initial release.
| 11.944444 | 57 | 0.653488 | eng_Latn | 0.877309 |
c94bc61c3d25b90f1d1a99977cc8897199b36055 | 24,582 | md | Markdown | docs/developer-guide/performance.md | goszczynskip/deck.gl | f5c0c194de8486616c8c091d5c0362d6b9822b7a | [
"MIT"
] | null | null | null | docs/developer-guide/performance.md | goszczynskip/deck.gl | f5c0c194de8486616c8c091d5c0362d6b9822b7a | [
"MIT"
] | null | null | null | docs/developer-guide/performance.md | goszczynskip/deck.gl | f5c0c194de8486616c8c091d5c0362d6b9822b7a | [
"MIT"
] | null | null | null | # Performance Optimization
## General Performance Expectations
There are mainly two aspects that developers usually consider regarding the
performance of any computer program: time and memory consumption, both of which obviously depend on the specs of the hardware deck.gl is ultimately running on.
On 2015 MacBook Pros with dual graphics cards, most basic layers
(like `ScatterplotLayer`) render fluidly at 60 FPS during pan and zoom
operations up to about 1M (one million) data items, with framerates dropping into low double digits (10-20FPS) when the data sets approach 10M items.
Even if interactivity is not an issue, browser limitations on how big chunks of contiguous memory can be allocated (e.g. Chrome caps individual allocations at 1GB) will cause most layers to crash during WebGL buffer generation somewhere between 10M and 100M items. You would need to break up your data into chunks and use multiple deck.gl layers to get past this limit.
Modern phones (recent iPhones and higher-end Android phones) are surprisingly capable in terms of rendering performance, but are considerably more sensitive to memory pressure than laptops, resulting in browser restarts or page reloads. They also tend to load data significantly slower than desktop computers, so some tuning is usually needed to ensure a good overall user experience on mobile.
## Layer Update Performance
Layer update happens when the layer is first created, or when some layer props change. During an update, deck.gl may load necessary resources (e.g. image textures), generate WebGL buffers, and upload them to the GPU, all of which may take some time to complete, depending on the number of items in your `data` prop. Therefore, the key to performant deck.gl applications is to minimize layer updates wherever possible.
### Minimize data changes
When the `data` prop changes, the layer will recalculate all of its WebGL buffers. The time required for this is proportional to the number of items in your
`data` prop.
This step is the most expensive operation that a layer does - also on CPU - potentially affecting the responsiveness of the application. It may take
multiple seconds for multi-million item layers, and if your `data` prop is updated
frequently (e.g. animations), "stutter" can be visible even for layers with just a few thousand items.
Some good places to check for performance improvements are:
#### Avoid unnecessary shallow change in data prop
The layer does a shallow comparison between renders to determine if it needs to regenerate buffers. If
nothing has changed, make sure you supply the *same* data object every time you render. If the data object has to change shallowly for some reason, consider using the `dataComparator` prop to supply a custom comparison logic.
```js
// Bad
const DATA = [...];
const filters = {minTime: -1, maxTime: Infinity};
function setFilters(minTime, maxTime) {
filters.minTime = minTime;
filters.maxTime = maxTime;
render();
}
function render() {
const layer = new ScatterplotLayer({
// `filter` creates a new array every time `render` is called, even if the filters have not changed
data: DATA.filter(d => d.time >= filters.minTime && d.time <= filters.maxTime),
...
});
deck.setProps({layers: [layer]});
}
```
```js
// Good
const DATA = [...];
let filteredData = DATA;
const filters = {minTime: -1, maxTime: Infinity};
function setFilters(minTime, maxTime) {
filters.minTime = minTime;
filters.maxTime = maxTime;
// filtering is performed only once when the filters change
filteredData = DATA.filter(d => d.time >= minTime && d.time <= maxTime);
render();
}
function render() {
const layer = new ScatterplotLayer({
data: filteredData,
...
});
deck.setProps({layers: [layer]});
}
```
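If producing a new array on every render is truly unavoidable, the `dataComparator` prop mentioned above can tell the layer when the new array is equivalent to the old one. A minimal sketch (it assumes row objects are reused between renders, so an element-by-element identity check is a valid test; adapt the comparison to your data):

```js
const layer = new ScatterplotLayer({
  id: 'filtered-points',
  data: DATA.filter(d => d.time >= filters.minTime && d.time <= filters.maxTime),
  getPosition: d => d.position,
  // Returning true means "equivalent": deck.gl keeps the existing buffers
  // instead of regenerating them from the new array.
  dataComparator: (newData, oldData) =>
    oldData && newData.length === oldData.length &&
    newData.every((d, i) => d === oldData[i])
});
```
The comparison is O(n), but that is still far cheaper than recalculating and re-uploading GPU buffers.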
#### Use updateTriggers
So `data` has indeed changed. Do we have an entirely new collection of objects? Or did just certain fields change in each row? Remember that changing `data` will update *all* buffers, so if, for example, object positions have not changed, it will be a waste of time to recalculate them.
```js
// Bad
const DATA = [...];
let currentYear = null;
let currentData = DATA;
function selectYear(year) {
currentYear = year;
currentData = DATA.map(d => ({
position: d.position,
population: d.populationsByYear[year]
}));
render();
}
function render() {
const layer = new ScatterplotLayer({
// `data` changes every time year changed, but positions don't need to update
data: currentData,
getPosition: d => d.position,
getRadius: d => Math.sqrt(d.population),
...
});
deck.setProps({layers: [layer]});
}
```
In this case, it is more efficient to use [`updateTriggers`](/docs/api-reference/layer.md#updatetriggers-object-optional) to invalidate only the selected attributes:
```js
// Good
const DATA = [...];
let currentYear = null;
function selectYear(year) {
currentYear = year;
render();
}
function render() {
const layer = new ScatterplotLayer({
// `data` never changes
data: DATA,
getPosition: d => d.position,
// radius depends on `currentYear`
getRadius: d => Math.sqrt(d.populationsByYear[currentYear]),
updateTriggers: {
      // This tells deck.gl to recalculate radius when `currentYear` changes
getRadius: currentYear
},
...
});
deck.setProps({layers: [layer]});
}
```
#### Handle incremental data loading
A common technique for handling big datasets on the client side is to load data in chunks. We want to update the visualization whenever a new chunk comes in. If we append the new chunk to an existing data array, deck.gl will recalculate the whole buffers, even for the previously loaded chunks where nothing has changed:
```js
// Bad
let loadedData = [];
function onNewDataArrive(chunk) {
loadedData = loadedData.concat(chunk);
render();
}
function render() {
const layer = new ScatterplotLayer({
// If we have 1 million rows loaded and 100,000 new rows arrive,
// we end up recalculating the buffers for all 1,100,000 rows
data: loadedData,
...
});
deck.setProps({layers: [layer]});
}
```
To avoid doing this, we instead generate one layer for each chunk:
```js
// Good
const dataChunks = [];
function onNewDataArrive(chunk) {
dataChunks.push(chunk);
render();
}
function render() {
const layers = dataChunks.map((chunk, chunkIndex) => new ScatterplotLayer({
// Important: each layer must have a consistent & unique id
id: `chunk-${chunkIndex}`,
// If we have 10 100,000-row chunks already loaded and a new one arrive,
// the first 10 layers will see no prop change
// only the 11th layer's buffers need to be generated
data: chunk,
...
}));
deck.setProps({layers});
}
```
Starting v7.2.0, support for async iterables is added to efficiently update layers with incrementally loaded data:
```js
// Create an async iterable
async function* getData() {
for (let i = 0; i < 10; i++) {
    const chunk = await fetchChunk(...);
yield chunk;
}
}
function render() {
const layer = new ScatterplotLayer({
// When a new chunk arrives, deck.gl only updates the sub buffers for the new rows
data: getData(),
...
});
deck.setProps({layers: [layer]});
}
```
See [Layer properties](/docs/api-reference/layer.md#basic-properties) for details.
#### Favor layer visibility over addition and removal
Removing a layer will lose all of its internal states, including generated buffers. If the layer is added back later, all the WebGL resources need to be regenerated again. In the use cases where layers need to be toggled frequently (e.g. via a control panel), there might be a significant perf penalty:
```js
// Bad
const DATA = [...];
const layerVisibility = {circles: true, labels: true}
function toggleLayer(key) {
layerVisibility[key] = !layerVisibility[key];
render();
}
function render() {
const layers = [
// when visibility goes from on to off to on, this layer will be completely removed and then regenerated
layerVisibility.circles && new ScatterplotLayer({
data: DATA,
...
}),
layerVisibility.labels && new TextLayer({
data: DATA,
...
})
];
deck.setProps({layers});
}
```
The [`visible`](/docs/api-reference/layer.md#visible-boolean-optional) prop is a cheap way to temporarily disable a layer:
```js
// Good
const DATA = [...];
const layerVisibility = {circles: true, labels: true}
function toggleLayer(key) {
layerVisibility[key] = !layerVisibility[key];
render();
}
function render() {
const layers = [
// when visibility is off, this layer's internal states will be retained in memory, making turning it back on instant
new ScatterplotLayer({
data: DATA,
visible: layerVisibility.circles,
...
}),
new TextLayer({
data: DATA,
visible: layerVisibility.labels,
...
})
];
deck.setProps({layers});
}
```
### Optimize Accessors
99% of the CPU time that deck.gl spends in updating buffers is calling the accessors you supply to the layer. Since they are called on every data object, any performance issue in the accessors is amplified by the size of your data.
#### Favor constants over callback functions
Most accessors accept constant values as well as functions. Constant props are extremely cheap to update in comparison. Using `ScatterplotLayer` as an example, the following two prop settings yield exactly the same visual outcome:
- `getFillColor: [255, 0, 0, 128]` - deck.gl uploads 4 numbers to the GPU.
- `getFillColor: d => [255, 0, 0, 128]` - deck.gl first builds a typed array of `4 * data.length` elements, calls the accessor `data.length` times to fill it, then uploads it to the GPU.
Aside from accessors, most layers also offer one or more `*Scale` props that are uniform multipliers on top of the per-object value. Always consider using them before invoking the accessors:
```js
// Bad
const DATA = [...];
function animate() {
render();
requestAnimationFrame(animate);
}
function render() {
const scale = Date.now() % 2000;
const layer = new ScatterplotLayer({
data: DATA,
getRadius: object => object.size * scale,
// deck.gl will call `getRadius` for ALL data objects every animation frame, which will likely choke the app
updateTriggers: {
getRadius: scale
},
...
});
deck.setProps({layers: [layer]});
}
```
```js
// Good
const DATA = [...];
function animate() {
render();
requestAnimationFrame(animate);
}
function render() {
const scale = Date.now() % 2000;
const layer = new ScatterplotLayer({
data: DATA,
getRadius: object => object.size,
// This has virtually no cost to update, easily getting 60fps animation
radiusScale: scale,
...
});
deck.setProps({layers: [layer]});
}
```
#### Use trivial functions as accessors
Whenever possible, make the accessors trivial functions and utilize pre-defined and/or pre-computed data.
```js
// Bad
const DATA = [...];
function render() {
const layer = new ScatterplotLayer({
data: DATA,
getFillColor: object => {
// This line creates a new values array from each object
// which can incur significant cost in garbage collection
const maxPopulation = Math.max.apply(null, Object.values(object.populationsByYear));
// This switch case creates a new color array for each object
// which can also incur significant cost in garbage collection
if (maxPopulation > 1000000) {
return [255, 0, 0];
} else if (maxPopulation > 100000) {
return [0, 255, 0];
} else {
return [0, 0, 255];
}
},
getRadius: object => {
// This line duplicates what's done in `getFillColor` and doubles the cost
const maxPopulation = Math.max.apply(null, Object.values(object.populationsByYear));
return Math.sqrt(maxPopulation);
}
...
});
deck.setProps({layers: [layer]});
}
```
```js
// Good
const DATA = [...];
// Use a for loop to avoid creating new objects
function getMaxPopulation(populationsByYear) {
let maxPopulation = 0;
for (const year in populationsByYear) {
const population = populationsByYear[year];
if (population > maxPopulation) {
maxPopulation = population;
}
}
return maxPopulation;
}
// Calculate max population once and store it in the data
DATA.forEach(d => {
d.maxPopulation = getMaxPopulation(d.populationsByYear);
});
// Use constant color values to avoid generating new arrays
const COLORS = {
ONE_MILLION: [255, 0, 0],
HUNDRED_THOUSAND: [0, 255, 0],
OTHER: [0, 0, 255]
};
function render() {
const layer = new ScatterplotLayer({
data: DATA,
getFillColor: object => {
if (object.maxPopulation > 1000000) {
return COLORS.ONE_MILLION;
      } else if (object.maxPopulation > 100000) {
return COLORS.HUNDRED_THOUSAND;
} else {
return COLORS.OTHER;
}
},
getRadius: object => Math.sqrt(object.maxPopulation),
...
});
deck.setProps({layers: [layer]});
}
```
### Use Binary Data
When creating data-intensive applications, it is often desirable to offload client-side data processing to the server or web workers.
The server can send data to the client more efficiently using binary formats, e.g. [protobuf](https://developers.google.com/protocol-buffers), [Arrow](https://arrow.apache.org/) or simply a custom binary blob.
Some deck.gl applications use web workers to load data and generate attributes to get the processing off the main thread. Modern worker implementations allow ownership of typed arrays to be [transferred directly](https://developer.mozilla.org/en-US/docs/Web/API/Worker/postMessage#Parameters) between threads at virtually no cost, bypassing serialization and deserialization of JSON objects.
#### Supply binary blobs to the data prop
Assume we have the data source encoded in the following format:
```js
// lon1, lat1, radius1, red1, green1, blue1, lon2, lat2, ...
const binaryData = new Float32Array([-122.4, 37.78, 1000, 255, 200, 0, -122.41, 37.775, 500, 200, 0, 0, -122.39, 37.8, 500, 0, 40, 200]);
```
Upon receiving the typed arrays, the application can of course re-construct a classic JavaScript array:
```js
// Bad
const data = [];
for (let i = 0; i < binaryData.length; i += 6) {
data.push({
position: binaryData.subarray(i, i + 2),
radius: binaryData[i + 2],
color: binaryData.subarray(i + 3, i + 6)
});
}
new ScatterplotLayer({
data,
getPosition: d => d.position,
getRadius: d => d.radius,
getFillColor: d => d.color
});
```
However, in addition to requiring custom repacking code, this array will take valuable CPU time to create, and significantly more memory to store than its binary form. In performance-sensitive applications that constantly push a large volume of data (e.g. animations), this method will not be efficient enough.
Alternatively, one may supply a non-iterable object (not Array or TypedArray) to the `data` prop. In this case, it must contain a `length` field that specifies the total number of objects. Since `data` is not iterable, each accessor will not receive a valid `object` argument, and is therefore responsible for interpreting the input data's buffer layout:
```js
// Good
// Note: binaryData.length does not equal the number of items,
// which is why we need to wrap it in an object that contains a custom `length` field
const DATA = {src: binaryData, length: binaryData.length / 6}
new ScatterplotLayer({
data: DATA,
getPosition: (object, {index, data}) => {
return data.src.subarray(index * 6, index * 6 + 2);
},
getRadius: (object, {index, data}) => {
return data.src[index * 6 + 2];
},
getFillColor: (object, {index, data, target}) => {
return data.src.subarray(index * 6 + 3, index * 6 + 6);
}
})
```
Optionally, the accessors can utilize the pre-allocated `target` array in the second argument to further avoid creating new objects:
```js
// Good
const DATA = {src: binaryData, length: binaryData.length / 6}
new ScatterplotLayer({
data: DATA,
getPosition: (object, {index, data, target}) => {
target[0] = data.src[index * 6];
target[1] = data.src[index * 6 + 1];
target[2] = 0;
return target;
},
getRadius: (object, {index, data}) => {
return data.src[index * 6 + 2];
},
getFillColor: (object, {index, data, target}) => {
target[0] = data.src[index * 6 + 3];
target[1] = data.src[index * 6 + 4];
target[2] = data.src[index * 6 + 5];
target[3] = 255;
return target;
}
})
```
#### Supply attributes directly
While the built-in attribute generation functionality is a major part of a `Layer`'s functionality, it can become a major bottleneck in performance since it is done on the CPU in the main thread. If the application needs to push many data changes frequently, for example to render animations, data updates can block rendering and user interaction. In this case, the application should consider precalculating attributes on the back end or in web workers.
deck.gl layers accept external attributes as either a typed array or a WebGL buffer. Such attributes, if prepared carefully, can be directly utilized by the GPU, thus bypassing the CPU-bound attribute generation completely.
This technique offers the maximum performance possible in terms of data throughput, and is commonly used in heavy-duty, performance-sensitive applications.
To generate an attribute buffer for a layer, take the results returned from each object by the `get*` accessors and flatten them into a typed array. For example, consider the following layer:
```js
// Calculate attributes on the main thread
new PointCloudLayer({
// data format: [{position: [0, 0, 0], color: [255, 0, 0]}, ...]
data: POINT_CLOUD_DATA,
getPosition: d => d.position,
getColor: d => d.color,
getNormal: [0, 0, 1]
})
```
If we move the attribute generation to a web worker instead:
```js
// Worker
// positions can be sent as either float32 or float64, depending on precision requirements
// point[0].x, point[0].y, point[0].z, point[1].x, point[1].y, point[1].z, ...
const positions = new Float64Array(POINT_CLOUD_DATA.flatMap(d => d.position));
// point[0].r, point[0].g, point[0].b, point[1].r, point[1].g, point[1].b, ...
const colors = new Uint8Array(POINT_CLOUD_DATA.flatMap(d => d.color));
// send back to main thread
postMessage({pointCount: POINT_CLOUD_DATA.length, positions, colors}, [positions.buffer, colors.buffer]);
```
```js
// Main thread
// `data` is received from the worker
new PointCloudLayer({
data: {
// this is required so that the layer knows how many points to draw
length: data.pointCount,
attributes: {
getPosition: {value: data.positions, size: 3},
getColor: {value: data.colors, size: 3},
}
},
// constant accessor works without raw data
getNormal: [0, 0, 1]
});
```
Note that instead of `getPosition`, we supply a `data.attributes.getPosition` object. This object defines the buffer from which `PointCloudLayer` should access its positions data. See the base `Layer` class' [data prop](/docs/api-reference/layer.md#basic-properties) for details.
It is also possible to use interleaved or custom layout external buffers:
```js
// Worker
// point[0].x, point[0].y, point[0].z, point[0].r, point[0].g, point[0].b, point[1].x, point[1].y, point[1].z, point[1].r, point[1].g, point[1].b, ...
const positionsAndColors = new Float32Array(POINT_CLOUD_DATA.flatMap(d => [
d.position[0],
d.position[1],
d.position[2],
// colors must be normalized if sent as floats
d.color[0] / 255,
d.color[1] / 255,
d.color[2] / 255
]));
// send back to main thread
postMessage({pointCount: POINT_CLOUD_DATA.length, positionsAndColors}, [positionsAndColors.buffer]);
```
```js
import {Buffer} from '@luma.gl/core';
const buffer = new Buffer(gl, {data: data.positionsAndColors});
new PointCloudLayer({
data: {
length : data.pointCount,
attributes: {
getPosition: {buffer, size: 3, offset: 0, stride: 24},
getColor: {buffer, size: 3, offset: 12, stride: 24},
}
},
// constant accessor works without raw data
getNormal: [0, 0, 1]
});
```
See full example in [examples/experimental/interleaved-buffer](https://github.com/uber/deck.gl/tree/master/examples/experimental/interleaved-buffer).
Note that external attributes only work with primitive layers, not composite layers, because composite layers often need to preprocess the data before passing it to the sub layers. Some layers that deal with variable-width data, such as `PathLayer`, `SolidPolygonLayer`, require additional information passed along with `data.attributes`. Consult each layer's documentation before use.
## Layer Rendering Performance
Layer rendering time (for large data sets) is essentially proportional to:
1. The number of vertex shader invocations,
which corresponds to the number of items in the layer's `data` prop
2. The number of fragment shader invocations, which corresponds to the total
number of pixels drawn.
Thus it is possible to render a scatterplot layer with 10M items with reasonable
frame rates on recent GPUs, provided that the radius (number of pixels) of each
point is small.
It is good to be aware that excessive overdraw (drawing many objects/pixels on top of each other) can generate very high fragment counts and thus hurt performance. As an example, a `Scatterplot` radius of 5 pixels generates ~ 100 pixels per point. If you have a `Scatterplot` layer with 10 million points, this can result in up to 1 billion fragment shader invocations per frame. While dependent on zoom levels (clipping will improve performance to some extent) this many fragments will certainly strain even a recent MacBook Pro GPU.
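One practical way to bound overdraw in point layers is to clamp how large each point can appear on screen. A sketch (assuming a `ScatterplotLayer` whose `getRadius` returns meters; the right pixel caps depend on your visualization):

```js
const layer = new ScatterplotLayer({
  id: 'dense-points',
  data: POINTS,                  // assumed: a large array of {position, value} rows
  getPosition: d => d.position,
  getRadius: d => d.value,       // radius in meters
  radiusMinPixels: 1,            // keep points visible when zoomed far out
  radiusMaxPixels: 5             // cap per-point pixel coverage when zoomed in
});
```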
## Layer Picking Performance
deck.gl performs picking by drawing the layer into an off screen picking buffer. This essentially means that every layer that supports picking will be drawn off screen when panning and hovering. The picking is performed using the same GPU code that does the visual rendering, so the performance should be easy to predict.
Picking limitations:
* The picking system can only distinguish between 16M items per layer.
* The picking system can only handle 256 layers with the pickable flag set to true.
## Number of Layers
The layer count of an advanced deck.gl application tends to gradually increase, especially when using composite layers. We have built and optimized a highly complex application using close to 100 deck.gl layers (this includes hierarchies of sublayers rendered by custom composite layers rendering other composite layers) without seeing any performance issues related to the number of layers. If you really need to, it is probably possible to go a little higher (a few hundred layers). Just keep in mind that deck.gl was not designed to be used with thousands of layers.
## Common Issues
A couple of particular things to watch out for that tend to have a big impact on performance:
* If not needed, disable Retina/High DPI rendering. It generates 4x the number of pixels (fragments) and can have a big performance impact, depending on which computer or monitor is being used. This feature can be controlled using the `useDevicePixels` prop of the `DeckGL` component and it is on by default (see the sketch below).
* Avoid using luma.gl debug mode in production. It queries the GPU error status after each operation which has a big impact on performance.
Smaller considerations:
* Enabling picking can have a small performance penalty, so make sure the `pickable` property is `false` in layers that do not need picking (this is the default value).
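The following minimal sketch shows where these two props live. The view state, data source and ids are placeholders invented for illustration; only `useDevicePixels` and `pickable` are the point of the example:
```js
import {Deck} from '@deck.gl/core';
import {ScatterplotLayer} from '@deck.gl/layers';

new Deck({
  initialViewState: {longitude: 0, latitude: 0, zoom: 2},
  controller: true,
  // skip Retina/High-DPI rendering to cut the number of fragments drawn
  useDevicePixels: false,
  layers: [
    new ScatterplotLayer({
      id: 'points',
      data: POINTS, // placeholder for your own data source
      getPosition: d => d.position,
      getRadius: 2,
      // `false` is already the default; only enable picking where you need it
      pickable: false
    })
  ]
});
```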
| 37.529771 | 569 | 0.68961 | eng_Latn | 0.983029 |
c94c08ab1beae3be50b7181246aef1e7b63d7315 | 89 | md | Markdown | README.md | awblocker/tbb-insilico | 41171ac59bd907a465338fabf65ba5fccd2348fd | [
"Apache-2.0"
] | null | null | null | README.md | awblocker/tbb-insilico | 41171ac59bd907a465338fabf65ba5fccd2348fd | [
"Apache-2.0"
] | 1 | 2017-10-18T08:22:04.000Z | 2017-10-18T08:22:04.000Z | README.md | awblocker/tbb-insilico | 41171ac59bd907a465338fabf65ba5fccd2348fd | [
"Apache-2.0"
] | null | null | null | # tbb-insilico
Scripts for the in-silico MNase-seq analyses of Zhou et al. (2016), eLife
| 29.666667 | 73 | 0.752809 | eng_Latn | 0.335821 |
c94c3ebd9be48179680acadbb7385e7327b0d30d | 6,593 | md | Markdown | articles/cognitive-services/Computer-vision/includes/curl-quickstart.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 15 | 2017-08-28T07:46:17.000Z | 2022-02-03T12:49:15.000Z | articles/cognitive-services/Computer-vision/includes/curl-quickstart.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 407 | 2018-06-14T16:12:48.000Z | 2021-06-02T16:08:13.000Z | articles/cognitive-services/Computer-vision/includes/curl-quickstart.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 17 | 2017-10-04T22:53:31.000Z | 2022-03-10T16:41:59.000Z | ---
title: 'Quickstart: Optical character recognition REST API'
titleSuffix: Azure Cognitive Services
description: In this quickstart, you get started with the optical character recognition REST API.
services: cognitive-services
author: PatrickFarley
manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: include
ms.date: 04/19/2021
ms.author: pafarley
ms.custom: seodec18
ms.openlocfilehash: 2f01b1d222470c49505638be64180948b6f7e046
ms.sourcegitcommit: 6f1aa680588f5db41ed7fc78c934452d468ddb84
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 04/19/2021
ms.locfileid: "107728263"
---
Use the optical character recognition REST API to read printed and handwritten text.
> [!NOTE]
> This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. See the GitHub samples for examples in [C#](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/ComputerVision/REST), [Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/python/ComputerVision/REST), [Java](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/java/ComputerVision/REST), [JavaScript](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/javascript/ComputerVision/REST) and [Go](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/go/ComputerVision/REST).
## <a name="prerequisites"></a>Pré-requisitos
* Uma subscrição do Azure - [Crie uma gratuitamente](https://azure.microsoft.com/free/cognitive-services/)
* Assim que tiver a subscrição do Azure, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title=" crie um recurso de Visão de Computador crie um recurso de " target="_blank"> Visão De Computador no portal </a> Azure para obter a sua chave e ponto final. Depois de implementar, clique em **Ir para o recurso**.
* Necessitará da chave e ponto final do recurso que criar para ligar a sua aplicação ao serviço de Visão De Computador. Colará a chave e o ponto final no código abaixo mais tarde no arranque rápido.
* Pode utilizar o nível de preços gratuitos `F0` para experimentar o serviço e fazer upgrade mais tarde para um nível pago para produção.
* [cURL](https://curl.haxx.se/) instalado
## <a name="read-printed-and-handwritten-text"></a>Ler texto impresso e manuscrito
O serviço OCR pode ler texto visível numa imagem e convertê-lo num fluxo de caracteres. Para obter mais informações sobre o reconhecimento de texto, consulte a visão geral do [reconhecimento de caracteres óticos (OCR).](../overview-ocr.md)
### <a name="call-the-read-api"></a>Ligue para a API de leitura
Para criar e executar o exemplo, siga os seguintes passos:
1. Copie o comando seguinte para um editor de texto.
1. Faça as alterações seguintes ao comando, se for necessário:
1. Substitua o valor de `<subscriptionKey>` pela chave de subscrição.
1. Substitua a primeira parte do URL de pedido `westcentralus` () com o texto no seu próprio URL de ponto final.
[!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]
1. Optionally, change the image URL in the request body (`https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png\`) to the URL of a different image to be analyzed.
1. Open a command prompt window.
1. Paste the command from the text editor into the command prompt window, and then run the command.
```bash
curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2/read/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>" --data-ascii "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png\"}"
```
The response will include an `Operation-Location` header, whose value is a unique URL. You use this URL to query the results of the Read operation. The URL expires in 48 hours.
### <a name="get-read-results"></a>Obter resultados de Leitura
1. Copie o seguinte comando para o seu editor de texto.
1. Substitua o URL pelo `Operation-Location` valor copiado no passo anterior.
1. Faça as alterações seguintes ao comando, se for necessário:
1. Substitua o valor de `<subscriptionKey>` pela chave de subscrição.
1. Abra uma janela da linha de comandos.
1. Cole o comando a partir do editor de texto na janela da linha de comandos e, em seguida, execute o comando.
```bash
curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"
```
### <a name="examine-the-response"></a>Examinar a resposta
O JSON devolve uma resposta de êxito. A aplicação de exemplo analisa e apresenta uma resposta de êxito na janela da linha de comandos, semelhante ao seguinte exemplo:
```json
{
"status": "succeeded",
"createdDateTime": "2021-04-08T21:56:17.6819115+00:00",
"lastUpdatedDateTime": "2021-04-08T21:56:18.4161316+00:00",
"analyzeResult": {
"version": "3.2",
"readResults": [
{
"page": 1,
"angle": 0,
"width": 338,
"height": 479,
"unit": "pixel",
"lines": [
{
"boundingBox": [
25,
14,
318,
14,
318,
59,
25,
59
],
"text": "NOTHING",
"appearance": {
"style": {
"name": "other",
"confidence": 0.971
}
},
"words": [
{
"boundingBox": [
27,
15,
294,
15,
294,
60,
27,
60
],
"text": "NOTHING",
"confidence": 0.994
}
]
}
]
}
]
}
}
```
## <a name="next-steps"></a>Passos seguintes
Neste arranque rápido, aprendeu a chamar a API read REST. Em seguida, saiba mais sobre as funcionalidades da API de Leitura.
> [!div class="nextstepaction"]
>[Ligue para a API de leitura](../Vision-API-How-to-Topics/call-read-api.md)
* [Visão geral do OCR](../overview-ocr.md)
| 45.784722 | 762 | 0.686637 | por_Latn | 0.97517 |
c94c548a511871838cd8a27cab5636aed41accc2 | 50 | md | Markdown | README.md | buckler-project/md5-scanner | 33c13fa49d53a3182ad628257a74b901d1a7e55b | [
"MIT"
] | null | null | null | README.md | buckler-project/md5-scanner | 33c13fa49d53a3182ad628257a74b901d1a7e55b | [
"MIT"
] | null | null | null | README.md | buckler-project/md5-scanner | 33c13fa49d53a3182ad628257a74b901d1a7e55b | [
"MIT"
] | null | null | null | # md5-scanner
The buckler scanner which scans MD5 hashes.
| 16.666667 | 35 | 0.78 | eng_Latn | 0.988526 |
c94ca7e0b66c52caac2abafc462513c5d66cd332 | 796 | md | Markdown | 08-convert-a-local-realm-to-a-synced-realm/README.md | Popovich24/realm-sync-samples | d57ac81b4ddc8d68a73b619ba18d8df2f44bbda4 | [
"Apache-2.0"
] | 48 | 2018-12-20T21:42:20.000Z | 2021-11-22T17:01:38.000Z | 08-convert-a-local-realm-to-a-synced-realm/README.md | kwl3434/realm-sync-samples | 9688c7340866dcaac8647defb980d66bcf7c8029 | [
"Apache-2.0"
] | 4 | 2018-12-14T23:59:29.000Z | 2021-06-15T14:21:07.000Z | 08-convert-a-local-realm-to-a-synced-realm/README.md | isabella232/legacy-realm-sync-samples | 1ed52dfbfd56ab0aeb740cd0a03d4cfee4fa1fde | [
"Apache-2.0"
] | 25 | 2018-12-13T16:52:50.000Z | 2021-08-09T14:52:14.000Z | # 08. Convert a local Realm to a Synced Realm
## Languages
- Objective-C
- Node.js
## Overview
As your application grows to take advantage of more Realm features, you may add synchronization backed by the Realm Object Server---and this may require converting a locally-stored Realm file to a synchronized Realm. While the Realm Platform doesn't offer an automatic solution for this, it doesn't require a lot of code to accomplish. These examples show how a small application can perform this task; the code could easily be integrated into an existing application.
Currently, JavaScript is the recommended language for performing this migration. We aim to have more code samples available in the future. If you are experiencing issues, please submit a ticket [here](https://support.realm.io/).
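As a rough illustration of the approach, the sketch below copies every object from a local Realm file into a synced Realm. It is an assumption-laden sketch rather than the official sample: it assumes the legacy Realm JS SDK with Realm Object Server sync, an already authenticated `user`, and a simple schema without relationships. The schema and server URL are placeholders:
```js
// Sketch only. Assumptions: legacy Realm JS SDK, an existing sync user,
// and a flat schema with no links between objects.
const Realm = require('realm');

const schema = [{name: 'Item', properties: {name: 'string'}}]; // placeholder schema

async function copyLocalToSynced(user) {
  // Open the existing local Realm file.
  const localRealm = await Realm.open({path: 'local.realm', schema});

  // Open (or create) the synced Realm. The shape of the sync config is an
  // assumption based on the legacy Realm Object Server API.
  const syncedRealm = await Realm.open({
    schema,
    sync: {user, url: 'realms://my-server.example.com/~/items'} // placeholder URL
  });

  // Copy every object of every type into the synced Realm.
  syncedRealm.write(() => {
    for (const objectSchema of schema) {
      for (const obj of localRealm.objects(objectSchema.name)) {
        syncedRealm.create(objectSchema.name, obj);
      }
    }
  });

  localRealm.close();
  syncedRealm.close();
}
```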
| 49.75 | 464 | 0.786432 | eng_Latn | 0.999144 |
c94d27813342fc462dcc17a5172ced8cf6a6f2a5 | 254 | md | Markdown | content/theme/gatsby-starter-docz.md | molebox/jamstackthemes | aab69a1882ebde160373caa3fe10e2abcfe34327 | [
"MIT"
] | null | null | null | content/theme/gatsby-starter-docz.md | molebox/jamstackthemes | aab69a1882ebde160373caa3fe10e2abcfe34327 | [
"MIT"
] | null | null | null | content/theme/gatsby-starter-docz.md | molebox/jamstackthemes | aab69a1882ebde160373caa3fe10e2abcfe34327 | [
"MIT"
] | null | null | null | ---
title: "Gatsby Starter Docz"
github: https://github.com/RobinCsl/gatsby-starter-docz
demo: https://gatsby-starter-docz.netlify.com/
author: Robin Cussol
draft: true
ssg:
- Gatsby
cms:
- No Cms
date: 2019-01-19T19:10:34Z
github_branch: master
---
| 18.142857 | 55 | 0.728346 | kor_Hang | 0.183568 |
c94d77267a049b11b66cd655e56688a8bc5b0e88 | 7,478 | md | Markdown | en/MG3.md | mercari/mercari-engineering-ladder | 63dbacf3e26d09edfe922f16dc7f1486adad5ed8 | [
"CC0-1.0"
] | 107 | 2021-01-11T12:48:26.000Z | 2022-03-17T09:25:52.000Z | en/MG3.md | mercari/mercari-engineering-ladder | 63dbacf3e26d09edfe922f16dc7f1486adad5ed8 | [
"CC0-1.0"
] | 1 | 2021-02-11T01:16:51.000Z | 2021-02-11T01:16:51.000Z | en/MG3.md | mercari/mercari-engineering-ladder | 63dbacf3e26d09edfe922f16dc7f1486adad5ed8 | [
"CC0-1.0"
] | 8 | 2021-02-10T05:11:12.000Z | 2022-02-09T15:07:25.000Z | # MG3
###### Mercari Engineering Career Ladder
* [Seek Continuous Improvement](#seek-continuous-improvement)
* [Go Bold, Fail Fast & Learn Early](#go-bold-fail-fast--learn-early)
* [Take Action & Responsibility](#take-action--responsibility)
* [Focus on the Customer](#focus-on-the-customer)
* [Strive for Alignment](#strive-for-alignment)
* [Foster Trust & Inclusion](#foster-trust--inclusion)
* [Deliver with High Quality](#deliver-with-high-quality)
* [Share to Empower](#share-to-empower)
* [Be Strategic & Efficient](#be-strategic--efficient)
## Seek Continuous Improvement
Investigates, discusses, and thinks about ways to improve their team **and other teams;** proactively makes improvements.
Some examples are: reducing technical debt, task automation, workflows, team practices, **writing code that is used across teams, sharing knowledge, or developing shared standards like a coding guide or a common process like a release train.**
Learns new things relevant to their role and applies them to their work, including, but not limited to: programming languages, frameworks, testing, debugging, writing readable code, communication skills, project management skills, and product development skills.
### [Engineering Manager Skills]
Learns management skills, such as communication skills, goal setting skills, how to conduct efficient 1on1s, and performance review frameworks, and applies them to their work.
Encourages members to create personal growth OKRs and makes sure the team workload allows members to use some time for self-improvement (study groups, conferences, reading, etc).
## Go Bold, Fail Fast & Learn Early
Thinks outside the box and is able to **lead the implementation of completely new features with little or no guidance.**
Researches and seeks out required information within **any team's domain** and create **valid** hypotheses on how things work.
**Delivers, in a timely manner,** proofs of concept (POC) for the team’s features.
### [Engineering Manager Skills]
Knows the members' capabilities and sets challenging goals for the team.
Manages risks, challenges, and results to accelerate the team's performance.
## Take Action & Responsibility
Takes responsibility to assess risks associated with technical solutions and **creates contingency plans for their team**.
**Leads other members** to handle customer support requests, follows incident handling procedures, and contributes to post-mortem reports.
**Clears blockers or critical issues concerning development.**
Considers both short-term and long-term solutions when fixing bugs.
### [Engineering Manager Skills]
Handles or dispatches customer support requests, can be on call when necessary, leads the team through incident-handling procedures, and contributes to post-mortem reports.
Takes responsibility for technical incidents and is able to coordinate with stakeholders as a representative for their team.
## Focus on the Customer
**Understands the business strategies thoroughly and how customers use products; works as a counterpart of the PM, decides on technical specifications in line with their business goals, and creates positive impact for customers.**
**Understands how Quality Assurance and Customer Support operate** in order to better support them.
**Understands the importance of responding to customers, and responds to customer support inquiries accurately, determining whether a fix is necessary or not.**
### [Engineering Manager Skills]
Knows the impact their team's work has on customers and continuously communicates with PMs to align goals.
Explains thoroughly the impact their team has on users to new team members during onboarding.
Applies and shares the customer perspective with regards to the team's domain.
## Strive for Alignment
Explains their ideas and opinions to **both engineers and other company members** clearly and respectfully.
**Integrates the team’s various opinions into the overall plan, through respectful discussions.**
### [Engineering Manager Skills]
Connects different members and projects across teams.
Mediates disagreements and finds an agreeable solution for each party.
Follows the division's goals and the engineering principles, helping to move the team towards those goals.
Helps members to align their individual goals with the team's.
## Foster Trust & Inclusion
**Delivers** praise and constructive feedback.
Seeks feedback actively from members **across teams**, using it as a tool for growth.
**Facilitates discussions, encourages everyone, including quiet participants, to share their opinions, and actively listens; ensures no one dominates the conversation.**
Works to build strong relationships with their teammates, manager, and business counterparts.
### [Engineering Manager Skills]
Helps foster a blameless and open culture by encouraging failure analysis focused on the process, not on individuals.
Values different opinions and diverse ideas, encouraging everyone in the team to take ownership in their work.
## Deliver with High Quality
**Fully** understands their team's domain and **has basic knowledge of other teams’ domains, to the extent that they can productively collaborate with them.**
**Makes suggestions to improve the overall code organization.**
**Has deep knowledge** of the programming languages, frameworks, and libraries of their platform; **utilizes abstractions and code isolation**.
**Suggests new guidelines and ways to improve systematic debugging**; resolves complex bugs or issues with **no guidance.**
Gives helpful code reviews **consistently** and **eliminates blockers to problematic releases.**
### [Engineering Manager Skills]
Has deep knowledge of quality management and is able to balance quality and delivery in their team's domain.
Implements team practices to continuously deliver business value while maintaining or increasing quality.
## Share to Empower
**Continuously improves** their team's documentation and **makes sure that this information is open and known** by other teams.
Shares useful technical information inside and outside the company.
Shares post-mortem information after an incident and **makes sure that everyone understands the impact and how to prevent the incident from reoccurring**.
### [Engineering Manager Skills]
Creates and shares the team's OKRs with its members; provides the necessary support for members to create and achieve their own OKRs.
Delegates tasks to promote skill growth and coaches members in an open, respectful, flexible, and empathetic manner.
Takes responsibility for onboarding new members.
## Be Strategic & Efficient
Knows when to reuse existing resources present in the codebase, team, or industry.
Knows when and how to ask for help to eliminate blockers.
**Knows when and how to spend time on tasks like performance optimizations and memory management.**
Estimates the required amount of work properly and **avoids committing to an unrealistic amount of work**.
**Able to propose solutions backed by data or well-known credible evidence.**
### [Engineering Manager Skills]
Makes decisions based on trade-offs between requirements, schedules, and technical approaches.
Prioritizes, and breaks down, their team development into smaller parts and tasks.
Defines the resources necessary for the team to solve issues including outsourcing and hiring new members, thereby balancing the results and the costs to achieve the team’s goals.
| 47.630573 | 262 | 0.79326 | eng_Latn | 0.999149 |
c94e595349d8e274cc27c60e44f41922a4dcfab7 | 436 | md | Markdown | README.md | mohammedelzanaty/Angular | f5cf43a9a1d27db6deda1689323cc54371a680eb | [
"MIT"
] | 2 | 2018-12-24T13:13:21.000Z | 2018-12-24T13:38:21.000Z | README.md | mohammedelzanaty/Angular | f5cf43a9a1d27db6deda1689323cc54371a680eb | [
"MIT"
] | null | null | null | README.md | mohammedelzanaty/Angular | f5cf43a9a1d27db6deda1689323cc54371a680eb | [
"MIT"
] | null | null | null | # My Angular Learning Path <img src="https://angular.io/assets/images/logos/angular/angular.svg" width="125" align="right" alt="JS Logo">
> Take your Angular to the next level and find out what it's fully capable of.
This list is mainly about Angular – Learn one way to build applications with Angular and reuse your code and abilities to build apps for any deployment target. For web, mobile web, native mobile and native desktop.
| 62.285714 | 215 | 0.768349 | eng_Latn | 0.994052 |
c94ea77a1e677fe1d9348ef3a8ca7d87e7050e2b | 8,383 | md | Markdown | articles/automation/troubleshoot/change-tracking.md | BaherAbdullah/azure-docs | 65d82440dd3209697fdb983ef456b0a2293e270a | [
"CC-BY-4.0",
"MIT"
] | 7,073 | 2017-06-27T08:58:22.000Z | 2022-03-30T23:19:23.000Z | articles/automation/troubleshoot/change-tracking.md | BaherAbdullah/azure-docs | 65d82440dd3209697fdb983ef456b0a2293e270a | [
"CC-BY-4.0",
"MIT"
] | 87,608 | 2017-06-26T22:11:41.000Z | 2022-03-31T23:57:29.000Z | articles/automation/troubleshoot/change-tracking.md | BaherAbdullah/azure-docs | 65d82440dd3209697fdb983ef456b0a2293e270a | [
"CC-BY-4.0",
"MIT"
] | 17,093 | 2017-06-27T03:28:18.000Z | 2022-03-31T20:46:38.000Z | ---
title: Troubleshoot Azure Automation Change Tracking and Inventory issues
description: This article tells how to troubleshoot and resolve issues with the Azure Automation Change Tracking and Inventory feature.
services: automation
ms.subservice: change-inventory-management
ms.date: 02/15/2021
ms.topic: troubleshooting
---
# Troubleshoot Change Tracking and Inventory issues
This article describes how to troubleshoot and resolve Azure Automation Change Tracking and Inventory issues. For general information about Change Tracking and Inventory, see [Change Tracking and Inventory overview](../change-tracking/overview.md).
## General errors
### <a name="machine-already-registered"></a>Scenario: Machine is already registered to a different account
### Issue
You receive the following error message:
```error
Unable to Register Machine for Change Tracking, Registration Failed with Exception System.InvalidOperationException: {"Message":"Machine is already registered to a different account."}
```
### Cause
The machine has already been deployed to another workspace for Change Tracking.
### Resolution
1. Make sure that your machine is reporting to the correct workspace. For guidance on how to verify this, see [Verify agent connectivity to Azure Monitor](../../azure-monitor/agents/agent-windows.md#verify-agent-connectivity-to-azure-monitor). Also make sure that this workspace is linked to your Azure Automation account. To confirm, go to your Automation account and select **Linked workspace** under **Related Resources**.
1. Make sure that the machines show up in the Log Analytics workspace linked to your Automation account. Run the following query in the Log Analytics workspace.
```kusto
Heartbeat
| summarize by Computer, Solutions
```
If you don't see your machine in the query results, it hasn't checked in recently. There's probably a local configuration issue. You should reinstall the Log Analytics agent.
If your machine is listed in the query results, verify under the Solutions property that **changeTracking** is listed. This verifies it is registered with Change Tracking and Inventory. If it is not, check for scope configuration problems. The scope configuration determines which machines are configured for Change Tracking and Inventory. To configure the scope configuration for the target machine, see [Enable Change Tracking and Inventory from an Automation account](../change-tracking/enable-from-automation-account.md).
In your workspace, run this query.
```kusto
Operation
| where OperationCategory == 'Data Collection Status'
| sort by TimeGenerated desc
```
1. If you get a ```Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuota``` result, the quota defined on your workspace has been reached, which has stopped data from being saved. In your workspace, go to **Usage and estimated costs**. Either select a new **Pricing tier** that allows you to use more data, or click on **Daily cap**, and remove the cap.
:::image type="content" source="./media/change-tracking/change-tracking-usage.png" alt-text="Usage and estimated costs." lightbox="./media/change-tracking/change-tracking-usage.png":::
If your issue is still unresolved, follow the steps in [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md) to reinstall the Hybrid Worker for Windows. For Linux, follow the steps in [Deploy a Linux Hybrid Runbook Worker](../automation-linux-hrw-install.md).
## Windows
### <a name="records-not-showing-windows"></a>Scenario: Change Tracking and Inventory records aren't showing for Windows machines
#### Issue
You don't see any Change Tracking and Inventory results for Windows machines that have been enabled for the feature.
#### Cause
This error can have the following causes:
* The Azure Log Analytics agent for Windows isn't running.
* Communication back to the Automation account is being blocked.
* The Change Tracking and Inventory management packs aren't downloaded.
* The VM being enabled might have come from a cloned machine that wasn't prepared with System Preparation (sysprep) with the Log Analytics agent for Windows installed.
#### Resolution
On the Log Analytics agent machine, go to **C:\Program Files\Microsoft Monitoring Agent\Agent\Tools** and run the following commands:
```cmd
net stop healthservice
StopTracing.cmd
StartTracing.cmd VER
net start healthservice
```
If you still need help, you can collect diagnostics information and contact support.
> [!NOTE]
> The Log Analytics agent enables error tracing by default. To enable verbose error messages as in the preceding example, use the `VER` parameter. For information traces, use `INF` when you invoke `StartTracing.cmd`.
##### Log Analytics agent for Windows not running
Verify that the Log Analytics agent for Windows (**HealthService.exe**) is running on the machine.
##### Communication to Automation account blocked
Check Event Viewer on the machine, and look for any events that have the word `changetracking` in them.
To learn about addresses and ports that must be allowed for Change Tracking and Inventory to work, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
##### Management packs not downloaded
Verify that the following Change Tracking and Inventory management packs are installed locally:
* `Microsoft.IntelligencePacks.ChangeTrackingDirectAgent.*`
* `Microsoft.IntelligencePacks.InventoryChangeTracking.*`
* `Microsoft.IntelligencePacks.SingletonInventoryCollection.*`
##### VM from cloned machine that has not been sysprepped
If using a cloned image, sysprep the image first and then install the Log Analytics agent for Windows.
## Linux
### Scenario: No Change Tracking and Inventory results on Linux machines
#### Issue
You don't see any Change Tracking and Inventory results for Linux machines that are enabled for the feature.
#### Cause
Here are possible causes specific to this issue:
* The Log Analytics agent for Linux isn't running.
* The Log Analytics agent for Linux isn't configured correctly.
* There are file integrity monitoring (FIM) conflicts.
#### Resolution
##### Log Analytics agent for Linux not running
Verify that the daemon for the Log Analytics agent for Linux (**omsagent**) is running on your machine. Run the following query in the Log Analytics workspace that's linked to your Automation account.
```loganalytics Copy
Heartbeat
| summarize by Computer, Solutions
```
If you don't see your machine in query results, it hasn't recently checked in. There's probably a local configuration issue and you should reinstall the agent. For information about installation and configuration, see [Collect log data with the Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md).
If your machine shows up in the query results, verify the scope configuration. See [Targeting monitoring solutions in Azure Monitor](../../azure-monitor/insights/solution-targeting.md).
For more troubleshooting of this issue, see [Issue: You are not seeing any Linux data](../../azure-monitor/agents/agent-linux-troubleshoot.md#issue-you-are-not-seeing-any-linux-data).
##### Log Analytics agent for Linux not configured correctly
The Log Analytics agent for Linux might not be configured correctly for log and command-line output collection by using the OMS Log Collector tool. See [Change Tracking and Inventory overview](../change-tracking/overview.md).
##### FIM conflicts
Azure Security Center's FIM feature might be incorrectly validating the integrity of your Linux files. Verify that FIM is operational and correctly configured for Linux file monitoring. See [Change Tracking and Inventory overview](../change-tracking/overview.md).
## Next steps
If you don't see your problem here or you can't resolve your issue, try one of the following channels for additional support:
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
* File an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/), and select **Get Support**. | 52.72327 | 528 | 0.782178 | eng_Latn | 0.987305 |
c94ea8c327b6aa4c8d39ba0f1a36f8e2a734cbab | 2,500 | md | Markdown | README.md | xebialabs-community/xlr-f5-plugin | 5736c83a855481a613dd81b257fdc5f2a4277a8c | [
"MIT"
] | null | null | null | README.md | xebialabs-community/xlr-f5-plugin | 5736c83a855481a613dd81b257fdc5f2a4277a8c | [
"MIT"
] | 6 | 2017-09-18T21:41:34.000Z | 2017-09-19T04:51:23.000Z | README.md | xebialabs-community/xlr-f5-plugin | 5736c83a855481a613dd81b257fdc5f2a4277a8c | [
"MIT"
] | null | null | null | # xlr-f5-plugin
Plugin for GTM/LTM
## Preface
This document describes the functionality provide by the `xlr-f5-plugin`.
## CI Status
[![Build Status][xlr-f5-plugin-travis-image] ][xlr-f5-plugin-travis-url]
[![Codacy][xlr-f5-plugin-codacy-image] ][xlr-f5-plugin-codacy-url]
[![Code Climate][xlr-f5-plugin-code-climate-image] ][xlr-f5-plugin-code-climate-url]
[![License: MIT][xlr-f5-plugin-license-image] ][xlr-f5-plugin-license-url]
[![Github All Releases][xlr-f5-plugin-downloads-image] ]()
[xlr-f5-plugin-travis-image]: https://travis-ci.org/xebialabs-community/xlr-f5-plugin.svg?branch=master
[xlr-f5-plugin-travis-url]: https://travis-ci.org/xebialabs-community/xlr-f5-plugin
[xlr-f5-plugin-codacy-image]: https://api.codacy.com/project/badge/Grade/eca7756dec96451f82a87fd09670096a
[xlr-f5-plugin-codacy-url]: https://www.codacy.com/app/gsajwan/xlr-f5-plugin
[xlr-f5-plugin-code-climate-image]: https://codeclimate.com/github/xebialabs-community/xlr-f5-plugin/badges/gpa.svg
[xlr-f5-plugin-code-climate-url]: https://codeclimate.com/github/xebialabs-community/xlr-f5-plugin
[xlr-f5-plugin-license-image]: https://img.shields.io/badge/License-MIT-yellow.svg
[xlr-f5-plugin-license-url]: https://opensource.org/licenses/MIT
[xlr-f5-plugin-downloads-image]: https://img.shields.io/github/downloads/xebialabs-community/xlr-f5-plugin/total.svg
## Overview
#### GTM:
Global traffic manager (GTM) integration with Xl release gives an option to enable and disable datacenters of choice. GTM gives you full control on the release flow.
#### LTM:
Local traffic manager (LTM) integration with Xl release gives an option to enable and disable pool-member in a specific pool whithout shutting down pool itself.
It also gives you an option to enable/disable multiple pool members.
## Installation
Copy the plugin JAR file into the `SERVER_HOME/plugins` directory of XL Release.
Install Python 2.7.x and the additional [pycontrol](https://pypi.python.org/pypi/pycontrol) and [suds](https://pypi.python.org/pypi/suds) libraries on the xl release server.
This plugin is mean to run the python script on the xl release windows server.
### Configuring Template
#### Enable LTM

#### Disable LTM

#### Enable GTM

#### Disable GTM

---
## References:
* https://devcentral.f5.com/wiki/iControl.LocalLB.ashx
* https://devcentral.f5.com/wiki/iControl.GlobalLB.ashx
| 44.642857 | 173 | 0.7644 | kor_Hang | 0.288411 |
c94ee8244180ec50a22d4c7d4b6a1bda3157aceb | 724 | md | Markdown | README.md | Suraj1199/python-gsoc.github.io | f211a7b82bec4f37941bad4f7fc42b5daa902bd3 | [
"CC-BY-4.0"
] | 78 | 2017-02-06T22:16:10.000Z | 2022-03-14T06:32:22.000Z | README.md | Suraj1199/python-gsoc.github.io | f211a7b82bec4f37941bad4f7fc42b5daa902bd3 | [
"CC-BY-4.0"
] | 62 | 2017-01-29T23:14:29.000Z | 2022-03-16T05:21:25.000Z | README.md | Suraj1199/python-gsoc.github.io | f211a7b82bec4f37941bad4f7fc42b5daa902bd3 | [
"CC-BY-4.0"
] | 123 | 2017-01-18T07:59:17.000Z | 2022-03-12T18:16:10.000Z | Python's Google Summer of Code Site
===================================
For many years, Python has used wiki.python.org to keep track of ideas pages,
FAQs, schedules, and all the plans for our participation in Google Summer of
Code. Unfortunately, editing the wiki isn't easy for everyone: it requires a
separate account, explicit edit permissions, knowledge of somewhat arcane
syntax. As a result, the wiki pages are maintained by a very small group.
Since we had great success with moving the student blog information to
github so people could use pull requests to update it, we're going to try to
do the same with the info pages and see how it works out.
You can view the live webpage at
http://python-gsoc.github.io
| 45.25 | 77 | 0.751381 | eng_Latn | 0.998052 |
c9509e05cba892f52919eff4784460084ec1e5c1 | 1,929 | md | Markdown | Operational_Excellence.md | schweikert/k8s-best-practices | 9632ce1097bf12ac402a18f415277bef9e37e776 | [
"MIT"
] | null | null | null | Operational_Excellence.md | schweikert/k8s-best-practices | 9632ce1097bf12ac402a18f415277bef9e37e776 | [
"MIT"
] | null | null | null | Operational_Excellence.md | schweikert/k8s-best-practices | 9632ce1097bf12ac402a18f415277bef9e37e776 | [
"MIT"
] | 1 | 2021-04-21T01:26:44.000Z | 2021-04-21T01:26:44.000Z | # Guidelines for Operational Excellence - Kubernetes on Azure
This is intended as a guide on how to operate a production ready Kubernetes environment running on Azure.
## What is this
When it comes to operating your Kubernetes production environment in an enterprise, there are many tutorials and opinions that exist in the community.
We strive to collect well founded concepts and help the reader to evaluate options and balance tradeoffs.
Feedback welcome!
To the editors:
> See this as mvp - more detailed topics will come once we have covered the basics
> We should focus on parts that are azure specific on which we will contribute content.
> Please provide references to good Kubernetes resources.
The severity or importance of each topic is indicated by an emoji in the topic name.
* :boom: Critical
* :fire: High
* :cloud: Medium
* :partly_sunny: Low
## Operational principles:
* TBD
Table of Contents
=================
* [Deployment](./Operational_Excellence_deployment.md)
* Infrastructure as Code
* Azure Resource Manager templates
* Terraform
* Helm charts
* [Choose the right VM size for your nodes](./Cost_Optimization.md#node---vm-sizes)
* [Maintenance](./Operational_Excellence_maintenance.md)
* Patch management
* Upgrade management
* Monitoring
* AKS cluster
* AKS master components
* acs-engine
* Identity and permissions
* Azure RBAC roles
* Kubernetes RBAC
* Azure Active Directory integration
* Scaling
* [Cluster Autoscaler](./Cost_Optimization.md#cluster-autoscaler)
* [Horizontal Pod Autoscaler](./Cost_Optimization.md#horizontal-pod-autoscaler)
* [Azure Container Instances connector](./Cost_Optimization.md#azure-container-instances-connector)
* Troubleshooting
* Common issues
* Kubelet logs
* SSH node access
* Links
Links
=================
> Good documentation that should be references | 31.622951 | 150 | 0.733541 | eng_Latn | 0.976778 |
c950a9e5b87edc36de3ab4c767c01f67eb3c3e88 | 554 | md | Markdown | packages/circe-combine-routers/README.md | ecfexorg/circe-kits | 3831f2b55b5f71141848fe36e3f9d8b904716af6 | [
"MIT"
] | null | null | null | packages/circe-combine-routers/README.md | ecfexorg/circe-kits | 3831f2b55b5f71141848fe36e3f9d8b904716af6 | [
"MIT"
] | null | null | null | packages/circe-combine-routers/README.md | ecfexorg/circe-kits | 3831f2b55b5f71141848fe36e3f9d8b904716af6 | [
"MIT"
] | null | null | null | # circe-combine-routers 合并路由对象
## 安装
[](https://nodei.co/npm/circe-combine-routers/)
## 使用
```typescript
import * as Koa from 'koa'
import * as Router from 'koa-router'
import * as checker from 'circe-combine-routers'
const app = new Koa()
const userRouter = new Router()
const postRouter = new Router()
app.use(combineRouters([userRouter, postRouter]))
```
## API
### function combineRoutes (routers: Router[], mounted?: string): Koa.Middleware
- routes 路由对象数组
- mount? 路由前缀
| 20.518519 | 116 | 0.712996 | eng_Latn | 0.322555 |
c9511ab72898357c3310782f9f6321ca01cae58b | 38 | md | Markdown | README.md | prayasht/kotlin-tube | bc7afc974935bd63effb3670c8199f8fe456cf63 | [
"MIT"
] | 1 | 2018-10-08T06:38:21.000Z | 2018-10-08T06:38:21.000Z | README.md | prayash/kotlin-tube | bc7afc974935bd63effb3670c8199f8fe456cf63 | [
"MIT"
] | null | null | null | README.md | prayash/kotlin-tube | bc7afc974935bd63effb3670c8199f8fe456cf63 | [
"MIT"
] | null | null | null | # kotlin-tube
YouTube clone in Kotlin
| 12.666667 | 23 | 0.789474 | slv_Latn | 0.924412 |
c9513e64afac02c1dd9d391dc5d492c765d6de39 | 1,316 | md | Markdown | app/views/static/developers/search.md | OpenAddressesUK/theodolite | 530dc27c530fca1e92cbc500460854f7c17ffcf8 | [
"MIT"
] | 3 | 2015-02-11T15:02:48.000Z | 2016-12-13T10:43:37.000Z | app/views/static/developers/search.md | OpenAddressesUK/theodolite | 530dc27c530fca1e92cbc500460854f7c17ffcf8 | [
"MIT"
] | 83 | 2015-01-02T07:08:11.000Z | 2015-05-18T10:27:23.000Z | app/views/static/developers/search.md | OpenAddressesUK/theodolite | 530dc27c530fca1e92cbc500460854f7c17ffcf8 | [
"MIT"
] | 7 | 2015-01-14T11:39:21.000Z | 2020-07-09T09:33:13.000Z | ---
title: Search API
layout: default
---
Our search API is at [https://openaddressesuk.org/addresses.json](https://openaddressesuk.org/addresses.json)
Simply specify the street, town and postcode arguments on the querystring.
* [https://openaddressesuk.org/addresses.json?street=camberwell](https://openaddressesuk.org/addresses.json?street=camberwell)
* [https://openaddressesuk.org/addresses.json?town=cheltenham](https://openaddressesuk.org/addresses.json?town=cheltenham)
* [https://openaddressesuk.org/addresses.json?postcode=se58qz](https://openaddressesuk.org/addresses.json?postcode=se58qz)
Partial search strings and multiple arguments are supported.
The response will provide you with all of the data matching your search terms including a persistent URL for each address, the addresses themselves in a format similar to the [British Standards Institute BS7666 standard](http://www.bsigroup.co.uk/en-GB/about-bsi/media-centre/press-releases/2006/7/Standardize-the-referencing-and-addressing-of-geographical-objects/#.VOxowLDkfp4), and the geographic centre of each address's postcode in latitude and longitude.
If you don’t fancy playing around with JSON but want to see how this API works then simply visit our [search page](https://openaddressesuk.org/addresses). Our website is built on our APIs. | 73.111111 | 460 | 0.805471 | eng_Latn | 0.918269 |
c95144106243453641bc61c1d8219070a5f3a108 | 2,081 | md | Markdown | docs/integration-services/system-views/TOC.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/system-views/TOC.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/system-views/TOC.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | # [Visão geral](views-integration-services-catalog.md)
# [catalog.catalog_properties](catalog-catalog-properties-ssisdb-database.md)
# [catalog.effective_object_permissions](catalog-effective-object-permissions-ssisdb-database.md)
# [catalog.environment_variables](catalog-environment-variables-ssisdb-database.md)
# [catalog.environment_references](catalog-environment-references-ssisdb-database.md)
# [catalog.environments](catalog-environments-ssisdb-database.md)
# [catalog.event_message_context](catalog-event-message-context.md)
# [catalog.event_messages](catalog-event-messages.md)
# [catalog.executable_statistics](catalog-executable-statistics.md)
# [catalog.executables](catalog-executables.md)
# [catalog.execution_component_phases](catalog-execution-component-phases.md)
# [catalog.execution_data_statistics](catalog-execution-data-statistics.md)
# [catalog.execution_data_taps](catalog-execution-data-taps.md)
# [catalog.execution_parameter_values](catalog-execution-parameter-values-ssisdb-database.md)
# [catalog.execution_property_override_values](catalog-execution-property-override-values.md)
# [catalog.executions](catalog-executions-ssisdb-database.md)
# [catalog.explicit_object_permissions](catalog-explicit-object-permissions-ssisdb-database.md)
# [catalog.extended_operation_info](catalog-extended-operation-info-ssisdb-database.md)
# [catalog.folders](catalog-folders-ssisdb-database.md)
# [catalog.object_parameters](catalog-object-parameters-ssisdb-database.md)
# [catalog.object_versions](catalog-object-versions-ssisdb-database.md)
# [catalog.operation_messages](catalog-operation-messages-ssisdb-database.md)
# [catalog.operations](catalog-operations-ssisdb-database.md)
# [catalog.packages](catalog-packages-ssisdb-database.md)
# [catalog.projects](catalog-projects-ssisdb-database.md)
# [catalog.validations](catalog-validations-ssisdb-database.md)
# [catalog.master_properties](catalog-master-properties-ssisdb-database.md)
# [catalog.worker_agents](catalog-worker-agents-ssisdb-database.md)
| 71.758621 | 99 | 0.805382 | swe_Latn | 0.101693 |
c95149548c0ac9a188a590059798ebaaf2fd6750 | 485 | md | Markdown | _posts/2021-09-24-204097515.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | _posts/2021-09-24-204097515.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | _posts/2021-09-24-204097515.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | ---
title: "Campbell Essential Biology (Paperback+CD-ROM / 4th Ed.)"
date: 2021-09-24 15:14:12
categories: [외국도서, 대학교재-전문서적]
image: https://bimage.interpark.com/bookpinion/add_images/noimg_70_98.gif
description: ● ●
---
## **정보**
- **ISBN : 9781292026329**
- **출판사 : Pearson Education**
- **출판일 : 20090901**
- **저자 : Simon, Eric J./ Reece, Jane B./ Dickey, Jean L.**
------
## **요약**
● ●
------
------
Campbell Essential Biology (Paperback+CD-ROM / 4th Ed.)
------
| 13.472222 | 73 | 0.593814 | yue_Hant | 0.859652 |
c95179201c81d5dbfff9e5743b946210923c68cf | 1,297 | md | Markdown | readme.md | BeniJan/palletization_team | 0bae088c3d2b7629eef339f1af42ff192eeb6c47 | [
"MIT"
] | null | null | null | readme.md | BeniJan/palletization_team | 0bae088c3d2b7629eef339f1af42ff192eeb6c47 | [
"MIT"
] | null | null | null | readme.md | BeniJan/palletization_team | 0bae088c3d2b7629eef339f1af42ff192eeb6c47 | [
"MIT"
] | null | null | null | ### RUN THIS ON YOUR ARDUINO ###
Are you on:
*~ Linux*
Install _pyfirmata_ on your desktop using pip: "pip install pyfirmata"
In case of not having pip, install it by running "sudo apt-get install pip"
To give _pyfirmata_ permissions to access arduino port run in your terminal:
"sudo chmod a+rw /dev/ttyACM0"
Open your Arduino IDE, access _pyfirmata_ examples, then open the
_"StandardFirmata"_ example and upload it to your arduino.
Then you are ready to go, just enter the desired shirt objects list
in _shirtDisorderedList_ variable at *main.py* and run it by entering
"python main.py" inside your terminal at the root of the project.
*~ Windows*
Install _pyfirmata_ on your desktop by following this link [1] simple steps.
Once this project was developed in linux, to run in windows you will need to change:
"board = Arduino('/dev/ttyACM0')" to "board = Arduino(‘COM3’)"
Open your Arduino IDE, access _pyfirmata_ examples, then open the _"StandardFirmata"_
example and upload it to your arduino.
Then you are ready to go, just enter the desired shirt objects list in
_shirtDisorderedList_ variable at *main.py* and run it by entering "python main.py"
inside your terminal at the root of the project.
| 38.147059 | 89 | 0.723978 | eng_Latn | 0.995336 |
c951d92ab4915ed5953f23646e104ba836e02093 | 823 | md | Markdown | j-SearchData/readme.md | molda/components | 1310837e8a0f92fd9c5fa94278d189d27686938d | [
"MIT"
] | null | null | null | j-SearchData/readme.md | molda/components | 1310837e8a0f92fd9c5fa94278d189d27686938d | [
"MIT"
] | null | null | null | j-SearchData/readme.md | molda/components | 1310837e8a0f92fd9c5fa94278d189d27686938d | [
"MIT"
] | null | null | null | ## j-SearchData
This component tries to filter data according to the search phrase.
__Configuration__:
- `datasource {String}` __required__ - a path to the data-source with list of items `Array`
- `output {String}` __required__ - a path for filtered items
- `key {String}` __required__ - a key/property name for searching (default: `name`)
- `delay {Number}` a delay (default: `50` ms)
- `splitwords {Boolean}` tries to find word in various position (default: `true`)
__Good to know__:
This component adds classes below automatically when:
- class `ui-search-used` is added when the user searches for something
- class `ui-search-empty` is added when the user searches for something and at the same time nothing was found
### Author
- Peter Širka <[email protected]>
- [License](https://www.totaljs.com/license/) | 35.782609 | 110 | 0.744836 | eng_Latn | 0.99079 |
c951db4810201272b11c8a2282abd9d548412d24 | 764 | md | Markdown | README.md | chuangwei/blog | b3f44fd3be3eb7bb34ffb761a7717afaaef20fc2 | [
"Apache-2.0"
] | null | null | null | README.md | chuangwei/blog | b3f44fd3be3eb7bb34ffb761a7717afaaef20fc2 | [
"Apache-2.0"
] | 15 | 2020-05-12T12:18:20.000Z | 2020-09-25T02:27:26.000Z | README.md | chuangwei/blog | b3f44fd3be3eb7bb34ffb761a7717afaaef20fc2 | [
"Apache-2.0"
] | null | null | null | # 我的博客
## web专题
## Web topics
[Web topics: how browsers work, an introduction](https://github.com/chuangwei/blog/issues/2)
## Vue ecosystem and framework topics
[Vue notes](https://github.com/chuangwei/blog/issues/3)
[Vuex notes](https://github.com/chuangwei/blog/issues/4)
[Vue Router notes](https://github.com/chuangwei/blog/issues/5)
[vue-element-admin](https://github.com/chuangwei/blog/issues/7)
## 开源库使用小结
阿里DataV
阿里qiankun
## js基础(自用)
[数据类型](https://github.com/chuangwei/blog/issues/8)
[原型](https://github.com/chuangwei/blog/issues/9)
## 我的读书
[关于《你不知道的Javacript》中模块依赖加载器的理解](https://github.com/chuangwei/blog/issues/1)
## 资料
1. [超全的工具函数(包括正则)](https://juejin.im/post/5e6cf42bf265da57397e3694)
2. [文件切片上传凹凸实验室)](https://mp.weixin.qq.com/s/jYng9Fud1Q8YykwqK12NVg)
| 28.296296 | 75 | 0.748691 | yue_Hant | 0.384447 |
c951fc1ffbcd652bd79a8c914808a446ee6ed959 | 1,003 | md | Markdown | docs/linter.md | qiufeihong2018/webpack-1 | 274cbe058105fe87753b6f5613e4ae63b84446aa | [
"MIT"
] | null | null | null | docs/linter.md | qiufeihong2018/webpack-1 | 274cbe058105fe87753b6f5613e4ae63b84446aa | [
"MIT"
] | null | null | null | docs/linter.md | qiufeihong2018/webpack-1 | 274cbe058105fe87753b6f5613e4ae63b84446aa | [
"MIT"
] | null | null | null | # Linter配置
这个模板使用 [ESLint](https://eslint.org/) 作为 linter, 并且使用 [Standard](https://github.com/feross/standard/blob/master/RULES.md) 预设一些小的自定义配置。
## eslint-plugin-vue
我们同样可以添加 [eslint-plugin-vue](https://github.com/vuejs/eslint-plugin-vue), 它提供一大堆有用的规则去书写一致的vue组件-它也可以检测模板!
你也可以找到所有有用规则的概述在 [github](https://github.com/vuejs/eslint-plugin-vue#gear-configs)中。 我们选择添加 `essential` 配置,但是我们推荐你熟悉了他们之后去改写 `strongly-recommended` 或者 `recommended` 规则集。
## 自定义
如果你使用默认的检测规则不开心,你可以自定义选项:
1. 在 `.eslintrc.js` 中覆盖单个规则。例如,你可以添加以下规则来强制分号而不是省略它们:
``` js
// .eslintrc.js
"semi": [2, "always"]
```
2. 例如,在生成项目时选择一个不同的ESLint预置 [eslint-config-airbnb](https://github.com/airbnb/javascript/tree/master/packages/eslint-config-airbnb)。
3. 在生成项目时为ESLint预设选择"none"并定义你自己的规则。为了获取更多信息,请看 [ESLint documentation](https://eslint.org/docs/rules/) 。
## 解决检测错误
你可以运行以下命令,让eslint修复它发现的任何错误(如果它能修复的话——并不是所有的错误都像这样可以修复):
```
npm run lint -- --fix
```
*(中间的 `--` 是必要的,以确保 `--fix` 项被传递给 `eslint`,而不是 `npm`。使用yarn时可省略)*
| 27.861111 | 169 | 0.737787 | yue_Hant | 0.913157 |
c9521a3c3dd041a8fc525effc5eb17d160b18ec2 | 1,284 | md | Markdown | content/5.concepts/agile-and-kanban.md | StevenJV/Book_Generation_Z_Developer | 7e2ff829faff41b1ce2f1f2f9cb88d712ebbc9ee | [
"Apache-2.0"
] | 36 | 2018-02-20T18:52:40.000Z | 2022-01-05T08:36:42.000Z | content/5.concepts/agile-and-kanban.md | StevenJV/Book_Generation_Z_Developer | 7e2ff829faff41b1ce2f1f2f9cb88d712ebbc9ee | [
"Apache-2.0"
] | 57 | 2018-02-22T00:57:13.000Z | 2020-04-03T16:30:33.000Z | content/5.concepts/agile-and-kanban.md | StevenJV/Book_Generation_Z_Developer | 7e2ff829faff41b1ce2f1f2f9cb88d712ebbc9ee | [
"Apache-2.0"
] | 25 | 2018-02-19T22:42:13.000Z | 2021-08-02T15:47:41.000Z | ---
title: Agile and Kanban
status: draft
---
**Topics to cover and ideas**
- history
- why it worked
- agile manifesto
- https://www.agilealliance.org/agile101/the-agile-manifesto/
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
- [Software Craftsmanship](http://manifesto.softwarecraftsmanship.org/)
- Not only working software , but also well-crafted software
- Not only responding to change , but also steadily adding value
- Not only individuals and interactions , but also a community of professionals
- Not only customer collaboration , but also productive partnerships
- Anton cords
- explain concepts (with diagrams)
- how agile become dogma and created environments where agile teams where not agile at all
- processes become more important than understanding why something was being created in the first place, to much effort was put on estimates, to much focus was placed on what could be done in 2 weeks
- Scrumbam is a nice alternative
- [The Mythical Man-Month](https://en.wikipedia.org/wiki/The_Mythical_Man-Month)
| 41.419355 | 203 | 0.735202 | eng_Latn | 0.998781 |
c95348e83efd52e365bb3cd72ee5f2f09ece12bb | 30 | md | Markdown | README.md | RafaelSouza5/desafio | cc0e58b432a62a91c9fb038e5038e4653766a2f8 | [
"MIT"
] | null | null | null | README.md | RafaelSouza5/desafio | cc0e58b432a62a91c9fb038e5038e4653766a2f8 | [
"MIT"
] | null | null | null | README.md | RafaelSouza5/desafio | cc0e58b432a62a91c9fb038e5038e4653766a2f8 | [
"MIT"
] | null | null | null | # desafio
Curso HTML5 & CSS3
| 10 | 19 | 0.7 | por_Latn | 0.596109 |
c9536f5cbf6bf000e2049254c561bf2048a5d361 | 7,183 | md | Markdown | articles/private-cloud/prc-sd-storage.md | EqualsDan/documentation | ec18ac284a705ca8cbaae9a6dd8ea20d38badbaa | [
"MIT"
] | null | null | null | articles/private-cloud/prc-sd-storage.md | EqualsDan/documentation | ec18ac284a705ca8cbaae9a6dd8ea20d38badbaa | [
"MIT"
] | null | null | null | articles/private-cloud/prc-sd-storage.md | EqualsDan/documentation | ec18ac284a705ca8cbaae9a6dd8ea20d38badbaa | [
"MIT"
] | null | null | null | ---
title: Private Cloud for Storage Service Definition | UKCloud Ltd
description: Provides an overview of what is provided by the Private Cloud for Storage service
services: private-cloud
author: Sue Highmoor
toc_rootlink: Service Definition
toc_sub1:
toc_sub2:
toc_sub3:
toc_sub4:
toc_title: Private Cloud for Storage Service Definition
toc_fullpath: Service Definition/prc-sd-storage.md
toc_mdlink: prc-sd-storage.md
---
# Private Cloud for Storage Service Definition
## Why UKCloud?
UKCloud is dedicated to helping the UK Public Sector and UK citizens by delivering more choice and flexibility through safe and trusted cloud technology. We own and operate a UK-sovereign, industry-leading, multi-cloud platform, located within the Government’s Crown Campus, offering multiple cloud technologies, including VMware, Azure, OpenStack, OpenShift and Oracle. This enables customers to choose the right technology for creating new workloads or migrating existing applications to the cloud.
We recognise the importance of government services in making the country run smoothly, which is why we include the highest level of support to all our customers at no extra cost. This includes a dedicated 24/7 UK telephone and ticket support, and Network Operations Centre (NOC) utilising protective and proactive monitoring tools, and access to UKCloud’s technical experts.

## What is Private Cloud for Storage?
Our Private Cloud for Storage provides single-tenant storage Infrastructure as a Service. Your data is hosted in one of our UK data centres, to gain the benefits of our mature and proven Assured OFFICIAL and Elevated OFFICIAL security domains; or within the Government’s Crown Campus.
Private Cloud for Storage is designed to be deployed as part of a larger solution to include genuine multi-cloud services such as UKCloud for VMware and UKCloud for OpenStack. This enables you to leverage the benefits of the UKCloud platform; even on infrastructure that is entirely dedicated to your organisation.
For full information regarding this product, we have [Service Scopes](prc-sco-storage.md), [FAQs](prc-faq-storage.md) and other relevant documents on our [Knowledge Centre](https://docs.ukcloud.com).
## What the service can help you achieve
- **Increased Security and Compliance** – gain the advantages of cloud whilst retaining regulatory compliance, through physical separation
- **Deployment of secure disconnected environments** – providing connectivity via private and secure network links, rather than over the public internet
- **Resolves migration challenges** – providing full control over the configuration, simplifying migration from legacy on-premise environments
- **Guaranteed resources** - your own private environment structured to your requirements to mitigate the risk of contention from other organisations
## Product options
The service is designed to be flexible and allows you to choose from the list below to match your requirements.
### Cloud Design and Purchase Options
UKCloud Design Services - flexible purchase options
- UKCloud Design, Procurement, Deployment and Management
- UKCloud Design, Customer Procurement, UKCloud Deployment and Management
### Location of Private Storage
Choose the location where the Private Storage should be situated
- UKCloud Data Centres
- Crown Hosting Data Centres
- Your Own Data Centre
### Security Domain
Choose the security domain for your Private Storage
- Assured OFFICIAL - DDoS protected internet, PSN, HSCN and Janet
- Elevated OFFICIAL - PSN and RLI
- Above OFFICIAL - SLI & Crypto
### Pricing and Packaging Options
Private Cloud solutions are offered with two options
- OPEX
- CAPEX
## Pricing and packaging
Full pricing with all options including licensing and connectivity is available in the [*UKCloud Pricing Guide*](https://ukcloud.com/wp-content/uploads/2019/06/ukcloud-pricing-guide-11.0.pdf).
## Accreditation and information assurance
The security of our platform is our number one priority. We’ve always been committed to adhering to exacting standards, frameworks and best practice. Everything we do is subject to regular independent validation by government accreditors, sector auditors, and management system assessors. Details are available on the [UKCloud website](https://ukcloud.com/governance/).
## Connectivity options
UKCloud provides one of the best-connected cloud platforms for the UK Public Sector. We offer a range of flexible connectivity options detailed in the [*UKCloud Pricing Guide*](https://ukcloud.com/wp-content/uploads/2019/06/ukcloud-pricing-guide-11.0.pdf) which enable access to our secure platform by DDoS-protected internet, native PSN, Janet, HSCN and RLI and your own lease lines via our HybridConnect service.
## An SLA you can trust
We understand that enterprise workloads need a dependable service that underpins the reliability of the application to users and other systems, which is why we offer one of the best SLAs on G-Cloud. For full details on the service SLA including measurements and service credits, please view the [*SLA definition article*](../other/other-ref-sla-definition.md) on the UKCloud Knowledge Centre.
<table>
<tr>
<td><b>Platform SLA</b></td>
<td colspan="2">99.99%</td>
</tr>
<tr>
<td><b>Portal SLA</b></td>
<td colspan="2">99.90%</td>
</tr>
<tr>
<td><b>Availability calculation</b></td>
<td colspan="2">Availability indication is based on an average 730 hours per month. Excludes planned and emergency maintenance.</td>
</tr>
<tr>
<td><b>SLA Measurement</b></td>
<td colspan="2">Availability of all or part of the storage infrastructure.</td>
</tr>
<tr>
<td><b>Key exclusions</b></td>
<td>Applies to All-Inclusive, UKCloud Hosted and Crown Campus Hosted</td>
<td>Deletion or modification of files by customer resulting in data loss. Any access provided by you to your user base that takes the storage system beyond its recommended performance and connectivity thresholds. Faults within external connectivity providers (for example DDoS-protected internet, PSN, Janet or HSCN) and components co-located at UKCloud.</td>
</tr>
<tr>
<td></td>
<td>Applies to customer-supplied hardware</td>
<td>As above, plus: any loss of connectivity or data including data corruption as a result of you or your suppliers installing new or additional capacity to the storage system.</td>
</tr>
<tr>
<td></td>
<td>Applies to Crown Campus Hosted</td>
<td>As above, plus: any platform outages causing disruption to power and cooling (as they are out of UKCloud's control).</td>
</tr>
</table>
## The small print
For full terms and conditions including onboarding and responsibilities, please refer to the [*Terms and conditions documents*](../other/other-ref-terms-and-conditions.md).
## Feedback
If you find an issue with this article, click **Improve this Doc** to suggest a change. If you have an idea for how we could improve any of our services, visit the [Ideas](https://community.ukcloud.com/ideas) section of the [UKCloud Community](https://community.ukcloud.com).
| 50.943262 | 500 | 0.777113 | eng_Latn | 0.991673 |
c9537e15585c5040732071306744c462958f1fc2 | 1,888 | md | Markdown | index.md | mulle-core/mulle-core.github.io | 8d2ae02f0cc56df11fe1d4d56341837008db1e4b | [
"BSD-3-Clause"
] | null | null | null | index.md | mulle-core/mulle-core.github.io | 8d2ae02f0cc56df11fe1d4d56341837008db1e4b | [
"BSD-3-Clause"
] | null | null | null | index.md | mulle-core/mulle-core.github.io | 8d2ae02f0cc56df11fe1d4d56341837008db1e4b | [
"BSD-3-Clause"
] | null | null | null | # mulle-core
The mulle-core library collection is written for C11.
*mulle-core* provides functionality that is outside of the C standard libraries.
These libraries are not strictly concerned with concurrency like mulle-concurrent is,
but may use features of mulle-concurrent. So *mulle-core* is a grabbag of libraries, that
is set to grow and that might get reorganized into more topical library
collections later on.
Library | Description
--------------------------------------------------------------------|----------------------
[mulle-atinit](//github.com/mulle-core/mulle-atinit) | A workaround for non-determinisic shared library loaders
[mulle-atexit](//github.com/mulle-core/mulle-atexit) | A workaround for deficient c-library implementations
[mulle-dlfcn](//github.com/mulle-core/mulle-dlfcn) | A wrapper for cross-plaftform dlopen, dlsysm operations
[mulle-mmap](//github.com/mulle-core/mulle-mmap) | Memory mapped file access, cross-platform
[mulle-fprintf](//github.com/mulle-core/mulle-fprintf) | mulle-fprintf marries mulle-sprintf with <stdio.h>
[mulle-sprintf](//github.com/mulle-core/mulle-sprintf) | An extensible sprintf function supporting stdarg and mulle-vararg
[mulle-stacktrace](//github.com/mulle-core/mulle-stacktrace) | Stracktrace support for various OS
[mulle-testallocator](//github.com/mulle-core/mulle-testallocator) | C memory leak and double free checking
[mulle-time](//github.com/mulle-core/mulle-time) | Small abstraction over platform clocks and time types
*mulle-core* is based on [mulle-concurrent](//github.com/mulle-concurrent) and
[mulle-c](//github.com/mulle-c).
The [MulleFoundation](https://MulleFoundation.github.io) is based on *mulle-core*.
| 65.103448 | 135 | 0.670021 | eng_Latn | 0.857952 |
c953a02677f530aae01fe3db3b386fd4dc09fc7c | 23,336 | md | Markdown | content/en/blog/_posts/2021-10-05-nsa-cisa-hardening.md | tamarabarbosa/website | 6b15373d6124f034efd4655a3134f3d0d343ab60 | [
"CC-BY-4.0"
] | 2 | 2019-06-30T13:03:01.000Z | 2021-11-21T18:33:32.000Z | content/en/blog/_posts/2021-10-05-nsa-cisa-hardening.md | tamarabarbosa/website | 6b15373d6124f034efd4655a3134f3d0d343ab60 | [
"CC-BY-4.0"
] | null | null | null | content/en/blog/_posts/2021-10-05-nsa-cisa-hardening.md | tamarabarbosa/website | 6b15373d6124f034efd4655a3134f3d0d343ab60 | [
"CC-BY-4.0"
] | null | null | null | ---
layout: blog
title: A Closer Look at NSA/CISA Kubernetes Hardening Guidance
date: 2021-10-05
slug: nsa-cisa-kubernetes-hardening-guidance
---
**Authors:** Jim Angel (Google), Pushkar Joglekar (VMware), and Savitha
Raghunathan (Red Hat)
{{% alert title="Disclaimer" %}}
The open source tools listed in this article are to serve as examples only
and are in no way a direct recommendation from the Kubernetes community or authors.
{{% /alert %}}
## Background
USA's National Security Agency (NSA) and the Cybersecurity and Infrastructure
Security Agency (CISA)
released, "[Kubernetes Hardening Guidance](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF)"
on August 3rd, 2021. The guidance details threats to Kubernetes environments
and provides secure configuration guidance to minimize risk.
The following sections of this blog correlate to the sections in the NSA/CISA guidance.
Any missing sections are skipped because of limited opportunities to add
anything new to the existing content.
_Note_: This blog post is not a substitute for reading the guide. Reading the published
guidance is recommended before proceeding as the following content is
complementary.
## Introduction and Threat Model
Note that the threats identified as important by the NSA/CISA, or the intended audience of this guidance, may be different from the threats that other enterprise users of Kubernetes consider important. This section
is still useful for organizations that care about data, resource theft and
service unavailability.
The guidance highlights the following three sources of compromises:
- Supply chain risks
- Malicious threat actors
- Insider threats (administrators, users, or cloud service providers)
The [threat model](https://en.wikipedia.org/wiki/Threat_model) tries to take a step back and review threats that not only
exist within the boundary of a Kubernetes cluster but also include the underlying
infrastructure and surrounding workloads that Kubernetes does not manage.
For example, when a workload outside the cluster shares the same physical
network, it has access to the kubelet and to control plane components: etcd, controller manager, scheduler and API
server. Therefore, the guidance recommends having network level isolation
separating Kubernetes clusters from other workloads that do not need connectivity
to Kubernetes control plane nodes. Specifically, scheduler, controller-manager,
etcd only need to be accessible to the API server. Any interactions with Kubernetes
from outside the cluster can happen by providing access to API server port.
List of ports and protocols for each of these components are
defined in [Ports and Protocols](/docs/reference/ports-and-protocols/)
within the Kubernetes documentation.
> Special note: kube-scheduler and kube-controller-manager use different ports than the ones mentioned in the guidance
The [Threat modelling](https://cnsmap.netlify.app/threat-modelling) section
from the CNCF [Cloud Native Security Whitepaper + Map](https://github.com/cncf/tag-security/tree/main/security-whitepaper)
provides another perspective on approaching threat modelling Kubernetes, from a
cloud native lens.
## Kubernetes Pod security
Kubernetes by default does not guarantee strict workload isolation between pods
running in the same node in a cluster. However, the guidance provides several
techniques to enhance existing isolation and reduce the attack surface in case of a
compromise.
### "Non-root" containers and "rootless" container engines
Several best practices related to basic security principle of least privilege
i.e. provide only the permissions are needed; no more, no less, are worth a
second look.
The guide recommends setting non-root user at build time instead of relying on
setting `runAsUser` at runtime in your Pod spec. This is a good practice and provides
some level of defense in depth. For example, suppose the container image is built with user `10001`
and the Pod spec omits the `runAsUser` field in its `Deployment` object. In that
case, there are certain edge cases that are worth exploring for awareness:
1. Pods can fail to start, if the user defined at build time is different from
the one defined in pod spec and some files are as a result inaccessible.
2. Pods can end up sharing User IDs unintentionally. This can be problematic
even if the User IDs are non-zero in a situation where a container escape to
host file system is possible. Once the attacker has access to the host file
system, they get access to all the file resources that are owned by other
unrelated pods that share the same UID.
3. Pods can end up sharing User IDs, with other node level processes not managed
by Kubernetes e.g. node level daemons for auditing, vulnerability scanning,
telemetry. The threat is similar to the one above where host file system
access can give attacker full access to these node level daemons without
needing to be root on the node.
However, none of these cases will have as severe an impact as a container
running as root being able to escape as a root user on the host, which can provide
an attacker with complete control of the worker node, further allowing lateral
movement to other worker or control plane nodes.
Kubernetes 1.22 introduced
an [alpha feature](/docs/tasks/administer-cluster/kubelet-in-userns/)
that specifically reduces the impact of such a control plane component running
as root user to a non-root user through user namespaces.
That ([alpha stage](/docs/reference/command-line-tools-reference/feature-gates/#feature-stages)) support for user namespaces / rootless mode is available with
the following container runtimes:
- [Docker Engine](https://docs.docker.com/engine/security/rootless/)
- [Podman](https://developers.redhat.com/blog/2020/09/25/rootless-containers-with-podman-the-basics)
Some distributions support running in rootless mode, like the following:
- [kind](https://kind.sigs.k8s.io/docs/user/rootless/)
- [k3s](https://rancher.com/docs/k3s/latest/en/advanced/#running-k3s-with-rootless-mode-experimental)
- [Usernetes](https://github.com/rootless-containers/usernetes)
### Immutable container filesystems
The NSA/CISA Kubernetes Hardening Guidance highlights an often overlooked feature `readOnlyRootFileSystem`, with a
working example in [Appendix B](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF#page=42). This example limits execution and tampering of
containers at runtime. Any read/write activity can then be limited to few
directories by using `tmpfs` volume mounts.
However, some applications that modify the container filesystem at runtime, like exploding a WAR or JAR file at container startup,
could face issues when enabling this feature. To avoid this issue, consider making minimal changes to the filesystem at runtime
when possible.
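To make this concrete, here is a minimal, illustrative Pod sketch (the pod, container, and image names are placeholders) that enables a read-only root filesystem and carves out a writable in-memory `tmpfs` volume at `/tmp` for scratch data:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-readonly-fs                      # placeholder name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0.0    # placeholder image
    securityContext:
      readOnlyRootFilesystem: true           # root filesystem becomes immutable at runtime
    volumeMounts:
    - name: tmp
      mountPath: /tmp                        # only this path stays writable
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory                         # tmpfs-backed scratch space
```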
### Building secure container images
Kubernetes Hardening Guidance also recommends running a scanner at deploy time as an admission controller,
to prevent vulnerable or misconfigured pods from running in the cluster.
Theoretically, this sounds like a good approach but there are several caveats to
consider before this can be implemented in practice:
- Depending on network bandwidth, available resources and scanner of choice,
scanning for vulnerabilities for an image can take an indeterminate amount of
time. This could lead to slower or unpredictable pod start up times, which
could result in spikes of unavailability when apps are serving peak load.
- If the policy that allows or denies pod startup is made using incorrect or
incomplete data it could result in several false positive or false negative
outcomes like the following:
- inside a container image, the `openssl` package is detected as vulnerable. However,
the application is written in Golang and uses the Go `crypto` package for TLS. Therefore, this vulnerability
is not in the code execution path and as such has minimal impact if it
remains unfixed.
- A vulnerability is detected in the `openssl` package for a Debian base image.
However, the upstream Debian community considers this as a Minor impact
vulnerability and as a result does not release a patch fix for this
vulnerability. The owner of this image is now stuck with a vulnerability that
cannot be fixed and a cluster that does not allow the image to run because
of predefined policy that does not take into account whether the fix for a
vulnerability is available or not
- A Golang app is built on top of a [distroless](https://github.com/GoogleContainerTools/distroless)
image, but it is compiled with a Golang version that uses a vulnerable [standard library](https://pkg.go.dev/std).
The scanner has no visibility into the Golang version, only into OS-level packages. So it
allows the pod to run in the cluster in spite of the image containing an
app binary built on a vulnerable Golang version.
To be clear, relying on vulnerability scanners is absolutely a good idea but
policy definitions should be flexible enough to allow:
- Creation of exception lists for images or vulnerabilities through labelling
- Overriding the severity with a risk score based on impact of a vulnerability
- Applying the same policies at build time to catch vulnerable images with
fixable vulnerabilities before they can be deployed into Kubernetes clusters
Special considerations like offline vulnerability database fetch, may also be
needed, if the clusters run in an air-gapped environment and the scanners
require internet access to update the vulnerability database.
### Pod Security Policies
Since Kubernetes v1.21, the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/)
API and related features are [deprecated](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),
but some of the guidance in this section will still apply for the next few years, until cluster operators
upgrade their clusters to newer Kubernetes versions.
The Kubernetes project is working on a replacement for PodSecurityPolicy.
Kubernetes v1.22 includes an alpha feature called [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
that is intended to allow enforcing a minimum level of isolation between pods.
The built-in isolation levels for Pod Security Admission are derived
from [Pod Security Standards](/docs/concepts/security/pod-security-standards/), which is a superset of all the components mentioned in Table I [page 10](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF#page=17) of
the guidance.
Information about migrating from PodSecurityPolicy to the Pod Security
Admission feature is available
in
[Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](/docs/tasks/configure-pod-container/migrate-from-psp/).
One important behavior mentioned in the guidance that remains the same between
Pod Security Policy and its replacement is that enforcing either of them does
not affect pods that are already running. With both PodSecurityPolicy and Pod Security Admission,
the enforcement happens during the pod creation
stage.
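As a sketch of what opting in can look like once Pod Security Admission is available in your cluster (remember it is alpha and feature-gated in v1.22), namespace labels select the enforcement level; the namespace name below is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-apps                                   # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline    # reject pods that violate the baseline standard
    pod-security.kubernetes.io/warn: restricted     # warn (but allow) on anything short of restricted
    pod-security.kubernetes.io/audit: restricted    # record violations in the audit log
```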
### Hardening container engines
Some container workloads are less trusted than others but may need to run in the
same cluster. In those cases, running them on dedicated nodes that include
hardened container runtimes that provide stricter pod isolation boundaries can
act as a useful security control.
Kubernetes supports
an API called [RuntimeClass](/docs/concepts/containers/runtime-class/) that reached stable / GA
(and is therefore enabled by default) as of Kubernetes v1.20.
RuntimeClass allows you to ensure that Pods requiring strong isolation are scheduled onto
nodes that can offer it.
Some third-party projects that you can use in conjunction with RuntimeClass are:
- [kata containers](https://github.com/kata-containers/kata-containers/blob/main/docs/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#create-runtime-class-for-kata-containers)
- [gvisor](https://gvisor.dev/docs/user_guide/containerd/quick_start/)
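Building on that, a RuntimeClass simply maps a name to a CRI handler that must already be configured on the node's container runtime. This is only an illustrative sketch: the handler name `kata` and the other names are assumptions that depend on how your nodes were set up.

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: kata                                    # must match a handler configured in the node's CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload                       # placeholder name
spec:
  runtimeClassName: sandboxed                    # schedule onto nodes offering the hardened runtime
  containers:
  - name: app
    image: registry.example.com/untrusted:1.0.0  # placeholder image
```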
As discussed here and in the guidance, many features and tooling exist in and around
Kubernetes that can enhance the isolation boundaries between
pods. Based on relevant threats and risk posture, you should pick and choose
between them, instead of trying to apply all the recommendations. Having said that, cluster-level
isolation, i.e. running workloads in dedicated clusters, remains the strictest workload
isolation mechanism, in spite of the improvements mentioned earlier here and in the guide.
## Network Separation and Hardening
Kubernetes Networking can be tricky and this section focuses on how to secure
and harden the relevant configurations. The guide identifies the following as key
takeaways:
- Using NetworkPolicies to create isolation between resources,
- Securing the control plane
- Encrypting traffic and sensitive data
### Network Policies
Network policies can be created with the help of network plugins. In order to
make the creation and visualization easier for users, Cilium supports
a [web GUI tool](https://editor.cilium.io). That web GUI lets you create Kubernetes
NetworkPolicies (a generic API that nevertheless requires a compatible CNI plugin),
and / or Cilium network policies (CiliumClusterwideNetworkPolicy and CiliumNetworkPolicy,
which only work in clusters that use the Cilium CNI plugin).
You can use these APIs to restrict network traffic between pods, and therefore minimize the
attack vector.
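For instance, a common starting point is a default-deny policy per namespace, after which only the flows you explicitly need are allowed back in. This sketch uses the generic NetworkPolicy API and assumes the cluster's CNI plugin enforces it; the namespace name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-apps       # placeholder namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                   # no rules are defined, so all ingress and egress traffic is denied
```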
Another scenario that is worth exploring is the usage of external IPs. Some
services, when misconfigured, can create random external IPs. An attacker can take
advantage of this misconfiguration and easily intercept traffic. This vulnerability
has been reported
in [CVE-2020-8554](https://www.cvedetails.com/cve/CVE-2020-8554/).
Using [externalip-webhook](https://github.com/kubernetes-sigs/externalip-webhook)
can mitigate this vulnerability by preventing the services from using random
external IPs. [externalip-webhook](https://github.com/kubernetes-sigs/externalip-webhook)
only allows creation of services that don't require external IPs or whose
external IPs are within the range specified by the administrator.
> CVE-2020-8554 - Kubernetes API server in all versions allow an attacker
> who is able to create a ClusterIP service and set the `spec.externalIPs` field,
> to intercept traffic to that IP address. Additionally, an attacker who is able to
> patch the `status` (which is considered a privileged operation and should not
> typically be granted to users) of a LoadBalancer service can set the
> `status.loadBalancer.ingress.ip` to similar effect.
### Resource Policies
In addition to configuring ResourceQuotas and limits, consider restricting how many process
IDs (PIDs) a given Pod can use, and also reserving some PIDs for node-level use to avoid
resource exhaustion. More details on how to apply these limits can be
found in [Process ID Limits And Reservations](/docs/concepts/policy/pid-limiting/).
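A sketch of the kubelet side of this, assuming the kubelet is driven by a configuration file, might look like the following. The values are purely illustrative and need to be sized for your workloads, and the `pid` reservation assumes a cluster version where node PID limits are supported:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 1024        # maximum number of PIDs any single pod may use
systemReserved:
  pid: "1000"             # PIDs held back for node-level daemons outside Kubernetes
```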
### Control Plane Hardening
In the next section, the guide covers control plane hardening. It is worth
noting that
from [Kubernetes 1.20](https://github.com/kubernetes/kubernetes/issues/91506),
insecure port from API server, has been removed.
### Etcd
As a general rule, the etcd server should be configured to only trust
certificates assigned to the API server. It limits the attack surface and prevents a
malicious attacker from gaining access to the cluster. It might be beneficial to
use a separate CA for etcd, as it by default trusts all the certificates issued
by the root CA.
### Kubeconfig Files
In addition to specifying the token and certificates directly, `.kubeconfig`
supports dynamic retrieval of temporary tokens using auth provider plugins.
Beware of the possibility of malicious
shell [code execution](https://banzaicloud.com/blog/kubeconfig-security/) in a
`kubeconfig` file. Once attackers gain access to the cluster, they can steal ssh
keys/secrets or more.
### Secrets
Kubernetes [Secrets](/docs/concepts/configuration/secret/) is the native way of managing secrets as a Kubernetes
API object. However, in some scenarios such as a desire to have a single source of truth for all app secrets, irrespective of whether they run on Kubernetes or not, secrets can be managed loosely coupled with
Kubernetes and consumed by pods through side-cars or init-containers with minimal usage of Kubernetes Secrets API.
[External secrets providers](https://github.com/external-secrets/kubernetes-external-secrets)
and [csi-secrets-store](https://github.com/kubernetes-sigs/secrets-store-csi-driver)
are some of these alternatives to Kubernetes Secrets.
## Log Auditing
The NSA/CISA guidance stresses monitoring and alerting based on logs. The key points
include logging at the host level, application level, and on the cloud. When
running Kubernetes in production, it's important to understand who's
responsible, and who's accountable, for each layer of logging.
### Kubernetes API auditing
One area that deserves more focus is what exactly should alert or be logged. The
document outlines a sample policy in [Appendix L: Audit Policy](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF#page=55) that logs everything at the `RequestResponse` level, including metadata and request / response bodies. While helpful for a demo, it may not be practical for production.
Each organization needs to evaluate its own threat model and build an audit policy that complements troubleshooting and incident response. Think
about how someone would attack your organization and what audit trail could identify it. Review more advanced options for tuning audit logs in the official [audit logging documentation](/docs/tasks/debug-application-cluster/audit/#audit-policy).
It's crucial to tune your audit logs to only include events that meet your threat model. A minimal audit policy that logs everything at `metadata` level can also be a good starting point.
Audit logging configurations can also be tested with
kind following these [instructions](https://kind.sigs.k8s.io/docs/user/auditing).
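For example, a minimal policy along those lines, which records every request at the `Metadata` level and drops the verbose request and response bodies, looks like this sketch:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata      # log who did what and when, but not request/response bodies
```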
### Streaming logs and auditing
Logging is important for threat and anomaly detection. As the document outlines,
it's a best practice to scan and alert on logs as close to real time as possible
and to protect logs from tampering if a compromise occurs. It's important to
reflect on the various levels of logging and identify the critical areas such as
API endpoints.
Kubernetes API audit logging can stream to a webhook and there's an example in [Appendix N: Webhook configuration](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF#page=58). Using a webhook could be a method that
stores logs off cluster and/or centralizes all audit logs. Once logs are
centrally managed, look to enable alerting based on critical events. Also ensure
you understand what the baseline is for normal activities.
### Alert identification
While the guide stressed the importance of notifications, there is not a blanket
event list to alert from. The alerting requirements vary based on your own
requirements and threat model. Examples include the following events:
- Changes to the `securityContext` of a Pod
- Updates to admission controller configs
- Accessing certain files / URLs
### Additional logging resources
- [Seccomp Security Profiles and You: A Practical Guide - Duffie Cooley](https://www.youtube.com/watch?v=OPuu8wsu2Zc)
- [TGI Kubernetes 119: Gatekeeper and OPA](https://www.youtube.com/watch?v=ZJgaGJm9NJE)
- [Abusing The Lack of Kubernetes Auditing Policies](https://www.lacework.com/blog/hiding-in-plaintext-sight-abusing-the-lack-of-kubernetes-auditing-policies/)
- [Enable seccomp for all workloads with a new v1.22 alpha feature](https://kubernetes.io/blog/2021/08/25/seccomp-default/)
- [This Week in Cloud Native: Auditing / Pod Security](https://www.twitch.tv/videos/1147889860)
## Upgrading and Application Security practices
Kubernetes releases three times per year, so upgrade-related toil is a common problem for
people running production clusters. In addition to this, operators must
regularly upgrade the underlying node's operating system and running
applications. This is a best practice to ensure continued support and to reduce
the likelihood of bugs or vulnerabilities.
Kubernetes supports the three most recent stable releases. While each Kubernetes
release goes through a large number of tests before being published, some
teams aren't comfortable running the latest stable release until some time has
passed. No matter what version you're running, ensure that patch upgrades
happen frequently or automatically. More information can be found in
the [version skew](/releases/version-skew-policy/) policy
pages.
When thinking about how you'll manage node OS upgrades, consider ephemeral
nodes. Having the ability to destroy and add nodes allows your team to respond
quicker to node issues. In addition, having deployments that tolerate node
instability (and a culture that encourages frequent deployments) allows for
easier cluster upgrades.
Additionally, it's worth reiterating from the guidance that periodic
vulnerability scans and penetration tests can be performed on the various system
components to proactively look for insecure configurations and vulnerabilities.
### Finding release & security information
To find the most recent Kubernetes supported versions, refer to
[https://k8s.io/releases](https://k8s.io/releases), which includes minor versions. It's good to stay up to date with
your minor version patches.
If you're running a managed Kubernetes offering, look for their release
documentation and find their various security channels.
Subscribe to
the [Kubernetes Announce mailing list](https://groups.google.com/g/kubernetes-announce).
The Kubernetes Announce mailing list is searchable for terms such
as "[Security Advisories](https://groups.google.com/g/kubernetes-announce/search?q=%5BSecurity%20Advisory%5D)".
You can set up alerts and email notifications as long as you know what key
words to alert on.
## Conclusion
In summary, it is fantastic to see security practitioners sharing this
level of detailed guidance in public. This guidance further highlights
Kubernetes going mainstream and how securing Kubernetes clusters and the
application containers running on Kubernetes continues to need attention and focus of
practitioners. Only a few weeks after the guidance was published, an open source
tool [kubescape](https://github.com/armosec/kubescape) to validate cluster
against this guidance became available.
This tool can be a great starting point to check the current state of your
clusters, after which you can use the information in this blog post and in the guidance to assess
where improvements can be made.
Finally, it is worth reiterating that not all controls in this guidance will
make sense for all practitioners. The best way to know which controls matter is
to rely on the threat model of your own Kubernetes environment.
_A special shout out and thanks to Rory McCune (@raesene) for his inputs to this blog post_
| 55.827751 | 263 | 0.803822 | eng_Latn | 0.996826 |
c95405010cb48a85b704a9b6136fa0b268890654 | 3,612 | md | Markdown | docs/en/custom-navbar.md | lcen543/leave-leave-admin | 01f5e46cc2efe5e2e04d1683afd9e9ad3940ee3b | [
"MIT"
] | null | null | null | docs/en/custom-navbar.md | lcen543/leave-leave-admin | 01f5e46cc2efe5e2e04d1683afd9e9ad3940ee3b | [
"MIT"
] | null | null | null | docs/en/custom-navbar.md | lcen543/leave-leave-admin | 01f5e46cc2efe5e2e04d1683afd9e9ad3940ee3b | [
"MIT"
] | null | null | null | # Customize the head navigation bar
Since version `1.5.6`, you can add html elements to the top navigation bar. Open `app/Admin/bootstrap.php`:
```php
use Leave\Admin\Facades\Admin;
Admin::navbar(function (\Leave\Admin\Widgets\Navbar $navbar) {
$navbar->left('html...');
$navbar->right('html...');
});
```
The `left` and `right` methods are used to add content to the left and right sides of the header. The method parameters can be strings or any object that can be rendered (objects that implement `Htmlable` or `Renderable`, or that have a `__toString()` method).
## Add elements to the left
For example, to add a search bar on the left, first create a view `resources/views/search-bar.blade.php`:
```php
<style>
.search-form {
width: 250px;
margin: 10px 0 0 20px;
border-radius: 3px;
float: left;
}
.search-form input[type="text"] {
color: #666;
border: 0;
}
.search-form .btn {
color: #999;
background-color: #fff;
border: 0;
}
</style>
<form action="/admin/posts" method="get" class="search-form" pjax-container>
<div class="input-group input-group-sm ">
<input type="text" name="title" class="form-control" placeholder="Search...">
<span class="input-group-btn">
<button type="submit" name="search" id="search-btn" class="btn btn-flat"><i class="fa fa-search"></i></button>
</span>
</div>
</form>
```
Then add it to the head navigation bar:
```php
$navbar->left(view('search-bar'));
```
## Add elements to the right
On the right side of the navigation you can only add `<li>` tags, for example to show some notification icons. Create a new rendering class `app/Admin/Extensions/Nav/Links.php`:
```php
<?php
namespace App\Admin\Extensions\Nav;
class Links
{
public function __toString()
{
return <<<HTML
<li>
<a href="#">
<i class="fa fa-envelope-o"></i>
<span class="label label-success">4</span>
</a>
</li>
<li>
<a href="#">
<i class="fa fa-bell-o"></i>
<span class="label label-warning">7</span>
</a>
</li>
<li>
<a href="#">
<i class="fa fa-flag-o"></i>
<span class="label label-danger">9</span>
</a>
</li>
HTML;
}
}
```
Then add it to the head navigation bar:
```php
$navbar->right(new \App\Admin\Extensions\Nav\Links());
```
Or use the following html to add a drop-down menu:
```html
<li class="dropdown notifications-menu">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" aria-expanded="false">
<i class="fa fa-bell-o"></i>
<span class="label label-warning">10</span>
</a>
<ul class="dropdown-menu">
<li class="header">You have 10 notifications</li>
<li>
<!-- inner menu: contains the actual data -->
<ul class="menu">
<li>
<a href="#">
<i class="fa fa-users text-aqua"></i> 5 new members joined today
</a>
</li>
<li>
<a href="#">
<i class="fa fa-warning text-yellow"></i> Very long description here that may not fit into the
page and may cause design problems
</a>
</li>
<li>
<a href="#">
<i class="fa fa-users text-red"></i> 5 new members joined
</a>
</li>
<li>
<a href="#">
<i class="fa fa-shopping-cart text-green"></i> 25 sales made
</a>
</li>
<li>
<a href="#">
<i class="fa fa-user text-red"></i> You changed your username
</a>
</li>
</ul>
</li>
<li class="footer"><a href="#">View all</a></li>
</ul>
</li>
```
More components can be found here [Bootstrap](https://getbootstrap.com/)
| 24.241611 | 248 | 0.596346 | eng_Latn | 0.805059 |
c954e27e2a33277cdba0adbb608984f9709b8002 | 537 | md | Markdown | docs/api-reference/classes/NSQuoteEntity/GetUseValuesFromQuote.md | stianol/crmscript | be1ad4f3a967aee2974e9dc7217255565980331e | [
"MIT"
] | null | null | null | docs/api-reference/classes/NSQuoteEntity/GetUseValuesFromQuote.md | stianol/crmscript | be1ad4f3a967aee2974e9dc7217255565980331e | [
"MIT"
] | null | null | null | docs/api-reference/classes/NSQuoteEntity/GetUseValuesFromQuote.md | stianol/crmscript | be1ad4f3a967aee2974e9dc7217255565980331e | [
"MIT"
] | null | null | null | ---
uid: crmscript_ref_NSQuoteEntity_GetUseValuesFromQuote
title: Integer GetUseValuesFromQuote()
intellisense: NSQuoteEntity.GetUseValuesFromQuote
keywords: NSQuoteEntity, GetUseValuesFromQuote
so.topic: reference
---
# Integer GetUseValuesFromQuote()
If true, then the Earning, Earning_Percent and Amount fields are populated from the QuoteVersion.QuoteAlternative (current revision, most-likely alternative).
**Returns:** Integer
```crmscript
NSQuoteEntity thing;
Integer useValuesFromQuote = thing.GetUseValuesFromQuote();
```
| 26.85 | 158 | 0.823091 | yue_Hant | 0.669705 |
c954ec30c211c2daa53b10ec6b965192392e333e | 801 | md | Markdown | README.md | andersro93/School.ICT441.HydroPlotter | 9943c6000e2df6ac10f498c395e5624639f44a78 | [
"MIT"
] | 1 | 2018-05-21T19:40:39.000Z | 2018-05-21T19:40:39.000Z | README.md | andersro93/School.ICT441.HydroPlotter | 9943c6000e2df6ac10f498c395e5624639f44a78 | [
"MIT"
] | null | null | null | README.md | andersro93/School.ICT441.HydroPlotter | 9943c6000e2df6ac10f498c395e5624639f44a78 | [
"MIT"
] | null | null | null | # School.ICT441.SnowDynamicsFrontend
## Project description
This project was created to support a report in ICT441 at the University of Agder. The tool is built around Angular and Plotly to provide a plotting solution for Dynamic Bayesian Networks, in order to get a good overview of the network state and results.
## Project members
* Anders Refsdal Olsen
* Eivind Lindseth
* Erik Vetrhus Mathisen
* Halvor Songøygard Smørvik
* Mikael Antero Paavola
* Nicolas Anderson
## Copyright
This software is free to use by whoever wants to use it; however, the source code is licensed under the MIT license. This means that you are allowed to use the software however you like, but if you make any changes or start selling it, you have to link back to the original code (this repository) and include the authors.
| 50.0625 | 318 | 0.796504 | eng_Latn | 0.999411 |
c9551d4a41d25d3458f4a67e4100fbe5e3fc9593 | 3,230 | md | Markdown | docs/vs-2015/msbuild/createcsharpmanifestresourcename-task.md | gewarren/visualstudio-docs.de-de | ede07b6ad9cde61f6464ce52187c4b7cd84254ac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/msbuild/createcsharpmanifestresourcename-task.md | gewarren/visualstudio-docs.de-de | ede07b6ad9cde61f6464ce52187c4b7cd84254ac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/msbuild/createcsharpmanifestresourcename-task.md | gewarren/visualstudio-docs.de-de | ede07b6ad9cde61f6464ce52187c4b7cd84254ac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: CreateCSharpManifestResourceName-Aufgabe | Microsoft-Dokumentation
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
dev_langs:
- VB
- CSharp
- C++
- jsharp
helpviewer_keywords:
- MSBuild, CreateCSharpManifestResourceName task
- CreateCSharpManifestResourceName task [MSBuild]
ms.assetid: 2ace88c1-d757-40a7-8158-c1d3f5ff0511
caps.latest.revision: 11
author: mikejo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: 32aa7fdb5779d7ae042e8efa9652f25df70a2754
ms.sourcegitcommit: 9ceaf69568d61023868ced59108ae4dd46f720ab
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 10/12/2018
ms.locfileid: "49247844"
---
# <a name="createcsharpmanifestresourcename-task"></a>CreateCSharpManifestResourceName task
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Creates a [!INCLUDE[csprcs](../includes/csprcs-md.md)]-style manifest name from a given .resx file name or other resource.
## <a name="parameters"></a>Parameters
 The following table describes the parameters of the [CreateCSharpManifestResourceName](../msbuild/createcsharpmanifestresourcename-task.md) task.

|Parameter|Description|
|---------------|-----------------|
|`ManifestResourceNames`|Read-only <xref:Microsoft.Build.Framework.ITaskItem> `[]` output parameter.<br /><br /> The resulting manifest names.|
|`ResourceFiles`|Required `String` parameter.<br /><br /> The name of the resource file from which to create the [!INCLUDE[csprcs](../includes/csprcs-md.md)] manifest name.|
|`RootNamespace`|Optional `String` parameter.<br /><br /> The root namespace of the resource file, typically taken from the project file. May be `null`.|
|`PrependCultureAsDirectory`|Optional `Boolean` parameter.<br /><br /> If `true`, the culture name is added as a directory name immediately before the manifest resource name. The default value is `true`.|
|`ResourceFilesWithManifestResourceNames`|Optional read-only `String` output parameter.<br /><br /> Returns the name of the resource file, which now includes the manifest resource name.|
## <a name="remarks"></a>Remarks
 The [CreateVisualBasicManifestResourceName](../msbuild/createvisualbasicmanifestresourcename-task.md) task determines the appropriate manifest resource name to assign to a given .resx or other resource file. The task provides a logical name for a resource file and then appends it to an output parameter as a metadata item.

 In addition to the parameters listed above, this task inherits parameters from the <xref:Microsoft.Build.Tasks.TaskExtension> class, which itself inherits from the <xref:Microsoft.Build.Utilities.Task> class. For a list of these additional parameters and their descriptions, see [TaskExtension Base Class](../msbuild/taskextension-base-class.md).
## <a name="see-also"></a>See also
 [Tasks](../msbuild/msbuild-tasks.md)
 [Task reference](../msbuild/msbuild-task-reference.md)
| 53.833333 | 388 | 0.784211 | deu_Latn | 0.826388 |
c955a765010a1a7507e0ade61c40fd8420e930b6 | 7,771 | md | Markdown | articles/sql-database/sql-database-elastic-scale-manage-credentials.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-elastic-scale-manage-credentials.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-elastic-scale-manage-credentials.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Administración de credenciales en la biblioteca cliente de bases de datos elásticas
description: Cómo configurar el nivel correcto de las credenciales (de administrador a solo lectura) de las aplicaciones de bases de datos elásticas.
services: sql-database
ms.service: sql-database
ms.subservice: scale-out
ms.custom: ''
ms.devlang: ''
ms.topic: conceptual
author: stevestein
ms.author: sstein
ms.reviewer: ''
ms.date: 01/03/2019
ms.openlocfilehash: 91689a32a128584aade8081905e3d1aa3ecb0a97
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 03/27/2020
ms.locfileid: "73823575"
---
# <a name="credentials-used-to-access-the-elastic-database-client-library"></a>Credentials used to access the elastic database client library
The [elastic database client library](sql-database-elastic-database-client-library.md) uses three different kinds of credentials to access the [shard map manager](sql-database-elastic-scale-shard-map-management.md). Depending on the need, use the credential with the lowest level of access possible.
* **Management credentials**: used for creating or manipulating a shard map manager. (See the [glossary](sql-database-elastic-scale-glossary.md).)
* **Access credentials**: used to access an existing shard map manager to obtain information about shards.
* **Connection credentials**: used to connect to shards.
See also [Managing databases and logins in Azure SQL Database](sql-database-manage-logins.md).
## <a name="about-management-credentials"></a>About management credentials
Management credentials are used to create a **ShardMapManager** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager.shardmapmanager), [.NET](https://docs.microsoft.com/dotnet/api/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager)) object for applications that manipulate shard maps. (For example, see [Adding a shard using elastic database tools](sql-database-elastic-scale-add-a-shard.md) and [Data-dependent routing](sql-database-elastic-scale-data-dependent-routing.md).) The user of the elastic scale client library creates the required SQL logins and users, and must make sure they have read/write permissions on the global shard map database and on all shard databases. These credentials are used to maintain the global shard map and the local shard maps when changes to the shard map are made. For instance, use the management credentials to create a shard map manager object (using **GetSqlShardMapManager** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager.shardmapmanagerfactory.getsqlshardmapmanager), [.NET](https://docs.microsoft.com/dotnet/api/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.getsqlshardmapmanager)):
```java
// Obtain a shard map manager.
ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(smmAdminConnectionString,ShardMapManagerLoadPolicy.Lazy);
```
The variable **smmAdminConnectionString** is a connection string that contains the management credentials. The user ID and password provide read/write access to both the shard map database and the individual shards. The management connection string also includes the server name and database name to identify the global shard map database. Here is a typical connection string for that purpose:
```java
"Server=<yourserver>.database.windows.net;Database=<yourdatabase>;User ID=<yourmgmtusername>;Password=<yourmgmtpassword>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;”
```
Do not use values in the form of "username@server"; instead just use the "username" value. This is recommended because credentials must work against both the shard map manager database and the individual shards, which may be on different servers.
## <a name="access-credentials"></a>Access credentials
To create a shard map manager in an application that does not administer shard maps, use credentials that have read-only permissions on the global shard map. The information retrieved from the global shard map under these credentials is used for [data-dependent routing](sql-database-elastic-scale-data-dependent-routing.md) and to populate the shard map cache on the client. The credentials are provided through the same call pattern to **GetSqlShardMapManager**:
```java
// Obtain shard map manager.
ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(smmReadOnlyConnectionString, ShardMapManagerLoadPolicy.Lazy);
```
Note the use of **smmReadOnlyConnectionString** to reflect the use of different credentials for this access on behalf of **non-admin** users: these credentials should not provide write permissions on the global shard map.
## <a name="connection-credentials"></a>Connection credentials
Additional credentials are needed when using the **OpenConnectionForKey** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapper.listshardmapper.openconnectionforkey), [.NET](https://docs.microsoft.com/dotnet/api/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.openconnectionforkey)) method to access a shard associated with a sharding key. These credentials need to provide read-only access to the local shard map tables residing on the shard. This is needed to perform connection validation for data-dependent routing on the shard. This code snippet allows data access in the context of data-dependent routing:
```csharp
using (SqlConnection conn = rangeMap.OpenConnectionForKey<int>(targetWarehouse, smmUserConnectionString, ConnectionOptions.Validate))
```
In this example, **smmUserConnectionString** holds the connection string for the user credentials. For Azure SQL Database, here is a typical connection string for user credentials:
```java
"User ID=<yourusername>; Password=<youruserpassword>; Trusted_Connection=False; Encrypt=True; Connection Timeout=30;”
```
As with the management credentials, do not use values in the form of "username@server". Instead, just use "username". Also note that the connection string does not contain a server name or a database name. That is because the **OpenConnectionForKey** call automatically directs the connection to the correct shard based on the key. Hence, the database name and server name are not provided.
## <a name="see-also"></a>See also
[Managing databases and logins in Azure SQL Database](sql-database-manage-logins.md)
[Securing your SQL Database](sql-database-security-overview.md)
[Elastic Database jobs](elastic-jobs-overview.md)
[!INCLUDE [elastic-scale-include](../../includes/elastic-scale-include.md)]
| 92.511905 | 1,431 | 0.811221 | spa_Latn | 0.965064 |
c956127885e4b6ecd00cc2f6189c8f7d1eb23cd2 | 6,811 | md | Markdown | CHANGELOG.md | lewiuberg/pyconfs | 2325f364b0dfff9bf35618f14d46966a82dfa046 | [
"MIT"
] | 10 | 2020-03-26T06:23:02.000Z | 2022-01-26T20:37:35.000Z | CHANGELOG.md | lewiuberg/pyconfs | 2325f364b0dfff9bf35618f14d46966a82dfa046 | [
"MIT"
] | 7 | 2020-04-23T10:51:37.000Z | 2021-11-22T09:41:52.000Z | CHANGELOG.md | lewiuberg/pyconfs | 2325f364b0dfff9bf35618f14d46966a82dfa046 | [
"MIT"
] | 2 | 2021-01-19T11:23:56.000Z | 2021-09-22T12:34:15.000Z | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.5.5] - 2021-10-20
### Fixed
- `ConfigurationList` objects do not preserve variables ([#20])
## [0.5.4] - 2021-10-18
### Added
- Changelog.
- `.is_list` property identifying `ConfigurationList` objects ([#19])
## [0.5.3] - 2021-10-16
### Added
- Explicit support for Python 3.10 ([#13]).
- [Citation file](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-citation-files) (by [@lewiuberg](https://github.com/lewiuberg/) in [#16]).
- Use GitHub actions for CD ([#18]).
### Fixed
- Enforce keys to be strings when writing to TOML (by [@lewiuberg](https://github.com/lewiuberg/) in [#17]).
- Fix error message in `.as_namedtuple()` for Python 3.10.
## [0.5.2] - 2021-06-08
### Fixed
- Handle nested (list) keys in `.replace()` ([#12]).
## [0.5.1] - 2020-11-28
### Changed
- Use GitHub actions for CI ([#8]).
### Removed
- `.copy_section()` ([#11]).
### Fixed
- Handle `.as_named_tuple()` in Python 3.9.
- Reporting correct name of reader in `.source` ([#9]).
## [0.5.0] - 2020-06-22
### Added
- `.copy_section()` to support default sections ([#7]).
### Changed
- Improved handling of variable interpolation in `.replace()` ([#3]).
### Fixed
- Properly update configuration, also when `cfg.entry = "value"` is used instead of `cfg.update_entry()` ([#4]).
## [0.4.2] - 2020-04-28
### Fixed
- Properly handle updating from nested dictionaries.
## [0.4.1] - 2020-04-20
### Fixed
- `.section_names` include parent names as well. Should just be the name of the actual section.
## [0.4.0] - 2020-04-18
### Added
- Proper support for lists inside configurations ([#1]).
### Changed
- `.leafs`, `.leaf_keys`, `.leaf_values` return lists for consistency with section and entry properties.
## [0.3.3] - 2020-04-05
### Added
- Explicit support for Python 3.9.
- Stand-alone `convert_to()` function exposing converters.
- `pretty_print` parameter when writing to TOML with nicer formatting.
### Changed
- Default `str()` behavior uses TOML format.
- `.get()` with list keys will recursively get an entry from a nested configuration.
### Fixed
- Allow `replace()` to handle non-strings by implictly converting variable values to strings.
## [0.3.2] - 2020-02-19
### Added
- `.leafs`, `.leaf_keys`, `.leaf_values` to iterate over all entries in a nested configuration.
### Changed
- Simplified support of multi-document YAMLs by only reading the first document.
- Ignore environment variables where conversion fails.
## [0.3.1] - 2020-01-24
### Added
- Support encodings in configuration files.
## [0.3.0] - 2020-01-09
### Added
- `.section_items` property to iterate over section names and section content.
- Reading configurations directly from strings.
- Writers for all supported configuration formats (INI, YAML, JSON, TOML).
- `.as_file()` for writing configurations to files.
- `.as_str()` for writing configurations to strings.
### Fixed
- Proper handling of AttributeErrors.
## [0.2.1] - 2019-11-12
### Changed
- Show parents in nested configuration names.
- Allow adding fields when converting to named tuples, dictionaries and strings.
## [0.2.0] - 2019-11-11
### Added
- Convert configurations to named tuples.
- Use named tuples for validation of configuration structure.
## [0.1.4] - 2019-11-04
### Added
- Handle `.cfg` files as INI-files.
### Fixed
- Don't crash when reading environment variables without converters.
## [0.1.3] - 2019-10-29
### Added
- Support JSON files using `json`.
- Support reading from environment variables.
### Changed
- TOML and YAML dependencies are installed as extras.
## [0.1.2] - 2019-10-22
### Added
- Support TOML files using 3rd party `toml`.
- Support YAML files using 3rd party`PyYAML`.
- Add source information to entries.
### Changed
- Only import 3rd party libraries when they are actually used.
## [0.1.1] - 2019-10-17
### Added
- Variable interpolation using `{}` handled by `.replace()`.
- Data type converters when using `.replace()` for entries.
- Support INI files using `configparser`.
### Fixed
- Actually return converted entries.
## [0.1.0] - 2019-10-16
Initial commit.
[Unreleased]: https://github.com/gahjelle/pyconfs/compare/v0.5.5...HEAD
[0.5.5]: https://github.com/gahjelle/pyconfs/compare/v0.5.4-20211018...v0.5.5-20211020
[0.5.4]: https://github.com/gahjelle/pyconfs/compare/v0.5.3-20211016...v0.5.4-20211018
[0.5.3]: https://github.com/gahjelle/pyconfs/compare/v0.5.2-20210608...v0.5.3-20211016
[0.5.2]: https://github.com/gahjelle/pyconfs/compare/v0.5.1-20201128...v0.5.2-20210608
[0.5.1]: https://github.com/gahjelle/pyconfs/compare/v0.5.0-20200622...v0.5.1-20201128
[0.5.0]: https://github.com/gahjelle/pyconfs/compare/v0.4.2-20200428...v0.5.0-20200622
[0.4.2]: https://github.com/gahjelle/pyconfs/compare/v0.4.1-20200420...v0.4.2-20200428
[0.4.1]: https://github.com/gahjelle/pyconfs/compare/v0.4.0-20200418...v0.4.1-20200420
[0.4.0]: https://github.com/gahjelle/pyconfs/compare/v0.3.3-20200405...v0.4.0-20200418
[0.3.3]: https://github.com/gahjelle/pyconfs/compare/v0.3.2-20200219...v0.3.3-20200405
[0.3.2]: https://github.com/gahjelle/pyconfs/compare/v0.3.1...v0.3.2-20200219
[0.3.1]: https://github.com/gahjelle/pyconfs/compare/v0.3.0...v0.3.1
[0.3.0]: https://github.com/gahjelle/pyconfs/compare/v0.2.1...v0.3.0
[0.2.1]: https://github.com/gahjelle/pyconfs/compare/v0.2.0...v0.2.1
[0.2.0]: https://github.com/gahjelle/pyconfs/compare/v0.1.4...v0.2.0
[0.1.4]: https://github.com/gahjelle/pyconfs/compare/v0.1.3...v0.1.4
[0.1.3]: https://github.com/gahjelle/pyconfs/compare/v0.1.2...v0.1.3
[0.1.2]: https://github.com/gahjelle/pyconfs/compare/v0.1.1...v0.1.2
[0.1.1]: https://github.com/gahjelle/pyconfs/compare/v0.1.0...v0.1.1
[0.1.0]: https://github.com/gahjelle/pyconfs/releases/tag/v0.1.0
[#20]: https://github.com/gahjelle/pyconfs/pull/20
[#19]: https://github.com/gahjelle/pyconfs/pull/19
[#18]: https://github.com/gahjelle/pyconfs/pull/18
[#17]: https://github.com/gahjelle/pyconfs/pull/17
[#16]: https://github.com/gahjelle/pyconfs/pull/16
[#13]: https://github.com/gahjelle/pyconfs/pull/13
[#12]: https://github.com/gahjelle/pyconfs/pull/12
[#11]: https://github.com/gahjelle/pyconfs/pull/11
[#9]: https://github.com/gahjelle/pyconfs/pull/9
[#8]: https://github.com/gahjelle/pyconfs/pull/8
[#7]: https://github.com/gahjelle/pyconfs/pull/7
[#4]: https://github.com/gahjelle/pyconfs/pull/4
[#3]: https://github.com/gahjelle/pyconfs/pull/3
[#1]: https://github.com/gahjelle/pyconfs/pull/1
| 26.297297 | 214 | 0.693731 | eng_Latn | 0.553302 |
c95672ca3643162909bc9c5fba3c6b07fe8aedca | 1,752 | md | Markdown | README.md | matthew-stay/scryer | 771c75e0ef73f62060414b8c74488606d439d4c2 | [
"MIT"
] | 1 | 2015-03-21T17:02:19.000Z | 2015-03-21T17:02:19.000Z | README.md | matthew-stay/scryer | 771c75e0ef73f62060414b8c74488606d439d4c2 | [
"MIT"
] | null | null | null | README.md | matthew-stay/scryer | 771c75e0ef73f62060414b8c74488606d439d4c2 | [
"MIT"
] | null | null | null | # scryer
> A random generator for interpretive word combinations.
## Getting Started
Scryer is built purely in javascript and can be run locally on a simple http server - in terminal run:
<code>$ python -m SimpleHTTPServer</code>
Scryer utilizes a library of common terms which are split into an <code>array()</code>. Generating returns a pair of these terms.
## Why
Scryer is built to invite opportunities for inspiration by introducing an element of chance. Reacting to chance operations can lead to new avenues of thinking.
When generating new terms, consider:
* Are the terms related or different? Is the pair better or worse because of it?
* What narrative is occurring between the word pairs?
* Does the meaning change if terms are flipped?
## History
### The I Ching

John Cage was known to utilize the [I Ching]() when developing his compositions. Commonly used as an ancient form of Chinese divination, Cage saw the I Ching as a tool for composing using chance. He could remove intention from the work and rely on methods of divination to guide his sound.
### Geomantic Devices

The I Ching isn't the only tool of this sort. Ancient Islamic inventors developed advanced instruments which were used to attain divine knowledge. Various dials could be turned which would result in a series of random patterns that could be interpreted.
=======
### Digital Geomancy
Scryer sets out to be a sort of geomantic instrument for inspiration - less divine, more digital. By interpreting the relationship of random variables, we gain a sense of new understanding. This observation of chance operations become a mirror for us to see ourselves.
| 41.714286 | 290 | 0.775114 | eng_Latn | 0.999194 |
c956e1149cc46c29d253cdb152591ea6f18bd2df | 29 | md | Markdown | README.md | MenchiG/notebook | 924349cb4b1f790ad9016c59271d43fea00d1229 | [
"MIT"
] | null | null | null | README.md | MenchiG/notebook | 924349cb4b1f790ad9016c59271d43fea00d1229 | [
"MIT"
] | null | null | null | README.md | MenchiG/notebook | 924349cb4b1f790ad9016c59271d43fea00d1229 | [
"MIT"
] | null | null | null | # notebook
Menchi's notebook
| 9.666667 | 17 | 0.793103 | eng_Latn | 0.843503 |
c9570641f413d1b64a505e43e2e1ac87dee11a92 | 6,872 | md | Markdown | zwave/de_DE/philio.pst02c_-_3_en_1_Ouverture.md | Mav3656/documentation | d8769915ac8ccc2338bc6ffcf37ae409f692b79f | [
"MIT"
] | 38 | 2016-11-01T15:39:00.000Z | 2020-05-22T20:53:49.000Z | zwave/de_DE/philio.pst02c_-_3_en_1_Ouverture.md | Mav3656/documentation | d8769915ac8ccc2338bc6ffcf37ae409f692b79f | [
"MIT"
] | 22 | 2016-11-02T09:38:18.000Z | 2018-05-16T13:05:05.000Z | zwave/de_DE/philio.pst02c_-_3_en_1_Ouverture.md | Mav3656/documentation | d8769915ac8ccc2338bc6ffcf37ae409f692b79f | [
"MIT"
] | 105 | 2016-10-17T16:28:25.000Z | 2020-05-12T15:52:29.000Z | Philio PST02 C - 3 en 1 Ouverture
=================================
\
- **The module**
\

\
- **As seen in Jeedom**
\

\
Summary
------
\
The ZIP-PSM01 detector offers 3 different functions: opening detection,
temperature sensor, and luminosity sensor. It is made up of two parts: a
detector and a magnet. They are designed to be placed on a door or a
window, with the magnet attached to the moving part and the detector on
the fixed part.
Opening the door or the window moves the magnet away from the detector,
which triggers the detector and makes it send a Z-Wave alarm signal if
the system is armed (this signal can be used by a siren or by a home
automation box, for example). The sensor can also be used for automatic
lighting control, depending on the luminosity level. For example, the
sensor will send a signal to a Z-Wave switch to turn the light on when
the door opens and the room is dark.
The detector also reports the luminosity and the temperature, either
when a significant change occurs or every time an opening/closing is
detected.
A Z-Wave controller (remote control, dongle, etc.) is required in order
to add this detector to your network if you already have an existing
network.
\
Features
---------
\
- 3-in-1 detector: opening, temperature, light
- Uses the recent Z-Wave 400 series chip to support multi-channel
    operations and a higher data rate (9.6/40/100 kbps)
- Uses the Z-Wave SDK 6.02
- Optimized antenna range
- Suitable for home automation or security applications
- Button to include/exclude the detector
- Tamper protection
- Low battery indication
- Small, discreet and good-looking
- Easy to use and install
\
Technical characteristics
---------------------------
\
- Module type: Z-Wave transmitter
- Power supply: 1 x 3V CR123A battery
- Battery life: 3 years (at 14 triggers per day)
- Frequency: 868.42 MHz
- Transmission range: 30 m indoors
- Temperature sensor: -10 to 70° C
- Luminosity sensor: 0 to 500 lux
- Dimensions:
    - Detector: 28 x 96 x 23 mm
    - Magnet: 10 x 50 x 12 mm
- Weight: 52 g
- Operating temperature: -10 to 40° C
- Operating humidity: 85% RH max
- CE standard: EN300 220-1
- Z-Wave certification: ZC08-13050003
\
Module data
-----------------
\
- Brand: Philio Technology Corporation
- Name: PST02-C Door/Window 3 in 1 sensor
- Manufacturer ID: 316
- Product type: 2
- Product ID: 14
\
Configuration
-------------
\
To configure the OpenZwave plugin and learn how to put Jeedom in
inclusion mode, refer to this
[documentation](https://jeedom.fr/doc/documentation/plugins/openzwave/fr_FR/openzwave.html).
\
> **Important**
>
> To put this module in inclusion mode, press the inclusion button 3
> times, as described in its paper documentation.
\

\
Once included, you should get something like this:
\

\
### Commands
\
Once the module has been recognized, the commands associated with the
module will be available.
\

\
Here is the list of commands:
\
- Opening: the command that reports an opening detection
- Temperature: the command that reports the temperature
- Luminosity: the command that reports the luminosity
- Battery: the battery command
\
### Module configuration
\
> **Important**
>
> On a first inclusion, always wake the module up right after the
> inclusion.
\
Then, if you want to configure the module to match your installation,
use the "Configuration" button of the Jeedom OpenZwave plugin.
\

\
You will arrive on this page (after clicking on the parameters tab)
\


![Paramètres_2](../images/philio.pst02c_parametres2.png)
![Paramètres_3](../images/philio.pst02c_parametres3.png)
\
Parameter details:
\
- 2: sets the signal sent to the modules in association group 2
- 4: sets the luminosity level from which the signal defined in
    parameter 2 is sent to the modules associated with group 2
- 5: operating mode (refer to the manufacturer's documentation).
    Recommended value: 8
- 6: multi-sensor operating mode (refer to the manufacturer's
    documentation). Recommended value: 4
- 7: customized multi-sensor operating mode (refer to the
    manufacturer's documentation). Recommended value: 20 (so that
    opening detection works)
- 9: defines after how long the OFF signal is sent to the modules
    associated with group 2
- 10: defines the interval between two battery reports (one unit =
    parameter 20)
- 11: defines the interval between two automatic opening reports (one
    unit = parameter 20)
- 12: defines the interval between two automatic luminosity reports
    (one unit = parameter 20). Recommended value: 3
- 13: defines the interval between two automatic temperature reports
    (one unit = parameter 20). Recommended value: 2
- 20: duration of one unit interval for parameters 10 to 13.
    Recommended value: 10
- 21: temperature variation in °F that triggers a report
- 22: luminosity variation in % that triggers a report. Recommended
    value: 10
\
### Groups
\
This module has two association groups; only the first one is
required.
\

\
Good to know
------------
\
### Alternative look
\

\
Wakeup
------
\
There is one and only one way to wake this module up:
- release the tamper button and press it again
\
F.A.Q.
------
\
This module is woken up by pressing its tamper button.
\
This module is battery powered, so any new configuration will be taken
into account at the next wakeup.
\
Important note
---------------
\
> **Important**
>
> The module must be woken up: after its inclusion, after a
> configuration change, after a wakeup change, and after a change of
> the association groups
\
**@sarakha63**
| 20.211765 | 92 | 0.708964 | fra_Latn | 0.982392 |
c957575b2e9c298c297daa332b2802bf2cd4381b | 1,049 | md | Markdown | author.md | alterdevteam/alterdevteam.github.io | 4278585ff91097cc57ddb8179a45e7e52a6ec6b6 | [
"MIT"
] | null | null | null | author.md | alterdevteam/alterdevteam.github.io | 4278585ff91097cc57ddb8179a45e7e52a6ec6b6 | [
"MIT"
] | null | null | null | author.md | alterdevteam/alterdevteam.github.io | 4278585ff91097cc57ddb8179a45e7e52a6ec6b6 | [
"MIT"
] | null | null | null | ---
layout: article
titles:
# @start locale config
en : &EN About the author
en-GB : *EN
en-US : *EN
en-CA : *EN
en-AU : *EN
zh-Hans : &ZH_HANS 关于
zh : *ZH_HANS
zh-CN : *ZH_HANS
zh-SG : *ZH_HANS
zh-Hant : &ZH_HANT 關於
zh-TW : *ZH_HANT
zh-HK : *ZH_HANT
ko : &KO 소개
ko-KR : *KO
fr : &FR À propos
fr-BE : *FR
fr-CA : *FR
fr-CH : *FR
fr-FR : *FR
fr-LU : *FR
# @end locale config
key: page-about
---
## About the developer
The main developer and the author of the idea is James Aaron Erang, an 18-year-old high school student from the Philippines. He has extensive experience with programming, computers, and operating systems, especially GNU/Linux. He is also a student researcher in the field of Biology and applies computer science in his research studies.
[ResearchGate](https://www.researchgate.net/profile/James-Aaron-Erang) | [ORCID](https://www.researchgate.net/profile/James-Aaron-Erang) | [email protected]
| 30.852941 | 352 | 0.64347 | eng_Latn | 0.832705 |
c957ac4a94eed57e2370300bdc936b2e44282eb0 | 1,307 | md | Markdown | README.md | dc-ken-jiu/dc-utils | 27deb5876c2b76f5547466fcd0da3e324ce1af8d | [
"MIT"
] | null | null | null | README.md | dc-ken-jiu/dc-utils | 27deb5876c2b76f5547466fcd0da3e324ce1af8d | [
"MIT"
] | null | null | null | README.md | dc-ken-jiu/dc-utils | 27deb5876c2b76f5547466fcd0da3e324ce1af8d | [
"MIT"
] | 3 | 2021-02-02T07:55:48.000Z | 2021-02-05T03:11:07.000Z | ## dc-utils
A work-in-progress library of commonly used utility methods.
### Date and time
#### Example
``` js
import { utils } from "@dc/utils"
utils.formatDate(new Date())
```
#### formatDate
`formatDate(date: Date): string` takes a `Date` argument and returns a string in `YYYY-MM-DD` format
#### getToday
`getToday(): string` returns today's date in `YYYY-MM-DD` format
#### getTomorrow
`getTomorrow(): string` returns tomorrow's date in `YYYY-MM-DD` format
#### getMonthDays
`getMonthDays(y: number, m: number): number` takes two `number` arguments representing the year and the month, and returns the number of days in that month as a `number`
#### getThisWeek
`getThisWeek(): string[]` returns the start and end dates of the current week in `['YYYY-MM-DD','YYYY-MM-DD']` format
#### getThisMonth
`getThisMonth(): string[]` returns the start and end dates of the current month in `['YYYY-MM-DD','YYYY-MM-DD']` format
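For illustration, a short usage sketch based on the signatures above (the return values are hypothetical examples, and a 1-based month is assumed for `getMonthDays`):
``` js
import { utils } from "@dc/utils"

utils.getToday()              // e.g. "2021-02-05"
utils.getTomorrow()           // e.g. "2021-02-06"
utils.getMonthDays(2021, 2)   // e.g. 28 (assuming a 1-based month)
utils.getThisWeek()           // e.g. ["2021-02-01", "2021-02-07"]
utils.getThisMonth()          // e.g. ["2021-02-01", "2021-02-28"]
```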
### EventBus (event bus)
#### Example
``` js
import { EventBus } from "@dc/utils";
let eventBus = new EventBus();
let fun1 = function () {
console.log(1)
}
eventBus.addListener("type1",fun1)
eventBus.dispatchListener("type1") // 1
eventBus.removeListener("type1",fun1)
let fun2 = function (v) {
console.log(v)
}
eventBus.addListener("type2",fun2)
eventBus.dispatchListener("type2","this is a parameter") // this is a parameter
eventBus.removeListener("type2")
```
#### addListener
`addListener (type:String, cb:Function)` subscribes a callback to an event
#### dispatchListener
`dispatchListener (type:String, params:any)` triggers an event
#### removeListener
`removeListener (type:String, cb:Function)` unregisters an event listener; when cb is not passed, all callbacks registered for that event are removed
| 16.3375 | 82 | 0.692425 | yue_Hant | 0.458581 |
c95843604c8a9d69726aaee7d07687ce73fc9734 | 3,688 | md | Markdown | website/translated_docs/de/Events/onHeader.md | Sieste68/docs | 63c06aaa9f06de535d3943294aca4a09fdac454a | [
"CC-BY-4.0"
] | 4 | 2020-05-11T16:14:13.000Z | 2021-11-16T10:52:56.000Z | website/translated_docs/de/Events/onHeader.md | Sieste68/docs | 63c06aaa9f06de535d3943294aca4a09fdac454a | [
"CC-BY-4.0"
] | 44 | 2019-05-29T08:56:43.000Z | 2022-03-18T14:00:51.000Z | website/translated_docs/de/Events/onHeader.md | Sieste68/docs | 63c06aaa9f06de535d3943294aca4a09fdac454a | [
"CC-BY-4.0"
] | 24 | 2019-04-24T12:20:49.000Z | 2021-09-09T17:46:26.000Z | ---
id: onHeader
title: On Header
---
| Code | Can be called by | Definition |
| ---- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- |
| 5 | [4D Write Pro area](FormObjects/writeProArea_overview) - [Button](FormObjects/button_overview.md) - [Button Grid](FormObjects/buttonGrid_overview.md) - [Check Box](FormObjects/checkbox_overview.md) - [Dropdown list](FormObjects/dropdownList_Overview.md) - Form (list form only) - [Hierarchical List](FormObjects/list_overview.md#overview) - [Input](FormObjects/input_overview.md) - [Picture Button](FormObjects/pictureButton_overview.md) - [Picture Pop up menu](FormObjects/picturePopupMenu_overview.md) - [Plug-in Area](FormObjects/pluginArea_overview.md#overview) - [Progress Indicators](FormObjects/progressIndicator.md) - [Radio Button](FormObjects/radio_overview.md) - [Ruler](FormObjects/ruler.md) - [Spinner](FormObjects/spinner.md) - [Splitter](FormObjects/splitters.md) - [Stepper](FormObjects/stepper.md) - [Tab control](FormObjects/tabControl.md) | The form's header area is about to be printed or displayed. |
## Description
The `On Header` event is called when a record is about to be displayed in a list form displayed via `DISPLAY SELECTION` and `MODIFY SELECTION`.
> This event cannot be selected for project forms, it is only available with **table forms**.
In this context, the following sequence of calls to methods and form events is triggered:
- For each object in the header area:
- Object method with `On Header` event
- Form method with `On Header` event
> Printed records are handled using the [`On Display Detail`](onDisplayDetail.md) event.
Calling a 4D command that displays a dialog box from the `On Header` event is not allowed and will cause a syntax error to occur. More particularly, the commands concerned are: `ALERT`, `DIALOG`, `CONFIRM`, `Request`, `ADD RECORD`, `MODIFY RECORD`, `DISPLAY SELECTION`, and `MODIFY SELECTION`. | 147.52 | 930 | 0.385575 | eng_Latn | 0.537502 |
c958a55da911df9ebb6aaf451f49214c509e9e08 | 2,590 | md | Markdown | indicators/Alligator/README.md | SnnK/Stock.Indicators | e6cb731c0597090d4b899fc1599e29ca6478f3a0 | [
"Apache-2.0"
] | 1 | 2021-07-30T03:10:25.000Z | 2021-07-30T03:10:25.000Z | indicators/Alligator/README.md | SnnK/Stock.Indicators | e6cb731c0597090d4b899fc1599e29ca6478f3a0 | [
"Apache-2.0"
] | 7 | 2021-11-01T07:03:36.000Z | 2022-03-01T07:04:42.000Z | indicators/Alligator/README.md | SnnK/Stock.Indicators | e6cb731c0597090d4b899fc1599e29ca6478f3a0 | [
"Apache-2.0"
] | null | null | null | # Williams Alligator
Created by Bill Williams, Alligator is a depiction of three smoothed moving averages of median price, showing chart patterns that compared to an alligator's feeding habits when describing market movement. The moving averages are known as the Jaw, Teeth, and Lips, which are calculated using specific lookback and offset periods. See also the [Gator Oscillator](../Gator/README.md#content).
[[Discuss] :speech_balloon:](https://github.com/DaveSkender/Stock.Indicators/discussions/385 "Community discussion about this indicator")

```csharp
// usage
IEnumerable<AlligatorResult> results =
Indicator.GetAlligator(history);
```
## Parameters
| name | type | notes
| -- |-- |--
| `history` | IEnumerable\<[TQuote](../../docs/GUIDE.md#historical-quotes)\> | Historical price quotes should have a consistent frequency (day, hour, minute, etc).
### Minimum history requirements
You must supply at least 115 periods of `history`. Since this uses a smoothing technique, we recommend you use at least 265 data points prior to the intended usage date for better precision.
### Internal parameters
This indicator uses fixed interal parameters for the three moving averages of median price `(H+L)/2`.
| SMMA | Lookback | Offset
| -- |-- |--
| Jaw | 13 | 8
| Teeth | 8 | 5
| Lips | 5 | 3
## Response
```csharp
IEnumerable<AlligatorResult>
```
The first 10-20 periods will have `null` values since there's not enough data to calculate. We always return the same number of elements as there are in the historical quotes.
:warning: **Warning**: The first 150 periods will have decreasing magnitude, convergence-related precision errors that can be as high as ~5% deviation in indicator values for earlier periods.
### AlligatorResult
| name | type | notes
| -- |-- |--
| `Date` | DateTime | Date
| `Jaw` | decimal | Alligator's Jaw
| `Teeth` | decimal | Alligator's Teeth
| `Lips` | decimal | Alligator's Lips
## Example
```csharp
// fetch historical quotes from your feed (your method)
IEnumerable<Quote> history = GetHistoryFromFeed("MSFT");
// calculate the Williams Alligator
IEnumerable<AlligatorResult> results = Indicator.GetAlligator(history);
// use results as needed
AlligatorResult result = results.LastOrDefault();
Console.WriteLine("Jaw on {0} was ${1}", result.Date, result.Jaw);
Console.WriteLine("Teeth on {0} was ${1}", result.Date, result.Teeth);
Console.WriteLine("Lips on {0} was ${1}", result.Date, result.Lips);
```
```bash
Jaw on 12/31/2018 was $260.61
Teeth on 12/31/2018 was $252.27
Lips on 12/31/2018 was $243.89
```
| 35 | 390 | 0.732819 | eng_Latn | 0.975084 |
c958ad63a2e5af249634de24be22e74cac9d6395 | 60 | md | Markdown | h5nuvola/README.md | ElettraSciComp/h5nuvola | ebe9ea0847c9191713f890740fdf37ac1de13e61 | [
"MIT"
] | 8 | 2020-03-16T09:42:20.000Z | 2022-02-15T12:47:51.000Z | h5nuvola/README.md | ElettraSciComp/h5nuvola | ebe9ea0847c9191713f890740fdf37ac1de13e61 | [
"MIT"
] | 1 | 2019-12-13T09:57:27.000Z | 2019-12-13T09:57:27.000Z | h5nuvola/README.md | ElettraSciComp/h5nuvola | ebe9ea0847c9191713f890740fdf37ac1de13e61 | [
"MIT"
] | null | null | null | # h5nuvola + VUO
h5nuvola integration with VUO source code. | 20 | 42 | 0.783333 | kor_Hang | 0.480065 |
c958eb27b10d9c586c9fb19a51ace511b2e50c95 | 1,174 | md | Markdown | source/partials/faculty/_tom_hooten.md | chosenvale/chosenvale | c8a208839fbbca82a01d758fc2eb3fed3664261e | [
"MIT"
] | null | null | null | source/partials/faculty/_tom_hooten.md | chosenvale/chosenvale | c8a208839fbbca82a01d758fc2eb3fed3664261e | [
"MIT"
] | null | null | null | source/partials/faculty/_tom_hooten.md | chosenvale/chosenvale | c8a208839fbbca82a01d758fc2eb3fed3664261e | [
"MIT"
] | 1 | 2018-11-02T20:11:49.000Z | 2018-11-02T20:11:49.000Z | Thomas Hooten is Principal Trumpet of the Los Angeles Philharmonic Orchestra, a position which he has held since 2012. Prior to joining the LA Phil, Hooten served as Principal Trumpet in the Atlanta Symphony from 2006-2012 and as Assistant Principal Trumpet with the Indianapolis Symphony. He began his professional career in 2000 with a trumpet/cornet position in “The President’s Own” United States Marine Band in Washington, D.C. He released “Trumpet Call,” his first solo album, in 2011. Tom is currently on the faculty at the University of Southern California (USC) and he also serves on the faculty for the Aspen Music Festival, acting as a guest artist and teacher. While in Atlanta, he shared a studio with his wife, Jennifer Marotta, at Kennesaw State University. Tom travels across the world as a soloist and clinician, and he is currently active in the Los Angeles studio scene. A native of Tampa, Florida, he earned his Bachelor of Music degree from the University of South Florida and his Master of Music degree from Rice University. His primary trumpet teachers have included Armando Ghitalla, John Hagstrom, and Don Owen. Tom is a Yamaha performing artist.
| 587 | 1,173 | 0.805792 | eng_Latn | 0.999858 |
c9593e029355031f876549e6342e0bd9e1aa1c92 | 6,042 | md | Markdown | docs/report/dashboards/add-widget-to-dashboard.md | zbecknell/vsts-docs | 98a46a7ed88d80a27dc6bbebc86c28ca67962481 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/report/dashboards/add-widget-to-dashboard.md | zbecknell/vsts-docs | 98a46a7ed88d80a27dc6bbebc86c28ca67962481 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/report/dashboards/add-widget-to-dashboard.md | zbecknell/vsts-docs | 98a46a7ed88d80a27dc6bbebc86c28ca67962481 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Add a widget to a team dashboard in VSTS or TFS
description: Choose and configure widgets that you add to a team dashboard
ms.prod: vs-devops-alm
ms.technology: vs-devops-reporting
ms.assetid: 0869DB42-6983-49A2-855C-2678CFFF4967
ms.manager: douge
ms.author: kaelli
ms.topic: get-started-article
ms.date: 08/22/2017
---
# Add widgets to a dashboard
<b>VSTS | TFS 2018 | TFS 2017</b>
Widgets smartly format data to provide access to easily consumable data. You add widgets to your team dashboards to gain visibility into the status and trends occurring as you develop your software project.
Each widget provides access to a chart, user-configurable information, or a set of links that open a feature or function.
You can add one or more charts or widgets to your dashboard. You add several widgets at a time simply by selecting each one. See [Manage dashboards](dashboards.md#manage) to determine the permissions you need to add and remove widgets from a dashboard.
## Connect to the web portal for your team project
To add a widget to a dashboard, you connect to your team project using a [supported web browser](../../tfs-server/requirements.md#supported-browsers). If you don't have a team project yet, create one in [VSTS](../../accounts/create-account-msa-or-work-student.md)<!--- or set one up in an [on-premises TFS](../../accounts/create-team-project.md)-->.
Open a browser window and click the **Dashboards** hub. If you haven't been added as a team member, [get added now](../../work/scale/multiple-teams.md#add-team-members).

If you don't see the team or team project you want, click the  project icon to [browse all team projects and teams](../../user-guide/account-home-pages.md).
>[!NOTE]
><b>Feature availability: </b> You can access the [widget catalog](widget-catalog.md) from the web portal for VSTS or TFS 2015.1 or later version. The widget catalog provides widgets for all tiles supported in previous releases of TFS for the team homepage. For on-premises TFS 2015, you can add select charts to the team home page using the [Pin to home page](team-dashboard.md) feature.
>
>To determine the platform and version you're on, see [Provide product and content feedback, Platforms and version support](../../user-guide/provide-feedback.md#platform-version).
## Add a widget to a dashboard
Click  to modify a dashboard. Click  to add a widget to the dashboard.
The [widget catalog](widget-catalog.md) describes all the available widgets, many of which are scoped to the selected team context.
>[!NOTE]
><b>Feature availability: </b>For VSTS and TFS 2017 and later versions, you can drag and drop a widget from the catalog onto the dashboard.
>
> Widget images may vary depending on which platform you access. This topic shows images that appear in VSTS. However, the widget title and functionality described in this topic are valid for both VSTS and TFS. For example, dashboard edit mode controls shown below are valid for VSTS and TFS 2015.2 and later version. Some functionality differs when you connect to TFS 2015.1 or earlier versions.
## Configure a widget
To configure a widget, add the widget to a dashboard and then click the  configure icon.

Click the delete icon to remove the tile from the dashboard.
Once you've configured the widget, you can edit it by opening the actions menu.
<img src="_img/add-widget-configure.png" alt="Edit configured widget " style="border: 2px solid #C3C3C3;" />
## Move or delete a widget from a dashboard
>[!NOTE]
>Just as you have to be a team or project admin to add items to a dashboard, you must have admin permissions to remove items.
Click  to modify your dashboard. You can then drag tiles to reorder their sequence on the dashboard.
To remove a widget, click the widget's  or  delete icons.
When you're finished with your changes, click  to exit dashboard editing.
## Copy a widget
>[!NOTE]
>**Feature availability:** This feature is only available from VSTS.
To copy a configured widget to another team dashboard, click the  actions icon and select **Add to dashboard**.
<img src="_img/dashboards-copy-widget.png" alt="Copy a widget to another team dashboard" style="border: 2px solid #C3C3C3;" />
## Try this next
> [!div class="nextstepaction"]
> [Review the widget catalog](widget-catalog.md)
> or
> [Review Marketplace widgets](https://marketplace.visualstudio.com/search?term=widget&target=VSTS&category=All%20categories&sortBy=Relevance)
### Extensibility
In addition to the widgets described in the Widget catalog, you can create your own widgets using the [Widget REST APIs](../../extend/develop/add-dashboard-widget.md).
### Widget size
Some widgets are pre-sized and can't be changed. Others are configurable through their configuration dialog.
For example, the Chart for work items widget allows you to select an area size ranging from 2 x 2 to 4 x 4 (tiles).
<img src="_img/add-widget-size.png" alt="Change widget size" style="border: 2px solid #C3C3C3;" />
### Disabled Marketplace widget
If your account or project collection administrator disables a marketplace widget, you'll see the following image:
<img src="_img/widget-catalog-disabled-widget.png" alt="Disabled widget extension notification" style="border: 2px solid #C3C3C3;" />
To regain access to it, request your admin to reinstate or reinstall the widget. | 53.469027 | 398 | 0.753393 | eng_Latn | 0.98992 |
c95a7e497af1243fc929937e8a5cfc9d028ad382 | 9,840 | md | Markdown | docs/FipeApi.md | parallelum/fipe-go | aad8394a7432cf2e3d7f203cbd0cfefbb2058c6c | [
"MIT"
] | null | null | null | docs/FipeApi.md | parallelum/fipe-go | aad8394a7432cf2e3d7f203cbd0cfefbb2058c6c | [
"MIT"
] | null | null | null | docs/FipeApi.md | parallelum/fipe-go | aad8394a7432cf2e3d7f203cbd0cfefbb2058c6c | [
"MIT"
] | null | null | null | # \FipeApi
All URIs are relative to *https://parallelum.com.br/fipe/api/v2*
Method | HTTP request | Description
------------- | ------------- | -------------
[**GetBrandsByType**](FipeApi.md#GetBrandsByType) | **Get** /{vehicleType}/brands | Brands by type
[**GetFipeInfo**](FipeApi.md#GetFipeInfo) | **Get** /{vehicleType}/brands/{brandId}/models/{modelId}/years/{yearId} | Fipe info
[**GetModelsByBrand**](FipeApi.md#GetModelsByBrand) | **Get** /{vehicleType}/brands/{brandId}/models | Models by brand
[**GetReferences**](FipeApi.md#GetReferences) | **Get** /references | Fipe month references
[**GetYearByModel**](FipeApi.md#GetYearByModel) | **Get** /{vehicleType}/brands/{brandId}/models/{modelId}/years | Years for model
## GetBrandsByType
> []NamedCode GetBrandsByType(ctx, vehicleType).Execute()
Brands by type
### Example
```go
package main
import (
"context"
"fmt"
"os"
openapiclient "./openapi"
)
func main() {
vehicleType := openapiclient.VehiclesType("cars") // VehiclesType | Type of vehicle
configuration := openapiclient.NewConfiguration()
api_client := openapiclient.NewAPIClient(configuration)
resp, r, err := api_client.FipeApi.GetBrandsByType(context.Background(), vehicleType).Execute()
if err != nil {
fmt.Fprintf(os.Stderr, "Error when calling `FipeApi.GetBrandsByType``: %v\n", err)
fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
}
// response from `GetBrandsByType`: []NamedCode
fmt.Fprintf(os.Stdout, "Response from `FipeApi.GetBrandsByType`: %v\n", resp)
}
```
### Path Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**vehicleType** | [**VehiclesType**](.md) | Type of vehicle |
### Other Parameters
Other parameters are passed through a pointer to a apiGetBrandsByTypeRequest struct via the builder pattern
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
### Return type
[**[]NamedCode**](NamedCode.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to README]](../README.md)
## GetFipeInfo
> FipeResult GetFipeInfo(ctx, vehicleType, brandId, modelId, yearId).Reference(reference).Execute()
Fipe info
### Example
```go
package main
import (
"context"
"fmt"
"os"
openapiclient "./openapi"
)
func main() {
vehicleType := openapiclient.VehiclesType("cars") // VehiclesType | Type of vehicle
brandId := int32(59) // int32 | Brand of the vehicle
modelId := int32(5940) // int32 | Model of the vehicle
yearId := "2014-3" // string | Year for the vehicle
reference := int32(278) // int32 | Month reference code (optional)
configuration := openapiclient.NewConfiguration()
api_client := openapiclient.NewAPIClient(configuration)
resp, r, err := api_client.FipeApi.GetFipeInfo(context.Background(), vehicleType, brandId, modelId, yearId).Reference(reference).Execute()
if err != nil {
fmt.Fprintf(os.Stderr, "Error when calling `FipeApi.GetFipeInfo``: %v\n", err)
fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
}
// response from `GetFipeInfo`: FipeResult
fmt.Fprintf(os.Stdout, "Response from `FipeApi.GetFipeInfo`: %v\n", resp)
}
```
### Path Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**vehicleType** | [**VehiclesType**](.md) | Type of vehicle |
**brandId** | **int32** | Brand of the vehicle |
**modelId** | **int32** | Model of the vehicle |
**yearId** | **string** | Year for the vehicle |
### Other Parameters
Other parameters are passed through a pointer to a apiGetFipeInfoRequest struct via the builder pattern
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**reference** | **int32** | Month reference code |
### Return type
[**FipeResult**](FipeResult.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to README]](../README.md)
## GetModelsByBrand
> []NamedCode GetModelsByBrand(ctx, vehicleType, brandId).Execute()
Models by brand
### Example
```go
package main
import (
"context"
"fmt"
"os"
openapiclient "./openapi"
)
func main() {
vehicleType := openapiclient.VehiclesType("cars") // VehiclesType | Type of vehicle
brandId := int32(59) // int32 | Brand of the vehicle
configuration := openapiclient.NewConfiguration()
api_client := openapiclient.NewAPIClient(configuration)
resp, r, err := api_client.FipeApi.GetModelsByBrand(context.Background(), vehicleType, brandId).Execute()
if err != nil {
fmt.Fprintf(os.Stderr, "Error when calling `FipeApi.GetModelsByBrand``: %v\n", err)
fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
}
// response from `GetModelsByBrand`: []NamedCode
fmt.Fprintf(os.Stdout, "Response from `FipeApi.GetModelsByBrand`: %v\n", resp)
}
```
### Path Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**vehicleType** | [**VehiclesType**](.md) | Type of vehicle |
**brandId** | **int32** | Brand of the vehicle |
### Other Parameters
Other parameters are passed through a pointer to a apiGetModelsByBrandRequest struct via the builder pattern
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
### Return type
[**[]NamedCode**](NamedCode.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to README]](../README.md)
## GetReferences
> []Reference GetReferences(ctx).Execute()
Fipe month references
### Example
```go
package main
import (
"context"
"fmt"
"os"
openapiclient "./openapi"
)
func main() {
configuration := openapiclient.NewConfiguration()
api_client := openapiclient.NewAPIClient(configuration)
resp, r, err := api_client.FipeApi.GetReferences(context.Background()).Execute()
if err != nil {
fmt.Fprintf(os.Stderr, "Error when calling `FipeApi.GetReferences``: %v\n", err)
fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
}
// response from `GetReferences`: []Reference
fmt.Fprintf(os.Stdout, "Response from `FipeApi.GetReferences`: %v\n", resp)
}
```
### Path Parameters
This endpoint does not need any parameter.
### Other Parameters
Other parameters are passed through a pointer to a apiGetReferencesRequest struct via the builder pattern
### Return type
[**[]Reference**](Reference.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to README]](../README.md)
## GetYearByModel
> []NamedCode GetYearByModel(ctx, vehicleType, brandId, modelId).Execute()
Years for model
### Example
```go
package main
import (
"context"
"fmt"
"os"
openapiclient "./openapi"
)
func main() {
vehicleType := openapiclient.VehiclesType("cars") // VehiclesType | Type of vehicle
brandId := int32(59) // int32 | Brand of the vehicle
modelId := int32(5940) // int32 | Model of the vehicle
configuration := openapiclient.NewConfiguration()
api_client := openapiclient.NewAPIClient(configuration)
resp, r, err := api_client.FipeApi.GetYearByModel(context.Background(), vehicleType, brandId, modelId).Execute()
if err != nil {
fmt.Fprintf(os.Stderr, "Error when calling `FipeApi.GetYearByModel``: %v\n", err)
fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
}
// response from `GetYearByModel`: []NamedCode
fmt.Fprintf(os.Stdout, "Response from `FipeApi.GetYearByModel`: %v\n", resp)
}
```
### Path Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**vehicleType** | [**VehiclesType**](.md) | Type of vehicle |
**brandId** | **int32** | Brand of the vehicle |
**modelId** | **int32** | Model of the vehicle |
### Other Parameters
Other parameters are passed through a pointer to a apiGetYearByModelRequest struct via the builder pattern
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
### Return type
[**[]NamedCode**](NamedCode.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to README]](../README.md)
| 26.24 | 142 | 0.645732 | yue_Hant | 0.322605 |
c95a818de4650c68a0b52c4f63de64e03b49776c | 78 | md | Markdown | README.md | lonniev/snyked-sample | 6d9092f1b858b443c5a60a7bd80e5eae55baf52f | [
"MIT"
] | null | null | null | README.md | lonniev/snyked-sample | 6d9092f1b858b443c5a60a7bd80e5eae55baf52f | [
"MIT"
] | null | null | null | README.md | lonniev/snyked-sample | 6d9092f1b858b443c5a60a7bd80e5eae55baf52f | [
"MIT"
] | null | null | null | # snyked-sample
Helps understand how to configure and use Snyk and Dependabot
| 26 | 61 | 0.820513 | eng_Latn | 0.99739 |
c95ab910afd1f3b6abd015187bd767570cf734de | 1,326 | md | Markdown | _posts/Coding-Test/Python/2021-03-15-[python]Programmers_지형-이동.md | barking-code/barking-code.github.io | e016ec021a9407190bc701be0ed510a9aae31717 | [
"MIT"
] | null | null | null | _posts/Coding-Test/Python/2021-03-15-[python]Programmers_지형-이동.md | barking-code/barking-code.github.io | e016ec021a9407190bc701be0ed510a9aae31717 | [
"MIT"
] | 1 | 2020-12-16T14:16:07.000Z | 2020-12-16T14:16:07.000Z | _posts/Coding-Test/Python/2021-03-15-[python]Programmers_지형-이동.md | barking-code/barking-code.github.io | e016ec021a9407190bc701be0ed510a9aae31717 | [
"MIT"
] | null | null | null | ---
layout: post
title: "[Python]Programmers_지형 이동"
categories:
- Coding-Test
- Python
tags:
- Dijkstra
- Priority Queue
created_at: 2021-03-15T16:08:00+09:00
modified_at: 2021-03-15T16:08:00+09:00
visible: true
---
[Programmers 지형 이동](https://programmers.co.kr/learn/courses/30/lessons/62050)
```python
import heapq

# 4-directional moves
dr = [-1, 0, 1, 0]
dc = [0, 1, 0, -1]

def solution(land, height):
    # Prim/Dijkstra-style sweep with a priority queue: always expand the
    # cheapest way to reach an unvisited cell. Moving is free when the
    # height difference is <= height; otherwise a ladder costing that
    # difference is needed.
    def dijkstra(r, c):
        total = 0
        visit = [[False for _ in range(N)] for _ in range(N)]
        heap = []
        heapq.heappush(heap, (0, r, c))
        while heap:
            cost, r, c = heapq.heappop(heap)
            if visit[r][c]: continue
            visit[r][c] = True
            total += cost  # cost of the edge used to first reach this cell
            for d in range(4):
                tr, tc = r+dr[d], c+dc[d]
                if not (0 <= tr < N and 0 <= tc < N): continue
                if visit[tr][tc]: continue
                diff = abs(land[r][c] - land[tr][tc])
                if diff <= height:
                    heapq.heappush(heap, (0, tr, tc))
                else:
                    heapq.heappush(heap, (diff, tr, tc))
        return total

    N = len(land)
    return dijkstra(0, 0)
```
| 22.1 | 77 | 0.452489 | eng_Latn | 0.489712 |
c95ac5ae24c40dc670b89d346c4e2f8c37ba7950 | 2,563 | md | Markdown | docset/winserver2012-ps/scheduledtasks/ScheduledTasks.md | krishana77/windows-powershell-docs | fb3d3078c447d048106383c2834ee50bb7cdfcbd | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-03-27T20:56:18.000Z | 2019-03-27T20:56:18.000Z | docset/winserver2012-ps/scheduledtasks/ScheduledTasks.md | j0rt3g4/windows-powershell-docs | fb3d3078c447d048106383c2834ee50bb7cdfcbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docset/winserver2012-ps/scheduledtasks/ScheduledTasks.md | j0rt3g4/windows-powershell-docs | fb3d3078c447d048106383c2834ee50bb7cdfcbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
Module Name: ScheduledTasks
Module Guid: 5378EE8E-E349-49BB-83B9-F3D9C396C0A6
Download Help Link: http://go.microsoft.com/fwlink/?LinkId=255117
Help Version: 3.2.0.0
Locale: en-US
ms.assetid: 969AF3A6-0C60-4790-94D9-9E4920052551
manager: dansimp
ms.reviewer:
ms.author: kenwith
author: kenwith
---
# ScheduledTasks Module
## Description
This reference provides cmdlet descriptions and syntax for all Scheduled Tasks cmdlets. It lists the cmdlets in alphabetical order based on the verb at the beginning of the cmdlet.
## ScheduledTasks Cmdlets
### [Disable-ScheduledTask](./Disable-ScheduledTask.md)
Disables a scheduled task.
### [Enable-ScheduledTask](./Enable-ScheduledTask.md)
Enables a scheduled task.
### [Export-ScheduledTask](./Export-ScheduledTask.md)
Exports a scheduled task as an XML string.
### [Get-ClusteredScheduledTask](./Get-ClusteredScheduledTask.md)
Gets clustered scheduled tasks for a failover cluster.
### [Get-ScheduledTask](./Get-ScheduledTask.md)
Gets the task definition object of a scheduled task that is registered on the local computer.
### [Get-ScheduledTaskInfo](./Get-ScheduledTaskInfo.md)
Gets run-time information for a scheduled task.
### [New-ScheduledTask](./New-ScheduledTask.md)
Creates a scheduled task instance.
### [New-ScheduledTaskAction](./New-ScheduledTaskAction.md)
Creates a scheduled task action.
### [New-ScheduledTaskPrincipal](./New-ScheduledTaskPrincipal.md)
Creates an object that contains a scheduled task principal.
### [New-ScheduledTaskSettingsSet](./New-ScheduledTaskSettingsSet.md)
Creates a new scheduled task settings object.
### [New-ScheduledTaskTrigger](./New-ScheduledTaskTrigger.md)
Creates a scheduled task trigger object.
### [Register-ClusteredScheduledTask](./Register-ClusteredScheduledTask.md)
Registers a scheduled task on a failover cluster.
### [Register-ScheduledTask](./Register-ScheduledTask.md)
Registers a scheduled task definition on a local computer.
### [Set-ClusteredScheduledTask](./Set-ClusteredScheduledTask.md)
Changes settings for a clustered scheduled task.
### [Set-ScheduledTask](./Set-ScheduledTask.md)
Modifies a scheduled task.
### [Start-ScheduledTask](./Start-ScheduledTask.md)
Starts one or more instances of a scheduled task.
### [Stop-ScheduledTask](./Stop-ScheduledTask.md)
Stops all running instances of a task.
### [Unregister-ClusteredScheduledTask](./Unregister-ClusteredScheduledTask.md)
Removes a scheduled task from a failover cluster.
### [Unregister-ScheduledTask](./Unregister-ScheduledTask.md)
Unregisters a scheduled task.
| 33.723684 | 180 | 0.784627 | eng_Latn | 0.730527 |
c95c010e470a8523de78ec85345559195b469d82 | 870 | md | Markdown | README.md | WaffleLapkin/opt_reduce | 8611d00daccf1af7d69cb01d2fcb1fe749a1b923 | [
"MIT"
] | 2 | 2021-07-17T11:53:32.000Z | 2021-12-08T12:38:11.000Z | README.md | WaffleLapkin/opt_reduce | 8611d00daccf1af7d69cb01d2fcb1fe749a1b923 | [
"MIT"
] | null | null | null | README.md | WaffleLapkin/opt_reduce | 8611d00daccf1af7d69cb01d2fcb1fe749a1b923 | [
"MIT"
] | null | null | null | <div align="center">
<h1>opt_reduce</h1>
<a href="https://crates.io/crates/opt_reduce">
<img alt="crates.io" src="https://img.shields.io/crates/v/opt_reduce">
</a>
<a href="https://docs.rs/opt_reduce">
<img alt="documentation (docs.rs)" src="https://docs.rs/opt_reduce/badge.svg">
</a>
<a href="LICENSE">
<img alt="LICENSE (MIT)" src="https://img.shields.io/badge/license-MIT-brightgreen.svg">
</a>
</div>
This crate provides a `reduce` function for `Option<_>` that allows to
merge two options together.
This method was previously proposed for addition to `std` two times but both
PRs were closed:
1. [#84695][first PR]
2. [#87036][second PR]
[first PR]: https://github.com/rust-lang/rust/pull/84695
[second PR]: https://github.com/rust-lang/rust/pull/87036
---
```toml
opt_reduce = "1"
```
_Compiler support: requires rustc 1.31+_.
| 27.1875 | 92 | 0.67931 | eng_Latn | 0.460316 |
c95c1391c10baa54cc3299eaaacb3d3d822a83a8 | 510 | md | Markdown | _posts/2020-12-06-scratch-game.md | TheAwesomeCoder05/builds-alt | 95695ad8b572a929241305bbeed57ea0ef9ce1a8 | [
"MIT"
] | null | null | null | _posts/2020-12-06-scratch-game.md | TheAwesomeCoder05/builds-alt | 95695ad8b572a929241305bbeed57ea0ef9ce1a8 | [
"MIT"
] | null | null | null | _posts/2020-12-06-scratch-game.md | TheAwesomeCoder05/builds-alt | 95695ad8b572a929241305bbeed57ea0ef9ce1a8 | [
"MIT"
] | null | null | null | ---
title: "Crack it! - Integers"
date: 2020-12-06
---
This is my Scratch game. You need to solve some math problems related to integers here.
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="stylesheet" href="https://theawesomecoder05.github.io/builds-alt/assets/css/download.css">
<br>
<br>
<br>
<div class="center">
<button onclick="window.location.href='https://bit.ly/3oJYKBp';" class="button"> Check it out!</button>
</div>
| 31.875 | 113 | 0.717647 | eng_Latn | 0.466637 |
c95c754efc945fad6171e1fb8ed8bd9c9e337fa4 | 16,594 | md | Markdown | docs/2014/analysis-services/data-mining/naive-bayes-model-query-examples.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/data-mining/naive-bayes-model-query-examples.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/data-mining/naive-bayes-model-query-examples.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Ejemplos de consultas del modelo Bayes naive | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- analysis-services
ms.topic: conceptual
helpviewer_keywords:
- naive bayes model [Analysis Services]
- naive bayes algorithms [Analysis Services]
- content queries [DMX]
ms.assetid: e642bd7d-5afa-4dfb-8cca-4f84aadf61b0
author: minewiskan
ms.author: owend
manager: craigg
ms.openlocfilehash: 207e8dbd7ea9c0fcb2c453fb6611efcfc4ab7a16
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/02/2018
ms.locfileid: "48060455"
---
# <a name="naive-bayes-model-query-examples"></a>Ejemplos de consultas del modelo Bayes naive
Cuando se crea una consulta en un modelo de minería de datos, puede tratarse de una consulta de contenido, que proporciona detalles sobre las reglas y los conjuntos de elementos detectados durante el análisis, o una consulta de predicción, que usa las asociaciones detectadas en los datos para realizar predicciones. También puede recuperar los metadatos sobre el modelo utilizando una consulta del conjunto de filas de esquema de minería de datos. En esta sección se explica cómo crear estas consultas para los modelos que se basan en el algoritmo Bayes naive de Microsoft.
**Consultas de contenido**
[Obtener metadatos del modelo usando DMX](#bkmk_Query1)
[Recuperar un resumen de los datos de entrenamiento](#bkmk_Query2)
[Buscar más información sobre atributos](#bkmk_Query3)
[Usar procedimientos almacenados del sistema](#bkmk_Query4)
**Consultas de predicción**
[Predecir los resultados utilizando una consulta singleton](#bkmk_Query5)
[Obtener predicciones con valores de probabilidad y compatibilidad](#bkmk_Query6)
[Predecir asociaciones](#bkmk_Query7)
## <a name="finding-information-about-a-naive-bayes-model"></a>Buscar información sobre un modelo Bayes naive
El contenido de un modelo Bayes naive proporciona información agregada sobre la distribución de los valores en los datos de entrenamiento. También puede recuperar la información sobre los metadatos del modelo creando consultas con los conjuntos de filas de esquema de minería de datos.
### <a name="bkmk_Query1"></a> Consulta de ejemplo 1: obtener metadatos del modelo usando DMX
Al consultar el conjunto de filas de esquema de minería de datos, puede buscar los metadatos del modelo. Esto podría incluir cuándo se creó, cuándo se procesó en último lugar, el nombre de la estructura de minería de datos en la que se basa el modelo y el nombre de las columnas que se usan como atributos de predicción. También se pueden devolver los parámetros que se utilizaron cuando se creó el modelo.
```
SELECT MODEL_CATALOG, MODEL_NAME, DATE_CREATED, LAST_PROCESSED,
SERVICE_NAME, PREDICTION_ENTITY, FILTER
FROM $system.DMSCHEMA_MINING_MODELS
WHERE MODEL_NAME = 'TM_NaiveBayes_Filtered'
```
Resultados del ejemplo:
|||
|-|-|
|MODEL_CATALOG|AdventureWorks|
|MODEL_NAME|TM_NaiveBayes_Filtered|
|DATE_CREATED|3/1/2008 19:15|
|LAST_PROCESSED|3/2/2008 20:00|
|SERVICE_NAME|Microsoft_Naive_Bayes|
|PREDICTION_ENTITY|Bike Buyer,Yearly Income|
|FILTER|[Region] = 'Europe' OR [Region] = 'North America'|
El modelo que se usa para este ejemplo está basado en el modelo Bayes naive que se crea en [Basic Data Mining Tutorial](../../tutorials/basic-data-mining-tutorial.md), pero se modificó agregando un segundo atributo de predicción y aplicando un filtro a los datos de entrenamiento.
### <a name="bkmk_Query2"></a> Consulta de ejemplo 2: recuperar un resumen de los datos de entrenamiento
En un modelo Bayes naive, el nodo de estadísticas marginal almacena información agregada sobre la distribución de los valores de los datos de entrenamiento. Este resumen es cómodo y le evita tener que crear consultas SQL con los datos de entrenamiento para encontrar la misma información.
En el ejemplo siguiente se utiliza una consulta de contenido DMX para recuperar los datos del nodo (NODE_TYPE = 24). Dado que las estadísticas están almacenadas en una tabla anidada, la palabra clave FLATTENED se utiliza para facilitar la visualización de los resultados.
```
SELECT FLATTENED MODEL_NAME,
(SELECT ATTRIBUTE_NAME, ATTRIBUTE_VALUE, [SUPPORT], [PROBABILITY], VALUETYPE FROM NODE_DISTRIBUTION) AS t
FROM TM_NaiveBayes.CONTENT
WHERE NODE_TYPE = 26
```
> [!NOTE]
> El nombre de las columnas, SUPPORT y PROBABILITY, debe ir entre corchetes para distinguirlo de las palabras clave reservadas de Expresiones multidimensionales (MDX) con los mismos nombres.
Resultados parciales:
|MODEL_NAME|t.ATTRIBUTE_NAME|t.ATTRIBUTE_VALUE|t.SUPPORT|t.PROBABILITY|t.VALUETYPE|
|-----------------|-----------------------|------------------------|---------------|-------------------|-----------------|
|TM_NaiveBayes|Bike Buyer|Missing|0|0|1|
|TM_NaiveBayes|Bike Buyer|0|8869|0.507263784|4|
|TM_NaiveBayes|Bike Buyer|1|8615|0.492736216|4|
|TM_NaiveBayes|Sexo|Missing|0|0|1|
|TM_NaiveBayes|Sexo|F|8656|0.495081217|4|
|TM_NaiveBayes|Sexo|M|8828|0.504918783|4|
Por ejemplo, estos resultados le indican el número de casos de entrenamiento para cada valor discreto (VALUETYPE = 4), junto con la probabilidad calculada, ajustados para los valores que faltan (VALUETYPE = 1).
Para obtener una definición de los valores proporcionados en la tabla NODE_DISTRIBUTION en un modelo Bayes naive, vea [Contenido del modelo de minería de datos para los modelos Bayes naive (Analysis Services - Minería de datos)](mining-model-content-for-naive-bayes-models-analysis-services-data-mining.md). Para más información sobre cómo afectan los valores que faltan a los cálculos de probabilidad y compatibilidad, vea [Valores ausentes (Analysis Services - Minería de datos)](missing-values-analysis-services-data-mining.md).
### <a name="bkmk_Query3"></a> Consulta de ejemplo 3: buscar más información sobre atributos
Dado que un modelo Bayes naive a menudo contiene información compleja sobre las relaciones entre atributos diferentes, la manera más fácil de ver estas relaciones es utilizar el [Visor Bayes naive de Microsoft](browse-a-model-using-the-microsoft-naive-bayes-viewer.md). Sin embargo, puede crear consultas DMX para devolver los datos.
En el ejemplo siguiente se muestra cómo devolver información del modelo sobre un atributo determinado, `Region`.
```
SELECT NODE_TYPE, NODE_CAPTION,
NODE_PROBABILITY, NODE_SUPPORT, MSOLAP_NODE_SCORE
FROM TM_NaiveBayes.CONTENT
WHERE ATTRIBUTE_NAME = 'Region'
```
Esta consulta devuelve dos tipos de nodos: el nodo que representa el atributo de entrada (NODE_TYPE = 10) y nodos para cada valor del atributo (NODE_TYPE = 11). El título del nodo se utiliza para identificarlo, en lugar del nombre, porque el título muestra tanto el nombre como el valor del atributo.
|NODE_TYPE|NODE_CAPTION|NODE_PROBABILITY|NODE_SUPPORT|MSOLAP_NODE_SCORE|NODE_TYPE|
|----------------|-------------------|-----------------------|-------------------|-------------------------|----------------|
|10|Bike Buyer -> Region|1|17484|84.51555875|10|
|11|Bike Buyer -> Region = Missing|0|0|0|11|
|11|Bike Buyer -> Region = North America|0.508236102|8886|0|11|
|11|Bike Buyer -> Region = Pacific|0.193891558|3390|0|11|
|11|Bike Buyer -> Region = Europe|0.29787234|5208|0|11|
Algunas de las columnas almacenadas en los nodos son las mismas que se pueden obtener de los nodos de estadísticas marginales, como los valores de compatibilidad de los nodos y de puntuación de la probabilidad de los nodos. Sin embargo, MSOLAP_NODE_SCORE es un valor especial que solo se proporciona para los nodos de atributos de entrada e indica la importancia relativa de este atributo en el modelo. Puede ver casi toda esa misma información en el panel Red de dependencia del visor; sin embargo, el visor no proporciona puntuaciones.
La consulta siguiente devuelve las puntuaciones de importancia de todos los atributos del modelo:
```
SELECT NODE_CAPTION, MSOLAP_NODE_SCORE
FROM TM_NaiveBayes.CONTENT
WHERE NODE_TYPE = 10
ORDER BY MSOLAP_NODE_SCORE DESC
```
Resultados del ejemplo:
|NODE_CAPTION|MSOLAP_NODE_SCORE|
|-------------------|-------------------------|
|Bike Buyer -> Total Children|181.3654836|
|Bike Buyer -> Commute Distance|179.8419482|
|Bike Buyer -> English Education|156.9841928|
|Bike Buyer -> Number Children At Home|111.8122599|
|Bike Buyer -> Region|84.51555875|
|Bike Buyer -> Marital Status|23.13297354|
|Bike Buyer -> English Occupation|2.832069191|
Al examinar el contenido del modelo en el [Visor de árbol de contenido genérico de Microsoft](browse-a-model-using-the-microsoft-generic-content-tree-viewer.md), se hará una mejor idea de qué estadísticas podrían ser interesantes. Aquí se demostraron algunos ejemplos sencillos; más a menudo puede que tenga que ejecutar varias consultas o almacenar los resultados y procesarlos en el cliente.
### <a name="bkmk_Query4"></a> Consulta de ejemplo 4: usar procedimientos almacenados del sistema
Para explorar los resultados, puede utilizar algunos procedimientos almacenados de sistema de Analysis Services además de escribir sus propias consultas de contenido. Para utilizar un procedimiento almacenado de sistema, anteponga al nombre del procedimiento almacenado la palabra clave CALL:
```
CALL GetPredictableAttributes ('TM_NaiveBayes')
```
Resultados parciales:
|ATTRIBUTE_NAME|NODE_UNIQUE_NAME|
|---------------------|------------------------|
|Bike Buyer|100000001|
> [!NOTE]
> Estos procedimientos almacenados de sistema son para la comunicación interna entre el servidor de Analysis Services y el cliente, y solamente se utilizan por comodidad al desarrollar y probar los modelos de minería de datos. Al crear consultas para un sistema de producción, siempre debería escribir sus consultas utilizando DMX.
Para más información sobre los procedimientos almacenados del sistema de Analysis Services, vea [Procedimientos almacenados de minería de datos (Analysis Services - Minería de datos)](/sql/analysis-services/data-mining/data-mining-stored-procedures-analysis-services-data-mining).
## <a name="using-a-naive-bayes-model-to-make-predictions"></a>Usar un modelo Bayes naive para realizar predicciones
El algoritmo Bayes naive de Microsoft se suele utilizar menos para la predicción que para la exploración de relaciones entre los atributos de predicción y la entrada. Sin embargo, el modelo admite el uso de funciones de predicción tanto para predicción como para asociación.
### <a name="bkmk_Query5"></a> Consulta de ejemplo 5: predecir los resultados utilizando una consulta singleton
La consulta siguiente utiliza una consulta singleton para proporcionar un nuevo valor y predecir, según el modelo, si es probable que un cliente con estas características compre una bicicleta. La manera más fácil de crear una consulta singleton en un modelo de regresión es usar el cuadro de diálogo **Entrada de consulta singleton** . Por ejemplo, puede generar la consulta DMX siguiente seleccionando el modelo `TM_NaiveBayes` , eligiendo **Consulta singleton**y seleccionando los valores en las listas desplegables para `[Commute Distance]` y `Gender`.
```
SELECT
Predict([TM_NaiveBayes].[Bike Buyer])
FROM
[TM_NaiveBayes]
NATURAL PREDICTION JOIN
(SELECT '5-10 Miles' AS [Commute Distance],
'F' AS [Gender]) AS t
```
Resultados del ejemplo:
|Expresión|
|----------------|
|0|
La función de predicción devuelve el valor más probable, en este caso 0, que significa que es improbable que este tipo de cliente compre una bicicleta.
### <a name="bkmk_Query6"></a> Consulta de ejemplo 6: obtener predicciones con valores de probabilidad y compatibilidad
Además de predecir un resultado, a menudo desea conocer la precisión de la predicción. La consulta siguiente usa la misma consulta singleton que el ejemplo anterior, pero agrega la función de predicción [PredictHistogram (DMX)](/sql/dmx/predicthistogram-dmx) para devolver una tabla anidada que contiene las estadísticas de la compatibilidad de la predicción.
```
SELECT
Predict([TM_NaiveBayes].[Bike Buyer]),
PredictHistogram([TM_NaiveBayes].[Bike Buyer])
FROM
[TM_NaiveBayes]
NATURAL PREDICTION JOIN
(SELECT '5-10 Miles' AS [Commute Distance],
'F' AS [Gender]) AS t
```
Resultados del ejemplo:
|Bike Buyer|$SUPPORT|$PROBABILITY|$ADJUSTEDPROBABILITY|$VARIANCE|$STDEV|
|----------------|--------------|------------------|--------------------------|---------------|------------|
|0|10161.5714|0.581192599|0.010530981|0|0|
|1|7321.428768|0.418750215|0.008945684|0|0|
||0.999828444|5.72E-05|5.72E-05|0|0|
La fila final en la tabla muestra los ajustes para la compatibilidad y la probabilidad del valor que falta. Los valores de la desviación estándar y la varianza siempre son 0, porque los modelos Bayes naive no pueden modelar los valores continuos.
### <a name="bkmk_Query7"></a> Consulta de ejemplo 7: predecir las asociaciones
El algoritmo Bayes naive de Microsoft se puede utilizar para el análisis de la asociación, si la estructura de minería de datos contiene una tabla anidada con el atributo de predicción como clave. Por ejemplo, podría crear un modelo Bayes naive al usar la estructura de minería de datos creada en [Lección 3: Generar un escenario de cesta de la compra (Tutorial intermedio de minería de datos)](../../tutorials/lesson-3-building-a-market-basket-scenario-intermediate-data-mining-tutorial.md) del tutorial de minería de datos. El modelo utilizado en este ejemplo se modificó para agregar información sobre los ingresos y la región del cliente en la tabla de casos.
En el ejemplo de consulta siguiente se muestra una consulta singleton que predice los productos que están relacionados con las compras del producto, `'Road Tire Tube'`. Podría utilizar esta información para recomendar productos a un tipo específico de cliente.
```
SELECT PredictAssociation([Association].[v Assoc Seq Line Items])
FROM [Association_NB]
NATURAL PREDICTION JOIN
(SELECT 'High' AS [Income Group],
'Europe' AS [Region],
(SELECT 'Road Tire Tube' AS [Model])
AS [v Assoc Seq Line Items])
AS t
```
Resultados parciales:
|Modelo|
|-----------|
|Women's Mountain Shorts|
|Water Bottle|
|Touring-3000|
|Touring-2000|
|Touring-1000|
## <a name="function-list"></a> Function List
All [!INCLUDE[msCoName](../../includes/msconame-md.md)] algorithms support a common set of functions. However, the [!INCLUDE[msCoName](../../includes/msconame-md.md)] Naive Bayes algorithm also supports the additional functions listed in the following table.
|Prediction function|Usage|
|-|-|
|[IsDescendant (DMX)](/sql/dmx/isdescendant-dmx)|Determines whether one node is a child of another node in the model.|
|[Predict (DMX)](/sql/dmx/predict-dmx)|Returns a predicted value, or set of values, for a specified column.|
|[PredictAdjustedProbability (DMX)](/sql/dmx/predictadjustedprobability-dmx)|Returns the weighted probability.|
|[PredictAssociation (DMX)](/sql/dmx/predictassociation-dmx)|Predicts membership in an associative dataset.|
|[PredictNodeId (DMX)](/sql/dmx/predictnodeid-dmx)|Returns the Node_ID for each case.|
|[PredictProbability (DMX)](/sql/dmx/predictprobability-dmx)|Returns the probability for the predicted value.|
|[PredictSupport (DMX)](/sql/dmx/predictsupport-dmx)|Returns the support value for a specified state.|
For the syntax of specific functions, see [Data Mining Extensions (DMX) Function Reference](/sql/dmx/data-mining-extensions-dmx-function-reference).
## <a name="see-also"></a> See Also
[Microsoft Naive Bayes Algorithm Technical Reference](microsoft-naive-bayes-algorithm-technical-reference.md)
[Microsoft Naive Bayes Algorithm](microsoft-naive-bayes-algorithm.md)
[Mining Model Content for Naive Bayes Models (Analysis Services - Data Mining)](mining-model-content-for-naive-bayes-models-analysis-services-data-mining.md)
| 62.856061 | 674 | 0.744124 | spa_Latn | 0.9625 |
c95c7f090acdccb0c11304f81236f57e72837a64 | 35,656 | md | Markdown | repos/node/remote/12-bullseye.md | Melon-Tropics/repo-info | 6654e3ebb0f4af168acafd22a2e4a0764f18401a | [
"Apache-2.0"
] | 2 | 2020-01-03T00:11:05.000Z | 2022-02-03T18:28:14.000Z | repos/node/remote/12-bullseye.md | Melon-Tropics/repo-info | 6654e3ebb0f4af168acafd22a2e4a0764f18401a | [
"Apache-2.0"
] | null | null | null | repos/node/remote/12-bullseye.md | Melon-Tropics/repo-info | 6654e3ebb0f4af168acafd22a2e4a0764f18401a | [
"Apache-2.0"
] | null | null | null | ## `node:12-bullseye`
```console
$ docker pull node@sha256:892ad8474e16c65e96cb94712b1ce7d4fe75e78abc2e7ef1e15851ad4ad66847
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms: 5
- linux; amd64
- linux; arm variant v7
- linux; arm64 variant v8
- linux; ppc64le
- linux; s390x
### `node:12-bullseye` - linux; amd64
```console
$ docker pull node@sha256:d37c116c0b89d87d6ccf8bebc9a94b926667900c5ff24fa4bcf10ce71cb30780
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **348.0 MB (347957615 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:23806f7e25fe8bd12809c249fc07762eb65eebcd780baec4244465673172b509`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Tue, 21 Dec 2021 01:22:32 GMT
ADD file:c03517c5ddbed4053165bfdf984b27a006fb5f533ca80b5798232d96df221440 in /
# Tue, 21 Dec 2021 01:22:32 GMT
CMD ["bash"]
# Tue, 21 Dec 2021 01:51:53 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 01:51:59 GMT
RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi
# Tue, 21 Dec 2021 01:52:16 GMT
RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 01:53:06 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends autoconf automake bzip2 dpkg-dev file g++ gcc imagemagick libbz2-dev libc6-dev libcurl4-openssl-dev libdb-dev libevent-dev libffi-dev libgdbm-dev libglib2.0-dev libgmp-dev libjpeg-dev libkrb5-dev liblzma-dev libmagickcore-dev libmagickwand-dev libmaxminddb-dev libncurses5-dev libncursesw5-dev libpng-dev libpq-dev libreadline-dev libsqlite3-dev libssl-dev libtool libwebp-dev libxml2-dev libxslt-dev libyaml-dev make patch unzip xz-utils zlib1g-dev $( if apt-cache show 'default-libmysqlclient-dev' 2>/dev/null | grep -q '^Version:'; then echo 'default-libmysqlclient-dev'; else echo 'libmysqlclient-dev'; fi ) ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 18:44:48 GMT
RUN groupadd --gid 1000 node && useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# Tue, 11 Jan 2022 21:24:03 GMT
ENV NODE_VERSION=12.22.9
# Tue, 11 Jan 2022 21:24:21 GMT
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" && case "${dpkgArch##*-}" in amd64) ARCH='x64';; ppc64el) ARCH='ppc64le';; s390x) ARCH='s390x';; arm64) ARCH='arm64';; armhf) ARCH='armv7l';; i386) ARCH='x86';; *) echo "unsupported architecture"; exit 1 ;; esac && set -ex && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt && ln -s /usr/local/bin/node /usr/local/bin/nodejs && node --version && npm --version
# Tue, 11 Jan 2022 21:24:22 GMT
ENV YARN_VERSION=1.22.17
# Tue, 11 Jan 2022 21:24:26 GMT
RUN set -ex && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && yarn --version
# Tue, 11 Jan 2022 21:24:27 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Tue, 11 Jan 2022 21:24:27 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Tue, 11 Jan 2022 21:24:27 GMT
CMD ["node"]
```
- Layers:
- `sha256:0e29546d541cdbd309281d21a73a9d1db78665c1b95b74f32b009e0b77a6e1e3`
Last Modified: Tue, 21 Dec 2021 01:27:20 GMT
Size: 54.9 MB (54919034 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9b829c73b52b92b97d5c07a54fb0f3e921995a296c714b53a32ae67d19231fcd`
Last Modified: Tue, 21 Dec 2021 02:01:26 GMT
Size: 5.2 MB (5152816 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cb5b7ae361722f070eca53f35823ed21baa85d61d5d95cd5a95ab53d740cdd56`
Last Modified: Tue, 21 Dec 2021 02:01:26 GMT
Size: 10.9 MB (10871868 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6494e4811622b31c027ccac322ca463937fd805f569a93e6f15c01aade718793`
Last Modified: Tue, 21 Dec 2021 02:01:49 GMT
Size: 54.6 MB (54566215 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6f9f74896dfa93fe0172f594faba85e0b4e8a0481a0fefd9112efc7e4d3c78f7`
Last Modified: Tue, 21 Dec 2021 02:02:33 GMT
Size: 196.5 MB (196510974 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f2930ff7fb6061a0120e0b924ef643796b8d8afe63591a700ccd57efde8f0570`
Last Modified: Tue, 21 Dec 2021 19:04:52 GMT
Size: 4.2 KB (4203 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ddce6bfa6e8aa8e92f315a08e81807bb0cbfd922efa827aeb3f6d0a6bf9de62b`
Last Modified: Tue, 11 Jan 2022 21:44:54 GMT
Size: 23.6 MB (23643071 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a0eed5971907c6e6f14a9d2b2ac8e8e592c151a216445762639d635fcf2759b4`
Last Modified: Tue, 11 Jan 2022 21:44:49 GMT
Size: 2.3 MB (2288980 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e1bdf6875d28b8edba67a37ed2d1ed34cb8a0159d771756e041d9dcda7ff7709`
Last Modified: Tue, 11 Jan 2022 21:44:48 GMT
Size: 454.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:12-bullseye` - linux; arm variant v7
```console
$ docker pull node@sha256:e3369d77095ebb84907c2d6679a7708cd76f9792081e45156545348e42174170
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **306.5 MB (306484301 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:9a1c2ae67d0214c7884cbaba111b2f107efab432ac2e87a6bc0d991def7e6092`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Tue, 21 Dec 2021 01:59:11 GMT
ADD file:848bf729bc16d3b188567f096ee1c0386cb49825a06eef396401278afee2f4c7 in /
# Tue, 21 Dec 2021 01:59:12 GMT
CMD ["bash"]
# Tue, 21 Dec 2021 02:46:04 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 02:46:18 GMT
RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi
# Tue, 21 Dec 2021 02:47:10 GMT
RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 02:49:12 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends autoconf automake bzip2 dpkg-dev file g++ gcc imagemagick libbz2-dev libc6-dev libcurl4-openssl-dev libdb-dev libevent-dev libffi-dev libgdbm-dev libglib2.0-dev libgmp-dev libjpeg-dev libkrb5-dev liblzma-dev libmagickcore-dev libmagickwand-dev libmaxminddb-dev libncurses5-dev libncursesw5-dev libpng-dev libpq-dev libreadline-dev libsqlite3-dev libssl-dev libtool libwebp-dev libxml2-dev libxslt-dev libyaml-dev make patch unzip xz-utils zlib1g-dev $( if apt-cache show 'default-libmysqlclient-dev' 2>/dev/null | grep -q '^Version:'; then echo 'default-libmysqlclient-dev'; else echo 'libmysqlclient-dev'; fi ) ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 08:51:39 GMT
RUN groupadd --gid 1000 node && useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# Tue, 11 Jan 2022 21:20:33 GMT
ENV NODE_VERSION=12.22.9
# Tue, 11 Jan 2022 21:20:55 GMT
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" && case "${dpkgArch##*-}" in amd64) ARCH='x64';; ppc64el) ARCH='ppc64le';; s390x) ARCH='s390x';; arm64) ARCH='arm64';; armhf) ARCH='armv7l';; i386) ARCH='x86';; *) echo "unsupported architecture"; exit 1 ;; esac && set -ex && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt && ln -s /usr/local/bin/node /usr/local/bin/nodejs && node --version && npm --version
# Tue, 11 Jan 2022 21:20:56 GMT
ENV YARN_VERSION=1.22.17
# Tue, 11 Jan 2022 21:21:04 GMT
RUN set -ex && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && yarn --version
# Tue, 11 Jan 2022 21:21:05 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Tue, 11 Jan 2022 21:21:05 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Tue, 11 Jan 2022 21:21:06 GMT
CMD ["node"]
```
- Layers:
- `sha256:fd92fbcda272f5935dcd0dfea445cba0152208f83c8fc8d2cb74c85379145c42`
Last Modified: Tue, 21 Dec 2021 02:14:41 GMT
Size: 50.1 MB (50121433 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:29a6987ee02fad9d702adbd7921b8e1776b1a091773dd47886055113b1d7ba62`
Last Modified: Tue, 21 Dec 2021 03:11:46 GMT
Size: 4.9 MB (4922490 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3d6d85cbb278c8e4845f79838154b77c36931e65f9b5bb9b8f56c92f41f72e27`
Last Modified: Tue, 21 Dec 2021 03:11:47 GMT
Size: 10.2 MB (10217004 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:46bd566d277d8d03e1b76768a074dd92306bc287eef88c42a08ef5b48b842fa1`
Last Modified: Tue, 21 Dec 2021 03:12:37 GMT
Size: 50.3 MB (50328047 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:edcf448232b2036e5813fc0174bcd35c3a967a79cf2ef79bc2bfb0410c7b176e`
Last Modified: Tue, 21 Dec 2021 03:14:27 GMT
Size: 167.0 MB (166954318 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f745d564c9a2fe2cb9703d38e9a82b1d722f005d514fdb676204647e85c8239e`
Last Modified: Tue, 21 Dec 2021 09:41:18 GMT
Size: 4.2 KB (4188 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bf768950c11b5a74b648162c645be3323dfc01e8c2ce9984535bba8fdaf81c4b`
Last Modified: Tue, 11 Jan 2022 22:07:29 GMT
Size: 21.7 MB (21655067 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fb40c3c39d4f3bf141bc4c5a687458641c815bf1889f3a782bfb9fa9c5cc7463`
Last Modified: Tue, 11 Jan 2022 22:07:13 GMT
Size: 2.3 MB (2281301 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2726d4bcc32b50ce8853b605270736bc1faba20c8c10c5aec75e189620acd64b`
Last Modified: Tue, 11 Jan 2022 22:07:11 GMT
Size: 453.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:12-bullseye` - linux; arm64 variant v8
```console
$ docker pull node@sha256:4159f0f658e78338d69d67916276ca230853e294f2d6e4f7bc045df1035362f4
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **339.4 MB (339397241 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:d4bf5a2e5cf0f1758cbee6d7873c60ee1c1dd7d44170cdbfc612ea69134e4887`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Tue, 21 Dec 2021 01:42:08 GMT
ADD file:9d88e8701cd12aaee44dac3542cc3e4586f6382541afff76e56e8fb5275387d3 in /
# Tue, 21 Dec 2021 01:42:09 GMT
CMD ["bash"]
# Tue, 21 Dec 2021 02:12:18 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 02:12:23 GMT
RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi
# Tue, 21 Dec 2021 02:12:43 GMT
RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 02:13:27 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends autoconf automake bzip2 dpkg-dev file g++ gcc imagemagick libbz2-dev libc6-dev libcurl4-openssl-dev libdb-dev libevent-dev libffi-dev libgdbm-dev libglib2.0-dev libgmp-dev libjpeg-dev libkrb5-dev liblzma-dev libmagickcore-dev libmagickwand-dev libmaxminddb-dev libncurses5-dev libncursesw5-dev libpng-dev libpq-dev libreadline-dev libsqlite3-dev libssl-dev libtool libwebp-dev libxml2-dev libxslt-dev libyaml-dev make patch unzip xz-utils zlib1g-dev $( if apt-cache show 'default-libmysqlclient-dev' 2>/dev/null | grep -q '^Version:'; then echo 'default-libmysqlclient-dev'; else echo 'libmysqlclient-dev'; fi ) ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 11:04:44 GMT
RUN groupadd --gid 1000 node && useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# Tue, 11 Jan 2022 23:06:39 GMT
ENV NODE_VERSION=12.22.9
# Tue, 11 Jan 2022 23:06:53 GMT
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" && case "${dpkgArch##*-}" in amd64) ARCH='x64';; ppc64el) ARCH='ppc64le';; s390x) ARCH='s390x';; arm64) ARCH='arm64';; armhf) ARCH='armv7l';; i386) ARCH='x86';; *) echo "unsupported architecture"; exit 1 ;; esac && set -ex && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt && ln -s /usr/local/bin/node /usr/local/bin/nodejs && node --version && npm --version
# Tue, 11 Jan 2022 23:06:54 GMT
ENV YARN_VERSION=1.22.17
# Tue, 11 Jan 2022 23:06:58 GMT
RUN set -ex && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && yarn --version
# Tue, 11 Jan 2022 23:06:59 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Tue, 11 Jan 2022 23:07:00 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Tue, 11 Jan 2022 23:07:01 GMT
CMD ["node"]
```
- Layers:
- `sha256:94a23d3cb5be24659b25f17537307e7f568d665244f6a383c1c6e51e31080749`
Last Modified: Tue, 21 Dec 2021 01:48:23 GMT
Size: 53.6 MB (53604608 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ac9d381bd1e98fa8759f80ff42db63c8fce4ac9407b2e7c8e0f031ed9f96432b`
Last Modified: Tue, 21 Dec 2021 02:22:29 GMT
Size: 5.1 MB (5141526 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:aa9c5b49b9db3dd2553e8ae6c2081b77274ec0a8b1f9903b0e5ac83900642098`
Last Modified: Tue, 21 Dec 2021 02:22:30 GMT
Size: 10.7 MB (10655891 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:841dd868500b6685b6cda93c97ea76e817b427d7a10bf73e9d03356fac199ffd`
Last Modified: Tue, 21 Dec 2021 02:22:52 GMT
Size: 54.7 MB (54668906 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d4bb9078a4a2954fb77553c7f66912068fb62ff7cf431160389ebd36fab5c7ad`
Last Modified: Tue, 21 Dec 2021 02:23:30 GMT
Size: 189.4 MB (189424367 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6bf4ffb24b8e68c48dea37b1e0d3a818ded4842150fa9c3a2a100c4efbf0c88b`
Last Modified: Tue, 21 Dec 2021 11:31:28 GMT
Size: 4.1 KB (4094 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:05f0920ff61b665d83b9e2c09b3e6a70ab3d2ac618221ca6fc9c80ba1d39dfd2`
Last Modified: Tue, 11 Jan 2022 23:30:04 GMT
Size: 23.6 MB (23608588 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8cfa70fbb6a7043252a39708f2c296d263f45e2c86a54687cc3d20ee6b770c4c`
Last Modified: Tue, 11 Jan 2022 23:30:00 GMT
Size: 2.3 MB (2288809 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e22503cdeba9c100c8dfc62cb5c9bc7e7e5eb1bb7369a612429d3e099adaaeec`
Last Modified: Tue, 11 Jan 2022 23:29:59 GMT
Size: 452.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:12-bullseye` - linux; ppc64le
```console
$ docker pull node@sha256:9149d1fdb0a0a22d70cfb26fff20dc55160c18fd073f9c20e9824195172d48d7
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **356.3 MB (356331801 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:c3a1068f4753c06978520baaed7ed730826b4b45d631c969de794561258565ab`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Tue, 21 Dec 2021 02:19:53 GMT
ADD file:36311aefca0fba2cc35dc40f11be529a000c6af70f6dcda70c3e5bdf3ac0f1c2 in /
# Tue, 21 Dec 2021 02:19:58 GMT
CMD ["bash"]
# Tue, 21 Dec 2021 02:59:03 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 02:59:28 GMT
RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi
# Tue, 21 Dec 2021 03:00:52 GMT
RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 03:05:42 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends autoconf automake bzip2 dpkg-dev file g++ gcc imagemagick libbz2-dev libc6-dev libcurl4-openssl-dev libdb-dev libevent-dev libffi-dev libgdbm-dev libglib2.0-dev libgmp-dev libjpeg-dev libkrb5-dev liblzma-dev libmagickcore-dev libmagickwand-dev libmaxminddb-dev libncurses5-dev libncursesw5-dev libpng-dev libpq-dev libreadline-dev libsqlite3-dev libssl-dev libtool libwebp-dev libxml2-dev libxslt-dev libyaml-dev make patch unzip xz-utils zlib1g-dev $( if apt-cache show 'default-libmysqlclient-dev' 2>/dev/null | grep -q '^Version:'; then echo 'default-libmysqlclient-dev'; else echo 'libmysqlclient-dev'; fi ) ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 09:41:53 GMT
RUN groupadd --gid 1000 node && useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# Wed, 12 Jan 2022 00:46:18 GMT
ENV NODE_VERSION=12.22.9
# Wed, 12 Jan 2022 00:46:41 GMT
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" && case "${dpkgArch##*-}" in amd64) ARCH='x64';; ppc64el) ARCH='ppc64le';; s390x) ARCH='s390x';; arm64) ARCH='arm64';; armhf) ARCH='armv7l';; i386) ARCH='x86';; *) echo "unsupported architecture"; exit 1 ;; esac && set -ex && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt && ln -s /usr/local/bin/node /usr/local/bin/nodejs && node --version && npm --version
# Wed, 12 Jan 2022 00:46:46 GMT
ENV YARN_VERSION=1.22.17
# Wed, 12 Jan 2022 00:46:58 GMT
RUN set -ex && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && yarn --version
# Wed, 12 Jan 2022 00:46:59 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Wed, 12 Jan 2022 00:47:02 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Wed, 12 Jan 2022 00:47:05 GMT
CMD ["node"]
```
- Layers:
- `sha256:78f65c8d4acfdc2b066df6c8c1d7166f4ea52970529fd23ed61e944a0552d8a5`
Last Modified: Tue, 21 Dec 2021 02:28:43 GMT
Size: 58.8 MB (58809016 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bd58e00ed3d8d2ae81f4a0364675b0cd711ad958282fbeb751beb72e7b331f2c`
Last Modified: Tue, 21 Dec 2021 03:30:59 GMT
Size: 5.4 MB (5401628 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f1bc75a3d2d59f1fd689555a4c3bd4a145412598f152f754ce9587fb920ebb2a`
Last Modified: Tue, 21 Dec 2021 03:31:00 GMT
Size: 11.6 MB (11626008 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d9f33ad99d775a1df9cd9107a23b931e31ce37e136c00bf0859ea7c3a28a810d`
Last Modified: Tue, 21 Dec 2021 03:31:26 GMT
Size: 58.9 MB (58850305 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d6a3dd9c708756d341039c5fb60d0a35420e7bc414ed97dae0e2a91e7ccd5684`
Last Modified: Tue, 21 Dec 2021 03:32:15 GMT
Size: 195.9 MB (195882725 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7ed91f0ade07e78fc6efe54106e2ba639bdbd27a941d47a3ee94caa817874248`
Last Modified: Tue, 21 Dec 2021 10:13:23 GMT
Size: 4.2 KB (4200 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:59674a7c11f9d35e0ad912158e49b4323d08f2f620e578233042df3f53f05f24`
Last Modified: Wed, 12 Jan 2022 01:20:16 GMT
Size: 23.5 MB (23468462 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fcf13833935eb6da8382832216301578a3c005132b94e566456d4079f4819c9e`
Last Modified: Wed, 12 Jan 2022 01:20:11 GMT
Size: 2.3 MB (2289005 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:97fe23491968f3dea4a8a76ed3071a776c5fe98aa33aef2484810b6393e1ec66`
Last Modified: Wed, 12 Jan 2022 01:20:11 GMT
Size: 452.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:12-bullseye` - linux; s390x
```console
$ docker pull node@sha256:f927fd4535c3e6b1b811f80f6fd9df55dafd6c59399019553c003ff02f81151c
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **321.7 MB (321675569 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:22da113131b500247d0351a89543d106ebce59d521489d0ee73fa2fb6e4923da`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Tue, 21 Dec 2021 01:42:08 GMT
ADD file:9bd51bb5b152533abeecc5a52ab1ef27b6fe2b3be150073d286b50d9c422cfb9 in /
# Tue, 21 Dec 2021 01:42:11 GMT
CMD ["bash"]
# Tue, 21 Dec 2021 02:08:49 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 02:08:53 GMT
RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi
# Tue, 21 Dec 2021 02:09:13 GMT
RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 02:10:08 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends autoconf automake bzip2 dpkg-dev file g++ gcc imagemagick libbz2-dev libc6-dev libcurl4-openssl-dev libdb-dev libevent-dev libffi-dev libgdbm-dev libglib2.0-dev libgmp-dev libjpeg-dev libkrb5-dev liblzma-dev libmagickcore-dev libmagickwand-dev libmaxminddb-dev libncurses5-dev libncursesw5-dev libpng-dev libpq-dev libreadline-dev libsqlite3-dev libssl-dev libtool libwebp-dev libxml2-dev libxslt-dev libyaml-dev make patch unzip xz-utils zlib1g-dev $( if apt-cache show 'default-libmysqlclient-dev' 2>/dev/null | grep -q '^Version:'; then echo 'default-libmysqlclient-dev'; else echo 'libmysqlclient-dev'; fi ) ; rm -rf /var/lib/apt/lists/*
# Tue, 21 Dec 2021 03:18:06 GMT
RUN groupadd --gid 1000 node && useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# Wed, 12 Jan 2022 00:26:28 GMT
ENV NODE_VERSION=12.22.9
# Wed, 12 Jan 2022 00:26:46 GMT
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" && case "${dpkgArch##*-}" in amd64) ARCH='x64';; ppc64el) ARCH='ppc64le';; s390x) ARCH='s390x';; arm64) ARCH='arm64';; armhf) ARCH='armv7l';; i386) ARCH='x86';; *) echo "unsupported architecture"; exit 1 ;; esac && set -ex && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt && ln -s /usr/local/bin/node /usr/local/bin/nodejs && node --version && npm --version
# Wed, 12 Jan 2022 00:26:51 GMT
ENV YARN_VERSION=1.22.17
# Wed, 12 Jan 2022 00:26:55 GMT
RUN set -ex && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && yarn --version
# Wed, 12 Jan 2022 00:26:56 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Wed, 12 Jan 2022 00:26:56 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Wed, 12 Jan 2022 00:26:57 GMT
CMD ["node"]
```
- Layers:
- `sha256:893d9e8a132ef3e4de94156342e290ae15179e3e749ae758ca27bd72cd67b6e1`
Last Modified: Tue, 21 Dec 2021 01:47:53 GMT
Size: 53.2 MB (53194655 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f80509ff28f085f5b09954bf0136808ae6b2a37744ef3f8a0c6989c0ff40de21`
Last Modified: Tue, 21 Dec 2021 02:17:33 GMT
Size: 5.1 MB (5136685 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0fc21e37a06fd996c6ae0abba5be8b2c09baba43d9a6119c4c9e905db2ffeb46`
Last Modified: Tue, 21 Dec 2021 02:17:34 GMT
Size: 10.8 MB (10761597 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8ed2dbae5528b3a39c8ae6ebd7570ddfcda9993e4addda77f48e17061117c8d8`
Last Modified: Tue, 21 Dec 2021 02:17:49 GMT
Size: 54.0 MB (54041198 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f62b0e9760b5d084b8ebc8263ef8e0ed37bbdfc861b33cec6354fe4369287e57`
Last Modified: Tue, 21 Dec 2021 02:18:16 GMT
Size: 172.5 MB (172511504 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:05805cea9e275cd4d1d04e9bc8f10b6de40ec58ef2b82e172384c504a7939859`
Last Modified: Tue, 21 Dec 2021 03:33:22 GMT
Size: 4.2 KB (4206 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d8d963162b8e383c008c10b0a7879e0c3a9fb2eb28bf2926ce7501801f5f34c0`
Last Modified: Wed, 12 Jan 2022 00:44:07 GMT
Size: 23.7 MB (23732978 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:729997beb98ab085f0a67cf24bd5ce84a5466ed64a6dc2143594a122ba000339`
Last Modified: Wed, 12 Jan 2022 00:44:02 GMT
Size: 2.3 MB (2292292 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1b3c41c8bf6bb2a585935bd4547c63f43da370ecd074acd3c6912f478f2693ba`
Last Modified: Wed, 12 Jan 2022 00:44:02 GMT
Size: 454.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
| 83.114219 | 1,619 | 0.727311 | yue_Hant | 0.214685 |
c95cc5a132c42f39a585f7491930593db80b7508 | 289 | md | Markdown | changelog/1.0.0_2020-12-17/start-proxy-with-server.md | amrita-shrestha/ocis | 2fc61bdd6182f46109def65096ce6ef8d0572f1d | [
"Apache-2.0"
] | 541 | 2019-09-19T08:09:41.000Z | 2022-03-30T12:24:43.000Z | changelog/1.0.0_2020-12-17/start-proxy-with-server.md | amrita-shrestha/ocis | 2fc61bdd6182f46109def65096ce6ef8d0572f1d | [
"Apache-2.0"
] | 3,230 | 2019-09-05T10:20:08.000Z | 2022-03-31T18:05:29.000Z | changelog/1.0.0_2020-12-17/start-proxy-with-server.md | amrita-shrestha/ocis | 2fc61bdd6182f46109def65096ce6ef8d0572f1d | [
"Apache-2.0"
] | 89 | 2019-09-18T07:20:00.000Z | 2022-03-31T18:19:40.000Z | Change: Start ocis-proxy with the ocis server command
Tags: proxy
Starts the proxy in single binary mode (./ocis server) on port 9200. The proxy serves as a single entry point
for all HTTP clients.
https://github.com/owncloud/ocis/issues/119
https://github.com/owncloud/ocis/issues/136
| 28.9 | 109 | 0.778547 | eng_Latn | 0.795422 |
c95cc84676416ebaa845e00ef7578aa4aab361e4 | 3,095 | md | Markdown | doc/02/md/intro.md | turtleislands/breakit | 1fda7ac1643c0b6a9ce306d412fdc9838e0e8694 | [
"MIT"
] | 5 | 2018-10-02T02:09:08.000Z | 2020-05-29T06:53:02.000Z | doc/02/md/intro.md | turtleislands/breakit | 1fda7ac1643c0b6a9ce306d412fdc9838e0e8694 | [
"MIT"
] | null | null | null | doc/02/md/intro.md | turtleislands/breakit | 1fda7ac1643c0b6a9ce306d412fdc9838e0e8694 | [
"MIT"
] | 2 | 2018-10-19T22:07:41.000Z | 2018-11-07T17:23:14.000Z | <!---
@file intro.md
@author Yiwei Chiao ([email protected])
@date 10/06/2017 created.
@date 10/05/2018 last modified.
@version 0.1.1
@since 0.1.0
@copyright CC-BY, © 2017-2018 Yiwei Chiao
-->
# HTML5/CSS3
In a web browser, [JavaScript][wikiECMAScript] ([ECMAScript][]) controls the program's **behavior**, [HTML][wikiHTML] ([Hyper Text Markup Language][wikiHTML]) defines the document's **structure**, and [CSS][wikiCSS] ([Cascading Style Sheets][wikiCSS]) handles the **style** (presentation). Each of the three has its own job.
Since the [BreakIt][breakit] project is a web game project, it naturally cannot do without [HTML][wikiHTML] and [CSS][wikiCSS]. However, the focus of the project is [JavaScript][mdnJavaScript], so [HTML][mdnHTML] and [CSS][mdnCSS] are only covered briefly, and only the parts that are actually used. For a more complete introduction or for more advanced topics, you will need to consult other resources (such as the links given here: [HTML][mdnHTML] and [CSS][mdnCSS]).
## index.html
First, create an `index.html` file under the `breakit/htdocs` folder, with the following content:
```html
<!DOCTYPE html>
<html lang="zh-TW">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>BreakIt: A Breakout Game</title>
<meta name="author" content="Yiwei Chiao">
<meta name="description" content="A web-based Breakout (打磚塊) game.">
<meta name="keywords" content="Javascript, game, Breakout">
</head>
<body>
Hello World!
</body>
</html>
```
In the listing of `index.html`, the strings enclosed in `<>` are called **tags**; they are the building blocks of the [HTML][wikiHTML] markup language. A more detailed introduction to [HTML][wikiHTML] is left to the second half of this chapter; for now, the only thing to note is the `Hello World!` enclosed between `<body>` and `</body>`.
Once `index.html` is ready under the `breakit/htdocs` folder, open a browser and type the following into the address bar:
* Windows: `file:///d:/breakit/htdocs/index.html`
* Linux: `file:///home/ywchiao/breakit/htdocs/index.html`
* MacOs: `file:///Users/ywchiao/breakit/htdocs/index.html`
Replace the `d:` on Windows and the `ywchiao` on Linux/MacOS according to your own setup. On Linux/MacOS, if you are not sure what path to type, open a terminal, use the `cd` command to change the working directory to `breakit/htdocs`, then type `pwd` (Present Working Directory) and copy what is printed on the screen. On Windows, you can use File Explorer: navigate to the `breakit/htdocs` folder, then left-click on the blank area of the Explorer address bar to see the path you need to type.
If the address is typed correctly in the browser's address bar, you should see a screen like Figure \ref{file:index}.
![Hello World! from index.html \label{file:index}](images/indexpage.png)
### [HTML][mdnHTML] Headings `<h1> ... <h6>`
Figure \ref{file:index} does not look any different? Indeed, because, as mentioned earlier, the purpose of [HTML][wikiHTML] is to define the document's **structure**, not its presentation. Still, a few simple effects are available. Change:
```html
<body>
Hello World!
</body>
```
into:
```html
<body>
<h1>Hello World!</h1>
</body>
```
After saving the file and reloading the page, you will notice that the font size of `Hello World!` has changed. This is because `<h1></h1>` is the [HTML][mdnHTML] *tag* used to mark a **heading**: `<h1>` marks the start of the heading, and `</h1>` marks its end. By typographic convention, headings are usually set in a larger font than body text, so the text marked by [HTML][mdnHTML] heading tags is displayed larger as well.
[HTML][mdnHTML] defines a total of six (6) heading levels, marked with `<h1>`, `<h2>`, and so on down to `<h6>`. You can try each of them and see the effect; a small example follows below.
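The following snippet is a minimal sketch (it is not part of the project's files) that puts all six levels side by side; you can temporarily replace the `<body>` content of `index.html` with it to compare the sizes:
```html
<body>
  <h1>Heading level 1</h1>
  <h2>Heading level 2</h2>
  <h3>Heading level 3</h3>
  <h4>Heading level 4</h4>
  <h5>Heading level 5</h5>
  <h6>Heading level 6</h6>
</body>
```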
[ECMAScript]: https://www.ecma-international.org/publications/standards/Ecma-262.htm
[mdnCSS]: https://developer.mozilla.org/en-US/docs/Web/CSS
[mdnHTML]: https://developer.mozilla.org/en-US/docs/Web/HTML
[mdnJavaScript]: https://developer.mozilla.org/zh-TW/docs/Web/JavaScript
[wikiCSS]: https://en.wikipedia.org/wiki/Cascading_Style_Sheets
[wikiECMAScript]: https://en.wikipedia.org/wiki/ECMAScript
[wikiHTML]: https://en.wikipedia.org/wiki/HTML
<!-- intro.md -->
| 28.136364 | 84 | 0.677868 | yue_Hant | 0.916661 |
c95d5764ceda96113da38ef3e1ed331de0b7d323 | 608 | md | Markdown | tests/testRepositories/ver01/Cypress-Mocks.package/CypressMockBasic.class/README.md | dalehenrich/filetree | 28ab532548104ee38c55eaff8e8849ab3f595c5f | [
"MIT"
] | 83 | 2015-02-10T10:41:30.000Z | 2022-02-24T23:43:10.000Z | tests/testRepositories/ver02/Cypress-Mocks.package/CypressMockBasic.class/README.md | CampSmalltalk/filetree | de952a1f7864e5eea921d6ee6d86f5cb78f83eb0 | [
"MIT"
] | 70 | 2015-01-08T10:54:09.000Z | 2021-05-10T03:16:53.000Z | tests/testRepositories/ver02/Cypress-Mocks.package/CypressMockBasic.class/README.md | CampSmalltalk/filetree | de952a1f7864e5eea921d6ee6d86f5cb78f83eb0 | [
"MIT"
] | 16 | 2015-02-01T16:46:32.000Z | 2021-05-29T03:03:42.000Z | ## Class Comment
This mock contains basic class and instance method selectors.
[**GitHub Flavored Markdown**][1] with **Smalltalk** syntax *highlighting*:
```Smalltalk
initialize
super initialize.
self name: 'Unknown'
```
And some [UTF8 samples][2]:
```
Lithuanian: Aš galiu valgyti stiklą ir jis manęs nežeidžia
Russian: Я могу есть стекло, оно мне не вредит.
Korean: 나는 유리를 먹을 수 있어요. 그래도 아프지 않아요
Hebrew: אני יכול לאכול זכוכית וזה לא מזיק לי.
Latin: Vitrum edere possum; mihi non nocet.
```
[1]: http://github.github.com/github-flavored-markdown/
[2]: http://www.columbia.edu/~fdc/utf8/
| 23.384615 | 75 | 0.712171 | kor_Hang | 0.637996 |
c95d6f6e2f05e350bf2de83cd083d2050b200a15 | 1,141 | md | Markdown | README.md | LesleyLai/cuda-path-tracer | f3da83ccc4a4c1c39c38aca3ceff6192a6599b98 | [
"MIT"
] | 2 | 2021-09-27T16:57:18.000Z | 2021-09-27T22:23:31.000Z | README.md | LesleyLai/cuda-path-tracer | f3da83ccc4a4c1c39c38aca3ceff6192a6599b98 | [
"MIT"
] | null | null | null | README.md | LesleyLai/cuda-path-tracer | f3da83ccc4a4c1c39c38aca3ceff6192a6599b98 | [
"MIT"
] | null | null | null | # CUDA Path Tracer
[](https://github.com/LesleyLai/cuda-path-tracer/actions/workflows/Windows.yml)
[](https://github.com/LesleyLai/cuda-path-tracer/actions/workflows/Ubuntu.yml)
WIP CUDA Path Tracer.
## Build
You need a recent C++ compiler ([Visual Studio](https://www.visualstudio.com/), [GCC](https://gcc.gnu.org/), or [Clang](https://clang.llvm.org/)), CMake 3.17+, the [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads), and the [Conan](https://conan.io/) package manager installed.
To install Conan, you need a recent Python installed, and then you can run:
```
$ pip install conan
```
Afterwards, you can invoke CMake from the command line to build the project:
```
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release ..
$ cmake --build .
```
or alternatively use your IDE's CMake integration.
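If the configure step reports missing third-party packages, a typical Conan 1.x workflow (shown here only as an illustration; the exact integration depends on how the project's CMake scripts invoke Conan) is to install the dependencies into the build directory first:
```
# from inside the build directory created above (illustrative Conan 1.x usage)
$ conan install .. --build=missing
```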
## License
This repository is released under the MIT license, see [License](file:License) for more information.
| 32.6 | 176 | 0.742331 | eng_Latn | 0.610828 |
c95da553f523faa256ed70c142587d984e3622d8 | 2,113 | md | Markdown | content/en/docs/labs/boot/getting-started/run.md | SharePointOscar/jx-docs | 0080aec455eee278e7ab1b782bdb5f60ade2a744 | [
"Apache-2.0"
] | 1 | 2022-02-09T23:19:02.000Z | 2022-02-09T23:19:02.000Z | content/en/docs/labs/boot/getting-started/run.md | SharePointOscar/jx-docs | 0080aec455eee278e7ab1b782bdb5f60ade2a744 | [
"Apache-2.0"
] | null | null | null | content/en/docs/labs/boot/getting-started/run.md | SharePointOscar/jx-docs | 0080aec455eee278e7ab1b782bdb5f60ade2a744 | [
"Apache-2.0"
] | 1 | 2020-12-27T03:34:03.000Z | 2020-12-27T03:34:03.000Z | ---
title: Running Boot
linktitle: Running Boot
description: Running Boot to install / upgrade Jenkins X
weight: 50
---
{{% alert %}}
**NOTE: This current experiment is now closed. The work done and feedback we have received will be used to enhance Jenkins X in future versions**
**This code should not be used in production, or be adopted for usage. It should only be used to provide feedback to the Labs team.**
Thank you for your participation,
-Labs
{{% /alert %}}
Once you have [created your git repository](/docs/labs/boot/getting-started/repository/) for your development environment via `jxl boot create` or `jxl boot upgrade` and populated the [secrets](/docs/labs/boot/getting-started/secrets/) as shown above you can run the boot `Job` via:
```
jxl boot run --git-url=https://github.com/myorg/env-mycluster-dev.git
```
If you are using a private git repository you can specify the user name and token to clone the git repository via `--git-username` and `--git-token` arguments or you can add them into the URL:
```
jxl boot run --git-url=https://myusername:[email protected]/myorg/env-mycluster-dev.git
```
Once you have booted up once you can omit the `git-url` argument as it can be discovered from the `dev` `Environment` resource:
```
jxl boot run
```
This will use helm to install the boot Job and tail the log of the pod so you can see the boot job run. It looks like the boot process is running locally on your laptop but really it is all running inside a Pod inside Kubernetes.
Once this has finished you are ready to import or create a quickstart.
<nav>
<ul class="pagination">
<li class="page-item"><a class="page-link" href="../config">Previous</a></li>
<li class="page-item"><a class="page-link" href="../../../wizard/overview/">Next</a></li>
</ul>
</nav>
### Upgrading your cluster
Once you have the git repository for the upgrade you need to run the boot Job in a clean empty cluster.
To simplify things you may want to create a new cluster, connect to that, and then boot from there, as sketched below. If you are happy with the results you can scale down or destroy the old one.
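A minimal sketch of that flow (the context name below is a placeholder; `kubectl config use-context` is just one way to point your tooling at the new cluster, so use whatever your cloud provider recommends):
```
# point kubectl at the new, empty cluster (context name is hypothetical)
kubectl config use-context my-new-cluster

# run the boot Job against the environment git repository as shown above
jxl boot run --git-url=https://github.com/myorg/env-mycluster-dev.git
```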
| 39.867925 | 282 | 0.734974 | eng_Latn | 0.997313 |
c95dac8b33bd32b72508444447a83394c9911d01 | 552 | md | Markdown | data/content/fate-grand-order/ce/count-romani-archamans-hospitality/attr.en.md | tmdict/tmdict | c2f8ddb7885a91d01343de4ea7b66fea78351d94 | [
"MIT"
] | 3 | 2022-02-25T11:13:45.000Z | 2022-02-28T11:55:41.000Z | data/content/fate-grand-order/ce/count-romani-archamans-hospitality/attr.en.md | slsdo/tmdict | c2f8ddb7885a91d01343de4ea7b66fea78351d94 | [
"MIT"
] | null | null | null | data/content/fate-grand-order/ce/count-romani-archamans-hospitality/attr.en.md | slsdo/tmdict | c2f8ddb7885a91d01343de4ea7b66fea78351d94 | [
"MIT"
] | 2 | 2022-02-25T09:59:50.000Z | 2022-02-28T11:55:09.000Z | ---
parent: attribute.ce
source: fate-grand-order
id: count-romani-archamans-hospitality
language: en
weight: 0
---
Young guests visit the mansion each night, and are given a splendid and joyous reception.
What is the true identity of this mysterious count…!?
“That was the concept for the costume, anyway.
What do you think?”
“If the wine wasn’t actually a soft drink, I suppose it would’ve worked?
Considering we are talking about you, I can see it represents a lot of effort, but couldn’t you have done better than the phrase ’mysterious count’?”
| 34.5 | 149 | 0.768116 | eng_Latn | 0.999731 |
c95df5ef4eacbf7a5e146ca0fb536d80d6b514d2 | 11,662 | md | Markdown | _posts/2019-11-07-Usability-Testing-Check-In.md | clink-app/beautiful-jekyll | 4c784eab2536fe10e4e8d488a2b816843938926c | [
"MIT"
] | null | null | null | _posts/2019-11-07-Usability-Testing-Check-In.md | clink-app/beautiful-jekyll | 4c784eab2536fe10e4e8d488a2b816843938926c | [
"MIT"
] | null | null | null | _posts/2019-11-07-Usability-Testing-Check-In.md | clink-app/beautiful-jekyll | 4c784eab2536fe10e4e8d488a2b816843938926c | [
"MIT"
] | null | null | null | ---
layout: post
title: Usability Testing Check-In
published: true
---
# Our cognitive walkthrough
| problem | severity |
|------------------ | ------------------ |
| 1. Not clear whether filters will be applied when added | 3 |
| 2. Not clear that user is meant to type in filters page | 2 |
| 3. Not clear that header icons could be clicked | 2 |
To solve problem one, an "apply" button was added to the filters page.
To solve problem two, we added a keyboard component to our paper prototype. This will signal to the user that they are meant to begin typing, and then they will see the list of tag suggestions based off of the letters they input.
To address problem three, we added light shading around the clickable icons. We think this will make their function clearer to users of our paper prototype. We also don't anticipate this being as much of an issue with a digital mockup, since there will be more differentiation between buttons and text/images.
Note: Most of these changes were not implemented until after our first usability test (the keyboard prototype piece was added prior to the user test).
# Our first usability test
We conducted a usability test on Nov 6th with TN, a sophomore planning on majoring in CS at Williams. The test was carried out in Eco Cafe, since it is a quiet, convenient, and comfortable location for our tester. Phoebe took on the role of Facilitator. She explained the general purpose of the app and the tasks, and asked TN to explain their thought process as they navigated through the app. Vy was the Computer. The app was described as a tool to help users find events they are interested in. TN was told that each user gets a profile page where they can set fixed preferences for events. They were also given a brief introduction to the recommendation system, in which the app integrates the user's personal calendar and recommends events that fit into detected blocks of free time and match the user's fixed preferences. The tasks given to TN were:
- You are a user of the Clink app with the fixed preferences set for art and music. Find and add a sports event to your event list.
- Add an event from the recommendation list to your event list.
TN was quick to figure out how to find and scroll through events. However, they had trouble identifying how the temporary filter page works. They also found it difficult to differentiate between “find events” and “quick find”.
As for the revisions, we would need to redesign the filter page so it is easier for users to understand. We would also need to provide the user with more information about the "quick find" feature and explain how it differs from "find events".
Throughout the process, we learned how important it was to keep our user talking and speaking their thoughts out loud. Whenever TN was prompted to speak, they gave us valuable insight into what they were thinking when they made a certain action or why they weren't sure what action to take. These insights helped us develop revisions to our prototype.
We need to improve our testing process by making the changes from page to page of our prototype smoother and quicker so that our user isn't waiting for us constantly. We should also create a more specific scenario for our user so they have a better idea of what their tasks are and a more concrete purpose as they navigate the prototype.
**Results from our first usability test**
**Negative**
**Incident 1**: Not sure what the difference between 'find events' and 'quick find' is.

TN revealed out loud that they could not tell what the difference between the two buttons was. They tried both options in their first task. They also used "find events" in their second task because they wanted to find an event, it was what "worked last time," they knew it held their preferences, and they assumed all the events shown would fit their schedule.
Severity of 3


To address this, we added an "information" icon to the recommendations page. This allows the user to access a description of the feature, explaining its purpose.
**Incident 2**: Could not figure out how to edit the filter.

TN clicked filter, saw that it only had tags for what we mentioned was already set as a preference in the profile, and assumed that was all it was, exiting from filter to try quick find instead. The next time TN entered the filters page they had a realization that the tags on the page had 'x's and could be removed. They revealed to us that they 'saw what was going on now' and that while they thought the filter could add tags, it can only remove tags. They did not see the text box for typing and adding a new tag to the filter.
Severity of 4

We got rid of the typing text boxes and now have one text box for both the existing tags and where users can type in new tags, similar to how the edit profile page is structured.
**Incident 3**: Not intuitive to find home page.

TN had trouble getting back to the home page and asked out loud how to get there. We gave a hint to guide them back to the home page: "What looks like it might take you back to the page you started on?" From there, TN clicked the header to go back to the home page.
Severity of 3

We added a circle around the logo in the header to make it appear more clickable.
**Incident 4**: Thought profile icon was decorative.

- TN didn't click on the profile icon at first because they thought it was 'purely decorative'.
Severity of 1

We made the icon more recognizable as a person by adding a smiley face to the figure.
**Incident 5**: Couldn't figure out how to edit the profile.

TN got to the edit profile page but could not figure out how to make an edit. We gave the hint: "Does anything look like it can be removed?" TN then clicked on the 'x' next to the music tag. They saved the page and revealed that they were looking for some way to make a keyboard pop up or get suggestions of tags to add. We gave them another hint: "Try going back to what you were just doing." TN clicked edit again and tapped randomly on the page around the tags until they hit the text box and our computer revealed the keyboard.
Severity of 2
We chose not to revise the interface of our edit profile page because TN figured out how to use it and said their main issue with figuring out how to use the page was that the tags were taking up the entire interest box and made it look uneditable. In the future tests, we will be more careful in placing our tags lined up with the top and left side of the box so that it is more obvious that there is room in the box to add more tags.
**Incident 6**: Did not 'apply' changes to filter

TN did not realize the filter was scrollable and so did not realize that the filter had an 'apply' and 'cancel' button. After making changes to the filter, TN clicked the 'x' at the top of the filter page to exit out of it, assuming that would automatically apply the new filter.
Severity of 3

We moved the apply button to the top of the filter so it was visible as soon as the filter was opened.
**Positive**
**Incident 7**: Scrolled down list of events.

TN could tell that the list of events was scrollable and did so to find an event they liked.
**Incident 8**: Added event to event list.

TN had no issue with getting to the more detailed page of an event and adding it to the event list.
**Incident 9**: Figured out how to access the 'RATE, DISLIKE, REPORT' menu.

TN clicked on the drop down menu on an event out of curiosity. They could tell immeadiately what it was for and knew it didn't have anything to do with their task so they closed out of the drop down menu without an issue.
**Our plan for our remaining usability tests**
For the remainder of our usability tests, we plan to target another Williams student with a (planned) major in the social sciences, and an older adult who is not in the college-student age range. These two users would be able to provide different points of view about the app than TN, who is a relatively tech-savvy user. We are also planning to have the users test our rating feature, in which the app takes note when they rate an attended event so that it can recommend or not recommend it to them in the future. The current two tasks, however, seem to be extensive enough to test all of the currently implemented parts of our prototype. We are still planning to have one member of the group take on the role of the Facilitator, and another the Computer. Each team member will participate in at least 2 usability tests. As for new approaches, we would like to give a more detailed explanation of the given tasks before setting the users off to do them. It's important to give them just enough information to understand their role and do the tasks, without giving away too much information.
# Revised Prototype: Task Walkthrough
**Our current revised paper prototype**

## Task One: Filter and Find Event
To complete this task, we first click on "find events", which navigates to the event list.


If the user would like to add a filter, they click on "filters", which causes the revised filter page to pop up. The filters page shows their default filters, which were previously set in their profile. In this case, the filters currently applied are 'art' and 'music'.

In this walkthrough, the user would only like to search for sports events. They click the (x) to remove art and music filters, and begin typing "sports" to add a sports filter. They tap "apply" to save these changes. Now, only the sports filter is active.

Clicking the (x) on the filters menu takes the user back to the now-updated event list. It now only shows sports events.

After finding a sports event that interests them, they click on it. This takes them to a page with more details about the event.

They then tap "add" to add the event to their personal event list. This causes the confirmation page to appear, which lets the user know the action was performed successfully.

## Task Two: View Personal Event Recommendations
Tapping "quick find" takes the user to the personal event recommendation page. Since it was not entirely clear to the person testing our app how this feature differed than the first "find events" feature, we've added a (?) icon.


Tapping the (?) icon allows the user to get more information about what the recommendation feature is and how recommendations are generated.

Tapping the (^) icon collapses this information box, and takes the user back to the recommendation page. From here, the user can either refresh the recommendations or view more details (and potentially add the event to their event list), as they can in the steps shown previously.
| 67.802326 | 1,089 | 0.771566 | eng_Latn | 0.999737 |
c95e8a2696e09eba083179f2a04d44127bcc5aec | 1,725 | md | Markdown | _listings/opencorporates/companiesjurisdiction-codecompany-numberdata-get-openapi.md | streamdataio/data | aa68b46148ec1327f1cc46dc07ee67fd432242a4 | [
"CC-BY-3.0"
] | null | null | null | _listings/opencorporates/companiesjurisdiction-codecompany-numberdata-get-openapi.md | streamdataio/data | aa68b46148ec1327f1cc46dc07ee67fd432242a4 | [
"CC-BY-3.0"
] | 1 | 2020-12-18T18:49:19.000Z | 2020-12-18T18:49:19.000Z | _listings/opencorporates/companiesjurisdiction-codecompany-numberdata-get-openapi.md | streamdataio/data | aa68b46148ec1327f1cc46dc07ee67fd432242a4 | [
"CC-BY-3.0"
] | 2 | 2020-11-08T09:21:56.000Z | 2020-12-18T09:49:34.000Z | ---
swagger: "2.0"
x-collection-name: OpenCorporates
x-complete: 0
info:
title: OpenCorporates Companies Jurisdiction Code Company Number Data
  description: This returns the data held for the given company
termsOfService: https://opencorporates.com/info/licence
version: v.04
host: api.opencorporates.com
basePath: v0.4/
schemes:
- http
produces:
- application/json
consumes:
- application/json
paths:
/companies/:jurisdiction_code/:company_number/data:
get:
summary: Companies Jurisdiction Code Company Number Data
      description: This returns the data held for the given company
operationId: companies--jurisdiction-code--company-number-data
x-api-path-slug: companiesjurisdiction-codecompany-numberdata-get
parameters:
- in: query
name: data_type
description: this is a string value denoting the type of data, e
- in: query
name: description
description: the given description of the datum as displayed on OpenCorporates
- in: query
name: id
description: the id given to the filing by the company registry
- in: query
name: title
description: this is the title of the datum as displayed on OpenCorporates
responses:
200:
description: OK
tags:
- Businesses
- Companies
- :jurisdiction
- Code
- :company
- Number
- Data
x-streamrank:
polling_total_time_average: 0
polling_size_download_average: 0
streaming_total_time_average: 0
streaming_size_download_average: 0
change_yes: 0
change_no: 0
time_percentage: 0
size_percentage: 0
change_percentage: 0
last_run: ""
days_run: 0
minute_run: 0
--- | 27.822581 | 86 | 0.695072 | eng_Latn | 0.825517 |
c95e8a7b94d2dddb180aab7fe57bc5e6e4fa7b33 | 1,564 | md | Markdown | README.md | Rohail1/iplocate | 7272e3b085877288aed714119f9e50bf9d08a6d6 | [
"MIT"
] | 12 | 2017-02-16T05:44:14.000Z | 2022-02-25T05:26:09.000Z | README.md | Rohail1/iplocate | 7272e3b085877288aed714119f9e50bf9d08a6d6 | [
"MIT"
] | null | null | null | README.md | Rohail1/iplocate | 7272e3b085877288aed714119f9e50bf9d08a6d6 | [
"MIT"
] | null | null | null | # iplocate
[](https://circleci.com/gh/Rohail1/iplocate/tree/master)
[](https://www.npmjs.com/package/iplocate)<br/>This is a simple middleware for an express App that gets the locations of every request thats hits the server. I am using [freegeoip.net](http://freegeoip.net/) .
Its free service it lets you hit 10000 requests per hour for ip's location.
##Installation
`npm install iplocate --save`
### How to use
```javascript
const express = require('express');
const iplocate = require('iplocate');

const app = express();
const router = express.Router();

// Register the middleware so every request gets location data attached.
app.use(iplocate);

router.get('/api/route', function (req, res) {
    if (req.locationError) {
        // In case of any error, locationError will be populated
        console.log('req.locationError', req.locationError);
    } else {
        // The location object will be attached to the request
        console.log('req.location', req.location);
    }
});
app.use(router);

// Or use it with the app directly
app.get('/api/route', function (req, res) {
    if (req.locationError) {
        // In case of any error, locationError will be populated
        console.log('req.locationError', req.locationError);
    } else {
        // The location object will be attached to the request
        console.log('req.location', req.location);
    }
});
```
### Tests
`npm test`
### Issues
Create an issue if there are any bugs.
### For Any Queries
Feel free to contact me via [email](mailto:[email protected]) or on my website [lumous.pk](http://lumous.pk) | 29.509434 | 262 | 0.663043 | eng_Latn | 0.876145 |
c95eaadf722c6456235dda20819e412b718b2182 | 1,328 | md | Markdown | rfc/0001-rfc-process.md | HarryET/supabase-rfcs | 326f52846985f34a8dcdc10e2aada87a7a3554ba | [
"MIT"
] | 12 | 2021-09-08T09:49:44.000Z | 2022-01-27T02:23:14.000Z | rfc/0001-rfc-process.md | HarryET/supabase-rfcs | 326f52846985f34a8dcdc10e2aada87a7a3554ba | [
"MIT"
] | 3 | 2021-09-07T09:12:12.000Z | 2022-03-19T15:37:18.000Z | rfc/0001-rfc-process.md | HarryET/supabase-rfcs | 326f52846985f34a8dcdc10e2aada87a7a3554ba | [
"MIT"
] | 2 | 2021-12-04T06:31:31.000Z | 2022-03-23T15:39:55.000Z | - Feature Name: `rfc_process`
- Start Date: 2021-10-07
- RFC PR: [supabase/rfcs#0001](https://github.com/supabase/rfcs/pull/0001)
- Supabase Issue: [supabase/supabase#0000](https://github.com/supabase/supabase/issues/0000)
# Summary
[summary]: #summary
Initializing the RFC process for Supabase.
# Motivation
[motivation]: #motivation
We want a way to publicly discuss large and upcoming changes to Supabase.
# Guide-level explanation
[guide-level-explanation]: #guide-level-explanation
This is a formal, public process for proposing and discussing large changes to Supabase before they are implemented.
# Reference-level explanation
[reference-level-explanation]: #reference-level-explanation
N/A
# Drawbacks
[drawbacks]: #drawbacks
Too much process can slow things down.
# Rationale and alternatives
[rationale-and-alternatives]: #rationale-and-alternatives
N/A
# Prior art
[prior-art]: #prior-art
This RFC process is based on the Rust RFC process. This was chosen over the Nix process as it seems lighter and simpler
(although they are both very similar).
- Rust: https://github.com/rust-lang/rfcs
- Nix: https://github.com/NixOS/rfcs/tree/master/rfcs
# Unresolved questions
[unresolved-questions]: #unresolved-questions
n/a.
# Future possibilities
[future-possibilities]: #future-possibilities
This process is supposed to be very light for now. It is expected to evolve as we continue to develop the process. | 24.145455 | 119 | 0.768825 | eng_Latn | 0.91401 |
c95f6b227b821b87dd1b4ad467e8252290abc584 | 4,134 | md | Markdown | content/projekte/demokratielabore.md | seegerseeger/turing2019 | 88c268c08d22ac0409835b5aed2262cf3b22c535 | [
"MIT"
] | null | null | null | content/projekte/demokratielabore.md | seegerseeger/turing2019 | 88c268c08d22ac0409835b5aed2262cf3b22c535 | [
"MIT"
] | null | null | null | content/projekte/demokratielabore.md | seegerseeger/turing2019 | 88c268c08d22ac0409835b5aed2262cf3b22c535 | [
"MIT"
] | null | null | null | ---
title: Demokratielabore
subtitle: Shaping the society of tomorrow with digital tools
kategorien:
- Bildung
- Community
categories:
- education
- community
tile: double
layout: project
weight: 5
img: projects/Demokratielabore_Projektuebersicht_bigTile.jpg
img_header: projects/Demokratielabore_Header.jpg
people:
- name: Lydia Böttcher
role: Projektmanagerin
- name: Maximilian Voigt
role: Projektmanager
- name: Matthias Löwe
role: Projektmanager
- name: Jasmin Helm
role: Projektmanagerin
- name: Sonja Fischbauer
role: Projektmanagerin
- name: Paula Grünwald
role: Projektmanagerin
- name: Juliane Krüger
role: Projektmanagerin
- name: Nadine Stammen
role: Design
- name: Leonard Wolf
role: Studentischer Mitarbeiter
- name: Lea Pfau
role: Studentische Mitarbeiterin
- name: Sebastian Schröder
role: Bufdi
financing:
- BMFSFJ
- bpb
contact_person: lydiaboettcher
years: 2017 - heute
website: https://demokratielabore.de
contact:
twitter: demokratielabs
instagram: https://www.instagram.com/demokratielabs/
facebook:
mail: [email protected]
cta: Get involved
cta_text: |-
  Use our materials to run <a href="https://demokratielabore.de/workshops/">your own workshops</a>, start a digital working group in your youth organisation, and find or add project ideas on our <a href="https://demokratielabore.de/materialsammlung/">self-learning platform</a>!
more_text: |-
  More information is available on the Demokratielabore <a href="https://demokratielabore.de">website</a>.
---
How can we empower children and young people today to become shapers of tomorrow's society? What makes young people more visible as actors? What role do digital tools play in this? And how can participation succeed on both a digital and a social level?
To answer these and other questions, we developed various workshop formats and materials with the Demokratielabore to help a broad target group of children and young people experience self-efficacy in the areas of democracy and technology. The formats are designed for different target groups in extracurricular youth education. On the one hand, they enable workshop leaders to teach topics such as cyberbullying, fake news, populism, urban design, data protection, influencing, diversity and freedom of expression. On the other hand, self-learning materials were developed that young people can use to independently explore social issues and digital tools in order to implement their own projects. In addition, together with the Datenschule we ran training sessions for professionals to make the opportunities of digitalisation usable in everyday extracurricular youth work.
<div class="two-img offset-lg-2">
<figure class="license">
<img alt="Bild vom Event" src="/files/projects/demokratielabore_img_1.jpg">
<figcaption>Foto: Lea Pfau, <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a></figcaption>
</figure>
<figure class="license">
<img alt="Bild vom Event" src="/files/projects/demokratielabore_img_2.jpg">
<figcaption>Foto: Lea Pfau, <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a></figcaption>
</figure>
</div>
**Background** <br>
The world that young people in Germany live in and experience has changed completely within a single generation: the digital revolution reaches into every area of public and private life and shapes our relationships and the way we live together as a society. Although modern technologies are already widespread and their consumption-oriented use for communication, information and entertainment is far advanced, society's active shaping of this technologisation lags behind. Our goal is therefore to reclaim the internet as a positive space for creation and to let young people experience self-efficacy as they engage with digital life and with democratic structures and processes.
| 55.12 | 947 | 0.793904 | deu_Latn | 0.978417 |
c95f8060381306616ce56cd69c9b5a671825af93 | 336 | md | Markdown | starfield/README.md | antoniomo/ebiten-exercises | f3c28a3faff77bd4ea1ae22a29c5cdb52589e772 | [
"CC-BY-3.0",
"Apache-2.0"
] | null | null | null | starfield/README.md | antoniomo/ebiten-exercises | f3c28a3faff77bd4ea1ae22a29c5cdb52589e772 | [
"CC-BY-3.0",
"Apache-2.0"
] | null | null | null | starfield/README.md | antoniomo/ebiten-exercises | f3c28a3faff77bd4ea1ae22a29c5cdb52589e772 | [
"CC-BY-3.0",
"Apache-2.0"
] | null | null | null | The wasm build is just following: https://ebiten.org/documents/webassembly.html
The gopherjs build is following: https://ebiten.org/documents/gopherjs.html with
help from the main instructions at https://github.com/gopherjs/gopherjs. Because
as of now it still uses go 1.12, I also used xerrors package to have the .Is
error handling.
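
As a rough, hypothetical illustration (not code from this repo), matching wrapped errors with the xerrors package looks something like this:

```go
package main

import (
	"fmt"

	"golang.org/x/xerrors"
)

// Sentinel error we want to detect later.
var errNotFound = xerrors.New("not found")

func lookup(key string) error {
	// Wrap the sentinel error with extra context using %w.
	return xerrors.Errorf("lookup %q: %w", key, errNotFound)
}

func main() {
	err := lookup("star")
	// xerrors.Is walks the wrap chain, like errors.Is does on Go 1.13+.
	if xerrors.Is(err, errNotFound) {
		fmt.Println("got a not-found error:", err)
	}
}
```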
| 48 | 80 | 0.794643 | eng_Latn | 0.978156 |
c95fc736b159dcb34532b37421a3e4c20ae0b720 | 926 | md | Markdown | docs/BUILD.zh.md | consenlabs/imkey-core | e48a6bb530312cdb4988cf8ec3e4572dbed16b0a | [
"Apache-2.0"
] | 11 | 2020-11-06T14:52:55.000Z | 2022-01-09T13:01:47.000Z | docs/BUILD.zh.md | consenlabs/imkey-core | e48a6bb530312cdb4988cf8ec3e4572dbed16b0a | [
"Apache-2.0"
] | 2 | 2020-10-21T03:01:54.000Z | 2021-09-14T09:33:20.000Z | docs/BUILD.zh.md | consenlabs/imkey-core | e48a6bb530312cdb4988cf8ec3e4572dbed16b0a | [
"Apache-2.0"
] | 2 | 2020-11-23T07:00:42.000Z | 2021-05-26T09:57:12.000Z | # Build imKey Core X
## Install Rust
1. Install rustup
`curl https://sh.rustup.rs -sSf | sh`
After installation, run `rustc --version` to confirm it succeeded; during installation rustup also installs cargo, Rust's standard build tool.
2. Install the Android targets
`rustup target add aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android`
3. Install the iOS targets
`rustup target add aarch64-apple-ios armv7-apple-ios armv7s-apple-ios i386-apple-ios x86_64-apple-ios`
## Configure the iOS build tools
1. Install Xcode
2. Install the lipo build tools
```
cargo install cargo-lipo
cargo install cbindgen
```
3. Run `tools/build-ios.sh` from the `token-core` project. Pay attention to the directory settings in `tools/build-ios.sh`. The script builds the relevant .a files and copies them to the configured directory.
## Configure the Android build tools
1. Install the Android SDK. Android Studio ships with the Android SDK by default, but it can also be installed separately. The SDK bundled with Android Studio is located at `/Users/xxx/Library/Android/sdk`.
2. Configure `~/.cargo/config` (a hypothetical example is shown after this list).
3. Run `tools/build-android-linux.sh` from the `imkey-core` project. Note that the Android build is currently only supported on Linux systems.
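
For orientation only, a minimal `~/.cargo/config` sketch for the Android targets might look like the following. The `<NDK_HOME>` path and the API level `21` are placeholders (assumptions, not values from this project) and must be adapted to your local NDK installation:

```toml
# Hypothetical example; replace <NDK_HOME> and the API level with your own values.
[target.aarch64-linux-android]
linker = "<NDK_HOME>/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang"

[target.armv7-linux-androideabi]
linker = "<NDK_HOME>/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi21-clang"

[target.i686-linux-android]
linker = "<NDK_HOME>/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android21-clang"

[target.x86_64-linux-android]
linker = "<NDK_HOME>/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang"
```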
| 31.931034 | 121 | 0.707343 | yue_Hant | 0.243756 |
c960ad22a4924d22ae233ddb1200b1f2e8119223 | 7,699 | md | Markdown | sig-scalability/provider-configs.md | beekhof/k8s-community | c0254c5a854a0d12e7012028db3fad37ab931b7c | [
"Apache-2.0"
] | 1 | 2021-08-29T07:34:51.000Z | 2021-08-29T07:34:51.000Z | sig-scalability/provider-configs.md | beekhof/k8s-community | c0254c5a854a0d12e7012028db3fad37ab931b7c | [
"Apache-2.0"
] | null | null | null | sig-scalability/provider-configs.md | beekhof/k8s-community | c0254c5a854a0d12e7012028db3fad37ab931b7c | [
"Apache-2.0"
] | 1 | 2020-09-05T11:54:59.000Z | 2020-09-05T11:54:59.000Z | ### Scalability Testing/Analysis Environment and Goals
Project practice is to perform baseline scalability testing and analysis on a large single machine (VM or server) with all control plane processing on that single node. The single large machine provides sufficient scalability to scale to 5000 node density tests. The typical machine for testing at this scale is at the larger end of the VM scale available on public cloud providers, but is by no means the largest available. Large cluster runs are typically run with a node emulator (kubemark), with a set of resources to run kubemark typically requiring ~80 machines to simulate 5000 nodes, i.e. 60-ish hollow-nodes per machine.
The typical control plane server for this testing has 64 cores, at least 128GB of memory, Gig-E network interfaces, and SSD drives. Public cloud instances typically will have more memory than this for this number of cores.
Several factors contribute to a need to expand testing beyond the single-node baseline.
* Very large cluster operators typically run a minimum of 5 servers in the control plane to ensure that a control plane failure during upgrade is survivable without losing cluster state. We want testing to represent typical configurations at this scale..
* RAFT consensus processing on etcd means that 5-server clusters have different performance characteristics than 1-server clusters.
* Distributed systems often show different performance characteristics when the components are separated by a network versus co-execution on the same server.
* Distributed systems are often affected by cache consistency issues.
An important attribute of Kubernetes is broad support for different infrastructure options. Project experience is that testing on a variety of infrastructure options flushes out timing issues and improves quality. Users of kubernetes also find value in knowing that scale testing has been performed on the infrastructure options they care about.
Scalability testing on configurations that are similar is expected to produce similar results, and deviations from that expectation need attention. Regressions may indicate system issues or undocumented assumptions based on those differences, and should be both explored and documented. Noting the differences between various configs, and which provide the highest system throughput, may also give indications as to which performance optimizations are most interesting.
### Control Plane Machine Selection
As wide a selection of different infrastructure providers as possible helps the project. Configurations and testing are strongly welcomed for providers not currently listed, and the Scalability SIG is engaging with all of the providers listed below.
<table>
<tr>
<td>Provider</td>
<td>Machine type</td>
<td>Cores</td>
<td>Memory</td>
<td>Kubemark Needs</td>
<td>Notes</td>
</tr>
<tr>
<td>Google</td>
<td>n1-standard-64</td>
<td>64</td>
<td>240</td>
<td>80 instances</td>
<td>Used for 5000 node test results</td>
</tr>
<tr>
<td>AWS</td>
<td>m4.16xlarge</td>
<td>64</td>
<td>256</td>
<td>80 instances</td>
<td>Proposed</td>
</tr>
<tr>
<td>Azure</td>
<td>standard-g5</td>
<td>32</td>
<td>448</td>
<td>?</td>
<td>Max cores instance,
proposed</td>
</tr>
<tr>
<td>Packet</td>
<td>Type 2</td>
<td>24</td>
<td>256</td>
<td>?</td>
<td>Bare metal, proposed</td>
</tr>
</table>
### Additional Configuration Requirements
* Scalability SIG efforts are currently oriented towards the 1.6 and later releases. This focus will shift over time - the SIG’s efforts are aimed towards scalability on trunk.
* Configuration and tuning of etcd is a critical component and has dramatic effects on scalability of large clusters. Minimum etcd version is 3.1.8
* API server configured for load balancing, other components using standard leader election.
* Etcd is used for two distinct cluster purposes - cluster state and event processing. These have different i/o characteristics. It is important to scalability testing efforts that the iops provided by the servers to etcd be consistent and protected. This leads to two requirements:
* Split etcd: Two different etcd clusters for events and cluster state (note: this is currently the GKE production default as well).
* Dedicated separate IOPS for the etcd clusters on each control plane node. On a bare metal install this could look like a dedicated SSD. This requires a more specific configuration per provider. Config table below.
<table>
<tr>
<td>Provider</td>
<td>Volume type</td>
<td>Size per etcd partition (2x)</td>
<td>Notes</td>
</tr>
<tr>
<td>Google</td>
<td>SSD persistent disk</td>
<td>256GB</td>
<td>Iops increases with volume size</td>
</tr>
<tr>
<td>AWS</td>
<td>EBS Provisioned IOPS SSD (io1)</td>
<td>256GB</td>
<td>Proposed</td>
</tr>
<tr>
<td>Azure</td>
<td>?</td>
<td>?</td>
<td>
</td>
</tr>
<tr>
<td>Packet</td>
<td>Dedicated SSD</td>
<td>256GB</td>
<td>Bare metal, proposed</td>
</tr>
</table>
### Areas for Future Work
* Leader election results are non-deterministic on a typical cluster, and a config would be best served by being configured for the worst case. It is not presently known whether there are performance impacts when leader election results in either co-location or distribution of those components.
* Improving the cluster performance loading to match production deployment scenarios is critical on-going work, especially clusterloader: [https://github.com/kubernetes/perf-tests/tree/master/clusterloader](https://github.com/kubernetes/perf-tests/tree/master/clusterloader)
* Multi-zone / multi-az deployments are often used to manage large clusters, but for testing/scalability efforts the target is intentionally a single Availability Zone. This keeps greater consistency between environments that do and don’t support AZ-based deployments. Failures during scalability testing are outside the SIG charter. Protecting against network partitioning and improving total cluster availability (one of the key benefits to a multi-AZ strategy) are currently out of scope for the Scalability SIG efforts.
* Scalability issues on very large clusters of actual nodes (instead of kubemark simulations) are real. Efforts to improve large cluster networking performance e.g. IPVS are important, and will be interesting areas for cross-SIG collaboration.
### Control Plane Cluster Config
The diagram shows the high-level target control plane config using the server types listed above, capturing:
* 5 server cluster
* Split etcd across 5 nodes
* API server load balanced
* Other components using leader election
Detail: The target config uses separate volumes for the etcd configs:
Config hints to testers:
* ELB, by default, has a short timeout that'll cause control plane components to resync often. Users should set that to the max.
### Alternative Config
Motivated by many of the issues above, an alternative configuration is reasonable, and worth experimentation, as some production environments are built this way:
* Separate etcd cluster onto a dedicated set of 5 machines for etcd only.
* Do not run split etcd
* Run remainder of control plane on 5 nodes separately
* Question for discussion: are there advantages to this configuration for environments where the max number of cores per host is < 64?
### References
CoreOS commentary on etcd sizing.
[https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md)
| 47.233129 | 629 | 0.757761 | eng_Latn | 0.995729 |
c961095fdd1ed7fd219a87e233241be737fc4e1b | 2,624 | md | Markdown | help/home/c-inst-svr/c-rptr-fntly/c-cnfg-rptr-fntly/t-cfg-acc-ctrll-tgt-mach.md | K3rnW3rks/data-workbench.en | 8ebdce5699abdb935188f16476e17b888443bf23 | [
"Apache-2.0"
] | null | null | null | help/home/c-inst-svr/c-rptr-fntly/c-cnfg-rptr-fntly/t-cfg-acc-ctrll-tgt-mach.md | K3rnW3rks/data-workbench.en | 8ebdce5699abdb935188f16476e17b888443bf23 | [
"Apache-2.0"
] | null | null | null | help/home/c-inst-svr/c-rptr-fntly/c-cnfg-rptr-fntly/t-cfg-acc-ctrll-tgt-mach.md | K3rnW3rks/data-workbench.en | 8ebdce5699abdb935188f16476e17b888443bf23 | [
"Apache-2.0"
] | null | null | null | ---
description: Target Insight Server machines running the Insight Server Replication Service must be able to read the log files on this repeater server.
seo-description: Target Insight Server machines running the Insight Server Replication Service must be able to read the log files on this repeater server.
seo-title: Configuring Access Control for Target Machines
solution: Insight
title: Configuring Access Control for Target Machines
uuid: 35e032cf-6c1d-4348-88ce-4f4a6a30b16f
---
# Configuring Access Control for Target Machines{#configuring-access-control-for-target-machines}
Target Insight Server machines running the Insight Server Replication Service must be able to read the log files on this repeater server.
Access to the target machines is granted using the [!DNL Access Control.cfg] file.
**To configure Access Control for access by target [!DNL Insight Server] machines**
1. Navigate to the [!DNL Access Control] folder in the directory where you installed repeater functionality.
Example: [!DNL D:\Adobe\Repeater\Access Control]
1. Open [!DNL Access Control.cfg] in a text editor such as Notepad.
1. Create an access group for the [!DNL Insight Server] machines that must access the log files on this repeater server. Give this access group a name like “Replication Targets.”
The following file fragment shows how the access group should look.
```
. . .
6 = AccessGroup:
Members = vector: N items
0 = string: IP:Machine0IPAddress
1 = string: IP:Machine1IPAddress
. . .
N = string: IP:MachineNIPAddress
Name = string: Replication Targets
Read-Only Access = vector: 1 items
0 = string: EventDataLocation
Read-Write Access = vector: 0 items
. . .
```
1. In the Members section, specify the IP address for each machine.
1. Update the items count for the Members vector to reflect the number of machine IP addresses you inserted.
1. In the Read-Only Access section, specify the location of the event data that the replication targets access. Use forward slashes in the path specification (/). The default location is the [!DNL Logs] folder on the Repeater machine (/Logs/).
1. Update the items count for the Read-Only Access vector to reflect the number of locations you inserted.
1. Update the number of access groups in the Access Control Groups vector at the top of the file to reflect the addition of the new access group.
```
Access Control Groups = vector: n items
```
1. Save the file.
| 48.592593 | 248 | 0.723323 | eng_Latn | 0.975801 |
c9622739b03adac342cccc117dcc39ecf86449ae | 1,733 | md | Markdown | knowledge-base/steps-on-how-to-add-objectdatadource-in-a-report-designed-in-the-standalone-designer.md | nventex/reporting-docs | 9c6bfa1ecba22186c4139b3be54bccab1ca0b57d | [
"Apache-2.0"
] | 3 | 2018-01-16T10:43:30.000Z | 2020-08-27T19:29:57.000Z | knowledge-base/steps-on-how-to-add-objectdatadource-in-a-report-designed-in-the-standalone-designer.md | nventex/reporting-docs | 9c6bfa1ecba22186c4139b3be54bccab1ca0b57d | [
"Apache-2.0"
] | 15 | 2019-11-06T18:13:01.000Z | 2022-03-31T13:54:36.000Z | knowledge-base/steps-on-how-to-add-objectdatadource-in-a-report-designed-in-the-standalone-designer.md | nventex/reporting-docs | 9c6bfa1ecba22186c4139b3be54bccab1ca0b57d | [
"Apache-2.0"
] | 17 | 2018-01-23T11:25:59.000Z | 2021-11-30T07:45:28.000Z | ---
title: Steps on how to add ObjectDataSource in a report designed in the Standalone Designer
description: This KB article lists all of the required steps to add ObjectDataSource Component in the Standalone Designer.
type: how-to
page_title: Adding ObjectDataSource in the Standalone Designer
slug: steps-on-how-to-add-objectdatadource-in-a-report-designed-in-the-standalone-designer
position:
tags:
ticketid: 1407612
res_type: kb
---
## Environment
<table>
<tbody>
<tr>
<td>Product</td>
<td>Progress® Telerik® Reporting</td>
</tr>
</tbody>
</table>
## Solution
1. Open **Visual Studio** and create a new **Class Library**.
2. Add the following piece of code from [How to: Bind to a BusinessObject article](../object-data-source-how-to-bind-to-business-object).
```CSharp
class Product
{
. . .
}
[DataObject]
class Products
{
. . .
}
```
3. Run the project and close it.
4. Copy the built assembly to clipboard - navigate to the project folder -> **bin** -> **Debug** and copy the dll file.
5. Navigate to the installation folder of the Report designer (for example: *C:\Program Files (x86)\Progress\Telerik Reporting R1 2019\Report Designer* ) and paste the .dll file.
6. Open the **Telerik.ReportDesigner.exe.config** file through an editor and add a reference for the assembly. For example:
```
<Telerik.Reporting>
<AssemblyReferences>
<add name="MyClassLibrary"/>
</AssemblyReferences>
</Telerik.Reporting>
```
7. Save and close.
8. Open the **Report Designer** and create a new report.
9. Navigate to **Data** -> **Object Data Source**.
10. Select from the **Available data source types** and follow the Wizard instructions until the process is completed.
| 30.946429 | 178 | 0.713791 | eng_Latn | 0.93174 |
c9623b18bf7e29384f823d8dc2c2d8591336d6fa | 512 | md | Markdown | deployment_scripts/puppet/modules/influxdb/README.md | saqibarfeen/iota-influxdb-grafana | 8f6db7c272602b78e43f3c2d0522c13d5ad9e491 | [
"Apache-2.0"
] | null | null | null | deployment_scripts/puppet/modules/influxdb/README.md | saqibarfeen/iota-influxdb-grafana | 8f6db7c272602b78e43f3c2d0522c13d5ad9e491 | [
"Apache-2.0"
] | null | null | null | deployment_scripts/puppet/modules/influxdb/README.md | saqibarfeen/iota-influxdb-grafana | 8f6db7c272602b78e43f3c2d0522c13d5ad9e491 | [
"Apache-2.0"
] | null | null | null | InfluxDB module for Puppet
==========================
Description
-----------
Puppet module for installing and configuring InfluxDB 0.9.x.
Usage
-----
```puppet
class { 'influxdb':
  meta_dir => '/opt/influxdb/meta',
  data_dir => '/opt/influxdb/data',
  hh_dir   => '/opt/influxdb/hh',
}
```
Limitations
-----------
None.
License
-------
Licensed under the terms of the Apache License, version 2.0.
Contact
-------
Guillaume Thouvenin, <[email protected]>
Support
-------
See the Contact section.
| 13.128205 | 60 | 0.617188 | eng_Latn | 0.408287 |
c96272a12add8fca8463e857953bbae359439fbb | 2,983 | md | Markdown | docs/skill-development/skill-types/fallback-skill.md | HelloChatterbox/dev_documentation | 5c55cb4d7293dc19194bffb05959977ce6fdc7ed | [
"Apache-2.0"
] | null | null | null | docs/skill-development/skill-types/fallback-skill.md | HelloChatterbox/dev_documentation | 5c55cb4d7293dc19194bffb05959977ce6fdc7ed | [
"Apache-2.0"
] | null | null | null | docs/skill-development/skill-types/fallback-skill.md | HelloChatterbox/dev_documentation | 5c55cb4d7293dc19194bffb05959977ce6fdc7ed | [
"Apache-2.0"
] | null | null | null | ---
description: >-
A Fallback Skill is a Skill that will be called if no Intent is matched to the
Utterance.
---
# Fallback Skill
## Fallback **Skill** order of precedence
The Fallback **Skills** all have a priority and will be checked in order from low priority value to high priority value. If a Fallback **Skill** can handle the **Utterance** it will create a response and return `True`. After this no other Fallback **Skills** are tried. This means the priority value for Fallbacks that can handle a _broad_ range of queries should be _high_ \(80-100\), while Fallbacks that only respond to a very specific range of queries should use a lower value \(20-80\). The more specific the Fallback, the lower the priority value.
## Creating a Fallback **Skill**
Import the `FallbackSkill` base class:
```python
from chatterbox import FallbackSkill
```
Create a derived class:
```python
class MeaningFallback(FallbackSkill):
"""
A Fallback skill to answer the question about the
meaning of life, the universe and everything.
"""
def __init__(self):
        super(MeaningFallback, self).__init__(name='My Fallback Skill')
# Add your own initialization code here
```
Register the handler with the fallback system
_Note: a `FallbackSkill` can register any number of fallback handlers_
```python
def initialize(self):
"""
Registers the fallback handler
"""
self.register_fallback(self.handle_fallback, 10)
# Any other initialize code you like can be placed here
```
Implement the fallback handler \(the method that will be called to potentially handle the **Utterance**\). The method implements logic to determine if the **Utterance** can be handled and shall output speech if it can handle the query. It shall return Boolean `True` if the **Utterance** was handled and Boolean `False` if not.
```python
def handle_fallback(self, message):
"""
Answers question about the meaning of life, the universe
and everything.
"""
utterance = message.data.get("utterance")
if 'what' in utterance
and 'meaning' in utterance
and ('life' in utterance
or 'universe' in utterance
or 'everything' in utterance):
self.speak('42')
return True
else:
return False
```
Finally, the **Skill** creator must make sure the skill handler is removed when the **Skill** is shutdown by the system.
```python
def shutdown(self):
"""
Remove this skill from list of fallback skills.
"""
self.remove_fallback(self.handle_fallback)
super(MeaningFallback, self).shutdown()
```
And as with a normal **Skill** the function `create_skill()` needs to be in the file to instantiate the skill.
```python
def create_skill():
return MeaningFallback()
```
The above example can be found [here](https://github.com/forslund/fallback-meaning).
| 34.287356 | 524 | 0.67583 | eng_Latn | 0.988373 |
c9632edb97ff1eb6af43eb9a7657a1061b45a284 | 7,109 | md | Markdown | doc/developer/overview.md | Seagate/halon | 8a62438551659a3549f2c44aae05dc83397a3bc0 | [
"Apache-2.0"
] | null | null | null | doc/developer/overview.md | Seagate/halon | 8a62438551659a3549f2c44aae05dc83397a3bc0 | [
"Apache-2.0"
] | null | null | null | doc/developer/overview.md | Seagate/halon | 8a62438551659a3549f2c44aae05dc83397a3bc0 | [
"Apache-2.0"
] | null | null | null | % Halon Architectural Overview
% Nicholas Clarke
% 2015-11-30
# Halon Architectural Overview
## Background concepts
### Paxos
Paxos is a protocol designed to solve the problem of distributed consensus
in the face of failure. Broadly speaking, Paxos allows for a number of
(remote) processes to agree on some shared state, even in the presence of
message loss, delay, or the failure of some of those processes.
Halon uses Paxos to ensure the high availability of its core component, the
recovery coordinator. It does this by ensuring that the recovery coordinator's
state is shared with other processes which are able to take over from the
recovery coordinator in the event of its failure.
Further reading:
- [Paxos (Wikipedia)](https://en.wikipedia.org/wiki/Paxos_(computer_science))
- [Replication HLD](../hld/replication/hld.rst)
### Cloud Haskell
Cloud Haskell (otherwise known as distributed-process, and abbreviated as C-H
or D-P) is an implementation of Erlang OTP in Haskell. It may also be considered
as an implementation of the Actor model. It presents a view of the system as a
set of communicating _processes_. Processes may exchange messages, take (local)
actions, spawn new processes etc.
To Halon, Cloud Haskell provides the ability to write code for a distributed
system without needing to know where exactly each component is running. Thus
we may talk between components in the system without worrying whether they are
running on the same node or a remote node.
Every component of Halon can be seen as consisting of one or more C-H processes.
For this reason, we shall use the term 'Process' to refer to a C-H process,
and append the term 'System' when we wish to refer to a Unix level process.
Further reading:
- [Erlang OTP](http://learnyousomeerlang.com/what-is-otp)
- [Actor model (Wikipedia)](https://en.wikipedia.org/wiki/Actor_model)
- [Cloud Haskell](http://haskell-distributed.github.io/)
## Layer View
We may (loosely) think of Halon as operating on a number of layers:
- Services
- Recovery
- Event queue
- Replicator
- Paxos
- Cloud Haskell
We describe these layers in inverse order, starting with the bottom layer:
### Cloud Haskell
Cloud Haskell provides the abstraction on top of which all other layers sit.
Crucially, it provides the functionality to:
- Spawn processes on local or remote nodes.
- Send messages between processes.
- Monitor processes for failure.
All other layers are implemented as some collection of C-H processes running
on one or more nodes.
Code pointers:
To begin understanding the structure of C-H code:
- [Cloud Haskell](http://haskell-distributed.github.io/)
- [distributed-process documentation](http://hackage.haskell.org/package/distributed-process-0.5.5.1)
There is little of this layer implemented within Halon itself, but you may
consider `halond`, which is the executable which starts a C-H system-level
process on a node. A network of `halond` instances, whilst doing nothing
themselves, will come to host all of Halon's functionality.
- `mero-halon/src/halond`
### Paxos
Atop the C-H layer sits a network of processes implementing the Paxos algorithm.
The Paxos layer defines means by which a number of processes may agree on
updates to a shared state. This layer does not care about precisely what that
state is.
Code pointers:
The code implementing the Paxos layer lives in three packages:
- `consensus` provides a basis on which to implement an abstract protocol for
managing consensus.
- `consensus-paxos` provides an implementation of the `consensus` interface
using Paxos.
- `replicated-log` extends the Paxos implementation to multi-Paxos, allowing
for a log of decrees. At each 'round' of Paxos, a single value is agreed upon
by the processes participating in Paxos. `replicated-log` allows for a list
of such decrees.
Further reading:
- [Replication HLD](../replication/hld.rst)
### Replicator
The purpose of the replicator is to provide a simple interface to the paxos
code. The replicator allows us to address the group of consensus processes in
the manner of a simple data store.
Code pointers:
- `halon/src/lib/HA/Replicator.hs` defines the basic interface to a replication
group. In particular, see `getState` and `updateStateWith`.
- `halon/src/lib/HA/Replicator/Log.hs` implements the replication group interface
atop the `replicated-log` package.
- `halon/src/lib/HA/Replicator/Mock.hs` implements the replication group interface
atop a simple in-memory value. This is used in testing of higher-level
components when running Paxos is unimportant.
### Event queue
The event queue (EQ) provides a guaranteed delivery message queue, built atop
the replicator. Messages acknowledged by the event queue are guaranteed to be
delivered (at some point) to the recovery coordinator, and to be resilient to
the loss of up to half of the Paxos nodes.
Code pointers:
- `halon/src/lib/HA/EventQueue.hs` provides the code for the event queue itself.
- `halon/src/lib/HA/EventQueue/Producer.hs` provides the functions which are
used to send messages to the event queue, and verify acknowledgement.
- `halon/src/lib/HA/EQtracker.hs` is not directly related to the EQ, but is a
separate process running on each node which is responsible for tracking the
location of nodes running the EQ. This is used to simplify sending messages to
the EQ.
Further reading:
- [Event Queue component](components/event-queue.rst)
### Recovery
The recovery layer consists of a single process known as the recovery
coordinator and a collection of processes which stand ready to respawn the
recovery coordinator should it die, known as recovery supervisors.
The recovery coordinator implements the central logic of Halon; it takes a
stream of messages from the EQ and runs a set of rules which embody
our failure handling logic. In order to do this, the recovery coordinator
additionally has access to shared state (the 'Resource Graph') which is held
consistent amongst multiple nodes through use of the replicator.
In addition to the single recovery coordinator, we have multiple recovery
supervisors. These monitor (and routinely ping) the recovery coordinator, and
if they detect its failure are responsible for spawning a new one. Use of the
replicator ensures both that only one new recovery coordinator is started and
that the new recovery coordinator has access to the same state as the dead one.
Code pointers:
The recovery coordinator is the most complex part of Halon, and much of its
code consists of the recovery rules, which are implemented in their own DSL
(domain specific language). As such, we provide only basic pointers to parts
of the recovery mechanism:
- `mero-halon/src/lib/RecoveryCoordinator/Mero.hs` is the entry point for the
recovery coordinator.
- `mero-halon/src/lib/RecoveryCoordinator/Rules` contains the various rules
by which recovery logic is implemented.
- `mero-halon/src/lib/RecoveryCoordinator/Actions` contains the primitive
actions and queries which make up rules.
- `halon/src/lib/HA/RecoverySupervisor.hs` implements the recovery supervisor.
| 40.392045 | 101 | 0.786609 | eng_Latn | 0.998603 |
c963bea3051570c82b33d490f6bcf230cfe03e42 | 1,043 | md | Markdown | help/home/c-dataset-const-proc/c-add-config-files/c-server.md | AdobeDocs/data-workbench.en | 322f881bb4ac45ed36337d0ff7f9855e407cee07 | [
"MIT"
] | null | null | null | help/home/c-dataset-const-proc/c-add-config-files/c-server.md | AdobeDocs/data-workbench.en | 322f881bb4ac45ed36337d0ff7f9855e407cee07 | [
"MIT"
] | null | null | null | help/home/c-dataset-const-proc/c-add-config-files/c-server.md | AdobeDocs/data-workbench.en | 322f881bb4ac45ed36337d0ff7f9855e407cee07 | [
"MIT"
] | 5 | 2019-11-03T18:41:55.000Z | 2020-09-27T09:43:57.000Z | ---
description: The Sample Bytes parameter in the Server.cfg file specifies the data cache size (in bytes) for data workbench.
title: Server.cfg
uuid: 7e789133-09fc-442d-b643-cca8620f4a97
exl-id: fb7667f6-4061-4bde-8a48-6489b24e0411
---
# Server.cfg{#server-cfg}
The Sample Bytes parameter in the Server.cfg file specifies the data cache size (in bytes) for data workbench.
The default value is 250e6. Instructions for opening and saving the [!DNL Server.cfg] file are the same as those for [!DNL Log Processing Mode.cfg]. See [Log Processing Mode.cfg](../../../home/c-dataset-const-proc/c-add-config-files/t-log-proc-mode.md#task-e530907cb34f488182afe625e6d9e44a).
>[!NOTE]
>
>Because this file's parameter affects system performance, please contact Adobe before making any changes.
You can further limit the size of the data cache for data workbench machines that connect to the data workbench server by setting the Maximum Sample Size parameter in the [!DNL Insight.cfg] file. For more information, see the *Data Workbench User Guide*.
| 57.944444 | 291 | 0.780441 | eng_Latn | 0.934579 |
c9643e481b2c7b2eb42f41d7e023d5efb64931c7 | 822 | md | Markdown | _posts/sort/2018-03-18-Baekjoon-Sort-2-post.md | Camouflage129/Camouflage129.github.io | d78f54bf437138580cad70ce8a568c12acd243d1 | [
"MIT"
] | null | null | null | _posts/sort/2018-03-18-Baekjoon-Sort-2-post.md | Camouflage129/Camouflage129.github.io | d78f54bf437138580cad70ce8a568c12acd243d1 | [
"MIT"
] | null | null | null | _posts/sort/2018-03-18-Baekjoon-Sort-2-post.md | Camouflage129/Camouflage129.github.io | d78f54bf437138580cad70ce8a568c12acd243d1 | [
"MIT"
] | null | null | null | ---
layout: post
title: 1472. 소트인사이드
categories: [sort]
---
[View Baekjoon problem 1472](https://www.acmicpc.net/problem/1472)
**== Hint ==**<br>
Using the modulo (remainder) operation makes this easy.
```cpp
#include <iostream>
#include <cstdio>
#include <vector>
#include <algorithm>
using namespace std;
void pushNum(long long input, vector<int>& result) {
result.push_back(input % 10);
if(input / 10 >= 1)
pushNum(input / 10, result);
}
int main() {
long long input;
cin >> input;
vector<int> result;
pushNum(input, result);
sort(result.begin(), result.end());
for (int i = result.size() - 1; i >= 0; i--) {
printf("%d", result[i]);
}
}
```
**== Solution ==**<br>
Starting from the last digit, push each remainder of division by 10 into the vector, then sort.<br>
std::sort is ascending by default, so printing the vector in reverse order finishes the job.<br>
Note that the vector must be passed by reference, so the parameter uses &.<br>
{% if site.dispus-shortname %}{% include dispus.html %}{% endif %} | 16.77551 | 66 | 0.63382 | kor_Hang | 0.967936 |
c9645b933507da043df3b1b7ffbce1987f3da896 | 3,764 | md | Markdown | docs/csharp/programming-guide/concepts/linq/deferred-execution-and-lazy-evaluation-in-linq-to-xml.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/programming-guide/concepts/linq/deferred-execution-and-lazy-evaluation-in-linq-to-xml.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/programming-guide/concepts/linq/deferred-execution-and-lazy-evaluation-in-linq-to-xml.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Deferred execution and lazy evaluation in LINQ to XML (C#)
ms.date: 07/20/2015
ms.assetid: 8683d1b4-b7ec-407b-be12-906ebe958a09
ms.openlocfilehash: 9cf28afb5b7b8b3047c8b1b21915ffe7409eb25e
ms.sourcegitcommit: 986f836f72ef10876878bd6217174e41464c145a
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 08/19/2019
ms.locfileid: "69594562"
---
# <a name="deferred-execution-and-lazy-evaluation-in-linq-to-xml-c"></a>Esecuzione posticipata e valutazione lazy in LINQ to XML (C#)
Operazioni di query e su asse vengono spesso implementate in modo da usare l'esecuzione posticipata. In questo argomento vengono illustrati requisiti e vantaggi dell'esecuzione posticipata e vengono fornite alcune considerazioni sull'implementazione.
## <a name="deferred-execution"></a>Esecuzione posticipata
Per esecuzione posticipata si intende che la valutazione di un'espressione viene ritardata finché il relativo valore *realizzato* non risulta effettivamente necessario. L'esecuzione posticipata può contribuire a migliorare notevolmente le prestazioni quando è necessario modificare grandi raccolte di dati, in particolare in programmi che contengono una serie di modifiche o query concatenate. Nel migliore dei casi l'esecuzione posticipata consente di eseguire un'unica iterazione nella raccolta di origine.
Le tecnologie LINQ usano notevolmente l'esecuzione posticipata sia nei membri di classi <xref:System.Linq?displayProperty=nameWithType> principali che nei metodi di estensione dei diversi spazi dei nomi LINQ, ad esempio <xref:System.Xml.Linq.Extensions?displayProperty=nameWithType>.
L'esecuzione posticipata è supportata direttamente nel linguaggio C# usando la parola chiave [yield](../../../language-reference/keywords/yield.md) (sotto forma di istruzione `yield-return`) quando viene usata all'interno di un blocco iteratore. Tale iteratore deve restituire una raccolta di tipo <xref:System.Collections.IEnumerator> o <xref:System.Collections.Generic.IEnumerator%601> (o un tipo derivato).
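 As a minimal, hypothetical sketch (not part of the original article), an iterator block that defers execution might look like this — nothing is computed until the `foreach` loop actually requests each element:

```csharp
using System;
using System.Collections.Generic;

class DeferredExecutionSketch
{
    // Each value is produced only when the caller asks for it.
    static IEnumerable<int> Squares(int count)
    {
        for (int i = 1; i <= count; i++)
        {
            Console.WriteLine($"Computing {i} * {i}");
            yield return i * i; // execution pauses here until the next element is requested
        }
    }

    static void Main()
    {
        IEnumerable<int> query = Squares(3); // no work has happened yet
        foreach (int n in query)             // values are computed one at a time
        {
            Console.WriteLine(n);
        }
    }
}
```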
## <a name="eager-vs-lazy-evaluation"></a>Valutazione eager e valutazione lazy
Quando si scrive un metodo che implementa l'esecuzione posticipata, è inoltre necessario decidere se implementare il metodo tramite la valutazione lazy o la valutazione eager.
- Nella *valutazione lazy* durante ogni chiamata all'iteratore viene elaborato un unico elemento della raccolta di origine. Si tratta della modalità di implementazione tipica degli iteratori.
- Nella *valutazione eager* in seguito alla prima chiamata all'iteratore verrà elaborata l'intera raccolta. Potrebbe inoltre essere necessaria una copia temporanea della raccolta di origine. Ad esempio, il metodo <xref:System.Linq.Enumerable.OrderBy%2A> deve ordinare l'intera raccolta prima di restituire il primo elemento.
La valutazione lazy offre in genere prestazioni migliori perché implica una distribuzione uniforme dell'overhead di elaborazione in tutte le fasi della valutazione della raccolta e riduce al minimo l'uso di dati temporanei. Per alcune operazioni è naturalmente inevitabile dover materializzare i risultati intermedi.
## <a name="next-steps"></a>Passaggi successivi
Nel successivo argomento dell'esercitazione verrà illustrata l'esecuzione posticipata:
- [Esempio di esecuzione posticipata (C#)](./deferred-execution-example.md)
## <a name="see-also"></a>Vedere anche
- [Esercitazione: Concatenamento di query (C#)](./deferred-execution-and-lazy-evaluation-in-linq-to-xml.md)
- [Concetti e terminologia (trasformazione funzionale) (C#)](./concepts-and-terminology-functional-transformation.md)
- [Operazioni di aggregazione (C#)](./aggregation-operations.md)
- [yield](../../../language-reference/keywords/yield.md)
| 89.619048 | 511 | 0.801541 | ita_Latn | 0.995412 |
c964bd8b39c7d462545895b7e2c49054efc71fa0 | 4,569 | md | Markdown | docs/advanced/repurposetimeline.md | evanwill/cb-docs | b15f2288e4e1f9795c946e06bbd26ee054d5d111 | [
"MIT"
] | null | null | null | docs/advanced/repurposetimeline.md | evanwill/cb-docs | b15f2288e4e1f9795c946e06bbd26ee054d5d111 | [
"MIT"
] | 12 | 2021-04-29T23:59:31.000Z | 2022-01-07T09:42:54.000Z | docs/advanced/repurposetimeline.md | evanwill/cb-docs | b15f2288e4e1f9795c946e06bbd26ee054d5d111 | [
"MIT"
] | null | null | null | ---
title: Repurposing the Timeline
parent: Advanced
nav_order: 3
---
# Change Timeline Visualization to Feature Other Data
In some instances, you may have other data that is a natural fit for the Timeline visualization (for instance, we once used it to visualize [depth](https://www.lib.uidaho.edu/digital/watkins/depth.html)!)
In order to change the type of information the Timeline displays, you'll need to change a variable in the liquid code in the "timeline.html" file, contained in the "_layouts" folder.
### Change the Field Generating the Timeline
{:.alert .alert-yellow .mt-4}
Note: You can choose any field to add to this visualization, but if the field's datatype is "text" (rather than "integer"), you'll need to start with the steps in this section, and then move on to the [instructions below](#change-timeline-visualization-to-include-text-values-rather-than-integers) to include a text field.
1. Navigate to the "_layouts" directory, and open the "timeline.html" file.
2. On the second line of code in the "timeline.html" file, change the value for "map" from "date" to another metadata field that you'd like to represent.
3. The line you should change looks like this:
{% raw %}`{%- assign raw-dates = site.data[site.metadata] | map: 'date' | compact | uniq -%}`{% endraw %}
If we were changing it to map the field `depth`, it would then look this:
{% raw %}`{%- assign raw-dates = site.data[site.metadata] | map: 'depth' | compact | uniq -%}`{% endraw %}
{:.alert}
We'll use `depth` as our example for the steps below.
You can refer to the depth [visualization](https://www.lib.uidaho.edu/digital/watkins/depth.html) as an example, and check out the code in the revised "timeline.html" [layout](https://github.com/uidaholib/collectionbuilder-cdm-template/blob/watkins/_layouts/timeline.html) that we used to create it.
### Connect Your New Field to the Output
You'll need to make one more change to ensure that your new field (in our case, `depth`) can generate the visualization.
1. To do this, you'll need to search the "timeline.html" file for the Liquid filter "where_exp". Use `Ctrl + F` (PC) or `Command + F` (Mac) to open a find and replace textbox in Visual Studio Code. The line you're looking for should look like this:
`{%- assign inYear = items | where_exp: 'item', 'item.date contains year' -%}`
2. Once you've found "where_exp", change the part after "where_exp" so "item.date contains year" becomes "item.depth contains year". The end result will be:
{% raw %}`{%- assign inYear = items | where_exp: 'item', 'item.depth contains year' -%}`{% endraw %}
*Note: We are keeping the `year` variable constant here so as not to have to edit all the code and risk messing it up somewhere. You can, however, go through and edit all the `year` variables on the page to become `depth` if you like to keep things readable.*
### Change the Page Title and Navigation to Rename the Timeline Feature
Now that we are looking at another field (in our case, `depth`) rather than `date`, we'll likely want to change the way that the Timeline page is named and linked.
1. Edit "_data/config-nav.csv" by removing the line `Timeline,/timeline.html` and replacing it with `Depth,/depth.html` (or whatever matches up to your new page).
2. Then navigate to the "pages" directory and open the "timeline.md" markdown file.
3. Locate the yaml front matter at the top of the file (the front matter is the `key: value` pairs between two lines of dashes (`---`)).
4. You'll want to edit the front matter values to look like this (replacing `depth` with whichever field you're using):
```yaml
---
title: Depth
layout: timeline
permalink: /depth.html
# a timeline visualization will be added below the content in this file
---
## Collection Depth
```
### Change Timeline Visualization to Include Text Values Rather Than Integers
You can also visualize a metadata field with a "text" datatype, instead of an "integer" datatype.
You'll need to follow the [instructions above](#change-the-field-generating-the-timeline) to switch out `date` for your chosen field, then follow the steps below.
1. Find the following line of code in the "timeline.html" file:
{% raw %} `{%- assign uniqueYears = clean-years | remove: " " | split: ";" | uniq | sort -%}` {% endraw %}
2. Remove this portion of the code: `| remove: " "`, so that the line now looks like this:
{% raw %}`{%- assign uniqueYears = clean-years | split: ";" | uniq | sort -%}`{% endraw %}
Your Timeline visualization will now be based on the text values in your chosen field.
| 57.1125 | 322 | 0.730138 | eng_Latn | 0.996779 |
c964e57079498cc7d055041cf6ccba18859bb42b | 760 | md | Markdown | 2014/CVE-2014-4981.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 2,340 | 2022-02-10T21:04:40.000Z | 2022-03-31T14:42:58.000Z | 2014/CVE-2014-4981.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 19 | 2022-02-11T16:06:53.000Z | 2022-03-11T10:44:27.000Z | 2014/CVE-2014-4981.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 280 | 2022-02-10T19:58:58.000Z | 2022-03-26T11:13:05.000Z | ### [CVE-2014-4981](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4981)



### Description
LPAR2RRD in 3.5 and earlier allows remote attackers to execute arbitrary commands due to insufficient input sanitization of the web GUI parameters.
### POC
#### Reference
- http://ocert.org/advisories/ocert-2014-005.html
- http://packetstormsecurity.com/files/127593/LPAR2RRD-3.5-4.53-Command-Injection.html
- http://www.openwall.com/lists/oss-security/2014/07/23/6
#### Github
No PoCs found on GitHub currently.
| 38 | 147 | 0.753947 | yue_Hant | 0.262179 |
c964e965db633c9d85939b1372a53a1f0675d5b1 | 2,374 | md | Markdown | sharepoint/sharepoint-ps/sharepoint-pnp/Set-PnPHubSite.md | kaarins/office-docs-powershell | 15ed0e8807fe5edd9f895bd17f524092e6468dd7 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-03-06T16:53:20.000Z | 2020-03-06T16:53:20.000Z | sharepoint/sharepoint-ps/sharepoint-pnp/Set-PnPHubSite.md | kaarins/office-docs-powershell | 15ed0e8807fe5edd9f895bd17f524092e6468dd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sharepoint/sharepoint-ps/sharepoint-pnp/Set-PnPHubSite.md | kaarins/office-docs-powershell | 15ed0e8807fe5edd9f895bd17f524092e6468dd7 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-01-20T07:13:30.000Z | 2021-01-20T07:13:30.000Z | ---
external help file:
online version: https://docs.microsoft.com/powershell/module/sharepoint-pnp/set-pnphubsite
applicable: SharePoint Online
schema: 2.0.0
---
# Set-PnPHubSite
## SYNOPSIS
Sets hub site properties
## SYNTAX
```powershell
Set-PnPHubSite [-SiteDesignId <GuidPipeBind>]
[-HideNameInNavigation [<SwitchParameter>]]
[-RequiresJoinApproval [<SwitchParameter>]]
[-Connection <SPOnlineConnection>]
```
## DESCRIPTION
Allows configuring a hub site
## EXAMPLES
### ------------------EXAMPLE 1------------------
```powershell
Set-PnPHubSite -Identity https://tenant.sharepoint.com/sites/myhubsite -Title "My New Title"
```
Sets the title of the hub site
### ------------------EXAMPLE 2------------------
```powershell
Set-PnPHubSite -Identity https://tenant.sharepoint.com/sites/myhubsite -Description "My updated description"
```
Sets the description of the hub site
### ------------------EXAMPLE 3------------------
```powershell
Set-PnPHubSite -Identity https://tenant.sharepoint.com/sites/myhubsite -SiteDesignId df8a3ef1-9603-44c4-abd9-541aea2fa745
```
Sets the site design which should be applied to sites joining the hub site
### ------------------EXAMPLE 4------------------
```powershell
Set-PnPHubSite -Identity https://tenant.sharepoint.com/sites/myhubsite -LogoUrl "https://tenant.sharepoint.com/SiteAssets/Logo.png"
```
Sets the logo of the hub site
## PARAMETERS
### -HideNameInNavigation
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Required: False
Position: Named
Accept pipeline input: False
```
### -RequiresJoinApproval
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Required: False
Position: Named
Accept pipeline input: False
```
### -SiteDesignId
GUID of the SharePoint Site Design which should be applied when a site joins the hub site
```yaml
Type: GuidPipeBind
Parameter Sets: (All)
Required: False
Position: Named
Accept pipeline input: False
```
### -Connection
Optional connection to be used by the cmdlet. Retrieve the value for this parameter by either specifying -ReturnConnection on Connect-PnPOnline or by executing Get-PnPConnection.
```yaml
Type: SPOnlineConnection
Parameter Sets: (All)
Required: False
Position: Named
Accept pipeline input: False
```
## RELATED LINKS
[SharePoint Developer Patterns and Practices](https://aka.ms/sppnp)

Source file: code/java/SecuritizedDerivativesAPIforDigitalPortals/v2/docs/SecuritizedDerivativeIssuerSearchDataRegistrationCountry.md (repo: factset/enterprise-sdk; license: Apache-2.0)
# SecuritizedDerivativeIssuerSearchDataRegistrationCountry
List of countries of registration for trading of securitized derivatives. Only issuers that have registered at least one securitized derivative in a country in the provided list are returned.
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**ids** | **java.util.Set<BigDecimal>** | List of country identifiers. See endpoint `/basic/region/country/list` for valid values. | [optional]
## Implemented Interfaces
* Serializable
| 30.105263 | 191 | 0.694056 | eng_Latn | 0.919937 |
c9662fae15633ae9a891c3b6957869ebf86042e0 | 50 | md | Markdown | README.md | eduardodicarte/vm-rj_zbx | 0ab4e7f4fe2fec9280b2bf8f513a44855a7c1ded | [
"MIT"
] | null | null | null | README.md | eduardodicarte/vm-rj_zbx | 0ab4e7f4fe2fec9280b2bf8f513a44855a7c1ded | [
"MIT"
] | null | null | null | README.md | eduardodicarte/vm-rj_zbx | 0ab4e7f4fe2fec9280b2bf8f513a44855a7c1ded | [
"MIT"
] | null | null | null | # vm-rj_zabbix
Fontes para autoconfiguracao da vm
| 16.666667 | 34 | 0.82 | por_Latn | 0.430451 |
c9667873f192069adbf3495255839f16cb60921b | 3,152 | md | Markdown | docs/extensibility/debugger/reference/ienumdebugboundbreakpoints2.md | Dragollla16/visualstudio-docs | 53fc727cc744ddd3f4baeb36085deac7d8db7b94 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/ienumdebugboundbreakpoints2.md | Dragollla16/visualstudio-docs | 53fc727cc744ddd3f4baeb36085deac7d8db7b94 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/ienumdebugboundbreakpoints2.md | Dragollla16/visualstudio-docs | 53fc727cc744ddd3f4baeb36085deac7d8db7b94 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "IEnumDebugBoundBreakpoints2 | Microsoft Docs"
ms.date: "11/04/2016"
ms.topic: "conceptual"
f1_keywords:
- "IEnumDebugBoundBreakpoints2"
helpviewer_keywords:
- "IEnumDebugBoundBreakpoints2"
ms.assetid: ea03e7e1-28d6-40b7-8097-bbb61d3b7caa
author: "gregvanl"
ms.author: "gregvanl"
manager: jillfra
ms.workload:
- "vssdk"
---
# IEnumDebugBoundBreakpoints2
This interface enumerates the bound breakpoints associated with a pending breakpoint or breakpoint bound event.
## Syntax
```
IEnumDebugBoundBreakpoints2 : IUnknown
```
## Notes for Implementers
The debug engine (DE) implements this interface as part of its support for breakpoints. This interface must be implemented if breakpoints are supported.
## Notes for Callers
Visual Studio calls:
- [EnumBreakpoints](../../../extensibility/debugger/reference/idebugbreakpointevent2-enumbreakpoints.md) to obtain this interface representing a list of all breakpoints that were triggered.
- [EnumBoundBreakpoints](../../../extensibility/debugger/reference/idebugbreakpointboundevent2-enumboundbreakpoints.md) to obtain this interface representing a list of all breakpoints that were bound.
- [EnumBoundBreakpoints](../../../extensibility/debugger/reference/idebugpendingbreakpoint2-enumboundbreakpoints.md) to obtain this interface representing a list of all breakpoints bound to that pending breakpoint.
## Methods in Vtable Order
The following table shows the methods of `IEnumDebugBoundBreakpoints2`.
|Method|Description|
|------------|-----------------|
|[Next](../../../extensibility/debugger/reference/ienumdebugboundbreakpoints2-next.md)|Retrieves a specified number of bound breakpoints in an enumeration sequence.|
|[Skip](../../../extensibility/debugger/reference/ienumdebugboundbreakpoints2-skip.md)|Skips a specified number of bound breakpoints in an enumeration sequence.|
|[Reset](../../../extensibility/debugger/reference/ienumdebugboundbreakpoints2-reset.md)|Resets an enumeration sequence to the beginning.|
|[Clone](../../../extensibility/debugger/reference/ienumdebugboundbreakpoints2-clone.md)|Creates an enumerator that contains the same enumeration state as the current enumerator.|
|[GetCount](../../../extensibility/debugger/reference/ienumdebugboundbreakpoints2-getcount.md)|Gets the number of bound breakpoints in an enumerator.|
## Remarks
Visual Studio uses the bound breakpoints represented by this interface to update the display of breakpoints in the IDE.
## Requirements
Header: msdbg.h
Namespace: Microsoft.VisualStudio.Debugger.Interop
Assembly: Microsoft.VisualStudio.Debugger.Interop.dll
## See Also
[Core Interfaces](../../../extensibility/debugger/reference/core-interfaces.md)
[EnumBoundBreakpoints](../../../extensibility/debugger/reference/idebugbreakpointboundevent2-enumboundbreakpoints.md)
[EnumBoundBreakpoints](../../../extensibility/debugger/reference/idebugpendingbreakpoint2-enumboundbreakpoints.md)
[EnumBoundBreakpoints](../../../extensibility/debugger/reference/idebugpendingbreakpoint2-enumboundbreakpoints.md)