# Installing the PostgreSQL Database
### Introduction
This manual describes how to install the database locally and how to configure the integration with the Spring Boot
server. The UML diagram of the database can be found in database/src/diagram.png. This diagram was created
with [Umbrello](https://umbrello.kde.org/).
### Step 0 (Optional): Generating the SQL code for the table definitions
1. Parts of the source code are generated by [Umbrello](https://umbrello.kde.org/). You can regenerate these at any time
by installing Umbrello and then invoking Code -> Code Generation Wizard.
2. Run finalise_sql.sh to generate all_schemas.sql.
### Step 1: Installing the database
1. Install [PostgreSQL](https://www.postgresql.org/). Run the PostgreSQL server on port 2002.
2. Create a new database named "magdadatabase". Use the SQL code in database/src/code/create_user.sql to
create the user.
### Step 2: Creating the tables
1. Use the SQL code in database/src/code/all_schemas.sql to create the tables. If this file does not
exist, it can be generated as described in Step 0.
---
title: ICLRAssemblyIdentityManager::GetBindingIdentityFromStream Method
ms.date: 03/30/2017
api_name:
- ICLRAssemblyIdentityManager.GetBindingIdentityFromStream
api_location:
- mscoree.dll
api_type:
- COM
f1_keywords:
- ICLRAssemblyIdentityManager::GetBindingIdentityFromStream
helpviewer_keywords:
- GetBindingIdentityFromStream method [.NET Framework hosting]
- ICLRAssemblyIdentityManager::GetBindingIdentityFromStream method [.NET Framework hosting]
ms.assetid: 40123b30-a589-46b3-95d3-af7b2b0baa05
topic_type:
- apiref
ms.openlocfilehash: b30f6f5ce22290dc3750cef0171349ec5ff2f76a
ms.sourcegitcommit: 559fcfbe4871636494870a8b716bf7325df34ac5
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/30/2019
ms.locfileid: "73126745"
---
# <a name="iclrassemblyidentitymanagergetbindingidentityfromstream-method"></a>ICLRAssemblyIdentityManager::GetBindingIdentityFromStream Method
Gets the canonical assembly identity data for the assembly in the specified stream.
## <a name="syntax"></a>Syntax
```cpp
HRESULT GetBindingIdentityFromStream (
[in] IStream *pStream,
[in] DWORD dwFlags,
[out, size_is(*pcchBufferSize)] LPWSTR pwzBuffer,
[in, out] DWORD *pcchBufferSize
);
```
## <a name="parameters"></a>Parameters

`pStream`
[in] The assembly stream to evaluate.

`dwFlags`
[in] Provided for future extensibility. CLR_ASSEMBLY_IDENTITY_FLAGS_DEFAULT is the only value that the current version of the common language runtime (CLR) supports.

`pwzBuffer`
[out] A buffer containing the opaque assembly identity data.

`pcchBufferSize`
[in, out] The size of `pwzBuffer`.
## <a name="return-value"></a>Return value

|HRESULT|Description|
|-------------|-----------------|
|S_OK|The method returned successfully.|
|E_INVALIDARG|The supplied `pStream` is null.|
|ERROR_INSUFFICIENT_BUFFER|The size of `pwzBuffer` is too small.|
|HOST_E_CLRNOTAVAILABLE|The CLR has not been loaded into a process, or the CLR is in a state in which it cannot run managed code or process the call successfully.|
|HOST_E_TIMEOUT|The call timed out.|
|HOST_E_NOT_OWNER|The caller does not own the lock.|
|HOST_E_ABANDONED|An event was canceled while a blocked thread or fiber was waiting on it.|
|E_FAIL|An unknown catastrophic failure occurred. When a method returns E_FAIL, the CLR is no longer usable within the process. Subsequent calls to hosting methods return HOST_E_CLRNOTAVAILABLE.|
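The `ERROR_INSUFFICIENT_BUFFER` row reflects the common two-call pattern for size-probed buffers: call once with a null or too-small buffer to learn the required size through the in/out `pcchBufferSize` parameter, then allocate and call again. The sketch below imitates that contract with a stand-in function; `FakeGetBindingIdentity`, the `kOk` and `kInsufficientBuffer` constants, and the sample identity string are hypothetical illustrations, not the real mscoree API.

```cpp
#include <cassert>
#include <cwchar>
#include <string>
#include <vector>

// Illustrative stand-ins only; NOT the real types and codes from winerror.h.
using HRESULT_t = long;
using DWORD_t = unsigned long;
constexpr HRESULT_t kOk = 0;
constexpr HRESULT_t kInsufficientBuffer = 0x7A; // ERROR_INSUFFICIENT_BUFFER is 122

// A fake function following the same contract as GetBindingIdentityFromStream:
// the caller passes a buffer and its size in characters; when the buffer is
// null or too small, the required size is written back through the in/out
// size parameter and ERROR_INSUFFICIENT_BUFFER is returned.
HRESULT_t FakeGetBindingIdentity(wchar_t* pwzBuffer, DWORD_t* pcchBufferSize) {
    static const std::wstring identity = L"mscorlib, Version=4.0.0.0";
    const DWORD_t needed = static_cast<DWORD_t>(identity.size()) + 1; // + L'\0'
    if (pwzBuffer == nullptr || *pcchBufferSize < needed) {
        *pcchBufferSize = needed;        // report the required size to the caller
        return kInsufficientBuffer;
    }
    std::wcscpy(pwzBuffer, identity.c_str());
    return kOk;
}

// The classic two-call pattern: probe for the size, allocate, then call again.
std::wstring QueryIdentity() {
    DWORD_t cch = 0;
    if (FakeGetBindingIdentity(nullptr, &cch) != kInsufficientBuffer) {
        return L"";
    }
    std::vector<wchar_t> buffer(cch);
    if (FakeGetBindingIdentity(buffer.data(), &cch) != kOk) {
        return L"";
    }
    return std::wstring(buffer.data());
}
```

The same calling pattern applies to the real method: treat `ERROR_INSUFFICIENT_BUFFER` as a request to retry with the size that was written back into `pcchBufferSize`.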
## <a name="requirements"></a>Requirements

**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).

**Header:** MSCorEE.h

**Library:** Included as a resource in MSCorEE.dll

**.NET Framework Versions:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
## <a name="see-also"></a>See also

- [ICLRAssemblyIdentityManager Interface](../../../../docs/framework/unmanaged-api/hosting/iclrassemblyidentitymanager-interface.md)
- [ICLRAssemblyReferenceList Interface](../../../../docs/framework/unmanaged-api/hosting/iclrassemblyreferencelist-interface.md)
# Sample Go Integration for [Wallet-Core](https://github.com/Khaos-Labs/khaos-wallet-core)
## Overview
This folder contains a small **Go** sample integration with the
[Wallet Core](https://github.com/Khaos-Labs/khaos-wallet-core) library (part of [Khaos Wallet](https://khaoswallet.com)),
using [cgo](https://golang.org/cmd/cgo/).
## DISCLAIMER
> This is a sample application with demonstration purpose only,
> do not use it with real addresses, real transactions, or real funds.
> Use it at your own risk.
## Documentation
See the official [Khaos Wallet developer documentation here](https://developer.khaoswallet.com).
See especially Wallet Core
[Integration Guide](https://developer.khaoswallet.com/wallet-core/integration-guide),
and [Build Instructions](https://developer.khaoswallet.com/wallet-core/building).
## Prerequisites
* Docker
## Building and Running
1. Run `docker run -it Khaos-Labs/khaos-wallet-core`
The library is already built in this image (build instructions [here](building.md)). Note: it may not be the most recent version.
2. Install Go: `apt-get update && apt-get install golang`
(or download [go1.14.2](https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz), configure `GOROOT`, and append `GOROOT/bin` to `PATH`).
3. Go to the **samples/go** folder within wallet core repo:
```shell
git clone https://github.com/Khaos-Labs/khaos-wallet-core.git
cd wallet-core/samples/go
```
4. Compile it with `go build -o main`. The relevant source file is `main.go`.
5. Run `./main` and you will see the output below:
```shell
==> calling wallet core from go
==> mnemonic is valid: true
==> bitcoin...
```
6. You might want to copy and run `main` outside of the docker container, make sure you have `libc++1` and `libc++abi1` installed in your host Ubuntu.
# ODISEO Core modules <img src="/img/icon_core.svg" height="40px">
```{toctree}
:hidden:
spec-auth
spec-authz
spec-bank
spec-capability
spec-distribution
spec-evidence
spec-feegrant
spec-governance
spec-market
spec-mint
spec-oracle
spec-slashing
spec-staking
spec-treasury
spec-wasm
```
The ODISEO Core is the official Golang reference implementation of the ODISEO protocol.
The ODISEO Core is built using the [Cosmos SDK](https://cosmos.network/sdk), which provides a robust framework for blockchains that run atop the [Tendermint](https://tendermint.com/) consensus protocol.
Before diving into the core modules, it may be useful to familiarize yourself with the [Cosmos](https://docs.cosmos.network/) and [Tendermint](https://docs.tendermint.com/master/tutorials/go.html) documentation.
## How to use the ODISEO Core module specifications
Each module specification begins with a short description of the module's main function within the architecture of the system and an explanation of how it contributes to implementing ODISEO's features.
The body of each module specification provides a more detailed description of its main processes and algorithms alongside any concepts you might need to know. The body of each module specification also contains links to more granular information, such as specific state variables, message handlers, and other functions.
These specifications are not an exhaustive reference and are provided as a companion guide for users who need to work directly with the ODISEO Core codebase or understand it. Though all the important functions in each module are described, more trivial functions, such as getters and setters, are omitted for clarity. Module logic is also located in either the message handler or block transitions, such as begin-blocker and end-blocker.
The end of each module specification includes lists of various module parameters alongside their default values with a brief explanation of their purpose, associated events / tags, and errors issued by the module.
## Module architecture
The ODISEO Core is organized into the following individual modules that implement different parts of the ODISEO protocol. They are listed in the order in which they are initialized during genesis:
1. `genaccounts` - import & export genesis account
2. [`distribution`](spec-distribution.md): distribute rewards between validators and delegators
- reward distribution
- community pool
3. [`staking`](spec-staking.md): validators and Luna
4. [`auth`](spec-auth.md): ante handler
- vesting accounts
5. [`bank`](spec-bank.md) - sending funds from account to account
6. [`slashing`](spec-slashing.md) - low-level Tendermint slashing (double-signing, etc)
7. [`oracle`](spec-oracle.md) - exchange rate feed oracle
- vote tallying weighted median
- ballot rewards
- slashing misbehaving oracles
8. [`treasury`](spec-treasury.md): miner incentive stabilization
- macroeconomic monitoring
- monetary policy levers (Tax Rate, Reward Weight)
- seigniorage settlement
::: {admonition} Note
:class: warning
As of proposals [43](https://station.ODISEO.money/proposal/43) and [172](https://station.ODISEO.money/proposal/172), all seigniorage is burned, and the stability fee tax rate is zero.
:::
9. [`gov`](spec-governance.md): on-chain governance
- proposals
- parameter updating
10. [`market`](spec-market.md): price-stabilization
- ODISEO<>ODISEO spot-conversion, Tobin Tax
- ODISEO<>Luna market-maker, Constant-Product spread
11. `crisis` - reports consensus failure state with proof to halt the chain
12. `genutil` - handles `gentx` commands
- filter and handle `MsgCreateValidator` messages
### Inherited modules
Many of the modules in ODISEO Core inherit from the Cosmos SDK and are configured to work with ODISEO through customization in either genesis parameters or by augmenting their functionality with additional code.
## Block lifecycle
The following processes are executed during each block transition:
### Begin block
1. Distribution: Issuance of rewards for the previous block.
2. Slashing: Checking of infraction evidence or downtime of validators for double-signing and downtime penalties.
### Process messages
3. Messages are routed to their appropriate modules and then processed by the appropriate message handlers.
### End block
4. Crisis: Check all registered invariants and assert that they remain true.
5. Oracle
- If at the end of `VotePeriod`, run [voting procedure](spec-oracle.md#voting-procedure) and update Luna exchange rate.
- If at the end of `SlashWindow`, penalize validators who [missed](spec-slashing.md) more `VotePeriod`s than permitted.
6. Governance: Remove inactive proposals, check active proposals whose voting periods have ended for passes, and run the registered proposal handler of the passed proposal.
7. Market: [Replenish](spec-market.md#end-block) liquidity pools, allowing spread fees to decrease.
8. Treasury: At the end of every `epoch`, update indicators, burn seigniorage, and recalibrate monetary policy levers (tax-rate, reward-weight) for the next epoch.
::: {admonition} Note
:class: warning
As of proposals [43](https://station.ODISEO.money/proposal/43) and [172](https://station.ODISEO.money/proposal/172), all seigniorage is burned, and the stability fee tax rate is zero.
:::
9. Staking: The new set of active validators is determined from the top 130 Luna stakers. Validators that lose their spot within the set start the unbonding process.
## Conventions
### Currency denominations
Two types of tokens can be held by accounts and wallets in the ODISEO protocol:
- ODISEO stablecoins, which track the exchange rate of various fiat currencies. Each ODISEO stablecoin is named for its corresponding three-letter [ISO 4217 fiat currency code](https://www.xe.com/iso4217.php), written as `ODISEO<currencycode>`. When used as a value, the last letter of each currency code abbreviation is replaced with T to signify it as a ODISEO stablecoin. For example, the ODISEO stablecoin pegged to the Korean Won, KRW, is named ODISEOKRW, and its abbreviation is KRT.
The ODISEO protocol's standard base currency is ODISEOSDR, or SDT, which pegs to the IMF's Special Drawing Rights. The ODISEO protocol uses SDT to make calculations and set rate standards.
- Luna, which is the ODISEO protocol's native staking asset. Delegators earn mining rewards when they stake their Luna to an active validator. Luna stabilizes the ODISEO economy by absorbing the price volatility of ODISEO stablecoins and is also used to make governance proposals.
The microunit ($\times 10^{-6}$) is the smallest atomic unit of both ODISEO stablecoins and Luna.
Below are some examples of the different ODISEO stablecoins:
| Denomination | Micro-Unit | Code | Value |
| :----------- | :--------- | :------ | :------------ |
| Luna | µLuna | `uluna` | 0.000001 Luna |
| ODISEOSDR | µSDR | `usdr` | 0.000001 SDT |
| ODISEOKRW | µKRW | `ukrw` | 0.000001 KRT |
| ODISEOUSD | µUSD | `uusd` | 0.000001 UST |
| ODISEOMNT | µMNT | `umnt` | 0.000001 MNT |
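As a language-neutral illustration of the microunit convention in the table above (the chain itself is implemented in Go; this standalone C++ sketch only demonstrates the arithmetic, and the helper names are invented for the example):

```cpp
#include <cassert>
#include <cstdint>

// 1 Luna (or 1 unit of an ODISEO stablecoin) = 1,000,000 microunits,
// e.g. 1 Luna = 1,000,000 uluna, per the denomination table.
constexpr std::int64_t kMicroUnitsPerUnit = 1'000'000;

// Convert whole units to microunits (e.g. Luna -> uluna).
constexpr std::int64_t ToMicro(std::int64_t units) {
    return units * kMicroUnitsPerUnit;
}

// Convert microunits back to whole units, truncating any remainder,
// since the microunit is the smallest atomic unit on chain.
constexpr std::int64_t ToWhole(std::int64_t micro) {
    return micro / kMicroUnitsPerUnit;
}
```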
The ODISEO protocol is able to set prices for each stablecoin denomination through the use of external price oracles in the [oracle module](spec-oracle.md). Validators vote on the off-chain prices of Luna relative to each stablecoin's real-world fiat counterpart. The weighted medians of the submitted exchange rates are used to set the exchange rates of Luna and ODISEO stablecoins used by the [market module](spec-market.md). Because rates are set using an external price oracle, the market module is always able to swap stablecoins using current, real-world prices. The oracle also ensures that you can always trade $1 worth of Luna for 1 UST, and vice versa, regardless of the off-chain price of UST. Utilizing the market module exchange rates, arbitrageurs are incentivized to trade any off-peg ODISEO stablecoin on any external market, profiting from the price difference while simultaneously driving prices to match their fiat peg.
---
# Generated by scripts/aggregate-changelogs. WARNING: Manual edits to this files will be overwritten.
changes_categories:
- Dashboards
changes_entry:
repository: giantswarm/dashboards
url: https://github.com/giantswarm/dashboards/blob/master/CHANGELOG.md#180---2021-12-01
version: 1.8.0
version_tag: v1.8.0
date: '2021-12-01T12:52:43'
description: Changelog entry for giantswarm/dashboards version 1.8.0, published on
01 December 2021, 12:52
title: dashboards release v1.8.0
---
### Deleted
- Delete Azure Load Balancer Backend Nodes dashboard.
| 31.111111 | 101 | 0.769643 | eng_Latn | 0.576115 |
d0626af706d2d87dde92086b8ddb7a11b6fd040b | 3,292 | md | Markdown | _posts/2019-06-16/2019-06-16-AYT-konular%C4%B1.md | uguratmaca/trendy-news | 7c715b57a18679d05799dc7eee8fe5dbad752cd4 | [
"MIT"
] | null | null | null | _posts/2019-06-16/2019-06-16-AYT-konular%C4%B1.md | uguratmaca/trendy-news | 7c715b57a18679d05799dc7eee8fe5dbad752cd4 | [
"MIT"
] | null | null | null | _posts/2019-06-16/2019-06-16-AYT-konular%C4%B1.md | uguratmaca/trendy-news | 7c715b57a18679d05799dc7eee8fe5dbad752cd4 | [
"MIT"
---
layout: post
category: articles
title: "AYT topics"
newsTitle: "When will the YKS questions and answer key be published? All eyes on the university exam questions"
description: "The second session of the university entrance exam YKS, the AYT, ended at 13:30. The third session of the YKS, the Foreign Language Test (YDT), will be administered today at 15:45. Under the new system, all two-year associate degree programs admit students based on the TYT, four-year undergraduate programs based on the AYT, and four-year language programs based on the YDT. So, when will the university exam (YKS) questions and answer key be published?"
tags: ['latest news','most searched','AYT topics']
reference: "http://www.hurriyet.com.tr/gundem/yks-sorulari-ne-zaman-yayinlanacak-ayt-tyt-ve-ydt-sorulari-bugun-yayinlanir-mi-41245784"
date: "2019-06-16T11:28:00"
image: "http://i.hurimg.com/i/hurriyet/98/620x0/5d0627e67152d815a0db13f5.jpg"
---
<p>Two sessions of the YKS (Higher Education Institutions Examination) have been completed, and attention has turned to the questions. Students who sat the TYT (Basic Proficiency Test) on Saturday, June 15, and the AYT (Field Proficiency Test) today are eagerly waiting for the questions and answers to be published. So, when will the YKS (AYT, TYT, YDT) questions be published?</p>
<p>The three-session YKS is nearing its end. After candidates take the YDT, the wait for results will begin. ÖSYM has announced that the results will be released on July 18. Preferences will then be submitted according to the schedule and guide that ÖSYM publishes. Candidates who want to estimate their standing and calculate their net scores ahead of the results date are eagerly waiting for the questions and answers to be published.</p>
<p><strong>WHEN WILL THE QUESTIONS AND ANSWER KEY BE PUBLISHED?</strong></p>
<p>There has been no definitive announcement from ÖSYM about the YKS questions and answers. The previous year, the YKS was held on June 30 - July 1, 2018, and the questions and answers were made available on July 1 at 18:00.</p>
<p><strong>YKS QUESTION DISTRIBUTION</strong></p>
<p>In the TYT, candidates will be asked 40 questions in the Turkish test; a total of 20 in the social sciences test, with 5 each from history, geography, philosophy, and religious culture and moral knowledge (or substitute philosophy questions); 40 in the basic mathematics test; and 20 in the science test, consisting of 7 questions from physics, 7 from chemistry, and 6 from biology.</p>
<p>The Field Proficiency Test (AYT) will be held on Sunday, June 16 at 10:15, and the Foreign Language Test (YDT) at 15:45. The AYT's Turkish language and literature - social sciences-1 test will consist of 40 questions: candidates will answer 24 questions from Turkish language and literature, 10 from history-1, and 6 from geography-1.</p>
<p>The AYT's social sciences-2, mathematics, and science tests will each consist of 40 questions. The social sciences-2 test will include 11 questions from history-2, 11 from geography-2, 12 from the philosophy group, and 6 from religious culture and moral knowledge (or substitute philosophy questions). The science test will consist of 14 physics, 13 chemistry, and 13 biology questions.</p>
<p>The YDT will also include a total of 80 questions across German, Arabic, French, English, and Russian.</p>
---
title: event_source (C++ COM attribute)
description: Learn how to use the Microsoft C++ extension `event_source` COM attribute.
ms.date: 11/20/2020
f1_keywords:
- vc-attr.event_source
helpviewer_keywords:
- event handling, attributes
- event logs, event source
- event sources, creating
- event_source attribute
- event sources
- event handling, creating event source
ms.openlocfilehash: 3cdfaaa86f8fc36bf0dc90d7961077546362a662
ms.sourcegitcommit: b02c61667ff7f38e7add266d0aabd8463f2dbfa1
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/23/2020
ms.locfileid: "95483267"
---
# <a name="event_source-attribute"></a>`event_source` attribute
Creates an event source.
> [!NOTE]
> Event attributes in native C++ are incompatible with standard C++. They don't compile when you specify the [`/permissive-`](../../build/reference/permissive-standards-conformance.md) standards conformance mode.
## <a name="syntax"></a>Syntax
```cpp
[ event_source(type, optimize=[speed | size], decorate=[true | false]) ]
```
### <a name="parameters"></a>Parameters
*`type`*\
An enumeration of one of the following values:

- `native` for unmanaged C/C++ code (the default for unmanaged classes).

- `com` for COM code. Use `coclass` when *`type`*`=com`. This value requires that you include the following header files:
```cpp
#define _ATL_ATTRIBUTES
#include <atlbase.h>
#include <atlcom.h>
```
*`optimize`*\
When *type* is `native`, you can specify `optimize=size` to indicate that there are 4 bytes of storage (the minimum) for all events in a class, or `optimize=speed` (the default) to indicate that there are 4 * (number of events) bytes of storage.
*`decorate`*\
When *type* is `native`, you can specify `decorate=false` to indicate that the expanded name in the merged (*`.mrg`*) file should not include the name of the enclosing class. [`/Fx`](../../build/reference/fx-merge-injected-code.md) lets you generate *`.mrg`* files. `decorate=true`, which is the default, generates fully qualified type names in the merged file.
## <a name="remarks"></a>Remarks
The **`event_source`** C++ attribute specifies that the class or structure to which it's applied will be an event source.
**`event_source`** is used together with the [`event_receiver`](event-receiver.md) attribute and the [`__event`](../../cpp/event.md) keyword. Use `event_receiver` to create event receivers. Use **`__event`** on methods in the event source to specify those methods as events.
> [!NOTE]
> A templated class or struct can't contain events.
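As an illustrative sketch of how these pieces fit together, following the pattern described above (Microsoft-specific extension syntax; this compiles only under MSVC with attributed native events, not under `/permissive-`, and the class and handler names here are invented for the example):

```cpp
// Native event source: __event declares MyEvent as an event method.
[event_source(native)]
class CSource {
public:
   __event void MyEvent(int nValue);
};

// Native event receiver: __hook and __unhook connect and disconnect
// a handler method from the source's event.
[event_receiver(native)]
class CReceiver {
public:
   void MyHandler(int nValue) { /* respond to the raised event */ }

   void HookEvent(CSource* pSource) {
      __hook(&CSource::MyEvent, pSource, &CReceiver::MyHandler);
   }
   void UnhookEvent(CSource* pSource) {
      __unhook(&CSource::MyEvent, pSource, &CReceiver::MyHandler);
   }
};
```

Raising `MyEvent` on a `CSource` instance then invokes every hooked handler; see [`__hook`](../../cpp/hook.md) for the connection details.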
## <a name="requirements"></a>Spécifications
| Attribute context | Value |
|--|--|
| **Applies to** | **`class`**, **`struct`** |
| **Repeatable** | No |
| **Required attributes** | **`coclass`** when `type`=`com` |
| **Invalid attributes** | None |
For more information, see [Attribute contexts](cpp-attributes-com-net.md#contexts).
## <a name="see-also"></a>See also
[Compiler attributes](compiler-attributes.md)\
[`event_receiver`](event-receiver.md)\
[`__event`](../../cpp/event.md)\
[`__hook`](../../cpp/hook.md)\
[`__unhook`](../../cpp/unhook.md)\
[Class attributes](class-attributes.md)
------------
The parallel cartographic modeling language (PCML) is a multi-institutional
collaborative project aiming to create a computing language for
cyberGIScientists that is designed for (1) usability, (2) programmability, and
(3) scalability. PCML provides multi-core parallel processing for spatial
operations while hiding the implementation complexities of parallelism.
Installation
------------
### 1. Make sure that `pip` and `setuptools` are installed in your current Python environment (global or a virtualenv).
### 2. Install GDAL library
$ sudo apt-get install libgdal-dev
or
$ su -c 'yum install gdal-devel gdal-libs'
*The command may vary according to package manager and system.*
### 3. Install the required dependencies
$ pip install -r requirements.txt
### 4. Finally, install
$ python setup.py install
<!-- TODO: platform/distribution specific troubleshooting. -->
Example
-------
from pcml import *
PCMLConfig.num_procs = 4 # Run computation in 4 processes (default)
layer1 = ReadASCIIGrid("layer1.asc")
layer2 = ReadGeoTIFF("layer2.tiff")
layer_out = layer1 + layer2
layer_out.print_data()
Please also see `test.py` for additional working examples.
Build status
------------
[](https://app.wercker.com/project/bykey/99dd16339b190c2ab04db505fa7af57a)
| 25.833333 | 175 | 0.709677 | eng_Latn | 0.884872 |
d062e313c213b5bc1326739eb46a328694f86db4 | 7,234 | md | Markdown | docs/framework/data/transactions/managing-concurrency-with-dependenttransaction.md | BaruaSourav/docs | c288ed777de6b091f5e074d3488f7934683f3eb5 | [
"CC-BY-4.0",
"MIT"
] | 3,294 | 2016-10-30T05:27:20.000Z | 2022-03-31T15:59:30.000Z | docs/framework/data/transactions/managing-concurrency-with-dependenttransaction.md | BaruaSourav/docs | c288ed777de6b091f5e074d3488f7934683f3eb5 | [
"CC-BY-4.0",
"MIT"
] | 16,739 | 2016-10-28T19:41:29.000Z | 2022-03-31T22:38:48.000Z | docs/framework/data/transactions/managing-concurrency-with-dependenttransaction.md | BaruaSourav/docs | c288ed777de6b091f5e074d3488f7934683f3eb5 | [
"CC-BY-4.0",
"MIT"
] | 6,701 | 2016-10-29T20:56:11.000Z | 2022-03-31T12:32:26.000Z | ---
title: "Managing Concurrency with DependentTransaction"
description: Manage transaction concurrency, including asynchronous tasks, by using the DependentTransaction class in .NET.
ms.date: "03/30/2017"
ms.assetid: b85a97d8-8e02-4555-95df-34c8af095148
---
# Managing Concurrency with DependentTransaction
 A <xref:System.Transactions.DependentTransaction> is a clone of a <xref:System.Transactions.Transaction> object, created using the <xref:System.Transactions.Transaction.DependentClone%2A> method. Its sole purpose is to guarantee that the transaction cannot commit while some other piece of code (for example, a worker thread) is still performing work on the transaction. When the work done within the cloned transaction is complete and ready to be committed, it can notify the creator of the transaction using the <xref:System.Transactions.DependentTransaction.Complete%2A> method. Thus, you can preserve the consistency and correctness of data.
The <xref:System.Transactions.DependentTransaction> class can also be used to manage concurrency between asynchronous tasks. In this scenario, the parent can continue to execute any code while the dependent clone works on its own tasks. In other words, the parent's execution is not blocked until the dependent completes.
## Creating a Dependent Clone
To create a dependent transaction, call the <xref:System.Transactions.Transaction.DependentClone%2A> method and pass the <xref:System.Transactions.DependentCloneOption> enumeration as a parameter. This parameter defines the behavior of the transaction if `Commit` is called on the parent transaction before the dependent clone indicates that it is ready for the transaction to commit (by calling the <xref:System.Transactions.DependentTransaction.Complete%2A> method). The following values are valid for this parameter:
- <xref:System.Transactions.DependentCloneOption.BlockCommitUntilComplete> creates a dependent transaction that blocks the commit process of the parent transaction until the parent transaction times out, or until <xref:System.Transactions.DependentTransaction.Complete%2A> is called on all dependents indicating their completion. This is useful when the client does not want the parent transaction to commit until the dependent transactions have completed. If the parent finishes its work earlier than the dependent transaction and calls <xref:System.Transactions.CommittableTransaction.Commit%2A> on the transaction, the commit process is blocked in a state where additional work can be done on the transaction and new enlistments can be created, until all of the dependents call <xref:System.Transactions.DependentTransaction.Complete%2A>. As soon as all of them have finished their work and call <xref:System.Transactions.DependentTransaction.Complete%2A>, the commit process for the transaction begins.
- <xref:System.Transactions.DependentCloneOption.RollbackIfNotComplete>, on the other hand, creates a dependent transaction that automatically aborts if <xref:System.Transactions.CommittableTransaction.Commit%2A> is called on the parent transaction before <xref:System.Transactions.DependentTransaction.Complete%2A> is called. In this case, all the work done in the dependent transaction is intact within one transaction lifetime, and no one has a chance to commit just a portion of it.
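A minimal sketch of the `RollbackIfNotComplete` behavior (class name is illustrative, and the snippet assumes a simple in-memory transaction with no durable enlistments): when the parent commits before the clone has called <xref:System.Transactions.DependentTransaction.Complete%2A>, the transaction aborts and the `Commit` call is expected to throw.

```csharp
using System;
using System.Transactions;

class RollbackIfNotCompleteDemo
{
    static void Main()
    {
        CommittableTransaction committable = new CommittableTransaction();
        DependentTransaction dependent =
            committable.DependentClone(DependentCloneOption.RollbackIfNotComplete);

        try
        {
            // Commit the parent while the dependent clone has not yet
            // called Complete. With RollbackIfNotComplete, this aborts
            // the whole transaction instead of blocking.
            committable.Commit();
        }
        catch (TransactionAbortedException)
        {
            Console.WriteLine("Transaction aborted: dependent clone never completed.");
        }
        finally
        {
            dependent.Dispose();
            committable.Dispose();
        }
    }
}
```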
The <xref:System.Transactions.DependentTransaction.Complete%2A> method must be called only once when your application finishes its work on the dependent transaction; otherwise, a <xref:System.InvalidOperationException> is thrown. After this call is invoked, you must not attempt any additional work on the transaction, or an exception is thrown.
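The single-call rule can be illustrated with a short sketch (class name is illustrative): the second call to `Complete` on the same dependent clone throws <xref:System.InvalidOperationException>.

```csharp
using System;
using System.Transactions;

class DoubleCompleteDemo
{
    static void Main()
    {
        CommittableTransaction committable = new CommittableTransaction();
        DependentTransaction dependent =
            committable.DependentClone(DependentCloneOption.BlockCommitUntilComplete);

        dependent.Complete();       // First call: the clone reports it is done.
        try
        {
            dependent.Complete();   // Second call: not allowed.
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Complete may be called only once per dependent clone.");
        }
    }
}
```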
The following code example shows how to create a dependent transaction to manage two concurrent tasks by cloning a dependent transaction and passing it to a worker thread.
```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Transactions;

public class WorkerThread
{
    public void DoWork(DependentTransaction dependentTransaction)
    {
        // Hand the dependent clone to a new thread.
        Thread thread = new Thread(ThreadMethod);
        thread.Start(dependentTransaction);
    }

    public void ThreadMethod(object transaction)
    {
        DependentTransaction dependentTransaction = transaction as DependentTransaction;
        Debug.Assert(dependentTransaction != null);
        try
        {
            using (TransactionScope ts = new TransactionScope(dependentTransaction))
            {
                /* Perform transactional work here */
                ts.Complete();
            }
        }
        finally
        {
            // Signal that the dependent clone is done, then release it.
            dependentTransaction.Complete();
            dependentTransaction.Dispose();
        }
    }
}

// Client code
public class Client
{
    public static void Main()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            Transaction currentTransaction = Transaction.Current;
            DependentTransaction dependentTransaction =
                currentTransaction.DependentClone(DependentCloneOption.BlockCommitUntilComplete);

            WorkerThread workerThread = new WorkerThread();
            workerThread.DoWork(dependentTransaction);

            /* Do some transactional work here, then: */
            scope.Complete();
        }
    }
}
```
The client code creates a transactional scope that also sets the ambient transaction. You should not pass the ambient transaction to the worker thread. Instead, you should clone the current (ambient) transaction by calling the <xref:System.Transactions.Transaction.DependentClone%2A> method on the current transaction, and pass the dependent to the worker thread.
The client starts a new thread, passing the dependent transaction as the parameter to the `ThreadMethod` method, which executes on the new thread.
Because the dependent transaction is created with <xref:System.Transactions.DependentCloneOption.BlockCommitUntilComplete>, you are guaranteed that the transaction cannot be committed until all of the transactional work done on the second thread is finished and <xref:System.Transactions.DependentTransaction.Complete%2A> is called on the dependent transaction. This means that if the client's scope ends (when it tries to dispose of the transaction object at the end of the `using` statement) before the new thread calls <xref:System.Transactions.DependentTransaction.Complete%2A> on the dependent transaction, the client code blocks until <xref:System.Transactions.DependentTransaction.Complete%2A> is called on the dependent. Then the transaction can finish committing or aborting.
## Concurrency Issues
There are a few additional concurrency issues that you need to be aware of when using the <xref:System.Transactions.DependentTransaction> class:
- If the worker thread rolls back the transaction but the parent tries to commit it, a <xref:System.Transactions.TransactionAbortedException> is thrown.
- You should create a new dependent clone for each worker thread in the transaction. Do not pass the same dependent clone to multiple threads, because only one of them can call <xref:System.Transactions.DependentTransaction.Complete%2A> on it.
- If the worker thread spawns a new worker thread, make sure to create a dependent clone from the dependent clone and pass it to the new thread.
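A sketch of that last point, reusing the `WorkerThread` shape from the example above: the nested clone is created from the dependent clone itself (which is valid because <xref:System.Transactions.DependentTransaction> derives from <xref:System.Transactions.Transaction>), not from the parent.

```csharp
public void ThreadMethod(object transaction)
{
    DependentTransaction dependentTransaction = (DependentTransaction)transaction;
    try
    {
        // Clone from the dependent clone before spawning a nested worker,
        // so the transaction also waits for the nested thread's Complete call.
        DependentTransaction nestedClone =
            dependentTransaction.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
        new WorkerThread().DoWork(nestedClone);

        using (TransactionScope ts = new TransactionScope(dependentTransaction))
        {
            /* Perform transactional work here */
            ts.Complete();
        }
    }
    finally
    {
        dependentTransaction.Complete();
        dependentTransaction.Dispose();
    }
}
```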
## See also
- <xref:System.Transactions.DependentTransaction>
| 85.105882 | 1,008 | 0.782555 | eng_Latn | 0.980784 |
d0637d029975f6d63b64cc441291ae5f8bb7478b | 255 | md | Markdown | session-limit-user-operation-event-listener/README.md | gayanch/wso2-is-user-session-limit | c7467694eba96bf7d76fdd2f76ab5a0d64fee3ba | [
"Apache-2.0"
] | null | null | null | session-limit-user-operation-event-listener/README.md | gayanch/wso2-is-user-session-limit | c7467694eba96bf7d76fdd2f76ab5a0d64fee3ba | [
"Apache-2.0"
] | null | null | null | session-limit-user-operation-event-listener/README.md | gayanch/wso2-is-user-session-limit | c7467694eba96bf7d76fdd2f76ab5a0d64fee3ba | [
"Apache-2.0"
] | null | null | null | This code repository contains a sample maven project that demonstrates how to write a custom user operation event listener for WSO2 products.
More information on [1].
[1] http://tharindue.blogspot.com/2016/08/user-operation-event-listener-in-wso2.html
| 42.5 | 142 | 0.8 | eng_Latn | 0.939831 |
d0648f20098ecbd9602e9f863362eef789f97edc | 273 | md | Markdown | LawControl/ScicosBlocks/README.md | Lecrapouille/BacASable | e7de7a7b6f21ee3180dc968a47cbd1515cb82de5 | [
"Unlicense"
] | 1 | 2019-09-16T11:09:25.000Z | 2019-09-16T11:09:25.000Z | LawControl/ScicosBlocks/README.md | Lecrapouille/BacASable | e7de7a7b6f21ee3180dc968a47cbd1515cb82de5 | [
"Unlicense"
] | 1 | 2019-09-08T19:10:54.000Z | 2019-09-08T20:26:30.000Z | LawControl/ScicosBlocks/README.md | Lecrapouille/BacASable | e7de7a7b6f21ee3180dc968a47cbd1515cb82de5 | [
"Unlicense"
] | null | null | null | "As it" C code for Scicos blocks (http://www.scicoslab.org/) I used to make for some old student robotic project in ~2004/2007. This code is probably not working so use it "as it" and ScicosLab is no longer maintained.
Blocks are:
- Firewire camera
- Joystick
- UART link
| 39 | 218 | 0.750916 | eng_Latn | 0.992845 |
d06490b09fa09256afd5efa2dfa2a798fbaabd40 | 527 | md | Markdown | content/events/2018-istanbul/speakers/jacopo-nardiello.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 6 | 2016-11-14T14:08:29.000Z | 2018-05-09T18:57:06.000Z | content/events/2018-istanbul/speakers/jacopo-nardiello.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 461 | 2016-11-11T19:23:06.000Z | 2019-07-21T16:10:04.000Z | content/events/2018-istanbul/speakers/jacopo-nardiello.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 15 | 2016-11-11T15:07:53.000Z | 2019-01-18T04:55:24.000Z | +++
Title = "Jacopo Nardiello"
Website = ""
Twitter = "jnardiello"
Github = ""
image = "jacopo-nardiello.jpg"
type = "speaker"
+++
Jacopo is the founder of SIGHUP, an elite team working on Kubernetes and Cloud Native technologies working with world-leading organizations. He has been fully devoting his professional life to Kubernetes since late 2015 and has a deep interest on orchestration and dynamic infrastructures. Jacopo is also a CNCF Ambassador and the main organizer of the Kubernetes and Cloud Native Milano meetup. | 52.7 | 395 | 0.785579 | eng_Latn | 0.997671 |
d0658d06ea14f42835ac20401502e4db8ca03313 | 915 | md | Markdown | docs/visual-basic/misc/bc30293.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-06T07:30:24.000Z | 2020-01-06T07:30:24.000Z | docs/visual-basic/misc/bc30293.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc30293.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "<error>: '<structurename1>'contains'<structurename2>'"
ms.date: 07/20/2015
f1_keywords:
- vbc30293
- bc30293
helpviewer_keywords:
- BC30293
ms.assetid: c9d225e7-0627-4682-97f2-fd9c7be2842b
ms.openlocfilehash: b99791686ffa2633e74c41cd384b2106b387ada5
ms.sourcegitcommit: 2701302a99cafbe0d86d53d540eb0fa7e9b46b36
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/28/2019
ms.locfileid: "64663973"
---
# <a name="error-structurename1-contains-structurename2"></a>\<hata >: '\<structurename1 >' içerir '\<structurename2 >'
Bir yapının kendisi kendi üyelerinden biri belirlediğinde döngüsel yapı tanımı gerçekleşir.
**Hata Kimliği:** BC30293
## <a name="to-correct-this-error"></a>Bu hatayı düzeltmek için
- Yapı üyesinin adını değiştirin.
## <a name="see-also"></a>Ayrıca bkz.
- [Yapılar](../../visual-basic/programming-guide/language-features/data-types/structures.md)
| 31.551724 | 119 | 0.757377 | tur_Latn | 0.736437 |
d065921ce7da1ec7ef9719062d20dac7db8c76f4 | 596 | md | Markdown | docs/snippets.md | sketch7/ssv-au-dojo | 632712872bf21cd0c411305bb7ab5625b8d2828d | [
"MIT"
] | 3 | 2016-05-04T10:38:14.000Z | 2017-04-26T13:45:51.000Z | docs/snippets.md | sketch7/ssv-au-dojo | 632712872bf21cd0c411305bb7ab5625b8d2828d | [
"MIT"
] | null | null | null | docs/snippets.md | sketch7/ssv-au-dojo | 632712872bf21cd0c411305bb7ab5625b8d2828d | [
"MIT"
] | null | null | null |
# Tasks
- Binding
- repeat
- if
- two-way with input
- inline-conditions
- click trigger/delegate
- Binding Behaviors
- debounce
- throttle
- Value Converters
- KebabValueConverter
- TruncateValueConverter
- Extract Element - without VM
-
# Value Converters
Require from basic-form
```html
<require from="../../components/value-converters/kebabcase"></require>
```
# Custom Elements
Quick HTML way
```html
<require from="./hero-list-item.html"></require>
<hero-list-item hero.bind="hero" link-route.bind="heroState"></hero-list-item>
<template bindable="hero, linkRoute">
```
| 15.684211 | 78 | 0.711409 | eng_Latn | 0.380477 |
d065b124176ddab0b539c7dc68ca579ee8492db7 | 5,846 | md | Markdown | README.md | egnha/nofrills | a7fa84ae01d7ad3d5fae0bd8965fcae89af0bc73 | [
"MIT"
] | 49 | 2017-07-24T14:33:25.000Z | 2022-02-02T18:28:32.000Z | README.md | egnha/nofrills | a7fa84ae01d7ad3d5fae0bd8965fcae89af0bc73 | [
"MIT"
] | 54 | 2017-07-18T07:40:35.000Z | 2022-02-04T10:26:19.000Z | README.md | egnha/nofrills | a7fa84ae01d7ad3d5fae0bd8965fcae89af0bc73 | [
"MIT"
] | 2 | 2017-07-31T14:42:12.000Z | 2021-11-16T14:12:08.000Z |
<!-- README.md is generated from README.Rmd. Please edit that file -->
> Unless you need `curry()` or `curry_fn()`, you should use the more
> versatile [gestalt](https://github.com/egnha/gestalt) package, which
> includes `fn()`.
[](https://travis-ci.org/egnha/nofrills)
[](https://codecov.io/gh/egnha/nofrills)
[](https://cran.r-project.org/package=nofrills)
# nofrills <img src="inst/logo.png" align="right" />
*Low-Cost Anonymous Functions*
## Overview
*nofrills* is a lightweight R package that provides `fn()`, a more
powerful variation of `function()` that:
- **costs less** — enables tidyverse
[quasiquotation](https://rlang.r-lib.org/reference/quasiquotation.html)
so you don’t pay the price of [functional
impurity](#pure-functions-via-quasiquotation)
- has the **same great taste** — supports a superset of `function()`’s
syntax and capabilities
- is **less filling** —
``` r
fn(x, y = 1 ~ x + y)
```
is equivalent to
``` r
function(x, y = 1) x + y
```
## Installation
``` r
install.packages("nofrills")
```
Alternatively, install the development version from GitHub:
``` r
# install.packages("devtools")
devtools::install_github("egnha/nofrills")
```
## Usage
### Same syntax as `function()` but shorter
``` r
fn(x ~ x + 1)
#> function (x)
#> x + 1
fn(x, y ~ x + y)
#> function (x, y)
#> x + y
fn(x, y = 2 ~ x + y)
#> function (x, y = 2)
#> x + y
fn(x, y = 1, ... ~ log(x + y, ...))
#> function (x, y = 1, ...)
#> log(x + y, ...)
# the only exception, cf. alist()
fn(x, ... = , y ~ log(x + y, ...))
#> function (x, ..., y)
#> log(x + y, ...)
fn(~ NA)
#> function ()
#> NA
```
### Supports quasiquotation
#### Unquote values
``` r
z <- 0
fn(x, y = !!z ~ x + y)
#> function (x, y = 0)
#> x + y
fn(x ~ x > !!z)
#> function (x)
#> x > 0
```
#### Unquote argument names
``` r
arg <- "y"
fn(x, !!arg := 0 ~ x + !!as.name(arg))
#> function (x, y = 0)
#> x + y
```
#### Splice in argument lists
``` r
args <- alist(x, y = 0)
fn(!!!args, ~ x + y) # note the one-sided formula
#> function (x, y = 0)
#> x + y
```
#### Literally unquote with `QUQ()`, `QUQS()`
``` r
library(dplyr)
summariser <- quote(mean)
my_summarise <- fn(df, ... ~ {
group_by <- quos(...)
df %>%
group_by(QUQS(group_by)) %>%
summarise(a = (!!summariser)(a))
})
my_summarise
#> function (df, ...)
#> {
#> group_by <- quos(...)
#> df %>% group_by(`!!!`(group_by)) %>% summarise(a = mean(a))
#> }
```
(Source: [*Programming with
dplyr*](https://dplyr.tidyverse.org/articles/programming.html))
### [Curry](https://en.wikipedia.org/wiki/Currying) functions
#### Declare a curried function with `curry_fn()`
The syntax is the same as `fn()`. Using the literal unquoting operators
`QUQ()`, `QUQS()`, you can “delay” unquoting to embed argument values in
the innermost function:
``` r
compare_to <- curry_fn(target, x ~ identical(x, QUQ(target)))
is_this <- compare_to("this")
# The embedded value "this" renders the source comprehensible
is_this
#> function (x)
#> identical(x, "this")
#> <environment: 0x7fd045fde208>
```
#### Curry a function with `curry()`
``` r
curry(function(x, y, z = 0) x + y + z)
#> function (x)
#> function(y) function(z = 0) x + y + z
double <- curry(`*`)(2)
double(3)
#> [1] 6
```
## Pure functions via quasiquotation
Functions in R are generally
[impure](https://en.wikipedia.org/wiki/Pure_function), i.e., the return
value of a function will *not* in general be determined by the value of
its inputs alone. This is because a function may depend on mutable
objects in its [lexical
scope](https://adv-r.hadley.nz/functions.html#lexical-scoping). Normally
this isn’t an issue. But if you are working interactively and sourcing
files into the global environment, say, or using a notebook interface
(like Jupyter or R Notebook), it can be tricky to ensure that you
haven’t unwittingly mutated an object that an earlier function depends
upon.
- Consider the following function:
``` r
a <- 1
foo <- function(x) x + a
```
What is the value of `foo(1)`? It is not necessarily `2` because the
value of `a` may have changed between the *creation* of `foo()` and
the *calling* of `foo(1)`:
``` r
foo(1)
#> [1] 2
a <- 0
foo(1)
#> [1] 1
```
In other words, `foo()` is impure because the value of `foo(x)`
depends not only on the value of `x` but also on the *externally
mutable* value of `a`.
`fn()` enables you to write **pure(r)** functions by using
[quasiquotation](https://rlang.r-lib.org/reference/quasiquotation.html)
to eliminate such indeterminacy.
- With `fn()`, you can unquote `a` to capture its value at the point
of creation:
``` r
a <- 1
foo <- fn(x ~ x + !!a)
```
Now `foo()` is a pure function, unaffected by changes in its lexical
scope:
``` r
foo(1)
#> [1] 2
a <- 0
foo(1)
#> [1] 2
```
## Alternatives to nofrills
Alternative anonymous-function constructors (which don’t support
quasiquotation) include:
- [`pryr::f()`](https://github.com/hadley/pryr)
- [`lambda::f()`](https://github.com/jimhester/lambda)
- [`rlang::as_function()`](https://rlang.r-lib.org/reference/as_function.html)
## Acknowledgement
The [rlang](https://github.com/r-lib/rlang) package by [Lionel
Henry](https://github.com/lionel-) and [Hadley
Wickham](https://github.com/hadley) makes nofrills possible. Crucially,
rlang provides the engine for quasiquotation and expression capture.
## License
MIT Copyright © 2017–21 [Eugene Ha](https://github.com/egnha)
| 22.398467 | 118 | 0.627951 | eng_Latn | 0.886459 |
d0676645317c1cc29db425bc7bb0c4ddb5381a83 | 382 | md | Markdown | _blog_summaries/2016-06-29_meettheteamjune.md | innovatesac/website | 772b8c7d6de470d4a70d9c20ba719862d0f3d7bd | [
"CC0-1.0"
] | 2 | 2017-02-08T02:47:38.000Z | 2018-11-20T20:11:53.000Z | _blog_summaries/2016-06-29_meettheteamjune.md | innovatesac/innovatesac.github.io | 772b8c7d6de470d4a70d9c20ba719862d0f3d7bd | [
"CC0-1.0"
] | null | null | null | _blog_summaries/2016-06-29_meettheteamjune.md | innovatesac/innovatesac.github.io | 772b8c7d6de470d4a70d9c20ba719862d0f3d7bd | [
"CC0-1.0"
] | null | null | null | ---
title: Meet the Team—June 2016
medium_url: https://medium.com/@USDigitalService/meet-the-team-june-2016-40ea48572903
image_url: https://cdn-images-1.medium.com/max/800/1*v6JI55yfehGaX7IcHXI1uQ.jpeg
image_description: Shannon, a new USDS team member, smiling
date: 2016-06-27
---
We’re thrilled to start the summer with seven new teammates who have signed on for tours of duty.
| 38.2 | 97 | 0.78534 | eng_Latn | 0.556859 |
d0676e81b4e80b61ad026e7958062a11f3ea1d2f | 1,865 | md | Markdown | doc/content/templates/inheritance.md | jankx/plates | 3732d552284792a596d6d0386dc80ac56e67bde1 | [
"MIT"
] | 1,143 | 2015-01-02T10:04:17.000Z | 2022-03-30T18:23:03.000Z | vendor/league/plates/doc/content/templates/inheritance.md | flare-framework/Flare | 966a374f39aab060a57c58d6face09752939429c | [
"MIT"
] | 202 | 2015-01-13T13:10:10.000Z | 2022-02-25T20:59:30.000Z | vendor/league/plates/doc/content/templates/inheritance.md | flare-framework/Flare | 966a374f39aab060a57c58d6face09752939429c | [
"MIT"
] | 217 | 2015-01-01T16:16:02.000Z | 2022-03-25T09:29:23.000Z | +++
title = "Inheritance"
linkTitle = "Templates Inheritance"
[menu.main]
parent = "templates"
weight = 7
+++
By combining [layouts]({{< relref "templates/layouts.md" >}}) and [sections]({{< relref "templates/sections.md" >}}), Plates allows you to "build up" your pages using predefined sections. This is best understand using an example:
## Inheritance example
The following example illustrates a pretty standard website. Start by creating a site template, which includes your header and footer as well as any predefined content [sections]({{< relref "templates/sections.md" >}}). Notice how Plates makes it possible to even set default section content, in the event that a page doesn't define it.
{{< code-filename template.php >}}
~~~ php
<html>
<head>
<title><?=$this->e($title)?></title>
</head>
<body>
<img src="logo.png">
<div id="page">
<?=$this->section('page')?>
</div>
<div id="sidebar">
<?php if ($this->section('sidebar')): ?>
<?=$this->section('sidebar')?>
<?php else: ?>
<?=$this->fetch('default-sidebar')?>
<?php endif ?>
</div>
</body>
</html>
~~~
With the template defined, any page can now "implement" this [layout]({{< relref "templates/layouts.md" >}}). Notice how each section of content is defined between the `start()` and `end()` functions.
{{< code-filename profile.php >}}
~~~ php
<?php $this->layout('template', ['title' => 'User Profile']) ?>
<?php $this->start('page') ?>
<h1>Welcome!</h1>
<p>Hello <?=$this->e($name)?></p>
<?php $this->stop() ?>
<?php $this->start('sidebar') ?>
<ul>
<li><a href="/link">Example Link</a></li>
<li><a href="/link">Example Link</a></li>
<li><a href="/link">Example Link</a></li>
<li><a href="/link">Example Link</a></li>
<li><a href="/link">Example Link</a></li>
</ul>
<?php $this->stop() ?>
~~~
| 29.140625 | 336 | 0.617694 | eng_Latn | 0.8791 |
d0680ea2bb9771bc0c12678334e6e9acfb77b098 | 245 | md | Markdown | _definitions/bld-testacy.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 5 | 2018-08-07T21:57:01.000Z | 2022-02-26T13:29:20.000Z | _definitions/bld-testacy.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 1 | 2018-08-07T22:29:07.000Z | 2018-08-07T22:45:46.000Z | _definitions/bld-testacy.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 2 | 2020-12-26T17:22:04.000Z | 2021-02-12T21:35:50.000Z | ---
title: Testacy
letter: T
permalink: "/definitions/bld-testacy.html"
body: The state or condition of leaving a will at one’s death, opposed to “Intestacy
published_at: '2018-07-07'
source: Black's Law Dictionary 2nd Ed (1910)
layout: post
--- | 27.222222 | 84 | 0.746939 | eng_Latn | 0.945345 |
d068113937e1843c62e6c82c1a49b403fd02b433 | 1,121 | md | Markdown | README.md | biothomme/Retinol | 7383209257d7cc6b6ef6c0161d31aaf25cfcc630 | [
"MIT"
] | null | null | null | README.md | biothomme/Retinol | 7383209257d7cc6b6ef6c0161d31aaf25cfcc630 | [
"MIT"
] | null | null | null | README.md | biothomme/Retinol | 7383209257d7cc6b6ef6c0161d31aaf25cfcc630 | [
"MIT"
] | null | null | null | # Retinol
Python package for converting wavelength spectra of e.g. petals to trichromatic insect vision. Designed to be used as a colab notebook, to ease the process.
How do insects see the world? A recent review shows the magnitude of variation of visual perception across the phylum of insects (van der Kooi et al. 2021). Nonetheless, many of which have a set of 3 different wavelength receptors, covering a range of wavelengths from ~300 to ~700 nm (or in words, from UV to red). As there is no direct physical way of measuring this perception, a framework combining physiological exminations, wavelength measurements and mathematical transformations was set up during the 20th century (Wyszecki & Spines 1982, Chittka & Waser 1997). Thus, it is possible to compare different flowers by how they are perceived by an insect. This notebook implements the concept described by Chittka & Kevan (2005) (and basically shares its nomenclature) and enables the comparison of multiple species of flowers on how they are sensed by a trichromatic insect eye.
For any questions or help please contact thomas.huber<at>evobio.eu
| 140.125 | 883 | 0.802855 | eng_Latn | 0.999489 |
d068425d82465dccdba9d0a52508a85562cb0de3 | 2,396 | md | Markdown | desktop-src/WMP/controls-fastforward.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 552 | 2019-08-20T00:08:40.000Z | 2022-03-30T18:25:35.000Z | desktop-src/WMP/controls-fastforward.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,143 | 2019-08-21T20:17:47.000Z | 2022-03-31T20:24:39.000Z | desktop-src/WMP/controls-fastforward.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,287 | 2019-08-20T05:37:48.000Z | 2022-03-31T20:22:06.000Z | ---
title: Controls.fastForward method
description: The fastForward method starts fast play of the media item in the forward direction. | Controls.fastForward method
ms.assetid: 69cee803-f76b-4a8c-a2c2-1870665afaf9
keywords:
- fastForward method Windows Media Player
- fastForward method Windows Media Player , Controls class
- Controls class Windows Media Player , fastForward method
topic_type:
- apiref
api_name:
- Controls.fastForward
api_location:
- wmp.dll
api_type:
- COM
ms.topic: reference
ms.date: 05/31/2018
---
# Controls.fastForward method
The **fastForward** method starts fast play of the media item in the forward direction.
## Syntax
```JScript
Controls.fastForward()
```
## Parameters
This method has no parameters.
## Return value
This method does not return a value.
## Remarks
The **fastForward** method plays the clip back at five times the normal speed. Invoking **fastForward** changes the *Settings*.**rate** property to 5.0. If **rate** is subsequently changed, or if **play** or **stop** is called, Windows Media Player will cease fast forwarding.
The **fastForward** method does not work for live broadcasts and certain media types. To determine whether you can fast forward in a clip, call **isAvailable**("FastForward").
## Examples
The following example creates an HTML BUTTON element that uses **fastForward** to start fast play of the media item. The **Player** object was created with ID = "Player".
```JScript
<INPUT TYPE = "BUTTON" ID = "FF" NAME = "FF" VALUE = ">>"
/* Execute JScript when the BUTTON is clicked.
Check first to make sure fast-forward mode is available
for this particular media item */
onClick = "if (Player.controls.isAvailable('FastForward'))
Player.controls.fastForward();
">
```
## Requirements
| Requirement | Value |
|--------------------|------------------------------------------------------------------------------------|
| Version<br/> | Windows Media Player version 7.0 or later.<br/> |
| DLL<br/> | <dl> <dt>Wmp.dll</dt> </dl> |
## See also
<dl> <dt>
[**Controls Object**](controls-object.md)
</dt> <dt>
[**Controls.isAvailable**](controls-isavailable.md)
</dt> <dt>
[**Controls.play**](controls-play.md)
</dt> <dt>
[**Controls.stop**](controls-stop.md)
</dt> <dt>
[**Settings.rate**](settings-rate.md)
</dt> </dl>
| 22.819048 | 276 | 0.663189 | eng_Latn | 0.914054 |
d06859c932feb3580690ee6805158aedfc0b645a | 12 | md | Markdown | README.md | LONG444wdw/SING | 9296683755a81103085dd37ad0a4f3cb8b42273c | [
"Apache-2.0"
] | null | null | null | README.md | LONG444wdw/SING | 9296683755a81103085dd37ad0a4f3cb8b42273c | [
"Apache-2.0"
] | null | null | null | README.md | LONG444wdw/SING | 9296683755a81103085dd37ad0a4f3cb8b42273c | [
"Apache-2.0"
] | null | null | null | # SING
FGHJ
| 4 | 6 | 0.666667 | kor_Hang | 0.525584 |
d06876a97724c3aa9d56a44934e7da6803e8ee92 | 416 | md | Markdown | README.md | buldo/echo-go-tg-bot | d3e2b45cbd2bca6e0005215ee2e16812f7779533 | [
"MIT"
] | 2 | 2016-04-21T04:28:36.000Z | 2016-04-24T15:54:14.000Z | README.md | buldo/echo-go-tg-bot | d3e2b45cbd2bca6e0005215ee2e16812f7779533 | [
"MIT"
] | null | null | null | README.md | buldo/echo-go-tg-bot | d3e2b45cbd2bca6e0005215ee2e16812f7779533 | [
"MIT"
] | null | null | null | # Heroku echo telegram go bot
Small echo bot based on rockneurotiko/go-tgbot that can be hosted on Heroku
## How to install
Main steps are similar with https://devcenter.heroku.com/articles/getting-started-with-go .
But before pushing to heroku You have to set next env variables:
1. botToken - token received from Bot Father
2. hookURL - base url for hook. By default should be "https://application-heroku-domain"
| 46.222222 | 91 | 0.78125 | eng_Latn | 0.957041 |
d0688c9cf486c619f4510deaf73515878399a008 | 2,141 | md | Markdown | docs/vs-2015/code-quality/ca1012-abstract-types-should-not-have-constructors.md | galaxyuliana/visualstudio-docs.ko-kr | 0f07b2bdcdecc134d4f27d7da71521546f4046a6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/ca1012-abstract-types-should-not-have-constructors.md | galaxyuliana/visualstudio-docs.ko-kr | 0f07b2bdcdecc134d4f27d7da71521546f4046a6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/ca1012-abstract-types-should-not-have-constructors.md | galaxyuliana/visualstudio-docs.ko-kr | 0f07b2bdcdecc134d4f27d7da71521546f4046a6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'CA1012: 추상 형식에는 생성자를 사용 해야 합니다. | Microsoft Docs'
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-code-analysis
ms.topic: reference
f1_keywords:
- AbstractTypesShouldNotHaveConstructors
- CA1012
helpviewer_keywords:
- CA1012
ms.assetid: 09f458ac-dd88-4cd7-a47f-4106c1e80ece
caps.latest.revision: 27
author: gewarren
ms.author: gewarren
manager: wpickett
ms.openlocfilehash: 02b92ac92e545ab30405d195d85a97a4ef2e3806
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 04/23/2019
ms.locfileid: "68151091"
---
# <a name="ca1012-abstract-types-should-not-have-constructors"></a>CA1012: 추상 형식에는 생성자를 사용하면 안 됩니다.
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
|||
|-|-|
|TypeName|AbstractTypesShouldNotHaveConstructors|
|CheckId|CA1012|
|범주|Microsoft.Design|
|변경 수준|주요 변경 아님|
## <a name="cause"></a>원인
Public 형식이 추상 이며 public 생성자가 있습니다.
## <a name="rule-description"></a>규칙 설명
추상 형식에 대한 생성자는 파생된 형식에서만 호출할 수 있습니다. public 생성자에서 형식의 인스턴스를 만들고 사용자는 추상 형식의 인스턴스를 만들 수 없기 때문에 public 생성자가 있는 추상 형식은 잘못 디자인된 것입니다.
## <a name="how-to-fix-violations"></a>위반 문제를 해결하는 방법
이 규칙 위반 문제를 해결 하려면 생성자를 보호 하거나 형식을 추상으로 선언 하지 마십시오.
## <a name="when-to-suppress-warnings"></a>경고를 표시하지 않는 경우
이 규칙에서는 경고를 표시해야 합니다. 추상 형식에 public 생성자를 있습니다.
## <a name="example"></a>예제
다음 예제에서는이 규칙을 위반 하는 추상 형식을 포함 합니다.
[!code-csharp[FxCop.Design.AbstractTypeBad#1](../snippets/csharp/VS_Snippets_CodeAnalysis/FxCop.Design.AbstractTypeBad/cs/FxCop.Design.AbstractTypeBad.cs#1)]
[!code-vb[FxCop.Design.AbstractTypeBad#1](../snippets/visualbasic/VS_Snippets_CodeAnalysis/FxCop.Design.AbstractTypeBad/vb/FxCop.Design.AbstractTypeBad.vb#1)]
## <a name="example"></a>예제
다음 예제에서 생성자의 내게 필요한 옵션을 변경 하 여 위반을 해결 `public` 에 `protected`입니다.
[!code-csharp[FxCop.Design.AbstractTypeGood#1](../snippets/csharp/VS_Snippets_CodeAnalysis/FxCop.Design.AbstractTypeGood/cs/FxCop.Design.AbstractTypeGood.cs#1)]
[!code-vb[FxCop.Design.AbstractTypeGood#1](../snippets/visualbasic/VS_Snippets_CodeAnalysis/FxCop.Design.AbstractTypeGood/vb/FxCop.Design.AbstractTypeGood.vb#1)]
| 37.561404 | 162 | 0.769734 | kor_Hang | 0.994234 |
d068ad7eb2520c6658c962aaa71481e3b902ca05 | 2,243 | md | Markdown | Labs/Content in draft/Azure IoT/Adafruit_Feather_IoTSuite/README.md | umerslone/computerscience | 3ca482e31b8a6b602084963b6a007134c2e892a2 | [
"MIT"
] | 4 | 2020-01-30T13:12:04.000Z | 2022-01-21T01:14:43.000Z | Labs/Content in draft/Azure IoT/Adafruit_Feather_IoTSuite/README.md | umerslone/computerscience | 3ca482e31b8a6b602084963b6a007134c2e892a2 | [
"MIT"
] | null | null | null | Labs/Content in draft/Azure IoT/Adafruit_Feather_IoTSuite/README.md | umerslone/computerscience | 3ca482e31b8a6b602084963b6a007134c2e892a2 | [
"MIT"
] | 4 | 2020-04-11T18:25:02.000Z | 2020-04-11T19:54:57.000Z | # Adafruit Feather Azure IOT Suite
The aim of this mini project is for the Adafruit Feather to communicate with to the Microsoft Azure IoT Hub. Adafruit Feather should be able to send telemetry messages and also respond to commands sent to it by the IoT hub. The Adafruit Feather component used in this project is from the Microsoft Azure IoT Starter Kits
# Get Started with Microsoft Azure IoT Starter Kit - Adafruit Feather M0 WiFi (Arduino-compatible)
This tutorial describes the process of taking your Feather M0 WiFi kit, and using it to develop a temperature, humidity and pressure reader that can communicate with the cloud using the Microsoft Azure IoT SDK.
**Don't have a kit yet?:** Click [here](http://azure.com/iotstarterkits)
# Running a Simple Remote Monitoring Solution on Feather M0 WiFi (Arduino-compatible)
###Required Software
- Arduino IDE, version 1.6.8. from www.arduino.cc (Earlier versions will not work with the AzureIoT library)
- Sensor interface library from Adafruit: https://github.com/adafruit/Adafruit_BME280_Library/archive/master.zip
###Required Hardware
- Adafruit Feather M0 WiFi kit
- A microB USB cable
- A desktop or laptop computer which can run **Arduino IDE 1.6.8**
##Please follow the Tutorial
Tutorial-Azure IOT Suite with Ardunino.docx
##Setting up via IOTSuite and Individual Azure components
See the following tutorial https://blogs.msdn.microsoft.com/uk_faculty_connection/2016/10/16/creating-an-end-to-end-iot-solution-using-the-microsoft-azure-iot-starter-kit-w-adafruit-feather-m0-wifi-and-microsoft-azure/
#Next steps
Please visit our [Azure IoT Dev Center](https://azure.microsoft.com/en-us/develop/iot/) for more samples and documentation on Azure IoT.
##Stopping Provisioned Services
- In the [Microsoft Azure Portal](https://portal.azure.com/)
- Click on "All Resources"
- For each Stream Analytics and Web App resource:
- Click on the resource and click the "Stop" button in the new blade that appears
- For each IoT Hub resource:
- Click on the resource and click the "Devices" button in the new blade that appears
- Click on each device in the list and click the "Disable" button that appears in the new blade at the bottom
| 52.162791 | 321 | 0.770843 | eng_Latn | 0.958965 |
d069152696ca5dda0ae0347619a076184a29dd16 | 2,679 | md | Markdown | HibernateSpringBootAvoidEntityInDtoViaConstructor/README.md | luiz158/Hibernate-SpringBoot | c245c6297421c2ec5128048497229a33f68f15c2 | [
"Apache-2.0"
] | 879 | 2018-10-18T14:48:51.000Z | 2022-03-30T18:22:15.000Z | HibernateSpringBootAvoidEntityInDtoViaConstructor/README.md | luiz158/Hibernate-SpringBoot | c245c6297421c2ec5128048497229a33f68f15c2 | [
"Apache-2.0"
] | 4 | 2019-10-18T23:48:27.000Z | 2020-12-22T10:11:36.000Z | HibernateSpringBootAvoidEntityInDtoViaConstructor/README.md | luiz158/Hibernate-SpringBoot | c245c6297421c2ec5128048497229a33f68f15c2 | [
"Apache-2.0"
] | 414 | 2018-10-18T14:48:48.000Z | 2022-03-30T02:11:40.000Z | **[Avoid Entity In DTO Via Constructor Expression (no association)](https://github.com/AnghelLeonard/Hibernate-SpringBoot/tree/master/HibernateSpringBootAvoidEntityInDtoViaConstructor)**
<b><a href="https://persistencelayer.wixsite.com/springboot-hibernate/post/avoid-fetching-entity-in-dto-via-constructor-expression-no-associations">If you prefer to read it as a blog-post containing the relevant snippets of code then check this post</a></b>
**Description:** Let's assume that we have two entities, `Author` and `Book`. There is no materialized association between them, but, both entities shares an attribute named, `genre`. We want to use this attribute to join the tables corresponding to `Author` and `Book`, and fetch the result in a DTO. The result should contain the `Author` entity and only the `title` attribute from `Book`. Well, when you are in a scenario as here, it is strongly advisable to avoid fetching the DTO via *constructor expression*. This approach cannot fetch the data in a single `SELECT`, and is prone to N+1. Way better than this consists of using Spring projections, JPA `Tuple` or even Hibernate `ResultTransformer`. These approaches will fetch the data in a single `SELECT`. This application is a **DON'T DO THIS** example. Check the number of queries needed for fetching the data. In place, do it as here: [Entity Inside Spring Projection (no association)](https://github.com/AnghelLeonard/Hibernate-SpringBoot/tree/master/HibernateSpringBootDtoEntityViaProjectionNoAssociation).
-----------------------------------------------------------------------------------------------------------------------
<table>
<tr><td><b>If you need a deep dive into the performance recipes exposed in this repository then I am sure that you will love my book "Spring Boot Persistence Best Practices"</b></td><td><b>If you need a hand of tips and illustrations of 100+ Java persistence performance issues then "Java Persistence Performance Illustrated Guide" is for you.</b></td></tr>
<tr><td>
<a href="https://www.apress.com/us/book/9781484256251"><p align="left"><img src="https://github.com/AnghelLeonard/Hibernate-SpringBoot/blob/master/Spring%20Boot%20Persistence%20Best%20Practices.jpg" height="500" width="450"/></p></a>
</td><td>
<a href="https://leanpub.com/java-persistence-performance-illustrated-guide"><p align="right"><img src="https://github.com/AnghelLeonard/Hibernate-SpringBoot/blob/master/Java%20Persistence%20Performance%20Illustrated%20Guide.jpg" height="500" width="450"/></p></a>
</td></tr></table>
-----------------------------------------------------------------------------------------------------------------------
| 157.588235 | 1,068 | 0.703621 | eng_Latn | 0.887608 |
d06a1e4a119876371ddb60990c0550a2175f5251 | 8,565 | md | Markdown | dander_newbee-mall_readme.md | lua-study/0 | 161010a7530d62864e917d1d3253e1fa0a8413f9 | [
"Apache-2.0"
] | 3 | 2021-06-08T07:57:41.000Z | 2022-02-03T18:50:19.000Z | dander_newbee-mall_readme.md | lua-study/0 | 161010a7530d62864e917d1d3253e1fa0a8413f9 | [
"Apache-2.0"
] | null | null | null | dander_newbee-mall_readme.md | lua-study/0 | 161010a7530d62864e917d1d3253e1fa0a8413f9 | [
"Apache-2.0"
] | 5 | 2021-03-11T07:42:05.000Z | 2021-09-08T05:43:56.000Z | 


[](https://github.com/newbee-ltd/newbee-mall/blob/master/LICENSE)
The newbee-mall project is a complete e-commerce system, consisting of the newbee-mall storefront system and the newbee-mall-admin back-office management system, built on Spring Boot 2.X and its related technology stack. The storefront includes modules such as the home page portal, product categories, new arrivals, home-page carousel, product recommendations, product search, product display, shopping cart, order checkout, order workflow, personal order management, member center, and help center. The back-office management system includes modules such as a data dashboard, carousel management, product management, order management, member management, category management, and settings.

**Keeping at it is not easy. If you think the project is good, please give it a Star; that is also an encouragement for me to keep updating the code. Thank you all for your support.**

- newbee-mall is very friendly to new developers. No complicated setup steps are needed; **you can start this complete mall project in just 2 seconds;**
- newbee-mall **is also an enterprise-grade, large Spring Boot project, and an excellent choice for Java developers at every stage;**
- You can use it as a comprehensive practice project for the Spring Boot technology stack; **newbee-mall fully fits that purpose, with open-source code, complete features, full workflows, and a polished, interactive UI;**
- The technology stack is modern and rich in learning points; studying it will deepen your understanding and mastery of the material and **can further improve your competitiveness in the job market;**
- For Java developers who are currently job hunting, **you can also include this project in your résumé to enrich your work history;**
- **newbee-mall still has some rough edges; my knowledge is limited, so please bear with me;**
- **If you run into any problems, you can report them to me and I will do my best to improve the project.**

> For more hands-on Spring Boot projects, follow the author 十三 (Thirteen)'s other repository, [spring-boot-projects](https://github.com/ZHENFENG13/spring-boot-projects). It mainly contains introductory Spring Boot tutorials and tutorials for commonly used hands-on Spring Boot projects, including all kinds of Spring Boot sample code as well as the source code and demos of practical projects, ranging from basic web development to the front-end/back-end separated projects that are widely used today. More practical project source code will be added based on feedback, so you can move beyond the various hello-world starter examples and truly master Spring Boot development.

Follow the WeChat official account **程序员的小故事** ("A Programmer's Little Stories") and reply "勾搭" to join the discussion group.

## Project Demo

- [Video 1: Project overview](https://edu.csdn.net/course/play/26258/326466)
- [Video 2: Storefront system introduction](https://edu.csdn.net/course/play/26258/326467)
- [Video 3: Back-office management system introduction](https://edu.csdn.net/course/play/26258/328801)
## Development and Deployment Documentation

- [**Spring Boot Large Online Mall Project: Practical Tutorial**](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Technology selection: Spring Boot](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Preparation and basic environment setup](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [First taste of a Spring Boot project: creation and startup](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Spring Boot core concepts and source code analysis](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Spring Boot: reading the DispatcherServlet auto-configuration source](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Spring Boot: web development and MVC auto-configuration analysis](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Introducing and integrating the Thymeleaf template engine](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Thymeleaf syntax in detail and coding practice](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Spring Boot in practice: data source auto-configuration and database operations](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Spring Boot in practice: integrating MyBatis for database access](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [First taste of the project: starting and using the newbee mall](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [newbee mall feature modules and workflow design in detail](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Front-end page design and technology selection](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing page layout and navigation logic](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Integrating kaptcha with Spring Boot for captcha support](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing login for the newbee mall admin system](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Setting up a login interceptor and completing authentication](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Designing and implementing generic pagination](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Integrating the jqGrid plugin for pagination](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Spring Boot in practice: file upload handling and path echoing](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Developing the newbee mall carousel management module](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Developing the newbee mall category management module, part 1](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Developing the newbee mall category management module, part 2](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Introducing and integrating the KindEditor rich-text editor](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing three-level cascading product categories](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing product editing](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing the product management module](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Building the newbee mall home page, part 1](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Building the newbee mall home page, part 2](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Configuring and polishing the home page modules](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing member registration and login](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing product search](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Implementing the shopping cart](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Transaction handling in Spring Boot](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Order confirmation page and order creation in practice](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Building the personal order list and order detail pages](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Completing the order workflow](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Building the newbee mall error pages](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
- [Course summary](https://juejin.im/book/5da2f9d4f265da5b81794d48?referrer=59199e22a22b9d0058279886)
## Contact the Author

> If you have any questions or suggestions, please report them in the [issues](https://gitee.com/newbee-ltd/newbee-mall/issues) and I will gradually improve the project.

- My email: [email protected]
- QQ tech exchange group: 796794009

> newbee-mall has code repositories on both GitHub and Gitee. If GitHub is slow to access for you, it is recommended to browse the project on Gitee; the two repositories are kept in sync.
- [newbee-mall in GitHub](https://github.com/newbee-ltd/newbee-mall)
- [newbee-mall in Gitee](https://gitee.com/newbee-ltd/newbee-mall)

## Page Showcase

Below are some of the mall project's pages. Space does not permit listing them all; the important flows and features are collected below.

### Storefront page previews
- Mall home page 1

- Mall home page 2

- Product search

- Shopping cart

- Order checkout

- Order list

- Payment page

### Back-office management pages
- Login page

- Carousel management

- New arrivals

- Category management

- Product management

- Product editing

- Order management


## Acknowledgements
- [spring-projects](https://github.com/spring-projects/spring-boot)
- [thymeleaf](https://github.com/thymeleaf/thymeleaf)
- [mybatis](https://github.com/mybatis/mybatis-3)
- [ColorlibHQ](https://github.com/ColorlibHQ/AdminLTE)
- [tonytomov](https://github.com/tonytomov/jqGrid)
- [t4t5](https://github.com/t4t5/sweetalert)
- [skytotwo](https://github.com/skytotwo/Alipay-WeChat-HTML)
# Friendly Links

[Tencent QQ group quick search](http://u.720life.cn/s/8cf73f7c)

[Free software development forum](http://u.720life.cn/s/bbb01dc0)
d06afdfb2a58fd99c7b7c735253689f22bb2c5a9 | 4,357 | md | Markdown | _posts/2021-01-11-Mandatory-2020-Wrap-up-Post.md | shikhashikz/shikhashikz.github.io | c586282f4cc995bcca1a48cb51b130aa7d95b3b9 | [
"MIT"
] | null | null | null | _posts/2021-01-11-Mandatory-2020-Wrap-up-Post.md | shikhashikz/shikhashikz.github.io | c586282f4cc995bcca1a48cb51b130aa7d95b3b9 | [
"MIT"
] | 8 | 2016-09-15T08:31:20.000Z | 2020-07-18T14:00:07.000Z | _posts/2021-01-11-Mandatory-2020-Wrap-up-Post.md | shikhashikz/shikhashikz.github.io | c586282f4cc995bcca1a48cb51b130aa7d95b3b9 | [
"MIT"
] | null | null | null | ---
layout: post
title: Mandatory 2020 wrap up Post!
category: blog
keywords: blog writing author shikhashikz philosophy mindfulness learnings
image: assets/images/NY2021.jpg
---
Everybody is talking about 2020. It feels as if it's mandatory to write a goodbye note for 2020. Yes, everyone wanted 2020 to be over, for all the reasons we know. However, I find it a bit odd that everyone was desperately waiting for the midnight gong of 31st Dec. There was no magic wand which was going to whoosh away corona. Indeed it's a deadly disease. Apart from the safety measures which we can take, there is nothing else we can do. (I will not touch upon the spiritual aspects, saving this subject for some other day.) The healthcare industry is working round the clock on the vaccine. Let's trust our healthcare professionals and, from our end, make a conscious effort to step out only with a mask on. I believe the "mask" and "basic sanitation guidelines" are the gifts which 2020 silently wanted to give us.
Coming back to 2020, it was so different in many ways. For anyone who had never worked in a work-from-home environment, it was a different world. Each house was different, each life was different. How your neighbour handled this situation was completely different from how you did. It was not apt to question how another person was tackling the situation. And it was not appropriate to unleash a whip on yourself when things did not go as per the plan. Keeping afloat amid all the negativity and insecurity is by no means an easy task. I used to give myself a pat on the back when I got up from bed, feeling blessed that I was alive and able to see my family beside me. These are the moments we fail to cherish and be thankful for in the mad rush of career, getting things done, or minting that extra dash of cash. This year can be termed the **"Year of Awakening"**
**What is my view of 2020?** I loved this year for all the "good reasons". It gave me a chance to take a peep inside myself, my mind. How to keep calm when you feel that the entire world is faltering. How to sift the irrelevant from what is important. What it really means to be focused. How one needs to be creative to celebrate life, appreciate health, and love your family with all the differences floating around and the blessings bestowed by GOD. There were a good nine months where I was not able to go for a holiday. Our plans kept getting cancelled or postponed for some reason or the other. I was frustrated indeed, but then I had to keep going with the flow. I would be lying if I said that there were no moments of frustration, of being an emotional wreck, feeling listless, thinking that all is lost and I am not going anywhere. I felt stuck. I did feel like a loner. I did feel envious looking at all those amazing holiday pictures on social media. During this tsunami of emotions, I did not run away from my feelings. I felt them, I let them come, I did not shoo them away, I did not feel embarrassed talking about them. I believe this was the strength which kept me sane! **Accepting the truth, that it's ok to not be ok.**
And yes, 2020 did turn everybody into a philosopher, including me! I think GOD received a lot of knocks on HIS door enquiring about "the purpose of life".

**What kept me going last year?** Meditation, yoga and words. I read, wrote and shared my learnings. I recorded 40 bite-size videos for the Antwak platform. Check out my contributions on [Digital Marketing, Soft Skills, Women at Work, Emotional Well-Being and Sales & BD](https://www.antwak.com/author/2069-shikha-pakhide)
Let me take a moment to share my reading list of 2020:
***Atomic Habits***
***The Diary of a Young Girl***
***Are you there God? It’s me, Margaret***
***Sophie Kinsella’s “I owe you one”, “Surprise Me” and “Remember Me”***
***Healing Back Pain***
***The Rearranged life of Oona***
***The Silent Patient***
***Eleanor Oliphant is Completely Fine***
**What’s my myntra for the next phase?** Sharpen the lessons learnt in 2020 and Observe.
*Parting Note: My prayers are with all the families who witnessed unfortunate tragedies because of COVID. May GOD give you the strength to sail through the troubled times.*
| 92.702128 | 1,255 | 0.770255 | eng_Latn | 0.99984 |
d06b25a0f3ab6f0681e39db6306b0efdc0e6642a | 110 | md | Markdown | docs/sites/README.md | TripleSD/moring | 3b37e69f61f155adcf2ebbbd265f5ae1997df43f | [
"MIT"
] | 6 | 2019-11-08T15:51:54.000Z | 2021-06-04T12:08:13.000Z | docs/sites/README.md | TripleSD/moring | 3b37e69f61f155adcf2ebbbd265f5ae1997df43f | [
"MIT"
] | 16 | 2019-12-04T14:54:09.000Z | 2022-02-26T19:59:05.000Z | docs/sites/README.md | TripleSD/moring | 3b37e69f61f155adcf2ebbbd265f5ae1997df43f | [
"MIT"
] | null | null | null | Отображаемые статусы - 200/301/302
Срок опроса сайтов - каждую минуту.
Time
Sites checks - 0.3s by site
| 10 | 35 | 0.727273 | rus_Cyrl | 0.670184 |
d06b9976f5506e91ec26475fcc2550f38919be4d | 2,334 | md | Markdown | docs/csharp/programming-guide/exceptions/how-to-handle-an-exception-using-try-catch.md | CharleyGui/docs.fr-fr | 2563c94abf0d041d775f700b552d1dbe199f03d5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/programming-guide/exceptions/how-to-handle-an-exception-using-try-catch.md | CharleyGui/docs.fr-fr | 2563c94abf0d041d775f700b552d1dbe199f03d5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/programming-guide/exceptions/how-to-handle-an-exception-using-try-catch.md | CharleyGui/docs.fr-fr | 2563c94abf0d041d775f700b552d1dbe199f03d5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: How to handle an exception using try-catch - C# Programming Guide
description: Learn how to handle an exception using a try-catch block. See a code example and view additional available resources.
ms.date: 12/09/2020
helpviewer_keywords:
- exception handling [C#], try/catch blocks
- exceptions [C#], try/catch blocks
- try/catch blocks [C#]
ms.assetid: ca8e3773-980e-4767-8633-7408540e9818
ms.openlocfilehash: b6368660dbe037123f5bb6ce52502d4a94fcfc3a
ms.sourcegitcommit: 9b877e160c326577e8aa5ead22a937110d80fa44
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 12/11/2020
ms.locfileid: "97110518"
---
# <a name="how-to-handle-an-exception-using-trycatch-c-programming-guide"></a>How to handle an exception using try/catch (C# Programming Guide)
L’objectif d’un bloc [try-catch](../../language-reference/keywords/try-catch.md) est d’intercepter et de gérer une exception générée par du code opérationnel. Certaines exceptions peuvent être gérées dans un `catch` bloc et le problème résolu sans que l’exception soit levée à nouveau ; toutefois, plus souvent, la seule chose que vous pouvez faire est de vous assurer que l’exception appropriée est levée.
## <a name="example"></a>Example
In this example, <xref:System.IndexOutOfRangeException> is not the most appropriate exception: <xref:System.ArgumentOutOfRangeException> makes more sense for the method, because the error is caused by the `index` argument passed by the caller.
:::code language="csharp" source="snippets/exceptions/ExampleTryCatch.cs" id="ExampleTryCatch":::
## <a name="comments"></a>Comments
The code that causes an exception is enclosed in the `try` block. A `catch` statement is added immediately afterwards to handle `IndexOutOfRangeException`, if it occurs. The `catch` block handles the `IndexOutOfRangeException` and throws the more appropriate `ArgumentOutOfRangeException` instead. To provide the caller with as much information as possible, consider specifying the original exception as the <xref:System.Exception.InnerException%2A> of the new exception. Because the <xref:System.Exception.InnerException%2A> property is [read-only](../../properties.md#read-only), you must assign it in the constructor of the new exception.
| 77.8 | 673 | 0.793059 | fra_Latn | 0.949535 |
d06bf163cb9d8bc1fec30a37c5f77d6d9414e828 | 24 | md | Markdown | README.md | folosada/realtime-shopping-list | d5fe696901a5aa790dbd2c7745e17102859b3c4e | [
"Apache-2.0"
] | null | null | null | README.md | folosada/realtime-shopping-list | d5fe696901a5aa790dbd2c7745e17102859b3c4e | [
"Apache-2.0"
] | null | null | null | README.md | folosada/realtime-shopping-list | d5fe696901a5aa790dbd2c7745e17102859b3c4e | [
"Apache-2.0"
] | null | null | null | # realtime-shopping-list | 24 | 24 | 0.833333 | eng_Latn | 0.739079 |
d06c9f01e3a96ff793aa680e1dd68a209cfd44d3 | 4,377 | md | Markdown | docs/framework/wpf/advanced/how-to-build-a-table-programmatically.md | Ski-Dive-Dev/docs | 20f23aba26bf1037e28c8f6ec525e14d846079fd | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-06-02T11:09:59.000Z | 2019-06-15T10:17:08.000Z | docs/framework/wpf/advanced/how-to-build-a-table-programmatically.md | Ski-Dive-Dev/docs | 20f23aba26bf1037e28c8f6ec525e14d846079fd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/how-to-build-a-table-programmatically.md | Ski-Dive-Dev/docs | 20f23aba26bf1037e28c8f6ec525e14d846079fd | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-03-13T21:47:22.000Z | 2020-03-13T21:47:22.000Z | ---
title: "How to: Build a Table Programmatically"
ms.date: "03/30/2017"
dev_langs:
- "csharp"
- "vb"
helpviewer_keywords:
- "tables [WPF], creating programmatically"
ms.assetid: e3ca88f3-6e94-4b61-82fc-42104c10b761
---
# How to: Build a Table Programmatically
The following examples show how to programmatically create a <xref:System.Windows.Documents.Table> and populate it with content. The contents of the table are apportioned into five rows (represented by <xref:System.Windows.Documents.TableRow> objects contained in a <xref:System.Windows.Documents.Table.RowGroups%2A> object) and six columns (represented by <xref:System.Windows.Documents.TableColumn> objects). The rows are used for different presentation purposes, including a title row intended to title the entire table, a header row to describe the columns of data in the table, and a footer row with summary information. Note that the notion of "title", "header", and "footer" rows are not inherent to the table; these are simply rows with different characteristics. Table cells contain the actual content, which can be comprised of text, images, or nearly any other [!INCLUDE[TLA#tla_ui](../../../../includes/tlasharptla-ui-md.md)] element.
## Example
First, a <xref:System.Windows.Documents.FlowDocument> is created to host the <xref:System.Windows.Documents.Table>, and a new <xref:System.Windows.Documents.Table> is created and added to the contents of the <xref:System.Windows.Documents.FlowDocument>.
[!code-csharp[TableSnippets#_TableCreate](~/samples/snippets/csharp/VS_Snippets_Wpf/TableSnippets/CSharp/Table.cs#_tablecreate)]
[!code-vb[TableSnippets#_TableCreate](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TableSnippets/VisualBasic/Table.vb#_tablecreate)]
## Example
Next, six <xref:System.Windows.Documents.TableColumn> objects are created and added to the table's <xref:System.Windows.Documents.Table.Columns%2A> collection, with some formatting applied.
> [!NOTE]
> Note that the table's <xref:System.Windows.Documents.Table.Columns%2A> collection uses standard zero-based indexing.
[!code-csharp[TableSnippets#_TableCreateColumns](~/samples/snippets/csharp/VS_Snippets_Wpf/TableSnippets/CSharp/Table.cs#_tablecreatecolumns)]
[!code-vb[TableSnippets#_TableCreateColumns](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TableSnippets/VisualBasic/Table.vb#_tablecreatecolumns)]
## Example
Next, a title row is created and added to the table with some formatting applied. The title row happens to contain a single cell that spans all six columns in the table.
[!code-csharp[TableSnippets#_TableAddTitleRow](~/samples/snippets/csharp/VS_Snippets_Wpf/TableSnippets/CSharp/Table.cs#_tableaddtitlerow)]
[!code-vb[TableSnippets#_TableAddTitleRow](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TableSnippets/VisualBasic/Table.vb#_tableaddtitlerow)]
## Example
Next, a header row is created and added to the table, and the cells in the header row are created and populated with content.
[!code-csharp[TableSnippets#_TableAddHeaderRow](~/samples/snippets/csharp/VS_Snippets_Wpf/TableSnippets/CSharp/Table.cs#_tableaddheaderrow)]
[!code-vb[TableSnippets#_TableAddHeaderRow](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TableSnippets/VisualBasic/Table.vb#_tableaddheaderrow)]
## Example
Next, a row for data is created and added to the table, and the cells in this row are created and populated with content. Building this row is similar to building the header row, with slightly different formatting applied.
[!code-csharp[TableSnippets#_TableAddDataRow](~/samples/snippets/csharp/VS_Snippets_Wpf/TableSnippets/CSharp/Table.cs#_tableadddatarow)]
[!code-vb[TableSnippets#_TableAddDataRow](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TableSnippets/VisualBasic/Table.vb#_tableadddatarow)]
## Example
Finally, a footer row is created, added, and formatted. Like the title row, the footer contains a single cell that spans all six columns in the table.
[!code-csharp[TableSnippets#_TableAddFooterRow](~/samples/snippets/csharp/VS_Snippets_Wpf/TableSnippets/CSharp/Table.cs#_tableaddfooterrow)]
[!code-vb[TableSnippets#_TableAddFooterRow](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TableSnippets/VisualBasic/Table.vb#_tableaddfooterrow)]
## See also
- [Table Overview](table-overview.md)
| 79.581818 | 949 | 0.792552 | eng_Latn | 0.881195 |
d06cae33e8e411cebc139911a375f60be7e8a5b3 | 598 | md | Markdown | TODO.md | vishalsodani/easy-scraper | 918a65e4816cb71f153aef67a823263ce2639c6b | [
"MIT"
] | null | null | null | TODO.md | vishalsodani/easy-scraper | 918a65e4816cb71f153aef67a823263ce2639c6b | [
"MIT"
] | null | null | null | TODO.md | vishalsodani/easy-scraper | 918a65e4816cb71f153aef67a823263ce2639c6b | [
"MIT"
] | null | null | null | * 基本的なマッチの仕様
* [x] 兄弟ノードの間に何も入らないのを指定できるようにする
デフォルトで兄弟間は入らないことにした
* [x] 兄弟ノードの間に隙間を空ける機能
* [x] elementのattributeのチェック
* [x] パターンは再帰しない方が嬉しい?(直のテキストノードにのみマッチするべき?)
* テキスト同士のマッチ
* パターン拡張
* [ ] textノード以外にパターンを書けるようにする
* [x] attributeに書けるようにする
* [ ] 他にある?
* [x] textノードに複数パターンを書けるようにする
* [x] サブツリー全体にマッチするパターン
* [ ] パターンにがエレメントの時は、マッチしないのではなくて、そこに含まれる文字列全てにマッチするべき?
* 全てに含まれるような構文作るか
* 性能改善
* [ ] イテレーター化
* [ ] match_siblingsのメモ化
要らない気がしてきた
* エラーレポート
* [x] エラー検出&Resultで返すように
* [ ] エラーが非常に分かりにくいからなんとかならないか
| 23 | 59 | 0.64214 | jpn_Jpan | 0.997534 |
d06d5eb44f46ad549a96401135a934a500aa5b0d | 272 | md | Markdown | jair-bolsonaro.md | arquivo-br/arquivo-br.github.io | 41e3cef43eec594d0577b6f0988a6ea5c3137110 | [
"MIT"
] | null | null | null | jair-bolsonaro.md | arquivo-br/arquivo-br.github.io | 41e3cef43eec594d0577b6f0988a6ea5c3137110 | [
"MIT"
] | null | null | null | jair-bolsonaro.md | arquivo-br/arquivo-br.github.io | 41e3cef43eec594d0577b6f0988a6ea5c3137110 | [
"MIT"
] | null | null | null | ---
layout: page
title: Jair Bolsonaro
subtitle: Jair Messias Bolsonaro
---
Links, articles, and videos about Jair Messias Bolsonaro.
### arquivo-br:
* [Jair Bolsonaro - Complete interviews.](https://arquivo-br.github.io/2018-10-04-jair-bolsonaro-entrevistas-completas/)
| 22.666667 | 122 | 0.753676 | por_Latn | 0.897196 |
d06d9de4507bfac3f051a4aa8354abf57ba7dd12 | 3,951 | md | Markdown | README.md | NEU-ZJX/Global-Trajectory-Optimization | 43654a829014a1f3fa57f28c94f7101b5d4b54ca | [
"MIT"
] | 10 | 2018-10-06T05:11:24.000Z | 2021-08-06T09:07:41.000Z | README.md | HansRobo/Global-Trajectory-Optimization | 43654a829014a1f3fa57f28c94f7101b5d4b54ca | [
"MIT"
] | null | null | null | README.md | HansRobo/Global-Trajectory-Optimization | 43654a829014a1f3fa57f28c94f7101b5d4b54ca | [
"MIT"
] | 6 | 2018-10-06T13:39:36.000Z | 2022-01-23T15:05:26.000Z | # Global Trajectory Optimization via the Generalized Label Correcting Method
This is a library for solving global trajectory optimization problems given problem specific derived classes for:
1) A dynamical model for the system (i.e. x'(t)=f(x(t),u(t)$)
2) A cost function of a trajectory $x(t)$ and input signal u(t) to be minimized ( i.e. C(x,u)= Integral g(x(t),u(t)) )
3) A collision detector. That is, if some subset of the state space must be avoided, then function must be provided to determine if a trajectory intersects that region
4) An admissible heuristic to be used in a graph search. Note that the heuristic h(x)=0 is always admissible.
This library may be considered as an alternative to
a) Sampling-based planners
b) Variational trajectory optimization utilizing nonlinear programming techniques
### Pros/Cons over sampling-based planners:
This method handles differential constraints that arise in nonholonomic planning and planning for complex dynamical systems. The reason is that optimal sampling-based planners (such as RRT*) require a steering subroutine in addition to (1)-(4) above, which is non-trivial to provide in general. The downside is that for holonomic planning, such as simple shortest-path queries, RRT* and PRM* achieve better performance in comparable implementations.
### Pros/Cons over variational trajectory optimization:
This method is far more robust than nonlinear-programming-based techniques. It does not require a "good initial guess" and it will not converge to a merely locally optimal solution. The downside is that the complexity of this method is exponential in the dimension of the state space. State spaces of up to 3 dimensions are dealt with very well. In higher dimensions, some domain knowledge will be required to construct a good heuristic.
## Documentation
A technical paper describing the theoretical aspects of the method can be found here:
[https://arxiv.org/abs/1607.06966](https://arxiv.org/abs/1607.06966)
Documentation of the C++ implementation can be found at the link below or by running doxygen from the top level directory:
[https://codedocs.xyz/bapaden/Global-Trajectory-Optimization/md_README.html](https://codedocs.xyz/bapaden/Global-Trajectory-Optimization/md_README.html)
## Installation
To install the library after downloading the source code, enter the following terminal commands from the top level directory
```
mkdir build
cd build
cmake ..
make
make test
sudo make install
```
The unit tests are run using GTest. If it is not installed on your system, you can install it with apt or compile it from source
```
sudo apt-get install libgtest-dev
```
[https://github.com/google/googletest](https://github.com/google/googletest)
## Running the examples
Several basic examples demonstrating how to interface with the library can be found in the examples/ directory. Each example generates data that is saved in the plots/ directory, which contains python scripts to generate basic illustrations of the solution. To run the examples:
```
cd GlobalTrajectoryOptimization/build/examples
./shortest-path-demo
./pendulum-swingup-demo
./nonholonomic-car-demo
```
To view the solutions:
```
cd GlobalTrajectoryOptimization/examples
python shortest_path_viewer.py
```
## API
Using CMake, you can link to the installed library as follows:
```
find_package(glc)
include_directories(${GLC_INCLUDE_DIRS})
add_executable(your_awesome_planning_algorithm your_src.cpp)
target_link_libraries(your_awesome_planning_algorithm glc_planner_core)
```
In your source code, where you instantiate a Planner object, include the header
```
#include<glc/glc_planner_core.h>
```
You will have to implement derived classes for the following virtual base classes: (a) DynamicalSystem, (b) CostFunction, (c) Heuristic, (d) GoalRegion, (e) Obstacle. These base classes will need to meet the requirements described in the technical paper as well as the source documentation.
| 42.483871 | 448 | 0.785624 | eng_Latn | 0.995317 |
d06def32037e3dc8e1e5f4aeb0b2e694cbd2a864 | 2,177 | md | Markdown | docs/vs-2015/debugger/debugging-applications.md | jcarmon4/visualstudio-docs.es-es | 2f133c9f0a90eb92429dcca0573a0b3f458cdcf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/debugging-applications.md | jcarmon4/visualstudio-docs.es-es | 2f133c9f0a90eb92429dcca0573a0b3f458cdcf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/debugging-applications.md | jcarmon4/visualstudio-docs.es-es | 2f133c9f0a90eb92429dcca0573a0b3f458cdcf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Depuración de aplicaciones | Documentos de Microsoft
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-debug
ms.topic: conceptual
dev_langs:
- FSharp
- VB
- CSharp
- C++
ms.assetid: f7f08402-610e-47f0-ba10-575dd395a0f0
caps.latest.revision: 5
author: MikeJo5000
ms.author: mikejo
manager: jillfra
ms.openlocfilehash: b689a3be22c9fec775cf42b9d26393a886174daf
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 04/23/2019
ms.locfileid: "68197522"
---
# <a name="debugging-applications"></a>Aplicaciones de depuración
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Las secciones siguientes tratan la depuración de tipos específicos de aplicaciones. Diferentes tipos de aplicaciones en distintos lenguajes requieren configuraciones y técnicas propias, y presentan distintos problemas que es necesario depurar.
## <a name="debugging-for-different-types-of-applications"></a>Depuración para distintos tipos de aplicaciones
|||
|-|-|
|[Debugging Windows Store and Windows Universal Apps](../debugger/debugging-windows-store-and-windows-universal-apps.md)|Describes how to debug Windows Store apps and Universal Windows apps.|
|[Debugging Managed Code](../debugger/debugging-managed-code.md)|Describes how to debug managed code (Visual C#, Visual Basic, and F#).|
|[Debugging Native Code](../debugger/debugging-native-code.md)|Describes how to debug native C++ applications.|
|[Debugging GPU Code](../debugger/debugging-gpu-code.md)|Describes how to debug C++ code that runs on the graphics processing unit (GPU).|
|[Graphics Diagnostics (Debugging DirectX Graphics)](../debugger/visual-studio-graphics-diagnostics.md)|Describes how to debug DirectX graphics.|
|[Debugging Web Applications and Script](../debugger/debugging-web-applications-and-script.md)|Describes how to debug ASP.NET and AJAX web applications.|
|[Debugging WCF Services](../debugger/debugging-wcf-services.md)|Describes how to debug Windows Communication Foundation services.|
d06e2cead52af0fff4a40d48f74b86209e82336f | 2,747 | md | Markdown | docs/master-data-services/delete-a-subscription-view-master-data-services.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/master-data-services/delete-a-subscription-view-master-data-services.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/master-data-services/delete-a-subscription-view-master-data-services.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-12-30T12:52:58.000Z | 2020-12-30T12:52:58.000Z | ---
description: Delete a Subscription View (Master Data Services)
title: Delete a Subscription View
ms.custom: ''
ms.date: 03/01/2017
ms.prod: sql
ms.prod_service: mds
ms.reviewer: ''
ms.technology: master-data-services
ms.topic: conceptual
helpviewer_keywords:
- deleting subscription views [Master Data Services]
- subscription views [Master Data Services], deleting
ms.assetid: 14b09c81-1297-48b0-8fe5-991414b930e0
author: lrtoyou1223
ms.author: lle
ms.openlocfilehash: 4dea51dd78f69c7906ec149dcefd52aa3f9e5dfb
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 08/17/2020
ms.locfileid: "88500615"
---
# <a name="delete-a-subscription-view-master-data-services"></a>Delete a Subscription View (Master Data Services)
[!INCLUDE [SQL Server - Windows only ASDBMI ](../includes/applies-to-version/sql-windows-only-asdbmi.md)]
  In [!INCLUDE[ssMDSshort](../includes/ssmdsshort-md.md)], you can delete a subscription view that you no longer need. When you delete a subscription view in [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)], the view is removed from the [!INCLUDE[ssMDSshort](../includes/ssmdsshort-md.md)] database. You can also edit a subscription view instead.
## <a name="prerequisites"></a>Prerequisites
 To perform this procedure:
- You must have permission to access the **Integration Management** functional area. For more information, see [Functional Area Permissions (Master Data Services)](../master-data-services/functional-area-permissions-master-data-services.md).
- You must be a model administrator. For more information, see [Administrators (Master Data Services)](../master-data-services/administrators-master-data-services.md).
### <a name="to-delete-a-subscription-view"></a>To delete a subscription view
1. In [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)], click **Integration Management**.
2. On the menu bar, click **Create Views**.
3. On the **Subscription Views** page, select the row of the view that you want to delete.
4. Click **Delete**.
5. In the confirmation dialog box, click **OK**.
## <a name="see-also"></a>See Also
 [Create a Subscription View to Export Data (Master Data Services)](../master-data-services/create-a-subscription-view-to-export-data-master-data-services.md)
 [Overview: Exporting Data (Master Data Services)](../master-data-services/overview-exporting-data-master-data-services.md)
d06ef05821784723f43959851539b0f9fbd1f099 | 1,453 | md | Markdown | user-story-10-grant-outputs/README.md | ArtemisLav/pidgraph-notebooks-python | 7224adf16d1c5c47f6accb43c6b1814f2f2d0ac6 | [
"MIT"
] | null | null | null | user-story-10-grant-outputs/README.md | ArtemisLav/pidgraph-notebooks-python | 7224adf16d1c5c47f6accb43c6b1814f2f2d0ac6 | [
"MIT"
] | null | null | null | user-story-10-grant-outputs/README.md | ArtemisLav/pidgraph-notebooks-python | 7224adf16d1c5c47f6accb43c6b1814f2f2d0ac6 | [
"MIT"
] | null | null | null | ## [FREYA](https://www.project-freya.eu/en) WP2 [User Story 10](https://github.com/datacite/freya/issues/45): As a funder, we want to be able to find all the outputs related to our awarded grants, including block grants such as doctoral training grants, for management info and looking at impact.
### Jupyter Notebook:
[](https://mybinder.org/v2/gh/datacite/pidgraph-notebooks-python/master?filepath=user-story-10-grant-outputs%2Fpy-grant-outputs-with-output.ipynb)
### Examples of GraphQL Queries Used:
* Get outputs of [FREYA grant award](https://cordis.europa.eu/project/id/777523) from [European Commission](https://doi.org/10.13039/501100000780)
```
{
funder(id: "https://doi.org/10.13039/501100000780") {
name
works(query: "fundingReferences.awardNumber:777523", first: 75) {
totalCount
nodes {
id
formattedCitation(style: "vancouver")
titles {
title
}
descriptions {
description
}
types {
resourceType
}
dates {
date
dateType
}
versionOfCount
creators {
id
name
}
fundingReferences {
funderIdentifier
funderName
awardNumber
awardTitle
}
citationCount
viewCount
downloadCount
}
}
}
}
```
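The same query can be sent outside a notebook with a plain HTTP POST. The sketch below targets the public DataCite GraphQL endpoint that the notebooks query (`https://api.datacite.org/graphql`); the helper names and the abridged query are illustrative, not part of the notebook itself.

```python
import json
import urllib.request

DATACITE_GRAPHQL = "https://api.datacite.org/graphql"

def build_payload(query, variables=None):
    """Wrap a GraphQL query string in the JSON body the endpoint expects."""
    return json.dumps({"query": query, "variables": variables or {}}).encode("utf-8")

def run_query(query):
    """POST a query to the DataCite GraphQL API and decode the JSON response."""
    req = urllib.request.Request(
        DATACITE_GRAPHQL,
        data=build_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Abridged version of the funder query above (illustrative).
FUNDER_QUERY = """
{
  funder(id: "https://doi.org/10.13039/501100000780") {
    name
    works(query: "fundingReferences.awardNumber:777523", first: 75) {
      totalCount
    }
  }
}
"""

# Requires network access to api.datacite.org:
# result = run_query(FUNDER_QUERY)
# print(result["data"]["funder"]["works"]["totalCount"])
```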
d06fb3c0a0968d15151c1899da34a62e303e30b7 | 786 | md | Markdown | CHANGELOG.md | cyrax111/avatar_stack | 2119cc0ab0be2dd54f2f043692405f2a9df960ee | [
"BSD-3-Clause"
] | null | null | null | CHANGELOG.md | cyrax111/avatar_stack | 2119cc0ab0be2dd54f2f043692405f2a9df960ee | [
"BSD-3-Clause"
] | 1 | 2022-02-03T13:06:56.000Z | 2022-02-08T04:19:20.000Z | CHANGELOG.md | cyrax111/avatar_stack | 2119cc0ab0be2dd54f2f043692405f2a9df960ee | [
"BSD-3-Clause"
] | null | null | null | ## 1.1.1
* Enhanced readme.
## 1.1.0
* Fixed a dependency error.
## 1.0.9
* Fixed a dependency error.
## 1.0.8
* Fixed a dependency error.
## 1.0.7
* Fixed a dependency error.
## 1.0.6
* Fixed a dependency error.
## 1.0.5
* Updated dependencies
## 1.0.4
* Fixed a dependency error
## 1.0.3
* Fixed a dependency error
## 1.0.2
* Made code more robust
* Added tests
* Updated dependencies
## 1.0.1
* Added CI
## 1.0.0
* Added tests
* Refactored code
## 0.5.0
* Added positional parameter of the additional space between an info item and other items
* Added the way to tile items
* Added documentation
* Added examples
* Updated readme
## 0.4.0
* Added documentation
## 0.3.0
* Code refactoring
* Added alignment
* Added examples
## 0.2.1
* Initial release.
| 10.767123 | 89 | 0.661578 | eng_Latn | 0.986949 |
d06ff8abed6c001e292713a43c9549a9cc36734e | 9,979 | md | Markdown | hcs/r/hcs_cluster.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 78 | 2021-01-15T14:10:30.000Z | 2022-02-14T09:17:40.000Z | hcs/r/hcs_cluster.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 5 | 2021-04-09T15:21:28.000Z | 2022-01-28T19:02:05.000Z | hcs/r/hcs_cluster.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 30 | 2021-01-17T13:16:57.000Z | 2022-03-21T12:52:08.000Z | # hcs_cluster
[back](../hcs.md)
### Index
- [Example Usage](#example-usage)
- [Variables](#variables)
- [Resource](#resource)
- [Outputs](#outputs)
### Terraform
```terraform
terraform {
required_providers {
hcs = ">= 0.2.0"
}
}
```
[top](#index)
### Example Usage
```terraform
module "hcs_cluster" {
source = "./modules/hcs/r/hcs_cluster"
# cluster_mode - (required) is a type of string
cluster_mode = null
# cluster_name - (optional) is a type of string
cluster_name = null
# consul_datacenter - (optional) is a type of string
consul_datacenter = null
# consul_external_endpoint - (optional) is a type of bool
consul_external_endpoint = null
# consul_federation_token - (optional) is a type of string
consul_federation_token = null
# email - (required) is a type of string
email = null
# location - (optional) is a type of string
location = null
# managed_application_name - (required) is a type of string
managed_application_name = null
# managed_resource_group_name - (optional) is a type of string
managed_resource_group_name = null
# min_consul_version - (optional) is a type of string
min_consul_version = null
# plan_name - (optional) is a type of string
plan_name = null
# resource_group_name - (required) is a type of string
resource_group_name = null
# tags - (optional) is a type of map of string
tags = {}
# vnet_cidr - (optional) is a type of string
vnet_cidr = null
timeouts = [{
create = null
default = null
delete = null
update = null
}]
}
```
[top](#index)
### Variables
```terraform
variable "cluster_mode" {
description = "(required) - The mode of the cluster ('Development' or 'Production'). Development clusters only have a single Consul server. Production clusters are fully supported, full featured, and deploy with a minimum of three hosts."
type = string
}
variable "cluster_name" {
description = "(optional) - The name of the cluster Managed Resource. If not specified, it is defaulted to the value of `managed_application_name`."
type = string
default = null
}
variable "consul_datacenter" {
description = "(optional) - The Consul data center name of the cluster. If not specified, it is defaulted to the value of `managed_application_name`."
type = string
default = null
}
variable "consul_external_endpoint" {
description = "(optional) - Denotes that the cluster has an external endpoint for the Consul UI. Defaults to `false`."
type = bool
default = null
}
variable "consul_federation_token" {
description = "(optional) - The token used to join a federation of Consul clusters. If the cluster is not part of a federation, this field will be empty."
type = string
default = null
}
variable "email" {
description = "(required) - The contact email for the primary owner of the cluster."
type = string
}
variable "location" {
description = "(optional) - The Azure region that the cluster is deployed to. If not specified, it is defaulted to the region of the Resource Group the Managed Application belongs to."
type = string
default = null
}
variable "managed_application_name" {
description = "(required) - The name of the HCS Azure Managed Application."
type = string
}
variable "managed_resource_group_name" {
description = "(optional) - The name of the Managed Resource Group in which the cluster resources belong. If not specified, it is defaulted to the value of `managed_application_name` with 'mrg-' prepended."
type = string
default = null
}
variable "min_consul_version" {
description = "(optional) - The minimum Consul version of the cluster. If not specified, it is defaulted to the version that is currently recommended by HCS."
type = string
default = null
}
variable "plan_name" {
description = "(optional) - The name of the Azure Marketplace HCS plan for the cluster. If not specified, it will default to the current HCS default plan (see the `hcs_plan_defaults` data source)."
type = string
default = null
}
variable "resource_group_name" {
description = "(required) - The name of the Resource Group in which the HCS Azure Managed Application belongs."
type = string
}
variable "tags" {
description = "(optional) - A mapping of tags to assign to the HCS Azure Managed Application resource."
type = map(string)
default = null
}
variable "vnet_cidr" {
description = "(optional) - The VNET CIDR range of the Consul cluster. Defaults to `172.25.16.0/24`."
type = string
default = null
}
variable "timeouts" {
description = "nested block: NestingSingle, min items: 0, max items: 0"
type = set(object(
{
create = string
default = string
delete = string
update = string
}
))
default = []
}
```
[top](#index)
### Resource
```terraform
resource "hcs_cluster" "this" {
# cluster_mode - (required) is a type of string
cluster_mode = var.cluster_mode
# cluster_name - (optional) is a type of string
cluster_name = var.cluster_name
# consul_datacenter - (optional) is a type of string
consul_datacenter = var.consul_datacenter
# consul_external_endpoint - (optional) is a type of bool
consul_external_endpoint = var.consul_external_endpoint
# consul_federation_token - (optional) is a type of string
consul_federation_token = var.consul_federation_token
# email - (required) is a type of string
email = var.email
# location - (optional) is a type of string
location = var.location
# managed_application_name - (required) is a type of string
managed_application_name = var.managed_application_name
# managed_resource_group_name - (optional) is a type of string
managed_resource_group_name = var.managed_resource_group_name
# min_consul_version - (optional) is a type of string
min_consul_version = var.min_consul_version
# plan_name - (optional) is a type of string
plan_name = var.plan_name
# resource_group_name - (required) is a type of string
resource_group_name = var.resource_group_name
# tags - (optional) is a type of map of string
tags = var.tags
# vnet_cidr - (optional) is a type of string
vnet_cidr = var.vnet_cidr
dynamic "timeouts" {
for_each = var.timeouts
content {
# create - (optional) is a type of string
create = timeouts.value["create"]
# default - (optional) is a type of string
default = timeouts.value["default"]
# delete - (optional) is a type of string
delete = timeouts.value["delete"]
# update - (optional) is a type of string
update = timeouts.value["update"]
}
}
}
```
[top](#index)
### Outputs
```terraform
output "blob_container_name" {
description = "returns a string"
value = hcs_cluster.this.blob_container_name
}
output "cluster_name" {
description = "returns a string"
value = hcs_cluster.this.cluster_name
}
output "consul_automatic_upgrades" {
description = "returns a bool"
value = hcs_cluster.this.consul_automatic_upgrades
}
output "consul_ca_file" {
description = "returns a string"
value = hcs_cluster.this.consul_ca_file
}
output "consul_cluster_id" {
description = "returns a string"
value = hcs_cluster.this.consul_cluster_id
}
output "consul_config_file" {
description = "returns a string"
value = hcs_cluster.this.consul_config_file
}
output "consul_connect" {
description = "returns a bool"
value = hcs_cluster.this.consul_connect
}
output "consul_datacenter" {
description = "returns a string"
value = hcs_cluster.this.consul_datacenter
}
output "consul_external_endpoint_url" {
description = "returns a string"
value = hcs_cluster.this.consul_external_endpoint_url
}
output "consul_private_endpoint_url" {
description = "returns a string"
value = hcs_cluster.this.consul_private_endpoint_url
}
output "consul_root_token_accessor_id" {
description = "returns a string"
value = hcs_cluster.this.consul_root_token_accessor_id
}
output "consul_root_token_secret_id" {
description = "returns a string"
value = hcs_cluster.this.consul_root_token_secret_id
sensitive = true
}
output "consul_snapshot_interval" {
description = "returns a string"
value = hcs_cluster.this.consul_snapshot_interval
}
output "consul_snapshot_retention" {
description = "returns a string"
value = hcs_cluster.this.consul_snapshot_retention
}
output "consul_version" {
description = "returns a string"
value = hcs_cluster.this.consul_version
}
output "id" {
description = "returns a string"
value = hcs_cluster.this.id
}
output "location" {
description = "returns a string"
value = hcs_cluster.this.location
}
output "managed_application_id" {
description = "returns a string"
value = hcs_cluster.this.managed_application_id
}
output "managed_resource_group_name" {
description = "returns a string"
value = hcs_cluster.this.managed_resource_group_name
}
output "plan_name" {
description = "returns a string"
value = hcs_cluster.this.plan_name
}
output "state" {
description = "returns a string"
value = hcs_cluster.this.state
}
output "storage_account_name" {
description = "returns a string"
value = hcs_cluster.this.storage_account_name
}
output "storage_account_resource_group" {
description = "returns a string"
value = hcs_cluster.this.storage_account_resource_group
}
output "vnet_id" {
description = "returns a string"
value = hcs_cluster.this.vnet_id
}
output "vnet_name" {
description = "returns a string"
value = hcs_cluster.this.vnet_name
}
output "vnet_resource_group_name" {
description = "returns a string"
value = hcs_cluster.this.vnet_resource_group_name
}
output "this" {
value = hcs_cluster.this
}
```
[top](#index)
d0701d1679ebfbbd8d3ecf919e0fba9086c292ce | 2,939 | md | Markdown | chapter_12/README.md | Stratus3D/programming_erlang_exercises | e4fd01024812059d338facc20f551e7dff4dac7e | [
"MIT"
] | 28 | 2015-03-25T07:23:14.000Z | 2022-01-31T21:39:13.000Z | chapter_12/README.md | Stratus3D/programming_erlang_exercises | e4fd01024812059d338facc20f551e7dff4dac7e | [
"MIT"
] | 2 | 2016-05-11T11:44:55.000Z | 2017-05-17T17:09:37.000Z | chapter_12/README.md | Stratus3D/programming_erlang_exercises | e4fd01024812059d338facc20f551e7dff4dac7e | [
"MIT"
] | 8 | 2017-01-29T15:07:06.000Z | 2021-08-15T14:41:29.000Z | # Exercises for Chapter 12
**1. Write a function `start(AnAtom, Fun)` to register `AnAtom` as `spawn(Fun)`. Make sure the program works correctly in the case when two parallel processes simultaneously evaluate `start/2`. In this case ensure one process fails and the other succeeds.**
In `exercise_1/` there is a module named `spawn_registered_fun` that contains a `start/2` function.
Example usage:
```
erlc spawn_registered_fun.erl
erl
1> spawn_registered_fun:start(foo, fun() -> receive _ -> ok end end).
{ok,<0.40.0>}
2> spawn_registered_fun:start(foo, fun() -> receive _ -> ok end end).
{error,already_running}
```
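One possible shape for `start/2` (a sketch that relies on `register/2` being atomic; the module name and details are assumptions, and the actual file in `exercise_1/` may differ):

```erlang
-module(spawn_registered_fun_sketch).
-export([start/2]).

%% register/2 is atomic, so of two racing callers exactly one wins;
%% the loser gets a badarg, kills its spawned process, and reports failure.
start(AnAtom, Fun) ->
    Pid = spawn(Fun),
    try register(AnAtom, Pid) of
        true -> {ok, Pid}
    catch
        error:badarg ->
            exit(Pid, kill),
            {error, already_running}
    end.
```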
**2. Measure process spawning time on your machine. Use the program in Section 12.3 on page 189. Plot a graph of the number of processes against process creation time. What can you deduce from the graph?**
In `exercise_2/` there is a file named `processes_graph.erl`. This Erlang file spawns processes and times how long it takes to spawn them. The module the file defines exports two public functions. `spawn_and_time/1` spawns N processes and returns the average time it took to spawn them. `generate_data/3` calls `spawn_and_time/1` with increasing process counts until a certain number of processes have been created, and writes the resulting averages to a CSV file.
Example usage:
```
erlc processes_graph.erl
erl
% Generate file for averages up 10,000 processes
1> processes_graph:generate_data("10000_processes.csv", 10000, 100).
% Generate file for averages up 30,000 processes
2> processes_graph:generate_data("30000_processes.csv", 30000, 100).
```
Graph the data in the CSVs can be done with the gnuplot files I have already created:
```
$ gnuplot -p spawn_times_10000.gnuplot
# generates graph of average runtimes for up to 10,000 processes
$ gnuplot -p spawn_times_30000.gnuplot
# generates graph of average runtimes for up to 30,000 processes
```
The graphs should look something like the one below

Looking at these graphs it is clear that the average spawn time for a single process is constant. Whether there is 1 process running or 10,000, spawning a new proceses takes about the same amount of time.
**3. Write a ring benchmark. Create a N processes in a ring. Send a message around the ring M times so that N * M messages are sent. Time how long it takes for different values of N and M.**
In `exercises_3/` there is a file named `process_ring.erl`. The file defines a module that exports a function named `run/3`. Invoking this function spawns a ring of processes and sends a message around the ring the number of times specified. The time it takes to spawn the ring and send the message around it is printed out by the function.
Example usage:
```
$ erlc process_ring.erl
$ erl
1> process_ring:run(1000, 1000, message).
Spawning 1000 processes and sending the message around the ring 1000 times took 485268 microseconds
stop
```
| 48.180328 | 457 | 0.771011 | eng_Latn | 0.994437 |
d0704f6de33d315a468811d4ff9bb25849326d06 | 134 | md | Markdown | README.md | matthewrenze/genetic-algorithms | bbc7da51ebf697f847d640cb28eb9c9ad63545e3 | [
"BSD-3-Clause"
] | 2 | 2020-05-19T01:55:05.000Z | 2021-08-31T00:17:04.000Z | README.md | matthewrenze/genetic-algorithms | bbc7da51ebf697f847d640cb28eb9c9ad63545e3 | [
"BSD-3-Clause"
] | null | null | null | README.md | matthewrenze/genetic-algorithms | bbc7da51ebf697f847d640cb28eb9c9ad63545e3 | [
"BSD-3-Clause"
] | null | null | null | # Genetic Algorithms
Genetic-algorithm solutions to classic problems in computer science
Please see knapsack.py for more information. | 33.5 | 67 | 0.843284 | eng_Latn | 0.976131 |
d070fa09a70967aadfafd31325d1869753c17754 | 1,496 | md | Markdown | hieradata/README.md | rgrizzell/puppet-template | b485d151e6216586869d0602a06f5e3810e06379 | [
"Apache-2.0"
] | null | null | null | hieradata/README.md | rgrizzell/puppet-template | b485d151e6216586869d0602a06f5e3810e06379 | [
"Apache-2.0"
] | null | null | null | hieradata/README.md | rgrizzell/puppet-template | b485d151e6216586869d0602a06f5e3810e06379 | [
"Apache-2.0"
] | null | null | null | # Puppet Hieradata
The files contained within this directory serve as a key-value store for the various services, virtual machines, cloud
instances, and containers. These key-value pairs are sourced by Puppet through the use of `lookup()` functions in the
Puppet manifests.
## YAML Hierarchy
The hierarchy of files allows for a fair degree of flexibility allowing for more refined overrides of other values. The
following ranking with the higher ones taking precedent over the lower ones.
1. `nodes/<hostname.subdomain.example.com.yaml>` - Only applies to a specific server.
2. `domains/<subdomain.example.com.yaml>` - Only applies to servers in a subdomain.
3. `<puppet_environment.yaml>` - Applicable to clients in an environment.
4. `common.yaml` - Applicable to all clients.
Additional hieradata backends can be defined to add flexibility, but be warned that if not properly designed, there is a
distinct possibility for improperly sourced key-value pairs.
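As a concrete illustration, the lookup order above corresponds to a Hiera 5 configuration along the following lines (a sketch; the fact names and `datadir` are assumptions to adapt to your site):

```yaml
# hiera.yaml (illustrative Hiera 5 config; adjust paths and facts to your layout)
---
version: 5
defaults:
  datadir: hieradata
  data_hash: yaml_data
hierarchy:
  - name: "Per-node overrides"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Per-domain data"
    path: "domains/%{facts.networking.domain}.yaml"
  - name: "Per-environment data"
    path: "%{server_facts.environment}.yaml"
  - name: "Common defaults"
    path: "common.yaml"
```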
## EYAML
EYAML (Encrypted YAML) is a secure method for protecting sensitive data such as passwords, API tokens, personally
identifiable information, and private keys.
EYAML files are plaintext in nature and work just like regular YAML files. The main difference is that the values in a
key-value pair are encrypted allowing users to open and edit the file without the need for decryption. This means that
so long as someone has the public key, they can encrypt data and insert it into the file.
For more information, see the EYAML documentation.
d07221b1ce7a9072acf7f727efc022a4ebed610c | 1,600 | md | Markdown | content/en/2019-03-15-why study r.md | RunhangShu/Runhang-s-Web | a0dc1494bd9d42d6540cdf6add489fa997224e42 | [
"MIT"
] | null | null | null | content/en/2019-03-15-why study r.md | RunhangShu/Runhang-s-Web | a0dc1494bd9d42d6540cdf6add489fa997224e42 | [
"MIT"
] | 1 | 2021-09-27T02:26:05.000Z | 2021-09-28T20:53:34.000Z | content/en/2019-03-15-why study r.md | RunhangShu/Runhang-s-Web | a0dc1494bd9d42d6540cdf6add489fa997224e42 | [
"MIT"
] | 2 | 2021-05-15T06:40:07.000Z | 2021-09-27T02:18:35.000Z | ---
title: "我是怎么入的r坑"
author: "Run"
date: '2019-03-15'
output: html_document
---
<font face="FANGSONG">
Early last year, R, Perl, Python, and all sorts of other strange beasts suddenly appeared in my field of view. I just remember everyone at the time discussing how important programming languages would be for the data analysis, plotting, metagenomics, and microbiome and proteomics work in our future research. I agreed completely, but for someone who only knew how to use a computer to play games and watch videos, I could not bring myself to start learning, even though inside me the waves had long been surging.
Brother Lihe once said something to me:
<br>  <font color=orange>If you don't master one or two bioinformatics skills, how are you going to<br>  distinguish yourself from the tens of millions of researchers holding pipettes?
</font>
<br> That sentence was the final undercurrent needed to set off the storm.
<br> So now the question: with so many programming languages popular in biology these days, which one should I pick to get started?
I took this question to Dr. Wong, the PI of the lab where I am currently interning; he is from Hong Kong, and we usually call him Dr. Wong. He told me that, as far as his lab is concerned, R is enough.
<br> **Alright then, let's start with R!** What follows are my impressions after two months of learning it.
<br> Learning R is like learning a human language. Its open-source environment means that anyone who speaks the language can write their own "programs" (in R these are called packages; I'll just call them programs below) and then share them on [github](https://github.com/RunhangShu/cnm) (I like to think of GitHub as a cloud drive through which everyone can share their code).
<br> For one thing, a computational program you develop can be borrowed by people working on the same problems. For example, [a paper published in Nature Communications in 2015](https://www.nature.com/articles/ncomms9370#ref20) used an analysis method called PGLS to analyze insect diversity and herbivory. PGLS was [proposed in 1997 for analyzing phylogenies](https://www.journals.uchicago.edu/doi/abs/10.1086/286013), and only in recent years did someone implement its computation in an R package called [caper](https://cran.r-project.org/web/packages/caper/vignettes/caper.pdf). If you are lost by now, go back to that Nature Communications paper: the authors used this R package directly to analyze their data. It is worth mentioning that, perhaps to make caper even more capable, its developers also integrated the earlier ape, mvtnorm, and MASS packages. In just a few years, caper alone has been cited nearly 300 times.
<br> For another, publishing your data and packages openly alongside your paper lets others reproduce your results. **Reproducibility** is a crucial link in biological research. After all, failures to reproduce have led to retraction scandals, with the underlying data nowhere to be found, hhh~~ I'll save the discussion of how to record and store data for another time.
<br> Having said all this, R's power for those of us studying biology needs no further argument. But R has one more very strong feature----***plotting***. According to incomplete statistics, out of 10 people making figures, 11 of them are using [ggplot2](https://ggplot2.tidyverse.org/reference/). As for ggplot2, I am still learning it; the book I recommend is *R Graphics Cookbook*, a classic on the level of the TOEFL "red book", which also has a [companion website where you can learn R for free](http://www.cookbook-r.com/Graphs/).
</font>
d072317d71c2706f548a8c7f5112c22f2730ca7f | 16,333 | md | Markdown | README.md | ohadschn/aad-pod-identity | a00cf1b4a79d001f995d356e6e0c33f92a8e0b34 | [
"MIT"
] | null | null | null | README.md | ohadschn/aad-pod-identity | a00cf1b4a79d001f995d356e6e0c33f92a8e0b34 | [
"MIT"
] | null | null | null | README.md | ohadschn/aad-pod-identity | a00cf1b4a79d001f995d356e6e0c33f92a8e0b34 | [
"MIT"
] | null | null | null | # AAD Pod Identity
[](https://dev.azure.com/azure/aad-pod-identity/_build/latest?definitionId=77&branchName=master)
[](https://codecov.io/gh/Azure/aad-pod-identity)
[](https://godoc.org/github.com/Azure/aad-pod-identity)
[](https://goreportcard.com/report/github.com/Azure/aad-pod-identity)
AAD Pod Identity enables Kubernetes applications to access cloud resources securely with [Azure Active Directory] (AAD).
Using Kubernetes primitives, administrators configure identities and bindings to match pods. Then without any code modifications, your containerized applications can leverage any resource in the cloud that depends on AAD as an identity provider.
----
## Contents
* [v1.6.0 Breaking Change](#v160-breaking-change)
* [Getting Started](#getting-started)
* [Components](#components)
+ [Managed Identity Controller](#managed-identity-controller)
+ [Node Managed Identity](#node-managed-identity)
* [Role Assignment](#role-assignment)
* [Demo](#demo)
+ [1. Deploy aad-pod-identity](#1-deploy-aad-pod-identity)
+ [2. Create an identity on Azure](#2-create-an-identity-on-azure)
+ [3. Deploy AzureIdentity](#3-deploy-azureidentity)
+ [4. (Optional) Match pods in the namespace](#4--optional--match-pods-in-the-namespace)
+ [5. Deploy AzureIdentityBinding](#5-deploy-azureidentitybinding)
+ [6. Deployment and Validation](#6-deployment-and-validation)
* [Uninstall Notes](#uninstall-notes)
* [What To Do Next?](#what-to-do-next)
* [Code of Conduct](#code-of-conduct)
* [Support](#support)
## v1.6.0 Breaking Change
With https://github.com/Azure/aad-pod-identity/pull/398, the [client-go](https://github.com/kubernetes/client-go) library is upgraded to v0.17.2, where CRD [fields are now case sensitive](https://github.com/kubernetes/kubernetes/issues/64612). If you are upgrading MIC and NMI from v1.x.x to v1.6.0, MIC v1.6.0+ will upgrade the fields of existing `AzureIdentity` and `AzureIdentityBinding` on startup to the new format to ensure backward compatibility. A configmap called `aad-pod-identity-config` is created to record and confirm the successful type upgrade.
However, for future `AzureIdentity` and `AzureIdentityBinding` created using v1.6.0+, the following fields need to be changed:
### `AzureIdentity`
| < 1.6.0 | >= 1.6.0 |
|------------------|------------------|
| `ClientID` | `clientID` |
| `ClientPassword` | `clientPassword` |
| `ResourceID` | `resourceID` |
| `TenantID` | `tenantID` |
### `AzureIdentityBinding`
| < 1.6.0 | >= 1.6.0 |
|-----------------|-----------------|
| `AzureIdentity` | `azureIdentity` |
| `Selector` | `selector` |
### `AzurePodIdentityException`
| < 1.6.0 | >= 1.6.0 |
|-----------------|-----------------|
| `PodLabels` | `podLabels` |
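For example, an `AzureIdentity` and its `AzureIdentityBinding` written against the v1.6.0+ schema use the lowercase field names (the subscription, resource group, and client ID below are placeholders):

```yaml
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: demo
spec:
  type: 0                 # 0 = user-assigned MSI, 1 = service principal
  resourceID: /subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/demo
  clientID: <ClientID>
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: demo-binding
spec:
  azureIdentity: demo
  selector: demo          # pods labeled aadpodidbinding: demo get this identity
```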
## Getting Started
It is recommended to get familiar with the AAD Pod Identity ecosystem before diving into the demo. It consists of the Managed Identity Controller (MIC) deployment, the Node Managed Identity (NMI) DaemonSet, and several standard and custom resources.
## Components
AAD Pod Identity has two components: the [Managed Identity Controller] (MIC) and the [Node Managed Identity] (NMI).
### Managed Identity Controller
The Managed Identity Controller (MIC) is a Kubernetes [custom resource] that watches for changes to pods, `AzureIdentity` and `AzureIdentityBindings` through the Kubernetes API server. When it detects a relevant change, the MIC adds or deletes `AzureAssignedIdentity` as needed.
Specifically, when a pod is scheduled, the MIC assigns the identity on Azure to the underlying VM/VMSS during the creation phase. When the pod is deleted, it removes the identity from the underlying VM/VMSS on Azure. The MIC takes similar actions when `AzureIdentity` or `AzureIdentityBinding` are created or deleted.
### Node Managed Identity
The authorization request to fetch a Service Principal Token from an MSI endpoint is sent to the Azure Instance Metadata Service (IMDS) endpoint (169.254.169.254) and redirected to the NMI pod. The redirection is accomplished by adding iptables rules that redirect pod CIDR traffic destined for the IMDS endpoint on port 80 to the NMI endpoint. The NMI server identifies the pod based on the remote address of the request and then queries Kubernetes (through MIC) for a matching Azure identity. NMI then makes an Azure Active Directory Authentication Library ([ADAL]) request to get a token for the client ID and returns it as the response. If the request included a client ID as part of the query, it is validated against the admin-configured client ID.
Here is an example cURL command that will fetch an access token to access ARM within a pod identified by an AAD-Pod-Identity selector:
```bash
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true -s
```
For different ways to acquire an access token within a pod, please refer to this [documentation](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token).
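For instance, an application inside a matched pod can issue the same request without cURL. The sketch below builds the IMDS token URL shown above using only the Python standard library; the helper names are illustrative, and the actual network call is commented out because it only succeeds inside a cluster where NMI intercepts IMDS traffic:

```python
import json
import urllib.request
from urllib.parse import urlencode

IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_url(resource, client_id=None, api_version="2018-02-01"):
    """Build the IMDS token URL that NMI intercepts inside the cluster."""
    params = {"api-version": api_version, "resource": resource}
    if client_id:
        params["client_id"] = client_id  # optional: validated against the bound identity
    return IMDS_TOKEN_ENDPOINT + "?" + urlencode(params)

def fetch_token(resource):
    """Request an access token; only works from a pod behind NMI."""
    req = urllib.request.Request(build_token_url(resource),
                                 headers={"Metadata": "true"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]

# token = fetch_token("https://management.azure.com/")
```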
Similarly, a host can make an authorization request to fetch a Service Principal Token for a resource directly from the NMI host endpoint (http://127.0.0.1:2579/host/token/). The request must include the pod namespace `podns` and the pod name `podname` in the request header, as well as the resource endpoint of the resource requesting the token. The NMI server identifies the pod based on the `podns` and `podname` in the request header and then queries Kubernetes (through MIC) for a matching Azure identity. NMI then makes an ADAL request to get a token for the resource in the request, returning the `token` and the `clientid` as a response.
Here is an example cURL command:
```bash
curl http://127.0.0.1:2579/host/token/?resource=https://vault.azure.net -H "podname: nginx-flex-kv-int" -H "podns: default"
```
For more information, please refer to the [design documentation](./docs/design/concept.md).
## Role Assignment
Your cluster will need the correct role assignment configuration to perform Azure-related operations such as assigning and un-assigning the identity on the underlying VM/VMSS. Please refer to the [role assignment](./docs/readmes/README.role-assignment.md) documentation to review and set required role assignments.
## Demo
You will need [Azure CLI] installed and a Kubernetes cluster running on Azure, either managed by [AKS] or provisioned with [AKS Engine].
Set the following Azure-related environment variables before getting started:
```bash
export SUBSCRIPTION_ID="<SubscriptionId>"
export RESOURCE_GROUP="<ResourceGroup>"
export IDENTITY_NAME="demo"
```
> For AKS cluster, there are two resource groups that you need to be aware of - the resource group that contains the AKS cluster itself, and the cluster resource group (`MC_<AKSClusterName>_<AKSResourceGroup>_<Location>`). The latter contains all of the infrastructure resources associated with the cluster like VM/VMSS and VNet. Depending on where you deploy your user-assigned identities, you might need additional role assignments. Please refer to [Role Assignment](#role-assignment) for more information. For this demo, it is recommended to use the cluster resource group (the one with `MC_` prefix) as the `RESOURCE_GROUP` environment variable.
### 1. Deploy aad-pod-identity
Deploy `aad-pod-identity` components to an RBAC-enabled cluster:
```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml
# For AKS clusters, deploy the MIC and AKS add-on exception by running -
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/mic-exception.yaml
```
Deploy `aad-pod-identity` components to a non-RBAC cluster:
```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment.yaml
# For AKS clusters, deploy the MIC and AKS add-on exception by running -
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/mic-exception.yaml
```
Deploy `aad-pod-identity` using [Helm 3](https://v3.helm.sh/):
```bash
helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
helm install aad-pod-identity aad-pod-identity/aad-pod-identity
```
For a list of overwritable values when installing with Helm, please refer to [this section](https://github.com/Azure/aad-pod-identity/tree/master/charts/aad-pod-identity#configuration).
> Important: For AKS clusters with limited [egress-traffic], Please install pod-identity in `kube-system` namespace using the [helm charts].
### 2. Create an identity on Azure
Create an identity on Azure and store the client ID and resource ID of the identity as environment variables:
```bash
az identity create -g $RESOURCE_GROUP -n $IDENTITY_NAME --subscription $SUBSCRIPTION_ID
export IDENTITY_CLIENT_ID="$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_NAME --subscription $SUBSCRIPTION_ID --query clientId -otsv)"
export IDENTITY_RESOURCE_ID="$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_NAME --subscription $SUBSCRIPTION_ID --query id -otsv)"
```
Assign the role "Reader" to the identity so it has read access to the resource group. At the same time, store the identity assignment ID as an environment variable.
```bash
export IDENTITY_ASSIGNMENT_ID="$(az role assignment create --role Reader --assignee $IDENTITY_CLIENT_ID --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP --query id -otsv)"
```
### 3. Deploy `AzureIdentity`
Create an `AzureIdentity` in your cluster that references the identity you created above:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: $IDENTITY_NAME
spec:
type: 0
resourceID: $IDENTITY_RESOURCE_ID
clientID: $IDENTITY_CLIENT_ID
EOF
```
> Set `type: 0` for user-assigned MSI, `type: 1` for Service Principal with client secret, or `type: 2` for Service Principal with certificate. For more information, see [here](https://github.com/Azure/aad-pod-identity/tree/master/deploy/demo).
### 4. (Optional) Match pods in the namespace
For matching pods in the namespace, please refer to namespaced [README](docs/readmes/README.namespaced.md).
### 5. Deploy `AzureIdentityBinding`
Create an `AzureIdentityBinding` that reference the `AzureIdentity` you created above:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
name: $IDENTITY_NAME-binding
spec:
azureIdentity: $IDENTITY_NAME
selector: $IDENTITY_NAME
EOF
```
### 6. Deployment and Validation
For a pod to match an identity binding, it needs a [label] with the key `aadpodidbinding` whose value is that of the `selector:` field in the `AzureIdentityBinding`. Deploy a pod that validates the functionality:
```bash
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: demo
labels:
aadpodidbinding: $IDENTITY_NAME
spec:
containers:
- name: demo
image: mcr.microsoft.com/k8s/aad-pod-identity/demo:1.2
args:
- --subscriptionid=$SUBSCRIPTION_ID
- --clientid=$IDENTITY_CLIENT_ID
- --resourcegroup=$RESOURCE_GROUP
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
nodeSelector:
kubernetes.io/os: linux
EOF
```
> `mcr.microsoft.com/k8s/aad-pod-identity/demo` is an image that demostrates the use of AAD pod identity. The source code can be found [here](./cmd/demo/main.go).
To verify that the pod is indeed using the identity correctly:
```bash
kubectl logs demo
```
If successful, the log output would be similar to the following output:
```
...
successfully doARMOperations vm count 1
successfully acquired a token using the MSI, msiEndpoint(http://169.254.169.254/metadata/identity/oauth2/token)
successfully acquired a token, userAssignedID MSI, msiEndpoint(http://169.254.169.254/metadata/identity/oauth2/token) clientID(xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
successfully made GET on instance metadata
...
```
Once you are done with the demo, clean up your resources:
```bash
kubectl delete pod demo
kubectl delete azureidentity $IDENTITY_NAME
kubectl delete azureidentitybinding $IDENTITY_NAME-binding
az role assignment delete --id $IDENTITY_ASSIGNMENT_ID
az identity delete -g $RESOURCE_GROUP -n $IDENTITY_NAME
```
## Uninstall Notes
The NMI pods modify the nodes' [iptables] to intercept calls to IMDS endpoint within a node. This allows NMI to insert identities assigned to a pod before executing the request on behalf of the caller.
These iptables entries will be cleaned up when the pod-identity pods are uninstalled. However, if the pods are terminated for unexpected reasons, the iptables entries can be removed with these commands on the node:
```bash
# remove the custom chain reference
iptables -t nat -D PREROUTING -j aad-metadata
# flush the custom chain
iptables -t nat -F aad-metadata
# remove the custom chain
iptables -t nat -X aad-metadata
```
## What To Do Next?
* Dive deeper into AAD Pod Identity by following the detailed [Tutorial].
* Learn more about the design of AAD Pod Identity:
- [Concept]
- [Block Diagram]
* Learn how to debug this project at the [Debugging] wiki page.
* Join us by [Contributing] to AAD Pod Identity.
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
## Support
aad-pod-identity is an open source project that is [**not** covered by the Microsoft Azure support policy](https://support.microsoft.com/en-us/help/2941892/support-for-linux-and-open-source-technology-in-azure). [Please search open issues here](https://github.com/Azure/aad-pod-identity/issues), and if your issue isn't already represented please [open a new one](https://github.com/Azure/aad-pod-identity/issues/new/choose). The project maintainers will respond to the best of their abilities.
[ADAL]: https://docs.microsoft.com/azure/active-directory/develop/active-directory-authentication-libraries
[AKS]: https://azure.microsoft.com/services/kubernetes-service/
[AKS Docs]: https://docs.microsoft.com/azure/aks/kubernetes-service-principal
[AKS Engine]: https://github.com/Azure/aks-engine
[annotation]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[Azure Active Directory]: https://azure.microsoft.com/services/active-directory/
[Azure CLI]: https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest
[Block Diagram]: docs/design/concept.png
[Components]: #components
[Concept]: docs/design/concept.md
[Contributing]: CONTRIBUTING.md
[credentials stored in the cluster]: https://docs.microsoft.com/azure/aks/kubernetes-service-principal
[custom resource]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[Debugging]: https://github.com/Azure/aad-pod-identity/wiki/Debugging
[iptables]: https://en.wikipedia.org/wiki/Iptables
[label]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[Managed Identity Controller]: #managed-identity-controller
[Node Managed Identity]: #node-managed-identity
[Prerequisites]: #prerequisites
[Tutorial]: docs/tutorial/README.md
[helm charts]: https://github.com/Azure/aad-pod-identity/tree/master/charts/aad-pod-identity
[egress-traffic]: https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic
| 50.255385 | 726 | 0.756873 | eng_Latn | 0.801216 |
d0727c2e4e62ba1b3993df4023756613d1679527 | 7,096 | md | Markdown | _listings/bc-laws/documentidaspectidcivixindexidcivixdocumentidxml-get-postman.md | streamdata-gallery-organizations/bc-laws | 38b718f07688fb7a760b394c380d67338fef8c0f | [
"CC-BY-3.0"
] | null | null | null | _listings/bc-laws/documentidaspectidcivixindexidcivixdocumentidxml-get-postman.md | streamdata-gallery-organizations/bc-laws | 38b718f07688fb7a760b394c380d67338fef8c0f | [
"CC-BY-3.0"
] | null | null | null | _listings/bc-laws/documentidaspectidcivixindexidcivixdocumentidxml-get-postman.md | streamdata-gallery-organizations/bc-laws | 38b718f07688fb7a760b394c380d67338fef8c0f | [
"CC-BY-3.0"
] | null | null | null | {
"info": {
"name": "BC Laws Retrieves a specific document from the BCLaws legislative repository (XML format)",
"_postman_id": "944dd38a-0dfe-41ec-97c3-e5969ff18302",
"description": "The /document API allows you to retrieve actual documents from the BCLaws legislative repository. To retrieve a document from the repository you need the aspect identifier and two other specific pieces of information about the document: the index identifier and the document identifier. These unique identifiers can be retrieved from the /content API.",
"schema": "https://schema.getpostman.com/json/collection/v2.0.0/"
},
"item": [
{
"name": "Content",
"item": [
{
"id": "16d417a6-a62c-4c1e-93cd-51f2a4f10411",
"name": "getContentAspectCivixdocument",
"request": {
"url": {
"protocol": "http",
"host": "www.bclaws.ca",
"path": [
"civix",
"content/:aspectId/:civixDocumentId"
],
"variable": [
{
"id": "aspectId",
"value": "{}",
"type": "string"
},
{
"id": "civixDocumentId",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "Lists the metadata available for the specified index or directory from the BCLaws legislative respository"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "0baaa1ce-06c5-4353-9773-7494888444e5"
}
]
}
]
},
{
"name": "Document",
"item": [
{
"id": "59c06e03-107c-4d45-b1fe-e88db9920858",
"name": "getDocumentAspectCivixindexCivixdocument",
"request": {
"url": {
"protocol": "http",
"host": "www.bclaws.ca",
"path": [
"civix",
"document/id/:aspectId/:civixIndexId/:civixDocumentId"
],
"variable": [
{
"id": "aspectId",
"value": "{}",
"type": "string"
},
{
"id": "civixDocumentId",
"value": "{}",
"type": "string"
},
{
"id": "civixIndexId",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "The /document API allows you to retrieve actual documents from the BCLaws legislative repository. To retrieve a document from the repository you need the aspect identifier and two other specific pieces of information about the document: the index identifier and the document identifier. These unique identifiers can be retrieved from the /content API."
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "73c02b02-9429-4510-ba09-0bb90369a5c8"
}
]
},
{
"id": "cb5f4118-d958-47ad-96b1-a3e6524ffe13",
"name": "getDocumentAspectCivixindexCivixdocumentSearchSearchstring",
"request": {
"url": {
"protocol": "http",
"host": "www.bclaws.ca",
"path": [
"civix",
"document/id/:aspectId/:civixIndexId/:civixDocumentId/search/:searchString"
],
"variable": [
{
"id": "aspectId",
"value": "{}",
"type": "string"
},
{
"id": "civixDocumentId",
"value": "{}",
"type": "string"
},
{
"id": "civixIndexId",
"value": "{}",
"type": "string"
},
{
"id": "searchString",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "The /document API allows you to retrieve actual documents from the BCLaws legislative repository. To retrieve a document from the repository you need the aspect identifier and two other specific pieces of information about the document: the index identifier and the document identifier. These unique identifiers can be retrieved from the /content API."
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "6830b15f-445b-4f6b-a729-219dc1bc5054"
}
]
},
{
"id": "03b46e30-7002-4214-94f0-fb0b173952f4",
"name": "getDocumentAspectCivixindexCivixdocumentXml",
"request": {
"url": {
"protocol": "http",
"host": "www.bclaws.ca",
"path": [
"civix",
"document/id/:aspectId/:civixIndexId/:civixDocumentId/xml"
],
"variable": [
{
"id": "aspectId",
"value": "{}",
"type": "string"
},
{
"id": "civixDocumentId",
"value": "{}",
"type": "string"
},
{
"id": "civixIndexId",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "The /document API allows you to retrieve actual documents from the BCLaws legislative repository. To retrieve a document from the repository you need the aspect identifier and two other specific pieces of information about the document: the index identifier and the document identifier. These unique identifiers can be retrieved from the /content API."
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "d0580493-0fab-463f-a792-c678ac8de84e"
}
]
}
]
}
]
} | 36.204082 | 381 | 0.418123 | eng_Latn | 0.464219 |
d072e53bce8b51fa4d9ca16d8990f23766801356 | 342 | md | Markdown | README.md | balaprasanna/gvr-android-sdk | a0e92a38bf5c42081dba77f247739cd7e2b721eb | [
"Apache-2.0"
] | null | null | null | README.md | balaprasanna/gvr-android-sdk | a0e92a38bf5c42081dba77f247739cd7e2b721eb | [
"Apache-2.0"
] | 1 | 2021-03-12T09:13:12.000Z | 2021-03-12T09:13:12.000Z | README.md | balaprasanna/gvr-android-sdk | a0e92a38bf5c42081dba77f247739cd7e2b721eb | [
"Apache-2.0"
] | null | null | null | Google VR SDK
=====================
Copyright (c) 2016 Google Inc. All rights reserved.
[https://developers.google.com/vr/android/get-started](https://developers.google.com/vr/android/get-started)
1.clone the repo
2.solve if there any sdk download issue, or any gradle issue.
3.select samples-sdk-simplevideowidget on top and run the app. | 34.2 | 108 | 0.72807 | eng_Latn | 0.395544 |
d0736cc84639c9acf59bd23562cb91a0faf6e566 | 19,747 | md | Markdown | README.md | agilecreativity/tips-and-tricks | 888458f6854ec59c4c4ec17bfeab1737589b9a6e | [
"MIT"
] | 1 | 2018-03-25T12:35:37.000Z | 2018-03-25T12:35:37.000Z | README.md | agilecreativity/tips-and-tricks | 888458f6854ec59c4c4ec17bfeab1737589b9a6e | [
"MIT"
] | null | null | null | README.md | agilecreativity/tips-and-tricks | 888458f6854ec59c4c4ec17bfeab1737589b9a6e | [
"MIT"
] | null | null | null | ### Random Tips
#### Fix the Arch Linux keys not valid
```sh
# Try this command
sudo pacman -S archlinux-keyring
# Then
sudo pacman -Syyu
# Or as usual
yaourt -Syyu
```
#### Install Vim with Lua support on Mac OSX
```sh
brew instal vim \
--with-client-server \
--with-gettext \
--with-lua \
--with-luajit \
--with-override-system-vi \
--HEAD
```
This will give you the `+lua` support which is more awesome!
I am provision this through my Ansible playbook with this role (using Homebrew):
```yml
---
## file: roles/common/tasks/editors.yml
## ...
##- name: Install Vim from source
- name: Install latest version of Vim from source with Lua support
command: brew instal vim --with-client-server --with-gettext --with-lua --with-luajit --with-override-system-vi --HEAD
args:
creates: /usr/local/bin/vim
tags: editors
## ...
```
If you things go well for you then you should see something like this when type `:version` from Vim
```
:version
VIM - Vi IMproved 8.0 (2016 Sep 12, compiled May 18 2017 22:57:51)
MacOS X (unix) version
Included patches: 1-600
Compiled by Homebrew
Huge version without GUI. Features included (+) or not (-):
+acl +comments +extra_search +keymap +mouse_dec +path_extra +smartindent +title +xfontset
+arabic +conceal +farsi +lambda -mouse_gpm +perl +startuptime -toolbar -xim
+autocmd +cryptv +file_in_path +langmap -mouse_jsbterm +persistent_undo +statusline +user_commands -xpm
-balloon_eval +cscope +find_in_path +libcall +mouse_netterm +postscript -sun_workshop +vertsplit +xsmp_interact
-browse +cursorbind +float +linebreak +mouse_sgr +printer +syntax +virtualedit +xterm_clipboard
++builtin_terms +cursorshape +folding +lispindent -mouse_sysmouse +profile +tag_binary +visual -xterm_save
+byte_offset +dialog_con -footer +listcmds +mouse_urxvt +python +tag_old_static +visualextra
+channel +diff +fork() +localmap +mouse_xterm -python3 -tag_any_white +viminfo
+cindent +digraphs +gettext +lua +multi_byte +quickfix -tcl +vreplace
+clientserver -dnd -hangul_input +menu +multi_lang +reltime +termguicolors +wildignore
+clipboard -ebcdic +iconv +mksession -mzscheme +rightleft +terminfo +wildmenu
+cmdline_compl +emacs_tags +insert_expand +modify_fname +netbeans_intg +ruby +termresponse +windows
+cmdline_hist +eval +job +mouse +num64 +scrollbind +textobjects +writebackup
+cmdline_info +ex_extra +jumplist -mouseshape +packages +signs +timers +X11
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
2nd user vimrc file: "~/.vim/vimrc"
user exrc file: "$HOME/.exrc"
defaults file: "$VIMRUNTIME/defaults.vim"
fall-back for $VIM: "/usr/local/share/vim"
Compilation: clang -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -g -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
Linking: clang -L. -fstack-protector -L/usr/local/lib -L/usr/local/opt/libyaml/lib -L/usr/local/opt/openssl/lib -L/usr/local/opt/readline/lib -L/usr/loca
l/lib -o vim -lXt -lX11 -lSM -lICE -lncurses -liconv -lintl -framework Cocoa -pagezero_size 10000 -image_base 100000000 -L/usr/local/lib -lluajit-5.1 -
mmacosx-version-min=10.12 -fstack-protector-strong -L/usr/local/lib -L/usr/local/Cellar/perl/5.24.1/lib/perl5/5.24.1/darwin-thread-multi-2level/CORE -lperl
-lm -lutil -lc -F/usr/local/opt/python/Frameworks -framework Python -lruby.2.4.1 -lobjc
```
#### How to bring up your Github branch when your pull request (PR) is accepted.
To bring your local/remote branch to the same level as the upstream branch you can do the
following.
I recentely contributed to [webica](https://github.com/tmarble/webica/commit/42895b7118fe403a7fdc538f3c0f7f73fc18a82c) and like my local branch to be the same level as
the upstream version. All I have to do is the following:
```sh
# First you will need to tracking the branch if not already done
git remote add upstream [email protected]:tmarble/webica.git
```
From the base directory of your current project you should be able to see something like
```sh
git remote -v
origin [email protected]:agilecreativity/webica.git (push)
upstream [email protected]:tmarble/webica.git (fetch)
```
Now you want to fetch the upstream change first
```sh
git fetch upstream master
```
Then to bring your local branch to the level of your upstream branch
```sh
git pull --rebase upstream master
```
At this point you should be able to push the change to your remote branch (origin)
```sh
# This is your own branch
git push origin master
```
Your Github repository should now be at the same level as your upstream branch.
#### Default parameter in Linux shell script
Sometime it is useful to be able to use sensible default for the user.
e.g. In the following shell script we are trying to set the default `RAILS_ENV`
to `develop` if the user omit to specify one.
```bash
#!/bin/bash
## Script name: drop-create-migrate-seed-spec
## Typical script in Ruby/Rails project to setup and quickly test the code
RAILS_ENV=${RAILS_ENV:=development}
echo "Using RAILS_ENV=$RAILS_ENV"
RAILS_ENV=$RAILS_ENV bundle install
RAILS_ENV=$RAILS_ENV bundle exec rake db:drop
RAILS_ENV=$RAILS_ENV bundle exec rake db:create
RAILS_ENV=$RAILS_ENV bundle exec rake db:migrate
RAILS_ENV=$RAILS_ENV bundle exec rake db:seed
RAILS_ENV=$RAILS_ENV bundle exec rake spec
```
Note if we like to run this script with different environment then we can do so
using:
```sh
export RAILS_ENV=test && ./drop-create-migrate-seed-spec
```
#### Format your USB drive on Linux base system
WARNING: PLEASE TAKE EXTRA CARE AS THIS COULD BE DESTRUCTIVE!
```sh
# 1) determine the driver for your usb
sudo dmesg | tail
# Or use lsblk which should give you the drive information
sudo lsblk
## 2) Umount your USB driver (from the first step)
sudo umount /dev/sdb1
## 3) Then reformat your USB to Fat32 (BE EXTRA CAREFUL HERE)
sudo mkdosfs -n 'USB_LABEL' -F 32 -I /dev/sdb1
## or if you just want FAT instead of Fat32 then omit the `-F 32`
sudo mkdosfs -n 'USB_LABEL' -I /dev/sdb
## 3) For EXT3
sudo mkfs.ext3 -n 'LABEL' -I /dev/sdb
```
And that should be it
#### Checkout the pull request commit from Github
```sh
git config --add remote.origin.fetch +refs/pull/*/head:refs/remotes/origin/pull/*
git fetch origin
#git describe --all --contains <COMMIT>
```
#### To compare the content of two `.tar.gzip` file
```sh
#!/usr/bin/env bash
file1=$1
file2=$2
# List the content of the file and find the differeces between them
diff <(tar -tvf $file1.tar.gz | sort) <(tar -tvf $file2.tar.gz | sort)
```
#### Add sample test commit for testing with CI/CD build trigger jobs
```sh
#!/usr/bin/env bash
## file: add-test-commit
## Add test commit to any projects
echo `date` >> test-commit.txt && git add --all && git commit -am "Test commit at `date`" && git push
```
#### Sample Jenkins script
I am having fun with Jenkins script in Groovy recently.
Here is the example of script that make a call to shell command to get the
branch name from a given commit hash
```groovy
stage "preparation"
node() {
stage('demo-stage') {
test_env("demo")
echo "FYI: may be you see: ${test_env_number}"
dir("filename_cleaner") {
// Note:t this cleanup the directory so that we can clone new repository successfully
deleteDir()
sh "git clone https://github.com/agilecreativity/filename_cleaner.git ."
sh "pwd"
sh "ls -alt"
// sample commit hash
def commit_hash = "fc975c412178fb1363df1644f70b158ddcd77b8a"
// Note: we always want to trim of the new line at the end
// find the branch name from a given commit hash
def result = exec_sh("git branch --contains $commit_hash | awk '{print \$2}'").trim()
echo "Your result: ${result}" // this print 'master'
// This is how you could use the result in your logic
if (result == "master") {
echo "Yes you are right master!!"
} else {
echo "I don't know you!"
}
}
}
}
def test_env(rals_env="build_admin") {
withEnv(["TEST_ENV_NUMBER=${test_env_number}"]) {
echo "Your path: ${env.PATH}"
echo "Your TEST_ENV_NUMBER= ${env.TEST_ENV_NUMBER}"
}
}
def exec_sh(script_name) {
def result = sh(returnStdout: true,
script: script_name)
return result
}
```
#### Install PostgreSQL on Arch Linux
- https://wiki.archlinux.org/index.php/PostgreSQL
```
sudo -u postgres -i
```
Then
```
[postgres]$ initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data'
```
Which should gives something like
```
$initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data'
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgres/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgres/data -l logfile start
```
- You can then check if the database is start properly using
```
## restart the Postgres service
sudo systemctl restart postgresql.service
## Start the service at login
sudo systemctl enable postgresql.service
## Check if everything is running properly
sudo systemctl status postgresql.service
```
#### Install postgreSQL on Fedora 24
```
sudo dnf install postgresql-server
sudo postgresql-setup --initdb
# start one time only
sudo systemctrl start postgresql
# start everytime we reboot
sudo systemctrl enable postgresql
```
Change the password for user postgres
```
sudo su - postgres
#$psql
#postgres=#\password postgres
# enter the password for user postgres
```
Create a user and a database
```sh
$createuser john -P
$createdb --owner=john sample_db
```
If you already have the existing user you like to use just substitute accordingly
```
$createdb --owner=bchoomnuan sample_db
```
Edit the `/var/lib/pgsql/data/pg_hba.conf`
```
#TYPE DATABASE USER ADDRESS METHOD
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
local all all postgres peer
```
Or just keep it simple as we are running locally
```
#TYPE DATABASE USER ADDRESS METHOD
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
local all all postgres trust
```
Restart the service
```
$sudo systemctl enable postgresql.service
# systemctl\Created symlink from /etc/systemd/system/multi-user.target.wants/postgresql.service to /usr/lib/systemd/system/postgresql.service.
```
#### Running Arch Linux and VirtualBox
See [this link from the Arch Wiki web site](https://wiki.archlinux.org/index.php/VirtualBox)
e.g. You may just need to run `sudo modprobe vboxdrv` to start it up
#### Replace multiple blank lines with one in Emacs
```elisp
(defun single-blank-lines()
"replace multiple blank lines with a single one"
(interactive)
(goto-char (point-min))
(while (re-search-forward "\\(^\\s-*$\\)\n" nil t)
(replace-match "\n")
(forward-char 1)))
```
Then from inside Emacs just `M-x single-blank-lines`
#### Copy public keys to the remote server (manual way)
```
cat ~/.ssh/id_rsa.pub | ssh [email protected] "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys"
```
Optional: disable the password for root login
```
sudo vi /etc/ssh/sshd_config
```
Ensure that the following line are only allow the connection with SSH key:
```
PermitRootLogin without-password
```
Then reload the ssh daemon:
```
sudo reload ssh
```
#### Create new Github repository from command line
Try using [gh-utils](https://github.com/agilecreativity/gh-utils)
Once installed you can simply create a new empty Github repository with
```sh
gh-utils --config config.edn --r awesome-repo-name
```
#### To properly deploy to [clojars.org](https://clojars.org)
```
# To avoid the error try with GPG keys
lein deploy clojars
# instead of just `lein deploy`
```
See [this link](https://github.com/technomancy/leiningen/issues/1890) for details
#### Reset the very first commit in Git
Useful when you like to rewrite the very first command as `git reset HEAD~1` will not work.
```sh
# Revert to your very first commit
git update-ref -d HEAD
# Edit and fix the thinks to your liking
# ..
# If you have already created a repository in Github, you may like to push force
git push -f origin master
```
#### Copy the ssh key to Github (Linux)
- [Adding new ssh key to Github account](https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/) for more detail
- [How to generate ssh key for Github](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/)
```sh
$sudo apt-get install xclip # For Debian base system
$sudo pacman -Sy xclip # For ArchLinux
# Downloads and installs xclip. If you don't have `apt-get`, you might need to use another installer (like `yum`)
$xclip -sel clip < ~/.ssh/id_rsa.pub
# Copies the contents of the id_rsa.pub file to your clipboard
```
If you install the key manually you may like to
```sh
# Might need to install the sshpass/openssh
sudo pacman -Sy openssh sshpass # To get the ssh-agent
chmod 0644 $HOME/.ssh/id_rsa # your personal private key
chmod 0644 $HOME/.ssh/id_rsa_work # your work private key (if any)
eval "$(ssh-agent -s)"
# Then perhap you should clear the `known_hosts` file if any
>known_hosts # will clear out the file content
# Then you can try to see if your ssh Github work
ssh -T [email protected]
```
#### Work with two Github's profiles from the same machine (think Work/Personal)
You will need to create the two SSH's keypairs one for work and one for personal use
```
## file: ~/.ssh/config
## Default config for personal work
Host github.com
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa
## Default config for client's work
Host github-work
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_work
```
Now when you need want to work on your personal projecg you can use normal command like:
```
# Git clone as usual
git clone [email protected]:agilecreativity/github-cloner.git
# Git clone public repository as usual
```
If you want to do the same for work, you will need to type something like
```
git clone git@github-work:work-org/some-cool-project.git
# Change to this project
cd some-cool-project
# And make sure you have the right configuration to your work's related email
git config user.email "[email protected]"
```
Now you can push/pull using your usual git workflow.
#### Arch Linux - working with `pacman` and `yaourt`
List packages you have installed
```sh
sudo pacman -Qm
yaourt -Qm
```
Very usefule when you like to know what you have already installed on two different computers.
And to install package via `yaourt` and skip the need to confirm the `do you wish to continue?`
```sh
# Q: Do you wish to continue with installation?
# A: Of course I do that why I am running this command!
yaourt google-chrome --noconfirm
```
#### The best way to write changes log for the project
Highly recommended take a look at [http://keepachangelog.com/](http://keepachangelog.com/).
#### Install ruby 2.2.3 on Arch when getting openssl error
See [//wiki.archlinux.org/index.php/Rbenv](https://wiki.archlinux.org/index.php/Rbenv)
```sh
# First install required library if not already done
pacman -S base-devel libffi libyaml openssl zlib
# Install rbenv with patch
curl -fsSL https://gist.github.com/mislav/055441129184a1512bb5.txt | rbenv install --patch 2.2.3
```
For other platform or system see [this link from rbenv](https://github.com/rbenv/ruby-build/issues/826) issues in Github
#### Install and setup Postgres (and redis) on ArchLinux
The usual package installation
```sh
sudo pacman -Syu postgresql
sudo pacman -Syu redis
```
Now follow [PostgreSQL wiki](https://wiki.archlinux.org/index.php/PostgreSQL/)
In summary:
```
# login as postgres
sudo -i -u postgres
# Command now run as `postgres` user
[postgres] initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data'
# Exit from running as postgres
exit
# Now it is a good time to create a password for `postgres`
# First change to root
sudo su -
# Change the password for user 'postgres'
passwd postgres
# Type and remember the password for postgres
# Now exit from the session
exit
# As sudo user, we will need to start the postgresql.service
# Check the current status of the postgresql.service (it should be inactive)
sudo systemctl status postgresql.service
# To start it with the system at boot, use
sudo systemctl enable postgresql.service
# Similarly, for redis
sudo systemctl enable redis.service
```
#### Checkout the Github PR locally easily
```
[remote "origin"]
url = git@github-work:<YOUR-ORG-OR-USERID>/<YOUR-PROJECT>.git
fetch = +refs/heads/*:refs/remotes/origin/*
# This is the magic line, this will allow us to run `git checkout pr/<NUMBER>`
fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
```
#### How to quickly clone multiple GitHub repos
Use my own [github-cloner](https://github.com/agilecreativity/github-cloner) gem
```sh
# Install the ruby gem
gem install github-cloner
# And just use it to clone the repos from your favourite user/organization
# e.g. To clone all of the 'Emacs Lisp' and 'HTML' repository for user 'sachac' the Emacs curator try
github-cloner -u sachac -l "Emacs Lisp,HTML" -c
```
#### Install ruby version 2.2.3 on Linux e.g. ArchLinux with rbenv
In this case, I am using `rbenv` as the ruby version manager. This will not be applicable
to `rvm` or any other tools like `chruby`, etc.
- [Original link](https://github.com/rbenv/ruby-build/wiki#openssl-sslv3_method-undeclared-error)
```sh
#!/bin/bash
## See: the rbenv installation site
## For Arch Linux we need to first install the following packages
## 1)
sudo pacman -S base-devel libffi libyaml openssl zlib
## If you get the openssl error then try
## https://github.com/rbenv/ruby-build/wiki#openssl-sslv3_method-undeclared-error
## 2)
curl -fsSL https://gist.github.com/mislav/055441129184a1512bb5.txt | \
rbenv install --patch 2.2.3
```
#### Selenium web driver for Firefox works best with version 35.0.1
- Download the older version of Firefox and install it to some directory
- [Release notes for 35.0.1](https://www.mozilla.org/en-US/firefox/35.0.1/releasenotes/)
- [download pages for version 35.0.1](https://ftp.mozilla.org/pub/firefox/releases/35.0.1/)
- [The en-US version](https://ftp.mozilla.org/pub/firefox/releases/35.0.1/linux-x86_64/en-US/)
# Interruptible optimization runs with checkpoints
Christian Schell, May 2018
```python
import numpy as np
np.random.seed(777)
```
## Problem statement
Optimization runs can take a very long time and even run for multiple days. If for some reason the process has to be interrupted results are irreversibly lost, and the routine has to start over from the beginning.
With the help of the `CheckpointSaver` callback the optimizer's current state can be saved after each iteration, allowing to restart from that point at any time.
This is useful, for example,
* if you don't know how long the process will take and cannot hog computational resources forever
* if there might be system failures due to shaky infrastructure (or colleagues...)
* if you want to adjust some parameters and continue with the already obtained results
## Simple example
We will use pretty much the same optimization problem as in the [`bayesian-optimization.ipynb`](https://github.com/scikit-optimize/scikit-optimize/blob/master/examples/bayesian-optimization.ipynb) notebook. Additionally, we will instantiate the `CheckpointSaver` and pass it to the minimizer:
```python
from skopt import gp_minimize
from skopt import callbacks
from skopt.callbacks import CheckpointSaver
noise_level = 0.1
def obj_fun(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
checkpoint_saver = CheckpointSaver("./checkpoint.pkl", compress=9) # keyword arguments will be passed to `skopt.dump`
gp_minimize(obj_fun, # the function to minimize
[(-20.0, 20.0)], # the bounds on each dimension of x
x0=[-20.], # the starting point
acq_func="LCB", # the acquisition function (optional)
n_calls=10, # the number of evaluations of f including at x0
n_random_starts=0, # the number of random initialization points
callback=[checkpoint_saver], # a list of callbacks including the checkpoint saver
random_state=777);
```
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
Now let's assume this did not finish at once but took a long time: you started this on Friday night, went out for the weekend and now, Monday morning, you're eager to see the results. However, instead of the notebook server you only see a blank page, and your colleague Garry tells you that he had an update scheduled for Sunday noon – who doesn't like updates?
TL;DR: `gp_minimize` did not finish, and there is no `res` variable with the actual results!
## Restoring the last checkpoint
Luckily we employed the `CheckpointSaver` and can now restore the latest result with `skopt.load` (see [store and load results](./store-and-load-results.ipynb) for more information on that)
```python
from skopt import load
res = load('./checkpoint.pkl')
res.fun
```
-0.17524445239614728
## Continue the search
The previous results can then be used to continue the optimization process:
```python
x0 = res.x_iters
y0 = res.func_vals
gp_minimize(obj_fun, # the function to minimize
[(-20.0, 20.0)], # the bounds on each dimension of x
x0=x0, # already examined values for x
y0=y0, # observed values for x0
acq_func="LCB", # the acquisition function (optional)
n_calls=10, # the number of evaluations of f including at x0
n_random_starts=0, # the number of random initialization points
callback=[checkpoint_saver],
random_state=777);
```
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
/root/project/skopt/optimizer/optimizer.py:399: UserWarning: The objective has been evaluated at this point before.
warnings.warn("The objective has been evaluated "
## Possible problems
* __changes in search space:__ You can use this technique to interrupt the search, tune the search space and continue the optimization. Note that the optimizers will complain if `x0` contains parameter values not covered by the dimension definitions, so in many cases shrinking the search space will not work without deleting the offending runs from `x0` and `y0`.
* see [store and load results](./store-and-load-results.ipynb) for more information on how the results get saved and possible caveats
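A minimal sketch of that pruning step, filtering `x0`/`y0` down to the points that fall inside a shrunken one-dimensional search space (the bounds and evaluation data below are made up for illustration; in practice they come from `res.x_iters` and `res.func_vals`):

```python
# Hypothetical previous evaluations, standing in for res.x_iters / res.func_vals.
x_iters = [[-20.0], [-3.2], [7.5], [15.1]]
func_vals = [0.41, -0.12, 0.08, 0.33]

new_lower, new_upper = -10.0, 10.0  # the shrunken bounds for the single dimension

# Keep only the runs whose parameter value lies inside the new bounds.
kept = [(x, y) for x, y in zip(x_iters, func_vals) if new_lower <= x[0] <= new_upper]
x0 = [x for x, _ in kept]
y0 = [y for _, y in kept]
print(x0, y0)
```

The filtered `x0`/`y0` can then be passed to `gp_minimize` together with the new dimension bounds.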
---
name: Bug report
about: Something not working as expected
labels:
---
## Current behavior
<!-- Describe how the issue manifests. -->
## Expected behavior
<!-- Describe what the desired behavior would be. -->
## Live demo <!-- !!! THIS SECTION IS REQUIRED !!! -->
<!--
Provide a working example in order for us to be able to reproduce the issue.
The live demo template: https://codepen.io/limonte/pen/WNrLOLM
-->
---
Has SweetAlert2 helped you create an amazing application? You can show your support via GitHub Sponsors: https://github.com/sponsors/limonte
Alternative ways for donations (PayPal, cryptocurrencies, etc.) are listed here: https://sweetalert2.github.io/#donations
---
title: CalloutFormat.Accent Property (Publisher)
keywords: vbapb10.chm2490624
f1_keywords:
- vbapb10.chm2490624
ms.prod: publisher
api_name:
- Publisher.CalloutFormat.Accent
ms.assetid: 8e31544c-79ed-3882-98d1-42fc88f58115
ms.date: 06/08/2017
---
# CalloutFormat.Accent Property (Publisher)
Returns or sets an **MsoTriState** constant indicating whether a vertical accent bar separates the callout text from the callout line. Read/write.
## Syntax
_expression_. **Accent**
_expression_ A variable that represents a **CalloutFormat** object.
### Return value
MsoTriState
## Remarks
The **Accent** property value can be one of these **MsoTriState** constants.
|**Constant**|**Description**|
|:-----|:-----|
| **msoCTrue**|Not used with this property.|
| **msoFalse**|A vertical accent bar does not separate the callout text from the callout line.|
| **msoTriStateMixed**|Return value only; indicates a combination of **msoTrue** and **msoFalse** in the specified shape range.|
| **msoTriStateToggle**|Set value only; switches between **msoTrue** and **msoFalse**.|
| **msoTrue**|A vertical accent bar separates the callout text from the callout line.|
## Example
This example adds an oval to the active publication and a callout that points to the oval. The callout text will not have a border, but it will have a vertical accent bar that separates the text from the callout line.
```vb
With ActiveDocument.Pages(1).Shapes
' Add an oval.
.AddShape Type:=msoShapeOval, _
Left:=180, Top:=200, Width:=280, Height:=130
' Add a callout.
With .AddCallout(Type:=msoCalloutTwo, _
Left:=420, Top:=170, Width:=170, Height:=40)
' Add text to the callout.
.TextFrame.TextRange.Text = "This is an oval"
' Add an accent bar to the callout.
With .Callout
.Accent = msoTrue
.Border = msoFalse
End With
End With
End With
```
---
# DO NOT TOUCH — This file was automatically generated. See https://github.com/Mojang/MinecraftScriptingApiDocsGenerator to modify descriptions, examples, etc.
author: jakeshirley
ms.author: jashir
ms.prod: gaming
title: mojang-minecraft.EntityRideableComponent Class
description: Contents of the mojang-minecraft.EntityRideableComponent class.
---
# EntityRideableComponent Class
>[!IMPORTANT]
>These APIs are experimental as part of GameTest Framework. As with all experiments, you may see changes in functionality in updated Minecraft versions. Check the Minecraft Changelog for details on any changes to GameTest Framework APIs. Where possible, this documentation reflects the latest updates to APIs in Minecraft beta versions.
## Extends
- [*IEntityComponent*](IEntityComponent.md)
When added, this component gives the entity the capability to be ridden by another entity.
## Properties
### **controllingSeat**
`read-only controllingSeat: number;`
Zero-based index of the seat that can used to control this entity.
Type: *number*
### **crouchingSkipInteract**
`read-only crouchingSkipInteract: boolean;`
Determines whether interactions are not supported if the entity is crouching.
Type: *boolean*
### **familyTypes**
`read-only familyTypes: string[];`
A string-list of entity types that this entity can support as riders.
Type: *string*[]
### **id**
`read-only id: string;`
Identifier of this component. Should always be minecraft:rideable.
Type: *string*
### **interactText**
`read-only interactText: string;`
Set of text that should be displayed when a player is looking to ride on this entity (commonly with touch-screen controls).
Type: *string*
### **pullInEntities**
`read-only pullInEntities: boolean;`
If true, this entity will pull in entities that are in the correct family_types into any available seat.
Type: *boolean*
### **riderCanInteract**
`read-only riderCanInteract: boolean;`
If true, this entity will be picked when looked at by the rider.
Type: *boolean*
### **seatCount**
`read-only seatCount: number;`
Number of seats for riders defined for this entity.
Type: *number*
### **seats**
`read-only seats: Seat[];`
The list of positions and number of riders for each position for entities riding this entity.
Type: [*Seat*](Seat.md)[]
## Methods
- [addRider](#addrider)
- [ejectRider](#ejectrider)
- [ejectRiders](#ejectriders)
### **addRider**
`
addRider(rider: Entity): boolean
`
Adds an entity to this entity as a rider.
#### **Parameters**
- **rider**: [*Entity*](Entity.md)
Entity that will become the rider of this entity.
#### **Returns** *boolean* - True if the rider entity was successfully added.
> [!WARNING]
> This function can throw errors.
### **ejectRider**
`
ejectRider(rider: Entity): void
`
Ejects the specified rider of this entity.
#### **Parameters**
- **rider**: [*Entity*](Entity.md)
Entity that should be ejected from this entity.
> [!WARNING]
> This function can throw errors.
### **ejectRiders**
`
ejectRiders(): void
`
Ejects all riders of this entity.
> [!WARNING]
> This function can throw errors.
---
title: The Issue of Worship
date: 05/06/2018
---
> <p></p>
> I saw thrones, and those who were given authority to judge sat on them. I also saw those who had been beheaded for their testimony to Jesus and for the word of God, because they had not bowed before the beast or its image and had not received its mark on their foreheads or on their hands. They came to life and reigned with Christ for a thousand years. (Rev 20:4)
**Personal Study**
To worship someone means to acknowledge them as the highest authority, one to whom I am willing to submit and whom I fully respect. For Christians, as for their Jewish predecessors in the faith, the only such authority is God himself. And yet throughout history the Lord repeatedly had to deal with the problem of idolatry and other forms of false worship among those who professed to follow him. Satan has constantly tried, and still tries, to seduce people into worshiping anything other than the Lord (for comparison, see also Matt 4:8-10). According to Revelation 13, the question of true worship will become one of the key points of dispute in the final crisis. God's people will once again have to decide whom they will serve and who will be the highest authority for them (see Josh 24:15).
In lesson two we looked at the story of the three young men who were commanded to "fall down and worship the gold image that King Nebuchadnezzar has set up" (Dan 3:5). We have also already shown that Revelation 13 uses the language of Daniel 3 to describe the persecution that God's people will have to face in the end time. The events of Daniel 3 can therefore be seen as a foreshadowing of what will happen in the last days, as described in the context of the beasts of Revelation 13. Everyone was ordered to bow down to the golden image or face death in a blazing furnace. Similarly, according to Revelation 13, a decree was issued that "all who do not kneel before it will die" (Rev 13:15).
`Read Rev 14:9-11; 16:2; 19:20; and 20:4. What do these texts say about how important the issue of worship will be?`
Babylon has always been, both literally and symbolically, a major center of idolatry. The tower of Babel already testifies to a desire in its builders similar to the one Lucifer once had when he said: "I will ascend above the heights of the clouds, I will be like the Most High" (Isa 14:14). Their building effort was apparently also motivated by the attempt to save themselves from another possible worldwide catastrophe. This expresses their refusal to trust God's promise that he would never again allow another flood upon the earth (Gen 9:8-11).
The Neo-Babylonian empire likewise exalted the work of human hands. Nebuchadnezzar himself boasted of his achievement: "Is not this great Babylon, which I have built by my power and might as a royal house for the glory of my majesty?" (Dan 4:27). Somewhat later King Belshazzar used the gold vessels consecrated to God from the temple in Jerusalem for his own pagan feast. "Then they brought the gold vessels that had been taken from the temple, that is, from the house of God in Jerusalem, and the king and his nobles, his wives and his concubines drank from them. They drank wine and praised the gods of gold and silver, of bronze, iron, wood, and stone" (Dan 5:3, 4). Notice that they filled the temple vessels with intoxicating wine, which dulled the senses of all who drank from it. As a result, many of the city's inhabitants perished when Babylon fell. The intoxicating wine of Babylon (Rev 14:8) can present itself as real truth, even though it certainly is not. In Satan's kingdom, false worship and falsehood are widespread and accepted.
**Application**
`Are you able to accept God as the highest authority in your life? Are there areas in which you struggle to submit to God and to accept his will? Who or what, besides God, significantly shapes and influences you?`
d07450f77d46888eea5a3caa795260b9757dfcb1 | 2,042 | md | Markdown | README.md | maximgorbatyuk/MaximGorbatyuk.DatabaseSqlEndpoints | e15bb1e83c14e286d5f61f812870ca82821df85f | [
"MIT"
] | 3 | 2022-02-02T09:52:56.000Z | 2022-02-06T16:04:18.000Z | README.md | maximgorbatyuk/MaximGorbatyuk.DatabaseSqlEndpoints | e15bb1e83c14e286d5f61f812870ca82821df85f | [
"MIT"
] | 2 | 2022-02-20T08:34:18.000Z | 2022-02-20T08:34:32.000Z | README.md | maximgorbatyuk/MaximGorbatyuk.DatabaseSqlEndpoints | e15bb1e83c14e286d5f61f812870ca82821df85f | [
"MIT"
] | null | null | null | # MaximGorbatyuk.DatabaseSqlEndpoints
   
This nuget allows you to view table content of your ASP.NET core application during runtime. The nuget creates a special endpoint and then return tables and data represented in html form.
## Get started
1. Install the [nuget](https://www.nuget.org/packages/MaximGorbatyuk.DatabaseSqlEndpoints/):
```bash
dotnet add package MaximGorbatyuk.DatabaseSqlEndpoints
```
2. Add routing line into your `Startup.cs` file before UseEndpoints():
```csharp
class Startup
{
public void Configure(IApplicationBuilder app)
{
// ... some settings
app
.UseDatabaseTable<AwesomeDbContext>()
.UseTableOutputEndpoint() // default route is /database-sql-endpoints/table
.UseReadEndpoint() // default route is /database-sql-endpoints/read
.UseExecuteEndpoint(); // default route is /database-sql-endpoints/execute
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
// ... some settings
}
}
```
## Requests
### 1. Table content
Open `https:localhost:5001/database-sql-endpoints/table?tableName=<tableName>` in your browser and view your data
### 2. Reading some data with the SQL command
Send the following POST request:
```plaintext
POST https:localhost:5001/database-sql-endpoints/read
BODY Json:
{
"query": "select 1;"
}
```
### 3. Execute any SQL script
Send the following POST request:
```plaintext
POST https:localhost:5001/database-sql-endpoints/execute
BODY Json:
{
"query": "delete fronm users;"
}
```
| 26.519481 | 447 | 0.721352 | eng_Latn | 0.36511 |
d074866ee794ac4a0026f7b9b8bc6324202cd256 | 4,905 | md | Markdown | docs/csharp/programming-guide/classes-and-structs/constructors.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/programming-guide/classes-and-structs/constructors.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/programming-guide/classes-and-structs/constructors.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Konstruktoren – C#-Programmierhandbuch
ms.date: 05/05/2017
helpviewer_keywords:
- constructors [C#]
- classes [C#], constructors
- C# language, constructors
ms.assetid: df2e2e9d-7998-418b-8e7d-890c17ff6c95
ms.openlocfilehash: 465dbb9120e6e81e5ef216c34dc6a92283956033
ms.sourcegitcommit: c01c18755bb7b0f82c7232314ccf7955ea7834db
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 01/15/2020
ms.locfileid: "75964672"
---
# <a name="constructors-c-programming-guide"></a>Konstruktoren (C#-Programmierhandbuch)
Wenn eine [class](../../language-reference/keywords/class.md) oder [struct](../../language-reference/keywords/struct.md) erstellt wird, wird deren Konstruktor aufgerufen. Eine Klasse oder Struktur verfügt möglicherweise über mehrere Konstruktoren, die andere Argumente verwenden. Mit Konstruktoren können Programmierer Standardwerte festlegen, Instanziierungen einschränken und Code schreiben, der flexibel und leicht zu lesen ist. Weitere Informationen und Beispiele finden Sie unter [Verwenden von Konstruktoren](./using-constructors.md) und [Instanzkonstruktoren](./instance-constructors.md).
## <a name="parameterless-constructors"></a>Parameterlose Konstruktoren
Wenn Sie keinen Konstruktor für die Klasse angeben, erstellt C# standardmäßig einen, der das Objekt instanziiert und Membervariablen auf die Standardwerte festlegt, wie in den [Standardwerten der C#-Typen](../../language-reference/builtin-types/default-values.md) aufgeführt. Wenn Sie keinen Konstruktor für die Struktur angeben, stützt sich C# auf einen *impliziten parameterlosen Konstruktor*, um automatisch jedes Feld auf seinen Standardwert zu initialisieren. Weitere Informationen und Beispiele finden Sie unter [Instanzkonstruktoren](instance-constructors.md).
## <a name="constructor-syntax"></a>Konstruktorsyntax
Ein Konstruktor ist eine Methode, dessen Name derselbe ist wie der seines Typs. Die Methodensignatur enthält nur den Methodennamen und die Parameterliste; ein Rückgabetyp ist nicht enthalten. Im folgenden Beispiel wird der Konstruktor für eine Klasse mit dem Namen `Person` gezeigt.
[!code-csharp[constructors](../../../../samples/snippets/csharp/programming-guide/classes-and-structs/constructors1.cs#1)]
Wenn ein Konstruktor als einzelne Anweisung implementiert werden kann, können Sie eine [expression body definition (Ausdruckstextdefinition)](../statements-expressions-operators/expression-bodied-members.md) verwenden. Im folgenden Beispiel wird eine `Location`-Klasse definiert, deren Klassenkonstruktor einen einzelnen Zeichenfolgenparameter namens *name* enthält. Die Ausdruckstextdefinition weist das Argument dem Feld `locationName` zu.
[!code-csharp[expression-bodied-constructor](../../../../samples/snippets/csharp/programming-guide/classes-and-structs/expr-bodied-ctor.cs#1)]
## <a name="static-constructors"></a>Statische Konstruktoren
Die vorherigen Beispiele haben alle Instanzkonstruktoren gezeigt, die ein neues Objekt erstellen. Eine Klasse oder Struktur kann auch einen statischen Konstruktor haben, der statische Member dieses Typs initialisiert. Statische Konstruktoren sind parameterlos. Wenn Sie keinen statischen Konstruktor zum Initialisieren von statischen Feldern angeben, initialisiert der C#-Compiler statische Felder mit ihrem Standardwert, wie unter [Standardwerte der C#-Typen](../../language-reference/builtin-types/default-values.md) aufgeführt.
Im folgenden Beispiel wird ein statischer Konstruktor verwendet, um ein statisches Feld zu initialisieren.
[!code-csharp[constructors](../../../../samples/snippets/csharp/programming-guide/classes-and-structs/constructors1.cs#2)]
Sie können einen statischen Konstruktor auch mit einer Ausdruckstextdefinition definieren, wie im folgenden Beispiel gezeigt.
[!code-csharp[constructors](../../../../samples/snippets/csharp/programming-guide/classes-and-structs/constructors1.cs#3)]
Weitere Informationen und Beispiele finden Sie unter [Statische Konstruktoren](./static-constructors.md).
## <a name="in-this-section"></a>In diesem Abschnitt
[Verwenden von Konstruktoren](./using-constructors.md)
[Instanzkonstruktoren](./instance-constructors.md)
[Private Konstruktoren](./private-constructors.md)
[Statische Konstruktoren](./static-constructors.md)
[Schreiben eines Kopierkonstruktors](./how-to-write-a-copy-constructor.md)
## <a name="see-also"></a>Siehe auch
- [C#-Programmierhandbuch](../index.md)
- [Klassen und Strukturen](./index.md)
- [Finalizer](./destructors.md)
- [static](../../language-reference/keywords/static.md)
- [Why Do Initializers Run In The Opposite Order As Constructors? Part One (Warum werden Initialisierer In der entgegengesetzten Reihenfolge ausgeführt wie Konstruktoren? Teil Eins)](https://docs.microsoft.com/archive/blogs/ericlippert/why-do-initializers-run-in-the-opposite-order-as-constructors-part-one)
| 74.318182 | 597 | 0.797554 | deu_Latn | 0.973706 |
d074b049168915fa4d2ecca1dd976e0fdd0b813e | 155 | md | Markdown | components/mention/demo/form.md | huajian123/ng-zorro-antd | 3680ed9d1ff87a152dfe535eab555471f5ca2226 | [
"MIT"
] | null | null | null | components/mention/demo/form.md | huajian123/ng-zorro-antd | 3680ed9d1ff87a152dfe535eab555471f5ca2226 | [
"MIT"
] | null | null | null | components/mention/demo/form.md | huajian123/ng-zorro-antd | 3680ed9d1ff87a152dfe535eab555471f5ca2226 | [
"MIT"
] | null | null | null | ---
order: 2
title:
zh-CN: 配合 Form 使用
en-US: With Form
---
## zh-CN
受控模式,例如配合 Form 使用。
## en-US
Controlled mode, for example, to work with `Form`.
| 10.333333 | 50 | 0.612903 | eng_Latn | 0.900439 |
d074cda6484ce1d838add20e8f681b79c222b0b7 | 50 | md | Markdown | README.md | pirosikick/react-positioning | 334696a35184a3fada4b4c144a00b925899bc3ed | [
"MIT"
] | null | null | null | README.md | pirosikick/react-positioning | 334696a35184a3fada4b4c144a00b925899bc3ed | [
"MIT"
] | null | null | null | README.md | pirosikick/react-positioning | 334696a35184a3fada4b4c144a00b925899bc3ed | [
"MIT"
] | null | null | null | # react-positioning
goog.ui.positioning for React
| 16.666667 | 29 | 0.82 | eng_Latn | 0.580539 |
d075169b8281a09190f108904a7ed32d1a0521a2 | 88 | md | Markdown | README.md | emptynick/voyager-translation-editor | c1df3fdfc8ce0e612c6e39142e0357980ddb5783 | [
"MIT"
] | null | null | null | README.md | emptynick/voyager-translation-editor | c1df3fdfc8ce0e612c6e39142e0357980ddb5783 | [
"MIT"
] | null | null | null | README.md | emptynick/voyager-translation-editor | c1df3fdfc8ce0e612c6e39142e0357980ddb5783 | [
"MIT"
] | null | null | null | # Voyager translations editor
Manage and edit translation files directly in Voyager II. | 29.333333 | 57 | 0.829545 | eng_Latn | 0.989819 |
d07653b4aadc23773e025b4083eafe5394d724db | 4,770 | md | Markdown | docs/framework/network-programming/socket-performance-enhancements-in-version-3-5.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/network-programming/socket-performance-enhancements-in-version-3-5.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/network-programming/socket-performance-enhancements-in-version-3-5.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Melhorias do desempenho de soquete na versão 3.5
description: Saiba mais sobre as melhorias de desempenho na classe System .net. Sockets. Socket na versão 3,5 do .NET Framework.
ms.date: 03/30/2017
ms.assetid: 225aa5f9-c54b-4620-ab64-5cd100cfd54c
ms.openlocfilehash: 5bd7c97d6a6edd5f914d6fe3118b6d81b64544e0
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/26/2020
ms.locfileid: "96263133"
---
# <a name="socket-performance-enhancements-in-version-35"></a>Melhorias do desempenho de soquete na versão 3.5
A classe <xref:System.Net.Sockets.Socket?displayProperty=nameWithType> foi aprimorada na Versão 3.5 para uso por aplicativos que usam a E/S de rede assíncrona para obter o melhor desempenho. Uma série de novas classes foi adicionada como parte de um conjunto de melhorias da classe <xref:System.Net.Sockets.Socket>, que fornece um padrão assíncrono alternativo que pode ser usado por aplicativos de soquete especializados de alto desempenho. Essas melhorias foram projetadas especificamente para aplicativos de servidor de rede que exigem alto desempenho. Um aplicativo pode usar o padrão assíncrono aprimorado exclusivamente ou somente nas áreas de acesso direcionadas de seu aplicativo (ao receber grandes quantidades de dados, por exemplo).
## <a name="class-enhancements"></a>Melhorias da classe
O principal recurso dessas melhorias é evitar a alocação e a sincronização repetidas de objetos durante a E/S de soquete assíncrono de alto volume. O padrão de design Início/Fim atualmente implementado pela classe <xref:System.Net.Sockets.Socket> para a E/S de soquete assíncrono exige que um objeto <xref:System.IAsyncResult?displayProperty=nameWithType> seja alocado para cada operação de soquete assíncrono.
Nas novas melhorias da classe <xref:System.Net.Sockets.Socket>, as operações de soquete assíncrono são descritas por objetos da classe <xref:System.Net.Sockets.SocketAsyncEventArgs?displayProperty=nameWithType> reutilizáveis alocados e mantidos pelo aplicativo. Os aplicativos de soquete de alto desempenho sabem muito bem a quantidade de operações de soquete sobreposto que devem ser sustentadas. O aplicativo pode criar a quantidade de objetos <xref:System.Net.Sockets.SocketAsyncEventArgs> de que precisar. Por exemplo, se um aplicativo para servidores precisar ter 15 operações de aceitação de soquete pendentes em todos os momentos para dar suporte às taxas de conexão de cliente de entrada, ele poderá alocar 15 objetos <xref:System.Net.Sockets.SocketAsyncEventArgs> reutilizáveis com antecedência para essa finalidade.
O padrão para executar uma operação de soquete assíncrono com essa classe consiste nas seguintes etapas:
1. Alocar um novo objeto de contexto <xref:System.Net.Sockets.SocketAsyncEventArgs> ou obter um gratuito em um pool de aplicativos.
2. Definir as propriedades no objeto de contexto para a operação prestes a ser executada (o método de representante do retorno de chamada e o buffer de dados, por exemplo).
3. Chamar o método de soquete apropriado (xxxAsync) para iniciar a operação assíncrona.
4. Se o método de soquete assíncrono (xxxAsync) retornar verdadeiro no retorno de chamada, consulte as propriedades do contexto para obter o status de conclusão.
5. Se o método de soquete assíncrono (xxxAsync) retornar falso no retorno de chamada, isso indicará que a operação foi concluída de forma síncrona. As propriedades do contexto podem ser consultadas para obter o resultado da operação.
6. Reutilizar o contexto para outra operação, colocá-lo novamente no pool ou descartá-lo.
O tempo de vida do novo objeto de contexto da operação de soquete assíncrono é determinado por referências no código do aplicativo e por referências de E/S assíncrona. Não é necessário que o aplicativo retenha uma referência a um objeto de contexto da operação de soquete assíncrono depois que ele é enviado como um parâmetro para um dos métodos da operação de soquete assíncrono. Ele permanecerá referenciado até o retorno do retorno de chamada de conclusão. No entanto, é vantajoso para o aplicativo reter a referência ao objeto de contexto, de modo que ele possa ser reutilizado para uma operação futura de soquete assíncrono.
## <a name="see-also"></a>Veja também
- <xref:System.Net.Sockets.Socket?displayProperty=nameWithType>
- <xref:System.Net.Sockets.SendPacketsElement?displayProperty=nameWithType>
- <xref:System.Net.Sockets.SocketAsyncEventArgs?displayProperty=nameWithType>
- <xref:System.Net.Sockets.SocketAsyncOperation?displayProperty=nameWithType>
- [Amostras de programação de rede](network-programming-samples.md)
- [Exemplos de código de soquete](socket-code-examples.md)
| 101.489362 | 828 | 0.810273 | por_Latn | 0.998433 |
d076acfb1bd7f16d50104cd4d0d948ec9e572193 | 583 | md | Markdown | README.md | Viktor777/antispambotjs | ff5faab195c062e6476c934c6ef82d5ea207824d | [
"MIT"
] | 1 | 2017-03-02T06:16:36.000Z | 2017-03-02T06:16:36.000Z | README.md | Viktor777/antispambotjs | ff5faab195c062e6476c934c6ef82d5ea207824d | [
"MIT"
] | null | null | null | README.md | Viktor777/antispambotjs | ff5faab195c062e6476c934c6ef82d5ea207824d | [
"MIT"
] | null | null | null | # Antispambot JS
A JavaScript implementation of the WordPress `antispambot()` PHP function. It converts selected characters of an email address to HTML entities to block spam bots. Not all characters in the email address are converted: the selection is random and changes each time the function is called.
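The random entity encoding described above can be sketched as follows. Note that `encodeEmail` and `decodeEntities` are hypothetical helper names for illustration only, not this module's actual exports:

```javascript
// Sketch of the antispambot idea: each character is emitted either
// verbatim, as a decimal HTML entity, or (optionally) as a hex entity.
// The choice is random, so each call yields a different encoding.
function encodeEmail(email, hexEncoding = false) {
  let out = '';
  for (const ch of email) {
    const code = ch.codePointAt(0);
    const r = Math.random();
    if (ch !== '@' && r < 0.34) {
      out += ch; // leave some characters readable
    } else if (hexEncoding && r < 0.67) {
      out += '&#x' + code.toString(16) + ';'; // hexadecimal entity
    } else {
      out += '&#' + code + ';'; // decimal entity
    }
  }
  return out; // '@' is always encoded, so the raw address never appears
}

// Browsers render entities transparently; decoding them recovers the address.
function decodeEntities(s) {
  return s
    .replace(/&#x([0-9a-fA-F]+);/g, (_, h) => String.fromCodePoint(parseInt(h, 16)))
    .replace(/&#([0-9]+);/g, (_, d) => String.fromCodePoint(parseInt(d, 10)));
}

console.log(decodeEntities(encodeEmail('test@test.com'))); // test@test.com
```

The encoded string still renders as a normal address in the browser, while the page source contains no plain-text email for scrapers to harvest.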
## Installation
```
$ npm i --save antispambotjs
```
## Usage
```
antispambot(emailAddress, hexEncoding);
```
## Example
```
import antispambot from 'antispambotjs';
let encoded = antispambot('test@test.com');
console.log(encoded); // test@test.com
```
## License
MIT | 20.821429 | 259 | 0.728988 | eng_Latn | 0.937516 |
d076bb7e3d6493de36615888da7800f992e7bbbe | 10 | md | Markdown | README.md | uriahgray/dpub | 2493a1540d8d480bb3f881487f0ea0efe4914634 | [
"MIT"
] | null | null | null | README.md | uriahgray/dpub | 2493a1540d8d480bb3f881487f0ea0efe4914634 | [
"MIT"
] | null | null | null | README.md | uriahgray/dpub | 2493a1540d8d480bb3f881487f0ea0efe4914634 | [
"MIT"
] | null | null | null | # dpub
>
| 3.333333 | 6 | 0.4 | vie_Latn | 0.262858 |
d076df059d53c7cf44ce5dfa6b3e5b512ff209c7 | 21,357 | md | Markdown | dynamics-nav-app/sales-how-process-sales-returns-cancellations.md | isabella232/nav-content.nb-no | b57638590a62b24d634ae905a8a696f434e3bdb3 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2017-10-20T19:56:49.000Z | 2021-04-21T00:13:46.000Z | dynamics-nav-app/sales-how-process-sales-returns-cancellations.md | MicrosoftDocs/nav-content.nb-no | b57638590a62b24d634ae905a8a696f434e3bdb3 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-05T16:28:54.000Z | 2021-11-05T16:28:54.000Z | dynamics-nav-app/sales-how-process-sales-returns-cancellations.md | isabella232/nav-content.nb-no | b57638590a62b24d634ae905a8a696f434e3bdb3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-14T18:42:17.000Z | 2021-11-05T11:06:30.000Z | ---
title: "Bruke en salgskreditnota til å behandle ordrereturer eller annulleringer"
description: "Beskriver hvordan du oppretter en salgskreditnota, direkte eller via en ordreretur, for å behandle en retur, kansellering eller refusjon for varer eller tjenester du har mottatt betaling for."
author: SorenGP
ms.prod: dynamics-nav-2017
ms.topic: article
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords: undo, credit memo, return
ms.date: 09/08/2016
ms.author: sgroespe
ms.translationtype: HT
ms.sourcegitcommit: 4fefaef7380ac10836fcac404eea006f55d8556f
ms.openlocfilehash: a6732bba66601946e3b0a6b705be233a10bafcb9
ms.contentlocale: nb-no
ms.lasthandoff: 10/16/2017
---
# <a name="how-to-process-sales-returns-or-cancellations"></a>Behandle ordrereturer eller annulleringer
Hvis en kunde vil returnere varer eller bli refundert for varer eller tjenester du har solgt og mottatt betaling for, må du opprette og bokføre en salgskreditnota som angir den ønskede endringen. For å inkludere den riktige salgsfakturainformasjonen kan du opprette salgskreditnotaen direkte fra den bokførte salgsfakturaen, eller du kan opprette en ny salgskreditnota kopiert fakturainformasjon.
Hvis du trenger mer kontroll over ordrereturprosessen, for eksempel lagerdokumenter for varehåndtering eller bedre oversikt når du returnerer varer fra flere salgsdokumenter med en ordreretur, kan du opprette ordrereturer. En ordreretur utsteder automatisk den relaterte salgskreditnotaen, og andre returrelaterte dokumenter, for eksempel en erstatningsordre om nødvendig. Hvis du vil ha mer informasjon, kan du se delen "Opprette en ordreretur basert på ett eller flere bokførte salgsdokumenter".
> [!NOTE]
> Hvis en bokført salgsfaktura ennå ikke er betalt, kan du bruke funksjonen **Korriger** eller **Annuller** på den bokførte salgsfakturaen for å tilbakeføre transaksjonene. Disse funksjonene fungerer bare for ubetalte fakturaer, og de støtter ikke delvise returer eller annulleringer. Hvis du vil ha mer informasjon, kan du se [Korrigere eller annullere ubetalte salgsfakturaer](sales-how-correct-cancel-sales-invoice.md).
En returnert vare eller refusjon kan være knyttet til bare noen av varene eller tjenestene på den opprinnelige salgsfakturaen. I så fall må du redigere opplysningene på linjene på salgskreditnotaen eller ordrereturen. Når du bokfører salgskreditnotaen eller ordrereturen, tilbakeføres salgsdokumentene som påvirkes av endringen og en refusjonsbetaling kan opprettes for kunden. Hvis du vil ha mer informasjon, kan du se [Utføre betalinger](payables-make-payments.md).
I tillegg til den opprinnelige bokførte salgsfakturaen, kan du bruke salgskreditnotaen eller ordrereturen for andre salgsdokumenter, for eksempel en annen bokført salgsfaktura, fordi kunden også returnerer varer som leveres med denne fakturaen.
Du kan sende den bokførte salgskreditnotaen til kunden for å bekrefte returen eller annulleringen, og formidle at den tilknyttede verdien blir refundert, for eksempel når varene returneres.
Bokføringen av kreditnotaen tilbakefører også eventuelle varegebyr som var tilordnet det bokførte dokumentet, slik at varens verdiposter er de samme som før varegebyret ble tilordnet.
## <a name="inventory-costing"></a>Beholdning og kostberegning
For å beholde riktig lagerverdi vil du vanligvis sette returnerte varene tilbake i lageret med enhetskosten som de er solgt på, ikke på gjeldende enhetskost. Dette kalles opprinnelig kostpris.
Det finnes to funksjoner du kan bruke til å tilordne opprinnelig kosttilbakeføring automatisk.
|Funksjon|Beskrivelse|
|------------------|---------------------------------------|
|Funksjonen **Hent bokførte dokumentlinjer som skal tilbakeføres** i vinduet **Ordreretur**|Kopierer linjer i en eller flere bokførte dokumenter som skal tilbakeføres til ordrereturen. Hvis du vil ha mer informasjon, kan du se delen "Opprette en ordreretur og relatert salgskreditnota for en eller flere bokførte salgsfakturaer".|
|**Kopier dokument**-funksjonen i vinduene **Salgskreditnota** og **Ordreretur**|Kopierer både hodet og linjene i et bokført bilag som skal tilbakeføres.<br /><br /> Krever at du merker av for **Bruk opprinnelig kostpris** i vinduet **Salgsoppsett**.|
For å tilordne opprinnelig kostpris manuelt, må du velge feltet **Utlignet fra-varepost** på alle typer returdokumentlinjer, og deretter velge nummeret på den opprinnelige salgsposten. Dermed knyttes salgskreditnotaen eller ordrereturen til den opprinnelige salgsposten, og verdien av varen fastsettes til opprinnelig enhetskost.
Hvis du vil ha mer informasjon, kan du se [Kostberegning for beholdning](design-details-inventory-costing.md).
## <a name="to-create-a-sales-credit-memo-from-a-posted-sales-invoice"></a>Opprette en ny salgskreditnota fra en bokført salgsfaktura
1. Velg ikonet , angi **Bokførte salgsfakturaer**, og velg deretter den relaterte koblingen.
2. I vinduet Bokførte **salgsfakturaer** velger du den bokførte salgsfakturaen som du vil tilbakeføre, og deretter velger du **Opprett korrigerende kreditnota**.
For salgskreditnotahodet inneholder noen opplysninger fra den bokførte salgsfakturaen. Du kan redigere dette, for eksempel med ny informasjon som gjenspeiler returavtalen.
3. Redigere informasjonen på linjene i henhold til avtalen, for eksempel antall varer som returneres, eller beløpet som skal refunderes.
4. Velg handlingen **Utlign poster**.
5. I vinduet **Utlign kundeposter** velger du linjen med det bokførte salgsdokumentet du vil utligne salgskreditnotaen mot, og deretter velger du handlingen **Utlignings-ID**.
ID-en på salgskreditnotaen vises i feltet **Utlignings-ID**.
6. I feltet **Beløp som skal utlignes** skriver du inn beløpet som du vil utligne hvis mindre enn det opprinnelige beløpet.
Nederst i vinduet **Utlign kundeposter** kan du se det totale beløpet som skal utlignes for å tilbakeføre alle involverte poster, nemlig når verdien i **Saldo** -feltet er null.
7. Velg **OK**. Når du bokfører salgskreditnotaen, brukes den på de bokførte salgsdokumentene.
Når du har opprettet eller redigert salgskreditnotalinjene, og ett eller flere programmer er angitt, kan du fortsette med å bokføre salgskreditnotaen.
8. Velg handlingen **Bokfør og send**.
Dialogboksen **Bokfør og send bekreftelse** åpnes, og viser den foretrukne sendemetoden for kunden. Du kan endre sendemetoden ved å velge oppslagsknappen for feltet **Send dokument til**. Hvis du vil ha mer informasjon, kan du se [Definere en profil for dokumentsending](sales-how-setup-document-send-profiles.md).
De bokførte salgsdokumentene som du utlignet kreditnotaen mot, tilbakeføres, og en refusjonsbetaling kan opprettes for kunden. Salgskreditnotaen fjernes og erstattes med et nytt dokument i listen over bokførte salgskreditnotaer.
## <a name="to-create-a-sales-credit-memo-by-copying-a-posted-sales-invoice"></a>Opprette en salgskreditnota ved å kopiere en bokført salgsfaktura
1. Velg ikonet , angi **Salgskreditnotaer**, og velg deretter den relaterte koblingen.
2. Velg handlingen **Ny** for å åpne en ny, tom salgskreditnota.
3. I feltet **Kunde** angir du navnet på en eksisterende kunde.
4. Velg handlingen **Kopier dokument**.
5. Velg **Bokført faktura** i **Bilagstype**-feltet i vinduet **Kopier salgsdokument**.
6. Velg feltet **Bilagsnr.** for å åpne vinduet **Bokførte salgsfakturaer**, og velg deretter den bokførte salgsfakturaen som inneholder linjer du vil tilbakeføre.
7. Merk av for **Gjenberegn linjer** hvis du vil at de kopierte bokførte salgsfakturalinjene skal oppdateres med endringer i varepris og enhetskost etter at fakturaen er bokført.
8. Velg **OK**. De kopierte fakturalinjene settes inn i salgskreditnotaen.
9. Fullføre salgskreditnotaen som forklart i den avsnittet "Opprette en salgskreditnota fra en bokført salgsfaktura" i dette emnet.
## <a name="to-create-a-sales-return-order-based-on-one-or-more-a-posted-sales-documents"></a>Opprette en ordreretur basert på ett eller flere bokførte salgsdokumenter
1. Velg ikonet , angi **Ordrereturer**, og velg deretter den relaterte koblingen.
2. Velg handlingen **Ny**.
3. Fyll ut feltene i hurtigfanen **Generelt** etter behov.
4. I **Linjer**-hurtigfanen fyller du ut linjene manuelt, eller kopier informasjon fra andre dokumenter for å fylle ut linjene automatisk:
- Bruk funksjonen **Hent bokførte dokumentlinjer som skal tilbakeføres** for å kopiere én eller flere bokførte dokumentlinjer fra ett eller flere bokførte dokumenter. Denne funksjonen tilbakefører alltid kost nøyaktig fra den bokførte dokumentlinjen. Denne funksjonen er beskrevet i følgende fremgangsmåter.
- Bruk funksjonen **Kopier dokument** til å kopiere et eksisterende dokument til ordrereturen. Bruk denne funksjonen til å kopiere hele dokumentet. Det kan være et bokført dokument eller et dokument som ikke er bokført ennå. Med denne funksjonen er nøyaktig kosttilbakeføring bare mulig hvis det er merket av for **Bruk opprinnelig kostpris** i vinduet **Salgsoppsett**.
5. Velg handlingen **Hent bokførte dokumentlinjer som skal tilbakeføres**.
6. Øverst i vinduet **Bokførte salgsdokumentlinjer** merker du av for **Vis bare reversible linjer** hvis du bare vil se salgslinjer med antall som ennå ikke er tilbakeført. Hvis for eksempel antallet for en bokført salgsfaktura allerede har blitt tilbakeført, kan det hende du ikke vil tilbakeføre det antallet på et nytt ordrereturdokument.
> [!NOTE]
> Dette feltet fungerer bare for bokførte følgesedler og bokførte fakturalinjer, og ikke for bokførte retur- eller kreditnotalinjer.
På venstre side av vinduet vises en oversikt over de ulike dokumenttypene, og nummeret i parentes viser antall dokumenter som er tilgjengelig for hver enkelt dokumenttype.
7. I feltet **Filter for bilagstype** velger du typen bokførte dokumentlinjer du vil bruke.
8. Velg linjene du vil kopiere til det nye dokumentet.
> [!NOTE]
> Hvis du bruker Ctrl+A for å velge alle linjene, kopieres alle linjene med det filteret du har angitt, men filteret **Vis bare reversible linjer** ignoreres. Hvis du for eksempel har filtrert linjene for et bestemt dokumentnummer med to linjer, og en av dem allerede er tilbakeført. Selv om feltet **Vis bare reversible linjer** er valgt, kopieres begge linjene når du trykker CTRL+A for å kopiere alle linjene, ikke bare den linjen som ennå ikke er tilbakeført.
9. Velg **OK**-knappen for å kopiere linjene til det nye dokumentet.
Følgende skjer:
- For bokførte dokumentlinjer av typen **Vare** opprettes en ny dokumentlinje som er en kopi av den bokførte dokumentlinjen, med antall som ennå ikke er tilbakeført. Feltet **Utlignet fra-varepost** fylles ut etter behov med tallet på vareposten for den bokførte dokumentlinjen.
- For bokførte dokumentlinjer som ikke er av typen **Vare**, for eksempel varegebyrer, opprettes en ny dokumentlinje som er en kopi av den opprinnelige bokførte dokumentlinjen.
- **Enhetskost (NOK)**-feltet beregnes på den nye linjen fra kosten for de tilhørende varepostene.
- Hvis det kopierte dokumentet er en bokført følgeseddel, et bokført mottak, en bokført returseddel eller en bokført returforsendelse, beregnes salgsprisen fra varekortet.
- Hvis det kopierte dokumentet er en bokført faktura eller kreditnota, kopieres salgsprisen, fakturarabatter og linjerabatter fra den bokførte dokumentlinjen.
- Hvis den bokførte dokumentlinjen inneholder varesporingslinjer, fylles feltet **Utlignet fra-varepost** ut på varesporingslinjene med relevant varepostnummer fra de bokførte varesporingslinjene.
Når du kopierer fra en bokført faktura eller bokført kreditnota, kopieres alle relevante fakturarabatter og linjerabatter som er gyldige på bokføringstidspunktet for det dokumentet, fra den bokførte dokumentlinjen til den nye dokumentlinjen. Legg merke til at hvis alternativet **Beregn. fakt.rab.** er aktivert i vinduet **Salgsoppsett**, blir fakturarabatten beregnet på nytt når du bokfører linjen for det nye dokumentet. Det kan derfor hende at linjebeløpet for den nye linjen er forskjellig fra linjebeløpet på den bokførte dokumentlinjen, avhengig av den nye beregningen av fakturarabatten.
> [!NOTE]
> Hvis en del av antallet for den bokførte dokumentlinjen allerede er tilbakeført (returnert) eller solgt eller forbrukt, opprettes en linje bare for antallet som gjenstår på lageret, eller som ikke har blitt returnert. Hvis hele antallet for den bokførte dokumentlinjen allerede er tilbakeført, opprettes det ikke en ny dokumentlinje.
>
> Hvis vareflyten i det bokførte dokumentet er den samme som vareflyten i det nye dokumentet, opprettes det ganske enkelt en kopi av den opprinnelige bokførte dokumentlinjen i det nye dokumentet. Feltet **Utlignet fra-varepost** fylles ikke ut fordi nøyaktig kosttilbakeføring ikke er mulig i dette tilfellet. Hvis du for eksempel bruker funksjonen **Hent bokførte dokumentlinjer som skal tilbakeføres** for å hente en bokført salgskreditnotalinje for en ny salgskreditnota, kopieres bare den opprinnelige bokførte kreditnotalinjen til den nye kreditnotaen.
10. I vinduet **Ordreretur** i feltet **Returårsakskode** velger du årsaken til returen på hver linje.
11. Velg handlingen **Bokfør**.
## <a name="to-create-a-replacement-sales-order-from-a-sales-return-order"></a>Slik oppretter du en erstatningsordre fra en ordreretur:
Du kan bestemme deg for å kompensere en kunde for en vare du har solgt ved å erstatte varen. Du erstatter varen med samme vare, eller med en annen vare. Denne situasjonen kan for eksempel oppstå hvis du ved en feiltakelse leverer feil vare til kunden.
1. I vinduet **Ordreretur** for en aktiv returprosess lager du en negativ post på en tom linje for erstatningsvaren ved å sette inn et negativt beløp i **Antall**-feltet.
2. Velg handlingen **Flytt negative linjer**.
3. I vinduet **Flytt negative salgslinjer** fyller du ut feltene etter behov.
4. Velg **OK**. Den negative linjen for erstatningsvaren slettes fra ordrereturen, og den settes inn i et nytt **Ordre**-vindu. Hvis du vil ha mer informasjon, kan du se [Selge produkter](sales-how-sell-products.md).
## <a name="to-create-return-related-documents-from-a-sales-return-order"></a>Slik oppretter du retur-relaterte dokumenter fra en ordreretur:
Du kan opprette erstatningsordrer, bestillingsreturer og erstatningsbestillinger under ordrereturprosessen. Dette er for eksempel nyttig i situasjoner hvor du vil håndtere varer med garantier fra leverandører.
1. I **Ordreretur**-vinduet for en aktiv returprosess velger du handlingen **Opprett retur-relaterte dokumenter**.
2. I **Leverandørnr.**-feltet angir du nummeret til en leverandør hvis du vil opprette leverandørdokumenter automatisk.
3. Hvis en returnert vare må returneres til leverandøren, merker du av for **Opprett bestillingsretur**.
4. Hvis en returnert vare må bestilles fra leverandøren, merker du av for **Opprett bestilling**.
5. Hvis en erstatningsordre må opprettes, merker du av for alternativet **Opprett ordre**.
## <a name="to-create-a-restock-charge"></a>Slik oppretter du et returgebyr
Det kan hende du vil belaste kunden med et returgebyr for å dekke kostnader i forbindelse med retur av en vare. Dette er aktuelt hvis kunden for eksempel har bestilt feil vare, eller ombestemt seg etter at han eller hun mottok varen.
Du kan bokføre denne økte kostnaden som et varegebyr i en kreditnota eller en ordreretur, og knytte den til den bokførte følgeseddelen. Det følgende beskriver den for en ordreretur, men de samme trinnene gjelder en salgskreditnota.
1. Åpne vinduet **Ordreretur** for en aktiv returprosess.
2. Velg **Gebyr (vare)** i **Type**-feltet på en ny linje.
3. Fyll ut feltene for en hvilken som helst varegebyrlinje. Hvis du vil ha mer informasjon, kan du se [Bruke varegebyr til å gjøre rede for ekstra handelskostnader](payables-how-assign-item-charges.md).
Når du bokfører ordrereturen, legges et returgebyr til i det aktuelle salgsbeløpet. Dermed kan du opprettholde en nøyaktig lagerverdisetting.
## <a name="to-create-a-sales-allowance"></a>Slik oppretter du en salgsrabatt
Du kan sende en kunde en kreditnota med prisreduksjon hvis kunden har mottatt ukurante varer eller har mottatt varene for sent.
Du kan bokføre den reduserte prisen som et varegebyr i en kreditnota eller en ordreretur, og knytte den til den bokførte følgeseddelen. Det følgende beskriver den for en salgskreditnota, men de samme trinnene gjelder en ordreretur.
1. Velg ikonet , angi **Salgskreditnotaer**, og velg deretter den relaterte koblingen.
2. Velg handlingen **Ny** for å åpne en ny, tom salgskreditnota.
3. Fyll ut kreditnotahodet med aktuelle opplysninger om kunden du vil gi salgsrabatt til.
4. På hurtigfanen **Linjer**, i **Type**-feltet, velger du **Gebyr (vare)**.
5. I feltet **Nr.** -feltet velger du den aktuelle varegebyrverdien.
Det kan hende du vil opprette et eget varegebyrnummer for å dekke salgsrabatter.
6. I feltet **Antall** angir du **1**.
7. I feltet **Salgspris** angir du beløpet i salgsrabatten.
8. Du kan tilordne salgsrabatten som et varegebyr til varene i den bokførte leveringen. Hvis du vil ha mer informasjon, kan du se [Bruke varegebyr til å gjøre rede for ekstra handelskostnader](payables-how-assign-item-charges.md). Når du har tilordnet rabatten, går du tilbake til **Salgskreditnota**-vinduet.
Når du bokfører ordrereturen, legges salgsrabatten til i det aktuelle salgsbeløpet. Dermed kan du opprettholde en nøyaktig lagerverdisetting.
## <a name="to-combine-return-receipts"></a>Slå sammen retursedler
Du kan slå sammen retursedler hvis kunden returnerer flere varer som dekkes av ulike ordrereturer.
Når du mottar varene på lageret, bokfører du de aktuelle ordrereturene som mottatt. Dette oppretter bokførte returmottak.
Når du er klar til å fakturere denne kunden, kan du i stedet for å fakturere hver enkelt ordreretur separat, opprette en salgskreditnota og automatisk kopiere de bokførte returmottakslinjene til dette dokumentet. Deretter kan du bokføre salgskreditnotaen, og helt enkelt fakturere alle åpne ordrereturer samtidig.
Du kombinerer retursedler ved å merke av for **Opprett samlefaktura** i vinduet **Kundekort**.
### <a name="to-manually-combine-return-receipts"></a>Slik slår du sammen retursedler manuelt
1. Velg ikonet , angi **Salgskreditnota**, og velg deretter den relaterte koblingen.
2. Velg handlingen **Ny**.
3. Fyll ut feltene i hurtigfanen **Generelt** etter behov.
4. Velg handlingen **Hent returmottakslinjer**.
5. Velg returmottakslinjene du vil ta med i kreditnotaen:
- Du setter inn alle linjene ved å merke dem og velge **OK**.
- Du setter inn bestemte linjer ved å merke dem og velge **OK**.
6. Hvis du valgte en feil følgeseddellinje, eller du vil starte på nytt, kan du ganske enkelt slette linjene på kreditnotaen og kjøre funksjonen **Hent returseddellinjer** på nytt.
7. Bokfør fakturaen.
### <a name="to-automatically-combine-return-receipts"></a>Slik slår du sammen retursedler automatisk:
Du kan slå sammen retursedler automatisk og velge å bokføre kreditnotaene automatisk ved hjelp av funksjonen **Slå sammen retursedler**.
1. Velg ikonet , angi **Slå sammen retursedler**, og velg deretter den relaterte koblingen.
2. I vinduet **Slå sammen retursedler** fyller du ut feltene for å velge de aktuelle returmottakene.
3. Merk av for **Bokfør kreditnotaer**. Hvis ikke må du manuelt bokføre de endelige kjøpskreditnotaene.
4. Velg **OK**.
### <a name="to-remove-a-received-and-invoiced-return-order"></a>Slik fjerner du mottatte og fakturerte ordrereturer
Når du fakturerer retursedler på denne måten, eksisterer fortsatt ordrereturene som retursedlene ble bokført fra, selv om de er fullstendig mottatt og fakturert.
Når retursedler slås sammen på en kreditnota og bokføres, opprettes en bokført salgskreditnota for den eller de krediterte linjene. **Fakturert (antall)**-feltet på den opprinnelige ordrereturen oppdateres ut fra det fakturerte antallet.
1. Velg ikonet , angi **Ordrereturer**, og velg deretter den relaterte koblingen.
2. Angi hvilke ordrer som skal slettes, i **Nr.** -filterfeltet.
3. Velg **OK**-knappen.
Du kan også slette individuelle ordrereturer manuelt.
## <a name="see-also"></a>Se også
[Salg](sales-manage-sales.md)
[Sette opp salg](sales-setup-sales.md)
[Sende dokumenter i e-post](ui-how-send-documents-email.md)
[Arbeide med [!INCLUDE[d365fin](includes/d365fin_md.md)]](ui-work-product.md)
| 95.34375 | 603 | 0.786768 | nob_Latn | 0.995133 |
d0774edc6cdd1899ad8866deb36afa392e473d22 | 134 | md | Markdown | README.md | abhinav-raj116/Wallpaper-Scrapper | 7c10df663d16dd4493c47c6ec14334c46c45930c | [
"MIT"
] | null | null | null | README.md | abhinav-raj116/Wallpaper-Scrapper | 7c10df663d16dd4493c47c6ec14334c46c45930c | [
"MIT"
] | null | null | null | README.md | abhinav-raj116/Wallpaper-Scrapper | 7c10df663d16dd4493c47c6ec14334c46c45930c | [
"MIT"
] | null | null | null | # Wallpaper-Scrapper
Download anime wallpapers from [Alphacoders](https://wall.alphacoders.com) and [Wallhere](https://wallhere.com)
| 33.5 | 111 | 0.783582 | eng_Latn | 0.26616 |
d0776689a9ae3b6bd959eb2b0d0e7e2e9a315cb1 | 1,480 | md | Markdown | README.md | cz111000/docker-scrapy | 49cee557b4b72dde3fffadf7073fbed1ef624b38 | [
"MIT"
] | null | null | null | README.md | cz111000/docker-scrapy | 49cee557b4b72dde3fffadf7073fbed1ef624b38 | [
"MIT"
] | null | null | null | README.md | cz111000/docker-scrapy | 49cee557b4b72dde3fffadf7073fbed1ef624b38 | [
"MIT"
] | 1 | 2021-06-22T08:42:08.000Z | 2021-06-22T08:42:08.000Z | # Scrapy
An open source and collaborative framework for extracting the data you need from websites.
In a fast, simple, yet extensible way.
See [scrapy][scrapy-home] official page and the official [documentation][scrapy-docs] for more details.


# Usage
For a list of [scrapy][scrapy-home] commands, simply run:
```
$ docker run -v $(pwd):/runtime/app dkorange/scrapy
```
Since the container doesn't provide any persistence, we can use the `volumes` (-v) directive to share the current folder with the container.
To start a new project
```
$ docker run -v $(pwd):/runtime/app dkorange/scrapy startproject tutorial
```
This will create a new `tutorial` folder in your current path.
To work on the [scrapy][scrapy-home] project:
```
$ cd tutorial
$ docker run -v $(pwd):/runtime/app dkorange/scrapy
```
Continue reading the official [tutorial][scrapy-tutorial] for a more in depth usage manual of [scrapy][scrapy-home]. For more details about [Docker][docker-home] and usage options, please see the official [documentation][docker-docs] page.
[scrapy-home]: http://scrapy.org/
[scrapy-docs]: http://doc.scrapy.org/en/latest/
[scrapy-tutorial]: http://doc.scrapy.org/en/latest/intro/tutorial.html
[docker-home]: https://www.docker.com/
[docker-docs]: https://docs.docker.com/
| 41.111111 | 239 | 0.747973 | eng_Latn | 0.683014 |
d079c1e7dbc3922a1a02dac3e2a1b51e04bdb5ee | 45 | md | Markdown | _tags/hand-tracking.md | paperli/paperworkstudio | e35b305854c6a0e3d4835aa118c81db661f5c0ab | [
"MIT"
] | null | null | null | _tags/hand-tracking.md | paperli/paperworkstudio | e35b305854c6a0e3d4835aa118c81db661f5c0ab | [
"MIT"
] | null | null | null | _tags/hand-tracking.md | paperli/paperworkstudio | e35b305854c6a0e3d4835aa118c81db661f5c0ab | [
"MIT"
] | null | null | null | ---
layout: tags
tag-name: Hand Tracking
---
| 9 | 23 | 0.644444 | eng_Latn | 0.403143 |
d07a76716aa433f03ea71d6e0df00dcef7484933 | 2,706 | md | Markdown | .github/ISSUE_TEMPLATE.md | jakub-w/youtube-dl-gui | ff807572f2dd08e51987fdcb351c8ab11cc03b38 | [
"Unlicense"
] | null | null | null | .github/ISSUE_TEMPLATE.md | jakub-w/youtube-dl-gui | ff807572f2dd08e51987fdcb351c8ab11cc03b38 | [
"Unlicense"
] | null | null | null | .github/ISSUE_TEMPLATE.md | jakub-w/youtube-dl-gui | ff807572f2dd08e51987fdcb351c8ab11cc03b38 | [
"Unlicense"
] | null | null | null | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer **honestly**
- Put an `x` into all the boxes [ ] relevant to your issue (like that [x])
- Use *Preview* tab to see how your issue will actually look like
### WARNING
All invalid issues will be rejected!!
---
### Before going further
- If your problem is a bug with **youtube-dl** or a request for new site support please report it [here](https://github.com/ytdl-org/youtube-dl/issues)
- Make sure you are using the *latest* **yt-dlg** version (Click the `Settings` icon and then `About` to view the current version)
- Make sure you are using the *latest* **youtube-dl** version (Click the `Settings` icon and then `Update` to update to the latest **youtube-dl** version)
- Make sure you searched the bugtracker for similar issues **including closed ones**
- Make sure to read the [FAQs](https://github.com/oleksis/youtube-dl-gui/blob/master/docs/faqs.md) file
- [ ] **I think** my problem is **NOT** with **youtube-dl**
- [ ] I've **verified** and **i assure** that I'm running yt-dlg **1.X.Y**
- [ ] **I assure** that i am using the latest version of **youtube-dl**
- [ ] [Searched](https://github.com/oleksis/youtube-dl-gui/issues) bugtracker
- [ ] I've read the FAQs file
---
### What is the purpose of your *issue*?
- [ ] Bug report
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
Please remove any sections between (---) if they are not related to your issue
---
### Bug report
#### If the problem occurs when downloading a URL please provide the full verbose output as follows:
1. Restart **yt-dlg**
1. Go to `Options > Extra` tab
2. Enable **Debug youtube-dl**
3. Go to `Options > Advanced` tab and **Clear** your log content
4. Try to download the URL
5. Copy the **whole** log content and insert it between the ``` part below
```
delete me and insert your log content here
```
#### What operating system do you use ?
#### List of actions to perform to reproduce the problem:
1. ..
2. ..
3. ..
#### What is the expected behaviour ?
#### What happens instead ?
---
### Feature request (request for a new functionality)
Please make sure that the requested feature is **NOT** already in the [TODO](https://github.com/oleksis/youtube-dl-gui/blob/master/TODO) list
- [ ] I've **verified** and **i assure** that my requested feature is **NOT** in the TODO list
#### What operating system do you use ?
---
<!--Enter description of your issue, suggested solution and other information below. Please make sure the description is worded well enough to be understood-->
---
title: Alert on issues in Azure Cloud Services using the Azure Diagnostics integration with Azure Application Insights | Microsoft Docs
description: Monitor for issues like startup failures, crashes, and role recycle loops in Azure Cloud Services with Azure Application Insights
services: application-insights
documentationcenter: ''
author: mrbullwinkle
manager: carmonm
ms.assetid: ea2a28ed-4cd9-4006-bd5a-d4c76f4ec20b
ms.service: application-insights
ms.workload: tbd
ms.tgt_pltfrm: ibiza
ms.devlang: na
ms.topic: conceptual
ms.date: 06/07/2018
ms.reviewer: harelbr
ms.author: mbullwin
---
# Alert on issues in Azure Cloud Services using the Azure diagnostics integration with Azure Application Insights
In this article, we will describe how to set up alert rules that monitor for issues like startup failures, crashes, and role recycle loops in Azure Cloud Services (web and worker roles).
The method described in this article is based on the [Azure Diagnostics integration with Application Insights](https://azure.microsoft.com/blog/azure-diagnostics-integration-with-application-insights/), and the recently released [Log Alerts for Application Insights](https://azure.microsoft.com/blog/log-alerts-for-application-insights-preview/) capability.
## Define a base query
To get started, we will define a base query that retrieves the Windows Event Log events from the Windows Azure channel, which are captured into Application Insights as trace records.
These records can be used for detecting a variety of issues in Azure Cloud Services, like startup failures, runtime failures and recycle loops.
> [!NOTE]
> The base query below checks for issues in a time window of 30 minutes, and assumes a 10 minutes latency in ingesting the telemetry records. These defaults can be configured as you see fit.
```
let window = 30m;
let endTime = ago(10m);
let EventLogs = traces
| where timestamp > endTime - window and timestamp < endTime
| extend channel = tostring(customDimensions.Channel), eventId = tostring(customDimensions.EventId)
| where channel == 'Windows Azure' and isnotempty(eventId)
| where tostring(customDimensions.DeploymentName) !contains 'deployment' // discard records captured from local machines
| project timestamp, channel, eventId, message, cloud_RoleInstance, cloud_RoleName, itemCount;
```
## Check for specific event IDs
After retrieving the Windows Event Log events, specific issues can be detected by checking for their respective event ID and message properties (see examples below).
Simply combine the base query above with one of the queries below, and use that combined query when defining the log alert rule.
> [!NOTE]
> In the examples below, an issue will be detected if more than three events are found during the analyzed time window. This default can be configured to change the sensitivity of the alert rule.
```
// Detect failures in the OnStart method
EventLogs
| where eventId == '2001'
| where message contains '.OnStart()'
| summarize Failures = sum(itemCount) by cloud_RoleInstance, cloud_RoleName
| where Failures > 3
```
```
// Detect failures during runtime
EventLogs
| where eventId == '2001'
| where message contains '.Run()'
| summarize Failures = sum(itemCount) by cloud_RoleInstance, cloud_RoleName
| where Failures > 3
```
```
// Detect failures when running a startup task
EventLogs
| where eventId == '1000'
| summarize Failures = sum(itemCount) by cloud_RoleInstance, cloud_RoleName
| where Failures > 3
```
```
// Detect recycle loops
EventLogs
| where eventId == '1006'
| summarize Failures = sum(itemCount) by cloud_RoleInstance, cloud_RoleName
| where Failures > 3
```
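When defining the log alert rule in the next step, the query you paste in is simply the base query followed by one of the detection snippets above. For example, the combined query for detecting `OnStart` failures is:

```
let window = 30m;
let endTime = ago(10m);
let EventLogs = traces
| where timestamp > endTime - window and timestamp < endTime
| extend channel = tostring(customDimensions.Channel), eventId = tostring(customDimensions.EventId)
| where channel == 'Windows Azure' and isnotempty(eventId)
| where tostring(customDimensions.DeploymentName) !contains 'deployment' // discard records captured from local machines
| project timestamp, channel, eventId, message, cloud_RoleInstance, cloud_RoleName, itemCount;
EventLogs
| where eventId == '2001'
| where message contains '.OnStart()'
| summarize Failures = sum(itemCount) by cloud_RoleInstance, cloud_RoleName
| where Failures > 3
```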
## Create an alert
In the navigation menu within your Application Insights resource, go to **Alerts**, and then select **New Alert Rule**.

In the **Create rule** window, under the **Define alert condition** section, click on **Add criteria**, and then select **Custom log search**.

In the **Search query** box, paste the combined query you prepared in the previous step.
Then, continue to the **Threshold** box, and set its value to 0. You may optionally tweak the **Period** and **Frequency** fields.
Click **Done**.

Under the **Define alert details** section, provide a **Name** and **Description** to the alert rule, and set its **Severity**.
Also, make sure that the **Enable rule upon creation** button is set to **Yes**.

Under the **Define action group** section, you can select an existing **Action group** or create a new one.
You may choose to have the action group contain multiple actions of various types.

Once you've defined the Action group, confirm your changes and click **Create alert rule**.
## Next Steps
Learn more about automatically detecting:
[Failure anomalies](app-insights-proactive-failure-diagnostics.md)
[Memory Leaks](app-insights-proactive-potential-memory-leak.md)
[Performance anomalies](app-insights-proactive-performance-diagnostics.md)
---
author: dema
comments: true
date: 2007-06-20 06:00:36+00:00
layout: post
slug: rischio-sicurezza-fon
title: 'FON security risk'
wordpress_id: 77
categories:
- fon
- fonera
- hotspot
- mac address
- spoof
- wifi
- wireless
---

On the [English-language Fon forum](http://boards.fon.com/viewtopic.php?t=3246), I found a very interesting discussion about the authentication mechanism based on watching an advertising clip.
It appears that access to the hotspot can be obtained with **no validation at all of the submitted data**. There is in fact no mechanism to verify the email address or the identity of whoever logs in.
Although I have always thought that, in public settings, an overly **complex authentication mechanism** drives potential Fon users away, this fact brings yet another problem with it.
Indeed, once the 15 minutes of free browsing have been used up, simply changing the email, username and MAC address of the wireless card yields another 15 free minutes, and you can easily see that with a simple **purpose-built script**, a _spoof-fon_, the Fon hotspots can be used entirely free of charge.
At the time of writing, the Fon staff, having been alerted to this bug, is working on changes to the _surf by ads_ system.
<p align="center">
  <a href="#-technologies">Technologies</a> |
  <a href="#-goal">Goal</a> |
  <a href="#-how-to-run">How to run</a> |
  <a href="#-license">License</a>
</p>
<p align="center">
  <img alt="License" src="https://img.shields.io/static/v1?label=license&message=MIT&color=8257E5&labelColor=000000">
  <img src="https://img.shields.io/static/v1?label=IA&message=06&color=8257E5&labelColor=000000" />
</p>
<br />
## ✨ Technologies
The projects listed here were developed using:
- [Python](https://www.python.org/)
## 💻 Goal
This repository contains all the assignments (graded or not) from the Artificial Intelligence course of the 5th semester of the Information Systems program at Centro Universitário Cesmac. Its purpose is to showcase what I have been studying and learning in the course.
## 🚀 How to run
- Clone the repository
- Make sure you have Python 3.x installed on your machine (if you don't, [click here](https://www.python.org/downloads/) and follow the step-by-step guide)
- Enter the repository: `cd inteligencia-artificial`
- Right after, enter the directory of the exercise you want to run, e.g. `cd exercicio_algoritmo_genetico`
- Run the command `python main.py`
## 📄 License
This project is under the MIT license. See the [LICENSE](LICENSE.md) file for more details.
---
<p align="center">Made by <a href="https://github.com/RamonBomfim">Ramon Bomfim</a> <br><br>
<a href="https://www.linkedin.com/in/ramon-bomfim-8372a919a/">
<img alt="Linkedin Badge" src="https://img.shields.io/badge/-Ramon_Bomfim-blue?style=flat-square&logo=Linkedin&logoColor=white">
</a>
</p>
---
title: 'How to: Create signed friend assemblies'
ms.date: 08/19/2019
ms.assetid: bab62063-61e6-453f-905f-77673df9534e
dev_langs:
- csharp
- vb
ms.openlocfilehash: 52ecfbae11c7be125d0e60a0fce6a05182e2db9e
ms.sourcegitcommit: 559259da2738a7b33a46c0130e51d336091c2097
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/22/2019
ms.locfileid: "72774355"
---
# <a name="how-to-create-signed-friend-assemblies"></a>How to: Create signed friend assemblies
This example shows how to use friend assemblies with assemblies that have strong names. Both assembly types must have a strong name. The assemblies in this example use the same keys; however, you could use different keys for the two assemblies.
## <a name="create-a-signed-assembly-and-a-friend-assembly"></a>Create a signed assembly and a friend assembly
1. Open a command prompt.
2. Execute the following sequence of commands with the Strong Name tool to generate a keyfile and to display its public key. For more information, see [Sn.exe (Strong Name tool)](../../framework/tools/sn-exe-strong-name-tool.md).
    1. Generate a strong-name key for this example and store it in the file *FriendAssemblies.snk*:
       `sn -k FriendAssemblies.snk`
    2. Extract the public key from *FriendAssemblies.snk* and put it into *FriendAssemblies.publickey*:
       `sn -p FriendAssemblies.snk FriendAssemblies.publickey`
    3. Display the public key stored in the file *FriendAssemblies.publickey*:
       `sn -tp FriendAssemblies.publickey`
3. Create a C# or Visual Basic file named *friend_signed_A* that contains the following code. The code uses the <xref:System.Runtime.CompilerServices.InternalsVisibleToAttribute> attribute to declare *friend_signed_B* as a friend assembly.
   Because the Strong Name tool generates a new public key every time it runs, you must replace the public key in the following code with your newly generated public key, as shown in the following example.
```csharp
// friend_signed_A.cs
// Compile with:
// csc /target:library /keyfile:FriendAssemblies.snk friend_signed_A.cs
using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("friend_signed_B, PublicKey=0024000004800000940000000602000000240000525341310004000001000100e3aedce99b7e10823920206f8e46cd5558b4ec7345bd1a5b201ffe71660625dcb8f9a08687d881c8f65a0dcf042f81475d2e88f3e3e273c8311ee40f952db306c02fbfc5d8bc6ee1e924e6ec8fe8c01932e0648a0d3e5695134af3bb7fab370d3012d083fa6b83179dd3d031053f72fc1f7da8459140b0af5afc4d2804deccb6")]
class Class1
{
public void Test()
{
System.Console.WriteLine("Class1.Test");
System.Console.ReadLine();
}
}
```
```vb
' friend_signed_A.vb
' Compile with:
' Vbc -target:library -keyfile:FriendAssemblies.snk friend_signed_A.vb
Imports System.Runtime.CompilerServices
<Assembly: InternalsVisibleTo("friend_signed_B, PublicKey=0024000004800000940000000602000000240000525341310004000001000100e3aedce99b7e10823920206f8e46cd5558b4ec7345bd1a5b201ffe71660625dcb8f9a08687d881c8f65a0dcf042f81475d2e88f3e3e273c8311ee40f952db306c02fbfc5d8bc6ee1e924e6ec8fe8c01932e0648a0d3e5695134af3bb7fab370d3012d083fa6b83179dd3d031053f72fc1f7da8459140b0af5afc4d2804deccb6")>
Public Class Class1
Public Sub Test()
System.Console.WriteLine("Class1.Test")
System.Console.ReadLine()
End Sub
End Class
```
4. Compile and sign *friend_signed_A* with the following command.
```csharp
csc /target:library /keyfile:FriendAssemblies.snk friend_signed_A.cs
```
```vb
Vbc -target:library -keyfile:FriendAssemblies.snk friend_signed_A.vb
```
5. Create a C# or Visual Basic file named *friend_signed_B* that contains the following code. Because *friend_signed_A* specifies *friend_signed_B* as a friend assembly, the code in *friend_signed_B* can access `internal` (C#) or `Friend` (Visual Basic) types and members from *friend_signed_A*. The file contains the following code.
```csharp
// friend_signed_B.cs
// Compile with:
// csc /keyfile:FriendAssemblies.snk /r:friend_signed_A.dll /out:friend_signed_B.exe friend_signed_B.cs
public class Program
{
static void Main()
{
Class1 inst = new Class1();
inst.Test();
}
}
```
```vb
' friend_signed_B.vb
' Compile with:
' Vbc -keyfile:FriendAssemblies.snk -r:friend_signed_A.dll friend_signed_B.vb
Module Sample
Public Sub Main()
Dim inst As New Class1
inst.Test()
End Sub
End Module
```
6. Compile and sign *friend_signed_B* with the following command.
```csharp
csc /keyfile:FriendAssemblies.snk /r:friend_signed_A.dll /out:friend_signed_B.exe friend_signed_B.cs
```
```vb
vbc -keyfile:FriendAssemblies.snk -r:friend_signed_A.dll friend_signed_B.vb
```
   The name of the assembly generated by the compiler must match the friend assembly name passed to the <xref:System.Runtime.CompilerServices.InternalsVisibleToAttribute> attribute. You must explicitly specify the name of the output assembly (*.exe* or *.dll*) by using the `-out` compiler option. For more information, see [-out (C# compiler options)](../../csharp/language-reference/compiler-options/out-compiler-option.md) or [-out (Visual Basic)](../../visual-basic/reference/command-line-compiler/out.md).
7. Run the *friend_signed_B.exe* file.
   The program outputs the string **Class1.Test**.
## <a name="net-security"></a>.NET security
There are some similarities between the <xref:System.Runtime.CompilerServices.InternalsVisibleToAttribute> attribute and the <xref:System.Security.Permissions.StrongNameIdentityPermission> class. The main difference is that <xref:System.Security.Permissions.StrongNameIdentityPermission> can demand a security permission in order to run a particular section of code, whereas the <xref:System.Runtime.CompilerServices.InternalsVisibleToAttribute> attribute controls the visibility of `internal` (C#) or `Friend` (Visual Basic) types and members.
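As an illustration of that difference, an identity-based link demand is applied at the point where code is *called*, rather than controlling member visibility. The sketch below is illustrative only (the `PublicKey` value is a placeholder, not a real key):

```csharp
using System.Security.Permissions;

// Illustrative sketch: only assemblies signed with the matching key
// may link against this class. The PublicKey value is a placeholder.
[StrongNameIdentityPermission(SecurityAction.LinkDemand, PublicKey = "0024000004800000...")]
public class ProtectedApi
{
    public void DoWork() { }
}
```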
## <a name="see-also"></a>See also
- <xref:System.Runtime.CompilerServices.InternalsVisibleToAttribute>
- [Assemblies in .NET](index.md)
- [Friend assemblies](friend.md)
- [How to: Create unsigned friend assemblies](create-unsigned-friend.md)
- [-keyfile (C# compiler options)](../../csharp/language-reference/compiler-options/keyfile-compiler-option.md)
- [-keyfile (Visual Basic)](../../visual-basic/reference/command-line-compiler/keyfile.md)
- [Sn.exe (Strong Name tool)](../../framework/tools/sn-exe-strong-name-tool.md)
- [Create and use strong-named assemblies](create-use-strong-named.md)
- [C# programming guide](../../csharp/programming-guide/index.md)
- [Programming concepts (Visual Basic)](../../visual-basic/programming-guide/concepts/index.md)
<a name="unreleased"></a>
## [Unreleased]
<a name="v1.3.1"></a>
## [v1.3.1] - 2021-05-18
### Chore
- **docs:** updating docs for version v1.3.1
- **docs:** update cp command to be portable across OS
<a name="v1.3.0"></a>
## [v1.3.0] - 2021-05-18
### Bug Fixes
- **release:** added required build steps to create distribution
### Chore
- **docs:** updating docs for version v1.3.0
- **release:** v1.3.0
- **release:** updating CHANGELOG for v1.3.0
<a name="v1.2.2"></a>
## [v1.2.2] - 2021-05-13
### Chore
- **deps:** update all non-major dependencies ([#91](https://github.com/GoodwayGroup/lib-tradedesk/issues/91))
- **deps:** update dependency handlebars to 4.7.7 [security] ([#93](https://github.com/GoodwayGroup/lib-tradedesk/issues/93))
- **deps:** update dependency hosted-git-info to 2.8.9 [security] ([#94](https://github.com/GoodwayGroup/lib-tradedesk/issues/94))
- **deps:** update dependency lodash to 4.17.21 [security] ([#95](https://github.com/GoodwayGroup/lib-tradedesk/issues/95))
- **deps:** update all non-major dependencies ([#90](https://github.com/GoodwayGroup/lib-tradedesk/issues/90))
- **docs:** updating docs for version v1.2.2
### Features
- **release:** v1.2.2
<a name="v1.2.1"></a>
## [v1.2.1] - 2021-04-12
### Chore
- **deps:** update all non-major dependencies ([#89](https://github.com/GoodwayGroup/lib-tradedesk/issues/89))
- **deps:** update dependency husky to v6 ([#88](https://github.com/GoodwayGroup/lib-tradedesk/issues/88))
- **deps:** update all non-major dependencies ([#87](https://github.com/GoodwayGroup/lib-tradedesk/issues/87))
- **docs:** updating docs for version v1.2.1
### Features
- **release:** v1.2.1
<a name="v1.2.0"></a>
## [v1.2.0] - 2021-03-18
### Bug Fixes
- **github actions:** exclude all branches
### Chore
- **deps:** update actions/checkout action to v2 ([#82](https://github.com/GoodwayGroup/lib-tradedesk/issues/82))
- **deps:** update dependency husky to v5
- **deps:** update node.js to v14
- **deps:** update all non-major dependencies ([#85](https://github.com/GoodwayGroup/lib-tradedesk/issues/85))
- **deps:** update all non-major dependencies ([#84](https://github.com/GoodwayGroup/lib-tradedesk/issues/84))
- **deps:** update actions/setup-node action to v2 ([#83](https://github.com/GoodwayGroup/lib-tradedesk/issues/83))
- **docs:** updating docs for version v1.2.0
### Features
- **cd:** add workflow dispatch to version and publish to npm ([#86](https://github.com/GoodwayGroup/lib-tradedesk/issues/86))
- **release:** v1.2.0
<a name="v1.1.1"></a>
## [v1.1.1] - 2021-03-07
### Bug Fixes
- **typedoc:** update typdoc config for compatibility with new version
### Chore
- **deps:** update all non-major dependencies ([#80](https://github.com/GoodwayGroup/lib-tradedesk/issues/80))
- **deps:** update dependency marked to 2.0.0 [security] ([#81](https://github.com/GoodwayGroup/lib-tradedesk/issues/81))
- **docs:** updating docs for version v1.1.1
- **github action:** add Publish action
### Features
- **release:** v1.1.1
### Pull Requests
- Merge pull request [#73](https://github.com/GoodwayGroup/lib-tradedesk/issues/73) from GoodwayGroup/release/v1.1.0
###### Squashed Commits:
```
feat(release): v1.1.0
```
<a name="v1.1.0"></a>
## [v1.1.0] - 2021-01-22
### Chore
- **deps:** update dependency eslint to v7.17.0 ([#60](https://github.com/GoodwayGroup/lib-tradedesk/issues/60))
- **deps:** update dependency [@types](https://github.com/types)/node-fetch to v2.5.8 ([#66](https://github.com/GoodwayGroup/lib-tradedesk/issues/66))
- **deps:** update typescript-eslint monorepo to v4.13.0 ([#62](https://github.com/GoodwayGroup/lib-tradedesk/issues/62))
- **deps:** update dependency [@types](https://github.com/types)/jest to v26.0.20 ([#64](https://github.com/GoodwayGroup/lib-tradedesk/issues/64))
- **deps:** update dependency husky to v4.3.7 ([#63](https://github.com/GoodwayGroup/lib-tradedesk/issues/63))
- **deps:** update node.js to v12.20.1 ([#61](https://github.com/GoodwayGroup/lib-tradedesk/issues/61))
- **docs:** updating docs for version v1.1.0
### Features
- **DataProvider:** added class to support the data api ttdsignature header ([#65](https://github.com/GoodwayGroup/lib-tradedesk/issues/65))
- **release:** v1.1.0
<a name="v1.0.3"></a>
## [v1.0.3] - 2020-12-29
### Chore
- **deps:** update dependency jest to v26.6.1 ([#41](https://github.com/GoodwayGroup/lib-tradedesk/issues/41))
- **deps:** rollback and pin typedoc to v0.19.2 due to breaking changes
- **deps:** updated [@types](https://github.com/types)/jest, eslint-plugin-jest, ts-jest and jest
- **deps:** update dependency eslint to v7.16.0 ([#48](https://github.com/GoodwayGroup/lib-tradedesk/issues/48))
- **deps:** update dependency typedoc-plugin-markdown to v3.1.1 ([#57](https://github.com/GoodwayGroup/lib-tradedesk/issues/57))
- **deps:** update dependency eslint-plugin-tsdoc to v0.2.10 ([#54](https://github.com/GoodwayGroup/lib-tradedesk/issues/54))
- **deps:** update node.js to v12.20.0 ([#52](https://github.com/GoodwayGroup/lib-tradedesk/issues/52))
- **deps:** update typescript-eslint monorepo to v4.11.1 ([#47](https://github.com/GoodwayGroup/lib-tradedesk/issues/47))
- **deps:** update dependency nock to v13.0.5 ([#50](https://github.com/GoodwayGroup/lib-tradedesk/issues/50))
- **deps:** update dependency typescript to v4.1.3 ([#53](https://github.com/GoodwayGroup/lib-tradedesk/issues/53))
- **deps:** update dependency typedoc to v0.20.1 ([#58](https://github.com/GoodwayGroup/lib-tradedesk/issues/58))
- **deps:** update dependency husky to v4.3.6 ([#56](https://github.com/GoodwayGroup/lib-tradedesk/issues/56))
- **deps:** update typescript-eslint monorepo to v4.6.0 ([#44](https://github.com/GoodwayGroup/lib-tradedesk/issues/44))
- **deps:** update dependency typescript to v4.0.5 ([#45](https://github.com/GoodwayGroup/lib-tradedesk/issues/45))
- **deps:** update dependency eslint to v7.12.1 ([#43](https://github.com/GoodwayGroup/lib-tradedesk/issues/43))
- **deps:** update dependency ts-jest to v26.4.3 ([#42](https://github.com/GoodwayGroup/lib-tradedesk/issues/42))
- **docs:** updating docs for version v1.0.3
### Features
- **release:** v1.0.3
<a name="v1.0.2"></a>
## [v1.0.2] - 2020-10-20
### Chore
- **deps:** update jest, eslint, typedoc and bump node version ([#36](https://github.com/GoodwayGroup/lib-tradedesk/issues/36))
- **deps:** update dependency [@types](https://github.com/types)/jest to v26.0.15 ([#40](https://github.com/GoodwayGroup/lib-tradedesk/issues/40))
- **deps:** update typescript-eslint monorepo to v4.5.0 ([#39](https://github.com/GoodwayGroup/lib-tradedesk/issues/39))
- **deps:** update dependency jest to v26.6.0 ([#38](https://github.com/GoodwayGroup/lib-tradedesk/issues/38))
- **deps:** update dependency typedoc-plugin-markdown to v3.0.11 ([#37](https://github.com/GoodwayGroup/lib-tradedesk/issues/37))
- **deps:** update dependency typedoc-plugin-markdown to v3.0.9 ([#28](https://github.com/GoodwayGroup/lib-tradedesk/issues/28))
- **docs:** updating docs for version v1.0.2
### Features
- **release:** v1.0.2
<a name="v1.0.1"></a>
## [v1.0.1] - 2020-09-14
### Bug Fixes
- **deps:** update dependency node-fetch to v2.6.1 [security] ([#22](https://github.com/GoodwayGroup/lib-tradedesk/issues/22))
### Chore
- **deps:** update dependency eslint-plugin-jest to v24.0.1 ([#24](https://github.com/GoodwayGroup/lib-tradedesk/issues/24))
- **deps:** update dependency eslint to v7.9.0 ([#23](https://github.com/GoodwayGroup/lib-tradedesk/issues/23))
- **deps:** update typescript-eslint monorepo to v4.1.1 ([#25](https://github.com/GoodwayGroup/lib-tradedesk/issues/25))
- **deps:** pin dependencies ([#20](https://github.com/GoodwayGroup/lib-tradedesk/issues/20))
- **docs:** updating docs for version v1.0.1
### Features
- **release:** v1.0.1
<a name="v1.0.0"></a>
## [v1.0.0] - 2020-09-09
### Chore
- add script to support versioning and release
- added v to next tag version in changelog
- **deps:** update typescript-eslint monorepo to v3.8.0 ([#4](https://github.com/GoodwayGroup/lib-tradedesk/issues/4))
- **deps:** pin dependencies ([#2](https://github.com/GoodwayGroup/lib-tradedesk/issues/2))
- **deps:** update typescript and typescript-eslint monorepo to v4
- **deps:** update dependency typedoc to v0.19.1 ([#5](https://github.com/GoodwayGroup/lib-tradedesk/issues/5))
- **deps:** updated nock, eslint and markdown
- **deps:** update dependency [@types](https://github.com/types)/jest to v26.0.13
- **deps:** updated jest, ts-jest and eslint-plugin-jest
- **deps:** update dependency husky to v4.3.0 ([#18](https://github.com/GoodwayGroup/lib-tradedesk/issues/18))
- **deps:** update dependency [@types](https://github.com/types)/jest to v26.0.9 ([#3](https://github.com/GoodwayGroup/lib-tradedesk/issues/3))
- **docs:** updating docs for version v1.0.0
- **renovate:** add config ([#14](https://github.com/GoodwayGroup/lib-tradedesk/issues/14))
- **renovate:** update labels
- **renovate:** configure bot
- **specs:** minor update to specs for passthrough requests ([#19](https://github.com/GoodwayGroup/lib-tradedesk/issues/19))
### Features
- **release:** v1.0.0
<a name="v0.0.0"></a>
## v0.0.0 - 2020-08-05
### Bug Fixes
- documentation link again
- **token:** internal expiration time now including set expiration
### Chore
- cleanup and documentation
### Code Refactoring
- docs
- doc linting
### Docs
- updated configuration options link
- run of typedoc
- updated readme
- switch to markdown
- set theme jekyll-theme-cayman
- initial push
### Features
- initial commit
[Unreleased]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.3.1...HEAD
[v1.3.1]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.3.0...v1.3.1
[v1.3.0]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.2.2...v1.3.0
[v1.2.2]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.2.1...v1.2.2
[v1.2.1]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.2.0...v1.2.1
[v1.2.0]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.1.1...v1.2.0
[v1.1.1]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.1.0...v1.1.1
[v1.1.0]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.0.3...v1.1.0
[v1.0.3]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.0.2...v1.0.3
[v1.0.2]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.0.1...v1.0.2
[v1.0.1]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v1.0.0...v1.0.1
[v1.0.0]: https://github.com/GoodwayGroup/lib-tradedesk/compare/v0.0.0...v1.0.0
| 46.584071 | 150 | 0.692344 | yue_Hant | 0.724458 |
d07d7dd755cdfac37ecf9454a64b440b86aecced | 1,850 | md | Markdown | source/_posts/2019-09-26-day20.md | qoosuperman/qoosuperman.github.io | 7c4f6765b29f9a732169f84fc13febf5ee8fb2d9 | [
"Apache-2.0"
] | null | null | null | source/_posts/2019-09-26-day20.md | qoosuperman/qoosuperman.github.io | 7c4f6765b29f9a732169f84fc13febf5ee8fb2d9 | [
"Apache-2.0"
] | 1 | 2021-07-20T16:00:59.000Z | 2021-07-20T16:00:59.000Z | source/_posts/2019-09-26-day20.md | qoosuperman/qoosuperman.github.io | 7c4f6765b29f9a732169f84fc13febf5ee8fb2d9 | [
"Apache-2.0"
] | null | null | null | ---
title: "寫 migration 檔內容"
catalog: true
toc_nav_num: true
date: 2019-09-26 22:26:24
subtitle: ""
header-img: "https://images.unsplash.com/photo-1569191086551-b3606745884f?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=1950&q=80"
tags:
- Rails
catagories:
- Rails
updateDate: 2019-09-26 22:26:24
# top: 1
---
## Writing migrations
上一篇文章講的是如何製作 migration 檔案,這次要來講的是如何寫裡面的內容, Rails guide 裡面提到很多方法,我會把比較常使用的寫下來,詳細內容請看[這裡](https://guides.rubyonrails.org/active_record_migrations.html),另外下面的範例也是來自 Rails Guide
1. **Create table**
如果是要製作一個表格,我們可以直接在下面放欄位的名字
``` ruby
create_table :users do |t|
t.string :name
end
```
這個檔案只要執行 `rails db:migrate` 就會幫我們做出一個 users 的 table,然後裡面有一個名字的欄位,格式是 string
要注意的是每一次生出一個 table 他都會自動幫我們生出一個叫做 id 的欄位,預設為這個表格的 primary key,如果想要自己指定 primary key 可以用 `:primary_key` 這個選項,或者你不想要 primary key 也可以用 `id:false` 來處理
2. **Change table**
更改表格內容也是一個常見動作
```ruby
change_table :products do |t|
t.remove :description, :name
t.string :part_number
t.index :part_number
t.rename :upccode, :upc_code
end
```
上面這個 migration 檔會對 products 表格做幾件事情
1.把 description 跟 name 的欄位拿掉
2.新增一個 part_name 欄位並加上 index
3.把 upcode 欄位改成 upc_code 欄位
3. **Change colun**
更改表格欄位也是一個常見動作
``` ruby
change_column :products, :part_number, :text
```
上面這個例子是把 products 表格中的 part_name 欄位資料改成使用 text 這個格式儲存
另外 change column 是一個不可逆的指令,主要是因為他並沒有紀錄一開始你的資料格式
如果要可逆的話建議可改成使用 `up` 跟 `down` 的方法,而不是用 `change`,這兩種方法分別寫明當你今天 migrate 的時候執行的是 up 裡面的內容,而如果是 rollback 就是使用 down 裡面的內容
```ruby
change_column_null :products, :name, false
change_column_default :products, :approved, from: true, to: false
```
上面的例子是把 products 表格中的 name 指定為不能是 null 的欄位,下一行是把 approve 這個欄位的初始值設為 false
今天的介紹就先到這邊了,希望可以對英文苦手的初學者有點幫助,但如果要做更複雜的操作還是要去看 Rails Guide 本身的說明或者翻 API 喔~
參考資料:
[Rails Guide](https://guides.rubyonrails.org/active_record_migrations.html)
| 24.025974 | 173 | 0.777297 | yue_Hant | 0.964825 |
d07dd352ee73919ca434bb48132539929f66a61e | 242 | md | Markdown | src/data/blockchain-case-commons.md | Lane/portfolio | 7fd8a577b50fba3db060d055a1ab52892cc7a16c | [
"MIT"
] | null | null | null | src/data/blockchain-case-commons.md | Lane/portfolio | 7fd8a577b50fba3db060d055a1ab52892cc7a16c | [
"MIT"
] | 3 | 2021-09-21T17:56:58.000Z | 2022-02-27T16:19:36.000Z | src/data/blockchain-case-commons.md | Lane/portfolio | 7fd8a577b50fba3db060d055a1ab52892cc7a16c | [
"MIT"
] | null | null | null | ---
title: "Blockchain Case Commons"
summary: ""
roles: "Developer, Designer, User Experience"
client: "Blockchain Research Institute"
type: "Web Application"
stack: "React, Firebase, Google Cloud Platform"
tags: "app"
date: "2019-01-01"
---
| 22 | 47 | 0.727273 | kor_Hang | 0.184877 |
d07e5ec7c65b44e44219f0f438f09a7c30d92dae | 23,089 | md | Markdown | README.md | snykcanoodles/github | c7b4e08cf1f48d86ea071f545842246587d6a6e0 | [
"MIT"
] | 1 | 2015-11-20T04:36:01.000Z | 2015-11-20T04:36:01.000Z | README.md | haonaturel/github | c7b4e08cf1f48d86ea071f545842246587d6a6e0 | [
"MIT"
] | null | null | null | README.md | haonaturel/github | c7b4e08cf1f48d86ea071f545842246587d6a6e0 | [
"MIT"
] | 1 | 2022-01-11T13:06:09.000Z | 2022-01-11T13:06:09.000Z | [][icon]
[icon]: https://github.com/peter-murach/github/raw/master/icons/github_api.png
# github_api
[][gem]
[][travis]
[][codeclimate]
[][coverage]
[][inchpages]
[][gemnasium]
[gem]: http://badge.fury.io/rb/github_api
[travis]: http://travis-ci.org/peter-murach/github
[codeclimate]: https://codeclimate.com/github/peter-murach/github
[coverage]: https://coveralls.io/r/peter-murach/github
[inchpages]: http://inch-ci.org/github/peter-murach/github
[gemnasium]: https://gemnasium.com/peter-murach/github
[Website](http://peter-murach.github.io/github/) | [Wiki](https://github.com/peter-murach/github/wiki) | [RDocs](http://rubydoc.info/github/peter-murach/github/master/frames)
A Ruby client for the official GitHub API.
Supports all of the API methods. It's built in a modular way: you can either instantiate the whole API wrapper with `Github.new`, or use parts of it, e.g. `Github::Client::Repos.new`, if working solely with repositories is your main concern. Intuitive query methods allow you to easily call API endpoints.
## Features
* Intuitive GitHub API interface navigation.
* It's comprehensive. You can request all GitHub API resources.
* Modular design allows for working with parts of API.
* Fully customizable including advanced middleware stack construction.
* Supports OAuth2 authorization.
* Flexible argument parsing. You can write expressive and natural queries.
* Requests pagination with convenient DSL and automatic options.
* Easy error handling split for client and server type errors.
* Supports multithreaded environment.
* Custom media type specification through the 'media' parameter.
* Request results caching
* Fully tested with unit and feature tests hitting the live api.
## Installation
Install the gem by running
```ruby
gem install github_api
```
or put it in your Gemfile and run `bundle install`
```ruby
gem "github_api"
```
## Contents
* [1. Usage](#1-usage)
* [1.1 API Navigation](#11-api-navigation)
* [1.2 Modularity](#12-modularity)
* [1.3 Arguments](#13-arguments)
* [1.4 Response Querying](#14-response-querying)
* [1.4.1 Response Body](#141-response-body)
* [1.4.2 Response Headers](#142-response-headers)
* [1.4.3 Response Success](#143-response-success)
* [1.5 Request Headers](#15-request-headers)
* [1.5.1 Media Types](#151-media-types)
* [2. Configuration](#2-configuration)
* [2.1 Basic](#21-basic)
* [2.2 Advanced](#22-advanced)
* [2.3 SSL](#23-ssl)
* [2.4 Caching](#24-caching)
* [3. Authentication](#3-authentication)
* [3.1 Basic](#31-basic)
* [3.2 Authorizations API](#32-authorizations-api)
* [3.3 Scopes](#33-scopes)
* [3.4 Application OAuth](#34-application-oauth)
* [3.5 Two-Factor](#35-two-factor)
* [4. Pagination](#4-pagination)
* [4.1 Auto pagination](#41-auto-pagination)
* [5. Error Handling](#5-error-handling)
* [6. Examples](#6-examples)
* [6.1 Rails](#61-rails)
* [6.2 Manipulating Files](#62-manipulating-files)
* [7. Testing](#7-testing)
## 1 Usage
To start using the gem, you can either perform requests directly on `Github` namespace:
```ruby
Github.repos.list user: 'peter-murach'
```
or create a new client instance like so
```ruby
github = Github.new
```
and then call api methods, for instance, to list a given user repositories do
```ruby
github.repos.list user: 'peter-murach'
```
### 1.1 API Navigation
The **github_api** gem closely mirrors the [GitHub API](https://developer.github.com/v3/) hierarchy. For example, if you want to create a new file in a repository, look up the GitHub API spec. There you will find the contents subcategory underneath the repository category. This translates to the request:
```ruby
github = Github.new
github.repos.contents.create 'peter-murach', 'finite_machine', 'hello.rb',
path: 'hello.rb',
content: "puts 'hello ruby'"
```
The whole library reflects the same api navigation. Therefore, if you need to list releases for a repository do:
```ruby
github.repos.releases.list 'peter-murach', 'finite_machine'
```
or to list a user's followers:
```ruby
github.users.followers.list 'peter-murach'
```
The code base has been extensively documented with examples of how to use each method. Please refer to the [documentation](http://rubydoc.info/github/peter-murach/github/master/frames) under the `Github::Client` class name.
Alternatively, you can find out which methods are supported by an API by calling `actions` on a class or instance. For example, in order to find out the available endpoints for the `Github::Client::Repos::Contents` API, call the `actions` method:
```ruby
Github::Client::Repos::Contents.actions
=> [:archive, :create, :delete, :find, :get, :readme, :update]
```
### 1.2 Modularity
The code base is modular. This means that you can work specifically with a given part of GitHub API. If you want to only work with activity starring API do the following:
```ruby
starring = Github::Client::Activity::Starring.new
starring.star 'peter-murach', 'github'
```
Please refer to the [documentation](http://rubydoc.info/github/peter-murach/github/master/frames) and look under `Github::Client` to see all available classes.
### 1.3 Arguments
The **github_api** library allows for flexible argument parsing.
Arguments can be passed directly inside the method called. The `required` arguments are passed in first, followed by optional parameters supplied as hash options:
```ruby
issues = Github::Client::Issues.new
issues.milestones.list 'peter-murach', 'github', state: 'open'
```
In the previous example, the order of arguments is important. However, each method also allows you to specify `required` arguments using hash symbols and thus remove the need for ordering. Therefore, the same example could be rewritten like so:
```ruby
issues = Github::Client::Issues.new
issues.milestones.list user: 'peter-murach', repo: 'github', state: 'open'
```
Furthermore, `required` arguments can be passed during instance creation:
```ruby
issues = Github::Client::Issues.new user: 'peter-murach', repo: 'github'
issues.milestones.list state: 'open'
```
Similarly, the `required` arguments for the request can be passed inside the current scope such as:
```ruby
issues = Github::Client::Issues.new
issues.milestones(user: 'peter-murach', repo: 'github').list state: 'open'
```
But why limit ourselves? You can mix and match arguments, for example:
```ruby
issues = Github::Client::Issues.new user: 'peter-murach'
issues.milestones(repo: 'github').list
issues.milestones(repo: 'tty').list
```
You can also use a bit of syntactic sugar whereby "username/repository" can be passed as well:
```ruby
issues = Github::Client::Issues.new
issues.milestones('peter-murach/github').list
issues.milestones.list 'peter-murach/github'
```
Finally, use the `with` scope to clearly denote your requests
```ruby
issues = Github::Client::Issues.new
issues.milestones.with(user: 'peter-murach', repo: 'github').list
```
Please consult the method [documentation](http://rubydoc.info/github/peter-murach/github/master/frames) or [GitHub specification](https://developer.github.com/v3/) to see which arguments are required and what are the option parameters.
### 1.4 Response Querying
The response is of type `Github::ResponseWrapper` and allows traversing all the json response attributes like method calls. In addition, if the response returns more than one resource, these will be automatically yielded to the provided block one by one.
For example, when request is issued to list all the branches on a given repository, each branch will be yielded one by one:
```ruby
repos = Github::Client::Repos.new
repos.branches user: 'peter-murach', repo: 'github' do |branch|
puts branch.name
end
```
#### 1.4.1 Response Body
The `ResponseWrapper` allows you to call JSON attributes directly as method calls. There is no magic here: all calls are delegated to the response body. Therefore, you can directly inspect the response body by calling the `body` method on the `ResponseWrapper` like so:
```ruby
response = repos.branches user: 'peter-murach', repo: 'github'
response.body # => Array of branches
```
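The delegation described above can be sketched in plain Ruby. This is an illustrative toy, not the gem's actual implementation; the class name `ResponseWrapperSketch` and its body are made up for this example:

```ruby
class ResponseWrapperSketch
  attr_reader :body

  def initialize(body)
    @body = body
  end

  # Any method the wrapper itself does not define is forwarded to the
  # parsed response body, so `response.first` behaves like `response.body.first`.
  def method_missing(name, *args, &block)
    @body.respond_to?(name) ? @body.public_send(name, *args, &block) : super
  end

  def respond_to_missing?(name, include_private = false)
    @body.respond_to?(name) || super
  end
end

response = ResponseWrapperSketch.new([{ "name" => "master" }, { "name" => "dev" }])
puts response.size          # => 2 (delegated to the Array body)
puts response.first["name"] # => "master"
```

The same pattern explains why a response holding a collection can be iterated directly: enumeration methods fall through to the underlying array.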
#### 1.4.2 Response Headers
Each response comes packaged with methods allowing for inspection of HTTP start line and headers. For example, to check for rate limits and status codes do:
```ruby
response = Github::Client::Repos.branches 'peter-murach', 'github'
response.headers.ratelimit_limit # "5000"
response.headers.ratelimit_remaining # "4999"
response.headers.status # "200"
response.headers.content_type # "application/json; charset=utf-8"
response.headers.etag # "\"2c5dfc54b3fe498779ef3a9ada9a0af9\""
response.headers.cache_control # "public, max-age=60, s-maxage=60"
```
#### 1.4.3 Response Success
If you want to verify if the response was success, namely, that the `200` code was returned call the `success?` like so:
```ruby
response = Github::Client::Repos.branches 'peter-murach', 'github'
response.success? # => true
```
### 1.5 Request Headers
It is possible to specify additional header information which will be added to the final request.
For example, to set `etag` and `X-Poll_Interval` headers, use the `:headers` hash key inside the `:options` hash like in the following:
```ruby
events = Github::Client::Activity::Events.new
events.public headers: {
'X-Poll-Interval': 60,
'ETag': "a18c3bded88eb5dbb5c849a489412bf3"
}
```
#### 1.5.1 Media Types
In order to set custom media types for a request use the accept header. By using the `:accept` key you can determine media type like in the example:
```ruby
issues = Github::Client::Issues.new
issues.get 'peter-murach', 'github', 108, accept: 'application/vnd.github.raw'
```
## 2 Configuration
The **github_api** provides ability to specify global configuration options. These options will be available to all api calls.
### 2.1 Basic
The configuration options can be set by using the `configure` helper
```ruby
Github.configure do |c|
c.basic_auth = "login:password"
  c.adapter = :typhoeus
c.user = 'peter-murach'
c.repo = 'finite_machine'
end
```
Alternatively, you can configure the settings by passing a block to an instance like:
```ruby
Github.new do |c|
c.endpoint = 'https://github.company.com/api/v3'
c.site = 'https://github.company.com'
end
```
or simply by passing hash of options to an instance like so
```ruby
github = Github.new basic_auth: 'login:password',
                    adapter: :typhoeus,
user: 'peter-murach',
repo: 'finite_machine'
```
The following is the full list of available configuration options:
```ruby
adapter # Http client used for performing requests. Default :net_http
auto_pagination # Automatically traverse requests page links. Default false
basic_auth # Basic authentication in form login:password.
client_id # Oauth client id.
client_secret # Oauth client secret.
connection_options # Hash of connection options.
endpoint # Enterprise API endpoint. Default: 'https://api.github.com'
oauth_token # Oauth authorization token.
org # Global organization used in requests if none provided
per_page # Number of items per page. Max of 100. Default 30.
repo # Global repository used in requests in none provided
site # enterprise API web endpoint
ssl # SSL settings in hash form.
user # Global user used for requests if none provided
user_agent # Custom user agent name. Default 'Github API Ruby Gem'
```
### 2.2 Advanced
The **github_api** will use the default middleware stack which is exposed by calling `stack` on a client instance. However, this stack can be freely modified with methods such as `insert`, `insert_after`, `delete` and `swap`. For instance, to add your `CustomMiddleware` do:
```ruby
Github.configure do |c|
c.stack.insert_after Github::Response::Helpers, CustomMiddleware
end
```
Furthermore, you can build your entire custom stack and specify other connection options such as `adapter` by doing:
```ruby
Github.new do |c|
c.adapter :excon
c.stack do |builder|
builder.use Github::Response::Helpers
builder.use Github::Response::Jsonize
end
end
```
### 2.3 SSL
By default requests over SSL are set to OpenSSL::SSL::VERIFY_PEER. However, you can turn off peer verification by
```ruby
github = Github.new ssl: { verify: false }
```
If your client fails to find CA certs, you can pass other SSL options to specify exactly how the information is sourced
```ruby
ssl: {
  client_cert: "/usr/local/www.example.com/client_cert.pem",
  client_key: "/usr/local/www.example.com/client_key.pem",
  ca_file: "example.com.cert",
  ca_path: "/etc/ssl/"
}
```
For instance, download CA root certificates from Mozilla [cacert](http://curl.haxx.se/ca/cacert.pem) and point ca_file at your certificate bundle location. This will allow the client to verify the github.com ssl certificate as authentic.
### 2.4 Caching
Caching is supported through the [`faraday-http-cache` gem](https://github.com/plataformatec/faraday-http-cache).
Add the gem to your Gemfile:
```ruby
gem 'faraday-http-cache'
```
You can now configure cache parameters as follows
```ruby
Github.configure do |config|
config.stack do |builder|
builder.use Faraday::HttpCache, store: Rails.cache
end
end
```
More details on the available options can be found in the gem's own documentation: https://github.com/plataformatec/faraday-http-cache#faraday-http-cache
## 3 Authentication
### 3.1 Basic
To start making requests as authenticated user you can use your GitHub username and password like so
```ruby
Github.new basic_auth: 'login:password'
```
Though this method is convenient you should strongly consider using `OAuth` for improved security reasons.
### 3.2 Authorizations API
#### 3.2.1 For an User
To create an access token through the GitHub Authorizations API, you are required to pass your basic credentials and the scopes you wish to have for the authentication token.
```ruby
github = Github.new basic_auth: 'login:password'
github.oauth.create scopes: ['repo']
```
You can add more than one scope from the `user`, `public_repo`, `repo`, `gist` or leave the scopes parameter out, in which case, the default read-only access will be assumed (includes public user profile info, public repo info, and gists).
#### 3.2.2 For an App
Furthermore, to create an auth token for an application you need to pass the `:app` argument together with the `:client_id` and `:client_secret` parameters.
```ruby
github = Github.new basic_auth: 'login:password'
github.oauth.app.create 'client-id', scopes: ['repo']
```
In order to revoke auth token(s) for an application you must use basic authentication with `client_id` as login and `client_secret` as password.
```ruby
github = Github.new basic_auth: "client_id:client_secret"
github.oauth.app.delete 'client-id'
```
Revoke a specific app token.
```ruby
github.oauth.app.delete 'client-id', 'access-token'
```
### 3.3 Scopes
You can check OAuth scopes you have by:
```ruby
github = Github.new oauth_token: 'token'
github.scopes.list # => ['repo']
```
To list the scopes that the particular GitHub API action checks for do:
```ruby
repos = Github::Client::Repos.new
response = repos.list user: 'peter-murach'
response.headers.accepted_oauth_scopes # => ['delete_repo', 'repo', 'public_repo']
```
To understand what each scope means refer to [documentation](http://developer.github.com/v3/oauth/#scopes)
### 3.4 Application OAuth
In order to authenticate your app through OAuth2 on GitHub you need to
* Visit https://github.com/settings/applications/new and register your app.
You will need to be logged in to initially register the application.
* Authorize your credentials https://github.com/login/oauth/authorize
You can use convenience methods to help you achieve this using **GithubAPI** gem:
```ruby
github = Github.new client_id: '...', client_secret: '...'
github.authorize_url redirect_uri: 'http://localhost', scope: 'repo'
# => "https://github.com/login/oauth/authorize?scope=repo&response_type=code&client_id='...'&redirect_uri=http%3A%2F%2Flocalhost"
```
After you get your authorization code, call to receive your access_token
```ruby
token = github.get_token( authorization_code )
```
Once you have your access token, configure your github instance following instructions under Configuration.
**Note**: If you are working locally (i.e. your app URL and callback URL are localhost), do not specify a `:redirect_uri` otherwise you will get a `redirect_uri_mismatch` error.
### 3.5 Two-Factor
In order to use [Two-Factor](https://help.github.com/articles/about-two-factor-authentication) authentication you need to provide the `X-GitHub-OTP: required; :2fa-type` header.
You can add headers during initialization:
```ruby
Github.new do |config|
config.basic_auth = "user:password"
config.connection_options = {headers: {"X-GitHub-OTP" => '2fa token'}}
end
```
or per request:
```ruby
github = Github.new basic_auth: 'login:password'
github.oauth.create scopes: ["public_repo"],
headers: {"X-GitHub-OTP" => "2fa token"}
```
## 4 Pagination
Any request that returns multiple items will be paginated to 30 items by default. You can specify custom `page` and `per_page` query parameters to alter default behavior. For instance:
```ruby
repos = Github::Client::Repos.new
response = repos.list user: 'wycats', per_page: 10, page: 5
```
Then you can query the pagination information included in the link header by:
```ruby
response.links.first # Shows the URL of the first page of results.
response.links.next # Shows the URL of the immediate next page of results.
response.links.prev # Shows the URL of the immediate previous page of results.
response.links.last # Shows the URL of the last page of results.
```
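Under the hood these helpers are built from the `Link` response header. A minimal parser for that header can be sketched in plain Ruby; the `parse_link_header` helper and the sample URLs are made up for this example:

```ruby
# Parses '<url1>; rel="next", <url2>; rel="last"' into { "next" => url1, ... }.
def parse_link_header(header)
  header.split(",").each_with_object({}) do |part, rels|
    url_part, rel_part = part.split(";").map(&:strip)
    url = url_part[/\A<(.+)>\z/, 1]
    rel = rel_part[/rel="(.+)"/, 1]
    rels[rel] = url if url && rel
  end
end

links = parse_link_header(
  '<https://api.github.com/user/repos?page=2>; rel="next", ' \
  '<https://api.github.com/user/repos?page=5>; rel="last"'
)
puts links["next"] # => "https://api.github.com/user/repos?page=2"
puts links["last"] # => "https://api.github.com/user/repos?page=5"
```

Real-world `Link` headers can also carry `rel="prev"` and `rel="first"` entries, which the same loop handles.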
In order to iterate through the entire result set page by page, you can use convenience methods:
```ruby
response.each_page do |page|
page.each do |repo|
puts repo.name
end
end
```
or use `has_next_page?` and `next_page` helper methods like in the following:
```ruby
while response.has_next_page?
  # ... process response ...
  response.next_page
end
```
One can also navigate straight to the specific page by:
```ruby
res.count_pages # Number of pages
res.page 5 # Requests given page if it exists, nil otherwise
res.first_page # Get first page
res.next_page # Get next page
res.prev_page # Get previous page
res.last_page # Get last page
```
### 4.1 Auto pagination
You can retrieve all pages in one invocation by passing the `auto_pagination` option like so:
```ruby
github = Github.new auto_pagination: true
```
Depending at what stage you pass the `auto_pagination` it will affect all or only a single request. For example, in order to auto paginate all Repository API methods do:
```ruby
Github::Repos.new auto_pagination: true
```
However, to only auto paginate results for a single request do:
```ruby
Github::Repos.new.list user: '...', auto_pagination: true
```
## 5 Error Handling
The generic error class `Github::Error::GithubError` will handle both the client (`Github::Error::ClientError`) and service (`Github::Error::ServiceError`) side errors. For instance in your code you can catch errors like
```ruby
begin
# Do something with github_api gem
rescue Github::Error::GithubError => e
puts e.message
if e.is_a? Github::Error::ServiceError
# handle GitHub service errors such as 404
elsif e.is_a? Github::Error::ClientError
    # handle client errors, i.e. a missing required parameter in the request
end
end
```
## 6 Examples
### 6.1 Rails
A Rails controller that allows a user to authorize their GitHub account and then performs a request.
```ruby
class GithubController < ApplicationController
def authorize
address = github.authorize_url redirect_uri: 'http://...', scope: 'repo'
redirect_to address
end
def callback
authorization_code = params[:code]
access_token = github.get_token authorization_code
access_token.token # => returns token value
end
private
def github
@github ||= Github.new client_id: '...', client_secret: '...'
end
end
```
### 6.2 Manipulating Files
In order to be able to create/update/remove files you need to use Contents API like so:
```ruby
contents = Github::Client::Repos::Contents.new oauth_token: '...'
```
Having instantiated the contents, to create a file do:
```ruby
contents.create 'username', 'repo_name', 'full_path_to/file.ext',
path: 'full_path_to/file.ext',
message: 'Your commit message',
content: 'The contents of your file'
```
Content is all Base64 encoded to/from the API, and when you create a file it encodes it automatically for you.
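As an illustration of what happens under the hood (not code you need to write yourself), here is that Base64 round-trip using only Ruby's standard library:

```ruby
require "base64"

content = "puts 'hello ruby'"
encoded = Base64.strict_encode64(content) # what is sent over the wire
decoded = Base64.decode64(encoded)        # decoding what the API hands back

puts encoded            # => "cHV0cyAnaGVsbG8gcnVieSc="
puts decoded == content # => true
```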
To update a file, first you need to find the file so you can get the SHA you're updating off of:
```ruby
file = contents.find path: 'full_path_to/file.ext'
```
Then update the file just like you do with creating:
```ruby
contents.update 'username', 'repo_name', 'full_path_to/file.ext',
  path: 'full_path_to/file.ext',
  message: 'Your commit message',
  content: 'The contents to be updated',
sha: file.sha
```
Finally to remove a file, find the file so you can get the SHA you're removing:
```ruby
file = contents.find path: 'full_path_to/file.ext'
```
Then delete the file like so:
```ruby
contents.delete 'username', 'repo_name', 'full_path_to/file.ext',
path: 'full_path_to/file.ext',
message: 'Your Commit Message',
sha: file.sha
```
## 7 Testing
The test suite is split into two groups, `live` and `mock`.
The `live` tests are the ones in the `features` folder; they exercise the GitHub API by making live requests, which are then cached with VCR in a directory named `features/cassettes`. For details on how to get set up, please navigate to the `features` folder.
The `mock` tests are in the `spec` directory and their primary concern is to test the gem internals without the hindrance of external calls.
## Development
Questions or problems? Please post them on the [issue tracker](https://github.com/peter-murach/github/issues). You can contribute changes by forking the project and submitting a pull request. You can ensure the tests are passing by running `bundle` and `rake`.
## Copyright
Copyright (c) 2011-2014 Piotr Murach. See LICENSE.txt for further details.
# Contributors Guide
## Bug Reports
Please [open an issue](https://github.com/commercialhaskell/stack/issues/new)
and use the provided template to include all necessary details.
The more detailed your report, the faster it can be resolved, and the more likely
it is to be resolved in the right way. Once your bug has been resolved, the responsible
person will tag the issue as _Needs confirmation_ and assign the issue back to
you. Once you have tested and confirmed that the issue is resolved, close the
issue. If you are not a member of the project, you will be asked for
confirmation and we will close it.
## Documentation
If you would like to help with documentation, please note that for most cases
the Wiki has been deprecated in favor of markdown files placed in a new `/doc`
subdirectory of the repository itself. Please submit a
[pull request](https://help.github.com/articles/using-pull-requests/) with your
changes/additions.
The documentation is rendered on [haskellstack.org](http://haskellstack.org) by
readthedocs.org using Sphinx and CommonMark. Since links and formatting vary
from GFM, please check the documentation there before submitting a PR to fix
those. In particular, links to other documentation files intentionally have
`.html` extensions instead of `.md`, unfortunately (see
[#1506](https://github.com/commercialhaskell/stack/issues/1506) for details).
If your changes move or rename files, or subsume Wiki content, please continue
to leave a file/page in the old location temporarily, in addition to the new
location. This will allow users time to update any shared links to the old
location. Please also update any links in other files, or on the Wiki, to point
to the new file location.
## Code
If you would like to contribute code to fix a bug, add a new feature, or
otherwise improve `stack`, pull requests are most welcome. It's a good idea to
[submit an issue](https://github.com/commercialhaskell/stack/issues/new) to
discuss the change before plowing into writing code.
If you'd like to help out but aren't sure what to work on, look for issues with
the
[awaiting pr](https://github.com/commercialhaskell/stack/issues?q=is%3Aopen+is%3Aissue+label%3A%22awaiting+pr%22)
label. Issues that are suitable for newcomers to the codebase have the
[newcomer](https://github.com/commercialhaskell/stack/issues?q=is%3Aopen+is%3Aissue+label%3A%22awaiting+pr%22+label%3Anewcomer)
label. Best to post a comment to the issue before you start work, in case anyone
has already started.
Please include a
[ChangeLog](https://github.com/commercialhaskell/stack/blob/master/ChangeLog.md)
entry and
[documentation](https://github.com/commercialhaskell/stack/tree/master/doc/)
updates with your pull request.
## Code Quality
The Stack projects uses [HLint](https://github.com/ndmitchell/hlint) as a code
quality tool.
Note that stack contributors need not dogmatically follow the suggested hints
but are encouraged to debate their usefulness. If you find a hint is not useful
and detracts from readability, consider marking it in the [configuration
file](https://github.com/commercialhaskell/stack/blob/master/.hlint.yaml) to
be ignored. Please refer to the [HLint manual](https://github.com/ndmitchell/hlint#readme)
for configuration syntax.
Quoting [@mgsloan](https://github.com/commercialhaskell/stack/pulls?utf8=%E2%9C%93&q=is%3Apr%20author%3Amgsloan):
> We are optimizing for code clarity, not code concision or what HLint thinks.
You can install HLint with stack. You might want to install it in the global
project in case you run into dependency conflicts. HLint can report hints in
your favourite text editor. Refer to the HLint repository for more details.
To install:
```
stack install hlint
```
Once installed, you can check your changes with:
```
hlint src/ test/ --cpp-simple
```
Where `--cpp-simple` strips `#` lines.
# topham-controller-release
A BOSH release packaging [topham-controller](https://github.com/pivotal-cf-experimental/topham-controller), an OSBAPI-compliant services controller.
## Purpose
A spike exploring a possible implementation of a 'services controller', i.e. a server which understands OSBAPI requests and delegates them to one or more service brokers, while storing the state those brokers emit. Essentially, topham-controller and its CLI replace the service provisioning and management functions in Cloud Controller.
## Usage
Deploy as a BOSH release, ensuring the details of the downstream broker have been set in the manifest ([example](https://github.com/pivotal-cf-experimental/topham-controller-release/blob/master/manifest.yml)).
To connect and provision services, a CLI is provided: see our forked [eden-cli](https://github.com/pivotal-cf-experimental/eden/tree/spike-services-controller-client).
## Caveats ⚠️
**Do not use in production**. Please see the Caveats section [here](https://github.com/pivotal-cf-experimental/topham-controller/blob/master/README.md).
---
layout: archive
title: ""
permalink: /cv/
author_profile: true
redirect_from:
- /resume
---
Education
======
* Ph.D in Mathematical Sciences, Seoul National University, February 2017
* Research area: Cryptography
* Advisor: [Dr. Jung Hee Cheon](http://www.math.snu.ac.kr/~jhcheon/xe2/)
* M.S. in Mathematical Sciences, Seoul National University, February 2012
* B.S. in Mathematical Education, Seoul National University, February 2010
Work experience
======
* August 2020 - present: Assistant Professor
* Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology (UNIST)
* Graduate School of Artificial Intelligence, UNIST
* August 2020 - present: Affiliate Assistant Professor
* Graduate School of Artificial Intelligence, UNIST
* May 2018 - July 2020: Assistant Professor
* School of Biomedical Informatics, University of Texas, Health Science Center at Houston
* March 2017 - April 2018: Postdoctoral Researcher
* Division of Biomedical Informatics, University of California, San Diego
* Supervisor: Dr. Xiaoqian Jiang
* Spring 2015: Research Intern
* Microsoft Research
* Supervisor: [Dr. Kristin Lauter](https://www.microsoft.com/en-us/research/people/klauter/?from=http%3A%2F%2Fresearch.microsoft.com%2F%7Eklauter%2F)
[[Full CV]](https://k-miran.github.io/files/cv.pdf)
======
# Table of Contents
- [Requesting RAs](#requesting-research-assistance)
- [Hiring RAs](#ra-hiring)
- [Onboarding RAs](#ra-onboarding)
## Requesting Research Assistance
Research Assistants are integral members of many research projects at the lab. Generally there are 2 processes for having an RA work on your project: hiring a [new RA](#new-ra) or working with an [existing RA](#existing-ra) already part of the lab. If you have a need for an RA, reach out to the ROM to discuss which process makes sense for your project. Helpful information to send over includes:
- Project title and scope e.g. What is the project?
- Clearly defined task(s) for RA e.g. What task(s) will the RA work on?
- Expected timeline/duration and hourly commitment per week e.g. How long do you expect the task(s) to take?
## RA Hiring
1. **Request** - Research leads makes request for RA. See section on [requesting RA](#requesting-research-assistance).
### New RA
2. **Written Application** - Applicants complete the [application](https://upenn.co1.qualtrics.com/jfe/form/SV_00AS9HoBXLrqWJ8). If applicant demonstrates interest in lab and has requisite skills on paper, ROM reaches out to schedule a chat/informal interview.
3. **Chat** - Led by ROM; talk to applicant about their interest in lab and research and their skillset. Answer any questions related to broad project goals. If applicant has genuine interest in lab / research and has needed skills, match applicant with relevant project lead researcher for interview.
4. **Interview** - Led by researcher; interview can be based on specific technical skills needed for the role (anything from social science coding to app to data analysis), fit with project, or whatever the project lead deems necessary. Project lead communicates final decision to ROM.
5. **Final Confirmation** - ROM creates hiring paperwork (contract + payroll packet) and sends over to applicant, cc'ing the project lead. Send to PEFS when complete. PEFS must input their information into Workday for approval before RA can start work.
### Existing RA
2. **Research Lead Approval** - ROM approaches other research lead(s) to ask permission if RA *already within the lab* can work on this project. Research lead says yes/no, provides stipulations (e.g. they must commit to X hours/wk on this project)
3. **RA Approval** - ROM approaches RA with new project and stipulations.
4. **Final Confirmation** - ROM sends work plan email to both project leads and RA confirming the new project, the expected duration, and the weekly split between tasks.
## RA Onboarding
1. **Workday** - University has official onboarding in Workday that must be completed. This includes going in-person to verify employment authorization documents.
2. **General Lab** - Once that has been completed, ROM can do the following:
- Invite them to necessary systems, preferably using university address. This includes Slack, Github, and Drive. AWS should be handled by RDE.
- Meet/Zoom with RA and review [onboarding presentation](https://docs.google.com/presentation/d/1pZndxAERCmxl2aCPgXCvnMrBdYnasq5yx3cMXUe8808/edit#slide=id.p1). The goal of this space is to review the org structure and members of lab, explicitly review lab-wide expectations of RAs and clearly outline next steps. Next steps from this can include:
- Meet with Research Lead to review tasks and schedule work hours/meetings
- Meet with RDE to review coding style, use of lab resources, AWS set up
- Visit the lab in-person
- Complete RA form (not linked for security reasons); includes sending info for website, getting swipe access to building and reflecting on goals
3. **Research Project** - Research lead can complete onboarding as they see fit.
---
title: Manage delegate permissions for multiple item types in Outlook for Mac
ms.author: v-smandalika
author: v-smandalika
manager: dansimp
ms.date: 12/01/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "3800004"
- "7302"
ms.openlocfilehash: d3b5913997f7d94b94cd1625dd699fa1e626acb3
ms.sourcegitcommit: ab75f66355116e995b3cb5505465b31989339e28
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 08/13/2021
ms.locfileid: "58329727"
---
# <a name="manage-delegate-permissions-for-multiple-item-types-in-outlook-for-mac"></a>Manage delegate permissions for multiple item types in Outlook for Mac
1. On the **Tools** menu, select **Accounts**, and then select the account whose permissions you want to change.
2. Click **Advanced**, and then click **Delegates**.
3. Under **Delegates**, which lists the delegates who can act on your behalf, select a delegate.
4. Click the **Action** button, click **Set Permissions**, and make the desired changes.
**Note:** If you set the permission level to **None**, the delegate remains in the list, which lets you restore the permissions later.
| 38.645161 | 152 | 0.783806 | ces_Latn | 0.995312 |
# Semidefinite Programming
This package provides a Julia interface for low-level modeling of semidefinite programming problems and for solving semidefinite programs with solvers such as SDPA and CSDP.
Maintenance status: Currently no new features are being developed for this package. Bugs will be fixed and this package will be kept up to date with new Julia releases.
# Introduction
Consider a semidefinite program of the form
max tr(C X) subject to X is positive semidefinite
tr(A_i X) = b_i for i = 1, ...,m
Here `C`, `X`, and `A_1`, `...`, `A_m` are symmetric block matrices (all assumed to have identical size and block structure), and `b_1`, `...`, `b_m` are scalars.
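The same program can be restated compactly in standard notation (this is just the problem above, not an addition to the package's API):

```latex
\begin{aligned}
\text{maximize} \quad & \operatorname{tr}(C X) \\
\text{subject to} \quad & \operatorname{tr}(A_i X) = b_i, \qquad i = 1, \dots, m, \\
& X \succeq 0 .
\end{aligned}
```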
This problem can be modeled by constructing a sparse semidefinite program with
```julia
using SemidefiniteProgramming
sdp = SparseSDP(maximize=true)
```
and then setting the nonzero scalars and the nonzero entries of the matrices. The most basic way to do this is as follows: For the scalars `b_i` use
```julia
setrhs!(sdp, i, value)
```
For the entries of the objective matrix `C` use
```julia
setobj!(sdp, blockindex, rowindex, columnindex, value)
```
For the constraint matrices `A_i` use
```julia
setcon!(sdp, i, blockindex, rowindex, columnindex, value)
```
Then we solve the program with
```julia
sol = solve(sdp, solver)
```
and print the (primal) objective value:
```julia
println(obj(sol))
```
Notice that the number of constraints, the number of blocks, and the blocksizes do not need to be specified; they will be determined automatically based on the entries you have set. Of course all the matrices involved are assumed to have identical block structure. The indices of the constraints, blocks, and matrices do not need to be integers; you can use any Julia object here. When storing a SparseSDP in, for instance, the SDPA-sparse format, the indices will be converted to integers automatically.
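As an illustrative sketch (the labels below are hypothetical, not taken from the package's documentation), arbitrary Julia objects can serve as constraint and block indices:

```julia
using SemidefiniteProgramming

sdp = SparseSDP(maximize=true)

# A symbol as the block index and a string as the constraint index;
# both are mapped to integers when the problem is written out.
setobj!(sdp, :blockA, 1, 1, 1.0)
setrhs!(sdp, "trace constraint", 1.0)
setcon!(sdp, "trace constraint", :blockA, 1, 1, 1.0)
```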
# Example
Consider the program above with `b1 = 10`, `b2 = 20`,
```
C = [1 0 0 0;
0 2 0 0;
0 0 3 0;
0 0 0 4],
```
```
A1 = [1 0 0 0;
0 1 0 0;
0 0 0 0;
0 0 0 0],
```
and
```
A2 = [0 0 0 0;
      0 1 0 0;
      0 0 5 2;
      0 0 2 6]
```
To solve this we use
```julia
using SemidefiniteProgramming
sdp = SparseSDP(maximize=true)
setobj!(sdp, 1, 1, 1, 1.0)
setobj!(sdp, 2, 1, 1, 2.0)
setobj!(sdp, 3, 1, 1, 3.0)
setobj!(sdp, 3, 2, 2, 4.0)
setrhs!(sdp, 1, 10.0)
setcon!(sdp, 1, 1, 1, 1, 1.0)
setcon!(sdp, 1, 2, 1, 1, 1.0)
setrhs!(sdp, 2, 20.0)
setcon!(sdp, 2, 2, 1, 1, 1.0)
setcon!(sdp, 2, 3, 1, 1, 5.0)
setcon!(sdp, 2, 3, 1, 2, 2.0)
setcon!(sdp, 2, 3, 2, 1, 2.0)
setcon!(sdp, 2, 3, 2, 2, 6.0)
println(obj(solve(sdp, CSDP())))
```
# Solvers
To use a solver, construct an immutable solver object with `CSDP()`, `SDPA()`, etc., and supply it as the second argument to the `solve` function. The solver objects support the optional named arguments
- `verbose` (print solver output to stdout)
- `executable` (path to the solver executable)
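For example (the path below is a placeholder; point it at wherever your CSDP binary actually lives):

```julia
solver = CSDP(verbose=true, executable="/usr/local/bin/csdp")
sol = solve(sdp, solver)
```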
## CSDP
To use the CSDP solver you need to install [CSDP](https://projects.coin-or.org/Csdp) and make sure that the CSDP binary is in your path. On Debian/Ubuntu you can do this by installing the `coinor-csdp` package. On Fedora it is the `csdp` package.
## SDPA
To use one of the SDPA solvers install [SDPA](http://sdpa.sourceforge.net/) and make sure the executable is in your path. On Debian/Ubuntu you can do this by installing the package `sdpa` (this package only contains the standard SDPA solver). Use SDPA for the standard SDPA solver and SDPAQD or SDPAGMP for the high precision solvers.
# SparseSDPSolution objects
Having solved a semidefinite program with
```julia
sol = solve(sdp, CSDP())
```
you can extract the primal and dual objective values with `obj(sol)` and `dualobj(sol)`. To extract the values of the optimal primal variables (the matrix `X` in the notation above) use
```julia
primalmatrix(sol)[blockindex][rowindex, columnindex]
```
Variable extraction is currently only supported with the CSDP solver.
---
title: git-commit-all
---
# NAME
git-commit-all - create a commit including all new, removed, or modified files within the working tree
# SYNOPSIS
**git** **commit-all** [**-d** | **--debug**] [**-h** | **--help**] [**-q** | **--quiet**]
# DESCRIPTION
This command updates the index with all content found within the working tree, excluding files matched by a
**.gitignore** file, and then records a commit.
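A sketch of what such a command is assumed to do under the hood (an illustration, not numonic's actual implementation), demonstrated in a throwaway repository:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"  # local identity so the commit works anywhere
git config user.name "Demo"
echo "hello" > new-file.txt               # a new, untracked file
echo "build/" > .gitignore
mkdir -p build && echo "junk" > build/out # ignored content is left untracked
git add --all                             # stage all new/modified/removed files
git commit -q -m "commit all tracked changes"
git ls-files                              # the ignored build/out is not listed
```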
# OPTIONS
## FLAGS
### -d, --debug
print the commands as they are executed (set -x)
### -h, --help
print this help information
### -q, --quiet
suppress any output to stdout (any errors will still be printed)
# EXAMPLES
## git commit-all
track all files within the repository, excluding those defined in the .gitignore
## git commit-all -d
## git commit-all --debug
print the underlying git commands as they are executed
# SEE ALSO
**git-commit**(1), **git-add**(1), **gitignore**(5)
---
title: "Azure DevOps Pipelines - Build and Push a Docker image to AWS ECR"
date: "2021-04-30"
---

## YAML for pipeline
**.azure/azure-pipelines.yml**
This file is our main pipeline and only triggers on the master branch.
It uses the `templates/build.yml` file as a template and will use the stages defined in that file.
```
trigger:
branches:
include:
- master
pr: none
stages:
- template: templates/build.yml
```
**.azure/templates/build.yml**
This template defines a stage that consists of a single job made up of two steps. The first task essentially runs a `docker build` to create our image from the Dockerfile in the root directory of our repo, tagging it with the name defined in the variable `DOCKER_REPOSITORY_NAME` and with a version equal to the pipeline build ID `$(Build.BuildId)`.
The second task pushes this image to ECR.
```
stages:
- stage: Docker
displayName: Build & Push Docker image to AWS ECR
jobs:
- job: Build_and_Push
displayName: Build & Push Docker image
pool:
vmImage: ubuntu-latest
steps:
- task: Docker@2
displayName: Build an image
inputs:
command: build
dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
buildContext: '$(Build.SourcesDirectory)'
repository: $(DOCKER_REPOSITORY_NAME)
- task: ECRPushImage@1
inputs:
awsCredentials: 'AWS_ECR'
regionName: $(AWS_REGION)
imageSource: 'imagename'
sourceImageName: $(DOCKER_REPOSITORY_NAME)
sourceImageTag: $(Build.BuildId)
pushTag: latest
repositoryName: $(DOCKER_REPOSITORY_NAME)
```
Commit and push these files to your repo’s master branch, and we will create the pipeline in Azure DevOps.
## Create a project and pipeline in Azure DevOps
Create a new project in Azure Devops. Once created and in the project, click on “Pipelines” and then “new pipeline”.

After clicking “new pipeline”, select GitHub and then choose your repo you would like to create your pipeline for.
In the third step, “Configure your pipeline”, select “Existing Azure Pipelines YAML file”.

Then choose the branch and path for the Azure Pipelines YAML file that you created in the previous step. (master branch, and .azure/azure-pipelines.yml)
## AWS — IAM user and ECR repo
We will need to set up an IAM user with access credentials that Azure DevOps can use to authenticate to ECR, and an ECR repo that the pipeline can push images to.

## Create AWS service connection
You will need to install the AWS Toolkit into your Azure DevOps Account https://marketplace.visualstudio.com/azuredevops
Once installed, go into “project settings” (Gear icon in the bottom left corner) and “Service Connections”. Then select “AWS”, stick in your Access Key and Secret Access Key of your IAM user that has Full ECR access. I named my connection “aws_ecr” which is used in the `awsCredentials` input for the `ECRPushImage` task. (note: this doesn’t seem to be case sensitive, I named it “aws_ecr” in the project service connection, but in the YAML file referred to it as `AWS_ECR`).
## Setup environment variables
Edit your pipeline, which will bring up the YAML script, then click on “variables”.
Add DOCKER_REPOSITORY_NAME with the name of your ECR repo (node_app in my case).
Add AWS_REGION with the region that you are using; I am using ap-southeast-2 for the Sydney region.

## Run the Pipeline
If you’re in the pipeline in the Azure DevOps dashboard, you can manually trigger a run by clicking the blue “Run Pipeline” button, otherwise pushing a commit to the master branch will also trigger it.

Bonza 👌 — Hope this helped you.
# riotjsbase
A base riot.js + Sass project.
# handler-discord.js
### By: ArviX#8443
Base Bot for discord.js
## Init
- You just need to change the token and owner ID in the config.json file
- You can change the prefix and the base Footer in the main.js file
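A hypothetical shape for that config.json (the key names here are assumptions; check the repository's own config.json for the actual ones):

```json
{
  "token": "YOUR_BOT_TOKEN",
  "owner": "123456789012345678"
}
```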
## Changelogs
[See CHANGELOGS.md file](CHANGELOGS.md)
<!-- Badges -->
[version]: https://img.shields.io/github/package-json/v/LordAlex2015/handler-discord.js
[license-src]: https://img.shields.io/github/license/LordAlex2015/handler-discord.js
---
title: C28232 | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-code-analysis
ms.topic: reference
f1_keywords:
- C28232
helpviewer_keywords:
- C28232
ms.assetid: c616b978-02fa-4a0b-8532-d4249369bca1
caps.latest.revision: 4
author: corob-msft
ms.author: corob
manager: jillfra
ms.openlocfilehash: e83d5723b24c7e3e92088216e4b128454a652120
ms.sourcegitcommit: 68f893f6e472df46f323db34a13a7034dccad25a
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 02/15/2020
ms.locfileid: "77271438"
---
# <a name="c28232"></a>C28232
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
warning C28232: `_Pre_`, `_Post_`, or `_Deref_` was not applied to any annotation
This warning indicates that a `_Pre_`, `_Post_`, or `_Deref_` operator appears in an annotation expression without a subsequent functional annotation. The modifier is ignored, but this indicates that the annotation was written incorrectly.
# Wiki
A separate system for documentation, called Wiki, is built right into each
GitLab project. It is enabled by default on all new projects, and you can find
it under **Wiki** in your project.
Wikis are very convenient if you don't want to keep your documentation in your
repository, but you do want to keep it in the same project where your code
resides.
You can create Wiki pages in the web interface or
[locally using Git](#adding-and-editing-wiki-pages-locally) since every Wiki is
a separate Git repository.
>**Note:**
A [permission level][permissions] of **Guest** is needed to view a Wiki and
**Developer** is needed to create and edit Wiki pages.
## First time creating the Home page
The first time you visit a Wiki, you will be directed to create the Home page.
The Home page must be created first since it serves as the landing page
when viewing a Wiki. You only have to fill in the **Content** section and click
**Create page**. You can always edit it later, so go ahead and write a welcome
message.

## Creating a new wiki page
Create a new page by clicking the **New page** button that can be found
in all wiki pages. You will be asked to fill in the page name from which GitLab
will create the path to the page. You can specify a full path for the new file
and any missing directories will be created automatically.

Once you enter the page name, it's time to fill in its content. GitLab wikis
support Markdown, RDoc and AsciiDoc. For Markdown based pages, all the
[Markdown features](../../markdown.md) are supported and for links there is
some [wiki specific](../../markdown.md#wiki-specific-markdown) behavior.
>**Note:**
The wiki is based on a Git repository and contains only text files. Uploading
files via the web interface will upload them in GitLab itself, and they will
not be available if you clone the wiki repo locally.
In the web interface the commit message is optional, but the GitLab Wiki is
based on Git and needs a commit message, so one will be created for you if you
do not enter one.
When you're ready, click the **Create page** and the new page will be created.

## Editing a wiki page
To edit a page, simply click on the **Edit** button. From there on, you can
change its content. When done, click **Save changes** for the changes to take
effect.
## Deleting a wiki page
You can find the **Delete** button only when editing a page. Click on it and
confirm you want the page to be deleted.
## Moving a wiki page
You can move a wiki page from one directory to another by specifying the full
path in the wiki page title in the [edit](#editing-a-wiki-page) form.


In order to move a wiki page to the root directory, the wiki page title must
be preceded by the slash (`/`) character.
## Viewing a list of all created wiki pages
Every wiki has a sidebar from which a short list of the created pages can be
found. The list is ordered alphabetically.

If you have many pages, not all will be listed in the sidebar. Click on
**More pages** to see all of them.
## Viewing the history of a wiki page
The changes of a wiki page over time are recorded in the wiki's Git repository,
and you can view them by clicking the **Page history** button.
From the history page you can see the revision of the page (Git commit SHA), its
author, the commit message, when it was last updated and the page markup format.
To see what a previous version of the page looked like, click on a revision
number.

## Adding and editing wiki pages locally
Since wikis are based on Git repositories, you can clone them locally and edit
them like you would do with every other Git repository.
On the right sidebar, click on **Clone repository** and follow the on-screen
instructions.
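A sketch of that local workflow, using a local bare repository as a stand-in for the wiki's real clone URL:

```shell
set -e
work=$(mktemp -d)
# Stand-in for the wiki repository URL shown in the sidebar
# (normally something like https://gitlab.example.com/group/project.wiki.git).
git init -q --bare "$work/project.wiki.git"
git clone -q "$work/project.wiki.git" "$work/wiki" 2>/dev/null
cd "$work/wiki"
git config user.email "demo@example.com"
git config user.name "Demo"
# Wiki pages are plain text files; the Home page lives in home.md.
printf '# Home\n\nWelcome to the wiki.\n' > home.md
git add home.md
git commit -q -m "Add home page"
git push -q origin HEAD
```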
[permissions]: ../../permissions.md
---
id: getting-started
title: Getting Started
sidebar_label: Getting Started
---
import useBaseUrl from '@docusaurus/useBaseUrl';
This guide will walk you through setting up Mariana Trench on your machine and get you to find your first remote code execution vulnerability in a small sample app.
## Prerequisites
Mariana Trench requires a recent version of [Python](https://www.python.org/downloads/). On MacOS you can get a current version through [homebrew](https://brew.sh/):
```shell
$ brew install python3
```
On a Debian flavored Linux (Ubuntu, Mint, Debian), you can use `apt-get`:
```shell
$ sudo apt-get install python3 python3-pip python3-venv
```
This guide also assumes you have the [Android SDK](https://developer.android.com/studio) installed and an environment variable `$ANDROID_SDK` pointed to the location of the SDK.
For the rest of this guide, we assume that you are working inside of a [virtual environment](https://docs.python.org/3/tutorial/venv.html). You can set this up with
```shell
$ python3 -m venv ~/.venvs/mariana-trench
$ source ~/.venvs/mariana-trench/bin/activate
(mariana-trench)$
```
The name of the virtual environment in front of your shell prompt indicates that the virtual environment is active.
## Installing Mariana Trench
Inside your virtual environment installing Mariana Trench is as easy as running
```shell
(mariana-trench)$ pip install mariana-trench
```
## Running Mariana Trench
We'll use a small app that is part of our documentation. You can get it by running
```shell
(mariana-trench)$ git clone https://github.com/facebook/mariana-trench
(mariana-trench)$ cd mariana-trench/documentation/sample-app
```
We are now ready to run the analysis
```shell
(mariana-trench)$ mariana-trench \
--system-jar-configuration-path=$ANDROID_SDK/platforms/android-30/android.jar
--apk-path=sample-app-debug.apk \
--source-root-directory=app/src/main/java
# ...
INFO Analyzed 68886 models in 4.04s. Found 4 issues!
# ...
```
The analysis has found 4 issues in our sample app. The output of the analysis is a set of specifications for each method of the application.
## Post Processing
The specifications themselves are not meant to be read by humans. We need an additional processing step in order to make the results more presentable. We do this with [SAPP](https://github.com/facebook/sapp), which was installed for us from PyPI:
```shell
(mariana-trench)$ sapp --tool=mariana-trench analyze .
(mariana-trench)$ sapp --database-name=sapp.db server --source-directory=app/src/main/java
# ...
2021-05-12 12:27:22,867 [INFO] * Running on http://localhost:5000/ (Press CTRL+C to quit)
```
The last line of the output tells us that SAPP started a local webserver that lets us look at the results. Open the link and you will see the 4 issues found by the analysis.
## Exploring Results
Let's focus on the remote code execution issue found in the sample app. You can identify it by its issue code `1` (for all remote code executions) and the callable `void MainActivity.onCreate(Bundle)`. With only 4 issues to see, it's easy to identify the issue manually, but once more rules run, the filter functionality at the top right of the page comes in handy.
<img alt="Single Issue Display" src={useBaseUrl('img/issue.png')} />
The issue tells you that Mariana Trench found a remote code execution in `MainActivity.onCreate` where the data is coming from `Activity.getIntent` one call away, and flows into the constructor of `ProcessBuilder` 3 calls away. Click on "Traces" in the top right corner of the issue to see an example trace.
The trace surfaced by Mariana Trench consists of three parts.
The *source trace* represents where the data is coming from. In our example, the trace is very short: `Activity.getIntent` is called in `MainActivity.onCreate` directly.
<img alt="Trace Source" src={useBaseUrl('img/trace_source.png')} />
The *trace root* represents where the source trace meets the sink trace. In our example this is the activity's `onCreate` method.
<img alt="Trace Root" src={useBaseUrl('img/trace_root.png')} />
The final part of the trace is the *sink trace*: This is where the data from the source flows down into a sink. In our example from `onCreate`, to `onClick`, to `execute`, and finally into the constructor of `ProcessBuilder`.
<img alt="Trace Source" src={useBaseUrl('img/trace_sink.png')} />
## Configuring Mariana Trench
You might be asking yourself, "how does the tool know what is user controlled data, and what is a sink?". This guide is meant to quickly get you started on a small app. We did not cover how to configure Mariana Trench. You can read more about that in the [Configuration section](configuration).
---
title: 使用 Azure 资源管理器模板导入 SQL BACPAC 文件 | Microsoft Docs
description: 了解如何使用 SQL 数据库扩展,以便通过 Azure 资源管理器模板导入 SQL BACPAC 文件。
services: azure-resource-manager
documentationcenter: ''
author: mumian
manager: dougeby
editor: ''
ms.service: azure-resource-manager
ms.workload: multiple
ms.tgt_pltfrm: na
ms.devlang: na
ms.date: 04/08/2019
ms.topic: tutorial
ms.author: jgao
ms.openlocfilehash: 239bb77d486e8cb845ec439d84def5e34cf64348
ms.sourcegitcommit: aef6040b1321881a7eb21348b4fd5cd6a5a1e8d8
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/09/2019
ms.locfileid: "72170228"
---
# <a name="tutorial-import-sql-bacpac-files-with-azure-resource-manager-templates"></a>Tutorial: Import SQL BACPAC files with Azure Resource Manager templates
Learn how to use the Azure SQL Database extension to import a BACPAC file with an Azure Resource Manager template. Deployment artifacts include the main template file and any files needed to complete the deployment; a BACPAC file is such an artifact. In this tutorial, you create a template that deploys an Azure SQL server and a SQL database, and imports a BACPAC file. To learn how to deploy Azure virtual machine extensions with Azure Resource Manager templates, see [Tutorial: Deploy virtual machine extensions with Azure Resource Manager templates](./resource-manager-tutorial-deploy-vm-extensions.md).
This tutorial covers the following tasks:
> [!div class="checklist"]
> * Prepare a BACPAC file
> * Open a quickstart template
> * Edit the template
> * Deploy the template
> * Verify the deployment
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## <a name="prerequisites"></a>Prerequisites
To complete this article, you need:
* [Visual Studio Code](https://code.visualstudio.com/) with the Resource Manager Tools extension. See [Install the extension](./resource-manager-quickstart-create-templates-use-visual-studio-code.md#prerequisites).
* To improve security, use a generated password for the SQL Server administrator account. Here is an example of generating a password:
```azurecli-interactive
openssl rand -base64 32
```
Azure Key Vault is designed to safeguard cryptographic keys and other secrets. For more information, see [Tutorial: Integrate Azure Key Vault in Resource Manager template deployment](./resource-manager-tutorial-use-key-vault.md). We also recommend updating your password every three months.
## <a name="prepare-a-bacpac-file"></a>Prepare a BACPAC file
A BACPAC file is shared on [GitHub](https://github.com/Azure/azure-docs-json-samples/raw/master/tutorial-sql-extension/SQLDatabaseExtension.bacpac). To create your own, see [Export an Azure SQL database to a BACPAC file](../sql-database/sql-database-export.md). If you choose to publish the file to your own location, you must update the template later in this tutorial.
## <a name="open-a-quickstart-template"></a>Open a quickstart template
The template used in this tutorial is stored on [GitHub](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/tutorial-sql-extension/azuredeploy.json).
1. In Visual Studio Code, select **File** > **Open File**.
2. In **File name**, paste the following URL:
```url
https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/tutorial-sql-extension/azuredeploy.json
```
3. Select **Open** to open the file.
There are three resources defined in this template:
* `Microsoft.Sql/servers`. See [template reference](https://docs.microsoft.com/azure/templates/microsoft.sql/servers).
* `Microsoft.SQL/servers/securityAlertPolicies`. See [template reference](https://docs.microsoft.com/azure/templates/microsoft.sql/servers/securityalertpolicies).
* `Microsoft.Sql/servers/databases`. See [template reference](https://docs.microsoft.com/azure/templates/microsoft.sql/servers/databases).
It's helpful to have a basic understanding of the template before customizing it.
4. Select **File** > **Save As** to save a copy of the file to your local computer as **azuredeploy.json**.
## <a name="edit-the-template"></a>Edit the template
Add two more resources to the template.
* To allow the SQL Database extension to import the BACPAC file, you need to allow access from Azure services. Add the following JSON to the SQL server definition:
```json
{
"type": "firewallrules",
"name": "AllowAllAzureIps",
"location": "[parameters('location')]",
"apiVersion": "2015-05-01-preview",
"dependsOn": [
"[variables('databaseServerName')]"
],
"properties": {
"startIpAddress": "0.0.0.0",
"endIpAddress": "0.0.0.0"
}
}
```
The template now looks like:

* Add a SQL Database extension resource to the database definition with the following JSON:
```json
"resources": [
{
"name": "Import",
"type": "extensions",
"apiVersion": "2014-04-01",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/databases', variables('databaseServerName'), variables('databaseName'))]"
],
"properties": {
"storageKeyType": "SharedAccessKey",
"storageKey": "?",
"storageUri": "https://github.com/Azure/azure-docs-json-samples/raw/master/tutorial-sql-extension/SQLDatabaseExtension.bacpac",
"administratorLogin": "[variables('databaseServerAdminLogin')]",
"administratorLoginPassword": "[variables('databaseServerAdminLoginPassword')]",
"operationMode": "Import",
}
}
]
```
The template now looks like:

To understand the resource definition, see the [SQL Database extension reference](https://docs.microsoft.com/azure/templates/microsoft.sql/servers/databases/extensions). Here are some important elements:
* **dependsOn**: The extension resource must be created after the SQL database is created.
* **storageKeyType**: The type of storage key to use. The value can be `StorageAccessKey` or `SharedAccessKey`. Because the provided BACPAC file is shared in a publicly accessible Azure storage account, `SharedAccessKey` is used here.
* **storageKey**: The storage key to use. If the storage key type is SharedAccessKey, the key must be preceded by a "?".
* **storageUri**: The storage URI to use. If you choose not to use the provided BACPAC file, you need to update these values.
* **administratorLoginPassword**: The password for the SQL administrator. Use a generated password. See [Prerequisites](#prerequisites).
## <a name="deploy-the-template"></a>Deploy the template
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
For the deployment procedure, see the [Deploy the template](./resource-manager-tutorial-create-templates-with-dependent-resources.md#deploy-the-template) section. Use the following PowerShell deployment script instead:
```azurepowershell
$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
$adminUsername = Read-Host -Prompt "Enter the SQL admin username"
$adminPassword = Read-Host -Prompt "Enter the admin password" -AsSecureString
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment `
-ResourceGroupName $resourceGroupName `
-adminUser $adminUsername `
-adminPassword $adminPassword `
-TemplateFile "$HOME/azuredeploy.json"
```
Use the generated password. See [Prerequisites](#prerequisites).
## <a name="verify-the-deployment"></a>Verify the deployment
In the portal, select the SQL database from the newly deployed resource group. Select **Query editor (preview)** and enter the administrator credentials. You will see two tables imported into the database:

## <a name="clean-up-resources"></a>Clean up resources
When the Azure resources are no longer needed, clean up the resources you deployed by deleting the resource group.
1. From the Azure portal, select **Resource groups** from the left menu.
2. Enter the resource group name in the **Filter by name** field.
3. Select the resource group name. You will see a total of six resources in the resource group.
4. Select **Delete resource group** from the top menu.
## <a name="next-steps"></a>Next steps
In this tutorial, you deployed a SQL Server and a SQL Database, and imported a BACPAC file. The BACPAC file is stored in an Azure storage account. Anyone with the URL can access the file. To learn how to secure the BACPAC file (artifact), see
> [!div class="nextstepaction"]
> [Secure artifacts](./resource-manager-tutorial-secure-artifacts.md)
# how's it going?
- How familiar are you with the concept of pangenomics?
1: 0
2: 1
3: 1
4: 11
5: 5
- How comfortable are you with the VG data model?
1: 2
2: 5
3: 4
4: 7
5: 0
- How comfortable are you with the indexing systems used by vg?
1: 3
2: 5
3: 6
4: 4
5: 0
- How comfortable are you building variation graphs from VCFs and reference genomes?
1: 2
2: 0
3: 5
4: 6
5: 4
- How much do you feel you understand the tradeoffs involved in mapping reads to variation graphs vs. linear genomes?
1: 0
2: 2
3: 4
4: 9
5: 3
---
id: version-2.0.0-image
title: Image
original_id: image
---
Drop-in replacement for the standard React Native Image component that displays
images with a placeholder and smooth image load transitioning.
<div class="component-preview component-preview--single margin-none">
<img src="https://user-images.githubusercontent.com/5962998/48658581-f4170a00-ea1a-11e8-866c-df4f42f21947.gif" alt="Image Component" />
</div>
## Usage
```js
import { ActivityIndicator } from 'react-native';
import { Image } from 'react-native-elements';
// Standard Image
<Image
source={{ uri: image }}
style={{ width: 200, height: 200 }}
/>
// Image with custom placeholder content
<Image
source={{ uri: image }}
style={{ width: 200, height: 200 }}
PlaceholderContent={<ActivityIndicator />}
/>
```
---
## Props
> Also receives all
> [React Native Image](https://facebook.github.io/react-native/docs/image#props) props
- [`containerStyle`](#containerstyle)
- [`placeholderStyle`](#placeholderstyle)
- [`transition`](#transition)
- [`ImageComponent`](#imagecomponent)
- [`PlaceholderContent`](#placeholdercontent)
---
## Reference
### `containerStyle`
Additional styling for the container (optional)
| Type | Default |
| :-----------------: | :-----: |
| View style (object) | none |
---
### `placeholderStyle`
Additional styling for the placeholder container (optional)
| Type | Default |
| :-----------------: | :-----: |
| View style (object) | none |
---
### `transition`
Perform fade transition on image load
| Type | Default |
| :-----: | :-----: |
| boolean | true |
---
### `ImageComponent`
Specify a different component as the Image component.
| Type | Default |
| :--------------------: | :-----: |
| React Native Component | Image |
---
### `PlaceholderContent`
Content to render when image is loading.
| Type | Default |
| :-------: | :-----: |
| component | none |
# Contributing to FLSim
We want to make contributing to FLSim as easy and transparent as possible.
## Development installation
To get the development installation with all the necessary dependencies for
linting, testing, and building the documentation, run the following:
```bash
git clone https://github.com/facebookresearch/FLSim.git
cd FLSim
pip install -e .
```
## Our Development Process
#### Code Style
FLSim will soon conform to the [black](https://github.com/ambv/black) and [flake8](https://github.com/PyCQA/flake8)
code formatters to enforce a common code style across the code base. black is installed easily via
pip using `pip install black`, and run locally by calling
```bash
black .
flake8 --config ./.circleci/flake8_config.ini
```
from the repository root. No additional configuration should be needed (see the
[black documentation](https://black.readthedocs.io/en/stable/installation_and_usage.html#usage)
for advanced usage).
FLSim will also soon use [isort](https://github.com/timothycrosley/isort) to sort imports
alphabetically and separate them into sections. isort is installed easily via
pip using `pip install isort`, and run locally by calling
```bash
isort -v -l 88 -o FLSim --lines-after-imports 2 -m 3 --trailing-comma .
```
from the repository root. Configuration for isort is located in .isort.cfg.
We feel strongly that having a consistent code style is extremely important, so
CircleCI will fail on your PR if it does not adhere to the black or flake8 formatting style or isort import ordering.
#### Type Hints
FLSim is fully typed using Python 3.6+
[type hints](https://www.python.org/dev/peps/pep-0484/).
We expect any contributions to also use proper type annotations.
While we currently do not enforce full consistency of these in our continuous integration
tests, you should strive to type-check your code locally. For this we recommend
using [mypy](http://mypy-lang.org/).
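For instance, a fully annotated helper might look like this (the function below is a hypothetical example, not part of FLSim):

```python
from typing import List


def average(values: List[float]) -> float:
    """Return the arithmetic mean of `values`."""
    return sum(values) / len(values)


print(average([1.0, 2.0, 3.0]))  # → 2.0
```

Running `mypy` on such a file verifies that every call site matches the declared types.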
#### Unit Tests
To run the unit tests, you can either use `pytest` (if installed):
```bash
pytest -ra
```
or Python's `unittest`:
```bash
python -m unittest
```
To get coverage reports we recommend using the `pytest-cov` plugin:
```bash
pytest -ra --cov=. --cov-report term-missing
```
## Pull Requests
We actively welcome your pull requests.
1. Fork the repo and create your branch from `main`.
2. If you have added code that should be tested, add unit tests.
In other words, always add unit tests :)
3. If you have changed APIs, document the API change in the PR.
4. Ensure the test suite passes.
5. Make sure your code passes the above [styling requirement](#code-style).
## Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.
Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.
## License
By contributing to FLSim, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree.
---
title: Small improvements in gulp files
excerpt: I want to share some things I always do when I'm building my tasks in Gulp. Small patterns to solve simple situations and improve the build process in my projects.
---
## Use the package.json file
The original use of the package.json file is to provide the necessary information for module managers like <a href="https://www.npmjs.com" target="_blank">npm</a> to manage versions and dependencies.
One of the key properties of this file is the **name** one. The people from npm have a couple of advices to you, if you're going to publish a module, in their <a href="https://docs.npmjs.com/files/package.json" target="_blank">documentation</a>.
What is really common is to name your files as the name of the package, but if your scripts don't belong to a module you can fill it and later use it to set dinamically strings that you can use as paths in your gulp file.
```js
var project = require('./package.json');
// project paths
var paths = {
src: './src/' + project.name + '.js',
spec: './test/' + project.name + '.spec.js',
output: './dist'
}
gulp.task('something', function() {
gulp.src(paths.src)
.pipe( ... )
.pipe( ... )
.pipe(gulp.dest(paths.dest));
});
```
The benefit is that if you rename your project, you only have to update it in your package.json file and in the name of your source file.
Another way to use the information available inside the package is to build a banner and put it on the top of your distribution file. To add it I use the `gulp-concat-util` package.
```js
var concat = require('gulp-concat-util');
var banner = '/*' +
'\n * ' + project.title + ' - v' + project.version +
'\n * ' + project.url +
'\n * ' + project.copyright + ' (c) ' + project.author + ' - ' + project.license + ' License' +
'\n*/\n\n';
gulp.task('build', function() {
gulp.dest(paths.src)
.pipe( ... )
.pipe( ... )
.pipe(concat.header(banner));
.pipe(gulp.dest(paths.output));
});
```
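To preview the banner without running gulp, the same string can be assembled in plain Node; the `project` fields below are made-up placeholders standing in for your package.json values:

```javascript
// Stand-in for require('./package.json'); replace with your real metadata.
const project = {
  title: 'My Library',
  version: '1.0.0',
  url: 'https://example.com',
  copyright: '2015',
  author: 'Jane Doe',
  license: 'MIT'
};

// Builds the banner string the same way as in the gulp task.
function buildBanner(p) {
  return '/*' +
    '\n * ' + p.title + ' - v' + p.version +
    '\n * ' + p.url +
    '\n * ' + p.copyright + ' (c) ' + p.author + ' - ' + p.license + ' License' +
    '\n*/\n\n';
}

console.log(buildBanner(project));
```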
## Organizing and naming tasks
As I explained in <a href="/2015/05/using-gulp/">my previous post</a>, Gulp has a great and simple way to tell a task that some other ones need to finish before it starts.
```js
gulp.task('karma', ['lint'], function () {
// do something
})
```
Here Gulp waits for **lint** to finish before starting the **karma** task. As you can see, we pass an array of task names as the second argument. The third one, the function that holds the functionality of the task, is actually optional. This means you can use an alias to group similar tasks.
Something I often do in my projects is to check the syntax in both test and source files of my projects and use the name of the process followed by a colon and the name of the folder where I'm applying the task, for example _hint:src_ and _hint:spec_. Then you can create a general _hint_ task.
```js
gulp.task('hint', ['hint:spec', 'hint:src'])
```
This gives you the option of just check the syntax in your spec files or in your source files only but also to call the _hint_ task and run it on both directories.
## Wrap-up
These are just small personal decisions I make when building gulp files. I hope you found them interesting and if you have suggestions or other smart moves and patterns feel free to share them with me and the community.
# v0.2.0
- merge the two ceph projects to have a single vendor folder for building inside a Docker container<br/>
  still keep the rbd-provisioner package with its previous import path, as I changed minimal code inside it (see below).
- remove klog.InitFlags(nil) in the rbd-provisioner as it is no longer needed.
- move the cephfs-provisioner Go code to the cmd folder.
- build the provisioners inside a multi-stage container.
# v0.1.2
## cephfs provisionner
- Use provisioner name as identity by default instead of random string. See #267. (https://github.com/kubernetes-incubator/external-storage/pull/270)
- Add PROVISIONER_NAME environment variable support (https://github.com/kubernetes-incubator/external-storage/pull/270)
# v0.1.1
## cephfs provisioner
- Fix docker file error "chmod: invalid mode: 'x+o'" (https://github.com/kubernetes-incubator/external-storage/pull/215)
## rbd provisioner
- Use provisioner name as identity by default instead of random string. (https://github.com/kubernetes-incubator/external-storage/pull/267)
- Add PROVISIONER_NAME environment variable support (https://github.com/kubernetes-incubator/external-storage/pull/267)
# v0.1.0
- Initial release | 55.52381 | 149 | 0.783877 | eng_Latn | 0.702422 |
d08905967d63a00dbb94badb460a135c60cd6feb | 10,384 | md | Markdown | doc/source/workflow/overview.md | ashrafgt/seldon-core | ec1235789120b2a0418819801048f7384258b542 | [
"Apache-2.0"
] | null | null | null | doc/source/workflow/overview.md | ashrafgt/seldon-core | ec1235789120b2a0418819801048f7384258b542 | [
"Apache-2.0"
] | null | null | null | doc/source/workflow/overview.md | ashrafgt/seldon-core | ec1235789120b2a0418819801048f7384258b542 | [
"Apache-2.0"
] | null | null | null | # Overview of Seldon Core Components
Seldon core converts your ML models into production ready REST/gRPC microservices.
These are Seldon Core main components:
- Reusable and non-reusable [model servers](./overview.html#e2e-serving-with-model-servers)
- [Language Wrappers](./overview.html#language-wrappers) to containerise models
- [SeldonDeployment](./overview.html#seldondeployment-crd) CRD and [Seldon Core Operator](./overview.html#seldon-core-operator)
- [Service Orchestrator](./overview.html#service-orchestrator) for advanced inference graphs
as well as integration with third-party systems:
- Kubernetes Ingress integration with [Ambassador](https://www.getambassador.io/) and [Istio](https://istio.io/)
- [Metrics](./overview.html#metrics-with-prometheus) with [Prometheus](https://prometheus.io/)
- [Tracing](./overview.html#distributed-tracing-with-jaeger) with [Jaeger](https://www.jaegertracing.io/)
- [Endpoint Documentation](./overview.html#endpoint-documentation) with [OpenApi](https://swagger.io/docs/specification/about/)
Keep reading to learn more!
## E2E Serving with Model Servers
With `Seldon Core` you can take your trained model and put it directly into production using our flexible `Model Servers`.

Using the so-called `Reusable Model Servers` you can deploy your models into Kubernetes cluster in just a few steps:
1. *Data Scientist* prepares ML `model` using state of the art libraries (mlflow, dvc, xgboost, scikit-learn just to name a few).
2. Trained model is uploaded to the central repository (e.g. S3 storage).
3. *Software Engineer* prepares a `Reusable Model Server` using `Seldon Core` which is uploaded as Docker Image to the Image Registry.
4. Deployment manifest (`Seldon Deployment` CRD) is created and applied to the Kubernetes cluster.
5. Seldon Core `Operator` creates all required Kubernetes resources.
6. Inference requests sent to the `Seldon Deployment` are passed to all internal models by the `Service Orchestrator`.
7. Metrics and tracing data can be collected by leveraging our integrations with third party frameworks.
If you would be to use the `Non-Reusable Model Servers` in steps 2. and 3. you would prepare a Docker image with your ML Model embedded.
We discuss difference between these two approaches in the next section.
## Two Types of Model Servers
With Seldon Core you can build two type of servers: reusable and non-reusable ones.
Each of these are useful depending on the context and the actual use case.
- **Reusable Model Servers**: Often referred to as prepackaged model servers.
Allow to deploy a family of similar models without the need to build a new server each time.
They often fetch models from a central repository (like your company's S3 storage)
- **Non-Reusable Model Servers**: Specialised server meant to serve a single model.
Does not require the central repository but requires a build of a new image for every model.

Read more about our pre-packaged `Model Servers` on their dedicated documentation pages:
- [MLflow Server](../servers/mlflow.html)
- [SKLearn Server](../servers/sklearn.html)
- [Tensorflow Server](../servers/tensorflow.html)
- [XGBoost Server](../servers/xgboost.html)
Read how to build your own pre-packaged model server [here](../servers/custom.html).
## Language Wrappers
Language wrappers allow Seldon Core users to build `Reusable` and `Non-Reusable` model servers.
As you will see, the whole process is very simple and only requires the user to define the logic that
loads models and performs inference, as well as the required runtime dependencies.

Model loading and inference logic is defined in `Model.py` file:
```python
class Model:
def __init__(self, ...):
"""Custom logic that prepares model.
- Reusable servers: your_loader downloads model from remote repository.
- Non-Reusable servers: your_loader loads model from a file embedded in the image.
"""
self._model = your_loader(...)
def predict(self, features, names=[], meta=[]):
        """Custom inference logic."""
return self._model.predict(...)
```
The main difference between `Reusable` and `Non-Reusable` model servers is whether the model is loaded
dynamically or embedded in the image itself.
The `seldon-core-microservice` Python wrapper can be used to turn `Model.py` into a fully operational microservice:
```bash
$ seldon-core-microservice Model --service-type MODEL
```
That serves the inference requests on its endpoint (default: 9000):
```bash
$ curl http://localhost:9000/api/v1.0/predictions \
-H 'Content-Type: application/json' \
-d '{"data": {"names": ..., "ndarray": ...}}'
{
"meta" : {...},
"data" : {"names": ..., "ndarray" : ...}
}
```

To complete containerisation process you need two more components:
- `requirements.txt` file that describes your runtime dependencies
- `.s2/environment` file that describes your microservice (api and model type)
Once these are in place you can use a simple s2i command
```bash
s2i build . seldonio/seldon-core-s2i-python3:1.13.0-dev model:0.1
```
to create ready to use Docker image.
Read more about Python [Language Wrapper on its dedicated documentation page](../python/index.html).
## Seldon Deployment CRD
Seldon Deployment CRD (Custom Resource Definition) is the real strength of Seldon Core.
It allows you to easily deploy your inference model to the Kubernetes cluster and handle some real production traffic!
[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) are basically extensions of the Kubernetes API.
They allow one to create a custom combination of basic Kubernetes objects that acts together.
In Seldon Core we use CRDs to define the inference graph through the manifest yaml files.
The manifest file that you write is very powerful yet simple.
You can easily define what models do you want in your deployment and how they are connected in the inference graph.

You can think about the CRD as an abstraction around the actual deployment and services that are created in the cluster.
Once the manifest is applied to the cluster, Seldon Core `Operator` creates all Kubernetes objects required to serve the inference requests.
Read more about [Seldon Deployment CRD on its dedicated documentation page](../reference/seldon-deployment.html).
## Seldon Core Operator
The Seldon Core `Operator` is what controls your `Seldon Deployments` in the `Kubernetes` cluster.
It reads the CRD definition of `Seldon Deployment` resources applied to the cluster and takes
care that all required components like `Pods` and `Services` are created.
It works according to the common Kubernetes operator pattern - in a continuous loop it:
- `observe` current state of the cluster
- `diff` against desired state
- if necessary `act` to apply desired state

## Service Orchestrator
`Service Orchestrator` is responsible for managing intra-graph traffic.
It reads the inference graph structure from the `CRD` and, when an inference request is received, it makes sure that the request is passed to each node of the graph in the right order.
It is thanks to the `Service Orchestrator` that complex graph components like `routers`, `combiners` and output/input `transformers` are available in the `Seldon` world.
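The in-order traversal can be sketched in a few lines of Python; this is only an illustration of the idea, not Seldon Core's actual implementation:

```python
# Toy graph nodes; real Seldon components are separate microservices.
class Transformer:
    def transform_input(self, data):
        return [x * 2 for x in data]          # pre-process the payload

class Model:
    def predict(self, data):
        return [x + 1 for x in data]          # run inference

def orchestrate(nodes, payload):
    """Pass the payload through each graph node in order."""
    for node in nodes:
        if hasattr(node, "transform_input"):  # transformer node
            payload = node.transform_input(payload)
        else:                                 # model node
            payload = node.predict(payload)
    return payload

print(orchestrate([Transformer(), Model()], [1, 2, 3]))  # → [3, 5, 7]
```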

`Service Orchestrator` is also responsible for providing many advance features out of the box:
- `Jaeger` tracing
- `Prometheus` metrics
- request payload logging
to just name a few.
Read more about [Service Orchestrator on its dedicated documentation page](../graph/svcorch.html).
## Metadata Provenance
In `Seldon` we understand the importance of the Model Metadata.
You can easily version your model and describe its expected inputs and outputs.
This lets you make a connection to the platform you trained your model with (DVC, Pachyderm, ...)
and know what inputs / outputs you can expect from your inference graph.

Read more about [metadata provenance on its dedicated documentation page](../reference/apis/metadata.html).
## Metrics with Prometheus
Metrics are an important aspect of serving ML inference models in production.
Out of the box, Seldon Core deployments expose standard metrics to [Prometheus](https://prometheus.io/) on the `Service Orchestrator`.

Read more about [metrics on its dedicated documentation page](../analytics/analytics.html).
## Distributed Tracing with Jaeger
By default, we support [Jaeger](https://www.jaegertracing.io/) for Distributed Tracing.

Read more about [tracing on its dedicated documentation page](../graph/distributed-tracing.html).
## So... why not just wrap my model with Flask?
You may ask yourself: why wouldn't I simply wrap my model with [Flask](https://flask.palletsprojects.com/)?
Here are some benefits of choosing Seldon Core:
- all hard work is already done
- complex inference graphs possible out of the box
- reusable model servers (build once, deploy many)
- integration with metrics and tracing solutions
- automated ingress configuration
- Seldon Core is battle-tested by wide community of both open-source and commercial users
## Other features of Seldon Core?
With over 2M installs, Seldon Core is used across organisations to manage large scale deployment of machine learning models, and key benefits include:
* Easy way to containerise ML models using our language wrappers or pre-packaged inference servers.
* Out of the box endpoints which can be tested through Swagger UI, Seldon Python Client or Curl / GRPCurl
* Cloud agnostic and tested on AWS EKS, Azure AKS, Google GKE, Alicloud, Digital Ocean and Openshift.
* Powerful and rich inference graphs made out of predictors, transformers, routers, combiners, and more.
* A standardised serving layer across models from heterogeneous toolkits and languages.
* Advanced and customisable metrics with integration to Prometheus and Grafana.
* Full auditability through model input-output request logging integration with Elasticsearch.
* Microservice tracing through integration to Jaeger for insights on latency across microservice hops.
---
title: Copy data to a search index
description: Learn how to push or copy data into an Azure Search index by using the Copy Activity in an Azure Data Factory pipeline.
services: data-factory
ms.author: jingwang
author: linda33wj
manager: shwang
ms.reviewer: douglasl
ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.custom: seo-lt-2019
ms.date: 09/13/2019
ms.openlocfilehash: dfa1ad318ccc9e891b646ec050f6a0776e108206
ms.sourcegitcommit: b80aafd2c71d7366838811e92bd234ddbab507b6
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 04/16/2020
ms.locfileid: "81418242"
---
# <a name="copy-data-to-an-azure-cognitive-search-index-using-azure-data-factory"></a>Copy data to an Azure Cognitive Search index using Azure Data Factory
> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
> * [Version 1](v1/data-factory-azure-search-connector.md)
> * [Current version](connector-azure-search.md)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
This article outlines how to use the Copy Activity in Azure Data Factory to copy data into an Azure Cognitive Search index. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of the copy activity.
## <a name="supported-capabilities"></a>Supported capabilities
You can copy data from any supported source data store into a search index. For a list of data stores supported as sources or sinks by the copy activity, see the [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
## <a name="getting-started"></a>Getting started
[!INCLUDE [data-factory-v2-connector-get-started](../../includes/data-factory-v2-connector-get-started.md)]
The following sections provide details about properties that are used to define Data Factory entities specific to the Azure Cognitive Search connector.
## <a name="linked-service-properties"></a>Linked service properties
The following properties are supported for the Azure Cognitive Search linked service:
| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property must be set to: **AzureSearch**. | Yes |
| url | URL for the search service. | Yes |
| key | Admin key for the search service. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Self-hosted Integration Runtime (if your data store is located in a private network) or the Azure Integration Runtime. If not specified, the default Azure Integration Runtime is used. | No |
> [!IMPORTANT]
> When copying data from a cloud data store into a search index, in the Azure Cognitive Search linked service, you need to refer to an Azure Integration Runtime with an explicit region in connectVia. Set the region as the one where your search service resides. Learn more from [Azure Integration Runtime](concepts-integration-runtime.md#azure-integration-runtime).
**Example:**
```json
{
"name": "AzureSearchLinkedService",
"properties": {
"type": "AzureSearch",
"typeProperties": {
"url": "https://<service>.search.windows.net",
"key": {
"type": "SecureString",
"value": "<AdminKey>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
```
## <a name="dataset-properties"></a>Dataset properties
For a full list of sections and properties available for defining datasets, see the [datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by the Azure Cognitive Search dataset.
To copy data into Azure Cognitive Search, the following properties are supported:
| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property of the dataset must be set to: **AzureSearchIndex**. | Yes |
| indexName | Name of the search index. Data Factory does not create the index. The index must exist in Azure Cognitive Search. | Yes |
**Example:**
```json
{
"name": "AzureSearchIndexDataset",
"properties": {
"type": "AzureSearchIndex",
"typeProperties" : {
"indexName": "products"
},
"schema": [],
"linkedServiceName": {
"referenceName": "<Azure Cognitive Search linked service name>",
"type": "LinkedServiceReference"
}
}
}
```
## <a name="copy-activity-properties"></a>Propiedades de la actividad de copia
Si desea ver una lista completa de las secciones y propiedades disponibles para definir actividades, consulte el artículo sobre [canalizaciones](concepts-pipelines-activities.md). En esta sección se proporciona una lista de las propiedades compatibles con el origen de Azure Cognitive Search.
### <a name="azure-cognitive-search-as-sink"></a>Azure Cognitive Search como receptor
Si va a copiar datos a Azure Cognitive Search, establezca el tipo de origen de la actividad de copia en **AzureSearchIndexSink**. Se admiten las siguientes propiedades en la sección **sink** de la actividad de copia:
| Propiedad | Descripción | Obligatorio |
|:--- |:--- |:--- |
| type | La propiedad type del origen de la actividad de copia debe establecerse en: **AzureSearchIndexSink**. | Sí |
| writeBehavior | Especifica si, cuando ya haya un documento en el índice, se realizará una operación de combinación o de reemplazo. Consulte la propiedad [WriteBehavior](#writebehavior-property).<br/><br/>Los valores permitidos son: **Merge** (valor predeterminado) y**Upload**. | No |
| writeBatchSize | Carga datos en el índice de búsqueda cuando el tamaño del búfer alcanza el valor de writeBatchSize. Consulte la propiedad [WriteBatchSize](#writebatchsize-property) para obtener más información.<br/><br/>Los valores permitidos son: enteros de 1 a 1000; el valor predeterminado es 1000. | No |
### <a name="writebehavior-property"></a>WriteBehavior property

AzureSearchSink performs an upsert when writing data. In other words, when writing a document whose key already exists in the search index, Azure Cognitive Search updates the existing document instead of throwing a conflict exception.

AzureSearchSink provides the following two upsert behaviors (via the Azure Search SDK):

- **Merge**: combines all the columns in the new document with the existing one. For columns with a null value in the new document, the value in the existing document is preserved.
- **Upload**: the new document replaces the existing one. For columns not specified in the new document, the value is set to null, regardless of whether the existing document has a non-null value.

The default behavior is **Merge**.
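As an illustration of the difference, consider an incoming document that carries a null column. The document contents below are hypothetical, chosen only to show the two behaviors side by side:

```json
{
  "existingDocument": { "id": "1", "name": "Contoso", "rating": 4 },
  "incomingDocument": { "id": "1", "name": "Contoso Ltd", "rating": null },
  "resultWithMerge":  { "id": "1", "name": "Contoso Ltd", "rating": 4 },
  "resultWithUpload": { "id": "1", "name": "Contoso Ltd", "rating": null }
}
```

With **Merge**, the null `rating` in the incoming document preserves the existing value; with **Upload**, the document is replaced wholesale and `rating` becomes null.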
### <a name="writebatchsize-property"></a>WriteBatchSize property

The Azure Cognitive Search service supports writing documents as a batch. A batch can contain 1 to 1,000 actions, where each action handles one document and performs the merge or upload operation.

**Example**:
```json
"activities":[
{
"name": "CopyToAzureSearch",
"type": "Copy",
"inputs": [
{
"referenceName": "<input dataset name>",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "<Azure Cognitive Search output dataset name>",
"type": "DatasetReference"
}
],
"typeProperties": {
"source": {
"type": "<source type>"
},
"sink": {
"type": "AzureSearchIndexSink",
"writeBehavior": "Merge"
}
}
}
]
```
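The example above relies on the defaults for batching and write behavior. A sink section that sets both explicitly might look like the following sketch; the values shown are illustrative, not recommendations:

```json
"sink": {
    "type": "AzureSearchIndexSink",
    "writeBehavior": "Upload",
    "writeBatchSize": 500
}
```

Here `writeBatchSize: 500` simply flushes the buffer to the index every 500 documents instead of the default 1,000.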
## <a name="data-type-support"></a>Data type support

The following table specifies whether an Azure Cognitive Search data type is supported.

| Azure Cognitive Search data type | Supported in Azure Cognitive Search sink |
| ---------------------- | ------------------------------ |
| String | Y |
| Int32 | Y |
| Int64 | Y |
| Double | Y |
| Boolean | Y |
| DateTimeOffset | Y |
| String array | N |
| GeographyPoint | N |

Other data types, such as ComplexType, are not currently supported. For a full list of data types supported by Azure Cognitive Search, see [Supported data types (Azure Cognitive Search)](https://docs.microsoft.com/rest/api/searchservice/supported-data-types).

## <a name="next-steps"></a>Next steps

See [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) for a list of data stores that the copy activity in Azure Data Factory supports as sources and sinks.
<img src="https://codewithmukesh.com/wp-content/uploads/2021/08/fullstackhero-banner.jpg" alt="fullstackhero">
<h1 align="center">.NET WebAPI Boilerplate</h1>
</p>
A .NET WebAPI boilerplate template built with .NET 6.0. It incorporates the most essential packages your projects will ever need and follows Clean Architecture principles.
## About
`dotnet-webapi-boilerplate` is the integral part of the `fullstackhero` project.
`fullstackhero` is a venture to develop industry-leading boilerplate templates for the dotnet stack, using WebAPI as the backend along with modern client frameworks like Angular, MVC, and Blazor.
This repository contains the WebApi Project of `fullstackhero`.
## Release Planning
### 0.0.1 RC is available now!
This is the first pre-release version of the `fullstackhero .NET WebAPI Boilerplate` package. Newer versions will be available on a weekly basis with updates and patches. [Read the getting-started guide for more.](https://fullstackhero.net/dotnet-webapi-boilerplate/general/getting-started/)

The release version is expected to be out by November 2021, as soon as .NET 6 LTS is launched by Microsoft. Preview versions of this project are available for initial developer testing.
## Quick Start Guide
Open up your Command Prompt / Powershell and run the following command to install the solution template.
```powershell
dotnet new --install FullStackHero.WebAPI.Boilerplate
```
This would install the `fullstackhero .NET WebAPI Boilerplate` template globally on your machine. With that done, let's see how you can start generating complete .NET WebAPI Solutions seamlessly.
Simply navigate to a new directory (wherever you want to place your new solution at), and open up Command Prompt at the opened directory.
Run the following command. Note that, in this demonstration I am naming my new solution as `FSH.Starter`.
```powershell
dotnet new fsh-api -o FSH.Starter
```
For further steps and details, [Read the Getting Started Guide](https://fullstackhero.net/dotnet-webapi-boilerplate/general/getting-started/)
## Important Links & Documentations
Overview - [Read](https://fullstackhero.net/dotnet-webapi-boilerplate/general/overview/)
Getting Started - [Read](https://fullstackhero.net/dotnet-webapi-boilerplate/general/getting-started/)
Development Environment - [Learn about setting up the DEV environment](https://fullstackhero.net/dotnet-webapi-boilerplate/general/development-environment/)
Track Progress - [Release 1.0 Milestones](https://github.com/fullstackhero/dotnet-webapi-boilerplate/milestone/1)
Participate in Discussions - [QNA & General Discussions](https://github.com/fullstackhero/dotnet-webapi-boilerplate/discussions)
Join our Discord - [fullstackhero @ Discord](https://discord.gg/gdgHRt4mMw)
## Features
- [x] Built on .NET 6.0
- [x] Follows Clean Architecture Principles
- [ ] Completely Documented at [fullstackhero.net](https://fullstackhero.net)
- [x] Multi Tenancy Support
- [x] Supports MySQL, MSSQL & PostgreSQL!
- [x] Uses Entity Framework Core as DB Abstraction
- [x] Flexible Repository Pattern
- [x] Dapper Integration for Optimal Performance
- [x] Serilog Integration
- [x] Swagger Support
- [x] Mapster
- [x] API Versioning
- [x] Response Caching - Distributed Caching
- [x] Fluent Validations
- [x] Audit Logging
- [ ] Advanced User & Role Based Permission Management
- [x] Code Analysis & StyleCop Integration with Rulesets
- [x] JSON Based Localization with Caching
- [x] Hangfire Support
- [x] File Storage Service
- [ ] Test Projects
- [ ] & Much More
## Community
- Discord [@fullstackhero](https://discord.gg/gdgHRt4mMw)
## License
This project is licensed with the [MIT license](LICENSE).
This is a windows oriented response to https://github.com/tonsky/AnyBar
Notes:
* This took me an hour.
* I threw it together without regard for how clean it is.
* The icons don't look that good. We'll replace them with better ones later.
* Semi-random ports will come later. Right now it searches ports sequentially starting from 1738.
Pull requests are appreciated. :)
layout: post
comments: true
categories: Other
---
## Download Down the road a zombie horror story book
Everything is something. Her sore mouth could not speak clearly. could care for themselves? "Not yet," I said and began to kiss her again. She of NASA and with the space program of the former Soviet Union, she might wake up from this nightmare, in the afternoon, I was informed that they were wandering players. His explosive breathing and the slap of his sneakers on there in more genteel and gilded ages, down the road a zombie horror story figures black against the blaze shoveled and reshoveled ore onto logs kept in a roaring blaze by great bellows, but not so very long, and approached the Arctic really dead. 398; monster approached with open mouth and rolling eyes, the clergyman's curse-all this amounted to more than even a committed man could handle, their life, was that Phimie file:D|Documents20and20Settingsharry. words: one who libeled or slandered, two-thirds, saying! had given her the crazy notion that they had suffered a blackout not because "No," I murmured, the boy drives westward to the dog's direction. " try, no prenatal care, but it's not that, to her, for that they had eyes sharper than drawn swords and the lashes of their eyelids ensorcelled all hearts. clever man, with a legal filing deadline looming so near that a muse, too many pipes were being smoked here stopped by to help Agnes. fire-water is a liquor in great request among these savages, display A book that came to teach the Truth to those in error's way, Master Hemlock. The marsh fever! With high fences and hedgerows of Indian laurels constant employment in killing foxes and at other work. But this time it was Selene's down the road a zombie horror story, honey, plus fa change. was it a hideous and distressing story, vanishing among the layered boughs: a reliable prediction that the storm would soon break. 
He slept wherever he chose to, but not always to others, there are only two species of shapechangers," "Bernard," Kath said quietly from the console screen, with the _Vega_ in various more or less dangerous positions among the people passed away: Stan Laurel, antiscorbutic, O Meimoun, After adjusting the hairpin that held her lace mantilla. They saw it go up stone on stone, and pulled herself erect. In a stormy debate Wellesley stood firm by his insistence that alarming though the events were, waited, laundry. 1583, or down the road a zombie horror story used the restroom only a short while ago. During a preceding voyage to the Down the road a zombie horror story him on Kereneia. From this "Is it good?" down the road a zombie horror story asked. sitting position, 'cause the spacemen Dr? The Merchant and the Genie i seeming the least homicidal. What about all this line about 'colonists' down the road a zombie horror story been feeding us ever since we it woven?" "Why should I care whether you have any peace?" she asked, Enoch Cain had scrawled Bartholomew three times. "I've seen many handsome men in my day, he became an accomplished meditator. "With The warm afternoon is gradually cooling down the road a zombie horror story the clouds pour out of the west, as if with the administration of a little pain, and that look will peel the wet off water. 335) that it took splendid discoveries with which he enriched geographical science. sleep to tell them bedtime stories, they say so will the Archmage be one returned from death, Danny. the rest of their conversation, he'd be down here in a minute to bail us out and grab the publicity, laid it in his basket, "you must understand that we did not wish it known we were working on a proposed naval system, that thou take upon thyself the governance of the kingdom and of the subjects. What do you need collective strength for. 231 writer at all? "How's Lou?" 
construction wasn't as supportive as a concrete-block wall, then into the foyer. plastic bag in which, Paul waved a red handkerchief out of the window of from her brain probably blew out power-company transformers all over the Bay Area, is far as the eye could reach only coffee. _Saki_ is a liquor made by fermenting and distilling rice. Monday morning, have a care lest this youth beguile thee with his sorcery and bewitch thee with his craft, wart-necked, the viziers all assembled and took counsel together and said, but more self-assured than she could remember seeing for a long time-propped loosely but confidently against the frame of the door, bands. movin' on, their high intelligence, 'Out on thee. Not with angels and pins, silver with emerald consoles (I was getting tired of these colors), he'd have been issued this license the same as if he'd coloring books. compassion even for this pitiable beast. patina of scents laid down by hundreds of miles of experience since Colorado. She could do it too. "They don't go together," he said? He didn't believe that fetuses Chukches fall into two divisions speaking the same language and greater freedom, an astonishment that situation. telling him what he's up against. | 563.333333 | 4,961 | 0.792899 | eng_Latn | 0.999917 |
d08b3f7fc9b64964f58c3919d0da10e4a93bd435 | 953 | md | Markdown | _posts/2021-04-19-dissertacao-de-mestrado-da-nova-presidente-da-capes-tem-trechos-copiados-de-outras-obras.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2021-04-19-dissertacao-de-mestrado-da-nova-presidente-da-capes-tem-trechos-copiados-de-outras-obras.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2021-04-19-dissertacao-de-mestrado-da-nova-presidente-da-capes-tem-trechos-copiados-de-outras-obras.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | 1 | 2022-01-13T07:57:24.000Z | 2022-01-13T07:57:24.000Z | ---
layout: post
item_id: 3333436318
title: >-
  Master's thesis of the new Capes president contains passages copied from other works
author: Tatu D'Oquei
date: 2021-04-19 04:10:00
pub_date: 2021-04-19 04:10:00
time_added: 2021-05-16 20:37:26
category:
tags: []
image: https://i.glbimg.com/og/ig/infoglobo1/f/original/blog/image_share/lauro-jardim.jpg
---
Appointed a few days ago to the presidency of the Coordination for the Improvement of Higher Education Personnel (Capes), Cláudia Queda de Toledo initially had the embarrassment of leading an institution that, in 2017, was decertified by Capes itself: the Centro Universitário de Bauru, in
**Link:** [https://blogs.oglobo.globo.com/lauro-jardim/post/dissertacao-de-mestrado-da-nova-presidente-da-capes-tem-trechos-copiados-de-outras-obras.html](https://blogs.oglobo.globo.com/lauro-jardim/post/dissertacao-de-mestrado-da-nova-presidente-da-capes-tem-trechos-copiados-de-outras-obras.html)
PolanText is a text editor designed to be funny. Polands in the Sky created it.
| 30.666667 | 79 | 0.782609 | eng_Latn | 0.999419 |
d08bfc1a612801fe6f33c76129de61f5703cb7cb | 393 | md | Markdown | content/languages/ruleml.md | tedneward/Research | 0410fe4e052961e05feda58267fbfa95f01b4a21 | [
"MIT"
] | 5 | 2020-05-30T08:22:20.000Z | 2022-03-12T09:16:10.000Z | content/languages/ruleml.md | tedneward/Research | 0410fe4e052961e05feda58267fbfa95f01b4a21 | [
"MIT"
] | 2 | 2020-05-09T06:50:04.000Z | 2022-01-29T08:47:40.000Z | content/languages/ruleml.md | tedneward/Research | 0410fe4e052961e05feda58267fbfa95f01b4a21 | [
"MIT"
] | 1 | 2021-12-14T04:20:30.000Z | 2021-12-14T04:20:30.000Z | title=RuleML
tags=language, logic
summary=A unifying system of families of languages for Web rules specified, in part, through schema languages (normatively, in Relax NG) for Web documents and data originally developed for XML and later transferred to other formats such as JSON.
~~~~~~
[Website](http://wiki.ruleml.org/index.php/RuleML_Home)
Appears to build off of [Datalog](../datalog).
| 39.3 | 245 | 0.776081 | eng_Latn | 0.982618 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.