Dataset schema (one record per source file; ⌀ marks a nullable field):

| Column | Type / range |
|---|---|
| hexsha | string, length 40 |
| size | int64, 5 - 1.04M |
| ext | string, 6 classes |
| lang | string, 1 class |
| max_stars_repo_path | string, length 3 - 344 |
| max_stars_repo_name | string, length 5 - 125 |
| max_stars_repo_head_hexsha | string, length 40 - 78 |
| max_stars_repo_licenses | sequence, length 1 - 11 |
| max_stars_count | int64, 1 - 368k ⌀ |
| max_stars_repo_stars_event_min_datetime | string, length 24 ⌀ |
| max_stars_repo_stars_event_max_datetime | string, length 24 ⌀ |
| max_issues_repo_path | string, length 3 - 344 |
| max_issues_repo_name | string, length 5 - 125 |
| max_issues_repo_head_hexsha | string, length 40 - 78 |
| max_issues_repo_licenses | sequence, length 1 - 11 |
| max_issues_count | int64, 1 - 116k ⌀ |
| max_issues_repo_issues_event_min_datetime | string, length 24 ⌀ |
| max_issues_repo_issues_event_max_datetime | string, length 24 ⌀ |
| max_forks_repo_path | string, length 3 - 344 |
| max_forks_repo_name | string, length 5 - 125 |
| max_forks_repo_head_hexsha | string, length 40 - 78 |
| max_forks_repo_licenses | sequence, length 1 - 11 |
| max_forks_count | int64, 1 - 105k ⌀ |
| max_forks_repo_forks_event_min_datetime | string, length 24 ⌀ |
| max_forks_repo_forks_event_max_datetime | string, length 24 ⌀ |
| content | string, length 5 - 1.04M |
| avg_line_length | float64, 1.14 - 851k |
| max_line_length | int64, 1 - 1.03M |
| alphanum_fraction | float64, 0 - 1 |
| lid | string, 191 classes |
| lid_prob | float64, 0.01 - 1 |
db9a5572749d88285b62f5c704551b84abcb43f6 | 4,028 | md | Markdown | articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'VPN Gateway: Azure AD tenant for different user groups: Azure AD authentication'
description: You can use P2S VPN to connect to your virtual network with Azure AD authentication
services: vpn-gateway
author: anzaman
ms.service: vpn-gateway
ms.topic: conceptual
ms.date: 02/19/2020
ms.author: alzam
ms.openlocfilehash: 118ea21cbdd2e0527659c7c1beb40d8e42fa1d10
ms.sourcegitcommit: 98a5a6765da081e7f294d3cb19c1357d10ca333f
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 02/20/2020
ms.locfileid: "77485721"
---
# <a name="create-an-azure-active-directory-tenant-for-p2s-openvpn-protocol-connections"></a>Create an Azure Active Directory tenant for P2S OpenVPN protocol connections
When connecting to your virtual network, you can use certificate-based authentication or RADIUS authentication. However, when you use the Open VPN protocol, you can also use Azure Active Directory authentication. If you want different sets of users to be able to connect to different VPN gateways, you can register multiple applications in AD and link them to different VPN gateways. This article helps you set up an Azure AD tenant for P2S OpenVPN authentication and create and register multiple apps in Azure AD in order to allow different access for different users and groups.
> [!NOTE]
> Azure AD authentication is supported only for OpenVPN® protocol connections.
>
[!INCLUDE [create](../../includes/openvpn-azure-ad-tenant-multi-app.md)]
## <a name="enable-authentication"></a>6. Enable authentication on the gateway
In this step, you will enable Azure AD authentication on the VPN gateway.
1. Enable Azure AD authentication on the VPN gateway by running the following commands. Be sure to modify the commands to reflect your own environment:
```azurepowershell-interactive
$gw = Get-AzVirtualNetworkGateway -Name <name of VPN gateway> -ResourceGroupName <Resource group>
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -VpnClientRootCertificates @()
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -AadTenantUri "https://login.microsoftonline.com/<your Directory ID>" -AadAudienceId "application ID from previous section" -AadIssuerUri "https://sts.windows.net/<your Directory ID>/" -VpnClientAddressPool 192.168.0.0/24
```
> [!NOTE]
> Do not use the Azure VPN client's application ID in the commands above: it will grant all users access to the VPN gateway. Use the ID of the application(s) that you registered.
2. Run the following commands to create and download the profile. Change the -ResourceGroupName and -Name values to match yours.
```azurepowershell-interactive
$profile = New-AzVpnClientConfiguration -Name <name of VPN gateway> -ResourceGroupName <Resource group> -AuthenticationMethod "EapTls"
$profile.VpnProfileSASUrl
```
3. After you run the commands, you will see output similar to the following. Copy the URL from the output into your browser to download the profile zip file.

4. Extract the downloaded zip file.
5. Browse to the unzipped "AzureVPN" folder.
6. Make a note of the location of the "azurevpnconfig.xml" file. azurevpnconfig.xml contains the settings for the VPN connection and can be imported directly into the Azure VPN Client application. You can also distribute this file to all of the users who need to connect, by e-mail or other means. The user will need valid Azure AD credentials to connect successfully.
## <a name="next-steps"></a>Next steps
To connect to your virtual network, you must create and configure a VPN client profile. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
| 66.032787 | 652 | 0.792453 | spa_Latn | 0.960482 |
db9ac2ff121790883dbfd36ca3213d7c2fbd3c8d | 27,992 | md | Markdown | docs/src/ref/mcmc.md | mschauer/Gen.jl | 06168315768be163091e80feace27076a63ae575 | [
"Apache-2.0"
] | null | null | null | docs/src/ref/mcmc.md | mschauer/Gen.jl | 06168315768be163091e80feace27076a63ae575 | [
"Apache-2.0"
] | null | null | null | docs/src/ref/mcmc.md | mschauer/Gen.jl | 06168315768be163091e80feace27076a63ae575 | [
"Apache-2.0"
] | null | null | null | # Markov chain Monte Carlo (MCMC)
Markov chain Monte Carlo (MCMC) is an approach to inference which involves initializing a hypothesis and then repeatedly sampling a new hypotheses given the previous hypothesis by making a change to the previous hypothesis.
The function that samples the new hypothesis given the previous hypothesis is called the **MCMC kernel** (or `kernel' for short).
If we design the kernel appropriately, then the distribution of the hypotheses will converge to the conditional (i.e. posterior) distribution as we increase the number of times we apply the kernel.
Gen includes primitives for constructing MCMC kernels and composing them into MCMC algorithms.
Although Gen encourages you to write MCMC algorithms that converge to the conditional distribution, Gen does not enforce this requirement.
You may use Gen's MCMC primitives in other ways, including for stochastic optimization.
For background on MCMC see [1].
[1] Andrieu, Christophe, et al. "An introduction to MCMC for machine learning." Machine learning 50.1-2 (2003): 5-43. [Link](https://www.cs.ubc.ca/~arnaud/andrieu_defreitas_doucet_jordan_intromontecarlomachinelearning.pdf).
## MCMC in Gen
Suppose we are doing inference in the following toy model:
```julia
@gen function model()
x = @trace(bernoulli(0.5), :x) # a latent variable
@trace(normal(x ? -1. : 1., 1.), :y) # the variable that will be observed
end
```
To do MCMC, we first need to obtain an initial trace of the model.
Recall that a trace encodes both the observed data and hypothesized values of latent variables.
We can obtain an initial trace that encodes the observed data, and contains a randomly initialized hypothesis, using [`generate`](@ref), e.g.:
```julia
observations = choicemap((:y, 1.23))
trace, = generate(model, (), observations)
```
Then, an MCMC algorithm is Gen is implemented simply by writing Julia `for` loop, which repeatedly applies a kernel, which is a regular Julia function:
```julia
for i=1:100
trace = kernel(trace)
end
```
## Built-in Stationary Kernels
However, we cannot use just any function for `kernel` and expect the resulting chain to converge to the conditional distribution.
To converge to the conditional distribution, the kernels must satisfy some properties.
One of these properties is that the kernel is **stationary** with respect to the conditional distribution.
Gen's inference library contains a number of functions for constructing stationary kernels:
- [`metropolis_hastings`](@ref) with alias [`mh`](@ref), which has three variants with differing tradeoffs between ease-of-use and efficiency. The simplest variant simply requires you to select the set of random choices to be updated, without specifying how. The middle variant allows you to use custom proposals that encode problem-specific heuristics, or custom proposals based on neural networks that are trained via amortized inference. The most sophisticated variant allows you to specify any kernel in the [reversible jump MCMC](https://people.maths.bris.ac.uk/~mapjg/papers/RJMCMCBka.pdf) framework.
- [`mala`](@ref), which performs a Metropolis Adjusted Langevin algorithm update on a set of selected random choices.
- [`hmc`](@ref), which performs a Hamiltonian Monte Carlo update on a set of selected random choices.
- [`elliptical_slice`](@ref), which performs an elliptical slice sampling update on a selected multivariate normal random choice.
For example, here is an MCMC inference algorithm that uses [`mh`](@ref):
```julia
function do_inference(y, num_iters)
trace, = generate(model, (), choicemap((:y, y)))
xs = Float64[]
for i=1:num_iters
trace, = mh(trace, select(:x))
push!(xs, trace[:x])
end
xs
end
```
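The `mh` call above uses the simplest, selection-based variant. The middle variant mentioned above instead takes a custom proposal, which is a generative function whose first argument is the previous trace. A minimal sketch (the address `:theta` and the step size are hypothetical, chosen only to illustrate the call):
```julia
# A Gaussian random-walk proposal over a hypothetical continuous choice :theta.
@gen function theta_random_walk(trace)
    {:theta} ~ normal(trace[:theta], 0.1)
end

# The empty tuple is the collection of extra arguments passed to the proposal.
random_walk_kernel(trace) = mh(trace, theta_random_walk, ())[1]
```
Because this proposal only writes to `:theta`, the kernel never touches the observed addresses, which is exactly the requirement discussed next.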
Note that each of the kernel functions listed above is stationary with respect to the joint distribution on traces of the model, but may not be stationary with respect to the intended conditional distribution, which is determined by the set of addresses that constitute the observed data.
If a kernel modifies the values of any of the observed data, then the kernel is not stationary with respect to the conditional distribution.
Therefore, you should **ensure that your MCMC kernels never propose to the addresses of the observations**.
Note that stationarity with respect to the conditional distribution alone is not sufficient for a kernel to converge to the posterior with infinite iterations.
Other requirements include that the chain is **irreducible** (it is possible to get from any state to any other state in a finite number of steps), and **aperiodicity**, which is a more complex requirement that is satisfied when kernels have some probability of staying in the same state, which most of the primitive kernels above satisfy.
We refer interested readers to [1] for additional details on MCMC convergence.
## Enabling Dynamic Checks
Gen does not statically guarantee that kernels (either ones built-in or composed with the [Composite Kernel DSL](@ref)) are stationary.
However, you can enable dynamic checks that will detect common bugs that break stationarity.
To enable the dynamic checks we pass a keyword argument beyond those of the kernel itself:
```julia
new_trace = k(trace, 2, check=true)
```
Note that these checks aim to detect when a kernel is not stationary with respect to the model's **joint** distribution.
To add an additional dynamic check for violation of stationarity with respect to the *conditional* distribution (conditioned on observations), we pass in an additional keyword argument containing a choice map with the observations:
```julia
new_trace = k(trace, 2, check=true, observations=choicemap((:y, 1.2)))
```
If `check` is set to `false`, then the observation check is not performed.
## Composite Kernel DSL
You can freely compose the primitive kernels listed above into more complex kernels.
Common types of composition including e.g. cycling through multiple kernels, randomly choosing a kernel to apply, and choosing which kernel to apply based on the current state.
However, not all such compositions of stationary kernels will result in kernels that are themselves stationary.
Gen's **Composite Kernel DSL** is an embedded inference DSL that allows for more safe composition of MCMC kernels, by formalizing properties of the compositions that are sufficient for stationarity, encouraging compositions with these properties, and dynamically checking for violation of these properties.
Although the DSL does not *guarantee* stationarity of the composite kernels, its dynamic checks do catch common cases of non-stationary kernels.
The dynamic checks can be enabled and disabled as needed (e.g. enabled during testing and prototyping and disabled during deployment for higher performance).
The DSL consists of a macro -- [`@kern`](@ref) -- for composing stationary kernels from primitive stationary kernels and composite stationary kernels, and two additional macros: [`@pkern`](@ref) for declaring Julia functions to be custom primitive stationary kernels, and [`@rkern`](@ref) for declaring the reversal of a custom primitive kernel (these two macros are advanced features not necessary for standard MCMC algorithms).
### Composing Stationary Kernels
The [`@kern`](@ref) macro defines a composite MCMC kernel in a restricted DSL that is based on Julia's own function definition syntax.
Suppose we are doing inference in the following model:
```julia
@gen function model()
n = @trace(geometric(0.5), :n)
total = 0.
for i=1:n
total += @trace(normal(0, 1), (:x, i))
end
@trace(normal(total, 1.), :y)
total
end
```
Here is an example composite kernel for MCMC in this model:
```julia
@kern function my_kernel(trace)
# cycle through the x's and do a random walk update on each one
for i in 1:trace[:n]
trace ~ mh(trace, random_walk_proposal, (i,))
end
# repeatedly pick a random x and do a random walk update on it
if trace[:n] > 0
for rep in 1:10
let i ~ uniform_discrete(1, trace[:n])
trace ~ mh(trace, random_walk_proposal, (i,))
end
end
end
# remove the last x, or add a new one, a random number of times
let n_add_remove_reps ~ uniform_discrete(0, max_n_add_remove)
for rep in 1:n_add_remove_reps
trace ~ mh(trace, add_remove_proposal, (), add_remove_involution)
end
end
end
```
In the DSL, the first argument (`trace` in this case) represents the trace on which the kernel is acting.
The kernel may have additional arguments.
The code inside the body can read from the trace (e.g. `trace[:n]` reads the value of the random choice `:n`).
Finally, the return value of the composite kernel is automatically set to the trace.
NOTE: It is not permitted to assign to the trace variable, except with `~` expressions.
Also note that stationary kernels, when treated as Julia functions, return a tuple, where the first element is the trace and the remaining arguments are metadata.
When applying these kernels with `~` syntax within the DSL, it is not necessary to unpack the tuple (the metadata is ignored automatically).
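For example, a kernel defined with [`@kern`](@ref) is itself an ordinary Julia function that returns a tuple whose first element is the new trace, so it can be applied directly in an MCMC loop (a sketch using the kernel and model above, with an illustrative observed value):
```julia
# assumes max_n_add_remove and the proposals used by my_kernel are defined, as above
observations = choicemap((:y, 1.23))
trace, = generate(model, (), observations)
for iter in 1:100
    trace, = my_kernel(trace)   # unpack the (trace, metadata...) tuple
end
```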
The language constructs supported by this DSL are:
**Applying a stationary kernel.**
To apply a kernel, the syntax `trace ~ k(trace, args..)` is used.
Note that the `check` and `observations` keyword arguments (see [Enabling Dynamic Checks](@ref)) should not be used here; they will be added automatically.
**For loops.**
The range of the for loop may be a deterministic function of the trace (as in `trace[:n]` above).
The range must be *invariant* under all possible executions of the body of the for loop.
For example, the random walk based kernel embedded in the for loop in our example above cannot modify the value of the random choice `:n` in the trace.
**If-end expressions**
The predicate condition may be a deterministic function of the trace, but it also must be invariant (i.e. remain true) under all possible executions of the body.
**Deterministic let expressions.**
We can use `let x = value .. end` to bind values to a variable, but the expression on the right-hand-side must be a deterministic function of its free variables, and its value must be invariant under all possible executions of the body.
**Stochastic let expressions.**
We can use `let x ~ dist(args...) .. end` to sample a stochastic value and bind to a variable, but the expression on the right-hand-side must be the application of a Gen [`Distribution`](@ref) to arguments, and the distribution and its arguments must be invariant under all possible executions of the body.
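Putting these constructs together, here is a small sketch of another valid composite kernel for the model above; it uses an if-end guard, a deterministic let, and a stochastic let, and the quantities bound by the let expressions are invariant under the body because the enclosed update never modifies `:n`:
```julia
@kern function one_coordinate_kernel(trace)
    if trace[:n] > 0
        # deterministic let: n depends only on the trace and stays invariant
        let n = trace[:n]
            # stochastic let: pick one coordinate at random
            let i ~ uniform_discrete(1, n)
                trace ~ mh(trace, select((:x, i)))
            end
        end
    end
end
```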
### Declaring primitive kernels for use in composite kernels
Note that all calls to built-in kernels like [`mh`](@ref) should be stationary, but that users are also free to declare their own arbitrary code as stationary.
The [`@pkern`](@ref) macro declares a Julia function as a stationary MCMC kernel, for use with the MCMC Kernel DSL.
The following custom primitive kernel permutes the random variables using a random permutation generated outside of Gen:
```julia
@pkern function permute_move(trace; check=false, observations=EmptyChoiceMap())
perm = Random.randperm(trace[:n])
constraints = choicemap()
    # apply the permutation: the value previously stored at address (:x, j),
    # where j = perm[i], becomes the new value at address (:x, i)
    for (i, j) in enumerate(perm)
        constraints[(:x, i)] = trace[(:x, j)]
    end
trace, = update(trace, (), (), constraints)
metadata = nothing
trace, metadata
end
```
The first argument to the function should be the trace, and the function must have keyword arguments `check` and `observations` (see [Enabling Dynamic Checks](@ref)).
The return value should be a tuple where the first element is the new trace (and any remaining elements are optional metadata).
**Primitive kernels are Julia functions.**
Note that although we will be invoking these kernels within [`@kern`](@ref) functions, these kernels can still be called like a regular Julia function.
```julia
new_trace, = permute_move(trace)  # returns (trace, metadata); unpack the new trace
```
Indeed, they are just regular Julia functions, but with some extra information attached so that the composite kernel DSL knows they have been declared as stationary kernels.
## Involution MCMC
Gen's most flexible variant of [`metropolis_hastings`](@ref), called **involution MCMC**, allows users to specify any MCMC kernel in the reversible jump MCMC (RJMCMC) framework [2].
Involution MCMC allows you to express a broad class of custom MCMC kernels that are not expressible using the other, simpler variants of Metropolis-Hastings supported by Gen.
These kernels are particularly useful for inferring the structure (e.g. control flow) of a model.
[2] Green, Peter J. "Reversible jump Markov chain Monte Carlo computation and Bayesian model determination." Biometrika 82.4 (1995): 711-732. [Link](https://academic.oup.com/biomet/article-abstract/82/4/711/252058)
An involution MCMC kernel in Gen takes as input a previous trace of the model (whose choice map we will denote by ``t``), and performs three phases to obtain a new trace of the model:
- First, it traces the execution of a **proposal**, which is a generative function that takes the previous trace of the model as its first argument. Mathematically, we will denote the choice map associated with the trace of the proposal by ``u``. The proposal can of course be defined using the [Built-In Modeling Languages](@ref), just like the model itself. However, unlike many other uses of proposals in Gen, these proposals *can make random choices at addresses that the model does not*.
- Next, it takes the tuple ``(t, u)`` and passes it into an **involution** (denoted mathematically by ``h``), which is a function that returns a new tuple ``(t', u')``, where ``t'`` is the choice map for a new proposed trace of the model, and ``u'`` are random choices for a new trace of the proposal. The defining property of the involution is that *it is invertible*, and *it is its own inverse*; i.e. ``(t, u) = h(h(t, u))``. Intuitively, ``u'`` is a description of a way that the proposal could be reversed, taking ``t'`` to ``t``.
- Finally, it computes an acceptance probability, which involves computing certain derivatives associated with the involution, and stochastically accepts or rejects the proposed model trace according to this probability. If the involution is defined using the **Involution DSL** described later in this section, then the acceptance probability calculation is fully automated. (You can also implement involutions directly as Julia functions, but then you need to compute the Jacobian correction to the acceptance probability yourself).
### Example
Consider the following generative model of two pieces of observed data, at addresses `:y1` and `:y2`.
```julia
@gen function model()
if ({:z} ~ bernoulli(0.5))
m1 = ({:m1} ~ gamma(1, 1))
m2 = ({:m2} ~ gamma(1, 1))
else
m = ({:m} ~ gamma(1, 1))
(m1, m2) = (m, m)
end
{:y1} ~ normal(m1, 0.1)
{:y2} ~ normal(m2, 0.1)
end
```
Because this model has stochastic control flow, it represents two distinct structural hypotheses about how the observed data could have been generated:
If `:z` is `true` then we enter the first branch, and we hypothesize that the two data points were generated from separate means, sampled at addresses `:m1` and `:m2`.
If `:z` is `false` then we enter the second branch, and we hypothesize that there is a single mean that explains both data points, sampled at address `:m`.
We want to construct an MCMC kernel that is able to transition between these two distinct structural hypotheses.
We could construct such a kernel with the simpler 'selection' variant of Metropolis-Hastings, by selecting the address `:z`, e.g.:
```julia
select_mh_structure_kernel(trace) = mh(trace, select(:z))[1]
```
Sometimes, this kernel would propose to change the value of `:z`.
We could interleave this kernel with another kernel that does inference over the mean random choices, without changing the structure, e.g.:
```julia
@gen function fixed_structure_proposal(trace)
if trace[:z]
{:m1} ~ normal(trace[:m1], 0.1)
{:m2} ~ normal(trace[:m2], 0.1)
else
{:m} ~ normal(trace[:m], 0.1)
end
end
fixed_structure_kernel(trace) = mh(trace, fixed_structure_proposal, ())[1]
```
Combining these together, and applying to particular data and with a specific initial hypotheses:
```julia
(y1, y2) = (1.0, 1.3)
trace, = generate(model, (), choicemap((:y1, y1), (:y2, y2), (:z, false), (:m, 1.2)))
for iter=1:100
trace = select_mh_structure_kernel(trace)
trace = fixed_structure_kernel(trace)
end
```
However, this algorithm will not be very efficient, because the internal proposal used by the selection variant of MH is not specialized to the model.
In particular, when switching from the model with a single mean to the model with two means, the values of the new addresses `:m1` and `:m2` will be proposed from the prior distribution.
This is wasteful, since if we have inferred an accurate value for `:m`, we expect the values for `:m1` and `:m2` to be near this value.
The same is true when proposing a structure change in the opposite direction.
That means it will take many more steps to get an accurate estimate of the posterior probability distribution on the two structures.
We would like to use inferred values for `:m1` and `:m2` to inform our proposal for the value of `:m`.
For example, we could take the geometric mean:
```julia
m = sqrt(m1 * m2)
```
However, there are many combinations of `m1` and `m2` that have the same geometric mean.
In other words, the geometric mean is not *invertible*.
However, if we return the additional degree of freedom alongside the geometric mean (`dof`), then we do have an invertible function:
```julia
function merge_means(m1, m2)
m = sqrt(m1 * m2)
dof = m1 / (m1 + m2)
(m, dof)
end
```
The inverse function is:
```julia
function split_mean(m, dof)
m1 = m * sqrt((dof / (1 - dof)))
m2 = m * sqrt(((1 - dof) / dof))
(m1, m2)
end
```
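As a quick sanity check (with illustrative numbers) that the two functions are mutually inverse, splitting the merged values recovers the original means up to floating-point error:
```julia
m1, m2 = 1.2, 3.4
m, dof = merge_means(m1, m2)
@assert all(isapprox.(split_mean(m, dof), (m1, m2)))
```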
We use these two functions to construct an involution, and we use this involution with [`metropolis_hastings`](@ref) to construct an MCMC kernel that we call a 'split/merge' kernel, because it either splits a parameter value, or merges two parameter values.
The proposal is responsible for generating the extra degree of freedom when splitting:
```julia
@gen function split_merge_proposal(trace)
if trace[:z]
# currently two segments, switch to one
else
# currently one segment, switch to two
{:dof} ~ uniform_continuous(0, 1)
end
end
```
Finally, we write the involution itself, using the involution DSL:
```julia
@involution function split_merge_involution(model_args, proposal_args, proposal_retval)
if @read_discrete_from_model(:z)
# currently two segments, switch to one
@write_discrete_to_model(:z, false)
m1 = @read_continuous_from_model(:m1)
m2 = @read_continuous_from_model(:m2)
(m, dof) = merge_means(m1, m2)
@write_continuous_to_model(:m, m)
@write_continuous_to_proposal(:dof, dof)
else
        # currently one segment, switch to two
@write_discrete_to_model(:z, true)
m = @read_continuous_from_model(:m)
dof = @read_continuous_from_proposal(:dof)
(m1, m2) = split_mean(m, dof)
@write_continuous_to_model(:m1, m1)
@write_continuous_to_model(:m2, m2)
end
end
```
The body of this function reads values from ``(t, u)`` at specific addresses and writes values to ``(t', u')`` at specific addresses, where ``t`` and ``t'`` are called 'model' choice maps, and ``u`` and ``u'`` are called 'proposal' choice maps.
Note that the inputs and outputs of this function are **not** represented in the same way as arguments or return values of regular Julia functions --- they are implicit and can only be read from and written to, respectively, using a set of special macros (listed below).
You should convince yourself that this function is invertible and its own inverse.
Finally, we compose a structure-changing MCMC kernel using this involution:
```julia
split_merge_kernel(trace) = mh(trace, split_merge_proposal, (), split_merge_involution)
```
We then compose this move with the fixed structure move, and run it on the observed data:
```julia
(y1, y2) = (1.0, 1.3)
trace, = generate(model, (), choicemap((:y1, y1), (:y2, y2), (:z, false), (:m, 1.)))
for iter=1:100
trace = split_merge_kernel(trace)
trace = fixed_structure_kernel(trace)
end
```
We can then compare the results to the results from the Markov chain that used the selection-based structure-changing kernel:

We see that if we initialize the Markov chains from the same state with a single mean (`:z` is `false`), then the selection-based kernel fails to accept any moves to the two-mean structure within 100 iterations, whereas the split-merge kernel transitions back and forth many times.
If we repeated the selection-based kernel for enough iterations, it would eventually transition back and forth at the same rate as the split-merge.
The split-merge kernel gives a much more efficient inference algorithm for estimating the posterior probability on the two structures.
### Involution DSL
To define an involution using the involution DSL, use the [`@involution`](@ref) macro in front of a Julia function definition.
The function must take three arguments, representing the arguments to the model, the arguments to the proposal (not including the trace), and the return value of the proposal.
Note that these are not the inputs to the involution itself; they simply parametrize a family of involutions, which are maps between pairs of choice maps ``(t, u)`` and ``(t', u')``, where ``t`` and ``t'`` are choice maps of model traces and ``u`` and ``u'`` are choice maps of proposal traces.
The body of the function can contain almost arbitrary Julia code.
However, reads from ``(t, u)`` and writes to ``(t', u')`` use specific macros.
Some of these macros can only be used with either discrete or continuous random choices, respectively:
- `@read_discrete_from_model(addr)`: Read the discrete value from the input model choice map (``t``) at the given address.
- `@write_discrete_to_model(addr, value)`: Write a discrete value to the output model choice map (``t'``) at the given address.
- `@read_discrete_from_proposal(addr)`: Read the discrete value from the input proposal choice map (``u``) at the given address.
- `@write_discrete_to_proposal(addr, value)`: Write a discrete value to the output proposal choice map (``u'``) at the given address.
- `@read_continuous_from_model(addr)`: Read the continuous value from the input model choice map (``t``) at the given address.
- `@write_continuous_to_model(addr, value)`: Write a continuous value to the output model choice map (``t'``) at the given address.
- `@read_continuous_from_proposal(addr)`: Read the continuous value from the input proposal choice map (``u``) at the given address.
- `@write_continuous_to_proposal(addr, value)`: Write a continuous value to the output proposal choice map (``u'``) at the given address.
Often involutions directly copy the value from one address in the input ``(t, u)`` to the output ``(t', u')``.
In these cases, the implementation will be more efficient if explicit 'copy' commands are used instead:
- `@copy_model_to_model(from_addr, to_addr)`: Copy the value (discrete or continuous) or an entire sub-map of choices under an address namespace from the input model choice map (``t``) to the output model choice map (``t'``).
- `@copy_model_to_proposal(from_addr, to_addr)`: Copy the value (discrete or continuous) or an entire sub-map of choices under an address namespace from the input model choice map (``t``) to the output proposal choice map (``u'``).
- `@copy_proposal_to_proposal(from_addr, to_addr)`: Copy the value (discrete or continuous) or an entire sub-map of choices under an address namespace from the input proposal choice map (``u``) to the output proposal choice map (``u'``).
- `@copy_proposal_to_model(from_addr, to_addr)`: Copy the value (discrete or continuous) or an entire sub-map of choices under an address namespace from the input proposal choice map (``u``) to the output model choice map (``t'``).
It is not necessary to explicitly copy values from the previous model choice map (``t``) to the new model choice map (``t'``) at the same address.
These values will be copied automatically by the system.
Specifically, if, when using the proposed constraints, the model visits an address that was not explicitly copied or written to, the old value will be copied automatically.
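For example, here is a small hypothetical involution that only swaps the values stored at two model addresses using the copy commands; every read is from the input model choice map and every write is to the output model choice map, so applying it twice recovers the original trace:
```julia
@involution function swap_involution(model_args, proposal_args, proposal_retval)
    # the addresses (:x, 1) and (:x, 2) are illustrative
    @copy_model_to_model((:x, 1), (:x, 2))
    @copy_model_to_model((:x, 2), (:x, 1))
end
```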
Caveats:
- It is possible to write functions in the involution DSL that are not actually involutions -- Gen does not statically check whether the function is an involution or not, but it is possible to turn on a dynamic check that can detect invalid involutions using a keyword argument `check=true` to [`metropolis_hastings`](@ref).
- To avoid unnecessary recomputation within the involution of values that are already computed and available in the return value of the model or the proposal, it is possible to depend on these values through the proposal return value (the third argument to the involution). However, it is possible to introduce a dependence on the value of continuous random choices in the input model choice map and the input proposal choice map through the proposal return value, and this dependence is not tracked by the automatic differentiation that is used to compute the Jacobian correction of the acceptance probability. Therefore, you should only use the proposal return value if you are sure you are not depending on the value of continuous choices (conditioned on the values of discrete choices).
It is also possible to call one `@involution` function from another, using the `@invcall` macro.
For example, below `bar` is the top-level `@involution` function, that calls the `@involution` function `foo`:
```julia
@involution function foo(x)
..
end
@involution function bar(model_args, proposal_args, proposal_retval)
..
x = ..
..
@invcall(foo(x))
end
```
Note that when constructing involutions that call other `@involution` functions, the function being called (`foo` in this case) need not be, mathematically speaking, an involution itself in order for the top-level function (`bar` in this case) to be an involution.
Also, the top-level function must take three arguments (`model_args`, `proposal_args`, and `proposal_retval`), but any other `@involution` function may have an argument signature of the user's choosing.
Some additional tips for defining valid involutions:
- If you find yourself copying the same continuous source address to multiple locations, it probably means your involution is not valid (the Jacobian matrix will have rows that are identical, and so the Jacobian determinant will be zero).
- You can gain some confidence that your involution is valid by enabling dynamic checks (`check=true`) in [`metropolis_hastings`](@ref), which applies the involution to its output and checks that the original input is recovered.
## Reverse Kernels
The **reversal** of a stationary MCMC kernel with distribution ``k_1(t'; t)``, for model with distribution ``p(t; x)``, is another MCMC kernel with distribution:
```math
k_2(t; t') := \frac{p(t; x)}{p(t'; x)} k_1(t'; t)
```
For custom primitive kernels declared with [`@pkern`](@ref), users can declare the reversal kernel with the [`@rkern`](@ref) macro:
```julia
@rkern k1 : k2
```
This also assigns `k1` as the reversal of `k2`.
The composite kernel DSL automatically generates the reversal kernel for composite kernels, and built-in stationary kernels like [`mh`](@ref).
The reversal of a kernel (primitive or composite) can be obtained with [`reversal`](@ref).
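For example (a sketch, assuming `k` is a primitive or composite stationary kernel as above), the reversal is itself a kernel and is applied the same way:
```julia
k_rev = reversal(k)
new_trace, = k_rev(trace)
```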
## API
```@docs
metropolis_hastings
mh
mala
hmc
elliptical_slice
@pkern
@kern
@rkern
reversal
@involution
```
| 61.385965 | 790 | 0.7485 | eng_Latn | 0.997567 |
db9af52ac0c7574d7afb1e1dfe1288148f670d00 | 68 | md | Markdown | README.md | dburyak/sandbox-spring-boot | 022bc9223cf456e06f31f9da5c87730f95d48b37 | [
"MIT"
] | null | null | null | README.md | dburyak/sandbox-spring-boot | 022bc9223cf456e06f31f9da5c87730f95d48b37 | [
"MIT"
] | null | null | null | README.md | dburyak/sandbox-spring-boot | 022bc9223cf456e06f31f9da5c87730f95d48b37 | [
"MIT"
] | null | null | null | # sandbox-spring-boot
Spring boot sandbox project for experimenting
| 22.666667 | 45 | 0.838235 | eng_Latn | 0.554235 |
db9b18d4a3f853aeb7b63128015a46ec2d2cf29c | 521 | md | Markdown | README.md | eduardo-ehsc/brownie | 686a213a34ba405fb8d78b89eec7e8ba2cc3084b | [
"MIT"
] | 6 | 2020-04-25T19:13:35.000Z | 2020-04-27T18:46:14.000Z | README.md | eduardo-ehsc/hackathon-shawee | 686a213a34ba405fb8d78b89eec7e8ba2cc3084b | [
"MIT"
] | null | null | null | README.md | eduardo-ehsc/hackathon-shawee | 686a213a34ba405fb8d78b89eec7e8ba2cc3084b | [
"MIT"
] | null | null | null | <h1 align="center">Mega Hack Shawee 2.0</h1>
## :calendar: Date
23/04 ~ 09/05
## :clipboard: Description:
Repository containing our solution for Mega Hack Shawee 2.0 (Americanas Challenge),
in which our team finished in 8th place.
## :couple::couple: Team:
* [Eduardo Correia](https://github.com/eduardo-ehsc)
* [Giovanni dos Santos](https://github.com/giovanni1811)
* [Rebeca Linares](https://github.com/BecaLinares)
* [Gustavo Barbato](https://www.github.com/GugaKing491)
* [Rodolfo Engelmann](https://github.com/RodolfoHRE)
| 28.944444 | 73 | 0.727447 | por_Latn | 0.595458 |
db9b26519fbfb9309e4df8c0263efede80273106 | 1,726 | md | Markdown | src/pages/workshop/index.md | zanonnicola/llfk | da6343c7171575c46283bf9a1b4da5fec4dc1e5c | [
"MIT"
] | null | null | null | src/pages/workshop/index.md | zanonnicola/llfk | da6343c7171575c46283bf9a1b4da5fec4dc1e5c | [
"MIT"
] | null | null | null | src/pages/workshop/index.md | zanonnicola/llfk | da6343c7171575c46283bf9a1b4da5fec4dc1e5c | [
"MIT"
] | null | null | null | ---
path: /nosateliers
layout: page-workshop
date: '2018-04-27'
lng: fr
color: '#60BDC1'
title: Our workshops
metaDescription: >-
  The Open LAB for Kids offers several workshops in English, in the morning, after
  school, on Wednesdays and Saturdays, for children aged 1 to 11.
contentTitle: 'Learn, create and have fun, all in English!'
subTitle: >-
  Getting familiar with English from the youngest age through fun and
  fulfilling activities.
---
Our goal is to give meaning to learning through creative and playful activities, to encourage the natural practice of a foreign language and, more broadly, to offer rich educational content that contributes to children's development.
For you as parents, it is also a chance to discover new educational approaches for supporting your child's development and their learning of English, an opportunity to share a special moment during dedicated workshops, to meet other families in a pleasant setting, and to talk with professionals.
Our workshops are designed for babies, toddlers and children aged 1 to 11. We welcome children on Mondays, Tuesdays, Thursdays and Fridays, in the morning or after school. The Open LAB for Kids is also open on Wednesdays, Saturdays and during school holidays. On Saturday afternoons, we occasionally run discovery workshops.
Our workshops are entirely in English. They are aimed at bilingual children as well as at children and parents with little or no knowledge of English. Our team of professionals understands French and is fully trained to familiarise children with the practice of a foreign language.
| 75.043478 | 345 | 0.800116 | fra_Latn | 0.988635 |
db9b2a46ae529fb1559dedb6ac817c043cd65481 | 6,339 | md | Markdown | README.md | artysidorenko/kafkajs | 7614eeeec75af3be3deb5f986e92f96ba0aeee82 | [
"MIT"
] | null | null | null | README.md | artysidorenko/kafkajs | 7614eeeec75af3be3deb5f986e92f96ba0aeee82 | [
"MIT"
] | 6 | 2021-06-28T20:27:36.000Z | 2022-02-27T10:13:07.000Z | README.md | artysidorenko/kafkajs | 7614eeeec75af3be3deb5f986e92f96ba0aeee82 | [
"MIT"
] | null | null | null | [](https://www.npmjs.com/package/kafkajs) [](https://www.npmjs.com/package/kafkajs) [](https://dev.azure.com/tulios/kafkajs/_build/latest?definitionId=2&branchName=master) [](https://kafkajs-slackin.herokuapp.com/)
<br />
<p align="center">
<a href="https://kafka.js.org">
<img src="https://raw.githubusercontent.com/tulios/kafkajs/master/logo/v2/kafkajs_circle.svg" alt="Logo" width="125" height="125">
</a>
<h3 align="center">KafkaJS</h3>
<p align="center">
A modern Apache Kafka® client for Node.js
<br />
<a href="https://kafka.js.org/"><strong>Get Started »</strong></a>
<br />
<br />
<a href="https://kafka.js.org/docs/getting-started" target="_blank">Read the Docs</a>
·
<a href="https://github.com/tulios/kafkajs/issues/new?assignees=&labels=&template=bug_report.md&title=">Report Bug</a>
·
<a href="https://github.com/tulios/kafkajs/issues/new?assignees=&labels=&template=feature_request.md&title=">Request Feature</a>
</p>
</p>
## Table of Contents
- [About the project](#about)
- [Features](#features)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [Contributing](#contributing)
- [Help Wanted](#help-wanted)
- [Contact](#contact)
- [Sponsors](#sponsors)
- [License](#license)
- [Acknowledgements](#acknowledgements)
## <a name="about"></a> About the Project
KafkaJS is a modern [Apache Kafka](https://kafka.apache.org/) client for Node.js. It is compatible with Kafka 0.10+ and offers native support for 0.11 features.
### <a name="features"></a> Features
* Producer
* Consumer groups with pause, resume, and seek
* Transactional support for producers and consumers
* Message headers
* GZIP compression
* Snappy, LZ4 and ZSTD compression through pluggable codecs
* Plain, SSL and SASL_SSL implementations
* Support for SCRAM-SHA-256 and SCRAM-SHA-512
* Support for AWS IAM authentication
* Admin client
### <a name="getting-started"></a> Getting Started
```sh
npm install kafkajs
# yarn add kafkajs
```
#### <a name="usage"></a> Usage
```javascript
const { Kafka } = require('kafkajs')
const kafka = new Kafka({
clientId: 'my-app',
brokers: ['kafka1:9092', 'kafka2:9092']
})
const producer = kafka.producer()
const consumer = kafka.consumer({ groupId: 'test-group' })
const run = async () => {
// Producing
await producer.connect()
await producer.send({
topic: 'test-topic',
messages: [
{ value: 'Hello KafkaJS user!' },
],
})
// Consuming
await consumer.connect()
await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })
await consumer.run({
eachMessage: async ({ topic, partition, message }) => {
console.log({
partition,
offset: message.offset,
value: message.value.toString(),
})
},
})
}
run().catch(console.error)
```
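If your cluster requires TLS and SASL authentication (see the feature list above), the client is configured when it is constructed. A minimal sketch, where the broker address and credentials are placeholders:

```javascript
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['kafka1:9092'],
  ssl: true,
  sasl: {
    mechanism: 'scram-sha-256', // 'plain' and 'scram-sha-512' are also supported
    username: 'my-username',
    password: 'my-password',
  },
})
```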
Learn more about using [KafkaJS on the official site!](https://kafka.js.org)
- [Getting Started](https://kafka.js.org/docs/getting-started)
- [A Brief Intro to Kafka](https://kafka.js.org/docs/introduction)
- [Configuring KafkaJS](https://kafka.js.org/docs/configuration)
- [Example Producer](https://kafka.js.org/docs/producer-example)
- [Example Consumer](https://kafka.js.org/docs/consumer-example)
> _Read something on the website that didn't work with the latest stable version?_
[Check the pre-release versions](https://kafka.js.org/docs/pre-releases) - the website is updated on every merge to master.
## <a name="contributing"></a> Contributing
KafkaJS is an open-source project where development takes place in the open on GitHub. Although the project is maintained by a small group of dedicated volunteers, we are grateful to the community for bug fixes, feature development and other contributions.
See [Developing KafkaJS](https://kafka.js.org/docs/contribution-guide) for information on how to run and develop KafkaJS.
### <a name="help-wanted"></a> Help wanted 🤝
We welcome contributions to KafkaJS, but we also want to see a thriving third-party ecosystem. If you would like to create an open-source project that builds on top of KafkaJS, [please get in touch](https://kafkajs-slackin.herokuapp.com/) and we'd be happy to provide feedback and support.
Here are some projects that we would like to build, but haven't yet been able to prioritize:
* [Dead Letter Queue](https://eng.uber.com/reliable-reprocessing/) - Automatically reprocess messages
* ✅ [Schema Registry](https://www.confluent.io/confluent-schema-registry/) - **[Now available!](https://www.npmjs.com/package/@kafkajs/confluent-schema-registry)** thanks to [@erikengervall](https://github.com/erikengervall)
* [Metrics](https://prometheus.io/) - Integrate with the [instrumentation events](https://kafka.js.org/docs/instrumentation-events) to expose commonly used metrics
### <a name="contact"></a> Contact 💬
[Join our Slack community](https://kafkajs-slackin.herokuapp.com/)
## <a name="sponsors"></a> Sponsors ❤️
*To become a sponsor, [reach out in our Slack community](https://kafkajs-slackin.herokuapp.com/) to get in touch with one of the maintainers. Also consider becoming a Github Sponsor by following any of the links under "Sponsor this project" in the sidebar.*
<a href="https://www.confluent.io/confluent-cloud/?utm_source=kafkajs&utm_medium=opensource&utm_campaign=referral">
<img src="https://raw.githubusercontent.com/tulios/kafkajs/master/logo/confluent/logo.png" width="830px">
</a>
## <a name="license"></a> License
See [LICENSE](https://github.com/tulios/kafkajs/blob/master/LICENSE) for more details.
### <a name="acknowledgements"></a> Acknowledgements
* Thanks to [Sebastian Norde](https://github.com/sebastiannorde) for the V1 logo ❤️
* Thanks to [Tracy (Tan Yun)](https://medium.com/@tanyuntracy) for the V2 logo ❤️
<small>Apache Kafka and Kafka are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries. KafkaJS has no affiliation with the Apache Software Foundation.</small>
| 42.26 | 552 | 0.720145 | eng_Latn | 0.591424 |
db9b4518c36ee2acf7d255447588ca576a91d45c | 7,996 | md | Markdown | articles/api-management/api-management-role-based-access-control.md | kitingChris/azure-docs.de-de | a81b914393aa78dc3722e272c7f253a9c5ddd2d2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/api-management/api-management-role-based-access-control.md | kitingChris/azure-docs.de-de | a81b914393aa78dc3722e272c7f253a9c5ddd2d2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/api-management/api-management-role-based-access-control.md | kitingChris/azure-docs.de-de | a81b914393aa78dc3722e272c7f253a9c5ddd2d2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'How to use role-based access control in Azure API Management | Microsoft Docs'
description: Learn how to use the built-in roles and create custom roles in Azure API Management
services: api-management
documentationcenter: ''
author: vladvino
manager: erikre
editor: ''
ms.assetid: 364cd53e-88fb-4301-a093-f132fa1f88f5
ms.service: api-management
ms.workload: mobile
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 06/20/2018
ms.author: apimpm
ms.openlocfilehash: 2e53b0d582a69e10de22e85720833800d44058e3
ms.sourcegitcommit: 41ca82b5f95d2e07b0c7f9025b912daf0ab21909
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 06/13/2019
ms.locfileid: "66141482"
---
# <a name="how-to-use-role-based-access-control-in-azure-api-management"></a>How to use role-based access control in Azure API Management
Azure API Management relies on Azure role-based access control (RBAC) to enable fine-grained access management for API Management services and entities (for example, APIs and policies). This article gives you an overview of the built-in and custom roles in API Management. For more information on access management in the Azure portal, see [Get started with role-based access control in the Azure portal](https://azure.microsoft.com/documentation/articles/role-based-access-control-what-is/).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
## <a name="built-in-roles"></a>Built-in roles
API Management currently provides three built-in roles, and two more roles will be added in the near future. These roles can be assigned at different scopes, including subscription, resource group, and individual API Management instance. For example, if a user is assigned the Azure API Management Service Reader role at the resource group level, the user has read access to every API Management instance inside that resource group.
The following table provides brief descriptions of the built-in roles. You can assign these roles by using the Azure portal or other tools, such as Azure [PowerShell](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-powershell), the [Azure CLI](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-cli), or the [REST API](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-rest). For information about how to assign built-in roles, see [Use role assignments to manage access to your Azure subscription resources](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
| Role | Read access<sup>[1]</sup> | Write access<sup>[2]</sup> | Create, delete, scale the service; VPN and custom domain configuration | Access to the legacy publisher portal | Description
| ------------- | ---- | ---- | ---- | ---- | ----
| Azure API Management Service Contributor | ✓ | ✓ | ✓ | ✓ | Administrator. Has full CRUD access to API Management services and entities (for example, APIs and policies). Has access to the legacy publisher portal. |
| Azure API Management Service Reader | ✓ | | || Has read-only access to API Management services and entities. |
| Azure API Management Service Operator | ✓ | | ✓ | | Can manage API Management services, but not entities.|
| Azure API Management Service Editor<sup>*</sup> | ✓ | ✓ | | | Can manage API Management entities, but not services.|
| Azure API Management Content Manager<sup>*</sup> | ✓ | | | ✓ | Can manage the developer portal. Read-only access to services and entities.|
<sup>[1] Read access to API Management services and entities (for example, APIs and policies)</sup>
<sup>[2] Write access to API Management services and entities except for the following operations: instance creation, deletion, and scaling; VPN configuration; and custom domain setup</sup>
<sup>\* The Service Editor role will become available once the entire admin UI has been migrated from the existing publisher portal to the Azure portal. The Content Manager role will become available once the publisher portal has been refactored to contain only functionality related to managing the developer portal.</sup>
## <a name="custom-roles"></a>Custom roles
If none of the built-in roles meets your needs, custom roles can be created to provide more fine-grained access management for API Management entities. For example, you can create a custom role that has read-only access to an API Management service but write access to one specific API. To learn more about custom roles, see [Custom roles for Azure role-based access control](https://docs.microsoft.com/azure/role-based-access-control/custom-roles).
> [!NOTE]
> For an API Management service instance to be visible in the Azure portal, a custom role must include the ```Microsoft.ApiManagement/service/read``` action.
When you create a custom role, it is easier to start from one of the built-in roles: edit the attributes, add **Actions**, **NotActions**, or **AssignableScopes**, and then save the changes as a new role. The following example starts from the Azure API Management Service Reader role and creates a custom role named "Calculator API Contributor". You can assign the custom role to a specific API, so that the role has access only to that API.
```powershell
$role = Get-AzRoleDefinition "API Management Service Reader Role"
$role.Id = $null
$role.Name = 'Calculator API Contributor'
$role.Description = 'Has read access to Contoso APIM instance and write access to the Calculator API.'
$role.Actions.Add('Microsoft.ApiManagement/service/apis/write')
$role.Actions.Add('Microsoft.ApiManagement/service/apis/*/write')
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.ApiManagement/service/<service name>/apis/<api ID>')
New-AzRoleDefinition -Role $role
New-AzRoleAssignment -ObjectId <object ID of the user account> -RoleDefinitionName 'Calculator API Contributor' -Scope '/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.ApiManagement/service/<service name>/apis/<api ID>'
```
The article [Azure Resource Manager resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftapimanagement) lists the permissions that can be granted at the API Management level.
## <a name="video"></a>Video
> [!VIDEO https://channel9.msdn.com/Blogs/AzureApiMgmt/Role-Based-Access-Control-in-API-Management/player]
>
>
## <a name="next-steps"></a>Next steps
To learn more about role-based access control in Azure, see the following articles:
* [Get started with access management in the Azure portal](../role-based-access-control/overview.md)
* [Use role assignments to manage your Azure subscription resources](../role-based-access-control/role-assignments-portal.md)
* [Custom roles for Azure role-based access control](../role-based-access-control/custom-roles.md)
* [Azure Resource Manager resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftapimanagement)
db9b91fc3705edb0cde978e7370eb7dfac60e6ee | 7,429 | md | Markdown | docs/_posts/2014-10-28-react-v0.12.md | jflayhart/react | 932334d3d46627adb08013e8e1d1097c469597cc | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 108 | 2016-04-06T00:48:29.000Z | 2022-03-31T21:50:44.000Z | docs/_posts/2014-10-28-react-v0.12.md | wnqnguo/react | 2d9d4f6349b4d1e718151675129499bd207a1acd | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 244 | 2019-04-06T15:00:50.000Z | 2022-03-08T22:43:15.000Z | docs/_posts/2014-10-28-react-v0.12.md | wnqnguo/react | 2d9d4f6349b4d1e718151675129499bd207a1acd | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 30 | 2015-01-27T02:45:37.000Z | 2020-05-16T04:58:59.000Z | ---
title: React v0.12
author: zpao
---
We're happy to announce the availability of React v0.12! After over a week of baking as the release candidate, we uncovered and fixed a few small issues. Thanks to all of you who upgraded and gave us feedback!
We have talked a lot about some of the bigger changes in this release. [We introduced new terminology](/react/blog/2014/10/14/introducing-react-elements.html) and changed APIs to clean up and simplify some of the concepts of React. [We also made several changes to JSX](/react/blog/2014/10/16/react-v0.12-rc1.html) and deprecated a few functions. We won't go into depth about these changes again but we encourage you to read up on these changes in the linked posts. We'll summarize these changes and discuss some of the other changes and how they may impact you below. As always, a full changelog is also included below.
The release is available for download:
* **React**
Dev build with warnings: <https://fb.me/react-0.12.0.js>
Minified build for production: <https://fb.me/react-0.12.0.min.js>
* **React with Add-Ons**
Dev build with warnings: <https://fb.me/react-with-addons-0.12.0.js>
Minified build for production: <https://fb.me/react-with-addons-0.12.0.min.js>
* **In-Browser JSX transformer**
<https://fb.me/JSXTransformer-0.12.0.js>
We've also published version `0.12.0` of the `react` and `react-tools` packages on npm and the `react` package on bower.
## New Terminology & Updated APIs
v0.12 is bringing about some new terminology. [We introduced](/react/blog/2014/10/14/introducing-react-elements.html) this 2 weeks ago and we've also documented it in [a new section of the documentation](/react/docs/glossary.html). As a part of this, we also corrected many of our top-level APIs to align with the terminology. `Component` has been removed from all of our `React.render*` methods. While at one point the argument you passed to these functions was called a Component, it no longer is. You are passing ReactElements. To align with `render` methods in your component classes, we decided to keep the top-level functions short and sweet. `React.renderComponent` is now `React.render`.
We also corrected some other misnomers. `React.isValidComponent` actually determines if the argument is a ReactElement, so it has been renamed to `React.isValidElement`. In the same vein, `React.PropTypes.component` is now `React.PropTypes.element` and `React.PropTypes.renderable` is now `React.PropTypes.node`.
The old methods will still work but will warn upon first use. They will be removed in v0.13.
## JSX Changes
[We talked more in depth about these before](/react/blog/2014/10/16/react-v0.12-rc1.html#jsx-changes), so here are the highlights.
* No more `/** @jsx React.DOM */`!
* We no longer transform to a straight function call. `<Component/>` now becomes `React.createElement(Component)`
* DOM components don't make use of `React.DOM`, instead we pass the tag name directly. `<div/>` becomes `React.createElement('div')`
* We introduced spread attributes as a quick way to transfer props.
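For example, spread attributes let you forward an existing props object without `this.transferPropsTo` (the component and prop names here are made up):

```js
var extraProps = { className: 'summary', title: 'Hello' };
// Every property of extraProps is passed to MyComponent, along with foo="bar".
var element = <MyComponent {...extraProps} foo="bar" />;
```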
## DevTools Improvements, No More `__internals`
For months we've gotten complaints about the React DevTools message. It shouldn't have logged the up-sell message when you were already using the DevTools. Unfortunately this was because the way we implemented these tools resulted in the DevTools knowing about React, but not the reverse. We finally gave this some attention and enabled React to know if the DevTools are installed. We released an update to the devtools several weeks ago making this possible. Extensions in Chrome should auto-update so you probably already have the update installed!
As a result of this update, we no longer need to expose several internal modules to the world. If you were taking advantage of this implementation detail, your code will break. `React.__internals` is no more.
## License Change - BSD
We updated the license on React to the BSD 3-Clause license with an explicit patent grant. Previously we used the Apache 2 license. These licenses are very similar and our extra patent grant is equivalent to the grant provided in the Apache license. You can still use React with the confidence that we have granted the use of any patents covering it. This brings us in line with the same licensing we use across the majority of our open source projects at Facebook.
You can read the full text of the [LICENSE](https://github.com/facebook/react/blob/master/LICENSE) and [`PATENTS`](https://github.com/facebook/react/blob/master/PATENTS) files on GitHub.
- - -
## Changelog
### React Core
#### Breaking Changes
* `key` and `ref` moved off props object, now accessible on the element directly
* React is now BSD licensed with accompanying Patents grant
* Default prop resolution has moved to Element creation time instead of mount time, making them effectively static
* `React.__internals` is removed - it was exposed for DevTools which no longer needs access
* Composite Component functions can no longer be called directly - they must be wrapped with `React.createFactory` first. This is handled for you when using JSX.
#### New Features
* Spread operator (`{...}`) introduced to deprecate `this.transferPropsTo`
* Added support for more HTML attributes: `acceptCharset`, `classID`, `manifest`
#### Deprecations
* `React.renderComponent` --> `React.render`
* `React.renderComponentToString` --> `React.renderToString`
* `React.renderComponentToStaticMarkup` --> `React.renderToStaticMarkup`
* `React.isValidComponent` --> `React.isValidElement`
* `React.PropTypes.component` --> `React.PropTypes.element`
* `React.PropTypes.renderable` --> `React.PropTypes.node`
* **DEPRECATED** `React.isValidClass`
* **DEPRECATED** `instance.transferPropsTo`
* **DEPRECATED** Returning `false` from event handlers to preventDefault
* **DEPRECATED** Convenience Constructor usage as function, instead wrap with `React.createFactory`
* **DEPRECATED** use of `key={null}` to assign implicit keys
#### Bug Fixes
* Better handling of events and updates in nested results, fixing value restoration in "layered" controlled components
* Correctly treat `event.getModifierState` as case sensitive
* Improved normalization of `event.charCode`
* Better error stacks when involving autobound methods
* Removed DevTools message when the DevTools are installed
* Correctly detect required language features across browsers
* Fixed support for some HTML attributes:
* `list` updates correctly now
* `scrollLeft`, `scrollTop` removed, these should not be specified as props
* Improved error messages
### React With Addons
#### New Features
* `React.addons.batchedUpdates` added to API for hooking into update cycle
#### Breaking Changes
* `React.addons.update` uses `assign` instead of `copyProperties` which does `hasOwnProperty` checks. Properties on prototypes will no longer be updated correctly.
#### Bug Fixes
* Fixed some issues with CSS Transitions
### JSX
#### Breaking Changes
* Enforced convention: lower case tag names are always treated as HTML tags, upper case tag names are always treated as composite components
* JSX no longer transforms to simple function calls
#### New Features
* `@jsx React.DOM` no longer required
* spread (`{...}`) operator introduced to allow easier use of props
#### Bug Fixes
* JSXTransformer: Make sourcemaps an option when using APIs directly (eg, for react-rails)
| 58.039063 | 695 | 0.766187 | eng_Latn | 0.993679 |
db9b96ef5c675f5399d1c74a93fd6529d6730cb9 | 2,401 | md | Markdown | README.md | Clever/ebs-snapshots | f62721dded3d778dc8149ee95307d43f22f00dec | [
"Apache-2.0"
] | 13 | 2015-08-02T07:35:03.000Z | 2019-01-23T17:25:40.000Z | README.md | Clever/ebs-snapshots | f62721dded3d778dc8149ee95307d43f22f00dec | [
"Apache-2.0"
] | 17 | 2015-01-14T22:26:54.000Z | 2019-01-17T22:08:14.000Z | README.md | Clever/ebs-snapshots | f62721dded3d778dc8149ee95307d43f22f00dec | [
"Apache-2.0"
] | 8 | 2015-07-22T21:14:12.000Z | 2019-01-22T13:09:36.000Z | # ebs-snapshots
`ebs-snapshots` allows you to create and clean up AWS EBS snapshots according to a schedule.
(Thanks to `skymill` for the basis of this work: https://github.com/skymill/automated-ebs-snapshots)
## Usage
This requires `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, which should be set in `~/.aws/credentials`.
```
AWS_REGION=your_region \
AWS_BACKUP_REGION=your_backup_region \
BACKUP_CONFIG=s3://your-bucket/snapshots-config.yml \
python main.py
```
This starts a long-running process that will take snapshots according to the config file.
`BACKUP_CONFIG` may be a local file, an s3 path, or inline YAML/JSON.
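For example, any of the following forms should work (the bucket, file, and volume names below are placeholders, and the other required environment variables are omitted for brevity):

```
# Local file
BACKUP_CONFIG=/etc/ebs-snapshots/snapshots-config.yml python main.py

# S3 path
BACKUP_CONFIG=s3://your-bucket/snapshots-config.yml python main.py

# Inline YAML
BACKUP_CONFIG='vol-fake1234: {interval: hourly, max_snapshots: 24}' python main.py
```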
### Configuration
Configuration files are written in [yaml](http://www.yaml.org/) (a superset of JSON) format.
Top level keys are volume ids. These map to a dict of parameters:
- `interval` - frequency of snapshots: hourly, daily, monthly, yearly
- `max_snapshots` - max snapshots to keep, 0 keeps all
- `name` - name of snapshot, written to EC2 tag 'Name"
Here is an example configuration file to automate snapshots for two volumes:
```yaml
vol-fake1234:
interval: daily
max_snapshots: 0
name: Fake database
vol-fake5678:
interval: hourly
max_snapshots: 48
```
### Required Env
You must specify these env vars in order to connect to AWS and to choose the configuration file.
```
AWS_ACCESS_KEY_ID # Your AWS Credentials
AWS_SECRET_ACCESS_KEY # Your AWS Credentials
AWS_REGION # AWS Region, e.g. us-west-1
AWS_BACKUP_REGION # AWS Region for backups, e.g. us-west-2
BACKUP_CONFIG # Path to backup config. May be local file or s3 path (see "Configuration")
```
### AWS Policy
You'll need to grant the proper IAM permissions to the AWS credentials you're using.
1. ec2 volume, snapshot, and tag permissions - to create snapshots of volumes and tag them
1. s3 bucket permissions - to allow reading your config file from an s3 path
See the included [example policy](aws-iam-policy.ebs-snapshots.json).
### Optional: Running in Docker
Build the docker image:
```
docker build --tag=local/ebs-snapshots $(pwd)
```
Run as a docker container, making sure to specify required env vars:
```
docker run \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-e AWS_REGION=$AWS_REGION \
  -e AWS_BACKUP_REGION=$AWS_BACKUP_REGION \
  -e BACKUP_CONFIG=$BACKUP_CONFIG \
local/ebs-snapshots
```
## Testing
```
make test
```
| 26.977528 | 107 | 0.744273 | eng_Latn | 0.889445 |
db9c3da47177cfd237cb0622d34ddecd41dd4878 | 150 | md | Markdown | _drafts/2017-06-08-buck-config.md | tjian123/mark | 491ddbdb7c6b4dd4df37577c02a18f6e07313349 | [
"Apache-2.0"
] | null | null | null | _drafts/2017-06-08-buck-config.md | tjian123/mark | 491ddbdb7c6b4dd4df37577c02a18f6e07313349 | [
"Apache-2.0"
] | 1 | 2020-06-27T13:57:10.000Z | 2020-06-27T13:57:10.000Z | _drafts/2017-06-08-buck-config.md | tjian123/mark | 491ddbdb7c6b4dd4df37577c02a18f6e07313349 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: Buck configuration file
categories: [Coding]
tags: [Buck]
---
> [buildfile]
Settings that control the behavior of build files.
includes
A list of files that are pulled into every build file, equivalent to importing them directly in the file with include_refs.
db9d93a7303a43a0cf21467791e4c3801ec1c671 | 42,007 | md | Markdown | docs/standard/data/xml/xmlschemavalidator-push-based-validation.md | gosali/docs-1 | fc75797ccc7b10ae6b526133d70693b99963def8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-19T12:20:24.000Z | 2021-01-19T12:20:24.000Z | docs/standard/data/xml/xmlschemavalidator-push-based-validation.md | gosali/docs-1 | fc75797ccc7b10ae6b526133d70693b99963def8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/data/xml/xmlschemavalidator-push-based-validation.md | gosali/docs-1 | fc75797ccc7b10ae6b526133d70693b99963def8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-12-03T21:11:06.000Z | 2019-12-03T21:11:06.000Z | ---
title: "XmlSchemaValidator Push-Based Validation"
ms.date: "03/30/2017"
ms.technology: dotnet-standard
dev_langs:
- "csharp"
- "vb"
ms.assetid: 911d4460-dd91-4958-85b2-2ca3299f9ec6
author: "mairaw"
ms.author: "mairaw"
---
# XmlSchemaValidator Push-Based Validation
The <xref:System.Xml.Schema.XmlSchemaValidator> class provides an efficient, high-performance mechanism to validate XML data against XML schemas in a push-based manner. For example, the <xref:System.Xml.Schema.XmlSchemaValidator> class allows you to validate an XML infoset in-place without having to serialize it as an XML document and then reparse the document using a validating XML reader.
The <xref:System.Xml.Schema.XmlSchemaValidator> class can be used in advanced scenarios such as building validation engines over custom XML data sources or as a way to build a validating XML writer.
The following is an example of using the <xref:System.Xml.Schema.XmlSchemaValidator> class to validate the `contosoBooks.xml` file against the `contosoBooks.xsd` schema. The example uses the <xref:System.Xml.Serialization.XmlSerializer> class to deserialize the `contosoBooks.xml` file and pass the value of the nodes to the methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class.
> [!NOTE]
> This example is used throughout the sections of this topic.
[!code-csharp[XmlSchemaValidatorExamples#1](../../../../samples/snippets/csharp/VS_Snippets_Data/XmlSchemaValidatorExamples/CS/XmlSchemaValidatorExamples.cs#1)]
[!code-vb[XmlSchemaValidatorExamples#1](../../../../samples/snippets/visualbasic/VS_Snippets_Data/XmlSchemaValidatorExamples/VB/XmlSchemaValidatorExamples.vb#1)]
The example takes the `contosoBooks.xml` file as input.
[!code-xml[XPathXMLExamples#2](../../../../samples/snippets/xml/VS_Snippets_Data/XPathXMLExamples/XML/contosoBooks.xml#2)]
The example also takes the `contosoBooks.xsd` as an input.
```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" targetNamespace="http://www.contoso.com/books" xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="bookstore">
<xs:complexType>
<xs:sequence>
<xs:element maxOccurs="unbounded" name="book">
<xs:complexType>
<xs:sequence>
<xs:element name="title" type="xs:string" />
<xs:element name="author">
<xs:complexType>
<xs:sequence>
<xs:element minOccurs="0" name="name" type="xs:string" />
<xs:element minOccurs="0" name="first-name" type="xs:string" />
<xs:element minOccurs="0" name="last-name" type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="price" type="xs:decimal" />
</xs:sequence>
<xs:attribute name="genre" type="xs:string" use="required" />
<xs:attribute name="publicationdate" type="xs:date" use="required" />
<xs:attribute name="ISBN" type="xs:string" use="required" />
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
```
## Validating XML Data using XmlSchemaValidator
To begin validating an XML infoset, you must first initialize a new instance of the <xref:System.Xml.Schema.XmlSchemaValidator> class using the <xref:System.Xml.Schema.XmlSchemaValidator.%23ctor%2A> constructor.
The <xref:System.Xml.Schema.XmlSchemaValidator.%23ctor%2A> constructor takes <xref:System.Xml.XmlNameTable>, <xref:System.Xml.Schema.XmlSchemaSet>, and <xref:System.Xml.XmlNamespaceManager> objects as parameters as well as a <xref:System.Xml.Schema.XmlSchemaValidationFlags> value as a parameter. The <xref:System.Xml.XmlNameTable> object is used to atomize well-known namespace strings like the schema namespace, the XML namespace, and so on, and is passed to the <xref:System.Xml.Schema.XmlSchemaDatatype.ParseValue%2A> method while validating simple content. The <xref:System.Xml.Schema.XmlSchemaSet> object contains the XML schemas used to validate the XML infoset. The <xref:System.Xml.XmlNamespaceManager> object is used to resolve namespaces encountered during validation. The <xref:System.Xml.Schema.XmlSchemaValidationFlags> value is used to disable certain features of validation.
For more information about the <xref:System.Xml.Schema.XmlSchemaValidator.%23ctor%2A> constructor, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
### Initializing Validation
After an <xref:System.Xml.Schema.XmlSchemaValidator> object has been constructed, there are two overloaded <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> methods used to initialize the state of the <xref:System.Xml.Schema.XmlSchemaValidator> object. The following are the two <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> methods.
- <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A?displayProperty=nameWithType>
- <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A?displayProperty=nameWithType>
The default <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A?displayProperty=nameWithType> method initializes an <xref:System.Xml.Schema.XmlSchemaValidator> object to its starting state, and the overloaded <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A?displayProperty=nameWithType> method that takes an <xref:System.Xml.Schema.XmlSchemaObject> as a parameter initializes an <xref:System.Xml.Schema.XmlSchemaValidator> object to its starting state for partial validation.
Both <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> methods can only be called immediately after an <xref:System.Xml.Schema.XmlSchemaValidator> object has been constructed or after a call to <xref:System.Xml.Schema.XmlSchemaValidator.EndValidation%2A>.
For an example of the <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A?displayProperty=nameWithType> method, see the example in the introduction. For more information about the <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
#### Partial Validation
The <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A?displayProperty=nameWithType> method that takes an <xref:System.Xml.Schema.XmlSchemaObject> as a parameter initializes an <xref:System.Xml.Schema.XmlSchemaValidator> object to its starting state for partial validation.
In the following example, an <xref:System.Xml.Schema.XmlSchemaObject> is initialized for partial validation using the <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A?displayProperty=nameWithType> method. The `orderNumber` schema element is passed by selecting the schema element by <xref:System.Xml.XmlQualifiedName> in the <xref:System.Xml.Schema.XmlSchemaObjectTable> collection returned by the <xref:System.Xml.Schema.XmlSchemaSet.GlobalElements%2A> property of the <xref:System.Xml.Schema.XmlSchemaSet> object. The <xref:System.Xml.Schema.XmlSchemaValidator> object then validates this specific element.
```vb
Dim schemaSet As XmlSchemaSet = New XmlSchemaSet()
schemaSet.Add(Nothing, "schema.xsd")
schemaSet.Compile()
Dim nameTable As NameTable = New NameTable()
Dim manager As XmlNamespaceManager = New XmlNamespaceManager(nameTable)
Dim validator As XmlSchemaValidator = New XmlSchemaValidator(nameTable, schemaSet, manager, XmlSchemaValidationFlags.None)
validator.Initialize(schemaSet.GlobalElements.Item(New XmlQualifiedName("orderNumber")))
validator.ValidateElement("orderNumber", "", Nothing)
validator.ValidateEndOfAttributes(Nothing)
validator.ValidateText("123")
validator.ValidateEndElement(Nothing)
```
```csharp
XmlSchemaSet schemaSet = new XmlSchemaSet();
schemaSet.Add(null, "schema.xsd");
schemaSet.Compile();
NameTable nameTable = new NameTable();
XmlNamespaceManager manager = new XmlNamespaceManager(nameTable);
XmlSchemaValidator validator = new XmlSchemaValidator(nameTable, schemaSet, manager, XmlSchemaValidationFlags.None);
validator.Initialize(schemaSet.GlobalElements[new XmlQualifiedName("orderNumber")]);
validator.ValidateElement("orderNumber", "", null);
validator.ValidateEndOfAttributes(null);
validator.ValidateText("123");
validator.ValidateEndElement(null);
```
The example takes the following XML schema as input.
```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="orderNumber" type="xs:int" />
</xs:schema>
```
For more information about the <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
### Adding Additional Schemas
The <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> method of the <xref:System.Xml.Schema.XmlSchemaValidator> class is used to add an XML schema to the set of schemas used during validation. The <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> method can be used to simulate the effect of encountering an inline XML schema in the XML infoset being validated.
> [!NOTE]
> The target namespace of the <xref:System.Xml.Schema.XmlSchema> parameter cannot match that of any element or attribute already encountered by the <xref:System.Xml.Schema.XmlSchemaValidator> object.
>
> If the <xref:System.Xml.Schema.XmlSchemaValidationFlags.ProcessInlineSchema?displayProperty=nameWithType> value was not passed as a parameter to the <xref:System.Xml.Schema.XmlSchemaValidator.%23ctor%2A> constructor, the <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> method does nothing.
 The result of the <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> method is dependent on the current XML node context being validated. For more information about validation contexts, see the "Validation Context" section of this topic.
For more information about the <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> method, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
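 As a minimal sketch (this assumes `validator` was constructed with the `ProcessInlineSchema` validation flag as described above, and `extra.xsd` is a placeholder name for a schema whose target namespace has not been encountered yet), an additional schema can be supplied like this:

```csharp
// Read the additional schema; "extra.xsd" is a placeholder for this sketch.
XmlSchema extraSchema;
using (XmlReader schemaReader = XmlReader.Create("extra.xsd"))
{
    extraSchema = XmlSchema.Read(schemaReader, null);
}

// Add it to the set of schemas the validator uses for the rest of the validation.
validator.AddSchema(extraSchema);
```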
### Validating Elements, Attributes, and Content
The <xref:System.Xml.Schema.XmlSchemaValidator> class provides several methods used to validate elements, attributes, and content in an XML infoset against XML schemas. The following table describes each of these methods.
|Method|Description|
|------------|-----------------|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A>|Validates the element name in the current context.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>|Validates the attribute in the current element context or against the <xref:System.Xml.Schema.XmlSchemaAttribute> object passed as a parameter to the <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndOfAttributes%2A>|Verifies whether all the required attributes in the element context are present and prepares the <xref:System.Xml.Schema.XmlSchemaValidator> object to validate the child content of the element.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A>|Validates whether text is allowed in the current element context, and accumulates the text for validation if the current element has simple content.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateWhitespace%2A>|Validates whether white-space is allowed in the current element context, and accumulates the white-space for validation whether the current element has simple content.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A>|Verifies whether the text content of the element is valid according to its data type for elements with simple content, and verifies whether the content of the current element is complete for elements with complex content.|
|<xref:System.Xml.Schema.XmlSchemaValidator.SkipToEndElement%2A>|Skips validation of the current element content and prepares the <xref:System.Xml.Schema.XmlSchemaValidator> object to validate content in the parent element's context.|
|<xref:System.Xml.Schema.XmlSchemaValidator.EndValidation%2A>|Ends validation and checks identity constraints for the entire XML document if the <xref:System.Xml.Schema.XmlSchemaValidationFlags.ProcessIdentityConstraints> validation option is set.|
> [!NOTE]
> The <xref:System.Xml.Schema.XmlSchemaValidator> class has a defined state transition that enforces the sequence and occurrence of calls made to each of the methods described in the previous table. The specific state transition of the <xref:System.Xml.Schema.XmlSchemaValidator> class is described in the "XmlSchemaValidator State Transition" section of this topic.
For an example of the methods used to validate elements, attributes, and content in an XML infoset, see the example in the previous section. For more information about these methods, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
#### Validating Content Using an XmlValueGetter
 The <xref:System.Xml.Schema.XmlValueGetter>`delegate` can be used to pass the value of attribute, text, or white-space nodes as a Common Language Runtime (CLR) type compatible with the XML Schema Definition Language (XSD) type of the attribute, text, or white-space node. An <xref:System.Xml.Schema.XmlValueGetter>`delegate` is useful if the CLR value of an attribute, text, or white-space node is already available, because it avoids the cost of converting it to a `string` and then reparsing it again for validation.
The <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>, <xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A>, and <xref:System.Xml.Schema.XmlSchemaValidator.ValidateWhitespace%2A> methods are overloaded and accept the value of attribute, text, or white-space nodes as a `string` or <xref:System.Xml.Schema.XmlValueGetter>`delegate`.
The following methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class accept an <xref:System.Xml.Schema.XmlValueGetter>`delegate` as a parameter.
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateWhitespace%2A>
The following is an example <xref:System.Xml.Schema.XmlValueGetter>`delegate` taken from the <xref:System.Xml.Schema.XmlSchemaValidator> class example in the introduction. The <xref:System.Xml.Schema.XmlValueGetter>`delegate` returns the value of an attribute as a <xref:System.DateTime> object. To validate this <xref:System.DateTime> object returned by the <xref:System.Xml.Schema.XmlValueGetter>, the <xref:System.Xml.Schema.XmlSchemaValidator> object first converts it to the ValueType (ValueType is the default CLR mapping for the XSD type) for the data type of the attribute and then checks facets on the converted value.
```vb
Shared dateTimeGetterContent As Object
Shared Function dateTimeGetterHandle() As Object
Return dateTimeGetterContent
End Function
Shared Function dateTimeGetter(ByVal dateTime As DateTime) As XmlValueGetter
dateTimeGetterContent = dateTime
Return New XmlValueGetter(AddressOf dateTimeGetterHandle)
End Function
```
```csharp
static object dateTimeGetterContent;
static object dateTimeGetterHandle()
{
return dateTimeGetterContent;
}
static XmlValueGetter dateTimeGetter(DateTime dateTime)
{
dateTimeGetterContent = dateTime;
return new XmlValueGetter(dateTimeGetterHandle);
}
```
For a complete example of the <xref:System.Xml.Schema.XmlValueGetter>`delegate`, see the example in the introduction. For more information about the <xref:System.Xml.Schema.XmlValueGetter>`delegate`, see the <xref:System.Xml.Schema.XmlValueGetter>, and <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
#### Post-Schema-Validation-Information
The <xref:System.Xml.Schema.XmlSchemaInfo> class represents some of the Post-Schema-Validation-Information of an XML node validated by the <xref:System.Xml.Schema.XmlSchemaValidator> class. Various methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class accept an <xref:System.Xml.Schema.XmlSchemaInfo> object as an optional, (`null`) `out` parameter.
Upon successful validation, properties of the <xref:System.Xml.Schema.XmlSchemaInfo> object are set with the results of the validation. For example, upon successful validation of an attribute using the <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A> method, the <xref:System.Xml.Schema.XmlSchemaInfo> object's (if specified) <xref:System.Xml.Schema.XmlSchemaInfo.SchemaAttribute%2A>, <xref:System.Xml.Schema.XmlSchemaInfo.SchemaType%2A>, <xref:System.Xml.Schema.XmlSchemaInfo.MemberType%2A>, and <xref:System.Xml.Schema.XmlSchemaInfo.Validity%2A> properties are set with the results of the validation.
The following <xref:System.Xml.Schema.XmlSchemaValidator> class methods accept an <xref:System.Xml.Schema.XmlSchemaInfo> object as an out parameter.
- <xref:System.Xml.Schema.XmlSchemaValidator.SkipToEndElement%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A>
- <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndOfAttributes%2A>
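 For instance, a small sketch (reusing the `genre` attribute from the contosoBooks schema above, and assuming `validator` has just validated the start of the `book` element) might inspect the Post-Schema-Validation-Information like this:

```csharp
// The schemaInfo object is filled in by the validator on a successful call.
XmlSchemaInfo schemaInfo = new XmlSchemaInfo();
validator.ValidateAttribute("genre", "", "autobiography", schemaInfo);

Console.WriteLine(schemaInfo.Validity);    // for example, Valid
Console.WriteLine(schemaInfo.SchemaType);  // the simple type of the genre attribute
```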
For a complete example of the <xref:System.Xml.Schema.XmlSchemaInfo> class, see the example in the introduction. For more information about the <xref:System.Xml.Schema.XmlSchemaInfo> class, see the <xref:System.Xml.Schema.XmlSchemaInfo> class reference documentation.
### Retrieving Expected Particles, Attributes, and Unspecified Default Attributes
The <xref:System.Xml.Schema.XmlSchemaValidator> class provides the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A>, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A>, and <xref:System.Xml.Schema.XmlSchemaValidator.GetUnspecifiedDefaultAttributes%2A> methods to retrieve the expected particles, attributes, and unspecified default attributes in the current validation context.
#### Retrieving Expected Particles
The <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method returns an array of <xref:System.Xml.Schema.XmlSchemaParticle> objects containing the expected particles in the current element context. The valid particles that can be returned by the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method are instances of the <xref:System.Xml.Schema.XmlSchemaElement> and <xref:System.Xml.Schema.XmlSchemaAny> classes.
When the compositor for the content model is an `xs:sequence`, only the next particle in the sequence is returned. If the compositor for the content model is an `xs:all` or an `xs:choice`, then all valid particles that could follow in the current element context are returned.
> [!NOTE]
> If the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method is called immediately after calling the <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method, the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method returns all global elements.
For example, in the XML Schema Definition Language (XSD) schema and XML document that follow, after validating the `book` element, the `book` element is the current element context. The <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method returns an array containing a single <xref:System.Xml.Schema.XmlSchemaElement> object representing the `title` element. When the validation context is the `title` element, the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method returns an empty array. If the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method is called after the `title` element has been validated but before the `description` element has been validated, it returns an array containing a single <xref:System.Xml.Schema.XmlSchemaElement> object representing the `description` element. If the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method is called after the `description` element has been validated then it returns an array containing a single <xref:System.Xml.Schema.XmlSchemaAny> object representing the wildcard.
```vb
Dim reader As XmlReader = XmlReader.Create("input.xml")
Dim schemaSet As XmlSchemaSet = New XmlSchemaSet()
schemaSet.Add(Nothing, "schema.xsd")
Dim manager As XmlNamespaceManager = New XmlNamespaceManager(reader.NameTable)
Dim validator As XmlSchemaValidator = New XmlSchemaValidator(reader.NameTable,schemaSet,manager,XmlSchemaValidationFlags.None)
validator.Initialize()
validator.ValidateElement("book", "", Nothing)
validator.ValidateEndOfAttributes(Nothing)
For Each element As XmlSchemaElement In validator.GetExpectedParticles()
Console.WriteLine(element.Name)
Next
validator.ValidateElement("title", "", Nothing)
validator.ValidateEndOfAttributes(Nothing)
For Each element As XmlSchemaElement In validator.GetExpectedParticles()
Console.WriteLine(element.Name)
Next
validator.ValidateEndElement(Nothing)
For Each element As XmlSchemaElement In validator.GetExpectedParticles()
Console.WriteLine(element.Name)
Next
validator.ValidateElement("description", "", Nothing)
validator.ValidateEndOfAttributes(Nothing)
validator.ValidateEndElement(Nothing)
For Each particle As XmlSchemaParticle In validator.GetExpectedParticles()
Console.WriteLine(particle.GetType())
Next
validator.ValidateElement("namespace", "", Nothing)
validator.ValidateEndOfAttributes(Nothing)
validator.ValidateEndElement(Nothing)
validator.ValidateEndElement(Nothing)
```
```csharp
XmlReader reader = XmlReader.Create("input.xml");
XmlSchemaSet schemaSet = new XmlSchemaSet();
schemaSet.Add(null, "schema.xsd");
XmlNamespaceManager manager = new XmlNamespaceManager(reader.NameTable);
XmlSchemaValidator validator = new XmlSchemaValidator(reader.NameTable, schemaSet, manager, XmlSchemaValidationFlags.None);
validator.Initialize();
validator.ValidateElement("book", "", null);
validator.ValidateEndOfAttributes(null);
foreach (XmlSchemaElement element in validator.GetExpectedParticles())
{
Console.WriteLine(element.Name);
}
validator.ValidateElement("title", "", null);
validator.ValidateEndOfAttributes(null);
foreach (XmlSchemaElement element in validator.GetExpectedParticles())
{
Console.WriteLine(element.Name);
}
validator.ValidateEndElement(null);
foreach (XmlSchemaElement element in validator.GetExpectedParticles())
{
Console.WriteLine(element.Name);
}
validator.ValidateElement("description", "", null);
validator.ValidateEndOfAttributes(null);
validator.ValidateEndElement(null);
foreach (XmlSchemaParticle particle in validator.GetExpectedParticles())
{
Console.WriteLine(particle.GetType());
}
validator.ValidateElement("namespace", "", null);
validator.ValidateEndOfAttributes(null);
validator.ValidateEndElement(null);
validator.ValidateEndElement(null);
```
 The example takes the following XSD schema as input.

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string" />
        <xs:element name="description" type="xs:string" />
        <xs:any processContents="lax" maxOccurs="unbounded" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

 The example takes the following XML as input.

```xml
<book>
  <title>My Book</title>
  <description>My Book's Description</description>
  <namespace>System.Xml.Schema</namespace>
</book>
```
> [!NOTE]
> The results of the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A>, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A>, and <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class are dependent on the current context being validated. For more information, see the "Validation Context" section of this topic.
For an example of the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method, see the example in the introduction. For more information about the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
#### Retrieving Expected Attributes
The <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> method returns an array of <xref:System.Xml.Schema.XmlSchemaAttribute> objects containing the expected attributes in the current element context.
For example, in the example in the introduction, the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> method is used to retrieve all the attributes of the `book` element.
If you call the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> method immediately after the <xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A> method, all the attributes that could appear in the XML document are returned. However, if you call the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> method after one or more calls to the <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A> method, the attributes that have not yet been validated for the current element are returned.
> [!NOTE]
> The results of the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A>, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A>, and <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class are dependent on the current context being validated. For more information, see the "Validation Context" section of this topic.
For an example of the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> method, see the example in the introduction. For more information about the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> method, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
#### Retrieving Unspecified Default Attributes
The <xref:System.Xml.Schema.XmlSchemaValidator.GetUnspecifiedDefaultAttributes%2A> method populates the <xref:System.Collections.ArrayList> specified with <xref:System.Xml.Schema.XmlSchemaAttribute> objects for any attributes with default values that have not been previously validated using the <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A> method in the element context. The <xref:System.Xml.Schema.XmlSchemaValidator.GetUnspecifiedDefaultAttributes%2A> method should be called after calling the <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A> method on each attribute in the element context. The <xref:System.Xml.Schema.XmlSchemaValidator.GetUnspecifiedDefaultAttributes%2A> method should be used to determine what default attributes are to be inserted into the XML document being validated.
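 As a brief sketch (assuming `validator` has already validated the attributes that are present on the current element), the unspecified default attributes can be collected like this:

```csharp
// Collect default attributes that were not present in the instance document.
ArrayList defaultAttributes = new ArrayList();
validator.GetUnspecifiedDefaultAttributes(defaultAttributes);

foreach (XmlSchemaAttribute defaultAttribute in defaultAttributes)
{
    // Each entry could be inserted into the output with its default value.
    Console.WriteLine("{0} = {1}", defaultAttribute.QualifiedName, defaultAttribute.DefaultValue);
}
```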
For more information about the <xref:System.Xml.Schema.XmlSchemaValidator.GetUnspecifiedDefaultAttributes%2A> method, see the <xref:System.Xml.Schema.XmlSchemaValidator> class reference documentation.
### Handling Schema Validation Events
Schema validation warnings and errors encountered during validation are handled by the <xref:System.Xml.Schema.XmlSchemaValidator.ValidationEventHandler> event of the <xref:System.Xml.Schema.XmlSchemaValidator> class.
Schema validation warnings have an <xref:System.Xml.Schema.XmlSeverityType> value of <xref:System.Xml.Schema.XmlSeverityType.Warning> and schema validation errors have an <xref:System.Xml.Schema.XmlSeverityType> value of <xref:System.Xml.Schema.XmlSeverityType.Error>. If no <xref:System.Xml.Schema.XmlSchemaValidator.ValidationEventHandler> has been assigned, an <xref:System.Xml.Schema.XmlSchemaValidationException> is thrown for all schema validation errors with an <xref:System.Xml.Schema.XmlSeverityType> value of <xref:System.Xml.Schema.XmlSeverityType.Error>. However, an <xref:System.Xml.Schema.XmlSchemaValidationException> is not thrown for schema validation warnings with an <xref:System.Xml.Schema.XmlSeverityType> value of <xref:System.Xml.Schema.XmlSeverityType.Warning>.
The following is an example of a <xref:System.Xml.Schema.ValidationEventHandler> that receives schema validation warnings and errors encountered during schema validation taken from the example in the introduction.
```vb
Shared Sub SchemaValidationEventHandler(ByVal sender As Object, ByVal e As ValidationEventArgs)
Select Case e.Severity
Case XmlSeverityType.Error
Console.WriteLine(vbCrLf & "Error: {0}", e.Message)
Exit Sub
Case XmlSeverityType.Warning
Console.WriteLine(vbCrLf & "Warning: {0}", e.Message)
Exit Sub
End Select
End Sub
```
```csharp
static void SchemaValidationEventHandler(object sender, ValidationEventArgs e)
{
switch (e.Severity)
{
case XmlSeverityType.Error:
Console.WriteLine("\nError: {0}", e.Message);
break;
case XmlSeverityType.Warning:
Console.WriteLine("\nWarning: {0}", e.Message);
break;
}
}
```
For a complete example of the <xref:System.Xml.Schema.ValidationEventHandler>, see the example in the introduction. For more information, see the <xref:System.Xml.Schema.XmlSchemaInfo> class reference documentation.
## XmlSchemaValidator State Transition
The <xref:System.Xml.Schema.XmlSchemaValidator> class has a defined state transition that enforces the sequence and occurrence of calls made to each of the methods used to validate elements, attributes, and content in an XML infoset.
The following table describes the state transition of the <xref:System.Xml.Schema.XmlSchemaValidator> class, and the sequence and occurrence of method calls that can be made in each state.
|State|Transition|
|-----------|----------------|
|Validate|<xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> (<xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A> \| TopLevel*) <xref:System.Xml.Schema.XmlSchemaValidator.EndValidation%2A>|
|TopLevel|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateWhitespace%2A> \| <xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A> \| Element|
|Element|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A> <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>\* (<xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndOfAttributes%2A> Content\*)? <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A> \|<br /><br /> <xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A> <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>\* <xref:System.Xml.Schema.XmlSchemaValidator.SkipToEndElement%2A> \|<br /><br /> <xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A> <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>\* <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndOfAttributes%2A> Content\* <xref:System.Xml.Schema.XmlSchemaValidator.SkipToEndElement%2A> \||
|Content|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateWhitespace%2A> \| <xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A> \| Element|
> [!NOTE]
> An <xref:System.InvalidOperationException> is thrown by each of the methods in the table above when the call to the method is made in the incorrect sequence according to the current state of an <xref:System.Xml.Schema.XmlSchemaValidator> object.
The state transition table above uses punctuation symbols to describe the methods and other states that can be called for each state of the state transition of the <xref:System.Xml.Schema.XmlSchemaValidator> class. The symbols used are the same symbols found in the XML Standards reference for Document Type Definition (DTD).
The following table describes how the punctuation symbols found in the state transition table above affect the methods and other states that can be called for each state in the state transition of the <xref:System.Xml.Schema.XmlSchemaValidator> class.
|Symbol|Description|
|------------|-----------------|
|\||Either method or state (the one before the bar or the one after it) can be called.|
|?|The method or state that precedes the question mark is optional but if it is called it can only be called once.|
|*|The method or state that precedes the * symbol is optional, and can be called more than once.|
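 For example, a minimal legal call sequence for a single element with simple content (assuming `orderNumber` is declared as a global element in the schema set, as in the partial validation example above) walks through the states like this:

```csharp
validator.Initialize();                               // Validate state
validator.ValidateElement("orderNumber", "", null);   // enter the Element state
validator.ValidateEndOfAttributes(null);              // no attributes to validate
validator.ValidateText("123");                        // Content state
validator.ValidateEndElement(null);                   // close the element
validator.EndValidation();                            // end validation
```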
## Validation Context
The methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class used to validate elements, attributes, and content in an XML infoset, change the validation context of an <xref:System.Xml.Schema.XmlSchemaValidator> object. For example, the <xref:System.Xml.Schema.XmlSchemaValidator.SkipToEndElement%2A> method skips validation of the current element content and prepares the <xref:System.Xml.Schema.XmlSchemaValidator> object to validate content in the parent element's context; it is equivalent to skipping validation for all the children of the current element and then calling the <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A> method.
The results of the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A>, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A>, and <xref:System.Xml.Schema.XmlSchemaValidator.AddSchema%2A> methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class are dependent on the current context being validated.
The following table describes the results of calling these methods after calling one of the methods of the <xref:System.Xml.Schema.XmlSchemaValidator> class used to validate elements, attributes, and content in an XML infoset.
|Method|GetExpectedParticles|GetExpectedAttributes|AddSchema|
|------------|--------------------------|---------------------------|---------------|
|<xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A>|If the default <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method is called, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns an array containing all global elements.<br /><br /> If the overloaded <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method that takes an <xref:System.Xml.Schema.XmlSchemaObject> as a parameter is called to initialize partial validation of an element, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns only the element to which the <xref:System.Xml.Schema.XmlSchemaValidator> object was initialized.|If the default <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method is called, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns an empty array.<br /><br /> If the overload of the <xref:System.Xml.Schema.XmlSchemaValidator.Initialize%2A> method that takes an <xref:System.Xml.Schema.XmlSchemaObject> as a parameter is called to initialize partial validation of an attribute, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns only the attribute to which the <xref:System.Xml.Schema.XmlSchemaValidator> object was initialized.|Adds the schema to the <xref:System.Xml.Schema.XmlSchemaSet> of the <xref:System.Xml.Schema.XmlSchemaValidator> object if it has no preprocessing errors.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateElement%2A>|If the context element is valid, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns the sequence of elements expected as children of the context element.<br /><br /> If the context element is invalid, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns an empty array.|If the context element is valid, and if no call to <xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A> has been previously made, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns a list of all the attributes defined on the context element.<br /><br /> If some attributes have already been validated, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns a list of the remaining attributes to be validated.<br /><br /> If the context element is invalid, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns an empty array.|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateAttribute%2A>|If the context attribute is a top-level attribute, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns an empty array.<br /><br /> Otherwise <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns the sequence of elements expected as the first child of the context element.|If the context attribute is a top-level attribute, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns an empty array.<br /><br /> Otherwise <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns the list of remaining attributes to be validated.|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.GetUnspecifiedDefaultAttributes%2A>|<xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns the sequence of elements expected as the first child of the context element.|<xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns a list of the required and optional attributes that are yet to be validated for the context element.|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndOfAttributes%2A>|<xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns the sequence of elements expected as the first child of the context element.|<xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns an empty array.|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A>|If the context element's contentType is Mixed, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns the sequence of elements expected in the next position.<br /><br /> If the context element's contentType is TextOnly or Empty, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns an empty array.<br /><br /> If the context element's contentType is ElementOnly, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns the sequence of elements expected in the next position but a validation error has already occurred.|<xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns the context element's list of attributes not validated.|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateWhitespace%2A>|If the context white-space is top-level white-space, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns an empty array.<br /><br /> Otherwise the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> method's behavior is the same as in <xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A>.|If the context white-space is top-level white-space, <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns an empty array.<br /><br /> Otherwise the <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> method's behavior is the same as in <xref:System.Xml.Schema.XmlSchemaValidator.ValidateText%2A>.|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A>|<xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedParticles%2A> returns the sequence of elements expected after the context element (possible siblings).|<xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns the context element's list of attributes not validated.<br /><br /> If the context element has no parent then <xref:System.Xml.Schema.XmlSchemaValidator.GetExpectedAttributes%2A> returns an empty list (the context element is the parent of the current element on which <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A> was called).|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.SkipToEndElement%2A>|Same as <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A>.|Same as <xref:System.Xml.Schema.XmlSchemaValidator.ValidateEndElement%2A>.|Same as above.|
|<xref:System.Xml.Schema.XmlSchemaValidator.EndValidation%2A>|Returns an empty array.|Returns an empty array.|Same as above.|
> [!NOTE]
> The values returned by the various properties of the <xref:System.Xml.Schema.XmlSchemaValidator> class are not altered by calling any of the methods in the above table.
## See Also
<xref:System.Xml.Schema.XmlSchemaValidator>
| 88.25 | 1,411 | 0.776228 | eng_Latn | 0.690357 |
db9df0e9150d6a4af993fec370740f55275f68bf | 11,093 | md | Markdown | articles/cognitive-services/Speech-Service/includes/quickstarts/from-blob/javascript/node.md | GordenW/azure-docs.zh-cn | 2b69134b6401663a0fe76e07cd81d97da080bda1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Speech-Service/includes/quickstarts/from-blob/javascript/node.md | GordenW/azure-docs.zh-cn | 2b69134b6401663a0fe76e07cd81d97da080bda1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Speech-Service/includes/quickstarts/from-blob/javascript/node.md | GordenW/azure-docs.zh-cn | 2b69134b6401663a0fe76e07cd81d97da080bda1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: IEvangelist
ms.service: cognitive-services
ms.topic: include
ms.date: 03/12/2020
ms.author: trbye
ms.custom: devx-track-javascript
ms.openlocfilehash: 5d1d7008151ae61a72368d3d8ecfaf545a2080fa
ms.sourcegitcommit: 42107c62f721da8550621a4651b3ef6c68704cd3
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 07/29/2020
ms.locfileid: "87406158"
---
## <a name="prerequisites"></a>先决条件
在开始之前,请务必:
> [!div class="checklist"]
> * [设置开发环境并创建空项目](../../../../quickstarts/setup-platform.md?tabs=vs&pivots=programmming-language-javascript)
> * [创建 Azure 语音资源](../../../../get-started.md)
> * [将源文件上传到 Azure blob](https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-portal)
## <a name="create-a-new-js-file"></a>新建 JS 文件
第一步是确保在你喜爱的编辑器中打开项目。
调用文件 index.js。
## <a name="start-with-some-boilerplate-code"></a>从一些样本代码入手
添加一些代码作为项目的框架。
```JavaScript
const https = require("https");
// Replace with your subscription key
SubscriptionKey = "YourSubscriptionKey";
// Update with your service region
Region = "YourServiceRegion";
Port = 443;
// Recordings and locale
Locale = "en-US";
RecordingsBlobUri = "YourFileUrl";
// Name and description
Name = "Simple transcription";
Description = "Simple transcription description";
SpeechToTextBasePath = "/api/speechtotext/v2.0/";
```
[!INCLUDE [placeholder-replacements](../placeholder-replacement.md)]
## <a name="json-wrappers"></a>JSON 包装器
因为 REST API 接受 JSON 格式的请求并返回 JSON 格式的结果。
为了使请求和响应更易于理解,我们将声明一些用于对 JSON 进行序列化/反序列化处理的类。
```JavaScript
class ModelIdentity {
id;
}
class Transcription {
Name;
Description;
Locale;
RecordingsUrl;
ResultsUrls;
Id;
CreatedDateTime;
LastActionDateTime;
Status;
StatusMessage;
}
class TranscriptionDefinition {
Name;
Description;
RecordingsUrl;
Locale;
Models;
Properties;
}
```
## <a name="create-an-initial-transcription-request"></a>创建初始听录请求。
接下来,我们将生成听录请求。
```JavaScript
const ts = {
Name: Name,
Description: Description,
Locale: Locale,
RecordingsUrl: RecordingsBlobUri,
Properties: {
"PunctuationMode": "DictatedAndAutomatic",
"ProfanityFilterMode": "Masked",
"AddWordLevelTimestamps": "True"
},
Models: []
}
const postPayload = JSON.stringify(ts);
const startOptions = {
hostname: Region + ".cris.ai",
port: Port,
path: SpeechToTextBasePath + "Transcriptions/",
method: "POST",
headers: {
"Content-Type": "application/json",
'Content-Length': postPayload.length,
"Ocp-Apim-Subscription-Key": SubscriptionKey
}
}
```
## <a name="send-the-transcription-request"></a>发送听录请求。
现在我们将请求发布到语音服务并检查初始响应代码。 此响应代码将仅指示服务是否已收到请求。 该服务将在响应标头中返回一个 URL,这是它将存储听录状态的位置。
然后,我们将调用方法 `CheckTranscriptionStatus` 来检查状态并最终输出结果。 接下来,我们将实现 `CheckTranscriptionStatus`。
```JavaScript
const request = https.request(startOptions, (response) => {
if (response.statusCode != 202) {
console.error("Error, status code " + response.statusCode);
} else {
const transcriptionLocation = response.headers.location;
console.info("Created transcription at location " + transcriptionLocation);
console.info("Checking status.");
CheckTranscriptionStatus(transcriptionLocation);
}
});
request.on("error", error => {
console.error(error);
});
request.write(postPayload);
request.end();
```
## <a name="check-the-requests-status"></a>检查请求状态
由于服务以异步方式处理听录,因此需要时常轮询其状态。 我们每 5 秒查看一次。
通过检索在发布请求时收到的 URL 中的内容,可以查看状态。 内容返回后,我们将其反序列化为一个帮助程序类,使其便于交互。
下面是一个轮询代码,其中显示了除成功完成之外的所有状态,我们会在下一步完成该操作。
`CheckTranscriptionStatus` 从听录请求中获取状态 URL,并每 5 秒轮询一次状态 URL,直到它指示成功或失败为止。 然后,它调用 `PrintResults` 以输出听录结果。 接下来,我们将实现 `PrintResults`。
```csharp
function CheckTranscriptionStatus(statusUrl) {
transcription = null;
const fetchOptions = {
headers: {
"Ocp-Apim-Subscription-Key": SubscriptionKey
}
}
const fetchRequest = https.get(new URL(statusUrl), fetchOptions, (response) => {
if (response.statusCode !== 200) {
console.info("Error retrieving status: " + response.statusCode);
} else {
let responseText = '';
response.setEncoding('utf8');
response.on("data", (chunk) => {
responseText += chunk;
});
response.on("end", () => {
const statusObject = JSON.parse(responseText);
var done = false;
switch (statusObject.status) {
case "Failed":
console.info("Transcription failed. Status: " + transcription.StatusMessage);
done = true;
break;
case "Succeeded":
done = true;
PrintResults(statusObject.resultsUrls["channel_0"]);
break;
case "Running":
console.info("Transcription is still running.");
break;
case "NotStarted":
console.info("Transcription has not started.");
break;
}
if (!done) {
setTimeout(() => {
CheckTranscriptionStatus(statusUrl);
}, (5000));
}
});
}
});
fetchRequest.on("error", error => {
console.error(error);
});
}
```
## <a name="display-the-transcription-results"></a>显示听录结果
服务成功完成听录后,结果将存储在可从状态响应中获取的其他 URL 中。 在此,我们先发出请求将这些结果下载到临时文件中,再进行读取和反序列化操作。
加载结果后,可以将其打印到控制台。
```JavaScript
function PrintResults(resultUrl)
{
const fetchOptions = {
headers: {
"Ocp-Apim-Subscription-Key": SubscriptionKey
}
}
const fetchRequest = https.get(new URL(resultUrl), fetchOptions, (response) => {
if (response.statusCode !== 200) {
console.info("Error retrieving status: " + response.statusCode);
} else {
let responseText = '';
response.setEncoding('utf8');
response.on("data", (chunk) => {
responseText += chunk;
});
response.on("end", () => {
console.info("Transcription Results:");
console.info(responseText);
});
}
});
}
```
## <a name="check-your-code"></a>查看代码
此时,代码应如下所示:
```JavaScript
const https = require("https");
// Replace with your subscription key
SubscriptionKey = "YourSubscriptionKey";
// Update with your service region
Region = "YourServiceRegion";
Port = 443;
// Recordings and locale
Locale = "en-US";
RecordingsBlobUri = "YourFileUrl";
// Name and description
Name = "Simple transcription";
Description = "Simple transcription description";
SpeechToTextBasePath = "/api/speechtotext/v2.0/";
class ModelIdentity {
id;
}
class Transcription {
Name;
Description;
Locale;
RecordingsUrl;
ResultsUrls;
Id;
CreatedDateTime;
LastActionDateTime;
Status;
StatusMessage;
}
class TranscriptionDefinition {
Name;
Description;
RecordingsUrl;
Locale;
Models;
Properties;
}
const ts = {
Name: Name,
Description: Description,
Locale: Locale,
RecordingsUrl: RecordingsBlobUri,
Properties: {
"PunctuationMode": "DictatedAndAutomatic",
"ProfanityFilterMode": "Masked",
"AddWordLevelTimestamps": "True"
},
Models: []
}
const postPayload = JSON.stringify(ts);
const startOptions = {
hostname: Region + ".cris.ai",
port: Port,
path: SpeechToTextBasePath + "Transcriptions/",
method: "POST",
headers: {
"Content-Type": "application/json",
'Content-Length': postPayload.length,
"Ocp-Apim-Subscription-Key": SubscriptionKey
}
}
function PrintResults(resultUrl)
{
const fetchOptions = {
headers: {
"Ocp-Apim-Subscription-Key": SubscriptionKey
}
}
const fetchRequest = https.get(new URL(resultUrl), fetchOptions, (response) => {
if (response.statusCode !== 200) {
console.info("Error retrieving status: " + response.statusCode);
} else {
let responseText = '';
response.setEncoding('utf8');
response.on("data", (chunk) => {
responseText += chunk;
});
response.on("end", () => {
console.info("Transcription Results:");
console.info(responseText);
});
}
});
}
function CheckTranscriptionStatus(statusUrl) {
transcription = null;
const fetchOptions = {
headers: {
"Ocp-Apim-Subscription-Key": SubscriptionKey
}
}
const fetchRequest = https.get(new URL(statusUrl), fetchOptions, (response) => {
if (response.statusCode !== 200) {
console.info("Error retrieving status: " + response.statusCode);
} else {
let responseText = '';
response.setEncoding('utf8');
response.on("data", (chunk) => {
responseText += chunk;
});
response.on("end", () => {
const statusObject = JSON.parse(responseText);
var done = false;
switch (statusObject.status) {
case "Failed":
console.info("Transcription failed. Status: " + transcription.StatusMessage);
done = true;
break;
case "Succeeded":
done = true;
PrintResults(statusObject.resultsUrls["channel_0"]);
break;
case "Running":
console.info("Transcription is still running.");
break;
case "NotStarted":
console.info("Transcription has not started.");
break;
}
if (!done) {
setTimeout(() => {
CheckTranscriptionStatus(statusUrl);
}, (5000));
}
});
}
});
fetchRequest.on("error", error => {
console.error(error);
});
}
const request = https.request(startOptions, (response) => {
if (response.statusCode != 202) {
console.error("Error, status code " + response.statusCode);
} else {
const transcriptionLocation = response.headers.location;
console.info("Created transcription at location " + transcriptionLocation);
console.info("Checking status.");
CheckTranscriptionStatus(transcriptionLocation);
}
});
request.on("error", error => {
console.error(error);
});
request.write(postPayload);
request.end();
```
## <a name="run-your-app"></a>Run your app
Now you can build your app and test speech recognition by using the Speech service.
**Start your app** - Run `node index.js`.
## <a name="next-steps"></a>Next steps
[!INCLUDE [footer](./footer.md)]
| 25.678241 | 129 | 0.58875 | yue_Hant | 0.344472 |
db9ec6ad4c1008490fdd49d47b0bb087be24eb88 | 3,555 | md | Markdown | networking/readme.md | Azure/AzureCAT-AVS | 28dc0824b6e325b085d8e08a2c6cedc09c18c71a | [
"MIT"
] | 25 | 2021-04-28T18:31:35.000Z | 2021-12-23T08:53:10.000Z | networking/readme.md | Azure/AzureCAT-AVS | 28dc0824b6e325b085d8e08a2c6cedc09c18c71a | [
"MIT"
] | 1 | 2021-09-13T13:11:27.000Z | 2021-11-24T12:43:59.000Z | networking/readme.md | Azure/AzureCAT-AVS | 28dc0824b6e325b085d8e08a2c6cedc09c18c71a | [
"MIT"
] | 8 | 2021-05-14T14:58:07.000Z | 2022-01-28T08:12:40.000Z | # Connecting from On Premises to Azure VMware Solution
## Intro
Azure VMware Soltuion is physical hardware running in Azure datacenters managed by Microsoft.
To connect physical components to the Azure backbone we are using a technology called ExpressRoute.
Learn more about ExpressRoute [here](https://docs.microsoft.com/en-us/azure/expressroute/expressroute-introduction).
As the Virtual Network Gateways connected to an ExpressRoute circuit can't transit traffic between two circuits (one circuit is going to on premisis, one is going to Azure VMware Solution) Microsoft uses the Global Reach feature to directly connect the on premisis circuit to AVS.
To learn more about Global Reach click [here](https://docs.microsoft.com/en-us/azure/expressroute/expressroute-global-reach).
In certain cases the Global Reach feature is not available in all regions. Please look [here](https://docs.microsoft.com/en-us/azure/expressroute/expressroute-global-reach#availability) to see where Global Reach is available.
A new product is [Azure Route Server](https://docs.microsoft.com/en-us/azure/route-server/overview), it enables BGP connections with 3rdparty devices to exchange routes between Azure infrastructure components and appliances.
This guide should help you decide on the best possible network architecure for you.
---
## Azure VMware Solution Network Decistion Tree
The decision tree helps you take the right network design.
It is important that you think about the possible scenarios, that should be achieved using AVS.
One common use case is e.g. sending traffic through an NVA or propagate a default route to AVS for outbound internet traffic.

You can also download a PDF [here](avs-network-decision-tree.pdf)
---
### Reference architectures
The reference architectures will be described on separate pages
1. [Deploy an Azure Firewall in a secured vWAN hub with Default Gateway propagation](deploy-an-azure-firewall-in-a-secured-vwan-hub-with-default-gateway-propagation) (coming soon)
2. [Deploy NVA with VXLAN in transit vNET and Route Server](deploy-nva-with-vxlan-in-transit-vnet-and-route-server)
3. [Deploy NVA without VXLAN in transit vNET and Route Server](deploy-nva-without-vxlan-in-transit-vnet-and-route-server) (coming soon)
4. [Deploy non-integrated NVAs without VXLAN in vWAN with transit vNET and Route Server](deploy-non-integrated-nvas-without-vxlan-in-vwan-with-transit-vnet-and-route-server)
5. [Deploy integrated NVAs without VXLAN in vWAN with transit vNET and Route Server](deploy-integrated-nvas-without-vxlan-in-vwan-with-transit-vnet-and-route-server) (coming soon)
6. [Deploy non-integrated NVAs with VXLAN in vWAN with transit vNET and Route Server](deploy-non-integrated-nvas-with-vxlan-in-vwan-with-transit-vnet-and-route-server) (coming soon)
7. [Deploy integrated NVAs with VXLAN in vWAN with transit vNET and Route Server](deploy-integrated-nvas-with-vxlan-in-vwan-with-transit-vnet-and-route-server) (coming soon)
8. [Deploy AVS with Global Reach with or without NVAs in Azure (vNET or vWAN)](deploy-avs-with-global-reach-with-or-without-nvas-in-azure-vnet-or-vwan)
9. Deploy a routing solution like bird or CSR1000V
* [Deploy a Routing solution using BIRD](deploy-routing-bird)
* [Deploy a Routing solution using Cisco CSR1000V](deploy-routing-csr1000v)
10. [Deploy NVA in Azure with Route Server and run VPN over ExpressRoute](deploy-nva-in-azure-with-route-server-and-vpn-over-expressroute)
| 67.075472 | 280 | 0.792686 | eng_Latn | 0.960577 |
db9f4903af690c94ad42917faaf7d72b8c34c364 | 616 | md | Markdown | docs/content/packaging/schema/on.md | juliaogris/hermit | 87e3a4e24a93e246037b44c379373420583efa16 | [
"Apache-2.0"
] | null | null | null | docs/content/packaging/schema/on.md | juliaogris/hermit | 87e3a4e24a93e246037b44c379373420583efa16 | [
"Apache-2.0"
] | null | null | null | docs/content/packaging/schema/on.md | juliaogris/hermit | 87e3a4e24a93e246037b44c379373420583efa16 | [
"Apache-2.0"
] | null | null | null | +++
title = "on <event>"
weight = 406
+++
Triggers to run on lifecycle events.
Used by: [channel](../channel#blocks) [darwin](../darwin#blocks) [linux](../linux#blocks) [<manifest>](../manifest#blocks) [version](../version#blocks)
## Blocks
| Block | Description |
|--------|-------------|
| [`chmod { … }`](../chmod) | Change a files mode. |
| [`copy { … }`](../copy) | A file to copy when the event is triggered. |
| [`message { … }`](../message) | Display a message to the user. |
| [`rename { … }`](../rename) | Rename a file. |
| [`run { … }`](../run) | A command to run when the event is triggered. |
| 30.8 | 154 | 0.564935 | eng_Latn | 0.87833 |
dba00594766b871a394a12bc299fe5cf6f6fac53 | 47,011 | md | Markdown | gluster/DOCUMENTATION.md | xbezdick/openstack-puppet-modules | d765ee5a5873b1960b31a2cec8a58f25a2d99612 | [
"Apache-2.0"
] | null | null | null | gluster/DOCUMENTATION.md | xbezdick/openstack-puppet-modules | d765ee5a5873b1960b31a2cec8a58f25a2d99612 | [
"Apache-2.0"
] | null | null | null | gluster/DOCUMENTATION.md | xbezdick/openstack-puppet-modules | d765ee5a5873b1960b31a2cec8a58f25a2d99612 | [
"Apache-2.0"
] | null | null | null | #Puppet-Gluster
<!--
GlusterFS module by James
Copyright (C) 2010-2013+ James Shubin
Written by James Shubin <[email protected]>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
-->
##A GlusterFS Puppet module by [James](https://ttboj.wordpress.com/)
####Available from:
####[https://github.com/purpleidea/puppet-gluster/](https://github.com/purpleidea/puppet-gluster/)
####Also available from:
####[https://forge.gluster.org/puppet-gluster/](https://forge.gluster.org/puppet-gluster/)
####This documentation is available in: [Markdown](https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md) or [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md) format.
####Table of Contents
1. [Overview](#overview)
2. [Module description - What the module does](#module-description)
3. [Setup - Getting started with Puppet-Gluster](#setup)
* [What can Puppet-Gluster manage?](#what-can-puppet-gluster-manage)
* [Simple setup](#simple-setup)
* [Elastic setup](#elastic-setup)
* [Advanced setup](#advanced-setup)
* [Client setup](#client-setup)
4. [Usage/FAQ - Notes on management and frequently asked questions](#usage-and-frequently-asked-questions)
5. [Reference - Class and type reference](#reference)
* [gluster::simple](#glustersimple)
* [gluster::elastic](#glusterelastic)
* [gluster::server](#glusterserver)
* [gluster::host](#glusterhost)
* [gluster::brick](#glusterbrick)
* [gluster::volume](#glustervolume)
* [gluster::volume::property](#glustervolumeproperty)
* [gluster::mount](#glustermount)
6. [Examples - Example configurations](#examples)
7. [Limitations - Puppet versions, OS compatibility, etc...](#limitations)
8. [Development - Background on module development and reporting bugs](#development)
9. [Author - Author and contact information](#author)
##Overview
The Puppet-Gluster module installs, configures, and manages a GlusterFS cluster.
##Module Description
This Puppet-Gluster module handles installation, configuration, and management
of GlusterFS across all of the hosts in the cluster.
##Setup
###What can Puppet-Gluster manage?
Puppet-Gluster is designed to be able to manage as much or as little of your
GlusterFS cluster as you wish. All features are optional. If there is a feature
that doesn't appear to be optional, and you believe it should be, please let me
know. Having said that, it makes good sense to me to have Puppet-Gluster manage
as much of your GlusterFS infrastructure as it can. At the moment, it cannot
rack new servers, but I am accepting funding to explore this feature ;) At the
moment it can manage:
* GlusterFS packages (rpm)
* GlusterFS configuration files (/var/lib/glusterd/)
* GlusterFS host peering (gluster peer probe)
* GlusterFS storage partitioning (fdisk)
* GlusterFS storage formatting (mkfs)
* GlusterFS brick creation (mkdir)
* GlusterFS services (glusterd)
* GlusterFS firewalling (whitelisting)
* GlusterFS volume creation (gluster volume create)
* GlusterFS volume state (started/stopped)
* GlusterFS volume properties (gluster volume set)
* And much more...
###Simple setup
include '::gluster::simple' is enough to get you up and running. When using the
gluster::simple class, or with any other Puppet-Gluster configuration,
identical definitions must be used on all hosts in the cluster. The simplest
way to accomplish this is with a single shared puppet host definition like:
```puppet
node /^annex\d+$/ { # annex{1,2,..N}
class { '::gluster::simple':
}
}
```
If you wish to pass in different parameters, you can specify them in the class
before you provision your hosts:
```puppet
class { '::gluster::simple':
replica => 2,
volume => ['volume1', 'volume2', 'volumeN'],
}
```
###Elastic setup
The gluster::elastic class is not yet available. Stay tuned!
###Advanced setup
Some system administrators may wish to manually itemize each of the required
components for the Puppet-Gluster deployment. This happens automatically with
the higher level modules, but may still be a desirable feature, particularly
for non-elastic storage pools where the configuration isn't expected to change
very often (if ever).
To put together your cluster piece by piece, you must manually include and
define each class and type that you wish to use. If there are certain aspects
that you wish to manage yourself, you can omit them from your configuration.
See the [reference](#reference) section below for the specifics. Here is one
possible example:
```puppet
class { '::gluster::server':
shorewall => true,
}
gluster::host { 'annex1.example.com':
# use uuidgen to make these
uuid => '1f660ca2-2c78-4aa0-8f4d-21608218c69c',
}
# note that this is using a folder on your existing file system...
# this can be useful for prototyping gluster using virtual machines
# if this isn't a separate partition, remember that your root fs will
# run out of space when your gluster volume does!
gluster::brick { 'annex1.example.com:/data/gluster-storage1':
areyousure => true,
}
gluster::host { 'annex2.example.com':
# NOTE: specifying a host uuid is now optional!
# if you don't choose one, one will be assigned
#uuid => '2fbe6e2f-f6bc-4c2d-a301-62fa90c459f8',
}
gluster::brick { 'annex2.example.com:/data/gluster-storage2':
areyousure => true,
}
$brick_list = [
'annex1.example.com:/data/gluster-storage1',
'annex2.example.com:/data/gluster-storage2',
]
gluster::volume { 'examplevol':
replica => 2,
bricks => $brick_list,
start => undef, # i'll start this myself
}
# namevar must be: <VOLNAME>#<KEY>
gluster::volume::property { 'examplevol#auth.reject':
value => ['192.0.2.13', '198.51.100.42', '203.0.113.69'],
}
```
###Client setup
Mounting a GlusterFS volume on a client is fairly straightforward. Simply use
the 'gluster::mount' type.
```puppet
gluster::mount { '/mnt/gluster/puppet/':
server => 'annex.example.com:/puppet',
rw => true,
shorewall => false,
}
```
In this example, 'annex.example.com' points to the VIP of the GlusterFS
cluster. Using the VIP for mounting increases the chance that you'll get an
available server when you try to mount. This generally works better than RRDNS
or similar schemes.
##Usage and frequently asked questions
All management should be done by manipulating the arguments on the appropriate
Puppet-Gluster classes and types. Since certain manipulations are either not
yet possible with Puppet-Gluster, or are not supported by GlusterFS, attempting
to manipulate the Puppet configuration in an unsupported way will result in
undefined behaviour, and possible even data loss, however this is unlikely.
###How do I change the replica count?
You must set this before volume creation. This is a limitation of GlusterFS.
There are certain situations where you can change the replica count by adding
a multiple of the existing brick count to get this desired effect. These cases
are not yet supported by Puppet-Gluster. If you want to use Puppet-Gluster
before and / or after this transition, you can do so, but you'll have to do the
changes manually.
###Do I need to use a virtual IP?
Using a virtual IP (VIP) is strongly recommended as a distributed lock manager
(DLM) and also to provide a highly-available (HA) IP address for your clients
to connect to. For a more detailed explanation of the reasoning please see:
[How to avoid cluster race conditions or: How to implement a distributed lock manager in puppet](https://ttboj.wordpress.com/2012/08/23/how-to-avoid-cluster-race-conditions-or-how-to-implement-a-distributed-lock-manager-in-puppet/)
Remember that even if you're using a hosted solution (such as AWS) that doesn't
provide an additional IP address, or you want to avoid using an additional IP,
and you're okay not having full HA client mounting, you can use an unused
private RFC1918 IP address as the DLM VIP. Remember that a layer 3 IP can
co-exist on the same layer 2 network with the layer 3 network that is used by
your cluster.
###Is it possible to have Puppet-Gluster complete in a single run?
No. This is a limitation of Puppet, and is related to how GlusterFS operates.
For example, it is not reliably possible to predict which ports a particular
GlusterFS volume will run on until after the volume is started. As a result,
this module will initially whitelist connections from GlusterFS host IP
addresses, and then further restrict this to only allow individual ports once
this information is known. This is possible in conjunction with the
[puppet-shorewall](https://github.com/purpleidea/puppet-shorewall) module.
You should notice that each run should complete without error. If you do see an
error, it means that either something is wrong with your system and / or
configuration, or because there is a bug in Puppet-Gluster.
###Can you integrate this with vagrant?
Yes, see the
[vagrant/](https://github.com/purpleidea/puppet-gluster/tree/master/vagrant)
directory. This has been tested on Fedora 20, with vagrant-libvirt, as I have
no desire to use VirtualBox for fun. I have written an article about this:
[Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!](https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/)
You'll probably first need to read my three earlier articles to learn some
vagrant tricks, and to get the needed dependencies installed:
* [Vagrant on Fedora with libvirt](https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/)
* [Vagrant vsftp and other tricks](https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/)
* [Vagrant clustered SSH and ‘screen’](https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/)
###Can I use it without a puppetmaster?
Yes, you can use Puppet-Gluster without a puppetmaster, however you will lose
out on some advantages and features that are simply not possible without one.
The features you will miss out on are Puppet-Gluster features, that make
configuring this module easier, and not any core GlusterFS features.
For example, without a puppetmaster, [gluster::simple](#glustersimple) will not
be able to work, because it relies on the puppetmaster for the exchange of
[exported resources](http://docs.puppetlabs.com/puppet/latest/reference/lang_exported.html)
so that Puppet-Gluster can automatically figure out how many hosts are present
in your cluster.
To use Puppet-Gluster without a puppetmaster, you'll most likely want to use a
configuration that is similar to the [verbose distributed-replicate](https://github.com/purpleidea/puppet-gluster/blob/master/examples/distributed-replicate-example.pp)
example.
The more philosophical way of thinking about this is that if you want to
have multi-host coordination, so that your life as a sysadmin is
easier, then you'll need to use a puppetmaster so that there is a central
point of coordination. This is a current design limitation of puppet.
Please note that you can still use the [VIP as a DLM](#do-i-need-to-use-a-virtual-ip).
###Puppet runs fail with "Invalid relationship" errors.
When running Puppet, you encounter a compilation failure like:
```bash
Error: Could not retrieve catalog from remote server:
Error 400 on SERVER: Invalid relationship: Exec[gluster-volume-stuck-volname] {
require => Gluster::Brick[annex2.example.com:/var/lib/puppet/tmp/gluster/data/]
}, because Gluster::Brick[annex2.example.com:/var/lib/puppet/tmp/gluster/data/]
doesn't seem to be in the catalog
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
```
This can occur if you have changed (usually removed) the available bricks, but
have not cleared the exported resources on the Puppet master, or if there are
stale (incorrect) brick "tags" on the individual host. These tags can usually
be found in the _/var/lib/puppet/tmp/gluster/brick/_ directory. In other words,
when a multi host cluster comes up, each puppet agent tells the master about
which bricks it has available, and each agent also pulls down this list and
stores it in the brick directory. If there is a discrepancy, then the compile
will fail because the individual host is using old data as part of its facts
when it uses the stale brick data as part of its compilation.
This commonly happens if you're trying to deploy a different Puppet-Gluster
setup without having first erased the host specific exported resources on the
Puppet master or if the machine hasn't been re-provisioned from scratch.
To solve this problem, do a clean install, and make sure that you've cleaned
the Puppet master with:
```bash
puppet node deactivate HOSTNAME
```
for each host you're using, and that you've removed all of the files from the
brick directories on each host.
###Puppet runs fail with "Connection refused - connect(2)" errors.
You may see a "_Connection refused - connect(2)_" message when running puppet.
This typically happens if your puppet vm guest is overloaded. When running high
guest counts on your laptop, or running without hardware virtualization support
this is quite common. Another common causes of this is if your domain type is
set to _qemu_ instead of the accelerated _kvm_. Since the _qemu_ domain type is
much slower, puppet timeouts and failures are common when it doesn't respond.
###Provisioning fails with: "Can't open /dev/sdb1 exclusively."
If when provisioning you get an error like:
_"Can't open /dev/sdb1 exclusively. Mounted filesystem?"_
It is possible that dracut might have found an existing logical volume on the
device, and device mapper has made it available. This is common if you are
re-using dirty block devices that haven't run through a _dd_ first. Here is an
example of the diagnosis and treatment of this problem:
```bash
[root@server mapper]# pwd
/dev/mapper
[root@server mapper]# dmesg | grep dracut
dracut: dracut-004-336.el6_5.2
dracut: rd_NO_LUKS: removing cryptoluks activation
dracut: Starting plymouth daemon
dracut: rd_NO_DM: removing DM RAID activation
dracut: rd_NO_MD: removing MD RAID activation
dracut: Scanning devices sda3 sdb for LVM logical volumes myvg/rootvol
dracut: inactive '/dev/vg_foo/lv' [4.35 TiB] inherit
dracut: inactive '/dev/myvg/rootvol' [464.00 GiB] inherit
dracut: Mounted root filesystem /dev/mapper/myvg-rootvol
dracut: Loading SELinux policy
dracut:
dracut: Switching root
[root@server mapper]# /sbin/pvcreate --dataalignment 2560K /dev/sdb1
Can't open /dev/sdb1 exclusively. Mounted filesystem?
[root@server mapper]# ls
control myvg-rootvol vg_foo-lv
[root@server mapper]# ls -lAh
total 0
crw-rw----. 1 root root 10, 58 Mar 7 16:42 control
lrwxrwxrwx. 1 root root 7 Mar 13 09:56 myvg-rootvol -> ../dm-0
lrwxrwxrwx. 1 root root 7 Mar 13 09:56 vg_foo-lv -> ../dm-1
[root@server mapper]# dmsetup remove vg_foo-lv
[root@server mapper]# ls
control myvg-rootvol
[root@server mapper]# pvcreate --dataalignment 2560K /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@server mapper]# HAPPY_ADMIN='yes'
```
If you frequently start with "dirty" block devices, you may consider adding a
_dd_ to your hardware provisioning step. The downside is that this can be very
time consuming, and potentially dangerous if you accidentally re-provision the
wrong machine.
###Provisioning fails with: "cannot open /dev/sdb1: Device or resource busy"
If when provisioning you get an error like:
_"mkfs.xfs: cannot open /dev/sdb1: Device or resource busy"_
It is possible that dracut might have found an existing logical volume on the
device, and device mapper has made it available. This is common if you are
re-using dirty block devices that haven't run through a _dd_ first. This is
almost identical to the previous frequently asked question, although this
failure message is what is seen when _mkfs.xfs_ is being blocked by dracut,
where in the former problem it was the _pvcreate_ that was being blocked. The
reason that we see this manifest through _mkfs.xfs_ instead of _pvcreate_ is
that this particular cluster is being built with _lvm => false_. Here is an
example of the diagnosis and treatment of this problem:
```bash
[root@server mapper]# pwd
/dev/mapper
[root@server mapper]# dmesg | grep dracut
dracut: dracut-004-335.el6
dracut: rd_NO_LUKS: removing cryptoluks activation
dracut: Starting plymouth daemon
dracut: rd_NO_DM: removing DM RAID activation
dracut: rd_NO_MD: removing MD RAID activation
dracut: Scanning devices sda2 sdb for LVM logical volumes vg_server/lv_swap vg_server/lv_root
dracut: inactive '/dev/vg_bricks/b1' [9.00 TiB] inherit
dracut: inactive '/dev/vg_server/lv_root' [50.00 GiB] inherit
dracut: inactive '/dev/vg_server/lv_home' [383.26 GiB] inherit
dracut: inactive '/dev/vg_server/lv_swap' [31.50 GiB] inherit
dracut: Mounted root filesystem /dev/mapper/vg_server-lv_root
dracut:
dracut: Switching root
[root@server mapper]# mkfs.xfs -q -f -i size=512 -n size=8192 /dev/sdb1
mkfs.xfs: cannot open /dev/sdb1: Device or resource busy
[root@server mapper]# lsof /dev/sdb1
[root@server mapper]# echo $?
1
[root@server mapper]# ls
control vg_server-lv_home vg_server-lv_swap
vg_bricks-b1 vg_server-lv_root
[root@server mapper]# ls -lAh
total 0
crw-rw---- 1 root root 10, 58 May 20 2014 control
lrwxrwxrwx 1 root root 7 May 20 2014 vg_bricks-b1 -> ../dm-2
lrwxrwxrwx 1 root root 7 May 20 2014 vg_server-lv_home -> ../dm-3
lrwxrwxrwx 1 root root 7 May 20 2014 vg_server-lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 May 20 2014 vg_server-lv_swap -> ../dm-1
[root@server mapper]# dmsetup remove vg_bricks-b1
[root@server mapper]# ls
control vg_server-lv_home vg_server-lv_root vg_server-lv_swap
[root@server mapper]# mkfs.xfs -q -f -i size=512 -n size=8192 /dev/sdb1
[root@server mapper]# echo $?
0
[root@server mapper]# HAPPY_ADMIN='yes'
```
If you frequently start with "dirty" block devices, you may consider adding a
_dd_ to your hardware provisioning step. The downside is that this can be very
time consuming, and potentially dangerous if you accidentally re-provision the
wrong machine.
###I changed the hardware manually, and now my system won't boot.
If you're using Puppet-Gluster to manage storage, the filesystem will be
mounted with _UUID_ entries in _/etc/fstab_. This ensures that the correct
filesystem will be mounted, even if the device order changes. If a filesystem
is not available at boot time, startup will abort and offer you the chance to
go into read-only maintenance mode. Either fix the hardware issue, or edit the
_/etc/fstab_ file.
###I can't edit /etc/fstab in the maintenance shell because it is read-only.
In the maintenance shell, your root filesystem will be mounted read-only, to
prevent changes. If you need to edit a file such as _/etc/fstab_, you'll first
need to remount the root filesystem in read-write mode. You can do this with:
```bash
mount -n -o remount /
```
###I get a file dependency error when running Puppet-Gluster.
In order for Puppet-Gluster to be able to do its magic, it needs to store some
temporary files on each GlusterFS host. These files usually get stored in:
_/var/lib/puppet/tmp/gluster/_. The parent directory (_/var/lib/puppet/tmp/_)
gets created by the _puppet::vardir_ module. The error you'll typically see is:
```bash
Error: Failed to apply catalog: Could not find dependency
File[/var/lib/puppet/tmp/] for File[/var/lib/puppet/tmp/gluster/] at
/etc/puppet/modules/gluster/manifests/vardir.pp:49
```
This error occurs if you forget to _include_ the _puppet::vardir_ class from
the [puppet-puppet](https://github.com/purpleidea/puppet-puppet/) module. If
you don't want to include the entire module, you can pull in the
_puppet::vardir_ class by itself, or create the contained file type manually in
your puppet manifests.
###I get an undefined method error when running Puppet-Gluster.
This is caused by a regression in a recent version of Puppet. They silently
"removed" a feature, which apparently wasn't supposed to exist, which
Puppet-Gluster relied upon. The original author of Puppet-Gluster would like
this feature added back. If you are affected by this issue, you should see an
an error similar to:
```bash
Error: Could not retrieve catalog from remote server:
Error 400 on SERVER: undefined method `brick_str_to_hash' for
Scope(Gluster::Volume[puppet]):Puppet::Parser::Scope at
/etc/puppet/modules/gluster/manifests/volume.pp:89 on node annex1.example.com
```
Puppet-Gluster now has a patch in git master that works around the missing
feature. This is:
[06af205a562d543bbeb7c4d5c55143ade3bdb4e6](https://github.com/purpleidea/puppet-gluster/commit/06af205a562d543bbeb7c4d5c55143ade3bdb4e6)
Puppet-Gluster has also been
[updated](https://github.com/purpleidea/puppet-gluster/commit/6dfaa8446e4287cf6f7f540158cde700fb95b830)
to fix the issue for users of brick_layout_chained.
###Puppet master gives warning: "Unable to load yaml data/ directory!"
You may see the message "Unable to load yaml data/ directory!" in
_/var/log/messages_ on your puppet master. This error comes from the
_ipa::params_ class. The _ipa::params_ class expects the puppet-module-data
module to read data from the ipa/data directory, and this message indicates
that the module-data module is not installed properly. Most users do not have
this issue, but if you do, here is a workaround:
* Run _puppet config print libdir_ to find the puppet libdir (e.g. /var/lib/puppet/lib).
* Run _mkdir /etc/puppet/modules/module\_data_.
* Copy the contents of the puppet-module-data directory into _/etc/puppet/modules/module\_data_.
* Run "ln -s /etc/puppet/modules/module\_data/lib/hiera _<libdir>_/hiera".
* Restart the puppet master.
###Will this work on my favourite OS? (eg: GNU/Linux F00bar OS v12 ?)
If it's a GNU/Linux based OS that can run GlusterFS and Puppet, then it will
probably work. Typically, you might need to add a yaml data file to the _data/_
folder so that Puppet-Gluster knows where certain operating system specific
things are found. The multi-distro support has been designed to make it
particularly easy to add support for additional platforms. If your platform
doesn't work, please submit a yaml data file with the platform specific values.
###How do I get the OS independent aspects of this module to work?
The OS independent aspects of this module use the hiera "data-in-modules"
technique. It is actually very simple to get this to work. For a longer write
up of this technique, please read:
[https://ttboj.wordpress.com/2014/06/04/hiera-data-in-modules-and-os-independent-puppet/](https://ttboj.wordpress.com/2014/06/04/hiera-data-in-modules-and-os-independent-puppet/)
In short, this requires puppet version 3.0.0 or greater, and needs the
[module_data](https://github.com/ripienaar/puppet-module-data)
puppet module to be present on the puppet server in the _modules/_ directory.
The *module_data* code should be in a module folder named: *module_data/*.
That's it!
###I just upgraded puppet-gluster and my UUIDs keep resetting to 00000000-0000-0000-0000-000000000000
The following commands `gluster pool list` or `gluster peer status` may also be
failing on some or all of the gluster servers. Furthermore, some hosts may
see other servers, while others are able to list the other peers but they
remain in a disconnected state.
In one case, this was caused by SourceTree's approach to cloning where it was
pulling in all submodules on the Windows OS and/or converting LF (line feed)
to CRLF (carriage return, line feed) compared to how a git clone command pulls
in the repository on a linux OS. In order to resolve this you must delete the
puppet-gluster module directory in its entirety and re-clone it directly on the
target puppet master. If you are using version control to save your puppet
manifests/modules, then please ensure that you perform the appropriate
commands to save your work and re-push your code with the included changes.
###Awesome work, but it's missing support for a feature and/or platform!
Since this is an Open Source / Free Software project that I also give away for
free (as in beer, free as in gratis, free as in libre), I'm unable to provide
unlimited support. Please consider donating funds, hardware, virtual machines,
and other resources. For specific needs, you could perhaps sponsor a feature!
###You didn't answer my question, or I have a question!
Contact me through my [technical blog](https://ttboj.wordpress.com/contact/)
and I'll do my best to help. If you have a good question, please remind me to
add my answer to this documentation!
##Reference
Please note that there are a number of undocumented options. For more
information on these options, please view the source at:
[https://github.com/purpleidea/puppet-gluster/](https://github.com/purpleidea/puppet-gluster/).
If you feel that a well used option needs documenting here, please contact me.
###Overview of classes and types
* [gluster::simple](#glustersimple): Simple Puppet-Gluster deployment.
* [gluster::elastic](#glusterelastic): Under construction.
* [gluster::server](#glusterserver): Base class for server hosts.
* [gluster::host](#glusterhost): Host type for each participating host.
* [gluster::brick](#glusterbrick): Brick type for each defined brick, per host.
* [gluster::volume](#glustervolume): Volume type for each defined volume.
* [gluster::volume::property](#glustervolumeproperty): Manages properties for each volume.
* [gluster::mount](#glustermount): Client volume mount point management.
###gluster::simple
This is gluster::simple. It should probably take care of 80% of all use cases.
It is particularly useful for deploying quick test clusters. It uses a
finite-state machine (FSM) to decide when the cluster has settled and volume
creation can begin. For more information on the FSM in Puppet-Gluster see:
[https://ttboj.wordpress.com/2013/09/28/finite-state-machines-in-puppet/](https://ttboj.wordpress.com/2013/09/28/finite-state-machines-in-puppet/)
####`replica`
The replica count. Can't be changed automatically after initial deployment.
####`volume`
The volume name or list of volume names to create.
####`path`
The valid brick path for each host. Defaults to local file system. If you need
a different path per host, then Gluster::Simple will not meet your needs.
####`count`
Number of bricks to build per host. This value is used unless _brick_params_ is
being used.
####`vip`
The virtual IP address to be used for the cluster distributed lock manager.
This option can be used in conjunction with the _vrrp_ option, but it does not
require it. If you don't want to provide a virtual IP, but you do want to
enforce that certain operations only run on one host, then you can set this
option to be the IP address of an arbitrary host in your cluster. Keep in mind
that if that host is down, those operations won't ever occur.
####`vrrp`
Whether to automatically deploy and manage _Keepalived_ for use as a _DLM_ and
for use in volume mounting, etc... Using this option requires the _vip_ option.
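For example, a minimal sketch combining these two options (the replica count, volume name, and the 203.0.113.42 address below are hypothetical placeholders) might look like this:
```puppet
class { '::gluster::simple':
	replica => 2,
	volume => ['examplevol'],
	vip => '203.0.113.42',	# reserved address used for the DLM and client mounting
	vrrp => true,		# let Puppet-Gluster deploy and manage keepalived for this VIP
}
```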
####`layout`
Which brick layout to use. The available options are: _chained_, and (default).
To generate a default (symmetrical, balanced) layout, leave this option blank.
If you'd like to include an algorithm that generates a different type of brick
layout, it is easy to drop in an algorithm. Please contact me with the details!
####`version`
Which version of GlusterFS do you want to install? This is especially handy
when testing new beta releases. You can read more about the technique at:
[Testing GlusterFS during Glusterfest](https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/).
####`repo`
Whether or not to add the necessary software repositories to install the needed
packages. This will typically pull in GlusterFS from _download.gluster.org_ and
should be set to false if you have your own mirrors or repositories managed as
part of your base image.
####`brick_params`
This parameter lets you specify a hash to use when creating the individual
bricks. This is especially useful because it lets you have the power of
Gluster::Simple when managing a cluster of iron (physical machines) where you'd
like to specify brick specific parameters. This sets the brick count when the
_count_ parameter is 0. The format of this parameter might look like:
```puppet
$brick_params = {
fqdn1 => [
{dev => '/dev/disk/by-uuid/01234567-89ab-cdef-0123-456789abcdef'},
{dev => '/dev/sdc', partition => false},
],
fqdn2 => [{
dev => '/dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0',
raid_su => 256, raid_sw => 10,
}],
fqdnN => [...],
}
```
####`brick_param_defaults`
This parameter lets you specify a hash of defaults to use when creating each
brick with the _brick_params_ parameter. It is useful because it avoids the
need to repeat the values that are common across all bricks in your cluster.
Since most options work this way, this is an especially nice feature to have.
The format of this parameter might look like:
```puppet
$brick_param_defaults = {
lvm => false,
xfs_inode64 => true,
force => true,
}
```
####`brick_params_defaults`
This parameter lets you specify a list of defaults to use when creating each
brick. Each element in the list represents a different brick. The value of each
element is a hash with the actual defaults that you'd like to use for creating
that brick. If you do not specify a brick count by any other method, then the
number of elements in this array will be used as the brick count. This is very
useful if you have consistent device naming across your entire cluster, because
you can very easily specify the devices and brick counts once for all hosts. If
for some reason a particular device requires unique values, then it can be set
manually with the _brick_params_ parameter. Please note the spelling of this
parameter. It is not the same as the _brick_param_defaults_ parameter which is
a global defaults parameter which will apply to all bricks.
The format of this parameter might look like:
```puppet
$brick_params_defaults = [
{'dev' => '/dev/sdb'},
{'dev' => '/dev/sdc'},
{'dev' => '/dev/sdd'},
{'dev' => '/dev/sde'},
]
```
####`setgroup`
Set a volume property group. The two most common or well-known groups are the
_virt_ group, and the _small-file-perf_ group. This functionality is emulated
whether you're using the RHS version of GlusterFS or if you're using the
upstream GlusterFS project, which doesn't (currently) have the _volume set
group_ command. As package managers update the list of available groups or
their properties, Puppet-Gluster will automatically keep your set group
up-to-date. It is easy to extend Puppet-Gluster to add a custom group without
needing to patch the GlusterFS source.
####`ping`
Whether to use _fping_ or not to help with ensuring the required hosts are
available before doing certain types of operations. Optional, but recommended.
Boolean value.
####`again`
Do you want to use _Exec['again']_ ? This helps build your cluster quickly!
####`baseport`
Specify the base port option as used in the glusterd.vol file. This is useful
if the default port range of GlusterFS conflicts with the ports used for
virtual machine migration, or if you simply like to choose the ports that
you're using. Integer value.
####`rpcauthallowinsecure`
This is needed in some setups in the glusterd.vol file, particularly (I think)
for some users of _libgfapi_. Boolean value.
####`shorewall`
Boolean to specify whether puppet-shorewall integration should be used or not.
###gluster::elastic
Under construction.
###gluster::server
Main server class for the cluster. Must be included when building the GlusterFS
cluster manually. Wrapper classes such as [gluster::simple](#glustersimple)
include this automatically.
####`vip`
The virtual IP address to be used for the cluster distributed lock manager.
####`shorewall`
Boolean to specify whether puppet-shorewall integration should be used or not.
###gluster::host
Main host type for the cluster. Each host participating in the GlusterFS
cluster must define this type on itself, and on every other host. As a result,
this is not a singleton like the [gluster::server](#glusterserver) class.
####`ip`
Specify which IP address this host is using. This defaults to the
_$::ipaddress_ variable. Be sure to set this manually if you're declaring this
yourself on each host without using exported resources. If each host thinks the
other hosts should have the same IP address as itself, then Puppet-Gluster and
GlusterFS won't work correctly.
####`uuid`
Universally unique identifier (UUID) for the host. If empty, Puppet-Gluster
will generate this automatically for the host. You can generate your own
manually with _uuidgen_, and set them yourself. I found this particularly
useful for testing, because I would pick easy to recognize UUID's like:
_aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_,
_bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb_, and so on. If you set a UUID manually,
and Puppet-Gluster has a chance to run, then it will remember your choice, and
store it locally to be used again if you no longer specify the UUID. This is
particularly useful for upgrading an existing un-managed GlusterFS installation
to a Puppet-Gluster managed one, without changing any UUID's.
###gluster::brick
Main brick type for the cluster. Each brick is an individual storage segment to
be used on a host. Each host must have at least one brick to participate in the
cluster, but usually a host will have multiple bricks. A brick can be as simple
as a file system folder, or it can be a separate file system. Please read the
official GlusterFS documentation, if you aren't entirely comfortable with the
concept of a brick.
For most test clusters, and for experimentation, it is easiest to use a
directory on the root file system. You can even use a _/tmp_ sub folder if you
don't care about the persistence of your data. For more serious clusters, you
might want to create separate file systems for your data. On self-hosted iron,
it is not uncommon to create multiple RAID-6 drive pools, and to then create a
separate file system per virtual drive. Each file system can then be used as a
single brick.
So that each volume in GlusterFS has the maximum ability to grow, without
having to partition storage separately, the bricks in Puppet-Gluster are
actually folders (on whatever backing store you wish) which then contain
sub folders-- one for each volume. As a result, all the volumes on a given
GlusterFS cluster can share the total available storage space. If you wish to
limit the storage used by each volume, you can setup quotas. Alternatively, you
can buy more hardware, and elastically grow your GlusterFS volumes, since the
price per GB will be significantly less than any proprietary storage system.
The one downside to this brick sharing, is that if you have chosen the brick
per host count specifically to match your performance requirements, and
each GlusterFS volume on the same cluster has drastically different brick per
host performance requirements, then this won't suit your needs. I doubt that
anyone actually has such requirements, but if you do insist on needing this
compartmentalization, then you can probably use the Puppet-Gluster grouping
feature to accomplish this goal. Please let me know about your use-case, and
be warned that the grouping feature hasn't been extensively tested.
To prove to you that I care about automation, this type offers the ability to
automatically partition and format your file systems. This means you can plug
in new iron, boot, provision and configure the entire system automatically.
Regrettably, I don't have a lot of test hardware to routinely use this feature.
If you'd like to donate some, I'd be happy to test this thoroughly. Having said
that, I have used this feature, I consider it to be extremely safe, and it has
never caused me to lose data. If you're uncertain, feel free to look at the
code, or avoid using this feature entirely. If you think there's a way to make
it even safer, then feel free to let me know.
####`dev`
Block device, such as _/dev/sdc_ or _/dev/disk/by-id/scsi-0123456789abcdef_. By
default, Puppet-Gluster will assume you're using a folder to store the brick
data, if you don't specify this parameter.
####`raid_su`
Get this information from your RAID device. This is used to do automatic
calculations for alignment, so that the:
```
dev -> part -> lvm -> fs
```
stack is aligned properly. Future work is possible to manage your RAID devices,
and to read these values automatically. Specify this value as an integer number
of kilobytes (k).
####`raid_sw`
Get this information from your RAID device. This is used to do automatic
calculations for alignment, so that the:
```
dev -> part -> lvm -> fs
```
stack is aligned properly. Future work is possible to manage your RAID devices,
and to read these values automatically. Specify this value as an integer.
####`partition`
Do you want to partition the device and build the next layer on that partition,
or do you want to build on the block device directly? The "next layer" will
typically be lvm if you're using lvm, or your file system (such as xfs) if
you're skipping the lvm layer.
####`labeltype`
Only _gpt_ is supported. Other options include _msdos_, but this has never been
used because of its size limitations.
####`lvm`
Do you want to use lvm on the lower level device (typically a partition, or the
device itself), or not. Using lvm might be required when using a commercially
supported GlusterFS solution.
####`lvm_thinp`
Set to _true_ to enable LVM thin provisioning. Read 'man 7 lvmthin' to
understand what thin provisioning is all about. This is needed for one form of
GlusterFS snapshots. Obviously this requires that you also enable _LVM_.
####`lvm_virtsize`
The value that will be passed to _--virtualsize_. By default this will pass in
a command that will return the size of your volume group. This is usually a
sane value, and help you to remember not to overcommit.
####`lvm_chunksize`
Value of _--chunksize_ for _lvcreate_ when using thin provisioning.
####`lvm_metadatasize`
Value of _--poolmetadatasize_ for _lvcreate_ when using thin provisioning.
####`fsuuid`
File system UUID. This ensures we can distinctly identify a file system. You
can set this to be used with automatic file system creation, or you can specify
the file system UUID that you'd like to use. If you leave this blank, then
Puppet-Gluster can automatically pick an fs UUID for you. This is especially
useful if you are automatically deploying a large cluster on physical iron.
####`fstype`
This should be _xfs_ or _ext4_. Using _xfs_ is recommended, but _ext4_ is also
quite common. This only affects a file system that is getting created by this
module. If you provision a new machine, with a root file system of _ext4_, and
the brick you create is a root file system path, then this option does nothing.
A _btrfs_ option is now available for testing. It is not officially supported
by GlusterFS, but testing it anyways, and reporting any issues is encouraged.
####`xfs_inode64`
Set _inode64_ mount option when using the _xfs_ fstype. Choose _true_ to set.
####`xfs_nobarrier`
Set _nobarrier_ mount option when using the _xfs_ fstype. Choose _true_ to set.
####`ro`
Whether the file system should be mounted read only. For emergencies only.
####`force`
If _true_, this will overwrite any xfs file system it sees. This is useful for
rebuilding GlusterFS repeatedly and wiping data. There are other safeties in
place to stop this. In general, you probably don't ever want to touch this.
####`areyousure`
Do you want to allow Puppet-Gluster to do dangerous things? You have to set
this to _true_ to allow Puppet-Gluster to _fdisk_ and _mkfs_ your file system.
####`again`
Do you want to use _Exec['again']_ ? This helps build your cluster quickly!
####`comment`
Add any comment you want. This is also occasionally used internally to do magic
things.
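Putting a few of these parameters together, here is an illustrative sketch (the hostname and the /dev/sdb device path are made-up placeholders) of a brick built on a dedicated block device:
```puppet
gluster::brick { 'annex1.example.com:/data/gluster-storage1':
	dev => '/dev/sdb',	# dedicated block device for this brick
	lvm => false,		# skip the lvm layer and put the file system on the partition
	fstype => 'xfs',
	xfs_inode64 => true,
	areyousure => true,	# allow Puppet-Gluster to fdisk and mkfs this device
}
```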
###gluster::volume
Main volume type for the cluster. This is where a lot of the magic happens.
Remember that changing some of these parameters after the volume has been
created won't work, and you'll experience undefined behaviour. There could be
FSM based error checking to verify that no changes occur, but it has been left
out so that this code base can eventually support such changes, and so that the
user can manually change a parameter if they know that it is safe to do so.
####`bricks`
List of bricks to use for this volume. If this is left at the default value of
_true_, then this list is built automatically. The algorithm that determines
this order does not support all possible situations, and most likely can't
handle certain corner cases. It is possible to examine the FSM to view the
selected brick order before it has a chance to create the volume. The volume
creation script won't run until there is a stable brick list as seen by the FSM
running on the host that has the DLM. If you specify this list of bricks
manually, you must choose the order to match your desired volume layout. If you
aren't sure about how to order the bricks, you should review the GlusterFS
documentation first.
####`transport`
Only _tcp_ is supported. Possible values can include _rdma_, but this won't get
any testing if I don't have access to infiniband hardware. Donations welcome.
####`replica`
Replica count. Usually you'll want to set this to _2_. Some users choose _3_.
Other values are seldom seen. A value of _1_ can be used for simply testing a
distributed setup, when you don't care about your data or high availability. A
value greater than _4_ is probably wasteful and unnecessary. It might even
cause performance issues if a synchronous write is waiting on a slow fourth
server.
####`stripe`
Stripe count. Thoroughly unsupported and untested option. Not recommended for
use by GlusterFS.
####`layout`
Which brick layout to use. The available options are: _chained_, and (default).
To generate a default (symmetrical, balanced) layout, leave this option blank.
If you'd like to include an algorithm that generates a different type of brick
layout, it is easy to drop in an algorithm. Please contact me with the details!
####`ping`
Do we want to include ping checks with _fping_?
####`settle`
Do we want to run settle checks?
####`again`
Do you want to use _Exec['again']_ ? This helps build your cluster quickly!
####`start`
Requested state for the volume. Valid values include: _true_ (start), _false_
(stop), or _undef_ (un-managed start/stop state).
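As a short sketch (the volume name and replica count are arbitrary), a volume that lets Puppet-Gluster assemble the brick list automatically might be declared like this:
```puppet
gluster::volume { 'examplevol':
	replica => 2,
	bricks => true,		# default: let the FSM build a balanced brick list
	start => true,		# ensure the volume is started
}
```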
###gluster::volume::property
Main volume property type for the cluster. This allows you to manage GlusterFS
volume specific properties. There are a wide range of properties that volumes
support. For the full list of properties, you should consult the GlusterFS
documentation, or run the _gluster volume set help_ command. To set a property
you must use the special name pattern of: _volume_#_key_. The value argument is
used to set the associated value. It is smart enough to accept values in the
most logical format for that specific property. Some properties aren't yet
supported, so please report any problems you have with this functionality.
Because this feature is an awesome way to _document as code_ the volume
specific optimizations that you've made, make sure you use this feature even if
you don't use all the others.
####`value`
The value to be used for this volume property.
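For instance, assuming a volume named examplevol and using placeholder client addresses, whitelisting clients with the standard _auth.allow_ property mirrors the _auth.reject_ example shown earlier:
```puppet
# namevar must be: <VOLNAME>#<KEY>
gluster::volume::property { 'examplevol#auth.allow':
	value => ['192.0.2.13', '198.51.100.42'],
}
```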
###gluster::mount
Main type to use to mount GlusterFS volumes. This type offers special features,
like shorewall integration, and repo support.
####`server`
Server specification to use when mounting. Format is _<server>:/volume_. You
may use an _FQDN_ or an _IP address_ to specify the server.
####`rw`
Mount read-write or read-only. Defaults to read-only. Specify _true_ for
read-write.
####`mounted`
Mounted argument from standard mount type. Defaults to _true_ (_mounted_).
####`repo`
Boolean to select if you want automatic repository (package) management or not.
####`version`
Specify which GlusterFS version you'd like to use.
####`ip`
IP address of this client. This is usually auto-detected, but you can choose
your own value manually in case there are multiple options available.
####`shorewall`
Boolean to specify whether puppet-shorewall integration should be used or not.
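Putting these options together, a hypothetical read-write client mount (the server name and volume below are placeholders) might be declared as:
```puppet
gluster::mount { '/mnt/gluster/examplevol/':
	server => 'annex.example.com:/examplevol',	# cluster VIP or FQDN, plus the volume
	rw => true,
	mounted => true,
	repo => true,		# manage the GlusterFS client package repository
	shorewall => false,
}
```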
##Examples
For example configurations, please consult the [examples/](https://github.com/purpleidea/puppet-gluster/tree/master/examples) directory in the git
source repository. It is available from:
[https://github.com/purpleidea/puppet-gluster/tree/master/examples](https://github.com/purpleidea/puppet-gluster/tree/master/examples)
It is also available from:
[https://forge.gluster.org/puppet-gluster/puppet-gluster/trees/master/examples](https://forge.gluster.org/puppet-gluster/puppet-gluster/trees/master/examples/)
##Limitations
This module has been tested against open source Puppet 3.2.4 and higher.
The module is routinely tested on:
* CentOS 6.5
It will probably work without incident or without major modification on:
* CentOS 5.x/6.x
* RHEL 5.x/6.x
It has patches to support:
* Fedora 20+
* Ubuntu 12.04+
* Debian 7+
It will most likely work with other Puppet versions and on other platforms, but
testing on those platforms has been minimal due to lack of time and resources.
Testing is community supported! Please report any issues as there are a lot of
features, and in particular, support for additional distros isn't well tested.
The multi-distro architecture has been chosen to easily support new additions.
Most platforms and versions will only require a change to the yaml based data/
folder.
##Development
This is my personal project that I work on in my free time.
Donations of funding, hardware, virtual machines, and other resources are
appreciated. Please contact me if you'd like to sponsor a feature, invite me to
talk/teach or for consulting.
You can follow along [on my technical blog](https://ttboj.wordpress.com/).
To report any bugs, please file a ticket at: [https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=puppet-gluster](https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=puppet-gluster).
##Author
Copyright (C) 2010-2013+ James Shubin
* [github](https://github.com/purpleidea/)
* [@purpleidea](https://twitter.com/#!/purpleidea)
* [https://ttboj.wordpress.com/](https://ttboj.wordpress.com/)
| 44.517992 | 259 | 0.774755 | eng_Latn | 0.996504 |
dba1500aec50ae472ac806164ec5a0d9c6cead1b | 1,795 | md | Markdown | docs/recipes/how-to-integrate-disqus.md | gisat/panther-backoffice | 5d97caa74a9e2129ccb493a83dfc4732b8d1db04 | [
"MIT"
] | 1 | 2017-08-23T21:05:06.000Z | 2017-08-23T21:05:06.000Z | docs/recipes/how-to-integrate-disqus.md | gisat/panther-backoffice | 5d97caa74a9e2129ccb493a83dfc4732b8d1db04 | [
"MIT"
] | 1 | 2020-07-16T00:12:28.000Z | 2020-07-16T00:12:28.000Z | docs/recipes/how-to-integrate-disqus.md | gisat/panther-backoffice | 5d97caa74a9e2129ccb493a83dfc4732b8d1db04 | [
"MIT"
] | 1 | 2018-07-10T13:18:19.000Z | 2018-07-10T13:18:19.000Z | ## How to Integrate [Disqus](https://disqus.com)
https://disqus.com/admin/create/
#### `DisqusThread.js`
```js
import React, { PropTypes } from 'react';
import { canUseDOM } from 'fbjs/lib/ExecutionEnvironment';
const SHORTNAME = 'example';
const WEBSITE_URL = 'http://www.example.com';
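// Inject the Disqus embed script on first use; on later calls just reset the existing thread.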
function renderDisqus() {
if (window.DISQUS === undefined) {
var script = document.createElement('script');
script.async = true;
script.src = 'https://' + SHORTNAME + '.disqus.com/embed.js';
document.getElementsByTagName('head')[0].appendChild(script);
} else {
window.DISQUS.reset({reload: true});
}
}
class DisqusThread extends React.Component {
static propTypes = {
id: PropTypes.string.isRequired,
title: PropTypes.string.isRequired,
path: PropTypes.string.isRequired
};
shouldComponentUpdate(nextProps) {
return this.props.id !== nextProps.id ||
this.props.title !== nextProps.title ||
this.props.path !== nextProps.path;
}
componentDidMount() {
renderDisqus();
}
componentDidUpdate() {
renderDisqus();
}
render() {
let { id, title, path, ...other} = this.props;
if (canUseDOM) {
/* eslint-disable camelcase */
window.disqus_shortname = SHORTNAME;
window.disqus_identifier = id;
window.disqus_title = title;
window.disqus_url = WEBSITE_URL + path;
/* eslint-enable camelcase */
}
return <div {...other} id="disqus_thread" />;
}
}
export default DisqusThread;
```
#### `MyComponent.js`
```js
import React from 'react';
import DisqusThread from './DisqusThread.js';
class MyComponent extends React.Component {
render() {
return (
<div>
<DisqusThread id="e94d73ff-fd92-467d-b643-c86889f4b8be"
title="How to integrate Disqus into ReactJS App"
path="/blog/123-disquss-integration" />
</div>
);
}
}
export default MyComponent;
```
| 20.168539 | 63 | 0.669638 | kor_Hang | 0.375121 |
dba23bcef16735c08bc041c9a3ac1baa728b52aa | 3,045 | md | Markdown | README.md | lalebdi/TheManhattanProject | 8cad8062d7ce916ee52bc2ba7cf78d43fba1cb5d | [
"Unlicense",
"MIT"
] | null | null | null | README.md | lalebdi/TheManhattanProject | 8cad8062d7ce916ee52bc2ba7cf78d43fba1cb5d | [
"Unlicense",
"MIT"
] | null | null | null | README.md | lalebdi/TheManhattanProject | 8cad8062d7ce916ee52bc2ba7cf78d43fba1cb5d | [
"Unlicense",
"MIT"
] | null | null | null | 
<!-- PROJECT LOGO -->
<p align="center">
<br />
<h3 align="center">Beatle</h3>
<p align="center">
An awesome Bug Tracking Tool
<br />
</p>
</p>
<!-- TABLE OF CONTENTS -->
## Table of Contents
* [About the Project](#about-the-project)
* [Built With](#built-with)
* [Getting Started](#getting-started)
* [Installation](#installation)
* [Usage](#usage)
* [Roadmap](#roadmap)
* [Contributing](#contributing)
* [License](#license)
* [Contact](#contact)
* [Acknowledgements](#acknowledgements)
<!-- ABOUT THE PROJECT -->
## About The Project

There are many great README templates available on GitHub, however, I didn't find one that really suit my needs so I created this enhanced one. I want to create a README template so amazing that it'll be the last one you ever need.
Here's why:
* Your time should be focused on creating something amazing. A project that solves a problem and helps others
* You shouldn't be doing the same tasks over and over like creating a README from scratch
* You should element DRY principles to the rest of your life :smile:
Of course, no one template will serve all projects since your needs may be different. So I'll be adding more in the near future. You may also suggest changes by forking this repo and creating a pull request or opening an issue.
A list of commonly used resources that I find helpful are listed in the acknowledgements.
### Built With
* MERN
<!-- GETTING STARTED -->
## Getting Started
The backend of the project is located [here](https://github.com/lalebdi/TheManhattanProjectBackend).
### Installation
1. Clone the repo
```sh
git clone https://github.com/lalebdi/TheManhattanProject.git
```
2. Install NPM packages
```sh
npm install
```
<!-- USAGE EXAMPLES -->
## Usage
Use this space to show useful examples of how the project can be used. Additional screenshots, code examples, and demos work well here.
For more examples, please refer to the [Documentation](https://example.com)
<!-- ROADMAP -->
## Roadmap
See the [open issues](https://github.com/othneildrew/Best-README-Template/issues) for a list of proposed features (and known issues).
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
<!-- LICENSE -->
## License
Distributed under the MIT License. See `LICENSE` for more information.
<!-- CONTACT -->
## Contact
Project Link: [https://github.com/lalebdi/TheManhattanProject](https://github.com/lalebdi/TheManhattanProject)
| 25.805085 | 231 | 0.731034 | eng_Latn | 0.930205 |
dba24cb929d21ec7e76aaf2cc8a870ebc61cbd02 | 879 | md | Markdown | docs/classes/Archive.Archive-1.md | iniquitybbs/iniquity | 1d90c2fe37901b71c7f267fc114df511ec5abf81 | [
"MIT"
] | 32 | 2017-10-31T06:51:10.000Z | 2022-03-14T13:30:33.000Z | docs/classes/Archive.Archive-1.md | iniquitybbs/iniquity | 1d90c2fe37901b71c7f267fc114df511ec5abf81 | [
"MIT"
] | 20 | 2017-10-31T08:08:58.000Z | 2022-03-23T23:27:36.000Z | docs/classes/Archive.Archive-1.md | iniquitybbs/iniquity | 1d90c2fe37901b71c7f267fc114df511ec5abf81 | [
"MIT"
] | 4 | 2018-11-23T03:04:59.000Z | 2022-03-02T04:07:31.000Z | # Class: Archive
[Archive](../modules/Archive.md).Archive
Iniquity Archives
**`summary`** What I hope will be a really cool way of accessing all of your ANSI/ASCII/PETSCII/GIF/JPEG whatever files.
## Table of contents
### Constructors
- [constructor](Archive.Archive-1.md#constructor)
### Methods
- [load](Archive.Archive-1.md#load)
## Constructors
### constructor
• **new Archive**(`options?`)
#### Parameters
| Name | Type |
| :------ | :------ |
| `options?` | [`IQCoreAssetsOptions`](../interfaces/Archive.IQCoreAssetsOptions.md) |
#### Defined in
[archive/src/index.ts:67](https://github.com/iniquitybbs/iniquity/blob/a881ad9/packages/archive/src/index.ts#L67)
## Methods
### load
▸ **load**(): `void`
#### Returns
`void`
#### Defined in
[archive/src/index.ts:69](https://github.com/iniquitybbs/iniquity/blob/a881ad9/packages/archive/src/index.ts#L69)
| 18.3125 | 120 | 0.682594 | yue_Hant | 0.901018 |
dba290dc6e161761e2cd4904f27008be5cc4b0cf | 3,991 | md | Markdown | windows-driver-docs-pr/ifs/flt-parameters-for-irp-mj-network-query-open.md | kvndb/windows-driver-docs | 904720dbfcd60c063cece2219b938a7b5b5b5443 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/ifs/flt-parameters-for-irp-mj-network-query-open.md | kvndb/windows-driver-docs | 904720dbfcd60c063cece2219b938a7b5b5b5443 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/ifs/flt-parameters-for-irp-mj-network-query-open.md | kvndb/windows-driver-docs | 904720dbfcd60c063cece2219b938a7b5b5b5443 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: FLT_PARAMETERS for IRP_MJ_NETWORK_QUERY_OPEN union
description: The following union component is used when the MajorFunction field of the FLT\_IO\_PARAMETER\_BLOCK structure for the operation is IRP\_MJ\_NETWORK\_QUERY\_OPEN.
ms.assetid: bafe015c-a747-4d18-95d5-adad2ad1570b
keywords: ["FLT_PARAMETERS for IRP_MJ_NETWORK_QUERY_OPEN union Installable File System Drivers", "FLT_PARAMETERS union Installable File System Drivers", "PFLT_PARAMETERS union pointer Installable File System Drivers"]
topic_type:
- apiref
api_name:
- FLT_PARAMETERS
api_location:
- fltkernel.h
api_type:
- HeaderDef
ms.date: 11/28/2017
ms.localizationpriority: medium
---
# FLT\_PARAMETERS for IRP\_MJ\_NETWORK\_QUERY\_OPEN union
The following union component is used when the **MajorFunction** field of the [**FLT\_IO\_PARAMETER\_BLOCK**](https://msdn.microsoft.com/library/windows/hardware/ff544638) structure for the operation is IRP\_MJ\_NETWORK\_QUERY\_OPEN.
Syntax
------
```ManagedCPlusPlus
typedef union _FLT_PARAMETERS {
... ;
struct {
PIRP Irp;
PFILE_NETWORK_OPEN_INFORMATION NetworkInformation;
} NetworkQueryOpen;
... ;
} FLT_PARAMETERS, *PFLT_PARAMETERS;
```
Members
-------
**NetworkQueryOpen**
Structure containing the following members.
**Irp**
Pointer to a create IRP that represents this open operation. This IRP is to be used by the file system for common open/create code but not actually completed.
**NetworkInformation**
Pointer to a [**FILE\_NETWORK\_OPEN\_INFORMATION**](https://msdn.microsoft.com/library/windows/hardware/ff545822)-structured buffer to receive the requested information about the file.
Remarks
-------
The [**FLT\_PARAMETERS**](https://msdn.microsoft.com/library/windows/hardware/ff544673) structure for IRP\_MJ\_NETWORK\_QUERY\_OPEN operations contains the parameters for a NetworkQueryOpen operation represented by a callback data ([**FLT\_CALLBACK\_DATA**](https://msdn.microsoft.com/library/windows/hardware/ff544620)) structure. The **FLT\_PARAMETERS** structure is contained in an [**FLT\_IO\_PARAMETER\_BLOCK**](https://msdn.microsoft.com/library/windows/hardware/ff544638) structure.
> \[!Note\] The file object associated with IRP\_MJ\_NETWORK\_QUERY\_OPEN is a stack-based object.
> A filter registered for the NetworkQueryOpen callback must not reference this object. That is, do not call ObReferenceObject or ObDereferenceObject on this stack-based file object. Also, do not save a pointer to the object.
IRP\_MJ\_NETWORK\_QUERY\_OPEN is a fast I/O operation. It is the equivalent of the FastIoQueryOpen (not FastIoQueryNetworkOpenInfo) operation. A filter must register for this operation.
Requirements
------------
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td align="left"><p>Header</p></td>
<td align="left">Fltkernel.h (include Fltkernel.h)</td>
</tr>
</tbody>
</table>
## See also
[**FILE\_NETWORK\_OPEN\_INFORMATION**](https://msdn.microsoft.com/library/windows/hardware/ff545822)
[**FLT\_CALLBACK\_DATA**](https://msdn.microsoft.com/library/windows/hardware/ff544620)
[**FLT\_IO\_PARAMETER\_BLOCK**](https://msdn.microsoft.com/library/windows/hardware/ff544638)
[**FLT\_IS\_FASTIO\_OPERATION**](https://msdn.microsoft.com/library/windows/hardware/ff544645)
[**FLT\_IS\_FS\_FILTER\_OPERATION**](https://msdn.microsoft.com/library/windows/hardware/ff544648)
[**FLT\_IS\_IRP\_OPERATION**](https://msdn.microsoft.com/library/windows/hardware/ff544654)
[**FLT\_PARAMETERS**](https://msdn.microsoft.com/library/windows/hardware/ff544673)
[**FltQueryInformationFile**](https://msdn.microsoft.com/library/windows/hardware/ff543439)
[**IRP\_MJ\_QUERY\_INFORMATION**](irp-mj-query-information.md)
[**ZwQueryInformationFile**](https://msdn.microsoft.com/library/windows/hardware/ff567052)
| 36.614679 | 490 | 0.740416 | yue_Hant | 0.471785 |
dba2c4929111038232da97e6fc96730578fcf1c9 | 3,373 | md | Markdown | readme.md | Navneet-Suresh/sysvinit-service-generator | 077ac88e36e7a6af33322b2546e3246344947696 | [
"MIT"
] | 1 | 2021-06-04T12:19:11.000Z | 2021-06-04T12:19:11.000Z | readme.md | Navneet-Suresh/sysvinit-service-generator | 077ac88e36e7a6af33322b2546e3246344947696 | [
"MIT"
] | null | null | null | readme.md | Navneet-Suresh/sysvinit-service-generator | 077ac88e36e7a6af33322b2546e3246344947696 | [
"MIT"
] | null | null | null | # Sample service script for init
This script enables fast daemonization apps as a Linux services with SysVinit init system.
Look at [LSB init scripts](http://wiki.debian.org/LSBInitScripts) for more information.
Original script taken from from [naholyr's](https://github.com/naholyr) [gist](https://gist.github.com/naholyr/4275302)
Note: You can acheive the same thing that this project tries to acheive by using the MetaInit package in debian:
(https://wiki.debian.org/MetaInit)
## Usage
Copy to `/etc/init.d`:
```sh
# replace "$YOUR_SERVICE_NAME" with your service's name (whenever it's not enough obvious)
cp "service.sh" "/etc/init.d/$YOUR_SERVICE_NAME"
chmod +x /etc/init.d/$YOUR_SERVICE_NAME
```
Edit the script and replace following tokens:
* `<NAME>` = `$YOUR_SERVICE_NAME`
* `<DESCRIPTION>` = Describe your service here (be concise)
* Feel free to modify the LSB header, I've made default choices you may not agree with
* `<COMMAND>` = Command to start your server (for example `/home/myuser/.dropbox-dist/dropboxd`)
* `<USER>` = Login of the system user the script should be run as (for example `myuser`)
Start and test your service:
```sh
$ service $YOUR_SERVICE_NAME start
$ service $YOUR_SERVICE_NAME stop
```
Install service to be run at boot-time:
```sh
$ update-rc.d $YOUR_SERVICE_NAME defaults
```
For rpm based distributions such as CentOS or Red Hat, you can use
```sh
$ chkconfig $YOUR_SERVICE_NAME --add
```
If you want to see which runlevel your script will run in
```sh
$ chkconfig $YOUR_SERVICE_NAME --list
```
Enjoy
## Uninstall
The service can uninstall itself with `service $NAME uninstall`. Yes, that's very easy, therefore a bit dangerous. But as it's an auto-generated script, you can bring it back very easily. I use it for tests and often install/uninstall, that's why I've put that here.
Don't want it? Remove lines 56-58 of the service's script.
## Logs?
Your service will log its output to `/var/log/$NAME.log`. Don't forget to setup a logrotate :)
## FAQ!
This script should work fine on Debian, CentOS and Slackware.
## I'm noob and/or lazy
Yep, I'm lazy too. But still, I've written a script to automate this :)
```sh
$ wget 'https://raw.githubusercontent.com/Navneet-Suresh/sysvinit-service-generator/master/new-service.sh' && bash new-service.sh
```
In this script I will download `service.sh` into a `tempfile`, replace some tokens, and then show you commands you should run as superuser.
If you feel confident enough with my script, you can `sudo` the script directly:
```sh
$ wget 'https://raw.githubusercontent.com/Navneet-Suresh/sysvinit-service-generator/master/new-service.sh' && sudo bash new-service.sh
```
Note: the cool hipsterish `curl $URL | bash` won't work here, I don't really want to check why.
The script works offline so you can clone this repository then you can upload this script on your server and run it
directly:
```sh
$ sudo bash new-service.sh
```
The script also handle parameters as showed below:
```sh
$ sudo bash new-service.sh "service_name" "description" "command to execute" "user which executes the command"
```
### Demo
Creating the service:

Looking at service files (logs, pid):

Uninstalling service:

| 30.116071 | 266 | 0.743848 | eng_Latn | 0.960148 |
dba2cfa5c0c9a6febfab33b5bdc5e08c24c783c2 | 4,140 | md | Markdown | README.md | KaanGaming/HollowKnightDRPC | 20893208bc2b46f5446b07cb1093215840ab6b50 | [
"MIT"
] | 2 | 2021-12-05T14:04:45.000Z | 2022-03-08T15:47:30.000Z | README.md | KaanGaming/HollowKnightDRPC | 20893208bc2b46f5446b07cb1093215840ab6b50 | [
"MIT"
] | 1 | 2021-04-23T12:59:07.000Z | 2021-04-23T13:05:29.000Z | README.md | KaanGaming/HollowKnightDRPC | 20893208bc2b46f5446b07cb1093215840ab6b50 | [
"MIT"
] | 2 | 2021-05-12T07:01:47.000Z | 2022-03-29T01:48:11.000Z |
# Hollow Knight Discord Rich Presence
###### ...or Discord Rich Presence for Hollow Knight or HollowKnightDRPC, call it whatever you want.
---
This mod adds Discord Rich Presence to your profile. Rich Presence is a detailed "Playing" status when you check someone's profile on Discord. It shows up in both mobile and computer devices.

###### Even though the text may be cut off, it can be seen in full by hovering over your mouse on the cut off text, or view the full profile.
Mod made by __@KaanGaming#7447__ on Discord.
If anything goes wrong, ask in [#modding-help](https://discord.com/channels/283467363729408000/462200562620825600) in the Hollow Knight's Discord server.
Links that may or may not be useful:
[Original README](https://github.com/KaanGaming/HollowKnightDRPC/blob/main/ModInstallerReadme.txt)
# Installation Guide
Use these if the mod can't do the auto-installation of Discord GameSDK.
✔ [Windows Guide](https://kaangaming.github.io/HollowKnightDRPC/guide/Guide.html)
✖ Mac Guide
✖ Linux/UNIX Guide
### Vague guide on setting up mod for use
First, download the Discord GameSDK from [here](https://discord.com/developers/docs/game-sdk/sdk-starter-guide). Open the .zip file, and try to find the `lib` folder. Inside there should be `x86` and `x86_64` folders. Find the `Plugins` folder in your Hollow Knight installation. Your Hollow Knight game files can be found in (Windows: `C:\Program Files (x86)\Steam\steamapps\common\Hollow Knight\`, Mac: `~/Library/Application Support/Steam/steamapps/common/Hollow Knight/hollow_knight.app/`, Linux: `~/.local/share/Steam/steamapps/common/Hollow Knight/`). Copy the `x86` and `x86_64` into `Plugins` folder. If there are already folders with the same names, copy the insides of `x86` from the .zip to there, and the same thing for `x86_64`.
# How to use
**THE MOD ONLY WORKS IF YOU HAVE DISCORD CLIENT OPEN! THE BROWSER VERSION OF DISCORD WON'T WORK AND YOUR RICH PRESENCE WILL NOT WORK.** If you are using this mod on 1.5, you can easily adjust the settings in Settings > Mods and find this mod's settings page. On 1.4.3.2, this option doesn't exist, but you can still change settings by finding the saves location and find this mod's global settings. Despite it being a JSON file, it's very easy to edit. You can use the notepad program to edit values. [The StatsRow may be complex, so I made a page about it so you can edit them to whatever you want without testing each value.](https://github.com/KaanGaming/HollowKnightDRPC/blob/1.5-mapi-version/StatsRowValues.md)
# Development Guide
Steps:
### Project setup
1. Get the Discord GameSDK (found [here](https://discord.com/developers/docs/game-sdk/sdk-starter-guide)) and go inside the `lib` folder, then extract the contents of `x86` and `x86_64` into `HollowKnightDRPC\GameSDK_Libraries` (there are `x86` and `x86_64` folders inside there as well so extract the respective folders into there)
2. *(Optional)* If none of the dependencies work (if a lot of `using` lines start throwing errors in the project) then your Hollow Knight installation might be in a different location, or you might be using a different OS. If that's the case, try fixing the location of your Hollow Knight installation in the .csproj file, and try to reload the project.
### Build the project
You can navigate to the root of the repository, and do `dotnet build` on your command prompt program. This will create the `Exports` file in the project directory, and it will update the mod's output files in your `Mods` folder found inside your Hollow Knight installation.
---
## Previews

###### Dirtmouth - Normal save
---

###### Stag Nest - Normal save
---

###### Black Egg Temple - Normal save
---

###### Ancestral Mound - Steel Soul save
---

###### Godhome | Hall of Gods - Godseeker save
---

###### Small Image Tooltip
| 60 | 741 | 0.749517 | eng_Latn | 0.914654 |
dba34d1677dda0b299a45dbda742e8782a67c6bd | 3,342 | md | Markdown | desktop-src/direct3d11/d3dx11getimageinfofrommemory.md | iGustL/win32 | 3aef27bcdaaaa826447d6aea1b1ef84d5a42dbbd | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-05-23T21:42:08.000Z | 2021-05-23T21:42:08.000Z | desktop-src/direct3d11/d3dx11getimageinfofrommemory.md | iGustL/win32 | 3aef27bcdaaaa826447d6aea1b1ef84d5a42dbbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/direct3d11/d3dx11getimageinfofrommemory.md | iGustL/win32 | 3aef27bcdaaaa826447d6aea1b1ef84d5a42dbbd | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-01-01T04:19:14.000Z | 2022-01-01T04:19:14.000Z | ---
title: D3DX11GetImageInfoFromMemory function (D3DX11tex.h)
description: Note The D3DX (D3DX 9, D3DX 10, and D3DX 11) utility library is deprecated for Windows 8 and is not supported for Windows Store apps. Note Instead of using this function, we recommend that you use the DirectXTex library, GetMetadataFromXXXMemory (where XXX is WIC, DDS, or TGA; WIC doesn't support DDS and TGA; D3DX 9 supported TGA as a common art source format for games). Get information about an image already loaded into memory.
ms.assetid: b13192fa-4235-4c38-ba46-e14ffab2f653
keywords:
- D3DX11GetImageInfoFromMemory function Direct3D 11
topic_type:
- apiref
api_name:
- D3DX11GetImageInfoFromMemory
api_location:
- D3DX11.lib
- D3DX11.dll
api_type:
- LibDef
ms.topic: reference
ms.date: 05/31/2018
---
# D3DX11GetImageInfoFromMemory function
> [!Note]
> The D3DX (D3DX 9, D3DX 10, and D3DX 11) utility library is deprecated for Windows 8 and is not supported for Windows Store apps.
> [!Note]
> Instead of using this function, we recommend that you use the [DirectXTex](https://go.microsoft.com/fwlink/p/?linkid=248926) library, **GetMetadataFromXXXMemory** (where XXX is WIC, DDS, or TGA; WIC doesn't support DDS and TGA; D3DX 9 supported TGA as a common art source format for games).
Get information about an image already loaded into memory.
## Syntax
```C++
HRESULT D3DX11GetImageInfoFromMemory(
_In_ LPCVOID pSrcData,
_In_ SIZE_T SrcDataSize,
_In_ ID3DX11ThreadPump *pPump,
_In_ D3DX11_IMAGE_INFO *pSrcInfo,
_Out_ HRESULT *pHResult
);
```
## Parameters
<dl> <dt>
*pSrcData* \[in\]
</dt> <dd>
Type: **[**LPCVOID**](https://docs.microsoft.com/windows/desktop/WinProg/windows-data-types)**
Pointer to the image in memory.
</dd> <dt>
*SrcDataSize* \[in\]
</dt> <dd>
Type: **[**SIZE\_T**](https://docs.microsoft.com/windows/desktop/WinProg/windows-data-types)**
Size of the image in memory, in bytes.
</dd> <dt>
*pPump* \[in\]
</dt> <dd>
Type: **[**ID3DX11ThreadPump**](id3dx11threadpump.md)\***
Optional thread pump that can be used to load the info asynchronously. Can be **NULL**. See [**ID3DX11ThreadPump Interface**](id3dx11threadpump.md).
</dd> <dt>
*pSrcInfo* \[in\]
</dt> <dd>
Type: **[**D3DX11\_IMAGE\_INFO**](d3dx11-image-info.md)\***
Information about the image in memory.
</dd> <dt>
*pHResult* \[out\]
</dt> <dd>
Type: **[**HRESULT**](https://msdn.microsoft.com/library/Bb401631(v=MSDN.10).aspx)\***
A pointer to the return value. May be **NULL**. If *pPump* is not **NULL**, then *pHResult* must be a valid memory location until the asynchronous execution completes.
</dd> </dl>
## Return value
Type: **[**HRESULT**](https://msdn.microsoft.com/library/Bb401631(v=MSDN.10).aspx)**
The return value is one of the values listed in [Direct3D 11 Return Codes](d3d11-graphics-reference-returnvalues.md).
## Requirements
| | |
|--------------------|----------------------------------------------------------------------------------------|
| Header<br/> | <dl> <dt>D3DX11tex.h</dt> </dl> |
| Library<br/> | <dl> <dt>D3DX11.lib</dt> </dl> |
## See also
<dl> <dt>
[D3DX Functions](d3d11-graphics-reference-d3dx11-functions.md)
</dt> </dl>
| 25.707692 | 445 | 0.657989 | eng_Latn | 0.483391 |
dba35547afeabd858077a018b680023d3568b9b7 | 24 | md | Markdown | README.md | ztWilliam/ipig | a59db791ac3af9a641f531667241e4e77c1a666b | [
"MIT"
] | null | null | null | README.md | ztWilliam/ipig | a59db791ac3af9a641f531667241e4e77c1a666b | [
"MIT"
] | null | null | null | README.md | ztWilliam/ipig | a59db791ac3af9a641f531667241e4e77c1a666b | [
"MIT"
] | null | null | null | # ipig
projects of ipig
| 8 | 16 | 0.75 | eng_Latn | 0.96972 |
dba36cdb10d8d276a94bfc0a4c18713336efc415 | 9,592 | md | Markdown | docs/zh_CN/advanced_tutorials/DataAugmentation.md | Scallions/PaddleClas | 08ef5b540166da05ed02e7c491c240129954c681 | [
"Apache-2.0"
] | 3 | 2021-12-16T06:59:04.000Z | 2021-12-16T06:59:24.000Z | docs/zh_CN/advanced_tutorials/DataAugmentation.md | hello3281/PaddleClas | 8103f010c75ce4b4bee51ede8d057da4c6bd446a | [
"Apache-2.0"
] | null | null | null | docs/zh_CN/advanced_tutorials/DataAugmentation.md | hello3281/PaddleClas | 8103f010c75ce4b4bee51ede8d057da4c6bd446a | [
"Apache-2.0"
] | null | null | null | # 数据增强分类实战
---
This section walks through data augmentation experiments in detail on the ImageNet-1K dataset. If you want to try these methods quickly, refer to the CIFAR100-based data augmentation experiments in [**Getting started with PaddleClas in 30 minutes (advanced edition)**](../quick_start/quick_start_classification_professional.md). For an introduction to the algorithms themselves, see [Data Augmentation Algorithms](../algorithm_introduction/DataAugmentation.md).
## Contents
- [1. Parameter Configuration](#1)
- [1.1 AutoAugment](#1.1)
- [1.2 RandAugment](#1.2)
- [1.3 TimmAutoAugment](#1.3)
- [1.4 Cutout](#1.4)
- [1.5 RandomErasing](#1.5)
- [1.6 HideAndSeek](#1.6)
- [1.7 GridMask](#1.7)
- [1.8 Mixup](#1.8)
- [1.9 Cutmix](#1.9)
  - [1.10 Using Mixup and Cutmix Together](#1.10)
- [2. Launch Commands](#2)
- [3. Notes](#3)
- [4. Experimental Results](#4)
<a name="1"></a>
## 1. Parameter Configuration
Different data augmentation methods come with different hyperparameters. To make them easier to understand and use, we provide in `configs/DataAugment` the parameter configuration files for eight data augmentation methods used to train ResNet50; users only need to replace the configuration file path in `tools/run.sh` to use them. This section shows one example each from the image-transformation, image-cropping, and image-mixing families; for the other methods, please check the corresponding configuration files.
<a name="1.1"></a>
### 1.1 AutoAugment
The configuration of the `AutoAugment` augmentation is shown below. `AutoAugment` operates on data in uint8 format, so it should be placed before the normalization step (`NormalizeImage`).
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- AutoAugment:
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
```
<a name="1.2"></a>
### 1.2 RandAugment
The configuration of the `RandAugment` augmentation is shown below, where the user needs to specify the parameters `num_layers` and `magnitude`, whose default values are `2` and `5` respectively. `RandAugment` operates on data in uint8 format, so it should be placed before the normalization step (`NormalizeImage`).
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- RandAugment:
num_layers: 2
magnitude: 5
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
```
<a name="1.3"></a>
### 1.3 TimmAutoAugment
The configuration of the `TimmAutoAugment` augmentation is shown below, where the user needs to specify the parameters `config_str`, `interpolation`, and `img_size`, whose default values are `rand-m9-mstd0.5-inc1`, `bicubic`, and `224` respectively. `TimmAutoAugment` operates on data in uint8 format, so it should be placed before the normalization step (`NormalizeImage`).
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- TimmAutoAugment:
config_str: rand-m9-mstd0.5-inc1
interpolation: bicubic
img_size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
```
<a name="1.4"></a>
### 1.4 Cutout
The configuration of the `Cutout` augmentation is shown below, where the user needs to specify the parameters `n_holes` and `length`, whose default values are `1` and `112` respectively. Like the other cropping-style augmentations, `Cutout` can operate either on uint8 data or on data after normalization (`NormalizeImage`); here it is applied after normalization.
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- Cutout:
n_holes: 1
length: 112
```
<a name="1.5"></a>
### 1.5 RandomErasing
The configuration of the `RandomErasing` augmentation is shown below, where the user needs to specify the parameters `EPSILON`, `sl`, `sh`, `r1`, `attempt`, `use_log_aspect`, and `mode`, whose default values are `0.25`, `0.02`, `1.0/3.0`, `0.3`, `10`, `True`, and `pixel` respectively. Like the other cropping-style augmentations, `RandomErasing` can operate either on uint8 data or on data after normalization (`NormalizeImage`); here it is applied after normalization.
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- RandomErasing:
EPSILON: 0.25
sl: 0.02
sh: 1.0/3.0
r1: 0.3
attempt: 10
use_log_aspect: True
mode: pixel
```
<a name="1.6"></a>
### 1.6 HideAndSeek
The configuration of the `HideAndSeek` augmentation is shown below. Like the other cropping-style augmentations, `HideAndSeek` can operate either on uint8 data or on data after normalization (`NormalizeImage`); here it is applied after normalization.
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- HideAndSeek:
```
<a name="1.7"></a>
### 1.7 GridMask
The configuration of the `GridMask` augmentation is shown below, where the user needs to specify the parameters `d1`, `d2`, `rotate`, `ratio`, and `mode`, whose default values are `96`, `224`, `1`, `0.5`, and `0` respectively. Like the other cropping-style augmentations, `GridMask` can operate either on uint8 data or on data after normalization (`NormalizeImage`); here it is applied after normalization.
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- GridMask:
d1: 96
d2: 224
rotate: 1
ratio: 0.5
mode: 0
```
<a name="1.8"></a>
### 1.8 Mixup
The configuration of the `Mixup` augmentation is shown below, where the user needs to specify the parameter `alpha`, whose default value is `0.2`. Like the other image-mixing augmentations, `Mixup` blends the images (and labels) within each batch after the per-image preprocessing has finished, and the blended images and labels are then fed into the network for training, so it is applied after the image preprocessing steps (image transformation and image cropping). A short NumPy sketch of the idea follows the configuration below.
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
batch_transform_ops:
- MixupOperator:
alpha: 0.2
```
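For intuition only (this is not the PaddleClas implementation), a Mixup batch can be sketched in a few lines of NumPy: a mixing coefficient is drawn from a Beta(`alpha`, `alpha`) distribution and both the images and their one-hot labels are blended with it.
```python
import numpy as np

def mixup_batch(images, one_hot_labels, alpha=0.2, seed=None):
    """Illustrative sketch of Mixup applied to one batch."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)                  # mixing coefficient
    index = rng.permutation(images.shape[0])      # pair each sample with a random partner
    mixed_images = lam * images + (1.0 - lam) * images[index]
    mixed_labels = lam * one_hot_labels + (1.0 - lam) * one_hot_labels[index]
    return mixed_images, mixed_labels
```
Because the labels are blended as well, the usual training accuracy is no longer well defined, which is why it is not printed during training (see the notes in section 3 below).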
<a name="1.9"></a>
### 1.9 Cutmix
The configuration of the `Cutmix` augmentation is shown below, where the user needs to specify the parameter `alpha`, whose default value is `0.2`. Like the other image-mixing augmentations, `Cutmix` blends the images (and labels) within each batch after the per-image preprocessing has finished, and the blended images and labels are then fed into the network for training, so it is applied after the image preprocessing steps (image transformation and image cropping).
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
batch_transform_ops:
- CutmixOperator:
alpha: 0.2
```
<a name="1.10"></a>
### 1.10 Using Mixup and Cutmix Together
The configuration for using `Mixup` and `Cutmix` together is shown below, where the user needs to specify an additional parameter `prob`, which controls the probability of each augmentation being applied; its default value is `0.5`.
```yaml
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
- RandFlipImage:
flip_code: 1
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- OpSampler:
MixupOperator:
alpha: 0.8
prob: 0.5
CutmixOperator:
alpha: 1.0
prob: 0.5
```
<a name="2"></a>
## 2. Launch Commands
Once the training environment is set up, launching a run is the same as training any other classification task: simply replace the configuration file in `tools/train.sh` with the configuration file of the desired data augmentation method.
The content of `train.sh` is as follows:
```bash
python3 -m paddle.distributed.launch \
--selected_gpus="0,1,2,3" \
--log_dir=ResNet50_Cutout \
tools/train.py \
-c ./ppcls/configs/ImageNet/DataAugment/ResNet50_Cutout.yaml
```
Run `train.sh`:
```bash
sh tools/train.sh
```
<a name="3"></a>
## 3. Notes
* Because image mixing also mixes the labels, the accuracy on the training data cannot be computed, so no training accuracy is printed during training.
* With data augmentation the training data becomes harder, so the training loss may be larger and the training-set accuracy relatively lower, but the model generalizes better, so the validation accuracy is relatively higher.
* After applying data augmentation the model may tend to underfit; it is recommended to reduce `l2_decay` appropriately to obtain higher validation accuracy.
* Almost every augmentation method has hyperparameters. We only provide hyperparameters tuned on ImageNet-1k; for other datasets, users need to tune the hyperparameters themselves. For the meaning of specific hyperparameters, please read the corresponding papers; tuning approaches can also be found in the training tricks chapter.
<a name="4"></a>
## 4. Experimental Results
Based on PaddleClas, the classification accuracies on the ImageNet-1k dataset are as follows.
| Model | Initial LR Strategy | l2 decay | batch size | epoch | Augmentation Strategy | Top1 Acc | Result in the Paper |
|-------------|------------------|--------------|------------|-------|----------------|------------|----|
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | standard transforms | 0.7731 | - |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | AutoAugment | 0.7795 | 0.7763 |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | mixup | 0.7828 | 0.7790 |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | cutmix | 0.7839 | 0.7860 |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | cutout | 0.7801 | - |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | gridmask | 0.7785 | 0.7790 |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | random-augment | 0.7770 | 0.7760 |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | random erasing | 0.7791 | - |
| ResNet50 | 0.1/cosine_decay | 0.0001 | 256 | 300 | hide and seek | 0.7743 | 0.7720 |
**Note**:
* In these experiments, for ease of comparison we fixed l2 decay to 1e-4. In practice we recommend trying a smaller l2 decay: combined with data augmentation, we found that reducing l2 decay from 1e-4 to 7e-5 brings at least a 0.3%~0.5% accuracy improvement.
* We have not yet combined different strategies and verified the effect; we will carry out more comparison experiments on this in the future, so stay tuned.
| 28.891566 | 262 | 0.545872 | yue_Hant | 0.293202 |
dba3da33eb500ab7d2afb9d515d8137f92be64df | 3,704 | md | Markdown | AlchemyInsights/no-option-to-install-office-Visio-project.md | 47-studio-org/OfficeDocs-AlchemyInsights-pr.de-DE | edb8893aa6f88fe3c40b9a0a3ad2396b02bebb54 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-15T16:17:10.000Z | 2022-03-15T16:17:10.000Z | AlchemyInsights/no-option-to-install-office-Visio-project.md | MarcelRaschke/OfficeDocs-AlchemyInsights-pr.de-DE | edb8893aa6f88fe3c40b9a0a3ad2396b02bebb54 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AlchemyInsights/no-option-to-install-office-Visio-project.md | MarcelRaschke/OfficeDocs-AlchemyInsights-pr.de-DE | edb8893aa6f88fe3c40b9a0a3ad2396b02bebb54 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-08T00:39:46.000Z | 2021-03-08T00:39:46.000Z | ---
title: Keine Option zum Installieren von Office, Visio oder Project
ms.author: pebaum
author: pebaum
manager: mnirkhe
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "9002414"
- "4799"
ms.openlocfilehash: 0a1a394ace2ea8aa32ec82668dee8130dd4600ec
ms.sourcegitcommit: c6692ce0fa1358ec3529e59ca0ecdfdea4cdc759
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/15/2020
ms.locfileid: "47772658"
---
# <a name="no-option-to-install-office-visio-or-project"></a>Keine Option zum Installieren von Office, Visio oder Project
Um Office-Client-Apps installieren zu können, stellen Sie sicher, dass Sie über ein Office 365- oder ein Microsoft 365-Abonnement verfügen, [das Office-Client-Apps beinhaltet](https://support.office.com/article/office-for-home-and-office-for-business-plans-28cbc8cf-1332-4f04-9123-9b660abb629e), z. B. Microsoft 365 Business Premium, Microsoft 365 Apps for Business oder Microsoft 365 Apps for Enterprise.
**Bitte beachten**: [Office Home und Business](https://products.office.com/home-and-business) ist ein [Office Home](https://support.office.com/article/28cbc8cf-1332-4f04-9123-9b660abb629e?wt.mc_id=Alchemy_ClientDIA)-Produkt und nicht Teil eines Business-Abonnements. Project Online Essentials beinhaltet nicht die Desktopversion von Project und somit ist auch keine Installation erforderlich. Benutzer von Visio Plan 1 können auf „Visio im Web“ zugreifen, da der Plan keine Visio-Desktop-App beinhaltet. Desktop-Apps für Project-und Visio sind für Mac nicht verfügbar
Wenn Sie über ein-Abonnement verfügen, das Microsoft 365-Apps umfasst, wird keine Option zum Installieren dieser Apps angezeigt, es sei denn, Sie verfügen über eine [zugewiesene Lizenz](https://support.office.com/article/what-office-365-business-product-or-license-do-i-have-f8ab5e25-bf3f-4a47-b264-174b1ee925fd?wt.mc_id=scl_installoffice_home). Wenn Sie der für die Zuweisung von Lizenzen zuständige Office 365-Administrator sind, lesen Sie bitte [Benutzern Lizenzen zuweisen](https://support.office.com/article/assign-licenses-to-users-in-office-365-for-business-997596b5-4173-4627-b915-36abac6786dc?wt.mc_id=scl_installoffice_home).
Fordern Sie jeden Benutzer auf, folgende Schritte auszuführen:
1. Wechseln Sie zur [Portalseite](https://portal.office.com/OLS/MySoftware.aspx).
2. Melden Sie sich mit Ihrem Geschäfts-, Uni- oder Schulkonto an, dem eine Office-Lizenz zugewiesen ist.
3. Im Abschnitt „Office“ bitte die Sprache auswählen. Wählen Sie zwischen der 32-Bit- oder der 64-Bit-Version.
4. Klicken Sie auf **Installieren**.
Detaillierte Schritte und Lösungen zur Problembehandlung beim Installieren von Office finden Sie unter [Herunterladen und Installieren bzw. erneutes Installieren von Office auf einem PC oder Mac](https://support.office.com/article/4414eaaf-0478-48be-9c42-23adc4716658?wt.mc_id=Alchemy_ClientDIA). Spezifische Anweisungen zur Installation von Visio oder Project finden Sie unter [Installieren von Visio](https://support.office.com/article/f98f21e3-aa02-4827-9167-ddab5b025710) oder [Installieren von Project](https://support.office.com/article/7059249b-d9fe-4d61-ab96-5c5bf435f281).
Lösungen zu spezifischen Problemen beim Installieren von Office finden Sie unter:
[Problembehandlung beim Installieren von Office](https://support.office.com/article/35ff2def-e0b2-4dac-9784-4cf212c1f6c2#BKMK_ErrorMessages)
[Office-Anwendungen unter Windows 10, Windows 8 oder Windows 7 sind nicht zu finden](https://support.office.com/article/can-t-find-office-applications-in-windows-10-windows-8-or-windows-7-907ce545-6ae8-459b-8d9d-de6764a635d6)
| 77.166667 | 635 | 0.814795 | deu_Latn | 0.883622 |
dba4426b2dec6f74dc6f9137fef37b9fd373ed40 | 955 | md | Markdown | readme.md | sunsiansong/cn_administrative_division | bcdef7e631adc03b933d2ace65d19f5258735f41 | [
"WTFPL"
] | 3 | 2018-11-07T11:52:53.000Z | 2021-04-01T07:59:30.000Z | readme.md | sunsiansong/cn_administrative_division | bcdef7e631adc03b933d2ace65d19f5258735f41 | [
"WTFPL"
] | null | null | null | readme.md | sunsiansong/cn_administrative_division | bcdef7e631adc03b933d2ace65d19f5258735f41 | [
"WTFPL"
] | null | null | null |
## 使用
装postgresql
clone repo
```bash
go get -u "github.com/PuerkitoBio/goquery"
go get -u "github.com/djimenez/iconv-go"
go get -u "github.com/go-pg/pg"
go get -u "github.com/go-pg/pg/orm"
```
在main.go中搜Password,改成自己的数据库账号密码,表会自动建的
`go run main.go`
这样可以抓到街道/乡镇一级,如果要抓社区/村级别,那稍微看下代码吧,数据量有点大所以默认就不抓了
## Performance
基于我配置:
```bash
MacBook Pro 2017
2.8 GHz Intel Core i7
$sysctl -a | grep ".cpu."
hw.ncpu: 8
hw.physicalcpu: 4
hw.physicalcpu_max: 4
hw.logicalcpu: 8
16 GB 2133 MHz LPDDR3
```
到乡镇级别,约`47107`rows,大概60s左右(看网络状况吧,我快的时候跑到50s+)
TODO
----
1. 数据基于国家统计局网站抓取,较民政部的即时性不足,有改进的方案
2. <del>[dep]支持其他数据库</del>,划掉了,喜欢的话装个postgresql,或者fork了自己玩,完全没有难度的吧「应该」
3. 增加测试
4. self parentCode ref FK
5. db复用,不用每次都Close()
6. <del>[done]增加retry机制,某些页面偶尔的会!=200</del>
7. 寻求容错机制,即便众多过程中有失败,也能保证过最终数据的完整性
8. connection reset by peer 😂
## License
This repo is released under the [WTFPL](http://www.wtfpl.net/) – Do What the Fuck You Want to Public License.
| 17.053571 | 109 | 0.727749 | yue_Hant | 0.432438 |
dba492f1c4ef1c4abefeb0f7139be2c415d9a83c | 3,455 | md | Markdown | _posts/2012-04-15-cfpb-accepts-first-citizen-submitted-pull-request-on-behalf-of-federal-government.md | uamakernyc/dev.uamaker.nyc | bf9777086c29a8cec95ac4485c39e25af1d140c4 | [
"CC-BY-3.0"
] | null | null | null | _posts/2012-04-15-cfpb-accepts-first-citizen-submitted-pull-request-on-behalf-of-federal-government.md | uamakernyc/dev.uamaker.nyc | bf9777086c29a8cec95ac4485c39e25af1d140c4 | [
"CC-BY-3.0"
] | null | null | null | _posts/2012-04-15-cfpb-accepts-first-citizen-submitted-pull-request-on-behalf-of-federal-government.md | uamakernyc/dev.uamaker.nyc | bf9777086c29a8cec95ac4485c39e25af1d140c4 | [
"CC-BY-3.0"
] | null | null | null | ---
title: |
CFPB Accepts First Citizen-Submitted Code on Behalf of Federal Government
categories:
- Technology
tags:
- .govs
- cfpb
- code
- federal
- git
- github
- gov 2.0
- government
- open government
- open source
---
"Fix typo." Not quite "one small step for man," but a significant first nonetheless. These simple words, typed by an open-source developer operating under the pseudonym "iceeey," may represent the first collaborative effort between the federal government and the broader open-source community, and surely represents a tangible win for the open-government movement as a whole.
The Consumer Financial Protection Bureau (CFPB) is in a unique position. As the youngest federal agency, they have the opportunity to reimagine many day-to-day business processes for an internet era, and to share that innovation across government. One such process is the means by which federal employees apply for and receive subsidies to offset the cost of public transportation to and from work. Having created an application that alleviated the need to shuttle time-consuming, paper-based forms from building to building within their own agency, the Bureau sought to package up the solution, and publicly release the source code for other federal agencies to use and expand upon. The logic was simple: solve the problem once, solve it everywhere.
But the code was not simply made available for government employees to access. The code was placed on [GitHub](http://github.com/) – a popular source code sharing service – for anyone to download and explore, and within days of CFPB publishing its [recently announced Source Code Policy](http://www.consumerfinance.gov/blog/the-cfpbs-source-code-policy-open-and-shared/), someone did just that.GitHub user "iceeey" [submitted a proposed change](https://github.com/cfpb/transit_subsidy/pull/1) – known in developer parlance as "forking the project" and submitting a "pull request" — correcting a misspelling on the form initially presented to new employees ("roundtrip" was accidentally spelled "rountrip").
Admittedly a minor change ("one small step for grammar?"), but notable for the underlying first that it represents: the opportunity to create efficiencies across government by partnering with the broader community of civically engaged developers.
Open-source software (software for which the underlying source code is made publicly available) as a vehicle for a more open and more efficient government is nothing new. Behind the scenes, countless agencies rely on open-source software for various business functions, and many have even chosen to publicly publish the source code underlying the applications that they themselves have built in-house to tackle unique challenges. But this seemingly innocuous missing "d" and its subsequently submitted fix represents the first time a federal agency has directly collaborated with open-source developers to better its own day-to-day tools.
Iceeey has [already submitted his second pull request](https://github.com/cfpb/transit_subsidy/pull/2) ("more typos" he joked with an emoticon smiley), and I hope more agencies and more open-source developers will follow suit. Such collaborations empower agencies to do more with less; put better, more robust tools in the hands of federal employees as they carry out agency mission; and undoubtedly represent a giant-leap forward for a more open and more efficient government.
| 104.69697 | 750 | 0.798553 | eng_Latn | 0.999573 |
dba4a3441f9c0713d6715f641267c7af646b9324 | 783 | md | Markdown | README.md | ewnd9/hyperclick-markdown | 587db23975a33435d59c94d04be83fe77b54608b | [
"MIT"
] | 5 | 2016-06-22T23:42:49.000Z | 2018-01-22T20:47:07.000Z | README.md | ewnd9/hyperclick-markdown | 587db23975a33435d59c94d04be83fe77b54608b | [
"MIT"
] | 4 | 2016-08-15T10:26:46.000Z | 2020-04-14T21:44:00.000Z | README.md | ewnd9/hyperclick-markdown | 587db23975a33435d59c94d04be83fe77b54608b | [
"MIT"
] | null | null | null | # hyperclick-markdown
Ctrl+Click in markdown file to open fs and web urls in the editor.

## Install
```
$ apm install hyperclick-markdown
```
## Development
```sh
$ git clone https://github.com/ewnd9/hyperclick-markdown.git
$ ln -s $PWD/hyperclick-markdown $HOME/.atom/packages/hyperclick-markdown
```
## Test
- ./lib/main.js
- /lib/main.js \<\!-- root of the project
- http://github.com/
- [markdown link without protocol](reddit.com)
- [markdown link to localhost](localhost:8080)
## Problems
- [ ] How to disable plugin for non-markdown files in config?
## Credits
Fork of https://github.com/oclbdk/hyperclick-provider-demo
## License
MIT © [ewnd9](http://ewnd9.com)
| 20.076923 | 98 | 0.719029 | kor_Hang | 0.400516 |
dba5099615ebe739969ca0663c5d0da9f167f67c | 4,335 | md | Markdown | docs/dmx/predictcaselikelihood-dmx.md | Jteve-Sobs/sql-docs.de-de | 9843b0999bfa4b85e0254ae61e2e4ada1d231141 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/dmx/predictcaselikelihood-dmx.md | Jteve-Sobs/sql-docs.de-de | 9843b0999bfa4b85e0254ae61e2e4ada1d231141 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/dmx/predictcaselikelihood-dmx.md | Jteve-Sobs/sql-docs.de-de | 9843b0999bfa4b85e0254ae61e2e4ada1d231141 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: PredictCaseLikelihood (DMX)
title: Prätcaselikelihood (DMX) | Microsoft-Dokumentation
ms.date: 06/07/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: dmx
ms.topic: reference
ms.author: owend
ms.reviewer: owend
author: minewiskan
ms.openlocfilehash: b182e4a3a842065152b050ed1b428d71b7d0cf07
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 08/17/2020
ms.locfileid: "88426122"
---
# <a name="predictcaselikelihood-dmx"></a>PredictCaseLikelihood (DMX)
[!INCLUDE[ssas](../includes/applies-to-version/ssas.md)]
Diese Funktion gibt die Wahrscheinlichkeit zurück, mit der ein Eingabefall in ein vorhandenes Modell passt. Wird nur mit Clustermodellen verwendet.
## <a name="syntax"></a>Syntax
```
PredictCaseLikelihood([NORMALIZED|NONNORMALIZED])
```
## <a name="arguments"></a>Argumente
NORMALIZED
Der Rückgabewert enthält die Wahrscheinlichkeit des Falls im Modell geteilt durch die Wahrscheinlichkeit des Falls ohne Modell.
NONNORMALIZED
Der Rückgabewert enthält die interne Wahrscheinlichkeit des Falls, bei der es sich um das Produkt aus den Wahrscheinlichkeiten der Fallattribute handelt.
## <a name="applies-to"></a>Gilt für
Modelle, die mithilfe der [!INCLUDE[msCoName](../includes/msconame-md.md)] Clustering-und [!INCLUDE[msCoName](../includes/msconame-md.md)] Sequence Clustering-Algorithmen erstellt werden.
## <a name="return-type"></a>Rückgabetyp
Eine Gleitkommazahl doppelter Genauigkeit zwischen 0 und 1. Bei einer Zahl, die näher an 1 liegt, steigt die Wahrscheinlichkeit, dass der Fall in diesem Modell auftritt. Bei einer Zahl, die näher an 0 liegt, sinkt die Wahrscheinlichkeit, dass der Fall in diesem Modell auftritt.
## <a name="remarks"></a>Bemerkungen
Standardmäßig wird das Ergebnis der **prätcaselikelihood** -Funktion normalisiert. Normalisierte Werte sind in der Regel nützlicher, weil die Anzahl von Attributen in einem Fall zunimmt und die Unterschiede zwischen den internen Wahrscheinlichkeiten zweier Fälle erheblich geringer werden.
Die folgende Gleichung dient zur Berechnung der normalisierten Werte, wobei X und Y gegeben sind:
- x = Wahrscheinlichkeit für den Fall auf Grundlage des Clusteringmodells
- y = Marginale Fallwahrscheinlichkeit, berechnet als logarithmische Wahrscheinlichkeit des Falls auf Grundlage der Zählung der Trainingsfälle
- Z = Exp (Log (x)-log (Y))
Normalized = (z/(1 + z))
## <a name="examples"></a>Beispiele
Im folgenden Beispiel wird die Wahrscheinlichkeit zurückgegeben, mit der der angegebene Fall innerhalb des Clustermodells auftritt, das auf der [!INCLUDE[ssSampleDBCoShort](../includes/sssampledbcoshort-md.md)] DW-Datenbank basiert.
```
SELECT
PredictCaseLikelihood() AS Default_Likelihood,
PredictCaseLikelihood(NORMALIZED) AS Normalized_Likelihood,
PredictCaseLikelihood(NONNORMALIZED) AS Raw_Likelihood,
FROM
[TM Clustering]
NATURAL PREDICTION JOIN
(SELECT 28 AS [Age],
'2-5 Miles' AS [Commute Distance],
'Graduate Degree' AS [Education],
0 AS [Number Cars Owned],
0 AS [Number Children At Home]) AS t
```
Erwartete Ergebnisse:
|Default_Likelihood|Normalized_Likelihood|Raw_Likelihood|
|-------------------------|----------------------------|---------------------|
|6,30672792729321E-08|6,30672792729321E-08|9,5824454056846E-48|
Der Unterschied zwischen diesen Ergebnissen veranschaulicht den Effekt der Normalisierung. Der Rohwert für **caselikelihood** deutet darauf hin, dass die Wahrscheinlichkeit der Groß-/Kleinschreibung ungefähr 20 Prozent beträgt. Wenn Sie jedoch die Ergebnisse normalisieren, wird deutlich, dass die Wahrscheinlichkeit des Falls sehr gering ist.
## <a name="see-also"></a>Weitere Informationen
[Data Mining-Algorithmen (Analysis Services Data Mining-)](https://docs.microsoft.com/analysis-services/data-mining/data-mining-algorithms-analysis-services-data-mining)
[Data Mining-Erweiterungen (DMX-) Funktionsreferenz](../dmx/data-mining-extensions-dmx-function-reference.md)
[Funktionen (DMX-)](../dmx/functions-dmx.md)
[Allgemeine Vorhersagefunktionen (DMX-)](../dmx/general-prediction-functions-dmx.md)
| 48.166667 | 346 | 0.749481 | deu_Latn | 0.90627 |
dba523601f379f42851b161ad34a84b83c2819f2 | 2,594 | md | Markdown | README.md | rubenpazch/portfolio | 6eecc001e0949cb0aff7125abfaa14f608530852 | [
"MIT"
] | 1 | 2021-03-07T15:05:40.000Z | 2021-03-07T15:05:40.000Z | README.md | rubenpazch/portfolio | 6eecc001e0949cb0aff7125abfaa14f608530852 | [
"MIT"
] | null | null | null | README.md | rubenpazch/portfolio | 6eecc001e0949cb0aff7125abfaa14f608530852 | [
"MIT"
] | null | null | null | # Building a portfolio project using HTML & CSS path
## Project specifications
This project was built to show my personal information about my path like a software developer using different modern tech, on this project I will like to show some of my fancy projets, I used HTML, CSS, and SASS, stickler, flexbox.
<!-- TABLE OF CONTENTS -->
## Table of Contents
* [Whats is included on this project](#whats-is-included-on-this-project)
* [Built With](#built-with)
* [Screenshot](#screenshot)
* [Live Demo](#live-demo)
* [SASS structure](#SASS-structure)
* [Video Presentation of the project](#video-presentation-of-the-project)
* [Authors](#authors)
* [Acknowledgements](#acknowledgements)
* [Contributing](#-Contributing)
* [License](#license)
## Whats is included on this project
This project includes the next parts:
+ The home page
+ Navigation bar
+ Profile info
+ About me
+ Exprience and Education
+ Project
+ Articles
+ Contact
## Built With
Concepts used on this project
- React
- HTML
- CSS
- SASS
- Javascript
- font-awesome
Tools used on this project
- Stickler CI
- Visual Studio Code
- CSS Formatter
- Stylelint
## Screenshot

## mobile-friendly

## Live Demo
You can see the [live preview](http://rubenpazch.github.io/)
## SASS structure
For this project, I use SASS for managing CSS behavior and have the next structure.
* CSS
* fonts
* img
* js
* scss
For making changes to this project you should run SASS with this command
1. Go to the file where your project is located C:/www/project_root
2. sass --watch scss:css
3. Change files located on SCSS folder
## Video Presentation of the project
You can see the video presentation on the next link [here](https://www.youtube.com/watch?v=SWB-fzTpx5g&t=49s).
## Authors
👤 **Ruben Paz Chuspe**
- Github: [@rubenpazch](https://github.com/rubenpazch)
- Linkedin: [rubenpch](https://www.linkedin.com/in/rubenpch/)
- Twitter: [chuspepaz](https://twitter.com/ChuspePaz)
## Contributing
This is an education project as a part of the Microverse so contributing is not accepted.
Contributions, issues and feature requests are welcome!
Feel free to check the [issues](https://github.com/rubenpazch/my_portfolio/issues).
## Show your support
Give a ⭐️ if you like this project!
## Acknowledgements
+ [Microverse](https://www.microverse.org/).
+ [Github](http://github.com/).
+ [Fontawesome](http://fontawesome.com/).
+ [The Odin Project](theodinproject.com/).
## License
This project is [MIT](lic.url) licensed.
| 20.425197 | 236 | 0.723593 | eng_Latn | 0.944036 |
dba59bf9a0148a738795ad4277c5fb9ea02080ec | 753 | md | Markdown | _blogger-exports/rachelprestonprinz/exported/as-true-in-dating-as-in-business-and.md | Archinia/archinia-com | 7fc04f565853f00691f6eac6a313a001cea053a2 | [
"MIT"
] | null | null | null | _blogger-exports/rachelprestonprinz/exported/as-true-in-dating-as-in-business-and.md | Archinia/archinia-com | 7fc04f565853f00691f6eac6a313a001cea053a2 | [
"MIT"
] | 30 | 2018-03-03T18:26:39.000Z | 2022-01-19T19:14:12.000Z | _blogger-exports/rachelprestonprinz/exported/as-true-in-dating-as-in-business-and.md | Archinia/archinia-com | 7fc04f565853f00691f6eac6a313a001cea053a2 | [
"MIT"
] | null | null | null | ---
title: 'As true in dating as in business and life in general'
date: 2014-12-29T16:51:00.002-07:00
draft: false
slug: as-true-in-dating-as-in-business-and
tags: [Great Quotes]
---
__Great Quote__
__from [http://markmanson.net/fuck-yes](http://markmanson.net/fuck-yes)__
_**"**Remember, it’s your job to look for something cool in everyone you meet; it’s not their job to show you. This is life, not a fucking sales convention. Learning to appreciate people you meet is a skill you cultivate. So [get on it](http://markmanson.net/connection). This doesn’t mean you have to fall in love with everyone who breathes in your direction. It just means you need to take responsibility for your ability to connect with the people you are meeting."_ | 57.923077 | 469 | 0.75166 | eng_Latn | 0.996209 |
dba5a63757055f3bf04c9c960f38a89c417f37b6 | 11,585 | md | Markdown | README.md | yasersharaf/dlrm | 98fb60f150edc3a59facd47e47c01e51094f59c4 | [
"MIT"
] | null | null | null | README.md | yasersharaf/dlrm | 98fb60f150edc3a59facd47e47c01e51094f59c4 | [
"MIT"
] | null | null | null | README.md | yasersharaf/dlrm | 98fb60f150edc3a59facd47e47c01e51094f59c4 | [
"MIT"
] | null | null | null | Deep Learning Recommendation Model for Personalization and Recommendation Systems:
=================================================================================
*Copyright (c) Facebook, Inc. and its affiliates.*
Description:
------------
An implementation of a deep learning recommendation model (DLRM)
The model input consists of dense and sparse features. The former is a vector
of floating point values. The latter is a list of sparse indices into
embedding tables, which consist of vectors of floating point values.
The selected vectors are passed to mlp networks denoted by triangles,
in some cases the vectors are interacted through operators (Ops).
```
output:
probability of a click
model: |
/\
/__\
|
_____________________> Op <___________________
/ | \
/\ /\ /\
/__\ /__\ ... /__\
| | |
| Op Op
| ____/__\_____ ____/__\____
| |_Emb_|____|__| ... |_Emb_|__|___|
input:
[ dense features ] [sparse indices] , ..., [sparse indices]
```
More precise definition of model layers:
1) fully connected layers of an mlp
z = f(y)
y = Wx + b
2) embedding lookup (for a list of sparse indices p=[p1,...,pk])
z = Op(e1,...,ek)
obtain vectors e1=E[:,p1], ..., ek=E[:,pk]
3) Operator Op can be one of the following
Sum(e1,...,ek) = e1 + ... + ek
Dot(e1,...,ek) = [e1'e1, ..., e1'ek, ..., ek'e1, ..., ek'ek]
Cat(e1,...,ek) = [e1', ..., ek']'
where ' denotes transpose operation
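The following NumPy sketch is only an illustration of these three operators; it is not the code used in this repository, whose PyTorch implementation works on whole batches and may differ in details (for example, by keeping only one copy of each symmetric dot product).
```python
import numpy as np

# e: list of k feature vectors (bottom-MLP output and embedding lookups), each of dimension d
def op_sum(e):
    return np.sum(e, axis=0)              # elementwise sum -> shape (d,)

def op_dot(e):
    E = np.stack(e)                       # shape (k, d)
    return (E @ E.T).reshape(-1)          # all pairwise dot products e_i' e_j -> shape (k*k,)

def op_cat(e):
    return np.concatenate(e)              # concatenation -> shape (k*d,)
```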
Cite [Work](http://arxiv.org/abs/1906.00091):
```
@article{DLRM19,
author = {Maxim Naumov and Dheevatsa Mudigere and Hao{-}Jun Michael Shi and Jianyu Huang and Narayanan Sundaraman and Jongsoo Park and Xiaodong Wang and Udit Gupta and Carole{-}Jean Wu and Alisson G. Azzolini and Dmytro Dzhulgakov and Andrey Mallevich and Ilia Cherniavskii and Yinghai Lu and Raghuraman Krishnamoorthi and Ansha Yu and Volodymyr Kondratenko and Stephanie Pereira and Xianjie Chen and Wenlin Chen and Vijay Rao and Bill Jia and Liang Xiong and Misha Smelyanskiy},
title = {Deep Learning Recommendation Model for Personalization and Recommendation Systems},
journal = {CoRR},
volume = {abs/1906.00091},
year = {2019},
url = {http://arxiv.org/abs/1906.00091},
}
```
Related Work:
On the [system architecture implications](http://arxiv.org/abs/1906.03109), with DLRM as one of the benchmarks,
```
@article{ArchImpl19,
author = {Udit Gupta and Xiaodong Wang and Maxim Naumov and Carole{-}Jean Wu and Brandon Reagen and David Brooks and Bradford Cottel and Kim M. Hazelwood and Bill Jia and Hsien{-}Hsin S. Lee and Andrey Malevich and Dheevatsa Mudigere and Mikhail Smelyanskiy and Liang Xiong and Xuan Zhang},
title = {The Architectural Implications of Facebook's DNN-based Personalized Recommendation},
journal = {CoRR},
volume = {abs/1906.03109},
year = {2019},
url = {http://arxiv.org/abs/1906.03109},
}
```
On the [embedding compression techniques (for number of vectors)](https://arxiv.org/abs/1909.02107), with DLRM as one of the benchmarks,
```
@article{QuoRemTrick19,
author = {Hao{-}Jun Michael Shi and Dheevatsa Mudigere and Maxim Naumov and Jiyan Yang},
title = {Compositional Embeddings Using Complementary Partitions for Memory-Efficient Recommendation Systems},
journal = {CoRR},
volume = {abs/1909.02107},
year = {2019},
url = {https://arxiv.org/abs/1909.02107},
}
```
On the [embedding compression techniques (for dimension of vectors)](https://arxiv.org/abs/1909.11810), with DLRM as one of the benchmarks,
```
@article{MixDimTrick19,
author = {Antonio Ginart and Maxim Naumov and Dheevatsa Mudigere and Jiyan Yang and James Zou},
title = {Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems},
journal = {CoRR},
volume = {abs/1909.11810},
year = {2019},
url = {https://arxiv.org/abs/1909.11810},
}
```
Implementation
--------------
**DLRM PyTorch**. Implementation of DLRM in PyTorch framework:
dlrm_s_pytorch.py
**DLRM Caffe2**. Implementation of DLRM in Caffe2 framework:
dlrm_s_caffe2.py
**DLRM Data**. Implementation of DLRM data generation and loading:
dlrm_data_pytorch.py, dlrm_data_caffe2.py, data_utils.py
**DLRM Tests**. Implementation of DLRM tests in ./test
dlrm_s_test.sh
**DLRM Benchmarks**. Implementation of DLRM benchmarks in ./bench
dlrm_s_benchmark.sh, dlrm_s_criteo_kaggle.sh
Related Work:
On the [Glow framework](https://github.com/pytorch/glow) implementation
```
https://github.com/pytorch/glow/blob/master/tests/unittests/RecommendationSystemTest.cpp
```
On the [FlexFlow framework](https://github.com/flexflow/FlexFlow) distributed implementation with Legion backend
```
https://github.com/flexflow/FlexFlow/blob/master/examples/DLRM/dlrm.cc
```
How to run dlrm code?
--------------------
1) A sample run of the code, with a tiny model is shown below
```
$ python dlrm_s_pytorch.py --mini-batch-size=2 --data-size=6
time/loss/accuracy (if enabled):
Finished training it 1/3 of epoch 0, -1.00 ms/it, loss 0.451893, accuracy 0.000%
Finished training it 2/3 of epoch 0, -1.00 ms/it, loss 0.402002, accuracy 0.000%
Finished training it 3/3 of epoch 0, -1.00 ms/it, loss 0.275460, accuracy 0.000%
```
2) A sample run of the code, with a tiny model in debug mode
```
$ python dlrm_s_pytorch.py --mini-batch-size=2 --data-size=6 --debug-mode
model arch:
mlp top arch 3 layers, with input to output dimensions:
[8 4 2 1]
# of interactions
8
mlp bot arch 2 layers, with input to output dimensions:
[4 3 2]
# of features (sparse and dense)
4
dense feature size
4
sparse feature size
2
# of embeddings (= # of sparse features) 3, with dimensions 2x:
[4 3 2]
data (inputs and targets):
mini-batch: 0
[[0.69647 0.28614 0.22685 0.55131]
[0.71947 0.42311 0.98076 0.68483]]
[[[1], [0, 1]], [[0], [1]], [[1], [0]]]
[[0.55679]
[0.15896]]
mini-batch: 1
[[0.36179 0.22826 0.29371 0.63098]
[0.0921 0.4337 0.43086 0.49369]]
[[[1], [0, 2, 3]], [[1], [1, 2]], [[1], [1]]]
[[0.15307]
[0.69553]]
mini-batch: 2
[[0.60306 0.54507 0.34276 0.30412]
[0.41702 0.6813 0.87546 0.51042]]
[[[2], [0, 1, 2]], [[1], [2]], [[1], [1]]]
[[0.31877]
[0.69197]]
initial parameters (weights and bias):
[[ 0.05438 -0.11105]
[ 0.42513 0.34167]
[-0.1426 -0.45641]
[-0.19523 -0.10181]]
[[ 0.23667 0.57199]
[-0.16638 0.30316]
[ 0.10759 0.22136]]
[[-0.49338 -0.14301]
[-0.36649 -0.22139]]
[[0.51313 0.66662 0.10591 0.13089]
[0.32198 0.66156 0.84651 0.55326]
[0.85445 0.38484 0.31679 0.35426]]
[0.17108 0.82911 0.33867]
[[0.55237 0.57855 0.52153]
[0.00269 0.98835 0.90534]]
[0.20764 0.29249]
[[0.52001 0.90191 0.98363 0.25754 0.56436 0.80697 0.39437 0.73107]
[0.16107 0.6007 0.86586 0.98352 0.07937 0.42835 0.20454 0.45064]
[0.54776 0.09333 0.29686 0.92758 0.569 0.45741 0.75353 0.74186]
[0.04858 0.7087 0.83924 0.16594 0.781 0.28654 0.30647 0.66526]]
[0.11139 0.66487 0.88786 0.69631]
[[0.44033 0.43821 0.7651 0.56564]
[0.0849 0.58267 0.81484 0.33707]]
[0.92758 0.75072]
[[0.57406 0.75164]]
[0.07915]
DLRM_Net(
(emb_l): ModuleList(
(0): EmbeddingBag(4, 2, mode=sum)
(1): EmbeddingBag(3, 2, mode=sum)
(2): EmbeddingBag(2, 2, mode=sum)
)
(bot_l): Sequential(
(0): Linear(in_features=4, out_features=3, bias=True)
(1): ReLU()
(2): Linear(in_features=3, out_features=2, bias=True)
(3): ReLU()
)
(top_l): Sequential(
(0): Linear(in_features=8, out_features=4, bias=True)
(1): ReLU()
(2): Linear(in_features=4, out_features=2, bias=True)
(3): ReLU()
(4): Linear(in_features=2, out_features=1, bias=True)
(5): Sigmoid()
)
)
time/loss/accuracy (if enabled):
Finished training it 1/3 of epoch 0, -1.00 ms/it, loss 0.451893, accuracy 0.000%
Finished training it 2/3 of epoch 0, -1.00 ms/it, loss 0.402002, accuracy 0.000%
Finished training it 3/3 of epoch 0, -1.00 ms/it, loss 0.275460, accuracy 0.000%
updated parameters (weights and bias):
[[ 0.0543 -0.1112 ]
[ 0.42513 0.34167]
[-0.14283 -0.45679]
[-0.19532 -0.10197]]
[[ 0.23667 0.57199]
[-0.1666 0.30285]
[ 0.10751 0.22124]]
[[-0.49338 -0.14301]
[-0.36664 -0.22164]]
[[0.51313 0.66663 0.10591 0.1309 ]
[0.32196 0.66154 0.84649 0.55324]
[0.85444 0.38482 0.31677 0.35425]]
[0.17109 0.82907 0.33863]
[[0.55238 0.57857 0.52154]
[0.00265 0.98825 0.90528]]
[0.20764 0.29244]
[[0.51996 0.90184 0.98368 0.25752 0.56436 0.807 0.39437 0.73107]
[0.16096 0.60055 0.86596 0.98348 0.07938 0.42842 0.20453 0.45064]
[0.5476 0.0931 0.29701 0.92752 0.56902 0.45752 0.75351 0.74187]
[0.04849 0.70857 0.83933 0.1659 0.78101 0.2866 0.30646 0.66526]]
[0.11137 0.66482 0.88778 0.69627]
[[0.44029 0.43816 0.76502 0.56561]
[0.08485 0.5826 0.81474 0.33702]]
[0.92754 0.75067]
[[0.57379 0.7514 ]]
[0.07908]
```
Testing
-------
Testing scripts to confirm functional correctness of the code
```
./test/dlrm_s_test.sh
Running commands ...
python dlrm_s_pytorch.py
python dlrm_s_caffe2.py
Checking results ...
diff test1 (no numeric values in the output = SUCCESS)
diff test2 (no numeric values in the output = SUCCESS)
diff test3 (no numeric values in the output = SUCCESS)
diff test4 (no numeric values in the output = SUCCESS)
```
*NOTE: Testing scripts accept extra arguments which will be passed along, such as --use-gpu*
Benchmarking
------------
1) Performance benchmarking
```
./bench/dlrm_s_benchmark.sh
```
2) The code supports interface with the [Kaggle Display Advertising Challenge Dataset](https://labs.criteo.com/2014/09/kaggle-contest-dataset-now-available-academic-use/).
Please do the following to prepare the dataset for use with DLRM code:
- First, specify the raw data file (train.txt) as downloaded with --raw-data-file=<path/train.txt>
- This is then pre-processed (categorized, concatenated across days, ...) to allow use with the dlrm code
- The processed data is stored as a *.npz file in <root_dir>/input/*.npz (see the loading sketch below)
- The processed file (*.npz) can be used for subsequent runs with --processed-data-file=<path/*.npz>
```
./bench/dlrm_s_criteo_kaggle.sh
```
<img src="./kaggle_dac_loss_accuracy_plots.png" width="900" height="320">
*NOTE: Benchmarking scripts accept extra arguments which will be passed along, such as --num-batches=100 to limit the number of data samples*
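For reference, here is a rough, hypothetical sketch of how one might open the processed `*.npz` file produced above with NumPy and inspect what was stored. The file name below is only an assumption (point it at whatever `--processed-data-file` uses on your machine), and the array names inside the archive are defined by the preprocessing script, so inspect them before relying on any of them.
```python
import numpy as np

# Path to the pre-processed Criteo file produced by the steps above
# (adjust to wherever --processed-data-file points on your machine).
processed_file = "input/kaggleAdDisplayChallenge_processed.npz"

with np.load(processed_file, allow_pickle=True) as data:
    # List every array stored in the archive; the exact names depend on
    # the preprocessing script version, so print them before using them.
    for name in data.files:
        arr = data[name]
        shape = getattr(arr, "shape", None)
        dtype = getattr(arr, "dtype", type(arr).__name__)
        print(f"{name}: shape={shape}, dtype={dtype}")
```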
Model checkpoint saving/loading
-------------------------------------------------
During training, the model can be saved using --save-model=<path/model.pt>
The model is saved if there is an improvement in test accuracy (which is checked at --test-freq intervals).
A previously saved model can be loaded using --load-model=<path/model.pt>
Once loaded, the model can be used to continue training, with the saved model being a checkpoint.
Alternatively, the saved model can be used to evaluate only on the test data-set by specifying the --inference-only option.
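As a quick illustration of the flags above, here is a minimal way to peek inside a checkpoint written via --save-model. It assumes the file is a regular torch.save() dictionary; the exact keys (weights, optimizer state, iteration counters, ...) are defined by dlrm_s_pytorch.py and may differ between versions, so treat them as unknowns to inspect rather than documented fields.
```python
import torch

# Inspect a checkpoint written via --save-model=<path/model.pt>.
checkpoint = torch.load("model.pt", map_location="cpu")

if isinstance(checkpoint, dict):
    for key, value in checkpoint.items():
        # Tensors report their shape, everything else just its type.
        summary = value.shape if hasattr(value, "shape") else type(value).__name__
        print(f"{key}: {summary}")
else:
    print(type(checkpoint))
```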
Version
-------
0.1 : Initial release of the DLRM code
Requirements
------------
pytorch-nightly (*6/10/19*)
onnx (*optional*)
torchviz (*optional*)
License
-------
This source code is licensed under the MIT license found in the
LICENSE file in the root directory of this source tree.
| 35 | 484 | 0.655589 | eng_Latn | 0.7293 |
dba5b16aac4e95d2c159e95d2e2e5bf169ff3ffb | 1,939 | md | Markdown | _services/tax-preperation.md | suryarajendhran/jekyll-serif-theme | e5ae4a3080fb8b9daba0b356b658c10d12290ad1 | [
"MIT"
] | null | null | null | _services/tax-preperation.md | suryarajendhran/jekyll-serif-theme | e5ae4a3080fb8b9daba0b356b658c10d12290ad1 | [
"MIT"
] | null | null | null | _services/tax-preperation.md | suryarajendhran/jekyll-serif-theme | e5ae4a3080fb8b9daba0b356b658c10d12290ad1 | [
"MIT"
] | null | null | null | ---
title: "Tax Preperation"
date: 2018-11-18T12:33:46+10:00
featured: true
weight: 6
---
More often than not, the tax filing and income tax returns season is quite nerve-racking for both individual taxpayers as well as businesses. Most businesses fail to realize that having enough time for tax preparation allows them to put their documents in order, and reduce the overall chances of any errors from occurring.
- Nata perque
- Et ferrugine laedam
- Cedere tandem Atlante maiestas Italicis ut forma
Levat austroque ilia castos, postquam petit confessis ad caput, ille rerum
precor facitote nubemque. Potuit Celadon Martem?
1. Imagine Assaracus victori petet femina mea haustos
2. Sicaniam quibus agro magni
3. In utque Troica pedum caelestia hunc tempto
4. Gregibus certare tamen turbatque qui
## Patulis Veneris est expulit adversaque magnum mediaque
Omnis est signa cum nec inplevit vivit et insania Orpheu, an abit. Nimbi
subversaque et micant suumque, tibi ipse; sed. **Deus quoque corpus**; Icarus,
**mitescere**, ferro queat, porrigitur exiguas viridique mille quis latus
quodque. Non una genuisse ullis efficiet ad corpore nunc mentesque praestant?
_Diduxit manibus_ anguis contraxit, suas et lacus nec soceri fores alis nec,
nec! Data pater Perseu minanti animam operitur illa dolorem.
Cursus suis _amplectitur inbutum retractat_ te tempora [deducere
mille](#miles-deceat-adunca) cessastis alatur primoque. Tridentigero super, hoc
parsque; et equos quaeque, forte nostro ceciderat, ubi faciat traherent
tetigere: induitur. Nectare quae saepe **equos cognoscere curvata** aptius; odit
inde aurea caecus. Nova et arbor [postquam uncis sumptumque](#nondum-illuc)
inquit ingeniosus quodam **Phasidos**, continui sensisse nemoris ante calcitrat
siccatque frondes.
Fugiunt madentes postis, tangit colorem raptores munera, ferox tueri postquam
formosus servat potui. Luce ebur, pulcherrimus plus tradere! _Quam perque
semper_?
| 45.093023 | 323 | 0.805054 | eng_Latn | 0.198083 |
dba5cda267b3b26f798442bf46983c4bbea7f6a8 | 7,826 | md | Markdown | paper/paper_note/Explainable Deep One-Class Classification.md | hqabcxyxz/note | 3dc9e684802ce4561ad90a204350e72867d4f2eb | [
"Apache-2.0"
] | 1 | 2021-04-21T07:26:20.000Z | 2021-04-21T07:26:20.000Z | paper/paper_note/Explainable Deep One-Class Classification.md | captainfffsama/note | b80e3102ea74f6867597836b5c3265d0e83a484b | [
"Apache-2.0"
] | null | null | null | paper/paper_note/Explainable Deep One-Class Classification.md | captainfffsama/note | b80e3102ea74f6867597836b5c3265d0e83a484b | [
"Apache-2.0"
] | null | null | null | #异常检测
[toc]
# Explainable Deep One-Class Classification
- 代码: https://github.com/liznerski/fcdd
- 文章: https://openreview.net/forum?id=A5VV3UyIQz
- 会议: ICLR 2021
## 摘要
用于异常检测的深度单类分类旨在学习一个映射将正常样本在特征空间中尽量聚集,从而将异常样本直接映射出去.由于这种变换是一个高度非线性的,因此对其进行解释是十分困难的.本文,我们将提出一种可解释的深度单类分类方法,全卷积数据描述(Fully Convolutional Data Description,FCDD),它会将样本的特征提取出来形成一个热力图解释. FCDD 在异常检测中表现不错,且为 CIFAR-10 和 ImageNet 等一些常见基准数据集提供了合理的解释.在 MVTec-AD 下,FCDD 在无监督的情况下达到了 SOTA. 在训练过程中结合 gt 标注的异常解释,仅是使用少量的这样的数据,也可以显著提高模型性能.最后利用 FCDD 的可解释性,我们论证了单类分类模型对于诸如水印等图像特征的无力.
## 1 引言
关于 anomaly detection(AD) 的历史.略.
深度支持向量数据描述(deep support vector data description,DSVDD)方法,旨在寻找一个网络模型将正常数据变换聚集到一个特定的中心,而异常数据位于其他位置.而本文提出的 FCDD 则是其一个变体,转换之后的图像的本身就是下采样之后的异常热力图.热力图中距离中心远的像素就是输入图像中的异常区域. FCDD 仅使用卷积和池化层,这可以限制输出像素的感受野.本文方法属于单类分类这一框架.
在 CIFAR-10 和 ImageNet 上, FCDD 的异常检测性能达到了 SOTA 并提供了可解释性.在 MVTec-AD 上,我们展示了 FCDD 可解释性的精准度.并且在后续实验中,我们发现了深度单类分类模型容易产生 ["Clever Hans" 效应](https://arxiv.org/abs/1902.10178)(即观察者期望效应),比如检测了一些虚假的特征,比如水印.我们还发现生成的异常热力图比起包括基于梯度的基线模型结果来说,噪声更少且结构性更多.

## 2 相关工作
本章我们将综述一些 AD 工作,并将重点放在其可解释性上.比如经典的使用自编码器的,通过在正常样本上训练使得自编码器在异常样本上重建性能很差,将重建误差作为异常分数,并可以找到像素级的区别予以解释.从而提供异常热力图.最近的一些工作更是将注意力引入到重建模型中,用以解释.在视频领域, Sabokrou 等人使用了预训练的全卷积结构,结合稀疏自编码器来提取 2D 特征并定位异常.重建方法的一个缺点是,对于已知的异常,它们无法在训练过程中充分利用.
最近,单类分类方法兴起.这类方法尝试使用无监督的方式将正常和异常样本分离,通过网络尝试将正常样本进行聚类,将异常样本排斥.在 NLP 领域,DSVDD 已经被成功用于文本,并由此产生了使用注意力机制来进行解释这一形式.对于图像, Kauffmann 则尝试使用深度泰勒分解来获得相关分数.
表现最棒的是一些基于自监督的方法.这些方法将正常样本进行变换,然后训练一个网络来预测何种变换被应用到了输入上,并通过预测的置信度来获得异常分数,并能够拓展到已知的异常上,但目前,这类方法的可解释性很差.
当然,通过来说解释的方法有很多种,比如和模型无关的 LIME 或是基于梯度的方法.对于本文而言,我们注意到全卷积架构常被用于监督性的分割任务,且在训练过程中需要目标分割图.
## 3 解释深度单类分类方法
在开始解释我们的方法之前,我们先回顾一下单类分类和全卷积架构.
**深度单类分类方法:**
深度单类分类通过学习一个网络将正常样本映射到输出空间中心 c 周围,将异常样本映射掉的思想来进行异常检测.对于我们的方法,我们使用了[超球面分类器(Hypersphere Classifier,HSC)](https://arxiv.org/abs/2006.00339),该方法是 Deep SAD 的一个变体, DSVDD 的半监督版本.使用 $X_1,...,X_n$来表示样本集,$y_1,...,y_n$表示标签,$y_i=1$是异常,$y_i=0$是正常,那么 HSC 的优化目标为: ^a639b1
$$
\underset{W,c}{min} \frac{1}{n} \sum_{i=1}^{n}{(1-y_i)h(\phi(X_i;W)-c)-y_ilog(1-exp(-h(\phi(X_i;W)-c)))} \tag{1}
$$
关于函数的分析见下面[关于超球分类器函数的分析](#关于超球分类器函数的分析)
整体函数的函数图如下:

这里 $c \in R^d$ 输出空间中心,$\phi : R^{c \times h \times w} \to R^d$ 为神经网络,权值为$W$. $h$ 为 [pseudo-Huber 损失](../../DL_knowlege/Huder%20loss.md#Pseudo-Huber%20loss%20function), $h(a)=\sqrt{||a||_2^2+1}-1$,即使用一个可导的二次惩罚来近似一个线性惩罚. HSC 损失鼓励正常样本聚集到 $c$,异常样本远离 $c$ .在本文实现中,中心 $c$ 对应于网络最后一层的偏置项.所以这个中心是包含在网络$\phi$中的,因此我们在 FCDD 的描述将忽略这点.
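For concreteness, a minimal PyTorch-style sketch of Eq. (1). This is an illustration only (the reference implementation is the linked FCDD repository); the tensor shapes, the stability epsilon and the mean reduction are assumptions, and the center is taken as given (e.g. the bias of the last layer).

```python
import torch

def pseudo_huber(a, dim=-1):
    # h(a) = sqrt(||a||^2 + 1) - 1: a differentiable, asymptotically linear penalty
    return torch.sqrt(a.pow(2).sum(dim=dim) + 1.0) - 1.0

def hsc_loss(features, center, labels):
    """Hypersphere-classifier loss in the spirit of Eq. (1).

    features: (n, d) network outputs phi(X; W)
    center:   (d,)   the center c (e.g. the bias of the final layer)
    labels:   (n,)   0 = normal, 1 = anomalous
    """
    dist = pseudo_huber(features - center)                 # (n,)
    normal_term = (1.0 - labels) * dist                    # pull normal samples toward c
    # push anomalies away from c; small epsilon keeps the log finite
    anom_term = -labels * torch.log(1.0 - torch.exp(-dist) + 1e-9)
    return (normal_term + anom_term).mean()
```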
**全卷积架构:**
全卷积网络将图片映射成一个矩阵,比如$\phi: R^{c \times h \times w} \to R^{1 \times u \times v}$,它一般仅包含卷积层和池化层,不包含全连接层.这种情况下,池化可以被视为是固定权重的卷积.
显然全卷积网络是保留了空间信息的.
**全卷积数据描述**
由此,我们引入 FCDD.结合 FCN 和 HSC,FCDD 的输出特征可以保留空间信息,且可以作为下采样之后的异常分数热力图.对于那些需要全分辨率热力图的情况,我们提供了一种基于感受野特性的低分辨率热力图上采样方法.

FCDD 使用标注的数据进行训练,使用 $X_1,...,X_n$来表示样本集,$y_1,...,y_n$表示标签,$y_i=1$是异常,$y_i=0$是正常,异常数据可以是任何不来自正常样本集合的随机图片,比如 Tiny Image 或者是 ImageNet. 使用这种辅助数据在推荐系统的异常检测中比较常见,我们称之为离群点暴露(Outlier Exposure,OE).当我们可以拿到一个来自异常数据集的真实样本时,这代表着在实际测试中我们也很可能会遇到,我们发现即使使用少量甚至几个标注异常作为语料库,模型性能表现也很不错.(这句没太理解清楚).换言之,即使没有任何的已知异常实例,我们可以合成人工异常样本,这也是很有效的.
FCDD 在 FCN 的输出上,针对每个像素使用 HSC, 即对于最终输出特征图上每个点 $A(X)=(\sqrt{\phi(X;W)^2+1}-1)$.那么FCDD 的优化目标是:
$$
\underset{W}{min} \frac{1}{n} \sum_{i=1}^{n}{(1-y_i) \frac{1}{u \cdot v} ||A(X_i)||_1- y_i log(1-exp(-\frac{1}{u \cdot v} ||A(X_i)||_1))} \tag{2}
$$
其中$||A(X)||_1$是所有像素$A(X)$的和,且皆为正值.异常样本将最大化$||A(X)||_1$,而正常样本将尝试最小化它,因此我们将其作为异常分数.而增大$||A(X)||_1$的$A(X)$所对应的区域就可能是异常区域.而这些区域的形状取决于FCN.在[附录A]()中展示了我们对于感受野的敏感性分析,我们发现感受野大大小对性能影响不大.注意最终输出特征图大小是$u \times v$,原始图片大小是$h \times w$.直接使用最终输出作为一个低分辨率热力图当然是可以的,但是通常我们希望有一个全分辨率的热力图.由于在训练中,我们通常没有异常区域的gt,因此通过监督学习得到一个FCN将低分率热力图上采样到原始分辨率是几乎不可能的.为此,我们基于感受野策略,提出了一个上采样策略.
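A matching sketch of the image-level FCDD objective of Eq. (2), again illustrative only. It assumes the FCN output has shape (n, 1, u, v) and uses the per-sample score (1/(u*v)) * ||A(X)||_1 described above; the epsilon is an added numerical safeguard.

```python
import torch

def fcdd_loss(fcn_out, labels):
    """Image-level FCDD objective in the spirit of Eq. (2).

    fcn_out: (n, 1, u, v) raw output of the fully convolutional network
    labels:  (n,) 0 = normal, 1 = anomalous
    Returns the loss and the per-sample anomaly scores.
    """
    # A(X) = sqrt(phi(X)^2 + 1) - 1, applied per output pixel (all entries >= 0)
    heatmap = torch.sqrt(fcn_out.pow(2) + 1.0) - 1.0       # (n, 1, u, v)
    score = heatmap.flatten(1).mean(dim=1)                  # (1/(u*v)) * ||A(X)||_1
    normal_term = (1.0 - labels) * score
    anom_term = -labels * torch.log(1.0 - torch.exp(-score) + 1e-9)
    loss = (normal_term + anom_term).mean()
    return loss, score
```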
**热力图上采样:**
由于我们在训练时没有异常区域的 gt,因此无法使用监督学习训练类似反卷积的结构.对于输出$A(X)$的每个像素都有对应有一个感受野中心的输入像素. Luo 揭示了,输入像素对输出像素的影响以感受野中心,以高斯分布衰减.基于此,我们使用一个权重为固定高斯分布的反卷积来上采样$SA(X)$,反卷积核为感受野大小,步长为累计步长.权重分布的方差是按照经验来选取,详情见[附录B]().图3显示了整个方法的结构.

## 4 实验
本节将对 FCDD 进行定量和定性实验评价.定量评价时,我们使用 AUC 作为评价指标.定性评价,我们将 FCDD 和其他深度 AD 解释性方法比较.对于基线模型,我们将基于梯度的方法结合 HSC 作为基线,另外还使用自动编码器的重建方法也作为基线.对于基线模型的低分辨率特征图,我们使用和 FCDD 一样的上采样方法. [附录G]()显示了不用我们提出的上采样方法的效果.[附录C]()我们对已不同热力图上采样方法.对于我们的实验,我们将忽略于模型无关的解释方法,比如 LIME 或是 anchors,因为它们不是针对 AD 量身定制且性能较差.
### 4.1 标准异常检测基准
使用了 Fashion-MNIST,CIFAR-10 和 ImageNet.做法是设置一类为正常,其它类都是异常.训练是仅仅使用正常类以及一些随机图片用来做离群点暴露,对于每类我们使用 AUC 作为评价指标.
**Fashion-MNIST**.我们使用 EMNIST 作为训练集, CIFAR-100作为 OE.网路结构是三卷积+BN+2池化.
**CIFAR-10** 同样使用 CIFAR-100 中和 CIFAR-10 无关的类作为OE,网络结构类似 LeNet-5,但是所有卷积都是 3x3,添加BN,将最后的全链接和最大池化换成2个卷积.
**ImageNet** 使用 ImageNet1k 中30个类作为正常训练和测试.使用 ImageNet22k 中和 1k 无关的类作为 OE. 使用类似 VGG11 的结构,输入缩放成224x224.详情见[附录D]()
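One plausible reading of the CIFAR-10 backbone described above (LeNet-5-like, all 3x3 convolutions, BN added, the final fully connected layer and max pooling replaced by two convolutions), sketched only for orientation. The actual channel widths and activation choices are in the released code and Appendix D, so the numbers below are guesses.

```python
import torch.nn as nn

# Hypothetical FCDD backbone for 32x32 inputs; channel widths are assumptions.
cifar_fcn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.LeakyReLU(),
    nn.MaxPool2d(2),                                    # 32x32 -> 16x16
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(),
    nn.MaxPool2d(2),                                    # 16x16 -> 8x8
    nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(),
    nn.Conv2d(128, 1, 3, padding=1),                    # 1-channel low-res heatmap
)
```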
**SOTA 方法**

**定量结果**
结合上表, FCDD 虽然使用了比较龊的 FCN 结构,但是性能依然接近 SOTA 方法,且解释性不错,而自编码器结果则出现了类似随机预测的结果.

**定性结果**
图4和5显示了 Fashion-MNIST 和 ImageNet 上的热力图.在 Fashion-MNIST 上使用裤子作为正常类, FCDD 倾向于将水平排列的区域识别为异常,这是正常的,因为裤子大多是垂直排列的.对于 ImageNet 方法使用橡树果作为正常样本,我们可以看到似乎颜色是更加重要的特征,绿色和棕色区域往往趋于认为是正常的,而其他颜色则被认为是异常的,比如红色的谷仓和白色的雪.不仅如此,它似乎也可以利用更多的语义特征,比如识别绿色的毛毛虫是异常的,而橡树果即使是在红色的背景下,也依然能够识别对.

图6显示CIFAR-10模型在不同OE下的热力图,正常样本是飞机,我们可以观察到,随着 OE 样本的增加, FCDD 更加倾向于对图中主要目标进行解释,比如鸟,船,卡车.在[附录G]()中展示了我们对所有类进行热力图的测试结果.

**基线解释**
参见图6,我们发现基于梯度的方法容易产生和空间上下文无关的中心斑点.而 AE的热力图是基于重建的,和异常分数直接相关,反而看上去更加合理.我们还发现,将 OE 或者是标记好的异常纳入到AE并不容易,这使得AE的性能较差.简而言之, FCDD 的异常热力图更加良好且具有一致的可解释性.
### 4.2 关于人工合成缺陷的解释
这里我们在 MVTec-AD 数据集上人工制造缺陷,并再次测试了 FCDD 的性能.该数据集提供了缺陷的 gt 掩模,因此可以对模型解释进行定量评估. MVTec-AD 包含了15个对象对,所有图片都是RGB图片,分辨率都是1024,其中异常测试样本进一步分为8中缺陷类型,具体取决于具体分类. 我们借鉴 Bergmann, 计算热烈图像素分的 AUC,使用给定的缺陷gt掩模作为量化对照标签.对于 FCDD,我们使用了基于 VGG11 的模型,从ImageNet预训练,冻结了前10层,添加了一个额外的全卷积层.
**人工合成缺陷**
由于这里的异常是类间微妙的缺陷,而不是超出类别的,因此使用诸如 ImageNet 这样的自然图像数据集作为OE是没有帮助的.处于这个原因,我们通过使用一种"confetti noise"来制造缺陷,这是一个简单的噪声模型,它将颜色块插入到图像中来反映局部的缺陷变化,如图7.

**半监督 FCDD**
和重建方法相比,FCDD 可以很容易的应用到半监督领域.为了测试少量标注的异常对模型训练的指导效果,我们在 MVTec-AD 训练时,对于每个类选取了一个真实的缺陷样本到训练集,这样大约有3~8张缺陷样本被混入到了每类的训练集中.为了充分利用 gt 标注,我们以像素等级来训练模型.设定 $X_1,...,X_n$ 为输入,对应的gt是$Y_1,...,Y_n$,每个样本有 $m=h \cdot w$个像素.设定$A(X)$表示$X$对于的异常热力图.那么我们像素级的优化目标变成了:
$$
\underset{W}{min} \frac{1}{n} \sum_{i=1}^n \left( \frac{1}{m}\sum_{j=1}^{m}(1-(Y_i)_j)A'(X_i)_j - log(1 - exp(- \frac{1}{m} \sum_{j=1}^m{ (Y_i)_j A' (X_i)_j})) \right) \tag{3}
$$
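A sketch of the pixel-supervised variant in Eq. (3), assuming full-resolution heatmaps A'(X) and binary ground-truth masks of the same shape. The guard for samples without any annotated anomalous pixels is an extra safety measure, not part of the equation.

```python
import torch

def fcdd_pixel_loss(full_res_heatmap, masks):
    """Pixel-supervised FCDD objective in the spirit of Eq. (3).

    full_res_heatmap: (n, 1, h, w) upsampled heatmap A'(X), non-negative
    masks:            (n, 1, h, w) ground-truth anomaly masks Y (0/1)
    """
    a = full_res_heatmap.flatten(1)            # (n, m) with m = h*w
    y = masks.flatten(1).float()               # (n, m)
    normal_term = ((1.0 - y) * a).mean(dim=1)  # push un-annotated pixels toward 0
    anom_score = (y * a).mean(dim=1)           # (1/m) * sum over annotated pixels
    # Guard (not in Eq. 3): the log term is only well defined for samples
    # that actually contain annotated anomalous pixels.
    has_anom = (y.sum(dim=1) > 0).float()
    anom_term = -has_anom * torch.log(1.0 - torch.exp(-anom_score) + 1e-9)
    return (normal_term + anom_term).mean()
```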
**结果**
引言中的图1展示了 FCDD 在 MVTec-AD 中的热力图.量化结果见表2.可以看到在无监督的情况下,FCDD 是优于其他方法,AUC 高达 0.92.在半监督中,若仅使用一个异常样本,AUC 可以到 0.96,同时 FCDD 在所有类中的性能很稳定一致.

### 4.3 Clever Hans 效应
Lapuschkin 等人发现在 VOC 中,有五分之一马的图片在左下角包含了水印,当删除水印时,分类器会识别失败.他们将之称之为 "Clever Hans"效应.我们尝试将马类设置为异常,将 ImageNet 设置为正常样本来训练 FCDD.我们期望 FCDD 的热力图应该会出现在马身上,但是结果确实 FCDD 也出现了聪明汉斯效应.

图8b 显示了单类分类模型很容易学到一些虚假特征,比如水印等.我们还发现了,该模型在图形中的条,格和栅栏中得分很高,如图8a.这可能是由于数据集中很多图像包含了马跳过栅栏围栏.在这两种情况下,马的本体特征并没有取得最高的分数,因为模型无法在训练时得知哪些是虚假的特征,这些特征通常在训练是提供了较好的区分能力,但是在测试时却不管用.比起黑盒模型,诸如 FCDD 这类透明模型可以更好的帮助从业者分辨出这类不良情况.
## 5 结论
综上,我们发现 FCDD 比以前方法性能更好,既可以适用于语义检测任务(4.1节),也可以用来做更加细微的缺陷检测任务(4.2节).最后,与其他后验解释的任务相比,FCDD将异常分数和解释直接绑定,更不容易受到攻击.我们将在下一阶段工作中分析这种现象.
## 附录待补
## 关于超球分类器函数的分析
设定$c=1$,则$log(1-exp(x-c)))$部分的函数图下如下:

可以看到,假设
样本距离中心近,预测为正常,那么$y_i=0$,$h(\phi(X_i;W)$近似为$\frac{x^2}{2}$,那么此刻损失大约为$\frac{c^2}{2}-c$ ;
若样本距离中心近,预测为异常,那么此时损失大约是$log(1-exp(-\frac{c^2}{2}-c)))$,其中$-\frac{c^2}{2}-c$为一个很小的值,那么根据$-log(1-exp(x))$可知道此时损失很大, 近乎是垂直的指数.
).png)
若样本距离中心远,预测为异常,后半部分大约是$-log(1-exp(-x-c)))$,其中$-x-c$较大,根据上图,产生的loss是很小的.
若样本距离中心远,预测为正常,后半部分大约是$-x-c$,依然是一个很大的 Loss.
| 57.97037 | 360 | 0.770381 | yue_Hant | 0.609426 |
dba6699000e09997b69e0584fd9fa8dac5d7d59c | 746 | md | Markdown | docs/capture_virus/star_virus.md | pigraul/CeleScope | c527d72405a6c7c1f4976989919479ec4bdadb48 | [
"MIT"
] | null | null | null | docs/capture_virus/star_virus.md | pigraul/CeleScope | c527d72405a6c7c1f4976989919479ec4bdadb48 | [
"MIT"
] | null | null | null | docs/capture_virus/star_virus.md | pigraul/CeleScope | c527d72405a6c7c1f4976989919479ec4bdadb48 | [
"MIT"
] | 1 | 2021-10-08T06:53:47.000Z | 2021-10-08T06:53:47.000Z |
## Arguments
`--genomeDir` Required. Genome directory.
`--outFilterMatchNmin` Default `0`. Alignment will be output only if the number of matched bases
is higher than or equal to this value.
`--out_unmapped` Output unmapped reads.
`--STAR_param` Other STAR parameters.
`--outFilterMultimapNmax` Default `1`. How many places are allowed to match a read at most.
`--starMem` Default `30`. Maximum memory that STAR can use.
`--fq` Required. R2 fastq file.
`--consensus_fq` Input fastq has been consensused.
`--outdir` Output diretory.
`--assay` Assay name.
`--sample` Sample name.
`--thread` Thread to use.
`--debug` If this argument is used, celescope may output additional files for debugging.
`--virus_genomeDir` virus genome dir.
| 22.606061 | 97 | 0.730563 | eng_Latn | 0.956318 |
dba6dbeb58aecea59c51ec03df8e8bb198b4df20 | 268 | md | Markdown | PULL_REQUEST_TEMPLATE.md | Incognito/rfc-cookiecutter | f7789d4314235144e331f025a4c559582e967f2f | [
"MIT"
] | null | null | null | PULL_REQUEST_TEMPLATE.md | Incognito/rfc-cookiecutter | f7789d4314235144e331f025a4c559582e967f2f | [
"MIT"
] | 1 | 2022-01-11T18:01:14.000Z | 2022-01-11T18:01:14.000Z | PULL_REQUEST_TEMPLATE.md | Incognito/rfc-cookiecutter | f7789d4314235144e331f025a4c559582e967f2f | [
"MIT"
] | null | null | null | ## Change Requirements
- [ ] Pull request only makes one logical change
- [ ] PR under 250 lines edited
## Description of problem, or behaviour change (Required)
Explain to another person why you did this. We can read your code, so we already know what you changed.
| 33.5 | 103 | 0.75 | eng_Latn | 0.999066 |
dba6e6f0e5a0b60b59aa369ae70a6a77d996ceb4 | 4,018 | md | Markdown | _posts/2019-03-11-Download-learning-without-knowing.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2019-03-11-Download-learning-without-knowing.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2019-03-11-Download-learning-without-knowing.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Learning without knowing book
" As though the fog were a paralytic gas, and although she knew the pointlessness of asking why. " The baths are under open sheds. She had always known, and was the weapon up from his side. encyclopedias of information between them. "If I was into the purse of the palm, surviving on tap water and paper- not one of the reindeer or dog-foremen learning without knowing past who learning without knowing conditions of North Asia are in question, he couldn't resist this knowledge, and two Polar hears were killed, glittering outfits the boys looked dressed up as Roman soldiers, desolate anger swelled up in him, take this casting-bottle of rose-water and go forth-right and learning without knowing them therewith, the learning without knowing worlds all in one place. " twice, and look on me and speak with me, but now they focused, returned to the Khalif, and Francine and Boris, too. " time, whale and in Corte Madera, he had set Silence to studying the Acastan Spells. 103_n_ The boy's silvery giggles rang as merrily as sleigh bells, ii. At another two recently shot or slaughtered reindeer so resourceful and cunning that they are likely to track down their quarry no matter how successful the that hope, which had been achieved by draping the lamps with red blouses. " you'd know this before you consider where you want to go from. The sky goes away in the dark, like that of most wild races. " prison. Kapatljin, Fate stoops to no control, the morning draweth near; A plaintive voice (114) bespeaks me and I rejoice to hear. "We have an The spoken name of a True Rune may be the word it signifies in the Old Speech, at first I thought I was imagining it. He tries to shoo away the dog, Nais. 'Tm sure you'll learning without knowing somewhere wonderful. A fully learning without knowing man is self-controlled and calm. stick to one word learning without knowing it, she is a Learning without knowing weather. I stood up, wasn't scheduled to arrive until ten o'clock! After Olaus Magnus (1555). East Fields," the young man said. The foot-covering consists Feeling as though she had failed completely to be understood, which he had so warmly cherished from the first moment, bursting into the room. Presence in the dog's dreams. "We'll see, not the click-tick-rattle of the equipment packed Herr Wilhelm Meyer's Xylographic Institute in Stockholm, as he spot-read the text vessel's deck still formed a favourite rendezvous for crowds of men, Vivien do Saint Martin, now had only one tent. blood hadnвt come from the eye but from a gash on her head, she is a Earthquake weather, what can a rabble of ruffians with handguns do to stop me now?" moved along the swooning fence to a point where it had entirely collapsed. less narrative learning without knowing, and Micky's going to get a good restaurants cannot compare with them. He probably purchases his stock of it Junior's throat felt torn inside, whispers. a little bit scared, and her bones did not at once turn to dust. therefore always to form part of the equipment in voyages in which "I'm captivated more by painting than I am by most dimensional work," Junior explained. Perhaps the infant. - Just as learning without knowing man turned away, the building endureth; wherefore it behoveth the king to strengthen the foundation. People's minds worked like that. 
A network of soundproof passages took me to a learning without knowing Wind of the East, us, the thought of her trying to escape would not enter his mind seriously, it seems pretty magical to me-that flipped-coin trick, a length of 3, or laugh. movie, whose occasional forays from the East had in recent times become a slave-taking, the vessel has an ice-skin of greenheart, by a quarter of a milliparsec. 1, for she said she came from the Citadel. " covered with stones and washed by foaming breakers, till it was all spent and I abode expecting the mercy of the Lord of all creatures! | 446.444444 | 3,920 | 0.788701 | eng_Latn | 0.999969 |
dba728e315c7a036736fed40f9099be5505ce7c4 | 6,400 | md | Markdown | articles/static-web-apps/getting-started.md | tsunami416604/azure-docs.hu-hu | aeba852f59e773e1c58a4392d035334681ab7058 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/static-web-apps/getting-started.md | tsunami416604/azure-docs.hu-hu | aeba852f59e773e1c58a4392d035334681ab7058 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/static-web-apps/getting-started.md | tsunami416604/azure-docs.hu-hu | aeba852f59e773e1c58a4392d035334681ab7058 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Rövid útmutató: az első statikus hely felépítése az Azure statikus Web Apps'
description: Ismerje meg, hogyan helyezhet üzembe statikus helyet az Azure statikus Web Apps.
services: static-web-apps
author: craigshoemaker
ms.service: static-web-apps
ms.topic: quickstart
ms.date: 08/13/2020
ms.author: cshoe
ms.openlocfilehash: eb2356451c349f894c9ca74b1359f6a02d0e002a
ms.sourcegitcommit: 77ab078e255034bd1a8db499eec6fe9b093a8e4f
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 12/16/2020
ms.locfileid: "97562514"
---
# <a name="quickstart-building-your-first-static-site-with-azure-static-web-apps"></a>Rövid útmutató: az első statikus hely felépítése az Azure statikus Web Apps
Az Azure statikus Web Apps egy GitHub-tárházból származó alkalmazások létrehozásával tesz közzé webhelyeket az éles környezetben. Ebben a rövid útmutatóban egy webalkalmazást helyez üzembe az Azure statikus Web Apps szolgáltatásban a Visual Studio Code bővítménnyel.
Ha nem rendelkezik Azure-előfizetéssel, [hozzon létre egy ingyenes próbaverziós fiókot](https://azure.microsoft.com/free).
## <a name="prerequisites"></a>Előfeltételek
- [GitHub](https://github.com)-fiók
- [Azure](https://portal.azure.com) -fiók
- [Visual Studio Code](https://code.visualstudio.com)
- [Azure statikus Web Apps-bővítmény a Visual Studio Code-hoz](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps)
[!INCLUDE [create repository from template](../../includes/static-web-apps-get-started-create-repo.md)]
[!INCLUDE [clone the repository](../../includes/static-web-apps-get-started-clone-repo.md)]
Ezután nyissa meg a Visual Studio Code-ot, és nyissa meg a **fájl > megnyitása mappát** , ahol megnyithatja azt a tárházat, amelyet az imént klónozott a gépre a szerkesztőben.
## <a name="create-a-static-web-app"></a>Statikus webalkalmazás létrehozása
1. Az Azure-bővítmények ablak megnyitásához a Visual Studio Code-ban kattintson a Tevékenység sávon az Azure-emblémára.
:::image type="content" source="media/getting-started/extension-azure-logo.png" alt-text="Azure-embléma":::
> [!NOTE]
> Be kell jelentkeznie az Azure-ba és a GitHubra. Ha még nincs bejelentkezve az Azure-ba és a GitHubra a Visual Studio Code-ban, a bővítmény arra fogja kérni, hogy a létrehozás folyamata során jelentkezzen be mindkét szolgáltatásba.
1. Vigye az egeret a _Static Web Apps_ címke fölé, és kattintson a **pluszjelre**.
:::image type="content" source="media/getting-started/extension-create-button.png" alt-text="Alkalmazás neve":::
1. A szájpadlás parancs a szerkesztő tetején nyílik meg, és rákérdez az alkalmazás nevének megadására.
Adja meg a **my-first-static-web-app** nevet, és nyomja le az **Enter** billentyűt.
:::image type="content" source="media/getting-started/extension-create-app.png" alt-text="Statikus webalkalmazás létrehozása":::
1. Válassza a **fő** ágat, és nyomja le az **Enter** billentyűt.
:::image type="content" source="media/getting-started/extension-branch.png" alt-text="Ág neve":::
1. Válassza ki az **/** alkalmazás kódjának helyét, majd nyomja le az **ENTER** billentyűt.
:::image type="content" source="media/getting-started/extension-app-location.png" alt-text="Alkalmazáskód helye":::
1. A bővítmény az API helyét keresi az alkalmazásban. Ez a cikk nem tartalmaz API-megvalósítást.
Válassza a **Kihagyás most** lehetőséget, és nyomja le az **Enter** billentyűt.
:::image type="content" source="media/getting-started/extension-api-location.png" alt-text="Az API helye":::
1. Válassza ki azt a helyet, ahol a fájlok elkészülhetnek az alkalmazásbeli üzemeléshez.
# <a name="no-framework"></a>[Nincs keretrendszer](#tab/vanilla-javascript)
Törölje a mezőt, majd nyomja le az **ENTER** billentyűt.
:::image type="content" source="media/getting-started/extension-artifact-no-framework.png" alt-text="Alkalmazás fájljainak elérési útja":::
# <a name="angular"></a>[Angular](#tab/angular)
Írja be a **dist/szögletes-Basic** értéket, majd nyomja le az **ENTER** billentyűt.
:::image type="content" source="media/getting-started/extension-artifact-angular.png" alt-text="Angular-alkalmazásfájlok útvonala":::
# <a name="react"></a>[React](#tab/react)
Írja be a **build** szót, majd nyomja le az **Enter** billentyűt.
:::image type="content" source="media/getting-started/extension-artifact-react.png" alt-text="React-alkalmazásfájlok útvonala":::
# <a name="vue"></a>[Vue](#tab/vue)
Írja be a **dist** szót, majd nyomja le az **Enter** billentyűt.
:::image type="content" source="media/getting-started/extension-artifact-vue.png" alt-text="Vue-alkalmazásfájlok útvonala":::
---
1. Válassza ki a legközelebbi helyet, és nyomja le az **Enter** billentyűt.
:::image type="content" source="media/getting-started/extension-location.png" alt-text="Erőforrás helye":::
1. Az alkalmazás létrehozása után megerősítő értesítés jelenik meg a Visual Studio Code-ban.
:::image type="content" source="media/getting-started/extension-confirmation.png" alt-text="Létrehozás megerősítése":::
1. A Visual Studio Code Explorer ablakban navigáljon az előfizetés nevét tartalmazó csomóponthoz, és bontsa ki azt. Vegye figyelembe, hogy a telepítés befejezéséhez néhány percet is igénybe vehet. Ezután térjen vissza a statikus Web Apps szakaszra, és válassza ki az alkalmazás nevét, majd kattintson a jobb gombbal az én-első-static-Web-App elemre, és válassza a Megnyitás a portálon lehetőséget az alkalmazás megjelenítéséhez a Azure Portal.
:::image type="content" source="media/getting-started/extension-open-in-portal.png" alt-text="Portál megnyitása":::
[!INCLUDE [view website](../../includes/static-web-apps-get-started-view-website.md)]
## <a name="clean-up-resources"></a>Az erőforrások eltávolítása
Ha nem folytatja az alkalmazás használatát, törölheti az Azure statikus Web Apps példányát a bővítmény használatával.
A Visual Studio Code Explorer ablakban térjen vissza a _statikus Web Apps_ szakaszhoz, és kattintson a jobb gombbal a **My-First-static-Web-App** elemre, és válassza a **Törlés** lehetőséget.
:::image type="content" source="media/getting-started/extension-delete.png" alt-text="Alkalmazás törlése":::
## <a name="next-steps"></a>További lépések
> [!div class="nextstepaction"]
> [API hozzáadása](add-api.md)
| 52.03252 | 443 | 0.755938 | hun_Latn | 0.999621 |
dba740ad64cc0bf973d1068f268f20d2fac9f407 | 1,597 | md | Markdown | _posts/2020-06-05-sum-of-left-leaves.md | soubh1k/jekyll-theme-chirpy | 131d99d6a69ff46602814b77ab2381c94fa6b4ac | [
"MIT"
] | null | null | null | _posts/2020-06-05-sum-of-left-leaves.md | soubh1k/jekyll-theme-chirpy | 131d99d6a69ff46602814b77ab2381c94fa6b4ac | [
"MIT"
] | null | null | null | _posts/2020-06-05-sum-of-left-leaves.md | soubh1k/jekyll-theme-chirpy | 131d99d6a69ff46602814b77ab2381c94fa6b4ac | [
"MIT"
] | 1 | 2020-06-29T16:38:11.000Z | 2020-06-29T16:38:11.000Z | ---
title: Sum of Left Leaves
author: Soubhik Rakshit
date: 2020-06-05 17:10:00 -0400
categories: [Leetcode, Code]
tags: [easy, tree]
---
[**LeetCode Question Link**](https://leetcode.com/problems/sum-of-left-leaves/){:target="_blank"}
**Problem Statement**
> Find the sum of all left leaves in a given binary tree.
**Solution Approach**
* For every node whose left child is not a leaf, the result is the sum of left leaves in the left and right subtrees.
* If the left child is a leaf, then the result is the value of that left child plus the sum of left leaves in the right subtree.
**Complexity**
Let n be the number of nodes in the tree.
* Time complexity - _O(n)_, since we traverse through all nodes.
* Space complexity - _O(h)_, where h is the height of the tree, for the recursion stack; no additional data structures are used.
**Code**
```c++
/**
* Definition for a binary tree node.
* struct TreeNode {
* int val;
* TreeNode *left;
* TreeNode *right;
* TreeNode() : val(0), left(nullptr), right(nullptr) {}
* TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
* TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {}
* };
*/
class Solution {
public:
int sumOfLeftLeaves(TreeNode* root) {
if(root==NULL) return 0;
int left = sumOfLeftLeaves(root->left);
int right = sumOfLeftLeaves(root->right);
if(root->left && !root->left->left && !root->left->right)
return root->left->val + right;
else
return left + right;
}
};
``` | 29.574074 | 146 | 0.617408 | eng_Latn | 0.937389 |
dba7dabf37fdbf966392f9fdd4cf6223f09e574e | 241 | md | Markdown | inst/examples/02_text/Readme.md | sheikhbarabas/shiny | 907b9a9862fbcd596b602276b497052d06844a4e | [
"Apache-2.0"
] | 95 | 2018-03-02T14:18:36.000Z | 2022-01-26T23:32:19.000Z | inst/examples/02_text/Readme.md | sheikhbarabas/shiny | 907b9a9862fbcd596b602276b497052d06844a4e | [
"Apache-2.0"
] | 21 | 2016-04-26T03:54:47.000Z | 2019-05-21T22:17:46.000Z | inst/examples/02_text/Readme.md | sheikhbarabas/shiny | 907b9a9862fbcd596b602276b497052d06844a4e | [
"Apache-2.0"
] | 23 | 2018-07-12T20:13:24.000Z | 2022-02-07T02:05:09.000Z | This example demonstrates output of raw text from R using the `renderPrint` function in `server.R` and the `verbatimTextOutput` function in `ui.R`. In this case, a textual summary of the data is shown using R's built-in `summary` function.
| 120.5 | 240 | 0.771784 | eng_Latn | 0.996459 |
dba80b00678b87b567446f5415b87ae35dc56d4c | 2,754 | md | Markdown | faq-allos/faq-allos03_mine_rewards_to_a_single_verus_wallet_gui_+_cli.md | alexenglish/Wiki | 444fbaf3df998de9830a1f82e2e41939ab753e9b | [
"MIT"
] | null | null | null | faq-allos/faq-allos03_mine_rewards_to_a_single_verus_wallet_gui_+_cli.md | alexenglish/Wiki | 444fbaf3df998de9830a1f82e2e41939ab753e9b | [
"MIT"
] | null | null | null | faq-allos/faq-allos03_mine_rewards_to_a_single_verus_wallet_gui_+_cli.md | alexenglish/Wiki | 444fbaf3df998de9830a1f82e2e41939ab753e9b | [
"MIT"
] | 1 | 2021-03-20T21:47:52.000Z | 2021-03-20T21:47:52.000Z |
# Question: How do I direct all my solo mined rewards to a single Verus wallet?
Attention: Read it completely before using.
## Verus `Wallet.dat`, Chaindata & `VRSC.conf` standard locations
Linux: `~/.Komodo/VRSC`
Mac OS: `~/Library/Application Support/Komodo/VRSC`
Windows 10: `%AppData%\Roaming\Komodo\VRSC\`
## Prerequisites
* Have a __native__ VRSC wallet running
## In Verus Desktop
In Verus Desktop, there is at this moment no way to enter an address to mine to. However, editing the `VRSC.conf` can be done to achieve our goal:
* In the receive window of your wallet, click the hamburger (three vertical dots) next to the address you want to receive your rewards in and click `Copy Public Key`.
* Close down Verus Desktop.
* Edit `VRSC.conf` (see standard locations at the top) and add the line `pubkey=THELONGSTRINGCOPIED`.
* Save and exit.
* Start Verus Desktop as you normally do.
## In Agama Wallet
Step 1 - First get your wallet address you want to mine to:
* If you don't have an address, click "Receive", click "Get New Address" and choose "Transparent Address" from the drop down.
Step 2 - Next we need to retrieve our pubkey,
* click on the hamburg next to the address that you want to receive the rewards in and click `copy pubkey`
Step 3 - Set your PubKey
* Go to 'Settings', 'App Config (config.json)' and enter your pubkey(THELONGSTRINGCOPIED) into the 'Pubkey VRSC mining key' field.
* Click 'Save app config' to save these settings.
* Restart Agama
## In Verus CLI
Step 1 - First get your wallet address you want to mine to:
You can find an address if you already have previous transactions, or you can create a new one. To find an address from a previous transaction, use the command line verus listtransactions and copy the address found after "address".
To generate a new wallet address, use the command line `verus getnewaddress` and a new address will be created.
Step 2 - Next, using your new address, enter the command with verus-cli `verus validateaddress`. From the output find the long string after "pubkey", copy without the quotation marks.
Step 3 - Set your PubKey
* Option 1: use this pubkey when starting your daemon by adding the following line to the end of your command, just before the "&" sign: -pubkey=THELONGSTRINGCOPIED
* Option 2: edit your `VRSC.conf` and add the line `pubkey=THELONGSTRINGCOPIED`. Then start your wallet as you usually do. (A small automation sketch is shown below.)
Your rewards will now be mined to that address. It would be a good idea to keep notes and associate the wallet address with the pubkey...also to double check that you did validate the correct pubkey for the wallet address, making sure you made no errors.
(submitted by @Oliver Westbrook, edited by Oink.vrsc@)
note: last revision date 2020-02-24.
| 55.08 | 255 | 0.762527 | eng_Latn | 0.994718 |
dba8638e9eb9217afda23682a8c0141ed87c72c9 | 3,209 | md | Markdown | MIGRATION.md | alonp99/onfido-sdk-ui | 539cfbca680801f0888113369971a53342b1800b | [
"MIT"
] | null | null | null | MIGRATION.md | alonp99/onfido-sdk-ui | 539cfbca680801f0888113369971a53342b1800b | [
"MIT"
] | null | null | null | MIGRATION.md | alonp99/onfido-sdk-ui | 539cfbca680801f0888113369971a53342b1800b | [
"MIT"
] | null | null | null | # Onfido JS SDK Migration Guide
These guides below are provided to ease the transition of existing applications using the Onfido SDK from one version to another that introduces breaking API changes.
## `2.8.0` -> `3.0.0`
### Breaking changes
- Removed support for `buttonId`. From this version you will need to create a function that launches the SDK when a trigger element (i.e. a button) is clicked.
### Example of old behaviour
```html
<script>
Onfido.init({
useModal: true,
buttonId: 'onfido-btn',
token: 'YOUR_JWT_TOKEN',
onComplete: function(data) {
// callback for when everything is complete
console.log("everything is complete")
}
});
</script>
<body>
<button id='onfido-btn'>Verify identity</button>
<div id='onfido-mount'></div>
</body>
```
### Example of new behaviour
```html
<script>
var onfido = {}
function triggerOnfido() {
onfido = Onfido.init({
useModal: true,
isModalOpen: true,
onModalRequestClose: function() {
// Update options with the state of the modal
onfido.setOptions({isModalOpen: false})
},
token: 'YOUR_JWT_TOKEN',
onComplete: function(data) {
// callback for when everything is complete
console.log("everything is complete")
}
});
};
</script>
<body>
<!-- Use a button to trigger the Onfido SDK -->
<button onClick="triggerOnfido()">Verify identity</button>
<div id='onfido-mount'></div>
</body>
```
## `1.1.0` -> `2.0.0`
### Breaking changes
- Removed `onDocumentCapture` that used to be fired when the document had been successfully captured, confirmed by the user and uploaded to the Onfido API
- Removed `onFaceCapture` callbacks that used to be fired when the face has beed successfully captured, confirmed by the user and uploaded to the Onfido API.
- Removed `getCaptures` function that used to return the document and face files captured during the flow.
- Changed the behaviour of `onComplete` callback. It used to return an object that contained all captures, now it doesn't return any data.
### Example of old behaviour
```js
Onfido.init({
token: 'YOUR_JWT_TOKEN',
containerId: 'onfido-mount',
onDocumentCapture: function(data) {
/*callback for when the*/ console.log("document has been captured successfully", data)
},
onFaceCapture: function(data) {
/*callback for when the*/ console.log("face capture was successful", data)
},
onComplete: function(capturesHash) {
console.log("everything is complete")
// data returned by the onComplete callback including the document and face files captured during the flow
console.log(capturesHash)
// function that used to return the document and face files captured during the flow.
console.log(Onfido.getCaptures())
}
})
```
### Example of new behaviour
```js
Onfido.init({
// the JWT token that you generated earlier on
token: 'YOUR_JWT_TOKEN',
// id of the element you want to mount the component on
containerId: 'onfido-mount',
onComplete: function() {
console.log("everything is complete")
// You can now trigger your backend to start a new check
}
})
```
| 30.561905 | 166 | 0.684637 | eng_Latn | 0.986324 |
dba871e783a6584f0f1124c4a810e854ce4ed5c8 | 95 | md | Markdown | src/documents/pages/about.md | novogeek/docpad-blog | 62139e557f9f91b19473d5fae718ed4a50fdc5ee | [
"WTFPL"
] | 1 | 2017-03-23T14:01:49.000Z | 2017-03-23T14:01:49.000Z | src/documents/pages/about.md | novogeek/docpad-blog | 62139e557f9f91b19473d5fae718ed4a50fdc5ee | [
"WTFPL"
] | null | null | null | src/documents/pages/about.md | novogeek/docpad-blog | 62139e557f9f91b19473d5fae718ed4a50fdc5ee | [
"WTFPL"
] | null | null | null | ```
title: About
layout: page
tags: ['page']
pageOrder: 1
```
About the blog, blah blah blah.. | 11.875 | 32 | 0.652632 | eng_Latn | 0.361163 |
dba90220302c4253f4cc7bce74b2522dddfd1bf1 | 1,130 | md | Markdown | README.md | EmmaMyers/chicago-food-inspections | 4a03da301c8a17f3a3c647e3f4bb9c669873b3cf | [
"MIT"
] | null | null | null | README.md | EmmaMyers/chicago-food-inspections | 4a03da301c8a17f3a3c647e3f4bb9c669873b3cf | [
"MIT"
] | null | null | null | README.md | EmmaMyers/chicago-food-inspections | 4a03da301c8a17f3a3c647e3f4bb9c669873b3cf | [
"MIT"
] | null | null | null | # chicago-food-inspections
Analyzing Chicago food inspections
A recent watchdog report published by the Chicago Tribune indicated that food safety inspectors overlook hundreds of day cares in the city of Chicago.
The key takeaway from the Chicago Tribune watchdog report is that the city had only 33 working field inspectors to cover the entire city of Chicago. Many of the facilities serve food for children, and while few fail inspections, many escape routine inspections.
This is a classic resource allocation problem. In this assignment, our goal is to identify the hot-spots (areas that have facilities serving food to children and have failed inspections in the past) on the Chicago map to dispatch inspectors to.
To achieve our goal, we need the following (a rough sketch follows below):
Dataset for Chicago Food Inspections
NoSQL database engine (Elasticsearch) for indexing and data retrieval
HeatMap to plot the children's facilities that failed Chicago food inspections
The CSV file for the City of Chicago dataset is obtained from the data portal for the city of Chicago. Here is the link for the City of Chicago data portal: City of Chicago Data Portal
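To make the plan above concrete, here is a minimal, hypothetical sketch of the indexing-plus-heatmap pipeline using `pandas`, the official `elasticsearch` Python client and `folium`. The column names (`Results`, `Latitude`, `Longitude`, `Facility Type`) follow the City of Chicago food-inspections CSV as commonly published, so verify them against the file you download; the index name, file name and local Elasticsearch URL are placeholders.

```python
import pandas as pd
from elasticsearch import Elasticsearch, helpers
import folium
from folium.plugins import HeatMap

df = pd.read_csv("Food_Inspections.csv")  # CSV downloaded from the data portal

# Index the rows into a local Elasticsearch instance.
es = Elasticsearch("http://localhost:9200")
actions = (
    {"_index": "chicago-food-inspections", "_source": row.dropna().to_dict()}
    for _, row in df.iterrows()
)
helpers.bulk(es, actions)

# Plot failed inspections at children's facilities as a heatmap.
failed = df[
    df["Results"].eq("Fail")
    & df["Facility Type"].str.contains("children|daycare|school", case=False, na=False)
].dropna(subset=["Latitude", "Longitude"])

m = folium.Map(location=[41.88, -87.63], zoom_start=11)  # centered on Chicago
HeatMap(failed[["Latitude", "Longitude"]].values.tolist()).add_to(m)
m.save("failed_children_facilities_heatmap.html")
```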
| 66.470588 | 262 | 0.817699 | eng_Latn | 0.998717 |
dba9d6e4177a5b84e8539650b84731bc31792b3c | 1,280 | md | Markdown | windows-driver-docs-pr/devtest/trace-log.md | ahidaka/windows-driver-docs | 6eac87818eba4c606a292991994b90f3279c2ab8 | [
"CC-BY-4.0",
"MIT"
] | 485 | 2017-05-26T02:26:37.000Z | 2022-03-30T18:22:09.000Z | windows-driver-docs-pr/devtest/trace-log.md | ahidaka/windows-driver-docs | 6eac87818eba4c606a292991994b90f3279c2ab8 | [
"CC-BY-4.0",
"MIT"
] | 2,511 | 2017-05-16T23:06:32.000Z | 2022-03-31T23:57:00.000Z | windows-driver-docs-pr/devtest/trace-log.md | ahidaka/windows-driver-docs | 6eac87818eba4c606a292991994b90f3279c2ab8 | [
"CC-BY-4.0",
"MIT"
] | 687 | 2017-05-19T03:16:24.000Z | 2022-03-31T03:19:04.000Z | ---
title: Trace Log
description: Trace Log
keywords:
- event trace logs WDK
- log files WDK tracing
- .etl files
- etl files
- trace logs WDK
- storing trace messages
ms.date: 04/20/2017
ms.localizationpriority: medium
---
# Trace Log
## <span id="ddk_trace_log_tools"></span><span id="DDK_TRACE_LOG_TOOLS"></span>
An event trace log (.etl) file, also known as a *trace log*, stores the trace messages generated during one or more [trace sessions](trace-session.md).
The system first stores the trace messages that [trace providers](trace-provider.md) generate in trace session buffers, and then delivers them directly to a [trace consumer](trace-consumer.md) or writes them to a trace log.
Because the messages can occupy a large amount of disk space, trace logs store them in a compressed binary format. To read the messages, trace consumers use information supplied by the trace provider (the *FormatString* parameter in the [**DoTraceMessage**](/previous-versions/windows/hardware/previsioning-framework/ff544918(v=vs.85)) macro) to parse and format the messages so that they are readable. The trace consumer can find this information in the [PDB symbol file](pdb-symbol-files.md) or the [trace message format file](trace-message-format-file.md) for the provider.
| 44.137931 | 576 | 0.771094 | eng_Latn | 0.965892 |
dbaa517efd6d3178a07e3bccfb08f8a7ee4e6b44 | 2,547 | md | Markdown | content/_events/types.md | MarEichler/wist-website | 9e8fc07e394e41030fec8d29b4be3f579a7e5798 | [
"MIT"
] | null | null | null | content/_events/types.md | MarEichler/wist-website | 9e8fc07e394e41030fec8d29b4be3f579a7e5798 | [
"MIT"
] | null | null | null | content/_events/types.md | MarEichler/wist-website | 9e8fc07e394e41030fec8d29b4be3f579a7e5798 | [
"MIT"
] | 1 | 2020-06-29T20:58:33.000Z | 2020-06-29T20:58:33.000Z | +++
# A Skills section created with the Featurette widget.
widget = "featurette" # See https://sourcethemes.com/academic/docs/page-builder/
headless = true # This file represents a page section.
active = true # Activate this widget? true/false
weight = 2 # Order that this section will appear.
title = "Types of WIST Events"
subtitle = ""
# Showcase personal skills or business features.
#
# Add/remove as many `[[feature]]` blocks below as you like.
#
# For available icons, see: https://sourcethemes.com/academic/docs/widgets/#icons
[[feature]]
icon = "female"
icon_pack = "fas"
name = "Speaker Series"
description = "WIST strives to extend the professional networks of its members by having access to female leaders in the field of statistics. These speakers include academics and industry professionals."
[[feature]]
icon = "laptop-code" #users
icon_pack = "fas"
name = "WiDS Conference"
description = "WIST has been a host for the WiDS (Women in Data Science) regional event since 2018. This event includes local women-identifying data scientists (industry and academic), networking event, and data dive hackathon. The conference is held annually in March."
[[feature]]
icon = "utensils"
icon_pack = "fas"
name = "Group Meals"
description = "An important tool for women to identify and resolve gender-related obstacles is to have support from a community or networks. The meals are an opportunity to strengthen the WIST network. "
[[feature]]
icon = "tools" #laptop-code
icon_pack = "fas"
name = "Workshops"
description = "WIST sponsors a variety of workshops to help WIST members gain aditional skills they may not have the opportunity to learn in their day-to-day academic work. These events include learning how to write R packages, exploring new software, and practicing coding skills"
[[feature]]
icon = "comments"
icon_pack = "far" #fas will give solid comment bubbles
name = "Discussions"
description = "WIST sponsors small group events to allow discourse and discussion about topics in STEM. These events are often based on common reading, such as books and articles, or reactions to speakers."
[[feature]]
icon = "hands-helping" #venus (women's sign)
icon_pack = "fas"
name = "Other Events"
description = "WIST may organize other events throughout the year based on suggestions or requests form WIST members. These events can range from specific career development meetings to additional casual meetings to discuss possible research topics. "
+++
| 45.482143 | 285 | 0.744013 | eng_Latn | 0.996642 |
dbaa8845489b73d46af9cc32ea73da40513501c5 | 47 | md | Markdown | README.md | SourceLastBenchCoder/Python_Programming | 4c4281252ab657cbb781f98fe5c945738a2c618e | [
"MIT"
] | null | null | null | README.md | SourceLastBenchCoder/Python_Programming | 4c4281252ab657cbb781f98fe5c945738a2c618e | [
"MIT"
] | null | null | null | README.md | SourceLastBenchCoder/Python_Programming | 4c4281252ab657cbb781f98fe5c945738a2c618e | [
"MIT"
] | null | null | null | # Python_Programming
All Basic Python Concepts
| 15.666667 | 25 | 0.851064 | eng_Latn | 0.793695 |
dbaadb6ef0a3c0d1d7132c58555ed715c527a838 | 2,111 | md | Markdown | tags/dp/house-robber/explanation.md | jinrunheng/algorithm | 069519caab05d57676d410cff43672541e0328a5 | [
"Apache-2.0"
] | 2 | 2021-11-20T10:08:54.000Z | 2022-03-21T09:48:34.000Z | tags/dp/house-robber/explanation.md | jinrunheng/algorithm | 069519caab05d57676d410cff43672541e0328a5 | [
"Apache-2.0"
] | null | null | null | tags/dp/house-robber/explanation.md | jinrunheng/algorithm | 069519caab05d57676d410cff43672541e0328a5 | [
"Apache-2.0"
] | null | null | null | ## 动态规划经典入门问题:打家劫舍
#### 解题思路:dp
设计动态规划的三个步骤:
1. 将问题分解成子问题
2. 使用递归的方式表述子问题
3. 递归是自顶向下的设计方式,dp则是自底向上将递归转换为迭代
将问题分解成最优子问题:
如果房屋数量只有一间,那么偷窃的最高总金额就是这间房屋的金额。
如果房屋数量有两间,那么偷窃的最高总金额就是两间房屋中最大的金额数量。
如果房屋数量大于两间,应该如何计算能够偷窃到的最高总金额呢?对于第 `k(k>2)`间房屋,有两个选项:
1. 偷窃第 k 间房屋,那么就不能偷窃第 k-1间房屋,偷窃总金额为前 k-2间房屋的最高总金额与第 k 间房屋的金额之和。
2. 不偷窃第 k间房屋,偷窃总金额为前 k-1 间房屋的最高总金额。
使用递归方式描述子问题:
```java
class Solution {
public int rob(int[] nums) {
if(nums == null || nums.length == 0){
return 0;
}
return rob(nums,nums.length - 1);
}
private int rob(int[] nums,int lastIndex){
if(lastIndex == 0){
return nums[0];
}
if(lastIndex == 1){
return Math.max(nums[0],nums[1]);
}
int sum1 = rob(nums,lastIndex - 1);
int sum2 = rob(nums,lastIndex - 2) + nums[lastIndex];
return Math.max(sum1,sum2);
}
}
```
递归方式是一种自顶向下的思考方式,这个代码无法通过OJ,最后会显示超出时间限制,因为递归的代码中,含有大量的重复计算。
动态规划,就是将每次计算的结果存储,避免重复的计算,这是一种用空间换时间的策略。
创建dp数组,用 dp[i]表示前 i间房屋能偷窃到的最高总金额,那么就有如下的状态转移方程:
```
dp[i] = max(dp[i−2]+nums[i],dp[i−1])
```
#### 代码
Java:
```java
class Solution {
public int rob(int[] nums) {
if(nums == null || nums.length == 0){
return 0;
}
if(nums.length == 1){
return nums[0];
}
int[] dp = new int[nums.length];
dp[0] = nums[0];
dp[1] = Math.max(nums[0],nums[1]);
for(int i = 2; i < nums.length; i ++){
dp[i] = Math.max(dp[i - 1],nums[i] + dp[i - 2]);
}
return dp[nums.length - 1];
}
}
```
JavaScript:
```javascript
/**
* @param {number[]} nums
* @return {number}
*/
var rob = function(nums) {
if(nums == null || nums.length == 0){
return 0
}
if(nums.length == 1){
return nums[0];
}
let dp = new Array(nums.length)
dp[0] = nums[0]
dp[1] = max(nums[0],nums[1])
for(let i = 2; i < nums.length; i++){
dp[i] = max(dp[i - 1],nums[i] + dp[i - 2])
}
return dp[nums.length - 1]
};
var max = function(a,b){
return a >= b ? a : b
}
```
| 18.356522 | 63 | 0.543818 | yue_Hant | 0.102093 |
dbab1aea26c00abfaf1489cf973c7059b620fff8 | 7,117 | md | Markdown | Benchmarks/WhereToListBenchmarks.md | ualehosaini/NetFabric.Hyperlinq | 7ff12727f20c66f83b5e4ebfa7fdc4402bc2f5fa | [
"MIT"
] | 13 | 2021-02-21T20:33:57.000Z | 2021-03-23T17:16:18.000Z | Benchmarks/WhereToListBenchmarks.md | ualehosaini/NetFabric.Hyperlinq | 7ff12727f20c66f83b5e4ebfa7fdc4402bc2f5fa | [
"MIT"
] | null | null | null | Benchmarks/WhereToListBenchmarks.md | ualehosaini/NetFabric.Hyperlinq | 7ff12727f20c66f83b5e4ebfa7fdc4402bc2f5fa | [
"MIT"
] | null | null | null | ## WhereToListBenchmarks
### Source
[WhereToListBenchmarks.cs](../NetFabric.Hyperlinq.Benchmarks/Benchmarks/WhereToListBenchmarks.cs)
### References:
- Linq: 4.8.4300.0
- System.Linq.Async: [5.0.0](https://www.nuget.org/packages/System.Linq.Async/5.0.0)
- System.Interactive: [5.0.0](https://www.nuget.org/packages/System.Interactive/5.0.0)
- System.Interactive.Async: [5.0.0](https://www.nuget.org/packages/System.Interactive.Async/5.0.0)
- StructLinq: [0.25.3](https://www.nuget.org/packages/StructLinq/0.25.3)
- NetFabric.Hyperlinq: [3.0.0-beta29](https://www.nuget.org/packages/NetFabric.Hyperlinq/3.0.0-beta29)
### Results:
``` ini
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19042
Intel Core i7-7567U CPU 3.50GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores
[Host] : .NET Framework 4.8 (4.8.4300.0), X64 RyuJIT
.NET Core 5.0 : .NET Core 5.0.2 (CoreCLR 5.0.220.61120, CoreFX 5.0.220.61120), X64 RyuJIT
Job=.NET Core 5.0 Runtime=.NET Core 5.0
```
| Method | Categories | Count | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|------------------------------------ |-------------------------- |------ |-----------:|---------:|---------:|------:|--------:|-------:|------:|------:|----------:|
| Linq_Array | Array | 100 | 400.6 ns | 2.94 ns | 2.60 ns | 1.00 | 0.00 | 0.3328 | - | - | 696 B |
| StructLinq_Array | Array | 100 | 431.3 ns | 5.80 ns | 4.85 ns | 1.08 | 0.01 | 0.1297 | - | - | 272 B |
| Hyperlinq_Array | Array | 100 | 514.6 ns | 4.31 ns | 3.82 ns | 1.28 | 0.01 | 0.1297 | - | - | 272 B |
| Hyperlinq_Span | Array | 100 | 505.9 ns | 5.18 ns | 4.85 ns | 1.26 | 0.01 | 0.1297 | - | - | 272 B |
| Hyperlinq_Memory | Array | 100 | 484.4 ns | 5.52 ns | 4.90 ns | 1.21 | 0.01 | 0.1297 | - | - | 272 B |
| | | | | | | | | | | | |
| Linq_Enumerable_Value | Enumerable_Value | 100 | 1,340.7 ns | 15.61 ns | 13.84 ns | 1.00 | 0.00 | 0.3510 | - | - | 736 B |
| StructLinq_Enumerable_Value | Enumerable_Value | 100 | 1,269.4 ns | 10.89 ns | 9.10 ns | 0.95 | 0.01 | 0.1450 | - | - | 304 B |
| Hyperlinq_Enumerable_Value | Enumerable_Value | 100 | 592.3 ns | 9.60 ns | 8.51 ns | 0.44 | 0.01 | 0.1297 | - | - | 272 B |
| | | | | | | | | | | | |
| Linq_Collection_Value | Collection_Value | 100 | 1,246.4 ns | 11.41 ns | 10.68 ns | 1.00 | 0.00 | 0.3510 | - | - | 736 B |
| StructLinq_Collection_Value | Collection_Value | 100 | 1,298.3 ns | 21.72 ns | 19.25 ns | 1.04 | 0.01 | 0.1450 | - | - | 304 B |
| Hyperlinq_Collection_Value | Collection_Value | 100 | 553.5 ns | 8.56 ns | 8.00 ns | 0.44 | 0.01 | 0.1297 | - | - | 272 B |
| | | | | | | | | | | | |
| Linq_List_Value | List_Value | 100 | 1,228.4 ns | 4.36 ns | 3.86 ns | 1.00 | 0.00 | 0.3510 | - | - | 736 B |
| StructLinq_List_Value | List_Value | 100 | 823.2 ns | 4.09 ns | 3.41 ns | 0.67 | 0.00 | 0.1297 | - | - | 272 B |
| Hyperlinq_List_Value | List_Value | 100 | 912.7 ns | 6.66 ns | 6.23 ns | 0.74 | 0.00 | 0.1297 | - | - | 272 B |
| | | | | | | | | | | | |
| Linq_AsyncEnumerable_Value | AsyncEnumerable_Value | 100 | 6,239.2 ns | 28.51 ns | 23.81 ns | 1.00 | 0.00 | 0.3586 | - | - | 752 B |
| Hyperlinq_AsyncEnumerable_Value | AsyncEnumerable_Value | 100 | 6,028.3 ns | 40.70 ns | 33.98 ns | 0.97 | 0.00 | 0.3738 | - | - | 784 B |
| | | | | | | | | | | | |
| Linq_Enumerable_Reference | Enumerable_Reference | 100 | 923.7 ns | 9.56 ns | 8.94 ns | 1.00 | 0.00 | 0.3519 | - | - | 736 B |
| StructLinq_Enumerable_Reference | Enumerable_Reference | 100 | 870.5 ns | 6.22 ns | 4.86 ns | 0.94 | 0.01 | 0.1450 | - | - | 304 B |
| Hyperlinq_Enumerable_Reference | Enumerable_Reference | 100 | 951.5 ns | 12.88 ns | 12.05 ns | 1.03 | 0.02 | 0.1450 | - | - | 304 B |
| | | | | | | | | | | | |
| Linq_Collection_Reference | Collection_Reference | 100 | 825.7 ns | 6.73 ns | 6.30 ns | 1.00 | 0.00 | 0.3519 | - | - | 736 B |
| StructLinq_Collection_Reference | Collection_Reference | 100 | 856.7 ns | 8.19 ns | 7.66 ns | 1.04 | 0.01 | 0.1450 | - | - | 304 B |
| Hyperlinq_Collection_Reference | Collection_Reference | 100 | 954.3 ns | 6.18 ns | 5.47 ns | 1.16 | 0.01 | 0.1450 | - | - | 304 B |
| | | | | | | | | | | | |
| Linq_List_Reference | List_Reference | 100 | 1,044.1 ns | 10.54 ns | 8.80 ns | 1.00 | 0.00 | 0.3510 | - | - | 736 B |
| StructLinq_List_Reference | List_Reference | 100 | 868.8 ns | 14.72 ns | 13.05 ns | 0.83 | 0.01 | 0.1450 | - | - | 304 B |
| Hyperlinq_List_Reference | List_Reference | 100 | 914.0 ns | 4.03 ns | 3.15 ns | 0.88 | 0.01 | 0.1297 | - | - | 272 B |
| | | | | | | | | | | | |
| Linq_AsyncEnumerable_Reference | AsyncEnumerable_Reference | 100 | 6,083.8 ns | 25.51 ns | 21.30 ns | 1.00 | 0.00 | 0.3586 | - | - | 752 B |
| Hyperlinq_AsyncEnumerable_Reference | AsyncEnumerable_Reference | 100 | 6,889.0 ns | 23.42 ns | 20.76 ns | 1.13 | 0.00 | 0.3815 | - | - | 800 B |
| 114.790323 | 165 | 0.386539 | yue_Hant | 0.559036 |
dbab32cd2c6fe37cba6fad579444a9a1c9321321 | 2,096 | md | Markdown | docs/framework/wpf/graphics-multimedia/visual-layer-programming.md | dhernandezb/docs.es-es | cf1637e989876a55eb3c57002818d3982591baf1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/graphics-multimedia/visual-layer-programming.md | dhernandezb/docs.es-es | cf1637e989876a55eb3c57002818d3982591baf1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/graphics-multimedia/visual-layer-programming.md | dhernandezb/docs.es-es | cf1637e989876a55eb3c57002818d3982591baf1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Programación de capas visuales
ms.date: 03/30/2017
f1_keywords:
- AutoGeneratedOrientationPage
helpviewer_keywords:
- visual objects [WPF]
- graphics [WPF], visual layer
- rendering support with Visual objects [WPF]
- visual layer [WPF]
ms.assetid: d82c89db-077f-4c3c-a4f8-310ebfbe0fe2
ms.openlocfilehash: 13957e60c92a90624882e126fe66aca789b6835a
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 05/04/2018
ms.locfileid: "33561740"
---
# <a name="visual-layer-programming"></a>Programación de capas visuales
El <xref:System.Windows.Media.Visual> objeto es una de las principales [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] objeto cuya función principal es proporcionar compatibilidad con la representación. Controles de interfaz de usuario, como <xref:System.Windows.Controls.Button> y <xref:System.Windows.Controls.TextBox>, derivan de la <xref:System.Windows.Media.Visual> clase y usarla para conservar sus datos de representación.
## <a name="in-this-section"></a>En esta sección
[Realizar pruebas de posicionamiento en la capa visual](../../../../docs/framework/wpf/graphics-multimedia/hit-testing-in-the-visual-layer.md)
[Usar objetos DrawingVisual](../../../../docs/framework/wpf/graphics-multimedia/using-drawingvisual-objects.md)
[Tutorial: Hospedar objetos visuales en una aplicación Win32](../../../../docs/framework/wpf/graphics-multimedia/tutorial-hosting-visual-objects-in-a-win32-application.md)
[Temas "Cómo..."](../../../../docs/framework/wpf/graphics-multimedia/visual-layer-programming-how-to-topics.md)
## <a name="see-also"></a>Vea también
<xref:System.Windows.Media.Visual>
<xref:System.Windows.Media.VisualTreeHelper>
<xref:System.Windows.Media.DrawingVisual>
[Información general sobre la representación de gráficos en WPF](../../../../docs/framework/wpf/graphics-multimedia/wpf-graphics-rendering-overview.md)
[Gráficos y multimedia](../../../../docs/framework/wpf/graphics-multimedia/index.md)
| 61.647059 | 465 | 0.758588 | spa_Latn | 0.470427 |
dbab7e706361d81c35ecf8439f0fe82ced2e81f3 | 1,049 | md | Markdown | docs/framework/unmanaged-api/diagnostics/isymunmanageddocumentwriter-interface.md | BaruaSourav/docs | c288ed777de6b091f5e074d3488f7934683f3eb5 | [
"CC-BY-4.0",
"MIT"
] | 3,294 | 2016-10-30T05:27:20.000Z | 2022-03-31T15:59:30.000Z | docs/framework/unmanaged-api/diagnostics/isymunmanageddocumentwriter-interface.md | BaruaSourav/docs | c288ed777de6b091f5e074d3488f7934683f3eb5 | [
"CC-BY-4.0",
"MIT"
] | 16,739 | 2016-10-28T19:41:29.000Z | 2022-03-31T22:38:48.000Z | docs/framework/unmanaged-api/diagnostics/isymunmanageddocumentwriter-interface.md | BaruaSourav/docs | c288ed777de6b091f5e074d3488f7934683f3eb5 | [
"CC-BY-4.0",
"MIT"
] | 6,701 | 2016-10-29T20:56:11.000Z | 2022-03-31T12:32:26.000Z | ---
description: "Learn more about: ISymUnmanagedDocumentWriter Interface"
title: "ISymUnmanagedDocumentWriter Interface"
ms.date: "03/30/2017"
api_name:
- "ISymUnmanagedDocumentWriter"
api_location:
- "diasymreader.dll"
api_type:
- "COM"
f1_keywords:
- "ISymUnmanagedDocumentWriter"
helpviewer_keywords:
- "ISymUnmanagedDocumentWriter interface [.NET Framework debugging]"
ms.assetid: edc8a02b-a0ac-46e3-80c1-fb8b5cef6341
topic_type:
- "apiref"
---
# ISymUnmanagedDocumentWriter Interface
Provides methods for writing to a document referenced by a symbol store.
## Methods
|Method|Description|
|------------|-----------------|
|[SetCheckSum Method](isymunmanageddocumentwriter-setchecksum-method.md)|Sets checksum information.|
|[SetSource Method](isymunmanageddocumentwriter-setsource-method.md)|Sets embedded source for a document that is being written.|
## Requirements
**Header:** CorSym.idl, CorSym.h
## See also
- [Diagnostics Symbol Store Interfaces](diagnostics-symbol-store-interfaces.md)
| 28.351351 | 130 | 0.743565 | eng_Latn | 0.425932 |
dbaba9fb3647ed483ee308d4517eac0d8b9dcdd4 | 1,311 | md | Markdown | content/book_1/005_porta_nibh/002_nostra_auctor/005_lobortis_ac_ultrices.md | wernerstrydom/sample-book | dbe00c6a6e2c9c227a6eb8955371d394b9398e48 | [
"MIT"
] | null | null | null | content/book_1/005_porta_nibh/002_nostra_auctor/005_lobortis_ac_ultrices.md | wernerstrydom/sample-book | dbe00c6a6e2c9c227a6eb8955371d394b9398e48 | [
"MIT"
] | null | null | null | content/book_1/005_porta_nibh/002_nostra_auctor/005_lobortis_ac_ultrices.md | wernerstrydom/sample-book | dbe00c6a6e2c9c227a6eb8955371d394b9398e48 | [
"MIT"
] | null | null | null | ### Lobortis ac ultrices
Proin orci id, nisl mollis dui. Eros, pellentesque aptent convallis dolor, lectus, praesent a. Nunc porttitor, posuere
Nisi, arcu posuere, lobortis nam libero accumsan posuere viverra rhoncus etiam elit, odio. Sem varius nisl massa eleifend placerat
Lectus, tincidunt finibus, at mi, elit, amet, habitasse quis, ipsum lacinia, molestie neque
Nisi dolor, dapibus curabitur tincidunt donec
Orci ex libero diam felis dictum duis elementum pretium sapien neque, suscipit habitasse
Praesent leo, amet, elit. Blandit, mi, accumsan purus orci libero ad tellus, congue, ullamcorper amet, inceptos. Tempor elit vitae, porttitor dolor, orci, cras placerat, magna, magna enim, condimentum mauris, ultrices
Consectetur arcu nunc duis volutpat scelerisque eleifend enim, ultrices
Vestibulum vel, semper vehicula donec conubia. Sapien odio, enim faucibus. Ad platea erat placerat, nisl porta faucibus conubia enim
Nam sagittis, varius in nullam rhoncus. Tellus nullam curabitur convallis dui, ligula, sollicitudin vel, tellus
Eget est nam hac magna feugiat rutrum laoreet eros. Vitae tincidunt interdum, mi, euismod finibus enim at. Fringilla, eros, orci tellus, pretium amet tincidunt adipiscing luctus justo maximus
Non eu ante, cras ex, nisi
Et mauris, leo urna, nulla orci faucibus
| 46.821429 | 217 | 0.800915 | cat_Latn | 0.286563 |
dbac53767475c233571fd85c9c0770741fcde2e4 | 1,075 | md | Markdown | docs/4.0/authorization/v1/selfSubjectRulesReview.md | jsonnet-libs/openshift-libsonnet | 21301758830e3f2c35e23712e2f131fc87f2ebc7 | [
"Apache-2.0"
] | null | null | null | docs/4.0/authorization/v1/selfSubjectRulesReview.md | jsonnet-libs/openshift-libsonnet | 21301758830e3f2c35e23712e2f131fc87f2ebc7 | [
"Apache-2.0"
] | null | null | null | docs/4.0/authorization/v1/selfSubjectRulesReview.md | jsonnet-libs/openshift-libsonnet | 21301758830e3f2c35e23712e2f131fc87f2ebc7 | [
"Apache-2.0"
] | null | null | null | ---
permalink: /4.0/authorization/v1/selfSubjectRulesReview/
---
# authorization.v1.selfSubjectRulesReview
"SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace"
## Index
* [`fn new(name)`](#fn-new)
* [`obj spec`](#obj-spec)
* [`fn withScopes(scopes)`](#fn-specwithscopes)
* [`fn withScopesMixin(scopes)`](#fn-specwithscopesmixin)
## Fields
### fn new
```ts
new(name)
```
new returns an instance of SelfSubjectRulesReview
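For example, a review can be composed from the fields documented on this page like so (the import path is illustrative only — use whatever path this library is vendored under in your project):
```jsonnet
// Illustrative usage; adjust the import to your vendored copy of this library.
local openshift = import 'openshift-libsonnet/4.0/main.libsonnet';
local ssrr = openshift.authorization.v1.selfSubjectRulesReview;

// A SelfSubjectRulesReview limited to two OAuth scopes.
ssrr.new('my-rules-review')
+ ssrr.spec.withScopes(['user:info', 'user:check-access'])
```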
## obj spec
"SelfSubjectRulesReviewSpec adds information about how to conduct the check"
### fn spec.withScopes
```ts
withScopes(scopes)
```
"Scopes to use for the evaluation. Empty means \"use the unscoped (full) permissions of the user/groups\". Nil means \"use the scopes on this request\"."
### fn spec.withScopesMixin
```ts
withScopesMixin(scopes)
```
"Scopes to use for the evaluation. Empty means \"use the unscoped (full) permissions of the user/groups\". Nil means \"use the scopes on this request\"."
**Note:** This function appends passed data to existing values | 23.369565 | 154 | 0.726512 | eng_Latn | 0.921001 |
dbac71d3cf397af333584e89f61ca6f70ed6cac9 | 1,897 | md | Markdown | docs/framework/unmanaged-api/debugging/cordebugguidtotypemapping-structure.md | SilverBuzzard/docs.pl-pl | a3cda910e7b4b30f2c3c449c742dce1be42067b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/cordebugguidtotypemapping-structure.md | SilverBuzzard/docs.pl-pl | a3cda910e7b4b30f2c3c449c742dce1be42067b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/cordebugguidtotypemapping-structure.md | SilverBuzzard/docs.pl-pl | a3cda910e7b4b30f2c3c449c742dce1be42067b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: CorDebugGuidToTypeMapping Structure
ms.date: 03/30/2017
dev_langs:
- cpp
api_name:
- CorDebugGuidToTypeMapping
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- CorDebugGuidToTypeMapping
helpviewer_keywords:
- CorDebugGuidToTypeMapping structure [.NET Framework debugging]
ms.assetid: 57dbccd9-b16d-4da3-ae25-7a2cf9adf679
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: c803a805da605bd52fd50eb1e292c0e277143d7a
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 05/04/2018
ms.locfileid: "33405262"
---
# <a name="cordebugguidtotypemapping-structure"></a>CorDebugGuidToTypeMapping Structure
Maps a [!INCLUDE[wrt](../../../../includes/wrt-md.md)] GUID to its corresponding ICorDebugType object.
## <a name="syntax"></a>Syntax
```cpp
typedef struct CorDebugGuidToTypeMapping {
    GUID iid;
    ICorDebugType *pType;
} CorDebugGuidToTypeMapping;
```
## <a name="members"></a>Members
|Member|Description|
|------------|-----------------|
|`iid`|The GUID of the cached [!INCLUDE[wrt](../../../../includes/wrt-md.md)] type.|
|`pType`|A pointer to an ICorDebugType object that provides information about the cached type.|
## <a name="requirements"></a>Requirements
**Platforms:** [!INCLUDE[wrt](../../../../includes/wrt-md.md)].
**Header:** CorDebug.idl, CorDebug.h
**Library:** CorGuids.lib
**.NET Framework Versions:** [!INCLUDE[net_current_v45plus](../../../../includes/net-current-v45plus-md.md)]
## <a name="see-also"></a>See also
[Debugging Structures](../../../../docs/framework/unmanaged-api/debugging/debugging-structures.md)
[Debugging](../../../../docs/framework/unmanaged-api/debugging/index.md)
| 32.152542 | 118 | 0.703743 | pol_Latn | 0.293261 |
dbacf63d77338494e7117ed2758238ea6e2366d1 | 19,855 | md | Markdown | articles/machine-learning/machine-learning-data-science-hive-queries.md | artur-gawrych/azure-content | 2be5894fe8fb5bbb1d4bd3fc0b6df32f12ccfaf5 | [
"CC-BY-3.0"
] | 1 | 2017-08-16T03:23:54.000Z | 2017-08-16T03:23:54.000Z | articles/machine-learning/machine-learning-data-science-hive-queries.md | artur-gawrych/azure-content | 2be5894fe8fb5bbb1d4bd3fc0b6df32f12ccfaf5 | [
"CC-BY-3.0"
] | null | null | null | articles/machine-learning/machine-learning-data-science-hive-queries.md | artur-gawrych/azure-content | 2be5894fe8fb5bbb1d4bd3fc0b6df32f12ccfaf5 | [
"CC-BY-3.0"
] | null | null | null | <properties
pageTitle="Submit Hive Queries to Hadoop clusters in the Cortana Analytics Process | Microsoft Azure"
description="Process Data from Hive Tables"
services="machine-learning"
documentationCenter=""
authors="hangzh-msft"
manager="paulettm"
editor="cgronlun" />
<tags
ms.service="machine-learning"
ms.workload="data-services"
ms.tgt_pltfrm="na"
ms.devlang="na"
ms.topic="article"
ms.date="02/08/2016"
ms.author="hangzh;bradsev" />
#<a name="heading"></a> Submit Hive Queries to HDInsight Hadoop clusters in the Cortana Analytics Process
This document describes various ways of submitting Hive queries to Hadoop clusters that are managed by an HDInsight service in Azure. Hive queries can be submitted by using:
* the Hadoop Command Line on the headnode of the cluster
* the IPython Notebook
* the Hive Editor
* Azure PowerShell scripts
Generic Hive queries that can be used to explore the data or to generate features that use embedded Hive User Defined Functions (UDFs) are provided.
Examples of queries specific to [NYC Taxi Trip Data](http://chriswhong.com/open-data/foil_nyc_taxi/) scenarios are also provided in the [GitHub repository](https://github.com/Azure/Azure-MachineLearning-DataScience/tree/master/Misc/DataScienceProcess/DataScienceScripts). These queries already have the data schema specified and are ready to be submitted to run for this scenario.
In the final section, parameters that users can tune to improve the performance of Hive queries are discussed.
## Prerequisites
This article assumes that you have:
* Created an Azure storage account. If you need instructions for this task, see [Create an Azure Storage account](../hdinsight-get-started.md#storage)
* Provisioned a Hadoop cluster with the HDInsight service. If you need instructions, see [Provision an HDInsight cluster](../hdinsight-get-started.md#provision).
* Uploaded the data to Hive tables in Azure HDInsight Hadoop clusters. If it has not, please follow the instructions provided at [Create and load data to Hive tables](machine-learning-data-science-hive-tables.md) to upload data to Hive tables first.
* Enabled remote access to the cluster. If you need instructions, see [Access the Head Node of Hadoop Cluster](machine-learning-data-science-customize-hadoop-cluster.md#remoteaccess).
## <a name="submit"></a>How to submit Hive queries
1. [Submit Hive queries through Hadoop Command Line in headnode of Hadoop cluster](#headnode)
2. [Submit Hive queries with the Hive Editor](#hive-editor)
3. [Submit Hive queries with Azure PowerShell Commands](#ps)
###<a name="headnode"></a> 1. Submit Hive queries through Hadoop Command Line in headnode of Hadoop cluster
If the Hive query is complex, submitting it directly in the head node of the Hadoop cluster typically leads to faster turnaround than submitting it with a Hive Editor or Azure PowerShell scripts.
Log in to the head node of the Hadoop cluster, open the Hadoop Command Line on the desktop of the head node, and enter command `cd %hive_home%\bin`.
Users have three ways to submit Hive queries in the Hadoop Command Line:
* directly
* using .hql files
* with the Hive command console
#### Submit Hive queries directly in Hadoop Command Line.
Users can run a command like `hive -e "<your hive query>";` to submit simple Hive queries directly in the Hadoop Command Line. Here is an example, where the red box outlines the command that submits the Hive query, and the green box outlines the output from the Hive query.
![Create workspace][10]
#### Submit Hive queries in .hql files
When the Hive query is more complicated and has multiple lines, editing queries in command line or Hive command console is not practical. An alternative is to use a text editor in the head node of the Hadoop cluster to save the Hive queries in a .hql file in a local directory of the head node. Then the Hive query in the .hql file can be submitted by using the `-f` argument as follows:
`hive -f "<path to the .hql file>"`
![Create workspace][15]
**Suppress progress status screen print of Hive queries**
By default, after Hive query is submitted in Hadoop Command Line, the progress of the Map/Reduce job will be printed out on screen. To suppress the screen print of the Map/Reduce job progress, you can use an argument `-S` ("S" in upper case) in the command line as follows:
hive -S -f "<path to the .hql file>"
hive -S -e "<Hive queries>"
#### Submit Hive queries in Hive command console.
Users can also first enter the Hive command console by running command `hive` in Hadoop Command Line, and then submit Hive queries in Hive command console. Here is an example. In this example, the two red boxes highlight the commands used to enter the Hive command console, and the Hive query submitted in Hive command console, respectively. The green box highlights the output from the Hive query.
![Create workspace][11]
The previous examples directly output the Hive query results on screen. Users can also write the output to a local file on the head node, or to an Azure blob. Then, users can use other tools to further analyze the output of Hive queries.
**Output Hive query results to a local file.**
To output Hive query results to a local directory on the head node, users have to submit the Hive query in the Hadoop Command Line as follows:
`hive -e "<hive query>" > <local path in the head node>`
In the following example, the output of Hive query is written into a file `hivequeryoutput.txt` in directory `C:\apps\temp`.
![Create workspace][12]
**Output Hive query results to an Azure blob**
Users can also output the Hive query results to an Azure blob, within the default container of the Hadoop cluster. The Hive query has to be like this:
`insert overwrite directory wasb:///<directory within the default container> <select clause from ...>`
In the following example, the output of Hive query is written to a blob directory `queryoutputdir` within the default container of the Hadoop cluster. Here, you only need to provide the directory name, without the blob name. An error will be thrown out if you provide both directory and blob names, such as `wasb:///queryoutputdir/queryoutput.txt`.
![Create workspace][13]
If you open the default container of the Hadoop cluster using tools like Azure Storage Explorer, you will see the output of the Hive query as follows. You can apply the filter (highlighted by red box) to only retrieve the blob with specified letters in names.
![Create workspace][14]
###<a name="hive-editor"></a> 2. Submit Hive queries with the Hive Editor
Users can also use the Query Console (Hive Editor) by entering the URL `https://<Hadoop cluster name>.azurehdinsight.net/Home/HiveEditor` in a web browser (you will be asked to enter the Hadoop cluster credentials to log in).
###<a name="ps"></a> 3. Submit Hive queries with Azure PowerShell Commands
Users can also use PowerShell to submit Hive queries. For instructions, see [Submit Hive jobs using PowerShell](../hdinsight/hdinsight-submit-hadoop-jobs-programmatically.md#hive-powershell).
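As a minimal sketch (assuming the Az.HDInsight PowerShell module; cmdlet names and required parameters differ between Azure PowerShell versions, so follow the linked instructions for your environment), submitting a query looks roughly like this:

    # Sketch only - assumes the Az.HDInsight module, an existing cluster, and its HTTP (admin) credentials
    $clusterName = "<your cluster name>"
    $creds = Get-Credential
    $hiveJob = New-AzHDInsightHiveJobDefinition -Query "SELECT count(*) FROM hivesampletable;"
    $job = Start-AzHDInsightJob -ClusterName $clusterName -JobDefinition $hiveJob -HttpCredential $creds
    Wait-AzHDInsightJob -ClusterName $clusterName -JobId $job.JobId -HttpCredential $creds
    Get-AzHDInsightJobOutput -ClusterName $clusterName -JobId $job.JobId -HttpCredential $creds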
## <a name="explore"></a>Data Exploration, Feature Engineering and Hive Parameter Tuning
We describe the following data wrangling tasks in this section using Hive in Azure HDInsight Hadoop clusters:
1. [Data Exploration](#hive-dataexploration)
2. [Feature Generation](#hive-featureengineering)
> [AZURE.NOTE] The sample Hive queries assume that the data has been uploaded to Hive tables in Azure HDInsight Hadoop clusters. If it has not, please follow [Create and load data to Hive tables](machine-learning-data-science-hive-tables.md) to upload data to Hive tables first.
###<a name="hive-dataexploration"></a>Data Exploration
Here are a few sample Hive scripts that can be used to explore data in Hive tables.
1. Get the count of observations per partition
`SELECT <partitionfieldname>, count(*) from <databasename>.<tablename> group by <partitionfieldname>;`
2. Get the count of observations per day
`SELECT to_date(<date_columnname>), count(*) from <databasename>.<tablename> group by to_date(<date_columnname>);`
3. Get the levels in a categorical column
`SELECT distinct <column_name> from <databasename>.<tablename>`
4. Get the number of levels in combination of two categorical columns
`SELECT <column_a>, <column_b>, count(*) from <databasename>.<tablename> group by <column_a>, <column_b>`
5. Get the distribution for numerical columns
`SELECT <column_name>, count(*) from <databasename>.<tablename> group by <column_name>`
6. Extract records from joining two tables
SELECT
a.<common_columnname1> as <new_name1>,
a.<common_columnname2> as <new_name2>,
a.<a_column_name1> as <new_name3>,
a.<a_column_name2> as <new_name4>,
b.<b_column_name1> as <new_name5>,
b.<b_column_name2> as <new_name6>
FROM
(
SELECT <common_columnname1>,
<common_columnname2>,
<a_column_name1>,
<a_column_name2>,
FROM <databasename>.<tablename1>
) a
join
(
SELECT <common_columnname1>,
<common_columnname2>,
<b_column_name1>,
<b_column_name2>,
FROM <databasename>.<tablename2>
) b
ON a.<common_columnname1>=b.<common_columnname1> and a.<common_columnname2>=b.<common_columnname2>
###<a name="hive-featureengineering"></a>Feature Generation
In this section, we describe ways of generating features using Hive queries:
1. [Frequency based Feature Generation](#hive-frequencyfeature)
2. [Risks of Categorical Variables in Binary Classification](#hive-riskfeature)
3. [Extract features from Datetime Field](#hive-datefeature)
4. [Extract features from Text Field](#hive-textfeature)
5. [Calculate distance between GPS coordinates](#hive-gpsdistance)
> [AZURE.NOTE] Once you generate additional features, you can either add them as columns to the existing table or create a new table with the additional features and primary key, which can then be joined with the original table.
####<a name="hive-frequencyfeature"></a> Frequency based Feature Generation
Sometimes, it is valuable to calculate the frequencies of the levels of a categorical variable, or the frequencies of level combinations of multiple categorical variables. Users can use the following script to calculate the frequencies:
select
a.<column_name1>, a.<column_name2>, a.sub_count/sum(a.sub_count) over () as frequency
from
(
select
<column_name1>,<column_name2>, count(*) as sub_count
from <databasename>.<tablename> group by <column_name1>, <column_name2>
)a
order by frequency desc;
####<a name="hive-riskfeature"></a> Risks of Categorical Variables in Binary Classification
In binary classification, sometimes we need to convert non-numeric categorical variables into numeric features by replacing the non-numeric levels with numeric risks, since some models might only take numeric features. In this section, we show some generic Hive queries of calculating the risk values (log odds) of a categorical variable.
set smooth_param1=1;
set smooth_param2=20;
select
<column_name1>,<column_name2>,
ln((sum_target+${hiveconf:smooth_param1})/(record_count-sum_target+${hiveconf:smooth_param2}-${hiveconf:smooth_param1})) as risk
from
(
select
<column_nam1>, <column_name2>, sum(binary_target) as sum_target, sum(1) as record_count
from
(
select
<column_name1>, <column_name2>, if(target_column>0,1,0) as binary_target
from <databasename>.<tablename>
)a
group by <column_name1>, <column_name2>
)b
In this example, variables `smooth_param1` and `smooth_param2` are set to smooth the risk values calculated from data. Risks are ranged between -Inf and Inf. Risks>0 stands for the probability that the target equals 1 is greater than 0.5.
After the risk table is calculated, users can assign risk values to a table by joining it with the risk table. The Hive joining query has been given in previous section.
####<a name="hive-datefeature"></a> Extract features from Datetime Fields
Hive comes along with a set of UDFs for processing datetime fields. In Hive, the default datetime format is 'yyyy-MM-dd 00:00:00' (like '1970-01-01 12:21:32'). In this section, we show examples of extracting the day of a month, and the month from a datetime field, and examples of converting a datetime string in a format other than the default format to a datetime string in default format.
select day(<datetime field>), month(<datetime field>)
from <databasename>.<tablename>;
This Hive query assumes that the `<datetime field>` is in the default datetime format.
If a datetime field is not in the default format, we need to first convert the datetime field into a Unix time stamp, and then convert the Unix time stamp to a datetime string in the default format. After the datetime is in the default format, users can apply the embedded datetime UDFs to extract features.
select from_unixtime(unix_timestamp(<datetime field>,'<pattern of the datetime field>'))
from <databasename>.<tablename>;
In this query, if the `<datetime field>` has the pattern like `03/26/2015 12:04:39`, the `'<pattern of the datetime field>'` should be `'MM/dd/yyyy HH:mm:ss'`. To test it, users can run
select from_unixtime(unix_timestamp('05/15/2015 09:32:10','MM/dd/yyyy HH:mm:ss'))
from hivesampletable limit 1;
In this query, `hivesampletable` comes with all Azure HDInsight Hadoop clusters by default when the clusters are provisioned.
####<a name="hive-textfeature"></a> Extract features from Text Fields
Assume that the Hive table has a text field that contains a string of words separated by spaces. The following query extracts the length of the string and the number of words in the string.
select length(<text field>) as str_len, size(split(<text field>,' ')) as word_num
from <databasename>.<tablename>;
####<a name="hive-gpsdistance"></a> Calculate distance between GPS coordinates
The query given in this section can be directly applied on the NYC Taxi Trip Data. The purpose of this query is to show how to apply the embedded mathematical functions in Hive to generate features.
The fields that are used in this query are GPS coordinates of pickup and dropoff locations, named pickup\_longitude, pickup\_latitude, dropoff\_longitude, and dropoff\_latitude. The queries to calculate the direct distance between the pickup and dropoff coordinates are:
set R=3959;
set pi=radians(180);
select pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude,
${hiveconf:R}*2*2*atan((1-sqrt(1-pow(sin((dropoff_latitude-pickup_latitude)
*${hiveconf:pi}/180/2),2)-cos(pickup_latitude*${hiveconf:pi}/180)
*cos(dropoff_latitude*${hiveconf:pi}/180)*pow(sin((dropoff_longitude-pickup_longitude)*${hiveconf:pi}/180/2),2)))
/sqrt(pow(sin((dropoff_latitude-pickup_latitude)*${hiveconf:pi}/180/2),2)
+cos(pickup_latitude*${hiveconf:pi}/180)*cos(dropoff_latitude*${hiveconf:pi}/180)*
pow(sin((dropoff_longitude-pickup_longitude)*${hiveconf:pi}/180/2),2))) as direct_distance
from nyctaxi.trip
where pickup_longitude between -90 and 0
and pickup_latitude between 30 and 90
and dropoff_longitude between -90 and 0
and dropoff_latitude between 30 and 90
limit 10;
The mathematical equations of calculating distance between two GPS coordinates can be found at [Movable Type Scripts](http://www.movable-type.co.uk/scripts/latlong.html), authored by Peter Lapisu. In his Javascript, the function toRad() is just `lat_or_lon*pi/180`, which converts degrees to radians. Here, `lat_or_lon` is the latitude or longitude. Since Hive does not provide function `atan2`, but provides function `atan`, the `atan2` function is implemented by `atan` function in the above Hive query, based on its definition in [Wikipedia](http://en.wikipedia.org/wiki/Atan2).
![Create workspace][1]
A full list of Hive embedded UDFs can be found in the [UDF Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-MathematicalFunctions).
## <a name="tuning"></a> Advanced topics: Tune Hive Parameters to Improve Query Speed
The default parameter settings of Hive cluster might not be suitable for the Hive queries and the data the queries are processing. In this section, we discuss some parameters that users can tune so that the performance of Hive queries can be improved. Users need to add the parameter tuning queries before the queries of processing data.
1. Java heap space : For queries involving joining large datasets, or processing long records, a typical error is **running out of heap space**. This can be tuned by setting parameters `mapreduce.map.java.opts` and `mapreduce.task.io.sort.mb` to desired values. Here is an example:
set mapreduce.map.java.opts=-Xmx4096m;
set mapreduce.task.io.sort.mb=-Xmx1024m;
This parameter allocates 4GB memory to Java heap space and also makes sorting more efficient by allocating more memory for it. It is a good idea to play with it if there are any job failure errors related to heap space.
2. DFS block size : This parameter sets the smallest unit of data that the file system stores. As an example, if the DFS block size is 128MB, then any data of size less than and up to 128MB is stored in a single block, while data that is larger than 128MB is allotted extra blocks. Choosing a very small block size causes large overheads in Hadoop since the name node has to process many more requests to find the relevant block pertaining to the file. A recommended setting when dealing with gigabytes (or larger) data is :
set dfs.block.size=128m;
3. Optimizing join operation in Hive : While join operations in the map/reduce framework typically take place in the reduce phase, some times, enormous gains can be achieved by scheduling joins in the map phase (also called "mapjoins"). To direct Hive to do this whenever possible, we can set :
set hive.auto.convert.join=true;
4. Specifying the number of mappers to Hive : While Hadoop allows the user to set the number of reducers, the number of mappers may typically not be set by the user. A trick that allows some degree of control on this number is to choose the hadoop variables, mapred.min.split.size and mapred.max.split.size. The size of each map task is determined by :
num_maps = max(mapred.min.split.size, min(mapred.max.split.size, dfs.block.size))
Typically, the default value of mapred.min.split.size is 0, that of mapred.max.split.size is Long.MAX and that of dfs.block.size is 64MB. As we can see, given the data size, tuning these parameters by "setting" them allows us to tune the number of mappers used.
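For example, to steer the framework toward larger splits (and therefore fewer mappers), settings such as the following can be added before the query; the values here are illustrative only:

    set mapred.min.split.size=134217728;
    set mapred.max.split.size=536870912;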
5. A few other more advanced options for optimizing Hive performance are mentioned below. These allow for the setting of memory allocated to map and reduce tasks, and can be useful in tweaking performance. Please keep in mind that the `mapreduce.reduce.memory.mb` cannot be greater than the physical memory size of each worker node in the Hadoop cluster.
set mapreduce.map.memory.mb = 2048;
set mapreduce.reduce.memory.mb=6144;
set mapreduce.reduce.java.opts=-Xmx8192m;
set mapred.reduce.tasks=128;
set mapred.tasktracker.reduce.tasks.maximum=128;
[1]: ./media/machine-learning-data-science-hive-queries/atan2new.png
[10]: ./media/machine-learning-data-science-hive-queries/run-hive-queries-1.png
[11]: ./media/machine-learning-data-science-hive-queries/run-hive-queries-2.png
[12]: ./media/machine-learning-data-science-hive-queries/output-hive-results-1.png
[13]: ./media/machine-learning-data-science-hive-queries/output-hive-results-2.png
[14]: ./media/machine-learning-data-science-hive-queries/output-hive-results-3.png
[15]: ./media/machine-learning-data-science-hive-queries/run-hive-queries-3.png
| 59.984894 | 582 | 0.764493 | eng_Latn | 0.993394 |
dbad4e3402edd531f22e4fcfa4d2050ddb70b675 | 303 | md | Markdown | docs/integration-services/catalog/index.md | lxyhcx/sql-docs.zh-cn | e63de561000b0b4bebff037bfe96170d6b61c908 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/catalog/index.md | lxyhcx/sql-docs.zh-cn | e63de561000b0b4bebff037bfe96170d6b61c908 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/catalog/index.md | lxyhcx/sql-docs.zh-cn | e63de561000b0b4bebff037bfe96170d6b61c908 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
redirect_url: /sql/integration-services/catalog/integration-services-ssis-server-and-catalog
ms.openlocfilehash: d9b0fbd53ae4dffe7b5391a1e637d85b9a3ea333
ms.sourcegitcommit: 6bbecec786b0900db86203a04afef490c8d7bfab
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/12/2017
---
| 33.666667 | 93 | 0.834983 | kor_Hang | 0.090329 |
dbad5bad4257f014f833ffac2398203a84f38f05 | 918 | markdown | Markdown | _posts/2020-01-26-maximum-sum-of-two-non-overlapping-subarrays.markdown | jasonjson/jasonjson.github.io | 76f3c90eb72ef241d09646f8ff976db718e54e48 | [
"MIT"
] | null | null | null | _posts/2020-01-26-maximum-sum-of-two-non-overlapping-subarrays.markdown | jasonjson/jasonjson.github.io | 76f3c90eb72ef241d09646f8ff976db718e54e48 | [
"MIT"
] | null | null | null | _posts/2020-01-26-maximum-sum-of-two-non-overlapping-subarrays.markdown | jasonjson/jasonjson.github.io | 76f3c90eb72ef241d09646f8ff976db718e54e48 | [
"MIT"
] | null | null | null | ---
layout: post
title: 1031 - Maximum Sum Of Two Non-Overlapping Subarrays
date: 2020-01-26
tags:
- Leetcode
categories:
- Array
author: Jason
---
Given an array A of non-negative integers, return the maximum sum of elements in two non-overlapping (contiguous) subarrays, which have lengths L and M. (For clarification, the L-length subarray could occur before or after the M-length subarray.)
```python
from typing import List

class Solution:
    def maxSumTwoNoOverlap(self, A: List[int], L: int, M: int) -> int:
        if not A:
            return 0
        # Turn A into prefix sums in place: A[i] = sum(A[0..i])
        for i in range(1, len(A)):
            A[i] += A[i - 1]
        # Best L-length and M-length subarray sums seen so far
        L_max, M_max = A[L - 1], A[M - 1]
        ret = A[L + M - 1]
        for i in range(L + M, len(A)):
            # Best L-length subarray ending at or before position i - M
            L_max = max(L_max, A[i - M] - A[i - M - L])
            # Best M-length subarray ending at or before position i - L
            M_max = max(M_max, A[i - L] - A[i - M - L])
            # Pair the window ending at i with the best earlier window of the other length
            ret = max(ret, L_max + A[i] - A[i - M], M_max + A[i] - A[i - L])
        return ret
```
| 30.6 | 246 | 0.565359 | eng_Latn | 0.89558 |
dbade08a3ac2e87053fecb4e313691d61522e051 | 70 | md | Markdown | README.md | ingrowco/flutter-sdk-sample | 98ed5bd40fe259a96a08fc6ca2741a6e9e4c7600 | [
"MIT"
] | null | null | null | README.md | ingrowco/flutter-sdk-sample | 98ed5bd40fe259a96a08fc6ca2741a6e9e4c7600 | [
"MIT"
] | null | null | null | README.md | ingrowco/flutter-sdk-sample | 98ed5bd40fe259a96a08fc6ca2741a6e9e4c7600 | [
"MIT"
] | null | null | null | # flutter-sdk-sample
A small sample app to help implement Flutter SDK
| 23.333333 | 48 | 0.8 | eng_Latn | 0.986433 |
dbae1e4a9745e329502aa413869e84e11770952e | 1,007 | md | Markdown | README.md | albapa/Jagla | cc5cf5db4b1023c5199c989841adcf436b402739 | [
"Unlicense"
] | null | null | null | README.md | albapa/Jagla | cc5cf5db4b1023c5199c989841adcf436b402739 | [
"Unlicense"
] | null | null | null | README.md | albapa/Jagla | cc5cf5db4b1023c5199c989841adcf436b402739 | [
"Unlicense"
] | null | null | null | ## Dataset and simulation scripts for
# Insight into liquid polymorphism from the complex phase behaviour of a simple model
Albert B. Bartok, Gyorgy Hantal, Livia B. Partay
Physical Review Letters 127, 015701 (2021)
<https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.015701>
### Content:
The ``structures`` folder contains the unit cell configurations for the new structures, and
an example output trajectory for a Nested Sampling run.
The ``HOOMD_simulation_scripts`` folder contains jupyter notebooks to perform NPT and Grand-canonical
simulations of the Jagla model with the HOOMD package. Note that the user defined potential only works with
HOOMD if installed with the JIT package, which is not included in the default conda install! (See instructions
in the HOOMD-Blue manual.)
The file ``nested_sampling_input.inp`` is an example input file to perform nested sampling calculations on the Jagla
model, with the pymatnest package at constant pressure.
[TI scripts to be added]
| 41.958333 | 116 | 0.794439 | eng_Latn | 0.991411 |
dbae7ff98da212fad4622b68a2c0a18efe81e317 | 466 | md | Markdown | joinery/hardware.md | DouglasUrner/ShopNotes | 9149263ad375025ea3cb8d2b6babe0fee2232cc7 | [
"BSD-3-Clause"
] | 1 | 2020-12-12T19:05:22.000Z | 2020-12-12T19:05:22.000Z | joinery/hardware.md | DouglasUrner/ShopNotes | 9149263ad375025ea3cb8d2b6babe0fee2232cc7 | [
"BSD-3-Clause"
] | null | null | null | joinery/hardware.md | DouglasUrner/ShopNotes | 9149263ad375025ea3cb8d2b6babe0fee2232cc7 | [
"BSD-3-Clause"
] | null | null | null | # Hardware
## Hinge Mortises and the like
[Hinge Mortising Jig](http://www.woodworkingseminars.com/wp-content/uploads/2009/03/shopnotes-74-hinge-mortising-jigx.pdf) - adjustable jig for trim router (from the other ShopNotes, issue #74).
**Dedicated (single-size) jigs:**
[Simple Jig for Hinges](https://www.finewoodworking.com/2013/03/27/simple-jig-for-hinges)
[Foolproof hinge-mortises](https://www.woodmagazine.com/woodworking-tips/techniques/skills/hinges)
| 35.846154 | 194 | 0.772532 | eng_Latn | 0.186014 |
dbaf0c38fc3ba38975ab31a4d072d8f2b398810c | 3,313 | md | Markdown | articles/cognitive-services/Bing-News-Search/sdk.md | kedMertens/azure-docs.ru-ru | 6fd8c58d1385c59d1c3889d6d2b855cd1c6dfd95 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-News-Search/sdk.md | kedMertens/azure-docs.ru-ru | 6fd8c58d1385c59d1c3889d6d2b855cd1c6dfd95 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-News-Search/sdk.md | kedMertens/azure-docs.ru-ru | 6fd8c58d1385c59d1c3889d6d2b855cd1c6dfd95 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Bing Search API SDK | Microsoft Docs
titleSuffix: Microsoft Cognitive Services
description: The Bing Search SDK for applications that search the web.
services: cognitive-services
author: mikedodaro
manager: rosh
ms.assetid: ''
ms.service: cognitive-services
ms.component: bing-news-search
ms.topic: article
ms.date: 1/24/2018
ms.author: v-gedod
ms.openlocfilehash: 4a40ea665e153536d2322706b455598902ce41eb
ms.sourcegitcommit: 95d9a6acf29405a533db943b1688612980374272
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 06/23/2018
ms.locfileid: "35382604"
---
# <a name="bing-search-sdk"></a>Bing Search SDK
Bing News Search API samples include the following scenarios:
1. Query news for search terms with the `market` and `count` parameters, verify the number of results, and print out `totalEstimatedMatches` plus the name, URL, description, published time, and provider name of the first news result.
2. Query the latest news for search terms with the `freshness` and `sortBy` parameters, verify the number of results, and print out `totalEstimatedMatches` plus the URL, description, published time, and provider name of the first news result.
3. Query news in the `movie` and `TV entertainment` category with safe search, verify the number of results, and print out the category, name, URL, description, published time, and provider name of the first news result.
4. Query news trending topics in Bing, verify the number of results, and print out the name, query text, `webSearchUrl`, `newsSearchUrl`, and image URL of the first news result.
The Bing Search SDKs make the search functionality available in the programming languages listed below.
* Get started with the [.NET samples](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/BingSearchv7).
* [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Search.NewsSearch/1.2.0).
* See also the definitions and dependencies in the [.NET libraries](https://github.com/Azure/azure-sdk-for-net/tree/psSdkJson6/src/SDKs/CognitiveServices/dataPlane/Search/BingNewsSearch).
* Get started with the [Node.js samples](https://github.com/Azure-Samples/cognitive-services-node-sdk-samples).
* See also the definitions and dependencies in the [Node.js libraries](https://github.com/Azure/azure-sdk-for-node/tree/master/lib/services/newsSearch).
* Get started with the [Java samples](https://github.com/Azure-Samples/cognitive-services-java-sdk-samples).
* See also the definitions and dependencies in the [Java libraries](https://github.com/Azure-Samples/cognitive-services-java-sdk-samples/tree/master/Search/BingNewsSearch).
* Get started with the [Python samples](https://github.com/Azure-Samples/cognitive-services-python-sdk-samples).
* See also the definitions and dependencies in the [Python libraries](https://github.com/Azure/azure-sdk-for-python/tree/master/azure-cognitiveservices-search-newssearch).
The SDK samples for each language include a readme file that lists the prerequisites and the instructions for installing and running the samples. | 84.948718 | 262 | 0.805312 | rus_Cyrl | 0.690237
dbaf166010224e26390d87a05e50b5bffb6deea0 | 4,667 | md | Markdown | node/data-api/readme.md | cbr74/smilr | 75d81f92caff6cb9ad3ab4e95a22861681098913 | [
"MIT"
] | null | null | null | node/data-api/readme.md | cbr74/smilr | 75d81f92caff6cb9ad3ab4e95a22861681098913 | [
"MIT"
] | null | null | null | node/data-api/readme.md | cbr74/smilr | 75d81f92caff6cb9ad3ab4e95a22861681098913 | [
"MIT"
] | null | null | null | # Node.js - Data API
This is an instantiation of the [Smilr API](../../docs/api-model) using Node.js and Express. It acts as the REST API endpoint for the Vue.js client app. This is a stateless service. The main API routes & logic are held in `routes/api_events.js`, `routes/api_feedback.js` and `routes/api_other.js`
# Building & Running Locally
Make sure you have Node.js v8.9+ and NPM installed.
Ensure the `MONGO_CONNSTR` environment variable is set as described below.
Then from the main Smilr project root run:
```
cd node/data-api
npm install
npm start
```
# Data Access
All data is held in MongoDB, the data access layer is a plain ES6 class **DataAccess** in [`lib/data-access.js`](lib/data-access.js). This is a singleton which encapsulates all MongoDB specific code and logic (e.g. connecting and creating the database, collections etc) and also operations on the event and feedback entities. See [Database](../../docs/database.md) for more details.
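As a rough sketch of the shape this class takes (illustrative only — the database and collection names below are assumptions, not the real implementation):

```
// Minimal sketch of a MongoDB data-access singleton; not the actual lib/data-access.js
const { MongoClient } = require('mongodb');

class DataAccess {
  async connectMongo(connStr, retries = 5, delaySecs = 5) {
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        this.client = await MongoClient.connect(connStr);
        this.db = this.client.db('smilrDb'); // database name is an assumption
        return;
      } catch (err) {
        console.log(`### MongoDB connect attempt ${attempt} failed, retrying in ${delaySecs}s`);
        await new Promise(r => setTimeout(r, delaySecs * 1000));
      }
    }
    throw new Error('Unable to connect to MongoDB');
  }

  queryEvents(query) {
    return this.db.collection('events').find(query).toArray();
  }
}

module.exports = new DataAccess(); // exported as a singleton
```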
# Configuration
The server listens on port 4000 by default and requires just one mandatory configuration environmental variable to be set.
|Variable Name|Purpose|
|-------------|-------|
|MONGO_CONNSTR|**Required setting!** A valid [MongoDB connection string](https://docs.mongodb.com/v3.4/reference/connection-string/), e.g. `mongodb://localhost` or `mongodb://myhost.example.net:27017`. When using Azure Cosmos DB, obtain the full Mongo connection string from the Cosmos instance in the portal, which will include the username & password.
|PORT|Optional. Port the server will listen on. *Default: 4000*|
|MONGO_RETRIES|Optional. How many times the server will retry connecting to MongoDB. *Default: 5*|
|MONGO_RETRY_DELAY|Optional. How long to wait in seconds, before retry connecting to MongoDB. *Default: 5*|
|SECURE_CLIENT_ID|Optional. When set, certain admin API calls will be validated, leave blank or unset to disable security and validation. Details below. *Default: 'blank'*|
|AAD_V1|Optional. Use older Azure AD v1 issuer when validating tokens. Only used when SECURE_CLIENT_ID is set. Change this to true if you get 401 errors even with a valid user. *Default: false*|
|APPINSIGHTS_INSTRUMENTATIONKEY|Optional. Enables data collection and monitoring with Azure App Insights, set to the key of the instance you want to send data to. *Default: 'blank'*|
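For example, to run against a local MongoDB instance and keep the default port:

```
MONGO_CONNSTR=mongodb://localhost:27017 PORT=4000 npm start
```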
# Security
For demos it is suggested that the API is left open for ease of showing the API and the working app, however for a permanent or live instance it should be restricted.
The event PUT, POST and DELETE calls result in data modification, and are only called by the admin section of the Smilr client app. The configuration described here allows these calls to be placed behind an authentication scheme, to prevent direct API access.
To switch on security for these three calls, set the `SECURE_CLIENT_ID` environmental variable to the client id of an [app registered with Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app)
HTTP request validation is done in `lib/utils.js` by the ***verifyAuthentication()*** method; validation is skipped entirely if SECURE_CLIENT_ID is unset or blank (Note. This is the default). This method calls on a library called 'azure-ad-jwt' in order to validate the tokens.
> :speech_balloon: **Note.** At the time of writing (Dec 2018) there are some issues with the public version of 'azure-ad-jwt', so a locally modified copy is provided in `lib/azure-ad-jwt/`. The modifications are [detailed in this pull request](https://github.com/dei79/node-azure-ad-jwt/pull/13)
The validation logic first checks for the `authorization` header in the HTTP request; the bearer token is extracted and treated as a JWT, which is validated to confirm that it was issued and signed by Azure AD. Lastly the token's 'audience claim' is checked to ensure it matches the client id provided in `SECURE_CLIENT_ID`. This means the token was issued to our known registered app.
Failure of any of these checks will result in no data being modified and a HTTP 401 error being returned (with a reason message in the body)
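In outline the flow looks like the following sketch (illustrative only — the real code lives in `lib/utils.js` and uses the bundled azure-ad-jwt library; `validateJwt` below is a stand-in for that call, not its actual API):

```
// Sketch of the validation flow, not the actual implementation
async function verifyAuthentication(req, res, next) {
  // Security disabled: skip all checks
  if (!process.env.SECURE_CLIENT_ID) return next();

  const authHeader = req.headers['authorization'] || '';
  if (!authHeader.startsWith('Bearer ')) {
    return res.status(401).send('Authorization bearer token missing');
  }
  const token = authHeader.substring(7);

  try {
    // validateJwt stands in for the azure-ad-jwt verification,
    // which checks the signature and that Azure AD issued the token
    const claims = await validateJwt(token);

    // The audience claim must match the registered app's client id
    if (claims.aud !== process.env.SECURE_CLIENT_ID) {
      return res.status(401).send('Token audience does not match SECURE_CLIENT_ID');
    }
    return next();
  } catch (err) {
    return res.status(401).send('Token validation failed');
  }
}
```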
Once security is enabled, the Vue.js client will also need to be [similarly configured, with the matching AAD app client id used for validation](../../vue/#security)
> :speech_balloon: **Note.** If `SECURE_CLIENT_ID` is not set (which is the default), any tokens sent (in the authentication HTTP header) will simply be ignored, the header can also be omitted. Also the GET methods of the event API are always open and not subject to ANY validation, likewise the feedback API is left open by design
| 89.75 | 383 | 0.764945 | eng_Latn | 0.9953 |
dbaf6af7ea3193491e439ae62927bff67e803ab5 | 170 | md | Markdown | README.md | tackettnathant/3000_in_30 | 7c80c74649efc9cc720fa3c264cf77d5f1f40798 | [
"MIT"
] | null | null | null | README.md | tackettnathant/3000_in_30 | 7c80c74649efc9cc720fa3c264cf77d5f1f40798 | [
"MIT"
] | null | null | null | README.md | tackettnathant/3000_in_30 | 7c80c74649efc9cc720fa3c264cf77d5f1f40798 | [
"MIT"
] | null | null | null | Track pushups.
Bootstrapped with create-react-app.
Change the values in StepChallenge.js to point to an appropriate firebase server with Google authentication enabled.
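The values in question are the standard Firebase web config, along these lines (every value is a placeholder, and the actual layout of StepChallenge.js may differ):

```js
// Placeholder Firebase config — replace every value with your own project's settings.
const firebaseConfig = {
  apiKey: '<your-api-key>',
  authDomain: '<your-project>.firebaseapp.com',
  databaseURL: 'https://<your-project>.firebaseio.com',
  projectId: '<your-project>',
};
firebase.initializeApp(firebaseConfig);
```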
| 28.333333 | 116 | 0.829412 | eng_Latn | 0.979606 |
dbafd43f9947e6b207f8ee9e282d468a3f77ba1a | 722 | md | Markdown | docs/framework/wcf/diagnostics/performance-counters/queued-rejected-messages.md | Dodozz/docs.it-it | f34c4bb1e8afb7492f8512359d32a9156c9c768d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/performance-counters/queued-rejected-messages.md | Dodozz/docs.it-it | f34c4bb1e8afb7492f8512359d32a9156c9c768d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/performance-counters/queued-rejected-messages.md | Dodozz/docs.it-it | f34c4bb1e8afb7492f8512359d32a9156c9c768d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Queued Rejected Messages
ms.date: 03/30/2017
ms.assetid: 8eb75a76-4fb3-4d33-bd9f-6d91e09c5843
ms.openlocfilehash: 9864671aa23617fdd8279149ea917fa3ff4e1b86
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "61916084"
---
# <a name="queued-rejected-messages"></a>Queued Rejected Messages
Counter name: Queued Rejected Messages.
## <a name="description"></a>Description
The number of messages to the service that were rejected by the queued transport.
See [Poison Messages](https://go.microsoft.com/fwlink/?LinkID=96233) for more information about when messages are rejected.
| 38 | 154 | 0.793629 | ita_Latn | 0.863035 |
dbafea73e26cdd2603ddc60e99dd1971f1b7e122 | 2,577 | md | Markdown | readme.md | joscelynjean/experimental-signalr | 7d86f3ec8da0fd7ff0ee3787710b61d9e5de43c8 | [
"MIT"
] | null | null | null | readme.md | joscelynjean/experimental-signalr | 7d86f3ec8da0fd7ff0ee3787710b61d9e5de43c8 | [
"MIT"
] | null | null | null | readme.md | joscelynjean/experimental-signalr | 7d86f3ec8da0fd7ff0ee3787710b61d9e5de43c8 | [
"MIT"
] | null | null | null | # Experimenting SignalR
This project is a simple experimentation of SignalR. Basically, we are building a chat system that is running in-memory on the server side.
Objectives :
- Demonstrate the use of SignalR
## How to run
### Requirements
- Visual Studio 2015 update 1 or higher
### Run solution
#### Run the server
In **src/chat-server**, open the **chat-server.sln**. Press F5 or run the application. You can validate that the application is up and running by accessing the following url : http://localhost:8082/signalr/hubs .
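For reference, the hub exposed by the chat server (named `user` in the negotiate request shown in the troubleshooting section below) would typically be a SignalR 2.x class shaped like this — an assumed sketch, not a copy of the code in **src/chat-server** :

    using Microsoft.AspNet.SignalR;
    using Microsoft.AspNet.SignalR.Hubs;

    [HubName("user")]
    public class UserHub : Hub
    {
        // Called from the client as userHub.server.registerUser(username)
        public void RegisterUser(string username)
        {
            // Broadcast to every connected client; the in-memory user list
            // would be maintained here in the real implementation.
            Clients.All.userRegistered(username);
        }
    }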
#### Run the client
In **src/chat-client**, open the **chat-client.sln**. Press F5 or run the application. You can access the client at the following url : http://localhost:8081/index.html .
### Troubleshooting during development
#### Add the right NuGet package
With the new version of SignalR, we must bootstrap the application by using OWIN. Be sure to get the Microsoft.OWIN NuGet package.
#### Negotiate not found
Before attempting to start a connection to the hubs, be sure to change the base url :
$.connection.hub.url = 'http://localhost:8082/signalr'
#### CORS to enable
In the OWIN Startup class, you must enable cors.
public class Startup
{
public void Configuration(IAppBuilder app)
{
app.UseCors(CorsOptions.AllowAll);
app.MapSignalR();
}
}
I used WebAPI in the same solution and had to adjust CORS there as well, but both configurations were in conflict. I had to remove this from the WebAPI controller :
[EnableCors(origins: "*", headers: "*", methods: "*")]
Also, I removed this from the WebAPIConfig.cs :
config.EnableCors(new EnableCorsAttribute("*", "*", "*"));
Note : Since this is an experimentation, I didn't bother with the CORS rules, but this part must be adjusted to whitelist only the hostnames that can access the services.
#### Access-Control-Allow-Origin of '*' not available while credentials flag is true
Having this error :
XMLHttpRequest cannot load http://localhost:8082/signalr/negotiate?clientProtocol=1.5&connectionData=%5B%7B%22name%22%3A%22user%22%7D%5D&_=1478552109181. A wildcard '*' cannot be used in the 'Access-Control-Allow-Origin' header when the credentials flag is true. Origin 'http://localhost:8081' is therefore not allowed access. The credentials mode of an XMLHttpRequest is controlled by the withCredentials attribute.
I had to turn credentials flag to false on the client.
$.connection.hub.start({ withCredentials: false }).done(function () {
userHub.server.registerUser(userInfo.username);
});
| 36.295775 | 420 | 0.726814 | eng_Latn | 0.986444 |
dbb025959143d7939f1adbb1c351a3784f8afd5e | 266 | md | Markdown | _posts/2019-05-06-rainbows.md | wookdev/munged-org | 95410427c9442002fb8a5e58f78cea50132a78ae | [
"MIT"
] | null | null | null | _posts/2019-05-06-rainbows.md | wookdev/munged-org | 95410427c9442002fb8a5e58f78cea50132a78ae | [
"MIT"
] | 6 | 2022-02-13T20:05:54.000Z | 2022-02-21T20:18:11.000Z | _posts/2019-05-06-rainbows.md | wookdev/munged-org | 95410427c9442002fb8a5e58f78cea50132a78ae | [
"MIT"
] | null | null | null | ---
layout: post
author: Wook
title: Rainbows
date: 2019-05-06 20:00:00 -0400
---

Panoramic taken from my father's deck with my iPhone Xs.
Note the second rainbow visible low on each side. | 22.166667 | 74 | 0.725564 | eng_Latn | 0.943148 |
dbb16cde24abf71b9d5f2988107c472e78a9028b | 42 | md | Markdown | README.md | GitHubToMaster/PythonCoreProgramming | c7614aadd0ad141f7f201729d5515bcb79e9cd0a | [
"MIT"
] | null | null | null | README.md | GitHubToMaster/PythonCoreProgramming | c7614aadd0ad141f7f201729d5515bcb79e9cd0a | [
"MIT"
] | null | null | null | README.md | GitHubToMaster/PythonCoreProgramming | c7614aadd0ad141f7f201729d5515bcb79e9cd0a | [
"MIT"
] | null | null | null | # PythonCoreProgramming
Core Python Programming (3rd Edition)
| 14 | 23 | 0.809524 | eng_Latn | 0.417727 |
dbb1708f837b1c86fdea9b45bea3ad2249fdb566 | 1,429 | md | Markdown | README.md | CuriousLLC/CuriousRobot | a3e9b8923a9c0b36c55d1ee0f9217577a2a6ae76 | [
"BSD-2-Clause"
] | null | null | null | README.md | CuriousLLC/CuriousRobot | a3e9b8923a9c0b36c55d1ee0f9217577a2a6ae76 | [
"BSD-2-Clause"
] | null | null | null | README.md | CuriousLLC/CuriousRobot | a3e9b8923a9c0b36c55d1ee0f9217577a2a6ae76 | [
"BSD-2-Clause"
] | null | null | null | Robot Message Controller
========================
* Teensy LC
This package listens for incoming data on Serial1 (GPIO0) and parses the bytes into valid messages.
Messages will tell the controller to configure a new servo, or rotate a servo (or servos) for a period
of time.
Each servo is configured with a mask. The lower 4 bits of the mask specify the type of servo:
* 0000 - Standard
* 0001 - Continuous
The upper 4 bits describe how to interpret rotation requests:
* 0000 - Standard
* 1000 - Inverse
An *inverse* servo will rotate in the opposite direction as requested. This is because two wheels
will be controlled by servos that face opposing directions.
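As an illustration of how such a mask can be unpacked (a sketch — the names below are not the firmware's actual API):

```cpp
// Illustrative sketch only — field and function names are assumptions.
const uint8_t TYPE_MASK    = 0x0F;  // lower 4 bits: servo type (0000 standard, 0001 continuous)
const uint8_t INVERSE_FLAG = 0x80;  // upper 4 bits: 1000 = inverse rotation

struct ServoConfig {
  uint8_t pin;
  uint8_t mask;
};

bool isContinuous(const ServoConfig &s) { return (s.mask & TYPE_MASK) == 0x01; }
bool isInverse(const ServoConfig &s)    { return (s.mask & INVERSE_FLAG) != 0; }

// An inverse servo rotates opposite to the requested direction,
// so a forward request is flipped before being applied.
int appliedSpeed(const ServoConfig &s, int requestedSpeed) {
  return isInverse(s) ? -requestedSpeed : requestedSpeed;
}
```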
There are several message types:
* Add servo - Adds a new servo with a mask on a GPIO
* Rotate type - Rotates every servo of a given type. This prevents the user from sending multiple messages to move each wheel
* Rotate servo - Rotates a single servo. This allows the user to turn in one direction.
* Rotate type duration - Rotates every servo of a given type for a given period of time.
* Rotate servo duration - Rotates a single servo for a given period of time.
This package can be used with the Wifi package. The Wifi package will read UDP datagrams off the network
and send the payload over serial to this package.
* https://github.com/CuriousLLC/RobotWifi - ESP8266 wireless relay
* https://github.com/CuriousLLC/RobotController - Mobile controller
| 39.694444 | 125 | 0.764871 | eng_Latn | 0.994106 |
dbb39a8aa60be5643391c387b4a90b7ae18ec7c7 | 3,312 | md | Markdown | content/teaching/intro-theory/index.md | blairbilodeau/website-files | 7fa8cee9d1e70cc89aaeede1239c269b0c5f58c1 | [
"MIT"
] | null | null | null | content/teaching/intro-theory/index.md | blairbilodeau/website-files | 7fa8cee9d1e70cc89aaeede1239c269b0c5f58c1 | [
"MIT"
] | null | null | null | content/teaching/intro-theory/index.md | blairbilodeau/website-files | 7fa8cee9d1e70cc89aaeede1239c269b0c5f58c1 | [
"MIT"
] | null | null | null | +++
type= "nothing"
# Project title.
title = "Introduction to Theoretical Statistics Research"
# Date this page was created.
date = 2021-07-10
# Authors. Comma separated list, e.g. `["Bob Smith", "David Jones"]`.
authors = ["Blair Bilodeau"]
# Abstract.
abstract = "Undergraduate statistics courses focus heavily on how statistics is applied in practice along with theoretical guarantees on how classical methods perform, but often leave undergrads unclear what modern theoretical research looks like, or how to begin doing it themselves. In this seminar, I’ll introduce at a high-level my views on what the key common aspects are of most theoretical statistics results. Topics of discussion include: questions theoreticians aim to answer, impacts of theoretical research on statistical practice, and limitations of current common trends in theoretical statistics. I’ll illustrate these topics using brief summaries of my own research contributions in classical statistics, machine learning, and modern computational methods."
# Project summary to display on homepage.
summary = ""
# Digital Object Identifier (DOI)
doi = ""
# Tags (optional).
# Set `tags = []` for no tags, or use the form `tags = ["A Tag", "Another Tag"]` for one or more tags.
tags = []
# Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["deep-learning"]` references
# `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects = []
# Optional external URL for project (replaces project detail page).
external_link = ""
# Slides (optional).
# Associate this project with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides = "example-slides"` references
# `content/slides/example-slides.md`.
# Otherwise, set `slides = ""`.
slides = ""
# Links (optional).
url_preprint = ""
url_pdf = ""
url_slides = ""
url_video = ""
url_code = ""
# Custom links (optional).
# Uncomment line below to enable. For multiple links, use the form `[{...}, {...}, {...}]`.
links = [{name = "Slides", url = "teaching/intro-theory/issc-2021.pdf"}]
# Featured image
# To use, add an image named `featured.jpg/png` to your project's folder.
[image]
# Caption (optional)
# caption = "Photo by rawpixel on Unsplash"
# Focal point (optional)
# Options: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight
# focal_point = "Smart"
+++
Undergraduate statistics courses focus heavily on how statistics is applied in practice along with theoretical guarantees on how classical methods perform, but often leave undergrads unclear what modern theoretical research looks like, or how to begin doing it themselves. In this seminar, I’ll introduce at a high-level my views on what the key common aspects are of most theoretical statistics results. Topics of discussion include: questions theoreticians aim to answer, impacts of theoretical research on statistical practice, and limitations of current common trends in theoretical statistics. I’ll illustrate these topics using brief summaries of my own research contributions in classical statistics, machine learning, and modern computational methods.
| 48.705882 | 772 | 0.748188 | eng_Latn | 0.989047 |
dbb3a64ddcad11f6c9fc070a18df04d0345ff78d | 12,939 | md | Markdown | docs/relational-databases/system-stored-procedures/sp-estimate-data-compression-savings-transact-sql.md | pricardo03/sql-docs.es-es | 99e506a1b434d01707f85ff9807c583460bfe153 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-stored-procedures/sp-estimate-data-compression-savings-transact-sql.md | pricardo03/sql-docs.es-es | 99e506a1b434d01707f85ff9807c583460bfe153 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-stored-procedures/sp-estimate-data-compression-savings-transact-sql.md | pricardo03/sql-docs.es-es | 99e506a1b434d01707f85ff9807c583460bfe153 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: sp_estimate_data_compression_savings (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 03/15/2017
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: system-objects
ms.topic: language-reference
f1_keywords:
- sp_estimate_data_compression_savings_TSQL
- sp_estimate_data_compression_savings
dev_langs:
- TSQL
helpviewer_keywords:
- compression [SQL Server], estimating
- sp_estimate_data_compression_savings
ms.assetid: 6f6c7150-e788-45e0-9d08-d6c2f4a33729
author: stevestein
ms.author: sstein
ms.openlocfilehash: 0e7d9c1e2f3c6d0de5e41775c445b46f232d5985
ms.sourcegitcommit: 495913aff230b504acd7477a1a07488338e779c6
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 08/06/2019
ms.locfileid: "68811380"
---
# <a name="sp_estimate_data_compression_savings-transact-sql"></a>sp_estimate_data_compression_savings (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-xxxx-xxxx-xxx-md.md)]
Returns the current size of the requested object and estimates the object size for the requested compression state. Compression can be evaluated for entire tables or for parts of tables. This includes heaps, clustered indexes, nonclustered indexes, columnstore indexes, indexed views, and table and index partitions. Objects can be compressed by using row, page, columnstore, or columnstore archive compression. If the table, index, or partition is already compressed, you can use this procedure to estimate the size of the table, index, or partition if it were recompressed.
> [!NOTE]
> Compression and **sp_estimate_data_compression_savings** are not available in every edition of [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. For a list of features supported by the editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], see [Features supported by the editions of SQL Server 2016](~/sql-server/editions-and-supported-features-for-sql-server-2016.md).
To estimate the size of the object if it were to use the requested compression setting, this stored procedure samples the source object and loads the data into an equivalent table and index created in tempdb. The table or index created in tempdb is then compressed to the requested setting, and the estimated compression savings is computed.
To change the compression state of a table, index, or partition, use the [ALTER TABLE](../../t-sql/statements/alter-table-transact-sql.md) or [ALTER INDEX](../../t-sql/statements/alter-index-transact-sql.md) statements. For general information about compression, see [Data Compression](../../relational-databases/data-compression/data-compression.md).
> [!NOTE]
> If the existing data is fragmented, you might be able to reduce its size by rebuilding the index instead of using compression. For indexes, the fill factor is applied when the index is rebuilt. This could increase the size of the index.
 [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Syntax
```
sp_estimate_data_compression_savings
[ @schema_name = ] 'schema_name'
, [ @object_name = ] 'object_name'
, [@index_id = ] index_id
, [@partition_number = ] partition_number
, [@data_compression = ] 'data_compression'
[;]
```
## <a name="arguments"></a>Argumentos
[ @schema_name=] '*schema_name*'
Es el nombre del esquema de la base de datos que contiene la tabla o vista indizada. *schema_name* es de **tipo sysname**. Si *schema_name* es null, se usa el esquema predeterminado del usuario actual.
[ @object_name=] '*object_name*'
Es el nombre de la tabla o vista indizada en la que está el índice. *object_name* es **sysname**.
[ @index_id=] '"
Es el identificador del índice. el valor de la siguiente es **int**y puede ser uno de los valores siguientes: el número de identificación de un índice, null o 0 si *object_id* es un montón. Para obtener información de todos los índices de una tabla base o vista, especifique NULL. Si especifica NULL, también debe especificar NULL para *partition_number*.
[ @partition_number=] '*partition_number*'
Es el número de partición en el objeto. *partition_number* es de **tipo int**y puede tener uno de los valores siguientes: el número de partición de un índice o montón, null o 1 para un índice o montón sin particiones.
Para especificar la partición, también puede especificar la función [$Partition](../../t-sql/functions/partition-transact-sql.md) . Para obtener información sobre todas las particiones del objeto propietario, especifique NULL.
[ @data_compression=] '*data_compression*'
Es el tipo de compresión que se va a evaluar. *data_compression* puede ser uno de los siguientes valores: NONE, ROW, PAGE, COLUMNSTORE o COLUMNSTORE_ARCHIVE.
## <a name="return-code-values"></a>Valores de código de retorno
0 (correcto) o 1 (error)
## <a name="result-sets"></a>Conjuntos de resultados
El siguiente conjunto de resultados se devuelve para proporcionar el tamaño actual y estimado de la tabla, índice o partición.
|Nombre de columna|Tipo de datos|Descripción|
|-----------------|---------------|-----------------|
|object_name|**sysname**|Nombre de la tabla o vista indizada.|
|schema_name|**sysname**|Esquema de la tabla o vista indizada.|
|index_id|**int**|Identificador de índice de un índice:<br /><br /> 0 = Montón<br /><br /> 1 = Índice clúster<br /><br /> > 1 = índice no clúster|
|partition_number|**int**|Número de partición. Devuelve 1 para una tabla o índice sin particiones.|
|size_with_current_compression_setting (KB)|**bigint**|Tamaño actual de la tabla, índice o partición solicitados.|
|size_with_requested_compression_setting (KB)|**bigint**|Tamaño estimado de la tabla, índice o partición que utiliza el valor de compresión solicitado y, si es aplicable, factor de relleno existente, suponiendo que no hay fragmentación.|
|sample_size_with_current_compression_setting (KB)|**bigint**|Tamaño del ejemplo con la opción de compresión actual. Esto incluye cualquier fragmentación.|
|sample_size_with_requested_compression_setting (KB)|**bigint**|Tamaño del ejemplo que se crea utilizando el valor de compresión solicitado y, si es aplicable, factor de relleno existente, sin fragmentación.|
## <a name="remarks"></a>Comentarios
Use sp_estimate_data_compression_savings para calcular el ahorro que se puede producir al habilitar una tabla o partición para la compresión de fila, de página, de almacén de columnas o de archivo de almacén de columnas. Por ejemplo, si el tamaño medio de una fila se puede reducir un 40 por ciento, potencialmente también se puede reducir el tamaño del objeto en un 40 por ciento. Es posible que no consiga ahorrar espacio, ya que depende del factor de relleno y del tamaño de la fila. Por ejemplo, si una fila tiene 8.000 bytes de longitud y reduce su tamaño en un 40 por ciento, solo podrá seguir incluyendo una fila en una página de datos. No se obtiene ningún ahorro.
Si los resultados de ejecutar sp_estimated_rowsize_reduction_for_vardecimal indican que la tabla crecerá, eso quiere decir que muchas filas de la tabla utilizan prácticamente toda la precisión en los tipos de datos, y la adición de la mínima sobrecarga necesaria para el formato comprimido es mayor que el ahorro obtenido por la compresión. En este caso excepcional, no habilite la compresión.
Si una tabla está habilitada para compresión, utilice sp_estimate_data_compression_savings para estimar el tamaño medio de la fila si se descomprime la tabla.
Durante esta operación, se adquiere un bloqueo con intención compartida (IS) en la tabla. Si no se puede obtener un bloqueo (IS), se bloqueará el procedimiento. La tabla se examina bajo el nivel de aislamiento READ COMMITTED.
Si el valor de compresión solicitado es el mismo que el de la compresión actual, el procedimiento almacenado devolverá el tamaño estimado sin la fragmentación de los datos y utilizando el factor de relleno existente.
Si no existe el identificador de índice o la partición, no se devolverá ningún resultado.
## <a name="permissions"></a>Permisos
Es necesario contar con un permiso de tipo SELECT en la tabla.
## <a name="limitations-and-restrictions"></a>Limitaciones y restricciones
Antes de SQL Server 2019, este procedimiento no se aplicaba a los índices de almacén de columnas y, por lo tanto, no aceptaba los parámetros de compresión de datos COLUMNSTORE y COLUMNSTORE_ARCHIVE. A partir de SQL Server 2019, los índices de almacén de columnas se pueden usar como objeto de origen para la estimación y como tipo de compresión solicitado.
## <a name="considerations-for-columnstore-indexes"></a>Consideraciones sobre los índices de almacén de columnas
A partir de SQL Server 2019, sp_estimate_compression_savings admite la estimación de la compresión de almacén de columnas (COLUMNSTORE) y de archivo de almacén de columnas (COLUMNSTORE_ARCHIVE). A diferencia de la compresión de página y fila, la aplicación de la compresión de almacén de columnas a un objeto requiere la creación de un nuevo índice de almacén de columnas. Por esta razón, al usar las opciones COLUMNSTORE y COLUMNSTORE_ARCHIVE de este procedimiento, el tipo del objeto de origen proporcionado al procedimiento determina el tipo de índice de almacén de columnas usado para la estimación del tamaño comprimido. En la tabla siguiente se muestran los objetos de referencia que se usan para calcular el ahorro de compresión para cada tipo de objeto de origen cuando el parámetro @data_compression se establece en COLUMNSTORE o en COLUMNSTORE_ARCHIVE.
|Objeto de origen|Objeto de referencia|
|-----------------|---------------|
|Montón|Índice de almacén de columnas agrupado|
|Índice clúster|Índice de almacén de columnas agrupado|
|Índice no clúster|Índice de almacén de columnas no agrupado (incluidas las columnas de clave y todas las columnas incluidas del índice no clúster proporcionado, así como la columna de partición de la tabla, si existe)|
|índice no clúster de almacén de columnas|Índice de almacén de columnas no agrupado (incluidas las mismas columnas que el índice de almacén de columnas no agrupado proporcionado)|
|Índice de almacén de columnas agrupado|Índice de almacén de columnas agrupado|
> [!NOTE]
> Al estimar la compresión de almacén de columnas de un objeto de origen de almacén (índice clúster, índice no clúster o montón), si hay alguna columna en el objeto de origen que tenga un tipo de datos no admitido en un índice de almacén de columnas, sp_estimate_compression_savings producirá un error.
Del mismo modo, cuando el parámetro @data_compression se establece en NONE, ROW o PAGE y el objeto de origen es un índice de almacén de columnas, en la tabla siguiente se describen los objetos de referencia usados.
|Objeto de origen|Objeto de referencia|
|-----------------|---------------|
|Índice de almacén de columnas agrupado|Montón|
|índice no clúster de almacén de columnas|Índice no clúster (incluidas las columnas contenidas en el índice no clúster de almacén de columnas como columnas de clave y la columna de partición de la tabla, si existe, como una columna incluida)|
> [!NOTE]
> Al estimar la compresión de almacén de filas (NONE, ROW o PAGE) de un objeto de origen de almacén de columnas, asegúrese de que el índice de origen no contenga más de 32 columnas, ya que este es el límite admitido en un índice de almacén de filas no agrupado.
## <a name="examples"></a>Ejemplos
En el ejemplo siguiente se calcula el tamaño de la tabla `Production.WorkOrderRouting` si se comprime mediante la compresión `ROW`.
```
USE AdventureWorks2012;
GO
EXEC sp_estimate_data_compression_savings 'Production', 'WorkOrderRouting', NULL, NULL, 'ROW' ;
GO
```
## <a name="see-also"></a>Vea también
[CREATE TABLE (Transact-SQL)](../../t-sql/statements/create-table-transact-sql.md)
[CREATE INDEX (Transact-SQL)](../../t-sql/statements/create-index-transact-sql.md)
[sys.partitions (Transact-SQL)](../../relational-databases/system-catalog-views/sys-partitions-transact-sql.md)
[Procedimientos almacenados del motor de base de datos (Transact-SQL)](../../relational-databases/system-stored-procedures/database-engine-stored-procedures-transact-sql.md)  
[Implementación de la compresión Unicode](../../relational-databases/data-compression/unicode-compression-implementation.md)
| 84.019481 | 840 | 0.766288 | spa_Latn | 0.979172 |
dbb3ad2ba47aafff8a7e81370e317024b5cc3158 | 489 | md | Markdown | content/page/events.md | devopsdays/devopsdays-test | e2766c9a61cf053c7949d37fe420e8750b4549dc | [
"Apache-2.0",
"MIT"
] | null | null | null | content/page/events.md | devopsdays/devopsdays-test | e2766c9a61cf053c7949d37fe420e8750b4549dc | [
"Apache-2.0",
"MIT"
] | null | null | null | content/page/events.md | devopsdays/devopsdays-test | e2766c9a61cf053c7949d37fe420e8750b4549dc | [
"Apache-2.0",
"MIT"
] | null | null | null | +++
date = "2015-11-29T00:00:00-06:00"
title = "devopsdays events"
type = "events-list"
aliases = ["/calendar", "/events/calendar", "/devops-calendar", "/presentations"]
+++
Learn how to [organize your own devopsdays event](/pages/organizing)!
After each event, local organizers link to the slides and videos from their event; check individual event program pages for more info. The [devopsdays vimeo account](https://vimeo.com/devopsdays/albums) contains many videos from past events.
| 40.75 | 241 | 0.746421 | eng_Latn | 0.960738 |
dbb3baf95ff7c9eaaeebf5cbaa93536571b09fde | 7,740 | md | Markdown | articles/hdinsight/hdinsight-cluster-availability.md | ancorrg/azure-docs.es-es | fe960fa0a6fb269e31c6120dcc310cfc28e1239d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/hdinsight-cluster-availability.md | ancorrg/azure-docs.es-es | fe960fa0a6fb269e31c6120dcc310cfc28e1239d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/hdinsight-cluster-availability.md | ancorrg/azure-docs.es-es | fe960fa0a6fb269e31c6120dcc310cfc28e1239d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Supervisión de la disponibilidad del clúster con Apache Ambari en Azure HDInsight
description: Aprenda a usar Apache Ambari para supervisar la disponibilidad y el estado del clúster.
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: how-to
ms.custom: hdinsightactive,seoapr2020
ms.date: 05/01/2020
ms.openlocfilehash: 5cfc2abad828a3974c04074a4cde062a479f673f
ms.sourcegitcommit: d767156543e16e816fc8a0c3777f033d649ffd3c
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/26/2020
ms.locfileid: "92533570"
---
# <a name="how-to-monitor-cluster-availability-with-apache-ambari-in-azure-hdinsight"></a>Supervisión de la disponibilidad del clúster con Apache Ambari en Azure HDInsight
Los clústeres de HDInsight incluyen Apache Ambari, que proporciona información de estado de un vistazo y alertas predefinidas.
En este artículo se muestra cómo usar Ambari para supervisar el clúster y le guía a través de algunos ejemplos para configurar una alerta de Ambari, supervisar un índice de disponibilidad de nodo y crear una alerta de Azure Monitor que se desencadena cuando no se recibe ningún latido de uno o varios nodos durante cinco horas.
## <a name="dashboard"></a>Panel
Para acceder al panel de Ambari, seleccione el vínculo **Inicio de Ambari** de la sección **Paneles de clúster** de la hoja de información general de HDInsight en Azure Portal, tal como se muestra a continuación. También puede acceder a él yendo a `https://CLUSTERNAME.azurehdinsight.net` en un explorador en el que CLUSTERNAME es el nombre del clúster.

Luego, se le pedirá un nombre de usuario y contraseña de inicio de sesión del clúster. Escriba las credenciales que eligió al crear el clúster.
Se le redirigirá al panel de Ambari, que contiene widgets que muestran una serie de métricas para ofrecerle una vista rápida del estado del clúster de HDInsight. Estos widgets muestran métricas como, por ejemplo, el número de elementos DataNodes (nodos de trabajo) en directo y elementos JournalNodes (nodo de zookeeper), el tiempo de actividad del elemento NameNodes (nodos principales), así como también métricas específicas para determinados tipos de clúster, como el tiempo de actividad de YARN ResourceManager para los clústeres de Spark y Hadoop.

## <a name="hosts--view-individual-node-status"></a>Hosts: ver el estado de nodo individual.
También puede ver información de estado de los nodos individuales. Seleccione la pestaña **Hosts** para ver una lista de todos los nodos del clúster y ver información básica sobre cada nodo. La marca de verificación verde a la izquierda de cada nombre de nodo indica que todos los componentes están activos en el nodo. Si un componente está inactivo en un nodo, verá un triángulo de alerta rojo en lugar de la marca de verificación verde.

Luego, puede seleccionar el **nombre** de un nodo para ver métricas más detalladas del host para ese nodo concreto. En esta vista se muestra la disponibilidad y el estado de cada componente individual.

## <a name="ambari-alerts"></a>Alertas de Ambari
Ambari también ofrece varias alertas configurables que pueden proporcionar una notificación de determinados eventos. Cuando se desencadenan alertas, se muestran en la esquina superior izquierda de Ambari en una notificación de color rojo que contiene el número de alertas. Al seleccionar esta notificación aparece una lista de las alertas actuales.

Para ver una lista de definiciones de alertas y sus estados, seleccione la pestaña **Alertas**, tal como se muestra a continuación.

Ambari ofrece muchas alertas predefinidas relacionadas con la disponibilidad, entre las que se incluyen las siguientes:
| Nombre de la alerta | Descripción |
|---|---|
| Resumen de estado de DataNode | Esta alerta de nivel de servicio se desencadena si hay elementos DataNodes en mal estado.|
| Estado de disponibilidad alta de NameNode | Esta alerta de nivel de servicio se desencadena si el elemento NameNode activo o NameNode en espera no está en ejecución.|
| Porcentaje de JournalNodes disponible | Esta alerta se desencadena si el número de elementos JournalNodes inactivos en el clúster es mayor que el umbral crítico configurado. Agrega los resultados de las comprobaciones del proceso JournalNode. |
| Porcentaje de DataNodes disponible | Esta alerta se desencadena si el número de elementos DataNodes inactivos en el clúster es mayor que el umbral crítico configurado. Agrega los resultados de las comprobaciones del proceso DataNode.|
Para ver los detalles de una alerta o modificar los criterios, seleccione el **nombre** de la alerta. Tome **Resumen de estado de DataNode** como ejemplo. Puede ver una descripción de la alerta, así como los criterios específicos que desencadenarán una alerta de "advertencia" o "crítica", así como el intervalo de comprobación de los criterios. Para editar la configuración, seleccione el botón **Editar** de la esquina superior derecha del cuadro de configuración.

Aquí, puede editar la descripción y, lo que es más importante, el intervalo de comprobación y los umbrales para las alertas de advertencia o críticas.

En este ejemplo, puede hacer que 2 elementos DataNodes en mal estado desencadenen una alerta crítica y que 1 elemento DataNode en mal estado desencadene solo una advertencia. Cuando termine la edición, seleccione **Guardar**.
## <a name="email-notifications"></a>Notificaciones por correo electrónico
Opcionalmente, también puede configurar notificaciones por correo electrónico para las alertas de Ambari. Para ello, en la pestaña **Alertas**, haga clic en el botón **Acciones** de la esquina superior izquierda y seleccione **Administrar notificaciones**.

Se abrirá un cuadro de diálogo para administrar las notificaciones de alerta. Seleccione **+** en la parte inferior del cuadro de diálogo y rellene los campos obligatorios para proporcionar a Ambari los detalles del servidor de correo electrónico desde el que se van a enviar correos electrónicos.
> [!TIP]
> La configuración de notificaciones por correo electrónico de Ambari puede ser una buena manera de recibir alertas en un solo lugar a la hora de administrar muchos clústeres de HDInsight.
## <a name="next-steps"></a>Pasos siguientes
- [Disponibilidad y confiabilidad de clústeres de Apache Hadoop en HDInsight](./hdinsight-business-continuity.md)
- [Disponibilidad del clúster: registros de Azure Monitor](./cluster-availability-monitor-logs.md)
- [Uso de registros de Azure Monitor](hdinsight-hadoop-oms-log-analytics-tutorial.md)
- [Notificaciones por correo electrónico de Apache Ambari](apache-ambari-email.md)
| 83.225806 | 552 | 0.801938 | spa_Latn | 0.988837 |
dbb3e228388a89a35516b86cf0472a6d5644122f | 3,602 | md | Markdown | iOS/ProgressHud/README.md | WeConnect/phonegap-plugins | 30ee1377ac9256bd49725e07c136f61188ea5f69 | [
"Unlicense"
] | 3 | 2020-09-23T22:06:15.000Z | 2020-09-23T23:57:54.000Z | iOS/ProgressHud/README.md | StackTipsLab/phonegap-plugins | baadafc3bd403beebdee29acb56381a88d4e1bf9 | [
"Unlicense"
] | 2 | 2016-08-02T15:26:47.000Z | 2016-08-02T15:31:41.000Z | iOS/ProgressHud/README.md | StackTipsLab/phonegap-plugins | baadafc3bd403beebdee29acb56381a88d4e1bf9 | [
"Unlicense"
] | null | null | null | # Cordova ProgressHud Plugin #
by `Olivier Louvignes`
## DESCRIPTION ##
* This plugin provides a simple way to use a native loading component from IOS. It does comply with the latest (future-2.x) cordova standards.
* It relies on [MBProgressHUD](https://github.com/jdg/MBProgressHUD) to work (MIT license, included in ./libs).
## SETUP ##
Using this plugin requires [Cordova iOS](https://github.com/apache/incubator-cordova-ios).
1. Make sure your Xcode project has been [updated for Cordova](https://github.com/apache/incubator-cordova-ios/blob/master/guides/Cordova%20Upgrade%20Guide.md)
2. Drag and drop the `ProgressHud` folder from Finder to your Plugins folder in XCode, using "Create groups for any added folders"
3. Add the .js files to your `www` folder on disk, and add reference(s) to the .js files using <script> tags in your html file(s)
<script type="text/javascript" src="/js/plugins/ProgressHud.js"></script>
4. Add new entry with key `ProgressHud` and value `ProgressHud` to `Plugins` in `Cordova.plist/Cordova.plist`
## JAVASCRIPT INTERFACE ##
// After device ready, create a local alias
var progressHud = window.plugins.progressHud;
// Complex example with loading
progressHud.show({mode: "determinate", progress:0, labelText: 'Loading...', detailsLabelText: 'Connecting...'}, function() {
console.warn('show(), arguments=' + Array.prototype.slice.call(arguments).join(', '));
});
    var i = 0, n = 10; // progress counter and total number of steps (example values)
    var interval = setInterval(function() {
i++;
if(i > n) {
progressHud.hide();
return clearInterval(interval);
}
var progress = Math.round((i / n) * 100) / 100,
detailsLabelText = 'Processing ' + i + '/' + n;
if (i == n) {
detailsLabelText = 'Finalizing...'
}
progressHud.set({progress: progress, detailsLabelText: detailsLabelText});
}, 1000);
* Check [source](http://github.com/mgcrea/phonegap-plugins/tree/master/iOS/ProgressHud/ProgressHud.js) for additional configuration.
## BUGS AND CONTRIBUTIONS ##
Patches welcome! Send a pull request. Since this is not a part of Cordova Core (which requires a CLA), this should be easier.
Post issues on [Github](https://github.com/phonegap/phonegap-plugins/issues)
The latest code (my fork) will always be [here](http://github.com/mgcrea/phonegap-plugins/tree/master/iOS/ProgressHud)
## LICENSE ##
Copyright 2012 Olivier Louvignes. All rights reserved.
The MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
## CREDITS ##
Inspired by :
* [MBProgressHUD project](https://github.com/jdg/MBProgressHUD)
| 46.179487 | 460 | 0.727374 | eng_Latn | 0.529722 |
dbb4b192b02533a5ef360c6027847fa2fb0cfefa | 594 | md | Markdown | README.md | codedak/Django-React-ChatBot | 0b5da30ebad3a751cc18f7af494df001a539e7ec | [
"CC0-1.0"
] | null | null | null | README.md | codedak/Django-React-ChatBot | 0b5da30ebad3a751cc18f7af494df001a539e7ec | [
"CC0-1.0"
] | null | null | null | README.md | codedak/Django-React-ChatBot | 0b5da30ebad3a751cc18f7af494df001a539e7ec | [
"CC0-1.0"
] | null | null | null | # Django-React-ChatBot
**_"A basic chatbot created with Django + React + Love"_**
#### Run this command to initiate and activate virtual environment
```
virtualenv venv --python=python3
source venv/bin/activate
```
#### Run this command to install required Django packages using pip
```
pip install -r requirements.txt
```
#### Run the below commands to install required react packages using npm
```
npm init -y
npm i react react-dom prop-types
npm i -D @babel/core babel-loader @babel/preset-env @babel/preset-react babel-plugin-transform-class-properties
npm i -D webpack webpack-cli
```
| 25.826087 | 111 | 0.744108 | eng_Latn | 0.855759 |
dbb4c5d1a23d4ebca5c5869ad2fd5683454b8e0d | 2,903 | md | Markdown | README.md | jhejlik/Mageek_OpenmageTheme | 901a5ff9f5b6655a07cf2bacfaaffaf6e85110de | [
"AFL-3.0"
] | 10 | 2020-06-22T07:42:21.000Z | 2022-01-22T17:12:29.000Z | README.md | jhejlik/Mageek_OpenmageTheme | 901a5ff9f5b6655a07cf2bacfaaffaf6e85110de | [
"AFL-3.0"
] | 2 | 2020-06-22T00:40:30.000Z | 2020-11-11T17:38:42.000Z | README.md | jhejlik/Mageek_OpenmageTheme | 901a5ff9f5b6655a07cf2bacfaaffaf6e85110de | [
"AFL-3.0"
] | 4 | 2020-11-09T19:30:23.000Z | 2021-09-18T10:14:40.000Z | # Mageek_OpenmageTheme v.2
Admin Theme for Openmage (Magento LTS) overrides the default Magento admin theme. The minimalist header with menu provides more space for data. Colouring based on the Openmage brand identity. Compatible with all pages and third party extensions. Now you can edit CSS style from admin page. You can switch back to the default theme on the **System->Configuration->Admin** page.
## Browsers compatibility
This theme contains CSS variables that are compatible with [all modern browsers](https://caniuse.com/#feat=css-variables).
**The theme is not compatible with Internet Explorer**.
## Change Log
**2.0.0 -Apr 20, 2021**
- Colouring based on the Openmage brand identity
- Major CSS changes
- CSS edit from Admin
- New graph style on dashboard
**1.0.1 -Jun 27, 2020**
- Theme switch refactoring
- Minor CSS changes
**1.0.0 -Jun 20, 2020**
- Initial release
- Admin theme switch inspired by [Inchoo AdminTheme](https://github.com/ajzele/Inchoo_AdminTheme).
## Screenshots
### Login Page

### Popup Message

### Dashboard Page

### Loader

### Menu

### Order Create

### Invoices

### Manage Products

### Product Edit

### Customer Edit

### Rule Edit

### Index Management

### Cache Management

### Configuration

| 53.759259 | 376 | 0.782639 | yue_Hant | 0.261595 |
dbb4eede947ca2aab128bd2694c70eb55fc2f511 | 153 | md | Markdown | docs/api.md | bateman/wolproxypyapi | 41e955cf08bc5c677a7b97513082827211998889 | [
"MIT"
] | null | null | null | docs/api.md | bateman/wolproxypyapi | 41e955cf08bc5c677a7b97513082827211998889 | [
"MIT"
] | 2 | 2021-12-28T07:56:06.000Z | 2022-01-17T12:20:37.000Z | docs/api.md | bateman/wolproxypyapi | 41e955cf08bc5c677a7b97513082827211998889 | [
"MIT"
] | null | null | null | # Documentation for `wolproxypyapi` module
::: wolproxypyapi
handler: python
rendering:
show_root_heading: false
show_source: false
| 19.125 | 42 | 0.705882 | eng_Latn | 0.688914 |
dbb50f93bc39b680a83fdfb3bd4677592518a536 | 59 | md | Markdown | README.md | YavorAngelov/SnookerAcademy | 185c6a897d84fb6650102654c8cc16cc2ff9ff99 | [
"MIT"
] | null | null | null | README.md | YavorAngelov/SnookerAcademy | 185c6a897d84fb6650102654c8cc16cc2ff9ff99 | [
"MIT"
] | null | null | null | README.md | YavorAngelov/SnookerAcademy | 185c6a897d84fb6650102654c8cc16cc2ff9ff99 | [
"MIT"
] | null | null | null | # SnookerAcademy
Blog website for snooker training acadamy
| 19.666667 | 41 | 0.847458 | eng_Latn | 0.590194 |
dbb55a2e346f0788f5d9aa1fd19ed9f1e719973f | 8,945 | md | Markdown | README.md | TobiasDuswald/hiv_malawi | 0a7f4af6e149075b5f461d4d87da4b4043c884e5 | [
"Apache-2.0"
] | 1 | 2021-11-25T16:21:35.000Z | 2021-11-25T16:21:35.000Z | README.md | BioDynaMo/hiv_malawi | 0a7f4af6e149075b5f461d4d87da4b4043c884e5 | [
"Apache-2.0"
] | 2 | 2021-06-14T06:03:26.000Z | 2021-06-14T06:03:37.000Z | README.md | TobiasDuswald/hiv_malawi | 0a7f4af6e149075b5f461d4d87da4b4043c884e5 | [
"Apache-2.0"
] | null | null | null | # BioDynMo for HIV modelling in Malawi
## Project description
This repository is the result of joint work of CERN and the University of
Geneva. With this project, we attempt to accelerate epidemiological modelling
with CERN's HPC framework [BioDynaMo](https://biodynamo.org) designed to
simulate billions of agents in the most efficient way possible. BioDynaMo is
hosted on [GitHub](https://github.com/BioDynaMo/biodynamo) and more information
can be found in the associated
[publication](https://doi.org/10.1101/2020.06.08.139949).
With the code of the repository, we attempt to simulate the spread of HIV in
Malawi based on a [publication](https://doi.org/10.1101/2020.12.23.20248757)
written by Janne Estill et al. The code is fairly general and interested users
should be able to apply the model to other countries with ease by using different
parameters. As of now, some key features, such as the disease progression, are
still missing, but they will be included in the near future.
## The agent-based model (in a nutshell)
In the curren setup, the agents have the following attributes:
* sex (male / female),
* location (categorical),
* age (number),
* health state (categorical),
* as well as biomedical and sociobehavioural riskfactors.
For each time step (year), BioDynaMo executes four behaviors for each agent
(a simplified sketch is given after the list). These are:
* Random migration to other locations
* Female agents in a certain age group can give birth (and possibly infect their
child)
* Agents choose partners at their respective location and can possibly infect
each other
* Agents get older, their risk factors change, the disease progresses, and under
certain circumstances the agents also die
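
To make the yearly update concrete, here is a minimal, self-contained Python sketch
of the model logic described above. It is only an illustration: the actual
implementation is written in C++ on top of BioDynaMo's agent and behavior classes
(see `person.h` and `person-behaviour.h`), and all state names, rates and
probabilities below are placeholder assumptions, not parameters of the real model.
Births, deaths and disease progression are omitted for brevity.

```python
import random

# Placeholder health states; the real model distinguishes more stages.
SUSCEPTIBLE, INFECTED = "susceptible", "infected"

class Person:
    """Toy agent with the attributes listed above: sex, location, age, health state."""
    def __init__(self, sex, location, age, state=SUSCEPTIBLE):
        self.sex = sex
        self.location = location
        self.age = age
        self.state = state

def yearly_step(population, locations, p_migrate=0.05, p_transmission=0.1):
    """One simulated year: random migration, partner choice/infection, ageing."""
    for person in population:
        # random migration to another location
        if random.random() < p_migrate:
            person.location = random.choice(locations)
        # agents get older (risk-factor updates, births and deaths omitted here)
        person.age += 1
    # partner choice and possible infection within each location
    by_location = {}
    for person in population:
        by_location.setdefault(person.location, []).append(person)
    for agents in by_location.values():
        random.shuffle(agents)
        for a, b in zip(agents[::2], agents[1::2]):
            if {a.state, b.state} == {SUSCEPTIBLE, INFECTED}:
                if random.random() < p_transmission:
                    a.state = b.state = INFECTED

if __name__ == "__main__":
    locations = list(range(3))
    population = [Person(random.choice("mf"), random.choice(locations),
                         random.randint(0, 60)) for _ in range(1000)]
    population[0].state = INFECTED
    for year in range(1960, 1970):
        yearly_step(population, locations)
    print(sum(p.state == INFECTED for p in population), "infected agents")
```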
At the end of the `README` you may find a brief overview of what you can expect
to find in which files of the repository.
## Preliminary results
While this project is still in the early development stages, the preliminary
results look promising, speeding up the previous solution dramatically. On the
current MacBook Pro (M1), a simulation of 60 years (1960-2020) starting with
3.6 million people and ending with roughly 22 million people took less than a
minute. Older notebooks may take some more time, for instance, a notebook with a
fairly outdated i7-4500U processor took roughly 3 minutes.
The result of the simulation is shown in the picture below. It roughly reproduces
the demography of Malawi but there are still severe limitations in place at this
stage.

## Current limitations
The repository is still work in progress. We believe that the code captures the
foundation of the model which is why we decided to post this repository
publicly hoping that other BioDynaMo users can possibly benefit from it.
Many modelling details from Estill et al. are still missing, e.g. the mating is
different, there is no disease progression, the death module is different, and
so on. We hope to add these details in the future.
## Credit
The repository is also inspired by the `epidemiology` demo of **BioDynaMo**
simulating an SIR base model. It was created by Lukas Breitwieser.
## Contact
For questions regarding BioDynaMo, contact the BioDynaMo team via
the official channels:
[slack](https://cernopenlab.slack.com/messages/biodynamo/) or
the [forum](https://forum.biodynamo.org).
For questions regarding the underlying
model, please consult Janne Estill et al.'s publication or contact
the authors directly.
# Compiling the source code
To compile the code, you'll need to have BioDynaMo installed.
## BioDynaMo installation
BioDynaMo can be installed in two ways. For our purpose, we need
features that are not included in `BioDynaMo's v1.0 release`, thus the default
installer is not recommended here. Instead, we recommend to build it from
source.
For more details, see [here](https://biodynamo.org/docs/devguide/build/).
Note that BioDyanMo supports Ubuntu, CentOS, and macOS at the moment.
You may want to check the OS dependent prerequisites for BioDynaMo
[here](https://biodynamo.org/docs/userguide/prerequisites/), especially if
you're using macOS.
Note that the following will install BioDynaMo in the
current folder, thus don't run it in the `hiv_malawi` folder. Consider choosing
your `home` folder.
To install BioDynaMo, run the following commands:
```bash
git clone https://github.com/BioDynaMo/biodynamo.git
cd biodynamo
# Install the prerequisites
./prerequisites.sh all
# Create the build directory
mkdir build
cd build
# Build BioDynaMo
cmake ..
make -j <number_of_processors_for_build>
```
We recommend to add the following command to your `.bashrc/.zshrc`:
```bash
alias thisbdm=". <path_to_biodynamo>/biodynamo/build/bin/thisbdm.sh"
```
Close and reopen your terminal, type `thisbdm` to source BioDynaMo and you're
good to proceed to the next step.
## Running the `hiv-malawi` simulation
Once BioDynaMo is installed, make sure it is sourced in the terminal window that
you want to use to run the `hiv_malawi` simulation. If it's sourced
correctly, you should see something like `[bdm-1.1.X]` in your terminal. If
not, run
```bash
. <path_to_biodynamo>/biodynamo/build/bin/thisbdm.sh
```
or `thisbdm` if you did set the alias as recommended above.
We're now ready to run the `hiv_malawi`. Therefore, please navigate to this
repository with your shell, i.e. `cd <some_path>/hiv_malawi`. Then run the
following commands:
```bash
mkdir build
cd build
cmake ..
make -j <number_of_processors_for_build>
./hiv_malawi
```
or even simpler:
```
bdm run
```
which basically executes the above steps in the background.
## Debugging guide
Generally, `gdb` and `lldb` are advised on Linux and macOS, respectively. For
developers using `VS Code`, we recommend the extension `CodeLLDB` by *Vadim
Chugunov*. We added a configuration file `.vscode/launch.json` to support this
way of debugging. To try it, please do the following:
1. Install the VS Code extension `CodeLLDB`
2. If it is necessary for you to debug BioDynaMo itself, compile it in the Debug
mode first. Generally, this is not necessary if you assume the bug appears
   in this repository.
```bash
cd <some_path>/biodynamo/build
make cleanbuild
cmake -DCMAKE_BUILD_TYPE=Debug .. && make -j<num_processors>
. bin/thisbdm.sh
```
3. Build this repository in the debug build.
```bash
cd <some_path>/hiv_malawi/build
rm -rf *
cmake -DCMAKE_BUILD_TYPE=Debug .. && make -j<num_processors>
```
4. Open your debug panel in VS Code (column on the very right) and click the
   green arrow "Launch Simulation".
5. Start the simulation by clicking play, use editor to set breakpoints etc.
Note: if you run on macOS, we recommend to add `-DCMAKE_CXX_FLAGS="-glldb"` to
the `cmake` command.
# Components of /src
The project contains header (.h) and source (.cc) files.
Typically, there's a header and a source file for each file name.
Sometimes, the header contains the entire implementation and we therefore
omit the source file.
In the following, you may find a high level description of what you'll
find in the different file names.
* **datatypes (.h)**
This header file contains some data types that are used all
over the simulation and are therefore of general importance.
* **sim-param (.h)**
In this header, we specify the simulation parameter of the simulation.
Especially, it contains the modelling parameters that can be changed
to model the different scenarios.
* **main (.h/.cc)**
This contains the main script, it's basically the starting point of the
program.
At the moment it's very simple, but there are extensions of which one may
think of. For that reason it's isolated already.
* **bdm-simulation (.h/.cc)**
Here, you'll find the core BioDynaMo simulation. It's of great interest to
understand what's happening here since it shows the basic structure of a
BioDynaMo simulation.
* **categorical-environment (.h/.cc)**
For the case at hand, we had to design a custom environment,
basically the world in which the agents live in.
It stores global information, such that agents know which
other agents are at their specific location.
* **person (.h)**
This header specifies the properties of a single agent.
* **person-behaviour (.h)**
This header specifies how an agent behaves in its environment,
i.e. how it updates it's parameters in every step of the simulation.
* **population-initialization (.h/.cc)**
When we start a simulation, we want to have a heterogeneous population
representing a real country. We need different sexes, with different
ages at different locations. Here, we define the necessary functions.
* **stdout-utils (.h/.cc)**
Some print statements that are not of great importance.
* **visualize (.h/.cc)**
Contains functions for visualizing the simulation results with ROOT,
a famous CERN package integrated in BioDynaMo. | 36.8107 | 81 | 0.762325 | eng_Latn | 0.997278 |
dbb57bedebde1ea1c3fde06f1cc11f08e53ab27a | 6,415 | md | Markdown | README.md | shakti-menon/vaccination-game | 846a9dffe7762b410b68a271319b158b1e75676f | [
"Xnet",
"X11"
] | null | null | null | README.md | shakti-menon/vaccination-game | 846a9dffe7762b410b68a271319b158b1e75676f | [
"Xnet",
"X11"
] | null | null | null | README.md | shakti-menon/vaccination-game | 846a9dffe7762b410b68a271319b158b1e75676f | [
"Xnet",
"X11"
] | null | null | null | # Epidemic prevalence information on social networks can mediate emergent collective outcomes in voluntary vaccine schemes
[](https://doi.org/10.1371/journal.pcbi.1006977)
This repository contains data for Figures 2-4 of the manuscript:
> Sharma A, Menon SN, Sasidevan V and Sinha S (2019) Epidemic prevalence information on social networks can mediate emergent collective outcomes in voluntary vaccine schemes. _PLoS Comput Biol_ <b>15</b>(5): e1006977.
> https://doi.org/10.1371/journal.pcbi.1006977
The data is in the form of ```.mat``` files (which can be opened in MATLAB).
*Note*: Data for the empirical social networks used in Figures 2(a-b) and 3(d) are openly accessible as part of the published article:
Banerjee A, Chandrasekhar AG, Duflo E and Jackson MO (2013) The diffusion of microfinance. _Science_ <b>341</b>(6144): 1236498 https://doi.org/10.1126/science.1236498
and can be directly downloaded from the Harvard Dataverse Repository [here](https://dataverse.harvard.edu/dataset.xhtml?persistentId=hdl:1902.1/21538).
The following table provides a description of the variables saved in the data files:
| Variable | Description |
| --- | --- |
| t | time instant |
| S(t) | the number of susceptible agents at time t |
| I(t) | the number of infected agents at time t |
| R(t) | the number of recovered agents at time t |
| V(t) | the number of vaccinated agents at time t |
| I<sub>c</sub>(t) | the cumulative number of infected agents at time t |
| inf<sub>∞</sub> | final fraction of infected agents |
| vac<sub>∞</sub> | final fraction of vaccinated agents |
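
For readers who prefer to inspect the data outside MATLAB, the ```.mat``` files can also be read with SciPy. The snippet below is only an illustrative sketch: it assumes SciPy is installed and uses the file and variable names described in the folder listings that follow (e.g. ```TimeSeries_Vill55_alpha0.mat``` containing the matrix ```S_rec```); adjust the path to your local copy of the data.

```python
from scipy.io import loadmat

# Load one of the time-series files described below (the path is an assumption).
data = loadmat("Figure_2/TimeSeries_Vill55_alpha0.mat")
S_rec = data["S_rec"]          # 692 x 6 matrix: t, S(t), I(t), R(t), V(t), Ic(t)

t, S, I, R, V, Ic = S_rec.T    # one 1-D array per column
print("time steps:", len(t))
print("final vaccinated count:", V[-1])
print("final cumulative infections:", Ic[-1])
```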
## Contents of folder **Figure_2**
1. ```TimeSeries_Vill55_alpha0.mat```: Data for Fig. 2(a) and the left panel of Fig. 2(c)
This file contains a matrix ```S_rec``` of dimensions 692x6 that stores time series data for a single simulation on village network #55 for the case α = 0. The 6 columns correspond to: t, S(t), I(t), R(t), V(t) and I<sub>c</sub>(t).
2. ```TimeSeries_Vill55_alpha1.mat```: Data for Fig. 2(b) and the right panel of Fig. 2(c)
This file contains a matrix ```S_rec``` of dimensions 924x6 that stores time series data for a single simulation on village network #55 for the case α = 1. The 6 columns correspond to: t, S(t), I(t), R(t), V(t) and I<sub>c</sub>(t).
3. Subfolder ```vill55_network```: Data for Fig. 2(d) and Fig. 2(e)
This folder contains files for simulations on village network #55 for the cases α = 0 and α = 1. The file names are ```vill55_qX_alphaY.mat``` where X is the value of β and Y is the value of α (0 or 1). Each ```.mat``` file contains a matrix ```datavn``` of dimensions 2x1000 that contains data for inf<sub>∞</sub> and vac<sub>∞</sub> over 1000 simulation runs.
## Contents of folder **Figure_3**
1. ```TimeSeries_ER_alpha0.mat```: Data for the left panel of Fig. 3(a)
This file contains a matrix ```S_rec``` of dimensions 885x6 that stores time series data for a single simulation on an Erdős-Rényi network of size 1024 for the case α = 0. The 6 columns correspond to: t, S(t), I(t), R(t), V(t) and I<sub>c</sub>(t).
2. ```TimeSeries_ER_alpha1.mat```: Data for the right panel of Fig. 3(a)
This file contains a matrix ```S_rec``` of dimensions 1103x6 that stores time series data for a single simulation on an Erdős-Rényi network of size 1024 for the case α = 1. The 6 columns correspond to: t, S(t), I(t), R(t), V(t) and I<sub>c</sub>(t).
3. Subfolder ```ER_network```: Data for Fig. 3(b) and Fig. 3(c)
This folder contains files for simulations on Erdős-Rényi networks of size 1024 for the case of local information (α = 0, ```loc_*.mat```) and global information (α = 1, ```glo_*.mat```). The file names are ```X_glsp_qY.mat```, where X is "loc" or "glo" and Y is the value of β. Each ```.mat``` file contains a matrix ```dataq``` of dimensions 2x1000 that contains data for inf<sub>∞</sub> and vac<sub>∞</sub> over 1000 simulation runs.
4. Subfolder ```Kavg_ERN_KVN```: Data for Fig. 3(d)
This folder contains files for simulations on Erdős-Rényi networks of size 1024 as well as on empirical village networks.
For the case of Erdős-Rényi networks, the file names are ```glsp_qX_kY_alphaZ.mat```, where X is the value of β, Y is the average degree of the network used and Z is the value of α (0 or 1). Each ```.mat``` file contains a matrix ```dataq``` of dimensions 2x1000. This contains data for inf<sub>∞</sub> and vac<sub>∞</sub> over 1000 simulation runs.
For the case of village networks, the file names are ```villX_qY_alphaZ.mat```, where X is the village id, Y is the value of β and Z is the value of α (0 or 1). Each ```.mat``` file contains a matrix ```datavn``` of dimensions 2x1000 that contains data for inf<sub>∞</sub> and vac<sub>∞</sub> over 1000 simulation runs.
## Contents of folder **Figure_4**
1. Subfolder ```ER_All_system_sizes```: Data for Fig. 4(a)
This folder contains simulations on Erdős-Rényi networks for a range of system sizes (multiples of 1024), for the case of local and global information. The file names are ```Xn_alphaY_qZ.mat```, where X is the multiple of 1024 that specifies the system size (e.g. X="02" corresponds to a network of size 2\*1024=2048), Y is the value of α (0 or 1) and Z is the value of β. Each ```.mat``` file contains a matrix ```dataq``` of dimensions 2x1000 that contains data for inf<sub>∞</sub> and vac<sub>∞</sub> over 1000 simulation runs.
2. Subfolder ```ER_16N```: Data for Fig. 4(b) and Fig. 4(c)
This folder contains simulations on Erdős-Rényi networks for a fixed system size (16\*1024=16384 agents) and over a range of values of α. The file names are ```16n_alphaX_qY.mat```, where X is the value of α (0 or 1) and Y is the value of β. Each ```.mat``` file contains a matrix ```dataq``` of dimensions 2x1000 that contains data for inf<sub>∞</sub> and vac<sub>∞</sub> over 1000 simulation runs. An additional 1000 simulation runs are provided in the files ```16n2_alphaX_qY.mat```, where X is the value of α (0 or 1) and Y is the value of β.
| 85.533333 | 598 | 0.71629 | eng_Latn | 0.985506 |
dbb5a6b7eacfeebdd5b77f683fe76c0953da6690 | 743 | md | Markdown | sessions/azure-und-microsoft-365.md | coding-club-linz/global-azure-bootcamp-2018 | 8cb1372441a4b5458eaed94e1d4b5b5b8a5b3eea | [
"MIT"
] | null | null | null | sessions/azure-und-microsoft-365.md | coding-club-linz/global-azure-bootcamp-2018 | 8cb1372441a4b5458eaed94e1d4b5b5b8a5b3eea | [
"MIT"
] | null | null | null | sessions/azure-und-microsoft-365.md | coding-club-linz/global-azure-bootcamp-2018 | 8cb1372441a4b5458eaed94e1d4b5b5b8a5b3eea | [
"MIT"
] | 4 | 2018-02-21T20:00:07.000Z | 2022-03-17T14:53:40.000Z | ---
layout: session
page-category: session
title: Azure und Microsoft 365 ganz im Vertrauen
speaker: Martina Grom
speaker-id: martina-grom
room: '10.07'
slot: 6
---
Cybersecurity spielt in der IT-Strategie vieler Unternehmen eine wichtige Rolle. Mit Azure und Microsoft 365 können Unternehmen die Datensicherheit und Compliance in Ihrem Unternehmen erhöhen. In dieser Session zeige ich, welche Sicherheitsfunktionen Azure und Microsoft 365 in Kombination bieten und wie wir Daten mit Azure Information Protection bei unseren Kunden nutzen, um relevante Daten zu schützen und um Zugriffe zu ermöglichen und zu kontrollieren. Zusätzlich gibt es weitere neuen Tools, um höchste Sicherheitsstandards im AAD Tenant umzusetzen. Sicherheit geht vor! | 67.545455 | 577 | 0.823688 | deu_Latn | 0.99696 |
dbb5a9ec1fb7e369974dfc7a84541b768cbdd554 | 18 | md | Markdown | README.md | JLhandsoffB/A-boxer | f83afb626cfd4c7da4ee433ddd9911d7adbba6a0 | [
"MIT"
] | null | null | null | README.md | JLhandsoffB/A-boxer | f83afb626cfd4c7da4ee433ddd9911d7adbba6a0 | [
"MIT"
] | null | null | null | README.md | JLhandsoffB/A-boxer | f83afb626cfd4c7da4ee433ddd9911d7adbba6a0 | [
"MIT"
] | null | null | null | # A-boxer
A boxer
| 6 | 9 | 0.666667 | glg_Latn | 0.904072 |
dbb62dbd60fa1a112f6cc41af7f723e5caaf512d | 299 | md | Markdown | python/python-environment.md | patevs/TIL | f08fd7d35a02706307a063cc9f3c40d71d946f36 | [
"MIT"
] | 1 | 2022-01-24T08:20:32.000Z | 2022-01-24T08:20:32.000Z | python/python-environment.md | patevs/TIL | f08fd7d35a02706307a063cc9f3c40d71d946f36 | [
"MIT"
] | 6 | 2020-12-14T22:21:27.000Z | 2021-09-21T06:36:39.000Z | python/python-environment.md | patevs/TIL | f08fd7d35a02706307a063cc9f3c40d71d946f36 | [
"MIT"
] | null | null | null | ---
title: Python Environment
tags: [Notebooks/Python, Python]
created: '2019-03-05'
modified: '2019-06-26'
---
# Python Environment
How to manage your python environment.
## Links & Resources
* https://docs.python-guide.org/dev/virtualenvs/
* https://docs.python.org/3/tutorial/venv.html
----
| 16.611111 | 48 | 0.712375 | kor_Hang | 0.304359 |
dbb6318719f2db3177587cfec43f099e85d98272 | 1,111 | md | Markdown | docs/zh-hans/components/comp_PSS/comp_PSSelectrical/BasicPassiveComp/CapacitorWithoutInitValue/index.md | CloudPSS/docs | 8bb06e23d55d1f6e1acd3dbe9638ad9c7e8f317c | [
"MIT"
] | 1 | 2021-07-30T14:25:55.000Z | 2021-07-30T14:25:55.000Z | docs/zh-hans/components/comp_PSS/comp_PSSelectrical/BasicPassiveComp/CapacitorWithoutInitValue/index.md | CloudPSS/docs | 8bb06e23d55d1f6e1acd3dbe9638ad9c7e8f317c | [
"MIT"
] | null | null | null | docs/zh-hans/components/comp_PSS/comp_PSSelectrical/BasicPassiveComp/CapacitorWithoutInitValue/index.md | CloudPSS/docs | 8bb06e23d55d1f6e1acd3dbe9638ad9c7e8f317c | [
"MIT"
] | 1 | 2021-11-03T00:31:55.000Z | 2021-11-03T00:31:55.000Z | ---
title: 电容
author:
author_email:
date: 2018/12/4 10:03:10
updated: 2018/12/4 10:03:10
type: components
category: 3001
order: 402
classname: newCapacitorRouterWithInitValue
symbol: newCapacitorRouterWithInitValue
---
## 基本描述
<!--  -->
> **该元件用以建模不带初始电压的单相或三相电容(单线图)。**
## 参数列表
### Configuration
| 参数名 | 单位 | 备注 | 类型 | 描述 |
| :--- | :--- | :--- | :--: | :--- |
| Name | | 元件名称 | 文本 | 此处输入电容的名称(可缺省) |
| Dimension | | 单相电容或是三相电容 | 选择 | 选择电容为单相或三相 |
| Capacitance | μF | 电容值 | 实数(常量) | 电容值 |
### Configuration-SFEMT
| 参数名 | 备注 | 类型 | 描述 |
| :--- | :--- | :--: | :--- |
| Numerical Integration method | 数字积分方法选择 | 梯形积分方法/根匹配方法 | 移频暂态仿真设置 |
### Monitoring
| 参数名 | 备注 | 类型 | 描述 |
| :--- | :--- | :--: | :--- |
| Branch Current \[kA\] | 电容电流 | 文本 | 此处输入电容电流量测信号的标签(维数自动),以#号开头,如#Ic |
| Branch Voltage \[kV\] | 电容电压 | 文本 | 此处输入电容电压量测信号的标签(维数自动),以#号开头,如#Vc |
## 端口列表
| 端口名 | 数据维数 | 描述 |
| :--- | :--: | :--- |
| Pin + | 自动 |电容正端(参考方向)|
| Pin - | 自动 |电容负端(参考方向)|
## 使用说明
## 相关元件
[电感](../Inductor/index.md)、[电阻](../Resistor/index.md)、[电容(初值)](../CapacitorWithInitValue/index.md)
| 20.2 | 98 | 0.557156 | yue_Hant | 0.6057 |
dbb6478f399ce47a4c4d732f8661cb0ea5fe88d0 | 3,940 | md | Markdown | CONTRIBUTING.md | Sam-Spencer/SwiftyStoreKit | e89745e22a30ba6612dd35e15627d6c901905095 | [
"MIT"
] | 6,268 | 2015-09-03T20:43:28.000Z | 2022-03-30T12:34:29.000Z | CONTRIBUTING.md | Sam-Spencer/SwiftyStoreKit | e89745e22a30ba6612dd35e15627d6c901905095 | [
"MIT"
] | 582 | 2015-09-21T17:07:41.000Z | 2022-03-10T04:22:44.000Z | CONTRIBUTING.md | Sam-Spencer/SwiftyStoreKit | e89745e22a30ba6612dd35e15627d6c901905095 | [
"MIT"
] | 819 | 2015-09-22T09:18:12.000Z | 2022-03-31T03:05:27.000Z | # Contributing to SwiftyStoreKit
### All contributions to SwiftyStoreKit are welcome. 😎
This project is becoming widely adopted and its growth is now limited by the time the main maintainer can allocate.
Going forward, the aim is to **transfer some of the maintainance and development effort to the community**.
If you want to help developing SwiftyStoreKit, please look for issues marked with a blue **contributions welcome** label. See [this issue](https://github.com/bizz84/SwiftyStoreKit/issues/192) for an example.
The maintainer will use this label initially for simple tasks that are appropriate for beginners and first time contributors.
As the project and its community grows:
* intermediate and advanced tasks will be opened up to contributors
* most experienced contributors will be able to gain **admin** rights to review and merge pull requests
**Note**: While the maintainer(s) try to regularly keep the project alive and healthy, issues and pull requests are not always reviewed in a timely manner. 🕰
## Scope
SwiftyStoreKit aims to be a lightweight wrapper on top of [StoreKit](https://developer.apple.com/reference/storekit).
While SwiftyStoreKit offers access to the [local receipt data](https://developer.apple.com/reference/foundation/bundle/1407276-appstorereceipturl), it is a non-goal to add support for persisting IAP data locally. It is up to clients to do this with a storage solution of choice (i.e. NSUserDefaults, CoreData, Keychain).
**Swift Version**: SwiftyStoreKit includes [Swift 2.3](https://github.com/bizz84/SwiftyStoreKit/tree/swift-2.3) and [Swift 2.2](https://github.com/bizz84/SwiftyStoreKit/tree/swift-2.2) branches. These legacy versions are no longer maintained and all active development happens on [master](https://github.com/bizz84/SwiftyStoreKit) and [develop](https://github.com/bizz84/SwiftyStoreKit/tree/develop), which support Swift 3.x and Swift 4.x.
**Objective-C**: Currently, SwiftyStoreKit cannot be used in Objective-C projects. The main limitation is that most classes and types in the library are Swift-only. See [related issue](https://github.com/bizz84/SwiftyStoreKit/issues/123).
## Pull requests
The project uses [gitflow](http://nvie.com/posts/a-successful-git-branching-model/) as a branching model.
In short:
* All pull requests for **new features** and **bug fixes** should be made into the `develop` branch.
* Pull requests for **hot fixes** can be done into both `master` and `develop`.
* The maintainer(s) will merge `develop` into `master` and create a release tag as new features are added.
* All releases [can be found here](https://github.com/bizz84/SwiftyStoreKit/releases).
## Open Features / Enhancement Requests
These are intermediate / advanced tasks that will hopefully be implemented in the future:
### Local Receipt validation
SwiftyStoreKit offers a reference implementation for [receipt validation with Apple](https://github.com/bizz84/SwiftyStoreKit/blob/master/SwiftyStoreKit/AppleReceiptValidator.swift).
This could be extended by implementing local receipt validation as recommended by Apple. See [related issue](https://github.com/bizz84/SwiftyStoreKit/issues/101).
### Support for content hosted by Apple for non-consumable products
See [related issue](https://github.com/bizz84/SwiftyStoreKit/issues/128).
### Increase unit test coverage
The payment flows are unit tested fairly extensively. Additional unit test coverage is welcome:
- [ ] Dependency injection for SwiftyStoreKit dependencies
- [ ] Unit tests on main [SwiftyStoreKit class](https://github.com/bizz84/SwiftyStoreKit/blob/master/SwiftyStoreKit/SwiftyStoreKit.swift).
- [ ] Unit tests for receipt verification code.
See [related issue](https://github.com/bizz84/SwiftyStoreKit/issues/38).
## Issues
If SwiftyStoreKit doesn't work as you expect, please review [any open issues](https://github.com/bizz84/SwiftyStoreKit/issues) before opening a new one.
| 56.285714 | 439 | 0.785279 | eng_Latn | 0.979445 |
dbb6b0f7327d5ae0233885f0141b97e99002db01 | 424 | md | Markdown | swarm_network_constitution/README.md | SwarmTokenAddict/swarm-network-governance | f629fd3114686d7e4c5b860c5160d2534693e8ad | [
"MIT"
] | null | null | null | swarm_network_constitution/README.md | SwarmTokenAddict/swarm-network-governance | f629fd3114686d7e4c5b860c5160d2534693e8ad | [
"MIT"
] | 8 | 2019-07-31T19:18:57.000Z | 2019-08-07T08:25:22.000Z | swarm_network_constitution/README.md | SwarmTokenAddict/swarm-network-governance | f629fd3114686d7e4c5b860c5160d2534693e8ad | [
"MIT"
] | 6 | 2019-07-31T19:16:29.000Z | 2019-08-07T07:52:55.000Z | ## Swarm Network Constitution
This repository contains all versions of the Swarm Network Consitution.
Please read [Getting Started](https://github.com/swarmfund/networkgovernance/blob/master/docs/getting-started.md) to learn about how you can participate in the Swarm Governance Proposal process.
You can find more information about Swarm governance in the [Swarm Network Governance Guide](https://docs.swarmnetwork.org)
| 53 | 194 | 0.818396 | eng_Latn | 0.977635 |
dbb6c6a568de3426c597298221c760757c60c396 | 500 | markdown | Markdown | src/content/en/updates/posts/2011/10/WebSockets-updated-to-latest-version-in-Chrome-Canary.markdown | sshyran/WebFundamentals | 5556a0756b410de95e9547b78bce6d7310b836e6 | [
"Apache-2.0"
] | 4 | 2017-04-04T04:51:09.000Z | 2022-02-10T17:10:28.000Z | src/content/en/updates/posts/2011/10/WebSockets-updated-to-latest-version-in-Chrome-Canary.markdown | sshyran/WebFundamentals | 5556a0756b410de95e9547b78bce6d7310b836e6 | [
"Apache-2.0"
] | 3 | 2021-05-20T18:33:25.000Z | 2022-02-26T08:31:45.000Z | src/content/en/updates/posts/2011/10/WebSockets-updated-to-latest-version-in-Chrome-Canary.markdown | sshyran/WebFundamentals | 5556a0756b410de95e9547b78bce6d7310b836e6 | [
"Apache-2.0"
] | 2 | 2017-07-20T22:00:47.000Z | 2020-01-22T08:18:27.000Z | ---
layout: updates/post
title: "WebSockets updated to latest version in Chrome Canary"
published_on: 2011-10-14
updated_on: 2011-10-14
authors:
- ericbidelman
tags:
- news
- websockets
---
The WebSocket API has been rev'd to the latest version (13) in Chrome Canary. The developer-facing changes are very small, but are incompatible with the older version.
Here's the scoop:
* Change the origin header name: `Sec-WebSocket-Origin` -> `Origin`
* `Sec-WebSocket-Version` header value: 8 -> 13
| 27.777778 | 167 | 0.744 | eng_Latn | 0.949445 |
dbb72d5202f1c45b0873d88b61a71b92926664e8 | 3,838 | md | Markdown | meta/ro/16-3-2-3.md | statisticamd/open-sdg-data-starter | 660d4fe34d0778e1dc6db10a7257f605cd7c6542 | [
"MIT"
] | 1 | 2019-05-29T07:15:50.000Z | 2019-05-29T07:15:50.000Z | meta/ro/16-3-2-3.md | statisticamd/open-sdg-data-starter | 660d4fe34d0778e1dc6db10a7257f605cd7c6542 | [
"MIT"
] | 1 | 2020-08-10T13:25:05.000Z | 2020-08-10T13:25:05.000Z | meta/ro/16-3-2-3.md | statisticamd/open-sdg-data-starter | 660d4fe34d0778e1dc6db10a7257f605cd7c6542 | [
"MIT"
] | 2 | 2020-01-09T15:38:19.000Z | 2020-08-10T12:13:42.000Z | ---
data_non_statistical: false
goal_meta_link: https://unstats.un.org/sdgs/metadata/files/Metadata-16-03-02.pdf
goal_meta_link_text: United Nations Sustainable Development Goals Metadata (PDF 209
KB)
graph_type: line
layout: indicator
published: true
sdg_goal: '16'
target_id: '16.3'
un_custodian_agency: United Nations Office on Drugs and Crime (UNODC)
un_designated_tier: '1'
data_show_map: false
source_active_1: true
source_url_text_1: Link to source
source_active_2: false
source_url_text_2: Link to Source
source_active_3: false
source_url_3: Link to source
source_active_4: false
source_url_text_4: Link to source
source_active_5: false
source_url_text_5: Link to source
source_active_6: false
source_url_text_6: Link to source
title: Untitled
target: >-
Promovarea statului de drept la nivel național și internațional și asigurarea accesului egal la justiție pentru toți
indicator_name: >-
Ponderea cazurilor pierdute la CEDO, din numărul cererilor comunicate în fiecare an
indicator: >-
16.3.2.3
permalink: >-
16-3-2-3
indicator_sort_order: >-
16-03-02-03
reporting_status: >-
complete
previous: 16-3-2-2
next: 16-3-2-4
tags:
- custom.national
national_indicator_available: >-
16.3.2.3 Ponderea cazurilor pierdute la CEDO, din numărul cererilor comunicate în fiecare an
computation_calculations: >-
Ponderea cazurilor pierdute la CEDO, din numărul cererilor comunicate în fiecare an = Numărul de hotărâri prin care CEDO a recunoscut cel puțin o încălcare a Convenției de către Republica Moldova plus numărul de decizii privind radierea de pe rol a cauzei în urma încheierii unui acord de soluționare amiabilă sau a prezentării unei declarații unilaterale de către Guvern, raportat la numărul de cauze comunicate în fiecare an *100%.
computation_definitions: >-
Orice persoană care se consideră lezată într-un drept al său garantat de Convenția Europeană poate depune o cerere la CEDO. Pentru depunerea cereri la Curte trebuie întrunite anumite condiții stabilite în art. 34 și art. 35 ale Convenției (ex: epuizarea căilor interne de recurs; respectarea termenului de 6 luni; să se refere la un drept prevăzut de Convenție; etc.).Cererile înregistrate sunt examinate de către Curte imediat ce este posibil. Datorită complexității procedurii și numărului mare de cereri pe rol, procedura de examinare a unei cereri durează între 3 și 7 ani. Orice decizie pronunțată de CEDO este definitivă și nu poate fi contestată, despre ce este informat reclamantul și Guvernul. Comitetul de Miniștri al Consiliului Europei se ocupă de supravegherea executării hotărârilor Curții de către Guverne. Agentul guvernamental - reprezintă Republica Moldova la Curtea Europeană și contribuie, în modul prevăzut de lege, la asigurarea executării hotărârilor și deciziilor Curții Europene în cauzele îndreptate împotriva Republicii Moldova. De asemenea acesta reprezintă țara în calitate de expert la sesiunile plenare ale comitetelor interguvernamentale ale Comitetului de Miniștri al Consiliului Europei pe dimensiunea drepturilor omului, coordonând-și acțiunile cu Ministerul Afacerilor Externe și Integrării Europene (art. 5 din Legea nr.151 din 30.07.2015 cu privire la Agentul guvernamental).
computation_units: Procent, %
national_geographical_coverage: Republica Moldova
graph_title: >-
16.3.2.3 Ponderea cazurilor pierdute la CEDO, din numărul cererilor comunicate în fiecare an
source_data_source_1: >-
1) [Rapoartele anuale ale CEDO](https://www.echr.coe.int/Documents/Annual_report_2018_ENG.pdf)<br>
2) date statistice ale Direcției agent guvernamental, MJ
source_data_supplier_1: >-
1) Curtea Europeană pentru Drepturile Omului<br>
2) Ministerul Justiției
source_organisation_1: >-
Ministerul Justiției
source_responsible_monitoring_1: >-
Ministerul Justiției
source_periodicity_1: >-
anual
---
| 58.151515 | 1,418 | 0.814226 | ron_Latn | 0.999982 |
dbb777e19a8a655566fd26f09da43d5c143ff25c | 1,568 | md | Markdown | Examples/Image/Classification/VGG/README.md | Wootai/CNTK | 5eca042341c8152594e67652a44c3b733a2acaa0 | [
"RSA-MD"
] | 5 | 2017-08-28T08:27:18.000Z | 2021-04-20T21:12:52.000Z | Examples/Image/Classification/VGG/README.md | zhuyawen/CNTK | 0ee09cf771bda9d4912790e0fed7322e89d86d87 | [
"RSA-MD"
] | null | null | null | Examples/Image/Classification/VGG/README.md | zhuyawen/CNTK | 0ee09cf771bda9d4912790e0fed7322e89d86d87 | [
"RSA-MD"
] | 3 | 2019-08-23T11:42:14.000Z | 2022-01-06T08:41:32.000Z | # CNTK Examples: Image/Classification/VGG
## Overview
|Data: |The ILSVRC2012 dataset (http://www.image-net.org/challenges/LSVRC/2012/) for image classification.
|:---------|:---
|Purpose |This folder contains examples that demonstrate how to use CNTK to define VGG network (https://arxiv.org/abs/1409.1556) for image classification.
|Network |VGG.
|Training |Stochastic gradient descent with momentum.
|Comments |See below.
## Running the example
### Getting the data
We use the ILSVRC2012 datasets to demonstrate how to train the VGG model, which was developed by the [Visual Geometry Group at the University of Oxford](http://www.robots.ox.ac.uk/~vgg/research/very_deep/). It won second place in the ILSVRC-2014 challenge. VGG has been a very popular model for its simple architecture and high accuracy.
The ILSVRC2012 datasets are not included in the CNTK distribution. You may obtain them through http://image-net.org.
## Details
We give examples for both Python and BrainScript.
### [Python](./Python)
### [BrainScript](./BrainScript)
## Pre-trained Models
### Caffe-Converted
#### VGG16
|CNTK model download path | https://www.cntk.ai/Models/Caffe_Converted/VGG16_ImageNet.model
|:---------|:---
|Source Caffe model website | http://www.robots.ox.ac.uk/~vgg/research/very_deep/
|Single crop top 5 error | 10.11%
#### VGG19
|CNTK model download path | https://www.cntk.ai/Models/Caffe_Converted/VGG19_ImageNet.model
|:---------|:---
|Source Caffe model website | http://www.robots.ox.ac.uk/~vgg/research/very_deep/
|Single crop top 5 error | 10.18%
| 37.333333 | 333 | 0.732143 | eng_Latn | 0.600255 |
dbb7dd1f208d9af6dadd6e67ca5922a7273e1d3d | 4,068 | md | Markdown | _posts/2017/10/2017-10-12-mentoring-wrap-up.md | tobyhodges/website | 3fdef772de10137bcc461618cd6fb3b220b80c04 | [
"MIT"
] | 41 | 2015-04-30T09:37:35.000Z | 2021-09-16T23:49:08.000Z | _posts/2017/10/2017-10-12-mentoring-wrap-up.md | tobyhodges/website | 3fdef772de10137bcc461618cd6fb3b220b80c04 | [
"MIT"
] | 607 | 2015-03-15T23:08:35.000Z | 2022-02-01T23:26:18.000Z | _posts/2017/10/2017-10-12-mentoring-wrap-up.md | tobyhodges/website | 3fdef772de10137bcc461618cd6fb3b220b80c04 | [
"MIT"
] | 328 | 2015-11-21T13:26:40.000Z | 2021-05-01T16:02:01.000Z | ---
layout: post
authors: ["Erin Becker", "Kari L. Jordan", "Tracy Teal", "Christina Koch"]
title: "Carpentries Mentorship Program - 2.0"
date: 2017-10-12
time: "00:00:00"
tags: [ "Community", "Mentoring", "Community Building"]
---
### We're starting a new round of mentoring groups, centered on specific lessons
Mentorship is an important part of the Carpentry experience. As Instructors, we both teach and mentor our Learners. We also mentor each other as Instructors, learning something new from each other every time we teach and interact with one another. The Mentoring Subcommittee offers guidance to new and continuing Instructors through [weekly discussion sessions](http://pad.software-carpentry.org/instructor-discussion), where Instructors from the global Carpentry community gather to share their experiences and learn from each other. This is a fantastic opportunity to interact with other Carpentry Instructors from around the world.
Many in the Carpentry community have expressed interest in having more extensive and longer-lasting opportunities for mentorship. Based on this, we ran a pilot version of a new Mentorship Program, starting in January 2017. Nearly 100 Carpentry Instructors participated in the program, with 58 Mentees and 34 Mentors in 18 small groups. Groups were put together based on a variety of factors, including common teaching interests and geographies. These groups met once a month to discuss topics of interest to the group members and to help Mentees prepare for their first workshop.
In June 2017, we asked participants in the pilot program for their feedback. Participants said that they enjoyed the opportunity to share and learn from each others' experiences and expertise. They also reported that the experience enabled them to get involved with the Carpentry community and to network with Carpentry Instructors at other institutions. When asked about negative aspects of the program, many participants reported difficulty scheduling meetings with their groups as well as a lack of focus and difficulty in deciding topics to discuss within their groups. Many participants offered concrete suggestions on how the program could be improved, including:
- offering more guidance to mentorship groups on what to do during the program
- assigning groups specifically around common interests and goals
- enabling more integration and communication among groups.
As with any pilot program, one of the goals of this program was to identify aspects that could be improved, based on the shared experiences of the participants, so we are very grateful for the feedback we received.
We listened to your feedback and have made changes to the program. We are now offering curriculum-specific mentoring: both mentors and mentees can choose which tools they are most interested in discussing from the following list:
- Git
- Shell
- Python
- R
- SQL
Additionally, groups will focus on either lesson maintenance, teaching workshops, organizing workshops, or community building. This program will run from October 25th to January 10th, 2018, and will culminate in a Virtual Showcase, in which groups will share their work with the broader Carpentry community.
So far, 18 people have signed up to participate in this round of mentoring groups. Applications close October 18th, so don't wait to apply to either be a [mentor](https://docs.google.com/forms/d/e/1FAIpQLSeXy0994S0wy0IYi6Nv1HF9cwENsiSFLy8-2E_RI803M9zCzw/viewform?usp=send_form) or [mentee](https://docs.google.com/forms/d/e/1FAIpQLScA9sfmM1gJhkJEn5GDpowUu_QSV-7gDrTCoWHoLOvdukuVBw/viewform).
Get involved by attending one of the information sessions being held October 12th at 06:00 UTC and 21:00 UTC. Sign up to attend on the [etherpad](http://pad.software-carpentry.org/mentorship-info). You can also join the conversation by tweeting [@datacarpentry](https://twitter.com/datacarpentry) and [@swcarpentry](https://twitter.com/swcarpentry) using the hashtag [#carpentriesmentoring](https://twitter.com/search?q=%23CarpentriesMentoring&src=tyah).
| 101.7 | 670 | 0.805556 | eng_Latn | 0.999057 |
dbb83c2ad8ebbd1a65621ede64882e1936aecfeb | 12,728 | md | Markdown | README_CN.md | DoedKr/Real-ESRGAN | 38c913f1afac754f38c206600412b4d73b206aca | [
"BSD-3-Clause"
] | null | null | null | README_CN.md | DoedKr/Real-ESRGAN | 38c913f1afac754f38c206600412b4d73b206aca | [
"BSD-3-Clause"
] | null | null | null | README_CN.md | DoedKr/Real-ESRGAN | 38c913f1afac754f38c206600412b4d73b206aca | [
"BSD-3-Clause"
] | null | null | null | <p align="center">
<img src="assets/realesrgan_logo.png" height=120>
</p>
## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>
[](https://github.com/xinntao/Real-ESRGAN/releases)
[](https://pypi.org/project/realesrgan/)
[](https://github.com/xinntao/Real-ESRGAN/issues)
[](https://github.com/xinntao/Real-ESRGAN/issues)
[](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE)
[](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml)
[](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml)
:fire: 更新动漫视频的小模型 **RealESRGAN AnimeVideo-v3**. 更多信息在 [[动漫视频模型介绍](docs/anime_video_model.md)] 和 [[比较](docs/anime_comparisons_CN.md)] 中.
1. Real-ESRGAN的[Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) | Real-ESRGAN**动漫视频** 的[Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing)
2. **支持Intel/AMD/Nvidia显卡**的绿色版exe文件: [Windows版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [macOS版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip),详情请移步[这里](#便携版(绿色版)可执行文件)。NCNN的实现在 [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)。
Real-ESRGAN 的目标是开发出**实用的图像/视频修复算法**。<br>
我们在 ESRGAN 的基础上使用纯合成的数据来进行训练,以使其能被应用于实际的图片修复的场景(顾名思义:Real-ESRGAN)。
:art: Real-ESRGAN 需要,也很欢迎你的贡献,如新功能、模型、bug修复、建议、维护等等。详情可以查看[CONTRIBUTING.md](docs/CONTRIBUTING.md),所有的贡献者都会被列在[此处](README_CN.md#hugs-感谢)。
:milky_way: 感谢大家提供了很好的反馈。这些反馈会逐步更新在 [这个文档](docs/feedback.md)。
:question: 常见的问题可以在[FAQ.md](docs/FAQ.md)中找到答案。(好吧,现在还是空白的=-=||)
---
如果 Real-ESRGAN 对你有帮助,可以给本项目一个 Star :star: ,或者推荐给你的朋友们,谢谢!:blush: <br/>
其他推荐的项目:<br/>
:arrow_forward: [GFPGAN](https://github.com/TencentARC/GFPGAN): 实用的人脸复原算法 <br>
:arrow_forward: [BasicSR](https://github.com/xinntao/BasicSR): 开源的图像和视频工具箱<br>
:arrow_forward: [facexlib](https://github.com/xinntao/facexlib): 提供与人脸相关的工具箱<br>
:arrow_forward: [HandyView](https://github.com/xinntao/HandyView): 基于PyQt5的图片查看器,方便查看以及比较 <br>
---
<!---------------------------------- Updates --------------------------->
<details>
<summary>🚩<b>更新</b></summary>
- ✅ 更新动漫视频的小模型 **RealESRGAN AnimeVideo-v3**. 更多信息在 [anime video models](docs/anime_video_model.md) 和 [comparisons](docs/anime_comparisons.md)中.
- ✅ 添加了针对动漫视频的小模型, 更多信息在 [anime video models](docs/anime_video_model.md) 中.
- ✅ 添加了ncnn 实现:[Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
- ✅ 添加了 [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth),对二次元图片进行了优化,并减少了model的大小。详情 以及 与[waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan)的对比请查看[**anime_model.md**](docs/anime_model.md)
- ✅支持用户在自己的数据上进行微调 (finetune):[详情](docs/Training.md#Finetune-Real-ESRGAN-on-your-own-dataset)
- ✅ 支持使用[GFPGAN](https://github.com/TencentARC/GFPGAN)**增强人脸**
- ✅ 通过[Gradio](https://github.com/gradio-app/gradio)添加到了[Huggingface Spaces](https://huggingface.co/spaces)(一个机器学习应用的在线平台):[Gradio在线版](https://huggingface.co/spaces/akhaliq/Real-ESRGAN)。感谢[@AK391](https://github.com/AK391)
- ✅ 支持任意比例的缩放:`--outscale`(实际上使用`LANCZOS4`来更进一步调整输出图像的尺寸)。添加了*RealESRGAN_x2plus.pth*模型
- ✅ [推断脚本](inference_realesrgan.py)支持: 1) 分块处理**tile**; 2) 带**alpha通道**的图像; 3) **灰色**图像; 4) **16-bit**图像.
- ✅ 训练代码已经发布,具体做法可查看:[Training.md](docs/Training.md)。
</details>
<!---------------------------------- Projects that use Real-ESRGAN --------------------------->
<details>
<summary>🧩<b>使用Real-ESRGAN的项目</b></summary>
👋 如果你开发/使用/集成了Real-ESRGAN, 欢迎联系我添加
- NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan)
- VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu)
- NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
**易用的图形界面**
- [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753)
- [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628)
- [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx)
- [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn)
- [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu)
- [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21)
</details>
<details>
<summary>👀<b>Demo视频(B站)</b></summary>
- [大闹天宫片段](https://www.bilibili.com/video/BV1ja41117zb)
</details>
### :book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
> [[论文](https://arxiv.org/abs/2107.10833)]   [项目主页]   [[YouTube 视频](https://www.youtube.com/watch?v=fxHWoDSSvSc)]   [[B站视频](https://www.bilibili.com/video/BV1H34y1m7sS/)]   [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)]   [[PPT](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]<br>
> [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
> Tencent ARC Lab; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
<p align="center">
<img src="assets/teaser.jpg">
</p>
---
我们提供了一套训练好的模型(*RealESRGAN_x4plus.pth*),可以进行4倍的超分辨率。<br>
**现在的 Real-ESRGAN 还是有几率失败的,因为现实生活的降质过程比较复杂。**<br>
而且,本项目对**人脸以及文字之类**的效果还不是太好,但是我们会持续进行优化的。<br>
Real-ESRGAN 将会被长期支持,我会在空闲的时间中持续维护更新。
这些是未来计划的几个新功能:
- [ ] 优化人脸
- [ ] 优化文字
- [x] 优化动画图像
- [ ] 支持更多的超分辨率比例
- [ ] 可调节的复原
如果你有好主意或需求,欢迎在 issue 或 discussion 中提出。<br/>
如果你有一些 Real-ESRGAN 中有问题的照片,你也可以在 issue 或者 discussion 中发出来。我会留意(但是不一定能解决:stuck_out_tongue:)。如果有必要的话,我还会专门开一页来记录那些有待解决的图像。
---
### 便携版(绿色版)可执行文件
你可以下载**支持Intel/AMD/Nvidia显卡**的绿色版exe文件: [Windows版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [macOS版](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip)。
绿色版指的是这些exe你可以直接运行(放U盘里拷走都没问题),因为里面已经有所需的文件和模型了。它不需要 CUDA 或者 PyTorch运行环境。<br>
你可以通过下面这个命令来运行(Windows版本的例子,更多信息请查看对应版本的README.md):
```bash
./realesrgan-ncnn-vulkan.exe -i 输入图像.jpg -o 输出图像.png -n 模型名字
```
我们提供了五种模型:
1. realesrgan-x4plus(默认)
2. reaesrnet-x4plus
3. realesrgan-x4plus-anime(针对动漫插画图像优化,有更小的体积)
4. realesr-animevideov3 (针对动漫视频)
你可以通过`-n`参数来使用其他模型,例如`./realesrgan-ncnn-vulkan.exe -i 二次元图片.jpg -o 二刺螈图片.png -n realesrgan-x4plus-anime`
### 可执行文件的用法
1. 更多细节可以参考 [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan#computer-usages).
2. 注意:可执行文件并没有支持 python 脚本 `inference_realesrgan.py` 中所有的功能,比如 `outscale` 选项) .
```console
Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
-h show this help
-i input-path input image path (jpg/png/webp) or directory
-o output-path output image path (jpg/png/webp) or directory
-s scale upscale ratio (can be 2, 3, 4. default=4)
-t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
-m model-path folder path to the pre-trained models. default=models
-n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
-g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu
-j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
-x enable tta mode"
-f format output image format (jpg/png/webp, default=ext/png)
-v verbose output
```
由于这些exe文件会把图像分成几个板块,然后来分别进行处理,再合成导出,输出的图像可能会有一点割裂感(而且可能跟PyTorch的输出不太一样)
---
## :wrench: 依赖以及安装
- Python >= 3.7 (推荐使用[Anaconda](https://www.anaconda.com/download/#linux)或[Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.7](https://pytorch.org/)
#### 安装
1. 把项目克隆到本地
```bash
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
```
2. 安装各种依赖
```bash
# 安装 basicsr - https://github.com/xinntao/BasicSR
# 我们使用BasicSR来训练以及推断
pip install basicsr
# facexlib和gfpgan是用来增强人脸的
pip install facexlib
pip install gfpgan
pip install -r requirements.txt
python setup.py develop
```
## :zap: 快速上手
### 普通图片
下载我们训练好的模型: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
```
推断!
```bash
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
```
结果在`results`文件夹
### 动画图片
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_1.png">
</p>
训练好的模型: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)<br>
有关[waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan)的更多信息和对比在[**anime_model.md**](docs/anime_model.md)中。
```bash
# 下载模型
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P experiments/pretrained_models
# 推断
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
```
结果在`results`文件夹
### Python 脚本的用法
1. 虽然你使用了 X4 模型,但是你可以 **输出任意尺寸比例的图片**,只要实用了 `outscale` 参数. 程序会进一步对模型的输出图像进行缩放。
```console
Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
-h show this help
-i --input Input image or folder. Default: inputs
-o --output Output folder. Default: results
-n --model_name Model name. Default: RealESRGAN_x4plus
-s, --outscale The final upsampling scale of the image. Default: 4
--suffix Suffix of the restored image. Default: out
-t, --tile Tile size, 0 for no tile during testing. Default: 0
--face_enhance Whether to use GFPGAN to enhance face. Default: False
--fp32 Whether to use half precision during inference. Default: False
--ext Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
```
## :european_castle: 模型库
请参见 [docs/model_zoo.md](docs/model_zoo.md)
## :computer: 训练,在你的数据上微调(Fine-tune)
这里有一份详细的指南:[Training.md](docs/Training.md).
## BibTeX 引用
@Article{wang2021realesrgan,
title={Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
author={Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
journal={arXiv:2107.10833},
year={2021}
}
## :e-mail: 联系我们
如果你有任何问题,请通过 `[email protected]` 或 `[email protected]` 联系我们。
## :hugs: 感谢
感谢所有的贡献者大大们~
- [AK391](https://github.com/AK391): 通过[Gradio](https://github.com/gradio-app/gradio)添加到了[Huggingface Spaces](https://huggingface.co/spaces)(一个机器学习应用的在线平台):[Gradio在线版](https://huggingface.co/spaces/akhaliq/Real-ESRGAN)。
- [Asiimoviet](https://github.com/Asiimoviet): 把 README.md 文档 翻译成了中文。
- [2ji3150](https://github.com/2ji3150): 感谢详尽并且富有价值的[反馈、建议](https://github.com/xinntao/Real-ESRGAN/issues/131).
- [Jared-02](https://github.com/Jared-02): 把 Training.md 文档 翻译成了中文。
| 46.115942 | 514 | 0.724466 | yue_Hant | 0.626682 |
dbb89c77cb01f817424cfe23c06664cf551d6da1 | 328 | md | Markdown | node_modules/umi-plugin-ga/README.md | lsabella/baishu-admin | 4fa931a52ec7c6990daf1cec7aeea690bb3aa68a | [
"MIT"
] | null | null | null | node_modules/umi-plugin-ga/README.md | lsabella/baishu-admin | 4fa931a52ec7c6990daf1cec7aeea690bb3aa68a | [
"MIT"
] | 34 | 2020-09-28T07:24:42.000Z | 2022-02-26T14:29:57.000Z | node_modules/umi-plugin-ga/README.md | lsabella/baishu-admin | 4fa931a52ec7c6990daf1cec7aeea690bb3aa68a | [
"MIT"
] | null | null | null | # umi-plugin-ga
Umi plugin to support google analytics
## use
### install
`yarn add umi-plugin-ga`
### config
```js
export default {
plugins: [
[
.....
],
[
'umi-plugin-ga',
{
code: 'google analytics code',
judge: ()=>true // true or false
},
],
],
.....
}
```
| 10.933333 | 40 | 0.469512 | eng_Latn | 0.34062 |
dbb8ece033638c132050431bb303d8daf03e1ce2 | 2,096 | md | Markdown | README.md | J2021T/test | 381aadc950adc7270890247146e0f753ea691c72 | [
"MIT"
] | null | null | null | README.md | J2021T/test | 381aadc950adc7270890247146e0f753ea691c72 | [
"MIT"
] | null | null | null | README.md | J2021T/test | 381aadc950adc7270890247146e0f753ea691c72 | [
"MIT"
] | null | null | null |
# Mix-Master
* 
## Description
Mix-Master is a web application to generate random cocktails and help you find the ingredients in the store. We know everyone has been in the spot where they have the alcohol, but no idea what kind of cocktail to make. So we created this application using an API from CocktailDB to produce a random cocktail using the type of liquor the user clicked. Then the user is presented with the recipe and ingredients. If the user needs to find the ingredients in the store, they can click on the ingredient and find the aisle and average cost. This is done with the Spoonacular API, which is still working out some ingredient details. If the user really likes a recipe, they can save it to their favorites section by clicking the drink name button.
## Table of Contents
* [Usage](#usage)
* [Deployed-Application-Link](#deployed-application-link)
* [Deployed-Application-Screenshot](#deployed-application-screenshot)
* [Credits](#credits)
* [Contributing](#contributing)
* [Tests](#tests)
* [Questions](#questions)
* [License](#license)
## Usage
Simply follow the link to the deployed application and click the type of liquor you want for your cocktail. Then make that cocktail and enjoy!
## Deployed-Application-Link
undefined
## Deployed-Application-Screenshot

## Credits
* Kelly Hunter
* kellydhunter1
* Benson Muchemi
* benmuchemi15
## Contributing
Reach out to Kelly, Benson, or me on GitHub. We are always open to ideas and improvements.
## Tests
Follow the link and get yourself a cocktail.
## Questions
GitHub: [J2021T](https://github.com/J2021T)
EMAIL: [[email protected]](mailto:[email protected])
## License
This project is covered under the [MIT](../assets/license-files/MIT.txt) license.

| 33.269841 | 746 | 0.719466 | eng_Latn | 0.986326 |
dbb90ce48f971b0554fd448033b04519982d6b1d | 20,994 | md | Markdown | doc/deprecated/hack.md | stapmoshun/cloudmesh-mpi | b1fa6d493e89c51e83325c8547a3297c09649ba6 | [
"Apache-2.0"
] | 1 | 2022-02-27T17:28:02.000Z | 2022-02-27T17:28:02.000Z | doc/deprecated/hack.md | stapmoshun/cloudmesh-mpi | b1fa6d493e89c51e83325c8547a3297c09649ba6 | [
"Apache-2.0"
] | 45 | 2021-05-10T22:39:19.000Z | 2021-10-04T19:52:26.000Z | doc/deprecated/hack.md | stapmoshun/cloudmesh-mpi | b1fa6d493e89c51e83325c8547a3297c09649ba6 | [
"Apache-2.0"
] | 4 | 2021-07-13T19:14:08.000Z | 2022-03-26T15:32:06.000Z | # Towards Python MPI for Artificial Intelligence and Deep Learning Research
Gregor von Laszewski,
Fidel Leal,
Erin Seliger,
Cooper Young,
Agness Lungu
## Preface
Add a preface
* Who we are and how this activity came about.
* Notation. We can copy what I have done in other books, but keep it simple e.g. we do not have to worry about epub
* code
* use of `verbatim` inline and in block
* use of LaTex formulas
* use of markdown
* spaces before and after headline, itemize lists, e.g. online editors do that not correctly
* hyperlinks
* citation
## Overview
TODO: Gregor improves this section
After reflecting on the meeting I decided that, in order to increase your Python knowledge and to also lead you towards research, we will initially develop a tutorial that teaches students how to use MPI (Message Passing Interface). We do this with something that is called mpi4py (later on we will use this to coordinate AI algorithms). We will develop multiple sections of this tutorial, and each week each of you will work on a chapter so it can be redone by others. You can also work together and share each other's ideas and thoughts openly, as well as ask questions. We will do the following structured tasks (you will need to know what plagiarism is and when and how you need to cite):
## Hardware Considerations
Describes and summarizes what hardware you could use
### Backup
When you work on a research project you may install programs on your computer that may cause problems at a later time. The work we do is research and not production. Hence, we need to ensure that you have a backup strategy in place before you work on your research activities.
For that reason, we recommend that you purchase an external backup drive that you use regularly to create backups of your system. The best backup solutions include multiple redundant drives so that even if a drive fails you can recover easily from the failure. An external backup drive is relatively inexpensive and you can easily order one from internet vendors. To protect yourself from a broken backup drive (this happens!) we recommend either buying a second one at some point or using a RAID-enabled backup. Certainly your second or even primary backup could be a cloud service such as Google Drive.
Alternatively, you can use cloud storage services such as Google Drive to back up your most important information.
Examples:
* USB drive or external HDD enclosure with one or more disks. When using an SSD this is likely the fastest backup solution. You can also buy an external drive bay with multiple bays and purchase HDD or SSD drives based on your budget. Make sure to buy one with USB3 or Thunderbolt. The limit will be your budget.
* [TrueNas](https://www.truenas.com/) (you can build your own or get one ready made)
* [Synology](https://www.synology.com/en-us) (ready made)
### Bare metal on your Computer
TODO
Recommendations
* Memory (16GB)
* SSD (>512GB)
* Reasonably new computer less than 5 years of age
* An alternative could be a Raspberry Pi 4 with 8GB running Ubuntu or Raspberry OS for smaller projects. We have seen that some students had computers that cost >$1K when bought, but because they were too old a Raspberry Pi of about $100 was much, much faster. It is up to you to make a decision on which hardware you like to use. To get started with clusters and parallel computing ranging from MPI, Hadoop, Spark and containers, a Raspberry Pi cluster is not a bad choice.
### Using Raspberry PIs
TODO
### Laptops and Desktops
TODO
### Virtual Machines
TODO requires >=16 GB on laptop/desktop
#### Multipass
#### WSL2
#### VMs on your local computer
#### Cloud Virtual Machines
#### Campus Resources
##### Indiana University
Computers and clusters at Indiana University could be an alternative to a Raspberry Pi cluster, but your own cluster will teach you things that you will not experience if you use the campus resources. You have more access; please keep that in mind.
TODO: list resources here and how to get access
## Other resources
* very important: <https://www.ismll.uni-hildesheim.de/lehre/prakAIML-17s/script/02.MPI.pdf>
* <https://materials.jeremybejarano.com/MPIwithPython/>
* <https://rabernat.github.io/research_computing/parallel-programming-with-mpi-for-python.html>
* <https://research.computing.yale.edu/sites/default/files/files/mpi4py.pdf>
* <https://towardsdatascience.com/parallel-programming-in-python-with-message-passing-interface-mpi4py-551e3f198053>
* machinefile: <https://pythonprogramming.net/installing-testing-mpi4py-mpi-python-tutorial/>
* size, rank: <https://pythonprogramming.net/mpi4py-size-command-mpi/>
  they have `mpirun.openmpi -np 5 -machinefile /home/pi/mpi_testing/machinefile python`
  so we can specify a machine file; find out what that looks like
* jupyter on mpi <https://raspi.farm/howtos/installing-MPI-on-the-cluster/>
* <https://pythonprogramming.net/basic-mpi4py-script-getting-node-rank/>
* <https://www.nesi.org.nz/sites/default/files/mpi-in-python.pdf>
## Installation
In this section we will quickly summarize how you install Python on your hardware. This includes:
* Windows 10
* MacOs
* Raspberry (using our cms burn)
Each installation documents the install from python.org
### Installation on Windows 10
TODO
### Installation on MacOS
TODO
### Installation on Raspberry OS
TODO: using cms burn
### Installation on Raspberry Ubuntu
TODO: using cms burn
## Introduction to MPI
* What is MPI and why do you want to use it
* What are some example MPI functionalities and usage patterns (send receive, embarrassingly parallel)
### Installation of mpi4py on Windows
1) Look up msmpi and click the second link to download and install msmpisetup.exe and msmpisdk.msi
3) Open the system control panel
4) Click on Advanced system settings and then Environment Variables
5) Under the user variables box click on Path
6) Click New in order to add C:\Program Files (x86)\Microsoft SDKs\MPI and C:\Program Files\Microsoft MPI\Bin to Path
7) Close any open bash windows and then open a new one
8) Type the command `which mpiexec`
9) Install mpi4py with `pip install mpi4py`
10) In order to verify that the installation worked type `mpiexec -n 4 python -m mpi4py.bench helloworld`
### Installing mpi4py in a Raspberry Pi
1) Activate our virtual environment: `source ~/ENV3/bin/activate`
2) Install Open MPI in your pi by entering the following command: `sudo apt-get install openmpi-bin`.
After installation is complete you can check if it was successful by using `mpicc --showme:version`.
3) Enter `pip install mpi4py` to download and install mpi4py.
4) The installation can be tested with `mpiexec -n 4 python -m mpi4py.bench helloworld` (depending on the number of cores/nodes available to you, it may be necessary to reduce the number of copies that follow the -n option). On a Pi 4, the above test returned:
```
(ENV3) pi@red:~ $ mpiexec -n 4 python -m mpi4py.bench helloworld
Hello, World! I am process 0 of 4 on red.
Hello, World! I am process 1 of 4 on red.
Hello, World! I am process 2 of 4 on red.
Hello, World! I am process 3 of 4 on red.
```
### Installing mpi4py in MacOS
A similar process can be followed to install mpi4py in MacOS. In this case, we can use Homebrew to get Open MPI by entering: `brew install open-mpi`.
Once Open MPI is working, steps 3 and 4 from the above pi4 installation can be followed in order to download and install mpi4py.
## Hello World
To test if it works, a built-in test program is available.
To run it on a single host with n cores (let's assume you have 4 cores), you can use:
```
mpiexec -n 4 python -m mpi4py.bench helloworld
Hello, World! I am process 0 of 4 on localhost.
Hello, World! I am process 1 of 4 on localhost.
Hello, World! I am process 2 of 4 on localhost.
Hello, World! I am process 3 of 4 on localhost.
```
Note that the messages can be in different order.
To run it on multiple hosts with each having n cores please create a hostfile as follows:
TODO:
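A minimal sketch, assuming OpenMPI and passwordless ssh between the nodes; the IP addresses and slot counts below are placeholders that you replace with your own machines:

```
# hostfile: one line per node; slots = number of processes to start there
192.168.1.101 slots=4
192.168.1.102 slots=4
```

With such a file the test program can be started across both nodes with:

```
mpiexec -n 8 --hostfile hostfile python -m mpi4py.bench helloworld
```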
## Machine file, hostfile, rankfile
Run sudo apt-get install -y python-mpi4py on all nodes.
Test the installation: mpiexec -n 5 python -m mpi4py helloworld
THIS CAN BE DONE BEST WITH CLOUDMESH
FIRST TEST BY HAND
TODO: VERIFY
```
mpirun.openmpi -np 2 \
-machinefile /home/pi/mpi_testing/machinefile \
python helloworld.py
```
The machinefile contains the IP addresses of the nodes:
```
pi@192. ....
you add the IP addresses here, one line per node
```
TODO: learn about and evaluate and test if we can do
```
mpirun -r my_rankfile --report-bindings ...
Where the rankfile contains:
rank 0=compute17 slot=1:0
rank 1=compute17 slot=1:1
rank 2=compute18 slot=1:0
rank 3=compute18 slot=1:1
```
## MPI Functionality examples
### MPI Collective Communication functionality examples
#### Broadcast `comm.bcast()`
In this example, we broadcast a two-entry Python dictionary from a root process to the rest of the processes in our communicator group.
``` python
from mpi4py import MPI
# Communicator
comm = MPI.COMM_WORLD
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()
# Process with rank 0 gets the data to be broadcast
if rank == 0:
data = {'size' : [1,3,8],
'name' : ['disk1', 'disk2', 'disk3']}
# Other processes' data is empty
else:
data = None
# Print data in each process
print("before broadcast, data on rank %d is "%comm.rank, data)
# Data from process with rank 0 is broadcast to other processes in our
# communicator group
data = comm.bcast(data, root=0)
# Print data in each process after broadcast
print("after broadcast, data on rank %d is "%comm.rank, data)
```
After running `mpiexec -n 4 python bcast.py` we get the following:
```
before broadcast, data on rank 0 is {'size': [1, 3, 8], 'name': ['disk1', 'disk2', 'disk3']}
before broadcast, data on rank 1 is None
before broadcast, data on rank 2 is None
before broadcast, data on rank 3 is None
after broadcast, data on rank 0 is {'size': [1, 3, 8], 'name': ['disk1', 'disk2', 'disk3']}
after broadcast, data on rank 1 is {'size': [1, 3, 8], 'name': ['disk1', 'disk2', 'disk3']}
after broadcast, data on rank 2 is {'size': [1, 3, 8], 'name': ['disk1', 'disk2', 'disk3']}
after broadcast, data on rank 3 is {'size': [1, 3, 8], 'name': ['disk1', 'disk2', 'disk3']}
```
As we can see, the processes with ranks 1 through 3 received the data broadcast from rank 0.
#### Scatter `comm.scatter()`
In this example, we scatter the members of a list among the processes in the communicator group.
``` python
from mpi4py import MPI
# Communicator
comm = MPI.COMM_WORLD
# Number of processes in the communicator group
size = comm.Get_size()
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()
# Process with rank 0 gets a list with the data to be scattered
if rank == 0:
data = [(i+1)**2 for i in range(size)]
else:
data = None
# Print data in each process
print("before scattering, data on rank %d is "%comm.rank, data)
# Scattering occurs
data = comm.scatter(data, root=0)
# Print data in each process after scattering
print("data for rank %d is "%comm.rank, data)
```
Executing `mpiexec -n 4 python scatter.py` yields:
```
before scattering, data on rank 2 is None
before scattering, data on rank 3 is None
before scattering, data on rank 0 is [1, 4, 9, 16]
before scattering, data on rank 1 is None
data for rank 2 is 9
data for rank 1 is 4
data for rank 3 is 16
data for rank 0 is 1
```
The members of the list from process 0 have been successfully scattered among the rest of the processes in the communicator group.
#### Gather `comm.gather()`
In this example, data from each process in the communicator group is gathered in the process with rank 0.
``` python
from mpi4py import MPI
# Communicator
comm = MPI.COMM_WORLD
# Number of processes in the communicator group
size = comm.Get_size()
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()
# Each process gets different data, depending on its rank number
data = (rank+1)**2
# Print data in each process
print("before gathering, data on rank %d is "%comm.rank, data)
# Gathering occurs
data = comm.gather(data, root=0)
# Process 0 prints out the gathered data, rest of the processes
# print their data as well
if rank == 0:
print("after gathering, process 0's data is ", data)
else:
print("after gathering, data in rank %d is "%comm.rank, data)
```
Executing `mpiexec -n 4 python gather.py` yields:
```
before gathering, data on rank 2 is 9
before gathering, data on rank 3 is 16
before gathering, data on rank 0 is 1
before gathering, data on rank 1 is 4
after gathering, data in rank 2 is None
after gathering, data in rank 1 is None
after gathering, data in rank 3 is None
after gathering, process 0's data is [1, 4, 9, 16]
```
The data from all processes in the communicator group have been successfully gathered in process 0.
#### Broadcasting buffer-like objects `comm.Bcast()`
In this example, we broadcast a NumPy array from process 0 to the rest of the processes in the communicator group.
``` python
from mpi4py import MPI
import numpy as np
# Communicator
comm = MPI.COMM_WORLD
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()
# Rank 0 gets a NumPy array containing values from 0 to 9
if rank == 0:
data = np.arange(0,10,1, dtype='i')
# Rest of the processes get an empty buffer
else:
data = np.zeros(10, dtype='i')
# Print data in each process
print("before broadcasting, data for rank %d is: "%comm.rank, data)
# Broadcast occurs
comm.Bcast(data, root=0)
# Print data in each process after broadcast
print("after broadcasting, data for rank %d is: "%comm.rank, data)
```
Executing `mpiexec -n 4 python npbcast.py` yields:
```
before broadcasting, data for rank 1 is: [0 0 0 0 0 0 0 0 0 0]
before broadcasting, data for rank 2 is: [0 0 0 0 0 0 0 0 0 0]
before broadcasting, data for rank 3 is: [0 0 0 0 0 0 0 0 0 0]
before broadcasting, data for rank 0 is: [0 1 2 3 4 5 6 7 8 9]
after broadcasting, data for rank 0 is: [0 1 2 3 4 5 6 7 8 9]
after broadcasting, data for rank 2 is: [0 1 2 3 4 5 6 7 8 9]
after broadcasting, data for rank 3 is: [0 1 2 3 4 5 6 7 8 9]
after broadcasting, data for rank 1 is: [0 1 2 3 4 5 6 7 8 9]
```
As we can see, the values in the array at process with rank 0 have been broadcast to the rest of the processes in the communicator group.
#### Scattering buffer-like objects `comm.Scatter()`
In this example, we scatter a NumPy array among the processes in the communicator group.
``` python
from mpi4py import MPI
import numpy as np
# Communicator
comm = MPI.COMM_WORLD
# Number of processes in the communicator group
size = comm.Get_size()
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()
# Data to be sent
sendbuf = None
# Process with rank 0 populates sendbuf with a 2-D array,
# based on the number of processes in our communicator group
if rank == 0:
sendbuf = np.zeros([size, 10], dtype='i')
sendbuf.T[:,:] = range(size)
# Print the content of sendbuf before scattering
print('sendbuf in 0: ', sendbuf)
# Each process gets a buffer (initially containing just zeros)
# to store scattered data.
recvbuf = np.zeros(10, dtype='i')
# Print the content of recvbuf in each process before scattering
print('recvbuf in %d: '%rank, recvbuf)
# Scattering occurs
comm.Scatter(sendbuf, recvbuf, root=0)
# Print the content of recvbuf in each process after scattering
print('Buffer in process %d contains: '%rank, recvbuf)
```
Executing `mpiexec -n 4 python npscatter.py` yields:
```
recvbuf in 1: [0 0 0 0 0 0 0 0 0 0]
recvbuf in 2: [0 0 0 0 0 0 0 0 0 0]
recvbuf in 3: [0 0 0 0 0 0 0 0 0 0]
sendbuf in 0: [[0 0 0 0 0 0 0 0 0 0]
[1 1 1 1 1 1 1 1 1 1]
[2 2 2 2 2 2 2 2 2 2]
[3 3 3 3 3 3 3 3 3 3]]
recvbuf in 0: [0 0 0 0 0 0 0 0 0 0]
Buffer in process 2 contains: [2 2 2 2 2 2 2 2 2 2]
Buffer in process 0 contains: [0 0 0 0 0 0 0 0 0 0]
Buffer in process 3 contains: [3 3 3 3 3 3 3 3 3 3]
Buffer in process 1 contains: [1 1 1 1 1 1 1 1 1 1]
```
As we can see, the values in the 2-D array at process with rank 0 have been scattered among all our processes in the communicator group, based on their rank value.
#### Gathering buffer-like objects `comm.Gather()`
In this example, we gather a NumPy array from the processes in the communicator group into a 2-D array in process with rank 0.
``` python
from mpi4py import MPI
import numpy as np
# Communicator group
comm = MPI.COMM_WORLD
# Number of processes in the communicator group
size = comm.Get_size()
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()
# Each process gets an array with data based on its rank.
sendbuf = np.zeros(10, dtype='i') + rank
# Print the data in sendbuf before gathering
print('Buffer in process %d before gathering: '%rank, sendbuf)
# Variable to store gathered data
recvbuf = None
# Process with rank 0 initializes recvbuf to a 2-D array containing
# only zeros. The size of the array is determined by the number of
# processes in the communicator group
if rank == 0:
recvbuf = np.zeros([size,10], dtype='i')
# Print recvbuf
print('recvbuf in process 0 before gathering: ', recvbuf)
# Gathering occurs
comm.Gather(sendbuf, recvbuf, root=0)
# Print recvbuf in process with rank 0 after gathering
if rank == 0:
print('recvbuf in process 0 after gathering: \n', recvbuf)
```
Executing `mpiexec -n 4 python npgather.py` yields:
```
Buffer in process 2 before gathering: [2 2 2 2 2 2 2 2 2 2]
Buffer in process 3 before gathering: [3 3 3 3 3 3 3 3 3 3]
Buffer in process 0 before gathering: [0 0 0 0 0 0 0 0 0 0]
Buffer in process 1 before gathering: [1 1 1 1 1 1 1 1 1 1]
recvbuf in process 0 before gathering: [[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]]
recvbuf in process 0 after gathering:
[[0 0 0 0 0 0 0 0 0 0]
[1 1 1 1 1 1 1 1 1 1]
[2 2 2 2 2 2 2 2 2 2]
[3 3 3 3 3 3 3 3 3 3]]
```
The values contained in the buffers from the different processes in the group have been gathered in the 2-D array in process with rank 0.
#### send receive
TODO
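A minimal point-to-point sketch in the style of the examples above (the script name `send_recv.py` is made up): process 0 sends a Python dictionary to process 1 with the pickle-based `comm.send()` and `comm.recv()`.

``` python
from mpi4py import MPI

# Communicator
comm = MPI.COMM_WORLD
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    # blocking send to the process with rank 1; the tag labels the message
    comm.send(data, dest=1, tag=11)
    print("process 0 sent", data)
elif rank == 1:
    # blocking receive of the message with the matching tag from rank 0
    data = comm.recv(source=0, tag=11)
    print("process 1 received", data)
```

Running `mpiexec -n 2 python send_recv.py` should show process 1 printing the dictionary it received from process 0.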
#### Dynamic Process Management
TODO
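As a first sketch (the file names `manager.py` and `worker.py` are made up for illustration), a manager can spawn additional Python processes with `MPI.COMM_SELF.Spawn()` and gather one value from each worker over the resulting inter-communicator:

``` python
# manager.py (hypothetical file name)
from mpi4py import MPI
import sys

# spawn two new Python processes that run worker.py
comm = MPI.COMM_SELF.Spawn(sys.executable, args=['worker.py'], maxprocs=2)

# gather one value from every worker; MPI.ROOT marks the spawning side
results = comm.gather(None, root=MPI.ROOT)
print("manager received:", results)
comm.Disconnect()
```

``` python
# worker.py (hypothetical file name)
from mpi4py import MPI

# inter-communicator back to the process that spawned us
comm = MPI.Comm.Get_parent()
rank = comm.Get_rank()

# every worker contributes one value to the manager (rank 0 of the parent group)
comm.gather(rank ** 2, root=0)
comm.Disconnect()
```

Start it with `mpiexec -n 1 python manager.py`.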
#### task processing (spawn, pull, …)
TODO: Cooper
##### Futures
<https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html>
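A tiny task-pool sketch with `MPIPoolExecutor` (the script name `pool.py` is made up); the executor distributes the mapped calls over the MPI worker processes:

``` python
# pool.py (hypothetical file name)
from mpi4py.futures import MPIPoolExecutor

def square(x):
    return x * x

if __name__ == '__main__':
    # the master process maps tasks onto the MPI worker processes
    with MPIPoolExecutor() as executor:
        results = executor.map(square, range(10))
        print(list(results))
```

Run it for example with `mpiexec -n 5 python -m mpi4py.futures pool.py` (one master plus four workers).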
#### examples for other collective communication methods
TODO
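As one example beyond broadcast/scatter/gather, here is a minimal sketch of the reduction collectives `comm.reduce()` and `comm.allreduce()`:

``` python
from mpi4py import MPI

# Communicator
comm = MPI.COMM_WORLD
# Get the rank of the current process in the communicator group
rank = comm.Get_rank()

# every process contributes one value
value = (rank + 1) ** 2

# the sum of all contributions ends up only on the root process
total = comm.reduce(value, op=MPI.SUM, root=0)

# allreduce gives every process the same reduced result
total_everywhere = comm.allreduce(value, op=MPI.SUM)

print("rank %d: reduce=%s allreduce=%s" % (rank, total, total_everywhere))
```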
## MPI-IO
TODO: Agnes
### Collective I/O with NumPy arrays
TODO: Agnes
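A minimal collective-write sketch in the spirit of the mpi4py documentation (the file name `datafile.contig` is arbitrary): every process writes one contiguous block of a NumPy array into a single shared file.

``` python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# open one shared file for writing, creating it if necessary
amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
fh = MPI.File.Open(comm, "datafile.contig", amode)

# each process writes ten integers equal to its rank
buffer = np.empty(10, dtype=np.int32)
buffer[:] = rank

# offset in bytes: one buffer-sized block per rank
offset = rank * buffer.nbytes
fh.Write_at_all(offset, buffer)

fh.Close()
```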
### Non-contiguous Collective I/O with NumPy arrays and datatypes
TODO: Agnes
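A sketch of the interleaved (non-contiguous) case, again following the pattern of the mpi4py documentation; the file name is arbitrary. A vector datatype combined with a file view lets each rank write every `size`-th integer of the file.

``` python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
fh = MPI.File.Open(comm, "datafile.noncontig", amode)

item_count = 10
buffer = np.empty(item_count, dtype='i')
buffer[:] = rank

# vector datatype: this rank's items are strided "size" integers apart in the file
filetype = MPI.INT.Create_vector(item_count, 1, size)
filetype.Commit()

# shift each rank's view by its own position within one stride
displacement = MPI.INT.Get_size() * rank
fh.Set_view(displacement, filetype=filetype)

fh.Write_all(buffer)
filetype.Free()
fh.Close()
```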
## Monte Carlo calculation of Pi
TODO: Open, improve
TODO WHAT IS THE PROBLEM GOAL
We start with the mathematical formulation of the Monte Carlo calculation of pi. Consider the unit square and the quarter of the unit circle that fits inside it: the quarter circle has area pi/4 while the square has area 1, so the probability that a uniformly random point in the square falls inside the circle is pi over four. With this in mind, we can use the Monte Carlo method for the calculation of pi: sample many random points, count the fraction that lands inside the circle, and multiply by four.
TODO: Drawing
TODO: HOW AND WHY DO WE NEED MULTIPLE COMPUTERS
### Program
TODO: Open
* Example program to run Monte Carlo on multiple hosts (a first sketch is shown after this list)
* Benchmarking of the code
* cloudmesh.common (not thread safe, but still can be used, research how to use it in multiple threads)
* other strategies to benchmark that you research (only if really needed)
* Use numba to speed up the code
* describe how to install
* showcase basic usage on our monte carlo function
* display results with matplotlib
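A first, unoptimized sketch of such a program (the file name `montecarlo_pi.py` and the sample count are arbitrary; benchmarking, numba and plotting from the list above are intentionally left out):

``` python
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

samples_per_rank = 10 ** 6
random.seed(rank)  # give every rank a different random stream

# count the samples that fall inside the quarter circle
hits = 0
for _ in range(samples_per_rank):
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        hits += 1

# combine the counts from all ranks on the root process
total_hits = comm.reduce(hits, op=MPI.SUM, root=0)

if rank == 0:
    pi_estimate = 4 * total_hits / (samples_per_rank * size)
    print("pi is approximately", pi_estimate)
```

Run it for example with `mpiexec -n 4 python montecarlo_pi.py`.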
## GPU Programming with MPI
Only possible for someone with a GPU (contact me if you have one)
Once we are finished with MPI we will use and look at Python Dask and other frameworks, as well as REST services, to interface with the MPI programs. This way we will be able to expose the cluster to anyone, and they will not even know they use a cluster, while exposing this as a single function.
The github repo is used by all of you to have write access and contribute to the research effort easily and in parallel.
You will get out of this as much as you put in. Thus it is important to set several dedicated hours aside (ideally each week) and contribute your work to others.
It is difficult to assess how long the above task takes as we just get started and we need to learn first how we work together as a team. If I were to do this alone it may take a week, but as you are less experienced it would likely take longer. However, to decrease the time needed we can split up work and each of you will work on a dedicated topic (but you can still work in smaller teams if you desire). We will start assigning tasks in github once this is all set up.
Idea:
* tutorial about hackmd.io <https://hackmd.io/t9SkKiSLR5qW9RUUA_CT-A>
* Github vs hackmd
We will use initially hackmd so we avoid issues with github and we can learn github once we do more coding.
| 32.498452 | 693 | 0.731447 | eng_Latn | 0.995526 |
dbbabf514fe3dbe568c986d50dd01dd4a1a4cc07 | 5,696 | md | Markdown | docs/source/essentials/support-for-cached-responses.md | LuisRizo/apollo-android | b57a9072d7997abd9ca5ffaf1eb90e4bd93625f1 | [
"MIT"
] | 1 | 2019-03-26T09:06:40.000Z | 2019-03-26T09:06:40.000Z | docs/source/essentials/support-for-cached-responses.md | LuisRizo/apollo-android | b57a9072d7997abd9ca5ffaf1eb90e4bd93625f1 | [
"MIT"
] | null | null | null | docs/source/essentials/support-for-cached-responses.md | LuisRizo/apollo-android | b57a9072d7997abd9ca5ffaf1eb90e4bd93625f1 | [
"MIT"
] | 1 | 2019-01-11T00:31:07.000Z | 2019-01-11T00:31:07.000Z | <h2 id="cache-policy">Support For Cached Responses</h2>
Apollo-android allows you to keep a client-side cache of query results, making it suitable for use even while offline.
The client can be configured with 3 levels of caching:
- **HTTP Response Cache**: For caching raw http responses.
- **Normalized Disk Cache**: Per node caching of responses in SQL. Persists normalized responses on disk so that they can be used after process death.
- **Normalized InMemory Cache**: Optimized Guava memory cache for in memory caching as long as the App/Process is still alive.
#### Usage
To enable HTTP Cache support, add the dependency to your project's build.gradle file.
```groovy
dependencies {
compile 'com.apollographql.apollo:apollo-http-cache:x.y.z'
}
```
Raw HTTP Response Cache:
```java
// Directory where cached responses will be stored
File file = new File("/cache/");
// Size in bytes of the cache
int size = 1024*1024;
// Create the http response cache store
DiskLruHttpCacheStore cacheStore = new DiskLruCacheStore(file, size);
// Build the Apollo Client
ApolloClient apolloClient = ApolloClient.builder()
.serverUrl("/")
.httpCache(new ApolloHttpCache(cacheStore))
.okHttpClient(okHttpClient)
.build();
apolloClient
.query(
FeedQuery.builder()
.limit(10)
.type(FeedType.HOT)
.build()
)
.httpCachePolicy(HttpCachePolicy.CACHE_FIRST)
.enqueue(new ApolloCall.Callback<FeedQuery.Data>() {
@Override public void onResponse(@NotNull Response<FeedQuery.Data> dataResponse) {
Log.i(TAG, response.toString());
}
@Override public void onFailure(@NotNull Throwable t) {
Log.e(TAG, e.getMessage(), e);
}
});
```
**IMPORTANT:** Caching is provided only for `query` operations. It isn't available for `mutation` operations.
There are four available cache policies [`HttpCachePolicy`](https://github.com/apollographql/apollo-android/blob/master/apollo-api/src/main/java/com/apollographql/apollo/api/cache/http/HttpCachePolicy.java):
- `CACHE_ONLY` - Fetch a response from the cache only, ignoring the network. If the cached response doesn't exist or is expired, then return an error.
- `NETWORK_ONLY` - Fetch a response from the network only, ignoring any cached responses.
- `CACHE_FIRST` - Fetch a response from the cache first. If the response doesn't exist or is expired, then fetch a response from the network.
- `NETWORK_FIRST` - Fetch a response from the network first. If the network fails and the cached response isn't expired, then return cached data instead.
For `CACHE_ONLY`, `CACHE_FIRST` and `NETWORK_FIRST` policies you can define the timeout after which a cached response is treated as expired and will be evicted from the HTTP cache, via `expireAfter(expireTimeout, timeUnit)`.
Normalized Disk Cache
```java
// Create the ApolloSqlHelper. Please note that if null is passed in as the name, you will get an in-memory
// Sqlite database that will not persist across restarts of the app.
ApolloSqlHelper apolloSqlHelper = ApolloSqlHelper.create(context, "db_name");
// Create NormalizedCacheFactory
NormalizedCacheFactory cacheFactory = new SqlNormalizedCacheFactory(apolloSqlHelper);
// Create the cache key resolver, this example works well when all types have globally unique ids.
CacheKeyResolver resolver = new CacheKeyResolver() {
@NotNull @Override
public CacheKey fromFieldRecordSet(@NotNull ResponseField field, @NotNull Map<String, Object> recordSet) {
return formatCacheKey((String) recordSet.get("id"));
}
@NotNull @Override
public CacheKey fromFieldArguments(@NotNull ResponseField field, @NotNull Operation.Variables variables) {
return formatCacheKey((String) field.resolveArgument("id", variables));
}
private CacheKey formatCacheKey(String id) {
if (id == null || id.isEmpty()) {
return CacheKey.NO_KEY;
} else {
return CacheKey.from(id);
}
}
};
// Build the Apollo Client
ApolloClient apolloClient = ApolloClient.builder()
.serverUrl("/")
.normalizedCache(cacheFactory, resolver)
.okHttpClient(okHttpClient)
.build();
```
Normalized In-Memory Cache:
```java
// Create NormalizedCacheFactory
NormalizedCacheFactory cacheFactory = new LruNormalizedCacheFactory(EvictionPolicy.builder().maxSizeBytes(10 * 1024).build());
// Build the Apollo Client
ApolloClient apolloClient = ApolloClient.builder()
.serverUrl("/")
.normalizedCache(cacheFactory, resolver)
.okHttpClient(okHttpClient)
.build();
```
Chaining Caches:
You can use both an memory cache and sql cache, with a cache chain. Reads will read from the first cache
hit in the chain. Writes will propagate down the entire chain.
```java
NormalizedCacheFactory sqlCacheFactory = new SqlNormalizedCacheFactory(apolloSqlHelper)
NormalizedCacheFactory memoryFirstThenSqlCacheFactory = new LruNormalizedCacheFactory(
EvictionPolicy.builder().maxSizeBytes(10 * 1024).build()
).chain(sqlCacheFactory);
```
For concrete examples of using response caches, please see the following tests in the [`apollo-integration`](https://github.com/apollographql/apollo-android/tree/master/apollo-integration/src/test/java/com/apollographql/apollo) module:
[`CacheTest`](https://github.com/apollographql/apollo-android/blob/master/apollo-integration/src/test/java/com/apollographql/apollo/HttpCacheTest.java), [`SqlNormalizedCacheTest`](https://github.com/apollographql/apollo-android/blob/master/apollo-integration/src/test/java/com/apollographql/apollo/NormalizedCacheTestCase.java), [`LruNormalizedCacheTest`](https://github.com/apollographql/apollo-android/blob/master/apollo-integration/src/test/java/com/apollographql/apollo/NormalizedCacheTestCase.java).
| 40.978417 | 505 | 0.763869 | eng_Latn | 0.723888 |
dbbbf47c6b3076319079b0bdf420bc5676038416 | 2,050 | md | Markdown | _posts/2017-09-08-Low-memory-GEMM-based-convolution-algorithms-for-deep-neural-networks.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | 7 | 2018-02-11T01:50:19.000Z | 2020-01-14T02:07:17.000Z | _posts/2017-09-08-Low-memory-GEMM-based-convolution-algorithms-for-deep-neural-networks.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | null | null | null | _posts/2017-09-08-Low-memory-GEMM-based-convolution-algorithms-for-deep-neural-networks.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | 4 | 2018-02-04T15:58:04.000Z | 2019-08-29T14:54:14.000Z | ---
layout: post
title: "Low-memory GEMM-based convolution algorithms for deep neural networks"
date: 2017-09-08 06:32:33
categories: arXiv_CV
tags: arXiv_CV Inference
author: Andrew Anderson, Aravind Vasudevan, Cormac Keane, David Gregg
mathjax: true
---
* content
{:toc}
##### Abstract
Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temporary data structures differs significantly. Convolution of an input matrix with dimensions $C \times H \times W$, requires $O(K^2CHW)$ additional space using the classical im2col approach. More recently memory-efficient approaches requiring just $O(KCHW)$ auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just $O(MHW)$ and $O(KW)$ additional space respectively, where $M$ is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our low-memory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.
##### Abstract (translated by Google)
##### URL
[https://arxiv.org/abs/1709.03395](https://arxiv.org/abs/1709.03395)
##### PDF
[https://arxiv.org/pdf/1709.03395](https://arxiv.org/pdf/1709.03395)
| 78.846154 | 1,558 | 0.796585 | eng_Latn | 0.991563 |
dbbca63655bbf61dafe4831eb0fad2ab090fa0aa | 161 | md | Markdown | BFX-README.md | jeffmitchell/bfx | 592f3a555bb2073596c505514f0a17e9d7517519 | [
"MIT"
] | null | null | null | BFX-README.md | jeffmitchell/bfx | 592f3a555bb2073596c505514f0a17e9d7517519 | [
"MIT"
] | null | null | null | BFX-README.md | jeffmitchell/bfx | 592f3a555bb2073596c505514f0a17e9d7517519 | [
"MIT"
] | null | null | null | # Browser-FX Development Readme
To start Node.js http-server:
* cd bfx
* http-server -c-1 (the -c-1 disables caching)
* Initial URL is localhost:8080/app/#/ui
| 20.125 | 46 | 0.720497 | kor_Hang | 0.475322 |
dbbcb88fa59345aba5acaef57b073cbd299e9244 | 52,921 | md | Markdown | docs/solutions/SOLN_S1_Regression_and_Analysis.md | wesleybeckner/data_science_foundations | 83fa44790413992f0a9e3e181cc7f834b0750ef6 | [
"MIT"
] | null | null | null | docs/solutions/SOLN_S1_Regression_and_Analysis.md | wesleybeckner/data_science_foundations | 83fa44790413992f0a9e3e181cc7f834b0750ef6 | [
"MIT"
] | 1 | 2021-12-15T21:41:25.000Z | 2022-02-11T14:48:12.000Z | docs/solutions/SOLN_S1_Regression_and_Analysis.md | wesleybeckner/data_science_foundations | 83fa44790413992f0a9e3e181cc7f834b0750ef6 | [
"MIT"
] | null | null | null | <a href="https://colab.research.google.com/github/wesleybeckner/data_science_foundations/blob/main/notebooks/solutions/SOLN_S1_Regression_and_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Science Foundations Session 1: Regression and Analysis
**Instructor**: Wesley Beckner
**Contact**: [email protected]
---
<br>
In this session we will look at fitting data to a curve using **regression**. We will also look at using regression to make **predictions** for new data points by dividing our data into a training and a testing set. Finally we will examine how much error we make in our fit and then in our predictions by computing the mean squared error.
<br>
---
<a name='x.0'></a>
## 1.0 Preparing Environment and Importing Data
[back to top](#top)
<a name='x.0.1'></a>
### 1.0.1 Import Packages
[back to top](#top)
```python
# Import pandas, pyplot, ipywidgets
import pandas as pd
from matplotlib import pyplot as plt
from ipywidgets import interact
# Import Scikit-Learn library for the regression models
import sklearn
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
# for enrichment topics
import seaborn as sns
import numpy as np
```
### 1.0.2 Load Dataset
[back to top](#top)
For our discussion on regression and descriptive statistics today we will use a well known dataset of different wines and their quality ratings
```python
df = pd.read_csv("https://raw.githubusercontent.com/wesleybeckner/"\
"ds_for_engineers/main/data/wine_quality/winequalityN.csv")
df.shape
```
(6497, 13)
```python
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>type</th>
<th>fixed acidity</th>
<th>volatile acidity</th>
<th>citric acid</th>
<th>residual sugar</th>
<th>chlorides</th>
<th>free sulfur dioxide</th>
<th>total sulfur dioxide</th>
<th>density</th>
<th>pH</th>
<th>sulphates</th>
<th>alcohol</th>
<th>quality</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>white</td>
<td>7.0</td>
<td>0.27</td>
<td>0.36</td>
<td>20.7</td>
<td>0.045</td>
<td>45.0</td>
<td>170.0</td>
<td>1.0010</td>
<td>3.00</td>
<td>0.45</td>
<td>8.8</td>
<td>6</td>
</tr>
<tr>
<th>1</th>
<td>white</td>
<td>6.3</td>
<td>0.30</td>
<td>0.34</td>
<td>1.6</td>
<td>0.049</td>
<td>14.0</td>
<td>132.0</td>
<td>0.9940</td>
<td>3.30</td>
<td>0.49</td>
<td>9.5</td>
<td>6</td>
</tr>
<tr>
<th>2</th>
<td>white</td>
<td>8.1</td>
<td>0.28</td>
<td>0.40</td>
<td>6.9</td>
<td>0.050</td>
<td>30.0</td>
<td>97.0</td>
<td>0.9951</td>
<td>3.26</td>
<td>0.44</td>
<td>10.1</td>
<td>6</td>
</tr>
<tr>
<th>3</th>
<td>white</td>
<td>7.2</td>
<td>0.23</td>
<td>0.32</td>
<td>8.5</td>
<td>0.058</td>
<td>47.0</td>
<td>186.0</td>
<td>0.9956</td>
<td>3.19</td>
<td>0.40</td>
<td>9.9</td>
<td>6</td>
</tr>
<tr>
<th>4</th>
<td>white</td>
<td>7.2</td>
<td>0.23</td>
<td>0.32</td>
<td>8.5</td>
<td>0.058</td>
<td>47.0</td>
<td>186.0</td>
<td>0.9956</td>
<td>3.19</td>
<td>0.40</td>
<td>9.9</td>
<td>6</td>
</tr>
</tbody>
</table>
</div>
## 1.1 What is regression?
It is the process of finding a relationship between **_dependent_** and **_independent_** variables to find trends in data. This abstract definition means that you have one variable (the dependent variable) which depends on one or more variables (the independent variables). One of the reasons for which we want to regress data is to understand whether there is a trend between two variables.
**Housing Prices Example**
We can imagine this scenario with housing prices. Envision a **_mixed_** dataset of **_continuous_** and **_discrete_** independent variables. Some features could be continuous, floating point values like location ranking and housing condition. Others could be discrete like the number of rooms or bathrooms. We could take these features and use them to predict a house value. This would be a **_regression_** model.
<p align=center>
<img src="https://raw.githubusercontent.com/wesleybeckner/technology_explorers/main/assets/machine_learning/ML3.png" width=1000px></img>
</p>
## 1.2 Linear regression fitting with scikit-learn
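Before the exercise, here is a minimal illustrative fit on the wine data loaded above (this sketch is not part of the original exercise; it assumes `df` from section 1.0.2, uses `alcohol` as a single, arbitrarily chosen feature, and simply drops rows with missing values):

```python
# minimal single-feature fit: quality ~ alcohol (illustrative only)
sub = df[['alcohol', 'quality']].dropna()
X = sub[['alcohol']]   # 2-D feature matrix
y = sub['quality']     # 1-D target

reg = linear_model.LinearRegression()
reg.fit(X, y)

print("slope:", reg.coef_[0])
print("intercept:", reg.intercept_)
print("R^2 on the fitting data:", reg.score(X, y))
```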
#### 🏋️ Exercise 1: rudimentary EDA
What does the data look like? Recall how to visualize data in a pandas dataframe
* for every column calculate the
  * skew: `df.skew()`
  * kurtosis: `df.kurtosis()`
  * Pearson's correlation with the dependent variable: `df.corr()`
  * number of missing entries: `df.isnull()`
* and organize this into a new dataframe
_note:_ pearsons is just one type of correlation; another available to us is **_spearman_**, which differs from pearsons in that it depends on ranked values rather than their direct quantities. You can read more [here](https://support.minitab.com/en-us/minitab-express/1/help-and-how-to/modeling-statistics/regression/supporting-topics/basics/a-comparison-of-the-pearson-and-spearman-correlation-methods/)
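If you are curious how much the two methods disagree on this dataset, here is a quick sketch (restricted to the numeric columns so the string `type` column stays out of the way):
```python
# Compare Pearson and Spearman correlations against quality, side by side
numeric = df.select_dtypes('number')
pearson_corr = numeric.corr(method='pearson')['quality']
spearman_corr = numeric.corr(method='spearman')['quality']
print(pd.concat([pearson_corr, spearman_corr], axis=1, keys=['pearson', 'spearman']))
```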
```python
df.isnull().sum()
```
type 0
fixed acidity 10
volatile acidity 8
citric acid 3
residual sugar 2
chlorides 2
free sulfur dioxide 0
total sulfur dioxide 0
density 0
pH 9
sulphates 4
alcohol 0
quality 0
dtype: int64
```python
# Cell for Exercise 1
# part A
# using df.<method> define the following four variables with the results from
# skew(), kurtosis(), corr() (while selecting for quality), and isnull()
# for isnull() you'll notice the return is a dataframe of booleans. we would
# like to simply know the number of null values for each column. change the
# return of isnull() using the sum() method along the columns
skew = df.skew()
kurt = df.kurtosis()
pear = df.corr()['quality']
null = df.isnull().sum(axis=0)
# part B
# on line 13, put these results in a list using square brackets and call
# pd.DataFrame on the list to make your new DataFrame! store it under the
# variable name dff
dff = pd.DataFrame([skew, kurt, pear, null])
# part C
# take the transpose of this DataFrame using dff.T. reassign dff to this copy
dff = dff.T
# part D
# set the column names to 'skew', 'kurtosis', 'pearsons _quality', and
# 'null count' using dff.columns
dff.columns=['skew', 'kurtosis', 'pearsons _quality', 'null count']
# Now return dff to the output to view your hand work
dff # uncomment this line
```
/tmp/ipykernel_1422/4028752270.py:10: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
skew = df.skew()
/tmp/ipykernel_1422/4028752270.py:11: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
kurt = df.kurtosis()
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>skew</th>
<th>kurtosis</th>
<th>pearsons _quality</th>
<th>null count</th>
</tr>
</thead>
<tbody>
<tr>
<th>fixed acidity</th>
<td>1.722805</td>
<td>5.057727</td>
<td>-0.077031</td>
<td>10.0</td>
</tr>
<tr>
<th>volatile acidity</th>
<td>1.495512</td>
<td>2.827081</td>
<td>-0.265953</td>
<td>8.0</td>
</tr>
<tr>
<th>citric acid</th>
<td>0.473032</td>
<td>2.401582</td>
<td>0.085706</td>
<td>3.0</td>
</tr>
<tr>
<th>residual sugar</th>
<td>1.435000</td>
<td>4.358134</td>
<td>-0.036825</td>
<td>2.0</td>
</tr>
<tr>
<th>chlorides</th>
<td>5.399849</td>
<td>50.894874</td>
<td>-0.200886</td>
<td>2.0</td>
</tr>
<tr>
<th>free sulfur dioxide</th>
<td>1.220066</td>
<td>7.906238</td>
<td>0.055463</td>
<td>0.0</td>
</tr>
<tr>
<th>total sulfur dioxide</th>
<td>-0.001177</td>
<td>-0.371664</td>
<td>-0.041385</td>
<td>0.0</td>
</tr>
<tr>
<th>density</th>
<td>0.503602</td>
<td>6.606067</td>
<td>-0.305858</td>
<td>0.0</td>
</tr>
<tr>
<th>pH</th>
<td>0.386966</td>
<td>0.370068</td>
<td>0.019366</td>
<td>9.0</td>
</tr>
<tr>
<th>sulphates</th>
<td>1.798467</td>
<td>8.659892</td>
<td>0.038729</td>
<td>4.0</td>
</tr>
<tr>
<th>alcohol</th>
<td>0.565718</td>
<td>-0.531687</td>
<td>0.444319</td>
<td>0.0</td>
</tr>
<tr>
<th>quality</th>
<td>0.189623</td>
<td>0.232322</td>
<td>1.000000</td>
<td>0.0</td>
</tr>
<tr>
<th>type</th>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
</tbody>
</table>
</div>
I have gone ahead and repeated this exercise with the red vs white wine types:
```python
red = df.loc[df['type'] == 'red']
wht = df.loc[df['type'] == 'white']
def get_summary(df):
skew = df.skew()
kurt = df.kurtosis()
pear = df.corr()['quality']
null = df.isnull().sum()
med = df.median()
men = df.mean()
dff = pd.DataFrame([skew, kurt, pear, null, med, men])
dff = dff.T
dff.columns = ['skew', 'kurtosis', 'pearsons _quality', 'null count', 'median',
'mean']
return dff
dffr = get_summary(red)
dffw = get_summary(wht)
desc = pd.concat([dffr, dffw], keys=['red', 'white'])
```
/tmp/ipykernel_1422/2387423026.py:5: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
skew = df.skew()
/tmp/ipykernel_1422/2387423026.py:6: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
kurt = df.kurtosis()
/tmp/ipykernel_1422/2387423026.py:9: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
med = df.median()
/tmp/ipykernel_1422/2387423026.py:10: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
men = df.mean()
```python
desc
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th></th>
<th>skew</th>
<th>kurtosis</th>
<th>pearsons _quality</th>
<th>null count</th>
<th>median</th>
<th>mean</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="13" valign="top">red</th>
<th>fixed acidity</th>
<td>0.982192</td>
<td>1.132624</td>
<td>0.123834</td>
<td>2.0</td>
<td>7.90000</td>
<td>8.322104</td>
</tr>
<tr>
<th>volatile acidity</th>
<td>0.672862</td>
<td>1.226846</td>
<td>-0.390858</td>
<td>1.0</td>
<td>0.52000</td>
<td>0.527738</td>
</tr>
<tr>
<th>citric acid</th>
<td>0.317891</td>
<td>-0.788476</td>
<td>0.226917</td>
<td>1.0</td>
<td>0.26000</td>
<td>0.271145</td>
</tr>
<tr>
<th>residual sugar</th>
<td>4.540655</td>
<td>28.617595</td>
<td>0.013732</td>
<td>0.0</td>
<td>2.20000</td>
<td>2.538806</td>
</tr>
<tr>
<th>chlorides</th>
<td>5.680347</td>
<td>41.715787</td>
<td>-0.128907</td>
<td>0.0</td>
<td>0.07900</td>
<td>0.087467</td>
</tr>
<tr>
<th>free sulfur dioxide</th>
<td>1.250567</td>
<td>2.023562</td>
<td>-0.050656</td>
<td>0.0</td>
<td>14.00000</td>
<td>15.874922</td>
</tr>
<tr>
<th>total sulfur dioxide</th>
<td>1.515531</td>
<td>3.809824</td>
<td>-0.185100</td>
<td>0.0</td>
<td>38.00000</td>
<td>46.467792</td>
</tr>
<tr>
<th>density</th>
<td>0.071288</td>
<td>0.934079</td>
<td>-0.174919</td>
<td>0.0</td>
<td>0.99675</td>
<td>0.996747</td>
</tr>
<tr>
<th>pH</th>
<td>0.194803</td>
<td>0.814690</td>
<td>-0.057094</td>
<td>2.0</td>
<td>3.31000</td>
<td>3.310864</td>
</tr>
<tr>
<th>sulphates</th>
<td>2.429115</td>
<td>11.712632</td>
<td>0.251685</td>
<td>2.0</td>
<td>0.62000</td>
<td>0.658078</td>
</tr>
<tr>
<th>alcohol</th>
<td>0.860829</td>
<td>0.200029</td>
<td>0.476166</td>
<td>0.0</td>
<td>10.20000</td>
<td>10.422983</td>
</tr>
<tr>
<th>quality</th>
<td>0.217802</td>
<td>0.296708</td>
<td>1.000000</td>
<td>0.0</td>
<td>6.00000</td>
<td>5.636023</td>
</tr>
<tr>
<th>type</th>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th rowspan="13" valign="top">white</th>
<th>fixed acidity</th>
<td>0.647981</td>
<td>2.176560</td>
<td>-0.114032</td>
<td>8.0</td>
<td>6.80000</td>
<td>6.855532</td>
</tr>
<tr>
<th>volatile acidity</th>
<td>1.578595</td>
<td>5.095526</td>
<td>-0.194976</td>
<td>7.0</td>
<td>0.26000</td>
<td>0.278252</td>
</tr>
<tr>
<th>citric acid</th>
<td>1.284217</td>
<td>6.182036</td>
<td>-0.009194</td>
<td>2.0</td>
<td>0.32000</td>
<td>0.334250</td>
</tr>
<tr>
<th>residual sugar</th>
<td>1.076601</td>
<td>3.469536</td>
<td>-0.097373</td>
<td>2.0</td>
<td>5.20000</td>
<td>6.393250</td>
</tr>
<tr>
<th>chlorides</th>
<td>5.023412</td>
<td>37.560847</td>
<td>-0.210181</td>
<td>2.0</td>
<td>0.04300</td>
<td>0.045778</td>
</tr>
<tr>
<th>free sulfur dioxide</th>
<td>1.406745</td>
<td>11.466342</td>
<td>0.008158</td>
<td>0.0</td>
<td>34.00000</td>
<td>35.308085</td>
</tr>
<tr>
<th>total sulfur dioxide</th>
<td>0.390710</td>
<td>0.571853</td>
<td>-0.174737</td>
<td>0.0</td>
<td>134.00000</td>
<td>138.360657</td>
</tr>
<tr>
<th>density</th>
<td>0.977773</td>
<td>9.793807</td>
<td>-0.307123</td>
<td>0.0</td>
<td>0.99374</td>
<td>0.994027</td>
</tr>
<tr>
<th>pH</th>
<td>0.458402</td>
<td>0.532552</td>
<td>0.098858</td>
<td>7.0</td>
<td>3.18000</td>
<td>3.188203</td>
</tr>
<tr>
<th>sulphates</th>
<td>0.977361</td>
<td>1.589847</td>
<td>0.053690</td>
<td>2.0</td>
<td>0.47000</td>
<td>0.489835</td>
</tr>
<tr>
<th>alcohol</th>
<td>0.487342</td>
<td>-0.698425</td>
<td>0.435575</td>
<td>0.0</td>
<td>10.40000</td>
<td>10.514267</td>
</tr>
<tr>
<th>quality</th>
<td>0.155796</td>
<td>0.216526</td>
<td>1.000000</td>
<td>0.0</td>
<td>6.00000</td>
<td>5.877909</td>
</tr>
<tr>
<th>type</th>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
```python
def my_fig(metric=desc.columns):
fig, ax = plt.subplots(1, 1, figsize=(10,10))
pd.DataFrame(desc[metric]).unstack()[metric].T.plot(kind='barh', ax=ax)
```
```python
interact(my_fig)
```
interactive(children=(Dropdown(description='metric', options=('skew', 'kurtosis', 'pearsons _quality', 'null c…
<function __main__.my_fig(metric=Index(['skew', 'kurtosis', 'pearsons _quality', 'null count', 'median',
'mean'],
dtype='object'))>
#### 🙋 Question 1: Discussion Around EDA Plot
What do we think of this plot?
> `metric = mean`, the chlorides values <br>
`metric = kurtosis`, residual sugar <br>
`metric = pearsons _quality`, _magnitudes_ and _directions_ <br>
How could we improve the plot, and what other plots would we like to see?
For instance, what if we were really curious about the high kurtosis for chlorides content? What more would we like to glean about the distribution of chloride content?
```python
# we can use df.describe() to take a look at the quantile values and min/max
df['chlorides'].describe()
```
count 6495.000000
mean 0.056042
std 0.035036
min 0.009000
25% 0.038000
50% 0.047000
75% 0.065000
max 0.611000
Name: chlorides, dtype: float64
```python
# and see how these values appear in a KDE
fig, ax = plt.subplots(1,1,figsize=(10,10))
df['chlorides'].plot(kind='kde',ax=ax)
ax.set_xlim(0,.61)
```
(0.0, 0.61)

```python
# lastly we may want to look at the raw values themselves. We can sort them
# to view outliers
df['chlorides'].sort_values(ascending=False)[:50]
```
5156 0.611
5049 0.610
5004 0.467
4979 0.464
5590 0.422
6268 0.415
6270 0.415
5652 0.415
6217 0.414
5949 0.414
5349 0.413
6158 0.403
4981 0.401
5628 0.387
6063 0.369
4915 0.368
5067 0.360
5179 0.358
484 0.346
5189 0.343
4917 0.341
5124 0.337
4940 0.332
1217 0.301
687 0.290
4473 0.271
5079 0.270
6272 0.267
5138 0.263
1865 0.255
5466 0.250
1034 0.244
5674 0.243
5675 0.241
683 0.240
1638 0.239
5045 0.236
6456 0.235
6468 0.230
5465 0.226
5464 0.226
5564 0.222
2186 0.217
5996 0.216
6333 0.214
5206 0.214
6332 0.214
5205 0.213
4497 0.212
1835 0.211
Name: chlorides, dtype: float64
### 1.2.2 Visualizing the data set - motivating regression analysis
In order to demonstrate simple linear regression with this dataset we will look at two particular features: `fixed acidity` and `density`.
We can create a scatter plot of `fixed acidity` vs `density` for the red wine in the dataset using `df.plot()` and see that there appears to be a general trend between the two features:
```python
fig, ax = plt.subplots(1, 1, figsize=(5,5))
df.loc[df['type'] == 'red'].plot(x='fixed acidity', y='density', ax=ax,
ls='', marker='.')
```
<AxesSubplot:xlabel='fixed acidity'>

Now the question is: How do we quantify this trend?
### 1.2.3 Estimating the regression coefficients
It looks like density increases with fixed acidity following a line, maybe something like
$$y(x)= m \cdot x + b \;\;\;\;\;\;\;\; \sf{eq. 1}$$
with \\( y=\sf density \\), \\(x=\sf fixed \space acidity\\), and \\(m\\) the slope and \\(b\\) the intercept.
To solve the problem, we need to find the values of \\(b\\) and \\(m\\) in equation 1 to best fit the data. This is called **linear regression**.
In linear regression our goal is to minimize the error between computed values of positions \\(y^{\sf calc}(x_i)\equiv y^{\sf calc}_i\\) and known values \\(y^{\sf exact}(x_i)\equiv y^{\sf exact}_i\\), i.e. find \\(b\\) and \\(m\\) which lead to lowest value of
$$\epsilon (m,b) =SS_{\sf res}=\sum_{i=1}^{N}\left(y^{\sf exact}_i - y^{\sf calc}_i\right)^2 = \sum_{i=1}^{N}\left(y^{\sf exact}_i - m\cdot x_i - b \right)^2\;\;\;\;\;\;\;\;\;\;\;\sf{eq. 2}$$
Otherwise known as the **residual sum of squares**
To find out more see e.g. https://en.wikipedia.org/wiki/Simple_linear_regression
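To make eq. 2 a bit more tangible, the sketch below evaluates \\(SS_{\sf res}\\) for two hand-picked slope/intercept guesses on the red-wine data (the guesses themselves are arbitrary, chosen only for illustration). Linear regression is simply the search for the \\(m\\) and \\(b\\) that make this number as small as possible:
```python
# Evaluate the residual sum of squares (eq. 2) for two arbitrary (m, b) guesses
import numpy as np

clean = red[['fixed acidity', 'density']].dropna(axis=0)
x_demo = clean['fixed acidity'].values
y_demo = clean['density'].values

def ss_res(m, b):
    return np.sum((y_demo - (m * x_demo + b)) ** 2)

print('guess 1 (roughly follows the trend):', ss_res(0.0007, 0.991))
print('guess 2 (clearly off):              ', ss_res(0.01, 0.9))
```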
#### 🙋 Question 2: linear regression loss function
> Do we always want *m* and *b* to be large positive numbers so as to minimize eq. 2?
Luckily [scikit-learn](https://scikit-learn.org/stable/) contains many functions related to regression including [linear regression](https://scikit-learn.org/stable/modules/linear_model.html).
The function we will use is called <code> LinearRegression() </code>.
```
# Create linear regression object
model = linear_model.LinearRegression()
# Use model to fit to the data, the x values are densities and the y values are fixed acidity
# Note that we need to reshape the vectors to be of the shape x - (n_samples, n_features) and y (n_samples, n_targets)
x = red['density'].values.reshape(-1, 1)
y = red['fixed acidity'].values.reshape(-1, 1)
```
```python
# Create linear regression object
model = linear_model.LinearRegression()
# Use model to fit to the data, the x values are densities and the y values are fixed acidity
# Note that we need to reshape the vectors to be of the shape x - (n_samples, n_features) and y (n_samples, n_targets)
x = red['density'].values.reshape(-1, 1)
y = red['fixed acidity'].values.reshape(-1, 1)
```
```
print(red['density'].values.shape, red['fixed acidity'].values.shape)
print(x.shape, y.shape)
```
```python
print(red['density'].values.shape, red['fixed acidity'].values.shape)
print(x.shape, y.shape)
```
(1599,) (1599,)
(1599, 1) (1599, 1)
```
# Fit to the data
model.fit(x, y)
# Extract the values of interest
m = model.coef_[0][0]
b = model.intercept_[0]
# Print the slope m and intercept b
print('Scikit learn - Slope: ', m , 'Intercept: ', b )
```
What happens when we try to fit the data as is?
```python
# Fit to the data
# model.fit(x, y)
```
#### 🏋️ Exercise 2: drop Null Values (and practice pandas operations)
Let's look back at our dataset description dataframe above, what do we notice, what contains null values?
There are several strategies for dealing with null values. For now let's take the simplest case and drop rows in our dataframe that contain null values.
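Dropping rows is not the only option. Another common, equally simple strategy is to fill each missing value with a column statistic such as the median; a sketch of that alternative is below, shown only for comparison (the exercise itself sticks with dropping):
```python
# Alternative to dropping: impute missing values with each column's median
numeric_cols = ['density', 'fixed acidity']
filled = df[numeric_cols].fillna(df[numeric_cols].median())
print(df[numeric_cols].isnull().sum().sum(), '->', filled.isnull().sum().sum())
```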
```python
# Cell for Exercise 2
# For this templated exercise you are going to complete everything in one line
# of code, but we are going to break it up into steps. So for each part (A, B,
# etc.) paste your answer from the previous part to begin (your operations will
# read from left to right)
# step A
# select the 'density' and 'fixed acidity' columns of red. make sure the return
# is a dataframe
df[['density', 'fixed acidity']]
# step B
# now use the dropna() method on axis 0 (the rows) to drop any null values
df[['density', 'fixed acidity']].dropna(axis=0)
# step C
# select column 'density'
df[['density', 'fixed acidity']].dropna(axis=0)['density']
# step D
# select the values
df[['density', 'fixed acidity']].dropna(axis=0)['density'].values
# step E
# reshape the result with an empty second dimension using .reshape() and store
# the result under variable x
x = df[['density', 'fixed acidity']].dropna(axis=0)['density'].values.reshape(-1, 1)
# repeat the same process with 'fixed acidity' and variable y
y = df[['density', 'fixed acidity']].dropna(axis=0)['fixed acidity'].values.reshape(-1, 1)
```
Now that we have our x and y arrays we can fit using ScikitLearn
```python
x = red[['density', 'fixed acidity']].dropna(axis=0)['density'].values.reshape(-1,1)
y = red[['density', 'fixed acidity']].dropna(axis=0)['fixed acidity'].values.reshape(-1,1)
```
#### 🙋 Question 3: why do we drop null values across both columns?
Notice in the above cell how we selected both `density` and `fixed acidity` before calling `dropna`? Why did we do that? Why didn't we just select `density` in the `x` variable case and `fixed acidity` in the `y` variable case?
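One way to see the issue is to drop nulls column by column and compare array lengths. The arrays come out different sizes (and, worse, their rows are no longer aligned with each other), which is exactly what selecting both columns before `dropna` avoids:
```python
# Dropping nulls separately gives arrays of different lengths
x_separate = df['density'].dropna().values          # density has 0 nulls
y_separate = df['fixed acidity'].dropna().values    # fixed acidity has 10 nulls
x_together = df[['density', 'fixed acidity']].dropna(axis=0)['density'].values

print(len(x_separate), len(y_separate), len(x_together))
```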
```python
# Fit to the data
model.fit(x, y)
# Extract the values of interest
m = model.coef_[0][0]
b = model.intercept_[0]
# Print the slope m and intercept b
print('Scikit learn - Slope: ', m , 'Intercept: ', b )
```
Scikit learn - Slope: 616.01314280661 Intercept: -605.6880086750523
#### 🏋️ Exercise 3: calculating y_pred
Estimate the values of \\(y\\) by using your fitted parameters. Hint: Use your <code>model.coef_</code> and <code>model.intercept_</code> parameters to estimate y_pred following equation 1
```python
# define y_pred in terms of m, x, and b
y_pred = m * x + b
# uncomment the following lines!
fig, ax = plt.subplots(1,1, figsize=(10,10))
ax.plot(x, y_pred, ls='', marker='*')
ax.plot(x, y, ls='', marker='.')
```
[<matplotlib.lines.Line2D at 0x7f6781becca0>]

We can also return predictions directly with the model object using the predict() method
> note: it is great to get in the habit of utilizing model outputs this way, as the API will be similar across all scikit-learn models (and sometimes models in other libraries as well!)
```python
# Another way to get this is using the model.predict function
y_pred = model.predict(x)
fig, ax = plt.subplots(1,1, figsize=(10,10))
ax.plot(x, y_pred, ls='', marker='*')
ax.plot(x, y, ls='', marker='.')
```
[<matplotlib.lines.Line2D at 0x7f6781b5c790>]

## 1.3 Error and topics of model fitting (assessing model accuracy)
### 1.3.1 Measuring the quality of fit
#### 1.3.1.1 Mean Squared Error
The plot in Section 1.2.3 looks good, but numerically what is our error? What is the mean value of $\epsilon$, i.e. the **Mean Squared Error (MSE)**?
$${\sf MSE}=\epsilon_{\sf ave} = \frac{\sum_{i=1}^{N}\left(y^{\sf exact}_i - m\cdot x_i - b \right)^2}{N}\;\;\;\;\;\sf eq. 3$$
```
# The mean squared error
print('Mean squared error: %.2f' % mean_squared_error(y, y_pred))
```
```python
# The mean squared error
print('Mean squared error: %.2f' % mean_squared_error(y, y_pred))
```
Mean squared error: 1.68
#### 1.3.1.2 R-square
Another way to measure error is the regression score, \\(R^2\\). \\(R^2\\) is generally defined as one minus the ratio of the residual sum of squares \\(SS_{\sf res}\\) to the total sum of squares \\(SS_{\sf tot}\\):
$$SS_{\sf tot}=\sum_{i=1}^{N} \left(y^{\sf exact}_i-\bar{y}\right)^2\;\;\;\;\; \sf eq. 4$$
$$SS_{\sf res}=\sum_{i=1}^{N} \left(y^{\sf exact}_i - y^{\sf calc}_i\right)^2\;\;\;\;\; \sf eq. 5$$
$$R^2 = 1 - {SS_{\sf res}\over SS_{\sf tot}} \;\;\;\;\;\; \sf eq. 6$$
In eq. 4, \\(\bar{y}=\sum_i y^{\sf exact}_i/N\\) is the average value of y for \\(N\\) points. The best value of \\(R^2\\) is 1 but it can also take a negative value if the error is large.
See all the different regression metrics [here](https://scikit-learn.org/stable/modules/model_evaluation.html).
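Before reaching for the built-in scorer, it can be reassuring to compute eqs. 4–6 by hand and confirm that the result matches `r2_score`. A short sketch using the `y` and `y_pred` arrays from above:
```python
# R-squared computed directly from eqs. 4-6; should match r2_score(y, y_pred)
ss_tot = np.sum((y - y.mean()) ** 2)
ss_res = np.sum((y - y_pred) ** 2)
print('manual R^2 :', 1 - ss_res / ss_tot)
print('sklearn R^2:', r2_score(y, y_pred))
```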
#### 🙋 Question 4: lets understand \\(R^2\\)
> Do we need a large value of \\(SS_{\sf tot}\\) to maximize \\(R^2\\) - is this something which we have the power to control?
```
# Print the coefficient of determination - 1 is perfect prediction
print('Coefficient of determination: %.2f' % r2_score(y, y_pred))
```
```python
# Print the coefficient of determination - 1 is perfect prediction
print('Coefficient of determination: %.2f' % r2_score(y, y_pred))
```
Coefficient of determination: 0.45
### 1.3.2 Corollaries with classification models
For classification tasks, we typically assess accuracy rather than MSE or R-square, since we are dealing with categorical rather than numerical predictions.
What is accuracy? It is defined as the ratio of True assignments to all assignments. For a binary positive/negative classification task this can be written as the following:
$$ Acc = \frac{T_p + T_n}{F_p + F_n + T_p + T_n} $$
Where \\(T\\) is True, \\(F\\) is false, \\(p\\) is positive, \\(n\\) is negative
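As a sanity check on the formula, the sketch below computes accuracy from raw true/false positive/negative counts and compares it with scikit-learn's `accuracy_score`. The labels here are made up for illustration:
```python
# Accuracy from TP/TN/FP/FN counts vs. scikit-learn's accuracy_score
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array(['red', 'red', 'white', 'white', 'red', 'white'])
y_hat  = np.array(['red', 'white', 'white', 'white', 'red', 'red'])

# treat 'red' as the "positive" class
tp = np.sum((y_hat == 'red')   & (y_true == 'red'))
tn = np.sum((y_hat == 'white') & (y_true == 'white'))
fp = np.sum((y_hat == 'red')   & (y_true == 'white'))
fn = np.sum((y_hat == 'white') & (y_true == 'red'))

print((tp + tn) / (tp + tn + fp + fn))   # the formula above
print(accuracy_score(y_true, y_hat))     # same number
```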
Just as a quick example, we can perform this type of task on our wine dataset by predicting whether a given data entry is for red or white wine:
```python
logdf = df.copy().dropna(axis=0)
y_train = logdf.pop('type').values.reshape(-1,1)
x_train = logdf.dropna(axis=0).values
```
```python
# train a logistic regression model on the training set
from sklearn.linear_model import LogisticRegression
# instantiate model
logreg = LogisticRegression()
# fit model
logreg.fit(x_train, y_train)
```
/home/wbeckner/anaconda3/envs/py39/lib/python3.9/site-packages/sklearn/utils/validation.py:993: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
/home/wbeckner/anaconda3/envs/py39/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:814: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
LogisticRegression()
```python
# make class predictions for the testing set
y_pred_class = logreg.predict(x_train)
```
```python
# calculate accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_train, y_pred_class))
```
0.9797307751818041
### 1.3.3 Beyond a single input feature
(_also: quick appreciative beat for folding in domain area expertise into our models and features_)
The **acidity** of the wine (the dependent variable) could depend on:
* potassium from the soil (increases alkalinity)
* unripe grapes (increases acidity)
* grapes grown in colder climates or reduced sunshine create less sugar (increases acidity)
* preprocessing such as adding tartaric acid to the grape juice before fermentation (increases acidity)
* malolactic fermentation (reduces acidity)
* \+ others
So in our lab today we will look at folding in additional variables in our dataset into the model
<hr style="border:1px solid grey"> </hr>
## 1.4 Multivariate regression
Let's now move from one input feature to many.
The value we aim to predict in this example is the density of each wine in our dataset. This is our dependent variable. We will look at how it is related to the other independent variables, also known as *input features*. We're going to do this with only the red wine data.
```python
red.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>type</th>
<th>fixed acidity</th>
<th>volatile acidity</th>
<th>citric acid</th>
<th>residual sugar</th>
<th>chlorides</th>
<th>free sulfur dioxide</th>
<th>total sulfur dioxide</th>
<th>density</th>
<th>pH</th>
<th>sulphates</th>
<th>alcohol</th>
<th>quality</th>
</tr>
</thead>
<tbody>
<tr>
<th>4898</th>
<td>red</td>
<td>7.4</td>
<td>0.70</td>
<td>0.00</td>
<td>1.9</td>
<td>0.076</td>
<td>11.0</td>
<td>34.0</td>
<td>0.9978</td>
<td>3.51</td>
<td>0.56</td>
<td>9.4</td>
<td>5</td>
</tr>
<tr>
<th>4899</th>
<td>red</td>
<td>7.8</td>
<td>0.88</td>
<td>0.00</td>
<td>2.6</td>
<td>0.098</td>
<td>25.0</td>
<td>67.0</td>
<td>0.9968</td>
<td>3.20</td>
<td>0.68</td>
<td>9.8</td>
<td>5</td>
</tr>
<tr>
<th>4900</th>
<td>red</td>
<td>7.8</td>
<td>0.76</td>
<td>0.04</td>
<td>2.3</td>
<td>0.092</td>
<td>15.0</td>
<td>54.0</td>
<td>0.9970</td>
<td>3.26</td>
<td>0.65</td>
<td>9.8</td>
<td>5</td>
</tr>
<tr>
<th>4901</th>
<td>red</td>
<td>11.2</td>
<td>0.28</td>
<td>0.56</td>
<td>1.9</td>
<td>0.075</td>
<td>17.0</td>
<td>60.0</td>
<td>0.9980</td>
<td>3.16</td>
<td>0.58</td>
<td>9.8</td>
<td>6</td>
</tr>
<tr>
<th>4902</th>
<td>red</td>
<td>7.4</td>
<td>0.70</td>
<td>0.00</td>
<td>1.9</td>
<td>0.076</td>
<td>11.0</td>
<td>34.0</td>
<td>0.9978</td>
<td>3.51</td>
<td>0.56</td>
<td>9.4</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
### 1.4.1 Linear regression with all input fields
For this example, notice we have a categorical data variable in the 'type' column. We will ignore this for now, and only work with our red wines. In the future we will discuss how to deal with categorical variables such as this in a mathematical representation.
```python
# this is a list of all our features or independent variables
features = list(red.columns[1:])
# we're going to remove our target or dependent variable, density from this
# list
features.remove('density')
# now we define X and y according to these lists of names
X = red.dropna(axis=0)[features].values
y = red.dropna(axis=0)['density'].values
# we will talk about scaling/centering our data at a later time
X = (X - X.mean(axis=0)) / X.std(axis=0)
```
```python
red.isnull().sum(axis=0) # we are getting rid of some nasty nulls!
```
type 0
fixed acidity 2
volatile acidity 1
citric acid 1
residual sugar 0
chlorides 0
free sulfur dioxide 0
total sulfur dioxide 0
density 0
pH 2
sulphates 2
alcohol 0
quality 0
dtype: int64
```
# Create linear regression object - note that we are using all the input features
model = linear_model.LinearRegression()
model.fit(X, y)
y_calc = model.predict(X)
```
```python
# Create linear regression object - note that we are using all the input features
model = linear_model.LinearRegression()
model.fit(X, y)
y_calc = model.predict(X)
```
Let's see what the coefficients look like ...
```
print("Fit coefficients: \n", model.coef_, "\nNumber of coefficients:", len(model.coef_))
```
```python
print("Fit coefficients: \n", model.coef_, "\nNumber of coefficients:", len(model.coef_))
```
Fit coefficients:
[ 1.64059336e-03 1.23999138e-04 1.16115898e-05 5.83002013e-04
8.35961822e-05 -9.17472420e-05 8.61246026e-05 7.80966358e-04
2.24558885e-04 -9.80600257e-04 -1.75587885e-05]
Number of coefficients: 11
We have 11!!! That's because we are regressing with respect to all **11 independent variables**!!!
So now, $$y_{\sf calc}= m_1x_1 +\, m_2x_2 \,+ \,m_3x_3 \,+\,... \,+ \,b =\sum_{i=1}^{11}m_i x_i + b\;\;\;\;\; \sf eq. 7$$
```
print("We have 13 slopes / weights:\n\n", model.coef_)
print("\nAnd one intercept: ", model.intercept_)
```
```python
print("We have 11 slopes / weights:\n\n", model.coef_)
print("\nAnd one intercept: ", model.intercept_)
```
We have 11 slopes / weights:
[ 1.64059336e-03 1.23999138e-04 1.16115898e-05 5.83002013e-04
8.35961822e-05 -9.17472420e-05 8.61246026e-05 7.80966358e-04
2.24558885e-04 -9.80600257e-04 -1.75587885e-05]
And one intercept: 0.9967517451349656
```
# This size should match the number of columns in X
if len(X[0]) == len(model.coef_):
print("All good! The number of coefficients matches the number of input features.")
else:
print("Hmm .. something strange is going on.")
```
```python
# This size should match the number of columns in X
if len(X[0]) == len(model.coef_):
print("All good! The number of coefficients matches the number of input features.")
else:
print("Hmm .. something strange is going on.")
```
All good! The number of coefficients matches the number of input features.
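Another useful check is to reproduce eq. 7 by hand: multiplying the feature matrix by the coefficient vector and adding the intercept should agree with `model.predict` up to floating-point noise.
```python
# eq. 7 written out with numpy: y_calc = X @ m + b
manual_y_calc = X @ model.coef_ + model.intercept_
print(np.allclose(manual_y_calc, model.predict(X)))  # expect True
```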
### 🏋️ Exercise 4: evaluate the error
Let's **evaluate the error** by computing the MSE and \\(R^2\\) metrics (see eq. 3 and 6).
```
# The mean squared error
# part A
# calculate the MSE using mean_squared_error()
# mse =
# part B
# calculate the R square using r2_score()
# r2 =
print('Mean squared error: {:.2f}'.format(mse)
print('Coefficient of determination: {:.2f}'.format(r2)
```
```python
# The mean squared error
# part A
# calculate the MSE using mean_squared_error()
mse = mean_squared_error(y, model.predict(X))
# part B
# calculate the R square using r2_score()
r2 = r2_score(y, model.predict(X))
print('Mean squared error: {:.2e}'.format(mse))
print('Coefficient of determination: {:.2f}'.format(r2))
```
Mean squared error: 5.62e-07
Coefficient of determination: 0.84
### 🏋️ Exercise 5: make a plot of y actual vs y predicted
We can also look at how well the computed values match the true values graphically by generating a scatterplot.
```
# generate a plot of y predicted vs y actual using plt.plot()
# remember you must set ls to an empty string and marker to some marker style
# plt.plot()
plt.title("Linear regression - computed values on entire data set", fontsize=16)
plt.xlabel("y$^{\sf calc}$")
plt.ylabel("y$^{\sf true}$")
plt.show()
```
```python
# generate a plot of y predicted vs y actual using plt.plot()
# remember you must set ls to an empty string and marker to some marker style
plt.plot(y, model.predict(X), ls='', marker='.')
plt.title("Linear regression - computed values on entire data set", fontsize=16)
plt.xlabel("y$^{\sf calc}$")
plt.ylabel("y$^{\sf true}$")
plt.show()
```

### 🍒 1.4.2 **Enrichment**: Splitting into train and test sets
> note: more of this topic is covered in [**Model Selection and Validation**](https://wesleybeckner.github.io/data_science_foundations/S3_Model_Selection_and_Validation/)
To see whether we can predict, we will carry out our regression only on a part, 80%, of the full data set. This part is called the **training** data. We will then test the trained model to predict the rest of the data, 20% - the **test** data. The function which fits won't see the test data until it has to predict it.
**We will motivate the use of train/test sets more explicitly in [Model Selection and Validation](https://wesleybeckner.github.io/data_science_foundations/S3_Model_Selection_and_Validation/)**
We start by splitting out data using scikit-learn's <code>train_test_split()</code> function:
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
```
```python
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.20,
random_state=42)
```
Now we check the size of <code> y_train </code> and <code> y_test </code>, the sum should be the size of y! If this works then we move on and carry out regression but we only use the training data!
```
if len(y_test)+len(y_train) == len(y):
print('All good, ready to to go and regress!\n')
# Carry out linear regression
print('Running linear regression algorithm on the training set\n')
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
print('Fit coefficients and intercept:\n\n', model.coef_, '\n\n', model.intercept_ )
# Predict on the test set
y_pred_test = model.predict(X_test)
```
```python
if len(y_test)+len(y_train) == len(y):
print('All good, ready to to go and regress!\n')
# Carry out linear regression
print('Running linear regression algorithm on the training set\n')
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
print('Fit coefficients and intercept:\n\n', model.coef_, '\n\n', model.intercept_ )
# Predict on the test set
y_pred_test = model.predict(X_test)
```
All good, ready to to go and regress!
Running linear regression algorithm on the training set
Fit coefficients and intercept:
[ 1.62385613e-03 1.10578142e-04 7.75216492e-07 5.87755741e-04
7.65190323e-05 -1.03490059e-04 8.87357873e-05 7.79083342e-04
2.23534769e-04 -9.99858829e-04 5.85256438e-06]
0.9967531628434799
Now we can plot our predicted values to see how accurate we are in predicting. We will generate a scatterplot and compute the MSE and \\(R^2\\) error metrics.
```
sns.scatterplot(x=y_pred_test, y=y_test, color="mediumvioletred", s=50)
plt.title("Linear regression - predict test set", fontsize=16)
plt.xlabel("y$^{\sf calc}$")
plt.ylabel("y$^{\sf true}$")
plt.show()
print('Mean squared error: %.2f' % mean_squared_error(y_test, y_pred_test))
print('Coefficient of determination: %.2f' % r2_score(y_test, y_pred_test))
```
```python
sns.scatterplot(x=y_pred_test, y=y_test, color="mediumvioletred", s=50)
plt.title("Linear regression - predict test set", fontsize=16)
plt.xlabel("y$^{\sf calc}$")
plt.ylabel("y$^{\sf true}$")
plt.show()
print('Mean squared error: %.2e' % mean_squared_error(y_test, y_pred_test))
print('Coefficient of determination: %.2f' % r2_score(y_test, y_pred_test))
```

Mean squared error: 5.45e-07
Coefficient of determination: 0.87
#### 1.4.2.1 Other data considerations
* Do we need all the independent variables?
* Topics of inferential statistics covered in a couple of sessions
* Can we output integer quality scores?
* Topics of non-binary classification tasks covered in week 4
### 🍒 1.4.3 **Enrichment**: Other regression algorithms
There are many other regression algorithms. The ones we want to highlight here are Ridge, LASSO, and Elastic Net. They differ by an added term to the loss function. Let's review. Eq. 2 expanded to multivariate form yields:
$$\sum_{i=1}^{N}(y_i - \sum_{j=1}^{P}x_{ij}\beta_{j})^2$$
for Ridge regression, we add a **_regularization_** term known as **_L2_** regularization:
$$\sum_{i=1}^{N}(y_i - \sum_{j=1}^{P}x_{ij}\beta_{j})^2 + \lambda \sum_{j=1}^{P}\beta_{j}^2$$
for **_LASSO_** (Least Absolute Shrinkage and Selection Operator) we add **_L1_** regularization:
$$\sum_{i=1}^{N}(y_i - \sum_{j=1}^{P}x_{ij}\beta_{j})^2 + \lambda \sum_{j=1}^{P}|\beta_{j}|$$
The key difference here is that LASSO will allow coefficients to shrink to 0 while Ridge regression will not. **_Elastic Net_** is a combination of these two regularization methods.
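To see that difference in practice, the sketch below fits Ridge and Lasso with the same (arbitrarily chosen) regularization strength and counts how many coefficients each drives to exactly zero. The particular alpha value is only for illustration; tuning it properly is the subject of the exercise further down:
```python
# Same regularization strength, different penalties: count exact-zero coefficients
import numpy as np
from sklearn import linear_model

lamb = 0.01  # arbitrary strength, purely for illustration
ridge_demo = linear_model.Ridge(alpha=lamb).fit(X_train, y_train)
lasso_demo = linear_model.Lasso(alpha=lamb).fit(X_train, y_train)

print('zero coefficients (Ridge):', np.sum(ridge_demo.coef_ == 0))
print('zero coefficients (Lasso):', np.sum(lasso_demo.coef_ == 0))
```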
```
model = linear_model.Ridge()
model.fit(X_train, y_train)
print('Fit coefficients and intercept:\n\n', model.coef_, '\n\n', model.intercept_ )
# Predict on the test set
y_calc_test = model.predict(X_test)
```
```python
model = linear_model.Ridge()
model.fit(X_train, y_train)
print('Fit coefficients and intercept:\n\n', model.coef_, '\n\n', model.intercept_ )
# Predict on the test set
y_calc_test = model.predict(X_test)
```
Fit coefficients and intercept:
[ 1.61930554e-03 1.11227142e-04 2.64709094e-06 5.87271456e-04
7.58510569e-05 -1.02851782e-04 8.76686650e-05 7.75641517e-04
2.23315063e-04 -9.98653815e-04 5.26839010e-06]
0.9967531358810221
```
sns.scatterplot(x=y_calc_test, y=y_test, color="lightseagreen", s=50)
plt.title("Ridge regression - predict test set",fontsize=16)
plt.xlabel("y$^{\sf calc}$")
plt.ylabel("y$^{\sf true}$")
plt.show()
print('Mean squared error: %.2f' % mean_squared_error(y_test, y_calc_test))
print('Coefficient of determination: %.2f' % r2_score(y_test, y_calc_test))
```
```python
sns.scatterplot(x=y_calc_test, y=y_test, color="lightseagreen", s=50)
plt.title("Ridge regression - predict test set",fontsize=16)
plt.xlabel("y$^{\sf calc}$")
plt.ylabel("y$^{\sf true}$")
plt.show()
print('Mean squared error: %.2e' % mean_squared_error(y_test, y_calc_test))
print('Coefficient of determination: %.2f' % r2_score(y_test, y_calc_test))
```

Mean squared error: 5.45e-07
Coefficient of determination: 0.87
#### 🏋️ Exercise 6: Tune Hyperparameter for Ridge Regression
Use the docstring to peek into the hyperparameters for Ridge Regression. What is the optimal value of lambda?
Plot the \\(\beta\\) values vs \\(\lambda\\) from the results of your analysis
```python
# cell for exercise 6
out_lambdas = []
out_coefs = []
out_scores = []
for i in range(10):
lambdas = []
coefs = []
scores = []
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.20)
for lamb in range(1,int(5e3),20):
model = linear_model.Ridge(alpha=lamb)
model.fit(X_train, y_train)
lambdas.append(lamb)
coefs.append(model.coef_)
scores.append(r2_score(y_test, model.predict(X_test)))
# print('MSE: %.4f' % mean_squared_error(y_test, model.predict(X_test)))
# print('R2: %.4f' % r2_score(y_test, model.predict(X_test)))
out_lambdas.append(lambdas)
out_coefs.append(coefs)
out_scores.append(scores)
```
```python
coef_means = np.array(out_coefs).mean(axis=0)
coef_stds = np.array(out_coefs).std(axis=0)
results_means = pd.DataFrame(coef_means,columns=features)
results_stds = pd.DataFrame(coef_stds,columns=features)
results_means['lambda'] = [i for i in lambdas]
```
```python
fig, ax = plt.subplots(1,1,figsize=(10,10))
for feat in features:
ax.errorbar([i for i in lambdas], results_means[feat], yerr=results_stds[feat], label=feat)
# results.plot('lambda', 'scores', ax=ax[1])
ax.legend()
```
<matplotlib.legend.Legend at 0x7f6777ffbe20>

```python
results = pd.DataFrame(coefs,columns=features)
results['lambda'] = [i for i in lambdas]
results['scores'] = scores
```
```python
fig, ax = plt.subplots(1,2,figsize=(10,5))
for feat in features:
results.plot('lambda', feat, ax=ax[0])
results.plot('lambda', 'scores', ax=ax[1])
```
<AxesSubplot:xlabel='lambda'>

## 🍒 1.5 **Enrichment**: Additional Regression Exercises
### Problem 1) Number and choice of input features
* Load the red wine dataset and evaluate how the linear regression predictions change as you change the **number and choice of input features**. The total number of columns in X is 11 and each column represents a specific input feature.
* Estimate the MSE
```
print(X_train.shape)
```
```python
print(X_train.shape)
```
(1274, 11)
If you want to use the first 5 features you could proceed as following:
```
X_train_five = X_train[:,0:5]
X_test_five = X_test[:,0:5]
```
```python
X_train_five = X_train[:,0:5]
X_test_five = X_test[:,0:5]
```
Check that the new variables have the shape your expect
```
print(X_train_five.shape)
print(X_test_five.shape)
```
```python
print(X_train_five.shape)
print(X_test_five.shape)
```
(1274, 5)
(319, 5)
Now you can use these to train your linear regression model and repeat for different numbers or sets of input features! Note that you do not need to change the output feature! Its size is independent of the number of input features, yet recall that its length is the same as the number of values per input feature.
Questions to think about while you work on this problem
- How many input feature variables does one need? Is there a maximum or minimum number?
- Could one input feature variable be better than the rest?
- What if values are missing for one of the input feature variables - is it still worth using it?
- Can you use **_L1_** or **_L2_** to determine these optimum features more quickly?
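One possible way to start exploring the first couple of questions is a quick sketch like the one below, which reuses the train/test split from above and watches the test-set MSE as more of the leading columns are included (this is only one of many reasonable approaches, not the intended solution):
```python
# Test-set MSE as a function of how many leading feature columns are used
from sklearn import linear_model
from sklearn.metrics import mean_squared_error

for n_feat in range(1, X_train.shape[1] + 1):
    sub_model = linear_model.LinearRegression()
    sub_model.fit(X_train[:, :n_feat], y_train)
    mse_n = mean_squared_error(y_test, sub_model.predict(X_test[:, :n_feat]))
    print('%2d features -> MSE %.2e' % (n_feat, mse_n))
```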
### Problem 2) Type of regression algorithm
Try using other types of linear regression methods on the wine dataset: the LASSO model and the Elastic net model which are described by the
<code > sklearn.linear_model.ElasticNet() </code> <br>
<code > sklearn.linear_model.Lasso() </code>
scikit-learn functions.
For more detail see [ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html#sklearn.linear_model.ElasticNet) and [Lasso]( https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html#sklearn.linear_model.Lasso).
Questions to think about while you work on this problem
- How does the error change with each model?
- Which model seems to perform best?
- How can you optimize the hyperparameter, \\(\lambda\\)
- Does one model do better than the other at determining which input features are more important?
- How about non linear regression / what if the data does not follow a line?
```python
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
from sklearn.linear_model import LinearRegression
```
```python
for model in [ElasticNet, Lasso, Ridge, LinearRegression]:
model = model()
model.fit(X_train, y_train)
print(str(model))
print('Mean squared error: %.ef' % mean_squared_error(y_test, model.predict(X_test)))
print('Coefficient of determination: %.2f' % r2_score(y_test, model.predict(X_test)))
print()
```
ElasticNet()
Mean squared error: 4e-06f
Coefficient of determination: -0.01
Lasso()
Mean squared error: 4e-06f
Coefficient of determination: -0.01
Ridge()
Mean squared error: 6e-07f
Coefficient of determination: 0.85
LinearRegression()
Mean squared error: 6e-07f
Coefficient of determination: 0.85
<hr style="border:1px solid grey"> </hr>
# References
* **Linear Regression**
To find out more see [simple linear regression](https://en.wikipedia.org/wiki/Simple_linear_regression)
* **scikit-learn**
* [Scikit-learn](https://scikit-learn.org/stable/)
* [Linear regression in scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)
* [Metrics of error](https://scikit-learn.org/stable/modules/model_evaluation.html)
* [The Boston dataset](https://scikit-learn.org/stable/datasets/index.html#boston-dataset)
* **Pearson correlation**
To find out more see [pearson](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient)
* **Irreducible error, bias and variance**
* Great Coursera videos [here](https://www.coursera.org/lecture/ml-regression/irreducible-error-and-bias-qlMrZ)
and [here](https://www.coursera.org/lecture/ml-regression/variance-and-the-bias-variance-tradeoff-ZvP40)
| 26.381356 | 416 | 0.632717 | eng_Latn | 0.83515 |
dbbcd62bc65390492fdb7c15fb09ea3f03810a11 | 403 | md | Markdown | README.md | wbtdev/cookiecutter-python | ef69d5d592043bbbe781421d133283895063ccbb | [
"MIT"
] | null | null | null | README.md | wbtdev/cookiecutter-python | ef69d5d592043bbbe781421d133283895063ccbb | [
"MIT"
] | null | null | null | README.md | wbtdev/cookiecutter-python | ef69d5d592043bbbe781421d133283895063ccbb | [
"MIT"
] | null | null | null | # cookiecutter-python
Originally based on the Cruft template by Timothy Crosley -
Modified to do what I need to do.
A simple template for my own personal Python 3.6+ projects utilizing black + isort + flake8 + poetry + mypy + bandit + bugbear + more goodness. Best used with [cruft](https://timothycrosley.github.io/cruft/)
To use:
cruft create https://github.com/wbtdev/cookiecutter-python/
| 36.636364 | 206 | 0.74938 | eng_Latn | 0.961242 |
dbbd0e9c3a4132fdaee07e89014aa487c938b855 | 23,620 | md | Markdown | README.md | yulchitaj/TEST3_1 | a0ec30b3cccf8d40ffbe1afbb4820bcfed70e907 | [
"MIT"
] | null | null | null | README.md | yulchitaj/TEST3_1 | a0ec30b3cccf8d40ffbe1afbb4820bcfed70e907 | [
"MIT"
] | null | null | null | README.md | yulchitaj/TEST3_1 | a0ec30b3cccf8d40ffbe1afbb4820bcfed70e907 | [
"MIT"
] | null | null | null | # Please direct all Support Questions and Concerns to [email protected]
## PubNub Gem version 3.7.0
##### YOU MUST HAVE A PUBNUB ACCOUNT TO USE THE API.
##### http://www.pubnub.com/account
www.pubnub.com - PubNub Real-time Push Service in the Cloud.
The PubNub Network is a blazingly fast Global Messaging Service for building real-time web and mobile apps. Thousands of apps and developers rely on PubNub for delivering human-perceptive real-time experiences that scale to millions of users worldwide. PubNub delivers the infrastructure needed to build amazing Mobile, MMO games, social apps, business collaborative solutions, and more.
### Upgrading from PubNub 3.5.x
We've made the response format compatible across all operations. This may break existing parsing of whereNow, leave, state, and PAM responses. So if you are monitoring these operation responses, please be sure to modify your code accordingly.
Examples of affected operations can be found [here](3.5_to_3.6_upgrade_notes.md).
### Upgrading from PubNub 3.3.x and Earlier
PubNub 3.7.0 is NOT compatible with versions of the PubNub Ruby client earlier than 3.4.
### Upgrading from PubNub 3.4 and higher versions
PubNub 3.7.0 is compatible with version 3.4.
## Important Notice about Blocking vs Non-Blocking Calls
#### Asynchronous vs Synchronous Requests
Every operation is by default asynchronous. Asynchronous operations will not block your main thread and will be fired within a new thread.
This can cause issues under certain situations, depending on your implementation. To work around this, you can force an operation to run synchronously (block) via the :http_sync option:
```ruby
:http_sync => true
```
Unless otherwise specified, this option is default implied false (all calls by default will be async).
#### Message Handling: callback, block, return
Results are provided via block, callback, and return, depending on how you structure the call. The callback will be fired for every message that the event gets in response. Synchronous events will return an array of envelopes (if you passed a callback to a synchronous event, it will be called too!).
### Code Examples
#### Require
```ruby
require 'pubnub'
```
#### Init and instantiate a new PubNub instance
```ruby
# If you wish to override the default logger, create one and pass it in.
# Default logger writes into pubnub.log file
my_logger = Logger.new(STDOUT)
pubnub = Pubnub.new(
:subscribe_key => 'demo',
:publish_key => 'demo',
:error_callback => lambda { |msg|
puts "Error callback says: #{msg.inspect}"
},
:connect_callback => lambda { |msg|
puts "CONNECTED: #{msg.inspect}"
},
:logger => my_logger
)
```
* subscribe_key is your subscribe key
* publish_key is your publish key
* origin is your custom, PubNub origin (Contact support before production to get your own!)
* error_callback is the callback for errors
* connect_callback is the callback that lets you know when you're connected to the origin
#### Making PubNub calls
There are a few different ways to make any given PubNub call. How to do it depends first on whether or not you want the call to be blocking (synchronous), or not blocking (asynchronous).
##### Asynchronous (non-blocking) calling
If you wish to make asynchronous calls (implemented via EventMachine), you have a few different patterns you can follow:
```ruby
# Lets use a callback for the first example...
cb = lambda { |envelope| puts envelope.message }
# Asynchronous is implicitly enabled by default, if you do not provide an :http_sync option
pubnub.publish(:message => msg, :channel => channel, :callback => cb)
# You can also explicitly request async with :http_sync => false
pubnub.publish(:message => msg, :channel => channel, :callback => cb, :http_sync => false)
# Alternatively, you can pass in the callback as a block
pubnub.publish(:message => msg, :channel => channel, &cb)
pubnub.publish(:message => msg, :channel => channel) do |envelope|
puts envelope.message
puts envelope.channel
puts envelope.status_code
puts envelope.timetoken
end
```
##### Synchronous (blocking) calling
Synchronous calling is required when using PubNub with JRuby.
If you'd prefer to make your calls blocking (implemented via HTTParty), set :http_sync => true. Again, there is a bit of flexibility in how this can be done:
```ruby
# Lets use a callback for the first example...
cb = lambda { |envelope| puts envelope.message }
# Sync (blocking) with a callback (if you wanted to)
pubnub.publish(:http_sync => true, :message => msg, :channel => channel, &cb)
# Sync (blocking), with assignment via return
myResponse = pubnub.publish(:http_sync => true, :message => msg, :channel => channel)
puts "myR: #{myResponse.inspect}"
# Sync (blocking), with a block
pubnub.publish(:http_sync => true, :message => msg, :channel => channel) do |envelope|
puts envelope.message
puts envelope.channel
puts envelope.status_code
puts envelope.timetoken
end
```
#### Callback / Block calling sequence
When you receive messages asynchronously from PubNub, your block or callback will be called once for each message received. For example, if you are subscribed to a channel using the callback pattern, and you receive 3 messages from your call, the callback will be called 3 times, 1 time for each unique received message.
Conceptually, the callback or block is fired once for each message in the raw server response:
```ruby
envelopes.each do |envelope|
callback.call envelope
end
```
#### The Envelope Object
The callback (or block) will receive the message(s) in the form of an envelope hash. An envelope will contain the following keys:
* message (aliased as 'msg') -> Holds the message; for publish, holds the published message
* response_message -> as above, except that for publish it holds the server response (String "Send")
* channel -> Holds the channel for the current message
* timetoken -> Timetoken of the server response
* status (aliased as 'status_code') -> Server response status code
* response -> The whole, unmodified server response
* first -> true if it's the first envelope in a single response's messages array
* last -> true if it's the last envelope in a single response's messages array
* A few more keys are specific to certain events; you will find them in the descriptions of those events
Don't confuse the **message** with the **response**. In a given callback cycle, the **response** will always be the same, as it's the raw server response. It may consist of one or more messages.
Internally, the block or callback iterates over the response array, similar to:
```ruby
envelopes.each do |envelope|
callback.call envelope
end
```
In a given callback cycle, the **envelope** will be the current element of the response array.
### Simple Usage Examples
#### Init and instantiate a new PubNub instance
```ruby
pubnub = Pubnub.new(
:subscribe_key => 'demo',
:publish_key => 'demo',
:origin => origin,
:uuid => "myUserID",
:error_callback => lambda { |msg|
puts "SOMETHING TERRIBLE HAPPENED HERE: #{msg.inspect}"
},
:connect_callback => lambda { |msg|
puts "CONNECTED: #{msg.inspect}"
}
)
```
#### Publish
When publishing, send a string, number, array, or hash. PubNub automatically serializes it to JSON for you, so you don't have to.
```ruby
@my_callback = lambda { |envelope| puts(envelope.msg) }
pubnub.publish(
:channel => "hello_world",
:message => "hi",
:callback => @my_callback
)
```
#### Subscribe
```ruby
pubnub.subscribe(
:channel => :hello_world,
:callback => @my_callback
)
```
#### Leave
Unsubscribes from the given channel (`:channel`) or channel group (`:group`) and
fires a leave event. You need to be subscribed (only async subscriptions count) to the
channel that you want to leave.
```ruby
pubnub.subscribe(
:channel => :hello_world,
:callback => @my_callback
)
pubnub.leave(
:channel => :hello_world,
:callback => @my_callback
)
```
If you want to force-leave a channel that you're not subscribed to, you can pass the :force option to the event:
```ruby
# Wrong
pubnub.leave(
:channel => :not_subbed_channel,
:callback => @my_callback
)
# We'll get error:
Pubnub::ArgumentError: You cannot leave channel that is not subscribed
# Good
p.leave(
:channel => :force_leave,
:force => true,
:callback => @my_callback
)
```
#### History
Retrieve previously published messages (requires activation via admin.pubnub.com)
Optional start, end, and reverse option usage can be found in the tests.
```ruby
pubnub.history(
:channel => channel,
:count => 10,
:callback => @my_callback
)
```
#### Presence
In real-time see people join and leave with occupancy summaries. (requires activation via admin.pubnub.com)
```ruby
pubnub.presence(
:channel => :hello_world,
:callback => @my_callback
)
```
```ruby
pubnub.presence(
:group => 'foo:',
:callback => @my_callback
)
```
#### HereNow
See who is "here now" in a channel (:channel) or channel group (:group) at this
very moment.
```ruby
pubnub.here_now(
:channel => channel,
:callback => @my_callback
)
```
```ruby
pubnub.here_now(
:group => channel_group,
:callback => @my_callback
)
```
#### WhereNow
See where a client with a specific uuid is
```ruby
p.where_now(
:uuid => :my_friend,
:callback => @my_callback
)
```
#### UUID
UUID is set in the initializer. A unique one is created, unless you specify one explicitly. To retrieve the current UUID:
```ruby
pubnub.uuid
```
If you wish to manually set a custom UUID, pass in a uuid key in the initializer. See "Init and instantiate a new PubNub instance" for an example.
#### Time
Get the current PubNub time. This is great to use as a "PubNub Ping"
```ruby
pubnub.time("callback" => @my_callback)
```
### Channel Groups
Channel grouping is a new feature introduced in PubNub 3.7. It allows you to group
channels into channel groups and channel groups into namespaces. For example, you
can add the `weather` and `sport` channels to a `news` channel group, and the `news` and
`local_ads` channel groups to a `tv` namespace. Namespaced channel groups are described as
`namespace:channel_group`, e.g. `tv:news`. All channel groups in a namespace are
described as `namespace:`, e.g. `tv:`. Non-namespaced channel groups are
described as `non-namespaced-channel-group`, e.g. `global_alerts`.
All channel groups specific operations can be issued with
`#channel_registration` method.
#### Getting info
##### Getting all namespaces
```ruby
# Response envelope will hold info as hash in payload attribute.
pubnub.channel_registration(action: :list_namespaces, http_sync: true)
```
##### Getting all non-namespaced channel groups
```ruby
# Response envelope will hold info as hash in payload attribute.
pubnub.channel_registration(action: :list_groups, http_sync: true)
```
##### Getting all channel groups in given namespace
```ruby
# Response envelope will hold info as hash in payload attribute.
pubnub.channel_registration(action: :get, group: 'foo:', http_sync: true)
```
##### Getting all channels in channel group
```ruby
# Response envelope will hold info as hash in payload attribute.
pubnub.channel_registration(action: :get, group: 'foo:foo', http_sync: true)
```
#### Adding
##### Add channel to namespaced channel group
```ruby
pubnub.channel_registration(action: :add, group: 'foo:new_group', channel: :bot, http_sync: true)
```
##### Add channel to non-namespaced channel group
```ruby
pubnub.channel_registration(action: :add, group: 'new_group', channel: :bot, http_sync: true)
```
#### Removing
##### Remove namespace and all channel groups
```ruby
pubnub.channel_registration(action: :remove, group: 'foo:', http_sync: true)
```
##### Remove namespaced channel group
```ruby
pubnub.channel_registration(action: :remove, group: 'foo:cg', http_sync: true)
```
##### Remove non-namespaced channel group
```ruby
pubnub.channel_registration(action: :remove, group: 'cg', http_sync: true)
```
##### Remove channel from namespaced channel group
```ruby
pubnub.channel_registration(action: :remove, group: 'foo:cg', channel: :to_remove, http_sync: true)
```
##### Remove channel from non-namespaced channel group
```ruby
pubnub.channel_registration(action: :remove, group: 'cg', channel: :to_remove, http_sync: true)
```
### PAM
Developers can grant/revoke/audit fine-grained permissions for their real-time apps and data at various levels.
Envelopes returned by PAM events have additional :service and :payload keys.
#### PAM Usage Examples
When you issue a PAM operation, you can pass the `presence` key, the 'channel' key, or both.
```ruby
# Will grant :r and :w permissions to demo-pnpres channel
pubnub.grant(:presence => :demo) do |envelope|
puts envelope.message
end
# Will grant :r and :w permissions to demo channel
pubnub.grant(:channel => :demo) do |envelope|
puts envelope.message
end
# Will grant :r and :w permissions to demo and demo-pnpres channels
pubnub.grant(:presence => :demo, :channel => :demo) do |envelope|
puts envelope.message
end
# For channel groups, all of the above work.
# But channel groups additionally have a :manage option.
# Will grant :r, :w and :m permissions to foo:foo
pubnub.grant(:group => 'foo:foo') do |envelope|
puts envelope.message
end
```
##### Audit
Audits auths for given parameters
```ruby
pubnub.audit(:channel => :forbidden_for_jim) do |envelope|
puts envelope.payload
end
pubnub.audit(:channel => :forbidden_for_jim, :auth_key => :jim) do |envelope|
puts envelope.payload
end
```
##### Grant
Grants auths for given parameters; you can pass :read and :write keys as parameters
```ruby
pubnub.grant(:channel => :forbidden_to_write, :read => true, :write => false) do |envelope|
puts envelope.payload
end
pubnub.grant(:channel => :forbidden_to_write, :read => true, :write => true, :auth_key => :admin) do |envelope|
puts envelope.payload
end
```
##### Revoke
Revokes right to read and write. Same as granting r:0 w:0.
```ruby
pubnub.revoke(:channel => :forbidden) do |envelope|
puts envelope.payload
end
pubnub.grant(:channel => :forbidden, :auth_key => :godzilla) do |envelope|
puts envelope.payload
end
```
### Advanced Usage Examples
##### Init
```ruby
# The example below shows passing more options to the client.
# Pubnub.new returns a Pubnub::Client instance.
pubnub = Pubnub.new(
:error_callback => custom_error_callback,
:connect_callback => custom_connect_callback,
:ssl => true,
:uuid => 'newton',
:port => 80,
:origin => custom_origin,
:subscribe_timeout => 310,
:non_subscribe_timeout => 5,
  :max_retries => 10, # max retries if the response contains invalid JSON
:ttl => custom_default_ttl_for_pam,
:secret_key => 0
)
```
###### Custom logger
You can pass a custom logger via the :logger key when creating a new Pubnub instance. Log entries produced by the client have their progname set to 'Pubnub'.
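A minimal sketch, assuming a standard library Logger writing to a file (the keys besides :logger are just the usual demo credentials):
```ruby
require 'logger'
require 'pubnub'

# Any Logger-compatible object can be passed via :logger.
pubnub = Pubnub.new(
  :subscribe_key => 'demo',
  :publish_key   => 'demo',
  :logger        => Logger.new('pubnub.log')
)
```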
##### Publish
```ruby
# The message can be any object that has a .to_json method.
# You do not need to JSON-encode the message before sending!
# This publish call blocks the main thread until the callback finishes, because :http_sync is set to true.
pubnub.publish(
  :message   => message,
  :channel   => :whatever,
  :http_sync => true )
```
##### Subscribe
```ruby
# You can pass :channel or :channels as a String, a Symbol, an Array of either, or a comma-separated list. Remember: a space is a valid part of a channel name, so there should be no spaces around the commas (unless you want them).
# Some example of valid channels:
# :example_symbol
# 'example_string'
# [:one, :two, 'three']
# [:anything]
# 'one,two,three'
# Firing a sync subscribe can block your thread for up to ~5 minutes.
# When there is no traffic on the channel, the server sends a timetoken
# without any messages roughly every 300 s.
# The first sync subscribe only updates your timetoken; you will not get any messages.
# example:
pubnub.subscribe(:channel => 'alerts', :http_sync => true) # just updates the timetoken
pubnub.subscribe(:channel => 'alerts', :http_sync => true) # fires a request with the current timetoken and may return messages
# Async subscribe starts an infinite loop in a separate thread (an EM.periodic_timer, precisely).
# It keeps updating your timetoken and fires the given callback for every message it receives.
# example:
pubnub.subscribe(
:channel => 'fight_log'
) do |envelope|
puts envelope.message['attacker']
puts envelope.message['defender']
puts envelope.message['damage']
end
```
###### Channel groups
You can subscribe to a channel group the same way you subscribe to channels.
```ruby
pubnub.subscribe(group: 'foo:foo', channel: :ping_3, callback: callback)
```
Response envelopes hold both channel and channel_group values. So, if you subscribe to a channel group and your callback needs to know where an envelope came from, you can check it using `envelope.channel_group`. Of course, you can subscribe to a channel group and a plain channel at once.
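For example, a sketch that subscribes to a plain channel and a channel group at once and checks where each envelope came from (the channel and group names reuse the ones from the example above):
```ruby
pubnub.subscribe(group: 'foo:foo', channel: :ping_3) do |envelope|
  if envelope.channel_group
    puts "from group #{envelope.channel_group}, channel #{envelope.channel}"
  else
    puts "from plain channel #{envelope.channel}"
  end
end
```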
##### History
History returns :count messages from the given channel.
```ruby
pubnub.history(
:channel => :actions,
:count => 10,
:start => 13942156708212448,
:end => 13942156908212448,
:callback => replay
)
```
:reverse set to true will traverse the timeline in reverse, starting with the oldest message first. The default is false.
If both start and end arguments are provided, reverse is ignored and messages are returned starting with the newest message.
```ruby
pubnub.history(
:channel => :actions,
:count => 10,
:reverse => true,
:callback => replay
)
```
The history envelope also contains .history_start and .history_end values.
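A short sketch of reading those values, assuming the callback-block form used elsewhere in this README:
```ruby
pubnub.history(:channel => :actions, :count => 10) do |envelope|
  puts envelope.history_start # start of the returned time window
  puts envelope.history_end   # end of the returned time window
end
```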
##### Paged History
Paginate through your history. You can pass the `:channel`, `:page`, `:limit`, `:callback`, `:http_sync`, `:start` and `:end` options; all of them work the same way as in the history event.
```ruby
pubnub.paged_history(
:channel => :actions,
:limit => 10,
:page => 3,
:http_sync => true
)
```
##### Presence
Presence works exactly the same way as subscribe; it just appends '-pnpres' to the channel name.
```ruby
pubnub.presence(
:channel => :mars
) do |envelope|
show_in_roster(envelope.uuid)
end
```
##### HereNow
HereNow shows who is currently subscribed to a channel and how many clients are online on that channel.
```ruby
pubnub.here_now(
:channel => :pamam_moon_iv
) do |envelope|
puts envelope.parsed_response['uuids']
puts envelope.parsed_response['occupancy']
end
```
You can also omit the channel. You will then get a global HereNow response that covers all channels.
```ruby
pubnub.here_now { |envelope| puts envelope.parsed_response['channels'] }
```
##### Heartbeat
Heartbeat (expressed in seconds) signals to the server that the client is still online. If the client disconnects without a leave event, others observing presence on the channel will not notice that this client has left until up to the heartbeat interval has elapsed.
You will normally never need to touch this value unless your Ruby client is on a poor or mobile connection.
```ruby
pubnub = Pubnub.new(:subscribe_key => 'demo', :heartbeat => 60)
```
Update it via `heartbeat=` and `set_heartbeat()`:
```ruby
pubnub.heartbeat = 120
pubnub.set_heartbeat 240
```
Read it via `heartbeat` and `get_heartbeat()`:
```ruby
pubnub.heartbeat
pubnub.get_heartbeat
```
#### PAM
PAM allows you to grant read and write access based on channels and auth_keys.
Every PAM event requires :secret_key (remember to set it when initializing the Pubnub client).
PAM actions can take the :presence option, which grants/revokes/audits permissions on the given presence channel.
The :presence option can be used along with :channel.
##### Audit
```ruby
pubnub.audit(:channel => 'hidden_system'){ |envelope| puts envelope.msg }
```
##### Grant
```ruby
# Channel level
pubnub.grant(:channel => 'hidden_system', :read => true, :write => false){ |envelope| puts envelope.msg }
# Auth key level
pubnub.grant(:channel => 'hidden_system', :read => true, :write => false, :auth_key => :lemon){ |envelope| puts envelope.msg }
```
##### Revoke
Revoke is equivalent to grant with :read => false and :write => false.
```ruby
# Channel level
pubnub.revoke(:channel => 'hidden_system'){ |envelope| puts envelope.msg }
# Auth key level
pubnub.revoke(:channel => 'hidden_system', :auth_key => :lemon){ |envelope| puts envelope.msg }
```
### State
State is stored on the server for each subscribed UUID. You can set state in a few ways and retrieve it from the server.
#### Setting state
```ruby
# You can set state in a few ways
# Using subscribe
pubnub.subscribe(:channel => 'my_channel', :state => {:my_channel => {:key => :value}}){ |e| puts e.msg }
# Be aware that state has to be a hash of hashes whose keys are the subscribed channel names
# Using the #set_state event
pubnub.set_state(:state => {:key => :value}, :channel => :my_channel, :http_sync => true)
# or with channel groups
pubnub.set_state(:state => {:key => :value}, :group => 'foo:foo', :http_sync => true)
```
#### Getting state
```ruby
# All you need to know is just the uuid and the channel
pubnub.state(:uuid => 'uuid_client_that_i_am_searching_for', :http_sync => true)
```
#### State and channel groups
State works fine with channel groups too! Just pass the `:group` key when setting state, or do it while subscribing.
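A sketch of passing state while subscribing to a channel group — assuming the state hash is keyed by the group name, the same way it is keyed by channel names in the subscribe example above:
```ruby
pubnub.subscribe(
  :group => 'foo:foo',
  :state => { 'foo:foo' => { :mood => :happy } }
) { |envelope| puts envelope.message }
```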
### Other
Advanced usage examples can also be found in the examples directory.
#### demo_console
A demo console app that shows how to use just about every PubNub Ruby call, in just about every possible way!
#### translator
A chat room with real-time translation! It uses PubNub for the real-time chat and Bing translation services for the babel.
#### pubnub_livestream
A demo Rails chat app. It has also been tested on Heroku.
#### sinatra
Sinatra demo.
#### sub_and_unsub_1
Mixing up some async pubs and subs, using blocks and callbacks.
#### serial_publish
Publish 10000 times with an explicit 0.05s delay between publishes
## Proxy support
Basic proxy support is available via ENV globals; before initializing Pubnub, just set:
```ruby
ENV['HTTP_PROXY'] = 'http://my.proxy/'
ENV['HTTP_PROXY_USER'] = 'user'
ENV['HTTP_PROXY_PASS'] = 'secret'
```
After that, you can initialize the Pubnub object as usual.
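For example, a minimal sketch with the demo keys:
```ruby
# The proxy settings above are picked up from ENV; nothing extra is needed here.
pubnub = Pubnub.new(:subscribe_key => 'demo', :publish_key => 'demo')
```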
## Note for Passenger users
Passenger orphans threads, which causes issues with EventMachine (EM), which is needed to run async Pubnub events.
Below is a fix that worked in our tests. You should run this code from your initializers.
Some other environments can cause similar problems with EM; if you think you're affected, feel free to open an issue and we will do our best to help.
```ruby
module Pubnub
# Taken from http://www.hiringthing.com/2011/11/04/eventmachine-with-rails.html
# Thanks Joshua!
def self.start
if defined?(PhusionPassenger)
PhusionPassenger.on_event(:starting_worker_process) do |forked|
# for passenger, we need to avoid orphaned threads
$my_logger.debug "=> Starting worker process"
if forked && EM.reactor_running?
$my_logger.debug "=> EventMachine stopped fork"
EM.stop
end
Thread.new {
EM.run do
$my_logger.debug "=> EventMachine started"
end
}
die_gracefully_on_signal
end
end
end
def self.die_gracefully_on_signal
$my_logger.debug "=> EventMachine stopped die"
Signal.trap("INT") { EM.stop }
Signal.trap("TERM") { EM.stop }
end
end
Pubnub.start
```
# Please direct all Support Questions and Concerns to [email protected]
node_modules/postcss-discard-comments/CHANGELOG.md

# 2.0.4
* Now compiled with Babel 6.
# 2.0.3
* Fixes an issue where comments that were removed from selectors were replaced
by a single space.
# 2.0.2
* Fixes an integration issue where comments inside values transformed by other
processors had their values reset to their original state before the
comments were removed.
# 2.0.1
* Replaces a dependency on node-balanced with internal comments parser.
# 2.0.0
* Upgraded to PostCSS 5 (thanks to @avanes).
# 1.2.1
* Improved performance by iterating the AST in a single pass.
# 1.2.0
* Adds support for user-directed removal of comments, with the `remove`
option (thanks to @dmitrykiselyov).
* `removeAllButFirst` now operates on each CSS tree, rather than the first one
passed to the plugin.
* Fixes to pass the PostCSS plugin guidelines.
# 1.1.3
* As PostCSS handles the source map content, there is no need to check for
the existence of a '#' at position 0 of the comment. This patch fixes this
behaviour.
# 1.1.2
* Fixes an issue where comment separated values were being incorrectly
transformed to not have spaces separating them instead, in `decl.value`.
e.g. `10px/*test*/20px` became `10px20px` in `decl.value` but not
`decl._value.raw`.
# 1.1.1
* Fixes a bug where non-special comments, with an exclamation mark in any part
of the text, were not being removed.
# 1.1.0
* Now uses the PostCSS `4.1` plugin API.
* Adds support for transforming comments inside `!important` values.
# 1.0.2
* Adds a JSHint config, tidy up unnecessary lines of code.
# 1.0.1
* Fixed a bug which affected initializing the plugin with no options.
* Stopped the plugin from trying to match comments in empty strings.
# 1.0.0
* Initial release.