<img alt="React Native Bouncy Checkbox" src="assets/logo.png" width="1050"/>
[](https://github.com/WrathChaos/react-native-bouncy-checkbox)
[](https://github.com/WrathChaos/react-native-bouncy-checkbox)
[](https://www.npmjs.com/package/react-native-bouncy-checkbox)
[](https://www.npmjs.com/package/react-native-bouncy-checkbox)

[](https://opensource.org/licenses/MIT)
[](https://github.com/prettier/prettier)
<table>
<tr>
<td align="center">
<img alt="React Native Bouncy Checkbox"
src="assets/Screenshots/react-native-bouncy-checkbox.gif" />
</td>
<td align="center">
<img alt="React Native Bouncy Checkbox"
src="assets/Screenshots/react-native-bouncy-checkbox.png" />
</td>
</tr>
</table>
## Installation
Add the dependency:
### React Native
```bash
npm i react-native-bouncy-checkbox
```
## Version 2.0.0 is Here 🥳
- Typescript
- **Zero Dependency**
- More Customization Options
- New customization props are available:
- `iconStyle`
- `bounceEffect`
- `bounceFriction`
## Import
```js
import BouncyCheckbox from "react-native-bouncy-checkbox";
```
# Usage
## Basic Usage
```js
<BouncyCheckbox onPress={(isChecked: boolean) => {}} />
```
## Advanced Custom Usage
```jsx
<BouncyCheckbox
size={25}
fillColor="red"
unfillColor="#FFFFFF"
text="Custom Checkbox"
iconStyle={{ borderColor: "red" }}
textStyle={{ fontFamily: "JosefinSans-Regular" }}
onPress={(isChecked: boolean) => {}}
/>
```
### Configuration - Props
| Property | Type | Default | Description |
| -------------------- | :-------: | :------------: | ----------------------------------------------------------- |
| text | string | undefined | set the checkbox's text |
| onPress | function | null | set your own onPress functionality after the bounce effect |
| disableText | boolean | false | if you want to use checkbox without text, you can enable it |
| size | number | 25 | size of `width` and `height` of the checkbox |
| style | style | default | set/override the container style |
| textStyle | style | default | set/override the text style |
| iconStyle | style | default | set/override the icon style |
| isChecked | boolean | false | set the default checkbox value |
| fillColor | color | #f09f48 | change the checkbox's filled color |
| unfillColor | color | transparent | change the checkbox's un-filled color when it's not checked |
| useNativeDriver | boolean | true | enable/disable the useNativeDriver for animation |
| iconComponent | component | Icon | set your own icon component |
| checkIconImageSource | image | default | set your own check icon image |
| ImageComponent | component | Image | set your own Image component instead of RN's default Image |
| bounceEffect | number | 1 | change the bounce effect |
| bounceFriction | number | 3 | change the bounce friction |
## Synthetic Press Functionality with Manual Check State
<div>
<img alt="React Native Bouncy Checkbox"
src="assets/Screenshots/react-native-bouncy-checkbox-syntetic-onpress.gif" />
</div>
Please check the `example-manual-state` runnable project to see how to make it work in a real project.
<b><i>Be careful: while using `disableBuiltInState` you MUST set the `isChecked` prop to manage your own check state manually.</i></b>
Here is the basic implementation:
```jsx
import React from "react";
import {
SafeAreaView,
StyleSheet,
Text,
TouchableOpacity,
View,
} from "react-native";
import BouncyCheckbox from "./lib/BouncyCheckbox";
import RNBounceable from "@freakycoder/react-native-bounceable";
const App = () => {
let bouncyCheckboxRef: BouncyCheckbox | null = null;
const [checkboxState, setCheckboxState] = React.useState(false);
return (
<SafeAreaView
style={{
flex: 1,
alignItems: "center",
justifyContent: "center",
}}
>
<View
style={{
height: 30,
width: 150,
alignItems: "center",
justifyContent: "center",
borderRadius: 12,
backgroundColor: checkboxState ? "#34eb83" : "#eb4034",
}}
>
<Text
style={{ color: "#fff" }}
>{`Check Status: ${checkboxState.toString()}`}</Text>
</View>
<BouncyCheckbox
style={{ marginTop: 16 }}
ref={(ref: any) => (bouncyCheckboxRef = ref)}
isChecked={checkboxState}
text="Synthetic Checkbox"
disableBuiltInState
onPress={(isChecked: boolean = false) =>
setCheckboxState(!checkboxState)
}
/>
<RNBounceable
style={{
marginTop: 16,
height: 50,
width: "90%",
backgroundColor: "#ffc484",
borderRadius: 12,
alignItems: "center",
justifyContent: "center",
}}
onPress={() => bouncyCheckboxRef?.onPress()}
>
<Text style={{ color: "#fff" }}>Synthetic Checkbox Press</Text>
</RNBounceable>
</SafeAreaView>
);
};
const styles = StyleSheet.create({});
export default App;
```
### Future Plans
- [x] ~~LICENSE~~
- [x] ~~Typescript Challenge!~~
- [x] ~~Version 2.0.0 is alive 🥳~~
- [x] ~~Synthetic Press Functionality~~
- [x] ~~Disable built-in check state~~
- [ ] Write an article about the lib on Medium
## Author
FreakyCoder, [email protected]
## License
React Native Bouncy Checkbox is available under the MIT license. See the LICENSE file for more info.
# Terraform Multi Repo example
This folder contains a multi-repo example of [Terraform](https://www.terraform.io/) configuration on AWS (Amazon Web Services).
It shows how to develop web server clusters in different environments without duplicating code, by consuming a module that lives in another repo, so that each environment can pin a different version of the module (a short sketch follows the environment lists below).
The environments are:
* Staging (stage)
* Production (prod)
This is the file layout in this repo:
```bash
live
├── global
│ └── s3/
│ ├── main.tf
│ └── (etc)
│
├── stage
│ ├── services/
│ │ └── webserver-cluster/
│ │ ├── main.tf
│ │ └── (etc)
│ └── data-stores/
│ └── mysql/
│ ├── main.tf
│ └── (etc)
│
└── prod
├── services/
│ └── webserver-cluster/
│ ├── main.tf
│ └── (etc)
└── data-stores/
└── mysql/
├── main.tf
└── (etc)
```
This is the file layout used from another repo:
```bash
modules
└── services/
└── webserver-cluster/
├── main.tf
└── (etc)
```
It uses in common for both environments:
* Terraform Remote State example: [live/global/s3](live/global/s3)
* Terraform Web Server Cluster module example in another repo: [https://github.com/alfonsof/terraform-aws-repo-examples](https://github.com/alfonsof/terraform-aws-repo-examples)
It uses for staging environment:
* Terraform MySQL on RDS example (staging environment): [live/stage/data-stores/mysql](live/stage/data-stores/mysql)
* Terraform Web Server Cluster example (staging environment): [live/stage/services/webserver-cluster](live/stage/services/webserver-cluster)
It uses for production environment:
* Terraform MySQL on RDS example (production environment): [live/prod/data-stores/mysql](live/prod/data-stores/mysql)
* Terraform Web Server Cluster example (production environment): [live/prod/services/webserver-cluster](live/prod/services/webserver-cluster)
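To illustrate how each environment pins its own module version, here is a minimal sketch (the `?ref=` Git tags, the `//services/webserver-cluster` sub-path, and the `cluster_name` variable are illustrative assumptions; the real module lives in the `terraform-aws-repo-examples` repo linked above):

```hcl
# live/stage/services/webserver-cluster/main.tf (sketch)
module "webserver_cluster" {
  # Staging tracks a newer tag of the shared module for early testing.
  source = "git::https://github.com/alfonsof/terraform-aws-repo-examples.git//services/webserver-cluster?ref=v0.2.0"

  cluster_name = "webservers-stage"
}

# live/prod/services/webserver-cluster/main.tf (sketch) would pin an older,
# proven tag of the same module instead, for example ?ref=v0.1.0.
```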
## Requirements
* You must have [Terraform](https://www.terraform.io/) installed on your computer.
* You must have an [AWS (Amazon Web Services)](http://aws.amazon.com/) account.
* It uses the Terraform AWS Provider that interacts with the many resources supported by AWS through its APIs.
* This code was written for Terraform 0.10.x.
## Using the code
* Configure your AWS access keys.
**Important:** For security, it is strongly recommended that you use IAM users instead of the root account for AWS access.
Setting your credentials for use by Terraform can be done in a number of ways, but here are the recommended approaches:
* The default credentials file
Set credentials in the AWS credentials profile file on your local system, located at:
`~/.aws/credentials` on Linux, macOS, or Unix
`C:\Users\USERNAME\.aws\credentials` on Windows
This file should contain lines in the following format:
```bash
[default]
aws_access_key_id = <your_access_key_id>
aws_secret_access_key = <your_secret_access_key>
```
Substitute your own AWS credentials values for the values `<your_access_key_id>` and `<your_secret_access_key>`.
* Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
Set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
To set these variables on Linux, macOS, or Unix, use `export`:
```bash
export AWS_ACCESS_KEY_ID=<your_access_key_id>
export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
```
To set these variables on Windows, use `set`:
```bash
set AWS_ACCESS_KEY_ID=<your_access_key_id>
set AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
```
* Use Terraform Remote State example for creating the remote state bucket. See: [live/global/s3](live/global/s3)
* Use Terraform module example (in another repo) for Web Server Cluster example in the staging environment and Web Server Cluster example in the production environment. See: [https://github.com/alfonsof/terraform-aws-repo-examples](https://github.com/alfonsof/terraform-aws-repo-examples)
* Use Terraform MySQL on RDS example for creating a MySQL database in the staging environment. See: [live/stage/data-stores/mysql](live/stage/data-stores/mysql)
* Use Terraform Web Server Cluster example for creating a web server cluster in the staging environment. See: [live/stage/services/webserver-cluster](live/stage/services/webserver-cluster)
* Use Terraform MySQL on RDS example for creating a MySQL database in the production environment. See: [live/prod/data-stores/mysql](live/prod/data-stores/mysql)
* Use Terraform Web Server Cluster example for creating a web server cluster in the production environment. See: [live/prod/services/webserver-cluster](live/prod/services/webserver-cluster)
---
title: "Timestamp Data Types"
description: "Expresses a date and time"
menu:
main:
parent: 'sql-types'
aliases:
- /sql/types/timestamptz
---
`timestamp` and `timestamp with time zone` data expresses a date and time in
UTC.
## `timestamp` info
Detail | Info
-------|------
**Quick Syntax** | `TIMESTAMP WITH TIME ZONE '2007-02-01 15:04:05+06'`
**Size** | 8 bytes
**Catalog name** | `pg_catalog.timestamp`
**OID** | 1114
**Min value** | 4713 BC
**Max value** | 294276 AD
**Resolution** | 1 microsecond / 14 digits
## `timestamp with time zone` info
Detail | Info
-------|------
**Quick Syntax** | `TIMESTAMPTZ '2007-02-01 15:04:05+06'`
**Aliases** | `timestamp with time zone`
**Size** | 8 bytes
**Catalog name** | `pg_catalog.timestamptz`
**OID** | 1184
**Min value** | 4713 BC
**Max value** | 294276 AD
**Resolution** | 1 microsecond / 14 digits
## Syntax
{{< diagram "type-timestamp.svg" >}}
Field | Use
------|-----
**WITH TIME ZONE** | Apply the _tz_offset_ field. If not specified, don't.
**TIMESTAMPTZ** | Apply the _tz_offset_ field.
_date_str_ | A string representing a date in `Y-M-D`, `Y M-D`, `Y M D` or `YMD` format.
_time_str_ | A string representing a time of day in `H:M:S.NS` format.
_tz_offset_ | The timezone's distance, in hours, from UTC.
## Details
- `timestamp` and `timestamp with time zone` store data in
[UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
- The difference between the two types is that `timestamp with time zone` can read or write
timestamps with the offset specified by the timezone. Importantly,
`timestamp with time zone` itself doesn't store any timezone data; Materialize simply
performs the conversion from the time provided and UTC.
- Materialize assumes all clients expect UTC time, and does not currently
support any other timezones.
### Valid casts
#### From `timestamp`
You can [cast](../../functions/cast) `timestamp` or `timestamp with time zone` to:
- [`date`](../date)
- [`text`](../text)
- `timestamp`
- `timestamp with time zone`
#### To `timestamp`
You can [cast](../../functions/cast) the following types to `timestamp` or
`timestamp with time zone`:
- [`date`](../date)
- [`text`](../text)
- `timestamp`
- `timestamp with time zone`
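For example, these casts use the standard `::` (or `CAST`) syntax; a quick illustration:

```sql
-- Cast a timestamp down to a date.
SELECT TIMESTAMP '2007-02-01 15:04:05'::date;

-- Cast a date up to a timestamp with time zone.
SELECT DATE '2007-02-01'::timestamptz;
```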
### Valid operations
`timestamp` and `timestamp with time zone` data (collectively referred to as
`timestamp/tz`) supports the following operations with other types.
Operation | Computes
----------|------------
[`date`](../date) `+` [`interval`](../interval) | [`timestamp/tz`](../timestamp)
[`date`](../date) `-` [`interval`](../interval) | [`timestamp/tz`](../timestamp)
[`date`](../date) `+` [`time`](../time) | [`timestamp/tz`](../timestamp)
[`timestamp/tz`](../timestamp) `+` [`interval`](../interval) | [`timestamp/tz`](../timestamp)
[`timestamp/tz`](../timestamp) `-` [`interval`](../interval) | [`timestamp/tz`](../timestamp)
[`timestamp/tz`](../timestamp) `-` [`timestamp/tz`](../timestamp) | [`interval`](../interval)
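As a quick illustration of these rules:

```sql
-- timestamp + interval returns a timestamp.
SELECT TIMESTAMP '2007-02-01 15:04:05' + INTERVAL '1 day';

-- Subtracting two timestamps returns an interval.
SELECT TIMESTAMP '2007-02-02 15:04:05' - TIMESTAMP '2007-02-01 15:04:05';
```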
## Examples
### Return timestamp
```sql
SELECT TIMESTAMP '2007-02-01 15:04:05' AS ts_v;
```
```nofmt
ts_v
---------------------
2007-02-01 15:04:05
```
### Return timestamp with time zone
```sql
SELECT TIMESTAMPTZ '2007-02-01 15:04:05+06' AS tstz_v;
```
```nofmt
tstz_v
-------------------------
2007-02-01 09:04:05 UTC
```
## Related topics
* [`TIMEZONE` and `AT TIME ZONE` functions](../../functions/timezone-and-at-time-zone)
To Whom It May Concern
DMCA Notification
The following information is presented for the purposes of removing web content that infringes on our copyright per the Digital Millennium Copyright Act. We appreciate your enforcement of copyright law and support of our rights in this matter.
Identification of Copyrighted Work
The copyrighted work at issue is the text that appears on codility.com and its related pages. The pages in question contain a clear copyright notification and are the intellectual property of the complainant.
Identification of Infringed Material
The following copyrighted paragraphs have been allegedly copied from the copyrighted work:
1) Link :
https://github.com/blakeembrey/code-problems/blob/master/solutions/java/StackMachine.java
Text starting from
"A <i>stack machine</i> is a simple system"
to
"* solution will not be the focus of the assessment"
2) Link :
https://github.com/Jarosh/Exercises/wiki/Split-array-into-two-parts,-such-that-the-number-of-elements-equal-to-X-in-the-first-part-is-the-same-as-the-number-of-elements-different-from-X-in-the-other-part
Text starting from
"An integer X and a non-empty zero-indexed array A"
to
"Elements of input arrays can be modified"
3) Link :
https://github.com/demonSong/leetcode/issues/18
Text starting from
"A zero-indexed array A consisting of N different integers is"
to
"Each element of array A is an integer within the range [0, N-1]."
4) Link :
https://github.com/DanLux/phonebill
Text starting from
"Your monthly phone bill has just arrived, and it's unexpectedly"
to
"format "hh:mm:ss,nnn-nnn-nnn" strictly; there are no empty lines and spaces"
5) Link :
https://github.com/Himansu-Nayak/jse-examples/blob/master/crackingthecoding/src/main/java/com/zalando/Password.java
Text starting from
"You would like to set a password"
to
"there is no substring that satisfies the restrictions on the format of a valid password."
Notifying Party
Codility Limited
Attn: Legal Dept.
107 Cheapside
9th Floor
London
EC2V 6DN
United Kingdom
[private]
Copyright Owner's Statement
I have a good faith belief that use of the copyrighted materials described above on the allegedly infringing web pages is not authorized by the copyright owner, its agent, or the law.
I swear, under penalty of perjury, that the information in the notification is accurate and that I am authorized to act on behalf of the copyright owner of an exclusive right that is allegedly infringed.
**Are you the copyright owner or authorized to act on the copyright owner's behalf?**
I am authorized to act on the copyright owner's behalf.
**Please provide a detailed description of the original copyrighted work that has allegedly been infringed. If possible, include a URL to where it is posted online.**
All of the websites accessible by links posted above contain descriptions of programming tasks from https://codility.com/.
1) https://codility.com/tasks/stack_machine_emulator
2) https://codility.com/tasks/assymetry_index
3) https://codility.com/tasks/perm_cycles
4) https://codility.com/tasks/phone_billing
5) https://codility.com/tasks/digitless_password
**What files should be taken down? Please provide URLs for each file, or if the entire repository, the repository's URL:**
Provided above.
**Have you searched for any forks of the allegedly infringing files or repositories? Each fork is a distinct repository and must be identified separately if you believe it is infringing and wish to have it taken down.**
If possible, please take down any forks.
**Is the work licensed under an open source license? If so, which open source license? Are the allegedly infringing files being used under the open source license, or are they in violation of the license?**
It is not.
**What would be the best solution for the alleged infringement? Are there specific changes the other person can make other than removal?**
No, removal is the only solution.
**Do you have the alleged infringer's contact information? If so, please provide it:**
No.
**Type (or copy and paste) the following statement: "I have a good faith belief that use of the copyrighted materials described above on the infringing web pages is not authorized by the copyright owner, or its agent, or the law. I have taken fair use into consideration."**
Included in the above section.
**Type (or copy and paste) the following statement: "I swear, under penalty of perjury, that the information in this notification is accurate and that I am the copyright owner, or am authorized to act on behalf of the owner, of an exclusive right that is allegedly infringed."**
Included in the above section.
**Please confirm that you have you have read our Guide to Submitting a DMCA Takedown Notice: https://help.github.com/articles/guide-to-submitting-a-dmca-takedown-notice/**
I confirm.
**So that we can get back to you, please provide either your telephone number or physical address:**
Please get back in touch at [private]
**Please type your full legal name below to sign this request:**
[private]
---
title: Cockpit 0.80 Released
date: 2015-10-14 22:19
tags: cockpit linux technical
slug: cockpit-0.80
summary: Cockpit releases every week. This week it was 0.80
category: release
---
Cockpit releases every week. This week it was 0.80
### SSH private keys
You can now use Cockpit to load SSH private keys into the ssh-agent that's
running in the Cockpit login session. These keys are used to authenticate
against other systems when they are added to the dashboard. Cockpit also
supports inspecting and changing the passwords for SSH private keys.
<iframe width="853" height="480" src="https://www.youtube.com/embed/RZ_N2iCPm_U" frameborder="0" allowfullscreen></iframe>
### Always start an SSH agent
Cockpit now always starts an SSH agent in the Cockpit login session. Previously
this would only happen if the correct PAM modules were loaded.
### From the future
Peter has done work to build an OSTree UI, useful for upgrading and rolling back
the operating system on Atomic Host:
[](https://raw.githubusercontent.com/cockpit-project/cockpit-design/master/software-updates/software-updates-ostree-alt.png)
Subin has done work to get Nulecule Kubernetes applications working with Atomic
and larger Kubernetes clusters.
### Try it out
Cockpit 0.80 is available now:
* [Source Tarball](https://github.com/cockpit-project/cockpit/releases/tag/0.80)
* [Fedora 23 and Fedora Rawhide](https://bodhi.fedoraproject.org/updates/FEDORA-2015-28a7f2b07f)
* [COPR for Fedora 21, 22, CentOS and RHEL](https://copr.fedoraproject.org/coprs/sgallagh/cockpit-preview/)
---
title: 'How to: Programmatically print documents'
ms.date: 02/02/2017
ms.topic: how-to
dev_langs:
- VB
- CSharp
helpviewer_keywords:
- Word [Office development in Visual Studio], printing documents
- documents [Office development in Visual Studio], printing
author: John-Hart
ms.author: johnhart
manager: jillfra
ms.workload:
- office
ms.openlocfilehash: 413d0e4f56aeb897af4f16a0dc6c43b4f04eace7
ms.sourcegitcommit: 6cfffa72af599a9d667249caaaa411bb28ea69fd
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/02/2020
ms.locfileid: "85537825"
---
# <a name="how-to-programmatically-print-documents"></a>Gewusst wie: Programm gesteuertes Drucken von Dokumenten
Sie können ein ganzes Microsoft Office Word-Dokument oder einen Teil eines Dokuments auf dem Standarddrucker drucken.
[!INCLUDE[appliesto_wdalldocapp](../vsto/includes/appliesto-wdalldocapp-md.md)]
## <a name="print-a-document-that-is-part-of-a-document-level-customization"></a>Drucken eines Dokuments, das Teil einer Anpassung auf Dokument Ebene ist
### <a name="to-print-the-entire-document"></a>So drucken Sie das ganze Dokument
1. Rufen Sie die <xref:Microsoft.Office.Tools.Word.Document.PrintOut%2A> -Methode der `ThisDocument` -Klasse im Projekt auf, um das gesamte Dokument zu drucken. Um dieses Codebeispiel verwenden zu können, müssen Sie den Code in der `ThisDocument` -Klasse ausführen.
[!code-vb[Trin_VstcoreWordAutomation#11](../vsto/codesnippet/VisualBasic/Trin_VstcoreWordAutomationVB/ThisDocument.vb#11)]
[!code-csharp[Trin_VstcoreWordAutomation#11](../vsto/codesnippet/CSharp/Trin_VstcoreWordAutomationCS/ThisDocument.cs#11)]
### <a name="to-print-the-current-page-of-the-document"></a>So drucken Sie die aktuelle Seite des Dokuments
1. Rufen Sie die <xref:Microsoft.Office.Tools.Word.Document.PrintOut%2A> -Methode der `ThisDocument` -Klasse im Projekt auf, und geben Sie an, dass eine Kopie der aktuellen Seite gedruckt werden soll. Um dieses Codebeispiel verwenden zu können, müssen Sie den Code in der `ThisDocument` -Klasse ausführen.
[!code-vb[Trin_VstcoreWordAutomation#12](../vsto/codesnippet/VisualBasic/Trin_VstcoreWordAutomationVB/ThisDocument.vb#12)]
[!code-csharp[Trin_VstcoreWordAutomation#12](../vsto/codesnippet/CSharp/Trin_VstcoreWordAutomationCS/ThisDocument.cs#12)]
## <a name="print-a-document-by-using-a-vsto-add-in"></a>Drucken eines Dokuments mithilfe eines VSTO-Add-ins
### <a name="to-print-an-entire-document"></a>So drucken Sie ein ganzes Dokument
1. Rufen Sie die <xref:Microsoft.Office.Interop.Word._Document.PrintOut%2A> -Methode des <xref:Microsoft.Office.Interop.Word.Document> -Objekts auf, das Sie drucken möchten. Im folgenden Codebeispiel wird das aktive Dokument gedruckt. Wenn Sie dieses Beispiel verwenden möchten, führen Sie den Code von der `ThisAddIn` -Klasse im Projekt aus.
[!code-vb[Trin_VstcoreWordAutomationAddIn#11](../vsto/codesnippet/VisualBasic/Trin_VstcoreWordAutomationAddIn/ThisAddIn.vb#11)]
[!code-csharp[Trin_VstcoreWordAutomationAddIn#11](../vsto/codesnippet/CSharp/Trin_VstcoreWordAutomationAddIn/ThisAddIn.cs#11)]
### <a name="to-print-the-current-page-of-a-document"></a>So drucken Sie die aktuelle Seite eines Dokuments
1. Rufen Sie die <xref:Microsoft.Office.Interop.Word._Document.PrintOut%2A> -Methode des <xref:Microsoft.Office.Interop.Word.Document> -Objekts auf, das Sie drucken möchten, und geben Sie an, dass eine Kopie der aktuellen Seite gedruckt werden soll. Im folgenden Codebeispiel wird das aktive Dokument gedruckt. Wenn Sie dieses Beispiel verwenden möchten, führen Sie den Code von der `ThisAddIn` -Klasse im Projekt aus.
[!code-vb[Trin_VstcoreWordAutomationAddIn#12](../vsto/codesnippet/VisualBasic/Trin_VstcoreWordAutomationAddIn/ThisAddIn.vb#12)]
[!code-csharp[Trin_VstcoreWordAutomationAddIn#12](../vsto/codesnippet/CSharp/Trin_VstcoreWordAutomationAddIn/ThisAddIn.cs#12)]
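The `[!code-...]` references above are resolved by the docs build. As a rough standalone illustration, an add-in method that prints the current page might look like the following (a minimal sketch, assuming it runs inside the `ThisAddIn` class of a Word VSTO Add-in with the usual Word interop reference):

```csharp
// Minimal sketch (assumption: this method lives in the ThisAddIn class,
// so this.Application is the running Word application).
private void PrintCurrentPageOfActiveDocument()
{
    Microsoft.Office.Interop.Word.Document document = this.Application.ActiveDocument;

    // Print one copy of the current page on the default printer.
    document.PrintOut(
        Range: Microsoft.Office.Interop.Word.WdPrintOutRange.wdPrintCurrentPage,
        Copies: 1);
}
```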
## <a name="see-also"></a>Weitere Informationen
- [Optionale Parameter in Office-Projektmappen](../vsto/optional-parameters-in-office-solutions.md)
Contributions welcome.
Project direction is a bit unclear at the moment, so I would appreciate any feedback on potential use cases. Please file an issue or fork the repo to contribute.
---
title: "Traffic Mirroring"
keywords: 'KubeSphere, Kubernetes, traffic mirroring, istio'
description: 'Traffic Mirroring'
linkTitle: "Traffic Mirroring"
weight: 2130
---
Traffic mirroring, also called shadowing, is a powerful, risk-free method of testing your app versions, as it sends a copy of live traffic to the service being mirrored. In other words, you implement a similar setup for acceptance testing so that problems can be detected in advance. As mirrored traffic happens out of band of the critical request path for the primary service, your end users are not affected during the whole process.
## Prerequisites
- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
- You need to enable **Application Governance** and have an available app so that you can mirror its traffic. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
## Create Traffic Mirroring Job
1. Log in to KubeSphere as `project-regular`. Under **Categories**, click **Create Job** on the right of **Traffic Mirroring**.

2. Set a name for it and click **Next**.

3. Select your app from the drop-down list and the service of which you want to mirror the traffic. If you also use the sample app Bookinfo, select **reviews** and click **Next**.

4. On the **Grayscale Release Version** page, add another version of it (e.g. `v2`) as shown in the image below and click **Next**:

{{< notice note >}}
The image version is `v2` in the screenshot.
{{</ notice >}}
5. Click **Create** in the final step.

6. The traffic mirroring job you created displays under the **Job Status** tab. Click it to view details.

7. You can see the traffic is being mirrored to `v2` with real-time traffic displaying in the line chart.

8. The new **Deployment** is created as well.

9. You can directly get the virtual service to view `mirror` and `weight` by executing the following command:
```bash
kubectl -n demo-project get virtualservice -o yaml
```
{{< notice note >}}
- When you execute the command above, replace `demo-project` with your own project (i.e. namespace) name.
- If you want to execute the command from the web kubectl on the KubeSphere console, you need to use the account `admin`.
{{</ notice >}}
10. Expected output:
```bash
...
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
port:
number: 9080
subset: v1
weight: 100
mirror:
host: reviews
port:
number: 9080
subset: v2
...
```
This route rule sends 100% of the traffic to `v1`. The last stanza specifies that you want to mirror to the service `reviews v2`. When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with `-shadow`. For example, `cluster-1` becomes `cluster-1-shadow`.
{{< notice note >}}
These requests are mirrored as “fire and forget”, which means that the responses are discarded. You can specify the `weight` field to mirror a fraction of the traffic, instead of mirroring all requests. If this field is absent, for compatibility with older versions, all traffic will be mirrored. For more information, see [Mirroring](https://istio.io/v1.5/pt-br/docs/tasks/traffic-management/mirroring/).
{{</ notice >}}
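For reference, a standalone VirtualService carrying the same routing and mirroring rules might look like the following (a sketch reconstructed from the output above; the `demo-project` namespace and the subset names come from this example, while the `v1alpha3` API version is an assumption for Istio of this era):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: demo-project
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            port:
              number: 9080
            subset: v1
          weight: 100
      # Copy every request to v2; responses from the mirror are discarded.
      mirror:
        host: reviews
        port:
          number: 9080
        subset: v2
```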
## Take a Job Offline
You can remove the traffic mirroring job by clicking **Job offline**, which does not affect the current app version.
 | 47.235849 | 433 | 0.723587 | eng_Latn | 0.988531 |
bb478bfd3a53b555f13d622105b4b4b1a3414530 | 934 | md | Markdown | docs/concepts/index.md | kyessenov/istio.github.io | 033a116d78dafea1e0047965782f14a4af40b119 | [
"Apache-2.0"
] | null | null | null | docs/concepts/index.md | kyessenov/istio.github.io | 033a116d78dafea1e0047965782f14a4af40b119 | [
"Apache-2.0"
] | null | null | null | docs/concepts/index.md | kyessenov/istio.github.io | 033a116d78dafea1e0047965782f14a4af40b119 | [
"Apache-2.0"
] | null | null | null | ---
title: Concepts
headline: Concepts
sidenav: doc-side-concepts-nav.html
bodyclass: docs
layout: docs
type: markdown
---
# Concepts
Concepts help you learn about the different parts
of the Istio system and the abstractions it uses.
- [What is Istio?](./what-is-istio.html). Provides a broad overview of what
problems Istio is designed to solve as well as presenting its high-level
architecture,
- [Service Model](./service-model.html). Describes how services are modeled
within the Istio mesh.
- [Attributes](./attributes.html). Explains the important notion of attributes, which
is a central mechanism for how policies and control are applied to services within the
mesh.
- [Mixer](./mixer.html). Architectural deep-dive into the design of Mixer, which provides
the policy and control mechanisms within the service mesh.
- [Mixer Configuration](./mixer-config.html). An overview of the key concepts used to configure
Mixer.
| 30.129032 | 95 | 0.775161 | eng_Latn | 0.994118 |
bb47bc5e2adb1347f22a82fbbe7171dceee0f938 | 22,461 | md | Markdown | README.md | gaetanozappi/react-textinput-chip | c29720db8e6afeac79d00c37128b05a50726be20 | [
"Apache-2.0"
] | 16 | 2019-10-10T10:54:27.000Z | 2019-10-16T08:03:52.000Z | README.md | gaetanozappi/react-textinput-chip | c29720db8e6afeac79d00c37128b05a50726be20 | [
"Apache-2.0"
] | null | null | null | README.md | gaetanozappi/react-textinput-chip | c29720db8e6afeac79d00c37128b05a50726be20 | [
"Apache-2.0"
] | 1 | 2019-10-10T20:24:30.000Z | 2019-10-10T20:24:30.000Z | # React Js: react-textinput-chip
[](https://github.com/gaetanozappi/react-textinput-chip)
[](https://www.npmjs.com/package/react-textinput-chip)
[](https://github.com/gaetanozappi/react-textinput-chip)
[](https://www.npmjs.com/package/react-textinput-chip)
[](https://github.com/gaetanozappi/react-textinput-chip/issues)
[](https://github.com/gaetanozappi/react-textinput-chip/issues?q=is%3Aissue+is%3Aclosed)
[](http://github.com/gaetanozappi/react-textinput-chip/issues)
[]()
<img src="https://github.com/gaetanozappi/react-textinput-chip/raw/master/screenshot/react-textinput-chip.gif" />
Demo: [Codesandbox](https://codesandbox.io/s/material-demo-n1it6 "Codesandbox")
- [Usage](#-usage)
- [License](#-license)
## 📖 Getting started
`$ npm install react-textinput-chip --save`
## 💻 Usage
```javascript
import React, { Component } from "react";
import ReactDOM from "react-dom";
import deburr from "lodash/deburr";
import ReactJson from "react-json-view";
import TextInputChip from "react-textinput-chip";
import MenuItem from "./components/MenuItem";
import {
Avatar,
Button,
Dialog,
DialogActions,
DialogContent,
DialogTitle,
TextField
} from "@material-ui/core";
import { withStyles } from "@material-ui/styles";
import FaceIcon from "@material-ui/icons/Face";
import DoneIcon from "@material-ui/icons/Done";
const styles = theme => ({
root: {
"& label.Mui-focused": {
color: "#007bff"
},
"& .MuiInput-underline:before": {
borderBottomColor: "#cacccf"
},
"& .MuiInput-underline:after": {
borderBottomColor: "#007bff"
},
"& .MuiInput-underline:hover:not(.Mui-disabled):before": {
borderBottomColor: "#ffb41b"
}
}
});
let MyTextField = withStyles(styles)(TextField);
const suggestions = [
{
name: "Regina",
surname: "Hampton",
email: "[email protected]",
address: "506 Macon Street, Waterford, Washington, 706"
},
{
name: "Mosley",
surname: "Navarro",
email: "[email protected]",
address: "172 Wythe Place, Smock, Tennessee, 9071"
},
{
name: "Lillie",
surname: "Steele",
email: "[email protected]",
address: "727 Brightwater Avenue, Welda, North Dakota, 453"
},
{
name: "Liz",
surname: "Cleveland",
email: "[email protected]",
address: "447 Lewis Place, Kerby, Alabama, 6018"
},
{
name: "Rogers",
surname: "Boyd",
email: "[email protected]",
address: "912 Hooper Street, Masthope, Maine, 4501"
},
{
name: "Bullock",
surname: "Glenn",
email: "[email protected]",
address: "921 Seton Place, Downsville, Idaho, 8474"
},
{
name: "Everett",
surname: "Bradshaw",
email: "[email protected]",
address: "994 Montague Street, Driftwood, Puerto Rico, 6436"
},
{
name: "Mccormick",
surname: "Walls",
email: "[email protected]",
address: "809 Decatur Street, Bawcomville, Indiana, 8329"
},
{
name: "Weiss",
surname: "Garcia",
email: "[email protected]",
address: "347 Hinckley Place, Greer, Iowa, 4916"
},
{
name: "Sonja",
surname: "Valdez",
email: "[email protected]",
address: "266 Elm Place, Hanover, Mississippi, 4444"
},
{
name: "Little",
surname: "Cote",
email: "[email protected]",
address: "383 Lott Avenue, Cartwright, Utah, 9826"
},
{
name: "Juliet",
surname: "Dunlap",
email: "[email protected]",
address: "126 Hastings Street, Lydia, Connecticut, 8128"
},
{
name: "Sheena",
surname: "Brady",
email: "[email protected]",
address: "414 Pulaski Street, Choctaw, Georgia, 2412"
},
{
name: "Bobbi",
surname: "Alexander",
email: "[email protected]",
address: "956 Ide Court, Madaket, Wisconsin, 2251"
},
{
name: "Schneider",
surname: "Mosley",
email: "[email protected]",
address: "425 Love Lane, Mansfield, Oregon, 519"
},
{
name: "Griffin",
surname: "Camacho",
email: "[email protected]",
address: "655 Provost Street, Venice, Arkansas, 7752"
},
{
name: "Chavez",
surname: "Bauer",
email: "[email protected]",
address: "701 Williamsburg Street, Brule, Virginia, 2962"
},
{
name: "Kent",
surname: "Nicholson",
email: "[email protected]",
address: "556 Bushwick Avenue, Klondike, South Carolina, 9899"
},
{
name: "Lauren",
surname: "Stephenson",
email: "[email protected]",
address: "452 Kermit Place, Columbus, South Dakota, 5995"
},
{
name: "Debra",
surname: "Meadows",
email: "[email protected]",
address: "542 Powell Street, Nadine, New Jersey, 6918"
},
{
name: "Robinson",
surname: "Shelton",
email: "[email protected]",
address: "181 Central Avenue, Edgar, American Samoa, 4913"
},
{
name: "Roth",
surname: "Boone",
email: "[email protected]",
address: "895 Granite Street, Hickory, Wyoming, 9024"
},
{
name: "Mattie",
surname: "Lynch",
email: "[email protected]",
address: "998 Grove Place, Watchtower, Massachusetts, 2874"
},
{
name: "Frances",
surname: "Ellison",
email: "[email protected]",
address: "315 Banner Avenue, Makena, Alaska, 7395"
},
{
name: "Catherine",
surname: "Dickerson",
email: "[email protected]",
address: "605 Oceanview Avenue, Gardners, West Virginia, 6136"
},
{
name: "Whitfield",
surname: "Donaldson",
email: "[email protected]",
address: "326 Interborough Parkway, Dunbar, Maryland, 401"
},
{
name: "Hayes",
surname: "Herman",
email: "[email protected]",
address: "161 Keen Court, Westboro, Delaware, 4142"
},
{
name: "Rodriquez",
surname: "Craft",
email: "[email protected]",
address: "924 Calder Place, Comptche, Illinois, 4976"
},
{
name: "Russell",
surname: "Oneal",
email: "[email protected]",
address: "217 Kingston Avenue, Thomasville, Virgin Islands, 1829"
},
{
name: "Ramos",
surname: "Skinner",
email: "[email protected]",
address: "285 Baughman Place, Baker, Missouri, 6189"
},
{
name: "Eaton",
surname: "Salinas",
email: "[email protected]",
address: "489 Union Street, Vernon, Marshall Islands, 2136"
},
{
name: "Parsons",
surname: "Wade",
email: "[email protected]",
address: "967 Dodworth Street, Harborton, Montana, 696"
},
{
name: "Mendoza",
surname: "Chandler",
email: "[email protected]",
address: "344 Hudson Avenue, Thatcher, Kentucky, 2071"
},
{
name: "Valentine",
surname: "French",
email: "[email protected]",
address: "216 Berry Street, Beaverdale, Colorado, 1766"
},
{
name: "Eva",
surname: "Reeves",
email: "[email protected]",
address: "960 Landis Court, Caron, Rhode Island, 3102"
},
{
name: "Cunningham",
surname: "Sweet",
email: "[email protected]",
address: "784 Woodhull Street, Soudan, Palau, 4977"
},
{
name: "Lindsey",
surname: "Savage",
email: "[email protected]",
address:
"381 Kenilworth Place, Sisquoc, Federated States Of Micronesia, 238"
},
{
name: "Virginia",
surname: "Molina",
email: "[email protected]",
address: "397 Wolcott Street, Townsend, Vermont, 1052"
},
{
name: "Watkins",
surname: "Hull",
email: "[email protected]",
address: "440 Friel Place, Toftrees, Oklahoma, 5860"
},
{
name: "Teresa",
surname: "Knapp",
email: "[email protected]",
address: "394 Colby Court, Coral, North Carolina, 4182"
},
{
name: "Barron",
surname: "Callahan",
email: "[email protected]",
address: "125 Ashland Place, Waiohinu, Ohio, 7142"
},
{
name: "Bradshaw",
surname: "Roy",
email: "[email protected]",
address: "194 Veterans Avenue, Alden, Kansas, 3236"
},
{
name: "Vargas",
surname: "Keller",
email: "[email protected]",
address: "102 Times Placez, Tooleville, Nevada, 7208"
},
{
name: "Levine",
surname: "Fitzgerald",
email: "[email protected]",
address: "486 Tapscott Avenue, Kirk, Pennsylvania, 8353"
},
{
name: "Connie",
surname: "Park",
email: "[email protected]",
address: "953 Caton Place, Baden, Hawaii, 6875"
},
{
name: "Webster",
surname: "Mooney",
email: "[email protected]",
address: "372 Bragg Court, Marne, Minnesota, 1062"
},
{
name: "Allie",
surname: "Dodson",
email: "[email protected]",
address: "118 Harman Street, Edneyville, Arizona, 6451"
},
{
name: "Kline",
surname: "Alford",
email: "[email protected]",
address: "148 Lorraine Street, Libertytown, Florida, 5568"
},
{
name: "Trujillo",
surname: "Ellis",
email: "[email protected]",
address: "598 Village Court, Rodanthe, Nebraska, 8622"
},
{
name: "Frye",
surname: "Wise",
email: "[email protected]",
address: "127 Pierrepont Place, Dupuyer, Northern Mariana Islands, 6382"
},
{
name: "Ashley",
surname: "Medina",
email: "[email protected]",
address: "156 Beaver Street, Woodlands, Guam, 9604"
},
{
name: "Stokes",
surname: "Nelson",
email: "[email protected]",
address: "502 Bevy Court, Jackpot, Louisiana, 8235"
},
{
name: "Alford",
surname: "Weaver",
email: "[email protected]",
address: "901 Cornelia Street, Spokane, District Of Columbia, 7646"
},
{
name: "Mcleod",
surname: "Hunt",
email: "[email protected]",
address: "482 Ludlam Place, Vowinckel, Michigan, 4598"
},
{
name: "Sybil",
surname: "Winters",
email: "[email protected]",
address: "962 Kiely Place, Chamberino, California, 5225"
},
{
name: "Chandler",
surname: "Pacheco",
email: "[email protected]",
address: "488 Harden Street, Canby, New York, 7722"
},
{
name: "Fisher",
surname: "Porter",
email: "[email protected]",
address: "525 Hendrix Street, Wiscon, New Mexico, 8549"
},
{
name: "Lucas",
surname: "Davis",
email: "[email protected]",
address: "527 Garden Street, Otranto, Texas, 4584"
},
{
name: "Petty",
surname: "Pate",
email: "[email protected]",
address: "738 Gunnison Court, Itmann, Washington, 9121"
},
{
name: "Peters",
surname: "Gaines",
email: "[email protected]",
address: "235 Girard Street, Caroleen, Tennessee, 8832"
},
{
name: "Jan",
surname: "Flowers",
email: "[email protected]",
address: "702 Beverly Road, Caroline, North Dakota, 1450"
},
{
name: "Deborah",
surname: "Jacobson",
email: "[email protected]",
address: "515 Tennis Court, Lorraine, Alabama, 9509"
},
{
name: "Bass",
surname: "Blevins",
email: "[email protected]",
address: "866 Trucklemans Lane, Rossmore, Maine, 6478"
},
{
name: "Maritza",
surname: "Stein",
email: "[email protected]",
address: "596 Albee Square, Genoa, Idaho, 412"
},
{
name: "Isabel",
surname: "Mcfarland",
email: "[email protected]",
address: "633 Stryker Court, Alamo, Puerto Rico, 2328"
},
{
name: "Vance",
surname: "Bush",
email: "[email protected]",
address: "543 Horace Court, Zeba, Indiana, 528"
},
{
name: "Fitzgerald",
surname: "Byrd",
email: "[email protected]",
address: "556 Hegeman Avenue, Glasgow, Iowa, 545"
},
{
name: "Lessie",
surname: "Delacruz",
email: "[email protected]",
address: "427 Paerdegat Avenue, Sanford, Mississippi, 7206"
},
{
name: "Pamela",
surname: "Gallagher",
email: "[email protected]",
address: "620 Louis Place, Winston, Utah, 3358"
},
{
name: "Nora",
surname: "Berger",
email: "[email protected]",
address: "940 Greenwood Avenue, Coultervillle, Connecticut, 2787"
},
{
name: "Silvia",
surname: "Monroe",
email: "[email protected]",
address: "980 Vanderveer Place, Berwind, Georgia, 348"
},
{
name: "Amalia",
surname: "Roberson",
email: "[email protected]",
address: "444 Bay Street, Woodlake, Wisconsin, 770"
},
{
name: "Gardner",
surname: "Fulton",
email: "[email protected]",
address: "877 Reed Street, Swartzville, Oregon, 1852"
},
{
name: "James",
surname: "Beasley",
email: "[email protected]",
address: "442 Java Street, Dahlen, Arkansas, 5561"
},
{
name: "Gordon",
surname: "Crawford",
email: "[email protected]",
address: "236 Irving Avenue, Shindler, Virginia, 7767"
},
{
name: "Walters",
surname: "Rodriguez",
email: "[email protected]",
address: "847 Clara Street, Joppa, South Carolina, 2859"
},
{
name: "Willie",
surname: "Guerra",
email: "[email protected]",
address: "149 Richardson Street, Glidden, South Dakota, 7043"
},
{
name: "Marla",
surname: "Carrillo",
email: "[email protected]",
address: "918 Boulevard Court, Norfolk, New Jersey, 3312"
},
{
name: "Marsha",
surname: "Greer",
email: "[email protected]",
address: "708 Kensington Street, Knowlton, American Samoa, 3326"
},
{
name: "Leann",
surname: "Rowland",
email: "[email protected]",
address: "675 Agate Court, Odessa, Wyoming, 4910"
},
{
name: "Moody",
surname: "Atkins",
email: "[email protected]",
address: "630 Moore Place, Hartsville/Hartley, Massachusetts, 1099"
},
{
name: "Lola",
surname: "Alston",
email: "[email protected]",
address: "996 Jackson Street, Snyderville, Alaska, 4664"
},
{
name: "Ingrid",
surname: "Velasquez",
email: "[email protected]",
address: "650 Linden Street, Edmund, West Virginia, 5926"
},
{
name: "Bailey",
surname: "Maynard",
email: "[email protected]",
address: "955 Bridgewater Street, Ribera, Maryland, 1362"
},
{
name: "Torres",
surname: "Duffy",
email: "[email protected]",
address: "165 National Drive, Russellville, Delaware, 708"
}
].map((suggestion, i) => ({
label: suggestion.email,
name: suggestion.name,
surname: suggestion.surname,
address: suggestion.address,
email: suggestion.email,
picture: "https://picsum.photos/id/" + i + "/200/200"
}));
export default class Home extends Component {
constructor() {
super();
this.state = {
sugg: suggestions,
options: [],
userSelect: suggestions[2],
open: false,
name: "",
surname: "",
address: ""
};
}
onSearch = q => {
const { sugg } = this.state;
console.log("onSearchHome", q);
const inputValue = deburr(q.trim()).toLowerCase();
const inputLength = inputValue.length;
let count = 0;
let options =
inputLength === 0
? []
: sugg.filter(suggestion => {
const keep =
count < 5 &&
suggestion.label &&
suggestion.label.slice(0, inputLength).toLowerCase() ===
inputValue;
if (keep) count += 1;
return keep;
});
this.setState({ options });
};
getLabel(...props) {
return props
.reduce((acc, val) => {
return acc + "" + val.charAt(0);
}, "")
.toUpperCase();
}
onChange = userSelect => {
this.setState({ userSelect });
};
handleDialog = open => {
this.setState({ open });
};
onChangeField = field => ({ target: { value } }) => {
let state = {};
state[field] = value;
this.setState(state);
};
addUser = () => {
const { name, surname, address, sugg } = this.state;
let obj = {
value: name,
label: (name + "." + surname + "@email.com").toLowerCase(),
name,
surname,
email: (name + "." + surname + "@email.com").toLowerCase(),
address,
picture: "https://picsum.photos/id/" + sugg.length + "/200/200"
};
this.setState(
previousState => ({
name: "",
surname: "",
address: "",
sugg: [...previousState.sugg, obj]
}),
() => {
this.handleDialog(false);
}
);
};
render() {
const { userSelect, options, open } = this.state;
return (
<>
{userSelect && <ReactJson src={userSelect} theme="solarized" />}
<br />
<br />
<TextInputChip
fullWidth
options={options}
onSearch={this.onSearch}
onChange={userSelect => this.onChange(userSelect)}
//selectedItem={userSelect}
//renderMenuItemChildren={option => `${option.name} ${option.surname} (${option.email})` }
renderMenuItemChildren={option => <MenuItem option={option} />}
//labelKey="address"
labelKey={option =>
`${option.name} ${option.surname} (${option.address})`
}
placeholder={"Cerca"}
label={"User"}
noResult={a => this.handleDialog(a)}
noFound={"No found result. Add user."}
backgroundColorChip={"#007bcc"}
colorTextChip={"#fff"}
colorDelete={"#d4d7d6"}
loading
backgroundColorLoading={"#f59c42"}
avatar={option => <Avatar alt={option.label} src={option.picture} />}
/*avatar={option => (
<Avatar>{this.getLabel(option.name, option.surname)}</Avatar>
)}*/
//avatar={() => <FaceIcon style={{ color: "#fff" }} />}
//deleteIcon={() => <DoneIcon />}
/*backgroundMenuItem={"#ffb41b"}
colorMenuItem={"#007bff"}
colorMenuItemSelect={"#ffb41b"}
backgroundItemSelect={"#007bff"}*/
/>
<Dialog
open={open}
onClose={() => this.handleDialog(false)}
aria-labelledby="form-dialog-title"
>
<DialogTitle id="form-dialog-title" style={{ textAlign: "center" }}>
Create a new user
</DialogTitle>
<DialogContent>
<MyTextField
autoFocus
margin="dense"
id="name"
label="Name"
type="text"
fullWidth
onChange={this.onChangeField("name")}
/>
<MyTextField
margin="dense"
id="surname"
label="Surname"
type="text"
fullWidth
onChange={this.onChangeField("surname")}
/>
<MyTextField
margin="dense"
id="address"
label="Address"
type="text"
fullWidth
onChange={this.onChangeField("address")}
/>
</DialogContent>
<DialogActions>
<Button onClick={() => this.handleDialog(false)} color="primary">
Close
</Button>
<Button onClick={() => this.addUser()} color="primary">
Add
</Button>
</DialogActions>
</Dialog>
</>
);
}
}
```
MenuItem:
```javascript
import PropTypes from "prop-types";
import React from "react";
const MenuItem = ({ option }) => {
return (
<div>
<img
alt={option.label}
src={option.picture}
style={{
height: "24px",
marginRight: "10px",
width: "24px",
verticalAlign: "middle",
borderRadius: "50%"
}}
/>
<span>
{option.name} ({option.email})
</span>
</div>
);
};
MenuItem.propTypes = {
option: PropTypes.shape({
name: PropTypes.string.isRequired,
surname: PropTypes.string.isRequired
}).isRequired
};
export default MenuItem;
```
## 💡 Props
|Prop|Type|Default|Note|
| - | - | - | - |
|fullWidth|`boolean`|`false`||
|options|`array`|||
|onSearch|`function`|||
|onChange|`function`|||
|selectedItem|`obj`|||
|renderMenuItemChildren|`function: optional`|`obj.label`||
|labelKey|`function: optional`|`obj.label`||
|placeholder|`string`|`Search`||
|label|`string`|`Search`||
|noResult|`function`|||
|noFound|`string`|`No matches found.`||
|backgroundColorChip|`string`| `#e0e0e0`||
|colorTextChip|`string`| `#000000`||
|colorDelete|`string`| `rgba(0, 0, 0, 0.26)`||
|avatar|`function: optional`|||
|deleteIcon|`function: optional`|||
|colorMenuItem|`string`| `#000000`||
|backgroundMenuItem|`string`| `#ffffff`||
|colorMenuItemSelect|`string`| `#000000` ||
|backgroundItemSelect|`string`| `#e0e0e0`||
|loading|`boolean`|`false`||
|backgroundColorLoading|`string`| `#f59c42`||
## 📜 License
This library is provided under the Apache License.
# monorepo tips
- [monorepo tips](#monorepo-tips)
- [Workflows](#workflows)
- [Initial Dependency Installation](#initial-dependency-installation)
- [Build all packages within repo](#build-all-packages-within-repo)
- [Run tests across all repo packages](#run-tests-across-all-repo-packages)
- [Modifying a package's dependencies](#modifying-a-packages-dependencies)
- [Clear local caches](#clear-local-caches)
- [`yarn workspaces` `nohoist` option](#yarn-workspaces-nohoist-option)
- [`lerna`](#lerna)
- [Root `package.json`](#root-packagejson)
## Workflows
Unless otherwise noted, all commands are expected to be issued from project root.
### Initial Dependency Installation
- `yarn`
### Build all packages within repo
- `yarn build`
`yarn build` calls `lerna run build`, which will inspect each package and, if a `build` script exists in its `package.json`, execute the defined build script.
### Run tests across all repo packages
- `yarn test`
### Modifying a package's dependencies
- `yarn workspace <workspace_name> <command>`
Example:
`yarn workspace ssr add express`
`yarn workspace ssr remove eslint --dev`
### Clear local caches
This command is useful when you have issues with dependencies.
- `yarn reset`
Executing this command will:
1. remove root `node_modules`, `yarn.lock` and the `node_modules` directory of each package
2. `yarn cache clean`
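The `reset` script itself is a custom script defined in the root `package.json`; a rough sketch of what such a definition might look like (the exact script in this repo may differ, and `rimraf` is an assumed dev dependency):

```json
{
  "scripts": {
    "reset": "lerna clean --yes && rimraf node_modules yarn.lock && yarn cache clean"
  }
}
```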
## `yarn workspaces` `nohoist` option
The following snippet is from the excellent example repo [yarn-nohoist] demonstrating common scenarios, their problems and solutions to fix them.
Below is the yarn workspaces configuration from the package.json under the **project's root directory**.
```
"workspaces": {
"packages": ["packages/*"],
"nohoist": [
"**/react-native", // --- 1
"**/react-native/**", // --- 2
"react-cipher/react-scripts", // --- 3
"**/cipher-core", // --- 4
"RNCipher/react-native-*", // --- 5
"RNCipher/react-native-*/**", // --- 6
"RNCipher/vm-browserify", // --- 7
"RNCipher/vm-browserify/**" // --- 8
]
},
```
1. This tells yarn not to hoist the react-native module for any workspace package referencing it. This is mainly for the react-native app; we could also replace the wildcard "**" with an explicit package name like RNCipher.
2. This tells yarn not to hoist any of react-native's dependent modules. This is for RNCipher.
3. This tells yarn not to hoist react-scripts under the react-cipher workspace. This is to bypass a create-react-app problem under a monorepo project as of today (1/31/2018).
4. This tells yarn not to hoist cipher-core for any workspace referencing it. Both react-cipher and RNCipher depend on cipher-core. yarn will create a symlink to the actual cipher-core under each package's node_modules.
5. (lines 5-8) These tell yarn not to hoist any module (and their dependencies) named "vm-browserify" or prefixed with "react-native-" under the RNCipher workspace. These modules are react-native adapted node modules, which will be bundled by react-native's bundler, metro.
## `lerna`
`lerna` is a tool that helps manage multi-package repositories. It is different from `yarn workspaces` in that `lerna` is focused on linking together packages and running common tasks across all or a select set of packages.
### Root `package.json`
This manages `devDependencies` and repo-wide scripts. For example, we can define the following script in our root `package.json`:
```json
{
"scripts": {
"build": "lerna run build"
}
}
```
Executing this script will instruct `lerna` to inspect all packages under our `lerna.json` config, and if a `build` script exists in each package, run it.
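For completeness, the `lerna.json` that drives these commands is small; here is a sketch of a typical one for this layout (the `version` strategy and the `npmClient`/`useWorkspaces` settings are assumptions, not necessarily this repo's exact config):

```json
{
  "packages": ["packages/*"],
  "npmClient": "yarn",
  "useWorkspaces": true,
  "version": "independent"
}
```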
[yarn-nohoist]: https://github.com/connectdotz/yarn-nohoist-examples.git
[babel-monorepo-design-docs]: https://github.com/babel/babel/blob/master/doc/design/monorepo.md
[Why we dropped Lerna from PouchDB]: https://gist.github.com/nolanlawson/457cdb309c9ec5b39f0d420266a9faa4
[alle]: https://github.com/boennemann/alle
| 40.59596 | 269 | 0.721573 | eng_Latn | 0.984938 |
bb47d6714d6db94556b60999b5dffd0fb6c860d8 | 3,163 | md | Markdown | articles/azure-percept/azure-percept-vision-datasheet.md | MSFTandrelom/azure-docs.ru-ru | ccbb3e755b9c6e50a8a58babbc01a4e5977635b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-percept/azure-percept-vision-datasheet.md | MSFTandrelom/azure-docs.ru-ru | ccbb3e755b9c6e50a8a58babbc01a4e5977635b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-percept/azure-percept-vision-datasheet.md | MSFTandrelom/azure-docs.ru-ru | ccbb3e755b9c6e50a8a58babbc01a4e5977635b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Спецификации Azure Percept Vision
description: Подробные спецификации устройств приведены в техническом описании Azure Перцепт.
author: elqu20
ms.author: v-elqu
ms.service: azure-percept
ms.topic: reference
ms.date: 02/16/2021
ms.openlocfilehash: 7bbb3a88bbc3011ec5dd917cdb0c1e49f7556aab
ms.sourcegitcommit: 24a12d4692c4a4c97f6e31a5fbda971695c4cd68
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 03/05/2021
ms.locfileid: "102177170"
---
# <a name="azure-percept-vision-datasheet"></a>Спецификации Azure Percept Vision
Перечисленные ниже спецификации предназначены для устройства "концепция Azure Перцепт", включенного в [Azure ПЕРЦЕПТ DK](./azure-percept-dk-datasheet.md).
|Спецификация продукта |Значение |
|--------------------------------|---------------------|
|Целевые отрасли |Производство <br> Интеллектуальные строения <br> Auto (Автоматически) <br> Розничная торговля |
|Сценарии Hero |Аналитика покупателей <br> Доступность для хранения <br> Уменьшение сжатия <br> Мониторинг рабочей области|
|Измерения |Часы x часы x 40mm (сборка SoM для концепции Azure Перцепт с корпусом) <br> Часы x часы x 6mm (схема SoM для видения)|
|Плоскость управления Управление |Обновление устройства Azure (аду) |
|Поддерживаемое программное обеспечение и службы |[Центр Интернета вещей Azure](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) <br> [Машинное обучение Azure](https://azure.microsoft.com/services/machine-learning/) <br> [Среда выполнения ONNX](https://www.onnxruntime.ai/) <br> [опенвино](https://docs.openvinotoolkit.org/latest/index.html) <br> Обновление устройства Azure |
|Ускорение искусственного интеллекта |Intel Мовидиус множество X (MA2085) (ВПУ) с интегрированным поставщиком услуг Интернет-камеры Intel, 0,7 ВЕРХНих устройств |
|Датчики и визуальные индикаторы |Датчик камеры Sony IMX219 с 6P Lens<br>Решение. 8MP по адресу 30FPS, Distance: 50cm-Infinity<br>Фов: 120 градусов по диагонали, цвет: широкий динамический диапазон, фиксированный чередующийся фокус|
|Поддержка камеры |RGB <br> 2 камеры можно запускать одновременно |
|Crypto-Controller безопасности |ST-Micro STM32L462CE |
|Компонент управления версиями или ИДЕНТИФИКАТОРом |64 КБ |
|Память |LPDDR4 2 ГБ |
|Мощный |3,5 Вт |
|Порты |1x USB 3,0, тип C <br> 2 МИПИ 4 Lane (до 1,5 Гбит/с на каждую дорожку) |
|Интерфейсы управления |2x I2C <br> 2x SPI <br> 6xный модулятор (GPIO: двукратное время, двукратная синхронизация кадров, 2-неиспользуемая) <br> 2. запасной диск GPIO |
|Сертификация |Комиссии <br> ВНУТРИФИРМЕННОГО <br> RoHS <br> ПОПАСТЬ <br> UL |
|Рабочая температура |от 0 до 27 градусов C (сборка SoM для Перцепт концепции Azure с корпусом) <br> от-10 до 70 градусов C (микросхема SoM для видения) |
|Сенсорная температура |<= 48 градусов C |
|Относительная влажность |от 8% до 90% |
| 79.075 | 444 | 0.697439 | rus_Cyrl | 0.683568 |
bb482582d0f9efff48ea94596d4eb1593f11e61e | 1,205 | md | Markdown | AlchemyInsights/commercial-dialogues/shared-mailboxes-cannot-open-encrypted-messages.md | isabella232/OfficeDocs-AlchemyInsights-pr.sl-SI | 11a4d6bfb0580342f538adc12818d1a86ddcdac3 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-19T19:07:51.000Z | 2020-05-19T19:07:51.000Z | AlchemyInsights/commercial-dialogues/shared-mailboxes-cannot-open-encrypted-messages.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.sl-SI | feeebbc38a3aeca0b2a03f49e3cab735f0ea4069 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:33:13.000Z | 2022-02-09T06:51:06.000Z | AlchemyInsights/commercial-dialogues/shared-mailboxes-cannot-open-encrypted-messages.md | isabella232/OfficeDocs-AlchemyInsights-pr.sl-SI | 11a4d6bfb0580342f538adc12818d1a86ddcdac3 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-10-11T19:12:12.000Z | 2021-10-09T10:38:29.000Z | ---
title: Nabiralniki v skupni rabi ne morejo odpreti šifriranih sporočil
ms.author: v-smandalika
author: v-smandalika
manager: dansimp
ms.date: 02/24/2021
audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "9000078"
- "7342"
ms.openlocfilehash: d597fa0020beedd481e017ab707a5a4f5192219ac87609a894d8ba7345ce3110
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: sl-SI
ms.lasthandoff: 08/05/2021
ms.locfileid: "54005718"
---
# <a name="shared-mailboxes-cant-open-encrypted-messages"></a>Nabiralniki v skupni rabi ne morejo odpreti šifriranih sporočil
- Nabiralniki v skupni rabi ne morejo odpreti šifriranih sporočil, ne glede na to, kateri odjemalski program uporabljate (na primer Outlook 2016 ali Outlook v spletu).
- Šifrirana sporočila lahko pošiljate iz nabiralnika v skupni rabi. Nastavite naslov za šifriranje tako, kot bi ga nastavili za nabiralnike drugih uporabnikov v vaši organizaciji. Če ste na primer nastavili šifriranje sporočil za vsa e-poštna sporočila, poslana iz organizacije, bo privzeto vključen nabiralnik v skupni rabi.
| 44.62963 | 325 | 0.820747 | slv_Latn | 0.992654 |
bb484950326bb9d506a408de0db1493842f7420d | 1,519 | md | Markdown | DOCS/Release_Notes/inline_phot_pgi_floating_point_crashes.md | Simeng-unique/CMAQ-changed | cb83401728ed7ea1bb19a6986c0acc84dabe11a4 | [
"CC0-1.0"
] | 203 | 2017-02-04T18:01:47.000Z | 2022-03-30T09:09:00.000Z | DOCS/Release_Notes/inline_phot_pgi_floating_point_crashes.md | Simeng-unique/CMAQ-changed | cb83401728ed7ea1bb19a6986c0acc84dabe11a4 | [
"CC0-1.0"
] | 54 | 2017-01-03T21:40:27.000Z | 2022-03-04T19:03:53.000Z | DOCS/Release_Notes/inline_phot_pgi_floating_point_crashes.md | Simeng-unique/CMAQ-changed | cb83401728ed7ea1bb19a6986c0acc84dabe11a4 | [
"CC0-1.0"
] | 170 | 2016-11-09T22:30:04.000Z | 2022-03-31T03:21:59.000Z | # Removing sporadic floating point crashes from photolysis rate calculation with the PGI Fortran compiler.
[William T. Hutzell](mailto:[email protected]), U.S. Environmental Protection Agency
## Brief Description
The changes remove sporadic crashes when CMAQ is compiled using the PGI compiler with debug options. Floating point errors in the inline calculation of photolysis rates occur when exponentials are evaluated at very large negative REAL(8) numbers for the **TWOSTREAM_S** and **get_aggregate_optics** subroutines in the **PHOT_MOD.F** and **CLOUD_OPTICS.F** files, respectively. These code changes limit the lowest value of the exponential argument to ‑709.090848126508, which corresponds to 9.0 x 10<sup>‑307</sup>. Exponentials that are evaluated below the limit are set to 9.0 x 10<sup>‑307</sup>.
## Significance and Impact
Simulations over the 12-km CMAQ Southeast benchmark and CONUS domains show no differences when the model is compiled the Intel version 17.0 and gcc version 6.1 Fortran compilers. When using the PGI version 17.4 Fortran compiler, concentration differences are much less than 0.1% for most species. Monoatomic chlorine, hypochlorous acid, and formyl chloride had differences on the order of 10% for concentrations below 10<sup>‑5</sup> to 10<sup>‑8</sup> ppmV in isolated locations over the Gulf of Mexico and the Florida peninsula.
## Files Affected
* CCTM/src/phot/inline/PHOT_MOD.F
* CCTM/src/phot/inline/CLOUD_OPTICS.F
| 101.266667 | 636 | 0.791968 | eng_Latn | 0.987131 |
bb486545194bf9982168f95b420f09c371b057d6 | 56 | md | Markdown | README.md | NanciNunes/ola-mundo | 306f06aa2df0331619fd61b138cf0221fad899fa | [
"MIT"
] | null | null | null | README.md | NanciNunes/ola-mundo | 306f06aa2df0331619fd61b138cf0221fad899fa | [
"MIT"
] | null | null | null | README.md | NanciNunes/ola-mundo | 306f06aa2df0331619fd61b138cf0221fad899fa | [
"MIT"
] | null | null | null | # ola mundo
primeiro repositorio do curso git e github
| 18.666667 | 43 | 0.785714 | glg_Latn | 0.55323 |
bb48f78e404c460b200b7e53112d5ddef5849c71 | 3,811 | md | Markdown | README.md | JesperKrogh/nbgallery | 7974201887c4687894c01aba8c03c847c07347b4 | [
"MIT"
] | 140 | 2017-01-26T01:13:57.000Z | 2022-02-17T04:23:40.000Z | README.md | JesperKrogh/nbgallery | 7974201887c4687894c01aba8c03c847c07347b4 | [
"MIT"
] | 445 | 2017-01-14T21:25:38.000Z | 2022-03-18T13:51:47.000Z | README.md | JesperKrogh/nbgallery | 7974201887c4687894c01aba8c03c847c07347b4 | [
"MIT"
] | 26 | 2017-06-22T01:31:20.000Z | 2022-01-28T08:24:27.000Z | # What is nbgallery?
nbgallery (notebook gallery) is an enterprise [Jupyter](http://jupyter.org/) notebook sharing and collaboration platform. For an overview, please check out our [github.io site](https://nbgallery.github.io/).

[Tony Hirst](https://github.com/psychemedia) published a nice walkthrough of some of the features of nbgallery [on his blog](https://blog.ouseful.info/2019/01/28/first-play-with-nbgallery/).
## Getting Started
### Requirements
nbgallery is a [Ruby on Rails](https://rubyonrails.org/) application. You can run it with the built-in `rails server` command or with [Rack](https://rack.github.io/) servers like [Puma](http://puma.io/) or [Passenger](https://www.phusionpassenger.com/).
The nbgallery application requires a MySQL or MariaDB server. Other SQL-based servers may work but have not been tested. We recommend creating a separate mysql user account for use by the app.
The application also requires an [Apache Solr](http://lucene.apache.org/solr/) server for full-text indexing. For small to medium instances (small thousands of notebooks and users), the bundled [sunspot](https://github.com/sunspot/sunspot) Solr server may suffice. Larger instances may require a standalone server. See our [notes](docs/solr.md) for more detail.
### Installation
You can install nbgallery on various platforms:
* [Install from source on Linux or Mac Homebrew](docs/installation.md)
* [Run with docker](docs/docker.md)
### Configuration
Most configuration settings will should work out of the box, but there are a few things you'll need to set up. See our [configuration notes](docs/configuration.md) for more detail.
### Running the server
Once everything is configured, you're ready to go! See [this page](docs/running.md) for details on starting up the app and shutting it down.
## Jupyter integration
One of the benefits of nbgallery is its two-way integration with Jupyter. You can launch notebooks from nbgallery into Jupyter with a single click. Within Jupyter, the Gallery menu enables you to save notebooks to nbgallery and submit change requests to other notebook authors. See [this page](docs/jupyter_integration.md) for more information.
## Providing OAuth to JupyterHub
If you want to use NBGallery as your central login repository for your JupyterHub, you can configure NBGallery to operate as an OAuth2 provider. This will work for other applications as well, but for a detailed write-up of how it can be connected to JupyterHub, see [this page](docs/jupyter_hub_oauth.md).
## Selected topics
Here is some documentation on various features of nbgallery:
* Our [notebook recommender system](https://nbgallery.github.io/recommendation.html) helps users find notebooks that are most relevant to them.
* When [integrated with Jupyter](docs/jupyter_integration.md), nbgallery can track cell executions to assess whether [notebooks are healthy](https://nbgallery.github.io/health_paper.html).
* Our [notebook review system](docs/notebook_review.md) helps build quality and user confidence through peer review of notebooks.
* The [extension system](docs/extensions.md) enables you to add custom/proprietary features that are specific to your enterprise.
* Notebook revisions can be [tracked in git](docs/revisions.md).
* [Notes on computation and cleanup jobs](docs/scheduled_jobs.md).
* [Notes on backing up nbgallery](docs/backup.md).
## Contributions
Issues and pull requests are welcome. For code contributions, please note that we use [rubocop](https://github.com/bbatsov/rubocop) ([our config](.rubocop.yml)), so please run `overcommit --install` in your project directory to activate the git commit hooks.
| 66.859649 | 364 | 0.778536 | eng_Latn | 0.988164 |
bb48f7b5da2c5886eb972ece8f995a9e2e2ee33f | 4,382 | md | Markdown | _posts/2013-07-05-WP-yi-bu-HTTP-qing-qiu-fu-zhu-lei.md | liubaicai/blog | 29afdff62ef4ef35ee9ecb158ab8837f2dfea91e | [
"Apache-2.0"
] | null | null | null | _posts/2013-07-05-WP-yi-bu-HTTP-qing-qiu-fu-zhu-lei.md | liubaicai/blog | 29afdff62ef4ef35ee9ecb158ab8837f2dfea91e | [
"Apache-2.0"
] | 3 | 2021-03-25T14:56:32.000Z | 2022-01-17T06:28:22.000Z | _posts/2013-07-05-WP-yi-bu-HTTP-qing-qiu-fu-zhu-lei.md | liubaicai/baicai_vue_blog | 14d9ef6d8a7f075e28aea9ffe474bf513ce8aa5c | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: "WP异步HTTP请求辅助类"
date: 2013-07-05 06:35:23 UTC
author: "baicai"
catalog: true
tags:
- 存档
---
<pre class="prettyprint lang-cs">/// <summary>
/// 异步HTTP请求辅助类
/// </summary>
public class HttpHelper
{
public class HttpArgs
{
public HttpWebRequest request { set; get; }
public string post { set; get; }
}
public HttpHelper(string userAgent)
{
UserAgent = userAgent;
}
public string Referer { set; get; }
public string UserAgent { set; get; }
public object Tag { set; get; }
public String subString { set; get; }
private HttpResponseDelegate httpResponseDelegate;
public delegate void HttpResponseDelegate(HttpHelper sender, Stream stream);
/// <summary>
/// 开始一个请求
/// </summary>
/// <param name="url">网址</param>
/// <param name="post">如果为NULL,则为GET请求</param>
/// <param name="resp">回调方法</param>
public void request(string url, string post, HttpResponseDelegate resp)
{
Random random = new Random();
if (url.Contains("?b") || url.Contains("&b") || url.Contains("notic"))
{
request(new Uri(url), post, resp);
}
else
{
if (url.Contains("?"))
{
subString = url + "&b=" + random.Next(1000, 9000).ToString();
}
else
{
subString = url + "?b=" + random.Next(1000, 9000).ToString();
}
request(new Uri(subString), post, resp);
}
}
/// <summary>
/// 开始一个请求
/// </summary>
/// <param name="url">网址</param>
/// <param name="post">如果为NULL,则为GET请求</param>
/// <param name="resp">回调方法</param>
public void request(Uri url, string post, HttpResponseDelegate resp)
{
httpResponseDelegate = resp;
HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest;
WebHeaderCollection whc = new WebHeaderCollection();
if (Referer != null)
{
request.Headers[HttpRequestHeader.Referer] = Referer;
}
request.UserAgent = UserAgent;
if (post != null)
{
request.ContentType = "application/x-www-form-urlencoded";
request.Method = "POST";
request.BeginGetRequestStream(requestReady, new HttpArgs() { request = request, post = post });
}
else
{
request.BeginGetResponse(responseReady, request);
}
}
/// <summary>
/// 准备
/// </summary>
/// <param name="result"></param>
private void requestReady(IAsyncResult result)
{
HttpArgs obj = result.AsyncState as HttpArgs;
HttpWebRequest request = obj.request;
String webpost = obj.post;
var stream = request.EndGetRequestStream(result);
using (StreamWriter writer = new StreamWriter(stream))
{
writer.Write(webpost);
writer.Flush();
}
request.BeginGetResponse(responseReady, request);
}
private void responseReady(IAsyncResult result)
{
HttpWebRequest webrequest = result.AsyncState as HttpWebRequest;
try
{
WebResponse response = webrequest.EndGetResponse(result);
using (var stream = response.GetResponseStream())
{
if (httpResponseDelegate != null)
{
httpResponseDelegate.Invoke(this, stream);
}
}
response.Close();
}
catch (Exception ex)
{
var msg = ex.Message;
if (httpResponseDelegate != null)
{
httpResponseDelegate.Invoke(this, null);
}
}
}
}</pre> | 34.503937 | 111 | 0.498174 | kor_Hang | 0.261884 |
bb49c3d03270cd6d27c08f675c79c898e41ca07a | 424 | md | Markdown | elasticsearch/external_dependencies_readme.md | FAANG/faang-portal-backend | 8e286bc4cc5d14e49b74183cc28550d73dde370e | [
"Apache-2.0"
] | 1 | 2019-05-15T13:44:22.000Z | 2019-05-15T13:44:22.000Z | elasticsearch/external_dependencies_readme.md | FAANG/faang-portal-backend | 8e286bc4cc5d14e49b74183cc28550d73dde370e | [
"Apache-2.0"
] | 1 | 2019-06-14T13:23:14.000Z | 2019-06-14T13:23:14.000Z | elasticsearch/external_dependencies_readme.md | FAANG/faang-portal-backend | 8e286bc4cc5d14e49b74183cc28550d73dde370e | [
"Apache-2.0"
] | 1 | 2022-03-15T09:22:33.000Z | 2022-03-15T09:22:33.000Z | This page lists the command lines that are utilising the FAANG mapping. Developers should add their commands to this list or inform the faang-dcc if they wish for their functionality to be protected. A primary contact should also be listed for each command in case they need to be contacted about developments. FAANG-dcc developers will be cautious about changing the mapping if it affects any of the commands listed here: | 424 | 424 | 0.818396 | eng_Latn | 0.999932 |
bb4a27a93fa10f0746d804df4ecd19237d443298 | 938 | md | Markdown | includes/event-grid-limits.md | gypark/azure-docs.ko-kr | 82cc67dc083bbeb294948dc4f29257033f6215fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/event-grid-limits.md | gypark/azure-docs.ko-kr | 82cc67dc083bbeb294948dc4f29257033f6215fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/event-grid-limits.md | gypark/azure-docs.ko-kr | 82cc67dc083bbeb294948dc4f29257033f6215fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 포함 파일
description: 포함 파일
services: event-grid
author: tfitzmac
ms.service: event-grid
ms.topic: include
ms.date: 05/22/2019
ms.author: tomfitz
ms.custom: include file
ms.openlocfilehash: 3f94481e6a8550479788d92c744327e1dc3b58c4
ms.sourcegitcommit: 009334a842d08b1c83ee183b5830092e067f4374
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 05/29/2019
ms.locfileid: "66376913"
---
Azure Event Grid 시스템 항목 및 사용자 지정 항목에는 다음 제한이 적용 *되지* 이벤트 도메인입니다.
| Resource | 제한 |
| --- | --- |
| Azure 구독당 사용자 지정 토픽 | 100 |
| 토픽당 이벤트 구독 | 500 |
| 사용자 지정 토픽에 대한 게시 비율(수신) | 토픽별 초당 5,000개 이벤트 |
| 요청 게시 | 초당 250 |
| 이벤트 크기 | 에 대 한 지원 64KB 일반적 가용성 (GA). 1MB에 대 한 지원은 현재 미리 보기로 제공에서 됩니다. |
다음 제한은 이벤트 도메인 에서만 적용 됩니다.
| Resource | 제한 |
| --- | --- |
| 이벤트 도메인당 항목 | 공개 미리 보기 중 1,000개 |
| 도메인 내에서 항목당 이벤트 구독 | 공개 미리 보기 중 50개 |
| 도메인 범위 이벤트 구독 | 공개 미리 보기 중 50개 |
| 이벤트 도메인 (수신) 속도 게시 합니다. | 공개 미리 보기 중 초당 5,000개 이벤트 |
| 요청 게시 | 초당 250 | | 26.055556 | 73 | 0.672708 | kor_Hang | 1.00001 |
bb4a55e4e7b12f7971011717f25b5296cd3cd1a1 | 757 | md | Markdown | README.md | tonytamps/builderio-templated-preview-url | 7a0048cf59247097635f10e0df04fc2cab971d7f | [
"MIT"
] | null | null | null | README.md | tonytamps/builderio-templated-preview-url | 7a0048cf59247097635f10e0df04fc2cab971d7f | [
"MIT"
] | null | null | null | README.md | tonytamps/builderio-templated-preview-url | 7a0048cf59247097635f10e0df04fc2cab971d7f | [
"MIT"
] | null | null | null | # builderio-templated-preview-url
> Allows the usage of template variables in the preview url of your model
[](https://www.npmjs.com/package/builderio-templated-preview-url) [](https://standardjs.com)
## Install
1. Go to Account > Plugins
2. Add a new plugin with the URL `https://unpkg.com/builderio-templated-preview-url`
3. Save
## Usage
1. Go to Model > Editing URL
2. Enter a [mustache.js](https://github.com/janl/mustache.js/) template like `http://localhost:8080/{{targeting.locale.0}}{{{targeting.urlPath}}}`
3. Save
## License
MIT © [tonytamps](https://github.com/tonytamps)
| 32.913043 | 255 | 0.738441 | yue_Hant | 0.327037 |
bb4a86c08207996b5c70d13a34d29818e3b223d2 | 1,815 | md | Markdown | README.md | BackEndTea/go-orb | 3cf67c8e6ac17a13f0cf9d4480a6dbecf45696a8 | [
"MIT"
] | null | null | null | README.md | BackEndTea/go-orb | 3cf67c8e6ac17a13f0cf9d4480a6dbecf45696a8 | [
"MIT"
] | null | null | null | README.md | BackEndTea/go-orb | 3cf67c8e6ac17a13f0cf9d4480a6dbecf45696a8 | [
"MIT"
] | null | null | null | # Go (Golang) Orb [](https://circleci.com/gh/CircleCI-Public/go-orb) [][reg-page] [](https://raw.githubusercontent.com/CircleCI-Public/go-orb/master/LICENSE) [](https://discuss.circleci.com/c/ecosystem/orbs)
*This orb is under active development and do not yet have a release. This orb cannot be used in a production/stable build yet.*
A Go Orb for CircleCI.
This orb allows you to do common Go related tasks on CircleCI such as install Go, download modules, caching, etc.
## Usage
Example use as well as a list of available executors, commands, and jobs are available on this orb's [registry page][reg-page].
## Resources
[CircleCI Orb Registry Page][reg-page] - The official registry page for this orb will all versions, executors, commands, and jobs described.
[CircleCI Orb Docs](https://circleci.com/docs/2.0/orb-intro/#section=configuration) - Docs for using and creating CircleCI Orbs.
## Contributing
We welcome [issues](https://github.com/CircleCI-Public/go-orb/issues) to and [pull requests](https://github.com/CircleCI-Public/go-orb/pulls) against this repository!
For further questions/comments about this or other orbs, visit the Orb Category of [CircleCI Discuss](https://discuss.circleci.com/c/orbs).
### Publishing
New versions of this orb are published by pushing a SemVer git tag by the Community & Partner Engineering Team.
[reg-page]: https://circleci.com/orbs/registry/orb/circleci/go
| 58.548387 | 597 | 0.766942 | eng_Latn | 0.644125 |
bb4b8591ba0df151e8af554e002dbfee54ff6851 | 145 | md | Markdown | README.md | fosslc/locationtech-events | 10e16b01930a3b6b8af6ca4a0c85d0742c22cfa5 | [
"Apache-2.0"
] | 1 | 2015-06-09T13:50:31.000Z | 2015-06-09T13:50:31.000Z | README.md | fosslc/locationtech-events | 10e16b01930a3b6b8af6ca4a0c85d0742c22cfa5 | [
"Apache-2.0"
] | 5 | 2015-10-02T13:32:28.000Z | 2017-12-12T15:03:07.000Z | README.md | EclipseFdn/locationtech-events | 10e16b01930a3b6b8af6ca4a0c85d0742c22cfa5 | [
"Apache-2.0"
] | 5 | 2016-07-21T13:49:26.000Z | 2021-02-07T06:46:59.000Z | locationtech-events
===================
Re-usable material for LocationTech events.
Webpage for the current tour: http://tour.locationtech.org
| 20.714286 | 58 | 0.703448 | eng_Latn | 0.497401 |
bb4bb246b55ed5373b1d59cf3ecdfd18bfd9e68f | 949 | md | Markdown | documentation/api/TextSerializer_TextSerializer(TextSerializationContext).md | monodop/DefaultEcs | 042222a6d150c899128e0972d60977aaabf8b0dc | [
"MIT-0"
] | 439 | 2018-02-24T13:47:52.000Z | 2022-03-30T15:58:53.000Z | documentation/api/TextSerializer_TextSerializer(TextSerializationContext).md | monodop/DefaultEcs | 042222a6d150c899128e0972d60977aaabf8b0dc | [
"MIT-0"
] | 145 | 2018-03-08T10:31:01.000Z | 2022-03-26T13:40:00.000Z | documentation/api/TextSerializer_TextSerializer(TextSerializationContext).md | monodop/DefaultEcs | 042222a6d150c899128e0972d60977aaabf8b0dc | [
"MIT-0"
] | 64 | 2019-01-30T21:15:29.000Z | 2022-03-28T17:53:10.000Z | #### [DefaultEcs](DefaultEcs.md 'DefaultEcs')
### [DefaultEcs.Serialization](DefaultEcs.md#DefaultEcs_Serialization 'DefaultEcs.Serialization').[TextSerializer](TextSerializer.md 'DefaultEcs.Serialization.TextSerializer')
## TextSerializer.TextSerializer(TextSerializationContext) Constructor
Initializes a new instance of the [TextSerializer](TextSerializer.md 'DefaultEcs.Serialization.TextSerializer') class.
```csharp
public TextSerializer(DefaultEcs.Serialization.TextSerializationContext context);
```
#### Parameters
<a name='DefaultEcs_Serialization_TextSerializer_TextSerializer(DefaultEcs_Serialization_TextSerializationContext)_context'></a>
`context` [TextSerializationContext](TextSerializationContext.md 'DefaultEcs.Serialization.TextSerializationContext')
The [TextSerializationContext](TextSerializationContext.md 'DefaultEcs.Serialization.TextSerializationContext') used to convert type during serialization/deserialization.
| 73 | 175 | 0.849315 | kor_Hang | 0.627108 |
bb4c679670ab96c45c73f1c735d6260194898949 | 2,641 | md | Markdown | README.md | misterjones/dedent | 27f1ecee53e03284abe572d0ac1a509919eeaffb | [
"MIT"
] | 1 | 2019-02-10T01:04:22.000Z | 2019-02-10T01:04:22.000Z | README.md | misterjones/dedent | 27f1ecee53e03284abe572d0ac1a509919eeaffb | [
"MIT"
] | null | null | null | README.md | misterjones/dedent | 27f1ecee53e03284abe572d0ac1a509919eeaffb | [
"MIT"
] | null | null | null | # `dedent`
Removes code-formatting indentation from multiline template literals.
## Description
Multiline template string literals are great! — Until they're not.
While it is nice not having to concatenate multiple strings ending in "`\n`"
together in order to create multiline blocks of text, it is annoying that
string literals drag along their excess code-indentation baggage wherever they
go.
For example:
```js
(function outputOverIndentedText() {
const culprit = 'code indentation'
console.log(`
This block
of text
contains the
${culprit}.
`)
}())
```
Outputs:
```txt
This block
of text
contains the
code indentation.
```
Note the wide margin created from the indentation of the code-formatting being
embedded into the string literal.
A "dedent" function — such as this one — solves this by removing the indentation
caused by code-formatting, and returns a block of text with what is assumed to
be the intended level of indentation.
> The idea is to determine which line has the smallest indent and to remove that
> indent from all lines. Additionally, leading and trailing whitespace is
> trimmed.
- Dr. Axel Rauschmayer, [_Handling whitespace in ES6 template literals_](http://2ality.com/2016/05/template-literal-whitespace.html)
## Usage
`dedent` can be called as a [tag function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#Tagged_templates):
```js
import textBlock from './dedent.js'
(function outputCorrectlyIndentedText() {
const culprit = 'code indentation'
console.log(textBlock`
This block
of text
does not contain the
${culprit}.
`)
}())
```
Output:
```txt
This block
of text
does not contain the
code indentation.
```
Or `dedent` can be called as a "standard" function:
```js
import textBlock from './dedent.js'
(function outputDedentedHtml() {
console.log(textBlock(`
<pre>
Why am I logging HTML
to the console?
</pre>
`))
const url = 'https://example.com/'
const hyperlink = `
<a href="${url}">
Click Me.
</a>
`
console.log(textBlock(hyperlink))
}())
```
Output:
```txt
<pre>
Why am I logging HTML
to the console?
</pre>
<a href="https://example.com/">
Click Me.
</a>
```
## Discussions
- [_Handling whitespace in ES6 template literals_](http://2ality.com/2016/05/template-literal-whitespace.html)
| Dr. Axel Rauschmayer, 2ality
- [_Multiline template strings that don't break indentation_](https://esdiscuss.org/topic/multiline-template-strings-that-don-t-break-indentation)
| ECMAScript Discussion Archives
| 22.008333 | 146 | 0.707686 | eng_Latn | 0.950174 |
bb4e19f1b0552f9372a58b98fffe2762794f59c0 | 226 | md | Markdown | site/content/form/sample-form.md | arnabn639/sample-netlify-cms | df3c84572a4d65612c775549ace569cbfd574208 | [
"MIT"
] | null | null | null | site/content/form/sample-form.md | arnabn639/sample-netlify-cms | df3c84572a4d65612c775549ace569cbfd574208 | [
"MIT"
] | null | null | null | site/content/form/sample-form.md | arnabn639/sample-netlify-cms | df3c84572a4d65612c775549ace569cbfd574208 | [
"MIT"
] | null | null | null | ---
title: First Form
first_name: Bijay
middle_name: Ranjan
last_name: Nath
address: >-
E-Block,1st Floor,Nirupama Abasan,Mujibar Rahaman Road, Shaileshnagar Doharia,
P.O.-Ganganagar, P.S.-Madhyamgram, Kolkata-700132
---
| 20.545455 | 80 | 0.752212 | eng_Latn | 0.300364 |
bb4e9c1edb5a8f6d486e4358045f254982aaf5c8 | 13,095 | md | Markdown | content/articles/save-and-load-stacked-ensembles-in-onnx/index.md | markiesar57/engineering-education | 39c774fda6fa8d7e1979129f3ac3042d92628769 | [
"Apache-2.0"
] | 74 | 2020-02-14T16:36:55.000Z | 2021-01-27T12:44:40.000Z | content/articles/save-and-load-stacked-ensembles-in-onnx/index.md | markiesar57/engineering-education | 39c774fda6fa8d7e1979129f3ac3042d92628769 | [
"Apache-2.0"
] | 1,343 | 2019-10-02T14:50:30.000Z | 2021-01-27T19:23:59.000Z | content/articles/save-and-load-stacked-ensembles-in-onnx/index.md | markiesar57/engineering-education | 39c774fda6fa8d7e1979129f3ac3042d92628769 | [
"Apache-2.0"
] | 245 | 2019-10-01T23:18:34.000Z | 2021-01-26T22:05:56.000Z | ---
layout: engineering-education
status: publish
published: true
url: /save-and-load-stacked-ensembles-in-onnx/
title: Saving and Loading Stacked Ensemble Classifiers in ONNX Format in Python
description: In this tutorial, the reader will learn how to build an ensemble classifiers. They will learn to save and load these models using ONNX format.
author: ian-njari
date: 2021-12-09T00:00:00-13:40
topics: [Machine Learning]
excerpt_separator: <!--more-->
images:
- url: /engineering-education/save-and-load-stacked-ensembles-in-onnx/hero.jpg
alt: Stacked Ensemble Classifiers in ONNX format in Python example image
---
Stacked ensemble models are learners that increase predictive performance over stand-alone learners by combining the results of two or several machine learning models and running them through a meta-learner.
<!--more-->
The stacked models are different (not a single type), unlike in bagging methods (just decision trees) where each model in the stack does not correct the predictions of the previous ones like it happens in boosting. You can learn how to build one such Ensemble model by reading [this article](/engineering-education/ensemble-learning-based-regression-model-using-python/) by [Adhinga Fredrick](/engineering-education/authors/adhinga-fredrick/).
[Open Neural Network Exchange](https://onnx.ai/) (ONNX) is an open-source format for deep learning and traditional machine learning developed by Microsoft that has a unified schema for saving models despite the library they were developed in.
Launched in December 2017, this gave data scientists and machine learning engineers a way to persist models without worrying about platform inconsistencies and library version deprecation.
It acts as a means to avoid vendor locking since ONNX models can be deployed on any platform - not just where they were trained.
Assume you trained an image recognition model on NVIDIA's GPUs. But, for operations purposes, you decide to deploy it to a production environment on Google's TPUs. Well, ONNX is a nifty tool to transfer the model between the two. Container-based methods for pushing models to the production environment using Docker can also be bypassed altogether.
For machine learning engineers who may want to ship models across platforms, or containerizing them, ONNX models can help avoid that all together.
### Table of contents
- Preparing the environments.
- Importing and preparing the data.
- Building and evaluating the classifier.
- Serializing the model to ONNX format.
- Loading the model using the ONNX runtime inference session.
### Prerequisites
- Basic knowledge of Python.
- Machine learning model building, evaluation, and validation in Scikit-Learn.
- Basic data manipulation skills.
- Python (with `pip`, `numpy`, `pandas`, and `sklearn`) installed on your computer or an online environment like Google Colab or Kaggle.
### Goal
In this article, you will learn how to:
- Install ONNX and `onnxruntime`
- Determine the ONNX input initial types.
- Serializing and saving a stacked ensemble to ONNX format.
- Loading it to production using an ONNX runtime inference session.
### Setting up environments
To install ONNX and `onnxruntime` on a local environment, run the following commands:
If you're using `pip`, on your terminal:
```bash
pip install onnx
pip install onnxruntime
```
If you're using Anaconda, on anaconda terminal:
```bash
conda install -c conda-forge onnx
conda install -c conda-forge onnxruntime
```
> Note: ONNX is not pre-installed in the runtime environments on Google Colab and Kaggle notebooks.
To install ONNX and `onnxruntime` on Google Colab or Kaggle:
```bash
!pip install onnx
!pip install onnxruntime
```
> Note: Online editors like `repl.it` may fail to run our code due to insufficient memory allocations.
### Importing and preparing the data
Let's start by importing `pandas` library and the dataset.
```python
import pandas as pd
path='https://raw.githubusercontent.com/iannjari/datasets/main/diabetes.csv'
df=pd.read_csv(path,engine='python')
print(df)
```
#### Dataset
We will be using the "Heart Failure Prediction Dataset" from [Kaggle](https://www.kaggle.com/fedesoriano/heart-failure-prediction).
This dataset is a combination of 5 other datasets containing 11 features in total. Here, the target feature to be classified is `Heart Disease`. A patient having heart disease or not is represented by 1 or 0, respectively.
You can read more about the dataset [here](https://www.kaggle.com/fedesoriano/heart-failure-prediction).
##### Output:

*Screenshot of the dataset by author*
We will separate out the target data column `Outcome` from the other feature columns as shown:
```python
target_name = "Outcome"
target = df[target_name]
data = df.drop(columns=[target_name])
print(data)
```
Split the data into training and testing partitions with a split of 70-30, as shown:
```python
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
data,target, test_size=0.33, random_state=42)
```
### Training and evaluating the stacked classifier
We shall employ a stack with a [random forest classifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html), [kNN classifier](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html), [gradient boosting classifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html), and a [logistic regressor](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) as a final model.
A random forest classifier uses a number of decision trees on randomly selected subsets of the data and makes decisions out of these trees based on votes. A k-Nearest Neighbors classifier classifies possible data points based on distance similarity.
A gradient boosing classifier combines many weak learning classifiers together to create a strong predictive model. The logistic regression is used to model data like linear regression and then predict the outcome that falls into classes, instead of having them as continuous values.
Let's import all the necessary packages:
```python
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier, GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
```
Then, we initialize the stack:
```python
clf=StackingClassifier(estimators=[
("rf",RandomForestClassifier(n_estimators=10,random_state=42)),
("gb",GradientBoostingClassifier(n_estimators=10,random_state=42)),
("knn",KNeighborsClassifier(n_neighbors=5))],final_estimator=LogisticRegression())
```
Now, let's build a pipeline, fit it on training data, and score it on test data:
```python
pipeline = make_pipeline(
StackingClassifier(estimators=[
("rf",RandomForestClassifier(n_estimators=10,random_state=42)),
("gb",GradientBoostingClassifier(n_estimators=10,random_state=42)),
("knn",KNeighborsClassifier(n_neighbors=5))],final_estimator=LogisticRegression()))
pipeline.fit(x_train,y_train)
print(pipeline.score(x_test,y_test))
```
**Output:**
```bash
0.7716535433070866
```
When evaluating the model using a confusion matrix as shown, we can get the precision, recall, and F1 scores:
```python
from sklearn.metrics import confusion_matrix
preds=pipeline.predict(x_test)
c_matrix=confusion_matrix(y_test, preds)
tn, fp, fn, tp = c_matrix.ravel()
precision= tp/(tp+fp)
misclassification= (fp+fn)/(tn+fn+tp+fp)
f_one=tp/(tp+0.5*(fp+fn))
print('Precision=',precision)
print('Misclassification=',misclassification)
print('F1 score=',f_one)
```
**Output:**
```bash
Precision= 0.6842105263157895
Misclassification= 0.2283464566929134
F1 score= 0.6419753086419753
```
Now that the model is trained and scoring well, let's save it and infer from it.
### Saving the model
To serialize (save) the model, we need to import ` convert_sklearn` from the `skl2onnx` package, along with `common.data_types` to define the types of our features as a parameter `initial_types`.
```python
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
```
The `convert_sklearn` function requires a parameter `initial_types` to save the model. Each data type of the data columns must be assigned to this parameter. For example, if the data contains 3 columns with `float` followed by 2 of `String` types, and 1 with `int64`, then the following would be the declaration:
```python
initial_types = [('feature_input', FloatTensorType([None, 3])),
('feature_input', StringTensorType([None, 2])),
('feature_input', FloatTensorType([None, 1]))]
```
In our case, the dataset has 8 `float` types.
> NOTE: Int can be treated as float, since it can be type casted.
So, we shall make the variable `initial_types` as:
```python
initial_types = [('feature_input', FloatTensorType([None, 8]))]
```
Now, we will go ahead and save the model by passing the model `pipeline` and `initial_types` to the `convert_sklearn` function as shown:
```python
onx = convert_sklearn(pipeline,
initial_types=
initial_types)
with open("stacked_clf.onnx", "wb") as f:
f.write(onx.SerializeToString())
```
The model is saved sucessfully.
> NOTE: If establishing the initial types is too challenging. For example, if the data has too many features, then the `to_onnx` method can do be used.
You just need to pass the `x_test` data (or one of it's column) as an argurment and ONNX extracts it automatically.
```python
# Use a section of data instead of x_test to avoid key errors
x=data.loc[44].to_numpy(dtype='float32')
# Give x as a keyword argument by using X=x
# Note case-sensivity
onx=skl2onnx.to_onnx(pipeline, X=x)
with open("stacked_clf.onnx", "wb") as f:
f.write(onx.SerializeToString())
```
### Loading the model using ONNX runtime inference session
To make predictions from the model, import `onnxruntime` and call `InferenceSession`.
```python
import onnxruntime as rt
sess = rt.InferenceSession("stacked_clf.onnx")
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
pred_onx = sess.run([label_name],
{input_name:
x_test.to_numpy(dtype='float32')})
```
`x_test` can be replaced by an array of the shape and type of the test data.
Let us see our predictions:
```python
print(pred_onx)
```
**Output:**
```bash
[array([0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0,
0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1,
0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0,
0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1,
0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0,
0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1,
1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0], dtype=int64)]
```
As you can see here, we have saved models in the ONNX format, and then tried to load them for prediction.
### Conclusion
In this tutorial, we learned how to install ONNX and `onnxruntime`, determine ONNX input initial types, serializing, saved a stacked ensemble to ONNX format, and, loaded it to production using an ONNX runtime inference session.
This model can now be served via any web application framework like Streamlit or Dash using Django or Flask via an API. In case of any issues with ONNX, you can raise an issue on [ONNX's GitHub](https://github.com/onnx/sklearn-onnx/issues).
You can find the full code [here](https://github.com/iannjari/scrapbook/blob/main/Stacked_Ensemble.ipynb).
Happy ML-ing!
### References
- [Tutorial on Building an Ensemble Learning Based Regression Model using Python](https://www.section.io/engineering-education/ensemble-learning-based-regression-model-using-python/), *Adhinga Fredrick, Sections Engineering*
- [ONNX Homepage](https://onnx.ai/)
- [ONNX sklearn documenation](http://onnx.ai/sklearn-onnx/introduction.html)
- [Common errors with onnxruntime](http://onnx.ai/sklearn-onnx/auto_examples/plot_errors_onnxruntime.html?highlight=errors)
- [Notebook with accompanying Source Code](https://github.com/iannjari/scrapbook/blob/main/Stacked_Ensemble.ipynb).
---
Peer Review Contributions by: [Srishilesh P S](/engineering-education/authors/srishilesh-p-s/) | 43.360927 | 540 | 0.736388 | eng_Latn | 0.958451 |
bb4ee2ede6c5b775297a54e9a7192fd8287b778c | 478 | md | Markdown | LeetcodeAlgorithms/354. Russian Doll Envelopes/question.md | Fenghuapiao/PyLeetcode | d804a62643fe935eb61808196a2c093ea9583654 | [
"MIT"
] | 3 | 2019-08-20T06:54:38.000Z | 2022-01-07T12:56:46.000Z | LeetcodeAlgorithms/354. Russian Doll Envelopes/question.md | yhangf/PyLeetcode | d804a62643fe935eb61808196a2c093ea9583654 | [
"MIT"
] | null | null | null | LeetcodeAlgorithms/354. Russian Doll Envelopes/question.md | yhangf/PyLeetcode | d804a62643fe935eb61808196a2c093ea9583654 | [
"MIT"
] | 2 | 2018-11-01T16:10:34.000Z | 2020-06-02T03:24:43.000Z | You have a number of envelopes with widths and heights given as a pair of integers (w, h). One envelope can fit into another if and only if both the width and height of one envelope is greater than the width and height of the other envelope.
What is the maximum number of envelopes can you Russian doll? (put one inside other)
Example:
Given envelopes = [[5,4],[6,4],[6,7],[2,3]], the maximum number of envelopes you can Russian doll is 3 ([2,3] => [5,4] => [6,7]).
| 53.111111 | 242 | 0.709205 | eng_Latn | 0.999934 |
bb4ef24b4cafe223292e11486a57346c9cf506c3 | 1,068 | md | Markdown | docs/BulkRevertProductChangesRequest.md | Yaksa-ca/eShopOnWeb | 70bd3bd5f46c0cb521960f64c548eab3e7d1df5c | [
"MIT"
] | null | null | null | docs/BulkRevertProductChangesRequest.md | Yaksa-ca/eShopOnWeb | 70bd3bd5f46c0cb521960f64c548eab3e7d1df5c | [
"MIT"
] | null | null | null | docs/BulkRevertProductChangesRequest.md | Yaksa-ca/eShopOnWeb | 70bd3bd5f46c0cb521960f64c548eab3e7d1df5c | [
"MIT"
] | null | null | null | # Yaksa.OrckestraCommerce.Client.Model.BulkRevertProductChangesRequest
Products are entities which represents a buyable item managed in a catalog.
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**CorrelationId** | **string** | The correlation id for the durable task. | [optional]
**FilterByPublicationStatus** | **string** | The publication status to filter the products by. The filtering isn't applied for explicitly explicitly passed Product Ids. | [optional]
**IncludeUncategorized** | **bool** | When set to true, will indicate that all of the uncategorized products should be updated. | [optional]
**ParentCategoryIds** | **List<string>** | The id-s of the categories, products of which should be updated. | [optional]
**ProductIds** | **List<string>** | The id values of the products to be updated. | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 66.75 | 186 | 0.69382 | eng_Latn | 0.768845 |
bb4f44745721b75991e65f9434c146657d61ef3a | 414 | md | Markdown | README.md | johnnyawesome/ProcessingChaosGame | 1e5e6e74bebcae8d3fb9a744bd6c142820d3ce00 | [
"MIT"
] | null | null | null | README.md | johnnyawesome/ProcessingChaosGame | 1e5e6e74bebcae8d3fb9a744bd6c142820d3ce00 | [
"MIT"
] | null | null | null | README.md | johnnyawesome/ProcessingChaosGame | 1e5e6e74bebcae8d3fb9a744bd6c142820d3ce00 | [
"MIT"
] | null | null | null | # ChaosGame
Processing Code that plays the Chaos Game
https://en.wikipedia.org/wiki/Chaos_game
Moving the mouse to the right slows the calculations down, moving it to the left fastens calculations

## More Information
[I blogged about this project in more detail](https://breaksome.tech/the-chaos-game-in-processing/)
| 31.846154 | 101 | 0.794686 | eng_Latn | 0.522849 |
bb4f4687e1c22fd906876485d6bd49f861a18682 | 7,032 | md | Markdown | README.md | CrackerCat/JiaGu360 | b6062407070ab3f604f379a3166f29e5e25ce3c1 | [
"Apache-2.0"
] | 2 | 2020-07-20T03:34:54.000Z | 2021-07-12T23:42:47.000Z | README.md | CrackerCat/JiaGu360 | b6062407070ab3f604f379a3166f29e5e25ce3c1 | [
"Apache-2.0"
] | null | null | null | README.md | CrackerCat/JiaGu360 | b6062407070ab3f604f379a3166f29e5e25ce3c1 | [
"Apache-2.0"
] | null | null | null | # JiaGu360
[  ](https://github.com/903600017/JiaGu360/release)
[](https://raw.githubusercontent.com/903600017/JiaGu360/master/LICENSE)
JiaGu360 根据360加固命令实现app快捷加固的插件,解放双手,实现自动化流程。
### Gradle插件使用方式
#### 下载360加固软件
[360加固软件下载地址](http://jiagu.360.cn/#/global/download)

#### 360加固 jiagu.jar,

#### 360加固 多渠道配置文件,

#### 配置build.gradle
在位于项目的根目录 `build.gradle` 文件中添加 ApkSign插件的依赖, 如下:
```groovy
buildscript {
dependencies {
classpath 'com.zf.plugins:JiaGu360:1.0.1'
}
}
```
并在当前App的 `build.gradle` 文件中apply这个插件
```groovy
apply plugin: 'jiagu360'
```
#### 配置插件(最简易配置)
```groovy
jiaGu360Config {
//必填,360用户名
userName "XXXXXX"
//必填, 360密码
passWord "XXXXXX"
//必填,360加固jiagu.jar包位置
jiaGuJarPath new File("D:\\XXXXX\\360jiagubao_windows_64\\jiagu\\jiagu.jar").absolutePath
items {
debug {
//必填, 需要签名的APK 路径
inputApkFilePath file("build/outputs/apk/tap_unsign.apk").absolutePath
}
// ...... 可以添加更多选项
}
}
```
#### 插件全部配置
```groovy
jiaGu360Config {
//必填, 360用户名
userName "XXXXXX"
//必填, 360密码
passWord "XXXXXX"
//必填, 360加固jar包位置
jiaGuJarPath new File("D:\\XXXXX\\360jiagubao_windows_64\\jiagu\\jiagu.jar").absolutePath
//加固配置项服务是否都支持
isSupportAll false
//统一配置 优先低于自定义配置--------------------------start----------------------------------------------------------------------
//android 签名配置名称,默认android默认的'debug'签名配置,signingName="debug"
signingName 'debug'
//加固apk的输出目录
outputApkDirPath new File("D:\\XXXXX\\360jiagubao_windows_64\\jiagu\\XXXX").absolutePath
//加固完成后是否打开输出目录,只支持windows。默认false
openOutputDir false
// 加固配置项服务-------------------------------start-------------------------------------
// 可选增强服务--------------start----------------------
isSupportCrashLong false //【崩溃日志分析】
isSupportX86 false //【x86支持】
isSupportAnalyse false//【加固数据分析】
isNocert false//【跳过签名校验】
// 可选增强服务--------------end----------------------
//高级加固选项-------------start------------------
isSupportVmp false//【全VMP保护】
isSupportDataProtection false//【本地数据文件保护】
isSupportAssetsProtection false// 【资源文件保护】
isSupportFileCheck false//【文件完整性校验】
isSupportPtrace false//【Ptrace防注入】
isSupportSoProtection false//【SO文件保护】
isSupportDex2cProtection false//【dex2C保护】
isSupportStringObfusProtection false//【字符串加密】
isSupportDexShadowProtection false//【DexShadow】
isSupportSoPrivateProtection false//【SO防盗用】
//高级加固选项-------------end------------------
// 加固配置项服务--------------------------------end---------------------------------------------
//自动签名
autosign false
//自定义文件生成多渠道,可以根据前面下载的360加固软件里的 “多渠道模板.txt” 编写
mulpkgFilePath =new File("D:\\XXXXX\\360jiagubao_windows_64\\jiagu\\多渠道模板.txt")
//签名配置项
signingInfo {
storeFilePath "E:\\Document\\XXXXXX.jks"
storePassword "XXXXXX"
keyAlias "XXXXXX"
keyPassword "XXXXXX"
}
//统一配置 --------------------------end----------------------------------------------------------------------
items {
release {
//必填, 需要加固的APK 路径
inputApkFilePath file("build/outputs/apk/XXXX.apk").absolutePath
//自定义配置-----------------------------start-----------------------------------------------------------------------
//android 签名配置名称,默认android默认的'debug'签名配置,signingName="debug"
signingName 'debug'
//加固apk的输出目录
outputApkDirPath new File("D:\\XXXXX\\360jiagubao_windows_64\\jiagu\\XXXX").absolutePath
//加固完成后是否打开输出目录,只支持windows。默认false
openOutputDir false
// 加固配置项服务-------------------------------start-------------------------------------
// 可选增强服务--------------start----------------------
isSupportCrashLong false //【崩溃日志分析】
isSupportX86 false //【x86支持】
isSupportAnalyse false//【加固数据分析】
isNocert false//【跳过签名校验】
// 可选增强服务--------------end----------------------
//高级加固选项-------------start------------------
isSupportVmp false//【全VMP保护】
isSupportDataProtection false//【本地数据文件保护】
isSupportAssetsProtection false// 【资源文件保护】
isSupportFileCheck false//【文件完整性校验】
isSupportPtrace false//【Ptrace防注入】
isSupportSoProtection false//【SO文件保护】
isSupportDex2cProtection false//【dex2C保护】
isSupportStringObfusProtection false//【字符串加密】
isSupportDexShadowProtection false//【DexShadow】
isSupportSoPrivateProtection false//【SO防盗用】
//高级加固选项-------------end------------------
// 加固配置项服务--------------------------------end---------------------------------------------
//自动签名
autosign false
//自定义文件生成多渠道,可以根据前面下载的360加固软件里的 “多渠道模板.txt” 编写
mulpkgFilePath =new File("D:\\XXXXX\\360jiagubao_windows_64\\jiagu\\多渠道模板.txt")
//签名配置项
signingInfo {
storeFilePath "E:\\Document\\XXXXXX.jks"
storePassword "XXXXXX"
keyAlias "XXXXXX"
keyPassword "XXXXXX"
}
//自定义配置-----------------------------end-----------------------------------------------------------------------
}
debug {
//需要加固的APK 路径
inputApkFilePath file("build/outputs/apk/XXXX.apk").absolutePath
}
// ...... 可以添加更多选项
}
}
```
**配置项具体解释:**
* 当 “统一配置” ,“自定义配置” 设置的参数都存在时, **自定义配置 > 统一配置 , 这是总的原则**
* 当`isSupportAll=true`时 ,统一配置里的可配置服务全部都支持
* `signingInfo` ,`signingName`都配置时,优化级为 `signingInfo` > `signingName`;当两个配置项都不配置时,默认使用 android项目里的默认debug签名。
* `signingName='release'` 签名信息配置的名称,

**生成apk加固包:**
`./gradlew jiagu360${item配置名称(首页字母大小)} `

如上面的配置,生成签名包需要执行如下命令:
`./gradlew apkSignRelease `
**360加固升级命令:**
`./gradlew jiagu360Update `
## Q&A
- [输出乱码](https://github.com/903600017/JiaGu360/wiki/Terminal-%E8%BE%93%E5%87%BA%E4%B9%B1%E7%A0%81%EF%BC%9F)?
## 技术支持
* Read The Fucking Source Code
* 通过提交issue来寻求帮助
* 联系我们寻求帮助.(QQ群:366399995)
## 贡献代码
* 欢迎提交issue
* 欢迎提交PR
## License
Copyright 2017 903600017
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 28.469636 | 141 | 0.610779 | yue_Hant | 0.46349 |
bb4fb0404dfb94ad197a68bb520bc9fa07906890 | 1,788 | md | Markdown | en/ydb/quickstart/document-api/aws-cli/create-table.md | barmex/docs | e7f6be6035c66c1ab52224c350bfbf1d1fb605e9 | [
"CC-BY-4.0"
] | null | null | null | en/ydb/quickstart/document-api/aws-cli/create-table.md | barmex/docs | e7f6be6035c66c1ab52224c350bfbf1d1fb605e9 | [
"CC-BY-4.0"
] | null | null | null | en/ydb/quickstart/document-api/aws-cli/create-table.md | barmex/docs | e7f6be6035c66c1ab52224c350bfbf1d1fb605e9 | [
"CC-BY-4.0"
] | null | null | null | ---
sourcePath: en/ydb/overlay/quickstart/document-api/aws-cli/create-table.md
---
# Creating a table
To create a table named `series` with `series_id` as the partition key and `title` as the sort key:
{% list tabs %}
* AWS CLI
Run the command by replacing `https://your-database-endpoint` with the endpoint of your DB:
{% note warning %}
To work with the AWS CLI from Windows, we recommend using the [WSL](https://docs.microsoft.com/ru-ru/windows/wsl/).
{% endnote %}
```bash
endpoint="https://your-database-endpoint"
aws dynamodb create-table \
--table-name series \
--attribute-definitions \
AttributeName=series_id,AttributeType=N \
AttributeName=title,AttributeType=S \
--key-schema \
AttributeName=series_id,KeyType=HASH \
AttributeName=title,KeyType=RANGE \
--endpoint $endpoint
```
Output:
```text
{
"TableDescription": {
"AttributeDefinitions": [
{
"AttributeName": "series_id",
"AttributeType": "N"
},
{
"AttributeName": "title",
"AttributeType": "S"
}
],
"TableName": "series",
"KeySchema": [
{
"AttributeName": "series_id",
"KeyType": "HASH"
},
{
"AttributeName": "title",
"KeyType": "RANGE"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": "2020-12-27T13:21:10+00:00",
"TableSizeBytes": 0,
"ItemCount": 0
}
}
```
{% endlist %}
| 26.294118 | 119 | 0.490492 | eng_Latn | 0.385826 |
bb4fb77f2055a4648bf84ffcf06002dd4024e24b | 3,487 | md | Markdown | 11. Functional Programming/2020/5. Functions/Tasks/readme.md | dimitarminchev/ITCareer | 0e36aa85e5f1c12de186bbe402f8c66042ce84a2 | [
"MIT"
] | 50 | 2018-09-22T07:18:13.000Z | 2022-03-04T12:31:04.000Z | 11. Functional Programming/2020/5. Functions/Tasks/readme.md | dimitarminchev/ITCareer | 0e36aa85e5f1c12de186bbe402f8c66042ce84a2 | [
"MIT"
] | null | null | null | 11. Functional Programming/2020/5. Functions/Tasks/readme.md | dimitarminchev/ITCareer | 0e36aa85e5f1c12de186bbe402f8c66042ce84a2 | [
"MIT"
] | 30 | 2019-03-22T21:48:16.000Z | 2022-02-14T08:48:52.000Z | # Упражнения: Функции от по-висок ред
## Зад. 1 Генериране на математически израз
Дефинирайте функция, която приема списък от числа и генерира математически израз в следния формат:
```
(((a + b) + c) + d)
```
където a,b,c,d са елементите на подадения списък ([a,b,c,d])
### Пример
| Вход | Изход |
|-------------|---------------------|
| [1,2,3,4,5] | "((((1+2)+3)+4)+5)" |
| [1] | "1" |
| [1,10] | "(1+10)" |
| [] | "" |
### Подсказки
1. Изполвайте **fold** функция, за да преминете през всички елементи от списъка
2. Дефинирайте помощна функция, която да приема 2 аргумента и да връща като резултат форматиран символен низ от тип
- "(a+b)"
- "b" - при а - празен символен низ
## Зад. 2 Генериране на математически израз
Дефинирайте функция, която приема списък от числа и генерира математически израз в следния формат:
```
(a + (b + (c+ d)))
```
където a,b,c,d са елементите на подадения списък ([a,b,c,d])
### Пример
| Вход | Изход |
|-------------|---------------------|
| [1,2,3,4,5] | "(1+(2+(3+(4+5))))" |
| [1] | "1" |
| [1,10] | "(1+10)" |
| [] | "" |
### Подсказки
1. Изполвайте **fold** функция, за да преминете през всички елементи от списъка
2. Дефинирайте помощна функция, която да приема 2 аргумента и да връща като резултат форматиран символен низ от тип
- "(a+b)"
- "b" - при а - празен символен низ
## Зад. 3 Компресиране на списък
Дефинирайте функция, която приема списък и го компресира, като премахва повтарящите се последователни елементи:
### Пример
| Вход | Изход |
|-----------------------------------------|-----------------|
| [1,1,1,1,1,1,1,1,1,1,1,1,2,3,4,5,5,7,8] | [1,2,3,4,5,7,8] |
| [1] | [1] |
| [1,10] | [1,10] |
| [] | [] |
### Подсказки
Използвайте **fold** функция за да обработите списъка
## Зад. 4 Дупликация на списъчни елементи
Дефинирайте функция, която приема списък и връща нов списък като дупликира всеки елемент от него.
### Пример
| Вход | Изход |
|-------------|-----------------------|
| [1,2,3,4] | [1,1,2,2,3,3,4,4] |
| [1,2,3,4,4] | [1,1,2,2,3,3,4,4,4,4] |
| [1] | [1,1] |
| [] | [] |
## Зад. 5 Репликация на списъчни елементи
Дефинирайте функция, която приема списък и число - n и връща нов списък като репликира всеки елемент от него n на брой пъти.
### Пример
| Вход | Изход |
|---------------|-----------------------|
| [1,2,3,4,5] 2 | [1,1,2,2,3,3,4,4,5,5] |
| [1,2] 5 | [1,1,1,1,1,2,2,2,2,2] |
| [1,2,3] 0 | [] |
| [] 10 | [] |
## Зад. 6 Отрязване на списък
Дефинирайте функция, която приема списък, начален индекс и краен индекс и връща като резултат нов списък - елементите от началния до крайния индекс от първоначалния списък.
Бележка: Ако крайният индекс надвишава дължината на списъка, функцията да връща всички елементи до края.
### Пример
| Вход | Изход |
|-----------------|-------------|
| [1,2,3,4,5] 1 2 | [2,3] |
| [1,2,3,4,5] 0 4 | [1,2,3,4,5] |
| [1,2,3,4,5] 1 0 | [] |
| [] 5 5 | [] |
| [1,2,3,4] 0 10 | [1,2,3,4] |
| 37.095745 | 172 | 0.478635 | bul_Cyrl | 0.998229 |
bb4fcd0e642a58a18ee62481fadb39c077d0278e | 4,474 | md | Markdown | algorithm_design_and_analysis/notes/markdown/L_5.md | ReversalS/NJU-open-resources | b1e3a69f8f401fa970640f890e48890af648c94f | [
"MIT"
] | 7 | 2019-04-02T15:31:45.000Z | 2020-07-25T00:47:53.000Z | algorithm_design_and_analysis/notes/markdown/L_5.md | ReversalS/NJU-open-resources | b1e3a69f8f401fa970640f890e48890af648c94f | [
"MIT"
] | null | null | null | algorithm_design_and_analysis/notes/markdown/L_5.md | ReversalS/NJU-open-resources | b1e3a69f8f401fa970640f890e48890af648c94f | [
"MIT"
] | 2 | 2021-11-28T14:01:05.000Z | 2021-11-28T14:01:07.000Z | [](https://github.com/i1123581321/NJU-open-resource)
# HeapSort
## Heap
> **定义 7.1** 堆:一棵二叉树满足”堆结构特性“和”堆偏序特性“,则称它为一个堆
>
> * 堆结构特性:一颗二叉树要么是完美的,要么仅比一颗完美二叉树在最底层少若干节点,且最底层的结点从左向右紧挨着依次排列
> * 堆偏序特性:堆结点中存储的元素满足父结点的值大于所有子结点的值
### 堆的具体实现:数组
对于一个大小为 $n$ 的堆,需要一个大小为 $n$ 的数组(下标为 $1, 2, \dots, n$),将堆中元素按深度从小到大,同一深度的元素从左到右,依次放入数组中。如此实现的堆中父子结点下标满足
* $\text{PARENT}(i) = \left\lfloor \dfrac{i}{2} \right\rfloor$
* $\text{LEFT}(i) = 2i$
* $\text{RIGHT}(i) = 2i+1$
### Fix-Heap
从堆顶取出一个元素后,修复其使其重新成为一个堆
* 结构特性修复:取出底层最右边的元素放在堆顶的位置
* 偏序特性修复:
* 将父结点值与两个子结点值比较,假设左结点是其中值最大的,交换父结点和左结点的值
* 递归地对左子树进行修复
每次修复比较两次,修复次数不会超过树的高度,故修复的最坏情况代价为 $O(\log n)$
基于数组实现可得到算法 FIX-HEAP (A, p):
```pseudocode
l := LEFT(p);
r := RIGHT(p);
next := p;
if l <= heapSize and A[l] > A[p] then
next := l;
if r <= heapSize and A[r] > A[next] then
next := r;
if next != p then
SWAP(A[p], A[next]);
FIX-HEAP(A, next);
```
### Construct-Heap
基于数组的堆实现已经完成了结构特性,故仅讨论偏序特性的构建,思想为:
* 从根开始构建,根的左右子树已经是堆,若根不符合偏序特性,则进行一次修复即可
* 递归地将左右子树构建为堆
堆的构建为线性时间
> Smart Guess:
> $$
> W(n) = 2W\left(\frac{n}{2} \right) + 2\log n
> $$
> 根据 Master Theorem
> $$
> W(n) = \Theta(n)
> $$
假设堆是完美二叉树,树中高度为 $h$ 的结点个数为 $\left\lceil\dfrac{n}{2^{h+1}} \right\rceil$,所有结点的修复代价总和为
$$
\begin{align}
W(n) &= \sum_{h = 0}^{\lfloor \log n \rfloor} \left\lceil\frac{n}{2^{h+1}} \right\rceil O(h)\\
&= O\left( n \sum_{h = 1}^{\lfloor\log n\rfloor} \frac{h}{2^h} \right) \\
&= O(n)
\end{align}
$$
故堆的构建最坏情况时间复杂度为 $O(n)$
可基于数组实现得到算法 CONSTRUCT-HEAP (A[1...n]):
```pseudocode
heapSize := n;
BUILD(1)
subroutine BUILD(p) begin
l := LEFT(p);
r := RIGHT(p);
if l <= heapSize then
BUILD(l);
if r <= heapSize then
BUILD(r);
FIX-HEAP(A, p);
```
### Understanding Heap
#### $k^{th}$ element
寻找堆中第 $k$ 大的元素,设堆大小为 $n$ 且 $k \ll n$,要求算法代价只与 $k$ 有关
#### Sum of heights
堆的所有结点的高度之和为 $n-1$,证明使用数学归纳法
## HeapSort
### Strategy
将输入的元素建立堆,取出堆顶的元素与下标为 $n$ 的元素交换,然后将前 $n-1$ 个元素修复为堆
### Algorithm
HEAP-SORT (A[1...n]):
```pseudocode
CONSTRUCT-HEAP(A[1...n]);
for i := n downto 2 do
SWAP(A[1], A[i]);
heapSise := heapSize - 1;
FIX-HEAP(A, 1)
```
### Analysis
#### Worst-case
显然
$$
W(n) = W_{cons}(n) + \sum_{k = 1}^{n - 1}W_{fix}(k)
$$
而由之前的结论
$$
W_{cons}(n) = \Theta(n) \text{ and } W_{fix}(k) \leqslant 2 \log k
$$
而
$$
2 \sum_{k = 1}^{n - 1}\lceil \log k\rceil \leqslant 2 \int _1^n \log e \ln x \mathrm{d}x = 2\log e (n \ln n - n) = \textcolor{red}{2(n \log n - 1.443 n)}
$$
所以
$$
W(n) \leqslant 2n\log n + \Theta(n)\\
W(n) = \Theta(n \log n)
$$
最坏情况时间复杂度为 $\Theta(n \log n)$
#### Average-case
平均情况时间复杂度同样为 $\Theta(n\log n)$
### Accelerated HeapSort
减少修复堆时的比较次数:
* Bubbling Up,即由下向上修复堆
* 修复时仅比较一次,然后再上浮调整
* 使用 Divide & Conquer:即每次向下调整 $\dfrac{1}{2}$ ,若是超过了则向上浮动调整,否则继续向下调整 $\dfrac{1}{2}$
相关算法:
Bubble-Up Heap Algorithm:
```c
void bubbleUpHeap(Element E[], int root, Element k, int vacant){
if(vacant == root){
E[vacant] = k;
} else {
int parent = vacant / 2;
if (K.key <= E[parent].key){
E[vacant] = K;
} else {
E[vacant] = E[parent];
bubbleUpHeap(E, root, K, parent);
}
}
}
```
Depth Bounded Filtering Down:
```c
int promote(Element E[], int hStop, int vacant, int h){
int vacStop;
if(h <= hStop){
vacStop = vacant;
} else if (E[2 * vacant].key <= E[2 * vacant +1].key){
E[vacant] = E[2 * vacant + 1];
vacStop = promote(E, hStop, 2 * vacant + 1, h - 1);
} else {
E[vacant] = E[2 * vacant];
vacStop = promote(E, hStop, 2 * vacant, h - 1);
}
return vacStop;
}
```
Fix-Heap Using Divide-and-Conquer:
```c
void fixHeapFast(Element E[], Element K, int vacant, int h){
if(h <= 1){
// Process heap of height 0 or 1
} else {
int hStop = h / 2;
int vacStop = promote(E, hStop, vacant, h);
int vacParent = vacStop / 2;
if(E[vacParent].key <= K.key){
E[vacStop] = E[vacParent];
bubbleUpHeap(E, vacant, K, vacParent);
} else {
fixHeapFast(E, K, vacStop, hStop);
}
}
}
```
一次调整最多调用 $t$ 次 promote 和 1 次 bubbleUpHaep,比较次数为
$$
\sum_{k = 1}^{t}\left\lceil \frac{h}{2^k} \right\rceil + \left\lceil \frac{h}{2^t} \right\rceil = h = \log(n+1)
$$
且要执行 $\log h$ 次检查是否需要继续调整的比较
对于 Accelerated HeapSort
$$
W(n) = n\log n + \Theta(n \log \log n)
$$
### More than Sorting
寻找第 $k$ 大元素
寻找前 $k$ 大元素
合并 $k$ 个排好序的列表
动态中位数 | 19.367965 | 153 | 0.587617 | yue_Hant | 0.347971 |
bb506339dcee3163c809f1211c0252c7cec8e19a | 1,604 | md | Markdown | doc/api/httplib_connect_websocket_client.md | arnevarholm/libhttp | d0b3288e446db42cc3099e8b83ada64cec8f1365 | [
"MIT"
] | 841 | 2016-12-16T20:14:18.000Z | 2022-03-28T00:59:20.000Z | doc/api/httplib_connect_websocket_client.md | arnevarholm/libhttp | d0b3288e446db42cc3099e8b83ada64cec8f1365 | [
"MIT"
] | 73 | 2016-12-19T09:34:25.000Z | 2021-10-04T08:14:58.000Z | doc/api/httplib_connect_websocket_client.md | arnevarholm/libhttp | d0b3288e446db42cc3099e8b83ada64cec8f1365 | [
"MIT"
] | 112 | 2017-04-21T05:06:20.000Z | 2022-03-31T15:21:35.000Z | # LibHTTP API Reference
### `httplib_connect_websocket_client( host, port, use_ssl, error_buffer, error_buffer_size, path, origin, data_func, close_func, user-data);`
### Parameters
| Parameter | Type | Description |
| :--- | :--- | :--- |
|**`host`**|`const char *`|The hostname or IP address of the server|
|**`port`**|`int`|The port on the server|
|**`use_ssl`**|`int`|Use SSL if this parameter is not equal to zero|
|**`error_buffer`**|`char *`|Buffer to store an error message|
|**`error_buffer_size`**|`size_t`|Size of the error message buffer including the NUL terminator|
|**`path`**|`const char *`|The server path to connect to, for example `/app` if you want to connect to `localhost/app`|
|**`origin`**|`const char *`|The value of the `Origin` HTTP header|
|**`data_func`**|`httplib_websocket_data_handler`|Callback which is used to process data coming back from the server|
|**`close_func`**|`httplib_websocket_close_handler`|Callback which is called when the connection is to be closed|
|**`user_data`**|`void *`|User supplied argument|
### Return Value
| Type | Description |
| :--- | :--- |
|`struct httplib_connection *`|A pointer to the connection structure, or NULL if connecting failed|
### Description
The function `httplib_connect_websocket_client()` connects to a websocket on a server as a client. Data and close events are processed with callback functions which must be provided in the call.
### See Also
* [`httplib_connect_client();`](httplib_connect_client.md)
* [`httplib_connect_client_secure();`](httplib_connect_client_secure.md)
| 47.176471 | 195 | 0.708229 | eng_Latn | 0.959693 |
bb50948335f58e8c2400099be427a6270d51a4f9 | 12,024 | md | Markdown | _components/lists.md | erkde/wai-website-theme | 4e1a7b631dabb09652684f9c9273e1d9e998722b | [
"MIT"
] | null | null | null | _components/lists.md | erkde/wai-website-theme | 4e1a7b631dabb09652684f9c9273e1d9e998722b | [
"MIT"
] | null | null | null | _components/lists.md | erkde/wai-website-theme | 4e1a7b631dabb09652684f9c9273e1d9e998722b | [
"MIT"
] | null | null | null | ---
title: "Lists"
lang: en
# translators: # Uncomment (remove #) for translations, one - name line per translator.
# - name: Translator 1
# - name: Translator 2
# contributors:
# - name: Contributor 1
# - name: Contributor 2
footer: > # Text in footer in HTML
<p> This is the text in the footer </p>
github:
repository: w3c/wai-website-theme
path: _components/lists.md
inline_css: |
---
{% include toc.html levels="2,3" %}
## Ordered List
### Code
{% capture example %}
1. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica.
1. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
2. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
1. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
2. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
2. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica.
{% endcapture %}
```md
{{example}}
```
### Example
{{example}}
## Unordered List
### Code
{% capture example %}
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{% endcapture %}
```md
{{example}}
```
### Example
{{example}}
### Alternative style (Circle) {#circle}
Use the class `alt` to have open circle list-icons.
#### Code
{% capture example %}
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicabo quasi vero.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.alt}
{% endcapture %}
```md
{{example}}
```
#### Example
{{example}}
## Description List
### Code
```md
[Selecting Web Accessibility Evaluation Tools](…)
: Provides guidance on choosing tools. It describes the features and functionality of different types of evaluation tools, and discusses things to consider for your situation.
[Web Accessibility Evaluation Tools List](…)
: Includes information on almost 100 tools. You can use the filters to narrow down the list to the types of tools you are interested in.
```
### Example
[Selecting Web Accessibility Evaluation Tools](…)
: Provides guidance on choosing tools. It describes the features and
functionality of different types of evaluation tools, and discusses
things to consider for your situation.
[Web Accessibility Evaluation Tools List](…)
: Includes information on almost 100 tools. You can use the filters to
narrow down the list to the types of tools you are interested in.
### Not Bold
To remove the boldness from the `<dt>` elements, use the `notbold` class:
#### Code
```md
[Selecting Web Accessibility Evaluation Tools](…)
: Provides guidance on choosing tools. It describes the features and functionality of different types of evaluation tools, and discusses things to consider for your situation.
[Web Accessibility Evaluation Tools List](…)
: Includes information on almost 100 tools. You can use the filters to narrow down the list to the types of tools you are interested in.
{:.notbold}
```
#### Example
[Selecting Web Accessibility Evaluation Tools](…)
: Provides guidance on choosing tools. It describes the features and
functionality of different types of evaluation tools, and discusses
things to consider for your situation.
[Web Accessibility Evaluation Tools List](…)
: Includes information on almost 100 tools. You can use the filters to
narrow down the list to the types of tools you are interested in.
{:.notbold}
## List variations
### Link List
#### Code
```html
<ul class="linklist">
<li><a href="#">{%raw%}{%include_cached icon.html name="check-circle" %}{%endraw%} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%raw%}{%include_cached icon.html name="chevron-right" %}{%endraw%} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%raw%}{%include_cached icon.html name="chevron-right" %}{%endraw%} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%raw%}{%include_cached icon.html name="chevron-right" %}{%endraw%} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%raw%}{%include_cached icon.html name="chevron-right" %}{%endraw%} <span class="visual-a">adaptive strategies</span></a></li>
</ul>
```
#### Example
<ul class="linklist">
<li><a href="#">{%include_cached icon.html name="check-circle" %} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%include_cached icon.html name="chevron-right" %} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%include_cached icon.html name="chevron-right" %} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%include_cached icon.html name="chevron-right" %} <span class="visual-a">adaptive strategies</span></a></li>
<li><a href="#">{%include_cached icon.html name="chevron-right" %} <span class="visual-a">adaptive strategies</span></a></li>
</ul>
### Two Columns
#### Code
```md
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.two.columns}
```
#### Example
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.two.columns}
### Four Columns
#### Code
```md
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.four.columns}
```
#### Example
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.four.columns}
### Checkbox List
#### Code
```md
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.checkbox}
```
#### Example
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.checkbox}
### List that does not look like a list {#nolist}
{% capture example %}
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
* Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explicab.
{:.nolist}
{% endcapture %}
#### Code
```md
{{example}}
```
#### Example
{{example}}
### No list with Images/Icons {#nolistimg}
You can specify the width of the image with an extra class (instead of `{:.nolist.withicons}` write `{:.nolist.withicons.sizeclass}`). Possible values for sizes are:
* `tiny`: 60px
* `mini`: 90px
* `small`: 120px
* `normal`: 240px (default)
On mobile, images float left and are half the size, with a minimum width of 60px.
{% capture example %}
* {:.left} {% include image.html src="picture.jpg" alt="Demo alt text" %} Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica.
* {:.right} {% include image.html src="picture.jpg" alt="Demo alt text" %} Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica.
{:.nolist.withicons}
{% endcapture %}
#### Code
```md
* {:.left} {%raw%}{% include image.html src="picture.jpg" alt="Demo alt text" %}{%endraw%} Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica.
* {:.right} {%raw%}{% include image.html src="picture.jpg" alt="Demo alt text" %}{%endraw%} Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Soluta minus harum sit eos ipsam aliquid eveniet explica.
{:.nolist.withicons}
```
#### Example
{{example}}
## Sentence list
Specify class `.sentence` if you want to display a list as a sentence:
{% capture example %}
* One
* Two
* Three
{:.sentence}
{% endcapture %}
### Example
{{ example }}
### Code
```md
{{ example }}
```
| 41.462069 | 555 | 0.74842 | eng_Latn | 0.521786 |
bb50b4100572a40e2af540b476cbff3e935ccec1 | 759 | md | Markdown | README.md | sebrock/sebrock | cfd3bcabb759b40d227a3a2340479071b738fe39 | [
"MIT"
] | null | null | null | README.md | sebrock/sebrock | cfd3bcabb759b40d227a3a2340479071b738fe39 | [
"MIT"
] | null | null | null | README.md | sebrock/sebrock | cfd3bcabb759b40d227a3a2340479071b738fe39 | [
"MIT"
] | null | null | null |
### Hi there 👋
⚡ Fun fact: These pretzels are making me thirsty!








| 36.142857 | 96 | 0.770751 | hun_Latn | 0.119944 |
bb511f4c4e033ac7daba7cb038cee89db534c5b1 | 307 | md | Markdown | _posts/2012-01-31-39-7183.md | ashmaroli/mycad.github.io | 47fb8e82e4801dd33a3d4335a3d4a6657ec604cf | [
"MIT"
] | null | null | null | _posts/2012-01-31-39-7183.md | ashmaroli/mycad.github.io | 47fb8e82e4801dd33a3d4335a3d4a6657ec604cf | [
"MIT"
] | null | null | null | _posts/2012-01-31-39-7183.md | ashmaroli/mycad.github.io | 47fb8e82e4801dd33a3d4335a3d4a6657ec604cf | [
"MIT"
] | null | null | null | ---
layout: post
amendno: 39-7183
cadno: CAD2012-A380-03
title: 检查翼肋底部
date: 2012-01-31 00:00:00 +0800
effdate: 2012-01-31 00:00:00 +0800
tag: A380
categories: 民航中南地区管理局适航审定处
author: 朱江
---
##适用范围:
本指令适用于序列号(S/N)为01、03、04、05、06、07、08、09、10、12、13、16、17、19、20、21、23、33、34及45的空客A380-841、A380-842及A380-861飞机。
| 19.1875 | 106 | 0.723127 | yue_Hant | 0.240761 |
bb51aa16f4945d8c7019480991defa2933852227 | 7,509 | md | Markdown | docs/ios/platform/search/web-markup.md | yoichinak/xamarin-docs.ja-jp | b75b04a6bad506b9e4f75fb4fa92afad1b4ec8f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ios/platform/search/web-markup.md | yoichinak/xamarin-docs.ja-jp | b75b04a6bad506b9e4f75fb4fa92afad1b4ec8f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ios/platform/search/web-markup.md | yoichinak/xamarin-docs.ja-jp | b75b04a6bad506b9e4f75fb4fa92afad1b4ec8f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Xamarin で Web マークアップを使用して検索する
description: このドキュメントでは、Xamarin iOS アプリにリンクする web ベースの検索結果を作成する方法について説明します。 ここでは、web コンテンツのインデックス作成を有効にする方法、アプリの web サイトを検出可能にする方法、スマートアプリバナー、ユニバーサルリンクなどについて説明します。
ms.prod: xamarin
ms.assetid: 876315BA-2EF9-4275-AE33-A3A494BBF7FD
ms.technology: xamarin-ios
author: davidortinau
ms.author: daortin
ms.date: 03/20/2017
ms.openlocfilehash: b578d1d171c6b8e91e76758f4c979fbc8a1b6eaa
ms.sourcegitcommit: 00e6a61eb82ad5b0dd323d48d483a74bedd814f2
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/29/2020
ms.locfileid: "91436992"
---
# <a name="search-with-web-markup-in-xamarinios"></a>Xamarin で Web マークアップを使用して検索する
Web サイトを介してコンテンツへのアクセスを提供するアプリ (アプリ内からだけでなく) は、Apple によってクロールされる特別なリンクを使用して web コンテンツをマークし、ユーザーの iOS 9 デバイスでアプリへのディープリンクを提供することができます。
IOS アプリが既にモバイルディープリンクをサポートしていて、web サイトにアプリ内のコンテンツへのディープリンクが表示されている場合、Apple の _Applebot_ web クローラーはこのコンテンツのインデックスを作成し、クラウドインデックスに自動的に追加します。
[](web-markup-images/webmarkup01.png#lightbox)
Apple は、これらの結果をスポットライト検索および Safari 検索結果に表示します。
ユーザーがこれらの結果のいずれかをタップした場合 (およびアプリがインストールされている場合)、アプリのコンテンツに移動します。
[](web-markup-images/webmarkup02.png#lightbox)
## <a name="enabling-web-content-indexing"></a>Web コンテンツのインデックス作成を有効にする
Web マークアップを使用してアプリのコンテンツを検索できるようにするには、次の4つの手順が必要です。
1. アプリの web サイトを検出してインデックスを作成するには、iTunes Connect の **サポート** または **マーケティング** web サイトとして定義します。
2. モバイルディープリンクを実装するために、アプリの web サイトに必要なマークアップが含まれていることを確認します。 詳細については、以下のセクションを参照してください。
3. IOS アプリでディープリンクの処理を有効にします。
4. アプリの web サイトに表示される構造化データのマークアップを追加して、エンドユーザーに豊富で魅力的な結果を提供します。 この手順は厳密には必須ではありませんが、Apple では強くお勧めします。
以下のセクションでは、これらの手順について詳しく説明します。
## <a name="make-your-apps-website-discoverable"></a>アプリの Web サイトを検出可能にする
Apple がアプリの web サイトを検索する最も簡単な方法は、iTunes Connect を使用して Apple にアプリを送信するときに、 **サポート** または **マーケティング** web サイトとして使用することです。
## <a name="using-smart-app-banners"></a>スマートアプリバナーの使用
アプリに明確なリンクを表示するには、web サイトにスマートアプリバナーを提供します。 アプリがまだインストールされていない場合、Safari は自動的にアプリをインストールするようにユーザーに求めます。 それ以外の場合は、[ **表示** ] リンクをタップして、web サイトからアプリを起動することができます。 たとえば、スマートアプリバナーを作成するには、次のコードを使用します。
```html
<meta name="AppName" content="app-id=123456, app-argument=http://company.com/AppName">
```
詳細については、Apple の「 [アプリをスマートアプリのバナーで昇格](https://developer.apple.com/library/ios/documentation/AppleApplications/Reference/SafariWebContent/PromotingAppswithAppBanners/PromotingAppswithAppBanners.html) する」を参照してください。
## <a name="using-universal-links"></a>ユニバーサルリンクの使用
IOS 9 の新機能であるユニバーサルリンクを使用すると、次のようなスマートアプリバナーまたは既存のカスタム URL スキームに代わる優れた方法となります。
- **一意** –同じ URL を複数の web サイトで要求することはできません。
- **Secure** – web サイトが所有者であり、アプリに有効なリンクがあることを確認するために、web サイトに署名入り証明書が必要です。
- **柔軟** : エンドユーザーは、URL が web サイトまたはアプリを起動するかどうかを制御できます。
- **Universal** –同じ URL を使用して、web サイトとアプリのコンテンツの両方を定義できます。
## <a name="using-twitter-cards"></a>Twitter カードの使用
Twitter カードを使用して、アプリのコンテンツへのディープリンクを提供できます。 次に例を示します。
```html
<meta name="twitter:app:name:iphone" content="AppName">
<meta name="twitter:app:id:iphone" content="AppNameID">
<meta name="twitter:app:url:iphone" content="AppNameURL">
```
詳細については、Twitter の [Twitter カードのプロトコル](https://developer.twitter.com/en/docs/tweets/optimize-with-cards/overview/abouts-cards) に関するドキュメントを参照してください。
## <a name="using-facebook-app-links"></a>Facebook アプリリンクの使用
Facebook アプリリンクを使用して、アプリのコンテンツへのディープリンクを提供できます。 次に例を示します。
```html
<meta property="al:ios:app_name" content="AppName">
<meta property="al:ios:app_store_id" content="AppNameID">
<meta property="al:ios:url" content="AppNameURL">
```
詳細については、Facebook の [アプリリンク](https://developers.facebook.com/docs/applinks) に関するドキュメントを参照してください。
## <a name="opening-deep-links"></a>ディープリンクを開く
Xamarin. iOS アプリでディープリンクを開いて表示するためのサポートを追加する必要があります。 **AppDelegate.cs**ファイルを編集し、メソッドをオーバーライドして `OpenURL` カスタム URL 形式を処理します。 次に例を示します。
```csharp
public override bool OpenUrl (UIApplication application, NSUrl url, string sourceApplication, NSObject annotation)
{
// Handling a URL in the form http://company.com/appname/?123
try {
var components = new NSUrlComponents(url,true);
var path = components.Path;
var query = components.Query;
// Is this a known format?
if (path == "/appname") {
// Display the view controller for the content
// specified in query (123)
return ContentViewController.LoadContent(query);
}
} catch {
// Ignore issue for now
}
return false;
}
```
上のコードでは、 `/appname` `query` `123` 要求されたコンテンツをユーザーに表示するために、(この例では) の値をアプリのカスタムビューコントローラーに渡すとを含む URL を探しています。
## <a name="providing-rich-results-with-structured-data"></a>構造化データを使用した豊富な結果の提供
構造化されたデータマークアップを含めることにより、タイトルと説明だけではなく、豊富な検索結果をエンドユーザーに提供できます。 構造化データマークアップを使用して、画像、アプリ固有のデータ (評価など)、結果に対するアクションを含めます。
豊富な結果を得ることができ、より多くのユーザーを魅力的して、クラウドベースの検索インデックスの順位を向上させることができます。
構造化データマークアップを提供するオプションの1つに、Open Graph を使用する方法があります。 次に例を示します。
```html
<meta property="og:image" content="http://company.com/appname/icon.jpg">
<meta property="og:audio" content="http://company.com/appname/theme.m4a">
<meta property="og:video" content="http://company.com/appname/tutorial.mp4">
```
詳細については、「 [Open Graph](https://ogp.me) 」 web サイトを参照してください。
構造化データマークアップのもう1つの一般的な形式は、schema. org のマイクロデータ形式です。 次に例を示します。
```html
<div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
<span itemprop="ratingValue">4** stars -
<span itemprop="reviewCount">255** reviews
```
同じ情報は、次のようにスキーマで表すことができます。組織の JSON-LD 形式:
```html
<script type="application/ld+json">
"@content":"http://schema.org",
"@type":"AggregateRating",
"ratingValue":"4",
"reviewCount":"255"
</script>
```
Web サイトのメタデータの例を次に示します。この例では、リッチな検索結果をエンドユーザーに提供しています。
[](web-markup-images/deeplink01.png#lightbox)
現在、Apple は schema.org からの次のスキーマの種類をサポートしています。
- 集積
- ImageObject
- InteractionCount
- オファー
- 組織
- PriceRange
- レシピ
- SearchAction
これらのスキームの種類の詳細については、 [schema.org](https://schema.org)を参照してください。
## <a name="providing-actions-with-structured-data"></a>構造化データを使用したアクションの提供
特定の種類の構造化データを使用すると、エンドユーザーが検索結果を実行できるようになります。 現在、次のアクションがサポートされています。
- 電話番号をダイヤルしています。
- 指定されたアドレスへのマップの方向を取得しています。
- オーディオファイルまたはビデオファイルを再生しています。
たとえば、電話番号をダイヤルするアクションを定義すると、次のようになります。
```html
<div itemscope itemtype="http://schema.org/Organization">
<span itemprop="telephone">(408) 555-1212**
```
この検索結果をエンドユーザーに表示すると、小さいスマートフォンアイコンが結果に表示されます。 ユーザーがアイコンをタップすると、指定した数値が呼び出されます。
次の HTML は、検索結果からオーディオファイルを再生するアクションを追加します。
```html
<div itemscope itemtype="http://schema.org/AudioObject">
<span itemprop="contentUrl">http://company.com/appname/greeting.m4a**
```
最後に、次の HTML では、検索結果から方向を取得するアクションを追加します。
```html
<div itemscope itemtype="http://schema.org/PostalAddress">
<span itemprop="streetAddress">1 Infinite Loop**
<span itemprop="addressLocality">Cupertino**
<span itemprop="addressRegion">CA**
<span itemprop="postalCode">95014**
```
詳細については、Apple の [アプリ検索開発者向けサイト](https://developer.apple.com/ios/search/)を参照してください。
## <a name="related-links"></a>関連リンク
- [iOS 9 のサンプル](/samples/browse/?products=xamarin&term=Xamarin.iOS%2biOS9)
- [iOS 9 (開発者向け)](https://developer.apple.com/ios/pre-release/)
- [iOS 9.0](https://developer.apple.com/library/prerelease/ios/releasenotes/General/WhatsNewIniOS/Articles/iOS9.html)
- [アプリ検索のプログラミングガイド](https://developer.apple.com/library/prerelease/ios/documentation/General/Conceptual/AppSearch/index.html#//apple_ref/doc/uid/TP40016308) | 35.587678 | 212 | 0.785058 | yue_Hant | 0.733494 |
bb51c920c1cebfd529e20ee3b0d2da07425ab4c7 | 1,770 | md | Markdown | docs/technical-details/self-hosted/file-storage.md | apanzerj/insights-docs | 3fdb7b6e4ac1ef338c2fe30477a1e07c74867718 | [
"Apache-2.0"
] | null | null | null | docs/technical-details/self-hosted/file-storage.md | apanzerj/insights-docs | 3fdb7b6e4ac1ef338c2fe30477a1e07c74867718 | [
"Apache-2.0"
] | null | null | null | docs/technical-details/self-hosted/file-storage.md | apanzerj/insights-docs | 3fdb7b6e4ac1ef338c2fe30477a1e07c74867718 | [
"Apache-2.0"
] | null | null | null | # File Storage
When clusters report back to Fairwinds Insights (e.g. with a list of Trivy scans, or Polaris results),
Insights stores that data as a JSON file. In order to use Insights, you'll need a place to store those
files. Currently we support two options
* Amazon S3
* [Minio](https://min.io/), an open source alternative to S3
In the default installation, we use an ephemeral instance of [minio](https://min.io/),
but you'll want something more resilient when running in production to ensure you don't lose
any data.
## Amazon S3
To use Amazon S3, set your bucket name and region in _values.yaml_:
```yaml
reportStorage:
strategy: s3
bucket: your-bucket-name
awsRegion: us-east-1
```
You'll also need to specify your AWS access key and secret in _secrets.yaml_:
```yaml
apiVersion: v1
data:
aws_access_key_id: aGVsbG93b3JsZA==
aws_secret_access_key: aGVsbG93b3JsZA==
kind: Secret
metadata:
name: fwinsights-secrets
type: Opaque
```
Note that if you're using other AWS integrations (like SES below) they will use the same AWS credentials.
## Minio
You can use your own instance of Minio, or install a copy of Minio alongside Insights.
To have the Insights chart install Minio, you can configure it with the `minio` option:
```yaml
reportStorage:
strategy: minio
minio:
install: true
accessKey: fwinsights
secretKey: fwinsights
persistence:
enabled: true
```
In particular, you should set `minio.persistence.enabled=true` to use a PersistentVolume for your
data. You can see the [full chart configuration here](https://github.com/helm/charts/tree/master/stable/minio)
To use an existing installation of Minio, just set `reportStorage.minioHost`
```yaml
reportStorage:
strategy: minio
minioHost: minio.example.com
```
| 28.095238 | 110 | 0.757062 | eng_Latn | 0.980397 |
bb523558efd5e73694e0efd3c77db02676d0f0a2 | 2,714 | md | Markdown | mixed-reality-docs/object-collection.md | dmckinnon/mixed-reality | de3c23fa1f248c7c43f894ce3a04f8c4d7965ddf | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-08-06T10:37:35.000Z | 2020-08-06T10:37:35.000Z | mixed-reality-docs/object-collection.md | msatranjr/mixed-reality | c61579c44cf4b5e9e78a78f5425999fbdd5dc406 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | mixed-reality-docs/object-collection.md | msatranjr/mixed-reality | c61579c44cf4b5e9e78a78f5425999fbdd5dc406 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2018-10-02T22:05:21.000Z | 2018-10-02T22:18:24.000Z | ---
title: Object collection
description: Object collection is a layout control which helps you lay out an array of objects in a predefined three-dimensional shape.
author: cre8ivepark
ms.author: dongpark
ms.date: 03/21/2018
ms.topic: article
keywords: Windows Mixed Reality, controls, design
---
# Object collection
Object collection is a layout control which helps you lay out an array of objects in a predefined three-dimensional shape. It supports four different surface styles - **plane, cylinder, sphere** and **scatter**. You can adjust the radius and size of the objects and the space between them. Object collection supports any object from Unity - both 2D and 3D. In the **[Mixed Reality Toolkit](https://github.com/Microsoft/MixedRealityToolkit-Unity/blob/master/Assets/HoloToolkit-Examples/UX/Readme/README_ObjectCollection.md)**, we have created Unity script and [example scene](https://github.com/Microsoft/MixedRealityToolkit-Unity/blob/master/Assets/HoloToolkit-Examples/UX/Scenes/ObjectCollectionExample.unity) that will help you create an object collection.
<br>
*Object collection used in the Periodic Table of the Elements sample app*
## Object collection examples
[Periodic Table of the Elements](periodic-table-of-the-elements.md) is a sample app that demonstrates how Object collection works. It uses Object collection to lay out 3D chemical element boxes in different shapes.
<br>
*Object collection examples shown in the Periodic Table of the Elements sample app*
### 3D objects
You can use Object collection to lay out imported 3D objects. The example below shows a plane and a cylinder layout of some 3D chair objects.
<br>
*Examples of plane and cylindrical layouts of 3D objects*
### 2D objects
You can also use 2D images with Object collection. The examples below demonstrate how 2D images can be displayed in a grid.

<br>
*Examples of 2D images with object collection*
## See also
* [Scripts and prefabs for Object collection in the Mixed Reality Toolkit on GitHub](https://github.com/Microsoft/MixedRealityToolkit-Unity/tree/master/Assets/HoloToolkit-Examples/UX)
* [Interactable object](interactable-object.md)
| 59 | 759 | 0.786662 | eng_Latn | 0.970017 |
bb523ac8e5aeeaed6e8b61d9817a23bcbb0ef0d1 | 8,688 | md | Markdown | articles/active-directory-b2c/identity-protection-investigate-risk.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory-b2c/identity-protection-investigate-risk.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory-b2c/identity-protection-investigate-risk.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Examiner le risque avec Azure Active Directory B2C Identity Protection
description: Découvrez comment examiner des utilisateurs à risque et des détections dans Azure AD B2C Identity Protection.
services: active-directory
ms.service: active-directory
ms.subservice: conditional-access
ms.topic: overview
ms.date: 03/03/2021
ms.custom: project-no-code
ms.author: mimart
author: msmimart
manager: celested
zone_pivot_groups: b2c-policy-type
ms.openlocfilehash: f15fd789264922865acb792bdb766b9624665d91
ms.sourcegitcommit: 3de22db010c5efa9e11cffd44a3715723c36696a
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 05/10/2021
ms.locfileid: "109654757"
---
# <a name="investigate-risk-with-identity-protection-in-azure-ad-b2c"></a>Examiner les risques avec Identity Protection dans Azure AD B2C
[!INCLUDE [b2c-public-preview-feature](../../includes/active-directory-b2c-public-preview.md)]
Identity Protection assure la détection des risques en continu pour votre locataire Azure AD B2C. Le service permet aux organisations de découvrir, d’examiner et de corriger les risques basés sur l’identité. Identity Protection est fourni avec des rapports sur les risques qui peuvent être utilisés pour examiner les risques liés à l’identité dans les locataires Azure AD B2C. Dans cet article, vous allez apprendre à examiner et à atténuer les risques.
## <a name="overview"></a>Vue d’ensemble
Azure AD B2C Identity Protection fournit deux rapports. Le rapport *Utilisateurs à risque* permet aux administrateurs de savoir quels utilisateurs sont à risque et de connaître les détails relatifs aux détections. Le rapport *Détections de risque* fournit des informations sur chaque détection de risque, notamment le type, les autres risques déclenchés au même moment, l’emplacement de la tentative de connexion et bien plus encore.
Chaque rapport démarre avec une liste de toutes les détections pour la période indiquée en haut du rapport. Les rapports peuvent être filtrés à l’aide des filtres situés dans la partie supérieure du rapport. Les administrateurs peuvent choisir de télécharger les données ou d’utiliser [l’API MS Graph et le Kit de développement logiciel (SDK) Microsoft Graph PowerShell](../active-directory/identity-protection/howto-identity-protection-graph-api.md) pour exporter les données en continu.
## <a name="service-limitations-and-considerations"></a>Limitations et considérations relatives au service
Lorsque vous utilisez Identity Protection, tenez compte des éléments suivants :
- Identity Protection est activé par défaut.
- Identity Protection est disponible pour les identités locales et sociales, telles que Google ou Facebook. Pour les identités sociales, l’accès conditionnel doit être activé. La détection est limitée, car les informations d’identification des comptes sociaux sont gérées par le fournisseur d’identité externe.
- Dans les locataires Azure AD B2C, seul un sous-ensemble des [détections de risque d’Azure AD Identity Protection](../active-directory/identity-protection/overview-identity-protection.md) est disponible. Les détections de risque suivantes sont prises en charge par Azure AD B2C :
|Type de détection des risques |Description |
|---------|---------|
| Voyage inhabituel | Connexion à partir d’un emplacement inhabituel par rapport aux dernières connexions de l’utilisateur. |
|Adresse IP anonyme | Connexion à partir d’une adresse IP anonyme (par exemple : navigateur Tor, VPN anonymes). |
|Adresse IP liée à un programme malveillant | Connexion à partir d’une adresse IP liée à un programme malveillant. |
|Propriétés de connexion inhabituelles | Connexion avec des propriétés inhabituelles pour l’utilisateur concerné. |
|L’administrateur a confirmé que cet utilisateur est compromis | Un administrateur a indiqué qu’un utilisateur a été compromis. |
|Pulvérisation de mots de passe | Connexion par le biais d’une attaque par pulvérisation de mots de passe. |
|Azure AD Threat Intelligence | Les sources de renseignements sur les menaces internes et externes de Microsoft ont identifié un modèle d’attaque connu. |
## <a name="pricing-tier"></a>Niveau tarifaire
Azure AD B2C Premium P2 est requis pour certaines fonctionnalités d’Identity Protection. Si nécessaire, [passez au niveau tarifaire Azure AD B2C Premium P2](./billing.md). Le tableau suivant récapitule les fonctionnalités d’Identity Protection et le niveau tarifaire requis.
|Fonctionnalité |P1 |P2|
|----------|:-----------:|:------------:|
|Rapport sur les utilisateurs à risque |✓ |✓ |
|Détails du rapport sur les utilisateurs à risque | |✓ |
|Correction du rapport sur les utilisateurs à risque | ✓ |✓ |
|Rapport sur les détections de risques |✓|✓|
|Détails du rapport sur les détections de risque ||✓|
|Téléchargement du rapport | ✓| ✓|
|Accès à l’API MS Graph | ✓| ✓|
## <a name="prerequisites"></a>Prérequis
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
## <a name="investigate-risky-users"></a>Examiner des utilisateurs à risque
Les informations indiquées dans le rapport Utilisateurs à risque permettent aux administrateurs de trouver :
- L’**état du risque**, indiquant quels utilisateurs sont **à risque**, ont vu leur risque **corrigé** ou ont vu leur risque **ignoré**
- Détails sur les détections
- Historique de toutes les connexions à risque
- Historique des risques
Les administrateurs peuvent ensuite choisir d’agir sur ces événements. Les administrateurs peuvent choisir d’effectuer les opérations suivantes :
- Réinitialiser le mot de passe de l’utilisateur
- Confirmer la compromission de l’utilisateur
- Ignorer le risque lié à l’utilisateur
- Empêcher l’utilisateur de se connecter
- Effectuer d’autres examens au moyen d’Azure ATP
Un administrateur peut choisir d’ignorer le risque d’un utilisateur dans le portail Azure ou par programmation par le biais de l’API Microsoft Graph [Ignorer le risque lié à l’utilisateur](https://docs.microsoft.com/graph/api/riskyusers-dismiss?view=graph-rest-beta&preserve-view=true). Des privilèges d’administrateur sont nécessaires pour ignorer le risque lié à un utilisateur. La correction d’un risque peut être effectuée par l’utilisateur à risque ou par un administrateur au nom de l’utilisateur, notamment par le biais d’une réinitialisation de mot de passe.
### <a name="navigating-the-risky-users-report"></a>Consultation du rapport sur les utilisateurs à risque
1. Connectez-vous au [portail Azure](https://portal.azure.com/).
1. Sélectionnez l’icône **Annuaire et abonnement** dans la barre d’outils du portail, puis sélectionnez le répertoire qui contient votre locataire Azure AD B2C.
1. Sous **Services Azure**, sélectionnez **Azure AD B2C**. Vous pouvez également utiliser la zone de recherche pour rechercher et sélectionner **Azure AD B2C**.
1. Sous **Sécurité**, sélectionnez **Utilisateurs à risque (préversion)** .

La sélection d’entrées individuelles développe une fenêtre de détails sous les détections. L’affichage des détails permet aux administrateurs d’investiguer et d’effectuer des actions lors de chaque détection.

## <a name="risk-detections-report"></a>Rapport sur les détections de risques
Le rapport sur les détections de risque contient des données filtrables correspondant aux 90 derniers jours (3 mois).
Les informations indiquées dans le rapport des détections de risques permettent aux administrateurs de trouver :
- Des informations sur chaque détection de risques, y compris le type
- Les autres risques déclenchés en même temps.
- L’emplacement de la tentative de connexion.
Les administrateurs peuvent ensuite choisir de revenir au rapport des risques ou des connexions de l’utilisateur pour effectuer des actions en fonction des informations recueillies.
### <a name="navigating-the-risk-detections-report"></a>Consultation du rapport sur les détections de risque
1. Dans le portail Azure, recherchez et sélectionnez **Azure AD B2C**.
1. Sous **Sécurité**, sélectionnez **Détections de risque (préversion)** .

## <a name="next-steps"></a>Étapes suivantes
- [Ajouter l’accès conditionnel à un flux d’utilisateur](conditional-access-user-flow.md)
| 67.875 | 566 | 0.779121 | fra_Latn | 0.976289 |
bb524eacfda88d74c2b3cf1ddf6e423f93be92e1 | 1,234 | md | Markdown | site/pages/action-script/reference/flow-control/index.md | 3v1lW1th1n/developer.bigfix.com | 883f749c1b81b3401829e337de13c51702036ff8 | [
"Apache-2.0"
] | 20 | 2015-07-03T14:03:04.000Z | 2022-03-06T05:02:18.000Z | site/pages/action-script/reference/flow-control/index.md | Shivi-S/developer.bigfix.com | 80d5236b071477c3665db1238db9c2f45ba6c0ac | [
"Apache-2.0"
] | 83 | 2015-06-25T20:05:26.000Z | 2021-12-10T12:23:53.000Z | site/pages/action-script/reference/flow-control/index.md | Shivi-S/developer.bigfix.com | 80d5236b071477c3665db1238db9c2f45ba6c0ac | [
"Apache-2.0"
] | 25 | 2015-07-02T20:20:05.000Z | 2022-03-03T18:47:09.000Z | ---
title: Flow Control Commands
---
These commands allow you to add conditional logic to your action script.
<dl>
<dt>[**action may require restart**](./action-may-require-restart.html)</dt>
<dd>Place the action in *Pending Restart* if a restart is required.</dd>
<dt>[**action parameter query**](./action-parameter-query.html)</dt>
<dd>Prompt the user that creates the action for a parameter.</dd>
<dt>[**action requires login**](./action-requires-login.html)</dt>
<dd>Place the action in *Pending Login* until a user logs in.</dd>
<dt>[**action requires restart**](./action-requires-restart.html)</dt>
<dd>Place the action in *Pending Restart*.</dd>
<dt>[**continue if**](./continue-if.html)</dt>
<dd>Stop action script evaluation if a relevance expression is false.</dd>
<dt>[**exit**](./exit.html)</dt>
<dd>Abort the script and set the exit code.</dd>
<dt>[**if, elseif, else, endif**](./if-elseif-else-endif.html)</dt>
<dd>Conditionally run commands in a script.</dd>
<dt>[**parameter**](./parameter.html)</dt>
<dd>Set the value of a variable.</dd>
<dt>[**pause while**](./pause-while.html)</dt>
<dd>Pause action script evaluation while a relevance expression is true.</dd>
</dl>
| 33.351351 | 79 | 0.67342 | eng_Latn | 0.765743 |
bb52a27b6bb9fcf9cfa3848c29a2b2ac67a3ffee | 18,544 | md | Markdown | _docs/concepts/traffic-management/rules-configuration.md | xiaolanz/xiaolanz.github.io | e49269da10285aa039001d75f25c4bb182234475 | [
"Apache-2.0"
] | null | null | null | _docs/concepts/traffic-management/rules-configuration.md | xiaolanz/xiaolanz.github.io | e49269da10285aa039001d75f25c4bb182234475 | [
"Apache-2.0"
] | null | null | null | _docs/concepts/traffic-management/rules-configuration.md | xiaolanz/xiaolanz.github.io | e49269da10285aa039001d75f25c4bb182234475 | [
"Apache-2.0"
] | null | null | null | ---
title: Rules Configuration
overview: Provides a high-level overview of the domain-specific language used by Istio to configure traffic management rules in the service mesh.
order: 50
layout: docs
type: markdown
---
{% include home.html %}
Istio provides a simple Domain-specific language (DSL) to
control how API calls and layer-4 traffic flow across various
services in the application deployment. The DSL allows the operator to
configure service-level properties such as circuit breakers, timeouts,
retries, as well as set up common continuous deployment tasks such as
canary rollouts, A/B testing, staged rollouts with %-based traffic splits,
etc. See [routing rules reference]({{home}}/docs/reference/config/istio.routing.v1alpha1.html) for detailed information.
For example, a simple rule to send 100% of incoming traffic for a "reviews"
service to version "v1" can be described using the Rules DSL as
follows:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-default
spec:
destination:
name: reviews
route:
- labels:
version: v1
weight: 100
```
The destination is the name of the service to which the traffic is being
routed. The route *labels* identify the specific service instances that will
receive traffic. For example, in a Kubernetes deployment of Istio, the route
*label* "version: v1" indicates that only pods containing the label "version: v1"
will receive traffic.
Rules can be configured using the
[istioctl CLI]({{home}}/docs/reference/commands/istioctl.html), or in a Kubernetes
deployment using the `kubectl` command instead. See the
[configuring request routing task]({{home}}/docs/tasks/traffic-management/request-routing.html) for
examples.
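For instance, if the rule above were saved in a file named `reviews-default.yaml` (a hypothetical file name), it could be installed with either tool:

```bash
# Install the rule with istioctl; -n selects the target namespace
istioctl create -f reviews-default.yaml -n default

# Or, in a Kubernetes deployment, use kubectl instead
kubectl create -f reviews-default.yaml -n default
```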
There are three kinds of traffic management rules in Istio: **Route Rules**, **Destination
Policies** (these are not the same as Mixer policies), and **Egress Rules**. All three
kinds of rules control how requests are routed to a destination service.
## Route Rules
Route rules control how requests are routed within an Istio service mesh.
For example, a route rule could route requests to different versions of a service.
Requests can be routed based on the source and destination, HTTP
header fields, and weights associated with individual service versions. The
following important aspects must be kept in mind while writing route rules:
### Qualify rules by destination
Every rule corresponds to some destination service identified by a
*destination* field in the rule. For example, rules that apply to calls
to the "reviews" service will typically include at least the following.
```yaml
destination:
name: reviews
```
The *destination* value specifies, implicitly or explicitly, a fully qualified
domain name (FQDN). It is used by Istio Pilot for matching rules to services.
Normally, the FQDN of the service is composed of three components: *name*,
*namespace*, and *domain*:
```
FQDN = name + "." + namespace + "." + domain
```
These fields can be explicitly specified as follows.
```yaml
destination:
name: reviews
namespace: default
domain: svc.cluster.local
```
More commonly, to simplify and maximize reuse of the rule (for example, to use
the same rule in more than one namespace or domain), the rule destination
specifies only the *name* field, relying on defaults for the other
two.
The default value for the *namespace* is the namespace of the rule
itself, which can be specified in the *metadata* field of the rule,
or during rule install using the `istioctl -n <namespace> create`
or `kubectl -n <namespace> create` command. The default value of
the *domain* field is implementation specific. In Kubernetes, for example,
the default value is `svc.cluster.local`, so a rule for the "reviews" service
installed in the "default" namespace matches the FQDN `reviews.default.svc.cluster.local`.
In some cases, such as when referring to external services in egress rules or
on platforms where *namespace* and *domain* are not meaningful, an alternative
*service* field can be used to explicitly specify the destination:
```yaml
destination:
service: my-service.com
```
When the *service* field is specified, all other implicit or explicit values of the
other fields are ignored.
### Qualify rules by source/headers
Rules can optionally be qualified to only apply to requests that match some
specific criteria such as the following:
_1. Restrict to a specific caller_. For example, the following rule only
applies to calls from the "reviews" service.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-to-ratings
spec:
destination:
name: ratings
match:
source:
name: reviews
...
```
The *source* value, just like *destination*, specifies an FQDN of a service,
either implicitly or explicitly.
_2. Restrict to specific versions of the caller_. For example, the following
rule refines the previous example to only apply to calls from version "v2"
of the "reviews" service.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-v2-to-ratings
spec:
destination:
name: ratings
match:
source:
name: reviews
labels:
version: v2
...
```
_3. Select rule based on HTTP headers_. For example, the following rule will
only apply to an incoming request if it includes a "cookie" header that
contains the substring "user=jason".
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-jason
spec:
destination:
name: reviews
match:
request:
headers:
cookie:
regex: "^(.*?;)?(user=jason)(;.*)?$"
...
```
If more than one header is provided, then all of the
corresponding headers must match for the rule to apply.
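For example, the following sketch only applies to requests that carry both a matching "cookie" header and a "user-agent" header starting with "Mobile" (the header names and values here are purely illustrative):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-jason-mobile
spec:
  destination:
    name: reviews
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
        user-agent:
          prefix: Mobile
  route:
  - labels:
      version: v2
```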
Multiple criteria can be set simultaneously. In such a case, AND semantics
apply. For example, the following rule only applies if the source of the
request is "reviews:v2" AND the "cookie" header containing "user=jason" is
present.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: ratings-reviews-jason
spec:
destination:
name: ratings
match:
source:
name: reviews
labels:
version: v2
request:
headers:
cookie:
regex: "^(.*?;)?(user=jason)(;.*)?$"
...
```
### Split traffic between service versions
Each route rule identifies one or more weighted backends to call when the rule is activated.
Each backend corresponds to a specific version of the destination service,
where versions can be expressed using _labels_.
If there are multiple registered instances with the specified tag(s),
traffic is distributed among them based on the load balancing policy
configured for the service, which is round-robin by default.
For example, the following rule will route 25% of traffic for the "reviews" service to instances with
the "v2" tag and the remaining traffic (i.e., 75%) to "v1".
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-v2-rollout
spec:
destination:
name: reviews
route:
- labels:
version: v2
weight: 25
- labels:
version: v1
weight: 75
```
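To continue a canary rollout of this kind, the weights in the same rule can simply be updated and the rule re-applied (for example, with `istioctl replace -f <file>`). The following sketch shows a possible next step that shifts to an even split:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-v2-rollout
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v2
    weight: 50   # increased from 25 as confidence in v2 grows
  - labels:
      version: v1
    weight: 50
```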
### Timeouts and retries
By default, the timeout for http requests is 15 seconds,
but this can be overridden in a route rule as follows:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: ratings-timeout
spec:
destination:
name: ratings
route:
- labels:
version: v1
httpReqTimeout:
simpleTimeout:
timeout: 10s
```
The number of retries for a given http request can also be specified in a route rule.
The maximum number of attempts, or as many as possible within the default or overridden timeout period,
can be set as follows:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: ratings-retry
spec:
destination:
name: ratings
route:
- labels:
version: v1
httpReqRetries:
simpleRetry:
attempts: 3
```
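Timeouts and retries can also be combined in a single rule. The sketch below additionally assumes the `perTryTimeout` field of the simple retry policy (see the [routing rules reference]({{home}}/docs/reference/config/istio.routing.v1alpha1.html) for the full set of fields), giving each of up to 3 attempts 2 seconds within an overall 10 second budget:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-timeout-retry
spec:
  destination:
    name: ratings
  route:
  - labels:
      version: v1
  httpReqTimeout:
    simpleTimeout:
      timeout: 10s      # overall budget for the request
  httpReqRetries:
    simpleRetry:
      attempts: 3
      perTryTimeout: 2s # assumed field; bounds each individual attempt
```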
Note that request timeouts and retries can also be
[overridden on a per-request basis](./handling-failures.html#fine-tuning).
See the [request timeouts task]({{home}}/docs/tasks/traffic-management/request-timeouts.html) for a demonstration of timeout control.
### Injecting faults in the request path
A route rule can specify one or more faults to inject
while forwarding http requests to the rule's corresponding request destination.
The faults can be either delays or aborts.
The following example will introduce a 5 second delay in 10% of the requests to the "v1" version of the "reviews" microservice.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-delay
spec:
destination:
name: reviews
route:
- labels:
version: v1
httpFault:
delay:
percent: 10
fixedDelay: 5s
```
The other kind of fault, abort, can be used to prematurely terminate a request,
for example, to simulate a failure.
The following example will return an HTTP 400 error code for 10%
of the requests to the "ratings" service "v1".
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: ratings-abort
spec:
destination:
name: ratings
route:
- labels:
version: v1
httpFault:
abort:
percent: 10
httpStatus: 400
```
Sometimes delays and abort faults are used together. For example, the following rule will delay
by 5 seconds all requests from the "reviews" service "v2" to the "ratings" service "v1" and
then abort 10 percent of them:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: ratings-delay-abort
spec:
destination:
name: ratings
match:
source:
name: reviews
labels:
version: v2
route:
- labels:
version: v1
httpFault:
delay:
fixedDelay: 5s
abort:
percent: 10
httpStatus: 400
```
To see fault injection in action, see the [fault injection task]({{home}}/docs/tasks/traffic-management/fault-injection.html).
### Rules have precedence
Multiple route rules could be applied to the same destination. The order of
evaluation of rules corresponding to a given destination, when there is
more than one, can be specified by setting the *precedence* field of the
rule.
```yaml
destination:
name: reviews
precedence: 1
```
The precedence field is an optional integer value, 0 by default. Rules
with higher precedence values are evaluated first. _If there is more than
one rule with the same precedence value, the order of evaluation is
undefined._
**When is precedence useful?** Whenever the routing story for a particular
service is purely weight based, it can be specified in a single rule,
as shown in the earlier example. When, on the other hand, other criteria
(e.g., requests from a specific user) are being used to route traffic, more
than one rule will be needed to specify the routing. This is where the
rule *precedence* field must be set to make sure that the rules are
evaluated in the right order.
A common pattern for generalized route specification is to provide one or
more higher-priority rules that match requests by source or headers and
route them to specific destinations, and then provide a single weight-based
rule with no match criteria at the lowest priority to provide the weighted
distribution of traffic for all other cases.
For example, the following 2 rules, together, specify that all requests for
the "reviews" service that includes a header named "Foo" with the value
"bar" will be sent to the "v2" instances. All remaining requests will be
sent to "v1".
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-foo-bar
spec:
destination:
name: reviews
precedence: 2
match:
request:
headers:
Foo:
exact: bar
route:
- labels:
version: v2
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-default
spec:
destination:
name: reviews
precedence: 1
route:
- labels:
version: v1
weight: 100
```
Notice that the header-based rule has the higher precedence (2 vs. 1). If
it were lower, these rules wouldn't work as expected since the weight-based
rule, with no specific match criteria, would be evaluated first, which would
then simply route all traffic to "v1", even requests that include the
matching "Foo" header. Once a rule is found that applies to the incoming
request, it will be executed and the rule-evaluation process will
terminate. That's why it's very important to carefully consider the
priorities of each rule when there is more than one.
## Destination policies
Destination policies describe various routing-related policies associated
with a particular service or version, such as the load balancing algorithm,
the configuration of circuit breakers, health checks, etc.
Unlike route rules, destination policies cannot be qualified based on attributes
of a request other than the calling service, but they can be restricted to
apply to requests that are routed to destination backends with specific labels.
For example, the following load balancing policy will only apply to requests
targeting the "v1" version of the "ratings" microservice that are called
from version "v2" of the "reviews" service.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
name: ratings-lb-policy
spec:
source:
name: reviews
labels:
version: v2
destination:
name: ratings
labels:
version: v1
loadBalancing:
name: ROUND_ROBIN
```
### Circuit breakers
A simple circuit breaker can be set based on a number of criteria such as connection and request limits.
For example, the following destination policy
sets a limit of 100 connections to "reviews" service version "v1" backends.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
name: reviews-v1-cb
spec:
destination:
name: reviews
labels:
version: v1
circuitBreaker:
simpleCb:
maxConnections: 100
```
The complete set of simple circuit breaker fields can be found
[here]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#CircuitBreaker).
### Destination policy evaluation
Similar to route rules, destination policies are associated with a
particular *destination* however if they also include *labels* their
activation depends on route rule evaluation results.
The first step in the rule evaluation process evaluates the route rules for
a *destination*, if any are defined, to determine the labels (i.e., specific
version) of the destination service that the current request will be routed
to. Next, the set of destination policies, if any, are evaluated to
determine if they apply.
**NOTE:** One subtlety of the algorithm to keep in mind is that policies
that are defined for specific tagged destinations will only be applied if
the corresponding tagged instances are explicitly routed to. For example,
consider the following rule, as the one and only rule defined for the
"reviews" service.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
name: reviews-v1-cb
spec:
destination:
name: reviews
labels:
version: v1
circuitBreaker:
simpleCb:
maxConnections: 100
```
Since there is no specific route rule defined for the "reviews"
service, default round-robin routing behavior will apply, which will
presumably call "v1" instances on occasion, maybe even always if "v1" is
the only running version. Nevertheless, the above policy will never be
invoked since the default routing is done at a lower level. The rule
evaluation engine will be unaware of the final destination and therefore
unable to match the destination policy to the request.
You can fix the above example in one of two ways. You can either remove the
`labels:` from the rule, if "v1" is the only instance anyway, or, better yet,
define proper route rules for the service. For example, you can add a
simple route rule for "reviews:v1".
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-default
spec:
destination:
name: reviews
route:
- labels:
version: v1
```
Although the default Istio behavior conveniently sends traffic from all
versions of a source service to all versions of a destination service
without any rules being set, as soon as version discrimination is desired
rules are going to be needed.
Therefore, setting a default rule for every service, right from the
start, is generally considered a best practice in Istio.
## Egress Rules
Egress rules are used to enable requests to services outside of an Istio service mesh.
For example, the following rule can be used to allow external calls to services hosted
under the `*.foo.com` domain.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
name: foo-egress-rule
spec:
destination:
service: *.foo.com
ports:
- port: 80
protocol: http
- port: 443
protocol: https
```
The destination of an egress rule is specified using the *service* field, which
can be either a fully qualified or wildcard domain name.
It represents a white listed set of one or more external services that services
in the mesh are allowed to access. The supported wildcard syntax can be found
[here]({{home}}/docs/reference/config/istio.routing.v1alpha1.html).
Currently, only HTTP-based services can be expressed using an egress rule, however,
TLS origination from the sidecar can be achieved by setting the protocol of
the associated service port to "https", as shown in the above example.
The service must be accessed over HTTP
(e.g., `http://secure-service.foo.com:443`, instead of `https://secure-service.foo.com`),
however, the sidecar will upgrade the connection to TLS in this case.
Egress rules work well in conjunction with route rules and destination
policies as long as they refer to the external services using the exact same
specification for the destination service as the corresponding egress rule.
For example, the following rule can be used in conjunction with the above egress
rule to set a 10s timeout for calls to the external services.
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: foo-timeout-rule
spec:
destination:
service: *.foo.com
httpReqTimeout:
simpleTimeout:
timeout: 10s
```
Destination policies and route rules to redirect and forward traffic, to define retry,
timeout and fault injection policies are all supported for external destinations.
Weighted (version-based) routing is not possible, however, since there is no notion
of multiple versions of an external service.
| 29.909677 | 145 | 0.752535 | eng_Latn | 0.998137 |
bb52a8bf268954bd4987cff360df393a9ffa872e | 2,862 | md | Markdown | website/translated_docs/en-US/version-20.2.0/cita/architecture/view.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 7 | 2019-12-26T08:38:06.000Z | 2020-12-17T09:29:01.000Z | website/translated_docs/en-US/version-20.2.0/cita/architecture/view.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 38 | 2019-12-03T09:51:40.000Z | 2020-12-01T02:49:42.000Z | website/translated_docs/en-US/version-20.2.0/cita/architecture/view.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 16 | 2019-12-03T06:15:55.000Z | 2022-02-20T12:04:01.000Z | ---
id: version-20.2.0-view
title: View
original_id: view
---
## 账号
视图状态是执行器执行过程中读写的对象,常见的视图状态模型有 UTXO 及账户两种。在 UTXO 模型中,由 UTXO 构成账本视图,每个交易在销毁旧有 UTXO 的同时创造新的 UTXO;在账户模型中,由账户构成世界状态视图,交易在处理过程中可以读写多个账户。 账户模型相对更加简单,实现通用任务更有效率。在企业级应用中往往存在身份验证与授权的需要,这些服务所依赖的数据可以自然的与账户模型关联。CITA 默认支持账户模型。用户可以自定义包括 UTXO 在内的其他状态模型。
在 CITA 中存在两种账号:外部账号和合约账号。外部账号通常情况下代表用户的身份,用户可以通过外部账号来发送交易。与公链不同,CITA 具有用户准入机制。首先用户自行生成私钥和公钥,私钥由用户妥善保存; 然后将公钥通过链外的方式提交给 CITA 系统中的 KYC 系统;通过申请之后,系统管理员将用户公钥通过操作账户管理合约,发送交易将用户加入 CITA 网络中。对于未准入的外部账户,无法向 CITA 发送交易。同时,CITA 内置了基于角色的权限管理,系统管理员(角色)可以根据实际情况灵活配置账户的权限。
为了阻止重放攻击,每笔交易必须有 nonce,这就使得账户需要跟踪 nonce 的使用情况。CITA 中用户可以根据实际业务需求,自定义 nonce 防重放机制。现阶段 CITA 兼容了 Ethereum 的自增 nonce 的防重放机制。
总体来讲,外部账户具有以下特性:
* 外部账号需要准入机制;
* 通过私钥控制,可以发送交易;
* 同一账户可以支持多种签名;
* 支持用户自定义的交易放重放规则。
合约账户与外部账户最大的区别在于,合约账户通过交易进行创建,合约内部维护一定的代码,合约的执行和调用通过外部账户发送交易来进行。当前 CITA 支持EVM虚拟机,用户可以通过直接发送合约代码的方式来创建合约,也可以通过合约调用的方式来创建合约。
总体来讲,合约账户具有以下特性:
* 合约账号通过交易创建,并支持合约创建合约;
* 合约通过交易调用,并支持合约间互相调用;
* 合约具有图灵完备性,但是交易对计算资源的配额受系统合约控制;
* 合约维持自身的特定储存状态;
* CITA 内置系统合约模块,在创世块中生成,方便用户对系统进行管理。
## 存储
区块链本质上去中心化的分布式复制状态机,每个节点通过持久化的方式来保存自身的状态。CITA 使用 KV 持久化数据存储,支持 RocksDB、LevelDB。节点将 Block 结构,交易以及合约状态等持久化保存到 KV 数据库中。
为了更高效的检索和更新数据,区块链一般会在内存中维护某种数据结构的视图模型。对于传统的区块链,如Bitcoin 采用了 Merkle Tree 来保存交易;Ethereum 采用了 Merkle Patricia Tree,一种改进的Merkle Tree 来保存状态和交易。 CITA 采用了 Simple Merkle Tree 来保存交易列表和交易回执。下面我们将分别介绍这几种模型。
### Merkle Tree
在 Bitcoin 中的每个区块都包含了产生于该区块的所有交易,且以 Merkle 树表示。Merkle 树是一种哈希二叉树,它是一种用作快速归纳和校验大规模数据完整性的数据结构。这种二叉树包含加密哈希值。
在比特币网络中,Merkle 树被用来归纳一个区块中的所有交易,同时生成整个交易集合的数字指纹,且提供了一种校验区块是否存在某交易的高效途径。生成一棵完整的 Merkle 树需要递归地对哈希节点对进行哈希,并将新生成的哈希节点插入到 Merkle 树中,直到只剩一个哈希节点,该节点就是 Merkle 树的根。
当 N 个数据元素经过加密后插入 Merkle 树时,你至多计算2*log2(N)次就能检查出任意某数据元素是否在该树中,这使得该数据结构非常高效。同时 Merkle 树可以很好的支持轻节点。

### Merkle Patricia Trie
在 Ethereum 中,使用 Trie 来构建 Merkle tree,即 Merkle Patricia Trie。它是 Ethereum 中主要的数据结构,用来存储所有账号的状态以及交易和交易回执。MPT 支持高效的检索及动态的插入、删除、修改,Ethereum 将其命名为 Merkle Patricia Tree(MPT),其示意图如下:

更多关于 MPT 的介绍可以参考 Ethereum [Patricia-Tree](https://github.com/ethereum/wiki/wiki/Patricia-Tree)。
### Simple Merkle Tree
在 Ethereum 中,交易和交易回执同样采用 MPT 树来进行保存。而 CITA 中,区块中的交易在共识完成后就已经确认了。所以在 Chain 处理交易时,交易的顺序和交易结果的顺序都是确定不变的。 而 MPT 树优点是便于保存历史快照可维持可变性,对于静态数据可以采用 Merkle 树,而不必采用 MPT 这样的数据结构。而比特币的 Merkle 树在处理奇数节点时,需要拷贝节点,额外做一次 Sha3 计算。 CITA 采用了简单的 Merkle 树来保存,对于奇数个节点情况,计算 Sha3 的次数会减少。
```
*
/ \
/ \
/ \
/ \
* *
/ \ / \
/ \ / \
/ \ / \
* * * h6
/ \ / \ / \
h0 h1 h2 h3 h4 h5
```
| 35.333333 | 259 | 0.719776 | yue_Hant | 0.882844 |
bb52cb0d80c5ef7e06ba203d32225f88081dfe5e | 2,585 | md | Markdown | docker/README.md | Lchangliang/incubator-doris | d056f5873b9ddfd11e32dc97cb31f0cdf2ae3676 | [
"Apache-2.0"
] | null | null | null | docker/README.md | Lchangliang/incubator-doris | d056f5873b9ddfd11e32dc97cb31f0cdf2ae3676 | [
"Apache-2.0"
] | null | null | null | docker/README.md | Lchangliang/incubator-doris | d056f5873b9ddfd11e32dc97cb31f0cdf2ae3676 | [
"Apache-2.0"
] | null | null | null | <!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
## Doris Develop Environment based on docker
### Preparation
1. Download the Doris code repo
```console
$ cd /to/your/workspace/
$ git clone https://github.com/apache/doris.git
```
You can remove the `.git` dir in `doris/` to make the dir size smaller.
So that the following generated docker image can be smaller.
2. Copy Dockerfile
```console
$ cd /to/your/workspace/
$ cp doris/docker/Dockerfile ./
```
After preparation, your workspace should like this:
```
.
├── Dockerfile
├── doris
│ ├── be
│ ├── bin
│ ├── build.sh
│ ├── conf
│ ├── DISCLAIMER-WIP
│ ├── docker
│ ├── docs
│ ├── env.sh
│ ├── fe
│ ├── ...
```
### Build docker image
```console
$ cd /to/your/workspace/
$ docker build -t doris:v1.0 .
```
> `doris` is docker image repository name and `v1.0` is tag name, you can change them to whatever you like.
### Use docker image
This docker image you just built does not contain Doris source code repo. You need
to download it first and map it to the container. (You can just use the one you
used to build this image before)
```console
$ docker run -it -v /your/local/path/doris/:/root/doris/ doris:v1.0
$ docker run -it -v /your/local/.m2:/root/.m2 -v /your/local/doris-DORIS-x.x.x-release/:/root/doris-DORIS-x.x.x-release/ doris:v1.0
```
Then you can build source code inside the container.
```console
$ cd /root/doris/
$ sh build.sh
```
**NOTICE**
The default JDK version is openjdk 11, if you want to use openjdk 8, you can run the command:
```console
$ alternatives --set java java-1.8.0-openjdk.x86_64
$ alternatives --set javac java-1.8.0-openjdk.x86_64
$ export JAVA_HOME=/usr/lib/jvm/java-1.8.0
```
The version of jdk you used to run FE must be the same version you used to compile FE.
### Latest update time
2022-1-23
| 25.594059 | 131 | 0.701741 | eng_Latn | 0.977847 |
bb53593493c43ee0ec3aeb85fa31425d4623f367 | 2,653 | markdown | Markdown | _posts/2018-2-4-git-change-branch-before-mark-staging.markdown | yusdirman/yusdirman.github.io | 3f2aec660c5e340918dd59f63ddebb03c1d8f40e | [
"MIT"
] | null | null | null | _posts/2018-2-4-git-change-branch-before-mark-staging.markdown | yusdirman/yusdirman.github.io | 3f2aec660c5e340918dd59f63ddebb03c1d8f40e | [
"MIT"
] | null | null | null | _posts/2018-2-4-git-change-branch-before-mark-staging.markdown | yusdirman/yusdirman.github.io | 3f2aec660c5e340918dd59f63ddebb03c1d8f40e | [
"MIT"
] | null | null | null | ---
layout: post
title: "Checkout to other Branch or new before staging to bring over all edited lines to the new created branch"
date: 2018-02-04 09:05:00 +0800
categories: Git
---
### Assalamualaikum...
Often in my environment situation especially during debugging process, I start editing the codes at any branch I was on to investigate or to test proof my solution for repair, before I create a branch for the fix.
When the solution was proven and the solution is acceptable to implement and test, only then I create a branch in our gitlab interface and then create a Merge Request(MR) right away.
```bash
# in local
me@local:$\> git fetch
From gitlab.mydomain.com:project/application
* [new branch] 520-[BUG]-issue-name-and-simple-descriptions -> origin/520-[BUG]-issue-name-and-simple-descriptions
# when I check my current status
me@local$\> git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: app/controllers/departments_controller.rb
modified: app/views/departments/show.html.haml
no changes added to commit (use "git add" and/or "git commit -a")
# just checkout to the new created origin branch
$\> git co -b 520-[BUG]-issue-name-and-simple-descriptions origin/520-[BUG]-issue-name-and-simple-descriptions
# check status
me@local$\> git status
On branch 520-[BUG]-issue-name-and-simple-descriptions
Your branch is up-to-date with 'origin/520-[BUG]-issue-name-and-simple-descriptions '.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: app/controllers/departments_controller.rb
modified: app/views/departments/show.html.haml
no changes added to commit (use "git add" and/or "git commit -a")
```
And now, the changes is in the new created branch from MR the issue in gitlab.
### Cautions
- please use a new branch or a clean destination branch. Otherwise it would return error and you cannot checkout to the branch.
- Must be in UnStage state. Edit the code and save. Don't add tu staging state and of course don't commit for this to work.
## What if I've already commited and then decided that all the modified code should be in a new issue/branch?
just run reset soft to return to staged state. Then reset all files to return to unstaged state. And then, change to new branch and stage and commit from there.
Thanks for reading.
Assalamualaikum
yusdirman
| 32.753086 | 213 | 0.74821 | eng_Latn | 0.993319 |
bb538e5a0cef0c5822848035604d7d27f9ad5f99 | 2,047 | md | Markdown | content/en/account_management/billing/usage_metrics.md | ericmschow/documentation | b1942216c917a677c5f41906e1b103c1bd940c4f | [
"BSD-3-Clause"
] | null | null | null | content/en/account_management/billing/usage_metrics.md | ericmschow/documentation | b1942216c917a677c5f41906e1b103c1bd940c4f | [
"BSD-3-Clause"
] | null | null | null | content/en/account_management/billing/usage_metrics.md | ericmschow/documentation | b1942216c917a677c5f41906e1b103c1bd940c4f | [
"BSD-3-Clause"
] | null | null | null | ---
title: Estimated Usage Metrics
kind: faq
---
## Overview
Datadog calculates your current estimated usage in near real-time. Estimate usage metrics enable you to:
* Graph your estimated usage
* Create monitors around your estimated usage based on thresholds of your choosing
* Get instant alerts of spikes or drops in your usage
* Assess the potential impact of code changes on your usage in near real-time
**Note**: These usage metrics are estimates that won't always match up to billable usage given their real-time nature. There is a 10-20% difference between estimated usage and billable usage on average.
{{< img src="account_management/billing/usage-metrics-01.png" alt="Dashboard Example" responsive="true">}}
### Types of usage
Estimated usage metrics are generally available for the following usage types:
| Usage Type | Metric |
|--------------------|-----------------------------------------------------------------------------------------------------|
| Infrastructure Hosts | `datadog.estimated_usage.hosts` |
| Containers | `datadog.estimated_usage.containers` |
| Custom Metrics | `datadog.estimated_usage.metrics.custom` |
{{< img src="account_management/billing/usage-metrics-02.png" alt="Metric Names" responsive="true">}}
### Multi-Org usage
For accounts with multiple organizations, you can roll up estimated usage from child organizations using the `from` field to monitor usage across your entire account.
{{< img src="account_management/billing/usage-metrics-03.png" alt="Multi-Org Usage" responsive="true">}}
## Troubleshooting
For technical questions, contact [Datadog support][1].
For billing questions, contact your [Customer Success][2] Manager.
[1]: /help
[2]: mailto:[email protected]
| 45.488889 | 202 | 0.60723 | eng_Latn | 0.941381 |
bb544d92ed05af6b352d0b217533e6cc9c92c3e6 | 857 | md | Markdown | articles/cognitive-services/Speech-Service/includes/service-pricing-advisory.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Speech-Service/includes/service-pricing-advisory.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Speech-Service/includes/service-pricing-advisory.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Servicio de voz
titleSuffix: Azure Cognitive Services
services: cognitive-services
author: IEvangelist
manager: nitinme
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: include
ms.date: 12/02/2019
ms.author: dapine
ms.openlocfilehash: ceb062cc5272fae0030c331ad7c9c6c870763df7
ms.sourcegitcommit: 6c01e4f82e19f9e423c3aaeaf801a29a517e97a0
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 12/04/2019
ms.locfileid: "74828772"
---
> [!NOTE]
> Al realizar pruebas, el sistema llevará a cabo una transcripción. Es importante tenerlo en cuenta, ya que el precio varía según la oferta de servicio y el nivel de suscripción. Consulte siempre en la página oficial Precios de Cognitive Services: Speech Services [los detalles más recientes](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
| 40.809524 | 373 | 0.81797 | spa_Latn | 0.647091 |
bb54b68c65ff0dd796fa551a7b7eb45d55a527ea | 19 | md | Markdown | README.md | yasminpierazo/erasmo-lyrics-api | dd325a51e95bc473d558bdbfef58db2d575d3a3e | [
"MIT"
] | null | null | null | README.md | yasminpierazo/erasmo-lyrics-api | dd325a51e95bc473d558bdbfef58db2d575d3a3e | [
"MIT"
] | null | null | null | README.md | yasminpierazo/erasmo-lyrics-api | dd325a51e95bc473d558bdbfef58db2d575d3a3e | [
"MIT"
] | null | null | null | # erasmo-lyrics-api | 19 | 19 | 0.789474 | azb_Arab | 0.471154 |
bb54c535d2a5e44c8a474ca34036a4118fbf7315 | 2,142 | md | Markdown | ce/customerengagement/on-premises/developer/use-discovery-service.md | Gen1a/dynamics-365-customer-engagement | ce3c02bfa54594f016166522e552982fb66a9389 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/customerengagement/on-premises/developer/use-discovery-service.md | Gen1a/dynamics-365-customer-engagement | ce3c02bfa54594f016166522e552982fb66a9389 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/customerengagement/on-premises/developer/use-discovery-service.md | Gen1a/dynamics-365-customer-engagement | ce3c02bfa54594f016166522e552982fb66a9389 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Use the Dynamics 365 Customer Engagement (on-premises) Discovery services (Developer Guide for Dynamics 365 Customer Engagement (on-premises))| MicrosoftDocs"
description: "In a multi-tenant environment like Dynamics 365 Customer Engagement (on-premises), the Discovery web service helps determine which organizations a user is a member of."
ms.custom:
ms.date: 09/05/2019
ms.reviewer: pehecke
ms.suite:
ms.tgt_pltfrm:
ms.topic: article
applies_to:
- Dynamics 365 Customer Engagement (on-premises)
ms.assetid: 0b95ebbd-49f5-4e09-8f18-7708dbef65d0
caps.latest.revision: 9
author: JimDaly
ms.author: jdaly
manager: amyla
search.audienceType:
- developer
---
# Use the Dynamics 365 Customer Engagement (on-premises) Discovery service
The Discovery web service is used to determine the organizations that a user is a member of, and the endpoint address URL to access the Organization service or Web API for each of those organizations. This Discovery service is necessary because Dynamics 365 Customer Engagement (on-premises) is a multi-tenant environment. A single Dynamics 365 Server can host multiple business organizations. By using the Discovery web service, your application can determine the endpoint address URL to access the target organization’s business data.
The Discovery service is accessed through the OData V4 RESTful API or the Organization service.
- For the Web API see: [Discover the URL for your organization](webapi/discover-url-organization-web-api.md)
- For the Organization Service see: [Discover the URL for your organization using the Organization Service API](org-service/discover-url-organization-organization-service.md)
### See also
[Use Dynamics 365 Customer Engagement web services](use-microsoft-dynamics-365-web-services.md)<br />
[Use Dynamics 365 Customer Engagement Web API](./use-microsoft-dynamics-365-web-api.md)<br />
[Use Dynamics 365 Customer Engagement Organization Service](/dynamics365/customerengagement/on-premises/developer/org-servi/use-microsoft-dynamics-365-organization-service)<br />
[!INCLUDE[footer-include](../../../includes/footer-banner.md)] | 57.891892 | 538 | 0.793184 | eng_Latn | 0.872815 |
bb55af210f1dbfbdb0e32f0c9e6ac960d47c40dc | 5,716 | md | Markdown | docs/vs-2015/modeling/modeling-sdk-for-visual-studio-domain-specific-languages.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/modeling/modeling-sdk-for-visual-studio-domain-specific-languages.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/modeling/modeling-sdk-for-visual-studio-domain-specific-languages.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Modeling SDK for Visual Studio — języki specyficzne dla domeny | Dokumentacja firmy Microsoft
ms.custom: ''
ms.date: 2018-06-30
ms.prod: visual-studio-tfs-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-devops-techdebt
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- Domain-Specific Language Tools
- Domain-Specific Language
ms.assetid: 17a531e2-1964-4a9d-84fd-6fb1b4aee662
caps.latest.revision: 79
author: gewarren
ms.author: gewarren
manager: douge
ms.openlocfilehash: 4f41594e1a39046e82cd7280c5569edbbeb65eb7
ms.sourcegitcommit: 55f7ce2d5d2e458e35c45787f1935b237ee5c9f8
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 08/22/2018
ms.locfileid: "42631455"
---
# <a name="modeling-sdk-for-visual-studio---domain-specific-languages"></a>Modelowanie SDK dla Visual Studio — języki specyficzne dla domeny
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Najnowszą wersję tego tematu znajduje się w temacie [zestawu Modeling SDK for Visual Studio — języki specyficzne dla domeny](https://docs.microsoft.com/visualstudio/modeling/modeling-sdk-for-visual-studio-domain-specific-languages).
Używając zestawu Modeling SDK for [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] (MSDK), można tworzyć zaawansowane opartych na modelu narzędzia programistyczne, które można zintegrować [!INCLUDE[vsprvs](../includes/vsprvs-md.md)]. Na przykład narzędzia UML są tworzone przy użyciu zestawu MSDK. W ten sam sposób można utworzyć co najmniej jedną definicję modelu i zintegrować ją w zestaw narzędzi.
Centralnym elementem zestawu MSDK jest definicja modelu tworzona w celu przedstawienia koncepcji z obszaru biznesowego. Można otoczyć model z szeroką gamą narzędzi, takich jak widok diagramowy, możliwość generowania kodu i innych artefaktów, polecenia przekształcania modelu oraz możliwość interakcji z kodem i innych obiektów w [!INCLUDE[vsprvs](../includes/vsprvs-md.md)]. Podczas opracowywania modelu można połączyć go z innymi modelami i narzędziami w celu utworzenia zestawu narzędzi o dużych możliwościach, który będzie wspomagał proces projektowania.
Zestaw MSDK umożliwia szybkie opracowanie modelu z użyciem języka specyficznego dla domeny (DSL). Należy rozpocząć od użycia specjalnego edytora w celu zdefiniowania schematu lub abstrakcyjnej składni wraz z notacją graficzną. Na podstawie tej definicji zestaw VMSDK generuje następujące elementy:
- Implementacja modelu z silnie typizowanym interfejsem API, który działa w magazynie opartym na transakcjach.
- Eksplorator oparty na drzewie.
- Edytor graficzny, w którym użytkownicy mogą wyświetlać zdefiniowany model lub jego części.
- Metody serializacji zapisujące modele w przystosowanych do odczytu plikach XML.
- Możliwości generowania kodu programu i innych artefaktów przy użyciu funkcji tworzenia szablonów tekstu.
Wszystkie te funkcje można dostosowywać i rozszerzać. Rozszerzenia są integrowane w taki sposób, że można aktualizować definicję DSL oraz ponownie generować funkcje bez utraty używanych rozszerzeń.
## <a name="samples-and-the-latest-information"></a>Przykłady i najnowsze informacje
[Pobierz modelowania SDK dla programu Visual Studio 2015](http://www.microsoft.com/download/details.aspx?id=48148)
[Przykłady](http://go.microsoft.com/fwlink/?LinkId=186128) modelowania SDK dla programu Visual Studio.
Aby uzyskać wskazówki dotyczące zaawansowanych technik i rozwiązywania problemów, odwiedź stronę [forum Visual Studio DSL & Modeling Tools Extensibility](http://go.microsoft.com/fwlink/?LinkID=186074).
## <a name="in-this-section"></a>W tej sekcji
[Wprowadzenie do języków specyficznych dla domeny](../modeling/getting-started-with-domain-specific-languages.md)
[Opis modeli, klas i relacji](../modeling/understanding-models-classes-and-relationships.md)
[Instrukcje: Definiowanie języka właściwego dla domeny](../modeling/how-to-define-a-domain-specific-language.md)
[Dostosowywanie i rozszerzanie języka specyficznego dla domeny](../modeling/customizing-and-extending-a-domain-specific-language.md)
[Walidacja w języku specyficznym dla domeny](../modeling/validation-in-a-domain-specific-language.md)
[Pisanie kodu pod kątem dostosowywania języka specyficznego dla domeny](../modeling/writing-code-to-customise-a-domain-specific-language.md)
[Generowanie kodu z języka specyficznego dla domeny](../modeling/generating-code-from-a-domain-specific-language.md)
[Znajomość kodu DSL](../modeling/understanding-the-dsl-code.md)
[Dostosowywanie przechowywania plików i serializacji XML](../modeling/customizing-file-storage-and-xml-serialization.md)
[Wdrażanie rozwiązań dla języka specyficznego dla domeny](../modeling/deploying-domain-specific-language-solutions.md)
[Tworzenie języka specyficznego dla domeny opartego na modelu Windows Forms](../modeling/creating-a-windows-forms-based-domain-specific-language.md)
[Tworzenie języka specyficznego dla domeny opartego na podsystemie WPF](../modeling/creating-a-wpf-based-domain-specific-language.md)
[Instrukcje: Rozszerzanie projektanta języka specyficznego dla domeny](../modeling/how-to-extend-the-domain-specific-language-designer.md)
[Wersje programu Visual Studio obsługiwane przez zestaw Visualization and Modeling SDK](../modeling/supported-visual-studio-editions-for-visualization-amp-modeling-sdk.md)
[Instrukcje: Migracja języka specyficznego dla domeny do nowej wersji](../modeling/how-to-migrate-a-domain-specific-language-to-a-new-version.md)
[Odwołania API do modelowania SDK dla Visual Studio](../modeling/api-reference-for-modeling-sdk-for-visual-studio.md)
| 62.130435 | 560 | 0.791812 | pol_Latn | 0.995903 |
bb5617052c3b626f285cf58f1b6edf9d3451db10 | 298 | md | Markdown | content/Artikel Beschreibungen/Package-Business.md | ApptivaAG/trial-store-landingpage-gatsby | aa979fa93380ee838764045b1880d553ed169696 | [
"MIT"
] | null | null | null | content/Artikel Beschreibungen/Package-Business.md | ApptivaAG/trial-store-landingpage-gatsby | aa979fa93380ee838764045b1880d553ed169696 | [
"MIT"
] | 7 | 2020-07-16T14:09:57.000Z | 2022-02-26T01:34:52.000Z | content/Artikel Beschreibungen/Package-Business.md | ApptivaAG/trial-store-landingpage-gatsby | aa979fa93380ee838764045b1880d553ed169696 | [
"MIT"
] | null | null | null | ---
urlPath: Package-Business
---
Gerne Business - aber nicht für immer!
Mit diesem Package hast du alles, was du für kurze Zeit brauchst.
# Beschreibung
- Blazer Schwarz von NAVYBOOT
- Businesshosen Schwarz von NAVYBOOT
- Hemd Bluse Weiss von NAVYBOOT
- Schultertasche "Sutton" Schwarz von COACH
| 24.833333 | 65 | 0.775168 | deu_Latn | 0.946779 |
bb56a2ea7541b7f9cc44dc20383649728647d7fb | 331 | md | Markdown | _pages/year-archive.md | tyconsulting/blog.tyang.github.io | 498c9b809f41e3469fd3a4a78a06fc94590a411a | [
"Apache-2.0"
] | 5 | 2021-08-08T17:39:58.000Z | 2022-03-27T20:13:30.000Z | _pages/year-archive.md | tyconsulting/blog.tyang.github.io | 498c9b809f41e3469fd3a4a78a06fc94590a411a | [
"Apache-2.0"
] | null | null | null | _pages/year-archive.md | tyconsulting/blog.tyang.github.io | 498c9b809f41e3469fd3a4a78a06fc94590a411a | [
"Apache-2.0"
] | null | null | null | ---
title: "Posts by Year"
permalink: /posts/
layout: posts
author_profile: true
sidebar:
- text: "[](https://www.nice.de/nice-active-365-monitor-for-azure/)"
- text: "[](https://www.gripmatix.com/gripmatix-citrix-sbc-vdi-scom-management-packs)"
--- | 36.777778 | 134 | 0.725076 | yue_Hant | 0.202332 |
bb5771f76c58fd8dc57e067ac5df3a4180e9e754 | 3,625 | md | Markdown | wdk-ddi-src/content/d3dumddi/ns-d3dumddi-_d3dddicb_updategpuvirtualaddress.md | aktsuda/windows-driver-docs-ddi | a7b832e82cc99f77dbde72349c0a61670d8765d3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3dumddi/ns-d3dumddi-_d3dddicb_updategpuvirtualaddress.md | aktsuda/windows-driver-docs-ddi | a7b832e82cc99f77dbde72349c0a61670d8765d3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3dumddi/ns-d3dumddi-_d3dddicb_updategpuvirtualaddress.md | aktsuda/windows-driver-docs-ddi | a7b832e82cc99f77dbde72349c0a61670d8765d3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:d3dumddi._D3DDDICB_UPDATEGPUVIRTUALADDRESS
title: "_D3DDDICB_UPDATEGPUVIRTUALADDRESS"
author: windows-driver-content
description: D3DDDICB_UPDATEGPUVIRTUALADDRESS is used with pfnUpdateGpuVirtualAddressCb to allow the user mode driver to specify a number of mapping operations to be applied to the process virtual address space in a single batch of page table updates.
old-location: display\d3dddicb_updategpuvirtualaddress.htm
old-project: display
ms.assetid: 6D460EBF-1D5D-4A99-90EE-FCBBC56B8EA4
ms.author: windowsdriverdev
ms.date: 4/16/2018
ms.keywords: D3DDDICB_UPDATEGPUVIRTUALADDRESS, D3DDDICB_UPDATEGPUVIRTUALADDRESS structure [Display Devices], _D3DDDICB_UPDATEGPUVIRTUALADDRESS, d3dumddi/D3DDDICB_UPDATEGPUVIRTUALADDRESS, display.d3dddicb_updategpuvirtualaddress
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: struct
req.header: d3dumddi.h
req.include-header: D3dumddi.h
req.target-type: Windows
req.target-min-winverclnt: Windows 10
req.target-min-winversvr: Windows Server 2016
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- d3dumddi.h
api_name:
- D3DDDICB_UPDATEGPUVIRTUALADDRESS
product:
- Windows
targetos: Windows
req.typenames: D3DDDICB_UPDATEGPUVIRTUALADDRESS
---
# _D3DDDICB_UPDATEGPUVIRTUALADDRESS structure
## -description
<b>D3DDDICB_UPDATEGPUVIRTUALADDRESS</b> is used with <a href="https://msdn.microsoft.com/99D075A0-4483-47D1-BA24-80C45BFF407A">pfnUpdateGpuVirtualAddressCb</a> to allow the user mode driver to specify a number of mapping operations to be applied to the process virtual address space in a single batch of page table updates.
## -struct-fields
### -field hContext
Specifies the context against which the map operation will be synchronized against. This also determines which kernel context the map operation will be executed against. In an linked display adapter configuration <b>hContext</b> defines a physical GPU, whose page tables are modified.
### -field hFenceObject
Specifies the monitored fence object to use for synchronization. This should typically be set to the monitored fence used by the user mode driver to track progress of <b>hContext</b>.
### -field NumOperations
Specifies the number of operations in the <b>Operations</b> array.
### -field Operations
<a href="https://msdn.microsoft.com/library/windows/hardware/dn906329">D3DDDI_UPDATEGPUVIRTUALADDRESS_OPERATION</a> array of operations to perform on the GPU virtual address space.
### -field Reserved0
This member is reserved and should be set to zero.
### -field Reserved1
This member is reserved and should be set to zero.
### -field FenceValue
Specifies the <b>FenceValue</b> for <b>hFenceObject</b> that the <i>Map</i> operation should wait on (unless <b>DoNotWait</b> is 1). When the <i>Map</i> operation completes, the fence object will signal <b>hFenceObject</b> with <b>FenceValue</b>+1.
### -field Flags
### -field Flags.DoNotWait
When set to 1, there will be no wait for the sync objects before executing the operations.
### -field Flags.Reserved
This member is reserved and should be set to zero.
### -field Flags.Value
The consolidated value of the <b>Flags</b> union.
## -see-also
<a href="https://msdn.microsoft.com/library/windows/hardware/dn906329">D3DDDI_UPDATEGPUVIRTUALADDRESS_OPERATION</a>
<a href="https://msdn.microsoft.com/99D075A0-4483-47D1-BA24-80C45BFF407A">pfnUpdateGpuVirtualAddressCb</a>
| 27.884615 | 324 | 0.78731 | eng_Latn | 0.769034 |
bb577fbfa009a1d46a5b357544389870a67930d5 | 2,202 | md | Markdown | docs/framework/unmanaged-api/debugging/icordebugfunction2-interface.md | Jonatandb/docs.es-es | c18663ce8a09607fe195571492cad602bc2f01bb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icordebugfunction2-interface.md | Jonatandb/docs.es-es | c18663ce8a09607fe195571492cad602bc2f01bb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icordebugfunction2-interface.md | Jonatandb/docs.es-es | c18663ce8a09607fe195571492cad602bc2f01bb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Interfaz ICorDebugFunction2
ms.date: 03/30/2017
api_name:
- ICorDebugFunction2
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorDebugFunction2
helpviewer_keywords:
- ICorDebugFunction2 interface [.NET Framework debugging]
ms.assetid: 2b936bef-9b75-48bf-859f-42e419c65f1c
topic_type:
- apiref
ms.openlocfilehash: 5364e39f7e0a9b6c9cd10cd8f17bab4a03a4b7af
ms.sourcegitcommit: 13e79efdbd589cad6b1de634f5d6b1262b12ab01
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 01/28/2020
ms.locfileid: "76794482"
---
# <a name="icordebugfunction2-interface"></a>Interfaz ICorDebugFunction2
Extiende lógicamente la interfaz ICorDebugFunction para proporcionar compatibilidad con Solo mi código depuración paso a paso, que omite el código que no es de usuario.
## <a name="methods"></a>Métodos
|Método|Descripción|
|------------|-----------------|
|[EnumerateNativeCode (método)](icordebugfunction2-enumeratenativecode-method.md)|(Aún no implementado). Obtiene un puntero de interfaz a un ICorDebugCodeEnum (que contiene las instrucciones de código nativo en la función a la que hace referencia este objeto ICorDebugFunction2.|
|[GetJMCStatus (método)](icordebugfunction2-getjmcstatus-method.md)|Obtiene un valor que indica si esta función está marcada como código de usuario.|
|[GetVersionNumber (método)](icordebugfunction2-getversionnumber-method.md)|Obtiene la versión de edición y continuación de esta función.|
|[SetJMCStatus (método)](icordebugfunction2-setjmcstatus-method.md)|Marca esta función para Solo mi código la ejecución paso a paso.|
## <a name="remarks"></a>Notas
> [!NOTE]
> Esta interfaz no admite que se la llame de forma remota, ya sea entre procesos o entre equipos.
## <a name="requirements"></a>Requisitos de
**Plataformas:** Vea [Requisitos de sistema](../../../../docs/framework/get-started/system-requirements.md).
**Encabezado:** CorDebug.idl, CorDebug.h
**Biblioteca:** CorGuids.lib
**.NET Framework versiones:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## <a name="see-also"></a>Vea también
- [Interfaces de depuración](debugging-interfaces.md)
| 40.777778 | 281 | 0.755223 | spa_Latn | 0.668741 |
bb57d2b79b5fcd832ad2e882fe5237664536d612 | 4,404 | md | Markdown | README.md | abursavich/gogomarshal | 974527c20aaa00da2f5045a017b4b7665618c4f9 | [
"BSD-3-Clause"
] | null | null | null | README.md | abursavich/gogomarshal | 974527c20aaa00da2f5045a017b4b7665618c4f9 | [
"BSD-3-Clause"
] | null | null | null | README.md | abursavich/gogomarshal | 974527c20aaa00da2f5045a017b4b7665618c4f9 | [
"BSD-3-Clause"
] | null | null | null |
# gogomarshal
`import "github.com/abursavich/gogomarshal"`
* [Overview](#pkg-overview)
* [Index](#pkg-index)
## <a name="pkg-overview">Overview</a>
Package gogomarshal contains marshaling code extracted from
github.com/grpc-ecosystem/grpc-gateway/runtime and altered
to depend on gogo versions of the proto and jsonpb packages.
## <a name="pkg-index">Index</a>
* [type JSONPb](#JSONPb)
* [func (*JSONPb) ContentType() string](#JSONPb.ContentType)
* [func (j *JSONPb) Delimiter() []byte](#JSONPb.Delimiter)
* [func (j *JSONPb) Marshal(v interface{}) ([]byte, error)](#JSONPb.Marshal)
* [func (j *JSONPb) NewDecoder(r io.Reader) runtime.Decoder](#JSONPb.NewDecoder)
* [func (j *JSONPb) NewEncoder(w io.Writer) runtime.Encoder](#JSONPb.NewEncoder)
* [func (j *JSONPb) Unmarshal(data []byte, v interface{}) error](#JSONPb.Unmarshal)
* [type Proto](#Proto)
* [func (p *Proto) ContentType() string](#Proto.ContentType)
* [func (*Proto) Marshal(value interface{}) ([]byte, error)](#Proto.Marshal)
* [func (p *Proto) NewDecoder(reader io.Reader) runtime.Decoder](#Proto.NewDecoder)
* [func (p *Proto) NewEncoder(writer io.Writer) runtime.Encoder](#Proto.NewEncoder)
* [func (*Proto) Unmarshal(data []byte, value interface{}) error](#Proto.Unmarshal)
#### <a name="pkg-files">Package files</a>
[jsonpb.go](/jsonpb.go) [proto.go](/proto.go)
## <a name="JSONPb">type</a> [JSONPb](/jsonpb.go?s=509:537#L20)
``` go
type JSONPb jsonpb.Marshaler
```
JSONPb is a runtime.Marshaler which marshals/unmarshals into/from
JSON with "github.com/gogo/protobuf/jsonpb".
### <a name="JSONPb.ContentType">func</a> (\*JSONPb) [ContentType](/jsonpb.go?s=589:624#L23)
``` go
func (*JSONPb) ContentType() string
```
ContentType always returns "application/json".
### <a name="JSONPb.Delimiter">func</a> (\*JSONPb) [Delimiter](/jsonpb.go?s=5293:5328#L187)
``` go
func (j *JSONPb) Delimiter() []byte
```
Delimiter for newline encoded JSON streams.
### <a name="JSONPb.Marshal">func</a> (\*JSONPb) [Marshal](/jsonpb.go?s=692:747#L28)
``` go
func (j *JSONPb) Marshal(v interface{}) ([]byte, error)
```
Marshal marshals "v" into JSON.
### <a name="JSONPb.NewDecoder">func</a> (\*JSONPb) [NewDecoder](/jsonpb.go?s=2634:2690#L95)
``` go
func (j *JSONPb) NewDecoder(r io.Reader) runtime.Decoder
```
NewDecoder returns a runtime.Decoder which reads JSON stream from "r".
### <a name="JSONPb.NewEncoder">func</a> (\*JSONPb) [NewEncoder](/jsonpb.go?s=2874:2930#L101)
``` go
func (j *JSONPb) NewEncoder(w io.Writer) runtime.Encoder
```
NewEncoder returns an Encoder which writes JSON stream into "w".
### <a name="JSONPb.Unmarshal">func</a> (\*JSONPb) [Unmarshal](/jsonpb.go?s=2461:2521#L90)
``` go
func (j *JSONPb) Unmarshal(data []byte, v interface{}) error
```
Unmarshal unmarshals JSON "data" into "v".
Currently it can only marshal proto.Message.
TODO(yugui) Support fields of primitive types in a message.
## <a name="Proto">type</a> [Proto](/proto.go?s=241:380#L14)
``` go
type Proto struct {
// CustomContentType overrides the default Content-Type
// of "application/octet-stream".
CustomContentType string
}
```
Proto is a runtime.Marshaller which marshals/unmarshals into/from serialize
proto bytes
### <a name="Proto.ContentType">func</a> (\*Proto) [ContentType](/proto.go?s=496:532#L22)
``` go
func (p *Proto) ContentType() string
```
ContentType returns the Content-Type.
If CustomContentType is empty, it returns "application/octet-stream".
### <a name="Proto.Marshal">func</a> (\*Proto) [Marshal](/proto.go?s=676:732#L30)
``` go
func (*Proto) Marshal(value interface{}) ([]byte, error)
```
Marshal marshals "value" into Proto
### <a name="Proto.NewDecoder">func</a> (\*Proto) [NewDecoder](/proto.go?s=1220:1280#L48)
``` go
func (p *Proto) NewDecoder(reader io.Reader) runtime.Decoder
```
NewDecoder returns a Decoder which reads proto stream from "reader".
### <a name="Proto.NewEncoder">func</a> (\*Proto) [NewEncoder](/proto.go?s=1536:1596#L59)
``` go
func (p *Proto) NewEncoder(writer io.Writer) runtime.Encoder
```
NewEncoder returns an Encoder which writes proto stream into "writer".
### <a name="Proto.Unmarshal">func</a> (\*Proto) [Unmarshal](/proto.go?s=932:993#L39)
``` go
func (*Proto) Unmarshal(data []byte, value interface{}) error
```
Unmarshal unmarshals proto "data" into "value"
- - -
Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)
| 36.098361 | 93 | 0.69891 | yue_Hant | 0.350239 |
bb58d68960f55f8b899f4528330c89a63d258a88 | 3,270 | md | Markdown | content/privacy-policy/index.md | JanMooMoo/NavHub | 0a9f9d3580525e80eb7fd87504dfec9a6f5f0fbe | [
"MIT"
] | 6 | 2018-03-21T20:18:22.000Z | 2018-04-20T09:34:54.000Z | content/privacy-policy/index.md | JanMooMoo/NavHub | 0a9f9d3580525e80eb7fd87504dfec9a6f5f0fbe | [
"MIT"
] | 57 | 2018-05-28T04:51:04.000Z | 2019-04-17T04:17:01.000Z | content/privacy-policy/index.md | JanMooMoo/NavHub | 0a9f9d3580525e80eb7fd87504dfec9a6f5f0fbe | [
"MIT"
] | 16 | 2018-04-04T09:54:14.000Z | 2018-05-03T22:37:51.000Z | ---
title: "Privacy Policy"
date: 2018-03-12T17:47:48+13:00
draft: false
---
<br />
This page informs you of our policies regarding the collection, use and disclosure of personal information we receive from users of our site (https://navcoin.org). We use your personal information to better understand your usage of the site, and to collect traffic statistics.
By using the site, you agree to the collection and use of information in accordance with this policy.
### Log Data
Like many site operators, we collect information that your browser sends whenever you visit our site (“Log Data”). This Log Data may include information such as your computer’s Internet Protocol (“IP”) address (with replaced last byte), browser type, browser version, the pages of our site that you visit, the time and date of your visit, the time spent on those pages and other statistics.
### Cookies
Cookies are files with small amount of data, which may include an anonymous unique identifier. Cookies are sent to your browser from a web site and stored on your computer’s hard drive. You can instruct your browser to refuse all cookies or to indicate when a cookie is being sent. However, if you do not accept cookies, you may not be able to use some portions of our site.
We use cookies for the following purposes:
- To keep track of whether you have pressed the “OK” button on the cookie disclaimer, so we don’t bother you with the notification if you have.
- Our Analytics software (Google Analytics) uses cookies to measure and better understand user-interactions on our Site. You can read more about how Google Analytics uses cookies here.
### Google Analytics
We use a third-party JavaScript plug-in provided by Google called “Google Analytics” to provide us with useful traffic statistics and to better understand how you use our site. We do not have direct access to the information obtained from Google Analytics, but Google provides us with a summary through their dashboard.
We may share the information obtained from Google Analytics with business partners who are interested in advertising on our website. The information shared with these business partners will not contain any personally identifying information (Google does not provide us with direct access to the data and therefore we cannot see this information).
You can opt-out of having your information collected by Google Analytics by downloading the Google Analytics opt-out browser add-on provided by Google. This will prevent your information being used by Google Analytics. Doing this will not affect your ability to use our Site in any way. You can download the opt-out browser add-on here. We also honor the Do Not Track header and will not track visitors who have Do Not Track switched on.
### Changes to this Privacy Policy
We may update this privacy policy from time to time. We will notify you of any changes by posting the new privacy policy on the Site. You are advised to review this privacy policy periodically for any changes.
This Privacy Policy was last updated: 26th March, 2018.
### Contact Us
If you have any questions about our privacy policy, or how your data is being collected and processed, please join the [community discord channel](https://discord.gg/y4Vu9jw).
<br /><br />
| 83.846154 | 437 | 0.792355 | eng_Latn | 0.999619 |
bb5946a5c86b412fa720d889553d98b7d999ba30 | 8,178 | md | Markdown | README.md | lmfaber/poseidon | 91afc55197e22e80aaa0b735b40e81050d481963 | [
"MIT"
] | null | null | null | README.md | lmfaber/poseidon | 91afc55197e22e80aaa0b735b40e81050d481963 | [
"MIT"
] | null | null | null | README.md | lmfaber/poseidon | 91afc55197e22e80aaa0b735b40e81050d481963 | [
"MIT"
] | null | null | null | # PoSeiDon
## Docker command:
```
docker run --rm --user $(id -u):$(id -g) -it --mount type=bind,source=/your/favorite/poseidon/folder,target=/home/output lmfaber/poseidon:latest /bin/bash
```
## Usage:
```
Usage: poseidon --input /absolute/input.fa --output /absoulute/output/path
Use poseidon --example for a test run.
-o, --output=NAME Output directory. (Absolute path)
-i, --input=NAME Input multiple fasta file. (Absolute path)
-t, --title=NAME Project title. (No spaces)
--root-species=NAME Comma separated list of root species. E.g. Escherichia_coli,
-r, --reference-species=NAME Reference species.
--kh kh option?
--timestamp=NAME Timestamp
--example Run a test example.
-h, --help Prints this help
```
Here we present __PoSeiDon__, a pipeline to detect significant positively selected sites and possible recombination events in analignment of multiple coding sequences. Sites that undergo positive selection can give you insights in the evolutionary history of your sequences, for example showing you important mutation hot spots, accumulated as results of virus-host arms races during evolution.
We provide all ruby scripts needed to run the PoSeiDon pipeline.
__Please note__: we aimed that with these scripts the pipeline can be run _out
of the box_, however, PoSeiDon relies on a variety of different third-party
tools (see below). Binaries for most tools are also included in this repository
(`tools`) and PoSeiDon assumes them to be located in this folder. The larger
software package for HYPHY can be downloaded here directly and needs to be added
and extracted manually to the `tools` folder:
* <a href="https://www.rna.uni-jena.de/supplements/poseidon/hyphy.zip">hyphy.zip</a>
<!--* <a href="https://www.rna.uni-jena.de/supplements/poseidon/openmpi.zip">openmpi.zip</a>-->
Furthermore, you will need inkscape, pdflatex, ruby (tested with v2.4.2) and
some ruby gems (packages) as well as mpirun (Open MPI; tested with v2.0.2). If
you don't have anything of this installed, you can try on a Linux system:
````
apt-get install ruby
gem install bio
gem install mail
gem install encrypted_strings
apt-get install inkscape
apt-get install texlive-latex-base
apt-get install openmpi-bin
apt-get install hyphy-mpi
````
__We heavily recommend__ to use our Docker image that can be easily executed without the need to install tools manually.:
````
docker run mhoelzer/poseidon <TODO>
````
## Workflow of the PoSeiDon pipeline and example output
<a target="_blank" href="https://github.com/hoelzer/poseidon/blob/master/images/pipeline_landscape.pdf"><img src="https://github.com/hoelzer/poseidon/blob/master/images/pipeline_landscape.png" alt="PoSeiDon workflow" /></a>
The PoSeiDon pipeline comprises in-frame alignment of homologous protein-coding sequences, detection of putative recombination events and evolutionary breakpoints, phylogenetic reconstructions and detection of positively selected sites in the full alignment and all possible fragments. Finally, all results are combined and visualized in a user-friendly and clear HTML web page. The resulting alignment fragments are indicated with colored bars in the HTML output.
Please find an example output of the pipeline <a href="http://www.rna.uni-jena.de/supplements/mx1_bats/full_aln/">here</a>. (<a href="https://doi.org/10.1128/JVI.00361-17">Fuchs _et al_., 2017, Journal of Virology</a>)
### The PoSeiDon pipeline is based on the following tools and scripts:
* TranslatorX (v1.1), Abascal et al. (2010); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/20435676">20435676</a>
* Muscle (v3.8.31), Edgar (2004); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/15034147">15034147</a>
* RAxML (v8.0.25), Stamatakis (2014); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/24451623">24451623</a>
* Newick Utilities (v1.6), Junier and Zdobnov (2010); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/20472542">20472542</a>
* MODELTEST , Posada and Crandall (1998); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/9918953">9918953</a>
* HyPhy (v2.2), Pond et al. (2005); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/15509596">15509596</a>
* GARD , Pond et al. (2006); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/17110367">17110367</a>
* PaML/CodeML (v4.8), Yang (2007); <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/17483113">17483113</a>
* Ruby (v2.3.1)
* Inkscape (v0.48.5)
* pdfTeX (v3.14)
## Parameters
Most of the PoSeiDon parameters are optional and are explained here in detail.
### Input
Mandatory. Your input FASTA file must follow the format:
````
>Myotis_lucifugus Mx1 Gene
ATGGCGATCGAGATACGATACGTA...
>Myotis_davidii Mx1 Gene
ATGGCGGTCGAGATAAGATACGTT...
````
All sequences must have a correct open reading frame, are only allowed to contain nucleotide characters [A|C|G|T] and no internal stop codon.
Sequence IDs must be unique until the first occurrence of a space.
### Reference
Optional. Default: use first sequence ID as reference. You can define <b>one</b> species ID from your multiple FASTA file as a reference species. Positively selected sites and corresponding amino acids will be drawn in respect to this species. The ID must match the FASTA header until the occurence of the first space. For example, if you want <i>Myotis lucifugus</i> as your reference species and your FASTA file contains:
````>Myotis_lucifugus Mx1 Gene
ATGGCGATCGAGATACGATACGTA...
````
use
````Myotis_lucifugus````
as parameter to set the reference species. Per default the first ID occurring in the multiple FASTA file will be used.
### Outgroup
Optional. Default: trees are unrooted. You can define <b>one</b> or <b>multiple</b> (comma separated) species IDs as outgroup. All phylogenetic trees will be rooted according to this species. For example, if your multiple FASTA file contains
````>Myotis_lucifugus Mx1 Gene
ATGGCGATCGAGATACGATACGTA...
>Myotis_davidii Mx1 Gene
ATGGCGGTCGAGATAAGATACGTT...
>Pteropus_vampyrus Mx1 Gene
ATGGCCGTAGAGATTAGATACTTT...
>Eidolon_helvum Mx1 Gene
ATGCCCGTAGAGAATAGATACTTT...
````
you can define:
````Pteropus_vampyrus,Eidolon_helvum````
to root all trees in relation to this two species.
### Use also insignificant breakpoints
Optional. Default: false. With this parameter you can decide if insignificant breakpoints should be taken into account. All breakpoints are tested for significant topological incongruence using a Kashino Hasegawa (KH) test [Kishino, H. and Hasegawa, M. (1989)]. KH-insignificant breakpoints most frequently arise from variation in branch lengths between segments. Nevertheless, taking KH-insignificant breakpoints into account could be interesting, because we already observed putative positively selected sites in fragments without any significant topological incongruence. KH-insignificant fragments are marked in the final output, as they might not occur from real recombination events.
Per default only significant breakpoints are used for further calculations.
Please also keep in mind that using also insignificant breakpoints can extend the run time of PoSeiDon from minutes to hours, depending on the number of detected breakpoints.
### Use your own parameters
Currently, we don't provide full access to the parameters used within PoSeiDon through the web interface __[the web serice is currently under maintenance due to web page changes]__. In a future release, we will provide a local version of the pipeline for download including full access to the parameter settings of all executed tools. If you want to change parameters (e.g. for RAxML) now, just run the pipeline and PoSeiDon will also generate a 'Parameters' sub page (like <a href="https://www.rna.uni-jena.de/supplements/mx1_bats/full_aln/params.html">this</a>) in the final output, allowing access to all executed commands. With this, certain parts of the pipeline can be rerun locally using the provided commands and output files.
| 56.791667 | 734 | 0.754219 | eng_Latn | 0.961212 |
bb59606beb66f3b23fa59107578b1b36a925efce | 48 | md | Markdown | README.md | yolomachine/PerlDate | bdcee2600c7f8e5f89a810df2b241931eb49d357 | [
"MIT"
] | null | null | null | README.md | yolomachine/PerlDate | bdcee2600c7f8e5f89a810df2b241931eb49d357 | [
"MIT"
] | null | null | null | README.md | yolomachine/PerlDate | bdcee2600c7f8e5f89a810df2b241931eb49d357 | [
"MIT"
] | null | null | null | # PerlDate
Date manipulations via perl packages
| 16 | 36 | 0.833333 | eng_Latn | 0.539212 |
bb5984e0bb735f675ba55c883a09af8ea910c314 | 1,396 | md | Markdown | windows-driver-docs-pr/storage/dsm-queryuniqueid-wmi-class.md | thethales/windows-driver-docs | 55455d5e0ef9b8087e36c3bac7301b0db8ce79ba | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-30T20:31:06.000Z | 2021-11-30T20:31:06.000Z | windows-driver-docs-pr/storage/dsm-queryuniqueid-wmi-class.md | thethales/windows-driver-docs | 55455d5e0ef9b8087e36c3bac7301b0db8ce79ba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/storage/dsm-queryuniqueid-wmi-class.md | thethales/windows-driver-docs | 55455d5e0ef9b8087e36c3bac7301b0db8ce79ba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: DSM\_QueryUniqueId WMI Class
description: DSM\_QueryUniqueId WMI Class
ms.assetid: 576e208d-972c-47ba-ab30-a05bf3d0943d
ms.localizationpriority: medium
ms.date: 10/17/2018
---
# DSM\_QueryUniqueId WMI Class
MPIO publishes the DSM\_QueryUniqueId WMI class but expects the DSM to register the GUID and handle its implementation. A WMI client uses the DSM\_QueryUniqueId WMI class to query the unique identifier for a path.
```cpp
class DSM_QueryUniqueId
{
[key, read]
string InstanceName;
[read]
boolean Active;
//
// This Identifier needs to be set by DSMs that want management applications
// like VDS to be able to manage the devices controlled by the particular DSM.
// This DsmUniqueId will be used in conjuction with the DsmPathId to construct
// a path identitifer that is unique not just among all paths known to this DSM,
// but also among all the DSMs present on the system.
//
[WmiDataId(1),
DisplayName("DSM Unique Identifier") : amended,
Description("DSM Unique Identifier to be used by a management application") : amended
]
uint64 DsmUniqueId;
};
```
When this class definition is compiled by the WMI tool suite, it produces the [**DSM\_QueryUniqueId**](/windows-hardware/drivers/ddi/mpiodisk/ns-mpiodisk-_dsm_queryuniqueid) data structure. There are no methods associated with this WMI class.
| 32.465116 | 242 | 0.74212 | eng_Latn | 0.971915 |
bb5a20c07dd73ea3c2860799a996d8d54e2e0b9a | 792 | md | Markdown | README.md | illumineX/.gitignore-for-Cocoa-iOS-OSX-TV-Watch- | 09e43011cbfe7af8c7d2117e899e0b416b4caa8c | [
"MIT"
] | null | null | null | README.md | illumineX/.gitignore-for-Cocoa-iOS-OSX-TV-Watch- | 09e43011cbfe7af8c7d2117e899e0b416b4caa8c | [
"MIT"
] | null | null | null | README.md | illumineX/.gitignore-for-Cocoa-iOS-OSX-TV-Watch- | 09e43011cbfe7af8c7d2117e899e0b416b4caa8c | [
"MIT"
] | null | null | null | # .gitignore-for-Xcode
A comprehensive .gitignore for Cocoa development on all Apple platforms (Mac OS X, iOS, TV, Watch)
We first shared this in 2009, and it's been wildly popular.
The .gitignore file should prevent kruft from being checked into the Git repository. It should
default to excluding files that normally don't belong in a repository. Your application might
be an exception in some way. For example, you might want to include a database template in the
application install.
The .gitignore file should prevent even unlikely kruft, such as backup files from tools that you
don't use, because someone else on your team might use them.
Share and enjoy!
If you have suggestions for improvements, let us know or send a pull request.
Kind regards,
Gary W. Longsine
°
| 36 | 100 | 0.775253 | eng_Latn | 0.999312 |
bb5a9633f2bc0197569a30d917c9a85c4f7da769 | 522 | md | Markdown | Video/FFMPEG/RTSP to Youtube/README.md | 0x4447/0x4447-Scripts | 3ac525145db9d0258850b3640b422d7eaada014e | [
"MIT"
] | 1 | 2019-01-05T18:18:41.000Z | 2019-01-05T18:18:41.000Z | Video/FFMPEG/RTSP to Youtube/README.md | 0x4447/0x4447-Scripts | 3ac525145db9d0258850b3640b422d7eaada014e | [
"MIT"
] | 5 | 2018-10-16T08:16:53.000Z | 2018-12-03T20:50:24.000Z | Video/FFMPEG/RTSP to Youtube/README.md | 0x4447/0x4447-Scripts | 3ac525145db9d0258850b3640b422d7eaada014e | [
"MIT"
] | null | null | null |
# Stream IP Camera live to YouTube
The script uses your IP Camera address and YouTube stream information to Live stream the camera to youtube.
# How to Run
```
] ./script.sh
```
The script will prompt you for the RTSP URL, YouTube Stream URL, and also your YouTube Stream key.
The script has the capability of saving your information and re-using it in the future for your convenience.
# What to expect
Once you have provided the required information, the script will begin streaming to YouTube in the background.
| 27.473684 | 110 | 0.772031 | eng_Latn | 0.997412 |
bb5aeaa1f05d4b3633f8a28d9e6ec0e3bad20cac | 19 | md | Markdown | README.md | FTC12817/code | 809ef98802d94aaf697f63204fe1d37cff617761 | [
"MIT"
] | null | null | null | README.md | FTC12817/code | 809ef98802d94aaf697f63204fe1d37cff617761 | [
"MIT"
] | null | null | null | README.md | FTC12817/code | 809ef98802d94aaf697f63204fe1d37cff617761 | [
"MIT"
] | null | null | null | # code
hello world
| 6.333333 | 11 | 0.736842 | eng_Latn | 0.734341 |
bb5c26c07ddce356ab4b9a6b084b6c5d5697af75 | 1,397 | md | Markdown | src/main/tut/_partial_functions/try_get.md | Philippus/scala-best-practices | 20ea133022f031e986dd926424a982241f85037f | [
"CC-BY-4.0"
] | null | null | null | src/main/tut/_partial_functions/try_get.md | Philippus/scala-best-practices | 20ea133022f031e986dd926424a982241f85037f | [
"CC-BY-4.0"
] | null | null | null | src/main/tut/_partial_functions/try_get.md | Philippus/scala-best-practices | 20ea133022f031e986dd926424a982241f85037f | [
"CC-BY-4.0"
] | null | null | null | ---
title: Do not call get on an Try
layout: article
linters:
- name: wartremover
rules:
- name: TryPartial
url: http://www.wartremover.org/doc/warts.html#trypartial
- name: scapegoat
rules:
- name: TryGet
---
> When retrieving the content of a [`Try`], do not use [`get`].
# Reason
Some [`Trys`][`Try`] are [`Failures`][`Failure`], and [`get`] deals with them by throwing an exception.
```tut:book:fail
scala.util.Failure(new Exception).get
```
If you have a default value to provide in case of a [`Failure`], use [`getOrElse`]:
```tut:book
scala.util.Failure(new Exception).getOrElse(-1)
```
Another practical approach is to use [`fold`], which lets you provide a handler for each case:
```tut:book
import scala.util.{Failure, Try}
(Failure(new Exception): Try[Int]).fold(
e => s"Found an error: '${e.getMessage}'",
i => s"Found an int: '$i'"
)
```
[`Try`]:https://www.scala-lang.org/api/2.12.8/scala/util/Try.html
[`getOrElse`]:https://www.scala-lang.org/api/2.12.8/scala/util/Try.html#getOrElse[U%3E:T](default:=%3EU):U
[`fold`]:https://www.scala-lang.org/api/2.12.8/scala/util/Try.html#fold[U](fa:Throwable=%3EU,fb:T=%3EU):U
[`get`]:https://www.scala-lang.org/api/2.12.8/scala/util/Try.html#get:T
[`Failure`]:https://www.scala-lang.org/api/2.12.8/scala/util/Failure.html
[`Success`]:https://www.scala-lang.org/api/2.12.8/scala/util/Success.html
| 29.723404 | 106 | 0.672155 | yue_Hant | 0.433603 |
bb5d5e46c8c3d0aa39979803d2b3dc048ed1f7d2 | 4,632 | md | Markdown | Documentation/VirtualBox.md | densogiaichned/serenity | 99c0b895fed02949b528437d6b450d85befde7a5 | [
"BSD-2-Clause"
] | 2 | 2022-02-08T09:18:50.000Z | 2022-02-21T17:57:23.000Z | Documentation/VirtualBox.md | densogiaichned/serenity | 99c0b895fed02949b528437d6b450d85befde7a5 | [
"BSD-2-Clause"
] | null | null | null | Documentation/VirtualBox.md | densogiaichned/serenity | 99c0b895fed02949b528437d6b450d85befde7a5 | [
"BSD-2-Clause"
] | null | null | null | # Serenity installation guide for VirtualBox
## NOTICE
There are currently issues with running Serenity in VirtualBox. Please refer to the [open issue](https://github.com/SerenityOS/serenity/issues/2927) for a list of currently known issues. Anything that doesn't currently work will be noted in this document.
## Creating the disk image
Before creating a disk image that will work in VirtualBox, you will need to create a GRUB image as described in the [Serenity installation guide](BareMetalInstallation.md). Please skip the final step of that section, as that is only relevant for putting the image onto a real drive. You **cannot** use the same disk image created for QEMU. Using that image will halt immediately with the message ``FATAL: No bootable medium found! System halted.``
There are a couple of ways to convert the disk image:
If you have QEMU installed:
``
qemu-img convert -O vdi /path/to/grub_disk_image /path/to/output/serenityos.vdi
``
If you only have VirtualBox installed:
``
VBoxManage convertfromraw --format VDI /path/to/grub_disk_image /path/to/output/serenityos.vdi
``
Set an identifier to the disk image, otherwise updating the disk image makes the identifiers no longer match up.
``
VBoxManage internalcommands sethduuid serenityos.vdi 19850209-0000-0000-0000-000000000000
``
Note that if you are on Windows and you do not have QEMU or VirtualBox in your PATH environment variable, you must be in the installation folder for the tool you're using. You will also need to put ``./`` in front of the command.
## Creating the virtual machine
**Please note that these instructions were written with VirtualBox v6.1.12 in mind. Therefore, these instructions may not match exactly for past and future versions.**
1. Open the **Create Virtual Machine** dialog. Switch to **Expert Mode**.
2. Feel free to give it any name and store it anywhere.
3. Switch the **Type** to **Other** and the **Version** to **Other/Unknown (64-bit)**.
4. Serenity requires at minimum 256 MB of memory. Set **Memory size** equal to or above 256 MB. The currently recommended size is 1024 MB. Please note that Serenity is currently a 32-bit system, so anything above the ~3.5 GB mark will not be recognized.
5. For **Hard disk**, select **Use an existing virtual hard disk file**. Click the folder icon next to the dropdown to open the **Hard Disk Selector**.
6. Click **Add**. Browse to where you stored the converted disk image from the previous stage and add it. Click **Choose**.
7. Finally click **Create**.
Reference image:

## Configuring the virtual machine to boot Serenity
Serenity will not be able to boot with the default configuration. There are a couple settings to adjust. Open **Settings** and:
1. Go to **System**, open the **Processor** tab and tick **Enable PAE/NX**.
2. Go to **Audio** and set **Audio Controller** to **SoundBlaster 16**.
There are a couple of settings to check:
- In **Storage**, click on the **Controller**. Make sure the controller type is PIIX4. PIIX3 and ICH6 are untested. Anything else is guaranteed not to work, as Serenity does not currently support them.
- In **Network** and in the **Advanced** drop down, make sure the **Adapter Type** is anything but **Intel PRO/1000 MT Desktop (82540EM)**. While it is the only adapter type Serenity currently supports, it does not currently work in VirtualBox.
Please note that at the time of writing, audio and networking do not work in VirtualBox.
That is all you need to boot Serenity in VirtualBox! Read on for additional configuration you may want to use.
## Blinking cursor after GRUB menu
If you only see a blinking cursor after selecting an option in the GRUB menu, it is very likely you have encountered one of the errors listed in the [troubleshooting document.](Troubleshooting.md)
- Check that you have enabled PAE/NX in the **Settings** > **System** > **Processor** tab.
- If you are using a 64-bit disk image, check that **Version** is set to **Other/Unknown (64-bit)** instead of **Other/Unknown** in **Settings** > **General**.
## Additional configuration (optional)
For serial debugging, go to **Serial Ports** and enable port 1. Feel free to set the **Port Mode** to anything if you know what you're doing. The recommended mode is **Raw File**. Set **Path/Address** to where you want to store the file. This must also include the file name.
While the default 16 MB of video memory is more than enough to use the default resolution, it is not enough to use all the supported resolutions. If you want to use 2560x1080, you will need to supply at minimum 22 MB of video memory.
| 70.181818 | 447 | 0.759283 | eng_Latn | 0.997268 |
bb5de2351ec135a989caf93e4e97a5332625881f | 11,872 | md | Markdown | packages/Chronic_Absenteeism/readme.md | cstohlmann/OpenEduAnalytics | 1d607aa4255d5461d7da0c702ec2550b7a5bd187 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | packages/Chronic_Absenteeism/readme.md | cstohlmann/OpenEduAnalytics | 1d607aa4255d5461d7da0c702ec2550b7a5bd187 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | packages/Chronic_Absenteeism/readme.md | cstohlmann/OpenEduAnalytics | 1d607aa4255d5461d7da0c702ec2550b7a5bd187 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | # Student Support Package: Predicting Chronic Absenteeism
The OEA Chronic Absenteeism Package provides a set of assets which support an education system in developing their own predictive model to address chronic absenteeism. There are two main components of this package:
1. <ins>Guidance and documentation:</ins> The [OEA Chronic Absenteeism Package - Use Case Documentation](https://github.com/microsoft/OpenEduAnalytics/blob/main/packages/Chronic_Absenteeism/docs/OEA%20Chronic%20Abs%20Package%20-%20Use%20Case%20Doc.pdf) provides guidance on the end-to-end process of developing a successful Chronic Absenteeism use case project, including how to engage stakeholders in the project, prior research on the use case problem domain and theory, how to map data sources to the theory of the problem, and how to implement Microsoft’s Principles of Responsible Data and AI in the process of predictive modelling. <em> It is highly recommended this document be reviewed by any education system considering using this package, and that the documentation be revised to the specific data and decisions for that system’s context. </em>
2. <ins>Technical assets:</ins> Various assets are freely available in this package to help accelerate implementation of Chronic Absenteeism use cases. Assets include descriptions of data sources, notebooks for data processing, a pipeline for ML model building and deployment, and sample PowerBI dashboards. See descriptions of technical assets below.
This OEA Package was developed through a partnership between Microsoft Education, [Kwantum Analytics](https://www.kwantumanalytics.com/), and [Fresno Unified School District](https://www.fresnounified.org/) in Fresno, California.
## Problem Statement
Chronic absenteeism is generally defined as a student missing 10% or more of a school year. Student absenteeism is a fundamental challenge for education systems which has increased as result of the global pandemic. There is a growing body of research (see [Use Case Documentation](https://github.com/microsoft/OpenEduAnalytics/blob/main/packages/Chronic_Absenteeism/docs/OEA%20Chronic%20Abs%20Package%20-%20Use%20Case%20Doc.pdf)) substantiating what most parents and teachers have long believed to be true: School truancy undermines the growth and development of students. Students with more school absences have lower test scores and grades, a greater chance of dropping out of school, and higher odds of future unemployment. Absent students also exhibit greater behavioral issues, including social disengagement and alienation. The most recent national estimates in the US suggest that approximately 5–7.5 million students, out of a K–12 population of approximately 50 million, are missing at least 1 cumulative month of school days in a given academic year, translating into an aggregate 150–225 million days of instruction lost annually.
Machine learning models offer the potential to find patterns of absenteeism across student attendance patterns, class engagement, academic achievement, demographics, social-emotional measures and more. Predictions of students at risk of becoming chronically absent allows for targeted support of these students. A predictive model can be used to precisely focus resources to support students who are on the trajectory of chronic absenteeism, identify the best interventions to prevent absenteeism, and ultimately reduce absenteeism.
## Package Impact
This package was developed in collaboration with [Fresno Unified School District](https://www.fresnounified.org/) in Fresno, California and already has created an impact (see the [Use Case Documentation](https://github.com/microsoft/OpenEduAnalytics/blob/main/packages/Chronic_Absenteeism/docs/OEA%20Chronic%20Abs%20Package%20-%20Use%20Case%20Doc.pdf) for details).
In general, this package can be used by system or institutional leaders, school, or department leaders, support staff, and educators to:
- <em> accurately identify </em> which students are at risk of becoming chronically absent or may move to a higher tier of absence
- <em> quickly understand </em> what type of support resources or interventions might be most effective to prevent or reduce absenteeism with individual students
- <em> guide decision making </em> of school support staff by providing a real-time and detailed snapshot of students who are at risk of higher level of absence based on engagement, academic, and well-being patterns of that student.
See below for examples of developed PowerBI dashboards.
Patterns of absenteeism | Strongest drivers of model predictions | School support staff dashboard
:-------------------------:|:-------------------------:|:-------------------------:
 |  | 
## Machine Learning Approach
The machine learning model learns from past student data to predict if a student will become chronically absent in the future. The model building and assessment is done in 5 main steps:
1. <ins>Data collection:</ins> Select and aggregate data needed to train the model (described below)
2. <ins>Feature engineering:</ins> Use education context to combine and normalize data.
3. <ins>Model trianing:</ins> [AutoML](https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml) is used to train a best model via [Azure Machine Learning Studio](https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-machine-learning-studio). The best model is used to score the training dataset with predictions.
4. <ins>Model prediction interpretations:</ins> The [AutoML Explainer](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-automl) is used to identify which features are most impactful (called key drivers) on the best model predictions.
5. <ins>Fairness and PowerBI:</ins> Training data, model predictions, and model explanations are combined with other data such as student demographics. The combined data is made ready for PowerBI consumption. PowerBI enables assessment of model quality, analysis of predictions and key drivers, and analysis of model fairness with respect to student demographics.
See the Chronic Absenteeism Package [Documentation](https://github.com/microsoft/OpenEduAnalytics/tree/main/packages/Chronic_Absenteeism/docs) and [Pipelines](https://github.com/microsoft/OpenEduAnalytics/tree/main/packages/Chronic_Absenteeism/pipelines) for more details on model building.
## Data Sources
This package combines multiple data sources which were identified through research as strongly related to absenteeism:
* School Information System (SIS): School, grade, and roster data
* Barriers to students: Transportation data, distance from school, school changes, student illness
* School experiences: School suspension, disciplinary, behavior, and learning outcome data
* Engagement data: School attendance, digital engagement
This package can use several [OEA Modules](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules) to help ingest data sources that are typically used to understand patterns of chronic absenteeism (see below for list of relevant OEA modules).
| OEA Module | Description |
| --- | --- |
| [Ed-Fi Data Standards](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules/Education_Data_Standards/Ed-Fi) | For typical Student Information System (SIS) data, including detailed student attendance, demographic, digital activity, and academic data. |
| [Microsoft Digital Engagement](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules/Microsoft_Data) | Such as M365 [Education Insights Premium](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules/Microsoft_Data/Microsoft_Education_Insights_Premium), or [Microsoft Graph](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules/Microsoft_Data/Microsoft_Graph) data. |
| [Digital Learning Apps and Platforms](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules/Digital_Learning_Apps_and_Platforms) | [Clever](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules/Digital_Learning_Apps_and_Platforms/Clever) for learning ap data and [iReady](https://github.com/microsoft/OpenEduAnalytics/tree/main/modules/Digital_Learning_Apps_and_Platforms/iReady) for language and math assessments and learning activities. |
## Package Components
This Predicting Chronic Absenteeism package was developed by [Kwantum Analytics](https://www.kwantumanalytics.com/) in partnership with [Fresno Unified School District](https://www.fresnounified.org/) in Fresno, California. The architecture and reference implementation for all modules is built on [Azure Synapse Analytics](https://azure.microsoft.com/en-us/services/synapse-analytics/) - with [Azure Data Lake Storage](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction) as the storage backbone, and [Azure Active Directory](https://azure.microsoft.com/en-us/services/active-directory/) providing the role-based access control.
Assets in the Chronic Absenteeism package include:
1. [Data](https://github.com/microsoft/OpenEduAnalytics/tree/main/packages/Chronic_Absenteeism/data): For understanding the data relationships and standardized schema mappings used for certain groups of data.
2. [Documentation](https://github.com/microsoft/OpenEduAnalytics/tree/main/packages/Chronic_Absenteeism/docs):
* [Use Case Documentation](https://github.com/microsoft/OpenEduAnalytics/blob/main/packages/Chronic_Absenteeism/docs/OEA%20Chronic%20Abs%20Package%20-%20Use%20Case%20Doc.pdf)
* Resources and documentation for Machine Learning in Azure.
3. [Notebooks](https://github.com/microsoft/OpenEduAnalytics/tree/main/packages/Chronic_Absenteeism/notebooks): For cleaning, processing, and curating data within the data lake.
4. [Pipelines](https://github.com/microsoft/OpenEduAnalytics/tree/main/packages/Chronic_Absenteeism/pipelines): For an overarching process used to train the machine learning model and support PowerBI dashboards.
5. [PowerBI](https://github.com/microsoft/OpenEduAnalytics/tree/main/packages/Chronic_Absenteeism/powerbi): For exploring, visualizing, and deriving insights from the data.
# Legal Notices
Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode), see the [LICENSE](https://github.com/microsoft/OpenEduAnalytics/blob/main/LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the [LICENSE-CODE](https://github.com/microsoft/OpenEduAnalytics/blob/main/LICENSE-CODE) file.
Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.
Privacy information can be found at https://privacy.microsoft.com/en-us/
Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.
| 146.567901 | 1,141 | 0.810057 | eng_Latn | 0.940633 |
bb5de2e328fe60f83d1e4d292fb22dfde43c7420 | 2,529 | md | Markdown | windows.ui.xaml.controls/settingsflyout_showindependent_61271159.md | tossnet/winrt-api | 42ebcbca22e88f3ed0f273629d7cdf290dfe2133 | [
"CC-BY-4.0",
"MIT"
] | 199 | 2017-02-09T23:13:51.000Z | 2022-03-28T15:56:12.000Z | windows.ui.xaml.controls/settingsflyout_showindependent_61271159.md | tossnet/winrt-api | 42ebcbca22e88f3ed0f273629d7cdf290dfe2133 | [
"CC-BY-4.0",
"MIT"
] | 2,093 | 2017-02-09T21:52:45.000Z | 2022-03-25T22:23:18.000Z | windows.ui.xaml.controls/settingsflyout_showindependent_61271159.md | tossnet/winrt-api | 42ebcbca22e88f3ed0f273629d7cdf290dfe2133 | [
"CC-BY-4.0",
"MIT"
] | 620 | 2017-02-08T19:19:44.000Z | 2022-03-29T11:38:25.000Z | ---
-api-id: M:Windows.UI.Xaml.Controls.SettingsFlyout.ShowIndependent
-api-type: winrt method
---
<!-- Method syntax
public void ShowIndependent()
-->
# Windows.UI.Xaml.Controls.SettingsFlyout.ShowIndependent
## -description
Opens the Settings flyout, and returns the user to the app after the flyout is dismissed.
## -remarks
If a [SettingsFlyout](settingsflyout.md) is shown by calling the [Show](settingsflyout_show_392493604.md) method, then clicking the Back button reopens the [SettingsPane](../windows.ui.applicationsettings/settingspane.md) after the [SettingsFlyout](settingsflyout.md) dismisses. If a [SettingsFlyout](settingsflyout.md) is shown by calling ShowIndependent, then clicking the Back button dismisses the [SettingsFlyout](settingsflyout.md) and returns the user to the app; the [SettingsPane](../windows.ui.applicationsettings/settingspane.md) is not reopened.
Call the ShowIndependent method to open the [SettingsFlyout](settingsflyout.md) from a button in your app. In this case, because the user did not open the [SettingsFlyout](settingsflyout.md) from the [SettingsPane](../windows.ui.applicationsettings/settingspane.md), they should return to your app when they click the Back button.
Only one [SettingsFlyout](settingsflyout.md) is shown at a time. Calling ShowIndependent on a [SettingsFlyout](settingsflyout.md) closes any other [SettingsFlyout](settingsflyout.md) currently shown. The [SettingsFlyout](settingsflyout.md) being closed completes its close animation before the new [SettingsFlyout](settingsflyout.md) begins its show animation.
## -examples
This example shows how to use the ShowIndependent method to open a [SettingsFlyout](settingsflyout.md) from a button in your app.
```xaml
<Button Content="App update settings" Click="UpdateSettingsButton_Click"/>
```
```csharp
private void UpdateSettingsButton_Click(object sender, RoutedEventArgs e)
{
UpdateSettingsFlyout updatesFlyout = new UpdateSettingsFlyout();
updatesFlyout.ShowIndependent();
}
```
For more code in context, see Scenario 4 of the [App settings sample](https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/master/Official%20Windows%20Platform%20Sample/App%20settings%20sample) and [Quickstart: Add app help](/previous-versions/windows/apps/jj649425(v=win.10)).
## -see-also
[Show](settingsflyout_show_392493604.md), [Quickstart: Add app settings](/previous-versions/windows/apps/hh872190(v=win.10)), [Quickstart: Add app help](/previous-versions/windows/apps/jj649425(v=win.10))
| 60.214286 | 556 | 0.794385 | eng_Latn | 0.890709 |
bb5e159c97904208a49a34a8082200c5bbd77caa | 570 | md | Markdown | _posts/2016-10-13-error-unexpected end of file.md | jixuege/jixuege.github.io | 9fe677db699834ebb3829654be9ad6a331e47f49 | [
"Apache-2.0"
] | null | null | null | _posts/2016-10-13-error-unexpected end of file.md | jixuege/jixuege.github.io | 9fe677db699834ebb3829654be9ad6a331e47f49 | [
"Apache-2.0"
] | null | null | null | _posts/2016-10-13-error-unexpected end of file.md | jixuege/jixuege.github.io | 9fe677db699834ebb3829654be9ad6a331e47f49 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: 解决 syntax error unexpected end of file
date: 2016-10-04
categories: blog
tags: [错误总结,Linux]
description: 执行脚本报错信息syntax error unexpected end of file
---
# 前言
shell脚本执行提示错误信息“syntax error unexpected end of file”,下面就记录一下如何解决。
# 原因分析
这是由于在Windows下写好的shell在Linux上不能直接使用的缘故
# 解决办法
使用命令dos2unix来解决。需要安装一下
<pre>
yum install dos2unix -y
dos2unix container-entrypoint
</pre>
that's all,enjoy!!
参考blog:
[http://renyongjie668.blog.163.com/blog/static/1600531201172803244846/](http://renyongjie668.blog.163.com/blog/static/1600531201172803244846/ )
| 19 | 143 | 0.785965 | yue_Hant | 0.45309 |
bb5e5de5df4f82532d027154566e5411d24b191c | 2,596 | md | Markdown | customize/desktop/unattend/microsoft-windows-snmp-agent-service-permittedmanagers.md | xiaoyinl/commercialization-public | b6bcc82a54c3f0b2f58d3b54e8bc9c4cb69df34d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | customize/desktop/unattend/microsoft-windows-snmp-agent-service-permittedmanagers.md | xiaoyinl/commercialization-public | b6bcc82a54c3f0b2f58d3b54e8bc9c4cb69df34d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | customize/desktop/unattend/microsoft-windows-snmp-agent-service-permittedmanagers.md | xiaoyinl/commercialization-public | b6bcc82a54c3f0b2f58d3b54e8bc9c4cb69df34d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: PermittedManagers
description: PermittedManagers
MSHAttr:
- 'PreferredSiteName:MSDN'
- 'PreferredLib:/library/windows/hardware'
ms.assetid: 19b926cd-d3ff-417a-956d-3d216f6dd183
ms.mktglfcycl: deploy
ms.sitesec: msdn
ms.date: 05/02/2017
ms.topic: article
---
# PermittedManagers
`PermittedManagers` specifies whether the computer accepts Simple Network Management Protocol (SNMP) requests from any host.
If a host is specified in [A1](microsoft-windows-snmp-agent-service-permittedmanagers-a1.md), the computer accepts SNMP requests only from that host. You can specify either the host computer name or its IP address for the host. It is recommended that `Host` be used as the host computer name.
You can use this setting in core installations of Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012, by enabling **SNMP-SC** in the Windows Foundation package.
## Child Elements
| Setting | Description |
|:------------------------|:--------------------------------------------------------------------------------------|
| [A1](microsoft-windows-snmp-agent-service-permittedmanagers-a1.md) | Specifies a host from which the computer accepts SNMP requests. |
## Valid Configuration Passes
generalize
specialize
## Parent Hierarchy
[Microsoft-Windows-SNMP-Agent-Service](microsoft-windows-snmp-agent-service.md) | **PermittedManagers**
## Applies To
For the list of the supported Windows editions and architectures that this component supports, see [Microsoft-Windows-SNMP-Agent-Service](microsoft-windows-snmp-agent-service.md).
## XML Example
The following XML output shows how to set SNMP.
```XML
<PermittedManagers>
<A1>networkhost</A1>
</PermittedManagers>
<RFC1156Agent>
<sysContact>MyContact</sysContact>
<sysLocation>MyLocation</sysLocation>
<sysServices>65</sysServices>
</RFC1156Agent>
<TrapConfiguration>
<TrapConfigurationItems wcm:action="add">
<Community_Name>Private</Community_Name>
<Traps>ComputerName</Traps>
</TrapConfigurationItems>
<TrapConfigurationItems wcm:action="add">
<Community_Name>Public</Community_Name>
<Traps>207.46.197.32</Traps>
</TrapConfigurationItems>
</TrapConfiguration>
<ValidCommunities>
<ValidCommunity wcm:action="add" wcm:keyValue="Community1">2</ValidCommunity>
<ValidCommunity wcm:action="add" wcm:keyValue="Community2">4</ValidCommunity>
</ValidCommunities>
```
## Related topics
[Microsoft-Windows-SNMP-Agent-Service](microsoft-windows-snmp-agent-service.md)
| 33.714286 | 292 | 0.71225 | eng_Latn | 0.596321 |
bb5e7ec66b4fa2e02e384fefbfca1d92e7db408a | 585 | md | Markdown | internal/formatter/test-fixtures/typescript/constructorQualifiers/input.test.md | mainangethe/tools | a030d7efe77ccbf3b4fcc1f1d86fd1de29e2743f | [
"MIT"
] | 1 | 2021-03-12T02:12:59.000Z | 2021-03-12T02:12:59.000Z | internal/formatter/test-fixtures/typescript/constructorQualifiers/input.test.md | dctalbot/tools | c64b9e87d1e389db0e698444349978c7188b1c8a | [
"MIT"
] | null | null | null | internal/formatter/test-fixtures/typescript/constructorQualifiers/input.test.md | dctalbot/tools | c64b9e87d1e389db0e698444349978c7188b1c8a | [
"MIT"
] | null | null | null | # `index.test.ts`
**DO NOT MODIFY**. This file has been autogenerated. Run `rome test internal/formatter/index.test.ts --update-snapshots` to update.
## `typescript > constructorQualifiers`
### `Diagnostics`
```
```
### `Input`
```js
class Foo {
constructor(public a: string, private b: string, protected c: string) {
console.log(a);
console.log(b);
console.log(c);
}
}
```
### `Output`
```js
class Foo {
constructor(public a: string, private b: string, protected c: string) {
console.log(a);
console.log(b);
console.log(c);
}
}
```
| 15 | 131 | 0.613675 | eng_Latn | 0.409483 |
bb5f38f1a8eea226b3f57b1588964299581e9751 | 1,238 | md | Markdown | content/blog/kubernetes-healthchecks.md | okeeffed/dennisokeeffe-blog | a7cfdca4481934c5e766a0bbf7d1a3b622b25665 | [
"MIT"
] | 2 | 2021-02-04T11:47:33.000Z | 2022-01-30T07:34:40.000Z | content/blog/kubernetes-healthchecks.md | okeeffed/dennisokeeffe-blog | a7cfdca4481934c5e766a0bbf7d1a3b622b25665 | [
"MIT"
] | 2 | 2021-02-11T21:36:48.000Z | 2021-02-11T21:36:49.000Z | content/blog/kubernetes-healthchecks.md | okeeffed/dennisokeeffe-blog | a7cfdca4481934c5e766a0bbf7d1a3b622b25665 | [
"MIT"
] | 1 | 2021-12-06T02:06:58.000Z | 2021-12-06T02:06:58.000Z | ---
title: Kubernetes Healthchecks
date: "2018-07-24"
description: An example of a simple health check for Kubernetes.
---
If the application malfunctions, the pod and container may still be running but the application may no longer be running. This is where health checks come in.
## Two types of health checks
1. Running a command in the container periodically
2. Periodic checks on a URL
The typical prod application behind a load balancer should always have health checks implemented in some way to ensure availability and resiliency.
Below you can see where the healthcheck is. You can check the port or container port name.
```yaml
# pod-helloworld.yml
apiVersion: v1
kind: Pod
metadata:
name: nodehelloworld.example.com
labels:
app: helloworld
spec:
# The containers are listed here
containers:
- name: k8s-demo
image: okeeffed/docker-demo
ports:
- containerPort: 3000
# ! This is the health check
livenessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 15
timeoutSeconds: 30
```
More explicit information can be [found here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).
| 27.511111 | 158 | 0.725363 | eng_Latn | 0.99079 |
bb5f8c96ebfaf31b6672f60d5551d19111112611 | 11,384 | md | Markdown | .atom/packages/phpunit-snippets/README.md | callistino/dotfiles | 5aa9c0ba64021a87e5b98ac4e76a784645bb356b | [
"MIT"
] | null | null | null | .atom/packages/phpunit-snippets/README.md | callistino/dotfiles | 5aa9c0ba64021a87e5b98ac4e76a784645bb356b | [
"MIT"
] | null | null | null | .atom/packages/phpunit-snippets/README.md | callistino/dotfiles | 5aa9c0ba64021a87e5b98ac4e76a784645bb356b | [
"MIT"
] | null | null | null | # PHPUnit Snippets
Snippets to help you writing [PHPUnit](http://phpunit.de) tests in [Atom.io](http://atom.io).
```php
// assertArrayHasKey + [TAB]
$this->assertArrayHasKey($key, $array, "message");
// assertArrayNotHasKey + [TAB]
$this->assertArrayNotHasKey($key, $array, "message");
// assertContains + [TAB]
$this->assertContains($needle, $haystack, "message", $ignoreCase = false, $checkForObjectIdentity = true, $checkForNonObjectIdentity = false);
// assertAttributeContains + [TAB]
$this->assertAttributeContains($needle, $haystack, "message", $ignoreCase = false, $checkForObjectIdentity = true, $checkForNonObjectIdentity = false);
// assertNotContains + [TAB]
$this->assertNotContains($needle, $haystack, "message", $ignoreCase = false, $checkForObjectIdentity = true, $checkForNonObjectIdentity = false);
// assertAttributeNotContains + [TAB]
$this->assertAttributeNotContains($needle, $haystack, "message", $ignoreCase = false, $checkForObjectIdentity = true, $checkForNonObjectIdentity = false);
// assertContainsOnly + [TAB]
$this->assertContainsOnly($type, $haystack, $isNativeType = NULL, "message");
// assertContainsOnlyInstancesOf + [TAB]
$this->assertContainsOnlyInstancesOf($classname, $haystack, "message");
// assertAttributeContainsOnly + [TAB]
$this->assertAttributeContainsOnly($type, $haystackAttributeName, $haystackClassOrObject, $isNativeType = null, "message");
// assertNotContainsOnly + [TAB]
$this->assertNotContainsOnly($type, $haystack, $isNativeType = null, "message");
// assertAttributeNotContainsOnly + [TAB]
$this->assertAttributeNotContainsOnly($type, $haystackAttributeName, $haystackClassOrObject, $isNativeType = null, "message");
// assertCount + [TAB]
$this->assertCount($expectedCount, $haystack, "message");
// assertAttributeCount + [TAB]
$this->assertAttributeCount($expectedCount, $haystackAttributeName, $haystackClassOrObject, "message");
// assertNotCount + [TAB]
$this->assertNotCount($expectedCount, $haystack, "message");
// assertAttributeNotCount + [TAB]
$this->assertAttributeNotCount($expectedCount, $haystackAttributeName, $haystackClassOrObject, "message");
// assertEquals + [TAB]
$this->assertEquals($expected, $actual, "message", $delta = 0, $maxDepth = 10, $canonicalize = false, $ignoreCase = false);
// assertAttributeEquals + [TAB]
$this->assertAttributeEquals($expected, $actualAttributeName, $actualClassOrObject, "message", $delta = 0, $maxDepth = 10, $canonicalize = false, $ignoreCase = false);
// assertNotEquals + [TAB]
$this->assertNotEquals($expected, $actual, "message", $delta = 0, $maxDepth = 10, $canonicalize = false, $ignoreCase = false);
// assertAttributeNotEquals + [TAB]
$this->assertAttributeNotEquals($expected, $actualAttributeName, $actualClassOrObject, "message", $delta = 0, $maxDepth = 10, $canonicalize = false, $ignoreCase = false);
// assertEmpty + [TAB]
$this->assertEmpty($actual, "message");
// assertAttributeEmpty + [TAB]
$this->assertAttributeEmpty($haystackAttributeName, $haystackClassOrObject, "message");
// assertNotEmpty + [TAB]
$this->assertNotEmpty($actual, "message");
// assertAttributeNotEmpty + [TAB]
$this->assertAttributeNotEmpty($haystackAttributeName, $haystackClassOrObject, "message");
// assertGreaterThan + [TAB]
$this->assertGreaterThan($expected, $actual, "message");
// assertAttributeGreaterThan + [TAB]
$this->assertAttributeGreaterThan($expected, $actualAttributeName, $actualClassOrObject, "message");
// assertGreaterThanOrEqual + [TAB]
$this->assertGreaterThanOrEqual($expected, $actual, "message");
// assertAttributeGreaterThanOrEqual + [TAB]
$this->assertAttributeGreaterThanOrEqual($expected, $actualAttributeName, $actualClassOrObject, "message");
// assertLessThan + [TAB]
$this->assertLessThan($expected, $actual, "message");
// assertAttributeLessThan + [TAB]
$this->assertAttributeLessThan($expected, $actualAttributeName, $actualClassOrObject, "message");
// assertLessThanOrEqual + [TAB]
$this->assertLessThanOrEqual($expected, $actual, "message");
// assertAttributeLessThanOrEqual + [TAB]
$this->assertAttributeLessThanOrEqual($expected, $actualAttributeName, $actualClassOrObject, "message");
// assertFileEquals + [TAB]
$this->assertFileEquals($expected, $actual, "message", $canonicalize = false, $ignoreCase = false);
// assertFileNotEquals + [TAB]
$this->assertFileNotEquals($expected, $actual, "message", $canonicalize = false, $ignoreCase = false);
// assertStringEqualsFile + [TAB]
$this->assertStringEqualsFile($expectedFile, $actualString, "message", $canonicalize = false, $ignoreCase = false);
// assertStringNotEqualsFile + [TAB]
$this->assertStringNotEqualsFile($expectedFile, $actualString, "message", $canonicalize = false, $ignoreCase = false);
// assertFileExists + [TAB]
$this->assertFileExists($filename, "message");
// assertFileNotExists + [TAB]
$this->assertFileNotExists($filename, "message");
// assertTrue + [TAB]
$this->assertTrue($condition, "message");
// assertNotTrue + [TAB]
$this->assertNotTrue($condition, "message");
// assertFalse + [TAB]
$this->assertFalse($condition, "message");
// assertNotFalse + [TAB]
$this->assertNotFalse($condition, "message");
// assertNotNull + [TAB]
$this->assertNotNull($actual, "message");
// assertNull + [TAB]
$this->assertNull($actual, "message");
// assertClassHasAttribute + [TAB]
$this->assertClassHasAttribute($attributeName, $className, "message");
// assertClassNotHasAttribute + [TAB]
$this->assertClassNotHasAttribute($attributeName, $className, "message");
// assertClassHasStaticAttribute + [TAB]
$this->assertClassHasStaticAttribute($attributeName, $className, "message");
// assertClassNotHasStaticAttribute + [TAB]
$this->assertClassNotHasStaticAttribute($attributeName, $className, "message");
// assertObjectHasAttribute + [TAB]
$this->assertObjectHasAttribute($attributeName, $object, "message");
// assertObjectNotHasAttribute + [TAB]
$this->assertObjectNotHasAttribute($attributeName, $object, "message");
// assertSame + [TAB]
$this->assertSame($expected, $actual, "message");
// assertAttributeSame + [TAB]
$this->assertAttributeSame($expected, $actualAttributeName, $actualClassOrObject, "message");
// assertNotSame + [TAB]
$this->assertNotSame($expected, $actual, "message");
// assertAttributeNotSame + [TAB]
$this->assertAttributeNotSame($expected, $actualAttributeName, $actualClassOrObject, "message");
// assertInstanceOf + [TAB]
$this->assertInstanceOf($expected, $actual, "message");
// assertAttributeInstanceOf + [TAB]
$this->assertAttributeInstanceOf($expected, $attributeName, $classOrObject, "message");
// assertNotInstanceOf + [TAB]
$this->assertNotInstanceOf($expected, $actual, "message");
// assertAttributeNotInstanceOf + [TAB]
$this->assertAttributeNotInstanceOf($expected, $attributeName, $classOrObject, "message");
// assertInternalType + [TAB]
$this->assertInternalType($expected, $actual, "message");
// assertAttributeInternalType + [TAB]
$this->assertAttributeInternalType($expected, $attributeName, $classOrObject, "message");
// assertNotInternalType + [TAB]
$this->assertNotInternalType($expected, $actual, "message");
// assertAttributeNotInternalType + [TAB]
$this->assertAttributeNotInternalType($expected, $attributeName, $classOrObject, "message");
// assertRegExp + [TAB]
$this->assertRegExp($pattern, $string, "message");
// assertNotRegExp + [TAB]
$this->assertNotRegExp($pattern, $string, "message");
// assertSameSize + [TAB]
$this->assertSameSize($expected, $actual, "message");
// assertNotSameSize + [TAB]
$this->assertNotSameSize($expected, $actual, "message");
// assertStringMatchesFormat + [TAB]
$this->assertStringMatchesFormat($format, $string, "message");
// assertStringNotMatchesFormat + [TAB]
$this->assertStringNotMatchesFormat($format, $string, "message");
// assertStringMatchesFormatFile + [TAB]
$this->assertStringMatchesFormatFile($formatFile, $string, "message");
// assertStringNotMatchesFormatFile + [TAB]
$this->assertStringNotMatchesFormatFile($formatFile, $string, "message");
// assertStringStartsWith + [TAB]
$this->assertStringStartsWith($prefix, $string, "message");
// assertStringStartsNotWith + [TAB]
$this->assertStringStartsNotWith($prefix, $string, "message");
// assertStringEndsWith + [TAB]
$this->assertStringEndsWith($suffix, $string, "message");
// assertStringEndsNotWith + [TAB]
$this->assertStringEndsNotWith($suffix, $string, "message");
// assertXmlFileEqualsXmlFile + [TAB]
$this->assertXmlFileEqualsXmlFile($expectedFile, $actualFile, "message");
// assertXmlFileNotEqualsXmlFile + [TAB]
$this->assertXmlFileNotEqualsXmlFile($expectedFile, $actualFile, "message");
// assertXmlStringEqualsXmlFile + [TAB]
$this->assertXmlStringEqualsXmlFile($expectedFile, $actualXml, "message");
// assertXmlStringNotEqualsXmlFile + [TAB]
$this->assertXmlStringNotEqualsXmlFile($expectedFile, $actualXml, "message");
// assertXmlStringEqualsXmlString + [TAB]
$this->assertXmlStringEqualsXmlString($expectedXml, $actualXml, "message");
// assertXmlStringNotEqualsXmlString + [TAB]
$this->assertXmlStringNotEqualsXmlString($expectedXml, $actualXml, "message");
// assertEqualXMLStructure + [TAB]
$this->assertEqualXMLStructure($expectedElement, $actualElement, $checkAttributes = false, "message");
// assertSelectCount + [TAB]
$this->assertSelectCount($selector, $count, $actual, "message", $isHtml = true);
// assertSelectRegExp + [TAB]
$this->assertSelectRegExp($selector, $pattern, $count, $actual, "message", $isHtml = true);
// assertSelectEquals + [TAB]
$this->assertSelectEquals($selector, $content, $count, $actual, "message", $isHtml = true);
// assertTag + [TAB]
$this->assertTag($matcher, $actual, "message", $isHtml = true);
// assertNotTag + [TAB]
$this->assertNotTag($matcher, $actual, "message", $isHtml = true);
// assertThat + [TAB]
$this->assertThat($value, $constraint, "message");
// assertJson + [TAB]
$this->assertJson($actualJson, "message");
// assertJsonStringEqualsJsonString + [TAB]
$this->assertJsonStringEqualsJsonString($expectedJson, $actualJson, "message");
// assertJsonStringNotEqualsJsonString + [TAB]
$this->assertJsonStringNotEqualsJsonString($expectedJson, $actualJson, "message");
// assertJsonStringEqualsJsonFile + [TAB]
$this->assertJsonStringEqualsJsonFile($expectedFile, $actualJson, "message");
// assertJsonStringNotEqualsJsonFile + [TAB]
$this->assertJsonStringNotEqualsJsonFile($expectedFile, $actualJson, "message");
// assertJsonFileNotEqualsJsonFile + [TAB]
$this->assertJsonFileNotEqualsJsonFile($expectedFile, $actualFile, "message");
// assertJsonFileEqualsJsonFile + [TAB]
$this->assertJsonFileEqualsJsonFile($expectedFile, $actualFile, "message");
```
## Patches & Features
* Fork
* Mod, fix
* Test - this is important, so it's not unintentionally broken
* Commit - do not mess with license, todo, version, etc. (if you do change any, bump them into commits of their own that I can ignore when I pull)
* Pull request - bonus point for topic branches
## Bugs & Feedback
http://github.com/gourmet/common/issues
### Known bugs
* Snippets with more than 10+ placeholders - tabs don't work as expected yet.
## License
Copyright 2014, [Jad Bitar](http://jadb.io)
Licensed under [The MIT License](http://www.opensource.org/licenses/mit-license.php)
Redistributions of files must retain the above copyright notice.
| 36.841424 | 170 | 0.749473 | kor_Hang | 0.254244 |
bb5fa26d62c628a2d949060e02130760bb83c6e0 | 702 | md | Markdown | content/blog/20200405/index.md | Isandaddy/myBlog | efb678c2bafe778bf8a5089f5bad711af514420e | [
"MIT"
] | 2 | 2020-04-10T00:12:16.000Z | 2020-05-14T05:27:14.000Z | src/posts/20200405/index.md | Isandaddy/blog-again | 222136c9af29d0130b86db96776935165c8ba1fc | [
"MIT"
] | null | null | null | src/posts/20200405/index.md | Isandaddy/blog-again | 222136c9af29d0130b86db96776935165c8ba1fc | [
"MIT"
] | null | null | null | ---
title: TCL
date: "2020-04-05"
description: "TCL20200405"
---
### TCL
#### javaScript
- 전개연산자
...
배열에 있는 요소들을 가지고 온다.
```
function sum(x, y, z, a) {
return x + y + z - a;
}
const numbers = [1, 2, 3, 4];
console.log(...numbers); // 1, 2, 3, 4
console.log(sum(...numbers));
// expected output: 2
console.log(sum.apply(null, numbers));
// expected output: 2
```
객체의 경우 덮어쓰기
```
var obj1 = { foo: 'bar', x: 42 };
var obj2 = { foo: 'baz', y: 13 };
var clonedObj = { ...obj1 };
// Object { foo: "bar", x: 42 }
var mergedObj = { ...obj1, ...obj2 };
// Object { foo: "baz", x: 42, y: 13 }
```
### 생각
###ref
https://developer.mozilla.org/ko/docs/Web/JavaScript/Reference/Operators/Spread_syntax
| 17.55 | 86 | 0.575499 | kor_Hang | 0.63762 |
bb607608804108d348573fcb4807d128d8f26d58 | 2,343 | md | Markdown | LJ-code201-day2.md | austriker27/learning-journal | e1cb8527a5edaa1403bc1b8183ebe88df94634c1 | [
"MIT"
] | null | null | null | LJ-code201-day2.md | austriker27/learning-journal | e1cb8527a5edaa1403bc1b8183ebe88df94634c1 | [
"MIT"
] | null | null | null | LJ-code201-day2.md | austriker27/learning-journal | e1cb8527a5edaa1403bc1b8183ebe88df94634c1 | [
"MIT"
] | null | null | null | # LJ Code 201 - Day 2
## Journaling to Learning
*Directions:
At the end of the day, write to your heart's content. Write about what you learned, what may or may not have happened in group work, what you're hoping for, etc. This is for your own reflection more than it is just another assessment instrument.*
Today was good. So far I feel good. Although my brain hurts, so far most of the material is review so my mind hasn't been totally blown yet. Which makes sense, because it's only day 2. I really like the html and CSS material but struggle with in-depth JS so far. So I'm a bit apprehensive looking towards some heavy JS days in the near future but am stoked to learn more about building websites. I'm excited to learn more about layout and dig deep into the how to of designing a website. All the side project work I've done in html and css in the previous months has helped me understand a lot of the basics around html, css, git and random computer knowledge (like explaining what agile was earlier today).
Today I really enjoyed a collaborative lab environment that a few students and I had as we plowed through some code together. I like helping those around me as I shared a lot of the knowledge I have to share about web dev. Today was a great chance of pace from Day 1 because it was only a half day of lecture and we got a lot of hands on exposure in the afternoon which was nice. Sometimes I get a bit frustrated when questions stray completely off topic or are things that were covered many times in our readings.
I stumped TA Brian today, which was kinda fun. I was working on the `or` part of the `if` statement in my JS file for our about_me page and had `(predictionLower === 'yes' || === 'y') ` in my code after a prompt but no matter what I typed it seemed like the `predictionLower` variable was returning the answer for yes even though `consoleLog` returned what we entered correctly. Turns out this conditional is incorrect and the actual way to write it is : `(predictionLower === 'yes' || predictionLower === 'y')`. *Guess the computer-machine-thing isn't smart to know we wanted the conditional included before and after the `||` statement.* **Yay humans!**
Next up, more complicated JS!

| 146.4375 | 708 | 0.772514 | eng_Latn | 0.9999 |
bb60cf57cb03a2406fa028aacaa63720fea61451 | 28,033 | md | Markdown | src/api/vehicle.md | Hashim11223344/platform-documentation | fdba7bedcc7958907471509af2b39a6e19ea08d2 | [
"MIT"
] | null | null | null | src/api/vehicle.md | Hashim11223344/platform-documentation | fdba7bedcc7958907471509af2b39a6e19ea08d2 | [
"MIT"
] | null | null | null | src/api/vehicle.md | Hashim11223344/platform-documentation | fdba7bedcc7958907471509af2b39a6e19ea08d2 | [
"MIT"
] | null | null | null | ---
id: vehicle
name: Vehicle
title: Vehicle
tags:
- API
---
# Vehicle
Vehicle is a CoreObject representing a vehicle that can be occupied and driven by a player. Vehicle also implements the [Damageable](damageable.md) interface.
## Properties
| Property Name | Return Type | Description | Tags |
| -------- | ----------- | ----------- | ---- |
| `driver` | [`Player`](player.md) | The Player currently driving the vehicle, or `nil` if there is no driver. | Read-Only |
| `enterTrigger` | [`Trigger`](trigger.md) | Returns the Trigger a Player uses to occupy the vehicle. | Read-Only |
| `camera` | [`Camera`](camera.md) | Returns the Camera used for the driver while they occupy the vehicle. | Read-Only |
| `driverAnimationStance` | `string` | Returns the animation stance that will be applied to the driver while they occupy the vehicle. | Read-Only |
| `mass` | `number` | Returns the mass of the vehicle in kilograms. | Read-Write |
| `maxSpeed` | `number` | The maximum speed of the vehicle in centimeters per second. | Read-Write |
| `accelerationRate` | `number` | The approximate acceleration rate of the vehicle in centimeters per second squared. | Read-Write |
| `brakeStrength` | `number` | The maximum deceleration of the vehicle when stopping. | Read-Write |
| `coastBrakeStrength` | `number` | The deceleration of the vehicle while coasting (with no forward or backward input). | Read-Write |
| `tireFriction` | `number` | The amount of friction tires or treads have on the ground. | Read-Write |
| `gravityScale` | `number` | How much gravity affects this vehicle. Default value is 1.9. | Read-Write |
| `isAccelerating` | `boolean` | Returns `true` if the vehicle is currently accelerating. | Read-Only |
| `isDriverHidden` | `boolean` | Returns `true` if the driver is made invisible while occupying the vehicle. | Read-Only |
| `isDriverAttached` | `boolean` | Returns `true` if the driver is attached to the vehicle while they occupy it. | Read-Only |
| `isBrakeEngaged` | `boolean` | Returns `true` if the driver of the vehicle is currently using the brakes. | Read-Only |
| `isHandbrakeEngaged` | `boolean` | Returns `true` if the driver of the vehicle is currently using the handbrake. | Read-Only |
| `hitPoints` | `number` | Current amount of hit points. | Read-Write |
| `maxHitPoints` | `number` | Maximum amount of hit points. | Read-Write |
| `isDead` | `boolean` | True if the object is dead, otherwise false. Death occurs when damage is applied which reduces hit points to 0, or when the `Die()` function is called. | Read-Only |
| `isImmortal` | `boolean` | When set to `true`, this object cannot die. | Read-Write |
| `isInvulnerable` | `boolean` | When set to `true`, this object does not take damage. | Read-Write |
| `destroyOnDeath` | `boolean` | When set to `true`, this object will automatically be destroyed when it dies. | Read-Only |
| `destroyOnDeathDelay` | `number` | Delay in seconds after death before this object is destroyed, if `destroyOnDeath` is set to `true`. Defaults to 0. | Read-Only |
| `destroyOnDeathClientTemplateId` | `string` | Optional asset ID of a template to be spawned on clients when this object is automatically destroyed on death. | Read-Only |
| `destroyOnDeathNetworkedTemplateId` | `string` | Optional asset ID of a networked template to be spawned on the server when this object is automatically destroyed on death. | Read-Only |
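
For illustration, here is a minimal sketch of reading and tuning a few of these handling properties from a server context. It assumes the script is placed as a direct child of the Vehicle; the multipliers are arbitrary values chosen for the example.

```lua
-- Server script, assumed to be a direct child of a Vehicle object.
local VEHICLE = script.parent

-- Read some current state of the vehicle.
print("Mass (kg): " .. VEHICLE.mass)
print("Max speed (cm/s): " .. VEHICLE.maxSpeed)

-- Tune handling by scaling the values configured in the editor.
VEHICLE.maxSpeed = VEHICLE.maxSpeed * 1.25          -- 25% higher top speed
VEHICLE.accelerationRate = VEHICLE.accelerationRate * 1.1
VEHICLE.tireFriction = VEHICLE.tireFriction * 0.9   -- slightly looser handling
```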
## Functions
| Function Name | Return Type | Description | Tags |
| -------- | ----------- | ----------- | ---- |
| `SetDriver(Player)` | `None` | Sets the given player as the new driver of the vehicle. A `nil` value will remove the current driver. | None |
| `RemoveDriver()` | `None` | Removes the current driver from the vehicle. | None |
| `AddImpulse(Vector3)` | `None` | Adds an impulse force to the vehicle. | None |
| `GetPhysicsBodyOffset()` | [`Vector3`](vector3.md) | Returns the positional offset for the body collision of the vehicle. | None |
| `GetPhysicsBodyScale()` | [`Vector3`](vector3.md) | Returns the scale offset for the body collision of the vehicle. | None |
| `GetDriverPosition()` | [`Vector3`](vector3.md) | Returns the position relative to the vehicle at which the driver is attached while occupying the vehicle. | None |
| `GetDriverRotation()` | [`Rotation`](rotation.md) | Returns the rotation with which the driver is attached while occupying the vehicle. | None |
| `GetCenterOfMassOffset()` | [`Vector3`](vector3.md) | Returns the center of mass offset for this vehicle. | None |
| `SetCenterOfMassOffset(Vector3 offset)` | `None` | Sets the center of mass offset for this vehicle. This resets the vehicle state and may not behave nicely if called repeatedly or while in motion. | None |
| `ApplyDamage(Damage)` | `None` | Damages the vehicle, unless it is invulnerable. If its hit points reach 0 and it is not immortal, it dies. | Server-Only |
| `Die([Damage])` | `None` | Kills the vehicle, unless it is immortal. The optional Damage parameter is a way to communicate cause of death. | Server-Only |
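
As a sketch of how several of these functions combine, the following server snippet removes the current driver and pushes the empty vehicle forward. The `Vehicle` custom property and the impulse scale are assumptions made for this example.

```lua
-- Server script sketch. "Vehicle" is a custom property referencing a Vehicle.
local VEHICLE = script:GetCustomProperty("Vehicle"):WaitForObject()

function EjectAndPush()
    -- Remove the driver if the vehicle currently has one.
    if Object.IsValid(VEHICLE.driver) then
        VEHICLE:RemoveDriver()
    end
    -- Push the vehicle along its forward direction. Scaling by mass keeps the
    -- resulting velocity change similar across vehicles of different weights.
    local forward = VEHICLE:GetWorldTransform():GetForwardVector()
    VEHICLE:AddImpulse(forward * VEHICLE.mass * 2000)
end

EjectAndPush()
```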
## Events
| Event Name | Return Type | Description | Tags |
| ----- | ----------- | ----------- | ---- |
| `driverEnteredEvent` | [`Event`](event.md)<[`Vehicle`](vehicle.md) vehicle, [`Player`](player.md) player> | Fired when a new driver occupies the vehicle. | None |
| `driverExitedEvent` | [`Event`](event.md)<[`Vehicle`](vehicle.md) vehicle, [`Player`](player.md) player> | Fired when a driver exits the vehicle. | None |
| `damagedEvent` | [`Event`](event.md)<[`Vehicle`](vehicle.md) vehicle, [`Damage`](damage.md) damage> | Fired when the vehicle takes damage. | Server-Only |
| `diedEvent` | [`Event`](event.md)<[`Vehicle`](vehicle.md) vehicle, [`Damage`](damage.md) damage> | Fired when the vehicle dies. | Server-Only |
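
A minimal sketch that connects to these events from a server script, assuming the script is a direct child of the vehicle:

```lua
local VEHICLE = script.parent

VEHICLE.driverEnteredEvent:Connect(function(vehicle, player)
    print(player.name .. " started driving " .. vehicle.name)
end)

VEHICLE.driverExitedEvent:Connect(function(vehicle, player)
    print(player.name .. " stopped driving " .. vehicle.name)
end)

-- diedEvent is server-only and also reports the Damage that killed the vehicle.
VEHICLE.diedEvent:Connect(function(vehicle, damage)
    print(vehicle.name .. " was destroyed by " .. damage.amount .. " damage")
end)
```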
## Hooks
| Hook Name | Return Type | Description | Tags |
| ----- | ----------- | ----------- | ---- |
| `clientMovementHook` | [`Hook`](hook.md)<[`Vehicle`](vehicle.md) vehicle, `table` parameters> | Hook called when processing the driver's input. The `parameters` table contains "throttleInput", "steeringInput", and "isHandbrakeEngaged". This is only called on the driver's client. "throttleInput" is a number from -1.0 to 1.0, with positive values indicating forward input. "steeringInput" is the same, with positive values indicating turning to the right. "isHandbrakeEngaged" is a boolean. | Client-Only |
| `serverMovementHook` | [`Hook`](hook.md)<[`Vehicle`](vehicle.md) vehicle, `table` parameters> | Hook called on the server for a vehicle with no driver. This has the same parameters as `clientMovementHook`. | Server-Only |
| `damageHook` | [`Hook`](hook.md)<[`Vehicle`](vehicle.md) vehicle, [`Damage`](damage.md) damage> | Hook called when applying damage from a call to `ApplyDamage()`. The incoming damage may be modified or prevented by modifying properties on the `damage` parameter. | Server-Only |
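
As an illustrative sketch, a script in a client context could use `clientMovementHook` to modify the driver's input before it is applied. This sketch caps the throttle and inverts the steering; placing the script under the vehicle is an assumption of the example.

```lua
-- Client context script, assumed to be placed somewhere under the Vehicle.
local VEHICLE = script:FindAncestorByType("Vehicle")

VEHICLE.clientMovementHook:Connect(function(vehicle, params)
    -- Halve the available throttle in both directions.
    params.throttleInput = CoreMath.Clamp(params.throttleInput, -0.5, 0.5)
    -- Invert left/right steering.
    params.steeringInput = -params.steeringInput
end)
```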
## Examples
Example using:
### `maxSpeed`
### `tireFriction`
### `accelerationRate`
### `turnRadius`
Off road sections are an excellent way to encourage players to stay on the track. In this example, when a vehicle with a driver enters a trigger, it will be slowed down and given more traction. Once a vehicle exits the trigger, it will drive at its original speed. Note: [Treaded Vehicles](../api/treadedvehicle.md) do not support `turnRadius`.
```lua
-- Get the trigger object that will represent the off road area
local propTrigger = script:GetCustomProperty("Trigger"):WaitForObject()
-- This function will be called whenever an object enters the trigger
function OnEnter(trigger, other)
-- Check if a vehicle has entered the trigger and if that vehicle is currently not off road
if(other:IsA("Vehicle") and not other.serverUserData.offRoad) then
-- Set the off road status of the vehicle to "true"
other.serverUserData.offRoad = true
-- Store the original specifications of the vehicle. The "serverUserData" properties
-- are used in the case that other road obstacles are modifying the specifications of the vehicle
other.serverUserData.originalTireFriction = other.serverUserData.originalTireFriction or other.tireFriction
other.serverUserData.originalMaxSpeed = other.serverUserData.originalMaxSpeed or other.maxSpeed
other.serverUserData.originalAccelerationRate = other.serverUserData.originalAccelerationRate or other.accelerationRate
-- turnRadius only supported for FourWheeledVehicle type.
if other:IsA("FourWheeledVehicle") then
other.serverUserData.originalTurnRadius = other.serverUserData.originalTurnRadius or other.turnRadius
end
-- Increase the tire friction of the vehicle by 900%, this will give the vehicle more traction
other.tireFriction = other.tireFriction * 10.0
-- Decrease the maximum speed of the vehicle by 90%
other.maxSpeed = other.maxSpeed * 0.1
-- Decrease the acceleration of the vehicle by 80%
other.accelerationRate = other.accelerationRate * 0.2
-- Shrink the turn radius by 80%, this will allow the vehicle to make tighter turns
-- turnRadius only supported for FourWheeledVehicle type.
if other:IsA("FourWheeledVehicle") then
other.turnRadius = other.tireFriction * 0.2
end
end
end
-- Bind the "OnEnter" function to the "beginOverlapEvent" of the "propTrigger" so that
-- when an object enters the "propTrigger" the "OnEnter" function is executed
propTrigger.beginOverlapEvent:Connect(OnEnter)
-- This function will be called whenever an object enters the trigger
function OnExit(trigger, other)
-- If a vehicle has entered the trigger and the vehicle is off road, then reset
-- the vehicle specifications (maximum speed, acceleration, turning radius)
if(other:IsA("Vehicle") and Object.IsValid(other.driver) and other.serverUserData.offRoad) then
-- Set the off road status of the vehicle to "false"
other.serverUserData.offRoad = false
-- Reset the vehicle specifications to the values before the vehicle
-- had entered the boost pad
other.maxSpeed = other.serverUserData.originalMaxSpeed
-- turnRadius only supported for FourWheeledVehicle type.
if other:IsA("FourWheeledVehicle") then
other.turnRadius = other.serverUserData.originalTurnRadius
end
other.accelerationRate = other.serverUserData.originalAccelerationRate
end
end
-- Bind the "OnExit" function to the "endOverlapEvent" of the "propTrigger" so that
-- when an object exits the "propTrigger" the "OnExit" function is executed
propTrigger.endOverlapEvent:Connect(OnExit)
```
See also: [event:Trigger.beginOverlapEvent](event.md) | [CoreObject.serverUserData](coreobject.md)
---
Example using:
### `tireFriction`
### `maxSpeed`
### `driver`
In this example, entering the trigger will act like an oil slick. Any vehicles that enter the trigger will spin out of control.
```lua
-- Get the trigger that will represent the oil slick area
local propTrigger = script:GetCustomProperty("Trigger"):WaitForObject()
-- This function will be called whenever an object enters the trigger
function OnEnter(trigger, other)
-- Check if the object entering is a vehicle and if the vehicle is currently not in an oil slick
if(other:IsA("Vehicle") and not other.serverUserData.inOil) then
-- Set the oil slick status of the vehicle to "true" so that any more oil slicks that the vehicle passes
-- over do not affect the vehicle.
other.serverUserData.inOil = true
-- Store the original max speed of the vehicle
local originalMaxSpeed = other.serverUserData.originalMaxSpeed or other.maxSpeed
-- Store the original tire friction of the vehicle
local originalTireFriction = other.serverUserData.originalTireFriction or other.tireFriction
-- Set the maximum speed of the vehicle to 0 to stop it from moving
other.maxSpeed = 1000
-- Set the tire friction of the wheels to 0 so that the car can easily spin
other.tireFriction = 0.5
-- Make the vehicle spin for 2 seconds
other:SetLocalAngularVelocity(Vector3.New(0, 0, 999))
Task.Wait(2)
-- Reset the specifications of the vehicle to the values before the vehicle entered the oil slick
other.maxSpeed = originalMaxSpeed
-- Resetting the "tireFriction" will cause the vehicle to stop spinning
other.tireFriction = originalTireFriction
-- Reset the in oil status of the vehicle so that the vehicle can be affected by another oil slick.
other.serverUserData.inOil = false
end
end
-- Bind the "OnEnter" function to the "beginOverlapEvent" of the "propTrigger" so that
-- when an object enters the "propTrigger" the "OnEnter" function is executed
propTrigger.beginOverlapEvent:Connect(OnEnter)
```
See also: [CoreObject.SetLocalAngularVelocity](coreobject.md) | [event:Trigger.beginOverlapEvent](event.md)
---
Example using:
### `damagedEvent`
### `diedEvent`
### `driver`
Vehicles implement the DamageableObject interface. As such, they have all the properties and events from that type. In this example, we add a script to a vehicle that causes it to pass to the driver some of the damage it receives. By default, a vehicle's damageable properties are configured to make them immune-- Change them for this example to work.
```lua
local VEHICLE = script:FindAncestorByType("Vehicle")
local CHANCE_TO_PASS_DAMAGE = 0.5
local DAMAGE_REDUCTION = 0.2
local ON_DEATH_DIRECT_DAMAGE_TO_DRIVER = 75
function ApplyDamageToDriver(newAmount, vehicleDamage)
-- Create new damage object for the player
local damage = Damage.New(newAmount)
-- Copy properties from the vehicle's damage object
damage.reason = vehicleDamage.reason
damage.sourceAbility = vehicleDamage.sourceAbility
damage.sourcePlayer = vehicleDamage.sourcePlayer
damage:SetHitResult(vehicleDamage:GetHitResult())
local player = VEHICLE.driver
-- If we think the player will die from this damage, eject them and
-- wait a bit, so they will ragdoll correctly
if player.hitPoints <= damage.amount then
VEHICLE:RemoveDriver()
Task.Wait(0.15)
end
-- Apply it
player:ApplyDamage(damage)
end
function OnDamaged(_, damage)
if damage.amount <= 0 then return end
if not Object.IsValid(VEHICLE.driver) then return end
-- Chance to apply damage to the player or prevent it completely
if math.random() >= CHANCE_TO_PASS_DAMAGE then return end
-- Reduction of the original damage amount
local newAmount = damage.amount * (1 - DAMAGE_REDUCTION)
newAmount = math.ceil(newAmount)
-- Apply reduced damage
ApplyDamageToDriver(newAmount, damage)
end
function OnDied(_, damage)
if not Object.IsValid(VEHICLE.driver) then return end
local player = VEHICLE.driver
-- Apply the on-death damage
ApplyDamageToDriver(ON_DEATH_DIRECT_DAMAGE_TO_DRIVER, damage)
end
VEHICLE.damagedEvent:Connect(OnDamaged)
VEHICLE.diedEvent:Connect(OnDied)
```
See also: [DamageableObject.damagedEvent](damageableobject.md) | [CoreObject.FindAncestorByType](coreobject.md) | [Damage.New](damage.md) | [Player.ApplyDamage](player.md) | [Object.IsValid](object.md)
---
Example using:
### `driverEnteredEvent`
### `driverExitedEvent`
A common situation could be a game that has both weapons and vehicles. In this example, when a player enters the vehicle all their abilities are disabled. When they exit the vehicle their abilities are re-enabled. This script expects to be placed as a child of the vehicle.
```lua
local VEHICLE = script.parent
function OnDriverEntered(vehicle, player)
for _,ability in ipairs(player:GetAbilities()) do
ability.isEnabled = false
end
end
function OnDriverExited(vehicle, player)
for _,ability in ipairs(player:GetAbilities()) do
ability.isEnabled = true
end
end
VEHICLE.driverEnteredEvent:Connect(OnDriverEntered)
VEHICLE.driverExitedEvent:Connect(OnDriverExited)
```
See also: [CoreObject.parent](coreobject.md) | [Player.GetAbilities](player.md) | [Ability.isEnabled](ability.md)
---
Example using:
### `driverEnteredEvent`
### `driverExitedEvent`
### `accelerationRate`
### `maxSpeed`
### `tireFriction`
In this example, when the driver of the vehicle presses the spacebar, the vehicle launches up and forward a slight amount in the direction the vehicle is facing.
```lua
local VEHICLE = script:FindAncestorByType("Vehicle")
local bindingPressListener = nil
function OnBindingPressed(player, binding)
if binding == "ability_extra_17" then
-- Vector3 (forward, 0, up) rotated into the world space of the vehicle
local impulseVector = VEHICLE:GetWorldRotation() * Vector3.New(1000, 0, 1000)
VEHICLE:AddImpulse(impulseVector * VEHICLE.mass)
end
end
function OnDriverEntered(vehicle, player)
bindingPressListener = player.bindingPressedEvent:Connect(OnBindingPressed)
end
function OnDriverExited(vehicle, player)
if bindingPressListener and bindingPressListener.isConnected then
bindingPressListener:Disconnect()
end
end
VEHICLE.driverEnteredEvent:Connect(OnDriverEntered)
VEHICLE.driverExitedEvent:Connect(OnDriverExited)
```
See also: [CoreObject.AddImpulse](coreobject.md) | [Player.bindingPressedEvent](player.md) | [EventListener.Disconnect](eventlistener.md) | [Vector3.New](vector3.md)
---
Example using:
### `driverExitedEvent`
In this example, players take damage when they exit a vehicle that is in motion. The amount of damage taken is variable, depending on how fast the vehicle is going. This script expects to be placed as a child of the vehicle.
```lua
local VEHICLE = script.parent
local MAX_SAFE_SPEED = 400
local LETHAL_SPEED = 4000
function OnDriverExited(vehicle, player)
if MAX_SAFE_SPEED >= LETHAL_SPEED then return end
local speed = vehicle:GetVelocity().size
print("Exit speed = " .. speed)
if not player.isDead and speed > MAX_SAFE_SPEED then
local t = (speed - MAX_SAFE_SPEED) / (LETHAL_SPEED - MAX_SAFE_SPEED)
local amount = CoreMath.Lerp(0, player.maxHitPoints, t)
if amount > 0 then
local damage = Damage.New(amount)
damage.reason = DamageReason.MAP
player:ApplyDamage(damage)
end
end
end
VEHICLE.driverExitedEvent:Connect(OnDriverExited)
```
See also: [CoreObject.GetVelocity](coreobject.md) | [Vector3.size](vector3.md) | [Player.isDead](player.md) | [CoreMath.Lerp](coremath.md) | [Damage.New](damage.md)
---
Example using:
### `serverMovementHook`
This script leverages the movement hook to implement a crude AI that autopilots a vehicle. It avoids obstacles and if it gets stuck, it backs up. It expects to be added as a server script under the vehicle's hierarchy. The script's position inside the vehicle is important, it should be at the front bumper, almost touching the vehicle, and its X-axis should point forward, in the direction the vehicle moves.
```lua
local VEHICLE = script:FindAncestorByType("Vehicle")
if not VEHICLE then return end
local reverseClock = 0
local forwardClock = 0
local deltaTime = 0
local averageSpeed = 100
function OnMovementHook(vehicle, params)
-- Disable the handbrake
params.isHandbrakeEngaged = false
-- Pre-process information about the script's position and rotation
local pos = script:GetWorldPosition()
local qRotation = Quaternion.New(script:GetWorldRotation())
local forwardV = qRotation:GetForwardVector()
local rightV = qRotation:GetRightVector() * 120
local speed = VEHICLE:GetVelocity().size
averageSpeed = CoreMath.Lerp(averageSpeed, speed, 0.1)
speed = math.max(speed * 1.2, 120)
local velocity = forwardV * speed
-- Cast 3 rays forward to see if they hit something. The decisions to
-- steer and accelerate are based on the results of these:
local centerHit = World.Raycast(pos, pos + velocity)
local leftHit = World.Raycast(pos - rightV, pos - rightV + velocity)
local rightHit = World.Raycast(pos + rightV, pos + rightV + velocity)
-- Reverse logic in case the vehicle gets stuck
if forwardClock > 0 then
forwardClock = forwardClock - deltaTime
params.throttleInput = 1 -- Press the gas
elseif reverseClock <= 0 and averageSpeed < 30 then
-- Randomize the reverse duration in case multiple cars get stuck on each other
reverseClock = 1 + math.random()
end
if reverseClock > 0 then
reverseClock = reverseClock - deltaTime
params.throttleInput = -1 -- Go in reverse
if reverseClock <= 0 then
forwardClock = 1
end
elseif centerHit then
params.throttleInput = 0 -- Let go of gas
else
params.throttleInput = 1 -- Press the gas
end
-- Steer left/right
if reverseClock > 0 then
params.steeringInput = 1 -- Right (reverse)
elseif rightHit then
params.steeringInput = -1 -- Left
elseif leftHit then
params.steeringInput = 1 -- Right
else
params.steeringInput = 0 -- Don't steer
end
end
function Tick(dt)
deltaTime = dt
end
VEHICLE.serverMovementHook:Connect(OnMovementHook)
```
See also: [CoreObject.GetVelocity](coreobject.md) | [World.Raycast](world.md) | [Quaternion.GetForwardVector](quaternion.md) | [Vector3.size](vector3.md) | [CoreMath.Lerp](coremath.md)
---
Example using:
### `AddImpulse`
### `maxSpeed`
### `serverUserData`
In this example, when a vehicle enters a trigger, the vehicle will be launched forward for 0.5 seconds.
```lua
-- Get the trigger object that will represent the boost pad
local propTrigger = script:GetCustomProperty("Trigger"):WaitForObject()
-- This function will be called whenever an object enters the trigger
function OnEnter(trigger, other)
-- Check if a vehicle has entered the trigger
if(other:IsA("Vehicle")) then
-- Get a vector that represents the direction the boost pad is pointing in
local direction = trigger:GetWorldRotation() * Vector3.RIGHT
-- Apply a force to the vehicle to give the vehicle sudden increase in speed
other:AddImpulse(direction * 2000000)
-- Check if the maximum speed of the vehicle has already been increased
if(not other.serverUserData.isBoosting) then
other.serverUserData.isBoosting = true
-- Store the original maximum speed of the vehicle. The "originalMaxSpeed" property
-- is used in case any other road obstacles are modifying the "maxSpeed" of the vehicle
local originalMaxSpeed = other.serverUserData.originalMaxSpeed or other.maxSpeed
-- Increase the maximum speed of the vehicle by 300%`
other.maxSpeed = originalMaxSpeed * 4.0
-- Wait 0.5 seconds before returning the vehicle to its original speed
Task.Wait(0.5)
-- Return the vehicle to its original speed
other.maxSpeed = originalMaxSpeed
-- Set "isBoosting" to false so that the speed increase of boost pads can be activated
-- when the vehicle passes over another boost pad
other.serverUserData.isBoosting = false
end
end
end
-- Bind the "OnEnter" function to the "beginOverlapEvent" of the "propTrigger" so that
-- when an object enters the "propTrigger" the "OnEnter" function is executed
propTrigger.beginOverlapEvent:Connect(OnEnter)
```
See also: [event:Trigger.beginOverlapEvent](event.md) | [Rotation * Vector3](rotation.md)
---
Example using:
### `SetDriver`
In this example, a vehicle is spawned for each player at the moment they join the game. Also, when they leave the game we destroy the vehicle. For best results, delete the `Enter Trigger` that usually comes with vehicles and set the vehicle's `Exit Binding` to `None`.
```lua
local VEHICLE = script:GetCustomProperty("VehicleTemplate")
local playerVehicles = {}
function OnPlayerJoined(player)
local pos = player:GetWorldPosition()
local vehicleInstance = World.SpawnAsset(VEHICLE, {position = pos})
vehicleInstance:SetDriver(player)
playerVehicles[player] = vehicleInstance
end
function OnPlayerLeft(player)
local vehicle = playerVehicles[player]
if Object.IsValid(vehicle) then
vehicle:Destroy()
end
end
Game.playerJoinedEvent:Connect(OnPlayerJoined)
Game.playerLeftEvent:Connect(OnPlayerLeft)
```
See also: [CoreObject.Destroy](coreobject.md) | [Player.GetWorldPosition](player.md) | [World.SpawnAsset](world.md) | [Object.IsValid](object.md) | [Game.playerJoinedEvent](game.md)
---
Example using:
### `SetDriver`
### `driverExitedEvent`
In some games it will be important to override the built-in trigger behavior to add more gameplay. In this example, the vehicle belongs to a specific player (Bot1). If another player tries to drive it they will receive a message saying the car doesn't belong to them. This script expects to be placed as a child of the vehicle.
```lua
local VEHICLE = script:FindAncestorByType("Vehicle")
local TRIGGER = script:GetCustomProperty("EnterTrigger"):WaitForObject()
local OWNER = "Bot1"
function OnInteracted(trigger, player)
if player.name == OWNER then
VEHICLE:SetDriver(player)
TRIGGER.isEnabled = false
else
Chat.BroadcastMessage("Not your car.", {players = player})
end
end
function OnDriverExited(player)
TRIGGER.isEnabled = true
end
TRIGGER.interactedEvent:Connect(OnInteracted)
VEHICLE.driverExitedEvent:Connect(OnDriverExited)
```
See also: [CoreObject.FindAncestorByType](coreobject.md) | [CoreObjectReference.WaitForObject](coreobjectreference.md) | [Player.name](player.md) | [Chat.BroadcastMessage](chat.md) | [Trigger.interactedEvent](trigger.md)
---
Example using:
### `accelerationRate`
### `maxSpeed`
### `tireFriction`
This example takes vehicle stats (acceleration, max speed and tire friction) and normalizes them to rating values between 1 and 5. This could be used, for example, in the UI of a vehicle selection screen to show how vehicles compare to each other in their various stats. When the script runs it searches the game for all vehicles that exist and prints their ratings to the Event Log.
```lua
local ACCELE_MIN = 400
local ACCELE_MAX = 4000
local TOP_SPEED_MIN = 2000
local TOP_SPEED_MAX = 20000
local HANDLING_MIN = 0.5
local HANDLING_MAX = 10
local RATING_LEVELS = 5
function RateStat(value, min, max)
if value >= max then
return RATING_LEVELS
end
if value > min and max > min then
local p = (value - min) / (max - min)
local rating = p * RATING_LEVELS
rating = math.floor(rating) + 1
return rating
end
return 1
end
function RateVehicle(vehicle)
local accele = RateStat(vehicle.accelerationRate, ACCELE_MIN, ACCELE_MAX)
local topSpeed = RateStat(vehicle.maxSpeed, TOP_SPEED_MIN, TOP_SPEED_MAX)
local handling = RateStat(vehicle.tireFriction, HANDLING_MIN, HANDLING_MAX)
-- Print vehicle ratings to the Event Log
print(vehicle.name)
print("Acceleration: " .. accele)
print("Top Speed: " .. topSpeed)
print("Handling: " .. handling)
if vehicle:IsA("TreadedVehicle") then
print("Type: Treaded, " .. vehicle.turnSpeed)
-- Check to make sure the vehicle type is four wheeled, because treaded vehicles
-- do not have a turnRadius property.
elseif vehicle:IsA("FourWheeledVehicle") then
print("Type: 4-wheel, " .. vehicle.turnRadius)
else
print("Type: Unknown")
end
print("")
end
-- Search for all vehicles and rate them
for _,vehicle in ipairs(World.FindObjectsByType("Vehicle")) do
RateVehicle(vehicle)
end
```
See also: [TreadedVehicle.turnSpeed](treadedvehicle.md) | [FourWheeledVehicle.turnRadius](fourwheeledvehicle.md) | [World.FindObjectsByType](world.md) | [CoreObject.name](coreobject.md)
---
| 42.091592 | 501 | 0.720615 | eng_Latn | 0.950751 |
bb6169db348a92bab3699fe336033bc4f907e52c | 15,004 | markdown | Markdown | _posts/2007-01-26-system-and-method-for-managing-data-using-static-lists.markdown | api-evangelist/patents-2007 | da723589b6977a05c0119d5476325327da6c5a5c | [
"Apache-2.0"
] | 1 | 2017-11-15T11:20:53.000Z | 2017-11-15T11:20:53.000Z | _posts/2007-01-26-system-and-method-for-managing-data-using-static-lists.markdown | api-evangelist/patents-2007 | da723589b6977a05c0119d5476325327da6c5a5c | [
"Apache-2.0"
] | null | null | null | _posts/2007-01-26-system-and-method-for-managing-data-using-static-lists.markdown | api-evangelist/patents-2007 | da723589b6977a05c0119d5476325327da6c5a5c | [
"Apache-2.0"
] | 2 | 2019-10-31T13:03:32.000Z | 2020-08-13T12:57:02.000Z | ---
title: System and method for managing data using static lists
abstract: A method and system are provided in which static lists facilitate arbitrary grouping of items of data independent of their locations and in ways that are meaningful to the user. A static list is a set of items defined by a root item, a direction, and the entry relationships with that root item in that direction. The static list also defines the properties that each entry relationship in the list is required to have. Verbs are provided to manage a static list. A verb is an action that may be performed on the items in the static list, and includes, among others, move, copy, add, remove, and delete. A view is provided to specify characteristics for displaying data from a static list, including visibility, order, and formatting, among other characteristics.
url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=07711754&OS=07711754&RS=07711754
owner: Microsoft Corporation
number: 07711754
owner_city: Redmond
owner_country: US
publication_date: 20070126
---
This application is a continuation of U.S. application Ser. No. 10 693 666 filed on Oct. 24 2003 entitled SYSTEM AND METHOD FOR MANAGING DATA USING STATIC LISTS and incorporated herein by reference.
In general the present invention relates to data storage systems and in particular to systems and methods for managing data using static lists.
As the use of electronic media to store text music pictures and other types of data grows and the restrictions on data storage capacities lessen computer users find themselves faced with enormous numbers of files to manage. Conventional file systems such as those based on a file allocation table or FAT file system can make management of files difficult. For example the traditional directory access to files that is provided with conventional file systems assumes that the users wishes to maintain their files in a hierarchical directory tree. However besides being location dependent a hierarchical organization may not be the most advantageous way to access the files from the user s point of view.
In the context of the Windows operating system user interface one technique for making access to files easier is the shortcut. A shortcut that provides a link to a file may be created on the desktop or in a folder and is a quick way to start a program or open a file or folder without having to go to its permanent location. But shortcuts may not be reliable since they are not updated to reflect changes in the location or status of the underlying file. For example moving the file to a different directory results in an error when accessing the shortcut.
Another technique for making access to files easier is the playlist. Media players offer users playlists as a way to organize certain types of files for later playback. For example in the Windows Media Player the playlist contains references to music files for playback through the media player in a designated order. But playlists suffer from the same drawback as shortcuts in that the references in the playlist are not updated to reflect changes in the location or status of the underlying files. If a music file is moved or deleted the user must hunt through all of his or her playlists to update or remove the outdated references.
Both the shortcut and playlist model of accessing files are further limited by their inability to provide to the user with alternative ways to access items other than through another folder or in a certain order.
To overcome the above described problems a system method and computer accessible medium for managing data using static lists are provided. Static lists facilitate arbitrary grouping of items of data independent of their locations and in ways that are meaningful to the user.
In accordance with one aspect of the present invention a static list is a set of items defined by a root item a direction and the entry relationships with that root item in that direction. The items in the set are determined by following the entry relationships with the root item. The direction is either to or from the root item depending on whether the root item is the target or the source of the entry relationship. The static list also defines the properties that each entry relationship in the list is required to have.
In accordance with another aspect of the present invention verbs are provided to manage a static list. A verb is an action that may be performed on the items in the static list and includes among others move copy add remove and delete. The actions performed on the items include actions performed on the entry relationships between the item and the root item.
In accordance with a further aspect of the present invention a view is provided to specify characteristics for displaying data from a static list including visibility order and formatting among other characteristics.
In accordance with yet another aspect of the present invention using static lists the user is able to propagate certain security attributes to the items in the list so that others may access them via the list. The user may also add other information to the list as metadata to enhance the usefulness of the list and the items contained therein.
In accordance with a still further aspect of the present invention using static lists each item in the list is automatically managed so that the references to the data are always valid even when the location status or other characteristic of the data changes.
In accordance with yet other aspects of the present invention a computer accessible medium for managing data using static lists is provided. The computer accessible medium comprises data and computer executable components to create and manage static lists. The data defines the static list and the items contained therein. The computer executable components are capable of performing actions generally consistent with the above described method.
The following discussion is intended to provide a brief general description of a computing system suitable for implementing various features of the invention. While the computing system will be described in the general context of a personal computer usable in a distributed computing environment where complementary tasks are performed by remote computing devices linked together through a communication network those skilled in the art will appreciate that the invention may be practiced with many other computer system configurations including multiprocessor systems minicomputers mainframe computers and the like. In addition to the more conventional computer systems described above those skilled in the art will recognize that the invention may be practiced on other computing devices including laptop computers tablet computers personal digital assistants PDAs and other devices upon which computer software or other digital content is installed.
While aspects of the invention may be described in terms of programs executed by applications in conjunction with a personal computer those skilled in the art will recognize that those aspects also may be implemented in combination with other program modules. Generally program modules include routines programs components data structures etc. which perform particular tasks or implement particular abstract data types.
A relationship is an association between two items. Each relationship refers to two items called a source or a target depending on direction of the relationship . Source items originate the relationship and target items receive the relationship .
An extension is similar to an item in that it contains properties defined by a type . But extensions are associated with exactly one item and have different types .
The type defines the structure of an item relationship or extension by defining its properties. Since types can be used with items relationships or extensions they are commonly referred to as item types relationship types or extension types.
Any combination of an item a relationship type and a direction determines a static set . For example the set of authors of a document can be found by following author relationships from the document root item as can the set of document authored by a person by following the same relationship in the other direction.
A number of program modules may be stored in the drives and RAM including an operating system one or more application programs other program modules such as the extensions and interfaces of the present invention and program data including the command item and insert location data of the present invention. A user may enter commands and information into the personal computer through input devices such as a keyboard or a mouse . Other input devices not shown may include a microphone touch pad joystick game pad satellite dish scanner or the like. These and other input devices are often connected to the processing unit through a user input interface that is coupled to the system bus but may be connected by other interfaces not shown such as a game port or a universal serial bus USB . A display device is also connected to the system bus via a display subsystem that typically includes a graphics display interface not shown and a code module sometimes referred to as a display driver to interface with the graphics display interface. While illustrated as a stand alone device the display device could be integrated into the housing of the personal computer . Furthermore in other computing systems suitable for implementing the invention such as a PDA the display could be overlaid with a touch screen. In addition to the elements illustrated in client devices also typically include other peripheral output devices not shown such as speakers or printers.
The personal computer may operate in a networked environment using logical connections to one or more remote computers such as a remote computer . The remote computer may be a server a router a peer device or other common network node and typically includes many or all of the elements described relative to the personal computer . The logical connections depicted in include a local area network LAN and a wide area network WAN . The LAN and WAN may be wired wireless or a combination thereof. Such networking environments are commonplace in offices enterprise wide computer networks Intranets and the Internet.
When used in a LAN networking environment the personal computer is connected to the LAN through a network interface . When used in a WAN networking environment the personal computer typically includes a modem or other means for establishing communications over the WAN such as the Internet. The modem which may be internal or external is connected to the system bus via the user input interface . In a networked environment program modules depicted relative to the personal computer or portions thereof may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communication link between the computers may be used. In addition the LAN and WAN may be used as a source of nonvolatile storage for the system.
In one embodiment processing continues at process block where the user can elect to apply a previously defined View to the list in order to display at process block the list contents in a user interface such as the user interface depicted in .
The XML file further permits the processing system to store and track user defined arbitrary metadata to represent the properties of the items and the relationships . In such an implementation the properties are identified by their assigned globally unique identification GUID plus the property identification also referred to in the Windows operating system as the PROPERTYKEY. The metadata may also be advantageously employed to propagate certain security features for the static list to the referenced items .
In an example scenario a user wants to produce a list of documents used to give presentations to clients about his company s new product a brake pad. The documents include various Word documents that describe the brake pad technology in depth a PowerPoint presentation pictures of the brake pads and even some video files shown the brake pads in action using an infrared camera. The user gives the presentation to different clients having different needs cares and wants. As a result the user wishes to customize the presentation. Using static lists the user can create different static lists each with references to the same items but in a different order to tune the presentation to the audience . The user can also include different important properties. For example for one client the sales price on all items is shown in the clear and may even be specific to a client whereas for other clients the sales price is masked. In yet another example the user may include properties that reveal the latest details of guarantees and awards they have won.
In the example scenario the static lists are maintained automatically. When the user deletes one of the documents from one of the lists the document is still available in all of the other lists where it is referenced. On the other hand when the user deletes one of the documents from the folder where it resides all lists referencing that document are updated to remove the reference so that the reference does not display as a dead link.
As a result of the foregoing the user can advantageously create an unlimited number of static lists customized for a particular audience and yet avoid the hassles of managing all of the references in those lists.
While the presently preferred embodiments of the invention have been illustrated and described it will be appreciated that various changes may be made therein without departing from the spirit and scope of the invention. For example it should be noted that either of the above described implementations may be employed on a processing system regardless of what type of file system is employed. It may be advantageous to represent a static list as an XML file even on processing systems capable of using containers where interoperability with systems using more conventional file systems is desired. Moreover in other embodiments regardless of the type of file system employed the items in the static list may be presented to the user using any user interface including in a folder of the Windows Shell user interface. As various operations are performed on the static list or the items in the list the operations are either handled by the folder or delegated to a target of the referenced item i.e. the target item.
While the preferred embodiment of the invention has been illustrated and described it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.
| 214.342857 | 1,462 | 0.820648 | eng_Latn | 0.999965 |
bb61b04402a37579acd90a01ecac8593d6d5a430 | 2,651 | md | Markdown | docs/analysis-services/instances/install-windows/deploying-sql-server-2016-powerpivot-and-power-view-in-sharepoint-2016.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/instances/install-windows/deploying-sql-server-2016-powerpivot-and-power-view-in-sharepoint-2016.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/instances/install-windows/deploying-sql-server-2016-powerpivot-and-power-view-in-sharepoint-2016.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Implantando o SQL Server 2016 PowerPivot e Power View no SharePoint 2016 | Microsoft Docs
ms.custom:
ms.date: 03/01/2017
ms.prod: analysis-services
ms.prod_service: analysis-services
ms.service:
ms.component:
ms.reviewer:
ms.suite: pro-bi
ms.technology:
ms.tgt_pltfrm:
ms.topic: article
ms.assetid: 2d0a9834-db91-403f-847c-79a8f49fc916
caps.latest.revision:
author: Minewiskan
ms.author: owend
manager: kfile
ms.workload: Inactive
ms.openlocfilehash: 166ce8f52401d5cd5cad70e19aea37871d57afe4
ms.sourcegitcommit: 7519508d97f095afe3c1cd85cf09a13c9eed345f
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 02/15/2018
---
# <a name="deploying-sql-server-2016-powerpivot-and-power-view-in-sharepoint-2016"></a>Implantação do SQL Server 2016 PowerPivot e Power View no SharePoint 2016
[!INCLUDE[ssas-appliesto-sqlas](../../../includes/ssas-appliesto-sqlas.md)]
**Resumo:** este white paper fornece aos administradores e arquitetos do SharePoint instruções passo a passo detalhadas para implantar e configurar um ambiente de demonstração de BI da Microsoft, com base em versões de Preview do SharePoint Server 2016, Office Online Server e a pilha de BI do SQL Server 2016 para SharePoint 2016. A seguir uma breve introdução às importantes alterações de arquitetura e dependências correspondentes do sistema, descreve os requisitos de configuração e de software e um caminho de implantação recomendada para habilitar e verificar os recursos de BI em três etapas principais. Este white paper também aborda problemas conhecidos que existem nas versões SharePoint Server 2016 Beta 2, visualização do servidor online do Office e SQL Server 2016 CTP 3.1 e sugere soluções alternativas apropriadas. Essas soluções alternativas não serão mais necessárias as nas versões finais dos produtos. Procure uma versão atualizada deste white paper ao implantar versões de RTM.
**Escritor:** Kay Unkroth
**Revisores técnicos:** Adam Saxton, Anne Zorner, Craig Guyer, Frank Weigel, Gregory Appel, Heidi Steen, Jason Haak, Kasper de Jonge, Kirk Stark, Klaus Sobel, Mike Plumley, Mike Taghizadeh, Patrick Wheeler, Riccardo Muti, Steve Hord
**Publicação:** dezembro de 2015
**Aplica-se a:** SQL Server 2016 CTP3.1, visualização do SharePoint 2016 e do Servidor do Office Online
Para revisar o documento, baixe o documento do Word [Implantando o SQL Server 2016 PowerPivot e Power View no SharePoint 2016](http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/Deploying%20SQL%20Server%202016%20PowerPivot%20and%20Power%20View%20in%20SharePoint%202016.docx) .
| 64.658537 | 1,001 | 0.789513 | por_Latn | 0.966536 |
bb61d45fefd1279fc0b2c59d454dee6a3a85c9b9 | 53,090 | md | Markdown | site/docs/guide.md | KanKanTAD/bazel | 5ebc1fbb8c9073e488db74bc90442054d64e6f75 | [
"Apache-2.0"
] | 1 | 2020-12-27T06:48:04.000Z | 2020-12-27T06:48:04.000Z | site/docs/guide.md | KanKanTAD/bazel | 5ebc1fbb8c9073e488db74bc90442054d64e6f75 | [
"Apache-2.0"
] | null | null | null | site/docs/guide.md | KanKanTAD/bazel | 5ebc1fbb8c9073e488db74bc90442054d64e6f75 | [
"Apache-2.0"
] | null | null | null | ---
layout: documentation
title: User's guide
---
# A user's guide to Bazel
To run Bazel, go to your base [workspace](build-ref.html#workspace) directory
or any of its subdirectories and type `bazel`.
<pre>
% bazel help
[Bazel release bazel-<<i>version</i>>]
Usage: bazel <command> <options> ...
Available commands:
<a href='user-manual.html#analyze-profile'>analyze-profile</a> Analyzes build profile data.
<a href='user-manual.html#aquery'>aquery</a> Executes a query on the <a href='#analysis-phase'>post-analysis</a> action graph.
<a href='#build'>build</a> Builds the specified targets.
<a href='user-manual.html#canonicalize'>canonicalize-flags</a> Canonicalize Bazel flags.
<a href='user-manual.html#clean'>clean</a> Removes output files and optionally stops the server.
<a href='user-manual.html#query'>cquery</a> Executes a <a href='#analysis-phase'>post-analysis</a> dependency graph query.
<a href='user-manual.html#dump'>dump</a> Dumps the internal state of the Bazel server process.
<a href='user-manual.html#help'>help</a> Prints help for commands, or the index.
<a href='user-manual.html#info'>info</a> Displays runtime info about the bazel server.
<a href='#fetch'>fetch</a> Fetches all external dependencies of a target.
<a href='user-manual.html#mobile-install'>mobile-install</a> Installs apps on mobile devices.
<a href='user-manual.html#query'>query</a> Executes a dependency graph query.
<a href='user-manual.html#run'>run</a> Runs the specified target.
<a href='user-manual.html#shutdown'>shutdown</a> Stops the Bazel server.
<a href='user-manual.html#test'>test</a> Builds and runs the specified test targets.
<a href='user-manual.html#version'>version</a> Prints version information for Bazel.
Getting more help:
bazel help <command>
Prints help and options for <command>.
bazel help <a href='user-manual.html#startup_options'>startup_options</a>
Options for the JVM hosting Bazel.
bazel help <a href='#target-patterns'>target-syntax</a>
Explains the syntax for specifying targets.
bazel help info-keys
Displays a list of keys used by the info command.
</pre>
The `bazel` tool performs many functions, called commands. The most commonly
used ones are `bazel build` and `bazel test`. You can browse the online help
messages using `bazel help`.
<a id="build"></a>
## Building programs with Bazel
### The `build` command
Type `bazel build` followed by the name of the [target](#target-patterns) you
wish to build. Here's a typical session:
```
% bazel build //foo
INFO: Analyzed target //foo:foo (14 packages loaded, 48 targets configured).
INFO: Found 1 target...
Target //foo:foo up-to-date:
bazel-bin/foo/foo
INFO: Elapsed time: 9.905s, Critical Path: 3.25s
INFO: Build completed successfully, 6 total actions
```
Bazel prints progress messages as it loads all the packages in the
transitive closure of dependencies of the requested target, then analyzes them
for correctness, creates the build actions, and finally executes the compilers
and other tools of the build.
Bazel prints progress messages during the [execution phase](#execution-phase) of
the build, showing the current build step (compiler, linker, etc.) that is being
started, and the number completed over the total number of build actions. As the
build starts, the total number of actions often increases as Bazel discovers
the entire action graph, but it usually stabilizes within a few
seconds.
At the end of the build Bazel prints which targets were requested, whether or
not they were successfully built, and if so, where the output files can be
found. Scripts that run builds can reliably parse this output; see
[`--show_result`](user-manual.html#flag--show_result) for more details.
Typing the same command again:
```
% bazel build //foo
INFO: Analyzed target //foo:foo (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //foo:foo up-to-date:
bazel-bin/foo/foo
INFO: Elapsed time: 0.144s, Critical Path: 0.00s
INFO: Build completed successfully, 1 total action
```
We see a "null" build: in this case, there are no packages to re-load, since
nothing has changed, and no build steps to execute. (If something had changed in
"foo" or some of its dependencies, resulting in the re-execution of some build
actions, we would call it an "incremental" build, not a "null" build.)
Before you can start a build, you will need a Bazel workspace. This is simply a
directory tree that contains all the source files needed to build your
application. Bazel allows you to perform a build from a completely read-only
volume.
<a id="target-patterns"></a>
### Specifying targets to build
Bazel allows a number of ways to specify the targets to be built. Collectively,
these are known as _target patterns_. This syntax is used in commands like
`build`, `test`, or `query`.
Whereas [labels](build-ref.html#labels) are used to specify individual targets,
e.g. for declaring dependencies in BUILD files, Bazel's target patterns are a
syntax for specifying multiple targets: they are a generalization of the label
syntax for _sets_ of targets, using wildcards. In the simplest case, any valid
label is also a valid target pattern, identifying a set of exactly one target.
All target patterns starting with `//` are resolved relative to the current
workspace.
<table>
<tr>
<td><code>//foo/bar:wiz</code></td>
<td>Just the single target <code>//foo/bar:wiz</code>.</td>
</tr>
<tr>
<td><code>//foo/bar</code></td>
<td>Equivalent to <code>//foo/bar:bar</code>.</td>
</tr>
<tr>
<td><code>//foo/bar:all</code></td>
<td>All rules in the package <code>foo/bar</code>.</td>
</tr>
<tr>
<td><code>//foo/...</code></td>
<td>All rules in all packages beneath the directory <code>foo</code>.</td>
</tr>
<tr>
<td><code>//foo/...:all</code></td>
<td>All rules in all packages beneath the directory <code>foo</code>.</td>
</tr>
<tr>
<td><code>//foo/...:*</code></td>
<td>All targets (rules and files) in all packages beneath the directory <code>foo</code>.</td>
</tr>
<tr>
<td><code>//foo/...:all-targets</code></td>
<td>All targets (rules and files) in all packages beneath the directory <code>foo</code>.</td>
</tr>
</table>
Target patterns which do not begin with `//` are resolved relative to the
current _working directory_. These examples assume a working directory of `foo`:
<table>
<tr>
<td><code>:foo</code></td>
<td>Equivalent to <code>//foo:foo</code>.</td>
</tr>
<tr>
<td><code>bar:wiz</code></td>
<td>Equivalent to <code>//foo/bar:wiz</code>.</td>
</tr>
<tr>
<td><code>bar/wiz</code></td>
<td>Equivalent to:
<code>//foo/bar/wiz:wiz</code> if <code>foo/bar/wiz</code> is a package,
<code>//foo/bar:wiz</code> if <code>foo/bar</code> is a package,
<code>//foo:bar/wiz</code> otherwise.
</td>
</tr>
<tr>
<td><code>bar:all</code></td>
<td>Equivalent to <code>//foo/bar:all</code>.</td>
</tr>
<tr>
<td><code>:all</code></td>
<td>Equivalent to <code>//foo:all</code>.</td>
</tr>
<tr>
<td><code>...:all</code></td>
<td>Equivalent to <code>//foo/...:all</code>.</td>
</tr>
<tr>
<td><code>...</code></td>
<td>Equivalent to <code>//foo/...:all</code>.</td>
</tr>
<tr>
<td><code>bar/...:all</code></td>
<td>Equivalent to <code>//foo/bar/...:all</code>.</td>
</tr>
</table>
By default, directory symlinks are followed for recursive target patterns,
except those that point into the output base, such as the convenience
symlinks that are created in the root directory of the workspace.
In addition, Bazel does not follow symlinks when evaluating recursive target
patterns in any directory that contains a file named as follows:
`DONT_FOLLOW_SYMLINKS_WHEN_TRAVERSING_THIS_DIRECTORY_VIA_A_RECURSIVE_TARGET_PATTERN`
`foo/...` is a wildcard over _packages_, indicating all packages recursively
beneath directory `foo` (for all roots of the package path). `:all` is a
wildcard over _targets_, matching all rules within a package. These two may be
combined, as in `foo/...:all`, and when both wildcards are used, this may be
abbreviated to `foo/...`.
In addition, `:*` (or `:all-targets`) is a wildcard that matches _every target_
in the matched packages, including files that aren't normally built by any rule,
such as `_deploy.jar` files associated with `java_binary` rules.
This implies that `:*` denotes a _superset_ of `:all`; while potentially
confusing, this syntax does allow the familiar `:all` wildcard to be used for
typical builds, where building targets like the `_deploy.jar` is not desired.
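
For instance, given a hypothetical `java_binary` target `//java/myapp:myapp`,
the pattern `//java/myapp:*` also matches the implicit `myapp_deploy.jar`
target, whereas `//java/myapp:all` matches only the rules:

```
% bazel build //java/myapp:*    # also builds myapp_deploy.jar
% bazel build //java/myapp:all  # builds the rules only
```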
In addition, Bazel allows a slash to be used instead of the colon required by
the label syntax; this is often convenient when using Bash filename expansion.
For example, `foo/bar/wiz` is equivalent to `//foo/bar:wiz` (if there is a
package `foo/bar`) or to `//foo:bar/wiz` (if there is a package `foo`).
Many Bazel commands accept a list of target patterns as arguments, and they all
honor the prefix negation operator `-`. This can be used to subtract a set of
targets from the set specified by the preceding arguments. Note that this means
order matters. For example,
```
bazel build foo/... bar/...
```
means "build all targets beneath `foo` _and_ all targets beneath `bar`", whereas
```
bazel build -- foo/... -foo/bar/...
```
means "build all targets beneath `foo` _except_ those beneath `foo/bar`". (The
`--` argument is required to prevent the subsequent arguments starting with `-`
from being interpreted as additional options.)
Note, however, that subtracting targets this way does not guarantee that they
are not built, since they may be dependencies of targets that weren't
subtracted. For example, if there were a target `//foo:all-apis` that, among
others, depended on `//foo/bar:api`, then the latter would be built as part of
building the former.
Targets with `tags = ["manual"]` are not included in wildcard target
patterns (`...`, `:*`, `:all`, etc.). Specify such targets with explicit
target patterns on the command line if you want Bazel to build or test
them.
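
For example, with a hypothetical `BUILD` file like the following,
`bazel build //foo/...` skips `:experimental_tool`, but
`bazel build //foo:experimental_tool` builds it:

```
# //foo/BUILD (hypothetical)
sh_binary(
    name = "experimental_tool",
    srcs = ["experimental_tool.sh"],
    tags = ["manual"],  # excluded from //foo:all, //foo/..., :*, etc.
)
```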
<a id="fetch"></a>
### Fetching external dependencies
By default, Bazel will download and symlink external dependencies during the
build. However, this can be undesirable, either because you'd like to know
when new external dependencies are added or because you'd like to
"prefetch" dependencies (say, before a flight where you'll be offline). If you
would like to prevent new dependencies from being added during builds, you
can specify the `--fetch=false` flag. Note that this flag only
applies to repository rules that do not point to a directory in the local
file system. Changes, for example, to `local_repository`,
`new_local_repository`, and the Android SDK and NDK repository rules
will always take effect regardless of the value of `--fetch`.
If you disallow fetching during builds and Bazel finds new external
dependencies, your build will fail.
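
For example, the following invocation uses only previously fetched
dependencies and fails if any new download would be required (`//foo:bar` is a
placeholder target):

```
% bazel build --fetch=false //foo:bar
```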
You can manually fetch dependencies by running `bazel fetch`. If
you disallow during-build fetching, you'll need to run `bazel fetch`:
- Before you build for the first time.
- After you add a new external dependency.
Once it has been run, you should not need to run it again until the WORKSPACE
file changes.
`fetch` takes a list of targets to fetch dependencies for. For
example, this would fetch dependencies needed to build `//foo:bar`
and `//bar:baz`:
```
$ bazel fetch //foo:bar //bar:baz
```
To fetch all external dependencies for a workspace, run:
```
$ bazel fetch //...
```
You do not need to run `bazel fetch` at all if you have all of the tools you
are using (from library jars to the JDK itself) under your workspace root.
However, if you're using anything outside of the workspace directory, then
Bazel will automatically run `bazel fetch` before running
`bazel build`.
<a id="repository-cache"></a>
#### The repository cache
Bazel tries to avoid fetching the same file several times, even if the same
file is needed in different workspaces, or if the definition of an external
repository changed but it still needs the same file to download. To do so,
Bazel caches all downloaded files in the repository cache which, by default,
is located at `~/.cache/bazel/_bazel_$USER/cache/repos/v1/`. The
location can be changed by the `--repository_cache` option. The
cache is shared between all workspaces and installed versions of Bazel.
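
For example, to share one cache location across projects, you might add a line
like this to your `.bazelrc` (the path is a placeholder):

```
build --repository_cache=/path/to/shared/repository-cache
```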
An entry is taken from the cache only if
Bazel knows for sure that it has a copy of the correct file, that is, if the
download request specifies a SHA256 sum for the file and a file with that
hash is in the cache. So specifying a hash for each external file is
not only a good idea from a security perspective; it also helps avoid
unnecessary downloads.
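
As a sketch, a `WORKSPACE` entry like the following qualifies for caching
because it declares a SHA256 sum; the repository name, URL, and hash are
placeholders:

```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "my_dep",  # hypothetical external repository
    urls = ["https://example.com/my_dep-1.0.tar.gz"],
    # Declaring the hash makes the download verifiable and cacheable.
    sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
)
```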
Upon each cache hit, the modification time of the file in the cache is
updated. In this way, the last use of a file in the cache directory can easily
be determined, for example to manually clean up the cache. The cache is never
cleaned up automatically, as it might contain a copy of a file that is no
longer available upstream.
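
For example, assuming the default cache location, the following lists entries
that have not been used in the last 90 days and so could be candidates for
manual removal:

```
% find ~/.cache/bazel/_bazel_$USER/cache/repos/v1 -type f -mtime +90
```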
<a id="distdir"></a>
#### Distribution files directories
The distribution directory is another Bazel mechanism to avoid unnecessary
downloads. Bazel searches distribution directories before the repository cache.
The primary difference is that the distribution directory requires manual
preparation.
Using the
[`--distdir=/path/to-directory`](https://docs.bazel.build/versions/master/command-line-reference.html#flag--distdir)
option, you can specify additional read-only directories to look for files
instead of fetching them. A file is taken from such a directory if the file name
is equal to the base name of the URL and additionally the hash of the file is
equal to the one specified in the download request. This only works if the
file hash is specified in the WORKSPACE declaration.
While the condition on the file name is not necessary for correctness, it
reduces the number of candidate files to one per specified directory. In this
way, specifying distribution files directories remains efficient, even if the
number of files in such a directory grows large.
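
A minimal sketch of preparing and using such a directory; the archive name and
target are placeholders, and the file name must match the base name of the URL
declared in the WORKSPACE file:

```
% mkdir -p /path/to/distdir
% cp ~/downloads/my_dep-1.0.tar.gz /path/to/distdir/
% bazel build --distdir=/path/to/distdir //foo:bar
```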
#### Running Bazel in an airgapped environment
To keep Bazel's binary size small, Bazel's implicit dependencies are fetched
over the network when it runs for the first time. These implicit dependencies
contain toolchains and rules that may not be necessary for everyone. For
example, Android tools are unbundled and fetched only when building Android
projects.
However, these implicit dependencies may cause problems when running
Bazel in an airgapped environment, even if you have vendored all of your
WORKSPACE dependencies. To solve this, you can prepare a distribution directory
containing these dependencies on a machine with network access, and then
transfer it to the airgapped environment via an offline approach.
To prepare the [distribution directory](#distdir), use the
[`--distdir`](https://docs.bazel.build/versions/master/command-line-reference.html#flag--distdir)
flag. You will need to do this once for every new Bazel binary version, since
the implicit dependencies can be different for every release.
To build these dependencies outside of your airgapped environment, first
check out the Bazel source tree at the right version:
```
git clone https://github.com/bazelbuild/bazel "$BAZEL_DIR"
cd "$BAZEL_DIR"
git checkout "$BAZEL_VERSION"
```
Then, build the tarball containing the implicit runtime dependencies for that
specific Bazel version:
```
bazel build @additional_distfiles//:archives.tar
```
Export this tarball to a directory that can be copied into your airgapped
environment. Note the `--strip-components` flag, because `--distdir` can be
quite finicky with the directory nesting level:
```
tar xvf bazel-bin/external/additional_distfiles/archives.tar \
-C "$NEW_DIRECTORY" --strip-components=3
```
Finally, when you use Bazel in your airgapped environment, pass the `--distdir`
flag pointing to the directory. For convenience, you can add it as a `.bazelrc`
entry:
```
build --distdir=path/to/directory
```
<a id="configurations"></a>
### Build configurations and cross-compilation
All the inputs that specify the behavior and result of a given build can be
divided into two distinct categories. The first kind is the intrinsic
information stored in the BUILD files of your project: the build rule, the
values of its attributes, and the complete set of its transitive dependencies.
The second kind is the external or environmental data, supplied by the user or
by the build tool: the choice of target architecture, compilation and linking
options, and other toolchain configuration options. We refer to a complete set
of environmental data as a **configuration**.
In any given build, there may be more than one configuration. Consider a
cross-compile, in which you build a `//foo:bin` executable for a 64-bit
architecture, but your workstation is a 32-bit machine. Clearly, the build will
require building `//foo:bin` using a toolchain capable of creating 64-bit
executables, but the build system must also build various tools used during the
build itself—for example tools that are built from source, then subsequently
used in, say, a genrule—and these must be built to run on your workstation. Thus
we can identify two configurations: the **host configuration**, which is used
for building tools that run during the build, and the **target configuration**
(or _request configuration_, but we say "target configuration" more often even
though that word already has many meanings), which is used for building the
binary you ultimately requested.
Typically, there are many libraries that are prerequisites of both the requested
build target (`//foo:bin`) and one or more of the host tools, for example some
base libraries. Such libraries must be built twice, once for the host
configuration, and once for the target configuration. Bazel takes care of
ensuring that both variants are built, and that the derived files are kept
separate to avoid interference; usually such targets can be built concurrently,
since they are independent of each other. If you see progress messages
indicating that a given target is being built twice, this is most likely the
explanation.
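
As a hypothetical illustration, in the following `BUILD` fragment the tool
`//tools:gen` executes during the build, so it is built in the host
configuration, while `bin` and the libraries it depends on are built in the
target configuration:

```
genrule(
    name = "gen_src",
    outs = ["gen.cc"],
    tools = ["//tools:gen"],  # runs during the build: host configuration
    cmd = "$(location //tools:gen) > $@",
)

cc_binary(
    name = "bin",         # built in the target configuration
    srcs = [":gen_src"],
)
```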
Bazel uses one of two ways to select the host configuration, based on the
`--distinct_host_configuration` option. This boolean option is somewhat subtle,
and the setting may improve (or worsen) the speed of your builds.
#### `--distinct_host_configuration=false`
We do not recommend this option.
- If you frequently make changes to your request configuration, such as
alternating between `-c opt` and `-c dbg` builds, or between simple- and
cross-compilation, you will typically rebuild the majority of your codebase
each time you switch.
When this option is false, the host and request configurations are identical:
all tools required during the build will be built in exactly the same way as
target programs. This setting means that no libraries need to be built twice
during a single build.
However, it does mean that any change to your request configuration also affects
your host configuration, causing all the tools to be rebuilt, and then anything
that depends on the tool output to be rebuilt too. Thus, for example, simply
changing a linker option between builds might cause all tools to be re-linked,
and then all actions using them re-executed, and so on, resulting in a very
large rebuild. Also, please note: if your host architecture is not capable of
running your target binaries, your build will not work.
#### `--distinct_host_configuration=true` _(default)_
If this option is true, then instead of using the same configuration for the
host and request, a completely distinct host configuration is used. The host
configuration is derived from the target configuration as follows:
- Use the same version of Crosstool (`--crosstool_top`) as specified in the
request configuration, unless `--host_crosstool_top` is specified.
- Use the value of `--host_cpu` for `--cpu` (default: `k8`).
- Use the same values of `--compiler` and `--use_ijars` as specified in the
request configuration. If `--host_crosstool_top` is used, the value of
`--host_cpu` is used to look up a `default_toolchain` in the Crosstool
(ignoring `--compiler`) for the host configuration.
- Use the value of `--host_javabase` for `--javabase`.
- Use the value of `--host_java_toolchain` for `--java_toolchain`.
- Use optimized builds for C++ code (`-c opt`).
- Generate no debugging information (`--copt=-g0`).
- Strip debug information from executables and shared libraries
(`--strip=always`).
- Place all derived files in a special location, distinct from that used by
any possible request configuration.
- Suppress stamping of binaries with build data (see `--embed_*` options).
- All other values remain at their defaults.
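If you need to override any of these host-configuration defaults, the
corresponding `--host_*` flags can be set in a `.bazelrc` entry; the values
below are purely hypothetical:
```
build --host_cpu=k8
build --host_javabase=//tools/jdk:host_jdk   # hypothetical label
```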
There are many reasons why it might be preferable to select a distinct host
configuration from the request configuration. Some are too esoteric to mention
here, but two of them are worth pointing out.
Firstly, by using stripped, optimized binaries, you reduce the time spent
linking and executing the tools, the disk space occupied by the tools, and the
network I/O time in distributed builds.
Secondly, by decoupling the host and request configurations in all builds, you
avoid very expensive rebuilds that would result from minor changes to the
request configuration (such as changing a linker option), as described
earlier.
That said, for certain builds, this option may be a hindrance. In particular,
builds in which changes of configuration are infrequent (especially certain Java
builds), and builds where the amount of code that must be built in both host and
target configurations is large, may not benefit.
<a id="correctness"></a>
### Correct incremental rebuilds
One of the primary goals of the Bazel project is to ensure correct incremental
rebuilds. Previous build tools, especially those based on Make, make several
unsound assumptions in their implementation of incremental builds.
Firstly, that timestamps of files increase monotonically. While this is the
typical case, it is very easy to fall afoul of this assumption; syncing to an
earlier revision of a file causes that file's modification time to decrease;
Make-based systems will not rebuild.
More generally, while Make detects changes to files, it does not detect changes
to commands. If you alter the options passed to the compiler in a given build
step, Make will not re-run the compiler, and it is necessary to manually discard
the invalid outputs of the previous build using `make clean`.
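A minimal illustration of the problem (file names hypothetical):
```
% cat Makefile
foo.o: foo.c
	$(CC) $(CFLAGS) -c foo.c -o foo.o
% make CFLAGS=-O0        # compiles foo.o without optimization
% make CFLAGS=-O2        # "make: 'foo.o' is up to date." -- new flags ignored
```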
Also, Make is not robust against the unsuccessful termination of one of its
subprocesses after that subprocess has started writing to its output file. While
the current execution of Make will fail, the subsequent invocation of Make will
blindly assume that the truncated output file is valid (because it is newer than
its inputs), and it will not be rebuilt. The same situation can occur if the
Make process itself is killed.
Bazel avoids these assumptions, and others. Bazel maintains a database of all
work previously done, and will only omit a build step if it finds that the set
of input files (and their timestamps) to that build step, and the compilation
command for that build step, exactly match one in the database, and, that the
set of output files (and their timestamps) for the database entry exactly match
the timestamps of the files on disk. Any change to the input files or output
files, or to the command itself, will cause re-execution of the build step.
The benefit to users of correct incremental builds is: less time wasted due to
confusion. (Also, less time spent waiting for rebuilds caused by use of `make
clean`, whether necessary or pre-emptive.)
#### Build consistency and incremental builds
Formally, we define the state of a build as _consistent_ when all the expected
output files exist, and their contents are correct, as specified by the steps or
rules required to create them. When you edit a source file, the state of the
build is said to be _inconsistent_, and remains inconsistent until you next run
the build tool to successful completion. We describe this situation as _unstable
inconsistency_, because it is only temporary, and consistency is restored by
running the build tool.
There is another kind of inconsistency that is pernicious: _stable
inconsistency_. If the build reaches a stable inconsistent state, then repeated
successful invocation of the build tool does not restore consistency: the build
has gotten "stuck", and the outputs remain incorrect. Stable inconsistent states
are the main reason why users of Make (and other build tools) type `make clean`.
Discovering that the build tool has failed in this manner (and then recovering
from it) can be time consuming and very frustrating.
Conceptually, the simplest way to achieve a consistent build is to throw away
all the previous build outputs and start again: make every build a clean build.
This approach is obviously too time-consuming to be practical (except perhaps
for release engineers), and therefore to be useful, the build tool must be able
to perform incremental builds without compromising consistency.
Correct incremental dependency analysis is hard, and as described above, many
other build tools do a poor job of avoiding stable inconsistent states during
incremental builds. In contrast, Bazel offers the following guarantee: after a
successful invocation of the build tool during which you made no edits, the
build will be in a consistent state. (If you edit your source files during a
build, Bazel makes no guarantee about the consistency of the result of the
current build. But it does guarantee that the results of the _next_ build will
restore consistency.)
As with all guarantees, there comes some fine print: there are some known ways
of getting into a stable inconsistent state with Bazel. We won't guarantee to
investigate such problems arising from deliberate attempts to find bugs in the
incremental dependency analysis, but we will investigate and do our best to fix
all stable inconsistent states arising from normal or "reasonable" use of the
build tool.
If you ever detect a stable inconsistent state with Bazel, please report a bug.
<a id="sandboxing"></a>
#### Sandboxed execution
Bazel uses sandboxes to guarantee that actions run hermetically<sup>1</sup> and
correctly. Bazel runs _Spawns_ (loosely speaking: actions) in sandboxes that
only contain the minimal set of files the tool requires to do its job. Currently
sandboxing works on Linux 3.12 or newer with the `CONFIG_USER_NS` option
enabled, and also on macOS 10.11 or newer.
Bazel will print a warning if your system does not support sandboxing to alert
you to the fact that builds are not guaranteed to be hermetic and might affect
the host system in unknown ways. To disable this warning you can pass the
`--ignore_unsupported_sandboxing` flag to Bazel.
<sup>1</sup>: Hermeticity means that the action only uses its declared input
files and no other files in the filesystem, and it only produces its declared
output files.
On some platforms such as [Google Kubernetes
Engine](https://cloud.google.com/kubernetes-engine/) cluster nodes or Debian,
user namespaces are deactivated by default due to security
concerns. This can be checked by looking at the file
`/proc/sys/kernel/unprivileged_userns_clone`: if it exists and contains a 0,
then user namespaces can be activated with
`sudo sysctl kernel.unprivileged_userns_clone=1`.
In some cases, the Bazel sandbox fails to execute rules because of the system
setup. The symptom is generally a failure that outputs a message similar to
`namespace-sandbox.c:633: execvp(argv[0], argv): No such file or directory`.
In that case, try to deactivate the sandbox for genrules with
`--strategy=Genrule=standalone` and for other rules with
`--spawn_strategy=standalone`. Also please report a bug on our
issue tracker and mention which Linux distribution you're using so that we can
investigate and provide a fix in a subsequent release.
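For example, a sketch of such a fallback invocation (target name hypothetical):
```
% bazel build --strategy=Genrule=standalone --spawn_strategy=standalone //foo:bar
```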
<a id="phases"></a>
### Phases of a build
In Bazel, a build occurs in three distinct phases; as a user, understanding the
difference between them provides insight into the options which control a build
(see below).
#### Loading phase
The first is **loading**, during which all the necessary BUILD files for the
initial targets, and their transitive closure of dependencies, are loaded,
parsed, evaluated and cached.
For the first build after a Bazel server is started, the loading phase typically
takes many seconds as many BUILD files are loaded from the file system. In
subsequent builds, especially if no BUILD files have changed, loading occurs
very quickly.
Errors reported during this phase include: package not found, target not found,
lexical and grammatical errors in a BUILD file, and evaluation errors.
#### Analysis phase
The second phase, **analysis**, involves the semantic analysis and validation of
each build rule, the construction of a build dependency graph, and the
determination of exactly what work is to be done in each step of the build.
Like loading, analysis also takes several seconds when computed in its entirety.
However, Bazel caches the dependency graph from one build to the next and only
reanalyzes what it has to, which can make incremental builds extremely fast in
the case where the packages haven't changed since the previous build.
Errors reported at this stage include: inappropriate dependencies, invalid
inputs to a rule, and all rule-specific error messages.
The loading and analysis phases are fast because Bazel avoids unnecessary file
I/O at this stage, reading only BUILD files in order to determine the work to be
done. This is by design, and makes Bazel a good foundation for analysis tools,
such as Bazel's [query](#query) command, which is implemented atop the loading
phase.
#### Execution phase
The third and final phase of the build is **execution**. This phase ensures that
the outputs of each step in the build are consistent with its inputs, re-running
compilation/linking/etc. tools as necessary. This step is where the build spends
the majority of its time, ranging from a few seconds to over an hour for a large
build. Errors reported during this phase include: missing source files, errors
in a tool executed by some build action, or failure of a tool to produce the
expected set of outputs.
<a id="client/server"></a>
## Client/server implementation
The Bazel system is implemented as a long-lived server process. This allows it
to perform many optimizations not possible with a batch-oriented implementation,
such as caching of BUILD files, dependency graphs, and other metadata from one
build to the next. This improves the speed of incremental builds, and allows
different commands, such as `build` and `query` to share the same cache of
loaded packages, making queries very fast.
When you run `bazel`, you're running the client. The client finds the server
based on the output base, which by default is determined by the path of the base
workspace directory and your userid, so if you build in multiple workspaces,
you'll have multiple output bases and thus multiple Bazel server processes.
Multiple users on the same workstation can build concurrently in the same
workspace because their output bases will differ (different userids). If the
client cannot find a running server instance, it starts a new one. The server
process will stop after a period of inactivity (3 hours, by default, which can
be modified using the startup option `--max_idle_secs`).
For the most part, the fact that there is a server running is invisible to the
user, but sometimes it helps to bear this in mind. For example, if you're
running scripts that perform a lot of automated builds in different directories,
it's important to ensure that you don't accumulate a lot of idle servers; you
can do this by explicitly shutting them down when you're finished with them, or
by specifying a short timeout period.
The name of a Bazel server process appears in the output of `ps x` or `ps -e f`
as <code>bazel(<i>dirname</i>)</code>, where _dirname_ is the basename of the
directory enclosing the root of your workspace directory. For example:
```
% ps -e f
16143 ? Sl 3:00 bazel(src-johndoe2) -server -Djava.library.path=...
```
This makes it easier to find out which server process belongs to a given
workspace. (Beware that with certain other options to `ps`, Bazel server
processes may be named just `java`.) Bazel servers can be stopped using the
[shutdown](user-manual.html#shutdown) command.
When running `bazel`, the client first checks that the server is the appropriate
version; if not, the server is stopped and a new one started. This ensures that
the use of a long-running server process doesn't interfere with proper
versioning.
<a id="bazelrc"></a>
## `.bazelrc`, the Bazel configuration file
Bazel accepts many options. Some options are varied frequently (for example,
`--subcommands`) while others stay the same across several builds (such as
`--package_path`). To avoid specifying these unchanged options for every build
(and other commands), you can specify options in a configuration file.
### Where are the `.bazelrc` files?
Bazel looks for optional configuration files in the following locations,
in the order shown below. The options are interpreted in this order, so
options in later files can override a value from an earlier file if a
conflict arises. All options that control which of these files are loaded are
startup options, which means they must occur after `bazel` and
before the command (`build`, `test`, etc.).
1. **The system RC file**, unless `--nosystem_rc` is present.
Path:
- On Linux/macOS/Unixes: `/etc/bazel.bazelrc`
- On Windows: `%ProgramData%\bazel.bazelrc`
It is not an error if this file does not exist.
If another system-specified location is required, you must build a custom
Bazel binary, overriding the `BAZEL_SYSTEM_BAZELRC_PATH` value in
[`//src/main/cpp:option_processor`](https://github.com/bazelbuild/bazel/blob/0.28.0/src/main/cpp/BUILD#L141).
The system-specified location may contain environment variable references,
such as `${VAR_NAME}` on Unix or `%VAR_NAME%` on Windows.
2. **The workspace RC file**, unless `--noworkspace_rc` is present.
Path: `.bazelrc` in your workspace directory (next to the main
`WORKSPACE` file).
It is not an error if this file does not exist.
3. **The home RC file**, unless `--nohome_rc` is present.
Path:
- On Linux/macOS/Unixes: `$HOME/.bazelrc`
- On Windows: `%USERPROFILE%\.bazelrc` if it exists, or `%HOME%/.bazelrc`
It is not an error if this file does not exist.
4. **The user-specified RC file**, if specified with
<code>--bazelrc=<var>file</var></code>
This flag is optional. However, if the flag is specified, then the file must
exist.
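Remember that `--bazelrc` is a startup option, so it must appear before the
command. For example (file and target names hypothetical):
```
% bazel --bazelrc=ci.bazelrc build //foo:bin
```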
In addition to this optional configuration file, Bazel looks for a global rc
file. For more details, see the [global bazelrc section](#global_bazelrc).
### `.bazelrc` syntax and semantics
Like all UNIX "rc" files, the `.bazelrc` file is a text file with a line-based
grammar. Empty lines and lines starting with `#` (comments) are ignored. Each
line contains a sequence of words, which are tokenized according to the same
rules as the Bourne shell.
#### Imports
Lines that start with `import` or `try-import` are special: use these to load
other "rc" files. To specify a path that is relative to the workspace root,
write `import %workspace%/path/to/bazelrc`.
The difference between `import` and `try-import` is that Bazel fails if the
`import`'ed file is missing (or can't be read), but not so for a `try-import`'ed
file.
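For example, a sketch of both forms (paths hypothetical):
```
import %workspace%/tools/common.bazelrc
try-import %workspace%/user.bazelrc
```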
Import precedence:
- Options in the imported file take precedence over options specified before
the import statement.
- Options specified after the import statement take precedence over the
options in the imported file.
- Options in files imported later take precedence over files imported earlier.
#### Option defaults
Most lines of a bazelrc define default option values. The first word on each
line specifies when these defaults are applied:
- `startup`: startup options, which go before the command, and are described
in `bazel help startup_options`.
- `common`: options that apply to all Bazel commands.
- _`command`_: Bazel command, such as `build` or `query` to which the options
apply. These options also apply to all commands that inherit from the
specified command. (For example, `test` inherits from `build`.)
Each of these lines may be used more than once and the arguments that follow the
first word are combined as if they had appeared on a single line. (Users of CVS,
another tool with a "Swiss army knife" command-line interface, will find the
syntax similar to that of `.cvsrc`.) For example, the lines:
```
build --test_tmpdir=/tmp/foo --verbose_failures
build --test_tmpdir=/tmp/bar
```
are combined as:
```
build --test_tmpdir=/tmp/foo --verbose_failures --test_tmpdir=/tmp/bar
```
so the effective flags are `--verbose_failures` and `--test_tmpdir=/tmp/bar`.
Option precedence:
- Options on the command line always take precedence over those in rc files.
For example, if an rc file says `build -c opt` but the command line flag is
`-c dbg`, the command line flag takes precedence.
- Within the rc file, precedence is governed by specificity: lines for a more
specific command take precedence over lines for a less specific command.
Specificity is defined by inheritance. Some commands inherit options from
other commands, making the inheriting command more specific than the base
command. For example `test` inherits from the `build` command, so all `bazel
build` flags are valid for `bazel test`, and all `build` lines apply also to
`bazel test` unless there's a `test` line for the same option. If the rc
file says:
```
test -c dbg --test_env=PATH
build -c opt --verbose_failures
```
then `bazel build //foo` will use `-c opt --verbose_failures`, and `bazel
test //foo` will use `--verbose_failures -c dbg --test_env=PATH`.
The inheritance (specificity) graph is:
* Every command inherits from `common`
* The following commands inherit from (and are more specific than)
`build`: `test`, `run`, `clean`, `mobile-install`, `info`,
`print_action`, `config`, `cquery`, and `aquery`
* `coverage` inherits from `test`
- Two lines specifying options for the same command at equal specificity are
parsed in the order in which they appear within the file.
- Because this precedence rule does not match the file order, it helps
readability if you follow the precedence order within rc files: start with
`common` options at the top, and end with the most-specific commands at the
bottom of the file. This way, the order in which the options are read is the
same as the order in which they are applied, which is more intuitive.
The arguments specified on a line of an rc file may include arguments that are
not options, such as the names of build targets, and so on. These, like the
options specified in the same files, have lower precedence than their siblings
on the command line, and are always prepended to the explicit list of
non-option arguments.
#### `--config`
In addition to setting option defaults, the rc file can be used to group options
and provide a shorthand for common groupings. This is done by adding a `:name`
suffix to the command. These options are ignored by default, but will be
included when the option <code>--config=<var>name</var></code> is present,
either on the command line or in a `.bazelrc` file, recursively, even inside of
another config definition. The options specified by `command:name` will only be
expanded for applicable commands, in the precedence order described above.
Note that configs can be defined in any `.bazelrc` file, and that all lines of
the form `command:name` (for applicable commands) will be expanded, across the
different rc files. In order to avoid name conflicts, we suggest that configs
defined in personal rc files start with an underscore (`_`) to avoid
unintentional name sharing.
`--config=foo` expands to the options defined in the rc files "in-place" so that
the options specified for the config have the same precedence that the
`--config=foo` option had.
This syntax does not extend to the use of `startup` to set
[startup options](#option-defaults), e.g. setting
`startup:config-name --some_startup_option` in the .bazelrc will be ignored.
#### Example
Here's an example `~/.bazelrc` file:
```
# Bob's Bazel option defaults
startup --host_jvm_args=-XX:-UseParallelGC
import /home/bobs_project/bazelrc
build --show_timestamps --keep_going --jobs 600
build --color=yes
query --keep_going
# Definition of --config=memcheck
build:memcheck --strip=never --test_timeout=3600
```
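With the definition above, the `memcheck` options are applied only when that
config is requested, for example (target name hypothetical):
```
% bazel build --config=memcheck //foo:bin
```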
<a id="startup files"></a>
### Other files governing Bazel's behavior
#### `.bazelignore`
You can specify directories within the workspace
that you want Bazel to ignore, such as related projects
that use other build systems. Place a file called
`.bazelignore` at the root of the workspace
and add the directories you want Bazel to ignore, one per
line. Entries are relative to the workspace root.
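For example, a hypothetical `.bazelignore` file:
```
third_party/legacy_build
node_modules
```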
<a id="global_bazelrc"></a>
### The global bazelrc file
In addition to your personal `.bazelrc` file, Bazel reads global bazelrc
files in this order: `$workspace/tools/bazel.rc`, `.bazelrc` next to the
Bazel binary, and `/etc/bazel.bazelrc`. (It's fine if any are missing.)
You can make Bazel ignore the global bazelrcs by passing the
`--nomaster_bazelrc` startup option.
<a id="scripting"></a>
## Calling Bazel from scripts
Bazel can be called from scripts in order to perform a build, run tests or query
the dependency graph. Bazel has been designed to enable effective scripting, but
this section lists some details to bear in mind to make your scripts more
robust.
### Choosing the output base
The `--output_base` option controls where the Bazel process should write the
outputs of a build to, as well as various working files used internally by
Bazel, one of which is a lock that guards against concurrent mutation of the
output base by multiple Bazel processes.
Choosing the correct output base directory for your script depends on several
factors. If you need to put the build outputs in a specific location, this will
dictate the output base you need to use. If you are making a "read only" call to
Bazel (e.g. `bazel query`), the locking factors will be more important. In
particular, if you need to run multiple instances of your script concurrently,
you will need to give each one a different (or random) output base.
If you use the default output base value, you will be contending for the same
lock used by the user's interactive Bazel commands. If the user issues
long-running commands such as builds, your script will have to wait for those
commands to complete before it can continue.
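For example, a script sketch that gives each run its own output base to avoid
lock contention (the query expression and target are hypothetical):
```
output_base="$(mktemp -d)"
bazel --output_base="$output_base" query 'deps(//foo:bin)'
```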
### Notes about Server Mode
By default, Bazel uses a long-running [server process](#client/server) as an
optimization. When running Bazel in a script, don't forget to call `shutdown`
when you're finished with the server, or specify `--max_idle_secs=5` so that
idle servers shut themselves down promptly.
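For example, a sketch of both approaches (target name hypothetical):
```
bazel --max_idle_secs=5 build //foo:bin   # server exits shortly after going idle
bazel shutdown                            # or stop the server explicitly
```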
### What exit code will I get?
Bazel attempts to differentiate failures due to the source code under
consideration from external errors that prevent Bazel from executing properly.
Bazel execution can result in the following exit codes:
**Exit Codes common to all commands:**
- `0` - Success
- `2` - Command Line Problem, Bad or Illegal flags or command combination, or
Bad Environment Variables. Your command line must be modified.
- `8` - Build Interrupted but we terminated with an orderly shutdown.
- `32` - External Environment Failure not on this machine.
- `33` - OOM failure. You need to modify your command line.
- `34` - Reserved for Google-internal use.
- `35` - Reserved for Google-internal use.
- `36` - Local Environmental Issue, suspected permanent.
- `37` - Unhandled Exception / Internal Bazel Error.
- `38` - Reserved for Google-internal use.
- `41-44` - Reserved for Google-internal use.
- `45` - Error publishing results to the Build Event Service.
**Return codes for commands `bazel build`, `bazel test`:**
- `1` - Build failed.
- `3` - Build OK, but some tests failed or timed out.
- `4` - Build successful but no tests were found even though testing was
requested.
**For `bazel run`:**
- `1` - Build failed.
- If the build succeeds but the executed subprocess returns a non-zero exit
code, it will be the exit code of the command as well.
**For `bazel query`:**
- `3` - Partial success, but the query encountered 1 or more errors in the
input BUILD file set and therefore the results of the operation are not 100%
reliable. This is likely due to a `--keep_going` option on the command line.
- `7` - Command failure.
Future Bazel versions may add additional exit codes, replacing the generic
failure exit code `1` with different non-zero values that have particular
meanings. However, all non-zero exit values will always constitute an error.
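For example, a script sketch that branches on some of these codes (target name
hypothetical):
```
bazel build //foo:bin
case $? in
  0) echo "build succeeded" ;;
  1) echo "build failed" ;;
  *) echo "usage, environment, or internal error" ;;
esac
```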
### Reading the .bazelrc file
By default, Bazel reads the [`.bazelrc` file](#bazelrc) from the base
workspace directory or the user's home directory. Whether or not this is
desirable is a choice for your script; if your script needs to be perfectly
hermetic (e.g. when doing release builds), you should disable reading the
.bazelrc file by using the option `--bazelrc=/dev/null`. If you want to perform
a build using the user's preferred settings, the default behavior is better.
### Command log
The Bazel output is also available in a command log file which you can find with
the following command:
```
% bazel info command_log
```
The command log file contains the interleaved stdout and stderr streams of the
most recent Bazel command. Note that running `bazel info` will overwrite the
contents of this file, since it then becomes the most recent Bazel command.
However, the location of the command log file will not change unless you change
the setting of the `--output_base` or `--output_user_root` options.
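For example, a sketch that saves the location before running the command of
interest, so a later `bazel info` call cannot clobber the log you care about
(target name hypothetical):
```
log_file="$(bazel info command_log)"
bazel build //foo:bin
cat "$log_file"   # interleaved stdout/stderr of the build command
```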
### Parsing output
The Bazel output is quite easy to parse for many purposes. Two options that may
be helpful for your script are `--noshow_progress` which suppresses progress
messages, and <code>--show_result <var>n</var></code>, which controls whether or
not "build up-to-date" messages are printed; these messages may be parsed to
discover which targets were successfully built, and the location of the output
files they created. Be sure to specify a very large value of _n_ if you rely on
these messages.
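For example (target pattern hypothetical):
```
% bazel build --noshow_progress --show_result 1000000 //foo:all
```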
<a id="profiling"></a>
## Troubleshooting performance by profiling
The first step in analyzing the performance of your build is to profile your
build with the [`--profile`](user-manual.html#flag--profile) flag.
The file generated by the [`--profile`](user-manual.html#flag--profile) flag is
a binary file. Once you have generated this binary profile, you can analyze it
using Bazel's [`analyze-profile`](user-manual.html#analyze-profile) command. By
default, it will print out summary analysis information for each of the
specified profile datafiles. This includes cumulative statistics for different
task types for each build phase and an analysis of the critical execution path.
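For example, a sketch of generating and then analyzing a profile (file and
target names hypothetical):
```
% bazel build --profile=myprofile.out //foo:bin
% bazel analyze-profile myprofile.out
```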
The first section of the default output describes an overview of the time spent
on the different build phases:
```
=== PHASE SUMMARY INFORMATION ===
Total launch phase time 6.00 ms 0.01%
Total init phase time 864 ms 1.11%
Total loading phase time 21.841 s 28.05%
Total analysis phase time 5.444 s 6.99%
Total preparation phase time 155 ms 0.20%
Total execution phase time 49.473 s 63.54%
Total finish phase time 83.9 ms 0.11%
Total run time 77.866 s 100.00%
```
The following sections show the execution time of different tasks happening
during a particular phase:
```
=== INIT PHASE INFORMATION ===
Total init phase time 864 ms
Total time (across all threads) spent on:
Type Total Count Average
VFS_STAT 2.72% 1 23.5 ms
VFS_READLINK 32.19% 1 278 ms
=== LOADING PHASE INFORMATION ===
Total loading phase time 21.841 s
Total time (across all threads) spent on:
Type Total Count Average
SPAWN 3.26% 154 475 ms
VFS_STAT 10.81% 65416 3.71 ms
[...]
STARLARK_BUILTIN_FN 13.12% 45138 6.52 ms
=== ANALYSIS PHASE INFORMATION ===
Total analysis phase time 5.444 s
Total time (across all threads) spent on:
Type Total Count Average
SKYFRAME_EVAL 9.35% 1 4.782 s
SKYFUNCTION 89.36% 43332 1.06 ms
=== EXECUTION PHASE INFORMATION ===
Total preparation time 155 ms
Total execution phase time 49.473 s
Total time finalizing build 83.9 ms
Action dependency map creation 0.00 ms
Actual execution time 49.473 s
Total time (across all threads) spent on:
Type Total Count Average
ACTION 2.25% 12229 10.2 ms
[...]
SKYFUNCTION 1.87% 236131 0.44 ms
```
The last section shows the critical path:
```
Critical path (32.078 s):
Id Time Percentage Description
1109746 5.171 s 16.12% Building [...]
1109745 164 ms 0.51% Extracting interface [...]
1109744 4.615 s 14.39% Building [...]
[...]
1109639 2.202 s 6.86% Executing genrule [...]
1109637 2.00 ms 0.01% Symlinking [...]
1109636 163 ms 0.51% Executing genrule [...]
4.00 ms 0.01% [3 middleman actions]
```
You can use the following options to display more detailed information:
- <a id="dump-text-format"></a>[`--dump=text`](user-manual.html#flag--dump)
This option prints all recorded tasks in the order they occurred. Nested
tasks are indented relative to the parent. For each task, output includes
the following information:
```
[task type] [task description]
Thread: [thread id] Id: [task id] Parent: [parent task id or 0 for top-level tasks]
Start time: [time elapsed from the profiling session start] Duration: [task duration]
[aggregated statistic for nested tasks, including count and total duration for each nested task]
```
- <a id="dump-raw-format"></a>[`--dump=raw`](user-manual.html#flag--dump)
This option is most useful for automated analysis with scripts. It outputs
each task record on a single line using the '|' delimiter between fields. Fields
are printed in the following order:
1. thread id - integer positive number, identifies owner thread for the
task
2. task id - integer positive number, identifies specific task
3. parent task id for nested tasks or 0 for root tasks
4. task start time in ns, relative to the start of the profiling session
5. task duration in ns. Please note that this will include duration of all
subtasks.
6. aggregated statistic for immediate subtasks per type. This will include
type name (lower case), number of subtasks for that type and their
cumulative duration. Types are space-delimited and information for
single type is comma-delimited.
7. task type (upper case)
8. task description
Example:
```
1|1|0|0|0||PHASE|Launch Bazel
1|2|0|6000000|0||PHASE|Initialize command
1|3|0|168963053|278111411||VFS_READLINK|/[...]
1|4|0|571055781|23495512||VFS_STAT|/[...]
1|5|0|869955040|0||PHASE|Load packages
[...]
```
If Bazel appears to be hung, you can hit <kbd>Ctrl-\</kbd> or send
Bazel a `SIGQUIT` signal (`kill -3 $(bazel info server_pid)`) to get a thread
dump in the file `$(bazel info output_base)/server/jvm.out`.
Since you may not be able to run `bazel info` if Bazel is hung, the
`output_base` directory is usually the parent of the `bazel-<workspace>`
symlink in your workspace directory.
## `** myitcv.io/react/cmd/reactVet **` has moved
The repository hosting `myitcv.io/react/cmd/reactVet` has changed.
`myitcv.io/react/cmd/reactVet` can now be found at:
https://github.com/myitcv/x/tree/master/react/cmd/reactVet
---
title: Activating beta features for apps
intro: 'You can test new app features released in public beta for your {% data variables.product.prodname_github_apps %} and {% data variables.product.prodname_oauth_app %}s.'
versions:
  free-pro-team: '*'
---
{% warning %}
**Warning:** Features available in public beta are subject to change.
{% endwarning %}
### Activating beta features for {% data variables.product.prodname_github_apps %}
{% data reusables.user-settings.access_settings %}
{% data reusables.user-settings.developer_settings %}
3. Select the {% data variables.product.prodname_github_app %} for which you want to enable a beta feature.
{% data reusables.apps.beta_feature_activation %}
### Activating beta features for {% data variables.product.prodname_oauth_app %}s
{% data reusables.user-settings.access_settings %}
{% data reusables.user-settings.developer_settings %}
{% data reusables.user-settings.oauth_apps %}
{% data reusables.apps.beta_feature_activation %}
---
title: "CSQLLanguages, CSQLLanguageInfo | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.reviewer: ""
ms.suite: ""
ms.technology: ["cpp-windows"]
ms.tgt_pltfrm: ""
ms.topic: "article"
f1_keywords: ["CSQLLanguageInfo", "m_szProgrammingLanguage", "m_szImplementation", "m_szIntegrity", "m_szBindingStyle", "m_szConformance", "m_szSource", "m_szYear", "CSQLLanguages"]
dev_langs: ["C++"]
helpviewer_keywords: ["m_szBindingStyle", "m_szProgrammingLanguage", "m_szYear", "m_szImplementation", "m_szSource", "m_szConformance", "CSQLLanguages typedef class", "CSQLLanguageInfo parameter class", "m_szIntegrity"]
ms.assetid: 9c36c5bb-6917-49c3-9ac3-942339893f19
caps.latest.revision: 6
author: "mikeblome"
ms.author: "mblome"
manager: "ghogen"
ms.workload: ["cplusplus", "data-storage"]
---
# CSQLLanguages, CSQLLanguageInfo
Call the typedef class **CSQLLanguages** to implement its parameter class **CSQLLanguageInfo**.
## Remarks
See [Schema Rowset Classes and Typedef Classes](../../data/oledb/schema-rowset-classes-and-typedef-classes.md) for more information on using typedef classes.
This class identifies the conformance levels, options, and dialects supported by the SQL-implementation processing data defined in the catalog.
The following table lists the class data members and their corresponding OLE DB Columns. See [SQL_LANGUAGES Rowset](https://msdn.microsoft.com/en-us/library/ms714374.aspx) in the *OLE DB Programmer's Reference* for more information about the schema and columns.
|Data members|OLE DB columns|
|------------------|--------------------|
|m_szSource|SQL_LANGUAGE_SOURCE|
|m_szYear|SQL_LANGUAGE_YEAR|
|m_szConformance|SQL_LANGUAGE_CONFORMANCE|
|m_szIntegrity|SQL_LANGUAGE_INTEGRITY|
|m_szImplementation|SQL_LANGUAGE_IMPLEMENTATION|
|m_szBindingStyle|SQL_LANGUAGE_BINDING_STYLE|
|m_szProgrammingLanguage|SQL_LANGUAGE_PROGRAMMING_LANGUAGE|
## Requirements
**Header:** atldbsch.h
## See Also
[CRestrictions Class](../../data/oledb/crestrictions-class.md)
yii2-grid
=========
[](https://packagist.org/packages/kartik-v/yii2-grid)
[](https://packagist.org/packages/kartik-v/yii2-grid)
[](https://packagist.org/packages/kartik-v/yii2-grid)
[](https://packagist.org/packages/kartik-v/yii2-grid)
[](https://packagist.org/packages/kartik-v/yii2-grid)
[](https://packagist.org/packages/kartik-v/yii2-grid)
Yii2 GridView on steroids. A module with various modifications and enhancements to one of the most used widgets by Yii developers. The widget contains new additional Grid Columns with enhanced settings for Yii Framework 2.0. The widget also incorporates various Bootstrap 3.x styling options.
Refer [detailed documentation](http://demos.krajee.com/grid) and/or a [complete demo](http://demos.krajee.com/grid-demo). You can also view the [grid grouping demo here](http://demos.krajee.com/group-grid).

## Latest Release
The latest version of the module is v3.1.3. Refer the [CHANGE LOG](https://github.com/kartik-v/yii2-grid/blob/master/CHANGE.md) for details.
New features with release 2.7.0:
1. A brand new column `ExpandRowColumn` has been added which allows one to expand grid rows, show details, or load content via ajax. Check the [ExpandRowColumn documentation](http://demos.krajee.com/grid#expand-row-column) for further details. The features available with this column are:
- Ability to expand grid rows and show a detail content in a new row below it like a master detail record.
- Allows configuring the column like any grid DataColumn. The value of the column determines if the row is to be expanded or collapsed by default.
- Allows you to configure/customize the expand and collapse indicators.
- Ability to configure only specific rows to have expand/collapse functionality.
- Ability to disable the expand / collapse behavior and indicators for selective rows.
- Allows you to configure the detail content markup directly in the column configuration (using `detail` property). This can be set as a HTML markup directly or via Closure callback using column parameters.
- Allows you to load the detail content markup via ajax. Set the `detailUrl` property directly or via a Closure callback using column parameters.
- Automatically caches the content loaded via ajax so that the content is rendered from local on toggling the expand/collapse indicators, until the grid state is changed via filtering, sorting, or pagination.
- Ability to batch expand or batch collapse grid rows from the header. If content is loaded via ajax, the batch expand and collapse will fire the ajax requests to load and use intelligently from cache where possible.
2. Included `prepend` and `append` settings within `pageSummaryOptions` to prepend/append content to page summary.
3. All asset (JS & CSS) files have been carefully isolated to only load them if the specific grid feature has been selected.
4. Enhancements for JS confirmation popups being hidden by browser's hide dialog settings.
5. Recursively replace/merge PDF export configuration correctly.
6. Include demo messages for auto generating via config.
7. Allows grouping grid column data, including master detail groups and generating group summaries (since v3.0.5).
8. Allows special formatting of data for cells exported in Excel Format.
> NOTE: This extension depends on other yii2 extensions based on the functionality chosen by you. It will not install such dependent packages by default, but will prompt through an exception, if accessed.
For example, if you choose to enable PDF export, then the [yii2-mpdf](http://demos.krajee.com/mpdf) extension will be mandatory and an exception will be raised if `yii2-mpdf` is not installed.
Check the [composer.json](https://github.com/kartik-v/yii2-grid/blob/master/composer.json) for other extension dependencies.
## Module
The extension has been created as a module to enable access to advanced features like download actions (exporting as csv, text, html, or xls). You should configure the module with a name of `gridview` as shown below:
```php
'modules' => [
'gridview' => [
'class' => '\kartik\grid\Module'
]
],
```
## GridView
### \kartik\grid\GridView
The following functionalities have been added/enhanced:
### Table Styling (Enhanced)
Control various options to style your grid table. Added `containerOptions` to customize your grid table container. Enhancements for grid and columns to work with yii\widgets\Pjax.
### Grid Grouping (New)
With release v3.0.5, the module allows grouping of GridView data by setting various `group` related properties at the `kartik\grid\DataColumn` level. The following functionalities are supported:
- Ability to group and merge similar data for each column.
- Allow multi level/complex grouping and making a sub group dependent on a parent group.
- Allow displaying grouped data as a separate grouped row above the child rows.
- Allow configuring and displaying of group level summary rows.
- Summaries can be setup as a group footer OR a group header.
- Summaries intelligently embed between sub-groups and parent groups.
- Summaries can include auto calculated values (for numbers) at runtime based on previous child column data.
- Summaries can include advanced calculations using a javascript callback configuration.
- Ability to merge columns in the summary group header or footer.
- Complex configurations of groups will allow - group properties to be set dynamically using Closure.
- Allow you to style your group cells in various ways including setting odd and even row CSS properties.
### Pjax Settings (New)
Inbuilt support for Pjax. Enhancements for grid and columns to work with `yii\widgets\Pjax`. Auto-reinitializes embedded javascript plugins when GridView is refreshed via Pjax. Added `pjax` property to enable pjax and `pjaxSettings` to customize the pjax behavior.
### Custom Header & Footer (New)
Add custom header or footer rows, above / below your default grid header and footer.
### Resizing Columns (New)
Allows resizing of the columns just like a spreadsheet (since v3.0.0). Uses the [JQuery ResizableColumns plugin](https://github.com/dobtco/jquery-resizable-columns) for resize and [store.js](https://github.com/marcuswestin/store.js/) for localStorage persistence.
### Floating Header (New)
Allows the grid table to have a floating table header. Uses the [JQuery Float THead plugin](http://mkoryak.github.io/floatThead) to display a seamless floating table header.
### Panel (New)
Allows configuration of GridView to be enclosed in a panel that can be styled as per Bootstrap 3.x. The panel will enable configuration of various
sections to embed content/buttons, before and after header, and before and after footer.
### Toolbar (New)
The grid offers the ability to configure a toolbar for adding various actions. The default templates place the toolbar in the `before` section of the `panel`. The toolbar is by default styled using Bootstrap button groups. Some of the default actions, like the `export` button, are appended to the toolbar by default.
With version v2.1.0, if you are using the `yii2-dynagrid` extension it automatically displays the **personalize**, **sort**, and **filter** buttons in the toolbar. The toolbar can be configured as a simple array. Refer the [docs and demos](http://demos.krajee.com/grid) for details.
### Grid Plugins (New)
The grid now offers the ability to plug in dynamic content to your grid at runtime. A new property `replaceTags` has been added with v2.3.0. This allows you to specify tags which will be replaced dynamically at grid rendering time and wherever you set these tags in any of the grid layout templates.
### Page Summary (New)
This is a new feature added to the GridView widget. The page summary is an additional row above the footer - for displaying the
summary/totals for the current GridView page. The following parameters are applicable to control this behavior:
- `showPageSummary`: _boolean_ whether to display the page summary row for the grid view. Defaults to `false`.
- `pageSummaryRowOptions`: _array_, HTML attributes for the page summary row. Defaults to `['class' => 'kv-page-summary warning']`.
### Export Grid Data (New)
This is a new feature added to the GridView widget. It allows you to export the displayed grid content as HTML, CSV, TEXT, EXCEL, PDF, & JSON. It uses the rendered grid data on client to convert to one of the format specified using JQuery.
This is supported across all browsers. The PDF rendering is achieved through a separate extension [yii2-mpdf](http://demos.krajee.com/mpdf).
Features offered by yii2-grid export:
- Ability to preprocess and convert column data to your desired value before exporting. There is a new property `exportConversions` that can be set up in GridView.
For example, this currently is set as a default to convert the HTML formatted icons for BooleanColumn to user friendly text like `Active` or `Inactive` after export.
- Hide any row or column in the grid by adding one or more of the following CSS classes:
- `skip-export`: Will skip this element during export for all formats (`html`, `csv`, `txt`, `xls`, `pdf`, `json`).
- `skip-export-html`: Will skip this element during export only for `html` export format.
- `skip-export-csv`: Will skip this element during export only for `csv` export format.
- `skip-export-txt`: Will skip this element during export only for `txt` export format.
- `skip-export-xls`: Will skip this element during export only for `xls` (excel) export format.
- `skip-export-pdf`: Will skip this element during export only for `pdf` export format.
- `skip-export-json`: Will skip this element during export only for `json` export format.
These CSS can be set virtually anywhere. For example `headerOptions`, `contentOptions`, `beforeHeader` etc.
- With release v2.1.0, you can now merge additional action items to the export button dropdown.
- With release v2.3.0 the export functionality includes these additional features:
- A separate export popup progress window is now shown for download.
- Asynchronous export process on the separate window - and avoid any grid refresh
- Set export mime types to be configurable
- Includes support for exporting new file types:
- JSON export
- PDF export (using `yii2-mpdf` extension)
- Adds functionality for full data export
- Enhance icons formatting for export file types (and beautify optionally using font awesome)
- Ability to hide entire column from export using `hiddenFromExport` property, but show them in normal on screen display.
- Ability to do reverse of above. Hide column in display but show on export using `hidden` property.
- Adds ability to integrate a separate extension for full data export i.e. [yii2-export](https://github.com/kartik-v/yii2-export).
### Toggle Grid Data (New)
This extension (with v2.3.0) adds ability to toggle between viewing **all grid data** and **paginated data**. By default the grid displays paginated data. This can be used for exporting complete grid data.
## Data Column (Enhanced)
### \kartik\grid\DataColumn
The default Yii data column has been enhanced with various additional parameters. Refer [documentation](http://demos.krajee.com/grid#data-column) for details.
## Expand Row Column (New)
### \kartik\grid\ExpandRowColumn
An enhanced data column that allows one to expand a grid row and display additional/detail content in a new row below it either directly or via ajax. Refer [documentation](http://demos.krajee.com/grid#expand-row-column) for details.
## Editable Column (New)
### \kartik\grid\EditableColumn
An enhanced data column that allows you to edit the cell content using [kartik\editable\Editable](http://demos.krajee.com/editable) widget. You can selectively choose to disable editable for certain rows or all rows. Refer [documentation](http://demos.krajee.com/grid#editable-column) for details.
## Formula Column (New)
### \kartik\grid\FormulaColumn
This is a new grid column class that extends the \kartik\grid\DataColumn class. It allows calculating formulae just like in spreadsheets - based on
values of other columns in the grid. The formula calculation is done at grid rendering runtime and does not need to query the database. Hence you can use formula columns
within another formula column. Refer [documentation](http://demos.krajee.com/grid#formula-column) for details.
## Boolean Column (New)
### \kartik\grid\BooleanColumn
This is a new grid column class that extends the \kartik\grid\DataColumn class. It automatically converts boolean data (true/false) values to user friendly indicators or labels (that are configurable).
Refer [documentation](http://demos.krajee.com/grid#boolean-column) for details. The following are new features added since release v1.6.0:
- `BooleanColumn` icons have been setup as `ICON_ACTIVE` and `ICON_INACTIVE` constants in GridView.
## Radio Column (New)
### \kartik\grid\RadioColumn
This is a new grid column that works similar to the `CheckboxColumn`, but allows and restricts only a single row to be selected using radio inputs. In addition, it includes a header level clear button to clear the selected rows. It automatically works with the new pageSummary and includes a default styling to work for many scenarios. Refer [documentation](http://demos.krajee.com/grid#radio-column) for details.
## Action Column (Enhanced)
### \kartik\grid\ActionColumn
Enhancements of `\yii\grid\ActionColumn` to include optional dropdown Action menu and work with the new pageSummary and a default styling to work for many scenarios. Refer [documentation](http://demos.krajee.com/grid#action-column) for details.
The following are new features added since release v1.6.0:
- `ActionColumn` content by default has been disabled to appear in export output. The `skip-export` CSS class has been set as default in `headerOptions` and `contentOptions`.
## Serial Column (Enhanced)
### \kartik\grid\SerialColumn
Enhancement of `\yii\grid\SerialColumn` to work with the new pageSummary and a default styling to work for many scenarios. Refer [documentation](http://demos.krajee.com/grid#serial-column) for details.
## Checkbox Column (Enhanced)
### \kartik\grid\CheckboxColumn
Enhancements of `\yii\grid\CheckboxColumn` to work with the new pageSummary and a default styling to work for many scenarios. Refer [documentation](http://demos.krajee.com/grid#checkbox-column) for details.
### Demo
You can see detailed [documentation](http://demos.krajee.com/grid) and [demonstration](http://demos.krajee.com/grid-demo) on usage of the extension. You can also view the [grid grouping demo here](http://demos.krajee.com/group-grid).
## Installation
The preferred way to install this extension is through [composer](http://getcomposer.org/download/).
### Pre-requisites
> Note: Check the [composer.json](https://github.com/kartik-v/yii2-dropdown-x/blob/master/composer.json) for this extension's requirements and dependencies.
You must set the `minimum-stability` to `dev` in the **composer.json** file in your application root folder before installation of this extension OR
if your `minimum-stability` is set to any other value other than `dev`, then set the following in the require section of your composer.json file
```
kartik-v/yii2-grid: "@dev",
kartik-v/yii2-krajee-base: "@dev"
```
Read this [web tip /wiki](http://webtips.krajee.com/setting-composer-minimum-stability-application/) on setting the `minimum-stability` settings for your application's composer.json.
### Install
Either run
```
$ php composer.phar require kartik-v/yii2-grid "@dev"
```
or add
```
"kartik-v/yii2-grid": "@dev"
```
to the ```require``` section of your `composer.json` file.
## Usage
```php
use kartik\grid\GridView;
$gridColumns = [
['class' => 'kartik\grid\SerialColumn'],
[
'class' => 'kartik\grid\EditableColumn',
'attribute' => 'name',
'pageSummary' => 'Page Total',
'vAlign'=>'middle',
'headerOptions'=>['class'=>'kv-sticky-column'],
'contentOptions'=>['class'=>'kv-sticky-column'],
'editableOptions'=>['header'=>'Name', 'size'=>'md']
],
[
'attribute'=>'color',
'value'=>function ($model, $key, $index, $widget) {
return "<span class='badge' style='background-color: {$model->color}'> </span> <code>" .
$model->color . '</code>';
},
'filterType'=>GridView::FILTER_COLOR,
'vAlign'=>'middle',
'format'=>'raw',
'width'=>'150px',
'noWrap'=>true
],
[
'class'=>'kartik\grid\BooleanColumn',
'attribute'=>'status',
'vAlign'=>'middle',
],
[
'class' => 'kartik\grid\ActionColumn',
'dropdown' => true,
'vAlign'=>'middle',
'urlCreator' => function($action, $model, $key, $index) { return '#'; },
'viewOptions'=>['title'=>$viewMsg, 'data-toggle'=>'tooltip'],
'updateOptions'=>['title'=>$updateMsg, 'data-toggle'=>'tooltip'],
'deleteOptions'=>['title'=>$deleteMsg, 'data-toggle'=>'tooltip'],
],
['class' => 'kartik\grid\CheckboxColumn']
];
echo GridView::widget([
'dataProvider' => $dataProvider,
'filterModel' => $searchModel,
'columns' => $gridColumns,
'containerOptions' => ['style'=>'overflow: auto'], // only set when $responsive = false
'beforeHeader'=>[
[
'columns'=>[
['content'=>'Header Before 1', 'options'=>['colspan'=>4, 'class'=>'text-center warning']],
['content'=>'Header Before 2', 'options'=>['colspan'=>4, 'class'=>'text-center warning']],
['content'=>'Header Before 3', 'options'=>['colspan'=>3, 'class'=>'text-center warning']],
],
'options'=>['class'=>'skip-export'] // remove this row from export
]
],
'toolbar' => [
['content'=>
Html::button('<i class="glyphicon glyphicon-plus"></i>', ['type'=>'button', 'title'=>Yii::t('kvgrid', 'Add Book'), 'class'=>'btn btn-success', 'onclick'=>'alert("This will launch the book creation form.\n\nDisabled for this demo!");']) . ' '.
Html::a('<i class="glyphicon glyphicon-repeat"></i>', ['grid-demo'], ['data-pjax'=>0, 'class' => 'btn btn-default', 'title'=>Yii::t('kvgrid', 'Reset Grid')])
],
'{export}',
'{toggleData}'
],
'pjax' => true,
'bordered' => true,
'striped' => false,
'condensed' => false,
'responsive' => true,
'hover' => true,
'floatHeader' => true,
'floatHeaderOptions' => ['scrollingTop' => $scrollingTop],
'showPageSummary' => true,
'panel' => [
'type' => GridView::TYPE_PRIMARY
],
]);
```
## License
**yii2-grid** is released under the BSD 3-Clause License. See the bundled `LICENSE.md` for details.
# Phenil-chauhan
A PHP project for an online school management system.
bb62d7f4d7d090d367279cfa71dfa2421c6ca9a2 | 10,932 | md | Markdown | news/_posts/core-weekly/2019-11-19-coreweekly-week-49-2019.md | prestascott/prestashop.github.io | 5ada918dc801b89c349b1c6bc40a731325363482 | [
"CC0-1.0"
] | 40 | 2015-03-20T22:57:22.000Z | 2022-03-13T21:00:56.000Z | news/_posts/core-weekly/2019-11-19-coreweekly-week-49-2019.md | prestascott/prestashop.github.io | 5ada918dc801b89c349b1c6bc40a731325363482 | [
"CC0-1.0"
] | 455 | 2015-04-04T19:50:25.000Z | 2022-03-31T10:02:11.000Z | news/_posts/core-weekly/2019-11-19-coreweekly-week-49-2019.md | prestascott/prestashop.github.io | 5ada918dc801b89c349b1c6bc40a731325363482 | [
"CC0-1.0"
] | 50 | 2015-04-04T13:17:59.000Z | 2021-09-21T17:33:42.000Z | ---
layout: post
title: "PrestaShop Core Weekly - Week 49 of 2019"
subtitle: "An inside look at the PrestaShop codebase"
date: 2019-12-13
authors: [ PrestaShop ]
image: /assets/images/2017/04/core_weekly_banner.jpg
icon: icon-calendar
tags:
- core-weekly
---
This edition of the Core Weekly report highlights changes in PrestaShop's core codebase from Monday 2nd to Sunday 8th of December 2019.

## General messages
Dear Developers,
We were at [Paris Open Source Summit last week](https://www.opensourcesummit.paris/), and our developer advocate, Antoine Thomas, spoke about his [open source project activities and maturity dashboard](https://github.com/PrestaShop/open-source/tree/master/templates). We are also very glad to announce that we have won the [International Development Award](https://lesacteursdulibre.com/portfolio/prix-developpement-international/) at this event.
## A quick update about PrestaShop's GitHub issues and pull requests:
- [63 new issues](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Aissue+created%3A2019-12-02..2019-12-08) have been created in the project repositories;
- [53 issues have been closed](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Aissue+closed%3A2019-12-02..2019-12-08), including [8 fixed issues](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Aissue+label%3Afixed+closed%3A2019-12-02..2019-12-08) on the core;
- [59 pull requests have been opened](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Apr+created%3A2019-12-02..2019-12-08) in the project repositories;
- [49 pull requests have been closed](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Apr+closed%3A2019-12-02..2019-12-08), including [40 merged pull requests](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Apr+merged%3A2019-12-02..2019-12-08).
## Code changes in the 'develop' branch
### Core
* [#16635](https://github.com/PrestaShop/PrestaShop/pull/16635): Fix discount calculation if two gift-cartrules exist for the same product.. Thank you [@Hlavtox](https://github.com/Hlavtox)
### Back office
* [#16690](https://github.com/PrestaShop/PrestaShop/pull/16690): Add Khmer language, by [@LouiseBonnard](https://github.com/LouiseBonnard)
* [#16662](https://github.com/PrestaShop/PrestaShop/pull/16662): Fix PHP docblocks. Thank you [@mfurga](https://github.com/mfurga)
* [#16570](https://github.com/PrestaShop/PrestaShop/pull/16570): Provides several UX improvements for order pages and allows to change order addresses, by [@matks](https://github.com/matks)
* [#16552](https://github.com/PrestaShop/PrestaShop/pull/16552): Add generic ButtonBulkAction and javascript to handle open in tabs, by [@jolelievre](https://github.com/jolelievre)
* [#16255](https://github.com/PrestaShop/PrestaShop/pull/16255): Migration of order view page messages block. Thank you [@tomas862](https://github.com/tomas862)
* [#16074](https://github.com/PrestaShop/PrestaShop/pull/16074): Move the customer search by id to the first place. Thank you [@levyn](https://github.com/levyn)
### Front office
* [#16638](https://github.com/PrestaShop/PrestaShop/pull/16638): Fix logic and display of customer's cart rules. Thank you [@Hlavtox](https://github.com/Hlavtox)
* [#16528](https://github.com/PrestaShop/PrestaShop/pull/16528): Changing links block style in carrier process. Thank you [@NeOMakinG](https://github.com/NeOMakinG)
* [#16524](https://github.com/PrestaShop/PrestaShop/pull/16524): Switching select of ps_brandlist to a bootstrap dropdown. Thank you [@NeOMakinG](https://github.com/NeOMakinG)
### Installer
* [#16527](https://github.com/PrestaShop/PrestaShop/pull/16527): Check memory_limit during installation, by [@PierreRambaud](https://github.com/PierreRambaud)
### Tests
* [#16727](https://github.com/PrestaShop/PrestaShop/pull/16727): Functional tests - Fix test CRUD profile, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16721](https://github.com/PrestaShop/PrestaShop/pull/16721): Functional tests - Fix echange rate on test currency, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16708](https://github.com/PrestaShop/PrestaShop/pull/16708): Tests - Fix eslint errors on linkchecker, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16705](https://github.com/PrestaShop/PrestaShop/pull/16705): Functional tests - Add test 'Bulk Edit Quantity in stocks', by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16699](https://github.com/PrestaShop/PrestaShop/pull/16699): Functional Tests - add test create official currency, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16681](https://github.com/PrestaShop/PrestaShop/pull/16681): Tests - Fix errors in functional tests, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16679](https://github.com/PrestaShop/PrestaShop/pull/16679): Functional Tests - Fix describe message for generate invoice by date/status. Thank you [@nesrineabdmouleh](https://github.com/nesrineabdmouleh)
* [#16674](https://github.com/PrestaShop/PrestaShop/pull/16674): Functional Tests - Add BO tests for invoice options Enable/Disable. Thank you [@nesrineabdmouleh](https://github.com/nesrineabdmouleh)
* [#16666](https://github.com/PrestaShop/PrestaShop/pull/16666): Tests - Running tests with user root , by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16659](https://github.com/PrestaShop/PrestaShop/pull/16659): Tests Update README.md and DOCKER.md, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16655](https://github.com/PrestaShop/PrestaShop/pull/16655): Functional Tests - Add BO tests for generate invoice by status. Thank you [@nesrineabdmouleh](https://github.com/nesrineabdmouleh)
* [#16647](https://github.com/PrestaShop/PrestaShop/pull/16647): Tests - Using pptruser to run tests with download, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16644](https://github.com/PrestaShop/PrestaShop/pull/16644): Tests - Fix logout used in Employee tests, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16612](https://github.com/PrestaShop/PrestaShop/pull/16612): Functional tests - Adding test update Quantity on catalog-stocks page, by [@boubkerbribri](https://github.com/boubkerbribri)
* [#16566](https://github.com/PrestaShop/PrestaShop/pull/16566): Functional Tests - Add BO tests for generate invoice by date. Thank you [@nesrineabdmouleh](https://github.com/nesrineabdmouleh)
## Code changes in the '1.7.6.x' branch (for 1.7.6.3)
### Back office
* [#16648](https://github.com/PrestaShop/PrestaShop/pull/16648): Fix customer statuses not being able to toggle when optin field is required, by [@matthieu-rolland](https://github.com/matthieu-rolland)
* [#16294](https://github.com/PrestaShop/PrestaShop/pull/16294): Fix email not translated when installing a new language. Thank you [@atomiix](https://github.com/atomiix)
## Code changes in modules, themes & tools
### Google Analytics module
* [#38](https://github.com/PrestaShop/ps_googleanalytics/pull/38): Release v3.2.0 of Google Analytics module, by [@Quetzacoalt91](https://github.com/Quetzacoalt91)
### Prestashop UI Kit
* [#72](https://github.com/PrestaShop/prestashop-ui-kit/pull/72): Adding stylelint to the UI Kit. Thank you [@NeOMakinG](https://github.com/NeOMakinG)
### Changes in developer documentation
* [#416](https://github.com/PrestaShop/docs/pull/416): twig file syntax error fixed. Thank you [@dheerajwebkul](https://github.com/dheerajwebkul)
### Theme custo module
* [#19](https://github.com/PrestaShop/ps_themecusto/pull/19): Make sure the dropzone js is loaded before adding the dropzone component, by [@PierreRambaud](https://github.com/PierreRambaud)
### Currency selector module
* [#14](https://github.com/PrestaShop/ps_currencyselector/pull/14): Update to version 2.0.1, by [@jolelievre](https://github.com/jolelievre)
### Supplier list module
* [#4](https://github.com/PrestaShop/ps_supplierlist/pull/4): Changing suppliers select to bootstrap dropdown. Thank you [@NeOMakinG](https://github.com/NeOMakinG)
### Brand list module
* [#6](https://github.com/PrestaShop/ps_brandlist/pull/6): Changing brands select into boostrap dropdown. Thank you [@NeOMakinG](https://github.com/NeOMakinG)
### Block reassurance module
* [#37](https://github.com/PrestaShop/blockreassurance/pull/37): Added the ability to remove a block, by [@Progi1984](https://github.com/Progi1984)
### Classic-rocket theme
* [#110](https://github.com/PrestaShop/classic-rocket/pull/110): Smarty templates improvement. Thank you [@micka-fdz](https://github.com/micka-fdz)
### Prestafraud module
* [#12](https://github.com/PrestaShop/prestafraud/pull/12): Add missing domains, by [@LouiseBonnard](https://github.com/LouiseBonnard)
### Emails manager module
* [#10](https://github.com/PrestaShop/ps_emailsmanager/pull/10): Add missing domains, by [@LouiseBonnard](https://github.com/LouiseBonnard)
<hr />
Thank you to the contributors whose pull requests were merged since the last Core Weekly Report: [@boubkerbribri](https://github.com/boubkerbribri), [@LouiseBonnard](https://github.com/LouiseBonnard), [@Quetzacoalt91](https://github.com/Quetzacoalt91), [@nesrineabdmouleh](https://github.com/nesrineabdmouleh), [@NeOMakinG](https://github.com/NeOMakinG), [@mfurga](https://github.com/mfurga), [@matthieu-rolland](https://github.com/matthieu-rolland), [@dheerajwebkul](https://github.com/dheerajwebkul), [@Hlavtox](https://github.com/Hlavtox), [@matks](https://github.com/matks), [@PierreRambaud](https://github.com/PierreRambaud), [@jolelievre](https://github.com/jolelievre), [@Progi1984](https://github.com/Progi1984), [@ziegenberg](https://github.com/ziegenberg), [@atomiix](https://github.com/atomiix), [@micka-fdz](https://github.com/micka-fdz), [@tomas862](https://github.com/tomas862), [@levyn](https://github.com/levyn)!
Thank you to the contributors whose PRs haven't been merged yet! And of course, a big thank you to all those who contribute with issues and comments [on GitHub](https://github.com/PrestaShop/PrestaShop)!
If you want to contribute to PrestaShop with code, please read these pages first:
* [Contributing code to PrestaShop](https://devdocs.prestashop.com/1.7/contribute/contribution-guidelines/)
* [Coding standards](https://devdocs.prestashop.com/1.7/development/coding-standards/)
...and if you do not know how to fix an issue but wish to report it, please read this: [How to use GitHub to report an issue](https://devdocs.prestashop.com/1.7/contribute/contribute-reporting-issues/). Thank you!
Happy contributin' everyone!
| 74.876712 | 928 | 0.761434 | yue_Hant | 0.324595 |
bb639a3091a975c21906517a90af9dc24c53bfce | 21,113 | md | Markdown | site/docs/tutorial/cc-toolchain-config.md | xylocarp-whelky/bazel | 4ba404f7ed0473df3f0effa016c107ef677464f6 | [
"Apache-2.0"
] | 1 | 2020-01-26T09:55:10.000Z | 2020-01-26T09:55:10.000Z | site/docs/tutorial/cc-toolchain-config.md | xylocarp-whelky/bazel | 4ba404f7ed0473df3f0effa016c107ef677464f6 | [
"Apache-2.0"
] | 1 | 2019-03-29T19:01:56.000Z | 2019-03-29T19:01:56.000Z | site/docs/tutorial/cc-toolchain-config.md | xylocarp-whelky/bazel | 4ba404f7ed0473df3f0effa016c107ef677464f6 | [
"Apache-2.0"
] | null | null | null | ---
layout: documentation
title: Configuring C++ toolchains
---
# Configuring C++ toolchains
* ToC
{:toc}
## Overview
This tutorial uses an example scenario to describe how to configure C++
toolchains for a project. It's based on an
[example C++ project](https://github.com/bazelbuild/examples/tree/master/cpp-tutorial/stage1)
that builds error-free using `gcc`, `clang`, and `msvc`.
In this tutorial, you will create a Starlark rule that provides additional
configuration for the `cc_toolchain` so that Bazel can build the application
with `emscripten`. The expected outcome is to run
`bazel build --config=asmjs //main:helloworld.js` on a Linux machine and build the
C++ application using [`emscripten`](https://kripken.github.io/emscripten-site/)
targeting [`asm.js`](http://asmjs.org/).
## Setting up the build environment
This tutorial assumes you are on a Linux machine on which you have successfully built
C++ applications; in other words, we assume that appropriate tooling and
libraries have been installed.
Set up your build environment as follows:
1. If you have not already done so,
[download and install Bazel 0.23](../install-ubuntu.html) or later.
2. Download the
[example C++ project](https://github.com/bazelbuild/examples/tree/master/cpp-tutorial/stage1)
from GitHub and place it in an empty directory on your local machine.
3. Add the following `cc_binary` target to the `main/BUILD` file:
```
cc_binary(
name = "helloworld.js",
srcs = ["hello-world.cc"],
)
```
4. Create a `.bazelrc` file at the root of the workspace directory with the
following contents to enable the use of the `--config` flag:
```
# Use our custom-configured c++ toolchain.
build:asmjs --crosstool_top=//toolchain:emscripten
# Use --cpu as a differentiator.
build:asmjs --cpu=asmjs
# Use the default Bazel C++ toolchain to build the tools used during the
# build.
build:asmjs --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
```
In this example, we are using the `--cpu` flag as a differentiator, since
`emscripten` can target both `asmjs` and Web assembly. We are not configuring a
Web assembly toolchain, however. Since Bazel uses many internal tools written in
C++, such as process-wrapper, we are specifying a "sane" C++ toolchain for the
host platform.
## Configuring the C++ toolchain
To configure the C++ toolchain, repeatedly build the application and eliminate
each error one by one as described below.
**Note:** This tutorial assumes you're using Bazel 0.23 or later. If you're
using an older release of Bazel, look for the "Configuring CROSSTOOL" tutorial.
1. Run the build with the following command:
```
bazel build --config=asmjs //main:helloworld.js
```
Because you specified `--crosstool_top=//toolchain:emscripten` in the
`.bazelrc` file, Bazel throws the following error:
```
No such package `toolchain`: BUILD file not found on package path.
```
In the workspace directory, create the `toolchain` directory for the package
and an empty `BUILD` file inside the `toolchain` directory.
2. Run the build again. Because the `toolchain` package does not yet define the
`emscripten` target, Bazel throws the following error:
```
No such target '//toolchain:emscripten': target 'emscripten' not declared in
package 'toolchain' defined by .../toolchain/BUILD
```
In the `toolchain/BUILD` file, define an empty filegroup as follows:
```
package(default_visibility = ['//visibility:public'])
filegroup(name = "emscripten")
```
3. Run the build again. Bazel throws the following error:
```
'//toolchain:emscripten' does not have mandatory providers: 'ToolchainInfo'
```
Bazel discovered that the `--crosstool_top` flag points to a rule that
doesn't provide the necessary `ToolchainInfo` provider. So we need to point
`--crosstool_top` to a rule that does provide `ToolchainInfo` - that is the
`cc_toolchain_suite` rule. In the `toolchain/BUILD` file, replace the empty
filegroup with the following:
```
cc_toolchain_suite(
name = "emscripten",
toolchains = {
"asmjs": ":asmjs_toolchain",
},
)
```
The `toolchains` attribute automatically maps the `--cpu` (and also
`--compiler` when specified) values to `cc_toolchain`. You have not yet
defined any `cc_toolchain` targets and Bazel will complain about that
shortly.
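   When the `--compiler` flag is also used to select a toolchain, the map key
   takes the `"<cpu>|<compiler>"` form. The following entry is a hypothetical
   sketch for illustration only; this tutorial keys the suite on `--cpu` alone:

   ```
   cc_toolchain_suite(
       name = "emscripten",
       toolchains = {
           "asmjs": ":asmjs_toolchain",
           "asmjs|emscripten": ":asmjs_toolchain",
       },
   )
   ```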
4. Run the build again. Bazel throws the following error:
```
Rule '//toolchain:asmjs_toolchain' does not exist
```
Now you need to define `cc_toolchain` targets for every value in the
`cc_toolchain_suite.toolchains` attribute. This is where you specify the
files that comprise the toolchain so that Bazel can set up sandboxing. Add
the following to the `toolchain/BUILD` file:
```
filegroup(name = "empty")
cc_toolchain(
name = "asmjs_toolchain",
toolchain_identifier = "asmjs-toolchain",
toolchain_config = ":asmjs_toolchain_config",
all_files = ":empty",
compiler_files = ":empty",
dwp_files = ":empty",
linker_files = ":empty",
objcopy_files = ":empty",
strip_files = ":empty",
supports_param_files = 0,
)
```
5. Run the build again. Bazel throws the following error:
```
Rule '//toolchain:asmjs-toolchain' does not exist
```
   Let's add an `:asmjs_toolchain_config` target to the `toolchain/BUILD` file:
```
filegroup(name = "asmjs_toolchain_config")
```
6. Run the build again. Bazel throws the following error:
```
'//toolchain:asmjs_toolchain_config' does not have mandatory providers:
'CcToolchainConfigInfo'
```
`CcToolchainConfigInfo` is a provider that we use to configure our C++
toolchains. We are going to create a Starlark rule that will provide
`CcToolchainConfigInfo`. Create a `toolchain/cc_toolchain_config.bzl`
file with the following content:
```
def _impl(ctx):
return cc_common.create_cc_toolchain_config_info(
ctx = ctx,
toolchain_identifier = "asmjs-toolchain",
host_system_name = "i686-unknown-linux-gnu",
target_system_name = "asmjs-unknown-emscripten",
target_cpu = "asmjs",
target_libc = "unknown",
compiler = "emscripten",
abi_version = "unknown",
abi_libc_version = "unknown",
)
cc_toolchain_config = rule(
implementation = _impl,
attrs = {},
provides = [CcToolchainConfigInfo],
)
```
   `cc_common.create_cc_toolchain_config_info()` creates the needed provider
   `CcToolchainConfigInfo`. Now let's declare a target that uses the newly
   implemented `cc_toolchain_config` rule. Add a load statement to
   `toolchain/BUILD`:
```
load(":cc_toolchain_config.bzl", "cc_toolchain_config")
```
And replace the "asmjs_toolchain_config" filegroup with a declaration of a
`cc_toolchain_config` rule:
```
cc_toolchain_config(name = "asmjs_toolchain_config")
```
7. Run the build again. Bazel throws the following error:
```
.../BUILD:1:1: C++ compilation of rule '//:helloworld.js' failed (Exit 1)
src/main/tools/linux-sandbox-pid1.cc:421:
"execvp(toolchain/DUMMY_GCC_TOOL, 0x11f20e0)": No such file or directory
Target //:helloworld.js failed to build`
```
At this point, Bazel has enough information to attempt building the code but
it still does not know what tools to use to complete the required build
actions. We will modify our Starlark rule implementation to tell Bazel what
   tools to use. For that, we'll need the `tool_path()` constructor from
[`@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl`](https://source.bazel.build/bazel/+/4eea5c62a566d21832c93e4c18ec559e75d5c1ce:tools/cpp/cc_toolchain_config_lib.bzl;l=400):
```
# toolchain/cc_toolchain_config.bzl:
load("@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl", "tool_path")
def _impl(ctx):
tool_paths = [
tool_path(
name = "gcc",
path = "emcc.sh",
),
tool_path(
name = "ld",
path = "emcc.sh",
),
tool_path(
name = "ar",
path = "/bin/false",
),
tool_path(
name = "cpp",
path = "/bin/false",
),
tool_path(
name = "gcov",
path = "/bin/false",
),
tool_path(
name = "nm",
path = "/bin/false",
),
tool_path(
name = "objdump",
path = "/bin/false",
),
tool_path(
name = "strip",
path = "/bin/false",
),
]
return cc_common.create_cc_toolchain_config_info(
ctx = ctx,
toolchain_identifier = "asmjs-toolchain",
host_system_name = "i686-unknown-linux-gnu",
target_system_name = "asmjs-unknown-emscripten",
target_cpu = "asmjs",
target_libc = "unknown",
compiler = "emscripten",
abi_version = "unknown",
abi_libc_version = "unknown",
tool_paths = tool_paths,
)
```
You may notice the `emcc.sh` wrapper script, which delegates to the external
`emcc.py` file. Create the script in the `toolchain` package directory with
the following contents and set its executable bit:
```
#!/bin/bash
set -euo pipefail
python external/emscripten_toolchain/emcc.py "$@"
```
Paths specified in the `tool_paths` list are relative to the package where
the `cc_toolchain_config` target is specified.
The `emcc.py` file does not yet exist in the workspace directory. To obtain
it, you can either check the `emscripten` toolchain in with your project or
pull it from its GitHub repository. This tutorial uses the latter approach.
To pull the toolchain from the GitHub repository, add the following
`http_archive` repository definitions to your `WORKSPACE` file:
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = 'emscripten_toolchain',
url = 'https://github.com/kripken/emscripten/archive/1.37.22.tar.gz',
build_file = '//:emscripten-toolchain.BUILD',
strip_prefix = "emscripten-1.37.22",
)
http_archive(
name = 'emscripten_clang',
url = 'https://s3.amazonaws.com/mozilla-games/emscripten/packages/llvm/tag/linux_64bit/emscripten-llvm-e1.37.22.tar.gz',
build_file = '//:emscripten-clang.BUILD',
strip_prefix = "emscripten-llvm-e1.37.22",
)
```
In the workspace directory root, create the `emscripten-toolchain.BUILD` and
`emscripten-clang.BUILD` files that expose these repositories as filegroups
and establish their visibility across the build.
In the workspace directory root, make sure that a `BUILD` file is present.
If not, create an empty one.
```
touch BUILD
```
First create the `emscripten-toolchain.BUILD` file with the following
contents:
```
package(default_visibility = ['//visibility:public'])
filegroup(
name = "all",
srcs = glob(["**/*"]),
)
```
Next, create the `emscripten-clang.BUILD` file with the following contents:
```
   package(default_visibility = ['//visibility:public'])
filegroup(
name = "all",
srcs = glob(["**/*"]),
)
```
   You may notice that the targets simply include all of the files contained
   in the archives pulled by the `http_archive` repository rules. In a
   real-world scenario, you would likely want to be more selective and
   granular by only including the files needed by the build and splitting
   them by action, such as compilation, linking, and so on. For the sake of
   simplicity, this tutorial omits this step; a hedged sketch of the more
   granular approach follows.
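   For illustration, such a split could look like the following. The file
   patterns here are assumptions made up for the sketch and would need to be
   adjusted to the actual layout of the emscripten archives:

   ```
   filegroup(
       name = "compiler_files",
       srcs = glob([
           "emcc.py",
           "system/include/**/*",
       ]),
   )

   filegroup(
       name = "linker_files",
       srcs = glob([
           "emcc.py",
           "system/lib/**/*",
       ]),
   )
   ```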
8. Run the build again. Bazel throws the following error:
```
"execvp(toolchain/emcc.sh, 0x12bd0e0)": No such file or directory
```
You now need to make Bazel aware of the artifacts you added in the previous
step. In particular, the `emcc.sh` script must also be explicitly listed as
a dependency of the corresponding `cc_toolchain` rule. Modify the
`toolchain/BUILD` file to look as follows:
```
package(default_visibility = ["//visibility:public"])
load(":cc_toolchain_config.bzl", "cc_toolchain_config")
cc_toolchain_config(name = "asmjs_toolchain_config")
cc_toolchain_suite(
name = "emscripten",
toolchains = {
"asmjs": ":asmjs_toolchain",
},
)
filegroup(
name = "all",
srcs = [
"emcc.sh",
"@emscripten_clang//:all",
"@emscripten_toolchain//:all",
],
)
cc_toolchain(
name = "asmjs_toolchain",
toolchain_identifier = "asmjs-toolchain",
toolchain_config = ":asmjs_toolchain_config",
all_files = ":all",
compiler_files = ":all",
cpu = "asmjs",
dwp_files = ":empty",
linker_files = ":all",
objcopy_files = ":empty",
strip_files = ":empty",
supports_param_files = 0,
)
```
Congratulations! You are now using the `emscripten` toolchain to build your
C++ sample code. The next steps are optional but are included for
completeness.
9. (Optional) Run the build again. Bazel throws the following error:
```
ERROR: .../BUILD:1:1: C++ compilation of rule '//:helloworld.js' failed (Exit 1)
```
The next step is to make the toolchain deterministic and hermetic - that
is, limit it to only touch files it's supposed to touch and ensure it
doesn't write temporary data outside the sandbox.
You also need to ensure the toolchain does not assume the existence of your
home directory with its configuration files and that it does not depend on
unspecified environment variables.
For our example project, make the following modifications to the
`toolchain/BUILD` file:
```
filegroup(
name = "all",
srcs = [
"emcc.sh",
"@emscripten_toolchain//:all",
"@emscripten_clang//:all",
":emscripten_cache_content"
],
)
filegroup(
name = "emscripten_cache_content",
srcs = glob(["emscripten_cache/**/*"]),
)
```
   Since `emscripten` caches standard library files, you can save time by not
   compiling `stdlib` for every action, and also prevent it from storing
   temporary data in a random place, by checking the precompiled bitcode
   files into the `toolchain/emscripten_cache` directory. You can create them
   by calling the following from the `emscripten_clang` repository (or let
   `emscripten` create them in `~/.emscripten_cache`):
```
python embuilder.py build dlmalloc libcxx libc gl libcxxabi libcxx_noexcept wasm-libc
```
Copy those files to `toolchain/emscripten_cache`.
Also update the `emcc.sh` script to look as follows:
```
#!/bin/bash
set -euo pipefail
export LLVM_ROOT='external/emscripten_clang'
export EMSCRIPTEN_NATIVE_OPTIMIZER='external/emscripten_clang/optimizer'
export BINARYEN_ROOT='external/emscripten_clang/'
export NODE_JS=''
export EMSCRIPTEN_ROOT='external/emscripten_toolchain'
export SPIDERMONKEY_ENGINE=''
export EM_EXCLUSIVE_CACHE_ACCESS=1
export EMCC_SKIP_SANITY_CHECK=1
export EMCC_WASM_BACKEND=0
mkdir -p "tmp/emscripten_cache"
export EM_CACHE="tmp/emscripten_cache"
export TEMP_DIR="tmp"
# Prepare the cache content so emscripten doesn't keep rebuilding it
cp -r toolchain/emscripten_cache/* tmp/emscripten_cache
# Run emscripten to compile and link
python external/emscripten_toolchain/emcc.py "$@"
   # Strip the second line from the generated .d files
find . -name "*.d" -exec sed -i '2d' {} \;
```
Bazel can now properly compile the sample C++ code in `hello-world.cc`.
10. (Optional) Run the build again. Bazel throws the following error:
```
..../BUILD:1:1: undeclared inclusion(s) in rule '//:helloworld.js':
this rule is missing dependency declarations for the following files included by 'helloworld.cc':
'.../external/emscripten_toolchain/system/include/libcxx/stdio.h'
'.../external/emscripten_toolchain/system/include/libcxx/__config'
'.../external/emscripten_toolchain/system/include/libc/stdio.h'
'.../external/emscripten_toolchain/system/include/libc/features.h'
'.../external/emscripten_toolchain/system/include/libc/bits/alltypes.h'
```
At this point you have successfully compiled the example C++ code. The
error above occurs because Bazel uses a `.d` file produced by the compiler
to verify that all includes have been declared and to prune action inputs.
In the `.d` file, Bazel discovered that our source code references system
headers that have not been explicitly declared in the `BUILD` file. This in
and of itself is not a problem and you can easily fix this by adding the
target folders as `-isystem` directories. For this, you'll need to add
a [`feature`](https://source.bazel.build/bazel/+/4eea5c62a566d21832c93e4c18ec559e75d5c1ce:tools/cpp/cc_toolchain_config_lib.bzl;l=336) to the `CcToolchainConfigInfo`.
Modify `toolchain/cc_toolchain_config.bzl` to look like this:
```
load("@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl",
"feature",
"flag_group",
"flag_set",
"tool_path")
load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES")
def _impl(ctx):
tool_paths = [
tool_path(
name = "gcc",
path = "emcc.sh",
),
tool_path(
name = "ld",
path = "emcc.sh",
),
tool_path(
name = "ar",
path = "/bin/false",
),
tool_path(
name = "cpp",
path = "/bin/false",
),
tool_path(
name = "gcov",
path = "/bin/false",
),
tool_path(
name = "nm",
path = "/bin/false",
),
tool_path(
name = "objdump",
path = "/bin/false",
),
tool_path(
name = "strip",
path = "/bin/false",
),
]
toolchain_include_directories_feature = feature(
name = "toolchain_include_directories",
enabled = True,
flag_sets = [
flag_set(
actions = [
ACTION_NAMES.assemble,
ACTION_NAMES.preprocess_assemble,
ACTION_NAMES.linkstamp_compile,
ACTION_NAMES.c_compile,
ACTION_NAMES.cpp_compile,
ACTION_NAMES.cpp_header_parsing,
ACTION_NAMES.cpp_module_compile,
ACTION_NAMES.cpp_module_codegen,
ACTION_NAMES.lto_backend,
ACTION_NAMES.clif_match,
],
flag_groups = [
flag_group(
flags = [
"-isystem",
"external/emscripten_toolchain/system/include/libcxx",
"-isystem",
"external/emscripten_toolchain/system/include/libc",
],
),
],
),
],
)
return cc_common.create_cc_toolchain_config_info(
ctx = ctx,
toolchain_identifier = "asmjs-toolchain",
host_system_name = "i686-unknown-linux-gnu",
target_system_name = "asmjs-unknown-emscripten",
target_cpu = "asmjs",
target_libc = "unknown",
compiler = "emscripten",
abi_version = "unknown",
abi_libc_version = "unknown",
tool_paths = tool_paths,
features = [toolchain_include_directories_feature],
)
cc_toolchain_config = rule(
implementation = _impl,
attrs = {},
provides = [CcToolchainConfigInfo],
)
```
11. (Optional) Run the build again. With this final change, the build now
completes error-free.
| 33.673046 | 181 | 0.620944 | eng_Latn | 0.97454 |
bb6480e9fce8c2b98f9fda9a2559ba409320ad22 | 20,137 | md | Markdown | articles/machine-learning/team-data-science-process/spark-overview.md | ctkalleppally/azure-docs.de-de | d40425c12772e53250efeb064aea56c453474e96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/team-data-science-process/spark-overview.md | ctkalleppally/azure-docs.de-de | d40425c12772e53250efeb064aea56c453474e96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/team-data-science-process/spark-overview.md | ctkalleppally/azure-docs.de-de | d40425c12772e53250efeb064aea56c453474e96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Data Science mit Spark in Azure HDInsight: Team Data Science-Prozess'
description: Das Spark MLlib-Toolkit bringt wesentliche Machine Learning-Modellierungsfunktionen in die verteilte HDInsight-Umgebung ein.
services: machine-learning
author: marktab
manager: marktab
editor: marktab
ms.service: machine-learning
ms.subservice: team-data-science-process
ms.topic: article
ms.date: 01/10/2020
ms.author: tdsp
ms.custom: seodec18, previous-author=deguhath, previous-ms.author=deguhath
ms.openlocfilehash: 1dd82fb00c55e3676929999f204eae8755671038
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/30/2021
ms.locfileid: "93314744"
---
# <a name="overview-of-data-science-using-spark-on-azure-hdinsight"></a>Overview of data science using Spark on Azure HDInsight
This suite of topics shows how to use HDInsight Spark to complete common data science tasks such as data ingestion, feature engineering, modeling, and model evaluation. The data used is a sample of the 2013 NYC taxi trip and fare dataset. The models built include logistic and linear regression, random forests, and gradient boosted trees. The topics also show how to store these models in Azure Blob storage (WASB) and how to score and evaluate their predictive performance. More advanced topics cover how models can be trained using cross-validation and hyper-parameter sweeping. This overview topic also references the topics that describe how to set up the Spark cluster that you need to complete the steps in the walkthroughs provided.
## <a name="spark-and-mllib"></a>Spark and MLlib
[Spark](https://spark.apache.org/) is an open-source parallel processing framework that supports in-memory processing to boost the performance of big-data analytics applications. The Spark processing engine is built for speed, ease of use, and sophisticated analytics. Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. [MLlib](https://spark.apache.org/mllib/) is Spark's scalable machine learning library that brings the algorithmic modeling capabilities to this distributed environment.
## <a name="hdinsight-spark"></a>HDInsight Spark
[HDInsight Spark](../../hdinsight/spark/apache-spark-overview.md) is the Azure-hosted offering of open-source Spark. The Spark cluster also includes support for **Jupyter PySpark notebooks** and can run interactive Spark SQL queries for transforming, filtering, and visualizing data stored in Azure Blobs (WASB). PySpark is the Python API for Spark. The code snippets that provide the solutions and show the relevant plots to visualize the data run in Jupyter notebooks installed on the Spark clusters. The modeling steps in these topics also contain code that shows how to train, evaluate, save, and consume each type of model.
## <a name="setup-spark-clusters-and-jupyter-notebooks"></a>Setup: Spark clusters and Jupyter notebooks
The setup steps and code in this walkthrough are for HDInsight Spark 1.6, but Jupyter notebooks are provided for both HDInsight Spark 1.6 and Spark 2.0 clusters. A description of the notebooks and links to them are provided in the [Readme.md](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Readme.md) for the GitHub repository containing them. Moreover, the code here and in the linked notebooks is generic and should work on any Spark cluster. If you are not using HDInsight Spark, the cluster setup and management steps may be slightly different from what is shown here. For convenience, here are the links to the Jupyter notebooks for Spark 1.6 (to be run in the pySpark kernel of the Jupyter Notebook server) and Spark 2.0 (to be run in the pySpark3 kernel of the Jupyter Notebook server):
### <a name="spark-16-notebooks"></a>Spark 1.6 notebooks
These notebooks are to be run in the pySpark kernel of the Jupyter Notebook server.
- [pySpark-machine-learning-data-science-spark-data-exploration-modeling.ipynb](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark1.6/pySpark-machine-learning-data-science-spark-data-exploration-modeling.ipynb): Provides information on how to perform data exploration, modeling, and scoring with several different algorithms.
- [pySpark-machine-learning-data-science-spark-advanced-data-exploration-modeling.ipynb](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark1.6/pySpark-machine-learning-data-science-spark-advanced-data-exploration-modeling.ipynb): Includes the topics in notebook #1, plus model development using hyper-parameter tuning and cross-validation.
- [pySpark-machine-learning-data-science-spark-model-consumption.ipynb](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark1.6/pySpark-machine-learning-data-science-spark-model-consumption.ipynb): Shows how to operationalize a saved model using Python on HDInsight clusters.
### <a name="spark-20-notebooks"></a>Spark 2.0 notebooks
These notebooks are to be run in the pySpark3 kernel of the Jupyter Notebook server.
- [Spark2.0-pySpark3-machine-learning-data-science-spark-advanced-data-exploration-modeling.ipynb:](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark2.0/Spark2.0-pySpark3-machine-learning-data-science-spark-advanced-data-exploration-modeling.ipynb) This file provides information on how to perform data exploration, modeling, and scoring in Spark 2.0 clusters using the NYC Taxi trip and fare dataset described [here](#the-nyc-2013-taxi-data). This notebook may be a good starting point for quickly exploring the code we have provided for Spark 2.0. For a more detailed notebook that analyzes the NYC Taxi data, see the next notebook in this list. See the notes following this list that compare these notebooks.
- [Spark2.0-pySpark3_NYC_Taxi_Tip_Regression.ipynb](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark2.0/Spark2.0_pySpark3_NYC_Taxi_Tip_Regression.ipynb): This file shows how to perform data wrangling (Spark SQL and dataframe operations), exploration, modeling, and scoring using the NYC Taxi trip and fare dataset described [here](#the-nyc-2013-taxi-data).
- [Spark2.0-pySpark3_Airline_Departure_Delay_Classification.ipynb](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark2.0/Spark2.0_pySpark3_Airline_Departure_Delay_Classification.ipynb): This file shows how to perform data wrangling (Spark SQL and dataframe operations), exploration, modeling, and scoring using the well-known Airline On-time departure dataset from 2011 and 2012. We integrated the airline dataset with the airport weather data (for example, wind speed, temperature, altitude, and so on) prior to modeling, so these weather features can be included in the model.
<!-- -->
> [!NOTE]
> The airline dataset was added to the Spark 2.0 notebooks to better illustrate the use of classification algorithms. See the following links for information about the airline on-time departure dataset and the weather dataset:
>
> - Airline on-time departure data: [https://www.transtats.bts.gov/ONTIME/](https://www.transtats.bts.gov/ONTIME/)
>
> - Airport weather data: [https://www.ncdc.noaa.gov/](https://www.ncdc.noaa.gov/)
<!-- -->
<!-- -->
> [!NOTE]
> The Spark 2.0 notebooks on the NYC taxi and airline flight delay datasets can take 10 minutes or more to run (depending on the size of your HDI cluster). The first notebook in the above list shows many aspects of the data exploration, visualization, and ML model training in a notebook that takes less time to run with a down-sampled NYC dataset, in which the taxi and fare files have been pre-joined: [Spark2.0-pySpark3-machine-learning-data-science-spark-advanced-data-exploration-modeling.ipynb](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark2.0/Spark2.0-pySpark3-machine-learning-data-science-spark-advanced-data-exploration-modeling.ipynb). This notebook takes a much shorter time to finish (2-3 minutes) and may be a good starting point for quickly exploring the code we have provided for Spark 2.0.
<!-- -->
For guidance on the operationalization of a Spark 2.0 model and model consumption for scoring, see the [Spark 1.6 document on consumption](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/pySpark/Spark1.6/pySpark-machine-learning-data-science-spark-model-consumption.ipynb) for an example outlining the steps required. To use this example on Spark 2.0, replace the Python code file with [this file](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/Spark/Python/Spark2.0_ConsumeRFCV_NYCReg.py).
### <a name="prerequisites"></a>Prerequisites
The following procedures are related to Spark 1.6. For the Spark 2.0 version, use the notebooks described and linked previously.
1. You must have an Azure subscription. If you do not already have one, see [How to get Azure Free trial for testing Hadoop in HDInsight](https://azure.microsoft.com/documentation/videos/get-azure-free-trial-for-testing-hadoop-in-hdinsight/).
2. You need a Spark 1.6 cluster to complete this walkthrough. To create one, see the instructions provided in [Get started: Create Apache Spark on Azure HDInsight](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md). The cluster type and version is specified from the **Select Cluster Type** menu.

<!-- -->
> [!NOTE]
> For a topic that shows how to use Scala rather than Python to complete tasks for an end-to-end data science process, see [Data Science using Scala with Spark on Azure](scala-walkthrough.md).
>
>
<!-- -->
> [!INCLUDE [delete-cluster-warning](../../../includes/hdinsight-delete-cluster-warning.md)]
>
>
## <a name="the-nyc-2013-taxi-data"></a>The NYC 2013 taxi data
The NYC Taxi Trip data is about 20 GB of compressed CSV files (~48 GB uncompressed), recording more than 173 million individual trips and the fares paid for each trip. Each trip record includes the pickup and dropoff locations and times, the anonymized driver's (hack) license number, and the unique taxi ID (medallion). The data covers all trips in the year 2013 and is provided in the following two datasets for each month:
1. The 'trip_data' CSV files contain trip details, such as number of passengers, pickup and dropoff points, trip duration, and trip length. Here are a few sample records:
`medallion,hack_license,vendor_id,rate_code,store_and_fwd_flag,pickup_datetime,dropoff_datetime,passenger_count,trip_time_in_secs,trip_distance,pickup_longitude,pickup_latitude,dropoff_longitude,dropoff_latitude`
`89D227B655E5C82AECF13C3F540D4CF4,BA96DE419E711691B9445D6A6307C170,CMT,1,N,2013-01-01 15:11:48,2013-01-01 15:18:10,4,382,1.00,-73.978165,40.757977,-73.989838,40.751171`
`0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,1,N,2013-01-06 00:18:35,2013-01-06 00:22:54,1,259,1.50,-74.006683,40.731781,-73.994499,40.75066`
`0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,1,N,2013-01-05 18:49:41,2013-01-05 18:54:23,1,282,1.10,-74.004707,40.73777,-74.009834,40.726002`
`DFD2202EE08F7A8DC9A57B02ACB81FE2,51EE87E3205C985EF8431D850C786310,CMT,1,N,2013-01-07 23:54:15,2013-01-07 23:58:20,2,244,.70,-73.974602,40.759945,-73.984734,40.759388`
`DFD2202EE08F7A8DC9A57B02ACB81FE2,51EE87E3205C985EF8431D850C786310,CMT,1,N,2013-01-07 23:25:03,2013-01-07 23:34:24,1,560,2.10,-73.97625,40.748528,-74.002586,40.747868`
2. The 'trip_fare' CSV files contain details of the fare paid for each trip, such as payment type, fare amount, surcharge and taxes, tips and tolls, and the total amount paid. Here are a few sample records:
`medallion, hack_license, vendor_id, pickup_datetime, payment_type, fare_amount, surcharge, mta_tax, tip_amount, tolls_amount, total_amount`
`89D227B655E5C82AECF13C3F540D4CF4,BA96DE419E711691B9445D6A6307C170,CMT,2013-01-01 15:11:48,CSH,6.5,0,0.5,0,0,7`
`0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,2013-01-06 00:18:35,CSH,6,0.5,0.5,0,0,7`
`0BD7C8F5BA12B88E0B67BED28BEA73D8,9FD8F69F0804BDB5549F40E9DA1BE472,CMT,2013-01-05 18:49:41,CSH,5.5,1,0.5,0,0,7`
`DFD2202EE08F7A8DC9A57B02ACB81FE2,51EE87E3205C985EF8431D850C786310,CMT,2013-01-07 23:54:15,CSH,5,0.5,0.5,0,0,6`
`DFD2202EE08F7A8DC9A57B02ACB81FE2,51EE87E3205C985EF8431D850C786310,CMT,2013-01-07 23:25:03,CSH,9.5,0.5,0.5,0,0,10.5`
We took a 0.1% sample of these files and joined the trip\_data and trip\_fare CSV files into a single dataset that is used as the input dataset for this walkthrough. The unique key used to join trip\_data and trip\_fare consists of the fields: medallion, hack\_license, and pickup\_datetime. Each record of the dataset contains the following attributes representing a NYC taxi trip:
| Field | Brief description |
| --- | --- |
| medallion |Anonymized taxi medallion (unique taxi ID) |
| hack_license |Anonymized Hackney Carriage license number |
| vendor_id |Taxi vendor ID |
| rate_code |NYC taxi rate of fare |
| store_and_fwd_flag |Store and forward flag |
| pickup_datetime |Pickup date and time |
| dropoff_datetime |Dropoff date and time |
| pickup_hour |Pickup hour |
| pickup_week |Pickup week of the year |
| weekday |Weekday (range 1 to 7) |
| passenger_count |Number of passengers in a taxi trip |
| trip_time_in_secs |Trip time in seconds |
| trip_distance |Trip distance traveled in miles |
| pickup_longitude |Pickup longitude |
| pickup_latitude |Pickup latitude |
| dropoff_longitude |Dropoff longitude |
| dropoff_latitude |Dropoff latitude |
| direct_distance |Direct distance between the pickup and dropoff locations |
| payment_type |Payment type (cash, credit card, and so on) |
| fare_amount |Fare amount in dollars |
| surcharge |Surcharge |
| mta_tax |MTA Metro transportation tax |
| tip_amount |Tip amount |
| tolls_amount |Tolls amount |
| total_amount |Total amount |
| tipped |Tipped (0/1 for no or yes) |
| tip_class |Tip class (0: $0, 1: $0-5, 2: $6-10, 3: $11-20, 4: > $20) |
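To make the schema above concrete, here is a minimal PySpark sketch (for the Spark 2.0 `spark` session) that loads the joined sample into a DataFrame and registers it for Spark SQL. The file path and the `taxi_trips` view name are placeholders invented for this illustration; substitute the WASB location of the sample in your own storage account:
```
# Minimal sketch: load the joined 0.1% taxi sample into a Spark DataFrame.
# The path below is a placeholder for the sample file in your blob storage.
taxi_file = "wasb:///example/data/nyc_taxi_joined_sample.csv"

taxi_df = (spark.read
    .option("header", "true")        # the sample file includes a header row
    .option("inferSchema", "true")   # infer numeric columns such as fare_amount
    .csv(taxi_file))

# Register the DataFrame so it can be queried with Spark SQL.
taxi_df.createOrReplaceTempView("taxi_trips")
taxi_df.select("medallion", "trip_distance", "fare_amount", "tipped").show(5)
```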
## <a name="execute-code-from-a-jupyter-notebook-on-the-spark-cluster"></a>Execute code from a Jupyter notebook on the Spark cluster
You can launch the Jupyter notebook from the Azure portal. Find your Spark cluster on your dashboard, and then click it to enter the management page for your cluster. Next, click **Cluster Dashboards** -> **Jupyter Notebook** to open the notebook associated with the Spark cluster.

You can also browse to ***`https://CLUSTERNAME.azurehdinsight.net/jupyter`*** to access the Jupyter notebooks. Replace the CLUSTERNAME part of this URL with the name of your own cluster. You need the password for your admin account to access the notebooks.

Select PySpark to see a directory that contains a few examples of pre-packaged notebooks that use the PySpark API. The notebooks that contain the code samples for this suite of Spark topics are available on [GitHub](https://github.com/Azure/Azure-MachineLearning-DataScience/tree/master/Misc/Spark/pySpark).
You can upload the notebooks directly from [GitHub](https://github.com/Azure/Azure-MachineLearning-DataScience/tree/master/Misc/Spark/pySpark) to the Jupyter Notebook server on your Spark cluster. On the home page of your Jupyter, click the **Upload** button on the right side of the screen. A file explorer opens. Here you can paste the GitHub (raw content) URL of the notebook and click **Open**.
You see the file name in your Jupyter file list with an **Upload** button again. Click this **Upload** button. Now you have imported the notebook. Repeat these steps to upload the other notebooks from this walkthrough.
> [!TIP]
> You can right-click the links in your browser and select **Copy Link** to get the GitHub raw content URL. You can paste this URL into the Jupyter Upload file explorer dialog box.
>
>
Now you can:
* See the code by clicking on the notebook.
* Execute each cell by pressing **SHIFT + ENTER**.
* Run the entire notebook by clicking **Cell** -> **Run**.
* Use the automatic visualization of queries.
> [!TIP]
> The PySpark kernel automatically visualizes the output of SQL (HiveQL) queries. You are given the option to select among several different types of visualizations (Table, Pie, Line, Area, or Bar) by using the **Type** menu buttons in the notebook:
>
>

## <a name="whats-next"></a>What's next?
Now that you have set up an HDInsight Spark cluster and uploaded the Jupyter notebooks, you are ready to work through the topics that correspond to the three PySpark notebooks. They show how to explore your data and then how to create and consume models. The advanced data exploration and modeling notebook shows how to include cross-validation, hyper-parameter sweeping, and model evaluation.
**Data exploration and modeling with Spark:** Explore the dataset and create and score the machine learning models by working through the [Create binary classification and regression models for data with the Spark MLlib toolkit](spark-data-exploration-modeling.md) topic.
**Model consumption:** To learn how to score the classification and regression models created in this topic, see [Score machine learning models built with Spark](spark-model-consumption.md).
**Cross-validation and hyperparameter sweeping**: See [Advanced data exploration and modeling with Spark](spark-advanced-data-exploration-modeling.md) on how models can be trained using cross-validation and hyper-parameter sweeping.
bb64ea3bcc85005e4f2e234639327b0fd23c1b29 | 2,536 | md | Markdown | docs/windows/modifying-the-layout-grid.md | svick/cpp-docs | 76fd30ff3e0352e2206460503b61f45897e60e4f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-18T12:54:41.000Z | 2021-04-18T12:54:41.000Z | docs/windows/modifying-the-layout-grid.md | Mikejo5000/cpp-docs | 4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/windows/modifying-the-layout-grid.md | Mikejo5000/cpp-docs | 4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-11T13:20:45.000Z | 2020-07-11T13:20:45.000Z | ---
title: "Modifying the Layout Grid | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.technology: ["cpp-windows"]
ms.topic: "conceptual"
dev_langs: ["C++"]
helpviewer_keywords: ["controls [C++], layout grid", "snap to layout grid", "grids, turning on or off", "layout grid in Dialog Editor", "grids, changing size"]
ms.assetid: ec31f595-7542-485b-806f-efbaeccc1b3d
author: "mikeblome"
ms.author: "mblome"
ms.workload: ["cplusplus", "uwp"]
---
# Modifying the Layout Grid
When you are placing or arranging controls in a dialog box, you can use the layout grid for more precise positioning. When the grid is turned on, controls appear to "snap to" the dotted lines of the grid as if magnetized. You can turn this "snap to grid" feature on and off and change the size of the layout grid cells.
### To turn the layout grid on or off
1. From the **Format** menu, choose **Guide Settings**.
2. In the [Guide Settings Dialog Box](../windows/guide-settings-dialog-box.md), select or clear the **Grid** button.
You can still control the grid in individual Dialog editor windows using the **Toggle Grid** button on the [Dialog Editor Toolbar](../windows/showing-or-hiding-the-dialog-editor-toolbar.md).
### To change the size of the layout grid
1. From the **Format** menu, choose **Guide Settings**.
2. In the [Guide Settings Dialog Box](../windows/guide-settings-dialog-box.md), type the height and width in DLUs for the cells in the grid. The minimum height or width is 4 DLUs. For more information on DLUs, see [The Arrangement of Controls on Dialog Boxes](../windows/arrangement-of-controls-on-dialog-boxes.md).
For information on adding resources to managed projects, please see [Resources in Desktop Apps](/dotnet/framework/resources/index) in the *.NET Framework Developer's Guide.* For information on manually adding resource files to managed projects, accessing resources, displaying static resources, and assigning resource strings to properties, see [Creating Resource Files for Desktop Apps](/dotnet/framework/resources/creating-resource-files-for-desktop-apps). For information on globalization and localization of resources in managed apps, see [Globalizing and Localizing .NET Framework Applications](/dotnet/standard/globalization-localization/index).
Requirements
Win32
## See Also
[Dialog Editor States (Guides and Grids)](../windows/dialog-editor-states-guides-and-grids.md)
[Controls in Dialog Boxes](../windows/controls-in-dialog-boxes.md)
| 61.853659 | 654 | 0.744085 | eng_Latn | 0.916981 |
bb6619ebcfd1359069c6a21458e43c3950786e4f | 34,681 | md | Markdown | CHANGELOG.md | jfernandes-uq/teku | da7cb7b682f3bffb650bd08cefe3cdccf30da744 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | jfernandes-uq/teku | da7cb7b682f3bffb650bd08cefe3cdccf30da744 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | jfernandes-uq/teku | da7cb7b682f3bffb650bd08cefe3cdccf30da744 | [
"Apache-2.0"
] | null | null | null | # Changelog
Due to the rapidly changing nature of ETH2 testnets and rapid rate of improvements to Teku,
we recommend most users use the latest `master` branch of Teku.
## Upcoming Breaking Changes
- REST API endpoints will be updated to match emerging standards in a future release.
- `--validators-key-files` and `--validators-key-password-files` have been replaced by `--validator-keys`. The old arguments still work but will be removed in a future release.
## 0.12.9
### Additions and Improvements
- Added `zinken` network definition. As the genesis state is not yet known, an ETH1 endpoint must be specified when connecting to the `zinken` testnet
- Added the option to output validator performance over time. The service can be enabled with `--validators-performance-tracking-enabled`
- Implemented caching of beacon block roots to improve block import and thus sync speed
- Support symlinked keystore files
- Updated to spec version 0.12.3
### Bug Fixes
- Fixed issues where slot calculation and Store time management led to underflow errors
- Fixed issue discovered with the remote validator where the websocket publishing backed up due to slow readers
## 0.12.8
### Additions and Improvements
- Added `spadina` network genesis state so an ETH1 endpoint is no longer required when connecting to Spadina.
### Bug Fixes
- Fixed issue where topped-up deposits did not lead to activated validators.
## 0.12.7
### Additions and Improvements
- Added `spadina` network definition. As the genesis state is not yet known, an ETH1 endpoint must be specified when connecting to the `spadina` testnet
New REST APIs
- `/eth/v1/validator/duties/attester/:epoch` - gets attester duties for the given epoch
- `/eth/v1/validator/duties/proposer/:epoch` - gets block proposer duties for the given epoch
- Deprecated POST `/validator/duties`, as the new standard endpoints are now implemented
- `eth/v1/beacon/genesis` - retrieves details of the chain's genesis
- Deprecated the previous genesis endpoint `/node/genesis_time`
- `/eth/v1/beacon/states/:state_id/validators/:validator_id` - gets validator from state by id
- `/eth/v1/beacon/states/{state_id}/fork` - gets Fork object for requested state
- Deprecated the previous fork endpoint `/node/fork`
- `/eth/v1/events` - subscribes to beacon node events
- Implemented validator keystore file locking to prohibit another process using the same keys and getting slashed
- Updated slashing protection interchange format version to v.4
- Upgraded `Jblst` version to `0.2.0` which adds ARMv8 arch support
- Implemented sending goodbye message to peers on shutdown
- Reduced reorg noise during sync
- Updated metrics library from Besu to latest version
- Better handle P2P target peer bounds
### Bug Fixes
- Fixed debug-tools db subcommands to support writing UInt64 as YAML
- Prevented fork choice head from going backwards when updating the chain
## 0.12.6
### Additions and Improvements
- Added support for the slashing protection interchange format via the `teku slashing-protection import` and `teku slashing-protection export` subcommands
- New REST APIs
- `/eth/v1/node/peers` - lists information about the currently connected peers
- `/eth/v1/node/peers/:peer_id` - list information about a specific peer
- `/eth/v1/node/health` - return the node health status via HTTP status codes. Useful for load balancers
- `/eth/v1/node/syncing` - describe the node's current sync status
- `/v1/node/version` has been moved to `/eth/v1/node/version` and `/v1/node/identity` to `/eth/v1/node/identity` matching changes in the standard API spec
- Gossip messages produced by Teku no longer set the `from`, `signature` or `seqNo` fields
- Enabled Gossipsub flood publishing to reduce propagation times for attestations and blocks produced locally
- Generated P2P private keys and Discovery v5 sequence numbers are now persisted across restarts
- Implemented unicode normalization process for validator keystore passwords
- Progress messages are now logged when loading large number of validator keys at startup
- The default network is now Medalla instead of Altona
- Avoid recalculating validator duties for reorgs that are not long enough to change the scheduling
- Validator duties are now calculated as soon as the genesis state is known instead of waiting for the actual genesis time.
- Added additional validation for deposit events received from the ETH1 node to flag when the ETH1 node has malfunctioned and missed some deposit log events
- Exit with a clear message when the ETH1 service is unable to start. Note that this only applies when the existing deposit data is invalid. Teku will continue retrying if the ETH1 node is not currently available.
- Operations (e.g. attestations, slashings, etc.) included in blocks are now re-added to the pending pool if a reorg causes them to no longer be in the canonical chain
- Removed support for generating unencrypted keystores
- Discv5 now caches the hash of the local node to reduce load caused by significant numbers of incoming discovery messages
- Early access support for running the validator node independently of the beacon node (see [#2683](https://github.com/PegaSysEng/teku/pull/2683) for details). Please note this is not yet a recommended configuration and the CLI options and APIs used are subject to change.
### Bug Fixes
- Gossip messages with null `from`, `signature` or `seqNo` fields are now rebroadcast with the fields still null instead of replaced by default values
- Added validation to ensure that the uncompressed length of Gossip messages is within the possible ranges for a valid message for each gossip message type
- Fixed validation of compressed gossip message length which may incorrectly reject messages
- Fixed unhandled exception reported when an attestation received over gossip had an invalid checkpoint, specifying a block root from after the specified epoch.
- Improved validation of remote peer status
- Fix "AbstractRouter internal error on message control" errors when messages are received from peers before the outbound connection has been fully established
- Fixed errors logged when the ETH1 chain is shorter than the configured follow distance
- Explicitly fsync the RocksDB write ahead log files to disk on shutdown
- Fixed issue where environment variables named `TEKU_VERSION` or `TEKU_HELP` would be incorrectly interpreted as specifying the `--version` and `--help` arguments, preventing Teku from starting
- Avoid performing duplicate tasks to regenerate states in a particular corner case when the task can be rebased to start from the output of an already scheduled task
- Improve performance of cacheable task queue used for state regeneration
- Suppressed `ClosedChannelException` errors in logs
## 0.12.5
### Bug Fixes
- Fix race condition when a block and its parents are received at around the same time which could cause the node to fall out of sync until it reverted to syncing mode to catch up
- Fix issue where attestations from blocks could be processed prior to the block they target being available resulting in `ProtoNode: Delta to be subtracted is greater than node weight` errors
- Return a non-zero exit code from `validator register` subcommand when the user does not confirm the transaction
## 0.12.4
### Additions and Improvements
- Includes a significant number of bug fixes and performance improvements as a result of the recent issues on the Medalla testnet. See https://github.com/PegaSysEng/teku/issues/2596 for a full list of related issues.
- Support loading an entire directory of validator keys using `--validator-keys=<keyDir>:<passDir>`. Individual keystore and password files can also be specified using this new argument (see the example after this list).
- Major reduction in CPU and memory usage during periods of non-finalization by intelligently queuing and combining requests for beacon states and checkpoint states
- Fixed slow startup times during long periods of non-finalization. Non-finalized states are now periodically persisted to disk to avoid needing to replay large numbers of blocks to regenerate state.
- Reduced sync times during long periods of non-finalization by searching for a more recent common ancestor than the finalized checkpoint
- Added explicit UInt64 overflow and underflow protection
- Local signing is now multithreaded to better utilise CPU when running large numbers of validators
- Improved sync performance by continuing to update the target peer's status during the sync process
- Removed support for SecIO. Only the NOISE handshake is now supported.
- Provide a more user-friendly error message and exit if the P2P network port is already in use
- Added new metrics
- `validator_attestation_publication_delay` reports a histogram showing the real time between when attestations were due and when they were published to the network
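A minimal illustration of the new key-loading argument (both directory paths are hypothetical):
```shell
# Load all keystores from one directory, using matching password files from another
teku --validator-keys=/opt/teku/keystores:/opt/teku/passwords
```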
### Bug Fixes
- Fixed issue where attestations were created for the wrong head because fork choice had not yet been run. Results in a significant improvement in attestation inclusion rate.
- Fixed issue where invalid blocks were produced because they included attestations from forks that had different attestation committees
- Fixed issue where block production may be scheduled for the wrong slot due to calculating duties one epoch ahead
- Fixed issue where sync could appear to stall because fork choice data wasn't being updated during sync
- Fixed race condition when updating fork choice votes which could lead to `ProtoNode: Delta to be subtracted is greater than node weight` errors
- Reduce `RejectedExecutionException` noise in logs when Teku is unable to keep up with incoming gossip messages. Other performance improvements should also improve the ability to keep up.
- Fixed cases where states were regenerated without first checking if a cached version was available
- Fixed excessive memory usage in discovery when parsing an invalid RLP message
- Fixed issue where maximum cache sizes could be exceeded resulting in excessive memory usage
- Be more lenient in detecting peers that excessively throttle requests for blocks to better handle long stretches of empty slots
- Fixed error when validating gossiped attestations that point to old blocks
- Fixed netty thread blocked error messages from metrics by avoiding contention on key locks while retrieving metrics
- Fixed issue where sockets were left open when using an external signer
## 0.12.3
### Breaking Changes
- Removed `--validators-unencrypted-key-files` option. This was only intended for early interop testing. Keys should be loaded from encrypted keystores.
### Additions and Improvements
- Add basic built-in slashing protection. Note that this is only a last line of defence against bugs in the beacon node and will not prevent slashing if validator keys are run in multiple processes simultaneously.
- Validator duty logging is now enabled by default. It can be disabled with `--log-include-validator-duties-enabled=false`
- Add updated Medalla bootnodes
- Updated to be compliant with beacon chain spec 0.12.2
- Prioritise more recent attestations when creating blocks as they pay higher rewards
- Refuse to start if the existing database is from a different network to the current configuration
- Added rate limiting for remote peers based on both number of blocks requested and total number of requests made
- Discovery now requires confirmation from multiple peers before updating the external IP reported by the node
- Improved interoperability with other clients: seqno field is now optional for libp2p messages
- REST API updates:
- Added genesis validator root to the `/node/fork` REST API
- Added `/validator/aggregate_attestation`
- Added `/validator/persistent_subnets_subscription`
- Added `/validator/beacon_committee_subscription`
- Added `/validator/aggregate_and_proofs`
- Added `/node/pending_attestation_count`
- Report the current peer count in "time to genesis"
- Added UInt64 overflow and underflow detection
- Improved performance of list shuffling operations
- Snappy compression is now enabled by default for custom networks. It can be disabled with `--p2p-snappy-enabled=false`
### Bug Fixes
- Fixed vector for DOS attack caused by not throttling libp2p response rate. (See https://github.com/libp2p/jvm-libp2p/pull/127 and https://github.com/ethereum/public-attacknets/issues/7 for further details)
- Fixed issue that delayed publication of created attestations by a slot
- Fixed "Invalid attestation: Signature is invalid" errors caused by incorrect caching of committee selections (see https://github.com/PegaSysEng/teku/pull/2501 for further details)
- Fixed issues where validators failed to perform duties because the node incorrectly returned to syncing state
- Fixed `--logging` option to accept lowercase `debug` option. Renamed the `debug` subcommand to avoid the naming conflict
- Avoid lock contention when reading in-memory storage metrics
- Reduced memory usage when loading large numbers of scrypt encoded keystores
- Increased read timeout for ETH1 requests to avoid repeatedly timing out when the ETH1 node is slow
- Reduced noise in logs from `ClosedChannelException` when a peer unexpectedly disconnects
- Fixed `IllegalArgumentException` when RPC response code was greater than 127
- Fixed `IllegalArgumentException` when unexpectedly short discovery messages were received
- Fixed very frequent `InternalErrorException` when a peer disconnected during initial libp2p handshake
- Fixed crash during shutdown caused by metrics accessing RocksDB after it was closed
- Restricted the maximum epoch value accepted by REST API to ensure it can be converted to a slot without overflowing uint64
- Fixed help text for `--p2p-discovery-bootnodes`
## 0.12.2
### Additions and Improvements
- Added `medalla` network definition. As the genesis state is not yet known, an ETH1 endpoint must be specified when connecting to the `medalla` testnet
- Attestations are now created and published immediately after the block for the slot is imported, instead of waiting until 1/3rd of the way through the slot
- The Teku docker image has been upgraded to run Java 14
- `/beacon/state` REST API now supports a `stateRoot` parameter to request states by state root. This includes retrieving states for empty slots
- Reduced gas limit and used current gas price reported by the ETH1 node when sending deposit transactions with the `validator` subcommands
- Validator keys are now loaded in parallel to improve start up time
- Added a docker-compose configuration to quickly launch a 4-node local testnet
- Exposed additional metrics to report on RocksDB memory usage:
- storage_hot_estimated_table_readers_memory
- storage_finalized_estimated_table_readers_memory
- storage_hot_current_size_all_mem_tables
- storage_finalized_current_size_all_mem_tables
- Stricter req/resp message lengths are now enforced based on message content type
### Bug Fixes
- Significant reductions in process resident memory. As this involved a configuration change for RocksDB the most significant reduction is achieved with a new database
- Fixed issue where Teku did not reconnect to peers after a network interruption
- Fixed issue where Teku may stop attempting to create new outbound peer connections
- Fixed incompatibility with deposits with public keys that could not be resolved to a G1 point
- Avoid disconnecting peers that do not return all requested blocks for a block by range request
- Reduced log level for a noisy message about duplicate peer connections
## 0.12.1
### Breaking Changes
- External signing API now uses the data field instead of signingRoot field when making signing requests. Update Eth2Signer to ensure it is compatible with this change.
### Additions and Improvements
- Further reduced memory usage during periods of non-finalization. Checkpoint states can now be dropped from memory and regenerated on demand.
- Added additional metrics:
- `beacon_peer_count` tracks the number of connected peers which have completed chain validation
- `network_peer_chain_validation_attempts` tracks the number and status of peer chain validations
- `network_peer_connection_attempt_count` tracks the number and status of outbound peer requests made
- `network_peer_reputation_cache_size` reports the size of the peer reputation cache
- `beacon_block_import_total` tracks the number of blocks imported
- `beacon_reorgs_total` tracks the number of times a different fork is chosen as the new chain head
- `beacon_published_attestation_total` tracks the total number of attestations sent to the gossip network
- External signing API now uses the data field instead of signingRoot field when making signing requests. Eth2Signer has been updated with this change.
- Enforced the 256 byte limit for Req/Resp error messages
- Blocks by range requests which exceed the maximum block request count are now rejected rather than partially processed as required by the P2P specification
- Improved tracking of peer reputation to avoid reattempting connections to peers we have previously rejected
- ForkChoice data is now persisted to disk, improving startup times especially during long periods of non-finalization
- Reduced the maximum number of blocks held in memory to reduce memory consumption during periods of non-finalization
- Increased the defaults for the target peer count range
- Actively manage peers to ensure we have at least some peers on each of the attestation subnets
- Maintain a minimum number of randomly selected peers, created via outbound connections to provide Sybil resistance
- Updated dependencies to latest versions
### Bug Fixes
- Fixed issue where the validator produced attestations in the incorrect slot or committee resulting in `Produced invalid attestation` messages
- Fixed an issue where attestations were not published to gossip when the node was not subscribed to the attestation subnet
- Fixed a number of unhandled exceptions in discv5
- Fixed an issue where discv5 may return node responses with a total greater than 5
- Fixed `Trying to reuse disposable LengthPrefixedPayloadDecoder` exception
- Fixed an issue where peers were not disconnected when the initial status exchange failed
- Fixed `NullPointerException` when validating attestations which became too old during validation
- Updated the `EXPOSE` ports listed in the Dockerfile to match the new defaults
- Fixed time until genesis log message to handle time zones correctly
- Fixed `NoSuchElementException` when running a validator that was not in active status
- Fixed `IndexOutOfBoundsException` when validating an `IndexedAttestation` which included invalid validator indices
## 0.12.0
### Breaking Changes
- Upgraded to v0.12.1 of the beacon chain spec. This is compatible with the Altona and Onyx testnets. For the Witti testnet use the 0.11.5 release.
- `--metrics-host-whitelist` CLI option has been renamed `--metrics-host-allowlist`
- `--rest-api-host-whitelist` CLI option has been renamed `--rest-api-host-allowlist`
- The REST interface is now correctly set from `--rest-api-interface`, so it will need to be correctly configured to ensure
that hosts specified in the `--rest-api-host-allowlist` are able to access that interface. Note that the default is to listen on localhost only.
- `--rest-api-enabled` is now correctly used to determine whether to start the rest api. Ensure it is set if using the rest api. Note that the default is disabled.
### Additions and Improvements
- Support beacon chain spec v0.12.1.
- The `--network` option now includes support for `altona` and `onyx`. The default network is now `altona`
- Disk space requirements have been very significantly reduced, particularly when using archive storage mode
- Finalized states are now stored in a separate database to non-finalized data.
CLI options to store these under different directories are not yet available but will be considered in the future
- Snapshots of finalized states are stored periodically. The `data-storage-archive-frequency` option controls how frequent these snapshots are.
More frequent snapshots result in greater disk usage but improve the performance of API requests that access finalized state.
- Due to the way regenerated states are cached, iterating through slots in increasing order is significantly faster than iterating in decreasing order
- Teku now exposes RocksDB metrics to prometheus
- The genesis state root, block root and time are now printed at startup
- The time to genesis being reached is now output every 10 minutes, so that it's visibly apparent that Teku is still running
- Requests made to ETH1 nodes now include the Teku version as the user agent
- Unstable options can now be listed with `teku -X`. These options may be changed at any time without warning and are generally not required, but can be useful in some situations.
- Added a bash/zsh autocomplete script for Teku options and subcommands
- Reduced memory usage of state caches
- The list of validators being run is now printed to the console instead of just the log file
### Bug Fixes
- Fixed a very common error message while handling received attestations from peers
- Reduced log level of `Connection reset` log messages from `NoiseXXSecureChannel`.
- Fixed an issue where the last ETH1 block with deposits was processed twice when Teku was restarted pre-genesis. This resulted in an incorrect genesis state being generated.
- Update the private key message at startup to more clearly indicate it is referring to the ENR.
- The `--rest-api-interface` configuration attribute is now set on the HTTP server, it no longer listens on all interfaces.
- The `--rest-api-enabled` flag will determine whether the http server actually starts and binds to a port now.
- Fixed minor memory leak in storage
- Fixed potential crashes in RocksDB while shutting down Teku.
- Fixed an incompatibility with other clients due to Teku incorrectly expecting a length prefix for metadata requests
- When no advertised IP is specified and the p2p-interface is set to `0.0.0.0` or some other "any local" address,
Teku will now resolve the IP for localhost instead of using the "any local" address in its initial ENR.
### Known Issues
- Validator may produce attestations in the incorrect slot or committee resulting in `Produced invalid attestation` messages ([#2179](https://github.com/PegaSysEng/teku/issues/2179))
## 0.11.5
### Additions and Improvements
- Reduced disk space required to store finalized data
- Improved performance when states need to be regenerated during periods of non-finalization
- Reduced on-heap memory usage
- Reduced startup time, particularly during periods of non-finalization
## 0.11.4
### Breaking Changes
- `teku validator generate` no longer sends ETH1 deposit transactions. It only generates BLS keys for validators.
The new `teku validator generate-and-register` subcommand can be used to generate and register validators in one step
### Additions and Improvements
- Renamed `--metrics-host-whitelist` to `--metrics-host-allowlist` and `--rest-api-host-whitelist` to `--rest-api-host-allowlist`
- Added `/v1/node/version` and `/v1/node/identity` REST endpoints. Anyone using `/node/version` should switch to use
the new endpoint, as `/node/version` will be removed in a future release.
- Added `--validators-graffiti="GRAFFITI"` command line option to allow graffiti to be used in block production.
- The Teku version is now printed at startup
- Eth1 deposits now load on startup from the local database, then sync to the eth1 provider once loading is complete.
A hidden flag has been added to disable this functionality if it causes any issues - `--Xeth1-deposits-from-storage-enabled=false`.
Local storage requirements will increase slightly due to the need to store each deposit block
event from the eth1 provider so that it can be replayed during restarts.
- Added a `teku debug db get-deposits` subcommand to load and report the ETH1 deposits from the database. This is not intended for normal use but can be useful when debugging issues.
- The directory structure created by `validator generate` has changed. All files are now generated in a single directory.
Key files are now named using the validator public key, e.g. 814a1a6_validator.json and 814a1a6_withdrawal.json
- Validator voluntary exit gossip topics are now supported
- Proposer slashing gossip topics are now supported
- Attester slashing gossip topics are now supported
- Added support for Gossipsub 1.1
- Building from downloaded source packages rather than git checkout now works
- Memory requirements during non-finalized periods have been reduced, but further work is required in this area
- REST API documentation is automatically published to https://pegasyseng.github.io/teku/ as soon as changes are merged
- A memory dump is now captured if Teku encounters an out of memory error. By default these are
written to the current work directory but an alternate location can be specified by setting the
environment variable `TEKU_OPTS=-XX:HeapDumpPath=/path`
This can be disabled by setting `TEKU_OPTS=-XX:-HeapDumpOnOutOfMemoryError`
- Treat attestations from verified blocks or produced in the validator as "seen" for gossip handling
- Added metrics to report the active and live validators for the current and previous epochs
(`beacon_previous_live_validators`, `beacon_previous_active_validators`, `beacon_current_live_validators`, `beacon_current_active_validators`)
- Added metrics to report on the in-memory store. Includes state cache hit/miss rates and the current number of states, blocks and checkpoint states held in the store.
- Optimised decompression of points in BLS
- Keystores generated with Teku are now compatible with Lighthouse
- Added additional logging during start up to provide progress information while hot states are being regenerated
- Integrated the new fork choice reference tests
### Bug Fixes
- Fixed `StackOverflowException` from fork choice `get_ancestor` method during long periods of non-finalization
- Fixed issue where ETH1 events could be missed immediately after startup resulting in an incorrect genesis state being generated
- Fixed issue where the beacon node was considered in-sync prior to finding any peers
- Recalculate validator duties when a block is imported which may affect duty scheduling.
This occurs when blocks are delayed for more than an epoch, for example due to a network outage
- Block production no longer fails if the Eth1Data vote in the new block is the last required vote
for that Eth1Data and the new Eth1Data allows inclusion of new deposits
- `io.netty.handler.timeout.ReadTimeoutException` and `io.netty.channel.ExtendedClosedChannelException` are no longer reported at `ERROR` level
- Fix `attestation is invalid` error messages caused by a race condition verifying signatures
- Support zero-roundtrip multistream negotiations in libp2p
- Fix missed assertion in fork choice which could lead to an `IndexOutOfBoundsException` (not consensus affecting)
- Clarified the "Minimum genesis time reached" message to be clearer that this only indicates that an
ETH1 block satisfying the time criteria for genesis has been found.
Additional validators may still be required before the genesis state is known.
- Fixed a number of error messages that were logged during Teku shutdown
- Respect the optional `epoch` parameter to the `/beacon/validators` REST API endpoint
- Fix "Command too long" error when running on Windows
- Fixed issue where exceptions may be reported by the uncaught exception handler instead of reported back to the original caller.
This resulted in some RPC streams not being closed correctly.
- Fixed excessive use of CPU regenerating states to map slots to block roots while servicing beacon block by root RPC requests.
### Known Issues
## 0.11.3
### Breaking Changes
- The `--eth1-enabled` option removed. ETH1 will be enabled when an `--eth1-endpoint` is provided and otherwise disabled.
- CLI option `--validators-key-file` renamed to `--validators-unencrypted-key-file` to avoid ambiguity from similar
named CLI option `--validators-key-files` which is used to specify encrypted validator keystore files.
- Added CLI option `--rest-api-host-whitelist` which restricts access to the REST API. Defaults to [localhost, 127.0.0.1]
- External signer API has been changed to not include the type being signed. Please ensure you update to the latest version of Eth2Signer
- `peer generate` and `genesis mock` subcommands now use lower case options consistently:
`--outputFile` is renamed `--output-file`, `--validatorCount` is renamed `--validator-count`, and `--genesisTime` is renamed `--genesis-time`
### Additions and Improvements
- `--network witti` includes the final configuration for the Witti testnet. The genesis state is included so an ETH1 endpoint is no longer required when connecting to Witti
- Teku can now use Infura as the ETH1 endpoint
- ETH1 node is no longer required to maintain historic world state
- Added `--log-include-validator-duties-enabled` option to enable log messages when validator clients produce blocks, attestations or aggregates (defaults to off)
- Improved logging of errors during execution of validator duties to be more informative, less noisy and set log levels more appropriately
- Teku will now exit when an `OutOfMemoryError` is encountered to allow tools like systemd to restart it
- Added support for compiling from source using Java 14
- Improved error messages for a number of configuration errors
- Added protection for DNS rebinding attacks to the REST API via host whitelisting. Configure available hosts with the `--rest-api-host-whitelist` option (defaults to `[localhost, 127.0.0.1]`)
- Simplified API for external signers
- Added support for the `name` field in keystore files
- Improved reporting of errors when using `BOTH` or default log destinations. Unhandled exceptions are now reported to the console but without stack traces. The full stack trace is available in the log file
- Report (to log file) the list of validators being run at startup.
- Append to existing log files rather than rolling them. Avoids the potential for logs to be lost when rolling
### Bug Fixes
- Improved selection of attestations to include in proposed blocks.
- Include attestations received via gossip
- Exclude attestations that have already been included in blocks
- Fix issue where attestations with an incompatible source were included, resulting in an invalid block
- Fixed a file descriptor leak caused by not correctly disconnecting duplicate peer connections
- Fixed race condition when the genesis event occurs which prevented the validator client from subscribing to persistent committee topics and retrieving the initial duties
- ETH1 chain processing recovers better after interruptions to the ETH1 node
- RPC `STATUS` messages now use the finalized checkpoint from the state, not the fork choice store. Fixes a networking incompatibility with Lighthouse when only genesis has been finalized
- Fix incompatibility with Lighthouse in how RPC `METADATA` requests are made
- Large RPC response chunks (> 90K) are now correctly processed
- Improved validation of received attestations
- Fixed a number of race conditions which could lead to inconsistent data being reported via the REST API
- Stopped logging the Javalin ascii art banner during startup
- The `peer generate` subcommand now provides a useful error message when the output file can not be written
- The `peer generate` subcommand no longer silently overwrites an existing output file
- Interop validators are no longer loaded when no validator keys were specified
- Fixed or suppressed a number of `ERROR` level log messages
- Non-fatal errors are no longer reported at `FATAL` log level
### Known Issues
- Block production may fail if the Eth1Data vote in the new block is the last required vote for that Eth1Data and the new Eth1Data allows inclusion of new deposits
- `io.netty.handler.timeout.ReadTimeoutException` and `io.netty.channel.ExtendedClosedChannelException` reported at `ERROR` level.
## 0.11.2
### Additions and Improvements
- Updated to spec version v0.11.3.
- Improved recovery from network changes. Peers are now disconnected if they do not respond for a
period ensuring upstream network interruptions are detected and peers can reconnect.
- The node's ENR is printed at startup even if the genesis state is not yet known.
As per the beacon chain spec, the network ports are still not opened until the genesis state is known.
- OpenAPI schemas are now more compatible with code generating tools.
- Include block root in `/beacon/block` responses.
- Improved error messages when invalid or incompatible CLI options are provided.
- Improved peer discovery by filtering out peers with incompatible `eth2` ENR fields.
- Improved performance of BLS signature verification
- Updated to jvm-libp2p 0.4.0
### Bug Fixes
- Fixed a deadlock condition which could cause block imports to silently stall.
- Initial sync now reaches chain head correctly even when the chain has not finalized for more than 10 epochs.
- Fixed `NullPointerException` and `ArrayIndexOutOfBoundException` intermittently encountered when importing blocks
due to a concurrency issue in batch signature verification.
- `/beacon/chainhead` reported incorrect slot and block root data.
- Fixed a range of race conditions when loading chain data which could result in inconsistent views
of the data or data not being found as it moved from recent to finalized storage.
- Significantly reduced the number of ERROR level log messages.
Invalid network data or unexpectedly disconnected peers are now logged at DEBUG level.
- Storage system did not correctly prune blocks loaded from disk on startup when they became finalized.
### Known Issues
- This release provides support for the Witti testnet via `--network witti` however the configuration
for this testnet is not yet stable and will likely differ from the one currently used.
- The Schlesi testnet has been abandoned. The `--network schlesi` option will be removed in a future release.
- Memory usage grows significantly during periods of non-finalization.
- Teku requires the ETH1 endpoint to keep historic world state available for at least the ETH1 voting period.
This is typically more historic state than is kept when ETH1 nodes are pruning state.
Workaround is to connect to an archive node or configure the node to preserve a greater period of historic world state.
| 72.101871 | 272 | 0.799083 | eng_Latn | 0.998691 |
bb6733ac4efe4d4c50c2fa155f55d366f57a6091 | 2,155 | md | Markdown | README.md | hojin-kr/docker-codeigniter | 8281efe445c2e1b0fc5ae79c4f3ea58fefff3292 | [
"MIT"
] | null | null | null | README.md | hojin-kr/docker-codeigniter | 8281efe445c2e1b0fc5ae79c4f3ea58fefff3292 | [
"MIT"
] | null | null | null | README.md | hojin-kr/docker-codeigniter | 8281efe445c2e1b0fc5ae79c4f3ea58fefff3292 | [
"MIT"
] | null | null | null | # Docker + CodeIgniter + VirtualHost
This image serves as a starting point for legacy CodeIgniter projects.
## Supported Tags
- `latest`: [Dockerfile](https://github.com/hojin-kr/docker-codeigniter-virtualhost/blob/master/Dockerfile)
## Quick Start
Clone the git repository.
```shell
$ git clone https://github.com/hojin-kr/docker-codeigniter-virtualhost.git
```
Edit the virtual host settings in the 000-default.conf file.
```shell
$ vi 000-default.conf
```
Build the container image.
```shell
$ docker build -t hojindev/codeigniter-virtualhost .
```
Copy docker-compose.yml into your CodeIgniter root directory and bring up Docker.
~~~
# docker-compose.yml
version: '2.2'
services:
web:
image: hojindev/codeigniter-virtualhost
ports:
- 80:80
volumes:
- $PWD:/var/www/html/
~~~
or
To also set up a database alongside the web service:
~~~
# docker-compose.yml
version: '2.2'
services:
web:
image: hojindev/codeigniter-virtualhost
ports:
- 80:80
volumes:
- $PWD:/var/www/html/
db:
image: mysql:5.7
ports:
- 3306:3306
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=true
volumes:
- $PWD/../mysql_dev:/var/lib/mysql
~~~
```shell
$ docker-compose up
```
## Changes
- Both the mysql and mysqli PHP extensions are installed
- session.auto_start = 1
### Environment Variables
Environment variables can be passed to docker-compose.
The following variables trigger actions run by the entrypoint script at runtime.
| Variable | Default | Action |
| -------- | ------- | ------ |
| ENABLE_CRON | false | `true` starts a cron process within the container |
| FWD_REMOTE_IP | false | `true` enables remote IP forwarding from proxy (Apache) |
| PHP_DISPLAY_ERRORS | off | Override value for `display_errors` in docker-ci-php.ini |
| PHP_POST_MAX_SIZE | 32M | Override value for `post_max_size` in docker-ci-php.ini |
| PHP_MEMORY_LIMIT | 128M | Override value for `memory_limit` in docker-ci-php.ini |
| PHP_UPLOAD_MAX_FILESIZE | 32M | Override value for `upload_max_filesize` in docker-ci-php.ini |
| TZ | UTC | Override value for `data.timezone` in docker-ci-php.ini |
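For example, a few of these variables can be overridden in the compose service definition (the values below are illustrative only):
~~~
# docker-compose.yml (excerpt)
services:
  web:
    image: hojindev/codeigniter-virtualhost
    environment:
      - ENABLE_CRON=true
      - PHP_MEMORY_LIMIT=256M
      - TZ=Asia/Seoul
~~~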
### Original repository
[https://github.com/aspendigital/docker-codeigniter](https://github.com/aspendigital/docker-codeigniter)
| 23.172043 | 107 | 0.692807 | kor_Hang | 0.557206 |
bb675482d986c238f84bc69f3874b20d829ad2ff | 44 | md | Markdown | README.md | cdhenry/ComputedChanges | 88ef13b33089d405fe5582353121ba688fe7c8d9 | [
"MIT"
] | null | null | null | README.md | cdhenry/ComputedChanges | 88ef13b33089d405fe5582353121ba688fe7c8d9 | [
"MIT"
] | null | null | null | README.md | cdhenry/ComputedChanges | 88ef13b33089d405fe5582353121ba688fe7c8d9 | [
"MIT"
] | null | null | null | # ComputedChanges
Computed Changes in VueJS
| 14.666667 | 25 | 0.840909 | eng_Latn | 0.779524 |
bb67bdf86ac5194a25fa7dde0e53ab9894665ddc | 320 | md | Markdown | docs/Lipid.md | omegakid1902/mkdocs-garden | a5005b79dc42a2ed96c726df775d35d5726141d4 | [
"CC0-1.0"
] | 1 | 2021-09-16T00:29:16.000Z | 2021-09-16T00:29:16.000Z | docs/Lipid.md | omegakid1902/mkdocs-garden | a5005b79dc42a2ed96c726df775d35d5726141d4 | [
"CC0-1.0"
] | null | null | null | docs/Lipid.md | omegakid1902/mkdocs-garden | a5005b79dc42a2ed96c726df775d35d5726141d4 | [
"CC0-1.0"
] | null | null | null | ---
title: Lipid
UID:
created: August 10, 2021 6:24 PM
tags:
- '#created/2021/Aug/10'
- '#seed🥜'
- '#permanent/concept'
aliases:
- Lipid
- chất béo
---
# Lipid
## Notes:
## Ideas & thoughts:
[[Carbohydrate]]
[[Protein]]
## Questions:
## References:
```dataview
list
from [[Lipid]]
sort file.name asc
```
| 10.322581 | 32 | 0.596875 | eng_Latn | 0.343326 |
bb68f5e80663e888ad0a26f8888da98b8b8c852a | 607 | md | Markdown | ChangeLog.md | sdmg15/libfort | ae6757db6183a75e7fc37b8a6b29fde247fb9021 | [
"MIT"
] | null | null | null | ChangeLog.md | sdmg15/libfort | ae6757db6183a75e7fc37b8a6b29fde247fb9021 | [
"MIT"
] | null | null | null | ChangeLog.md | sdmg15/libfort | ae6757db6183a75e7fc37b8a6b29fde247fb9021 | [
"MIT"
] | null | null | null | ## v0.1.6
### Bug fixes
- Changed specific style reset tags to a universal reset style tag.
## v0.1.5
### Tests
- Add tests for 'mk_wcswidth' function.
## v0.1.4
### Internal
- Removed redundant build options from cmake files.
- Added build for arm platform with drone.ci.
## v0.1.3
### Internal
- Fixed error with incorrect types when determining the class of a 'wchar_t' symbol on platforms with unsigned 'wchar_t'.
## v0.1.2
### Internal
- Removed '-Werror' flag from the build process.
## v0.1.1
### Internal
- Add library version and soversion to the built library.
## v0.1.0
Initial release
| 14.804878 | 114 | 0.693575 | eng_Latn | 0.975901 |
bb693699f632fd64f19f087bf9cb1d8ead401568 | 1,609 | md | Markdown | docs/odbc/microsoft/sqlallocconnect-visual-foxpro-odbc-driver.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/odbc/microsoft/sqlallocconnect-visual-foxpro-odbc-driver.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/odbc/microsoft/sqlallocconnect-visual-foxpro-odbc-driver.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: SQLAllocConnect (Visual FoxPro ODBC Driver) | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: conceptual
helpviewer_keywords:
- SQLAllocConnect function [ODBC], Visual FoxPro ODBC Driver
ms.assetid: 70d48b12-def5-475c-b8e1-654a55fdfe0f
author: MightyPen
ms.author: genemi
ms.openlocfilehash: 2889ef8e5c6f3a0db4e133ddf0bdd51fda338b40
ms.sourcegitcommit: b87d36c46b39af8b929ad94ec707dee8800950f5
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/08/2020
ms.locfileid: "68063295"
---
# <a name="sqlallocconnect-visual-foxpro-odbc-driver"></a>SQLAllocConnect (Visual FoxPro-ODBC-Treiber)
> [!NOTE]
> This topic contains Visual FoxPro ODBC driver-specific information. For general information about this function, see the appropriate topic under [ODBC API Reference](../../odbc/reference/syntax/odbc-api-reference.md).
Support: Full
ODBC API conformance: Core Level
Allocates memory for a connection handle, *hdbc*, within the environment identified by *henv*. The Driver Manager processes this call and invokes the driver's **SQLAllocConnect** when [SQLConnect](../../odbc/microsoft/sqlconnect-visual-foxpro-odbc-driver.md), **SQLBrowseConnect**, or [SQLDriverConnect](../../odbc/microsoft/sqldriverconnect-visual-foxpro-odbc-driver.md) is called.
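For illustration, a minimal C sketch of the allocation sequence is shown below (error checking omitted; SQLAllocConnect is the ODBC 1.0-style allocator that was later superseded by SQLAllocHandle):
```c
#include <sql.h>
SQLHENV henv;
SQLHDBC hdbc;
/* Allocate an environment handle, then a connection handle within it. */
SQLAllocEnv(&henv);
SQLAllocConnect(henv, &hdbc);
/* ... connect, do work, disconnect ... */
SQLFreeConnect(hdbc);
SQLFreeEnv(henv);
```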
For more information, see [SQLAllocConnect](../../odbc/reference/syntax/sqlallocconnect-function.md) in the *ODBC Programmer's Reference*.
| 48.757576 | 405 | 0.794282 | deu_Latn | 0.56803 |
bb698ad43e2faebd2d19f98ee9851fa787ea8cba | 4,451 | md | Markdown | skype/skype-ps/skype/Get-CsTenant.md | MSDN-WhiteKnight/office-docs-powershell | 5a5a33843bcc38546ece78d5e24c98d82195364f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-12-12T18:16:09.000Z | 2019-12-12T18:16:09.000Z | skype/skype-ps/skype/Get-CsTenant.md | MSDN-WhiteKnight/office-docs-powershell | 5a5a33843bcc38546ece78d5e24c98d82195364f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | skype/skype-ps/skype/Get-CsTenant.md | MSDN-WhiteKnight/office-docs-powershell | 5a5a33843bcc38546ece78d5e24c98d82195364f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-19T16:57:39.000Z | 2021-04-19T16:57:39.000Z | ---
external help file: Microsoft.Rtc.Management.Hosted.dll-help.xml
applicable: Skype for Business Online
title: Get-CsTenant
schema: 2.0.0
manager: bulenteg
author: tomkau
ms.author: tomkau
ms.reviewer:
---
# Get-CsTenant
## SYNOPSIS
Returns information about the Skype for Business Online tenants that have been configured for use in your organization.
Tenants represent groups of online users.
## SYNTAX
```
Get-CsTenant [-Filter <String>] [-DomainController <Fqdn>] [[-Identity] <OUIdParameter>] [-ResultSize <Int32>]
[<CommonParameters>]
```
## DESCRIPTION
In Skype for Business Online, tenants are groups of users who have accounts homed on the service.
Organizations will typically have a single tenant in which to house all their user accounts.
## EXAMPLES
### -------------------------- Example 1 --------------------------
```
Get-CsTenant
```
The command shown in Example 1 returns information about your tenant.
Organizations will have only one tenant.
## PARAMETERS
### -DomainController
This parameter is not used with Skype for Business Online.
```yaml
Type: Fqdn
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Filter
Enables you to return data by using Active Directory attributes and without having to specify the full Active Directory distinguished name.
For example, to retrieve a tenant by using the tenant display name, use syntax similar to this:
Get-CsTenant -Filter {DisplayName -eq "FabrikamTenant"}
To return all tenants that use a Fabrikam domain use this syntax:
Get-CsTenant -Filter {Domains -like "*fabrikam*"}
The Filter parameter uses the same Windows PowerShell filtering syntax that is used by the `Where-Object` cmdlet.
You cannot use both the Identity parameter and the Filter parameter in the same command.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Identity
Unique identifier for the tenant.
For example:
-Identity "bf19b7db-6960-41e5-a139-2aa373474354"
If you do not include either the Identity or the Filter parameter then the `Get-CsTenant` cmdlet will return information about all your tenants.
```yaml
Type: OUIdParameter
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -ResultSize
Enables you to limit the number of records returned by the cmdlet.
For example, to return seven tenants (regardless of the number of tenants that are in your forest) include the ResultSize parameter and set the parameter value to 7.
Note that there is no way to guarantee which 7 tenants will be returned.
The result size can be set to any whole number between 0 and 2147483647, inclusive.
If set to 0 the command will run, but no data will be returned.
If you set the result size to 7 but you have only three tenants in your forest, the command will return those three tenants and then complete without error.
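For example, to return a maximum of seven tenants, use this syntax:
Get-CsTenant -ResultSize 7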
```yaml
Type: Int32
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -AsJob
{{Fill AsJob Description}}
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### Microsoft.Rtc.Management.ADConnect.Schema.TenantObject or String
The `Get-CsTenant` cmdlet accepts pipelined instances of the Microsoft.Rtc.Management.ADConnect.Schema.TenantObject object as well as string values representing the Identity of the tenant (for example "bf19b7db-6960-41e5-a139-2aa373474354").
## OUTPUTS
### Microsoft.Rtc.Management.ADConnect.Schema.TenantObject
## NOTES
## RELATED LINKS
| 27.645963 | 315 | 0.771287 | eng_Latn | 0.951089 |
bb698b84ef8340be4f51a5265e582023dd572ba5 | 4,650 | md | Markdown | controls/radfiledialogs/features/environment-variables.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | controls/radfiledialogs/features/environment-variables.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | controls/radfiledialogs/features/environment-variables.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | ---
title: Environment Variables Support
page_title: Environment Variables Support
description: Check our "Environment Variables Support" documentation article for the RadFileDialogs {{ site.framework_name }} control.
slug: radfiledialogs-features-environment-variables
tags: environment,variables
published: True
position: 5
---
# Environment Variables
The RadFileDialogs provide out-of-the-box support for the most common system environment variables that refer to well-known directories in the Windows file system, as well as to any user-defined variables. This article showcases the common behavior that all file dialogs share with regard to environment variables, and also the differences between them.
> When adding a new environment variable, you have to restart the application (if you started it from the .exe file). This is needed in order to get the new environment variables from Windows. If the application is started from Visual Studio, a restart of Visual Studio is required.
## Common Behavior
>The examples in this section assume that you have an environment variable defined, named "test" with a value of "D:\Test5".
When you type an environment variable in the **Path Navigation Pane** of the file dialogs and press Enter, the **Tree Navigation Pane** will navigate to the respective directory. **Figure 1** demonstrates what happens when typing %test% in the Path Navigation Pane.
#### Figure 1: Typing an EV in the breadcrumb

When typing an environment variable followed by **"\"** in the Path Navigation Pane, a suggestions list containing all of the child folders of the current directory will be shown. When doing the same in the Operations Pane, all child folders and files will be listed. This is demonstrated in **Figure 2**.
#### Figure 2: Listing all child files/folders in an EV directory from breadcrumb and autocomplete

> If you type %invalid% and hit Enter, assuming that you have not defined an environment variable named "invalid", an **InvalidOperationException** will be thrown and the [ExceptionRaised]({%slug radfiledialogs-events%}) event will be raised.
> If the environment variable returns a valid path to a file, the file will be opened using the corresponding Windows program.
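A minimal C# sketch of reacting to such failures by handling the [ExceptionRaised]({%slug radfiledialogs-events%}) event is shown below. The handler body is illustrative only; check the events article for the exact event arguments.
```C#
RadOpenFileDialog dialog = new RadOpenFileDialog();
// Raised, for example, when an undefined variable like %invalid% is entered.
dialog.ExceptionRaised += (sender, e) =>
{
    // Log the error or notify the user here.
};
dialog.ShowDialog();
```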
## Different Behavior
Depending on whether the environment variable points to a folder or a file, the different RadFileDialogs have different behavior.
* **Environment variables that point to files**
When typing an environment variable which points to a file in the Operations Pane, the **RadOpenFileDialog's** Open button will be enabled. If you click it, the respective name of the file will be returned. As for the **RadSaveFileDialog**, when clicking the Save button, an attempt will be made to overwrite the file and a message box will appear for confirmation. In the **RadOpenFolderDialog**, when typing an EV which points to a file, the Open Folder button will remain disabled.
* **Environment variables that point to folders**
In both the **RadOpenFileDialog** and the **RadSaveFileDialog**, when typing an environment variable that points to a folder in the Operations Pane, the Tree Navigation Pane will navigate to the respective directory. In the same case the **RadOpenFolderDialog** will have its Open Folder button enabled and if clicked, it will return the FileName, FileNames and SafeFileNames of the respective directory.
## Common Windows Environment Variables
* __ComSpec__: Typing this environment variable and hitting Enter will open the terminal.
* __windir__: Typing "%windir%/", followed by a valid path, is a supported scenario.
* __USERNAME__: It is possible to include an environment variable in the middle of a file path. "C:\Users\%USERNAME%\", followed by a valid path, is a supported scenario.
* __userdomain__: When typing this environment variable and hitting Enter, an **InvalidOperationException** will be thrown and the [ExceptionRaised]({%slug radfiledialogs-events%}) event will be raised, since this environment variable does not point to a file or folder.
The examples above assume that the default paths of the listed system environment variables have not been changed.
## See Also
* [Visual Structure]({%slug radfiledialogs-visual-structure%})
* [Common Features]({%slug radfiledialogs-features-common%})
* [RadOpenFileDialog]({%slug radfiledialogs-radopenfiledialog%})
* [RadOpenFolderDialog]({%slug radfiledialogs-radopenfolderdialog%})
* [RadSaveFileDialog]({%slug radfiledialogs-radsavefiledialog%})
| 71.538462 | 487 | 0.791183 | eng_Latn | 0.993706 |
bb69ad8565abc441cbc58d83027acfaabc90a37f | 7,208 | md | Markdown | articles/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md | ClosetheWorld/azure-docs.ja-jp | e6c4cee8628ba1bb30a2138f7bb2d12f9b48dc51 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-10-22T22:16:30.000Z | 2019-10-22T22:16:30.000Z | articles/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md | ClosetheWorld/azure-docs.ja-jp | e6c4cee8628ba1bb30a2138f7bb2d12f9b48dc51 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md | ClosetheWorld/azure-docs.ja-jp | e6c4cee8628ba1bb30a2138f7bb2d12f9b48dc51 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: SAP HANA on Azure (L インスタンス) の高可用性とディザスター リカバリー | Microsoft Docs
description: 高可用性を確立し、SAP HANA on Azure (L インスタンス) のディザスター リカバリーを計画します。
services: virtual-machines-linux
documentationcenter: ''
author: saghorpa
manager: gwallace
editor: ''
ms.service: virtual-machines-linux
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure
ms.date: 09/10/2018
ms.author: saghorpa
ms.custom: H1Hack27Feb2017
ms.openlocfilehash: d0150aeace3960d075bbf61c1dd0bba4865aaf2b
ms.sourcegitcommit: 44e85b95baf7dfb9e92fb38f03c2a1bc31765415
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/28/2019
ms.locfileid: "70099721"
---
# <a name="sap-hana-large-instances-high-availability-and-disaster-recovery-on-azure"></a>Azure での SAP HANA L インスタンスの高可用性とディザスター リカバリー
>[!IMPORTANT]
>This document is not a replacement for the SAP HANA administration documentation or SAP Notes. It assumes that the reader has a solid understanding of and expertise in SAP HANA administration and operations, especially for the topics of backup, restore, high availability, and disaster recovery.
It is important that you understand the steps and processes that apply to your environment and to the HANA version and release you use. Some processes described in this document are simplified for a better general understanding, and are not meant to be used as detailed steps for a final operations handbook. If you want to create an operations handbook for your configuration, you need to test and exercise the actual processes, and document the processes related to your specific configuration.
High availability and disaster recovery (DR) are crucial aspects of running your mission-critical SAP HANA on Azure (Large Instances) servers. It's important to work with SAP, your system integrator, and Microsoft to properly architect and implement the right high availability and disaster recovery strategy. It is also important to consider the recovery point objective (RPO) and the recovery time objective, which are specific to your environment.
Microsoft supports several SAP HANA high availability capabilities with HANA Large Instances. These capabilities include:
- **Storage replication**: The storage system's ability to replicate all data to another HANA Large Instance stamp in another Azure region. SAP HANA operates independently of this method. This capability is the default disaster recovery mechanism offered for HANA Large Instances.
- **HANA system replication**: The [replication of all data in SAP HANA](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/b74e16a9e09541749a745f41246a065e.html) to a separate SAP HANA system. The recovery time objective is minimized through data replication at regular intervals. SAP HANA supports asynchronous, synchronous in-memory, and synchronous modes. Synchronous mode is used only for SAP HANA systems that are within the same data center or less than 100 km apart. With the current design of HANA Large Instance stamps, HANA system replication can be used for high availability within one region only. HANA system replication requires a third-party reverse proxy or routing component for disaster recovery configurations into another Azure region.
- **Host auto-failover**: A local fault-recovery solution for SAP HANA to use as an alternative to HANA system replication. If the master node becomes unavailable, one or more standby SAP HANA nodes configured in scale-out mode take over, and SAP HANA automatically fails over to a standby node.
SAP HANA on Azure (Large Instances) is offered in two Azure regions in each of four geopolitical areas (US, Australia, Europe, and Japan). The two regions within a geopolitical area that host HANA Large Instance stamps are connected to separate, dedicated network circuits. These are used to replicate storage snapshots to provide disaster recovery methods. Replication is not established by default; it is set up for customers who ordered disaster recovery functionality. Storage replication depends on the use of storage snapshots for HANA Large Instances. It is not possible to choose an Azure region in a different geopolitical area as a DR region.
The following table shows the currently supported high availability and disaster recovery methods and combinations:
| Scenario supported in HANA Large Instances | High availability option | Disaster recovery option | Comments |
| --- | --- | --- | --- |
| Single node | Not available. | Dedicated DR setup.<br /> Multipurpose DR setup. | |
| Host auto-failover: Scale-out (with or without standby)<br /> (including 1+1) | Possible with the standby taking the active role.<br /> HANA controls the role switch. | Dedicated DR setup.<br /> Multipurpose DR setup.<br /> DR synchronization by using storage replication. | HANA volume sets are attached to all the nodes.<br /> The DR site must have the same number of nodes. |
| HANA system replication | Possible with a primary or secondary setup.<br /> The secondary moves to the primary role in a failover case.<br /> HANA system replication and OS-controlled failover. | Dedicated DR setup.<br /> Multipurpose DR setup.<br /> DR synchronization by using storage replication.<br /> DR by using HANA system replication is not yet possible without third-party components. | A separate set of disk volumes is attached to each node.<br /> Only the disk volumes of the secondary replica at the production site are replicated to the DR location.<br /> One set of volumes is required at the DR site. |
A dedicated DR setup is one in which the HANA Large Instance unit in the DR site is not used for running any other workload or non-production system. The unit is passive and is deployed only if a disaster failover is executed. This setup, however, is not the preferred choice for most customers.
To learn about the storage layout and Ethernet details for your architecture, see [supported scenarios for HLI](hana-supported-scenario.md).
> [!NOTE]
> [SAP HANA MCOD deployments](https://launchpad.support.sap.com/#/notes/1681092) (multiple HANA instances on one unit) as overlay scenarios work with the HA and DR methods listed in the table. An exception is the use of HANA system replication with an automatic failover cluster based on Pacemaker. In such a case, only one HANA instance per unit is supported. For [SAP HANA MDC](https://launchpad.support.sap.com/#/notes/2096000) deployments, only non-storage-based HA and DR methods work if more than one tenant is deployed. With one tenant deployed, all the listed methods are valid.
A multipurpose DR setup is one in which the HANA Large Instance unit on the DR site runs a non-production workload. In case of a disaster, shut down the non-production system, mount the storage-replicated (additional) volume sets, and start the production HANA instance. Most customers who use the HANA Large Instance disaster recovery functionality use this configuration.
For more information about SAP HANA high availability, see the following SAP articles:
- [SAP HANA high availability whitepaper](https://go.sap.com/documents/2016/05/f8e5eeba-737c-0010-82c7-eda71af511fa.html)
- [SAP HANA Administration Guide](https://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf)
- [SAP HANA Academy video about SAP HANA system replication](https://scn.sap.com/community/hana-in-memory/blog/2015/05/19/sap-hana-system-replication)
- [SAP Support Note #1999880 - FAQ about SAP HANA system replication](https://apps.support.sap.com/sap/support/knowledge/preview/en/1999880)
- [SAP Support Note #2165547 - SAP HANA backup and restore within an SAP HANA system replication environment](https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/sno/ui_entry/entry.htm?param=69765F6D6F64653D3030312669765F7361706E6F7465735F6E756D6265723D3231363535343726)
- [SAP Support Note #1984882 - Using SAP HANA system replication for hardware exchange with minimum/zero downtime](https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/sno/ui_entry/entry.htm?param=69765F6D6F64653D3030312669765F7361706E6F7465735F6E756D6265723D3139383438383226)
## <a name="network-considerations-for-disaster-recovery-with-hana-large-instances"></a>HANA L インスタンスでのディザスター リカバリーのネットワークに関する考慮事項
HANA L インスタンスのディザスター リカバリー機能を利用するには、2 つの Azure リージョンへのネットワーク接続を設計する必要があります。 また、オンプレミスからメイン Azure リージョンへの Azure ExpressRoute 回線接続のほか、オンプレミスからディザスター リカバリーを行うリージョンへの回線接続が必要になります。 この方法では、Microsoft Enterprise Edge Router (MSEE) の場所を含め、Azure リージョンで問題が発生した状況に対処できます。
2 つ目の方法として、1 つのリージョンの SAP HANA on Azure L インスタンスに接続するすべての Azure Virtual Network を、もう一方のリージョンの HANA L インスタンスに接続する ExpressRoute 回線に接続できます。 この *相互接続* により、リージョン 1 の Azure Virtual Network で実行されているサービスがリージョン 2 の HANA L インスタンス ユニットに接続することができ、その逆も可能になります。 この方法では、オンプレミスの場所を Azure と接続する MSEE の場所のいずれか 1 つのみがオフラインになった場合に対処できます。
次の図では、ディザスター リカバリーの場合の回復性に優れた構成を示します。

## <a name="other-requirements-with-hana-large-instances-storage-replication-for-disaster-recovery"></a>ディザスター リカバリーに HANA L インスタンスのストレージ レプリケーションを使用する際のその他の要件
HANA L インスタンスでのディザスター リカバリー セットアップについては、上記の要件以外に次の要件があります。
- 製品 SKU と同じサイズの SAP HANA on Azure L インスタンス SKU を指定し、ディザスター リカバリー リージョンにデプロイする必要があります。 現在の顧客デプロイでは、これらのインスタンスを使用して HANA 非実稼働インスタンスを実行しています。 これらの構成は、"*多目的 DR セットアップ*" と呼ばれます。
- ディザスター リカバリー サイトで復旧する SAP HANA on Azure (L インスタンス) SKU ごとに、DR サイトの追加のストレージを指定する必要があります。 追加のストレージを購入すると、ストレージ ボリュームを割り当てることができます。 運用 Azure リージョンからディザスター リカバリー Azure リージョンへのストレージ レプリケーションのターゲットとなるボリュームを割り当てることができます。
- プライマリで HSR をセットアップし、DR サイトへのストレージ ベースのレプリケーションをセットアップする場合は、DR サイトにストレージを追加購入して、プライマリ ノードとセカンダリ ノードの両方のデータが DR サイトにレプリケートされるようにする必要があります。
**次のステップ**
- 「[バックアップおよび復元作業](hana-backup-restore.md)」を参照してください。
| 69.980583 | 545 | 0.820339 | jpn_Jpan | 0.552582 |
bb6a4a85d1f292fbf6a83abe02046662862db350 | 878 | md | Markdown | docs/framework/wcf/diagnostics/event-logging/complusinvokingmethodfailedmismatchedtransactions.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/event-logging/complusinvokingmethodfailedmismatchedtransactions.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/event-logging/complusinvokingmethodfailedmismatchedtransactions.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: 'Learn more about: ComPlusInvokingMethodFailedMismatchedTransactions'
title: ComPlusInvokingMethodFailedMismatchedTransactions
ms.date: 03/30/2017
ms.assetid: d13f1978-ff42-443a-939f-75c8c8d50286
ms.openlocfilehash: 0cc8ea86606f55fffb5994ac9cb122520ffc788d
ms.sourcegitcommit: ddf7edb67715a5b9a45e3dd44536dabc153c1de0
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 02/06/2021
ms.locfileid: "99771359"
---
# <a name="complusinvokingmethodfailedmismatchedtransactions"></a>ComPlusInvokingMethodFailedMismatchedTransactions
Id: 135

Severity: Error

Category: ServiceModel
## <a name="description"></a>Description
ComPlus: Method call transaction mismatch.
## <a name="see-also"></a>Ayrıca bkz.
- [Etkinlikleri Günlüğe Kaydetme](index.md)
- [Etkinlik Genel Başvurusu](events-general-reference.md)
| 30.275862 | 115 | 0.802961 | tur_Latn | 0.314435 |
bb6a7cd14eab561faad5e1894f7b57db648fc39c | 4,780 | md | Markdown | README.md | grizz-it/json-schema | c31c84de436990e880c779d2252796aa662f19ce | [
"MIT"
] | 1 | 2021-04-03T14:56:43.000Z | 2021-04-03T14:56:43.000Z | README.md | grizz-it/json-schema | c31c84de436990e880c779d2252796aa662f19ce | [
"MIT"
] | null | null | null | README.md | grizz-it/json-schema | c31c84de436990e880c779d2252796aa662f19ce | [
"MIT"
] | null | null | null | [](https://travis-ci.com/grizz-it/json-schema)
# GrizzIT JSON Schema
This package contains a [JSON schema](https://json-schema.org/) validator
library for PHP. It supports Drafts 07 and 06.
To get a grip on JSON schemas, what they are, and how they work, please see the
[json-schema-org](https://json-schema.org/learn/) manual.
The package generates a reusable validation object against which data can be
verified.
## Installation
To install the package run the following command:
```
composer require grizz-it/json-schema
```
## Usage
Before a validation object can be created, the factory for the validators needs
to be instantiated. This can be done by using the following snippet:
```php
<?php
use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;
$factory = new SchemaValidatorFactory();
```
All of the methods described below for generating a validation object result
in a [ValidatorInterface](https://github.com/grizz-it/validator/blob/master/src/Common/ValidatorInterface.php).
To verify data against this object, simply pass the data to the `__invoke`
method, like so:
```php
<?php
use GrizzIt\Validator\Common\ValidatorInterface;
/** @var ValidatorInterface $validator */
$validator($myData); // returns true or false.
```
After the factory is created, there are four options for creating the validation object.
### Object injection
If an object has already been created by, for example, a previous call to
`json_decode` (the second parameter must be either null or false to get an
object), the validation object can be created by calling the `create` method
on the previously instantiated `SchemaValidatorFactory`.
```php
<?php
use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;
/** @var object|bool $schema */
/** @var SchemaValidatorFactory $factory */
$factory->create($schema);
```
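
For illustration, here is a minimal end-to-end sketch based on the calls shown above. The schema and data are invented for the example; `json_decode` is called without its second argument so it yields objects, and the validator returns true or false as described in the usage section:

```php
<?php

use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;

// Illustrative schema: an object that requires a string "name" property.
$schema = json_decode(
    '{"type": "object", "required": ["name"], "properties": {"name": {"type": "string"}}}'
);

$factory = new SchemaValidatorFactory();
$validator = $factory->create($schema);

var_dump($validator(json_decode('{"name": "GrizzIT"}'))); // bool(true)
var_dump($validator(json_decode('{"name": 42}')));        // bool(false)
```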
It is also possible to create a verified validation object. This requires the
`$schema` property to be set on the provided schema; the schema is then
validated against the schema defined in that property. This can be done with
the following snippet:
```php
<?php
use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;
/** @var object|bool $schema */
/** @var SchemaValidatorFactory $factory */
$factory->createVerifiedValidator($schema);
```
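
As a sketch of when verification applies, the following hypothetical schema declares the Draft 07 meta-schema in its `$schema` property, so the factory can check the schema itself before building the validator. Note that resolving the referenced meta-schema may require network access:

```php
<?php

use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;

// Hypothetical schema that declares the meta-schema it must conform to.
$schema = json_decode(
    '{"$schema": "http://json-schema.org/draft-07/schema#", "type": "string", "minLength": 1}'
);

$factory = new SchemaValidatorFactory();
$validator = $factory->createVerifiedValidator($schema);

var_dump($validator('foo')); // bool(true)
var_dump($validator(''));    // bool(false)
```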
### Local file
To create a validator object from a local schema file, it is also possible to
pass the file path to a method and let that method load the schema. This can
be done with the following snippet:
```php
<?php
use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;
/** @var SchemaValidatorFactory $factory */
$factory->createFromLocalFile('path/to/my/schema.json');
```
### Remote file
A schema can also be loaded from a remote location, for example:
[http://json-schema.org/draft-07/schema#](http://json-schema.org/draft-07/schema#).
To load a schema from a remote location, use the following method:
```php
<?php
use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;
/** @var SchemaValidatorFactory $factory */
$factory->createFromRemoteFile('http://json-schema.org/draft-07/schema#');
```
### From string
If a validation object needs to be created from a JSON string, use the method
`createFromString`.
```php
<?php
use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;
/** @var SchemaValidatorFactory $factory */
$factory->createFromString(
'{"$ref": "http://json-schema.org/draft-07/schema#"}'
);
```
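
As a usage sketch, the same method also accepts a full inline schema rather than a `$ref`; the schema and data below are illustrative:

```php
<?php

use GrizzIt\JsonSchema\Factory\SchemaValidatorFactory;

$factory = new SchemaValidatorFactory();

// Illustrative inline schema: a unique list of non-empty strings.
$validator = $factory->createFromString(
    '{"type": "array", "items": {"type": "string", "minLength": 1}, "uniqueItems": true}'
);

var_dump($validator(json_decode('["a", "b"]'))); // bool(true)
var_dump($validator(json_decode('["a", "a"]'))); // bool(false), duplicate items
```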
## Change log
Please see [CHANGELOG](CHANGELOG.md) for more information on what has changed recently.
## Contributing
Please see [CONTRIBUTING](CONTRIBUTING.md) and [CODE_OF_CONDUCT](CODE_OF_CONDUCT.md) for details.
## MIT License
Copyright (c) GrizzIT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| 30.641026 | 123 | 0.768201 | eng_Latn | 0.900405 |
bb6b9df3ab3002d4184b7f61c07c3bd2c4148719 | 2,821 | md | Markdown | dev-itpro/developer/readiness/readiness-checklist-a-languange-branding.md | christianbraeunlich/dynamics365smb-devitpro-pb | 564dbba9682ed01e7c0d7aee64db97dc1922637f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dev-itpro/developer/readiness/readiness-checklist-a-languange-branding.md | christianbraeunlich/dynamics365smb-devitpro-pb | 564dbba9682ed01e7c0d7aee64db97dc1922637f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dev-itpro/developer/readiness/readiness-checklist-a-languange-branding.md | christianbraeunlich/dynamics365smb-devitpro-pb | 564dbba9682ed01e7c0d7aee64db97dc1922637f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Language, Branding, and Images"
description: "Guidelines on language, branding and images"
author: a-emniel
ms.custom: na
ms.date: 04/01/2021
ms.reviewer: solsen
ms.topic: conceptual
ms.author: a-emniel
---
# Language, Branding, and Microsoft Images
## Language requirements
Your app can be in any language; if it is not in English, a document with an English translation is required. See the table below for what to consider for each storefront detail when it comes to language:
|Description | Requirements |
|------------|---------------|
|Offer name:| Can be in any language |
|Summary:| Can be in any one language (not more because of length constraints).|
|Description:| Can be in multiple languages (max 3) but must be the same content for each language.|
|Screenshots:| Can be in any language.|
|Videos:|Can be in any language. If the audio is not in English, English captioning is required.|
|Documents:| Can be in any language.|
|Keywords:| Can be in any language.|
|Support / Help:|Can be in any language.|
|Privacy:| Can be in any language.|
|Licensing:| Can be in any language.|
|Supported countries:| Must be in English.|
|Supported editions:| Must be in English.|
|Supported languages:| Must be in English.|
> [!NOTE]
> If you are targeting an international audience, we recommend you use English on AppSource.
## Branding requirements
Be consistent with branding throughout your communications. All references (spoken and written in videos, docs, the app landing page, screenshots, title bars, and so on) must use the correct branding, that is, either **Microsoft Dynamics 365 Business Central**, **Dynamics 365 Business Central**, or **Business Central**.

Make sure to use the right reference throughout your content:
- Based on the "new" branding guidelines, the full name, "**Microsoft Dynamics 365 Business Central**" must be used in its entirety at first mention on the page, and at all prominent locations such as titles, headings etc. (unless the Microsoft brand is already clearly established)
- **Dynamics 365 Business Central** on second mention (or first mention if the Microsoft brand is already clearly established)
- **Business Central** on subsequent mention as long as Microsoft Dynamics 365 has been clearly established on that given page or content.
> [!NOTE]
> Don't make any references to acronyms (such as D365, BC, or MSDYN365BC) or old brand names (such as "for financials" or "for Finance & Operations").
## Microsoft images
- You can include the [AppSource badge](https://appsource.microsoft.com/blogs/new-get-it-from-badging-for-microsoft-appsource-and-azure-marketplace-available-in-the-marketing-resources-guide) in your marketing material.
- If you want to use the Dynamics 365 icons, you can learn about do's and don'ts [here](/dynamics365/get-started/icons).
| 53.226415 | 313 | 0.753633 | eng_Latn | 0.99483 |
bb6cb9a928e1660824be6aecfe49e0672758fd07 | 19 | md | Markdown | README.md | matiasmillain/hero-finder-vuejs | 07ec5e7c18a2915fbd656d85ec41354e92c4834c | [
"MIT"
] | null | null | null | README.md | matiasmillain/hero-finder-vuejs | 07ec5e7c18a2915fbd656d85ec41354e92c4834c | [
"MIT"
] | null | null | null | README.md | matiasmillain/hero-finder-vuejs | 07ec5e7c18a2915fbd656d85ec41354e92c4834c | [
"MIT"
] | null | null | null | # hero-finder-vuejs | 19 | 19 | 0.789474 | dan_Latn | 0.577815 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.