## Coupons Qr - App
*...the best way to save your coupons!*
<img src="https://github.com/Xiryl/Coupons-Qr-App/blob/master/img/icona.png" height="75px"> <a href='https://play.google.com/store/apps/details?id=it.chiarani.qrcoupons'><img alt='Disponibile su Google Play' src='https://play.google.com/intl/en_us/badges/images/generic/it_badge_web_generic.png' height='70px' /></a>
Coupons QR is an application that allows you to manage, save and archive your QR and barcode coupons. Say goodbye to outdated applications that are inconvenient to use.
Download it from the [Play Store](https://play.google.com/store/apps/details?id=it.chiarani.qrcoupons).
### Screen
|Home|New QR| Set Date|Qr List| Qr Details|
| --- | --- | --- | --- | --- |
|<img src="https://github.com/Xiryl/Coupons-Qr-App/blob/master/img/unnamed.png" height="200px">| <img src="https://github.com/Xiryl/Coupons-Qr-App/blob/master/img/unnamed-2.png" height="200px">| <img src="https://github.com/Xiryl/Coupons-Qr-App/blob/master/img/unnamed-3.png" height="200px">| <img src="https://github.com/Xiryl/Coupons-Qr-App/blob/master/img/unnamed-4.png" height="200px">| <img src="https://github.com/Xiryl/Coupons-Qr-App/blob/master/img/unnamed-5.png" height="200px">|
### Privacy
> The app does not save any personal user data. We only use Google Firebase services to handle anomalous crashes.
### License
> This app is released under the MIT License
#### Version Number & Last Update
> *2.0 - First Release*
#### Thanks
> I thank [@francescotonini](https://github.com/francescotonini) for help during development and [@evaviesi](https://www.facebook.com/eva.viesi) for help with translations and typos
---
swagger: "2.0"
x-collection-name: US Digital Registry
x-complete: 0
info:
title: U.S. Digital Registry Social Media API Social Media Services
description: This returns a list of services along with the number of accounts registered
with them
version: 1.0.0
host: usdigitalregistry.digitalgov.gov
basePath: /api/v1
schemes:
- http
produces:
- application/json
consumes:
- application/json
paths:
/agencies.json:
get:
summary: Agencies
description: This lists all active agencies in the system. These agencies can
be used to query for social media accounts, mobile products, and galleries.
operationId: Api::V1::Agencies#index
x-api-path-slug: agencies-json-get
parameters:
- in: query
name: no_accounts
description: Including this parameter with value true will cause the endpoint
to include agencies that have no account in the system
type: string
format: string
- in: query
name: page
description: Page number
- in: query
name: page_size
description: Number of results per page
- in: query
name: q
description: String to compare to the name & acronym of the agencies
responses:
200:
description: OK
tags:
- Agencies
/agencies/{id}.json:
get:
summary: Agency
description: This returns an agency based on an ID.
operationId: Api::V1::Agencies#show
x-api-path-slug: agenciesid-json-get
parameters:
- in: path
name: id
description: ID of the agency
responses:
200:
description: OK
tags:
- Agencies
/mobile_apps/tokeninput.json:
get:
summary: Mobile App Token
description: This returns tokens representing agencies, tags, and a basic text
search token for the purpose of building search dialogs
operationId: Api::V1::MobileApps#tokeninput
x-api-path-slug: mobile-appstokeninput-json-get
parameters:
- in: query
name: q
description: String to compare to the various values
responses:
200:
description: OK
tags:
- Mobile Apps
/mobile_apps.json:
get:
summary: Mobile Apps
description: This lists all active mobile apps. It accepts parameters to perform
basic search as well as searching by tag and agency.
operationId: Api::V1::MobileApps#index
x-api-path-slug: mobile-apps-json-get
parameters:
- in: query
name: agencies
description: Comma separated list of agency ids
- in: query
name: language
description: Language of the social media accounts to return
- in: query
name: page
description: Page number
- in: query
name: page_size
description: Number of results per page, defaults to 25
- in: query
name: q
description: String to compare to the name & short description of the mobile
apps
- in: query
name: tags
description: Comma separated list of tag ids
responses:
200:
description: OK
tags:
- Mobile Apps
/mobile_apps/{id}.json:
get:
summary: Mobile App
description: This returns an mobile app based on an ID.
operationId: Api::V1::MobileApps#show
x-api-path-slug: mobile-appsid-json-get
parameters:
- in: path
name: id
description: ID of the mobile app
responses:
200:
description: OK
tags:
- Mobile Apps
/social_media/verify.json:
get:
summary: Social Media Verify
description: This returns an agency based on an URL. If not found, it will return
a 404
operationId: Api::V1::SocialMedia#verify
x-api-path-slug: social-mediaverify-json-get
parameters:
- in: query
name: url
description: URL of social media account
responses:
200:
description: OK
tags:
- Social Media
/social_media/services.json:
get:
summary: Social Media Services
description: This returns a list of services along with the number of accounts
registered with them
operationId: Api::V1::SocialMedia#services
x-api-path-slug: social-mediaservices-json-get
responses:
200:
description: OK
tags:
- Social Media
x-streamrank:
polling_total_time_average: 0
polling_size_download_average: 0
streaming_total_time_average: 0
streaming_size_download_average: 0
change_yes: 0
change_no: 0
time_percentage: 0
size_percentage: 0
change_percentage: 0
last_run: ""
days_run: 0
minute_run: 0
---
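A quick way to exercise the listing above is a plain HTTP request against the services endpoint. This is a sketch that assumes the `host`, `basePath`, and `http` scheme recorded in the definition are still live:

```sh
# Fetch the list of social media services and their account counts
# (URL assembled from the host/basePath/path values in the definition above)
curl "http://usdigitalregistry.digitalgov.gov/api/v1/social_media/services.json"
```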
---
parent: attribute.ce
source: fate-grand-order
id: sweet-days
language: zh
weight: 0
---
It brings a bittersweet thrill to empty nights.
Even if this repetitive daily life never meets a Valentine's Day, perhaps one day there will still be a moment like this.
Whether the uniformed lady of steel or the saint of the silver platter, may they too receive this gift.
# 2.4 Porting Guide
This guide describes the changes required to migrate a TAU element from 2.3 to 2.4.
This feature is supported in mobile applications only.
As the Tizen version number changes, TAU has been updated with new features. When migrating from 2.3 to 2.4, consider the following issues:
- Selectors for defining the UI components
- New and deprecated components in 2.4
- Gesture event handling
## Backward Compatibility in TAU
To support backward compatibility, TAU provides the `tau.support-2.3.js` and `tau.support-2.3.css` files.
If you want to use deprecated components, you can import those files. See the following example:
```
<html>
<head>
<script type="text/javascript" src="../lib/tau/mobile/js/tau.js"></script>
<script type="text/javascript" src="../lib/tau/mobile/js/tau.support-2.3.js"></script>
<link rel="stylesheet" href="../lib/tau/mobile/theme/default/tau.css">
<link rel="stylesheet" href="../lib/tau/mobile/theme/default/tau.support-2.3.css">
<link rel="stylesheet" href="css/custom.css">
</head>
</html>
```
> **Note**
> The `tau.support-2.3` files are only for backward compatibility. The above components are **DEPRECATED since Tizen 2.4** and will be removed in Tizen 3.0.
## Component Definitions
Since Tizen 2.4, it is strongly recommended to use the `class` selector to define the components in HTML files. The `"data-role"` selector has been deprecated and is no longer supported.
The class selectors in TAU are composed of the `ui-` prefix followed by the component name. For more information, see [UI Component API Reference](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_component_list.htm).
The following example shows how to define the UI components before and after:
- Before:
```
<!--Create Expandable component-->
<div data-role="expandable">
<h1>Expandable head</h1>
<div>Content</div>
</div>
<!--Create ToggleSwitch component-->
<select data-role="toggleswitch">
<option value="off"></option>
<option value="on"></option>
</select>
<!--Create SectionChanger component-->
<div data-role="section-changer">
<div>
<section>
<h3>LEFT1 PAGE</h3>
</section>
</div>
</div>
```
> **Note**
> The old selector with `data-role` can still be used in 2.4, but it is **DEPRECATED** and will no longer be supported in the next version.
- After:
```
<!--Create Expandable component-->
<div class="ui-expandable">
<h1>Expandable head</h1>
<div>Content</div>
</div>
<!--Create ToggleSwitch component-->
<select class="ui-toggleswitch">
<option value="off"></option>
<option value="on"></option>
</select>
<!--Create SectionChanger component-->
<div class="ui-section-changer">
<div>
<section>
<h3>LEFT1 PAGE</h3>
</section>
</div>
</div>
```
## New Components in 2.4
Several new mobile components have been added to TAU since 2.4. Some are renamed versions of old components (such as Checkbox and Radio), while others are newly introduced with new features and themes (such as Colored List View). The following table shows the new TAU components in 2.4.
For more information, see the [Mobile UI Component API Reference](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_component_list.htm).
**Table: New TAU mobile components in 2.4**
| UI component | Description |
| ---------------------------------------- | ---------------------------------------- |
| [Checkbox](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_Checkbox.htm) | The checkbox component changes the default browser checkboxes to a form more adapted to the mobile environment. |
| [Colored List View](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_ColoredListview.htm) | The colored list view component shows each list item with a gradient background color. |
| [Dropdown Menu](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_DropdownMenu.htm) | The dropdown menu component is used to select one option. It is created as a drop-down list form. |
| [Expandable](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_Expandable.htm) | The expandable component allows you to expand or collapse content when tapped. |
| [Floating Actions](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_FloatingActions.htm) | The floating actions component shows a floating action button that can be moved left and right. |
| [Grid View](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_GridView.htm) | The grid view component provides a grid-type list and presents content that are easily identified as images. |
| [Index Scrollbar](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_IndexScrollBar.htm) | The index scrollbar component shows a shortcut list that is bound to its parent scrollbar and list view. |
| [Page Indicator](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_PageIndicator.htm) | The page indicator component presents as a dot-typed indicator. |
| [Panel Changer](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_PanelChanger.htm) | The panel changer and panel component provide a multi-page layout in a page component. |
| [Radio](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_Radio.htm) | The radio component changes the default browser radio button to a form more adapted to the mobile environment. |
| [Search Input](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_SearchInput.htm) | The search input component is used to search for page content. |
| [Section Changer](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_SectionChanger.htm) | The section changer component provides an application architecture, which has multiple sections on one page. |
| [Tabs](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_Tabs.htm) | The tabs component shows an unordered list of buttons on the screen wrapped together in a single group. |
| [Text Enveloper](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_TextEnveloper.htm) | The text enveloper component changes a text item to a button. |
## Deprecated Components
Some mobile components have been deprecated and are no longer supported since 2.4. Instead of using the deprecated components, see the following table and replace them with the suggested new components or an HTML element.
For more information on deprecated components, see the [Mobile Component API Reference](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/mobile_component_list.htm).
**Table: Deprecated TAU mobile components**
| UI component | Replace with |
| ---------------------------------------- | ---------------------------------------- |
| [Autodividers](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Autodividers.htm) | - |
| [CheckboxRadio](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Checkboxradio.htm) | Checkbox component for the checkbox, radio component for the radio button. |
| [Collapsible](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Collapsible.htm) | Expandable component. |
| [ControlGroup](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Controlgroup.htm) | Implement your own customized application style. |
| [Fast Scroll](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_FastScroll.htm) | Index scrollbar component. |
| [Gallery](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Gallery.htm) | Implement your own gallery with the section changer component. |
| [List Divider](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_ListDivider.htm) | Use the `ui-group-index` class for a group index. |
| [Notification](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Notification.htm) | Popup component with the `ui-popup-toast` class. |
| [Progress Bar](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_ProgressBar.htm) | Progress component with the `data-type="bar"` option. |
| [Scroll Handler](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_ScrollHandler.htm) | - |
| [Search Bar](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_SearchBar.htm) | Search input component. |
| [Select Menu](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_SelectMenu.htm) | Dropdown menu component. |
| [Swipe](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Swipe.htm) | - |
| [Tab Bar](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_TabBar.htm) | Tabs component. |
| [Token Text Area](../../../../org.tizen.web.apireference/html/ui_fw_api/Mobile_UIComponents/deprecated/mobile_Tokentextarea.htm) | Text enveloper component. |
If your application used the above deprecated components, see the following examples for successful migration:
- **CheckboxRadio**
Before:
```
<input type="checkbox" name="checkbox-1" id="checkbox-1"/>
<input type="radio" name="radio-1" id="radio-1"/>
<script>
var element1 = document.getElementById('checkbox-1'),
element2 = document.getElementById('radio-1'),
checkboxWidget = tau.widget.Checkboxradio(element1),
radioWidget = tau.widget.Checkboxradio(element2);
checkboxWidget.enable();
radioWidget.disable();
</script>
```
After:
```
<input type="checkbox" name="checkbox-1" id="checkbox-1"/>
<input type="radio" name="radio-1" id="radio-1"/>
<script>
var element1 = document.getElementById('checkbox-1'),
element2 = document.getElementById('radio-1'),
checkboxWidget = tau.widget.Checkbox(element1),
radioWidget = tau.widget.Radio(element2);
checkboxWidget.enable();
radioWidget.disable();
</script>
```
- **Collapsible**
Before:
```
<ul data-role="listview">
<li id="collapsible" data-role="collapsible" data-inset="false">
<h2>Collapsible head</h2>
<!--Sub list in collapsible li-->
<ul data-role="listview">
<li>sub list item1</li>
<li>sub list item2</li>
</ul>
</li>
<!--List item in 1st depth-->
<li>other list item</li>
<li>other list item</li>
</ul>
<script>
var collapsibleElement = document.getElementById('collapsible'),
collapsible = tau.widget.Collapsible(collapsibleElement);
</script>
```
After:
```
<div id="expandable" class="ui-expandable" data-collapsed="false">
<h1>Expandable head</h1>
<div>Content</div>
</div>
<script>
var expandableEl = document.getElementById('expandable'),
expandableWidget = tau.widget.Expandable(expandableEl);
</script>
```
- **Fast Scroll**
Before:
```
<div data-role="page" id="main">
<div data-role="content">
<ul id="list" data-role="listview" data-fastscroll="true">
<li data-role="list-divider">A</li>
<li>Anton</li>
<li>Arabella</li>
<li data-role="list-divider">B</li>
<li>Barry</li>
<li>Bily</li>
</ul>
</div>
</div>
<script>
var fastscroll = tau.widget.FastScroll(document.getElementById('list'));
</script>
```
After:
```
<div class="ui-page" id="indexscrollbarPage">
<div class="ui-indexscrollbar" id="indexscrollbar"></div>
<div class="ui-content">
<ul class="ui-listview" id="isbList">
<li class="ui-group-index">A</li>
<li class="ui-li-static">Anton</li>
<li class="ui-li-static">Arabella</li>
<li class="ui-group-index">B</li>
<li class="ui-li-static">Barry</li>
<li class="ui-li-static">Bibi</li>
</ul>
</div>
</div>
<script>
var isb = tau.widget.IndexScrollbar(document.getElementById('indexscrollbar'));
</script>
```
- **Gallery**
Before:
```
<div data-role="content" data-scroll="none">
<div data-role="gallery" id="gallery" data-vertical-align="middle"></div>
</div>
<script>
var galleryWidget = tau.widget.Gallery(document.getElementById('gallery'));
galleryWidget.add('./images/01.jpg');
galleryWidget.add('./images/02.jpg');
galleryWidget.add('./images/03.jpg');
galleryWidget.refresh(1);
</script>
```
After:
```
<div id="gallerySection" class="ui-content ui-section-changer" data-orientation="horizontal">
<div>
<section class="gallery-section">
<img src="images/01.jpg"/>
</section>
<section class="gallery-section">
<img src="images/02.jpg"/>
</section>
</div>
</div>
<script>
    var sectionChangerElement = document.getElementById('gallerySection'),
        sectionChangerWidget = tau.widget.SectionChanger(sectionChangerElement),
        /* the inner <div> of the section changer wraps the <section> elements */
        sectionsParentNode = sectionChangerElement.querySelector('div'),
        newSectionElement = document.createElement('section');
    newSectionElement.innerHTML = '<img src="images/03.jpg">';
    sectionsParentNode.appendChild(newSectionElement);
sectionChangerWidget.refresh();
sectionChangerWidget.setActiveSection(1);
</script>
```
- **List Divider**
Before:
```
<ul data-role="listview">
<li data-role="list-divider">Item styles</li>
<li><a href="#">Normal lists</a></li>
<li><a href="#">Normal lists</a></li>
<li><a href="#">Normal lists</a></li>
</ul>
```
After:
```
<ul class="ui-listview">
<li class="ui-group-index">Item styles</li>
<li class="ui-li-anchor"><a href="#">Normal lists</a></li>
<li class="ui-li-anchor"><a href="#">Normal lists</a></li>
<li class="ui-li-anchor"><a href="#">Normal lists</a></li>
</ul>
```
- **Notification**
Before:
```
<div data-role="page" id="demo">
<div data-role="notification" id="notification" data-type="popup">
<p>Notification Demo TEST</p>
</div>
<div data-role="header" data-position="fixed">
<h1>Notification</h1>
</div>
<div data-role="content">
<div data-role="button" id="noti-demo">Show small popup</div>
</div>
</div>
<script>
var notification = tau.widget.Notification(document.getElementById('notification')),
buttonEl = document.getElementById('noti-demo');
buttonEl.addEventListener('vclick', function() {
notification.open();
});
</script>
```
After:
```
<div class="ui-content">
<a class="ui-btn" id="open" data-inline="true">Button</a>
<div id="popup_toast" data-role="popup" class="ui-popup ui-popup-toast">
<div class="ui-popup-content">
Toast popup text Toast popup text
</div>
</div>
</div>
<script>
var btn = document.getElementById('open');
btn.addEventListener('vclick', function() {
tau.openPopup('#popup_toast');
});
</script>
```
- **Progress Bar**
Before:
```
<div data-role="progressbar" id="progressbar"></div>
<script>
var progressbarWidget = tau.widget.ProgressBar(document.getElementById('progressbar'));
progressbarWidget.value(30);
</script>
```
After:
```
<div class="ui-progress" data-type="bar" id="progressbar"></div>
<script>
var progressWidget = tau.widget.Progress(document.getElementById('progressbar'));
progressWidget.value(30);
</script>
```
- **Search Bar**
Before:
```
<input type="search" name="search" id="search-bar"/>
<script>
    var searchBarElement = document.getElementById('search-bar'),
        searchBarWidget = tau.widget.SearchBar(searchBarElement);
    searchBarWidget.disable();
</script>
```
After:
```
<input type="search" id="search-test"/>
<script>
var searchEl = document.getElementById('search-test'),
searchWidget = tau.widget.SearchInput(searchEl);
    searchWidget.disable();
</script>
```
- **Select Menu**
Before:
```
<select id="selectmenu" data-native-menu="false">
<option value="1">Item1</option>
<option value="2">Item2</option>
<option value="3">Item3</option>
<option value="4">Item4</option>
</select>
<script>
var element = document.getElementById('selectmenu'),
widget = tau.widget.SelectMenu(element);
widget.open();
</script>
```
After:
```
<select id="dropdownmenu" data-native-menu="false">
<option value="1">Item1</option>
<option value="2">Item2</option>
<option value="3">Item3</option>
<option value="4">Item4</option>
</select>
<script>
var element = document.getElementById('dropdownmenu'),
widget = tau.widget.DropdownMenu(element);
widget.open();
</script>
```
- **Tab Bar**
Before:
```
<div data-role="header">
<div data-role="tabbar" id="tab-bar">
<ul>
<li><a href="#">Tabbar1</a></li>
<li><a href="#">Tabbar2</a></li>
<li><a href="#">Tabbar3</a></li>
</ul>
</div>
</div>
<script>
var tabBar = tau.widget.TabBar(document.getElementById('tab-bar'));
</script>
```
After:
```
<!--Tabs component is composed with Tabbar and SectionChanger-->
<div id="tabs" class="ui-tabs">
<div class="ui-tabbar">
<ul>
<li><a href="#" class="ui-btn-active">Tab1</a></li>
<li><a href="#">Tab2</a></li>
<li><a href="#">Tab3</a></li>
</ul>
</div>
<div class="ui-section-changer">
<div>
<section class="ui-section-active">
<p>Tab1</p>
</section>
<section>
<p>Tab2</p>
</section>
<section>
<p>Tab3</p>
</section>
</div>
</div>
</div>
<script>
var tabsElement = document.getElementById('tabs'),
tabs = tau.widget.Tabs(tabsElement);
tabs.setIndex(1);
</script>
```
- **Token Text Area**
Before:
```
<div data-role="tokentextarea" id="tokentext"></div>
<script>
var tokenWidget = tau.widget.TokenTextarea(document.getElementById('tokentext'));
tokenWidget.add('foobar');
</script>
```
After:
```
<div id="textenveloper" class="ui-text-enveloper"></div>
<script>
var textEnveloperElement = document.getElementById('textenveloper'),
        textEnveloper = tau.widget.TextEnveloper(textEnveloperElement);
textEnveloper.add('hello');
</script>
```
## Event Handling
Some events have changed. The following examples illustrate how to handle events:
- Swipe event
In the previous version, the `swipe` event was triggered automatically everywhere on the page. Since 2.4, for more efficient triggering and handling, the `swipe` event is triggered only where a Gesture instance has been created.
To enable the swipe event, use the `enableGesture()` method. The following example shows how to enable the swipe event on the content area:
```
<!--HTML code-->
<div class="ui-page ui-page-active" id="pageSwipe">
<header class="ui-header">
<h2 class="ui-title">Swipe Event</h2>
</header>
   <div id="content" class="ui-content"></div>
</div>
```
```
(function() {
var page = document.getElementById('pageSwipe');
page.addEventListener('pagebeforeshow', function() {
var content = document.getElementById('content');
tau.event.enableGesture(content, new tau.event.gesture.Swipe({
orientation: 'horizontal'
}));
});
}());
```
When the `swipe` event is enabled, the application can handle this event with some event detail data:
```
(function() {
var content = document.getElementById('content');
content.addEventListener('swipe', function(e) {
console.log('swipe direction = ' + e.detail.direction);
});
}());
```
For more information, see the [Gesture Event API](../../../../org.tizen.web.apireference/html/ui_fw_api/Gesture_Events/gesture.htm).
- Tap event
Since 2.4, the `tap` event has been deprecated. Use the `click` event instead.
If the application has one button in the content area:
```
<div class="ui-content">
<a id="btn" href="#" class="ui-btn">Click me</a>
</div>
```
Before:
```
var button = document.getElementById('btn');
button.addEventListener('tap', eventHandler);
```
After:
```
var button = document.getElementById('btn');
button.addEventListener('click', eventHandler);
```
## Related Information
* Dependencies
- Tizen 2.4 and Higher for Mobile
# WConfigurator
Lilygo TTGO T-Watch-2020 Bluetooth configurator
https://watch.anakod.ru
Currently can be used only for IR Remote configuration.
 
| 23.333333 | 62 | 0.785714 | eng_Latn | 0.418811 |
58b4f97583ba93a4891547f84d5765218c773906 | 1,472 | md | Markdown | _templates/2013-09-22-template-half-slider.md | niurkaR23/PersonalPortfolio | 23ea9653482d0793c1b3f1e364b58bba3bc0df79 | [
"CC-BY-3.0"
] | null | null | null | _templates/2013-09-22-template-half-slider.md | niurkaR23/PersonalPortfolio | 23ea9653482d0793c1b3f1e364b58bba3bc0df79 | [
"CC-BY-3.0"
] | null | null | null | _templates/2013-09-22-template-half-slider.md | niurkaR23/PersonalPortfolio | 23ea9653482d0793c1b3f1e364b58bba3bc0df79 | [
"CC-BY-3.0"
] | null | null | null | ---
title: "Half Slider"
slug: half-slider
src: /template-overviews/half-slider
categories: template landing-pages one-page portfolios unstyled
description: "A half page background image slider for Bootstrap 4 using the built-in Bootstrap carousel plugin."
bump: "A half page image slider template."
img-src: /assets/img/templates/half-slider.jpg
img-desc: "Half Page Bootstrap Image Carousel Slider"
layout: template-overview
meta-title: "Half Slider - Bootstrap 4 Background Image Slider"
meta-description: "A half page background image slider template for Bootstrap 4 built with the default Bootstrap carousel. All Start Bootstrap templates are free to download and open source."
features:
- Half page height, full width image slider
- Caption markup included
- Easy to edit, inline CSS for handling background images
- Content section below the slider
long-description: "Half Slider is a half page height, full width image slider template for Bootstrap 4. This theme is great for creating landing pages or one page websites."
alt-version: "yes"
alt-v3: "https://github.com/BlackrockDigital/startbootstrap-half-slider/archive/v3.3.7.zip"
user-version: "no"
redirect_from:
- /half-slider/
- /half-slider.php/
- /templates/half-slider.html/
- /templates/half-slider/
- /templates/half-slider/index.html
- /downloads/half-slider.zip/
---
| 40.888889 | 191 | 0.713995 | eng_Latn | 0.582959 |
58b5d54d85df68fd1cf22fcd117ccf67b64de2ba | 1,491 | markdown | Markdown | _posts/2018-02-11-180211.markdown | joshua-qa/joshua-qa.github.io | eaabdbdf88726f5adb165ba89bb9f7c9ba52b1e1 | [
"MIT"
] | 1 | 2019-11-27T14:56:49.000Z | 2019-11-27T14:56:49.000Z | _posts/2018-02-11-180211.markdown | joshua-qa/joshua-qa.github.io | eaabdbdf88726f5adb165ba89bb9f7c9ba52b1e1 | [
"MIT"
] | null | null | null | _posts/2018-02-11-180211.markdown | joshua-qa/joshua-qa.github.io | eaabdbdf88726f5adb165ba89bb9f7c9ba52b1e1 | [
"MIT"
] | null | null | null | ---
layout: post
title: 180205-180211
date: 2018-02-11 23:57:25
tags: Daily
categories: Daily
---
#### GDG Campus Korea 2월 Meetup 참석
https://www.meetup.com/ko-KR/GDG-Campus/events/246643824/
페북에서 홍보하는 글을 보고 신청해서 다녀왔다.
처음에는 지인분이 첫번째 세션때 발표하신다고 해서 관심이 생겼던건데, 트위터로 알고 지내던 다른 지인분도 만나뵙고 여러 개발자+학생 분들과 네트워킹 시간도 가질 수 있었다.
나한테는 다섯개 세션 모두 흥미로운 내용이었으며, 어렴풋이 '그거 DB 같은거 해주는거 아니야?' 수준으로 알고 있던 파이어베이스에 대해 조금 더 관심이 생기는 계기가 되었다.
네트워킹 시간에 새해 목표를 말할 기회가 있었는데 나는 여기서 한 세가지 정도 얘기했다. 지킬 수 있었으면 좋겠음.
#### 입사 6주차
드디어 만든 서비스를 리얼 환경에 배포했다.
프로젝트 회고를 작성해서 사내 컨플루언스에 공유했는데 반응이 좋아서 다행이었고, 앞으로도 더 분발해야겠다는 생각이 들었다.
팀 내부에서 파일럿 오픈한 서비스가 이래저래 문제가 발생해서 다들 고생하셨는데 난 아직 해당 내용을 인수인계 받은 상태가 아니어서 거의 도와드릴 수 있는 일이 없었다. ㅠㅠ 코드를 미리미리 봐둔 덕에 화면단 버그 하나 수정해드리긴 했지만.. 일단 nGrinder 관련으로 요청 받아서 웹소켓 부하 테스트 할 수 있는 코드 작성중이니까 그거라도 잘 도와드려야겠다!
#### 패스트캠퍼스 Java 1day class
NIO 책으로 유명하신 (나에게는 Dooray! 개발자로 기억에 남는) 김성박님이 패스트캠퍼스에서 강의를 할 예정인데, 그 전에 1Day Class를 여신다고 해서 커리큘럼 보고 바로 신청했다.
내용은 `객체지향과 UML 설명 + 디자인 패턴에 대한 전반적인 소개 및 설명`이었다.
친구 만나러 가는 약속이 있어서 질답 시간까지는 참여하지 못했지만, 강의 내용은 매우 만족이었다. 무엇보다 디자인 패턴에 대해 학습할 필요성을 느끼는 와중에 전반적으로 한번 짚고 넘어갈 수 있는 기회가 생겨서 맘에 들었다.
객체지향에 대한 설명도.. 사실 강의 직전까지 '뭐 이미 알고 있는거잖아~'라는 느낌이었지만 막상 듣고나니 감동적이었다. 이렇게 설명할 수도 있구나~ 라는 생각도 들고, 좀 더 개념을 확실하게 이해했다는 느낌이 들어서..
이제 조금이나마 기본적인건 알았으니 디자인 패턴을 틈틈히 봐야겠다 ㅠㅠ
#### 느낀 점
역시 밋업가서 발표 듣거나 개발자 블로그, 슬라이드 등을 보면 느끼는 점이 많다.
'이거 정말 이래도 괜찮나? 나만 뒤쳐지나? ㅠㅠ 내가 잘못 생각하나? ㅠㅠㅠㅠ' 했던 것들에 대해 '딴 사람도 이러는구나'를 알게 되는 순간이 많이 생겨서 기분이 좋고, 한 발짝이라도 더 나아갈 수 있는 계기가 되는 것 같다.
이번 주는 또 뭘 공부해볼지 고민해야겠다.
| 31.0625 | 202 | 0.704225 | kor_Hang | 1.00001 |
58b736b0cb51762ea7f03eaae6e46e5636f5663b | 4,762 | md | Markdown | CONTRIBUTING.md | explody/Junction | 700df9385fceda00d6830816606d8854dc9cef7b | [
"MIT"
] | 16 | 2020-04-28T07:03:26.000Z | 2022-03-05T14:26:40.000Z | CONTRIBUTING.md | explody/Junction | 700df9385fceda00d6830816606d8854dc9cef7b | [
"MIT"
] | 14 | 2020-03-19T04:32:18.000Z | 2021-03-05T23:54:47.000Z | CONTRIBUTING.md | explody/Junction | 700df9385fceda00d6830816606d8854dc9cef7b | [
"MIT"
] | 3 | 2021-01-19T18:39:00.000Z | 2022-02-14T23:51:07.000Z | # Contributing to Junction
The following is a set of basic guidelines to help you be productive working in the Junction codebase. This is a hobby project, I am not particular about who is contributing, and will accept any pull request that doesn't take the project in a totally random direction.
## Working on Issues
All of the work done on this project is tracked in GitHub issues (with bigger release efforts tracked in GitHub projects). Start by looking there for things you can work on (or open an issue with your suggestions).
Once you have something you want to take on:
1. Comment in the issue that you are working on the problem (or have stopped, so others can take a look).
2. Submit a pull request, detailing the changes you made.
3. GitHub Actions will automatically verify your changes meet the [styleguides](#styleguides).
4. I will review and accept your PR as soon as possible.
Depending on the issue and the benefit it offers, I have no problem cutting an early release/hotfix so it can be picked up on PyPI.
## Ramping Up
I am happy to answer your questions and chat 1:1 if you are working on Junction. [Contact details are on my GitHub profile](https://github.com/HUU).
An attempt is made to keep an overview of Junction's [architecture and design in the wiki](https://github.com/HUU/Junction/wiki).
## Development Environment
This project is attempting to use the latest (stable-ish) standards and practices emerging in the Python community. This includes `pyproject.toml` (a single unified configuration file for the project and its various tools). As of this writing, `pyproject.toml` is sparse, and so some exceptions are made (e.g. `mypy.ini`).
* [Poetry](https://python-poetry.org/) for dependency management, virtualenv creation, and packaging.
* [Black](https://black.readthedocs.io/en/stable/) for code formatting.
* [Flake8](https://flake8.pycqa.org/en/latest/) for PEP8 enforcement.
* [mypy](http://mypy-lang.org/) for static type checking.
* [tox](https://tox.readthedocs.io/en/latest/) for test execution.
* [pytest](https://docs.pytest.org/en/latest/) for unit testing.
* [pre-commit](https://pre-commit.com/) for automatically checking compliance with the above.
* [GitHub Actions](https://github.com/HUU/Junction/actions) for CI/CD.
#### Setup
1. [Install Poetry](https://python-poetry.org/docs/#installation).
2. Clone the repository:
```sh
git clone [email protected]:HUU/Junction.git
```
3. Setup the virtual environment:
```sh
cd Junction
poetry install
```
4. Activate the virtual environment:
```sh
poetry shell
```
5. Setup pre-commit hooks:
```sh
pre-commit install
```
With the above completed, you are now prepared to develop against Junction using your preferred tools. Thanks to Poetry, you will also have access to the Junction CLI from within the virtual environment, and it will use the code in your cloned repository directly.
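Day to day, a few commands cover most of the loop. This is a sketch assuming the default configurations of the tools listed above:

```sh
# Run the unit tests directly inside the active virtualenv
pytest

# Run the full test matrix the same way CI does (driven by the tox config)
tox

# Run every pre-commit check (Black, Flake8, mypy) without making a commit
pre-commit run --all-files
```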
### Test Confluence Instance
You will need an instance of Confluence to test against. Atlassian offers free instances of Confluence Cloud (liimited to 10 users). [Sign-up here](https://www.atlassian.com/software/confluence/free).
## Styleguides
#### Git Commit Messages
* Use the present tense ("Fixes problem" not "Fixed problem")
* Try to limit the first line to 72 characters or less
* Reference issues and pull requests on the first line
* Use the second line and beyond to elaborate on your change in more detail:
  * For bugs, describe the root cause of the issue and how your change addresses it if not obvious.
* For enhancements, explain the consequences of your changes if not obvious.
* Don't worry too much about it, I'll accept almost anything...
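For instance, a commit fixing a hypothetical issue #42 (the number and details are made up) might read:

```
Fixes #42: handle empty code blocks during Markdown conversion

The converter assumed every fenced block had at least one line, so an
empty block raised an IndexError. Guard against zero-length content
before building the Confluence macro.
```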
#### Python
* Black will automatically reformat your code to meet project standards. The style is not always pretty, but the benefit of having a rigid standard and a tool to autoformat your code outweighs the downside.
* Flake8, mypy will take care of the rest... so in effect: if the pre-commit hooks pass then you're good to go.
* All code must be type hinted. mypy should enforce this for you.
* Avoid gratuitous usage of `Any` as it defeats the point of type hinting.
  * When adding an untyped dependency, you can register a mypy exemption in `mypy.ini` (see the sketch after this list)
* Avoid dependencies between the subpackages i.e. `markdown` should not depend on `confluence` and vice-versa.
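A minimal sketch of such an exemption, where `somelib` is a made-up package name standing in for whatever untyped dependency you add:

```ini
# mypy.ini: silence missing type stubs for a hypothetical untyped package
[mypy-somelib.*]
ignore_missing_imports = True
```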
#### Dependencies
* External dependencies must work with the latest version of Python. As such, I will refuse new dependencies that are not clearly maintained towards that end.
* Dependencies do not have to be typed, but it is nice when they are.
* Junction is a pure-python project; no native or cross-runtime dependencies will be accepted (except for development tools).
# Scrum Report (Sprint 2)
From 23/11/2021 to 29/11/2021
## Team PDF Perjuangan
| NPM | Name |
| ------------- |----------------------------|
| 140810200013 | Rihlan Lumenda Suherman |
| 140810200025 | Raihan Fadhlal Aziz |
| 140810200039 | Rifqy Kurnia Sudarman |
## Sprint Overview
| Planned (n) | Completed (n) |
| ------------- |-------------- |
| 3 | 3 |
## Sprint 2 Backlog
| ID | Title/Desc | Assignee | Status |
| --- | ---------- | ------- | ------ |
| 2.1 | Create the menu screen and its contents | Raihan | DONE |
| 2.2 | Create a leaderboard for the top 10 scores | Rihlan | DONE |
| 2.3 | Tidy up the program interface | Rifqy | DONE |
## Retrospective
All tasks in sprint 2 were completed successfully.
## Next Sprint Backlog (Sprint 3)
| ID | Title/Desc | Assignee |
| --- | ---------- | ------- |
| 3.1 | Level Game | Raihan |
| 3.2 | Create the UML diagram | Rifqy |
| 3.3 | Bug Fixing | Rihlan |
# Cartographer [![Build Status](https://travis-ci.org/knowledge-map/cartographer.svg)](https://travis-ci.org/knowledge-map/cartographer)
Client-side knowledge graph renderer.
---
title: Moses and the Prophets
date: 17/11/2020
---
`Read 2 Timothy 3:14-17. What is the role of Scripture in Christian education?`
The word Torah, which identifies the first part of the Bible, is often translated as "law," partly because those books contain many regulations. But Torah actually means "teaching" or "instruction." This is an understanding quite different from what many consider the role of the "law" in the Bible to be, namely rules and norms to fulfill in order to remain in God's good graces. It is not so; the purpose of the law is practical teaching that shows how to live rightly and safely within the covenant God intended when He created us.
As for the two other sections of the Hebrew Bible, the Prophets report on the ability of God's people to handle this educational material and to live consistently with what it teaches (the former prophets, or historical books) and with the things they should have learned (the latter prophets). The remaining part of the Old Testament (in Hebrew, "the Writings") is rich in examples of teachers and students, some virtuous and some less so, and of their educational experiences. Models of educational success in these books include Esther, Ruth, Daniel, and Job. Among the failures are Job's four friends. The Psalms are a book of songs, but three of them can be considered didactic: Psalms 1, 37, and 73.
Educational material abounds in the Gospels, particularly through the parables of Jesus. Many of Paul's epistles open with a heartfelt proclamation of the gospel but end with educational elements, practical teachings concerning the daily Christian life. Revelation, too, is rich in formative material. For example, the full disclosure of the future of Christ's church is revealed in a book that only the Lamb of God, Jesus, the great Teacher, can open.
`One could object that not all of the instructional material present in the books of Moses applies to our time, and that is a fair observation. For example, Deuteronomy 17:14-20 contains directives concerning kings, with very explicit instructions about selecting who could aspire to that office. Today we do not appoint any king in our church. How do we determine the correct applicability of the educational material present in Scripture?`
58b7ca434d69263121db23149de2def2ff00c6c6 | 31,756 | md | Markdown | docs/UserGuide.md | eezj35/tp | da2f77affb089aaecfc1964d556f7e4a4355e78e | [
"MIT"
] | null | null | null | docs/UserGuide.md | eezj35/tp | da2f77affb089aaecfc1964d556f7e4a4355e78e | [
"MIT"
] | null | null | null | docs/UserGuide.md | eezj35/tp | da2f77affb089aaecfc1964d556f7e4a4355e78e | [
"MIT"
] | null | null | null | ---
layout: page
title: User Guide
---
* Table of Contents
{:toc}
--------------------------------------------------------------------------------------------------------------------
## **Introduction**
Welcome to the User Guide of **TuitiONE**!
**TuitiONE** is a Command Line Interface (CLI) based application that aims to **simplify the work of Customer Servicing Officers (CSO) in a tuition centre**.
The application also incorporates the benefits of a Graphical User Interface (GUI).
**TuitiONE** allows CSOs to input the details of students and parents through simple and intuitive commands. With our application, a CSO's work would be reduced and efficient.
If you can type fast, **TuitiONE** can get your contact management tasks done faster than most other GUI apps. The GUI application would allow you to interact with the application
through graphical icons (such as buttons).
However you do not have to worry even if you are new to CLI / GUI applications. **TuitiONE** uses easy to learn and simple CLI commands that usually fall under one sentence.
Moreover, this User Guide will take you through every feature of **TuitiONE**, providing you with the easiest and best user experience possible.
If you are interested, jump to [Quick start](#quick-start) to learn how to start up **TuitiONE** in a simple and quick manner.
--------------------------------------------------------------------------------------------------------------------
## **Quick start**
1. Ensure you have Java 11 or above installed in your Computer. You may follow the instructions and install it [here](https://www.oracle.com/java/technologies/downloads/#java11).
2. Download the latest `TuitiONE.jar` from [here](https://github.com/AY2122S1-CS2103T-F13-4/tp/releases).
3. Copy the file to the folder you want to use as the _home folder_ for your TuitiONE.
4. Double-click the file to start the app. A GUI similar to the one below should appear in a few seconds.<br>
_Note how the app contains some sample data_.<br>
<br>
<center><i>Image: <b>TuitiONE</b> upon loading for the first time.<br></i></center>
5. Type the command in the command box and press Enter to execute it. e.g. typing **`help`** and pressing Enter will open the help window.<br>
Some example commands you can try:
* **`list`** : Lists all students and lessons.
* **`add`** `n/John Doe p/98765432 e/[email protected] a/311, Clementi Ave 2, #02-25 g/S3 r/friends r/owesMoney` : Adds a student named `John Doe` to the **TuitiONE** app.
* **`delete`** `3` : Deletes the 3rd student shown in the student list.
* **`clear`** : Deletes all data (students and lessons).
* **`exit`** : Exits the app.
<div markdown="span" class="alert alert-warning">
:exclamation: **Caution:** After running **TuitiONE**, you will notice there are some additional files created in the _home folder_. Please do not delete these files, as they contain your data and important configurations for **TuitiONE** to run smoothly.
</div>
Refer to the [Features](#features) below for details of each command.
--------------------------------------------------------------------------------------------------------------------
## **About this document**
### Structure of this document
This document serves as a guide to help you get started on how you can set up and use **TuitiONE**. This User Guide has been structured to help you find what you require in a quick and simple manner.
Here are several useful tips on how to read and interpret this guide.
### Reading the User Guide
This section introduces you to the technical terms, symbols and syntax that are used inside this User Guide.
This will be useful should any of them be unclear to you.
#### Technical terms
The table below lists the interpretations of a few technical terms that you may encounter in this User Guide.
| Technical term | What it means |
| ------------- | ------------- |
| CLI | The Command Line Interface (CLI) is the interface that accepts text input to execute the functions of **TuitiONE**. |
| GUI | The Graphical User Interface (GUI) is the user interface which has graphical indicator representations that the user may interact with. Graphics, icons, windows, menus, cursor and buttons are all components of a GUI. |
| Parameter | Parameter refers to the user input required after the user is prompted by the TuitiONE GUI |
#### General Symbols and Syntax
The table below lists the interpretations of symbols and formatting used in this document.
| Syntax | What it means |
| ------------- | ------------- |
| `highlights` | A grey highlighted block represents an executable command, or possible parameters that can be entered into the CLI. |
| :information_source: | Indicates additional information |
| :bulb: | Indicates tips |
| :exclamation: | Indicates that you should take caution |
There are multiple examples provided for you in the [Features](#features) section below. Each simulated scenario includes the expected outputs of the **TuitiONE** application.
If there are still any doubts on the terms and usage of this app, you can refer to the [Glossary](#glossary) and [FAQ](#faq) located at the end of the document.
For each command specifically, you can view them in the relevant sections (such as in [Managing Students](#managing-students) and [Managing Lessons](#managing-lessons)) to learn more in detail.
The [Command summary](#command-summary) lists a table with all the commands present and their syntax.
--------------------------------------------------------------------------------------------------------------------
## **Application Layout**
This section presents the various elements in our **TuitiONE** application.

<center><i>Image: Layout of <b>TuitiONE</b>.</i></center><br>
<br>
**No.** | **Section** | **Description**
--------|-------------|----------------
1 | Toolbar pane | Here you can select the quit button or view the help window.
2 | Student panel | Here you can see the list of students present in your application. This list can be narrowed down using commands such as [`find`](#locating-students-by-name-find) and [`filter`](#filtering-of-list-filter).
3 | Lesson panel | Here you can see the list of lessons present in your application. This list can be narrowed down using commands such as [`filter`](#filtering-of-list-filter) and [`roster`](#viewing-of-lesson-roster-roster).
4 | Result panel | Here is where you will receive the various messages from the application after running your commands. There are a variety of messages, ranging from success messages to error messages.
5 | Command input box | Here is where you type your commands to run in the application.
6 | Send button | A button that helps submit your input command to run. Using the `Enter`-key on your keyboard after typing in the **Command input box** performs the same job here.
7 | Storage file indicator | This portion displays the location of your saved **TuitiONE** data file in your device.
<center><i>Table: <b>Legend of TuitiONE</b>.<br></i></center>
## **Features**
This section outlines all the features that **TuitiONE** has. You will be able to see the purpose of each feature, the format of each command and possible examples and images of what you should expect to see.
<div markdown="block" class="alert alert-info">
**:information_source: Notes about the command format:**<br>
* Words in `UPPER_CASE` are the parameters to be supplied by you.<br>
e.g. in `add n/NAME`, `NAME` is a parameter which can be used as `add n/John Doe`.
* Items in square brackets are optional.<br>
e.g `n/NAME [r/REMARK]` can be used as `n/John Doe r/friend` or as `n/John Doe`.
* Items with `…` after them can be used as many times as the user would like.<br>
e.g. `[r/REMARK]…` can be used multiple times like `r/sick` or `r/absent r/graduated`, or can be omitted altogether.
* Parameters can be in any order.<br>
e.g. if the command specifies `n/NAME p/PARENT_CONTACT`, `p/PARENT_CONTACT n/NAME` is also acceptable.
* If a parameter is expected only once in the command but you specified it multiple times, only the last occurrence of the parameter will be taken.<br>
e.g. if you specify `p/12341234 p/56785678`, only `p/56785678` will be taken.
* Commands that do not take in parameters (such as `help`, `list`, `exit` and `clear`) will ignore follow-up inputs.<br>
e.g. if the command specifies `help 123`, it will be interpreted as `help`.
</div>
<div markdown="block" class="alert alert-warning">
**:exclamation: Caution on use of the symbol `/` in commands:**<br>
* For all commands, the symbol `/` should only be used in the **representation of prefixes**, such as `n/`, `p/` and `r/`, etc.
* You **should not** use the symbol `/` when filling up any of the parameters.
* For example, in `add` command,
* **Acceptable** command: `add n/John Doe p/98765432 e/[email protected] a/John street, block 123, #01-01 g/P2`.
* **Invalid** command: `add n/John Doe p/98765432 e/[email protected] a/John stre/et, block 123, #01-01 g/P2`.<br>_*notice the additional `/` used in the parameter of `a/ADDRESS`._
</div>
### General Commands
#### Viewing help: `help`
TuitiONE will display the help panel which shows you a summary of the command syntax that is usable in the current version of TuitiONE.
Command Format: `help`
#### Listing all students: `list`
Shows you a list of all students and lessons in the **TuitiONE**. Students will be sorted in ascending alphabetical order by their name. Lessons will be sorted by grade, from `P1` to `S5`.
<div markdown="block" class="alert alert-info">
**:information_source: Notes about the command format:**<br>
Sorting of the lists by other fields (e.g. day, time, subject) is not available in the current version of **TuitiONE**, and will be an upcoming feature.
</div>
Command Format: `list`

<center><i>Image: Expected output of <code>list</code> command.</i></center><br>
<br>
Upon entering the `list` command, both student and lesson panels will be updated to show all the students and lessons present.
#### Filtering of list: `filter`
Filters the respective list(s) to display entries that match the specified conditions.
Command Format: `filter [g/GRADE] [s/SUBJECT]`
:information_source: **Details:**
* You can filter by `GRADE`, `SUBJECT`, or both.
* If you are only filtering by `GRADE`, both of the student list and lesson list will be filtered to display the respective entries that correspond to the `GRADE` as specified.
* If you are only filtering by `SUBJECT`, only the lesson list will be filtered to display the respective lessons that correspond to the `SUBJECT` as specified.
* If you are filtering by both `GRADE` and `SUBJECT`, both of the student list and lesson list will be filtered to display the respective entries that correspond to the `GRADE` and `SUBJECT` as specified.
* `GRADE` refers to the educational level of the student. It can be specified in lower- or upper- case (i.e. `P5` and `p5` represents the same grade).
* `SUBJECT` can be specified in lower- or upper- cases (i.e. `MATH` and `math` represents the same subject which is `Math`). See [`add-l` (add lesson)](#adding-a-lesson-add-l) command for more information.
Example(s):
* `filter g/P2` will filter both of the student list and lesson list by grade of `P2` and display the corresponding entries in the respective lists.
* `filter s/science` will filter the lesson list by subject of `Science` and display the corresponding entries in the respective list.
* `filter s/SCIENCE g/p2` will filter the lesson list by subject of `Science` and grade of `P2`, and the student list by grade of `P2`, and display the corresponding entries in the respective lists.

<center><i>Image: Expected output of <code>filter g/P5 s/Science</code> command.</i></center><br>
<br>
### Managing Students
#### Adding a student: `add`
Adds a student to the TuitiONE.
Command Format: `add n/NAME p/PARENT_CONTACT e/EMAIL a/ADDRESS g/GRADE [r/REMARK]…`
:information_source: **Details:**
* `NAME` can only be alphanumeric and within a maximum of 150 characters. `NAME` will also be formatted so that the first character of each space-separated word is capitalized while the rest become lower case (i.e. `samUel oNg` becomes `Samuel Ong`). This is to comply with the majority of naming conventions worldwide.
* `PARENT_CONTACT` can only be 8 digits long and start with `6`, `8` or `9` (as this application is currently based for Singapore use).
* `EMAIL` can only have a maximum of 100 characters and have the conventional `@` as well as a domain name.
  * As there can be a variety of possible email address namings and domains, the application will not run a thorough check on your input. If you happen to input the wrong email address, you can use the `edit` command [here](#editing-a-student--edit).
* `ADDRESS` can only have a maximum of 150 characters.
* `GRADE` refers to the educational level of the student. It can only be in a range of `P1`-`P6` (primary school levels) or `S1`-`S5` (secondary school levels). Here specifying lower case will also be a valid grade input (e.g. `p3` is allowed and will be read in the application as `P3`).
* `REMARK` can have a maximum of 25 characters and must be a single word without spaces (e.g. `smart` is valid, while `needs help` is invalid). A student can have any number of remarks, capped at 5 (including 0).
* Each student must have a unique name.
* Phone numbers are not unique as multiple students may share the same parent.
Example(s):
* `add n/John Doe p/98765432 e/[email protected] a/John street, block 123, #01-01 g/P2`
* `add n/Betsy Crowe p/91234567 e/[email protected] a/Bleecker street, block 123, #01-01 g/S5 r/foreign student`

<center><i>Image: Expected output of <code>add</code> command.</i></center><br>
<br>
#### Locating students by name: `find`
Finds students whose names contain any of the given keywords.
Command Format: `find KEYWORD [MORE_KEYWORDS]`
:information_source: **Details:**
* The search is case-insensitive. e.g. `hans` will match `Hans`.
* The order of the keywords does not matter. e.g. `Hans Bo` will match `Bo Hans`.
* Only keywords based on name will be searched.
* Prefix matching of words is supported, e.g. `Han` will match `Hans`.
* Students matching at least one keyword will be returned (i.e. `OR` search).
e.g. `Hans Bo` will return `Hans Gruber`, `Bo Yang`.
Example(s):
* `find John` returns `john` and `John Doe`
* `find alex david` returns `Alex Yeoh`, `David Li`<br>

<center><i>Image: Expected output of <code>find tan</code> command.</i></center><br>
<br>
#### Deleting a student : `delete`
Deletes a student from the TuitiONE.
Command Format: `delete INDEX`
:information_source: **Details:**
* Deletes the student at the specified `INDEX`.
* The index refers to the index number shown in the displayed student list.
* Deleting a student also unenrolls them (see [`unenroll`](#unenrolling-a-student-from-lesson-unenroll)) from all their lessons.
* The index **must be a positive integer** `1`, `2`, `3`, …
Example(s):
* `list` followed by `delete 2` deletes the student indexed `2` in the TuitiONE.
* `find Betsy` followed by `delete 1` deletes the 1st student in the results of the `find` command.

<center><i>Image: Expected output of <code>delete 1</code> command.</i></center><br>
<br>
#### Editing a student : `edit`
Edits a student's particulars.
Command Format: `edit INDEX (must be a positive integer) [n/NAME] [p/PHONE] [e/EMAIL] [a/ADDRESS] [g/GRADE] [r/REMARK_TO_ADD]... [dr/REMARK_TO_DELETE]...`
:information_source: **Details:**
* Edits the student at the specified `INDEX` based on the fields given.
* You can edit any number of fields.
* The index refers to the index number shown in the displayed student list.
* The index **must be a positive integer** `1`, `2`, `3`, …
* You can edit a student to have any number of remarks, capped at 5 (including 0). The number of characters each remark can have is capped at 25.
* Remarks are unique, and you cannot tag more than one of the same remark to the same student. For example, `edit 2 r/overdueFees r/overdueFees` will only tag a single `overdueFees` remark to the student at index `2`.
* See [`add` command](#adding-a-student-add) for other constraints on defining a student.
* If you enter `edit INDEX r/REMARK_TO_ADD`, TuitiONE will add the given `REMARK` to the existing set of remarks.
* If you enter `edit INDEX dr/REMARK_TO_DELETE`, TuitiONE will delete the given `REMARK` from the existing set of remarks, if it is present in the set.
* If you were to add and remove remarks in the same command, TuitiONE will remove the specified remarks before adding the new ones.
* **Note that if you change a student's grade, TuitiONE will unenroll the student from all the classes he or she was taking in the previous grade**.
Example(s):
* `edit 2 p/98765432` changes the parent contact number information of the second student in the student list.
* `edit 2 g/S2` changes the grade of the second student in the student list from its current grade to `S2`, and he or she will be unenrolled from all classes in his or her previous grade.
* `edit 2 n/Ben Lim e/[email protected]` changes the name and email of the second student in the student list.
* `edit 2 r/discounted dr/unpaid` removes the `unpaid` remark from the second student's set of remarks, before adding the `discounted` remark.

<center><i>Image: Expected output of <code>edit</code> command.</i></center><br>
<br>
#### Enrolling a student from lesson: `enroll`
Enrolls a specified student in a given TuitiONE lesson.
Command Format: `enroll STUDENT_INDEX l/LESSON_INDEX`
:information_source: **Details:**
* Enrolls the student identified by `STUDENT_INDEX` in the displayed student list to the specific lesson identified by `LESSON_INDEX` in the displayed lesson list.
* Enrolling a student is only possible if the student:
1. has the same `grade` as the lesson,
2. is not already enrolled in the lesson, and
3. has no other lessons with conflicting timing.
* `STUDENT_INDEX` refers to the index number shown in the displayed student list.
* `LESSON_INDEX` refers to the index number shown in the displayed lesson list.
* Both indexes **must be a positive integer** `1`, `2`, `3`, …
* Students can only be enrolled to a **maximum of 10 lessons**.
* Lessons can only have a **maximum of 15 students** enrolled.
Example(s):
* `enroll 1 l/2` will enroll the student indexed `1` in the displayed student list to the lesson indexed at `2` in the displayed lesson list.

<center><i>Image: Expected output of <code>enroll 1 l/2</code> command.</i></center><br>
<br>
#### Unenrolling a student from lesson: `unenroll`
Unenrolls a student from a given TuitiONE lesson.
Command Format: `unenroll STUDENT_INDEX l/LESSON_INDEX`
:information_source: **Details:**
* Unenrolls the student identified by `STUDENT_INDEX` in the displayed student list from the specific lesson identified by `LESSON_INDEX` in the displayed lesson list.
* `STUDENT_INDEX` refers to the index number shown in the displayed student list.
* `LESSON_INDEX` refers to the index number shown in the displayed lesson list.
* Both indexes **must be a positive integer** `1`, `2`, `3`, …
Example(s):
* `unenroll 1 l/1` will unenroll the student indexed `1` in the displayed student list from the lesson indexed at `1` in the displayed lesson list.

<center><i>Image: Expected output of <code>unenroll 1 l/1</code> command.</i></center><br>
<br>
### Managing Lessons
#### Adding a lesson: `add-l`
Adds a lesson to the TuitiONE with the specified prefixes.
Command Format: `add-l s/SUBJECT g/GRADE d/DAY_OF_WEEK t/START_TIME c/COST`
:information_source: **Details:**
* `GRADE` refers to the level of education a lesson caters for. It follows the same requirements as when adding a student. See the [`add`](#adding-a-student-add) command for more details regarding grade.
* `SUBJECT` can only be a single word limited to `20` characters, and its first letter will be capitalized.
* `DAY_OF_WEEK` can only take one of these forms: `Mon`, `Tue`, `Wed`, `Thu`, `Fri`, `Sat`, `Sun` (the first letter need not be capitalized, i.e. `mon` is allowed but not `MON`).
* `START_TIME` is in `2400` hours format and can only be between `0900` and `1900`, as lessons can only be conducted between 9am and 9pm (lessons last two hours, so the latest start time is `1900`). It must also be in intervals of `30` minutes: `1300` and `1330` are acceptable timings but not `1315`.
* Lessons are fixed at **two**-hour periods. In upcoming features, we will give you the power to define your lesson timing ranges.
* Lessons with the same `SUBJECT` and `GRADE` cannot have the same `DAY_OF_WEEK` and `START_TIME`.
* The cost must be a non-negative number: `0.0`, `2.0`, `3.3`, … The currency used in **TuitiONE** is the Singapore dollar (SGD). For practical reasons, the maximum cost of a lesson is capped at SGD $200.00 (inclusive). The cost will be displayed in the lesson list rounded off to two decimal places.
Example(s):
* `add-l s/Science g/P5 d/Wed t/1230 c/12.0`
* `add-l s/Mathematics g/S4 d/Fri t/1500 c/10.3`
* `add-l s/Mathematics g/S4 d/fri t/1500 c/10.3`

<center><i>Image: Expected output of <code>add-l</code> command.</i></center><br>
<br>
#### Deleting a lesson: `delete-l`
Deletes a lesson from the TuitiONE.
Command Format: `delete-l INDEX`
:information_source: **Details:**
* Deletes the lesson at the specified `INDEX`.
* The index refers to the index number shown in the displayed lesson list.
* The index **must be a positive integer** `1`, `2`, `3`, …
Example(s):
* `delete-l 1` deletes the lesson with corresponding index `1`.

<center><i>Image: Expected output of <code>delete-l 1</code> command.</i></center><br>
<br>
#### Viewing of lesson roster: `roster`
Shows you the student roster of a specified lesson in the student panel. The names of the students will also be displayed in the result panel.
Command Format: `roster LESSON_INDEX`
:information_source: **Details:**
* Displays the student roster of the lesson at the specified `LESSON_INDEX`.
* The index refers to the index number shown in the displayed lesson list.
* The index **must be a positive integer** `1`, `2`, `3`, …
Example(s):
* `roster 1` will display the students currently enrolled in the lesson indexed at `1` in the student panel.

<center><i>Image: Expected output of <code>roster 1</code> command.</i></center><br>
<br>
### Others
#### Clearing all entries : `clear`
Clears all students and lessons data stored in TuitiONE.
Command Format: `clear`

<center><i>Image: Expected output of <code>clear</code> command.</i></center><br>
<br>
<div markdown="span" class="alert alert-warning">
:exclamation: **Caution:** Using this command removes all data from <b>TuitiONE</b>. Only use this command if you want to reset all information on the application and start anew.
</div>
#### Exiting the program : `exit`
Exits the program.
Command Format: `exit`
### Managing Data
#### Saving the data
<b>TuitiONE</b> data is saved on the hard disk automatically after any command that changes the data. There is no need for you to save manually.
#### Editing the data file
<b>TuitiONE</b> data is saved as a `.json` file `[JAR file location]/data/TuitiONE.json`. Advanced users are welcome to update data directly by editing that data file.
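For illustration, the data file follows the general shape sketched below. This is a simplified, hypothetical example — the exact field names in your `TuitiONE.json` may differ, so back up the file before editing it:

```json
{
  "students" : [ {
    "name" : "John Doe",
    "parentContact" : "98765432",
    "email" : "johnd@example.com",
    "address" : "John street, block 123, #01-01",
    "grade" : "P2",
    "remarks" : [ "smart" ]
  } ],
  "lessons" : [ {
    "subject" : "Science",
    "grade" : "P5",
    "dayOfWeek" : "Wed",
    "startTime" : "1230",
    "cost" : 12.0
  } ]
}
```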
<div markdown="span" class="alert alert-warning">
:exclamation: **Caution:**
If the changes you made to the data file render its format invalid, <b>TuitiONE</b> will discard all data and start with an empty data file at the next run.
</div>
### Upcoming Features
You can expect these features in upcoming versions of **TuitiONE**.
#### Editing lessons
With this feature, you will have the flexibility in managing the lessons in your tuition center. You will be able to change the day and time of the lesson, its subject, its grade, and its price.
#### Customised sorting
With this feature, you will have the flexibility to order the student and lesson lists to your preference. For students, you can expect to be able to sort the student list by address, number of lessons and even by their fees. For lessons, you can expect to sort the lesson list by day and time, and number of students enrolled.
#### Verification of student particulars
With this feature, **TuitiONE** will help you check whether given student particulars are legitimate. This includes verifying the email and address of newly added students.
#### Tutor management
With this feature, you will be able to store information about tutors into **TuitiONE**. The information can include the lessons they are teaching, their qualifications and their schedule.
#### Importing/exporting files
With this feature, you will be able to import existing files, such as csv and excel files, into **TuitiONE**, and **TuitiONE** will automatically format them for you in its GUI.
You can also export current data in **TuitiONE** as other file types for compatibility with other applications.
#### Data Analytics
With this feature, you will be able to view statistics on the performance of students, popular lessons, times, and tutors. This allows you to gain better business insights about your centre.
--------------------------------------------------------------------------------------------------------------------
## **Command summary**
Action | Format | Examples
--------|-------|----------
**Add** | `add n/NAME p/PARENT_CONTACT e/EMAIL a/ADDRESS g/GRADE [r/REMARK]…` | `add n/Betsy Crowe p/91234567 e/[email protected] a/Bleecker street, block 123, #01-01 g/S5 r/foreign student`
**Add lesson** | `add-l s/SUBJECT g/GRADE d/DAY_OF_WEEK t/START_TIME c/COST` | `add-l s/Science g/P5 d/Wed t/1230 c/12.0`
**Edit** | `edit INDEX [n/NAME] [p/PHONE] [e/EMAIL] [a/ADDRESS] [g/GRADE] [r/REMARK_TO_ADD]… [dr/REMARK_TO_DELETE]…` | `edit 2 n/Ben Lim e/[email protected]`
**Delete** | `delete INDEX` | `delete 3`
**Delete lesson** | `delete-l INDEX` | `delete-l 1`
**Enroll** | `enroll STUDENT_INDEX l/LESSON_INDEX` | `enroll 1 l/1`
**Unenroll** | `unenroll STUDENT_INDEX l/LESSON_INDEX` | `unenroll 1 l/1`
**Find** | `find KEYWORD [MORE_KEYWORDS]` | `find James Jake`
**Filter** | `filter [g/GRADE] [s/SUBJECT]` | `filter g/P2`
**Roster** | `roster LESSON_INDEX` | `roster 1`
**List** | `list` |
**Help** | `help` |
**Clear** | `clear` |
**Exit** | `exit` |
--------------------------------------------------------------------------------------------------------------------
## **Glossary**
* **Java**: A widely used programming language.
* **JAR**: An executable Java file for you to open the app.
* **GUI**: Graphical User Interface.
* **JSON**: JavaScript Object Notation file. This is used to store your preferences and data.
* **Command**: The user inputs that run the specific features of the app.
* **Parameter**: A specific detail required for a command to run.
--------------------------------------------------------------------------------------------------------------------
## **FAQ**
1. Where can I view the list of commands?
> You can type `help` or you can click on the 'Help' tab on the top left of the app window. (see [Help](#viewing-help-help) for more)
1. Why are some unusual email address inputs valid, such as '[email protected]'?
> There are many possible email addresses and domains, such as school email addresses and personal domains, hence **TuitiONE** does not provide thorough checking in this current version. If there is any scenario where you have inputted the wrong email address and would like to change it, refer to the `edit` command [here](#editing-a-student--edit).
1. Why am I unable to add a student with the same name as another student?
> Currently our system identifies students uniquely by their name, hence you are unable to add students with the same name. We are working on an update to identify uniqueness through the combination of name and phone number, which will address this problem.
1. How do I edit a lesson?
> Unfortunately, in the current version of TuitiONE, you will need to use `delete-l` and `add-l` to make your edits, then re-enroll the students. In the upcoming update, there will be an `edit-l` command that will allow for the editing of lessons.
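> For example, assuming the lesson you want to change is at index `1` in the lesson list and the affected student is at index `1` in the student list, the workaround might look like the following (the indexes and lesson details here are purely illustrative):
>
> ```
> delete-l 1
> add-l s/Science g/P5 d/Thu t/1400 c/12.0
> enroll 1 l/1
> ```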
1. How do I edit a remark?
> To edit a remark, you will need to use the `dr/` and `r/` prefixes in the `edit` command to make any changes to remarks. (see [Editing a student](#editing-a-student--edit) for more).
1. Am I able to add or edit `Remarks` to have spacings within them?
> No. The number of characters each `Remark` can have is capped at 25, and each remark must be a single word. (see [Adding a student](#adding-a-student-add) for more)
1. Am I able to use "4PM" instead of "1600" for my timings when creating a new lesson?
> No. **TuitiONE** only accepts timings that follow the `2400` hours format. Additionally, timings must also be in intervals of 30 minutes. For instance, `1400` and `1430` are valid inputs, while `1415` is an invalid input. (see [Adding a lesson](#adding-a-lesson-add-l) for more)
1. How long can my name be for adding a new `Student`?
> We have imposed a `150` character limit for the respective names of `Students`. `Students` with names longer than 150 characters should use initials to represent their full name instead. (see [Adding a student](#adding-a-student-add) for more)
1. Can `Lessons` of the same `Subject` and `Grade` start at the same time?
> No. **TuitiONE** would consider a `Lesson` of the same `Subject` and `Grade` that starts at the same time on the same day as a conflict. (see [Adding a lesson](#adding-a-lesson-add-l) for more)
1. How many `Lessons` can a `Student` be enrolled in?
> A `Student` can be enrolled in a maximum of 10 `Lessons` at any time. **TuitiONE** will not allow a `Student` to be enrolled in more than **10** `Lessons` (see [Enrolling a student from lesson](#enrolling-a-student-from-lesson-enroll) for more).
1. How many `Students` can a `Lesson` contain?
> A `Lesson` can have up to 15 `Students` enrolled at any time. **TuitiONE** will not allow a `Lesson` to have more than **15** `Students` enrolled at one time (see [Enrolling a student from lesson](#enrolling-a-student-from-lesson-enroll) for more).
1. Why are there some unusual files present in my folder after I run **TuitiONE**?
> **TuitiONE** is currently a local desktop application, and hence the application needs to store the data you have inputted into these files. These files contain your personal preferences as well as the student and lesson data your tuition center holds. As such, do not delete these files, as this may cause **TuitiONE** to reset the next time you run it, potentially losing all your data. You may wish to edit these files directly, but we do not recommend this either (see [Managing data](#managing-data) for more).
| 46.768778 | 525 | 0.710606 | eng_Latn | 0.994921 |
58b945b87edabb958ee0c0b81239ddc2aeff3b6c | 197 | md | Markdown | README.md | nirfast-admin/NIRFASTSlicer | 968c434a96ab8df1ac664d1f3a213e762326ba8f | [
"Apache-2.0"
] | 1 | 2018-08-25T07:30:41.000Z | 2018-08-25T07:30:41.000Z | README.md | yushuiqiang/NIRFASTSlicer | 968c434a96ab8df1ac664d1f3a213e762326ba8f | [
"Apache-2.0"
] | 22 | 2018-03-16T12:49:36.000Z | 2019-11-18T11:16:02.000Z | README.md | yushuiqiang/NIRFASTSlicer | 968c434a96ab8df1ac664d1f3a213e762326ba8f | [
"Apache-2.0"
] | 5 | 2019-03-31T11:54:35.000Z | 2021-08-10T07:09:42.000Z | 
---
Customized version of 3D Slicer for [NIRFAST](www.nirfast.org)
Binaries available at http://nirfast.org/downloads.php
| 32.833333 | 73 | 0.791878 | kor_Hang | 0.487156 |
58b95df5fce2bd8965ebd637c871477a1abaecff | 1,832 | md | Markdown | docs/jenkins/notify.md | 375785353/i-kubernetes.com | 037a3b7f5f98bc05b8ed058c265e7ac07b3e2259 | [
"MIT"
] | null | null | null | docs/jenkins/notify.md | 375785353/i-kubernetes.com | 037a3b7f5f98bc05b8ed058c265e7ac07b3e2259 | [
"MIT"
] | null | null | null | docs/jenkins/notify.md | 375785353/i-kubernetes.com | 037a3b7f5f98bc05b8ed058c265e7ac07b3e2259 | [
"MIT"
] | null | null | null | ## **钉钉脚本**
```
#!/usr/local/python-3.6/bin/python3.6
#钉钉发版通知脚本
import json,requests,sys,time,datetime
#[工程名、分支号、是否成功、编号]
job = sys.argv[1]
branch = sys.argv[2]
stat = sys.argv[3]
bianhao = sys.argv[4]
user_name = sys.argv[5]
if int(stat) == 0:
name = '发布成功!'
tupian = ''
elif int(stat) == 1:
name = '发布失败!'
tupian = ''
url = "https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxxxxxxxxxxxxxxxxxx"
title = job + name
nowtime = datetime.datetime.now()
nowtime = str(nowtime.strftime('%Y-%m-%d %H:%M:%S'))
msg = """### %s \n
> 时间: %s \n
> 分支: %s \n
> 提交人:%s \n
> 编号: #%s \n
> 地址: [工程链接](http://1.1.1.1:8080/job/%s) \n
%s
"""
def Alert():
headers = {"Content-Type": "application/json"}
data = {"msgtype": "markdown",
"markdown": {
"title": title,
"text": msg %(title, nowtime, branch, user_name, bianhao, job, tupian)
}
}
r = requests.post(url, data=json.dumps(data), headers=headers, verify=False)
print(r.text)
Alert()
```
```
post {
success {
wrap ([$class: 'BuildUser']) {
script {
def user_user = env.BUILD_USER_ID
currentBuild.description = "executor is #${user_user}#"
}
sh "/data/jenkins_home/jenkins_work/jenkins_script/dingding.py ${JOB_NAME} ${BranchName} 0 ${BUILD_NUMBER} ${BUILD_USER}"
}
}
failure {
wrap ([$class: 'BuildUser']) {
sh "/data/jenkins_home/jenkins_work/jenkins_script/dingding.py ${JOB_NAME} ${BranchName} 1 ${BUILD_NUMBER} ${BUILD_USER}"
}
}
}
``` | 26.941176 | 137 | 0.580786 | yue_Hant | 0.307874 |
58b9e531f73eb41f00ca3203659c551864377ba2 | 995 | md | Markdown | _posts/2018-05-28-creating-certificate-signing-requests-csrs.md | dsgnr/snippets.sysadminstuff.io | 59874a1374cca3781fbfd656d7a608ae34954e6d | [
"MIT"
] | null | null | null | _posts/2018-05-28-creating-certificate-signing-requests-csrs.md | dsgnr/snippets.sysadminstuff.io | 59874a1374cca3781fbfd656d7a608ae34954e6d | [
"MIT"
] | 1 | 2021-06-25T15:24:18.000Z | 2021-06-25T15:24:18.000Z | _posts/2018-05-28-creating-certificate-signing-requests-csrs.md | dsgnr/snippets.sysadminstuff.io | 59874a1374cca3781fbfd656d7a608ae34954e6d | [
"MIT"
] | null | null | null | ---
layout: post
title: Creating Certificate Signing Requests (CSR's)
categories: ssl
permalink: /:categories/:title
---
Create self signed certificate:
{% highlight bash %}
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout snippets.sysadminstuff.io.key -out snippets.sysadminstuff.io.crt
{% endhighlight %}
Create certificate signing request (CSR) and private key:
{% highlight bash %}
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
{% endhighlight %}
Generate a certificate signing request (CSR) for an existing private key:
{% highlight bash %}
openssl req -out CSR.csr -key privateKey.key -new
{% endhighlight %}
Generate a certificate signing request based on an existing certificate:
{% highlight bash %}
openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key
{% endhighlight %}
Remove a passphrase from a private key:
{% highlight bash %}
openssl rsa -in privateKey.pem -out newPrivateKey.pem
{% endhighlight %}
| 31.09375 | 129 | 0.756784 | eng_Latn | 0.845782 |
58ba3133fc43fce010cb1c29d633361811af14f7 | 298 | md | Markdown | README.md | obsidianspork/new-blog-build | e47f1c0759fb47fe2daf8ff58933d389ef20b24d | [
"MIT"
] | null | null | null | README.md | obsidianspork/new-blog-build | e47f1c0759fb47fe2daf8ff58933d389ef20b24d | [
"MIT"
] | null | null | null | README.md | obsidianspork/new-blog-build | e47f1c0759fb47fe2daf8ff58933d389ef20b24d | [
"MIT"
] | null | null | null | ## Dcoy
This is my blog about software engineering, devops and automation. Theme is very much a work in progress...
### Run
```
bundle exec jekyll server
```
### New post
```
rake gen:post title="Your title here" cat="Your categories here"
```
### License
MIT
All content Copyright David Coy | 14.9 | 107 | 0.697987 | eng_Latn | 0.976401 |
58ba56d1b3622b5fe5772ab9cec9d8f8a41dd5ac | 2,566 | md | Markdown | includes/application-gateway-limits.md | Udbv/azure-docs.ru-ru | bb2a78d5edc16997258dd3601c133db25767917f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/application-gateway-limits.md | Udbv/azure-docs.ru-ru | bb2a78d5edc16997258dd3601c133db25767917f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/application-gateway-limits.md | Udbv/azure-docs.ru-ru | bb2a78d5edc16997258dd3601c133db25767917f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: vhorne
ms.service: application-gateway
ms.topic: include
ms.date: 03/04/2020
ms.author: victorh
ms.openlocfilehash: ff97aa6c6f04ad41ba6e1b986f3cc0734ec7a326
ms.sourcegitcommit: 59f506857abb1ed3328fda34d37800b55159c91d
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 10/24/2020
ms.locfileid: "92526154"
---
| Ресурс | Ограничение | Примечание |
| --- | --- | --- |
| Шлюз приложений Azure |1000 на подписку | |
| Конфигурации интерфейсных IP-адресов |2 |1 открытая и 1 закрытая |
| Интерфейсные порты |100<sup>1</sup> | |
| Серверные пулы адресов |100<sup>1</sup> | |
| Количество внутренних серверов на пул |1200 | |
| HTTP-прослушиватели |200<sup>1</sup> |Ограничение до 100 активных прослушивателей, направляющих трафик. Количество активных прослушивателей = общее количество прослушивателей − количество неактивных прослушивателей.<br>Если конфигурация по умолчанию в правиле маршрутизации настроена для маршрутизации трафика (например, при наличии прослушивателя, серверного пула и параметров HTTP), это также будет считаться прослушивателем.|
| Правила балансировки нагрузки HTTP |100<sup>1</sup> | |
| Параметры HTTP для сервера |100<sup>1</sup> | |
| Экземпляров на шлюз |Номер SKU версии 1 — 32<br>Номер SKU версии 2 — 125 | |
| SSL-сертификаты |100<sup>1</sup> |1 на HTTP-прослушиватель |
| Максимальный размер SSL-сертификата |Номер SKU версии 1 — 10 КБ<br>Номер SKU версии 2 — 16 КБ| |
| Сертификаты аутентификации клиента |100 | |
| Доверенные корневые сертификаты |100 | |
| Минимальное время ожидания запроса |1 с | |
| Максимальное время ожидания запроса |24 часа | |
| Количество сайтов |100<sup>1</sup> |1 на HTTP-прослушиватель |
| Количество сопоставлений URL-адреса на прослушиватель |1 | |
| Максимальное количество правил на основе путей на сопоставление URL-адреса|100||
| Конфигурации перенаправления |100<sup>1</sup>| |
| Одновременные подключения WebSocket |Средние шлюзы, 20 тыс.<br> Крупные шлюзы, 50 тыс.| |
| Максимальная длина URL-адреса|32 КБ| |
| Максимальный размер заголовка для HTTP/2 |4 КБ| |
| Максимальный размер передаваемого файла, категория "Стандартный" |2 ГБ | |
| Максимальный размер передаваемого файла (WAF) |Средние шлюзы WAF версии 1, 100 МБ<br>Крупные шлюзы WAF версии 1, 500 МБ<br>WAF версии 2, 750 МБ| |
| Максимальный размер текста WAF, без файлов|128 КБ||
| Максимальное количество настраиваемых правил WAF|100||
| Максимальное число исключений WAF для Шлюза приложений|40||
<sup>1</sup> При использовании номера SKU с поддержкой WAF рекомендуем ограничить количество ресурсов до 40.
| 57.022222 | 430 | 0.765004 | rus_Cyrl | 0.926878 |
58ba6d8ff115fe94c45aab7bb962c55e3c1506c2 | 1,377 | md | Markdown | CONTRIBUTING.md | kawaiiamber/app-template | 0d8923180f9f5e183d98bc2d656b0fdec3632f65 | [
"Unlicense"
] | null | null | null | CONTRIBUTING.md | kawaiiamber/app-template | 0d8923180f9f5e183d98bc2d656b0fdec3632f65 | [
"Unlicense"
] | null | null | null | CONTRIBUTING.md | kawaiiamber/app-template | 0d8923180f9f5e183d98bc2d656b0fdec3632f65 | [
"Unlicense"
] | null | null | null | Contributing
============
Pull Requests
-------------
I am always looking for better standard and ways to improve makefiles to
work on more and more systems. If you would like to make a PR, all I ask
if that when you fork it, you make a branch that is named appropriately
and request to merge that to the master branch. \#\#\# Quick PR Guide
Fork my repo, then set it up via
git clone <your-fork-url>
cd app-template
git remote add upstream https://github.com/kawaiiamber/app-template
git checkout -b <your-branch-name>
# Make your changes
git add -A
git commit -m "your commit message"
git push origin <your-branch-name>
If it has been a while since you cloned the repo, please pull to be up
to date with latest upstream master branch:
git checkout master
git pull upstream master
If you end up needing to make multiple commits, unless they're not that
relates, please squash commits. It's better to have one commit for one
thing that is changing.
git rebase -i HEAD~<number-of-commits>
# change `pick` to `squash` on all but the first
git push origin <your-branch-name> -f
Issues
------
For submitting issues, please tag the issue apprpriately in the title
via \[bug\], \[feature\], etc. If it is a bug, please list what the bug
is and what it affects, along with the proper steps to reproduce it (if
applicable).
| 32.023256 | 72 | 0.716049 | eng_Latn | 0.998846 |
58bcc14bee537b35499d9dddf2df31b5e2731acf | 10,386 | md | Markdown | articles/machine-learning/classic/migrate-rebuild-web-service.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 15 | 2017-08-28T07:46:17.000Z | 2022-02-03T12:49:15.000Z | articles/machine-learning/classic/migrate-rebuild-web-service.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 407 | 2018-06-14T16:12:48.000Z | 2021-06-02T16:08:13.000Z | articles/machine-learning/classic/migrate-rebuild-web-service.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 17 | 2017-10-04T22:53:31.000Z | 2022-03-10T16:41:59.000Z | ---
title: 'Estúdio ML (clássico): Migrar para Azure Machine Learning - Reconstruir serviço web'
description: Reconstruir serviços web (clássicos) como pontos finais de pipeline em Azure Machine Learning
services: machine-learning
ms.service: machine-learning
ms.subservice: studio-classic
ms.topic: how-to
author: xiaoharper
ms.author: zhanxia
ms.date: 03/08/2021
ms.openlocfilehash: 35ee5bf22aa88c18bade0ebdcd961b7687d24e7f
ms.sourcegitcommit: b4fbb7a6a0aa93656e8dd29979786069eca567dc
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 04/13/2021
ms.locfileid: "107311826"
---
# <a name="rebuild-a-studio-classic-web-service-in-azure-machine-learning"></a>Reconstruir um serviço web studio (clássico) em Azure Machine Learning
Neste artigo, você aprende a reconstruir um serviço web Studio (clássico) como **um ponto final** em Azure Machine Learning.
Utilize pontos finais do gasoduto Azure Machine Learning para fazer previsões, retreinar modelos ou executar qualquer gasoduto genérico. O ponto final REST permite-lhe executar gasodutos a partir de qualquer plataforma.
Este artigo faz parte da série de migração Studio (clássico) para Azure Machine Learning. Para obter mais informações sobre a migração para a Azure Machine Learning, consulte o [artigo visão geral da migração.](migrate-overview.md)
> [!NOTE]
> Esta série de migração centra-se no designer de arrastar e largar. Para obter mais informações sobre a implementação de modelos programáticamente, consulte [os modelos de machine learning implementar em Azure](../how-to-deploy-and-where.md).
## <a name="prerequisites"></a>Pré-requisitos
- Uma conta Azure com uma subscrição ativa. [Crie uma conta gratuita.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
- Uma área de trabalho do Azure Machine Learning. Criar um espaço de trabalho para [aprendizagem automática Azure.](../how-to-manage-workspace.md#create-a-workspace)
- Um oleoduto de treino de aprendizagem automática Azure. Para obter mais informações, consulte [a experiência Rebuild a Studio (classic) em Azure Machine Learning](migrate-rebuild-experiment.md).
## <a name="real-time-endpoint-vs-pipeline-endpoint"></a>Ponto final em tempo real vs ponto final do gasoduto
Os serviços web do estúdio (clássicos) foram substituídos por **pontos finais** em Azure Machine Learning. Utilize a seguinte tabela para escolher qual o tipo de ponto final a utilizar:
|Serviço web de estúdio (clássico)| Substituição de aprendizagem automática Azure
|---|---|
|Serviço web de pedido/resposta (previsão em tempo real)|Ponto final em tempo real|
|Serviço web de lote (previsão do lote)|Ponto final do gasoduto|
|Serviço web de reciclagem (reconversão)|Ponto final do gasoduto|
## <a name="deploy-a-real-time-endpoint"></a>Implementar um ponto final em tempo real
No Studio (clássico), usou um **serviço web REQUEST/RESPOND** para implementar um modelo para previsões em tempo real. Em Azure Machine Learning, você usa um **ponto final em tempo real**.
Existem várias formas de implementar um modelo no Azure Machine Learning. Uma das formas mais simples é usar o designer para automatizar o processo de implantação. Utilize os seguintes passos para implantar um modelo como ponto final em tempo real:
1. Executar o seu oleoduto de treino completo pelo menos uma vez.
1. Depois de concluída a execução, na parte superior da tela, **selecione Criar o gasoduto** de > **inferência em tempo real**.

O designer converte o gasoduto de treino num oleoduto de inferência em tempo real. Uma conversão semelhante também ocorre em Studio (clássico).
No designer, o passo de conversão também [regista o modelo treinado para o seu espaço de trabalho Azure Machine Learning](../how-to-deploy-and-where.md#registermodel).
1. **Selecione Submeter-se** para executar o pipeline de inferência em tempo real e verificar se funciona com sucesso.
1. Depois de verificar o gasoduto de inferência, selecione **Implementar**.
1. Insira um nome para o seu ponto final e um tipo de cálculo.
A tabela a seguir descreve as suas opções de computação de implantação no designer:
| Destino de computação | Utilizado para | Description | Criação |
| ----- | ----- | ----- | ----- |
|[Azure Kubernetes Service (AKS)](../how-to-deploy-azure-kubernetes-service.md) |Inferência em tempo real|Implantações de produção em larga escala. Tempo de resposta rápida e autoscalagem de serviço.| Criado pelo utilizador. Para obter mais informações, consulte [Criar metas de computação](../how-to-create-attach-compute-studio.md#inference-clusters). |
|[Instâncias de contentores Azure ](../how-to-deploy-azure-container-instance.md)|Teste ou desenvolvimento | Cargas de trabalho em pequena escala, baseadas em CPU que requerem menos de 48 GB de RAM.| Criada automaticamente por Azure Machine Learning.
### <a name="test-the-real-time-endpoint"></a>Teste o ponto final em tempo real
Após a implementação concluída, pode ver mais detalhes e testar o seu ponto final:
1. Vá ao **separador Endpoints.**
1. Selecione o ponto final.
1. Selecione o separador **Teste**.

## <a name="publish-a-pipeline-endpoint-for-batch-prediction-or-retraining"></a>Publicar um ponto final de gasoduto para previsão ou reconversão de lotes
Também pode utilizar o seu pipeline de treino para criar um **ponto final** de gasoduto em vez de um ponto final em tempo real. Utilize **pontos finais do gasoduto** para efetuar a previsão do lote ou a reconversão.
Os pontos finais do pipeline substituem **os pontos finais de execução de lote (clássicos)** e **os serviços web de reconversão.**
### <a name="publish-a-pipeline-endpoint-for-batch-prediction"></a>Publique um ponto final de pipeline para previsão de lote
Publicar um ponto final de previsão de lote é semelhante ao ponto final em tempo real.
Utilize as seguintes etapas para publicar um ponto final do gasoduto para a previsão do lote:
1. Executar o seu oleoduto de treino completo pelo menos uma vez.
1. Depois de concluída a execução, na parte superior da tela, **selecione Criar pipeline de** > **inferência Pipeline pipeline**

O designer converte o gasoduto de treino num pipeline de inferência de lote. Uma conversão semelhante também ocorre em Studio (clássico).
No designer, este passo [também regista o modelo treinado para o seu espaço de trabalho Azure Machine Learning](../how-to-deploy-and-where.md#registermodel).
1. **Selecione Submeter-se** para executar o pipeline de inferência do lote e verificar se completa com sucesso.
1. Depois de verificar o gasoduto de inferência, **selecione Publicar**.
1. Crie um novo ponto final do gasoduto ou selecione um existente.
Um novo ponto final do gasoduto cria um novo ponto final REST para o seu oleoduto.
Se selecionar um ponto final de gasoduto existente, não substitui o gasoduto existente. Em vez disso, a Azure Machine Learning versa cada oleoduto no ponto final. Pode especificar qual a versão a executar na sua chamada REST. Também deve definir um pipeline predefinido se a chamada REST não especificar uma versão.
### <a name="publish-a-pipeline-endpoint-for-retraining"></a>Publique um ponto final de pipeline para a reconversão
Para publicar um ponto final de oleoduto para reciclagem, já deve ter um rascunho de gasoduto que treine um modelo. Para obter mais informações sobre a construção de um pipeline de formação, consulte [a experiência Rebuild a Studio (clássica).](migrate-rebuild-experiment.md)
Para reutilizar o seu ponto final do pipeline para a reconversão, tem de criar um **parâmetro de pipeline** para o conjunto de dados de entrada. Isto permite-lhe definir dinamicamente o conjunto de dados de treino, para que possa voltar a treinar o seu modelo.
Utilize as seguintes medidas para publicar o ponto final do gasoduto de reconversão:
1. Executar o seu oleoduto de treino pelo menos uma vez.
1. Após a execução concluída, selecione o módulo de conjunto de dados.
1. No painel de detalhes do módulo, selecione **Definir como parâmetro de pipeline**.
1. Forneça um nome descritivo como "InputDataset".

Isto cria um parâmetro de pipeline para o seu conjunto de dados de entrada. Quando ligar para o seu ponto final do pipeline para o treino, pode especificar um novo conjunto de dados para treinar o modelo.
1. Selecione **Publicar**.

## <a name="call-your-pipeline-endpoint-from-the-studio"></a>Ligue para o seu ponto final do pipeline a partir do estúdio
Depois de criar o seu ponto final de inferência ou retreinamento do pipeline, pode ligar diretamente para o seu ponto final a partir do seu navegador.
1. Vá ao **separador Pipelines** e selecione **pontos finais do Pipeline**.
1. Selecione o ponto final do gasoduto que pretende executar.
1. Selecione **Submeter**.
Pode especificar quaisquer parâmetros de pipeline depois de selecionar **Enviar por isso.**
## <a name="next-steps"></a>Passos seguintes
Neste artigo, aprendeu a reconstruir um serviço web Studio (clássico) em Azure Machine Learning. O próximo passo é [integrar o seu serviço web com aplicações de clientes.](migrate-rebuild-integrate-with-client-app.md)
Veja os outros artigos na série de migração Studio (clássica):
1. [Visão geral da migração](migrate-overview.md).
1. [Conjunto de dados migrar](migrate-register-dataset.md).
1. [Reconstruir um pipeline de treino studio (clássico).](migrate-rebuild-experiment.md)
1. **Reconstruir um serviço web Studio (clássico).**
1. [Integre um serviço web Azure Machine Learning com aplicações para clientes.](migrate-rebuild-integrate-with-client-app.md)
1. [Migrar Executar O Guião R](migrate-execute-r-script.md).
| 64.111111 | 359 | 0.775467 | por_Latn | 0.999552 |
58bdeb6a6bbc0f2af5ab835487aed644e2f1349e | 1,884 | md | Markdown | _datasets/Papua_New_GuineaHise_et_al_2003_GENES__IMMUNITY_Polymorphisms_and_susceptibility_to_LF_PNG_6364015a.md | pacelf/pacelf | cd9f3608843eaf7d9dff6e20e06ee4bf773467e3 | [
"MIT"
] | null | null | null | _datasets/Papua_New_GuineaHise_et_al_2003_GENES__IMMUNITY_Polymorphisms_and_susceptibility_to_LF_PNG_6364015a.md | pacelf/pacelf | cd9f3608843eaf7d9dff6e20e06ee4bf773467e3 | [
"MIT"
] | 2 | 2021-10-06T01:58:48.000Z | 2022-02-18T04:52:34.000Z | _datasets/Papua_New_GuineaHise_et_al_2003_GENES__IMMUNITY_Polymorphisms_and_susceptibility_to_LF_PNG_6364015a.md | pacelf/pacelf | cd9f3608843eaf7d9dff6e20e06ee4bf773467e3 | [
"MIT"
] | null | null | null | ---
schema: pacelf
title: Polymorphisms of innate immunity genes and susceptibility to lymphatic filariasis
organization: Hise, A.G., Hazlett, F.E., Bockarie, M.J., Zimmerman, P.A., Tisch, D.J., Kazura, J.W.
notes: We examined 906 residents of an area of Papua New Guinea where bancroftian filariasis is endemic for genetic polymorphisms in three innate immunity genes suspected of contributing to susceptibility to infection and lymphatic pathology. Active infection was confirmed by the presence of blood-borne microfilariae and circulating filarial antigen in plasma. Disease was ascertained by physical examination for the presence of overt lymphedema (severe swelling of an arm or leg) or hydrocele. There was no association of infection status, lymphedema of an extremity, or hydrocele with chitotriosidase genotype (CHIT1). Polymorphisms of toll-like receptor-2 and toll-like receptor-4 genes (TLR4 A896G; TLR2 T2178A, G2258A) were not detected (N=200-625 individuals genotyped) except for two individuals heterozygous for a TLR2 mutation (C2029 T). These results indicate that a CHIT1 genotype associated previously with susceptibility to filariasis in residents of southern India and TLR2 and TLR4 polymorphisms do not correlate with infection status or disease phenotype in this Melanesian population.
access: Restricted
resources:
- name: Polymorphisms of innate immunity genes and susceptibility to lymphatic filariasis
url: '/docs/Papua_New_GuineaHise_et_al_2003_GENES__IMMUNITY_Polymorphisms_and_susceptibility_to_LF_PNG_6364015a.txt'
format: Electronic
access: Restricted
pages: 524-527
category: Scientific Papers
access: Restricted
journal: Genes and Immunity
publisher: No publisher available.
language: English
tags: English
hardcopy_location: No hardcopy location available.
work_location: Papua New Guinea
year: 2003
decade: 2000
PacELF_ID: 700
---
| 69.777778 | 1,103 | 0.820064 | eng_Latn | 0.98512 |
58bdf2849276ca4e0664107afd977eeb69dd43ea | 762 | md | Markdown | catalog/another-summer-knights/en-US_another-summer-knights.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/another-summer-knights/en-US_another-summer-knights.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/another-summer-knights/en-US_another-summer-knights.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Another Summer Knights

- **type**: manga
- **volumes**: 1
- **chapters**: 4
- **original-name**: アナザー・サマー・騎士'S
## Tags
- comedy
- romance
- shoujo
- supernatural
## Authors
- Ooya
- Kazumi (Story & Art)
## Sinopse
Natsumi Touma is sixteen years old. She began to work part-time at a coffee shop during the summer vacation when she was a first-year student of high school. However, the shop is a dubious place where Natsumi can see those who cannot be seen. Her inspiration gets stronger day by day at this spiritual place. There appears a very handsome savior.
## Links
- [My Anime list](https://myanimelist.net/manga/12117/Another_Summer_Knights)
| 26.275862 | 346 | 0.711286 | eng_Latn | 0.987233 |
58be9574c953b6a5494bf462b0d0880d3a8a3144 | 1,293 | md | Markdown | docs/error-messages/tool-errors/c-runtime-error-r6030.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/tool-errors/c-runtime-error-r6030.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/tool-errors/c-runtime-error-r6030.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: C 运行时错误 R6030
ms.date: 11/04/2016
f1_keywords:
- R6030
helpviewer_keywords:
- R6030
ms.assetid: 0238a6c3-a033-4046-8adc-f8f99d961153
ms.openlocfilehash: 7f5c61d9b39b1d655bcbf3d42ea870370ddf2842
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/31/2018
ms.locfileid: "50461615"
---
# <a name="c-runtime-error-r6030"></a>C 运行时错误 R6030
未初始化的 CRT
> [!NOTE]
> 如果运行应用时遇到此错误消息,该应用已关闭,因为它具有内部问题。 特定安全的软件程序,或很少,该程序中的 bug,通常被导致此问题。
>
> 可以尝试以下步骤来修复此错误:
>
> - 安全软件可能的缓解此问题的具体说明。 检查安全软件供应商的网站了解详细信息。 或者,检查有更新版本的安全软件,或尝试使用不同的安全软件。
> - 使用**应用程序和功能**或**程序和功能**页**控制面板**来修复或重新安装该程序。
> - 检查**Windows Update**中**控制面板**的软件更新。
> - 检查应用程序的更新版本。 如果问题仍然存在,请与应用供应商联系。
**程序员提供的的信息**
如果你正在使用 C 运行时 (CRT),但不是执行 CRT 启动代码,会发生此错误。 可能出现此错误,如果链接器开关[/ENTRY](../../build/reference/entry-entry-point-symbol.md)用于重写的默认开始地址,通常**mainCRTStartup**, **wmainCRTStartup**为控制台 exe 文件, **WinMainCRTStartup**或**wWinMainCRTStartup**为 Windows exe 文件,或 **_DllMainCRTStartup** dll。 除非在启动时调用上面的函数之一,C 运行时将不会初始化。 链接到 C 运行时库和使用普通时,默认情况下通常调用这些启动函数**主要**, **wmain**, **WinMain**,或**DllMain**入口点。
还有可能出现此错误,另一个程序使用代码注入技术来捕获某些 DLL 库调用时。 某些具有侵入性安全程序使用此方法。 在 Visual Studio 2015 之前的 Visual c + + 版本,则可以使用静态链接的 CRT 库为解决问题,但不是建议这样做的安全性和应用程序更新的原因。 更正此问题可能需要最终用户操作。 | 38.029412 | 382 | 0.772622 | yue_Hant | 0.943928 |
58c0be2bfbe6afac89034728104cacbf7dd7ab61 | 1,329 | md | Markdown | group-agreement.md | dannyramirezgd/wag-dog-walker-app | c961e7d1b17662a500f38c72a8a72f82ae85c5b3 | [
"MIT"
] | null | null | null | group-agreement.md | dannyramirezgd/wag-dog-walker-app | c961e7d1b17662a500f38c72a8a72f82ae85c5b3 | [
"MIT"
] | 22 | 2022-03-01T01:45:12.000Z | 2022-03-09T21:36:19.000Z | group-agreement.md | dannyramirezgd/wag-dog-walker-app | c961e7d1b17662a500f38c72a8a72f82ae85c5b3 | [
"MIT"
] | 3 | 2022-03-10T13:08:17.000Z | 2022-03-13T15:01:39.000Z | # Boot Camp Project Group Agreements
## What is your team name?
Team Wag
## What is your project about?
### User Story:
~~~
AS A Dog Owner
I WANT to look up local dog walkers based on location and rate
SO THAT my dog can be walked
~~~
## What role(s) will everyone on the team hold:
- GitHub Master - Chris
- Models and Version control - Chris
- Handlebars/views - Danny
- Backend and Frontend Routes - DK
## How often will you have meetings? How long?
At least every other day for 30-60 minutes
## How will you keep track of tasks?
GitHub Projects KanBan board with headings of To Do, In Progress, and Completed
## What if someone misses a deadline?
As long as we're open with communication whether on slack or group text message and within a day it's fine. But after two days a conversation needs to be had.
## What happens if someone misses a meeting? What is their responsibility? The team’s responsibility?
There needs to be notice and an equal amount of effort and work. Communication is required and possible instructor or TA involvement incase.
## How does each team member like to receive feedback?
Just be direct and straight with feedback.
## What if there is friction between members? How will you address it?
As long as there is an open line of communication when friction arises.
## Other Group Agreements:
| 42.870968 | 158 | 0.761475 | eng_Latn | 0.999801 |
58c11e49b4a1cdb3b093e369368f9ef0b28426a9 | 1,518 | md | Markdown | wdk-ddi-src/content/d3d12umddi/ne-d3d12umddi-d3d12ddi_video_motion_estimator_differences_metric_flags_0053.md | riwaida/windows-driver-docs-ddi | c6f3d4504dc936bba6226651b2810df9c9cb7f1c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3d12umddi/ne-d3d12umddi-d3d12ddi_video_motion_estimator_differences_metric_flags_0053.md | riwaida/windows-driver-docs-ddi | c6f3d4504dc936bba6226651b2810df9c9cb7f1c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3d12umddi/ne-d3d12umddi-d3d12ddi_video_motion_estimator_differences_metric_flags_0053.md | riwaida/windows-driver-docs-ddi | c6f3d4504dc936bba6226651b2810df9c9cb7f1c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NE:d3d12umddi.D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAGS_0053
title: D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAGS_0053 (d3d12umddi.h)
description: Indicates the differences metric flags to capture during video motion estimation.
ms.assetid: 2e95acd3-35d2-4fdf-b7f0-765350b976fb
ms.date: 10/19/2018
ms.topic: enum
ms.keywords: D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAGS_0053, D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAGS_0053,
req.header: d3d12umddi.h
req.include-header:
req.target-type:
req.target-min-winverclnt: Windows 10, version 1809
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.max-support:
req.typenames: D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAGS_0053
topic_type:
- apiref
api_type:
- HeaderDef
api_location:
- d3d12umddi.h
api_name:
- D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAGS_0053
product:
- Windows
targetos: Windows
tech.root: display
ms.custom: RS5
---
# D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAGS_0053 enumeration
## -description
Indicates the differences metric flags to capture during video motion estimation.
## -enum-fields
### -field D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAG_0053_NONE
No differences metric flag.
### -field D3D12DDI_VIDEO_MOTION_ESTIMATOR_DIFFERENCES_METRIC_FLAG_0053_SUM_OF_ABSOLUTE_TRANSFORMED_DIFFERENCES
The differences metric calculated as a sum of absolute transformed differences.
## -remarks
## -see-also
| 28.641509 | 139 | 0.849144 | yue_Hant | 0.563892 |
58c157670501d468244338182fb698c89b41bff4 | 18,001 | md | Markdown | docs-archive-a/2014/analysis-services/multidimensional-models/mdx/autoexists.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/analysis-services/multidimensional-models/mdx/autoexists.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-10-11T06:39:57.000Z | 2021-11-25T02:25:30.000Z | docs-archive-a/2014/analysis-services/multidimensional-models/mdx/autoexists.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-09-29T08:51:33.000Z | 2021-10-13T09:18:07.000Z | ---
title: Autoexists | Microsoft Docs
ms.custom: ''
ms.date: 07/17/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: analysis-services
ms.topic: conceptual
ms.assetid: 56283497-624c-45b5-8a0d-036b0e331d22
author: minewiskan
ms.author: owend
ms.openlocfilehash: dd35958a364456c12d58392afe3754f6adcf97b8
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87611374"
---
# <a name="autoexists"></a>Autoexists
La fonctionnalité d’auto-existence, ou *autoexists* , limite l’espace du cube aux cellules qui existent réellement dans le cube, par opposition à celles qui pourraient exister en créant toutes les combinaisons possibles de membres de la hiérarchie d’attribut à partir de la même hiérarchie. En effet, les membres d'une hiérarchie d'attribut ne peuvent coexister avec les membres d'une autre hiérarchie d'attribut au sein de la même dimension. Lorsque deux hiérarchies d'attribut, ou plus, de la même dimension sont utilisées dans une instruction SELECT, Analysis Services évalue les expressions des attributs pour s'assurer que les membres de ces attributs sont correctement limités afin de répondre aux critères de tous les autres attributs.
Supposons, par exemple, que vous utilisez des attributs de la dimension de Geography. Si vous avez une expression qui retourne tous les membres de l'attribut Ville et une autre expression qui limite les membres de l'attribut Pays à tous les pays d'Europe, il en résultera une limitation des membres de l'attribut Ville aux seules villes qui appartiennent à des pays d'Europe. Cela est dû à la particularité de la fonctionnalité Autoexists d'Analysis Services. En effet, elle tente d'empêcher que des enregistrements de la dimension exclus d'une expression d'attribut soient exclus par les autres expressions d'attributs. La fonctionnalité Autoexists peut également être interprétée comme l'intersection obtenue entre les différentes expressions d'attributs sur les lignes de la dimension.
## <a name="cell-existence"></a>Existence des cellules
Les cellules suivantes existent toujours :
- Le membre (All), de chaque hiérarchie, lorsqu'il est croisé avec des membres d'autres hiérarchies dans la même dimension.
- Les membres calculés lorsqu'ils sont croisés avec leurs frères et sœurs non calculés, ou avec les parents ou descendants de leurs frères et sœurs non calculés.
## <a name="providing-non-existing-cells"></a>Création de cellules inexistantes
Une cellule inexistante est une cellule fournie par le système en réponse à une requête ou à un calcul qui demande une cellule qui n'existe pas dans le cube. Par exemple, si vous travaillez avec un cube muni d'une hiérarchie d'attribut Ville, d'une hiérarchie d'attribut Pays qui appartient à la dimension Zone géographique, et d'une mesure Montant des ventes sur Internet, l'espace de ce cube inclut uniquement les membres qui coexistent entre eux. Par exemple, si la hiérarchie d'attribut Ville inclut les villes de New York, Londres, Paris, Tokyo et Melbourne et la hiérarchie d'attribut Pays les pays États-Unis, Royaume-Uni, France, Japon et Australie, l'espace du cube n'inclut pas l'espace (cellule) à l'intersection de Paris et États-Unis.
Lorsque vous interrogez des cellules qui n'existent pas, ces cellules retournent des valeurs NULL, ce qui signifie qu'elles ne peuvent pas contenir des calculs et que vous ne pouvez pas définir un calcul autorisé à écrire dans l'espace concerné. Par exemple, l'instruction suivante inclut des cellules qui n'existent pas :
```
SELECT [Customer].[Gender].[Gender].Members ON COLUMNS,
{[Customer].[Customer].[Aaron A. Allen]
,[Customer].[Customer].[Abigail Clark]} ON ROWS
FROM [Adventure Works]
WHERE Measures.[Internet Sales Amount]
```
> [!NOTE]
> Cette requête utilise la fonction [Members (Set) (MDX)](/sql/mdx/members-set-mdx) pour retourner l’ensemble de membres de la hiérarchie d’attribut Gender sur l’axe des colonnes, puis croise cet ensemble avec l’ensemble de membres spécifié à partir de la hiérarchie d’attribut Customer sur l’axe des lignes.
Lorsque vous exécutez la requête ci-dessus, la cellule à l'intersection de Aaron A. Allen et Female affiche une valeur NULL, tout comme la cellule à l'intersection d'Abigail Clark et Male. Ces cellules n'existent pas et ne peuvent contenir de valeur. En revanche, les cellules non existantes peuvent apparaître dans le résultat que retourne une requête.
Quand vous utilisez la fonction [Crossjoin (MDX)](/sql/mdx/crossjoin-mdx) pour retourner le produit croisé des membres de la hiérarchie d’attributs à partir des hiérarchies d’attributs de la même dimension, l’existence automatique limite les tuples retournés à l’ensemble des tuples qui existent réellement, au lieu de retourner un produit cartésien complet. Par exemple, exécutez et examinez les résultats de l'exécution de la requête suivante :
```
SELECT CROSSJOIN
(
{[Customer].[Country].[United States]},
[Customer].[State-Province].Members
) ON 0
FROM [Adventure Works]
WHERE Measures.[Internet Sales Amount]
```
> [!NOTE]
> Remarquez l'emploi de 0, soit la formule abrégée de Axes(0), pour désigner l'axe des colonnes.
La requête ci-dessus retourne uniquement les cellules des membres de chaque hiérarchie d'attribut dans la requête qui coexistent entre eux. La requête précédente peut également être écrite à l’aide de la nouvelle variante * de la fonction [CrossJoin (MDX)](/sql/mdx/crossjoin-mdx) .
```
SELECT
[Customer].[Country].[United States] *
[Customer].[State-Province].Members
ON 0
FROM [Adventure Works]
WHERE Measures.[Internet Sales Amount]
```
Cette requête peut aussi être écrite de la manière suivante :
```
SELECT [Customer].[State-Province].Members
ON 0
FROM [Adventure Works]
WHERE (Measures.[Internet Sales Amount],
[Customer].[Country].[United States])
```
Les valeurs retournées des cellules sont identiques, même si les métadonnées dans l'ensemble de résultats apparaissent différemment. Par exemple, dans la requête ci-dessus, la hiérarchie Country a été déplacée vers l'axe de segment (dans la clause WHERE) et ne peut donc s'afficher de manière explicite dans l'ensemble de résultats.
Chacune de ces trois requêtes précédentes illustre l’effet du comportement de l’auto-existence dans [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] .
## <a name="deep-and-shallow-autoexists"></a>Fonctionnalités Deep Autoexists et Shallow Autoexists
Autoexists peut être appliquée en profondeur ou superficiellement aux expressions. `Deep Autoexists` signifie que toutes les expressions seront évaluées pour rencontrer l'espace le plus profond possible après l'application des expressions de découpage, des expressions de sous-sélection dans l'axe, etc. `Shallow Autoexists` permet d'évaluer les expressions externes avant l'expression actuelle et de passer ces résultats à l'expression actuelle. La fonctionnalité Deep Autoexists est paramétrée par défaut.
Le scénario et les exemples suivants illustrent les différents types de fonctionnalités Autoexists. Dans les exemples suivants, nous allons créer deux jeux : l'un sous forme d'expression calculée et l'autre sous forme d'expression constante.
`//Obtain the Top 10 best reseller selling products by Name`
`with member [Measures].[PCT Discount] AS '[Measures].[Discount Amount]/[Measures].[Reseller Sales Amount]', FORMAT_STRING = 'Percent'`
`set Top10SellingProducts as 'topcount([Product].[Model Name].children, 10, [Measures].[Reseller Sales Amount])'`
`set Preferred10Products as '`
`{[Product].[Model Name].&[Mountain-200],`
`[Product].[Model Name].&[Road-250],`
`[Product].[Model Name].&[Mountain-100],`
`[Product].[Model Name].&[Road-650],`
`[Product].[Model Name].&[Touring-1000],`
`[Product].[Model Name].&[Road-550-W],`
`[Product].[Model Name].&[Road-350-W],`
`[Product].[Model Name].&[HL Mountain Frame],`
`[Product].[Model Name].&[Road-150],`
`[Product].[Model Name].&[Touring-3000]`
`}'`
`select {[Measures].[Reseller Sales Amount], [Measures].[Discount Amount], [Measures].[PCT Discount]} on 0,`
`Top10SellingProducts on 1`
`from [Adventure Works]`
Le jeu de résultats obtenu est le suivant :
|||||
|-|-|-|-|
||**Reseller Sales Amount**|**Discount Amount**|**PCT Discount**|
|**Mountain-200**|**14 356 699,36 $**|**19 012,71 $**|**0,13%**|
|**Road-250**|**9 377 457,68 $**|**4 032,47 $**|**0,04%**|
|**Mountain-100**|**8 568 958,27 $**|**139 393,27 $**|**1,63%**|
|**Road-650**|**7 442 141,81 $**|**39 698,30 $**|**0,53 %**|
|**Touring-1000**|**6 723 794,29 $**|**166 144,17 $**|**2,47%**|
|**Road-550-W**|**3 668 383,88 $**|**1 901,97 $**|**0,05%**|
|**Road-350-W**|**3 665 932,31 $**|**20 946,50 $**|**0,57%**|
|**HL Mountain Frame**|**3 365 069,27 $**|**$174.11**|**0,01%**|
|**Road-150**|**2 363 805,16 $**|**$0,00**|**0,00%**|
|**Touring-3000**|**2 046 508,26 $**|**79 582,15 $**|**3,89%**|
Le jeu de produits obtenu semble être le même que jeu Preferred10Products, qui se présente ainsi :
`with member [Measures].[PCT Discount] AS '[Measures].[Discount Amount]/[Measures].[Reseller Sales Amount]', FORMAT_STRING = 'Percent'`
`set Top10SellingProducts as 'topcount([Product].[Model Name].children, 10, [Measures].[Reseller Sales Amount])'`
`set Preferred10Products as '`
`{[Product].[Model Name].&[Mountain-200],`
`[Product].[Model Name].&[Road-250],`
`[Product].[Model Name].&[Mountain-100],`
`[Product].[Model Name].&[Road-650],`
`[Product].[Model Name].&[Touring-1000],`
`[Product].[Model Name].&[Road-550-W],`
`[Product].[Model Name].&[Road-350-W],`
`[Product].[Model Name].&[HL Mountain Frame],`
`[Product].[Model Name].&[Road-150],`
`[Product].[Model Name].&[Touring-3000]`
`}'`
`select {[Measures].[Reseller Sales Amount], [Measures].[Discount Amount], [Measures].[PCT Discount]} on 0,`
`Preferred10Products on 1`
`from [Adventure Works]`
Conformément aux résultats suivants, les deux jeux (Top10SellingProducts, Preferred10Products) sont identiques :
|||||
|-|-|-|-|
||**Reseller Sales Amount**|**Discount Amount**|**PCT Discount**|
|**Mountain-200**|**14 356 699,36 $**|**19 012,71 $**|**0,13%**|
|**Road-250**|**9 377 457,68 $**|**4 032,47 $**|**0,04%**|
|**Mountain-100**|**8 568 958,27 $**|**139 393,27 $**|**1,63%**|
|**Road-650**|**7 442 141,81 $**|**39 698,30 $**|**0,53 %**|
|**Touring-1000**|**6 723 794,29 $**|**166 144,17 $**|**2,47%**|
|**Road-550-W**|**3 668 383,88 $**|**1 901,97 $**|**0,05%**|
|**Road-350-W**|**3 665 932,31 $**|**20 946,50 $**|**0,57%**|
|**HL Mountain Frame**|**3 365 069,27 $**|**$174.11**|**0,01%**|
|**Road-150**|**2 363 805,16 $**|**$0,00**|**0,00%**|
|**Touring-3000**|**2 046 508,26 $**|**79 582,15 $**|**3,89%**|
L'exemple suivant illustre le concept de la fonctionnalité Deep Autoexists. Dans l'exemple, nous filtrons Top10SellingProducts sur l'attribut [Product].[Product Line] pour les membres du groupe [Mountain]. Notez que les deux attributs (slicer et axis) appartiennent à la même dimension, [Product].
`with member [Measures].[PCT Discount] AS '[Measures].[Discount Amount]/[Measures].[Reseller Sales Amount]', FORMAT_STRING = 'Percent'`
`set Top10SellingProducts as 'topcount([Product].[Model Name].children, 10, [Measures].[Reseller Sales Amount])'`
`// Preferred10Products set removed for clarity`
`select {[Measures].[Reseller Sales Amount], [Measures].[Discount Amount], [Measures].[PCT Discount]} on 0,`
`Top10SellingProducts on 1`
`from [Adventure Works]`
`where [Product].[Product Line].[Mountain]`
Génère les jeux de résultats suivants :
|||||
|-|-|-|-|
||**Reseller Sales Amount**|**Discount Amount**|**PCT Discount**|
|**Mountain-200**|**14 356 699,36 $**|**19 012,71 $**|**0,13%**|
|**Mountain-100**|**8 568 958,27 $**|**139 393,27 $**|**1,63%**|
|**HL Mountain Frame**|**3 365 069,27 $**|**$174.11**|**0,01%**|
|**Mountain-300**|**1 907 249,38 $**|**$876.95**|**0,05%**|
|**Mountain-500**|**1 067 327,31 $**|**17 266,09 $**|**1,62%**|
|**Mountain-400-W**|**592 450,05 $**|**$303.49**|**0,05%**|
|**LL Mountain Frame**|**521 864,42 $**|**$252.41**|**0,05%**|
|**ML Mountain Frame-W**|**482 953,16 $**|**$206,95**|**0,04%**|
|**ML Mountain Frame**|**343 785,29 $**|**$161.82**|**0,05%**|
|**Women's Mountain Shorts**|**260 304,09 $**|**6 675,56 $**|**2,56%**|
Dans le jeu de résultats ci-dessus, nous avons sept nouveaux venus dans la liste des articles Top10SellingProducts ; de plus, Mountain-200, Mountain-100 et HL Mountain Frame ont été déplacés en haut de la liste. Dans le jeu de résultats précédent, ces trois valeurs étaient entrecoupées.
On parle alors de fonctionnalité Deep Autoexists, car le jeu Top10SellingProducts est évalué pour répondre aux conditions de découpage de la requête. La fonctionnalité Deep Autoexists signifie que toutes les expressions seront évaluées pour rencontrer l'espace le plus profond possible après l'application des expressions de découpage, des expressions de sous-sélection dans l'axe, etc.
Toutefois, vous pouvez souhaiter effectuer l'analyse sur Top10SellingProducts comme équivalent à Preferred10Products, comme dans l'exemple suivant :
`with member [Measures].[PCT Discount] AS '[Measures].[Discount Amount]/[Measures].[Reseller Sales Amount]', FORMAT_STRING = 'Percent'`
`set Top10SellingProducts as 'topcount([Product].[Model Name].children, 10, [Measures].[Reseller Sales Amount])'`
`set Preferred10Products as '`
`{[Product].[Model Name].&[Mountain-200],`
`[Product].[Model Name].&[Road-250],`
`[Product].[Model Name].&[Mountain-100],`
`[Product].[Model Name].&[Road-650],`
`[Product].[Model Name].&[Touring-1000],`
`[Product].[Model Name].&[Road-550-W],`
`[Product].[Model Name].&[Road-350-W],`
`[Product].[Model Name].&[HL Mountain Frame],`
`[Product].[Model Name].&[Road-150],`
`[Product].[Model Name].&[Touring-3000]`
`}'`
`select {[Measures].[Reseller Sales Amount], [Measures].[Discount Amount], [Measures].[PCT Discount]} on 0,`
`Preferred10Products on 1`
`from [Adventure Works]`
`where [Product].[Product Line].[Mountain]`
Génère les jeux de résultats suivants :
|||||
|-|-|-|-|
||**Reseller Sales Amount**|**Discount Amount**|**PCT Discount**|
|**Mountain-200**|**14 356 699,36 $**|**19 012,71 $**|**0,13%**|
|**Mountain-100**|**8 568 958,27 $**|**139 393,27 $**|**1,63%**|
|**HL Mountain Frame**|**3 365 069,27 $**|**$174.11**|**0,01%**|
Dans les résultats ci-dessus, le découpage donne un résultat qui contient uniquement les produits de Preferred10Products qui font partie du groupe [Mountain] dans [Product].[Product Line], comme prévu car Preferred10Products est une expression constante.
Ce jeu de résultats est également interprété comme une fonctionnalité Shallow Autoexists. En effet, l'expression est évaluée avant la clause de découpage. Dans l'exemple précédent, l'expression était une expression constante à des fins d'illustration pour présenter le concept.
Le comportement d'Autoexists peut être modifié au niveau de la session à l'aide de la propriété de chaîne de connexion `Autoexists`. L’exemple suivant commence en ouvrant une nouvelle session et en ajoutant la propriété *Autoexists=3* à la chaîne de connexion. Vous devez ouvrir une nouvelle connexion pour exécuter l'exemple. Une fois la connexion établie avec le paramètre Autoexists, elle reste effective jusqu'à ce qu'elle soit interrompue.
`with member [Measures].[PCT Discount] AS '[Measures].[Discount Amount]/[Measures].[Reseller Sales Amount]', FORMAT_STRING = 'Percent'`
`set Top10SellingProducts as 'topcount([Product].[Model Name].children, 10, [Measures].[Reseller Sales Amount])'`
`//Preferred10Products set removed for clarity`
`select {[Measures].[Reseller Sales Amount], [Measures].[Discount Amount], [Measures].[PCT Discount]} on 0,`
`Top10SellingProducts on 1`
`from [Adventure Works]`
`where [Product].[Product Line].[Mountain]`
Le jeu de résultats suivant présente maintenant le comportement superficiel d'Autoexists.
|||||
|-|-|-|-|
||**Reseller Sales Amount**|**Discount Amount**|**PCT Discount**|
|**Mountain-200**|**14 356 699,36 $**|**19 012,71 $**|**0,13%**|
|**Mountain-100**|**8 568 958,27 $**|**139 393,27 $**|**1,63%**|
|**HL Mountain Frame**|**3 365 069,27 $**|**$174.11**|**0,01%**|
Le comportement d’Autoexists peut être modifié à l’aide du paramètre Autoexists = [1 | 2 | 3] dans la chaîne de connexion ; consultez les [Propriétés XMLA prises en charge (les)XMLA](https://docs.microsoft.com/bi-reference/xmla/xml-elements-properties/propertylist-element-supported-xmla-properties) et <xref:Microsoft.AnalysisServices.AdomdClient.AdomdConnection.ConnectionString%2A> l’utilisation des paramètres.
## <a name="see-also"></a>Voir aussi
[Concepts clés dans MDX (Analysis Services)](../key-concepts-in-mdx-analysis-services.md)
[Espace du cube](cube-space.md)
[Tuples](tuples.md)
[Utilisation de membres, de tuples et de jeux (MDX)](working-with-members-tuples-and-sets-mdx.md)
[Totaux visuels et non visuels](visual-totals-and-non-visual-totals.md)
[Référence du langage MDX ()MDX](/sql/mdx/mdx-language-reference-mdx)
[Référence MDX (Multidimensional Expressions)](/sql/mdx/multidimensional-expressions-mdx-reference)
| 55.903727 | 791 | 0.694295 | fra_Latn | 0.842961 |
58c17b6d0438a68e85052cf74e6810339b4cc558 | 47 | md | Markdown | packages/tkeel-console-plugin-admin-usage-statistics/README.md | lunz1207/console | 5d0a521632a9da0aee6b5a80f7a061e171a88ca4 | [
"Apache-2.0"
] | null | null | null | packages/tkeel-console-plugin-admin-usage-statistics/README.md | lunz1207/console | 5d0a521632a9da0aee6b5a80f7a061e171a88ca4 | [
"Apache-2.0"
] | null | null | null | packages/tkeel-console-plugin-admin-usage-statistics/README.md | lunz1207/console | 5d0a521632a9da0aee6b5a80f7a061e171a88ca4 | [
"Apache-2.0"
] | null | null | null | # @tkeel/console-plugin-admin-usage-statistics
| 23.5 | 46 | 0.808511 | est_Latn | 0.171258 |
58c1bf470a8579f397ea5a3d368b01cc178fbd1c | 499 | md | Markdown | message-router/publisher/README.md | onap/archive-ccsdk-sli-adaptors | cb066c294b8bc988441d25a222e08d0af4673ac5 | [
"Apache-2.0"
] | null | null | null | message-router/publisher/README.md | onap/archive-ccsdk-sli-adaptors | cb066c294b8bc988441d25a222e08d0af4673ac5 | [
"Apache-2.0"
] | null | null | null | message-router/publisher/README.md | onap/archive-ccsdk-sli-adaptors | cb066c294b8bc988441d25a222e08d0af4673ac5 | [
"Apache-2.0"
] | null | null | null | # Publisher
## Modules
- api - exports the publisher interface for clients and providers to import
- features - used for managing the feature repository for publisher
- installer - provides a simple install script
- provider - provides an implementation of the publisher api, this implementation assumes the controller has a single identity for publishing to DMAAP message router
- sample.client - a dummy client that posts a simple message to a configured topic during its initialization
| 55.444444 | 166 | 0.791583 | eng_Latn | 0.999107 |
58c26bbd3a29a85f115c91124bf4eea4f62bd0d7 | 32,006 | md | Markdown | articles/iot-edge/tutorial-nested-iot-edge.md | marcobrunodev/azure-docs.pt-br | 0fff07f85663724745ac15ce05b4570890d108d9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-edge/tutorial-nested-iot-edge.md | marcobrunodev/azure-docs.pt-br | 0fff07f85663724745ac15ce05b4570890d108d9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-edge/tutorial-nested-iot-edge.md | marcobrunodev/azure-docs.pt-br | 0fff07f85663724745ac15ce05b4570890d108d9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Tutorial – Criar uma hierarquia de dispositivos IoT Edge – Azure IoT Edge
description: Este tutorial mostra como criar uma estrutura hierárquica de dispositivos IoT Edge usando gateways.
author: v-tcassi
manager: philmea
ms.author: v-tcassi
ms.date: 11/10/2020
ms.topic: tutorial
ms.service: iot-edge
services: iot-edge
monikerRange: '>=iotedge-2020-11'
ms.openlocfilehash: 28b34ecaf51406b35c67d3838714691390f5adf7
ms.sourcegitcommit: 6a350f39e2f04500ecb7235f5d88682eb4910ae8
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 12/01/2020
ms.locfileid: "96453053"
---
# <a name="tutorial-create-a-hierarchy-of-iot-edge-devices-preview"></a>Tutorial: Criar uma hierarquia de dispositivos IoT Edge (versão prévia)
Implante nós do Azure IoT Edge em redes organizadas em camadas hierárquicas. Cada camada em uma hierarquia é um dispositivo de gateway que processa as mensagens e as solicitações de dispositivos na camada abaixo dele.
>[!NOTE]
>Este recurso exige o IoT Edge versão 1.2, que está em versão prévia pública, que execute contêineres do Linux.
Estruture uma hierarquia de dispositivos para que apenas a camada superior tenha conectividade com a nuvem e as camadas inferiores só possam se comunicar com as camadas norte e sul adjacentes. Essa camada de rede é a base da maioria das redes industriais, que seguem o [padrão ISA-95](https://en.wikipedia.org/wiki/ANSI/ISA-95).
O objetivo deste tutorial é criar uma hierarquia de dispositivos IoT Edge que simula um ambiente de produção. No final, você implantará o [módulo Sensor de Temperatura Simulada](https://azuremarketplace.microsoft.com/marketplace/apps/azure-iot.simulated-temperature-sensor) a um dispositivo de camada inferior sem acesso à Internet, baixando as imagens de contêiner por meio da hierarquia.
Para atingir essa meta, este tutorial explica como criar uma hierarquia de dispositivos IoT Edge, implantar contêineres do runtime do IoT Edge nos seus dispositivos e configurar os dispositivos localmente. Neste tutorial, você aprenderá como:
> [!div class="checklist"]
>
> * Criar e definir as relações em uma hierarquia de dispositivos IoT Edge.
> * Configurar o runtime do IoT Edge nos dispositivos da sua hierarquia.
> * Instalar certificados consistentes na hierarquia de dispositivos.
> * Adicionar cargas de trabalho aos dispositivos da hierarquia.
> * Usar um módulo de proxy de API para rotear com segurança o tráfego HTTP em uma só porta dos seus dispositivos de camada inferior.
Neste tutorial, as seguintes camadas de rede são definidas:
* **Camada superior**: os dispositivos IoT Edge dessa camada podem se conectar diretamente à nuvem.
* **Camada inferior**: os dispositivos IoT Edge dessa camada não podem se conectar diretamente à nuvem. Eles precisam passar por um ou mais dispositivos IoT Edge intermediários para enviar e receber dados.
Este tutorial usa uma hierarquia de dois dispositivos para simplificar. Um dispositivo, **topLayerDevice**, representa um dispositivo na camada superior da hierarquia, que pode se conectar diretamente à nuvem. Esse dispositivo também será chamado de **dispositivo pai**. O outro dispositivo, **lowerLayerDevice**, representa um dispositivo na camada inferior da hierarquia, que não pode se conectar diretamente à nuvem. Esse dispositivo também será chamado de **dispositivo filho**. Adicione mais dispositivos de camada inferior para representar seu ambiente de produção. A configuração de qualquer dispositivo de camada inferior adicional seguirá a configuração do **lowerLayerDevice**.
## <a name="prerequisites"></a>Pré-requisitos
Para criar uma hierarquia de dispositivos IoT Edge, será necessário:
* Um computador (Windows ou Linux) com conectividade com a Internet.
* Dois dispositivos Linux a serem configurados como dispositivos IoT Edge. Se você não tiver dispositivos disponíveis, use [máquinas virtuais do Azure](../virtual-machines/linux/index.yml).
* Uma conta do Azure com uma assinatura válida. Se você não tiver uma [assinatura do Azure](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), crie uma [conta gratuita](https://azure.microsoft.com/free/) antes de começar.
* Um [Hub IoT](../iot-hub/iot-hub-create-through-portal.md) gratuito ou da camada Standard no Azure.
* A CLI do Azure v2.3.1 com a extensão de IoT do Azure v0.10.6 ou superior instalada. Este tutorial usa o [Azure Cloud Shell](../cloud-shell/overview.md). Se você não estiver familiarizado com o Azure Cloud Shell, [confira um guia de início rápido para obter mais detalhes](./quickstart-linux.md#use-azure-cloud-shell).
Experimente também esse cenário seguindo o [exemplo do Azure IoT Edge para IoT Industrial](https://aka.ms/iotedge-nested-sample) com script, que implanta máquinas virtuais do Azure como dispositivos pré-configurados para simular um ambiente de fábrica.
## <a name="configure-your-iot-edge-device-hierarchy"></a>Configurar a hierarquia de dispositivos IoT Edge
### <a name="create-a-hierarchy-of-iot-edge-devices"></a>Criar uma hierarquia de dispositivos IoT Edge
A primeira etapa, criar seus dispositivos IoT Edge, pode ser realizada por meio do portal do Azure ou da CLI do Azure. Este tutorial criará uma hierarquia de dois dispositivos IoT Edge: **topLayerDevice** e o filho **lowerLayerDevice**.
# <a name="portal"></a>[Portal](#tab/azure-portal)
1. No [portal do Azure](https://ms.portal.azure.com/), navegue até o seu Hub IoT.
1. No menu no painel esquerdo, em **Gerenciamento Automático de Dispositivo**, selecione **IoT Edge**.
1. Selecione **+ Adicionar um dispositivo IoT Edge**. Esse dispositivo será o dispositivo de camada superior. Portanto, insira uma identificação do dispositivo exclusiva apropriada. Selecione **Salvar**.
1. Selecione **+ Adicionar um dispositivo IoT Edge** novamente. Esse dispositivo será o dispositivo de borda de camada inferior. Portanto, insira uma identificação do dispositivo exclusiva apropriada.
1. Escolha **Definir um dispositivo pai**, escolha o dispositivo de camada superior na lista de dispositivos e selecione **OK**. Selecione **Salvar**.

# <a name="azure-cli"></a>[CLI do Azure](#tab/azure-cli)
1. No [Azure Cloud Shell](https://shell.azure.com/), insira o comando a seguir para criar um dispositivo IoT Edge no hub. Esse dispositivo será o dispositivo de camada superior. Portanto, insira uma identificação do dispositivo exclusiva apropriada:
```azurecli-interactive
az iot hub device-identity create --device-id {top_layer_device_id} --edge-enabled --hub-name {hub_name}
```
1. Insira o seguinte comando para criar seu dispositivo IoT Edge filho e a relação pai-filho entre os dispositivos:
```azurecli-interactive
az iot hub device-identity create --device-id {lower_layer_device_id} --edge-enabled --pd {top_layer_device_id} --hub-name {iothub_name}
```
---
Anote a cadeia de conexão de cada dispositivo IoT Edge. Elas serão usadas mais tarde.
# <a name="portal"></a>[Portal](#tab/azure-portal)
1. No [portal do Azure](https://ms.portal.azure.com/), navegue até a seção **IoT Edge** do Hub IoT.
1. Clique na identificação de um dos dispositivos na lista de dispositivos.
1. Selecione **Copiar** no campo **Cadeia de Conexão Primária** e registre-o em um lugar de sua escolha.
1. Repita essa etapa para todos os outros dispositivos.
# <a name="azure-cli"></a>[CLI do Azure](#tab/azure-cli)
1. No [Azure Cloud Shell](https://shell.azure.com/), para cada dispositivo, insira o seguinte comando para recuperar a cadeia de conexão do dispositivo e registre-o em um lugar de sua escolha:
```azurecli-interactive
az iot hub device-identity connection-string show --device-id {device_id} --hub-name {hub_name}
```
---
### <a name="create-certificates"></a>Criar certificados
Todos os dispositivos em um [cenário de gateway](iot-edge-as-gateway.md) precisam ter um certificado compartilhado para configurar conexões seguras entre eles. Use as etapas a seguir para criar certificados de demonstração para os dois dispositivos neste cenário.
Para criar certificados de demonstração em um dispositivo Linux, você precisará clonar os scripts de geração e configurá-los para serem executados localmente no Bash.
1. Clone o repositório Git do IoT Edge, que contém scripts para gerar certificados de demonstração.
```bash
git clone https://github.com/Azure/iotedge.git
```
1. Navegue até o diretório no qual você deseja trabalhar. Todos os arquivos de certificado e de chave serão criados nesse diretório.
1. Copie os arquivos de configuração e de script do repositório do IoT Edge clonado para o diretório de trabalho.
```bash
cp <path>/iotedge/tools/CACertificates/*.cnf .
cp <path>/iotedge/tools/CACertificates/certGen.sh .
```
1. Crie o Certificado de AC raiz e um certificado intermediário.
```bash
./certGen.sh create_root_and_intermediate
```
Este comando de script cria vários arquivos de certificado e de chave, mas estamos usando o seguinte arquivo como o **Certificado de AC raiz** para a hierarquia de gateway:
* `<WRKDIR>/certs/azure-iot-test-only.root.ca.cert.pem`
1. Crie dois conjuntos de Certificados de Autoridade de Certificação do dispositivo IoT Edge e chaves privadas com o seguinte comando: um conjunto para o dispositivo de camada superior e um definido para o dispositivo de camada inferior. Forneça nomes que sejam fáceis de serem lembrados para os Certificados de Autoridade de Certificação, a fim de diferenciá-los um do outro.
```bash
./certGen.sh create_edge_device_ca_certificate "top-layer-device"
./certGen.sh create_edge_device_ca_certificate "lower-layer-device"
```
Este comando de script cria vários arquivos de certificado e de chave, mas estamos usando o seguinte certificado e par de chaves em cada dispositivo IoT Edge e referenciados no arquivo config.yaml:
* `<WRKDIR>/certs/iot-edge-device-<CA cert name>-full-chain.cert.pem`
* `<WRKDIR>/private/iot-edge-device-<CA cert name>.key.pem`
Cada dispositivo precisa ter uma cópia do Certificado de AC raiz e uma cópia do próprio Certificado de Autoridade de Certificação do dispositivo e da chave privada. Use uma unidade USB ou uma [cópia de arquivo seguro](https://www.ssh.com/ssh/scp/) para mover os certificados para cada dispositivo.
1. Depois que os certificados forem transferidos, instale a AC raiz para cada dispositivo.
```bash
sudo cp <path>/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
sudo update-ca-certificates
```
Este comando deverá gerar um certificado que tenha sido adicionado em /etc/ssl/certs.
### <a name="install-iot-edge-on-the-devices"></a>Instalar o IoT Edge nos dispositivos
Instale o IoT Edge seguindo estas etapas nos dois dispositivos.
1. Atualize as listas de pacotes no dispositivo.
```bash
sudo apt-get update
```
1. Instale o mecanismo de Moby.
```bash
sudo apt-get install moby-engine
```
1. Instalar o hsmlib e o daemon do IoT Edge <!-- Update with proper image links on release -->
```bash
curl -L https://github.com/Azure/azure-iotedge/releases/download/1.2.0-rc2/libiothsm-std_1.2.0.rc2-1-1_debian9_amd64.deb -o libiothsm-std.deb
curl -L https://github.com/Azure/azure-iotedge/releases/download/1.2.0-rc2/iotedge_1.2.0_rc2-1_debian9_amd64.deb -o iotedge.deb
sudo dpkg -i ./libiothsm-std.deb
sudo dpkg -i ./iotedge.deb
```
### <a name="configure-the-iot-edge-runtime"></a>Configurar o runtime do IoT Edge
Configure o runtime do IoT Edge seguindo estas etapas nos dois dispositivos. A configuração do runtime do IoT Edge para seus dispositivos consiste em quatro etapas, todas realizadas com a edição do arquivo de configuração do IoT Edge:
1. Provisione manualmente cada dispositivo adicionando a cadeia de conexão do dispositivo ao arquivo de configuração.
1. Conclua a configuração dos certificados do dispositivo apontando o arquivo de configuração para o Certificado de Autoridade de Certificação do dispositivo, a chave privada da CA do dispositivo e o Certificado de AC raiz.
1. Inicialize o sistema usando o agente do IoT Edge.
1. Atualize o parâmetro **hostname** para o dispositivo de **camada superior** e atualize os parâmetros **hostname** e **parent_hostname** para os dispositivos de **camada inferior**.
Conclua estas etapas e reinicie o serviço do IoT Edge para configurar seus dispositivos.
1. Em cada dispositivo, abra o arquivo de configuração do IoT Edge.
```bash
sudo nano /etc/iotedge/config.yaml
```
1. Encontre as configurações de provisionamento do arquivo e remova a marca de comentário da seção **Configuração de provisionamento manual usando uma cadeia de conexão**.
1. Atualizar o valor de **device_connection_string** com a cadeia de caracteres de conexão do dispositivo IoT Edge. Verifique se alguma outra seção de provisionamento foi comentada. Verifique se a linha **provisionamento:** não tem nenhum espaço em branco precedente e se os itens aninhados são recuados em dois espaços.
```yml
# Manual provisioning configuration using a connection string
provisioning:
source: "manual"
device_connection_string: "<ADD DEVICE CONNECTION STRING HERE>"
dynamic_reprovisioning: false
```
>[!TIP]
>Para colar o conteúdo da área de transferência no Nano, `Shift+Right Click` ou clique em `Shift+Insert`.
1. Encontre a seção **certificados**. Remova a marca de comentário e atualize os três campos do certificado para que eles apontem para os certificados criados na seção anterior e movidos para o dispositivo IoT Edge. Forneça os caminhos de URI de arquivo, que usam o formato `file:///<path>/<filename>`.
```yml
certificates:
device_ca_cert: "<File URI path to the device CA certificate unique to this device.>"
device_ca_pk: "<File URI path to the device CA private key unique to this device.>"
trusted_ca_certs: "<File URI path to the root CA certificate shared by all devices in the gateway hierarchy.>"
```
1. Para o dispositivo de **camada superior**, encontre o parâmetro **hostname**. Atualize o valor para que ele seja o FQDN (nome de domínio totalmente qualificado) ou o endereço IP do dispositivo IoT Edge. Use qualquer valor escolhido de maneira consistente nos dispositivos da sua hierarquia.
```yml
hostname: <device fqdn or IP>
```
1. Para os dispositivos IoT Edge em **camadas inferiores**, atualize o arquivo de configuração para que ele aponte para o FQDN ou o IP do dispositivo pai, correspondendo a qualquer um que ele seja no campo **hostname** do dispositivo pai. Para os dispositivos IoT Edge na **camada superior**, mantenha esse parâmetro comentado.
```yml
parent_hostname: <parent device fqdn or IP>
```
> [!NOTE]
> Para as hierarquias com mais de uma camada inferior, atualize o campo *parent_hostname* com o FQDN do dispositivo da camada imediatamente acima.
1. Para o dispositivo de **camada superior**, encontre a seção YAML **agente** e atualize o valor da imagem para a versão correta do agente do IoT Edge. Nesse caso, apontaremos o agente do IoT Edge da camada superior para o Registro de Contêiner do Azure com a versão prévia pública da imagem do agente do IoT Edge disponível.
```yml
agent:
name: "edgeAgent"
type: "docker"
env: {}
config:
image: "mcr.microsoft.com/azureiotedge-agent:1.2.0-rc2"
auth: {}
```
1. Para os dispositivos IoT Edge em **camadas inferiores**, atualize o nome de domínio do valor da imagem para que ele aponte para o FQDN ou o IP do dispositivo pai, seguido da porta do proxy de API, 8000. Você adicionará o módulo de proxy de API na próxima seção.
```yml
agent:
name: "edgeAgent"
type: "docker"
env: {}
config:
image: "<parent_device_fqdn_or_ip>:8000/azureiotedge-agent:1.2.0-rc2"
auth: {}
```
1. Salve e feche o arquivo.
`CTRL + X`, `Y`, `Enter`
1. Depois de inserir as informações de provisionamento no arquivo de configuração, reinicie o daemon:
```bash
sudo systemctl restart iotedge
```
## <a name="deploy-modules-to-the-top-layer-device"></a>Implantar módulos no dispositivo de camada superior
As etapas restantes necessárias para concluir a configuração do runtime do IoT Edge e implantar cargas de trabalho são realizadas na nuvem por meio do portal do Azure ou da CLI do Azure.
# <a name="portal"></a>[Portal](#tab/azure-portal)
No [portal do Azure](https://ms.portal.azure.com/):
1. Navegue até seu Hub IoT.
1. No menu no painel esquerdo, em **Gerenciamento Automático de Dispositivo**, selecione **IoT Edge**.
1. Clique na identificação do dispositivo de borda de **camada superior** na lista de dispositivos.
1. Na barra superior, selecione **Definir Módulos**.
1. Selecione **Configurações de Runtime** ao lado do ícone de engrenagem.
1. Em **Hub do Edge**, no campo de imagem, insira `mcr.microsoft.com/azureiotedge-hub:1.2.0-rc2`.

1. Adicione as seguintes variáveis de ambiente ao módulo do Hub do Edge:
| Name | Valor |
| - | - |
| `experimentalFeatures__enabled` | `true` |
| `experimentalFeatures__nestedEdgeEnabled` | `true` |

1. Em **Agente do Edge**, no campo de imagem, insira `mcr.microsoft.com/azureiotedge-agent:1.2.0-rc2`. Selecione **Salvar**.
1. Adicione o módulo do registro do Docker ao dispositivo de camada superior. Selecione **+ Adicionar** e escolha **Módulo do IoT Edge** na lista suspensa. Forneça o nome `registry` para o módulo do registro do Docker e insira `registry:latest` como o URI da imagem. Em seguida, adicione variáveis de ambiente e crie opções para apontar o módulo de registro local para o registro de contêiner da Microsoft a fim de baixar bidirecionalmente imagens de contêiner fornecê-las em registry:5000.
1. Na guia Variáveis de ambiente, insira o seguinte par nome-valor da variável de ambiente:
| Name | Valor |
| - | - |
| `REGISTRY_PROXY_REMOTEURL` | `https://mcr.microsoft.com` |
1. Na guia Opções de criação de contêiner, insira o seguinte JSON:
```json
{
"HostConfig": {
"PortBindings": {
"5000/tcp": [
{
"HostPort": "5000"
}
]
}
}
}
```
1. Em seguida, adicione o módulo de proxy de API ao dispositivo de camada superior. Selecione **+ Adicionar** e escolha **Módulo do Marketplace** na lista suspensa. Pesquise `IoT Edge API Proxy` e selecione o módulo. O proxy de API do IoT Edge usa a porta 8000 e está configurado para usar o módulo do registro chamado `registry` na porta 5000 por padrão.
1. Selecione **Examinar + criar** e **Criar** para concluir a implantação. O runtime do IoT Edge do dispositivo de camada superior, que tem acesso à Internet, efetuará pull das configurações da **versão prévia pública** e a executará para o hub do IoT Edge e o agente do IoT Edge.

# <a name="azure-cli"></a>[CLI do Azure](#tab/azure-cli)
1. No [Azure Cloud Shell](https://shell.azure.com/), insira o seguinte comando para criar um arquivo deployment.json:
```azurecli-interactive
code deploymentTopLayer.json
```
1. Copie o conteúdo do JSON abaixo no deployment.json, salve-o e feche-o.
```json
{
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"modules": {
"dockerContainerRegistry": {
"settings": {
"image": "registry:latest",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5000/tcp\":[{\"HostPort\":\"5000\"}]}}}"
},
"type": "docker",
"version": "1.0",
"env": {
"REGISTRY_PROXY_REMOTEURL": {
"value": "https://mcr.microsoft.com"
},
"status": "running",
"restartPolicy": "always"
},
"IoTEdgeAPIProxy": {
"settings": {
"image": "mcr.microsoft.com/azureiotedge-api-proxy",
"createOptions": "{\"HostConfig\": {\"PortBindings\": {\"8000/tcp\": [{\"HostPort\":\"8000\"}]}}}"
},
"type": "docker",
"env": {
"NGINX_DEFAULT_PORT": {
"value": "8000"
},
"DOCKER_REQUEST_ROUTE_ADDRESS": {
"value": "registry:5000"
},
"BLOB_UPLOAD_ROUTE_ADDRESS": {
"value": "AzureBlobStorageonIoTEdge:11002"
}
},
"status": "running",
"restartPolicy": "always",
"version": "1.0"
}
},
"runtime": {
"settings": {
"minDockerVersion": "v1.25",
},
"type": "docker"
},
"schemaVersion": "1.1",
"systemModules": {
"edgeAgent": {
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.2.0-rc2",
"createOptions": ""
},
"type": "docker"
},
"edgeHub": {
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.2.0-rc2",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"443/tcp\":[{\"HostPort\":\"443\"}],\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}"
},
"type": "docker",
"env": {
"experimentalFeatures__enabled": {
"value": "true"
},
"experimentalFeatures__nestedEdgeEnabled": {
"value": "true"
}
},
"status": "running",
"restartPolicy": "always"
}
}
}
},
"$edgeHub": {
"properties.desired": {
"routes": {
"route": "FROM /messages/* INTO $upstream"
},
"schemaVersion": "1.1",
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
}
}
```
1. Insira o seguinte comando para criar uma implantação para o dispositivo de borda de camada superior:
```azurecli-interactive
az iot edge set-modules --device-id <top_layer_device_id> --hub-name <iot_hub_name> --content ./deploymentTopLayer.json
```
---
## <a name="deploy-modules-to-the-lower-layer-device"></a>Implantar módulos no dispositivo de camada inferior
Use o portal do Azure e a CLI do Azure para implantar cargas de trabalho da nuvem nos seus dispositivos de **camada inferior**.
# <a name="portal"></a>[Portal](#tab/azure-portal)
No [portal do Azure](https://ms.portal.azure.com/):
1. Navegue até seu Hub IoT.
1. No menu no painel esquerdo, em **Gerenciamento Automático de Dispositivo**, selecione **IoT Edge**.
1. Clique na identificação do dispositivo de camada inferior na lista de dispositivos IoT Edge.
1. Na barra superior, selecione **Definir Módulos**.
1. Selecione **Configurações de Runtime** ao lado do ícone de engrenagem.
1. Em **Hub do Edge**, no campo de imagem, insira `$upstream:8000/azureiotedge-hub:1.2.0-rc2`.
1. Adicione as seguintes variáveis de ambiente ao módulo do Hub do Edge:
| Name | Valor |
| - | - |
| `experimentalFeatures__enabled` | `true` |
| `experimentalFeatures__nestedEdgeEnabled` | `true` |
1. Em **Agente do Edge**, no campo de imagem, insira `$upstream:8000/azureiotedge-agent:1.2.0-rc2`. Selecione **Salvar**.
1. Adicione o módulo de sensor de temperatura. Selecione **+ Adicionar** e escolha **Módulo do Marketplace** na lista suspensa. Pesquise `Simulated Temperature Sensor` e selecione o módulo.
1. Em **Módulos do IoT Edge**, selecione o módulo `Simulated Temperature Sensor` recém-adicionados e atualize o URI de imagem para que ele aponte para `$upstream:8000/azureiotedge-simulated-temperature-sensor:1.0`.
1. Selecione **Salvar**, **Examinar + criar** e **Criar** para concluir a implantação.

# <a name="azure-cli"></a>[CLI do Azure](#tab/azure-cli)
1. No [Azure Cloud Shell](https://shell.azure.com/), insira o seguinte comando para criar um arquivo deployment.json:
```azurecli-interactive
code deploymentLowerLayer.json
```
1. Copie o conteúdo do JSON abaixo no deployment.json, salve-o e feche-o.
```json
{
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"modules": {
"simulatedTemperatureSensor": {
"settings": {
"image": "$upstream:8000/azureiotedge-simulated-temperature-sensor:1.0",
"createOptions": ""
},
"type": "docker",
"status": "running",
"restartPolicy": "always",
"version": "1.0"
}
},
"runtime": {
"settings": {
"minDockerVersion": "v1.25",
},
"type": "docker"
},
"schemaVersion": "1.1",
"systemModules": {
"edgeAgent": {
"settings": {
"image": "$upstream:8000/azureiotedge-agent:1.2.0-rc2",
"createOptions": ""
},
"type": "docker"
},
"edgeHub": {
"settings": {
"image": "$upstream:8000/azureiotedge-hub:1.2.0-rc2",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"443/tcp\":[{\"HostPort\":\"443\"}],\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}"
},
"type": "docker",
"env": {
"experimentalFeatures__enabled": {
"value": "true"
},
"experimentalFeatures__nestedEdgeEnabled": {
"value": "true"
}
},
"status": "running",
"restartPolicy": "always"
}
}
}
},
"$edgeHub": {
"properties.desired": {
"routes": {
"route": "FROM /messages/* INTO $upstream"
},
"schemaVersion": "1.1",
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
}
}
```
1. Insira o seguinte comando para criar uma implantação de configuração de módulos para o dispositivo de borda de camada inferior:
```azurecli-interactive
az iot edge set-modules --device-id <lower_layer_device_id> --hub-name <iot_hub_name> --content ./deploymentLowerLayer.json
---
Notice that the image URI that we used for the simulated temperature sensor module pointed to `$upstream:8000` instead of to a container registry. We configured this device to not have direct connections to the cloud, because it's in a lower layer. To pull container images, this device requests the image from its parent device instead. At the top layer, the API proxy module routes this container request to the registry module, which handles the image pull.
On the device details page for your lower layer IoT Edge device, you should now see the temperature sensor module listed along the system modules as **Specified in deployment**. It may take a few minutes for the device to receive its new deployment, request the container image, and start the module. Refresh the page until you see the temperature sensor module listed as **Reported by device**.
## View generated data
The **Simulated Temperature Sensor** module that you pushed generates sample environment data. It sends messages that include ambient temperature and humidity, machine temperature and pressure, and a timestamp.
You can watch the messages arrive at your IoT hub by using the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
You can also view these messages through the [Azure Cloud Shell](https://shell.azure.com/):
```azurecli-interactive
az iot hub monitor-events -n <iothub_name> -d <lower-layer-device-name>
```
## <a name="clean-up-resources"></a>Limpar os recursos
Exclua as configurações locais e os recursos do Azure criados neste artigo para evitar custos.
Para excluir os recursos:
1. Entre no [portal do Azure](https://portal.azure.com) e selecione **Grupos de recursos**.
2. Selecione o nome do grupo de recursos que contém os recursos de teste do IoT Edge.
3. Reveja a lista de recursos contidos no grupo de recursos. Se você deseja excluir todos eles, selecione **Excluir grupo de recursos**. Se você quiser excluir apenas alguns deles, clique em cada recurso para excluí-los individualmente.
## <a name="next-steps"></a>Próximas etapas
Neste tutorial, você configurou dois dispositivos IoT Edge como gateways e definiu um deles como o dispositivo pai do outro. Em seguida, você demonstrou a extração de uma imagem de contêiner para o dispositivo filho por meio de um gateway. Experimente também esse cenário seguindo o [exemplo do Azure IoT Edge para IoT Industrial](https://aka.ms/iotedge-nested-sample) com script, que implanta máquinas virtuais do Azure como dispositivos pré-configurados para simular um ambiente de fábrica.
Continue com os outros tutoriais para saber como o Azure IoT Edge pode criar outras soluções para seus negócios.
> [!div class="nextstepaction"]
> [Implantar um modelo do Azure Machine Learning como um módulo](tutorial-deploy-machine-learning.md) | 51.705977 | 687 | 0.657595 | por_Latn | 0.981397 |
58c29d4f3dd1a602b2004feaacd062b356ed8c64 | 197 | md | Markdown | docs/build/index.md | wekan/wekan-doc | 2187f675f3c9795ff6c54266dbee166a01df4b8d | [
"MIT"
] | null | null | null | docs/build/index.md | wekan/wekan-doc | 2187f675f3c9795ff6c54266dbee166a01df4b8d | [
"MIT"
] | null | null | null | docs/build/index.md | wekan/wekan-doc | 2187f675f3c9795ff6c54266dbee166a01df4b8d | [
"MIT"
] | 2 | 2019-05-22T09:47:48.000Z | 2021-09-24T06:17:23.000Z | # Developpement guide
* Building
* [Debian Package for Debian's families operating systems without systemd](package.md)
* [Wekan meteor bundle](bundle.md)
* [Translations](translations.md) | 32.833333 | 90 | 0.746193 | eng_Latn | 0.573784 |
58c3279eb5bb36b8379dddd915f1e021ecd2393b | 843 | md | Markdown | README.md | Zertz/stripe-balance | c3b93337ed1c1a6c81deb46c2ff910df27ed3c22 | [
"MIT"
] | 1 | 2016-02-13T09:22:54.000Z | 2016-02-13T09:22:54.000Z | README.md | Zertz/stripe-balance | c3b93337ed1c1a6c81deb46c2ff910df27ed3c22 | [
"MIT"
] | null | null | null | README.md | Zertz/stripe-balance | c3b93337ed1c1a6c81deb46c2ff910df27ed3c22 | [
"MIT"
] | null | null | null | **DEPRECATED: please use the [Stripe API](https://stripe.com/docs/api/node#balance) directly**
stripe-balance
==============
[](https://badge.fury.io/js/stripe-balance) [](https://travis-ci.org/Zertz/cloudcp) [](http://standardjs.com/)
Retrieve Stripe account balance
API
---
```javascript
var stripeBalance = require('stripe-balance')
stripeBalance({
secretKey: 'stripe-secret',
account: 'stripe-account'
}, function (err, balance) {
/*
balance: {
amount_available: 0,
amount_pending: 0
}
*/
})
```
License
-------
[MIT](https://github.com/Zertz/stripe-balance/blob/master/LICENSE)
| 25.545455 | 334 | 0.689205 | yue_Hant | 0.357977 |
58c340a752d426df35e1a68a26c683a2b168a774 | 2,862 | md | Markdown | wdk-ddi-src/content/netpowersettings/nf-netpowersettings-netpowersettingsiswakepatternenabled.md | xiaoyinl/windows-driver-docs-ddi | 2442baf424975cfeec65190615ed8638a01791b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/netpowersettings/nf-netpowersettings-netpowersettingsiswakepatternenabled.md | xiaoyinl/windows-driver-docs-ddi | 2442baf424975cfeec65190615ed8638a01791b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/netpowersettings/nf-netpowersettings-netpowersettingsiswakepatternenabled.md | xiaoyinl/windows-driver-docs-ddi | 2442baf424975cfeec65190615ed8638a01791b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:netpowersettings.NetPowerSettingsIsWakePatternEnabled
title: NetPowerSettingsIsWakePatternEnabled function (netpowersettings.h)
description: Determines if a wake-on-LAN (WoL) pattern is enabled for a network adapter.
tech.root: netvista
ms.assetid: 3ae9bce4-27db-404a-a9c7-6958004fcd0d
ms.date: 02/08/2018
ms.topic: function
f1_keywords:
- "netpowersettings/NetPowerSettingsIsWakePatternEnabled"
ms.keywords: NetPowerSettingsIsWakePatternEnabled
req.header: netpowersettings.h
req.include-header: netadaptercx.h
req.target-type: Universal
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver: 1.23
req.umdf-ver:
req.lib: NetAdapterCxStub.lib
req.dll:
req.irql: PASSIVE_LEVEL
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.alt-api:
req.alt-loc:
topic_type:
- apiref
api_type:
- HeaderDef
api_location:
- netpowersettings.h
api_name:
- NetPowerSettingsIsWakePatternEnabled
targetos: Windows
product:
- Windows
---
# NetPowerSettingsIsWakePatternEnabled function
## -description
> [!WARNING]
> Some information in this topic relates to prereleased product, which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
>
> NetAdapterCx is preview only in Windows 10, version 1903.
Determines if a wake-on-LAN (WoL) pattern is enabled for a network adapter.
## -parameters
### -param NetPowerSettings
A handle to the NETPOWERSETTINGS object associated with the net adapter. To retrieve the handle, call [NetAdapterGetPowerSettings](../netadapter/nf-netadapter-netadaptergetpowersettings.md).
### -param NdisPmWolPattern
A pointer to an [NDIS_PM_WOL_PATTERN](../ntddndis/ns-ntddndis-_ndis_pm_wol_pattern.md) structure obtained by calling [NetPowerSettingsGetWakePattern](nf-netpowersettings-netpowersettingsgetwakepattern.md).
## -returns
Returns **TRUE** if the WoL pattern is enabled and the driver must enable it in its hardware, and **FALSE** otherwise.
## -remarks
The client driver must only call **NetPowerSettingsIsWakePatternEnabled** during a power transition, typically from its *[EVT_WDF_DEVICE_ARM_WAKE_FROM_SX](../wdfdevice/nc-wdfdevice-evt_wdf_device_arm_wake_from_sx.md)*, *[EVT_WDF_DEVICE_ARM_WAKE_FROM_S0](../wdfdevice/nc-wdfdevice-evt_wdf_device_arm_wake_from_s0.md)*, or *[EVT_NET_ADAPTER_PREVIEW_WAKE_PATTERN](../netadapter/nc-netadapter-evt_net_adapter_preview_wake_pattern.md)* callback function. Otherwise, the call results in a system bugcheck.
If the wake pattern is enabled, the driver programs its hardware to enable the pattern during a power down transition.
## -see-also
[NDIS_PM_WOL_PATTERN](../ntddndis/ns-ntddndis-_ndis_pm_wol_pattern.md)
| 37.168831 | 500 | 0.790007 | eng_Latn | 0.570355 |
58c56ba08c6674d778076bf2237f48e4bb266d72 | 2,434 | md | Markdown | src/posts/twis-2019-11-2.md | seoh/seoh.dev | a1e67f148be70452aed430a2ea6d8eda18d51cd9 | [
"MIT"
] | null | null | null | src/posts/twis-2019-11-2.md | seoh/seoh.dev | a1e67f148be70452aed430a2ea6d8eda18d51cd9 | [
"MIT"
] | 3 | 2019-11-04T07:16:20.000Z | 2022-02-26T01:44:57.000Z | src/posts/twis-2019-11-2.md | seoh/seoh.dev | a1e67f148be70452aed430a2ea6d8eda18d51cd9 | [
"MIT"
] | null | null | null | ---
layout: layouts/post.njk
title: TWIS/2019-11 2주차
date: 2019-11-18T10:25:36.929Z
tags:
- twis
---
## Devs
- [Are There Random Numbers in CSS?](https://css-tricks.com/are-there-random-numbers-in-css)
- 똑같이 생긴 레이어를 3초 주기로 z-index로 맨 위로 올리는데, i*3/n ([for i in range(n)])초 딜레이를 줘서 유저가 보는건 똑같은데 3/n초마다 순서대로 변경되는 중. 엄밀히 말하면 랜덤이라고 하기 어렵지만 pure css로 구현했다는데 의미있고 재미있는 시도.
- #css
## News
- [More South Korean academics caught naming kids as co-authors](https://www.nature.com/articles/d41586-019-03371-0)
- [교육부, 미성년 공저자 논문 등 관련 특별감사 결과 발표](https://www.moe.go.kr/boardCnts/view.do?boardID=294&boardSeq=78739&lev=0)를 인용하며 한국 대학교 교수의 자녀, 지인자녀 등 중고등학생이 기여없이 논문 공저자로 등재된 일들이 네이처 기사에 올라왔다
- #news
- [A16Z-backed Shift.org announces veterans hiring pipeline partnership with Better.com](https://techcrunch.com/2019/11/11/a16z-backed-shift-org-announces-veterans-hiring-pipeline-partnership-with-better-com)
- 안데르센 호로위츠(a.k.a a16z)가 투자한 shift.org가 better.com과 함께 파트너를 맺고 퇴역군인의 재취업을 돕기로. shift는 퇴역군인들의 민간기업 취업을 위해 재교육을 해주는 회사, better.com은 부동산중계 스타트업. 그리고 [Lambda School](https://lambdaschool.com/shift)이라는 코딩교육회사와도 파트너. 한국도 공무원/군인 등 폐쇄적인 환경에서 일하다 은퇴보다는 이른 나이에 그만둔 사람들이 퇴직금 날려먹는 이야기를 종종 들어서 이런 회사들이나 제도가 있었으면 좋겠다는 생각에 기록
- #news
- [Yahoo Japan to merge with Naver's Line app](https://asia.nikkei.com/Business/Business-deals/Yahoo-Japan-to-merge-with-Naver-s-Line-app)
- 야후 재팬의 모회사 Z Holdings의 대주주(45%)인 소프트뱅크와 라인의 대주주(73%)인 네이버가 둘을 합병하려고 한다고. 된다면 매출상으로는 라쿠텐을 추월하고, 3700만명의 라인페이와 1900만명의 페이페이가 합쳐지면 시너지가 클 것이라는 이야기도.
- #news
- [Mirantis acquires Docker Enterprise](https://techcrunch.com/2019/11/13/mirantis-acquires-docker-enterprise/)
- [After selling enterprise biz, Docker lands $35M investment and new CEO](https://techcrunch.com/2019/11/13/after-selling-enterprise-biz-docker-lands-35m-investment-and-new-ceo)
- [Is Docker in Trouble?](https://start.jcolemorrison.com/is-docker-in-trouble/)
- Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane, Docker CLI 등 관련 지적재산권들도 인수했다는데 남은 수익모델이 뭔지 모르겠지만 인수 후에 35M 투자를 받은걸로봐서 합치면 크게 뭔가 시작하기엔 충분해보인다.
- #docker #news
- [Three of Apple and Google’s former star chip designers launch NUVIA with $53M in series A funding](https://techcrunch.com/2019/11/15/three-of-apple-and-googles-former-star-chip-designers-launch-nuvia-with-53m-in-series-a-funding)
- 창업자 셋은 애플에서 몇년동안 같이 일했고 20개 칩 개발에 참여했고 반도체쪽 100개 이상의 특허도. 데이터센터용을 만든다는데 ARM에서 만드는 저전력이랑 경쟁하려나
- #news
| 64.052632 | 311 | 0.751438 | kor_Hang | 0.999915 |
58c593430a7f3434c16cecff8146f04eac992d76 | 3,592 | md | Markdown | docs/master-data-services/create-a-consolidated-member-master-data-services.md | sql-aus-hh/sql-docs.de-de | edfac31211cedb5d13440802f131a1e48934748a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-02-25T18:10:29.000Z | 2022-02-25T18:10:29.000Z | docs/master-data-services/create-a-consolidated-member-master-data-services.md | sql-aus-hh/sql-docs.de-de | edfac31211cedb5d13440802f131a1e48934748a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/master-data-services/create-a-consolidated-member-master-data-services.md | sql-aus-hh/sql-docs.de-de | edfac31211cedb5d13440802f131a1e48934748a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Erstellen eines konsolidierten Elements (Master Data Services) | Microsoft-Dokumentation
ms.custom: ''
ms.date: 04/01/2016
ms.prod: sql
ms.prod_service: mds
ms.reviewer: ''
ms.technology:
- master-data-services
ms.topic: conceptual
helpviewer_keywords:
- creating consolidated members [Master Data Services]
- members [Master Data Services], creating consolidated members
- consolidated members [Master Data Services], creating
ms.assetid: 431ab2d2-5517-4372-9980-142b05427c08
author: leolimsft
ms.author: lle
manager: craigg
ms.openlocfilehash: 60510b3d7eefea9add5dec803b2f5e86dc38306e
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/01/2018
ms.locfileid: "47596689"
---
# <a name="create-a-consolidated-member-master-data-services"></a>Create a Consolidated Member (Master Data Services)
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md-winonly](../includes/appliesto-ss-xxxx-xxxx-xxx-md-winonly.md)]
Erstellen Sie in [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)]ein konsolidiertes Element, wenn Sie einen übergeordneten Knoten für eine explizite Hierarchie wollen. Wenn Sie Daten in einer Massenoperation hinzufügen möchten, verwenden Sie stattdessen die Stagingtabellen. Weitere Informationen finden Sie unter [Importieren von Daten aus Tabellen (Master Data Services)](../master-data-services/import-data-from-tables-master-data-services.md).
## <a name="prerequisites"></a>Voraussetzungen
So führen Sie diese Prozedur aus
- Sie müssen über die Berechtigung für den Zugriff auf den Funktionsbereich **Explorer** verfügen.
- Sie müssen mindestens die Berechtigung **Aktualisieren** für das konsolidierte Modellobjekt für die Entität besitzen, der Sie ein Element hinzufügen, sowie die Berechtigung **Erstellen** für den konsolidierten Typ unter der Entität.
### <a name="to-create-a-consolidated-member"></a>So erstellen Sie ein konsolidiertes Element
1. Wählen Sie auf der [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)] -Startseite aus der Liste **Modell** ein Modell aus.
2. Wählen Sie aus der Liste **Version** eine Version aus.
3. Klicken Sie auf **Explorer**.
4. Zeigen Sie auf der Menüleiste auf **Hierarchien** , und klicken Sie auf den Namen der Hierarchie, der Sie ein konsolidiertes Element hinzufügen möchten.
5. Aktivieren Sie über dem Raster die Option **Konsolidierte Elemente** oder **Alle konsolidierten Elemente in der Hierarchie** .
6. Wählen Sie im linken Bereich einen Stammknoten oder ein konsolidiertes Element aus, unter dem Sie ein konsolidiertes Element erstellen möchten.
7. Klicken Sie auf **Hinzufügen**.
8. Vervollständigen Sie im Bereich rechts die Felder.
9. Optional. Geben Sie im Feld **Anmerkungen** einen Kommentar dazu ein, warum das Element hinzugefügt wurde. Alle Benutzer, die Zugriff auf das Element haben, können die Anmerkung anzeigen.
10. Klicken Sie auf **OK**.
## <a name="see-also"></a>Weitere Informationen finden Sie unter
[Erstellen einer expliziten Hierarchie (Master Data Services)](../master-data-services/create-an-explicit-hierarchy-master-data-services.md)
[Erstellen eines Blattelements (Master Data Services)](../master-data-services/create-a-leaf-member-master-data-services.md)
[Elemente (Master Data Services)](../master-data-services/members-master-data-services.md)
[Explizite Hierarchien (Master Data Services)](../master-data-services/explicit-hierarchies-master-data-services.md)
| 52.823529 | 459 | 0.76559 | deu_Latn | 0.925189 |
58c5bc96b5c51f7524b8e9f27695ce4b999a35d4 | 31,471 | md | Markdown | docs/standard/security/cryptographic-services.md | Dodozz/docs.it-it | f34c4bb1e8afb7492f8512359d32a9156c9c768d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/security/cryptographic-services.md | Dodozz/docs.it-it | f34c4bb1e8afb7492f8512359d32a9156c9c768d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/security/cryptographic-services.md | Dodozz/docs.it-it | f34c4bb1e8afb7492f8512359d32a9156c9c768d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: servizi crittografici
ms.date: 03/30/2017
ms.technology: dotnet-standard
helpviewer_keywords:
- cryptography [.NET Framework]
- pattern of derived class inheritance
- digital signatures
- asymmetric cryptographic algorithms
- digital signatures, public-key systems
- public keys
- decryption [.NET Framework]
- private keys
- MAC algorithms
- cryptographic algorithms
- private keys, overview
- encryption [.NET Framework]
- security [.NET Framework], encryption
- cryptographic services
- symmetric cryptographic algorithms
- hash
- message authentication codes
- derived class inheritance
- cryptography [.NET Framework], about
- random number generation
ms.assetid: f96284bc-7b73-44b5-ac59-fac613ad09f8
author: mairaw
ms.author: mairaw
ms.openlocfilehash: 1f773b6f7d0b8b4e0b8647b7086d8782d1afbb93
ms.sourcegitcommit: d8ebe0ee198f5d38387a80ba50f395386779334f
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 06/05/2019
ms.locfileid: "66690527"
---
# <a name="cryptographic-services"></a>Cryptographic services
<a name="top"></a> Public networks such as the Internet do not provide a means of secure communication between entities. Communication over such networks is susceptible to being read or even modified by unauthorized third parties. Cryptography helps protect data from being viewed, provides ways to detect whether data has been modified, and helps provide a secure means of communication over otherwise nonsecure channels. For example, data can be encrypted by using a cryptographic algorithm, transmitted in an encrypted state, and later decrypted by the intended party. If a third party intercepts the encrypted data, it will be difficult to decipher.
In the .NET Framework, the classes in the <xref:System.Security.Cryptography?displayProperty=nameWithType> namespace manage many details of cryptography for you. Some are wrappers for the unmanaged Microsoft Cryptography API (CryptoAPI), while others are purely managed implementations. You do not need to be an expert in cryptography to use these classes. When you create a new instance of one of the encryption algorithm classes, keys are autogenerated for ease of use, and default properties are as safe and secure as possible.
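 For example, the following minimal C# sketch (an illustration added here, not an official sample; the class and variable names are arbitrary) shows that merely creating an algorithm instance is enough to obtain a cryptographically strong random key and initialization vector:  
  
```csharp  
using System;  
using System.Security.Cryptography;  
  
class AutoKeyExample  
{  
    static void Main()  
    {  
        // Creating the instance is enough: a cryptographically strong  
        // random key and IV are generated automatically on first access.  
        using (Aes aes = Aes.Create())  
        {  
            Console.WriteLine("Key length: " + aes.Key.Length + " bytes"); // 32 bytes (256-bit key) by default  
            Console.WriteLine("IV length:  " + aes.IV.Length + " bytes");  // 16 bytes (128-bit block)  
        }  
    }  
}  
```  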
 This overview provides a synopsis of the encryption methods and practices supported by the .NET Framework, including the ClickOnce manifests, Suite B, and the Cryptography Next Generation (CNG) support introduced in the .NET Framework 3.5.
 This overview contains the following sections:  
- [Cryptographic Primitives](#primitives)  
- [Secret-Key Encryption](#secret_key)  
- [Public-Key Encryption](#public_key)  
- [Digital Signatures](#digital_signatures)  
- [Hash Values](#hash_values)  
- [Random Number Generation](#random_numbers)  
- [ClickOnce Manifests](#clickonce)  
- [Suite B Support](#suite_b)  
- [Related Topics](#related_topics)  
 For more information about cryptography and about the Microsoft services, components, and tools that enable you to add cryptographic security to your applications, see the Win32 and COM Development, Security section of this documentation.
<a name="primitives"></a>
## <a name="cryptographic-primitives"></a>Cryptographic primitives
 In a typical situation where cryptography is used, two parties (Alice and Bob) communicate over a nonsecure channel. Alice and Bob want to ensure that their communication remains incomprehensible to anyone who might be listening. Furthermore, because Alice and Bob are in remote locations, Alice must make sure that the information she receives from Bob has not been modified by anyone during transmission. In addition, she must make sure that the information really does originate from Bob and not from someone who is impersonating him.
 Cryptography is used to achieve the following goals:  
- Confidentiality: To help protect a user's identity or data from being read.  
- Data integrity: To help protect data from being changed.  
- Authentication: To ensure that data originates from a particular party.  
- Non-repudiation: To prevent a particular party from denying that they sent a message.  
 To achieve these goals, you can use a combination of algorithms and practices known as cryptographic primitives to create a cryptographic scheme. The following table lists the cryptographic primitives and their uses.  
|Cryptographic primitive|Use|  
|-----------------------------|---------|  
|Secret-key encryption (symmetric cryptography)|Performs a transformation on data to keep it from being read by third parties. This type of encryption uses a single shared, secret key to encrypt and decrypt data.|  
|Public-key encryption (asymmetric cryptography)|Performs a transformation on data to keep it from being read by third parties. This type of encryption uses a public/private key pair to encrypt and decrypt data.|  
|Cryptographic signing|Helps verify that data originates from a specific party by creating a digital signature that is unique to that party. This process also uses hash functions.|  
|Cryptographic hashes|Maps data of any length to a fixed-length byte sequence. Hashes are statistically unique; a different two-byte sequence will not hash to the same value.|  
[Torna all'inizio](#top)
<a name="secret_key"></a>
## <a name="secret-key-encryption"></a>Crittografia a chiave segreta
Gli algoritmi di crittografia a chiave segreta usano una singola chiave segreta per crittografare e decrittografare i dati. È necessario proteggere la chiave dall'accesso da parte di agenti non autorizzati, poiché chiunque sia in possesso della chiave la potrà usare per decrittografare i dati o crittografare i propri dati, affermando che sono stati originati dall'utente.
La crittografia a chiave segreta è definita anche crittografia simmetrica, poiché si usa la stessa chiave per la crittografia e la decrittografia. Gli algoritmi di crittografia a chiave segreta sono molto veloci, rispetto agli algoritmi a chiave pubblica e sono ottimali per l'esecuzione di trasformazioni crittografiche su grandi flussi di dati. Gli algoritmi di crittografia asimmetrica, ad esempio RSA, sono matematicamente limitati a livello di quantità di dati che sono in grado di crittografare. Gli algoritmi di crittografia simmetrica non presentano in genere questi problemi.
Un tipo di algoritmo a chiave segreta, denominato crittografia a blocchi, viene usato per crittografare un blocco di dati alla volta. Tramite la crittografia a blocchi, come Data Encryption Standard (DES), TripleDES e Advanced Encryption Standard (AES), un blocco di input di *n* byte viene trasformato a livello di crittografia in un blocco di output di byte crittografati. Se si vuole crittografare o decrittografare una sequenza di byte, è necessario eseguire tale operazione blocco per blocco. Poiché la dimensione di *n* è limitata (8 byte per DES e TripleDES, 16 byte come valore predefinito, 24 o 32 byte per AES), i valori di dati superiori a *n* devono essere crittografati un blocco alla volta. I valori di dati inferiori a *n* devono essere espansi a *n* per essere elaborati.
Una forma semplice di crittografia a blocchi viene definita modalità ECB (Electronic Codebook). La modalità ECB non è considerata sicura, poiché non usa un vettore di inizializzazione per inizializzare il primo blocco di testo normale. Per una determinata chiave segreta *k*, tramite una semplice crittografia a blocchi in cui non viene usato un vettore di inizializzazione lo stesso blocco di input di testo non crittografato verrà crittografato nello stesso blocco di output di testo crittografato. Se sono quindi presenti blocchi duplicati nel flusso di testo normale di input, saranno presenti blocchi duplicati nel flusso di testo crittografato di output. Questi blocchi di output duplicati segnalano agli utenti non autorizzati la crittografia debole usata, gli algoritmi usati e le possibili modalità di attacco. La modalità ECB è quindi abbastanza vulnerabile all'analisi e quindi all'individuazione delle chiavi.
Le classi d crittografia a blocchi fornite nella libreria di classi base usano una modalità di concatenamento predefinita denominata CBC (Cipher-Block Chaining), ma se si vuole è possibile modificare questa impostazione predefinita.
La crittografia CBC supera i problemi associati alla crittografia ECB usando un vettore di inizializzazione per crittografare il primo blocco di testo normale. Ogni blocco successivo di testo normale viene sottoposto a un'operazione Bitwise-OR esclusiva (`XOR`) con il blocco di testo crittografato precedente prima della crittografia. Ogni blocco di testo crittografato dipende prima da tutti i blocchi precedenti. Quando viene usato questo sistema, le intestazioni di messaggio comuni che potrebbero essere note a un utente non autorizzato non potranno essere usate per decodificare una chiave.
Un modo per compromettere dati crittografati con la modalità CBC consiste nell'eseguire una ricerca completa di ogni chiave possibile. In base alle dimensioni della chiave usata per eseguire la crittografia, questo tipo di ricerca richiede molto tempo anche nei computer più veloci ed è quindi irrealizzabile. Le dimensioni di chiave più elevate sono più difficili da decifrare. Benché la crittografia non renda teoricamente impossibile a un utente malintenzionato il recupero di dati crittografati, aumenta il costo di tale operazione. Se sono necessari tre mesi per eseguire una ricerca completa per recuperare dati che risultano significativi solo per alcuni giorni, il metodo di ricerca completa risulta poco pratico.
Lo svantaggio della crittografia a chiave segreta consiste nel fatto che presuppone che le due parti si siano accordate su una chiave e un vettore di inizializzazione e che abbiano comunicato i rispettivi valori. Il vettore di inizializzazione non è considerato segreto e può essere trasmesso in testo normale con il messaggio. La chiave deve essere tuttavia mantenuta segreta agli utenti non autorizzati. A causa di questi problemi, la crittografia a chiave segreta viene usata spesso insieme alla crittografia a chiave pubblica per comunicare in modo privato i valori della chiave e del vettore di inizializzazione.
Supponendo che Alice e Bob siano due parti che vogliono comunicare su un canale non sicuro, è possibile usare crittografia a chiave segreta come indicato di seguito: Alice e Bob decidono di usare un algoritmo specifico (ad esempio, AES) con una chiave e un vettore di Inizializzazione. Alice compone un messaggio e crea un flusso di rete (forse una named pipe o rete posta elettronica) su cui inviare il messaggio. In seguito crittografa il testo usando la chiave e il vettore di inizializzazione e invia il messaggio crittografato e il vettore di inizializzazione a Bob tramite Intranet. Bob riceve il testo crittografato e lo decrittografa usando il vettore di inizializzazione e la chiave precedentemente concordata. Se la trasmissione viene intercettata, l'intercettore non può recuperare il messaggio originale, poiché non conosce la chiave. In questo scenario, solo la chiave deve rimanere segreta. In uno scenario reale Alice o Bob genera una chiave segreta e usa la crittografia a chiave pubblica (asimmetrica) per trasferire la chiave segreta (simmetrica) all'altra parte. Per altre informazioni sulla crittografia a chiave pubblica, vedere la sezione successiva.
.NET Framework fornisce le classi seguenti che implementano gli algoritmi di crittografia a chiave segreta:
- <xref:System.Security.Cryptography.AesManaged> (introdotta in .NET Framework 3.5).
- <xref:System.Security.Cryptography.DESCryptoServiceProvider>.
- <xref:System.Security.Cryptography.HMACSHA1> : si tratta tecnicamente di un algoritmo a chiave segreta in quanto rappresenta il codice di autenticazione del messaggio calcolato usando una funzione hash di crittografia combinata con una chiave segreta. Vedere [Valori hash](#hash_values)più avanti in questo argomento.
- <xref:System.Security.Cryptography.RC2CryptoServiceProvider>.
- <xref:System.Security.Cryptography.RijndaelManaged>.
- <xref:System.Security.Cryptography.TripleDESCryptoServiceProvider>.
[Torna all'inizio](#top)
<a name="public_key"></a>
## <a name="public-key-encryption"></a>Crittografia a chiave pubblica
a crittografia a chiave pubblica usa una chiave privata che deve essere tenuta segreta agli utenti non autorizzati e una chiave pubblica che può essere resa pubblica a tutti. La chiave pubblica e la chiave privata sono collegate matematicamente. I dati crittografati con la chiave pubblica possono essere decrittografati solo con la chiave privata e i dati firmati con la chiave privata possono essere verificati solo con la chiave pubblica. La chiave pubblica può essere distribuita a tutti in quanto viene usata per crittografare i dati da inviare a chi detiene la chiave privata. Gli algoritmi di crittografia a chiave pubblica sono noti anche come algoritmi asimmetrici, poiché per crittografare e successivamente decrittografare i dati è necessario usare due chiavi diverse. Una regola di crittografia di base proibisce il riutilizzo di chiavi ed entrambe le chiavi devono essere univoche per ogni sessione di comunicazione. In pratica, tuttavia, le chiavi asimmetriche sono in genere di lunga durata.
Due parti (Alice e Bob) possono utilizzare crittografia a chiave pubblica come indicato di seguito: In primo luogo, Alice genera una coppia di chiavi pubblica/privata. Se Bob vuole inviare ad Alice un messaggio crittografato, le chiede la chiave pubblica. Alice invia a Bob la chiave pubblica su una rete non sicura e Bob la usa per crittografare un messaggio. Bob invia il messaggio crittografato ad Alice, che lo decrittografa usando la sua chiave privata. Se Bob ha ricevuto la chiave di Alice su un canale non sicuro, ad esempio una rete pubblica, sarà esposto a un attacco di tipo man-in-the-middle. Bob deve quindi verificare con Alice che la copia a sua disposizione della chiave pubblica sia corretta.
Durante la trasmissione della chiave pubblica di Alice, una persona non autorizzata potrebbe intercettare la chiave. La stessa persona potrebbe intercettare anche il messaggio crittografato da Bob. Tale persona, tuttavia, non potrà decrittografare il messaggio con la chiave pubblica. Il messaggio può essere decrittografato solo con la chiave privata di Alice che non è stata trasmessa. Alice non usa la propria chiave privata per crittografare un messaggio di risposta a Bob, poiché chiunque sia in possesso della chiave pubblica potrebbe decrittografare il messaggio. Se Alice vuole inviare un messaggio a Bob, gli chiede la chiave pubblica e crittografa il suo messaggio usando quella chiave. Bob decrittografa quindi il messaggio usando la sua chiave privata associata.
In questo scenario Alice e Bob usano la crittografia a chiave pubblica (asimmetrica) per trasferire una chiave segreta (simmetrica) e usano la crittografia a chiave segreta per il resto della sessione.
L'elenco seguente offre un confronto tra algoritmi di crittografia a chiave pubblica e a chiave segreta:
- Gli algoritmi di crittografia a chiave pubblica usano una dimensione fissa del buffer, mentre gli algoritmi di crittografia a chiave segreta usano un buffer a lunghezza variabile.
- Gli algoritmi di crittografia a chiave pubblica non possono essere usati per il concatenamento di dati in flussi come gli algoritmi di crittografia a chiave segreta, poiché è possibile crittografare solo piccole quantità di dati. Le operazioni asimmetriche non usano quindi lo stesso modello di streaming delle operazioni simmetriche.
- La crittografia a chiave pubblica ha uno spazio delle chiavi (intervallo dei valori possibili) molto più ampio rispetto alla crittografia a chiave segreta, pertanto è meno esposta a tecniche esaustive per scoprire la chiave.
- Le chiavi pubbliche possono essere distribuite in modo semplice poiché non è necessario proteggerle, purché sia possibile verificare l'identità del mittente.
- Alcuni algoritmi a chiave pubblica, ad esempio RSA e DSA, ma non Diffie-Hellman, possono essere usati per creare firme digitali per la verifica dell'identità del mittente dei dati.
- Gli algoritmi a chiave pubblica sono molto lenti rispetto a quelli a chiave segreta e non sono destinati alla crittografia di grandi quantità di dati. Risultano utili solo per il trasferimento di piccole quantità di dati. Generalmente la crittografia a chiave pubblica viene usata per crittografare una chiave e un vettore di inizializzazione utilizzabili da un algoritmo a chiave segreta. Dopo il trasferimento della chiave e del vettore di inizializzazione, la crittografia a chiave segreta viene usata per il resto della sessione.
.NET Framework fornisce le classi seguenti che implementano gli algoritmi di crittografia a chiave pubblica:
- <xref:System.Security.Cryptography.DSACryptoServiceProvider>
- <xref:System.Security.Cryptography.RSACryptoServiceProvider>
- <xref:System.Security.Cryptography.ECDiffieHellman> (classe base)
- <xref:System.Security.Cryptography.ECDiffieHellmanCng>
- <xref:System.Security.Cryptography.ECDiffieHellmanCngPublicKey> (classe base)
- <xref:System.Security.Cryptography.ECDiffieHellmanKeyDerivationFunction> (classe base)
- <xref:System.Security.Cryptography.ECDsaCng>
RSA consente sia la crittografia sia la firma, mentre DSA può essere usato solo per la firma e Diffie-Hellman solo per la generazione di chiavi. In genere, gli algoritmi a chiave pubblica sono più limitati nell'utilizzo rispetto a quelli a chiave privata.
[Torna all'inizio](#top)
<a name="digital_signatures"></a>
## <a name="digital-signatures"></a>firme digitali
Gli algoritmi a chiave pubblica possono essere usati per formare firme digitali, il cui obiettivo è l'autenticazione dell'identità di un mittente, se la chiave pubblica di quest'ultimo viene considerata attendibile, e la protezione dell'integrità dei dati. Attraverso una chiave pubblica generata da Alice, il destinatario dei suoi dati può verificare che siano stati inviati effettivamente da lei confrontando la firma digitale sui dati e la chiave pubblica di Alice.
Per apporre una firma digitale a un messaggio usando la crittografia a chiave pubblica, Alice applica prima di tutto un algoritmo hash al messaggio per creare un digest del messaggio. Il digest è una rappresentazione dei dati compatta e univoca. Alice quindi crittografa il digest del messaggio con la sua chiave privata per creare la sua firma personale. Alla ricezione del messaggio e della firma, Bob decrittografa la firma con la chiave pubblica di Alice per recuperare il digest del messaggio e genera un hash mediante lo stesso algoritmo hash inviato da Alice. Se il digest del messaggio calcolato da Bob corrisponde esattamente al digest del messaggio ricevuto da Alice, Bob ha la certezza che il messaggio provenga dal possessore della chiave privata e che i dati non siano stati modificati. Se Bob ha la certezza che Alice sia il possessore della chiave privata, saprà che il messaggio proviene solo da Alice.
> [!NOTE]
> Chiunque può verificare una firma, poiché la chiave pubblica del mittente è di pubblico dominio e generalmente viene inclusa nel formato della firma digitale. Questo metodo non mantiene la segretezza del messaggio. Perché possa essere segreto, anche il messaggio deve essere crittografato.
.NET Framework fornisce le classi seguenti che implementano gli algoritmi di firma digitale:
- <xref:System.Security.Cryptography.DSACryptoServiceProvider>
- <xref:System.Security.Cryptography.RSACryptoServiceProvider>
- <xref:System.Security.Cryptography.ECDsa> (classe base)
- <xref:System.Security.Cryptography.ECDsaCng>
[Torna all'inizio](#top)
<a name="hash_values"></a>
## <a name="hash-values"></a>Valore hash
Gli algoritmi hash associano valori binari di lunghezza arbitraria a piccoli valori binari di lunghezza fissa, noti come valori hash. Per valore hash si intende una rappresentazione numerica di una porzione di dati. Se si inserisce un hash in un paragrafo di testo non crittografato e si modifica anche una sola lettera del paragrafo, un hash successivo produrrà un valore diverso. Se l'hash è basato su una crittografia avanzata, il relativo valore verrà modificato in modo significativo. Se ad esempio viene modificato un singolo bit di un messaggio, una funzione hash sicura può produrre un output che si differenzia del 50%. Molti valori di input possono avere hash dello stesso valore di output. Dal punto di vista del calcolo, tuttavia, è impossibile trovare due input di versi che forniscono come risultato un hash con lo stesso valore.
Due parti (Alice e Bob) sono riuscite a usare una funzione hash per garantire l'integrità del messaggio. Hanno selezionato un algoritmo di hash per firmare i messaggi. Alice ha scritto un messaggio, quindi ha creato un hash di tale messaggio tramite l'algoritmo selezionato. Hanno quindi seguito uno dei metodi seguenti:
- Alice invia il messaggio come testo normale e il messaggio con hash (firma digitale) a Bob. Bob riceve il messaggio, ne esegue l'hashing, quindi confronta il proprio valore hash con quello che ha ricevuto da Alice. Se i valori hash corrispondono, il messaggio non è stato alterato. Se invece i valori non corrispondono, il messaggio è stato alterato dopo essere stato scritto da Alice.
Purtroppo, questo metodo non consente di stabilire l'autenticità del mittente. Chiunque può rappresentare Alice e inviare un messaggio a Bob. Possono usare lo stesso algoritmo hash per firmare il messaggio e tutto ciò che Bob è in grado di determinare è che il messaggio corrisponde alla relativa firma. Si tratta di una forma di attacco di tipo man-in-the-middle. Per altre informazioni, vedere [esempio di comunicazioni protette con Cryptography Next Generation (CNG)](https://docs.microsoft.com/previous-versions/cc488018(v=vs.100)).
- Alice invia il messaggio come testo normale a Bob tramite un canale pubblico non protetto. Invia il messaggio con hash a Bob su un canale privato protetto. Bob riceve il messaggio in testo normale, ne esegue l'hashing, quindi confronta il valore hash con quello scambiato privatamente. Se i valori corrispondono, Bob può accertare quanto segue:
- Il messaggio non è stato modificato.
- Il mittente del messaggio (Alice) è autentico.
Perché il sistema funzioni, Alice deve nascondere il valore hash originale a tutte le parti ad eccezione di Bob.
- Alice invia il messaggio in testo normale a Bob tramite un canale pubblico non protetto e inserisce il messaggio con hash sul proprio sito Web pubblico.
Questo metodo consente di evitare la manomissione del messaggio impedendo a chiunque di modificare il valore hash. Anche se chiunque può leggere il messaggio e il relativo hash, il valore hash può essere modificato solo da Alice. Un utente non autorizzato che vuole rappresentare Alice necessita di accesso al sito Web di Alice.
Nessuno dei metodi precedenti impedisce la lettura dei messaggi di Alice, perché vengono trasmessi come testo normale. Una soluzione di sicurezza completa richiede le firme digitali (firma dei messaggi) e la crittografia.
.NET Framework fornisce le classi seguenti che implementano gli algoritmi di hash:
- <xref:System.Security.Cryptography.HMACSHA1>.
- <xref:System.Security.Cryptography.MACTripleDES>.
- <xref:System.Security.Cryptography.MD5CryptoServiceProvider>.
- <xref:System.Security.Cryptography.RIPEMD160>.
- <xref:System.Security.Cryptography.SHA1Managed>.
- <xref:System.Security.Cryptography.SHA256Managed>.
- <xref:System.Security.Cryptography.SHA384Managed>.
- <xref:System.Security.Cryptography.SHA512Managed>.
- Varianti HMAC di tutti gli algoritmi SHA (Secure Hash Algorithm), MD5 (Message Digest 5) e RIPEMD-160.
- Implementazioni CryptoServiceProvider (wrapper del codice gestito) di tutti gli algoritmi SHA.
- Implementazioni CNG (Cryptography Next Generation) di tutti gli algoritmi MD5 e SHA.
> [!NOTE]
> I difetti di progettazione di MD5 sono stati individuati nel 1996 e SHA-1 è stato consigliato in sostituzione. Nel 2004 sono stati individuati altri difetti e l'algoritmo MD5 non è più considerato sicuro. Anche l'algoritmo SHA-1 si è rivelato non sicuro e attualmente è consigliato l'algoritmo SHA-2.
[Torna all'inizio](#top)
<a name="random_numbers"></a>
## <a name="random-number-generation"></a>generazione casuale di numeri
La generazione di numeri casuali è integrata in molte operazioni di crittografia. Le chiavi di crittografia, ad esempio, devono essere il più casuali possibile in modo che non sia possibile riprodurle. I generatori di numeri casuali di crittografia devono generare output che sia impossibile da prevedere con una probabilità superiore al 50%. Pertanto, qualsiasi metodo di previsione del bit di output successivo non deve avere una prestazione migliore della previsione casuale. Le classi in .NET Framework usano i generatori di numeri casuali per generare le chiavi di crittografia.
La classe <xref:System.Security.Cryptography.RNGCryptoServiceProvider> è un'implementazione di un algoritmo di generazione di numeri casuali.
[Torna all'inizio](#top)
<a name="clickonce"></a>
## <a name="clickonce-manifests"></a>Manifesti ClickOnce
In .NET Framework 3.5, le classi di crittografia seguenti consentono di ottenere e verificare informazioni sulle firme del manifesto per le applicazioni che vengono distribuite mediante [tecnologia ClickOnce](/visualstudio/deployment/clickonce-security-and-deployment):
- La classe <xref:System.Security.Cryptography.ManifestSignatureInformation> ottiene informazioni su una firma del manifesto quando si usano i relativi overload del metodo <xref:System.Security.Cryptography.ManifestSignatureInformation.VerifySignature%2A> .
- È possibile usare l'enumerazione <xref:System.Security.ManifestKinds> per specificare i manifesti da verificare. Il risultato della verifica è uno dei valori di enumerazione <xref:System.Security.Cryptography.SignatureVerificationResult> .
- La classe <xref:System.Security.Cryptography.ManifestSignatureInformationCollection> fornisce una raccolta di sola lettura di oggetti <xref:System.Security.Cryptography.ManifestSignatureInformation> delle firme verificate.
Le classi seguenti forniscono inoltre informazioni specifiche sulla firma:
- <xref:System.Security.Cryptography.StrongNameSignatureInformation> contiene informazioni sulla firma con nome sicuro per un manifesto.
- <xref:System.Security.Cryptography.X509Certificates.AuthenticodeSignatureInformation> rappresenta le informazioni sulla firma Authenticode per un manifesto.
- <xref:System.Security.Cryptography.X509Certificates.TimestampInformation> contiene informazioni sul timestamp di una firma Authenticode.
- <xref:System.Security.Cryptography.X509Certificates.TrustStatus> fornisce un modo semplice per controllare se una firma Authenticode è attendibile.
[Torna all'inizio](#top)
<a name="suite_b"></a>
## <a name="suite-b-support"></a>Supporto per Suite B
.NET Framework 3.5 supporta il set B Suite di algoritmi di crittografia pubblicati da National Security Agency (NSA). Per altre informazioni su Suite B, vedere la pagina relativa alla [scheda descrittiva della crittografia Suite B dell'agenzia NSA](https://www.nsa.gov/what-we-do/information-assurance/).
Sono inclusi gli algoritmi seguenti:
- Algoritmo AES (Advanced Encryption Standard) con dimensioni della chiave di 128, 192 e 256 bit per la crittografia.
- Algoritmi SHA-1, SHA-256, SHA-384 e SHA-512 (Secure Hash Algorithm) per l'hashing. Gli ultimi tre sono in genere raggruppati e noti come SHA-2.
- Algoritmo ECDSA (Elliptic Curve Digital Signature Algorithm) che usa curve di coefficienti di numeri primi di 256, 384 e 521 bit per la generazione della firma. La documentazione NSA definisce in modo specifico queste curve e le chiama P-256, P-384 e P-521. Questo algoritmo viene fornito dalla classe <xref:System.Security.Cryptography.ECDsaCng> . e consente di firmare con una chiave privata e verificare la firma con una chiave pubblica.
- Algoritmo ECDH (Elliptic Curve Diffie-Hellman) che usa curve di coefficienti di numeri primi di 256, 384 e 521 bit per lo scambio di chiave e la generazione della chiave privata. Questo algoritmo viene fornito dalla classe <xref:System.Security.Cryptography.ECDiffieHellmanCng> .
Nelle nuove classi <xref:System.Security.Cryptography.AesCryptoServiceProvider>, <xref:System.Security.Cryptography.SHA256CryptoServiceProvider>, <xref:System.Security.Cryptography.SHA384CryptoServiceProvider>e <xref:System.Security.Cryptography.SHA512CryptoServiceProvider> sono disponibili wrapper del codice gestito per le implementazioni certificate da FIPS (Federal Information Processing Standard) delle implementazioni AES, SHA-256, SHA-384 e SHA-512.
[Torna all'inizio](#top)
<a name="cng"></a>
## <a name="cryptography-next-generation-cng-classes"></a>Classi Cryptography Next Generation (CNG)
Le classi Cryptography Next Generation (CNG) forniscono un wrapper gestito intorno alle funzioni CNG native. CNG sostituisce CryptoAPI. Il nome di queste classi contiene "Cng". Elemento centrale delle classi wrapper CNG è la classe del contenitore di chiavi <xref:System.Security.Cryptography.CngKey> che astrae l'archiviazione e l'utilizzo delle chiavi CNG. Questa classe consente di archiviare in modo sicuro una coppia di chiavi o una chiave pubblica e fare riferimento a tale chiave usando un semplice nome di stringa. La classe di firma basata su curva ellittica <xref:System.Security.Cryptography.ECDsaCng> e la classe di crittografia <xref:System.Security.Cryptography.ECDiffieHellmanCng> possono usare oggetti <xref:System.Security.Cryptography.CngKey> .
La classe <xref:System.Security.Cryptography.CngKey> viene usata per una varietà di operazioni aggiuntive, incluse l'apertura, la creazione, l'eliminazione e l'esportazione di chiavi. Fornisce inoltre l'accesso all'handle di chiave sottostante da usare quando le funzioni native vengono chiamate direttamente.
.NET Framework 3.5 include anche varie classi CNG, ad esempio di supporto:
- <xref:System.Security.Cryptography.CngProvider> gestisce un provider di archiviazione chiavi.
- <xref:System.Security.Cryptography.CngAlgorithm> gestisce un algoritmo CNG.
- <xref:System.Security.Cryptography.CngProperty> gestisce proprietà di chiave usate frequentemente.
[Torna all'inizio](#top)
<a name="related_topics"></a>
## <a name="related-topics"></a>Argomenti correlati
|Titolo|Descrizione|
|-----------|-----------------|
|[Modello di crittografia](../../../docs/standard/security/cryptography-model.md)|Illustra il modo in cui la crittografia viene implementata nella libreria delle classi base.|
|[Procedura dettagliata: Creazione di un'applicazione di crittografia](../../../docs/standard/security/walkthrough-creating-a-cryptographic-application.md)|Illustra le attività di base di crittografia e decrittografia.|
|[Configurazione di classi di crittografia](../../../docs/framework/configure-apps/configure-cryptography-classes.md)|Illustra come associare i nomi degli algoritmi a classi di crittografia e come associare identificatori di oggetti a un algoritmo di crittografia.|
| 93.66369 | 1,172 | 0.814528 | ita_Latn | 0.99868 |
58c617e8dd620c18c833e719cf9934f78618f2e5 | 152 | md | Markdown | README.md | gdar91/CopyAndUpdate | a878d20445e3e2f41f3e60f5dbc2453aff33f2c1 | [
"MIT"
] | null | null | null | README.md | gdar91/CopyAndUpdate | a878d20445e3e2f41f3e60f5dbc2453aff33f2c1 | [
"MIT"
] | null | null | null | README.md | gdar91/CopyAndUpdate | a878d20445e3e2f41f3e60f5dbc2453aff33f2c1 | [
"MIT"
] | null | null | null | # CopyAndUpdate
A .NET library, containing extension methods that that copy an existing object, update specified fields, and return the updated object.
| 50.666667 | 135 | 0.815789 | eng_Latn | 0.990573 |
58c61ddffb0ebc74c9c33f5c533234a1c7f37080 | 1,014 | md | Markdown | docs/error-messages/compiler-errors-2/compiler-error-c3910.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c3910.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c3910.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:54:57.000Z | 2020-05-28T15:54:57.000Z | ---
title: Errore del compilatore C3910
ms.date: 11/04/2016
f1_keywords:
- C3910
helpviewer_keywords:
- C3910
ms.assetid: cfcbe620-b463-463b-95ea-2d60ad33ebb5
ms.openlocfilehash: ef63b8f5d1ee4b3f094bed3549eec8157a950e91
ms.sourcegitcommit: 16fa847794b60bf40c67d20f74751a67fccb602e
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 12/03/2019
ms.locfileid: "74748879"
---
# <a name="compiler-error-c3910"></a>Errore del compilatore C3910
' Event ': deve definire il membro ' Method '
Un evento è stato definito, ma non contiene il metodo della funzione di accesso obbligatorio specificato.
Per ulteriori informazioni, vedere [evento](../../extensions/event-cpp-component-extensions.md).
L'esempio seguente genera l'C3910:
```cpp
// C3910.cpp
// compile with: /clr /c
delegate void H();
ref class X {
event H^ E {
// uncomment the following lines
// void add(H^) {}
// void remove( H^ h ) {}
// void raise( ) {}
}; // C3910
event H^ E2; // OK data member
};
```
| 24.731707 | 105 | 0.713018 | ita_Latn | 0.646659 |
58c84e852d4eb68f1a5b25faf399407c3d451588 | 1,277 | md | Markdown | hexo/source/_posts/Literature/Chinese/Speech/Mao-Zedong's-maxims.md | skyqin1999/cloudbase-templates | 386c96a156a7ff2fb3aedb97f5064280d00506bb | [
"Apache-2.0"
] | null | null | null | hexo/source/_posts/Literature/Chinese/Speech/Mao-Zedong's-maxims.md | skyqin1999/cloudbase-templates | 386c96a156a7ff2fb3aedb97f5064280d00506bb | [
"Apache-2.0"
] | null | null | null | hexo/source/_posts/Literature/Chinese/Speech/Mao-Zedong's-maxims.md | skyqin1999/cloudbase-templates | 386c96a156a7ff2fb3aedb97f5064280d00506bb | [
"Apache-2.0"
] | null | null | null | ---
title: 毛泽东箴(zhen)言
date: 2019-05-09 22:48:45
notshow: true
categories:
- 文学
- 中文
- 演讲语录
tags:
- 伟人
---
## 目录
1. 观世
* 壹:[有穷与无尽](#yqywj)
* 贰:[新旧与消长](#xjyxz)
2. 正己
3. 待人
4. 处事
## 壹 ◉ 有穷与无尽<span id="yqywj">
### 第一篇
> 什么叫问题?
> 问题就是矛盾。
> 哪里有没有解决的矛盾,
> 哪里就有问题。
### 第二篇
> 事情总是不完全的,
> 这就给我们一个任务,
> 向比较完全前进,
> 向相对真理前进,
> 但是永远也达不到绝对完全,
> 达不到绝对真理,
> 所以,
> 我们要无穷无尽无止境地努力。
### 第三篇
> 一个矛盾克服了,
> 另一个矛盾又产生了。
> 总任何时间、
> 任何地点、
> 任何人身上,
> 总是有矛盾存在的,
> 没有矛盾就没有世界。
### 第四篇
> 世界上有好东西,也有坏东西,
> 自古以来是这样,
> 一万年后也会是这样。
> 正因为世界上有坏的东西,
> 我们才要改造,才要做工作。
> 但是我们不会把一切都做好,
> 负责我们的后代就没有工作可做了。
### 第五篇
> 如果我们的认识是有穷尽的,
> 我们已经把一切都认识到了,
> 还要我们这些人干什么?
## 贰 ◉ 新旧与消长<span id="xjyxz">
### 第六篇
> 一切新的事物都是从艰苦斗争中锻炼出来的。
### 第七篇
> 每年的春夏之交,
> 夏秋之交,
> 秋冬之交与冬春之交,
> 各要变换一次衣服。
> 但是人们往往在那“之交”
> 不会变换衣服,
> 要闹出些毛病来,
> 这就是由于习惯的力量。
### 第八篇
> 过去的工作只不过是像万里长征走完了第一步。
> ……
> 我们熟悉的东西有些快要闲起来了,
> 我们不熟悉的东西正在强迫我们去做。
> 这就是困难。
### 第九篇
> 旧的东西要一下子去掉很不容易,
> 新的东西要一下子接受也不容易。
### 第十篇
> 一切都有变化。
> 腐朽的大的力量要让位给新生的、小的力量。
> 力量小的要变成大的,
> 因为大多数人要求变。
### 第十一篇
> 正确的东西,
> 好的东西,
> 人们一开始常常不承认它们是香花,
> 反而把它们看做毒草。
### 第十二篇
> 什么东西都是旧的习惯了,
> 新的就钻不进去了,
> 因为旧的把新的压住了。
### 第十三篇
> 一张白纸,
> 没有负担,
> 好写最新最美的文字,
> 好画最新最美的画图。 | 11.609091 | 29 | 0.566171 | yue_Hant | 0.912117 |
58c8ac1a9614edcce42700684bf09cdd0ca5cd4f | 5,680 | md | Markdown | docs/md/interview/Java-Distributed4.md | face-gale/my-blog | 0ac7b78b1e47ecce693d6201956b7cbac247fe40 | [
"MIT"
] | null | null | null | docs/md/interview/Java-Distributed4.md | face-gale/my-blog | 0ac7b78b1e47ecce693d6201956b7cbac247fe40 | [
"MIT"
] | 8 | 2021-03-01T21:21:03.000Z | 2022-02-26T01:59:45.000Z | docs/md/interview/Java-Distributed4.md | face-gale/my-blog | 0ac7b78b1e47ecce693d6201956b7cbac247fe40 | [
"MIT"
] | 1 | 2019-05-14T16:44:56.000Z | 2019-05-14T16:44:56.000Z | # 分布式事务
## 分布式一致性
在分布式系统中,为了保证数据的高可用,通常,我们会将数据保留多个副本(replica),这些副本会放置在不同的物理的机器上。为了对用户提供正确的 CRUD 等语义,我们需要保证这些放置在不同物理机器上的副本是一致的。
为了解决这种分布式一致性问题,前人在性能和数据一致性的反反复复权衡过程中总结了许多典型的协议和算法。其中比较著名的有**二阶提交协议(Two Phase Commitment Protocol)**、**三阶提交协议(Three Phase Commitment Protocol)** 和 **Paxos 算法**。
## 分布式事务
::: tip
分布式事务是指会涉及到操作多个数据库的事务。其实就是将对同一库事务的概念扩大到了对多个库的事务。目的是为了保证分布式系统中的数据一致性。分布式事务处理的关键是必须有一种方法可以知道事务在任何地方所做的所有动作,提交或回滚事务的决定必须产生统一的结果(全部提交或全部回滚)
:::
在分布式系统中,各个节点之间在物理上相互独立,通过网络进行沟通和协调。由于存在事务机制,可以保证每个独立节点上的数据操作可以满足 ACID。但是,相互独立的节点之间无法准确的知道其他节点中的事务执行情况。所以从理论上讲,两台机器理论上无法达到一致的状态。如果想让分布式部署的多台机器中的数据保持一致性,那么就要保证在所有节点的数据写操作,要不全部都执行,要么全部的都不执行。但是,一台机器在执行本地事务的时候无法知道其他机器中的本地事务的执行结果。所以他也就不知道本次事务到底应该 commit 还是 rollback。所以,常规的解决办法就是引入一个“协调者”的组件来统一调度所有分布式节点的执行。
## XA 规范
X/Open 组织(即现在的 Open Group )定义了分布式事务处理模型。 X/Open DTP 模型( 1994 )包括应用程序( AP )、事务管理器( TM )、资源管理器( RM )、通信资源管理器( CRM )四部分。一般,常见的事务管理器( TM )是交易中间件,常见的资源管理器( RM )是数据库,常见的通信资源管理器( CRM )是消息中间件。 通常把一个数据库内部的事务处理,如对多个表的操作,作为本地事务看待。数据库的事务处理对象是本地事务,而分布式事务处理的对象是全局事务。 所谓全局事务,是指分布式事务处理环境中,多个数据库可能需要共同完成一个工作,这个工作即是一个全局事务,例如,一个事务中可能更新几个不同的数据库。对数据库的操作发生在系统的各处但必须全部被提交或回滚。此时一个数据库对自己内部所做操作的提交不仅依赖本身操作是否成功,还要依赖与全局事务相关的其它数据库的操作是否成功,如果任一数据库的任一操作失败,则参与此事务的所有数据库所做的所有操作都必须回滚。 一般情况下,某一数据库无法知道其它数据库在做什么,因此,在一个 DTP 环境中,交易中间件是必需的,由它通知和协调相关数据库的提交或回滚。而一个数据库只将其自己所做的操作(可恢复)影射到全局事务中。
::: tip
XA 就是 X/Open DTP 定义的交易中间件与数据库之间的接口规范(即接口函数),交易中间件用它来通知数据库事务的开始、结束以及提交、回滚等。 XA 接口函数由数据库厂商提供。
:::
二阶提交协议和三阶提交协议就是根据这一思想衍生出来的。可以说二阶段提交其实就是实现 XA 分布式事务的关键(确切地说:两阶段提交主要保证了分布式事务的原子性:即所有结点要么全做要么全不做)
### 2PC
::: tip
二阶段提交(Two-phaseCommit)是指,在计算机网络以及数据库领域内,为了使基于分布式系统架构下的所有节点在进行事务提交时保持一致性而设计的一种算法(Algorithm)。通常,二阶段提交也被称为是一种协议(Protocol))。在分布式系统中,每个节点虽然可以知晓自己的操作时成功或者失败,却无法知道其他节点的操作的成功或失败。当一个事务跨越多个节点时,为了保持事务的ACID特性,需要引入一个作为协调者的组件来统一掌控所有节点(称作参与者)的操作结果并最终指示这些节点是否要把操作结果进行真正的提交(比如将更新后的数据写入磁盘等等)。因此,**二阶段提交的算法思路可以概括为:参与者将操作成败通知协调者,再由协调者根据所有参与者的反馈情报决定各参与者是否要提交操作还是中止操作**。
:::
所谓的两个阶段是指:第一阶段:**准备阶段(投票阶段)** 和第二阶段:**提交阶段(执行阶段)**。
### 准备阶段
事务协调者(事务管理器)给每个参与者(资源管理器)发送 Prepare 消息,每个参与者要么直接返回失败(如权限验证失败),要么在本地执行事务,写本地的 redo 和 undo 日志,但不提交,到达一种“万事俱备,只欠东风”的状态。
可以进一步将准备阶段分为以下三个步骤:
1. 协调者节点向所有参与者节点询问是否可以执行提交操作(vote),并开始等待各参与者节点的响应。
2. 参与者节点执行询问发起为止的所有事务操作,并将 Undo 信息和 Redo 信息写入日志。(注意:若成功这里其实每个参与者已经执行了事务操作)
3. 各参与者节点响应协调者节点发起的询问。如果参与者节点的事务操作实际执行成功,则它返回一个”同意”消息;如果参与者节点的事务操作实际执行失败,则它返回一个”中止”消息。
### 提交阶段
如果协调者收到了参与者的失败消息或者超时,直接给每个参与者发送回滚( Rollback )消息;否则,发送提交( Commit )消息;参与者根据协调者的指令执行提交或者回滚操作,释放所有事务处理过程中使用的锁资源。(注意:必须在最后阶段释放锁资源)
接下来分两种情况分别讨论提交阶段的过程。
当协调者节点从所有参与者节点获得的相应消息都为”同意”时:

1. 协调者节点向所有参与者节点发出”正式提交( commit )”的请求。
2. 参与者节点正式完成操作,并释放在整个事务期间内占用的资源。
3. 参与者节点向协调者节点发送”完成”消息。
4. 协调者节点受到所有参与者节点反馈的”完成”消息后,完成事务。
如果任一参与者节点在第一阶段返回的响应消息为”中止”,或者 协调者节点在第一阶段的询问超时之前无法获取所有参与者节点的响应消息时:

1. 协调者节点向所有参与者节点发出”回滚操作( rollback )”的请求。
2. 参与者节点利用之前写入的 Undo 信息执行回滚,并释放在整个事务期间内占用的资源。
3. 参与者节点向协调者节点发送”回滚完成”消息。
4. 协调者节点受到所有参与者节点反馈的”回滚完成”消息后,取消事务。
**不管最后结果如何,第二阶段都会结束当前事务。**
二阶段提交看起来确实能够提供原子性的操作,但是不幸的事,二阶段提交还是有几个**缺点**的:
1. **同步阻塞问题**:执行过程中,所有参与节点都是事务阻塞型的。当参与者占有公共资源时,其他第三方节点访问公共资源不得不处于阻塞状态。
2. **单点故障**:由于协调者的重要性,一旦协调者发生故障。参与者会一直阻塞下去。尤其在第二阶段,协调者发生故障,那么所有的参与者还都处于锁定事务资源的状态中,而无法继续完成事务操作。(如果是协调者挂掉,可以重新选举一个协调者,但是无法解决因为协调者宕机导致的参与者处于阻塞状态的问题)
3. **数据不一致**:在二阶段提交的阶段二中,当协调者向参与者发送 commit 请求之后,发生了局部网络异常或者在发送 commit 请求过程中协调者发生了故障,这回导致只有一部分参与者接受到了 commit 请求。而在这部分参与者接到 commit 请求之后就会执行 commit 操作。但是其他部分未接到 commit 请求的机器则无法执行事务提交。于是整个分布式系统便出现了数据部一致性的现象。
4. 二阶段无法解决的问题:协调者再发出 commit 消息之后宕机,而唯一接收到这条消息的参与者同时也宕机了。那么即使协调者通过选举协议产生了新的协调者,这条事务的状态也是不确定的,没人知道事务是否被已经提交。
由于二阶段提交存在着诸如同步阻塞、单点问题、脑裂等缺陷,所以,研究者们在二阶段提交的基础上做了改进,提出了三阶段提交。
### 3PC
::: tip
三阶段提交(Three-phase commit),也叫三阶段提交协议(Three-phase commit protocol),是二阶段提交(2PC)的改进版本。
:::

与两阶段提交不同的是,三阶段提交有两个改动点。
引入超时机制。同时在协调者和参与者中都引入超时机制。
在第一阶段和第二阶段中插入一个准备阶段。保证了在最后提交阶段之前各参与节点的状态是一致的。
也就是说,除了引入超时机制之外,3PC 把 2PC 的准备阶段再次一分为二,这样三阶段提交就有 CanCommit、PreCommit、DoCommit 三个阶段。
### CanCommit 阶段
3PC 的 CanCommit 阶段其实和 2PC 的准备阶段很像。协调者向参与者发送 commit 请求,参与者如果可以提交就返回 Yes 响应,否则返回 No 响应。
1. **事务询问**:协调者向参与者发送CanCommit请求。询问是否可以执行事务提交操作。然后开始等待参与者的响应。
2. **响应反馈**:参与者接到CanCommit请求之后,正常情况下,如果其自身认为可以顺利执行事务,则返回Yes响应,并进入预备状态。否则反馈No
### PreCommit 阶段
协调者根据参与者的反应情况来决定是否可以记性事务的 PreCommit 操作。根据响应情况,有以下两种可能。
1. **假如协调者从所有的参与者获得的反馈都是 Yes 响应,那么就会执行事务的预执行。**
- 发送预提交请求:协调者向参与者发送 PreCommit 请求,并进入 Prepared 阶段。
- 事务预提交:参与者接收到 PreCommit 请求后,会执行事务操作,并将 undo 和 redo 信息记录到事务日志中。
- 响应反馈:如果参与者成功的执行了事务操作,则返回 ACK 响应,同时开始等待最终指令。
2. **假如有任何一个参与者向协调者发送了 No 响应,或者等待超时之后,协调者都没有接到参与者的响应,那么就执行事务的中断。**
- 发送中断请求:协调者向所有参与者发送 abort 请求。
- 中断事务:参与者收到来自协调者的 abort 请求之后(或超时之后,仍未收到协调者的请求),执行事务的中断。
### doCommit 阶段
该阶段进行真正的事务提交,也可以分为以下两种情况。
1. **执行提交**
- 发送提交请求:协调接收到参与者发送的 ACK 响应,那么他将从预提交状态进入到提交状态。并向所有参与者发送 doCommit 请求。
- 事务提交:参与者接收到 doCommit 请求之后,执行正式的事务提交。并在完成事务提交之后释放所有事务资源。
- 响应反馈:事务提交完之后,向协调者发送 ACK 响应。
- 完成事务:协调者接收到所有参与者的 ACK 响应之后,完成事务。
2. **中断事务**
协调者没有接收到参与者发送的 ACK 响应(可能是接受者发送的不是 ACK 响应,也可能响应超时),那么就会执行中断事务。
- 发送中断请求:协调者向所有参与者发送 abort 请求
- 事务回滚:参与者接收到 abort 请求之后,利用其在阶段二记录的 undo 信息来执行事务的回滚操作,并在完成回滚之后释放所有的事务资源。
- 反馈结果:参与者完成事务回滚之后,向协调者发送 ACK 消息
- 中断事务:协调者接收到参与者反馈的ACK消息之后,执行事务的中断。
::: tip
在 doCommit 阶段,如果参与者无法及时接收到来自协调者的 doCommit 或者 abort 请求时,会在等待超时之后,会继续进行事务的提交。(其实这个应该是基于概率来决定的,当进入第三阶段时,说明参与者在第二阶段已经收到了 PreCommit 请求,那么协调者产生 PreCommit 请求的前提条件是他在第二阶段开始之前,收到所有参与者的 CanCommit 响应都是 Yes。(一旦参与者收到了 PreCommit,意味他知道大家其实都同意修改了)所以,一句话概括就是,当进入第三阶段时,由于网络超时等原因,虽然参与者没有收到 commit 或者 abort 响应,但是他有理由相信:成功提交的几率很大。)
:::
## 2PC 与 3PC 的区别
相对于 2PC,3PC 主要解决的单点故障问题,并减少阻塞,因为一旦参与者无法及时收到来自协调者的信息之后,他会默认执行 commit。而不会一直持有事务资源并处于阻塞状态。但是这种机制也会导致数据一致性问题,因为,由于网络原因,协调者发送的 abort 响应没有及时被参与者接收到,那么参与者在等待超时之后执行了 commit 操作。这样就和其他接到 abort 命令并执行回滚的参与者之间存在数据不一致的情况。 | 51.636364 | 549 | 0.832042 | yue_Hant | 0.721074 |
58c8ff1d7ebd66f53651c4ddbe85d4fede6f7e2b | 2,708 | md | Markdown | README.md | dibaliqaja/guest-book | 1c0f4be775651fb4d55b7b8e900a04392bacf00a | [
"MIT"
] | null | null | null | README.md | dibaliqaja/guest-book | 1c0f4be775651fb4d55b7b8e900a04392bacf00a | [
"MIT"
] | null | null | null | README.md | dibaliqaja/guest-book | 1c0f4be775651fb4d55b7b8e900a04392bacf00a | [
"MIT"
] | null | null | null | <h1 align="center">
<img src="https://raw.githubusercontent.com/laravel/art/master/logo-lockup/5%20SVG/2%20CMYK/1%20Full%20Color/laravel-logolockup-cmyk-red.svg" width="224px"/><br/>
Guestbook
</h1>
<p align="center">Guestbook is a simple project for Guestbook Management App</p>
<p align="center"><a href="https://github.com/dibaliqaja/guest-book/releases" target="_blank"><img src="https://img.shields.io/badge/version-v0.0.1-red?style=for-the-badge&logo=none" alt="system version" /></a> <a href="https://pkg.go.dev/github.com/create-go-app/cli/v3?tab=doc" target="_blank"><img src="https://img.shields.io/badge/Laravel-8.75.0-fb503b?style=for-the-badge&logo=laravel" alt="laravel version" /></a> <img src="https://img.shields.io/badge/license-mit-red?style=for-the-badge&logo=none" alt="license" /></p>
### Features
- Admin Panel
- Login
- Logout
- List table Guestbook
- Add Guest Book
- Edit Guest Book
- Delete Guest Book
- Frontpage
- Show List Guest Book
- Add Guest Book with input:
- First Name and Last Name
- Organization
- Address
- Province
- City
### ⚙️ Requirements
- PHP >= 7.2.5
- BCMath PHP Extension
- Ctype PHP Extension
- Fileinfo PHP extension
- JSON PHP Extension
- Mbstring PHP Extension
- OpenSSL PHP Extension
- PDO PHP Extension
- Tokenizer PHP Extension
- XML PHP Extension
### ⚡️ Installation
1. Clone GitHub repo for this project locally
```bash
$ git clone https://github.com/dibaliqaja/guest-book.git
```
2. Change directory in project which already clone
```bash
$ cd guest-book
```
3. Install Composer dependencies
```bash
$ composer install
```
4. Install NPM dependencies
```bash
$ npm install
```
5. Create a copy of your .env file
```bash
$ cp .env.example .env
```
6. Generate an app encryption key
```bash
$ php artisan key:generate
```
7. Create an empty database for our application
8. In the .env file, add database information to allow Laravel to connect to the database
```bash
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE={database-name}
DB_USERNAME={username-database}
DB_PASSWORD={password-database}
```
9. Migrate the database
```bash
$ php artisan migrate
```
10. Seed the database
```bash
$ php artisan db:seed
```
11. Running project
```bash
$ npm run dev
```
```bash
$ php artisan serve
```
### Admin Credential in Seeder
> Email : [email protected] <br>
> Pass : password
## Screenshoots




## License
The Laravel framework is open-sourced software licensed under the [MIT license](https://opensource.org/licenses/MIT).
| 26.291262 | 536 | 0.715657 | kor_Hang | 0.25325 |
58c9d5b20537430d80508a1617dfeb65427e44e6 | 619 | md | Markdown | docs/projects-using-specification/index.md | Allann/Specification | 1f64da4b51f51ed89c663d745db99df3c58c4c73 | [
"MIT"
] | null | null | null | docs/projects-using-specification/index.md | Allann/Specification | 1f64da4b51f51ed89c663d745db99df3c58c4c73 | [
"MIT"
] | null | null | null | docs/projects-using-specification/index.md | Allann/Specification | 1f64da4b51f51ed89c663d745db99df3c58c4c73 | [
"MIT"
] | null | null | null | ---
layout: default
title: Projects Using Specification
nav_order: 7
has_children: true
---
# Projects using Ardalis.Specification
## Contents (to do)
- [eShopOnWeb Reference App](https://github.com/dotnet-architecture/eShopOnWeb)
- [Pluralsight DDD Fundamentals Course sample](https://github.com/ardalis/pluralsight-ddd-fundamentals)
- [CleanArchitecture Solution Template](https://github.com/ardalis/CleanArchitecture)
- [fullstackhero Web API Boilerplate](https://github.com/fullstackhero/dotnet-webapi-boilerplate)
- (add your own project here via [pull request](https://github.com/ardalis/Specification/pulls))
| 36.411765 | 103 | 0.789984 | eng_Latn | 0.229062 |
58c9f97cbf217b5b3a55da6cfab7edfb4688fd0e | 3,519 | md | Markdown | source/_posts/Java/jvm-memory.md | qypone/blog | 79cc9df6d8ad87753280a8226c0d2894988453f8 | [
"MIT"
] | null | null | null | source/_posts/Java/jvm-memory.md | qypone/blog | 79cc9df6d8ad87753280a8226c0d2894988453f8 | [
"MIT"
] | null | null | null | source/_posts/Java/jvm-memory.md | qypone/blog | 79cc9df6d8ad87753280a8226c0d2894988453f8 | [
"MIT"
] | null | null | null | ---
title: jvm内存分析及工具
description: '内存结构:对象头(Header)、实例数据(Instance Data)和对齐填充(Padding),内存分析,常见OOM类型及原因等'
tags:
- java
categories:
- jvm
abbrlink: jvm-memory
date: 2022-01-13 23:16:00
---
## 内存结构
在HotSpot虚拟机中,对象在内存中存储的布局可以分为3块区域:**对象头(Header)**、**实例数据(Instance Data)**和**对齐填充(Padding)**

### 对象头
1. markword
用于存储对象自身的运行时数据,如哈希码(HashCode)、GC分代年龄、锁状态标志、线程持有的锁、偏向线程ID、偏向时间戳等,这部分数据的长度在32位和64位的虚拟机(未开启压缩指 针)中分别为32bit和64bit,官方称它为“MarkWord”。
2. klass
klass类型指针,即对象指向它的类元数据的指针,虚拟机通过这个指针来确定这个对象是哪个类的实例.
3. 数组长度(只有数组对象有)
如果对象是一个数组,那在对象头中还必须有一块数据用于记录数组长度
### 实例数据
实例数据部分是对象真正存储的有效信息,也是在程序代码中所定义的各种类型的字段内容。无论是从父类继承下来的,还是在子类中定义的,都需要记录起来。
### 对齐填充
对齐填充并不是必然存在的,也没有特别的含义,它仅仅起着占位符的作用。由于HotSpot VM的自动内存管理系统要求对象起始地址必须是8字节的整数倍,换句话说,就是对象的大小必须是8字节的整数倍。而对象头部分正好是8字节的倍数(1倍或者2倍),因此,当对象实例数据部分没有对齐时,就需要通过对齐填充来补全。
### 一个 Java 对象占用多少内存
> 可以使用 Instrumentation.getObjectSize()
>
> 方法来估算一个对象占用的内存空间。
>
> JOL (Java Object Layout) 可以用来查看对象内存布局
## 内存分析
#### 对象头和对象引用
在64位 JVM 中,对象头占据的空间是 12-byte(=96bit=64+32),但是以8字节对齐,所以一个空类的实例至少占用16字节
在32位 JVM 中,对象头占8个字节,以4的倍数对齐(32=4*8)
所以 new 出来很多简单对象,甚至是 new Object(),都会占用不少内容哈
通常在32位 JVM,以及内存小于 -Xmx32G 的64位JVM 上(默认开启指针压缩),一个引用占的内存默认是4个字节。
因此,64位 JVM 一般需要多消耗堆内存
#### 包装类型
比原生数据类型消耗的内存要多:
* Integer:占用16字节(8+4=12+补齐),因为 int 部分 占4个字节。 所以使用 Integer 比原生类型 int 要多消耗 300% 的内存。
* Long:一般占用16个字节(8+8=16),当然,对象的实际大小由底层平台的内存对齐确定,具体由特定 CPU 平台的 JVM 实现决定。看起来一个 Long 类型的对 象,比起原生类型 long 多占用了8个字节(也多消耗 了100%)
* 多维数组:在二维数组int【dim1】【dim2】中,每个嵌套的数组 int[dim2] 都是一个单独的 Object,会额外占用16字节的空间。当数组维度更大时,这种开销特别明显。
> int【128】【2】 实例占用3600字节。 而 int[256] 实例则只占用1040字节,里面的有效存储空间是一样的,3600 比起1040多了246%的额外开销。在极端情况下,byte【256】【1】,额外开销的比例是19倍!
* String:String 对象的空间随着内部字符数组的增长而增长。当然,String 类的对象有24个字节的额外开销。对于10字符以内的非空 String,增加的开销比起有效载荷(每个字符2字节 + 4 个字节的 length), 多占用了100% 到 400% 的内存。
#### 案例

对齐是绕不过去的问题
我们可能会认为,一个 X 类的实例占用17字节的空间。 但是由于需要对齐(padding),JVM 分配的内存是8字节的整数倍,所以占用的空间不是17字节,而是24字节
## OOM
### OutOfMemoryError: Java heap space
* 现象:创建新的对象时,堆内存中的空间不足以存放新创建的对象
* 产生原因
* 很多时候就类似于将 XXL 号的对象,往 S 号的 Java heap space 里面塞。其实清楚了原因,问题就很容易解决了:只要增加堆内存的大小,程序就能正常运行
* 代码问题导致的:
* 超出预期的访问量/数据量:应用系统设计时,一般是有 “容量” 定义的,部署这么多机器,用来处理一定流量的数据/业务。如果访问量突然飙升,超过预期的阈值,类似于时间坐标系中针尖形状的图谱。那么在峰值所在的时间段,程序很可能就会卡死、并触发 java.lang.OutOfMemoryError: Java heap space错误
* 内存泄露(Memory leak):这也是一种经常出现的情形。由于代码中的某些隐蔽错误,导致系统占用的内存越来越多。如果某个方法/某段代码存在内存泄漏,每执行一次,就会(有更多的垃圾对 象)占用更多的内存。随着运行时间的推移,泄漏的对象耗光了堆中的所有内存,那么 java.lang.OutOfMemoryError: Java heap space 错误就爆发了。
#### OutOfMemoryError: PermGen space/OutOfMemoryError: Metaspace
* 产生原因:是加载到内存中的 class 数量太多或体积太大,超过了 PermGen 区的大小。
* 解决办法:增大 PermGen/Metaspace
-XX:MaxPermSize=512m
-XX:MaxMetaspaceSize=512m
高版本 JVM 也可以: -XX:+CMSClassUnloadingEnabled
#### OutOfMemoryError: Unable to create new native thread
* 产生原因:程序创建的线程数量已达到上限值的异常信息
* 解决思路:
* 调整系统参数 ulimit -a,echo 120000 > /proc/sys/kernel/threads-max
* 降低 xss 等参数
* 调整代码,改变线程创建和使用方式
## 好文
着重看下第一、第二篇文章,最后一篇文章包含**HotSpot对象模型(OOP-Klass模型)**
[深入理解Instrument]: https://www.jianshu.com/p/5c62b71fd882 "深入理解Instrument"
[一个Java对象占多少内存?]: https://cloud.tencent.com/developer/article/1596672 "一个Java对象占多少内存?"
[一个java对象到底占用多大内存]: https://www.cnblogs.com/zhanjindong/p/3757767.html "一个java对象到底占用多大内存"
[一个对象占用多少字节]: https://www.iteye.com/blog/yueyemaitian-2033046 "一个对象占用多少字节"
[java对象结构]: https://blog.csdn.net/zqz_zqz/article/details/70246212 "java对象结构"
本文中部分转载自以上文章,感谢各位大佬的知识传递
| 29.325 | 189 | 0.779767 | yue_Hant | 0.753 |
58ca8ce3f9b21098f0ba5eb920022247e625bde1 | 3,776 | md | Markdown | contribute/posters/index.md | CatarinaFidalgo/2021 | bc31949e2016dc2ba7c02ce8c456b363be1d5e26 | [
"MIT"
] | null | null | null | contribute/posters/index.md | CatarinaFidalgo/2021 | bc31949e2016dc2ba7c02ce8c456b363be1d5e26 | [
"MIT"
] | null | null | null | contribute/posters/index.md | CatarinaFidalgo/2021 | bc31949e2016dc2ba7c02ce8c456b363be1d5e26 | [
"MIT"
] | 1 | 2021-03-18T18:47:22.000Z | 2021-03-18T18:47:22.000Z | ---
layout: ieeevr-default
title: "Posters"
---
<div>
<h1 id="cfp-posters"> Call for Posters</h1>
<p>
<strong style="color: black">IEEE VR 2021: the 28th IEEE Conference on Virtual Reality and 3D User Interfaces</strong><br /> March 27-April 3, 2021, Virtual
<br />
<a href="http://ieeevr.org/2021/">http://ieeevr.org/2021/</a>
</p>
<h2 id="important-dates"> Important Dates </h2>
<ul>
<li><b>January 11, 2021:</b> Two-page abstract and optional material submission.</li>
<li><b>February 1, 2021:</b> Notification of results.</li>
<li><b>February 12, 2021:</b> Camera-ready material and copyright submission via IEEE CPS (to be
published in conference proceedings) and PCS (poster,video and other additional materials).</li>
</ul>
<p>
Each deadline is 23:59:59 AoE (Anywhere on Earth) == GMT/UTC-12:00 on the stated day, no matter where the submitter is located.
</p>
<h2 id="Overview">Overview</h2>
<p>
The 28th IEEE Conference on Virtual Reality and 3D User Interfaces 2021 seeks poster submissions, which describe recently completed work, highly relevant results of work in progress, or successful systems and applications, in all areas related to virtual reality, including augmented reality (AR), mixed reality (MR), and 3D user interfaces.
</p>
<p>
Presenting a poster is a great way to get feedback on work that has not yet been published. A combination of local and remote poster presentations will be an integral part of the conference, with sessions for interactive discussion between presenters and attendees for at least two days, plus a fast-forward presentation track where authors can orally present a brief summary of their work to all conference attendees. Note that, in order to maintain interactive and exciting poster presentations, we require at least one presenter per accepted poster to attend the conference (in person or virtually).
</p>
<h2 id="submission-guidelines">Submission Guidelines</h2>
<p>
Poster submissions must be in English and anonymous. The submission must be in the form of a two-page summary paper with a maximum 100-word abstract. Supplemental material, such as a poster draft or videos may be submitted as well, but are not mandatory. All submitted materials must be in PDF format with embedded fonts. Two-page paper will be included in the proceedings and will be archived in the IEEE Digital Library, and therefore must be formatted using the IEEE Computer Society format described at <a href="http://junctionpublishing.org/vgtc/Tasks/camera.html">http://junctionpublishing.org/vgtc/Tasks/camera.html</a>. Poster papers must be submitted through a special poster slot available at the online submission site - <a href="https://new.precisionconference.com/submissions">https://new.precisionconference.com/submissions</a>.
Supplemental materials can be uploaded to the submission site as well. Poster drafts and final posters should be in portrait format with a maximum size given by the A0 standard (841 x 1189 mm; or 33.1 x 46.8 in). This year, every accepted poster would submit a 30-second teaser video. This video would not have sound, but would have captions.
</p>
<h2 id="contact">Contacts</h2>
<p>
For more information, please contact the Posters Chairs:
<ul>
<li>Daniel Medeiros ‒ University of Glasgow, UK</li>
<li>Isaac Cho - North Carolina A& State University, USA</li>
<li>Alexandre Gomes de Siqueira ‒ University of Florida, USA</li>
<li>Daniel Zielasko - University of Trier, Germany</li>
</ul>
posters2021 [at] ieeevr.org
</p>
</div>
| 68.654545 | 851 | 0.714248 | eng_Latn | 0.978616 |
58caa7a858327a6ef0d1d41939b174ef3915cbf8 | 1,902 | md | Markdown | content/docs/reference/spec/migration/buildpack-api-0.4-0.5.md | sap-contributions/buildpacks-docs | 9c88cda3a1a6d439ee3c60508e252de038cb6c01 | [
"Apache-2.0"
] | 21 | 2019-12-18T16:12:32.000Z | 2022-01-20T18:04:07.000Z | content/docs/reference/spec/migration/buildpack-api-0.4-0.5.md | sap-contributions/buildpacks-docs | 9c88cda3a1a6d439ee3c60508e252de038cb6c01 | [
"Apache-2.0"
] | 253 | 2019-12-12T15:51:09.000Z | 2022-03-29T21:52:51.000Z | content/docs/reference/spec/migration/buildpack-api-0.4-0.5.md | sap-contributions/buildpacks-docs | 9c88cda3a1a6d439ee3c60508e252de038cb6c01 | [
"Apache-2.0"
] | 80 | 2020-01-21T16:28:17.000Z | 2022-03-21T21:48:46.000Z | +++
title="Buildpack API 0.4 -> 0.5"
+++
<!--more-->
This guide is most relevant to buildpack authors.
See the [spec release](https://github.com/buildpacks/spec/releases/tag/buildpack%2Fv0.5) for buildpack API 0.5 for the full list of changes and further details.
### Stack ID
The buildpack may now indicate compatibility with any stack by specifying the special value `*`.
### Character restrictions for process types
For each process in `launch.toml`, `type` is now restricted to only contain numbers, letters, and the characters `.`, `_`, and `-`, so symlinks on both linux and windows filesystems can be created.
### Override env vars
Override is now the default behavior for env files without a suffix.
It means that for the `/env/`, `/env.build/`, and `/env.launch/` directories, the default, suffix-less behavior will be `VAR.override` instead of `VAR.append`+`VAR.delim=:`.
### Decouple Builpack Plan and BOM
The Buildpack Plan and Bill-Of-Materials are now decoupled.
`<plan>` that is provided to `/bin/build` and contains the Buildpack Plan entries for the buildpack, is now read-only.
There are new `[[bom]]` sections in `<layers>/launch.toml` and `<layers>/build.toml` for runtime and build-time Bill-of-Materials entries respectively.
There is a new `[[entries]]` section in `<layers>/build.toml` for Buildpack Plan entries that should be passed to subsequent buildpacks that may provide the dependencies.
### exec.d
The launch process now supports `exec.d` executables. Unlike `profile.d` scripts, which must be text files containing Bash 3+ scripts, the `exec.d` scripts do not depend on a shell but can still modify the environment of the app process.
Lifecycle will execute each file in each `<layers>/<layer>/exec.d` directory and each file in each `<layers>/<layer>/exec.d/<process>` directory.
The output of the script may contain any number of top-level key/value pairs.
| 61.354839 | 237 | 0.750263 | eng_Latn | 0.996653 |
58cb50895e7a419a7bede8911cd5b7f21fb487c4 | 17,021 | md | Markdown | docs/learn/documentation/versioned/comparisons/storm.md | FreshetFDMS/samza-2015-fork | fc303e55955d6a4eeb4cbf17c229141b6e54ccaf | [
"Apache-2.0"
] | 1 | 2015-05-03T00:44:42.000Z | 2015-05-03T00:44:42.000Z | docs/learn/documentation/versioned/comparisons/storm.md | criccomini/samza | 9fd86d104fb132ac64f62c3a023ad05f47c2b4c0 | [
"Apache-2.0"
] | 1 | 2021-02-24T03:19:56.000Z | 2021-02-24T03:19:56.000Z | docs/learn/documentation/versioned/comparisons/storm.md | criccomini/samza | 9fd86d104fb132ac64f62c3a023ad05f47c2b4c0 | [
"Apache-2.0"
] | 1 | 2021-01-13T11:12:02.000Z | 2021-01-13T11:12:02.000Z | ---
layout: page
title: Storm
---
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
*People generally want to know how similar systems compare. We've done our best to fairly contrast the feature sets of Samza with other systems. But we aren't experts in these frameworks, and we are, of course, totally biased. If we have goofed anything, please let us know and we will correct it.*
[Storm](http://storm-project.net/) and Samza are fairly similar. Both systems provide many of the same high-level features: a partitioned stream model, a distributed execution environment, an API for stream processing, fault tolerance, Kafka integration, etc.
Storm and Samza use different words for similar concepts: *spouts* in Storm are similar to stream consumers in Samza, *bolts* are similar to tasks, and *tuples* are similar to messages in Samza. Storm also has some additional building blocks which don't have direct equivalents in Samza.
### Ordering and Guarantees
Storm allows you to choose the level of guarantee with which you want your messages to be processed:
* The simplest mode is *at-most-once delivery*, which drops messages if they are not processed correctly, or if the machine doing the processing fails. This mode requires no special logic, and processes messages in the order they were produced by the spout.
* There is also *at-least-once delivery*, which tracks whether each input tuple (and any downstream tuples it generated) was successfully processed within a configured timeout, by keeping an in-memory record of all emitted tuples. Any tuples that are not fully processed within the timeout are re-emitted by the spout. This implies that a bolt may see the same tuple more than once, and that messages can be processed out-of-order. This mechanism also requires some co-operation from the user code, which must maintain the ancestry of records in order to properly acknowledge its input. This is explained in depth on [Storm's wiki](https://github.com/nathanmarz/storm/wiki/Guaranteeing-message-processing).
* Finally, Storm offers *exactly-once semantics* using its [Trident](https://github.com/nathanmarz/storm/wiki/Trident-tutorial) abstraction. This mode uses the same failure detection mechanism as the at-least-once mode. Tuples are actually processed at least once, but Storm's state implementation allows duplicates to be detected and ignored. (The duplicate detection only applies to state managed by Storm. If your code has other side-effects, e.g. sending messages to a service outside of the topology, it will not have exactly-once semantics.) In this mode, the spout breaks the input stream into batches, and processes batches in strictly sequential order.
Samza also offers guaranteed delivery — currently only at-least-once delivery, but support for exactly-once semantics is planned. Within each stream partition, Samza always processes messages in the order they appear in the partition, but there is no guarantee of ordering across different input streams or partitions. This model allows Samza to offer at-least-once delivery without the overhead of ancestry tracking. In Samza, there would be no performance advantage to using at-most-once delivery (i.e. dropping messages on failure), which is why we don't offer that mode — message delivery is always guaranteed.
Moreover, because Samza never processes messages in a partition out-of-order, it is better suited for handling keyed data. For example, if you have a stream of database updates — where later updates may replace earlier updates — then reordering the messages may change the final result. Provided that all updates for the same key appear in the same stream partition, Samza is able to guarantee a consistent state.
### State Management
Storm's lower-level API of bolts does not offer any help for managing state in a stream process. A bolt can maintain in-memory state (which is lost if that bolt dies), or it can make calls to a remote database to read and write state. However, a topology can usually process messages at a much higher rate than calls to a remote database can be made, so making a remote call for each message quickly becomes a bottleneck.
As part of its higher-level Trident API, Storm offers automatic [state management](https://github.com/nathanmarz/storm/wiki/Trident-state). It keeps state in memory, and periodically checkpoints it to a remote database (e.g. Cassandra) for durability, so the cost of the remote database call is amortized over several processed tuples. By maintaining metadata alongside the state, Trident is able to achieve exactly-once processing semantics — for example, if you are counting events, this mechanism allows the counters to be correct, even when machines fail and tuples are replayed.
Storm's approach of caching and batching state changes works well if the amount of state in each bolt is fairly small — perhaps less than 100kB. That makes it suitable for keeping track of counters, minimum, maximum and average values of a metric, and the like. However, if you need to maintain a large amount of state, this approach essentially degrades to making a database call per processed tuple, with the associated performance cost.
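For concreteness, the sketch below follows the word-count example from the Trident tutorial (`spout` and `Split` are assumed to be defined elsewhere); the state factory is pluggable, so the in-memory `MemoryMapState` used here would be replaced by a database-backed factory, such as one for Cassandra, in production:

```java
import backtype.storm.tuple.Fields;
import storm.trident.TridentState;
import storm.trident.TridentTopology;
import storm.trident.operation.builtin.Count;
import storm.trident.testing.MemoryMapState;

TridentTopology topology = new TridentTopology();
TridentState wordCounts = topology
    .newStream("sentences", spout)
    .each(new Fields("sentence"), new Split(), new Fields("word"))
    .groupBy(new Fields("word"))
    // Trident batches updates and stores metadata alongside each count, so
    // replayed batches can be detected and ignored: exactly-once counting.
    .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));
```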
Samza takes a [completely different approach](../container/state-management.html) to state management. Rather than using a remote database for durable storage, each Samza task includes an embedded key-value store, located on the same machine. Reads and writes to this store are very fast, even when the contents of the store are larger than the available memory. Changes to this key-value store are replicated to other machines in the cluster, so that if one machine dies, the state of the tasks it was running can be restored on another machine.
By co-locating storage and processing on the same machine, Samza is able to achieve very high throughput, even when there is a large amount of state. This is necessary if you want to perform stateful operations that are not just counters. For example, if you want to perform a window join of multiple streams, or join a stream with a database table (replicated to Samza through a changelog), or group several related messages into a bigger message, then you need to maintain so much state that it is much more efficient to keep the state local to the task.
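A minimal sketch of a stateful Samza task, assuming a key-value store named `count-store` is declared in the job's configuration (the generics and casts assume suitable serdes):

```java
import org.apache.samza.config.Config;
import org.apache.samza.storage.kv.KeyValueStore;
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.task.InitableTask;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskContext;
import org.apache.samza.task.TaskCoordinator;

public class CountByKeyTask implements StreamTask, InitableTask {
    private KeyValueStore<String, Integer> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(Config config, TaskContext context) {
        // The store lives on local disk next to the task, and its changes are
        // replicated through a changelog stream for fault tolerance.
        store = (KeyValueStore<String, Integer>) context.getStore("count-store");
    }

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        String key = (String) envelope.getKey();
        Integer count = store.get(key);
        store.put(key, count == null ? 1 : count + 1); // fast local read and write
    }
}
```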
A limitation of Samza's state handling is that it currently does not support exactly-once semantics — only at-least-once is supported right now. But we're working on fixing that, so stay tuned for updates.
### Partitioning and Parallelism
Storm's [parallelism model](https://github.com/nathanmarz/storm/wiki/Understanding-the-parallelism-of-a-Storm-topology) is fairly similar to Samza's. Both frameworks split processing into independent *tasks* that can run in parallel. Resource allocation is independent of the number of tasks: a small job can keep all tasks in a single process on a single machine; a large job can spread the tasks over many processes on many machines.
The biggest difference is that Storm uses one thread per task by default, whereas Samza uses single-threaded processes (containers). A Samza container may contain multiple tasks, but there is only one thread that invokes each of the tasks in turn. This means each container is mapped to exactly one CPU core, which makes the resource model much simpler and reduces interference from other tasks running on the same machine. Storm's multithreaded model can make better use of excess capacity on an idle machine, but at the cost of a less predictable resource model.
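As a rough sketch of how parallelism is declared in each system (the Storm component classes follow the tutorial examples, and the Samza property name reflects the YARN-based configuration of the releases this page describes):

```java
import backtype.storm.Config;
import backtype.storm.topology.TopologyBuilder;

// Storm: parallelism is a per-component number of executors (threads),
// spread across some number of worker processes.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new RandomSentenceSpout(), 2); // 2 spout threads
builder.setBolt("split", new SplitSentenceBolt(), 8)         // 8 bolt threads
       .shuffleGrouping("sentences");
Config conf = new Config();
conf.setNumWorkers(4); // the 10 executors are distributed over 4 JVMs
```

```properties
# Samza: each container is a single-threaded JVM mapped to one CPU core;
# the job's tasks are divided among the requested containers.
yarn.container.count=4
```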
Storm supports *dynamic rebalancing*, which means adding more threads or processes to a topology without restarting the topology or cluster. This is a convenient feature, especially during development. We haven't added this to Samza: philosophically we feel that this kind of change should go through a normal configuration management process (i.e. version control, notification, etc.) as it impacts production performance. In other words, the code and configuration of the jobs should fully recreate the state of the cluster.
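Dynamic rebalancing in Storm is driven from the command line; the example below follows the shape of the command in Storm's parallelism documentation (topology and component names are placeholders):

```bash
# Grow a running topology to 5 worker processes and change two components'
# executor counts, without restarting the topology.
storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
```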
When using a transactional spout with Trident (a requirement for achieving exactly-once semantics), parallelism is potentially reduced. Trident relies on a global ordering in its input streams — that is, ordering across all partitions of a stream, not just within one partition. This means the topology's input stream has to go through a single spout instance, effectively ignoring the partitioning of the input stream. This spout may become a bottleneck on high-volume streams. In Samza, all stream processing is parallel — there are no such choke points.
### Deployment & Execution
A Storm cluster is composed of a set of nodes running a *Supervisor* daemon. The supervisor daemons talk to a single master node running a daemon called *Nimbus*. The Nimbus daemon is responsible for assigning work and managing resources in the cluster. See Storm's [Tutorial](https://github.com/nathanmarz/storm/wiki/Tutorial) page for details. This is quite similar to YARN; though YARN is a bit more fully featured and intended to be multi-framework, Nimbus is better integrated with Storm.
Yahoo! has also released [Storm-YARN](https://github.com/yahoo/storm-yarn). As described in [this Yahoo! blog post](http://developer.yahoo.com/blogs/ydn/storm-yarn-released-open-source-143745133.html), Storm-YARN is a wrapper that starts a single Storm cluster (complete with Nimbus, and Supervisors) inside a YARN grid.
There are a lot of similarities between Storm's Nimbus and YARN's ResourceManager, as well as between Storm's Supervisor and YARN's NodeManagers. Rather than writing our own resource management framework, or running a second one inside of YARN, we decided that Samza should use YARN directly, as a first-class citizen in the YARN ecosystem. YARN is stable, well adopted, fully-featured, and interoperable with Hadoop. It also provides a bunch of nice features like security (user authentication), cgroup process isolation, etc.
The YARN support in Samza is pluggable, so you can swap it for a different execution framework if you wish.
### Language Support
Storm is written in Java and Clojure but has good support for non-JVM languages. It follows a model similar to MapReduce Streaming: the non-JVM task is launched in a separate process, data is sent to its stdin, and output is read from its stdout.
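For illustration, this mirrors the multilang example from Storm's tutorial: a `ShellBolt` subclass launches a Python subprocess, and `splitsentence.py` is assumed to implement Storm's multilang protocol (Storm ships helper libraries for this):

```java
import java.util.Map;

import backtype.storm.task.ShellBolt;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Fields;

public static class SplitSentence extends ShellBolt implements IRichBolt {
    public SplitSentence() {
        // Launch splitsentence.py as a subprocess; tuples are exchanged as
        // JSON over the subprocess's stdin and stdout.
        super("python", "splitsentence.py");
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
```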
Samza is written in Java and Scala. It is built with multi-language support in mind, but currently only supports JVM languages.
### Workflow
Storm provides modeling of *topologies* (a processing graph of multiple stages) [in code](https://github.com/nathanmarz/storm/wiki/Tutorial). Trident provides a further [higher-level API](https://github.com/nathanmarz/storm/wiki/Trident-tutorial) on top of this, including familiar relational-like operators such as filters, grouping, aggregation and joins. This means the entire topology is wired up in one place, which has the advantage that it is documented in code, but has the disadvantage that the entire topology needs to be developed and deployed as a whole.
In Samza, each job is an independent entity. You can define multiple jobs in a single codebase, or you can have separate teams working on different jobs using different codebases. Each job is deployed, started and stopped independently. Jobs communicate only through named streams, and you can add jobs to the system without affecting any other jobs. This makes Samza well suited for handling the data flow in a large company.
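A sketch of what this decoupling looks like in practice, as two trimmed-down Samza job property files (class and stream names are hypothetical, and a real job also needs job-factory, system, and serde settings):

```properties
# job-a.properties: cleans a raw stream
job.name=clean-events
task.class=samza.examples.CleanEventsStreamTask
task.inputs=kafka.raw-events

# job-b.properties: a separately deployed job that consumes job A's output,
# coupled to it only through the named Kafka stream
job.name=count-events
task.class=samza.examples.CountEventsStreamTask
task.inputs=kafka.cleaned-events
```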
Samza's approach can be emulated in Storm by connecting two separate topologies via a broker, such as Kafka. However, Storm's implementation of exactly-once semantics only works within a single topology.
### Maturity
We can't speak to Storm's maturity, but it has an [impressive number of adopters](https://github.com/nathanmarz/storm/wiki/Powered-By), a strong feature set, and seems to be under active development. It integrates well with many common messaging systems (RabbitMQ, Kestrel, Kafka, etc.).
Samza is pretty immature, though it builds on solid components. YARN is fairly new, but is already being run on 3000+ node clusters at Yahoo!, and the project is under active development by both [Hortonworks](http://hortonworks.com/) and [Cloudera](http://www.cloudera.com/content/cloudera/en/home.html). Kafka has a strong [powered by](https://cwiki.apache.org/KAFKA/powered-by.html) page, and has seen increased adoption recently. It's also frequently used with Storm. Samza is a brand new project that is in use at LinkedIn. Our hope is that others will find it useful, and adopt it as well.
### Buffering & Latency
Storm uses [ZeroMQ](http://zeromq.org/) for non-durable communication between bolts, which enables extremely low latency transmission of tuples. Samza does not have an equivalent mechanism, and always writes task output to a stream.
On the flip side, when a bolt is trying to send messages using ZeroMQ, and the consumer can't read them fast enough, the ZeroMQ buffer in the producer's process begins to fill up with messages. If this buffer grows too much, the topology's processing timeout may be reached, which causes messages to be re-emitted at the spout and makes the problem worse by adding even more messages to the buffer. In order to prevent such overflow, you can configure a maximum number of messages that can be in flight in the topology at any one time; when that threshold is reached, the spout blocks until some of the messages in flight are fully processed. This mechanism allows back pressure, but requires [topology.max.spout.pending](http://nathanmarz.github.io/storm/doc/backtype/storm/Config.html#TOPOLOGY_MAX_SPOUT_PENDING) to be carefully configured. If a single bolt in a topology starts running slow, the processing in the entire topology grinds to a halt.
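Setting the cap looks like this (the value is an arbitrary example; tuning it is workload-dependent):

```java
import backtype.storm.Config;

Config conf = new Config();
// Allow at most 1000 un-acked tuples in flight per spout task; the spout
// pauses emitting when the cap is reached, providing crude back pressure.
conf.setMaxSpoutPending(1000);
```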
A lack of a broker between bolts also adds complexity when trying to deal with fault tolerance and messaging semantics. Storm has a [clever mechanism](https://github.com/nathanmarz/storm/wiki/Guaranteeing-message-processing) for detecting tuples that failed to be processed, but Samza doesn't need such a mechanism because every input and output stream is fault-tolerant and replicated.
Samza takes a different approach to buffering. We buffer to disk at every hop between StreamTasks. This decision, and its trade-offs, are described in detail on the [Comparison Introduction](introduction.html) page. This design decision makes durability guarantees easy, and has the advantage of allowing the buffer to absorb a large backlog of messages if a job has fallen behind in its processing. However, it comes at the price of slightly higher latency.
As described in the *workflow* section above, Samza's approach can be emulated in Storm, but comes with a loss in functionality.
### Isolation
Storm provides standard UNIX process-level isolation. Your topology can impact another topology's performance (or vice-versa) if too much CPU, disk, network, or memory is used.
Samza relies on YARN to provide resource-level isolation. Currently, YARN provides explicit controls for memory and CPU limits (through [cgroups](../yarn/isolation.html)), and both have been used successfully with Samza. No isolation for disk or network is provided by YARN at this time.
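As a sketch, per-container resource requests are plain job configuration; the property names below reflect our understanding of Samza's YARN settings at the time of writing, so treat them as illustrative:

```properties
# Per-container resource requests that YARN can enforce (memory strictly,
# CPU via cgroups where the cluster has them enabled)
yarn.container.memory.mb=4096
yarn.container.cpu.cores=2
```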
### Distributed RPC
In Storm, you can write topologies which not only consume a fixed stream of events, but also allow clients to run distributed computations on demand. The query is sent into the topology as a tuple on a special spout, and when the topology has computed the answer, it is returned to the client (who was synchronously waiting for the answer). This facility is called [Distributed RPC](https://github.com/nathanmarz/storm/wiki/Distributed-RPC) (DRPC).
Samza does not currently have an equivalent API to DRPC, but you can build it yourself using Samza's stream processing primitives.
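For reference, the DRPC examples in Storm's documentation have roughly this shape (`ExclaimBolt` comes from that tutorial, the host name is a placeholder, and error handling is omitted):

```java
import backtype.storm.drpc.LinearDRPCTopologyBuilder;
import backtype.storm.utils.DRPCClient;

// Server side: a linear DRPC topology that computes the answer to each query.
LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("exclamation");
builder.addBolt(new ExclaimBolt(), 3);

// Client side: execute() blocks until the topology returns the result.
DRPCClient client = new DRPCClient("drpc-host", 3772);
String result = client.execute("exclamation", "hello");
```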
### Data Model
Storm models all messages as *tuples* with a defined data model but pluggable serialization.
Samza's serialization and data model are both pluggable. We are not terribly opinionated about which approach is best.
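In Samza, both choices are made in configuration; as an illustrative sketch (property names as in Samza's configuration of this era), registering a JSON serde and applying it to a system's messages looks like this:

```properties
# Register a serde under the name "json" and use it for messages on the
# "kafka" system.
serializers.registry.json.class=org.apache.samza.serializers.JsonSerdeFactory
systems.kafka.samza.msg.serde=json
```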
## [Spark Streaming »](spark-streaming.html)
| 136.168 | 950 | 0.797897 | eng_Latn | 0.999318 |
58cb5aed7cf274c55f793368ff265f9bc1fdae4b | 4,374 | md | Markdown | pages/docs/installation/installation-source-preparation.md | kursatyurt/precice.github.io | ca305cd3581071c39a1a954f27d78481db98ffaa | [
"MIT",
"BSD-3-Clause"
] | 17 | 2018-01-22T16:36:47.000Z | 2022-02-08T08:33:11.000Z | pages/docs/installation/installation-source-preparation.md | kursatyurt/precice.github.io | ca305cd3581071c39a1a954f27d78481db98ffaa | [
"MIT",
"BSD-3-Clause"
] | 146 | 2017-06-30T11:30:42.000Z | 2022-03-31T16:38:14.000Z | pages/docs/installation/installation-source-preparation.md | kursatyurt/precice.github.io | ca305cd3581071c39a1a954f27d78481db98ffaa | [
"MIT",
"BSD-3-Clause"
] | 27 | 2017-06-30T10:59:44.000Z | 2022-03-18T16:03:25.000Z | ---
title: Building from source - Preparation
permalink: installation-source-preparation.html
keywords: configuration, basics, cmake, installation, building, source
---
## Which version to build
You decided to build preCICE from source, thus you most likely require a specific configuration.
preCICE builds as a fully-featured library by default, but you can turn off some features.
This is the way to go if you run into issues with an unnecessary dependency.
These features include:
* Support for MPI communication.
* Support for radial-basis function mappings based on PETSc. This requires MPI communication to be enabled.
* Support for user-defined python actions.
We recommend to leave all features enabled unless you have a good reason to disable them.
Next is the type of the build which defaults to debug:
* A **debug** build enables some logging facilities, which can give you a deep insight into preCICE.
This is useful to debug and understand what happens during API calls.
This build type is far slower than release builds for numerous reasons and not suitable for running large examples.
* A **release** build is an optimized build of the preCICE library, which makes it the preferred version for running large simulations.
The version offers limited logging support: debug and trace log output is not available.
* A **release with debug info** build allows to debug the internals of preCICE only.
Similar to the release build, it does not support neither debug nor trace logging.
At this point, you should have decided on which build-type to use and which features to disable.
## The source code
Download and unpack the `Source Code` of the [latest release](https://github.com/precice/precice/releases/latest) of preCICE and unpack the content to a directory.
Then open a terminal in the resulting folder.
To download and extract a version directly from the terminal, please execute the following:
```bash
wget https://github.com/precice/precice/archive/v{{ site.precice_version }}.tar.gz
tar -xzvf v{{ site.precice_version }}.tar.gz
cd precice-{{ site.precice_version }}
```
## Installation prefix
The next step is to decide where to install preCICE to.
This directory is called the installation prefix and will later contain the folders `lib` and `include` after installation.
System-wide prefixes require root permissions and may lead to issues in the long run, however, they often do not require setting up additional variables.
User-wide prefixes are located in the home directory of the user. These prefixes do not conflict with the system libraries and do not require special permissions.
Using such prefixes is generally required when working on clusters.
Using a user-wide prefix such as `~/software/precice` is the recommended choice.
Common system-wide prefixes are:
* `/usr/local` which does not collide with package managers and is picked up by most linkers (depends on each system).
* `/opt/precice` which is often used for system-wide installation of optional software. Choosing this prefix requires setting additional variables, which is why we generally don't recommend using it.
Common user-wide prefixes are:
* `~/software/precice` which allows to install preCICE in an isolated directory. This requires setting some additional variables, but saves a lot of headache.
* `~/software` same as above but preCICE will share the prefix with other software.
In case you choose a user-wide prefix you need to extend some additional environment variables in your `~/.bashrc`:
Replace `<prefix>` with your selected prefix
```bash
PRECICE_PREFIX=~/software/prefix # set this to your selected prefix
export LD_LIBRARY_PATH=$PRECICE_PREFIX/lib:$LD_LIBRARY_PATH
export CPATH=$PRECICE_PREFIX/include:$CPATH
# Enable detection with pkg-config and CMake
export PKG_CONFIG_PATH=$PRECICE_PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH
export CMAKE_PREFIX_PATH=$PRECICE_PREFIX:$CMAKE_PREFIX_PATH
```
After adding these variables, please start a new session (open a new terminal or logout and login again).
{% include note.html content="On debian-based distributions, you can also build preCICE as a debian package and install it using the package manager. [Read more](installation-source-advanced#debian-packages)" %}
## The next step
In the next step we will install all required [dependencies](installation-source-dependencies).
| 50.275862 | 211 | 0.791267 | eng_Latn | 0.998517 |
58cbe55c2c9b6509d8579aa8f5484c62a9834f85 | 1,110 | md | Markdown | README.md | Sagleft/utopia-iot | e68490214406815f05e0ab3147bece30f9f6219e | [
"MIT"
] | null | null | null | README.md | Sagleft/utopia-iot | e68490214406815f05e0ab3147bece30f9f6219e | [
"MIT"
] | null | null | null | README.md | Sagleft/utopia-iot | e68490214406815f05e0ab3147bece30f9f6219e | [
"MIT"
] | 1 | 2021-03-20T20:08:24.000Z | 2021-03-20T20:08:24.000Z | # utopia-iot
An example of creating an IOT-device for Utopia Ecosystem
## about the device
You can use this project as an example for your development of smart devices working with Utopia Ecosystem. This may be a device that monitors the status of the network, mining, synchronization, and so on.
This example uses RGB: flashing blue signal when connected to Wifi, red signal on error and green if successful.
As a board with a controller, NodeMCU WeMos is used.
Also, this solution can be used as an example of the use of UtopiaAPI.
## compilation and firmware
Use Android Studio to build.
before build: ```sh prepare.sh```
## photos



## log example

---

### :globe_with_meridians: [Telegram канал](https://t.me/+VIvd8j6xvm9iMzhi)
| 31.714286 | 205 | 0.757658 | eng_Latn | 0.94267 |
58cbf6c838b8dc8e0431845ebfc12a77420a257e | 4,880 | md | Markdown | articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Review-Moderated-Images.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Review-Moderated-Images.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Review-Moderated-Images.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Gebruik inhouds beoordelingen via het hulp programma voor beoordeling-Content Moderator
titleSuffix: Azure Cognitive Services
description: Meer informatie over hoe u met het hulp programma controleren afbeeldingen in een webportal kunt controleren.
services: cognitive-services
author: PatrickFarley
manager: mikemcca
ms.service: cognitive-services
ms.subservice: content-moderator
ms.topic: conceptual
ms.date: 03/15/2019
ms.author: pafarley
ms.openlocfilehash: cfda4d7970c734d92c9f2355d553721ef6165e43
ms.sourcegitcommit: d76108b476259fe3f5f20a91ed2c237c1577df14
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 10/29/2020
ms.locfileid: "92911848"
---
# <a name="create-human-reviews"></a>Mensen beoordelingen maken
In deze hand leiding leert u hoe u [beoordelingen](../review-api.md#reviews) kunt instellen op de website van het controle programma. Beoordelingen van de winkel en de weer gave van inhoud voor menselijke moderators om te beoordelen. Moderators kunnen de toegepaste labels wijzigen en hun eigen aangepaste labels Toep assen. Wanneer een gebruiker een beoordeling voltooit, worden de resultaten verzonden naar een opgegeven eind punt van de retour aanroep en wordt de inhoud verwijderd van de site.
## <a name="prerequisites"></a>Vereisten
- Meld u aan of maak een account op de site van het Content Moderator [controle programma](https://contentmoderator.cognitive.microsoft.com/) .
## <a name="image-reviews"></a>Beoordelingen van afbeeldingen
1. Ga naar het [hulp programma voor controle](https://contentmoderator.cognitive.microsoft.com/), selecteer het tabblad **try** en upload enkele afbeeldingen om te controleren.
1. Zodra de verwerking van de geüploade installatie kopieën is voltooid, gaat u naar het tabblad **controleren** en selecteert u **afbeelding** .

De afbeeldingen worden weer gegeven met labels die zijn toegewezen door het automatische toezicht proces. De installatie kopieën die u via het hulp programma voor beoordeling hebt verzonden, zijn niet zichtbaar voor andere revisoren.
1. U kunt de beoordelingen eventueel verplaatsen **naar** de schuif regelaar (1) om het aantal installatie kopieën dat op het scherm wordt weer gegeven, aan te passen. Klik op de knoppen met een **Label** of een **Label** (2) om de afbeeldingen dienovereenkomstig te sorteren. Klik op het deel venster tag (3) om het in of uit te scha kelen.

1. Als u meer informatie wilt weer geven over een afbeelding, klikt u op het beletsel teken in de miniatuur en selecteert u **Details weer geven** . U kunt een installatie kopie toewijzen aan een subteam met de optie **verplaatsen naar** (Zie de sectie [teams](./configure.md#manage-team-and-subteams) voor meer informatie over subteams).

1. Blader op de pagina Details van de afbeeldings controle-informatie.

1. Zodra u de label toewijzingen hebt gecontroleerd en zo nodig hebt bijgewerkt, klikt u op **volgende** om uw beoordelingen in te dienen. Nadat u hebt ingediend, hebt u ongeveer vijf seconden op de knop **vorige** om terug te gaan naar het vorige scherm en de afbeeldingen opnieuw te bekijken. Daarna worden de afbeeldingen niet meer in de verzend wachtrij geplaatst en is de knop **vorige** niet meer beschikbaar.
## <a name="text-reviews"></a>Beoordelingen van teksten
Tekst beoordelingen worden op dezelfde manier gebruikt als voor beeld Beoordelingen. In plaats van inhoud te uploaden, kunt u tekst schrijven of plakken (Maxi maal 1.024 tekens). Content Moderator analyseert vervolgens de tekst en past labels toe (naast andere informatie over de controle, zoals Gods taal en persoonlijke gegevens). In tekst beoordelingen kunt u de toegepaste Tags en/of aangepaste labels Toep assen voordat u de beoordeling verzendt.

## <a name="next-steps"></a>Volgende stappen
In deze hand leiding hebt u geleerd hoe u beoordelingen kunt instellen en gebruiken via het [hulp programma content moderator beoordeling](https://contentmoderator.cognitive.microsoft.com). Raadpleeg vervolgens de [rest API gids](../try-review-api-review.md) of de [.NET SDK Quick](../client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) start voor meer informatie over het programmatisch maken van Beoordelingen. | 82.711864 | 497 | 0.799385 | nld_Latn | 0.999221 |
58ccd9b3439a698787f988c9418567b1143d7ea2 | 555 | md | Markdown | README.md | FlexYourBrain/TrickOrTreat | 310df12cd36c926cd46b513aa17d4cc570f83f91 | [
"CC0-1.0"
] | null | null | null | README.md | FlexYourBrain/TrickOrTreat | 310df12cd36c926cd46b513aa17d4cc570f83f91 | [
"CC0-1.0"
] | null | null | null | README.md | FlexYourBrain/TrickOrTreat | 310df12cd36c926cd46b513aa17d4cc570f83f91 | [
"CC0-1.0"
] | null | null | null | # **TrickOrTreat**
Free Halloween Themed Art Assets
|  |
|--------------------------------------------|
|  |
| License |
| ------------------ |
| (CC0 1.0 Universal) You're free to use these game assets in any project, personal or commercial.No permission is needed before using these assets. Giving attribution is not required. |
| [](http://creativecommons.org/publicdomain/zero/1.0/) |
| 46.25 | 188 | 0.612613 | eng_Latn | 0.797305 |
58cce58c685ef8d860a8be09e1e6fcfe42edc791 | 941 | md | Markdown | src/newsletters/2017/12/miis-markdown-iis-handler-miis-a-markdown-based-cms-for-iis.md | dotnetweekly/dnw-gatsby | b12818bfe6fd92aeb576b3f79c75c2e9ac251cdb | [
"MIT"
] | 1 | 2022-02-03T19:26:09.000Z | 2022-02-03T19:26:09.000Z | src/newsletters/2017/12/miis-markdown-iis-handler-miis-a-markdown-based-cms-for-iis.md | dotnetweekly/dnw-gatsby | b12818bfe6fd92aeb576b3f79c75c2e9ac251cdb | [
"MIT"
] | 13 | 2020-08-19T06:27:35.000Z | 2022-02-26T17:45:11.000Z | src/newsletters/2017/12/miis-markdown-iis-handler-miis-a-markdown-based-cms-for-iis.md | dotnetweekly/dnw-gatsby | b12818bfe6fd92aeb576b3f79c75c2e9ac251cdb | [
"MIT"
] | null | null | null | ---
_id: 5a88e1abbd6dca0d5f0d1f90
title: "MIIS - Markdown IIS Handler - MIIS: A Markdown based CMS for IIS"
url: 'http://miis.azurewebsites.net/'
category: libraries-tools
slug: 'miis-markdown-iis-handler-miis-a-markdown-based-cms-for-iis'
user_id: 5a83ce59d6eb0005c4ecda2c
username: 'bill-s'
createdOn: '2017-03-18T02:38:07.000Z'
tags: []
---
It's a easy and fast way to create professional documentation sites based on Markdown under IIS. It can be used as a simple Markdown handler to return HTML in the response, or to create full-fledged documentation systems based in easy to create (but powerful) templates. It works in Internet Information Services, locally with IISExpress (handy to develop the docs) and in Azure Web Sites.
You can create your documentation site from any Markdown files in les than 30 seconds and have a lot of control over the final result.
The project's own documentation site has been created with MIIS.
| 52.277778 | 390 | 0.782147 | eng_Latn | 0.970075 |
58cd01336d7ae879ef8e9246c8edd40797ab20ec | 612 | md | Markdown | README.md | yasirjanjua/linux-dev-setup | 549cb4c77bdfc4b2bb4ccf70c1f666d2f6399eff | [
"CC0-1.0"
] | null | null | null | README.md | yasirjanjua/linux-dev-setup | 549cb4c77bdfc4b2bb4ccf70c1f666d2f6399eff | [
"CC0-1.0"
] | null | null | null | README.md | yasirjanjua/linux-dev-setup | 549cb4c77bdfc4b2bb4ccf70c1f666d2f6399eff | [
"CC0-1.0"
] | null | null | null | # linux-dev-setup
setup development environment with linux
## Backup Windows
It is very important to take backup of your data and you might also consider taking backup up your windows image because who knows you don't like linux :laugh: and want to switch back. Well a good practice is to always have a backup so you can have confidence and security and focus on new things
So in order to take your windows backup
- Goto ControlPanel > System & Security > File History > System Image Backup > Create System Image > Create System Image
- Select drive for storing the Image
- Wait for the process to complete
| 55.636364 | 296 | 0.78268 | eng_Latn | 0.998139 |
58ce670720a254070d4bdb4605aa0f8aeb005712 | 968 | md | Markdown | docs/Architecture.md | ParksProjets/kattis-hunter | c4990edf59fba6d91d22fdc126673781ab423d0f | [
"MIT"
] | null | null | null | docs/Architecture.md | ParksProjets/kattis-hunter | c4990edf59fba6d91d22fdc126673781ab423d0f | [
"MIT"
] | null | null | null | docs/Architecture.md | ParksProjets/kattis-hunter | c4990edf59fba6d91d22fdc126673781ab423d0f | [
"MIT"
] | null | null | null | # Code architecture
At each execution, Kattis gives us a runtime between 0.00 and 16.00. With that
range we can encode about 10.5 bits of information.
## `kattis` module
TODO.
## Overview of the algorithm
```py
# Assume that there is the same number of rounds for all environnements.
NUM_ROUNDS = 10
for i in range(number_of_env):
# First, retrieve the number of birds for each round of the env.
for j in range(0, NUM_ROUNDS, 2):
get("number of birds of round j and j+1")
# Get species of each bird.
for k in range(0, NUM_ROUNDS * NUM_BIRDS_ENV, 4):
get("species of birds k, k+1, k+2 and k+3")
# Finally retrieve the directions of each bird.
for k in range(0, NUM_ROUNDS * NUM_BIRDS_ENV, 3):
get("directions of birds k, k+1 and k+2")
# Retrieve the first directions of the birds from the first round. We do
# that for knowing in which env we currently are.
get("directions of first birds")
```
| 25.473684 | 78 | 0.679752 | eng_Latn | 0.99256 |
58ce7212eae724c75694179f6efa9988aa14e805 | 1,366 | md | Markdown | 2020/10/27/2020-10-27 03:40.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/10/27/2020-10-27 03:40.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/10/27/2020-10-27 03:40.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020年10月27日03时数据
Status: 200
1.周震南发声
微博热度:396938
2.2元钱就能买上千张涉隐私人脸照
微博热度:170100
3.辛芷蕾 我发型咋啦
微博热度:165320
4.朴敏英鼻子
微博热度:110752
5.使徒行者3
微博热度:74925
6.中方对6家美国媒体驻华分社采取反制
微博热度:72118
7.男子第100次献血为自己庆生
微博热度:70059
8.孙悦受伤
微博热度:70010
9.樊少皇二胎得女
微博热度:69992
10.父亲91岁生日子女买10层蛋糕庆祝
微博热度:64444
11.台湾接种流感疫苗出现51例不良反应
微博热度:61743
12.女孩地铁默默保护拍线路图老人
微博热度:51715
13.外交部再次提醒尽量避免跨境旅行
微博热度:45625
14.蚂蚁集团发行价格
微博热度:44577
15.如何看待大学生做全职太太
微博热度:41356
16.王一博粉丝 蔡徐坤粉丝
微博热度:39747
17.费莱尼绝杀被吹
微博热度:37800
18.喀什新增无症状感染者26例
微博热度:37730
19.沈逸
微博热度:37664
20.斗罗大陆
微博热度:37538
21.半是蜜糖半是伤
微博热度:37451
22.谭松韵替马晓晓圆梦北京
微博热度:37420
23.公务员向单位饮用水投毒
微博热度:37300
24.曾舜晞打戏
微博热度:37196
25.中国新说唱总决赛录制
微博热度:37102
26.想打杜磊
微博热度:37096
27.蚂蚁集团IPO将融资345亿美元
微博热度:34976
28.外交部宣布制裁对台军售美国企业个人
微博热度:34514
29.修个手机差点笑死
微博热度:34481
30.时代峰峻工作人员道歉
微博热度:33881
31.虐猫男子老家不断收到花圈寿衣
微博热度:33750
32.模特公司套路学生录音曝光
微博热度:32387
33.白宫官员称美国不会控制疫情
微博热度:32175
34.2021年清明节和复活节是同一天
微博热度:31981
35.圣诞岛红蟹大迁移
微博热度:27116
36.邓超鹿晗虞书欣穿着接地气
微博热度:26873
37.全连一百多人抗美援朝只回来7个
微博热度:26687
38.喀什离新疆其它城市有多远
微博热度:26352
39.马布里太难了
微博热度:25123
40.小狗生气有多可爱
微博热度:24269
41.安九道歉
微博热度:23591
42.欧小剑是个狠人
微博热度:23519
43.乳山官方回应公务员投毒事件
微博热度:23151
44.林心如删与霍建华自拍照
微博热度:22292
45.聋哑人被骗民警怒斥骗子不是人
微博热度:20537
46.胡先煦被一群女生追着跑
微博热度:19934
47.袁帅护妻
微博热度:19850
48.瞄准
微博热度:16174
49.薇娅孟美岐直播
微博热度:15931
50.A股
微博热度:15534
| 6.696078 | 20 | 0.775256 | yue_Hant | 0.326988 |
58ced03a5f7d80fda61ce67b398af40d43fb014d | 13,284 | md | Markdown | content/reference/services/SoftLayer_Hardware/createObject.md | BrianSantivanez/githubio_source | d4b2bc7cc463dee4a8727268e78f26d8cd3a20ae | [
"Apache-2.0"
] | null | null | null | content/reference/services/SoftLayer_Hardware/createObject.md | BrianSantivanez/githubio_source | d4b2bc7cc463dee4a8727268e78f26d8cd3a20ae | [
"Apache-2.0"
] | null | null | null | content/reference/services/SoftLayer_Hardware/createObject.md | BrianSantivanez/githubio_source | d4b2bc7cc463dee4a8727268e78f26d8cd3a20ae | [
"Apache-2.0"
] | null | null | null | ---
title: "createObject"
description: "
<style type='text/css'>.create_object > li > div { padding-top: .5em; padding-bottom: .5em}</style>
createObject() enables the creation of servers on an account. This
method is a simplified alternative to interacting with the ordering system directly.
In order to create a server, a template object must be sent in with a few required
values.
When this method returns an order will have been placed for a server of the specified configuration.
To determine when the server is available you can poll the server via [SoftLayer_Hardware::getObject](/reference/services/SoftLayer_Hardware/getObject),
checking the <code>provisionDate</code> property.
When <code>provisionDate</code> is not null, the server will be ready. Be sure to use the <code>globalIdentifier</code>
as your initialization parameter.
<b>Warning:</b> Servers created via this method will incur charges on your account. For testing input parameters see [SoftLayer_Hardware::generateOrderTemplate](/reference/services/SoftLayer_Hardware/generateOrderTemplate).
<b>Input</b> - [SoftLayer_Hardware](/reference/datatypes/SoftLayer_Hardware)
<ul class='create_object'>
<li><code>hostname</code>
<div>Hostname for the server.</div><ul>
<li><b>Required</b></li>
<li><b>Type</b> - string</li>
</ul>
<br />
</li>
<li><code>domain</code>
<div>Domain for the server.</div><ul>
<li><b>Required</b></li>
<li><b>Type</b> - string</li>
</ul>
<br />
</li>
<li><code>processorCoreAmount</code>
<div>The number of logical CPU cores to allocate.</div><ul>
<li><b>Required</b></li>
<li><b>Type</b> - int</li>
<li>See [SoftLayer_Hardware::getCreateObjectOptions](/reference/services/SoftLayer_Hardware/getCreateObjectOptions) for available options.</li>
</ul>
<br />
</li>
<li><code>memoryCapacity</code>
<div>The amount of memory to allocate in gigabytes.</div><ul>
<li><b>Required</b></li>
<li><b>Type</b> - int</li>
<li>See [SoftLayer_Hardware::getCreateObjectOptions](/reference/services/SoftLayer_Hardware/getCreateObjectOptions) for available options.</li>
</ul>
<br />
</li>
<li><code>hourlyBillingFlag</code>
<div>Specifies the billing type for the server.</div><ul>
<li><b>Required</b></li>
<li><b>Type</b> - boolean</li>
<li>When true the server will be billed on hourly usage, otherwise it will be billed on a monthly basis.</li>
</ul>
<br />
</li>
<li><code>operatingSystemReferenceCode</code>
<div>An identifier for the operating system to provision the server with.</div><ul>
<li><b>Required</b></li>
<li><b>Type</b> - string</li>
<li>See [SoftLayer_Hardware::getCreateObjectOptions](/reference/services/SoftLayer_Hardware/getCreateObjectOptions) for available options.</li>
</ul>
<br />
</li>
<li><code>datacenter.name</code>
<div>Specifies which datacenter the server is to be provisioned in.</div><ul>
<li><b>Required</b></li>
<li><b>Type</b> - string</li>
<li>The <code>datacenter</code> property is a [SoftLayer_Location](/reference/datatypes/SoftLayer_Location) structure with the <code>name</code> field set.</li>
<li>See [SoftLayer_Hardware::getCreateObjectOptions](/reference/services/SoftLayer_Hardware/getCreateObjectOptions) for available options.</li>
</ul>
<http title='Example'>{
'datacenter': {
'name': 'dal05'
}
}</http>
<br />
</li>
<li><code>networkComponents.maxSpeed</code>
<div>Specifies the connection speed for the server's network components.</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - int</li>
<li><b>Default</b> - The highest available zero cost port speed will be used.</li>
<li><b>Description</b> - The <code>networkComponents</code> property is an array with a single [SoftLayer_Network_Component](/reference/datatypes/SoftLayer_Network_Component) structure. The <code>maxSpeed</code> property must be set to specify the network uplink speed, in megabits per second, of the server.</li>
<li>See [SoftLayer_Hardware::getCreateObjectOptions](/reference/services/SoftLayer_Hardware/getCreateObjectOptions) for available options.</li>
</ul>
<http title='Example'>{
'networkComponents': [
{
'maxSpeed': 1000
}
]
}</http>
<br />
</li>
<li><code>networkComponents.redundancyEnabledFlag</code>
<div>Specifies whether or not the server's network components should be in redundancy groups.</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - bool</li>
<li><b>Default</b> - <code>false</code></li>
<li><b>Description</b> - The <code>networkComponents</code> property is an array with a single [SoftLayer_Network_Component](/reference/datatypes/SoftLayer_Network_Component) structure. When the <code>redundancyEnabledFlag</code> property is true the server's network components will be in redundancy groups.</li>
</ul>
<http title='Example'>{
'networkComponents': [
{
'redundancyEnabledFlag': false
}
]
}</http>
<br />
</li>
<li><code>privateNetworkOnlyFlag</code>
<div>Specifies whether or not the server only has access to the private network</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - boolean</li>
<li><b>Default</b> - <code>false</code></li>
<li>When true this flag specifies that a server is to only have access to the private network.</li>
</ul>
<br />
</li>
<li><code>primaryNetworkComponent.networkVlan.id</code>
<div>Specifies the network vlan which is to be used for the frontend interface of the server.</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - int</li>
<li><b>Description</b> - The <code>primaryNetworkComponent</code> property is a [SoftLayer_Network_Component](/reference/datatypes/SoftLayer_Network_Component) structure with the <code>networkVlan</code> property populated with a [SoftLayer_Network_Vlan](/reference/datatypes/SoftLayer_Network_Vlan) structure. The <code>id</code> property must be set to specify the frontend network vlan of the server.</li>
</ul>
<http title='Example'>{
'primaryNetworkComponent': {
'networkVlan': {
'id': 1
}
}
}</http>
<br />
</li>
<li><code>primaryBackendNetworkComponent.networkVlan.id</code>
<div>Specifies the network vlan which is to be used for the backend interface of the server.</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - int</li>
<li><b>Description</b> - The <code>primaryBackendNetworkComponent</code> property is a [SoftLayer_Network_Component](/reference/datatypes/SoftLayer_Network_Component) structure with the <code>networkVlan</code> property populated with a [SoftLayer_Network_Vlan](/reference/datatypes/SoftLayer_Network_Vlan) structure. The <code>id</code> property must be set to specify the backend network vlan of the server.</li>
</ul>
<http title='Example'>{
'primaryBackendNetworkComponent': {
'networkVlan': {
'id': 2
}
}
}</http>
<br />
</li>
<li><code>fixedConfigurationPreset.keyName</code>
<div></div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - string</li>
<li><b>Description</b> - The <code>fixedConfigurationPreset</code> property is a [SoftLayer_Product_Package_Preset](/reference/datatypes/SoftLayer_Product_Package_Preset) structure. The <code>keyName</code> property must be set to specify preset to use.</li>
<li>If a fixed configuration preset is used <code>processorCoreAmount</code>, <code>memoryCapacity</code> and <code>hardDrives</code> properties must not be set.</li>
<li>See [SoftLayer_Hardware::getCreateObjectOptions](/reference/services/SoftLayer_Hardware/getCreateObjectOptions) for available options.</li>
</ul>
<http title='Example'>{
'fixedConfigurationPreset': {
'keyName': 'SOME_KEY_NAME'
}
}</http>
<br />
</li>
<li><code>userData.value</code>
<div>Arbitrary data to be made available to the server.</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - string</li>
<li><b>Description</b> - The <code>userData</code> property is an array with a single [SoftLayer_Hardware_Attribute](/reference/datatypes/SoftLayer_Hardware_Attribute) structure with the <code>value</code> property set to an arbitrary value.</li>
<li>This value can be retrieved via the [SoftLayer_Resource_Metadata::getUserMetadata](/reference/services/SoftLayer_Resource_Metadata/getUserMetadata) method from a request originating from the server. This is primarily useful for providing data to software that may be on the server and configured to execute upon first boot.</li>
</ul>
<http title='Example'>{
'userData': [
{
'value': 'someValue'
}
]
}</http>
<br />
</li>
<li><code>hardDrives</code>
<div>Hard drive settings for the server</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - SoftLayer_Hardware_Component</li>
<li><b>Default</b> - The largest available capacity for a zero cost primary disk will be used.</li>
<li><b>Description</b> - The <code>hardDrives</code> property is an array of [SoftLayer_Hardware_Component](/reference/datatypes/SoftLayer_Hardware_Component) structures.</i>
<li>Each hard drive must specify the <code>capacity</code> property.</li>
<li>See [SoftLayer_Hardware::getCreateObjectOptions](/reference/services/SoftLayer_Hardware/getCreateObjectOptions) for available options.</li>
</ul>
<http title='Example'>{
'hardDrives': [
{
'capacity': 500
}
]
}</http>
<br />
</li>
<li id='hardware-create-object-ssh-keys'><code>sshKeys</code>
<div>SSH keys to install on the server upon provisioning.</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - array of [SoftLayer_Security_Ssh_Key](/reference/datatypes/SoftLayer_Security_Ssh_Key)</li>
<li><b>Description</b> - The <code>sshKeys</code> property is an array of [SoftLayer_Security_Ssh_Key](/reference/datatypes/SoftLayer_Security_Ssh_Key) structures with the <code>id</code> property set to the value of an existing SSH key.</li>
<li>To create a new SSH key, call [SoftLayer_Security_Ssh_Key::createObject](/reference/services/SoftLayer_Security_Ssh_Key/createObject) on the [SoftLayer_Security_Ssh_Key](/reference/datatypes/SoftLayer_Security_Ssh_Key) service.</li>
<li>To obtain a list of existing SSH keys, call [SoftLayer_Account::getSshKeys](/reference/services/SoftLayer_Account/getSshKeys) on the [SoftLayer_Account](/reference/datatypes/SoftLayer_Account) service.
</ul>
<http title='Example'>{
'sshKeys': [
{
'id': 123
}
]
}</http>
<br />
</li>
<li><code>postInstallScriptUri</code>
<div>Specifies the uri location of the script to be downloaded and run after installation is complete.</div><ul>
<li><b>Optional</b></li>
<li><b>Type</b> - string</li>
</ul>
<br />
</li>
</ul>
<h1>REST Example</h1>
<http title='Request'>curl -X POST -d '{
'parameters':[
{
'hostname': 'host1',
'domain': 'example.com',
'processorCoreAmount': 2,
'memoryCapacity': 2,
'hourlyBillingFlag': true,
'operatingSystemReferenceCode': 'UBUNTU_LATEST'
}
]
}' https://api.softlayer.com/rest/v3/SoftLayer_Hardware.json
</http>
<http title='Response'>HTTP/1.1 201 Created
Location: https://api.softlayer.com/rest/v3/SoftLayer_Hardware/f5a3fcff-db1d-4b7c-9fa0-0349e41c29c5/getObject
{
'accountId': 232298,
'bareMetalInstanceFlag': null,
'domain': 'example.com',
'hardwareStatusId': null,
'hostname': 'host1',
'id': null,
'serviceProviderId': null,
'serviceProviderResourceId': null,
'globalIdentifier': 'f5a3fcff-db1d-4b7c-9fa0-0349e41c29c5',
'hourlyBillingFlag': true,
'memoryCapacity': 2,
'operatingSystemReferenceCode': 'UBUNTU_LATEST',
'processorCoreAmount': 2
}
</http> "
date: "2018-02-12"
tags:
- "method"
- "sldn"
- "Hardware"
classes:
- "SoftLayer_Hardware"
type: "reference"
layout: "method"
mainService : "SoftLayer_Hardware"
---
| 47.442857 | 427 | 0.631662 | eng_Latn | 0.654593 |
58cf074f2bcb8d8f7047a4c76c6dc55474f02093 | 6,142 | md | Markdown | README.md | JacobHayes/multimethod | 23589d3ce8ee50edb2829e71eb5c89c60fb7e5a0 | [
"Apache-2.0"
] | null | null | null | README.md | JacobHayes/multimethod | 23589d3ce8ee50edb2829e71eb5c89c60fb7e5a0 | [
"Apache-2.0"
] | null | null | null | README.md | JacobHayes/multimethod | 23589d3ce8ee50edb2829e71eb5c89c60fb7e5a0 | [
"Apache-2.0"
] | null | null | null | [](https://pypi.org/project/multimethod/)

[](https://pepy.tech/project/multimethod)

[](https://github.com/coady/multimethod/actions)
[](https://codecov.io/gh/coady/multimethod/)
[](https://github.com/coady/multimethod/security/code-scanning)
[](https://pypi.org/project/black/)
[](http://mypy-lang.org/)
Multimethod provides a decorator for adding multiple argument dispatching to functions. The decorator creates a multimethod object as needed, and registers the function with its annotations.
There are several multiple dispatch libraries on PyPI. This one aims for simplicity and speed. With caching of argument types, it should be the fastest pure Python implementation possible.
## Usage
### multimethod
```python
from multimethod import multimethod
@multimethod
def func(x: int, y: float):
...
```
`func` is now a `multimethod` which will delegate to the above function, when called with arguments of the specified types. Subsequent usage will register new types and functions to the existing multimethod of the same name.
```python
@multimethod
def func(x: float, y: int):
...
```
Alternatively, functions can be explicitly registered in the same style as [functools.singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch). This syntax is also compatible with [mypy](http://mypy-lang.org), which by default checks that [each name is defined once](https://mypy.readthedocs.io/en/stable/error_code_list.html#check-that-each-name-is-defined-once-no-redef).
```python
@func.register
def _(x: bool, y: bool):
...
@func.register(object, bool)
@func.register(bool, object)
def _(x, y): # stackable without annotations
...
```
Multimethods are implemented as mappings from signatures to functions, and can be introspected as such.
```python
method[type, ...] # get registered function
method[type, ...] = func # register function by explicit types
```
Multimethods support any types that satisfy the `issubclass` relation, including abstract base classes in `collections.abc` and `typing`. Subscripted generics are supported:
* `Union[...]`
* `Mapping[...]` - the first key-value pair is checked
* `tuple[...]` - all args are checked
* `Iterable[...]` - the first arg is checked
Naturally checking subscripts is slower, but the implementation is optimized, cached, and bypassed if no subscripts are in use in the multimethod.
Dispatch resolution details:
* If an exact match isn't registered, the next closest method is called (and cached).
* If the `issubclass` relation is ambiguous,
[mro](https://docs.python.org/3/library/stdtypes.html?highlight=mro#class.mro) position is used as a tie-breaker.
* If there are still ambiguous methods - or none - a custom `TypeError` is raised.
* Default and keyword-only parameters may be annotated, but won't affect dispatching.
* A skipped annotation is equivalent to `: object`, which implicitly supports methods by leaving `self` blank.
* If no types are specified, it will inherently match all arguments.
### overload
Overloads dispatch on annotated predicates. Each predicate is checked in the reverse order of registration.
The implementation is separate from `multimethod` due to the different performance characteristics. Instead a simple `isa` predicate is provided for checking instance type.
```python
from multimethod import isa, overload
@overload
def func(obj: isa(str)):
...
@overload
def func(obj: str.isalnum):
...
@overload
def func(obj: str.isdigit):
...
```
### multidispatch
`multidispatch` is a wrapper to provide compatibility with `functools.singledispatch`. It requires a base implementation and use of the `register` method instead of namespace lookup. It also provisionally supports dispatching on keyword arguments.
### multimeta
Use `metaclass=multimeta` to create a class with a special namespace which converts callables to multimethods, and registers duplicate callables with the original.
```python
from multimethod import multimeta
class Foo(metaclass=multimeta):
def bar(self, x: str):
...
def bar(self, x: int):
...
```
Equivalent to:
```python
from multimethod import multimethod
class Foo:
@multimethod
def bar(self, x: str):
...
@bar.register
def bar(self, x: int):
...
```
## Installation
```console
% pip install multimethod
```
## Tests
100% branch coverage.
```console
% pytest [--cov]
```
## Changes
dev
* Improved checking for TypeErrors
* `multidispatch` has provisional support for dispatching on keyword arguments
1.5
* Postponed evaluation of nested annotations
* Variable-length tuples of homogeneous type
* Ignore default and keyword-only parameters
* Resolved ambiguous `Union` types
* Fixed an issue with name collision when defining a multimethod
* Resolved dispatch errors when annotating parameters with meta-types such as `type`
1.4
* Python >=3.6 required
* Expanded support for subscripted type hints
1.3
* Python 3 required
* Support for subscripted ABCs
1.2
* Support for typing generics
* Stricter dispatching consistent with singledispatch
1.1
* Fix for Python 2 typing backport
* Metaclass for automatic multimethods
1.0
* Missing annotations default to object
* Removed deprecated dispatch stacking
0.7
* Forward references allowed in type hints
* Register method
* Overloads with predicate dispatch
0.6
* Multimethods can be defined inside a class
0.5
* Optimized dispatching
* Support for `functools.singledispatch` syntax
0.4
* Dispatch on Python 3 annotations
| 31.336735 | 407 | 0.74829 | eng_Latn | 0.981448 |
58d070f738ca5554f49186871d88de55d439db58 | 6,014 | md | Markdown | _posts/03.middleware/2020-02-24-nacos_notes.md | veezean/veezean.github.com | 47e383a9c8efed79228e850783dfd65330b3de2e | [
"MIT"
] | 2 | 2017-09-23T03:57:22.000Z | 2020-03-19T02:07:39.000Z | _posts/03.middleware/2020-02-24-nacos_notes.md | veezean/veezean.github.com | 47e383a9c8efed79228e850783dfd65330b3de2e | [
"MIT"
] | 2 | 2017-09-24T00:05:01.000Z | 2019-07-15T05:49:39.000Z | _posts/03.middleware/2020-02-24-nacos_notes.md | veezean/veezean.github.com | 47e383a9c8efed79228e850783dfd65330b3de2e | [
"MIT"
] | null | null | null | ---
layout: post
title: "玩转Nacos!替代Eureka作为配置中心与注册中心"
date: 2020-02-24
categories: 中间件
tags: JAVA Spring SpringBoot 中间件 Nacos 注册中心 配置中心 dubbo
---
* content
{:toc}
`Nacos`是阿里开源的配置和服务管理的中间件。`Nacos`提供了一组简单易用的特性集,帮助您快速实现动态`服务发现`、`服务配置`、`服务元数据及流量管理`。
更多详细介绍,参见[官方介绍](https://nacos.io/zh-cn/docs/what-is-nacos.html)
## 1 Nacos的生态情况
当前微服务生态系统中,各种中间件太多,很多中间件都在尝试报团取暖,提供相互之间的支持与协作,形成完整的生态。
Nacos作为后来者,实现了与主流开源生态之间的无缝支持。典型地、`Nacos`可以与`SpringCloud`进行配套对接,替换`Eureka`作为注册中心;也可以和`Dubbo`无缝对接,替代`zookeeper`作为服务`注册管理中心`,以及`配置下发中心`等。
## 2 Nacos的优势在哪
看到这里,会有一个疑问,既然有这么多的组件可以用来选择作为注册中心或者配置中心,那么nacos的优势在哪?出于哪些方面的考虑,可以优先将Nacos作为选择的对象呢?
下面从几个方向进行简单的分析。
### 2.1 SpringCloud生态:Nacos Vs Eureka
比较详细的对比,参见:[nacos与eureke的比较](http://www.pianshen.com/article/8646198661/)
### 2.2 Dubbo的注册中心:Nacos Vs ZooKeeper
毫无疑问,**Nacos比ZooKeeper更适合做注册中心**。为什么这么说呢?下面简单分析下。
分布式系统`CAP理论`指出,**一个分布式系统不可能同时满足C(一致性)、A(可用性)和P(分区容错性)**。由于分区容错性在是分布式系统中必须要保证的,因此我们只能在`A`和`C`之间进行权衡
#### 2.2.1 ZooKeeper是个典型的CP系统
从CAP维度来看,`ZooKeeper`是一个典型的`CP系统`,强调数据的高度的一致性。
举个简单的例子:
> ZooKeeper集群中,当master节点挂了后,集群需要重新选举,而在此时,如果服务过来调用zookeeper来获取服务时,zookeeper是不可用的,这就直接影响到了业务的正常运行。
#### 2.2.2 注册中心更应该是个AP系统
而作为一个分布式系统的注册中心,是绝对不允许出现注册中心不可用、导致服务之间无法调用的问题的。而对于服务注册中心而言,短暂的数据不一致,对整个分布式系统的影响是有限的,可接受的。
举个例子:
> 正常情况应该是3个节点中间负载均衡,结果数据出现短暂偏差,请求在2个节点中进行负载均衡了。
相比整个系统服务不可用而言,这种负载均衡不完全的影响,是更容易接受的。
因此,服务注册中心,应该是一个`AP系统`,强调高度可用性。
#### 2.2.3 对比汇总
基于上述2点分析,ZooKeeper作为CP系统,其实不太合适作为注册中心,而`Nacos`是按照`AP系统`进行设计实现的。所以比较下来,**Nacos比ZooKeeper更适合作为注册中心**。
## 3 Nacos的较大制约因素在哪?
* 1. 中小规模场景
对于一般中小型公司而言,或者一些简单的、重要性不是很高的系统而言,可以考虑试用下nacos简化下整体的开发复杂度。
* 2. 大型规模场景
对于大型公司,或者非常重要的项目而言,可能会考虑到选择的开源组件的后续`可维护性`、`稳定性`等方面,由于Nacos开源的时间较短,社区热度还远不及Eureka等老牌组件,且可能依旧存在些许bug在修改迭代中,这些因素很大程度上制约着选择Nacos的可能性。
作为国内软件开发领域的巨头,阿里在开源方面的实例是有目共睹的。Nacos作为阿里开源范畴内的一个重磅产品,后续的前景**应该**是比较明朗的(但是阿里放弃的开源项目也不少~),所以Nacos的未来还是值得期待的。
## 4 使用篇: Dubbo使用Nacos作为服务注册管理中心
### 4.1 pom.xml中引入Nacos依赖
```xml
<!-- https://mvnrepository.com/artifact/org.apache.dubbo/dubbo-registry-nacos -->
<dependency>
<groupId>org.apache.dubbo</groupId>
<artifactId>dubbo-registry-nacos</artifactId>
<version>2.7.4</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.alibaba.nacos/nacos-client -->
<dependency>
<groupId>com.alibaba.nacos</groupId>
<artifactId>nacos-client</artifactId>
<version>1.1.4</version>
</dependency>
<dependency>
<groupId>com.alibaba.spring</groupId>
<artifactId>spring-context-support</artifactId>
<version>1.0.2</version>
</dependency>
```
### 4.2 application.properties中增加地址配置
```properties
dubbo.registry.address=nacos://172.31.236.126:8848
```
### 4.3 provider按照正常SpringBoot方式提供controller接口
```java
@RestController
@RequestMapping("/demo/dubbo/provider")
public class RpcController {
private static final Logger LOG = LoggerFactory.getLogger(RpcController.class);
@Resource(name = "rpcDemoService")
private RpcDemoService rpcDemoService;
@GetMapping("/get/time/{requestId}")
public String getTime(@PathVariable("requestId") String requestId) {
String time = rpcDemoService.queryCurrentTime(requestId);
return time;
}
}
```
其中,注入的rpcDemoService服务定义的时候,使用`@org.apache.dubbo.config.annotation.Service`指定了group值为`dubbo-group`,这样如果有多个provider进程注册到nacos中,会自动按照负载均衡的方式分发客户端的请求到各个provider上面。
```java
@Component("rpcDemoService")
@Service(timeout = 5000, group = "dubbo-group")
public class RpcDemoServiceImpl implements RpcDemoService {
@Value("${spring.application.name}")
private String serviceName;
@Override
public String queryCurrentTime(String queryId) {
SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
String formattedDate = simpleDateFormat.format(new Date());
System.out.println(serviceName + "----" + queryId + " called, current time is: " + formattedDate);
return serviceName + "|" + queryId + "|" + formattedDate;
}
}
```
### 4.4 consumer中的调用
```java
@RestController
@RequestMapping("/demo/dubbo")
public class RpcController {
@Resource(name = "rpcConsumerService")
private RpcConsumerService service;
@GetMapping("/get/time")
public String getTime() {
int provider1Count = 0;
int provider2Count = 0;
for (int i = 0; i < 100; i++) {
String time = service.getTime("1");
if (time.contains("provider1")) {
provider1Count++;
} else {
provider2Count++;
}
}
String result = "provider1 call count: " + provider1Count
+ "<br>" + "\r\n"
+ "provider2 call count: " + provider2Count;
System.out.println(result);
return result;
}
}
```
其中,rpcConsumerService的具体实现逻辑如下, 通过`@org.apache.dubbo.config.annotation.Reference`指定了远端服务对应的group信息(也即前面provider中指定的group值),这样客户端的请求会在此group中进行负载均衡。
```java
@org.springframework.stereotype.Service
public class RpcConsumerService {
@Reference(group = "dubbo-group")
private RpcDemoService rpcDemoService;
public String getTime(String requestId) {
return rpcDemoService.queryCurrentTime(requestId);
}
}
```
### 4.5 Nacos服务端部署
[官网下载](https://nacos.io/zh-cn/docs/quick-start.html)压缩包,解压后,执行bin目录的脚本即可启动服务端进程,默认端口号8848,如果被占用,可以到conf目录中application.proerties文件中修改。
### 4.6 测试验证
启动Nacos服务进程、1个Dubbo consumer进程,2个Dubbo provider进程进行测试。整体的组网情况如下所示:

通过`http://localhost:8848/nacos/`打开Nacos的管理界面,可以看到注册的服务信息如下:

通过`http://localhost:28812/demo/dubbo/get/time/`触发consumer向provider发起100次调用请求,查看下请求被分配到2个provider上的情况如下:
```
provider1 call count: 47
provider2 call count: 53
```
从验证结果上看,基本上是在group中的多个provider之间进行负载均衡分发请求的。
---
欢迎关注我的公众号“**架构笔录**”,原创技术文章第一时间推送,也可互动一起探讨交流技术。

| 26.147826 | 160 | 0.732125 | yue_Hant | 0.568991 |
58d07d586422b6cc51850fb8c9f1f3c99c60b652 | 1,579 | md | Markdown | ROADMAP.md | sdaves/Fabulous | f09055d84b22afc32221be28ceab38341a14316b | [
"Apache-2.0"
] | null | null | null | ROADMAP.md | sdaves/Fabulous | f09055d84b22afc32221be28ceab38341a14316b | [
"Apache-2.0"
] | null | null | null | ROADMAP.md | sdaves/Fabulous | f09055d84b22afc32221be28ceab38341a14316b | [
"Apache-2.0"
] | null | null | null | ## Roadmap
* Programming model:
* Move to `seq<_>` as the de-facto model type
* Add `OpenGLView`
* Docs
* Generate `///` docs in code generator
* Live Reload
* State migration: Support hot-reloading of the saved model, reapplying to the same app where possible
* Use actual newly compiled DLLs on Android instead of F# interperter
* Check Live Reload on WPF and other same-machine
* Make IDE launch of `fscd` tool simpler
## Ideas
* Performance:
* Consider possibilities for better list comparison/diffing
* Do more perf-test on large lists and do resulting perf work
* Consider allowing a `ChunkList` tree as input to ListView etc., e.g. `chunks { yield! stablePart; yield newElement; yield! stablePart2 }`
* Consider memoize function closure creation
* Consider moving 'view' and 'model' computations off the UI thread
* Consider making some small F# language improvements to improve code:
* Remove `yield` in more cases
* Automatically save function values that do not capture any arguments
* Allow a default unnamed argument for `children` so the argument name doesn't have to be given explicitly
* Allow the use of struct options for optional arguments (to reduce allocations)
* Implement the C# 5.0 "open static classes" feature in F# to allow the `View.` prefix to be dropped
* App size:
* Remove F# resources in linker, see https://github.com/fsprojects/Fabulous/issues/94
## Discarded Ideas
* Possibly switch to a type provider (see [this comment](https://github.com/fsprojects/Fabulous/issues/50#issuecomment-390396365))
| 41.552632 | 142 | 0.746042 | eng_Latn | 0.972077 |
58d0a6bb3ff2da7c05e7adb0767e4f87b2a3673a | 1,676 | md | Markdown | _posts/2018-10-01-goodbye-phantom.md | jake-webbernet/jake-webbernet.github.io | 7bbe8d4caa749e45f10bd9be1ee2e0cccef6cbf2 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | _posts/2018-10-01-goodbye-phantom.md | jake-webbernet/jake-webbernet.github.io | 7bbe8d4caa749e45f10bd9be1ee2e0cccef6cbf2 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | _posts/2018-10-01-goodbye-phantom.md | jake-webbernet/jake-webbernet.github.io | 7bbe8d4caa749e45f10bd9be1ee2e0cccef6cbf2 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | So it's common knowledge that you can run your automated tests via Google Chrome instead of PhantomJS. PhantomJS is now officially not supported for obvious reasons.
I've been meaning to start running my feature specs with Chrome rather than PhantomJS but i've always had a bit of trouble getting it running.
But I finally did it, and this is what I had to do:
Added these two gems to my `test` section of my rails `Gemfile`
```ruby
gem 'selenium-webdriver'
gem 'chromedriver-helper'
```
Removed all code mentioning `Poltergeist` in my `rails_helper`
I'm using system specs now rather then feature specs, so I added these two sections to my `rails_helper`. It's pretty handy to be able to run the specs in non-headless mode when developing. It even works on my WSL setup which I really did not expect.
```ruby
config.before(:each, type: :system, js: true) do
if ENV['HEADLESS']
      driven_by :selenium_chrome_headless
else
driven_by :selenium
end
end
```
```ruby
Capybara.register_driver :selenium do |app|
options = Selenium::WebDriver::Chrome::Options.new(args: ['disable-gpu'])
Capybara::Selenium::Driver.new(
app,
browser: :chrome,
options: options
)
end
```
And here is what I needed to do to get chromium and the driver running locally on my computer
```shell
# Update the local chromedriver
$ chromedriver-update
# Install chromium so that chromedriver has something to drive!
$ sudo apt install chromium-browser
# And finally, to test if it's working
$ chromedriver
-> Starting ChromeDriver 70.0.3538.16 (16ed95b41bb05e565b11fb66ac33c660b721f778) on port 9515
-> Only local connections are allowed.
```
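With all of that in place, running the system specs headless is just a matter of setting the environment variable checked in the config above, for example:
```shell
# Run the system specs without a visible browser window
$ HEADLESS=1 bundle exec rspec
```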
| 32.230769 | 250 | 0.743437 | eng_Latn | 0.9943 |
58d0fbd89b1d66d60b59b9a057aca128d7c95f22 | 3,031 | md | Markdown | README.md | maudl3116/GPS | 381a8e209bfec4c06b43ed4b69445bdf5c365409 | [
"Apache-2.0"
] | 25 | 2019-06-20T10:38:42.000Z | 2022-03-19T03:56:44.000Z | README.md | maudl3116/GPS | 381a8e209bfec4c06b43ed4b69445bdf5c365409 | [
"Apache-2.0"
] | 5 | 2019-12-16T21:55:14.000Z | 2022-02-10T00:26:07.000Z | README.md | vishalbelsare/GPSig | d250295faeb84d82af097e0dbbe441614888fd69 | [
"Apache-2.0"
] | 5 | 2020-08-20T12:00:21.000Z | 2022-02-28T12:09:03.000Z | # GPSig
A Gaussian process library for Bayesian learning from sequential data, such as time series, using signature kernels as covariance functions based on GPflow and TensorFlow. This repository contains supplementary code to the paper https://arxiv.org/abs/1906.08215.
***
## Installing
To get started, you should first clone the repository using git, e.g. with the command
```
git clone https://github.com/tgcsaba/GPSig.git
```
and then create and activate a virtual environment with Python <= 3.7
```
conda create -n env_name python=3.7
conda activate env_name
```
Then, install the requirements using pip:
```
pip install -r requirements.txt
```
If you would like to use a GPU to run computations (which we heavily recommend, if you have one available), you most likely need to install a GPU-compatible version of TensorFlow instead.
Depending on your OS and the CUDA compute capability of your GPU, you might be able to acquire a pre-built version of TensorFlow for your system (from pip, conda, or other sources). In some cases, you might have to build it on your system yourself (https://www.tensorflow.org/install/source), which is generally recommended so that you end up with a version that is able to make full use of your hardware.
***
## Getting started
To get started, we suggest first looking at the notebook `signature_kernel.ipynb`, which gives a simple worked-out example of how to use the signature kernel as a standalone object. In this notebook, we validate the implementation of the signature kernel by comparing our results to an alternative way of computing signature features using the `esig` package.
The difference between the two ways of computing the signature kernel is a 'kernel trick', which makes it possible to compute the signature kernel using only inner product evaluation on the underlying state-space.
In the other notebook, `ts_classification.ipynb`, a worked-out example is given of how to use signature kernels for time series classification using inter-domain sparse variational inference with inducing tensors to make computations tractable and efficient. To make the most of these examples, we also recommend looking into the [GPflow](https://github.com/GPflow/GPflow) syntax for defining kernels and GP models; GPflow is the Gaussian process library that we build on.
***
## Download datasets
The benchmarks directory contains the appropriate scripts used to run the benchmarking experiments in the paper. The datasets can be downloaded from our dropbox folder using the `download_data.sh` script in the `./benchmarks/datasets` folder by running
```
cd benchmarks
bash ./datasets/download_data.sh
```
or manually by copy-pasting the Dropbox URL contained within the aforementioned script.
## Support
We encourage the use of this code for applications, and we aim to provide support in as many cases as possible. For further assistance or to tell us about your project, please send an email to
`[email protected]` or `[email protected]`.
| 75.775 | 460 | 0.78324 | eng_Latn | 0.998928 |
58d147e7819b219ee8270aa3ef28a32119f15fb7 | 2,094 | md | Markdown | _posts/2019-12-30-can_you_show_a_movie_from_netflix_in_your_classroom.md | eddiecmurray/blog | 147c401dd2d2d265c7b57b62cde6c3aba53f2d84 | [
"MIT"
] | null | null | null | _posts/2019-12-30-can_you_show_a_movie_from_netflix_in_your_classroom.md | eddiecmurray/blog | 147c401dd2d2d265c7b57b62cde6c3aba53f2d84 | [
"MIT"
] | null | null | null | _posts/2019-12-30-can_you_show_a_movie_from_netflix_in_your_classroom.md | eddiecmurray/blog | 147c401dd2d2d265c7b57b62cde6c3aba53f2d84 | [
"MIT"
] | null | null | null | ---
layout: post
title: Can You Show a Movie from Netflix in Your Classroom?
tags: copyright digital_citizenship netflix
eye_catch: /blog/assets/img/netflix-3733812_1280.jpg
---
As I have discussed before, [copyright is a very complicated topic](https://www.eddiecmurray.com/blog/2019/10/07/houston_isd_fined_92_million_for_copyright_violation/) and it can be confusing to know when you have permission to use content in your classroom. One of the questions I receive a few times a year is whether teachers can show a movie from a video streaming service like Netflix or Amazon Prime in their classroom. Before showing a video from Netflix in your classroom, you will need to do some research.
<!--more-->
Video streaming services are very convenient and make it easy to access a large library of movies and TV shows. As an educator, are you allowed to show a movie from Netflix in your classroom? [Eva Harvell](https://twitter.com/techie_teach) at [EdSurge](https://www.edsurge.com/) explains what movies you can show in your classroom from Netflix:
> At first glance it seems like it would be OK, but a teacher wanting to show a Netflix movie would have to log into Netflix using a personal account. The user agreement the individual agreed to when he or she created the Netflix account prohibits showing movies in a public venue, which may be a contract violation. (However, Netflix does permit the [showing of some documentaries in class](https://help.netflix.com/en/node/57695).)
As EdSurge explains, depending on the movie you may be able to show it in your classroom. A teacher asked me about a documentary this year and according to the Netflix website they allow that documentary to be shown in an educational setting.
Netflix provides some great documentaries that can be shown in the classroom, but be sure to check the Netflix website first. When showing your documentary, be sure to use instructional strategies to help your students process the information they are viewing.
**Photo:** [afra32 / Pixabay](https://pixabay.com/photos/netflix-peliculas-youtube-digital-3733812/) | 104.7 | 511 | 0.795129 | eng_Latn | 0.998264 |
58d19158ab2ff49cfb7d292685e36cb548d5d60d | 8,688 | md | Markdown | articles/time-series-insights/how-to-ingest-data-iot-hub.md | cluxter/azure-docs.fr-fr | 9e1df772cdd6e4a94e61c4ccee8cd41e692dc427 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/time-series-insights/how-to-ingest-data-iot-hub.md | cluxter/azure-docs.fr-fr | 9e1df772cdd6e4a94e61c4ccee8cd41e692dc427 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/time-series-insights/how-to-ingest-data-iot-hub.md | cluxter/azure-docs.fr-fr | 9e1df772cdd6e4a94e61c4ccee8cd41e692dc427 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: How to add an IoT hub event source - Azure Time Series Insights | Microsoft Docs
description: Learn how to add an IoT hub event source to your Azure Time Series Insights environment.
ms.service: time-series-insights
services: time-series-insights
author: deepakpalled
ms.author: dpalled
manager: diviso
ms.reviewer: v-mamcge, jasonh, kfile
ms.workload: big-data
ms.topic: conceptual
ms.date: 06/30/2020
ms.custom: seodec18
ms.openlocfilehash: e963c092b968476d20e25482cbe165234f7e86f0
ms.sourcegitcommit: 3543d3b4f6c6f496d22ea5f97d8cd2700ac9a481
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 07/20/2020
ms.locfileid: "86527934"
---
# <a name="add-an-iot-hub-event-source-to-your-azure-time-series-insight-environment"></a>Add an IoT hub event source to your Azure Time Series Insights environment
This article describes how to use the Azure portal to add an event source that reads data from an Azure IoT hub into your Azure Time Series Insights environment.
> [!NOTE]
> The instructions in this article apply to both Azure Time Series Insights Gen 1 and Time Series Insights Gen 2 environments.
## <a name="prerequisites"></a>Prerequisites
* Create an [Azure Time Series Insights environment](time-series-insights-update-create-environment.md).
* Create an [IoT hub by using the Azure portal](../iot-hub/iot-hub-create-through-portal.md).
* The IoT hub must have active message events being sent in.
* Create a dedicated consumer group in the IoT hub for the Azure Time Series Insights environment to use. Each Azure Time Series Insights event source must have its own dedicated consumer group that is not shared with any other consumer. If multiple readers consume events from the same consumer group, all readers are likely to experience failures. For more information, see the [Azure IoT Hub developer guide](../iot-hub/iot-hub-devguide.md).
### <a name="add-a-consumer-group-to-your-iot-hub"></a>Add a consumer group to your IoT hub
Applications use consumer groups to pull data from Azure IoT Hub. Provide a dedicated consumer group that is used only by this Azure Time Series Insights environment to reliably read data from your IoT hub.
To add a new consumer group to your IoT hub:
1. In the [Azure portal](https://portal.azure.com), locate and open your IoT hub.
1. Under **Settings**, select **Built-in endpoints**, and then select the **Events** endpoint.
   [](media/time-series-insights-how-to-add-an-event-source-iothub/tsi-connect-iot-hub.png#lightbox)
1. Under **Consumer groups**, enter a unique name for the consumer group. Use this same name in your Azure Time Series Insights environment when you create the event source.
1. Select **Save**.
## <a name="add-a-new-event-source"></a>Add a new event source
1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the left menu, select **All resources**. Select your Azure Time Series Insights environment.
1. Under **Settings**, select **Event Sources**, and then select **Add**.
   [](media/time-series-insights-how-to-add-an-event-source-iothub/tsi-add-event-source.png#lightbox)
1. In the **New event source** pane, for **Event source name**, enter a name that is unique to this Azure Time Series Insights environment. For example, enter **event-stream**.
1. For **Source**, select **IoT Hub**.
1. Select a value for **Import option**:
    * If you already have an IoT hub in one of your subscriptions, select **Use IoT Hub from available subscriptions**. This is the easiest approach.
      [](media/time-series-insights-how-to-add-an-event-source-iothub/tsi-select-an-import-option.png#lightbox)
    * The following table describes the properties that are required for the **Use IoT Hub from available subscriptions** option:
       [](media/time-series-insights-how-to-add-an-event-source-iothub/tsi-create-configure-confirm.png#lightbox)
       | Property | Description |
       | --- | --- |
       | Subscription | The subscription that the desired IoT hub belongs to. |
       | IoT Hub name | The name of the selected IoT hub. |
       | IoT Hub policy name | Select the shared access policy. You can find the shared access policies on the IoT hub settings tab. Each shared access policy has a name, permissions that you set, and access keys. The shared access policy for your event source *must* have **service connect** permissions. |
       | IoT Hub policy key | The key is prepopulated. |
    * Select **Provide IoT Hub settings manually** if the IoT hub is external to your subscriptions or if you want to choose advanced options.
      The following table describes the properties that are required for the **Provide IoT Hub settings manually** option:
       | Property | Description |
       | --- | --- |
       | Subscription ID | The subscription that the desired IoT hub belongs to. |
       | Resource group | The name of the resource group in which the IoT hub was created. |
       | IoT Hub name | The name of your IoT hub. When you created your IoT hub, you also gave it a name. |
       | IoT Hub policy name | The shared access policy. You can create a shared access policy on the IoT hub settings tab. Each shared access policy has a name, permissions that you set, and access keys. The shared access policy for your event source *must* have **service connect** permissions. |
       | IoT Hub policy key | The shared access key used to authenticate access to the Azure Service Bus namespace. Enter the primary or secondary key here. |
    * Both options share the following configuration options:
       | Property | Description |
       | --- | --- |
       | IoT Hub consumer group | The consumer group that reads events from the IoT hub. We highly recommend that you use a dedicated consumer group for your event source. |
       | Event serialization format | Currently, JSON is the only available serialization format. Event messages must be in this format or no data can be read. |
       | Timestamp property name | To determine this value, you need to understand the message format of the message data sent into the IoT hub. This value is the **name** of the specific event property in the message data that you want to use as the event timestamp. The value is case-sensitive. When this field is left blank, the **event enqueue time** in the event source is used as the event timestamp. |
1. Add the name of the dedicated Azure Time Series Insights consumer group that you added to your IoT hub.
1. Select **Create**.
1. After the event source is created, Azure Time Series Insights automatically starts streaming data into your environment.
## <a name="next-steps"></a>Next steps
* [Define data access policies](time-series-insights-data-access.md) to secure the data.
* [Send events](time-series-insights-send-events.md) to the event source.
* Access your environment in the [Azure Time Series Insights explorer](https://insights.timeseries.azure.com).
| 74.896552 | 526 | 0.766344 | fra_Latn | 0.98196 |
58d1922e60fd18b40fe36a34047c6b48d807a500 | 2,364 | md | Markdown | src/sv/2021-03/05/02.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/sv/2021-03/05/02.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/sv/2021-03/05/02.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: "I Will Give You Rest"
date: 25/07/2021
---
`Read Matthew 11:20–28, where Jesus says: "Come to me, all you who are weighed down by burdens, and I will give you rest." In what context does He say this? In what way does Jesus give us rest?`
Jesus never spoke without a context. To understand Him, we need to take the whole context into account, especially if we want to avoid misunderstanding Him.
Matthew 11 marks a turning point in the Gospel of Matthew. The pronouncements about the rejection of important Galilean cities are the harshest in the Gospel so far. Jesus is not ingratiating; He puts His finger on what hurts. He associates with the "wrong" people (Matt. 9:9–13). His claim to be able to forgive sins is scandalous in the eyes of the religious leaders (Matt. 9:1–8).
Jesus truly speaks powerful words of judgment to the people and even compares them to Sodom, which was regarded then (as now) as a place of relentless evil. "But I tell you that it will be more bearable for the land of Sodom on the day of judgment than for you" (Matt. 11:24).
In the midst of the tension that arises, Jesus changes track and offers true rest. He can do this because: "All things have been entrusted to me by my Father. And no one knows the Son except the Father, and no one knows the Father except the Son" (Matt. 11:27). Jesus' ability to give rest is grounded in His divinity and His unity with the Father.
Before we can lay down our burdens, we need to understand that we cannot carry them alone. In fact, most of us do not come until we have realized our true condition. Jesus' invitation is based on need.
Jesus' words in Matthew 11:28 begin with an imperative in the Greek: "Come" is not optional but a precondition for finding rest. This means that we need to let go of control. In an age when we can easily control much of our lives through our smartphones, turning toward Jesus does not come naturally. For most people, surrender is in fact the hardest part of the Christian life.
We rightly love to talk about what God does for us in Christ, that we cannot save ourselves, and so on. All of this is true. But in the end we still need to make a conscious choice to "come" to Jesus and surrender ourselves to Him. This is where the reality of free will becomes the foremost and most central issue in the Christian life.
`What burdens are you carrying? How can you learn to leave them with Jesus and experience the rest He offers, which cost Him so much?`
58d1a9a399121a3a3a77db4543957b6101eb7e1e | 3,386 | md | Markdown | docs/framework/unmanaged-api/fusion/fusion-install-reference-structure.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/fusion-install-reference-structure.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/fusion-install-reference-structure.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: FUSION_INSTALL_REFERENCE structure
ms.date: 03/30/2017
api_name:
- FUSION_INSTALL_REFERENCE
api_location:
- fusion.dll
api_type:
- COM
f1_keywords:
- FUSION_INSTALL_REFERENCE
helpviewer_keywords:
- FUSION_INSTALL_REFERENCE structure [.NET Framework fusion]
ms.assetid: ae181ec8-36bf-4ed1-9a02-ca27d417c80b
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 9e81fb7c99b9fd03a69456a84f2191770f40121d
ms.sourcegitcommit: d2e1dfa7ef2d4e9ffae3d431cf6a4ffd9c8d378f
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/07/2019
ms.locfileid: "70795338"
---
# <a name="fusion_install_reference-structure"></a>FUSION_INSTALL_REFERENCE structure
Represents a reference that an application makes to an assembly that the application has installed in the global assembly cache.
## <a name="syntax"></a>Syntax
```cpp
typedef struct _FUSION_INSTALL_REFERENCE_ {
    DWORD cbSize;
    DWORD dwFlags;
    GUID guidScheme;
    LPCWSTR szIdentifier;
    LPCWSTR szNonCanonicalData;
} FUSION_INSTALL_REFERENCE, *LPFUSION_INSTALL_REFERENCE;
```
## <a name="members"></a>Members
|Member|Description|
|------------|-----------------|
|`cbSize`|The size of the structure, in bytes.|
|`dwFlags`|Reserved for future extensibility. This value must be 0 (zero).|
|`guidScheme`|The entity that adds the reference. This field can have one of the following values:<br /><br /> - FUSION_REFCOUNT_MSI_GUID: The assembly is referenced by an application that was installed by using the Microsoft Windows Installer. The `szIdentifier` field is set to `MSI`, and the `szNonCanonicalData` field is set to `Windows Installer`. This scheme is used for Windows side-by-side assemblies.<br />- FUSION_REFCOUNT_UNINSTALL_SUBKEY_GUID: The assembly is referenced by an application that appears in the **Add/Remove Programs** interface. The `szIdentifier` field provides the token that registers the application with the **Add/Remove Programs** interface.<br />- FUSION_REFCOUNT_FILEPATH_GUID: The assembly is referenced by an application that is represented by a file in the file system. The `szIdentifier` field provides the path to the file.<br />- FUSION_REFCOUNT_OPAQUE_STRING_GUID: The assembly is referenced by an application that is represented only by an opaque string. The `szIdentifier` field provides this opaque string. The global assembly cache does not check for the existence of opaque references when you remove this value.<br />- FUSION_REFCOUNT_OSINSTALL_GUID: This value is reserved.|
|`szIdentifier`|A unique string that identifies the application that installed the assembly in the global assembly cache. Its value depends on the value of the `guidScheme` field.|
|`szNonCanonicalData`|A string that is understood only by the entity that adds the reference. The global assembly cache stores this string, but it does not use it.|
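As an illustration only (this example is not part of the original reference), a caller might populate the structure as follows before passing it to the fusion APIs; the application path is a placeholder:
```cpp
#include <windows.h>
#include <fusion.h>

// Illustrative only: describe an application, identified by a file path,
// that holds a reference to an assembly in the global assembly cache.
FUSION_INSTALL_REFERENCE MakeFilePathReference()
{
    FUSION_INSTALL_REFERENCE installRef = {};
    installRef.cbSize = sizeof(FUSION_INSTALL_REFERENCE);
    installRef.dwFlags = 0; // reserved, must be zero
    installRef.guidScheme = FUSION_REFCOUNT_FILEPATH_GUID;
    installRef.szIdentifier = L"C:\\Program Files\\MyApp\\MyApp.exe"; // placeholder path
    installRef.szNonCanonicalData = NULL; // stored but not used by the GAC
    return installRef;
}
```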
## <a name="requirements"></a>Requirements
**Platforms:** See [System Requirements](../../get-started/system-requirements.md).
**Header:** Fusion.h
**.NET Framework versions:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
## <a name="see-also"></a>See also
- [Fusion Structures](fusion-structures.md)
- [Global Assembly Cache](../../app-domains/gac.md)
| 54.612903 | 1,276 | 0.769344 | ita_Latn | 0.921401 |
58d2c143c340085edda8fb7fd6a324961a296599 | 612 | md | Markdown | website/docs/patterns/global-errors/global-errors-code.md | prog13/intergalactic | 0ef386eb93d984d77cf1b54e18ffd3b35f64f68b | [
"MIT"
] | null | null | null | website/docs/patterns/global-errors/global-errors-code.md | prog13/intergalactic | 0ef386eb93d984d77cf1b54e18ffd3b35f64f68b | [
"MIT"
] | null | null | null | website/docs/patterns/global-errors/global-errors-code.md | prog13/intergalactic | 0ef386eb93d984d77cf1b54e18ffd3b35f64f68b | [
"MIT"
] | null | null | null | ---
title: Code
---
@## Example of using templates
Both graphics and texts are already included in ready-to-use templates. The locale can either be obtained inside the component or provided by wrapping the application in `I18nProvider` from the `@semcore/utils` package, as in the example below.
@example pageError
@## Example of using a custom error
You can create any error page. In the `Error` package, you will find the `getIconPath` function, which allows you to get the latest versions of the icons. The list of available icons is described in the [API](/patterns/global-errors/global-errors-api).
@example error
| 38.25 | 251 | 0.764706 | eng_Latn | 0.999577 |
58d30c447d464b0ff98b9c9c788652882e49b889 | 4,108 | md | Markdown | docs/code-quality/c6014.md | Miguel-byte/visualstudio-docs.es-es | 5f77731590b1efe39f2dd38ad51400c1e44a9d58 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/c6014.md | Miguel-byte/visualstudio-docs.es-es | 5f77731590b1efe39f2dd38ad51400c1e44a9d58 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/c6014.md | Miguel-byte/visualstudio-docs.es-es | 5f77731590b1efe39f2dd38ad51400c1e44a9d58 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: C6014
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- C6014
helpviewer_keywords:
- C6014
ms.assetid: ef76ec88-74d2-4a3b-b6fe-4b0851ab3372
author: mikeblome
ms.author: mblome
manager: markl
ms.workload:
- multiple
ms.openlocfilehash: 502e3f8a548bcffb266717541d0548898f124129
ms.sourcegitcommit: 5f6ad1cefbcd3d531ce587ad30e684684f4c4d44
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/22/2019
ms.locfileid: "72746872"
---
# <a name="c6014"></a>C6014
Warning C6014: Leaking memory.
This warning indicates that the specified pointer points to allocated memory or some other allocated resource that has not been freed. The analyzer checks this condition only when the `_Analysis_mode_(_Analysis_local_leak_checks_)` SAL annotation is specified. By default, this annotation is specified for Windows kernel mode (driver) code. For more information about SAL annotations, see [Using SAL Annotations to Reduce C/C++ Code Defects](../code-quality/using-sal-annotations-to-reduce-c-cpp-code-defects.md).
## <a name="example"></a>Example
The following code generates this warning:
```cpp
// cl.exe /analyze /EHsc /nologo /W4
#include <sal.h>
#include <stdlib.h>
#include <string.h>
_Analysis_mode_(_Analysis_local_leak_checks_)
#define ARRAYSIZE 10
const int TEST_DATA [ARRAYSIZE] = {10,20,30,40,50,60,70,80,90,100};
void f( )
{
int *p = (int *)malloc(sizeof(int)*ARRAYSIZE);
if (p) {
memcpy(p, TEST_DATA, sizeof(int)*ARRAYSIZE);
// code ...
}
}
int main( )
{
f();
}
```
## <a name="example"></a>Example
The following code corrects the warning by releasing the memory:
```cpp
// cl.exe /analyze /EHsc /nologo /W4
#include <sal.h>
#include <stdlib.h>
#include <string.h>
_Analysis_mode_(_Analysis_local_leak_checks_)
#define ARRAYSIZE 10
const int TEST_DATA [ARRAYSIZE] = {10,20,30,40,50,60,70,80,90,100};
void f( )
{
int *p = (int *)malloc(sizeof(int)*ARRAYSIZE);
if (p) {
memcpy(p, TEST_DATA, sizeof(int)*ARRAYSIZE);
// code ...
free(p);
}
}
int main( )
{
f();
}
```
This warning is reported for both memory and resource leaks when the resource is commonly *aliased* to another location. Memory is aliased when a pointer to the memory escapes the function by means of an `_Out_` parameter annotation, a global variable, or a return value. This warning can be reported on function exit if the argument is annotated as intended to be freed.
Note that code analysis will not recognize the actual implementation of a memory allocator (involving address arithmetic) and will not recognize that memory is allocated (although many wrappers are recognized). In this case, the analyzer does not recognize that the memory was allocated and issues this warning. To suppress the false positive, use a `#pragma` directive on the line that precedes the opening brace `{` of the function body.
To avoid all of these kinds of potential leaks, use the mechanisms provided by the C++ Standard Template Library (STL). These include [shared_ptr](/cpp/standard-library/shared-ptr-class), [unique_ptr](/cpp/standard-library/unique-ptr-class), and [vector](/cpp/standard-library/vector). For more information, see [Smart Pointers](/cpp/cpp/smart-pointers-modern-cpp) and [C++ Standard Library](/cpp/standard-library/cpp-standard-library-reference).
```cpp
// cl.exe /analyze /EHsc /nologo /W4
#include <sal.h>
#include <memory>
using namespace std;
_Analysis_mode_(_Analysis_local_leak_checks_)
const int ARRAYSIZE = 10;
const int TEST_DATA [ARRAYSIZE] = {10,20,30,40,50,60,70,80,90,100};
void f( )
{
unique_ptr<int[]> p(new int[ARRAYSIZE]);
std::copy(begin(TEST_DATA), end(TEST_DATA), p.get());
// code ...
// No need for free/delete; unique_ptr
// cleans up when out of scope.
}
int main( )
{
f();
}
```
## <a name="see-also"></a>See also
[C6211](../code-quality/c6211.md)
| 33.129032 | 575 | 0.733447 | spa_Latn | 0.81929 |
58d310b10893b5596ecf0d83dc1a657c2832c0cd | 256 | md | Markdown | themes/docs-new/layouts/shortcodes/search_key_name.md | RachelWyatt/chef-web-docs | 6aea00be2ef29052ea40f9aec59baa4f3f9bc006 | [
"CC-BY-3.0"
] | 147 | 2015-10-02T15:42:48.000Z | 2022-03-16T19:05:35.000Z | themes/docs-new/layouts/shortcodes/search_key_name.md | RachelWyatt/chef-web-docs | 6aea00be2ef29052ea40f9aec59baa4f3f9bc006 | [
"CC-BY-3.0"
] | 2,585 | 2015-09-30T03:43:31.000Z | 2022-03-30T19:57:11.000Z | themes/docs-new/layouts/shortcodes/search_key_name.md | RachelWyatt/chef-web-docs | 6aea00be2ef29052ea40f9aec59baa4f3f9bc006 | [
"CC-BY-3.0"
] | 613 | 2015-10-02T18:02:48.000Z | 2022-03-05T06:48:00.000Z | To see the available keys for a node, enter the following (for a node
named `staging`):
```bash
knife node show staging -Fj | less
```
to return a full JSON description of the node and to view the available
keys on which any search query can be based.
| 25.6 | 71 | 0.746094 | eng_Latn | 0.999574 |
58d33f5c1e79861d11e770a89f5b343cb9294ff3 | 3,430 | md | Markdown | docs/rest-api/api/driveitem_update.md | mymindstorm/onedrive-api-docs | 1edfaded1b2e23a6e0fddfb4fe10860c281a6487 | [
"MIT-0",
"MIT"
] | null | null | null | docs/rest-api/api/driveitem_update.md | mymindstorm/onedrive-api-docs | 1edfaded1b2e23a6e0fddfb4fe10860c281a6487 | [
"MIT-0",
"MIT"
] | null | null | null | docs/rest-api/api/driveitem_update.md | mymindstorm/onedrive-api-docs | 1edfaded1b2e23a6e0fddfb4fe10860c281a6487 | [
"MIT-0",
"MIT"
] | null | null | null | ---
author: JeremyKelley
ms.author: JeremyKe
ms.date: 09/10/2017
title: Update a file or folder - OneDrive API
localization_priority: Priority
---
# Update DriveItem properties
Update the metadata for a [DriveItem](../resources/driveitem.md) by ID or path.
You can also use update to [move an item](driveitem_move.md) to another parent by updating the item's **parentReference** property.
## Permissions
One of the following permissions is required to call this API. To learn more, including how to choose permissions, see [Permissions](../concepts/permissions_reference.md).
|Permission type | Permissions (from least to most privileged) |
|:--------------------|:---------------------------------------------------------|
|Delegated (work or school account) | Files.ReadWrite, Files.ReadWrite.All, Sites.ReadWrite.All |
|Delegated (personal Microsoft account) | Files.ReadWrite, Files.ReadWrite.All |
|Application | Files.ReadWrite.All, Sites.ReadWrite.All |
## HTTP request
<!-- { "blockType": "ignored" } -->
```http
PATCH /drives/{drive-id}/items/{item-id}
PATCH /groups/{group-id}/drive/items/{item-id}
PATCH /me/drive/items/{item-id}
PATCH /sites/{site-id}/drive/items/{item-id}
PATCH /users/{user-id}/drive/items/{item-id}
```
## Optional request headers
| Name | Type | Description |
|:--------------|:-------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| if-match | String | If this request header is included and the eTag (or cTag) provided does not match the current eTag on the folder, a `412 Precondition Failed` response is returned. |
## Request body
In the request body, supply the values for properties that should be updated.
Existing properties that are not included in the request body will maintain their previous values or be recalculated based on changes to other property values.
For best performance your app should not include properties that haven't changed.
## Response
If successful, this method returns a `200 OK` response code and an updated [DriveItem](../resources/driveitem.md) resource in the response body.
## Example
This example renames the DriveItem resource to "new-file-name.docx".
<!-- { "blockType": "request", "name": "update-item", "tags": "service.graph" } -->
```http
PATCH /me/drive/items/{item-id}
Content-type: application/json
{
"name": "new-file-name.docx"
}
```
### Response
If successful, this method returns a [driveItem][item-resource] resource in the response body.
<!-- { "blockType": "response", "@odata.type": "microsoft.graph.driveItem", "truncated": true } -->
```http
HTTP/1.1 200 OK
Content-type: application/json
{
"id": "01NKDM7HMOJTVYMDOSXFDK2QJDXCDI3WUK",
"name": "new-file-name.docx",
"file": { }
}
```
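Because an update can also move an item by changing its **parentReference** (as noted above), a hypothetical move request might look like the following; the folder ID is a placeholder:
```http
PATCH /me/drive/items/{item-id}
Content-type: application/json
{
  "parentReference": { "id": "{new-parent-folder-id}" }
}
```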
## Error responses
See [Error Responses][error-response] for details about how errors are returned.
[error-response]: ../concepts/errors.md
[item-resource]: ../resources/driveitem.md
<!-- {
"type": "#page.annotation",
"description": "Update or replace the contents or properties of an item.",
"keywords": "update,replace,contents,item",
"section": "documentation",
"tocPath": "Items/Update"
} -->
| 34.646465 | 192 | 0.628863 | eng_Latn | 0.844661 |
58d3f3a354986f62bde90c6585f88b71c0e653ff | 2,606 | md | Markdown | docs/technical-reference/mutant-schemata.md | AlexNDRmac/stryker-net | 5b9d51179fc06a54495f0468a9a3e32ca48e0f55 | [
"Apache-2.0"
] | 1,222 | 2018-06-06T07:34:46.000Z | 2022-03-31T18:08:54.000Z | docs/technical-reference/mutant-schemata.md | AlexNDRmac/stryker-net | 5b9d51179fc06a54495f0468a9a3e32ca48e0f55 | [
"Apache-2.0"
] | 1,594 | 2018-06-06T07:29:56.000Z | 2022-03-31T08:06:04.000Z | docs/technical-reference/mutant-schemata.md | AlexNDRmac/stryker-net | 5b9d51179fc06a54495f0468a9a3e32ca48e0f55 | [
"Apache-2.0"
] | 188 | 2018-06-06T07:40:50.000Z | 2022-03-11T21:37:42.000Z | ---
title: Mutant schemata
sidebar_position: 30
custom_edit_url: https://github.com/stryker-mutator/stryker-net/edit/master/docs/technical-reference/mutant-schemata.md
---
Stryker.NET chose to work with mutant schemata. This created a number of challenges.
## Compile errors
Some mutations result in compile errors like the one below.
``` csharp
if (Environment.GetEnvironmentVariable("ActiveMutation") == "1") {
return "hello " - "world"; // mutated code
} else {
return "hello " + "world"; // original code
}
```
We chose to accept the fact that not all mutations can be compiled, so mutators don't have to take compile errors into account. This keeps the mutators as simple as possible.
The framework itself should handle the compile errors.
This is done by rolling back all mutations that result in compile errors. The mutant linked to that piece of code gets the status `builderror`.
`compile` → `remove compile error codes` → `compile 2nd time`
Sometimes not all errors are returned by the compiler on the first try. That's why we repeat this process until we have compiling code. Usually 1-3 retries are needed. With Roslyn's incremental compilation these retries are fast.
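A minimal sketch of such a retry loop using Roslyn is shown below. It is illustrative only, not Stryker.NET's actual implementation, and the `RollBack` helper it mentions is hypothetical:
``` csharp
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class RollbackLoopSketch
{
    static void Main()
    {
        // A mutated source file; the string subtraction is a broken mutation.
        SyntaxTree tree = CSharpSyntaxTree.ParseText(
            "class C { string M() { return \"hello \" - \"world\"; } }");

        for (int attempt = 1; attempt <= 3; attempt++)
        {
            var compilation = CSharpCompilation.Create(
                "MutatedAssembly",
                new[] { tree },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
                new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

            using var ms = new MemoryStream();
            var emitResult = compilation.Emit(ms);
            if (emitResult.Success)
                break;

            // Collect the spans of all compile errors reported on this attempt.
            var errorSpans = emitResult.Diagnostics
                .Where(d => d.Severity == DiagnosticSeverity.Error)
                .Select(d => d.Location.SourceSpan)
                .ToList();

            foreach (var span in errorSpans)
                Console.WriteLine($"Attempt {attempt}: rolling back mutant at {span}");

            // tree = RollBack(tree, errorSpans); // hypothetical helper that would
            // restore the original code and mark those mutants as builderror
        }
    }
}
```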
## Scope
The scope of some variables can change when they are placed inside an if statement. This results in compile errors.
``` csharp
if (Environment.GetEnvironmentVariable("ActiveMutation") == "1") {
int i = 0; // mutated code
} else {
int i = 99; // original code
}
return i;
```
These kinds of errors can't be rolled back because the location of the diagnostic error will be the return statement, while the actual code that causes the error is somewhere else.
This can be solved by using conditional statements instead of if statements.
``` csharp
int i = Environment.GetEnvironmentVariable("ActiveMutation") == "1" ? 0 : 99;
return i;
```
What kind of placement should be used depends on the type of SyntaxNode the mutation is made in. There are rules built into Stryker.NET that determine when to choose an if statement and when to use a conditional statement.
## Constant values
A drawback of mutant schemata is that Stryker.NET cannot mutate constant values.
For example:
``` cs
public enum Numbers
{
One = 1,
Two = (One + 1)
}
```
would be mutated into
``` cs
public enum Numbers
{
One = 1,
Two = (MutantControl.IsActive(0) ? (One - 1) : (One + 1))
}
```
This cannot compile since `MutantControl.IsActive(0)` is not a constant value. That is why we skip mutating constant values.
We are researching ways to overcome this issue but have not yet found a way to do this.
| 33.410256 | 228 | 0.74482 | eng_Latn | 0.994671 |
58d405fd634bf460824b52015d5b8c7c1ea5e1df | 1,422 | md | Markdown | doc/file/ftp.md | baigoStudio/GInkgo | 39c4d5abeffba65831add0781ef9fd124e90e841 | [
"Apache-2.0"
] | null | null | null | doc/file/ftp.md | baigoStudio/GInkgo | 39c4d5abeffba65831add0781ef9fd124e90e841 | [
"Apache-2.0"
] | null | null | null | doc/file/ftp.md | baigoStudio/GInkgo | 39c4d5abeffba65831add0781ef9fd124e90e841 | [
"Apache-2.0"
] | null | null | null | ## FTP
The FTP feature is provided by the `ginkgo\Ftp` class. FTP stands for File Transfer Protocol, a set of standard protocols used to transfer files over a network.
----------
#### Defining the server
The server can be defined in the configuration file:
``` php
'var_extra' => array(
'ftp' => array(
        'host' => '', // server
        'port' => 21, // port
        'user' => '', // username
        'pass' => '', // password
        'path' => '', // remote path
        'pasv' => 'off', // passive mode
),
...
),
```
It can also be defined when the FTP class is instantiated:
``` php
$config = array(
    'host' => '', // server
    'port' => 21, // port
    'user' => '', // username
    'pass' => '', // password
    'path' => '', // remote path
    'pasv' => 'off', // passive mode
);
$ftp = Ftp::instance($config);
```
> Priority: values defined at instantiation override values defined in the configuration file
----------
#### Connecting to the server
As of `0.2.0`, the following operations are no longer required.
The `init()` method is a shortcut that connects to and logs in to the server:
``` php
$ftp->init();
```
The `connect()` method connects to the server:
``` php
$ftp->connect();
```
The `login()` method logs in to the server:
``` php
$ftp->login();
```
----------
#### Basic operations
> By default, all operations use relative paths; the system automatically prepends the remote path defined in the configuration. The $abs parameter of the methods below controls this: true means an absolute path, false means a relative path.
* List files and directories (deprecated as of `0.2.0`)
``` php
$ftp = Ftp::instance();
$lists = $ftp->dirList('./image', $abs);
```
* Create a directory (deprecated as of `0.2.0`)
``` php
$ftp->dirMk('./image', $abs);
```
* Delete a directory (deprecated as of `0.2.0`)
``` php
$ftp->dirDelete('./dir', $abs);
```
* Upload a file
``` php
$ftp->fileUpload($local, $remote, $abs, $mod);
```
1. local: the path on the local server
2. remote: the path on the remote server
3. abs: whether the path is absolute
4. mod: the transfer mode; must be FTP_ASCII (text mode) or FTP_BINARY (binary mode)
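For example (the paths here are illustrative):
``` php
// Upload a local image to the relative remote path ./image in binary mode
$ftp->fileUpload('./upload/logo.png', './image/logo.png', false, FTP_BINARY);
```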
* Delete a file
``` php
$ftp->fileDelete('./src.txt', $abs);
```
| 13.166667 | 84 | 0.490858 | yue_Hant | 0.702675 |
58d43545a7d0f4bb3e0af31663d196e472ebc0a2 | 538 | md | Markdown | windows.media.capture/cameracaptureuivideocapturesettings_format.md | gbaychev/winrt-api | 25346cd51bc9d24c8c4371dc59768e039eaf02f1 | [
"CC-BY-4.0",
"MIT"
] | 199 | 2017-02-09T23:13:51.000Z | 2022-03-28T15:56:12.000Z | windows.media.capture/cameracaptureuivideocapturesettings_format.md | gbaychev/winrt-api | 25346cd51bc9d24c8c4371dc59768e039eaf02f1 | [
"CC-BY-4.0",
"MIT"
] | 2,093 | 2017-02-09T21:52:45.000Z | 2022-03-25T22:23:18.000Z | windows.media.capture/cameracaptureuivideocapturesettings_format.md | gbaychev/winrt-api | 25346cd51bc9d24c8c4371dc59768e039eaf02f1 | [
"CC-BY-4.0",
"MIT"
] | 620 | 2017-02-08T19:19:44.000Z | 2022-03-29T11:38:25.000Z | ---
-api-id: P:Windows.Media.Capture.CameraCaptureUIVideoCaptureSettings.Format
-api-type: winrt property
-api-device-family-note: xbox
---
<!-- Property syntax
public Windows.Media.Capture.CameraCaptureUIVideoFormat Format { get; set; }
-->
# Windows.Media.Capture.CameraCaptureUIVideoCaptureSettings.Format
## -description
Determines the format for storing captured videos.
## -property-value
A value indicating the format for storing captured videos.
## -remarks
## -examples
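A short sketch, illustrative only and assuming it runs inside an async method of a UWP app:
```csharp
var captureUI = new Windows.Media.Capture.CameraCaptureUI();

// Store captured videos in the MP4 format (illustrative choice).
captureUI.VideoSettings.Format = Windows.Media.Capture.CameraCaptureUIVideoFormat.Mp4;

// Launch the capture UI and wait for the recorded file.
Windows.Storage.StorageFile video =
    await captureUI.CaptureFileAsync(Windows.Media.Capture.CameraCaptureUIMode.Video);
```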
## -see-also
## -capabilities
microphone, webcam
| 19.214286 | 77 | 0.765799 | eng_Latn | 0.541743 |
58d4c89fd756f02e413c94ed1d9905b66e50c32e | 991 | md | Markdown | README.md | virsas/tfmod_vpc_subnet | ff0492050e1295beb422712115a337cba7d18aeb | [
"MIT"
] | null | null | null | README.md | virsas/tfmod_vpc_subnet | ff0492050e1295beb422712115a337cba7d18aeb | [
"MIT"
] | null | null | null | README.md | virsas/tfmod_vpc_subnet | ff0492050e1295beb422712115a337cba7d18aeb | [
"MIT"
] | null | null | null | # terraform_vpc_subnet
Terraform module to create VPC subnets in AWS.
## Variables
``` terraform
# name: the name of the VPC subnet
# block: the IP allocation for the subnet. Must be within the VPC block
# zone: Specify in which zone the subnet should be placed.
# public:    Whether to allocate a public IP by default when this subnet is used by EC2
variable "vpc_subnets_example" { default = { name ="example", block = "10.0.0.0/24", zone = "eu-west-1a", public = "false" } }
```
## Dependency
VPC <https://github.com/virsas/terraform_vpc>
## Terraform example
``` terraform
######################
# VPC subnet variables
######################
variable "vpc_subnets_test_a" { default = { name ="test-a", block = "10.0.0.0/24", zone = "eu-west-1a", public = "false" } }
######################
# VPC subnet
######################
module "vpc_subnet_test_a" {
source = "github.com/virsas/terraform_vpc_subnet"
vpc_id = module.vpc_main.id
subnet = var.vpc_subnets_test_a
}
```
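The same pattern repeats for each additional subnet. A hypothetical second subnet in another zone (illustrative values) could look like this:
``` terraform
######################
# VPC subnet variables
######################
variable "vpc_subnets_test_b" { default = { name ="test-b", block = "10.0.1.0/24", zone = "eu-west-1b", public = "false" } }

######################
# VPC subnet
######################
module "vpc_subnet_test_b" {
  source = "github.com/virsas/terraform_vpc_subnet"
  vpc_id = module.vpc_main.id
  subnet = var.vpc_subnets_test_b
}
```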
| 27.527778 | 126 | 0.644803 | eng_Latn | 0.702882 |
58d5656a18fb52028300f86011f53c66316c9435 | 2,108 | md | Markdown | curriculum/challenges/english/01-responsive-web-design/applied-accessibility/jump-straight-to-the-content-using-the-main-element.md | olayis/freeCodeCamp | 26c9b3c16b16671a87cb48913b883545796e866c | [
"BSD-3-Clause"
] | 4 | 2020-01-08T16:53:41.000Z | 2021-12-18T03:48:21.000Z | curriculum/challenges/english/01-responsive-web-design/applied-accessibility/jump-straight-to-the-content-using-the-main-element.md | Berkan-ak/freeCodeCamp | fc17a816cbd76dd23940ee85e5bbd3b5c9261a36 | [
"BSD-3-Clause"
] | 650 | 2020-10-20T03:25:35.000Z | 2022-03-28T01:59:38.000Z | curriculum/challenges/english/01-responsive-web-design/applied-accessibility/jump-straight-to-the-content-using-the-main-element.md | Berkan-ak/freeCodeCamp | fc17a816cbd76dd23940ee85e5bbd3b5c9261a36 | [
"BSD-3-Clause"
] | 2 | 2021-03-31T12:37:32.000Z | 2021-11-14T15:35:56.000Z | ---
id: 587d774e367417b2b2512a9f
title: Jump Straight to the Content Using the main Element
challengeType: 0
videoUrl: 'https://scrimba.com/c/cPp7zuE'
forumTopicId: 301018
---
# --description--
HTML5 introduced a number of new elements that give developers more options while also incorporating accessibility features. These tags include `main`, `header`, `footer`, `nav`, `article`, and `section`, among others.
By default, a browser renders these elements similarly to the humble `div`. However, using them where appropriate gives additional meaning in your markup. The tag name alone can indicate the type of information it contains, which adds semantic meaning to that content. Assistive technologies can access this information to provide better page summary or navigation options to their users.
The `main` element is used to wrap (you guessed it) the main content, and there should be only one per page. It's meant to surround the information that's related to the central topic of your page. It's not meant to include items that repeat across pages, like navigation links or banners.
The `main` tag also has an embedded landmark feature that assistive technology can use to quickly navigate to the main content. If you've ever seen a "Jump to Main Content" link at the top of a page, using a main tag automatically gives assistive devices that functionality.
# --instructions--
Camper Cat has some big ideas for his ninja weapons page. Help him set up his markup by adding opening and closing `main` tags between the `header` and `footer` (covered in other challenges). Keep the `main` tags empty for now.
# --hints--
Your code should have one `main` tag.
```js
assert($('main').length == 1);
```
The `main` tags should be between the closing `header` tag and the opening `footer` tag.
```js
assert(code.match(/<\/header>\s*?<main>\s*?<\/main>/gi));
```
# --seed--
## --seed-contents--
```html
<header>
<h1>Weapons of the Ninja</h1>
</header>
<footer></footer>
```
# --solutions--
```html
<header>
<h1>Weapons of the Ninja</h1>
</header>
<main>
</main>
<footer></footer>
```
| 34 | 388 | 0.739089 | eng_Latn | 0.998396 |
58d647ff9c78a1404cae5b7dcad9e9ef9a8c003a | 492 | md | Markdown | assets/ckeditor/tests/plugins/link/manual/forcedownload.md | mirwansyahs/siabanks | 3ad789291587639fea875fd59297b67cb6a04cfd | [
"MIT"
] | 2 | 2021-04-09T14:59:45.000Z | 2021-04-18T07:27:07.000Z | assets/ckeditor/tests/plugins/link/manual/forcedownload.md | mirwansyahs/siabanks | 3ad789291587639fea875fd59297b67cb6a04cfd | [
"MIT"
] | 5 | 2020-06-21T16:48:10.000Z | 2020-07-20T18:32:12.000Z | assets/ckeditor/tests/plugins/link/manual/forcedownload.md | mirwansyahs/siabanks | 3ad789291587639fea875fd59297b67cb6a04cfd | [
"MIT"
] | 2 | 2020-07-02T19:47:39.000Z | 2020-07-20T18:29:33.000Z | @bender-tags: link, bug, 4.6.0,
@bender-ui: collapsed
@bender-ckeditor-plugins: link,toolbar,wysiwygarea,sourcearea
1. Select a link and open the link dialog. Go to the `Advanced` tab.
1. Check the `Force Download` checkbox and click `OK` to close the dialog.
1. Switch to source mode and check that the `download=""` attribute has been added to the anchor element.
1. Remove the `download=""` attribute, switch back to editing mode, and open the link dialog.
1. Check on the `Advanced` tab that the `Force Download` checkbox is deselected.
| 49.2 | 93 | 0.754065 | eng_Latn | 0.865547 |
58d6500a20f558192e09777f2fac93f04c966418 | 23,922 | md | Markdown | help/implement/js-implementation/function-tl.md | apowersadobe/analytics.en | 9ab2f1720dcb384485e7941ca01bcad3d60e77fd | [
"Apache-2.0"
] | null | null | null | help/implement/js-implementation/function-tl.md | apowersadobe/analytics.en | 9ab2f1720dcb384485e7941ca01bcad3d60e77fd | [
"Apache-2.0"
] | null | null | null | help/implement/js-implementation/function-tl.md | apowersadobe/analytics.en | 9ab2f1720dcb384485e7941ca01bcad3d60e77fd | [
"Apache-2.0"
] | null | null | null | ---
description: File downloads and exit links can be automatically tracked based on parameters set in the AppMeasurement for JavaScript file.
keywords: Analytics Implementation
seo-description: File downloads and exit links can be automatically tracked based on parameters set in the AppMeasurement for JavaScript file.
seo-title: The s.tl() Function - Link Tracking
solution: Analytics
subtopic: Link tracking
title: The s.tl() Function - Link Tracking
topic: Developer and implementation
uuid: f28f071a-8820-4f74-89cd-fd2333a21f22
---
# The s.tl() Function - Link Tracking
File downloads and exit links can be automatically tracked based on parameters set in the [!DNL AppMeasurement] for JavaScript file.
## The s.tl() Function - Link Tracking {#concept_EA13689CB8EE4F308FC89A1293046D5E}
File downloads and exit links can be automatically tracked based on parameters set in the [!DNL AppMeasurement] for JavaScript file.
If needed, these types of links can be manually tracked using custom link code as explained below. In addition, custom link code can be used to track generic custom links that serve a variety of tracking and reporting needs.
## s.tl() Parameter Reference {#section_DDF19EE3ACE24EFAB2D65CD4B0D7DBC4}
**this**
The first argument should always be set either to this (default) or true. The argument refers to the object being clicked; when set to "this," it refers to the HREF property of the link.
If you are implementing link tracking on an object that has no HREF property, you should always set this argument to "this."
Because clicking a link often takes a visitor off the current page, a 500ms delay is used to ensure that an image request is sent to Adobe before the user leaves the page. This delay is only necessary when leaving the page, but is typically present when the s.tl() function is called. If you want to disable the delay, pass the keyword 'true' as the first parameter when calling the s.tl() function.
**linkType**
`s.tl(this,linkType,linkName, variableOverrides, doneAction)`
linkType has three possible values, depending on the type of link that you want to capture. If the link is not a download or an exit link, you should choose the Custom links option.
| Type | linkType value |
|--- |--- |
| File Downloads | 'd' |
| Exit Links | 'e' |
| Custom Links | 'o' |
**linkName**
This can be any custom value, up to 100 characters. This determines how the link is displayed in the appropriate report.
**variableOverrides**
(Optional, pass null if not using.) This lets you change variable values for this single call. It is similar to other [!DNL AppMeasurement] libraries.
**useForcedLinkTracking**
This flag is used to disable forced link tracking for some browsers. Forced link tracking is enabled by default for FireFox 20+ and WebKit browsers.
Default Value = true
Example: `s.useForcedLinkTracking = false`
**forcedLinkTrackingTimeout**
The maximum number of milliseconds to wait for tracking to finish before performing the doneAction that was passed into `s.tl`. This value specifies the maximum wait time. If the track link call completes before this timeout, the doneAction is executed immediately. If you notice that track link calls are not completing, you might need to increase this timeout.
Default Value = 250
Example: `s.forcedLinkTrackingTimeout = 500`
**doneAction**
An optional parameter to specify a navigation action to execute after the track link call completes when useForcedLinkTracking is enabled.
Syntax:
`s.tl(linkObject,linkType,linkName,variableOverrides,doneAction)`
doneAction: (optional) Specifies the action to take after the link track call is sent or has timed out, based on the value specified by:
`s.forcedLinkTrackingTimeout`
The doneAction variable can be the string `navigate`, which causes the method to set `document.location` to the href attribute of linkObject. The doneAction variable can also be a function, allowing for advanced customization.
If providing a value for doneAction in an anchor onClick event, you must return false after the `s.tl` call to prevent the default browser navigation.
To mirror the default behavior and follow the URL specified by the href attribute, provide the string `navigate` as the doneAction.
Optionally, you can provide your own function to handle the navigation event by passing this function as the doneAction.
Examples:
```js
<a href="..." onclick="s.tl(this,'o','MyLink',null,'navigate');return false">Click Here</a>
<a href="#" onclick="s.tl(this,'o','MyLink',null,function(){if(confirm('Proceed?'))document.location=...});return false">Click Here</a>
```
**Example**
The following example of an [!UICONTROL s.tl()] function call uses the default 500 ms delay to ensure data is collected before leaving the page.
```js
s.tl(this,'o','link name');
```
The following example disables the 500 ms delay. Use this form when the user is not going to leave the page, or whenever the object being clicked has no HREF.
```js
s.tl(true,'o','link name');
```
The 500 ms delay is a maximum delay. If the requested image returns in less than 500 ms, the delay stops immediately. This allows the visitor to move on to the next page or next action within the page.
The following examples are for handling custom links on WebKit browsers:
```js
<a href="..." onclick="s.tl(this,'o','MyLink',null,'navigate');return false">Click Here</a>
```
```js
<a href="#" onclick="s.tl(this,'o','MyLink',null,
function(){if(confirm('Proceed?'))document.location=...});return false">Click Here</a>
```
>[!NOTE]
>
>Uses of custom link code are often very specific to your Web site and reporting needs. You can contact your Adobe Consultant or Customer Care before implementing custom link code to understand the possibilities available to you and how best to leverage this feature based on your business needs.
The basic code to track a link using custom link code is shown in the following example:
```js
<a href="index.html" onClick="s.tl(this,'o','Link Name')">My Page</a>
```
>[!NOTE]
>
>The [!UICONTROL s_gi] function must contain your report suite ID as a function parameter. Be sure to swap out [!DNL rsid] for your unique report suite ID.
>[!NOTE]
>
>If the link name parameter is not defined, the URL of the link (determined from the "this" object) is used as the link name.
[!DNL Analytics] variables can be defined as part of custom link code.
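For example, a hypothetical sketch that sets variables as part of the link code (the variable names and values are illustrative; *`linkTrackVars`* and *`linkTrackEvents`* are described in a later section):
```js
<a href="/whitepaper.pdf"
   onClick="s.linkTrackVars='prop1,events';
            s.linkTrackEvents='event1';
            s.prop1='whitepaper';
            s.events='event1';
            s.tl(this,'d','Whitepaper Download');">Download the whitepaper</a>
```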
## Automatic Tracking of Exit Links and File Downloads {#concept_DF078C5C1B004695B3B7C4536C99D4B2}
The JavaScript file can be configured to automatically track file downloads and exit links based on parameters that define file download file types and exit links.
The parameters that control automatic tracking are as follows:
```js
s.trackDownloadLinks=true
s.trackExternalLinks=true
s.linkDownloadFileTypes="exe,zip,wav,mp3,mov,mpg,avi,doc,pdf,xls"
s.linkInternalFilters="javascript:,mysite.com,[more filters here]"
s.linkLeaveQueryString=false
```
The parameters *`trackDownloadLinks`* and *`trackExternalLinks`* determine whether automatic file download and exit link tracking are enabled. When enabled, any link with a file type matching one of the values in *`linkDownloadFileTypes`* is automatically tracked as a file download. Any link with a URL that does not contain one of the values in *`linkInternalFilters`* is automatically tracked as an exit link.
In JavaScript H.25.4 (released February 2013), automatic exit link tracking was updated to always ignore links with `HREF` attributes that start with `#`, `about:`, or `javascript:`.
## Example 1 {#section_504D163608E14B25A8B4CA9D615C6735}
The file types [!DNL jpg] and [!DNL aspx] are not included in *`linkDownloadFileTypes`* above; therefore, no clicks on them are automatically tracked and reported as file downloads.
The parameter *`linkLeaveQueryString`* modifies the logic used to determine exit links. When *`linkLeaveQueryString`*=false, exit links are determined using only the domain, path, and file portion of the link URL. When *`linkLeaveQueryString`*=true, the query string portion of the link URL is also used to determine an exit link.
## Example 2 {#section_25660B64E28248A0BC982B2AF5603C0E}
With the following settings, the example below will be counted as an exit link:
```js
//JS file
s.linkInternalFilters="javascript:,mysite.com"
s.linkLeaveQueryString=false
//HTML file
<a href='https://othersite.com/index.html?r=mysite.com'>Visit Other Site!</a>
```
## Example 3 {#section_2A78D05162D640169844A7D1E9799BAA}
With the following settings, the link below is not counted as an exit link:
```js
//JS file
s.linkInternalFilters="javascript:,mysite.com"
s.linkLeaveQueryString=true
//HTML
<a href='https://othersite.com/index.html?r=mysite.com'>Visit Other Site</a>
```
>[!NOTE]
>
>A single link can be tracked only as a file download or exit link, with file download taking priority. If a link is both an exit link and a file download based on the parameters *`linkDownloadFileTypes`* and *`linkInternalFilters`*, it is tracked and reported as a file download and not an exit link. The following table summarizes the automatic tracking of file downloads and exit links.
>
>| Link matches *`linkDownloadFileTypes`* | Link matches *`linkInternalFilters`* (internal) | Automatically tracked as |
>|--- |--- |--- |
>| Yes | Yes | File download |
>| Yes | No | File download |
>| No | No | Exit link |
>| No | Yes | Not tracked automatically |
## Manual Link Tracking Using Custom Link Code {#concept_7113B5D037BE4622B6934554C6D18F96}
Custom link code lets file downloads, exit links, and custom links be tracked in situations where automatic tracking is not sufficient or applicable.
Custom link code is typically implemented by adding an [!UICONTROL onClick] event handler to a link or by adding code to an existing routine. It can be implemented from essentially any JavaScript event handler or function.
Link Tracking consists of calling the [!DNL AppMeasurement] for JavaScript function whenever the user performs actions that generate data you want to capture. This function, [!UICONTROL s.tl()], is either called directly in an event handler, such as [!UICONTROL onClick] or [!UICONTROL onChange], or from within a separate function. This [!UICONTROL s.tl()] function has five arguments. The first three are required:
```js
s.tl(this,linkType,linkName, variableOverrides, doneAction)
```
## Custom Link Tracking on FireFox and WebKit Browsers {#section_F2B9A2A3CC1F4BB9A64456BC39FC50B9}
JavaScript H.25 includes an update to ensure that link tracking completes successfully on WebKit browsers (Safari and Chrome). JavaScript H.26 includes an update to ensure that link tracking completes successfully on FireFox 20+.
After this update, download and exit links that are automatically tracked (determined by [!DNL s.trackDownloadLinks]and [!DNL s.trackExternalLinks]) are tracked successfully. If you are tracking custom links using manual JavaScript calls, you need to modify how these calls are made. For example, exit and download links are often tracked using code similar to the following:
```js
<a href="https://anothersite.com" onclick="s.tl(this,'e','AnotherSite',null)">
```
Internet Explorer executes the track link call and open the new page. Other browsers might cancel execution of the track link call when the new page opens. This often prevents track link calls from completing.
To work around this behavior, H.25 (released July 2012) includes an overloaded track link method ( [!DNL s.tl]) that forces browsers with this behavior to wait for the track link call to complete. This new method executes the track link call and handles the navigation event, instead of using the default browser action. This overloaded method requires an additional parameter, called [!UICONTROL doneAction], to specify the action to take when the link tracking call completes.
To use this new method, update calls to [!DNL s.tl] with an additional [!UICONTROL doneAction] parameter, similar to the following:
```js
<a href="https://anothersite.com"
onclick="s.tl(this,'e','AnotherSite',null,'navigate');return false">
```
Passing navigate as the [!UICONTROL doneAction] mirrors the default browser behavior and opens the URL specified by the href attribute when the tracking call completes.
In JavaScript H.25.4 (released February 2013), the following scope limitations were added to links tracked when `useForcedLinkTracking` is enabled (see the example after this list). The automatic forced link tracking applies only to:
* `<A>` and `<AREA>` tags.
* The tag must have an `HREF` attribute.
* The `HREF` can't start with `#`, `about:`, or `javascript:`.
* The `TARGET` attribute must not be set, or the `TARGET` needs to refer to the current window ( `_self`, `_top`, or the value of `window.name`).
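For example, under these rules the first link below is eligible for automatic forced link tracking, while the second is not (illustrative markup only; the URL is a placeholder):
```
<!-- Eligible: an <A> tag with an HREF and no TARGET -->
<a href="https://example.com/report.pdf">Download the report</a>
<!-- Not eligible: the TARGET attribute opens a new window -->
<a href="https://example.com/report.pdf" target="_blank">Download the report</a>
```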
## Link Tracking Using an Image Request {#concept_FF31C8D1B3DF483D853BF0A9D637F02F}
Links can be tracked without calling the s.tl() function by constructing an image request.
<!--
link_img.xml
-->
Image requests are hard-coded by adding the `pe` parameter to your image request's `src`, as follows:
```
pe=[type]
```
Where `[type]` is replaced with the type of link you want to track:
* lnk_d = download
* lnk_e = exit
* lnk_o = custom
Additionally, a link URL can be specified by passing the URL in the pev1 parameter:
```
pev1=mylink.com
```
A link name can be specified by passing the name in the pev2 parameter:
```
pev2=My%20Link
```
For example:
```
<img src="https://collectiondomain.112.2o7.net/b/ss/reportsuite/1/H.25.3--NS/0?pe=lnk_e&pev1=othersite.com&pev2=Offsite%20Link" width="1" height="1" border="0" />
```
## Setting Additional Variables for File Downloads, Exit Links, and Custom Links {#concept_8DD06387D5234A52A6E361572FAA2DF6}
Two parameters (*`linkTrackVars`* and *`linkTrackEvents`*) control which [!DNL Analytics] variables are set for file downloads, exit links, and custom links.
<!--
link_variables.xml
-->
They are, by default, set within the JS file as follows:
```js
s.linkTrackVars="None"
```
```js
s.linkTrackEvents="None"
```
The *`linkTrackVars`* parameter should include each variable that you want to track with every file download, exit link, and custom link. The *`linkTrackEvents`* parameter should include each event you want to track with every file download, exit link, and custom link. When one of these link types occur, the current value of each variable identified is tracked.
For example, to track prop1, eVar1, and event1 with every file download, exit link, and custom link, use the following settings within the global JS file:
```js
s.linkTrackVars="prop1,eVar1,events"
```
```js
s.linkTrackEvents="event1"
```
>[!NOTE]
>
>The variable *`pageName`* cannot be set for a file download, exit link, or custom link, because each of the link types is not a page view and does not have an associated page name.
>[!NOTE]
>
>If *`linkTrackVars`* (or *`linkTrackEvents`*) is null (or an empty string), all [!DNL Analytics] variables (or events) that are defined for the current page are tracked. This most likely inflates instances of each variable inadvertently and should be avoided.
## Best Practices {#section_DA3CA596792E4BD6B5FFE89BCE0E617D}
The settings for *`linkTrackVars`* and *`linkTrackEvents`* within the JS file affect every file download, exit link, and custom link. Instances of each variable and event can be inflated in situations where the variable (or event) applies to the current page, but not the specific file download, exit link, or custom link.
To ensure that the proper variables are set with custom link code, Adobe recommends setting *`linkTrackVars`* and *`linkTrackEvents`* within the custom link code, as follows:
```js
<a href="index.html" onClick="
var s=s_gi('rsid');
s.linkTrackVars='prop1,prop2,eVar1,eVar2,events';
s.linkTrackEvents='event1';
s.prop1='Custom Property of Link';
s.events='event1';
s.tl(this,'o','Link Name');
">My Page
```
The values of *`linkTrackVars`* and *`linkTrackEvents`* override the settings in the JS file and ensure only the variables and events specified in the custom link code are set for the specific link.
>[!NOTE]
>
>In the above example, the value for prop1 is set within the custom link code itself. The value of prop2 comes from the current value of the variable as set on the page.
## Using Function Calls with Custom Link Code {#concept_DB662C93B3ED415DB72C80270502BE5D}
Due to the complex nature of custom link code, you can consolidate the code into a self-contained JavaScript function (defined on the page or in a linked JavaScript file) and make calls to the function within the [!UICONTROL onClick] handler.
<!--
link_functions.xml
-->
For example, you could insert the following two functions in your `AppMeasurement.js` file, just below the `s_doPlugins()` function, and then use them throughout your site:
```js
/* Set Click Interaction values (with timeout) - H25 code and higher */
function trackClickInteraction(name){
var s=s_gi('rsid');
s.linkTrackVars='prop42,prop35';
s.prop42=name;
s.prop35=s.pageName;
s.tl(true,'o','track interaction',null,'navigate');
}
```
```js
/* Set Click Interaction values (without timeout) - pre H25 code */
function trackClickInteraction(name){
var s=s_gi('rsid');
s.linkTrackVars='prop42,prop35';
s.prop42=name;
s.prop35=s.pageName;
s.tl(true,'o','track interaction');
}
```
>[!NOTE]
>
>If needed, you can pass the link type and link name as additional parameters for the JavaScript function.
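For example, a generalized variant that accepts the link type and link name as parameters might look like the following (an illustrative sketch; the default values are assumptions, and `rsid` stands in for your report suite ID):
```js
/* Generalized click-interaction tracking: link type and name are parameters */
function trackClickInteraction(name, linkType, linkName) {
    var s = s_gi('rsid');
    s.linkTrackVars = 'prop42,prop35';
    s.prop42 = name;
    s.prop35 = s.pageName;
    /* Default to a custom ('o') link when no type is given */
    s.tl(true, linkType || 'o', linkName || 'track interaction', null, 'navigate');
}
```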
You can use code similar to the following to call these functions:
```
<a href="https://www.your-site.com/some_page.php" onclick="trackClickInteraction(this.href);">Link Text</a>
```
## Avoiding Duplicate Link Counts {#section_9C3F73DE758F4727943439DED110543C}
It is possible for the link to be double-counted in situations where the link is normally captured by automatic file download or exit link tracking.
For example, if you are tracking PDF downloads automatically, the following code results in a duplicate download count:
```js
function trackDownload(obj) {
var s=s_gi('rsid');
s.linkTrackVars='None';
s.linkTrackEvents='None';
s.tl(obj,'d','PDF Document');
}
```
To ensure link double counting does not occur, use the following modified JavaScript function:
```js
function linkCode(obj) {
  var s=s_gi('rsid');
  s.linkTrackVars='None';
  s.linkTrackEvents='None';
  var lt=obj.href!=null?s.lt(obj.href):"";
  if (lt=="") { s.tl(obj,'d','PDF Document'); }
}
}
```
The last two lines of the code above check whether the link would already be tracked automatically (`s.lt` returns the detected link type) and make the manual track link call only when it would not be, eliminating any possible double counting.
## Popup Windows with useForcedLinkTracking {#concept_0AC4BA3A64B84CCB8D9A6021A022D1C3}
When `useForcedLinkTracking` is enabled, [!DNL AppMeasurement] overrides the default link behavior on some browsers to prevent the track link call from being canceled when the new page opens. [!DNL AppMeasurement] executes the track link call and handles the navigation event manually, instead of using the default browser action.
<!--
link_popups.xml
-->
When tracking some links that open a popup window, the Chrome built-in popup blocker might prevent [!DNL AppMeasurement] from opening a popup window that would normally be allowed. To avoid this, add a `target="_blank"` attribute to calls that open a window:
```
<a href="./popup.html" target="_blank" onClick="<a href="./popup.html" onclick="s.tl(this,'o','popup',null,'navigate');javascript:window.open('./popup.html','popup','width=550,height=450');
return false;">
```
This allows the link to be tracked and the popup to load as expected.
## Links from URL Shorteners {#concept_CD792362A8E04448B452BE9A18772024}
Links from URL shortener services (such as bit.ly) are typically not tracked as page views or as referrers. These services return 301/302 redirects with the full URL to the web browser, which then sends a new, separate request to the full URL. The original referrer is preserved since the shortener is no longer in the loop, and there isn't an indication on the request that a redirect service was used to get the URL.
<!--
link_shortener.xml
-->
Due to the variety of services available, we recommend testing the specific scenarios in use on your site to determine the redirect mechanism used by the service.
## Link Tracking Variables in doPlugins {#concept_B5AC1D4372AA4A7D8C7871453A4F1215}
To help manage link tracking, the following variables are populated before the `doPlugins` function runs.
<!--
util_linkhandler.xml
-->
Inside of `doPlugins`, you can use the following values to modify link tracking behavior. For example, you can abort tracking, or add additional variables to tracking requests.
<table id="table_55CCF4F2BF474FD3B703C926B896B392">
<thead>
<tr>
<th colname="col1" class="entry"> Variable </th>
<th colname="col2" class="entry"> Description </th>
<th colname="col3" class="entry"> Impact </th>
</tr>
</thead>
<tbody>
<tr>
<td colname="col1"> linkType </td>
<td colname="col2"> <p>Contains the automatically determined link type, if any. Can be set to one of the following: </p>
<ul id="ul_81ACB5D00D774E86AFD22C61AD4D0E2C">
<li id="li_52B6F2B124024DEFB422D1E9E97254C0">d (download) </li>
<li id="li_E842C2E64F034181A364C639C30451FD">e (exit) </li>
<li id="li_3263F378CE65407E81B6C5C597CED1E8">o (custom/other) </li>
</ul> <p>This is the <code> pe </code> parameter in the image request. </p> </td>
<td colname="col3"> <p>If set with <code> linkURL </code> or <code> linkName </code>, a server call is sent as a download, custom, or exit link. </p> </td>
</tr>
<tr>
<td colname="col1"> linkName </td>
<td colname="col2"> <p>The name that will appear in the custom, download or exit link report. Truncated at 100 characters. Can be set to any string. </p> <p>This is the <code> pev2 </code> parameter in the image request. </p> </td>
<td colname="col3"> <p> If set with <code> linkType </code> , an image request will be sent as a download, custom or exit link </p> </td>
</tr>
<tr>
<td colname="col1"> linkURL </td>
<td colname="col2"> <p>The URL of the link, which acts as the name if a linkName does not exist. Can be set to any URL string. </p> <p>This is the <code> pev1 </code> parameter in the image request. </p> </td>
<td colname="col3"> <p>If set with <code> linkType </code>, an image request will be sent as a download, custom or exit link </p> </td>
</tr>
<tr>
<td colname="col1"> linkObject </td>
<td colname="col2"> <p>The clicked object for reference. This is read-only. </p> </td>
<td colname="col3"> <p>No direct impact on measurement. </p> </td>
</tr>
</tbody>
</table>
**Example**
```js
function s_doPlugins(s) {
if (s.linkType == "d" && s.linkURL.indexOf(".aspx?f=") != -1) {
//special tracking for .aspx file download script
s.eVar11 = s.linkURL.substring(s.linkURL.lastIndexOf("?f=") + 3, s.linkURL.length);
}
else if (s.linkType == "o" ) {
// note: linkType is set to "o" only if you make a custom call
// to s.tl() and set the link type to "o". Automatically tracked
// links are set to "d" or "e" only.
s.eVar10 = s.linkURL;
}
}
```
## Validating File Downloads, Exit Links, and Custom Links {#concept_0B43AD582D3E470899FCCB58E44D3D49}
To fully validate download, exit, and custom links, Adobe recommends using a packet analyzer to examine the links in real-time.
<!--
downloads_validate.xml
-->
File downloads, exit links, and custom links are not page views, so the [!UICONTROL DigitalPulse Debugger] tool cannot be used to verify parameters and variable settings. You must use a [Packet Analyzer](../../implement/impl-testing/packet-monitor.md#concept_490DF35E06D44234A91B5FC57C0BF258) to view track link data.
| 43.974265 | 478 | 0.748809 | eng_Latn | 0.986546 |
58d6c6a4d3395cb14edfa1c2d47ce0d839025185 | 987 | md | Markdown | README.md | zeinwedding/galihhanni | ec3f0820db5425a21613439578c7cef06932fca6 | [
"MIT"
] | null | null | null | README.md | zeinwedding/galihhanni | ec3f0820db5425a21613439578c7cef06932fca6 | [
"MIT"
] | null | null | null | README.md | zeinwedding/galihhanni | ec3f0820db5425a21613439578c7cef06932fca6 | [
"MIT"
] | null | null | null | # Wedding Landing Page - Daeng Sherly Menikah

# Section/Feature
- Main Info
- Countdown to D-Day
- Time and Place Info
- Add to Calendar Button (Google Calendar)
- Map Direction Button (Google Map)
- Send Message Button (Whatssapp API)
# Stack
- Netlify (https://netlify.com)
[](https://app.netlify.com/sites/sherly-daeng-menikah/deploys)
- Bulma CSS (https://bulma.io)
# Info
- Live version check at: https://sherly.dae.ng/
- Or check demo at: https://sherly-daeng-menikah.netlify.app/
- Check my web at: https://daengdoang.com :)
# Fonts
- Rouge Script (Google Font)
- Raleway (Google Font)
# Credits
- Floral vector created by BiZkettE1 - www.freepik.com (https://www.freepik.com/free-photos-vectors/background)

# Website-Undangan-Pernikahan-Hanni---Galih
| 29.029412 | 199 | 0.702128 | kor_Hang | 0.31345 |
58d79c718a94dc571a97c42ac03492505ec41132 | 4,979 | md | Markdown | README.md | folieadrien/grounds.io | d44d43d756b6cc8b13c2a72c11070d313e5ad9ed | [
"MIT"
] | 1 | 2015-02-02T12:57:23.000Z | 2015-02-02T12:57:23.000Z | README.md | folieadrien/grounds.io | d44d43d756b6cc8b13c2a72c11070d313e5ad9ed | [
"MIT"
] | null | null | null | README.md | folieadrien/grounds.io | d44d43d756b6cc8b13c2a72c11070d313e5ad9ed | [
"MIT"
] | null | null | null | # grounds.io
[](https://circleci.com/gh/grounds/grounds.io/tree/master)
[](https://codeclimate.com/github/grounds/grounds.io)
[](https://codeclimate.com/github/grounds/grounds.io)
This project is the web application behind [Grounds](http://beta.42grounds.io).
Grounds is a 100% open source developer tool built to provide a way to share
runnable snippets within various languages from a web browser.
Grounds uses a [socket.io](http://socket.io/) server, called grounds-exec, to execute
arbitrary code inside Docker containers. grounds-exec has its own
repository [here](https://github.com/grounds/grounds-exec).
All you need is [Docker 1.3+](https://docker.com/),
[Docker Compose 1.1+](http://docs.docker.com/compose/)
and [make](http://www.gnu.org/software/make/) to run this project inside Docker
containers with the same environment as in production.
## Languages
Grounds currently supports latest version of:
* C
* C++
* C#
* Elixir
* Go
* Haxe
* Java
* Node.js
* PHP
* Python 2 and 3
* Ruby
* Rust
Check out this [documentation](/docs/NEW_LANGUAGE.md) to get more information
about how to add support for a new language stack.
## Prerequisite
Grounds is a [Ruby on Rails](http://rubyonrails.org/) web application.
Grounds is using the latest version of grounds-exec and will automatically
pull the latest Docker image.
Grounds requires a [Redis](http://redis.io/) instance and will automatically
spawn a Docker container with a new Redis instance inside.
### Clone this project
git clone https://github.com/grounds/grounds.io.git
### Get into this project directory
cd grounds.io
### Pull language stack Docker images
make pull
If you want to pull these images from your own repository:
REPOSITORY="<you repository>" make pull
>Pulling all language stack images can take a long time and a lot of space.
However, only the ruby image is mandatory when running the test suite.
Pull a specific language stack image:
docker pull grounds/exec-ruby
Checkout all available images on the official
[repository](https://registry.hub.docker.com/repos/grounds/).
### Set Docker remote API url
You need to specify a Docker remote API url to connect with.
export DOCKER_URL="https://127.0.0.1:2375"
If you are using the Docker API over `https`, your `DOCKER_CERT_PATH` will be
mounted as a volume inside the container.
>Be careful: boot2docker enforces tls verification since version 1.3.
## Launch the web application
make run
You can also run the web application in the background:
make detach
Or:
make
The web app should now be listening on port 3000 on your docker daemon (if you
are using boot2docker, `boot2docker ip` will tell you its address).
You can also run Grounds in production mode:
RAILS_ENV=production make run
>When running in production mode, a default secret key is set as a convenience,
but this should be changed in production by specifying `SECRET_KEY_BASE`.
If you want [New Relic](http://newrelic.com/) metrics you can also specify:
* `NEWRELIC_LICENSE_KEY`
* `NEWRELIC_APP_NAME`
>New Relic metrics are available only when running in production mode.
If you want [Piwik](http://piwik.org/) web analytics you can also specify:
* `PIWIK_URL`
>Piwik web analytics are available only when running in production mode.
## Get a shell in a preconfigured environment
For ease of debugging, you can open a preconfigured environment
inside a container with all of the services required to work on the project:
make shell
You can then launch common tasks like:
rake run
rake test
rails console
bundle install
bundle update
## Install / Update ruby gems
Open a shell inside a container:
make shell
To install a new gem:
1. Edit `Gemfile`
2. Run bundle install
bundle install
To update existing gems:
bundle update
Both commands update `Gemfile.lock`; the next time Docker rebuilds
the image, it will use this configuration to install the gems inside the
image.
>Be careful: if you update the `Gemfile` first, trying to open a shell
will fail, because Docker will try to rebuild the image with an outdated
`Gemfile.lock`.
## Tests
Tests will also run inside Docker containers with the same environment
as the CI server.
To run the test suite:
make test
To run specific test files or add a flag for [RSpec](http://rspec.info/) you can
specify `TEST_OPTS`:
TEST_OPTS="spec/models/ground_spec.rb" make test
## Contributing
Before sending a pull request, please checkout the contributing
[guidelines](/docs/CONTRIBUTING.md).
## Authors
See [authors](/docs/AUTHORS.md) file.
## Licensing
grounds.io is licensed under the MIT License. See [LICENSE](LICENSE) for full
license text.
| 25.664948 | 140 | 0.752561 | eng_Latn | 0.974297 |
58d7b0fbc265974e2e7d5e5b12203dfe50920482 | 701 | md | Markdown | DEVELOPMENT.md | nasum/yadockeri | 0fdc7bc9f7c5c6c33b6b21edd7dc36fdfcd4e1ff | [
"MIT"
] | null | null | null | DEVELOPMENT.md | nasum/yadockeri | 0fdc7bc9f7c5c6c33b6b21edd7dc36fdfcd4e1ff | [
"MIT"
] | null | null | null | DEVELOPMENT.md | nasum/yadockeri | 0fdc7bc9f7c5c6c33b6b21edd7dc36fdfcd4e1ff | [
"MIT"
] | null | null | null | ## Development
A development guide for Yadockeri on your local machine.
### Server Side
First, you have to register a GitHub OAuth application.
Then export the following environment variables:
```bash
export GITHUB_CLIENT_ID=hoge
export GITHUB_CLIENT_SECRET=fuga
export SESSION_SECRET=your_secret
export ALLOW_GITHUB_ORG=your_github_org_name
```
Then run the server:
```cmd
$ docker-compose run --rm --service-ports app sh
/go/src/github.com/h3poteto/yadockeri # goose up
...
/go/src/github.com/h3poteto/yadockeri # glide install -v
/go/src/github.com/h3poteto/yadockeri # go run main.go
```
### Frontend
```cmd
$ docker-compose run --rm frontend
```
After that, you can access `http://localhost:9090`.
| 20.617647 | 56 | 0.758916 | eng_Latn | 0.530297 |
58d7eb77e1eca958d4c5dae6805c2ce7913caf35 | 3,678 | md | Markdown | _posts/2019-09-23-post-19.md | bugkingK/bugkingK.github.io | 2381a555a962879e8951f0bbcd9a8cc5f5615751 | [
"MIT"
] | null | null | null | _posts/2019-09-23-post-19.md | bugkingK/bugkingK.github.io | 2381a555a962879e8951f0bbcd9a8cc5f5615751 | [
"MIT"
] | null | null | null | _posts/2019-09-23-post-19.md | bugkingK/bugkingK.github.io | 2381a555a962879e8951f0bbcd9a8cc5f5615751 | [
"MIT"
] | null | null | null | ---
layout: post
title: Secure Coding)Buffer Overflow
subtitle: 시큐어 코딩
tags: [Secure Coding]
comments: true
---
## What is a buffer overflow?
Whenever a program asks for input (from a user, a file, the network, or some other source), there is a chance it will receive improper data. If the input data is larger than the space reserved for it and is not truncated, it will overwrite other data. This condition is called a buffer overflow.
## Characteristics of buffer overflows
A buffer overflow can crash an application or corrupt its data. It can also provide an attack vector for privilege escalation, allowing damage to the system the application runs on.
Every application or piece of system software stores input from users, files, or the network, at least temporarily. Except in special cases, most application memory is stored in one of two places, so there are two corresponding attacks: stack overflows and heap overflows.
- stack: the part of the application's address space that stores data specific to a single call of a function, method, block, or other equivalent structure
- heap: general-purpose application storage; data stored on the heap stays available while the application runs (or until the application explicitly tells the operating system it no longer needs it). Class instances, data allocated with malloc, Core Foundation objects, and other application data live on the heap. (Note, however, that local variables pointing to the actual data are stored on the stack.)
## What is a buffer underflow?
A buffer underflow occurs when the incoming data turns out to be shorter than the reserved space (because of a wrong assumption, a wrong length value, or copying raw data such as a C string). It can cause anything from malfunctions to leaks of whatever data currently sits on the stack or heap.
### Solution 1. String Handling
{: .center-block :}
Because many string-handling functions do not check string lengths, strings are a frequent source of exploitable buffer overflows. The figure above shows how three string-copy functions handle the same over-length string in different ways.
- The strcpy function simply writes the entire string into memory, overwriting whatever comes after it.
- The strncpy function truncates the string to the exact length, but without a terminating null character. When the string is read back, every byte in memory up to the next null character may be read as part of the string. To use strncpy safely, you must explicitly set the last byte of the buffer to zero after the strncpy call, or zero the buffer beforehand and pass a maximum length one byte smaller than the buffer size (see the sketch after this list).
- Only the strlcpy function is completely safe, because it truncates the string to one byte smaller than the buffer size and appends a terminating null character.
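A minimal sketch in C of the safe patterns above (the buffer size is arbitrary; note that strlcpy is a BSD/macOS extension rather than standard C):
```c
#include <string.h>

void copy_name(const char *src) {
    char buf[64];

    /* strncpy does not null-terminate on truncation:
       copy at most sizeof(buf) - 1 bytes, then terminate explicitly. */
    strncpy(buf, src, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    /* strlcpy truncates to sizeof(buf) - 1 bytes and always
       null-terminates, so no manual fix-up would be needed:
       strlcpy(buf, src, sizeof(buf)); */
}
```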
There are many other functions that require similar care, too many to cover them all here. In Objective-C, the NSString and CFString APIs solve the problems above, so unless you have a special reason not to, use Core Foundation strings.
### Solution 2. Calculating Buffer Sizes
When working with fixed-length buffers, always use sizeof to calculate the size of the buffer, and make sure you never put more data into the buffer than it can hold. Even if the buffer was originally allocated with a static size, either you or someone who maintains your code later may change the buffer size but fail to update every place that writes into it.
{: .center-block, width=30%}
### Solution 3. Avoiding Integer Overflows and Underflows
When calculating the size of the data placed in a buffer and the size of the buffer itself, always use unsigned variables such as size_t. Because negative numbers are stored as large integers, an attacker who can feed the program a large number may be able to cause an incorrect calculation of the data or buffer size if you use signed variables.
### Solution 4. Detecting Buffer Overflows
To test for buffer overflows, feed the program more data than it is meant to accept. Also, if the program accepts structured data such as graphics or audio, try passing it malformed data. This process is known as fuzzing.
### Solution 5. Avoiding Buffer Underflows
Fundamentally, a buffer underflow occurs when two parts of the code disagree about the size of a buffer or of the data in it. For example, a fixed-length C string variable might have 256 bytes of space but hold only a 12-byte string.
If you follow these rules, you will avoid most underflow attacks:
- Zero-fill all buffers before use. A buffer that contains only zeros cannot hold stale sensitive information.
- Always check return values and handle failures appropriately.
- If a call to an allocation or initialization function fails (for example, AuthorizationCopyRights), do not evaluate the resulting data, because it may be stale.
- Use the value returned by the read system call and other similar calls to determine how much data was actually read.
- Use that result to determine how much data is present instead of relying on a predefined constant, or fail if the function does not return the expected amount of data.
- If write, printf, or another output call returns without writing all of the data, display an error and fail, especially if the data may be read back later.
- When working with data structures that contain length information, always validate that the data is the expected size.
- Where possible, avoid converting non-C strings (CFStringRef objects, NSString objects, Pascal strings, and so on) to C strings. Instead, work with the strings in their original form.
  If this is impossible, always perform length checks on the C string or check for null bytes in the source data.
- Avoid mixing buffer operations and string operations. If this is impossible, always perform length checks on the C string or check for null bytes in the source data.
- Store files in a way that prevents malicious tampering or truncation.
- Avoid integer overflows and underflows.
| 58.380952 | 244 | 0.719413 | kor_Hang | 1.00001 |
58d840f1bdc026c9b9a6d76624b1c5c4c8e171ab | 664 | md | Markdown | _posts/blog/2015-10-03-corporate-buddhism.md | fdurant/fdurant_github_io_source | cb677ec83a7a9bc3fc401f4333c94655f5a96953 | [
"MIT"
] | null | null | null | _posts/blog/2015-10-03-corporate-buddhism.md | fdurant/fdurant_github_io_source | cb677ec83a7a9bc3fc401f4333c94655f5a96953 | [
"MIT"
] | 1 | 2021-01-10T16:50:40.000Z | 2021-01-10T16:50:40.000Z | _posts/blog/2015-10-03-corporate-buddhism.md | fdurant/fdurant_github_io_source | cb677ec83a7a9bc3fc401f4333c94655f5a96953 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Corporate buddhism"
modified:
categories: blog
excerpt:
tags: ["poetry", "haiku", "business"]
date: 2015-10-03T12:01:00-00:00
share: true
---
# Conference call
> Let's all go on mute<br/>
> One hour full of silence<br/>
> Now that's a brainstorm
# Work
> There's no I in 'team'<br/>
> They say - and no U either<br/>
> Then why are we here?
# Project planning
> So many milestones<br/>
> Along the critical path<br/>
> To stumble upon
# Acceleration
> Faster and faster<br/>
> Time reduced to one moment<br/>
> Absolute standstill
# Hierarchy
> With each new layer<br/>
> The game of Chinese whispers<br/>
> Beautifies the truth
| 16.195122 | 37 | 0.686747 | eng_Latn | 0.966939 |
58d8a28bd610a4402692b87fccfe6f949d0a8d1d | 4,374 | md | Markdown | docs/analytics-platform-system/high-availability.md | aisbergde/sql-docs.de-de | 7d98a83b0b13f2b9ff7482c5bdf9f024c20595b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analytics-platform-system/high-availability.md | aisbergde/sql-docs.de-de | 7d98a83b0b13f2b9ff7482c5bdf9f024c20595b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analytics-platform-system/high-availability.md | aisbergde/sql-docs.de-de | 7d98a83b0b13f2b9ff7482c5bdf9f024c20595b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: High availability in Analytics Platform System | Microsoft Docs
description: Learn how Analytics Platform System (APS) is designed for high availability.
author: mzaman1
ms.prod: sql
ms.technology: data-warehouse
ms.topic: conceptual
ms.date: 04/17/2018
ms.author: murshedz
ms.reviewer: martinle
ms.openlocfilehash: cdf1837bd3b3b1cdf8e189ae591cd6fbff58387a
ms.sourcegitcommit: b2464064c0566590e486a3aafae6d67ce2645cef
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 07/15/2019
ms.locfileid: "67960864"
---
# <a name="analytics-platform-system-high-availability"></a>Hohe Verfügbarkeit für Analytics Platform System
Erfahren Sie, wie Analytics Platform System (APS) für hochverfügbarkeit entworfen wird.
## <a name="high-availability-architecture"></a>Hochverfügbarkeitsarchitektur

## <a name="network"></a>Netzwerk
Für die Verfügbarkeit des Netzwerks hat die APS-Appliance zwei InfiniBand-Netzwerken. Wenn die InfiniBand-Netzwerke ausfällt, ist der andere Controller weiterhin verfügbar. Darüber hinaus wurde Active Directory-Domänencontrollern, um eingehende Anforderungen mit dem richtigen InfiniBand-Netzwerk zu beheben repliziert.
Weitere Informationen finden Sie unter [konfigurieren InfiniBand-Netzwerkadapter](configure-infiniband-network-adapters.md).
## <a name="storage"></a>Speicherung
Um Daten zu schützen, verwendet die APS RAID 1-Spiegelung, um zwei Kopien der Daten für alle Benutzer zu verwalten. Wenn ein Datenträger ausfällt, wird das Hardwaresystem die Daten auf ein Hotspare-Laufwerk erstellt und legt eine Warnung, dass ein Fehler auf dem Datenträger vorhanden ist.
Um die verfügbaren Daten online zu halten, verwendet APS Windows-Speicherplätze "und" freigegebene Clustervolumes, um die Benutzer-Datenträger in den direkt angeschlossenen Speicher zu verwalten. Es gibt einen einzigen Speicherpool pro Daten Skalierungseinheit unterteilt Cluster Shared Volumes, die auf die Compute-Knoten-Hosts durch einbindungspunkte verfügbar sind.
Um sicherzustellen, dass der Speicherpool online bleibt, hat jedem Host in der Data-Skalierungseinheit einen iSCSI-virtuellen Computer, der kein Failover durchgeführt wird. Diese Architektur ist wichtig, da ein Host ausfällt, die Daten weiterhin über den anderen Hosts in der Data-Skalierungseinheit verfügbar sind.
## <a name="hosts"></a>Hosts
Für die hostverfügbarkeit werden auf allen Hosts in einem Windows-Failovercluster konfiguriert. Jedes Rack verfügt über einen passiven Host. Das erste Rack, das SQL Server Parallel Data Warehouse (PDW) und der Appliance-Fabric gesteuert werden soll, kann optional einen zweiten passiven Host verfügen. Wenn ein Host ausfällt, virtuelle Computer, die für Failover konfiguriert sind wird ein Failover auf einen verfügbaren passiven Host.
## <a name="pdw-nodes-and-appliance-fabric"></a>PDW-Knoten und Appliance-fabric
APS wird für hohe Verfügbarkeit der PDW-Knoten und der Appliance-Fabric Netzwerkvirtualisierung verwendet. Jede der PDW und Fabric-Komponenten, die auf einem virtuellen Computer ausführen.
Jeder virtuelle Computer wird als eine Rolle in der Windows-Failovercluster definiert. Bei ein virtuellen Computer ein Fehler auftritt, startet Sie Cluster auf einem verfügbaren passiven-Host neu. Die virtuellen Computer werden mithilfe von System Center Virtual Machine Manager bereitgestellt. Wenn ein Failover auftritt, kann der virtuelle Computer auf dem passiven Host mit weiterhin Zugriff auf die Benutzerdaten über das InfiniBand-Netzwerk.
Der steuerknoten und den Compute-Knoten virtuelle Computer werden jeweils als einen einzelnen Knoten-Cluster konfiguriert. Der einzelnen Knoten-Cluster verwaltet InfiniBand-Netzwerke als Clusterressource, um sicherzustellen, dass der Cluster immer das aktive InfiniBand IP verwendet. Die einzelnen Knoten-Cluster verwaltet die PDW-Prozesse, die innerhalb des virtuellen Computers ausgeführt. Beispielsweise hat die Einzelknotencluster SQL Server- und Windows-Verwaltungsinstrumentation (Data Movement Service, DMS) als Ressourcen, damit er in der richtigen Reihenfolge gestartet werden kann. Der steuerknoten VM steuert auch die Startreihenfolge für die anderen virtuellen Computer, die auf der orchestrierungshost ausgeführt.
| 95.086957 | 728 | 0.822359 | deu_Latn | 0.99496 |
58d9cbd15d158ea8666f8dda50c1c7696e609928 | 20,664 | md | Markdown | README.md | daranzolin/hacksaw | 5bbe004d7ed7656df52b301398de9461fb5f1191 | [
"MIT"
] | 28 | 2020-05-25T07:05:42.000Z | 2021-11-15T11:12:41.000Z | README.md | daranzolin/hacksaw | 5bbe004d7ed7656df52b301398de9461fb5f1191 | [
"MIT"
] | 7 | 2020-05-27T21:47:36.000Z | 2021-09-20T21:58:42.000Z | README.md | daranzolin/hacksaw | 5bbe004d7ed7656df52b301398de9461fb5f1191 | [
"MIT"
] | 3 | 2020-05-29T20:50:03.000Z | 2020-10-05T21:00:34.000Z |
<!-- README.md is generated from README.Rmd. Please edit that file -->
# hacksaw
<!-- badges: start -->


 [](https://travis-ci.com/daranzolin/hacksaw)
<!-- badges: end -->
hacksaw serves as an adhesive between various dplyr and purrr operations,
with some extra tidyverse-like functionality (e.g. keeping NAs, shifting
row values) and shortcuts (e.g. filtering patterns, casting, plucking,
etc.).
## Installation
You can install the released version of hacksaw from CRAN with:
``` r
install.packages("hacksaw")
```
Or install the development version from GitHub with:
``` r
remotes::install_github("daranzolin/hacksaw")
```
## Split operations
hacksaw’s assortment of split operations recycle the original data
frame. This is useful when you want to run slightly different code on
the same object multiple times (e.g. assignment) or you want to take
advantage of some list functionality (e.g. purrr, `lengths()`, `%->%`,
etc.).
The useful `%<-%` and `%->%` operators are re-exported from [the zeallot
package.](https://github.com/r-lib/zeallot)
### filter
``` r
library(hacksaw)
library(tidyverse)
iris %>%
filter_split(
large_petals = Petal.Length > 5.1,
large_sepals = Sepal.Length > 6.4
) %>%
map(summary)
#> $large_petals
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> Min. :6.100 Min. :2.500 Min. :5.200 Min. :1.400
#> 1st Qu.:6.400 1st Qu.:2.900 1st Qu.:5.525 1st Qu.:1.900
#> Median :6.700 Median :3.000 Median :5.700 Median :2.100
#> Mean :6.862 Mean :3.071 Mean :5.826 Mean :2.094
#> 3rd Qu.:7.200 3rd Qu.:3.200 3rd Qu.:6.075 3rd Qu.:2.300
#> Max. :7.900 Max. :3.800 Max. :6.900 Max. :2.500
#> Species
#> setosa : 0
#> versicolor: 0
#> virginica :34
#>
#>
#>
#>
#> $large_sepals
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> Min. :6.500 Min. :2.500 Min. :4.400 Min. :1.30 setosa : 0
#> 1st Qu.:6.700 1st Qu.:3.000 1st Qu.:5.050 1st Qu.:1.65 versicolor: 9
#> Median :6.800 Median :3.000 Median :5.700 Median :2.00 virginica :26
#> Mean :6.971 Mean :3.071 Mean :5.569 Mean :1.94
#> 3rd Qu.:7.200 3rd Qu.:3.200 3rd Qu.:6.050 3rd Qu.:2.25
#> Max. :7.900 Max. :3.800 Max. :6.900 Max. :2.50
```
### select
Include multiple columns and select helpers within `c()`:
``` r
iris %>%
select_split(
sepal_data = c(Species, starts_with("Sepal")),
petal_data = c(Species, starts_with("Petal"))
) %>%
str()
#> List of 2
#> $ sepal_data:'data.frame': 150 obs. of 3 variables:
#> ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
#> ..$ Sepal.Length: num [1:150] 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
#> ..$ Sepal.Width : num [1:150] 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
#> $ petal_data:'data.frame': 150 obs. of 3 variables:
#> ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
#> ..$ Petal.Length: num [1:150] 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
#> ..$ Petal.Width : num [1:150] 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
```
### count
Count across multiple variables:
``` r
mtcars %>%
count_split(
cyl,
carb,
gear
)
#> [[1]]
#> cyl n
#> 1 8 14
#> 2 4 11
#> 3 6 7
#>
#> [[2]]
#> carb n
#> 1 2 10
#> 2 4 10
#> 3 1 7
#> 4 3 3
#> 5 6 1
#> 6 8 1
#>
#> [[3]]
#> gear n
#> 1 3 15
#> 2 4 12
#> 3 5 5
```
### rolling\_count\_split
Rolling counts, left-to-right
``` r
mtcars %>%
rolling_count_split(
cyl,
carb,
gear
)
#> [[1]]
#> cyl n
#> 1 4 11
#> 2 6 7
#> 3 8 14
#>
#> [[2]]
#> cyl carb n
#> 1 4 1 5
#> 2 4 2 6
#> 3 6 1 2
#> 4 6 4 4
#> 5 6 6 1
#> 6 8 2 4
#> 7 8 3 3
#> 8 8 4 6
#> 9 8 8 1
#>
#> [[3]]
#> cyl carb gear n
#> 1 4 1 3 1
#> 2 4 1 4 4
#> 3 4 2 4 4
#> 4 4 2 5 2
#> 5 6 1 3 2
#> 6 6 4 4 4
#> 7 6 6 5 1
#> 8 8 2 3 4
#> 9 8 3 3 3
#> 10 8 4 3 5
#> 11 8 4 5 1
#> 12 8 8 5 1
```
### distinct
Easily get the unique values of multiple columns:
``` r
starwars %>%
distinct_split(skin_color, eye_color, homeworld) %>%
str() # lengths() is also useful
#> List of 3
#> $ : chr [1:31] "fair" "gold" "white, blue" "white" ...
#> $ : chr [1:15] "blue" "yellow" "red" "brown" ...
#> $ : chr [1:49] "Tatooine" "Naboo" "Alderaan" "Stewjon" ...
```
### mutate
``` r
iris %>%
mutate_split(
Sepal.Length2 = Sepal.Length * 2,
Sepal.Length3 = Sepal.Length * 3
) %>%
str()
#> List of 2
#> $ :'data.frame': 150 obs. of 6 variables:
#> ..$ Sepal.Length : num [1:150] 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
#> ..$ Sepal.Width : num [1:150] 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
#> ..$ Petal.Length : num [1:150] 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
#> ..$ Petal.Width : num [1:150] 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
#> ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
#> ..$ Sepal.Length2: num [1:150] 10.2 9.8 9.4 9.2 10 10.8 9.2 10 8.8 9.8 ...
#> $ :'data.frame': 150 obs. of 6 variables:
#> ..$ Sepal.Length : num [1:150] 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
#> ..$ Sepal.Width : num [1:150] 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
#> ..$ Petal.Length : num [1:150] 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
#> ..$ Petal.Width : num [1:150] 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
#> ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
#> ..$ Sepal.Length3: num [1:150] 15.3 14.7 14.1 13.8 15 16.2 13.8 15 13.2 14.7 ...
```
### group\_by
Separate groups:
``` r
mtcars %>%
group_by_split(cyl, gear, am, across(c(cyl, gear))) %>%
map(tally, wt = vs)
#> [[1]]
#> # A tibble: 3 x 2
#> cyl n
#> <dbl> <dbl>
#> 1 4 10
#> 2 6 4
#> 3 8 0
#>
#> [[2]]
#> # A tibble: 3 x 2
#> gear n
#> <dbl> <dbl>
#> 1 3 3
#> 2 4 10
#> 3 5 1
#>
#> [[3]]
#> # A tibble: 2 x 2
#> am n
#> <dbl> <dbl>
#> 1 0 7
#> 2 1 7
#>
#> [[4]]
#> # A tibble: 8 x 3
#> # Groups: cyl [3]
#> cyl gear n
#> <dbl> <dbl> <dbl>
#> 1 4 3 1
#> 2 4 4 8
#> 3 4 5 1
#> 4 6 3 2
#> 5 6 4 2
#> 6 6 5 0
#> 7 8 3 0
#> 8 8 5 0
```
### rolling\_group\_by\_split
Rolling groups, left-to-right:
``` r
mtcars %>%
rolling_group_by_split(
cyl,
carb,
gear
) %>%
map(summarize, mean_mpg = mean(mpg))
#> [[1]]
#> # A tibble: 3 x 2
#> cyl mean_mpg
#> <dbl> <dbl>
#> 1 4 26.7
#> 2 6 19.7
#> 3 8 15.1
#>
#> [[2]]
#> # A tibble: 9 x 3
#> # Groups: cyl [3]
#> cyl carb mean_mpg
#> <dbl> <dbl> <dbl>
#> 1 4 1 27.6
#> 2 4 2 25.9
#> 3 6 1 19.8
#> 4 6 4 19.8
#> 5 6 6 19.7
#> 6 8 2 17.2
#> 7 8 3 16.3
#> 8 8 4 13.2
#> 9 8 8 15
#>
#> [[3]]
#> # A tibble: 12 x 4
#> # Groups: cyl, carb [9]
#> cyl carb gear mean_mpg
#> <dbl> <dbl> <dbl> <dbl>
#> 1 4 1 3 21.5
#> 2 4 1 4 29.1
#> 3 4 2 4 24.8
#> 4 4 2 5 28.2
#> 5 6 1 3 19.8
#> 6 6 4 4 19.8
#> 7 6 6 5 19.7
#> 8 8 2 3 17.2
#> 9 8 3 3 16.3
#> 10 8 4 3 12.6
#> 11 8 4 5 15.8
#> 12 8 8 5 15
```
### nest\_by
``` r
mtcars %>%
nest_by_split(cyl, gear) %>%
map(mutate, model = list(lm(mpg ~ wt, data = data)))
#> [[1]]
#> # A tibble: 3 x 3
#> # Rowwise: cyl
#> cyl data model
#> <dbl> <list<tbl_df[,10]>> <list>
#> 1 4 [11 × 10] <lm>
#> 2 6 [7 × 10] <lm>
#> 3 8 [14 × 10] <lm>
#>
#> [[2]]
#> # A tibble: 3 x 3
#> # Rowwise: gear
#> gear data model
#> <dbl> <list<tbl_df[,10]>> <list>
#> 1 3 [15 × 10] <lm>
#> 2 4 [12 × 10] <lm>
#> 3 5 [5 × 10] <lm>
```
### rolling\_nest\_by
``` r
mtcars %>%
rolling_nest_by_split(cyl, gear) %>%
map(mutate, model = list(lm(mpg ~ wt, data = data)))
#> [[1]]
#> # A tibble: 3 x 3
#> # Rowwise: cyl
#> cyl data model
#> <dbl> <list<tbl_df[,10]>> <list>
#> 1 4 [11 × 10] <lm>
#> 2 6 [7 × 10] <lm>
#> 3 8 [14 × 10] <lm>
#>
#> [[2]]
#> # A tibble: 8 x 4
#> # Rowwise: cyl, gear
#> cyl gear data model
#> <dbl> <dbl> <list<tbl_df[,9]>> <list>
#> 1 4 3 [1 × 9] <lm>
#> 2 4 4 [8 × 9] <lm>
#> 3 4 5 [2 × 9] <lm>
#> 4 6 3 [2 × 9] <lm>
#> 5 6 4 [4 × 9] <lm>
#> 6 6 5 [1 × 9] <lm>
#> 7 8 3 [12 × 9] <lm>
#> 8 8 5 [2 × 9] <lm>
```
### transmute
``` r
iris %>%
transmute_split(Sepal.Length * 2, Petal.Width + 5) %>%
str()
#> List of 2
#> $ : num [1:150] 10.2 9.8 9.4 9.2 10 10.8 9.2 10 8.8 9.8 ...
#> $ : num [1:150] 5.2 5.2 5.2 5.2 5.2 5.4 5.3 5.2 5.2 5.1 ...
```
### slice
``` r
iris %>%
slice_split(1:10, 11:15, 30:50) %>%
str()
#> List of 3
#> $ :'data.frame': 10 obs. of 5 variables:
#> ..$ Sepal.Length: num [1:10] 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9
#> ..$ Sepal.Width : num [1:10] 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1
#> ..$ Petal.Length: num [1:10] 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5
#> ..$ Petal.Width : num [1:10] 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1
#> ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1
#> $ :'data.frame': 5 obs. of 5 variables:
#> ..$ Sepal.Length: num [1:5] 5.4 4.8 4.8 4.3 5.8
#> ..$ Sepal.Width : num [1:5] 3.7 3.4 3 3 4
#> ..$ Petal.Length: num [1:5] 1.5 1.6 1.4 1.1 1.2
#> ..$ Petal.Width : num [1:5] 0.2 0.2 0.1 0.1 0.2
#> ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1
#> $ :'data.frame': 21 obs. of 5 variables:
#> ..$ Sepal.Length: num [1:21] 4.7 4.8 5.4 5.2 5.5 4.9 5 5.5 4.9 4.4 ...
#> ..$ Sepal.Width : num [1:21] 3.2 3.1 3.4 4.1 4.2 3.1 3.2 3.5 3.6 3 ...
#> ..$ Petal.Length: num [1:21] 1.6 1.6 1.5 1.5 1.4 1.5 1.2 1.3 1.4 1.3 ...
#> ..$ Petal.Width : num [1:21] 0.2 0.2 0.4 0.1 0.2 0.2 0.2 0.2 0.1 0.2 ...
#> ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
Use the `var_max` and `var_min` helpers to easily get minimum and
maximum values of a variable:
``` r
iris %>%
slice_split(
largest_sepals = var_max(Sepal.Length, 4),
smallest_sepals = var_min(Sepal.Length, 4)
  )
#> $largest_sepals
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 1 7.7 3.8 6.7 2.2 virginica
#> 2 7.7 2.6 6.9 2.3 virginica
#> 3 7.7 2.8 6.7 2.0 virginica
#> 4 7.9 3.8 6.4 2.0 virginica
#>
#> $smallest_sepals
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 1 4.4 2.9 1.4 0.2 setosa
#> 2 4.3 3.0 1.1 0.1 setosa
#> 3 4.4 3.0 1.3 0.2 setosa
#> 4 4.4 3.2 1.3 0.2 setosa
```
### precision\_split
`precision_split` splits the mtcars data frame into two: one with mpg
greater than 20, one with mpg less than 20:
``` r
mtcars %>%
precision_split(mpg > 20) %->% c(lt20mpg, gt20mpg)
str(gt20mpg)
#> 'data.frame': 14 obs. of 11 variables:
#> $ mpg : num 21 21 22.8 21.4 24.4 22.8 32.4 30.4 33.9 21.5 ...
#> $ cyl : num 6 6 4 6 4 4 4 4 4 4 ...
#> $ disp: num 160 160 108 258 147 ...
#> $ hp : num 110 110 93 110 62 95 66 52 65 97 ...
#> $ drat: num 3.9 3.9 3.85 3.08 3.69 3.92 4.08 4.93 4.22 3.7 ...
#> $ wt : num 2.62 2.88 2.32 3.21 3.19 ...
#> $ qsec: num 16.5 17 18.6 19.4 20 ...
#> $ vs : num 0 0 1 1 1 1 1 1 1 1 ...
#> $ am : num 1 1 1 0 0 0 1 1 1 0 ...
#> $ gear: num 4 4 4 3 4 4 4 4 4 3 ...
#> $ carb: num 4 4 1 1 2 2 1 2 1 1 ...
str(lt20mpg)
#> 'data.frame': 18 obs. of 11 variables:
#> $ mpg : num 18.7 18.1 14.3 19.2 17.8 16.4 17.3 15.2 10.4 10.4 ...
#> $ cyl : num 8 6 8 6 6 8 8 8 8 8 ...
#> $ disp: num 360 225 360 168 168 ...
#> $ hp : num 175 105 245 123 123 180 180 180 205 215 ...
#> $ drat: num 3.15 2.76 3.21 3.92 3.92 3.07 3.07 3.07 2.93 3 ...
#> $ wt : num 3.44 3.46 3.57 3.44 3.44 ...
#> $ qsec: num 17 20.2 15.8 18.3 18.9 ...
#> $ vs : num 0 1 0 1 1 0 0 0 0 0 ...
#> $ am : num 0 0 0 0 0 0 0 0 0 0 ...
#> $ gear: num 3 3 3 4 4 3 3 3 3 3 ...
#> $ carb: num 2 1 4 4 4 3 3 3 4 4 ...
```
### eval\_split
Evaluate any expression:
``` r
mtcars %>%
eval_split(
select(hp, mpg),
filter(mpg > 25),
mutate(pounds = wt*1000)
) %>%
str()
#> List of 3
#> $ :'data.frame': 32 obs. of 2 variables:
#> ..$ hp : num [1:32] 110 110 93 110 175 105 245 62 95 123 ...
#> ..$ mpg: num [1:32] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
#> $ :'data.frame': 6 obs. of 11 variables:
#> ..$ mpg : num [1:6] 32.4 30.4 33.9 27.3 26 30.4
#> ..$ cyl : num [1:6] 4 4 4 4 4 4
#> ..$ disp: num [1:6] 78.7 75.7 71.1 79 120.3 ...
#> ..$ hp : num [1:6] 66 52 65 66 91 113
#> ..$ drat: num [1:6] 4.08 4.93 4.22 4.08 4.43 3.77
#> ..$ wt : num [1:6] 2.2 1.61 1.83 1.94 2.14 ...
#> ..$ qsec: num [1:6] 19.5 18.5 19.9 18.9 16.7 ...
#> ..$ vs : num [1:6] 1 1 1 1 0 1
#> ..$ am : num [1:6] 1 1 1 1 1 1
#> ..$ gear: num [1:6] 4 4 4 4 5 5
#> ..$ carb: num [1:6] 1 2 1 1 2 2
#> $ :'data.frame': 32 obs. of 12 variables:
#> ..$ mpg : num [1:32] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
#> ..$ cyl : num [1:32] 6 6 4 6 8 6 8 4 4 6 ...
#> ..$ disp : num [1:32] 160 160 108 258 360 ...
#> ..$ hp : num [1:32] 110 110 93 110 175 105 245 62 95 123 ...
#> ..$ drat : num [1:32] 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ...
#> ..$ wt : num [1:32] 2.62 2.88 2.32 3.21 3.44 ...
#> ..$ qsec : num [1:32] 16.5 17 18.6 19.4 17 ...
#> ..$ vs : num [1:32] 0 0 1 1 0 1 0 1 1 1 ...
#> ..$ am : num [1:32] 1 1 1 0 0 0 0 0 0 0 ...
#> ..$ gear : num [1:32] 4 4 4 3 3 3 3 4 4 4 ...
#> ..$ carb : num [1:32] 4 4 1 1 2 1 4 2 2 4 ...
#> ..$ pounds: num [1:32] 2620 2875 2320 3215 3440 ...
```
## Casting
Tired of `mutate(var = as.[character|numeric|logical](var))`?
``` r
starwars %>% cast_character(height, mass) %>% str(max.level = 2)
#> tibble [87 × 14] (S3: tbl_df/tbl/data.frame)
#> $ name : chr [1:87] "Luke Skywalker" "C-3PO" "R2-D2" "Darth Vader" ...
#> $ height : chr [1:87] "172" "167" "96" "202" ...
#> $ mass : chr [1:87] "77" "75" "32" "136" ...
#> $ hair_color: chr [1:87] "blond" NA NA "none" ...
#> $ skin_color: chr [1:87] "fair" "gold" "white, blue" "white" ...
#> $ eye_color : chr [1:87] "blue" "yellow" "red" "yellow" ...
#> $ birth_year: num [1:87] 19 112 33 41.9 19 52 47 NA 24 57 ...
#> $ sex : chr [1:87] "male" "none" "none" "male" ...
#> $ gender : chr [1:87] "masculine" "masculine" "masculine" "masculine" ...
#> $ homeworld : chr [1:87] "Tatooine" "Tatooine" "Naboo" "Tatooine" ...
#> $ species : chr [1:87] "Human" "Droid" "Droid" "Human" ...
#> $ films :List of 87
#> $ vehicles :List of 87
#> $ starships :List of 87
iris %>% cast_character(contains(".")) %>% str(max.level = 1)
#> 'data.frame': 150 obs. of 5 variables:
#> $ Sepal.Length: chr "5.1" "4.9" "4.7" "4.6" ...
#> $ Sepal.Width : chr "3.5" "3" "3.2" "3.1" ...
#> $ Petal.Length: chr "1.4" "1.4" "1.3" "1.5" ...
#> $ Petal.Width : chr "0.2" "0.2" "0.2" "0.2" ...
#> $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
```
hacksaw also includes `cast_numeric` and `cast_logical`.
## Keeping NAs
The reverse of `tidyr::drop_na`, strangely omitted in the original
tidyverse.
``` r
df <- tibble(x = c(1, 2, NA, NA, NA), y = c("a", NA, "b", NA, NA))
df %>% keep_na()
#> # A tibble: 2 x 2
#> x y
#> <dbl> <chr>
#> 1 NA <NA>
#> 2 NA <NA>
df %>% keep_na(x)
#> # A tibble: 3 x 2
#> x y
#> <dbl> <chr>
#> 1 NA b
#> 2 NA <NA>
#> 3 NA <NA>
df %>% keep_na(x, y)
#> # A tibble: 2 x 2
#> x y
#> <dbl> <chr>
#> 1 NA <NA>
#> 2 NA <NA>
```
## Coercive joins
I seldom care if my join keys are incompatible. The `*_join2` suite of
functions coerces either the left or right table accordingly.
``` r
df1 <- tibble(x = 1:10, b = 1:10, y = letters[1:10])
df2 <- tibble(x = as.character(1:10), z = letters[11:20])
left_join2(df1, df2)
#> Joining, by = "x"
#> # A tibble: 10 x 4
#> x b y z
#> <chr> <int> <chr> <chr>
#> 1 1 1 a k
#> 2 2 2 b l
#> 3 3 3 c m
#> 4 4 4 d n
#> 5 5 5 e o
#> 6 6 6 f p
#> 7 7 7 g q
#> 8 8 8 h r
#> 9 9 9 i s
#> 10 10 10 j t
```
## Shifting row values
Shift values across rows in either direction. Sometimes useful when
importing irregularly-shaped tabular data.
``` r
df <- tibble(
s = c(NA, 1, NA, NA),
t = c(NA, NA, 1, NA),
u = c(NA, NA, 2, 5),
v = c(5, 1, 9, 2),
x = c(1, 5, 6, 7),
y = c(NA, NA, 8, NA),
z = 1:4
)
df
#> # A tibble: 4 x 7
#> s t u v x y z
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
#> 1 NA NA NA 5 1 NA 1
#> 2 1 NA NA 1 5 NA 2
#> 3 NA 1 2 9 6 8 3
#> 4 NA NA 5 2 7 NA 4
shift_row_values(df)
#> # A tibble: 4 x 7
#> s t u v x y z
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
#> 1 5 1 1 NA NA NA NA
#> 2 1 1 5 2 NA NA NA
#> 3 1 2 9 6 8 3 NA
#> 4 5 2 7 4 NA NA NA
shift_row_values(df, at = 1:3)
#> # A tibble: 4 x 7
#> s t u v x y z
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
#> 1 5 1 1 NA NA NA NA
#> 2 1 1 5 2 NA NA NA
#> 3 1 2 9 6 8 3 NA
#> 4 NA NA 5 2 7 NA 4
shift_row_values(df, at = 1:2, .dir = "right")
#> # A tibble: 4 x 7
#> s t u v x y z
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
#> 1 NA NA NA NA 5 1 1
#> 2 NA NA NA 1 1 5 2
#> 3 NA 1 2 9 6 8 3
#> 4 NA NA 5 2 7 NA 4
```
## Filtering, keeping, and discarding patterns
A wrapper around `filter(grepl(..., var))`:
``` r
starwars %>%
filter_pattern(homeworld, "oo") %>%
distinct(homeworld)
#> # A tibble: 2 x 1
#> homeworld
#> <chr>
#> 1 Tatooine
#> 2 Naboo
```
Use `keep_pattern` and `discard_pattern` for lists and vectors.
## Plucking values
A wrapper around `x[p][i]`:
``` r
df <- tibble(
id = c(1, 1, 1, 2, 2, 2, 3, 3),
tested = c("no", "no", "yes", "no", "no", "no", "yes", "yes"),
year = c(2015:2017, 2010:2012, 2019:2020)
)
df %>%
group_by(id) %>%
mutate(year_first_tested = pluck_when(year, tested == "yes"))
#> # A tibble: 8 x 4
#> # Groups: id [3]
#> id tested year year_first_tested
#> <dbl> <chr> <int> <int>
#> 1 1 no 2015 2017
#> 2 1 no 2016 2017
#> 3 1 yes 2017 2017
#> 4 2 no 2010 NA
#> 5 2 no 2011 NA
#> 6 2 no 2012 NA
#> 7 3 yes 2019 2019
#> 8 3 yes 2020 2019
```
| 29.562232 | 205 | 0.463657 | eng_Latn | 0.178219 |
58db04ad0dc921eb9796311cbd3178bc685484f7 | 415 | md | Markdown | README.md | M-ZubairAhmed/unswash | b0de31b17d3130030eed3f6f3249920c4d0d4fb4 | [
"MIT"
] | null | null | null | README.md | M-ZubairAhmed/unswash | b0de31b17d3130030eed3f6f3249920c4d0d4fb4 | [
"MIT"
] | null | null | null | README.md | M-ZubairAhmed/unswash | b0de31b17d3130030eed3f6f3249920c4d0d4fb4 | [
"MIT"
] | null | null | null | <p align="center">
<img src="https://user-images.githubusercontent.com/17708702/93482155-03b85000-f91d-11ea-8be6-22119ad15291.png" alt="repo image" width="200" height="200" />
<h5 align="center"><i>An open source clone of unsplash.com</i></h5>
</p>
---
[](https://app.netlify.com/sites/unswash/deploys)
| 51.875 | 158 | 0.73253 | yue_Hant | 0.324931 |
58dbaa9ca054670bc9032b6ba0ff4c5e17effeed | 1,376 | md | Markdown | website/versioned_docs/version-1.0.0/jsonapi/jsonapi-view.md | DarkoKukovec/datx | 3939bee120ca46a6230ff4c17b791b5487b0ca25 | [
"MIT"
] | 138 | 2017-12-26T20:04:38.000Z | 2022-02-15T11:29:30.000Z | website/versioned_docs/version-1.0.0/jsonapi/jsonapi-view.md | DarkoKukovec/datx | 3939bee120ca46a6230ff4c17b791b5487b0ca25 | [
"MIT"
] | 270 | 2018-01-03T21:33:43.000Z | 2022-03-25T15:34:42.000Z | website/versioned_docs/version-1.0.0/jsonapi/jsonapi-view.md | nanyuantingfeng/datx | 310923815e77229d0742ec93c2ae1b8a2403e0a6 | [
"MIT"
] | 9 | 2018-05-09T09:02:47.000Z | 2021-11-19T15:14:57.000Z | ---
id: version-1.0.0-jsonapi-view
title: View
original_id: jsonapi-view
---
The JSON API View enhances the `datx` view with the following properties and methods:
## sync
```typescript
sync<T extends IJsonapiModel = IJsonapiModel>(body?: IResponse): T|Array<T>|null;
```
Add the data from a JSON API response to the view. The return value is a model or an array of models from the JSON API `data` property.
## fetch
```typescript
fetch<T extends IJsonapiModel = IJsonapiModel>(id: number|string, options?: IRequestOptions): Promise<Response<T>>;
```
Fetch a single model from the server.
The options can be used to send additional parameters to the server.
If an error happens, the function will reject with the [`Response`](jsonapi-response) object with the `error` property set.
## fetchAll
```typescript
fetchAll<T extends IJsonapiModel = IJsonapiModel>(options?: IRequestOptions): Promise<Response<T>>
```
Fetch multiple models of the view type from the server. This will either be all models or a first page of models, depending on the server configuration. When requesting other pages with the response getters, they will also be added to the view automatically.
The options can be used to send additional parameters to the server.
If an error happens, the function will reject with the [`Response`](jsonapi-response) object with the `error` property set.
| 34.4 | 258 | 0.763808 | eng_Latn | 0.986701 |
58dbaf69875f02d594fa48a5ee8d01f48e6217f1 | 1,052 | md | Markdown | content/posts/azs-update-2021-06-25.md | kongou-ae/azdocChangefeed | 900436e0dd6b7164136a4a5a8cddfc3dd912c5c9 | [
"MIT"
] | null | null | null | content/posts/azs-update-2021-06-25.md | kongou-ae/azdocChangefeed | 900436e0dd6b7164136a4a5a8cddfc3dd912c5c9 | [
"MIT"
] | null | null | null | content/posts/azs-update-2021-06-25.md | kongou-ae/azdocChangefeed | 900436e0dd6b7164136a4a5a8cddfc3dd912c5c9 | [
"MIT"
] | null | null | null | ---
title: Azure Stack Update at 2021-06-25
date: 2021-06-25
draft: false
tags: [
]
---
### aks-hci
- [azure-stack/aks-hci/pricing.md](https://github.com/MicrosoftDocs/azure-stack-docs/compare/e81179a..f0a97a3#diff-ffbe4bc7b3bf08a07f37261c67ce3b26086d729774a8b5dfb7b20f1b1f114bf3) ([To docs](https://docs.microsoft.com/en-us/azure-stack/aks-hci/pricing?WT.mc_id=AZ-MVP-5003408))
### operator
- [azure-stack/operator/azure-stack-update-oem.md](https://github.com/MicrosoftDocs/azure-stack-docs/compare/e81179a..f0a97a3#diff-e6500f72746c0bb1d8e7e49897d2c92026d98bf14fa965d58ee20a05d2c5fce6) ([To docs](https://docs.microsoft.com/en-us/azure-stack/operator/azure-stack-update-oem?WT.mc_id=AZ-MVP-5003408))
- [azure-stack/operator/powershell-install-az-module.md](https://github.com/MicrosoftDocs/azure-stack-docs/compare/e81179a..f0a97a3#diff-30850e7d5f4d10e14e37ab1d18daa6f030b59777fa195d031c5297a98065c28a) ([To docs](https://docs.microsoft.com/en-us/azure-stack/operator/powershell-install-az-module?WT.mc_id=AZ-MVP-5003408))
| 65.75 | 323 | 0.782319 | yue_Hant | 0.230866 |
58dbc0ed74b2df8f684cf20ebe84790f890c240f | 2,459 | md | Markdown | README.md | ionething/fuck-druid-ad | 671245c75df2ba6e1d28c6e57cfa2a2e38c99c1c | [
"Apache-2.0"
] | 2 | 2019-12-09T10:59:59.000Z | 2020-06-12T13:29:23.000Z | README.md | ionething/fuck-druid-ad | 671245c75df2ba6e1d28c6e57cfa2a2e38c99c1c | [
"Apache-2.0"
] | 1 | 2019-02-15T05:25:30.000Z | 2019-04-23T04:09:47.000Z | README.md | ionething/fuck-druid-ad | 671245c75df2ba6e1d28c6e57cfa2a2e38c99c1c | [
"Apache-2.0"
] | null | null | null | # fuck-druid-ad
Remove the ad banner from the Druid monitoring page.
> Druid added a footer ad banner linking to Alibaba Cloud after version 1.1.4. Some ideas for getting rid of the annoying ad were raised in Druid [issues2731](https://github.com/alibaba/druid/issues/2731#issuecomment-428277842); this repo puts them into practice.
## FuckDruidAdConfiguration (recommended)
Add the following configuration to a Spring Boot project:
```java
@Configuration
@ConditionalOnWebApplication
@AutoConfigureAfter(DruidDataSourceAutoConfigure.class)
@ConditionalOnProperty(name = "spring.datasource.druid.stat-view-servlet.enabled", havingValue = "true", matchIfMissing = true)
public class FuckDruidAdConfiguration {
@Bean
public FilterRegistrationBean fuckDruidAdFilterRegistrationBean(DruidStatProperties properties) {
// Get the settings of the web monitoring page
DruidStatProperties.StatViewServlet config = properties.getStatViewServlet();
// Derive the configured path of common.js
String pattern = config.getUrlPattern() != null ? config.getUrlPattern() : "/druid/*";
String commonJsPattern = pattern.replaceAll("\\*", "js/common.js");
final String filePath = "support/http/resources/js/common.js";
Filter filter = new Filter() {
@Override
public void init(FilterConfig filterConfig) throws ServletException {
}
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
chain.doFilter(request, response);
// 重置缓冲区,响应头不会被重置
response.resetBuffer();
// 获取common.js
String text = Utils.readFromResource(filePath);
// 正则替换banner
text = text.replaceAll("<a.*?banner\"></a><br/>", "");
response.getWriter().write(text);
}
@Override
public void destroy() {
}
};
FilterRegistrationBean registrationBean = new FilterRegistrationBean();
registrationBean.setFilter(filter);
registrationBean.addUrlPatterns(commonJsPattern);
return registrationBean;
}
}
```
- The idea is simple: use a servlet filter to intercept requests for common.js, post-process the response, and strip the ad snippet with a regex replacement.
- Projects that do not use Spring Boot can implement a similar filter by hand, following the same approach.
- This could be packaged as a starter, but doing so is not recommended.
## Using a clean common.js
1. Download a clean [common.js](https://raw.githubusercontent.com/alibaba/druid/35ff7bafad6b5fdad6ed174e6bfbde8fa6396f46/src/main/resources/support/http/resources/js/common.js)
1. Put the file in your own project at: src/main/resource/support/http/resources/js/common.js
## Notes
**No commercial interest involved; purely for learning.**
Mail: [email protected]
| 34.633803 | 164 | 0.688898 | yue_Hant | 0.429252 |
58dbe4a8ef4492924a4df956fa3c564261404ea9 | 889 | md | Markdown | README.md | taran-nulu/periodic-table | a637c1fe214756e18b96246578af1bebf1468c5d | [
"Apache-2.0"
] | 1 | 2022-01-22T21:06:57.000Z | 2022-01-22T21:06:57.000Z | README.md | taran-nulu/periodic-table | a637c1fe214756e18b96246578af1bebf1468c5d | [
"Apache-2.0"
] | null | null | null | README.md | taran-nulu/periodic-table | a637c1fe214756e18b96246578af1bebf1468c5d | [
"Apache-2.0"
] | null | null | null | # Periodic Table
This is a Python program that displays a periodic table with the following for each element:
* Element Name
* Element Abbreviation/Symbol
* Atomic Number
* Atomic Mass
* Electron Shells
The periodic table opens in a window with scroll bars.
**NOTE**: You must drag the scrollbars in the window to pan the periodic table;
mouse scroll wheel and other such methods of scrolling do not work.

## Installation on macOS
Install `brew` if you don't have it:
```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
Install `python3` if you don't have it:
```
brew install python
```
Install tkinter if you don't have it:
```
brew install python-tk
```
Download the code in this repository, `cd` into the directory, and run the following command:
```
python3 main.py
``` | 21.166667 | 95 | 0.739033 | eng_Latn | 0.977493 |
58dc4e5060505a3bd6a953e13c8996e6b68f19df | 234 | md | Markdown | README.md | KevoLoyal/youngrockets | fe04938fe5058ab9083c36f2c3e61536151cbc1b | [
"MIT"
] | null | null | null | README.md | KevoLoyal/youngrockets | fe04938fe5058ab9083c36f2c3e61536151cbc1b | [
"MIT"
] | null | null | null | README.md | KevoLoyal/youngrockets | fe04938fe5058ab9083c36f2c3e61536151cbc1b | [
"MIT"
] | null | null | null | # Lab Info
## Before Starting
This content was designed for free use, it is intended to explore Microsoft Face API Cognitive Service.
It was based on a real life scenario in which it was required to create a Access Control System
| 29.25 | 104 | 0.782051 | eng_Latn | 0.999978 |
58dd29cc9d0e69666d6ce250da4b228446671ef8 | 4,049 | md | Markdown | docs/framework/configure-apps/file-schema/windows-workflow-foundation/behavior-of-servicebehaviors-of-workflow.md | jkugiya/docs.ja-jp | 4bf64d76a19b83c1850e097eb54a717bccc8a064 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/windows-workflow-foundation/behavior-of-servicebehaviors-of-workflow.md | jkugiya/docs.ja-jp | 4bf64d76a19b83c1850e097eb54a717bccc8a064 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/windows-workflow-foundation/behavior-of-servicebehaviors-of-workflow.md | jkugiya/docs.ja-jp | 4bf64d76a19b83c1850e097eb54a717bccc8a064 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Learn more about the <behavior> element of <serviceBehaviors> for workflow.
title: <behavior> of <serviceBehaviors> of workflow
ms.date: 03/30/2017
ms.topic: reference
ms.assetid: 6a4b718a-1b40-4957-935a-f6122819ab3c
ms.openlocfilehash: 7504e9b307286871440bb6efdb672a59d3d13cb1
ms.sourcegitcommit: ddf7edb67715a5b9a45e3dd44536dabc153c1de0
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 02/06/2021
ms.locfileid: "99725286"
---
# <a name="behavior-of-servicebehaviors-of-workflow"></a>\<behavior>\<serviceBehaviors>ワークフローの
**Behavior** 要素には、サービスの動作に関する設定のコレクションが含まれています。 各動作は、 **名前** によってインデックスが作成されます。 サービスは、要素の **設定** 属性を使用して、この名前を使用して各動作にリンクでき [\<endpoint>](../wcf/endpoint-element.md) ます。 これにより、設定を再定義することなく、エンドポイント間で共通の動作構成を共有できます。
[**\<configuration>**](../configuration-element.md)\
[**\<system.ServiceModel>**](system-servicemodel-of-workflow.md)\
[**\<behaviors>**](behaviors-of-workflow.md)\
[**\<serviceBehaviors>**](servicebehaviors-of-workflow.md)\
**\<behavior>**
## <a name="syntax"></a>構文
```xml
<system.ServiceModel>
<behaviors>
<serviceBehaviors>
<behavior name="String">
<bufferReceive maxPendingMessagesPerChannel="Integer" />
<etwTracking profileName="String" />
<sendMessageChannelCache allowUnsafeCaching="Boolean">
<channelSettings idleTimeout="TimeSpan"
leaseTimeout="TimeSpan"
maxItemsInCache="Integer" />
<factorySettings idleTimeout="TimeSpan"
leaseTimeout="TimeSpan"
maxItemsInCache="Integer" />
</sendMessageChannelCache>
<sqlWorkflowInstanceStore connectionStringName="String"
hostLockRenewalPeriod="TimeSpan"
instanceCompletionAction="DeleteNothing/DeleteAll"
instanceEncodingAction="None/GZip"
instanceLockedExceptionAction="NoRetry/BasicRetry/AggressiveRetry"
runnableInstancesDetectionPeriod="TimeSpan" />
<workflowIdle timeToPersist="TimeSpan"
timeToUnload="TimeSpan" />
<workflowUnhandledException action="Abandon/AbandonAndSuspend/Cancel/Terminate" />
</behavior>
</serviceBehaviors>
</behaviors>
</system.ServiceModel>
```
## <a name="attributes-and-elements"></a>属性および要素
以降のセクションでは、属性、子要素、および親要素について説明します。
### <a name="attributes"></a>属性
|属性|説明|
|---------------|-----------------|
|name|動作の構成名を含む一意の文字列。 この値は、要素の識別文字列として機能するため、一意のユーザー定義の文字列である必要があります。|
### <a name="child-elements"></a>子要素
|要素|説明|
|-------------|-----------------|
|[\<bufferReceive>](bufferreceive.md)|サービスが、バッファーされた受信処理を使用するためのサービス動作。これにより、ワークフロー サービスは、順番を無視したメッセージを処理できます。|
|[\<routing>](../wcf/routing-of-servicebehavior.md)|<xref:System.Activities.Tracking.EtwTrackingParticipant> を使用して、サービスで ETW 追跡を利用するためのサービス動作。|
|[\<sendMessageChannelCache>](sendmessagechannelcache.md)|キャッシュの共有レベルのカスタマイズや、チャネル ファクトリ キャッシュの設定を可能にするほか、Send メッセージング アクティビティを使用してサービス エンドポイントにメッセージを送信するワークフローのチャネル キャッシュの設定も可能にするサービス動作。|
|[\<sqlWorkflowInstanceStore>](sqlworkflowinstancestore.md)|ワークフロー サービス インスタンスの状態情報の永続化を SQL Server 2005 または SQL Server 2008 データベースでサポートする <xref:System.Activities.DurableInstancing.SqlWorkflowInstanceStore> 機能を構成するためのサービス動作。|
|[\<workflowIdle>](workflowidle.md)|アイドル状態のワークフロー インスタンスのアンロードおよび永続化のタイミングを制御するサービス動作。|
|[\<workflowInstanceManagement>](workflowinstancemanagement.md)|ワークフロー インスタンスの実行方法を制御する設定を指定するためのサービス動作。これには、永続する未処理の例外動作やアイドル状態の動作が含まれます。|
|[\<workflowUnhandledException>](workflowunhandledexception.md)|ワークフロー サービス内で未処理の例外が発生した場合のアクションを指定するためのサービス動作。|
### <a name="parent-elements"></a>親要素
|要素|説明|
|-------------|-----------------|
|[\<serviceBehaviors>](servicebehaviors-of-workflow.md)|サービス動作要素のコレクション。|
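For illustration, a minimal configuration that combines several of these child elements might look like the following (all names and values are placeholders, not recommendations):
```xml
<system.ServiceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MyWorkflowBehavior">
        <sqlWorkflowInstanceStore connectionStringName="WorkflowStore"
                                  instanceCompletionAction="DeleteAll" />
        <workflowIdle timeToPersist="00:01:00" timeToUnload="00:05:00" />
        <workflowUnhandledException action="AbandonAndSuspend" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.ServiceModel>
```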
| 48.783133 | 227 | 0.685107 | yue_Hant | 0.671225 |
58ddd5c07cd308144ee691aaaea4a8eb810255f6 | 506 | md | Markdown | README.md | iongion/podman-desktop-companion | be572969d801e30d4d28f5953b29d792ba130b28 | [
"MIT"
] | 309 | 2021-12-06T16:04:59.000Z | 2022-03-31T22:14:15.000Z | README.md | iongion/podman-desktop-companion | be572969d801e30d4d28f5953b29d792ba130b28 | [
"MIT"
] | 7 | 2021-12-10T14:03:25.000Z | 2022-03-28T14:20:45.000Z | README.md | iongion/podman-desktop-companion | be572969d801e30d4d28f5953b29d792ba130b28 | [
"MIT"
] | 12 | 2021-12-06T16:19:49.000Z | 2022-02-18T18:44:39.000Z | # Podman Desktop Companion
A familiar desktop graphical interface to the free and open container manager, [podman!](https://podman.io/)
Main goals
* Cross-platform desktop integrated application with consistent UI
* Learning tool for the powerful `podman` command line interface
## Podman is the driving engine

## Multiple engines supported, familiar ones too

| 29.764706 | 108 | 0.788538 | eng_Latn | 0.866214 |
58de6497f1c0f006485a13bfc401aa05ab8e3225 | 2,328 | md | Markdown | docs/ado/reference/ado-api/ado-code-examples-vbscript.md | L3onard80/sql-docs.it-it | f73e3d20b5b2f15f839ff784096254478c045bbb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/ado-code-examples-vbscript.md | L3onard80/sql-docs.it-it | f73e3d20b5b2f15f839ff784096254478c045bbb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/ado-code-examples-vbscript.md | L3onard80/sql-docs.it-it | f73e3d20b5b2f15f839ff784096254478c045bbb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ADO Code Examples in VBScript | Microsoft Docs
ms.prod: sql
ms.prod_service: connectivity
ms.technology: connectivity
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.topic: conceptual
dev_langs:
- VB
helpviewer_keywords:
- VBScript code examples [ADO], about VBScript code examples
- VBScript code examples [ADO]
ms.assetid: 78bb9a95-7ac4-44b6-818b-d1787f952ed7
author: MightyPen
ms.author: genemi
ms.openlocfilehash: bd4bef039b082d281f2426cc9c7695d8115fad05
ms.sourcegitcommit: b87d36c46b39af8b929ad94ec707dee8800950f5
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 02/08/2020
ms.locfileid: "67921170"
---
# <a name="ado-code-examples-vbscript"></a>Esempi di codice ADO VBScript
Usare gli esempi di codice seguenti per informazioni su come usare i metodi ADO durante la scrittura in Microsoft® Visual Basic® Scripting Edition (VBScript).
> [!NOTE]
> Incollare l'intero esempio di codice, dall'inizio alla fine, nell'editor di codice. L'esempio potrebbe non essere eseguito correttamente se vengono usati esempi parziali o se la formattazione del paragrafo va persa.
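Before diving into the individual topics, it may help to see the common shape these examples share. Here is a minimal, illustrative VBScript sketch (the connection string and table name are placeholders, not taken from the linked examples):
```vb
Dim conn, rs
' Open a connection, run a query, and iterate the recordset.
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI"

Set rs = conn.Execute("SELECT * FROM MyTable")
Do While Not rs.EOF
    WScript.Echo rs.Fields(0).Value
    rs.MoveNext
Loop

rs.Close
conn.Close
```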
## <a name="methods"></a>Metodi
- [Esempio di metodo AddNew](../../../ado/reference/ado-api/addnew-method-example-vbscript.md)
- [Esempio di metodo Clone](../../../ado/reference/ado-api/clone-method-example-vbscript.md)
- [Esempio di metodo Delete](../../../ado/reference/ado-api/delete-method-example-vbscript.md)
- [Esempio di metodi Execute, Requery e Clear](../../../ado/reference/ado-api/execute-requery-and-clear-methods-example-vbscript.md)
- [Esempio di metodo Move](../../../ado/reference/ado-api/move-method-example-vbscript.md)
- [Esempio di metodi MoveFirst, MoveLast, MoveNext e MovePrevious](../../../ado/reference/ado-api/movefirst-movelast-movenext-and-moveprevious-methods-example-vbscript.md)
- [Esempio di metodi Open e Close](../../../ado/reference/ado-api/open-and-close-methods-example-vbscript.md)
## <a name="see-also"></a>Vedere anche
[Esempi di codice ADO in Visual Basic](../../../ado/reference/ado-api/ado-code-examples-in-visual-basic.md)
[Esempi di codice ADO in Visual C++](../../../ado/reference/ado-api/ado-code-examples-in-visual-c.md)
[Appendice D: Esempi ADO](../../../ado/guide/appendixes/appendix-d-ado-samples.md)
| 45.647059 | 220 | 0.733677 | ita_Latn | 0.453793 |
58de9ddb92845074a7bb2b53e66fae6512a46d58 | 26 | md | Markdown | README.md | simonh10/microservices-dev-deploy | 222bc78d4cac12f6d4ef171a5f34f90522339fa8 | [
"Apache-2.0"
] | null | null | null | README.md | simonh10/microservices-dev-deploy | 222bc78d4cac12f6d4ef171a5f34f90522339fa8 | [
"Apache-2.0"
] | null | null | null | README.md | simonh10/microservices-dev-deploy | 222bc78d4cac12f6d4ef171a5f34f90522339fa8 | [
"Apache-2.0"
] | null | null | null | # microservices-dev-deploy | 26 | 26 | 0.846154 | eng_Latn | 0.754878 |
58dea7dd686fd8d3780c49dca835181622845880 | 37 | md | Markdown | README.md | goodlmk/YXTDownloader | 42ac3b32c95f349919f76952c1beb38b43ea3c64 | [
"Apache-2.0"
] | 1 | 2016-02-26T06:41:44.000Z | 2016-02-26T06:41:44.000Z | README.md | goodlmk/YXTDownloader | 42ac3b32c95f349919f76952c1beb38b43ea3c64 | [
"Apache-2.0"
] | null | null | null | README.md | goodlmk/YXTDownloader | 42ac3b32c95f349919f76952c1beb38b43ea3c64 | [
"Apache-2.0"
] | null | null | null | # YXTDownloader
### A queue-based asynchronous downloader with support for resumable downloads.
| 9.25 | 19 | 0.72973 | eng_Latn | 0.156743 |
58dec0da43e20764dfe2aa7fc3d0109269a67215 | 1,936 | md | Markdown | _pages/cv.md | AlirezaShamsoshoara/alirezashamsoshoara.github.io | 752c88505c0353a1f051f20a6a8aa30c22b35b8a | [
"MIT"
] | 1 | 2021-01-12T15:40:12.000Z | 2021-01-12T15:40:12.000Z | _pages/cv.md | AlirezaShamsoshoara/alirezashamsoshoara.github.io | 752c88505c0353a1f051f20a6a8aa30c22b35b8a | [
"MIT"
] | null | null | null | _pages/cv.md | AlirezaShamsoshoara/alirezashamsoshoara.github.io | 752c88505c0353a1f051f20a6a8aa30c22b35b8a | [
"MIT"
] | 6 | 2020-05-23T01:01:00.000Z | 2021-10-13T16:30:34.000Z | ---
layout: archive
title: ""
permalink: /cv/
author_profile: true
redirect_from:
- /resume
---
{% include base_path %}
[[Download My Latest Resume in PDF here]](http://AlirezaShamsoshoara.github.io/files/AlirezaResume2021CV.pdf)
<embed src="../files/AlirezaResume2021CV.pdf" width="570px" height="710px" />
<!---
Education
======
* Ph.D. in Informatics, Northern Arizona University, 2021 (Expected)
* M.Sc. in Informatics, Northern Arizona University, 2019
* M.Sc. in Electrical Engineering, K.N. Toosi Unveristy of Technology, 2015
* B.Sc. in Electrical Engineering, Shahid Beheshti University, 2012
Work experience
======
* Fall 2017 - Present: Research Assistant
  * WINIP LAB, Northern Arizona University, Flagstaff, Arizona
* Role: Conducting research on spectrum management for UAV networks using Machine Learning
* Supervisor: [Dr. Fatemeh Afghah](https://www.cefns.nau.edu/~fa334/index.html)
* Summer 2018: Internship
* [Next Biometrics](https://www.nextbiometrics.com), Seattle, Washington
* Role: Firmware Programmer
* Supervisor: Mr. Charles Horkin
* Fall 2018 - Present: Teaching Assistant
  * Northern Arizona University, Flagstaff, Arizona
* Courses: Microprocessors, Introduction to digital logic, Introduction to Electronics, Signals and Systems, Fundamental of Electromagnetics, and Fundamental of Computer Engineering
* Role: Lab Instructor
* Summer 2016 - Summer 2017: Engineer
* NAK World-Class Telecom Managed Service
* Role: Network Engineer, IP network Designer
Publications
======
<ul>{% for post in site.publications %}
{% include archive-single-cv.html %}
{% endfor %}</ul>
======
<ul>{% for post in site.talks %}
{% include archive-single-talk-cv.html %}
{% endfor %}</ul>
Teaching
======
<ul>{% for post in site.teaching %}
{% include archive-single-cv.html %}
{% endfor %}</ul>
Service
======
* Reviewer for 14 different Journals and Conferences.
--->
| 33.37931 | 189 | 0.716426 | eng_Latn | 0.303933 |
58dec58e2dc85e6a925b471f56f25faa053afc88 | 885 | md | Markdown | readme.md | Mousto097/BlogApp | 99ef4cb0c44ed321aa76b2b6a9787636f612bf69 | [
"MIT"
] | null | null | null | readme.md | Mousto097/BlogApp | 99ef4cb0c44ed321aa76b2b6a9787636f612bf69 | [
"MIT"
] | 3 | 2021-10-06T05:20:32.000Z | 2022-02-26T18:34:34.000Z | readme.md | Mousto097/Bloggit | 99ef4cb0c44ed321aa76b2b6a9787636f612bf69 | [
"MIT"
] | null | null | null | # Bloggit
A platform to share a person’s thoughts, feelings, opinions or experiences – an online journal or diary with a minimal following. Enjoy!


# What is Bloggit?
Bloggit is a website with a blog application. It also includes full authentication and file uploading.
# Contributors
- [Mamadou Bah](https://www.linkedin.com/in/mamadou-bah-9962a711b/)
- Full Stack Developer
# Tech Stack
- Laravel @^5.4.\*
- Bootstrap 4
- PostgreSQL
# Wiki
You can find more information about the project in my [Github Wiki](https://github.com/Mousto097/BlogApp/wiki).
# License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
# Contact
- [Personal LinkedIn](https://www.linkedin.com/in/mamadou-bah-9962a711b/)
| 26.029412 | 136 | 0.744633 | eng_Latn | 0.805963 |
58dedee07d5210be97211bedc2fe398454592da2 | 25,751 | md | Markdown | README.md | alw-iwu/aas-transformation-library | 50a1e01298541ce7f2bbcac7923a253ae8070e98 | [
"Apache-2.0"
] | null | null | null | README.md | alw-iwu/aas-transformation-library | 50a1e01298541ce7f2bbcac7923a253ae8070e98 | [
"Apache-2.0"
] | null | null | null | README.md | alw-iwu/aas-transformation-library | 50a1e01298541ce7f2bbcac7923a253ae8070e98 | [
"Apache-2.0"
] | null | null | null | # aas-transformation-library
[](https://app.travis-ci.com/SAP/aas-transformation-library)
[](https://api.reuse.software/info/github.com/SAP/aas-transformation-library)
Transform [AutomationML (AML)](https://www.automationml.org/) content into [Asset Administration Shell (AAS)](https://www.plattform-i40.de/PI40/Redaktion/DE/Downloads/Publikation/Details_of_the_Asset_Administration_Shell_Part_2_V1.html) format.
- [aas-transformation-library](#aas-transformation-library)
- [Documentation](#documentation)
- [Requirements](#requirements)
- [Local Usage](#local-usage)
- [Validation](#validation)
- [AML files](#aml-files)
- [AMLX files](#amlx-files)
- [Configuration](#configuration)
- [Asset](#asset)
- [Asset Information](#asset-information)
- [Asset Shell](#asset-shell)
- [Submodel](#submodel)
- [Submodel Elements](#submodel-elements)
- [Blob](#blob)
- [File](#file)
- [Entity](#entity)
- [MultiLanguageProperty](#multilanguageproperty)
- [Range](#range)
- [Capability](#capability)
- [BasicEvent](#basicevent)
- [Operation](#operation)
- [Property](#property)
- [ReferenceElement](#referenceelement)
- [RelationshipElement](#relationshipelement)
- [AnnotatedRelationshipElement](#annotatedrelationshipelement)
- [SubmodelElementCollection](#submodelelementcollection)
- [Known Issues & Limitations](#known-issues--limitations)
- [Upcoming Changes](#upcoming-changes)
- [Contributing](#contributing)
- [Code of Conduct](#code-of-conduct)
- [To Do](#to-do)
- [License](#license)
## Documentation
The library can be used in two ways. Users can:
1. import the library in Java applications to transform AML inputs into AAS objects.
2. use the library to build a [fat JAR](https://github.com/johnrengelman/shadow) to transform AML files into AAS JSON files locally.
In order to provide this functionality the library makes heavy use of [XPath](https://www.w3schools.com/xml/xpath_intro.asp) to parse the XML based object-oriented data modeling language. Currently, we support XPath version 1.0 (which is supported by dom4j).
Users have to provide two inputs:
1. One AML or AMLX file
2. One configuration ("config") file in JSON format, which details which parts of a given AutomationML input correspond to AAS fields (example configuration files can be found under [_src/test/resources/config_](src/test/resources/config))
Depending on its usage, the library will produce one of the following two outputs:
* One AAS JSON file containing a serialized _AssetAdministrationShellEnvironment_ object
* One _AssetAdministrationShellEnvironment_ object (leveraging [admin-shell-io/java-model](https://github.com/admin-shell-io/java-model))
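When the library is imported into an application, the flow is roughly: load the mapping config, run the transformer over the AML input, and then inspect or serialize the resulting environment. The sketch below illustrates that flow only; `MappingConfig`, `ConfigLoader`, and the exact `AmlTransformer.transform` signature are illustrative assumptions, not the library's verified API:
```java
import java.io.FileInputStream;
import java.io.InputStream;

// Environment type from admin-shell-io/java-model (package name assumed).
import io.adminshell.aas.v3.model.AssetAdministrationShellEnvironment;

public class TransformExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical helpers; check the library sources for the real entry points.
        MappingConfig config = ConfigLoader.load("src/test/resources/config/simpleConfig.json");
        try (InputStream aml = new FileInputStream("src/test/resources/aml/full_AutomationComponent.aml")) {
            AssetAdministrationShellEnvironment env = new AmlTransformer().transform(aml, config);
            System.out.println(env.getAssetAdministrationShells().size() + " shell(s) generated");
        }
    }
}
```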
#### Note:
An _IdGeneration_ function is used to create values for Reference objects.
In order for the function to be called correctly, the XPath of the config file must be registered in the node graph, which happens in the _prepareGraph()_ method of the IdGenerator.java class.
## Requirements
We rely on [SapMachine 11](https://sap.github.io/SapMachine/) and use [Gradle](https://gradle.org/).
## Local Usage
```sh
$ ./gradlew build
$ java -jar build/distributions/aas-transformation-library-shadow-0.0.1-SNAPSHOT.jar
usage: transform -a <AML_INPUT_FILE> | -amlx <AMLX_INPUT_FILE> | -p -c
<CONFIG_FILE> [-P <PLACEHOLDER_VALUES_JSON>]
Transform AutomationML file into an AAS structured file
-a,--aml <AML_INPUT_FILE> AML input file
-amlx,--amlx <AMLX_INPUT_FILE> AMLX input file
-c,--config <CONFIG_FILE> Mapping config file
-P,--placeholder-values <PLACEHOLDER_VALUES_JSON> Map of placeholder
values in JSON format
-p,--print-placeholders Print placeholders
with description
Missing required options: c, [-a AML input file, -amlx AMLX input file, -p
Print placeholders with description]
$ java -jar ./build/distributions/aas-transformation-library-shadow-0.0.1-SNAPSHOT.jar -c src/test/resources/config/simpleConfig.json -a src/test/resources/aml/full_AutomationComponent.aml
[main] INFO com.sap.dsc.aas.lib.aml.ConsoleApplication - Loaded config version 1.0.0, aas version 2.0.1
[main] INFO com.sap.dsc.aas.lib.aml.transform.AmlTransformer - Loaded config version 1.0.0, AAS version 2.0.1
[main] INFO com.sap.dsc.aas.lib.aml.transform.AssetAdministrationShellEnvTransformer - Transforming 1 config assets...
[main] INFO com.sap.dsc.aas.lib.aml.ConsoleApplication - Wrote AAS file to full_AutomationComponent.json
$ cd src/test/resources/amlx/minimal_AutomationMLComponent_WithDocuments
$ zip -r minimal_AutomationMLComponent_WithDocuments.amlx . -x "*.DS_Store"
adding: [Content_Types].xml (deflated 52%)
adding: _rels/ (stored 0%)
adding: _rels/.rels (deflated 68%)
adding: lib/ (stored 0%)
adding: lib/AutomationComponentLibrary_v1_0_0_Full_CAEX3_BETA.aml (deflated 85%)
adding: files/ (stored 0%)
adding: files/TestPDFDeviceManual.pdf (deflated 14%)
adding: files/TestTXTDeviceManual.txt (stored 0%)
adding: files/TestTXTWarranty.txt (stored 0%)
adding: CAEX_ClassModel_V.3.0.xsd (deflated 90%)
adding: minimal_AutomationMLComponent_WithDocuments.aml (deflated 80%)
$ cd ../../../../../
$ java -jar ./build/distributions/aas-transformation-library-shadow-0.0.1-SNAPSHOT.jar -c src/test/resources/config/simpleConfig.json -amlx src/test/resources/amlx/minimal_AutomationMLComponent_WithDocuments/minimal_AutomationMLComponent_WithDocuments.amlx
[main] INFO com.sap.dsc.aas.lib.aml.ConsoleApplication - Loaded config version 1.0.0, aas version 2.0.1
[main] INFO com.sap.dsc.aas.lib.aml.transform.AmlTransformer - Loaded config version 1.0.0, AAS version 2.0.1
[main] INFO com.sap.dsc.aas.lib.aml.transform.AssetAdministrationShellEnvTransformer - Transforming 1 config assets...
[main] INFO com.sap.dsc.aas.lib.aml.ConsoleApplication - Wrote AAS file to minimal_AutomationMLComponent_WithDocuments.json
Writing to: minimal_AutomationMLComponent_WithDocuments/files/TestTXTDeviceManual.txt
Writing to: minimal_AutomationMLComponent_WithDocuments/files/TestPDFDeviceManual.pdf
Writing to: minimal_AutomationMLComponent_WithDocuments/files/TestTXTWarranty.txt
```
## Validation
### AML files
AML file validation includes the following steps (cf. [_AmlValidator.java_](https://github.com/alw-iwu/aas-transformation-library/blob/main/src/main/java/com/sap/dsc/aas/lib/aml/transform/validation/AmlValidator.java)):
- Check that the AML file is a valid XML file
- Check that the AML file is valid according to the [CAEX 3.0 class model](https://github.com/alw-iwu/aas-transformation-library/blob/main/src/main/resources/aml/CAEX_ClassModel_V.3.0.xsd)
### AMLX files
AMLX file validation includes the following steps (cf. [_AmlxValidator.java_](https://github.com/alw-iwu/aas-transformation-library/blob/main/src/main/java/com/sap/dsc/aas/lib/aml/amlx/AmlxValidator.java)):
- Check whether each document defined in */_rels/.rels* exists
- Check whether each file in the AMLX file (a ZIP archive) is defined in */_rels/.rels*
- Check that the root document exists
- Check that there is exactly one root document
- Check that the root document is a valid AML file
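Because an AMLX file is an OPC container (a ZIP archive), the first two checks amount to comparing the archive's entries against the parts declared in */_rels/.rels*. The following is a self-contained sketch of that comparison, independent of the library's own `AmlxValidator` (the regex-based parsing is deliberately naive):
```java
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class AmlxEntryCheck {
    public static void main(String[] args) throws Exception {
        Set<String> declared = new HashSet<>();
        Set<String> present = new HashSet<>();
        try (ZipFile zip = new ZipFile("component.amlx")) {
            // Collect Target="..." attributes from /_rels/.rels (a naive regex is fine for a sketch).
            String rels = new String(zip.getInputStream(zip.getEntry("_rels/.rels")).readAllBytes());
            Matcher m = Pattern.compile("Target=\"/?([^\"]+)\"").matcher(rels);
            while (m.find()) declared.add(m.group(1));
            for (Enumeration<? extends ZipEntry> e = zip.entries(); e.hasMoreElements();) {
                ZipEntry entry = e.nextElement();
                if (!entry.isDirectory()) present.add(entry.getName());
            }
        }
        // Relationship parts and the content types part are not themselves declared in .rels.
        present.removeIf(n -> n.startsWith("_rels/") || n.equals("[Content_Types].xml"));
        Set<String> missing = new HashSet<>(declared);
        missing.removeAll(present);
        Set<String> undeclared = new HashSet<>(present);
        undeclared.removeAll(declared);
        System.out.println("Declared but missing: " + missing);
        System.out.println("Present but undeclared: " + undeclared);
    }
}
```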
## Configuration
The configuration file describes how the AAS file should be generated.
See [this config file](src/test/resources/config/simpleConfig.json) for an example.
### Asset Administration Shell Environment
An AAS Asset Administration Shell Environment is generated using the following configuration:
```javascript
{
"version": "1.0.0",
"aasVersion": "3.0RC01",
"configMappings": [
{
"from_xpath": "",
"idGeneration": { ... },
"assetInformation": { ... },
"submodels": [ ... ],
"assetShell": { ... }
}
]
}
```
### Asset
An AAS Asset is generated using the following configuration:
```javascript
{
"idGeneration":{
"parameters":[
{
"from_string":"shellId"
}
]
},
// XPath expression for finding the idShort
// If this attribute is omitted, the xpath "@Name" is used by default
"idShort_xpath":"@Name"
}
```
### Asset Information
AAS AssetInformation is generated using the following configuration:
```javascript
{
// XPath expression for the type of the asset ("Type" or "Instance")
// XPath allows hardcoded values by using single quotes
// If this attribute is omitted, the xpath "'Type'" is used by default
"kindType_xpath":"TYPE",
"idGeneration":{
"parameters":[
{
"from_xpath":"caex:Attribute[@Name='IdentificationData']/caex:Attribute[@Name='Manufacturer']/caex:Value",
"from_string":"DefaultManufacturer"
},
{
"from_string":"/"
},
{
"from_xpath":"caex:Attribute[@Name='IdentificationData']/caex:Attribute[@Name='ManufacturerUri']/caex:Value"
}
],
"finalizeFunction":"concatenate_and_hash",
"idGenerationName":"assetIdGeneration"
},
"globalAssetIdReference":{
"valueId":"assetInformationGlobalAssetIdReference",
"keyType":"CUSTOM",
"keyElement":"ASSET"
}
}
```
### Asset Shell
An AAS AssetAdministrationShell is generated using the following configuration:
```javascript
{
"id": "assetId",
"idShort_xpath": "@Name"
}
```
### Submodel
An AAS Submodel is generated using the following configuration:
```javascript
{
// XPath expression to be used to navigate to the root XML node(s) for the submodel.
// One AAS submodel will be generated for each matching node.
"from_xpath": "caex:Attribute[@Name='IdentificationData']"
// This attribute is "syntax sugar", in other words it is
// designed to make the config file less verbose.
// This example corresponds to the XPath above.
// NOTE: Pass either this attribute OR "from_xpath"
"from_attributeName": "IdentificationData".
// Pass a string representing the semantic ID (not an XPath)
"semanticId": "http://sap.com",
"id": "submodelId",
"isShort_xpath": "@Name",
// See "Submodel Element"
"submodelElements": [{...}, {...}]
}
```
### Submodel Elements
AAS SubmodelElements are generated using type-specific configuration. This examples yields in an AAS Property:
```javascript
{
// For the following two attributes see "Submodel".
// One submodel element is generated for each matching XML node.
"from_xpath": "caex:Attribute[@Name='Manufacturer']",
"from_attributeName": "Manufacturer",
"semanticId": "http://sap.com/submodelElement",
// Define the value type of the submodel element
// Examples: "string" or "float"
"valueType": "string",
// Define the model type of the submodel element
// Example: "Property" or "Collection"
"modelType": "Property"
// This attribute is "syntax sugar", in other words it is
// designed to make the config file less verbose.
// This example corresponds to the combination of valueType and
// modelType above.
// Example: "string" or "collection"
"elementType": "string"
// XPath expression for finding the value of the submodel element
"valueXPath": "caex:Value",
// Hardcoded value to use if the value returned by XPath is null
"valueDefault": "MyDefaultValue"
// If this submodel element is a "collection":
// Pass an array of submodel elements as children
"submodelElements": [{...}, {...}]
}
```
Here are examples of additional mapping configurations for different SubmodelElements:
#### Blob
```javascript
{
"from_xpath": "caex:InternalElement[@Name='ManualsBIN']/caex:InternalElement[@Name='BetriebsanleitungBIN']/caex:ExternalInterface[@Name='ExternalDataReference']",
"idShort_xpath": "'BetriebsanleitungBIN'",
"@type": "Blob",
// type-specific submodel element attribute paths are listed explicitly
"valueXPath": "caex:Attribute[@Name='refURI']/caex:Value",
"mimeTypeXPath": "caex:Attribute[@Name='MIMEType']/caex:Value"
}
```
The matching AML:
```xml
<InternalElement Name="BetriebsanleitungBIN">
<ExternalInterface Name="ExternalDataReference"
RefBaseClassPath="AutomationMLBPRInterfaceClassLib/ExternalDataReference">
<Attribute Name="MIMEType" AttributeDataType="xs:string">
<DefaultValue></DefaultValue>
<Value>application/pdf</Value>
</Attribute>
<Attribute Name="refURI" AttributeDataType="xs:anyURI"
RefAttributeType="AutomationMLBaseAttributeTypeLib/refURI">
<Value>foo</Value>
</Attribute>
</ExternalInterface>
<SupportedRoleClass
RefRoleClassPath="AutomationMLBPRRoleClassLib/ExternalData"/>
</InternalElement>
```
#### File
```javascript
{
"from_xpath": "caex:InternalElement[@Name='Manuals']/caex:InternalElement[@Name='Betriebsanleitung']/caex:ExternalInterface[@Name='ExternalDataReference']",
"idShort_xpath": "'Betriebsanleitung'",
"@type": "File",
// type-specific submodel element attribute paths are listed explicitly
"valueXPath": "caex:Attribute[@Name='refURI']/caex:Value",
"mimeTypeXPath": "caex:Attribute[@Name='MIMEType']/caex:Value"
}
```
The matching AML:
```xml
<InternalElement Name="Manuals">
<InternalElement Name="Betriebsanleitung">
<ExternalInterface Name="ExternalDataReference"
RefBaseClassPath="AutomationMLBPRInterfaceClassLib/ExternalDataReference">
<Attribute Name="MIMEType" AttributeDataType="xs:string">
<DefaultValue></DefaultValue>
<Value>application/pdf</Value>
</Attribute>
<Attribute Name="refURI" AttributeDataType="xs:anyURI"
RefAttributeType="AutomationMLBaseAttributeTypeLib/refURI">
<Value>manual/OI_wtt12l_en_de_fr_it_pt_es_zh_.pdf</Value>
</Attribute>
</ExternalInterface>
<SupportedRoleClass
RefRoleClassPath="AutomationMLBPRRoleClassLib/ExternalData"/>
</InternalElement>
<RoleRequirements
RefBaseRoleClassPath="AutomationMLBaseRoleClassLib/AutomationMLBaseRole"/>
</InternalElement>
```
#### Entity
```javascript
{
"from_attributeName": "PackagingAndTransportation",
"@type": "Entity",
"semanticId": "PackAndTransport",
// type-specific submodel element attribute paths are listed explicitly
"entityType": "SelfManagedEntity",
"assetReference": {
"valueId": "AssetIdExtern",
"local": "false"
},
"localAssetReference": "false",
"statements": [
{
"from_attributeName": "GTIN",
"@type": "Property",
"valueType": "string"
}
]
}
```
The matching AML:
```xml
<Attribute Name="CommercialData">
<Attribute Name="PackagingAndTransportation">
<Attribute Name="GTIN">
<Value>TestGlobalTradeItemNumber1234</Value>
</Attribute>
<Attribute Name="CustomsTariffNumber">
<Value>1234</Value>
</Attribute>
<Attribute Name="CountryOfOrigin">
<Value>TestCountryOfOriginDE</Value>
</Attribute>
</Attribute>
</Attribute>
```
#### MultiLanguageProperty
```javascript
{
"from_attributeName": "Material",
"@type": "MultiLanguageProperty"
}
```
The matching AML:
```xml
<Attribute Name="Material">
<Value>Test Material</Value>
<Attribute Name="aml-lang=en_US">
<Description>This is the value name in english</Description>
<Value>English Test Material</Value>
</Attribute>
<Attribute Name="aml-lang=de_DE">
<Description>Dies ist der Name in Deutsch</Description>
<Value>Test Material Deutsch</Value>
</Attribute>
</Attribute>
```
#### Range
```javascript
{
"from_attributeName": "AmbientTemperature",
"@type": "Range",
"semanticId": "AmbientTemperature",
"valueType": "Integer",
// type-specific submodel element attribute paths are listed explicitly
"minValueXPath": "caex:Attribute[@Name='TemperatureMin']/caex:Value",
"maxValueXPath": "caex:Attribute[@Name='TemperatureMax']/caex:Value"
}
```
The matching AML:
```xml
<Attribute Name="GeneralTechnicalData">
<Attribute Name="AmbientTemperature">
<Attribute Name="TemperatureMin">
<Value>-273</Value>
</Attribute>
<Attribute Name="TemperatureMax">
<Value>100</Value>
</Attribute>
</Attribute>
</Attribute>
```
#### Capability
```javascript
{
"from_xpath": "caex:Attribute[@Name='ManufacturerURI']",
"idShort_xpath": "'Browseable'",
"@type": "Capability"
}
```
The matching AML:
```xml
<Attribute Name="IdentificationData">
<Attribute Name="ManufacturerURI">
<Value>http://www.example.com/manufacturerURI</Value>
</Attribute>
</Attribute>
```
#### BasicEvent
```javascript
{
"from_xpath": "caex:InternalElement[@Name='ManualsBasicEvent']/caex:InternalElement[@Name='SampleBasicEvent']/caex:ExternalInterface[@Name='ExternalDataReference']",
"idShort_xpath": "'SampleBasicEvent'",
"@type": "BasicEvent",
// reference to another submodel element that is a Refereable
"observed": {
"valueIdGeneration": {
"parameters": [
{
"from_xpath": "caex:Attribute[@Name='refURI']/caex:Value"
}
]
},
"keyElement": "Blob",
"local": "false"
}
}
```
The matching AML:
```xml
<InternalElement Name="ManualsBasicEvent">
<InternalElement Name="SampleBasicEvent">
<ExternalInterface Name="ExternalDataReference" RefBaseClassPath="AutomationMLBPRInterfaceClassLib/ExternalDataReference">
<Attribute Name="refURI" AttributeDataType="xs:anyURI" RefAttributeType="AutomationMLBaseAttributeTypeLib/refURI">
<Value>Blob</Value>
</Attribute>
</ExternalInterface>
<SupportedRoleClass
RefRoleClassPath="AutomationMLBPRRoleClassLib/ExternalData"/>
</InternalElement>
<RoleRequirements
RefBaseRoleClassPath="AutomationMLBaseRoleClassLib/AutomationMLBaseRole"/>
</InternalElement>
```
#### Operation
```javascript
{
"from_attributeName": "OperationA",
"semanticId": "ops",
"@type": "Operation",
// type-specific submodel element attribute paths are listed explicitly
"inputVariables": [
{
"from_xpath": "caex:Attribute[@Name='Inputs']/caex:Attribute",
"@type": "Property",
"valueType": "string"
}
],
"outputVariables": [
{
"from_xpath": "caex:Attribute[@Name='Outputs']/caex:Attribute",
"@type": "Property",
"valueType": "string"
}
],
"inOutputVariables": [
{
"from_xpath": "caex:Attribute[@Name='InOut']/caex:Attribute",
"@type": "Property",
"valueType": "string"
}
]
}
```
The matching AML:
```xml
<Attribute Name="OperationA">
<Attribute Name="Inputs">
<Attribute Name="PinA">
<Value>10</Value>
</Attribute>
<Attribute Name="PinB">
<Value>15</Value>
</Attribute>
<Attribute Name="PinC">
<Value>30</Value>
</Attribute>
</Attribute>
<Attribute Name="Outputs">
<Attribute Name="PinD"/>
<Attribute Name="PinE">
<Value>15</Value>
</Attribute>
</Attribute>
<Attribute Name="InOut">
<Attribute Name="PinF">
<Value>10</Value>
</Attribute>
</Attribute>
</Attribute>
```
#### Property
```javascript
{
"from_xpath": "caex:Attribute[@Name='InOut']/caex:Attribute",
"@type": "Property",
"valueType": "string"
}
```
The matching AML:
```xml
<Attribute Name="InOut">
<Attribute Name="PinF">
<Value>10</Value>
</Attribute>
</Attribute>
```
#### ReferenceElement
```javascript
{
"from_xpath": "caex:InternalElement[@Name='STEPGeometry']/caex:ExternalInterface[@Name='ExternalDataReference']",
"idShort_xpath": "'STEPGeometry'",
"@type": "ReferenceElement",
// type-specific submodel element attribute paths are listed explicitly
"value": {
"valueIdGeneration": {
"parameters": [
{
"from_xpath": "caex:Attribute[@Name='refURI']/caex:Value"
}
]
},
"keyElement": "ConceptDescription",
"local": "false"
}
}
```
The matching AML:
```xml
<InternalElement Name="STEPGeometry">
<ExternalInterface Name="ExternalDataReference"
RefBaseClassPath="AutomationMLBPRInterfaceClassLib/ExternalDataReference">
<Attribute Name="MIMEType" AttributeDataType="xs:string">
<Value>application/STEP</Value>
</Attribute>
<Attribute Name="refURI" AttributeDataType="xs:anyURI"
RefAttributeType="AutomationMLBaseAttributeTypeLib/refURI">
<Value>MGFTT2-DFC-C_20200519_134129_7UszRjlaUUSGJwr_3pyZ3g</Value>
</Attribute>
</ExternalInterface>
<SupportedRoleClass RefRoleClassPath="AutomationMLComponentBaseRCL/GeometryModel"/>
</InternalElement>
```
#### RelationshipElement
```javascript
{
"from_attributeName": "OperationA",
"semanticId": "test",
"@type": "RelationshipElement",
"idShort_xpath": "'relElement'",
// type-specific submodel element attribute paths are listed explicitly
"first": {
"valueId": "FOO",
"keyElement": "ConceptDescription",
"local": "false"
},
"second": {
"valueId": "BAR",
"keyElement": "ConceptDescription",
"local": "true"
}
}
```
The matching AML:
```xml
```
#### AnnotatedRelationshipElement
```javascript
{
"from_attributeName": "OperationA",
"semanticId": "test/test/test",
"@type": "AnnotatedRelationshipElement",
"idShort_xpath": "'annoRelElement'",
// type-specific submodel element attribute paths are listed explicitly
"first": {
"valueId": "abc",
"keyElement": "ConceptDescription",
"local": "false"
},
"second": {
"valueId": "def",
"keyElement": "ConceptDescription",
"local": "true"
},
"annotations": [
{
"from_xpath": "caex:Attribute[@Name='Inputs']/caex:Attribute",
"@type": "Property",
"valueType": "string"
}
]
}
```
The matching AML:
```xml
<Attribute Name="OperationA">
<Attribute Name="Inputs">
<Attribute Name="PinA">
<Value>10</Value>
</Attribute>
<Attribute Name="PinB">
<Value>15</Value>
</Attribute>
<Attribute Name="PinC">
<Value>30</Value>
</Attribute>
</Attribute>
<Attribute Name="Outputs">
<Attribute Name="PinD"/>
<Attribute Name="PinE">
<Value>15</Value>
</Attribute>
</Attribute>
<Attribute Name="InOut">
<Attribute Name="PinF">
<Value>10</Value>
</Attribute>
</Attribute>
</Attribute>
```
#### SubmodelElementCollection
```javascript
{
"from_attributeName": "ProductPriceDetails",
"@type": "SubmodelElementCollection",
"semanticId": "SubmodelElementCollectionSemanticId",
// an array of 0..* contained submodel elements
"submodelElements": [
{
"from_attributeName": "ValidStartDate",
"@type": "Property",
"valueType": "string"
},
{
"from_xpath": "caex:Attribute[@Name='ProductPrice']/caex:Attribute[@Name='PriceAmount']",
"@type": "Property",
"valueType": "string"
}
]
}
```
The matching AML:
```xml
<Attribute Name="ProductPriceDetails">
<Attribute Name="ValidStartDate">
<Value>2020-01-01</Value>
</Attribute>
<Attribute Name="ProductPrice">
<Attribute Name="PriceAmount">
<Value>TestPriceAmount</Value>
</Attribute>
</Attribute>
</Attribute>
```
## Known Issues & Limitations
Please see the [Issues](https://github.com/SAP/aas-transformation-library/issues) list.
## Upcoming Changes
Please refer to the GitHub issue board. For upcoming features, check the "enhancement" label.
## Contributing
You are welcome to join forces with us in the quest to contribute to the Asset Administration Shell community! Simply check our [Contribution Guidelines](CONTRIBUTING.md).
## Code of Conduct
Everyone participating in this joint project is welcome as long as our [Code of Conduct](CODE_OF_CONDUCT.md) is being adhered to.
## To Do
Many improvements are coming! All tasks will be posted to our GitHub issue tracking system. As mentioned, some of the improvements will mean breaking changes. While we strive to avoid doing so, we cannot guarantee this will not happen before the first release.
## License
Copyright 2021 SAP SE or an SAP affiliate company and aas-transformation-library contributors. Please see our [LICENSE](https://github.com/alw-iwu/aas-transformation-library/blob/main/LICENSES/Apache-2.0.txt) for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available via the [REUSE tool](https://api.reuse.software/info/github.com/SAP/aas-transformation-library). | 34.243351 | 453 | 0.669217 | yue_Hant | 0.276363 |
58df00392f91e4b4cb4b7eec76002926a0eb1810 | 17,070 | md | Markdown | reference/5.1/PSScheduledJob/New-JobTrigger.md | MSAdministrator/PowerShell-Docs | d2b1b89a7a5cba91261190a1665e609c9265ccaa | [
"CC-BY-4.0",
"MIT"
] | 2 | 2018-07-16T17:21:34.000Z | 2021-07-16T09:32:03.000Z | reference/5.1/PSScheduledJob/New-JobTrigger.md | MSAdministrator/PowerShell-Docs | d2b1b89a7a5cba91261190a1665e609c9265ccaa | [
"CC-BY-4.0",
"MIT"
] | null | null | null | reference/5.1/PSScheduledJob/New-JobTrigger.md | MSAdministrator/PowerShell-Docs | d2b1b89a7a5cba91261190a1665e609c9265ccaa | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-08-29T15:56:09.000Z | 2018-08-29T15:56:09.000Z | ---
ms.date: 2017-06-09
schema: 2.0.0
locale: en-us
keywords: powershell,cmdlet
online version: http://go.microsoft.com/fwlink/?LinkId=821688
external help file: Microsoft.PowerShell.ScheduledJob.dll-Help.xml
title: New-JobTrigger
---
# New-JobTrigger
## SYNOPSIS
Creates a job trigger for a scheduled job.
## SYNTAX
### Once (Default)
```
New-JobTrigger [-RandomDelay <TimeSpan>] -At <DateTime> [-Once] [-RepetitionInterval <TimeSpan>]
[-RepetitionDuration <TimeSpan>] [-RepeatIndefinitely] [<CommonParameters>]
```
### Daily
```
New-JobTrigger [-DaysInterval <Int32>] [-RandomDelay <TimeSpan>] -At <DateTime> [-Daily] [<CommonParameters>]
```
### Weekly
```
New-JobTrigger [-WeeksInterval <Int32>] [-RandomDelay <TimeSpan>] -At <DateTime> -DaysOfWeek <DayOfWeek[]>
[-Weekly] [<CommonParameters>]
```
### AtStartup
```
New-JobTrigger [-RandomDelay <TimeSpan>] [-AtStartup] [<CommonParameters>]
```
### AtLogon
```
New-JobTrigger [-RandomDelay <TimeSpan>] [-User <String>] [-AtLogOn] [<CommonParameters>]
```
## DESCRIPTION
The **New-JobTrigger** cmdlet creates a job trigger that starts a scheduled job on a one-time or recurring schedule, or when an event occurs.
You can use the **ScheduledJobTrigger** object that **New-JobTrigger** returns to set a job trigger for a new or existing scheduled job.
You can also create a job trigger by using the Get-JobTrigger cmdlet to get the job trigger of an existing scheduled job, or by using a hash table value to represent a job trigger.
When creating a job trigger, review the default values of the options specified by the New-ScheduledJobOption cmdlet.
These options, which have the same valid and default values as the corresponding options in **Task Scheduler**, affect the scheduling and timing of scheduled jobs.
**New-JobTrigger** is one of a collection of job scheduling cmdlets in the PSScheduledJob module that is included in Windows PowerShell.
For more information about Scheduled Jobs, see the About topics in the PSScheduledJob module.
Import the PSScheduledJob module and then type: `Get-Help about_Scheduled*` or see about_Scheduled_Jobs.
This cmdlet was introduced in Windows PowerShell 3.0.
## EXAMPLES
### Example 1: Once Schedule
```
PS C:\> New-JobTrigger -Once -At "1/20/2012 3:00 AM"
```
This command uses the **New-JobTrigger** cmdlet to create a job trigger that starts a scheduled job only one time.
The value of the *At* parameter is a string that Windows PowerShell converts into a **DateTime** object.
The *At* parameter value includes an explicit date, not just a time.
If the date were omitted, the trigger would be created with the current date and 3:00 AM time, which is likely to represent a time in the past.
### Example 2: Daily Schedule
```
PS C:\> New-JobTrigger -Daily -At "4:15 AM" -DaysInterval 3
Id Frequency Time DaysOfWeek Enabled
-- --------- ---- ---------- -------
0 Daily 9/21/2012 4:15:00 AM True
```
This command creates a job trigger that starts a scheduled job every 3 days at 4:15 a.m.
Because the value of the *At* parameter does not include a date, the current date is used as the date value in the **DateTime** object.
If the date and time is in the past, the scheduled job is started at the next occurrence, which is 3 days later from the *At* parameter value.
### Example 3: Weekly Schedule
```
PS C:\> New-JobTrigger -Weekly -DaysOfWeek Monday, Wednesday, Friday -At "23:00" -WeeksInterval 4
Id Frequency Time DaysOfWeek Enabled
-- --------- ---- ---------- -------
0 Weekly 9/21/2012 11:00:00 PM {Monday, Wednesday, Friday} True
```
This command creates a job trigger that starts a scheduled job every 4 weeks on Monday, Wednesday, and Friday at 2300 hours (11:00 PM).
You can also enter the *DaysOfWeek* parameter value in integers, such as `-DaysOfWeek 1, 5`.
### Example 4: Logon Schedule
```
PS C:\> New-JobTrigger -AtLogOn -User Domain01\Admin01
```
This command creates a job trigger that starts a scheduled job whenever the domain administrator logs onto the computer.
### Example 5: Using a Random Delay
```
PS C:\> New-JobTrigger -Daily -At 1:00 -RandomDelay 00:20:00
```
This command creates a job trigger that starts a scheduled job every day at 1:00 in the morning.
The command uses the *RandomDelay* parameter to set the maximum delay to 20 minutes.
As a result, the job runs every day between 1:00 AM and 1:20 AM, with the interval varying pseudo-randomly.
You can use a random delay for sampling, load balancing, and other administrative tasks.
When setting the delay value, review the effective and default values of the New-ScheduledJobOption cmdlet and coordinate the delay with the option settings.
### Example 6: Create a Job Trigger for a New Scheduled Job
```
The first command uses the **New-JobTrigger** cmdlet to create a job trigger that starts a job every Monday, Wednesday, and Friday at 12:01 AM. The command saves the job trigger in the $T variable.
PS C:\> $T = New-JobTrigger -Weekly -DaysOfWeek 1,3,5 -At 12:01AM
The second command uses the Register-ScheduledJob cmdlet to create a scheduled job that starts a job every Monday, Wednesday, and Friday at 12:01 AM. The value of the *Trigger* parameter is the trigger that is stored in the $T variable.
PS C:\> Register-ScheduledJob -Name Test-HelpFiles -FilePath C:\Scripts\Test-HelpFiles.ps1 -Trigger $T
```
These commands use a job trigger to create a new scheduled job.
### Example 7: Add a Job Trigger to a Scheduled Job
```
PS C:\> Add-JobTrigger -Name SynchronizeApps -Trigger (New-JobTrigger -Daily -At 3:10AM)
```
This example shows how to add a job trigger to an existing scheduled job.
You can add multiple job triggers to any scheduled job.
The command uses the Add-JobTrigger cmdlet to add the job trigger to the SynchronizeApps scheduled job.
The value of the *Trigger* parameter is a **New-JobTrigger** command that runs the job every day at 3:10 AM.
When the command completes, SynchronizeApps is a scheduled job that runs at the times specified by the job trigger.
### Example 8: Create a repeating job trigger
```
PS C:\> New-JobTrigger -Once -At "09/12/2013 1:00:00" -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-Timespan -Hours 48)
```
This command creates a job trigger that runs a job every 60 minutes for 48 hours beginning on September 12, 2013 at 1:00 AM.
### Example 9: Stop a repeating job trigger
```
PS C:\> Get-JobTrigger -Name SecurityCheck | Set-JobTrigger -RepetitionInterval 0:00 -RepetitionDuration 0:00
```
This command forcibly stops the SecurityCheck job, which is triggered to run every 60 minutes until its job trigger expires.
To prevent the job from repeating, the command uses the Get-JobTrigger to get the job trigger of the SecurityCheck job and the Set-JobTrigger cmdlet to change the repetition interval and repetition duration of the job trigger to zero (0).
### Example 10: Create an hourly job trigger
```
PS C:\> New-JobTrigger -Once -At "9/21/2012 0am" -RepetitionInterval (New-TimeSpan -Hour 12) -RepetitionDuration ([TimeSpan]::MaxValue)
```
The following command creates a job trigger that runs a scheduled job once every 12 hours for an indefinite period of time.
The schedule begins tomorrow (9/21/2012) at midnight (0:00 AM).
## PARAMETERS
### -At
Starts the job at the specified date and time.
Enter a **DateTime** object, such as one that the Get-Date cmdlet returns, or a string that can be converted to a date and time, such as "April 19, 2012 15:00", "12/31", or "3am".
If you don't specify an element of the date, such as the year, the date in the trigger has the corresponding element from the current date.
When using the *Once* parameter, set the value of the *At* parameter to a future date and time.
Because the default date in a **DateTime** object is the current date, if you specify a time before the current time without an explicit date, the job trigger is created for a time in the past.
**DateTime** objects, and strings that are converted to **DateTime** objects, are automatically adjusted to be compatible with the date and time formats selected for the local computer in Region and Language in Control Panel.
```yaml
Type: DateTime
Parameter Sets: Once, Daily, Weekly
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -AtLogOn
Starts the scheduled job when the specified users log on to the computer.
To specify a user, use the *User* parameter.
```yaml
Type: SwitchParameter
Parameter Sets: AtLogon
Aliases:
Required: True
Position: 0
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -AtStartup
Starts the scheduled job when Windows starts.
```yaml
Type: SwitchParameter
Parameter Sets: AtStartup
Aliases:
Required: True
Position: 0
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Daily
Specifies a recurring daily job schedule.
Use the other parameters in the *Daily* parameter set to specify the schedule details.
```yaml
Type: SwitchParameter
Parameter Sets: Daily
Aliases:
Required: True
Position: 0
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -DaysInterval
Specifies the number of days between occurrences on a daily schedule.
For example, a value of 3 starts the scheduled job on days 1, 4, 7 and so on.
The default value is 1.
```yaml
Type: Int32
Parameter Sets: Daily
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -DaysOfWeek
Specifies the days of the week on which a weekly scheduled job runs.
Enter day names, such as "Monday" or integers 0-6, where 0 represents Sunday.
This parameter is required in the Weekly parameter set.
Day names are converted to their integer values in the job trigger.
When you enclose day names in quotation marks in a command, enclose each day name in separate quotation marks, such as "Monday", "Tuesday".
If you enclose multiple day names in a single quotation mark pair, the corresponding integer values are summed.
For example, "Monday, Tuesday" (1, 2) results in a value of "Wednesday" (3).
```yaml
Type: DayOfWeek[]
Parameter Sets: Weekly
Aliases:
Accepted values: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Once
Specifies a non-recurring (one time) or custom repeating schedule.
To create a repeating schedule, use the *Once* parameter with the *RepetitionDuration* and *RepetitionInterval* parameters.
```yaml
Type: SwitchParameter
Parameter Sets: Once
Aliases:
Required: True
Position: 0
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -RandomDelay
Enables a random delay that begins at the scheduled start time, and sets the maximum delay value.
The length of the delay is set pseudo-randomly for each start and varies from no delay to the time specified by the value of this parameter.
The default value, zero (00:00:00), disables the random delay.
Enter a timespan object, such as one returned by the New-TimeSpan cmdlet, or enter a value in \<hours\>:\<minutes\>:\<seconds\> format, which is automatically converted to a **TimeSpan** object.
```yaml
Type: TimeSpan
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -RepeatIndefinitely
This parameter, available starting in Windows PowerShell 4.0, eliminates the necessity of specifying a **TimeSpan.MaxValue** value for the *RepetitionDuration* parameter to run a scheduled job repeatedly, for an indefinite period.
```yaml
Type: SwitchParameter
Parameter Sets: Once
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -RepetitionDuration
Repeats the job until the specified time expires.
The repetition frequency is determined by the value of the *RepetitionInterval* parameter.
For example, if the value of **RepetitionInterval** is 5 minutes and the value of **RepetitionDuration** is 2 hours, the job is triggered every five minutes for two hours.
Enter a timespan object, such as one that the New-TimeSpan cmdlet returns or a string that can be converted to a timespan object, such as "1:05:30".
To run a job indefinitely, add the *RepeatIndefinitely* parameter instead.
To stop a job before the job trigger repetition duration expires, use the Set-JobTrigger cmdlet to set the *RepetitionDuration* value to zero (0).
This parameter is valid only when the *Once*, *At* and *RepetitionInterval* parameters are used in the command.
```yaml
Type: TimeSpan
Parameter Sets: Once
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -RepetitionInterval
Repeats the job at the specified time interval.
For example, if the value of this parameter is 2 hours, the job is triggered every two hours.
The default value, 0, does not repeat the job.
Enter a timespan object, such as one that the New-TimeSpan cmdlet returns or a string that can be converted to a timespan object, such as "1:05:30".
This parameter is valid only when the *Once*, *At*, and *RepetitionDuration* parameters are used in the command.
```yaml
Type: TimeSpan
Parameter Sets: Once
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -User
Specifies the users who trigger an *AtLogon* start of a scheduled job.
Enter the name of a user in \<UserName\> or \<Domain\Username\> format or enter an asterisk (*) to represent all users.
The default value is all users.
```yaml
Type: String
Parameter Sets: AtLogon
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Weekly
Specifies a recurring weekly job schedule.
Use the other parameters in the Weekly parameter set to specify the schedule details.
```yaml
Type: SwitchParameter
Parameter Sets: Weekly
Aliases:
Required: True
Position: 0
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -WeeksInterval
Specifies the number of weeks between occurrences on a weekly job schedule.
For example, a value of 3 starts the scheduled job on weeks 1, 4, 7 and so on.
The default value is 1.
```yaml
Type: Int32
Parameter Sets: Weekly
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### None
You cannot pipe input to this cmdlet.
## OUTPUTS
### Microsoft.PowerShell.ScheduledJob.ScheduledJobTrigger
## NOTES
* Job triggers are not saved to disk. However, scheduled jobs are saved to disk, and you can use the Get-JobTrigger cmdlet to get the job trigger of any scheduled job.
* **New-JobTrigger** does not prevent you from creating a job trigger that will not run a scheduled job, such as one-time trigger for a date in the past.
* The Register-ScheduledJob cmdlet accepts a ScheduledJobTrigger object, such as one returned by the **New-JobTrigger** or Get-JobTrigger cmdlets, or a hash table with trigger values.
To submit a hash table, use the following keys.
  `@{Frequency="Once"; At="3am"; DaysOfWeek="Monday","Wednesday"; Interval=2; RandomDelay="30minutes"; User="Domain1\User01"}`

  - `Frequency`: Once, Daily, Weekly, AtStartup, or AtLogon
  - `At`: any valid time string
  - `DaysOfWeek`: any combination of day names
  - `Interval`: any valid frequency interval
  - `RandomDelay`: any valid time span string
  - `User`: any valid user; used only with the *AtLogon* frequency value
## RELATED LINKS
[Add-JobTrigger](Add-JobTrigger.md)
[Disable-JobTrigger](Disable-JobTrigger.md)
[Disable-ScheduledJob](Disable-ScheduledJob.md)
[Enable-JobTrigger](Enable-JobTrigger.md)
[Enable-ScheduledJob](Enable-ScheduledJob.md)
[Get-JobTrigger](Get-JobTrigger.md)
[Get-ScheduledJob](Get-ScheduledJob.md)
[Get-ScheduledJobOption](Get-ScheduledJobOption.md)
[New-JobTrigger](New-JobTrigger.md)
[New-ScheduledJobOption](New-ScheduledJobOption.md)
[Register-ScheduledJob](Register-ScheduledJob.md)
[Remove-JobTrigger](Remove-JobTrigger.md)
[Set-JobTrigger](Set-JobTrigger.md)
[Set-ScheduledJob](Set-ScheduledJob.md)
[Set-ScheduledJobOption](Set-ScheduledJobOption.md)
[Unregister-ScheduledJob](Unregister-ScheduledJob.md)
| 35.341615 | 314 | 0.752373 | eng_Latn | 0.969599 |
58df5fb85b67ba8592320d1eea04a7b3f0e35ebf | 1,674 | md | Markdown | doc/ref/jsonpointer/remove.md | stac47/jsoncons | b9e95d6a0bcb2dc8a821d33f98a530f10cfe1bc0 | [
"BSL-1.0"
] | null | null | null | doc/ref/jsonpointer/remove.md | stac47/jsoncons | b9e95d6a0bcb2dc8a821d33f98a530f10cfe1bc0 | [
"BSL-1.0"
] | null | null | null | doc/ref/jsonpointer/remove.md | stac47/jsoncons | b9e95d6a0bcb2dc8a821d33f98a530f10cfe1bc0 | [
"BSL-1.0"
] | 2 | 2020-09-15T13:00:19.000Z | 2020-11-13T13:17:44.000Z | ### jsoncons::jsonpointer::remove
Removes a `json` element.
```c++
#include <jsoncons_ext/jsonpointer/jsonpointer.hpp>
template<class J>
void remove(J& target, const typename J::string_view_type& path); (1)
template<class J>
void remove(J& target, const typename J::string_view_type& path, std::error_code& ec); (2)
```
Removes the value at the location specifed by `path`.
#### Return value
None
### Exceptions
(1) Throws a [jsonpointer_error](jsonpointer_error.md) if `remove` fails.
(2) Sets the out-parameter `ec` to an error code in the [jsonpointer_error_category](jsonpointer_errc.md) if `remove` fails.
### Examples
#### Remove an object member
```c++
#include <jsoncons/json.hpp>
#include <jsoncons_ext/jsonpointer/jsonpointer.hpp>
using jsoncons::json;
namespace jsonpointer = jsoncons::jsonpointer;
int main()
{
auto target = json::parse(R"(
{ "foo": "bar", "baz" : "qux"}
)");
std::error_code ec;
jsonpointer::remove(target, "/baz", ec);
if (ec)
{
std::cout << ec.message() << std::endl;
}
else
{
std::cout << target << std::endl;
}
}
```
Output:
```json
{"foo":"bar"}
```
#### Remove an array element
```c++
#include <jsoncons/json.hpp>
#include <jsoncons_ext/jsonpointer/jsonpointer.hpp>
using jsoncons::json;
namespace jsonpointer = jsoncons::jsonpointer;
int main()
{
auto target = json::parse(R"(
{ "foo": [ "bar", "qux", "baz" ] }
)");
std::error_code ec;
jsonpointer::remove(target, "/foo/1", ec);
if (ec)
{
std::cout << ec.message() << std::endl;
}
else
{
std::cout << target << std::endl;
}
}
```
Output:
```json
{"foo":["bar","baz"]}
```
| 18 | 108 | 0.611111 | eng_Latn | 0.311315 |
58dfa38569deee7fc54f35a26f85d5444fc01c8e | 529 | md | Markdown | catalog/gunnm/en-US_gunnm-kasei-senki.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/gunnm/en-US_gunnm-kasei-senki.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/gunnm/en-US_gunnm-kasei-senki.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Gunnm: Kasei Senki

- **type**: manga
- **original-name**: 銃夢火星戦記
- **start-date**: 2014-10-28
## Tags
- action
- sci-fi
- seinen
## Authors
- Kishiro, Yukito (Story & Art)
## Synopsis
As Alita begins to piece together fragments of her past, she travels to Mars in order to discover her roots and seek the truth about her Kunstler training.
## Links
- [My Anime list](https://myanimelist.net/manga/81507/Gunnm__Kasei_Senki)
| 19.592593 | 155 | 0.68431 | eng_Latn | 0.535187 |
58dfdfa9f21180f4f796d2b9d67f9812a8faa360 | 9,118 | md | Markdown | desktop-src/WinSock/summary-of-socket-ioctl-opcodes-2.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-07-26T16:18:49.000Z | 2022-02-19T02:00:21.000Z | desktop-src/WinSock/summary-of-socket-ioctl-opcodes-2.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-04-09T17:00:51.000Z | 2020-04-09T18:30:01.000Z | desktop-src/WinSock/summary-of-socket-ioctl-opcodes-2.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-19T02:58:48.000Z | 2021-03-06T21:09:47.000Z | ---
Description: Some of the socket IOCTL opcodes for Windows Sockets 2 are summarized in the following table.
ms.assetid: fb6447b4-28f5-4ab7-bbdc-5a57ed38a994
title: Summary of Socket Ioctl Opcodes
ms.topic: article
ms.date: 05/31/2018
---
# Summary of Socket Ioctl Opcodes
Some of the socket IOCTL opcodes for Windows Sockets 2 are summarized in the following table. More detailed information is in the Winsock reference on [**Winsock IOCTLs**](winsock-ioctls.md) and the [**WSPIoctl**](https://msdn.microsoft.com/library/ms742282(v=VS.85).aspx) function. There are other new protocol-specific IOCTL opcodes that can be found in the protocol-specific annex.
A complete list of [**Winsock IOCTLs**](winsock-ioctls.md) are available in the Winsock reference.
| Opcode | Input type | Output type | Meaning |
|-------------------------------------------------------------|------------------------------------------|---------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| FIONBIO | Unsigned long | <Not used> | Enables or disables nonblocking mode on the socket. |
| FIONREAD | <Not used> | Unsigned long | Determines the amount of data that can be read atomically from the socket. |
| SIOCATMARK | <Not used> | BOOL | Determines whether or not all OOB data has been read. |
| SIO\_ASSOCIATE\_HANDLE | Companion API dependent | <Not used> | Associates the socket with the specified handle of a companion interface. |
| SIO\_ENABLE\_CIRCULAR\_QUEUEING | <Not used> | <Not used> | Enables circular queuing. |
| SIO\_FIND\_ROUTE | [**sockaddr**](sockaddr-2.md) structure | <Not used> | Requests the route to the specified address to be discovered. |
| SIO\_FLUSH | <Not used> | <Not used> | Discards current contents of the sending queue. |
| SIO\_GET\_BROADCAST\_ADDRESS | <Not used> | [**sockaddr**](sockaddr-2.md) structure | Retrieves the protocol-specific broadcast address to be used in [**WSPSendTo**](https://msdn.microsoft.com/library/ms742291(v=VS.85).aspx). |
| SIO\_GET\_QOS | <Not used> | [**QOS**](/windows/win32/api/winsock2/ns-winsock2-qos) | Retrieves current flow specifications for the socket. |
| SIO\_GET\_GROUP\_QOS | <Not used> | [**QOS**](/windows/win32/api/winsock2/ns-winsock2-qos) | Reserved. |
| SIO\_MULTIPOINT\_LOOPBACK | BOOL | <Not used> | Controls whether data sent in a multipoint session will also be received by the same socket on the local host. |
| SIO\_MULTICAST\_SCOPE | int | <Not used> | Specifies the scope over which multicast transmissions will occur. |
| SIO\_SET\_QOS | [**QOS**](/windows/win32/api/winsock2/ns-winsock2-qos) | <Not used> | Establishes new flow specifications for the socket. |
| SIO\_SET\_GROUP\_QOS | [**QOS**](/windows/win32/api/winsock2/ns-winsock2-qos) | <Not used> | Reserved. |
| SIO\_TRANSLATE\_HANDLE | int | Companion-API dependent | Obtains a corresponding handle for socket *s* that is valid in the context of a companion interface. |
| SIO\_ROUTING\_INTERFACE\_QUERY | [**sockaddr**](sockaddr-2.md) | [**sockaddr**](sockaddr-2.md) | Obtains the address of the local interface that should be used to send to the specified address. |
| SIO\_ROUTING\_INTERFACE\_CHANGE | [**sockaddr**](sockaddr-2.md) | <Not used> | Requests notification of changes in information reported through SIO\_ROUTING\_INTERFACE\_QUERY for the specified address. |
| [**SIO\_ADDRESS\_LIST\_QUERY**](https://msdn.microsoft.com/library/Dd877219(v=VS.85).aspx) | <Not used> | [**SOCKET\_ADDRESS**](/windows/desktop/api/Ws2def/ns-ws2def-socket_address) | Obtains a list of local transport addresses of the socket's protocol family to which the application can bind. The list of addresses varies based on address family and some addresses are excluded from the list. |
| SIO\_ADDRESS\_LIST\_CHANGE | <Not used> | <Not used> | Requests notification of changes in information reported through SIO\_ADDRESS\_LIST\_QUERY |
| SIO\_QUERY\_PNP\_TARGET\_HANDLE | <Not used> | SOCKET | Obtains socket descriptor of the next provider in the chain on which current socket depends in regards to PnP. |
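## Examples

Applications normally reach these opcodes through **ioctlsocket** or **WSAIoctl**, which hand the request to the service provider's [**WSPIoctl**](https://msdn.microsoft.com/library/ms742282(v=VS.85).aspx). The following is a minimal sketch — assuming Winsock 2 headers and linking against Ws2_32.lib — of the FIONBIO and FIONREAD opcodes from the table above:

```c++
#include <winsock2.h>
#include <iostream>

int main()
{
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET)
    {
        WSACleanup();
        return 1;
    }

    // FIONBIO: a nonzero argument enables nonblocking mode on the socket.
    u_long nonblocking = 1;
    if (ioctlsocket(s, FIONBIO, &nonblocking) == SOCKET_ERROR)
    {
        std::cout << "FIONBIO failed: " << WSAGetLastError() << "\n";
    }

    // FIONREAD: reports the amount of data that can be read atomically.
    u_long bytesReadable = 0;
    if (ioctlsocket(s, FIONREAD, &bytesReadable) == 0)
    {
        std::cout << "bytes readable: " << bytesReadable << "\n";
    }

    closesocket(s);
    WSACleanup();
    return 0;
}
```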
## Related topics
<dl> <dt>
[**Winsock IOCTLs**](winsock-ioctls.md)
</dt> <dt>
[**WSPIoctl**](https://msdn.microsoft.com/library/ms742282(v=VS.85).aspx)
</dt> </dl>
| 151.966667 | 422 | 0.3288 | eng_Latn | 0.915382 |
58e008767c836c0b3614d0c1bd7a74ac24d3323f | 700 | md | Markdown | doc/release-notes/release-notes-0.16.2.md | phm87/bitcoin-abc | e0c55f5e6fbb3f4c19a9e203223f8ad2f3eba7b9 | [
"MIT"
] | 20 | 2020-01-04T15:35:12.000Z | 2022-03-15T13:51:41.000Z | doc/abc/release-notes/release-notes-0.16.2.md | Chihuataneo/bitcoin-sv | d9b12a23dbf0d2afc5f488fa077d762b302ba873 | [
"MIT"
] | 10 | 2020-02-21T15:34:52.000Z | 2021-07-05T07:12:24.000Z | doc/abc/release-notes/release-notes-0.16.2.md | Chihuataneo/bitcoin-sv | d9b12a23dbf0d2afc5f488fa077d762b302ba873 | [
"MIT"
] | 20 | 2020-02-14T11:40:57.000Z | 2022-03-15T14:22:13.000Z | Bitcoin ABC version 0.16.2 is now available from:
<https://download.bitcoinabc.org/0.16.2/>
This release includes the following features and fixes:
- Remove the newdaaactivationtime configuration.
- Do not use the NODE_BITCOIN_CASH service bit for preferencial peering anymore.
- Only connect to node using the cash magic.
- Remove indicator mentionning if a node uses the cash magic getpeerinfo RPC.
- Add support for the new cashaddr format. The `-usecashaddr` flag can be used to select which format is used when presenting addresses to users. By default, Bitcoin ABC will keep using the old format until Jan, 14 and then switch to the new format. Both format are now accepted as input.
| 58.333333 | 289 | 0.784286 | eng_Latn | 0.998335 |
58e0181e9dab86bb3b383830832a3105ee253271 | 14,494 | md | Markdown | _posts/2022-01-22-info-2005-391242-11350.md | seed-info/apt-info-sub | 5d3a83a8da1bd659d7d4392146f97ddbbaf7633f | [
"MIT"
] | null | null | null | _posts/2022-01-22-info-2005-391242-11350.md | seed-info/apt-info-sub | 5d3a83a8da1bd659d7d4392146f97ddbbaf7633f | [
"MIT"
] | null | null | null | _posts/2022-01-22-info-2005-391242-11350.md | seed-info/apt-info-sub | 5d3a83a8da1bd659d7d4392146f97ddbbaf7633f | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
title: 한화꿈에그린
categories: [아파트정보]
permalink: /apt/서울특별시노원구중계동한화꿈에그린
---
Detailed actual-transaction records for the 한화꿈에그린 apartment complex (Junggye-dong, Nowon-gu, Seoul)
<script type="text/javascript">
google.charts.load('current', {'packages':['line', 'corechart']});
google.charts.setOnLoadCallback(drawChart);
function drawChart() {
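// Build a DataTable: a date column plus one numeric series per transaction type.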
var data = new google.visualization.DataTable();
data.addColumn('date', 'Date');
data.addColumn('number', "Sale");
data.addColumn('number', "Jeonse");
data.addColumn('number', "Resale");
data.addRows([[new Date(Date.parse("2022-01-06")), null, 51450, null], [new Date(Date.parse("2021-12-29")), null, null, null], [new Date(Date.parse("2021-12-22")), null, 51450, null], [new Date(Date.parse("2021-12-18")), null, 47250, null], [new Date(Date.parse("2021-12-08")), null, 60000, null], [new Date(Date.parse("2021-12-04")), null, 47250, null], [new Date(Date.parse("2021-11-27")), 102000, null, null], [new Date(Date.parse("2021-11-27")), null, null, null], [new Date(Date.parse("2021-11-13")), null, 47250, null], [new Date(Date.parse("2021-10-28")), null, null, null], [new Date(Date.parse("2021-10-15")), null, null, null], [new Date(Date.parse("2021-10-08")), null, 45150, null], [new Date(Date.parse("2021-10-02")), 100000, null, null], [new Date(Date.parse("2021-09-15")), null, 75000, null], [new Date(Date.parse("2021-09-06")), null, null, null], [new Date(Date.parse("2021-08-19")), 104500, null, null], [new Date(Date.parse("2021-08-14")), null, 78000, null], [new Date(Date.parse("2021-08-11")), 100000, null, null], [new Date(Date.parse("2021-08-11")), null, 47000, null], [new Date(Date.parse("2021-08-06")), 70500, null, null], [new Date(Date.parse("2021-07-12")), 100000, null, null], [new Date(Date.parse("2021-07-10")), null, null, null], [new Date(Date.parse("2021-07-10")), null, 61000, null], [new Date(Date.parse("2021-07-08")), null, 47000, null], [new Date(Date.parse("2021-07-05")), null, null, null], [new Date(Date.parse("2021-07-02")), 92000, null, null], [new Date(Date.parse("2021-06-25")), 85000, null, null], [new Date(Date.parse("2021-06-23")), 70500, null, null], [new Date(Date.parse("2021-06-10")), 93900, null, null], [new Date(Date.parse("2021-06-07")), null, 46200, null], [new Date(Date.parse("2021-05-31")), 92650, null, null], [new Date(Date.parse("2021-05-31")), null, 71500, null], [new Date(Date.parse("2021-05-26")), null, 47200, null], [new Date(Date.parse("2021-05-17")), null, 39500, null], [new Date(Date.parse("2021-05-01")), null, null, null], [new Date(Date.parse("2021-04-22")), null, 70000, null], [new Date(Date.parse("2021-04-14")), null, null, null], [new Date(Date.parse("2021-04-06")), 91700, null, null], [new Date(Date.parse("2021-04-03")), 95000, null, null], [new Date(Date.parse("2021-03-23")), 94500, null, null], [new Date(Date.parse("2021-03-12")), null, null, null], [new Date(Date.parse("2021-03-10")), null, 40940, null], [new Date(Date.parse("2021-03-02")), 90000, null, null], [new Date(Date.parse("2021-02-22")), null, 59500, null], [new Date(Date.parse("2021-02-20")), null, 49000, null], [new Date(Date.parse("2021-02-20")), null, 60000, null], [new Date(Date.parse("2021-02-15")), null, 61000, null], [new Date(Date.parse("2021-02-10")), null, 46200, null], [new Date(Date.parse("2021-02-02")), null, 43000, null], [new Date(Date.parse("2021-02-02")), null, 45000, null], [new Date(Date.parse("2021-01-29")), null, 46200, null], [new Date(Date.parse("2021-01-29")), null, 55000, null], [new Date(Date.parse("2021-01-25")), 90000, null, null], [new Date(Date.parse("2021-01-23")), 89900, null, null], [new Date(Date.parse("2021-01-23")), null, 46000, null]]);
var options = {
hAxis: {
format: 'yyyy/MM/dd'
},
lineWidth: 0,
pointsVisible: true,
title: 'Actual transaction prices by type over the past year',
legend: { position: 'bottom' }
};
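// Apply thousands separators to the sale and jeonse price columns.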
var formatter = new google.visualization.NumberFormat({pattern:'###,###'} );
formatter.format(data, 1);
formatter.format(data, 2);
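// Draw after a short delay, then hide the "loading" indicator.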
setTimeout(function() {
var chart = new google.visualization.LineChart(document.getElementById('columnchart_material'));
chart.draw(data, (options));
document.getElementById('loading').style.display = 'none';
}, 200);
}
</script>
<div id="loading" style="z-index:20; display: block; margin-left: 0px">Drawing the chart…</div>
<div id="columnchart_material" style="width: 95%; margin-left: 0px; display: block"></div>
<!-- contents start -->
<b>All-time highs by exclusive area and transaction type</b>
<table class="sortable">
 <tr>
  <td>Type</td>
  <td>Price (10,000 KRW)</td>
  <td>Area (㎡)</td>
  <td>Floor</td>
  <td>Date</td>
 </tr>
 <tr>
  <td><a style="color: blue">Sale</a></td>
  <td>104,500</td>
  <td>84.902</td>
  <td>9</td>
  <td>2021-08-19</td>
 </tr>
 <tr>
  <td><a style="color: blue">Sale</a></td>
  <td>86,000</td>
  <td>125.027</td>
  <td>5</td>
  <td>2020-06-18</td>
 </tr>
 <tr>
  <td><a style="color: blue">Sale</a></td>
  <td>70,500</td>
  <td>59.941</td>
  <td>2</td>
  <td>2021-06-23</td>
 </tr>
 <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
  <td>78,000</td>
  <td>84.902</td>
  <td>15</td>
  <td>2021-08-14</td>
 </tr>
</table>
<b>Transactions in the past year</b> (prices in 10,000 KRW; monthly-rent rows are shown as rent (deposit))
<table class="sortable">
<tr>
  <td>Type</td>
  <td>Price (10,000 KRW)</td>
  <td>Area (㎡)</td>
  <td>Floor</td>
  <td>Date</td>
</tr>
<tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>51,450</td>
<td>84.902</td>
<td>14</td>
<td>2022-01-06</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>30 (30,000)</td>
<td>84.902</td>
<td>14</td>
<td>2021-12-29</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>51,450</td>
<td>84.902</td>
<td>7</td>
<td>2021-12-22</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>47,250</td>
<td>84.902</td>
<td>11</td>
<td>2021-12-18</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>60,000</td>
<td>84.902</td>
<td>5</td>
<td>2021-12-08</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>47,250</td>
<td>84.902</td>
<td>9</td>
<td>2021-12-04</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>102,000</td>
<td>84.902</td>
<td>7</td>
<td>2021-11-27</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>200 (10,000)</td>
<td>84.902</td>
<td>10</td>
<td>2021-11-27</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>47,250</td>
<td>84.902</td>
<td>3</td>
<td>2021-11-13</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>107 (15,000)</td>
<td>84.902</td>
<td>13</td>
<td>2021-10-28</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>100 (30,000)</td>
<td>84.902</td>
<td>6</td>
<td>2021-10-15</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>45,150</td>
<td>84.902</td>
<td>3</td>
<td>2021-10-08</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>100,000</td>
<td>84.902</td>
<td>9</td>
<td>2021-10-02</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>75,000</td>
<td>84.902</td>
<td>2</td>
<td>2021-09-15</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>20 (42,000)</td>
<td>84.902</td>
<td>5</td>
<td>2021-09-06</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>104,500</td>
<td>84.902</td>
<td>9</td>
<td>2021-08-19</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>78,000</td>
<td>84.902</td>
<td>15</td>
<td>2021-08-14</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>100,000</td>
<td>84.902</td>
<td>9</td>
<td>2021-08-11</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>47,000</td>
<td>84.902</td>
<td>13</td>
<td>2021-08-11</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>70,500</td>
<td>59.941</td>
<td>1</td>
<td>2021-08-06</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>100,000</td>
<td>84.902</td>
<td>14</td>
<td>2021-07-12</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>102 (30,000)</td>
<td>125.027</td>
<td>3</td>
<td>2021-07-10</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>61,000</td>
<td>84.902</td>
<td>8</td>
<td>2021-07-10</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>47,000</td>
<td>84.902</td>
<td>5</td>
<td>2021-07-08</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>120 (20,000)</td>
<td>84.902</td>
<td>12</td>
<td>2021-07-05</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>92,000</td>
<td>84.902</td>
<td>2</td>
<td>2021-07-02</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>85,000</td>
<td>84.902</td>
<td>1</td>
<td>2021-06-25</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>70,500</td>
<td>59.941</td>
<td>2</td>
<td>2021-06-23</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>93,900</td>
<td>84.902</td>
<td>6</td>
<td>2021-06-10</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>46,200</td>
<td>84.902</td>
<td>11</td>
<td>2021-06-07</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>92,650</td>
<td>84.902</td>
<td>8</td>
<td>2021-05-31</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>71,500</td>
<td>84.902</td>
<td>5</td>
<td>2021-05-31</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>47,200</td>
<td>84.902</td>
<td>10</td>
<td>2021-05-26</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>39,500</td>
<td>84.902</td>
<td>9</td>
<td>2021-05-17</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>60 (40,000)</td>
<td>84.902</td>
<td>8</td>
<td>2021-05-01</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>70,000</td>
<td>84.902</td>
<td>1</td>
<td>2021-04-22</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>60 (40,000)</td>
<td>84.902</td>
<td>4</td>
<td>2021-04-14</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>91,700</td>
<td>84.902</td>
<td>7</td>
<td>2021-04-06</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>95,000</td>
<td>84.902</td>
<td>15</td>
<td>2021-04-03</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>94,500</td>
<td>84.902</td>
<td>5</td>
<td>2021-03-23</td>
</tr> <tr>
  <td><a style="color: darkgoldenrod">Monthly rent</a></td>
<td>140 (8,000)</td>
<td>84.902</td>
<td>4</td>
<td>2021-03-12</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>40,940</td>
<td>84.902</td>
<td>3</td>
<td>2021-03-10</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>90,000</td>
<td>84.902</td>
<td>2</td>
<td>2021-03-02</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>59,500</td>
<td>84.902</td>
<td>4</td>
<td>2021-02-22</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>49,000</td>
<td>84.902</td>
<td>8</td>
<td>2021-02-20</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>60,000</td>
<td>84.902</td>
<td>13</td>
<td>2021-02-20</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>61,000</td>
<td>84.902</td>
<td>3</td>
<td>2021-02-15</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>46,200</td>
<td>84.902</td>
<td>3</td>
<td>2021-02-10</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>43,000</td>
<td>84.902</td>
<td>1</td>
<td>2021-02-02</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>45,000</td>
<td>84.902</td>
<td>6</td>
<td>2021-02-02</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>46,200</td>
<td>84.902</td>
<td>11</td>
<td>2021-01-29</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>55,000</td>
<td>59.941</td>
<td>10</td>
<td>2021-01-29</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>90,000</td>
<td>84.902</td>
<td>8</td>
<td>2021-01-25</td>
</tr> <tr>
  <td><a style="color: blue">Sale</a></td>
<td>89,900</td>
<td>84.902</td>
<td>8</td>
<td>2021-01-23</td>
</tr> <tr>
  <td><a style="color: darkgreen">Jeonse</a></td>
<td>46,000</td>
<td>84.902</td>
<td>12</td>
<td>2021-01-23</td>
</tr> </table>
<!-- contents end -->
| 33.62877 | 3,149 | 0.476887 | kor_Hang | 0.197733 |