*Source: labs/08-traffic_lights/README.md in MartinSomsak00/DE1 (MIT license)*

Link to the repository: [DE1](https://github.com/MartinSomsak00/DE1)
# Task 1
## Completed state table
| **Input P** | `0` | `0` | `1` | `1` | `0` | `1` | `0` | `1` | `1` | `1` | `1` | `0` | `0` | `1` | `1` | `1` |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| **Clock** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| **State** | A | A | B | C | C | D | A | B | C | D | B | B | B | C | D | B |
| **Output R** | `0` | `0` | `0` | `0` | `0` | `1` | `0` | `0` | `0` | `1` | `0` | `0` | `0` | `0` | `1` | `0` |
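As a cross-check of the table above, the state machine can be replayed in a short Python sketch. The transition rules below are inferred from the table's Input/State rows (start state A; Moore output R = 1 exactly in state D), so treat them as a reading of the table rather than a given:

```python
# Moore machine inferred from the completed state table:
# transitions (state, P) -> next state; output R = 1 only in state D.
TRANS = {("A", 0): "A", ("A", 1): "B",
         ("B", 0): "B", ("B", 1): "C",
         ("C", 0): "C", ("C", 1): "D",
         ("D", 0): "A", ("D", 1): "B"}

def simulate(inputs, state="A"):
    """Replay the input sequence and collect the State and Output R rows."""
    states, outputs = [], []
    for p in inputs:
        state = TRANS[(state, p)]
        states.append(state)
        outputs.append(1 if state == "D" else 0)
    return states, outputs

P = [0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1]   # the Input P row
states, R = simulate(P)
print("".join(states))   # AABCCDABCDBBBCDB
print(R)                 # 1 whenever the state is D
```

Replaying the `Input P` row reproduces the `State` and `Output R` rows of the table exactly.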
## Connection of RGB LEDs on the Nexys A7 board

## Completed table with color settings
| **RGB LED** | **Artix-7 pin names** | **Red** | **Yellow** | **Green** |
| :-: | :-: | :-: | :-: | :-: |
| LD16 | N15, M16, R12 | `1,0,0` | `1,1,0` | `0,1,0` |
| LD17 | N16, R11, G14 | `1,0,0` | `1,1,0` | `0,1,0` |
# Task 2
## State diagram

## Listing of VHDL code of sequential process p_traffic_fsm
```vhdl
p_traffic_fsm : process(clk)
begin
    if rising_edge(clk) then
        if (reset = '1') then
            s_state <= STOP1;
            s_cnt   <= c_ZERO;
        elsif (s_en = '1') then
            case s_state is
                when STOP1 =>
                    if (s_cnt < c_DELAY_1SEC) then
                        s_cnt <= s_cnt + 1;
                    else
                        s_state <= WEST_GO;
                        s_cnt   <= c_ZERO;
                    end if;
                when WEST_GO =>
                    if (s_cnt < c_DELAY_4SEC) then
                        s_cnt <= s_cnt + 1;
                    else
                        s_state <= WEST_WAIT;
                        s_cnt   <= c_ZERO;
                    end if;
                when WEST_WAIT =>
                    if (s_cnt < c_DELAY_2SEC) then
                        s_cnt <= s_cnt + 1;
                    else
                        s_state <= STOP2;
                        s_cnt   <= c_ZERO;
                    end if;
                when STOP2 =>
                    if (s_cnt < c_DELAY_1SEC) then
                        s_cnt <= s_cnt + 1;
                    else
                        s_state <= SOUTH_GO;
                        s_cnt   <= c_ZERO;
                    end if;
                when SOUTH_GO =>
                    if (s_cnt < c_DELAY_4SEC) then
                        s_cnt <= s_cnt + 1;
                    else
                        s_state <= SOUTH_WAIT;
                        s_cnt   <= c_ZERO;
                    end if;
                when SOUTH_WAIT =>
                    if (s_cnt < c_DELAY_2SEC) then
                        s_cnt <= s_cnt + 1;
                    else
                        s_state <= STOP1;
                        s_cnt   <= c_ZERO;
                    end if;
                when others =>
                    s_state <= STOP1;
            end case;
        end if;
    end if;
end process p_traffic_fsm;
```
## Listing of VHDL code of combinatorial process p_output_fsm
```vhdl
p_output_fsm : process(s_state)
begin
    case s_state is
        when STOP1 =>
            south_o <= "100";   -- Red (RGB = 100)
            west_o  <= "100";   -- Red (RGB = 100)
        when WEST_GO =>
            south_o <= "100";   -- Red (RGB = 100)
            west_o  <= "010";   -- Green (RGB = 010)
        when WEST_WAIT =>
            south_o <= "100";   -- Red (RGB = 100)
            west_o  <= "110";   -- Yellow (RGB = 110)
        when STOP2 =>
            south_o <= "100";   -- Red (RGB = 100)
            west_o  <= "100";   -- Red (RGB = 100)
        when SOUTH_GO =>
            south_o <= "010";   -- Green (RGB = 010)
            west_o  <= "100";   -- Red (RGB = 100)
        when SOUTH_WAIT =>
            south_o <= "110";   -- Yellow (RGB = 110)
            west_o  <= "100";   -- Red (RGB = 100)
        when others =>
            south_o <= "100";   -- Red
            west_o  <= "100";   -- Red
    end case;
end process p_output_fsm;
```
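The two processes can be cross-checked with a small behavioural model in Python. The state order and RGB codes are taken from the listings above; the `c_DELAY_*` constants are assumed to correspond to 1, 4, and 2 seconds (as their names suggest), with `s_en` pulsing once per second, so the tick counts are an approximation of the real design:

```python
# Behavioural sketch of p_traffic_fsm + p_output_fsm (states and colours
# taken from the VHDL above; delays assumed to be 1 s / 4 s / 2 s).
STATES = ["STOP1", "WEST_GO", "WEST_WAIT", "STOP2", "SOUTH_GO", "SOUTH_WAIT"]
DELAYS = {"STOP1": 1, "WEST_GO": 4, "WEST_WAIT": 2,
          "STOP2": 1, "SOUTH_GO": 4, "SOUTH_WAIT": 2}
# (south_o, west_o) per state: "100" = red, "010" = green, "110" = yellow.
OUTPUTS = {"STOP1": ("100", "100"), "WEST_GO": ("100", "010"),
           "WEST_WAIT": ("100", "110"), "STOP2": ("100", "100"),
           "SOUTH_GO": ("010", "100"), "SOUTH_WAIT": ("110", "100")}

def run(ticks):
    """Yield (state, south_o, west_o) once per s_en tick, mirroring the VHDL."""
    state, cnt = "STOP1", 0
    for _ in range(ticks):
        yield (state, *OUTPUTS[state])
        if cnt < DELAYS[state]:
            cnt += 1
        else:                      # counter elapsed: advance to the next state
            state = STATES[(STATES.index(state) + 1) % len(STATES)]
            cnt = 0

trace = list(run(20))
print(trace[0])    # ('STOP1', '100', '100')
print(trace[2])    # ('WEST_GO', '100', '010')
```

The trace confirms that exactly one direction ever sees green or yellow while the other stays red.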
## Screenshots of the simulation

### Zoomed

# Task 3
## State table
| **Current state** | **Direction South** | **Direction West** | **Delay** | **Input (counter_en, sensor_west, sensor_south)** |
| :-- | :-: | :-: | :-: | :-: |
| `STOP1` | red | red | 1 sec | n/c |
| `WEST_GO` | red | green | 4 sec | `1,1,0` or `0,X,X`: stay in `WEST_GO`; otherwise go to `WEST_WAIT` |
| `WEST_WAIT` | red | yellow | 2 sec | n/c |
| `STOP2` | red | red | 1 sec | n/c |
| `SOUTH_GO` | green | red | 4 sec | `1,0,1` or `0,X,X`: stay in `SOUTH_GO`; otherwise go to `SOUTH_WAIT` |
| `SOUTH_WAIT` | yellow | red | 2 sec | n/c |
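The sensor-dependent transitions in the table can be sketched as a small decision function. This is a hypothetical Python model of the table, not part of the VHDL design; `X` is treated as don't care, and the `n/c` rows simply advance to the next state once their delay elapses:

```python
ORDER = ["STOP1", "WEST_GO", "WEST_WAIT", "STOP2", "SOUTH_GO", "SOUTH_WAIT"]

def next_state(state, counter_en, sensor_west, sensor_south):
    """Next state once the current state's delay has elapsed (per the table)."""
    if state == "WEST_GO":
        # Keep West green while only West has a car, or while the counter is halted.
        stay = (counter_en, sensor_west, sensor_south) == (1, 1, 0) or counter_en == 0
        return "WEST_GO" if stay else "WEST_WAIT"
    if state == "SOUTH_GO":
        stay = (counter_en, sensor_west, sensor_south) == (1, 0, 1) or counter_en == 0
        return "SOUTH_GO" if stay else "SOUTH_WAIT"
    return ORDER[(ORDER.index(state) + 1) % len(ORDER)]   # n/c rows: just advance

print(next_state("WEST_GO", 1, 1, 0))   # WEST_GO  (car only on the West road)
print(next_state("WEST_GO", 1, 1, 1))   # WEST_WAIT (a car is also waiting on South)
```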
## State diagram
*Source: common/Timer/README_zh.md in openharmony-gitee-mirror/app_samples (Apache-2.0 license)*

# Simple Timer
### Introduction

This sample demonstrates how to use the timer component (Timer): a simple timer is implemented with the built-in timer control.

### Usage

1. Timer: Enter the timer's hours, minutes, and seconds. After the timer is started, the remaining time is displayed; once the timer finishes, a notification is sent.
2. Repeat Timer: Provides the application with a repeating timer at a fixed interval. Enter the reminder interval, in hours and minutes.
3. After the Reminder timer is started, notifications are sent periodically, at an interval specified by the user.

### Constraints and Limitations

This sample can run only on large-scale systems.
*Source: doc/fluid/read_source.md in limeng357/Paddle (ECL-2.0/Apache-2.0 licenses)*

# PaddlePaddle Fluid Source Code Overview
Examples: https://github.com/PaddlePaddle/Paddle/tree/develop/python/paddle/fluid/tests/book
Core: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/framework
Operator: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators
Memory: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/memory
Platform: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/platform
# Compile Time
The following **defines** the NN. The definition goes into this [protocol buffer](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto).
```python
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1, act=None)
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_cost = fluid.layers.mean(x=cost)
sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
sgd_optimizer.minimize(avg_cost)
```
- Variables: `x`, `y`, `y_predict`, `cost` and `avg_cost`. [Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/framework.py#)
- Layers: `fluid.layers.data`, `fluid.layers.fc` and `fluid.layers.mean` are layers. [Python](https://github.com/PaddlePaddle/Paddle/tree/develop/python/paddle/fluid/layers)
- Every Layer has one or more operators and variables/parameters
- All the operators are defined at [`paddle/fluid/operators/`](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators). Other files worth a look:
- Base class: [`paddle/fluid/framework/operator.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.h)
- Operator Registration: [`paddle/fluid/framework/op_registry.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/op_registry.h)
- Operator Lookup: [`paddle/fluid/framework/op_info.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/op_info.h)
- Optimizer: `fluid.optimizer.SGD`. It does the following:
- Add backward operators. [[Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/backward.py)]
- Add optimizer operators. [[Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/optimizer.py)]
# Run Time
The following **evaluates** the NN: it instantiates all the variables and operators.
```python
place = fluid.CPUPlace()
feeder = fluid.DataFeeder(place=place, feed_list=[x, y])
exe = fluid.Executor(place)
# Allocate memory. Initialize Parameter.
exe.run(fluid.default_startup_program())
# Allocate memory. Do computation.
exe.run(fluid.default_main_program(),
feed=feeder.feed(data),
fetch_list=[avg_cost])
```
- Place: `place`, one of CPU, GPU, or FPGA. [C++](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/place.h)
  - The device handles are at [paddle/fluid/platform/device_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/device_context.h)
- Executor: `fluid.Executor(place)`. [[Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/executor.py), [C++](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/executor.cc)]
- Feeds the data: `feed=feeder.feed(data)`
- Evaluates all the operators
- Fetches the result: `fetch_list=[avg_cost]`
- Other files worth a look:
- Scope: [paddle/fluid/framework/scope.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/scope.h). Where all the variables live
- Variable: [paddle/fluid/framework/variable.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/variable.h). Where all the data (most likely tensors) live
- Tensor: [paddle/fluid/framework/tensor.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/tensor.h). Where we allocate memory through [`paddle/fluid/memory/`](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/memory)
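The compile-time/run-time split described above can be illustrated with a toy sketch. This is plain Python, not Paddle code: the names `program`, `append_op`, and `Executor` below are stand-ins for `ProgramDesc`, operator registration, and the C++ `Executor`, chosen only for illustration:

```python
# Toy model of the two phases: compile time builds a program description,
# run time walks it with an executor that keeps variables in a scope.
program = []   # stand-in for ProgramDesc: (op, inputs, output) records

def append_op(op, inputs, output):
    program.append((op, inputs, output))

# "Compile time": describe y = x * w + b without computing anything yet.
append_op("mul", ("x", "w"), "tmp")
append_op("add", ("tmp", "b"), "y")

class Executor:
    """'Run time': instantiate variables in a scope and evaluate each op."""
    OPS = {"mul": lambda a, b: a * b, "add": lambda a, b: a + b}

    def run(self, prog, feed, fetch_list):
        scope = dict(feed)                        # variables live in the scope
        for op, (a, b), out in prog:
            scope[out] = self.OPS[op](scope[a], scope[b])
        return [scope[name] for name in fetch_list]

print(Executor().run(program, feed={"x": 3, "w": 2, "b": 1}, fetch_list=["y"]))  # [7]
```

The same separation is what lets Fluid serialize a program once and execute it on different places (CPU/GPU/FPGA).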
*Source: desktop-src/SecCertEnroll/ix509certificatetemplates-properties.md in citelao/win32 (CC-BY-4.0/MIT licenses)*

---
Description: The IX509CertificateTemplates interface exposes the following properties.
ms.assetid: 74D5E1C1-5498-4339-A141-8BEDCC008179
title: IX509CertificateTemplates Properties
ms.topic: reference
ms.date: 05/31/2018
---
# IX509CertificateTemplates Properties
The [**IX509CertificateTemplates**](/windows/desktop/api/Certenroll/nn-certenroll-ix509certificatetemplates) interface exposes the following properties.
## In this section
- [**\_NewEnum Property**](/windows/desktop/api/Certenroll/nf-certenroll-ix509certificatetemplates-get__newenum)
- [**Count Property**](/windows/desktop/api/Certenroll/nf-certenroll-ix509certificatetemplates-get_count)
- [**ItemByName Property**](/windows/desktop/api/Certenroll/nf-certenroll-ix509certificatetemplates-get_itembyname)
- [**ItemByIndex Property**](/windows/desktop/api/Certenroll/nf-certenroll-ix509certificatetemplates-get_itembyindex)
- [**ItemByOid Property**](/windows/desktop/api/Certenroll/nf-certenroll-ix509certificatetemplates-get_itembyoid)
*Source: README.md in kasunkv/azure-app-configuration-managed-identity-example (MIT license)*

# azure-app-configuration-managed-identity-example
Using Managed Identities to Access Azure App Configuration
*Source: exchange/exchange-ps/exchange/unified-messaging/New-UMCallAnsweringRule.md in amareshbAtMicrosoft/office-docs-powershell (CC-BY-4.0/MIT licenses)*

---
external help file: Microsoft.Exchange.MediaAndDevices-Help.xml
applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
title: New-UMCallAnsweringRule
schema: 2.0.0
author: chrisda
ms.author: chrisda
ms.reviewer:
monikerRange: "exchserver-ps-2013 || exchserver-ps-2016 || exchserver-ps-2019 || exchonline-ps"
---
# New-UMCallAnsweringRule
## SYNOPSIS
This cmdlet is available in on-premises Exchange and in the cloud-based service. Some parameters and settings may be exclusive to one environment or the other.
Use the New-UMCallAnsweringRule cmdlet to create a call answering rule.
For information about the parameter sets in the Syntax section below, see [Exchange cmdlet syntax](https://docs.microsoft.com/powershell/exchange/exchange-server/exchange-cmdlet-syntax).
## SYNTAX
```
New-UMCallAnsweringRule -Name <String> [-CallerIds <MultiValuedProperty>]
[-CallersCanInterruptGreeting <$true | $false>] [-CheckAutomaticReplies <$true | $false>] [-Confirm]
[-DomainController <Fqdn>] [-ExtensionsDialed <MultiValuedProperty>] [-KeyMappings <MultiValuedProperty>]
[-Mailbox <MailboxIdParameter>] [-Priority <Int32>] [-ScheduleStatus <Int32>] [-TimeOfDay <TimeOfDay>]
[-WhatIf] [<CommonParameters>]
```
## DESCRIPTION
The New-UMCallAnsweringRule cmdlet creates a Unified Messaging (UM) call answering rule stored in a UM-enabled user's mailbox. You can run the cmdlet and create a call answering rule of the user that's logged on or use the Mailbox parameter to specify the mailbox where you want the call answering rule to be created. You can use the New-UMCallAnsweringRule cmdlet to specify the following conditions:
- Who the incoming call is from
- Time of day
- Calendar free/busy status
- Whether automatic replies are turned on for email
You can also specify the following actions:
- Find me
- Transfer the caller to someone else
- Leave a voice message
After this task is completed, the cmdlet sets the parameters and the values specified.
You need to be assigned permissions before you can run this cmdlet. Although this topic lists all parameters for the cmdlet, you may not have access to some parameters if they're not included in the permissions assigned to you. To find the permissions required to run any cmdlet or parameter in your organization, see [Find the permissions required to run any Exchange cmdlet](https://docs.microsoft.com/powershell/exchange/exchange-server/find-exchange-cmdlet-permissions).
## EXAMPLES
### Example 1
```powershell
New-UMCallAnsweringRule -Mailbox tonysmith -Name MyCallAnsweringRule -Priority 2
```
This example creates the call answering rule MyCallAnsweringRule in the mailbox for tonysmith with the priority of 2.
### Example 2
```powershell
New-UMCallAnsweringRule -Name MyCallAnsweringRule -CallerIds "1,4255550100,,","1,4255550123,," -Priority 2 -CallersCanInterruptGreeting $true -Mailbox tonysmith
```
This example creates the following actions on the call answering rule MyCallAnsweringRule in the mailbox for tonysmith:
- Sets the call answering rule to two caller IDs.
- Sets the priority of the call answering rule to 2.
- Sets the call answering rule to allow callers to interrupt the greeting.
### Example 3
```powershell
New-UMCallAnsweringRule -Name MyCallAnsweringRule -Priority 2 -Mailbox [email protected] -ScheduleStatus 0x8
```
This example creates the call answering rule MyCallAnsweringRule in the mailbox for tonysmith that sets the free/busy status to Out of Office and sets the priority to 2.
### Example 4
```powershell
New-UMCallAnsweringRule -Name MyCallAnsweringRule -Priority 2 -Mailbox tonysmith -ScheduleStatus 0x4 -KeyMappings "1,1,Receptionist,,,,,45678,","5,2,Urgent Issues,23456,23,45671,50,,"
```
This example creates the call answering rule MyCallAnsweringRule in the mailbox tonysmith and performs the following actions:
- Sets the priority of the call answering rule to 2.
- Creates key mappings for the call answering rule.
If the caller reaches the voice mail for the user and the status of the user is set to Busy, the caller can:
- Press the 1 key and be transferred to a receptionist at extension 45678.
- Press the 2 key and the Find Me feature will be used for urgent issues and ring extension 23456 first, and then 45671.
### Example 5
```powershell
New-UMCallAnsweringRule -Name MyCallAnsweringRule -Priority 2 -Mailbox tonysmith -TimeOfDay "1,0,,"
```
This example creates the call answering rule MyCallAnsweringRule in the mailbox for tonysmith and performs the following actions:
- Sets the priority of the call answering rule to 2.
- If the caller reaches voice mail during working hours, the caller is asked to call back later.
### Example 6
```powershell
New-UMCallAnsweringRule -Name MyCallAnsweringRule -Priority 2 -Mailbox tonysmith -TimeOfDay "3,4,8:00,12:00"
```
This example creates the call answering rule MyCallAnsweringRule in the mailbox for tonysmith with a custom period for the time of day and performs the following actions:
- Sets the priority of the call answering rule to 2.
- If the caller reaches voice mail and the time is between 8:00 A.M. and 12:00 P.M. on Tuesday, ask the caller to call back later.
## PARAMETERS
### -Name
The Name parameter specifies the name of the Unified Messaging (UM) call answering rule or Call Answering Rule ID being modified. The call answering ID or name must be unique per the user's UM-enabled mailbox. The name or ID for the call answering rule can contain up to 255 characters.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -CallerIds
The CallerIds parameter specifies an entry for the "If the Caller is" condition. Each entry for this parameter can contain a phone number, an Active Directory contact, a personal contact, or the personal Contacts folder. The parameter can contain 50 phone numbers or contact entries and no more than one entry for specifying the default Contacts folder. If the CallerIds parameter doesn't contain a condition, the condition isn't set and is ignored. The default value is $null.
```yaml
Type: MultiValuedProperty
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -CallersCanInterruptGreeting
The CallersCanInterruptGreeting parameter specifies whether a caller can interrupt the voice mail greeting while it's being played. The default is $null.
```yaml
Type: $true | $false
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -CheckAutomaticReplies
The CheckAutomaticReplies parameter specifies an entry for the "If My Automatic Replies are Enabled" condition. The default is $false.
```yaml
Type: $true | $false
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Confirm
The Confirm switch specifies whether to show or hide the confirmation prompt. How this switch affects the cmdlet depends on if the cmdlet requires confirmation before proceeding.
- Destructive cmdlets (for example, Remove-\* cmdlets) have a built-in pause that forces you to acknowledge the command before proceeding. For these cmdlets, you can skip the confirmation prompt by using this exact syntax: -Confirm:$false.
- Most other cmdlets (for example, New-\* and Set-\* cmdlets) don't have a built-in pause. For these cmdlets, specifying the Confirm switch without a value introduces a pause that forces you to acknowledge the command before proceeding.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: cf
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -DomainController
This parameter is available only in on-premises Exchange.
The DomainController parameter specifies the domain controller that's used by this cmdlet to read data from or write data to Active Directory. You identify the domain controller by its fully qualified domain name (FQDN). For example, dc01.contoso.com.
```yaml
Type: Fqdn
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -ExtensionsDialed
The ExtensionsDialed parameter specifies an entry for the "If the Caller Dials" condition. Each entry must be unique per call answering rule. Each extension must correspond to existing extension numbers assigned to UM-enabled users. The default is $null.
```yaml
Type: MultiValuedProperty
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -KeyMappings
The KeyMappings parameter specifies a key mapping entry for a call answering rule. The key mappings are those menu options offered to callers if the call answering rule is set to $true. You can configure a maximum of 10 entries. None of the defined key mappings can overlap. The default is $null.
```yaml
Type: MultiValuedProperty
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Mailbox
The Mailbox parameter specifies the UM-enabled mailbox where the call answering rule is created. You can use any value that uniquely identifies the mailbox. For example:
- Name
- Alias
- Distinguished name (DN)
- Canonical DN
- \<domain name\>\\\<account name\>
- Email address
- GUID
- LegacyExchangeDN
- SamAccountName
- User ID or user principal name (UPN)
If you don't use this parameter, the mailbox of the user who is running the command is used.
```yaml
Type: MailboxIdParameter
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Priority
The Priority parameter specifies the order that the call answering rule will be evaluated against other existing call answering rules. Call answering rules are processed in order of increasing priority values. The priority must be unique between all call answering rules in the UM-enabled mailbox. The priority on the call answering rule must be between 1 (highest) and 9 (lowest). The default is 9.
```yaml
Type: Int32
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -ScheduleStatus
The ScheduleStatus parameter specifies an entry for the "If my Schedule show that I am" condition. Users can specify their free/busy status to be checked. This parameter can be set from 0 through 15 and is interpreted as a 4-bit mask that represents the calendar status including Free, Tentative, Busy, and Out of Office. The following settings can be used to set the schedule status:
- None = 0x0
- Free = 0x1
- Tentative = 0x2
- Busy = 0x4
- OutOfOffice = 0x8
The default setting is $null.
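For illustration only (a Python sketch of the flag arithmetic, not Exchange code), the 4-bit mask combines as follows:

```python
# Decoding the ScheduleStatus bit mask described above.
FLAGS = {0x1: "Free", 0x2: "Tentative", 0x4: "Busy", 0x8: "OutOfOffice"}

def decode_schedule_status(mask):
    """Return the free/busy states that a ScheduleStatus value matches."""
    return [name for bit, name in FLAGS.items() if mask & bit]

print(decode_schedule_status(0x4))         # ['Busy']
print(decode_schedule_status(0x4 | 0x8))   # ['Busy', 'OutOfOffice']
```

Because the parameter is interpreted as a bit mask, a combined value such as 0xC (0x4 | 0x8) would match both Busy and Out of Office.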
```yaml
Type: Int32
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -TimeOfDay
The TimeOfDay parameter specifies an entry for the "If the Call Arrives During" condition for the call answering rule. You can specify working hours, non-working hours, or custom hours. The default is $null.
```yaml
Type: TimeOfDay
Parameter Sets: (All)
Aliases:
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -WhatIf
The WhatIf switch simulates the actions of the command. You can use this switch to view the changes that would occur without actually applying those changes. You don't need to specify a value with this switch.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: wi
Applicable: Exchange Server 2013, Exchange Server 2016, Exchange Server 2019, Exchange Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/p/?LinkID=113216).
## INPUTS
To see the input types that this cmdlet accepts, see [Cmdlet Input and Output Types](https://go.microsoft.com/fwlink/p/?linkId=616387). If the Input Type field for a cmdlet is blank, the cmdlet doesn't accept input data.
## OUTPUTS
To see the return types, which are also known as output types, that this cmdlet accepts, see [Cmdlet Input and Output Types](https://go.microsoft.com/fwlink/p/?linkId=616387). If the Output Type field is blank, the cmdlet doesn't return data.
## NOTES
## RELATED LINKS
[Online Version](https://technet.microsoft.com/library/79547dbf-1fb3-43ed-a788-03e4907e68a3.aspx)
| 36.792308 | 477 | 0.785351 | eng_Latn | 0.976057 |
bb6f828abb7091a07ff3c7f40277d7de40d61a86 | 518 | md | Markdown | content/team/_index.md | Caballerog/ngvikings-2020 | 501d67b01eb1f06cbcf1b9ddcc0d94a997c89d23 | [
"MIT"
] | 4 | 2020-01-11T16:11:52.000Z | 2020-05-26T08:02:11.000Z | content/team/_index.md | Caballerog/ngvikings-2020 | 501d67b01eb1f06cbcf1b9ddcc0d94a997c89d23 | [
"MIT"
] | 7 | 2020-01-16T19:34:56.000Z | 2021-06-17T02:48:47.000Z | content/team/_index.md | Caballerog/ngvikings-2020 | 501d67b01eb1f06cbcf1b9ddcc0d94a997c89d23 | [
"MIT"
] | 9 | 2020-05-15T10:09:11.000Z | 2020-10-01T10:49:57.000Z | ---
title: Team
type: team
menu:
main:
weight: 60
draft: false
---
{{% hero %}}
ngVikings is a non-profit, non-commercial, 100% community-driven event comprising many Nordic Angular groups with more than 6,500 active members in total.
* ngCopenhagen
* AngularJS Oslo
* AngularJS Gothenburg
* AarhusJS
* ngStockholm
* Angular Finland
{{% /hero %}}
{{< teams types="core=Core Team,contributors=Contributors,mcs=MCs" >}}
{{% partners categories="organizers=Organizing communities" %}}
{{% /partners %}}
| 16.1875 | 155 | 0.702703 | eng_Latn | 0.893493 |
*Source: dev-itpro/developer/methods-auto/query/queryinstance-topnumberofrows-method.md in christianbraeunlich/dynamics365smb-devitpro-pb (CC-BY-4.0/MIT licenses)*

---
title: "Query.TopNumberOfRows([Integer]) Method"
description: "Specifies the maximum number of rows to include in the resulting data set of a query."
ms.author: solsen
ms.custom: na
ms.date: 07/07/2021
ms.reviewer: na
ms.suite: na
ms.tgt_pltfrm: na
ms.topic: reference
author: SusanneWindfeldPedersen
---
[//]: # (START>DO_NOT_EDIT)
[//]: # (IMPORTANT:Do not edit any of the content between here and the END>DO_NOT_EDIT.)
[//]: # (Any modifications should be made in the .xml files in the ModernDev repo.)
# Query.TopNumberOfRows([Integer]) Method
> **Version**: _Available or changed with runtime version 1.0._
Specifies the maximum number of rows to include in the resulting data set of a query.
## Syntax
```AL
[CurrentRows := ] Query.TopNumberOfRows([NewRows: Integer])
```
> [!NOTE]
> This method can be invoked using property access syntax.
## Parameters
*Query*
 Type: [Query](query-data-type.md)
An instance of the [Query](query-data-type.md) data type.
*[Optional] NewRows*
 Type: [Integer](../integer/integer-data-type.md)
The number of rows to include in the resulting data set. If you do not set the NewRows parameter, then the resulting data set will include all rows. If you set the value to 0, then there is no limit and all rows of the data set are returned.
## Return Value
*[Optional] CurrentRows*
 Type: [Integer](../integer/integer-data-type.md)
Gets the current maximum number of rows included in the resulting data set.
[//]: # (IMPORTANT: END>DO_NOT_EDIT)
## Remarks
You use the **TopNumberOfRows** method to limit the resulting dataset to the first set of rows that are generated for the query. For example, you can include only the first 10 or first 100 rows in the resulting dataset. The **TopNumberOfRows** method is useful for key performance indicators such as the top number of customers or sales.
You can also specify the number of rows to include in the dataset by setting the [TopNumberOfRows Property](../../properties/devenv-topnumberofrows-property.md). The **TopNumberOfRows** method will overwrite the **TopNumberOfRows** property setting.
## Example
This code example demonstrates how to use the **TopNumberOfRows** method on a query to return the top 10 customer sales orders based on the quantity of items.
The following query object links table **18 Customer** and table **37 Sales Line** and uses the **TopNumberOfRows** property to get the top 5 customer sales orders based on the quantity of items.
```al
query 50123 "Customer_Sales_Quantity"
{
    QueryType = Normal;

    // Sets the results to include the top 5 results in descending order.
    TopNumberOfRows = 5;
    OrderBy = descending(Qty);

    elements
    {
        dataitem(C; Customer)
        {
            column(Customer_Number; "No.")
            {
            }
            column(Customer_Name; Name)
            {
            }
            dataitem(SL; "Sales Line")
            {
                DataItemLink = "Sell-to Customer No." = C."No.";
                SqlJoinType = InnerJoin;

                column(Qty; Quantity)
                {
                }
            }
        }
    }
}
```
The following codeunit runs the query, saves it as a CSV file, and displays a message that states the number of rows that are returned in the resulting dataset.
```al
codeunit 50100 MyQueryTop10
{
    trigger OnRun()
    begin
        // Overwrites the TopNumberOfRows property and returns the first 10 rows in the dataset.
        MyQuery.TopNumberOfRows(10);

        // Opens the query.
        MyQuery.Open();

        // Reads each row of the dataset and counts the number of rows.
        while MyQuery.Read() do
            Counter += 1;

        // Saves the dataset as a CSV file.
        MyQuery.SaveAsCsv('c:\temp\CustomerSales.csv');

        // Displays a message that shows the number of rows.
        Message(Text000, Counter);
    end;

    var
        MyQuery: Query "Customer_Sales_Quantity";
        Counter: Integer;
        Text000: Label 'Number of rows: %1.';
}
```
## See Also
[Query Data Type](query-data-type.md)
[Query Object](../../devenv-query-object.md)
[Linking and Joining Data Items](../../devenv-query-links-joins.md)
[Aggregating Data in Query Objects](../../devenv-query-totals-grouping.md)
[Filtering Data in Query Objects](../../devenv-query-filters.md)
[Getting Started with AL](../../devenv-get-started.md)
[Developing Extensions](../../devenv-dev-overview.md) | 35.273438 | 339 | 0.671761 | eng_Latn | 0.963722 |
bb70b5380695cf161b69e0c42b8cf5cfa8323df4 | 14,889 | md | Markdown | docs/Upgrade-Guide.md | eugiellimpin/react-intl | 288596ebef4e8911cf8d12abd7b3752c2c73966d | [
"BSD-3-Clause"
] | 1 | 2020-04-01T07:49:30.000Z | 2020-04-01T07:49:30.000Z | docs/Upgrade-Guide.md | Fyrd/react-intl | 899d17239545717283b1d539b011240428e9948d | [
"BSD-3-Clause"
] | null | null | null | docs/Upgrade-Guide.md | Fyrd/react-intl | 899d17239545717283b1d539b011240428e9948d | [
"BSD-3-Clause"
] | null | null | null | # Upgrade Guide for `[email protected]`
<!-- toc -->
- [Breaking API Changes](#breaking-api-changes)
- [Use React 16.3 and upwards](#use-react-163-and-upwards)
- [Migrate withRef to forwardRef](#migrate-withref-to-forwardref)
- [New useIntl hook as an alternative of injectIntl HOC](#new-useintl-hook-as-an-alternative-of-injectintl-hoc)
- [Migrate to using native Intl APIs](#migrate-to-using-native-intl-apis)
- [TypeScript Support](#typescript-support)
- [FormattedRelativeTime](#formattedrelativetime)
- [Enhanced `FormattedMessage` & `formatMessage` rich text formatting](#enhanced-formattedmessage--formatmessage-rich-text-formatting)
- [Before](#before)
- [After](#after)
- [ESM Build](#esm-build)
- [Jest](#jest)
- [webpack](#webpack)
- [Creating intl without using Provider](#creating-intl-without-using-provider)
- [Message Format Syntax Changes](#message-format-syntax-changes)
- [Escape character has been changed to apostrophe (`'`).](#escape-character-has-been-changed-to-apostrophe-)
- [Placeholder argument syntax change](#placeholder-argument-syntax-change)
<!-- tocstop -->
## Breaking API Changes
- `addLocaleData` has been removed. See [Migrate to using native Intl APIs](#migrate-to-using-native-intl-apis) for more details.
- `ReactIntlLocaleData` has been removed. See [Migrate to using native Intl APIs](#migrate-to-using-native-intl-apis) for more details.
- `intlShape` has been removed. See [TypeScript Support](#typescript-support) for more details.
- Change default `textComponent` in `IntlProvider` to `React.Fragment`. In order to keep the old behavior, you can explicitly set `textComponent` to `span`.
```tsx
<IntlProvider textComponent="span" />
```
- `FormattedRelative` has been renamed to `FormattedRelativeTime` and its API has changed significantly. See [FormattedRelativeTime](#formattedrelativetime) for more details.
- `formatRelative` has been renamed to `formatRelativeTime` and its API has changed significantly. See [FormattedRelativeTime](#formattedrelativetime) for more details.
- Message Format syntax changes. See [Message Format Syntax Changes](#message-format-syntax-changes) for more details.
- `IntlProvider` no longer inherits from upstream `IntlProvider`.
## Use React 16.3 and upwards
React Intl v3 supports the new context API, fixing all kinds of tree update problems :tada:
In addition it makes use of the new lifecycle hooks (and gets rid of the [deprecated](https://reactjs.org/blog/2018/03/27/update-on-async-rendering.html) ones).
It also supports the new `React.forwardRef()` enabling users to directly access refs using the standard `ref` prop (see beneath for further information).
## Migrate withRef to forwardRef
With the update to React `>= 16.3` we got the option to use the new `React.forwardRef()` feature and because of this deprecated the use of the `withRef` option for the `injectIntl` HOC in favour of `forwardRef`.
When `forwardRef` is set to true, you can now simply pretend the HOC wasn't there at all.
Intl v2:
```tsx
import React from 'react';
import {injectIntl} from 'react-intl';
class MyComponent extends React.Component {
doSomething = () => console.log(this.state || null);
render() {
return <div>Hello World</div>;
}
}
export default injectIntl(MyComponent, {withRef: true});
// somewhere else
class Parent extends React.Component {
componentDidMount() {
this.myComponentRef.getWrappedInstance().doSomething();
}
render() {
return (
<MyComponent
ref={ref => {
this.myComponentRef = ref;
}}
/>
);
}
}
```
Intl v3:
```tsx
import React from 'react';
import {injectIntl} from 'react-intl';
class MyComponent extends React.Component {
doSomething = () => console.log(this.state || null);
render() {
return <div>Hello World</div>;
}
}
export default injectIntl(MyComponent, {forwardRef: true});
// somewhere else
class Parent extends React.Component {
myComponentRef = React.createRef();
componentDidMount() {
this.myComponentRef.doSomething(); // no need to call getWrappedInstance()
}
render() {
return <MyComponent ref={this.myComponentRef} />;
}
}
```
## New useIntl hook as an alternative of injectIntl HOC
This v3 release also supports the latest React hook API for user with React `>= 16.8`. You can now take `useIntl` hook as an alternative to `injectIntl` HOC on _function components_. Both methods allow you to access the `intl` instance, here is a quick comparison:
```tsx
// injectIntl
import {injectIntl} from 'react-intl';
const MyComponentWithHOC = injectIntl(({intl, ...props}) => {
// do something
});
// useIntl
import {useIntl} from 'react-intl';
const MyComponentWithHook = props => {
const intl = useIntl();
// do something
};
```
To keep the API surface clean and simple, we only provide `useIntl` hook in the package. If preferable, user can wrap this built-in hook to make customized hook like `useFormatMessage` easily. Please visit React's official website for more general [introduction on React hooks](https://reactjs.org/docs/hooks-intro.html).
## Migrate to using native Intl APIs
React Intl v3 no longer comes with CLDR data and rely on native Intl API instead. Specifically the new APIs we're relying on are:
- [Intl.PluralRules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/PluralRules): This can be polyfilled using [this package](https://www.npmjs.com/package/intl-pluralrules).
- [Intl.RelativeTimeFormat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RelativeTimeFormat): This can be polyfilled using [this package](https://www.npmjs.com/package/@formatjs/intl-relativetimeformat).
This shift is meant to future-proof React Intl as these APIs are all stable and being implemented in modern browsers. This also means we no longer package and consume CLDRs in this package.
If you previously were using `addLocaleData` to support older browsers, we recommend you do the following:
1. If you're supporting browsers that do not have [Intl.PluralRules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/PluralRules) (e.g IE11 & Safari 12-), include this [polyfill](https://www.npmjs.com/package/@formatjs/intl-pluralrules) in your build.
2. If you're supporting browsers that do not have [Intl.RelativeTimeFormat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RelativeTimeFormat) (e.g IE11, Edge, Safari 12-), include this [polyfill](https://www.npmjs.com/package/@formatjs/intl-relativetimeformat) in your build along with individual CLDR data for each locale you support.
```js
if (!Intl.PluralRules) {
require('intl-pluralrules/polyfill');
require('@formatjs/intl-pluralrules/dist/locale-data/de'); // Add locale data for de
}
if (!Intl.RelativeTimeFormat) {
require('@formatjs/intl-relativetimeformat/polyfill');
require('@formatjs/intl-relativetimeformat/dist/locale-data/de'); // Add locale data for de
}
```
When using React Intl in Node.js, your `node` binary has to either:
- Get compiled with `full-icu` using these [instructions](https://nodejs.org/api/intl.html)
**OR**
- Uses [`full-icu` npm package](https://www.npmjs.com/package/full-icu)
## TypeScript Support
`react-intl` has been rewritten in TypeScript and thus has native TypeScript support. Therefore, we've also removed `prop-types` dependency and expose `IntlShape` as an interface instead.
All types should be available from top level `index` file without importing from specific subfiles. For example:
```ts
import {IntlShape} from 'react-intl'; // Correct
import {IntlShape} from 'react-intl/lib/types'; // Incorrect
```
If we're missing any interface top level support, please let us know and/or submitting a PR is greatly appreciated :)
## FormattedRelativeTime
When we introduced `FormattedRelative`, the spec for [`Intl.RelativeTimeFormat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RelativeTimeFormat) was still unstable. It has now reached stage 3 and multiple browsers have implemented it. However, its API is different from `FormattedRelative` so we've adjusted its API to match the spec which means it's not backwards compatible.
1. All `units` (such as `day-short`) becomes a combination of `unit` & `style`:
```tsx
<FormattedRelative units="second-short"/>
// will be
<FormattedRelativeTime unit="second" style="short"/>
```
2. `style` becomes `numeric` (which is the default):
```tsx
<FormattedRelative style="numeric"/>
// will be
<FormattedRelativeTime />
<FormattedRelative style="best fit"/>
// will be
<FormattedRelativeTime numeric="auto"/>
```
3. Type of `value` is no longer `Date`, but rather `delta` in the specified `unit`:
```tsx
<FormattedRelative value={Date.now() - 1000} units="second-narrow"/>
// will be
<FormattedRelativeTime value={-1} unit="second" style="narrow" />
<FormattedRelative value={Date.now() + 2000} units="second-narrow"/>
// will be
<FormattedRelativeTime value={2} unit="second" style="narrow" />
```
5. `updateInterval` becomes `updateIntervalInSeconds` and will only take the time delta in seconds. Update behavior remains the same, e.g:
```tsx
<FormattedRelativeTime
value={2}
numeric="auto"
unit="second"
style="narrow"
updateIntervalInSeconds={1}
/>
// Initially prints: `in 2s`
// 1 second later: `in 1s`
// 1 second later: `now`
// 1 second later: `1s ago`
// 60 seconds later: `1m ago`
```
6. `initialNow` has been removed.
Similarly, the functional counterpart of this component which is `formatRelative` has been renamed to `formatRelativeTime` and its parameters have been changed to reflect this component's props accordingly.
7. Implementing `FormattedRelative` behavior
You can use `@formatjs/intl-utils` to get close to the previous behavior like this:
```tsx
import {selectUnit} from '@formatjs/intl-utils';
const {value, unit} = selectUnit(Date.now() - 48 * 3600 * 1000);
// render
<FormattedRelativeTime value={value} unit={unit} />;
```
## Enhanced `FormattedMessage` & `formatMessage` rich text formatting
In v2, in order to do rich text formatting (embedding a `ReactElement`), you had to do this:
```tsx
<FormattedMessage
defaultMessage="To buy a shoe, { link } and { cta }"
values={{
link: (
<a class="external_link" target="_blank" href="https://www.shoe.com/">
visit our website
</a>
),
cta: <strong class="important">eat a shoe</strong>,
}}
/>
```
Now you can do:
```tsx
<FormattedMessage
defaultMessage="To buy a shoe, <a>visit our website</a> and <cta>eat a shoe</cta>"
values={{
a: msg => (
<a class="external_link" target="_blank" href="https://www.shoe.com/">
{msg}
</a>
),
cta: msg => <strong class="important">{msg}</strong>,
}}
/>
```
The change solves several issues:
1. Contextual information was lost when you need to style part of the string: In this example above, `link` effectively is a blackbox placeholder to a translator. It can be a person, an animal, or a timestamp. Conveying contextual information via `description` & `placeholder` variable is often not enough since the variable can get sufficiently complicated.
2. This brings feature-parity with other translation libs, such as [fluent](https://projectfluent.org/) by Mozilla (using Overlays).
If previously in cases where you pass in a `ReactElement` to a placeholder we highly recommend that you rethink the structure so that as much text is declared as possible:
### Before
```tsx
<FormattedMessage
defaultMessage="Hello, {name} is {awesome} and {fun}"
values={{
name: <b>John</b>,
awesome: <span style="font-weight: bold;">awesome</span>
fun: <span>fun and <FormattedTime value={Date.now()}/></span>
}}
/>
```
### After
```tsx
<FormattedMessage
defaultMessage="Hello, <b>John</b> is <custom>awesome</custom> and <more>fun and {ts, time}</more>"
values={{
b: name => <b>{name}</b>,
custom: str => <span style="font-weight: bold;">{str}</span>,
more: (...chunks) => <span>{chunks}</span>,
}}
/>
```
## ESM Build
`react-intl` and its underlying libraries (`intl-messageformat-parser`, `intl-messageformat`, `@formatjs/intl-relativetimeformat`, `intl-format-cache`, `intl-locales-supported`, `intl-utils`) export ESM artifacts. This means you should configure your build toolchain to transpile those libraries.
### Jest
Add `transformIgnorePatterns` to always include those libraries, e.g:
```tsx
{
transformIgnorePatterns: [
'/node_modules/(?!intl-messageformat|intl-messageformat-parser).+\\.js$',
],
}
```
### webpack
If you're using `babel-loader`, add those libraries in `include`, e.g:
```tsx
include: [
path.join(__dirname, "node_modules/react-intl"),
path.join(__dirname, "node_modules/intl-messageformat"),
path.join(__dirname, "node_modules/intl-messageformat-parser"),
],
```
## Creating intl without using Provider
We've added a new API called `createIntl` that allows you to create an `IntlShape` object without using `Provider`. This allows you to format things outside of React lifecycle while reusing the same `intl` object. For example:
```tsx
import {createIntl, createIntlCache, RawIntlProvider} from 'react-intl'
// This is optional but highly recommended
// since it prevents memory leak
const cache = createIntlCache()
const intl = createIntl({
locale: 'fr-FR',
messages: {}
}, cache)
// Call imperatively
intl.formatNumber(20)
// Pass it to IntlProvider
<RawIntlProvider value={intl}>{foo}</RawIntlProvider>
```
This is especially beneficial in SSR where you can reuse the same `intl` object across requests.
## Message Format Syntax Changes
We've rewritten our parser to be more faithful to [ICU Message Format](https://ssl.icu-project.org/apiref/icu4j/com/ibm/icu/text/MessageFormat.html), in order to potentially support skeleton. So far the backwards-incompatible changes are:
### Escape character has been changed to apostrophe (`'`).
Previously while we were using ICU message format syntax, our escape char was backslash (`\`). This however creates issues with strict ICU translation vendors that support other implementations like ICU4J/ICU4C. Thanks to [@pyrocat101](https://github.com/pyrocat101) we've changed this behavior to be spec-compliant. This means:
```tsx
// Before
<FormattedMessage defaultMessage="\\{foo\\}" /> //prints out "{foo}"
// After
<FormattedMessage defaultMessage="'{foo}'" /> //prints out "{foo}"
```
We highly recommend reading the spec to learn more about how quote/escaping works [here](http://userguide.icu-project.org/formatparse/messages) under **Quoting/Escaping** section.
### Placeholder argument syntax change
Placeholder argument can no longer have `-` (e.g: `this is a {placeholder-var}` is invalid but `this is a {placeholder_var}` is).
| 37.598485 | 413 | 0.73336 | eng_Latn | 0.933178 |
bb71154f19a87ff54eb3effd403da9932ab2d5f3 | 252 | md | Markdown | README.md | bmanners/HyperV-Manager | aeb8ad579cd79a87d1b975141c9cc0bc7a4a6cea | [
"MIT"
] | 2 | 2018-07-12T03:56:03.000Z | 2020-11-26T16:46:36.000Z | README.md | bmanners/HyperV-Manager | aeb8ad579cd79a87d1b975141c9cc0bc7a4a6cea | [
"MIT"
] | null | null | null | README.md | bmanners/HyperV-Manager | aeb8ad579cd79a87d1b975141c9cc0bc7a4a6cea | [
"MIT"
] | null | null | null | # HyperV-Manager
Hyper-V Manager - Tray Icon
This application makes a tray icon to show the user the status of thier HyperV VM's.
The code is based on a blog post by Jerry Orman http://blogs.msdn.com/b/jorman/archive/2010/01/24/hyper-v-manager.aspx
| 36 | 118 | 0.765873 | eng_Latn | 0.712054 |
bb7170a4f8dbebb9cb872595d1de9607b7e7db57 | 2,290 | md | Markdown | packages/color-slider/README.md | godanny86/spectrum-web-components | b7f12041404bffdd65b1472f22b6f3eacbf7157c | [
"Apache-2.0"
] | null | null | null | packages/color-slider/README.md | godanny86/spectrum-web-components | b7f12041404bffdd65b1472f22b6f3eacbf7157c | [
"Apache-2.0"
] | null | null | null | packages/color-slider/README.md | godanny86/spectrum-web-components | b7f12041404bffdd65b1472f22b6f3eacbf7157c | [
"Apache-2.0"
] | null | null | null | ## Description
An `<sp-color-slider>` lets users visually change an individual channel of a color. The background of the `<sp-color-slider>` is a visual representation of the range of values a user can select from. This can represent color properties such as hues, color channel values (such as RGB or CMYK levels), or opacity. Currently, the slider only supports leveraging the `hue` property.
### Usage
[](https://www.npmjs.com/package/@spectrum-web-components/color-slider)
[](https://bundlephobia.com/result?p=@spectrum-web-components/color-slider)
```
yarn add @spectrum-web-components/color-slider
```
Import the side effectful registration of `<sp-color-slider>` via:
```
import '@spectrum-web-components/color-slider/sp-color-slider.js';
```
When looking to leverage the `ColorSlider` base class as a type and/or for extension purposes, do so via:
```
import { ColorSlider } from '@spectrum-web-components/color-slider';
```
## Color Formatting
When using the color elements, use `el.color` to access the `color` property, which should manage itself in the colour format supplied. If you supply a color in `rgb()` format, `el.color` should return the color in `rgb()` format, as well.
The current color formats supported are as follows:
- Hex3, Hex4, Hex6, Hex8
- HSV, HSVA
- HSL, HSLA
- RGB, RGBA
- Strings (eg "red", "blue")
**Please note for the following formats: HSV, HSVA, HSL, HSLA**
When using the HSL or HSV formats, and a color's value (in HSV) is set to 0, or its luminosity (in HSL) is set to 0 or 1, the hue and saturation values may not be preserved by the element's `color` property. This is detailed in the [TinyColor documentation](https://www.npmjs.com/package/@ctrl/tinycolor). Seperately, the element's `value` property is directly managed by the hue as represented in the interface.
## Default
```html
<sp-color-slider></sp-color-slider>
```
### Vertical
```html
<sp-color-slider vertical></sp-color-slider>
```
### Disabled
```html
<sp-color-slider disabled></sp-color-slider>
```
| 39.482759 | 412 | 0.735371 | eng_Latn | 0.954416 |
bb71927df1296c681d6c9a2f62e225b3b3812b5f | 178 | md | Markdown | README.md | joshcullen/shiny_webinar | 42f4a9b529efd4fb2c81ec7941a207e0ea28750a | [
"MIT"
] | null | null | null | README.md | joshcullen/shiny_webinar | 42f4a9b529efd4fb2c81ec7941a207e0ea28750a | [
"MIT"
] | null | null | null | README.md | joshcullen/shiny_webinar | 42f4a9b529efd4fb2c81ec7941a207e0ea28750a | [
"MIT"
] | null | null | null | # shiny_webinar
Code and slides used to present the Shiny app from {bayesmove}. Includes bare-bones app to focus on feature of reactive time series plot and updated leaflet map. | 59.333333 | 161 | 0.803371 | eng_Latn | 0.999164 |
bb726e111b77501a592279c3d097d2ae612b9bc9 | 8,362 | md | Markdown | docs/mfc/reference/cdraglistbox-class.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mfc/reference/cdraglistbox-class.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mfc/reference/cdraglistbox-class.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Classe CDragListBox | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-mfc
ms.topic: reference
f1_keywords:
- CDragListBox
- AFXCMN/CDragListBox
- AFXCMN/CDragListBox::CDragListBox
- AFXCMN/CDragListBox::BeginDrag
- AFXCMN/CDragListBox::CancelDrag
- AFXCMN/CDragListBox::Dragging
- AFXCMN/CDragListBox::DrawInsert
- AFXCMN/CDragListBox::Dropped
- AFXCMN/CDragListBox::ItemFromPt
dev_langs:
- C++
helpviewer_keywords:
- CDragListBox [MFC], CDragListBox
- CDragListBox [MFC], BeginDrag
- CDragListBox [MFC], CancelDrag
- CDragListBox [MFC], Dragging
- CDragListBox [MFC], DrawInsert
- CDragListBox [MFC], Dropped
- CDragListBox [MFC], ItemFromPt
ms.assetid: fee20b42-60ae-4aa9-83f9-5a3d9b96e33b
author: mikeblome
ms.author: mblome
ms.workload:
- cplusplus
ms.openlocfilehash: 2d9e4fb2835870fa9c3a46dcd5cda5b0cca600a7
ms.sourcegitcommit: 799f9b976623a375203ad8b2ad5147bd6a2212f0
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/19/2018
ms.locfileid: "46432978"
---
# <a name="cdraglistbox-class"></a>Classe CDragListBox
Além de fornecer a funcionalidade de uma caixa de listagem do Windows, o `CDragListBox` classe permite que o usuário mova itens de caixa de lista, como nomes de arquivo, dentro da caixa de lista.
## <a name="syntax"></a>Sintaxe
```
class CDragListBox : public CListBox
```
## <a name="members"></a>Membros
### <a name="public-constructors"></a>Construtores Públicos
|Nome|Descrição|
|----------|-----------------|
|[CDragListBox::CDragListBox](#cdraglistbox)|Constrói um objeto `CDragListBox`.|
### <a name="public-methods"></a>Métodos Públicos
|Nome|Descrição|
|----------|-----------------|
|[CDragListBox::BeginDrag](#begindrag)|Chamado pelo framework quando uma operação de arrastar for iniciado.|
|[CDragListBox::CancelDrag](#canceldrag)|Chamado pelo framework quando uma operação de arrastar foi cancelada.|
|[CDragListBox::Dragging](#dragging)|Chamado pelo framework durante uma operação de arrastar.|
|[CDragListBox::DrawInsert](#drawinsert)|Desenha o guia de inserção da caixa de listagem de arrastar.|
|[CDragListBox::Dropped](#dropped)|Chamado pelo framework depois que o item foi descartado.|
|[CDragListBox::ItemFromPt](#itemfrompt)|Retorna as coordenadas do item que está sendo arrastado.|
## <a name="remarks"></a>Comentários
Caixas de listagem com esse recurso permitem aos usuários solicitar os itens em uma lista de maneira que for mais útil para eles. Por padrão, a caixa de listagem será mover o item para o novo local na lista. No entanto, `CDragListBox` objetos podem ser personalizados para copiar itens em vez de movê-los.
O controle de caixa de listagem associado com o `CDragListBox` classe não deve ter o LBS_SORT ou o estilo LBS_MULTIPLESELECT. Para obter uma descrição dos estilos de caixa de lista, consulte [estilos de caixa de listagem](../../mfc/reference/styles-used-by-mfc.md#list-box-styles).
Para usar uma caixa de listagem de arrastar em uma caixa de diálogo existente do seu aplicativo, adicione um controle de caixa de listagem ao seu modelo de caixa de diálogo usando o editor de caixa de diálogo e, em seguida, atribuir uma variável de membro (da categoria `Control` e o tipo de variável `CDragListBox`) correspondente à caixa de listagem controle em seu modelo de caixa de diálogo.
Para obter mais informações sobre como atribuir controles para variáveis de membro, consulte [atalho para definir variáveis de membro para controles de caixa de diálogo](../../windows/defining-member-variables-for-dialog-controls.md).
## <a name="inheritance-hierarchy"></a>Hierarquia de herança
[CObject](../../mfc/reference/cobject-class.md)
[CCmdTarget](../../mfc/reference/ccmdtarget-class.md)
[CWnd](../../mfc/reference/cwnd-class.md)
[CListBox](../../mfc/reference/clistbox-class.md)
`CDragListBox`
## <a name="requirements"></a>Requisitos
**Cabeçalho:** afxcmn. h
## <a name="begindrag"></a> CDragListBox::BeginDrag
Chamado pela estrutura de quando ocorre um evento que poderia começar uma operação de arrastar, como pressionar o botão esquerdo do mouse.
```
virtual BOOL BeginDrag(CPoint pt);
```
### <a name="parameters"></a>Parâmetros
*pt*<br/>
Um [CPoint](../../atl-mfc-shared/reference/cpoint-class.md) objeto que contém as coordenadas do item que está sendo arrastado.
### <a name="return-value"></a>Valor de retorno
Diferente de zero se arrastar for permitido, caso contrário, 0.
### <a name="remarks"></a>Comentários
Substitua essa função se você quiser controlar o que acontece quando começa uma operação de arrastar. A implementação padrão captura o mouse e permanece no modo arrastar até que o usuário clica no botão esquerdo ou direito do mouse ou pressiona ESC, momento em que a operação de arrastar seja cancelada.
## <a name="canceldrag"></a> CDragListBox::CancelDrag
Chamado pelo framework quando uma operação de arrastar foi cancelada.
```
virtual void CancelDrag(CPoint pt);
```
### <a name="parameters"></a>Parâmetros
*pt*<br/>
Um [CPoint](../../atl-mfc-shared/reference/cpoint-class.md) objeto que contém as coordenadas do item que está sendo arrastado.
### <a name="remarks"></a>Comentários
Substitua essa função para lidar com qualquer processamento especial para o controle de caixa de listagem.
## <a name="cdraglistbox"></a> CDragListBox::CDragListBox
Constrói um objeto `CDragListBox`.
```
CDragListBox();
```
## <a name="dragging"></a> CDragListBox::Dragging
Chamado pelo framework quando um item de caixa de listagem está sendo arrastado dentro a `CDragListBox` objeto.
```
virtual UINT Dragging(CPoint pt);
```
### <a name="parameters"></a>Parâmetros
*pt*<br/>
Um [CPoint](../../atl-mfc-shared/reference/cpoint-class.md) objeto que contém x e y coordenadas do cursor de tela.
### <a name="return-value"></a>Valor de retorno
A ID de recurso do cursor a ser exibido. Os seguintes valores são possíveis:
- DL_COPYCURSOR indica que o item será copiado.
- DL_MOVECURSOR indica que o item será movido.
- DL_STOPCURSOR indica que o destino de soltar atual não é aceitável.
### <a name="remarks"></a>Comentários
O comportamento padrão retorna DL_MOVECURSOR. Substitua essa função para fornecer funcionalidade adicional.
## <a name="drawinsert"></a> CDragListBox::DrawInsert
Chamado pelo framework para desenhar o guia de inserção antes do item com índice indicado.
```
virtual void DrawInsert(int nItem);
```
### <a name="parameters"></a>Parâmetros
*nItem*<br/>
Índice baseado em zero do ponto de inserção.
### <a name="remarks"></a>Comentários
Um valor de - 1 limpa o guia de inserção. Substitua essa função para modificar a aparência ou o comportamento do guia de inserção.
## <a name="dropped"></a> CDragListBox::Dropped
Chamado pelo framework quando um item é removido dentro de um `CDragListBox` objeto.
```
virtual void Dropped(
int nSrcIndex,
CPoint pt);
```
### <a name="parameters"></a>Parâmetros
*nSrcIndex*<br/>
Especifica o índice baseado em zero da cadeia de caracteres descartada.
*pt*<br/>
Um [CPoint](../../atl-mfc-shared/reference/cpoint-class.md) objeto que contém as coordenadas do site de destino.
### <a name="remarks"></a>Comentários
O comportamento padrão copia o item de caixa de listagem e seus dados para o novo local e, em seguida, exclui o item original. Substitua essa função para personalizar o comportamento padrão, como a habilitação de cópias dos itens de caixa de lista a ser arrastado para outros locais dentro da lista.
## <a name="itemfrompt"></a> CDragListBox::ItemFromPt
Chamada para essa função para recuperar o índice baseado em zero do item de caixa de lista localizada em *pt*.
```
int ItemFromPt(
CPoint pt,
BOOL bAutoScroll = TRUE) const;
```
### <a name="parameters"></a>Parâmetros
*pt*<br/>
Um [CPoint](../../atl-mfc-shared/reference/cpoint-class.md) objeto que contém as coordenadas de um ponto dentro da caixa de listagem.
*bAutoScroll*<br/>
Diferente de zero se a rolagem é permitida, caso contrário, 0.
### <a name="return-value"></a>Valor de retorno
Índice baseado em zero do item de caixa de lista arrastar.
## <a name="see-also"></a>Consulte também
[Exemplo MFC TSTCON](../../visual-cpp-samples.md)<br/>
[Classe CListBox](../../mfc/reference/clistbox-class.md)<br/>
[Gráfico da hierarquia](../../mfc/hierarchy-chart.md)<br/>
[Classe CListBox](../../mfc/reference/clistbox-class.md)
| 35.582979 | 395 | 0.746831 | por_Latn | 0.992853 |
bb7296a42b97b38b2cfe234480d36eec1096fb52 | 6,713 | md | Markdown | packages/common/test/app/snapshots/index.js.md | abouthiroppy/dish-for-react | 7714c036f58626a8e418089d4631771c1d52a2e6 | [
"MIT"
] | 21 | 2016-08-01T10:26:52.000Z | 2017-01-18T19:57:14.000Z | packages/common/test/app/snapshots/index.js.md | abouthiroppy/dish-for-react | 7714c036f58626a8e418089d4631771c1d52a2e6 | [
"MIT"
] | 43 | 2016-10-03T15:23:49.000Z | 2017-04-29T07:28:16.000Z | packages/common/test/app/snapshots/index.js.md | abouthiroppy/dish-for-react | 7714c036f58626a8e418089d4631771c1d52a2e6 | [
"MIT"
] | 2 | 2016-11-28T03:47:15.000Z | 2017-04-21T13:21:51.000Z | # Snapshot report for `packages/common/test/app/index.js`
The actual snapshot is saved in `index.js.snap`.
Generated by [AVA](https://ava.li).
## should expand package.json
> Snapshot 1
`{␊
"version": "0.0.1",␊
"description": "",␊
"main": "index.js",␊
"scripts": {␊
"lint": "eslint .",␊
"changelog": "conventional-changelog -p angular -i CHANGELOG.md -s -r 0"␊
},␊
"keywords": [],␊
"name": "app",␊
"foo": [␊
1,␊
2␊
],␊
"bar": {␊
"1": 2␊
}␊
}␊
`
## should rename the file name
> Snapshot 1
`The MIT License (MIT)␊
␊
Copyright (c) 2017␊
␊
Permission is hereby granted, free of charge, to any person obtaining a copy␊
of this software and associated documentation files (the "Software"), to deal␊
in the Software without restriction, including without limitation the rights␊
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell␊
copies of the Software, and to permit persons to whom the Software is␊
furnished to do so, subject to the following conditions:␊
␊
The above copyright notice and this permission notice shall be included in all␊
copies or substantial portions of the Software.␊
␊
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR␊
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,␊
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE␊
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER␊
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,␊
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE␊
SOFTWARE.␊
`
> Snapshot 2
`<div align="center">␊
<h1>test</h1>␊
</div>␊
␊
[![NPM][npm-image]][npm-url]␊
[![Travis][travis-image]][travis-url]␊
[![Codecov][codecov-image]][codecov-url]␊
[![David][david-image]][david-url]␊
[![Dependencyci][dependencyci-image]][dependencyci-url]␊
␊
[npm-image]: https://img.shields.io/npm/v/test.svg?style=flat-square␊
[npm-url]: https://npmjs.org/package/test␊
[travis-image]: https://img.shields.io/travis/abouthiroppy/test.svg?style=flat-squa␊
[travis-url]: https://travis-ci.org/abouthiroppy/test␊
[codecov-image]: https://img.shields.io/codecov/c/github/abouthiroppy/test/master.s␊
[codecov-url]: https://codecov.io/gh/abouthiroppy/test␊
[david-image]: https://img.shields.io/david/abouthiroppy/test.svg?style=flat-square␊
[david-url]: https://david-dm.org/abouthiroppy/test␊
[dependencyci-image]: https://img.shields.io/badge/Dependency/CI-passing-brightgreen.svg?style=flat-␊
[dependencyci-url]: https://dependencyci.com/github/abouthiroppy/test␊
`
> Snapshot 3
`{␊
"version": "0.0.1",␊
"description": "",␊
"main": "index.js",␊
"scripts": {␊
"lint": "eslint .",␊
"changelog": "conventional-changelog -p angular -i CHANGELOG.md -s -r 0",␊
"test": "nyc ava"␊
},␊
"keywords": [],␊
"name": "test",␊
"ava": {␊
"files": [␊
"test/**/*.js",␊
"!test/helper/*.js"␊
],␊
"tap": true,␊
"failFast": true,␊
"concurrency": 5␊
}␊
}␊
`
## should return template files
> Snapshot 1
`The MIT License (MIT)␊
␊
Copyright (c) 2017␊
␊
Permission is hereby granted, free of charge, to any person obtaining a copy␊
of this software and associated documentation files (the "Software"), to deal␊
in the Software without restriction, including without limitation the rights␊
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell␊
copies of the Software, and to permit persons to whom the Software is␊
furnished to do so, subject to the following conditions:␊
␊
The above copyright notice and this permission notice shall be included in all␊
copies or substantial portions of the Software.␊
␊
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR␊
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,␊
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE␊
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER␊
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,␊
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE␊
SOFTWARE.␊
`
> Snapshot 2
`<div align="center">␊
<h1>app</h1>␊
</div>␊
␊
[![NPM][npm-image]][npm-url]␊
[![Travis][travis-image]][travis-url]␊
[![Codecov][codecov-image]][codecov-url]␊
[![David][david-image]][david-url]␊
[![Dependencyci][dependencyci-image]][dependencyci-url]␊
␊
[npm-image]: https://img.shields.io/npm/v/app.svg?style=flat-square␊
[npm-url]: https://npmjs.org/package/app␊
[travis-image]: https://img.shields.io/travis/abouthiroppy/app.svg?style=flat-squa␊
[travis-url]: https://travis-ci.org/abouthiroppy/app␊
[codecov-image]: https://img.shields.io/codecov/c/github/abouthiroppy/app/master.s␊
[codecov-url]: https://codecov.io/gh/abouthiroppy/app␊
[david-image]: https://img.shields.io/david/abouthiroppy/app.svg?style=flat-square␊
[david-url]: https://david-dm.org/abouthiroppy/app␊
[dependencyci-image]: https://img.shields.io/badge/Dependency/CI-passing-brightgreen.svg?style=flat-␊
[dependencyci-url]: https://dependencyci.com/github/abouthiroppy/app␊
`
> Snapshot 3
`{␊
"version": "0.0.1",␊
"description": "",␊
"main": "index.js",␊
"scripts": {␊
"lint": "eslint .",␊
"changelog": "conventional-changelog -p angular -i CHANGELOG.md -s -r 0",␊
"test": "nyc ava"␊
},␊
"keywords": [],␊
"name": "app",␊
"ava": {␊
"files": [␊
"test/**/*.js",␊
"!test/helper/*.js"␊
],␊
"tap": true,␊
"failFast": true,␊
"concurrency": 5␊
}␊
}␊
`
## should use jest
> Snapshot 1
`{␊
"version": "0.0.1",␊
"description": "",␊
"main": "index.js",␊
"scripts": {␊
"lint": "eslint .",␊
"changelog": "conventional-changelog -p angular -i CHANGELOG.md -s -r 0",␊
"test": "jest --coverage"␊
},␊
"keywords": [],␊
"name": "app",␊
"jest": {␊
"moduleNameMapper": {␊
"^.+\\\\.(css)$": "identity-obj-proxy"␊
},␊
"moduleFileExtensions": [␊
"js"␊
]␊
}␊
}␊
`
| 32.274038 | 105 | 0.604797 | yue_Hant | 0.500161 |
bb72acdbd08eac31c12ae78ad516f56ce6621537 | 3,448 | md | Markdown | docs/delphi/index.md | hsy822/ethereum-org-website | ca2dd20298d06af41c304d9ee4a054f682746f0c | [
"MIT"
] | 2 | 2020-09-07T17:26:06.000Z | 2020-09-15T10:24:04.000Z | docs/delphi/index.md | hsy822/ethereum-org-website | ca2dd20298d06af41c304d9ee4a054f682746f0c | [
"MIT"
] | 6 | 2021-06-28T20:25:08.000Z | 2022-02-27T09:36:44.000Z | docs/delphi/index.md | hsy822/ethereum-org-website | ca2dd20298d06af41c304d9ee4a054f682746f0c | [
"MIT"
] | null | null | null | ---
title: Ethereum for Delphi Developers
meta:
- name: description
content: Learn how to develop for Ethereum using the Delphi programming language
- property: og:title
content: Ethereum for Delphi Developers | Ethereum.org
- property: og:description
content: Learn how to develop for Ethereum using the Delphi programming language
lang: en-US
sidebar: auto
sidebarDepth: 0
---
# Ethereum for Delphi Developers
<div class="featured">Learn how to develop for Ethereum using the Delphi programming language</div>
Use Ethereum to create decentralized applications (or "dapps") that utilize the benefits of cryptocurrency and blockchain technology. These dapps can be trustworthy, meaning that once they are deployed to Ethereum, they will always run as programmed. They can control digital assets in order to create new kinds of financial applications. They can be decentralized, meaning that no single entity or person controls them and that they are nearly impossible to censor.
Build decentralized applications on top of Ethereum and interact with smart contracts using the Delphi programming language!
<img src="delphi.png"/>
## Getting Started with Smart Contracts and the Solidity Language
**Take your first steps toward integrating Delphi with Ethereum**
Need a more basic primer first? Check out [ethereum.org/learn](/learn/) or [ethereum.org/developers](/developers/).
- [Blockchain Explained](https://kauri.io/article/d55684513211466da7f8cc03987607d5/blockchain-explained)
- [Understanding Smart Contracts](https://kauri.io/article/e4f66c6079e74a4a9b532148d3158188/ethereum-101-part-5-the-smart-contract)
- [Write your First Smart Contract](https://kauri.io/article/124b7db1d0cf4f47b414f8b13c9d66e2/remix-ide-your-first-smart-contract)
- [Learn How to Compile and Deploy Solidity](https://kauri.io/article/973c5f54c4434bb1b0160cff8c695369/understanding-smart-contract-compilation-and-deployment)
## Beginner References and Links
**Introducing the Delphereum library**
- [What is Delphereum?](https://github.com/svanas/delphereum/blob/master/README.md)
- [Connecting Delphi to a local (in-memory) blockchain](https://medium.com/@svanas/connecting-delphi-to-a-local-in-memory-blockchain-9a1512d6c5b0)
- [Connecting Delphi to the Ethereum main net](https://medium.com/@svanas/connecting-delphi-to-the-ethereum-main-net-5faf1feffd83)
- [Connecting Delphi to Smart Contracts](https://medium.com/@svanas/connecting-delphi-to-smart-contracts-3146b12803a1)
**Want to skip setup for now, and jump straight to the samples?**
- [A 3-minute Smart Contract and Delphi - Part 1](https://medium.com/@svanas/a-3-minute-smart-contract-and-delphi-61d998571d)
- [A 3-minute Smart Contract and Delphi - Part 2](https://medium.com/@svanas/a-3-minute-smart-contract-and-delphi-part-2-446925faa47b)
## Intermediate Articles
- [Generating an Ethereum-signed message signature in Delphi](https://medium.com/@svanas/generating-an-ethereum-signed-message-signature-in-delphi-75661ce5031b)
- [Transferring Ether with Delphi](https://medium.com/@svanas/transferring-ether-with-delphi-b5f24b1a98a4)
- [Transferring ERC-20 tokens with Delphi](https://medium.com/@svanas/transferring-erc-20-tokens-with-delphi-bb44c05b295d)
## Advanced Use Patterns
- [Delphi and Ethereum Name Service (ENS)](https://medium.com/@svanas/delphi-and-ethereum-name-service-ens-4443cd278af7)
Looking for more resources? Check out [ethereum.org/developers](/developers/).
| 56.52459 | 456 | 0.792053 | eng_Latn | 0.802481 |
bb730c3b2f8810b7d40df8d105a6200a467ca94e | 438 | md | Markdown | data-structure/btree/README.md | ASMlover/study | 5878f862573061f94c5776a351e30270dfd9966a | [
"BSD-2-Clause"
] | 22 | 2015-05-18T07:04:36.000Z | 2021-08-02T03:01:43.000Z | data-structure/btree/README.md | ASMlover/study | 5878f862573061f94c5776a351e30270dfd9966a | [
"BSD-2-Clause"
] | 1 | 2017-08-31T22:13:57.000Z | 2017-09-05T15:00:25.000Z | data-structure/btree/README.md | ASMlover/study | 5878f862573061f94c5776a351e30270dfd9966a | [
"BSD-2-Clause"
] | 6 | 2015-06-06T07:16:12.000Z | 2021-07-06T13:45:56.000Z | # **二叉树** #
***
* 二叉树基本知识
* 二叉树链式实现
* 二叉搜索树
## **1. 二叉树基本知识** ##
请参见./btree.md
## **2. 二叉树链式实现** ##
> ### **2.1 思路** ###
1) 内部使用链表结构来存储二叉树
2) 内部的各种函数使用递归
3) 优点是实现简单
4) 缺点是大量使用了递归
5) 可以将递归转化为循环
> ### **2.2 实现** ###
请参见./btree-list/
## **3. 二叉搜索树** ##
> ### **3.1 思路** ###
1) 内部使用链表结构来存储二叉树
2) 每一个节点一个KEY值
3) 插入删除节点直接使用循环
4) 具体的请[参见](btree.md)
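A minimal sketch of the loop-based insert described in 3.1 (Python is used here purely for illustration; the repository's actual implementation lives in ./btree-search/):

```python
class Node:
    """A binary-search-tree node holding one integer key."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None


def bst_insert(root, key):
    """Insert `key` using a loop instead of recursion; return the root.

    Duplicate keys are ignored (a common convention for simple BSTs).
    """
    if root is None:
        return Node(key)
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None:
                cur.left = Node(key)
                return root
            cur = cur.left
        elif key > cur.key:
            if cur.right is None:
                cur.right = Node(key)
                return root
            cur = cur.right
        else:
            return root  # key already present: do nothing


def inorder(root):
    """Iterative in-order traversal; returns keys in sorted order."""
    out, stack, cur = [], [], root
    while stack or cur:
        while cur:
            stack.append(cur)
            cur = cur.left
        cur = stack.pop()
        out.append(cur.key)
        cur = cur.right
    return out
```

Inserting 5, 3, 8, 1, 4 and then traversing in order yields `[1, 3, 4, 5, 8]`, which is a quick sanity check for any BST implementation.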
> ### **3.2 Implementation** ###
See ./btree-search/
| 13.272727 | 25 | 0.472603 | yue_Hant | 0.243095 |
bb73310dc56b8ec62ddb74558f19636c99959b1d | 161 | md | Markdown | _conferences/devfest-incheon-2018.md | roishi2j2/conferences | 0378e0dcc0a526ac0364a776c889f29212e49ca3 | [
"CC0-1.0"
] | 1,472 | 2015-06-17T03:32:43.000Z | 2022-03-29T06:51:11.000Z | _conferences/devfest-incheon-2018.md | roishi2j2/conferences | 0378e0dcc0a526ac0364a776c889f29212e49ca3 | [
"CC0-1.0"
] | 172 | 2015-06-17T13:15:11.000Z | 2022-03-22T19:01:27.000Z | _conferences/devfest-incheon-2018.md | roishi2j2/conferences | 0378e0dcc0a526ac0364a776c889f29212e49ca3 | [
"CC0-1.0"
] | 355 | 2015-06-17T03:53:11.000Z | 2022-03-21T10:16:53.000Z | ---
name: "GDG DevFest"
website: https://www.meetup.com/GDG-Incheon/events/255500688
location: Incheon, Korea
date_start: 2018-11-24
date_end: 2018-11-24
---
| 17.888889 | 60 | 0.726708 | kor_Hang | 0.249979 |
bb739d93e9cb6a449c1213e22259511609bf0c6c | 6,464 | md | Markdown | docs/user-guide/base-configuration/repo-le-tf-infra-aws.md | Rodriguez-Matias/le-ref-architecture-doc | 34f114ae0bc386107ce9a236a7dcb435ba84b20e | [
"MIT"
] | null | null | null | docs/user-guide/base-configuration/repo-le-tf-infra-aws.md | Rodriguez-Matias/le-ref-architecture-doc | 34f114ae0bc386107ce9a236a7dcb435ba84b20e | [
"MIT"
] | null | null | null | docs/user-guide/base-configuration/repo-le-tf-infra-aws.md | Rodriguez-Matias/le-ref-architecture-doc | 34f114ae0bc386107ce9a236a7dcb435ba84b20e | [
"MIT"
] | null | null | null | # Files/Folders Organization
The following block provides a brief explanation of the chosen files/folders layout:
```
+ management/ (resources for the management account)
...
+ security/ (resources for the security + users account)
...
+ shared/ (resources for the shared account)
...
+ network/ (resources for the centralized network account)
...
+ apps-devstg/ (resources for apps dev & stg account)
...
+ apps-prd/ (resources for apps prod account)
...
```
Configuration files are organized by environment (e.g. dev, stg, prd) and by service type,
which we call **layers** (identities, organizations, storage, etc.), to keep changes made to them separate.
Within each of those folders you should find the Terraform files that are used to define all the
resources that belong to that account environment and specific layer.
{: style="width:650px"}
<figcaption style="font-size:15px">
<b>Figure:</b> AWS Organization multi-account architecture diagram.
(Source: Binbash Leverage,
"Leverage Reference Architecture components",
Binbash Leverage Doc, accessed August 4th 2021).
</figcaption>
Under every account folder you will see a service layer structure similar to the following:
```
...
├── apps-devstg
│ ├── backups\ --
│ ├── base-certificates
│ ├── base-identities
│ ├── base-network
│ ├── base-tf-backend
│ ├── cdn-s3-frontend
│ ├── config
│ ├── databases-aurora
│ ├── databases-mysql\ --
│ ├── databases-pgsql\ --
│ ├── ec2-fleet-ansible\ --
│ ├── k8s-eks
│ ├── k8s-eks-demoapps
│ ├── k8s-kind
│ ├── k8s-kops\ --
│ ├── notifications
│ ├── security-audit
│ ├── security-base
│ ├── security-certs
│ ├── security-compliance\ --
│ ├── security-firewall\ --
│ ├── security-keys
│ ├── security-keys-dr
│ ├── storage
│ └── tools-cloud-nuke
├── apps-prd
│ ├── backups\ --
│ ├── base-identities
│ ├── base-network
│ ├── base-tf-backend
│ ├── cdn-s3-frontend
│ ├── config
│ ├── ec2-fleet\ --
│ ├── k8s-eks
│ ├── notifications
│ ├── security-audit
│ ├── security-base
│ ├── security-certs
│ ├── security-compliance\ --
│ └── security-keys
├── build.env
├── build.py
├── config
│ ├── common.tfvars
├── network
│ ├── base-identities
│ ├── base-network
│ ├── base-tf-backend
│ ├── config
│ ├── network-firewall
│ ├── notifications
│ ├── security-audit
│ ├── security-base
│ └── security-keys
├── management
│ ├── backups
│ ├── base-identities
│ ├── base-tf-backend
│ ├── config
│ ├── cost-mgmt
│ ├── notifications
│ ├── organizations
│ ├── security-audit
│ ├── security-base
│ ├── security-compliance\ --
│ ├── security-keys
│ ├── security-monitoring
│ └── security-monitoring-dr\ --
├── security
│ ├── base-identities
│ ├── base-tf-backend
│ ├── config
│ ├── notifications
│ ├── security-audit
│ ├── security-base
│ ├── security-compliance\ --
│ ├── security-keys
│ ├── security-monitoring
│ └── security-monitoring-dr\ --
└── shared
├── backups
├── base-dns
├── base-identities
├── base-network
├── base-tf-backend
├── config
├── container-registry
├── ec2-fleet\ --
├── k8s-eks
├── k8s-eks-demoapps
├── k8s-eks-prd
├── notifications
├── security-audit
├── security-base
├── security-compliance\ --
├── security-keys
├── security-keys-dr
├── storage
├── tools-cloud-scheduler-stop-start
├── tools-eskibana
├── tools-github-selfhosted-runners
├── tools-jenkins\ --
├── tools-managedeskibana
├── tools-prometheus
├── tools-vault
├── tools-vpn-server
└── tools-webhooks\ --
```
**NOTE:** As a convention, folders with the `--` suffix indicate that the resources are not currently
created in AWS: they have either been destroyed or do not exist yet.
Such layer separation is meant to avoid situations in which a single folder contains a lot of resources.
That is important because, at some point, running `leverage terraform plan / apply` over such a folder starts
taking too long, and that becomes a problem.
This organization also provides a layout that is easier to navigate and discover.
You simply start with the accounts at the top level and then you get to explore the resource categories within
each account.
## Remote State
In the `base-tf-backend` folder you should find the definition of the infrastructure that needs to be deployed before
you can get to work with anything else.
**IMPORTANT:** THIS IS ONLY NEEDED IF THE BACKEND HAS NOT BEEN CREATED YET. IF THE BACKEND ALREADY EXISTS, JUST USE IT.
!!! info "Read More"
* [x] [Terraform - S3 & DynamoDB for Remote State Storage & Locking](../base-workflow/repo-le-tf-infra-aws-tf-state.md)
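For orientation, the S3/DynamoDB backend settings such layers keep in their `backend.tfvars` files typically look like the sketch below (bucket, key, table, and profile names here are hypothetical placeholders):

```hcl
bucket         = "myproject-terraform-state-storage"
key            = "apps-devstg/base-network/terraform.tfstate"
region         = "us-east-1"
profile        = "myproject-apps-devstg"
dynamodb_table = "myproject-terraform-state-lock"
encrypt        = true
```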
## Configuration
!!! tips "Config files can be found under each `config` folder"
- :file_folder: **Global config file**
[`/config/common.tfvars`](https://github.com/binbashar/le-tf-infra-aws/blob/master/config/common.config)
    contains global context TF variables that we inject into the TF commands used by all sub-directories, such as
    `make plan` or `make apply`, and which cannot be stored in `backend.config` due to TF limitations.
- :file_folder: **Account config files**
- [`backend.tfvars`](https://github.com/binbashar/le-tf-infra-aws/blob/master/shared/config/backend.config)
contains TF variables that are mainly used to configure TF backend but since
`profile` and `region` are defined there, we also use them to inject those values into other TF commands.
- [`account.tfvars`](https://github.com/binbashar/le-tf-infra-aws/blob/master/shared/config/account.config)
contains TF variables that are specific to an AWS account.
## AWS Profile
- File `backend.tfvars` will inject the profile name that TF will use to make changes on AWS.
- That profile usually relies on another profile to assume a role that grants access to the corresponding account.
- Please follow the guides below to correctly set up your AWS credentials:
- [user-guide/features/identities](../features/identities/identities.md)
- [user-guide/features/identities/credentials](../features/identities/credentials.md)
- Read the following AWS CLI documentation page to understand [how to set up a profile to assume
a role](https://docs.aws.amazon.com/cli/latest/userguide/cli-roles.html)
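For illustration only, a role-assuming profile pair of that kind in `~/.aws/config` generally looks like this (profile names, account ID, and role name are hypothetical):

```ini
[profile myproject-management]
region = us-east-1

[profile myproject-apps-devstg]
role_arn       = arn:aws:iam::123456789012:role/DevOps
source_profile = myproject-management
region         = us-east-1
```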
| 34.382979 | 123 | 0.663676 | eng_Latn | 0.963926 |
bb74d89fb64013b1d1fd646c52bca80fe251f656 | 180 | md | Markdown | src/entries/education/2012-a-levels.md | jelliott8020/personal_website | 672013035b68af39b22654b7146928f59ae8fdd5 | [
"MIT"
] | null | null | null | src/entries/education/2012-a-levels.md | jelliott8020/personal_website | 672013035b68af39b22654b7146928f59ae8fdd5 | [
"MIT"
] | null | null | null | src/entries/education/2012-a-levels.md | jelliott8020/personal_website | 672013035b68af39b22654b7146928f59ae8fdd5 | [
"MIT"
] | null | null | null | ---
title: Associate of Science
organization: College of the Albemarle
organizationUrl: https://www.albemarle.edu
location: Elizabeth City, NC
start: 2016-08-21
end: 2017-07-25
--- | 22.5 | 42 | 0.766667 | eng_Latn | 0.460593 |
bb74f127b547d8f7da19bb5c428387bfabd6cfc6 | 7,427 | markdown | Markdown | README.markdown | felixonmars/lens | 8a123e6c9e815f9d389ad64a5a68fe46647cec1d | [
"BSD-2-Clause"
] | 1 | 2020-01-26T02:35:24.000Z | 2020-01-26T02:35:24.000Z | README.markdown | fdelacruz-asapp/lens | e7aaf49c13cff2c47038ad2b7383a99dda2ab6cc | [
"BSD-2-Clause"
] | null | null | null | README.markdown | fdelacruz-asapp/lens | e7aaf49c13cff2c47038ad2b7383a99dda2ab6cc | [
"BSD-2-Clause"
] | null | null | null | Lens: Lenses, Folds, and Traversals
==================================
[](https://hackage.haskell.org/package/lens) [](http://travis-ci.org/ekmett/lens) [](http://packdeps.haskellers.com/reverse/lens)
This package provides families of [lenses](https://github.com/ekmett/lens/blob/master/src/Control/Lens/Type.hs), [isomorphisms](https://github.com/ekmett/lens/blob/master/src/Control/Lens/Iso.hs), [folds](https://github.com/ekmett/lens/blob/master/src/Control/Lens/Fold.hs), [traversals](https://github.com/ekmett/lens/blob/master/src/Control/Lens/Traversal.hs), [getters](https://github.com/ekmett/lens/blob/master/src/Control/Lens/Getter.hs) and [setters](https://github.com/ekmett/lens/blob/master/src/Control/Lens/Setter.hs).
If you are looking for where to get started, [a crash course video](http://youtu.be/cefnmjtAolY?hd=1) on how `lens` was constructed and how to use the basics is available on youtube. It is best watched in high definition to see the slides, but the [slides](http://comonad.com/haskell/Lenses-Folds-and-Traversals-NYC.pdf) are also available if you want to use them to follow along.
The [FAQ](https://github.com/ekmett/lens/wiki/FAQ), which provides links to a large number of different resources for learning about lenses and an overview of the [derivation](https://github.com/ekmett/lens/wiki/Derivation) of these types can be found on the [Lens Wiki](https://github.com/ekmett/lens/wiki) along with a brief [overview](https://github.com/ekmett/lens/wiki/Overview) and some [examples](https://github.com/ekmett/lens/wiki/Examples).
Documentation is available through [github](http://ekmett.github.com/lens/frames.html) (for HEAD) or [hackage](http://hackage.haskell.org/package/lens) for the current and preceding releases.
Field Guide
-----------
[](https://creately.com/diagram/h5nyo9ne1/QZ9UBOtw4AJWtmAKYK3wT8Mm1HM%3D)
Examples
--------
(See [`wiki/Examples`](https://github.com/ekmett/lens/wiki/Examples))
First, import `Control.Lens`.
```haskell
ghci> import Control.Lens
```
Now, you can read from lenses
```haskell
ghci> ("hello","world")^._2
"world"
```
and you can write to lenses.
```haskell
ghci> set _2 42 ("hello","world")
("hello",42)
```
Composing lenses for reading (or writing) goes in the order an imperative programmer would expect, and just uses `(.)` from the `Prelude`.
```haskell
ghci> ("hello",("world","!!!"))^._2._1
"world"
```
```haskell
ghci> set (_2._1) 42 ("hello",("world","!!!"))
("hello",(42,"!!!"))
```
You can make a `Getter` out of a pure function with `to`.
```haskell
ghci> "hello"^.to length
5
```
You can easily compose a `Getter` with a `Lens` just using `(.)`. No explicit coercion is necessary.
```haskell
ghci> ("hello",("world","!!!"))^._2._2.to length
3
```
As we saw above, you can write to lenses and these writes can change the type of the container. `(.~)` is an infix alias for `set`.
```haskell
ghci> _1 .~ "hello" $ ((),"world")
("hello","world")
```
Conversely, `view` can be used as a prefix alias for `(^.)`.
```haskell
ghci> view _2 (10,20)
20
```
There are a large number of other lens variants provided by the library, in particular a `Traversal` generalizes `traverse` from `Data.Traversable`.
We'll come back to those later, but continuing with just lenses:
You can let the library automatically derive lenses for fields of your data type
```haskell
data Foo a = Foo { _bar :: Int, _baz :: Int, _quux :: a }
makeLenses ''Foo
```
This will automatically generate the following lenses:
```haskell
bar, baz :: Lens' (Foo a) Int
quux :: Lens (Foo a) (Foo b) a b
```
A `Lens` takes 4 parameters because it can change the types of the whole when you change the type of the part.
Often you won't need this flexibility, a `Lens'` takes 2 parameters, and can be used directly as a `Lens`.
You can also write to setters that target multiple parts of a structure, or their composition with other
lenses or setters. The canonical example of a setter is 'mapped':
```haskell
mapped :: Functor f => Setter (f a) (f b) a b
```
`over` is then analogous to `fmap`, but parameterized on the Setter.
```haskell
ghci> fmap succ [1,2,3]
[2,3,4]
ghci> over mapped succ [1,2,3]
[2,3,4]
```
The benefit is that you can use any `Lens` as a `Setter`, and the composition of setters with other setters or lenses using `(.)` yields
a `Setter`.
```haskell
ghci> over (mapped._2) succ [(1,2),(3,4)]
[(1,3),(3,5)]
```
`(%~)` is an infix alias for 'over', and the precedence lets you avoid swimming in parentheses:
```haskell
ghci> _1.mapped._2.mapped %~ succ $ ([(42, "hello")],"world")
([(42, "ifmmp")],"world")
```
There are a number of combinators that resemble the `+=`, `*=`, etc. operators from C/C++ for working with the monad transformers.
There are `+~`, `*~`, etc. analogues to those combinators that work functionally, returning the modified version of the structure.
```haskell
ghci> both *~ 2 $ (1,2)
(2,4)
```
There are combinators for manipulating the current state in a state monad as well
```haskell
fresh :: MonadState Int m => m Int
fresh = id <+= 1
```
Anything you know how to do with a `Foldable` container, you can do with a `Fold`
```haskell
ghci> :m + Data.Char Data.Text.Lens
ghci> allOf (folded.text) isLower ["hello"^.packed, "goodbye"^.packed]
True
```
You can also use this for generic programming. Combinators are included that are based on Neil Mitchell's `uniplate`, but which
have been generalized to work on or as lenses, folds, and traversals.
```haskell
ghci> :m + Data.Data.Lens
ghci> anyOf biplate (=="world") ("hello",(),[(2::Int,"world")])
True
```
As alluded to above, anything you know how to do with a `Traversable` you can do with a `Traversal`.
```haskell
ghci> mapMOf (traverse._2) (\xs -> length xs <$ putStrLn xs) [(42,"hello"),(56,"world")]
"hello"
"world"
[(42,5),(56,5)]
```
Moreover, many of the lenses supplied are actually isomorphisms, that means you can use them directly as a lens or getter:
```haskell
ghci> let hello = "hello"^.packed
"hello"
ghci> :t hello
hello :: Text
```
but you can also flip them around and use them as a lens the other way with `from`!
```haskell
ghci> hello^.from packed.to length
5
```
You can automatically derive isomorphisms for your own newtypes with `makePrisms`. e.g.
```haskell
newtype Neither a b = Neither { _nor :: Either a b } deriving (Show)
makePrisms ''Neither
```
will automatically derive
```haskell
neither :: Iso (Neither a b) (Neither c d) (Either a b) (Either c d)
nor :: Iso (Either a b) (Either c d) (Neither a b) (Neither c d)
```
such that
```haskell
from neither = nor
from nor = neither
neither.nor = id
nor.neither = id
```
There is also a fully operational, but simple game of [Pong](https://github.com/ekmett/lens/blob/master/examples/Pong.hs) in the [examples/](https://github.com/ekmett/lens/blob/master/examples/) folder.
There are also a couple of hundred examples distributed throughout the haddock documentation.
Contact Information
-------------------
Contributions and bug reports are welcome!
Please feel free to contact me through github or on the #haskell IRC channel on irc.freenode.net.
-Edward Kmett
| 32.151515 | 529 | 0.704861 | eng_Latn | 0.966801 |
bb7543b6d19aa6e31edf961b771abb1bc2a63764 | 218 | md | Markdown | _watches/M20200621_233013_TLP_4.md | Meteoros-Floripa/meteoros.floripa.br | 7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad | [
"MIT"
] | 5 | 2020-05-19T17:04:49.000Z | 2021-03-30T03:09:14.000Z | _watches/M20200621_233013_TLP_4.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | null | null | null | _watches/M20200621_233013_TLP_4.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | 2 | 2020-05-19T17:06:27.000Z | 2020-09-04T00:00:43.000Z | ---
layout: watch
title: TLP4 - 21/06/2020 - M20200621_233013_TLP_4T.jpg
date: 2020-06-21 23:30:13
permalink: /2020/06/21/watch/M20200621_233013_TLP_4
capture: TLP4/2020/202006/20200621/M20200621_233013_TLP_4T.jpg
---
| 27.25 | 62 | 0.784404 | fra_Latn | 0.067717 |
bb77b9e5e7b542f843b7589a87ab6d234f64c2c8 | 990 | md | Markdown | README.md | guglielmof/recsys_spt2018 | 24bbb09b3c9fe1cb96fd309a14725eac03e1bdc6 | [
"Apache-2.0"
] | null | null | null | README.md | guglielmof/recsys_spt2018 | 24bbb09b3c9fe1cb96fd309a14725eac03e1bdc6 | [
"Apache-2.0"
] | null | null | null | README.md | guglielmof/recsys_spt2018 | 24bbb09b3c9fe1cb96fd309a14725eac03e1bdc6 | [
"Apache-2.0"
] | 1 | 2021-05-22T11:45:50.000Z | 2021-05-22T11:45:50.000Z | # recsys_spt2018
Environment setting
* Put the data in mdp/data inside the main directory<br/>
* Put the challenge set in the main directory<br/>
How to build recommendations:
1) run build_structures.py
2) run full_kernel_songs.py
3) run create_Ku.py with parameters 1 and 0.5
4) run create_pl2title.py
5) run words_similarity_builder.py
6) run calc_P.py
7) run titles_similarity_0.py
8) run user_based_MSD_1.py
9) run item_based_MSD_5_10_25.py with parameters 0.7 0.4 5
10) run item_based_MSD_5_10_25.py with parameters 0.7 0.4 10
11) run item_based_MSD_5_10_25.py with parameters 0.7 0.4 25
12) run selected_KOMD_100.py with parameters 100 50000
13) run merge_csv.sh
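The sequential run can be captured in a small driver script (a sketch only: the script names and arguments come from the list above, the choice of `python`/`bash` launchers is an assumption, and it must be run from the main directory):

```python
import subprocess

# Steps 1-13 from the list above, in order; each entry is an argv vector.
PIPELINE = [
    ["python", "build_structures.py"],
    ["python", "full_kernel_songs.py"],
    ["python", "create_Ku.py", "1", "0.5"],
    ["python", "create_pl2title.py"],
    ["python", "words_similarity_builder.py"],
    ["python", "calc_P.py"],
    ["python", "titles_similarity_0.py"],
    ["python", "user_based_MSD_1.py"],
    ["python", "item_based_MSD_5_10_25.py", "0.7", "0.4", "5"],
    ["python", "item_based_MSD_5_10_25.py", "0.7", "0.4", "10"],
    ["python", "item_based_MSD_5_10_25.py", "0.7", "0.4", "25"],
    ["python", "selected_KOMD_100.py", "100", "50000"],
    ["bash", "merge_csv.sh"],
]


def run_pipeline(dry_run=False):
    """Execute every step in order, aborting on the first failure.

    With dry_run=True, return the commands as strings instead of running them.
    """
    if dry_run:
        return [" ".join(cmd) for cmd in PIPELINE]
    for cmd in PIPELINE:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return None
```

Calling `run_pipeline(dry_run=True)` is a cheap way to check the order and arguments before committing roughly 20 hours to a full run.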
Estimated memory: 80Gb<br/>
Estimated time: 20 hours<br/>
Estimated disk space: 150Gb<br/>
Most of the steps above can be parallelized as shown in the following figure. The estimated time assumes a fully sequential execution.

| 31.935484 | 146 | 0.794949 | eng_Latn | 0.954947 |
bb78ea4b7858ed07bbcccc4741b53ff1eb86427c | 454 | md | Markdown | src/python/api/daily/count.md | twilightDD/dev | 4cc269c10437b656c20cad6cd91b51ae2f586442 | [
"MIT"
] | 6 | 2020-10-29T08:17:40.000Z | 2021-06-23T12:59:34.000Z | src/python/api/daily/count.md | twilightDD/dev | 4cc269c10437b656c20cad6cd91b51ae2f586442 | [
"MIT"
] | 7 | 2021-02-24T09:17:54.000Z | 2022-01-25T13:57:51.000Z | src/python/api/daily/count.md | twilightDD/dev | 4cc269c10437b656c20cad6cd91b51ae2f586442 | [
"MIT"
] | 8 | 2020-10-30T13:20:38.000Z | 2022-01-25T13:41:14.000Z | ---
title: meteostat.Daily.count | API | Python Library
---
# meteostat.Daily.count
The `count()` method returns the total number of records in the query result as an integer.
## Parameters
This method does not take any parameters.
## Returns
Integer
## Example
Get the total number of records ever recorded at Atlanta International Airport.
```python{4}
from meteostat import Daily
data = Daily('72219')
count = data.count()
print(count)
```
| 15.655172 | 91 | 0.729075 | eng_Latn | 0.963919 |
bb79546b007f96db45589f36707792f2f1f03832 | 5,624 | md | Markdown | CONTRIBUTING.md | 0x00evil/CocoaPods | 4c2395b00330319cb755848fb57da664447669a7 | [
"MIT"
] | 1 | 2021-04-19T14:26:51.000Z | 2021-04-19T14:26:51.000Z | CONTRIBUTING.md | 0x00evil/CocoaPods | 4c2395b00330319cb755848fb57da664447669a7 | [
"MIT"
] | null | null | null | CONTRIBUTING.md | 0x00evil/CocoaPods | 4c2395b00330319cb755848fb57da664447669a7 | [
"MIT"
] | null | null | null | # Do’s and Don’ts
* **Search for solutions in the [troubleshooting guide](http://guides.cocoapods.org/using/troubleshooting.html).** Or any of the many other [guides](http://guides.cocoapods.org).
* **Search tickets before you file a new one.** Add to tickets if you have new information about the issue.
* **Only file tickets about the CocoaPods tool itself.** This includes [CocoaPods](https://github.com/CocoaPods/CocoaPods/issues),
[CocoaPods/Core](https://github.com/CocoaPods/Core/issues), and [Xcodeproj](https://github.com/CocoaPods/Xcodeproj/issues).
If your question is regarding a library (to be) distributed through CocoaPods, refer to the [spec repo](https://github.com/CocoaPods/Specs).
If your question is “How do I […]”, then please ask on [StackOverflow](http://stackoverflow.com/search?q=CocoaPods) or our [mailing-list](http://groups.google.com/group/cocoapods).
* **Keep tickets short but sweet.** Make sure you include all the context needed to solve the issue. Don't overdo it. Great tickets allow us to focus on solving problems instead of discussing them.
* **Take care of your ticket.** When you spend time to report a ticket with care we'll enjoy fixing it for you.
* **Use [GitHub-flavored Markdown](https://help.github.com/articles/markdown-basics/).** Especially put code blocks and console outputs in backticks (```` ``` ````). That increases the readability. Bonus points for applying the appropriate syntax highlighting. (Tip: Podfiles are written in a Ruby DSL.)
* **Do not litter.** Don’t add +1’s _unless_ specifically asked for and don’t discuss offtopic issues.
* **Spell the name of the project correctly.** It's CocoaPods. In upper camel case.
## Bug Reports
In short, since you are most likely a developer, provide a ticket that you _yourself_ would _like_ to receive.
We are **not** here to support your individual projects. We depend on _you_ (the community)
to contribute to making the tool better for everyone. So debug and reduce your own issues
before creating a ticket and let us know of all the things you tried and their outcome.
This applies doubly if you cannot share a reproduction with us because of internal company
policies.
First check if you are using the latest CocoaPods version before filing a ticket.
You can install the latest version with `$ [sudo] gem install cocoapods`.
Then check if the same problem applies with the same Podfile but in a **completely new**
and empty application Xcode project, thus excluding whether or not there is an issue with
conflicting settings.
Please include steps to reproduce and _all_ other relevant information, including the
version of CocoaPods and any template printed by the tool.
If questions in the error template are left unanswered, the issue will be closed
as a bad bug report.
If there is a regression in the projects generated by CocoaPods, please include
the output (redacted if needed) of one of the following commands:
```
$ xcodeproj target-diff
$ xcodeproj project-diff
```
If you are familiar with Ruby, making a pull request with a failing test case
can speed up the resolution of the bug. If the issue is more complex you can
add an [integration test](https://github.com/CocoaPods/cocoapods-integration-specs/)
which doesn't require any ruby knowledge.
## Feature Requests
Please try to be precise about the proposed outcome of the feature and how it
would relate to existing features.
From the [CocoaPods blog](http://blog.cocoapods.org/CocoaPods-0.28/):
> Fighting feature creep in CocoaPods is not easy. We hear about a lot of great ideas and many of them don't make the cut as they would not be useful for at least 80% of our users.
Should you require a feature that isn't suited for mainstream users, consider suggesting a [CocoaPods plugin](http://blog.cocoapods.org/CocoaPods-0.28/) instead.
## Pull Requests
We **love** pull requests and if a contribution is significant we tend to offer
push access. We suggest you take a look at our [Contributing guide](http://guides.cocoapods.org/contributing/contribute-to-cocoapods.html) for info on development setup, and some of our best practices.
All contributions _will_ be licensed under the MIT license.
Code/comments should adhere to the following rules:
* Names should be descriptive and concise.
* Use two spaces and no tabs.
* All changes require test coverage to ensure it does not break during refactor
work. (There are a few exceptions to this, which can be recognised by there
not being any coverage for similar code.)
* All enhancements and bug fixes need to be documented in the CHANGELOG.
* When writing comments, use properly constructed sentences, including
punctuation.
* When documenting APIs and/or source code, don't make assumptions or make
implications about race, gender, religion, political orientation or anything
else that isn't relevant to the project.
* Remember that source code usually gets written once and read often: ensure
the reader doesn't have to make guesses. Make sure that the purpose and inner
logic are either obvious to a reasonably skilled professional, or add a
comment that explains it.
* The message of the commit should be prefixed by the name of the file which is
the main focus of the patch enclosed by square brackets (e.g. `[Installer]
install pods`).
## [No Brown M&M’s](http://en.wikipedia.org/wiki/Van_Halen#Contract_riders)
If you made it all the way to the end, bravo dear user, we love you. You can include
this emoji in the top of your ticket to signal to us that you did in fact read this
file and are trying to conform to it as best as possible: 🌈
| 57.387755 | 303 | 0.77276 | eng_Latn | 0.998316 |
bb79c34b258496b3eeae14acf47b208f13cb1e6c | 5,920 | md | Markdown | en/organization/api-ref/Federation/list.md | ivanfomkin/docs | 20e9e18d7cae5ee608f59f90c4aa0e4192073ad9 | [
"CC-BY-4.0"
] | null | null | null | en/organization/api-ref/Federation/list.md | ivanfomkin/docs | 20e9e18d7cae5ee608f59f90c4aa0e4192073ad9 | [
"CC-BY-4.0"
] | null | null | null | en/organization/api-ref/Federation/list.md | ivanfomkin/docs | 20e9e18d7cae5ee608f59f90c4aa0e4192073ad9 | [
"CC-BY-4.0"
] | null | null | null | ---
editable: false
__system: {"dislikeVariants":["No answer to my question","Recomendations didn't help","The content doesn't match title","Other"]}
---
# Method list
Retrieves the list of federations in the specified organization.
## HTTP request {#https-request}
```
GET https://organization-manager.api.cloud.yandex.net/organization-manager/v1/saml/federations
```
## Query parameters {#query_params}
Parameter | Description
--- | ---
organizationId | Required. ID of the organization to list federations in. To get the organization ID, make a [list](/docs/organization-manager/api-ref/Organization/list) request. The maximum string length in characters is 50.
pageSize | The maximum number of results per page to return. If the number of available results is larger than [pageSize](/docs/organization-manager/api-ref/Federation/list#query_params), the service returns a [nextPageToken](/docs/organization-manager/api-ref/Federation/list#responses) that can be used to get the next page of results in subsequent list requests. Default value: 100. Acceptable values are 0 to 1000, inclusive.
pageToken | Page token. To get the next page of results, set [pageToken](/docs/organization-manager/api-ref/Federation/list#query_params) to the [nextPageToken](/docs/organization-manager/api-ref/Federation/list#responses) returned by a previous list request. The maximum string length in characters is 50.
filter | A filter expression that filters resources listed in the response. The expression must specify: 1. The field name. Currently you can use filtering only on the [Federation.name](/docs/organization-manager/api-ref/Federation#representation) field. 2. An operator. Can be either `=` or `!=` for single values, `IN` or `NOT IN` for lists of values. 3. The value. Must be 3-63 characters long and match the regular expression `^[a-z][-a-z0-9]{1,61}[a-z0-9]$`. The maximum string length in characters is 1000.
## Response {#responses}
**HTTP Code: 200 - OK**
```json
{
"federations": [
{
"id": "string",
"organizationId": "string",
"name": "string",
"description": "string",
"createdAt": "string",
"cookieMaxAge": "string",
"autoCreateAccountOnLogin": true,
"issuer": "string",
"ssoBinding": "string",
"ssoUrl": "string",
"securitySettings": {
"encryptedAssertions": true
},
"caseInsensitiveNameIds": true
}
],
"nextPageToken": "string"
}
```
Field | Description
--- | ---
federations[] | **object**<br><p>A federation. For more information, see <a href="/docs/iam/concepts/users/identity-federations">SAML-compatible identity federations</a>.</p>
federations[].<br>id | **string**<br><p>Required. ID of the federation.</p> <p>The maximum string length in characters is 50.</p>
federations[].<br>organizationId | **string**<br><p>ID of the organization that the federation belongs to.</p>
federations[].<br>name | **string**<br><p>Required. Name of the federation.</p> <p>Value must match the regular expression ``\|[a-z][-a-z0-9]{1,61}[a-z0-9]``.</p>
federations[].<br>description | **string**<br><p>Description of the federation.</p> <p>The maximum string length in characters is 256.</p>
federations[].<br>createdAt | **string** (date-time)<br><p>Creation timestamp.</p> <p>String in <a href="https://www.ietf.org/rfc/rfc3339.txt">RFC3339</a> text format.</p>
federations[].<br>cookieMaxAge | **string**<br><p>Browser cookie lifetime in seconds. If the cookie is still valid, the management console authenticates the user immediately and redirects them to the home page.</p> <p>Acceptable values are 600 seconds to 43200 seconds, inclusive.</p>
federations[].<br>autoCreateAccountOnLogin | **boolean** (boolean)<br><p>Add new users automatically on successful authentication. The user becomes member of the organization automatically, but you need to grant other roles to them.</p> <p>If the value is ``false``, users who aren't added to the organization can't log in, even if they have authenticated on your server.</p>
federations[].<br>issuer | **string**<br><p>Required. ID of the IdP server to be used for authentication. The IdP server also responds to IAM with this ID after the user authenticates.</p> <p>The maximum string length in characters is 8000.</p>
federations[].<br>ssoBinding | **string**<br><p>Single sign-on endpoint binding type. Most Identity Providers support the ``POST`` binding type.</p> <p>SAML Binding is a mapping of a SAML protocol message onto standard messaging formats and/or communications protocols.</p> <ul> <li>POST: HTTP POST binding.</li> <li>REDIRECT: HTTP redirect binding.</li> <li>ARTIFACT: HTTP artifact binding.</li> </ul>
federations[].<br>ssoUrl | **string**<br><p>Required. Single sign-on endpoint URL. Specify the link to the IdP login page here.</p> <p>The maximum string length in characters is 8000.</p>
federations[].<br>securitySettings | **object**<br><p>Federation security settings.</p> <p>Federation security settings.</p>
federations[].<br>securitySettings.<br>encryptedAssertions | **boolean** (boolean)<br><p>Enable encrypted assertions.</p>
federations[].<br>caseInsensitiveNameIds | **boolean** (boolean)<br><p>Use case insensitive Name IDs.</p>
nextPageToken | **string**<br><p>This token allows you to get the next page of results for list requests. If the number of results is larger than <a href="/docs/organization-manager/api-ref/Federation/list#query_params">pageSize</a>, use the <a href="/docs/organization-manager/api-ref/Federation/list#responses">nextPageToken</a> as the value for the <a href="/docs/organization-manager/api-ref/Federation/list#query_params">pageToken</a> query parameter in the next list request. Each subsequent list request will have its own <a href="/docs/organization-manager/api-ref/Federation/list#responses">nextPageToken</a> to continue paging through the results.</p> | 83.380282 | 662 | 0.731419 | eng_Latn | 0.873711 |
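The pagination contract described above can be exercised with a small client-side loop. The JavaScript sketch below follows `nextPageToken` until it is absent; `fetchPage` is an injected placeholder for whatever HTTP helper you use (the function name and injection style are illustrative, not part of this API):

```javascript
// Collects every federation by following nextPageToken until it is absent.
// `fetchPage` is an injected async function: (queryParams) => response body.
async function listAllFederations(fetchPage, organizationId, pageSize = 100) {
  const federations = [];
  let pageToken;
  do {
    const body = await fetchPage({ organizationId, pageSize, pageToken });
    federations.push(...(body.federations || []));
    pageToken = body.nextPageToken;
  } while (pageToken);
  return federations;
}
```

With a real helper, `fetchPage` would issue the `GET .../saml/federations` request shown above with these values as query parameters, plus an authorization header.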
bb7a41fda0c380f53c36ceeaa8944250d0074b0f | 485 | md | Markdown | CHANGELOG.md | savokiss/vue-raven | 497a9c17150dbb9819602ff3a946c64a1ebb2f85 | [
"MIT"
] | 3 | 2018-07-14T09:38:09.000Z | 2018-11-14T10:52:07.000Z | CHANGELOG.md | savokiss/vue-raven | 497a9c17150dbb9819602ff3a946c64a1ebb2f85 | [
"MIT"
] | null | null | null | CHANGELOG.md | savokiss/vue-raven | 497a9c17150dbb9819602ff3a946c64a1ebb2f85 | [
"MIT"
] | null | null | null | # v0.1.1
- Basic usage
# v0.2.1
## Features
- Add option `disableReport` for development use
# v1.0.0
## Breaking Changes
- Change option `disableAutoReport` to `disableVueReport`
## Bug Fixes
- Change option `version`'s default value `'not provided'` to `''`
# v1.0.2
## Bug Fixes
- Polyfill `Object.assign` for IE
# v2.0.2
## Improvements
- Use Bili instead of Rollup
# v2.1.0
## Features
- Add option `env` for recording the environment
# v2.2.0
## Features
- Support extra config | 16.724138 | 66 | 0.690722 | eng_Latn | 0.661784 |
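Taken together, the entries above describe an options surface that the plugin merges with defaults (the v1.0.2 entry about polyfilling `Object.assign` for IE suggests a merge along these lines). A hypothetical sketch — only the option names and the `version: ''` default come from this changelog; the other default values and the helper itself are assumptions:

```javascript
// Illustrative defaults for the options named in this changelog.
// Only version: '' is documented above; the other values are assumptions.
const defaults = {
  disableVueReport: false, // renamed from disableAutoReport in v1.0.0
  disableReport: false,    // added in v0.2.1 for development use
  version: '',             // default changed from 'not provided' in v1.0.0
  env: undefined,          // added in v2.1.0 for recording the environment
};

function resolveOptions(userOptions) {
  // Object.assign is the call that v1.0.2 polyfills for IE.
  return Object.assign({}, defaults, userOptions);
}
```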
bb7b72f305a791fc8fc4cd721ffbd3d8d25710ce | 1,165 | md | Markdown | javascript/linked-list/linked-list.md | awwadsaeed/data-structures-and-algorithms | a6afa51d757571462beec5e988f9ae6fe205d56a | [
"MIT"
] | null | null | null | javascript/linked-list/linked-list.md | awwadsaeed/data-structures-and-algorithms | a6afa51d757571462beec5e988f9ae6fe205d56a | [
"MIT"
] | 2 | 2021-07-01T14:50:19.000Z | 2021-07-08T12:06:24.000Z | javascript/linked-list/linked-list.md | awwadsaeed/data-structures-and-algorithms | a6afa51d757571462beec5e988f9ae6fe205d56a | [
"MIT"
] | null | null | null | # Singly Linked List
<!-- Short summary or background information -->
Linked lists are a data structure that can be useful for adding or deleting elements, though they have some downsides as well.
Big O ranges between O(1) and O(n) depending on the operation.
## Challenge
<!-- Description of the challenge -->
Create a method that zips two linked lists together and returns the values as a string.
## Approach & Efficiency
<!-- What approach did you take? Why? What is the Big O space/time for this approach? -->
A while loop is used to traverse the linked list, because we don't know the length of the list in advance.
Big O of time is O(n);
Big O of space is O(n).
## API
<!-- Description of each method publicly available to your Linked List -->
insert(): inserts a node at the start of the linked list.
includes(): checks the linked list's nodes for a given value.
toString(): lists out all the node values as strings.
insertBefore(): takes a new value and a value to insert before.
insertAfter(): takes a new value and a value to insert after.
append(): takes a value to insert at the end of the linked list.
kthFromEnd(): finds the kth element from the end.
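The zip challenge above can be sketched in a few lines of JavaScript. The `Node` class and function name below are illustrative stand-ins, not the repository's actual implementation:

```javascript
class Node {
  constructor(value, next = null) {
    this.value = value;
    this.next = next;
  }
}

// Alternates values from two lists; when one list runs out,
// the remaining values of the other are appended.
function zipToString(headA, headB) {
  const values = [];
  let a = headA;
  let b = headB;
  while (a || b) {
    if (a) { values.push(a.value); a = a.next; }
    if (b) { values.push(b.value); b = b.next; }
  }
  return values.join(' -> ');
}
```

Zipping `1 -> 2 -> 3` with `a -> b` this way yields `'1 -> a -> 2 -> b -> 3'`; time stays O(n), matching the efficiency notes above.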
## Whiteboard Process
| 40.172414 | 117 | 0.732189 | eng_Latn | 0.998944 |
bb7bb163783ccba3cae5a2b1dc247ae6deb62e79 | 1,205 | md | Markdown | vm-CentOS7/README.md | 7900ms/000nottheater_deserted_systemsoftware | eb5616b476ef6575666abd1672667233c6e9df0a | [
"MIT"
] | null | null | null | vm-CentOS7/README.md | 7900ms/000nottheater_deserted_systemsoftware | eb5616b476ef6575666abd1672667233c6e9df0a | [
"MIT"
] | null | null | null | vm-CentOS7/README.md | 7900ms/000nottheater_deserted_systemsoftware | eb5616b476ef6575666abd1672667233c6e9df0a | [
"MIT"
] | null | null | null |
#### 下载版本
CentOS-7 (1611),December 21, 2016 `gnome3`
[大小](https://www.centos.org/download/):Everythin > DVD ISO > Minimal ISO
我要的 ~~[LiveGNOME](https://wiki.centos.org/Download)~~ Minimal
#### 桌面环境 DE
~~[LXDE](https://wiki.lxde.org/en/Installation)~~ [Xfce](https://github.com/7900ms/0nottheater_deserted_/tree/master/Usage_Manual/xfce)
#### 如果选择 minimal
安装centos 6.8 的 minimal.iso 后,没有网络配置
http://chhquan.blog.51cto.com/1346841/1790748
安装完CentOS 7 Minimal之后,从头打造桌面工作环境
http://www.cnblogs.com/oysmlsy/p/4567903.html
```
sudo yum install epel-release
sudo yum groupinstall "X Window system" # (Minimal不自带这个,需要安装)
```
http://www.centoscn.com/m/view.php?aid=623
http://jensd.be/125/linux/rhel/install-mate-or-xfce-on-centos-7
#### 如果选择 DVD
参考步骤
http://www.cnblogs.com/smyhvae/p/3917532.html
#### 参考
是安装的 DVD
http://blog.itist.tw/2014/07/centos7-prepare.html
是的选择的是 gnome3
http://tieba.baidu.com/p/3152957061
http://seisman.info/linux-environment-for-seismology-research.html
http://seisman.info/simple-guide-to-seismology.html
http://seisman.info/tags/CentOS/
https://github.com/yan9yu/cde
http://mirrors.163.com/.help/centos.html
https://www.tecmint.com/centos-7-installation/
-
| 19.754098 | 135 | 0.7361 | yue_Hant | 0.845237 |
bb7c0911f9c071384b6a9dd2dab95b2f056430f3 | 1,972 | md | Markdown | src/pages/recipes/dales-chicken.md | trevorgk/gatsby-kilvo-org-netlify | e11a64f42e459e8cb05b0ece35066a9f09c05c55 | [
"MIT"
] | null | null | null | src/pages/recipes/dales-chicken.md | trevorgk/gatsby-kilvo-org-netlify | e11a64f42e459e8cb05b0ece35066a9f09c05c55 | [
"MIT"
] | 3 | 2021-03-10T21:27:22.000Z | 2022-02-27T06:11:08.000Z | src/pages/recipes/dales-chicken.md | trevorgk/gatsby-kilvo-org-netlify | e11a64f42e459e8cb05b0ece35066a9f09c05c55 | [
"MIT"
] | null | null | null | ---
templateKey: recipe
title: Dale's Chicken
blurb: This recipe was sent to me by Jan van Riel, who apparently has the usual
problem of an inadequate filing system for recipes. She says that putting it
here will enable her to find it when she wants it, and also enable a better
way to pass it on.
legacySlug: dales_chicken.html
category: Main Courses
recipes:
- recipeTitle: ''
ingredients:
- '#20 chicken, flat/butterflied, de-boned, with wing tips removed and
drumsticks boned out if possible. If you want to stretch it further,
also get 1-2 chicken breasts.'
- 4 thick slices of ham
- 5 eggs
- Spinach
- Swiss style cheese (optional)
method: >-
Preheat fan-forced oven to 200 degrees centigrade.
      Lay chicken out flat with skin down. If you have extra chicken breast(s), lay it on top of an un-meaty part of the chicken. You can even stuff the cavities of the drumsticks with extra chicken if you like.
Place the ham slices on top of the open chicken.
Make an omelette (under cook rather than overcook it) and place that on top of the ham.
Cook some chopped spinach in the microwave, squeeze out the water and sprinkle it over the omelette. You can place some thinly sliced cheese on top of the spinach. Season it with salt and pepper.
      Roll the chicken so that the edges overlap. If you have stuffed it very full (with extra breasts or an omelette made with extra eggs), use bamboo skewers and string to hold the chicken together. Don’t worry about making it pretty – it’s a rough affair.
Par-cook the chicken in a covered dish in the microwave for about 10 minutes. Then season the outside of the chicken with salt and pepper and pop it in the oven for about 30 minutes, more if you’ve put extra filling in. It may take longer so test it with a skewer where the drumstick joins the body – it stays very moist. Let it sit a while before you serve it.
---
| 48.097561 | 367 | 0.729716 | eng_Latn | 0.999486 |
bb7c47bef29d971a245c56f79823819aa5cd3838 | 1,440 | md | Markdown | 2021/01/03/2021-01-03 10:05.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2021/01/03/2021-01-03 10:05.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2021/01/03/2021-01-03 10:05.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2021年01月03日10时数据
Status: 200
1.于正
微博热度:4291179
2.河北新增1例本土确诊
微博热度:2864195
3.李梦发文
微博热度:2182219
4.夏东海的儿子都喜欢戴头套
微博热度:2081461
5.河北疫情
微博热度:1931850
6.经纪人否认昆凌怀第三胎
微博热度:1838124
7.爱尔眼科
微博热度:1308539
8.韩国4G网速变慢引发不满
微博热度:1213510
9.数百人不戴口罩参加特朗普跨年派对
微博热度:1125693
10.2021有个励志的数字21
微博热度:1122075
11.易烊千玺两部电影里都剃了光头
微博热度:1120158
12.巴啦啦小魔仙演员悼念孙侨潞
微博热度:791382
13.郑爽爬墙的速度
微博热度:651332
14.元旦假期最后一天
微博热度:650820
15.流金岁月
微博热度:650366
16.王彦霖李一桐亲了个寂寞
微博热度:649123
17.奇葩说一派胡言环节
微博热度:648561
18.任正非要求收缩华为企业业务
微博热度:648204
19.北京奥运举重冠军回应兴奋剂事件
微博热度:647046
20.电影票房
微博热度:646716
21.章子怡问为什么都要当演员
微博热度:597284
22.奚梦瑶追星成功
微博热度:550991
23.刘维舞台
微博热度:526307
24.没那么爱ta婚礼当天要逃婚吗
微博热度:448285
25.章子怡给李晟满星卡
微博热度:437837
26.原来这就是选择性近视
微博热度:419818
27.林郑月娥谈到香港国安法满脸欣慰
微博热度:408656
28.哈登伤停缺席
微博热度:406541
29.胡明轩违体犯规
微博热度:405832
30.李诚儒问李梦为什么被白鹿原换掉
微博热度:405767
31.野人竟是我自己
微博热度:402322
32.五花八门的骨折经历
微博热度:371706
33.李宇春舞台上假发突然被薅走
微博热度:344000
34.爱立信称继续禁华为将离开瑞典
微博热度:310752
35.周奇胖了
微博热度:307739
36.广东从英国输入病例发现B.1.1.7突变株
微博热度:307296
37.福建漳州市长泰县3.0级地震
微博热度:283542
38.张颂文演的试镜失败原型是包贝尔
微博热度:267442
39.黑龙江黑河新增4例本土确诊
微博热度:252747
40.刘德华说我们没有磕到
微博热度:246410
41.雪地里面吃火锅
微博热度:242086
42.刘以豪喊了秦岚四次姐
微博热度:238647
43.31省区市新增24例确诊
微博热度:221398
44.见到真人手办了
微博热度:219375
45.辽宁新增2例本土确诊
微博热度:209468
46.cp27横幅
微博热度:203982
47.热爱的工作令我秃头要不要辞职
微博热度:203664
48.哈尔滨漫展主办方回应不雅拍照
微博热度:201511
49.孙兴慜为热刺攻入第100粒进球
微博热度:200658
50.柯滢又被薅头发了
微博热度:200449
| 7.058824 | 24 | 0.785417 | yue_Hant | 0.298939 |
bb7c7ae151860cade11b80498ee9ec1c34c9df05 | 29 | md | Markdown | README.md | Zowpy/CommandAPI | 835cd7c1dda457b6e98d992d790c27a5810ef853 | [
"Apache-2.0"
] | 2 | 2021-08-08T19:06:54.000Z | 2021-08-08T19:38:34.000Z | README.md | Zowpy/CommandAPI | 835cd7c1dda457b6e98d992d790c27a5810ef853 | [
"Apache-2.0"
] | null | null | null | README.md | Zowpy/CommandAPI | 835cd7c1dda457b6e98d992d790c27a5810ef853 | [
"Apache-2.0"
] | null | null | null | # CommandAPI
JDA Command API
| 9.666667 | 15 | 0.793103 | kor_Hang | 0.938906 |
bb7cab05d69bf6a00b1b9d570b10cf59bab1b573 | 2,298 | md | Markdown | docs/standard/parallel-programming/how-to-combine-parallel-and-sequential-linq-queries.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/parallel-programming/how-to-combine-parallel-and-sequential-linq-queries.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/parallel-programming/how-to-combine-parallel-and-sequential-linq-queries.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'How to: Combine Parallel and Sequential LINQ Queries'
ms.date: 03/30/2017
ms.technology: dotnet-standard
dev_langs:
- csharp
- vb
helpviewer_keywords:
- parallel queries, combine parallel and sequential
ms.assetid: 1167cfe6-c8aa-4096-94ba-c66c3a4edf4c
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 9fd67d5f0cb5af33dc2b79f86148557a0dca6ec4
ms.sourcegitcommit: 5bbfe34a9a14e4ccb22367e57b57585c208cf757
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/18/2018
ms.locfileid: "45998928"
---
# <a name="how-to-combine-parallel-and-sequential-linq-queries"></a>How to: Combine Parallel and Sequential LINQ Queries
This example shows how to use the <xref:System.Linq.ParallelEnumerable.AsSequential%2A> method to instruct PLINQ to process all subsequent operators in the query sequentially. While sequential processing is typically slower than parallel processing, it is sometimes necessary to produce correct results.
> [!WARNING]
> This example is intended to demonstrate usage, and might not run faster than the equivalent sequential LINQ to Objects query. For more information, see [Understanding Speedup in PLINQ](../../../docs/standard/parallel-programming/understanding-speedup-in-plinq.md).
## <a name="example"></a>Example
The following example shows a scenario in which <xref:System.Linq.ParallelEnumerable.AsSequential%2A> is required to preserve the ordering established in a previous clause of the query.
[!code-csharp[PLINQ#24](../../../samples/snippets/csharp/VS_Snippets_Misc/plinq/cs/plinqsamples.cs#24)]
[!code-vb[PLINQ#24](../../../samples/snippets/visualbasic/VS_Snippets_Misc/plinq/vb/plinqsnippets1.vb#24)]
## <a name="compiling-the-code"></a>Compiling the Code
To compile and run this code, paste it into the [PLINQ Data Sample](../../../docs/standard/parallel-programming/plinq-data-sample.md) project, add a line to call the method from `Main`, and press F5.
## <a name="see-also"></a>See also
- [Parallel LINQ (PLINQ)](../../../docs/standard/parallel-programming/parallel-linq-plinq.md)
| 60.473684 | 344 | 0.785466 | deu_Latn | 0.916508 |
bb7ce4e51e2c48a562b5a0154c7fee1a25543032 | 1,082 | md | Markdown | README.md | manikajan123/Social_Networking_Academia | ef090cb1a0fbf3b30831f0d570073793d7c89a63 | [
"Apache-2.0"
] | null | null | null | README.md | manikajan123/Social_Networking_Academia | ef090cb1a0fbf3b30831f0d570073793d7c89a63 | [
"Apache-2.0"
] | null | null | null | README.md | manikajan123/Social_Networking_Academia | ef090cb1a0fbf3b30831f0d570073793d7c89a63 | [
"Apache-2.0"
] | null | null | null | # Networking Platform for Academia
## Purpose
1) Progress of web technologies has enabled us to offer expertise in different domains.
2) Crowdsourcing is an effective model for companies to solve problems.
3) The capabilities of servers (i.e., humans) to offer human services are increasing with time.
4) The consumers who use human services are improving their abilities for service orchestration, so that they may use services in a more complex and reliable way.
5) Crowdsourcing (HaaS) looks at work in a stateless manner, by breaking down a large project or set of work into smaller units, each of which can be completed by a worker with the right expertise.
## Download
1) Open terminal
2) Execute the command "git clone https://github.com/manikajan123/Social_Networking_Academia.git"
## Install
1) Download and install the most recent sbt or activator from play official website
2) Open terminal
3) Add "sbt" or "activator" to the path of terminal
4) Execute the command "activator clean run"
## Use
1) Open a browser
2) Go to the url "http://localhost:9000"
| 47.043478 | 194 | 0.768022 | eng_Latn | 0.999144 |
bb7e8d5d9b5f3689eded0ca496bec5452ac273a1 | 40 | md | Markdown | _posts/2008-08-05-lc0728.md | GuilinDev/GuilinDev.github.io | 8eb80cde9dd8afca1843a1c197ada711987f9f7a | [
"MIT"
] | null | null | null | _posts/2008-08-05-lc0728.md | GuilinDev/GuilinDev.github.io | 8eb80cde9dd8afca1843a1c197ada711987f9f7a | [
"MIT"
] | 3 | 2020-06-23T18:15:58.000Z | 2021-07-16T01:57:18.000Z | _posts/2008-08-05-lc0728.md | GuilinDev/GuilinDev.github.io | 8eb80cde9dd8afca1843a1c197ada711987f9f7a | [
"MIT"
] | 1 | 2022-03-08T03:30:24.000Z | 2022-03-08T03:30:24.000Z | ---
layout: post
permalink: lc0728
---
| 8 | 18 | 0.625 | eng_Latn | 0.2174 |
53be8ca3fc6df4e6822e7d5f5970abe73ba12d03 | 453 | md | Markdown | packages/syntaxes-themes/README.md | ivanceras/ultron | 4cdc806bd1a49ef171fdcdbadf379362b37331fd | [
"MIT"
] | 62 | 2021-01-13T07:09:22.000Z | 2022-03-14T06:06:10.000Z | packages/syntaxes-themes/README.md | ivanceras/ultron | 4cdc806bd1a49ef171fdcdbadf379362b37331fd | [
"MIT"
] | null | null | null | packages/syntaxes-themes/README.md | ivanceras/ultron | 4cdc806bd1a49ef171fdcdbadf379362b37331fd | [
"MIT"
] | 2 | 2021-01-28T02:18:53.000Z | 2021-09-24T12:03:59.000Z | # Bundled packages
`syntaxes/Packages` is from [SublimeHq/Packages](https://github.com/sublimehq/Packages) repository
This is using a specific commit [hash](https://github.com/sublimehq/Packages/tree/f36b8f807d5f30d2b8ef639232a9fc5960f550fa)
`themes/` is copied from [zola](https://github.com/getzola/zola) repository
ISSUES: "Plain Text" syntax is not included when loading the syntax set
To regenerate the syntaxes.packdump and themes.themedump
| 41.181818 | 123 | 0.801325 | eng_Latn | 0.908759 |
53bec1a65f436bf9a18d2400fae3bc0324397eed | 690 | md | Markdown | Xeon_E5-2650_8Cores_OneConnect_be3/kern.random.harvest.mask/result/fbsd11.1-yandex/README.md | 0mp/netbenches | d1635cab5b4f7178ea69e08432b714242005ad0f | [
"BSD-2-Clause"
] | 59 | 2015-09-22T13:33:57.000Z | 2022-03-04T17:13:28.000Z | Xeon_E5-2650_8Cores_OneConnect_be3/kern.random.harvest.mask/result/fbsd11.1-yandex/README.md | 0mp/netbenches | d1635cab5b4f7178ea69e08432b714242005ad0f | [
"BSD-2-Clause"
] | 2 | 2017-02-07T18:20:34.000Z | 2021-03-07T11:47:10.000Z | Xeon_E5-2650_8Cores_OneConnect_be3/kern.random.harvest.mask/result/fbsd11.1-yandex/README.md | 0mp/netbenches | d1635cab5b4f7178ea69e08432b714242005ad0f | [
"BSD-2-Clause"
] | 8 | 2015-12-28T08:34:02.000Z | 2021-12-10T23:31:06.000Z | ```
x 511-default.pps
+ 351.pps
+--------------------------------------------------------------------------+
|x + xx x * + + +|
| |____________A__M_________| |
| |_______________________MA________________________| |
+--------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 5 1329263 1331771.5 1331057.5 1330877.6 953.47112
+ 5 1329474 1334561 1332119.5 1332164.1 1841.2639
No difference proven at 95.0% confidence
```
| 49.285714 | 76 | 0.337681 | yue_Hant | 0.815529 |
53bec3e6761252aa7ba8e56c1e3ab392068a047e | 2,322 | md | Markdown | docs/vs-2015/debugger/enabling-debug-features-in-visual-cpp-d-debug.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/enabling-debug-features-in-visual-cpp-d-debug.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/enabling-debug-features-in-visual-cpp-d-debug.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Hata ayıklama Visual c++ özelliklerini etkinleştirme (-D_DEBUG) | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-debug
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- vs.debug
dev_langs:
- FSharp
- VB
- CSharp
- C++
helpviewer_keywords:
- /D_DEBUG compiler option [C++]
- debugging [C++], enabling debug features
- debugging [MFC], enabling debug features
- assertions, enabling debug features
- D_DEBUG compiler option
- MFC libraries, debug version
- debug builds, MFC
- _DEBUG macro
ms.assetid: 276e2254-7274-435e-ba4d-67fcef4f33bc
caps.latest.revision: 10
author: mikejo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: e512620e1af8da85039ed403d4280568101fbe57
ms.sourcegitcommit: 240c8b34e80952d00e90c52dcb1a077b9aff47f6
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/23/2018
ms.locfileid: "49829267"
---
# <a name="enabling-debug-features-in-visual-c-ddebug"></a>Enabling Debug Features in Visual C++ (/D_DEBUG)
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
In [!INCLUDE[vcprvc](../includes/vcprvc-md.md)], debug features such as assertions are enabled when you build your program with the **_DEBUG** symbol defined. You can define **_DEBUG** in one of two ways:
- Specify **#define _DEBUG** in your source code, or
- Specify the **/D_DEBUG** compiler option. (If you create your project in Visual Studio using the project wizards, **/D_DEBUG** is defined automatically in the Debug configuration.)
When **_DEBUG** is defined, the compiler compiles the sections of code surrounded by **#ifdef _DEBUG** and `#endif`.
The debug configuration of an MFC program must be linked with the debug version of the MFC library. The MFC header files link to the correct version of the MFC library based on symbols you have defined, such as **_DEBUG** and **_UNICODE**. For details, see [MFC Library Versions](http://msdn.microsoft.com/library/3d7a8ae1-e276-4cf8-ba63-360c2f85ad0e).
## <a name="see-also"></a>See Also
[Debugging Native Code](../debugger/debugging-native-code.md)
[Project Settings for a C++ Debug Configuration](../debugger/project-settings-for-a-cpp-debug-configuration.md)
| 39.355932 | 373 | 0.770026 | tur_Latn | 0.981456 |
53bf308e06db96fa41b6af631138300a6b3e0f9f | 242 | md | Markdown | messages/1.3.0.md | TheSecEng/MarkdownTOC | 7c69137249820dc586fd90b58fca2f34c54f2abc | [
"MIT"
] | 299 | 2015-01-16T23:58:12.000Z | 2022-03-12T03:26:17.000Z | messages/1.3.0.md | naokazuterada/MarkdownTOC | b61546d001661d9385423556a62c21c36abc6857 | [
"MIT"
] | 137 | 2015-01-14T22:43:21.000Z | 2021-05-25T10:27:12.000Z | messages/1.3.0.md | TheSecEng/MarkdownTOC | 7c69137249820dc586fd90b58fca2f34c54f2abc | [
"MIT"
] | 77 | 2015-01-23T17:51:36.000Z | 2022-03-16T02:19:38.000Z | # MarkdownTOC - 1.3.0
## Changes
- Add 'Auto link' feature
- Bug fix: ignore headings and toc tags in codeblock
- Remove auto rewrite toc tag when 'depth' attr is not declared
---
More detail
https://github.com/naokazuterada/MarkdownTOC/ | 18.615385 | 63 | 0.731405 | eng_Latn | 0.722824 |
53c0d99cc74baca190096237086b611b8784b627 | 933 | md | Markdown | cx_Freeze/samples/service/README.md | TechnicalPirate/cx_Freeze | 0b97566ed62cf0cddccadce574c4f53bcf6baa5a | [
"PSF-2.0"
] | 358 | 2020-07-02T13:00:02.000Z | 2022-03-29T10:03:57.000Z | cx_Freeze/samples/service/README.md | TechnicalPirate/cx_Freeze | 0b97566ed62cf0cddccadce574c4f53bcf6baa5a | [
"PSF-2.0"
] | 372 | 2020-07-02T20:47:57.000Z | 2022-03-31T19:35:05.000Z | cx_Freeze/samples/service/README.md | TechnicalPirate/cx_Freeze | 0b97566ed62cf0cddccadce574c4f53bcf6baa5a | [
"PSF-2.0"
] | 78 | 2020-07-09T14:24:03.000Z | 2022-03-22T19:06:52.000Z | # Service sample
A simple setup script for creating a Windows service.
See the comments in the Config.py and ServiceHandler.py files for more
information on how to set this up.
Installing the service is done with the option `--install <Name>` and
uninstalling the service is done with the option `--uninstall <Name>`. The
value for `<Name>` is intended to differentiate between different invocations
of the same service code -- for example for accessing different databases or
using different configuration files.
# Installation and requirements:
In a virtual environment, install by issuing the command:
```
pip install --upgrade cx_Freeze cx_Logging
```
cx_Logging 3.0 has support for Python 3.6 up to 3.9.
# Build the executable:
```
python setup.py build
```
# Run the sample
Run in a command prompt or PowerShell with admin privileges.
```
cx_FreezeSampleService --install test
cx_FreezeSampleService --uninstall test
```
| 24.552632 | 76 | 0.777063 | eng_Latn | 0.995247 |
53c27dd7b458723113ed4f65e882dd5bf3846ffd | 142 | md | Markdown | README.md | schachar/node-require-hook | d0796cbd57daf1fecaa35149ec3dab8bb9f80fc4 | [
"MIT"
] | 3 | 2020-07-28T13:36:09.000Z | 2020-08-14T07:29:35.000Z | README.md | schachar/node-require-hook | d0796cbd57daf1fecaa35149ec3dab8bb9f80fc4 | [
"MIT"
] | null | null | null | README.md | schachar/node-require-hook | d0796cbd57daf1fecaa35149ec3dab8bb9f80fc4 | [
"MIT"
] | null | null | null |
* You can view the code overridden in hook.js by clicking [here](https://github.com/nodejs/node/blob/v12.16.2/lib/internal/modules/cjs/loader.js#L797)
| 35.5 | 139 | 0.753521 | kor_Hang | 0.196158 |
---
title: Distribuera Kubernetes-tillstånds lösa program på Azure Stack Edge Pro GPU-enhet med kubectl | Microsoft Docs
description: Beskriver hur du skapar och hanterar en Kubernetes program distribution med hjälp av kubectl på en Microsoft Azure Stack Edge Pro-enhet.
services: databox
author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: how-to
ms.date: 08/28/2020
ms.author: alkohli
ms.openlocfilehash: 6356089daed02270a14903639afee8001153b195
ms.sourcegitcommit: 6a350f39e2f04500ecb7235f5d88682eb4910ae8
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 12/01/2020
ms.locfileid: "96447371"
---
# <a name="deploy-a-kubernetes-stateless-application-via-kubectl-on-your-azure-stack-edge-pro-gpu-device"></a>Deploy a Kubernetes stateless application via kubectl on your Azure Stack Edge Pro GPU device
This article describes how to deploy a stateless application using kubectl commands on an existing Kubernetes cluster. It walks you through the process of creating and configuring the pods in your stateless application.
## <a name="prerequisites"></a>Prerequisites
Before you can create a Kubernetes cluster and use the `kubectl` command-line tool, make sure that:
- You have sign-in credentials to a 1-node Azure Stack Edge Pro device.
- Windows PowerShell 5.0 or later is installed on a Windows client system to access the Azure Stack Edge Pro device. You can also use another client with a supported operating system; this article describes the procedure when using a Windows client. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-7).
- Compute is enabled on the Azure Stack Edge Pro device. To enable compute, go to the **Compute** page in the local UI of the device. Then select a network interface that you want to enable for compute and select **Enable**. Enabling compute creates a virtual switch on your device on that network interface. For more information, see [Enable compute network on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
- Your Azure Stack Edge Pro device has a Kubernetes cluster server running version v1.9 or later. For more information, see [Create and manage a Kubernetes cluster on a Microsoft Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md).
- You have installed `kubectl`.
## <a name="deploy-a-stateless-application"></a>Deploy a stateless application
Before you begin, you should have:
1. Created a Kubernetes cluster.
2. Set up a namespace.
3. Associated a user with the namespace.
4. Saved the user configuration to `C:\Users\<username>\.kube`.
5. Installed `kubectl`.
You can now begin running and managing stateless application deployments on an Azure Stack Edge Pro device. Before you start using `kubectl`, verify that you have the correct version of `kubectl`.
### <a name="verify-you-have-the-correct-version-of-kubectl-and-set-up-configuration"></a>Verify you have the correct version of kubectl and set up configuration
To check the version of `kubectl`:
1. Verify that the version of `kubectl` is greater than or equal to 1.9:
```powershell
kubectl version
```
Here is an example output:
```powershell
PS C:\WINDOWS\system32> C:\windows\system32\kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
```
In this case, the kubectl client version is v1.15.2, which is compatible, so you can continue.
2. Get a list of the pods running on your Kubernetes cluster. A pod is an application container, or process, running on your Kubernetes cluster.
```powershell
kubectl get pods -n <namespace-string>
```
Here is an example of the command usage:
```powershell
PS C:\WINDOWS\system32> kubectl get pods -n "test1"
No resources found.
PS C:\WINDOWS\system32>
```
The output should indicate that no resources (pods) were found, because no applications are running on your cluster.
The command populates the directory structure "C:\Users\<username>\.kube" with configuration files. The kubectl command-line tool uses these files to create and manage stateless applications on your Kubernetes cluster.
3. Manually check the directory structure "C:\Users\<username>\.kube" to verify that *kubectl* has populated it with the following subfolders:
```powershell
PS C:\Users\username> ls .kube
Directory: C:\Users\user\.kube
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/18/2020 11:05 AM cache
d----- 2/18/2020 11:04 AM http-cache
-a---- 2/18/2020 10:41 AM 5377 config
```
> [!NOTE]
> To display a list of all the kubectl commands, type `kubectl --help`.
### <a name="create-a-stateless-application-using-a-deployment"></a>Create a stateless application using a deployment
Now that you have verified that the kubectl command-line version is correct and you have the required configuration files, you can create a stateless application deployment.
A pod is the basic execution unit of a Kubernetes application, and the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod also encapsulates storage resources, a unique network IP, and options that govern how the containers should run.
The type of stateless application you will create is an nginx web server deployment.
All kubectl commands that you use to create and manage stateless application deployments need to specify the namespace associated with the configuration. You created the namespace with `New-HcsKubernetesNamespace` when connecting to the cluster on your Azure Stack Edge Pro device in the tutorial [Create and manage a Kubernetes cluster on a Microsoft Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md).
To specify the namespace in a kubectl command, use `kubectl <command> -n <namespace-string>`.
Follow these steps to create an nginx deployment:
1. Apply a stateless application by creating a Kubernetes deployment object:
```powershell
kubectl apply -f <yaml-file> -n <namespace-string>
```
In this example, the path to the application YAML file is an external source.
Here is an example of the command usage and its output:
```powershell
PS C:\WINDOWS\system32> kubectl apply -f https://k8s.io/examples/application/deployment.yaml -n "test1"
deployment.apps/nginx-deployment created
```
Alternatively, you can save the following markdown to your local machine and substitute the path and file name in the *-f* parameter. For example, "C:\Kubernetes\deployment.yaml". Here is the configuration for the application deployment:
```markdown
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
```
This command creates a default nginx deployment that has two pods to run your application.
2. Get the description of the Kubernetes nginx-deployment that you created:
```powershell
kubectl describe deployment nginx-deployment -n <namespace-string>
```
Here is an example of the command usage and its output:
```powershell
PS C:\Users\user> kubectl describe deployment nginx-deployment -n "test1"
Name: nginx-deployment
Namespace: test1
CreationTimestamp: Tue, 18 Feb 2020 13:35:29 -0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"test1"},"spec":{"repl...
Selector: app=nginx
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-5754944d6c (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m22s deployment-controller Scaled up replica set nginx-deployment-5754944d6c to 2
```
If you take a closer look at *Replicas*, you will see:
```powershell
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
```
The *Replicas* setting shows that the deployment specification requires two pods, that those pods were created and updated, and that they are ready for use.
> [!NOTE]
> A replica set replaces pods that are deleted or terminated for any reason, for example in the case of a disk failure or a disruptive device upgrade. Therefore, we recommend that you use a replica set even if your application requires only a single pod.
3. List the pods in the deployment:
```powershell
kubectl get pods -l app=nginx -n <namespace-string>
```
Here is an example of the command usage and its output:
```powershell
PS C:\Users\user> kubectl get pods -l app=nginx -n "test1"
NAME READY STATUS RESTARTS AGE
nginx-deployment-5754944d6c-7wqjd 1/1 Running 0 3m13s
nginx-deployment-5754944d6c-nfj2h 1/1 Running 0 3m13s
```
The output verifies that we have two pods, with unique names that we can reference using kubectl.
4. To view information about an individual pod in the deployment:
```powershell
kubectl describe pod <podname-string> -n <namespace-string>
```
Here is an example of the command usage and its output:
```powershell
PS C:\Users\user> kubectl describe pod "nginx-deployment-5754944d6c-7wqjd" -n "test1"
Name: nginx-deployment-5754944d6c-7wqjd
Namespace: test1
Priority: 0
Node: k8s-1d9qhq2cl-n1/10.128.46.184
Start Time: Tue, 18 Feb 2020 13:35:29 -0800
Labels: app=nginx
pod-template-hash=5754944d6c
Annotations: <none>
Status: Running
IP: 172.17.246.200
Controlled By: ReplicaSet/nginx-deployment-5754944d6c
Containers:
nginx:
Container ID: docker://280b0f76bfdc14cde481dc4f2b8180cf5fbfc90a084042f679d499f863c66979
Image: nginx:1.7.9
Image ID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 18 Feb 2020 13:35:35 -0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8gksw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-8gksw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8gksw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m58s default-scheduler Successfully assigned test1/nginx-deployment-5754944d6c-7wqjd to k8s-1d9qhq2cl-n1
Normal Pulling 4m57s kubelet, k8s-1d9qhq2cl-n1 Pulling image "nginx:1.7.9"
Normal Pulled 4m52s kubelet, k8s-1d9qhq2cl-n1 Successfully pulled image "nginx:1.7.9"
Normal Created 4m52s kubelet, k8s-1d9qhq2cl-n1 Created container nginx
Normal Started 4m52s kubelet, k8s-1d9qhq2cl-n1 Started container nginx
```
### <a name="rescale-the-application-deployment-by-increasing-the-replica-count"></a>Rescale the application deployment by increasing the replica count
Each pod is intended to run a single instance of a given application. If you want to scale your application horizontally to run multiple instances, you can increase the number of pods, one for each instance. In Kubernetes, this is referred to as replication.
You can increase the number of pods in your application deployment by applying a new YAML file. The YAML file changes the replicas setting to 4, which increases the number of pods in your deployment to four. To increase the number of pods from 2 to 4:
```powershell
PS C:\WINDOWS\system32> kubectl apply -f https://k8s.io/examples/application/deployment-scale.yaml -n "test1"
```
Alternatively, you can save the following markdown to your local machine and substitute the path and file name in the *-f* parameter of `kubectl apply`. For example, "C:\Kubernetes\deployment-scale.yaml". Here is the configuration for the scaled application deployment:
```markdown
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 4 # Update the replicas from 2 to 4
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.8
ports:
- containerPort: 80
```
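As an alternative to editing the YAML, the replica count can also be changed imperatively with the standard `kubectl scale` command, using the same deployment name and namespace as in the example above:

```powershell
kubectl scale deployment nginx-deployment --replicas=4 -n "test1"
```

Both approaches converge on the same desired state; the YAML route is preferable when the manifest is kept under source control.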
To verify that the deployment has four pods:
```powershell
kubectl get pods -l app=nginx
```
Here is example output for rescaling the deployment from two to four pods:
```powershell
PS C:\WINDOWS\system32> kubectl get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
nginx-deployment-148880595-4zdqq 1/1 Running 0 25s
nginx-deployment-148880595-6zgi1 1/1 Running 0 25s
nginx-deployment-148880595-fxcez 1/1 Running 0 2m
nginx-deployment-148880595-rwovn 1/1 Running 0 2m
```
As you can see from the output, you now have four pods in your deployment that can run your application.
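Because the deployment is backed by a replica set, deleting any one pod simply causes a replacement to be scheduled. You can observe this self-healing behavior by deleting a pod and listing the pods again. The pod name below is a placeholder — substitute one from your own `kubectl get pods` output:

```powershell
kubectl delete pod nginx-deployment-148880595-4zdqq -n "test1"
kubectl get pods -l app=nginx -n "test1"
```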
### <a name="delete-a-deployment"></a>Delete a deployment
To delete the deployment, including all of its pods, run `kubectl delete deployment`, specifying the deployment name *nginx-deployment* and the namespace name. To delete the deployment:
```powershell
kubectl delete deployment nginx-deployment -n <namespace-string>
```
Here is an example of the command usage and its output:
```powershell
PS C:\Users\user> kubectl delete deployment nginx-deployment -n "test1"
deployment.extensions "nginx-deployment" deleted
```
## <a name="next-steps"></a>Next steps
[Kubernetes overview](azure-stack-edge-gpu-kubernetes-overview.md)
---
templateKey: blog-post
title: Back in Bliss
date: 2004-07-01
path: /back-in-bliss
featuredpost: false
featuredimage:
tags:
- ft-bliss
category:
- Iraq
comments: true
share: true
---
After a few days back in Ohio on a pass, I'm now back at Ft Bliss CRC. Should be shipping out to Kuwait in the next few days. Not much new here except that I got an email from somebody at CNN wanting to talk to me about my IRR callup experience. I'll have to talk to my chain of command and/or Army Public Affairs before I say anything to the media, but it's cool that CNN noticed me.
More IRR Call-Up Info:
[ArmyTimes: Thousands of IRR Soldiers Recalled](http://www.armytimes.com/story.php?f=0-292925-3056113.php)
[ArmyTimes: IRR Call-up Evokes Sharp Reaction from Hill Democrats](http://www.armytimes.com/story.php?f=0-292925-3057057.php)
I think once the letters go out to the next round of IRR call-ups (next week), I'll probably stop focusing on this issue, since I'll have more important concerns at that point (since I'll be in Kuwait or Iraq probably). I know a lot of people have emailed me to thank me for providing what info I have about IRRs and my experiences, and I really appreciate that. I hope some of the 5600 who are about to get a big surprise in the mail also find this site and are helped by it.
| 52.52 | 476 | 0.762376 | eng_Latn | 0.996415 |
[`< Back`](./)
---
# QueryStringBuilder
Namespace: GraphQL.Query.Builder
The GraphQL query builder class.
```csharp
public class QueryStringBuilder : IQueryStringBuilder
```
Inheritance [Object](https://docs.microsoft.com/en-us/dotnet/api/system.object) → [QueryStringBuilder](./graphql.query.builder.querystringbuilder)<br>
Implements [IQueryStringBuilder](./graphql.query.builder.iquerystringbuilder)
## Properties
### **QueryString**
The query string builder.
```csharp
public StringBuilder QueryString { get; }
```
#### Property Value
[StringBuilder](https://docs.microsoft.com/en-us/dotnet/api/system.text.stringbuilder)<br>
## Constructors
### **QueryStringBuilder()**
```csharp
public QueryStringBuilder()
```
## Methods
### **Clear()**
Clears the string builder.
```csharp
public void Clear()
```
### **FormatQueryParam(Object)**
Formats query param.
Returns:
- String: `"value"`
- Number: `10`
- Boolean: `true` / `false`
- Enum: `EnumValue`
- Key value pair: `key:"value"` / `key:10`
- List: `["value1","value2"]` / `[1,2]`
- Dictionary: `{a:"value",b:10}`
```csharp
protected internal string FormatQueryParam(object value)
```
#### Parameters
`value` [Object](https://docs.microsoft.com/en-us/dotnet/api/system.object)<br>
#### Returns
[String](https://docs.microsoft.com/en-us/dotnet/api/system.string)<br>
The formatted query param.
#### Exceptions
[InvalidDataException](https://docs.microsoft.com/en-us/dotnet/api/system.io.invaliddataexception)<br>
Invalid Object Type in Param List
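The formatting rules above are easy to restate outside C#. As an illustration only (this is not the library's code), a minimal Python analogue of the same rules looks like this:

```python
def format_query_param(value):
    """Format a GraphQL argument value following the rules above (illustrative sketch)."""
    if isinstance(value, bool):          # must be checked before numbers (bool is an int in Python)
        return "true" if value else "false"
    if isinstance(value, (int, float)):  # Number: 10
        return str(value)
    if isinstance(value, str):           # String: "value"
        return '"' + value + '"'
    if isinstance(value, dict):          # Dictionary: {a:"value",b:10}, keys emitted bare
        return "{" + ",".join(f"{k}:{format_query_param(v)}" for k, v in value.items()) + "}"
    if isinstance(value, (list, tuple)): # List: ["value1","value2"] / [1,2]
        return "[" + ",".join(format_query_param(v) for v in value) + "]"
    raise ValueError(f"unsupported query param type: {type(value).__name__}")

print(format_query_param({"a": "value", "b": 10}))  # prints {a:"value",b:10}
```

Note the ordering of the type checks: a boolean must be handled before the number case, or it would be formatted as a number.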
### **AddParams<TSource>(IQuery<TSource>)**
Adds query params to the query string.
```csharp
protected internal void AddParams<TSource>(IQuery<TSource> query)
```
#### Type Parameters
`TSource`<br>
#### Parameters
`query` IQuery<TSource><br>
The query.
### **AddFields<TSource>(IQuery<TSource>)**
Adds fields to the query string.
```csharp
protected internal void AddFields<TSource>(IQuery<TSource> query)
```
#### Type Parameters
`TSource`<br>
#### Parameters
`query` IQuery<TSource><br>
The query.
#### Exceptions
[ArgumentException](https://docs.microsoft.com/en-us/dotnet/api/system.argumentexception)<br>
Invalid Object in Field List
### **Build<TSource>(IQuery<TSource>)**
Builds the query.
```csharp
public string Build<TSource>(IQuery<TSource> query)
```
#### Type Parameters
`TSource`<br>
#### Parameters
`query` IQuery<TSource><br>
The query.
#### Returns
[String](https://docs.microsoft.com/en-us/dotnet/api/system.string)<br>
The GraphQL query as string, without outer enclosing block.
---
[`< Back`](./)
---
title: Create Segments | Microsoft Docs
description: Describes how to create a segment for a group of contacts in Business Central, for example, in order to target several contacts with a direct mail.
services: project-madeira
documentationcenter: ''
author: jswymer
ms.service: dynamics365-business-central
ms.topic: conceptual
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords: relationship, prospect
ms.date: 04/01/2021
ms.author: jswymer
ms.openlocfilehash: ddddcfa3439fe917a930a0a6cbd3605a273c540f
ms.sourcegitcommit: a7cb0be8eae6ece95f5259d7de7a48b385c9cfeb
ms.translationtype: HT
ms.contentlocale: en-NZ
ms.lasthandoff: 07/08/2021
ms.locfileid: "6445617"
---
# <a name="create-segments"></a>Create Segments
You can create segments to select a group of contacts, for example, if you want to create an interaction involving several contacts, such as direct mail.
## <a name="to-create-a-segment"></a>To create a segment
1. Choose the search icon, enter **Segments**, and then choose the related link.
2. Choose the **New** action.
3. In the **General** section, in the **No.** field, enter a number for the segment.
Alternatively, if you have set up number series for segments on the **Marketing Setup** page, you can press Enter to select the next available segment number.
4. Fill in the other fields on the header.
You can now add contacts to the segment. For more information, see [Add Contacts to Segments](marketing-add-contact-segment.md).
## <a name="see-also"></a>See Also
[Managing Segments](marketing-segments.md)
[Managing Sales Opportunities](marketing-manage-sales-opportunities.md)
[Working with [!INCLUDE[prod_short](includes/prod_short.md)]](ui-work-product.md)
[!INCLUDE[footer-include](includes/footer-banner.md)]
# Mojo::Discord
This is a set of Perl modules designed to implement parts of the Discord public API, built on Mojo::IOLoop.
There are four modules involved:
- **Mojo::Discord::Auth** handles OAuth2
- **Mojo::Discord::Gateway** handles the Websocket realtime event monitoring part (connect and monitor the chat)
- **Mojo::Discord::REST** handles calls to the REST web API, which mostly handles actions you want the bot to take
- **Mojo::Discord** is a wrapper that saves the user the trouble of managing both REST and Gateway APIs manually.
## Note: This is a spare-time project
I offer no promises as to code completion, timeline, or even support. If you have questions I will try to answer.
## Second Note: Amateur code warning
I would recommend not building anything on this code for now. It is lacking in documentation, error handling, and other things.
I hope to improve this over time, but again because it's a side project I can only do so much with the time I have.
## Pre-Requisites
- **Mojo::UserAgent** and **Mojo::IOLoop** to provide non-blocking asynchronous HTTP calls and websocket functionality.
- **Compress::Zlib**, as some of the incoming messages are compressed Zlib blobs
- **Mojo::JSON** to convert the compressed JSON messages into Perl structures.
- **Encode::Guess** to determine whether we're dealing with a compressed stream or not.
- **IO::Socket::SSL** is required to fetch the Websocket URL for Discord to connect to.
### Example Program
This application creates a very basic AI chat bot using the Hailo module (a modern implementation of MegaHAL).
Rather than in-lining the code in the README, you can find the example program in [hailobot.pl](example/hailobot.pl). A [sample config file](/example/config.ini) is included. You just need to give it a valid Discord bot token.
## Mojo::Discord::Gateway
The Discord "Gateway" is a persistent Websocket connection that sends out events as they happen to all connected clients.
This module monitors the gateway and parses events, although once connected it largely reverts to simply passing the contents of each message to the appropriate callback function, as defined by the user.
The connection process goes a little like this:
1. Request a Gateway URL to connect to
a. Seems to always return the same URL now, but in the past it looks like they had multiple URLs and servers.
2. Open a websocket connection to the URL received in Step 1.
3. Once connected, send an IDENTIFY message to the server containing info about who we are (Application-wise)
4. Gateway sends us a READY message containing (potentially) a ton of information about our user identity, the servers we are connected to, a heartbeat interval, and so on.
5. Use the Heartbeat Interval supplied in Step 4 to send a HEARTBEAT message to the server periodically. This lets the server know we are still there, and it will close our connection if we do not send it.
Now that we're connected and sending a heartbeat, all we have to do is listen for incoming messages and pass them off to the correct handler and callback functions.
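To make steps 3 and 5 concrete, here is a minimal sketch of the IDENTIFY and HEARTBEAT payloads. It is shown in Python for brevity rather than Perl, the opcodes (2 and 1) come from the public Discord gateway documentation, and the property values are illustrative — this is not Mojo::Discord's internal code:

```python
import json

def identify_payload(token):
    # Opcode 2 (IDENTIFY): sent once after the websocket opens (step 3).
    return {
        "op": 2,
        "d": {
            "token": token,
            "properties": {            # values here are placeholders
                "$os": "linux",
                "$browser": "my-bot",
                "$device": "my-bot",
            },
        },
    }

def heartbeat_payload(last_seq):
    # Opcode 1 (HEARTBEAT): sent every heartbeat_interval ms (step 5).
    return {"op": 1, "d": last_seq}

print(json.dumps(heartbeat_payload(42)))  # {"op": 1, "d": 42}
```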
## Mojo::Discord::REST
The REST module exists for when you want your bot to take some kind of action. It's a fairly simple JSON API, you just need to include your bot token in the header for calls.
This module will implement calls for sending messages, indicating the user has started typing, and maybe a few other things that a text chat bot needs to be able to do.
## Mojo::Discord::Auth
This module was created to implement parts of the OAuth2 authentication method for Discord.
It is far from complete and still requires some copy/pasting, but it functions.
Since OAuth is not required for bot clients, this module may not be included in the Mojo::Discord wrapper.
Using it directly might make more sense.
Creating a new Mojo::Discord::Auth object takes the same arguments as above, but also requires an Application ID, Shared Secret, and the Auth Code received from the browser.
The only function implemented so far is request_token, which sends the auth code to the token endpoint and returns:
- Access Token
- Refresh Token
- Expiration Time
- Token Type
- Access Scope
### Example Code:
```perl
#!/usr/bin/env perl
use v5.10;
use warnings;
use strict;
use Mojo::UserAgent;
use Mojo::Discord::Auth;
use Data::Dumper
my $params = {
'name' => 'Your Application Name',
'url' => 'https://yourwebsite.com',
'version' => '0.1',
'code' => $ARGV[0],
'id' => 'your_application_id',
'secret' => 'your_application_secret',
};
my $auth = Mojo::Discord::Auth->new($params);
my $token_hash = $auth->request_token();
# Do something with the result
print Dumper($token_hash);
```
<strong>Project</strong>: fabric-sdk-go<br><strong>Branch</strong>: master<br><strong>ID</strong>: 23867<br><strong>Subject</strong>: [FAB-10881] Update base image to 0.4.10<br><strong>Status</strong>: ABANDONED<br><strong>Owner</strong>: Troy Ronda - [email protected]<br><strong>Assignee</strong>:<br><strong>Created</strong>: 6/29/2018, 7:14:33 AM<br><strong>LastUpdated</strong>: 6/29/2018, 12:35:58 PM<br><strong>CommitMessage</strong>:<br><pre>[FAB-10881] Update base image to 0.4.10
Change-Id: I1a9753a6b0392b12117b80378971907a87de8f2e
Signed-off-by: Troy Ronda <[email protected]>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Troy Ronda - [email protected]<br><strong>Reviewed</strong>: 6/29/2018, 7:14:33 AM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 6/29/2018, 7:14:42 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/3081/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 6/29/2018, 7:19:27 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/3107/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 6/29/2018, 7:32:08 AM<br><strong>Message</strong>: <pre>Patch Set 1: Verified-1
Build Failed
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/3107/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/3107/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-x86_64/3107
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/3081/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/3081/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-s390x/3081</pre><strong>Reviewer</strong>: Troy Ronda - [email protected]<br><strong>Reviewed</strong>: 6/29/2018, 12:35:58 PM<br><strong>Message</strong>: <pre>Abandoned</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Troy Ronda - [email protected]<br><strong>Uploader</strong>: Troy Ronda - [email protected]<br><strong>Created</strong>: 6/29/2018, 7:14:33 AM<br><strong>UnmergedRevision</strong>: [ef0293ecbda5d779244587eaa67e4dd1caf0af3c](https://github.com/hyperledger-gerrit-archive/fabric-sdk-go/commit/ef0293ecbda5d779244587eaa67e4dd1caf0af3c)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 6/29/2018, 7:32:08 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br></blockquote> | 138.173913 | 953 | 0.762429 | kor_Hang | 0.268297 |
53c428fea1590e67d897b1087c1119600757da62 | 2,237 | md | Markdown | docs/vs-2015/ide/help-content-manager-overrides.md | xyzpda/visualstudio-docs.ja-jp | ed48eeef1b7825ae4b16eeffcf19ee6a1665ac97 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/ide/help-content-manager-overrides.md | xyzpda/visualstudio-docs.ja-jp | ed48eeef1b7825ae4b16eeffcf19ee6a1665ac97 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/ide/help-content-manager-overrides.md | xyzpda/visualstudio-docs.ja-jp | ed48eeef1b7825ae4b16eeffcf19ee6a1665ac97 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Help Content Manager Overrides | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-help-viewer
ms.topic: conceptual
ms.assetid: 95fe6396-276b-4ee5-b03d-faacec42765f
caps.latest.revision: 11
author: jillre
ms.author: jillfra
manager: jillfra
ms.openlocfilehash: 70c0044a0436dcf27a3b087b3f11a5f759824735
ms.sourcegitcommit: a8e8f4bd5d508da34bbe9f2d4d9fa94da0539de0
ms.translationtype: MTE95
ms.contentlocale: ja-JP
ms.lasthandoff: 10/19/2019
ms.locfileid: "72645561"
---
# <a name="help-content-manager-overrides"></a>Help Content Manager Overrides
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
You can change the default behavior of Help Viewer and Help-related features in the Visual Studio IDE by modifying the registry.
|Task|Registry key|Value and definition|
|----------|------------------|--------------------------|
|Define a unique service endpoint|HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VSWinExpress\14.0\Help|NewContentAndUpdateService: *HTTPValueForTheServiceEndpoint*.|
|Define the online or offline default|HKEY_LOCAL_MACHINE\Software\Microsoft\VSWinExpress\14.0\help|UseOnlineHelp: Enter `0` for local Help or `1` for online Help.|
|Define a unique F1 endpoint|HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VSWinExpress\14.0\Help|OnlineBaseUrl: *HTTPValueForTheServiceEndpoint*|
|Override the priority of BITS jobs|HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node (on 64-bit machines)\Microsoft\Help\v2.2|BITSPriority: use one of the values **foreground**, **high**, **normal**, or **low**.|
|Disable online Help (and the IDE online option)|HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node (on 64-bit machines)\Microsoft\VisualStudio\14.0\Help|OnlineHelpPreferenceDisabled: set to 1 to disable access to online Help content.|
|Disable content management|HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node (on 64-bit machines)\Microsoft\VisualStudio\14.0\Help|ContentManagementDisabled: set to 1 to disable the **Manage Content** tab in Help Viewer.|
|Specify a local content store on a network share|HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Help\v2.2\Catalogs\VisualStudio11|LocationPath="*ContentStoreNetworkShare*"|
|Disable content installation when a Visual Studio feature starts for the first time|HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node (on 64-bit machines)\Microsoft\VisualStudio\14.0\Help|DisableFirstRunHelpSelection: set to 1 to disable the Help configuration that runs when Visual Studio first starts.|
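For example, the online/offline default in the table above could be set by importing a `.reg` file along the lines of the sketch below. This is illustrative only — the exact key path depends on your SKU and bitness, and the `dword` type is an assumption; check the key on your machine first:

```reg
Windows Registry Editor Version 5.00

; Illustrative sketch: prefer online Help (1) over local Help (0).
[HKEY_LOCAL_MACHINE\Software\Microsoft\VSWinExpress\14.0\help]
"UseOnlineHelp"=dword:00000001
```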
## <a name="see-also"></a>See also
[Help Viewer Administrator Guide](../ide/help-viewer-administrator-guide.md)
| 60.459459 | 219 | 0.806437 | yue_Hant | 0.913895 |
53c4294ad292485a74a2ea4b0e6caf640ad77f0c | 2,197 | md | Markdown | CONTRIBUTING.md | rishiosaur/hoot | 79c95d9317339b39183813c47f3105168b028052 | [
"MIT"
] | 23 | 2020-05-19T13:59:36.000Z | 2021-11-08T08:31:29.000Z | CONTRIBUTING.md | rishiosaur/hoot | 79c95d9317339b39183813c47f3105168b028052 | [
"MIT"
] | 17 | 2020-05-17T01:32:53.000Z | 2022-02-26T22:16:18.000Z | CONTRIBUTING.md | rishiosaur/hoot-cli | 79c95d9317339b39183813c47f3105168b028052 | [
"MIT"
] | 7 | 2020-05-19T18:04:29.000Z | 2021-09-09T01:04:05.000Z | # Contributing to Hoot
:+1::tada: First off, thanks for taking the time to contribute! :tada::+1:
The following is a set of guidelines for contributing to the Hoot project.
## Table Of Contents
[Code of Conduct](#code-of-conduct)
[I don't want to read this whole thing, I just have a question!!!](#i-dont-want-to-read-this-whole-thing-i-just-have-a-question)
[How Can I Contribute?](#how-can-i-contribute)
* [Reporting Bugs](#reporting-bugs)
* [Suggesting Enhancements](#suggesting-enhancements)
* [Pull Requests](#pull-requests)
### Code of Conduct
This project and everyone participating in it is governed by the [Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
### I don't want to read this whole thing I just have a question!!!
Join us on the Hack Club Slack in the #hoot channel.
### What should I know before I get started?
Try out Hoot! Play around with it and have fun!
## Helping Out
### Reporting Bugs
This section guides you through submitting a bug report for Hoot. Following these guidelines helps us understand your report, reproduce the behaviour, and fix your issue.
When you create an issue, you will be provided with a template called Bug Report. Please fill it out completely; the information it asks for helps us resolve issues faster.
### Suggesting Enhancements
This section guides you through submitting an enhancement suggestion for Hoot to help us improve!
To suggest an enhancement, please create an issue using the template called Feature Request, and fill it out completely; the information it asks for helps us resolve issues faster. Hoot is maintained by volunteers, so if you can volunteer to bring your request to life, please do!
## Pull Requests
### Pre-existing Issue
If you're fixing a pre-existing issue, comment on the issue to alert participants that you'll be working on it. Then once your coding is done, create a PR! We've got a template so please use that.
### New Issue or Feature
Begin by opening an issue using the relevant template, and comment on the issue that you'll be working on it. Then once your coding is done, create a PR! We've got a template so please use that.
Thanks!
| 39.945455 | 275 | 0.763769 | eng_Latn | 0.998929 |
53c472c06c2d79703477c195e3430141f17f94f6 | 2,900 | md | Markdown | docs/principle/00_feynman-technique.md | xinetzone/study | 1133407bd9031ef93bae5edafbe826afa7df04bc | [
"Apache-2.0"
] | null | null | null | docs/principle/00_feynman-technique.md | xinetzone/study | 1133407bd9031ef93bae5edafbe826afa7df04bc | [
"Apache-2.0"
] | null | null | null | docs/principle/00_feynman-technique.md | xinetzone/study | 1133407bd9031ef93bae5edafbe826afa7df04bc | [
"Apache-2.0"
] | null | null | null | # The Feynman Technique
Reference: [*The Feynman Learning Method: Using Output to Force Input* (Yin Hongxin, Li Wei) — JD Books (jd.com)](https://item.jd.com/13106552.html)
**The Feynman Technique**: when preparing to learn something new, express it in the simplest, clearest, and most understandable language you can, so that even a layperson can follow it.
The Feynman Technique can be broken down into five steps:
1. Choose the concept to learn
2. Understand the concept (reliable, systematic, multi-channel information)
3. Teach it in order to learn it
4. Review and reflect
5. Simplify and absorb
The brain has two deeply rooted learning patterns:
1. The more familiar a concept is, the more the brain likes it. (relevance)
2. Forcing connections between different concepts is also a specialty of the brain.
## The essence of learning
Learn for the present, not for the future!
<div class="w3-card-4 w3-orange w3-padding">
Truly high-quality learning must build an effective connection between us and the real world.
</div>
<div class="w3-card-4 w3-orange w3-padding">
Only by pulling learning back from a utilitarian orientation, and focusing on how to master a subject thoroughly and on how that knowledge can make you better today, can you truly improve your ability to learn.
</div>
:::{panels}
:container: +full-width
:column: col-lg-4 px-2 py-2
---
:header: bg-jb-one
Good thinking needs **positive feedback**
^^^
The point is not to study and research alone; interact with the outside world often.
---
:header: bg-jb-two
**Output speeds up the maturing of thought**
^^^
The **Matthew effect**: "Any individual, group, or region that achieves success and progress in some respect (such as money, reputation, or status) accumulates an advantage that brings more opportunities for even greater success and progress."
---
:header: bg-jb-three
**Make thinking quantifiable**
^^^
1. Direction: lock in the **main direction** of your thinking
2. Induction: establish the **main logic** of your thinking
3. Verification: verify the **effect** of your thinking
4. Feedback: feed back what was right and what was wrong
5. Simplification: make the complex thinking process simple
6. Absorption: digest the results of your thinking
:::
## Choosing what to learn
Selecting the knowledge and skills you **want** to master is only the first step; you also need to **find** the necessity and significance of learning them, and **reinforce** these internal connections.
### Think about why we learn
:::{panels}
:container: +full-width
:column: col-lg-6 px-2 py-2
---
:header: bg-jb-one
When you want to master a piece of knowledge or a skill:
^^^
1. How do you set the goal **correctly**? (For example, "what is mathematics for" and "what is learning mathematics for" are two essentially different questions)
2. Once the goal is locked in, how do you discover the necessity of learning it and the significance it brings?
:::
### Focus on the goal
Within a golden window of one or two years, **focus on the right goal and achieve remarkable results as far as possible**.
The key to succeeding at anything lies not in how many things you want to do well, but in whether you can actually do them well.
#### How do you find the right direction?
1. Ask yourself some **key questions**
2. Make "the most important thing" your direction
At the level of learning, the most critical question is: what is the most important thing for me?
When setting goals, you can raise questions along two tracks:
:::{panels}
:container: +full-width
:column: col-lg-6 px-2 py-2
---
:header: bg-jb-one
**The future direction** (helps set macro goals)
^^^
- Is my future interest in AI algorithm research, or in practical applications?
- Among my friends who are skilled in applying AI, which direction are they in?
- Besides AI image processing, do I really have no other direction?
- Can studying AI image processing help me in other areas, such as better job prospects?
---
:header: bg-jb-two
**The current focus** (guides you to take the right actions and make the right study plans)
^^^
- To study AI image processing, what is the main problem I need to solve?
- Do I need to set a staged goal?
- In the AI image field, where is my knowledge lacking?
- How should I collect genuinely useful information?
:::
#### How do you find your real interest?
The best way to evaluate whether your goal is worthwhile is to analyze whether it fits your existing knowledge system.
If not doing something you are able to do would leave you with lifelong regret, then it is your goal. This holds for learning, work, and life alike.
Once you are clear about what your goal is, treat it as "the most important thing" you must do every day.
### Planning: build a "strong link" with the goal
:::{panels}
:container: +full-width
:column: col-lg-4 px-2 py-2
---
:header: bg-jb-one
Reasons
^^^
1. Demonstrate the necessity of learning this knowledge / doing this thing. (provides motivation)
2. Confirm the substantive link between the plan and the goal. (ensures you do not drift off the goal)
---
:header: bg-jb-two
Make sure the goal is right: the SMART principles
^^^
1. Clear and specific (**S**pecific)
2. Measurable and quantifiable (**M**easurable)
3. Achievable by yourself (**A**chievable)
4. Able to produce satisfaction / a sense of accomplishment (**R**ewarding)
5. Time-bound (**T**ime-bound)
---
:header: bg-jb-three
When making a study plan, reserve enough time for these three things:
^^^
- Time to lock in the most important goal
- Time to make the right plan
- Time to adjust the goal and the plan
:::
Learning is not about memorizing things; through learning we build our own effective framework of thinking and apply the knowledge in practice to solve real problems in life and work.
### The Feynman technique: goal principles
On top of the SMART principles, Feynman proposed five major principles.
#### The comprehensiveness principle
- Set goals with a sense of the **whole picture** and the **overall situation**.
- The goals you set should **match your experience, background, and accumulated knowledge**, and reflect the task at hand.
#### The focus principle
- The goals you set should have a **clear emphasis**.
- The goals you set should also be **targeted**.
#### The challenge principle
- The goals you set should be **challenging**. (stimulates curiosity and strengthens the motivation to learn)
- The goals you set should **tap into** and **stimulate** your **potential**. (give it everything you have to master it)
- Do not artificially lower the difficulty of a goal during the learning process. (enjoy the sense of achievement that learning brings)
#### The feasibility principle
- The goals you set should be **practical and feasible**.
- The goals you set must **fit** our **objective reality**.
#### The adjustability principle
The goals you set must be **adjustable**. Adjustability means that as the environment and internal or external conditions change, the learning goal can be adjusted as necessary to adapt to the change.
When setting goals, **leave room** for adjustment during execution.
### Understanding what we learn
| 15.76087 | 89 | 0.692069 | yue_Hant | 0.520851 |
53c4a5f2494f5580669781d92a74dc97205fc204 | 4,152 | md | Markdown | _posts/fr/2015-10-16-generator-gulp-angular-1-0-0-stable-released.md | suarezd/blog.eleven-labs.com | df3d746bf916a78d65f4ed79379f3ef48d6b7e80 | [
"MIT"
] | null | null | null | _posts/fr/2015-10-16-generator-gulp-angular-1-0-0-stable-released.md | suarezd/blog.eleven-labs.com | df3d746bf916a78d65f4ed79379f3ef48d6b7e80 | [
"MIT"
] | null | null | null | _posts/fr/2015-10-16-generator-gulp-angular-1-0-0-stable-released.md | suarezd/blog.eleven-labs.com | df3d746bf916a78d65f4ed79379f3ef48d6b7e80 | [
"MIT"
] | null | null | null | ---
layout: post
title: generator-gulp-angular 1.0.0 stable released
lang: fr
permalink: /fr/generator-gulp-angular-1-0-0-stable-released/
authors:
- mehdy
date: '2015-10-16 11:07:54 +0200'
date_gmt: '2015-10-16 09:07:54 +0200'
categories:
- Javascript
tags:
- AngularJS
- Yeoman
- Gulp
---
Intro
=====
It has now been more than a year since I ([@Swiip](https://twitter.com/Swiip)), quickly followed by [@zckrs](https://twitter.com/Zckrs), started working on our Yeoman generator. Today we’re celebrating the release of our first major and stable version: [generator-gulp-angular 1.0.0](https://www.npmjs.com/package/generator-gulp-angular){:rel="nofollow noreferrer"}.
At first we simply wanted to make a good merge of [generator-gulp-webapp](https://github.com/yeoman/generator-gulp-webapp) and [generator-angular](https://github.com/yeoman/generator-angular){:rel="nofollow noreferrer"} as I worked on Angular and got tired of Grunt's verbosity. Then, the project popularity started to increase and so did its ambition.
Philosophy
==========
We followed all the precepts of Yeoman, adding our own:
- Provide a well written seed project following the best recommendations in terms of folder structure and code style.
- Offer lots of options to enable the user to start instantly with the best tooling and optimization adapted to the latest technologies.
- Use the concept of automatic injection in different parts of the project: scripts tags both vendor and sources in the index.html, styles files, vendor, css or preprocessed.
- Provide a test coverage, as perfect as possible, of the code of the generator but also of the generated code.
Technologies supported
======================
We are not joking around when we talk about this being a stable version. We integrated lots of technologies and languages, from Coffee to TypeScript, from Sass to Stylus. The number of combinations exceeds several million! We wrote tests and documentation and fixed issues for 12 minor versions and 2 release candidates, to be able to deliver a perfectly configured seed project no matter the options you choose.

Optimization served
===================
We integrated many optimizations for your web application using some Gulp plugins :
- *browserSync*: full-featured development web server with livereload and devices sync
- *ngAnnotate*: convert simple injection to complete syntax to be minification proof
- *angular-templatecache*: all HTML partials will be converted to JS to be bundled in the application
- *ESLint*: The pluggable linting utility for JavaScript
- *watch*: watch your source files and recompile them automatically
- *useref*: allow configuration of your files in comments of your HTML file
- *uglify*: optimize all your JavaScript
- *clean-css*: optimize all your CSS
- *rev*: add a hash in the file names to prevent browser cache problems
- *karma*: out of the box unit test configuration with karma
- *protractor*: out of the box e2e test configuration with protractor
2.0.0 on the road...
====================
But the v1 is not the end of the road. While maintaining the v1 branch, we started a new Github organization called [FountainJS](https://github.com/FountainJS){:rel="nofollow noreferrer"} targeting a futuristic v2 version. As the context of the build tools has greatly evolved over a year, it will be a reboot of the code base.
The major selling point will be to use Yeoman's generators composition, to upgrade to Gulp 4 and to write it in ES6. Finally, I hope to open new horizons in terms of options: dependency management for sure, but also, why not Web frameworks (someone talked about React?) and also a backend.
Go try out [generator-gulp-angular](https://www.npmjs.com/package/generator-gulp-angular) v1.0.0 release! Any feedbacks, issues, or investment on the new [FountainJS](https://github.com/FountainJS) project will always be appreciated. [generator-gulp-angular-logo](https://www.npmjs.com/package/generator-gulp-angular){:rel="nofollow noreferrer"}
| 61.970149 | 411 | 0.76132 | eng_Latn | 0.990697 |
53c52bbaba8d0ea9adf1749c70e27135905dc425 | 432 | md | Markdown | docs/projects/jacdac/lcd-screen.md | jwunderl/pxt-maker | acd80fbbf841fea5b6fddd92118a20c1c0ff7f3a | [
"MIT"
] | null | null | null | docs/projects/jacdac/lcd-screen.md | jwunderl/pxt-maker | acd80fbbf841fea5b6fddd92118a20c1c0ff7f3a | [
"MIT"
] | null | null | null | docs/projects/jacdac/lcd-screen.md | jwunderl/pxt-maker | acd80fbbf841fea5b6fddd92118a20c1c0ff7f3a | [
"MIT"
] | null | null | null | # LCD screen
Displays text on an LCD screen.
```blocks
/**
JACDAC is still in early prototyping phase. The protocol and all hardware design are MOST LIKELY to change during this phase. You are welcome to join us in prototyping but we strongly recommend avoiding going to production with JACDAC at the current stage.
**/
jacdac.lcdService.start()
```
```package
jacdac
lcd
```
```config
feature=uf2
feature=jacdac
feature=lcd
``` | 19.636364 | 256 | 0.75463 | eng_Latn | 0.983593 |
53c53d63a6d332251b9c2a51a663b6711a87735a | 279 | md | Markdown | src/about.md | Motif-Software/motif-blog | 66ac640ef9473b2781da02fad8222313ab241bad | [
"Apache-2.0"
] | null | null | null | src/about.md | Motif-Software/motif-blog | 66ac640ef9473b2781da02fad8222313ab241bad | [
"Apache-2.0"
] | null | null | null | src/about.md | Motif-Software/motif-blog | 66ac640ef9473b2781da02fad8222313ab241bad | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: About
path: about
---
Motif is a moonshot to reinvent software development and bring invention to the masses. Read about [what’s going on here](/blog/intro/).
It’s the manifestation of the thoughts of [Oliver Lade](https://www.linkedin.com/in/oliverlade/).
| 27.9 | 132 | 0.74552 | eng_Latn | 0.974282 |
53c59fe04e17b24f04c672fd1420c1ce30a50025 | 2,252 | md | Markdown | src/docs/articles/developer/getting-started.md | trylock/viewer | b5c901c2df6a033c9455986b26b1427dfa9e877b | [
"MIT"
] | 1 | 2019-02-01T15:37:34.000Z | 2019-02-01T15:37:34.000Z | src/docs/articles/developer/getting-started.md | trylock/viewer | b5c901c2df6a033c9455986b26b1427dfa9e877b | [
"MIT"
] | 23 | 2018-04-15T19:24:36.000Z | 2019-02-01T18:46:03.000Z | src/docs/articles/developer/getting-started.md | trylock/viewer | b5c901c2df6a033c9455986b26b1427dfa9e877b | [
"MIT"
] | 1 | 2019-02-01T15:37:29.000Z | 2019-02-01T15:37:29.000Z | # Getting started
## Download
The project uses the [git](https://git-scm.com/) version control system and is hosted on [github](https://github.com/trylock/viewer). Download it to the current directory with the [git](https://git-scm.com/) command `git clone https://github.com/trylock/viewer.git`, or use your favourite GUI git client to clone the project. Alternatively, you can [download the zipped project](https://github.com/trylock/viewer/archive/master.zip) directly from github.
## Installation and build
1. Download and install [Visual Studio 2017](https://docs.microsoft.com/en-us/visualstudio/install/install-visual-studio?view=vs-2017) or newer with the .NET Framework 4.7
2. Download Java (version 1.6 or higher, required in order to run ANTLR4)
3. Open the project solution in Visual Studio `Viewer.sln`. It is located in the project root directory.
4. Build the solution (this should also install all necessary packages from NuGet)
### Building the installer
Optionally, if you want to build the project setup and installer, you'll need to follow these steps:
1. Download and install the [WiX v3 toolset](http://wixtoolset.org/releases/)
2. Download and install the [WiX Toolset extension for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=RobMensching.WixToolsetVisualStudio2017Extension)
3. Build the `ViewerSetup` project in Visual Studio
4. Build the `ViewerInstaller` project in Visual Studio
### Building documentation
## NuGet packages
This is a list of NuGet packages used by the project with a short explanation of why the package is used.
- `Antlr4.Runtime.Standard` for query compilation
- `DockPanelSuite` for UI customization
- `MetadataExtractor` for parsing Exif metadata in JPEG files
- `Moq` for mocking clases in the test project
- `MSTest` for running tests
- `NLog` for logging errors
- `Scintilla.NET` for the query editor component
- `SkiaSharp` for loading and resizing images (since GDI+'s `DrawImage` is basically useless in a multithreaded environment)
- `System.Data.SQLite.Core` for caching thumbnails and attributes
- Some standard library extensions installed through NuGet: `System.Collections.Immutable`, `System.ValueTuple`
See [Application structure overview](overview.md) next. | 56.3 | 442 | 0.783304 | eng_Latn | 0.953862 |
53c5b98c67b6881bbd83b88a26f7de9a32583330 | 6,031 | md | Markdown | content/en/2014-07-26-library-vs-require.md | malcook/yihui | b00fa553da14de84d07a73e779d3cbbe54a3d31f | [
"MIT"
] | null | null | null | content/en/2014-07-26-library-vs-require.md | malcook/yihui | b00fa553da14de84d07a73e779d3cbbe54a3d31f | [
"MIT"
] | null | null | null | content/en/2014-07-26-library-vs-require.md | malcook/yihui | b00fa553da14de84d07a73e779d3cbbe54a3d31f | [
"MIT"
] | 3 | 2018-05-31T01:04:57.000Z | 2018-07-13T00:43:16.000Z | ---
title: library() vs require() in R
date: '2014-07-26'
slug: library-vs-require
---
While I was sitting in a conference room at UseR! 2014, I started counting the number of times that `require()` was used in the presentations, and would rant about it after I counted to ten. With drums rolling, David won this little award (sorry, I did not really mean this to you).
{{< tweet 484476578416455680 >}}
After I tweeted about it, some useRs seemed to be unhappy and asked me why. Both `require()` and `library()` can load (strictly speaking, _attach_) an R package. Why should not one use `require()`? The answer is pretty simple. If you take a look at the source code of `require` (use the source, Luke, as Martin Mächler mentioned in his invited talk), you will see that `require()` basically means "_try_ to load the package using `library()` and return a logical value indicating the success or failure". In other words, `library()` loads a package, and `require()` tries to load a package. So when you want to load a package, do you load a package or try to load a package? It should be crystal clear.
One bad consequence of `require()` is that if you `require('foo')` in the beginning of an R script, and use a function `bar()` in the **foo** package on line 175, R will throw an error _object "bar" not found_ if **foo** was not installed. That is too late and sometimes difficult for other people to understand if they use your script but are not familiar with the **foo** package -- they may ask, what is the `bar` object, and where is it from? When your code is going to fail, fail loudly, early, and with a relevant error message. `require()` does not signal an error, and `library()` does.
Sometimes you do need `require()` to use a package conditionally (e.g. the sun is not going to explode without this package), in which case you may use an `if` statement, e.g.
```r
if (require('foo')) {
awesome_foo_function()
} else {
warning('You missed an awesome function')
}
```
That should be what `require()` was designed for, but it is common to see R code like this as well:
```r
if (!require('foo')) {
stop('The package foo was not installed')
}
```
Sigh.
- `library('foo')` stops when **foo** was not installed
- `require()` is basically `try(library())`
Then `if (!require('foo')) stop()` is basically "if you _failed_ to _try_ to _load_ this package, please _fail_". I do not quite understand why it is worth the detour, except when one wants a different error message from the one `library()` gives; otherwise one can simply load and fail.
There is one legitimate reason to use `require()`, though, and that is, "require is a verb and library is a noun!" I completely agree. `require` should have been a very nice name to choose for the purpose of loading a package, but unfortunately... you know.
If you take a look at the [StackOverflow question](http://stackoverflow.com/q/5595512/559676) on this, you will see a comment on "package vs library" was up-voted a lot of times. It used to make a lot of sense to me, but now I do not care as much as I did. There have been useRs (including me up to a certain point) desperately explaining the difference between the two terms _package_ and _library_, but somehow I think R's definition of a _library_ is indeed unusual, and the function `library()` makes the situation worse. Now I'm totally fine if anyone calls my packages "libraries", because I know what you mean.
Karthik Ram [suggested](https://twitter.com/_inundata/status/493481266365607936) this GIF to express "Ah a new _library_, but _require_? [Noooooo](http://nooooooooooooooo.com)":

Since you have read the source code, Luke, you may have found that you can abuse `require()` a bit, for example:
```r
> (require(c('MASS', 'nnet')))
c("Loading required package: c", "Loading required package: MASS",
"Loading required package: nnet")
Failed with error: ‘'package' must be of length 1’
In addition: Warning message:
In if (!loaded) { :
the condition has length > 1 and only the first element will be used
[1] FALSE
> (require(c('MASS', 'nnet'), character.only = TRUE))
c("Loading required package: MASS", "Loading required package: nnet")
Failed with error: ‘'package' must be of length 1’
In addition: Warning message:
In if (!loaded) { :
the condition has length > 1 and only the first element will be used
[1] FALSE
> library(c('MASS', 'nnet'), character.only = TRUE)
Error in library(c("MASS", "nnet"), character.only = TRUE) :
'package' must be of length 1
```
So `require()` failed not because **MASS** and **nnet** did not exist, but because of a different error. As long as there is an error (no matter what it is), `require()` returns `FALSE`.
One thing off-topic while I'm talking about these two functions: the argument `character.only = FALSE` for `library()` and `require()` is a design mistake in my eyes. It seems the original author(s) wanted to be lazy to avoid typing the quotes around the package name, so `library(foo)` works like `library("foo")`. Once you show people they can be lazy, you can never pull them back. Apparently, the editors of JSS (Journal of Statistical Software) have been trying to promote the form `library("foo")` and discourage `library(foo)`, but I do not think it makes much sense now or it will change anything. If it were in the 90's, I'd wholeheartedly support it. It is simply way too late now. Yes, two extra quotation marks will kill many kittens on this planet. If you are familiar with *nix commands, this idea is not new -- just think about `tar -z -x -f`, `tar -zxf`, and `tar zxf`.
One last mildly annoying issue with `require()` is that it is noisy by default, because of the default `quietly = FALSE`, e.g.
```r
> require('nnet')
Loading required package: nnet
> require('MASS', quietly = TRUE)
```
So when I tell you to load a package, you tell me you are loading a package, as if you had heard me. Oh thank you!

| 68.534091 | 885 | 0.729066 | eng_Latn | 0.999522 |
53c5e16a2cbc35e0ec7a078aede06a1d0b1b072e | 808 | md | Markdown | VBA/PowerPoint-VBA/articles/extracolors-item-method-powerpoint.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 584 | 2015-09-01T10:09:09.000Z | 2022-03-30T15:47:20.000Z | VBA/PowerPoint-VBA/articles/extracolors-item-method-powerpoint.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 585 | 2015-08-28T20:20:03.000Z | 2018-08-31T03:09:51.000Z | VBA/PowerPoint-VBA/articles/extracolors-item-method-powerpoint.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 590 | 2015-09-01T10:09:09.000Z | 2021-09-27T08:02:27.000Z | ---
title: ExtraColors.Item Method (PowerPoint)
keywords: vbapp10.chm529003
f1_keywords:
- vbapp10.chm529003
ms.prod: powerpoint
api_name:
- PowerPoint.ExtraColors.Item
ms.assetid: 213ced3f-fb6a-4447-e73f-1eeeb9f3cebb
ms.date: 06/08/2017
---
# ExtraColors.Item Method (PowerPoint)
Returns a single color from the specified **ExtraColors** collection.
## Syntax
_expression_. **Item**( **_Index_** )
_expression_ A variable that represents an **ExtraColors** object.
### Parameters
|**Name**|**Required/Optional**|**Data Type**|**Description**|
|:-----|:-----|:-----|:-----|
| _Index_|Required|**Long**|The index number of the single object in the collection to be returned.|
### Return Value
MsoRGBType
## See also
#### Concepts
[ExtraColors Object](extracolors-object-powerpoint.md)
| 17.191489 | 100 | 0.704208 | yue_Hant | 0.385598 |
53c6e5c771261d9d3d8f30cbf0c3fb089068a59e | 6,182 | md | Markdown | contrib/profiler/Example/Typical Example.md | luxius-luminus/pai | b9e16c78ab2d902d9c546e18e9a280d962d0eb94 | [
"MIT"
] | 1,417 | 2019-05-07T00:51:36.000Z | 2022-03-31T10:15:31.000Z | contrib/profiler/Example/Typical Example.md | luxius-luminus/pai | b9e16c78ab2d902d9c546e18e9a280d962d0eb94 | [
"MIT"
] | 2,447 | 2019-05-07T01:36:32.000Z | 2022-03-30T08:47:43.000Z | contrib/profiler/Example/Typical Example.md | luxius-luminus/pai | b9e16c78ab2d902d9c546e18e9a280d962d0eb94 | [
"MIT"
] | 329 | 2019-05-07T02:28:06.000Z | 2022-03-29T06:12:49.000Z | # Typical Examples
Here are several typical profiling results, with examples and solutions.
### The GPU memory is distributed unevenly between multiple GPUs.
- **Instance**
When using multiple GPU cards, you may notice that each card has a
different memory utilization, like the following:

- **Reason**
In a deep learning model, the GPU cards are used to train on the data,
compute the loss and gradients, and reduce the gradients to optimize
the model. When using multiple cards, each card runs the forward pass
and calculates the parameter gradients. The gradients are then sent to
a server which runs a reduce operation to compute averaged gradients.
The averaged gradients are transmitted back to each GPU card to update
the model parameters.
With a "one machine, multiple cards" setup, one of the GPU cards is
chosen to be the master GPU. The master GPU transmits the data to each
card, receives the output of each card, and then computes and
transmits the loss for each card. Finally, the master GPU receives the
gradients from all the cards, reduces them, and updates the model.
Here is the flow chart:

As the chart shows, the master card does extra work on top of the work
that every card does: loss calculation, gradient reduction, and the
parameter update. Because of this, more data is allocated to the
master card, so its memory utilization is higher than the others'.
- **How to Fix**
One solution is to use the Horovod API. Horovod can switch the model
from tree-reduce to ring-reduce with NCCL, which reduces the training
time and evens out the GPU utilization. The example code is as
follows:
```Python
import horovod.torch as hvd # Take pytorch for example
hvd.init()
torch.cuda.set_device(hvd.local_rank())
```
More information is available at
[horovod](https://github.com/horovod/horovod).
- **After Fixing**
After using the Horovod API, the GPU memory usage is uniform across
the cards.

### GPU utilization and GPU memory still have free resources.
- **Instance**
A deep learning model consumes GPU resources while it runs. If much of
that resource is left unused, it is wasted. Here is a typical example
of wasting hardware resources.


It shows that the GPU utilization stays at about 50% and the maximum
GPU memory usage is about 45%. There is still plenty of unused
capacity in the hardware.
- **Reason**
Free GPU memory means that one batch does not contain enough training
data; free GPU utilization means that the computational load of one
batch is not high enough.
- **How to Fix**
If the GPU memory is not used sufficiently, one solution is to
increase the data volume of each batch; more data uses more GPU
memory.
If the GPU utilization is low, one solution is kernel fusion, such as
XLA in TensorFlow. Kernel fusion reduces the cost of framework calls
and kernel launches, which cuts the waste and increases GPU
utilization.
- **After Fixing**
As in the example above, after increasing the batch size the hardware
information is as below:


It is obvious that the average GPU utilization increases, getting
close to 100%. The maximum GPU memory usage is also about 100%.
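One practical way to apply the batch-size advice in this section is to probe: keep doubling the batch size until an out-of-memory error occurs, then keep the last size that worked. The sketch below is framework-agnostic pure Python; `try_run` and the 8 GB / 0.5 GB-per-sample memory model are made-up stand-ins for a real training step on a real card.

```python
# Find the largest batch size that fits, by doubling then backing off.
# `try_run` stands in for one real training step; here it just checks
# a made-up memory model (0.5 GB per sample against an 8 GB card).

CAPACITY_GB = 8.0
PER_SAMPLE_GB = 0.5

def try_run(batch_size):
    if batch_size * PER_SAMPLE_GB > CAPACITY_GB:
        raise MemoryError("out of GPU memory")

def find_max_batch_size(start=1, limit=4096):
    best = 0
    bs = start
    while bs <= limit:
        try:
            try_run(bs)
            best = bs
            bs *= 2          # still fits: try a larger batch
        except MemoryError:
            break            # too big: keep the last size that worked
    return best

print(find_max_batch_size())  # 16 -> 16 * 0.5 GB uses the full 8 GB
```

With a real framework you would replace `try_run` with an actual forward/backward pass and catch the framework's out-of-memory exception instead of `MemoryError`.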
### GPU and CPU utilization rise alternately.
- **Instance**
In a running model, the CPU and the GPU perform different tasks. The
CPU loads the data from disk or memory and processes it into a form
the GPU can use; the GPU then trains on the processed data. With the
traditional approach, a typical phenomenon is that the CPU and GPU
utilization alternately rise to their maximum and fall to their
minimum, as shown below:

There are obvious peaks and valleys in the CPU and GPU usage.
- **Reason**
The alternating rise of GPU and CPU usage means the two pieces of
hardware work in turns. Normally, the CPU prepares the data and then
the GPU trains on it. The GPU is idle while the CPU is preparing the
next data, and the CPU is idle while the GPU is training. Running the
GPU and CPU in sequence like this makes the system inefficient.
- **How to Fix**
The best solution is to make the CPU and GPU run in parallel, for
example with an input pipeline. In PyTorch, the official `DataLoader`
class solves the problem: it prefetches, so the CPU pre-processes the
data for the next batch while the GPU is training on the current
batch. This mechanism efficiently removes the problem of the CPU and
the GPU idling alternately.
The example code is as follows:
```Python
import torch.utils.data.distributed
# your code
train_sampler = torch.utils.data.distributed.DistributedSampler(train_data)
train_batches = torch.utils.data.DataLoader(train_data, sampler=train_sampler)
```
For TensorFlow, the `Dataset` class can build the same kind of input
pipeline.
- **After Fixing**
As in the example above, after restructuring the code, the CPU and
GPU information is as below:

Most of the time the GPU usage is at a high level, close to 100%. The
CPU usage is also much higher than before, and the utilization
fluctuates less than before.
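The prefetching idea behind `DataLoader` can be sketched as a plain producer/consumer pipeline: a background thread (standing in for the CPU's data-preparation work) fills a bounded queue while the main thread (standing in for the GPU) consumes from it. This is an illustrative sketch, not how `DataLoader` is actually implemented.

```python
import queue
import threading

def producer(batches, q):
    """CPU side: preprocess batches ahead of time."""
    for b in batches:
        q.put(b * 2)         # stand-in for real preprocessing
    q.put(None)              # sentinel: no more data

def consume_all(batches, prefetch=2):
    """GPU side: train on each batch as soon as it is available."""
    q = queue.Queue(maxsize=prefetch)   # bounded queue caps memory use
    t = threading.Thread(target=producer, args=(batches, q))
    t.start()
    results = []
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item + 1)        # stand-in for a training step
    t.join()
    return results

print(consume_all([1, 2, 3]))  # [3, 5, 7]
```

Because the producer runs in its own thread, the "preprocessing" of batch *n+1* overlaps with the "training" of batch *n*, which is exactly the overlap that removes the alternating idle periods.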
53c745de6abae50da30cd6d713b885ee48a21158 | 433 | md | Markdown | guia/decisiones.md | vmmarr/go | c932dbea4327ba0a829087b11de3fd7017aa9485 | [
"BSD-3-Clause"
] | null | null | null | guia/decisiones.md | vmmarr/go | c932dbea4327ba0a829087b11de3fd7017aa9485 | [
"BSD-3-Clause"
] | 85 | 2020-03-05T18:57:35.000Z | 2021-01-12T20:09:22.000Z | guia/decisiones.md | vmmarr/go | c932dbea4327ba0a829087b11de3fd7017aa9485 | [
"BSD-3-Clause"
] | null | null | null | # Decisions adopted
#### Amazon Web Services S3
Amazon Web Services S3 is used to store the multimedia files attached to
posts and the users' profile images.
#### Plugins and Widgets
- fedemotta/yii2-aws-sdk
- kartik-v/yii2-datecontrol
- kartik-v/yii2-icons
- kartik-v/yii2-widget-fileinput
- kartik-v/yii2-widget-datetimepicker
- kartik-v/yii2-widget-datepicker
- kartik-v/yii2-password
| 27.0625 | 128 | 0.736721 | spa_Latn | 0.265697 |
53c782d63a40308d2806791f32b083fc027b6dec | 1,204 | md | Markdown | docs/resources/ruleset.md | iwarapter/terraform-provider-pingaccess | 7e04049231825fa8345c5aae1ffb1836791d3afb | [
"MIT"
] | 13 | 2019-04-27T15:38:04.000Z | 2021-09-15T18:15:49.000Z | docs/resources/ruleset.md | iwarapter/terraform-provider-pingaccess | 7e04049231825fa8345c5aae1ffb1836791d3afb | [
"MIT"
] | 101 | 2019-08-14T15:38:46.000Z | 2022-03-18T10:09:24.000Z | docs/resources/ruleset.md | iwarapter/terraform-provider-pingaccess | 7e04049231825fa8345c5aae1ffb1836791d3afb | [
"MIT"
] | 8 | 2019-04-17T13:11:12.000Z | 2021-11-27T20:24:16.000Z | ---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "pingaccess_ruleset Resource - terraform-provider-pingaccess"
subcategory: ""
description: |-
Provides configuration for Rulesets within PingAccess.
---
# pingaccess_ruleset (Resource)
Provides configuration for Rulesets within PingAccess.
## Example Usage
```terraform
resource "pingaccess_ruleset" "example" {
name = "example"
success_criteria = "SuccessIfAllSucceed"
element_type = "Rule"
policy = [
pingaccess_rule.example_1.id,
pingaccess_rule.example_2.id,
]
}
```
<!-- schema generated by tfplugindocs -->
## Schema
### Required
- **element_type** (String) The rule set's element type (what it contains). Can be either `Rule` or `Ruleset`.
- **name** (String) The rule set's name.
- **policy** (Set of String) The list of policy ids assigned to the rule set.
- **success_criteria** (String) The rule set's success criteria. Can be either `SuccessIfAllSucceed` or `SuccessIfAnyOneSucceeds`.
### Read-Only
- **id** (String) The ID of this resource.
## Import
Import is supported using the following syntax:
```shell
terraform import pingaccess_ruleset.example 123
```
| 24.571429 | 130 | 0.7201 | eng_Latn | 0.76126 |
53c8046ee04bfaa166e0b57d3fff9ade1d87f662 | 29,574 | md | Markdown | desktop-src/printdocs/xps-document-errors.md | npherson/win32 | 28da414b56bb3e56e128bf7e0db021bad5343d2d | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-04-24T13:02:42.000Z | 2021-07-17T15:32:03.000Z | desktop-src/printdocs/xps-document-errors.md | npherson/win32 | 28da414b56bb3e56e128bf7e0db021bad5343d2d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/printdocs/xps-document-errors.md | npherson/win32 | 28da414b56bb3e56e128bf7e0db021bad5343d2d | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-09T23:50:05.000Z | 2022-03-09T23:50:05.000Z | ---
Description: The following table lists all the HRESULT values that can be returned by the methods of the XPS Document API.
ms.assetid: 9e6db1e3-7151-4538-8607-b7185ebc0110
title: XPS Document Errors
ms.topic: article
ms.date: 05/31/2018
---
# XPS Document Errors
The following table lists all the **HRESULT** values that can be returned by the methods of the XPS Document API. Note that not every method returns every return value that is listed in this table.
<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="header">
<th>Return code/value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><span id="XPS_E_ALREADY_OWNED"></span><span id="xps_e_already_owned"></span><dl> <dt><strong>XPS_E_ALREADY_OWNED</strong></dt> <dt>0x80520503</dt> </dl></td>
<td>The interface already has an owner.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_BLEED_BOX_PAGE_DIMENSIONS_NOT_IN_SYNC"></span><span id="xps_e_bleed_box_page_dimensions_not_in_sync"></span><dl> <dt><strong>XPS_E_BLEED_BOX_PAGE_DIMENSIONS_NOT_IN_SYNC</strong></dt> <dt>0x80520509</dt> </dl></td>
<td>The bleed box dimensions are not compatible with the page dimensions.<br/> The bleed box width value must be greater than or equal to the page width plus the absolute value of the x-coordinate of the bleed box origin. The bleed box height value must be greater than or equal to the page height plus the absolute value of the y-coordinate of the bleed box origin. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_BOTH_PATHFIGURE_AND_ABBR_SYNTAX_PRESENT"></span><span id="xps_e_both_pathfigure_and_abbr_syntax_present"></span><dl> <dt><strong>XPS_E_BOTH_PATHFIGURE_AND_ABBR_SYNTAX_PRESENT</strong></dt> <dt>0x80520507</dt> </dl></td>
<td>A <strong>PathGeometry</strong> element contains a set of path figures that are specified either with the <strong>Figures</strong> attribute or with a child <strong>PathFigure</strong> element. The path figures of a geometry cannot have both the <strong>Figures</strong> attribute and a child <strong>PathFigure</strong> element. <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_BOTH_RESOURCE_AND_SOURCEATTR_PRESENT"></span><span id="xps_e_both_resource_and_sourceattr_present"></span><dl> <dt><strong>XPS_E_BOTH_RESOURCE_AND_SOURCEATTR_PRESENT</strong></dt> <dt>0x80520508</dt> </dl></td>
<td>A <strong>ResourceDictionary</strong> element that specifies a remote resource dictionary in its <strong>Source</strong> attribute MUST NOT contain any resource definition children.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_CARET_OUT_OF_ORDER"></span><span id="xps_e_caret_out_of_order"></span><dl> <dt><strong>XPS_E_CARET_OUT_OF_ORDER</strong></dt> <dt>0x80520306</dt> </dl></td>
<td>A caret location value is out of order. The location values must be sorted in ascending order. <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_CARET_OUTSIDE_STRING"></span><span id="xps_e_caret_outside_string"></span><dl> <dt><strong>XPS_E_CARET_OUTSIDE_STRING</strong></dt> <dt>0x80520305</dt> </dl></td>
<td>Caret stops were specified for an empty string; or, the caret jump index has exceeded the length of the Unicode string. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_COLOR_COMPONENT_OUT_OF_RANGE"></span><span id="xps_e_color_component_out_of_range"></span><dl> <dt><strong>XPS_E_COLOR_COMPONENT_OUT_OF_RANGE</strong></dt> <dt>0x80520506</dt> </dl></td>
<td>A color value is out of range.<br/> For <a href="/windows/desktop/api/xpsobjectmodel/ne-xpsobjectmodel-__midl___midl_itf_xpsobjectmodel_0000_0000_0009"><strong>XPS_COLOR_TYPE_SCRGB</strong></a> color types, the alpha channel value must be greater than or equal to 0.0 and less than or equal to +1.0.<br/> For <a href="/windows/desktop/api/xpsobjectmodel/ne-xpsobjectmodel-__midl___midl_itf_xpsobjectmodel_0000_0000_0009"><strong>XPS_COLOR_TYPE_CONTEXT</strong></a> color types, the <strong>channelValues[0]</strong> that represents the alpha channel value must be greater than or equal to 0.0 and less than or equal to +1.0. <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_DICTIONARY_ITEM_NAMED"></span><span id="xps_e_dictionary_item_named"></span><dl> <dt><strong>XPS_E_DICTIONARY_ITEM_NAMED</strong></dt> <dt>0x80520401</dt> </dl></td>
<td>A visual in a resource dictionary has the <strong>Name</strong> attribute, which may not be specified on any children of a <strong>ResourceDictionary</strong> element.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_DUPLICATE_NAMES"></span><span id="xps_e_duplicate_names"></span><dl> <dt><strong>XPS_E_DUPLICATE_NAMES</strong></dt> <dt>0x80520209</dt> </dl></td>
<td>An object with this name already exists in the dictionary.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_DUPLICATE_RESOURCE_KEYS"></span><span id="xps_e_duplicate_resource_keys"></span><dl> <dt><strong>XPS_E_DUPLICATE_RESOURCE_KEYS</strong></dt> <dt>0x80520200</dt> </dl></td>
<td>An object with this key name already exists in the dictionary.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INDEX_OUT_OF_RANGE"></span><span id="xps_e_index_out_of_range"></span><dl> <dt><strong>XPS_E_INDEX_OUT_OF_RANGE</strong></dt> <dt>0x80520500</dt> </dl></td>
<td>Reserved.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_INVALID_BLEED_BOX"></span><span id="xps_e_invalid_bleed_box"></span><dl> <dt><strong>XPS_E_INVALID_BLEED_BOX</strong></dt> <dt>0x80520004</dt> </dl></td>
<td>The bleed box rectangle contains one or more values that are not valid. See the parameter description for the valid values. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INVALID_CONTENT_BOX"></span><span id="xps_e_invalid_content_box"></span><dl> <dt><strong>XPS_E_INVALID_CONTENT_BOX</strong></dt> <dt>0x8052000b</dt> </dl></td>
<td>The content box rectangle contains one or more values that are not valid. See the parameter description for the valid values. <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_INVALID_CONTENT_TYPE"></span><span id="xps_e_invalid_content_type"></span><dl> <dt><strong>XPS_E_INVALID_CONTENT_TYPE</strong></dt> <dt>0x8052000e</dt> </dl></td>
<td>The content type string is not valid.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INVALID_FLOAT"></span><span id="xps_e_invalid_float"></span><dl> <dt><strong>XPS_E_INVALID_FLOAT</strong></dt> <dt>0x80520007</dt> </dl></td>
<td>A <strong>FLOAT</strong> value is not valid. It is either an infinite or not a number (NAN).<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_INVALID_FONT_URI"></span><span id="xps_e_invalid_font_uri"></span><dl> <dt><strong>XPS_E_INVALID_FONT_URI</strong></dt> <dt>0x8052000a</dt> </dl></td>
<td>The font URI is not valid, possibly because it contains an empty fragment or characters that are not valid.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INVALID_LANGUAGE"></span><span id="xps_e_invalid_language"></span><dl> <dt><strong>XPS_E_INVALID_LANGUAGE</strong></dt> <dt>0x80520000</dt> </dl></td>
<td>The specified language is either not valid or not correctly formatted.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_INVALID_LOOKUP_TYPE"></span><span id="xps_e_invalid_lookup_type"></span><dl> <dt><strong>XPS_E_INVALID_LOOKUP_TYPE</strong></dt> <dt>0x80520006</dt> </dl></td>
<td>The lookup key name references an object that is not the correct type for the call; for example, if the method returns a brush but the lookup key name refers to a geometry object.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INVALID_MARKUP"></span><span id="xps_e_invalid_markup"></span><dl> <dt><strong>XPS_E_INVALID_MARKUP</strong></dt> <dt>0x8052000c</dt> </dl></td>
<td>The markup being read contains an element or an attribute that does not conform to the <a href="https://go.microsoft.com/?linkid=8435939">XML Paper Specification</a>.<br/>
<blockquote>
[!Note]<br />
To represent floating-point values, the XPS OM uses the <strong>FLOAT</strong> data type instead of <strong>DOUBLE</strong>. If an XPS document has an element with floating-point data that does not fit into a <strong>FLOAT</strong> value, this error will be returned when that value is encountered during deserialization.
</blockquote>
<br/> <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_INVALID_NAME"></span><span id="xps_e_invalid_name"></span><dl> <dt><strong>XPS_E_INVALID_NAME</strong></dt> <dt>0x80520001</dt> </dl></td>
<td>The string that was passed is not a valid name, according to the XML Paper Specification. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INVALID_OBFUSCATED_FONT_URI"></span><span id="xps_e_invalid_obfuscated_font_uri"></span><dl> <dt><strong>XPS_E_INVALID_OBFUSCATED_FONT_URI</strong></dt> <dt>0x8052000f</dt> </dl></td>
<td>Reserved.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_INVALID_PAGE_SIZE"></span><span id="xps_e_invalid_page_size"></span><dl> <dt><strong>XPS_E_INVALID_PAGE_SIZE</strong></dt> <dt>0x80520003</dt> </dl></td>
<td>The page dimensions contain a page size value that is not valid. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INVALID_RESOURCE_KEY"></span><span id="xps_e_invalid_resource_key"></span><dl> <dt><strong>XPS_E_INVALID_RESOURCE_KEY</strong></dt> <dt>0x80520002</dt> </dl></td>
<td>According to the <a href="https://go.microsoft.com/?linkid=8435939">XML Paper Specification</a>, the lookup key string is not valid.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_INVALID_THUMBNAIL_IMAGE_TYPE"></span><span id="xps_e_invalid_thumbnail_image_type"></span><dl> <dt><strong>XPS_E_INVALID_THUMBNAIL_IMAGE_TYPE</strong></dt> <dt>0x80520005</dt> </dl></td>
<td>The thumbnail image type is not supported.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_INVALID_XML_ENCODING"></span><span id="xps_e_invalid_xml_encoding"></span><dl> <dt><strong>XPS_E_INVALID_XML_ENCODING</strong></dt> <dt>0x8052000d</dt> </dl></td>
<td>Found improper or incorrectly formatted XML markup.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MAPPING_OUT_OF_ORDER"></span><span id="xps_e_mapping_out_of_order"></span><dl> <dt><strong>XPS_E_MAPPING_OUT_OF_ORDER</strong></dt> <dt>0x80520302</dt> </dl></td>
<td>In one or more <a href="/windows/desktop/api/xpsobjectmodel/ns-xpsobjectmodel-__midl___midl_itf_xpsobjectmodel_0000_0000_0022"><strong>XPS_GLYPH_MAPPING</strong></a> structures, an element is out of sequence. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MAPPING_OUTSIDE_INDICES"></span><span id="xps_e_mapping_outside_indices"></span><dl> <dt><strong>XPS_E_MAPPING_OUTSIDE_INDICES</strong></dt> <dt>0x80520304</dt> </dl></td>
<td>The glyph mappings exceed the number of glyph indices.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MAPPING_OUTSIDE_STRING"></span><span id="xps_e_mapping_outside_string"></span><dl> <dt><strong>XPS_E_MAPPING_OUTSIDE_STRING</strong></dt> <dt>0x80520303</dt> </dl></td>
<td>Error in the glyph mappings.<br/> If the Unicode string is empty, this error means that a glyph mapping was also defined. Glyph mappings must not be defined if the Unicode string is empty.<br/> If the Unicode string is not empty, this error means that a glyph mapping was defined for glyphs outside of the Unicode string. Glyph mappings cannot be defined for glyphs that fall outside the length of the Unicode string.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_COLORPROFILE"></span><span id="xps_e_missing_colorprofile"></span><dl> <dt><strong>XPS_E_MISSING_COLORPROFILE</strong></dt> <dt>0x80520104</dt> </dl></td>
<td>The color profile parameter is <strong>NULL</strong>, but a color profile is expected. A color profile is required when the color type is XPS_COLOR_TYPE_CONTEXT. <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_DISCARDCONTROL"></span><span id="xps_e_missing_discardcontrol"></span><dl> <dt><strong>XPS_E_MISSING_DISCARDCONTROL</strong></dt> <dt>0x80520112</dt> </dl></td>
<td>A page refers to discardable resources but does not specify a DiscardControl part name.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_DOCUMENT"></span><span id="xps_e_missing_document"></span><dl> <dt><strong>XPS_E_MISSING_DOCUMENT</strong></dt> <dt>0x80520109</dt> </dl></td>
<td><a href="/windows/desktop/api/xpsobjectmodel/nf-xpsobjectmodel-ixpsompackagewriter-addpage"><strong>IXpsOMPackageWriter::AddPage</strong></a> was called before <a href="/windows/desktop/api/xpsobjectmodel/nf-xpsobjectmodel-ixpsompackagewriter-startnewdocument"><strong>IXpsOMPackageWriter::StartNewDocument</strong></a>.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_DOCUMENTSEQUENCE_RELATIONSHIP"></span><span id="xps_e_missing_documentsequence_relationship"></span><dl> <dt><strong>XPS_E_MISSING_DOCUMENTSEQUENCE_RELATIONSHIP</strong></dt> <dt>0x80520108</dt> </dl></td>
<td>The package does not contain a FixedDocumentSequence.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_FONTURI"></span><span id="xps_e_missing_fonturi"></span><dl> <dt><strong>XPS_E_MISSING_FONTURI</strong></dt> <dt>0x80520107</dt> </dl></td>
<td>The <a href="/windows/desktop/api/xpsobjectmodel/nn-xpsobjectmodel-ixpsomglyphs"><strong>IXpsOMGlyphs</strong></a> interface requires a font URI, but one is not specified.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_GLYPHS"></span><span id="xps_e_missing_glyphs"></span><dl> <dt><strong>XPS_E_MISSING_GLYPHS</strong></dt> <dt>0x80520102</dt> </dl></td>
<td>The <a href="/windows/desktop/api/xpsobjectmodel/nn-xpsobjectmodel-ixpsomglyphs"><strong>IXpsOMGlyphs</strong></a> interface without a Unicode string does not specify any glyph indices. An <strong>IXpsOMGlyphs</strong> interface must specify either a Unicode string or an array of glyph indices.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_IMAGE_IN_IMAGEBRUSH"></span><span id="xps_e_missing_image_in_imagebrush"></span><dl> <dt><strong>XPS_E_MISSING_IMAGE_IN_IMAGEBRUSH</strong></dt> <dt>0x8052010e</dt> </dl></td>
<td>An image resource could not be located for the image brush.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_LOOKUP"></span><span id="xps_e_missing_lookup"></span><dl> <dt><strong>XPS_E_MISSING_LOOKUP</strong></dt> <dt>0x80520101</dt> </dl></td>
<td>The remote resource has an unexpected object.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_NAME"></span><span id="xps_e_missing_name"></span><dl> <dt><strong>XPS_E_MISSING_NAME</strong></dt> <dt>0x80520100</dt> </dl></td>
<td>The page has not been named; the hyperlink target status can only be set if the page has a name.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_PAGE_IN_DOCUMENT"></span><span id="xps_e_missing_page_in_document"></span><dl> <dt><strong>XPS_E_MISSING_PAGE_IN_DOCUMENT</strong></dt> <dt>0x8052010c</dt> </dl></td>
<td>The FixedDocument does not contain any FixedPage parts. An XPS document must contain at least one FixedPage part.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_PAGE_IN_PAGEREFERENCE"></span><span id="xps_e_missing_page_in_pagereference"></span><dl> <dt><strong>XPS_E_MISSING_PAGE_IN_PAGEREFERENCE</strong></dt> <dt>0x8052010d</dt> </dl></td>
<td>The page reference does not have a corresponding page.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_PART_REFERENCE"></span><span id="xps_e_missing_part_reference"></span><dl> <dt><strong>XPS_E_MISSING_PART_REFERENCE</strong></dt> <dt>0x80520110</dt> </dl></td>
<td>A required target part was not referenced.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_PART_STREAM"></span><span id="xps_e_missing_part_stream"></span><dl> <dt><strong>XPS_E_MISSING_PART_STREAM</strong></dt> <dt>0x80520113</dt> </dl></td>
<td>A stream was not specified for the resource.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_REFERRED_DOCUMENT"></span><span id="xps_e_missing_referred_document"></span><dl> <dt><strong>XPS_E_MISSING_REFERRED_DOCUMENT</strong></dt> <dt>0x8052010a</dt> </dl></td>
<td>The FixedDocument part that is referenced by the FixedDocumentSequence could not be found. An XPS document must contain at least one FixedDocument.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_REFERRED_PAGE"></span><span id="xps_e_missing_referred_page"></span><dl> <dt><strong>XPS_E_MISSING_REFERRED_PAGE</strong></dt> <dt>0x8052010b</dt> </dl></td>
<td>The FixedPage part that is referenced by the FixedDocument could not be found. An XPS document must contain at least one FixedPage part.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_RELATIONSHIP_TARGET"></span><span id="xps_e_missing_relationship_target"></span><dl> <dt><strong>XPS_E_MISSING_RELATIONSHIP_TARGET</strong></dt> <dt>0x80520105</dt> </dl></td>
<td>The relationship target part is not present in the package relationship.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_RESOURCE_KEY"></span><span id="xps_e_missing_resource_key"></span><dl> <dt><strong>XPS_E_MISSING_RESOURCE_KEY</strong></dt> <dt>0x8052010f</dt> </dl></td>
<td>No <strong>x:Key</strong> attribute was specified for the resource.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_RESOURCE_RELATIONSHIP"></span><span id="xps_e_missing_resource_relationship"></span><dl> <dt><strong>XPS_E_MISSING_RESOURCE_RELATIONSHIP</strong></dt> <dt>0x80520106</dt> </dl></td>
<td>The resource referred to by the page or remote dictionary content does not exist as a page relationship.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MISSING_RESTRICTED_FONT_RELATIONSHIP"></span><span id="xps_e_missing_restricted_font_relationship"></span><dl> <dt><strong>XPS_E_MISSING_RESTRICTED_FONT_RELATIONSHIP</strong></dt> <dt>0x80520111</dt> </dl></td>
<td>The referenced restricted font was not specified in the call to <a href="/windows/desktop/api/xpsobjectmodel/nf-xpsobjectmodel-ixpsompackagewriter-startnewdocument"><strong>IXpsOMPackageWriter::StartNewDocument</strong></a>.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MISSING_SEGMENT_DATA"></span><span id="xps_e_missing_segment_data"></span><dl> <dt><strong>XPS_E_MISSING_SEGMENT_DATA</strong></dt> <dt>0x80520103</dt> </dl></td>
<td>The segment data array has fewer entries than the segment types array. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MULTIPLE_DOCUMENTSEQUENCE_RELATIONSHIPS"></span><span id="xps_e_multiple_documentsequence_relationships"></span><dl> <dt><strong>XPS_E_MULTIPLE_DOCUMENTSEQUENCE_RELATIONSHIPS</strong></dt> <dt>0x80520202</dt> </dl></td>
<td>An attempt was made to add a FixedDocumentSequence to a package that already has one. An XPS document must contain one and only one FixedDocumentSequence part.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MULTIPLE_PRINTTICKETS_ON_DOCUMENT"></span><span id="xps_e_multiple_printtickets_on_document"></span><dl> <dt><strong>XPS_E_MULTIPLE_PRINTTICKETS_ON_DOCUMENT</strong></dt> <dt>0x80520206</dt> </dl></td>
<td>An attempt was made to add a document-level print ticket to a FixedDocument that already has one. A FixedDocument in an XPS document can contain only one document-level print ticket.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MULTIPLE_PRINTTICKETS_ON_DOCUMENTSEQUENCE"></span><span id="xps_e_multiple_printtickets_on_documentsequence"></span><dl> <dt><strong>XPS_E_MULTIPLE_PRINTTICKETS_ON_DOCUMENTSEQUENCE</strong></dt> <dt>0x80520207</dt> </dl></td>
<td>An attempt was made to add a job-level print ticket to a FixedDocumentSequence that already has one. The FixedDocumentSequence in an XPS document can contain only one job-level print ticket.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MULTIPLE_PRINTTICKETS_ON_PAGE"></span><span id="xps_e_multiple_printtickets_on_page"></span><dl> <dt><strong>XPS_E_MULTIPLE_PRINTTICKETS_ON_PAGE</strong></dt> <dt>0x80520205</dt> </dl></td>
<td>An attempt was made to add a page-level print ticket to a FixedPage that already has one. A FixedPage in an XPS document can contain only one page-level print ticket.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MULTIPLE_REFERENCES_TO_PART"></span><span id="xps_e_multiple_references_to_part"></span><dl> <dt><strong>XPS_E_MULTIPLE_REFERENCES_TO_PART</strong></dt> <dt>0x80520208</dt> </dl></td>
<td>The restricted font collection contained a restricted font entry that was repeated. Each font entry can occur in the collection only once.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MULTIPLE_RESOURCES"></span><span id="xps_e_multiple_resources"></span><dl> <dt><strong>XPS_E_MULTIPLE_RESOURCES</strong></dt> <dt>0x80520201</dt> </dl></td>
<td>A resource by that part name already exists.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_MULTIPLE_THUMBNAILS_ON_PACKAGE"></span><span id="xps_e_multiple_thumbnails_on_package"></span><dl> <dt><strong>XPS_E_MULTIPLE_THUMBNAILS_ON_PACKAGE</strong></dt> <dt>0x80520204</dt> </dl></td>
<td>An attempt was made to add a thumbnail image to a package that already has one. An XPS document can contain only one package-level thumbnail image.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_MULTIPLE_THUMBNAILS_ON_PAGE"></span><span id="xps_e_multiple_thumbnails_on_page"></span><dl> <dt><strong>XPS_E_MULTIPLE_THUMBNAILS_ON_PAGE</strong></dt> <dt>0x80520203</dt> </dl></td>
<td>An attempt was made to add a page-level thumbnail image to a FixedPage that already has one. A FixedPage in an XPS document can contain only one page-level thumbnail image.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_NEGATIVE_FLOAT"></span><span id="xps_e_negative_float"></span><dl> <dt><strong>XPS_E_NEGATIVE_FLOAT</strong></dt> <dt>0x8052030a</dt> </dl></td>
<td>An entry contains a negative value, but it must contain a non-negative value. <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_NESTED_REMOTE_DICTIONARY"></span><span id="xps_e_nested_remote_dictionary"></span><dl> <dt><strong>XPS_E_NESTED_REMOTE_DICTIONARY</strong></dt> <dt>0x80520402</dt> </dl></td>
<td>An attempt was made to add a remote dictionary reference to a remote dictionary. A remote dictionary cannot reference another remote dictionary.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_NO_CUSTOM_OBJECTS"></span><span id="xps_e_no_custom_objects"></span><dl> <dt><strong>XPS_E_NO_CUSTOM_OBJECTS</strong></dt> <dt>0x80520502</dt> </dl></td>
<td>An interface pointer does not point to a recognized interface implementation. Custom implementation of XPS Document API interfaces is not supported.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_NOT_ENOUGH_GRADIENT_STOPS"></span><span id="xps_e_not_enough_gradient_stops"></span><dl> <dt><strong>XPS_E_NOT_ENOUGH_GRADIENT_STOPS</strong></dt> <dt>0x8052050b</dt> </dl></td>
<td>The gradient stop collection has fewer than two stops. A gradient stop collection must have at least two gradient stops.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_ODD_BIDILEVEL"></span><span id="xps_e_odd_bidilevel"></span><dl> <dt><strong>XPS_E_ODD_BIDILEVEL</strong></dt> <dt>0x80520307</dt> </dl></td>
<td>The text string was specified as being oriented sideways and right-to-left. If the text is oriented sideways, it cannot have a bidi level that is an odd value (right-to-left). Likewise, if the bidi level is an odd value, the text cannot be oriented sideways.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_ONE_TO_ONE_MAPPING_EXPECTED"></span><span id="xps_e_one_to_one_mapping_expected"></span><dl> <dt><strong>XPS_E_ONE_TO_ONE_MAPPING_EXPECTED</strong></dt> <dt>0x80520308</dt> </dl></td>
<td>The glyph mappings do not match the Unicode string contents.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_PACKAGE_WRITER_NOT_CLOSED"></span><span id="xps_e_package_writer_not_closed"></span><dl> <dt><strong>XPS_E_PACKAGE_WRITER_NOT_CLOSED</strong></dt> <dt>0x8052050c</dt> </dl></td>
<td>The package writer was not closed before it was released.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_RELATIONSHIP_EXTERNAL"></span><span id="xps_e_relationship_external"></span><dl> <dt><strong>XPS_E_RELATIONSHIP_EXTERNAL</strong></dt> <dt>0x8052050a</dt> </dl></td>
<td>A relationship refers to a part that is outside of the XPS document. All content to be rendered in an XPS document must be contained in the XPS document.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_RESOURCE_NOT_OWNED"></span><span id="xps_e_resource_not_owned"></span><dl> <dt><strong>XPS_E_RESOURCE_NOT_OWNED</strong></dt> <dt>0x80520504</dt> </dl></td>
<td>Reserved.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_RESTRICTED_FONT_NOT_OBFUSCATED"></span><span id="xps_e_restricted_font_not_obfuscated"></span><dl> <dt><strong>XPS_E_RESTRICTED_FONT_NOT_OBFUSCATED</strong></dt> <dt>0x80520309</dt> </dl></td>
<td><em>Reserved</em>.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_STRING_TOO_LONG"></span><span id="xps_e_string_too_long"></span><dl> <dt><strong>XPS_E_STRING_TOO_LONG</strong></dt> <dt>0x80520300</dt> </dl></td>
<td>A <strong>size_t</strong> overflow occurred during an attempt to copy a string into a new buffer.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_TOO_MANY_INDICES"></span><span id="xps_e_too_many_indices"></span><dl> <dt><strong>XPS_E_TOO_MANY_INDICES</strong></dt> <dt>0x80520301</dt> </dl></td>
<td>There were more glyph indices than Unicode code points. If there are no glyph mappings, the number of glyph indices must be less than or equal to the number of Unicode code points.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_UNAVAILABLE_PACKAGE"></span><span id="xps_e_unavailable_package"></span><dl> <dt><strong>XPS_E_UNAVAILABLE_PACKAGE</strong></dt> <dt>0x80520114</dt> </dl></td>
<td>A severe error occurred and the contents of the XPS OM might be unrecoverable. Some components of the XPS OM might still be usable, but they will need to be verified before being used further. Because the state of the XPS OM cannot be predicted after this error is returned, all components of the XPS OM should be released and discarded.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_UNEXPECTED_COLORPROFILE"></span><span id="xps_e_unexpected_colorprofile"></span><dl> <dt><strong>XPS_E_UNEXPECTED_COLORPROFILE</strong></dt> <dt>0x80520505</dt> </dl></td>
<td>A color profile was present when one was not expected. A color profile is only allowed when the color type is <a href="/windows/desktop/api/xpsobjectmodel/ne-xpsobjectmodel-__midl___midl_itf_xpsobjectmodel_0000_0000_0009"><strong>XPS_COLOR_TYPE_CONTEXT</strong></a>. <br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_UNEXPECTED_CONTENT_TYPE"></span><span id="xps_e_unexpected_content_type"></span><dl> <dt><strong>XPS_E_UNEXPECTED_CONTENT_TYPE</strong></dt> <dt>0x80520008</dt> </dl></td>
<td>The target of a relationship is not the type expected by the context of the relationship. <br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_UNEXPECTED_RELATIONSHIP_TYPE"></span><span id="xps_e_unexpected_relationship_type"></span><dl> <dt><strong>XPS_E_UNEXPECTED_RELATIONSHIP_TYPE</strong></dt> <dt>0x80520010</dt> </dl></td>
<td>The relationship type was not recognized.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_UNEXPECTED_RESTRICTED_FONT_RELATIONSHIP"></span><span id="xps_e_unexpected_restricted_font_relationship"></span><dl> <dt><strong>XPS_E_UNEXPECTED_RESTRICTED_FONT_RELATIONSHIP</strong></dt> <dt>0x80520011</dt> </dl></td>
<td>The restricted font collection contains an unrestricted font.<br/></td>
</tr>
<tr class="even">
<td><span id="XPS_E_VISUAL_CIRCULAR_REF"></span><span id="xps_e_visual_circular_ref"></span><dl> <dt><strong>XPS_E_VISUAL_CIRCULAR_REF</strong></dt> <dt>0x80520501</dt> </dl></td>
<td>Reserved.<br/></td>
</tr>
<tr class="odd">
<td><span id="XPS_E_XKEY_ATTR_PRESENT_OUTSIDE_RES_DICT"></span><span id="xps_e_xkey_attr_present_outside_res_dict"></span><dl> <dt><strong>XPS_E_XKEY_ATTR_PRESENT_OUTSIDE_RES_DICT</strong></dt> <dt>0x80520400</dt> </dl></td>
<td>A path geometry that is not in a resource dictionary has an <strong>x:Key</strong> attribute specified. Path geometries that are not in a resource dictionary cannot have an <strong>x:Key</strong> attribute.<br/></td>
</tr>
</tbody>
</table>
## Remarks
Some XPS document API methods make calls to the [Packaging](https://docs.microsoft.com/previous-versions/windows/desktop/opc/packaging) API. For information about the Packaging API return values, see [Packaging Errors](https://docs.microsoft.com/previous-versions/windows/desktop/opc/packaging-errors).
## Requirements
| | |
|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|
| Minimum supported client<br/> | Windows 7, Windows Vista with SP2 and Platform Update for Windows Vista \[desktop apps only\]<br/> |
| Minimum supported server<br/> | Windows Server 2008 R2, Windows Server 2008 with SP2 and Platform Update for Windows Server 2008 \[desktop apps only\]<br/> |
| Header<br/> | <dl> <dt>Xpsobjectmodel.h</dt> </dl> |
| IDL<br/> | <dl> <dt>XpsObjectModel.idl</dt> </dl> |
## See also
<dl> <dt>
[Error Handling in COM](https://msdn.microsoft.com/en-us/library/ms679692(v=VS.85).aspx)
</dt> </dl>
---
title: 'Whitley with HW Group'
date: 2012-12-06 11:53:06
draft: false
description: ''
tags: ['Running']
author: 'Andrew'
---
Last night I went running with the work running group and we did Whitley. Was pretty cold but good to do a route that didn't involve Caversham, Prospect Park or the river. 5.9 miles in 55:56.
---
title: Upgrade the YugabyteDB software
headerTitle: Upgrade the YugabyteDB software
linkTitle: Upgrade the YugabyteDB software
description: Use Yugabyte Platform to upgrade the YugabyteDB software.
menu:
v2.6:
identifier: upgrade-software
parent: manage-deployments
weight: 80
isTocNested: true
showAsideToc: true
---
The YugabyteDB release that is powering a universe can be upgraded to get the new features and fixes in a release.
A rolling upgrade can be performed on a live universe deployment by following these steps:
1. Go to the **Universe Detail** page.
2. From the **More** drop-down list, select **Upgrade Software**.
3. In the confirmation dialog, select the new YugabyteDB version from the drop-down list. The Yugabyte Platform console will upgrade the universe in a rolling manner.


# rebyn.github.io
This repo powers [blog.nogias.com](http://blog.nogias.com)
# rJSmin - A Javascript Minifier For Python
TABLE OF CONTENTS
-----------------
1. Introduction
1. Copyright and License
1. System Requirements
1. Installation
1. Documentation
1. Bugs
1. Author Information
## INTRODUCTION
rJSmin is a JavaScript minifier written in Python.
The minifier is based on the semantics of [jsmin.c by Douglas
Crockford](http://www.crockford.com/javascript/jsmin.c).
The module is a re-implementation aiming for speed, so it can be used at
runtime (rather than during a preprocessing step). Usually it produces the
same results as the original ``jsmin.c``. It differs in the following ways:
- there is no error detection: unterminated string, regex and comment
literals are treated as regular javascript code and minified as such.
- Control characters inside string and regex literals are left untouched; they
are not converted to spaces (nor to \\n)
- Newline characters are not allowed inside string and regex literals, except
for line continuations in string literals (ECMA-5).
- "return /regex/" is recognized correctly.
- More characters are allowed before regexes.
- Line terminators after regex literals are handled more sensibly
- "+ +" and "- -" sequences are not collapsed to '++' or '--'
- Newlines before ! operators are removed more sensibly
- (Unnested) template literals are supported (ECMA-6)
- Comments starting with an exclamation mark (``!``) can be kept optionally
- rJSmin does not handle streams, but only complete strings. (However, the
module provides a "streamy" interface).
Since most parts of the logic are handled by the regex engine it's way faster
than the original python port of ``jsmin.c`` by Baruch Even. The speed factor
varies between about 6 and 55 depending on input and python version (it gets
faster the more compressed the input already is). Compared to the
speed-refactored python port by Dave St.Germain the performance gain is less
dramatic but still between 3 and 50 (for huge inputs). See the docs/BENCHMARKS
file for details.
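Since the heavy lifting is delegated to the regex engine, the core idea can be illustrated with a deliberately simplified, standard-library-only sketch. Note that this is *not* rjsmin's actual implementation: among other things it ignores regex literals and template literals, which are exactly the hard cases rjsmin handles.

```python
import re

# One combined pattern: string literals are matched *first*, so comment-like
# text inside them is never treated as a comment.
_TOKEN = re.compile(r"""
    ("(?:\\.|[^"\\])*"|'(?:\\.|[^'\\])*')   # 1: string literal -> kept verbatim
  | \s*/\*.*?\*/\s*                         # block comment (plus padding) -> one space
  | \s*//[^\n]*                             # line comment -> dropped
  | \s+                                     # any other whitespace run -> one space
""", re.VERBOSE | re.DOTALL)

def toy_jsmin(script):
    def repl(match):
        if match.group(1):                          # string literal
            return match.group(1)
        if match.group(0).lstrip().startswith('//'):
            return ''                               # line comment
        return ' '                                  # block comment or whitespace
    return _TOKEN.sub(repl, script).strip()

print(toy_jsmin("var a = 1; /* note */ var b = '/* kept */'; // trailing\nvar c=2;"))
# var a = 1; var b = '/* kept */'; var c=2;
```

Real-world inputs are dominated by exactly the cases this sketch ignores (regex literals after ``return``, line continuations, template literals), which is where rjsmin's extra handling comes in.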
rjsmin.c is a reimplementation of rjsmin.py in C and speeds it up even more.
* [Change Log](docs/CHANGES)
* [Development](docs/DEVELOPMENT.md)
## COPYRIGHT AND LICENSE
Copyright 2011 - 2021
André Malo or his licensors, as applicable.
The whole package (except for the files in the bench/ directory)
is distributed under the Apache License Version 2.0. You'll find a copy in the
root directory of the distribution or online at:
<http://www.apache.org/licenses/LICENSE-2.0>.
## SYSTEM REQUIREMENTS
Supported python versions are 2.7 and 3.4+.
You also need a build environment for python C extensions (i.e. a compiler
and the python development files).
## INSTALLATION
### Using pip
```
$ pip install rjsmin
```
### Using distutils
Download the package, unpack it, change into the directory
```
$ python setup.py install
```
The command above will install a new "rjsmin" package into python's
library path.
### Drop-in
rJSmin effectively consists of two files: rjsmin.py and rjsmin.c, the
latter being entirely optional. So, for simple integration you can just
copy rjsmin.py into your project and use it.
## DOCUMENTATION
The module provides a simple function, called jsmin which takes the script as
a string and returns the minified script as a string.
The module additionally provides a "streamy" interface similar to the one
jsmin.c provides:
```
$ python -mrjsmin <script >minified
```
It takes two options:
-b Keep bang-comments (Comments starting with an exclamation mark)
-p Force using the python implementation (not the C implementation)
The latest documentation is also available online at
<http://opensource.perlig.de/rjsmin/>.
## BUGS
No bugs, of course. ;-)
But if you've found one or have an idea how to improve rjsmin, feel free
to send a pull request on [github](https://github.com/ndparker/rjsmin)
or send a mail to <[email protected]>.
## AUTHOR INFORMATION
André "nd" Malo <[email protected]>, GPG: 0x029C942244325167
> If God intended people to be naked, they would be born that way.
> -- Oscar Wilde
# bitcointicker-doc
bitcointicker doc
# Whac-A-Mole-Inspired-By-Pokemon
Built using: HTML, CSS, Bootstrap, W3 animations, JavaScript, jQuery
Live Preview: https://assignment.motasimfoad.com/js/wam/Pages/WAM/wam.html
A simple Whac-A-Mole game influenced by Pokémon, featuring:
- 3 levels
- Simple UI
- Animation
- Inspired by Pokémon


---
layout: post
title: '[BOJ !] !'
author: MilkClouds
comments: true
date: 2019-01-1 1:1
tags: [boj, problem-solving]
---
## Problem
## Algorithm Used
## Time Complexity
## Explanation
### Source Code
---
title: 'Speeding up `[...spread]` operations'
author: 'Hai Dang & Georg Neis'
date: 2018-12-04 16:57:21
tags:
  - ECMAScript
  - benchmarks
description: 'V8 v7.2 significantly improves the performance of Array.from(array) as well as of [...spread] for arrays, strings, Sets, and Maps.'
tweet: '1070344545685118976'
cn:
  author: 'justjavac ([@justjavac](https://github.com/justjavac)), maintainer of V8.js.cn'
  avatars:
    - justjavac
---
Hai Dang worked on the V8 team as an intern for three months. During his internship, he worked on improving the performance of `[...array]`, `[...string]`, `[...set]`, `[...map.keys()]`, and `[...map.values()]` (when the spread element sits at the start of the array literal). He even made `Array.from(iterable)` much faster as well. This article explains some of the details of his changes, which are included in V8 v7.2.
## Spread elements { #spread-elements }
Spread elements are components of array literals that have the form `...iterable`. They were introduced in ES2015 (ES6) as a way to create arrays from iterable objects. For example, the array literal `[1, ...arr, 4, ...b]` creates a new array whose first element is `1`, followed by the elements of `arr`, then `4`, and finally the elements of `b`:
```js
const a = [2, 3];
const b = [5, 6, 7];
const result = [1, ...a, 4, ...b];
// → [1, 2, 3, 4, 5, 6, 7]
```
As another example, any string can be spread into an array of its characters (Unicode code points):
```js
const str = 'こんにちは';
const result = [...str];
// → ['こ', 'ん', 'に', 'ち', 'は']
```
Similarly, any `Set` can be spread into an array containing all of its elements, in iteration order:
```js
const s = new Set();
s.add('V8');
s.add('TurboFan');
const result = [...s];
// → ['V8', 'TurboFan']
```
In general, the spread element syntax `...x` in an array literal assumes that `x` provides an iterator (accessible through `x[Symbol.iterator]()`). This iterator is then used to obtain the elements to be inserted into the resulting array.
The simple use case of spreading an array `arr` into a new array, without adding any other elements before or after, i.e. `[...arr]`, is considered a concise, idiomatic way to shallow-clone `arr` in ES2015. Unfortunately, in V8 the performance of this idiom lagged far behind its ES5 alternatives. Hai's goal was to change that!
## Why are (or were!) spread elements slow? { #why-is-(or-were!)-spread-elements-slow%3F }
There are many ways to shallow-clone an array `arr`. You can use `arr.slice()`, or `arr.concat()`, or `[...arr]`. Or you can write your own `clone` function that performs the shallow copy with a standard `for`-loop:
```js
function clone(arr) {
  // Pre-allocate the correct number of elements, to avoid having to grow the array.
const result = new Array(arr.length);
for (let i = 0; i < arr.length; i++) {
result[i] = arr[i];
}
return result;
}
```
Ideally, all of these options would have similar performance characteristics. Unfortunately, if you picked `[...arr]` in V8, it was slower than the `clone` function! The reason is that V8 essentially transpiles `[...arr]` into an iteration like the following:
```js
function(arr) {
const result = [];
const iterator = arr[Symbol.iterator]();
const next = iterator.next;
for ( ; ; ) {
const iteratorResult = next.call(iterator);
if (iteratorResult.done) break;
result.push(iteratorResult.value);
}
return result;
}
```
This code is generally slower than `clone`, for several reasons:
1. It needs to create the `iterator` at the beginning by loading and checking the `Symbol.iterator` property.
1. It needs to create and query the `iteratorResult` object at every step of the loop.
1. It grows the `result` array at every step of the iteration by calling `push`, thus repeatedly reallocating the backing store.
The reason for using such an implementation is that, as mentioned earlier, spreading can be done not only on arrays but on arbitrary _iterable_ objects, and must follow [the iteration protocol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols). Nevertheless, V8 should be smart enough to recognize when the object being spread is an array, so that it can perform the element extraction at a lower level and thereby:
1. avoid the creation of the iterator object,
1. avoid the creation of the iterator result objects, and
1. avoid continuously growing and reallocating the result array (the number of elements is known in advance).
We implemented this simple idea using [CSA](/blog/csa) for _fast_ arrays, i.e. arrays with one of the six most common [elements kinds](/blog/elements-kinds). The optimization applies in [the common real-world scenario](/blog/real-world-performance) where the spread occurs at the start of the array literal, e.g. `[...foo]`. As shown in the graph below, this new fast path yields roughly a 3× performance improvement for spreading an array of length 100,000, making it about 25% faster than the hand-written `clone` loop.

:::note
**Note:** Although not shown here, the fast path also applies when the spread element is followed by other components (e.g. `[...arr, 1, 2, 3]`), but not when it is preceded by them (e.g. `[1, 2, 3, ...arr]`).
:::
## Tread carefully down that fast path { #tread-carefully-down-that-Fast-Path }
That is clearly an impressive speedup, but we must be very careful about when taking this fast path is actually correct: JavaScript allows the programmer to modify the iteration behavior of objects (even arrays) in various ways. Because spread elements are specified to use the iteration protocol, we need to ensure that such modifications are respected, so we avoid the fast path completely whenever the original iteration machinery has been mutated. For example, this includes situations like the following.
### Own `Symbol.iterator` property { #own-symbol.iterator-property }
Normally, an array `arr` does not have its own [`Symbol.iterator`](https://tc39.github.io/ecma262/#sec-symbol.iterator) property, so when that symbol is looked up, it is found on the array's prototype. In the example below, the prototype is bypassed by defining the `Symbol.iterator` property directly on `arr` itself. After this modification, looking up `Symbol.iterator` on `arr` results in an empty iterator, so spreading `arr` yields no elements and the array literal evaluates to an empty array.
```js
const arr = [1, 2, 3];
arr[Symbol.iterator] = function() {
return { next: function() { return { done: true }; } };
};
const result = [...arr];
// → []
```
### Modified `%ArrayIteratorPrototype%` { #modified-%25arrayiteratorprototype%25 }
The `next` method can also be modified directly on [`%ArrayIteratorPrototype%`](https://tc39.github.io/ecma262/#sec-%arrayiteratorprototype%-object), the prototype of array iterators (which affects all arrays).
```js
Object.getPrototypeOf([][Symbol.iterator]()).next = function() {
return { done: true };
}
const arr = [1, 2, 3];
const result = [...arr];
// → []
```
## Dealing with _holey_ arrays { #dealing-with-holey-arrays }
Extra care is also needed when copying arrays with holes, i.e. arrays like `['a', , 'c']` that are missing some elements. Spreading such an array, by virtue of adhering to the iteration protocol, does not preserve the holes but instead fills them with the values found in the array's prototype at the corresponding indices. By default there are no elements in an array's prototype, which means that any holes are filled with `undefined`. For example, `[...['a', , 'c']]` evaluates to a new array `['a', undefined, 'c']`.
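A quick console illustration of that prototype-filling behavior (an added example, not part of the original post):

```js
const holey = ['a', , 'c'];
console.log(1 in holey);            // false: index 1 is a hole
const spread = [...holey];
console.log(spread);                // [ 'a', undefined, 'c' ]
console.log(1 in spread);           // true: the hole was filled with undefined

// If the prototype chain provides a value at the hole's index,
// spreading picks that value up instead:
Array.prototype[1] = 'b';
console.log([...['a', , 'c']]);     // [ 'a', 'b', 'c' ]
delete Array.prototype[1];          // clean up
```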
Our fast path is smart enough to handle holes in this situation. Instead of blindly copying the input array's backing store, it watches out for holes and takes care of converting them into `undefined` values. The graph below shows measurements for an input array of length 100,000 containing only 600 actual (integer) elements. Spreading such a holey array is now over 4× faster than the `clone` function. (They used to be roughly on par, but this is not shown in the graph.)
Note that although the graph includes `slice`, the comparison with it is unfair because `slice` has different semantics for holey arrays: it preserves all the holes, so it has much less work to do.
)](/_img/spread-elements/spread-holey-smi-array.png)
Filling holes with `undefined`, which our fast path has to do, is not as simple as it sounds: it may require converting the whole array to a different elements kind. The next graph measures such a situation. The setup is the same as above, except that this time the 600 array elements are unboxed doubles and the array has the `HOLEY_DOUBLE_ELEMENTS` elements kind. Since this elements kind cannot hold tagged values such as `undefined`, spreading involves a costly elements kind transition, which is why the score for `[...a]` is much lower than in the previous graph. Nevertheless, it is still much faster than `clone(a)`.
)](/_img/spread-elements/spread-holey-double-array.png)
## Spreading strings, `Set`s, and `Map`s { #spreading-strings%2C-sets%2C-and-maps }
The idea of skipping the iterator object and avoiding growing the result array equally applies to spreading the other standard data types. Indeed, we implemented similar fast paths for primitive strings, for Sets, and for Maps, each time taking care to bypass them in the presence of modified iteration behavior.
Concerning Sets, the fast path supports not only spreading a Set directly (`[...set]`), but also spreading its key iterator (`[...set.keys()]`) and its value iterator (`[...set.values()]`). In our micro-benchmarks, these operations are now about 18× faster than before.
The fast path for Maps is similar but does not support spreading a Map directly (`[...map]`), because we consider this an uncommon operation. For the same reason, neither fast path supports the `.entries()` iterator. In our micro-benchmarks, these operations are now about 14× faster than before.
For spreading strings (`[...string]`), we measured an improvement of roughly 5×, shown in the graph below by the purple and green lines. Note that this is even faster than a TurboFan-optimized for-of loop (TurboFan understands string iteration and can generate optimized code for it), shown by the blue and pink lines. The reason for having two plots in each case is that the micro-benchmarks operate on two different string representations (one-byte strings and two-byte strings).


## Improving `Array.from` performance { #improving-array.from-performance }
Fortunately, our fast paths for spread elements can be reused for `Array.from` in the case where `Array.from` is called with an iterable object and without a mapping function, for example `Array.from([1, 2, 3])`. The reuse is possible because in this case the behavior of `Array.from` is exactly the same as that of spreading. It results in an enormous performance improvement, shown below for an array with 100 doubles.

## Conclusion { #conclusion }
V8 v7.2 / Chrome 72 greatly improves the performance of spread elements when they occur at the front of an array literal, for example `[...x]` or `[...x, 1, 2]`. The improvement applies to spreading arrays, primitive strings, Sets, Map keys, and Map values, and, by extension, to `Array.from(x)`.
# SFDBScraper
A command-line tool for pulling data from Salesforce orgs
# Status
This project is currently on hold until I receive the "go ahead" from management. It seems more and more likely that they will need a tool like this soon. However, they also seem reluctant to ask me for something like this and I can only assume it's because they don't want to pay for it.
Last time I ran the program I was able to pull whatever data I wanted to from Salesforce all through a command line UI with prompts and menus. The menus may have been a little much and if I had to write this over I'd probably just build some sort of API for it instead of an intricate text based UI which was probably a bit overkill for what it needed to do.
So, try it out if you'd like. The retrieving of object data from Salesforce orgs works but you may need to write your own method of storing/accessing the data. If I'm remembering correctly, I wrote it so you can query the database and it will return an array of data.
# Wintersmith Docker
This software builds a base image for Wintersmith websites.
| 16.6 | 59 | 0.807229 | yue_Hant | 0.489993 |
---
title: AnimationBehavior.Additive Property (PowerPoint)
keywords: vbapp10.chm657003
f1_keywords:
- vbapp10.chm657003
ms.prod: powerpoint
api_name:
- PowerPoint.AnimationBehavior.Additive
ms.assetid: 29dabc4f-a333-9b11-97a5-36237a95dcb0
ms.date: 06/08/2017
localization_priority: Normal
---
# AnimationBehavior.Additive Property (PowerPoint)
Sets or returns whether the current animation behavior is combined with other running animations. Read/write.
## Syntax
_expression_. `Additive`
_expression_ A variable that represents an [AnimationBehavior](./PowerPoint.AnimationBehavior.md) object.
## Return value
MsoAnimAdditive
## Remarks
The value of the **Additive** property can be one of these **MsoAnimAdditive** constants.
|Constant|Description|
|:-----|:-----|
|**msoAnimAdditiveAddBase**|Does not combine current animation with other animations. The default.|
|**msoAnimAdditiveAddSum**| Combines the current animation with other running animations.|
Combining animation behaviors is particularly useful for rotation effects. For example, if the current animation changes rotation and another animation is also changing rotation, if this property is set to **msoAnimAdditiveAddSum**, Microsoft PowerPoint adds together the rotations from both the animations.
## Example
The following example allows the current animation behavior to be added to another animation behavior.
```vb
Sub SetAdditive()
Dim animBehavior As AnimationBehavior
Set animBehavior = ActiveWindow.Selection.SlideRange(1) _
.TimeLine.MainSequence(1).Behaviors(1)
animBehavior.Additive = msoAnimAdditiveAddSum
End Sub
```
## See also
[AnimationBehavior Object](PowerPoint.AnimationBehavior.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
---
title: "System.Runtime.Serialization.XsdImportAnnotationFailed"
ms.date: "03/30/2017"
ms.assetid: f52ff1d8-7b0d-421c-bf08-a9fbd0e76968
---
# System.Runtime.Serialization.XsdImportAnnotationFailed
System.Runtime.Serialization.XsdImportAnnotationFailed
## Description
Failed to import an annotation during XSD import.
## See also
- [Tracing](index.md)
- [Using Tracing to Troubleshoot Your Application](using-tracing-to-troubleshoot-your-application.md)
- [Administration and Diagnostics](../index.md)
---
title: adf53482feae29d01fb68664dc1360bd
mitle: "The Color Psychology of Purple"
image: "https://fthmb.tqn.com/py2NjQMNL_Q1pdO4GiIk1y7kAnA=/2126x1412/filters:fill(ABEAC3,1)/491865061-56a792683df78cf772973faa.jpg"
description: ""
---
Color psychology suggests that colors can have powerful effects on our moods and even our behaviors. Each color is thought to have its own effect, but the feeling a color produces can vary based on experience and culture. Purple can evoke quite different feelings and associations. How does the color purple make you feel? People often describe it as mysterious, spiritual, and imaginative. Purple occurs only rarely in nature, so it can be viewed as rare and intriguing. While violet occurs naturally in the visible spectrum, purple is actually a combination of blue and red.<h3>The Color Psychology of Purple</h3>So what are some of the most common associations people make with the color purple? As with other colors, the feelings the color purple evokes often come down to cultural associations.<h3>Purple Is Often Seen as a Royal Color</h3>Purple is a symbol of royalty and wealth. In ancient times, creating dyes to color fabric required a great deal of effort and expense, especially for certain colors. Because purple is so uncommon in nature, the resources needed to create a dye in this color were hard to come by and costly. For that reason, the color purple became associated with wealth and royalty, because it was often only rich and powerful individuals who could afford such expensive items. During the 15th century, the city of Tyre on the coast of Ancient Phoenicia produced purple dye by crushing the shells of a small sea snail. The resulting color became known as Tyrian purple and was so famous that it is mentioned in Homer's <em>Iliad</em> and Virgil's <em>Aeneid</em>. Alexander the Great and the kings of Egypt also wore clothing colored with the famous Tyrian purple. This connection with royalty is not restricted to ancient times: purple was the color of choice for Queen Elizabeth II's coronation in 1953. Purple also represents wisdom and spirituality.
Its rare and mysterious nature perhaps makes it seem connected to the unknown, the supernatural, and the divine.<h3>Purple Is Sometimes Seen as Exotic</h3>Because purple only rarely occurs in nature, it can seem exotic or artificial. For this reason, it tends to be a rather polarizing color: people tend to either really love purple or really hate it.<h3>Purple Also Holds a Great Deal of Symbolism</h3>Consider some of the symbolic uses of the color purple. In the U.S., the Purple Heart is among the highest honors for bravery in military service. In writing, the phrase 'purple prose' is sometimes used to describe writing that is extremely imaginative or even prone to exaggeration, hyperbole, or outright lies.<h3>Purple Has Some Unique Visual Characteristics</h3>Visually, purple is one of the most difficult colors to discriminate. It has the shortest wavelength of the visible spectrum; beyond it lie the still shorter wavelengths of x-rays and gamma rays. For this reason, it appears in visual illusions such as the lilac chaser illusion. Notice how purple is used in the image that accompanies this article. Consider how the color purple makes you feel. Do you associate purple with certain qualities or situations? Here is how a number of people said they feel about the color purple, drawn from reader responses shared over the years.<h3>Purple Is Regal</h3>"I have loved purple since I was young. I wore purple all the time in high school, and I was drawn to purple ever since. My master bedroom has "Deepest Grape" accent walls, and the other walls are a lavender-gray. It is beautiful, elegant, and regal! I still wear a lot of purple, though I try to remind myself to wear other colors as well." - Guest<h3>Purple Is Sensual</h3>"Purple is lush, rich, tactile, sweetly and muskily aromatic. It is also evocative of sensuality.
I want to inhale, drink, taste, touch, envision and imagine it exploding on all the senses." - Colleen Bradley<h3>Purple Conveys Wisdom</h3>"Purple is my favorite color. Much like green, it has a calming effect on the mind. I love purple clothes and purple backgrounds. It gives me a sense of wisdom." - Muhammad Sumran<h3>Purple Is Soothing</h3>"Purples draw me in and envelop me, taking me into a serene world, a peaceful state of mind. It calms and soothes me the way the moon does in the darkness of night. It doesn't push you away; it draws you in. Light purples like lavender make me daydream and feel happy and calm. They are like a light mist." - Anna<h3>Purple Is Mysterious</h3>"Whenever I see purple, I feel as though I am taken to deep, distant places, as if outer space and the Earth itself evoke a bit of creativity in me." - Jordan Sources: Ball, P. (2001). Bright Earth: Art and the Invention of Colour. Chicago: University of Chicago Press. Morton, J.L. Electromagnetic color. Color Matters. http://www.colormatters.com/color-and-science/electromagnetic-color.
http://conversations.e-flux.com/t/live-blog-the-new-centre-2016-nyc-summer-residency-july-18-22/4077/48
What does it mean to accelerate the general intellect in the age of artificial intelligence? #AGI begins from the investigation of distributed networks from which thought assembles and into which it disperses. Unlike in the past, general intelligence, algorithms, and networks are together becoming as irreducible to the efforts of “universal” intellectuals as cultural and political movements have become to “universal” leaders. Will the future enable a more radical, integrated, but also more complex mode of cultural and political engagement? One predicated upon what Marx describes as, “the conditions of the process of social life itself… under the control of the general intellect” (1).
AGI explores the new intensifying developments in the field of AI that are making possible subjectless modes of the general intellect, more collective and more general than any single individual or network.
Pete Wolfendale /// Towards Computational Kantianism
July 18 2016 @ 10:00 - 13:00 (Pratt Institute)
There are many ways to describe the purpose and significance of Immanuel Kant’s critical philosophy, but it is clear that the project of transcendental psychology, or the conditions of possibility of having a mind, or being capable of thought and action, is at the core of this philosophy. The premise of this seminar is that this project is essentially the same as the program of artificial general intelligence (AGI), and that by reading Kant’s work through contemporary developments in logic, mathematics, and computer science, that we can use this work to provide important methodological and technical insights for the AGI program. The seminar will begin by considering overall methodological issues, before describing the core ideas of Kant’s transcendental psychology, explaining the key ideas required to reconstruct it, and then proceeding to relate these to contemporary ideas, focusing on Robert Harper’s notion of computational trinitarianism, and the historical developments that lead up to it and the project of Homotopy Type Theory (HoTT) that inspired it. The seminar will close by considering some more general philosophical implications of the model provided.
.1.1.1 Pete Wolfendale: toward computational Kantianism
What is AGI?
AGI is both a subfield of AI and the 'subfield' from which AI research in general has sprung
What does the 'G' stand for/what does general(ized) mean in this context? This is a conceptual question which marks the extension of the AGI question beyond the parochial problem-solving questions instantiated by other subfields of AI. Taken seriously - i.e. as something other than an anthroponomic standard of human-levelness, -likeness, or -tractability - the generality pursued in AGI is absolute, qualitative, and abstract; or rather, as something systematically more than simply relative, quantitative, and/or concrete.
The Wozniak test: 'An AGI should be able to walk into an arbitrary house and make a cup of coffee'. As a rule of thumb, this is part of series of potential such rules of thumb that extends, for example, to 'an AGI should be able to terraform an arbitrary planet'. A generality that is common across these scales of Wozniak-satisfiability is what is posited and sought by AGI.
.1.1.2
We can map this to Kant's problematic from a variety of angles - putting AGIs alongside angels and aliens in Kant's thought-experimental references as 'abstraction(s) from below', understanding transcendental psychology as 'abstraction from above', object individuation in machine vision as engineering from within the Copernican turn, etc.
Beyond the frame problem
Isolating (from) certain parts of contexts - mice's flight across the floor from atmospheric dynamics or more appositely the nature of the lab as a general context - is both a positive computational achievement because it makes the problem of solving a maze or fleeing a predator tractable and well-refined, and at the same time the immediate or local limit to genuine learning which prevents mice from escaping the maze, killing the researchers and emancipating themselves. What we need is unframing and reframing as well as, or rather than, to provide access to the 'entire' frame.
Language is the killer app of human intelligence because it allows us to unframe and selectively reframe problems, constructively presupposing total logical plasticity, that anything can (theoretically) be symbolically represented in language, and the 'frame problem' is an effect of this.
Analogical bootstrapping: using (our) general intelligence to work on practical analogies to general intelligence like problem-solving in order to get back to general intelligence (again), closing - ideally - a construction loop.
Broadly, Kant gives us a variety of powerful ways to (generally) characterize the problem itself - and in its own terms?
Pete: real AGIs near-term are going to look more like children, probably specifically autistic children and savants, than suddenly-awoken Skynet, and our legal approach to them and their arising is probably going to have to look similar. He links this to the way that children and childraising are the 'loose thread' in liberal political philosophy inasmuch as they deal with the only universal, obligatory example of actually having to deal with developing-autonomy rather than formally positing it.
.1.1.3 Reconstructing Kant
For Kant, experience is judgment (we see what is as (what) it is). Understanding deals with concepts: general terms - which judgments connect (e.g. Socrates and man, man and mortal) - in the unfolding of syllogism (in experience), from the consequents/consequences of which reason derives and retains new information.
Sensibility deals with singular rather than general (conceptual, term-oriented) judgments - it is sensibility whereby the mind is given singularities that elementarily update the world and perturbate understanding (the space of the concept).
Kant's problematic is the structural interface in judgment between the singular and the general: the question of real experience, or objective validity: What is it to be responsible for an objective judgment, and to be capable of this responsibility?
How is it that we can say that gas over there is oxygen and that this means we know X and Y about it, its behavior, and its world, and have these (known to) be statements connected with the world and not merely manipulation of linguistic tokens in a closed/free-floating logical system?
How do objects and concepts constrain one another?
Kant: intuition of objects is synthetic and constructed. This is his radical departure from Aristotle and the essence of the Copernican turn, whereby we are not pre-related to objects which are elementarily given to us. You can't work on machine vision and not have this objectively demonstrated to you.
By constructing a singular instance with rules, the constructed instance becomes representative of the general or abstract case and not simply particular. This happens both in mathematical and experimental construction (Kant).
Out of Kant's transcendental method there are elements we should keep - transcendental psychology and deontology, which elaborate the synthetic a priori in the regimes of pure and practical reason respectively, and are useful 'abstractions from above' - and those we should not, like so-called transcendental reflection. Self-examination as a source of justification for statements about the transcendental constraints on mind leads us to phenomenological misuse - accidentally turning particular features of our sensorium into universalized features.
Similarly, aspects of computationalism which are principally useful include functionalism's abstraction/implementation loop; the formalism of information qua representation/synthesis loop that uses signal-extraction as a general characterization; and characteristic finitude, which we can gloss as 'ought implies can' and contrast with Kant's resort to infinite responsibilities in the second Critique. Pete also makes a distinction here between finite specification and finite execution, such that indefinite responsibilities (nonhalting execution) are OK if they can still be finitely specified - a deep computational principle.
.1.1.4 Transcendental Psychology = AGI
Kant's Principle of Consciousness: There is no consciousness without the possibility of self-consciousness.
Imagination understood (in part) as parochial processing includes: global integration of multi-modal sensory information, which we can relate to environmental simulation, global-workspace theories of consciousness, and the study of concurrency in computer science; extraction of local invariants such as object individuation; and
anticipation of local variations such as object simulation. Understanding, then, is general framing, and comprises classification of local invariants (generic judgment), re-identification of local invariants across contexts (recognitive judgment), and identification and classification of local variations (predicative judgment).
Reason, finally, describes general re-framing: extraction of judgment consequences through ampliative inference (abduction); identification of judgment conflicts or critical inference; and global integration of conceptually formatted information (world representation).
The full process moves through all three of these, including in loops where revision of conceptual structure requires reimagining - physically instantiated perhaps as reforming neural networks, or learning a new artistic or technical interface for interacting with or modeling the world - and vice-versa (re)integrating some experiences effectively in and through the imagination requires resort to or revision of extant conceptual apparatus.
.1.1.5 Key Questions/Provisional Answers
1. Why privilege judgment?
Computational Trinitarianism ramifies the operational structure of judgment into a triangular relationship between mathematics, logic, and computation. The Curry-Howard correspondence between functions and sets in type theory forms the logical-computational side of the triangle, syntactic categories in model-theory link logic and mathematics, and homotopy type theory links computation and mathematics by rendering all proofs computable, or equivalently any proof specified using it computer-checkable.
2. Why distinguish between sensibility and understanding?
Pete maps this distinction to mathematical-empirical 'duality' in the mathematical sense of stable operational inversions. Building on the trinitarian structure of judgment in the computational universe, there is a triad of relevant duals. In the logical domain, intuitionistic and co-intuitionistic logic are dual in a way we could roughly characterize as that of proof and exception. In the domain of formal (computational/programming) languages, recursive functions [λ] and co-recursive functions [ω] respectively generate halting and nonhalting recursive series, e.g. enumeration vs indefinite loops. And in mathematics, we have inductive types from homotopy type theory and the open question of co-inductive types (dual HoTT(?)) - which might correspond to the duality of observation and manipulation.
3. What exactly is an imagination?
Kant's constructivism resurfaces in topos theory (generalization of topology via category theory) and dependent type theory (which reconnect in homotopy type theory), but without Kant's inhibiting focus on 'our' ('the') imagination rather than imaginations plural, as varying (un- or preconscious?) technical media with varying capabilities that take place, operate, and correspond or negotiate within the universal trinity of computational judgment.
The current status quo is groups working on a range of very specific implementation schemes for neural networks, worrying about the specific structure of the networks and thinking that one or some of these will be 'the (better) way' of deploying generalizable intelligent behavior. Instead, Pete suggests that we can use the natural types made available by homotopy type theory to develop new programming languages which would compile directly into recurrent neural networks or similar implementation structures. This seems to be where the rubber hits the road and the gears of 'abstraction from above' and 'abstraction from below' or parochial and universalist A(G)I work engage and become co-productive: a modality of abstraction-from-above that can break out of the inhibitive tendency to (become) 'self-implementation' via a descriptive regime that commensurates to the potential-autonomy or plasticity of the state-of-the-art (least-)parochial medium of implementation (learning networks) - the technological constitution of imaginations.
Practically posed, what you want is a regime of natural data structures with which to extract hidden-layer/evolved solutions from active neural networks, and to consistently abstract and reuse/-implement them. The theoretical means available for this appears when it turns out geometric logic is the internal logic of Grothendieck topoi, and so this natural/governing logic turns out to already be present in the structure of the simulation space in which experience takes place.
.0.3 Abstract: New Centre #AGI panel at the Future of Mind conference
Panelists: Reza Negarestani, Patricia Reed, Pete Wolfendale
What does it mean to accelerate the general intellect in the age of artificial intelligence? #AGI begins from the investigation of distributed networks from which thought assembles and into which it disperses. Unlike in the past, general intelligence, algorithms, and networks are together becoming as irreducible to the efforts of “universal” intellectuals as cultural and political movements have become to “universal” leaders. Will the future enable a more radical, integrated, but also more complex mode of cultural and political engagement? One predicated upon what Marx describes as, “the conditions of the process of social life itself… under the control of the general intellect." #AGI explores the new intensifying developments in the field of AI that are making possible subjectless modes of the general intellect, more collective and more general than any single individual or network.
.3.0.1 Reza Negarestani: Language of General Intelligence (Future of Mind panel)
Abstract: Can general artificial intelligence be adequately defined or built without language? What is exactly in language that makes it imperative for the realization of higher cognitive-practical abilities? Arguments from the centrality of language as a social edifice are by no means new. In defining what language is and why it is necessary for the realization of general intelligence, it is easy to go astray: to associate language with an ineffable essence, to find its significance in some mythical social homogeneity between members of a community, or espouse dogmatic theories of meaning. While addressing some of these pitfalls, I would like to highlight the central role of language in the realization of higher cognitive abilities. To do so, I will provide a picture of language as a sui generis and multi-level computational system in which the indissociable syntactic, semantic and pragmatic aspects of language can be seen as a scaffolding for the generation of increasingly more complex cognitive and practical abilities.
.3.2 Reza Negarestani: Language of General Intelligence
Reza poses language as a fundamental computational interaction matrix - a matrix of interaction-as-computation - that facilitates qualitative compression and the modulation of behavior and internality among agents, rather than simply a symbolic regime. These agents are described in terms of computational dualities between role-switching systems or processes that reciprocally and dynamically constrain one another in being forced to correct each other's action series and augment their own interactive modes, driving the production of novel complexity. He goes on to discuss how the insights of this computational and ludic description of human/natural language apply and have failed to be applied in the deployment of artificial and formal languages. A language, or linguistic interaction-environment, that displays the full dynamicity and computational power he describes is a syntactic/semantic interface that has to be endowed with a pragmatics. The key to pursuing human-level artificial intelligences is to design multiagent environments in which interaction is a central motif and guiding matrix, at different and mutually reinforcing and regulating scales.
.4.3.1 Reza Negarestani: An Outside View of Ourselves as a Toy Model (of) AGI
Reza Negarestani's lecture can be roughly divided into three parts: a groundwork phase framing the transcendental ramifications of the question of the human for the project of AGI and vice-versa; the deployment of a toy model approach to investigating these relationships and their limits; and a demonstrative excursion into the problem of entropy and arrows of time in the context of Boltzmann's work in statistical thermodynamics.
Part 1 - Groundwork: problems of subjectivity for orienting the AGI project
Subjectivity, "discursive apperceptive intelligence", and the constraints it places on theorizing AGI are the central theme of this opening section. If we ask 'will AGI converge or diverge with humans?' it only makes sense to claim divergence if we are parochially limiting the (reference of the) human to its local particularity, whilst reducing convergence qua mirroring from a functional capacity to structural constitution or even a conflation of these two. The "inflationary" position with respect to the singularity of human intelligence, which believes artificial human-level intelligence (AHLI) to be impossible, and the 'deflationary' position which believes parochial and inductive methods to be sufficient for realizing AHLI, are in truth two sides of the "same provincial coin". The extreme subset of the latter group are 'hard parochialists' whose methodology is purely to abstract parochial modular functions from below and integrate them. In contrast, 'soft parochialism' (SP) poses 'human' as a set of cognitive practical abilities that are minimal but necessary for self-improvement, centrally the capacity for deliberate interaction founded on functional mirroring. It does not seek to limit the model of mirroring to human intelligence, but thinks that it is a necessary-but-insufficient component of such modeling.
Reza argues that this is simply a "more insidious form of anthropocentrism" and auto-occultation, and argues that SP's approach must be coupled with a critique that separates from particular, contingent transcendental structures of subjectivity - biological, cultural, historical, paradigmatic - because the limits of objective description of the human are set by the limits imposed in our own transcendental structures of self-regard which must therefore be systematically challenged. Taking the structures of our view of the structure of ourselves for granted inevitably constrains apprehension of AGI to essentialism about the human and renders us oblivious: a "transcendental blindspot". Thus the critique of transcendental structures and hard AGI research are parallel projects whose joint end is the fundamental alienation of the human from within, through rationally challenging the given facts of experience and reinventing its model outside of local transcendental constraints.
.4.3.2 Part 2 - the Outside view of ourselves as a toy model of AGI
The object of this phase of Reza's construction is a "global point of view that can make explicit implicit metatheoretical assumptions" in the thinking of human and artificial general intelligence, a sufficient but alterable metatheoretical model that can expose and reconfigure problems with the metatheoretical model it operates on, that is not just 'simple enough' but makes explicit or explicitly-different metatheoretical assumptions. This is where Reza deploys the toy model approach, taken from mathematical logic. A 'big' toy model, the kind that is useful for this project, supports model pluralism that can represent fissures between models but maintain invariant features. In contrast, the "AI winter" of the late 20th century that followed the syntactic mind project occurred because of a "unique and inflationary model of mind". Theoretical bottlenecks form a loop with practical setbacks when assumptions go unchallenged due to local successes, becoming globalized into observational general features, and it is this loop that the toy model approach is a weapon for breaking. By making the MTAs of their components explicit, toy models are able to engage in "theoretical arbitrage" that we learn from through systematically playing with the TM's capabilities and breaking it 'in real life' in the practical dialectic of metatheoretical conditions of observation and functional conditions of operation.
At this point the lecture becomes irreducibly dependent on a pair of extraordinary diagrams created by Reza that I hope to be able to post or link to here at a later date. For now, a few contextualizing points and a quotable moment:
- Fundamental axes: 1) what arises from the exercise of mind, and 2) what is required for the realization of (1) - or, "realizabilities and realizers".
- It is a methodological necessity to describe the toy-modeled intelligence as if an automaton, without internalities - see the framing arguments described above.
- Concepts are operations of qualitatively shifting compression that language qua sui generis computation facilitates to allow a growing internal model without exponentiating metabolic requirements.
- "Good predators are those whose invariances are extremely compressed."
The question of time transcendentally recurs at different levels of functioning through Reza's diagrams of small and big (Kantian and ramified) toy models of general intelligence, indicating a still-obscured way in which much of the contingent limits on transcendental characteristics of experience depend on the structure of subjective time (itself), or the "transcendental ideality of experience" (Kant). The provenance of 'nonagentic AGI' as a fundamental notion of the hard parochialists, among others, lies in giving probabilistic or inductive in(ter)ference strong causal powers. This move is also central to contemporary evolutionary biology and cognitive science, and is ultimately sourced from statistical thermodynamics - which leads Reza to the seminal work of Ludwig Boltzmann in that field.
.4.3.3 Part 3 - beyond (asymmetric) time
[This was a complex and involved discussion that I will treat fairly quickly here, pending more personal research into some of the underlying concepts.]
The takeaway from reading Boltzmann's The Unreality of Time can be extracted in modest and sinister versions.
Modest: We can neither draw conclusions about time nor about the existence of such conclusions to be drawn from the conditions of experience. Temporal dynamics do not reflect or entail observational time, and thus predicative judgments via language can be treated neither as pieces of evidence nor metafactual components.
Sinister: All punctual and durational assignments, the identity of the present, and any determination based on time-asymmetry is riddled with fundamental mistakes, impossibilities, and biases. This includes the idea of the 'observer' in physics as well as basic theoretical elements like causality, antecedent, state, and boundary conditions - inasmuch as these remain dependent on asymmetry. Any revision of the canonical time-model carries devastating consequences for complexity science.
Rather than 'why does entropy increase toward the future?' the real conundrum is 'why does entropy decrease toward the past?' Entropy just is the overwhelming likelihood that any physical microstate is part of an arbitrarily large coarse-graining region of macroscopically indistinguishable microstates, such that any vector in phase space transitioning between these regions almost certainly moves from a smaller to a larger one. The question is why there should be a steep local gradient in the size of these regions and consequently [... see how hard it is?] the intense continuous stream of transitions between them (dissipative activity of a low-entropy past) that we find occurring - rather than a low-transition relative uniformity 'in both directions'. (Thus, pace ontology, it's less an issue of why than where there should be something rather than nothing.)
Boltzmann reframes this as a problem of moving between micro- and macrostates in the process of making entropy assignments, identifying three descriptive levels (intermediated by renormalization spaces):
1. Pure, based on abstract generality of differential equations
2. Indirect, based on probabilistic interpretations
3. Direct, based on physical observables
The tendency toward more probable macrostates (over)determines the extraction of entropy assignments from microstates governed by time-symmetric physics.
'The whole idea that two particles that have not collided yet are not correlated is based on time-asymmetric causal assumption, unjustly inconsistent with the assumption that particles will be correlated after colliding even if they never encounter each other again in the future - and this is the parochial transcendental structure of [(the)] observation.'
How can we suggest that an initial microstate explains a final macrostate? The human is the possibility of this that requires an explanation - which cannot be already within its own terms.
"Time accommodates no one.
Why worry about being lost in it?"
//
The big takeaways for AGI:
A challenge: to think the agency beyond any locally posited transcendental conditions
A question: what are the implications of nondirectional time and of the atemporal model?
"Genuine self-consciousness," Reza concludes, "is an outside view of itself."
"A view from nowhen."
title: "Call by reference in Java"
categories: "Java"
tags:
- Call by reference
- Call-by-{Value | Reference}
---
Before getting into this post, today once again starts with me realizing how much I still don't know.
I've always felt that my fundamentals were lacking, but recently I got confused about the flow of access between instances and objects, so in order to really nail down the concepts of call by reference and call by value, I took notes during work and decided to write them up as a post when I had some spare time.
I started by looking into how these two mechanisms apply in Java.
If you search for call by value and call by reference, the example you'll run into most often is the classic C/C++-style swap function, and it's a good way to understand the concepts.
```cpp
#include <iostream>
using namespace std;

void valSwap(int a, int b){
    int temp = a;
    a = b;
    b = temp;
}

int main(int argc, char** argv){
    int a = 10;
    int b = 20;
    valSwap(a, b);
    cout << "a: " << a << " || b: " << b << endl;
    return 0;
}
```
>Result: a: 10 || b: 20
The result is completely expected: we called the valSwap function, but no swap actually happens. Looking at a diagram of the stack makes this easy to understand.

The swap function is called, but the memory addresses allocated for each variable (a, b, temp) are not shared.
That's why the values of the arguments a and b declared in main are not affected.
---
#### argument vs parameter
We often use the words argument and parameter interchangeably.
Strictly speaking, however, the two words mean different things.
Both words describe what is passed to a function, but to keep them apart it helps to think of the **formal parameter** and the **actual parameter**. <br/>
Let's look at the difference through the following code.
```java
public class ArgumentAndParameter{
    public static void runFunc(String param) { // static so it can be called from main without an instance
System.out.println("this is " + param);
}
public static void main(String[] value) {
String args = "I am Arguments";
runFunc(args);
}
}
```
In the sample code above, the variable param declared in the runFunc(String param) method is the **formal parameter**, i.e., the parameter.
And the variable args declared in the main method is the **actual parameter**, i.e., the argument.
---
Back to the main topic: let's look at call by reference using the swap function above.
```cpp
#include <iostream>
using namespace std;

void swap(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}

int main(int argc, char** argv){
    int a = 10;
    int b = 20;
    swap(&a, &b);
    cout << "a: " << a << " || b: " << b << endl;
    return 0;
}
```
>Result: a: 20 || b: 10
The difference from before is that we accessed the variables through pointers, and as a result the swap works correctly.
Let's look at the following diagram to see how this result comes about.

What to notice in the diagram above are the memory addresses **100** and **104** marked on the left in step 1, where variables a and b are created.
When you declare and use ordinary parameters, as in the call-by-value example, each parameter refers to its own separate memory.
But a parameter created through a pointer refers to the same memory as the original variable, as the diagram shows, so the values of a and b end up swapped.
---
So far, this is the difference between call by value and call by reference as it is commonly explained with C.
It would be nice and peaceful if the story ended there, but Java needs a bit more explanation.
### Call by value in Java

<figcaption class="caption">Data type in Java</figcaption>
Only the primitive types on the left side of the diagram follow the so-called call-by-value mechanism.
```java
public class CallByInJava{
public static void runValue(){
int a = 1;
int b = a;
b = 2;
System.out.println("runValue, "+a);
}
public static void main(String [] args){
runValue();
}
}
```
> Result: runValue, 1
In the example above, a of course prints '1'. It's almost too obvious to explain: in runValue, the variable b is a copy of the variable a, and since only the copy b was changed, the value of a is unaffected.
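The runValue example copies a value inside a single method; to see call by value across an actual method boundary, which is the case the term really describes, here is a minimal sketch (the class and method names are my own, not from any standard API):

```java
public class PrimitivePass {
    // x is a copy of the caller's int; changing it here has no effect outside
    static void tryModify(int x) {
        x = 99;
    }

    public static void main(String[] args) {
        int a = 1;
        tryModify(a);
        System.out.println("a = " + a); // still 1: only the copy was changed
    }
}
```

The copy is made at the moment of the call, so nothing tryModify does to its parameter can reach the caller's variable.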
```java
class RefClass{ // not public, so both classes can live in one source file
public int id;
RefClass(int id){
this.id = id;
}
}
public class CallByInJava{
public static void runValue(){
int a = 1;
int b = a;
b = 2;
System.out.println("runValue, "+a);
}
public static void runRef(){
RefClass ref = new RefClass(5);
RefClass subRef = ref;
subRef.id = 10;
System.out.println("runRef, "+ref.id);
}
public static void main(String [] args){
runValue();
runRef();
}
}
```
> Result: runValue, 1 / runRef, 10
We changed subRef.id, yet the value of ref.id changed as well. This is where we need to understand the concept of a 'reference'.
It is similar in concept to the C pointer we saw earlier, which points at a memory address.
In Java, an object created with new is commonly called an 'instance'. Every one of these instances refers to some piece of memory.
So the RefClass instance created through **RefClass ref = new RefClass(5)** in the example above refers to a particular piece of memory.
And then the new variable subRef is assigned ref.
<pre>
RefClass subRef = ref;
</pre>
This means that the two variables ref and subRef both **refer** to the same instance.
That's why changing subRef also changed ref's id value.
In this way, for all of the data types in the diagram above except the primitive types, arguments are passed in a call-by-reference style (strictly speaking, Java always passes by value: it is the reference itself that gets copied, so what caller and callee share is the object it points to).
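To make that last point concrete, here is a small sketch showing both sides of it: mutating through the copied reference is visible to the caller, while reassigning the copied reference is not. The Box class and the method names are my own illustrations, not code from the post above.

```java
class Box {
    int id;
    Box(int id) { this.id = id; }
}

public class RefPass {
    // b is a copy of the caller's reference; both point at the same Box,
    // so a mutation through b is visible to the caller
    static void mutate(Box b) {
        b.id = 10;
    }

    // reassigning b only changes the local copy of the reference;
    // the caller's variable still points at the original Box
    static void reassign(Box b) {
        b = new Box(99);
    }

    public static void main(String[] args) {
        Box box = new Box(5);
        mutate(box);
        System.out.println(box.id); // 10
        reassign(box);
        System.out.println(box.id); // still 10, not 99
    }
}
```

This is why "Java is call by reference for objects" is a useful shorthand but not the whole story: the shared thing is the object, never the caller's variable itself.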
---
- [Image source](http://choieun.tistory.com/entry/Call-by-value%EC%99%80-Call-by-reference)
- [Reference: 머루의 개발 블로그](http://wonwoo.ml/index.php/post/1679)
- [Reference: Mussebio's HuRrah Blog](http://mussebio.blogspot.kr/2012/05/java-call-by-valuereference.html)
- [Reference: 생활코딩 course](https://opentutorials.org/course/1223/5375)
title: Universal performance monitor for power generators
abstract: The invention broadly encompasses a system including a communications network, a plurality of remotely located data sources to provide power data, the power data including quantitative and qualitative data of one or more power generation units, and a performance monitor in communication with the plurality of remotely located data sources through the communications network, the performance monitor including a communications unit to extract the power data from the plurality of remotely located data sources, a data conversion unit to transform the power data into a common data format, a data store to store the transformed power data, and a user interface unit to display the transformed power data on one or more client devices through the communications network.
url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=08965719&OS=08965719&RS=08965719
owner: Versify Solutions, Inc.
number: 08965719
owner_city: Glen Mills
owner_country: US
publication_date: 20090306
---
This application claims the benefit of U.S. provisional patent application No. 61/034,912, which was filed on Mar. 7, 2008 and is incorporated herein by reference in its entirety.
The invention encompasses a performance monitor for power generators and more particularly to a performance monitor for power generators that is adaptable to handle data from any data source.
The power industry has been rapidly changing with the advent of deregulation as well as other socio-economic factors. As a result, increases in efficiency and control of power generation costs are becoming more important. To meet these industry needs, a large number of siloed information technology (IT) applications have been introduced. However, these applications are typically not built with integration in mind, with each application being too proprietary in nature and specifically tailored for a particular power generation operation. Accordingly, collection and integration of data from these applications and systems are extremely difficult outside of the intended operation. Many utilities have sought to create a large-scale data warehouse to solve this integration problem with very little success.
Another difficulty with prior art systems is the disparate number of locations, even within the organization, that need access to the data. For example, within a power company, traders on a central trade floor, plant personnel at each power plant, engineers stationed regionally, management dispersed throughout the organization, and third parties all need access to the data in some form. The traditional siloed applications are typically client-server based applications, and it is difficult to provide access to everyone in need of the data.
In addition, due to the generally isolated nature of the prior art systems as described above, combining qualitative event-type data (e.g., real-time or recorded plant operations data) and quantitative data (e.g., Supervisory Control and Data Acquisition (SCADA) data and market data) becomes difficult and cumbersome, if not impossible, due to the size and disparity of the data. On the other hand, such information is important in determining proper operation of power generation, as back office settlement activities determine penalties associated with under- or over-production of power, for example. Typically, back office personnel manually extract data from a number of different IT systems in the organization to determine the activities that occurred in prior reporting periods. Many times, logs maintained in word processing or hand-written documents must be searched manually.
Moreover, when a new type of report is required, IT developers have to develop some level of custom code to extract the data and format it properly onto a report. This task becomes even more complicated when disparate data sources with varying data formats are used.
Accordingly the invention encompasses a system and method for monitoring power generation operations that substantially overcomes the limitations and disadvantages of the related art.
In one embodiment the invention encompasses a system and method for collecting power generation operation data from disparate data sources and generating a report of the performance of the operation.
Additional features and advantages of the invention will be set forth in the description which follows and in part will be apparent from the description or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
To achieve these and other advantages and in accordance with the purpose of the invention as embodied and broadly described a system encompasses a communications network a plurality of remotely located data sources to provide power data the power data including quantitative and qualitative data of one or more power generation units and a performance monitor in communication with the plurality of remotely located data sources through the communications network the performance monitor including a communications unit to extract the power data from the plurality of remotely located data sources a data conversion unit to transform the power data into a common data format a data store to store the transformed power data and a user interface unit to display the transformed power data on one or more client devices through the communications network.
In one embodiment the invention encompasses methods of communicating with a plurality of remotely located data sources from a performance monitor via a communications network the plurality of remotely located data sources providing power data including quantitative and qualitative data of one or more power generation units extracting the power data from the plurality of remotely located data sources transforming the extracted power data into a common data format storing the transformed power data in a data store and displaying the transformed power data on one or more client devices through the communications network.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
a plurality of remotely located data sources to provide power data, the power data including quantitative and qualitative data of one or more power generation units; and

a performance monitor in communication with the plurality of remotely located data sources through the communications network, the performance monitor including

a user interface unit to display the transformed power data on one or more client devices through the communications network.
In certain illustrative embodiments, the quantitative data includes supervisory control and data acquisition (SCADA) data and/or market data.

In certain illustrative embodiments, the quantitative data includes operational cost data of the one or more power generation units.

In certain illustrative embodiments, the qualitative data includes event log data of the one or more power generation units.

In certain illustrative embodiments, the communications unit includes a gateway application programming interface (API) unit to pull the power data from the plurality of remotely located data sources.

In certain illustrative embodiments, the conversion unit includes an interface API unit to communicate with the gateway API unit and to transform the power data into the common data format.

In certain illustrative embodiments, the user interface unit includes an alarm unit to issue and track alarms based on user-defined events.

In certain illustrative embodiments, the user interface unit includes a reporting unit to display interactive reports on the one or more client devices.

In certain illustrative embodiments, the reporting unit includes any one of a dashboard reporting interface, daily operational reporting interface, unit performance interface, ad hoc SCADA query interface, and unit status communications interface.

In certain illustrative embodiments, the user interface unit includes a library of extensible markup language (XML) configuration files, each XML configuration file being associated with a corresponding one of the interactive reports to map the power data stored in the data store directly to the corresponding interactive report for display on the one or more client devices.
communicating with a plurality of remotely located data sources from a performance monitor via a communications network, the plurality of remotely located data sources providing power data including quantitative and qualitative data of one or more power generation units;

displaying the transformed power data on one or more client devices through the communications network.
In certain illustrative embodiments, the quantitative data includes supervisory control and data acquisition (SCADA) data and/or market data.

In certain illustrative embodiments, the quantitative data includes operational cost data of the one or more power generation units.

In certain illustrative embodiments, the qualitative data includes event log data of the one or more power generation units.

In certain illustrative embodiments, the step of extracting the power data from the remotely located data sources includes pulling the power data from the plurality of remotely located data sources via a gateway application programming interface (API) unit.

In certain embodiments, the step of transforming the power data includes converting the power data into the common data format via an interface API unit.

In certain illustrative embodiments, the step of displaying includes issuing and tracking alarms based on user-defined events.

In certain illustrative embodiments, the step of displaying includes displaying interactive reports on the one or more client devices.

In certain illustrative embodiments, the interactive reports are displayed on any one of a dashboard reporting interface, daily operational reporting interface, unit performance interface, ad hoc SCADA query interface, and unit status communications interface.

In certain illustrative embodiments, the step of displaying includes configuring the interactive reports via a library of extensible markup language (XML) configuration files, each XML configuration file being associated with a corresponding one of the interactive reports to map the power data stored in the data store directly to the corresponding interactive report for display on the one or more client devices.
In another embodiment, the invention encompasses a computer-readable storage medium storing one or more programs configured for execution by one or more processors, the one or more programs comprising instructions to

communicate with a plurality of remotely located data sources from a performance monitor via a communications network, the plurality of remotely located data sources providing power data including quantitative and qualitative data of one or more power generation units.

Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
The system and method of the present invention is a flexible solution, both in terms of the type and amount of data processed and in terms of monitoring and reporting, to the above-identified problems of the prior art. In general, the system and method of the present invention is a hosted asset performance monitoring and reporting tool used by owners or power generators, such as independently owned utilities, municipalities, and cooperatives, for example. It is to be understood that other users and benefits may be realized without departing from the scope of the invention. The system and method of the present invention provides, for example, dashboard reporting (e.g., for management level), summary drill-down reporting (e.g., back office processing), daily operational reporting (e.g., operations), a query interface for plant supervisory control and data acquisition (SCADA) information on an ad hoc basis, and near real-time status and logging capabilities. Accordingly, the system and method of the present invention provides, for example, logged information created by automated plant monitoring systems and/or plant personnel as events occur, together with relative SCADA and market information. The details of the system and method of the present invention are described below.

The hosting monitoring center includes a power data server, a market data server, and a web server. It is to be understood that these servers may be implemented in a single machine or a plurality of machines without departing from the scope of the invention. The power data server and market data server are configured to obtain data from any number of the disparate data sources. The data sources may be databases from hosted or unhosted systems, such as independent system operators (ISOs), regional system operators (RSOs), and SCADA data centers, for example. The data may also be obtained from internal data sources of hosted and unhosted systems, such as data from internal databases, spreadsheets, and other software packages. The power data may, in some embodiments, include market-type data such as, for example, power pricing, fuel pricing, or the like. The power data server and market data server convert the collected data into a common format and store the transformed data in the data store. The data store may be a single data storage device or a plurality of data storage devices, and may be implemented as a direct data repository or a relational database. Other data store configurations may be used without departing from the scope of the present invention. The web server communicates with client devices to provide monitoring functionality to the users. Client devices may be workstations, notebooks, personal digital assistants, and other data-enabled devices. The web server processes the requests from the client devices and provides the requested information via reports and alarms, to be described further below.
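The conversion step described above — disparate source records normalized into one common format before they reach the data store — can be sketched as follows. The `PowerRecord` shape and the field names of the incoming rows are illustrative assumptions, not the actual schema of the patented system:

```python
from dataclasses import dataclass

@dataclass
class PowerRecord:
    """Common format shared by all sources (illustrative schema)."""
    unit_id: str
    timestamp: str
    megawatts: float
    source: str

def from_scada(row: dict) -> PowerRecord:
    # The SCADA feed keys its rows differently than the market feed (hypothetical keys).
    return PowerRecord(row["tag"], row["ts"], float(row["mw"]), "scada")

def from_market(row: dict) -> PowerRecord:
    return PowerRecord(row["unit"], row["hour_ending"], float(row["scheduled_mw"]), "market")

# The "data store" here is just a list; a real deployment would use a database.
store = [
    from_scada({"tag": "GEN1", "ts": "2024-01-01T00:00", "mw": "98.5"}),
    from_market({"unit": "GEN1", "hour_ending": "2024-01-01T01:00", "scheduled_mw": "100"}),
]
```

Once both feeds land in the same record type, downstream reporting code no longer needs to know which source a value came from.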
In an exemplary embodiment of the present invention, the web server communicates with the client devices via web-based applications. In the exemplary embodiment, the client devices only need a web browser and do not require any specialized applications. The web server includes a proprietary XML HTTP callback architecture to initiate requests from a browser (from the client device, for example) back to the web server.

The Gateway API, in accordance with the exemplary embodiment of the present invention, extracts data from the hosted system's internal applications. The Gateway API accesses known APIs of other commercial software systems and databases, as well as any custom code needed to pull data from the hosted system's internal proprietary applications. In an exemplary embodiment, the Gateway API extracts data and returns the data to the web service client as either an ADO DataSet, XML document, byte array, string, or other similar data types.

The Hosting Interface API, in accordance with the exemplary embodiment of the present invention, provides the ability to communicate with the Gateway API and contains interface logic to transform data into a common data format. The Hosting Interface API, for example, pulls hourly snapshot and market data into the data store. The Hosting Interface API also generates log events from SCADA information.

The SQL Server Integration Services, in accordance with the exemplary embodiment of the present invention, drive the communication interfaces. The SQL Server Integration Services utilize mapping data to execute, monitor, and report on scheduled interfaces for each hosted system. In accordance with the exemplary embodiment of the present invention, the SQL Server Integration Services include retry logic to ensure that data is not missed due to any sort of system failure.
Once the qualitative and quantitative information of the hosted power generating unit (e.g., power plant) is available, the web server of the hosting monitoring center provides customized reports to the client devices through report interfaces implemented on the web server. The report interfaces, in accordance with an exemplary embodiment of the present invention, are built from a customizable library of report interfaces. The report interfaces of the present invention are customized using extensible markup language (XML) based config files that contain information about what data to extract and how to format the data on a report interface. Accordingly, the XML config files in accordance with the present invention combine data from any number of disparate systems into a comprehensive report. The XML config files of the present invention simply map data from the data store directly to a report interface without requiring any customized code.

An exemplary embodiment of the present invention includes page config files and reports config files. The page config file, as shown, includes XML that may direct the page to change any property of the page itself or any property of any control on the page. This allows the user interface to be changed without writing any code, and increases maintainability across multiple client devices. For example, when the page initially loads, the browser automatically looks for a page config file. If a page config file is found, the browser processes the XML for the page contained in the page config file. Each page or control property identified in the XML is then set based on the page config file setting. To illustrate, a button on the page may be hidden by setting the visible property of the button equal to hidden. Furthermore, properties have been created on certain pages, such as a unit status report interface to be explained below, that allow customization of entire sections of the page through the use of custom user controls. In addition, a config file may define standard .NET Framework user controls that make up a page. A particular user control may be overridden for one or more user or customer locations (e.g., power stations or generators) via the config file. In some embodiments, a class (e.g., basePage), invoked because of a load page event, may process the config file, change page properties, and override user controls. In the scenario where a user control is overridden, a new user control may be loaded and a local variable may be set accordingly.
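The page-config mechanism just described — properties read from XML and applied to page controls without code changes — can be illustrated with a small sketch. The XML shape, control IDs, and property names here are invented for illustration; they are not the patent's actual config format:

```python
import xml.etree.ElementTree as ET

PAGE_CONFIG = """
<page>
  <control id="exportButton" visible="hidden"/>
  <control id="statusPanel" userControl="StationStatus.ascx"/>
</page>
"""

def apply_page_config(xml_text, controls):
    """Apply each attribute in the config as a property override on the matching control."""
    for node in ET.fromstring(xml_text).findall("control"):
        target = controls[node.get("id")]
        for name, value in node.attrib.items():
            if name != "id":  # "id" selects the control; everything else is a property
                target[name] = value
    return controls

# Controls modeled as plain dicts of properties for the sketch.
controls = {"exportButton": {"visible": "shown"}, "statusPanel": {}}
controls = apply_page_config(PAGE_CONFIG, controls)
```

The same page code ships to every customer; only the config file differs per location, which is the maintainability benefit the text describes.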
The reports config file defines the layout of a report interface using XML included in the reports config file. The reports config file includes XML fragments for each object to be displayed on the report interface (e.g., graph, pie chart, data table, etc.). The XML fragment includes information specific to the object being shown (e.g., location on report, height, width, colors, etc.), as well as mapping information back to the data store as to what data should be displayed. There may be mappings to multiple stored procedures defined for a single report object. For example, a chart may pull hourly megawatt (MW) data from one stored procedure and hourly price information from another, in conjunction with a reporting engine to be described below. In an exemplary embodiment, reports config files may be defined for a single report but have different configurations depending on what hosted system (e.g., power plant) the report is for. For example, each reports config file may have a default configuration defined. For any hosted system (e.g., power plant) or unit (e.g., generators), referred to as locations, where the report is to have a different look and feel and/or different data source, a subsequent override XML fragment is defined for the location. Any location that does not have the override fragment reverts to the default layout.
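The reports-config idea — one XML fragment per report object, carrying both layout attributes and mappings to the stored procedures that supply its data — can be sketched like this. The element names, attributes, and stored-procedure names are hypothetical placeholders, not the patent's real schema:

```python
import xml.etree.ElementTree as ET

REPORTS_CONFIG = """
<report name="unit-performance">
  <object type="chart" width="600" height="300">
    <mapping series="mw" procedure="usp_GetHourlyMW"/>
    <mapping series="price" procedure="usp_GetHourlyPrice"/>
  </object>
</report>
"""

def load_report_config(xml_text):
    """Parse a reports config into (report name, list of report-object descriptors)."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.findall("object"):
        objects.append({
            "type": obj.get("type"),
            "size": (int(obj.get("width")), int(obj.get("height"))),
            # series name -> stored procedure that supplies its data;
            # one object may map to several procedures, as in the chart example.
            "mappings": {m.get("series"): m.get("procedure") for m in obj.findall("mapping")},
        })
    return root.get("name"), objects

report_name, report_objects = load_report_config(REPORTS_CONFIG)
```

A reporting engine would then execute each mapped procedure and hand the result sets to the object's renderer, so adding a report means writing config, not code.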
In an exemplary embodiment of the present invention, the report interface is categorized as one of the following: dashboard report interface, daily operational report interface, quantitative summary drill-down report interface (also referred to as unit performance interface), ad hoc SCADA query interface, and unit status communication interface.

In addition to the real-time monitoring, the system and method of the present invention includes alarm monitoring and tracking of user-defined significant events. For example, the monitoring center of the present invention tracks and logs when a hosted unit comes on line or goes off line. The monitoring center tracks alarms against any generation operational parameter that is archived in the time series data store. This is implemented by querying the time series historical data store for values archived for a selected operational parameter over a set time interval. For example, for a generator unit on-line alarm, the monitoring center queries the historical archive in the data store for a fifteen (15) minute interval and examines breaker status recorded during that timeframe. Any change in the monitored value represents an event, which triggers an alarm. Once examination for the given parameter and time period is complete, the monitored time interval is marked as examined and the alarm as tracked. Future monitoring of the historical archived data in the data store will check subsequent intervals based on what has already been marked as examined.

The alarming feature is not limited to tracking on/off types or digital state data. Rather, monitored recorded events may also be examined based on numerical thresholds. For example, generation managers may wish to monitor megawatt (MW) levels and create different events based on the number of megawatts produced at a power generation facility. The plant may want to be alerted when the megawatt (MW) level reaches a specific level, such as 100, 250, and 500. Each MW level reached requires a unique action or log entry to be recorded. Such alarms are defined in the monitoring center to initiate tracking and logging. For example, in an exemplary embodiment of the present invention, alarms may be defined by noting the following data points:
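The threshold variant of the alarming feature can be sketched as a scan over archived samples that raises an event each time a configured MW level is crossed upward. The specific levels and the tuple-based samples are illustrative, matching the 100/250/500 example above:

```python
MW_LEVELS = [100, 250, 500]  # alarm levels from the example above (illustrative)

def threshold_events(previous_mw, readings):
    """Return (timestamp, level) for every level crossed upward between samples.

    previous_mw baselines the check against the last examined interval,
    so re-running over new samples does not re-fire old alarms.
    """
    events, last = [], previous_mw
    for ts, mw in readings:
        events.extend((ts, level) for level in MW_LEVELS if last < level <= mw)
        last = mw
    return events

# Output climbs from 80 MW through 120 MW and 300 MW,
# crossing the 100 MW and then the 250 MW level.
events = threshold_events(80, [("01:00", 120), ("02:00", 300)])
```

Each returned event would then trigger the unique action or log entry the text calls for; the same scan structure works for digital state data by treating a state change as the crossing.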
This serves to baseline subsequent interval checks. It is to be understood that other notations may be made without departing from the scope of the present invention.

In accordance with an exemplary embodiment of the system and method of the present invention, monitoring of any number of hosted power generation units is realized by collecting qualitative (e.g., event data) and quantitative (e.g., cost and market data) information from a plurality of disparate data sources, converting the disparate data into a common data format, and storing the transformed data to be served up through a communications network, such as the Internet, to a plurality of client devices that may be located anywhere in the world. The various report interfaces in accordance with the present invention allow the user to monitor the performance of the hosted power generation units, including a comparison of the actual performance of the monitored unit with expected (i.e., budgeted) performance. The system and method of the present invention generates reports using XML config files to reduce the time to build and customize any number of reports. The XML config files allow developers to simply map data from database stored procedures directly to a report without writing any code, reducing the time required to deliver a report and eliminating the need for any code changes to existing applications.

It will be apparent to those skilled in the art that various modifications and variations can be made in the system and method of the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention, provided they come within the scope of the appended claims and their equivalents.
---
title: Upgrade to a new release
titleSuffix: SQL Server Big Data Clusters
description: Learn how to upgrade SQL Server Big Data Clusters to a new release.
author: MikeRayMSFT
ms.author: mikeray
ms.reviewer: mihaelab
ms.date: 11/04/2019
ms.topic: conceptual
ms.prod: sql
ms.technology: big-data-cluster
ms.openlocfilehash: f44ef17a712d0d5a19707cf94e7d3e4196a2aba3
ms.sourcegitcommit: b4ad3182aa99f9cbfd15f4c3f910317d6128a2e5
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/06/2019
ms.locfileid: "73706310"
---
# <a name="how-to-upgrade-includebig-data-clusters-2019includesssbigdataclusters-ss-novermd"></a>How to upgrade [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)]
[!INCLUDE[tsql-appliesto-ssver15-xxxx-xxxx-xxx](../includes/tsql-appliesto-ssver15-xxxx-xxxx-xxx.md)]
This article describes how to upgrade a SQL Server big data cluster to a new release. The steps in this article apply specifically to upgrades from a preview release to a servicing update release of SQL Server 2019.
## <a name="backup-and-delete-the-old-cluster"></a>Back up and delete the old cluster

Currently, the only way to upgrade a big data cluster to a new release is to manually remove and recreate the cluster. Each release has a unique version of `azdata` that is not compatible with the previous version. Also, if an older cluster had to download a container image onto a new node, the latest image might not be compatible with the older images on the cluster. Note that the newer image is only pulled if you use the `latest` image tag in the deployment configuration file for the container settings. By default, each release has a specific image tag that corresponds to the release version of SQL Server. To upgrade to the latest release, complete the following steps:

1. Before deleting the old cluster, back up the data on the SQL Server master instance and in HDFS. For the SQL Server master instance, you can use [SQL Server backup and restore](data-ingestion-restore-database.md). For HDFS, [you can copy the data out with `curl`](data-ingestion-curl.md).

1. Delete the old cluster with the `azdata bdc delete` command.
```bash
azdata bdc delete --name <old-cluster-name>
```
   > [!Important]
   > Use the version of `azdata` that corresponds to your cluster. Do not delete an older cluster with the newer version of `azdata`.

   > [!Note]
   > Running an `azdata bdc delete` command deletes all objects created within the namespace with the big data cluster's name, but it does not delete the namespace itself. The namespace can be reused for subsequent deployments as long as it is empty and no other applications have been created in it.

1. Uninstall the old version of `azdata`.
```powershell
pip3 uninstall -r https://azdatacli.blob.core.windows.net/python/azdata/2019-rc1/requirements.txt
```
1. Install the latest version of `azdata`. The following commands install `azdata` from the latest release:
**Windows:**
```powershell
pip3 install -r https://aka.ms/azdata
```
**Linux:**
```bash
pip3 install -r https://aka.ms/azdata --user
```
   > [!IMPORTANT]
   > For each release, the path to the `n-1` version of `azdata` changes. Even if you previously installed `azdata`, you must reinstall it from the latest path before creating the new cluster.

## <a id="azdataversion"></a> Verify the azdata version

Before deploying a new big data cluster, confirm that you are using the latest version of `azdata` by running it with the `--version` parameter:
```bash
azdata --version
```
## <a name="install-the-new-release"></a>Install the new release

After you have removed the previous big data cluster and installed the latest version of `azdata`, deploy the new big data cluster by using the latest deployment instructions. For more information, see [How to deploy [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)] on Kubernetes](deployment-guidance.md). Then restore any required databases or files.

## <a name="next-steps"></a>Next steps

For more information about big data clusters, see [What are [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)]?](big-data-cluster-overview.md).
Webmozart Assert
================
[](https://travis-ci.org/webmozart/assert)
[](https://ci.appveyor.com/project/webmozart/assert/branch/master)
[](https://packagist.org/packages/webmozart/assert)
[](https://packagist.org/packages/webmozart/assert)
This library contains efficient assertions to test the input and output of
your methods. With these assertions, you can greatly reduce the amount of coding
needed to write a safe implementation.
All assertions in the [`Assert`] class throw an `\InvalidArgumentException` if
they fail.
FAQ
---
**What's the difference to [beberlei/assert]?**
This library is heavily inspired by Benjamin Eberlei's wonderful [assert package],
but fixes a usability issue with error messages that can't be fixed there without
breaking backwards compatibility.
This package features usable error messages by default. However, you can also
easily write custom error messages:
```
Assert::string($path, 'The path is expected to be a string. Got: %s');
```
In [beberlei/assert], the ordering of the `%s` placeholders is different for
every assertion. This package, on the contrary, provides consistent placeholder
ordering for all assertions:
* `%s`: The tested value as string, e.g. `"/foo/bar"`.
* `%2$s`, `%3$s`, ...: Additional assertion-specific values, e.g. the
minimum/maximum length, allowed values, etc.
Check the source code of the assertions to find out details about the additional
available placeholders.
Installation
------------
Use [Composer] to install the package:
```
$ composer require webmozart/assert
```
Example
-------
```php
use Webmozart\Assert\Assert;
class Employee
{
public function __construct($id)
{
Assert::integer($id, 'The employee ID must be an integer. Got: %s');
Assert::greaterThan($id, 0, 'The employee ID must be a positive integer. Got: %s');
}
}
```
If you create an employee with an invalid ID, an exception is thrown:
```php
new Employee('foobar');
// => InvalidArgumentException:
// The employee ID must be an integer. Got: string
new Employee(-10);
// => InvalidArgumentException:
// The employee ID must be a positive integer. Got: -10
```
Assertions
----------
The [`Assert`] class provides the following assertions:
### Type Assertions
Method | Description
-------------------------------------------------------- | --------------------------------------------------
`string($value, $message = '')` | Check that a value is a string
`stringNotEmpty($value, $message = '')` | Check that a value is a non-empty string
`integer($value, $message = '')` | Check that a value is an integer
`integerish($value, $message = '')` | Check that a value casts to an integer
`float($value, $message = '')` | Check that a value is a float
`numeric($value, $message = '')` | Check that a value is numeric
`natural($value, $message = '')`                          | Check that a value is a non-negative integer
`boolean($value, $message = '')` | Check that a value is a boolean
`scalar($value, $message = '')` | Check that a value is a scalar
`object($value, $message = '')` | Check that a value is an object
`resource($value, $type = null, $message = '')` | Check that a value is a resource
`isCallable($value, $message = '')` | Check that a value is a callable
`isArray($value, $message = '')` | Check that a value is an array
`isTraversable($value, $message = '')` (deprecated) | Check that a value is an array or a `\Traversable`
`isIterable($value, $message = '')` | Check that a value is an array or a `\Traversable`
`isCountable($value, $message = '')` | Check that a value is an array or a `\Countable`
`isInstanceOf($value, $class, $message = '')` | Check that a value is an `instanceof` a class
`isInstanceOfAny($value, array $classes, $message = '')` | Check that a value is an `instanceof` at least one of the given classes
`notInstanceOf($value, $class, $message = '')` | Check that a value is not an `instanceof` a class
`isArrayAccessible($value, $message = '')` | Check that a value can be accessed as an array
### Comparison Assertions
Method | Description
----------------------------------------------- | --------------------------------------------------
`true($value, $message = '')` | Check that a value is `true`
`false($value, $message = '')` | Check that a value is `false`
`null($value, $message = '')` | Check that a value is `null`
`notNull($value, $message = '')` | Check that a value is not `null`
`isEmpty($value, $message = '')` | Check that a value is `empty()`
`notEmpty($value, $message = '')` | Check that a value is not `empty()`
`eq($value, $value2, $message = '')` | Check that a value equals another (`==`)
`notEq($value, $value2, $message = '')` | Check that a value does not equal another (`!=`)
`same($value, $value2, $message = '')` | Check that a value is identical to another (`===`)
`notSame($value, $value2, $message = '')` | Check that a value is not identical to another (`!==`)
`greaterThan($value, $value2, $message = '')` | Check that a value is greater than another
`greaterThanEq($value, $value2, $message = '')` | Check that a value is greater than or equal to another
`lessThan($value, $value2, $message = '')` | Check that a value is less than another
`lessThanEq($value, $value2, $message = '')` | Check that a value is less than or equal to another
`range($value, $min, $max, $message = '')` | Check that a value is within a range
`oneOf($value, array $values, $message = '')` | Check that a value is one of a list of values
### String Assertions
You should check that a value is a string with `Assert::string()` before making
any of the following assertions.
Method | Description
--------------------------------------------------- | -----------------------------------------------------------------
`contains($value, $subString, $message = '')` | Check that a string contains a substring
`notContains($value, $subString, $message = '')`    | Check that a string does not contain a substring
`startsWith($value, $prefix, $message = '')` | Check that a string has a prefix
`startsWithLetter($value, $message = '')` | Check that a string starts with a letter
`endsWith($value, $suffix, $message = '')` | Check that a string has a suffix
`regex($value, $pattern, $message = '')` | Check that a string matches a regular expression
`notRegex($value, $pattern, $message = '')` | Check that a string does not match a regular expression
`alpha($value, $message = '')` | Check that a string contains letters only
`digits($value, $message = '')` | Check that a string contains digits only
`alnum($value, $message = '')` | Check that a string contains letters and digits only
`lower($value, $message = '')` | Check that a string contains lowercase characters only
`upper($value, $message = '')` | Check that a string contains uppercase characters only
`length($value, $length, $message = '')` | Check that a string has a certain number of characters
`minLength($value, $min, $message = '')` | Check that a string has at least a certain number of characters
`maxLength($value, $max, $message = '')` | Check that a string has at most a certain number of characters
`lengthBetween($value, $min, $max, $message = '')` | Check that a string has a length in the given range
`uuid($value, $message = '')` | Check that a string is a valid UUID
`ip($value, $message = '')` | Check that a string is a valid IP (either IPv4 or IPv6)
`ipv4($value, $message = '')` | Check that a string is a valid IPv4
`ipv6($value, $message = '')` | Check that a string is a valid IPv6
`notWhitespaceOnly($value, $message = '')` | Check that a string contains at least one non-whitespace character
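As a usage sketch for the string assertions (the `$slug` value and the pattern are illustrative, not part of the library):

```php
use Webmozart\Assert\Assert;

$slug = 'my-article-42';

// Guard the type first, then validate the shape of the string
Assert::string($slug);
Assert::notWhitespaceOnly($slug);
Assert::lengthBetween($slug, 3, 64);
Assert::regex($slug, '/^[a-z0-9-]+$/');
```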
### File Assertions
Method | Description
----------------------------------- | --------------------------------------------------
`fileExists($value, $message = '')` | Check that a value is an existing path
`file($value, $message = '')` | Check that a value is an existing file
`directory($value, $message = '')` | Check that a value is an existing directory
`readable($value, $message = '')` | Check that a value is a readable path
`writable($value, $message = '')` | Check that a value is a writable path
### Object Assertions
Method | Description
----------------------------------------------------- | --------------------------------------------------
`classExists($value, $message = '')` | Check that a value is an existing class name
`subclassOf($value, $class, $message = '')` | Check that a class is a subclass of another
`interfaceExists($value, $message = '')` | Check that a value is an existing interface name
`implementsInterface($value, $class, $message = '')` | Check that a class implements an interface
`propertyExists($value, $property, $message = '')` | Check that a property exists in a class/object
`propertyNotExists($value, $property, $message = '')` | Check that a property does not exist in a class/object
`methodExists($value, $method, $message = '')` | Check that a method exists in a class/object
`methodNotExists($value, $method, $message = '')` | Check that a method does not exist in a class/object
### Array Assertions
Method | Description
-------------------------------------------------- | ------------------------------------------------------------------
`keyExists($array, $key, $message = '')` | Check that a key exists in an array
`keyNotExists($array, $key, $message = '')` | Check that a key does not exist in an array
`count($array, $number, $message = '')` | Check that an array contains a specific number of elements
`minCount($array, $min, $message = '')` | Check that an array contains at least a certain number of elements
`maxCount($array, $max, $message = '')` | Check that an array contains at most a certain number of elements
`countBetween($array, $min, $max, $message = '')` | Check that an array has a count in the given range
`isList($array, $message = '')` | Check that an array is a non-associative list
`isMap($array, $message = '')` | Check that an array is associative and has strings as keys
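A brief sketch combining a few of the array assertions (the `$config` array is illustrative):

```php
use Webmozart\Assert\Assert;

$config = ['host' => 'localhost', 'port' => 8080];

// An associative array with string keys
Assert::isMap($config);

// Required key and a sanity check on the size
Assert::keyExists($config, 'host');
Assert::countBetween($config, 1, 10);
```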
### Function Assertions
Method | Description
------------------------------------------- | -----------------------------------------------------------------------------------------------------
`throws($closure, $class, $message = '')` | Check that a function throws a certain exception. Subclasses of the exception class will be accepted.
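For example, a sketch of `throws()` in use (the closure and the exception class are illustrative):

```php
use Webmozart\Assert\Assert;

// Passes because the closure throws the expected exception type
Assert::throws(
    static function () {
        throw new \RuntimeException('boom');
    },
    \RuntimeException::class
);
```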
### Collection Assertions
All of the above assertions can be prefixed with `all*()` to test the contents
of an array or a `\Traversable`:
```php
Assert::allIsInstanceOf($employees, 'Acme\Employee');
```
### Nullable Assertions
All of the above assertions can be prefixed with `nullOr*()` to run the
assertion only if the value is not `null`:
```php
Assert::nullOrString($middleName, 'The middle name must be a string or null. Got: %s');
```
Authors
-------
* [Bernhard Schussek] a.k.a. [@webmozart]
* [The Community Contributors]
Contribute
----------
Contributions to the package are always welcome!
* Report any bugs or issues you find on the [issue tracker].
* You can grab the source code at the package's [Git repository].
License
-------
All contents of this package are licensed under the [MIT license].
[beberlei/assert]: https://github.com/beberlei/assert
[assert package]: https://github.com/beberlei/assert
[Composer]: https://getcomposer.org
[Bernhard Schussek]: http://webmozarts.com
[The Community Contributors]: https://github.com/webmozart/assert/graphs/contributors
[issue tracker]: https://github.com/webmozart/assert/issues
[Git repository]: https://github.com/webmozart/assert
[@webmozart]: https://twitter.com/webmozart
[MIT license]: LICENSE
[`Assert`]: src/Assert.php
# NZBGet - Basic Setup
--8<-- "includes/downloaders/basic-setup.md"
------
## Some Basics
| Name | Description |
|:--- |:--- |
| `${MainDir}` | Root directory for all tasks. |
| `${AppDir}` | Where NZBGet is installed. |
| `${DestDir}` | Destination directory for downloaded files. |
## PATHS

I will only explain the most important paths.
| Name | Description |
|:--- |:--- |
| `MainDir` | `/data/usenet` |
| `DestDir` | `${MainDir}` (so it will go into `/data/usenet`) |
| `InterDir` | Files are downloaded into this directory (before unpack+par2) |
| `NzbDir` | Directory for incoming nzb-files. |
| `QueueDir` | This directory is used to save download queue, history, information statistics, etc. |
| `ScriptDir` | Directory with post-processing and other scripts. |
| `LogFile` | Where your log files will be stored (Please create a log directory in your config) |
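Put together, the corresponding entries in `nzbget.conf` could look like the following sketch (the subdirectory names other than `MainDir` and `DestDir` are assumptions — adjust them to your own layout):

```ini
MainDir=/data/usenet
DestDir=${MainDir}
InterDir=${MainDir}/intermediate
NzbDir=${MainDir}/nzb
QueueDir=${MainDir}/queue
ScriptDir=${MainDir}/scripts
LogFile=${MainDir}/logs/nzbget.log
```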
## NEWS-SERVERS

| Name | Description |
|:--- |:--- |
| `Active` | Use this news server. |
| `Name` | The name is used in UI and for logging. It can be any string. |
| `Level` | Put your major download servers at level 0 and your fill servers at levels 1, 2, etc. |
| `Host` | Host name of news server. |
| `Port` | Port to connect to. |
| `Password` | Password to use for authentication. |
| `Encryption` | Encrypted server connection (TLS/SSL). (preferred to use this) |
| `Connections` | Use the lowest number of connections that still reaches your maximum download speed, plus one. |
| `Retention` | How long the articles are stored on the news server. |
## CATEGORIES

| Name | Description |
|:--- |:--- |
| `Name` | This should match what you put in Sonarr/Radarr (tv/movies/sonarr/radarr/series/films) |
| `DestDir` | `${DestDir}` Destination directory (/data/usenet/movies) |
| `Unpack` | Unpack downloaded nzb-files. |
| `Extensions` | List of extension scripts for this category. |
## INCOMING NZBS

!!! info
`AppendCategoryDir`: Create subdirectory with category-name in destination-directory.
## DOWNLOAD QUEUE

!!! caution
    `WriteBuffer`: If you're low on memory, don't set this too high.
## LOGGING

## CHECK AND REPAIR

## UNPACK

!!! info
    `DirectUnpack`: This might lower your download speed, but the overall completion time could be faster. (Disable on low-powered devices.)
## EXTENSION SCRIPTS

If you're using NZBGet extension scripts, this is where you can change their order and when they run.
------
## Recommended Sonarr/Radarr Settings
The following settings are recommended for Sonarr/Radarr; without them, Sonarr/Radarr could miss downloads that are still in the queue/history,
since Sonarr/Radarr only looks at the last xx items in the queue/history.
### Sonarr
??? example "Sonarr"
`Settings` => `Download Clients`

Make sure you check both boxes under `Completed Download Handling` at step 3.
Select NZBGet at step 4 and scroll down to the bottom of the new window where it says `Completed Download Handling` and check both boxes.

### Radarr
??? example "Radarr"
`Settings` => `Download Clients`

Make sure you check both boxes under `Completed Download Handling` at step 3,
and both boxes under `Failed Download Handling` at step 4.
--8<-- "includes/support.md"
<!--
Guidelines for a Successful README
==================================
- Name of the projects and all sub-modules and libraries (sometimes they are
named different and very confusing to new users).
- Descriptions of all the project, and all sub-modules and libraries.
- 5-line code snippet on how its used (if it's a library).
- Copyright and licensing information (or "Read LICENSE").
- Instruction to grab the documentation.
- Instructions to install, configure, and to run the programs.
- Instruction to grab the latest code and detailed instructions to build it
(or quick overview and "Read INSTALL").
- List of authors or "Read AUTHORS".
- Instructions to submit bugs, feature requests, submit patches, join
mailing list, get announcements, or join the user or dev community in
other forms.
- Other contact info (email address, website, company name, address, etc).
- A brief history if it's a replacement or a fork of something else.
- Legal notices (crypto stuff).
-->
System62
========
<abbr title="Garaging Usability Software">GUS</abbr> project by [Nosco Systems](http://nosco-systems.com),
authored by [Zander Baldwin](http://mynameis.zande.rs).
License
-------
Please see the [separate license file](LICENSE.md) included in this repository.
Authors
-------
- [Zander Baldwin](http://mynameis.zande.rs); lead technical developer, data & application.
- [Luke Scowen](https://github.com/scowen); lead front-end developer, design & interaction.
<!-- Do not edit this file. It is automatically generated by API Documenter. -->
## ValueProvider class
A provider for static values.
<b>Signature:</b>
```typescript
export declare class ValueProvider<T> extends BaseProvider<T>
```
## Constructors
| Constructor | Modifiers | Description |
| --- | --- | --- |
| [(constructor)(value)](./injector.valueprovider._constructor_.md) | | Constructs a new instance of the <code>ValueProvider</code> class |
## Remarks
The `ValueProvider` provides an already existing value and therefore has no dependencies. This is useful for providing primitive values, configuration objects or any value that does not need to be instantiated.
```typescript
const CONFIG = {
receiveMessages: true,
answerMessages: false,
channelId: 'some_id'
}
const token = new InjectToken('CONFIG');
const injector = new Injector();
injector.register(token, new ValueProvider(CONFIG));
injector.resolve(token)!; // --> { receiveMessages: true, answerMessages: false, channelId: 'some_id' }
```
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Visual Studio](#visual-studio)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
Visual Studio
===
Comment out the selection
```
Ctrl+k, Ctrl+c
```
Uncomment the selection
```
Ctrl+k, Ctrl+u
```
Switch tabs
```
Ctrl+Alt+(PageUp or PageDown)
```
Jump to the matching bracket
```
Ctrl+]
```
Jump to a specific line
```
Ctrl+G
```
# OpenShift Installation
Content from the learning link published by IBM's online lab: [Introduction to OpenShift Container Platform and offline bare-metal installation steps](https://csc.cn.ibm.com/roadmap/index/6cad9db3-bca0-45a8-abbc-c2c6fd38cb60?eventId=5c9e9c67-e55e-483a-a6bb-32f89b1bdc23)
## Installation process overview
When you install an OpenShift Container Platform cluster, you can download the installer from the appropriate Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. The site manages:
- The REST API for your account
- Registry tokens, which are the pull secrets you use to obtain the required components
- Cluster registration, which associates the cluster identity with your Red Hat account to facilitate the collection of usage metrics
In OpenShift Container Platform 4.6, the installer is a Go binary that performs a series of file transformations on a set of assets. The way you interact with the installer differs depending on the installation type.
- For clusters with installer-provisioned infrastructure, you delegate infrastructure bootstrapping and provisioning to the installer instead of doing it yourself. The installer creates all of the networks, machines, and operating systems required to support the cluster.
- If you provision and manage the infrastructure for the cluster yourself, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and the individual cluster machines.
During installation, you work with three sets of files: the installation configuration file named install-config.yaml, Kubernetes manifests, and Ignition config files for your machine types.
The installation configuration file is transformed into Kubernetes manifests, and the manifests are then wrapped into Ignition config files. The installer uses these Ignition config files to create the cluster. All of the configuration files are deleted when you run the installer, so be sure to back up any configuration files you want to use again. When you provision your own infrastructure, you must provide:
- The infrastructure for the control plane and compute machines that make up the cluster
- Load balancers
- Cluster networking, including the DNS records and required subnets
- Storage for the cluster infrastructure and applications
## Installation process details
Because each machine in the cluster requires information about the cluster when it is provisioned, OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the master machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process:

After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for the cluster yourself, you must complete many of the steps manually. Bootstrapping a cluster involves the following steps:
- The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot. (Requires manual intervention if you provision the infrastructure)
- The master machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)
- The master machines use the bootstrap machine to form an etcd cluster.
- The bootstrap machine starts a temporary Kubernetes control plane by using the new etcd cluster.
- The temporary control plane schedules the production control plane to the master machines.
- The temporary control plane shuts down and passes control to the production control plane.
- The bootstrap machine injects OpenShift Container Platform components into the production control plane.
- The installer shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)
- The control plane sets up the worker nodes.
- The control plane installs additional services in the form of a set of Operators.
The result of this bootstrapping process is a fully running OpenShift Container Platform cluster. The cluster then downloads and configures the remaining components needed for day-to-day operation, including the creation of worker machines in supported environments.
### Openshift Topology

### Offline bare-metal installation environment requirements
Installing a minimal OpenShift Container Platform cluster requires the following hosts:
- 1 bootstrap machine
- 3 control plane, or master, machines
- 2 or more compute nodes
- 1 bastion
The details are shown in the following table:
Type|Count|Operating System|vCPU|Memory|Disk
:---:|:---:|:---:|:---:|:---:|:---:
Bootstrap Node|1|RHCOS|4|16GB|120GB
Master Nodes|3|RHCOS|4|16GB|120GB
Worker Nodes|2|RHCOS or RHEL 7.6+|2|8GB|120GB
Bastion Node|1|RHEL|2|4GB|25GB
Roles of the bastion host:
- DNS and DHCP server
- Private Docker registry
- Load-balancing server
- Web server
- Client for running oc commands
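To illustrate the load-balancing role of the bastion, a minimal `haproxy.cfg` sketch for the API and machine-config endpoints might look like the following (all hostnames and the `ocp4.example.com` domain are assumptions; 6443 and 22623 are the standard OpenShift API and machine-config ports):

```
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api

backend openshift-api
    mode tcp
    balance roundrobin
    server bootstrap bootstrap.ocp4.example.com:6443 check
    server master0   master0.ocp4.example.com:6443 check
    server master1   master1.ocp4.example.com:6443 check
    server master2   master2.ocp4.example.com:6443 check

frontend machine-config
    bind *:22623
    mode tcp
    default_backend machine-config

backend machine-config
    mode tcp
    server bootstrap bootstrap.ocp4.example.com:22623 check
    server master0   master0.ocp4.example.com:22623 check
    server master1   master1.ocp4.example.com:22623 check
    server master2   master2.ocp4.example.com:22623 check
```

Similar frontends for ports 80 and 443 would route ingress traffic to the worker nodes.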
### OpenShift offline bare-metal installation steps
The steps are as follows:
- Prepare the bastion host:
    - DNS & DHCP: dnsmasq
    - Web server: Nginx
    - LB server: haproxy
    - Mirror registry: a Docker registry run with podman
    - The openshift-install and oc command-line tools
- Create the virtual machines (create according to your needs):
    - 1 bootstrap
    - 3 master
    - 2 worker
    - 1 bastion
- Generate the Ignition files: the Ignition files generated with openshift-install
- Start the installation: the automated installation starts from the operating system boot
- Monitor the installation to completion: watch the installation progress, approve certificates, and fix any failed Operators
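The Ignition-file generation and monitoring steps above are typically driven from the bastion with commands like the following sketch (the `ocp4` working directory is an assumption):

```sh
# Prepare install-config.yaml in ./ocp4 first, then:
openshift-install create manifests --dir=ocp4
openshift-install create ignition-configs --dir=ocp4

# After booting the machines, watch the bootstrap phase:
openshift-install wait-for bootstrap-complete --dir=ocp4 --log-level=info
```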
### Deploying an installer-provisioned cluster on bare metal
The content above is taken from the IBM online lab tutorial. The original version was built on the official Red Hat manuals "Deploying installer-provisioned clusters on bare metal" and "Installing on vSphere", combined with the configuration experience of IBM lab colleagues installing OpenShift Container Platform on vSphere in bare-metal fashion. It simplifies the installation steps as much as possible, shields you from the various configuration errors that could cause the installation to fail, and lets you experience the OpenShift Container Platform installation process in a way that is close to production cluster deployment practice.
Related links:
Deploying installer-provisioned clusters on bare metal: [Deploying installer-provisioned clusters on bare metal](https://access.redhat.com/documentation/zh-cn/openshift_container_platform/4.6/html/deploying_installer-provisioned_clusters_on_bare_metal/index?_ga=2.241581835.845499386.1613305155-1408166508.1612514124)
Installing on vSphere: [Installing an OpenShift Container Platform vSphere cluster](https://access.redhat.com/documentation/zh-cn/openshift_container_platform/4.6/html/installing_on_vsphere/index?_ga=2.44110765.845499386.1613305155-1408166508.1612514124)
IBM hands-on lab: [OpenShift Container Platform (OCP) bare metal installation](https://csc.cn.ibm.com/src/index/425ebe3f-163d-4bd2-8ffd-a3770f9f5162?rtype=experiment&roadmapId=6cad9db3-bca0-45a8-abbc-c2c6fd38cb60&eventId=5c9e9c67-e55e-483a-a6bb-32f89b1bdc23)
---
title: O3DE Gem API reference
description: Reference index for Gems shipped with Open 3D Engine.
---
{{< preview-new >}}
* [AWSClientAuth](/docs/api/gems/AWSClientAuth)
* [AWSCore](/docs/api/gems/AWSCore)
* [AWSMetrics](/docs/api/gems/AWSMetrics)
* [Achievements](/docs/api/gems/Achievements)
* [AssetMemoryAnalyzer](/docs/api/gems/AssetMemoryAnalyzer)
* [AssetValidation](/docs/api/gems/AssetValidation)
* [Atom](/docs/api/gems/Atom)
* [AtomContent](/docs/api/gems/AtomContent)
* [AtomLyIntegration](/docs/api/gems/AtomLyIntegration)
* [AtomTressFX](/docs/api/gems/AtomTressFX)
* [AudioEngineWwise](/docs/api/gems/AudioEngineWwise)
* [AudioSystem](/docs/api/gems/AudioSystem)
* [AutomatedLauncherTesting](/docs/api/gems/AutomatedLauncherTesting)
* [Blast](/docs/api/gems/Blast)
* [Camera](/docs/api/gems/Camera)
* [CameraFramework](/docs/api/gems/CameraFramework)
* [CertificateManager](/docs/api/gems/CertificateManager)
* [CrashReporting](/docs/api/gems/CrashReporting)
* [CustomAssetExample](/docs/api/gems/CustomAssetExample)
* [DebugDraw](/docs/api/gems/DebugDraw)
* [DevTextures](/docs/api/gems/DevTextures)
* [EMotionFX](/docs/api/gems/EMotionFX)
* [EditorPythonBindings](/docs/api/gems/EditorPythonBindings)
* [ExpressionEvaluation](/docs/api/gems/ExpressionEvaluation)
* [FastNoise](/docs/api/gems/FastNoise)
* [GameState](/docs/api/gems/GameState)
* [GameStateSamples](/docs/api/gems/GameStateSamples)
* [Gestures](/docs/api/gems/Gestures)
* [GradientSignal](/docs/api/gems/GradientSignal)
* [GraphCanvas](/docs/api/gems/GraphCanvas)
* [GraphModel](/docs/api/gems/GraphModel)
* [HttpRequestor](/docs/api/gems/HttpRequestor)
* [ImGui](/docs/api/gems/ImGui)
* [InAppPurchases](/docs/api/gems/InAppPurchases)
* [LandscapeCanvas](/docs/api/gems/LandscapeCanvas)
* [LmbrCentral](/docs/api/gems/LmbrCentral)
* [LocalUser](/docs/api/gems/LocalUser)
* [LyShine](/docs/api/gems/LyShine)
* [LyShineExamples](/docs/api/gems/LyShineExamples)
* [Maestro](/docs/api/gems/Maestro)
* [MessagePopup](/docs/api/gems/MessagePopup)
* [Metastream](/docs/api/gems/Metastream)
* [Microphone](/docs/api/gems/Microphone)
* [Multiplayer](/docs/api/gems/Multiplayer)
* [MultiplayerCompression](/docs/api/gems/MultiplayerCompression)
* [NvCloth](/docs/api/gems/NvCloth)
* [PBSreferenceMaterials](/docs/api/gems/PBSreferenceMaterials)
* [PhysX](/docs/api/gems/PhysX)
* [PhysXDebug](/docs/api/gems/PhysXDebug)
* [PhysXSamples](/docs/api/gems/PhysXSamples)
* [Prefab](/docs/api/gems/Prefab)
* [Presence](/docs/api/gems/Presence)
* [PrimitiveAssets](/docs/api/gems/PrimitiveAssets)
* [PythonAssetBuilder](/docs/api/gems/PythonAssetBuilder)
* [QtForPython](/docs/api/gems/QtForPython)
* [RADTelemetry](/docs/api/gems/RADTelemetry)
* [SaveData](/docs/api/gems/SaveData)
* [SceneLoggingExample](/docs/api/gems/SceneLoggingExample)
* [SceneProcessing](/docs/api/gems/SceneProcessing)
* [ScriptCanvas](/docs/api/gems/ScriptCanvas)
* [ScriptCanvasDeveloper](/docs/api/gems/ScriptCanvasDeveloper)
* [ScriptCanvasPhysics](/docs/api/gems/ScriptCanvasPhysics)
* [ScriptCanvasTesting](/docs/api/gems/ScriptCanvasTesting)
* [ScriptEvents](/docs/api/gems/ScriptEvents)
* [ScriptedEntityTweener](/docs/api/gems/ScriptedEntityTweener)
* [SliceFavorites](/docs/api/gems/SliceFavorites)
* [StartingPointCamera](/docs/api/gems/StartingPointCamera)
* [StartingPointInput](/docs/api/gems/StartingPointInput)
* [StartingPointMovement](/docs/api/gems/StartingPointMovement)
* [SurfaceData](/docs/api/gems/SurfaceData)
* [TestAssetBuilder](/docs/api/gems/TestAssetBuilder)
* [TextureAtlas](/docs/api/gems/TextureAtlas)
* [TickBusOrderViewer](/docs/api/gems/TickBusOrderViewer)
* [Twitch](/docs/api/gems/Twitch)
* [UiBasics](/docs/api/gems/UiBasics)
* [Vegetation](/docs/api/gems/Vegetation)
* [VideoPlaybackFramework](/docs/api/gems/VideoPlaybackFramework)
* [VirtualGamepad](/docs/api/gems/VirtualGamepad)
* [WhiteBox](/docs/api/gems/WhiteBox)
---
title: Backing up drive manifests for Azure Import/Export | Microsoft Docs
description: Learn how to have the drive manifests of the Microsoft Azure Import/Export service backed up automatically.
author: muralikk
services: storage
ms.service: storage
ms.topic: article
ms.date: 01/23/2017
ms.author: muralikk
ms.component: common
ms.openlocfilehash: 933c0121a4f718ff812fc921bd6e04983fc69931
ms.sourcegitcommit: 9819e9782be4a943534829d5b77cf60dea4290a2
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/06/2018
ms.locfileid: "39520544"
---
# <a name="backing-up-drive-manifests-for-azure-importexport-jobs"></a>Backing up drive manifests for Azure Import/Export jobs
You can have drive manifests automatically backed up to blobs by setting the `BackupDriveManifest` property to `true` in the [Put Job](/rest/api/storageimportexport/jobs#Jobs_CreateOrUpdate) or [Update Job Properties](/rest/api/storageimportexport/jobs#Jobs_Update) REST API operations. By default, drive manifests are not backed up. The drive manifest backups are stored as block blobs in a container within the storage account associated with the job. By default, the container name is `waimportexport`, but you can specify a different name in the `DiagnosticsPath` property when calling the `Put Job` or `Update Job Properties` operations. The backup manifest blob names have the following format: `waies/jobname_driveid_timestamp_manifest.xml`.
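As a sketch, the relevant fragment of a `Put Job` request body would set the property like this (the surrounding schema is abbreviated here; consult the REST reference for the full request body):

```json
{
  "properties": {
    "backupDriveManifest": true,
    "diagnosticsPath": "waimportexport"
  }
}
```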
You can retrieve the URI of the backup drive manifests for a job by calling the [Get Job](/rest/api/storageimportexport/jobs#Jobs_Get) operation. The blob URI is returned in the `ManifestUri` property for each drive.
## <a name="next-steps"></a>Next steps
* [Using the Azure Import/Export service REST API](storage-import-export-using-the-rest-api.md)
---
layout: docs
title: "Foldable"
section: "typeclasses"
source: "core/src/main/scala/cats/Foldable.scala"
scaladoc: "#cats.Foldable"
---
# Foldable
Foldable type class instances can be defined for data structures that can be
folded to a summary value.
In the case of a collection (such as `List` or `Vector`), these methods will fold
together (combine) the values contained in the collection to produce a single
result. Most collection types have `foldLeft` methods, which will usually be
used by the associated `Foldable[_]` instance.
`Foldable[F]` is implemented in terms of two basic methods:
- `foldLeft(fa, b)(f)` eagerly performs a left-associative fold over `fa`.
- `foldRight(fa, b)(f)` lazily performs a right-associative fold over `fa`.
Consider a simple list like `List(1, 2, 3)`. You could sum the numbers of this list using folds
where `0` is the starting value (`b`) and integer addition (`+`) is the combination operation
(`f`). Since `foldLeft` is left-associative, the execution of this fold would look something like
`((0 + 1) + 2) + 3`. The execution of a similar `foldRight`-based solution would look something
like `0 + (1 + (2 + 3))`. In this case, since integer addition is associative, both approaches will
yield the same result. However, for non-associative operations, the two methods can produce
different results.
These form the basis for many other operations, see also:
[A tutorial on the universality and expressiveness of fold](http://www.cs.nott.ac.uk/~gmh/fold.pdf)
First some standard imports.
```tut:silent
import cats._
import cats.implicits._
```
And examples.
```tut:book
Foldable[List].fold(List("a", "b", "c"))
Foldable[List].foldMap(List(1, 2, 4))(_.toString)
Foldable[List].foldK(List(List(1,2,3), List(2,3,4)))
Foldable[List].reduceLeftToOption(List[Int]())(_.toString)((s,i) => s + i)
Foldable[List].reduceLeftToOption(List(1,2,3,4))(_.toString)((s,i) => s + i)
Foldable[List].reduceRightToOption(List(1,2,3,4))(_.toString)((i,s) => Later(s.value + i)).value
Foldable[List].reduceRightToOption(List[Int]())(_.toString)((i,s) => Later(s.value + i)).value
Foldable[List].find(List(1,2,3))(_ > 2)
Foldable[List].exists(List(1,2,3))(_ > 2)
Foldable[List].forall(List(1,2,3))(_ > 2)
Foldable[List].forall(List(1,2,3))(_ < 4)
Foldable[Vector].filter_(Vector(1,2,3))(_ < 3)
Foldable[List].isEmpty(List(1,2))
Foldable[Option].isEmpty(None)
Foldable[List].nonEmpty(List(1,2))
Foldable[Option].toList(Option(1))
Foldable[Option].toList(None)
def parseInt(s: String): Option[Int] = scala.util.Try(Integer.parseInt(s)).toOption
Foldable[List].traverse_(List("1", "2"))(parseInt)
Foldable[List].traverse_(List("1", "A"))(parseInt)
Foldable[List].sequence_(List(Option(1), Option(2)))
Foldable[List].sequence_(List(Option(1), None))
Foldable[List].forallM(List(1, 2, 3))(i => if (i < 2) Some(i % 2 == 0) else None)
Foldable[List].existsM(List(1, 2, 3))(i => if (i < 2) Some(i % 2 == 0) else None)
Foldable[List].existsM(List(1, 2, 3))(i => if (i < 3) Some(i % 2 == 0) else None)
val prints: Eval[Unit] = List(Eval.always(println(1)), Eval.always(println(2))).sequence_
prints.value
Foldable[List].dropWhile_(List[Int](2,4,5,6,7))(_ % 2 == 0)
Foldable[List].dropWhile_(List[Int](1,2,4,5,6,7))(_ % 2 == 0)
import cats.data.Nested
val listOption0 = Nested(List(Option(1), Option(2), Option(3)))
val listOption1 = Nested(List(Option(1), Option(2), None))
Foldable[Nested[List, Option, *]].fold(listOption0)
Foldable[Nested[List, Option, *]].fold(listOption1)
```
Hence when defining some new data structure, if we can define a `foldLeft` and
`foldRight` we are able to provide many other useful operations, if not always
the most efficient implementations, over the structure without further
implementation.
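As a sketch of that idea, here is a `Foldable` instance for a minimal binary tree (the `Tree` type is made up for this example and is not part of cats):

```scala
import cats.{Eval, Foldable}
import cats.implicits._

sealed trait Tree[A]
final case class Leaf[A](value: A) extends Tree[A]
final case class Branch[A](left: Tree[A], right: Tree[A]) extends Tree[A]

implicit val treeFoldable: Foldable[Tree] = new Foldable[Tree] {
  def foldLeft[A, B](fa: Tree[A], b: B)(f: (B, A) => B): B = fa match {
    case Leaf(a)      => f(b, a)
    case Branch(l, r) => foldLeft(r, foldLeft(l, b)(f))(f)
  }
  def foldRight[A, B](fa: Tree[A], lb: Eval[B])(f: (A, Eval[B]) => Eval[B]): Eval[B] =
    fa match {
      case Leaf(a)      => f(a, lb)
      case Branch(l, r) => foldRight(l, Eval.defer(foldRight(r, lb)(f)))(f)
    }
}

// With just these two methods, the other combinators come for free:
Foldable[Tree].fold(Branch(Leaf(1), Branch(Leaf(2), Leaf(3)))) // 6
```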
-------------------------------------------------------------------------------
Note that, in order to support laziness, the signature of `Foldable`'s
`foldRight` is
```scala
def foldRight[A, B](fa: F[A], lb: Eval[B])(f: (A, Eval[B]) => Eval[B]): Eval[B]
```
as opposed to
```scala
def foldRight[A, B](fa: F[A], z: B)(f: (A, B) => B): B
```
which someone familiar with the `foldRight` from the collections in
Scala's standard library might expect. This prevents operations that
are lazy in their right-hand argument from traversing the entire
structure unnecessarily. For example, if you have:
```tut:book
val allFalse = Stream.continually(false)
```
which is an infinite stream of `false` values. Suppose you wanted to
reduce this to a single `false` value using the logical and (`&&`). You
intuitively know that the result of this operation should be
`false`. It is not necessary to consider the entire stream in order to
determine this result, you only need to consider the first
value. Using `foldRight` from the standard library *will* try to
consider the entire stream, and thus will eventually cause a stack
overflow:
```tut:book
try {
allFalse.foldRight(true)(_ && _)
} catch {
case e:StackOverflowError => println(e)
}
```
With the lazy `foldRight` on `Foldable`, the calculation terminates
after looking at only one value:
```tut:book
Foldable[Stream].foldRight(allFalse, Eval.True)((a,b) => if (a) b else Eval.now(false)).value
```
"Apache-2.0"
] | null | null | null | README.md | johbossle/java-local-development-tool | 99915098dc18e613e7a9c1765ff4eb6b59790194 | [
"Apache-2.0"
] | null | null | null | # Java Local Development Tool
This is a simple container image to get one container running
- keycloak
- mongodb
- kafka
The running container is intended for easing up local development of java stacks.
## Building the container image
```sh
docker build -t java-ld-tool:latest .
```
## Starting an ephemeral container
```sh
docker run -p 2181:2181 -p 9092:9092 -p 27017:27017 -p 9090:8080 --rm java-ld-tool:latest
```
## Starting a named container
```sh
docker run -p 2181:2181 -p 9092:9092 -p 27017:27017 -p 9090:8080 --name java-ld-tool java-ld-tool:latest
```
## The services of this container
- mongodb running at port 27001 without username/password (connection string: `"mongodb://localhost:27017/?"`)
- keycloak oidc provider
- admin console: <http://localhost:9090> (username: `admin`, password: `admin`)
- realm `local` with a client called `local-debugging-app`; issuer: `http://localhost:9090/auth/realms/local`
  - user within the realm (username: `admin`, password: `admin`)
- kafka with one broker available at `localhost:9092` (no security)
## Example configuration
```yaml
spring:
profiles: local
# MongoDB configuration from local configuration file
spring.data.mongodb.uri: "mongodb://localhost:27017/?"
# Topic Binding(s) from local configuration file
de.knowis.cp.binding.topic:
topicBindings:
BINDINGNAME:
topicName: TOPICNAME
kafkaBinding:
kafka_brokers_sasl: "localhost:9092"
# Spring Security OAuth2 config
spring.security:
oauth2:
client:
registration:
default:
client-id: "local-debugging-app"
provider:
default:
issuer-uri: "http://localhost:9090/auth/realms/local"
```
## Tips for dealing with kafka
### Create a topic
within the running container
```sh
cd /opt/kafka_2.13-2.6.0/bin
./kafka-topics.sh --topic trades --create --zookeeper localhost --partitions 1 --replication-factor 1
```
### Send a message
```sh
cd /opt/kafka_2.13-2.6.0/bin
./kafka-console-producer.sh --broker-list localhost:9092 --topic trades --property parse.key=true --property key.separator=":"
```
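With `parse.key=true` and `key.separator=":"`, the console producer treats the text before the first colon of each typed line as the record key and the rest as the value. A quick stand-alone shell illustration of that split (the record content is made up):

```sh
record='trade-42:{"side":"buy","qty":100}'
key="${record%%:*}"    # text before the first colon -> message key
value="${record#*:}"   # text after the first colon  -> message value
echo "key=$key"
echo "value=$value"
```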
## About the image
The image is based on available container images. Namely:
- johnnypark/kafka-zookeeper:2.6.0 (<https://github.com/hey-johnnypark/docker-kafka-zookeeper>)
- jboss/keycloak:13.0.1 (<https://github.com/keycloak/keycloak-containers>)
It also contains MongoDB 4.0.5 Community from the Alpine Linux community repository <http://dl-cdn.alpinelinux.org/alpine/v3.9/community>.
The licenses of the used images and of the programs they contain remain untouched.
# EfficientDet: Scalable and Efficient Object Detection, in PyTorch
A [PyTorch](http://pytorch.org/) implementation of [EfficientDet](https://arxiv.org/abs/1911.09070) from the 2019 paper by Mingxing Tan, Ruoming Pang and Quoc V. Le,
Google Research, Brain Team. The official and original implementation: coming soon.
<img src= "./docs/arch.png"/>
# Fun with Demo:
```Shell
python demo.py --weight ./checkpoint_VOC_efficientdet-d1_97.pth --threshold 0.6 --iou_threshold 0.5 --cam --score
```
<p align="center">
<img src="docs/pytoan.gif">
</p>
### Table of Contents
- <a href='#recent-update'>Recent Update</a>
- <a href='#benchmarking'>Benchmarking</a>
- <a href='#installation'>Installation</a>
- <a href='#prerequisites'>Prerequisites</a>
- <a href='#datasets'>Datasets</a>
- <a href='#training-efficientdet'>Train</a>
- <a href='#evaluation'>Evaluate</a>
- <a href='#performance'>Performance</a>
- <a href='#demo'>Demo</a>
- <a href='#todo'>Future Work</a>
- <a href='#references'>Reference</a>
## Recent Update
- [06/01/2020] Support both DistributedDataParallel and DataParallel, change augmentation, eval_voc
- [17/12/2019] Add Fast normalized fusion, Augmentation with Ratio, Change RetinaHead, Fix Support EfficientDet-D0->D7
- [7/12/2019] Support EfficientDet-D0, EfficientDet-D1, EfficientDet-D2, EfficientDet-D3, EfficientDet-D4,... . Support change gradient accumulation steps, AdamW.
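The "Fast normalized fusion" noted above (from the EfficientDet/BiFPN paper) fuses feature maps with learnable, ReLU-clamped weights normalized by their sum rather than a softmax. A minimal pure-Python sketch of the idea — the function name and the flat-list "feature maps" are ours, not this repo's API:

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """Weighted fusion: O = sum(w_i * F_i) / (eps + sum(w_i)), with w_i >= 0."""
    w = [max(wi, 0.0) for wi in weights]   # ReLU keeps each weight non-negative
    total = eps + sum(w)
    w = [wi / total for wi in w]           # fast normalization (no softmax)
    return [sum(wi * f[i] for wi, f in zip(w, features))
            for i in range(len(features[0]))]

# two toy 1-D "feature maps" fused with equal weights -> element-wise average
fused = fast_normalized_fusion([[1.0, 1.0], [3.0, 5.0]], [1.0, 1.0])
print(fused)  # ≈ [2.0, 3.0]
```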
## Benchmarking
We benchmark our code thoroughly on the PASCAL VOC and COCO datasets, using the EfficientDet family of network architectures (EfficientDet-D0 through D7). Below are the results:
1). PASCAL VOC 2007 (Train/Test: 07trainval/07test, scale=600, ROI Align)
model | mAP |
---------|--------|
[EfficientDet-D0 (with weights)](https://drive.google.com/file/d/1r7MAyBfG5OK_9F_cU8yActUWxTHOuOpL/view?usp=sharing) | 62.16
## Installation
- Install [PyTorch](http://pytorch.org/) by selecting your environment on the website and running the appropriate command.
- Clone this repository and install package [prerequisites](#prerequisites) below.
- Then download the dataset by following the [instructions](#datasets) below.
- Note: For training, we currently support [VOC](http://host.robots.ox.ac.uk/pascal/VOC/) and [COCO](http://mscoco.org/), and aim to add [ImageNet](http://www.image-net.org/) support soon.
### prerequisites
* Python 3.6+
* PyTorch 1.3+
* Torchvision 0.4.0+ (**we need a recent version because Torchvision now supports nms**)
* requirements.txt
## Datasets
To make things easy, we provide bash scripts to handle the dataset downloads and setup for you. We also provide simple dataset loaders that inherit `torch.utils.data.Dataset`, making them fully compatible with the `torchvision.datasets` [API](http://pytorch.org/docs/torchvision/datasets.html).
### VOC Dataset
PASCAL VOC: Visual Object Classes
##### Download VOC2007 + VOC2012 trainval & test
```Shell
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh datasets/scripts/VOC2007.sh
sh datasets/scripts/VOC2012.sh
```
### COCO
Microsoft COCO: Common Objects in Context
##### Download COCO 2017
```Shell
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh datasets/scripts/COCO2017.sh
```
## Training EfficientDet
- To train EfficientDet using the train script simply specify the parameters listed in `train.py` as a flag or manually change them.
```Shell
python train.py --network efficientdet-d0 # Example
```
- With VOC Dataset:
```Shell
# DataParallel
python train.py --dataset VOC --dataset_root /root/data/VOCdevkit/ --network efficientdet-d0 --batch_size 32
# DistributedDataParallel with backend nccl
python train.py --dataset VOC --dataset_root /root/data/VOCdevkit/ --network efficientdet-d0 --batch_size 32 --multiprocessing-distributed
```
- With COCO Dataset:
```Shell
# DataParallel
python train.py --dataset COCO --dataset_root ~/data/coco/ --network efficientdet-d0 --batch_size 32
# DistributedDataParallel with backend nccl
python train.py --dataset COCO --dataset_root ~/data/coco/ --network efficientdet-d0 --batch_size 32 --multiprocessing-distributed
```
## Evaluation
To evaluate a trained network:
- With VOC Dataset:
```Shell
python eval_voc.py --dataset_root ~/data/VOCdevkit --weight ./checkpoint_VOC_efficientdet-d0_261.pth
```
- With COCO Dataset:
coming soon.
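The `--iou_threshold` flags used for evaluation and the demo are intersection-over-union cutoffs between boxes. A minimal sketch of the IoU computation (the helper name is ours; boxes are `(x1, y1, x2, y2)`):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.1429
```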
## Demo
```Shell
python demo.py --threshold 0.5 --iou_threshold 0.5 --score --weight checkpoint_VOC_efficientdet-d1_34.pth --file_name demo.png
```
Output:
<p align="center">
<img src= "./docs/demo.png">
</p>
## Webcam Demo
You can use a webcam in a real-time demo by running:
```Shell
python demo.py --threshold 0.5 --iou_threshold 0.5 --cam --score --weight checkpoint_VOC_efficientdet-d1_34.pth
```
## Performance
<img src= "./docs/compare.png"/>
## Docker
```
docker build -t effnet -f docker/Dockerfile .
# mount src and data
docker run -i --rm -v "$(pwd):/app" -v ~/data/coco:/data/coco --runtime=nvidia --name effnettrain_test effnet python train.py --dataset COCO --dataset_root /data/coco/ --network efficientdet-d0 --batch_size 32
```
## TODO
We have accumulated the following to-do list, which we hope to complete in the near future:
- Still to come:
* [x] EfficientDet-[D0-7]
* [x] GPU-Parallel
* [x] NMS
* [ ] Soft-NMS
* [x] Pretrained model
* [x] Demo
* [ ] Model zoo
* [ ] TorchScript
* [ ] Mobile
* [ ] C++ Onnx
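The NMS item above is provided in the repo through the torchvision op listed under References; the greedy algorithm itself can be sketched in pure Python (names and the inlined IoU helper are ours):

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Keep indices of boxes that survive greedy NMS (highest score first)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best, order = order[0], order[1:]
        keep.append(best)
        # drop remaining boxes that overlap the kept box too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(0, 0, 2, 2), (0, 0, 2, 1.9), (5, 5, 6, 6)]
print(nms(boxes, scores=[0.9, 0.8, 0.7]))  # [0, 2]: the overlapping box is dropped
```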
## Authors
* [**Toan Dao Minh**](https://github.com/toandaominh1997)
***Note:*** Unfortunately, this is just a hobby of ours and not a full-time job, so we'll do our best to keep things up to date, but no guarantees. That being said, thanks to everyone for your continued help and feedback as it is really appreciated. We will try to address everything as soon as possible.
## References
- tanmingxing, rpang, qvl, et al. "EfficientDet: Scalable and Efficient Object Detection." [EfficientDet](https://arxiv.org/abs/1911.09070).
- A list of other great EfficientDet ports that were sources of inspiration:
* [EfficientNet](https://github.com/lukemelas/EfficientNet-PyTorch)
* [SSD.Pytorch](https://github.com/amdegroot/ssd.pytorch)
* [mmdetection](https://github.com/open-mmlab/mmdetection)
* [RetinaNet.Pytorch](https://github.com/yhenon/pytorch-retinanet)
* [NMS.Torchvision](https://pytorch.org/docs/stable/torchvision/ops.html)
## Citation
@article{efficientdetpytoan,
Author = {Toan Dao Minh},
Title = {A Pytorch Implementation of EfficientDet Object Detection},
Journal = {github.com/toandaominh1997/EfficientDet.Pytorch},
Year = {2019}
}
---
title: TargetType Element (ASSL) | Microsoft Docs
ms.custom: ''
ms.date: 04/27/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- analysis-services
- docset-sql-devref
ms.topic: reference
api_name:
- TargetType Element
api_location:
- http://schemas.microsoft.com/analysisservices/2003/engine
topic_type:
- apiref
f1_keywords:
- TargetType
helpviewer_keywords:
- TargetType element
ms.assetid: 2c69ea6e-2af7-435b-9841-86117d5554a7
author: minewiskan
ms.author: owend
manager: craigg
ms.openlocfilehash: ed0ca768f075b7cd4249c9d8b021e2ec10d54c46
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/02/2018
ms.locfileid: "48099565"
---
# <a name="targettype-element-assl"></a>TargetType Element (ASSL)
Identifies the type of the item identified in the [Target](target-element-assl.md) element.
## <a name="syntax"></a>Syntax
```xml
<Action>
...
<TargetType>...</TargetType>
...
</Action>
```
## <a name="element-characteristics"></a>Element Characteristics
|Characteristic|Description|
|--------------------|-----------------|
|Data type and length|String (enumeration)|
|Default value|None|
|Cardinality|1-1: Required element that occurs once and only once.|
## <a name="element-relationships"></a>Element Relationships
|Relationship|Element|
|------------------|-------------|
|Parent element|[Action](../objects/action-element-assl.md)|
|Child elements|None|
## <a name="remarks"></a>Remarks
The value of this element is limited to one of the strings listed in the following table.
|Value|Description|
|-----------|-----------------|
|*Cube*|The target of the action is a cube.|
|*Cells*|The target of the action is a subcube.|
|*Set*|The target of the action is a set.|
|*Hierarchy*|The target of the action is a hierarchy.|
|*Level*|The target of the action is a level.|
|*DimensionMembers*|The target of the action is a member of a dimension.|
|*HierarchyMembers*|The target of the action is a member of a hierarchy.|
|*LevelMembers*|The target of the action is a member of a level.|
|*AttributeMembers*|The target of the action is a member of an attribute.|
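For illustration, an action whose target is the cells of a subcube would set the element as follows (the surrounding `ID`, `Name` and `Target` values are invented for this example, not taken from a real definition):

```xml
<Action>
  <ID>SampleDrillthrough</ID>
  <Name>Sample Drillthrough</Name>
  <TargetType>Cells</TargetType>
  <Target>[Measures].[Sales Amount]</Target>
</Action>
```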
The enumeration that corresponds to the allowed values for `TargetType` in the Analysis Management Objects (AMO) object model is <xref:Microsoft.AnalysisServices.ActionTargetType>.
The element that corresponds to the parent of `TargetType` in the Analysis Management Objects (AMO) object model is <xref:Microsoft.AnalysisServices.Action>.
## <a name="see-also"></a>See Also
[Properties (ASSL)](properties-assl.md)
---
layout: base
title: 'Statistics of INTJ in UD_Czech-PDT'
udver: '2'
---
## Treebank Statistics: UD_Czech-PDT: POS Tags: `INTJ`
There are 53 `INTJ` lemmas (0%), 53 `INTJ` types (0%) and 113 `INTJ` tokens (0%).
Out of 17 observed tags, the rank of `INTJ` is: 10 in number of lemmas, 12 in number of types and 16 in number of tokens.
The 10 most frequent `INTJ` lemmas: <em>pa, ach, pink, hle, inu, proboha, což, fajn, haló, ó</em>
The 10 most frequent `INTJ` types: <em>PA, Pink, ach, Inu, hle, proboha, Haló, což, fajn, Ó</em>
The 10 most frequent ambiguous lemmas: <em>pa</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 20, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 2, <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 1), <em>což</em> (<tt><a href="cs_pdt-pos-PRON.html">PRON</a></tt> 748, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 3, <tt><a href="cs_pdt-pos-PART.html">PART</a></tt> 1), <em>běda</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 2), <em>ej</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 2), <em>o</em> (<tt><a href="cs_pdt-pos-ADP.html">ADP</a></tt> 10328, <tt><a href="cs_pdt-pos-PUNCT.html">PUNCT</a></tt> 100, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 10, <tt><a href="cs_pdt-pos-ADJ.html">ADJ</a></tt> 3, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2), <em>ta</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 1), <em>šup</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 1), <em>cup</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1, <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 1), <em>hm</em> (<tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 5, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1), <em>pánbůh</em> (<tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 4, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1)
The 10 most frequent ambiguous types: <em>PA</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 20, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 2), <em>Pink</em> (<tt><a href="cs_pdt-pos-ADJ.html">ADJ</a></tt> 49, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 6), <em>což</em> (<tt><a href="cs_pdt-pos-PRON.html">PRON</a></tt> 631, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2), <em>O</em> (<tt><a href="cs_pdt-pos-ADP.html">ADP</a></tt> 659, <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 62, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 22, <tt><a href="cs_pdt-pos-ADJ.html">ADJ</a></tt> 5, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2, <tt><a href="cs_pdt-pos-PUNCT.html">PUNCT</a></tt> 1), <em>ta</em> (<tt><a href="cs_pdt-pos-DET.html">DET</a></tt> 155, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 1), <em>Pánbůh</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1, <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 1), <em>Ra</em> (<tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1, <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 1), <em>hm</em> (<tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 5, <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1)
* <em>PA</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 20: <em>Nechť přímka A A půlí úhel přímek <b>PA</b> , a .</em>
* <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 2: <em>Přímka <b>PA</b> svírá s přímkou P P úhel menší , než je úhel souběžnosti bodu P s přímkou p , a protne tedy přímku p v bodě B .</em>
* <em>Pink</em>
* <tt><a href="cs_pdt-pos-ADJ.html">ADJ</a></tt> 49: <em>Zvláštním vlakem na <b>Pink</b> Floyd</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 6: <em><b>Pink</b> Floyd kolem nás</em>
* <em>což</em>
* <tt><a href="cs_pdt-pos-PRON.html">PRON</a></tt> 631: <em>Vědí , že by byli považováni za arogantní , <b>což</b> by mohla být pravda .</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2: <em>Z naivních přání , která jsme v listopadu měli , se nám toto vyplnilo jen <b>což</b> .</em>
* <em>O</em>
* <tt><a href="cs_pdt-pos-ADP.html">ADP</a></tt> 659: <em><b>O</b> hlavních otázkách však často rozhoduje centrála v zahraničí .</em>
* <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 62: <em>Vycházíme přitom především z potvrzení škol , říká <b>O</b> . Brabec .</em>
* <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 22: <em>Moravský ornitologický spolek , P . <b>O</b> . Box 65 , Přerov</em>
* <tt><a href="cs_pdt-pos-ADJ.html">ADJ</a></tt> 5: <em>A <b>O</b> Travel se srazit nedala</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2: <em>Přídavky ( Tosca , <b>O</b> sole mio ) nakonec byly a nejkrásněji z nich asi zazněla Píseň o rodné zemi Gejzy Dusíka : Dvorskému toto slovenské vyznání věříme , vždyť jde i z jeho srdce - a to se věru pozná .</em>
* <tt><a href="cs_pdt-pos-PUNCT.html">PUNCT</a></tt> 1: <em>Autodrom v Imole uzavřel soud <b>O</b></em>
* <em>ta</em>
* <tt><a href="cs_pdt-pos-DET.html">DET</a></tt> 155: <em>Teď ještě <b>ta</b> druhá - peníze .</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 2: <em>Ra - <b>ta</b> - <b>ta</b> .</em>
* <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 1: <em>Peter Honak , profesor historie na Maďarské akademii věd , k 50 . výročí osvobození Maďarska sovětskou armádou * Užívej si všeho , ale po padesátce dávej vale třem " <b>ta</b> " - wanita ( ženy ) , harta ( bohatství ) a tahta ( postavení ) .</em>
* <em>Pánbůh</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1: <em>Když to půjde každý rok takhle - <b>Pánbůh</b> zaplať , usmívá se opat Michael Josef Pojezdný .</em>
* <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> 1: <em>Dokonce i <b>Pánbůh</b> je v nich prezentován jako docela veselý chlapík .</em>
* <em>Ra</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1: <em><b>Ra</b> - ta - ta .</em>
* <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 1: <em>Zařadil se mezi rozhodující zakladatele Skupiny <b>Ra</b> , která se stala výraznou platformou právě této části jeho výtvarné a literární generace .</em>
* <em>hm</em>
* <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> 5: <em>( <b>hm</b> )</em>
* <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> 1: <em>Proto spolupracujeme se školským úřadem , školami , učilišti a dalšími organizacemi , uvedl Z . Prouza . ( <b>hm</b> )</em>
## Morphology
The form / lemma ratio of `INTJ` is 1.000000 (the average of all parts of speech is 2.181849).
The 1st highest number of forms (1) was observed with the lemma “Bang”: <em>Bang</em>.
The 2nd highest number of forms (1) was observed with the lemma “Boom”: <em>Boom</em>.
The 3rd highest number of forms (1) was observed with the lemma “Crash”: <em>Crash</em>.
`INTJ` occurs with 3 features: <tt><a href="cs_pdt-feat-Foreign.html">Foreign</a></tt> (6; 5% instances), <tt><a href="cs_pdt-feat-NameType.html">NameType</a></tt> (4; 4% instances), <tt><a href="cs_pdt-feat-Style.html">Style</a></tt> (2; 2% instances)
`INTJ` occurs with 5 feature-value pairs: `Foreign=Yes`, `NameType=Com`, `NameType=Oth`, `Style=Coll`, `Style=Vrnc`
`INTJ` occurs with 6 feature combinations.
The most frequent feature combination is `_` (104 tokens).
Examples: <em>PA, Pink, ach, Inu, hle, proboha, Haló, což, fajn, Ó</em>
## Relations
`INTJ` nodes are attached to their parents using 11 different relations: <tt><a href="cs_pdt-dep-nmod.html">nmod</a></tt> (35; 31% instances), <tt><a href="cs_pdt-dep-dep.html">dep</a></tt> (31; 27% instances), <tt><a href="cs_pdt-dep-root.html">root</a></tt> (17; 15% instances), <tt><a href="cs_pdt-dep-advmod.html">advmod</a></tt> (13; 12% instances), <tt><a href="cs_pdt-dep-conj.html">conj</a></tt> (10; 9% instances), <tt><a href="cs_pdt-dep-obj.html">obj</a></tt> (2; 2% instances), <tt><a href="cs_pdt-dep-appos.html">appos</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-case.html">case</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-discourse.html">discourse</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-nsubj.html">nsubj</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-orphan.html">orphan</a></tt> (1; 1% instances)
Parents of `INTJ` nodes belong to 8 different parts of speech: <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> (38; 34% instances), <tt><a href="cs_pdt-pos-VERB.html">VERB</a></tt> (33; 29% instances), (17; 15% instances), <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> (9; 8% instances), <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> (6; 5% instances), <tt><a href="cs_pdt-pos-ADJ.html">ADJ</a></tt> (5; 4% instances), <tt><a href="cs_pdt-pos-ADV.html">ADV</a></tt> (4; 4% instances), <tt><a href="cs_pdt-pos-NUM.html">NUM</a></tt> (1; 1% instances)
55 (49%) `INTJ` nodes are leaves.
22 (19%) `INTJ` nodes have one child.
16 (14%) `INTJ` nodes have two children.
20 (18%) `INTJ` nodes have three or more children.
The highest child degree of a `INTJ` node is 5.
Children of `INTJ` nodes are attached using 15 different relations: <tt><a href="cs_pdt-dep-punct.html">punct</a></tt> (59; 47% instances), <tt><a href="cs_pdt-dep-conj.html">conj</a></tt> (19; 15% instances), <tt><a href="cs_pdt-dep-dep.html">dep</a></tt> (17; 14% instances), <tt><a href="cs_pdt-dep-cc.html">cc</a></tt> (10; 8% instances), <tt><a href="cs_pdt-dep-advmod.html">advmod</a></tt> (4; 3% instances), <tt><a href="cs_pdt-dep-obl.html">obl</a></tt> (4; 3% instances), <tt><a href="cs_pdt-dep-amod.html">amod</a></tt> (2; 2% instances), <tt><a href="cs_pdt-dep-flat-foreign.html">flat:foreign</a></tt> (2; 2% instances), <tt><a href="cs_pdt-dep-orphan.html">orphan</a></tt> (2; 2% instances), <tt><a href="cs_pdt-dep-case.html">case</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-ccomp.html">ccomp</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-det.html">det</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-nmod.html">nmod</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-nummod.html">nummod</a></tt> (1; 1% instances), <tt><a href="cs_pdt-dep-vocative.html">vocative</a></tt> (1; 1% instances)
Children of `INTJ` nodes belong to 14 different parts of speech: <tt><a href="cs_pdt-pos-PUNCT.html">PUNCT</a></tt> (59; 47% instances), <tt><a href="cs_pdt-pos-NOUN.html">NOUN</a></tt> (18; 14% instances), <tt><a href="cs_pdt-pos-ADV.html">ADV</a></tt> (12; 10% instances), <tt><a href="cs_pdt-pos-CCONJ.html">CCONJ</a></tt> (11; 9% instances), <tt><a href="cs_pdt-pos-INTJ.html">INTJ</a></tt> (6; 5% instances), <tt><a href="cs_pdt-pos-VERB.html">VERB</a></tt> (4; 3% instances), <tt><a href="cs_pdt-pos-ADJ.html">ADJ</a></tt> (3; 2% instances), <tt><a href="cs_pdt-pos-DET.html">DET</a></tt> (3; 2% instances), <tt><a href="cs_pdt-pos-PART.html">PART</a></tt> (3; 2% instances), <tt><a href="cs_pdt-pos-PRON.html">PRON</a></tt> (2; 2% instances), <tt><a href="cs_pdt-pos-ADP.html">ADP</a></tt> (1; 1% instances), <tt><a href="cs_pdt-pos-NUM.html">NUM</a></tt> (1; 1% instances), <tt><a href="cs_pdt-pos-PROPN.html">PROPN</a></tt> (1; 1% instances), <tt><a href="cs_pdt-pos-SYM.html">SYM</a></tt> (1; 1% instances)
# blackops
Blackops is an internal developer platform tool that helps developers easily manage an application's ecosystem without needing dedicated infra and platform teams.
---
title: Methods of treating medical conditions by neuromodulation of the sympathetic nervous system
abstract: The present invention is directed to systems and methods for treating respiratory or pulmonary medical conditions by neuromodulation of a target site of the sympathetic nervous system and preferably a target site in communication with a sympathetic nerve chain. A system for treating a respiratory or pulmonary medical condition incorporating a closed-loop feedback system is also provided.
url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=07877146&OS=07877146&RS=07877146
owner: The Cleveland Clinic Foundation
number: 07877146
owner_city: Cleveland
owner_country: US
publication_date: 20050504
---
The present application is a continuation-in-part of U.S. application Ser. No. 10/495,766, filed on May 5, 2004, now U.S. Pat. No. 7,778,704, which is the National Stage application of International Application No. PCT/US02/34000, filed on Oct. 23, 2002, which is a continuation-in-part of U.S. Ser. No. 10/001,923, filed on Oct. 23, 2001, now U.S. Pat. No. 6,885,888. The present application also claims priority to U.S. Provisional Nos. 60/567,411, filed on May 4, 2004; 60/608,420, filed on Sep. 10, 2004; and 60/608,513, filed on Sep. 10, 2004. All of the above referenced applications are incorporated by reference herein.
The present invention relates to methods of treating medical conditions by electrical and/or chemical neuromodulation of target sites in the sympathetic nervous system.
Neuromodulation involves an array of therapeutic approaches applied to the brain, cranial nerves, spinal cord and all associated nerves and neural structures in the human body to treat various human disorders. Neuromodulation can involve lesioning, electrical stimulation/modulation, chemical stimulation/modulation (including gene therapy) and administration of stem cells. Electrical stimulation of neural tissue is becoming an increasingly preferred form of therapy for certain neurological conditions and disorders where existing therapies generate intolerable side effects, require repeated administration of treatment or are simply ineffective in a subset of patients. Electrical stimulation provides distinct advantages over surgical lesioning techniques since electrical stimulation is a reversible and adjustable procedure that provides continuous benefits as the patient's disease progresses and the patient's symptoms evolve.
Currently, electrical stimulation of peripheral nerves and the spinal cord is approved for treatment of neuropathic pain. With respect to deep brain targets, electrical stimulation of the subthalamic nucleus and the globus pallidus interna is approved for treatment of Parkinson's disease, and electrical stimulation of the ventral intermediate nucleus is approved for treatment of essential tremor.
In an embodiment, the present invention provides a system for treating a medical condition comprising a therapy delivery device for positioning at a target site of the sympathetic nervous system and a controller in communication with the therapy delivery device for enabling the therapy delivery device to deliver therapy to the target site to treat the medical condition. In a preferred embodiment, the target site is in communication with the sympathetic nerve chain. The therapy delivery device can be a stimulation lead for delivering electrical neuromodulation or a drug port for delivering chemical neuromodulation to the target site.
According to another embodiment of the present invention, a method of affecting a bronchial disorder in a patient comprises placing an electrode in communication with at least one ganglion along the sympathetic nerve chain of the patient, wherein the at least one ganglion is associated with the bronchial disorder; applying an electric signal to the electrode to stimulate the at least one ganglion; and adjusting at least one parameter of the electric signal until the bronchial disorder has been affected. Preferably, the at least one ganglion is selected from the group consisting of T1 through T4 ganglia, cervical ganglia and combinations thereof.
The present invention also provides a system for treating a medical condition comprising a therapy delivery device for applying a therapy signal on a target site in the sympathetic nervous system. The system also includes a sensor for detecting a bodily activity associated with the medical condition and generating a sensor signal. The system also includes a controller in communication with the therapy delivery device and the sensor for activating the therapy delivery device to initiate application of the therapy signal to the target site, or to adjust application of the therapy signal to the target site, in response to the sensor signal. The therapy signal can be an electrical signal in embodiments where the therapy delivery device is a stimulation lead and a chemical signal in embodiments where the therapy delivery device is a drug port.
The present invention also provides a method for treating a medical condition comprising placing a therapy delivery device on a target site of the sympathetic nervous system, preferably a target site in communication with a sympathetic nerve chain, and activating the therapy delivery device to deliver therapy to the target site to treat the medical condition.
The medical conditions that can be treated by the systems and methods of the present invention include skeletal, immunological, vascular, hematological, muscular, connective, neurological, visual, auditory, vestibular, dermatological, endocrinological, olfactory, cardiovascular, genitourinary, psychological, gastrointestinal, respiratory, pulmonary, inflammatory and neoplastic medical conditions.
The present invention also provides methods of stabilizing and optimizing bodily functions perioperatively and/or post-operatively by stimulating a target site in communication with a sympathetic nerve chain.
The present invention provides systems and methods for treating medical conditions by neuromodulation of a target site of the sympathetic nervous system, and preferably neuromodulation of a target site in communication with a sympathetic nerve chain.
The sympathetic nervous system is a division of the autonomic nervous system and includes the sympathetic nerve chains and their associated direct and indirect input and output nerve branches, nerve clusters, nerve aggregates and nerve plexuses located, for example, in the skull, base of the skull, neck, thoracic, abdominal and pelvic cavities, and their associated arterial and venous structures. The sympathetic nerve chain, also known as the sympathetic nerve trunk, is a long ganglionated nerve strand along each side of the vertebral column that extends from the base of the skull to the coccyx. Each sympathetic nerve chain is connected to each spinal nerve by gray rami and receives fibers from the spinal cord through white rami connecting with the thoracic and upper lumbar spinal nerves. A sympathetic nerve chain has paravertebral ganglia that are connected by a paravertebral sympathetic chain. Target sites in communication with the sympathetic nerve chain according to the present invention are target sites in the nervous system having fibers that project to and/or from the sympathetic nerve chain. Examples of such target sites include the superior cervical, middle cervical, vertebral, inferior cervical and cervicothoracic ganglia; spinal cord segments T1 to L3; sympathetic ganglia, including paravertebral ganglia and prevertebral ganglia; the paravertebral sympathetic chain; thoracic and lumbar sympathetic ganglia; nerve plexuses in communication with sympathetic ganglia; dorsal roots, ventral roots, dorsal root ganglia, dorsal rami, ventral rami, white rami communicans, gray rami communicans and recurrent meningeal branches, all emerging from spinal cord segments T1 to L3; T1 to L3 spinal nerves; and any combination of the above from one or both of the sympathetic nerve chains.
Thoracic and lumbar ganglia, prevertebral ganglia, and their associated sympathetic structures include the cardiac, celiac, mesenteric (superior and inferior), renal, hypogastric, and intermesenteric abdominal aortic ganglia, as well as ganglia associated with glands such as the hepatic or adrenal glands. Nerve plexuses include prevertebral plexuses such as the superior and inferior hypogastric (pelvic) plexus. Target sites also include the thoracic, lumbar, and sacral splanchnic nerves. The systems and methods of the present invention for treating medical conditions encompass neuromodulation of any combination of one or more target sites of the sympathetic nervous system, including any combination of one or more target sites in communication with the sympathetic nerve chain. The systems and methods of the present invention also encompass ipsilateral, contralateral, and bilateral neuromodulation.
As used herein, the term "treating" a medical condition encompasses therapeutically regulating, preventing, improving, alleviating the symptoms of, reducing the effects of, and/or diagnosing the medical condition. As used herein, the term "medical condition" encompasses any condition, disease, disorder, function, abnormality, or deficit influenced by the autonomic nervous system. Further, the systems and methods of the present invention can be used to treat more than one medical condition concurrently. Non-limiting examples of medical conditions that can be treated according to the present invention include genetic, skeletal, immunological, vascular or hematological, muscular or connective tissue, neurological, ocular, auditory or vestibular, dermatological, endocrinological, olfactory, cardiovascular, genitourinary, psychological, gastrointestinal, respiratory/pulmonary, neoplastic, or inflammatory medical conditions. Further, the medical condition can be the result of any etiology, including vascular, ischemic, thrombotic, embolic, infectious (including bacterial, viral, parasitic, fungal, or abscessal), neoplastic, drug-induced, metabolic, immunological, collagenic, traumatic, surgical, idiopathic, endocrinological, allergic, degenerative, congenital, or abnormal malformational causes.
The present invention also encompasses enhancing the therapeutic effects of other therapies, such as methods and systems working in conjunction with a pharmaceutical agent or other therapies to augment, enhance, improve, or facilitate other therapies (adjunctive therapies), as well as reducing, minimizing, and counteracting side effects, complications, and adverse reactions for any therapies involved in treating the above-mentioned medical conditions. For example, the methods and systems of the present invention may be used for a cancer patient undergoing chemotherapy, utilizing stimulation to minimize the adverse effects of chemotherapy. In contrast, the methods and systems can also be used to enhance the chemotherapy.
With respect to treating genetic medical conditions, such medical conditions can affect single organs, organ systems, or multiple organs in multiple organ systems.
With respect to treating skeletal medical conditions, such medical conditions can involve any medical conditions related to the components of the skeletal system, such as, for example, bones, joints, or the synovium. Non-limiting examples of such skeletal medical conditions include fractures, osteoporosis, osteomalacia, osteopenia, arthritis, and bursitis.
With respect to treating immunological, inflammatory, and allergic medical conditions, such medical conditions can involve any medical conditions related to the components of the immune system, such as, for example, the spleen or thymus. Non-limiting examples of immunological medical conditions include immuno-suppressed states, immuno-compromised states, auto-immune disorders, drug-related allergy, an environmental allergy, or hypogammaglobulinemia.
With respect to treating vascular or hematological medical conditions, such medical conditions can involve any medical conditions related to the components of the vascular system, such as, for example, the arteries, arterioles, veins, venules, capillaries, lymph nodes, and blood, including plasma, white blood cells, red blood cells, and platelets. Non-limiting examples of vascular/hematological medical conditions include anemia, thrombocytosis, thrombocytopenia, neutropenia, hemophilia, lymphedema, thrombosis, vasculitis, venous insufficiency, arterial dissection, peripheral edema, blood loss, vascular insufficiency, hypercoagulable states, stroke, vasospasms, and disorders of vascular tone affecting perfusion. The vasculitis may be multifocal, systemic, or limited to the central nervous system. The hypercoagulable state may be factor V deficiency or antithrombin deficiency, among others. The stroke may result from cerebrovascular disease and/or ischemia from decreased blood flow/oxygenation secondary to occlusive or thromboembolic disease. The vasospasms may be secondary to aneurysmal subarachnoid hemorrhage or may be vasospasm secondary to other etiologies.
With respect to treating muscular/connective tissue medical conditions, such medical conditions can involve any medical conditions related to the components of the muscular/connective tissue system, such as, for example, smooth or striated muscles, tendons, ligaments, cartilage, fascia, and fibrous tissue. Non-limiting examples of muscular medical conditions include myopathy, muscular dystrophy, weakness, and muscle atrophy. The muscle atrophy may be caused by degenerative muscle disease, nerve injury, disuse atrophy, or stroke. Non-limiting examples of connective tissue medical conditions include scleroderma, rheumatoid arthritis, lupus, Sjogren's syndrome, fibromyalgia, myositis, myofascial pain syndrome, and collagen vascular disease. The collagen vascular disease may be lupus or rheumatoid arthritis.
With respect to treating neurological medical conditions, such medical conditions can involve any medical conditions related to the components of the nervous system, such as, for example, the brain (including the cerebellum, brain stem, pons, midbrain, and medulla), the spinal cord, peripheral nerves, peripheral ganglia, and nerve plexuses. Non-limiting examples of neurological conditions include Alzheimer's disease, epilepsy, multiple sclerosis, ALS, Guillain-Barre syndrome, stroke, cerebral palsy, intracerebral hemorrhage, dementia, vertigo, tinnitus, diplopia, cerebral vasospasm, aneurysm, arteriovenous malformation, brain malformations, Parkinson's disease, multi-system atrophy, olivopontocerebellar degeneration, familial tremor, dystonia (including cervical dystonia/torticollis, facial dystonia, blepharospasms, and spasmodic dysphonia), radiculopathy, neuropathy, sleep disorders, disorders of temperature regulation in the body and extremities, and postherpetic neuralgia involving the face, head, body, or extremities. The neuropathy may be caused by fracture, crush injury, compressive injury, repetitive movement injury, diabetes, trauma, alcohol, infection, or heredity. The sleep disorder may be sleep apnea, restless leg syndrome, narcolepsy, snoring, insomnia, or drowsiness.
With respect to treating ocular medical conditions, such medical conditions can involve any medical conditions related to the components of the visual system, such as, for example, the eye, including the lens, iris, conjunctiva, lids, cornea, retina (including the macula), the vitreous chambers, and the aqueous chambers. Non-limiting examples of ocular medical conditions include myopia, hyperopia, exophthalmos, astigmatism, corneal ulcer, strabismus, retinitis pigmentosa, retinal tears, macular degeneration, cataracts, xerophthalmia, amblyopia, glaucoma, blindness, and diplopia.
With respect to treating auditory and vestibular medical conditions, such medical conditions can involve any medical conditions related to the components of the auditory and vestibular system, such as, for example, the ear, including the external ear, the middle ear, the inner ear, cochlea, ossicles, tympanic membrane, and semicircular canals. Non-limiting examples of auditory and vestibular medical conditions include otitis, vertigo, hearing loss, dizziness, and tinnitus.
With respect to treating dermatological medical conditions, such medical conditions can involve any medical conditions related to the components of the skin and integumentary system, such as, for example, the hair, skin, nails, and sweat glands. Non-limiting examples of dermatological medical conditions include rosacea, eczema, psoriasis, acne, hair loss, hypertrichosis, seborrheic dermatitis, xerotic skin, oily skin, atrophy/dystrophy of the skin, wrinkles, radiation-induced damage, vitiligo, and cellulite.
With respect to treating endocrinological medical conditions, such medical conditions can involve any medical conditions related to the components of the endocrine system, such as, for example, the pancreas, thyroid, adrenal glands, liver, pituitary, and hypothalamus. Non-limiting examples of endocrinological conditions include hypoglycemia, diabetes (types I and II), obesity, hyperthyroidism, hypothyroidism, amenorrhea, dysmenorrhea, infertility, impotence, anorgasmia, delayed orgasm, perimenstrual syndrome, hypercholesterolemia, hypertriglyceridemia, Cushing's disease, Addison's disease, malabsorption syndrome, dysautonomia, chronic fatigue syndrome, fatigue, heat exhaustion, cold extremities, hot flashes, vasomotor instability, Raynaud's syndrome, hormonal disorders, metabolic disorders such as gout, disorders of metabolism, and metabolic storage diseases in which abnormal amounts of various substances accumulate, such as glycogen in glycogen storage diseases, iron in hemochromatosis, or copper in Wilson's disease.
With respect to treating olfactory medical conditions, such medical conditions can involve any medical conditions related to the components of the olfactory system, such as, for example, the nose, the sensory nerves for smell, and the sinuses. Non-limiting examples of olfactory conditions include loss of the sense of smell, rhinorrhea, rhinitis, acute sinusitis, chronic sinusitis, and nasal congestion.
With respect to treating cardiovascular medical conditions, such medical conditions can involve any medical conditions related to the components of the cardiovascular system, such as, for example, the heart and aorta. Non-limiting examples of cardiovascular conditions include post-infarction rehabilitation, valvular disease, myocardial infarction, arrhythmia, heart failure, angina, microvascular ischemia, myocardial contractility disorder, cardiomyopathy, hypertension, orthostatic hypotension, dysautonomia, syncope, vasovagal reflex, carotid sinus hypersensitivity, and cardiac structural abnormalities such as septal defects and wall aneurysms. The cardiomyopathy may be caused by hypertension or alcohol, or by a congenital cause. The hypertension may be essential (primary), secondary, or renal.
With respect to treating genitourinary medical conditions, such medical conditions may involve any medical conditions related to the components of the genitourinary system, such as, for example, the ovary, fallopian tube, uterus, vagina, penis, testicles, kidney, bladder, ureter, and urethra. Non-limiting examples of genitourinary medical conditions include impotence, dysmenorrhea, amenorrhea, anorgasmia, delayed orgasm, endometriosis, infertility, uterine fibroids, ovarian cysts, spastic bladder, flaccid bladder, interstitial cystitis, polycystic kidney disease, nephrotic syndrome, uremia, glomerulonephritis, renal failure, urinary incontinence or hesitancy, nephrolithiasis, and benign prostatic hyperplasia.
With respect to treating psychological medical conditions, non-limiting examples of such medical conditions include Tourette's syndrome, autism, mental retardation, stress, anxiety, depression, bipolar disorder, mania, schizophrenia, a personality disorder, a phobia, hallucinations, delusions, psychosis, addictions, and other affective disorders. The addiction may be to substances or to behavior. The substance may be alcohol, cigarettes, or drugs.
With respect to treating gastrointestinal medical conditions, such medical conditions can involve any medical conditions related to the components of the gastrointestinal system, such as, for example, the mouth, esophagus, stomach, small intestine, large intestine, rectum, liver, gall bladder, bile ducts, and pancreas. Non-limiting examples of gastrointestinal medical conditions include hepatic failure, hepatitis, cirrhosis, dumping syndrome, gastric or duodenal ulcer, irritable bowel syndrome, colitis, diverticulosis, diverticulitis, emesis, hyperemesis gravidarum, bowel incontinence, constipation, diarrhea, abdominal cramps, gastroesophageal reflux, esophageal dysmotility, gastric dysmotility, cholecystitis, gall stones, pancreatic insufficiency, gas, bloating, and gastritis.
With respect to treating respiratory/pulmonary medical conditions, such medical conditions can involve any medical conditions related to the components of the respiratory system, such as, for example, the trachea, bronchi, bronchioles, alveoli, lungs, and capillaries. Non-limiting examples of respiratory medical conditions include reactive airway disease, asthma, emphysema, COPD, and silicosis.
With respect to treating neoplastic processes, such processes can be primary and/or metastatic and can involve the thyroid (including the medullary), the liver, the pancreas (including vipoma and insulinoma), leukemia, lymphoma, and other non-solid tumors; neoplastic processes of the brain, stomach, lung, colon, esophagus, bone, and skin (including basal cells, squamous cells, and melanoma); and the bladder, kidney, prostate, breast, ovaries, uterus, nasopharynx, and sarcoma.
With respect to treating inflammatory disorders, such inflammatory disorders include, for example, inflammatory bowel disorders such as irritable bowel syndrome and Crohn's disease, as well as auto-immune disorders, immune disorders, and rheumatological disorders.
The present invention also provides methods of treating pain syndromes. Such pain may result from one or more medical conditions comprising migraine headaches (including migraine headaches with aura, migraine headaches without aura, menstrual migraines, migraine variants, atypical migraines, complicated migraines, hemiplegic migraines, transformed migraines, and chronic daily migraines), episodic tension headaches, chronic tension headaches, analgesic rebound headaches, episodic cluster headaches, chronic cluster headaches, cluster variants, chronic paroxysmal hemicrania, hemicrania continua, post-traumatic headache, post-traumatic neck pain, post-herpetic neuralgia involving the head or face, pain from spine fracture secondary to osteoporosis, arthritis pain in the spine, headache related to cerebrovascular disease and stroke, headache due to vascular disorder, reflex sympathetic dystrophy, cervicalgia, glossodynia, carotidynia, cricoidynia, otalgia due to middle ear lesion, gastric pain, sciatica, maxillary neuralgia, laryngeal pain, myalgia of neck muscles, trigeminal neuralgia, post-lumbar puncture headache, low cerebrospinal fluid pressure headache, temporomandibular joint disorder, atypical facial pain, ciliary neuralgia, paratrigeminal neuralgia, petrosal neuralgia, Eagle's syndrome, idiopathic intracranial hypertension, orofacial pain, myofascial pain syndrome involving the head, neck, and shoulder, chronic migrainous neuralgia, cervical headache, paratrigeminal paralysis, sphenopalatine ganglion neuralgia, Vidian neuralgia, and causalgia. To neuromodulate such pain syndromes, preferably the cervical and thoracic ganglia are stimulated.
The present invention also provides methods of treating the effects of aging, burns, trauma, transplantation, radiation damage, bioterrorism, back pain, and body pain.
In embodiments where the therapy delivery device is a stimulation lead having a lead proximal end, a lead body, and a lead distal end, the lead distal end comprises at least one electrode. The at least one electrode can be a plurality of electrodes. The electrodes at the lead distal end can be monopolar, bipolar, or multipolar and can operate as a cathode or an anode. The electrode can be composed of, but is not limited to, activated iridium, rhodium, titanium, or platinum, and combinations of said materials. The electrode may be coated with a thin surface layer of iridium oxide, titanium nitride, or other surface modifications to enhance electrical sensitivity. The stimulation lead can also comprise carbon, doped silicon, or silicon nitride. Each lead distal end can be provided with a biocompatible fabric collar or band about the electrode periphery to allow it to be more readily sutured or glued into place, for electrodes to be secured in this manner. The stimulation lead may be placed permanently or temporarily in the target site to provide chronic or acute neuromodulation of the target site.
The controller is used to operate and supply power to the therapy delivery device and enables the therapy delivery device to deliver a therapy signal, such as an electrical signal or a chemical signal, to the target site. The controller may be powered by a battery (which can be rechargeable), an external power supply, a fuel cell, or a battery pack for external use. The controller may also be integral with the therapy delivery device, such as a single stimulation lead/power generator. When the therapy delivery device is a stimulation lead, the controller may change the output to the electrode by way of polarity, pulse width, amplitude, frequency, voltage, current, intensity, duration, wavelength, and/or waveform. When the therapy delivery device is a drug port, the controller may change its output such that a pump, pressure source, or proportionally controlled orifice increases or decreases the rate at which the pharmaceutical is delivered to the target site. The controller may operate any number or combination of electrodes and pharmaceutical delivery devices; for example, the controller may be connected to stimulation leads and a peristaltic pump for delivering a pharmaceutical to the target site near the stimulation leads. The controller may be implanted within the patient, or it may be connected by leads and positioned outside of the patient. A portion of the control system may be external to the patient's body for use by the attending physician to program the implanted controller and to monitor its performance. This external portion may include a programming wand which communicates with the implanted controller by means of telemetry via an internal antenna to transmit parameter values, as may be selectively changed from time to time by subsequent programming selected at the programmer unit, such as a computer. The programming wand also accepts telemetry data from the controller to monitor the performance of the therapy delivery device.
In embodiments where the controller enables a stimulation lead to deliver an electrical signal to the target site, the electrical signal may be episodic, continuous, phasic, in clusters, intermittent, upon demand by the patient or medical personnel, or preprogrammed to respond to a sensor. Preferably, the oscillating electrical signal is operated at a voltage between about 0.1 microvolts and about 20 V. More preferably, the oscillating electrical signal is operated at a voltage between about 1 V and about 15 V. For microstimulation, it is preferable to stimulate within the range of about 0.1 microvolts to about 1 V. Preferably, the electric signal source is operated at a frequency range between about 2 Hz and about 2500 Hz. More preferably, the electric signal source is operated at a frequency range between about 2 Hz and about 200 Hz. Preferably, the pulse width of the oscillating electrical signal is between about 10 microseconds and about 1,000 microseconds. More preferably, the pulse width of the oscillating electrical signal is between about 50 microseconds and about 500 microseconds. Preferably, the application of the oscillating electrical signal is monopolar when the stimulation lead is monopolar, bipolar when the stimulation lead is bipolar, and multipolar when the stimulation lead is multipolar. The waveform may be, for example, biphasic, square wave, sine wave, or other electrically safe and feasible combinations. The electrical signal may be applied to multiple target sites simultaneously or sequentially.
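The preferred operating windows above (voltage, frequency, pulse width) lend themselves to a simple range check. The following is a minimal sketch, not part of the disclosed device; the helper name and settings dictionary are invented for illustration:

```python
# Hypothetical sketch: check stimulation settings against the preferred
# ranges stated above (about 0.1 uV-20 V, 2-2500 Hz, 10-1,000 us).

PREFERRED_RANGES = {
    "voltage_v": (0.1e-6, 20.0),       # about 0.1 microvolts to about 20 V
    "frequency_hz": (2.0, 2500.0),     # about 2 Hz to about 2500 Hz
    "pulse_width_us": (10.0, 1000.0),  # about 10 to about 1,000 microseconds
}

def out_of_range(settings: dict) -> list:
    """Return the names of parameters that fall outside the preferred ranges."""
    violations = []
    for name, (lo, hi) in PREFERRED_RANGES.items():
        if not (lo <= settings[name] <= hi):
            violations.append(name)
    return violations

# A setting inside the "more preferable" windows passes; an over-voltage
# setting is flagged.
ok = out_of_range({"voltage_v": 5.0, "frequency_hz": 100.0, "pulse_width_us": 200.0})
bad = out_of_range({"voltage_v": 25.0, "frequency_hz": 100.0, "pulse_width_us": 200.0})
```

In a real device these limits would be enforced in the controller firmware before any value reaches the pulse generator.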
In embodiments where the controller enables a drug port to deliver a chemical signal to the target site, a chemical agent may be delivered to the target site prior to, concurrent with, subsequent to, or instead of electrical neuromodulation. The chemical agent may be a neurotransmitter mimic, neuropeptide, hormone, pro-hormone, antagonist, agonist, reuptake inhibitor, or degrading enzyme thereof, a peptide, protein, therapeutic agent, amino acid, nucleic acid, stem cell, or any combination thereof, and may be delivered by a slow-release matrix or drug pump. The delivery of the chemical agent may be continuous, intermittent, chronic, phasic, or episodic. Different chemical agents may be utilized to affect different parts of the sympathetic nervous system. The chemical agents preferably act on one or more of the receptor sites of the autonomic nervous system, such as the adrenergic receptors, cholinergic receptors (nicotinic and muscarinic), purinergic receptors, and nitric oxide receptors. Non-limiting examples of chemical agents include prazosin, yohimbine, atenolol, salbutamol, and atropine.
The present invention also provides systems for treating medical conditions incorporating a closed-loop feedback mechanism. Specifically, in such embodiments, the system comprises a therapy delivery device for applying a therapy signal, which can be an electrical signal or a chemical signal, to a target site of the sympathetic nervous system, and preferably a target site in communication with a sympathetic nerve chain. The system further comprises a sensor for detecting a bodily activity associated with the medical condition and for generating a sensor signal. The system also includes a controller in communication with the therapy delivery device for activating the therapy delivery device to initiate application of the therapy signal to the target site, or to adjust application of the therapy signal to the target site, in response to the sensor signal. The bodily activity to be detected by the sensor is any characteristic or function of the body, such as electrical or chemical activity, and includes, for example, temperature, respiratory function, heart rate, capillary pressure, venous pressure, perfusion, oxygenation (including blood oxygenation levels, oxygen saturation levels, oxygen consumption, and oxygen pressure), water pressure, nitrogen pressure, carbon dioxide pressure in the tissue or circulation (including blood and lymphatic), electrolyte levels in the circulation, tissue diffusion or metabolism of various agents and molecules such as glucose, neurotransmitter levels, body temperature regulation, blood pressure, blood viscosity, metabolic activity, cerebral blood flow, pH levels, vital signs, galvanic skin responses, perspiration, electrocardiogram, electroencephalogram, action potential conduction, chemical production, body movement, response to external stimulation, cognitive activity, dizziness, pain, flushing, motor activity (including muscle tone), visual activity, speech, balance, diaphragmatic movement, chest wall expansion, and the concentration of certain biological molecules/substances in the body, such as, for example, glucose, liver enzymes, electrolytes, hormones, creatinine, medications, and the concentration of various cells, platelets, or bacteria. These bodily activities can be measured utilizing a variety of methods, including, but not limited to, chemical analysis, mechanical measurements, and laser and fiber optic analysis. Non-limiting examples of further tests and bodily activities that can be sensed for categories of medical conditions and specific medical conditions are provided in TABLE I. Table I is only exemplary and non-exhaustive, and other bodily activities can be sensed, as well as various combinations of the sensed activities listed in Table I.
In specific embodiments, the sensors are located on or within the body and detect electrical and/or chemical activity. Such activity may be detected by sensors located within or proximal to the target site, by sensors located distal to the target site but within the nervous system, or by sensors located distal to the target site outside the nervous system. Examples of electrical activity detected by sensors located within or proximal to the target site include sensors that measure neuronal electrical activity, such as the electrical activity characteristic of the signaling stages of neurons (i.e., synaptic potentials, trigger actions, action potentials, and neurotransmitter release) at the target site, and by afferent and efferent pathways and sources that project to and from, or communicate with, the target site. For example, the sensors can measure, at any signaling stage, neuronal activity of any of the extensive connections of the target site. In particular, the sensors may detect the rate and pattern of the neuronal electrical activity to determine the electrical signal to be provided to the lead.
Examples of chemical activity detected by sensors located within or proximal to the target site include sensors that measure neuronal activity, such as the modulation of neurotransmitters, hormones, pro-hormones, neuropeptides, peptides, proteins, electrolytes, or small molecules by the target site, and the modulation of these substances by afferent and efferent pathways and sources that project to and from the target sites or communicate with the target sites.
With respect to detecting electrical or chemical activity of the body by sensors located distal to the target site but still within the nervous system, such sensors could be placed in the brain, the spinal cord, the cranial nerves, and/or the spinal nerves. Sensors placed in the brain are preferably placed in a layer-wise manner. For example, a sensor could be placed on the scalp (i.e., an electroencephalogram), in the subgaleal layer, on the skull, in the dura mater, in the sub-dural layer, and in the parenchyma (i.e., in the frontal lobe, occipital lobe, parietal lobe, or temporal lobe) to achieve increasing specificity of electrical and chemical activity detection. The sensors could measure the same types of chemical and electrical activity as the sensors placed within or proximal to the target site, as described above.
With respect to detecting electrical or chemical activity by sensors located distal to the target site outside the nervous system, such sensors may be placed in venous structures and various organs or tissues of other body systems, such as the endocrine system, muscular system, respiratory system, circulatory system, urinary system, integumentary system, and digestive system, or such sensors may detect signals from these various body systems. For example, the sensor may be an external sensor, such as a pulse oximeter or an external blood pressure, heart rate, and respiratory rate detector. All of the above-mentioned sensing systems may be employed together, or any combination of fewer than all of the sensors may be employed together.
After the sensor(s) detect the relevant bodily activity associated with the medical condition according to the systems of the present invention, the sensors generate a sensor signal. The sensor signal is processed by a sensor signal processor, which in this embodiment is part of the controller. The controller generates a response to the sensor signal by activating the therapy delivery device to initiate application of the therapy signal, or to adjust application of the therapy signal, to the target site. The therapy delivery device then applies the therapy signal to the target site. In embodiments where the therapy delivery device is a stimulation lead and the therapy signal is an electrical signal, activating the stimulation lead to adjust application of the electrical signal includes terminating, increasing, decreasing, or changing the rate or pattern of a pulsing parameter of the electrical stimulation, and the electrical signal can be the respective termination, increase, decrease, or change in rate or pattern of the respective pulsing parameter. In embodiments where the therapy delivery device is a drug port and the therapy signal is a chemical signal, activating the drug port to adjust application of the chemical signal can be an indication to terminate, increase, decrease, or change the rate or pattern of the amount or type of chemical agent administered, and the chemical signal can be the respective initiation, termination, increase, decrease, or change in the rate or pattern of the amount or type of chemical agent administered. The processing of closed-loop feedback systems for electrical and chemical stimulation is described in more detail in respective U.S. Pat. Nos. 6,058,331 and 5,711,316, both of which are incorporated by reference herein.
Closed-loop electrical stimulation according to the present invention can be achieved by a modified form of an implantable SOLETRA, KINETRA, RESTORE, or SYNERGY signal generator available from Medtronic, Minneapolis, Minn., as disclosed in U.S. Pat. No. 6,353,762 (the teaching of which is incorporated herein in its entirety), a controller as described herein, or by utilization of CIO-DAS08 and CIO-DAC16/I processing boards and an IBM-compatible computer available from Measurement Computing, Middleboro, Mass., with Visual Basic software for programming of algorithms. In an illustration of a non-limiting example, a controller comprising a microprocessor (such as an MSP430 microprocessor from Texas Instruments), an analog-to-digital converter (such as an AD7714 from Analog Devices Corp.), a pulse generator (such as a CD1877 from Harris Corporation), a pulse width control, a lead driver, a digital-to-analog converter (such as a MAX538 from Maxim Corporation), a power supply, memory, and a communications port or telemetry chip is shown. Optionally, a digital signal processor is used for signal conditioning and filtering. Input leads and an output lead to the therapeutic delivery devices (a stimulation lead and a drug delivery device) are also illustrated. Additional stimulation leads, sensors, and therapeutic delivery devices may be added to the controller as required. As a non-limiting example, inputs from sensors, such as heart rate and blood pressure sensors, are input to the analog-to-digital converter. The microprocessor, receiving the sensor inputs, uses algorithms to analyze the biological activity of the patient and, using PID, fuzzy logic, or other algorithms, computes an output to the pulse generator and/or drug delivery device drivers, respectively, to neuromodulate the target site where the therapeutic delivery devices are placed. The output of the analog-to-digital converter is connected to the microprocessor through a peripheral bus, including address, data, and control lines.
The microprocessor processes the sensor data in different ways, depending on the type of transducer in use. When the signal on the sensor indicates biological activity outside of threshold values (for example, elevated blood pressure or heart rate) programmed by the clinician and stored in a memory, the electrical signal applied through the output drivers of the controller will be adjusted. The output voltage or current from the controller is then generated in an appropriately configured form (voltage, current, frequency) and applied to the one or more therapeutic delivery devices placed at the target site for a prescribed time period to reduce the elevated blood pressure or heart rate. If the patient's blood pressure or heart rate, as monitored by the system, is not outside of the normal threshold limits (hypotensive or hypertensive, bradycardic or tachycardic), or if the controller output, after it has timed out, has resulted in a correction of the blood pressure or heart rate to within a predetermined threshold range, no further electrical signal is applied to the target site, and the controller continues to monitor the patient via the sensors. A block diagram of an algorithm which may be used in the present invention is shown in the figure.
Referring to the figure, suitably conditioned and converted sensor data are input to the algorithm. The program computes at least one value of at least one biological activity related to a particular medical condition, such as, for example, blood pressure, heart rate, or cardiac output, and compares the measured value of the biological activity to a pre-determined range of values, which is determined in advance to be the desired therapeutic range of values. This range is programmed into the microprocessor via the telemetry or communications port of the controller. The algorithm then determines whether or not the measured value lies outside the pre-determined range of values. If the measured biological activity value is not outside the pre-determined range of values, the program continues to monitor the sensors and reiterates the comparison part of the algorithm. If the measured biological value is outside of the pre-determined range of values, a determination or comparison is made as to whether the value is too high or too low compared with the pre-determined range. If the biological activity value is too high, an adjustment to the therapy delivery device is made to lower the biological activity value of the patient by calculating an output signal for the pulse generator or drug delivery device to deliver a sufficient amount of the pharmaceutical or electrical stimulation to lower the biological activity of the patient. The algorithm continues to monitor the biological activity following the adjustment. If the biological activity value is too low, then an adjustment to the therapy delivery device is made to raise the biological activity value by calculating an output signal for the pulse generator or drug delivery device to deliver a sufficient amount of a pharmaceutical or electrical stimulation to raise the biological activity value of the patient. The algorithm continues to monitor the biological activity of the patient following the adjustment.
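The compare-and-adjust loop described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the blood-pressure thresholds are invented, not taken from the patent:

```python
# Hypothetical sketch of the closed-loop algorithm described above:
# compare a measured biological activity value to a pre-determined
# therapeutic range and decide the controller action.

def control_step(measured: float, low: float, high: float) -> str:
    """Return the controller action for one sensor reading."""
    if measured > high:
        return "decrease"   # output acts to lower the biological activity value
    if measured < low:
        return "increase"   # output acts to raise the biological activity value
    return "monitor"        # within range: keep monitoring the sensors

# Example with an invented systolic blood-pressure range of 90-140 mmHg:
actions = [control_step(bp, 90.0, 140.0) for bp in (120.0, 155.0, 80.0)]
```

After either adjustment branch, the real algorithm loops back to monitoring, so `control_step` would be called once per conditioned sensor sample.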
The amount of adjustment made may be determined by proportional integral derivative algorithms or by implementation of fuzzy logic rules.
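As a deliberately generic illustration of the proportional-integral-derivative option, here is a minimal discrete PID step. The gains and time step are invented; a real implanted device would add output clamping, anti-windup, and safety limits.

```javascript
// Minimal discrete PID controller (illustrative; gains are arbitrary).
function makePid(kp, ki, kd) {
  let integral = 0
  let prevError = 0
  return function step(setpoint, measured, dt) {
    const error = setpoint - measured
    integral += error * dt
    const derivative = (error - prevError) / dt
    prevError = error
    // The combined output would be mapped to a stimulation or
    // drug-delivery adjustment by the controller.
    return kp * error + ki * integral + kd * derivative
  }
}
```

With `ki` and `kd` set to zero this reduces to a purely proportional adjustment, which makes its behavior easy to check by hand.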
With respect to the control of specific electrical parameters the stimulus pulse frequency may be controlled by programming a value to a programmable frequency generator using the bus of the controller. The programmable frequency generator provides an interrupt signal to the microprocessor through an interrupt line when each stimulus pulse is to be generated. The frequency generator may be implemented by model CDP1878 sold by Harris Corporation. The amplitude for each stimulus pulse may be programmed to a digital to analog converter using the controller's bus. The analog output is conveyed through a conductor to an output driver circuit to control stimulus amplitude. The microprocessor of the controller may also program a pulse width control module using the bus. The pulse width control provides an enabling pulse of duration equal to the pulse width via a conductor. Pulses with the selected characteristics are then delivered from signal generator through a cable and lead to the target site or to a device such as a proportional valve or pump. For some types of sensors a microprocessor and analog to digital converter will not be necessary. The output from sensor can be filtered by an appropriate electronic filter in order to provide a control signal for signal generator. An example of such a filter is found in U.S. Pat. No. 5,259,387 Muscle Artifact Filter issued to Victor de Pinto on Nov. 9, 1993 incorporated herein by reference in its entirety.
At the time the therapy delivery device is implanted the clinician programs certain key parameters into the memory of the implanted device via telemetry. These parameters may be updated subsequently as needed. The clinician may also program the range of values for pulse width amplitude and frequency which the therapy delivery device may use to optimize the therapy. The clinician may also choose the order in which the parameter changes are made. Alternatively the clinician may elect to use default values or the microprocessor may be programmed to use fuzzy logic rules and algorithms to determine output from the therapeutic delivery device to the patient based on sensor data and threshold values for the biological activity.
Although the application of sensors to detect bodily activity is part of embodiments of systems of the present invention the present invention also contemplates the relevant bodily activity to be detected without sensors. In such case the neuromodulation parameters are adjusted manually in response to the clinical course of the disease or reporting by the patient.
In another embodiment the present invention provides a method of stabilizing and or optimizing bodily functions augmenting function and treating the various diseases disorders listed in Table I by placing a therapy delivery device on a target site of the sympathetic nervous system and preferably a target site in communication with a sympathetic nerve chain and activating the therapy delivery device to apply a therapy signal electrical and or chemical signal to the target site to stabilize and or optimize the bodily function as well as to enhance augment normalize regulate control and or improve the normal and abnormal functioning of the various body organs structures systems for example heart lung gastrointestinal genitourinary vascular and other systems that are innervated by the sympathetic nervous system. This method can be performed in the operating room procedure room or imaging MRI CT X ray fluoroscopy or optical imaged guided suite. The procedures can be carried out peri operative or post operative to a surgical operation as well as in an intensive care unit and any other commonly utilized in patient and out patient capacities. Preferably the surgical operation includes procedures that may require heart bypass equipment procedures that may require a respiratory ventilator or surgeries where intravenous medications are used during and after surgery to influence cardiac and or pulmonary function. In an alternative embodiment this method is performed in a non surgical setting where intravenous medications are used for sedation analgesia and to stabilize cardiac function such as in the setting of myocardial infarction.
The present invention also provides a method for minimizing or resolving side effects and morbidity associated with other therapies used for various disorders including medications surgery chemotherapy and radiation.
Neuromodulation of the target sites of the present invention can be temporary or short term such as less than 10 days intermediate 10 30 days or chronic greater than 30 days. Further the target sites can be accessed using any of the current approaches used by neurosurgeons spinal surgeons cardio thoracic surgeons vascular surgeons abdominal surgeons GU surgeons ENT surgeons plastic surgeons as well as interventional radiologists neurologists pain management specialists rehabilitation and physical therapy specialists and anesthesiologists. The procedures involve direct and indirect placement of the therapy delivery device. This can be achieved using percutaneous endoscopic intravascular or open surgical approach. Furthermore all these approaches can be guided by imaging means of MRI CT X ray fluoroscopy optical imaging. A variety of approaches are available and practiced routinely by the group of specialists listed above. Non limiting and commonly employed procedures are posterior paravertebral thoracic sympathectomy thoracoscopic sympathectomy and retroperitoneal lumbar sympathectomy. Reference is made to Surgical Management of Pain Thieme Medical Publishers Inc. RD 595.5.587 2001 incorporated in its entirety herein by reference thereto for further details. For open surgery anterior supraclavicular transaxillary and posterior paravertebral approaches can be used.
A non limiting example of a method of placing a therapy delivery device on a target site of the sympathetic nervous system will now be described. Referring to a patient is illustrated in the decubitus position where the hips of the patient are preferably below the flexion joint of the operating room table. Subsequent flexion of the table allows some separation of the ribs by dropping the patient s hips and therefore increasing the intercostal space to work through. The ipsilateral arm is abducted on an arm holder. Rotating the table somewhat anteriorly and using reverse Trendelenburg positioning further maximizes the exposure to the superior paravertebral area by allowing the soon to be deflated lung see to fall away from the apical posterior chest wall. This is the preferred position of the patient prior to performing the procedure as this position exposes the vertebral bodies where the sympathetic nerve chain lies extrapleurally.
The procedure begins with placing the patient under general anesthesia and intubated via a double lumen endotracheal tube. The double lumen endotracheal tube permits ventilation of one lung and collapse of the other lung that is to be operated upon without using carbon dioxide insufflation. One incision is made in the midaxillary line in the fifth intercostal space that is identified as port . Port can be used for various reasons but it is preferred that port is used as a telescopic video port which will provide video assistance during the procedure. While under endoscopic observation a second incision is made in the third or fourth intercostal space at the anterior axillary line that is identified as port . Port is preferably used as an instrument channel. A third incision is made at the posterior axillary line just below the scapular tip in the fifth interspace that is identified as port . Port is preferably used as a second instrument channel. Preferably the three incisions made during the thoracoscopic sympathectomy are approximately 2 cm in length. Additional incisions i.e. ports can be made as necessary.
Referring to in which axial cross section and exposed views of the surgical sites are provided respectively the surgical exposure and preparation of the relevant portion of the sympathetic nerve chain for the treatment of various physiological and pathological conditions is described. After the lung is collapsed and if necessary retracted down by a fanning instrument via one of the working ports the sympathetic nerve chain is visualized under the parietal pleura as a raised longitudinal structure located at the junction of the ribs and the vertebral bodies . The parietal pleura is grasped between the first and second ribs in the region overlying the sympathetic nerve chain and scissors or endoscopic cautery is used to incise the parietal pleura in a vertical manner just below the first rib thereby exposing the sympathetic nerve chain .
Referring now to in which the implantation of a multichannel stimulation lead at a specific location of the sympathetic nerve chain is shown the implantation of the stimulation lead is now described. Once the sympathetic nerve chain has been exposed a multichannel stimulation lead is implanted adjacent to a predetermined site along the sympathetic nerve chain that is associated with the physiological disorder or pathological condition being treated. The stimulation lead is sutured in place to the nearby tissue or parietal pleura . The vicinity of the sympathetic nerve chain for which the stimulation lead is positioned depends on the physiological disorder or pathological condition being treated.
The influence of the neuromodulation of the systems and methods of the present invention can be manifested as changes in biological activity. For example with respect to treating cardiovascular medical conditions such changes include changes in heart rate heart rhythm blood flow to the heart and cardiac contractility. These changes are reflected physiologically by parameters such as for example heart rate blood pressure cardiac output stroke volume pulmonary wedge pressure and venous pressure all of which can be measured. Preferably the neuromodulation allows for selective changes in one or more aspects of the target organ whose function is being influenced without influencing or minimally influencing other functions of the target organ. For example cardiac function may be selectively influenced by varying the parameters of stimulation such that cardiac contractility is affected but not heart rate.
The influence of neuromodulation of this method of the present invention on the respiratory system for example can be manifested in respiratory rate changes in elasticity of the lung tissue changes in diameter of the bronchioles and other structures in the respiratory branches perfusion and diffusion of blood and its products at the level of the alveoli and blood flow to the lungs. These changes are reflected physiologically by parameters such as for example respiratory rate pH of blood bicarbonate level ventilatory volume lung capacity and blood oxygenation.
Stimulation of the lower cervical and upper thoracic sympathetic ganglia may impact the tracheal bronchial and pulmonary systems. Therefore electrical stimulation of the lower cervical and upper thoracic sympathetic ganglia may be helpful in treating bronchospasms episodic or chronic spasms of the airways asthma and other entities by controlling the contraction of the smooth muscles of the airways.
Accordingly in this example implantation of the electrode over the inferior portion of the T 1 through T 5 ganglia could be a very useful application of the present invention to treat asthma. Also implantation of the electrode over the inferior cervical ganglion could also be useful to treat asthma.
The foregoing description has been set forth merely to illustrate the invention and is not intended as being limiting. Each of the disclosed aspects and embodiments of the present invention may be considered individually or in combination with other aspects embodiments and variations of the invention. In addition unless otherwise specified none of the steps of the methods of the present invention are confined to any particular order of performance. Modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art and such modifications are within the scope of the present invention. For example although methods of treating specific medical conditions are described with respect to electrical and chemical neuromodulation other modes of neuromodulation can be used such as light magnetism sound pressure and heat cold. Furthermore all references cited herein are incorporated by reference in their entirety.
| 407.589552 | 3,197 | 0.842998 | eng_Latn | 0.999617 |
---
title: eXtensible Application Markup Language (XAML)
description: XAML is a declarative markup language that can be used to define user interfaces. The user interface is defined in an XML file using XAML syntax, while runtime behavior is defined in a separate code-behind file.
ms.prod: xamarin
ms.assetid: CD30EECC-8AC1-4CF5-A4FE-348420A6231E
ms.technology: xamarin-forms
author: charlespetzold
ms.author: chape
ms.date: 06/18/2018
ms.openlocfilehash: f593e5d084d8cd7071d17195663478d430d994b7
ms.sourcegitcommit: 6e955f6851794d58334d41f7a550d93a47e834d2
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 07/12/2018
ms.locfileid: "38995484"
---
# <a name="extensible-application-markup-language-xaml"></a>eXtensible Application Markup Language (XAML)
_XAML is a declarative markup language that can be used to define user interfaces. The user interface is defined in an XML file using XAML syntax, while runtime behavior is defined in a separate code-behind file._
> [!VIDEO https://youtube.com/embed/H6UOrSyhTEE]
**Evolve 2016: Becoming a XAML Master**
> [!NOTE]
> Try the [XAML Standard Preview](standard/index.md)
<a name="xaml" />
## <a name="xaml-basicsxaml-basicsindexmd"></a>[XAML Basics](xaml-basics/index.md)
XAML allows developers to define user interfaces in Xamarin.Forms applications using markup rather than code. XAML is never required in a Xamarin.Forms program, but it is toolable, and is often more visually coherent and more succinct than equivalent code. XAML is particularly well suited for use with the popular Model-View-ViewModel (MVVM) application architecture: XAML defines the View that is linked to ViewModel code through XAML-based data bindings.
## <a name="xaml-compilationxamlcmd"></a>[XAML Compilation](xamlc.md)
XAML can optionally be compiled directly into intermediate language (IL) with the XAML compiler (XAMLC). This article describes how to use XAMLC, and its benefits.
## <a name="xaml-previewerxaml-previewermd"></a>[XAML Previewer](xaml-previewer.md)
The [XAML Previewer](~/xamarin-forms/xaml/xaml-previewer.md), announced at Xamarin Evolve 2016, is available for testing in the alpha channel.
## <a name="xaml-namespacesnamespacesmd"></a>[XAML Namespaces](namespaces.md)
XAML uses the `xmlns` XML attribute for namespace declarations. This article introduces the XAML namespace syntax, and demonstrates how to declare a XAML namespace to access a type.
## <a name="xaml-markup-extensionsmarkup-extensionsindexmd"></a>[XAML Markup Extensions](markup-extensions/index.md)
XAML includes markup extensions for setting attributes to values or objects that cannot be expressed with simple strings. These include referencing constants, static properties and fields, resource dictionaries, and data bindings.
## <a name="field-modifiersfield-modifiersmd"></a>[Field Modifiers](field-modifiers.md)
The `x:FieldModifier` namespace attribute specifies the access level for generated fields for named XAML elements.
## <a name="passing-argumentspassing-argumentsmd"></a>[Passing Arguments](passing-arguments.md)
XAML can be used to pass arguments to non-default constructors or to factory methods. This article demonstrates using the XAML attributes that can be used to pass arguments to constructors, to call factory methods, and to specify the type of a generic argument.
## <a name="bindable-propertiesbindable-propertiesmd"></a>[Bindable Properties](bindable-properties.md)
In Xamarin.Forms, the functionality of common language runtime (CLR) properties is extended by bindable properties. A bindable property is a special type of property, where the property's value is tracked by the Xamarin.Forms property system. This article provides an introduction to bindable properties, and demonstrates how to create and consume them.
## <a name="attached-propertiesattached-propertiesmd"></a>[Attached Properties](attached-properties.md)
An attached property is a special type of bindable property, defined in one class but attached to other objects, and recognizable in XAML as an attribute that contains a class and a property name separated by a period. This article provides an introduction to attached properties, and demonstrates how to create and consume them.
## <a name="resource-dictionariesresource-dictionariesmd"></a>[Resource Dictionaries](resource-dictionaries.md)
XAML resources are definitions of objects that can be used more than once. A [`ResourceDictionary`](xref:Xamarin.Forms.ResourceDictionary) allows resources to be defined in a single location, and reused throughout a Xamarin.Forms application. This article demonstrates how to create and consume a `ResourceDictionary`, and how to merge one `ResourceDictionary` into another.
| 48.768116 | 284 | 0.734027 | kor_Hang | 1.00001 |
# LRU Cachify
This very small module provides a higher-order function `cachify` that adds cache behavior based on [lru cache](https://www.npmjs.com/package/lru-cache) to any async function *f* with string(ifiable) arguments. Calls to the resulting function *f'* = `cachify`(*f*) are identified by combining those arguments in an easily configurable way so that `cachify` knows when it should reuse cached results.
## Quick start
```sh
npm install lru-cachify2
```
```javascript
import { cachify } from 'lru-cachify2'
// the assumption is that, given the same id, this async function will return the same results forever (or some time at least)
const basicRequest = id => fetch(`someUrl?id=${id}`).then(res => res.json())
// this one too, except it will only fetch once, then reuse the cached result for ten minutes
const cachedRequest = cachify(basicRequest, {max: 100, maxAge: 6e5}, id => id)
```
## What does it do?
* tiny footprint (475 bytes minified + lightweight lru-cache dependency)
* based on lru-cache: stores JavaScript values in memory, no need for serialization and no overhead (but small size)
* generic higher-order function with simple but powerful options, works with anything that returns a Promise
* works in case of simultaneous calls by caching the Promise until it resolves → no race condition!
* fine-grained caching of promise rejections
* TypeScript correctly infers that the result has the same type as the original function :)
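The "no race condition" point is the key trick: the pending Promise itself is cached synchronously, so concurrent callers share one in-flight request. Here is a minimal sketch of that idea — not the library's actual implementation, and without the LRU eviction, options, or fine-grained error caching described below:

```javascript
// Sketch of promise caching; the real module delegates storage to lru-cache.
function cachifyLite(fn, hash = (...args) => args.join('@')) {
  const cache = new Map()
  return (...args) => {
    const key = hash(...args)
    if (!cache.has(key)) {
      // Store the pending Promise immediately: simultaneous calls reuse it.
      const pending = fn(...args).catch(err => {
        cache.delete(key) // default behavior: don't cache rejections
        throw err
      })
      cache.set(key, pending)
    }
    return cache.get(key)
  }
}
```

Because the Promise is stored before it resolves, two calls made back-to-back with the same key trigger the underlying function only once.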
## Cache configuration
The second argument to `cachify` is an [LRU cache options object](https://www.npmjs.com/package/lru-cache#options) passed directly to the LRU constructor. The most useful options are `max` (how many entries can be stored before the oldest ones get overwritten) and `maxAge` (how long an entry can be reused before refreshing).
`cachify` uses its third argument (the `hash` function) to determine a string key for each call to the resulting function *f'*. It will reuse previously stored results for *f* with that key instead of calling *f*. The `hash` function is called with all the arguments given to *f'*, and should return a string unique to the combination of all the arguments that determine the result of *f*. For example:
* If *f* takes only one ID-like argument, `hash` can just return that, like in the quick start example.
* If *f* takes more arguments but they don't impact the results (such as a configuration telling *f* which mirror to call for a particular service), `hash` should not use those extra arguments
* If *f* takes multiple arguments, the combination of which determines the result, you can use the built-in `joinN` function to create a suitable `hash` function. See below.
## Multiple arguments example
In this example, the function *l10n(string, lang, connection)* takes three arguments. The third argument doesn't change the results, it just tells *l10n* how to connect to a database. The first two arguments represent a string ID and a language. There is only one result for a given string ID in any language, so the combination of *string* and *lang* can be hashed into a suitable key.
Let's make a simple `hash` function that just joins two arguments into a string with "@" as a delimiter.
```javascript
const hash = (s1, s2) => [s1, s2].join('@')
```
Now we can create a cached version of *l10n*:
```javascript
const cachedL10n = cachify(l10n, {}, hash)
```
That's a very frequent use case, so lru-cachify2 comes with a `joinN` function. This is strictly equivalent to our `hash` function:
```javascript
import { joinN } from 'lru-cachify2'
const hash = joinN(2)
```
So you could even more easily define your cached *l10n* function:
```javascript
const cachedL10n = cachify(l10n, {}, joinN(2))
```
If no `hash` argument is provided, `cachify` will default to `joinN(f.length)` (i.e. joining **all** the named arguments for *f*) so be careful when *f* has arguments irrelevant to the key like in this example.
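For reference, `joinN(n)` behaves like the following one-liner. This is a sketch of the behavior the README describes (joining the first *n* arguments with `@`), not necessarily the module's exact source:

```javascript
// Joins the first n arguments with '@', matching the hash shown above.
const joinN = n => (...args) => args.slice(0, n).join('@')
```

So `joinN(2)('welcome', 'fr', someConnection)` ignores the third argument and yields `'welcome@fr'`.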
## Non-string keys
This is JavaScript and there is nothing that really prevents you from returning arrays, other objects, or Symbols from the `hash` function (even mixing return types). The underlying LRU cache uses a `Map`, which accepts anything as a key.
Just beware that object keys are compared by reference so the hash function can't, for instance, just put *f'* arguments into an array and expect it to match another array containing the same arguments from a previous call to *f'* (the arrays would not be reference-equal).
Numbers (barring floating-point precision issues), BigInts and booleans should work fine as keys, though, since they are compared by value.
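A quick demonstration of the reference-vs-value distinction with a plain `Map` (which is what the cache uses under the hood):

```javascript
const cache = new Map()
cache.set([1, 'en'], 'hello') // the key is this exact array object
const arrayHit = cache.get([1, 'en']) // a *new* array: different reference
cache.set(42, 'answer')
const numberHit = cache.get(42) // primitives compare by value
```

Here `arrayHit` is `undefined` while `numberHit` is `'answer'`, which is why compound arguments should be hashed into a string (for example with `joinN`) rather than passed as fresh objects.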
## Caching errors
By default, when `cachify` needs to call *f* to bring a fresh result for *f'* and the promise from *f* rejects, the cache for that key is cleared. So *f* will be called again until the promise resolves.
However, it can be useful to cache rejections, especially those errors whose semantics mean that it's unlikely to be fixed by retrying as-is (such as an HTTP error 404), or when you don't want to overload the infrastructure with immediate retries.
The optional fourth argument to `cachify` is a function that takes an Error and should return how long (in milliseconds) that Error is to be be cached (zero means it's not cached at all, `Infinity` is unlimited, and it is perfectly OK to return `false` instead of zero).
In this example, we cache 404 results forever and 401 results for a few seconds:
```javascript
const basicRequest = id => fetch(`someUrl?id=${id}`).then(res => {
  if (res.status >= 400) throw new HTTPError(res.status)
  return res.json()
})
const cachedRequest = cachify(basicRequest, {}, id => id, error => {
  switch (error.status) {
    case 404: return Infinity
    case 401: return 3000
    default: return 0
  }
})
```
## Does it work everywhere?
Like its dependency lru-cache, lru-cachify2 relies on es2015 features (`Map`s) that are difficult to polyfill, but nobody uses browsers or Node versions that don't have es2015 support anymore, right?
The TypeScript compiler is configured to output a commonJS module requiring es2017 support. Feel free to tweak that or just import the single-file source into your project and use your own transpiler configuration. `cachify` should easily transpile down to es2015.
## license
[MIT](https://tldrlegal.com/license/mit-license)
| 56.150442 | 402 | 0.754137 | eng_Latn | 0.998526 |
---
title: API Reference A
toc_footers:
- copyright <a href="https://www.ggtech.global" target="_blank">GGTech</a> 2021 ©
search: true
code_clipboard: true
---
# Introduction
| 12.2 | 83 | 0.688525 | yue_Hant | 0.314509 |
---
# required metadata
title: Windows 10 compliance settings in Microsoft Intune - Azure | Microsoft Docs
description: See a list of all the settings you can use when setting compliance for your Windows 10, Windows Holographic, and Surface Hub devices in Microsoft Intune. Check for compliance on the minimum and maximum operating system, set password restrictions and length, check for partner anti-virus (AV) solutions, enable encryption on data storage, and more.
keywords:
author: brenduns
ms.author: brenduns
manager: dougeby
ms.date: 10/19/2020
ms.topic: reference
ms.service: microsoft-intune
ms.subservice: protect
ms.localizationpriority: medium
ms.technology:
# optional metadata
#ROBOTS:
#audience:
ms.reviewer: samyada
ms.suite: ems
search.appverid: MET150
#ms.tgt_pltfrm:
ms.custom: intune-azure
ms.collection: M365-identity-device-management
---
# Windows 10 and later settings to mark devices as compliant or not compliant using Intune
This article lists and describes the different compliance settings you can configure on Windows 10 and later devices in Intune. As part of your mobile device management (MDM) solution, use these settings to require BitLocker, set a minimum and maximum operating system, set a risk level using Microsoft Defender Advanced Threat Protection (ATP), and more.
This feature applies to:
- Windows 10 and later
- Windows Holographic for Business
- Surface Hub
As an Intune administrator, use these compliance settings to help protect your organizational resources. To learn more about compliance policies, and what they do, see [get started with device compliance](device-compliance-get-started.md).
## Before you begin
[Create a compliance policy](create-compliance-policy.md#create-the-policy). For **Platform**, select **Windows 10 and later**.
## Device Health
### Windows Health Attestation Service evaluation rules
- **Require BitLocker**:
Windows BitLocker Drive Encryption encrypts all data stored on the Windows operating system volume. BitLocker uses the Trusted Platform Module (TPM) to help protect the Windows operating system and user data. It also helps confirm that a computer isn't tampered with, even if it's left unattended, lost, or stolen. If the computer is equipped with a compatible TPM, BitLocker uses the TPM to lock the encryption keys that protect the data. As a result, the keys can't be accessed until the TPM verifies the state of the computer.
- **Not configured** (*default*) - This setting isn't evaluated for compliance or non-compliance.
- **Require** - The device can protect data that's stored on the drive from unauthorized access when the system is off, or hibernates.
[Device HealthAttestation CSP - BitLockerStatus](/windows/client-management/mdm/healthattestation-csp)
- **Require Secure Boot to be enabled on the device**:
- **Not configured** (*default*) - This setting isn't evaluated for compliance or non-compliance.
- **Require** - The system is forced to boot to a factory trusted state. The core components that are used to boot the machine must have correct cryptographic signatures that are trusted by the organization that manufactured the device. The UEFI firmware verifies the signature before it lets the machine start. If any files are tampered with, which breaks their signature, the system doesn't boot.
> [!NOTE]
> The **Require Secure Boot to be enabled on the device** setting is supported on some TPM 1.2 and 2.0 devices. For devices that don't support TPM 2.0 or later, the policy status in Intune shows as **Not Compliant**. For more information on supported versions, see [Device Health Attestation](/windows/security/information-protection/tpm/trusted-platform-module-overview#device-health-attestation).
- **Require code integrity**:
Code integrity is a feature that validates the integrity of a driver or system file each time it's loaded into memory.
- **Not configured** (*default*) - This setting isn't evaluated for compliance or non-compliance.
- **Require** - Require code integrity, which detects if an unsigned driver or system file is being loaded into the kernel. It also detects if a system file is changed by malicious software or run by a user account with administrator privileges.
More resources:
- For details about how the Health Attestation service works, see [Health Attestation CSP](/windows/client-management/mdm/healthattestation-csp).
- [Support Tip: Using Device Health Attestation Settings as Part of Your Intune Compliance Policy](https://techcommunity.microsoft.com/t5/Intune-Customer-Success/Support-Tip-Using-Device-Health-Attestation-Settings-as-Part-of/ba-p/282643).
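Conceptually, each of the three settings above is a rule of the form "if the policy says Require, the attested fact must be true; if Not configured, the fact is not evaluated". A toy evaluation of that logic follows. The object shapes and setting names are invented for illustration; they are not the Health Attestation CSP schema or Intune's actual code.

```javascript
// Hypothetical report/policy shapes, for illustration only.
function isDeviceHealthCompliant(report, policy) {
  const rules = [
    ['requireBitLocker', 'bitLockerEnabled'],
    ['requireSecureBoot', 'secureBootEnabled'],
    ['requireCodeIntegrity', 'codeIntegrityEnabled'],
  ]
  // "Not configured" settings are simply skipped during evaluation.
  return rules.every(([setting, fact]) =>
    policy[setting] !== 'require' || report[fact] === true
  )
}
```

A device that fails any required check is reported as noncompliant, while settings left unconfigured never affect the result.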
## Device Properties
### Operating System Version
- **Minimum OS version**:
Enter the minimum allowed version in the **major.minor.build.CU number** format. To get the correct value, open a command prompt, and type `ver`. The `ver` command returns the version in the following format:
`Microsoft Windows [Version 10.0.17134.1]`
When a device has an earlier version than the OS version you enter, it's reported as noncompliant. A link with information on how to upgrade is shown. The end user can choose to upgrade their device. After they upgrade, they can access company resources.
- **Maximum OS version**:
Enter the maximum allowed version, in the **major.minor.build.revision number** format. To get the correct value, open a command prompt, and type `ver`. The `ver` command returns the version in the following format:
`Microsoft Windows [Version 10.0.17134.1]`
When a device is using an OS version later than the version entered, access to organization resources is blocked. The end user is asked to contact their IT administrator. The device can't access organization resources until the rule is changed to allow the OS version.
- **Minimum OS required for mobile devices**:
Enter the minimum allowed version, in the major.minor.build number format.
When a device has an earlier version than the OS version you enter, it's reported as noncompliant. A link with information on how to upgrade is shown. The end user can choose to upgrade their device. After they upgrade, they can access company resources.
- **Maximum OS required for mobile devices**:
Enter the maximum allowed version, in the major.minor.build number format.
When a device is using an OS version later than the version entered, access to organization resources is blocked. The end user is asked to contact their IT administrator. The device can't access organization resources until the rule is changed to allow the OS version.
- **Valid operating system builds**:
Enter a range for the acceptable operating system versions, including a minimum and maximum. You can also **Export** a comma-separated values (CSV) file list of these acceptable OS build numbers.
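The minimum/maximum checks above boil down to a numeric, segment-by-segment comparison of `major.minor.build.revision` strings. A sketch of that comparison follows; it is illustrative only, not Intune's evaluation engine.

```javascript
// Compare dotted version strings segment by segment (missing segments count as 0).
const parseVersion = s => s.split('.').map(Number)
function compareVersions(a, b) {
  const va = parseVersion(a)
  const vb = parseVersion(b)
  for (let i = 0; i < Math.max(va.length, vb.length); i++) {
    const diff = (va[i] || 0) - (vb[i] || 0)
    if (diff !== 0) return Math.sign(diff)
  }
  return 0
}
// A device is in range when min <= device <= max.
const isOsVersionCompliant = (device, min, max) =>
  compareVersions(device, min) >= 0 && compareVersions(device, max) <= 0
```

For example, a device reporting `10.0.17134.1` satisfies a minimum of `10.0.17134.0`, because the first three segments tie and the revision segment decides.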
## Configuration Manager Compliance
Applies only to co-managed devices running Windows 10 and later. Intune-only devices return a not available status.
- **Require device compliance from Configuration Manager**:
- **Not configured** (*default*) - Intune doesn't check for any of the Configuration Manager settings for compliance.
- **Require** - Require all settings (configuration items) in Configuration Manager to be compliant.
## System Security
### Password
- **Require a password to unlock mobile devices**:
- **Not configured** (*default*) - This setting isn't evaluated for compliance or non-compliance.
- **Require** - Users must enter a password before they can access their device.
- **Simple passwords**:
- **Not configured** (*default*) - Users can create simple passwords, such as **1234** or **1111**.
- **Block** - Users can't create simple passwords, such as **1234** or **1111**.
- **Password type**:
Choose the type of password or PIN required. Your options:
- **Device default** (*default*) - Require a password, numeric PIN, or alphanumeric PIN
- **Numeric** - Require a password or numeric PIN
- **Alphanumeric** - Require a password, or alphanumeric PIN.
When set to *Alphanumeric*, the following settings are available:
- **Password complexity**:
Your options:
- **Require digits and lowercase letters** (*default*)
- **Require digits, lowercase letters, and uppercase letters**
- **Require digits, lowercase letters, uppercase letters, and special characters**
> [!TIP]
> The Alphanumeric password policies can be complex. We encourage administrators to read the CSPs for more information:
>
> - [DeviceLock/AlphanumericDevicePasswordRequired CSP](/windows/client-management/mdm/policy-csp-devicelock#devicelock-alphanumericdevicepasswordrequired)
> - [DeviceLock/MinDevicePasswordComplexCharacters CSP](/windows/client-management/mdm/policy-csp-devicelock#devicelock-mindevicepasswordcomplexcharacters)
- **Minimum password length**:
Enter the minimum number of digits or characters that the password must have.
- **Maximum minutes of inactivity before password is required**:
Enter the idle time before the user must reenter their password.
- **Password expiration (days)**:
Enter the number of days before the password expires, and they must create a new one, from 1-730.
- **Number of previous passwords to prevent reuse**:
Enter the number of previously used passwords that can't be used.
- **Require password when device returns from idle state (Mobile and Holographic)**:
- **Not configured** (*default*)
- **Require** - Require device users to enter the password every time the device returns from an idle state.
> [!IMPORTANT]
> When the password requirement is changed on a Windows desktop, users are impacted the next time they sign in, as that's when the device goes from idle to active. Users with passwords that meet the requirement are still prompted to change their passwords.
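Taken together, the password settings above describe a predicate over a candidate password. A minimal sketch in Python — the default length and required character sets here are illustrative policy values, not Intune defaults; on real devices these checks are enforced through the DeviceLock CSP:

```python
import string

def meets_password_policy(pw, min_length=8,
                          require_sets=("digit", "lower", "upper", "special")):
    # Character-class checks corresponding to the alphanumeric complexity options
    checks = {
        "digit": any(c.isdigit() for c in pw),
        "lower": any(c.islower() for c in pw),
        "upper": any(c.isupper() for c in pw),
        "special": any(c in string.punctuation for c in pw),
    }
    return len(pw) >= min_length and all(checks[s] for s in require_sets)

print(meets_password_policy("Passw0rd!"))  # True
print(meets_password_policy("1234"))       # False (a blocked "simple" password)
```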
### Encryption
- **Encryption of data storage on a device**:
This setting applies to all drives on a device.
- **Not configured** (*default*)
- **Require** - Use *Require* to encrypt data storage on your devices.
[DeviceStatus CSP - DeviceStatus/Compliance/EncryptionCompliance](/windows/client-management/mdm/devicestatus-csp)
> [!NOTE]
> The **Encryption of data storage on a device** setting generically checks for the presence of encryption on the device, more specifically at the OS drive level. For a more robust encryption setting, consider using **Require BitLocker**, which leverages Windows Device Health Attestation to validate BitLocker status at the TPM level.
### Device Security
- **Firewall**:
- **Not configured** (*default*) - Intune doesn't control the Microsoft Defender Firewall, nor change existing settings.
- **Require** - Turn on the Microsoft Defender Firewall, and prevent users from turning it off.
[Firewall CSP](/windows/client-management/mdm/firewall-csp)
> [!NOTE]
> If the device immediately syncs after a reboot, or immediately syncs waking from sleep, then this setting may report as an **Error**. This scenario might not affect the overall device compliance status. To re-evaluate the compliance status, manually [sync the device](../user-help/sync-your-device-manually-windows.md).
- **Trusted Platform Module (TPM)**:
- **Not configured** (*default*) - Intune doesn't check the device for a TPM chip version.
- **Require** - Intune checks the TPM chip version for compliance. The device is compliant if the TPM chip version is greater than **0** (zero). The device isn't compliant if there isn't a TPM version on the device.
[DeviceStatus CSP - DeviceStatus/TPM/SpecificationVersion](/windows/client-management/mdm/devicestatus-csp)
- **Antivirus**:
- **Not configured** (*default*) - Intune doesn't check for any antivirus solutions installed on the device.
- **Require** - Check compliance using antivirus solutions that are registered with [Windows Security Center](https://blogs.windows.com/windowsexperience/2017/01/23/introducing-windows-defender-security-center/), such as Symantec and Microsoft Defender.
[DeviceStatus CSP - DeviceStatus/Antivirus/Status](/windows/client-management/mdm/devicestatus-csp)
- **Antispyware**:
- **Not configured** (*default*) - Intune doesn't check for any antispyware solutions installed on the device.
- **Require** - Check compliance using antispyware solutions that are registered with [Windows Security Center](https://blogs.windows.com/windowsexperience/2017/01/23/introducing-windows-defender-security-center/), such as Symantec and Microsoft Defender.
[DeviceStatus CSP - DeviceStatus/Antispyware/Status](/windows/client-management/mdm/devicestatus-csp)
### Defender
*The following compliance settings are supported with Windows 10 Desktop.*
- **Microsoft Defender Antimalware**:
- **Not configured** (*default*) - Intune doesn't control the service, nor change existing settings.
- **Require** - Turn on the Microsoft Defender anti-malware service, and prevent users from turning it off.
- **Microsoft Defender Antimalware minimum version**:
Enter the minimum allowed version of Microsoft Defender anti-malware service. For example, enter `4.11.0.0`. When left blank, any version of the Microsoft Defender anti-malware service can be used.
*By default, no version is configured*.
- **Microsoft Defender Antimalware security intelligence up-to-date**:
Controls the Windows Security virus and threat protection updates on the devices.
- **Not configured** (*default*) - Intune doesn't enforce any requirements.
- **Require** - Force the Microsoft Defender security intelligence to be up to date.
[Defender CSP - Defender/Health/SignatureOutOfDate CSP](/windows/client-management/mdm/defender-csp)
For more information, see [Security intelligence updates for Microsoft Defender Antivirus and other Microsoft antimalware](https://www.microsoft.com/en-us/wdsi/defenderupdates).
- **Real-time protection**:
- **Not configured** (*default*) - Intune doesn't control this feature, nor change existing settings.
- **Require** - Turn on real-time protection, which scans for malware, spyware, and other unwanted software.
[Policy CSP - Defender/AllowRealtimeMonitoring CSP](/windows/client-management/mdm/policy-csp-defender#defender-allowrealtimemonitoring)
## Microsoft Defender ATP
### Microsoft Defender Advanced Threat Protection rules
- **Require the device to be at or under the machine risk score**:
Use this setting to take the risk assessment from your defense threat services as a condition for compliance. Choose the maximum allowed threat level:
- **Not configured** (*default*)
- **Clear** - This option is the most secure, as the device can't have any threats. If the device is detected as having any level of threats, it's evaluated as non-compliant.
- **Low** - The device is evaluated as compliant if only low-level threats are present. Anything higher puts the device in a non-compliant status.
- **Medium** - The device is evaluated as compliant if existing threats on the device are low or medium level. If the device is detected to have high-level threats, it's determined to be non-compliant.
- **High** - This option is the least secure, and allows all threat levels. It may be useful if you're using this solution only for reporting purposes.
To set up Microsoft Defender ATP (Advanced Threat Protection) as your defense threat service, see [Enable Microsoft Defender ATP with Conditional Access](advanced-threat-protection.md).
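The threat-level options form an ordered scale, so the compliance decision is just a comparison on that order. A sketch of that logic — the names are ours for illustration, not an Intune API:

```python
# Ordered from most to least restrictive, mirroring the options above
LEVELS = ["clear", "low", "medium", "high"]

def compliant_under_max_risk(device_threat_level, max_allowed):
    # Compliant when the detected level does not exceed the allowed maximum
    return LEVELS.index(device_threat_level) <= LEVELS.index(max_allowed)

print(compliant_under_max_risk("low", "medium"))   # True
print(compliant_under_max_risk("high", "medium"))  # False
```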
## Windows Holographic for Business
Windows Holographic for Business uses the **Windows 10 and later** platform. Windows Holographic for Business supports the following setting:
- **System Security** > **Encryption** > **Encryption of data storage on device**.
To verify device encryption on the Microsoft HoloLens, see [Verify device encryption](/hololens/security-encryption-data-protection).
## Surface Hub
Surface Hub uses the **Windows 10 and later** platform. Surface Hubs are supported for both compliance and Conditional Access. To enable these features on Surface Hubs, we recommend you [enable Windows 10 automatic enrollment](../enrollment/windows-enroll.md) in Intune (requires Azure Active Directory (Azure AD)), and target the Surface Hub devices as device groups. Surface Hubs are required to be Azure AD joined for compliance and Conditional Access to work.
For guidance, see [set up enrollment for Windows devices](../enrollment/windows-enroll.md).
**Special consideration for Surface Hubs running Windows 10 Team OS**:
Surface Hubs that run Windows 10 Team OS do not support the Microsoft Defender ATP and Password compliance policies at this time. Therefore, for Surface Hubs that run Windows 10 Team OS, set the following two settings to their default of *Not configured*:
- In the category [Password](#password), set **Require a password to unlock mobile devices** to the default of *Not configured*.
- In the category [Microsoft Defender ATP](#microsoft-defender-atp), set **Require the device to be at or under the machine risk score** to the default of *Not configured*.
## Next steps
- [Add actions for noncompliant devices](actions-for-noncompliance.md) and [use scope tags to filter policies](../fundamentals/scope-tags.md).
- [Monitor your compliance policies](compliance-policy-monitor.md).
- See the [compliance policy settings for Windows 8.1](compliance-policy-create-windows-8-1.md) devices.
- [Backlog](<Backlog.md>) 👇
- {{[kanban](<kanban.md>)}}
- [TODO](<TODO.md>)
- For you: drop anything that comes to mind into [Backlog](<Backlog.md>) first~
This [TODO](<TODO.md>) column only holds tasks that are confirmed to be done. Workflow:
[Backlog](<Backlog.md>) -> [TODO](<TODO.md>) -> [IN PROGRESS](<IN PROGRESS.md>) -> [DONE](<DONE.md>)
- Tips: whenever you see information in any channel that might be useful for the weekly newsletter, jot it down and remember to add the [Newsletter](<Newsletter.md>) tag so it can all be collected in one place 👇
- {{[query](<query.md>): {and: [Newsletter](<Newsletter.md>) {not: [kanban](<kanban.md>)]}}}}
- [IN PROGRESS](<IN PROGRESS.md>)
- [1.5: Unlinked References](https://www.notion.so/1-5-Unlinked-References-9ab449a5dde74ef4bc016927fe2c46d1) [Alex](<Alex.md>) #[JΛKΞ](<JΛKΞ.md>)
- [1.6: The Navigation Bar](https://www.notion.so/1-6-The-Navigation-Bar-06e85b3e56614395a03409a01ea5322a) [Alex](<Alex.md>) #[Frank Wu](<Frank Wu.md>)
- [1.7- Soft Line Breaks & Cmd-shift-v](https://www.notion.so/1-7-Soft-Line-Breaks-Cmd-shift-v-db913a901cab43ff84cc19bf36ade4dd) [Alex](<Alex.md>) #[JΛKΞ](<JΛKΞ.md>)
- [1.8: The Sidebar](https://www.notion.so/1-8-The-Sidebar-95a50097bffc4629af02efd322e826ea) [Alex](<Alex.md>) #[Frank Wu](<Frank Wu.md>)
- [7.4: Introduction to Zettelkasten](https://www.notion.so/7-4-Introduction-to-Zettelkasten-9f530ce53b1d45eb9755f2033b7514bc) [Alex](<Alex.md>) #[Frank Wu](<Frank Wu.md>)
- Give the blocks in Roam that collect newsletter information a fixed tag #[Newsletter](<Newsletter.md>)
- Translation: Effective Note-Taking Lesson 6 [wangxh1000](<wangxh1000.md>)
- [1.4: Basics of Tags, Backlinks and Pages](https://www.notion.so/1-4-Basics-of-Tags-Backlinks-and-Pages-8ecc50cd532a49b2a15483688159155b) [Alex](<Alex.md>) #[Frank Wu](<Frank Wu.md>)
- [DONE](<DONE.md>)
- [[[Effective Note-Taking](<[[Effective Note-Taking.md>) Lesson 4]] #[白瑞 Barry](<白瑞 Barry.md>)
- [翻译:[[Effective Note-Taking](<翻译:[[Effective Note-Taking.md>) Lesson 12]] [wangxh1000](<wangxh1000.md>)
- Translation: [Effective Note-Taking](<Effective Note-Taking.md>) Lesson 5 [wangxh1000](<wangxh1000.md>)
# Backlinks
## [July 11th, 2020](<July 11th, 2020.md>)
- Restructure [Team Kanban](<Team Kanban.md>)
- Extract [Team Kanban](<Team Kanban.md>)
- [wangxh1000](<wangxh1000.md>) in [Team Kanban](<Team Kanban.md>)
## [July 13th, 2020](<July 13th, 2020.md>)
- Update [Team Kanban](<Team Kanban.md>)
## [July 15th, 2020](<July 15th, 2020.md>)
- Update [Team Kanban](<Team Kanban.md>)
# finscraper
[](https://travis-ci.com/jmyrberg/finscraper) [](https://finscraper.readthedocs.io/en/latest/?badge=latest)

The library provides an easy-to-use API for fetching data from various Finnish websites:
| Website | Type | Spider API class |
| -------------------------------------------------------------- | ----------------- | ------------------ |
| [Ilta-Sanomat](https://www.is.fi) | News article | `ISArticle` |
| [Iltalehti](https://www.il.fi) | News article | `ILArticle` |
| [YLE Uutiset](https://www.yle.fi/uutiset) | News article | `YLEArticle` |
| [Demi](https://demi.fi) | Discussion thread | `DemiPage` |
| [Suomi24](https://keskustelu.suomi24.fi) | Discussion thread | `Suomi24Page` |
| [Vauva](https://www.vauva.fi) | Discussion thread | `VauvaPage` |
| [Oikotie Asunnot](https://asunnot.oikotie.fi/myytavat-asunnot) | Apartment ad | `OikotieApartment` |
| [Tori](https://www.tori.fi) | Item deal | `ToriDeal` |
Documentation is available at [https://finscraper.readthedocs.io](https://finscraper.readthedocs.io) and [simple online demo here](https://storage.googleapis.com/jmyrberg/index.html#/demo-projects/finscraper).
## Installation
`pip install finscraper`
## Quickstart
Fetch 10 news articles as a pandas DataFrame from [Ilta-Sanomat](https://is.fi):
```python
from finscraper.spiders import ISArticle
spider = ISArticle().scrape(10)
articles = spider.get()
```
The API is similar for all the spiders:

## Contributing
Please see [CONTRIBUTING.md](https://github.com/jmyrberg/finscraper/blob/master/CONTRIBUTING.md) for more information.
---
Jesse Myrberg ([email protected])
---
title: Azure Application Gateway infrastructure configuration
description: This article describes how to configure the Azure Application Gateway infrastructure.
services: application-gateway
author: vhorne
ms.service: application-gateway
ms.topic: conceptual
ms.date: 09/09/2020
ms.author: surmb
ms.openlocfilehash: f214b0b0751f44ea1357f569fd814a7621af61ab
ms.sourcegitcommit: 0ce1ccdb34ad60321a647c691b0cff3b9d7a39c8
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/05/2020
ms.locfileid: "93397618"
---
# <a name="application-gateway-infrastructure-configuration"></a>Application Gateway infrastructure configuration
The Application Gateway infrastructure includes the virtual network, subnets, network security groups (NSGs), and user-defined routes (UDRs).
## <a name="virtual-network-and-dedicated-subnet"></a>Virtual network and dedicated subnet
An application gateway is a dedicated deployment in your virtual network. Within your virtual network, a dedicated subnet is required for the application gateway. You can have multiple instances of a given application gateway deployment in a subnet. You can also deploy other application gateways in the subnet. But you can't deploy any other resource in the application gateway subnet. You can't mix Standard_v2 and Standard Azure Application Gateway SKUs on the same subnet.
> [!NOTE]
> [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are not supported in an Application Gateway subnet.
### <a name="size-of-the-subnet"></a>Size of the subnet
Application Gateway uses one private IP address per instance, plus another private IP address if a private front-end IP is configured.
Azure also reserves five IP addresses in each subnet for internal use: the first four and the last. For example, consider 15 application gateway instances with no private front-end IP. You need at least 20 IP addresses for this subnet: five for internal use and 15 for the application gateway instances.
Consider a subnet that has 27 application gateway instances and a private front-end IP address. In this case, you need 33 IP addresses: 27 for the application gateway instances, one for the private front end, and five for internal use.
Application Gateway (Standard or WAF SKU) can support up to 32 instances (32 instance IP addresses + 1 private front-end IP + 5 reserved by Azure). A minimum subnet size of /26 is therefore recommended.
Application Gateway (Standard_v2 or WAF_v2 SKU) can support up to 125 instances (125 instance IP addresses + 1 private front-end IP + 5 reserved by Azure). A minimum subnet size of /24 is therefore recommended.
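The sizing arithmetic above (instances + an optional private front-end IP + 5 Azure-reserved addresses) can be turned into a small helper. The function is ours, only for illustration:

```python
import math

def min_subnet_prefix(instances, private_frontend=True, reserved=5):
    # reserved: Azure keeps the first four and the last address of every subnet
    needed = instances + (1 if private_frontend else 0) + reserved
    host_bits = math.ceil(math.log2(needed))  # smallest power of two that fits
    return 32 - host_bits

print(min_subnet_prefix(32))   # 26 -> matches the /26 recommendation for v1
print(min_subnet_prefix(125))  # 24 -> matches the /24 recommendation for v2
```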
## <a name="network-security-groups"></a>Network security groups
Network security groups (NSGs) are supported on Application Gateway, but there are a few restrictions:
- You must allow incoming internet traffic on TCP ports 65503-65534 for the Application Gateway v1 SKU, and TCP ports 65200-65535 for the v2 SKU, with the destination subnet as **Any** and the source as the **GatewayManager** service tag. This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. External entities, including the clients of those gateways, can't communicate on these endpoints.
- Outbound internet connectivity can't be blocked. Default outbound rules in the NSG allow internet connectivity. We recommend that you:
  - Don't remove the default outbound rules.
  - Don't create other outbound rules that deny any outbound connectivity.
- Traffic from the **AzureLoadBalancer** tag with the destination subnet as **Any** must be allowed.
### <a name="allow-access-to-a-few-source-ips"></a>Allow access to a few source IPs
For this scenario, use NSGs on the Application Gateway subnet. Put the following restrictions on the subnet in this order of priority:
1. Allow incoming traffic from a source IP address or IP address range, with the destination as the entire Application Gateway subnet address range and the destination port as your inbound access port, for example, port 80 for HTTP access.
2. Allow incoming requests with the source as the **GatewayManager** service tag, the destination as **Any**, and destination ports as 65503-65534 for the Application Gateway v1 SKU, and 65200-65535 for the v2 SKU, for [back-end health status communication](./application-gateway-diagnostics.md). This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. Without proper certificates in place, external entities can't initiate changes on those endpoints.
3. Allow incoming Azure Load Balancer probes (*AzureLoadBalancer* tag) and inbound virtual network traffic (*VirtualNetwork* tag) on the [network security group](../virtual-network/network-security-groups-overview.md).
4. Block all other incoming traffic by using a deny-all rule.
5. Allow outbound traffic to the internet for all destinations.
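The listed order matters because NSG rules are evaluated by ascending priority and the first match decides the outcome. A toy model of that evaluation — the addresses and priority numbers are placeholders, not values from any real deployment:

```python
# Each rule: (priority, matches(packet) -> bool, access)
rules = [
    (100,  lambda p: p["src"] == "203.0.113.10" and p["dport"] == 80, "Allow"),
    (110,  lambda p: p["src"] == "GatewayManager"
                     and 65200 <= p["dport"] <= 65535, "Allow"),
    (120,  lambda p: p["src"] in ("AzureLoadBalancer", "VirtualNetwork"), "Allow"),
    (4096, lambda p: True, "Deny"),  # deny-all catch-all, lowest priority
]

def evaluate(packet):
    # Lowest priority number wins; the first matching rule is applied
    for _, match, access in sorted(rules, key=lambda r: r[0]):
        if match(packet):
            return access

print(evaluate({"src": "203.0.113.10", "dport": 80}))  # Allow
print(evaluate({"src": "198.51.100.7", "dport": 80}))  # Deny
```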
## <a name="supported-user-defined-routes"></a>Supported user-defined routes
> [!IMPORTANT]
> Using user-defined routes (UDRs) on the Application Gateway subnet can cause the health status in the [back-end health view](./application-gateway-diagnostics.md#back-end-health) to appear as **Unknown**. It can also cause Application Gateway logs and metrics to not get generated. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the back-end health, logs, and metrics.
- **v1**
For the v1 SKU, UDRs are supported on the Application Gateway subnet, as long as they don't alter end-to-end request/response communication. For example, you can configure a UDR in the Application Gateway subnet to point to a firewall appliance for packet inspection. But you must make sure that the packet can reach its intended destination after inspection. Failure to do so might result in incorrect health-probe or traffic-routing behavior. This includes learned routes or default 0.0.0.0/0 routes propagated by Azure ExpressRoute or VPN gateways in the virtual network. Any scenario where 0.0.0.0/0 needs to be redirected on-premises (forced tunneling) isn't supported for v1.
- **v2**
For the v2 SKU, there are supported and unsupported scenarios:
**v2 supported scenarios**
> [!WARNING]
> An incorrect configuration of the route table can result in asymmetric routing in Application Gateway v2. Ensure that all management/control-plane traffic is sent directly to the internet and not through a virtual appliance. Logging and metrics might also be affected.
**Scenario 1**: UDR to disable Border Gateway Protocol (BGP) route propagation to the Application Gateway subnet
Sometimes the default gateway route (0.0.0.0/0) is advertised via the ExpressRoute or VPN gateways associated with the Application Gateway virtual network. This breaks management-plane traffic, which requires a direct path to the internet. In such scenarios, a user-defined route can be used to disable BGP route propagation.
To disable BGP route propagation, use the following steps:
1. Create a route table resource in Azure.
2. Disable the **Virtual network gateway route propagation** setting.
3. Associate the route table with the appropriate subnet.
Enabling the UDR for this scenario shouldn't break any existing setups.
**Scenario 2**: UDR to direct 0.0.0.0/0 to the internet
You can create a UDR to send 0.0.0.0/0 traffic directly to the internet.
**Scenario 3**: UDR for Azure Kubernetes Service with kubenet
If you're using kubenet with Azure Kubernetes Service (AKS) and Application Gateway Ingress Controller (AGIC), you need a route table to allow traffic sent to the pods from Application Gateway to be routed to the correct node. This isn't necessary if you use Azure CNI.
To configure the route table so that kubenet works, use the following steps:
1. Go to the resource group created by AKS (its name should begin with "MC_").
2. Find the route table created by AKS in that resource group. The route table should be populated with the following information:
   - The address prefix should be the IP range of the pods you want to reach in AKS.
   - The next hop type should be Virtual appliance.
   - The next hop address should be the IP address of the node that hosts the pods.
3. Associate this route table with the Application Gateway subnet.
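The shape of those route entries can be sketched as data — one entry per node, mapping each node's pod CIDR to a virtual-appliance next hop at the node's IP. The node IPs and pod CIDRs below are placeholders:

```python
def kubenet_routes(pod_cidr_by_node):
    # One route per node: pod CIDR -> VirtualAppliance next hop at the node's IP
    return [
        {"addressPrefix": cidr,
         "nextHopType": "VirtualAppliance",
         "nextHopIpAddress": node_ip}
        for node_ip, cidr in pod_cidr_by_node.items()
    ]

routes = kubenet_routes({"10.240.0.4": "10.244.0.0/24",
                         "10.240.0.5": "10.244.1.0/24"})
print(routes[0]["nextHopType"])  # VirtualAppliance
```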
**v2 unsupported scenarios**
**Scenario 1**: UDR for virtual appliances
Any scenario where 0.0.0.0/0 needs to be redirected through a virtual appliance, a hub/spoke virtual network, or on-premises (forced tunneling) isn't supported for v2.
## <a name="next-steps"></a>Next steps
- [Learn more about configuring the front-end IP address](configuration-front-end-ip.md).
# DevOps: JCS Pipeline Using Oracle Stack Manager

Update: February 7, 2018
## Introduction
This is the first of several labs that are part of the **DevOps JCS Pipeline using Oracle Stack Manager workshop**. This workshop will walk you through the Software Development Lifecycle (SDLC) for a Java Cloud Service (JCS) project that goes through Infrastructure as Code and deployment of a Struts application.
You will take on 3 Personas during the workshop. The **Project Manager** Persona will create the projects, add tasks and features to be worked on, and assign tasks to team members. The Project Manager will then start the initial sprint. The **Operations Engineer** persona will develop a new pipeline for deployment of JCS and DBCS environment. The **Java Developer** persona will develop a new struts based UI to display the product catalog. During the workshop, you will get exposure to Oracle Developer Cloud Service, Java Cloud Service and Oracle Stack Manager.
***To log issues***, click here to go to the [github oracle](https://github.com/oracle/learning-library/issues/new) repository issue submission form.
## Objectives
- Create Initial Project
- Create Issues / Task
- Create Agile Board and initial Sprint
- Add Issues to Sprint
## Required Artifacts
- The following lab requires an Oracle Public Cloud account.
# Start Project
## Create Developer Cloud Service Project
### **STEP 1**: Login to your **Traditional** Oracle Cloud Account
- From any browser, go to the URL below, or if using a trial account, use the URL emailed to you in your confirmation email:
`https://cloud.oracle.com`
- click **Sign In** in the upper right hand corner of the browser

- **IMPORTANT** - Under my services, select from the drop down list the correct data center and click on **My Services**. If you are unsure of the data center you should select, and this is an in-person training event, ***ask your instructor*** which **Region** to select from the drop down list. If you received your account through an ***Oracle Trial***, you should have recorded the needed information while following the instruction in the [Trial Account Student Guide](StudentGuide.md).

- Enter your identity domain and click **Go**.

- Once your Identity Domain is set, enter your User Name and Password and click **Sign In**
**NOTE:** For this lab you will assume the role of Project Manager ***Lisa Jones***. Although you are assuming the identity of Lisa Jones, you will log into the account using the **username** supplied to you as part of an Oracle Trial. As you progress through the workshop, you will remain logged in as a single user, but you will make "logical" changes of persona, from Lisa Jones the Project Manager to the other personas.


- You will be presented with a Dashboard displaying the various cloud services available to this account.

- If all of your services are not visible, **click** on the **Customize Dashboard**, and you can add services to the dashboard by clicking **Show.** If you do not want to see a specific service, click **Hide**. Make sure that the **Developer** service is marked as show.

### **STEP 2:** Login to Developer Cloud Service
Oracle Developer Cloud Service provides a complete development platform that streamlines team development processes and automates software delivery. The integrated platform includes an issue tracking system, agile development dashboards, code versioning and review platform, continuous integration and delivery automation, as well as team collaboration features such as wikis and live activity stream. With a rich web based dashboard and integration with popular development tools, Oracle Developer Cloud Service helps deliver better applications faster.
- From the Cloud UI dashboard click on the **Developer** service. In our example, the Developer Cloud Service is named **developer#####**.

- The Service Details page gives you a quick glance of the service status overview. **Click** on the **Open Service Console** button.

- Click **Create Project** to start the project create wizard. **Note**: Depending on the status of your developer cloud service, it is possible that the button may be labeled **New Project**

### **STEP 3:** Create Developer Cloud Service Project
- Click **New Project** to start the project create wizard.
- On Details screen enter the following data and click on **Next**.
**Name:** `Alpha Office Product Catalog`
**Description:** `Alpha Office Product Catalog`
**Note:** A Private project will only be seen by you. A Shared project will be seen by all Developer Cloud users. In either case, users need to be added to a project in order to interact with the project.

- Leave default template set to **Empty Project** and click **Next**

- Select your **Wiki Markup** preference to **MARKDOWN** and click **Finish**.

- The Project Creation will take about 1 minute.

- You now have a new project, in which you can manage your software development.

# Create Project Issues
## Create Issues for the Operations Pipeline
### **STEP 4:** Create Issue for the initial GIT Repository Creation
In this step you are still assuming the identity of the Project Manager, ***Lisa Jones***.

- Click **Issues** on left hand navigation panel to display the Track Issues page.

- Click **New Issue**. Enter the following data in the New Issue page and click **Create Issue**.
**Note:** Throughout the lab you will assign your own account as the “physical” owner of the issue, but for the sake of this workshop, **Bala Gupta** will be the “logical” owner of the following issue.

**Summary:** `Create Initial GIT Repository for Infrastructure and configure Build`
**Description:** `Create Initial GIT Repository for Infrastructure and configure Build`
**Type:** `Task`
**Owner:** `Select your account provided in the dropdown [Logical Owner = Bala Gupta]`
**Story Points:** `1`
Note: A story point is an arbitrary measure used by Scrum teams to estimate the effort required to implement a story. See this [site](https://agilefaq.wordpress.com/2007/11/13/what-is-a-story-point/) for more information.

### **STEP 5:** Create Issue for Provision New Alpha Office Environment
- Click **New Issue**. Enter the following data in the New Issue page and click **Create Issue**.

**Note:** No matter whom you assign as the task's “physical” owner, for the sake of this workshop ***Bala Gupta*** will be the “logical” owner.
**Summary:** `Provision new Alpha Office Environment`
**Description:** `Provision new Alpha Office Environment by modifying configuration file`
**Type:** `Task`
**Owner:** `Select your account provided in the dropdown [Logical Owner = Bala Gupta]`
**Story Points:** `2`

## Create Issues for Alpha Office UI
### **STEP 6:** Create Issue for initial GIT Repository creation and setup
- Click **New Issue**. Enter the following data in the New Issue page and click **Create Issue**.

**Summary:** `Create Initial GIT Repository for Alpha Office UI`
**Description:** `Create Initial GIT Repository for Alpha Office UI and setup Build and Deployment configuration`
**Type:** `Task`
**Owner:** `Select your account provided in the dropdown [Logical Owner: John Dunbar]`
**Story Points:** `1`

### **STEP 7:** Create Issue for Displaying Price
- Click **New Issue**. Enter the following data in the New Issue page and click **Create Issue**.

**Summary:** `Add dollar sign in the display of the price`
**Description:** `Add dollar sign in the display of the price`
**Type:** `Defect`
**Owner:** `Select your account provided in the dropdown [Logical Owner: John Dunbar]`
**Story Points:** `2`

- Click **< Defect 4** on the **left** side of the window, or click the **Issues** menu option to view all newly created issues.

# Create Agile Board
## Create Agile Board and Initial Sprint
### Developer Cloud Service Agile Page Overview
Before you start using the Agile methodology in Oracle Developer Cloud Service, it is important that you know the following key components of the Agile page.
- **Board** – A Board is used to display and update issues of the project. When you create a Board, you associate it with an Issue Query. The Board shows Issues returned by the Query. You can either use a Board created by a team member, or create your own Board. You can create as many Boards as you like.
- **Sprint** – A Sprint is a short duration (usually, a week or two) during which your team members try to implement a product component. You add the product component related issues to a Sprint. When you start working on a product component, you start (or activate) the related Sprints. To update issues using a Sprint, you must first activate the Sprint and add the Sprint to the Active Sprints view.
- **Backlog view** – Lists all Issues returned by the Board’s Query. The view also displays all active and inactive Sprints of the Board, and the sprints from other Boards that contain Issues matching the Board’s Query. Each Sprint lists issues that are added to it. The Backlog section (the last section of the Backlog page) lists all open issues that are not part of any Sprint yet. The Backlog view does not show the resolved and closed Issues.
- **Active Sprints view** – Lists all active Sprints of the Board and enables you to update an Issue status simply by dragging and dropping it to the respective status columns.
- **Reports view** – Select the **Burndown Chart** tab to display the amount of work left to do in a Sprint, or use the **Sprint Report** tab to list the open and completed Issues of a Sprint.
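The burndown chart described above simply plots the story points remaining against the days of the sprint. A minimal sketch of the bookkeeping behind such a chart, using made-up daily completion figures:

```python
sprint_points = 6                    # total story points planned for the sprint
completed_per_day = [0, 1, 2, 0, 3]  # hypothetical points closed on each day

# Track how many points remain after each day of the sprint
remaining = sprint_points
burndown = []
for done in completed_per_day:
    remaining -= done
    burndown.append(remaining)

print(burndown)  # [6, 5, 3, 3, 0] -- the series the chart would plot
```

A downward-sloping series means the team is closing issues; a flat stretch (like days 3–4 here) is exactly what the Burndown Chart tab makes visible at a glance.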
### **STEP 8:** Create Agile Board
- Click **Agile** on the Left Side Menu to display a page listing all existing Boards.

- Click **New Board** and enter the following data. When done click **Create**.
**Type:** `Scrum`
**Name:** `AlphaOffice`
**Estimation:** `Story Points`

### **STEP 9:** Create Sprint
- We will now create our first Sprint. Click **New Sprint**, enter the following data, and click **OK**.
**Name:** `Sprint 1 - Initial development`
**Story Points:** `6`

### **STEP 10:** Add Backlog Issues to Sprint
- Next we want to add the backlog issues to the newly created sprint. **Drag and drop** the **4 issues** one at a time upward onto the **Sprint 1** section. This will add the issues to the sprint.


### **STEP 11:** View Active Sprint and Reports
- Click **Start Sprint**. Leave the defaults and click **Start**.

- Now click on **Active Sprints** to view the Sprint dashboard.

- Click the **Reports** button to view the Burndown Sprint reports.

- **You are now ready to move to the next lab.**
Change Log
==========
### 1.2.7 - 2019-11-09
##### Fixes :wrench:
* Fixed an incorrect rotation angle
### 1.2.6 - 2019-11-09
##### Additions :tada:
* Added quadratic and cubic Bezier curve plotting to EarthUI
##### Fixes :wrench:
* Improved the digital-city example, adding several visualization effects via custom primitive classes
* Fixed incorrect rotation angles in viewshed analysis and related features
* Fixed incorrect rotation-angle display in the viewshed-analysis examples
* Fixed scrollbars appearing in the pin-div example
* Fixed a conflict between clicking and dragging in the scan-line click example
* Changed the forced-lighting longitude and latitude in the examples and the viewer to sliders, and removed the height field
### 1.2.5 - 2019-11-06
##### Fixes :wrench:
* Fixed property changes caused by registerPolygonCreatingWithHeight
* Improved the digital-factory example
* Improved the custom-primitive example
### 1.2.4 - 2019-11-02
##### Additions :tada:
* Added an example combining mapv and Cesium to EarthSDK;
* Added several buttons for Cesium custom materials;
* Added polylines and circular arcs to EarthUI plotting;
* Added five examples for custom Cesium Primitives;
* Added custom-primitive support and related examples;
* Added digital-city and digital-factory examples;
* Added more plotting buttons to EarthUI;
##### Fixes :wrench:
* Improved the terrain-restriction feature; the height no longer needs to be set at creation;
* Changed the tileset skipLevelOfDetail property to false;
* Fixed shadows not being enabled for unlit models such as the Dayan Pagoda;
* Fixed a crash after enabling shadows;
* Added minimum and maximum level limits to imagery properties;
* Fixed PBR materials so that certain properties take effect;
* Fixed point-cloud size not being settable, and made the other 3dtiles property settings take effect as well;
* Improved the rotation-editing UI interaction so the rotation axis is clearly visible, with measurement results displayed live in the 3D view;
* Corrected the angle range during rotation editing from -270..+90 to -180..+180;
* Height no longer needs to be set when editing a terrain restriction;
* Fixed an error when dynamically loading js files;
### 1.2.3 - 2019-10-21
##### Additions :tada:
* Added a forced-lighting example;
* Added forced lighting to EarthUI;
* Added a material base-color property to the model property window in EarthUI;
##### Fixes :wrench:
* Corrected material colors
### 1.2.2 - 2019-10-21
##### Additions :tada:
* Added the luminanceAtZenith property to Model;
* Added a forced-lighting effect; the sunlight direction can be changed arbitrarily;
* Added some hints to the imagery-offset-correction example;
* Added heatmap and water-surface examples;
* Added a water-surface feature to EarthUI;
##### Fixes :wrench:
* Fixed models loading gray after moving to Cesium 1.62
* Fixed severe aliasing after moving to Cesium 1.62
* Fixed the rotation gltf being loaded repeatedly
### 1.2.1 - 2019-10-18
##### Additions :tada:
* Split the underlying Cesium.js into a standalone module; it can be replaced with a customer's own Cesium build!
* Added ground-clamped image cycling (dynamic heatmaps);
* Added a water-surface effect with properties for base water color, reflection blending, and flow direction and speed;
* Added forced double-sided rendering for 3dtiles;
##### Fixes :wrench:
* Terrain restrictions can be specified as arbitrary polygons, including concave ones, with boundaries clearly visible whether or not the view is far from the globe;
* Fixed objects not being deselected when clearing the selection state after picking;
* Fixed clipping of large planes near the viewpoint;
* Upgraded the bundled Cesium to version 1.62;
* Changed the copyright notice to free-use terms
* Fixed setting styles on 3dtiles
* API change: effect.baseColor -> terrainEffect.baseColor;
### 1.1.0 - 2019-09-01
##### Additions :tada:
* Added Path/Pin and other objects
* Improved a large number of examples
* Improved the design of reactive properties
### 1.0.0 - 2019-08-01
* Initial release