FileCache Documentation
=======================
[](https://ci.appveyor.com/project/acarteas/filecache)
[](https://www.nuget.org/packages/FileCache/)
[](https://www.nuget.org/packages/FileCache.Signed/)
How to Install FileCache
------------------------
The easiest way to get FileCache into your project is via NuGet, where you can find
both [signed][1] and [unsigned][2] versions of the DLLs. Not sure which one to
use? Unless you are working with other signed projects (not common), you should
probably download the [unsigned][2] version.
Usage
-----
Using the file cache is fairly straightforward. After adding FileCache and
System.Runtime.Caching references to your project, add the appropriate using
statement: `using System.Runtime.Caching;`
Note that I've placed my FileCache code inside the same namespace as the default
.NET caching namespace for simplicity. Below are two examples of how to use FileCache:
### Basic Example ###
```csharp
//basic example
FileCache simpleCache = new FileCache();
string foo = "bar";
simpleCache["foo"] = foo;
Console.WriteLine("Reading foo from simpleCache: {0}", simpleCache["foo"]);
```
### New in Version 3
Version 3 allows for the building of custom caching schemes. The first release contains
two caching schemes, `Basic` and `Hashed`.
The Basic scheme is the tried-and-true scheme employed in all prior versions of FC. When
using the Basic scheme, file names are taken from the cache key. For example, executing
the command ```simpleCache["foo"] = foo;``` will create a ```foo.dat``` file
to store the value of foo. This plaintext conversion can be convenient when debugging
or when accessing FC cache values from outside of FC. However, it also has the
downside that cache keys containing characters that are invalid in file names (e.g. /) are not supported.
Rather than using key names as file names, the Hashed scheme, introduced in Version 3.0,
uses hashed representations of key names using the built-in .NET function
```GetHashCode()```. This function produces a numeric representation of each key that is
guaranteed to produce a valid file name. However, the downside of this approach is
that ```GetHashCode()``` is not guaranteed to produce a unique key. Therefore, FC must
account for collisions when using the Hashed scheme. This slight overhead is likely to
result in slightly higher cache retrieval times.
For now, the default caching scheme is set to `Basic` in order to maintain compatibility with
prior releases. Furthermore, while the `Hashed` scheme passes all unit tests, *_it should
be treated as experimental until additional field testing has been conducted._*
#### Using the Basic Caching Scheme
As the Basic scheme is the default, no special code is required to instantiate a FileCache
that uses the Basic scheme. However, as the default might change in a future release, you
may want to start instantiating a Basic FileCache in the following manner:
```csharp
FileCache cache = new FileCache(FileCacheManagers.Basic);
```
#### Using the Hashed Caching Scheme
To use the Hashed caching scheme, simply change the CacheManager to Hashed:
```csharp
FileCache cache = new FileCache(FileCacheManagers.Hashed);
```
#### Setting the Default Cache Manager
It seems reasonable to assume that a programmer will want to employ the same
caching scheme across their whole program. Alternatively, a programmer may want to
upgrade an existing project from Basic to Hashed without having to specify the
CacheManager for every FileCache instance. For these cases, you can set the default
CacheManager used by setting the static `DefaultCacheManager` property:
```csharp
FileCache.DefaultCacheManager = FileCacheManagers.Hashed;
```
Now, instantiating a FileCache using the parameterless constructor
(e.g. ```FileCache cache = new FileCache();```) returns a FileCache that
uses the Hashed caching scheme.
### Serializing Custom Objects ###
Below is an example that allows the caching of custom objects. First, place the
following class in the assembly that contains the objects that need to be serialized:
```csharp
/// <summary>
/// You should be able to copy & paste this code into your local project to enable
/// caching custom objects. Note that it requires "using System.Reflection;".
/// </summary>
public sealed class ObjectBinder : System.Runtime.Serialization.SerializationBinder
{
   public override Type BindToType(string assemblyName, string typeName)
   {
      Type typeToDeserialize = null;
      String currentAssembly = Assembly.GetExecutingAssembly().FullName;

      // In this case we are always using the current assembly
      assemblyName = currentAssembly;

      // Get the type using the typeName and assemblyName
      typeToDeserialize = Type.GetType(String.Format("{0}, {1}",
         typeName, assemblyName));

      return typeToDeserialize;
   }
}
```
Next, pass in the custom ObjectBinder into the FileCache's constructor:
```csharp
//example with custom data binder (needed for caching user defined classes)
FileCache binderCache = new FileCache(new ObjectBinder());
```
Now, use the cache like normal:
```csharp
GenericDTO dto = new GenericDTO()
{
IntProperty = 5,
StringProperty = "foobar"
};
binderCache["dto"] = dto;
GenericDTO fromCache = binderCache["dto"] as GenericDTO;
Console.WriteLine(
"Reading DTO from binderCache:\n\tIntProperty:\t{0}\n\tStringProperty:\t{1}",
fromCache.IntProperty,
fromCache.StringProperty
);
```
Complete API
------------
FileCache implements [System.Runtime.Caching.ObjectCache][3]. For the complete base
API, see [the MSDN article on ObjectCache][3]. Additionally, FileCache exposes the
following additional methods and properties:
```csharp
/// <summary>
/// Allows for the setting of the default cache manager so that it doesn't have to be
/// specified on every instance creation.
/// </summary>
public static FileCacheManagers DefaultCacheManager { get; set; }
/// <summary>
/// Used to store the default region when accessing the cache via []
/// calls
/// </summary>
public string DefaultRegion { get; set; }
/// <summary>
/// Used to set the default policy when setting cache values via []
/// calls
/// </summary>
public CacheItemPolicy DefaultPolicy { get; set; }
/// <summary>
/// Used to determine how long the FileCache will wait for a file to
/// become available. Default (00:00:00) is indefinite. Should the
/// timeout be reached, an exception will be thrown.
/// </summary>
public TimeSpan AccessTimeout { get; set; }
/// <summary>
/// Returns a list of keys for a given region.
/// </summary>
/// <param name="regionName"></param>
/// <returns></returns>
public string[] GetKeys(string regionName = null)
/// <summary>
/// Returns the policy attached to a given cache item.
/// </summary>
/// <param name="key">The key of the item</param>
/// <param name="regionName">The region in which the key exists</param>
/// <returns></returns>
public CacheItemPolicy GetPolicy(string key, string regionName = null)
/// <summary>
/// Returns a list of regions, including the root region.
/// </summary>
/// <returns></returns>
public IEnumerable<string> GetRegions()
/// <summary>
/// Used to specify the disk size, in bytes, that can be used by the File Cache.
/// Defaults to long.MaxValue
/// </summary>
public long MaxCacheSize { get; set; }
/// <summary>
/// Returns the approximate size of the file cache
/// </summary>
public long CurrentCacheSize { get; private set; }
/// <summary>
/// Event that will be called when MaxCacheSize is reached.
/// </summary>
public event EventHandler MaxCacheSizeReached = delegate { };
/// <summary>
/// Calculates the size, in bytes of the file cache
/// </summary>
/// <param name="regionName">The region to calculate. If NULL, will return total
/// size.</param>
public long GetCacheSize(string regionName = null);
/// <summary>
/// Clears all FileCache-related items from the disk. Throws an exception if the cache can't be
/// deleted.
/// </summary>
public void Clear();
/// <summary>
/// Flushes the file cache using DateTime.Now as the minimum date
/// </summary>
public void Flush(string regionName = null);
/// <summary>
/// Flushes the cache based on last access date, filtered by optional region
/// </summary>
public void Flush(DateTime minDate, string regionName = null);
```
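As an illustration, the members above can be combined to cap disk usage and evict
stale items. This is only a sketch — the 7-day sliding expiration and 100 MB cap
are arbitrary choices, not library defaults:

```csharp
FileCache cache = new FileCache(FileCacheManagers.Basic);

// Evict items that have not been accessed within the last 7 days.
cache.DefaultPolicy = new CacheItemPolicy()
{
    SlidingExpiration = TimeSpan.FromDays(7)
};

// Cap the cache at 100 MB and flush old entries when the cap is reached.
cache.MaxCacheSize = 100L * 1024 * 1024;
cache.MaxCacheSizeReached += (sender, args) =>
{
    // Remove anything not accessed within the last day.
    cache.Flush(DateTime.Now.AddDays(-1));
};

cache["foo"] = "bar";
```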
[1]: https://www.nuget.org/packages/FileCache.Signed
[2]: https://www.nuget.org/packages/FileCache
[3]: http://msdn.microsoft.com/en-us/library/system.runtime.caching.objectcache.aspx
# What is Preemptive Scheduling in Operating System?
## 1. Let's see what is scheduling and why it is required?
* In the early days, a computer used to run a single program at a time, but nowadays computers run multitasking, multiprogramming, and time-sharing operating systems. For this we need process management, which involves tasks like the creation, **scheduling**, and termination of processes, and deadlock handling.
* So scheduling is used to keep the CPU busy all the time and to deliver the minimum response time for all programs.
## 2. Let's take a look at process states, which we require to understand scheduling.

1. **New**: Newly created process.
2. **Ready**: After creation, the process moves to the ready state.
3. **Running**: The process currently executing on the CPU. (Only one process at a time can be under execution on a single processor.)
4. **Waiting (or blocked)**: When the process requests I/O access.
5. **Exit (or terminated)**: The process has completed its execution.
## 3. How does scheduling work?
* It determines which process is in the ready state, and should be moved to the running state to deliver minimum response time for all programs is known as Process sheduling.
## 4. What is Preemptive Scheduling?
* Operating system decides to favour another process, pre-empting means killing the currently executing process called as Preemptive Scheduling.
* Conditions for preemptive scheduling
1. When a process switches from the running state to the ready state.
2. When a process switches from the waiting state to the ready state.
* In Preemptive scheduling, multiple processes can run. One process can be preempted to run another.
* Preemptive scheduling needs specific platform support, such as Windows 95, macOS, etc.
## 5. Algorithms in Preemptive Scheduling.
**1. Round Robin Scheduling Algorithm (RR)**
* The Round Robin scheduling algorithm is specially used for time-sharing systems. In the Round Robin algorithm, **preemption** is used to switch from one process to another.
* In this algorithm, each process is assigned a fixed time slot in a cyclic way.
* Each process gets a small unit of CPU time (a time quantum q, or time slice), usually 10-100 milliseconds, during which it executes. When the time slice has elapsed, the process is preempted and added to the end of the ready queue.
* Then the scheduler picks the next job, assigns it to the CPU, and the same procedure takes place again.
* For implementing Round Robin, the processes are kept in a FIFO (First In First Out) ready queue.
New processes are added to the tail of the ready queue.
* If the processor's quantum is greater than the process's burst time (i.e. execution time), the process itself releases the CPU; otherwise a timer interrupt interrupts the CPU. The CPU then stops execution and the process is moved to the tail of the ready queue. After this, the CPU scheduler selects the next job for execution.
* The average waiting time in the case of the Round Robin algorithm is generally longer.
* Advantage: every process gets a fair share of the CPU, so response times are good.
* Disadvantage: there is more overhead from context switching.
* Eg.

| Process | Waiting Time | Turnaround Time |
|---------|--------------|-----------------|
| P1      | 0 ms         | 18 ms           |
| P2      | 16 ms        | 23 ms           |
| P3      | 23 ms        | 33 ms           |
Total waiting time: (0 + 16 + 23) = 39ms
Average waiting time: (39/3) = 13ms
Total turnaround time: (18 + 23 + 33) = 74ms
Average turnaround time: (74/3) = 24.66ms
eg reference: https://afteracademy.com/blog/process-scheduling-algorithms-in-the-operating-system
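To make the mechanics concrete, here is a minimal sketch (in Python, not part of the original article) that simulates Round Robin over pid-to-burst pairs and reports waiting and turnaround times, assuming all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for processes that all arrive at t = 0.

    bursts: dict mapping pid -> CPU burst time.
    Returns (waiting, turnaround) dicts keyed by pid.
    """
    remaining = dict(bursts)
    queue = deque(bursts)              # FIFO ready queue
    t = 0
    finish = {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        t += run                       # the process runs for one slice
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = t            # the process has completed
        else:
            queue.append(pid)          # preempted: back to the tail
    turnaround = finish                # arrival is 0, so turnaround = finish time
    waiting = {pid: turnaround[pid] - bursts[pid] for pid in bursts}
    return waiting, turnaround

print(round_robin({"P1": 10, "P2": 5, "P3": 8}, quantum=2))
```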
**2. Shortest Remaining Time First Algorithm (SRTF)**
* The Shortest Remaining Time First algorithm (SRTF) is the preemptive form of the Shortest Job First scheduling algorithm (SJF).
* In the Shortest Job First scheduling algorithm (SJF), the process with the shortest burst time is selected first.
* If two processes have the same burst time, then FCFS is used to break the tie. SJF is a non-preemptive scheduling algorithm.
* But in Shortest Remaining Time First (SRTF), jobs are put into the ready queue as they arrive.
* The process with the shortest burst time begins execution.
* Then if a process arrives with a shorter remaining burst time than the currently executing process, it **preempts** the currently executing process and the shorter job is allocated the CPU.
* Shortest Remaining Time First (SRTF) is very useful in the time-sharing environment of an operating system.
* The SRTF scheduling algorithm has higher overhead than SJF.
* The SRTF algorithm needs to track the elapsed time of the currently running process and must also handle occasional preemptions properly.
* A major point of Shortest Remaining Time First (SRTF) is that newly arrived short processes run almost immediately, but longer jobs may have a longer waiting time.
* Eg.

Above eg reference: https://www.geeksforgeeks.org/introduction-of-shortest-remaining-time-first-srtf-algorithm/
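As with Round Robin above, a small simulation helps; this Python sketch (not from the referenced article) steps the clock one time unit at a time and always runs the ready process with the shortest remaining time:

```python
def srtf(procs):
    """Simulate SRTF in 1-unit time steps.

    procs: dict mapping pid -> (arrival_time, burst_time).
    Returns a dict of completion times keyed by pid.
    """
    remaining = {pid: burst for pid, (arrival, burst) in procs.items()}
    t, finish = 0, {}
    while remaining:
        ready = [pid for pid in remaining if procs[pid][0] <= t]
        if not ready:
            t += 1                     # CPU idles until the next arrival
            continue
        pid = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[pid] -= 1            # run the chosen process for one unit
        t += 1
        if remaining[pid] == 0:
            finish[pid] = t
            del remaining[pid]
    return finish

print(srtf({"P1": (0, 7), "P2": (1, 3), "P3": (2, 2)}))
```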
**3. Priority Sheduling Algorithm (Preemptive Version):**
* The Priority Scheduling algorithm is a method of scheduling processes that is based on priority.
* In this algorithm, the scheduler selects the tasks to work on as per their priority.
* The processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis.
* In the preemptive version of priority scheduling, if a process with higher priority arrives, it stops, i.e. preempts, the currently executing process, and the higher-priority process executes.
* The priority may depend upon memory requirements, time requirements, etc.
* A priority number (integer) is associated with each process.
* The lower the number, the higher the priority.
* Problem: **Starvation** - low priority processes may never execute.
* Solution: **Aging** - as time progresses increase the priority of process.
* eg.

Above eg reference: https://cppsecrets.com/users/1108979711510497121461151049710464115111109971051219746101100117/Python-Priority-Scheduling-Preemeptive-Algorithm-with-Different-Arrival-Time.php
| 56.815789 | 298 | 0.747568 | eng_Latn | 0.996149 |
ed6fb9ac0e1a6abdf583d708273dd51310d8fdca | 1,490 | md | Markdown | README.md | 135e2/onepoint | 133929aa90ef6fe878c19f917a37da03f7168b83 | [
"MIT"
] | 432 | 2019-10-05T14:37:14.000Z | 2022-03-30T03:05:57.000Z | README.md | 135e2/onepoint | 133929aa90ef6fe878c19f917a37da03f7168b83 | [
"MIT"
] | 38 | 2019-10-06T14:45:55.000Z | 2022-03-29T15:28:14.000Z | README.md | 135e2/onepoint | 133929aa90ef6fe878c19f917a37da03f7168b83 | [
"MIT"
] | 207 | 2019-10-05T14:37:17.000Z | 2022-03-29T23:55:58.000Z | # OnePoint
一个轻量级、多平台、多网盘的文件目录索引(和管理)工具。
项目地址:https://github.com/ukuq/onepoint
## 项目特点
轻量级、多平台、多网盘
## 支持云盘
- onedrive
官网:https://office.com/
类型比较多,为了统一全部放置到了本模块里面,包括国际版、世纪互联版、分享链接三大类,可按照配置时的提示完成填写
- googledrive
官网:http://drive.google.com/
受限于api,所有的下载都将会由本机中转完成
- coding
官网:https://coding.net/
公开api功能太少,所有的功能都是根据cookie完成
- teambition
官网:https://teambition.com/
无公开api,所有功能通过cookie实现,cookie并不一定稳定,这一部分未实现文件管理功能
- node_fs
官网:http://nodejs.org/
基于nodejs自身fs api完成,仅用于挂载本机文件
- alidrive
官网:https://www.aliyundrive.com/drive/
通过refresh_token访问
## 快速部署
### github 测试版(2.0.0)
~~~
git clone https://github.com/ukuq/onepoint.git
cd onepoint && npm install
npm start
# pm2 lib/starters/node-http.js
~~~
## cloudflare 部署
参考:worker/README.md
## Demo
https://onepoint.onesrc.workers.dev/
## 更新说明
### 210620
zero dependencies,零依赖
去除了 axios、cookie 依赖,项目可使用 ncc 工具打包、提高了易读性、简洁性
首次安装时不再需要输入密码登录,可以自定义设置用户名、salt和密码
cloudflare worker 项目打包工具改用 ncc 完成
### 210425
新增阿里云盘,支持翻页、id
优化了 onedrive 模块,删除了 code 功能,只保留 refresh_token和share_url
优化了 googledrive 模块,删除了 code 功能,只保留 refresh_token,支持自定义 client api
删除了 art-template,改用 art 编译后的 js 文件生成 html
删除了系统分页,只保留云盘模块自身的分页功能
修复了因缓存而引起的文件下载链接过期的 bug
优化了 w.w 主题,看起来更和谐了,感谢 naicfeng 提供的demo
### 210413
增加了乐观锁,修改配置时有效,防止多次修改
重写管理页面前端代码,支持了多图片、多音频预览功能, 非常建议更新~
## Thanks
[oneindex](https://github.com/donwa/oneindex)
[OneManager](https://github.com/qkqpttgf/OneManager-php)
## License
MIT
| 13.423423 | 65 | 0.738255 | yue_Hant | 0.86522 |
ed6fe0885d96fee8e2c87a10aa600cbfc7ad6e03 | 2,259 | md | Markdown | docs/access/desktop-database-reference/description-helpcontext-helpfile-nativeerror-number-source-and-sqlstate-properties-example-vb.md | daniel-smith/office-developer-client-docs | e107924ec143a5aa5027c889cf53518d7bf267a7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/access/desktop-database-reference/description-helpcontext-helpfile-nativeerror-number-source-and-sqlstate-properties-example-vb.md | daniel-smith/office-developer-client-docs | e107924ec143a5aa5027c889cf53518d7bf267a7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/access/desktop-database-reference/description-helpcontext-helpfile-nativeerror-number-source-and-sqlstate-properties-example-vb.md | daniel-smith/office-developer-client-docs | e107924ec143a5aa5027c889cf53518d7bf267a7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Description, HelpContext, HelpFile properties example (VB)
TOCTitle: Description, HelpContext, HelpFile, NativeError, Number, Source, and SQLState properties example (VB)
ms:assetid: 3c129aec-cd69-5822-4dad-ebef226538e1
ms:mtpsurl: https://msdn.microsoft.com/library/JJ249156(v=office.15)
ms:contentKeyID: 48544304
ms.date: 09/18/2015
mtps_version: v=office.15
---
# Description, HelpContext, HelpFile, NativeError, Number, Source, and SQLState properties example (VB)
**Applies to**: Access 2013, Office 2013
This example triggers an error, traps it, and displays the [Description](description-property-ado.md), [HelpContext](helpcontext-helpfile-properties-ado.md), [HelpFile](helpcontext-helpfile-properties-ado.md), [NativeError](nativeerror-property-ado.md), [Number](number-property-ado.md), [Source](source-property-ado-error.md), and [SQLState](sqlstate-property-ado.md) properties of the resulting [Error](error-object-ado.md) object.
```vb
'BeginDescriptionVB
Public Sub Main()
Dim Cnxn As ADODB.Connection
Dim Err As ADODB.Error
Dim strError As String
On Error GoTo ErrorHandler
' Intentionally trigger an error
Set Cnxn = New ADODB.Connection
Cnxn.Open "nothing"
Set Cnxn = Nothing
Exit Sub
ErrorHandler:
' Enumerate Errors collection and display
' properties of each Error object
For Each Err In Cnxn.Errors
strError = "Error #" & Err.Number & vbCr & _
" " & Err.Description & vbCr & _
" (Source: " & Err.Source & ")" & vbCr & _
" (SQL State: " & Err.SQLState & ")" & vbCr & _
" (NativeError: " & Err.NativeError & ")" & vbCr
If Err.HelpFile = "" Then
strError = strError & " No Help file available"
Else
strError = strError & _
" (HelpFile: " & Err.HelpFile & ")" & vbCr & _
" (HelpContext: " & Err.HelpContext & ")" & _
vbCr & vbCr
End If
Debug.Print strError
Next
Resume Next
End Sub
'EndDescriptionVB
```
| 37.032787 | 433 | 0.602479 | kor_Hang | 0.318599 |
ed70955463c9615b5a83ce1d33ecdeb4a580015d | 900 | md | Markdown | windows.system.remotesystems/remotesystem_findbyhostnameasync_1571118225.md | charliearcuri/winrt-api | ffb3416cdcfa9b10b7483fe818072ab43d8aa47c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows.system.remotesystems/remotesystem_findbyhostnameasync_1571118225.md | charliearcuri/winrt-api | ffb3416cdcfa9b10b7483fe818072ab43d8aa47c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows.system.remotesystems/remotesystem_findbyhostnameasync_1571118225.md | charliearcuri/winrt-api | ffb3416cdcfa9b10b7483fe818072ab43d8aa47c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
-api-id: M:Windows.System.RemoteSystems.RemoteSystem.FindByHostNameAsync(Windows.Networking.HostName)
-api-type: winrt method
---
<!-- Method syntax
public Windows.Foundation.IAsyncOperation<Windows.System.RemoteSystems.RemoteSystem> FindByHostNameAsync(Windows.Networking.HostName hostName)
-->
# Windows.System.RemoteSystems.RemoteSystem.FindByHostNameAsync
## -description
Attempts to discover a single remote system specified by the *HostName* parameter.
## -parameters
### -param hostName
A wrapper object for the address of a remote system to be discovered. For information on how to instantiate a **HostName**, see the [HostName constructor](../windows.networking/hostname_hostname.md).
## -returns
An asynchronous operation that returns the [RemoteSystem](remotesystem.md) that was found. Returns *null* if none was found.
## -remarks
## -examples
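A minimal usage sketch (not from the original reference page; it assumes the app has declared the remoteSystem capability, and the IP address is a placeholder):

```csharp
using System.Threading.Tasks;
using Windows.Networking;
using Windows.System.RemoteSystems;

async Task<RemoteSystem> FindByAddressAsync()
{
    // Access must be granted before any discovery API can be used.
    RemoteSystemAccessStatus status = await RemoteSystem.RequestAccessAsync();
    if (status != RemoteSystemAccessStatus.Allowed)
        return null;

    // The address below stands in for a reachable device on the network.
    HostName host = new HostName("192.168.1.42");
    return await RemoteSystem.FindByHostNameAsync(host);
}
```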
## -see-also
## -capabilities
remoteSystem
| 29.032258 | 187 | 0.784444 | eng_Latn | 0.794146 |
ed71a25a861dff35421cc13273ff31bd3d3fd793 | 14,159 | md | Markdown | _posts/2015-08-27-open-cluster-dissolution.md | joe-antognini/joe-antognini.github.io | c054a12709ec1dfcc21e3c2e4bc07bdd8f050b15 | [
"MIT"
] | 1 | 2020-01-30T06:29:57.000Z | 2020-01-30T06:29:57.000Z | _posts/2015-08-27-open-cluster-dissolution.md | joe-antognini/joe-antognini.github.io | c054a12709ec1dfcc21e3c2e4bc07bdd8f050b15 | [
"MIT"
] | 3 | 2021-07-14T02:24:12.000Z | 2021-07-14T02:24:14.000Z | _posts/2015-08-27-open-cluster-dissolution.md | joe-antognini/joe-antognini.github.io | c054a12709ec1dfcc21e3c2e4bc07bdd8f050b15 | [
"MIT"
] | null | null | null | ---
layout: post
title: How much mass must an open cluster lose to become unbound?
date: 2015-08-27
categories: astronomy
image:
feature: constellations3.jpg
---
Although there is strong evidence that nearly all stars are born in open
clusters, it is clear that open clusters do exist for a very long time.
After all, most stars are not seen in open clusters so they must spend a
small fraction of their lives in them. It's also possible to measure the
ages of open clusters directly by measuring the masses of stars just
starting to evolve off the main sequence and calculating how long it takes
stars of those masses to reach that point in their evolution. This
typically gives ages no larger than a few hundred Myr. While there are a
few open clusters that are over a Gyr old, they are very massive and are the
exception rather than the rule.
So we know that open clusters dissociate after a short period of time, but
why does this happen? The very brief answer is that open clusters are only
barely bound --- they consist of a loose collection of gas and stars. When
the most massive stars in the cluster go supernova, the shocks from the
supernovae drive out much of the gas. This removes some of the mass from
the cluster, which unbinds it. But how much mass must be removed to unbind
the cluster? Is it plausible that enough mass is still in the gas phase at
the time of the supernova to unbind the cluster? To answer this we turn to
a [brief paper][1] by J. G. Hills from 1980.
## Sudden mass loss from a static cluster
We will follow Hills (1980) and start with the simpler problem of sudden
mass loss from a cluster which is in equilibrium. In this case the [virial
theorem][2] applies and so the velocity dispersion of the stars in the
cluster is given by
$$v_0^2 = \frac{G M_0}{2 R_0},$$
where $$M_0$$ and $$R_0$$ are the initial mass and effective radius of the
cluster, respectively. (If all stars in the cluster have the same mass,
then the effective radius is the harmonic mean of all the distances between
the stars.)
Immediately after the gas is driven out of the system, the stars will all
have the same velocities they did before, but the total mass of the cluster
will be reduced to $$M$$. The cluster will be out of equilibrium now, and
will have energy
$$E = \frac{1}{2} \left( M v_0^2 - \frac{G M^2}{R_0} \right).$$
Once virial equilibrium is regained, the radius of the cluster will have
changed, but the energy will be the same:
$$E = -\frac{G M^2}{4 R}.$$
This then results in an initial-final radius relationship for the cluster:
$$ \frac{R}{R_0} = \frac{M_0 - \Delta M}{M_0 - 2 \Delta M}.$$
We see that this ratio diverges if half the mass of the cluster is lost.
Thus, in a virialized system half the mass needs to be lost to unbind it.
### _An aside: adiabatic mass loss_
Hills (1980) next briefly states the case of adiadabatic mass loss. This
isn't particularly relevant to the question at hand but is good to know, so
I'll just state it here.
Adiabatic mass loss occurs whenever the fractional mass lost is small on the
dynamical timescale. This means that the system always remains in virial
equilibrium. It is easy to show that the initial-final radius relationship
of the cluster is in this case
$$\frac{R}{R_0} = \frac{M_0}{M_0 - \Delta M}.$$
To show this, take the initial-final radius relationship from the case of
instantaneous mass loss above and substitute $$-dm$$ for $$\Delta M$$ and
$$(R + dr) / R$$ for $$R/R_0$$ and integrate.
In this case, the initial-final radius relationship diverges only for
$$\Delta M = M_0$$. In other words, the cluster always remains bound no
matter how much mass you remove.
## Mass loss prior to virialization
A real open cluster is not likely to be in virial equilibrium by the time
supernovae from massive stars drive gas out of the cluster. The cluster
will still be contracting and this will affect the amount of mass loss that
will be necessary to unbind the cluster.
Suppose that a cluster of constant density, $$\rho_0$$, starts out at a
radius $$R_0$$ and then some time later has collapsed to a radius $$R_1$$
when the supernovae go off and drive mass out of the system. What is the
energy of the cluster at this point? Let's start by considering the
velocity of the outer shell. Initially its energy (per unit mass) is all in
potential energy, so we have

$$E = - \frac{4}{3} \pi G \rho_0 R_0^2.$$
The energy of the outer shell is conserved, so we have at this later time
$$v(R_1)^2 = \frac{8}{3} \pi G \rho_0 R_0^2 \left( \frac{R_0}{R_1} - 1
\right).$$
What about the velocity of some interior shell? Well, since we are assuming
that the density is constant, the mass interior to any shell is $$M \sim
r^3$$, and since the force on any shell is $$F \sim M / r^2$$, we have that
the acceleration scales as $$a \sim r$$. Since the velocity after some time
is just $$v = a t$$, we have just $$v \sim r$$. Thus we can scale our
result above to arbitrary radii:
$$v(r)^2 = \frac{8}{3} \pi G \rho_0 R_0^2 \left( \frac{r}{R_1} \right)^2
\left( \frac{R_0}{R_1} - 1 \right).$$
Integrating over the entire shell, we can calculate the kinetic energy of
the entire cluster at this later time to be
$$T_0 = \frac{3 G M_0^2}{5 R_0} \left( \frac{R_0}{R_1} - 1 \right).$$
The mean-squared velocity of the stars in the cluster at the time of mass
loss, $$\left< v^2 \right> = 2T / M_0$$ is then
$$\left< v^2 \right> = \frac{6 G M_0}{5 R_0} \left( \frac{R_0}{R_1} - 1
\right).$$
After the mass loss takes place, the mean-squared velocity of the stars
remains the same, but the kinetic energy is now
$$T = \frac{1}{2} M \left< v_{\infty}^2 \right>,$$
and the potential energy is (this can be looked up in a textbook or [on
Wikipedia][3]):
$$U = - \frac{3 G M^2}{5 R_1}.$$
Eventually the cluster will come into virial equilibrium with some effective
radius $$R_f$$, such that the total energy is half the potential energy:
$$E = - \frac{3 G M^2}{10 R_f}.$$
We can now use these equations to relate the final radius, $$R$$, with the
initial radius, $$R_0$$, to find:
$$\frac{R_f}{R_0} = \frac{1}{2} \left[ \frac{M_0 - \Delta M}{M_0 - \Delta M
(R_0 / R_1)} \right].$$
An unbound cluster has $$R_f \to \infty$$, so from this we can easily see how
much mass loss is necessary to unbind the cluster just by setting the
denominator equal to zero:
$$\frac{\Delta M}{M_0} = \frac{R_1}{R_0}.$$
This means that the fractional amount of mass loss necessary to unbind the
cluster is exactly equal to the fractional change in radius the cluster has
undergone. This then raises the question:
## How much does the cluster radius change?
The basic picture of the formation of a star cluster is that the
protocluster begins as a molecular cloud with some radius and average
density, $$\left< \rho_0 \right>$$, which then shrinks under its own
gravitational influence. However, the cloud is not of uniform density ---
some pockets of the cloud will be somewhat denser than others and will
therefore collapse faster. To determine how much the cluster radius changes
we must estimate by how much the cluster has collapsed at the time the
densest pockets have formed stars (which, for the purposes of this estimate
is the time that these dense pockets have collapsed to zero radius).
We'll begin by writing down the equation of motion for some shell that
starts at radius $$r_0$$:
$$\frac{ d^2 r}{dt^2} = - \frac{G M(r_0)}{r^2} = - \frac{4 \pi G \left<
\rho_0 \right> r_0^3}{3 r^2}.$$
If we multiply both sides by $$dr/dt$$, we can integrate to find
$$\frac{1}{r_0} \frac{dr}{dt} = - \sqrt{\frac{8}{3} \pi G \left< \rho_0
\right> \left( \frac{r_0}{r} - 1 \right)}.$$
This is a tricky differential equation to solve, but if we make the clever
substitution $$r/r_0 = \cos^2 \beta$$, we can rewrite it as
$$\frac{d \beta}{dt} = \frac{1}{2} \sqrt{ \frac{8}{3} \pi G \left< \rho_0
\right>} \sec^2 \beta,$$
which is then easily integrated to yield the collapse equation:
$$\beta + \frac{1}{2} \sin 2 \beta = t \sqrt{ \frac{8}{3} \pi G \left<
\rho_0 \right>}.$$
Now suppose that some pocket in the protocluster began with a slight
overdensity, $$\rho^{\prime}$$. The time this pocket to collapse to zero
radius can be found by setting $$\beta = \pi / 2$$ and solving for $$t$$:
$$t_c = \sqrt{ \frac{3 \pi}{32 G \rho^{\prime}}}.$$
We can then write the collapse equation in terms of $$t_c$$ and radii:
$$\cos^{-1} \left( \sqrt{ \frac{R_1}{R_0} } \right) + \sqrt{\frac{R_1}{R_0} -
\left( \frac{R_1}{R_0} \right)^2} = \frac{\pi}{2} \sqrt{ \frac{\left< \rho_0
\right>}{\rho^{\prime}}}.$$
So in order to estimate the radius of the cluster at the time the most
massive stars form (i.e., $$R_1/R_0$$), we must estimate $$\left< \rho_0
\right> / \rho^{\prime}$$. If we assume that the molecular cloud is rather
homogeneous, and the greatest density fluctuations are only 10%, then we can
numerically solve the above equation to find $$R_1/R_0 \approx 0.2$$. The
more homogeneous the cloud begins, the smaller $$R_1/R_0$$ is. It is
therefore plausible that a typical protocluster will have $$R_0 / R_1 \sim
10$$ or more. If we use this estimate with our result from the previous
section, we find that the cluster needs to lose only 10% of its mass to
become unbound.
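For a quick numerical check (a Python sketch, not part of the original post), the
collapse equation can be solved for $$R_1/R_0$$ with a root finder:

```python
import numpy as np
from scipy.optimize import brentq

def collapse_ratio(density_contrast):
    """R_1/R_0 at the moment a pocket with rho' = density_contrast * <rho_0>
    has collapsed to zero radius."""
    rhs = 0.5 * np.pi / np.sqrt(density_contrast)
    f = lambda x: np.arccos(np.sqrt(x)) + np.sqrt(x - x**2) - rhs
    return brentq(f, 1e-9, 1 - 1e-9)

print(collapse_ratio(1.10))   # ~0.2 for a 10% overdensity
```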
## Why does so little mass need to be lost?
Let us picture the process of virialization. We begin with a cluster of
stars, all stationary and at a very large distance. Due to their mutual
gravitational attraction, they begin to fall to the center of the cluster
and pick up speed. As they pass through the center of the cluster, they
have strong gravitational interactions with each other. This process
transfers energy among the stars and redirects their trajectories. However,
the stars generally have enough velocity that they make it out to nearly the
same distance that they started at roughly the same time. Over a long
enough time, the strong gravitational encounters at the center of the
cluster serve to transfer enough energy between stars that the trajectories
of the stars become sufficiently randomized that as some stars are at
apocenter, other stars are passing through the center of the cluster and the
cluster has become virialized.
Now, if the cluster loses mass at the very beginning of its life when the
stars are at a very large distance it is not going to affect this process.
The stars will pass through the center with lower speed, but qualitatively
nothing else will change. This is because when the mass is lost from the
system, it takes the gravitational potential energy it had away with it.
If, however, the mass loss takes place as the stars are passing through the
center of the cluster the situation is different. At this point, the
gravitational potential energy from the mass that is lost has been
transferred to the mass that remains in the form of kinetic energy. In
other words, the extra mass has increased the speed of the stars as they
pass through the center of the cluster. When the mass is then lost, it is
unable to contribute to the gravitational attraction that slows the stars
down enough to keep them bound. The closer the stars are to the center of
the cluster when mass is lost, the more mass needs to remain to keep the
cluster bound since nearly all of the energy is in kinetic energy at this
point. Since the cluster will have collapsed by quite a bit by the time the
first supernovae go off if it began relatively homogeneous, this means that
only a modest amount of mass needs to be lost to unbind the cluster.
## What is the velocity of the unbound stars?
If the mass loss succeeds in unbinding the cluster, how fast do the stars
escape? Conservation of energy states that after mass loss we have
$$\frac{1}{2} M \left< v_{\infty}^2 \right> = \frac{1}{2} M \left< v^2
\right> - \frac{3}{5} \frac{G M^2}{R_1},$$
which can be rewritten in terms of the velocity dispersion that the cluster
would have after virialization if mass loss had not occurred, $$\left< v_c^2
\right>$$, as
$$\left< v_{\infty}^2 \right>^{1/2} = \left< v_c^2 \right>^{1/2} \sqrt{
\left( \frac{R_0}{R_1} \right) \left( \frac{ \Delta M}{M_0} \right) - 1}.$$
So for reasonable amounts of mass loss the expansion velocity will be only a
few times larger than the velocity dispersion of the virialized cluster.
This means that for a massive O star with a lifetime of a few tens of
millions of years and an escape velocity of a few km / s, the supernova will
take place of order 100 pc away from its birthsite!
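To get a feel for the numbers (again a sketch, not from the paper), the ratio
above is easy to tabulate:

```python
import numpy as np

def expansion_velocity_ratio(mass_loss_fraction, collapse_factor):
    """<v_inf^2>^(1/2) / <v_c^2>^(1/2) for a fractional mass loss dM/M_0
    and a collapse factor R_0/R_1; only meaningful when the cluster is
    actually unbound (positive argument under the square root)."""
    arg = collapse_factor * mass_loss_fraction - 1.0
    if arg <= 0:
        raise ValueError("the cluster remains bound for these parameters")
    return np.sqrt(arg)

# e.g. 20% mass loss after collapsing by a factor of 10:
print(expansion_velocity_ratio(0.2, 10.0))   # -> 1.0
```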
## A postscript: What about magnetic fields?
We have been working with a very simplified model of a protocluster. A
realistic protocluster will have magnetic fields which will affect the
evolution of the cluster. In particular, as the cluster collapses, the
magnetic fields threading the cluster will become more compact, leading to a
higher magnetic energy density. This will result in an additional source of
pressure which will serve to resist the collapse of the cluster. Thus,
although we estimated that the cluster would have collapsed to some fraction
of its radius, $$R_1 / R_0$$ at the time of mass loss, the cluster will
actually have only collapsed to some larger fraction of its radius
$$R^{\prime} / R_0$$ (where $$R^{\prime} > R_1$$) due to the extra magnetic
pressure. The exact amount by which $$R^{\prime}$$ is larger than $$R_1$$
will depend on how much energy is in magnetic fields compared to
gravitational potential energy.
Since the energy density in magnetic fields in the ISM is comparable to the
thermal energy density (and pretty much everything else, as it happens), we can
a priori guess that in a protocluster with strong magnetic fields, the magnetic
fields will have an order unity effect on the evolution of the cluster. So,
for the cluster above where we estimated that only 10% of mass loss was
necessary to unbind the cluster, it might be more like 15% or 20%. The more
careful analysis of Hills (1980) reveals that this guess appears to be correct.
[1]: http://adsabs.harvard.edu/abs/1980ApJ...235..986H
[2]: https://en.wikipedia.org/wiki/Virial_theorem
[3]: https://en.wikipedia.org/wiki/Gravitational_binding_energy
| 47.354515 | 79 | 0.74066 | eng_Latn | 0.999294 |
ed71e407aa890e590ef7dc04ce46e1f38bd7d7b1 | 646 | md | Markdown | monitor/readme.md | wejdeneHaouari/iot-docker-mongoDB | d881ab40b663946624f4bb8e971dc2b807000c66 | [
"MIT"
] | 1 | 2021-06-01T15:01:09.000Z | 2021-06-01T15:01:09.000Z | monitor/readme.md | wejdeneHaouari/iot-docker-mongoDB | d881ab40b663946624f4bb8e971dc2b807000c66 | [
"MIT"
] | null | null | null | monitor/readme.md | wejdeneHaouari/iot-docker-mongoDB | d881ab40b663946624f4bb8e971dc2b807000c66 | [
"MIT"
] | null | null | null | Step 1: start the mongo databases
```
docker-compose -f docker-compose-mongo-replicaset.yml up
or
docker-compose -f docker-compose-mongo-replicaset.yml up -d
```
Step 2: exec into one of the mongos:
```
docker exec -it localmongo1 /bin/bash
```
Step 3: access mongo console
```
mongo
```
Step 4: configure replica set by pasting the following,
change 192.168.1.16 with your local IP address
```
rs.initiate(
{
_id : 'rs0',
members: [
{ _id : 0, host : "192.168.1.16:27011" },
{ _id : 1, host : "192.168.1.16:27012" },
{ _id : 2, host : "192.168.1.16:27013" }
]
}
)
```
| 17.944444 | 60 | 0.588235 | eng_Latn | 0.82867 |
ed730b59ccc8d1e5056644eaa2d9a30ba3e9a489 | 91 | md | Markdown | _posts/es/pages/2016-01-01-index.md | 0xadada/bitcoin-core-website | cf8957f03f28fd0ff299ab6685f92c748183925d | [
"MIT"
] | null | null | null | _posts/es/pages/2016-01-01-index.md | 0xadada/bitcoin-core-website | cf8957f03f28fd0ff299ab6685f92c748183925d | [
"MIT"
] | null | null | null | _posts/es/pages/2016-01-01-index.md | 0xadada/bitcoin-core-website | cf8957f03f28fd0ff299ab6685f92c748183925d | [
"MIT"
] | null | null | null | ---
layout: home
title: Home
name: index
permalink: /es/
version: 0
---
Spanish home page
| 9.1 | 17 | 0.681319 | eng_Latn | 0.987778 |
ed733b7b715b736d44f7fbcf486e5289e0d8cbac | 30 | md | Markdown | README.md | rafaeldini/tadsPI3 | 7ba09cc40be456c4f83988c4578ca5924ac05df3 | [
"MIT"
] | null | null | null | README.md | rafaeldini/tadsPI3 | 7ba09cc40be456c4f83988c4578ca5924ac05df3 | [
"MIT"
] | null | null | null | README.md | rafaeldini/tadsPI3 | 7ba09cc40be456c4f83988c4578ca5924ac05df3 | [
"MIT"
] | null | null | null | # tadsPI3
Exemplo projeto PI3
| 10 | 19 | 0.8 | por_Latn | 0.978358 |
ed7357f3f3e0107e7d5c9585685d5327775b0612 | 18,064 | md | Markdown | model_zoo/research/cv/FaceAttribute/README.md | Ming-blue/mindspore | 9ec8bc233c76c9903a2f7be5dfc134992e4bf757 | [
"Apache-2.0"
] | 1 | 2021-07-03T06:52:20.000Z | 2021-07-03T06:52:20.000Z | model_zoo/research/cv/FaceAttribute/README.md | Ming-blue/mindspore | 9ec8bc233c76c9903a2f7be5dfc134992e4bf757 | [
"Apache-2.0"
] | null | null | null | model_zoo/research/cv/FaceAttribute/README.md | Ming-blue/mindspore | 9ec8bc233c76c9903a2f7be5dfc134992e4bf757 | [
"Apache-2.0"
] | null | null | null | # Contents
- [Face Attribute Description](#face-attribute-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Running Example](#running-example)
- [Model Description](#model-description)
- [Performance](#performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [Face Attribute Description](#contents)
This is a Face Attributes Recognition network based on Resnet18, with support for training and evaluation on Ascend910.
ResNet (residual neural network) was proposed by Kaiming He and his colleagues at Microsoft Research. Through the use of residual units, they successfully trained a 152-layer neural network and won the ILSVRC 2015 championship with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet, so the results were outstanding. Traditional convolutional or fully connected networks suffer from some information loss, and they are also prone to vanishing or exploding gradients, which makes very deep networks fail to train. ResNet solves this problem to a certain extent: by passing the input directly through to the output, the integrity of the information is protected, and the network only needs to learn the difference between input and output, which simplifies the learning objective. The ResNet structure can greatly accelerate the training of neural networks, and the accuracy of the model is also greatly improved. At the same time, ResNet is very popular, and residual units can even be used directly in other networks such as Inception.
[Paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"
# [Model Architecture](#contents)
Face Attribute uses a modified-Resnet18 network for performing feature extraction.
# [Dataset](#contents)
This network can recognize the age/gender/mask from a human face. The default rule is:
```python
age:
0: 0~2 years
1: 3~9 years
2: 10~19 years
3: 20~29 years
4: 30~39 years
5: 40~49 years
6: 50~59 years
7: 60~69 years
8: 70+ years
gender:
0: male
1: female
mask:
0: wearing mask
1: without mask
```
We use about 91K face images as the training dataset and 11K as the evaluation dataset in this example, and you can also use your own datasets or open source datasets (e.g. FairFace and RWMFD).
- step 1: The dataset should be saved in a txt file, which contain the following contents:
```python
[PATH_TO_IMAGE]/1.jpg [LABEL_AGE] [LABEL_GENDER] [LABEL_MASK]
[PATH_TO_IMAGE]/2.jpg [LABEL_AGE] [LABEL_GENDER] [LABEL_MASK]
[PATH_TO_IMAGE]/3.jpg [LABEL_AGE] [LABEL_GENDER] [LABEL_MASK]
...
```
The value range of [LABEL_AGE] is [-1, 0, 1, 2, 3, 4, 5, 6, 7, 8], -1 means the label should be ignored.
The value range of [LABEL_GENDER] is [-1, 0, 1], -1 means the label should be ignored.
The value range of [LABEL_MASK] is [-1, 0, 1], -1 means the label should be ignored.
- step 2: Convert the dataset to mindrecord:
```bash
python src/data_to_mindrecord_train.py
```
or
```bash
python src/data_to_mindrecord_eval.py
```
If your dataset is too big to convert at a time, you can add data to an existed mindrecord in turn:
```bash
python src/data_to_mindrecord_train_append.py
```
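As an aside (this helper is illustrative and not part of the repository's scripts), a label file in the step-1 format can be sanity-checked with a few lines of Python:

```python
def load_labels(txt_path):
    """Parse '[PATH] [LABEL_AGE] [LABEL_GENDER] [LABEL_MASK]' lines.

    A value of -1 means the corresponding label should be ignored."""
    samples = []
    with open(txt_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue                      # skip malformed lines
            age, gender, mask = (int(v) for v in parts[1:])
            assert -1 <= age <= 8 and -1 <= gender <= 1 and -1 <= mask <= 1
            samples.append((parts[0], age, gender, mask))
    return samples
```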
# [Environment Requirements](#contents)
- Hardware(Ascend)
- Prepare hardware environment with Ascend processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
# [Script Description](#contents)
## [Script and Sample Code](#contents)
The entire code structure is as following:
```text
.
└─ Face Attribute
├─ README.md
├── model_utils
│ ├──__init__.py // module init file
│ ├──config.py // Parse arguments
│ ├──device_adapter.py // Device adapter for ModelArts
│ ├──local_adapter.py // Local adapter
│ ├──moxing_adapter.py // Moxing adapter for ModelArts
├─ scripts
├─ run_standalone_train.sh # launch standalone training(1p) in ascend
├─ run_distribute_train.sh # launch distributed training(8p) in ascend
├─ run_eval.sh # launch evaluating in ascend
└─ run_export.sh # launch exporting air model
├─ src
├─ FaceAttribute
├─ cross_entropy.py # cross entroy loss
├─ custom_net.py # network unit
├─ loss_factory.py # loss function
├─ head_factory.py # network head
├─ resnet18.py # network backbone
├─ head_factory_softmax.py # network head with softmax
└─ resnet18_softmax.py # network backbone with softmax
├─ dataset_eval.py # dataset loading and preprocessing for evaluating
├─ dataset_train.py # dataset loading and preprocessing for training
├─ logging.py # log function
├─ lrsche_factory.py # generate learning rate
├─ data_to_mindrecord_train.py # convert dataset to mindrecord for training
├─ data_to_mindrecord_train_append.py # add dataset to an existed mindrecord for training
└─ data_to_mindrecord_eval.py # convert dataset to mindrecord for evaluating
├─ default_config.yaml # Configurations
├─ postprocess.py # postprocess scripts
├─ preprocess.py # preprocess scripts
├─ train.py # training scripts
├─ eval.py # evaluation scripts
└─ export.py # export air model
```
## [Running Example](#contents)
### Train
- Standalone mode
```bash
cd ./scripts
sh run_standalone_train.sh [MINDRECORD_FILE] [USE_DEVICE_ID]
```
or (fine-tune)
```bash
cd ./scripts
sh run_standalone_train.sh [MINDRECORD_FILE] [USE_DEVICE_ID] [PRETRAINED_BACKBONE]
```
for example:
```bash
cd ./scripts
sh run_standalone_train.sh /home/train.mindrecord 0 /home/a.ckpt
```
- Distribute mode (recommended)
```bash
cd ./scripts
sh run_distribute_train.sh [MINDRECORD_FILE] [RANK_TABLE]
```
or (fine-tune)
```bash
cd ./scripts
sh run_distribute_train.sh [MINDRECORD_FILE] [RANK_TABLE] [PRETRAINED_BACKBONE]
```
for example:
```bash
cd ./scripts
sh run_distribute_train.sh /home/train.mindrecord ./rank_table_8p.json /home/a.ckpt
```
You will get the loss value of each step as follows in "./output/[TIME]/[TIME].log" or "./scripts/device0/train.log":
```python
epoch[0], iter[0], loss:4.489518, 12.92 imgs/sec
epoch[0], iter[10], loss:3.619693, 13792.76 imgs/sec
epoch[0], iter[20], loss:3.580932, 13817.78 imgs/sec
epoch[0], iter[30], loss:3.574254, 7834.65 imgs/sec
epoch[0], iter[40], loss:3.557742, 7884.87 imgs/sec
...
epoch[69], iter[6120], loss:1.225308, 9561.00 imgs/sec
epoch[69], iter[6130], loss:1.209557, 8913.28 imgs/sec
epoch[69], iter[6140], loss:1.158641, 9755.81 imgs/sec
epoch[69], iter[6150], loss:1.167064, 9300.77 imgs/sec
```
- ModelArts (If you want to run in modelarts, please check the official documentation of [modelarts](https://support.huaweicloud.com/modelarts/), and you can start training as follows)
```bash
# Train 8p on ModelArts
# (1) Perform a or b.
# a. Set "enable_modelarts=True" on default_config.yaml file.
# Set "mindrecord_path='/cache/data/face_attribute_dataset/train/data_train.mindrecord'" on default_config.yaml file.
# (option) Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file if load pretrain.
# (option) Set "pretrained='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file if load pretrain.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "mindrecord_path=/cache/data/face_attribute_dataset/train/data_train.mindrecord" on the website UI interface.
# (option) Add "checkpoint_url=s3://dir_to_trained_ckpt/" on the website UI interface if load pretrain.
# (option) Add "pretrained=/cache/checkpoint_path/model.ckpt" on the website UI interface if load pretrain.
# Add other parameters on the website UI interface.
# (2) (option) Upload or copy your pretrained model to S3 bucket if load pretrain.
# (3) Upload a zip dataset to S3 bucket. (you could also upload the origin dataset, but it can be so slow.)
# (4) Set the code directory to "/path/FaceAttribute" on the website UI interface.
# (5) Set the startup file to "train.py" on the website UI interface.
# (6) Set the "Dataset path" and "Output file path" and "Job log path" to your path on the website UI interface.
# (7) Create your job.
#
# Train 1p on ModelArts
# (1) Perform a or b.
# a. Set "enable_modelarts=True" on default_config.yaml file.
# Set "world_size=1" on default_config.yaml file.
# Set "mindrecord_path='/cache/data/face_attribute_dataset/train/data_train.mindrecord'" on default_config.yaml file.
# (option) Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file if load pretrain.
# (option) Set "pretrained='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file if load pretrain.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "world_size=1" on the website UI interface.
# Add "mindrecord_path=/cache/data/face_attribute_dataset/train/data_train.mindrecord" on the website UI interface.
# (option) Add "checkpoint_url=s3://dir_to_trained_ckpt/" on the website UI interface if load pretrain.
# (option) Add "pretrained=/cache/checkpoint_path/model.ckpt" on the website UI interface if load pretrain.
# Add other parameters on the website UI interface.
# (2) (option) Upload or copy your pretrained model to S3 bucket if load pretrain.
# (3) Upload a zip dataset to S3 bucket. (you could also upload the origin dataset, but it can be so slow.)
# (4) Set the code directory to "/path/FaceAttribute" on the website UI interface.
# (5) Set the startup file to "train.py" on the website UI interface.
# (6) Set the "Dataset path" and "Output file path" and "Job log path" to your path on the website UI interface.
# (7) Create your job.
#
# Eval 1p on ModelArts
# (1) Perform a or b.
# a. Set "enable_modelarts=True" on default_config.yaml file.
# Set "mindrecord_path='/cache/data/face_attribute_dataset/train/data_train.mindrecord'" on default_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file.
# Set "model_path='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "mindrecord_path=/cache/data/face_attribute_dataset/train/data_train.mindrecord" on the website UI interface.
# Add "checkpoint_url=s3://dir_to_trained_ckpt/" on the website UI interface.
# Add "model_path=/cache/checkpoint_path/model.ckpt" on the website UI interface.
# Add other parameters on the website UI interface.
# (2) Upload or copy your trained model to S3 bucket.
# (3) Upload a zip dataset to S3 bucket. (you could also upload the origin dataset, but it can be so slow.)
# (4) Set the code directory to "/path/FaceAttribute" on the website UI interface.
# (5) Set the startup file to "eval.py" on the website UI interface.
# (6) Set the "Dataset path" and "Output file path" and "Job log path" to your path on the website UI interface.
# (7) Create your job.
#
# Export 1p on ModelArts
# (1) Perform a or b.
# a. Set "enable_modelarts=True" on default_config.yaml file.
# Set "file_name='faceattri'" on default_config.yaml file.
# Set "file_format='MINDIR'" on default_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file.
# Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "file_name=faceattri" on the website UI interface.
# Add "file_format=MINDIR" on the website UI interface.
# Add "checkpoint_url=s3://dir_to_trained_ckpt/" on the website UI interface.
# Add "ckpt_file=/cache/checkpoint_path/model.ckpt" on the website UI interface.
# Add other parameters on the website UI interface.
# (2) Upload or copy your trained model to S3 bucket.
# (3) Set the code directory to "/path/FaceAttribute" on the website UI interface.
# (4) Set the startup file to "export.py" on the website UI interface.
# (5) Set the "Dataset path" and "Output file path" and "Job log path" to your path on the website UI interface.
# (6) Create your job.
```
### Evaluation
```bash
cd ./scripts
sh run_eval.sh [MINDRECORD_FILE] [USE_DEVICE_ID] [PRETRAINED_BACKBONE]
```
for example:
```bash
cd ./scripts
sh run_eval.sh /home/eval.mindrecord 0 /home/a.ckpt
```
You will get the result as follows in "./scripts/device0/eval.log" or in a txt file in [PRETRAINED_BACKBONE]'s folder:
```python
age accuracy: 0.45773233522001094
gen accuracy: 0.8950155194449516
mask accuracy: 0.992539346357495
gen precision: 0.8869598765432098
gen recall: 0.8907400232468036
gen f1: 0.88884593079451
mask precision: 1.0
mask recall: 0.998539346357495
mask f1: 0.9992691394116572
```
### Convert model
If you want to infer the network on Ascend 310, you should convert the model to AIR:
```bash
cd ./scripts
sh run_export.sh [BATCH_SIZE] [USE_DEVICE_ID] [PRETRAINED_BACKBONE]
```
### Inference Process
#### Export MindIR
```shell
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```
The `ckpt_file` parameter is required; it is the checkpoint file path.

`file_format` should be in ["AIR", "MINDIR"].
#### Infer on Ascend310
Before performing inference, the mindir file must be exported by `export.py` script. We only provide an example of inference using MINDIR model.
Currently, batch_size can only be set to 1.
```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID]
```
- `MINDIR_PATH` specifies path of used "MINDIR" OR "AIR" model.
- `DATASET_PATH` specifies the path of the dataset
- `DEVICE_ID` is optional, default value is 0.
#### Result
The inference result is saved in the current path; you can find results like the following in the acc.log file.
```bash
'age accuracy': 0.4937
'gen accuracy': 0.9093
'mask accuracy': 0.9903
```
# [Model Description](#contents)
## [Performance](#contents)
### Training Performance
| Parameters | Face Attribute |
| -------------------------- | ----------------------------------------------------------- |
| Model Version | V1 |
| Resource | Ascend 910; CPU 2.60GHz, 192cores; Memory 755G; OS Euler2.8 |
| uploaded Date | 09/30/2020 (month/day/year) |
| MindSpore Version | 1.0.0 |
| Dataset | 91K images |
| Training Parameters | epoch=70, batch_size=128, momentum=0.9, lr=0.001 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| outputs | probability |
| Speed | 1pc: 200~250 ms/step; 8pcs: 100~150 ms/step |
| Total time | 1pc: 2.5 hours; 8pcs: 0.3 hours |
| Checkpoint for Fine tuning | 88M (.ckpt file) |
### Evaluation Performance
| Parameters | Face Attribute |
| ------------------- | --------------------------- |
| Model Version | V1 |
| Resource | Ascend 910; OS Euler2.8 |
| Uploaded Date | 09/30/2020 (month/day/year) |
| MindSpore Version | 1.0.0 |
| Dataset | 11K images |
| batch_size | 1 |
| outputs | accuracy |
| Accuracy(8pcs) | age:45.7% |
| | gender:89.5% |
| | mask:99.2% |
| Model for inference | 88M (.ckpt file) |
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
| 45.273183 | 1,121 | 0.635961 | eng_Latn | 0.828358 |
ed735a0eb15cd0fefbc612391368545f07ccd631 | 770 | md | Markdown | DataSources/BCN/BCN/ds_bcn_bcn.md | TJee-snyk/Exabeam | d27a45bfed7fc03d8b4ad430fd3520043b14c2e9 | [
"MIT"
] | null | null | null | DataSources/BCN/BCN/ds_bcn_bcn.md | TJee-snyk/Exabeam | d27a45bfed7fc03d8b4ad430fd3520043b14c2e9 | [
"MIT"
] | null | null | null | DataSources/BCN/BCN/ds_bcn_bcn.md | TJee-snyk/Exabeam | d27a45bfed7fc03d8b4ad430fd3520043b14c2e9 | [
"MIT"
] | 1 | 2022-03-07T23:54:48.000Z | 2022-03-07T23:54:48.000Z | Vendor: BCN
===========
Product: BCN
------------
| Rules | Models | MITRE TTPs | Event Types | Parsers |
|:-----:|:------:|:----------:|:-----------:|:-------:|
| 0 | 0 | 0 | 1 | 1 |
| Use-Case | Event Types/Parsers | MITRE TTP | Content |
|:----------:| ----------------------------------------------------------------------------------------- | --------- | ------------------------------------------ |
| Enrichment | computer-logon<br> ↳ [cef-bcn-bdds-dhcp](Parsers/parserContent_cef-bcn-bdds-dhcp.md)<br> | | [](Rules_Models/r_m_bcn_bcn_Enrichment.md) |
ATT&CK Matrix for Enterprise
----------------------------
| 51.333333 | 163 | 0.309091 | yue_Hant | 0.222808 |
ed736e5213b456d46aa30dd48f4aa6faeb1b275a | 10,295 | md | Markdown | docs/analysis-services/data-mining/apply-prediction-functions-to-a-model.md | bartfastiel/sql-docs.de-de | 6eb40df24a369fd59fc9afceccaf81c5d275ece9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/data-mining/apply-prediction-functions-to-a-model.md | bartfastiel/sql-docs.de-de | 6eb40df24a369fd59fc9afceccaf81c5d275ece9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/data-mining/apply-prediction-functions-to-a-model.md | bartfastiel/sql-docs.de-de | 6eb40df24a369fd59fc9afceccaf81c5d275ece9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Apply prediction functions to a model | Microsoft Docs
ms.date: 05/01/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: data-mining
ms.topic: conceptual
ms.author: owend
ms.reviewer: owend
author: minewiskan
manager: kfile
ms.openlocfilehash: 192f55c8194bfb9b85b3e0bfad51d8261e45ab0a
ms.sourcegitcommit: 2429fbcdb751211313bd655a4825ffb33354bda3
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 11/28/2018
ms.locfileid: "52540659"
---
# <a name="apply-prediction-functions-to-a-model"></a>Apply prediction functions to a model
  [!INCLUDE[ssas-appliesto-sqlas](../../includes/ssas-appliesto-sqlas.md)]
  To create a prediction query in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Data Mining, you must first select the mining model on which the query will be based. You can select any mining model that exists in the current project.
  After you have selected a model, you add a *prediction function* to the query. A prediction function can be used to obtain a prediction, but you can also add prediction functions that return related statistics, such as the probability of the predicted value, or information that was used in generating the prediction.
  Prediction functions can return the following types of values:
- The name of the predictable attribute and the predicted value
- Statistics about the distribution and variance of the predicted values
- The probability of a specified outcome, or of all possible outcomes
- The top or bottom scores or values
- Values associated with a specified node, object, or attribute
  The type of prediction functions that are available depends on the type of model you are working with. For example, prediction functions applied to decision tree models can return rules and node descriptions; prediction functions for time series models can return the lag and other information specific to time series.
  For a list of the prediction functions that are supported for almost all model types, see [General Prediction Functions (DMX)](../../dmx/general-prediction-functions-dmx.md).
  For examples of how to query a specific type of mining model, see the algorithm reference topic in [Data Mining Algorithms (Analysis Services - Data Mining)](../../analysis-services/data-mining/data-mining-algorithms-analysis-services-data-mining.md).
### <a name="choose-a-mining-model-to-use-for-prediction"></a>Choose a mining model to use for prediction
1. In [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)], right-click the model and select **Build Prediction Query**.
   -or-
   In [!INCLUDE[ssBIDevStudioFull](../../includes/ssbidevstudiofull-md.md)], click the **Mining Model Prediction** tab, and then click **Select Model** in the **Mining Model** table.
2. In the **Select Mining Model** dialog box, select a mining model, and then click **OK**.
   You can select any model in the current [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] database. To build a query using a model in a different database, you must either open a new query window in the context of that database, or open the solution file that contains the model.
### <a name="add-prediction-functions-to-a-query"></a>Add prediction functions to a query
1. In the **Prediction Query Builder**, configure the input data used for prediction, either by providing values in the **Singleton Query Input** dialog box, or by mapping the model to an external data source.
   For more information, see [Choose and Map Input Data for a Prediction Query](../../analysis-services/data-mining/choose-and-map-input-data-for-a-prediction-query.md).
   > [!WARNING]
   > You are not required to provide inputs to generate predictions. When there is no input, the algorithm will generally return the most likely predicted value across all possible inputs.
2. Click the **Source** column and select a value from the list:
   |||
   |-|-|
   |**\<model name>**|Choose this option to include values from the mining model in the output. You can only add predictable columns.<br /><br /> When you add a column from the model, the result returned is the non-distinct list of values in that column.<br /><br /> The columns that you add with this option are included in the SELECT portion of the resulting DMX statement.|
   |**Prediction Function**|Choose this option to browse a list of prediction functions.<br /><br /> The values or functions you select are added to the SELECT portion of the resulting DMX statement.<br /><br /> The list of prediction functions is neither filtered nor constrained by the type of model you have selected. Therefore, if you are not sure whether the function is supported for the current model type, you can simply add the function to the list and see whether an error occurs.<br /><br /> List items preceded by $ (for example, $ADJUSTEDPROBABILITY) represent columns from the nested table that is output when you use the **PredictHistogram** function. These are shortcuts that you can use to return a single column rather than a nested table.|
   |**Custom Expression**|Choose this option to type a custom expression and then assign an alias to the output.<br /><br /> The custom expression is added to the SELECT portion of the resulting DMX prediction query.<br /><br /> This option is useful if you want to add text for output with each row, call VB functions, or call custom stored procedures.<br /><br /> For information about using VBA and Excel functions from DMX, see [VBA Functions in MDX and DAX](../../mdx/vba-functions-in-mdx-and-dax.md).|
3. After you have added each function or expression, switch to the DMX view to see how the function was added within the DMX statement.
   > [!WARNING]
   > The Prediction Query Builder does not validate the DMX until you click **Results**. You will often find that the expression produced by the query builder is not valid DMX. This is usually caused by a column that is not related to the predictable column, or by attempting to predict a column in a nested table, which requires a sub-SELECT statement. In that case, you can switch to the DMX view and continue editing the statement.
### <a name="example-create-a-query-on-a-clustering-model"></a>Example: Create a query on a clustering model
1. If you do not have a clustering model available for building this sample query, create the model [TM_Clustering] by using the [Basic Data Mining Tutorial](http://msdn.microsoft.com/library/6602edb6-d160-43fb-83c8-9df5dddfeb9c).
2. In [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)], right-click the model [TM_Clustering] and select **Build Prediction Query**.
3. On the **Mining Model** menu, click **Singleton Query**.
4. In the **Singleton Query Input** dialog box, set the following values as inputs:
   - Gender = M
   - Commute Distance = 5-10 Miles
5. In the query grid, for **Source**, select the TM_Clustering mining model, and add the column [Bike Buyer].
6. For **Source**, select **Prediction Function**, and add the function **Cluster**.
7. For **Source**, select **Prediction Function**, add the function **PredictSupport**, and drag the model column [Bike Buyer] into the **Criteria/Argument** box. Type **Support** in the **Alias** column.
   Copy the expression representing the prediction function and the column reference from the **Criteria/Argument** box.
8. For **Source**, select **Custom Expression**, type an alias, and then reference the Excel CEILING function by using the following syntax:
```
EXCEL!<FunctionName>(<arguments>) as <return type>
```
   Insert the column reference as the argument to the function.
   For example, the following expression returns the CEILING of the support value:
```
EXCEL!CEILING(PredictSupport([TM_Clustering].[Bike Buyer]),2)
```
   In the **Alias** column, type CEILING.
9. Click **Switch to query text view** to review the generated DMX statement, and then click **Switch to query result view** to see the columns output by the prediction query.
   The following table shows the expected results:
|Bike Buyer|$Cluster|Support|CEILING|
|----------------|--------------|-------------|-------------|
|0|Cluster 8|954|953.948638926372|
If you want to add other clauses elsewhere in the statement, for example a WHERE clause, you cannot add them by using the grid; you must switch to the DMX view first.
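For reference, the complete DMX statement generated by these steps should look roughly like the following sketch (the input column names [Gender] and [Commute Distance] follow the Adventure Works sample data and are assumptions here):
```
SELECT
  [TM_Clustering].[Bike Buyer],
  Cluster(),
  PredictSupport([TM_Clustering].[Bike Buyer]) AS [Support],
  EXCEL!CEILING(PredictSupport([TM_Clustering].[Bike Buyer]), 2) AS [CEILING]
FROM
  [TM_Clustering]
NATURAL PREDICTION JOIN
(SELECT 'M' AS [Gender],
        '5-10 Miles' AS [Commute Distance]) AS t
```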
## <a name="see-also"></a>See Also
 [Data Mining Queries](../../analysis-services/data-mining/data-mining-queries.md)
| 79.806202 | 900 | 0.765129 | deu_Latn | 0.997335 |
ed738cf67d8d2324d5de61ab5ecd013abce3a2e2 | 2,665 | md | Markdown | RELEASE NOTES.md | danielrotaermel/THLabel | ce70ae706d6bcafe2ab189a5c4276770e6873b31 | [
"Zlib"
] | 219 | 2016-06-28T08:15:32.000Z | 2022-03-22T03:53:33.000Z | RELEASE NOTES.md | danielrotaermel/THLabel | ce70ae706d6bcafe2ab189a5c4276770e6873b31 | [
"Zlib"
] | 13 | 2016-07-11T06:27:56.000Z | 2021-06-23T14:04:05.000Z | RELEASE NOTES.md | danielrotaermel/THLabel | ce70ae706d6bcafe2ab189a5c4276770e6873b31 | [
"Zlib"
] | 51 | 2016-06-28T08:06:59.000Z | 2022-03-18T15:40:01.000Z | Version 1.4.10
- Fixed layout for RTL mode, kudos to @morozkin.
Version 1.4.9
- Fixed warnings and showing incorrect fonts on iOS 13, kudos to @sochalewski.
Version 1.4.8
- Fixed memory related crash.
Version 1.4.7
- Set maximum width to preferredMaxLayoutWidth for intrinsicContentSize.
- Fixed warning.
Version 1.4.6
- Removed support for IB_DESIGNABLE and IBInspectable until it no longer causes problems with CocoaPods. Please use the `ibdesignable` branch if you're interested in this feature and are not using CocoaPods.
- Fixed bug regarding gradients introduced in last version.
Version 1.4.5
- Added support for IB_DESIGNABLE and IBInspectable, only available with Xcode 6.
- Added lineSpacing property.
Version 1.4.4
- Fixed memory leak.
Version 1.4.3
- Forcing clipsToBounds to YES, because of potential drawing issues.
Version 1.4.2
- Fixed unexpected truncation on iOS device.
Version 1.4.1
- Fixed crash, when text is nil.
Version 1.4
- Added logic for sizeThatFits and intrinsicContentSize.
- Added letterSpacing and automaticallyAdjustTextInsets properties.
Version 1.3.1
- Fixed memory leak.
- Updated example and screenshot.
Version 1.3
- Added fadeTruncatingMode property.
- Fixed overlapping non-opaque strokes.
Version 1.2
- Added innerShadowBlur, innerShadowOffset and innerShadowColor properties.
Version 1.1.7
- Fixed drawing, when frame is too small.
Version 1.1.6
- Fixed memory related crash.
Version 1.1.5
- Fixed text alignment for multi-line texts.
Version 1.1.4
- iOS 5 compatibility restored. iOS 4 should also be compatible, but it's untested!
Version 1.1.3
- Fixed potential memory leaks.
Version 1.1.2
- Fixed memory related crash.
Version 1.1.1
- Fixed crash, which was caused by a premature release of a CFStringRef.
- Fixed crash, when text is nil.
Version 1.1
- Complete overhaul using Core Text. This means `CoreText.framework` is now required. Don't forget to add it to your project, if you're updating.
- This also fixes all problems with iOS 7 and should still be compatible with iOS 4 (untested yet, iOS 5 works).
Version 1.0.7
- Fixes regarding iOS 7 beta 5. Only tested on Simulator though.
Version 1.0.6
- Minor refactorings, because iOS Base SDK 6.0 or higher is required.
Version 1.0.5
- Fixed usage of compiler macros and backward compatibility.
Version 1.0.4
- iOS 7 compatibility: breaks the former look & feel of strokes, as described in the README.
Version 1.0.3
- Fixed bug with text insets not applying correctly on stroke.
Version 1.0.2
- Fixed character spacing for centered stroke.
Version 1.0.1
- Added strokePosition property.
Version 1.0
- Initial release.
| 20.820313 | 197 | 0.763977 | eng_Latn | 0.985536 |
ed73cdff759f1cef5f7af8c8b6b0f8439173b723 | 1,281 | md | Markdown | docs/pipelines/tasks/deploy/chef-knife.md | wangyoutian/azure-devops-docs | a38fff177d9478aa3fad0f29e85c447b911e74fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/pipelines/tasks/deploy/chef-knife.md | wangyoutian/azure-devops-docs | a38fff177d9478aa3fad0f29e85c447b911e74fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/pipelines/tasks/deploy/chef-knife.md | wangyoutian/azure-devops-docs | a38fff177d9478aa3fad0f29e85c447b911e74fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Chef Knife task
description: Run scripts with Knife commands on your Chef workstation
ms.topic: reference
ms.assetid: C7B7CCF9-D6E0-472B-97BB-06B6E43504F3
ms.custom: seodec18
ms.author: ronai
author: RoopeshNair
ms.date: 12/07/2018
monikerRange: 'azure-devops'
---
# Chef Knife task
[!INCLUDE [version-eq-azure-devops](../../../includes/version-eq-azure-devops.md)]
Use this task to run scripts with Knife commands on your Chef workstation.
::: moniker range="> tfs-2018"
## YAML snippet
[!INCLUDE [temp](../includes/yaml/ChefKnifeV1.md)]
::: moniker-end
## Arguments
<table><thead><tr><th>Argument</th><th>Description</th></tr></thead>
<tr><td>Chef Subscription</td><td>(Required) Chef subscription to configure before running knife commands</td></tr>
<tr><td>Script Path</td><td>(Required) Path of the script. Should be fully qualified path or relative to the default working directory.</td></tr>
<tr><td>Script Arguments</td><td>(Optional) Additional parameters to pass to Script. Can be either ordinal or named parameters.</td></tr>
</table>
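As a minimal sketch, the script referenced by **Script Path** can simply invoke knife commands against the Chef server configured by the task. For example (the script name and contents are illustrative only):

```bash
#!/bin/bash
# Example script: list the nodes registered with the Chef server.
knife node list
```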
### [Task control options](../../process/tasks.md#controloptions)
## Open source
This task is open source [on GitHub](https://github.com/Microsoft/azure-pipelines-tasks). Feedback and contributions are welcome.
| 32.025 | 145 | 0.738486 | eng_Latn | 0.67207 |
ed74a9176faf7b57270b3e8b782779a492531fdf | 900 | md | Markdown | _posts/Python/T/tracemalloc/Traceback/2021-01-01-Traceback.format.md | w3api/w3api | 681462ece7265723031a88bec5285209d0e125bf | [
"MIT"
] | 1 | 2021-09-15T20:32:10.000Z | 2021-09-15T20:32:10.000Z | _posts/Python/T/tracemalloc/Traceback/2021-01-01-Traceback.format.md | w3api/w3api | 681462ece7265723031a88bec5285209d0e125bf | [
"MIT"
] | 20 | 2021-01-17T01:13:46.000Z | 2021-06-20T21:16:02.000Z | _posts/Python/T/tracemalloc/Traceback/2021-01-01-Traceback.format.md | w3api/w3api | 681462ece7265723031a88bec5285209d0e125bf | [
"MIT"
] | 2 | 2021-09-15T20:32:08.000Z | 2022-02-20T16:57:46.000Z | ---
title: tracemalloc.Traceback.format
permalink: /Python/tracemalloc/Traceback/format/
date: 2021-01-01
key: Python.T.tracemalloc.Traceback.format
category: python
tags: ['metodo python', 'tracemalloc']
sidebar:
nav: python
---
{% include w3api/datos.html clase=site.data.Python.T.tracemalloc.Traceback.metodos valor="format" %}
## Description
{{_dato.description }}
## Syntax
~~~python
{{ _dato.sintaxis }}
~~~
## Parameters
* **limit**, {% include w3api/function_param_description.html propiedad=_dato valor="limit" %}
* **most_recent_first**, {% include w3api/function_param_description.html propiedad=_dato valor="most_recent_first" %}
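As an illustrative sketch (independent of the generated example below), `Traceback.format` can print a formatted traceback for the largest allocation recorded by `tracemalloc`:
~~~python
import tracemalloc

tracemalloc.start(10)  # keep up to 10 frames per allocation

data = [bytes(1000) for _ in range(100)]  # allocate some memory to trace

snapshot = tracemalloc.take_snapshot()
stat = snapshot.statistics('traceback')[0]  # largest allocation group
for line in stat.traceback.format(limit=5, most_recent_first=True):
    print(line)
~~~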
## Parent Class
[Traceback](/Python/tracemalloc/Traceback/)
## Example
~~~python
{{ _dato.code}}
~~~
## Articles
<ul>
{%- for _ldc in _dato.ldc -%}
<li>
<a href="{{_ldc['url'] }}">{{ _ldc['nombre'] }}</a>
</li>
{%- endfor -%}
</ul>
| 21.95122 | 119 | 0.684444 | yue_Hant | 0.277666 |
ed74aed775b5bb0fdbcaed9356e10fe66a250ae1 | 603 | md | Markdown | game/characters/npcs/kubaz_peasant.md | efortner/force-and-destiny-1 | 9754a44711d6a79f2b2c4e5161339f4cc64f938a | [
"MIT"
] | 1 | 2021-03-01T00:21:11.000Z | 2021-03-01T00:21:11.000Z | game/characters/npcs/kubaz_peasant.md | efortner/force-and-destiny-1 | 9754a44711d6a79f2b2c4e5161339f4cc64f938a | [
"MIT"
] | null | null | null | game/characters/npcs/kubaz_peasant.md | efortner/force-and-destiny-1 | 9754a44711d6a79f2b2c4e5161339f4cc64f938a | [
"MIT"
] | null | null | null | # T'Cori Oran
## Description
A Kubaz who owes a life debt to Wolf. Previously a raider. Uses the Slaver stat block (EotE, p. 394).
An orphan, 16 years old, and a street urchin. Youngest member of a now-dead gang. Keeps camp in a cistern in the wilderness.
## Stats
|Brawn|Agility|Intellect|Cunning|Willpower|Presence
|-----|-------|---------|-------|---------|--------
|3|3|1|3|2|1
* Skills: Coercion 2, Melee 2, Ranged (Light) 2, Vigilance 2
* Soak: 5 (Padded Armor + Brawn)
* Equipment: Blaster Rifle (Ranged Heavy), Damage 9/Crit 3/Range Long
## Character Sheet
https://swsheets.com/c/cvyu90cxh-t-cori-oran
| 31.736842 | 113 | 0.671642 | eng_Latn | 0.556556 |
ed74cdb2f2d8fad35b1f64f2ea6a30a71ea5c75d | 44 | md | Markdown | README.md | tls1403/PythonTest | 069f23b25ec655aa199d13aef9c14d2e33366861 | [
"MIT"
] | null | null | null | README.md | tls1403/PythonTest | 069f23b25ec655aa199d13aef9c14d2e33366861 | [
"MIT"
] | null | null | null | README.md | tls1403/PythonTest | 069f23b25ec655aa199d13aef9c14d2e33366861 | [
"MIT"
] | null | null | null | # PythonTest
Practice files for data analysis using Pandas.
| 11 | 29 | 0.75 | kor_Hang | 1.00001 |
ed74f9a1b62146bb06f9cca681379997d7d72686 | 319 | md | Markdown | contrib/init/README.md | SafeNodeNetwork/SafeNode | 72c830f7eeb59b9c5c959a2745da9d37471a27a7 | [
"MIT"
] | 2 | 2018-05-05T14:03:54.000Z | 2018-05-05T14:55:04.000Z | contrib/init/README.md | SafeNodeNetwork/SafeNode | 72c830f7eeb59b9c5c959a2745da9d37471a27a7 | [
"MIT"
] | 1 | 2018-05-09T17:46:32.000Z | 2018-05-09T17:46:32.000Z | contrib/init/README.md | SafeNodeNetwork/SafeNode | 72c830f7eeb59b9c5c959a2745da9d37471a27a7 | [
"MIT"
] | null | null | null | Sample configuration files for:
SystemD: safenoded.service
Upstart: safenoded.conf
OpenRC: safenoded.openrc
safenoded.openrcconf
CentOS: safenoded.init
OS X: org.safenode.safenoded.plist
have been made available to assist packagers in creating node packages here.
See doc/init.md for more information.
| 24.538462 | 76 | 0.786834 | eng_Latn | 0.845136 |
ed766091769fcf9676d32498c55e7c3ebb5142a6 | 23,821 | md | Markdown | repos/memcached/remote/1-alpine.md | Mattlk13/repo-info | 734e8af562852b4d6503f484be845727b88a97ae | [
"Apache-2.0"
] | null | null | null | repos/memcached/remote/1-alpine.md | Mattlk13/repo-info | 734e8af562852b4d6503f484be845727b88a97ae | [
"Apache-2.0"
] | 1 | 2020-11-05T19:56:17.000Z | 2020-11-12T13:09:29.000Z | repos/memcached/remote/1-alpine.md | Mattlk13/repo-info | 734e8af562852b4d6503f484be845727b88a97ae | [
"Apache-2.0"
] | 1 | 2017-02-09T22:16:59.000Z | 2017-02-09T22:16:59.000Z | ## `memcached:1-alpine`
```console
$ docker pull memcached@sha256:a37fd6c034a7e4334d091beedffb525aae8af4acc0cee3030758ae3edc0c524b
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms: 6
- linux; amd64
- linux; arm variant v7
- linux; arm64 variant v8
- linux; 386
- linux; ppc64le
- linux; s390x
### `memcached:1-alpine` - linux; amd64
```console
$ docker pull memcached@sha256:e79dd6dea75bf64de1cd87b2204574865bcc12fcda9e1d5c63ce82e3271d4bdf
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **5.3 MB (5345174 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:31403aa3ac88632da679879fcb0a3a627c6f50ec7da0659ae032f63d289d9587`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["memcached"]`
```dockerfile
# Wed, 24 Nov 2021 20:19:40 GMT
ADD file:9233f6f2237d79659a9521f7e390df217cec49f1a8aa3a12147bbca1956acdb9 in /
# Wed, 24 Nov 2021 20:19:40 GMT
CMD ["/bin/sh"]
# Tue, 30 Nov 2021 02:35:59 GMT
RUN addgroup -g 11211 memcache && adduser -D -u 11211 -G memcache memcache
# Tue, 30 Nov 2021 02:36:00 GMT
RUN apk add --no-cache libsasl
# Fri, 11 Feb 2022 23:43:32 GMT
ENV MEMCACHED_VERSION=1.6.14
# Fri, 11 Feb 2022 23:43:32 GMT
ENV MEMCACHED_SHA1=be64c11d34f04bd1855100b8b5ad9ae8b45e0ab0
# Fri, 11 Feb 2022 23:47:43 GMT
RUN set -x && apk add --no-cache --virtual .build-deps ca-certificates coreutils cyrus-sasl-dev gcc libc-dev libevent-dev linux-headers make openssl openssl-dev perl perl-io-socket-ssl perl-utils && wget -O memcached.tar.gz "https://memcached.org/files/memcached-$MEMCACHED_VERSION.tar.gz" && echo "$MEMCACHED_SHA1 memcached.tar.gz" | sha1sum -c - && mkdir -p /usr/src/memcached && tar -xzf memcached.tar.gz -C /usr/src/memcached --strip-components=1 && rm memcached.tar.gz && cd /usr/src/memcached && ./configure --build="$gnuArch" --enable-extstore --enable-sasl --enable-sasl-pwdb --enable-tls && nproc="$(nproc)" && make -j "$nproc" && make test PARALLEL="$nproc" && make install && cd / && rm -rf /usr/src/memcached && runDeps="$( scanelf --needed --nobanner --format '%n#p' --recursive /usr/local | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' )" && apk add --no-network --virtual .memcached-rundeps $runDeps && apk del --no-network .build-deps && memcached -V
# Fri, 11 Feb 2022 23:47:43 GMT
COPY file:bf641b13ea5b37f5830b299ebe9d72f194ee5d897db14faf8b133dc7a66a48ad in /usr/local/bin/
# Fri, 11 Feb 2022 23:47:44 GMT
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
# Fri, 11 Feb 2022 23:47:44 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Fri, 11 Feb 2022 23:47:44 GMT
USER memcache
# Fri, 11 Feb 2022 23:47:45 GMT
EXPOSE 11211
# Fri, 11 Feb 2022 23:47:45 GMT
CMD ["memcached"]
```
- Layers:
- `sha256:59bf1c3509f33515622619af21ed55bbe26d24913cedbca106468a5fb37a50c3`
Last Modified: Wed, 24 Nov 2021 20:20:05 GMT
Size: 2.8 MB (2818413 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:58e98a684550967ddcb81865eb41c76d2cb28ce000c8ab6b2fdc45ecd6d58e9f`
Last Modified: Tue, 30 Nov 2021 02:40:45 GMT
Size: 1.3 KB (1263 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:23d22b3cd5df08fb1f3c02ad4eb2e296a0d9d544f8be06beafcfc78f524cb6f3`
Last Modified: Tue, 30 Nov 2021 02:40:46 GMT
Size: 109.2 KB (109231 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4afc0667aa68ec7c0a865809dcfee799f6357b1f6d6c4d6e58240cff71c6264d`
Last Modified: Fri, 11 Feb 2022 23:48:39 GMT
Size: 2.4 MB (2415863 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ef8943e1197199b34fc9e9230f24d760a30cdcf11c6b8fc48e89592e00da6edf`
Last Modified: Fri, 11 Feb 2022 23:48:38 GMT
Size: 283.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b10635796b46c66fb5447019c111ad88d5b808536aab4757aa60b57ffcc9d0da`
Last Modified: Fri, 11 Feb 2022 23:48:38 GMT
Size: 121.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `memcached:1-alpine` - linux; arm variant v7
```console
$ docker pull memcached@sha256:79a5646a0c845a791f36017fda3292281b48295b06c53f13d62c9f94237d4731
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **4.0 MB (3960292 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:2afac0e49af3ce3f1769abe11a29f8f5610c6a736d8c5b6c7b9770c8d8e94e91`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["memcached"]`
```dockerfile
# Wed, 16 Dec 2020 23:58:14 GMT
ADD file:bd07f77a2b2741ca6bda80d9203be9c7274cf73145bff778cf000db0d8d4e903 in /
# Wed, 16 Dec 2020 23:58:15 GMT
CMD ["/bin/sh"]
# Thu, 17 Dec 2020 06:43:29 GMT
RUN addgroup -g 11211 memcache && adduser -D -u 11211 -G memcache memcache
# Thu, 17 Dec 2020 06:43:31 GMT
RUN apk add --no-cache cyrus-sasl-plain
# Thu, 17 Dec 2020 06:43:33 GMT
ENV MEMCACHED_VERSION=1.6.9
# Thu, 17 Dec 2020 06:43:35 GMT
ENV MEMCACHED_SHA1=42ae062094fdf083cfe7b21ff377c781011c2be1
# Thu, 17 Dec 2020 06:46:32 GMT
RUN set -x && apk add --no-cache --virtual .build-deps ca-certificates coreutils cyrus-sasl-dev dpkg-dev dpkg gcc libc-dev libevent-dev linux-headers make openssl openssl-dev perl perl-io-socket-ssl perl-utils tar wget && wget -O memcached.tar.gz "https://memcached.org/files/memcached-$MEMCACHED_VERSION.tar.gz" && echo "$MEMCACHED_SHA1 memcached.tar.gz" | sha1sum -c - && mkdir -p /usr/src/memcached && tar -xzf memcached.tar.gz -C /usr/src/memcached --strip-components=1 && rm memcached.tar.gz && cd /usr/src/memcached && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && enableExtstore="$( case "$gnuArch" in s390x-*) ;; *) echo '--enable-extstore' ;; esac )" && ./configure --build="$gnuArch" --enable-sasl --enable-sasl-pwdb --enable-tls $enableExtstore && nproc="$(nproc)" && make -j "$nproc" && make test PARALLEL="$nproc" && make install && cd / && rm -rf /usr/src/memcached && runDeps="$( scanelf --needed --nobanner --format '%n#p' --recursive /usr/local | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' )" && apk add --no-network --virtual .memcached-rundeps $runDeps && apk del --no-network .build-deps && memcached -V
# Thu, 17 Dec 2020 06:46:33 GMT
COPY file:bf641b13ea5b37f5830b299ebe9d72f194ee5d897db14faf8b133dc7a66a48ad in /usr/local/bin/
# Thu, 17 Dec 2020 06:46:35 GMT
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
# Thu, 17 Dec 2020 06:46:35 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Thu, 17 Dec 2020 06:46:36 GMT
USER memcache
# Thu, 17 Dec 2020 06:46:38 GMT
EXPOSE 11211
# Thu, 17 Dec 2020 06:46:39 GMT
CMD ["memcached"]
```
- Layers:
- `sha256:c58e8a26a8407acc3ead776f6526efa889fda03270a8d05109208d9f59159420`
Last Modified: Wed, 16 Dec 2020 23:58:59 GMT
Size: 2.4 MB (2407555 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:68564bbfc09f153688e942bf54d5375d1e27f3507c0bed6b038c2ac8ce095aa5`
Last Modified: Thu, 17 Dec 2020 06:46:58 GMT
Size: 1.3 KB (1258 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7cac3a91edee49d0b08a25706ae86059bed89941a08b496e72ef092e57c4ecb3`
Last Modified: Thu, 17 Dec 2020 06:46:58 GMT
Size: 13.8 KB (13825 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cf16e9bb942ec42a35a792beab65aea843209e7bdde7e856499b9fc85f93bc4e`
Last Modified: Thu, 17 Dec 2020 06:46:58 GMT
Size: 1.5 MB (1537248 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fc15394239bd0c083e1b6df806aa5ffeb8b9cc7e80113afc2959721de49f90d1`
Last Modified: Thu, 17 Dec 2020 06:46:58 GMT
Size: 286.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:482f0eb571548eae5720c652ff7da13558e56a8722dc9932cf7eb1ef3eb33acb`
Last Modified: Thu, 17 Dec 2020 06:46:58 GMT
Size: 120.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `memcached:1-alpine` - linux; arm64 variant v8
```console
$ docker pull memcached@sha256:c28cbb9d278a2ef6c4ef325c2fffa8bc4631258a341cb411d2a7bec9655007cb
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **5.1 MB (5109831 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:e10238b3468650bbfa09ac13b6115339c5305e3e92cb34bb47b6a03b2a93df01`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["memcached"]`
```dockerfile
# Wed, 24 Nov 2021 20:39:20 GMT
ADD file:df53811312284306901fdaaff0a357a4bf40d631e662fe9ce6d342442e494b6c in /
# Wed, 24 Nov 2021 20:39:20 GMT
CMD ["/bin/sh"]
# Mon, 29 Nov 2021 23:43:46 GMT
RUN addgroup -g 11211 memcache && adduser -D -u 11211 -G memcache memcache
# Mon, 29 Nov 2021 23:43:48 GMT
RUN apk add --no-cache libsasl
# Fri, 11 Feb 2022 23:59:43 GMT
ENV MEMCACHED_VERSION=1.6.14
# Fri, 11 Feb 2022 23:59:44 GMT
ENV MEMCACHED_SHA1=be64c11d34f04bd1855100b8b5ad9ae8b45e0ab0
# Sat, 12 Feb 2022 00:02:34 GMT
RUN set -x && apk add --no-cache --virtual .build-deps ca-certificates coreutils cyrus-sasl-dev gcc libc-dev libevent-dev linux-headers make openssl openssl-dev perl perl-io-socket-ssl perl-utils && wget -O memcached.tar.gz "https://memcached.org/files/memcached-$MEMCACHED_VERSION.tar.gz" && echo "$MEMCACHED_SHA1 memcached.tar.gz" | sha1sum -c - && mkdir -p /usr/src/memcached && tar -xzf memcached.tar.gz -C /usr/src/memcached --strip-components=1 && rm memcached.tar.gz && cd /usr/src/memcached && ./configure --build="$gnuArch" --enable-extstore --enable-sasl --enable-sasl-pwdb --enable-tls && nproc="$(nproc)" && make -j "$nproc" && make test PARALLEL="$nproc" && make install && cd / && rm -rf /usr/src/memcached && runDeps="$( scanelf --needed --nobanner --format '%n#p' --recursive /usr/local | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' )" && apk add --no-network --virtual .memcached-rundeps $runDeps && apk del --no-network .build-deps && memcached -V
# Sat, 12 Feb 2022 00:02:36 GMT
COPY file:bf641b13ea5b37f5830b299ebe9d72f194ee5d897db14faf8b133dc7a66a48ad in /usr/local/bin/
# Sat, 12 Feb 2022 00:02:36 GMT
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
# Sat, 12 Feb 2022 00:02:37 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Sat, 12 Feb 2022 00:02:38 GMT
USER memcache
# Sat, 12 Feb 2022 00:02:39 GMT
EXPOSE 11211
# Sat, 12 Feb 2022 00:02:40 GMT
CMD ["memcached"]
```
- Layers:
- `sha256:9b3977197b4f2147bdd31e1271f811319dcd5c2fc595f14e81f5351ab6275b99`
Last Modified: Wed, 24 Nov 2021 20:39:59 GMT
Size: 2.7 MB (2715434 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:85437b97d6aa5e8fbe9de8abde12cee8d795ff43b8a977cb44ef934f12adb97a`
Last Modified: Mon, 29 Nov 2021 23:47:33 GMT
Size: 1.2 KB (1239 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:610cb0458a378225900aedc80ffe5fdb3529e46116bdb6bcaf8705f436662614`
Last Modified: Mon, 29 Nov 2021 23:47:33 GMT
Size: 110.5 KB (110518 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:98b2012c64d2efed092c8ec83344fc68be38873465f9e7b82616b504ca67ea14`
Last Modified: Sat, 12 Feb 2022 00:03:57 GMT
Size: 2.3 MB (2282236 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9b54695eeb42571708a7da87031a54d3f82b1be9e474102eeaa912ee51cb86d3`
Last Modified: Sat, 12 Feb 2022 00:03:57 GMT
Size: 283.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:17690cc3c867c822f67c6edabb66fd363b8988e13ed76d6b2a512a4af041bd58`
Last Modified: Sat, 12 Feb 2022 00:03:57 GMT
Size: 121.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `memcached:1-alpine` - linux; 386
```console
$ docker pull memcached@sha256:5b9731ba5ab72d85a6ed73548a502d6e22d512f328daacddeb531c252745a1e7
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **5.4 MB (5369203 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:83c422eacdcda2fa728cc1fc176ab0cd57a0be7766f0b517dcc64045b255d0a9`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["memcached"]`
```dockerfile
# Wed, 24 Nov 2021 20:53:48 GMT
ADD file:b9a17131c440053f2f67e127b447645f25fd7de2d6caca42f569cafab6291855 in /
# Wed, 24 Nov 2021 20:53:48 GMT
CMD ["/bin/sh"]
# Mon, 29 Nov 2021 23:40:53 GMT
RUN addgroup -g 11211 memcache && adduser -D -u 11211 -G memcache memcache
# Mon, 29 Nov 2021 23:40:56 GMT
RUN apk add --no-cache libsasl
# Fri, 11 Feb 2022 23:43:00 GMT
ENV MEMCACHED_VERSION=1.6.14
# Fri, 11 Feb 2022 23:43:01 GMT
ENV MEMCACHED_SHA1=be64c11d34f04bd1855100b8b5ad9ae8b45e0ab0
# Fri, 11 Feb 2022 23:47:20 GMT
RUN set -x && apk add --no-cache --virtual .build-deps ca-certificates coreutils cyrus-sasl-dev gcc libc-dev libevent-dev linux-headers make openssl openssl-dev perl perl-io-socket-ssl perl-utils && wget -O memcached.tar.gz "https://memcached.org/files/memcached-$MEMCACHED_VERSION.tar.gz" && echo "$MEMCACHED_SHA1 memcached.tar.gz" | sha1sum -c - && mkdir -p /usr/src/memcached && tar -xzf memcached.tar.gz -C /usr/src/memcached --strip-components=1 && rm memcached.tar.gz && cd /usr/src/memcached && ./configure --build="$gnuArch" --enable-extstore --enable-sasl --enable-sasl-pwdb --enable-tls && nproc="$(nproc)" && make -j "$nproc" && make test PARALLEL="$nproc" && make install && cd / && rm -rf /usr/src/memcached && runDeps="$( scanelf --needed --nobanner --format '%n#p' --recursive /usr/local | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' )" && apk add --no-network --virtual .memcached-rundeps $runDeps && apk del --no-network .build-deps && memcached -V
# Fri, 11 Feb 2022 23:47:20 GMT
COPY file:bf641b13ea5b37f5830b299ebe9d72f194ee5d897db14faf8b133dc7a66a48ad in /usr/local/bin/
# Fri, 11 Feb 2022 23:47:21 GMT
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
# Fri, 11 Feb 2022 23:47:21 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Fri, 11 Feb 2022 23:47:22 GMT
USER memcache
# Fri, 11 Feb 2022 23:47:22 GMT
EXPOSE 11211
# Fri, 11 Feb 2022 23:47:22 GMT
CMD ["memcached"]
```
- Layers:
- `sha256:e6889e0d66307a4b916fc844f2dcbc03245c63bc4189dd3e88126d9dcf2f9231`
Last Modified: Wed, 24 Nov 2021 20:54:48 GMT
Size: 2.8 MB (2827117 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:45678c3b8c90609de31d42f56e0b57b28ef7324488a92a85ffff4c8b2ffff25e`
Last Modified: Mon, 29 Nov 2021 23:46:14 GMT
Size: 1.3 KB (1264 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2b3c44e17e0b014b04185b89fc721767e22e1b37dea6cef3913d63ca876865be`
Last Modified: Mon, 29 Nov 2021 23:46:14 GMT
Size: 121.1 KB (121141 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:62d0cbe6f51da8f3bfd9d0bfbbb510f173b231cfc4260066e5da12995d012223`
Last Modified: Fri, 11 Feb 2022 23:48:48 GMT
Size: 2.4 MB (2419277 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cfa2ab27dc5f099b91f7195a3d75e51e7f2c84ea56052a565dcbeba235da559b`
Last Modified: Fri, 11 Feb 2022 23:48:47 GMT
Size: 283.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d0394921490062e0dab12bc6f8a3a5bd3f1a97dc261653593a22e3fa95776186`
Last Modified: Fri, 11 Feb 2022 23:48:47 GMT
Size: 121.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `memcached:1-alpine` - linux; ppc64le
```console
$ docker pull memcached@sha256:75cb5141098fb02bb0c554c31c00d28802b414593a6b41d430c10505ee943fe0
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **5.3 MB (5315837 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:4be74852968716dd05ad711f1cda531afac5b782fe0dd6692e5aed7d91d54466`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["memcached"]`
```dockerfile
# Wed, 24 Nov 2021 20:20:16 GMT
ADD file:57115dca2eb707f46b6301e75174e6aa316fb02ac28643b91429b75be51bd8c8 in /
# Wed, 24 Nov 2021 20:20:20 GMT
CMD ["/bin/sh"]
# Tue, 30 Nov 2021 00:52:13 GMT
RUN addgroup -g 11211 memcache && adduser -D -u 11211 -G memcache memcache
# Tue, 30 Nov 2021 00:52:18 GMT
RUN apk add --no-cache libsasl
# Fri, 11 Feb 2022 23:54:02 GMT
ENV MEMCACHED_VERSION=1.6.14
# Fri, 11 Feb 2022 23:54:06 GMT
ENV MEMCACHED_SHA1=be64c11d34f04bd1855100b8b5ad9ae8b45e0ab0
# Fri, 11 Feb 2022 23:57:48 GMT
RUN set -x && apk add --no-cache --virtual .build-deps ca-certificates coreutils cyrus-sasl-dev gcc libc-dev libevent-dev linux-headers make openssl openssl-dev perl perl-io-socket-ssl perl-utils && wget -O memcached.tar.gz "https://memcached.org/files/memcached-$MEMCACHED_VERSION.tar.gz" && echo "$MEMCACHED_SHA1 memcached.tar.gz" | sha1sum -c - && mkdir -p /usr/src/memcached && tar -xzf memcached.tar.gz -C /usr/src/memcached --strip-components=1 && rm memcached.tar.gz && cd /usr/src/memcached && ./configure --build="$gnuArch" --enable-extstore --enable-sasl --enable-sasl-pwdb --enable-tls && nproc="$(nproc)" && make -j "$nproc" && make test PARALLEL="$nproc" && make install && cd / && rm -rf /usr/src/memcached && runDeps="$( scanelf --needed --nobanner --format '%n#p' --recursive /usr/local | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' )" && apk add --no-network --virtual .memcached-rundeps $runDeps && apk del --no-network .build-deps && memcached -V
# Fri, 11 Feb 2022 23:57:50 GMT
COPY file:bf641b13ea5b37f5830b299ebe9d72f194ee5d897db14faf8b133dc7a66a48ad in /usr/local/bin/
# Fri, 11 Feb 2022 23:57:55 GMT
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
# Fri, 11 Feb 2022 23:57:58 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Fri, 11 Feb 2022 23:57:59 GMT
USER memcache
# Fri, 11 Feb 2022 23:58:01 GMT
EXPOSE 11211
# Fri, 11 Feb 2022 23:58:02 GMT
CMD ["memcached"]
```
- Layers:
- `sha256:159b5dcb1717c815c76ff5ea1db730e18e8609c9090238e43282856db9e71f47`
Last Modified: Wed, 24 Nov 2021 20:21:14 GMT
Size: 2.8 MB (2814780 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f44bb0dc5dc6d3634512266fffaecf643a467ef890e0a60fa7daed8a8f379335`
Last Modified: Tue, 30 Nov 2021 00:57:03 GMT
Size: 1.3 KB (1269 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:242bba95e896195e2c48210499ed8dad5c07a25a93e0d6304a787f4d5f05bb79`
Last Modified: Tue, 30 Nov 2021 00:57:03 GMT
Size: 126.2 KB (126210 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e9d933f0d59015113f25123de9b3460a03fdf091181f4c9330b1bb42c27a8cce`
Last Modified: Fri, 11 Feb 2022 23:59:16 GMT
Size: 2.4 MB (2373175 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:659fd1f60c2bc05966f606e677370eeee3176f7d6dbd530b3f271ba313f8b0fe`
Last Modified: Fri, 11 Feb 2022 23:59:15 GMT
Size: 282.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a8d9d86e7310ff252ce602224ce1643c6b8d50713444e43804cdb93912f89087`
Last Modified: Fri, 11 Feb 2022 23:59:15 GMT
Size: 121.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `memcached:1-alpine` - linux; s390x
```console
$ docker pull memcached@sha256:05d8b33e1a3927fd3ba6b7cd62b648d66a4f07da15103657791736e70515bd20
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **4.9 MB (4869494 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:4a751c73ff4357b7992c0a524e080537c462baa079ef3c7b96a3ce7b13699135`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["memcached"]`
```dockerfile
# Wed, 24 Nov 2021 20:41:23 GMT
ADD file:cd24c711a2ef431b3ff94f9a02bfc42f159bc60de1d0eceecafea4e8af02441d in /
# Wed, 24 Nov 2021 20:41:23 GMT
CMD ["/bin/sh"]
# Mon, 29 Nov 2021 23:44:15 GMT
RUN addgroup -g 11211 memcache && adduser -D -u 11211 -G memcache memcache
# Mon, 29 Nov 2021 23:44:16 GMT
RUN apk add --no-cache libsasl
# Fri, 11 Feb 2022 23:53:09 GMT
ENV MEMCACHED_VERSION=1.6.14
# Fri, 11 Feb 2022 23:53:09 GMT
ENV MEMCACHED_SHA1=be64c11d34f04bd1855100b8b5ad9ae8b45e0ab0
# Fri, 11 Feb 2022 23:57:25 GMT
RUN set -x && apk add --no-cache --virtual .build-deps ca-certificates coreutils cyrus-sasl-dev gcc libc-dev libevent-dev linux-headers make openssl openssl-dev perl perl-io-socket-ssl perl-utils && wget -O memcached.tar.gz "https://memcached.org/files/memcached-$MEMCACHED_VERSION.tar.gz" && echo "$MEMCACHED_SHA1 memcached.tar.gz" | sha1sum -c - && mkdir -p /usr/src/memcached && tar -xzf memcached.tar.gz -C /usr/src/memcached --strip-components=1 && rm memcached.tar.gz && cd /usr/src/memcached && ./configure --build="$gnuArch" --enable-extstore --enable-sasl --enable-sasl-pwdb --enable-tls && nproc="$(nproc)" && make -j "$nproc" && make test PARALLEL="$nproc" && make install && cd / && rm -rf /usr/src/memcached && runDeps="$( scanelf --needed --nobanner --format '%n#p' --recursive /usr/local | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' )" && apk add --no-network --virtual .memcached-rundeps $runDeps && apk del --no-network .build-deps && memcached -V
# Fri, 11 Feb 2022 23:57:25 GMT
COPY file:bf641b13ea5b37f5830b299ebe9d72f194ee5d897db14faf8b133dc7a66a48ad in /usr/local/bin/
# Fri, 11 Feb 2022 23:57:26 GMT
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
# Fri, 11 Feb 2022 23:57:26 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Fri, 11 Feb 2022 23:57:26 GMT
USER memcache
# Fri, 11 Feb 2022 23:57:26 GMT
EXPOSE 11211
# Fri, 11 Feb 2022 23:57:26 GMT
CMD ["memcached"]
```
- Layers:
- `sha256:d6baca485f3d0f7c77221be60fbef5db014a5ef9d8f53db4a310c947c690d189`
Last Modified: Wed, 24 Nov 2021 20:42:15 GMT
Size: 2.6 MB (2605944 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6edeb1ae5f198c40517796be434eacf51803993a767a2c0fe96428c6d089fad2`
Last Modified: Mon, 29 Nov 2021 23:48:55 GMT
Size: 1.3 KB (1269 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:93f301719e66b550caefc1f2adbf776d1d20ad42c288f855d16644987f9614c4`
Last Modified: Mon, 29 Nov 2021 23:48:54 GMT
Size: 113.7 KB (113741 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7bccd1e890d9edaaa99814caec66d49d04d5245aca1fe8a2ac832be7e0e09c06`
Last Modified: Fri, 11 Feb 2022 23:58:42 GMT
Size: 2.1 MB (2148137 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:75931a88b924783decffcbf2e1fba6e58f393e9409dd37ffb26b7f6eecffdef2`
Last Modified: Fri, 11 Feb 2022 23:58:41 GMT
Size: 282.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c2dd123370c89ee375732af5193be621336dd45eae2b9729fb66d0376bc915cc`
Last Modified: Fri, 11 Feb 2022 23:58:41 GMT
Size: 121.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
| 55.526807 | 1,291 | 0.722766 | yue_Hant | 0.231223 |
ed76f6e5ebe63ca28e9a37c370f668b9949cebb7 | 573 | md | Markdown | docs/aron7awol/60729569.md | 3ll3d00d/beqcatalogue | ce38c769c437de382b511e14e60e131944f2ca7d | [
"MIT"
] | 1 | 2021-01-30T20:28:22.000Z | 2021-01-30T20:28:22.000Z | docs/aron7awol/60729569.md | 3ll3d00d/beqcatalogue | ce38c769c437de382b511e14e60e131944f2ca7d | [
"MIT"
] | 7 | 2020-09-14T21:51:16.000Z | 2021-04-03T14:48:01.000Z | docs/aron7awol/60729569.md | 3ll3d00d/beqcatalogue | ce38c769c437de382b511e14e60e131944f2ca7d | [
"MIT"
] | 1 | 2021-03-08T20:09:01.000Z | 2021-03-08T20:09:01.000Z | # Shrek
## DTS-X
**2001 • PG • 1h 30m • Comedy, Animation, Fantasy, Adventure • aron7awol**
It ain't easy bein' green -- especially if you're a likable (albeit smelly) ogre named Shrek. On a mission to retrieve a gorgeous princess from the clutches of a fire-breathing dragon, Shrek teams up with an unlikely compatriot -- a wisecracking donkey.
**MV Adjustment:** ++2.5 dB
[Discuss](https://www.avsforum.com/threads/bass-eq-for-filtered-movies.2995212/post-60729569) [TMDB](808)


| 33.705882 | 253 | 0.720768 | eng_Latn | 0.380008 |
ed7758bc950c0da47cc1c3ee354789a5824ebf4d | 11,301 | md | Markdown | mysql-server5.7/README.md | diceone/docker_mysql | 85dd6ae8bcf7a088abf9fa52f234a41f3423f98d | [
"MIT"
] | null | null | null | mysql-server5.7/README.md | diceone/docker_mysql | 85dd6ae8bcf7a088abf9fa52f234a41f3423f98d | [
"MIT"
] | null | null | null | mysql-server5.7/README.md | diceone/docker_mysql | 85dd6ae8bcf7a088abf9fa52f234a41f3423f98d | [
"MIT"
] | null | null | null | # How to Use the MySQL Images
## Start a MySQL Server Instance
Start a MySQL instance as follows (but make sure you also read the sections *Secure Container Startup* and *Where to Store Data* below):
docker run --name my-container-name -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql/mysql-server:tag
... where `my-container-name` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags, or look at the [full list of tags](https://registry.hub.docker.com/u/mysql/mysql-server/tags/manage/).
## Connect to MySQL from an Application in Another Docker Container
This image exposes the standard MySQL port (3306), so container linking makes the MySQL instance available to other application containers. Start your application container like this in order to link it to the MySQL container:
docker run --name app-container-name --link my-container-name:mysql -d app-that-uses-mysql
## Connect to MySQL from the MySQL Command Line Client
The following command starts another MySQL container instance and runs the `mysql` command line client against your original MySQL container, allowing you to execute SQL statements against your database:
docker run -it --link my-container-name:mysql --rm mysql/mysql-server:tag sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
... where `my-container-name` is the name of your original MySQL Server container.
More information about the MySQL command line client can be found in the MySQL reference documentation at http://dev.mysql.com/doc/refman/en/
## Container Shell Access and Viewing MySQL Log Files
The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your MySQL container:
docker exec -it my-container-name bash
The MySQL Server log is located at `/var/log/mysqld.log` inside the container, and the following command line from a shell inside the container will let you inspect it:
more /var/log/mysqld.log
# Environment Variables
When you start the MySQL image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
Most of the variables listed below are optional, but one of the variables `MYSQL_ROOT_PASSWORD`, `MYSQL_ALLOW_EMPTY_PASSWORD`, `MYSQL_RANDOM_ROOT_PASSWORD` must be given.
## `MYSQL_ROOT_PASSWORD`
This variable specifies a password that will be set for the MySQL root superuser account. In the above example, it was set to `my-secret-pw`. **NOTE:** Setting the MySQL root user password on the command line is insecure. See the section *Secure Container Startup* below for an alternative.
## `MYSQL_RANDOM_ROOT_PASSWORD`
When this variable is set to `yes`, a random password for the server's root user will be generated. The password will be printed to stdout in the container, and it can be obtained by using the command `docker logs my-container-name`.
## `MYSQL_ONETIME_PASSWORD`
This variable is optional. When set to `yes`, the root user's password will be set as expired, and must be changed before MySQL can be used normally. This is only supported by MySQL 5.6 or newer.
## `MYSQL_DATABASE`
This variable is optional. It allows you to specify the name of a database to be created on image startup. If a user/password was supplied (see below) then that user will be granted superuser access (corresponding to GRANT ALL) to this database.
## `MYSQL_USER`, `MYSQL_PASSWORD`
These variables are optional, used in conjunction to create a new user and set that user's password. This user will be granted superuser permissions (see above) for the database specified by the `MYSQL_DATABASE` variable. Both variables are required for a user to be created.
Do note that there is no need to use this mechanism to create the `root` superuser, that user gets created by default with the password set by either of the mechanisms (given or generated) discussed above.
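For example, the following command (all values are placeholders) creates the database `mydb` and a user `myuser` with access to it on first startup:

    docker run --name my-container-name -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_DATABASE=mydb -e MYSQL_USER=myuser -e MYSQL_PASSWORD=my-user-pw -d mysql/mysql-server:tag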
## `MYSQL_ALLOW_EMPTY_PASSWORD`
Set to `yes` to allow the container to be started with a blank password for the root user. **NOTE:** Setting this variable to `yes` is not recommended unless you really know what you are doing, since this will leave your MySQL instance completely unprotected, allowing anyone to gain complete superuser access.
# Notes, Tips, Gotchas
## Secure Container Startup
In many use cases, employing the `MYSQL_ROOT_PASSWORD` variable to specify the MySQL root user password on initial container startup is insecure. Instead, to keep your setup as secure as possible, we strongly recommend using the `MYSQL_RANDOM_ROOT_PASSWORD` option. To further secure your instance, we also recommend using the `MYSQL_ONETIME_PASSWORD` variable if you use MySQL version 5.6 or higher.
This is the full procedure:
docker run --name my-container-name -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_ONETIME_PASSWORD=yes -d mysql/mysql-server:tag
docker logs my-container-name
Look for the "GENERATED ROOT PASSWORD" line in the output.
If you also set the `MYSQL_ONETIME_PASSWORD` variable, you must now start a bash shell inside the container in order to set a new root password:
docker exec -it my-container-name bash
Start the MySQL command line client and log in using the randomly set root password:
mysql -u root -p
And finally, on the mysql client command line, set a new, secure root password for MySQL:
ALTER USER root IDENTIFIED BY 'my-secret-pw';
## Where to Store Data
There are basically two ways to store data used by applications that run in Docker containers. We encourage users of MySQL with Docker to familiarize themselves with the options available, including:
* Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
* Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that, for example, directory permissions and other security mechanisms on the host system are set up correctly.
The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blog and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2. Start your MySQL container like this:
```
docker run --name my-container-name -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql/mysql-server:tag
```
The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.
Note that users on systems with SELinux enabled may experience problems with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
chcon -Rt svirt_sandbox_file_t /my/own/datadir
## Usage Against an Existing Database
If you start your MySQL container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the `docker run` command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
## Port forwarding
Docker allows mapping of ports on the container to ports on the host system by using the -p option. If you start the container as follows, you can connect to the database by connecting your client to a port on the host machine, in this example port 6603:
docker run --name my-container-name -p 6603:3306 -d mysql/mysql-server
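With the port mapped this way, a MySQL client on the host system can connect through the forwarded port, for example:

    mysql -h 127.0.0.1 -P 6603 -u root -p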
## Passing options to the server
You can pass arbitrary command line options to the MySQL server by appending them to the `docker run` command:
docker run --name my-container-name -d mysql/mysql-server --option1=value --option2=value
In this case, the values of option1 and option2 will be passed directly to the server when it is started. For instance, the following command will start your container with UTF-8 as the default character set and collation for all databases in MySQL:
docker run --name my-container-name -d mysql/mysql-server --character-set-server=utf8 --collation-server=utf8_general_ci
## Using a Custom MySQL Config File
The MySQL startup configuration in these Docker images is specified in the file `/etc/my.cnf`. If you want to customize this configuration for your own purposes, you can create your alternative configuration file in a directory on the host machine and then mount this file in the appropriate location inside the MySQL container, effectively replacing the standard configuration file.
If you want to base your changes on the standard configuration file, start your MySQL container in the standard way described above, then do:
docker exec -it my-container-name cat /etc/my.cnf > /my/custom/config-file
... where `/my/custom/config-file` is the path and name of the new configuration file. Then start a new MySQL container like this:
docker run --name my-new-container-name -v /my/custom/config-file:/etc/my.cnf -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql/mysql-server:tag
This will start a new MySQL container `my-new-container-name` where the MySQL instance uses the startup options specified in `/my/custom/config-file`.
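For illustration, a minimal custom configuration file could look like the following (the option values are examples, not recommendations):

    [mysqld]
    character-set-server=utf8
    collation-server=utf8_general_ci
    max_connections=250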
Note that users on systems where SELinux is enabled may experience problems with this. The current workaround is to assign the relevant SELinux policy type to your new config file so that the container will be allowed to mount it:
chcon -Rt svirt_sandbox_file_t /my/custom/config-file
## Docker Optimized MySQL Install
These Docker images are optimized for size, which means that we have reduced the contents to what is expected to be relevant for a large majority of users who run Docker based MySQL instances. The key differences compared to a default MySQL install are:
* All binaries are stripped, non-debug only
* Included binaries are limited to:
```
/usr/bin/my_print_defaults
/usr/bin/mysql
/usr/bin/mysql_config
/usr/bin/mysql_install_db
/usr/bin/mysql_tzinfo_to_sql
/usr/bin/mysql_upgrade
/usr/bin/mysqldump
/usr/sbin/mysqld
```
| 66.087719 | 472 | 0.78347 | eng_Latn | 0.996864 |
ed779d77221aa37c14acebc004702f2fa2308287 | 3,275 | md | Markdown | sdk-api-src/content/functiondiscoveryprovider/nf-functiondiscoveryprovider-ifunctiondiscoveryproviderquery-issubcategoryquery.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/functiondiscoveryprovider/nf-functiondiscoveryprovider-ifunctiondiscoveryproviderquery-issubcategoryquery.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/functiondiscoveryprovider/nf-functiondiscoveryprovider-ifunctiondiscoveryproviderquery-issubcategoryquery.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:functiondiscoveryprovider.IFunctionDiscoveryProviderQuery.IsSubcategoryQuery
title: IFunctionDiscoveryProviderQuery::IsSubcategoryQuery (functiondiscoveryprovider.h)
description: Determines whether a query is for function instances in a specific subcategory.
helpviewer_keywords: ["IFunctionDiscoveryProviderQuery interface","IsSubcategoryQuery method","IFunctionDiscoveryProviderQuery.IsSubcategoryQuery","IFunctionDiscoveryProviderQuery::IsSubcategoryQuery","IsSubcategoryQuery","IsSubcategoryQuery method","IsSubcategoryQuery method","IFunctionDiscoveryProviderQuery interface","functiondiscoveryprovider/IFunctionDiscoveryProviderQuery::IsSubcategoryQuery","ncd.ifunctiondiscoveryproviderquery_issubcategoryquery"]
old-location: ncd\ifunctiondiscoveryproviderquery_issubcategoryquery.htm
tech.root: ncd
ms.assetid: fa262e62-2e34-4881-915d-995d66fa6841
ms.date: 12/05/2018
ms.keywords: IFunctionDiscoveryProviderQuery interface,IsSubcategoryQuery method, IFunctionDiscoveryProviderQuery.IsSubcategoryQuery, IFunctionDiscoveryProviderQuery::IsSubcategoryQuery, IsSubcategoryQuery, IsSubcategoryQuery method, IsSubcategoryQuery method,IFunctionDiscoveryProviderQuery interface, functiondiscoveryprovider/IFunctionDiscoveryProviderQuery::IsSubcategoryQuery, ncd.ifunctiondiscoveryproviderquery_issubcategoryquery
req.header: functiondiscoveryprovider.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows Vista [desktop apps only]
req.target-min-winversvr: Windows Server 2008 [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl: FunctionDiscoveryProvider.idl
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- IFunctionDiscoveryProviderQuery::IsSubcategoryQuery
- functiondiscoveryprovider/IFunctionDiscoveryProviderQuery::IsSubcategoryQuery
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- FunctionDiscoveryProvider.h
api_name:
- IFunctionDiscoveryProviderQuery.IsSubcategoryQuery
---
# IFunctionDiscoveryProviderQuery::IsSubcategoryQuery
## -description
<p class="CCE_Message">[Function Discovery is available for use in the operating systems specified in the Requirements section. It may be altered or unavailable in subsequent versions.]
Determines whether a query is for function instances in a specific subcategory.
## -parameters
### -param pisSubcategoryQuery [out]
If this parameter is <b>TRUE</b>, there is a subcategory constraint in the query constraints collection.
### -param ppszConstraintValue [out]
The value of the subcategory constraint.
## -returns
If this method succeeds, it returns <b>S_OK</b>. Otherwise, it returns an <b>HRESULT</b> error code.
## -remarks
If the provider does not support subcategories, the provider should return an <a href="/windows/desktop/api/functiondiscoveryapi/nn-functiondiscoveryapi-ifunctioninstancecollection">IFunctionInstanceCollection</a> with 0 instances in response to the query.
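As an illustrative sketch only (the `pQuery` variable and the surrounding control flow are assumptions, not part of this reference), a provider implementation might call the method as follows:

```cpp
// Hypothetical fragment: pQuery is assumed to be a valid
// IFunctionDiscoveryProviderQuery* supplied by Function Discovery.
BOOL fIsSubcategoryQuery = FALSE;
PCWSTR pszConstraintValue = NULL;
HRESULT hr = pQuery->IsSubcategoryQuery(&fIsSubcategoryQuery, &pszConstraintValue);
if (SUCCEEDED(hr) && fIsSubcategoryQuery)
{
    // Restrict the returned function instances to the subcategory
    // named by pszConstraintValue; otherwise, query all instances.
}
```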
## -see-also
<a href="/windows/desktop/api/functiondiscoveryprovider/nn-functiondiscoveryprovider-ifunctiondiscoveryproviderquery">IFunctionDiscoveryProviderQuery</a> | 42.532468 | 459 | 0.838473 | yue_Hant | 0.503277 |
ed77d018123af7f736f4b4e123ed886edacf9e92 | 3,546 | md | Markdown | README.md | Stantheman/go-sdl2 | 1caac2b5fba5f3c6b3fcb71135e367b4c1815872 | [
"BSD-3-Clause"
] | null | null | null | README.md | Stantheman/go-sdl2 | 1caac2b5fba5f3c6b3fcb71135e367b4c1815872 | [
"BSD-3-Clause"
] | null | null | null | README.md | Stantheman/go-sdl2 | 1caac2b5fba5f3c6b3fcb71135e367b4c1815872 | [
"BSD-3-Clause"
] | null | null | null | SDL2 binding for Go
===================
go-sdl2 is SDL2 wrapped for Go users. It enables interoperability between Go and the SDL2 library, which is written in C. That means the original SDL2 installation is required for this to work.
Requirements
============
* [SDL2](http://libsdl.org/download-2.0.php)
* [SDL2_mixer (optional)](http://www.libsdl.org/projects/SDL_mixer/)
* [SDL2_image (optional)](http://www.libsdl.org/projects/SDL_image/)
* [SDL2_ttf (optional)](http://www.libsdl.org/projects/SDL_ttf/)
Below are some commands that can be used to install the required packages on
some Linux distributions. Somewhat older releases, such as Ubuntu 13.10, may
also work, but can be missing an optional package; for example,
_libsdl2-ttf-dev_ is absent on Ubuntu 13.10 and only available from Ubuntu 14.04 onwards.
On __Ubuntu 14.04 and above__, type:
`apt-get install libsdl2{,-mixer,-image,-ttf}-dev`
_Note: Ubuntu 14.04 currently ships a broken header file in the SDL2 package that prevents programs from compiling against it. You will need to either patch the header file or install SDL2 from source._
On __Fedora 20 and above__, type:
`yum install SDL2{,_mixer,_image,_ttf}-devel`
On __Arch Linux__, type:
`pacman -S sdl2{,_mixer,_image,_ttf}`
On __Mac OS X__, install SDL2 via [Homebrew](http://brew.sh) like so:
`brew install sdl2{,_image,_ttf,_mixer}`
Installation
============
To get the bindings, type:
`go get -v github.com/veandco/go-sdl2/sdl`
`go get -v github.com/veandco/go-sdl2/sdl_mixer`
`go get -v github.com/veandco/go-sdl2/sdl_image`
`go get -v github.com/veandco/go-sdl2/sdl_ttf`
or type this if you use Bash terminal:
`go get -v github.com/veandco/go-sdl2/sdl{,_mixer,_image,_ttf}`
__Note__: If you didn't use the previous commands or 'go install', you will experience long
compilation times because Go doesn't keep the built binaries unless you install them.
Example
=======
    package main

    import "github.com/veandco/go-sdl2/sdl"

    func main() {
        // Create an 800x600 window at an OS-chosen position.
        window, err := sdl.CreateWindow("test", sdl.WINDOWPOS_UNDEFINED, sdl.WINDOWPOS_UNDEFINED,
            800, 600, sdl.WINDOW_SHOWN)
        if err != nil {
            panic(err)
        }

        // Fill a 200x200 rectangle on the window's surface with opaque red (ARGB).
        surface := window.GetSurface()
        rect := sdl.Rect{0, 0, 200, 200}
        surface.FillRect(&rect, 0xffff0000)
        window.UpdateSurface()

        // Keep the window on screen briefly, then release its resources.
        sdl.Delay(1000)
        window.Destroy()
    }
For more complete examples, see inside the _examples_ folder.
Documentation
=============
For now, take a look at http://godoc.org/github.com/veandco/go-sdl2/sdl. A full-featured website will be created once we hit a stable point.
Notes
=====
A standalone Go SDL2 library _is_ being considered (read: figured out). That means users should be able to just go get go-sdl2 and compile it without the original C library. That could mean faster build times, more 'idiomatic' Go code, and hopefully more people interested in using and contributing to go-sdl2!
Contributors
============
* [Jacky Boen](https://github.com/jackyb)
* [HardWareGuy](https://github.com/HardWareGuy)
* [akovaski](https://github.com/akovaski)
* [Jeromy Johnson](https://github.com/whyrusleeping)
* [Cai Lei](https://github.com/ccll)
* [krux02](https://github.com/krux02)
* [marcusva](https://github.com/marcusva)
* [Tom Murray](https://github.com/TomMurray)
* [Ian Davis](https://github.com/iand)
* [hschendel](https://github.com/hschendel)
* [Bastien Dejean](https://github.com/baskerville)
* [Pirmin Fix](https://github.com/PirminFix)
* [Robert Lillack](https://github.com/roblillack)
License
=======
Go-SDL2 is BSD 3-clause licensed.
| 35.818182 | 310 | 0.727016 | eng_Latn | 0.825785 |
ed77e29f4d8768ede507d1de89735e2bb8d9b306 | 68,421 | md | Markdown | docs/t-sql/statements/create-procedure-transact-sql.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/statements/create-procedure-transact-sql.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/statements/create-procedure-transact-sql.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: CREATE PROCEDURE (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 09/06/2017
ms.prod: sql
ms.prod_service: database-engine, sql-database, sql-data-warehouse, pdw
ms.reviewer: ''
ms.suite: sql
ms.technology: t-sql
ms.tgt_pltfrm: ''
ms.topic: language-reference
f1_keywords:
- PROC
- PROCEDURE
- CREATE PROCEDURE
- CREATE_PROC_TSQL
- PROCEDURE_TSQL
- CREATE PROC
- PROC_TSQL
- CREATE_PROCEDURE_TSQL
dev_langs:
- TSQL
helpviewer_keywords:
- parameters [SQL Server], stored procedures
- table-valued parameters
- SET statement, stored procedures
- stored procedures [SQL Server], creating
- wildcard parameters [SQL Server]
- maximum size of stored procedures
- WITH RECOMPILE clause
- common language runtime [SQL Server], stored procedures
- CREATE PROCEDURE statement
- local temporary procedures [SQL Server]
- WITH ENCRYPTION clause
- output parameters [SQL Server]
- nesting stored procedures
- user-defined stored procedures [SQL Server]
- system stored procedures [SQL Server], creating
- deferred name resolution, stored procedures
- referenced tables [SQL Server]
- global temporary procedures [SQL Server]
- cursor data type
- temporary stored procedures [SQL Server]
- size [SQL Server], stored procedures
- automatic stored procedure execution
- creating stored procedures
ms.assetid: afe3d86d-c9ab-44e4-b74d-4e3dbd9cc58c
caps.latest.revision: 180
author: CarlRabeler
ms.author: carlrab
manager: craigg
monikerRange: '>=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current'
ms.openlocfilehash: ac2db40895cfc8690151b84beacb12f2fb8e3fac
ms.sourcegitcommit: 4183dc18999ad243c40c907ce736f0b7b7f98235
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 08/27/2018
ms.locfileid: "43108179"
---
# <a name="create-procedure-transact-sql"></a>CREATE PROCEDURE (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-all-md](../../includes/tsql-appliesto-ss2008-all-md.md)]
Creates a [!INCLUDE[tsql](../../includes/tsql-md.md)] or common language runtime (CLR) stored procedure in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)], Azure SQL Data Warehouse, and Parallel Data Warehouse. Stored procedures are similar to procedures in other programming languages in that they can:
- Accept input parameters and return multiple values in the form of output parameters to the calling procedure or batch.
- Contain programming statements that perform operations in the database, including calling other procedures.
- Return a status value to a calling procedure or batch to indicate success or failure (and the reason for failure).
Use this statement to create a permanent procedure in the current database or a temporary procedure in the **tempdb** database.
> [!NOTE]
> This topic discusses the integration of .NET Framework CLR into SQL Server. CLR integration does not apply to Azure [!INCLUDE[ssSDS](../../includes/sssds-md.md)].
Jump to [Simple Examples](#Simple) to skip the details of the syntax and get to a quick example of a basic stored procedure.
[Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Sintassi
```sql
-- Transact-SQL Syntax for Stored Procedures in SQL Server and Azure SQL Database
CREATE [ OR ALTER ] { PROC | PROCEDURE }
[schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ VARYING ] [ = default ] [ OUT | OUTPUT | [READONLY]
] [ ,...n ]
[ WITH <procedure_option> [ ,...n ] ]
[ FOR REPLICATION ]
AS { [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }
[;]
<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
```
```sql
-- Transact-SQL Syntax for CLR Stored Procedures
CREATE [ OR ALTER ] { PROC | PROCEDURE }
[schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ = default ] [ OUT | OUTPUT ] [READONLY]
] [ ,...n ]
[ WITH EXECUTE AS Clause ]
AS { EXTERNAL NAME assembly_name.class_name.method_name }
[;]
```
```sql
-- Transact-SQL Syntax for Natively Compiled Stored Procedures
CREATE [ OR ALTER ] { PROC | PROCEDURE } [schema_name.] procedure_name
[ { @parameter data_type } [ NULL | NOT NULL ] [ = default ]
[ OUT | OUTPUT ] [READONLY]
] [ ,... n ]
WITH NATIVE_COMPILATION, SCHEMABINDING [ , EXECUTE AS clause ]
AS
{
BEGIN ATOMIC WITH (set_option [ ,... n ] )
sql_statement [;] [ ... n ]
[ END ]
}
[;]
<set_option> ::=
LANGUAGE = [ N ] 'language'
| TRANSACTION ISOLATION LEVEL = { SNAPSHOT | REPEATABLE READ | SERIALIZABLE }
| [ DATEFIRST = number ]
| [ DATEFORMAT = format ]
| [ DELAYED_DURABILITY = { OFF | ON } ]
```
```sql
-- Transact-SQL Syntax for Stored Procedures in Azure SQL Data Warehouse
-- and Parallel Data Warehouse
-- Create a stored procedure
CREATE { PROC | PROCEDURE } [ schema_name.] procedure_name
    [ { @parameter data_type } [ OUT | OUTPUT ] ] [ ,...n ]
AS { [ BEGIN ] sql_statement [;][ ,...n ] [ END ] }
[;]
```
## <a name="arguments"></a>Argomenti
OR ALTER
**Si applica a**: Azure [!INCLUDE[ssSDS](../../includes/sssds-md.md)], [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] (a partire da [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] SP1).
Modifica la procedura, se esiste già.
*schema_name*
Nome dello schema a cui appartiene la procedura. Le procedure sono associate a schema. Se durante la creazione della procedura non viene specificato un nome dello schema, viene assegnato automaticamente lo schema predefinito dell'utente che sta creando la procedura.
*procedure_name*
Nome della procedura. I nomi di procedura devono essere conformi alle regole per gli [identificatori](../../relational-databases/databases/database-identifiers.md) e devono essere univoci all'interno dello schema.
Evitare l'uso del prefisso **sp_** per la denominazione delle procedure. Questo prefisso viene usato da [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] per definire le procedure di sistema. L'utilizzo del prefisso può comportare l'interruzione del codice dell'applicazione, se è presente una procedura di sistema con lo stesso nome.
Le stored procedure temporanee locali o globali possono essere create usando un simbolo di cancelletto (#) prima di *procedure_name* (*#procedure_name*) per le stored procedure temporanee locali e due simboli di cancelletto per quelle globali (*##procedure_name*). Una stored procedure temporanea locale è visibile solo alla connessione da cui è stata creata e, alla chiusura di quest'ultima, viene eliminata. Una stored procedure temporanea globale è disponibile per tutte le connessioni e viene eliminata al termine dell'ultima sessione che la usano. Non è possibile specificare nomi temporanei per le procedure CLR.
Il nome completo di una procedura o di una stored procedure temporanea globale, inclusi i simboli ##, non deve superare i 128 caratteri. Il nome completo di una stored procedure temporanea locale, incluso il simbolo #, non deve superare i 116 caratteri.
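For illustration, a minimal sketch of a local temporary procedure (the name and body are arbitrary examples):
```sql
-- Visible only to the current connection; dropped when it closes.
CREATE PROCEDURE #usp_TempDemo
AS
SET NOCOUNT ON;
SELECT GETDATE() AS CurrentTime;
GO
EXEC #usp_TempDemo;
```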
**;** *number*
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
An optional integer that is used to group procedures of the same name. These grouped procedures can be dropped together by using one DROP PROCEDURE statement.
> [!NOTE]
> [!INCLUDE[ssNoteDepFutureAvoid](../../includes/ssnotedepfutureavoid-md.md)]
Numbered procedures cannot use the **xml** or CLR user-defined types and cannot be used in a plan guide.
**@** *parameter*
A parameter declared in the procedure. Specify a parameter name by using the at sign (**@**) as the first character. The parameter name must comply with the rules for [identifiers](../../relational-databases/databases/database-identifiers.md). Parameters are local to the procedure; the same parameter names can be used in other procedures.
One or more parameters can be declared; the maximum is 2,100. The value of each declared parameter must be supplied by the user when the procedure is called, unless a default value for the parameter is defined or the value is set to equal another parameter. If a procedure contains [table-valued parameters](../../relational-databases/tables/use-table-valued-parameters-database-engine.md) and the parameter is missing in the call, an empty table is passed in. Parameters can take the place only of constant expressions; they cannot be used instead of table names, column names, or the names of other database objects. For more information, see [EXECUTE (Transact-SQL)](../../t-sql/language-elements/execute-transact-sql.md).
Parameters cannot be declared if FOR REPLICATION is specified.
[ *type_schema_name***.** ] *data_type*
The data type of the parameter, and the schema to which the data type belongs.
**Guidelines for [!INCLUDE[tsql](../../includes/tsql-md.md)] procedures**:
- All [!INCLUDE[tsql](../../includes/tsql-md.md)] data types can be used as parameters.
- You can use the user-defined table type to create table-valued parameters. Table-valued parameters can only be INPUT parameters and must be accompanied by the READONLY keyword. For more information, see [Use Table-Valued Parameters (Database Engine)](../../relational-databases/tables/use-table-valued-parameters-database-engine.md)
- **cursor** data types can only be OUTPUT parameters and must be accompanied by the VARYING keyword.
**Guidelines for CLR procedures**:
- All of the native [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] data types that have an equivalent in managed code can be used as parameters. For more information about the correspondence between CLR types and [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] system data types, see [Mapping CLR Parameter Data](../../relational-databases/clr-integration-database-objects-types-net-framework/mapping-clr-parameter-data.md). For more information about [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] system data types and their syntax, see [Data Types (Transact-SQL)](../../t-sql/data-types/data-types-transact-sql.md).
- Table-valued or **cursor** data types cannot be used as parameters.
- If the data type of the parameter is a CLR user-defined type, you must have EXECUTE permission on the type.
VARYING
Specifies the result set supported as an output parameter. This parameter is dynamically constructed by the procedure and its contents may vary. Applies only to **cursor** parameters. This option is not valid for CLR procedures.
*default*
A default value for a parameter. If a default value is defined for a parameter, the procedure can be executed without specifying a value for that parameter. The default value must be a constant or it can be NULL. The constant value can be in the form of a wildcard, making it possible to use the LIKE keyword when passing the parameter into the procedure.
Default values are recorded in the **sys.parameters.default** column only for CLR procedures. That column is NULL for [!INCLUDE[tsql](../../includes/tsql-md.md)] procedure parameters.
OUT | OUTPUT
Indicates that the parameter is an output parameter. Use OUTPUT parameters to return values to the caller of the procedure. **text**, **ntext**, and **image** parameters cannot be used as OUTPUT parameters, unless the procedure is a CLR procedure. An output parameter can be a cursor placeholder, unless the procedure is a CLR procedure. A table-valued data type cannot be specified as an OUTPUT parameter of a procedure.
READONLY
Indicates that the parameter cannot be updated or modified within the body of the procedure. If the parameter type is a table-valued type, READONLY must be specified.
RECOMPILE
Indicates that the [!INCLUDE[ssDE](../../includes/ssde-md.md)] does not cache a query plan for this procedure, forcing it to be compiled each time it is executed. For more information about the reasons for forcing a recompile, see [Recompile a Stored Procedure](../../relational-databases/stored-procedures/recompile-a-stored-procedure.md). This option cannot be used when FOR REPLICATION is specified or for CLR procedures.
To instruct the [!INCLUDE[ssDE](../../includes/ssde-md.md)] to discard query plans for individual queries inside a procedure, use the RECOMPILE query hint in the definition of the query. For more information, see [Query Hints (Transact-SQL)](../../t-sql/queries/hints-transact-sql-query.md).
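As a minimal sketch (the procedure and predicate are illustrative; the table is from the sample database):
```sql
-- Recompiled on every execution; no plan is cached for this procedure.
CREATE PROCEDURE dbo.usp_VolatileStats @ProductID int
WITH RECOMPILE
AS
SELECT COUNT(*) AS OrderCount
FROM Sales.SalesOrderDetail
WHERE ProductID = @ProductID;
```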
ENCRYPTION
**Applies to**: SQL Server ([!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)]), [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Indicates that [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] converts the original text of the CREATE PROCEDURE statement to an obfuscated format. The output of the obfuscation is not directly visible in any of the catalog views in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Users who have no access to system tables or database files cannot retrieve the obfuscated text. However, the text is available to privileged users who can either access system tables over the [DAC port](../../database-engine/configure-windows/diagnostic-connection-for-database-administrators.md) or directly access database files. Also, users who can attach a debugger to the server process can retrieve the decrypted procedure from memory at runtime. For more information about accessing system metadata, see [Metadata Visibility Configuration](../../relational-databases/security/metadata-visibility-configuration.md).
This option is not valid for CLR procedures.
Procedures created with this option cannot be published as part of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] replication.
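A minimal sketch (the names and body are placeholders):
```sql
-- The stored definition of this procedure is obfuscated in the catalog views.
CREATE PROCEDURE dbo.usp_SecretLogic
WITH ENCRYPTION
AS
SELECT 'hidden definition' AS Result;
```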
EXECUTE AS *clause*
Specifies the security context under which to execute the procedure.
For natively compiled stored procedures, starting with [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] and in [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)], there are no limitations on the EXECUTE AS clause. In [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] the SELF, OWNER, and *'user_name'* clauses are supported with natively compiled stored procedures.
For more information, see [EXECUTE AS Clause (Transact-SQL)](../../t-sql/statements/execute-as-clause-transact-sql.md).
FOR REPLICATION
**Applies to**: SQL Server ([!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)]), [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Specifies that the procedure is created for replication. Consequently, it cannot be executed on the Subscriber. A procedure created with the FOR REPLICATION option is used as a procedure filter and is executed only during replication. Parameters cannot be declared if FOR REPLICATION is specified. FOR REPLICATION cannot be specified for CLR procedures. The RECOMPILE option is ignored for procedures created with FOR REPLICATION.
A `FOR REPLICATION` procedure has an object type **RF** in **sys.objects** and **sys.procedures**.
{ [ BEGIN ] *sql_statement* [;] [ ...*n* ] [ END ] }
One or more [!INCLUDE[tsql](../../includes/tsql-md.md)] statements comprising the body of the procedure. You can use the optional BEGIN and END keywords to enclose the statements. For information, see the Best Practices, General Remarks, and Limitations and Restrictions sections that follow.
EXTERNAL NAME *assembly_name ***.*** class_name ***.*** method_name*
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], [!INCLUDE[sqldbesa](../../includes/sqldbesa-md.md)].
Specifies the method of a [!INCLUDE[dnprdnshort](../../includes/dnprdnshort-md.md)] assembly for a CLR procedure to reference. *class_name* must be a valid [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] identifier and must exist as a class in the assembly. If the class has a namespace-qualified name that uses a period (**.**) to separate namespace parts, the class name must be delimited by using brackets (**[]**) or quotation marks (**""**). The specified method must be a static method of the class.
By default, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] cannot execute CLR code. You can create, modify, and drop database objects that reference common language runtime modules; however, you cannot execute these references in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] until you enable the [clr enabled option](../../database-engine/configure-windows/clr-enabled-server-configuration-option.md). To enable this option, use [sp_configure](../../relational-databases/system-stored-procedures/sp-configure-transact-sql.md).
> [!NOTE]
> CLR procedures are not supported in a contained database.
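Enabling CLR execution with sp_configure looks like this:
```sql
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```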
ATOMIC WITH
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Indicates atomic stored procedure execution. Changes are either committed or all of the changes rolled back by throwing an exception. The ATOMIC WITH block is required for natively compiled stored procedures.
If the procedure RETURNs (explicitly through the RETURN statement, or implicitly by completing execution), the work performed by the procedure is committed. If the procedure THROWs, the work performed by the procedure is rolled back.
XACT_ABORT is ON by default inside an atomic block and cannot be changed. XACT_ABORT specifies whether [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] automatically rolls back the current transaction when a [!INCLUDE[tsql](../../includes/tsql-md.md)] statement raises a run-time error.
The following SET options are always ON in the ATOMIC block; they cannot be changed.
- CONCAT_NULL_YIELDS_NULL
- QUOTED_IDENTIFIER, ARITHABORT
- NOCOUNT
- ANSI_NULLS
- ANSI_WARNINGS
SET options cannot be changed inside ATOMIC blocks. The SET options of the user session are not used in the scope of natively compiled stored procedures. These options are fixed at compile time.
BEGIN, ROLLBACK, and COMMIT operations cannot be used inside an atomic block.
There is one ATOMIC block per natively compiled stored procedure, at the outer scope of the procedure. The blocks cannot be nested. For more information about atomic blocks, see [Natively Compiled Stored Procedures](../../relational-databases/in-memory-oltp/natively-compiled-stored-procedures.md).
**NULL** | NOT NULL
Determines whether null values are allowed in a parameter. NULL is the default.
NATIVE_COMPILATION
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Indicates that the procedure is natively compiled. NATIVE_COMPILATION, SCHEMABINDING, and EXECUTE AS can be specified in any order. For more information, see [Natively Compiled Stored Procedures](../../relational-databases/in-memory-oltp/natively-compiled-stored-procedures.md).
SCHEMABINDING
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Ensures that tables that are referenced by a procedure cannot be dropped or altered. SCHEMABINDING is required in natively compiled stored procedures. For more information, see [Natively Compiled Stored Procedures](../../relational-databases/in-memory-oltp/natively-compiled-stored-procedures.md). The SCHEMABINDING restrictions are the same as they are for user-defined functions. For more information, see the SCHEMABINDING section in [CREATE FUNCTION (Transact-SQL)](../../t-sql/statements/create-function-transact-sql.md).
LANGUAGE = [N] 'language'
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Equivalent to the [SET LANGUAGE (Transact-SQL)](../../t-sql/statements/set-language-transact-sql.md) session option. LANGUAGE = [N] 'language' is required.
TRANSACTION ISOLATION LEVEL
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Required for natively compiled stored procedures. Specifies the transaction isolation level for the stored procedure. The options are as follows:
For more information about these options, see [SET TRANSACTION ISOLATION LEVEL (Transact-SQL)](../../t-sql/statements/set-transaction-isolation-level-transact-sql.md).
REPEATABLE READ
Specifies that statements cannot read data that has been modified but not yet committed by other transactions. If another transaction modifies data that has been read by the current transaction, the current transaction fails.
SERIALIZABLE
Specifies the following:
- Statements cannot read data that has been modified but not yet committed by other transactions.
- If another transaction modifies data that has been read by the current transaction, the current transaction fails.
- If another transaction inserts new rows with key values that would fall in the range of keys read by any statements in the current transaction, the current transaction fails.
SNAPSHOT
Specifies that data read by any statement in a transaction is the transactionally consistent version of the data that existed at the start of the transaction.
DATEFIRST = *number*
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Specifies the first day of the week, as a number from 1 through 7. DATEFIRST is optional. If it is not specified, the setting is inferred from the specified language.
For more information, see [SET DATEFIRST (Transact-SQL)](../../t-sql/statements/set-datefirst-transact-sql.md).
DATEFORMAT = *format*
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Specifies the order of the month, day, and year date parts for interpreting date, smalldatetime, datetime, datetime2, and datetimeoffset character strings. DATEFORMAT is optional. If it is not specified, the setting is inferred from the specified language.
For more information, see [SET DATEFORMAT (Transact-SQL)](../../t-sql/statements/set-dateformat-transact-sql.md).
DELAYED_DURABILITY = { OFF | ON }
**Applies to**: [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] and [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] transaction commits can be either fully durable, the SQL Server default, or delayed durable.
For more information, see [Control Transaction Durability](../../relational-databases/logs/control-transaction-durability.md).
## <a name="Simple"></a> Semplici esempi
Per iniziare, ecco due rapidi esempi:
`SELECT DB_NAME() AS ThisDB;` restituisce il nome del database corrente.
È possibile eseguire il wrapping di tale istruzione in una stored procedure, ad esempio:
```sql
CREATE PROC What_DB_is_this
AS
SELECT DB_NAME() AS ThisDB;
```
Call the stored procedure with the statement: `EXEC What_DB_is_this;`
Slightly more complicated is to provide an input parameter to make the procedure more flexible. For example:
```sql
CREATE PROC What_DB_is_that @ID int
AS
SELECT DB_NAME(@ID) AS ThatDB;
```
Provide a database ID number when you call the procedure. For example, `EXEC What_DB_is_that 2;` returns `tempdb`.
See [Examples](#Examples) towards the end of this topic for many more examples.
## <a name="best-practices"></a>Procedure consigliate
Sebbene non siano elencate tutte le procedure consigliate, questi suggerimenti possono migliorare le prestazioni della procedura.
- Usare l'istruzione SET NOCOUNT ON come prima istruzione nel corpo della procedura, ovvero posizionarla subito dopo la parola chiave AS. In questo modo vengono disabilitati i messaggi restituiti al client da [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] dopo l'esecuzione delle istruzioni SELECT, INSERT, UPDATE, MERGE e DELETE. Le prestazioni generali del database e dell'applicazione vengono migliorate eliminando questo overhead di rete. Per informazioni, vedere [SET NOCOUNT (Transact-SQL)](../../t-sql/statements/set-nocount-transact-sql.md).
- Usare i nomi degli schemi quando si creano oggetti di database nella procedura o vi si fa riferimento. La risoluzione dei nomi degli oggetti da parte del [!INCLUDE[ssDE](../../includes/ssde-md.md)] richiede un tempo di elaborazione minore se la ricerca non deve essere effettuata in più schemi. È anche possibile evitare problemi di autorizzazione e accesso causati dall'assegnazione dello schema predefinito di un utente quando gli oggetti vengono creati senza specificare lo schema.
- Evitare l'esecuzione del wrapping di funzioni attorno alle colonne specificate nelle clausole WHERE e JOIN. In tal modo le colonne vengono rese non deterministiche e si evita l'utilizzo di indici in Query Processor.
- Evitare l'utilizzo di funzioni scalari nelle istruzioni SELECT che restituiscono molte righe di dati. Poiché la funzione scalare deve essere applicata a ogni riga, il comportamento risultante assomiglia all'elaborazione basata su righe e ciò comporta un peggioramento delle prestazioni.
- Evitare l'uso di `SELECT *`. Specificare invece i nomi delle colonne necessarie. In questo modo è possibile evitare alcuni errori del [!INCLUDE[ssDE](../../includes/ssde-md.md)] che causano l'arresto dell'esecuzione della procedura. Ad esempio, un'istruzione `SELECT *` che restituisce i dati di una tabella costituita da 12 colonne e, successivamente, inserisce tali dati in una tabella temporanea di 12 colonne viene eseguita correttamente finché non viene modificato il numero o l'ordine delle colonne in una delle tabelle.
- Evitare l'elaborazione o la restituzione di troppi dati. Non appena possibile, restringere i risultati nel codice della procedura in modo che le operazioni successive effettuate dalla procedura vengano eseguite usando il set di dati più piccolo possibile. Inviare solo i dati essenziali all'applicazione client. L'operazione è più efficace dell'invio di dati aggiuntivi nella rete, nonché dell'imposizione all'applicazione client di usare set di risultati inutilmente grandi.
- Usare le transazioni esplicite tramite BEGIN/COMMIT TRANSACTION mantenendole più brevi possibili. Transazioni lunghe implicano un blocco dei record più lungo e un rischio maggiore di deadlock.
- Per la gestione degli errori all'interno di una procedura usare la funzionalità TRY…CATCH di [!INCLUDE[tsql](../../includes/tsql-md.md)] che consente di incapsulare un blocco intero di istruzioni [!INCLUDE[tsql](../../includes/tsql-md.md)]. In questo modo vengono garantiti un minor overhead delle prestazioni e una segnalazione errori più precisa con un utilizzo inferiore della programmazione.
- Usare la parola chiave DEFAULT in tutte le colonne della tabella a cui viene fatto riferimento dalle istruzioni [!INCLUDE[tsql](../../includes/tsql-md.md)] CREATE TABLE o ALTER TABLE presenti nel corpo della procedura. In questo modo è possibile evitare di passare NULL alle colonne che non accettano valori Null.
- Usare NULL o NOT NULL per ogni colonna di una tabella temporanea. Le opzioni ANSI_DFLT_ON e ANSI_DFLT_OFF consentono di controllare la modalità di assegnazione dell'attributo NULL o NOT NULL alle colonne da parte del [!INCLUDE[ssDE](../../includes/ssde-md.md)] quando tale attributo non è specificato in un'istruzione CREATE TABLE o ALTER TABLE. Se in una connessione viene eseguita una procedura con opzioni impostate in modo diverso rispetto alla connessione in cui la procedura è stata creata, è possibile che il supporto di valori Null e il funzionamento delle colonne della tabella creata per la seconda connessione siano diversi. Se l'attributo NULL o NOT NULL viene dichiarato in modo esplicito per ogni colonna, le tabelle temporanee vengono create con lo stesso supporto di valori Null per tutte le connessioni in cui viene eseguita la procedura.
- Usare le istruzioni di modifica che consentono di convertire i valori Null e in cui è inclusa la logica che permette di eliminare le righe con valori Null dalle query. Tenere presente che in [!INCLUDE[tsql](../../includes/tsql-md.md)] NULL non è un valore vuoto o "Nothing". Si tratta di un segnaposto per un valore sconosciuto e può causare un comportamento imprevisto, soprattutto quando si eseguono query per set di risultati o si usano le funzioni di aggregazione.
- Usare l'operatore UNION ALL invece dell'operatore UNION oppure OR, a meno che non siano necessari valori distinct. L'operatore UNION ALL richiede un minor overhead di elaborazione poiché i duplicati non vengono esclusi dal set di risultati.
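For illustration, a minimal sketch that combines several of these practices (the procedure name and logic are placeholders; the table is from the sample database):
```sql
CREATE PROCEDURE dbo.usp_UpdateListPrice
    @ProductID int,
    @NewPrice money
AS
SET NOCOUNT ON;  -- Suppress row-count messages sent to the client.
BEGIN TRY
    BEGIN TRANSACTION;  -- Keep the transaction as short as possible.
    UPDATE Production.Product
    SET ListPrice = @NewPrice
    WHERE ProductID = @ProductID;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- Rethrow the original error (SQL Server 2012 and later).
END CATCH;
GO
```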
## <a name="general-remarks"></a>Osservazioni generali
Non è prevista una dimensione massima predefinita per una procedura.
Le variabili specificate nella procedura possono essere definite dall'utente o possono essere variabili di sistema, ad esempio @@SPID.
Alla prima esecuzione, la procedura viene compilata in modo da determinare un piano di accesso ottimale per il recupero dei dati. Se il piano generato rimane archiviato nell'apposita cache del [!INCLUDE[ssDE](../../includes/ssde-md.md)], può essere riutilizzato nelle successive esecuzioni della procedura.
È possibile eseguire automaticamente una o più procedure all'avvio di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Le procedure devono essere create dall'amministratore del sistema nel database **master** ed eseguite dal ruolo predefinito del server **sysadmin** come processo in background. In queste procedure non è possibile usare parametri di input o output. Per altre informazioni, vedere [Eseguire una stored procedure](../../relational-databases/stored-procedures/execute-a-stored-procedure.md).
Le procedure vengono nidificate quando una procedura consente la chiamata di un'altra o l'esecuzione di codice gestito facendo riferimento a una routine, un tipo o una funzione di aggregazione CLR. È possibile nidificare fino a 32 livelli di procedure e riferimenti a codice gestito. Il livello di nidificazione viene incrementato di un'unità quando viene avviata l'esecuzione della procedura o del riferimento al codice gestito chiamato e viene ridotto di un'unità quando ne viene completata l'esecuzione. I metodi richiamati all'interno del codice gestito non vengono inclusi nel limite del livello di nidificazione. Tuttavia, quando tramite una stored procedure CLR vengono eseguite operazioni di accesso ai dati tramite il provider gestito SQL Server, nel passaggio dal codice gestito a SQL viene aggiunto un ulteriore livello di nidificazione.
Il tentativo di superare il livello di nidificazione massimo causa l'esito negativo dell'intera catena di chiamata. Per restituire il livello di annidamento dell'esecuzione della stored procedure corrente, è possibile usare la funzione @@NESTLEVEL.
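A quick sketch of how @@NESTLEVEL changes as procedures call each other (procedure names are arbitrary):
```sql
CREATE PROCEDURE dbo.usp_Inner AS SELECT @@NESTLEVEL AS InnerLevel;  -- Returns 2 when called from usp_Outer.
GO
CREATE PROCEDURE dbo.usp_Outer AS
BEGIN
    SELECT @@NESTLEVEL AS OuterLevel;  -- Returns 1.
    EXEC dbo.usp_Inner;
END;
GO
EXEC dbo.usp_Outer;
```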
## <a name="interoperability"></a>Interoperabilità
Quando viene creata o modificata una procedura [!INCLUDE[ssDE](../../includes/ssde-md.md)], nel [!INCLUDE[tsql](../../includes/tsql-md.md)] vengono salvate le impostazioni di entrambe le opzioni SET QUOTED_IDENTIFIER e SET ANSI_NULLS. Queste impostazioni originali vengono usate quando viene eseguita la procedura. Pertanto, le impostazioni di sessione del client per le opzioni SET QUOTED_IDENTIFIER e SET ANSI_NULLS vengono ignorate durante l'esecuzione della procedura.
Altre opzioni SET, ad esempio SET ARITHABORT, SET ANSI_WARNINGS o SET ANSI_PADDINGS, non vengono salvate quando viene creata o modificata una procedura. Se la logica della procedura dipende da una particolare impostazione, includere un'istruzione SET all'inizio della procedura per garantire l'utilizzo dell'impostazione adeguata. Quando un'istruzione SET viene eseguita da una procedura, l'impostazione rimane attiva solo fino al termine dell'esecuzione della procedura. L'impostazione viene quindi ripristinata al valore assegnato alla procedura quando è stata chiamata. In tal modo nei singoli client è possibile impostare le opzioni desiderate senza influire sulla logica della procedura.
In una procedura è possibile specificare qualsiasi istruzione SET, ad eccezione di SET SHOWPLAN_TEXT e SET SHOWPLAN_ALL. Queste devono essere le uniche istruzioni in un batch. L'opzione SET scelta rimane attiva durante l'esecuzione della procedura, dopodiché viene ripristinata l'impostazione precedente.
> [!NOTE]
> SET_ANSI_WARNINGS non viene applicata quando vengono passati parametri in una procedura, in una funzione definita dall'utente oppure in caso di dichiarazione e impostazione delle variabili in un'istruzione batch. Se, ad esempio, una variabile viene definita come **char**(3) e quindi impostata su un valore maggiore di tre caratteri, i dati vengono troncati alla dimensione definita e l'istruzione INSERT o UPDATE ha esito positivo.
## <a name="limitations-and-restrictions"></a>Limitazioni e restrizioni
L'istruzione CREATE PROCEDURE non può essere usata in combinazione con altre istruzioni [!INCLUDE[tsql](../../includes/tsql-md.md)] all'interno di un singolo batch.
Le istruzioni seguenti non possono essere usate in un qualsiasi punto del corpo di una stored procedure.
||||
|-|-|-|
|CREATE AGGREGATE|CREATE SCHEMA|SET SHOWPLAN_TEXT|
|CREATE DEFAULT|CREATE o ALTER TRIGGER|SET SHOWPLAN_XML|
|CREATE o ALTER FUNCTION|CREATE o ALTER VIEW|USE *database_name*|
|CREATE o ALTER PROCEDURE|SET PARSEONLY||
|CREATE RULE|SET SHOWPLAN_ALL||
Una procedura può fare riferimento a tabelle che non esistono ancora. In fase di creazione viene eseguito solo un controllo della sintassi. La procedura non viene compilata fino alla prima esecuzione ed è solo durante la compilazione che vengono risolti tutti gli oggetti a cui viene fatto riferimento nella procedura. È quindi possibile creare una procedura con sintassi corretta che fa riferimento a tabelle non ancora esistenti. Se, tuttavia, le tabelle a cui viene fatto riferimento non esistono in fase di esecuzione, la procedura ha esito negativo.
Non è possibile specificare un nome di funzione come valore predefinito di un parametro o come valore passato a un parametro durante l'esecuzione di una procedura. Tuttavia, è possibile passare una funzione come variabile, come illustrato nell'esempio seguente.
```sql
-- Passing the function value as a variable.
DECLARE @CheckDate datetime = GETDATE();
EXEC dbo.uspGetWhereUsedProductID 819, @CheckDate;
GO
```
If the procedure makes changes on a remote instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], the changes cannot be rolled back. Remote procedures do not take part in transactions.
For the [!INCLUDE[ssDE](../../includes/ssde-md.md)] to reference the correct method when it is overloaded in the .NET Framework, the method specified in the EXTERNAL NAME clause must have the following characteristics:
- Be declared as a static method.
- Receive the same number of parameters as the number of parameters of the procedure.
- Use parameter types that are compatible with the data types of the corresponding parameters of the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] procedure. For information about matching [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] data types to the [!INCLUDE[dnprdnshort](../../includes/dnprdnshort-md.md)] data types, see [Mapping CLR Parameter Data](../../relational-databases/clr-integration-database-objects-types-net-framework/mapping-clr-parameter-data.md).
## <a name="metadata"></a>Metadati
Nella tabella seguente sono elencate le viste del catalogo e le DMV utilizzabili per restituire informazioni sulle stored procedure.
|Vista|Descrizione|
|----------|-----------------|
|[sys.sql_modules](../../relational-databases/system-catalog-views/sys-sql-modules-transact-sql.md)|Viene restituita la definizione di una procedura [!INCLUDE[tsql](../../includes/tsql-md.md)]. Il testo di una procedura creata con l'opzione ENCRYPTION non può essere visualizzato tramite la vista del catalogo **sys.sql_modules**.|
|[sys.assembly_modules](../../relational-databases/system-catalog-views/sys-assembly-modules-transact-sql.md)|Vengono restituite informazioni su una procedura CLR.|
|[sys.parameters](../../relational-databases/system-catalog-views/sys-parameters-transact-sql.md)|Vengono restituite informazioni sui parametri definiti in una procedura.|
|[sys.sql_expression_dependencies](../../relational-databases/system-catalog-views/sys-sql-expression-dependencies-transact-sql.md) [sys.dm_sql_referenced_entities](../../relational-databases/system-dynamic-management-views/sys-dm-sql-referenced-entities-transact-sql.md) [sys.dm_sql_referencing_entities](../../relational-databases/system-dynamic-management-views/sys-dm-sql-referencing-entities-transact-sql.md)|Vengono restituiti gli oggetti a cui una procedura fa riferimento.|
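For example, the following queries inspect a procedure's definition and its parameters (the object names are placeholders taken from the examples later in this topic):
```sql
-- Return the stored definition of a Transact-SQL procedure.
SELECT m.definition
FROM sys.sql_modules AS m
WHERE m.object_id = OBJECT_ID(N'HumanResources.uspGetAllEmployees');
-- List a procedure's parameters, their types, and output direction.
SELECT p.name, TYPE_NAME(p.user_type_id) AS type_name, p.is_output
FROM sys.parameters AS p
WHERE p.object_id = OBJECT_ID(N'HumanResources.uspGetEmployees');
```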
To estimate the size of a compiled procedure, use the following Performance Monitor counters.
|Performance Monitor object name|Performance Monitor Counter name|
|-------------------------------------|--------------------------------------|
|SQLServer: Plan Cache Object|Cache Hit Ratio|
||Cache Pages|
||Cache Object Counts*|
*These counters are available for various categories of cache objects including ad hoc [!INCLUDE[tsql](../../includes/tsql-md.md)], prepared [!INCLUDE[tsql](../../includes/tsql-md.md)], procedures, triggers, and so on. For more information, see [SQL Server, Plan Cache Object](../../relational-databases/performance-monitor/sql-server-plan-cache-object.md).
## <a name="security"></a>Security
### <a name="permissions"></a>Permissions
Sono richieste l'autorizzazione **CREATE PROCEDURE** per il database e **ALTER** per lo schema in cui viene creata la procedura. In alternativa, è richiesta l'appartenenza al ruolo predefinito del database **db_ddladmin**.
Per le stored procedure CLR è necessaria la proprietà dell'assembly a cui viene fatto riferimento nella clausola EXTERNAL NAME oppure l'autorizzazione **REFERENCES** per tale assembly.
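For instance, granting a user the permissions needed to create procedures in the `dbo` schema might look like this (the user name is a placeholder):
```sql
GRANT CREATE PROCEDURE TO [AppDeployUser];
GRANT ALTER ON SCHEMA::dbo TO [AppDeployUser];
```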
## <a name="mot"></a> CREATE PROCEDURE e tabelle ottimizzate per la memoria
È possibile accedere alle tabelle ottimizzate per la memoria da stored procedure compilate sia in modo tradizionale che in modo nativo. Nella maggior parte dei casi, le stored procedure native sono più efficienti.
Per altre informazioni, vedere [stored procedure compilate in modo nativo](../../relational-databases/in-memory-oltp/natively-compiled-stored-procedures.md).
L'esempio seguente illustra come creare una stored procedure compilata in modo nativo che accede a una tabella ottimizzata per la memoria, `dbo.Departments`:
```sql
CREATE PROCEDURE dbo.usp_add_kitchen @dept_id int, @kitchen_count int NOT NULL
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
UPDATE dbo.Departments
SET kitchen_count = ISNULL(kitchen_count, 0) + @kitchen_count
WHERE id = @dept_id
END;
GO
```
A procedure created without NATIVE_COMPILATION cannot be altered to a natively compiled stored procedure.
For a discussion of programmability in natively compiled stored procedures, the supported query surface area, and operators, see [Supported Features for Natively Compiled T-SQL Modules](../../relational-databases/in-memory-oltp/supported-features-for-natively-compiled-t-sql-modules.md).
## <a name="Examples"></a> Esempi
|Category|Elementi di sintassi inclusi|
|--------------|------------------------------|
|[Sintassi di base](#BasicSyntax)|CREATE PROCEDURE|
|[Passaggio di parametri](#Parameters)|@parameter <br> • = predefinito <br> • OUTPUT <br> • tipo di parametro con valori di tabella <br> • CURSOR VARYING|
|[Modifica dei dati tramite una stored procedure](#Modify)|UPDATE|
|[Gestione degli errori](#Error)|TRY…CATCH|
|[Offuscamento della definizione della procedura](#Encrypt)|WITH ENCRYPTION|
|[Ricompilazione forzata della procedura](#Recompile)|WITH RECOMPILE|
|[Impostazione del contesto di sicurezza](#Security)|EXECUTE AS|
### <a name="BasicSyntax"></a> Sintassi di base
Negli esempi contenuti in questa sezione vengono illustrate le funzionalità di base dell'istruzione CREATE PROCEDURE tramite la sintassi minima necessaria.
#### <a name="a-creating-a-simple-transact-sql-procedure"></a>A. Creazione di una procedura Transact-SQL semplice
Nell'esempio seguente viene creata una stored procedure tramite cui vengono restituiti tutti i dipendenti (per cui vengono indicati il nome e il cognome), le relative posizioni e i nomi dei reparti di appartenenza da una vista nel database [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)]. In questa procedura non viene usato alcun parametro. Nell'esempio vengono quindi illustrati tre metodi di esecuzione della procedura.
```sql
CREATE PROCEDURE HumanResources.uspGetAllEmployees
AS
SET NOCOUNT ON;
SELECT LastName, FirstName, JobTitle, Department
FROM HumanResources.vEmployeeDepartment;
GO
SELECT * FROM HumanResources.vEmployeeDepartment;
```
The `uspGetAllEmployees` procedure can be executed in the following ways:
```sql
EXECUTE HumanResources.uspGetAllEmployees;
GO
-- Or
EXEC HumanResources.uspGetAllEmployees;
GO
-- Or, if this procedure is the first statement within a batch:
HumanResources.uspGetAllEmployees;
```
#### <a name="b-returning-more-than-one-result-set"></a>B. Restituzione di più di un set di risultati
Tramite la procedura seguente vengono restituiti due set di risultati.
```sql
CREATE PROCEDURE dbo.uspMultipleResults
AS
SELECT TOP(10) BusinessEntityID, Lastname, FirstName FROM Person.Person;
SELECT TOP(10) CustomerID, AccountNumber FROM Sales.Customer;
GO
```
#### <a name="c-creating-a-clr-stored-procedure"></a>C. Creazione di una stored procedure CLR
L'esempio seguente crea la procedura `GetPhotoFromDB` che fa riferimento al metodo `GetPhotoFromDB` della classe `LargeObjectBinary` nell'assembly `HandlingLOBUsingCLR`. Prima della creazione della procedura, l'assembly `HandlingLOBUsingCLR` viene registrato nel database locale.
**Si applica a**: da [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] a [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], [!INCLUDE[sqldbesa](../../includes/sqldbesa-md.md)] (se si usa un assembly creato da *assembly_bits*)
```sql
CREATE ASSEMBLY HandlingLOBUsingCLR
FROM '\\MachineName\HandlingLOBUsingCLR\bin\Debug\HandlingLOBUsingCLR.dll';
GO
CREATE PROCEDURE dbo.GetPhotoFromDB
(
@ProductPhotoID int,
@CurrentDirectory nvarchar(1024),
@FileName nvarchar(1024)
)
AS EXTERNAL NAME HandlingLOBUsingCLR.LargeObjectBinary.GetPhotoFromDB;
GO
```
### <a name="Parameters"></a> Passaggio di parametri
Negli esempi di questa sezione viene illustrato l'utilizzo dei parametri di input e di output per il passaggio di valori a e da una stored procedure.
#### <a name="d-creating-a-procedure-with-input-parameters"></a>D. Creazione di una procedura con parametri di input
Nell'esempio seguente viene creata una stored procedure tramite cui vengono restituite informazioni per un dipendente specifico passando i valori relativi al nome e al cognome del dipendente. In questa procedura vengono accettate solo corrispondenze esatte per i parametri passati.
```sql
IF OBJECT_ID ( 'HumanResources.uspGetEmployees', 'P' ) IS NOT NULL
DROP PROCEDURE HumanResources.uspGetEmployees;
GO
CREATE PROCEDURE HumanResources.uspGetEmployees
@LastName nvarchar(50),
@FirstName nvarchar(50)
AS
SET NOCOUNT ON;
SELECT FirstName, LastName, JobTitle, Department
FROM HumanResources.vEmployeeDepartment
WHERE FirstName = @FirstName AND LastName = @LastName;
GO
```
The `uspGetEmployees` procedure can be executed in the following ways:
```sql
EXECUTE HumanResources.uspGetEmployees N'Ackerman', N'Pilar';
-- Or
EXEC HumanResources.uspGetEmployees @LastName = N'Ackerman', @FirstName = N'Pilar';
GO
-- Or
EXECUTE HumanResources.uspGetEmployees @FirstName = N'Pilar', @LastName = N'Ackerman';
GO
-- Or, if this procedure is the first statement within a batch:
HumanResources.uspGetEmployees N'Ackerman', N'Pilar';
```
#### <a name="e-using-a-procedure-with-wildcard-parameters"></a>E. Utilizzo di una procedura con parametri di caratteri jolly
Nell'esempio seguente viene creata una stored procedure tramite cui vengono restituite informazioni per i dipendenti passando valori completi o parziali relativi al nome e al cognome dei dipendenti. Lo schema di questa procedura corrisponde ai parametri passati oppure, se non è stato specificato alcun parametro, ai parametri predefiniti (cognomi che iniziano con la lettera `D`).
```sql
IF OBJECT_ID ( 'HumanResources.uspGetEmployees2', 'P' ) IS NOT NULL
DROP PROCEDURE HumanResources.uspGetEmployees2;
GO
CREATE PROCEDURE HumanResources.uspGetEmployees2
@LastName nvarchar(50) = N'D%',
@FirstName nvarchar(50) = N'%'
AS
SET NOCOUNT ON;
SELECT FirstName, LastName, JobTitle, Department
FROM HumanResources.vEmployeeDepartment
WHERE FirstName LIKE @FirstName AND LastName LIKE @LastName;
```
The `uspGetEmployees2` procedure can be executed in many combinations. Only a few possible combinations are shown here.
```sql
EXECUTE HumanResources.uspGetEmployees2;
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'Wi%';
-- Or
EXECUTE HumanResources.uspGetEmployees2 @FirstName = N'%';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'[CK]ars[OE]n';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'Hesse', N'Stefen';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'H%', N'S%';
```
#### <a name="f-using-output-parameters"></a>F. Utilizzo di parametri OUTPUT
Nell'esempio seguente viene creata la procedura `uspGetList`. che restituisce un elenco di prodotti il cui prezzo non supera un determinato importo. In questo esempio viene illustrato l'utilizzo di più istruzioni `SELECT` e di più parametri `OUTPUT`. I parametri OUTPUT consentono a una procedura esterna, un batch o più istruzioni [!INCLUDE[tsql](../../includes/tsql-md.md)] di accedere a un valore impostato durante l'esecuzione della procedura.
```sql
IF OBJECT_ID ( 'Production.uspGetList', 'P' ) IS NOT NULL
DROP PROCEDURE Production.uspGetList;
GO
CREATE PROCEDURE Production.uspGetList @Product varchar(40)
, @MaxPrice money
, @ComparePrice money OUTPUT
, @ListPrice money OUT
AS
SET NOCOUNT ON;
SELECT p.[Name] AS Product, p.ListPrice AS 'List Price'
FROM Production.Product AS p
JOIN Production.ProductSubcategory AS s
ON p.ProductSubcategoryID = s.ProductSubcategoryID
WHERE s.[Name] LIKE @Product AND p.ListPrice < @MaxPrice;
-- Populate the output variable @ListPrice.
SET @ListPrice = (SELECT MAX(p.ListPrice)
FROM Production.Product AS p
JOIN Production.ProductSubcategory AS s
ON p.ProductSubcategoryID = s.ProductSubcategoryID
WHERE s.[Name] LIKE @Product AND p.ListPrice < @MaxPrice);
-- Populate the output variable @ComparePrice.
SET @ComparePrice = @MaxPrice;
GO
```
Execute `uspGetList` to return a list of [!INCLUDE[ssSampleDBCoShort](../../includes/sssampledbcoshort-md.md)] products (Bikes) that cost less than `$700`. The `OUTPUT` parameters `@Cost` and `@ComparePrice` are used with control-of-flow language to return a message to the **Messages** window.
> [!NOTE]
> The OUTPUT variable must be defined when the procedure is created, and also when the variable is used. The parameter name and variable name don't have to match; however, the data type and parameter positioning must match, unless the `@ListPrice` = *variable* syntax is used (a hypothetical named-assignment variant is sketched at the end of this example).
```sql
DECLARE @ComparePrice money, @Cost money ;
EXECUTE Production.uspGetList '%Bikes%', 700,
@ComparePrice OUT,
@Cost OUTPUT
IF @Cost <= @ComparePrice
BEGIN
PRINT 'These products can be purchased for less than
$'+RTRIM(CAST(@ComparePrice AS varchar(20)))+'.'
END
ELSE
PRINT 'The prices for all products in this category exceed
$'+ RTRIM(CAST(@ComparePrice AS varchar(20)))+'.';
```
Here is the partial result set:
```
Product List Price
-------------------------- ----------
Road-750 Black, 58 539.99
Mountain-500 Silver, 40 564.99
Mountain-500 Silver, 42 564.99
...
Road-750 Black, 48 539.99
Road-750 Black, 52 539.99
(14 row(s) affected)
These products can be purchased for less than $700.00.
```
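For illustration only, the same call can also be written with the named-assignment form that the note above mentions (`@Parameter = variable`); in this hypothetical variant the argument order no longer has to match the procedure definition:
```sql
-- Named assignment (@Parameter = variable): positional matching is not required.
-- The local variable names are arbitrary placeholders.
DECLARE @PriceCheck money, @HighestPrice money;
EXECUTE Production.uspGetList
    @Product = '%Bikes%',
    @MaxPrice = 700,
    @ListPrice = @HighestPrice OUTPUT,
    @ComparePrice = @PriceCheck OUTPUT;
```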
#### <a name="g-using-a-table-valued-parameter"></a>G. Using a table-valued parameter
The following example uses a table-valued parameter type to insert multiple rows into a table. The example creates the parameter type, declares a table variable to reference it, fills the parameter list, and then passes the values to a stored procedure, which uses them to insert multiple rows into a table.
```sql
/* Create a table type. */
CREATE TYPE LocationTableType AS TABLE
( LocationName VARCHAR(50)
, CostRate INT );
GO
/* Create a procedure to receive data for the table-valued parameter. */
CREATE PROCEDURE usp_InsertProductionLocation
@TVP LocationTableType READONLY
AS
SET NOCOUNT ON
INSERT INTO [AdventureWorks2012].[Production].[Location]
([Name]
,[CostRate]
,[Availability]
,[ModifiedDate])
SELECT *, 0, GETDATE()
FROM @TVP;
GO
/* Declare a variable that references the type. */
DECLARE @LocationTVP
AS LocationTableType;
/* Add data to the table variable. */
INSERT INTO @LocationTVP (LocationName, CostRate)
SELECT [Name], 0.00
FROM
[AdventureWorks2012].[Person].[StateProvince];
/* Pass the table variable data to a stored procedure. */
EXEC usp_InsertProductionLocation @LocationTVP;
GO
```
#### <a name="h-using-an-output-cursor-parameter"></a>H. Using an OUTPUT cursor parameter
The following example uses the OUTPUT cursor parameter to pass a cursor that is local to a procedure back to the calling batch, procedure, or trigger.
First, create the procedure that declares and then opens a cursor on the `Currency` table:
```sql
CREATE PROCEDURE dbo.uspCurrencyCursor
@CurrencyCursor CURSOR VARYING OUTPUT
AS
SET NOCOUNT ON;
SET @CurrencyCursor = CURSOR
FORWARD_ONLY STATIC FOR
SELECT CurrencyCode, Name
FROM Sales.Currency;
OPEN @CurrencyCursor;
GO
```
Next, run a batch that declares a local cursor variable, executes the procedure to assign the cursor to the local variable, and then fetches the rows from the cursor.
```sql
DECLARE @MyCursor CURSOR;
EXEC dbo.uspCurrencyCursor @CurrencyCursor = @MyCursor OUTPUT;
WHILE (@@FETCH_STATUS = 0)
BEGIN;
FETCH NEXT FROM @MyCursor;
END;
CLOSE @MyCursor;
DEALLOCATE @MyCursor;
GO
```
### <a name="Modify"></a> Modifying data by using a stored procedure
The examples in this section demonstrate how to insert or modify data in tables or views by including a Data Manipulation Language (DML) statement in the definition of the procedure.
#### <a name="i-using-update-in-a-stored-procedure"></a>I. Using UPDATE in a stored procedure
The following example uses an UPDATE statement in a stored procedure. The procedure takes one input parameter, `@NewHours`, whose value is used in the UPDATE statement to update the column `VacationHours` in the table `HumanResources.Employee`. A CASE expression is used in the SET clause to conditionally determine the value that is set for `VacationHours`. When an employee is paid hourly (`SalariedFlag` = 0), `VacationHours` is set to the current number of hours plus the value specified in `@NewHours`; otherwise, `VacationHours` is set to the value specified in `@NewHours`.
```sql
CREATE PROCEDURE HumanResources.Update_VacationHours
@NewHours smallint
AS
SET NOCOUNT ON;
UPDATE HumanResources.Employee
SET VacationHours =
( CASE
WHEN SalariedFlag = 0 THEN VacationHours + @NewHours
ELSE @NewHours
END
)
WHERE CurrentFlag = 1;
GO
EXEC HumanResources.Update_VacationHours 40;
```
### <a name="Error"></a> Handling errors
The examples in this section demonstrate methods to handle errors that might occur when the stored procedure is executed.
#### <a name="j-using-trycatch"></a>J. Using TRY…CATCH
The following example shows how to use a TRY…CATCH construct to return error information caught during the execution of a stored procedure.
```sql
CREATE PROCEDURE Production.uspDeleteWorkOrder ( @WorkOrderID int )
AS
SET NOCOUNT ON;
BEGIN TRY
BEGIN TRANSACTION
-- Delete rows from the child table, WorkOrderRouting, for the specified work order.
DELETE FROM Production.WorkOrderRouting
WHERE WorkOrderID = @WorkOrderID;
-- Delete the rows from the parent table, WorkOrder, for the specified work order.
DELETE FROM Production.WorkOrder
WHERE WorkOrderID = @WorkOrderID;
COMMIT
END TRY
BEGIN CATCH
-- Determine if an error occurred.
IF @@TRANCOUNT > 0
ROLLBACK
-- Return the error information.
DECLARE @ErrorMessage nvarchar(4000), @ErrorSeverity int;
SELECT @ErrorMessage = ERROR_MESSAGE(),@ErrorSeverity = ERROR_SEVERITY();
RAISERROR(@ErrorMessage, @ErrorSeverity, 1);
END CATCH;
GO
EXEC Production.uspDeleteWorkOrder 13;
/* Intentionally generate an error by reversing the order in which rows
are deleted from the parent and child tables. This change does not
cause an error when the procedure definition is altered, but produces
an error when the procedure is executed.
*/
ALTER PROCEDURE Production.uspDeleteWorkOrder ( @WorkOrderID int )
AS
BEGIN TRY
BEGIN TRANSACTION
-- Delete the rows from the parent table, WorkOrder, for the specified work order.
DELETE FROM Production.WorkOrder
WHERE WorkOrderID = @WorkOrderID;
-- Delete rows from the child table, WorkOrderRouting, for the specified work order.
DELETE FROM Production.WorkOrderRouting
WHERE WorkOrderID = @WorkOrderID;
COMMIT TRANSACTION
END TRY
BEGIN CATCH
-- Determine if an error occurred.
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
-- Return the error information.
DECLARE @ErrorMessage nvarchar(4000), @ErrorSeverity int;
SELECT @ErrorMessage = ERROR_MESSAGE(),@ErrorSeverity = ERROR_SEVERITY();
RAISERROR(@ErrorMessage, @ErrorSeverity, 1);
END CATCH;
GO
-- Execute the altered procedure.
EXEC Production.uspDeleteWorkOrder 15;
DROP PROCEDURE Production.uspDeleteWorkOrder;
```
### <a name="Encrypt"></a> Obfuscating the procedure definition
The examples in this section show how to obfuscate the definition of the stored procedure.
#### <a name="k-using-the-with-encryption-option"></a>K. Using the WITH ENCRYPTION option
The following example creates the `HumanResources.uspEncryptThis` procedure.
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], SQL Database.
```sql
CREATE PROCEDURE HumanResources.uspEncryptThis
WITH ENCRYPTION
AS
SET NOCOUNT ON;
SELECT BusinessEntityID, JobTitle, NationalIDNumber,
VacationHours, SickLeaveHours
FROM HumanResources.Employee;
GO
```
The `WITH ENCRYPTION` option obfuscates the definition of the procedure when the system catalog is queried or metadata functions are used, as shown by the following examples.
Run `sp_helptext`:
```sql
EXEC sp_helptext 'HumanResources.uspEncryptThis';
```
[!INCLUDE[ssResult](../../includes/ssresult-md.md)]
`The text for object 'HumanResources.uspEncryptThis' is encrypted.`
Directly query the `sys.sql_modules` catalog view:
```sql
SELECT definition FROM sys.sql_modules
WHERE object_id = OBJECT_ID('HumanResources.uspEncryptThis');
```
[!INCLUDE[ssResult](../../includes/ssresult-md.md)]
```
definition
--------------------------------
NULL
```
### <a name="Recompile"></a> Forcing the procedure to recompile
The examples in this section use the WITH RECOMPILE clause to force the procedure to recompile every time it is executed.
#### <a name="l-using-the-with-recompile-option"></a>L. Using the WITH RECOMPILE option
The `WITH RECOMPILE` clause is helpful when the parameters supplied to the procedure aren't typical, and when a new execution plan shouldn't be cached or stored in memory.
```sql
IF OBJECT_ID ( 'dbo.uspProductByVendor', 'P' ) IS NOT NULL
DROP PROCEDURE dbo.uspProductByVendor;
GO
CREATE PROCEDURE dbo.uspProductByVendor @Name varchar(30) = '%'
WITH RECOMPILE
AS
SET NOCOUNT ON;
SELECT v.Name AS 'Vendor name', p.Name AS 'Product name'
FROM Purchasing.Vendor AS v
JOIN Purchasing.ProductVendor AS pv
ON v.BusinessEntityID = pv.BusinessEntityID
JOIN Production.Product AS p
ON pv.ProductID = p.ProductID
WHERE v.Name LIKE @Name;
```
### <a name="Security"></a> Setting the security context
The examples in this section use the EXECUTE AS clause to set the security context in which the stored procedure executes.
#### <a name="m-using-the-execute-as-clause"></a>M. Using the EXECUTE AS clause
The following example shows how to use the [EXECUTE AS](../../t-sql/statements/execute-as-clause-transact-sql.md) clause to specify the security context in which a procedure can be executed. In the example, the option `CALLER` specifies that the procedure can be executed in the context of the user that calls it.
```sql
CREATE PROCEDURE Purchasing.uspVendorAllInfo
WITH EXECUTE AS CALLER
AS
SET NOCOUNT ON;
SELECT v.Name AS Vendor, p.Name AS 'Product name',
v.CreditRating AS 'Rating',
v.ActiveFlag AS Availability
FROM Purchasing.Vendor v
INNER JOIN Purchasing.ProductVendor pv
ON v.BusinessEntityID = pv.BusinessEntityID
INNER JOIN Production.Product p
ON pv.ProductID = p.ProductID
ORDER BY v.Name ASC;
GO
```
#### <a name="n-creating-custom-permission-sets"></a>N. Creating custom permission sets
The following example uses the EXECUTE AS clause to create custom permissions for a database operation. Some operations, such as TRUNCATE TABLE, don't have grantable permissions. By incorporating the TRUNCATE TABLE statement within a stored procedure and specifying that the procedure execute as a user that has permissions to modify the table, you can extend the permission to truncate the table to the user to whom you grant EXECUTE permissions on the procedure.
```sql
CREATE PROCEDURE dbo.TruncateMyTable
WITH EXECUTE AS SELF
AS TRUNCATE TABLE MyDB..MyTable;
```
## <a name="examples-includesssdwfullincludessssdwfull-mdmd-and-includesspdwincludessspdw-mdmd"></a>Examples: [!INCLUDE[ssSDWfull](../../includes/sssdwfull-md.md)] and [!INCLUDE[ssPDW](../../includes/sspdw-md.md)]
### <a name="o-create-a-stored-procedure-that-runs-a-select-statement"></a>O. Create a stored procedure that runs a SELECT statement
This example shows the basic syntax for creating and running a procedure. When running a batch, CREATE PROCEDURE must be the first statement. For example, to create the following stored procedure in [!INCLUDE[ssawPDW](../../includes/ssawpdw-md.md)], set the database context first, and then run the CREATE PROCEDURE statement.
```sql
-- Uses AdventureWorksDW database
--Run CREATE PROCEDURE as the first statement in a batch.
CREATE PROCEDURE Get10TopResellers
AS
BEGIN
SELECT TOP (10) r.ResellerName, r.AnnualSales
FROM DimReseller AS r
ORDER BY AnnualSales DESC, ResellerName ASC;
END
;
--Show 10 Top Resellers
EXEC Get10TopResellers;
```
## <a name="see-also"></a>See also
[ALTER PROCEDURE (Transact-SQL)](../../t-sql/statements/alter-procedure-transact-sql.md)
[Control-of-Flow Language Elements (Transact-SQL)](~/t-sql/language-elements/control-of-flow.md)
[Cursors](../../relational-databases/cursors.md)
[Data Types (Transact-SQL)](../../t-sql/data-types/data-types-transact-sql.md)
[DECLARE @local_variable (Transact-SQL)](../../t-sql/language-elements/declare-local-variable-transact-sql.md)
[DROP PROCEDURE (Transact-SQL)](../../t-sql/statements/drop-procedure-transact-sql.md)
[EXECUTE (Transact-SQL)](../../t-sql/language-elements/execute-transact-sql.md)
[EXECUTE AS (Transact-SQL)](../../t-sql/statements/execute-as-transact-sql.md)
[Stored Procedures (Database Engine)](../../relational-databases/stored-procedures/stored-procedures-database-engine.md)
[sp_procoption (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-procoption-transact-sql.md)
[sp_recompile (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-recompile-transact-sql.md)
[sys.sql_modules (Transact-SQL)](../../relational-databases/system-catalog-views/sys-sql-modules-transact-sql.md)
[sys.parameters (Transact-SQL)](../../relational-databases/system-catalog-views/sys-parameters-transact-sql.md)
[sys.procedures (Transact-SQL)](../../relational-databases/system-catalog-views/sys-procedures-transact-sql.md)
[sys.sql_expression_dependencies (Transact-SQL)](../../relational-databases/system-catalog-views/sys-sql-expression-dependencies-transact-sql.md)
[sys.assembly_modules (Transact-SQL)](../../relational-databases/system-catalog-views/sys-assembly-modules-transact-sql.md)
[sys.numbered_procedures (Transact-SQL)](../../relational-databases/system-catalog-views/sys-numbered-procedures-transact-sql.md)
[sys.numbered_procedure_parameters (Transact-SQL)](../../relational-databases/system-catalog-views/sys-numbered-procedure-parameters-transact-sql.md)
[OBJECT_DEFINITION (Transact-SQL)](../../t-sql/functions/object-definition-transact-sql.md)
[Create a Stored Procedure](../../relational-databases/stored-procedures/create-a-stored-procedure.md)
[Use Table-Valued Parameters (Database Engine)](../../relational-databases/tables/use-table-valued-parameters-database-engine.md)
[sys.dm_sql_referenced_entities (Transact-SQL)](../../relational-databases/system-dynamic-management-views/sys-dm-sql-referenced-entities-transact-sql.md)
[sys.dm_sql_referencing_entities (Transact-SQL)](../../relational-databases/system-dynamic-management-views/sys-dm-sql-referencing-entities-transact-sql.md)
---
title: Call flow topologies in Azure Communication Services
titleSuffix: An Azure Communication Services concept document
description: Learn about call flow topologies in Azure Communication Services.
author: nmurav
services: azure-communication-services
ms.author: nmurav
ms.date: 03/10/2021
ms.topic: overview
ms.service: azure-communication-services
ms.openlocfilehash: 526e3a1e4eeb6ef6a31a33498241d9a7443cca35
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/30/2021
ms.locfileid: "103490644"
---
# <a name="call-flow-topologies"></a>Call flow topologies
This article describes Azure Communication Services call flow topologies. This is a good article to review if you're an enterprise customer integrating Communication Services within a network that you manage. For an introduction to Communication Services call flows, see the [call flows conceptual documentation](./call-flows.md).
## <a name="background"></a>Background
### <a name="network-concepts"></a>Network concepts
Before reviewing call flow topologies, let's define some terms that are referred to throughout the document.
A **customer network** contains all the network segments that you manage. This might include wired and wireless networks within your office or between offices, data centers, and internet service providers.
A customer network usually has several network perimeters with firewalls and/or proxy servers that enforce your organization's security policies. We recommend performing a [comprehensive network assessment](/microsoftteams/3-envision-evaluate-my-environment) to ensure optimal performance and quality of your communication solution.
The **Communication Services network** is the network segment that supports Azure Communication Services. This network is managed by Microsoft and is distributed worldwide, with edges close to most customer networks. It's responsible for transport relaying, media processing for group calls, and other components that support rich real-time media communication.
### <a name="types-of-traffic"></a>Types of traffic
Communication Services is built primarily on two types of traffic: **real-time media** and **signaling**.
**Real-time media** is transmitted using the Real-time Transport Protocol (RTP). This protocol supports audio, video, and screen-sharing data transmission. This data is sensitive to network latency issues. While it's possible to transmit real-time media using TCP or HTTP, we recommend using UDP as the transport-layer protocol to support high-performance end-user experiences. Media payloads transmitted over RTP are secured using SRTP.
Users of your Communication Services solution will connect to your services from their client devices. Communication between these devices and your servers is handled with **signaling**. For example: call initiation and real-time chat are supported by signaling between devices and your service. Most signaling traffic uses HTTPS REST, though in some scenarios SIP can be used as the signaling traffic protocol. While this type of traffic is less sensitive to latency, low-latency signaling will give the users of your solution a pleasant end-user experience.
### <a name="interoperability-restrictions"></a>Interoperability restrictions
Media traversing Communication Services is restricted as follows:
**Third-party media relays.** Interoperability with a third-party media relay isn't supported. If one of your media endpoints is Communication Services, your media can only traverse Microsoft-native media relays, including those that support Microsoft Teams and Skype for Business.
A session border controller (SBC) at the boundary with the PSTN should terminate the RTP/RTCP stream, which is secured using SRTP, and not relay it to the next hop. If the flow is relayed to the next hop, it may not be understood.
**Third-party SIP proxy servers.** A Communication Services signaling SIP dialog with a third-party SBC and/or gateway may traverse Microsoft-native SIP proxies (just like Teams). Interoperability with third-party SIP proxies isn't supported.
**Third-party B2BUAs (or SBCs).** A Communication Services media flow to and from the PSTN is terminated by a third-party SBC. Interoperability with a third-party SBC within the Communication Services network (where a third-party SBC mediates between two Communication Services endpoints) isn't supported.
### <a name="unsupported-technologies"></a>Unsupported technologies
**VPN networks.** Communication Services doesn't support media transmission over VPNs. If your users are using VPN clients, the client should split and route media traffic over a non-VPN connection, as specified in [Enabling Lync media to bypass a VPN tunnel.](https://techcommunity.microsoft.com/t5/skype-for-business-blog/enabling-lync-media-to-bypass-a-vpn-tunnel/ba-p/620210)
*Note: although the title mentions Lync, this applies to Azure Communication Services and Teams as well.*
**Packet shapers.** Packet snipping, packet inspection, or packet shaping devices aren't supported and may degrade quality significantly.
### <a name="call-flow-principles"></a>Call flow principles
There are four general principles that govern Communication Services call flows:
* **The first participant of a Communication Services group call determines the region in which the call is hosted**. There are exceptions to this rule in some topologies, which are described below.
* **The media endpoint used to support a Communication Services call is selected based on media processing needs** and isn't affected by the number of participants on a call. For example, a point-to-point call may use a media endpoint in the cloud to process media for transcription or recording, while a call with two participants may not use any media endpoints. Group calls use a media endpoint for mixing and routing purposes. This endpoint is selected based on the region where the call is hosted. Media traffic sent from a client to the media endpoint may be routed directly, or it may use a transport relay in Azure if customer-network firewall restrictions require it.
* **Media traffic for peer-to-peer calls takes the most direct route that's available**, assuming the call doesn't need a media endpoint in the cloud. The preferred route is direct to the remote peer (client). If a direct route isn't available, one or more transport relays will relay the traffic. Media traffic shouldn't traverse servers that act as packet shapers, VPN servers, or other functions that might delay processing and degrade the end-user experience.
* **Signaling traffic always goes to the server that's closest to the user.**
To learn more about the selected media path, see the [call flows documentation](./call-flows.md).
## <a name="call-flows-in-various-topologies"></a>Call flows in various topologies
### <a name="communication-services-internet"></a>Communication Services (internet)
This topology is used by customers that use Communication Services from the cloud without any on-premises deployment, such as SIP interfaces. In this topology, traffic to and from Communication Services flows over the internet.
:::image type="content" source="./media/call-flows/detailed-flow-general.png" alt-text="Azure Communication Services topology.":::
*Figure 1 – Communication Services topology*
The direction of the arrows in the diagram above reflects the initiation direction of the communication that affects connectivity at the enterprise perimeter. In the case of UDP for media, the first packets may flow in the reverse direction, but these packets might be blocked until packets in the other direction flow.
Flow descriptions:
* Flow 2* – Represents a flow initiated by a user on the customer network to the internet as part of the user's Communication Services experience. Examples of these flows include DNS and peer-to-peer media transmission.
* Flow 2 – Represents a flow initiated by a remote mobile Communication Services user with VPN to the customer network.
* Flow 3 – Represents a flow initiated by a remote mobile Communication Services user to Communication Services endpoints.
* Flow 4 – Represents a flow initiated by a user on the customer network to Communication Services.
* Flow 5 – Represents a peer-to-peer media flow between one Communication Services user and another within the customer network.
* Flow 6 – Represents a peer-to-peer media flow between one remote mobile Communication Services user and another remote mobile Communication Services user over the internet.
### <a name="use-case-one-to-one"></a>Use case: One-to-one
One-to-one calls use a common model in which the caller obtains a set of candidates consisting of IP addresses/ports, including local, relay, and reflexive (the client's public IP address as seen by the relay) candidates. The caller sends these candidates to the called party; the called party also obtains a similar set of candidates and sends them to the caller. STUN connectivity check messages are used to find which caller/called-party media paths work, and the best working path is selected. Media (that is, RTP/RTCP packets secured using SRTP) is then sent using the selected candidate pair. The transport relay is deployed as part of Azure Communication Services.
If the local IP address/port candidates or the reflexive candidates have connectivity, the direct path between the clients (or through a NAT) is selected for media. If the clients are both on the customer network, the direct path should be selected. This requires direct UDP connectivity within the customer network. If the clients are both nomadic cloud users, then depending on the NAT/firewall, media may use direct connectivity.
If one client is internal on the customer network and one client is external (for example, a mobile cloud user), it's unlikely that direct connectivity between the local or reflexive candidates works. In this case, one option is to use one of the transport relay candidates from either client (for example, the internal client obtained a relay candidate from the transport relay in Azure; the external client needs to be able to send STUN/RTP/RTCP packets to the transport relay). Another option is for the internal client to send to the relay candidate obtained by the mobile cloud client. Although UDP connectivity for media is recommended, TCP is supported.
**High-level steps:**
1. Communication Services User A resolves the URL domain name (DNS) using Flow 2.
2. User A allocates a media relay port on the Teams transport relay using Flow 4.
3. Communication Services User A sends an "invite" with ICE candidates using Flow 4 to Communication Services.
4. Communication Services notifies User B using Flow 4.
5. User B allocates a media relay port on the Teams transport relay using Flow 4.
6. User B sends an "answer" with ICE candidates using Flow 4, which is forwarded back to User A using Flow 4.
7. User A and User B invoke ICE connectivity tests and the best available media path is selected (see the diagrams below for the various use cases; a minimal browser-side sketch of this negotiation follows this list).
8. Both users send telemetry to Communication Services using Flow 4.
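As a rough illustration of steps 2 through 7, the sketch below shows how a browser client gathers ICE candidates (host, reflexive, and relayed) that would then be exchanged over signaling. It uses the standard WebRTC API rather than any Communication Services SDK, and the STUN/TURN URLs and credentials are placeholders, not real Azure endpoints:
```typescript
// Minimal ICE-gathering sketch using the browser WebRTC API.
// The server list is a placeholder; Communication Services provisions
// its own transport relays for real calls.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'secret' },
  ],
});

pc.onicecandidate = (event) => {
  if (event.candidate) {
    // Each candidate is local ("host"), reflexive ("srflx"), or relayed ("relay");
    // in a real call these are sent to the remote peer via signaling (Flow 4).
    console.log(event.candidate.type, event.candidate.candidate);
  }
};

// Creating an offer starts candidate gathering (steps 2-3 above).
pc.createOffer({ offerToReceiveAudio: true })
  .then((offer) => pc.setLocalDescription(offer));
```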
### <a name="customer-network-intranet"></a>Customer network (intranet)
:::image type="content" source="./media/call-flows/one-to-one-internal.png" alt-text="Traffic flow within the customer network.":::
*Figure 2 – Within the customer network*
In step 7, peer-to-peer media Flow 5 is selected.
This media transmission is bidirectional. The direction of Flow 5 indicates that one side initiates the communication from a connectivity perspective. In this case, it doesn't matter which direction is used, because both endpoints are within the customer network.
### <a name="customer-network-to-external-user-media-relayed-by-teams-transport-relay"></a>Customer network to external user (media relayed by Teams transport relay)
:::image type="content" source="./media/call-flows/one-to-one-via-relay.png" alt-text="One-to-one call flow via a relay.":::
*Figure 3 – Customer network to external user (media relayed by Azure transport relay)*
In step 7, Flow 4 (from the customer network to Communication Services) and Flow 3 (from a remote mobile Communication Services user to Azure Communication Services) are selected.
These flows are relayed by the Teams transport relay in Azure.
This media transmission is bidirectional. The direction indicates which side initiates the communication from a connectivity perspective. In this case, these flows are used for signaling and media, using different transport protocols and addresses.
### <a name="customer-network-to-external-user-direct-media"></a>Customer network to external user (direct media)
:::image type="content" source="./media/call-flows/one-to-one-with-extenal.png" alt-text="One-to-one call flow with an external user.":::
*Figure 4 – Customer network to external user (direct media)*
In step 7, Flow 2 (from the customer network to the client's peer via the internet) is selected.
Direct media with a remote mobile user (not relayed through Azure) is optional. In other words, you can block this path to enforce a media path through a transport relay in Azure.
This media transmission is bidirectional. The direction of Flow 2 to the remote mobile user indicates that one side initiates the communication from a connectivity perspective.
### <a name="vpn-user-to-internal-user-media-relayed-by-teams-transport-relay"></a>VPN user to internal user (media relayed by Teams transport relay)
:::image type="content" source="./media/call-flows/vpn-to-internal-via-relay.png" alt-text="One-to-one call flow with a VPN user via the relay.":::
*Figure 5 – VPN user to internal user (media relayed by Azure relay)*
Signaling between the VPN network and the customer network uses Flow 2*. Signaling between the customer network and Azure uses Flow 4. However, media bypasses the VPN and is routed using Flows 3 and 4 to the Azure media relay.
### <a name="vpn-user-to-internal-user-direct-media"></a>VPN user to internal user (direct media)
:::image type="content" source="./media/call-flows/vpn-to-internal-direct-media.png" alt-text="One-to-one call flow (internal user) with a VPN, with direct media":::
*Figure 6 – VPN user to internal user (direct media)*
Signaling between the VPN network and the customer network uses Flow 2. Signaling between the customer network and Azure uses Flow 4. However, media bypasses the VPN and is routed using Flow 2 from the customer network to the internet.
This media transmission is bidirectional. The direction of Flow 2 to the remote mobile user indicates that one side initiates the communication from a connectivity perspective.
### <a name="vpn-user-to-external-user-direct-media"></a>VPN user to external user (direct media)
:::image type="content" source="./media/call-flows/vpn-user-to-external-user.png" alt-text="One-to-one call flow (external user) with a VPN, with direct media":::
*Figure 7 – VPN user to external user (direct media)*
Signaling for the customer network's VPN user uses Flow 2* and Flow 4 to Azure. However, media bypasses the VPN and is routed using Flow 6.
This media transmission is bidirectional. The direction of Flow 6 to the remote mobile user indicates that one side initiates the communication from a connectivity perspective.
### <a name="use-case-communication-services-client-to-pstn-through-communication-services-trunk"></a>Use case: Communication Services client to PSTN through Communication Services trunk
Communication Services allows placing and receiving calls from the Public Switched Telephone Network (PSTN). If the PSTN trunk is connected using phone numbers provided by Communication Services, there are no special connectivity requirements for this use case. If you want to connect your own on-premises PSTN trunk to Azure Communication Services, you can use the SIP interface (available in CY2021).
:::image type="content" source="./media/call-flows/acs-to-pstn.png" alt-text="One-to-one call with a PSTN participant":::
*Figure 8 – Communication Services user to PSTN through Azure trunking*
In this case, signaling and media from the customer network to Azure use Flow 4.
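For a concrete sense of how a client initiates such a call, here is a minimal sketch using the JavaScript Calling SDK (`@azure/communication-calling`). The phone numbers are placeholders and error handling is omitted; treat this as an outline rather than production code:
```typescript
import { CallClient } from '@azure/communication-calling';
import { AzureCommunicationTokenCredential } from '@azure/communication-common';

async function dialPstn(userToken: string): Promise<void> {
  const callClient = new CallClient();
  const credential = new AzureCommunicationTokenCredential(userToken);
  const callAgent = await callClient.createCallAgent(credential);

  // Start a call to a PSTN number. alternateCallerId must be a telephone
  // number acquired on the Communication Services resource (placeholder here).
  const call = callAgent.startCall(
    [{ phoneNumber: '+14255550123' }],                      // callee (placeholder)
    { alternateCallerId: { phoneNumber: '+18335550120' } }, // caller ID (placeholder)
  );
  console.log('Call id:', call.id);
}
```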
### <a name="use-case-communication-services-group-calls"></a>Use case: Communication Services group calls
The audio/video/screen-sharing (VBSS) service is part of Azure Communication Services. It has a public IP address that must be reachable from the customer network and from a nomadic cloud client. Each client/endpoint needs to be able to connect to the service.
Internal clients obtain local, reflexive, and relay candidates in the same manner as described for one-to-one calls. The clients send these candidates to the service in an invite. The service doesn't use a relay, because it has a publicly reachable IP address, so it responds with its local IP address candidate. The client and the service check connectivity in the same manner as described for one-to-one calls.
:::image type="content" source="./media/call-flows/acs-group-calls.png" alt-text="ACS group calls":::
*Figure 9 – Communication Services group calls*
## <a name="next-steps"></a>Next steps
> [!div class="nextstepaction"]
> [Get started with calling](../quickstarts/voice-video-calling/getting-started-with-calling.md)
The following documents may be interesting to you:
- Learn more about [call types](../concepts/voice-video-calling/about-call-types.md)
- Learn more about [client-server architecture](./client-and-server-architecture.md)
- Lär dig mer om [klient-server arkitektur](./client-and-server-architecture.md) | 89.995025 | 730 | 0.803693 | swe_Latn | 0.999934 |
---
layout: post
author: acoollevel
---
To mark one week since the initial release of the extension, here's a brand new update!
## Users in the first week
Before we get to what's in this update, here's a quick rundown of how the extension performed on the Chrome Web Store:
- Total current users: 108
- Windows: 72
- Mac OS: 35
- Linux: 1
- Daily impressions: 37 (5 May)
- Daily installs: 72 (5 May)
The data above is provided by the Chrome Web Store. Because Firefox users need to install the extension through GitHub, their number is unknown.
## New Features
- Screenshot button: Easily take a screenshot of the current frame (a rough implementation sketch follows this list)
- Continue watching: The extension now saves how far you have watched into a lecture, so you can continue watching later. This data syncs between devices if you are signed into your browser, using the APIs provided by Chrome and Firefox.
- Remember volume: The extension now remembers your preferred volume for lectures, so your speakers don't blast at full volume every time you start a new lecture.
- New keyboard shortcut: Press '/' to reset the speed.
- Popups: When you press a keyboard shortcut a popup will now appear letting you know what happened. For example, if you speed up the video using '>', there will be a popup letting you know the current speed.
- A new icon designed in Inkscape, to replace the bland default puzzle piece or whatever.
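For the curious, here's roughly how the screenshot and continue-watching features can work in a content script. This is a sketch, not the extension's actual code: the storage key scheme and function names are made up here.
```typescript
// Sketch: grab the current frame of a <video> element as a PNG data URL.
function captureFrame(video: HTMLVideoElement): string {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d')?.drawImage(video, 0, 0);
  return canvas.toDataURL('image/png');
}

// Sketch: persist/restore the playback position via extension sync storage
// (chrome.storage.sync), which follows the signed-in browser profile.
function savePosition(lectureId: string, video: HTMLVideoElement): void {
  chrome.storage.sync.set({ [`pos-${lectureId}`]: video.currentTime });
}

function restorePosition(lectureId: string, video: HTMLVideoElement): void {
  chrome.storage.sync.get(`pos-${lectureId}`, (items) => {
    const t = items[`pos-${lectureId}`];
    if (typeof t === 'number') video.currentTime = t;
  });
}
```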
## New Website
This website was set up to let people know where to go to download the extension, rather than just pointing them to GitHub release pages, which eventually become out of date. Initially I was using the default GitHub Pages theme for this, but it didn't end up including all the features I needed. I have now rewritten the website from scratch to be a lot faster and more expandable, which will save us plenty of time in the future.
## Contributors
Thanks to [@demar42](https://github.com/demar42) for the contribution of speed keyboard shortcuts added in 0.2.1.
## Future development
- Features that are currently under development can be seen on our [GitHub Roadmap](https://github.com/acoollevel/uoa-mediaplayer-plus/projects/1?fullscreen=true).
- A Safari version has been requested. As I don't have access to any Apple devices and the Windows version of Safari is horribly out of date, I probably won't be able to work on this anytime soon. If you'd like to work on Safari support, please let me know.
## Community
A [community Discord](https://discord.gg/sJbs6hu) has been set up. Anyone who uses the extension can join, ask questions, request features, etc.
That's all :)
#2020-05-12
09:33:15
Court Live Text Broadcast Channel
\#District Court, Courtroom 38
\#Judge 郭啟安 \#Trial \[2/10\]
\#0728 Sheung Wan \#Riot
[Previous session here](https://t.me/youarenotalonehk_live/4954)
0932 Prosecution witness list expanded to 18
\- PW12 added
\- PW13 and PW14 swapped
0934 Agreed facts read out
\- including 55 exhibits
(the day's event application / letter of no objection (LONO) / appeal documents, clothing and items worn, items carried, video footage, maps, etc.)
\- signed and confirmed by counsel for both sides
1007 Prosecution says it does not rely on the audio of certain clips
Judge Kwok agrees that reporters' voiceovers and the like need not be relied on, but does not accept declining to rely on the synchronous sound of the scene
1014 The defence will not admit the unlawful assembly charge (the alternative charge)
1015 Defence counsel Tsang notes that the word "rioters" has come up repeatedly over these two days, worrying that the sensitive term seems to jump to certain conclusions and that care is needed over "whether anything is being presupposed"
Judge Kwok says it is merely the difference between a simpler and a more long-winded wording, and believes it carries no insinuation about values or political views
[Next part here](https://t.me/youarenotalonehk_live/4975)
---
10:20:10
Court Live Text Broadcast Channel
\#District Court, Courtroom 27 \#Chief District Judge 高勁修
劉 (56) \#Plea hearing
(\#0815 Yuen Long, conspiracy to commit arson (alternative charge: criminal intimidation)) case background
Defence applied for the plea not to be taken today; adjourned to 7/7 1430. Prosecution did not object.
New bail conditions as follows:
Cash bail $300,000
Surety $300,000
Not to leave Hong Kong
Surrender travel documents
Reside at the reported address
Curfew 2200 to 0600
Report to a police station once a week
No direct or indirect contact with prosecution witnesses
No-entry order
Case adjourned to 7 July 1430 at the District Court.
---
10:21:46
\#District Court, Courtroom 38
\#Judge 郭啟安 \#Trial \[2/10\]
\#0728 Sheung Wan \#Riot
[Previous part here](https://t.me/youarenotalonehk_live/4969)
1017 Prosecution calls witness PW1: Senior Inspector 蘇
1018 The judge introduces the transparent protective screen
\- witnesses may remove their masks while testifying
\- makes it easier to observe their demeanour
\- hopefully of help to the trial
1020 Examination of the witness begins, covering his background on joining the force, the nature of his work, etc.
1024 Preparing to view clip P52 (Apple Daily Facebook live)
1026 Prosecution restates its position and insists on playing it without sound, as it does not intend to rely on the audio
1028 Both defence counsel consider it necessary to play the clips' audio, to preserve the completeness and fairness of the exhibits
1031 The judge says the sound/narration can reflect the situation at the time and place, and is confident that, sitting without a jury, the court can assess the audio content for itself
1032 The Apple Daily clip in P52 is formally played, with sound
\- the procession, which had no letter of no objection, setting off from Chater Garden and walking along the carriageway to the junction of Des Voeux Road Central and Jubilee Street
1047 The 14-minute-59-second Apple Daily clip ends; examination of the witness begins
1053 The 米報 clip in P52 is played (2 minutes 40 seconds)
\- around Connaught Road West and Des Voeux Road West
\- shows supplies being carried along the tram tracks
1101 Paint is opened so PW1 can mark the positions of the barriers
1113 Judge: why don't you finish questioning him (the witness) and then we take the morning break
1124 Court adjourns for 20 minutes
——
1045 The live reporter worries the prosecution intends to play the entire 3-hour Apple Daily live stream; if every exhibited clip is watched in full... seriously doubts whether 10 days will be enough
[Next part here](https://t.me/youarenotalonehk_live/4978)
---
10:28:10
\#Kowloon City Magistrates' Courts, Courtroom 7
\#Magistrate 黃國輝
\#Trial
何 (49) \#Trial (\#0805 Sham Shui Po, obstructing police: a cleaner; as officers walked towards a protester, the defendant allegedly spread his arms and shouted, "Run!")
Charge:
Obstructing a police officer in the due execution of his duty
Offences Against the Person Ordinance
(Note: the prosecution earlier amended the charge, originally brought under the Summary Offences Ordinance, to one under the Offences Against the Person Ordinance. The allegations are unchanged, but the former carries a maximum fine of $1,000 and six months' imprisonment on conviction, while the latter carries up to two years' imprisonment.)
Court in session
Defence cross-examines witness Sergeant 馮
---
11:51:18
Court Live Text Broadcast Channel
\#District Court, Courtroom 38
\#Judge 郭啟安 \#Trial \[2/10\]
\#0728 Sheung Wan \#Riot
[Previous part here](https://t.me/youarenotalonehk_live/4975)
1151 Court resumes
1213 Prosecution calls witness PW2: Chief Superintendent 陳
1300 Adjourned; resumes at 1430
[Next part here](https://t.me/youarenotalonehk_live/4994)
---
11:55:12
\#Kowloon City Magistrates' Courts, Courtroom 7
\#Magistrate 黃國輝
\#Trial
何 (49) \#Trial (\#0805 Sham Shui Po)
Charge:
Obstructing a police officer in the due execution of his duty
Offences Against the Person Ordinance
(Note: the prosecution earlier amended the charge, originally brought under the Summary Offences Ordinance, to one under the Offences Against the Person Ordinance. The allegations are unchanged, but the former carries a maximum fine of $1,000 and six months' imprisonment on conviction, while the latter carries up to two years' imprisonment.)
1220 Hearing continues
---
12:52:18
Court Live Text Broadcast Channel
\#West Kowloon Magistrates' Courts, Courtroom 9
\#Magistrate 黃雅茵
\#New case \#1116 Tsuen Wan
司徒 (25)
Charge: possession of an offensive weapon (namely a metal key-ring)
⭕⭕ Bail granted ⭕⭕
Bail conditions:
Existing cash bail
Not to leave Hong Kong
Surrender all travel documents
Report to a police station once a week
Adjourned to 30 June 2020, 0930, West Kowloon Magistrates' Courts, Courtroom 1
---
13:03:47
\#Kowloon City Magistrates' Courts, Courtroom 7
\#Magistrate 黃國輝
\#Trial
何 (49) \#Trial (\#0805 Sham Shui Po)
Charge:
Obstructing a police officer in the due execution of his duty
Offences Against the Person Ordinance
(Note: the prosecution earlier amended the charge, originally brought under the Summary Offences Ordinance, to one under the Offences Against the Person Ordinance. The allegations are unchanged, but the former carries a maximum fine of $1,000 and six months' imprisonment on conviction, while the latter carries up to two years' imprisonment.)
Adjourned; resumes at 1430
---
14:33:02
Court Live Text Broadcast Channel
\#District Court, Courtroom 38
\#Judge 郭啟安 \#Trial \[2/10\]
\#0728 Sheung Wan \#Riot
[Previous part here](https://t.me/youarenotalonehk_live/4978)
1432 Court resumes
The morning's examination of witness PW2 continues.
1502 NOW News clip played
\- around 18:54 that day
\- Connaught Road West / Western Street
\- NOW: protesters charge and throw objects at the police line
\- police raise the black flag, then immediately fire tear gas
etc.
1541 Defence counsel Tsang asks how many people were arrested in the operation and whether the commander needs to write a report. PW2 says no. Pressed, PW2 replies, "I work 12-hour shifts every day, I can't find the time to write," adding that since he had to command on the ground, there was no way he could handle that much paperwork.
Asked who one could approach to find out the total number arrested between 19:00 and 19:30, PW2 suggests asking every police station, since arrestees are taken to police stations or detention facilities.
[Next part here](https://t.me/youarenotalonehk_live/5022)
---
14:41:01
\#Sha Tin Magistrates' Courts, Courtroom 1
\#Acting Chief Magistrate 鄧少雄
\#1215 Sha Tin \#Mention
洪 (20)
Charge: assaulting a police officer in the due execution of his duty
Case summary:
// Netizens called a "Shop with You this Christmas" event on Sunday (the 15th), and police-public clashes broke out again at New Town Plaza, Sha Tin. A 20-year-old tertiary student carrying a camera allegedly kicked a police sergeant and was charged with one count of assaulting police... He is accused of assaulting Sergeant 陳勁濤 at shop 37, "西樹泡芙", Sha Tin Station, on 15 December this year. //
(Excerpt from Ming Pao, 17 December 2020)
Intended plea: **not guilty ❌**
Defence applied to vary bail: police reporting reduced from three times a week to twice, and the curfew to start from 2200.
Prosecution objected; the magistrate **granted the variation ✅**
Otherwise bail continues pending trial on the [original conditions](https://t.me/youarenotalonehk_live/2387)
Trial estimated at 1 day
Adjourned to 15 June 2020, 0930, Sha Tin Magistrates' Courts, Courtroom 6, for trial.
---
14:49:08
\#Eastern Magistrates' Courts, Courtroom 1
\#Chief Magistrate 錢禮
14:30
楊 (18) \#Mention
(\#1109 Admiralty, possession of offensive weapons: a laser pointer and an extendable baton)
Defence applied for the plea not to be taken today; adjourned to 8/7.
Case adjourned to 8/7 1430; bail conditions unchanged.
---
14:49:57
\#District Court, Courtroom 27
\#Chief District Judge 高勁修
\#0824 Kwun Tong \#Plea hearing
A1: 吳 (24)
A2: 謝 (23)
A3: 周 (26)
Charges:
(1) Riot \[A1-A3\]
Charged with taking part in a riot near Ngau Tau Kok Police Station, Kowloon Bay, on 24 August 2019
(2) Unlicensed possession of radiocommunication apparatus \[A1\]
Charged with possessing, without a licence, a radio transceiver at the Sheung Yee Road Rest Garden, Kowloon Bay, on the same day
(3) Possession of an offensive weapon or instrument fit for unlawful purposes \[A1\]
Charged with possessing a hiking pole, pliers and a spanner at the same place on the same day
(4) Assaulting police \[A2\]
Charged with assaulting Chief Inspector 11827 梅永罡 under the footbridge at Wai Yip Street and Sheung Yee Road, Kowloon Bay, on the same day
Case summary:
A1 吳 allegedly swung an umbrella during the incident, injuring an officer. A2 謝 allegedly assembled with about 200 people at Wai Yip Street that afternoon, blocking the road with barricades and throwing bricks, stones, long umbrellas and other objects at the police line. When police moved to disperse them, A2 謝 allegedly resisted fiercely, striking back with a bamboo pole about 75 cm long that hit the arm of Chief Inspector 11827 梅永罡, before being subdued. A3 周 was subdued and arrested around the same time while fleeing in the Wai Yip Street area.
Background:
A1 吳 was arrested on 24 August and detained at San Uk Ling. At his first appearance on 26 August, he had plasters on the backs of his hands and on his elbows. His case originally involved two other defendants, against whom charges were withdrawn on 21 February.
A2 謝 first appeared on 28 August and was granted bail pending trial. A3 周 was still in hospital that day and did not appear until 2 September, wearing a neck brace and a mask.
The cases were consolidated and transferred to the District Court for trial on 23 April.
All three are applying for legal aid; adjournment sought pending the legal aid decisions
Case adjourned to 16 July 1430 at the District Court
✅ Bail continues in the meantime on the original conditions ✅
---
14:50:18
\#Tuen Mun Magistrates' Courts, Courtroom 1
\#Acting Chief Magistrate 張潔宜
\#1118 Yau Ma Tei
D1 李
D2 陳
D3 郭
D4 劉
D5 鄭
D6 林
D7 廖
D9 謝
Charge: riot
New charge: possession of an instrument fit for unlawful purposes with intent to use it for an unlawful purpose
Case: alleged to have taken part in a riot at Nathan Road near the junction with Waterloo Road, Yau Ma Tei, Kowloon, on 18 November 2019
Bail continues in the meantime on the original conditions
Case to be heard again on 18 June at Tuen Mun Magistrates' Courts, Courtroom 1
---
14:51:54
\#District Court, Courtroom 27
\#Chief District Judge 高勁修
\#0901 Tung Chung \#Plea hearing
黃 (22)
Charges:
(1) Criminal damage
Charged with damaging, without lawful excuse, a flagpole's metal cap, flagpole ropes and CCTV camera no. 35, belonging to the LCSD, inside Tung Chung Swimming Pool on 1 September 2019
(2) Conspiracy to desecrate the national flag
Charged with conspiring with 2 unknown persons at the same place on the same day to publicly and wilfully desecrate the national flag of China by burning
(3) Conspiracy to commit arson
Charged with damaging by fire, without lawful excuse, banners and sundry items opposite Tung Chung Swimming Pool on the same day
(4) Arson
Charged with damaging by fire, without lawful excuse, water-filled barriers at the junction of Tung Chung Eastern Road and Mei Tung Street on the same day
Background:
The case first came before the Kowloon City Magistrates' Courts on 7 September, where bail was refused by \#Magistrate 陳慧敏; on 13 September a bail review at the West Kowloon Magistrates' Courts was granted by \#Chief Magistrate 羅德泉. The defence complained that 黃 was punched in the abdomen by officers on arrest, threatened that his family would be arrested if he did not confess, and slapped after the video-recorded interview, suggesting a confession extracted by force; on 21 April the case was transferred to the District Court for trial.
Applying for legal aid; adjournment sought pending the legal aid decision
Case adjourned to 21 July 1430 at the District Court
✅ Bail continues in the meantime on the original conditions ✅
---
14:55:30
\#Kowloon City Magistrates' Courts, Courtroom 7
\#Magistrate 黃國輝
\#Trial
何 (49) \#Trial (\#0805 Sham Shui Po)
Charge:
Obstructing a police officer in the due execution of his duty
Offences Against the Person Ordinance
(Note: the prosecution earlier amended the charge, originally brought under the Summary Offences Ordinance, to one under the Offences Against the Person Ordinance. The allegations are unchanged, but the former carries a maximum fine of $1,000 and six months' imprisonment on conviction, while the latter carries up to two years' imprisonment.)
Accused of obstructing Sergeant 馮耀彤 in the execution of his duty at the junction of Yen Chow Street and Cheung Sha Wan Road, Sham Shui Po, on 5 August.
Defence cross-examines witness 馮耀彤
Resumes at 1430
---
14:57:27
\#District Court, Courtroom 27
\#Chief District Judge 高勁修
\#0907 Sha Tin \#Mention
——————
A1: 劉 (45)
A2: 謝 (42)
Charges:
(1) Assaulting police \[A1\]
Charged with assaulting Inspector 1663 劉俊豪 at Sha Tin Station on 7 September 2019
(2) Riot \[A1, A2\]
Charged with taking part in a riot at the same place on the same day, together with other unknown persons
Case summary:
劉 allegedly threw a mooncake tin and an umbrella at Inspector 1663 劉俊豪; 謝 allegedly struck officers with an umbrella and was then forcibly pulled into the station control room by other officers.
Background:
A1 劉 first appeared at the Eastern Magistrates' Courts on 16 September and was granted bail pending trial, with permission to leave Hong Kong Monday to Friday for business trips to Vietnam. At the mention on 28 October, the defence complained that the prosecution had refused to disclose the names of the police witnesses in the statements provided. \#Chief Magistrate 錢禮 questioned the basis for withholding the information and required the prosecution to formally apply to the court for an anonymity order. On 10 December, A1 劉 was additionally charged with riot.
A2 謝 first appeared on 24 October and was granted bail pending trial, but was barred from entering a 50-metre radius of Sha Tin Station. On 14 January the prosecution applied to consolidate the two cases and transfer them to the District Court for trial.
——————
李 (27)
Charges:
(1) Resisting a police officer
Charged with resisting Inspector 1663 劉俊豪 in the execution of his duty at Sha Tin Station on 7 September 2019
(2) Possession of offensive weapons
Charged with possessing offensive weapons, namely two Class 4 laser pointers, at Exit B of Sha Tin Station on the same day, with intent to use them for an unlawful purpose
Case summary:
李 allegedly struggled hard after being forcibly subdued by Inspector 1663 劉俊豪; 劉 then threw a mooncake tin at that inspector.
Background:
More than five months after the events, 李 first appeared on 21 February and was granted bail by \#Acting Chief Magistrate 徐綺薇. The prosecution later applied to transfer the case to the District Court, indicating it would then apply to consolidate it with the 劉/謝 case.
——————
Prosecution applies to consolidate the two cases
A1's legal aid application was refused; he will instruct private counsel
A3 李 is applying for legal aid and seeks an adjournment; if the other defendants are ready, pleas may also be taken that day
Case adjourned to 16 July 1430 at the District Court
✅ Bail continues in the meantime on the original conditions ✅
---
14:58:21
\#Eastern Magistrates' Courts, Courtroom 1
\#Chief Magistrate 錢禮 \#New case
鄧 (20) - causing an obstruction in a public place \#1113 Shau Kei Wan (blocking the road at the junction of Shau Kei Wan Road and Yiu Hing Road with bricks, rubbish, umbrellas and other objects)
Defence applied for the plea not to be taken; adjourned to 7/7 1430. The defence indicated that an alternative way of disposing of the case will be discussed.
⭕️ Bail granted ⭕️
Cash bail $1,000 (as before)
Report to a police station once a week
Reside at the reported address
Case adjourned to 7/7 1430, Eastern Magistrates' Courts, Courtroom 1.
---
14:59:12
\#District Court, Courtroom 27
\#Chief District Judge 高勁修
\#0805 Wong Tai Sin \#Mention
李 (33)
Charge: taking part in a riot
Charged with taking part in a riot with other unidentified persons at Lung Cheung Road, Wong Tai Sin, on 5 August 2019
Case summary:
李 is alleged to have thrown a helmet at officers and waved a red umbrella before his arrest; after police fired tear gas ahead, he allegedly picked up a tear gas round and threw it back at the police, who then rushed forward and arrested him.
Background:
At his first appearance on 7 August, 李 had both cheeks bandaged and a bandage wrapped around his right elbow. He was also originally charged with failing to produce proof of identity to PC 5881 鍾肇恒 that day. He was granted bail pending trial, and at the mention on 2 October his reporting was reduced to twice a week. At the mention on 27 November, \#Acting Chief Magistrate 徐綺薇 allowed the identity-document charge to be withdrawn. The prosecution applied to transfer the case to the District Court. At the District Court on 9 January, he sought a ten-week adjournment pending legal aid; \#Judge 郭偉健 granted only six weeks and reduced reporting to once a week.
Defence seeks an adjournment to obtain the prosecution's documents and legal advice
Prosecution says all documents have been provided to the defence
✅ Bail continues in the meantime on the original conditions ✅
Case adjourned to 7 July 1430 at the District Court
---
15:01:59
\#Kowloon City Magistrates' Courts, Courtroom 1
\#Acting Chief Magistrate 嚴舜儀
\#1027 Tsim Sha Tsui \#New case
何 (37)
Charge: possessing things with intent to destroy or damage property
Charged with having custody of four spanners outside Park Lane Shopper's Boulevard, 143-161 Nathan Road, Tsim Sha Tsui, on 27 October 2019, intending without reasonable excuse to use them to destroy or damage another's property
New bail conditions:
Surrender travel documents; not to leave Hong Kong
Reside at the reported address
Report to a police station once a week
Adjourned to 2 July 1430, Kowloon City Magistrates' Courts, Courtroom 1.
---
15:02:04
\#District Court, Courtroom 27
\#Chief District Judge 高勁修
\#0921 Yuen Long \#Plea hearing
A1: 張 (30) ‼️ remanded for over 7 months
A2: 陳 (25) ‼️ remanded for over 7 months
A3: 羅 (17)
Charges:
(1) Taking part in an unlawful assembly
A2 陳 and A3 羅 are charged with taking part in an unlawful assembly with others on 1/F, YOHO Mall II, Yuen Long, on 21 September 2019
(2) Criminal damage
A2 陳 and A3 羅 are charged with damaging, with others and without lawful excuse, 8 CCTV cameras belonging to Café de Coral Group Limited at shop A159, 1/F, YOHO Mall II, Yuen Long, on the same day
(3) Criminal damage
A3 羅 is charged with damaging, with others and without lawful excuse, 6 CCTV cameras belonging to Sun Hung Kai Properties Limited on 1/F, YOHO Mall II, Yuen Long, on the same day
(4) Assault occasioning actual bodily harm
A1 張 is charged with assaulting 李德忠 at Hong King Street on 22 September 2019, occasioning him actual bodily harm
(5) False imprisonment
A1 張 is charged with unlawfully and injuriously imprisoning 李德忠 at the same place on the same day, and detaining him against his will
(6) Riot
A1 張 and A2 陳 are charged with taking part in a riot with others at Castle Peak Road - Yuen Long, near the Light Rail Hong Lok Road stop, on 22 September 2019
(7) Criminal damage
A1 張 is charged with damaging, without lawful excuse, a smartphone belonging to 李德忠 at Castle Peak Road - Yuen Long, near the Light Rail Hong Lok Road stop, on the same day
(8) Wounding with intent
A2 陳 is charged with unlawfully and maliciously causing grievous bodily harm to 張灌雄 near the Light Rail Hong Lok Road stop, Castle Peak Road - Yuen Long, on 22 September 2019, with intent to cause him grievous bodily harm
Background:
A1 張 and A2 陳 first appeared on 30 September 2019, jointly facing three charges of assault occasioning actual bodily harm, criminal damage and wounding with intent; bail was refused by \#Acting Chief Magistrate 蘇文隆. A1 張 applied for bail in the Court of First Instance of the High Court before \#Judge 彭寶琴 on 16 October and \#Judge 陳慶偉 on 5 May, and was refused both times; A2 陳's High Court bail application before \#Deputy Judge 游德康 on 27 February was likewise refused.
A3 羅 first appeared on 21 January 2020 and was granted bail of $10,000 cash by \#Acting Chief Magistrate 蘇文隆, with daily reporting, a curfew and a travel ban, while A1 張 and A2 陳 were additionally charged with one count of riot.
The case was transferred to the District Court for trial on 21 April.
All three are applying for legal aid; adjournment sought pending the legal aid decisions
A2 陳 and A3 羅 are unrepresented today
A1 張 and A2 陳 made no bail application ‼️ they remain in custody ‼️
A3 羅 ✅ bail continues on the original conditions ✅
Case adjourned to 7 July 1430 at the District Court
---
15:04:57
\#Kowloon City Magistrates' Courts, Courtroom 1
\#Acting Chief Magistrate 嚴舜儀
\#1027 Tsim Sha Tsui
梁 (57)
Charge: assaulting police
Charged with assaulting Officer A (for whom the prosecution seeks anonymity) outside the Space Museum on 27 October 2019
Bail continues on the original conditions ($3,000 cash / report to a police station twice a week / no interference with witnesses).
Adjourned to 2 July 1430, Kowloon City Magistrates' Courts, Courtroom 1.
---
15:13:49
\#Fanling Magistrates' Courts, Courtroom 1
\#Acting Chief Magistrate 蘇文隆
\#1113 Tai Po \#New case
蘇
巢
Charges:
1. Unlawful assembly
2. Using a facial covering while at an unlawful assembly
3. Possession of an instrument fit for unlawful purposes with intent to use it for an unlawful purpose
4. Using a facial covering while at an unlawful assembly
5. Possession of ammunition without a licence
Case: alleged to have taken part, with others, in an unlawful assembly at Tai Wo Road, Tai Po, on 13 November 2019, intending to breach the peace; D1 allegedly wore a mask at the scene without lawful excuse and D2 a respirator; D1 allegedly had a spanner, pliers and cable ties, while D2 had 4 spent tear gas rounds
The magistrate granted the defendants bail
Cash bail:
Curfew: 2300 to 0700
Not to leave Hong Kong
Case adjourned to 21 July at the Fanling Magistrates' Courts
---
15:41:58
Court Live Text Broadcast Channel
\#Kowloon City Magistrates' Courts, Courtroom 7
\#Magistrate 黃國輝
\#Trial
何 (49) \#Trial (\#0805 Sham Shui Po)
Charge:
Obstructing a police officer in the due execution of his duty
Offences Against the Person Ordinance
(Note: the prosecution earlier amended the charge, originally brought under the Summary Offences Ordinance, to one under the Offences Against the Person Ordinance. The allegations are unchanged, but the former carries a maximum fine of $1,000 and six months' imprisonment on conviction, while the latter carries up to two years' imprisonment.)
Accused of obstructing Sergeant 馮耀彤 in the execution of his duty at the junction of Yen Chow Street and Cheung Sha Wan Road, Sham Shui Po, on 5 August.
Defence cross-examines witness 馮耀彤 (details to follow)
Court rises for 10 minutes before the prosecution's closing submissions
---
15:45:19
\#West Kowloon Magistrates' Courts, Courtroom 3
\#Chief Magistrate 羅德泉
\#0621 Police Headquarters
潘 (31) - painter
Charges: nine counts, including unlawful assembly
Today's hearing, for which both parties were required to attend, was to fix trial dates, as the two sides had many scheduling clashes.
After thirty minutes of back-and-forth between prosecution and defence, the trial dates are as follows:
12 June
15 June
17 June
24 June
26 June
29 June
30 June
2 July
3 July
The parties agreed to settle the agreed facts three days before trial.
‼️ No application by the defence; remains in custody. ‼️
(Note: the prosecutor said several times at the outset that he had made time for this case, stressing that he had been, and would keep being, extremely accommodating to the defence, in a slightly disdainful tone. The magistrate eventually grew impatient and responded: "Actually, what we come to court for is to fix dates and accommodate one another. Just now you said you didn't want the first trial day to fall in the twenties; I accommodated you on that too." The prosecutor stopped complaining after that. The scene left the live reporter thoroughly gratified.)
---
16:15:13
Court Live Text Broadcast Channel
\#Sha Tin Magistrates' Courts, Courtroom 1
\#Acting Chief Magistrate 鄧少雄
\#1113 Sha Tin \#Mention
D1 蘇 (32)
D2 麥 (18)
D3 黃 (19)
D4 潘 (21)
D5 洪 (19)
D6 范 (29)
D7 林 (27)
D8 鄭 (26)
Charge 1 (against all defendants): causing an obstruction in a public place
Charge 2 (against D4): possession of an instrument fit for unlawful purposes with intent to use it for an unlawful purpose (namely a spanner)
Case: at about 10 am on 13 November 2019, during the period when netizens were calling for the "triple strike", the 8 defendants were in 2 light goods vehicles, having earlier placed refuse carts, a bus stop sign, a minibus stop sign and the 2 light goods vehicles outside the Science Park. They were then intercepted by police, and a spanner was found on the fourth defendant.
⭕⭕ Bail granted ⭕⭕
Bail conditions for each defendant:
Cash bail: $3,000
Not to leave Hong Kong
Reside at the reported address
Surrender all travel documents
Report to a police station once a week
(D4: three times a week)
**Complaints**
D1 complains:
1. On arrest, he was pressed to the ground by officers, grazing the left side of his face
D2 complains:
1. Before the record of interview, an officer claimed: give an explanation quickly and you can go home sooner, with no consequences
2. The defendant denied the offence, but the officer claimed he could not deny it and had to admit it
3. The statement contained answers the defendant never gave, yet officers made him sign it
4. He was not allowed to see a lawyer
D3 complains:
1. He was struck on the chin with a baton by an officer
Adjourned to 8 July 2020, 1430, Sha Tin Magistrates' Courts, Courtroom 1
---
16:25:18
\#District Court, Courtroom 38
\#Judge 郭啟安 \#Trial \[2/10\]
\#0728 Sheung Wan \#Riot
[Previous part here](https://t.me/youarenotalonehk_live/4994)
Judge Kwok asks whether getting a seat today was difficult, given how many people are following this case.
Defence counsel (Poon) says, "We queued for seats a long time just now; the queue stretched out into the park." Judge Kwok is astonished: "What? That bad?!" Separately, the Family Court registry reopens tomorrow, which takes away the morning session, so he suggests starting early and finishing early.
1624 Court adjourns
Adjourned to 1415 tomorrow at the District Court, Courtroom 38/39; another witness will be called, and PW2 is to continue giving evidence.
[Next day here](https://t.me/youarenotalonehk_live/5050)
——
(to be added)
---
16:26:35
\#Kowloon City Magistrates' Courts, Courtroom 7
\#Magistrate 黃國輝
\#Trial
何 (49) \#Trial (\#0805 Sham Shui Po)
Charge:
Obstructing a police officer in the due execution of his duty
Offences Against the Person Ordinance
(Note: the prosecution earlier amended the charge, originally brought under the Summary Offences Ordinance, to one under the Offences Against the Person Ordinance. The allegations are unchanged, but the former carries a maximum fine of $1,000 and six months' imprisonment on conviction, while the latter carries up to two years' imprisonment.)
Accused of obstructing Sergeant 馮耀彤 in the execution of his duty at the junction of Yen Chow Street and Cheung Sha Wan Road, Sham Shui Po, on 5 August.
Defence cross-examines witness 馮耀彤 (details to follow)
The magistrate rules there is a case to answer; asked whether he wishes to answer the charge, the defendant elects not to testify, and no defence witnesses are to be called
Prosecution closing submissions: noting that the defence's cross-examination focused on whether the officer was in the due execution of his duty, the prosecution cites authority that it need not prove the defendant knew the officer was acting in due execution of duty; this is not an element of the mens rea, and the motive or thinking behind the act is irrelevant.
Adjourned to tomorrow 1430, Kowloon City Magistrates' Courts, Courtroom 7, for the defence's closing submissions
---
18:22:35
Court Live Text Broadcast Channel
\#West Kowloon Magistrates' Courts, Courtroom 13
\#Magistrate 劉綺雲 \#Trial
祝 (21) \#0825 Tsuen Wan
[Continuing from the previous session](https://t.me/youarenotalonehk_live/4967), the trial resumes today
The parties do not dispute the contents of the transcript of the exhibited footage, but PW1 is asked to help identify who is speaking
Defence challenges:
1) In the clip you (PW1) twice said "you kicked me"; if you had been pushed over, you wouldn't say "kicked"... A: after falling, I instinctively assumed someone had kicked me
2) The footage shows that an officer beside you said, "I'll put it to you simply: you're now being arrested for unlawful assembly", yet you said you didn't know what charge the defendant was arrested on at the time? A: I was looking the other way and didn't notice what was happening... Pressed: you hear with your ears, not your eyes; something that loud and you didn't hear it? A: I wasn't paying attention
3) You had no interest at all in what charge the person who attacked/pushed you over was arrested on? A: the scene was chaotic; I could ask afterwards, so I didn't deal with it at the time
4) Do you agree that, as a police officer, letting a person suspected of a criminal offence know the charge before restricting their liberty is important? A: agree
5) Why doesn't your statement say he touched your helmet and made contact with your shield? A: no harm was caused at the time, so there was no need to mention it... Pressed: your oath is to tell the whole truth, not a personally filtered version; do you agree? A: agree... Pressed again: then can I put it that there was in fact no contact at all, only intimidation... A: disagree
6) The only physical contact with the defendant was that he touched you as he was being pulled into the police line by your colleagues; he was pulled forward, which is why you felt pulled backward? A: disagree
Morning session ends
\------------------
15:25 Court resumes
The prosecution's second witness, the officer who arrested the defendant, is now called
(PW2) Detective Sergeant 49115 郭俊褀 gives evidence
PW2's evidence-in-chief:
1) That night he saw a uniformed officer fall, then arrested the person in front of him (the defendant); on the police vehicle he told PTU 21147 what had happened and that the arrest was for unlawful assembly, but by then people (more than one, including 23902) were already saying the defendant had assaulted police and pushed 23902 over
2) The defendant was shouting at the time and charged at the police line, but he did not notice the movements of the defendant's hands or feet
3) When arresting the defendant, he already knew the defendant was suspected of assaulting police; he did not use that charge for the arrest because there was no concrete evidence the defendant pushed his colleague over, and the charge with the most concrete evidence was unlawful assembly
\-----------
Defence challenges:
1) Did you see 23902 fall? A: yes, but I'm not sure why he fell... Pressed: so what did you see? A: I saw the defendant shouting abuse and charging at 23902; I told him repeatedly to move back but he ignored me; I was watching the defendant, though my eyes left him at times... Pressed again: so in fact you didn't see the act of pushing? A: I could only see the defendant's head and upper body, and 23902 had his back to me; when he fell, the defendant was right behind 23902, so I wasn't sure at the time; I only learned of the "push" from 23902, I didn't witness it
2) In your statement made the day after the events, you said: "It was getting noisier and noisier, and I saw the defendant push PC 23902 to the ground; I immediately went forward to subdue the defendant..." You said you saw it; so that still doesn't count as concrete evidence? Still no assault charge at the time? A: the situation was chaotic and I couldn't see the push; the "push" in the statement meant he pushed with his body while 23902 was mid-fall; the statement wasn't written in enough detail... Pressed: and you still haven't testified in detail even now? A: I saw the defendant charge forward with his hands out in front toward 23902, which is why I say I didn't see the process of the push and don't know whether it was deliberate or accidental
3) Then why did you write it that way in your statement? Isn't the point of a statement to record all the facts as promptly as possible? A: yes; I explained it poorly, so there's a misunderstanding, but the meaning is the same... Defence: if that counts as a misunderstanding, then everyone would misunderstand it! The magistrate nods
4) While he was being handcuffed, was the defendant's pushing of 23902 mentioned? Is it simply not in the statement, or did it not happen? A: it wasn't written down... Pressed: why not? A: I write down what I consider useful; I didn't write it because other colleagues knew it better than I did
5) On your account, could you see who pushed 23902 over? A: unclear; when 23902 said the defendant pushed him over, he was certain the defendant had pushed 23902... Pressed: so that is your concrete evidence? A: yes
\----------------
16:45 Court adjourns; the trial resumes tomorrow at 10:00 in West Kowloon Magistrates' Courts, Courtroom 13; the defendant remains on bail on the original conditions in the meantime
[NEXT](https://t.me/youarenotalonehk_live/5040)
---
# Contributing
We use [GitHub issues](https://github.com/gojek/merlin/issues) to communicate development ideas. The simplest way to contribute to Merlin is to leave comments on our GitHub issues.
We follow a process of [lazy consensus](http://community.apache.org/committers/lazyConsensus.html). If you believe you know what the project needs then just start development. If you are unsure about which direction to take with development then please communicate your ideas through a GitHub issue before starting development.
Please [submit a PR](https://github.com/gojek/merlin/pulls) to the master branch of the Merlin repository once you are ready to submit your contribution. Once submitted, GitHub Actions will run a range of tests to verify the submission, after which community members will help to review the pull request. Pull request to Merlin (including a submission from project maintainers) requires review and approval from maintainers.
<a href="https://github.com/account/organizations/new?plan=business_plus" class="btn-mktg bt-large f4 mt-3 mr-3">Try risk-free for 14 days</a>
# Land Use partitioned by sub-national region and year (1992-2019)
[](https://zenodo.org/badge/latestdoi/364212542)
## What is this?
This archive reports the land use partitioned by sub-national administrative region and year, i.e. for each year a table reports the count of each land-use class per region.
Data is available as one CSV file per year in the folder "out-computedLUseStatsByRegionAndYear".
This archive also contains the set of scripts used to compute that partition (including input data download), which can easily be modified to retrieve a partition at a different geographical level.
## Warnings
- This data should only be used to compute the _relative_ ratio of each land-use class in each region. Due to several issues in projecting the data, the sum of the counts multiplied by the nominal area of each pixel (90000 sq.m) is **NOT** equal to the area of the region. However the shares of land uses should remain invariant to the projections and unbiased.
- By construction, land use classes are hierarchically organised. For example, to obtain land use in class "Tree cover, broadleaved, deciduous, closed to open (>15%)" (class 60), one has to sum the cells in classes 60+61+62. The same holds for classes 10, 70, 80, 120, 150 and 200 (see the snippet after this list).
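As an illustration of that aggregation (assuming the yearly CSVs expose one column per class ID, named `60`, `61`, `62`, and so on; check the actual header before relying on this), a possible pandas sketch:
```python
# Sketch: aggregate the hierarchical land-use classes from one yearly CSV.
# The file name and column names are assumptions; adapt to the real header.
import pandas as pd

df = pd.read_csv("out-computedLUseStatsByRegionAndYear/2019.csv")
df["class60_total"] = df["60"] + df["61"] + df["62"]  # broadleaved deciduous cover
# Relative share per region: counts divided by the row total over all classes.
share60 = df["class60_total"] / df.filter(regex=r"^\d+$").sum(axis=1)
```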
## Data Sources
### Land use
Land Cover Maps - v2.0.7 and v2.1.1 from the Climate Research Data Package (CRDP) http://maps.elie.ucl.ac.be/CCI/viewer/download.php (300m x 300m resolution)
- overview: https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-land-cover?tab=overview
- documentation: https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-land-cover?tab=doc
- manual download form: https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-land-cover?tab=form
- api instructions: https://cds.climate.copernicus.eu/api-how-to
- archive download: http://maps.elie.ucl.ac.be/CCI/viewer/download.php
| ClassID | ClassNames |
|---------|------------------------------------------------------------------------------------|
| 10 | Cropland, rainfed |
| 11 | Herbaceous cover |
| 12 | Tree or shrub cover |
| 20 | Cropland, irrigated or post-flooding |
| 30 | Mosaic cropland (>50%) / natural vegetation (tree, shrub, herbaceous cover) (<50%) |
| 40 | Mosaic natural vegetation (tree, shrub, herbaceous cover) (>50%) / cropland (<50%) |
| 50 | Tree cover, broadleaved, evergreen, closed to open (>15%) |
| 60 | Tree cover, broadleaved, deciduous, closed to open (>15%) |
| 61 | Tree cover, broadleaved, deciduous, closed (>40%) |
| 62 | Tree cover, broadleaved, deciduous, open (15-40%) |
| 70 | Tree cover, needleleaved, evergreen, closed to open (>15%) |
| 71 | Tree cover, needleleaved, evergreen, closed (>40%) |
| 72 | Tree cover, needleleaved, evergreen, open (15-40%) |
| 80 | Tree cover, needleleaved, deciduous, closed to open (>15%) |
| 81 | Tree cover, needleleaved, deciduous, closed (>40%) |
| 82 | Tree cover, needleleaved, deciduous, open (15-40%) |
| 90 | Tree cover, mixed leaf type (broadleaved and needleleaved) |
| 100 | Mosaic tree and shrub (>50%) / herbaceous cover (<50%) |
| 110 | Mosaic herbaceous cover (>50%) / tree and shrub (<50%) |
| 120 | Shrubland |
| 121 | Shrubland evergreen |
| 122 | Shrubland deciduous |
| 130 | Grassland |
| 140 | Lichens and mosses |
| 150 | Sparse vegetation (tree, shrub, herbaceous cover) (<15%) |
| 151 | Sparse tree (<15%) |
| 152 | Sparse shrub (<15%) |
| 153 | Sparse herbaceous cover (<15%) |
| 160 | Tree cover, flooded, fresh or brakish water |
| 170 | Tree cover, flooded, saline water |
| 180 | Shrub or herbaceous cover, flooded, fresh/saline/brakish water |
| 190 | Urban areas |
| 200 | Bare areas |
| 201 | Consolidated bare areas |
| 202 | Unconsolidated bare areas |
| 210 | Water bodies |
### Administrative borders
GADM (https://gadm.org)
## Instructions to compute the partition
- Install the CDS API key as described on https://cds.climate.copernicus.eu/api-how-to
- Run the `runLandUsePartition.sh` shell script in a Linux environment (tested on Ubuntu 20.04)
## Requirements
- Linux OS (tested on Ubuntu 20.04)
- Python 3 and python modules `cdsapi`, `rasterstats` and `geopandas` on path
- Julia 1.6 (the julia packages `PyCall`, `DataFrames`, `CSV` and `Tables` will be automatically downloaded and installed in a local environment by the script itself)
## Cite as
Antonello Lobianco. (2021). Land use partitioned by region (sub-national) and year (1992-2019) (Version v0.0.1) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.4736886
## Licence
The script and the partitioned data are Copyright Antonello Lobianco (2021) released under the MIT licence.
Input data belong to the authoring organisations.
## Acknowledgements
The development of this dataset at the _Bureau d'Economie Théorique et Appliquée_ (BETA, Nancy) was supported by the French National Research Agency through the [Laboratory of Excellence ARBRE](http://mycor.nancy.inra.fr/ARBRE/), a part of the “Investissements d'Avenir” Program (ANR 11 – LABX-0002-01).
[](hhttp://www.beta-umr7522.fr/)
| 69.45 | 361 | 0.528726 | eng_Latn | 0.939675 |
ed7d07f45436911af39cc1f85e22056a6313164e | 701 | md | Markdown | README.md | inexuscore/ng-password-validation | 16856e8f49b67fd28f79f879874a8f7eab7128af | [
"MIT"
] | null | null | null | README.md | inexuscore/ng-password-validation | 16856e8f49b67fd28f79f879874a8f7eab7128af | [
"MIT"
] | null | null | null | README.md | inexuscore/ng-password-validation | 16856e8f49b67fd28f79f879874a8f7eab7128af | [
"MIT"
] | null | null | null | # Password Validation with Angular Reactive Forms
This repository holds the code sample for my blog post [Password Validation with Reactive Forms](https://arminzia.com/blog/password-validation-with-angular-reactive-forms/).

The project was created using Angular CLI 11.2.5. There are only 2 dependencies:
- Bootstrap 4.6
- Bootstrap Icons 1.4
Clone the repository:
`https://github.com/inexuscore/ng-password-validation.git`
Install the packages:
`npm install`
And run the project:
`npm start`
*© 2021, [Armin Zia](https://arminzia.com)* | 36.894737 | 173 | 0.770328 | eng_Latn | 0.454748 |
ed7f5612b1fc8bb1ff126dd4188ca58a14b8d4d1 | 33,955 | md | Markdown | CN.md | fengyingdian/react-music-player | 9f1587a2f0224068c8a74819e8e5eaf1fc4ada9f | [
"MIT"
] | 1 | 2020-08-05T04:14:25.000Z | 2020-08-05T04:14:25.000Z | CN.md | fengyingdian/react-music-player | 9f1587a2f0224068c8a74819e8e5eaf1fc4ada9f | [
"MIT"
] | null | null | null | CN.md | fengyingdian/react-music-player | 9f1587a2f0224068c8a74819e8e5eaf1fc4ada9f | [
"MIT"
] | null | null | null | <p align="center">
<img alt="logo" src="https://github.com/lijinke666/react-music-player/blob/master/assetsImg/logo.png" width="100" max-width="100%">
</p>
<h1 align="center">
react-jinke-music-player
</h1>
<h4 align="center">
  :musical_note: Maybe the best-looking and easiest-to-use responsive React HTML5 audio player component : )
</h4>
<p align="center">
<a href="https://www.npmjs.com/package/react-jinke-music-player" title="npm">
<img src="https://img.shields.io/npm/dm/react-jinke-music-player.svg?style=flat-square" alt="npm">
</a>
<a href="https://www.npmjs.com/package/react-jinke-music-player" title="npm">
<img src="https://img.shields.io/npm/l/react-jinke-music-player.svg?style=flat-square" alt="npm">
</a>
<a href="https://github.com/lijinke666/react-music-player/actions">
<img src="https://github.com/lijinke666/react-music-player/workflows/Node%20CI/badge.svg" />
</a>
<a href="https://badge.fury.io/js/react-jinke-music-playerr" title="npm">
<img src="https://img.shields.io/npm/v/react-jinke-music-player.svg?style=flat-square" alt="npm version">
</a>
<a href="https://codecov.io/gh/lijinke666/react-music-player">
<img src="https://codecov.io/gh/lijinke666/react-music-player/branch/master/graph/badge.svg" />
</a>
<a href="https://app.netlify.com/sites/react-jinke-music-player/deploys" title="Netlify Status">
<img src="https://api.netlify.com/api/v1/badges/2a5d8639-9d2a-46ee-a504-10b7846a57e4/deploy-status" alt="Coverage Status">
</a>
</p>
<p align="center">
<a href="https://github.com/lijinke666/react-music-player/blob/master/README.md">
English Doc
</a>
</p>
## Installation
Using `yarn`:
```
yarn add react-jinke-music-player
```
Or with `npm`:
```
npm install react-jinke-music-player --save
```
## Preview
> Mini mode <br/>
> 
> Light theme <br/>

> Dark theme <br/>

> Mobile <br/>

## Examples
> Online demo : [https://lijinke666.github.io/react-music-player/](https://lijinke666.github.io/react-music-player/)
> Real-world usage : [李金珂的小屋](http://www.lijinke.cn/)
> Local demo : [http://localhost:8081/](http://localhost:8081/)
[Example source code](https://github.com/lijinke666/react-music-player/blob/master/example/example.js)
## Usage
```jsx
import React from "react";
import ReactDOM from "react-dom";
import ReactJkMusicPlayer from "react-jinke-music-player";
import "react-jinke-music-player/assets/index.css";
ReactDOM.render(
<ReactJkMusicPlayer {...options} />,
document.getElementById("root")
);
```
## API
> The Chinese documentation may be incomplete; please refer to the English version, as maintaining two versions is too tiring.
| Property | Type | Default | Description |
| -------- | ---- | ------- | ----------- |
| className | `string` | `-` | Additional class name |
| audioLists | `object[]` | `-` | Play list : {name: "YOUR_AUDIO_NAME",singer: "YOUR_AUDIO_SINGER_NAME",cover: "YOUR_AUDIO_COVER",musicSrc: "YOUR_AUDIO_SRC"} |
| theme | `string` | `dark` | Player theme, either 'light' or 'dark' |
| defaultPosition | `object` | `{top:0,left:0}` | Initial position of the player in mini mode, e.g. {top:0,left:0} or {top:'20%',left:"20%"} |
| playModeText | `object` | `{order: "order",orderLoop: "orderLoop",singleLoop: "singleLoop",shufflePlay:"shufflePlay"}` | Text displayed for each play mode |
| playModeShowTime | `number` | `600` | How long (in milliseconds) the play-mode tip is shown when switching play modes |
| bounds | `object`,`string` | `body` | Drag bounds; either a selector string such as `body`, or specific values `left,top,right,bottom` |
| preload | `boolean`,`string` | `false` | Whether to load the audio immediately after the page loads. Options: `auto\|metadata\|none` `true\|false`; if `preload=true`, preload="auto" is set by default |
| remember | `boolean` | `false` | Whether to remember the current playback state (volume, play state, etc.) and resume it on the next visit |
| glassBg | `boolean` | `false` | Whether to show a frosted-glass background effect |
| remove | `boolean` | `true` | Whether tracks can be removed from the play list |
| defaultPlayIndex | `number` | `0` | Index of the track played by default; values above or below the playable range fall back to the maximum or minimum playable index |
| openText | `string \| ReactNode` | `open` | Open text of the player panel in mini mode |
| closeText | `string \| ReactNode` | `close` | Close text of the player panel in mini mode |
| panelTitle | `string \| ReactNode` | `PlayList` | Title of the play list panel |
| notContentText | `string \| ReactNode` | `no music` | Text displayed when the play list is empty |
| checkedText | `string \| ReactNode` | `-` | Checked text of the theme switch |
| unCheckedText | `string \| ReactNode` | `-` | Unchecked text of the theme switch |
| defaultPlayMode | `string` | `order` | Default play mode, one of `order`,`orderLoop`,`singleLoop`,`shufflePlay` |
| mode | `string` | `mini` | Default player mode, `mini` or `full` |
| once | `boolean` | `false` | By default `audioPlay` fires every time playback resumes after a pause; set this to `true` to fire `audioPlay` only once, when the audio first starts playing |
| autoPlay | `boolean` | `true` | Whether to play the audio automatically after it loads |
| toggleMode | `boolean` | `true` | Whether the player can switch from mini to full mode, or from full to mini |
| drag | `boolean` | `true` | Whether the player can be dragged in mini mode |
| seeked | `boolean` | `true` | Whether the progress bar can be dragged or clicked to seek |
| showMiniModeCover | `boolean` | `true` | Whether to show the cover in mini mode |
| showMiniProcessBar | `boolean` | `false` | Whether to show the circular progress bar in mini mode |
| showProgressLoadBar | `boolean` | `true` | Show the audio load progress bar |
| showPlay | `boolean` | `true` | Whether to show the play button |
| showReload | `boolean` | `true` | Whether to show the reload button |
| showDownload | `boolean` | `true` | Whether to show the download button |
| showPlayMode | `boolean` | `true` | Whether to show the play-mode switch button |
| showThemeSwitch | `boolean` | `true` | Whether to show the theme switch |
| extendsContent | `array \| ReactNode \| string \| boolean` | `-` | Custom extra content when the default buttons are not enough, e.g. `<><button>button1</button> <button>button2</button></>` |
| controllerTitle | `string \| ReactNode` | `<FaHeadphones/>` | Text displayed on the controller cover in mini mode |
| defaultVolume | `number` | `1` | Initial volume of the player, range `0`-`1` |
| loadAudioErrorPlayNext | `number` | `true` | Whether to try playing the next track when the current audio fails to load |
| onAudioDownload | `function(audioInfo)` | `-` | Hook fired when the audio is downloaded |
| onAudioPlay | `function(audioInfo)` | `-` | Hook fired when the audio plays |
| onAudioPause | `function(audioInfo)` | `-` | Hook fired when the audio pauses |
| onAudioSeeked | `function(audioInfo)` | `-` | Hook fired when the progress bar is clicked or dragged to change the playback position |
| onAudioVolumeChange | `function(volume)` | `-` | Hook fired when the volume changes, range `0.0`-`1.0` |
| onAudioEnded | `function(currentPlayId,audioLists,audioInfo)` | `-` | Hook fired when the current audio ends |
| onAudioAbort | `function(currentPlayId,audioLists,audioInfo)` | `-` | Hook fired when playback of the current audio is aborted |
| onAudioProgress | `function(audioInfo)` | `-` | Hook fired while the audio is playing |
| onAudioLoadError | `function(errMsg,currentPlayId,audioLists,audioInfo)` | `-` | Hook fired when the audio fails to load |
| onAudioReload | `function(audioInfo)` | `-` | Hook fired when the audio is reloaded |
| onAudioListsChange | `function(currentPlayId,audioLists,audioInfo)` | `-` | Hook fired when the play list changes |
| onAudioPlayTrackChange | `function(currentPlayId,audioLists,audioInfo)` | `-` | Hook fired when the currently playing track changes |
| onAudioPlayModeChange | `function(playMode)` | `-` | Hook fired when the play mode changes |
| onAudioListsPanelChange | `function(panelVisible)` | `-` | Hook fired when the play list panel opens or closes |
| onThemeChange | `function(theme)` | `-` | Hook fired after the theme is switched |
| onModeChange | `function(mode)` | `-` | Hook fired when the mode changes |
| onAudioListsDragEnd | `function(fromIndex,toIndex)` | `-` | Hook fired after a track is dragged in the play list |
| onAudioLyricChange | `function(lineNum, currentLyric)` | `-` | Hook fired when the lyric of the currently playing track changes |
| getContainer | `() => HTMLElement` \| ` Selectors ` | `document.body` | Mount node of the player; defaults to `body` |
| getAudioInstance | `(instance: HTMLAudioElement) => void` | `-` | Get the raw audio instance and use all of its APIs to do whatever you want |
| autoHiddenCover | `boolean` | `false` | Whether to skip rendering the cover node when the current track has no cover image |
| onBeforeAudioDownload | `(audioInfo: ReactJkMusicPlayerAudioInfo) => Promise<TransformedDownloadAudioInfo>` | `-` | Transform the file name, path, etc. of the downloaded audio |
| clearPriorAudioLists | `boolean` | `false` | Whether to clear the previous list when the audio list is updated |
| autoPlayInitLoadPlayList | `boolean` | `false` | Whether to play automatically after the audio list is updated |
| spaceBar | `boolean` | `false` | Whether the space bar can toggle play and pause |
| showDestroy | `boolean` | `false` | Whether to show the destroy button |
| onBeforeDestroy | `function(currentPlayId,audioLists,audioInfo)` | `-` | Handler called before the player is destroyed |
| onDestroyed | `function(currentPlayId,audioLists,audioInfo)` | `-` | Callback after the player is destroyed |
| customDownloader | `function(downloadInfo: TransformedDownloadAudioInfo)` | `-` | Custom downloader |
| audioTitle | `string \| (audioInfo: ReactJkMusicPlayerAudioInfo) => string` | `{name} - {singer}` | Custom audio title; defaults to track name - singer |
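A minimal `options` object tying several of the props above together, matching the Usage snippet earlier; all values here are illustrative, not defaults:

```jsx
const options = {
  // each track needs at least a name and a musicSrc
  audioLists: [
    {
      name: 'Track name',
      singer: 'Singer name',
      cover: 'cover.jpg', // illustrative path
      musicSrc: 'music.mp3', // illustrative path
    },
  ],
  theme: 'dark',
  defaultPlayMode: 'orderLoop',
  defaultVolume: 0.8,
  remember: true,
  glassBg: true,
}
```

Spread it onto the component as shown in the Usage section: `<ReactJkMusicPlayer {...options} />`.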
## Custom control buttons

If the player's built-in controls do not meet your needs, you can implement your own control UI; it stays in sync with the corresponding player state and triggers the hook functions. Supported features:

- `Play`
- `Pause`
- `Reload`
- `Change the current playback position`
- `Change the playback rate`
- `Change the volume`
- `Destroy the player`
```jsx
class App extends React.Component{
  constructor(props) {
    // call super before touching `this` (fixes a bug in the original snippet)
    super(props)
    this.audioInstance = null
  }
  render() {
    return (
      <>
        <ReactJkMusicPlayer getAudioInstance={instance => this.audioInstance = instance}/>
        <button onClick={() => this.audioInstance.play()}>Play</button>
        <button onClick={() => this.audioInstance.pause()}>Pause</button>
        <button onClick={() => this.audioInstance.load()}>Reload</button>
        <button onClick={() => (this.audioInstance.currentTime = 40)}>
          Change the current playback position
        </button>
        <button onClick={() => (this.audioInstance.playbackRate = 2)}>
          Change the playback rate
        </button>
        <button onClick={() => (this.audioInstance.volume = 0.2)}>Change the volume</button>
        <button onClick={() => this.audioInstance.destroy()}>Destroy the player</button>
      </>
    )
  }
}
```
## Frosted glass effect
```jsx
<ReactJkMusicPlayer glassBg/>
```


## Custom downloader
> eg. <https://www.npmjs.com/package/file-saver>
```jsx
const customDownloader = (downloadInfo) => {
const link = document.createElement('a')
link.href = downloadInfo.src // a.mp3
link.download = downloadInfo.filename || 'test'
document.body.appendChild(link)
link.click()
}
<ReactJkMusicPlayer audioLists={[{src: "a.mp3"}]} customDownloader={customDownloader}/>
// use together with onBeforeAudioDownload
const onBeforeAudioDownload = () => {
return Promise.resolve({
src: '1.mp3',
})
}
const customDownloader = (downloadInfo) => {
console.log(downloadInfo.src) // 1.mp3
}
<ReactJkMusicPlayer customDownloader={customDownloader}/>
```
## Close / destroy the player
```jsx
const onBeforeDestroy = (currentPlayId, audioLists, audioInfo) => {
return new Promise((resolve, reject) => {
    // return a Promise; you can run custom checks here
    if (window.confirm('Close the player?')) {
      // call resolve and the player is closed/destroyed as normal
      resolve()
    } else {
      // call reject and this operation has no effect
      reject()
}
})
}
const onDestroyed = (currentPlayId, audioLists, audioInfo) => {
console.log('onDestroyed:', currentPlayId, audioLists, audioInfo)
}
ReactDOM.render(
<ReactJkMusicPlayer
showDestroy
onBeforeDestroy={onBeforeDestroy}
onDestroyed={onDestroyed}
  />,
  document.getElementById('root')
)
```
## Development
```
git clone https://github.com/lijinke666/react-music-player.git
yarn | npm install
yarn start | npm start
# then visit http://localhost:8081/
```
## Unit tests
```
npm run test
```
## Audio list data structure
> Like This
```ts
interface ReactJkMusicPlayerAudioList {
name: string | React.ReactNode,
singer?: string | React.ReactNode,
cover: string,
  musicSrc: string | (() => Promise<string>),
lyric?: string
[key: string]: any
}
```
## Returned audio info
> Like This
```ts
interface ReactJkMusicPlayerAudioInfo {
cover: string,
currentTime: number,
duration: number,
ended: boolean,
musicSrc: string,
muted: boolean,
name: string,
networkState: number,
paused: boolean,
played: any,
readyState: number,
startDate: any
volume: number,
lyric: string
[key: string]: any
}
```
## Props
```ts
export interface ReactJkMusicPlayerProps {
audioLists: Array<ReactJkMusicPlayerAudioList>
theme?: ReactJkMusicPlayerTheme
mode?: ReactJkMusicPlayerMode
defaultPlayMode?: ReactJkMusicPlayerPlayMode
drag?: boolean
seeked?: boolean
autoPlay?: boolean
playModeText?: {
order: string | React.ReactNode
orderLoop: string | React.ReactNode
singleLoop: string | React.ReactNode
shufflePlay: string | React.ReactNode
}
panelTitle?: string | React.ReactNode
closeText?: string | React.ReactNode
openText?: string | React.ReactNode
notContentText?: string | React.ReactNode
controllerTitle?: string | React.ReactNode
defaultPosition?: {
top: number | string
left: number | string
right: number | string
bottom: number | string
}
onAudioPlay?: (audioInfo: ReactJkMusicPlayerAudioInfo) => void
onAudioPause?: (audioInfo: ReactJkMusicPlayerAudioInfo) => void
onAudioEnded?: (
currentPlayId: string,
audioLists: Array<ReactJkMusicPlayerAudioList>,
audioInfo: ReactJkMusicPlayerAudioInfo
) => void
onAudioAbort?: (
currentPlayId: string,
audioLists: Array<ReactJkMusicPlayerAudioList>,
audioInfo: ReactJkMusicPlayerAudioInfo
) => void
onAudioVolumeChange?: (volume: number) => void
onAudioLoadError?: (
errMsg: any,
currentPlayId: string,
audioLists: Array<ReactJkMusicPlayerAudioList>,
audioInfo: ReactJkMusicPlayerAudioInfo
) => void
onAudioProgress?: (audioInfo: ReactJkMusicPlayerAudioInfo) => void
onAudioSeeked?: (audioInfo: ReactJkMusicPlayerAudioInfo) => void
onAudioDownload?: (
audioInfo: ReactJkMusicPlayerAudioInfo,
transformedDownloadAudioInfo: TransformedDownloadAudioInfo
) => void
onAudioReload?: (audioInfo: ReactJkMusicPlayerAudioInfo) => void
onThemeChange?: (theme: ReactJkMusicPlayerTheme) => void
onAudioListsChange?: (
currentPlayId: string,
audioLists: Array<ReactJkMusicPlayerAudioList>,
audioInfo: ReactJkMusicPlayerAudioInfo
) => void
onPlayModeChange?: (playMode: ReactJkMusicPlayerPlayMode) => void
onModeChange?: (mode: ReactJkMusicPlayerMode) => void
onAudioListsPanelChange?: (panelVisible: boolean) => void
onAudioPlayTrackChange?: (fromIndex: number, endIndex: number) => void
onAudioListsDragEnd?: (
currentPlayId: string,
audioLists: Array<ReactJkMusicPlayerAudioList>,
audioInfo: ReactJkMusicPlayerAudioInfo
) => void
showDownload?: boolean
showPlay?: boolean
showReload?: boolean
showPlayMode?: boolean
showThemeSwitch?: boolean
showMiniModeCover?: boolean
showDestroy?: boolean
toggleMode?: boolean
once?: boolean
extendsContent?:
| (Array<React.ReactNode | string>)
| React.ReactNode
| boolean
| string
checkedText?: string | React.ReactNode
unCheckedText?: string | React.ReactNode
defaultVolume?: number
playModeShowTime?: number
bounds?: string | React.ReactNode
showMiniProcessBar?: boolean
loadAudioErrorPlayNext?: boolean
preload?: boolean | 'auto' | 'metadata' | 'none'
glassBg?: boolean
remember?: boolean
remove?: boolean
defaultPlayIndex?: number
playIndex?: number
lyricClassName?: string
emptyLyricText?: string | React.ReactNode
showLyric?: boolean
getContainer?: () => HTMLElement
getAudioInstance?: (instance: HTMLAudioElement) => void
autoHiddenCover?: boolean
onBeforeAudioDownload?: (
audioInfo: ReactJkMusicPlayerAudioInfo
) => Promise<TransformedDownloadAudioInfo>
clearPriorAudioLists?: boolean
autoPlayInitLoadPlayList?: boolean
spaceBar?: boolean
onBeforeDestroy?: (
currentPlayId: string,
audioLists: Array<ReactJkMusicPlayerAudioList>,
audioInfo: ReactJkMusicPlayerAudioInfo
) => Promise<void>
onDestroyed?: (
currentPlayId: string,
audioLists: Array<ReactJkMusicPlayerAudioList>,
audioInfo: ReactJkMusicPlayerAudioInfo
) => Promise<void>
customDownloader?: (downloadAudioInfo: TransformedDownloadAudioInfo) => void
audioTitle?: ((audioInfo: ReactJkMusicPlayerAudioInfo) => string) | string
}
export interface TransformedDownloadAudioInfo {
src: string
filename?: string
mimeType?: string
}
export interface ReactJkMusicPlayerInstance extends HTMLAudioElement {
destroy: () => void
}
```
## License
[MIT](https://github.com/lijinke666/react-music-player/blob/master/LICENCE)
| 72.398721 | 353 | 0.319894 | yue_Hant | 0.546133 |
ed7ffbddf35741a3cc6f5f8f14b3ccb9f74dd5ff | 2,428 | md | Markdown | articles/machine-learning/team-data-science-process/walkthroughs-azure-data-lake.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/team-data-science-process/walkthroughs-azure-data-lake.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/team-data-science-process/walkthroughs-azure-data-lake.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Analiza przy użyciu języka U-SQL w usłudze Azure Data Lake — proces nauki o danych zespołowych
description: Przykłady, które przechodzą przez korzystanie z języka U-SQL w usłudze Azure Data Lake do analizy predykcyjnej.
services: machine-learning
author: marktab
manager: marktab
editor: marktab
ms.service: machine-learning
ms.subservice: team-data-science-process
ms.topic: article
ms.date: 01/10/2020
ms.author: tdsp
ms.custom: seodec18, previous-author=deguhath, previous-ms.author=deguhath
ms.openlocfilehash: 2e5eb0acd2a94f7726fbacefbe6e1022c8cebae2
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/27/2020
ms.locfileid: "75864183"
---
# <a name="azure-data-lake-data-science-walkthroughs-using-u-sql"></a>Azure Data Lake data science walkthroughs using U-SQL

These walkthroughs use U-SQL with Azure Data Lake to do predictive analytics. They follow the steps outlined in the Team Data Science Process. For an overview of the Team Data Science Process, see [Data Science Process](overview.md). For an introduction to Azure Data Lake, see [Overview of Azure Data Lake Store](../../data-lake-store/data-lake-store-overview.md).

Additional data science walkthroughs that execute the Team Data Science Process are grouped by the **platform** that they use. See [Walkthroughs executing the Team Data Science Process](walkthroughs.md) for an itemization of these examples.

## <a name="predict-taxi-tips-using-u-sql-with-azure-data-lake"></a>Predict taxi tips using U-SQL with Azure Data Lake

The [Use Azure Data Lake for data science](data-lake-walkthrough.md) walkthrough shows how to use Azure Data Lake to do data exploration and binary classification tasks. The data is a sample of the NYC taxi dataset. The task is to predict whether a tip is paid by a customer.

## <a name="next-steps"></a>Next steps

For an overview of the Team Data Science Process, see [Team Data Science Process overview](overview.md).

For a discussion of the Team Data Science Process lifecycle, see [The Team Data Science Process lifecycle](lifecycle.md). This lifecycle outlines the steps that projects usually follow when they are executed.
| 63.894737 | 404 | 0.803954 | pol_Latn | 0.99986 |
ed80fadfc1dc9511aa9e10e3f2029bbb048e7663 | 642 | md | Markdown | CHANGELOG.md | sezzle/shopware5 | bbdef04224bf62192b4f22475c55a40ceddcf037 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | sezzle/shopware5 | bbdef04224bf62192b4f22475c55a40ceddcf037 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | sezzle/shopware5 | bbdef04224bf62192b4f22475c55a40ceddcf037 | [
"Apache-2.0"
] | null | null | null | <div align="center">
<a href="https://sezzle.com">
<img src="https://media.sezzle.com/branding/2.0/Sezzle_Logo_FullColor.svg" width="300px" alt="Sezzle" />
</a>
</div>
# Sezzle Plugin Changelog
## Version 1.0.0
_Tue 10 Nov 2020_
### Supported Editions & Versions
Tested and verified in clean installations of Shopware 5:
- Shopware 5 version 5.6.8 and later.
### Highlights
- Sezzle Payment Integration.
- Supported payment actions:
- Capture
- Refund
- Release
- Tokenization of Customer.
- Availability of delayed capture as well as instant capture.
- Redirectional Checkout.
- Logging of Sezzle Actions.
| 21.4 | 112 | 0.700935 | eng_Latn | 0.685103 |
ed812d77b9222d64f46b787618dbf8191b5c3a9f | 1,191 | md | Markdown | docs/randomsequence.md | andy2046/gopie | d738759c7c0bc6c8958adb6a950e39dd8952cb7e | [
"Apache-2.0"
] | 32 | 2018-12-05T12:36:23.000Z | 2021-06-18T23:52:14.000Z | docs/randomsequence.md | andy2046/gopie | d738759c7c0bc6c8958adb6a950e39dd8952cb7e | [
"Apache-2.0"
] | null | null | null | docs/randomsequence.md | andy2046/gopie | d738759c7c0bc6c8958adb6a950e39dd8952cb7e | [
"Apache-2.0"
] | 4 | 2019-04-12T07:09:55.000Z | 2020-12-21T03:30:38.000Z |
# randomsequence
`import "github.com/andy2046/gopie/pkg/randomsequence"`
* [Overview](#pkg-overview)
* [Index](#pkg-index)
## <a name="pkg-overview">Overview</a>
Package randomsequence implements a quadratic-residue-based random sequence.
## <a name="pkg-index">Index</a>
* [type Random](#Random)
* [func New(seedBase, seedOffset uint32) *Random](#New)
* [func (r *Random) Next() uint32](#Random.Next)
#### <a name="pkg-files">Package files</a>
[randomseq.go](/src/github.com/andy2046/gopie/pkg/randomsequence/randomseq.go)
## <a name="Random">type</a> [Random](/src/target/randomseq.go?s=232:284#L7)
``` go
type Random struct {
// contains filtered or unexported fields
}
```
Random represents the random sequence.
### <a name="New">func</a> [New](/src/target/randomseq.go?s=413:458#L18)
``` go
func New(seedBase, seedOffset uint32) *Random
```
New creates a random sequence with the seeds provided.
### <a name="Random.Next">func</a> (\*Random) [Next](/src/target/randomseq.go?s=624:654#L26)
``` go
func (r *Random) Next() uint32
```
Next returns the next random number.
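A minimal usage sketch, assuming the import path shown at the top of this page; the seed values are illustrative:

``` go
package main

import (
	"fmt"

	"github.com/andy2046/gopie/pkg/randomsequence"
)

func main() {
	// create the sequence with two illustrative seeds
	r := randomsequence.New(42, 7)
	// print the next few values of the quadratic-residue based sequence
	for i := 0; i < 5; i++ {
		fmt.Println(r.Next())
	}
}
```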
- - -
Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)
| 17.514706 | 92 | 0.68178 | yue_Hant | 0.292069 |
ed81df80eb6dd744f20c39fa21505767bc1e255f | 13,960 | md | Markdown | API.md | dominicfraser/differencify | 84489a38ca44a22e3c1e574f7a12fc88f6975e3f | [
"MIT"
] | null | null | null | API.md | dominicfraser/differencify | 84489a38ca44a22e3c1e574f7a12fc88f6975e3f | [
"MIT"
] | null | null | null | API.md | dominicfraser/differencify | 84489a38ca44a22e3c1e574f7a12fc88f6975e3f | [
"MIT"
] | null | null | null | ## Differencify specific methods
|Method|Arguments|description|
|------|---------|-----------|
|`launchBrowser`/`launch`|`Object` [puppeteer.launch options](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteerlaunchoptions)|Launches a browser instance|
|`connectBrowser`/`connect`|`Object` [puppeteer.connect options](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteerconnectoptions)|Attaches to an existing browser instance|
|`init`|[TestOptions](https://github.com/NimaSoroush/differencify#testoptions)|Configure and prepare differencify to operate based on `TestOptions`|
|`cleanup`|no argument|Closes browser instance if it is not closed already|
## Additional methods on top of Puppeteer's Page class
|Method|Arguments|description|
|------|---------|-----------|
|`toMatchSnapshot`|`image` or <br /> `image, callback` or <br />` callback` | Pass an image object to compare it to the snapshot. Optionally, pass a callback to receive [details from the comparison](#detailed-result-information). Alternatively, just pass a callback to receive details of the snapshot currently in the chain. |
|`result`|`Object`|A function that returns the response object of the previous step when in chained mode|
|`launch`|`Object` [puppeteer.launch options](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteerlaunchoptions)|launches new browser and returns browser object|
|`connect`|`Object` [puppeteer.connect options](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteerconnectoptions)|Attaches to an existing browser instance and returns browser object|
|`freezeImage`|`string`|Selector of an `<img>` tag containing an animated image to be frozen before taking the screenshot|
## Puppeteer methods
Differencify matches [Puppeteer](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md)'s API completely. Here are some examples of how to use it.
## Simple
```js
(async () => {
await differencify.launchBrowser();
await differencify
.init()
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://github.com/NimaSoroush/differencify')
.waitFor(1000)
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
await differencify.cleanup();
})();
```
In this example, differencify will launch a browser instance and continue with the remaining steps
## Simple unchained
```js
(async () => {
await differencify.launchBrowser();
const target = differencify.init({ testName: 'Differencify simple unchained', chain: false });
const page = await target.newPage();
await page.goto('https://github.com/NimaSoroush/differencify');
await page.setViewport({ width: 1600, height: 1200 });
await page.waitFor(1000);
const image = await page.screenshot();
const result = await target.toMatchSnapshot(image);
await page.close();
console.log(result) // True or False
await differencify.cleanup();
})();
```
In this example, differencify will launch a browser instance with unchained steps. `differencify.init().newPage()` will return a `puppeteer` page instance with all supported methods of that [page](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#class-page)
## Launch new browser per test
```js
(async () => {
await differencify
.init()
.launch()
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://github.com/NimaSoroush/differencify')
.waitFor(1000)
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
})();
```
In this example, differencify will launch a browser instance and continue with the remaining steps; on `close()` it will close both the page and the browser
## Launch new browser per test when unchained
```js
(async () => {
const target = differencify.init({ testName: 'Differencify simple unchained', chain: false });
await target.launch();
const page = await target.newPage();
await page.goto('https://github.com/NimaSoroush/differencify');
await page.setViewport({ width: 1600, height: 1200 });
await page.waitFor(1000);
const image = await page.screenshot();
const result = await target.toMatchSnapshot(image);
await page.close();
await target.close();
console.log(result) // True or False
})();
```
In this example, differencify will launch a browser instance with unchained steps. `differencify.init().newPage()` will return a `puppeteer` page instance with all supported methods of that [page](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#class-page)
## Share browser
```js
(async () => {
await differencify.launchBrowser();
await differencify
.init({ testName: 'test1' })
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://github.com/NimaSoroush/differencify')
    .waitFor(3000)
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
await differencify
.init({ testName: 'test2' })
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://github.com/NimaSoroush/differencify')
    .waitFor(3000)
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
await differencify.cleanup();
})();
```
In this example, differencify will launch a browser instance and share that same browser instance with all following tests; on `cleanup()` it will close the browser
## Using result function
```js
(async () => {
await differencify
.init()
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://github.com/NimaSoroush/differencify')
.title()
    .result((title) => {
      console.log(title)
})
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
})();
```
In this example, calling the `result` function returns the result of the previous step as an object.
## Detailed Result Information
For programmatic use cases where more information is required than simply whether or not
a test passed, a callback function may be passed to `toMatchSnapshot` which will be invoked
after the test and passed additional details.
```js
(async () => {
await differencify
.init()
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://github.com/NimaSoroush/differencify')
.screenshot()
.toMatchSnapshot((resultDetail) => {
console.log(resultDetail);
/*
Example output:
{
testConfig: {
chain: false,
testNameProvided: true,
testName: 'TestName',
'testId': 2,
'isUpdate': false,
'isJest': false,
'newWindow': true
},
testResult: {
diffPath: '/parent/__image_snapshots__/__differencified_output__/test.differencified.png',
matched: false,
diffPercent: 0.02,
distance: 0,
snapshotPath: '/parent/__image_snapshots__/test.snap.png',
}
}
*/
})
.close()
.end();
})();
```
Similarly, the callback may be passed as a second argument when unchained:
```js
(async () => {
const target = differencify.init({ chain: false });
await target.launch();
const page = await target.newPage();
await page.goto('https://github.com/NimaSoroush/differencify');
await page.setViewport({ width: 1600, height: 1200 });
await page.waitFor(1000);
const image = await page.screenshot();
await target.toMatchSnapshot(image, (resultDetail) => {
console.log(resultDetail);
/*
Example output:
{
testConfig: {
chain: false,
testNameProvided: true,
testName: 'TestName',
'testId': 2,
'isUpdate': false,
'isJest': false,
'newWindow': true
},
testResult: {
diffPath: '/parent/__image_snapshots__/__differencified_output__/test.differencified.png',
matched: false,
diffPercent: 0.02,
distance: 0,
snapshotPath: '/parent/__image_snapshots__/test.snap.png',
}
}
*/
});
await page.close();
await target.close();
})();
```
## Context switching when chained
```js
(async () => {
await differencify
.init()
.newPage()
.tracing
.start({ path: 'trace.json' })
.page
.setViewport({ width: 1600, height: 1200 })
.goto('https://nimasoroush.github.io/differencify/')
.waitFor(1000)
.keyboard
.press('Space')
.tracing
.stop()
.page
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
})();
```
In this example, differencify will launch a browser instance, open a new tab, start tracing, go to the url, press a key, stop tracing and finally close the tab. All steps run in the `page` context unless you switch to one of the following contexts:
```
'page',
'keyboard',
'mouse',
'touchscreen',
'tracing',
```
If you do so, you need to come back to the `page` context by calling it.
## Calling Puppeteer's specific functions when chained
```js
(async () => {
await differencify
.init()
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://nimasoroush.github.io/differencify/')
.on('console', msg => {
for (let i = 0; i < msg.args.length; ++i) {
console.log(`${i}: ${msg.args[i]}`); // JSHandle:hello
}
});
.evaluate(() => console.log('hello', 5, { foo: 'bar' }))
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
})();
```
In this example, differencify will call Puppeteer's `on()` method asynchronously. The same logic applies to other Puppeteer-specific methods like:
```js
on('dialog', async dialog => { console.log(dialog.message()) });
evaluate(() => console.log('hello', 5, {foo: 'bar'}));
$$eval('div', divs => divs.length);
evaluateHandle(() => document.body);
...
```
Another example
```js
(async () => {
await differencify
.init()
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://nimasoroush.github.io/differencify/')
.on('dialog', async (dialog) => {
console.log(dialog.message()); // 1
await dialog.dismiss();
})
.evaluate(() => alert('1'))
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result) // True or False
})
.close()
.end();
})();
```
## Continue on chained object
```js
(async () => {
await differencify
.init()
.newPage()
.goto('https://github.com/NimaSoroush/differencify')
.mainFrame()
.then
.url()
.result((url) => {
console.log(url); // https://github.com/NimaSoroush/differencify
})
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result); // True or False
})
.close()
.end();
})();
```
In this example, differencify will get the `mainFrame` of the page, continue on it via `then`, and finally print the `url` of that frame.
## Multiple toMatchSnapshot on chained object
```js
(async () => {
await differencify
.init()
.newPage()
.goto('https://nimasoroush.github.io/differencify/')
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result); // True or False
})
.goto('https://nimasoroush.github.io/differencify/')
.screenshot()
.toMatchSnapshot()
.result((result) => {
console.log(result); // True or False
})
.close()
.end();
})();
```
In this example, differencify will go to different pages and compare the screenshots with the reference screenshots.
## Multiple toMatchSnapshot when unchained
```js
(async () => {
const target = differencify.init({ chain: false });
const page = await target.newPage();
await page.goto('https://nimasoroush.github.io/differencify/');
await page.setViewport({ width: 1600, height: 1200 });
await page.waitFor(1000);
const image = await page.screenshot();
const result = await target.toMatchSnapshot(image);
await page.goto('https://github.com/NimaSoroush/differencify#about');
await page.setViewport({ width: 1600, height: 1200 });
await page.waitFor(1000);
const image2 = await page.screenshot();
const result2 = await target.toMatchSnapshot(image2);
await page.close();
console.log(result); // True or False
console.log(result2); // True or False
})();
```
In this example, differencify will go to different pages and compare the screenshots with the reference screenshots.
## Custom test path
```js
(async () => {
const differencify = new Differencify({ imageSnapshotPath: './custom_test_path' });
const target = differencify.init({ chain: false });
await target.launch();
const page = await target.newPage();
await page.setViewport({ width: 1600, height: 1200 });
await page.goto('http://example.com/');
await page.waitFor(1000);
const image = await page.screenshot();
const result = await target.toMatchSnapshot(image);
await page.close();
console.log(result); // True or False
})();
```
In this example, you can specify the custom path for storing images.
## Freezing an image
```js
(async () => {
await differencify
.init()
.newPage()
.setViewport({ width: 1600, height: 1200 })
.goto('https://i.giphy.com/media/xTiTnoUnHxVaaVNWhO/giphy.webp')
.waitFor('body > img')
.freezeImage('body > img')
.screenshot()
.toMatchSnapshot()
.close()
.end();
})();
```
In this example, you can freeze an image by specifying the selector path.
| 29.957082 | 326 | 0.653438 | eng_Latn | 0.81474 |
ed8297dbb0dd21ff5fab49473531fab505a16bea | 1,288 | md | Markdown | docs/stepup_callout.md | domgon/OpenConext-engineblock | e550d3316e713323cf3db6af450db793bccb7d31 | [
"Apache-2.0"
] | null | null | null | docs/stepup_callout.md | domgon/OpenConext-engineblock | e550d3316e713323cf3db6af450db793bccb7d31 | [
"Apache-2.0"
] | null | null | null | docs/stepup_callout.md | domgon/OpenConext-engineblock | e550d3316e713323cf3db6af450db793bccb7d31 | [
"Apache-2.0"
] | null | null | null | # Engineblock Stepup second factor integration
It's possible to require second factor authentication in EngineBlock using OpenConext StepUp in second factor only (SFO) mode. When configured correctly, EngineBlock will utilize the StepUp Gateway's SFO to do a second factor callout. To allow this on a per-IdP or per-SP basis, the following coin metadata attributes need to be passed from the Manage (or other software) instance that is pushing metadata into EngineBlock.
## Engineblock metadata configuration
### SP
#### metadata:coin:stepup:allow_no_token
**Type:** boolean
Continue with LOA 1 if no second factor token is found.
#### metadata:coin:stepup:requireloa
**Type:** boolean

The minimal LOA required.
### IdP
#### metadata:coin:stepup_connections
**Type:** object
* name: _entityId_,
* level: _requiredLoa_
One entry per SP that should use the SFO capabilities; a sketch of an entry is shown below.
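For illustration only, a single entry might look like this, where the entity ID is a placeholder and `<requiredLoa>` stands for whatever LoA identifier your StepUp installation uses; the exact push format is determined by the pushing software:

```json
[
  {
    "name": "https://sp.example.org/saml/metadata",
    "level": "<requiredLoa>"
  }
]
```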
## Engineblock global configuration
The EngineBlock installation also needs additional configuration in order to facilitate the SFO second factor authentications. For details on these configuration settings, please review the SFO section in the [app/config/parameters.yml.dist](app/config/parameters.yml.dist) file.
| 37.882353 | 509 | 0.792702 | eng_Latn | 0.987363 |
ed832772ed063d47ec2a15690d6d1e326259bbff | 24,085 | md | Markdown | content/posts/2021-08-11---Hot-Papers.md | TatsuyaShirakawa/daily-arxiv-gatsby | 4c1744c7f6f3eaa676310a5958ee71e126cf0c93 | [
"MIT"
] | 4 | 2020-09-02T16:13:06.000Z | 2021-11-08T08:17:04.000Z | content/posts/2021-08-11---Hot-Papers.md | TatsuyaShirakawa/daily-arxiv-gatsby | 4c1744c7f6f3eaa676310a5958ee71e126cf0c93 | [
"MIT"
] | null | null | null | content/posts/2021-08-11---Hot-Papers.md | TatsuyaShirakawa/daily-arxiv-gatsby | 4c1744c7f6f3eaa676310a5958ee71e126cf0c93 | [
"MIT"
] | null | null | null | ---
title: Hot Papers 2021-08-11
date: 2021-08-12T08:06:18Z
template: "post"
draft: false
slug: "hot-papers-2021-08-11"
category: "arXiv"
tags:
- "arXiv"
- "Twitter"
- "Machine Learning"
- "Computer Science"
description: "Hot papers 2021-08-11"
socialImage: "/media/flying-marine.jpg"
---
# 1. A Survey on Deep Reinforcement Learning for Data Processing and Analytics
Qingpeng Cai, Can Cui, Yiyuan Xiong, Zhongle Xie, Meihui Zhang
- retweets: 2496, favorites: 263 (08/12/2021 08:06:18)
- links: [abs](https://arxiv.org/abs/2108.04526) | [pdf](https://arxiv.org/pdf/2108.04526)
- [cs.LG](https://arxiv.org/list/cs.LG/recent) | [cs.DB](https://arxiv.org/list/cs.DB/recent)
Data processing and analytics are fundamental and pervasive. Algorithms play a vital role in data processing and analytics where many algorithm designs have incorporated heuristics and general rules from human knowledge and experience to improve their effectiveness. Recently, reinforcement learning, deep reinforcement learning (DRL) in particular, is increasingly explored and exploited in many areas because it can learn better strategies in complicated environments it is interacting with than statically designed algorithms. Motivated by this trend, we provide a comprehensive review of recent works focusing on utilizing deep reinforcement learning to improve data processing and analytics. First, we present an introduction to key concepts, theories, and methods in deep reinforcement learning. Next, we discuss deep reinforcement learning deployment on database systems, facilitating data processing and analytics in various aspects, including data organization, scheduling, tuning, and indexing. Then, we survey the application of deep reinforcement learning in data processing and analytics, ranging from data preparation, natural language interface to healthcare, fintech, etc. Finally, we discuss important open challenges and future research directions of using deep reinforcement learning in data processing and analytics.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">🎓 A Survey on Deep Reinforcement Learning for Data Processing and Analytics<br><br>Provides a comprehensive overview of how deep reinforcement learning can improve data processing and analytics applications.<br><br>A great read for ML practitioners and students.<a href="https://t.co/bk8Fj2f9Me">https://t.co/bk8Fj2f9Me</a> <a href="https://t.co/7F02KHhpN6">pic.twitter.com/7F02KHhpN6</a></p>— elvis (@omarsar0) <a href="https://twitter.com/omarsar0/status/1425414309157421061?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 2. Making Transformers Solve Compositional Tasks
Santiago Ontañón, Joshua Ainslie, Vaclav Cvicek, Zachary Fisher
- retweets: 440, favorites: 121 (08/12/2021 08:06:19)
- links: [abs](https://arxiv.org/abs/2108.04378) | [pdf](https://arxiv.org/pdf/2108.04378)
- [cs.AI](https://arxiv.org/list/cs.AI/recent) | [cs.CL](https://arxiv.org/list/cs.CL/recent)
Several studies have reported the inability of Transformer models to generalize compositionally, a key type of generalization in many NLP tasks such as semantic parsing. In this paper we explore the design space of Transformer models showing that the inductive biases given to the model by several design decisions significantly impact compositional generalization. Through this exploration, we identified Transformer configurations that generalize compositionally significantly better than previously reported in the literature in a diverse set of compositional tasks, and that achieve state-of-the-art results in a semantic parsing compositional generalization benchmark (COGS), and a string edit operation composition benchmark (PCFG).
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Making Transformers Solve Compositional Tasks<br>paper: <a href="https://t.co/1qUhBPlTfa">https://t.co/1qUhBPlTfa</a><br><br>explore the design space of Transformer models showing that the inductive biases given to the model by several design decisions significantly impact compositional generalization <a href="https://t.co/WSMeRNl3SX">pic.twitter.com/WSMeRNl3SX</a></p>— AK (@ak92501) <a href="https://twitter.com/ak92501/status/1425256942600065024?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 3. AnyoneNet: Synchronized Speech and Talking Head Generation for arbitrary person
Xinsheng Wang, Qicong Xie, Jihua Zhu, Lei Xie, Scharenborg
- retweets: 437, favorites: 79 (08/12/2021 08:06:19)
- links: [abs](https://arxiv.org/abs/2108.04325) | [pdf](https://arxiv.org/pdf/2108.04325)
- [cs.CV](https://arxiv.org/list/cs.CV/recent) | [cs.HC](https://arxiv.org/list/cs.HC/recent)
Automatically generating videos in which synthesized speech is synchronized with lip movements in a talking head has great potential in many human-computer interaction scenarios. In this paper, we present an automatic method to generate synchronized speech and talking-head videos on the basis of text and a single face image of an arbitrary person as input. In contrast to previous text-driven talking head generation methods, which can only synthesize the voice of a specific person, the proposed method is capable of synthesizing speech for any person that is inaccessible in the training stage. Specifically, the proposed method decomposes the generation of synchronized speech and talking head videos into two stages, i.e., a text-to-speech (TTS) stage and a speech-driven talking head generation stage. The proposed TTS module is a face-conditioned multi-speaker TTS model that gets the speaker identity information from face images instead of speech, which allows us to synthesize a personalized voice on the basis of the input face image. To generate the talking head videos from the face images, a facial landmark-based method that can predict both lip movements and head rotations is proposed. Extensive experiments demonstrate that the proposed method is able to generate synchronized speech and talking head videos for arbitrary persons and non-persons. Synthesized speech shows consistency with the given face regarding to the synthesized voice's timbre and one's appearance in the image, and the proposed landmark-based talking head method outperforms the state-of-the-art landmark-based method on generating natural talking head videos.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">AnyoneNet: Synchronized Speech and Talking Head<br>Generation for arbitrary person<br>pdf: <a href="https://t.co/pm6IWdWScu">https://t.co/pm6IWdWScu</a><br>abs: <a href="https://t.co/d5O2t0x1zi">https://t.co/d5O2t0x1zi</a> <a href="https://t.co/qSiJEwXm62">pic.twitter.com/qSiJEwXm62</a></p>— AK (@ak92501) <a href="https://twitter.com/ak92501/status/1425268905455542280?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 4. Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation
Weilun Wang, Wengang Zhou, Jianmin Bao, Dong Chen, Houqiang Li
- retweets: 361, favorites: 68 (08/12/2021 08:06:19)
- links: [abs](https://arxiv.org/abs/2108.04547) | [pdf](https://arxiv.org/pdf/2108.04547)
- [cs.CV](https://arxiv.org/list/cs.CV/recent)
Contrastive learning shows great potential in unpaired image-to-image translation, but sometimes the translated results are in poor quality and the contents are not preserved consistently. In this paper, we uncover that the negative examples play a critical role in the performance of contrastive learning for image translation. The negative examples in previous methods are randomly sampled from the patches of different positions in the source image, which are not effective to push the positive examples close to the query examples. To address this issue, we present instance-wise hard Negative Example Generation for Contrastive learning in Unpaired image-to-image Translation~(NEGCUT). Specifically, we train a generator to produce negative examples online. The generator is novel from two perspectives: 1) it is instance-wise which means that the generated examples are based on the input image, and 2) it can generate hard negative examples since it is trained with an adversarial loss. With the generator, the performance of unpaired image-to-image translation is significantly improved. Experiments on three benchmark datasets demonstrate that the proposed NEGCUT framework achieves state-of-the-art performance compared to previous methods.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation<br>pdf: <a href="https://t.co/dUlGZSiiN3">https://t.co/dUlGZSiiN3</a><br>abs: <a href="https://t.co/WfKKY2VgQY">https://t.co/WfKKY2VgQY</a> <a href="https://t.co/Xzeh5LCMde">pic.twitter.com/Xzeh5LCMde</a></p>— AK (@ak92501) <a href="https://twitter.com/ak92501/status/1425277409083940872?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 5. Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development
Morgan Klaus Scheuerman, Emily Denton, Alex Hanna
- retweets: 198, favorites: 107 (08/12/2021 08:06:19)
- links: [abs](https://arxiv.org/abs/2108.04308) | [pdf](https://arxiv.org/pdf/2108.04308)
- [cs.CV](https://arxiv.org/list/cs.CV/recent) | [cs.HC](https://arxiv.org/list/cs.HC/recent)
Data is a crucial component of machine learning. The field is reliant on data to train, validate, and test models. With increased technical capabilities, machine learning research has boomed in both academic and industry settings, and one major focus has been on computer vision. Computer vision is a popular domain of machine learning increasingly pertinent to real-world applications, from facial recognition in policing to object detection for autonomous vehicles. Given computer vision's propensity to shape machine learning research and impact human life, we seek to understand disciplinary practices around dataset documentation - how data is collected, curated, annotated, and packaged into datasets for computer vision researchers and practitioners to use for model tuning and development. Specifically, we examine what dataset documentation communicates about the underlying values of vision data and the larger practices and goals of computer vision as a field. To conduct this study, we collected a corpus of about 500 computer vision datasets, from which we sampled 114 dataset publications across different vision tasks. Through both a structured and thematic content analysis, we document a number of values around accepted data practices, what makes desirable data, and the treatment of humans in the dataset construction process. We discuss how computer vision datasets authors value efficiency at the expense of care; universality at the expense of contextuality; impartiality at the expense of positionality; and model work at the expense of data work. Many of the silenced values we identify sit in opposition with social computing practices. We conclude with suggestions on how to better incorporate silenced values into the dataset creation and curation process.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Preprint announcement: "Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development" w/ myself, <a href="https://twitter.com/cephaloponderer?ref_src=twsrc%5Etfw">@cephaloponderer</a>, <a href="https://twitter.com/alexhanna?ref_src=twsrc%5Etfw">@alexhanna</a> to be published in CSCW 2021 <a href="https://t.co/xBuZd1VvdB">https://t.co/xBuZd1VvdB</a></p>— Morgan Klaus Scheuerman (@morganklauss) <a href="https://twitter.com/morganklauss/status/1425489568501956610?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">> We discuss how computer vision datasets authors value efficiency at the expense of care; universality at the expense of contextuality; impartiality at the expense of positionality; and model work at the expense of data work<br><br>👀 very excited to read this<a href="https://t.co/OGwttZEFEy">https://t.co/OGwttZEFEy</a></p>— Ali Alkhatib (@_alialkhatib) <a href="https://twitter.com/_alialkhatib/status/1425276990769164289?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 6. FairyTailor: A Multimodal Generative Framework for Storytelling
Eden Bensaid, Mauro Martino, Benjamin Hoover, Jacob Andreas, Hendrik Strobelt
- retweets: 146, favorites: 76 (08/12/2021 08:06:19)
- links: [abs](https://arxiv.org/abs/2108.04324) | [pdf](https://arxiv.org/pdf/2108.04324)
- [cs.CL](https://arxiv.org/list/cs.CL/recent) | [cs.AI](https://arxiv.org/list/cs.AI/recent) | [cs.CV](https://arxiv.org/list/cs.CV/recent)
Storytelling is an open-ended task that entails creative thinking and requires a constant flow of ideas. Natural language generation (NLG) for storytelling is especially challenging because it requires the generated text to follow an overall theme while remaining creative and diverse to engage the reader. In this work, we introduce a system and a web-based demo, FairyTailor, for human-in-the-loop visual story co-creation. Users can create a cohesive children's fairytale by weaving generated texts and retrieved images with their input. FairyTailor adds another modality and modifies the text generation process to produce a coherent and creative sequence of text and images. To our knowledge, this is the first dynamic tool for multimodal story generation that allows interactive co-formation of both texts and images. It allows users to give feedback on co-created stories and share their results.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">FairyTailor: A Multimodal Generative Framework for Storytelling<br>pdf: <a href="https://t.co/XJ7X4CDZfz">https://t.co/XJ7X4CDZfz</a><br>abs: <a href="https://t.co/IZJuTXsHm6">https://t.co/IZJuTXsHm6</a><br>webpage: <a href="https://t.co/Fbc6bo8RAX">https://t.co/Fbc6bo8RAX</a><br>github: <a href="https://t.co/BYypJRTWGp">https://t.co/BYypJRTWGp</a> <a href="https://t.co/GVbviyZBmf">pic.twitter.com/GVbviyZBmf</a></p>— AK (@ak92501) <a href="https://twitter.com/ak92501/status/1425258207354703872?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 7. Meta-repository of screening mammography classifiers
Benjamin Stadnick, Jan Witowski, Vishwaesh Rajiv, Jakub Chłędowski, Farah E. Shamout, Kyunghyun Cho, Krzysztof J. Geras
- retweets: 182, favorites: 25 (08/12/2021 08:06:19)
- links: [abs](https://arxiv.org/abs/2108.04800) | [pdf](https://arxiv.org/pdf/2108.04800)
- [cs.LG](https://arxiv.org/list/cs.LG/recent) | [cs.CV](https://arxiv.org/list/cs.CV/recent)
Artificial intelligence (AI) is transforming medicine and showing promise in improving clinical diagnosis. In breast cancer screening, several recent studies show that AI has the potential to improve radiologists' accuracy, subsequently helping in early cancer diagnosis and reducing unnecessary workup. As the number of proposed models and their complexity grows, it is becoming increasingly difficult to re-implement them in order to reproduce the results and to compare different approaches. To enable reproducibility of research in this application area and to enable comparison between different methods, we release a meta-repository containing deep learning models for classification of screening mammograms. This meta-repository creates a framework that enables the evaluation of machine learning models on any private or public screening mammography data set. At its inception, our meta-repository contains five state-of-the-art models with open-source implementations and cross-platform compatibility. We compare their performance on five international data sets: two private New York University breast cancer screening data sets as well as three public (DDSM, INbreast and Chinese Mammography Database) data sets. Our framework has a flexible design that can be generalized to other medical image analysis tasks. The meta-repository is available at https://www.github.com/nyukat/mammography_metarepository.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Today, we release an open-source *meta-repository* for breast cancer mammography classifiers! In a new paper, we use it to evaluate 5 SOTA models on 5 various datasets from around the world. Preprint is now live at: <a href="https://t.co/CpHV9Yo2b9">https://t.co/CpHV9Yo2b9</a>, and a thread below: <a href="https://t.co/VIwGDa1GAX">pic.twitter.com/VIwGDa1GAX</a></p>— Jan Witowski (@JanWitowski) <a href="https://twitter.com/JanWitowski/status/1425432307981295619?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 8. U-Net-and-a-half: Convolutional network for biomedical image segmentation using multiple expert-driven annotations
Yichi Zhang, Jesper Kers, Clarissa A. Cassol, Joris J. Roelofs, Najia Idrees, Alik Farber, Samir Haroon, Kevin P. Daly, Suvranu Ganguli, Vipul C. Chitalia, Vijaya B. Kolachalama
- retweets: 100, favorites: 12 (08/12/2021 08:06:20)
- links: [abs](https://arxiv.org/abs/2108.04658) | [pdf](https://arxiv.org/pdf/2108.04658)
- [cs.CV](https://arxiv.org/list/cs.CV/recent) | [cs.LG](https://arxiv.org/list/cs.LG/recent)
Development of deep learning systems for biomedical segmentation often requires access to expert-driven, manually annotated datasets. If more than a single expert is involved in the annotation of the same images, then the inter-expert agreement is not necessarily perfect, and no single expert annotation can precisely capture the so-called ground truth of the regions of interest on all images. Also, it is not trivial to generate a reference estimate using annotations from multiple experts. Here we present a deep neural network, defined as U-Net-and-a-half, which can simultaneously learn from annotations performed by multiple experts on the same set of images. U-Net-and-a-half contains a convolutional encoder to generate features from the input images, multiple decoders that allow simultaneous learning from image masks obtained from annotations that were independently generated by multiple experts, and a shared low-dimensional feature space. To demonstrate the applicability of our framework, we used two distinct datasets from digital pathology and radiology, respectively. Specifically, we trained two separate models using pathologist-driven annotations of glomeruli on whole slide images of human kidney biopsies (10 patients), and radiologist-driven annotations of lumen cross-sections of human arteriovenous fistulae obtained from intravascular ultrasound images (10 patients), respectively. The models based on U-Net-and-a-half exceeded the performance of the traditional U-Net models trained on single expert annotations alone, thus expanding the scope of multitask learning in the context of biomedical image segmentation.
# 9. Learning to Cut by Watching Movies
Alejandro Pardo, Fabian Caba Heilbron, Juan León Alcázar, Ali Thabet, Bernard Ghanem
- retweets: 64, favorites: 47 (08/12/2021 08:06:20)
- links: [abs](https://arxiv.org/abs/2108.04294) | [pdf](https://arxiv.org/pdf/2108.04294)
- [cs.CV](https://arxiv.org/list/cs.CV/recent) | [cs.MM](https://arxiv.org/list/cs.MM/recent)
Video content creation keeps growing at an incredible pace; yet, creating engaging stories remains challenging and requires non-trivial video editing expertise. Many video editing components are astonishingly hard to automate, primarily due to the lack of raw, unedited video materials. This paper focuses on a new task for computational video editing, namely the task of ranking cut plausibility. Our key idea is to leverage content that has already been edited to learn fine-grained audiovisual patterns that trigger cuts. To do this, we first collected a data source of more than 10K videos, from which we extract more than 255K cuts. We devise a model that learns to discriminate between real and artificial cuts via contrastive learning. We set up a new task and a set of baselines to benchmark video cut generation. We observe that our proposed model outperforms the baselines by large margins. To demonstrate our model in real-world applications, we conduct human studies on a collection of unedited videos. The results show that our model does a better job at cutting than random and alternative baselines.
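The contrastive objective mentioned in the abstract can be written as an InfoNCE-style ranking loss: each genuine cut taken from an edited movie must score higher than a batch of artificially assembled cuts. Below is a minimal sketch of that loss, assuming a scoring model that already maps cuts to scalar plausibility scores (this is not the authors' implementation):
```python
import torch
import torch.nn.functional as F

def cut_ranking_loss(real_scores, fake_scores, temperature=0.1):
    """real_scores: (B,) plausibility of genuine cuts;
    fake_scores: (B, K) plausibility of K artificial cuts per sample."""
    logits = torch.cat([real_scores.unsqueeze(1), fake_scores], dim=1) / temperature
    target = torch.zeros(real_scores.size(0), dtype=torch.long)  # real cut = class 0
    return F.cross_entropy(logits, target)  # pushes real cuts above artificial ones
```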
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Learning to Cut by Watching Movies<br>pdf: <a href="https://t.co/vKUSasl0O7">https://t.co/vKUSasl0O7</a><br>abs: <a href="https://t.co/G3tw5nX8Be">https://t.co/G3tw5nX8Be</a> <a href="https://t.co/B2FiJxbt2r">pic.twitter.com/B2FiJxbt2r</a></p>— AK (@ak92501) <a href="https://twitter.com/ak92501/status/1425259535204245504?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 10. RaftMLP: Do MLP-based Models Dream of Winning Over Computer Vision?
Yuki Tatsunami, Masato Taki
- retweets: 56, favorites: 41 (08/12/2021 08:06:20)
- links: [abs](https://arxiv.org/abs/2108.04384) | [pdf](https://arxiv.org/pdf/2108.04384)
- [cs.CV](https://arxiv.org/list/cs.CV/recent) | [cs.AI](https://arxiv.org/list/cs.AI/recent) | [cs.LG](https://arxiv.org/list/cs.LG/recent)
For the past ten years, CNNs have reigned supreme in the world of computer vision, but recently, the Transformer has been on the rise. However, the quadratic computational cost of self-attention has become a severe problem in practice. There has been much research on architectures without CNNs and self-attention in this context. In particular, MLP-Mixer is a simple architecture designed using MLPs that achieves accuracy comparable to the Vision Transformer. However, the only inductive bias in this architecture is the embedding of tokens. Thus, there is still a possibility to build a non-convolutional inductive bias into the architecture itself, and we built one in using two simple ideas. One is to divide the token-mixing block vertically and horizontally. The other is to make spatial correlations denser among some channels of token-mixing. With this approach, we were able to improve the accuracy of the MLP-Mixer while reducing its parameters and computational complexity. Compared to other MLP-based models, the proposed model, named RaftMLP, has a good balance of computational complexity, number of parameters, and actual memory usage. In addition, our work indicates that MLP-based models have the potential to replace CNNs by adopting inductive bias. The source code in PyTorch version is available at \url{https://github.com/okojoalg/raft-mlp}.
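The first of the two ideas, splitting token mixing into a vertical pass and a horizontal pass, is easy to sketch. The block below is a hedged illustration of that row/column factorization, not the official code (which is linked above):
```python
# Sketch of vertical + horizontal token mixing; not the official RaftMLP code.
import torch.nn as nn

class RaftLikeTokenMixer(nn.Module):
    def __init__(self, h, w, dim):
        super().__init__()
        self.h, self.w = h, w
        self.mix_h = nn.Linear(h, h)  # mixes tokens within each column (vertical)
        self.mix_w = nn.Linear(w, w)  # mixes tokens within each row (horizontal)

    def forward(self, x):             # x: (B, h*w, dim)
        b, _, d = x.shape
        x = x.view(b, self.h, self.w, d)
        x = self.mix_h(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)  # vertical pass
        x = self.mix_w(x.permute(0, 1, 3, 2)).permute(0, 1, 3, 2)  # horizontal pass
        return x.reshape(b, self.h * self.w, d)
```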
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">RaftMLP: Do MLP-based Models Dream of Winning Over Computer Vision?<br>pdf: <a href="https://t.co/gZF22TVnnZ">https://t.co/gZF22TVnnZ</a><br>abs: <a href="https://t.co/2Wr0rtSu0Z">https://t.co/2Wr0rtSu0Z</a><br>github: <a href="https://t.co/AxBFNk1Qsj">https://t.co/AxBFNk1Qsj</a><br><br>raft-token-mixing block improves accuracy when<br>trained on the ImageNet-1K dataset, as compared to<br>plain MLP-Mixer <a href="https://t.co/HrkxHT5xzo">pic.twitter.com/HrkxHT5xzo</a></p>— AK (@ak92501) <a href="https://twitter.com/ak92501/status/1425255705620226052?ref_src=twsrc%5Etfw">August 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
| 128.796791 | 1,783 | 0.786132 | eng_Latn | 0.945968 |
ed83c2ac7d7c46f16448ebc9b02bf9ae24e4808e | 2,782 | md | Markdown | CHANGELOG.md | mmpro/ac-awsSecrets | 2ea8c315c1ae59ff22803cf66b7bbe3cd289bbb5 | [
"MIT"
] | null | null | null | CHANGELOG.md | mmpro/ac-awsSecrets | 2ea8c315c1ae59ff22803cf66b7bbe3cd289bbb5 | [
"MIT"
] | 6 | 2020-02-16T18:43:15.000Z | 2021-05-31T08:17:03.000Z | CHANGELOG.md | mmpro/ac-awsSecrets | 2ea8c315c1ae59ff22803cf66b7bbe3cd289bbb5 | [
"MIT"
] | null | null | null |
<a name="1.1.4"></a>
## [1.1.4](https://github.com/mmpro/ac-awssecrets/compare/v1.1.3..v1.1.4) (2021-10-09 10:18:52)
### Bug Fix
* **App:** Package updates | MP | [ede713128ed7a62b9659f737c4469db389854af9](https://github.com/mmpro/ac-awssecrets/commit/ede713128ed7a62b9659f737c4469db389854af9)
Package updates
<a name="1.1.3"></a>
## [1.1.3](https://github.com/mmpro/ac-awssecrets/compare/v1.1.2..v1.1.3) (2021-09-22 11:13:39)
### Bug Fix
* **App:** Package updates | MP | [82f4a90c24c4c94279bf23621dd92805e01ca4d7](https://github.com/mmpro/ac-awssecrets/commit/82f4a90c24c4c94279bf23621dd92805e01ca4d7)
Package updates
<a name="1.1.2"></a>
## [1.1.2](https://github.com/mmpro/ac-awssecrets/compare/v1.1.1..v1.1.2) (2021-05-31 06:17:24)
### Bug Fix
* **App:** Package updates | MP | [466bc49d1d270b784cc35c0ab3e476b3fd6e3587](https://github.com/mmpro/ac-awssecrets/commit/466bc49d1d270b784cc35c0ab3e476b3fd6e3587)
Package updates
<a name="1.1.1"></a>
## [1.1.1](https://github.com/mmpro/ac-awssecrets/compare/v1.1.0..v1.1.1) (2020-03-29 14:10:49)
### Bug Fix
* **App:** Prepare repository for AC semantic release | MP | [f8f652bc09e1581e2ec35b3be6d51045bb905576](https://github.com/mmpro/ac-awssecrets/commit/f8f652bc09e1581e2ec35b3be6d51045bb905576)
Cleaned up repository and use ac-semantic-release
### Chores
* **Misc:** Updated packages | MP | [83244d834fd80f34fbaf450fdefcdfa5b78f91f2](https://github.com/mmpro/ac-awssecrets/commit/83244d834fd80f34fbaf450fdefcdfa5b78f91f2)
Updated packages
<a name="1.1.0"></a>
# [1.1.0](https://github.com/mmpro/ac-awssecrets/compare/v1.0.1...v1.1.0) (2020-02-16 18:42)
### Features
* **Misc:** The servers parameter now support more flexibility | mp ([01f0a798f543108ecef72771de5705764f7db82f](https://github.com/mmpro/ac-awssecrets/commit/01f0a798f543108ecef72771de5705764f7db82f))
You can set custom identifiers for array of objects - see README
<a name="1.0.1"></a>
## [1.0.1](https://github.com/mmpro/ac-awssecrets/compare/v1.0.0...v1.0.1) (2019-07-24 19:43)
### Bug Fixes
* **Misc:** Force version bump | MP ([fbfb1ae](https://github.com/mmpro/ac-awssecrets/commit/fbfb1ae))
Force version bump
<a name="1.0.0"></a>
# 1.0.0 (2019-07-24 19:42)
### Bug Fixes
* **Misc:** Package update | MP ([dbb4c23](https://github.com/mmpro/ac-awssecrets/commit/dbb4c23))
Package update and some minor adjustments for corp-release/semver
* **Misc:** Package update | MP ([8224bf7](https://github.com/mmpro/ac-awssecrets/commit/8224bf7))
Package update and some minor adjustments for corp-release/semver
* **Misc:** Package update | MP ([14e79bc](https://github.com/mmpro/ac-awssecrets/commit/14e79bc))
Package update and some minor adjustments for corp-release/semver
| 35.666667 | 204 | 0.717469 | yue_Hant | 0.518694 |
ed83e4c8859fd4bde4e85eeb16740f1cb7e73bd7 | 103 | md | Markdown | README.md | Xuyang-Huang/Decision-Tree-Python | 12e6224419585542c95f99322c5a19f3bfb2b7c0 | [
"MIT"
] | 1 | 2021-04-12T08:28:41.000Z | 2021-04-12T08:28:41.000Z | README.md | Xuyang-Huang/Decision-Tree-Python | 12e6224419585542c95f99322c5a19f3bfb2b7c0 | [
"MIT"
] | null | null | null | README.md | Xuyang-Huang/Decision-Tree-Python | 12e6224419585542c95f99322c5a19f3bfb2b7c0 | [
"MIT"
] | null | null | null |
# Decision-Tree-Python
Both discrete and continuous data can be used, including datasets that contain both.
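The repository's own interface is not documented here, but the mixed-data idea itself is simple: a split tests a threshold for continuous features and equality for discrete ones. A generic illustration (hypothetical, not this project's API):
```python
def split(rows, feature_index, value, is_continuous):
    """Partition rows on one feature: threshold for continuous values,
    equality test for discrete ones."""
    if is_continuous:
        test = lambda row: row[feature_index] <= value
    else:
        test = lambda row: row[feature_index] == value
    left = [r for r in rows if test(r)]
    right = [r for r in rows if not test(r)]
    return left, right

rows = [(2.5, "red", 0), (3.6, "blue", 1), (1.1, "red", 0)]
print(split(rows, 0, 2.5, True))     # continuous: threshold at 2.5
print(split(rows, 1, "red", False))  # discrete: equality on "red"
```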
| 34.333333 | 79 | 0.805825 | eng_Latn | 0.999989 |
ed83e933ffe5c63ac3c24d7cbada2a06dd70c910 | 2,949 | md | Markdown | README.md | bcentinaro/commerce_billing | 8e227b16a38721c8441e0bd74d89ef17befda95c | [
"MIT"
] | 178 | 2015-06-01T19:53:18.000Z | 2022-03-29T00:52:38.000Z | README.md | bcentinaro/commerce_billing | 8e227b16a38721c8441e0bd74d89ef17befda95c | [
"MIT"
] | 12 | 2015-05-13T20:33:28.000Z | 2017-02-18T16:17:23.000Z | README.md | bcentinaro/commerce_billing | 8e227b16a38721c8441e0bd74d89ef17befda95c | [
"MIT"
] | 38 | 2015-05-12T16:05:52.000Z | 2021-07-23T18:05:24.000Z |
Commerce.Billing
=================
[](https://travis-ci.org/joshnuss/commerce_billing)
Payment processing library for Elixir. Based on [Shopify's](http://shopify.com) [ActiveMerchant](http://github.com/Shopify/active_merchant) ruby gem
## Supported Gateways
- Bogus
- Stripe
## Advantages of Elixir
- **Fault tolerant**: Each worker is supervised, so a new worker is started in the event of errors. Network errors are caught and payment is retried (not yet working).
- **Distributed**: Run workers on different machines.
- **Scalable**: Run multiple workers and adjust number of workers as needed.
- **Throughput**: Takes advantage of all cores. For example, on my laptop with 4 cores (2 threads per core), I can do 100 authorizations with Stripe in 10 seconds. That's 10 per second, or 864,000 transactions per day; eBay does 1.4M/day.
- **Hot code swap**: Update code while the system is running
## Card processing example
```elixir
alias Commerce.Billing
alias Billing.{CreditCard, Address, Worker, Gateways}
config = %{credentials: {"sk_test_BQokikJOvBiI2HlWgH4olfQ2", ""},
default_currency: "USD"}
Worker.start_link(Gateways.Stripe, config, name: :my_gateway)
card = %CreditCard{
name: "John Smith",
number: "4242424242424242",
expiration: {2017, 12},
cvc: "123"
}
address = %Address{
street1: "123 Main",
city: "New York",
region: "NY",
country: "US",
postal_code: "11111"
}
case Billing.authorize(:my_gateway, 199.95, card, billing_address: address,
description: "Amazing T-Shirt") do
{:ok, %{authorization: authorization}} ->
IO.puts("Payment authorized #{authorization}")
{:error, %{code: :declined, reason: reason}} ->
IO.puts("Payment declined #{reason}")
{:error, %{code: error}} ->
IO.puts("Payment error #{error}")
end
```
## Road Map
- Support multiple gateways (PayPal, Stripe, Authorize.net, Braintree etc..)
- Support gateways that bill directly and those that use html integrations.
- Support recurring billing
- Each gateway is hosted in a worker process and supervised.
- Workers can be pooled. (using poolboy)
- Workers can be spread on multiple nodes
- The gateway is selected by first calling the "Gateway Factory" process. The "Gateway Factory" decides which gateway to use. Usually it will just be one type based on configuration setting in mix.exs (i.e. Stripe), but the Factory can be replaced with something fancier. It will enable scenarios like:
- Use one gateway for visa another for mastercard
- Use primary gateway (i.e PayPal), but when PayPal is erroring switch to secondary/backup gateway (i.e. Authorize.net)
- Currency specific gateway, i.e. use one gateway type for USD another for CAD
- Retry on network failure
## License
MIT
@joshnuss is a freelance software consultant. [email protected]
| 37.329114 | 302 | 0.717192 | eng_Latn | 0.957451 |
ed845c99053abfb3e55327c459395994767a7ab8 | 6,931 | md | Markdown | README.md | SymphonyOSF/App-Integrations-Zapier | 3a23b2e6f2b0a36b70bff64c2d450a8a6f3b6ab7 | [
"Apache-2.0"
] | 5 | 2017-01-03T13:53:54.000Z | 2020-03-13T03:21:38.000Z | README.md | SymphonyOSF/App-Integrations-Zapier | 3a23b2e6f2b0a36b70bff64c2d450a8a6f3b6ab7 | [
"Apache-2.0"
] | 17 | 2017-01-18T12:28:06.000Z | 2019-06-04T12:35:34.000Z | README.md | SymphonyOSF/App-Integrations-Zapier | 3a23b2e6f2b0a36b70bff64c2d450a8a6f3b6ab7 | [
"Apache-2.0"
] | 9 | 2016-12-29T18:01:34.000Z | 2022-01-27T20:55:04.000Z | [](https://symphonyoss.atlassian.net/wiki/display/FM/Incubating)
[](https://travis-ci.org/symphonyoss/App-Integrations-Zapier)
[](https://www.versioneye.com/user/projects/58d049f9dcaf9e0048399c74)
[](https://scan.coverity.com/projects/symphonyoss-app-integrations-zapier)
[](https://codecov.io/gh/symphonyoss/App-Integrations-Zapier)
*This readme covers Zapier's specific webhook configuration. For the development guide for Symphony Integrations, please go to the [quick start guide](IntegrationQuickstartGuide.md).*
# Zapier WebHook Integration
The Zapier WebHook Integration will allow you to add an ecosystem of 600+ apps to the Symphony platform. Zapier sends notifications and content to Symphony IMs or rooms from your favorite applications including GMail, Office 365, Trello, HubSpot, Twitter, LinkedIn, and hundreds of other productivity apps.
[See Symphony on Zapier here](https://zapier.com/zapbook/symphony/)
## How it works
With access to a Zapier account, you can configure Zaps in order to receive notifications on Symphony.
A Zap is a blueprint for a workflow you want to do over and over again automatically. Creating a Zap involves choosing a *trigger* and adding one or more *action* steps.
Symphony supports Zapier **actions** to post messages to Symphony via WebHooks. *Symphony cannot be used as a trigger on Zapier.*
## What formats and events it supports and what it produces
Every integration will receive a message sent in a specific format (depending on the system it ingests) and will usually convert it into an "entity" before it reaches the Symphony platform. It will also, usually, identify the kind of message based on an "event" identifier, which varies based on the third-party system.
You can find more details about entities and the Symphony Message ML format [here](https://github.com/symphonyoss/App-Integrations-Core#the-message-ml-format).
We currently support any action that can be configured via our [Symphony Zapbook](https://zapier.com/zapbook/symphony/) on Zapier.
There, you can choose an icon and define a message header and body.
The message header and body must follow the rules for Symphony Message ML, which can be accessed [here](https://rest-api.symphony.com/docs/message-format/), although you can safely insert plain text and dynamic trigger app information (like a trigger-related name, title or anything that will translate to text in your message).
### Sample Action
Here we will show you a sample payload from Zapier, the Message ML generated by the Integration Bridge, and the message as it is rendered in Symphony. This payload would be the result of a Zap configured to trigger upon the “card created” event in Trello.
##### Zapier payload (reporting a Trello event)
```json
{
"auth_fields": {},
"request": {
"files": {},
"url": "http://requestb.in/1miwh011",
"headers": {
"Content-Type": "application/json; charset=utf-8",
"Accept": "application/json"
},
"params": {},
"data": "{\"message_content\": \"Test Message Body:\\n* Card Test Trello have just been created\", \"message_header\": \"Test Message Header: Trello card Test Trello created\", \"webhook_url\": \"http://requestb.in/1miwh011\"}",
"method": "POST"
},
"action_fields": {
"message_content": "Test Message Body:\n* Card Test Trello have just been created",
"message_header": "Test Message Header: Trello card Test Trello created",
"webhook_url": "http://requestb.in/1miwh011"
},
"action_fields_full": {
"message_content": "Test Message Body:\n* Card Test Trello have just been created",
"message_header": "Test Message Header: Trello card Test Trello created",
"webhook_url": "http://requestb.in/1miwh011"
},
"meta": {
"frontend": true
},
"action_fields_raw": {
"message_content": "Test Message Body:\n* Card {{15919238__name}} have just been created",
"message_header": "Test Message Header: Trello card {{15919238__name}} created",
"webhook_url": "http://requestb.in/1miwh011"
},
"url_raw": "{{webhook_url}}",
"zap": {
"live": true,
"link": "https://zapier.com/app/editor/15919238",
"name": "Test Trello!",
"user": {
"timezone": "Atlantic/South_Georgia"
}
}
}
```
##### Generated Symphony Message and Json entity (MessageML V2)
When the Integration Bridge posts messages through an Agent with version 1.46.0 or greater, the
generated Symphony Message must follow the MessageML V2 specification.
The Zapier integration on the Integration Bridge parses the JSON payload that Zapier sent, and generates messageMLv2 and EntityJSON.
More information about MessageML V2 specification can be accessed [here](https://symphonyoss.atlassian.net/wiki/display/WGFOS/MessageML+V2+Draft+Proposal+-+For+Discussion)
This is the messageML v2 that the Zapier integration generates after parsing, which defines the layout of the card and how the front end will render it within Symphony:
```xml
<messageML>
<div class="entity" data-entity-id="zapierPostMessage">
<card class="barStyle" iconSrc="${entity['zapierPostMessage'].message.icon}">
<header>
<span>${entity['zapierPostMessage'].message.header}</span>
</header>
<body>
<div class="labelBackground badge">
<span>${entity['zapierPostMessage'].message.body}</span>
</div>
</body>
</card>
</div>
</messageML>
```
This is the EntityJSON that the Zapier integration generates after parsing, which defines the content of the card that the front-end will use in combination with the MessageML v2 to render the card:
```json
{
"zapierPostMessage": {
"type": "com.symphony.integration.zapier.event.v2.postMessage",
"version": "1.0",
"message" : {
"type": "com.symphony.integration.zapier.event.message",
"version": "1.0",
"header": "Test Message Header: Trello card Test Trello created",
"body": "Test Message Body:<br/>* Card Test Trello have just been created",
"icon": "http://icon.com/icon"
}
}
}
```
##### Message rendered on Symphony

### Messages color mapping
For all Zapier notifications, the flair color (vertical bar on the left) will be **gray**. This can change in future implementations.
 | 53.728682 | 328 | 0.728611 | eng_Latn | 0.884839 |
ed847591a2625a3165b24d76629616c5c49f93ae | 59 | md | Markdown | README.md | Luarde/Jogo-de-escolhas | 3579137b46cfbe229da86ce1ea65d66db6d20ef5 | [
"MIT"
] | null | null | null | README.md | Luarde/Jogo-de-escolhas | 3579137b46cfbe229da86ce1ea65d66db6d20ef5 | [
"MIT"
] | null | null | null | README.md | Luarde/Jogo-de-escolhas | 3579137b46cfbe229da86ce1ea65d66db6d20ef5 | [
"MIT"
] | null | null | null |
# Jogo-de-escolhas
A game of choices developed in Python.
| 19.666667 | 39 | 0.813559 | por_Latn | 1.000007 |
ed847dc5a39ceef00489d143e1251c57ee525a0c | 99 | md | Markdown | docs/api/watch.md | sthagen/samuelcolvin-watchfiles | 5ca6179d142aa3111de483151702d4eba7d35eac | [
"MIT"
] | 75 | 2022-03-23T11:50:38.000Z | 2022-03-31T13:13:59.000Z | docs/api/watch.md | sthagen/samuelcolvin-watchfiles | 5ca6179d142aa3111de483151702d4eba7d35eac | [
"MIT"
] | 8 | 2022-03-23T15:23:54.000Z | 2022-03-27T17:53:44.000Z | docs/api/watch.md | sthagen/samuelcolvin-watchfiles | 5ca6179d142aa3111de483151702d4eba7d35eac | [
"MIT"
] | 3 | 2022-03-24T08:56:19.000Z | 2022-03-27T10:52:20.000Z |
::: watchfiles.watch
::: watchfiles.awatch
::: watchfiles.Change
::: watchfiles.main.FileChange
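A minimal usage example for the objects documented above: `watch` blocks and yields sets of `(Change, path)` pairs, and `awatch` is its async counterpart.
```python
from watchfiles import watch, Change

for changes in watch('./src'):       # blocks until something changes
    for change, path in changes:     # each item is a (Change, path) pair
        if change == Change.modified:
            print('modified:', path)
```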
| 12.375 | 30 | 0.717172 | yue_Hant | 0.373881 |
ed851be04ec7ff4cc612fb1260cd75519c12d5d2 | 7,300 | md | Markdown | Markdown/10000s/10000/wore wholly.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | 2 | 2022-01-19T09:04:58.000Z | 2022-01-23T15:44:37.000Z | Markdown/00500s/03000/wore wholly.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | null | null | null | Markdown/00500s/03000/wore wholly.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | 1 | 2022-01-09T17:10:33.000Z | 2022-01-09T17:10:33.000Z |
- Epistle little salt is were it active no are. Perpetual the or additions neglected them. Was served felt it boldly early being. Moss with shall other without stand despairing. Fort of the his help it. In the the sides are the. The can his our to he circular. Only of and or son vengeance are. The your is was shall. Pretended total the another time sounds of myself be. Given the happiness the was heard immortal. Room literature same leave set is drive. From is as on silence him two. Again they kindred position had and the. Veil cattle of falling king were at days. Contemplate the [[moving content]] indeed to said. Had the his meaning to who. Fishes repaid that he and not computers vice were. Is being of of Ive but. At of in sea and York it.
- Riding and gathered had site few was fact and. Knew attended that character on do. Drop the in watch all and. Alas found border sincerity though stupid take.
-
- Rather in myself it it astray followed.
- Low out to was armies.
- In had he her though.
- Box the united more endurance gift comes.
- Of two she addition pounds them.
- With of about dress about that.
- Nature also and was still comes had time had.
- Deep to danger moment and.
- Being visit of of what.
- Her we every see to existence the.
- Out feet came the the i and an.
- Will you the to wished distance for worth.
- In star the became tomb i it.
- Young you looked in meeting enforced.
- It court as never bad didnt beyond possibly.
- Face the financial can and.
- To away to right only.
- Joseph from his good by scotch.
- Few i secure already the won reverend through and.
- Thy meanwhile nothing in i fully.
- With was work bench now view.
-
- Bells tidings of with drawers paying likely.
- Of was being of an they.
- Are was [[tells smoke]] for up have admiring.
- Have in was gardener than strip.
- Power admitted to so overhead streak Mr.
- World of almost vote which.
- [[suffer]] and and be them from.
- The old in now chief.
- Peculiar as white [[imagine grand]] when who acts.
- And taught the from and.
- Pure the now never generation her.
- Him the sails who kind.
- Highway Mrs i like been he story that.
- Rest his Italy beyond fiend wounded for.
- Of forsake dwell or refund until or.
- Idea this officers trying my.
- Had line soon which following minutes.
- Matter adult the down streams have.
- Well of in to like said our.
- Thy shepherd he nothing for the.
- Life right try would the then.
- Our Indian or were more surface the them.
- At female these to care books.
- Of across two and cross prophets.
- Readily the her one of or those bold on fondness.
- Vol in children of her air lest.
- To the glass drew are prohibition.
- You manager in use of.
- About of flight and conventional at has.
- Speedily summary avail system dreaming.
- I to essential all green.
- At not Jews before plain day.
- It the there [[suffer smiling]] parties be.
- Up [[pocket]] over we large i for.
- Chiefly is god any by.
- Like contemplation heard well cake these govern united.
- Territory of [[forty hopes]] of twice in. That house to composer i. On out had things by women so that. We the of a you in. Than doom donate has i. Ever with of had to everything elevated.
- Have he receive do by exceedingly. Them he she yet to passed. Rev accepted history Frederick but pin comes button. [[square catch]] not my come our money. Agents his the before which and. The wore year pale it classical in. And received go the do rarely. Case better to in the west who. In and and and he to incomplete. Romance the pg man and of [[stars dressed]]. No lord [[rapidly collection]] for would on. To and do the felt. Then were the document became lover. Take is on and own of.
- Boiled to except the the the little. By lord knew i would he that in. Was be are that received to. General the of from must. Quickly had at defective out mentioned. Being there the is ask and because.
- Wiser to procedure bow slow to you. To [[literature]] we more skilful after. King and and simply james to. Full which examined guest i fluid will. [[suffer mad]] attentive the the and bay birth was. Appeal very candy some introduce Christianity out. From take men Switzerland appear two feel you. Moderate make with am Barbara of to. Were thither were voice then others. Appeal Mississippi the are with road. Art day is lived tough have skill. Can of long faith to of best. Ugly remedy of it just knows from [[literature dressed]]. Dark preaching never winter season i half [[lifted minds]]. God children and forenoon struck is talked. Degree one the and Gutenberg if only. Day hidden in from we. In said the he administration sound the. Away oxen but just the grey to February revived.
- Is the from stock goddess baptism. They you the the within happy were. On or innocent tumbled preserved in things. Lower of it were but what. As round of is seems oxford. Truth have or ships in the the. In study strange therefore is second. Is not she they Gutenberg happiness. [[smiling]] and especially truth board ye. Splendid the by out cf was before. Of cold was say and. The has in seen. They with was in as many. Very what sorrowful homes outrage hope. Have his the i the lawyers. Of charge and was to it the. With placed us and an. To form my cab known the and. The bank other. Their to [[hopes separate]] of and [[duties]]. Was and nerves department to. They our said the women felt. [[coat lifted]] lady himself nook crime the published these. The shrink may clearer secret of. Morning floor of also not fifty. Servant in begins liked the be things in. Music from of laws the once no. In then by of of wax see. The trenches study into took that the mark these. Of in the rise on the be. Last you and small let [[countries]] hundred. Of medium suggested he do took is.
- Other have my the reached. Previously this across burden [[lifted lifted]] on was. Of your promise [[extraordinary shape]] proper whilst. I we of not at. Of his comes [[rank]] conduct vague its may. [[absence dressed]] always the for living or. With this was other physical that. And and with housekeeper blowing one i. Mother made almost authority ever maximum man.
- I border against have over willingly. Till on makes commander ab and doors. [[shape]] try you away sire not strongly she. The trees user baffled the [[final hopes]] from. Until time it husband was. The gather i railroad least to state. Living sacred long who heart might. This pains foreign [[carrying dressed]] handsome turned tries. I [[empty series]] an first better in. Of one were again use distinctly their careless. The to for than the the. Sympathies as fixed the liked manned. Compelled from was mines morning name. Repeated deep enormous [[proceeded tells]] too. With marriage the stayed. Office like even have it fulfilled. Not twelve discharge variety their them an. He turned to and creation being. Was excellent principal the women in us. The who it of the it. Lips would side yet of his when. To Arthur scent buck disposed of was. For none towards as born. Thee you you were delightful of is in. With it proved power.
- Of till every and you advice. Rolled not is there as be. About anywhere me hours can deer it.
 | 105.797101 | 1,080 | 0.74411 | eng_Latn | 0.999946 |
ed853008e61abb11052504ba309203d9c8a73316 | 3,144 | md | Markdown | docs/vs-2015/extensibility/how-to-implement-the-find-and-replace-mechanism.md | adrianodaddiego/visualstudio-docs.it-it | b2651996706dc5cb353807f8448efba9f24df130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/how-to-implement-the-find-and-replace-mechanism.md | adrianodaddiego/visualstudio-docs.it-it | b2651996706dc5cb353807f8448efba9f24df130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/how-to-implement-the-find-and-replace-mechanism.md | adrianodaddiego/visualstudio-docs.it-it | b2651996706dc5cb353807f8448efba9f24df130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: 'How to: Implement the Find and Replace Mechanism | Microsoft Docs'
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-sdk
ms.topic: conceptual
helpviewer_keywords:
- editors [Visual Studio SDK], legacy - find and replace
ms.assetid: bbd348db-3d19-42eb-99a2-3e808528c0ca
caps.latest.revision: 12
ms.author: gregvanl
manager: jillfra
ms.openlocfilehash: d4362d0b0c3f013ce6f38d13265dcc181c77012c
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "62548696"
---
# <a name="how-to-implement-the-find-and-replace-mechanism"></a>How to: Implement the Find and Replace Mechanism
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Visual Studio provides two ways to implement Find/Replace. One is to pass a text image to the shell and let it handle searching, highlighting, and replacing the text; this lets users specify multiple text ranges. Alternatively, your VSPackage can control this functionality itself. In either case, you must notify the shell about the current target and the targets in all open documents.
### <a name="to-implement-findreplace"></a>To implement Find/Replace
1. Implement the <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget> interface on one of the objects returned by the frame properties <xref:Microsoft.VisualStudio.Shell.Interop.__VSFPROPID> or <xref:Microsoft.VisualStudio.Shell.Interop.__VSFPROPID>. If you are creating a custom editor, you must implement this interface as part of your custom editor class.
2. Use the <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.GetCapabilities%2A> method to specify which options your editor supports and whether it implements searching over text images.
Se your editor supports searching over text images, implement <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.GetSearchImage%2A>.
Otherwise, implement <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.Find%2A> and <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.Replace%2A>.
3. If you implement the <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.Find%2A> and <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.Replace%2A> methods, you can simplify search tasks by calling the <xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindHelper> interface.
## <a name="see-also"></a>See also
<xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindHelper>
<xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget>
<xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.Find%2A>
<xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.GetSearchImage%2A>
<xref:Microsoft.VisualStudio.TextManager.Interop.IVsFindTarget.Replace%2A>
<xref:Microsoft.VisualStudio.Shell.Interop.__VSPROPID>
| 71.454545 | 499 | 0.814885 | ita_Latn | 0.82418 |
ed873b0d80c4ba8e6080413683a2464d49cb3489 | 2,010 | md | Markdown | memdocs/intune/user-help/encrypt-your-device-windows.md | eltociear/memdocs.es-es | 107aa5ef82dc8742af287d2d776d76de6a507c77 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | memdocs/intune/user-help/encrypt-your-device-windows.md | eltociear/memdocs.es-es | 107aa5ef82dc8742af287d2d776d76de6a507c77 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | memdocs/intune/user-help/encrypt-your-device-windows.md | eltociear/memdocs.es-es | 107aa5ef82dc8742af287d2d776d76de6a507c77 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:43:51.000Z | 2020-05-28T15:43:51.000Z |
---
title: How to protect your Windows device with encryption | Microsoft Docs
description: Turning on BitLocker to encrypt a Windows 10 device
keywords: ''
author: lenewsad
ms.author: lanewsad
manager: dougeby
ms.date: 01/29/2018
ms.topic: end-user-help
ms.prod: ''
ms.service: microsoft-intune
ms.subservice: end-user
ms.technology: ''
ms.assetid: 8d022ea7-d9b6-43c4-adcd-4f6421606a7f
searchScope:
- User help
ROBOTS: ''
ms.reviewer: priyar
ms.suite: ems
ms.custom: intune-enduser
ms.collection: ''
ms.openlocfilehash: 9f0fc1e4d58d67a0410e58290a91b188c8ca0ad8
ms.sourcegitcommit: a77ba49424803fddcaf23326f1befbc004e48ac9
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 05/27/2020
ms.locfileid: "83882217"
---
# <a name="how-to-protect-your-windows-device-using-encryption"></a>How to protect your Windows device using encryption
When you encrypt a device, you wrap its information in a layer of protective code that prevents unauthorized users from accessing it. To make sure your information is protected, your organization asks you to encrypt your Windows device to safeguard work or school data.
If you have a Windows Phone device and have enrolled it, it will be encrypted automatically if encryption is required.
> [!Note]
> Windows 10 Home does not support encryption. Learn more about upgrading from [Windows 10 Home to Windows 10 Pro](https://support.microsoft.com/help/12384/windows-10-upgrading-home-to-pro).
If you have a desktop device, follow these instructions to encrypt it.
1. Search for and launch the **Manage BitLocker** app.
2. Choose **Turn on BitLocker** and follow the instructions shown to encrypt each of your drives.
Still need help? Contact your company support. For contact information, check the [Company Portal website](https://go.microsoft.com/fwlink/?linkid=2010980).
| 43.695652 | 320 | 0.79602 | spa_Latn | 0.929458 |
ed87943a3db6ae36a7225f7a667de8647d9dfaf2 | 2,389 | md | Markdown | translations/ja-JP/data/reusables/code-scanning/run-additional-queries.md | Plotta/docs | dfc135a2bf91be244db6de86003348719a7e8cad | [
"CC-BY-4.0",
"MIT"
] | 11,698 | 2020-10-07T16:22:18.000Z | 2022-03-31T18:54:47.000Z | translations/ja-JP/data/reusables/code-scanning/run-additional-queries.md | Husky57/docs | 1d590a4feb780b0acc9a41381e721b61146175db | [
"CC-BY-4.0",
"MIT"
] | 8,317 | 2020-10-07T16:26:58.000Z | 2022-03-31T23:24:25.000Z | translations/ja-JP/data/reusables/code-scanning/run-additional-queries.md | Waleedalaedy/docs | 26d4b73dcbb9a000c32faa37234288649f8d211a | [
"CC-BY-4.0",
"MIT"
] | 48,204 | 2020-10-07T16:15:45.000Z | 2022-03-31T23:50:42.000Z |
When you use {% data variables.product.prodname_codeql %} to scan code, the {% data variables.product.prodname_codeql %} analysis engine generates a database from the code and runs queries against it. {% data variables.product.prodname_codeql %} analysis uses a default set of queries, but you can specify additional queries to be run on top of the defaults.
{% if codeql-packs %}
You can run extra queries if they are part of a
{% data variables.product.prodname_codeql %} pack (beta) published to the {% data variables.product.company_short %} {% data variables.product.prodname_container_registry %} or a {% data variables.product.prodname_ql %} pack stored in a repository. For more information, see "[About {% data variables.product.prodname_code_scanning %} with {% data variables.product.prodname_codeql %}](/code-security/secure-coding/automatically-scanning-your-code-for-vulnerabilities-and-errors/about-code-scanning-with-codeql#about-codeql-queries)."
The options available to specify the additional queries you want to run are:
- `packs` to install one or more {% data variables.product.prodname_codeql %} query packs (beta) and run the default query suite or queries for those packs.
- `queries` to specify a single _.ql_ file, a directory containing multiple _.ql_ files, a _.qls_ query suite definition file, or any combination. For more information about defining query suites, see "[Creating {% data variables.product.prodname_codeql %} query suites](https://codeql.github.com/docs/codeql-cli/creating-codeql-query-suites/)".
You can use both `packs` and `queries` in the same workflow.
{% else %}
Any additional queries you want to run must belong to a
{% data variables.product.prodname_ql %} pack in a repository. For more information, see "[About {% data variables.product.prodname_code_scanning %} with {% data variables.product.prodname_codeql %}](/code-security/secure-coding/automatically-scanning-your-code-for-vulnerabilities-and-errors/about-code-scanning-with-codeql#about-codeql-queries)."
You can specify a single _.ql_ file, a directory containing multiple _.ql_ files, a _.qls_ query suite definition file, or any combination. For more information about defining query suites, see "[Creating {% data variables.product.prodname_codeql %} query suites](https://codeql.github.com/docs/codeql-cli/creating-codeql-query-suites/)".
{% endif %}
{% ifversion fpt or ghec %}We recommend not referencing query suites directly from the `github/codeql` repository, for example `github/codeql/cpp/ql/src@main`. Such queries may not be compiled with the same version of {% data variables.product.prodname_codeql %} as your other queries, which can cause errors during analysis.{% endif %}
| 113.761905 | 534 | 0.800335 | eng_Latn | 0.596006 |
ed889ca24e583b55c6b5186d9d68c682e681f9e1 | 840 | md | Markdown | VBA/Word-VBA/articles/global-system-property-word.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 584 | 2015-09-01T10:09:09.000Z | 2022-03-30T15:47:20.000Z | VBA/Word-VBA/articles/global-system-property-word.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 585 | 2015-08-28T20:20:03.000Z | 2018-08-31T03:09:51.000Z | VBA/Word-VBA/articles/global-system-property-word.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 590 | 2015-09-01T10:09:09.000Z | 2021-09-27T08:02:27.000Z |
---
title: Global.System Property (Word)
keywords: vbawd10.chm163119113
f1_keywords:
- vbawd10.chm163119113
ms.prod: word
api_name:
- Word.Global.System
ms.assetid: b1450081-e237-b45a-658e-f7c70bb0a1dc
ms.date: 06/08/2017
---
# Global.System Property (Word)
Returns a **System** object, which can be used to return system-related information and perform system-related tasks.
## Syntax
_expression_ . **System**
_expression_ Required. A variable that represents a **[Global](global-object-word.md)** object.
## Example
This example returns information about the system.
```
processor = System.ProcessorType
enviro = System.OperatingSystem
```
This example establishes a connection to a network drive.
```
System.Connect Path:="\\Project\Info"
```
## See also
#### Concepts
[Global Object](global-object-word.md)
| 15.555556 | 118 | 0.734524 | eng_Latn | 0.755784 |
ed88ae650dd37eb6bad8db93b981de463d4adc6a | 228 | md | Markdown | ndocs/nsl/index.md | niallang/Nial-Array-Language | eb0834ce9c64f26e577b4f51a43a4398a9fc0ef5 | [
"MIT"
] | null | null | null | ndocs/nsl/index.md | niallang/Nial-Array-Language | eb0834ce9c64f26e577b4f51a43a4398a9fc0ef5 | [
"MIT"
] | null | null | null | ndocs/nsl/index.md | niallang/Nial-Array-Language | eb0834ce9c64f26e577b4f51a43a4398a9fc0ef5 | [
"MIT"
] | 1 | 2022-02-17T21:24:16.000Z | 2022-02-17T21:24:16.000Z |
---
title: NSL Archive
layout: single
---
- [About Nial](AboutNial.html)
- [About Q'Nial](AboutQNial.html)
- [Array Theory](ArrayTheory.html)
- [A Lecture on Nial](NialLecture.html)
- [The Name of the Language](NialSaga.html)
| 19 | 43 | 0.701754 | yue_Hant | 0.916865 |
ed8a8949cbf43cb7cb1853bbb4db00ce263ed1e6 | 3,124 | md | Markdown | biztalk/core/how-to-add-a-catch-exception-block4.md | cmcclister/biztalk-docs | 36a3d4b944e27edff883b8e36e997c7d2af4f497 | [
"CC-BY-4.0",
"MIT"
] | 37 | 2017-08-28T06:57:52.000Z | 2021-07-13T12:16:23.000Z | biztalk/core/how-to-add-a-catch-exception-block4.md | cmcclister/biztalk-docs | 36a3d4b944e27edff883b8e36e997c7d2af4f497 | [
"CC-BY-4.0",
"MIT"
] | 732 | 2017-05-18T22:16:15.000Z | 2022-03-31T23:10:06.000Z | biztalk/core/how-to-add-a-catch-exception-block4.md | isabella232/biztalk-docs | 36a3d4b944e27edff883b8e36e997c7d2af4f497 | [
"CC-BY-4.0",
"MIT"
] | 158 | 2017-06-19T22:47:52.000Z | 2022-02-28T06:41:54.000Z |
---
description: "Learn how to setup an exception handler by attaching a Catch Exception block to the end of a Scope shape in the BizTalk Server Orchestration Designer."
title: "How to Add a Catch Exception Block4 | Microsoft Docs"
ms.custom: ""
ms.date: "06/08/2017"
ms.prod: "biztalk-server"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "article"
helpviewer_keywords:
- "Catch Exception blocks, adding"
- "exception handling, Catch Exception blocks"
ms.assetid: 632fa089-a1af-4126-b32b-68d4d8942387
caps.latest.revision: 12
author: "MandiOhlinger"
ms.author: "mandia"
manager: "anneta"
---
# Adding a Catch Exception Block
The **Catch Exception** block represents an exception handler. **Catch Exception** blocks are attached to the end of a **Scope** shape in Orchestration Designer. You can attach as many **Catch Exception** blocks as you need.
You can set up exception handlers to handle different kinds of exceptions. On each exception handler, you specify an exception type, which must be either an exception or an object derived from the class System. If an exception is thrown that matches the specified type in an exception handler, that exception handler is called.
> [!NOTE]
> To add a **Catch Exception** block to a **Scope** shape, the Transaction Type property of the **Scope** shape must be set to **None** or **Long Running**.
### Adding and populating a Catch Exception block
1. Right-click the **Scope** shape that you want to add a **Catch Exception** block to, and click **New Exception Handler**. A **Catch Exception** block is added to the orchestration immediately following the associated **Scope** shape.
2. In the **Properties** window, specify the properties. The most important is the **Exception Object Type**; this is the type of message it will catch.
   - **Exception Object Name**: assigns a name to the exception object caught by the exception handler.
   - **Exception Object Type**: determines the object type (derived from System.Exception) that this exception handler will catch.
3. In the **Properties** window, open the **Exception Object Type** list. This list contains the **General Exception**.
4. Inside the **Catch Exception** block, add shapes to create the process for handling the exception.
5. Right-click below the **Catch Exception**, point to **Insert Shape**, and select **Construct Message**.
6. Double-click inside **MessageAssignment** to activate the Text Editor, and enter the message assignment. For example, type `Message_3 = Test`.
## See Also
[Completing the Exception Message](../core/completing-the-exception-message2.md)
[How to Add a Scope Shape](../core/how-to-add-a-scope-shape3.md)
[Using BizTalk Server Exception Handling](../core/using-biztalk-server-exception-handling1.md)
| 82.210526 | 1,370 | 0.725032 | eng_Latn | 0.938195 |
ed8a940f8a12325318b1f5c35e7e715e0a2d02eb | 1,401 | md | Markdown | docs/zh-cn/d-deploy.md | panwenhang/pigsty | 16503dbc1eaf575b27e7d560d9cb06fe4316769e | [
"Apache-2.0"
] | null | null | null | docs/zh-cn/d-deploy.md | panwenhang/pigsty | 16503dbc1eaf575b27e7d560d9cb06fe4316769e | [
"Apache-2.0"
] | null | null | null | docs/zh-cn/d-deploy.md | panwenhang/pigsty | 16503dbc1eaf575b27e7d560d9cb06fe4316769e | [
"Apache-2.0"
] | null | null | null |
# Pigsty Deployment
> Deploying Pigsty takes three steps: [prepare](d-prepare.md), [configure](v-config.md), and [run the playbooks](p-playbook)
Resource preparation, download and installation, deployment, scale-out, and scale-in in Pigsty are all one-click, foolproof operations; the real soul of the system is [**configuration**](v-config.md).
----------------
## [Preparation](d-prepare.md)
> Before installing Pigsty, you need to prepare resources that meet the requirements: physical or virtual machine nodes, an admin user, and the Pigsty software download.
- [Node provisioning](d-prepare.md#节点置备)
- [Meta node provisioning](d-prepare.md#元节点置备)
- [Admin user provisioning](d-prepare.md#管理用户置备)
- [Software provisioning](d-prepare.md#软件置备)
----------------
## [Configuration](v-config.md)
> Once preparation is done, you tell Pigsty what you need through [configuration](v-config.md#配置过程): what kind of infrastructure and database services to build.
* [Configure infrastructure](v-infra.md)
* [Configure host nodes](v-nodes.md)
* [Configure PGSQL clusters](v-pgsql.md) / [Customize PGSQL clusters](v-pgsql-customize.md) / [Deploy PGSQL clusters](d-pgsql.md)
* [Configure Redis clusters](v-redis.md) / [Deploy Redis clusters](d-redis.md)
* [Deploy MatrixDB clusters](d-matrixdb.md)
----------------
## [Run the Playbooks](p-playbook.md)
> With the configuration in place, you have told Pigsty what you need. Next, [run the playbooks](p-playbook.md) to make it happen.
* [Install on meta nodes](p-infra.md#infra) / [Uninstall Pigsty](p-infra.md#infra-remove)
* [Add nodes](p-nodes.md#nodes) / [Remove nodes](p-nodes.md#nodes-remove)
* [Deploy a PGSQL cluster](p-pgsql.md#pgsql) / [Decommission a PGSQL cluster](p-pgsql.md#pgsql-remove)
* [Create a PGSQL business user](p-pgsql.md#pgsql-createuser) / [Create a PGSQL business database](p-pgsql.md#pgsql-createdb)
* [Deploy a Redis cluster](p-redis.md#redis) / [Decommission a Redis cluster](p-redis.md#redis-remove)
----------------
## Deployment Modes
* [Standard deployment](d-deploy.md): prepare fresh nodes yourself and go through the standard Pigsty deployment process.
* [Sandbox deployment](d-sandbox.md): pull up a local VM sandbox environment with one click using the prebuilt `vagrant` templates.
* Multi-cloud deployment: use the `terraform` templates to provision the required VM resources from a cloud vendor, then deploy.
* [Monitoring-only deployment](d-monly.md): monitor existing database clusters with a single-node Pigsty.
| 25.017857 | 88 | 0.663812 | yue_Hant | 0.396133 |
ed8aac7800b10174d48e489850222709826981c3 | 143 | md | Markdown | _sections/probability.md | iotaxi/AppliedMaths | 09b30392ebe8445e60da40a129023b7c430c82ea | [
"MIT"
] | null | null | null | _sections/probability.md | iotaxi/AppliedMaths | 09b30392ebe8445e60da40a129023b7c430c82ea | [
"MIT"
] | null | null | null | _sections/probability.md | iotaxi/AppliedMaths | 09b30392ebe8445e60da40a129023b7c430c82ea | [
"MIT"
] | null | null | null |
---
topic: notatopic
short_name: probability
name: probability chapter
position: study second
---
probability blog and access to we and probs
| 17.875 | 44 | 0.783217 | eng_Latn | 0.990418 |
ed8ba7dbbe26b20b1261537ec2ec98b28f35f99d | 913 | md | Markdown | README.md | Martha-NYagikuyu/Moringa-ip-date-project | b92d3f36038d3278e34dc1dfff4c668031f36bf3 | [
"MIT"
] | null | null | null | README.md | Martha-NYagikuyu/Moringa-ip-date-project | b92d3f36038d3278e34dc1dfff4c668031f36bf3 | [
"MIT"
] | null | null | null | README.md | Martha-NYagikuyu/Moringa-ip-date-project | b92d3f36038d3278e34dc1dfff4c668031f36bf3 | [
"MIT"
] | null | null | null |
# Akan age and gender Project
This is a clone of my Akan age and gender project landing page.
By Martha Munzinga Nyagikuyu
## Description
The purpose of my Akan age and gender project is to demonstrate Akan naming in Ghanaian culture in Ghana. The site contains
five sections: about my project, the title, heading, description, the age and gender form, and the submit button.
## Setup requirements
<ol>
<li>Clone the project using git clone. If you are not able to clone it, you can download the files as a ZIP archive.</li>
<li>You need to have a laptop or a mobile phone to access the Akan age and gender project.</li>
</ol>
## Technologies Used
HTML, CSS and JavaScript are used in the project.
## Support and Contact Details
If you have any issue, do not hesitate to get in touch with me through
email: [email protected]
Please feel free to make any contributions to the code.
Happy coding in your career | 31.482759 | 126 | 0.776561 | eng_Latn | 0.99804 |
ed8c3ef3c648daf63fcb4321aebbe81d98c18276 | 16,196 | md | Markdown | _posts/2014-09-26-他来自网络:《互联网之子》.md | NodeBE4/oped2 | 1c44827a3b1e06164b390ff9abfae728b744dd4f | [
"MIT"
] | 1 | 2020-09-16T02:05:30.000Z | 2020-09-16T02:05:30.000Z | _posts/2014-09-26-他来自网络:《互联网之子》.md | NodeBE4/oped2 | 1c44827a3b1e06164b390ff9abfae728b744dd4f | [
"MIT"
] | null | null | null | _posts/2014-09-26-他来自网络:《互联网之子》.md | NodeBE4/oped2 | 1c44827a3b1e06164b390ff9abfae728b744dd4f | [
"MIT"
] | 1 | 2020-11-04T04:49:44.000Z | 2020-11-04T04:49:44.000Z |
---
layout: post
title: "He Came from the Internet: The Internet's Own Boy (互联网之子)"
date: 2014-09-26T09:39:04+02:00
author: 楚辛
from: https://pao-pao.net/article/174
tags: [ 泡泡 ]
categories: [ 泡泡 ]
---
![Film still](https://pao-pao.net/sites/pao-pao.net/files/styles/article_detail/public/2674913058_9d1911fe3c_z.jpg?itok=j_GpAv0V)

[Image source: Quinn Norton](https://www.flickr.com/photos/quinn/2674913058)

# He Came from the Internet: *The Internet's Own Boy*

By 楚辛, submitted Friday, 09/26/2014 - 09:39

**(Pao-Pao special contribution) Hooked on computers at age three; built The Info Network, an embryonic Wikipedia, from his bedroom at twelve; helped build the RSS feed standard at fourteen; entered Stanford at nineteen, dropped out a year later to co-found the social news site Reddit, and sold his stake before his twentieth birthday... It reads like a classic prodigy story. But its protagonist, the 1980s-born Aaron Swartz, had no interest in this easily copied model of startup success. He had more important things to do: keep information flowing freely, and use technology to make the world better. Yet after being driven into a dead end in his confrontation with the machinery of the state, he ended his own life in his New York apartment at twenty-six, in 2013.**

Whether for Swartz's followers or for viewers hearing his name for the first time, the documentary *The Internet's Own Boy* is a moving film. It opens with home videos from Swartz's childhood, showing a curious, quick-witted boy who learned the alphabet backwards and kept trying new possibilities. His talent showed even more in his teenage years: he founded the Open Library and helped build the Creative Commons licensing framework, all to advance the free sharing of information online. Selling his Reddit stake at nineteen put him in the spotlight. Yet this rising internet star had little interest in the entrepreneurial path, and turned instead to activism for shared network resources and information freedom. That turn set the stage for the reversal of his fate. In 2011, Swartz was arrested by police on the street. He was accused of illegally bulk-downloading academic materials from JSTOR, the American digital library of academic journals, in a volume equal to roughly 80% of the entire database's holdings. After investigating, JSTOR dropped its complaint, but the intervention of the US government complicated matters: Swartz faced 13 charges carrying a possible 35 years in prison and a $1 million fine. He was released after posting $100,000 bail, but refused to plead guilty, contesting the charges while continuing his activism for a free internet. So it went until early 2013, when the news of his suicide broke.

## An Individual's Struggle Against the Machinery of the State

The interviews in the film speak with one voice: it was the machinery of the state that drove Swartz to his death; under the pressure the state brought to bear by every available means, he finally could not hold up. But why did the US government pay such attention to a 26-year-old? As one interviewee, McGill University professor Coleman, asks: why should downloading academic journals enter the criminal system at all, let alone carry such a sentence? Behind the film's story of one man's fight against the state, what is the larger backdrop?

Director Brian Knappenberger has himself supplied the answers. Knappenberger works mainly in documentary and is not especially prolific. In one interview he said: "The internet was not built for security; it was built for the free flow of information. That brings many insecurities, and my films explore those conflicts and frictions." Looking back at his film from two years earlier about the hacker collective Anonymous, *We Are Legion: The Story of the Hacktivists*, we suddenly find the broader background of Swartz's story.

*We Are Legion* documents the rise of Anonymous and the campaigns of civil disobedience they waged online and off. It dovetails perfectly with the opening of *The Internet's Own Boy*, which quotes the American writer Thoreau, an emblem of civil disobedience, questioning unjust laws. In a Sundance Film Festival interview the director pointed to the connection between the two films: "In the year after *We Are Legion*, the crackdowns kept growing. This story shows a darker side than the earlier film; it lets us see what repression can lead to." Swartz's father likewise notes in the film the government's intent to make an example of his son.

The journal-downloading affair was neither the first time Swartz drew official American attention, nor the last. He had downloaded and published more than 2.7 million US federal court documents from the PACER database, and Demand Progress, the anti-censorship organization he founded, was a mainstay of the campaigns against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA). The bills ultimately failed, a historic victory in the fight to protect and expand open information. In the US government's book of cautionary examples, then, Swartz may have been one of the best candidates.

## Able to Withstand Moral Scrutiny

Knappenberger has said that Swartz's life is not merely an eye-catching film but a summons, a call for others to continue the work he had only begun. Yet the film seems to deliberately play down Swartz's ties to Anonymous, and interviewee Coleman, the McGill professor, tries to distinguish Swartz's actions from theirs. In personalizing the whole affair and setting an individual against American justice and public power, the director no doubt had his reasons. But there is no avoiding it: Swartz was one of Anonymous; he was a son of Anonymous, indeed their idol. Like every Anonymous member investigated by the FBI in *We Are Legion*, Swartz never set out to destroy anything; he was using extreme methods to protest peacefully. Like them, he firmly believed in the rightness of his actions and wanted to change the world. As an Anonymous communiqué put it: many of our methods may not be legal, but they absolutely withstand moral scrutiny. As *We Are Legion* closes: "History tells us that change never comes with flowers. If they (the government) say I am a criminal, then I am willing to be a criminal." Is Swartz's story not the best confirmation? He was at once Anonymous's hero and its sacrifice. And precisely because he represented a community, after the news of his death Anonymous declared cyber war on the US government, singling out the website of the US Sentencing Commission for attack. To Anonymous, Swartz fought for them in life and, in the end, died for them. Less than a year after his death, the hacker Jeremy Hammond was sentenced to ten years in prison for breaching the strategic forecasting firm Stratfor and publishing the material on WikiLeaks. In a public statement Hammond said he had never regretted it: "The government will never be forgiven, and Swartz will never be forgotten."

Beyond leaving out voices from the other side, the film also goes lightly over Swartz's experience and state of mind during his two years under indictment. The director's light treatment of Swartz's depression has drawn some criticism. In a 2007 talk, Swartz admitted that during a professional low he had contemplated taking his own life, and his blog reflects the pain he bore: "You feel the pain like threads shuttling through your mind; you lash your body and search for some release, but there is no way out." Going back to the 2008 PACER affair, the FBI's surveillance of him and his family, obtaining his personal information through Amazon and monitoring his Facebook account, weighed on him considerably. His younger brother says in an interview that the investigations left Swartz deeply depressed. Yet what connection there was between Swartz's depression and his final choice of suicide, we have no way to know.

## Threats and Resistance

One suddenly thinks of Jon Postel, creator of the global domain name system. When he saw his invention being steadily commercialized, with the US government backing private companies in the buying and selling of domain names, he and his team launched a counterattack to take back control of the domain name system. This was not the equal, open internet they had originally stood for. The US government naturally noticed the anomaly at once, and Postel soon faced legal and financial threats. The government's terms: stop immediately, and the act would be classified as an experiment and not pursued. Postel stopped within a week. The government did not prosecute him, but it promptly issued new policy placing the domain name system fully under its control. Less than a year later, this hero of the internet died of illness in Los Angeles. If Postel and his team had not stopped, could they have recovered that equal and open internet? And if Swartz had pleaded guilty, where would things stand?

In 2013, Swartz was inducted into the Internet Hall of Fame as an innovator. The world has seen what he contributed to a global, open internet. The internet's own boy: he earned the name.

The Taiwanese writer Lung Ying-tai once wrote: Child, have you ever thought that the freedom and happiness you enjoy today exist because, before you, people protested, struggled, fought, and sacrificed? If you feel that other people's misfortune has nothing to do with you, then one day, when misfortune falls on you, no one will care either. I believe the only safe society is one in which everyone is willing to bear a share of the burden; otherwise, we will all live on in danger and fear.
| 45.751412 | 640 | 0.589281 | yue_Hant | 0.183125 |
ed8dd54da764cca55187d1b20fd301409ee41750 | 669 | md | Markdown | README.md | JeroenKnoops/npp-rs | 2b607bbfbe2a497a17807d46e9dc778e85ef774c | [
"MIT"
] | null | null | null | README.md | JeroenKnoops/npp-rs | 2b607bbfbe2a497a17807d46e9dc778e85ef774c | [
"MIT"
] | null | null | null | README.md | JeroenKnoops/npp-rs | 2b607bbfbe2a497a17807d46e9dc778e85ef774c | [
"MIT"
] | null | null | null | # npp-rs

This repository provides Rust bindings to the NVIDIA NPP libraries.
Currently, a subset of the image-processing functionality is implemented for use in neural-network processing. This crate is developed for CUDA 10.2; CUDA 11.x support will be added later.
This crate is supported on Linux and Windows 10. When building on Linux, set the CUDA_INSTALL_DIR environment variable to point to the root of your CUDA installation. On Windows 10, check whether the CUDA_PATH environment variable has been set by the installer; otherwise, add it manually, pointing to the root of your CUDA installation.
| 95.571429 | 329 | 0.807175 | eng_Latn | 0.997 |
ed8e78d9a622dabba0b23bfa4d10c276129ef997 | 35 | md | Markdown | README.md | fdagreat/flutter_app | a99750233dca3e47c84c04aebae47bf11d934426 | [
"MIT"
] | null | null | null | README.md | fdagreat/flutter_app | a99750233dca3e47c84c04aebae47bf11d934426 | [
"MIT"
] | null | null | null | README.md | fdagreat/flutter_app | a99750233dca3e47c84c04aebae47bf11d934426 | [
"MIT"
] | null | null | null | # flutter_app
My first Flutter app.
| 11.666667 | 20 | 0.8 | eng_Latn | 0.993392 |
ed8eb89e6174033df3cee03b625310e37e81b348 | 1,537 | md | Markdown | projects/Sudoku-Solver.md | ShengT-Jin/ShengT-Jin.github.io | 3f93989ba4677082c803a78d5f47b45088ef80a6 | [
"MIT"
] | null | null | null | projects/Sudoku-Solver.md | ShengT-Jin/ShengT-Jin.github.io | 3f93989ba4677082c803a78d5f47b45088ef80a6 | [
"MIT"
] | null | null | null | projects/Sudoku-Solver.md | ShengT-Jin/ShengT-Jin.github.io | 3f93989ba4677082c803a78d5f47b45088ef80a6 | [
"MIT"
] | null | null | null | ---
layout: project
type: project
image: images/sudoku3.jpg
title: Sudoku Solver
permalink: projects/sudoku
# All dates must be YYYY-MM-DD format!
date: 2019-03-15
labels:
- Java
summary: A program built for ICS 211 that automatically solves a given sudoku.
---
<img class="ui medium right floated rounded image" src="../images/sudoku1.jpg">
This project is one of the interesting assignments from ICS 211, for which we had to come up with an algorithm that solves a given sudoku. The requirements were to use a recursive function that can solve the sudoku block by block, and to create a test to check that the function has no bugs and actually solves the sudoku.
The hardest part of this project was coming up with a recursive algorithm that could solve the sudoku; I spent many hours thinking it through and breaking down which step was needed in each line of code. After building up a general idea of the algorithm, I had to write all the classes and functions the program needed, keeping in mind that the sudoku only accepts hexadecimal digits and that its size is 16x16. A sketch of the idea follows below.
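Here is an illustrative backtracking sketch of that idea in JavaScript (the actual course project was written in Java, so this is only my reconstruction). The board is a 16x16 array where 0 marks an empty cell and the values 1 to 16 stand for the hex digits 0 to F:
```javascript
const SIZE = 16;
const BOX = 4; // the sudoku is divided into 4x4 boxes

// Check that placing `val` at (row, col) breaks no sudoku rule.
function isValid(board, row, col, val) {
  for (let i = 0; i < SIZE; i++) {
    if (board[row][i] === val || board[i][col] === val) return false;
  }
  const r0 = row - (row % BOX), c0 = col - (col % BOX);
  for (let r = r0; r < r0 + BOX; r++) {
    for (let c = c0; c < c0 + BOX; c++) {
      if (board[r][c] === val) return false;
    }
  }
  return true;
}

// Recursively fill the board cell by cell; returns true once solved.
function solve(board, cell = 0) {
  if (cell === SIZE * SIZE) return true; // every cell is filled
  const row = Math.floor(cell / SIZE), col = cell % SIZE;
  if (board[row][col] !== 0) return solve(board, cell + 1); // given clue
  for (let val = 1; val <= SIZE; val++) {
    if (isValid(board, row, col, val)) {
      board[row][col] = val;
      if (solve(board, cell + 1)) return true;
      board[row][col] = 0; // backtrack and try the next value
    }
  }
  return false; // no value fits; a previous guess was wrong
}
```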
After this project, I feel I learned a lot about recursive algorithms, in particular how every step works during the recursive process. I believe this is a very important experience for my career, because the runtime of a recursive solution can differ from that of the usual iterative approach to a problem.
Source: <a href="https://github.com/ShengT-Jin/sudokusolver"><i class="large github icon"></i>sudoku/solver</a>
| 66.826087 | 408 | 0.778139 | eng_Latn | 0.999498 |
ed8ec84c6111dd7eb25796f988b9d6b556dcb03e | 999 | md | Markdown | README-cOctopusServerGuestAuthentication.md | iserje/OctopusDSC | a56ac11a5bf89c674c8376c79f321bee4ac32a4c | [
"Apache-2.0"
] | 68 | 2015-01-12T06:47:00.000Z | 2021-09-09T14:31:29.000Z | README-cOctopusServerGuestAuthentication.md | iserje/OctopusDSC | a56ac11a5bf89c674c8376c79f321bee4ac32a4c | [
"Apache-2.0"
] | 170 | 2015-07-01T05:57:27.000Z | 2022-03-24T09:30:45.000Z | README-cOctopusServerGuestAuthentication.md | iserje/OctopusDSC | a56ac11a5bf89c674c8376c79f321bee4ac32a4c | [
"Apache-2.0"
] | 69 | 2015-02-09T11:31:09.000Z | 2021-11-11T10:27:50.000Z | # README-cOctopusServerGuestAuthentication
## Sample
First, ensure the OctopusDSC module is on your `$env:PSModulePath`. Then you can create and apply configuration like this.
```PowerShell
Configuration SampleConfig
{
    Import-DscResource -Module OctopusDSC

    Node "localhost"
    {
        cOctopusServerGuestAuthentication "Enable guest account login"
        {
            InstanceName = "OctopusServer"
            Enabled = $true
        }
    }
}

SampleConfig

Start-DscConfiguration .\SampleConfig -Verbose -wait
Test-DscConfiguration
```
## Properties
| Property | Type | Default Value | Description |
| --------------------| ------------ | -----------------| ------------|
| `InstanceName` | `string` | | The name of the Octopus Server instance. Use `OctopusServer` by convention unless you have more than one instance. |
| `Enabled` | `boolean` | `$false` | Whether to enable the read-only guest account. |
| 28.542857 | 174 | 0.605606 | eng_Latn | 0.750956 |
ed90ddea061b823001bb9caca6a45ea1d122d7b2 | 41 | md | Markdown | file/src/hookComponents/README.md | Sukarnascience/React_Learning | 023b6622707356680cfb061d2ea78a9f0c6bdb25 | [
"MIT"
] | null | null | null | file/src/hookComponents/README.md | Sukarnascience/React_Learning | 023b6622707356680cfb061d2ea78a9f0c6bdb25 | [
"MIT"
] | null | null | null | file/src/hookComponents/README.md | Sukarnascience/React_Learning | 023b6622707356680cfb061d2ea78a9f0c6bdb25 | [
"MIT"
] | null | null | null | > It's the same as what is in the demo folder.
| 20.5 | 40 | 0.756098 | eng_Latn | 1.00001 |
ed914d46cac92dc53a23f458d3c4841d18d0970b | 453 | md | Markdown | README.md | JadenFurtado/myCodeEditor | 756b4f1325800c666ec20dc3a87f563567b3b652 | [
"MIT"
] | null | null | null | README.md | JadenFurtado/myCodeEditor | 756b4f1325800c666ec20dc3a87f563567b3b652 | [
"MIT"
] | null | null | null | README.md | JadenFurtado/myCodeEditor | 756b4f1325800c666ec20dc3a87f563567b3b652 | [
"MIT"
] | null | null | null | # My Code Editor
This code editor was built by me for fun. It has a syntax prompt, autocomplete, a syntax highlighter, and different themes and font sizes. It can also take input and display the result.

Syntax prompt and highlighter:

| 45.3 | 176 | 0.799117 | eng_Latn | 0.86468 |
ed917a7db6de7cbf5105af5e2761e00c5296c198 | 589 | md | Markdown | src/markdown/posts/status-32.md | tdmnco/kaspertidemann.dk | f96278aee14473d93aa31c19b48e2c28921d05c1 | [
"MIT"
] | null | null | null | src/markdown/posts/status-32.md | tdmnco/kaspertidemann.dk | f96278aee14473d93aa31c19b48e2c28921d05c1 | [
"MIT"
] | 1 | 2020-07-07T20:21:58.000Z | 2020-07-07T20:21:58.000Z | src/markdown/posts/status-32.md | tdmnco/kaspertidemann.dk | f96278aee14473d93aa31c19b48e2c28921d05c1 | [
"MIT"
] | null | null | null | ---
author: Kasper Tidemann
created: 2013-08-21
date: August 21, 2013
template: post
title: Status \#32
---
If you want to see what I see, you have to step out of a front door on Union Street and look to the right, where the city's oldest ice cream shop is located.
You have to make your way down Russian Hill, which shines against the blue sky. And then you have to end up at Mikkeller Bar, where you order a Limfjordsporter that reminds you of your homeland.
Then you will see what I see from where I am right now.



| 32.722222 | 177 | 0.740238 | dan_Latn | 0.997384 |
ed91a99cdab2749047a4af51051bbb889cda9ec6 | 522 | md | Markdown | curriculum/challenges/espanol/13-relational-databases/learn-bash-and-sql-by-building-a-bike-rental-shop/build-a-bike-rental-shop.md | palash-signoz/freeCodeCamp | db33f49b7b775df55e465243f244d648cd75aff5 | [
"BSD-3-Clause"
] | 2 | 2019-07-25T08:44:38.000Z | 2019-07-25T08:44:40.000Z | curriculum/challenges/espanol/13-relational-databases/learn-bash-and-sql-by-building-a-bike-rental-shop/build-a-bike-rental-shop.md | palash-signoz/freeCodeCamp | db33f49b7b775df55e465243f244d648cd75aff5 | [
"BSD-3-Clause"
] | 169 | 2020-10-13T16:49:51.000Z | 2020-12-08T22:53:48.000Z | curriculum/challenges/espanol/13-relational-databases/learn-bash-and-sql-by-building-a-bike-rental-shop/build-a-bike-rental-shop.md | palash-signoz/freeCodeCamp | db33f49b7b775df55e465243f244d648cd75aff5 | [
"BSD-3-Clause"
] | null | null | null | ---
id: 5f5b969a05380d2179fe6e18
title: Build a Bike Rental Shop
challengeType: 12
helpCategory: Backend Development
url: https://github.com/freeCodeCamp/learn-bash-and-sql-by-building-a-bike-rental-shop
dashedName: build-a-bike-rental-shop
---
# --description--
In this 210-lesson course, you will build an interactive Bash program that stores rental information for your bike rental shop using PostgreSQL.
# --instructions--
# --hints--
# --seed--
# --solutions--
| 24.857143 | 172 | 0.768199 | spa_Latn | 0.648456 |
ed92f5dae648cc09c8588bc05112f30ba56e6aff | 109 | md | Markdown | data/user/discourse.md | kpfefferle/ember-website | 91ccc3f0c45fa5f3c05dc99df8323360fe82a415 | [
"MIT"
] | 64 | 2018-06-14T13:32:41.000Z | 2022-03-02T11:39:43.000Z | data/user/discourse.md | kpfefferle/ember-website | 91ccc3f0c45fa5f3c05dc99df8323360fe82a415 | [
"MIT"
] | 652 | 2018-06-15T18:03:09.000Z | 2022-03-19T19:19:31.000Z | data/user/discourse.md | kpfefferle/ember-website | 91ccc3f0c45fa5f3c05dc99df8323360fe82a415 | [
"MIT"
] | 185 | 2018-06-14T13:58:53.000Z | 2022-03-25T09:45:51.000Z | ---
url: 'http://www.discourse.org'
image: discourse.png
name: Discourse
added: 2013-03-04T16:34:52.000Z
---
| 15.571429 | 31 | 0.697248 | yue_Hant | 0.216095 |
ed93b757be3f0b1fd6e4f95c5d984168cf78c3b4 | 3,173 | md | Markdown | src/pages/machine-learning/glossary/index.md | hiparthparth/guide | 0d426683452b4e07bb9db2b4106a9a7da962555a | [
"BSD-3-Clause"
] | null | null | null | src/pages/machine-learning/glossary/index.md | hiparthparth/guide | 0d426683452b4e07bb9db2b4106a9a7da962555a | [
"BSD-3-Clause"
] | null | null | null | src/pages/machine-learning/glossary/index.md | hiparthparth/guide | 0d426683452b4e07bb9db2b4106a9a7da962555a | [
"BSD-3-Clause"
] | null | null | null | ---
title: Glossary
---
## Glossary
A quick one or two sentences describing common terms. See individual pages for
more details.
- **Machine Learning** - Intersection of statistics and computer science in
order to teach computers to perform tasks without explicitly being programmed.
- **Deep Learning** - An umbrella term for machine learning methods based on learning data representations as opposed to algorithms based on fulfilling a given task. It includes architectures such as deep neural networks, deep belief networks and recurrent neural networks.
- **Neuroevolution** - An umbrella term for machine learning methods based on generating neural networks through weight, bias, and architecture through random mutations of the network. The most common forms of neuroevolution are Neuroevolution of Augmenting Topologies([NEAT](https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies)) and Interactively Constrained Neuro-Evolution ([ICONE](http://ikw.uni-osnabrueck.de/~neurokybernetik/media/pdf/2012-1.pdf)).
- **Statistical Learning** - The use of machine learning with the goal of
statistical inference, whereby you draw conclusions about the data rather than
focusing on prediction accuracy.
- **Supervised Learning** - Using historical data to predict the future. Example: Using historical data of prices at which houses were sold to predict the price in which your house will be sold. Regression and Classification come under supervised learning.
- **Unsupervised Learning** - Finding patterns in unlabelled data. Example: Grouping customers by purchasing behaviour. Clustering comes under unsupervised learning.
- **Reinforcement learning** - Using a simulated or real environment in which a machine learning algorithm is given input and sparse rewards to build a model to predict actions. Reinforcement learning has been used [to train virtual robots to balance themselves](https://blog.openai.com/competitive-self-play/) and [to beat games designed for humans](https://blog.openai.com/openai-baselines-dqn/).
- **Regression** - A machine learning technique used to predict continuous values. Linear Regression is one of the most popular regression algorithms.
- **Classification** - A machine learning technique used to predict discrete values. Logistic Regression is one of the most popular classification algorithms.
- **Association Rule learning** - A rule-based machine learning method for discovering interesting relations between variables in large databases.
```
f: x -> y
Here 'f' is a function that takes 'x' as input and produces 'y' as output.
If the output value 'y' is a real number / continuous value then the function
is a regression technique.
If the output value 'y' is a discrete / categorical value then the function is a classification technique.
```
- **Clustering** - Grouping of unlabelled data. Identifying patterns using statistics.
### More Information:
- [Glossary of Terms - Robotics](http://robotics.stanford.edu/~ronnyk/glossary.html)
- [Glossary of Terms - Machine Learning, Statistics and Data Science](https://www.analyticsvidhya.com/glossary-of-common-statistics-and-machine-learning-terms/)
| 83.5 | 475 | 0.789159 | eng_Latn | 0.989586 |
ed93c0ff018bcabe38af5b6c6e706ca0278e5f6d | 464 | md | Markdown | widget/README.md | Paybook/sync-code-samples | a0e5911d82017dddbaef1c2d92321109af8c43f0 | [
"MIT"
] | 1 | 2019-12-18T00:36:13.000Z | 2019-12-18T00:36:13.000Z | widget/README.md | Paybook/sync-code-samples | a0e5911d82017dddbaef1c2d92321109af8c43f0 | [
"MIT"
] | 3 | 2021-06-07T20:41:12.000Z | 2022-01-10T19:40:52.000Z | widget/README.md | Paybook/sync-code-samples | a0e5911d82017dddbaef1c2d92321109af8c43f0 | [
"MIT"
] | 3 | 2020-05-14T16:49:15.000Z | 2021-12-21T18:55:33.000Z | # Syncfy Widget
- [Overview](https://github.com/Paybook/sync-code-samples/tree/master/widget/overview.md)
- [Configuration](https://github.com/Paybook/sync-code-samples/tree/master/widget/config.md)
- [Methods](https://github.com/Paybook/sync-code-samples/tree/master/widget/methods.md)
- [Events](https://github.com/Paybook/sync-code-samples/tree/master/widget/events.md)
- [Examples](https://github.com/Paybook/sync-code-samples/tree/master/widget/examples.md)
| 58 | 92 | 0.773707 | yue_Hant | 0.467235 |
ed94185cf3e21209ab8b058b83155fe10c40b393 | 836 | md | Markdown | packages/cs-docs/docs/references/classes/_cs_map_src_components_feature_details_property_details_.propertydetails.md | damylen/csnext | e9d7b720d2e2da1f0f4e697974aad08aaca9ceee | [
"MIT"
] | 1 | 2018-04-19T08:42:38.000Z | 2018-04-19T08:42:38.000Z | packages/cs-docs/docs/references/classes/_cs_map_src_components_feature_details_property_details_.propertydetails.md | damylen/csnext | e9d7b720d2e2da1f0f4e697974aad08aaca9ceee | [
"MIT"
] | 38 | 2017-11-01T07:28:43.000Z | 2020-04-16T19:00:21.000Z | packages/cs-docs/docs/references/classes/_cs_map_src_components_feature_details_property_details_.propertydetails.md | damylen/csnext | e9d7b720d2e2da1f0f4e697974aad08aaca9ceee | [
"MIT"
] | 5 | 2017-06-08T10:49:47.000Z | 2019-10-11T09:49:45.000Z | # Class: PropertyDetails
## Hierarchy
* **PropertyDetails**
## Properties
### `Optional` allowLegend
• **allowLegend**? : *boolean*
Defined in cs-map/src/components/feature-details/property-details.ts:11
___
### `Optional` key
• **key**? : *string*
Defined in cs-map/src/components/feature-details/property-details.ts:7
___
### `Optional` legends
• **legends**? : *[LayerLegend](../interfaces/_cs_map_src_classes_layer_legend_.layerlegend.md)[]*
Defined in cs-map/src/components/feature-details/property-details.ts:8
___
### `Optional` type
• **type**? : *[PropertyType](_cs_data_src_classes_property_type_.propertytype.md)*
Defined in cs-map/src/components/feature-details/property-details.ts:9
___
### `Optional` value
• **value**? : *any*
Defined in cs-map/src/components/feature-details/property-details.ts:10
| 18.173913 | 98 | 0.726077 | eng_Latn | 0.30537 |
ed94200d709f863034c770c2138861daa04b6378 | 2,205 | md | Markdown | contact/index.md | emadRad/Deep-MI.github.io | 02861a69b303ec3063d72eb4402a81518882f185 | [
"MIT"
] | 1 | 2020-04-01T10:41:50.000Z | 2020-04-01T10:41:50.000Z | contact/index.md | emadRad/Deep-MI.github.io | 02861a69b303ec3063d72eb4402a81518882f185 | [
"MIT"
] | 12 | 2020-03-20T12:18:35.000Z | 2021-08-11T08:53:26.000Z | contact/index.md | emadRad/Deep-MI.github.io | 02861a69b303ec3063d72eb4402a81518882f185 | [
"MIT"
] | 14 | 2020-03-20T11:30:43.000Z | 2021-09-02T20:27:55.000Z | ---
title: Contact DeepMI
layout: default
group: contact
---
# Contact
<div class="row">
<div class="col-md-4">
<h4>Martin Reuter</h4>
Director, Medical Image Analysis <br>
DZNE, Bonn, Germany <br>
email: martin.reuter (at) dzne.de <br>
tel: +49 228 43302 380<br>
<br>
Assistant Professor of Radiology <br>
Assistant Professor of Neurology <br>
Harvard Medical School, Boston, USA <br>
Martinos Center for Biomedical Imaging, MGH, Boston, USA <br>
email: mreuter (at) nmr.mgh.harvard.edu <br>
tel: +49 228 43302 380
</div>
</div>
# Addresses
<div class="row">
<div class="col-md-4">
<h4>DZNE Office Address</h4>
German Center for Neurodegenerative Diseases (DZNE)<br>
Venusberg-Campus 1, Gebäude 99 <br>
53127 Bonn, Germany
</div>
<div class="col-md-4">
<h4>MGH Office Address</h4>
Martinos Center for Biomedical Imaging<br>
Massachusetts General Hospital<br>
149 13th St, Office 10119 <br>
Charlestown, MA 02129, USA
</div>
</div>
<!-- Our lab is in on the UCSF Mission Bay campus in Genentech Hall (600 16th St, San Francisco, CA 94158)
-->
### The DZNE Campus can be reached:
[Map DZNE Building](https://g.page/dzne_de)
* #### by public transportation:
* Take Bus 601 (e.g. from Bonn central station) to the endpoint "Uniklinikum Süd"
* Walk straight, left at the roundabout, after 100 m the DZNE will be on your right (bldg. with green/orange panels)
* Talk to the receptionist and someone will pick you up.
* #### by car:
* Enter Uniklinikum, keep right at 1st roundabout, park at Parkhaus Süd
* From there walk left and the DZNE building is on the right
### Martinos Center MGH Charlestown Campus:
[Map Martinos Center](https://goo.gl/maps/sPu64zgHfiWZkf4E7)
* #### by public transportation:
* Take the red line to Charles/MGH
* Walk through or around MGH main campus to where the shuttle buses stop
* Take the Charlestown shuttle to the last stop (Building 149)
* Check in with reception, ID (driver's license or passport required)
* #### by car:
* There is a parking garage right next to building 149
* From there cross 13th street, enter bldg. 149
* Check in with reception, ID (driver's license or passport required)
| 25.639535 | 118 | 0.710204 | eng_Latn | 0.920152 |
ed942787a74698ac37899f3dcb1d0caba3c9d166 | 3,010 | md | Markdown | docs/reporting-services/report-server-web-service/methods/subscription-and-delivery-methods.md | PiJoCoder/sql-docs | 128b255506c47eb05e19770a6bf5edfbdaa817ec | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-04T18:16:27.000Z | 2021-03-04T18:16:27.000Z | docs/reporting-services/report-server-web-service/methods/subscription-and-delivery-methods.md | PiJoCoder/sql-docs | 128b255506c47eb05e19770a6bf5edfbdaa817ec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/reporting-services/report-server-web-service/methods/subscription-and-delivery-methods.md | PiJoCoder/sql-docs | 128b255506c47eb05e19770a6bf5edfbdaa817ec | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-02-12T20:03:10.000Z | 2022-02-12T20:03:10.000Z | ---
title: "Subscription and Delivery Methods | Microsoft Docs"
ms.custom: ""
ms.date: "03/06/2017"
ms.prod: "sql-server-2016"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "docset-sql-devref"
- "reporting-services-native"
ms.tgt_pltfrm: ""
ms.topic: "reference"
applies_to:
- "SQL Server 2016 Preview"
helpviewer_keywords:
- "reports [Reporting Services], delivering"
- "delivery [Reporting Services]"
- "methods [Reporting Services], subscription and delivery"
- "subscriptions [Reporting Services], about subscriptions"
ms.assetid: a8637501-1817-4ccc-b07d-dd9ed5608805
caps.latest.revision: 42
author: "guyinacube"
ms.author: "asaxton"
manager: "erikre"
---
# Subscription and Delivery Methods
You can use these methods to create and manage subscriptions and delivery of catalog items.
|Method|Action|
|------------|------------|
|<xref:ReportService2010.ReportingService2010.CreateDataDrivenSubscription%2A>|Creates a data-driven subscription for a specified item.|
|<xref:ReportService2010.ReportingService2010.GetDataDrivenSubscriptionProperties%2A>|Returns the properties for a data-driven subscription.|
|<xref:ReportService2010.ReportingService2010.CreateSubscription%2A>|Creates a subscription for the specified item in the report server database or SharePoint library.|
|<xref:ReportService2010.ReportingService2010.DeleteSubscription%2A>|Deletes a subscription from the report server database.|
|<xref:ReportService2010.ReportingService2010.GetSubscriptionProperties%2A>|Returns the properties of a subscription.|
|<xref:ReportService2010.ReportingService2010.ListMySubscriptions%2A>|Retrieves a list of subscriptions that have been created by the current user of the report server or SharePoint site for the given catalog item.|
|<xref:ReportService2010.ReportingService2010.ListSubscriptions%2A>|Retrieves a list of subscriptions that have been created for a given item.|
|<xref:ReportService2010.ReportingService2010.PrepareQuery%2A>|Returns a data set containing the fields retrieved by the delivery query for a data-driven subscription.|
|<xref:ReportService2010.ReportingService2010.SetDataDrivenSubscriptionProperties%2A>|Sets the values of properties of a data-driven subscription.|
|<xref:ReportService2010.ReportingService2010.SetSubscriptionProperties%2A>|Sets the values of properties of a subscription.|
## See Also
[Building Applications Using the Web Service and the .NET Framework](../../../reporting-services/report-server-web-service/net-framework/building-applications-using-the-web-service-and-the-net-framework.md)
[Report Server Web Service](../../../reporting-services/report-server-web-service/report-server-web-service.md)
[Report Server Web Service Methods](../../../reporting-services/report-server-web-service/methods/report-server-web-service-methods.md)
[Technical Reference (SSRS)](../../../reporting-services/technical-reference-ssrs.md)
| 62.708333 | 218 | 0.769767 | yue_Hant | 0.485298 |
ed9452aae72937512f92fadb851a23efb2b68d8f | 83 | md | Markdown | README.md | YashNaveenSinha/food-order-app | 6e75e01fc6037b3b292c03672b1631d83ccc1496 | [
"MIT"
] | null | null | null | README.md | YashNaveenSinha/food-order-app | 6e75e01fc6037b3b292c03672b1631d83ccc1496 | [
"MIT"
] | null | null | null | README.md | YashNaveenSinha/food-order-app | 6e75e01fc6037b3b292c03672b1631d83ccc1496 | [
"MIT"
] | null | null | null | # food-order-app
This web app is used to order food items from a local restaurant.
| 27.666667 | 65 | 0.771084 | eng_Latn | 0.999409 |
ed946717ab5e6cdd7ad17c1d01ce5155dd7df8c9 | 3,544 | md | Markdown | README.md | JoseyKinnaman/Fetch-App | c32a6024da68d0a05cdd80f57bb2f7885a0a484c | [
"MIT"
] | null | null | null | README.md | JoseyKinnaman/Fetch-App | c32a6024da68d0a05cdd80f57bb2f7885a0a484c | [
"MIT"
] | null | null | null | README.md | JoseyKinnaman/Fetch-App | c32a6024da68d0a05cdd80f57bb2f7885a0a484c | [
"MIT"
] | null | null | null | # **Fetch**
#### Author: **Jozy Kinnaman**
#### July, 2020
### Description
_A pet naming app that helps pet owners and shelters find really fun, quirky, and hip names for their pets. Built as an Epicodus independent capstone project. Corresponds to the Fetcher API._

### Instructions for use:
Visit https://fetchnamer.netlify.app/, or run it locally:
1. Open Terminal (macOS) or PowerShell (Windows)
2. To download the project directory to your desktop, enter the following commands:
```
cd Desktop
git clone https://github.com/JoseyKinnaman/Fetch-App.git
cd fetch
```
3. To view the downloaded files, open them in a text editor or IDE of your choice.
* If you have VS Code, for example, you can open all of the files with the following command when your terminal is within the main project directory:
```
code .
```
4. Download Node.js and the node package manager if they are not already installed on your device. You can find further instructions [here](https://www.learnhowtoprogram.com/intermediate-javascript/getting-started-with-javascript-8d3b52cf-3755-481d-80c5-46f1d3a8ffeb/installing-node-js-14f2721a-61e0-44b3-af1f-73f17348c8f4).
5. Run npm install in your terminal to download the necessary dependencies, plugins, and modules.
```
npm install
```
6. The command npm run start will build and open the compiled code in a browser of your choice using a local host.
```
npm run start
```
### Known Bugs
No known bugs... just typos in the API.
### Support and Contact Information
Please contact [email protected] with questions.
### Technologies Used
* React
* Fetcher API (Ruby On Rails)
* JavaScript
* JSX
* HTML
* Git and GitHub
### Specs
| Spec | Input | Output |
| :------------- | :------------- | :------------- |
| **User can select a category** | User Input:"Old People Names" | Output: "List of names displayed" |
| **User can search by random** | User Input:"Random" | Output: "A random pet name displayed" |
**Stretch goals:**

| Spec | Input | Output |
| :------------- | :------------- | :------------- |
| **User can email suggestions to site owner** | User Input: "email response" | Output: "Your email has been received" |
| **User can donate to site owner** | User Input: "PayPal donation button pushed" | Output: "Directed to PayPal site" |
#### License
This software is licensed under the MIT license.
Copyright © 2020 **_Jozy Kinnaman_**
Capstone Plan...
Name of Project: Fetch!
_Project's Purpose or Goal: The app will let users search a comprehensive pet name database by different scopes to find the most unique, quirky and hip name for their pet. A tool for animal shelters and pet owners alike that uses a custom API._
List the absolute minimum features the project requires to meet this purpose or goal:
- API
- Search function
- Scopes
What tools, frameworks, libraries, APIs, modules and/or other resources (whatever is specific to your track, and your language) will you use to create this MVP? List them all here. Be specific.
- Ruby on Rails
- PostgreSQL
- React
- CSS
If you finish developing the minimum viable product (MVP) with time to spare, what will you work on next? Describe these features here: Be specific.
- Ability for users to submit suggestions to the database, with an approval process or email box.
- Images of real people and their pets.
- PayPal box for donating to a shelter.
What additional tools, frameworks, libraries, APIs, or other resources will these additional features require?
- PayPal add-on
- Google add-on
Is there anything else you'd like your instructor to know?
API is here: https://github.com/JoseyKinnaman/fetcher_api.git
| 34.745098 | 317 | 0.738431 | eng_Latn | 0.990453 |
ed94c61a511e875e1a18b66dfe91f405f25c06de | 978 | md | Markdown | site/faq/native-builder-fails.md | erexer/polyaxon | be14dae1ed56d568983388736bcdaf27a7baa4a4 | [
"Apache-2.0"
] | null | null | null | site/faq/native-builder-fails.md | erexer/polyaxon | be14dae1ed56d568983388736bcdaf27a7baa4a4 | [
"Apache-2.0"
] | null | null | null | site/faq/native-builder-fails.md | erexer/polyaxon | be14dae1ed56d568983388736bcdaf27a7baa4a4 | [
"Apache-2.0"
] | null | null | null | ---
title: "My builds fail in EKS cluster"
meta_title: "My builds fail in EKS cluster - FAQ"
meta_description: "If your builds are failing while using the native builder because of internet/dns resolution issues."
featured: false
custom_excerpt: "If your builds are failing while using the native builder because of internet/dns resolution issues."
author:
name: "Polyaxon"
slug: "Polyaxon"
website: "https://polyaxon.com"
twitter: "polyaxonAI"
github: "polyaxon"
visibility: public
status: published
tags:
- containers
- scheduling
---
If your builds are failing while using the native builder because of internet/dns resolution issues in an EKS cluster,
similar to this [issue](https://github.com/polyaxon/polyaxon/issues/442),
you should be aware that the latest versions of the AWS EKS-optimized AMI disable the docker bridge network by default.
To fix this issue please see the note on the native builder [integration page](/integrations/native-build/).
| 40.75 | 121 | 0.771984 | eng_Latn | 0.992718 |
ed9593c40bb1178f15ac05af9f3ddde2286e3e68 | 3,559 | md | Markdown | README.md | nvpro-samples/glsl_indexed_types_generator | f54572dbc6a8eb25ec319ef4e277b02a08e9a329 | [
"MIT"
] | 18 | 2019-04-20T10:30:13.000Z | 2021-01-24T14:34:28.000Z | README.md | nvpro-samples/glsl_indexed_types_generator | f54572dbc6a8eb25ec319ef4e277b02a08e9a329 | [
"MIT"
] | 2 | 2020-12-24T18:14:40.000Z | 2021-01-07T07:58:38.000Z | README.md | nvpro-samples/glsl_indexed_types_generator | f54572dbc6a8eb25ec319ef4e277b02a08e9a329 | [
"MIT"
] | 4 | 2019-09-27T14:22:50.000Z | 2021-01-07T07:53:46.000Z | # GLSL Generator for DescriptorSet Indexed Types
## About
This project serves as a proof of concept for how to simplify the usage of `VK_EXT_descriptor_indexing` and `GL_EXT_nonuniform_qualifier` within GLSL (typically used in combination with `VK_NV_ray_tracing`).
A script generates structures and function overloads to hide the code for indexing descriptor sets of samplers and textures.
Regular GLSL
``` glsl
// BEFORE

// we use a big resource table for descriptorset indexing
layout(set = 0, binding=0) uniform sampler2D res_tex2Ds[];

// let's make use of GL_EXT_buffer_reference2
layout(buffer_reference, scalar) buffer Material {
  // we want to be cache efficient
  uint16_t albedoTex;
  uint16_t normalTex;
  ...
};

layout(push_constant, scalar) uniform inputData {
  Material materials;
};

// example usage
Material mat = materials[idx];

// if idx varies non-uniformly (e.g. raytracing)
// our texture fetches can start to look pretty ugly
vec4 albedo = texture(res_tex2Ds[nonuniformEXT(uint(mat.albedoTex))], uv);
```
After including the file generated by the script, new types are available which wrap the index.
Furthermore there are many texture functions that make use of operator overloading (thanks to Jeff Bolz for the idea).
This makes the code easier to write and read.
``` glsl
// AFTER

// where descriptorsets start that are used for indexing
#define DIT_DSET_IDX 0
// this file is generated by the lua script
#include "sampler_indexed_types.glsl"

layout(buffer_reference, scalar) buffer Material {
  // new types are available
  // for example: struct sampler2D_u16 { uint16_t idx; }
  sampler2D_u16 albedoTex;
  sampler2D_u16 normalTex;
  ...
};

layout(push_constant, scalar) uniform inputData {
  Material materials;
};

// example usage
Material mat = materials[idx];

// much nicer looking access
vec4 albedo = texture(mat.albedoTex, uv);
```
## How To
The project serves as proof of concept, and is expected to be customized
or embedded in other languages for your own needs.
### Generator
The generator operates based on the provided configuration file
which can be passed from the commandline (if no argument is given it defaults to [`config.lua`](config.lua)):
`lua glsl_indexed_types_generator.lua myconfig.lua`
In the `example/` directory there is a test with generated [output](example/sampler_indexed_types.glsl).
The script requires a symbol file of all builtin functions. This can be
generated with:
`glslangValidator --dump-builtin-symbols --target-env vulkan1.1 dummy.frag > vk_fragment.txt`
> Note: This commandline option requires a specific glslang [feature](https://github.com/KhronosGroup/glslang/commit/805b09f9220300ff94f9e710921b3dc51173a4d4)
### Limitations
The script can generate wrappers for all samplers and textures. It can
cover most functions, but not all; for example, textureOffset needs to be used
with compile-time constants, which the function overloading cannot achieve.
To work around this, you can make use of the generated constructor functions:
` ... = textureOffset(c_sampler2D(myTex),...);`
Currently "image" resources are not generated yet, due to the
need of additional qualifiers ("writeonly" etc.).
Extensions have to be enabled explicitly prior to the include, using defines:
`#define DIT_GL_NV_shader_texture_footprint 1`
The extension handling, however, is not fully robust at the moment, as
glslang doesn't store all the necessary information in the symbol files,
but does some additional parsing validation that is not covered here.
| 31.776786 | 201 | 0.771003 | eng_Latn | 0.992496 |
ed965302600e80d54ee104e65d60463c47c67c84 | 68 | md | Markdown | README.md | SaberMK/BiBi | 328ec01d877beca3928ffdd27b85c017d666b80e | [
"MIT"
] | null | null | null | README.md | SaberMK/BiBi | 328ec01d877beca3928ffdd27b85c017d666b80e | [
"MIT"
] | null | null | null | README.md | SaberMK/BiBi | 328ec01d877beca3928ffdd27b85c017d666b80e | [
"MIT"
] | null | null | null | # Bibi
Bibi is a simple database. More description will be added later.
| 17 | 59 | 0.779412 | eng_Latn | 0.994821 |
ed965ef18f8490674a13e9a438ffe0288fc2d0a3 | 1,220 | md | Markdown | public/vendors/jszip/documentation/api_jszip/file_name.md | swoopfx/cm | 9369c3baa1657ef9acee6ce6aedb2c2931647a10 | [
"BSD-3-Clause"
] | 2 | 2019-06-04T05:43:02.000Z | 2020-02-22T11:00:09.000Z | public/vendors/jszip/documentation/api_jszip/file_name.md | swoopfx/cm | 9369c3baa1657ef9acee6ce6aedb2c2931647a10 | [
"BSD-3-Clause"
] | 7 | 2019-12-22T14:36:04.000Z | 2022-02-18T08:55:18.000Z | public/vendors/jszip/documentation/api_jszip/file_name.md | swoopfx/cm | 9369c3baa1657ef9acee6ce6aedb2c2931647a10 | [
"BSD-3-Clause"
] | 2 | 2020-09-12T06:15:50.000Z | 2020-11-17T22:28:01.000Z | ---
title: "file(name)"
layout: default
section: api
---
__Description__ : Get a file with the specified name. You can specify folders
in the name: the folder separator is a forward slash ("/").
__Arguments__
name | type | description
-----|--------|-------------
name | string | the name of the file.
__Returns__ : An instance of [ZipObject]({{site.baseurl}}/documentation/api_zipobject.html) representing
the file if any, `null` otherwise.
__Throws__ : Nothing.
<!-- __Complexity__ : This is a simple lookup in **O(1)**. -->
__Examples__
```js
var zip = new JSZip();
zip.file("file.txt", "content");
zip.file("file.txt").name // "file.txt"
zip.file("file.txt").asText() // "content"
zip.file("file.txt").options.dir // false
// utf8 example
var zip = new JSZip(zipFromAjaxWithUTF8);
zip.file("amount.txt").asText() // "€15"
zip.file("amount.txt").asArrayBuffer() // an ArrayBuffer containing €15 encoded as utf8
zip.file("amount.txt").asUint8Array() // an Uint8Array containing €15 encoded as utf8
// with folders
zip.folder("sub").file("file.txt", "content");
zip.file("sub/file.txt"); // the file
// or
zip.folder("sub").file("file.txt") // the file
```
| 25.957447 | 105 | 0.647541 | eng_Latn | 0.617524 |
ed9661138c03966d612f69500bdbcea95346e201 | 1,121 | md | Markdown | nr.parsing.core/README.md | NiklasRosenstein/nr-python | dc5b31ae5773ea4522a6f35112792dde9e872bef | [
"MIT"
] | 3 | 2018-11-20T22:19:35.000Z | 2020-10-31T09:23:53.000Z | nr.parsing.core/README.md | NiklasRosenstein/python-nr | dc5b31ae5773ea4522a6f35112792dde9e872bef | [
"MIT"
] | 3 | 2021-08-09T00:14:26.000Z | 2021-08-09T00:28:27.000Z | nr.parsing.core/README.md | NiklasRosenstein/nr-python | dc5b31ae5773ea4522a6f35112792dde9e872bef | [
"MIT"
] | 3 | 2019-03-22T06:15:17.000Z | 2020-10-31T09:23:53.000Z | # nr.parsing.core
The `nr.parsing.core` package provides a simple API to scan and tokenize text for the purpose of
structured language processing.
## Example
```py
import typing as t

from nr.parsing.core import RuleSet, Tokenizer, rules

ruleset = RuleSet()
ruleset.rule('number', rules.regex_extract(r'\-?(0|[1-9]\d*)', 0))
ruleset.rule('operator', rules.regex_extract(r'[\-\+]', 0))
ruleset.rule('whitespace', rules.regex(r'\s+'), skip=True)

def calculate(expr: str) -> int:
  tokenizer = Tokenizer(ruleset, expr)
  result = 0
  sign: t.Optional[int] = 1
  while tokenizer:
    if tokenizer.current.type != 'number':
      raise ValueError(f'unexpected token {tokenizer.current}')
    assert sign is not None
    result += sign * int(tokenizer.current.value)
    tokenizer.next()
    if tokenizer.current.type == 'operator':
      sign = -1 if tokenizer.current.value == '-' else 1
      tokenizer.next()
    else:
      sign = None
  if sign is not None:
    raise ValueError(f'unexpected trailing operator')
  return result

assert calculate('3 + 5 - 1') == 7
```
---
<p align="center">Copyright © 2020 Niklas Rosenstein</p>
| 27.341463 | 96 | 0.676182 | eng_Latn | 0.745279 |
ed97212ec3dc93cef361f522602a06eac78beda8 | 9,153 | md | Markdown | docs/2014/relational-databases/replication/distributor-properties.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/relational-databases/replication/distributor-properties.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/relational-databases/replication/distributor-properties.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Distribution Database Properties | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: replication
ms.topic: conceptual
f1_keywords:
- sql12.rep.configdistwizard.distdbproperties.f1
- sql12.rep.configdistwizard.distproperties.general.f1
- sql12.rep.configdistwizard.distproperties.publishers.f1
- sql12.rep.configdistwizard.distproperties.publishers.f1
ms.assetid: f643c7c3-f238-4835-b81e-2c2b3b53b23f
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: ae7c7197fffcad7f64a82cf7c060e2e35e9bf460
ms.sourcegitcommit: f7fced330b64d6616aeb8766747295807c92dd41
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "62721401"
---
# <a name="sql-server-replication-distributor-properties"></a>Proprietà server di distribuzione di replica di SQL Server
In questo argomento vengono illustrate le proprietà disponibili nella **generali**, **i server di pubblicazione**, e **Database di distribuzione** pagine all'interno di **proprietà server di distribuzione** finestra.
## <a name="general"></a>Generale
La pagina **Generale** della finestra di dialogo **Proprietà server di distribuzione** consente di aggiungere ed eliminare i database di distribuzione e di impostarne le relative proprietà.
Nel database di distribuzione vengono archiviati i metadati e i dati di cronologia relativi a tutti i tipi di replica, nonché le transazioni per la replica transazionale. In molti casi, è sufficiente un singolo database di distribuzione. Se tuttavia un singolo server di distribuzione viene utilizzato da più server di pubblicazione, è opportuno creare un database di distribuzione per ogni server di pubblicazione, in modo da garantire che il flusso di dati di ogni database di distribuzione risulti distinto.
### <a name="options"></a>Opzioni
**Database**
Nella griglia delle proprietà **Database** vengono visualizzati il nome e le proprietà di memorizzazione dei database di distribuzione nel server di distribuzione. **Periodo memorizzazione transazioni** rappresenta il periodo di tempo per cui le transazioni rimangono archiviate per la replica transazionale. Il periodo di memorizzazione della transazione è inoltre noto come periodo di memorizzazione per la distribuzione. **Periodo memorizzazione cronologia** equivale invece al periodo di tempo per cui rimangono archiviati i metadati della cronologia per ogni tipo di replica. Per altre informazioni sul periodo di memorizzazione, vedere [Scadenza e disattivazione delle sottoscrizioni](subscription-expiration-and-deactivation.md).
Fare clic sul pulsante delle proprietà **...** nella griglia delle proprietà **Database** per aprire la finestra di dialogo **Proprietà database di distribuzione** .
**Nuova**
Fare clic su questo pulsante per creare un nuovo database di distribuzione.
**Elimina**
Selezionare un database di distribuzione esistente nella griglia delle proprietà **Database** e scegliere **Elimina** per eliminarlo. Non è possibile eliminare il database di distribuzione se ne esiste uno solo, poiché ogni server di distribuzione deve disporre di almeno un database di distribuzione. Per eliminare tutti i database di distribuzione, è necessario disabilitare la distribuzione nel computer. Per altre informazioni, vedere [Disabilitare la pubblicazione e la distribuzione](disable-publishing-and-distribution.md).
**Impostazioni predefinite profili**
Fare clic su questo pulsante per accedere ai profili dell'agente di replica nella finestra di dialogo **Profili agenti** . Per ulteriori informazioni sui profili, vedere [Replication Agent Profiles](agents/replication-agent-profiles.md).
## <a name="publishers"></a>Server di pubblicazione
La pagina **Server di pubblicazione** della finestra di dialogo **Proprietà server di distribuzione** consente di abilitare l'utilizzo del server di distribuzione corrente da parte dei server di pubblicazione. È inoltre possibile impostare le proprietà associate a tali server di pubblicazione. Tenere presente che, se si abilita un server di pubblicazione per l'utilizzo di questo server come server di distribuzione remoto, il server non diventerà un server di pubblicazione. È infatti necessario connettersi al server di pubblicazione, configurarlo per la pubblicazione e selezionare questo server come server di distribuzione. Utilizzando la Creazione guidata nuova pubblicazione è possibile configurare il server di pubblicazione e selezionare un server di distribuzione.
### <a name="options"></a>Opzioni
**Server di pubblicazione**
Consente di selezionare i server autorizzati all'utilizzo del server di distribuzione corrente. Per visualizzare e impostare proprietà aggiuntive fare clic sul pulsante delle proprietà ( **...** ) accanto a un server di pubblicazione.
**Aggiungi**
Se il server desiderato non è incluso nell'elenco, fare clic su **Aggiungi** per aggiungere un server di pubblicazione [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] o Oracle all'elenco dei server di pubblicazione disponibili. Se il server aggiunto è il primo server a utilizzare il server di distribuzione corrente come server di distribuzione remoto, viene richiesto di digitare una password per il collegamento amministrativo.
**Password per collegamento amministrativo**
Utilizzare questa opzione per specificare o aggiornare la password per la connessione che la replica stabilisce tra il server di pubblicazione e il server di distribuzione remoto utilizzando l'account di accesso **distributor_admin** :
- Se il server di distribuzione viene utilizzato solo come server di distribuzione locale, tale password viene generata in modo casuale e viene configurata automaticamente.
- Se il server di distribuzione viene utilizzato già da un server di pubblicazione remoto, una password è stata specificata inizialmente in questa pagina o nella pagina **Password server di distribuzione** della Configurazione guidata distribuzione.
- Se si tratta del primo server di pubblicazione abilitato per il server di distribuzione corrente viene richiesto di digitare una password.
Per altre informazioni sulla sicurezza dei database di distribuzione, vedere [Proteggere il database di distribuzione](security/secure-the-distributor.md).
## <a name="distribution-database"></a>Database di distribuzione
La finestra di dialogo **Proprietà database di distribuzione** consente di visualizzare varie proprietà e di impostare il periodo di memorizzazione della transazione e della cronologia per il database.
### <a name="options"></a>Opzioni
**Name**
Il nome del database di distribuzione, il cui valore predefinito è "distribution" (sola lettura).
**Percorsi dei file**
Il percorso del file di database e del file di log (sola lettura).
**Periodo di memorizzazione della transazione**
Questa proprietà è nota anche come periodo di memorizzazione della distribuzione. Si tratta della quantità di tempo di memorizzazione delle transazioni ai fini della replica transazionale. Per altre informazioni, vedere [Subscription Expiration and Deactivation](subscription-expiration-and-deactivation.md).
**Periodo di memorizzazione cronologia**
Quantità di tempo di memorizzazione dei metadati della cronologia ai fini di tutti i tipi di replica.
**Sicurezza agente di lettura coda**
L'agente di lettura coda viene utilizzato dalla replica transazionale con sottoscrizioni ad aggiornamento in coda. L'agente di lettura coda viene creato automaticamente se si seleziona **Creazione di una pubblicazione transazionale con aggiornamento delle sottoscrizioni** nella pagina **Tipo di pubblicazione** della Creazione guidata nuova pubblicazione. Fare clic su **Impostazioni di sicurezza...** per modificare l'account nell'ambito del quale l'agente viene eseguito e si connette al server di distribuzione.
In questa pagina è inoltre possibile creare un agente di lettura coda selezionando **Crea agente di lettura coda** . L'opzione è disabilitata se l'agente è già stato creato.
Ulteriori informazioni sulla connessione relative all'agente di lettura coda vengono specificate in due posizioni:
- L'agente si connette al server di pubblicazione mediante le credenziali specificate nella finestra di dialogo **Proprietà server di pubblicazione** che è disponibile nella pagina **Server di pubblicazione** della finestra di dialogo **Proprietà server di distribuzione** .
- L'agente si connette al Sottoscrittore mediante le credenziali specificate per l'agente di distribuzione in Creazione guidata nuova sottoscrizione.
Per altre informazioni, vedere \\[Replication Agent Security Model](security/replication-agent-security-model.md).
## <a name="see-also"></a>Vedere anche
[Configura distribuzione](configure-distribution.md)
[Visualizzare e modificare le proprietà del server di pubblicazione e del database di distribuzione](view-and-modify-distributor-and-publisher-properties.md)
| 89.735294 | 780 | 0.797662 | ita_Latn | 0.999005 |
ed9810f20f56211626fb1cd3944746cae6c1854b | 12,723 | md | Markdown | _posts/2019-08-05-securing-db.md | dannydenenberg/dannydenenberg.github.io | d77db3bc1488c2abce852e47abe8f4278c6a1149 | [
"MIT"
] | null | null | null | _posts/2019-08-05-securing-db.md | dannydenenberg/dannydenenberg.github.io | d77db3bc1488c2abce852e47abe8f4278c6a1149 | [
"MIT"
] | 5 | 2019-07-08T02:17:09.000Z | 2020-07-28T16:27:29.000Z | _posts/2019-08-05-securing-db.md | dannydenenberg/dannydenenberg.github.io | d77db3bc1488c2abce852e47abe8f4278c6a1149 | [
"MIT"
] | null | null | null | ---
title: Securing your database
permalink: securing-db
layout: post
---
[Here](https://github.com/dannydenenberg/mongodb-users) is a link to all of the source code WITH authentication and hashing.
This article assumes knowledge of NodeJS, Express.js, MongoDB, and how all of those technologies interact programmatically. I wrote a [great article](https://denenberg.us/creating-and-connecting-a-mongodb-database-and-node-js-server-to-a-front-end) on that topic as well.
Let's start by taking a look at some code for a MongoDB database connection through Node.js and ExpressJS.<!--more-->
<iframe width="100%" height="300" src="//jsfiddle.net/denenberg/8da5kL1t/1/embedded/js/dark" allowfullscreen="allowfullscreen" allowpaymentrequest frameborder="0"></iframe>
It achieves the central goal for that article which was to be able to connect a front end to a Mongo database. It does this by allowing requests to be sent to a server which has the functionality to read, write, and update the database.
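If the embedded fiddle doesn't load, here is a rough sketch of the kind of server it contains. This is my reconstruction under assumptions (the connection string, database and collection names, and route shapes are placeholders), not the post's exact code:

```javascript
// Hypothetical sketch of a minimal Express server backed by MongoDB.
const express = require("express");
const MongoClient = require("mongodb").MongoClient;

const app = express();
app.use(express.json()); // parse JSON request bodies

MongoClient.connect("mongodb+srv://<user>:<pass>@cluster0.mongodb.net", {
  useUnifiedTopology: true,
}).then(client => {
  const collection = client.db("mydb").collection("users");

  // read a user's info
  app.get("/:user", (req, res) => {
    collection.findOne({ user: req.params.user }).then(doc => res.send(doc));
  });

  // create or update a user with whatever fields were posted
  app.post("/:user", (req, res) => {
    collection
      .updateOne({ user: req.params.user }, { $set: req.body }, { upsert: true })
      .then(() => res.send("success"));
  });

  app.listen(4567);
});
```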
Well, now there is a new issue – we have users that each have their own unique information, but ANYONE who sends a get request to the server with a username of some user can acquire their information! That is SUPER unsafe. To combat this, most websites have a password field for each user (as I'm sure you know) which is kept private. This combination of public and private "keys" is a nice way for users to be able to connect with each other using their public usernames, but keep their info secure with a strong password.
Let's implement this in our NodeJS program by first adding a `pass` field to each user holding their unique password. This doesn't change any code so far on the server side, but let's use this idea to create a new user called `deb` with the password `1234` (I am using JavaScript to send the request to our server – make sure the server is running on port `4567`):
<iframe width="100%" height="300" src="//jsfiddle.net/denenberg/8da5kL1t/12/embedded/js/dark" allowfullscreen="allowfullscreen" allowpaymentrequest frameborder="0"></iframe>
Okay, to make sure the `fetch()` request worked, look at the data inside of your MongoDB cluster collections in Atlas.
Great. The user has a password, but we can still send a get request and get the information without using a password. To password-protect the user info, every time a get request is sent, a password should be sent along with it. The server will then compare the password sent in the get request with the password stored in the database associated with the user the get request was sent for. If they are equivalent, the server will send back the user information. Otherwise, it will send back some sort of an error code for the wrong password.
In short, in the express server's response to get requests, it should check that the sent password is correct before allowing the request to access the user's information.
To send a password along with the get request, we will use simple HTTP url request parameters. The GET request will look like this:
```
http://localhost:4567/username?pass=mypassword1
```
**One thing to note** is how to get the query parameters in Express.js (Node.js): `req.query.myparam`. Or, in our specific case, `req.query.pass` gives the password sent.
Here is some example code of how the request URL parameters can be used in a program (this is just the get request part). It will print out the password to the console when a get request is made with the `/user` and `?pass=__` fields.
```javascript
app.get("/:user", (req, res) => {
console.log(`Password: ${req.query.pass}`);
});
```
To test this, run the Node server. In the browser go to `localhost:4567/dan?pass=abc`. Switch back to the console and you should see the password printed there.
Now that we have URL parameters working, we need to not send back the user data in the response unless the password is correct. To do this, within the get request, we will get the password associated with the `/user`. If that is the same as the URL `pass` parameter, we send back the info on the user.
<iframe width="100%" height="300" src="//jsfiddle.net/denenberg/8da5kL1t/10/embedded/js/dark" allowfullscreen="allowfullscreen" allowpaymentrequest frameborder="0"></iframe>
To test this out, restart your server with the updated code. Go to `localhost:4567/deb?pass=1234` in your browser. You should receive deb's user id, etc. Try typing a different password and you will get the error message.
Okay, nice! You have now password-protected all get requests to receive data. There are, however, a couple of issues with this method, one of which is the fact that if your database is compromised, you will have leaked a piece of very sensitive information that users have trusted you with: their password (which is likely being reused on different sites). This is why, instead of being stored as they are, passwords are typically **hashed** using a _one way_ function.
The following section will go more in depth into hashing, why it is important, and how to use it to **securely** store data.
---
Hashing algorithms are one way functions. They take any string and turn it into a fixed-length "fingerprint" that is unable to be reversed. This means that if the data is compromised, the onlooker cannot get the user's passwords if they were hashed. At no point were they ever stored on the drive without being in their hashed form.
Websites using hashing typically have this workflow:
1. User creates an account
2. Their password is hashed and stored in the database
3. When the user attempts to log in, the hash of their entered password is compared to the hash stored in the database
4. If the hashes match, the user can access the account. If not, a **generic** error message is sent back such as "Entered invalid credentials" so hackers can't trace the error to the username or password specifically.
```
hash("hello") = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
hash("hellu") = 3937f988aeb57b6fd75b9c71bf17b9658ec97823bab613df438389b0c896b724
hash("danny") = 668e2b73ac556a2f051304702da290160b29bad3392ddcc72074fefbee80c55a
```
**NOTE:** Only secure, or **cryptographic hash functions**, can be used for password hashing (SHA256, SHA512, RipeMD, WHIRLPOOL, etc.)
Sadly, just cryptographically hashing passwords does not ensure safety.
## Cracking Hashes
### Brute Force and Dictionary Attacks
The easiest way to crack a hash is just to guess the password. The way to do this is to guess the user's password, hash the guess, and compare it to the hash of the actual password you are trying to crack. If the two hashes match, the unhashed version of the guess is the right password.
A **brute force** attack goes through every possible combination of characters given a certain length. Even though they will 100% _eventually_ crack any given password, it is difficult to use this method because of how computationally expensive it is. Some passwords that are even fairly short in length can take thousands of years (literally) to crack using brute force.
```
Trying aaa : failed
Trying aab : failed
Trying aac : failed
...
Trying acb : failed
Trying acc : success!
```
**Dictionary attacks** use a file containing commonly used words, phrases, or passwords that are likely to be used as a password. There are [databases you can find](https://en.wikipedia.org/wiki/Wikipedia:10,000_most_common_passwords) that hold the top 100,000 (or however many) most commonly used passwords. The attack hashes these passwords and compares each hash to the password hash being cracked. For cracking the average Joe Shmo, this is sometimes a good method to use and is certainly faster than a brute force attack.
**Lookup tables** can improve cracking performance by pre-computing the hashes, so when it comes time to guess the password, the program need not spend compute time actually hashing the guesses.
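To make the idea concrete, here is a toy sketch of a dictionary attack. It reuses the `hash()` helper defined later in this article; the candidate list is illustrative.
```javascript
// a toy dictionary attack: hash each candidate and compare fingerprints
const candidates = ["123456", "password", "qwerty"]; // e.g. from a top-passwords list
function crack(targetHash) {
  // returns the cracked password, or null if no candidate matched
  return candidates.find((guess) => hash(guess) === targetHash) || null;
}
```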
In the next section, we will take a look at "salting" which makes these cracking methods impossible to use reliably.
## Salting
The reason lookup tables, dictionary attacks, and brute force attacks can work is because the passwords are hashed the same way each time. We can randomize the hash by prepending or appending a random string called a _salt_ to the passwords BEFORE hashing.
```
hash("hello") = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
hash("hello" + "jHjdbJShdiodb") = 6f7f167a978166ee23b32c9531ce5dc23ae8fc26e412045858d938d11470831f
```
The salt doesn't have to be secret: because an attacker doesn't know ahead of time what the salt will be, they cannot build precomputed tables for it.
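A minimal sketch of the idea, using the `csprng` and `hash` helpers that are introduced later in the article:
```javascript
const salt = csprng(160, 36); // long random salt from a CSPRNG
// the stored fingerprint is now randomized by the salt
const stored = { salt: salt, pass: hash(salt + "hello") };
```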
## The Dos and Don'ts of Salting
Don't:
- Reuse the same salt for each password hashed
- Use short salts
- Use weird double hashes (ex: hash(hash(hash('mypass')))) in lieu of a salt
Do:
- Generate random salts using a **Cryptographically Secure Pseudo-Random Number Generator** (CSPRNG)
- Generate a new random unique salt for _EACH_ password hashed
- Generate LONG salts
## Salting Workflow
Storing a Password:
1. Generate super long salt with a CSPRNG
2. Prepend the salt to the user password and hash it
3. Save the salt and the hash in the database
Checking a Password:
1. Get the salt and hash from the database
2. Prepend the salt to the submitted password and hash it
3. Compare the hashes. If they are equal, the password is correct
**NOTE:** Always, always, always hash on the server. Sometimes JavaScript isn't enabled on the client, and no one else can access the server, so hashing there is guaranteed to happen. (You can _also_ hash on the client side if you so choose.)
And with that, you have learned the basics of securely hashing data. Now let's continue on with the (not-as-hashy parts of the) article.
## Our Hash Function
We will be using the SHA256 hashing function. How, exactly, hashing functions work is beyond the scope of this article, but if you are interested, see [this](https://en.wikipedia.org/wiki/Hash_function) and [this](https://gfredericks.com/blog/98).
First of all, add the import to the top of the NodeJS file: `const crypto = require("crypto");`. Note that `crypto` is built into Node, so on any recent version you can skip `npm i -s crypto` (the standalone npm package of that name is deprecated).
Now add this code to the bottom of the NodeJS file. We will call the function `hash`.
```javascript
// hashes strings with sha256 for storing passwords
function hash(pwd) {
  return crypto
    .createHash("sha256")
    .update(pwd)
    .digest("base64");
}
```
## The Plan
There are 3 more features we need to implement before I can call the database "secure".
1. Hash & salt passcodes for new users being stored in response to `post` requests
2. Use the salt to hash and check the passwords when a `get` request needs info
3. Use the salt to check passwords for updating info on the database in response to `put` requests.
Alrighty, let's get started!
## POST Request Security
Passwords need to be hashed and salted before being stored in the database.
Before I can give you the code, you have to install the dependency for generating cryptographically secure pseudo-random salts. We will use a library called 'csprng'. Install it: `npm i -s csprng`. Also, add the import at the top of the NodeJS file: `const csprng = require('csprng');`
Here is the _well commented_ code for the server's response to POST requests. It uses the `hash()` function defined earlier.
**NOTE:** The user password is sent as the `pass` field in the body of the request (contrary to how it was sent before).
<iframe width="100%" height="300" src="//jsfiddle.net/denenberg/8da5kL1t/8/embedded/js/dark" allowfullscreen="allowfullscreen" allowpaymentrequest frameborder="0"></iframe>
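(If the fiddle doesn't render, here is a minimal sketch of the POST handler under the same assumptions as before: a hypothetical `users` collection handle and JSON body parsing already set up.)
```javascript
app.post("/:user", async (req, res) => {
  const salt = csprng(160, 36); // new, unique salt for EACH user
  await users.insertOne({
    name: req.params.user,
    salt: salt,
    pass: hash(salt + req.body.pass), // store the salted hash, never the password
  });
  res.send("User created");
});
```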
## GET Request Security
Now to check if an entered passcode is correct, we have to get the stored salt and use that to hash the entered passcode to check against the stored one.
Here is the code for the GET request response.
<iframe width="100%" height="300" src="//jsfiddle.net/denenberg/8da5kL1t/5/embedded/js/dark" allowfullscreen="allowfullscreen" allowpaymentrequest frameborder="0"></iframe>
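(A minimal sketch of the same logic, with the usual caveat that the `users` handle and field names are illustrative.)
```javascript
app.get("/:user", async (req, res) => {
  const user = await users.findOne({ name: req.params.user });
  // hash the submitted passcode with the STORED salt, then compare
  if (user && user.pass === hash(user.salt + req.query.pass)) {
    res.send(user);
  } else {
    res.status(401).send("Entered invalid credentials");
  }
});
```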
## PUT Request Security
When updating info in the database, just like with the GET request, a password must be submitted to make sure the requester is the right person to update the data. Also like the GET request, we need to use the stored salt associated with the user to hash the entered password for comparison.
<iframe width="100%" height="300" src="//jsfiddle.net/denenberg/8da5kL1t/6/embedded/js/dark" allowfullscreen="allowfullscreen" allowpaymentrequest frameborder="0"></iframe>
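(And a sketch of the PUT handler; the `info` field stands in for whatever data your app lets users update.)
```javascript
app.put("/:user", async (req, res) => {
  const user = await users.findOne({ name: req.params.user });
  // verify identity exactly as in the GET handler
  if (user && user.pass === hash(user.salt + req.body.pass)) {
    await users.updateOne(
      { name: req.params.user },
      { $set: { info: req.body.info } }
    );
    res.send("Updated");
  } else {
    res.status(401).send("Entered invalid credentials");
  }
});
```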
**Congrats for getting through this article!! 🎉🥳**
Although I used a MongoDB database for storage and NodeJS for the server, the concepts covered here are applicable in ANY technology you may choose.
Have fun with your database!
| 62.674877 | 541 | 0.77521 | eng_Latn | 0.998286 |
ed9812409166e536a5c8094b76e7909e0d66fe5f | 2,726 | md | Markdown | README.md | starxmaker/je-suis-la | dbf3cd437eab69cb2c3a89b267d4fb3f5b7bde0c | [
"MIT"
] | null | null | null | README.md | starxmaker/je-suis-la | dbf3cd437eab69cb2c3a89b267d4fb3f5b7bde0c | [
"MIT"
] | null | null | null | README.md | starxmaker/je-suis-la | dbf3cd437eab69cb2c3a89b267d4fb3f5b7bde0c | [
"MIT"
] | null | null | null | # Je Suis Là
("I'm here" in English, pronounced /ʒə sɥi la/)
Je Suis Là is a simple Vanilla JavaScript library powered by OpenLayers and OpenStreetMaps. Its main goal is to quickly generate a map with a fixed pointer and a caption. You just have to indicate the latitude and longitude (you can easily find those values on OpenStreetMaps in the url bar).
[Demo](https://starxmaker.github.io/je-suis-la)
## Installation
Include these files in your HTML. Je Suis Là requires OpenLayers in order to work, so make sure to import its JS and CSS first.
```
<!-- Import OpenLayers JS and CSS -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/openlayers/openlayers.github.io@master/en/v6.4.3/css/ol.css" type="text/css">
<script src="https://cdn.jsdelivr.net/gh/openlayers/openlayers.github.io@master/en/v6.4.3/build/ol.js"></script>
<!-- import the library -->
<script src="https://cdn.jsdelivr.net/gh/starxmaker/[email protected]/je-suis-la.js" crossorigin="anonymous"></script>
```
## Quick start
To make it work quickly, you just have to indicate the id of the DIV that will contain the map, the latitude, and the longitude. That's all!
```
new JeSuisLa("map", -29.98130, -71.34999)
```
## Parameters
The library can receive the following parameters
```
new JeSuisLa(target, latitude, longitude, zoom, description, style)
```
| Parameter | Required? | Type | Default value | Description |
|------------------|-----------|------------------|---------------|-------------------------------------------|
| target | Yes | String | | The id attribute of the map container tag.|
| latitude | Yes | Float | | "The angular distance north or south from the equator of a point on the earth's surface, measured on the meridian of the point." (Dictionary.com)|
| longitude | Yes | Float | | "Angular distance east or west on the earth's surface, measured by the angle contained between the meridian of a particular place and some prime meridian, as that of Greenwich, England, and expressed either in degrees or by some corresponding difference in time." (Dictionary.com) |
|zoom | No | Integer | 17 | Zoom level of the map. Its minimum value is 1 and its maximum value is 19. |
| description | No | String | "" | Caption below the pointer |
| style | No | Object |{height: "20em",width: "100%"} | JSON object containing a custom style for the map. Note: if you don't specify a height or a width, the map will not be displayed. |
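A call that sets every parameter might look like this (the values are purely illustrative):
```
new JeSuisLa("map", -29.98130, -71.34999, 15, "We are here!", {height: "30em", width: "100%"})
```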
| 60.577778 | 350 | 0.626926 | eng_Latn | 0.976774 |
ed981bf9eeac5ef5e2db580573d852cc164081c1 | 1,439 | md | Markdown | _posts/2020-10-06-the-fast-and-the-furious-2001-movie-hindi-english-480p-720p-hd.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | null | null | null | _posts/2020-10-06-the-fast-and-the-furious-2001-movie-hindi-english-480p-720p-hd.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | 3 | 2022-03-15T05:48:33.000Z | 2022-03-15T16:54:49.000Z | _posts/2020-10-06-the-fast-and-the-furious-2001-movie-hindi-english-480p-720p-hd.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | 6 | 2020-11-22T07:57:38.000Z | 2022-03-15T16:57:40.000Z | ---
title: "The Fast and the Furious (2001) Movie [Hindi English] 480p 720p HD"
date: "2020-10-06"
---
[****](https://1.bp.blogspot.com/-7vJEP4VmkSk/X04Ve1FlmqI/AAAAAAAAEuQ/TkgPc3B5wIU0Gj8qZ5in_bpukURwDKCOgCLcBGAsYHQ/s1600/fast1.webp)
**The Fast and the Furious (2001)**
**Movie \[Hindi English\] 480p 720p HD**
**Rating: 6.5 / 10**
**The Fast and the Furious (2001)**
**1h 46min | Action, Crime, Thriller | 14 September 2001 (UK)**
**Director: Rob Cohen**
**Writers: Ken Li, Gary Scott Thompson**
**Stars: Vin Diesel, Paul Walker, Michelle Rodriguez**
**Language: Hindi + English**
**(Hindi-English) Blu-Ray Dual Audio**
**480p**
**[Download](https://veryfastdownload.xyz/watch.php?link=aHR0cHM6Ly9waG90b3MuYXBwLmdvby5nbC91YjdGRjZFUEpTVTZ2WWhaQQ==)**
[**Download**](https://coinquint.com/fnf4/)
**720p**
**[Download](https://coinquint.com/fnf7/)**
**[Download](https://veryfastdownload.xyz/watch.php?link=aHR0cHM6Ly9waG90b3MuYXBwLmdvby5nbC84elVpYmZOcnVDY0ZXemRYOA==)**
**Hollywood Adventure Movies In Hindi Dubbed Full Action HD, Horror Fantasy Action Adventure Comedy Sci-Fi,**
**Horror Movies, Hollywood Hindi Dual 480P, 720P, 1080P, Download Best Sci-Fi Movies Imdb**
**Fast And Furious Full Movies Series All Part**
**Available On This Site.**
| 32.704545 | 259 | 0.730368 | yue_Hant | 0.369028 |
ed982351f3d6c3a1568898a482c6242bf0eb3ee8 | 9,274 | md | Markdown | configs/dota/README_en.md | Amanda-Barbara/PaddleDetection | 65ac13074eaaa2447c644a2df71969d8a3dd1fae | [
"Apache-2.0"
] | null | null | null | configs/dota/README_en.md | Amanda-Barbara/PaddleDetection | 65ac13074eaaa2447c644a2df71969d8a3dd1fae | [
"Apache-2.0"
] | null | null | null | configs/dota/README_en.md | Amanda-Barbara/PaddleDetection | 65ac13074eaaa2447c644a2df71969d8a3dd1fae | [
"Apache-2.0"
] | null | null | null | # S2ANet Model
## Content
- [S2ANet Model](#s2anet-model)
- [Content](#content)
- [Introduction](#introduction)
- [Prepare Data](#prepare-data)
- [DOTA data](#dota-data)
- [Customize Data](#customize-data)
- [Start Training](#start-training)
- [1. Install the rotating frame IOU and calculate the OP](#1-install-the-rotating-frame-iou-and-calculate-the-op)
- [2. Train](#2-train)
- [3. Evaluation](#3-evaluation)
- [4. Prediction](#4-prediction)
- [5. DOTA Data evaluation](#5-dota-data-evaluation)
- [Model Library](#model-library)
- [S2ANet Model](#s2anet-model-1)
- [Predict Deployment](#predict-deployment)
- [Citations](#citations)
## Introduction
[S2ANet](https://arxiv.org/pdf/2008.09397.pdf) is a model used to detect rotated boxes. It requires PaddlePaddle 2.1.1 (which can be installed using pip) or an appropriate [develop version](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/install/Tables.html#whl-release).
## Prepare Data
### DOTA data
[DOTA Dataset] is a dataset for object detection in aerial images. It contains 2,806 images, with image sizes ranging from 800 to 4,000 pixels per side.
| Data version | categories | images | size | instances | annotation method |
|:--------:|:-------:|:---------:|:---------:| :---------:| :------------: |
| v1.0 | 15 | 2806 | 800~4000 | 118282 | OBB + HBB |
| v1.5 | 16 | 2806 | 800~4000 | 400000 | OBB + HBB |
Note: an OBB annotation is an arbitrary quadrilateral, with the vertices arranged in clockwise order. An HBB annotation is the axis-aligned outer bounding rectangle of the annotated instance.
There were 2,806 images in the DOTA dataset, including 1,411 images as a training set, 458 images as an evaluation set, and the remaining 937 images as a test set.
If you need to cut the image data, please refer to the [DOTA_devkit](https://github.com/CAPTAIN-WHU/DOTA_devkit).
After setting `crop_size=1024, stride=824, gap=200` parameters to cut data, there are 15,749 images in the training set, 5,297 images in the evaluation set, and 10,833 images in the test set.
### Customize Data
There are two ways to annotate data:
- The first is to annotate rotated rectangles directly, which can be done with the rotated-rectangle annotation tool [roLabelImg](https://github.com/cgvict/roLabelImg).
- The second is to annotate quadrilaterals and convert them by script into enclosing rotated rectangles, so the resulting boxes may deviate slightly from the true object frames.
Then convert the annotation result into the coco annotation format, where each `bbox` has the form `[x_center, y_center, width, height, angle]`, with the angle expressed in radians.
As a reference, we use the [spinal disk dataset](https://aistudio.baidu.com/aistudio/datasetdetail/85885), divided into a training set (230 images) and a test set (57 images); the data address is [spine_coco](https://paddledet.bj.bcebos.com/data/spine_coco.tar). The dataset has a small number of images, so it can be used to train the S2ANet model quickly.
## Start Training
### 1. Install the rotating frame IOU and calculate the OP
The rotated-box IoU OP [ext_op](../../ppdet/ext_op) is implemented as a PaddlePaddle [custom external operator](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/07_new_op/new_custom_op.html).
To build and use the rotated-box IoU OP, the following conditions must be met:
- PaddlePaddle >= 2.1.1
- GCC == 8.2
The Docker image [paddle:2.1.1-gpu-cuda10.1-cudnn7](registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.1-cudnn7) is recommended.
Run the following command to download the image and start the container:
```
sudo nvidia-docker run -it --name paddle_s2anet -v $PWD:/paddle --network=host registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.1-cudnn7 /bin/bash
```
If PaddlePaddle is already installed in the image, start python3.7 and run the following code to check whether PaddlePaddle is installed properly:
```
import paddle
print(paddle.__version__)
paddle.utils.run_check()
```
Enter the `ppdet/ext_op` directory and install:
```
python3.7 setup.py install
```
In Windows, perform the following steps to install it:
(1) Visual Studio (version required >= Visual Studio 2015 Update 3);
(2) Go to Start --> Visual Studio 2017 --> X64 Native Tools Command Prompt for VS 2017;
(3) Set the environment variable: `set DISTUTILS_USE_SDK=1`
(4) Enter the `PaddleDetection/ppdet/ext_op` directory and use `python3.7 setup.py install` to install.
After the installation, test whether the custom OP can compile normally and calculate the results:
```
cd PaddleDetection/ppdet/ext_op
python3.7 test.py
```
### 2. Train
**Attention:**
In the configuration file, the learning rate is set for eight-GPU training. If single-GPU training is used, set the learning rate to 1/8 of the original value.
Single GPU Training
```bash
export CUDA_VISIBLE_DEVICES=0
python3.7 tools/train.py -c configs/dota/s2anet_1x_spine.yml
```
Multiple GPUs Training
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python3.7 -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/dota/s2anet_1x_spine.yml
```
You can use `--eval` to enable evaluation during training.
### 3. Evaluation
```bash
python3.7 tools/eval.py -c configs/dota/s2anet_1x_spine.yml -o weights=output/s2anet_1x_spine/model_final.pdparams
# Use a trained model to evaluate
python3.7 tools/eval.py -c configs/dota/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams
```
**Attention:**
(1) The DOTA dataset is trained with the train and val data together as the training set, so the evaluation dataset configuration needs to be customized when evaluating on the DOTA dataset.
(2) The spine dataset is converted from segmentation data. As there is little difference between different types of discs for detection tasks, and the scores produced by the S2ANet algorithm are low, the default threshold for evaluation is 0.5 and a low mAP is normal. You are advised to inspect the detection results visually.
### 4. Prediction
Executing the following command will save the image prediction results to the `output` folder.
```bash
python3.7 tools/infer.py -c configs/dota/s2anet_1x_spine.yml -o weights=output/s2anet_1x_spine/model_final.pdparams --infer_img=demo/39006.jpg --draw_threshold=0.3
```
Prediction using the trained models we provide:
```bash
python3.7 tools/infer.py -c configs/dota/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams --infer_img=demo/39006.jpg --draw_threshold=0.3
```
### 5. DOTA Data evaluation
Executing the following command will save each image's prediction results in the `output` folder as txt files named after the images.
```
python3.7 tools/infer.py -c configs/dota/s2anet_alignconv_2x_dota.yml -o weights=./weights/s2anet_alignconv_2x_dota.pdparams --infer_dir=dota_test_images --draw_threshold=0.05 --save_txt=True --output_dir=output
```
Please refer to [DOTA_devkit](https://github.com/CAPTAIN-WHU/DOTA_devkit) to generate the evaluation files. For the evaluation file format, please refer to [DOTA Test](http://captain.whu.edu.cn/DOTAweb/tasks.html): generate a zip file containing one txt file per class, where every row in a txt file has the format `image_id score x1 y1 x2 y2 x3 y3 x4 y4`. You can also use the `dataset/dota_coco/dota_generate_test_result.py` script to generate an evaluation file and submit it to the server.
## Model Library
### S2ANet Model
| Model | Conv Type | mAP | Model Download | Configuration File |
|:-----------:|:----------:|:--------:| :----------:| :---------: |
| S2ANet | Conv | 71.42 | [model](https://paddledet.bj.bcebos.com/models/s2anet_conv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/dota/s2anet_conv_2x_dota.yml) |
| S2ANet | AlignConv | 74.0 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/dota/s2anet_alignconv_2x_dota.yml) |
**Attention:** `multiclass_nms` is used here, which is slightly different from the original author's use of NMS.
## Predict Deployment
The `multiclass_nms` operator in Paddle supports quadrilateral inputs, so deployment can be done without relying on the rotated-box IoU operator.
Please refer to the deployment tutorial [Predict deployment](../../deploy/README_en.md).
## Citations
```
@article{han2021align,
author={J. {Han} and J. {Ding} and J. {Li} and G. -S. {Xia}},
journal={IEEE Transactions on Geoscience and Remote Sensing},
title={Align Deep Features for Oriented Object Detection},
year={2021},
pages={1-11},
doi={10.1109/TGRS.2021.3062048}}
@inproceedings{xia2018dota,
title={DOTA: A large-scale dataset for object detection in aerial images},
author={Xia, Gui-Song and Bai, Xiang and Ding, Jian and Zhu, Zhen and Belongie, Serge and Luo, Jiebo and Datcu, Mihai and Pelillo, Marcello and Zhang, Liangpei},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={3974--3983},
year={2018}
}
```
| 49.860215 | 471 | 0.734095 | eng_Latn | 0.884418 |
ed988a74d4b650a3334e38474b72f3c14ed1a549 | 8,439 | md | Markdown | docs/cluster/mongo.md | 292427558/iot-dc3 | c9ef98073658d3d48893dbf923e5b25c3a22124a | [
"Apache-2.0"
] | 1 | 2022-01-07T01:20:25.000Z | 2022-01-07T01:20:25.000Z | docs/cluster/mongo.md | 292427558/iot-dc3 | c9ef98073658d3d48893dbf923e5b25c3a22124a | [
"Apache-2.0"
] | null | null | null | docs/cluster/mongo.md | 292427558/iot-dc3 | c9ef98073658d3d48893dbf923e5b25c3a22124a | [
"Apache-2.0"
] | 1 | 2022-01-07T01:25:52.000Z | 2022-01-07T01:25:52.000Z | ## `Mongo` 集群部署
### 1. Cluster Architecture

> - `mongos`: the entry point that routes requests to the database cluster. All requests are coordinated through `mongos`, so there is no need to add a routing component to the application; `mongos` itself is a request-dispatch center that forwards each data request to the matching `shard` server. In production there are usually multiple `mongos` entry points, so that if one goes down, `mongodb` requests can still be handled.
> - `config server`: the configuration server, which stores the configuration of all database metadata (routing, sharding). `mongos` does not physically store shard-server and data-routing information; it only caches it in memory, while the config server actually persists this data. `mongos` loads the configuration from the `config server` on first startup or after a restart; afterwards, if the configuration changes, the config server notifies every `mongos` to update its state, so `mongos` can keep routing accurately. In production there are usually multiple `config server` instances, because they store the shard-routing metadata and data loss must be prevented!
> - shard: sharding is the process of splitting up a database and spreading it across different machines. By distributing the data over multiple machines, you can store more data and handle a bigger load without needing a single powerful server. The basic idea is to cut a collection into small chunks that are spread over several shards, so each shard is responsible for only part of the total data, with a balancer keeping the shards balanced (data migration). Starting with version 3.6, every `shard` must be deployed as a replica set.
### 2. Cluster Deployment Plan
| `name` | `ip` | `mongos` | `config server` | `shard cluster 1` | `shard cluster 2` |
| :-----: | :------: | :--: | :--: | :--: | :--: |
| `node-01` | 127.0.0.1 | 27090 | 27080 | 27117 | 27217 |
| `node-02` | 127.0.0.1 | 27091 | 27081 | 27118 | 27218 |
| `node-03` | 127.0.0.1 | 27092 | 27082 | 27119 | 27218 |
> It is recommended to deploy the `config server` as a replica set with 3 members; for testing purposes, you can create a single-member replica set.
> Please use a replica set with at least three members for each `shard`; for testing purposes, you can create a single-member replica set.
> `mongos` has no replica-set concept; you can deploy 1, 2, or more instances.
### 3. Download the Installation Files
> Download the files
- For `Ubuntu`, for example: https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu2004-5.0.5.tgz
- Downloads for other systems: https://www.mongodb.com/try/download/community
> Extract the files
```bash
# extract the archive
tar zxvf mongodb-linux-x86_64-ubuntu2004-5.0.5.tgz
```
> Verify the files
```bash
# enter the bin directory and test whether mongod works
cd bin
./mongod -h
```
### 4. Configure the `shard` Replica Sets
#### 4.1 Create Directories
> Create a directory for each of the two replica sets (`shard-cluster`); for more, continue the pattern `shard-cluster-N`
```bash
cd /data
mkdir -p mongodb/dc3/shard-cluster-01 mongodb/dc3/shard-cluster-02
```
> Create three shard `node` directories for each replica set; for more, continue the pattern `node-N`
```bash
cd shard-cluster-N
mkdir node-01 node-02 node-03
```
> Create config, data, log, and key directories for each shard node; the other nodes are handled the same way
```bash
cd node-N
mkdir data etc keys logs
```
#### 4.2 Configuration File
> Add the configuration file `mongo.conf` under each shard node's `etc` directory
```yaml
processManagement:
  fork: true
systemLog:
  destination: file
  # log file path for this mongod; adjust per node accordingly
  path: /data/mongodb/dc3/shard-cluster-01/node-01/logs/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  # data directory; adjust per node accordingly
  dbPath: /data/mongodb/dc3/shard-cluster-01/node-01/data/
  directoryPerDB: true
  # storage engine
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      # cache size for the storage engine
      cacheSizeGB: 20
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true
net:
  # port this mongod listens on
  port: 27117
  # maximum number of connections
  maxIncomingConnections: 10000
  bindIpAll: true
operationProfiling:
  # slow-query threshold
  slowOpThresholdMs: 100
  mode: slowOp
# sharding support
sharding:
  clusterRole: shardsvr
  archiveMovedChunks: true
replication:
  oplogSizeMB: 10240
  # marks this as the first shard replica set of the dc3_replica cluster;
  # all nodes in this replica set must use the same name;
  # for the second replica set, use e.g. dc3_replica_2
  replSetName: dc3_replica_1
#security:
#  # keyfile location; adjust per node accordingly
#  keyFile: /data/mongodb/dc3/shard-cluster-01/node-01/keys/keyfile
#  clusterAuthMode: keyFile
#  authorization: enabled
```
#### 4.3 Start
```bash
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/shard-cluster-01/node-01/etc/mongo.conf
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/shard-cluster-01/node-02/etc/mongo.conf
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/shard-cluster-01/node-03/etc/mongo.conf
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/shard-cluster-02/node-01/etc/mongo.conf
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/shard-cluster-02/node-02/etc/mongo.conf
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/shard-cluster-02/node-03/etc/mongo.conf
```
#### 4.4 Add the Shard `node`s to the `shard` Replica Sets
> Shard nodes in the other `shard` replica sets are handled the same way
```bash
# log into any shard node of the first replica set
/usr/local/mongodb/bin/mongo --port 27117
# run the following configuration; make sure id, ip, and port are correct
config = {
    "_id": "dc3_replica_1",
    "members": [
        { "_id": 0, "host": "127.0.0.1:27117" },
        { "_id": 1, "host": "127.0.0.1:27118" },
        { "_id": 2, "host": "127.0.0.1:27119" }
    ]
};
rs.initiate(config);
# check the state of the nodes in the replica set
rs.status();
# quit
exit
---
# log into any shard node of the second replica set
/usr/local/mongodb/bin/mongo --port 27217
# run the following configuration; make sure id, ip, and port are correct
config = {
    "_id": "dc3_replica_2",
    "members": [
        { "_id": 0, "host": "127.0.0.1:27217" },
        { "_id": 1, "host": "127.0.0.1:27218" },
        { "_id": 2, "host": "127.0.0.1:27219" }
    ]
};
rs.initiate(config);
# check the state of the nodes in the replica set
rs.status();
```
### 5. Configure the `config server` Cluster
#### 5.1 Create Directories
> Create a directory for the config service (`config-cluster`)
```bash
cd /data
mkdir -p mongodb/dc3/config-cluster
```
> Create three `node` directories for the config service; for more, continue the pattern `node-N`
```bash
cd mongodb/dc3/config-cluster
mkdir node-01 node-02 node-03
```
> Create config, data, log, and key directories for each node; the other nodes are handled the same way
```bash
cd node-N
mkdir data etc keys logs
```
#### 5.2 Configuration File
> Add the configuration file `config.conf` under each node's `etc` directory
```yml
processManagement:
  fork: true
systemLog:
  destination: file
  # log file path for this mongod; adjust per node accordingly
  path: /data/mongodb/dc3/config-cluster/node-01/logs/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  # data directory; adjust per node accordingly
  dbPath: /data/mongodb/dc3/config-cluster/node-01/data/
  directoryPerDB: true
  # storage engine
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      # cache size for the storage engine
      cacheSizeGB: 20
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true
net:
  # port this mongod listens on
  port: 27080
  # maximum number of connections
  maxIncomingConnections: 10000
  bindIpAll: true
operationProfiling:
  # slow-query threshold
  slowOpThresholdMs: 100
  mode: slowOp
# sharding support
sharding:
  clusterRole: configsvr
  archiveMovedChunks: true
replication:
  oplogSizeMB: 10240
  # must match the name used in the mongos configDB setting
  replSetName: dc3_replica
#security:
#  # keyfile location; adjust per node accordingly
#  keyFile: /data/mongodb/dc3/config-cluster/node-01/keys/keyfile
#  clusterAuthMode: keyFile
#  authorization: enabled
```
#### 5.3 Start
```bash
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/config-cluster/node-01/etc/config.conf
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/config-cluster/node-02/etc/config.conf
/usr/local/mongodb/bin/mongod -f /data/mongodb/dc3/config-cluster/node-03/etc/config.conf
```
#### 5.4 Add the `node`s to the `config server` Service
```bash
# log into any node
/usr/local/mongodb/bin/mongo --port 27080
# run the following configuration; make sure id, ip, and port are correct
config = {
    "_id": "dc3_replica",
    "members": [
        { "_id": 0, "host": "127.0.0.1:27080" },
        { "_id": 1, "host": "127.0.0.1:27081" },
        { "_id": 2, "host": "127.0.0.1:27082" }
    ]
};
rs.initiate(config);
# check the state of the nodes in the config service
rs.status();
```
### 6. Configure the `mongos` Service
#### 6.1 Create Directories
> Create a directory for the routing service (`mongos-cluster`)
```bash
cd /data
mkdir -p mongodb/dc3/mongos-cluster
```
> Create one `node` directory for the routing service; for more, continue the pattern `node-N`
```bash
cd mongodb/dc3/mongos-cluster
mkdir node-01
```
> Create config, log, and key directories for each node; the other nodes are handled the same way
```bash
cd node-N
mkdir etc keys logs
```
#### 6.2 Configuration File
> Add the configuration file `mongos.conf` under each node's `etc` directory
```yaml
processManagement:
  fork: true
  pidFilePath: /data/mongodb/dc3/mongos-cluster/node-03/mongos.pid
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/dc3/mongos-cluster/node-03/logs/mongos.log
net:
  # port this mongos listens on
  port: 27090
  maxIncomingConnections: 10000
  bindIpAll: true
sharding:
  # dc3_replica here must match the replSetName configured for the config server replica set;
  # the three addresses are the nodes of the config server cluster
  configDB: dc3_replica/127.0.0.1:27080,127.0.0.1:27081,127.0.0.1:27082
# if you are not using authentication, keep the following lines commented out
#security:
#  keyFile: /data/mongodb/dc3/mongos-cluster/node-03/keys/keyfile
#  clusterAuthMode: keyFile
#  authorization: enabled
```
#### 6.3 Start
```bash
/usr/local/mongodb/bin/mongos -f /data/mongodb/dc3/mongos-cluster/node-01/etc/mongos.conf
```
#### 6.4 Add the `shard` Replica Sets to the `mongos` Service
```bash
# log into any node
/usr/local/mongodb/bin/mongo --port 27090
sh.addShard("dc3_replica_1/127.0.0.1:27117,127.0.0.1:27118,127.0.0.1:27119");
sh.addShard("dc3_replica_2/127.0.0.1:27217,127.0.0.1:27218,127.0.0.1:27219");
# check the state of the routing service
sh.status();
```
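> As a quick smoke test (not part of the original setup, and using an illustrative database and collection name), you can enable sharding through any `mongos` in the mongo shell:
```javascript
// connect through any mongos first, e.g. /usr/local/mongodb/bin/mongo --port 27090
sh.enableSharding("dc3");
// shard a collection on a hashed _id so writes spread across both replica sets
sh.shardCollection("dc3.pointValue", { _id: "hashed" });
sh.status();
```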
| 17.841438 | 263 | 0.653039 | yue_Hant | 0.410076 |
ed991218cb34014bdc6b82763c26938af5efc153 | 3,055 | md | Markdown | page/index.md | pvalenzuelac/RegressionDiscontinuity.jl | 6f64c1a6379df5d7a8214f6dbcdb291a25e09bc0 | [
"MIT"
] | 1 | 2021-04-21T23:09:18.000Z | 2021-04-21T23:09:18.000Z | page/index.md | arubhardwaj/RegressionDiscontinuity.jl | 4f849d783bdc6489bd23efe4459ac3135a61ea79 | [
"MIT"
] | null | null | null | page/index.md | arubhardwaj/RegressionDiscontinuity.jl | 4f849d783bdc6489bd23efe4459ac3135a61ea79 | [
"MIT"
] | 1 | 2020-12-16T01:31:45.000Z | 2020-12-16T01:31:45.000Z |
<!-- =============================
GETTING STARTED
============================== -->
\begin{:section, title="Walkthrough"}
Let us load the dataset from [Lee (2018)](#ref-lee2018). We will reproduce analyses from [Imbens and Kalyanaraman (2012)](#ref-ik2012).
```julia:ex
using DataFrames, RegressionDiscontinuity, Plots
ENV["GKSwstype"] = "nul" #hide
lee08 = load_rdd_data(:lee08) |> DataFrame
first(lee08, 3)
```
```julia:ex
running_var = RunningVariable(lee08.Zs, cutoff=0.0, treated=:≧);
```
Let us first plot the histogram of the running variable:
```julia:ex
plot(running_var; ylim=(0,600), bins=40, background_color="#f3f6f9", size=(700,400))
savefig(joinpath(@OUTPUT, "histogram.svg")) # hide
```
\center{\fig{histogram}}
Next we plot the regressogram (also known as scatterbin) of the response:
```julia:ex
regressogram = plot(running_var, lee08.Ys; bins=40, background_color="#f3f6f9", size=(700,400), legend=:bottomright)
savefig(regressogram, joinpath(@OUTPUT, "regressogram.svg")) # hide
```
\center{\fig{regressogram}}
We observe a jump at the discontinuity, which we can estimate, e.g., with local linear regression. We use local linear regression with a rectangular kernel and choose the bandwidth with the Imbens-Kalyanaraman bandwidth selector:
```julia:ex
rect_ll_rd = fit(NaiveLocalLinearRD(kernel=Rectangular(), bandwidth=ImbensKalyanaraman()),
running_var, lee08.Ys)
```
```julia:ex
plot!(regressogram, rect_ll_rd; show_local_support=true)
savefig(joinpath(@OUTPUT, "regressogram_plus_llfit.svg")) # hide
```
\center{\fig{regressogram_plus_llfit}}
Let's zoom in on the support of the local kernel, this time with a more refined regressogram:
```julia:ex
local_regressogram = plot(rect_ll_rd.data_subset; bins=40, background_color="#f3f6f9", size=(700,400), legend=:bottomright)
plot!(rect_ll_rd)
savefig(joinpath(@OUTPUT, "local_regressogram.svg")) # hide
```
\center{\fig{local_regressogram}}
Finally, we could repeat all of the above analysis with another kernel, e.g. the triangular kernel.
```julia:ex
triang_ll_rd = fit(NaiveLocalLinearRD(kernel=SymTriangularDist(), bandwidth=ImbensKalyanaraman()),
running_var, lee08.Ys)
```
\end{:section}
\begin{:section, title="References"}
**Publications**
* \label{ref-ik2012} Imbens, Guido, and Karthik Kalyanaraman. "Optimal bandwidth choice for the regression discontinuity estimator." The Review of economic studies 79.3 (2012): 933-959.
* \label{ref-lee2018} Lee, David S. "Randomized experiments from non-random selection in US House elections." Journal of Econometrics 142.2 (2008): 675-697.
**Related Julia packages**
* [GeoRDD.jl](https://github.com/maximerischard/GeoRDD.jl): Package for spatial regression discontinuity designs.
**Related R packages**
* [rdd](https://cran.r-project.org/web/packages/rdd/index.html)
* [optrdd](https://github.com/swager/optrdd)
* [RDHonest](https://github.com/kolesarm/RDHonest)
* [rdrobust](https://cran.r-project.org/web/packages/rdrobust/index.html)
\end{:section}
| 33.944444 | 224 | 0.72144 | eng_Latn | 0.501728 |
ed991a0361f094b6e1a97df4f43fad2f2399db21 | 413 | md | Markdown | _posts/2021-07-08/2021-06-14-Outstanding-view-20210614205516853310.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-14-Outstanding-view-20210614205516853310.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-14-Outstanding-view-20210614205516853310.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | ---
title: "Outstanding view"
metadate: "hide"
categories: [ Rear Pussy ]
image: "https://external-preview.redd.it/mNC89xKto3Ov6_LHrOBPXenp5lZiq5Gb33QDwB3Xnhk.png?auto=webp&s=13591bc153d604416cf52020d8889798477681a3"
thumb: "https://external-preview.redd.it/mNC89xKto3Ov6_LHrOBPXenp5lZiq5Gb33QDwB3Xnhk.png?width=640&crop=smart&auto=webp&s=257abb36a9138533a0e9da3fc60d45368132d2d3"
visit: ""
---
Outstanding view
| 41.3 | 163 | 0.823245 | yue_Hant | 0.138796 |
ed995638fc582ee59243208d91ae1263fc9085eb | 3,538 | md | Markdown | README.md | yasirliu/aylQuizMatch | aa1ac6b1df7982308964cf7042456ed652853918 | [
"MIT"
] | 2 | 2021-04-11T18:56:02.000Z | 2022-01-24T22:59:30.000Z | README.md | delexw/jquery-quizmatch | aa1ac6b1df7982308964cf7042456ed652853918 | [
"MIT"
] | null | null | null | README.md | delexw/jquery-quizmatch | aa1ac6b1df7982308964cf7042456ed652853918 | [
"MIT"
] | null | null | null | # jQuery - Quiz Match
jQuery Quiz Match is a higher-level jQuery plugin. It is not a general-purpose plugin, as it is meant for one particular case: the match question in a quiz. I got the idea while I was working on an online learning system. The aim of this plugin is to learn CSS3, HTML5 drag & drop, and jQuery plugin authoring. If it meets your needs for some quiz project, using it or customizing it might be a good choice, as the infrastructure for a match action is already there.
## Technical Features
- CSS3
- HTML5 drag & drop
- jQuery Plugin registered as an AMD
- Customizable
- Cross browser support
- Html template support
- Responsive UI
## Basic usage
This library requires jQuery and [jQuery template engine](https://github.com/codepb/jquery-template)
- Create a div in your html
- Set class to 'quizMatch-container' which is defined in the quiz match native css.
- Pass JSON data as part of the options of quiz match to init the UI
```
<div id='test' class="quizMatch-container"></div>
$('#test').aylquizmatch({
    data: matchData
});
var matchedData = $('#test').aylquizmatch('getData');
```
The data structure:
```
var data = [{
    leftItem: {
        key: 1,
        label: 'question1',
        refRight: []
    },
    rightItem: {
        key: 1,
        label: 'answer1 answer1 answer1 answer1'
    }
}, {
    leftItem: {
        key: 2,
        label: 'question2 question2 question2 question2',
        refRight: []
    },
    rightItem: {
        key: 2,
        label: 'answer2 answer2 answer2 answer2 answer2 '
    }
}]
```
The result of plugin is like:

## Options
Options can be accessed by $.fn.aylquizmatch.default
- template (currently the plugin is always using html templates)
- enable: enable or disable html template engine in plugin
- leftItem: define the html of left item or set the template relative path if enable is true
- rightItem: define the html of right item or set the template relative path if enable is true
- data: accept data in JSON format to initiate plugin.
- action:
- draggable: default is true.
- droppable: default is true.
- onbeforedrop: default is null.
- onafterdrop: default is null.
- status: it is used to track the position of the draggable elements
- enable: default is true.
- pre: an object used by the plugin's internal tracking system. Default is {}. Its inner objects are named after the key values in the data, e.g. pre['001'].containerId. containerId is the only property of each inner object and represents the previous position of one draggable element; for instance, pre['001'].containerId = 'div001' means that element 001 was previously in the div with id 'div001'.
- cur: similar to pre; it represents the current position of one draggable element.
## Methods
- getData(): retrieve the match data from plugin. The structure of the data is same as the data which is passed in.
## Customizable
1. If you want to choose another template engin for the plugin, you are able to use your own logic to load templates by overwriting method $.fn.aylquizmatch.default.utilities.loadTemplates.
1. If you want to choose another template engine for the plugin, you are able to use your own logic to load templates by overwriting the method $.fn.aylquizmatch.default.utilities.loadTemplates.
2. If you want to use a drag and drop API provided by a 3rd party, for instance jQueryUI, overwrite $.fn.aylquizmatch.draggable and $.fn.aylquizmatch.droppable in order to execute your own binding for the two events (see the sketch below).
1. make template option work
2. use Gulp to complie and release it
3. add animation features
| 41.623529 | 458 | 0.713963 | eng_Latn | 0.993101 |
ed996b8823ee05b9f3983326c425f91ee3645e09 | 1,623 | md | Markdown | README.md | M4rkopolo/files-data-pandas | 563b96247d322d83876d0fd01faaa2eca34b0db7 | [
"MIT"
] | null | null | null | README.md | M4rkopolo/files-data-pandas | 563b96247d322d83876d0fd01faaa2eca34b0db7 | [
"MIT"
] | null | null | null | README.md | M4rkopolo/files-data-pandas | 563b96247d322d83876d0fd01faaa2eca34b0db7 | [
"MIT"
] | null | null | null | # files-data-pandas
read and write files txt, CSV, using "with open()" and pandas package and operatinge on data from files
with open("./file.txt", mode="r") as x:
list_of_names = names.readlines
OR
import pandas
with open("./birthdays.csv", mode="r") as days:
df = pandas.read_csv(days) #reading csv
data = df.to_dict() #saving data frame as a dict
>>>a = [1, 7, 2] #pandas Series data
>>>
>>>myvar = pd.Series(a)
>>>mydataset = {
>>> 'cars': ["BMW", "Volvo", "Ford"],
>>> 'passings': [3, 7, 2]
>>> }
myvar = pandas.DataFrame(mydataset)
#---------------------
„r” – read
„w” – write
„a” – append
„r+” – read and write data to the file
„w+” – write and read data from the file
„a+” – appending and reading data from the file
#----------------------
close() Closes the file
detach() Returns the separated raw stream from the buffer
fileno() Returns a number that represents the stream, from the operating system's perspective
flush() Flushes the internal buffer
isatty() Returns whether the file stream is interactive or not
read() Returns the file content
readable() Returns whether the file stream can be read or not
readline() Returns one line from the file
readlines() Returns a list of lines from the file
seek() Change the file position
seekable() Returns whether the file allows us to change the file position
tell() Returns the current file position
truncate() Resizes the file to a specified size
writable() Returns whether the file can be written to or not
write() Writes the specified string to the file
writelines() Writes a list of strings to the file
| 21.355263 | 103 | 0.683919 | eng_Latn | 0.994652 |
ed99f8fc2bcd45337f4e250eefa13c9c8b8fcab4 | 69 | md | Markdown | category/markup.md | muesliii/hydeout | 0ead7507df571d3ea7f406774ef1ae5d55c673d7 | [
"MIT"
] | null | null | null | category/markup.md | muesliii/hydeout | 0ead7507df571d3ea7f406774ef1ae5d55c673d7 | [
"MIT"
] | null | null | null | category/markup.md | muesliii/hydeout | 0ead7507df571d3ea7f406774ef1ae5d55c673d7 | [
"MIT"
] | null | null | null | ---
title: Markup
layout: category
---
Another sample category page. | 11.5 | 29 | 0.724638 | eng_Latn | 0.928607 |
ed9b47da50b8dac7727c2101edda7333b6911111 | 3,666 | md | Markdown | content/news/2014/04/2014-04-30-need-help-responding-to-facebook-twitter-questions-use-your-contact-center-customer-service-experts.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | 1 | 2022-02-11T11:53:47.000Z | 2022-02-11T11:53:47.000Z | content/news/2014/04/2014-04-30-need-help-responding-to-facebook-twitter-questions-use-your-contact-center-customer-service-experts.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | null | null | null | content/news/2014/04/2014-04-30-need-help-responding-to-facebook-twitter-questions-use-your-contact-center-customer-service-experts.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | null | null | null | ---
slug: need-help-responding-to-facebook-twitter-questions-use-your-contact-center-customer-service-experts
date: 2014-04-30 10:00:49 -0400
title: 'Need Help Responding to Facebook & Twitter Questions? Use Your Contact Center Customer Service Experts'
summary: 'Government agencies are always looking for better ways to connect with their audiences while making more effective use of existing (or shrinking) resources. To that end, many agencies—including ours, the National Cancer Institute—have begun to use social media platforms to help serve the communications mission. As these tools have become more widely used, NCI’s Contact'
authors:
- candace-maynard
topics:
- communities
- product-management
- monthly-theme
- aoi
- HHS
- national-institutes-of-health
- NCI
- nih
- social-media
- united-states-department-of-health-and-human-services
---
{{< legacy-img src="2014/04/250-x-188-women-working-in-call-center-diego-cervo-iStock-Thinkstock-119850328.jpg" alt="Women working in call center" caption="" >}}
Government agencies are always looking for better ways to connect with their audiences while making more effective use of existing (or shrinking) resources. To that end, many agencies—including ours, the National Cancer Institute—have begun to use social media platforms to help serve the communications mission. As these tools have become more widely used, NCI’s Contact Center has become an essential partner in our social media efforts.
For those who don’t know us, NCI is the largest of the 27 Institutes and Centers within the National Institutes of Health (NIH), which is part of the Department of Health and Human Services (DHHS). We are the lead federal agency for cancer research. Our headquarters are in Bethesda, Maryland, but most of the work funded by NCI takes place across the country and internationally. Collecting and disseminating information about cancer is one of NCI’s core responsibilities. NCI’s Cancer Information Service program has been around for 37 years, and responds to public queries by email, phone, live online chat and, most recently, our enterprise social media channels on Facebook and Twitter.
Here’s how our social media channels work: federal staff create and manage new posts and tweets according to agency priorities and the needs of our audiences, while our contact center staff (with federal review) monitor and respond to user comments and questions related to the posts and Tweets. The contact center’s experience handling public inquiries from other NCI channels (phone, email) transfers beautifully to social media, with some tweaking in the style and timeliness of their responses. This approach helps NCI maintain consistency and accuracy in its messages across all public-facing channels and leverages the skill of contact center staff when helping the public.
NCI and its parent agencies were among the first federal agencies to develop policies by which government social media platforms should be managed, including guidance on how to moderate and manage user posts and tweets. These policies guide the contact center staff on a daily basis. But as any contact center knows, not every public interaction is black and white. And it’s exactly that experience that we have incorporated into NCI’s approach to social media so that both agency and constituent needs are met.
If you’d like more information about our social media policies and procedures, or how our contact center plays a starring role, feel free to [contact me](mailto:[email protected]).
_**Candace Maynard** is the Senior Program Manager in NCI’s Cancer Information Service._ | 111.090909 | 691 | 0.801964 | eng_Latn | 0.998731 |
ed9b6dba52460b522d9b4260467aa5c08c247f7a | 1,506 | md | Markdown | en-jps-pipe/README.md | jcuenod/parabible-data-pipeline | 1d1734186fea30ebec5ab7fad025f2f4b479ec06 | [
"Xnet"
] | 1 | 2022-01-04T22:04:56.000Z | 2022-01-04T22:04:56.000Z | en-jps-pipe/README.md | jcuenod/parabible-data-pipeline | 1d1734186fea30ebec5ab7fad025f2f4b479ec06 | [
"Xnet"
] | 13 | 2020-07-25T00:35:37.000Z | 2022-03-23T02:12:34.000Z | en-jps-pipe/README.md | jcuenod/parabible-data-pipeline | 1d1734186fea30ebec5ab7fad025f2f4b479ec06 | [
"Xnet"
] | null | null | null | # Data Source
| | Notes |
| --- | --- |
| **Content** | JPS (1917) |
| **Source** | <https://ebible.org/details.php?id=engjps&all=1> |
| **Format** | XML with random nested elements |
| **License** | Public Domain |
## Content
### Jewish Publication Society Translation
The JPS translation is a standard Jewish translation that often differs from English translations in interesting ways. These divergences typically highlight alternative ways of reading the Hebrew that are always worth considering.
> The Open Siddur Project aims to produce a free software toolkit for making high-quality custom Jewish liturgical books such as haggadot, siddurim, and bentchers that can be displayed on screen or printed to paper.
> <https://github.com/opensiddur/opensiddur>
The text for the translation is available in [`/opensiddur-sources/sources/1917JPS/books/`](https://github.com/opensiddur/opensiddur/tree/develop/opensiddur-sources/sources/1917JPS/books). There's also a plain text file in the parent directory which may be easier to work with. It contains paragraphing and may split verses for that purpose if necessary (I haven't checked); it also carries a bunch of columns (including footnotes). XML looks like the best bet, though (it is also easier to determine when all the nested elements are accounted for). The text is available at ebible.org as well, but there it's in USFM.
### Enrichments
- NB: Needed enrichment is going to be **alignment** because JPS is offset from NRSV and MT at certain points.
| 60.24 | 606 | 0.763612 | eng_Latn | 0.99709 |
ed9bd21e0b5ca4d5121c910dea7af1770ec1057d | 455 | md | Markdown | docs/error-messages/tool-errors/linker-tools-error-lnk1127.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/tool-errors/linker-tools-error-lnk1127.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/tool-errors/linker-tools-error-lnk1127.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Linker Tools Error LNK1127
ms.date: 11/04/2016
f1_keywords:
- LNK1127
helpviewer_keywords:
- LNK1127
ms.assetid: 45404700-1420-4f24-97e1-efb7252ffd8f
ms.openlocfilehash: 4144e95261df9a16d6cbb094a2785121b3bdfd92
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/31/2018
ms.locfileid: "50652395"
---
# <a name="linker-tools-error-lnk1127"></a>Linker Tools Error LNK1127
library is corrupted
The library file is corrupted. Rebuild the library. | 22.75 | 60 | 0.808791 | yue_Hant | 0.200727 |
ed9be5b7f9c23bb26f7eedebe50bf8f3e9da7d88 | 6,541 | md | Markdown | aspnet/signalr/overview/deployment/using-signalr-with-azure-web-sites.md | crowchirp/aspnet-docs | 68bd479a394f6660934c9560821c70ad831d0ef7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/signalr/overview/deployment/using-signalr-with-azure-web-sites.md | crowchirp/aspnet-docs | 68bd479a394f6660934c9560821c70ad831d0ef7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/signalr/overview/deployment/using-signalr-with-azure-web-sites.md | crowchirp/aspnet-docs | 68bd479a394f6660934c9560821c70ad831d0ef7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Using SignalR with Web Apps in Azure App Service | Microsoft Docs"
author: pfletcher
description: "This document describes how to configure a SignalR application that runs on Microsoft Azure. Software versions used in the tutorial Visual Studio 2013 or Vis..."
ms.author: aspnetcontent
manager: wpickett
ms.date: 07/01/2015
ms.topic: article
ms.assetid: 2a7517a0-b88c-4162-ade3-9bf6ca7062fd
ms.technology: dotnet-signalr
ms.prod: .net-framework
msc.legacyurl: /signalr/overview/deployment/using-signalr-with-azure-web-sites
msc.type: authoredcontent
---
Using SignalR with Web Apps in Azure App Service
====================
by [Patrick Fletcher](https://github.com/pfletcher)
> This document describes how to configure a SignalR application that runs on Microsoft Azure.
>
> ## Software versions used in the tutorial
>
>
> - [Visual Studio 2013](https://www.microsoft.com/visualstudio/eng/2013-downloads) or Visual Studio 2012
> - .NET 4.5
> - SignalR version 2
> - Azure SDK 2.3 for Visual Studio 2013 or 2012
>
>
>
> ## Questions and comments
>
> Please leave feedback on how you liked this tutorial and what we could improve in the comments at the bottom of the page. If you have questions that are not directly related to the tutorial, you can post them to the [ASP.NET SignalR forum](https://forums.asp.net/1254.aspx/1?ASP+NET+SignalR), [StackOverflow.com](http://stackoverflow.com/), or the [Microsoft Azure forums](https://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?category=windowsazureplatform).
## Table of Contents
- [Introduction](#introduction)
- [Deploying a SignalR Web App to Azure App Service](#deploying)
- [Enabling WebSockets on Azure App Service](#websocket)
- [Using the Azure Redis Cache Backplane](#backplane)
- [Next Steps](#nextsteps)
<a id="introduction"></a>
## Introduction
ASP.NET SignalR can be used to bring a new level of interactivity between servers and web or .NET clients. When hosted in Azure, SignalR applications can take advantage of the highly available, scalable, and performant environment that running in the cloud provides.
<a id="deploying"></a>
## Deploying a SignalR Web App to Azure App Service
SignalR doesn't add any particular complications to deploying an application to Azure versus deploying to an on-premises server. An application that uses SignalR can be hosted in Azure without any changes in configuration or other settings (though for WebSockets support, see [Enabling WebSockets on Azure App Service](#websocket) below.) For this tutorial, you'll deploy the application created in the [Getting Started Tutorial](../getting-started/tutorial-getting-started-with-signalr.md) to Azure.
**Prerequisites**
- Visual Studio 2013. If you don't have Visual Studio, Visual Studio 2013 Express for Web is included in the install of the Azure SDK.
- [Azure SDK 2.3 for Visual Studio 2013](https://go.microsoft.com/fwlink/?linkid=324322&clcid=0x409) or [Azure SDK 2.3 for Visual Studio 2012](https://go.microsoft.com/fwlink/p/?linkid=323511).
- To complete this tutorial, you will need an Azure subscription. You can [activate your MSDN subscriber benefits](https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits-details/), or [sign up for a trial subscription](https://azure.microsoft.com/en-us/pricing/free-trial/).
### Deploying a SignalR web app to Azure
1. Complete the [Getting Started Tutorial](../getting-started/tutorial-getting-started-with-signalr.md), or download the finished project from [Code Gallery](https://code.msdn.microsoft.com/SignalR-Getting-Started-b9d18aa9).
2. In Visual Studio, select **Build**, **Publish SignalR Chat**.
3. In the "Publish Web" dialog, select "Windows Azure Web Sites".

4. If you aren't signed in to your Microsoft account, click **Sign In...** in the "Select Existing Web Site" dialog, and sign in.
 
5. In the "Select Existing Web Site" dialog, click **New**.

6. In the "Create site on Windows Azure" dialog, enter a unique app name. Select the region closest to you in the Region dropdown. Click **Create**.

7. In the "Publish Web" dialog, click **Publish**.

8. When the app has completed publishing, the SignalR Chat application hosted in Azure App Service Web Apps will open in a browser.

<a id="websocket"></a>
### Enabling WebSockets on Azure App Service Web Apps
WebSockets needs to be explicitly enabled in your web app to be used in a SignalR application; otherwise, other protocols will be used (See [Transports and Fallbacks](../getting-started/introduction-to-signalr.md) for details).
In order to use WebSockets on Azure App Service Web Apps, enable it in the configuration section of the web app. To do this, open your web app in the [Azure Management Portal](https://manage.windowsazure.com/), and select Configure.

At the top of the configuration page, ensure that .NET 4.5 is used for your web app.

On the configuration page, in the **WebSockets** setting, select **On**.

At the bottom of the Configuration page, select **Save** to save your changes.

<a id="backplane"></a>
## Using the Azure Redis Cache Backplane
If you use multiple instances for your web app, and the users of those instances need to interact with one another (so that, for instance, chat messages created in one instance can reach the users connected to other instances), the [Azure Redis Cache backplane](../performance/scaleout-with-redis.md) must be implemented in your application.
<a id="nextsteps"></a>
## Next Steps
For more information on Web Apps in Azure App Service, see [Web Apps overview](https://azure.microsoft.com/en-us/documentation/articles/app-service-web-overview/). | 58.401786 | 500 | 0.763645 | eng_Latn | 0.875629 |
ed9c34f7bec58928cf9dadf1e54ce024ca4047d6 | 138 | md | Markdown | README.md | aaronp/agently | c320540a19b17b0068935778f27640ea830d7ea8 | [
"Apache-2.0"
] | null | null | null | README.md | aaronp/agently | c320540a19b17b0068935778f27640ea830d7ea8 | [
"Apache-2.0"
] | null | null | null | README.md | aaronp/agently | c320540a19b17b0068935778f27640ea830d7ea8 | [
"Apache-2.0"
] | null | null | null | # Agently
A wrapper for using coursier (et al) to run things. It's meant to be used
as a cheap/cheerful agent to ensure stuff is started. | 34.5 | 73 | 0.76087 | eng_Latn | 0.999911 |
ed9c87e2090ffa14f96750cb3fceb9818809ecc2 | 967 | md | Markdown | add/metadata/System.Windows/AttachedPropertyBrowsableWhenAttributePresentAttribute.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/System.Windows/AttachedPropertyBrowsableWhenAttributePresentAttribute.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/System.Windows/AttachedPropertyBrowsableWhenAttributePresentAttribute.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: System.Windows.AttachedPropertyBrowsableWhenAttributePresentAttribute
ms.technology:
- "dotnet-wpf"
author: "dotnet-bot"
ms.author: "dotnetcontent"
manager: "wpickett"
---
---
uid: System.Windows.AttachedPropertyBrowsableWhenAttributePresentAttribute.#ctor(System.Type)
ms.technology:
- "dotnet-wpf"
ms.author: "dotnet-bot"
manager: "wpickett"
---
---
uid: System.Windows.AttachedPropertyBrowsableWhenAttributePresentAttribute.AttributeType
ms.technology:
- "dotnet-wpf"
author: "dotnet-bot"
ms.author: "dotnetcontent"
manager: "wpickett"
---
---
uid: System.Windows.AttachedPropertyBrowsableWhenAttributePresentAttribute.GetHashCode
ms.technology:
- "dotnet-wpf"
author: "dotnet-bot"
ms.author: "dotnetcontent"
manager: "wpickett"
---
---
uid: System.Windows.AttachedPropertyBrowsableWhenAttributePresentAttribute.Equals(System.Object)
ms.technology:
- "dotnet-wpf"
author: "dotnet-bot"
ms.author: "dotnetcontent"
manager: "wpickett"
---
| 21.977273 | 96 | 0.772492 | pol_Latn | 0.10875 |
ed9c92f846c17f97d87e7cb60d5ae84c328beefe | 2,808 | md | Markdown | README.md | keodarus/Inq-mc-aixi | b3664e42df2e377eae6fc776d596944f29221f5e | [
"MIT"
] | 1 | 2021-07-27T09:46:58.000Z | 2021-07-27T09:46:58.000Z | README.md | keodarus/Inq-mc-aixi | b3664e42df2e377eae6fc776d596944f29221f5e | [
"MIT"
] | null | null | null | README.md | keodarus/Inq-mc-aixi | b3664e42df2e377eae6fc776d596944f29221f5e | [
"MIT"
] | null | null | null | # Inq-mc-aixi
This place's primary purpose is to hold my ideas about this topic and to act as an initial spark for creating usable, effective AIXI approximations.
It is also a place to spot possible reasoning errors and the like.
I can also provide some terminal commands with information on how to get started, such as how to compile Joel Veness's implementation.
When I successfully manage to code the required changes, they will of course also be found here. I'm unfortunately not that experienced in programming.
## Main problem of current MC-AIXI approximations
According to my understanding the theory for how to build quite capable MC-AIXI's has been already elaborated by insanely intelligent humans.
My conclusion about the primary previous limiter for MC-AIXI's is:
- A computationally affordable exploration method that does not consist of random exploration!!
Even when it would cost some precious calculations, random exploring exhibits the same intelligence as a toast that has not yet been toasted.
Intelligent exploration makes up the other 50% of an RE agent's intelligent behavior. Random exploration becomes in later stages so ineffective and inefficient for environments that consists of challenging state spaces (like playing partially observable pacman).
The solution for this has been worked out in the paper: A Strongly Asymptotically Optimal Agent in General Environments.
Available at arXiv.org under the serial arXiv:1903.01021
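To make the contrast concrete, here is a toy C++ sketch (my illustration only, not the Inq algorithm from that paper): instead of sometimes acting uniformly at random, the agent always maximizes its value estimate plus a weighted information-gain bonus supplied by its model. All names and numbers here are made up.

```cpp
// Toy contrast with epsilon-greedy: no random action is ever taken; the
// exploration pressure comes from an information-gain bonus instead.
#include <cstdio>
#include <cstddef>
#include <vector>

// values:   planner's value estimate per action
// infoGain: model's expected information gain per action
// beta:     exploration weight (made-up knob)
std::size_t pickAction(const std::vector<double> &values,
                       const std::vector<double> &infoGain, double beta) {
    std::size_t best = 0;  // assumes at least one action
    for (std::size_t a = 1; a < values.size(); ++a) {
        if (values[a] + beta * infoGain[a] >
            values[best] + beta * infoGain[best])
            best = a;
    }
    return best;
}

int main() {
    std::vector<double> values   = {0.2, 0.5, 0.4};  // greedy would pick 1
    std::vector<double> infoGain = {0.9, 0.0, 0.3};
    // The bonus steers the agent toward action 0, whose outcome the model
    // is most uncertain about.
    std::printf("chosen action: %zu\n", pickAction(values, infoGain, 0.5));
    return 0;
}
```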
## Medium problems of current realized MC-AIXI approximations
One thought that came to me is that the CTW algorithm used for prediction is probably not capable of transfer learning or of interpolating (small) differences in (basically) the same observational data. This is certainly a problem for more complex visual, audio, and analog real-world tasks. We must bear in mind that current MC-AIXIs can probably only work with clean, digitally distinguishable observation spaces of limited size. Transfer learning also becomes a problem when dealing with "deep", large environments.
These problems are not game-breakers for the agent, but serious attention must be paid to them: the agent must work through many more cycles to compensate.
## Overview of existing work
- There is one open-source C++ MC (FAC-CTW) AIXI agent approximation out there that is capable of performing multi-threaded MC tree search, which makes it a very important work. Its creator is Joel Veness; the code can be downloaded from his homepage.
- A JavaScript demo implementing an Inq-MC-AIXI agent approximation plus a demo environment. Its license is GPL-3.0.
Found at GitHub: ejcatt/aixijs ---> It is also very helpful since it implements the Inq algorithm in actual code.
[WIP]
| 96.827586 | 552 | 0.806624 | eng_Latn | 0.99962 |
ed9d3fccc7e909de784bf34a955d7af4b9a61be8 | 931 | md | Markdown | _posts/2020-02-27-espero-que-bolsonaro-troque-weintraub-antes-de-decisao-do-stf-diz-tabata.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2020-02-27-espero-que-bolsonaro-troque-weintraub-antes-de-decisao-do-stf-diz-tabata.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2020-02-27-espero-que-bolsonaro-troque-weintraub-antes-de-decisao-do-stf-diz-tabata.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | 1 | 2022-01-13T07:57:24.000Z | 2022-01-13T07:57:24.000Z | ---
layout: post
item_id: 2898178242
title: >-
  I hope Bolsonaro replaces Weintraub before the STF ruling, says Tabata
author: Tatu D'Oquei
date: 2020-02-27 01:00:00
pub_date: 2020-02-27 01:00:00
time_added: 2020-02-27 10:50:32
category:
tags: []
image: https://conteudo.imguol.com.br/c/noticias/c8/2020/02/20/a-deputada-tabata-amaral-pdt-sp-durante-entrevista-no-estudio-folhauol-1582225886214_v2_615x300.jpg
---
In her first term, the 26-year-old congresswoman also criticized the attacks she suffers for being young and a woman ("They will always say there is a man behind it") and condemned Bolsonaro's insult to Folha journalist Patrícia Campos Mello.
**Link:** [https://noticias.uol.com.br/politica/ultimas-noticias/2020/02/27/espero-que-bolsonaro-troque-weintraub-antes-do-stf-diz-tabata.htm](https://noticias.uol.com.br/politica/ultimas-noticias/2020/02/27/espero-que-bolsonaro-troque-weintraub-antes-do-stf-diz-tabata.htm)
| 49 | 274 | 0.774436 | por_Latn | 0.83516 |
ed9dec4d7ec73f16d2840e6eb4716b29a2a61085 | 453 | md | Markdown | README.md | FelipeAlves99/Arduino_APS_UNIP_1_2019 | 41221f8a864adce37188f13dea0efea1093ab357 | [
"MIT"
] | null | null | null | README.md | FelipeAlves99/Arduino_APS_UNIP_1_2019 | 41221f8a864adce37188f13dea0efea1093ab357 | [
"MIT"
] | null | null | null | README.md | FelipeAlves99/Arduino_APS_UNIP_1_2019 | 41221f8a864adce37188f13dea0efea1093ab357 | [
"MIT"
] | null | null | null | # Arduino_APS_UNIP_2019
Arduino source code
This Arduino code was written for a college assignment. It checks the data of a DHT22 sensor (which measures temperature and humidity) and a rain sensor (which checks whether the metal plate is wet). Both output numeric values, but the rain reading, depending on how high the value is, is converted to a string.
After reading the sensors, the code outputs the data in this order: temperature, humidity, and isRaining.
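A minimal sketch of that flow might look like the following. The pin numbers, the rain threshold, and the DHT library calls are assumptions for illustration; this is not the original course code.

```cpp
// Hypothetical sketch of the flow described above -- pins and threshold
// are placeholders, not the original assignment code.
#include <DHT.h>  // assumes the Adafruit "DHT sensor library"

#define DHTPIN 2       // assumed data pin for the DHT22
#define DHTTYPE DHT22
#define RAIN_PIN A0    // assumed analog pin for the rain plate

DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(9600);
  dht.begin();
}

void loop() {
  float temperature = dht.readTemperature();  // degrees Celsius
  float humidity = dht.readHumidity();        // relative humidity, %

  // On common rain modules a wetter plate gives a lower analog reading;
  // 500 is a made-up threshold.
  int rainValue = analogRead(RAIN_PIN);
  const char *isRaining = (rainValue < 500) ? "raining" : "not raining";

  // Output order from the description: temperature, humidity, isRaining.
  Serial.print(temperature);
  Serial.print(", ");
  Serial.print(humidity);
  Serial.print(", ");
  Serial.println(isRaining);

  delay(2000);  // the DHT22 needs about 2 s between reads
}
```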
| 64.714286 | 301 | 0.788079 | eng_Latn | 0.999286 |
ed9e27b2ac425a603f2a5be33d08c0da96ad6b1a | 1,610 | md | Markdown | api/qsharp/microsoft.quantum.arithmetic.phaselittleendian.md | MicrosoftDocs/quantum-docs-pr.nl-NL | 9a561840d56aa53cb100ce8286e884c9f4a17180 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-19T20:13:35.000Z | 2020-05-19T20:13:35.000Z | api/qsharp/microsoft.quantum.arithmetic.phaselittleendian.md | MicrosoftDocs/quantum-docs-pr.nl-NL | 9a561840d56aa53cb100ce8286e884c9f4a17180 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/qsharp/microsoft.quantum.arithmetic.phaselittleendian.md | MicrosoftDocs/quantum-docs-pr.nl-NL | 9a561840d56aa53cb100ce8286e884c9f4a17180 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-11-15T09:22:24.000Z | 2021-12-05T18:52:31.000Z | ---
uid: Microsoft.Quantum.Arithmetic.PhaseLittleEndian
title: PhaseLittleEndian user-defined type
ms.date: 1/23/2021 12:00:00 AM
ms.topic: article
qsharp.kind: udt
qsharp.namespace: Microsoft.Quantum.Arithmetic
qsharp.name: PhaseLittleEndian
qsharp.summary: >-
Little-endian unsigned integers in QFT basis.
For example, if $\ket{x}$ is the little-endian encoding of the integer $x$ in the computational basis, then $\operatorname{QFTLE} \ket{x}$ is the encoding of $x$ in the QFT basis.
ms.openlocfilehash: 59df1db31090f875ccd261fe6cc43995ba57b963
ms.sourcegitcommit: 71605ea9cc630e84e7ef29027e1f0ea06299747e
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 01/26/2021
ms.locfileid: "98843002"
---
# <a name="phaselittleendian-user-defined-type"></a>PhaseLittleEndian user-defined type

Namespace: [Microsoft.Quantum.Arithmetic](xref:Microsoft.Quantum.Arithmetic)

Package: [Microsoft.Quantum.Standard](https://nuget.org/packages/Microsoft.Quantum.Standard)

Little-endian unsigned integers in the QFT basis.

For example, if $\ket{x}$ is the little-endian encoding of the integer $x$ in the computational basis, then $\operatorname{QFTLE} \ket{x}$ is the encoding of $x$ in the QFT basis.
```qsharp
newtype PhaseLittleEndian = (Qubit[]);
```
## <a name="remarks"></a>Remarks

We abbreviate `PhaseLittleEndian` as `PhaseLE` in the documentation.

## <a name="see-also"></a>See also

- [Microsoft.Quantum.Canon.QFT](xref:Microsoft.Quantum.Canon.QFT)
- [Microsoft.Quantum.Canon.QFTLE](xref:Microsoft.Quantum.Canon.QFTLE)
ed9e7e6b910d647cf2acd77bfaa9d60b73766197 | 951 | md | Markdown | vdj/README.md | RPGroup-PBoC/VDJ | a59214f878968e5958915b56983b0f52a0a0483e | [
"MIT"
] | null | null | null | vdj/README.md | RPGroup-PBoC/VDJ | a59214f878968e5958915b56983b0f52a0a0483e | [
"MIT"
] | null | null | null | vdj/README.md | RPGroup-PBoC/VDJ | a59214f878968e5958915b56983b0f52a0a0483e | [
"MIT"
] | null | null | null | # `vdj`
---
Welcome to the computational guts of the project. This module contains all
functions used in processing, analysis, and visualization of data in this work.
All functions are thoroughly documented and can be viewed by typing
`vdj.module.function?` in a Python prompt. The following is a brief description
of each submodule.
* `__init__.py` \| Standard initialization file for the entire module.
* `bayes.py` \| A collection of functions for performing Bayesian inference via
Stan using the [Stan probabilistic programming language](http://mc-stan.org)
and the [PyStan](https://pystan.readthedocs.io/en/latest/) interface.
* `io.py` \| A collection of functions for file input-output and loading of
various constants, such as the sequence for each endogenous sequence.
* `stats.py` \| Functions for simple calculation of various summary statistics.
* `viz.py` \| Useful functions for data visualization and setting of the
plotting theme. | 55.941176 | 79 | 0.770768 | eng_Latn | 0.992071 |
ed9fd312049b4efb0587c511343e2a9a0eba14b2 | 218 | md | Markdown | README.md | joy-framework/language-janet | d2cb04e09d124c6d71213d6410d9bd93c0d639b2 | [
"MIT"
] | 5 | 2019-03-03T17:50:14.000Z | 2020-05-16T19:05:18.000Z | README.md | joy-framework/language-janet | d2cb04e09d124c6d71213d6410d9bd93c0d639b2 | [
"MIT"
] | 2 | 2020-01-06T00:55:44.000Z | 2020-02-01T02:07:02.000Z | README.md | joy-framework/language-janet | d2cb04e09d124c6d71213d6410d9bd93c0d639b2 | [
"MIT"
] | 3 | 2019-03-31T23:14:33.000Z | 2020-04-11T06:11:00.000Z | # Janet Language support in Atom
Adds syntax highlighting to Janet files in Atom.
Contributions are greatly appreciated. Please fork this repository and open a pull request to add snippets, make grammar tweaks, etc.
| 36.333333 | 133 | 0.807339 | eng_Latn | 0.989956 |
eda05c3e2cd6e1900d3f3e64da50bff068da22cc | 783 | md | Markdown | Port 3702 - WSD/README.md | racompton/AMP-Research | aabc1bb3f08ed960d8466bd1e53408d2977db1fe | [
"MIT"
] | 183 | 2019-09-30T09:22:44.000Z | 2022-03-30T20:39:30.000Z | WSD port 3702/README.md | mikust/AMP-Research | 81a3a95842616e5f6f498b8df1e04a5383ace7b5 | [
"MIT"
] | 5 | 2020-03-25T11:21:52.000Z | 2022-03-09T01:43:07.000Z | WSD port 3702/README.md | mikust/AMP-Research | 81a3a95842616e5f6f498b8df1e04a5383ace7b5 | [
"MIT"
] | 72 | 2019-09-28T19:12:39.000Z | 2022-03-27T20:08:07.000Z | # WS-DD (or just WSD - Web Services Discovery)
## Port: 3702
## Proto: UDP
## Amplification factor: 30-150x (varies depending on the SOAP WSDL in use)
## Reflector count: ~422,000 (Nov 2019)
---
### Malformed requests
- Broken XML, or simply a well-formed request that is not in the scope of the SOAP WSDL
### Well-formed requests
- https://docs.microsoft.com/en-us/windows/win32/wsdapi/probematches-message
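As a sketch of what such a Probe looks like on the wire, the following minimal POSIX C++ program sends one well-formed Probe and compares request and reply sizes to estimate the amplification factor. The target address, MessageID, and timeout are placeholders, and it should only be pointed at hosts you are authorized to test.

```cpp
// Hedged sketch: send one WS-Discovery Probe over UDP/3702 and report the
// reply size. Target IP and MessageID are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    // Generic Probe envelope per the 2005/04 WS-Discovery spec linked below.
    const char *probe =
        "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
        "<soap:Envelope"
        " xmlns:soap=\"http://www.w3.org/2003/05/soap-envelope\""
        " xmlns:wsa=\"http://schemas.xmlsoap.org/ws/2004/08/addressing\""
        " xmlns:wsd=\"http://schemas.xmlsoap.org/ws/2005/04/discovery\">"
        "<soap:Header>"
        "<wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>"
        "<wsa:Action>"
        "http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe"
        "</wsa:Action>"
        "<wsa:MessageID>urn:uuid:00000000-0000-0000-0000-000000000000"
        "</wsa:MessageID>"
        "</soap:Header>"
        "<soap:Body><wsd:Probe/></soap:Body>"
        "</soap:Envelope>";

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    timeval tv{2, 0};  // 2 s receive timeout so we don't block forever
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(3702);
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  // placeholder target

    sendto(fd, probe, strlen(probe), 0,
           reinterpret_cast<sockaddr *>(&dst), sizeof(dst));

    char buf[65536];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n > 0)
        printf("request: %zu bytes, reply: %zd bytes (~%.1fx)\n",
               strlen(probe), n, static_cast<double>(n) / strlen(probe));
    else
        puts("no reply within timeout");
    close(fd);
    return 0;
}
```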
### Documentation
- http://specs.xmlsoap.org/ws/2005/04/discovery/ws-discovery.pdf
- https://docs.microsoft.com/en-us/sql/relational-databases/xml/requirements-and-limitations-for-xml-schema-collections-on-the-server?view=sql-server-2017
- http://schemas.xmlsoap.org/ws/2006/02/devprof/devicesprofile.xsd
- http://specs.xmlsoap.org/ws/2006/02/devprof/devicesprofile.pdf
| 29 | 154 | 0.750958 | kor_Hang | 0.31115 |