Dataset schema (column: type, range or number of classes):

hexsha: stringlengths, 40 to 40
size: int64, 5 to 1.04M
ext: stringclasses, 6 values
lang: stringclasses, 1 value
max_stars_repo_path: stringlengths, 3 to 344
max_stars_repo_name: stringlengths, 5 to 125
max_stars_repo_head_hexsha: stringlengths, 40 to 78
max_stars_repo_licenses: sequencelengths, 1 to 11
max_stars_count: int64, 1 to 368k
max_stars_repo_stars_event_min_datetime: stringlengths, 24 to 24
max_stars_repo_stars_event_max_datetime: stringlengths, 24 to 24
max_issues_repo_path: stringlengths, 3 to 344
max_issues_repo_name: stringlengths, 5 to 125
max_issues_repo_head_hexsha: stringlengths, 40 to 78
max_issues_repo_licenses: sequencelengths, 1 to 11
max_issues_count: int64, 1 to 116k
max_issues_repo_issues_event_min_datetime: stringlengths, 24 to 24
max_issues_repo_issues_event_max_datetime: stringlengths, 24 to 24
max_forks_repo_path: stringlengths, 3 to 344
max_forks_repo_name: stringlengths, 5 to 125
max_forks_repo_head_hexsha: stringlengths, 40 to 78
max_forks_repo_licenses: sequencelengths, 1 to 11
max_forks_count: int64, 1 to 105k
max_forks_repo_forks_event_min_datetime: stringlengths, 24 to 24
max_forks_repo_forks_event_max_datetime: stringlengths, 24 to 24
content: stringlengths, 5 to 1.04M
avg_line_length: float64, 1.14 to 851k
max_line_length: int64, 1 to 1.03M
alphanum_fraction: float64, 0 to 1
lid: stringclasses, 191 values
lid_prob: float64, 0.01 to 1
d08c68873782ccf62ad50d13f84ec0d796a6c775
3,082
md
Markdown
redis/HELP.md
papilong123/JavaCodeBase
069e06b21980b8a3df0c32f0e46e2d21f985e07a
[ "Apache-2.0" ]
null
null
null
redis/HELP.md
papilong123/JavaCodeBase
069e06b21980b8a3df0c32f0e46e2d21f985e07a
[ "Apache-2.0" ]
null
null
null
redis/HELP.md
papilong123/JavaCodeBase
069e06b21980b8a3df0c32f0e46e2d21f985e07a
[ "Apache-2.0" ]
null
null
null
## Preface

### 1. Learn the Spring core features
- IoC
- The type conversion system
- SpEL
- JMX integration
- The DAO exception hierarchy

### 2. Learn NoSQL and key-value databases
#### Example
The `retwisj` example: a Redis-based Twitter clone that can run locally or be deployed to the cloud.

### 3. Requirements
Spring Data Redis 2.x binaries require JDK 8.0 or above and Spring Framework 5.3.9 or above.
As for Redis versions, Redis 2.6.x or higher is required. Spring Data Redis is currently tested against the latest 4.0 release.

### 4. Additional help resources
* [Community resources (Stack Overflow spring-data tag)](https://stackoverflow.com/questions/tagged/spring-data)
* [Professional support (Pivotal)](https://pivotal.io/)

### 5. Following development
* For information on the Spring Data source repositories, nightly builds, and snapshot artifacts, visit the [Spring Data](https://spring.io/spring-data) home page.
* You can help Spring Data better serve the needs of the Spring community by interacting with developers on Stack Overflow under the [spring-data](https://stackoverflow.com/questions/tagged/spring-data) or [spring-data-redis](https://stackoverflow.com/questions/tagged/spring-data-redis) tags.
* For the latest news and announcements in the Spring ecosystem, subscribe to the [Spring community portal](https://spring.io).
* Finally, you can follow the Spring [blog](https://spring.io/blog) or the project team ([@SpringData](https://twitter.com/SpringData)) on Twitter.

### 6. New features
#### Spring Data Redis 2.6
1. Support for SubscriptionListener when using MessageListener as a subscription confirmation callback.
2. ReactiveRedisMessageListenerContainer and ReactiveRedisOperations provide receiveLater() and listenToLater() methods to await until Redis acknowledges the subscription.
3. Support for Redis 6.2 commands (LPOP/RPOP with count, LMOVE/BLMOVE, COPY, GETEX, GETDEL, GEOSEARCH, GEOSEARCHSTORE, ZPOPMIN, BZPOPMIN, ZPOPMAX, BZPOPMAX, ZMSCORE, ZDIFF, ZDIFFSTORE, ZINTER, ZUNION, HRANDFIELD, ZRANDMEMBER, SMISMEMBER).

#### Spring Data Redis 2.5
1. MappingRedisConverter no longer converts byte arrays into a collection representation.

#### Spring Data Redis 2.4
1. RedisCache exposes CacheStatistics.
2. ACL authentication support for Redis Standalone, Redis Cluster, and Master/Replica.
3. Password support for Redis Sentinel connections when using Jedis.
4. Support for the ZREVRANGEBYLEX and ZLEXCOUNT commands.
5. Support for Streams commands when using Jedis.

#### Spring Data Redis 2.3
1. Template API method overloads for Duration and Instant.
2. Extended Streams commands.

#### Spring Data Redis 2.2
1. Redis Streams.
2. Redefined union/diff/intersect set-operation methods to accept separate key collections.
3. Upgrade to Jedis 3.
4. Added support for Jedis Cluster scripting commands.

#### Spring Data Redis 2.1
1. Unix domain socket connections using Lettuce.
2. Write-to-master, read-from-replica support using Lettuce.
3. Query by Example integration.
4. @TypeAlias support for Redis repositories.

#### Spring Data Redis 2.0
1. Upgrade to Java 8.
2. Upgrade to Lettuce 5.0.
3. Removed support for the SRP and JRedis drivers.
4. Reactive connection support using Lettuce.

#### Spring Data Redis 1.9
#### Spring Data Redis 1.8
#### Spring Data Redis 1.7
#### Spring Data Redis 1.6
#### Spring Data Redis 1.5

### 7. Dependencies
(An illustrative Java usage sketch appears at the end of this document.)

1. Pre-release versions: use the release train BOM.

        <dependencyManagement>
          <dependencies>
            <dependency>
              <groupId>org.springframework.data</groupId>
              <artifactId>spring-data-bom</artifactId>
              <version>2021.1.0-M1</version>
              <scope>import</scope>
              <type>pom</type>
            </dependency>
          </dependencies>
        </dependencyManagement>

2. Release versions:

        <dependencies>
          <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-jpa</artifactId>
          </dependency>
        </dependencies>

3. Spring Boot dependency management.
4. Spring Framework dependencies.

## Reference documentation

### 8. Introduction
#### 1. Document structure
"Redis support" describes the feature set of the Redis module.
"Redis Repositories" introduces repository support for Redis.

### 9. Why Spring Data Redis
### 10. Redis support
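To make the dependency setup above concrete, here is a minimal, hypothetical Java sketch of using Spring Data Redis outside a full Spring container. The connection settings (localhost:6379), the class name, and the key/value used are assumptions for illustration only; they are not taken from the original document.

```java
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

public class RedisQuickstart {
    public static void main(String[] args) {
        // Assumes a Redis server is reachable on localhost:6379 (illustrative only).
        LettuceConnectionFactory factory = new LettuceConnectionFactory("localhost", 6379);
        factory.afterPropertiesSet(); // initialize the factory manually when not managed by Spring

        // StringRedisTemplate uses String serializers for keys and values.
        StringRedisTemplate template = new StringRedisTemplate(factory);

        // Simple key/value round trip through the template API.
        template.opsForValue().set("greeting", "hello from Spring Data Redis");
        String value = template.opsForValue().get("greeting");
        System.out.println(value);

        factory.destroy();
    }
}
```

In a typical application the connection factory and template would instead be declared as Spring beans and injected where needed.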
26.118644
121
0.714796
yue_Hant
0.825084
d08ce1400ec8591820cbeb385569cde75911cd91
3,046
md
Markdown
CONTRIBUTING.md
malept/guardhaus
53089605eb42478fcc4313bb11a35cc9d27f497c
[ "MIT" ]
7
2016-04-10T08:25:41.000Z
2021-12-22T09:06:28.000Z
CONTRIBUTING.md
malept/guardhaus
53089605eb42478fcc4313bb11a35cc9d27f497c
[ "MIT" ]
5
2020-08-02T19:32:12.000Z
2021-04-17T02:53:19.000Z
CONTRIBUTING.md
malept/guardhaus
53089605eb42478fcc4313bb11a35cc9d27f497c
[ "MIT" ]
2
2017-03-25T20:34:52.000Z
2019-04-24T04:17:14.000Z
# Contributing to Guardhaus

Guardhaus is a part of the Rust ecosystem. As such, all contributions to this project follow the [Rust language's code of conduct](https://www.rust-lang.org/conduct.html) where appropriate.

This project is hosted at [GitHub](https://github.com/malept/guardhaus). Both pull requests and issues of many different kinds are accepted.

## Filing Issues

Issues include bugs, questions, feedback, and feature requests. Before you file a new issue, please make sure that your issue has not already been filed by someone else.

### Filing Bugs

When filing a bug, please include the following information:

* Operating system and version. If on Linux, please also include the distribution name.
* System architecture. Examples include: x86-64, x86, and ARMv7.
* Rust version that compiled Guardhaus.
* The version (and/or git revision) of Guardhaus.
* A detailed list of steps to reproduce the bug. A minimal testcase would be very helpful, if possible.
* If there are any error messages in the console, copying them into the bug summary will be very helpful.

## Finding a change to make

Want to contribute? Find an issue that fits your skillset.

## Filing Pull Requests

Here are some things to keep in mind as you file a pull request to fix a bug, add a new feature, etc.:

* Travis CI (for Linux and OS X) and AppVeyor (for Windows) are used to make sure that the project builds as expected on the supported platforms, using the current stable and beta versions of Rust. Make sure the testsuite passes locally by running `cargo test`.
* Unless it's impractical, please write tests for your changes. This will help spot regressions much easier.
* If your PR changes the behavior of an existing feature, or adds a new feature, please add/edit the `rustdoc` inline documentation.
* Please ensure that your changes follow the [rustfmt](https://github.com/rust-lang-nursery/rustfmt) coding standard, and do not produce any warnings when running the [clippy](https://github.com/Manishearth/rust-clippy) linter.
* If you are contributing a nontrivial change, please add an entry to `NEWS.md`. The format is similar to the one described at [Keep a Changelog](http://keepachangelog.com/).
* Please **do not** bump the version number in your pull requests, the maintainers will do that. Feel free to indicate whether the changes require a major, minor, or patch version bump, as prescribed by the [semantic versioning specification](http://semver.org/).
* Please make sure your commits are rebased onto the latest commit in the main branch, and that you limit/squash the number of commits created to a "feature"-level. For instance:

  bad:

  ```
  commit 1: add foo
  commit 2: run rustfmt
  commit 3: add test
  commit 4: add docs
  commit 5: add bar
  commit 6: add test + docs
  ```

  good:

  ```
  commit 1: add foo
  commit 2: add bar
  ```

If you are continuing the work of another person's PR and need to rebase/squash, please retain the attribution of the original author(s) and continue the work in subsequent commits.
41.162162
100
0.764609
eng_Latn
0.999038
d08ce6d68ccc8795b9bc967b434bb1bcfe7c8a0f
5,214
md
Markdown
articles/azure-portal/manage-filter-resource-views.md
Norrch2/azure-docs
cbdf4caba06d9a63592245b20996d3e82a424a51
[ "CC-BY-4.0", "MIT" ]
2
2018-10-08T20:29:07.000Z
2021-12-14T12:46:56.000Z
articles/azure-portal/manage-filter-resource-views.md
Norrch2/azure-docs
cbdf4caba06d9a63592245b20996d3e82a424a51
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-portal/manage-filter-resource-views.md
Norrch2/azure-docs
cbdf4caba06d9a63592245b20996d3e82a424a51
[ "CC-BY-4.0", "MIT" ]
1
2021-03-03T06:03:16.000Z
2021-03-03T06:03:16.000Z
--- title: View and filter Azure resource information description: Filter information and use different views to better understand your Azure resources. author: mgblythe ms.service: azure-portal ms.topic: how-to ms.author: mblythe ms.date: 09/11/2020 --- # View and filter Azure resource information The Azure portal enables you to browse detailed information about resources across your Azure subscriptions. This article shows you how to filter information and use different views to better understand your resources. The article focuses on the **All resources** screen shown in the following screenshot. Screens for individual resource types, such as virtual machines, have different options, such as starting and stopping a VM. :::image type="content" source="media/manage-filter-resource-views/all-resources.png" alt-text="Azure portal view of all resources"::: ## Filter resources Start exploring **All resources** by using filters to focus on a subset of your resources. The following screenshot shows filtering on resource groups, selecting two of the six resource groups in a subscription. :::image type="content" source="media/manage-filter-resource-views/filter-resource-group.png" alt-text="Filter view based on resource groups"::: You can combine filters, including those based on text searches, as shown in the following screenshot. In this case the results are scoped to resources that contain "SimpleWinVM" in one of the two resource groups already selected. :::image type="content" source="media/manage-filter-resource-views/filter-simplewinvm.png" alt-text="Filter view based on text entry"::: To change which columns are included in a view, select **Manage view** then **Edit columns**. :::image type="content" source="media/manage-filter-resource-views/edit-columns.png" alt-text="Edit columns shown in view"::: ## Save, use, and delete views You can save views that include the filters and columns you've selected. To save and use a view: 1. Select **Manage view** then **Save view**. 1. Enter a name for the view then select **OK**. The saved view now appears in the **Manage view** menu. :::image type="content" source="media/manage-filter-resource-views/simple-view.png" alt-text="Saved view"::: 1. To use a view, switch between **Default** and one of your own views to see how that affects the list of resources displayed. To delete a view: 1. Select **Manage view** then **Browse all views**. 1. In the **Saved views for "All resources"** pane, select the view then select the **Delete** icon ![Delete view icon](media/manage-filter-resource-views/icon-delete.png). ## Summarize resources with visuals The views we've looked at so far have been _list views_, but there are also _summary views_ that include visuals. You can save and use these views just like you can list views. Filters persist between the two types of views. There are standard views, like the **Location** view shown below, as well as views that are relevant to specific services, such as the **Status** view for Azure Storage. :::image type="content" source="media/manage-filter-resource-views/summary-map.png" alt-text="Summary of resources in a map view"::: To save and use a summary view: 1. From the view menu, select **Summary view**. :::image type="content" source="media/manage-filter-resource-views/menu-summary-view.png" alt-text="Summary view menu"::: 1. The summary view enables you to summarize by different attributes, including **Location** and **Type**. Select a **Summarize by** option and an appropriate visual. 
The following screenshot shows the **Type summary** with a **Bar chart** visual. :::image type="content" source="media/manage-filter-resource-views/type-summary-bar-chart.png" alt-text="Type summary showing a bar chart"::: 1. Select **Manage view** then **Save** to save this view like you did with the list view. 1. In the summary view, under **Type summary**, select a bar in the chart. Selecting the bar provides a list filtered down to one type of resource. :::image type="content" source="media/manage-filter-resource-views/all-resources-filtered-type.png" alt-text="All resources filtered by type"::: ## Run queries in Azure Resource Graph Azure Resource Graph provides efficient and performant resource exploration with the ability to query at scale across a set of subscriptions. The **All resources** screen in the Azure portal includes a link to open a Resource Graph query that is scoped to the current filtered view. To run a Resource Graph query: 1. Select **Open query**. :::image type="content" source="media/manage-filter-resource-views/open-query.png" alt-text="Open Azure Resource Graph query"::: 1. In **Azure Resource Graph Explorer**, select **Run query** to see the results. :::image type="content" source="media/manage-filter-resource-views/run-query.png" alt-text="Run Azure Resource Graph query"::: For more information, see [Run your first Resource Graph query using Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md). ## Next steps [Azure portal overview](azure-portal-overview.md) [Create and share dashboards in the Azure portal](azure-portal-dashboards.md)
54.884211
394
0.760453
eng_Latn
0.974204
d08d083fa806f846eb7b772730d291ad7b07881f
1,497
md
Markdown
README.md
AleksanderKlymchuk/BowlingScoreboard
837092927ac657c13182e27faddb1741e765c48c
[ "MIT" ]
null
null
null
README.md
AleksanderKlymchuk/BowlingScoreboard
837092927ac657c13182e27faddb1741e765c48c
[ "MIT" ]
null
null
null
README.md
AleksanderKlymchuk/BowlingScoreboard
837092927ac657c13182e27faddb1741e765c48c
[ "MIT" ]
null
null
null
# BowlingScoreboard

## Description
This is a simple engine for calculating and keeping track of bowling scores. The engine is designed in accordance with the CQRS and event sourcing patterns. Event sourcing provides the ability to keep track of every event in the system, which fits the requirement. The engine consists of a bowling game, a score broker, frames, and rolls. The bowling game keeps track of the frame score, the frame bonus, rolled balls, and applying frames to the broker. Frames and rolls are used as event types to store information about rolled balls, strikes, and spares, as well as the total score. The bowling game itself does not expose any members to external components; that is done through the score broker. The score broker keeps all closed frames applied by the bowling game and exposes commands and queries to other components. Typically, if dependency injection is used by the application, the score broker and bowling game can be registered as a single instance, but only for one game at a time. For multiple concurrent games the engine should be extended so that the game and its events (frames) are persisted and a lifetime scope is used for registration. (A sketch of the basic scoring rules appears at the end of this README.)

## Test
There are three tests in BowlingSoreboardTest, which cover basic functionality as well as a gutter game and a perfect game.

## Demo
The BowlingScore.App is a console app which rolls the balls and displays the result at the end of the game.

## System Requirement
1. .NET Core 3 SDK for building and running.
2. Visual Studio 2019 (v16.3 or later) for development.
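As a companion to the description above, here is a minimal, illustrative Java sketch of the traditional ten-pin scoring rules (strike and spare bonuses) that such an engine has to implement. It is not the repository's C#/.NET implementation, it ignores the CQRS/event-sourcing structure, and all names are invented for the example.

```java
import java.util.List;

public class BowlingScore {
    // Computes the total score for a complete game from the pins knocked down on each roll.
    public static int score(List<Integer> rolls) {
        int total = 0;
        int i = 0; // index of the first roll of the current frame
        for (int frame = 0; frame < 10; frame++) {
            if (rolls.get(i) == 10) {                           // strike: 10 + next two rolls
                total += 10 + rolls.get(i + 1) + rolls.get(i + 2);
                i += 1;
            } else if (rolls.get(i) + rolls.get(i + 1) == 10) { // spare: 10 + next roll
                total += 10 + rolls.get(i + 2);
                i += 2;
            } else {                                            // open frame: sum of both rolls
                total += rolls.get(i) + rolls.get(i + 1);
                i += 2;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // A perfect game (12 strikes) scores 300.
        System.out.println(score(List.of(10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10)));
    }
}
```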
99.8
824
0.798931
eng_Latn
0.999861
d08d30fbf65f9493dba64af0fbe262cb5a854cc0
1,340
md
Markdown
docs/middleware.md
fayazmiraz/foxql
7471ced7778a62e560fa536e4128c15d2f8f8f15
[ "Apache-2.0" ]
166
2021-02-09T23:24:13.000Z
2022-03-05T21:44:58.000Z
docs/middleware.md
fayazmiraz/foxql
7471ced7778a62e560fa536e4128c15d2f8f8f15
[ "Apache-2.0" ]
1
2021-05-15T16:52:27.000Z
2021-05-15T16:52:27.000Z
docs/middleware.md
fayazmiraz/foxql
7471ced7778a62e560fa536e4128c15d2f8f8f15
[ "Apache-2.0" ]
9
2021-02-21T11:28:28.000Z
2021-06-28T16:42:31.000Z
Create a new middleware instance

```javascript
import { middleware } from "foxql";

const eventMiddleware = new middleware({
    timeout: 1000, // milliseconds
    warningLimit: 3
});
```

Push a new sender

```javascript
eventMiddleware.up(sender);
```

Control sender status in an event listener

```javascript
const status = eventMiddleware.status(sender);

if (status.warning) {
    /** Spam detected **/
    return false;
}
```

Example

```javascript
import { middleware } from "foxql";

const eventMiddleware = new middleware({
    timeout: 1000, // milliseconds
    warningLimit: 3
});

const eventListenerName = 'helloWorld';

async function listener(data) {
    const sender = data._by; // emitter peer id
    const simulateStatus = data._simulated;

    if (simulateStatus) {
        // checking data for the WebRTC connection
        return true;
    }

    eventMiddleware.up(sender);
    const access = eventMiddleware.status(sender);

    if (access.warning) {
        if (this.peer.connections[sender] !== undefined) {
            this.dropPeer(sender);
        }
        return false;
    }

    this.peer.send(sender, { // reply to the emitting peer (the original snippet used an undefined `by` variable here)
        listener: data.listener, // the emitter is waiting; answer on this listener name
        data: {
            message: "Hello!",
        }
    });
}

export default {
    name: eventListenerName,
    listener: listener
}
```
19.142857
79
0.64403
eng_Latn
0.588412
d08df5d947132aa639ae9377de274d416bc9a824
8,767
md
Markdown
articles/jenkins/azure-container-instances-as-jenkins-build-agent.md
Duske/azure-dev-docs.de-de
176d1928063e45bb8d4641999f473d0599f09419
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/jenkins/azure-container-instances-as-jenkins-build-agent.md
Duske/azure-dev-docs.de-de
176d1928063e45bb8d4641999f473d0599f09419
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/jenkins/azure-container-instances-as-jenkins-build-agent.md
Duske/azure-dev-docs.de-de
176d1928063e45bb8d4641999f473d0599f09419
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Tutorial: Use Azure Container Instances as a Jenkins build agent'
description: Learn how to configure a Jenkins server to run build jobs in Azure Container Instances.
keywords: jenkins, azure, devops, container instances, build agent
ms.topic: article
ms.date: 01/08/2021
ms.custom: devx-track-jenkins,devx-track-azurecli
ms.openlocfilehash: cc0e38dbad8056f8c511f2c76713891d842dddb8
ms.sourcegitcommit: 737d95fe31e9db55c2d42a93f194a3f3e4bd3c7d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/10/2021
ms.locfileid: "102622308"
---
# <a name="tutorial-use-azure-container-instances-as-a-jenkins-build-agent"></a>Tutorial: Use Azure Container Instances as a Jenkins build agent

[!INCLUDE [jenkins-integration-with-azure.md](includes/jenkins-integration-with-azure.md)]

Azure Container Instances (ACI) provides an on-demand, burstable, and isolated environment for running containerized workloads. Because of these attributes, ACI is a great platform for running Jenkins build jobs at scale. This article shows you how to deploy an ACI instance and add it as a permanent build agent for a Jenkins controller.

For more information on Azure Container Instances, see [Azure Container Instances](/azure/container-instances/container-instances-overview).

## <a name="prerequisites"></a>Prerequisites

- **Azure subscription**: If you don't have an Azure subscription, [create a free Azure account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
- **Jenkins server**: If you don't have a Jenkins server installed, [create a Jenkins server on Azure](./configure-on-linux-vm.md).

## <a name="prepare-the-jenkins-controller"></a>Prepare the Jenkins controller

1. Go to your Jenkins portal.
1. From the menu, select **Manage Jenkins**.
1. Under **System Configuration**, select **Configure System**.
1. Verify that the **Jenkins URL** is set to the HTTP address of your Jenkins installation: `http://<your_host>.<your_domain>:8080/`.
1. From the menu, select **Manage Jenkins**.
1. Under **Security**, select **Configure Global Security**.
1. Under **Agents**, specify the **Fixed** port option and enter the port number appropriate for your environment. Example configuration: ![Configure the TCP port](./media/azure-container-instances-as-jenkins-build-agent/agent-port.png)
1. Select **Save**.

## <a name="create-jenkins-work-agent"></a>Create a Jenkins work agent

1. Go to your Jenkins portal.
1. From the menu, select **Manage Jenkins**.
1. Under **System Configuration**, select **Manage Nodes and Clouds**.
1. From the menu, select **New Node**.
1. Enter a value for **Node Name**.
1. Select **Permanent Agent**.
1. Select **OK**.
1. Enter a value for **Remote root directory**. For example: `/home/jenkins/work`
1. Add a <abbr title="Labels are used to group multiple agents into one logical group. An example of a label would be `linux` to group your Linux agents.">**Label**</abbr> with the value `linux`.
1. Set **Launch method** to **Launch agent by connecting to the master**.
1. Verify that all required fields are filled in: ![Example configuration for the Jenkins agent](./media/azure-container-instances-as-jenkins-build-agent/agent-config.png)
1. Select **Save**.
1. The agent status page should display the Jenkins secret (`JENKINS_SECRET`) and the agent name (`AGENT_NAME`). The following screenshot shows where to find these values. You need both values when you create the Azure Container Instances instance. ![The secret for the build agent is shown after the agent is created successfully.](./media/azure-container-instances-as-jenkins-build-agent/jenkins-secret.png)

## <a name="create-azure-container-instance-with-cli"></a>Create the Azure Container Instances instance with the CLI

1. Use [az group create](/cli/azure/group?#az_group_create) to create an Azure resource group.

    ```azurecli
    az group create --name my-resourcegroup --location westus
    ```

1. Use [az container create](/cli/azure/container#az_container_create) to create an Azure Container Instances instance. Replace the placeholders with the values you received when you created the work agent.

    ```azurecli
    az container create \
      --name my-dock \
      --resource-group my-resourcegroup \
      --ip-address Public --image jenkins/inbound-agent:latest \
      --os-type linux \
      --ports 80 \
      --command-line "jenkins-agent -url http://jenkinsserver:port <JENKINS_SECRET> <AGENT_NAME>"
    ```

    Replace `http://jenkinsserver:port`, `<JENKINS_SECRET>`, and `<AGENT_NAME>` with the information for your Jenkins controller and agents. After the container starts, it automatically connects to the Jenkins controller server.

1. Return to the Jenkins dashboard and check the agent status. ![Successfully started agent](./media/azure-container-instances-as-jenkins-build-agent/agent-start.png)

> [!NOTE]
> Jenkins agents connect to the controller over port `5000`. Make sure that inbound traffic on this port is allowed to reach the Jenkins controller.

## <a name="create-a-build-job"></a>Create a build job

Now a Jenkins build job is created to demonstrate Jenkins builds on an Azure container instance.

1. Select **New Item**, give the build project a name such as **aci-demo**, select **Freestyle project**, and then select **OK**. ![The field for the build job name and the list of project types](./media/azure-container-instances-as-jenkins-build-agent/jenkins-new-job.png)
1. Under **General**, verify that **Restrict where this project can be run** is selected. Enter **linux** as the label expression. This configuration ensures that this build job runs in the ACI cloud. ![The General tab with configuration details](./media/azure-container-instances-as-jenkins-build-agent/jenkins-job-01.png)
1. Under **Build**, select **Add build step**, and then select **Execute Shell**. Enter the command `echo "aci-demo"`. ![The Build tab with the build step selection](./media/azure-container-instances-as-jenkins-build-agent/jenkins-job-02.png)
1. Select **Save**.

## <a name="run-the-build-job"></a>Run the build job

To test the build job and become familiar with Azure Container Instances, start a build manually.

1. Select **Build Now** to start a build job. After the job starts, a status similar to the following image is displayed: ![Build history information with job status](./media/azure-container-instances-as-jenkins-build-agent/jenkins-job-status.png)
1. Under **Build History**, select **#1**. ![Console output: view the console build output under Build History](./media/azure-container-instances-as-jenkins-build-agent/build-history.png)
1. Select **Console Output** to view the build output. ![Console output: view the build output from the console](./media/azure-container-instances-as-jenkins-build-agent/build-console-output.png)

## <a name="next-steps"></a>Next steps

> [!div class="nextstepaction"]
> [CI/CD to Azure App Service](/azure/jenkins/tutorial-jenkins-deploy-web-app-azure-app-service)
60.047945
439
0.774381
deu_Latn
0.972454
d08eaf66a7d2b2e2072ef95a73ec914e4b1e070e
2,846
md
Markdown
articles/supply-chain/pim/tasks/set-up-attribute-based-pricing-configurable-products.md
MicrosoftDocs/Dynamics-365-Operations.is-is
ce362ebbd8aabebe5e960567bddc97e5d1f37b56
[ "CC-BY-4.0", "MIT" ]
2
2020-05-18T17:14:14.000Z
2021-04-20T21:13:46.000Z
articles/supply-chain/pim/tasks/set-up-attribute-based-pricing-configurable-products.md
MicrosoftDocs/Dynamics-365-Operations.is-is
ce362ebbd8aabebe5e960567bddc97e5d1f37b56
[ "CC-BY-4.0", "MIT" ]
6
2017-12-12T11:46:48.000Z
2019-04-30T11:45:51.000Z
articles/supply-chain/pim/tasks/set-up-attribute-based-pricing-configurable-products.md
MicrosoftDocs/Dynamics-365-Operations.is-is
ce362ebbd8aabebe5e960567bddc97e5d1f37b56
[ "CC-BY-4.0", "MIT" ]
3
2019-10-12T18:18:43.000Z
2022-02-09T23:55:11.000Z
---
title: Set up attribute-based pricing for configurable products
description: This topic explains how to set up attribute-based pricing.
author: t-benebo
ms.date: 08/20/2019
ms.topic: business-process
ms.prod: ''
ms.technology: ''
ms.search.form: DefaultDashboard, EcoResProductVariantMaintainWorkspace, PCProductConfigurationModelListPage, PCPriceModelList, PCPriceModel, PCConstraintEditor
audience: Application User
ms.reviewer: kamaybac
ms.search.region: Global
ms.author: benebotg
ms.search.validFrom: 2016-06-30
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: c4acd7b423396124dd1059602f5aa6460ec5e259
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: is-IS
ms.lasthandoff: 09/29/2021
ms.locfileid: "7578153"
---
# <a name="set-up-attribute-based-pricing-for-configurable-products"></a>Set up attribute-based pricing for configurable products

[!include [banner](../../includes/banner.md)]

This topic explains how to set up attribute-based pricing. As a prerequisite, you must have a product configuration model that has one or more components and attributes. This procedure uses the High end speaker model in the demo data company USMF. Typically, a production manager uses this procedure.

## <a name="create-a-new-price-model"></a>Create a new price model

1. Go to **Product information management \> Products \> Product configuration models**.
1. In the list, select the row for the **High end speaker**, but don't select the link in the name.
1. On the Action Pane, select **Model**.
1. Select **Price models**.
1. Select **New**.
1. In the **Price model** field, type a value. Use a name that makes the model easy to identify.
1. In the **Description** field, type a value.
1. Select **Save**.

## <a name="add-price-elements"></a>Add price elements

1. Select **Edit**. Each component in a configuration model can have a base price element and a number of price expression rules. Prices can also be added in different currencies.
2. In the **Base price expression** field, type a value. For example, type 100. A base price expression can be a numeric value, or it can consist of a calculation that includes one or more attributes.
3. Select **Add**.
4. In the **Name** field, enter `Rosewood`. The name of a price expression makes it easier to identify what the price element represents. In this example, it is recommended to create a price element for the Rosewood speaker cabinet finish option.
5. Select **Edit condition**. A price condition helps ensure that the price expression element is included in the sales price only if a specific combination of attributes is present.
6. In the **ConstraintBody** field, enter `CabinetFinish=="Rosewood"`.
7. Select **OK**.
8. In the **Expression** field, type a value. For example, enter `50`.
9. Close the page.

[!INCLUDE[footer-include](../../../includes/footer-banner.md)]
51.745455
289
0.777231
isl_Latn
0.999669
d08ef3e8d67bbeb8a51fdd0d53d188e9dbf6c1fe
946
md
Markdown
android/DL4JIrisClassifierDemo/README.md
abinj/dl4j-examples
dca8d83c3114ed815b4b7a8df7ac2bd8066f3a8e
[ "Apache-2.0" ]
2
2019-02-22T01:56:41.000Z
2019-08-30T04:28:52.000Z
android/DL4JIrisClassifierDemo/README.md
abinj/dl4j-examples
dca8d83c3114ed815b4b7a8df7ac2bd8066f3a8e
[ "Apache-2.0" ]
55
2019-06-26T11:36:35.000Z
2021-07-12T10:31:03.000Z
android/DL4JIrisClassifierDemo/README.md
abinj/dl4j-examples
dca8d83c3114ed815b4b7a8df7ac2bd8066f3a8e
[ "Apache-2.0" ]
1
2018-10-29T08:40:17.000Z
2018-10-29T08:40:17.000Z
# DL4JIrisClassifierDemo

Using Deeplearning4J in Android Applications

This tutorial demonstrates how the Java-based deep learning library Deeplearning4J can be used in Android applications to build, train, and use a neural network. This application trains a simple neural network on the mobile device using an iris data set for iris flower type classification. The application has a simple UI to take measurements of petal length, petal width, sepal length, and sepal width from the user, and it returns the probability that the measured iris belongs to one of three types (Iris setosa, Iris versicolor, and Iris virginica). A data set of 150 measurements is used to train the model, which takes anywhere from 5-30 seconds, depending on the device.

A tutorial for the application can be found [here](https://deeplearning4j.org/android-DL4JIrisClassifierDemo). For more information on Deeplearning4J, please visit http://deeplearning4j.org/
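For readers unfamiliar with Deeplearning4J, the following is a minimal, hypothetical sketch of the kind of network configuration such a demo might use: 4 inputs (the iris measurements), one hidden layer, and 3 output classes. The layer sizes, updater, seed, and training loop are assumptions made for illustration; they are not taken from the repository.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class IrisNetworkSketch {
    // Builds a small feed-forward classifier: 4 iris measurements in, 3 class probabilities out.
    public static MultiLayerNetwork build() {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .updater(new Adam(0.01))
                .list()
                .layer(0, new DenseLayer.Builder().nIn(4).nOut(10)
                        .activation(Activation.RELU).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(10).nOut(3)
                        .activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
        return model;
    }

    // Trains on an in-memory DataSet (features plus one-hot labels) for a number of epochs.
    public static void train(MultiLayerNetwork model, DataSet trainingData, int epochs) {
        for (int i = 0; i < epochs; i++) {
            model.fit(trainingData);
        }
    }
}
```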
72.769231
186
0.811839
eng_Latn
0.9909
d08f63ed501fedc080ce2c53153c3cc678fdaa1a
5,810
md
Markdown
articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
marcobrunodev/azure-docs.pt-br
0fff07f85663724745ac15ce05b4570890d108d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
marcobrunodev/azure-docs.pt-br
0fff07f85663724745ac15ce05b4570890d108d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
marcobrunodev/azure-docs.pt-br
0fff07f85663724745ac15ce05b4570890d108d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Quickstart: Set up OIDC-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant'
description: This quickstart walks you through the process of setting up OIDC-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant.
services: active-directory
author: kenwith
manager: celestedg
ms.service: active-directory
ms.subservice: app-mgmt
ms.topic: quickstart
ms.workload: identity
ms.date: 07/01/2020
ms.author: kenwith
ms.openlocfilehash: 9ea4ec748ca37f93e9711970b10746a009543d00
ms.sourcegitcommit: 8e7316bd4c4991de62ea485adca30065e5b86c67
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/17/2020
ms.locfileid: "94656591"
---
# <a name="quickstart-set-up-oidc-based-single-sign-on-sso-for-an-application-in-your-azure-active-directory-azure-ad-tenant"></a>Quickstart: Set up OIDC-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant

Get started with simplified user sign-ins by setting up single sign-on (SSO) for an application that you added to your Azure Active Directory (Azure AD) tenant. After you set up SSO, users can sign in to an application by using their Azure AD credentials. SSO is included in the free edition of Azure AD.

## <a name="prerequisites"></a>Prerequisites

To set up SSO for an application that you added to your Azure AD tenant, you need:

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- An application that supports SSO and that has already been preconfigured and added to the Azure AD gallery. Most applications can use Azure AD for SSO. Applications in the Azure AD gallery have been preconfigured. If your application isn't listed, or if it's a custom-developed application, you can still use it with Azure AD. Check out the tutorials and other documentation in the table of contents. This quickstart focuses on applications that have been preconfigured for SSO and added to the Azure AD gallery by application developers.
- Optional: Completion of [View your applications](view-applications-portal.md).
- Optional: Completion of [Add an application](add-application-portal.md).
- Optional: Completion of [Configure an application](add-application-portal-configure.md).
- Optional: Completion of [Assign users to an application](add-application-portal-assign-users.md).

>[!IMPORTANT]
>Use a non-production environment to test the steps in this quickstart.

## <a name="enable-single-sign-on-for-an-app"></a>Enable single sign-on for an app

When you add an application that uses the OIDC standard for SSO, you see a setup button. When you select the button, you go to the application's website and complete the sign-up process for the application. Adding an application is covered in the Add an application quickstart earlier in this series. If you're setting up an application that has already been added, see the first quickstart. It walks you through viewing the applications that are already in your tenant.

To set up single sign-on for an application:

1. In the previous quickstart in this series, you learned how to add an application that will use your Azure AD tenant for identity management. If the application developer used the OIDC standard to implement SSO, you see the sign-up button when you add the application. :::image type="content" source="media/add-application-portal-setup-oidc-sso/sign-up-oidc-sso.png" alt-text="Screenshot shows the single sign-on option and the sign-up button." lightbox="media/add-application-portal-setup-oidc-sso/sign-up-oidc-sso.png":::
2. Select **Sign up**, and you're taken to the application developer's sign-in page. Sign in using your Azure Active Directory sign-in credentials.

   > [!IMPORTANT]
   > If you already have a subscription to the application, your user details and tenant/directory information are validated. If the application can't verify the user, it redirects you to sign up for the application service or to an error page.

3. After successful authentication, a dialog appears asking for admin consent. Select **Consent on behalf of your organization**, and then select **Accept**. :::image type="content" source="media/add-application-portal-setup-oidc-sso/consent.png" alt-text="Screenshot shows the consent screen for an application." lightbox="media/add-application-portal-setup-oidc-sso/consent.png":::
4. The application is added to your tenant, and the application home page appears.

> [!TIP]
> To automate application management by using the Graph API, see [Automate application management with the Microsoft Graph API](/graph/application-saml-sso-configure-api).

## <a name="clean-up-resources"></a>Clean up resources

When you're done with this series of quickstarts, consider deleting the application to clean up your test tenant. Deleting the application is covered in the last quickstart in this series; see [Delete an application](delete-application-portal.md).

## <a name="next-steps"></a>Next steps

Advance to the next article to learn how to delete an application.

> [!div class="nextstepaction"]
> [Delete an application](delete-application-portal.md)
78.513514
557
0.795181
por_Latn
0.999196
d0901ac973566024da6915d0794f5af29bfd86a6
271
md
Markdown
README.md
detergnet/smallstuff
4f1cce05c495977d160f947a93df49d8744a8447
[ "MIT" ]
null
null
null
README.md
detergnet/smallstuff
4f1cce05c495977d160f947a93df49d8744a8447
[ "MIT" ]
null
null
null
README.md
detergnet/smallstuff
4f1cce05c495977d160f947a93df49d8744a8447
[ "MIT" ]
null
null
null
Small stuff
===========

A collection of small utility "libraries" for C. Each module (one source file and one header file) is self-contained and requires only an ISO C compiler. Documentation for each module can be found in its header file.

Licensed under MIT/X11.
24.636364
78
0.749077
eng_Latn
0.99884
d09046ea6bfdabf2a848b744a31ba1fa43fc7f89
1,231
md
Markdown
src/pages/note/2019/08/sweep-leaves-not-lives/index.md
brianwisti/rgb-astro
19bdeecaeace794a455ff9520896d0460c3117d2
[ "CC-BY-4.0" ]
1
2021-12-26T03:37:18.000Z
2021-12-26T03:37:18.000Z
src/pages/note/2019/08/sweep-leaves-not-lives/index.md
brianwisti/rgb-astro
19bdeecaeace794a455ff9520896d0460c3117d2
[ "CC-BY-4.0" ]
null
null
null
src/pages/note/2019/08/sweep-leaves-not-lives/index.md
brianwisti/rgb-astro
19bdeecaeace794a455ff9520896d0460c3117d2
[ "CC-BY-4.0" ]
null
null
null
--- aliases: - /note/2019/222/sweep-leaves-not-lives/ caption: Somebody scraped the "lives" part off "sweep leaves not lives" cover_image: cover.jpg date: 2019-08-10 16:38:46 layout: layout:PublishedArticle slug: sweep-leaves-not-lives tags: - seattle - graffiti - art - homelessness title: Sweep Leaves Not Lives uuid: faef8031-095f-4918-9cca-0e041f06c670 --- Seattle's homeless sweeps are contentious to say the least. From [The Seattle Times][], on July 9th: [The Seattle Times]: https://www.seattletimes.com/seattle-news/homeless/on-way-to-long-term-changes-seattle-mayor-jenny-durkan-quietly-clears-homeless-camps/ > Seattle removed 75% more homeless encampments in the first four months of this year than during the same period in 2018, even with this February’s record snowstorm slowing clean-ups. [John Curley][] pointed out the NIMBY — "Not In My Backyard" — aspect: [John Curley]: https://mynorthwest.com/1445501/seattle-homeless-sweeps-2019/ > What they’re doing is they’re trying to push them out of view. So they’re sending them back behind railroad tracks, back behind buildings, back away from the general view. Mostly I'm curious who'd be so offended by this wall art that they'd remove part of its message.
38.46875
184
0.77173
eng_Latn
0.981078
d0912c0d0866c32c68cbe55ec94ae9687c902a7d
945
md
Markdown
includes/updated-for-az.md
malachma/azure-docs.de-de
b36d47e57caddbaff9d5e65fd011a56d18ec7995
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/updated-for-az.md
malachma/azure-docs.de-de
b36d47e57caddbaff9d5e65fd011a56d18ec7995
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/updated-for-az.md
malachma/azure-docs.de-de
b36d47e57caddbaff9d5e65fd011a56d18ec7995
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
author: sptramer
ms.author: sttramer
manager: carmonm
ms.date: 04/17/2019
ms.topic: include
ms.openlocfilehash: bba762fca7154067e528ebbbb0ea94c8ba7965f3
ms.sourcegitcommit: 3e7646d60e0f3d68e4eff246b3c17711fb41eeda
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/11/2019
ms.locfileid: "67171094"
---
> [!NOTE]
> This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020.
> To learn more about the new Az module and AzureRM compatibility, see [Introducing the new Azure PowerShell Az module](/powershell/azure/new-azureps-module-az). For instructions on installing the Az module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
52.5
397
0.819048
deu_Latn
0.898157
d09156efb2a188002957a69ca4de844fd16a9547
4,479
md
Markdown
doc/getting-started.md
paulasmuth/fnordmetric
5fca358e56e6334a22aa09264f2ccb7d41bd156f
[ "Apache-2.0" ]
721
2015-01-01T15:14:48.000Z
2017-04-28T20:27:56.000Z
doc/getting-started.md
idfumg/clip
5fca358e56e6334a22aa09264f2ccb7d41bd156f
[ "Apache-2.0" ]
98
2019-08-02T01:01:59.000Z
2019-08-02T11:54:38.000Z
doc/getting-started.md
idfumg/clip
5fca358e56e6334a22aa09264f2ccb7d41bd156f
[ "Apache-2.0" ]
93
2015-01-02T11:21:33.000Z
2017-04-26T08:18:33.000Z
Getting Started =============== Welcome to clip! This page will guide you through creating a simple first plot and then give you pointers to more in-depth documentation. If you don't have clip installed on your machine yet, please take a look at the [Installation](/documentation/installation) page first. In essence, clip is an automated drawing program; it reads an input text file containing a list of drawing instructions and then outputs a result image. ### Step 1: First lines Being a highly visual tool, clip is best explained by example, so here is a minimal example file to get you started. Paste the contents to a file called `example_chart.clp`: class: plot; lines { data-x: list(100 200 300 400 500 600 700 800 900); data-y: list(1.2 1.8 1.3 1.6 1.5 1.3 1.8 1.9 2.0); limit-y: 0 3; limit-x: 0 1000; marker-shape: pentagon; marker-size: 8pt; } You can then run the script through clip using the following command: $ clip --in example_chart.clp --out example_chart.svg After running the example, open the output file `example_chart.svg`. It should look similar to the one below: <figure> <img class="small" alt="Example Chart" src="/figures/quickstart1.svg" /> </figure> Let's analyze the clip script we just ran. The first line of a clip script normally contains a `(class ...)` declaration. The `class` controls which module is used to interpret the subsequent expressions. In this example, we are using the `plot` module. Once the plotting module is loaded, the remainder of the script contains a list of plotting commands. In the next line, we call a command named `lines` with a number of keyword arguments. The exact meaning of each of the arguments doesn't matter for now; the specifics are documented on the [reference page of the command](/plot/lines) and you can probably guess what most of them do anyway. ### Step 2: Adding axes To make this into a proper plot, we have to add some axes. For that, we extend the script with a call to the `axes` command. Add this to the beginning of the file: axes { limit-y: 0 3; limit-x: 0 1000; label-format-x: scientific(); label-placement-x: linear-interval(100 100 900); } After re-running clip on the updated script, the output should now look much more like the kind of chart we know and love: <figure> <img class="small" alt="Example Chart" src="/figures/quickstart2.svg" /> </figure> When adding the `axes` command to the script, make sure to add it _before_ the existing, `lines` command. The order of statements in clip is significant. Commands generally draw over the output of earlier commands, so changing the order of commands will generally give a different result. ### Step 3: Adding the legend To close things out on this example, we're going to add an explanatory legend to our chart using the `legend` command. Simply add the snippet from below to the file: legend { position: bottom left; item { label: "Example Data"; marker-shape: pentagon; } } Also, let's get rid of the duplicate `limit-x` and `limit-y` arguments. 
This leaves us with this final script: class: plot; axes { limit-y: 0 3; limit-x: 0 1000; label-format-x: scientific(); label-placement-x: linear-interval(100 100 900); } lines { data-x: list(100 200 300 400 500 600 700 800 900); data-y: list(1.2 1.8 1.3 1.6 1.5 1.3 1.8 1.9 2.0); limit-y: 0 3; limit-x: 0 1000; marker-shape: pentagon; marker-size: 8pt; } legend { position: bottom left; item { label: "Example Data"; marker-shape: pentagon; } } Running the above file through clip again should now yield the following final result: <figure> <img class="small" alt="Example Chart" src="/figures/quickstart3.svg" /> </figure> Further reading --------------- In the interest of keeping this Getting Started page short and easy to digest, this is it for now. You have seen how to create a simple plot using clip -- the rest is just adding more elements to your file and fine-tuning the appearance of individual elements using the arguments described in the documentation of each individual command. For more information, please take a look at the [remaining documentation chapters](/plot), in particular at the [Examples](/examples) page and the documentation pages for the individual commands.
31.542254
105
0.704845
eng_Latn
0.9981
d091c32f47fcefcd2193ac4311096ac990382d10
2,413
md
Markdown
AlchemyInsights/workflow-is-not-starting.md
pebaum/OfficeDocs-AlchemyInsights-pr.id-ID
1a5dd543599dbffc8d5b07285aa9705fa562a74d
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/workflow-is-not-starting.md
pebaum/OfficeDocs-AlchemyInsights-pr.id-ID
1a5dd543599dbffc8d5b07285aa9705fa562a74d
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/workflow-is-not-starting.md
pebaum/OfficeDocs-AlchemyInsights-pr.id-ID
1a5dd543599dbffc8d5b07285aa9705fa562a74d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Workflow is not starting
ms.author: pebaum
author: pebaum
manager: pamgreen
ms.date: 04/21/2020
ms.audience: Admin
ms.topic: article
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "9000144"
- "1670"
ms.openlocfilehash: 941e6349c98278a1a8cdac77457ec1cc72cdef8b
ms.sourcegitcommit: 631cbb5f03e5371f0995e976536d24e9d13746c3
ms.translationtype: MT
ms.contentlocale: id-ID
ms.lasthandoff: 04/22/2020
ms.locfileid: "43766100"
---
# <a name="workflow-is-not-starting"></a>Workflow is not starting

- SharePoint 2010 and SharePoint 2013 workflows are not starting.
- If your workflow isn't starting, there may be a temporary service issue where users experience intermittent delays in workflow progress. Check the [Service Health dashboard](https:/admin.microsoft.com/AdminPortal/Home#/servicehealth) to see whether your organization is affected.
- If more than 24 hours have passed since you first saw this issue, please log a support ticket. In many cases, we are already working on a solution. Please give us at least 24 hours to complete the solution.
- SharePoint 2010 workflows are delayed at start.
- This happens if workflows are triggered in large batches (for example, when several items are added at once).
- Workflows are not designed to run in real time, so the delay is by-design behavior.
- If the workflow's eXtensible Object Markup Language (XOML) is complex, compilation can be slow. Check [this](https://support.microsoft.com//kb/3043697) article.
- You should simplify the workflow or redesign it using the Microsoft SharePoint 2013 workflow platform type.
- If your workflow history is growing large, you may want to clean up items or create a new history list. More information: [purge the workflow history](https://blogs.technet.microsoft.com/marj/2015/08/07/sharepoint-2010-workflows-best-practice-purge-workflow-history-list-items/)

## <a name="related-topics"></a>Related topics

Want to try Microsoft Flow in SharePoint Online?

- [Create a Flow](https://support.office.com/article/Create-a-flow-for-a-list-or-library-in-SharePoint-Online-or-OneDrive-for-Business-a9c3e03b-0654-46af-a254-20252e580d01)
- [SharePoint and Flow](https://flow.microsoft.com/blog/sharepoint-and-flow/)
47.313725
312
0.79196
ind_Latn
0.897667
d091e2f62cb17525e534c0f389dd140f9284fb92
15,865
md
Markdown
README.md
criteo/criteo-java-marketing-sdk
09e25162f7e61ba1a0bd25436399347ce8eef65d
[ "Apache-2.0" ]
7
2019-03-27T14:31:26.000Z
2020-10-10T08:13:01.000Z
README.md
criteo/criteo-java-marketing-sdk
09e25162f7e61ba1a0bd25436399347ce8eef65d
[ "Apache-2.0" ]
null
null
null
README.md
criteo/criteo-java-marketing-sdk
09e25162f7e61ba1a0bd25436399347ce8eef65d
[ "Apache-2.0" ]
5
2019-03-29T09:16:53.000Z
2022-02-21T11:27:44.000Z
# This project is deprecated We've built a new set of SDKs to help you use our [Criteo's API](https://developers.criteo.com/). You can find the new Java SDKs repository here : https://github.com/criteo/criteo-api-java-sdk # Criteo Marketing SDK for Java [![Build Status](https://travis-ci.com/criteo/criteo-java-marketing-sdk.svg?branch=master)](https://travis-ci.com/criteo/criteo-java-marketing-sdk) Marketing API v.1.0 - API version: v.1.0 *Automatically generated by the [OpenAPI Generator](https://openapi-generator.tech)* ## Requirements Building the API client library requires: 1. Java 1.8+ 2. Maven/Gradle ## Installation To install the API client library to your local Maven repository, simply execute: ```shell mvn clean install ``` or ```shell gradle install ``` To deploy it to a remote Maven repository instead, configure the settings of the repository and execute: ```shell mvn clean deploy ``` Refer to the [OSSRH Guide](http://central.sonatype.org/pages/ossrh-guide.html) for more information. ### Maven users Add this dependency to your project's POM: ```xml <dependency> <groupId>com.criteo</groupId> <artifactId>marketing.java-client</artifactId> <version>1.0.29</version> <scope>compile</scope> </dependency> ``` ### Gradle users Add this dependency to your project's build file: ```groovy compile "com.criteo:marketing.java-client:1.0.29" ``` ### Others At first generate the JAR by executing: ```shell mvn clean package ``` Then manually install the following JARs: * `target/marketing.java-client-1.0.29.jar` * `target/lib/*.jar` ## Example Please see [src/examples/java/com/criteo/marketing/](src/examples/java/com/criteo/marketing/) for full examples to get a valid token and make a call to the API. ## Documentation for API Endpoints All URIs are relative to *https://api.criteo.com/marketing* Class | Method | HTTP request | Description ------------ | ------------- | ------------- | ------------- *AdvertisersApi* | [**getCampaigns**](docs/AdvertisersApi.md#getCampaigns) | **GET** /v1/advertisers/{advertiserId}/campaigns | Gets all advertiser&#39;s campaigns *AdvertisersApi* | [**getCategories**](docs/AdvertisersApi.md#getCategories) | **GET** /v1/advertisers/{advertiserId}/categories | Gets all advertiser&#39;s categories *AdvertisersApi* | [**getCategory**](docs/AdvertisersApi.md#getCategory) | **GET** /v1/advertisers/{advertiserId}/categories/{categoryHashCode} | Gets a specific advertiser&#39;s category *AudiencesApi* | [**addRemoveUsersToAudience**](docs/AudiencesApi.md#addRemoveUsersToAudience) | **PATCH** /v1/audiences/userlist/{audienceId} | Add/Remove users to an Audience. *AudiencesApi* | [**createAudience**](docs/AudiencesApi.md#createAudience) | **POST** /v1/audiences/userlist | Create a new Audience. *AudiencesApi* | [**deleteAudience**](docs/AudiencesApi.md#deleteAudience) | **DELETE** /v1/audiences/{audienceId} | Delete an Audience. *AudiencesApi* | [**getAudiences**](docs/AudiencesApi.md#getAudiences) | **GET** /v1/audiences | Get the list of Audiences. *AudiencesApi* | [**removeUsersFromAudience**](docs/AudiencesApi.md#removeUsersFromAudience) | **DELETE** /v1/audiences/userlist/{audienceId}/users | Remove all users from an Audience. *AudiencesApi* | [**updateAudienceMetadata**](docs/AudiencesApi.md#updateAudienceMetadata) | **PUT** /v1/audiences/{audienceId} | Update an Audience metadata. 
*AuthenticationApi* | [**oAuth2TokenPost**](docs/AuthenticationApi.md#oAuth2TokenPost) | **POST** /oauth2/token | Authenticates provided credentials and returns an access token *BudgetsApi* | [**get**](docs/BudgetsApi.md#get) | **GET** /v1/budgets | Gets budgets *CampaignsApi* | [**getBids**](docs/CampaignsApi.md#getBids) | **GET** /v1/campaigns/bids | Gets a the bids for campaigns and their categories *CampaignsApi* | [**getCampaign**](docs/CampaignsApi.md#getCampaign) | **GET** /v1/campaigns/{campaignId} | Gets a specific campaign *CampaignsApi* | [**getCampaigns**](docs/CampaignsApi.md#getCampaigns) | **GET** /v1/campaigns | Gets campaigns *CampaignsApi* | [**getCategories**](docs/CampaignsApi.md#getCategories) | **GET** /v1/campaigns/{campaignId}/categories | Gets categories *CampaignsApi* | [**getCategory**](docs/CampaignsApi.md#getCategory) | **GET** /v1/campaigns/{campaignId}/categories/{categoryHashCode} | Gets a specific category *CampaignsApi* | [**updateBids**](docs/CampaignsApi.md#updateBids) | **PUT** /v1/campaigns/bids | Update bids for campaigns and their categories *CategoriesApi* | [**getCategories**](docs/CategoriesApi.md#getCategories) | **GET** /v1/categories | Gets categories *CategoriesApi* | [**updateCategories**](docs/CategoriesApi.md#updateCategories) | **PUT** /v1/categories | Enables/disables categories *PortfolioApi* | [**getPortfolio**](docs/PortfolioApi.md#getPortfolio) | **GET** /v1/portfolio | Gets portfolio *PublishersApi* | [**getStats**](docs/PublishersApi.md#getStats) | **POST** /v1/publishers/stats | *SellersApi* | [**createBudgets**](docs/SellersApi.md#createBudgets) | **POST** /v1/sellers/budgets | Creates a budget for a seller/list of sellers. *SellersApi* | [**get**](docs/SellersApi.md#get) | **GET** /v1/sellers | Gets sellers details. *SellersApi* | [**getCampaigns**](docs/SellersApi.md#getCampaigns) | **GET** /v1/sellers/campaigns | Gets campaigns *SellersApi* | [**getStats**](docs/SellersApi.md#getStats) | **POST** /v1/sellers/stats | Generates a statistics report *SellersApi* | [**updateBids**](docs/SellersApi.md#updateBids) | **PUT** /v1/sellers/bids | Set or update a bid for a seller/list of sellers. *SellersApi* | [**updateBudgets**](docs/SellersApi.md#updateBudgets) | **PUT** /v1/sellers/budgets | Updates a budget for a seller/list of sellers. *SellersV2Api* | [**createSellerBudgets**](docs/SellersV2Api.md#createSellerBudgets) | **POST** /v2/crp/budgets | Create a collection of budgets. *SellersV2Api* | [**createSellerCampaignsBySeller**](docs/SellersV2Api.md#createSellerCampaignsBySeller) | **POST** /v2/crp/sellers/{sellerId}/seller-campaigns | Create a SellerCampaign *SellersV2Api* | [**createSellers**](docs/SellersV2Api.md#createSellers) | **POST** /v2/crp/advertisers/{advertiserId}/sellers | Create new sellers for an advertiser *SellersV2Api* | [**getAdvertiser**](docs/SellersV2Api.md#getAdvertiser) | **GET** /v2/crp/advertisers/{advertiserId} | Get an advertiser. *SellersV2Api* | [**getAdvertiserCampaigns**](docs/SellersV2Api.md#getAdvertiserCampaigns) | **GET** /v2/crp/advertisers/{advertiserId}/campaigns | Get the collection of CRP campaigns associated with the advertiserId. *SellersV2Api* | [**getAdvertiserPreviewLimits**](docs/SellersV2Api.md#getAdvertiserPreviewLimits) | **GET** /v2/crp/advertisers/preview-limit | Get the collection of advertisers preview limits associated with the authorized user. 
*SellersV2Api* | [**getAdvertisers**](docs/SellersV2Api.md#getAdvertisers) | **GET** /v2/crp/advertisers | Get the collection of advertisers associated with the user. *SellersV2Api* | [**getBudgetsByAdvertiser**](docs/SellersV2Api.md#getBudgetsByAdvertiser) | **GET** /v2/crp/advertisers/{advertiserId}/budgets | Get CRP budgets for a specific advertiser *SellersV2Api* | [**getBudgetsBySeller**](docs/SellersV2Api.md#getBudgetsBySeller) | **GET** /v2/crp/sellers/{sellerId}/budgets | Get a collection of budgets for this seller. *SellersV2Api* | [**getBudgetsBySellerCampaignId**](docs/SellersV2Api.md#getBudgetsBySellerCampaignId) | **GET** /v2/crp/seller-campaigns/{sellerCampaignId}/budgets | Get a collection of budgets for this seller campaign. *SellersV2Api* | [**getSeller**](docs/SellersV2Api.md#getSeller) | **GET** /v2/crp/sellers/{sellerId} | Get details for a seller. *SellersV2Api* | [**getSellerAdDemo**](docs/SellersV2Api.md#getSellerAdDemo) | **GET** /v2/crp/advertisers/{advertiserId}/ad-preview | Get a preview of an HTML ad with products belonging to the provided seller *SellersV2Api* | [**getSellerBudget**](docs/SellersV2Api.md#getSellerBudget) | **GET** /v2/crp/budgets/{budgetId} | Get details for a budget. *SellersV2Api* | [**getSellerBudgets**](docs/SellersV2Api.md#getSellerBudgets) | **GET** /v2/crp/budgets | Get a collection of budgets. *SellersV2Api* | [**getSellerCampaign**](docs/SellersV2Api.md#getSellerCampaign) | **GET** /v2/crp/seller-campaigns/{sellerCampaignId} | Get details for a seller campaign. *SellersV2Api* | [**getSellerCampaigns**](docs/SellersV2Api.md#getSellerCampaigns) | **GET** /v2/crp/seller-campaigns | Get a collection of seller campaigns. *SellersV2Api* | [**getSellerCampaignsByAdvertiser**](docs/SellersV2Api.md#getSellerCampaignsByAdvertiser) | **GET** /v2/crp/advertisers/{advertiserId}/seller-campaigns | Get CRP seller-campaigns for a specific advertiser *SellersV2Api* | [**getSellerCampaignsBySeller**](docs/SellersV2Api.md#getSellerCampaignsBySeller) | **GET** /v2/crp/sellers/{sellerId}/seller-campaigns | Get a collection of seller campaigns for this seller. *SellersV2Api* | [**getSellers**](docs/SellersV2Api.md#getSellers) | **GET** /v2/crp/sellers | Get a collection of sellers. *SellersV2Api* | [**updateSellerBudget**](docs/SellersV2Api.md#updateSellerBudget) | **PATCH** /v2/crp/budgets/{budgetId} | Modify a single budget. *SellersV2Api* | [**updateSellerBudgets**](docs/SellersV2Api.md#updateSellerBudgets) | **PATCH** /v2/crp/budgets | Modify a collection of budgets. *SellersV2Api* | [**updateSellerCampaign**](docs/SellersV2Api.md#updateSellerCampaign) | **PATCH** /v2/crp/seller-campaigns/{sellerCampaignId} | Update an existing seller campaign. *SellersV2Api* | [**updateSellerCampaigns**](docs/SellersV2Api.md#updateSellerCampaigns) | **PATCH** /v2/crp/seller-campaigns | Update a collection of seller campaigns. *SellersV2StatsApi* | [**campaigns**](docs/SellersV2StatsApi.md#campaigns) | **GET** /v2/crp/stats/campaigns | Get stats by campaign. *SellersV2StatsApi* | [**sellerCampaigns**](docs/SellersV2StatsApi.md#sellerCampaigns) | **GET** /v2/crp/stats/seller-campaigns | Get stats by seller-campaign. *SellersV2StatsApi* | [**sellers**](docs/SellersV2StatsApi.md#sellers) | **GET** /v2/crp/stats/sellers | Get stats by seller. 
*StatisticsApi* | [**getCampaignReport**](docs/StatisticsApi.md#getCampaignReport) | **POST** /v1/statistics/report | Generates a statistics report *StatisticsApi* | [**getStats**](docs/StatisticsApi.md#getStats) | **POST** /v1/statistics | Generates a statistics report ## Documentation for Models - [AdvertiserCampaignMessage](docs/AdvertiserCampaignMessage.md) - [AdvertiserInfoMessage](docs/AdvertiserInfoMessage.md) - [AdvertiserQuotaMessage](docs/AdvertiserQuotaMessage.md) - [AudienceCreateRequest](docs/AudienceCreateRequest.md) - [AudienceCreateResponse](docs/AudienceCreateResponse.md) - [AudiencePatchRequest](docs/AudiencePatchRequest.md) - [AudiencePatchResponse](docs/AudiencePatchResponse.md) - [AudiencePutRequest](docs/AudiencePutRequest.md) - [AudienceResponse](docs/AudienceResponse.md) - [AudiencesGetResponse](docs/AudiencesGetResponse.md) - [BidMessage](docs/BidMessage.md) - [BudgetMessage](docs/BudgetMessage.md) - [CampaignBidChangeRequest](docs/CampaignBidChangeRequest.md) - [CampaignBidChangeResponse](docs/CampaignBidChangeResponse.md) - [CampaignBidMessage](docs/CampaignBidMessage.md) - [CampaignMessage](docs/CampaignMessage.md) - [CampaignReportQueryMessage](docs/CampaignReportQueryMessage.md) - [CatalogProduct](docs/CatalogProduct.md) - [CatalogProductV3](docs/CatalogProductV3.md) - [CategoryBidChangeRequest](docs/CategoryBidChangeRequest.md) - [CategoryBidMessage](docs/CategoryBidMessage.md) - [CategoryMessage](docs/CategoryMessage.md) - [CategoryUpdateError](docs/CategoryUpdateError.md) - [CategoryUpdateInput](docs/CategoryUpdateInput.md) - [CategoryUpdatesPerCatalog](docs/CategoryUpdatesPerCatalog.md) - [CategoryUpdatesPerCatalogError](docs/CategoryUpdatesPerCatalogError.md) - [CheckResult](docs/CheckResult.md) - [ClientRegistrationRequestMessage](docs/ClientRegistrationRequestMessage.md) - [ClientRegistrationResponseMessage](docs/ClientRegistrationResponseMessage.md) - [CreateSellerBudgetMapiMessage](docs/CreateSellerBudgetMapiMessage.md) - [CreateSellerCampaignMessageMapi](docs/CreateSellerCampaignMessageMapi.md) - [CustomAttributeV3](docs/CustomAttributeV3.md) - [ErrorSource](docs/ErrorSource.md) - [GoogleProduct](docs/GoogleProduct.md) - [GoogleProductV3](docs/GoogleProductV3.md) - [IThrottlingConfiguration](docs/IThrottlingConfiguration.md) - [InlineResponse200](docs/InlineResponse200.md) - [Installment](docs/Installment.md) - [InstallmentAmount](docs/InstallmentAmount.md) - [InstallmentV3](docs/InstallmentV3.md) - [LoyaltyPointsV3](docs/LoyaltyPointsV3.md) - [LoyatyPoints](docs/LoyatyPoints.md) - [MapiUserMessage](docs/MapiUserMessage.md) - [MarketplaceCampaignMessage](docs/MarketplaceCampaignMessage.md) - [MessageWithDetailsCampaignBidChangeResponse](docs/MessageWithDetailsCampaignBidChangeResponse.md) - [MessageWithDetailsCategoryUpdatesPerCatalogError](docs/MessageWithDetailsCategoryUpdatesPerCatalogError.md) - [PolicyRouteInfo](docs/PolicyRouteInfo.md) - [PortfolioMessage](docs/PortfolioMessage.md) - [Price](docs/Price.md) - [ProductImporterBatch](docs/ProductImporterBatch.md) - [ProductImporterMessage](docs/ProductImporterMessage.md) - [ProductShippingDimensionV3](docs/ProductShippingDimensionV3.md) - [ProductShippingV3](docs/ProductShippingV3.md) - [ProductShippingWeightV3](docs/ProductShippingWeightV3.md) - [ProductTaxV3](docs/ProductTaxV3.md) - [ProductUnitPricingBaseMeasureV3](docs/ProductUnitPricingBaseMeasureV3.md) - [PublisherFileStatsMessage](docs/PublisherFileStatsMessage.md) - [PublisherStatsMessage](docs/PublisherStatsMessage.md) - 
[PublisherStatsQueryMessage](docs/PublisherStatsQueryMessage.md) - [RoutePolicy](docs/RoutePolicy.md) - [SellerBase](docs/SellerBase.md) - [SellerBidInfoMessage](docs/SellerBidInfoMessage.md) - [SellerBidsMessage](docs/SellerBidsMessage.md) - [SellerBudgetCreateMessage](docs/SellerBudgetCreateMessage.md) - [SellerBudgetInfoMessage](docs/SellerBudgetInfoMessage.md) - [SellerBudgetMessage](docs/SellerBudgetMessage.md) - [SellerBudgetResponseMessage](docs/SellerBudgetResponseMessage.md) - [SellerBudgetUpdateMessage](docs/SellerBudgetUpdateMessage.md) - [SellerBudgetsCreateMessage](docs/SellerBudgetsCreateMessage.md) - [SellerBudgetsMessage](docs/SellerBudgetsMessage.md) - [SellerBudgetsUpdateMessage](docs/SellerBudgetsUpdateMessage.md) - [SellerCampaignMessage](docs/SellerCampaignMessage.md) - [SellerCampaignUpdate](docs/SellerCampaignUpdate.md) - [SellerInfoMessage](docs/SellerInfoMessage.md) - [SellerMessage](docs/SellerMessage.md) - [ServiceStatusCheckResult](docs/ServiceStatusCheckResult.md) - [Shipping](docs/Shipping.md) - [ShippingSize](docs/ShippingSize.md) - [StatsQueryMessage](docs/StatsQueryMessage.md) - [StatsQueryMessageEx](docs/StatsQueryMessageEx.md) - [Tax](docs/Tax.md) - [ThrottlePolicy](docs/ThrottlePolicy.md) - [ThrottlePolicyRates](docs/ThrottlePolicyRates.md) - [UnitMeasure](docs/UnitMeasure.md) - [UpdateSellerBudgetMessage](docs/UpdateSellerBudgetMessage.md) - [UpdateSellerBudgetMessageBase](docs/UpdateSellerBudgetMessageBase.md) ## Recommendation It's recommended to create an instance of `ApiClient` per thread in a multithreaded environment to avoid any potential issues. ## Disclaimer THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
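For orientation, the overall flow from the Example section (obtain a token, then call an endpoint) is sketched below. This is a rough sketch only: the package names, the `ApiClient` setup, and the constructor shapes are assumptions based on the usual OpenAPI Generator Java client layout, and the endpoint calls are left as comments because their exact signatures are documented in the `docs/` folder and demonstrated in `src/examples/java/com/criteo/marketing/`.

```java
import com.criteo.marketing.ApiClient;                 // package names assumed from the examples path
import com.criteo.marketing.api.AuthenticationApi;     // (src/examples/java/com/criteo/marketing/)
import com.criteo.marketing.api.CampaignsApi;

public class QuickStart {
    public static void main(String[] args) {
        // Point the generated client at the base URI documented above.
        ApiClient client = new ApiClient();
        client.setBasePath("https://api.criteo.com/marketing");

        // 1) Authenticate: oAuth2TokenPost (POST /oauth2/token) exchanges client
        //    credentials for an access token. Its parameters and response model
        //    are documented in docs/AuthenticationApi.md.
        AuthenticationApi auth = new AuthenticationApi(client);
        // Object token = auth.oAuth2TokenPost(clientId, clientSecret, grantType);

        // 2) Call a business endpoint with that token, e.g. getCampaigns
        //    (GET /v1/campaigns); see docs/CampaignsApi.md for the exact signature.
        CampaignsApi campaigns = new CampaignsApi(client);
        // System.out.println(campaigns.getCampaigns(...));
    }
}
```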
65.557851
460
0.770816
yue_Hant
0.550644
d092474a1b46f43f1fbfac60d1027c0b962c9372
2,872
md
Markdown
docs/csharp/language-reference/builtin-types/bool.md
michha/docs
08f75b6ed8a9e6634235db708a21da4be57dc58f
[ "CC-BY-4.0", "MIT" ]
4
2017-02-14T15:30:51.000Z
2020-01-10T17:53:41.000Z
docs/csharp/language-reference/builtin-types/bool.md
michha/docs
08f75b6ed8a9e6634235db708a21da4be57dc58f
[ "CC-BY-4.0", "MIT" ]
679
2017-04-13T12:50:46.000Z
2022-03-16T09:12:27.000Z
docs/csharp/language-reference/builtin-types/bool.md
michha/docs
08f75b6ed8a9e6634235db708a21da4be57dc58f
[ "CC-BY-4.0", "MIT" ]
7
2017-05-26T02:33:16.000Z
2022-01-18T01:33:28.000Z
--- description: Learn about the built-in boolean type in C# title: "bool type - C# reference" ms.date: 11/26/2019 f1_keywords: - bool - bool_CSharpKeyword - "true" - "false" - true_CSharpKeyword - false_CSharpKeyword helpviewer_keywords: - "bool data type [C#]" - "Boolean [C#]" ms.assetid: 551cfe35-2632-4343-af49-33ad12da08e2 --- # bool (C# reference) The `bool` type keyword is an alias for the .NET <xref:System.Boolean?displayProperty=nameWithType> structure type that represents a Boolean value, which can be either `true` or `false`. To perform logical operations with values of the `bool` type, use [Boolean logical](../operators/boolean-logical-operators.md) operators. The `bool` type is the result type of [comparison](../operators/comparison-operators.md) and [equality](../operators/equality-operators.md) operators. A `bool` expression can be a controlling conditional expression in the [if](../keywords/if-else.md), [do](../keywords/do.md), [while](../keywords/while.md), and [for](../keywords/for.md) statements and in the [conditional operator `?:`](../operators/conditional-operator.md). The default value of the `bool` type is `false`. ## Literals You can use the `true` and `false` literals to initialize a `bool` variable or to pass a `bool` value: [!code-csharp-interactive[bool literals](snippets/shared/BoolType.cs#Literals)] ## Three-valued Boolean logic Use the nullable `bool?` type, if you need to support the three-valued logic, for example, when you work with databases that support a three-valued Boolean type. For the `bool?` operands, the predefined `&` and `|` operators support the three-valued logic. For more information, see the [Nullable Boolean logical operators](../operators/boolean-logical-operators.md#nullable-boolean-logical-operators) section of the [Boolean logical operators](../operators/boolean-logical-operators.md) article. For more information about nullable value types, see [Nullable value types](nullable-value-types.md). ## Conversions C# provides only two conversions that involve the `bool` type. Those are an implicit conversion to the corresponding nullable `bool?` type and an explicit conversion from the `bool?` type. However, .NET provides additional methods that you can use to convert to or from the `bool` type. For more information, see the [Converting to and from Boolean values](/dotnet/api/system.boolean#converting-to-and-from-boolean-values) section of the <xref:System.Boolean?displayProperty=nameWithType> API reference page. ## C# language specification For more information, see [The bool type](~/_csharplang/spec/types.md#the-bool-type) section of the [C# language specification](~/_csharplang/spec/introduction.md). ## See also - [C# reference](../index.md) - [Value types](value-types.md) - [true and false operators](../operators/true-false-operators.md)
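As a rough, self-contained sketch of the behavior described above (literals, three-valued `bool?` logic, and the .NET conversion helpers), rather than the referenced snippet files:

```csharp
using System;

class BoolExamples
{
    static void Main()
    {
        // Literals: initialize a bool and branch on it.
        bool check = true;
        Console.WriteLine(check ? "Checked" : "Not checked");  // Checked

        // Three-valued logic with the nullable bool? type.
        bool? ready = null;
        Console.WriteLine(ready & true);   // prints an empty line (null & true is null)
        Console.WriteLine(ready | true);   // True

        // Conversions beyond the two language-level ones, via .NET helpers.
        bool parsed = bool.Parse("true");
        bool fromInt = Convert.ToBoolean(1);
        Console.WriteLine($"{parsed} {fromInt}");  // True True
    }
}
```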
57.44
564
0.754875
eng_Latn
0.972465
d0924fe4683c12bbcc0ddecb91ff2994a310e3c9
1,870
md
Markdown
README.md
linkorb/lua-php
13d4a840177e78c7c5102c846d63feb37f5c3361
[ "MIT" ]
4
2020-09-10T20:22:57.000Z
2021-10-18T06:33:38.000Z
README.md
linkorb/lua-php
13d4a840177e78c7c5102c846d63feb37f5c3361
[ "MIT" ]
null
null
null
README.md
linkorb/lua-php
13d4a840177e78c7c5102c846d63feb37f5c3361
[ "MIT" ]
null
null
null
Lua PHP ======= This library enables you to add Lua scripting support to your PHP applications. ## LuaSandbox The LuaSandbox class allows you to easily run user-supplied Lua scripts in an empty sandbox environment. This means that dangerous functions (i.e. for file and network IO) are unavailable by default. To make the sandbox useful, you register your own PHP-implemented functions that you allow the chunks to execute. ## Use-cases * Support user-supplied scripts to respond to events in your application * Advanced expressions, filters, segments * Customizable routing * ... and many more :) ## Usage Check the `example/` directory for a well-documented example. ## About Lua * Website: http://www.lua.org/ * Wikipedia: https://en.wikipedia.org/wiki/Lua_(programming_language) ## Requirements This library requires that the [PHP Lua extension](https://www.php.net/manual/en/book.lua.php) is installed. A quick install guide for Ubuntu: ```sh # Install lua library apt-get install -y --no-install-recommends lua5.3 liblua5.3-dev # pecl expects liblua and includes in specific locations, so move them around a bit: cp /usr/lib/x86_64-linux-gnu/liblua5.3.a /usr/lib/liblua.a cp /usr/lib/x86_64-linux-gnu/liblua5.3.so /usr/lib/liblua.so ln -s /usr/include/lua5.3 /usr/include/lua # Install the lua extension through pecl pecl install lua # Activate the lua extension in your PHP config php --ini # find out where your PHP config files are located echo "extension=lua.so" > /path/to/my/php/conf.d/lua.ini ``` ## License MIT. Please refer to the [license file](LICENSE) for details. ## Brought to you by the LinkORB Engineering team <img src="http://www.linkorb.com/d/meta/tier1/images/linkorbengineering-logo.png" width="200px" /><br /> Check out our other projects at [linkorb.com/engineering](http://www.linkorb.com/engineering). Btw, we're hiring!
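Once the extension is installed, a quick smoke test against the extension's `Lua` class can confirm the setup. This is a rough sketch of the underlying extension API only; the library's own `LuaSandbox` wrapper and its registration methods are shown in the `example/` directory.

```php
<?php
// Requires the pecl "lua" extension installed as described above.
$lua = new Lua();

// Expose a PHP-implemented function to Lua code (the same idea the
// LuaSandbox wrapper builds on).
$lua->registerCallback('greet', function ($name) {
    return "Hello, $name!";
});

// Run an untrusted Lua chunk that calls back into PHP.
$result = $lua->eval('return greet("world")');

var_dump($result); // expected: string(13) "Hello, world!"
```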
32.241379
113
0.75615
eng_Latn
0.93714
d09297961adc52cd758f31ae8310fcd63aa091f2
33
md
Markdown
README.md
YasarTutuk/ReportingWebApp
073ee51643453a14452487db34af4503974e867d
[ "Unlicense" ]
null
null
null
README.md
YasarTutuk/ReportingWebApp
073ee51643453a14452487db34af4503974e867d
[ "Unlicense" ]
null
null
null
README.md
YasarTutuk/ReportingWebApp
073ee51643453a14452487db34af4503974e867d
[ "Unlicense" ]
null
null
null
# ReportingWebApp Bumin Homework
11
17
0.848485
kor_Hang
0.473686
d092b540bbeecd8761e07f9e6cf10b7b9c7178fe
25,783
md
Markdown
MasterBitcoin2CN/ch06.md
xiaoPOoo/BitCoin
e30e08c9f936fed8b141cc1233584fc83248dd60
[ "CC-BY-4.0" ]
null
null
null
MasterBitcoin2CN/ch06.md
xiaoPOoo/BitCoin
e30e08c9f936fed8b141cc1233584fc83248dd60
[ "CC-BY-4.0" ]
null
null
null
MasterBitcoin2CN/ch06.md
xiaoPOoo/BitCoin
e30e08c9f936fed8b141cc1233584fc83248dd60
[ "CC-BY-4.0" ]
null
null
null
## 6.1 简介 比特币交易是比特币系统中最重要的部分。根据比特币系统的设计原理,系统中任何其他的部分都是为了确保比特币交易可以被生成、能在比特币网络中得以传播和通过验证,并最终添加入全球比特币交易总账簿(比特币区块链)。比特币交易的本质是数据结构,这些数据结构中含有比特币交易参与者价值转移的相关信息。比特币区块链是一本全球复式记账总账簿,每个比特币交易都是在比特币区块链上的一个公开记录。 在这一章,我们将会剖析比特币交易的多种形式、所包含的信息、如何被创建、如何被验证以及如何成为所有比特币交易永久记录的一部分。当我们在本章中使用术语“钱包”时,我们指的是构建交易的软件,而不仅仅是密钥的数据库。 ## 6.2交易细节 在[第二章比特币概述]中,我们使用区块浏览器查看了Alice曾经在Bob的咖啡店(Alice与Bob's Cafe的交易)支付咖啡的交易。 区块浏览器应用程序显示从Alice的“地址”到Bob的“地址”的交易。 这是一个非常简化的交易中包含的内容。 实际上,正如我们将在本章中看到的,所显示的大部分信息都是由区块浏览器构建的,实际上并不在交易中。 ![图6-1Alice与Bob的咖啡交易](http://upload-images.jianshu.io/upload_images/1785959-41aeeaa8e1e6b256.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 图1. Alice与Bob's Cafe的交易 ### 6.2.1交易 - 幕后细节 在幕后,实际的交易看起来与典型的区块浏览器提供的交易非常不同。 事实上,我们在各种比特币应用程序用户界面中看到的大多数高级结构实际上并不存在于比特币系统中。 我们可以使用Bitcoin Core的命令行界面(getrawtransaction和decodeawtransaction)来检索Alice的“原始”交易,对其进行解码,并查看它包含的内容。 结果如下: Alice的交易被解码后是这个样子: ``` { "version": 1, "locktime": 0, "vin": [ { "txid":"7957a35fe64f80d234d76d83a2a8f1a0d8149a41d81de548f0a65a8a999f6f18", "vout": 0, "scriptSig": "3045022100884d142d86652a3f47ba4746ec719bbfbd040a570b1deccbb6498c75c4ae24cb02204b9f039ff08df09cbe9f6addac960298cad530a863ea8f53982c09db8f6e3813[ALL] 0484ecc0d46f1918b30928fa0e4ed99f16a0fb4fde0735e7ade8416ab9fe423cc5412336376789d172787ec3457eee41c04f4938de5cc17b4a10fa336a8d752adf", "sequence": 4294967295 } ], "vout": [ { "value": 0.01500000, "scriptPubKey": "OP_DUP OP_HASH160 ab68025513c3dbd2f7b92a94e0581f5d50f654e7 OP_EQUALVERIFY OP_CHECKSIG" }, { "value": 0.08450000, "scriptPubKey": "OP_DUP OP_HASH160 7f9b1a7fb68d60c536c2fd8aeaa53a8f3cc025a8 OP_EQUALVERIFY OP_CHECKSIG", } ] } ``` 您可能会注意到这笔交易似乎少了些什么东西,比如:Alice的地址在哪里?Bob的地址在哪里? Alice发送的“0.1”个币的输入在哪里? 在比特币里,没有具体的货币,没有发送者,没有接收者,没有余额,没有帐户,没有地址。为了使用者的便利,以及使事情更容易理解,所有这些都构建在更高层次上。 你可能还会注意到很多奇怪和难以辨认的字段以及十六进制字符串。 不必担心,本章将详细介绍这里所示的各个字段。 ## 6.3交易的输入输出 比特币交易中的基础构建单元是交易输出。 交易输出是比特币不可分割的基本组合,记录在区块上,并被整个网络识别为有效。 比特币完整节点跟踪所有可找到的和可使用的输出,称为 “未花费的交易输出”(unspent transaction outputs),即UTXO。 所有UTXO的集合被称为UTXO集,目前有数百万个UTXO。 当新的UTXO被创建,UTXO集就会变大,当UTXO被消耗时,UTXO集会随着缩小。每一个交易都代表UTXO集的变化(状态转换)。 当我们说用户的钱包已经“收到”比特币时,我们的意思是,钱包已经检测到了可用的UTXO。通过钱包所控制的密钥,我们可以把这些UTXO花出去。 因此,用户的比特币“余额”是指用户钱包中可用的UTXO总和,而他们可能分散在数百个交易和区块中。 “一个用户的比特币余额”,这个概念是比特币钱包应用创建的派生之物。比特币钱包通过扫描区块链并聚集所有属于该用户的UTXO来计算该用户的余额 。大多数钱包维护一个数据库或使用数据库服务来存储所有UTXO的快速参考集,这些UTXO由用户所有的密钥来控制花费行为。 一个UTXO可以是1“聪”(satoshi)的任意倍数(整数倍)。就像美元可以被分割成表示两位小数的“分”一样,比特币可以被分割成八位小数的“聪”。尽管UTXO可以是任意值,但一旦被创造出来,即不可分割。这是UTXO值得被强调的一个重要特性:UTXO是面值为“聪”的离散(不连续)且不可分割的价值单元,一个UTXO只能在一次交易中作为一个整体被消耗。 如果一个 UTXO比一笔交易所需量大,它仍会被当作一个整体而消耗掉,但同时会在交易中生成零头。例如,你有 一个价值20比特币的 UTXO并且想支付1比特币,那么你的交易必须消耗掉整个20比特币的UTXO,并产生两个输出:一个支付了1比特币给接收人,另一个支付了19比特币的找零到你的钱包。这样的话,由于UTXO(或交易输出)的不可分割特性,大部分比特币交易都会产生找零。 想象一下,一位顾客要买1.5元的饮料。她掏出钱包并试图从所有硬币和钞票中找出一种组合来凑齐她要支付的1.5 元。如果可能的话,她会选刚刚好的零钱(比如一张1元纸币和5个一毛硬币)或者是小面额的组合(比如3个五毛硬币)。如果都不行的话,她会用一张大面额的钞票,比如5元纸币。如果她把5元给了商店老板,她会得到3.5元的找零,并把找零放回她的钱包以供未来的交易使用。 类似的,一笔比特币交易可以是任意金额,但必须从用户可用的UTXO中创建出来。用户不能再把UTXO进一步细分,就像不能把一元纸币撕开而继续当货币使用一样。用户的钱包应用通常会从用户可用的UTXO中选取多个来拼凑出一个大于或等于一笔交易所需的比特币量。 就像现实生活中一样,比特币应用可以使用一些策略来满足付款需求:组合若干小额UTXO,并算出准确的找零;或者使用一个比交易额大的UTXO然后进行找零。所有这些复杂的、由可花费UTXO组成的集合,都是由用户的钱包自动完成, 并不为用户所见。只有当你以编程方式用UTXO来构建原始交易时,这些才与你有关。 一笔交易会消耗先前的已被记录(存在)的UTXO,并创建新的UTXO以备未来的交易消耗。通过这种方式,一定数量的比特币价值在不同所有者之间转移,并在交易链中消耗和创建UTXO。一笔比特币交易通过使用所有者的签名来解锁UTXO,并通过使用新的所有者的比特币地址来锁定并创建UTXO。 从交易的输出与输入链角度来看,有一个例外,即存在一种被称为“币基交易”(Coinbase Transaction)的特殊交易,它是每个区块中的第一笔交易,这种交易存在的原因是作为对挖矿的奖励,创造出全新的可花费比特币用来支付给“赢家”矿工。这也就是为什么比特币可以在挖矿过程中被创造出来,我们将在“挖矿”这一章进行详述。 > **小贴士:**输入和输出,哪一个是先产生的呢?先有鸡还是先有蛋呢?严格来讲,先产生输出,因为可以创造新比特币的 “币基交易”没有输入,但它可以无中生有地产生输出。 ### 6.3.1 交易输出 
每一笔比特币交易都会创造输出,并被比特币账簿记录下来。除特例之外(见“数据输出操作符”(OP_RETURN)),几乎所有的输出都能创造一定数量的可用于支付的比特币,也就是UTXO。这些UTXO被整个网络识别,所有者可在未来的交易中使用它们。 UTXO在UTXO集(UTXOset)中被每一个全节点比特币客户端追踪。 新的交易从UTXO集中消耗(花费)一个或多个输出。 交易输出包含两部分: - 一定量的比特币,面值为“聪”(satoshis) ,是最小的比特币单位; - 确定花费输出所需条件的加密难题(cryptographic puzzle) 这个加密难题也被称为锁定脚本(locking script), 见证脚本(witness script), 或脚本公钥 (scriptPubKey)。 有关交易脚本语言会在后面121页的“交易脚本和脚本语言”一节中详细讨论。 现在,我们来看看 Alice 的交易(之前的章节“交易 - 幕后”所示),看看我们是否可以找到并识别输出。 在 JSON 编码中,输出位于名为 vout 的数组(列表)中: ``` "vout": [ { "value": 0.01500000, "scriptPubKey": "OP_DUP OP_HASH160 ab68025513c3dbd2f7b92a94e0581f5d50f654e7 OP_EQUALVERIFY OP_CHECKSIG" }, { "value": 0.08450000, "scriptPubKey": "OP_DUP OP_HASH160 7f9b1a7fb68d60c536c2fd8aeaa53a8f3cc025a8 OP_EQUALVERIFY OP_CHECKSIG", } ] ``` 如您所见,交易包含两个输出。 每个输出都由一个值和一个加密难题来定义。 在 Bitcoin Core 显示的编码中,该值显示以 bitcoin 为单位,但在交易本身中,它被记录为以 satoshis 为单位的整数。 每个输出的第二部分是设定支出条件的加密难题。 Bitcoin Core 将其显示为 scriptPubKey,并向我们展示了一个可读的脚本表示。 稍后将在脚本构造(Lock + Unlock)中讨论锁定和解锁UTXO的主题。 在 ScriptPubKey 中用于编辑脚本的脚本语言在章节Transaction Scripts(交易脚本)和Script Language(脚本语言)中讨论。 但在我们深入研究这些话题之前,我们需要了解交易输入和输出的整体结构。 #### 6.3.1.1交易序列化 - 输出 当交易通过网络传输或在应用程序之间交换时,它们被序列化。 序列化是将内部的数据结构表示转换为可以一次发送一个字节的格式(也称为字节流)的过程。 序列化最常用于编码通过网络传输或用于文件中存储的数据结构。 交易输出的序列化格式**如下表所示**: ![表6-1交易输出序列化格式](http://upload-images.jianshu.io/upload_images/1785959-d804ff8f61c1c86e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 大多数比特币函数库和架构不会在内部将交易存储为字节流,因为每次需要访问单个字段时,都需要复杂的解析。为了方便和可读性,比特币函数库将交易内部存储在数据结构(通常是面向对象的结构)中。 从交易的字节流表示转换为函数库的内部数据结构表示的过程称为反序列化或交易解析。转换回字节流以通过网络传输、哈希化(hashing)或存储在磁盘上的过程称为序列化。大多数比特币函数库具有用于交易序列化和反序列化的内置函数。 看看是否可以从序列化的十六进制形式手动解码 Alice 的交易中,找到我们以前看到的一些元素。包含两个输出的部分在下面中已加粗显示: ![img](http://upload-images.jianshu.io/upload_images/8490153-c2bfbd698cdbb37d.png?imageMogr2/auto-orient/strip) 这里有一些提示: - 加粗显示的部分有两个输出,每个都如本节之前所述进行了序列化。 - 0.015比特币的价值是1,500,000 satoshis。 这是十六进制的16 e3 60。 - 在串行化交易中,值16 e3 60以小端(最低有效字节优先)字节顺序进行编码,所以它看起来像60 e3 16。 - scriptPubKey的长度为25个字节,以十六进制显示为19个字节。 ### 6.3.2交易输入 交易输入将UTXO(通过引用)标记为将被消费,并通过解锁脚本提供所有权证明。 要构建一个交易,一个钱包从它控制的UTXO中选择足够的价值来执行被请求的付款。 有时一个UTXO就足够,其他时候不止一个。 对于将用于进行此付款的每个UTXO,钱包将创建一个指向UTXO的输入,并使用解锁脚本解锁它。 让我们更详细地看一下输入的组件。输入的第一部分是一个指向UTXO的指针,通过指向UTXO被记录在区块链中所在的交易的哈希值和序列号来实现。 第二部分是解锁脚本,钱包构建它用以满足设定在UTXO中的支出条件。 大多数情况下,解锁脚本是一个证明比特币所有权的数字签名和公钥,但是并不是所有的解锁脚本都包含签名。 第三部分是序列号,稍后再讨论。 考虑我们在之前交易幕后章节提到的例子。交易输入是一个名为 vin 的数组(列表): ``` "vin": [ { "txid": "7957a35fe64f80d234d76d83a2a8f1a0d8149a41d81de548f0a65a8a999f6f18", "vout": 0, "scriptSig" : "3045022100884d142d86652a3f47ba4746ec719bbfbd040a570b1deccbb6498c75c4ae24cb02204b9f039ff08df09cbe9f6addac960298cad530a863ea8f53982c09db8f6e3813[ALL] 0484ecc0d46f1918b30928fa0e4ed99f16a0fb4fde0735e7ade8416ab9fe423cc5412336376789d172787ec3457eee41c04f4938de5cc17b4a10fa336a8d752adf", "sequence": 4294967295 } ] ``` 如您所见,列表中只有一个输入(因为一个UTXO包含足够的值来完成此付款)。 输入包含四个元素: - 一个交易ID,引用包含正在使用的UTXO的交易 - 一个输出索引(vout),用于标识来自该交易的哪个UTXO被引用(第一个为零) - 一个 scriptSig(解锁脚本),满足放置在UTXO上的条件,解锁它用于支出 - 一个序列号(稍后讨论) 在 Alice 的交易中,输入指向的交易ID是: ``` 7957a35fe64f80d234d76d83a2a8f1a0d8149a41d81de548f0a65a8a999f6f18 ``` 输出索引是0(即由该交易创建的第一个UTXO)。解锁脚本由Alice的钱包构建,首先检索引用的UTXO,检查其锁定脚本,然后使用它来构建所需的解锁脚本以满足此要求。 仅仅看这个输入,你可能已经注意到,除了对包含它引用的交易之外,我们无从了解这个UTXO的任何内容。我们不知道它的价值(多少satoshi金额),我们不知道设置支出条件的锁定脚本。要找到这些信息,我们必须通过检索整个交易来检索被引用的UTXO。请注意,由于输入的值未明确说明,因此我们还必须使用被引用的UTXO来计算在此交易中支付的费用(参见后面交易费用章节)。 不仅仅是Alice的钱包需要检索输入中引用的UTXO。一旦将该交易广播到网络,每个验证节点也将需要检索交易输入中引用的UTXO,以验证该交易。 
因为缺乏语境,交易本身似乎不完整。他们在输入中引用UTXO,但是没有检索到UTXO,我们无法知道输入的值或其锁定条件。当编写比特币软件时,无论何时解码交易以验证它或计算费用或检查解锁脚本,您的代码首先必须从块链中检索引用的UTXO,以构建隐含但不存在于输入的UTXO引用中的语境。例如,要计算支付总额的交易费,您必须知道输入和输出值的总和。但是,如果不检索输入中引用的UTXO,则不知道它们的值。因此,在单个交易中计算交易费用的简单操作,实际上涉及多个交易的多个步骤和数据。 我们可以使用与比特币核心相同的命令序列,就像我们在检索Alice的交易(getrawtransaction和decodeawtransaction)时一样。因此,我们可以得到在前面的输入中引用的UTXO,并查看: 输入中引用的来自Alice 以前的交易中的UTXO: ``` "vout": [ { "value": 0.10000000, "scriptPubKey": "OP_DUP OP_HASH160 7f9b1a7fb68d60c536c2fd8aeaa53a8f3cc025a8 OP_EQUALVERIFY OP_CHECKSIG" } ] ``` 我们看到这个UTXO的值为0.1BTC,并且它有一个包含“OP_DUP OP_HASH160 ...”的锁定脚本(scriptPubKey)。 > **小贴士:**为了充分了解Alice的交易,我们必须检索引用以前的交易作为输入。 检索以前的交易和未花费的交易输出的函数是非常普遍的,并且存在于几乎每个比特币函数库和API中。 #### 6.3.2.1交易序列化--交易输入 当交易被序列化以在网络上传输时,它们的输入被编码成字节流,如下表所示 ![表6-2交易输入序列化](http://upload-images.jianshu.io/upload_images/1785959-8148f1d7622a1b82.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 与输出一样,我们来看看我们是否可以从序列化格式的 Alice 的交易中找到输入。 首先,将输入解码: ``` "vin": [ { "txid": "7957a35fe64f80d234d76d83a2a8f1a0d8149a41d81de548f0a65a8a999f6f18", "vout": 0, "scriptSig" : "3045022100884d142d86652a3f47ba4746ec719bbfbd040a570b1deccbb6498c75c4ae24cb02204b9f039ff08df09cbe9f6addac960298cad530a863ea8f53982c09db8f6e3813[ALL] 0484ecc0d46f1918b30928fa0e4ed99f16a0fb4fde0735e7ade8416ab9fe423cc5412336376789d172787ec3457eee41c04f4938de5cc17b4a10fa336a8d752adf", "sequence": 4294967295 } ] ``` 现在,我们来看看我们是否可以识别下面这些以十六进制表示法表示的字段: ![img](http://upload-images.jianshu.io/upload_images/8490153-cd56da6022b57006.png?imageMogr2/auto-orient/strip) **提示:** - 交易ID以反转字节顺序序列化,因此以(十六进制)18开头,以79结尾 - 输出索引为4字节组的“0”,容易识别 - scriptSig的长度为139个字节,或十六进制为8b - 序列号设置为FFFFFFFF,也容易识别 ### 6.3.3 交易费 大多数交易包含交易费(矿工费),这是为了确保网络安全而给比特币矿工的一种补偿。费用本身也作为一个安全机制,使经济上不利于攻击者通过交易来淹没网络。对于挖矿、费用和矿工得到的奖励,在挖矿一章中将有更详细的讨论。 这一节解释交易费是如何被包含在一个典型的交易中的。大多数钱包自动计算并计入交易费。但是, 如果你以编程方式构造交易,或者使用命令行界面,你必须手动计算并计入这些费用。 交易费作为矿工打包(挖矿)一笔交易到下一个区块中的一种激励;同时作为一种抑制因素,通过对每一笔交易收取小额费用来防止对系统的滥用。成功挖到某区块的矿工将得到该区内包含的矿工费, 并将该区块添加至区块链中。 交易费是基于交易的千字节规模来计算的,而不是比特币交易的价值。总的来说,交易费是根据比特币网络中的市场力量确定的。矿工会依据许多不同的标准对交易进行优先级排序,包括费用,他们甚至可能在某些特定情况下免费处理交易。但大多数情况下,交易费影响处理优先级,这意味着有足够费用的交易会更可能被打包进下一个挖出的区块中;反之交易费不足或者没有交易费的交易可能会被推迟,基于尽力而为的原则在几个区块之后被处理,甚至可能根本不被处理。交易费不是强制的,而且没有交易费的交易最终也可能会被处理,但是,交易费将提高处理优先级。 随着时间的推移,交易费的计算方式以及在交易处理优先级上的影响已经产生了变化。起初,交易费是固定的,是网络中的一个固定常数**。**渐渐地,随着网络容量和交易量的不断变化,并可能受到来自市场力量的影响,收费结构开始放松。自从至少2016年初以来,比特币网络容量的限制已经造成交易之间的竞争,从而导致更高的费用,免费交易彻底成为过去式。零费用或非常低费用的交易鲜少被处理,有时甚至不会在网络上传播。 在 Bitcoin Core 中,费用传递政策由minrelaytxfee选项设置。 目前默认的minrelaytxfee是每千字节0.00001比特币或者millibitcoin的1%。 因此,默认情况下,费用低于0.0001比特币的交易是免费的,但只有在内存池有空间时才会被转发; 否则,会被丢弃。 比特币节点可以通过调整minrelaytxfee的值来覆盖默认的费用传策略。 任何创建交易的比特币服务,包括钱包,交易所,零售应用等,都必须实现动态收费。动态费用可以通过第三方费用估算服务或内置的费用估算算法来实现。如果您不确定,那就从第三方服务开始,如果您希望去除第三方依赖,您应当有设计和部署自己算法的经验。 费用估算算法根据网络能力和“竞争”交易提供的费用计算适当的费用。这些算法的范围从十分简单的(最后一个块中的平均值或中位数)到非常复杂的(统计分析)均有覆盖。他们估计必要的费用(以字节为单位),这将使得交易具有很高的可能性被选择并打包进一定数量的块内。大多数服务为用户提供高、中、低优先费用的选择。高优先级意味着用户支付更高的交易费,但交易可能就会被打包进下一个块中。中低优先级意味着用户支付较低的交易费,但交易可能需要更长时间才能确认。 许多钱包应用程序使用第三方服务进行费用计算。一个流行的服务是http://bitcoinfees.21.co,它提供了一个API和一个可视化图表,以satoshi / byte为单位显示了不同优先级的费用。 > **小贴士:**静态费用在比特币网络上不再可行。 设置静态费用的钱包将导致用户体验不佳,因为交易往往会被“卡住”,并不被确认。 不了解比特币交易和费用的用户因交易被“卡住” 而感到沮丧,因为他们认为自己已经失去了资金。 下面费用估算服务bitcoinfees.21.co中的图表显示了10个satoshi / byte增量的费用的实时估计,以及每个范围的费用交易的预期确认时间(分钟和块数)。 对于每个收费范围(例如,61-70 satoshi /字节),两个水平栏显示过去24小时(102,975)中未确认交易的数量(1405)和交易总数,费用在该范围内。 根据图表,此时推荐的高优先费用为80 satoshi / 字节,这可能导致交易在下一个块(零块延迟)中开采。 据合理判断,一笔常规交易的大小约为226字节,因此单笔交易建议费用为18,080 satoshis(0.00018080 BTC)。 费用估算数据可以通过简单的HTTP REST API(https://bitcoinfees.21.co/api/v1/fees/recommended)来检索。 例如,在命令行中使用curl命令: 运用费用估算API ``` $ curl 
https://bitcoinfees.21.co/api/v1/fees/recommended {"fastestFee":80,"halfHourFee":80,"hourFee":60} ``` API通过费用估算以 satoshi per byte 的方式返回一个 JSON 对象,从而实现”最快确认“ (fastestFee),以及在三个块(halfHourFee)和六个块(hourFee)内确认。 ![图6-2bitcoinfees.21.co提供的费用估算服务](http://upload-images.jianshu.io/upload_images/1785959-df50cd23aa05add7.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) ### 6.3.4 把交易费加到交易中 交易的数据结构没有交易费的字段。相替代地,交易费是指输入和输出之间的差值。从所有输入中扣掉所有输出之后的多余的量会被矿工作为矿工费收集走: > 交易费即输入总和减输出总和的余量:交易费 = 求和(所有输入) - 求和(所有输出) 正确理解交易比较困难,但又尤为重要。因为如果你要构建你自己的交易,你必须确保你没有因疏忽在交易中添加一笔大量交易费而大大减少了输入的可花费额。这意味着你必须计算所有的输入,如有必要则加上找零, 不然的话,结果就是你给了矿工一笔相当可观的劳务费! 举例来说,如果你消耗了一个20比特币的UTXO来完成1比特币的付款,你必须包含一笔19比特币的找零回到你的钱包。否则,那剩下的19比特币会被当作交易费,并将由挖出你交易的矿工收走。尽管你会得到高优先级的处理,并且让一个矿工喜出望外,但这很可能不是你想要的。 > **警告:**如果你忘记了在手动构造的交易中增加找零的输出,系统会把找零当作交易费来处理。“不用找了!”也许不是你的真实意愿。 让我们重温一下Alice在咖啡店的交易来看看在实际中它如何运作。Alice想花0.015比特币购买咖啡。为了确保这笔交易能被立即处理,Alice想添加一笔交易费,比如说0.001。这意味着总花费会变成0.016。因此她的钱包需要凑齐 0.016或更多的UTXO。如果需要,还要加上找零。我们假设他的钱包有一个0.2比特币的UTXO可用。他的钱包就会消耗 掉这个UTXO,创造一个新的0.015的输出给Bob的咖啡店,另一个0.184比特币的输出作为找零回到Alice的钱包, 并留下未分配的0.001矿工费内含在交易中。 现在让我们看看另一种情况。Eugenia,我们在菲律宾的儿童募捐项目主管,完成了一次为孩子购买教材的筹款活动。她从世界范围内接收到了好几千份小额捐款,总额是50比特币。所以她的钱包塞满了非常小的UTXO。现在她想用比特币从本地的一家出版商购买几百本教材。 现在Eugenia的钱包应用想要构造一个单笔大额付款交易,它必须从可用的、由很多小数额构成的大的UTXO集合中寻求钱币来源。这意味着交易的结果是从上百个小额UTXO中作为输入,但只有一个输出用来付给出版商。输入数量这么巨大的交易会比一千字节要大,也许总尺寸会达到两至三千字节。结果是它将需要比中等规模交易要高得多的交易费。 Eugenia的钱包应用会通过测量交易的大小,乘以每千字节需要的费用来计算适当的交易费**。**很多钱包会支付较大的交易费,确保交易得到及时处理。更高交易费不是因为Eugenia付的钱很多,而是因为她的交易很复杂并且尺寸更大——交易费是与参加交易的比特币值无关的。 ## 6.4比特币交易脚本和脚本语言 比特币交易脚本语言,称为脚本,是一种类似Forth的逆波兰表达式的基于堆栈的执行语言。 如果听起来不知所云,是你可能还没有学习20世纪60年代的编程语言,但是没关系,我们将在本章中解释这一切。 放置在UTXO上的锁定脚本和解锁脚本都以此脚本语言编写。 当一笔比特币交易被验证时,每一个输入值中的解锁脚本与其对应的锁定脚本同时 (互不干扰地)执行,以确定这笔交易是否满足支付条件。 脚本是一种非常简单的语言,被设计为在执行范围上有限制,可在一些硬件上执行,可能与嵌入式装置一样简单。 它仅需要做最少的处理,许多现代编程语言可以做的花哨的事情它都不能做。 但用于验证可编程货币,这是一个经深思熟虑的安全特性。 如今,大多数经比特币网络处理的交易是以“Alice付给Bob”的形式存在,并基于一种称为“P2PKH”(Pay-toPublic-Key-Hash)脚本。但是,比特币交易不局限于“支付给Bob的比特币地址”的脚本。事实上,锁定脚本可以被编写成表达各种复杂的情况。为了理解这些更为复杂的脚本,我们必须首先了解交易脚本和脚本语言的基础知识。 在本节中,我们将会展示比特币交易脚本语言的各个组件;同时,我们也会演示如何使用它去表达简单的使用条件以及如何通过解锁脚本去满足这些花费条件。 > **小贴士:**比特币交易验证并不基于静态模式**,**而是通过脚本语言的执行来实现的。这种语言允许表达几乎无限的各种条件。这也是比特币作为一种“可编程的货币”所拥有的力量。 ### 6.4.1 图灵非完备性 比特币脚本语言包含许多操作码,但都故意限定为一种重要的模式——除了有条件的流控制以外,没有循环或复杂流控制能力。这样就保证了脚本语言的图灵非完备性,这意味着脚本有限的复杂性和可预见的执行次数。脚本并不是一种通用语言,这些限制确保该语言不被用于创造无限循环或其它类型的逻辑炸弹,这样的炸弹可以植入在一笔交易中,引起针对比特币网络的“拒绝服务”攻击。记住,每一笔交易都会被网络中的全节点验证,受限制的语言能防止交易验证机制被作为一个漏洞而加以利用。 ### 6.4.2 去中心化验证 比特币交易脚本语言是没有中心化主体的,没有任何中心主体能凌驾于脚本之上,也没有中心主体会在脚本被执行后对其进行保存。所以执行脚本所需信息都已包含在脚本中。可以预见的是,一个脚本能在任何系统上以相同的方式执行。如果您的系统验证了一个脚本,可以确信的是每一个比特币网络中的其他系统也将验证这个脚本,这意味着一个有效的交易对每个人而言都是有效的,而且每一个人都知道这一点。这种结果可预见性是比特币系统的一项至关重要的良性特征。 ### 6.4.3 脚本构建(锁定与解锁) 比特币的交易验证引擎依赖于两类脚本来验证比特币交易:锁定脚本和解锁脚本。 锁定脚本是一个放置在输出上面的花费条件:它指定了今后花费这笔输出必须要满足的条件。 由于锁定脚本往往含有一个公钥或比特币地址(公钥哈希值),在历史上它曾被称为脚本公钥(scriptPubKey)。由于认识到这种脚本技术存在着更为广泛的可能性,在本书中,我们将它称为“锁定脚本”(locking script)。在大多数比特币应用程序中,我们所称的“锁定脚本”将以scriptPubKey的形式出现在源代码中。您还将看到被称为见证脚本(witness script)的锁定脚本(参见[隔离见证]章节),或者更一般地说,它是一个加密难题(cryptographic puzzle)。 这些术语在不同的抽象层次上都意味着同样的东西。 解锁脚本是一个“解决”或满足被锁定脚本在一个输出上设定的花费条件的脚本,它将允许输出被消费。解锁脚本是每一笔比特币交易输入的一部分,而且往往含有一个由用户的比特币钱包(通过用户的私钥)生成的数字签名。由于解锁脚本常常包含一个数字签名,因此它曾被称作ScriptSig。在大多数比特币应用的源代码中,ScriptSig便是我们所说的解锁脚本。你也会看到解锁脚本被称作“见证”(witness 参见[隔离见证]章节)。在本书中,我们将它称为“解锁脚本”,用以承认锁定脚本的需求有更广的范围。但并非所有解锁脚本都一定会包含签名。 每一个比特币验证节点会通过同时执行锁定和解锁脚本来验证一笔交易。每个输入都包含一个解锁脚本,并引用了之前存在的UTXO。 验证软件将复制解锁脚本,检索输入所引用的UTXO,并从该UTXO复制锁定脚本。 然后依次执行解锁和锁定脚本。 如果解锁脚本满足锁定脚本条件,则输入有效(请参阅单独执行解锁和锁定脚本部分)。 所有输入都是独立验证的,作为交易总体验证的一部分。 请注意,UTXO被永久地记录在区块链中,因此是不变的,并且不受在新交易中引用失败的尝试的影响。 只有正确满足输出条件的有效交易才能将输出视为“开销来源”,继而该输出将被从未花费的交易输出集(UTXO set)中删除。 
下图是最常见类型的比特币交易(P2PKH:对公钥哈希的付款)的解锁和锁定脚本的示例,显示了在脚本验证之前从解锁和锁定脚本的并置产生的组合脚本: ![图6-3结合scriptSig和scriptPubKey来评估交易脚本](http://upload-images.jianshu.io/upload_images/1785959-e4060555d14bcd28.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) #### 6.4.3.1脚本执行堆栈 比特币的脚本语言被称为基于堆栈的语言,因为它使用一种被称为堆栈的数据结构。堆栈是一个非常简单的数据结构,可以被视为一叠卡片。栈允许两个操作:push和pop(推送和弹出)。 Push(推送)在堆栈顶部添加一个项目。 Pop(弹出)从堆栈中删除最顶端的项。栈上的操作只能作用于栈最顶端项目。堆栈数据结构也被称为“后进先出”( Last-In-First-Out)或 “LIFO” 队列。 脚本语言通过从左到右处理每个项目来执行脚本。数字(数据常量)被推到堆栈上。操作码(Operators)从堆栈中推送或弹出一个或多个参数,对其进行操作,并可能将结果推送到堆栈上。例如,操作码 OP_ADD 将从堆栈中弹出两个项目,添加它们,并将结果的总和推送到堆栈上。 条件操作码(Conditional operators)对一个条件进行评估,产生一个 TRUE 或 FALSE 的布尔结果(boolean result)。例如, OP_EQUAL 从堆栈中弹出两个项目,如果它们相等,则推送为 TRUE(由数字1表示),否则推送为 FALSE(由数字0表示)。比特币交易脚本通常包含条件操作码,以便它们可以产生用来表示有效交易的 TRUE 结果。 #### 6.4.3.2一个简单的脚本 现在让我们将学到的关于脚本和堆栈的知识应用到一些简单的例子中。 如图6-4,在比特币的脚本验证中,执行简单的数学运算时,脚本“ 2 3 OP_ADD 5 OP_EQUAL ”演示了算术加法操作码 OP_ADD ,该操作码将两个数字相加,然后把结果推送到堆栈, 后面的条件操作符 OP_EQUAL 是验算之前的两数之和是否等于 5 。为了简化起见,前缀OP_在一步步的演示过程中将被省略。有关可用脚本操作码和函数的更多详细信息,请参见[交易脚本]。 ![图6-4比特币的脚本验证中,执行简单的数学运算](http://upload-images.jianshu.io/upload_images/1785959-c56755abdeed5b7b.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 尽管绝大多数解锁脚本都指向一个公钥哈希值(本质上就是比特币地址),因此如果想要使用资金则需验证所有权,但脚本本身并不需要如此复杂。任何解锁和锁定脚本的组合如果结果为真(TRUE),则为有效。前面被我们用于说明脚本语言的简单算术操作码同样也是一个有效的锁定脚本,该脚本能用于锁定交易输出。 使用部分算术操作码脚本作为锁定脚本的示例: ``` 3 OP_ADD 5 OP_EQUAL ``` 该脚本能被以如下解锁脚本为输入的一笔交易所满足: ``` 2 ``` 验证软件将锁定和解锁脚本组合起来,结果脚本是: ``` 2 3 OP_ADD 5 OP_EQUAL ``` 正如在上图中所看到的,当脚本被执行时,结果是OP_TRUE,交易有效。不仅该笔交易的输出锁定脚本有效,同时UTXO也能被任何知晓这个运算技巧(知道是数字2)的人所使用。 > **小贴士:**如果堆栈顶部的结果显示为TRUE(标记为{&#x7b;0x01&#x7d;}),即为任何非零值,或脚本执行后堆栈为空情形,则交易有效。如果堆栈顶部的结果显示为FALSE(0字节空值,标记为{&#x7b;&#x7d;})或脚本执行被操作码明确禁止,如OP_VERIFY、 OP_RETURN,或有条件终止如OP_ENDIF,则交易无效。详见[tx_script_ops\]相关内容。 以下是一个稍微复杂一点的脚本,它用于计算 2+7-3+1 。注意,当脚本在同一行包含多个操作码时,堆栈允许一个操作码的结果由于下一个操作码执行。 ``` 2 7 OP_ADD 3 OP_SUB 1 OP_ADD 7 OP_EQUAL ``` 请试着用纸笔自行演算脚本,当脚本执行完毕时,你会在堆栈得到正确的结果。 #### 6.4.3.3 解锁和锁定脚本的单独执行 在最初版本的比特币客户端中,解锁和锁定脚本是以连锁的形式存在,并被依次执行的。出于安全因素考虑,在2010 年比特币开发者们修改了这个特性——因为存在“允许异常解锁脚本推送数据入栈并且污染锁定脚本”的漏洞。而在当前的方案中,这两个脚本是随着堆栈的传递被分别执行的。下面将会详细介绍。 首先,使用堆栈执行引擎执行解锁脚本。如果解锁脚本在执行过程中未报错(例如:没有“悬挂”操作码),则复制主堆栈(而不是备用堆栈),并执行锁定脚本。如果从解锁脚本中复制而来的堆栈数据执行锁定脚本的结果为“TRUE",那么解锁脚本就成功地满足了锁定脚本所设置的条件,因此,该输入是一个能使用该UTXO的有效授权。如果在合并脚本后的结果不是”TRUE“以外的任何结果,输入都是无效的,因为它不能满足UTXO中所设置的使用该笔资金的条件。 ### 6.4.4 P2PKH(Pay-to-Public-Key-Hash) 比特币网络处理的大多数交易花费的都是由“付款至公钥哈希”(或P2PKH)脚本锁定的输出,这些输出都含有一个锁定脚本,将输入锁定为一个公钥哈希值,即我们常说的比特币地址。由P2PKH脚本锁定的输出可以通过提供一个公钥和由相应私钥创建的数字签名来解锁(使用)。参见数字签名ECDSA相关内容。 例如,我们可以再次回顾一下Alice向Bob咖啡馆支付的案例。Alice下达了向Bob咖啡馆的比特币地址支付0.015比特币的支付指令,该笔交易的输出内容为以下形式的锁定脚本: ``` OP_DUP OP_HASH160 <Cafe Public Key Hash> OP_EQUALVERIFY OP_CHECKSIG ``` 脚本中的 Cafe Public Key Hash 即为咖啡馆的比特币地址,但该地址不是基于Base58Check编码。事实上,大多数比特币地址的公钥哈希值都显示为十六进制码,而不是大家所熟知的以1开头的基于Bsase58Check编码的比特币地址。 上述锁定脚本相应的解锁脚本是: ``` <Cafe Signature> <Cafe Public Key> ``` 将两个脚本结合起来可以形成如下组合验证脚本: ``` <Cafe Signature> <Cafe Public Key> OP_DUP OP_HASH160 <Cafe Public Key Hash> OP_EQUALVERIFY OP_CHECKSIG ``` 只有当解锁脚本与锁定脚本的设定条件相匹配时,执行组合验证脚本时才会显示结果为真(TRUE)。换句话说,只有当解锁脚本得到了咖啡馆的有效签名,交易执行结果才会被通过(结果为真),该有效签名是从与公钥哈希相匹配的咖啡馆的私钥中所获取的。 图6-5和图6-6(分两部分)显示了组合脚本一步步检验交易有效性的过程。 ![图6-5评估P2PKH交易的脚本(第1部分,共2部分)](http://upload-images.jianshu.io/upload_images/1785959-df262c9f279a046c.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) ![图6-6评估P2PKH交易的脚本(第2部分,共2部分)](http://upload-images.jianshu.io/upload_images/1785959-86488f10788e53bd.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) ## 6.5数字签名(ECDSA) 到目前为止,我们还没有深入了解“数字签名”的细节。在本节中,我们将研究数字签名的工作原理,以及如何在不揭示私钥的情况下提供私钥的所有权证明。 比特币中使用的数字签名算法是椭圆曲线数字签名算法(Elliptic Curve 
Digital Signature Algorithm)或ECDSA。 ECDSA是用于基于椭圆曲线私钥/公钥对的数字签名的算法,如椭圆曲线章节[elliptic_curve\]所述。 ECDSA用于脚本函数OP_CHECKSIG,OP_CHECKSIGVERIFY,OP_CHECKMULTISIG和OP_CHECKMULTISIGVERIFY。每当你锁定脚本中看到这些时,解锁脚本都必须包含一个ECDSA签名。 数字签名在比特币中有三种用途(请参阅下面的侧栏)。第一,签名证明私钥的所有者,即资金所有者,已经授权支出这些资金。第二,授权证明是不可否认的(不可否认性)。第三,签名证明交易(或交易的具体部分)在签字之后没有也不能被任何人修改。 请注意,每个交易输入都是独立签名的。这一点至关重要,因为不管是签名还是输入都不必由同一“所有者”实施。事实上,一个名为 “CoinJoin” 的特定交易方案(多重签名方案?)就使用这个特性来创建多方交易来保护隐私。 > **注意:**每个交易输入和它可能包含的任何签名完全独立于任何其他输入或签名。 多方可以协作构建交易,并各自仅签一个输入。 > ***维基百科对 “数字签名 ”的定义:*** > > *数字签名是用于证明数字消息或文档的真实性的数学方案。 有效的数字签名给了一个容易接受的理由去相信:1)该消息是由已知的发送者(身份认证性)创建的; 2)发送方不能否认已发送消息(不可否认性;3)消息在传输中未被更改(完整性)。* 来源: [https://en.wikipedia.org/wiki/Digital_signature](https://en.wikipedia.org/wiki/Digital_signature)* ### 6.5.1数字签名如何工作 数字签名是一种由两部分组成的数学方案:第一部分是使用私钥(签名密钥)从消息(交易)创建签名的算法; 第二部分是允许任何人验证签名的算法,给定消息和公钥。 #### 6.5.1.1创建数字签名 在比特币的ECDSA算法的实现中,被签名的“消息”是交易,或更确切地说是交易中特定数据子集的哈希值(参见签名哈希类型(SIGHASH))。签名密钥是用户的私钥,结果是签名: *\(\(Sig = F{sig}(F{hash}(m), dA)\)\)* 这里的: - *dA* 是签名私钥 - *m* 是交易(或其部分) - *F~hash~* 是散列函数 - *F~sig~* 是签名算法 - *Sig* 是结果签名 ECDSA数学**运算**的更多细节可以在ECDSA Math章节中找到。 函数*F~sig~* 产生由两个值组成的签名*Sig*,通常称为*R*和*S*: ``` Sig = (R, S) ``` 现在已经计算了两个值R和S,它们就序列化为字节流,使用一种称为“分辨编码规则”(*Distinguished Encoding Rules*)或 DER的国际标准编码方案。 #### 6.5.1.2签名序列化(DER) 我们再来看看Alice创建的交易。 在交易输入中有一个解锁脚本,其中包含Alice的钱包中的以下DER编码签名: ``` 3045022100884d142d86652a3f47ba4746ec719bbfbd040a570b1deccbb6498c75c4ae24cb02204b9f039ff08df09cbe9f6addac960298cad530a863ea8f53982c09db8f6e381301 ``` 该签名是Alice的钱包生成的R和S值的序列化字节流,证明她拥有授权花费该输出的私钥。 序列化格式包含以下9个元素: - *0x30*表示DER序列的开始 - *0x45* - 序列的长度(69字节) - *0x02* - 一个整数值 - *0x21* - 整数的长度(33字节) - *R-00884d142d86652a3f47ba4746ec719bbfbd040a570b1deccbb6498c75c4ae24cb* - *0x02* - 接下来是一个整数 - *0x20* - 整数的长度(32字节) - *S-4b9f039ff08df09cbe9f6addac960298cad530a863ea8f53982c09db8f6e3813* - 后缀(*0x01*)指示使用的哈希的类型(SIGHASH_ALL) 看看您是否可以使用此列表解码 Alice 的序列化(DER编码)签名。 重要的数字是R和S; 数据的其余部分是DER编码方案的一部分。 ### 6.5.2验证签名 要验证签名,必须有签名(*R*和*S*)、序列化交易和公钥(对应于用于创建签名的私钥)。本质上,签名的验证意味着“只有生成此公钥的私钥的所有者,才能在此交易上产生此签名。” 签名验证算法采用消息(交易或其部分的哈希值)、签名者的公钥和签名(R和S值),如果签名对该消息和公钥有效,则返回 TRUE 值。 ### 6.5.3签名哈希类型(SIGHASH) 数字签名被应用于消息,在比特币中,就是交易本身。签名意味着签字人对特定交易数据的承诺( *commitment*)。在最简单的形式中,签名适用于整个交易,从而承诺(commit)所有输入,输出和其他交易字段。但是,在一个交易中一个签名可以只承诺(*commit*)一个数据子集,这对于我们将在本节中看到的许多场景是有用的。 比特币签名具有指示交易数据的哪一部分包含在使用 SIGHASH 标志的私钥签名的哈希中的方式。 SIGHASH 标志是附加到签名的单个字节。每个签名都有一个SIGHASH标志,该标志在不同输入之间也可以不同。具有三个签名输入的交易可以具有不同SIGHASH标志的三个签名,每个签名签署(承诺)交易的不同部分。 记住,每个输入可能在其解锁脚本中包含一个签名。因此,包含多个输入的交易可以拥有具有不同SIGHASH标志的签名,这些标志在每个输入中承诺交易的不同部分。还要注意,比特币交易可能包含来自不同“所有者”的输入,他们在部分构建(和无效)的交易中可能仅签署一个输入,继而与他人协作收集所有必要的签名后再使交易生效。许多SIGHSASH标志类型,只有在你考虑到由许多参与者在比特币网络之外共同协作去更新仅部分签署了的交易,才具有意义。 有三个SIGHASH标志:ALL,NONE和SINGLE,如下表所示。 ![表6-3 SIGHASH类型和意义](http://upload-images.jianshu.io/upload_images/1785959-88a98966ef3f4d05.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 另外还有一个修饰符标志SIGHASH_ANYONECANPAY,它可以与前面的每个标志组合。 当设置ANYONECANPAY时,只有一个输入被签名,其余的(及其序列号)打开以进行修改。 ANYONECANPAY的值为0x80,并通过按位OR运算,得到如下所示的组合标志: ![表6-4 带修饰符的SIGHASH类型及其含义](http://upload-images.jianshu.io/upload_images/1785959-5a018a40b5721c2e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) SIGHASH标志在签名和验证期间应用的方式是建立交易的副本和删节其中的某些字段(设置长度为零并清空),继而生成的交易被序列化,SIGHASH标志被添加到序列化交易的结尾,并将结果哈希化 ,得到的哈希值本身即是被签名的“消息”。 基于SIGHASH标志的使用,交易的不同部分被删节。 所得到的哈希值取决于交易中数据的不同子集。 在哈希化前,SIGHASH作为最后一步被包含在内,签名也会对SIGHASH类型进行签署,因此不能更改(例如,被矿工)。 > **小贴士:**所有SIGHASH类型对应交易nLocktime字段(请参阅[transaction_locktime_nlocktime]部分)。 此外,SIGHASH类型本身在签名之前附加到交易,因此一旦签名就不能修改它。 
在Alice的交易(参见序列化签名(DER)的列表)的例子中,我们看到DER编码签名的最后一部分是01,这是SIGHASH_ALL标志。这会锁定交易数据,因此Alice的签名承诺的是所有的输入和输出状态。 这是最常见的签名形式。 我们来看看其他一些SIGHASH类型,以及如何在实践中使用它们: *ALL | ANYONECANPAY* 这种构造可以用来做“众筹”交易,试图筹集资金的人可以用单笔输出来构建一个交易,单笔输出将“目标”金额付给众筹发起人。这样的交易显然是无效的,因为它没有输入。但是现在其他人可以通过添加自己的输入作为捐赠来修改它们,他们用ALL | ANYONECANPAY签署自己的输入,除非收集到足够的输入以达到输出的价值,交易无效,每次捐赠是一项“抵押”,直到募集整个目标金额才能由募款人收取。 *NONE* 该结构可用于创建特定数量的“不记名支票”或“空白支票”。它对输入进行承诺,但允许输出锁定脚本被更改。任何人都可以将自己的比特币地址写入输出锁定脚本并兑换交易。然而,输出值本身被签名锁定。 *NONE | ANYONECANPAY* 这种构造可以用来建造一个“吸尘器”。在他们的钱包中拥有微小UTXO的用户无法花费这些费用,因为手续费用超过了这些微小UTXO的价值。借助这种类型的签名,微小UTXO可以为任何人捐赠,以便随时随地收集和消费。 有一些修改或扩展SIGHASH系统的建议。作为Elements项目的一部分,一个这样的提案是Blockstream的Glenn Willen提出的Bitmask Sighash模式。这旨在为SIGHASH类型创建一个灵活的替代品,允许“任意的,输入和输出的矿工可改写位掩码”来表示“更复杂的合同预付款方案,例如已分配的资产交换中有变更的已签名的报价”。 > **注释:** 您不会在用户的钱包应用程序中看到SIGHASH标志作为一个功能呈现。 少数例外,钱包会构建P2PKH脚本,并使用SIGHASH_ALL标志进行签名。 要使用不同的SIGHASH标志,您必须编写软件来构造和签署交易。 更重要的是,SIGHASH标志可以被专用的比特币应用程序使用,从而实现新颖的用途。 ### 6.5.4 ECDSA数学 如前所述,签名由产生由两个值*R*和*S*组成的签名的数学函数*F*~sig~ 创建。在本节中,我们将查看函数*F~s*ig~ 的更多细节。 签名算法首先生成一个 *ephemeral*(临时)私公钥对。 在涉及签名私钥和交易哈希的变换之后,该临时密钥对用于计算*R*和*S*值。 临时密钥对基于随机数k,用作临时私钥。 从*k*,我们生成相应的临时公钥*P*(以P = k * G计算,与派生比特币公钥相同);参见[pubkey]部分)。数字签名的*R*值则是临时公钥*P*的*x*坐标。 从那里,算法计算签名的S值,使得: ![img](http://upload-images.jianshu.io/upload_images/8490153-e6cb048ad310aeb5.png?imageMogr2/auto-orient/strip) 其中: - *k*是临时私钥 - *R*是临时公钥的x坐标 - *dA*是签名私钥 - *m*是交易数据 - *p*是椭圆曲线的主要顺序 验证是签名生成函数的倒数,使用R,S值和公钥来计算一个值P,该值是椭圆曲线上的一个点(签名创建中使用的临时公钥): ![img](http://upload-images.jianshu.io/upload_images/8490153-7f2dd00d3eb43ec0.png?imageMogr2/auto-orient/strip) 其中: - *R*和*S*是签名值 - *Qa*是Alice的公钥 - *m*是签署的交易数据 - *G*是椭圆曲线发生器点 如果计算点*P*的*x*坐标等于*R*,则验证者可以得出结论,签名是有效的。 请注意,在验证签名时,私钥既不知道也不显示。 > **小贴士:**ECDSA的数学很复杂,难以理解。 网上有一些很棒的指南可能有帮助。 搜索“ECDSA解释”或尝试这个:http://bit.ly/2r0HhGB。 ### 6.5.5随机性在签名中的重要性 如我们在ECDSA Math中所看到的,签名生成算法使用随机密钥*k*作为临时私有-公钥对的基础。 *k* 的值不重要,只要它是随机的。如果使用相同的值 *k* 在不同的消息(交易)上产生两个签名,那么签名私钥可以由任何人计算。在签名算法中重用相同的 *k* 值会导致私钥的暴露! > **警告** 如果在两个不同的交易中,在签名算法中使用相同的值 *k*,则私钥可以被计算并暴露给世界! 
这不仅仅是一个理论上的可能性。我们已经看到这个问题导致私人密钥在比特币中的几种不同实现的交易签名算法中的暴露。人们由于无意中重复使用 *k* 值而将资金窃取。重用 *k* 值的最常见原因是未正确初始化的随机数生成器。 为了避免这个漏洞,业界最佳实践不是用熵播种的随机数生成器生成 *k* 值,而是使用交易数据本身播种的确定性随机进程。这确保每个交易产生不同的 *k* 值。在互联网工程任务组(Internet Engineering Task Force)发布的RFC 6979中定义了 *k* 值的确定性初始化的行业标准算法。 如果您正在实现一种用于在比特币中签署交易的算法,则必须使用RFC 6979或类似的确定性随机算法来确保为每个交易生成不同的 *k* 值。 ## 6.6比特币地址,余额和其他摘要 在本章开始,我们发现交易的 “幕后”看起来与它在钱包、区块链浏览器和其它面向用户的应用程序中呈现的非常不同。 来自前几章的许多简单而熟悉的概念,如比特币地址和余额,似乎在交易结构中不存在。 我们看到交易本身并不包含比特币地址,而是通过锁定和解锁比特币离散值的脚本进行操作。 这个系统中的任何地方都不存在余额,而每个钱包应用程序都明明白白地显示了用户钱包的余额。 现在我们已经探讨了一个比特币交易中实际包含的内容,我们可以检查更高层次的抽象概念是如何从交易的看似原始的组成部分中派生出来的。 我们再来看看Alice的交易是如何在一个受欢迎的区块浏览器(前面章节Alice与Bob's Cafe的交易)中呈现的: ![图6-7Alice与Bob's Cafe的交易](http://upload-images.jianshu.io/upload_images/1785959-31608a723a75ae5e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 在交易的左侧,区块浏览器将Alice的比特币地址显示为“发送者”。其实这个信息本身并不在交易中。当区块链接浏览器检索到交易时,它还检索在输入中引用的先前交易,并从该旧交易中提取第一个输出。在该输出内是一个锁定脚本,将UTXO锁定到Alice的公钥哈希(P2PKH脚本)。块链浏览器提取公钥哈希,并使用Base58Check编码对其进行编码,以生成和显示表示该公钥的比特币地址。 同样,在右侧,区块浏览器显示了两个输出;第一个到Bob的比特币地址,第二个到Alice的比特币地址(作为找零)。再次,为了创建这些比特币地址,区块链浏览器从每个输出中提取锁定脚本,将其识别为P2PKH脚本,并从内部提取公钥哈希。最后,块链浏览器重新编码了使用Base58Check的公钥哈希生成和显示比特币地址。 如果您要点击Bob的比特币地址,则块链接浏览器将显示Bob的比特币地址的余额: ![图6-8Bob的比特币地址的余额](http://upload-images.jianshu.io/upload_images/1785959-7ba0d107d3758cee.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 区块链浏览器显示了Bob的比特币地址的余额。但是比特币系统中却没有“余额”的概念。这么说吧,这里显示的余额其实是由区块链游览器按如下方式构建出来的: 为了构建“总接收”数量,区块链浏览器首先解码比特币地址的Base58Check编码,以检索编码在地址中的Bob的公钥的160位哈希值。然后,区块链浏览器搜索交易数据库,使用包含Bob公钥哈希的P2PKH锁定脚本寻找输出。通过总结所有输出的值,浏览器可以产生接收的总值。 完成构建当前余额(显示为“最终余额”)需要更多的工作。区块链接浏览器将当前未被使用的输入保存为一个分离的数据库——UTXO集。为了维护这个数据库,区块链浏览器必须监视比特币网络,添加新创建的UTXO,并在已被使用的UTXO出现在未经确认的交易中时,实时地删除它们。这是一个复杂的过程,不但要实时地跟踪交易在网络上的传播,同时还要保持与比特币网络的共识,确保在正确的链上。有时区块链浏览器未能保持同步,导致其对UTXO集的跟踪扫描不完整或不正确。 通过计算UTXO集,区块链浏览器总结了引用Bob的公钥哈希的所有未使用输出的值,并产生向用户显示的“最终余额”数目。 为了生成这张图片,得到这两个“余额”,区块链浏览器必须索引并搜索数十、数百甚至数十万的交易。 总之,通过钱包应用程序、区块链浏览器和其他比特币用户界面呈现给用户的信息通常源于更高层次的,通过搜索许多不同的交易,检查其内容以及操纵其中包含的数据而的抽象而构成。为了呈现出比特币交易类似于银行支票从发送人到接收人的这种简单视图,这些应用程序必须抽象许多底层细节。他们主要关注常见的交易类型:每个输入上具有SIGHASH_ALL签名的P2PKH。因此,虽然比特币应用程序以易于阅读的方式呈现所有了80%以上的交易,但有时候会被偏离了常规的交易 难住。包含更复杂的锁定脚本,或不同SIGHASH标志,或多个输入和输出的交易显示了这些抽象的简单性和弱点。 每天都有数百个不包含P2PKH输出的交易在块上被确认。 blockchain浏览器经常向他们发出红色警告信息,表示无法解码地址。以下链接包含未完全解码的最新的“奇怪交易”:https://blockchain.info/strange-transactions。 正如我们将在下一章中看到的,这些并不一定是奇怪的交易。它们是包含比常见的P2PKH更复杂的锁定脚本的交易。我们将学习如何解码和了解更复杂的脚本及其支持的应用程序。
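The script examples in section 6.4.3.2 can be replayed with a toy stack machine. The sketch below is illustrative only: it is not Bitcoin Core's script interpreter, and it implements just the arithmetic opcodes used in the examples above.

```python
def eval_script(script: str) -> bool:
    """Evaluate a whitespace-separated script such as "2 3 OP_ADD 5 OP_EQUAL"."""
    stack = []
    for token in script.split():
        if token == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "OP_SUB":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif token == "OP_EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a == b else 0)
        else:
            # Anything else in these examples is a data push (a number).
            stack.append(int(token))
    # The script is valid if the item left on top of the stack is TRUE (non-zero).
    return bool(stack) and stack[-1] != 0

# Unlocking script "2" prepended to the locking script "3 OP_ADD 5 OP_EQUAL":
print(eval_script("2 3 OP_ADD 5 OP_EQUAL"))                    # True
# The longer arithmetic example from the text:
print(eval_script("2 7 OP_ADD 3 OP_SUB 1 OP_ADD 7 OP_EQUAL"))  # True
```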
41.720065
313
0.843269
yue_Hant
0.536329
d09355703834ece5444901579bca7cd0196b35b2
7,002
md
Markdown
content/reviews/20010622.md
mjmusante/if-review
6a9501e2692d3e6c8ae61bc4f1cf722bd34552de
[ "MIT" ]
null
null
null
content/reviews/20010622.md
mjmusante/if-review
6a9501e2692d3e6c8ae61bc4f1cf722bd34552de
[ "MIT" ]
null
null
null
content/reviews/20010622.md
mjmusante/if-review
6a9501e2692d3e6c8ae61bc4f1cf722bd34552de
[ "MIT" ]
null
null
null
+++ title = "A Step" date = 2001-06-22 [extra] reviewer = "Emily Short" sort_reviewer = "Short, Emily" game = "So Far" sort_game = "So Far" author = "Andrew Plotkin" sort_author = "Plotkin, Andrew" +++ _So Far_ is one of those games that everyone is supposed to play. It's the source of numerous jokes and references in other games; it is the longest (and some would say the most serious and most angst-ridden) of Zarf's games, which earns it attention in and of itself. And, written in 1996, it is old enough to stand in the position of a classic in this brief community. (If the reviewer's job is to tell people whether to play the game, I'm done now: you should, absolutely. But you knew that.) Because of its considerable reputation, I tried repeatedly to play it myself, and failed the first four or five times. The opening of the game is involving &mdash; almost oppressively so. I felt almost claustrophobic playing it, so vividly are the crowd and the summer heat evoked, so that it was a relief to escape from the first of the game's environments. At the same time, I had the sense that I had left behind some important bit of plot, something that I could or should be interacting with, and I wasn't sure whether I would ever be allowed to go back. The subsequent puzzles stumped me, in part because I was trying to make logical sense of the history of the worlds in which I was traveling, in part because I am not particularly good at visualizing complex machinery in the way required to manipulate it properly. One of the times I gave up, I complained to a friend that this was a game that required to be read slowly. I am no opponent of dense prose, and I don't mind spending time reading. But I have always found that when I am deeply involved in a piece of interactive fiction, a sense of urgency builds so that I am too impatient to read deeply and receptively. I want short pieces of prose, not long paragraphs. And I don't want flowery writing, either, or anything that stands between me and the most rapid possible comprehension. How am I supposed to know what to do when I am busy trying to ferret out the significance of lines upon lines of metaphor? Correspondingly I try for brevity in my own IF writing; I try to layer description through deep implementation, so that one description leads to another rather than the first surface description hitting the player with pages of prose. So I struggled with _So Far_, because my normal mode of playing IF is one in which I bash at things in order to understand them. I explore and summarize first, then tinker with the details to see what they will yield up. And that's the wrong way to play this game, if one can talk about right and wrong approaches. This is the kind of game where you not only have to read each piece carefully and thoughtfully the first time, you also have to stand permanently apart from what's going on. You're doing things that make no real logical sense &mdash; by the hundred, it seems. Graham Nelson's Player's Bill of Rights is triumphantly defied by some of the acts of intuitive leaping, save-and-restore decipherment, and hindsight required to get through the game properly. Even so I only managed with liberal use of Lucian Smith's Invisiclues and suggestions from friends on ifMUD. As Duncan Stevens says in his recent SPAG review of the same game, _So Far_ works thematically, but the plot doesn't entirely make sense. 
So it's involving, as I said, because of the vivid landscapes and the portentous sense of meaning that pervades everything, but it is perhaps not immersive in the classic sense. The save-and-restore method of solving problems is one that quickly teaches you to disregard your PC's life; danger is diminished thereby. (Adam Cadre's _Varicella_ carries this effect perhaps to its ultimate extreme, but it is present here as well.) I didn't feel that I was the player character, exactly. After all, he would have had access to the background knowledge to explain (and wouldn't that be convenient?) exactly what was going on between himself and his beloved(?) Aessa, among other things. I the player could only guess. The experience in that respect was a bit like reading a work of static fiction in which much of the background is deliberately withheld. I'm reminded of trying to read _The English Patient._ (I didn't like it as much as the movie and stopped after page 2.) Thus far we have fiendish puzzles, a density of prose and concept that challenges the average approach to IF, and a certain amount of narrative distance between player and PC. Something relatively inaccessible, in short. But what the game gains by being obscure, by demanding intellectual attention at a high order, is the ability to address itself to abstracts. Interactive fiction is a medium that principally deals with the concrete: objects that you can pick up and move around in standardized ways, without resorting to too complex or elusive a vocabulary. Various people have proposed, for instance, NPC interaction systems that would make use of complicated verbs such as PROVOKE, APOLOGIZE, etc., but I have yet to see such a system that worked particularly well, despite having tried to devise one myself. Likewise important choices of a personal nature &mdash; not choices about how to fend off the alligator who is about to bite off your leg if you don't feed him the ham, but choices about ethics or emotion &mdash; have always to be cast in terms of physical actions. Jigsaw, Spider and Web, and Tapestry all come to mind as having moments where the player's moral choice is encapsulated in a physical action, the significance of which has been carefully developed and spelled out in advance. What _So Far_ achieves that distinguishes itself even from these (which are themselves moments of high artistry in the genre) is not teaching the player how to regard a single action as representative of moral choice, but presenting the whole world in such a way that it seems redolent of such choices, tying the physical environment intimately to the emotional one in ways that are sometimes visible only in retrospect. To say that it relies on symbolic vocabulary is to understate the issue. Jigsaw, for instance, relies on symbolic vocabulary as well, especially in the endgame. But Nelson's symbols are isolated and recognizable, and stand out from the landscape in their symbolic significance like a girl in a red dress. 'Note this!' they say. And they are organized with a tidy symmetry, perfect and mathematical, so that the meaning of anything unexplained may be worked out by its relations to other symbols and the oppositions between them. Plotkin's symbolism is merged wholly with the landscape; it *is* the landscape. The pieces are polyvalent and connotative, any given thing suggesting an array of connections and meanings, not denoting a single concept in its purity. I am not sure whether any subsequent work has approached it in this regard. 
I am not sure that anyone has tried.
63.081081
84
0.795344
eng_Latn
0.999961
d094372e781a747c372db95759bf961338e07b35
676
md
Markdown
_posts/react/2021-08-01-react-classnames.md
limunosekai/limunosekai.github.io
7c1d51bb79155ff37cb2e30fe557551a50ae79b1
[ "MIT" ]
null
null
null
_posts/react/2021-08-01-react-classnames.md
limunosekai/limunosekai.github.io
7c1d51bb79155ff37cb2e30fe557551a50ae79b1
[ "MIT" ]
null
null
null
_posts/react/2021-08-01-react-classnames.md
limunosekai/limunosekai.github.io
7c1d51bb79155ff37cb2e30fe557551a50ae79b1
[ "MIT" ]
2
2021-05-04T08:11:41.000Z
2021-05-04T08:34:33.000Z
---
layout: post
title: Using classnames in React
category: React
permalink: /react/:year/:month/:day/:title/
tags: [React]
comments: true
---

---

## Using classnames in React

---

The classnames package lets you manage `className` values dynamically.

You set up key-value pairs, and each class name is applied conditionally depending on whether its value evaluates to true or false.

```react
import classNames from 'classnames';

function Button (props) {
  const { isLoading } = props;
  return (
    <button className={classNames('btn', {'btn-loading': isLoading})}></button>
  );
}
```

In this way the class name is set conditionally and dynamically: `btn-loading` is only applied when `isLoading` is defined and true.

<br>

---

## References

---

[classnames - npm](https://www.npmjs.com/package/classnames)
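The same call scales to several conditional classes and to classes passed in from a parent; in this sketch, `isPrimary` and the pass-through `className` prop are assumed props used only for illustration.

```react
import classNames from 'classnames';

function Button({ isLoading, isPrimary, className }) {
  // Static classes, several conditional classes, and a class supplied by the
  // parent can all be combined in one call; falsy entries are simply dropped.
  const cls = classNames('btn', className, {
    'btn-primary': isPrimary,
    'btn-loading': isLoading,
  });
  return <button className={cls}>Save</button>;
}
```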
16.095238
83
0.678994
kor_Hang
0.994201
d095c7e57e8637f713ba96cb73e39b2de9d50d77
2,074
md
Markdown
markdown/react.md
johnwan123/javascript1
561bfe5e6aef6485b1c9afb792363313e7a8b13f
[ "MIT" ]
null
null
null
markdown/react.md
johnwan123/javascript1
561bfe5e6aef6485b1c9afb792363313e7a8b13f
[ "MIT" ]
null
null
null
markdown/react.md
johnwan123/javascript1
561bfe5e6aef6485b1c9afb792363313e7a8b13f
[ "MIT" ]
null
null
null
## [React](https://facebook.github.io/react/)

* The core concept of React is encapsulating components: a component has its own state, lifecycle, and props, and when its state changes the page is re-rendered.
* You no longer make dynamic updates by querying the DOM directly.

---

## Components

```javascript
const React = require('react');
const ReactDOM = require('react-dom');

class Hello extends React.Component {
  render() {
    return <div>Hello {this.props.name}</div>;
  }
}

// Mount the Hello component inside the root element
ReactDOM.render(
    <Hello name="ggm" />,
    document.getElementById('root')
)
```

---

## JSX

React introduces a new syntax called JSX, which embeds HTML inside JavaScript code and helps with encapsulation. JSX is not mandatory; you can also write plain JavaScript, but if you use JSX it must be compiled first.

```javascript
class Hello extends React.Component {
  render() {
    return <div>Hello {this.props.name}</div>;
  }
}
```

---

## Virtual DOM

When a component's state changes, React reacts immediately and updates the page. Doing this naively could cause performance problems, which is where the Virtual DOM comes in: React automatically finds the nodes that need to change and re-renders only those parts, which improves performance considerably.

---

## Hands-on practice

* Download the project from this [link](https://github.com/ntu-csie-train/place-spot/tree/starter)
* Click the `download` button on the right

---

## [this.state](https://facebook.github.io/react/docs/state-and-lifecycle.html)

```javascript
// Wrong
this.state.comment = 'Hello';
```

```javascript
// Correct
this.setState({comment: 'Hello'});
```

---

## [this.props](https://facebook.github.io/react/docs/typechecking-with-proptypes.html)

```javascript
class Hello extends React.Component {
  render() {
    return <div>Hello {this.props.name}</div>;
  }
}

// Mount the Hello component inside the root element
ReactDOM.render(
    <Hello name="ggm" />,
    document.getElementById('root')
)
```

---

## [this.refs](https://facebook.github.io/react/docs/refs-and-the-dom.html)

Use a `ref` callback to store a reference to the DOM node:

```javascript
render() {
  return (
    <div>
      <input
        type="text"
        ref={(input) => { this.textInput = input; }} />
      <input
        type="button"
        value="Focus the text input"
        onClick={this.focus}
      />
    </div>
  );
}
```
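Putting `this.state` and `setState` together, a minimal stateful component could look like the sketch below, assuming the same `React`/`ReactDOM` setup as the earlier slides.

```javascript
class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  render() {
    return (
      <div>
        <div>Count: {this.state.count}</div>
        {/* Always go through setState so React knows it must re-render */}
        <button onClick={() => this.setState({ count: this.state.count + 1 })}>
          +1
        </button>
      </div>
    );
  }
}

ReactDOM.render(<Counter />, document.getElementById('root'));
```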
19.203704
143
0.61379
yue_Hant
0.554842
d0968ee70b64f667688495510862f7662a2a5178
10,603
md
Markdown
docs/framework/wcf/extending/change-cryptographic-provider-x509-certificate-private-key.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/extending/change-cryptographic-provider-x509-certificate-private-key.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/extending/change-cryptographic-provider-x509-certificate-private-key.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'How to: Change the cryptographic provider for the private key of an X.509 certificate'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- cryptographic provider [WCF], changing
- cryptographic provider [WCF]
ms.assetid: b4254406-272e-4774-bd61-27e39bbb6c12
ms.openlocfilehash: 33d42f26407787b26e1447f8b8f619dd6fc15229
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 11/26/2020
ms.locfileid: "96262111"
---
# <a name="how-to-change-the-cryptographic-provider-for-an-x509-certificates-private-key"></a>How to: Change the cryptographic provider for an X.509 certificate's private key

This topic shows how to change the cryptographic provider that provides the private key of an X.509 certificate and how to integrate that provider into the Windows Communication Foundation (WCF) security framework. For more information about using certificates, see [Working with certificates](../feature-details/working-with-certificates.md).

The WCF security framework provides a way to introduce new security token types, as described in [How to: Create a custom token](how-to-create-a-custom-token.md). It is also possible to use a custom token to replace existing system-provided token types.

In this topic, the system-provided X.509 security token is replaced by a custom X.509 token that provides a different implementation for the certificate's private key. This is useful in scenarios where the actual private key is provided by a cryptographic provider other than the default Windows cryptographic provider. One example of an alternative cryptographic provider is a hardware security module that performs all cryptographic operations involving private keys and does not store the private keys in memory, thereby increasing the security of the system.

The following example is for illustration purposes only. It does not replace the default cryptographic provider, but it shows where a provider of this kind could be integrated.

## <a name="procedures"></a>Procedures

Every security token that has one or more associated security keys must implement the <xref:System.IdentityModel.Tokens.SecurityToken.SecurityKeys%2A> property. The property returns a collection of the keys from the security token instance. If the token is an X.509 security token, the collection contains a single instance of the <xref:System.IdentityModel.Tokens.X509AsymmetricSecurityKey> class that represents both the public and the private key associated with the certificate. To replace the default cryptographic provider used to provide the certificate's keys, create a new implementation of this class.

#### <a name="to-create-a-custom-x509-asymmetric-key"></a>To create a custom X.509 asymmetric key

1. Define a new class derived from the <xref:System.IdentityModel.Tokens.X509AsymmetricSecurityKey> class.

2. Override the read-only <xref:System.IdentityModel.Tokens.SecurityKey.KeySize%2A> property. This property returns the actual key size of the certificate's public/private key pair.

3. Override the <xref:System.IdentityModel.Tokens.SecurityKey.DecryptKey%2A> method. This method is called by the WCF security framework to decrypt a symmetric key with the certificate's private key. (The key was previously encrypted with the certificate's public key.)

4. Override the <xref:System.IdentityModel.Tokens.AsymmetricSecurityKey.GetAsymmetricAlgorithm%2A> method. This method is called by the WCF security framework to obtain an instance of the <xref:System.Security.Cryptography.AsymmetricAlgorithm> class that represents the cryptographic provider for the certificate's private or public key, depending on the parameters passed to the method.

5. (Optional) Override the <xref:System.IdentityModel.Tokens.AsymmetricSecurityKey.GetHashAlgorithmForSignature%2A> method. Override this method if a different implementation of the <xref:System.Security.Cryptography.HashAlgorithm> class is required.

6. Override the <xref:System.IdentityModel.Tokens.AsymmetricSecurityKey.GetSignatureFormatter%2A> method. This method returns an instance of the <xref:System.Security.Cryptography.AsymmetricSignatureFormatter> class associated with the certificate's private key.

7. Override the <xref:System.IdentityModel.Tokens.SecurityKey.IsSupportedAlgorithm%2A> method. This method is used to indicate whether a particular cryptographic algorithm is supported by the security key implementation.

     [!code-csharp[c_CustomX509Token#1](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_customx509token/cs/source.cs#1)]
     [!code-vb[c_CustomX509Token#1](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_customx509token/vb/source.vb#1)]

The following procedure shows how to integrate the custom X.509 asymmetric security key implementation created in the preceding procedure with the WCF security framework in order to replace the system-provided X.509 security token.

#### <a name="to-replace-the-system-provided-x509-security-token-with-a-custom-x509-asymmetric-security-key-token"></a>To replace the system-provided X.509 security token with a custom X.509 asymmetric security key token

1. Create a custom X.509 security token that returns the custom X.509 asymmetric security key instead of the system-provided security key. For more information about custom security tokens, see [How to: Create a custom token](how-to-create-a-custom-token.md).

     [!code-csharp[c_CustomX509Token#2](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_customx509token/cs/source.cs#2)]
     [!code-vb[c_CustomX509Token#2](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_customx509token/vb/source.vb#2)]

2. Create a custom security token provider that returns the custom X.509 security token, as shown in the example. For more information about custom security token providers, see [How to: Create a custom security token provider](how-to-create-a-custom-security-token-provider.md).

     [!code-csharp[c_CustomX509Token#3](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_customx509token/cs/source.cs#3)]
     [!code-vb[c_CustomX509Token#3](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_customx509token/vb/source.vb#3)]

3. If the custom security key needs to be used on the initiating side, create custom client security token manager and custom client credentials classes, as shown in the following example. For more information about custom client credentials and client security token managers, see [Walkthrough: Creating custom client and service credentials](walkthrough-creating-custom-client-and-service-credentials.md).

     [!code-csharp[c_CustomX509Token#4](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_customx509token/cs/source.cs#4)]
     [!code-vb[c_CustomX509Token#4](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_customx509token/vb/source.vb#4)]
     [!code-csharp[c_CustomX509Token#6](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_customx509token/cs/source.cs#6)]
     [!code-vb[c_CustomX509Token#6](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_customx509token/vb/source.vb#6)]

4. If the custom security key needs to be used on the recipient side, create a custom service security token manager and custom service credentials, as shown in the following example. For more information about custom service credentials and service security token managers, see [Walkthrough: Creating custom client and service credentials](walkthrough-creating-custom-client-and-service-credentials.md).

     [!code-csharp[c_CustomX509Token#5](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_customx509token/cs/source.cs#5)]
     [!code-vb[c_CustomX509Token#5](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_customx509token/vb/source.vb#5)]
     [!code-csharp[c_CustomX509Token#7](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_customx509token/cs/source.cs#7)]
     [!code-vb[c_CustomX509Token#7](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_customx509token/vb/source.vb#7)]

## <a name="see-also"></a>See also

- <xref:System.IdentityModel.Tokens.X509AsymmetricSecurityKey>
- <xref:System.IdentityModel.Tokens.AsymmetricSecurityKey>
- <xref:System.IdentityModel.Tokens.SecurityKey>
- <xref:System.Security.Cryptography.AsymmetricAlgorithm>
- <xref:System.Security.Cryptography.HashAlgorithm>
- <xref:System.Security.Cryptography.AsymmetricSignatureFormatter>
- [Walkthrough: Creating custom client and service credentials](walkthrough-creating-custom-client-and-service-credentials.md)
- [How to: Create a custom security token authenticator](how-to-create-a-custom-security-token-authenticator.md)
- [How to: Create a custom security token provider](how-to-create-a-custom-security-token-provider.md)
- [How to: Create a custom token](how-to-create-a-custom-token.md)
114.010753
768
0.818448
deu_Latn
0.96788
d096da607c410d4f7d45a083cb62bc280ba12208
6,877
md
Markdown
aspnetcore/fundamentals/tools/dotnet-aspnet-codegenerator.md
duduribeiro/AspNetCore.Docs.pt-br
d22ebf2881f69ac6e2ccb5f170e8489e5aee1215
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/fundamentals/tools/dotnet-aspnet-codegenerator.md
duduribeiro/AspNetCore.Docs.pt-br
d22ebf2881f69ac6e2ccb5f170e8489e5aee1215
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/fundamentals/tools/dotnet-aspnet-codegenerator.md
duduribeiro/AspNetCore.Docs.pt-br
d22ebf2881f69ac6e2ccb5f170e8489e5aee1215
[ "CC-BY-4.0", "MIT" ]
1
2020-05-20T15:20:44.000Z
2020-05-20T15:20:44.000Z
---
title: dotnet aspnet-codegenerator command
author: rick-anderson
description: The dotnet aspnet-codegenerator command scaffolds ASP.NET Core projects.
monikerRange: '>= aspnetcore-2.1'
ms.author: riande
ms.date: 07/04/2019
uid: fundamentals/tools/dotnet-aspnet-codegenerator
ms.openlocfilehash: 1043a578f66d5bb57f4a81e9fe21afa5e3c37cb8
ms.sourcegitcommit: 215954a638d24124f791024c66fd4fb9109fd380
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/18/2019
ms.locfileid: "71081503"
---
# <a name="dotnet-aspnet-codegenerator"></a>dotnet aspnet-codegenerator

By [Rick Anderson](https://twitter.com/RickAndMSFT)

`dotnet aspnet-codegenerator` – runs the ASP.NET Core scaffolding engine. `dotnet aspnet-codegenerator` is only required to scaffold from the command line; it's not needed to use scaffolding with Visual Studio.

This article applies to the [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/2.1) and later.

## <a name="installing-aspnet-codegenerator"></a>Installing aspnet-codegenerator

`dotnet-aspnet-codegenerator` is a [global tool](/dotnet/core/tools/global-tools) that must be installed. The following command installs the latest stable version of the `dotnet-aspnet-codegenerator` tool:

```dotnetcli
dotnet tool install -g dotnet-aspnet-codegenerator
```

The following command updates `dotnet-aspnet-codegenerator` to the latest stable version available from the installed .NET Core SDKs:

```dotnetcli
dotnet tool update -g dotnet-aspnet-codegenerator
```

## <a name="synopsis"></a>Synopsis

```
dotnet aspnet-codegenerator [arguments] [-p|--project] [-n|--nuget-package-dir] [-c|--configuration] [-tfm|--target-framework] [-b|--build-base-path] [--no-build]
dotnet aspnet-codegenerator [-h|--help]
```

## <a name="description"></a>Description

The `dotnet aspnet-codegenerator` global command runs the ASP.NET Core scaffolding engine and code generator.

## <a name="arguments"></a>Arguments

`generator`

The code generator to run. The following generators are available:

| Generator | Operation |
| ----------------- | ------------ |
| area | [Scaffolds an area](/aspnet/core/mvc/controllers/areas) |
| controller | [Scaffolds a controller](/aspnet/core/tutorials/first-mvc-app/adding-model) |
| identity | [Scaffolds Identity](/aspnet/core/security/authentication/scaffold-identity) |
| razorpage | [Scaffolds Razor Pages](/aspnet/core/tutorials/razor-pages/model) |
| view | [Scaffolds a view](/aspnet/core/mvc/views/overview) |

## <a name="options"></a>Options

`-n|--nuget-package-dir`

Specifies the NuGet package directory.

`-c|--configuration {Debug|Release}`

Defines the build configuration. The default value is `Debug`.

`-tfm|--target-framework`

The target [framework](/dotnet/standard/frameworks) to use. For example: `net46`.

`-b|--build-base-path`

The build base path.

`-h|--help`

Prints a short help for the command.

`--no-build`

Doesn't build the project before running. It also implicitly sets the `--no-restore` flag.

`-p|--project <PATH>`

Specifies the path of the project file to run (folder name or full path). If not specified, it defaults to the current directory.

## <a name="generator-options"></a>Generator options

The following sections detail the options available for the supported generators:

* Area
* Controller
* Identity
* Razorpage
* View

<a name="area"></a>

### <a name="area-options"></a>Area options

This tool is intended for ASP.NET Core web projects with controllers and views. It's not intended for Razor Pages apps.

Usage: `dotnet aspnet-codegenerator area AreaNameToGenerate`

The preceding command generates the following folders:

* *Areas*
  * *AreaNameToGenerate*
    * *Controllers*
    * *Data*
    * *Models*
    * *Views*

<a name="ctl"></a>

### <a name="controller-options"></a>Controller options

The following table lists the options for `aspnet-codegenerator` `controller` and `razorpage`:

[!INCLUDE [aspnet-codegenerator-args-md.md](~/includes/aspnet-codegenerator-args-md.md)]

The following table lists the options unique to `aspnet-codegenerator controller`:

| Option | Description |
| ----------------- | ------------ |
| --controllerName or -name | Name of the controller. |
| --useAsyncActions or -async | Generates async controller actions. |
| --noViews or -nv | Generates **no** views. |
| --restWithNoViews or -api | Generates a controller with a REST-style API. `noViews` is assumed and any view-related options are ignored. |
| --readWriteActions or -actions | Generates a controller with read/write actions without a model. |

Use the `-h` switch for help on the `aspnet-codegenerator controller` command:

```dotnetcli
dotnet aspnet-codegenerator controller -h
```

See [Scaffold the movie model](/aspnet/core/tutorials/razor-pages/model) for an example of `dotnet aspnet-codegenerator controller`.

### <a name="razorpage"></a>Razorpage

<a name="rp"></a>

Razor Pages can be scaffolded individually by specifying the name of the new page and the template to use. The supported templates are:

* `Empty`
* `Create`
* `Edit`
* `Delete`
* `Details`
* `List`

For example, the following command uses the Edit template to generate *MyEdit.cshtml* and *MyEdit.cshtml.cs*:

```dotnetcli
dotnet aspnet-codegenerator razorpage MyEdit Edit -m Movie -dc RazorPagesMovieContext -outDir Pages/Movies
```

Typically, the template and generated file name are not specified, and the following templates are created:

* `Create`
* `Edit`
* `Delete`
* `Details`
* `List`

The following table lists the options for `aspnet-codegenerator` `razorpage` and `controller`:

[!INCLUDE [aspnet-codegenerator-args-md.md](~/includes/aspnet-codegenerator-args-md.md)]

The following table lists the options unique to `aspnet-codegenerator razorpage`:

| Option | Description |
| ----------------- | ------------ |
| --namespaceName or -namespace | The name of the namespace to use for the generated PageModel |
| --partialView or -partial | Generate a partial view. Layout options -l and -udl are ignored if this is specified. |
| --noPageModel or -npm | Switch to not generate a PageModel class for the Empty template |

Use the `-h` switch for help on the `aspnet-codegenerator razorpage` command:

```dotnetcli
dotnet aspnet-codegenerator razorpage -h
```

See [Scaffold the movie model](/aspnet/core/tutorials/razor-pages/model) for an example of `dotnet aspnet-codegenerator razorpage`.

### <a name="identity"></a>Identity

See [Scaffold Identity](/aspnet/core/security/authentication/scaffold-identity)
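As an illustration of the `controller` options listed above, a typical invocation could look like the following. This is only a sketch: `MoviesController`, `Movie`, and `MvcMovieContext` are placeholder names that must match types in your project, and the shared options (`--relativeFolderPath`, `--useDefaultLayout`, `--referenceScriptLibraries`) come from the common argument list referenced above.

```dotnetcli
dotnet aspnet-codegenerator controller -name MoviesController -m Movie -dc MvcMovieContext --relativeFolderPath Controllers --useDefaultLayout --referenceScriptLibraries
```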
35.086735
229
0.735786
por_Latn
0.973046
d09782a1f81482836901ad46788b760f91d5ef43
661
md
Markdown
pages/common/gifsicle.md
rnas/tldr
ad2bd28bc2a403023bde4f63cc4390903b37b5cd
[ "MIT" ]
null
null
null
pages/common/gifsicle.md
rnas/tldr
ad2bd28bc2a403023bde4f63cc4390903b37b5cd
[ "MIT" ]
null
null
null
pages/common/gifsicle.md
rnas/tldr
ad2bd28bc2a403023bde4f63cc4390903b37b5cd
[ "MIT" ]
null
null
null
# gifsicle

> Create gifs.

- Make a GIF animation with gifsicle:

`gifsicle --delay={{10}} --loop *.gif > {{anim.gif}}`

- Extract frames from an animation:

`gifsicle {{anim.gif}} '#0' > {{firstframe.gif}}`

- You can also edit animations by replacing, deleting, or inserting frames:

`gifsicle -b {{anim.gif}} --replace '#0' {{new.gif}}`
22.033333
75
0.652042
eng_Latn
0.854901
d097a9d5e95bc55b06946a265b1ea18ad3d3f654
2,773
md
Markdown
Concept.md
zdebeer99/srcgen
234582a04062ce87a86a72a5b321c55bafe13f07
[ "MIT" ]
null
null
null
Concept.md
zdebeer99/srcgen
234582a04062ce87a86a72a5b321c55bafe13f07
[ "MIT" ]
null
null
null
Concept.md
zdebeer99/srcgen
234582a04062ce87a86a72a5b321c55bafe13f07
[ "MIT" ]
null
null
null
# srcgen

Note: look at the swagger.io documentation!

srcgen aims to set a standard for the data objects used in code generation. The advantage of such a standard is that templates can be created by any developer; as long as they conform to the standard, the template will work.

The standard specifies minimum functionality and can be extended by adding custom properties to a data object. The custom properties should be well documented in the template's ReadMe.md file.

## Syntax

* Fields marked with [] square brackets are optional.

## Tags - Property

tags is an array of strings and is an optional property on most structures. tags allows the developer to specify boolean-type switches on objects that can be used by filters to exclude or include an object in a template's configuration data.

## Structures

The following structure types are supported:

* interface
* model

Each structure represents a collection of items: a model structure contains a collection of models, and an interface contains a collection of functions.

## Application - Structure

Defines an application.

* name : string
* description : string

## Models - Structure

Models represent data structures that can be passed between client and server, passed between function calls as arguments, and stored in memory, a database, or a file. Typically, CRUD operations are done on models, but not always.

**Model structure**

* name : string
* fields : array< Field >
* [keyfield] : string
* [caption] : string
* [description] : string
* [tags] : array< string > optional

**Model.fields property**

Field type

* name : string
* type : string
* [defaultValue] : string
* [validation] : Validation Type
* [caption] : string
* [description] : string
* [tags] : array< string > optional

**Field.type property**

The 'type' field supports the following types and can be used by your template to convert the type into the type required by your template. Any other type is assumed to be a custom type defined in your project code.

Supported types

* string
* decimal (f16, f32, f64)
* int (i8, i16, i32, i64)
* byte
* date
* bool
* array type
* map type type
* any

**Field.validation property**

Validation is used to check whether the field value conforms to the requested data. The validation property is a simple string.

Validation Type

* required : boolean
* max : if the type is a string, max refers to the string's maximum length.
* min : if the type is a string, min refers to the string's minimum length.
* regex : only for string types.

## Interface

Interface data contains interface call information.

**Interface Type**

* name
* functions
* [tags]

**function type**

* name : string
* arguments : [FieldType]
* returnType : string representing a std type.

# Technology

* https://github.com/spf13/viper config engine
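The specification above does not fix a serialization format. The following YAML sketch is purely hypothetical and only illustrates how a Model structure and its fields could be written down; the names `Customer`, `id`, `email`, and `createdAt` are invented for illustration.

```yaml
# Hypothetical model definition following the Model structure above
name: Customer
keyfield: id
caption: Customer
tags: [crud]
fields:
  - name: id
    type: i64
  - name: email
    type: string
    validation:
      required: true
      regex: "^[^@]+@[^@]+$"
  - name: createdAt
    type: date
```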
24.113043
149
0.750451
eng_Latn
0.998174
d0980e50f47b29c193d27fe4533e267711a55e27
2,363
md
Markdown
vendor/samdark/sitemap/README.md
u4aew/tdtaliru
8e2a942d1e52cf2317ba09804b3b221de1203f74
[ "BSD-3-Clause" ]
null
null
null
vendor/samdark/sitemap/README.md
u4aew/tdtaliru
8e2a942d1e52cf2317ba09804b3b221de1203f74
[ "BSD-3-Clause" ]
null
null
null
vendor/samdark/sitemap/README.md
u4aew/tdtaliru
8e2a942d1e52cf2317ba09804b3b221de1203f74
[ "BSD-3-Clause" ]
null
null
null
Sitemap
=======

Sitemap and sitemap index builder.

<img src="https://travis-ci.org/samdark/sitemap.svg" />

Features
--------

- Create sitemap files.
- Create sitemap index files.
- Automatically creates new file if 50000 URLs limit is reached.
- Memory efficient buffer of configurable size.

Installation
------------

Installation via Composer is very simple:

```
composer require samdark/sitemap
```

After that, make sure your application autoloads Composer classes by including `vendor/autoload.php`.

How to use it
-------------

```php
use samdark\sitemap\Sitemap;
use samdark\sitemap\Index;

// create sitemap
$sitemap = new Sitemap(__DIR__ . '/sitemap.xml');

// add some URLs
$sitemap->addItem('http://example.com/mylink1');
$sitemap->addItem('http://example.com/mylink2', time());
$sitemap->addItem('http://example.com/mylink3', time(), Sitemap::HOURLY);
$sitemap->addItem('http://example.com/mylink4', time(), Sitemap::DAILY, 0.3);

// write it
$sitemap->write();

// get URLs of sitemaps written
$sitemapFileUrls = $sitemap->getSitemapUrls('http://example.com/');

// create sitemap for static files
$staticSitemap = new Sitemap(__DIR__ . '/sitemap_static.xml');

// add some URLs
$staticSitemap->addItem('http://example.com/about');
$staticSitemap->addItem('http://example.com/tos');
$staticSitemap->addItem('http://example.com/jobs');

// write it
$staticSitemap->write();

// get URLs of sitemaps written
$staticSitemapUrls = $staticSitemap->getSitemapUrls('http://example.com/');

// create sitemap index file
$index = new Index(__DIR__ . '/sitemap_index.xml');

// add URLs
foreach ($sitemapFileUrls as $sitemapUrl) {
    $index->addSitemap($sitemapUrl);
}

// add more URLs
foreach ($staticSitemapUrls as $sitemapUrl) {
    $index->addSitemap($sitemapUrl);
}

// write it
$index->write();
```

Options
-------

There are two methods to configure the `Sitemap` instance:

- `setMaxUrls($number)`. Sets the maximum number of URLs to write in a single file. Default is 50000, which is the limit according to the specification and most existing implementations.
- `setBufferSize($number)`. Sets the number of URLs to be kept in memory before writing them to file. Default is 1000. If you have more memory, consider increasing it. If 1000 URLs doesn't fit, decrease it.

Running tests
-------------

In order to run tests perform the following commands:

```
composer install
phpunit
```
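Configuration example
---------------------

A minimal sketch of the configuration methods described in the Options section (the limits shown are arbitrary):

```php
use samdark\sitemap\Sitemap;

$sitemap = new Sitemap(__DIR__ . '/sitemap.xml');

// split into a new file after 10000 URLs instead of the default 50000
$sitemap->setMaxUrls(10000);

// keep only 500 URLs in memory before flushing to disk
$sitemap->setBufferSize(500);
```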
23.39604
95
0.710537
eng_Latn
0.760083
d0983fbfdd942107fe30e3797ccc3e82e3fbabdf
11,211
md
Markdown
zoom-coordinator-guide.md
barbaricyawps/quorum-meetups
1393b5368b0d4736eb68a20991e3c90e4309c06f
[ "MIT" ]
null
null
null
zoom-coordinator-guide.md
barbaricyawps/quorum-meetups
1393b5368b0d4736eb68a20991e3c90e4309c06f
[ "MIT" ]
null
null
null
zoom-coordinator-guide.md
barbaricyawps/quorum-meetups
1393b5368b0d4736eb68a20991e3c90e4309c06f
[ "MIT" ]
null
null
null
# Zoom coordinator guide Typically Alyssa takes on the role of the Zoom coordinator, but if there's ever a month where someone else needs to take on that role, this will guide you through the process! Even if you don't ever sub for me, these can also serve as a best practices guide for hosting your own online events. ## Get access to a pro Zoom account The first step is to get access to a pro Zoom account (hosts 100 participants for unlimited meeting lengths). Alyssa pays for hers personally, but there is also an account available for any Write the Docs meetup coordinator to use. For assistance getting access to the Write the Docs Zoom account, send a message to Rose W or Mike J. on WTD Slack. If you use a professional Zoom account through your employer, ensure that you are not violating your company's policies and that you have permission to do so. Write the Docs will also allow employers to sponsor events if your employer would like recognition for providing a professional Zoom room. ## Set up the Zoom meeting After the meetup organizer has set a date and time for the event, create a Zoom event with the following settings: - **Topic** - Give it a meaningful name, but be aware that you're the only one who will really see it. - **Time Zone** - Make sure you're setting the event using the correct time zone. - **Meeting ID** - Set to generate automatically (turned on by default). - **Security** - Require a passcode (turned on by default). - **Waiting Room** - Make sure this is activated (turned on by default). - **Meeting options** - Check the boxes for: - Mute participants upon entry - Automatically record meeting (then select *In the cloud*) After you save the meeting, you see the meeting details page. On this page, click **Copy invitation** to get the Zoom details. It gives more information than is needed. Typically, you only need the URL. ## Create the Meetup event We have a separate parent Meetup for each quorum program: - [U.S. East Coast and Central](https://www.meetup.com/virtual-write-the-docs-east-coast-quorum/) - [U.S. West Coast and Mountain](https://www.meetup.com/virtual-write-the-docs-west-coast-quorum/) Before creating the Meetup event, you need the following: - The event date and time provided by the meetup organizer - The Zoom meeting URL - The presentation title and description - The presenter(s) name, title, and bio - An image for the event (Alyssa has some on stock if you need) On Meetup, create a new event. For the event description, use this template for the **East Coast Quorum**: ``` [Event description provided by presenter] OUR SPEAKER: [Presenter(s) name, title, and bio] AGENDA 7:00 to 7:10 - Social networking 7:10 to 7:15 - Announcements 7:15 to 7:45 - Presentation and Q&A 7:45 to 8:00 - Breakout rooms by meetup - Meet with your local meetup organizers and members to say hi to other people in your area! SPONSORING MEETUPS: The Quorum program brings together various local Write the Docs meetup chapters that are in a common time zone to provide quarterly super meetups over Zoom throughout the year.The following U.S. East Coast and Central meetups are sponsoring this Meetup event: Austin, TX - https://www.meetup.com/WriteTheDocs-ATX-Meetup/ Detroit, MI/Windsor, CAN (our host meetup!) - https://www.meetup.com/write-the-docs-detroit-windsor/ Florida - https://www.meetup.com/write-the-docs-florida/ New England - https://www.meetup.com/ne-write-the-docs/ Philadelphia, PA - https://www.writethedocs.org/meetups/philly/ Washington, D.C. 
- https://www.meetup.com/Write-the-Docs-DC/ NOTE: This meeting will be recorded. Also be advised that the Zoom room for this meeting is capped at 100 attendees. If more than 100 individuals RSVP for the event, only the first 100 attendees who enter the Zoom room will be able to attend. ``` For the event description, use this template for the **West Coast Quorum**: ``` [Event description provided by presenter] OUR SPEAKER: [Presenter(s) name, title, and bio] AGENDA 7:00 to 7:10 - Social networking 7:10 to 7:15 - Announcements 7:15 to 7:45 - Presentation and Q&A 7:45 to 8:00 - Breakout rooms by meetup - Meet with your local meetup organizers and members to say hi to other people in your area! SPONSORING MEETUPS The Quorum program brings together various local Write the Docs meetup chapters that are in a common time zone to provide quarterly super meetups over Zoom throughout the year. The following U.S. West Coast, Mountain, and Australia meetups are sponsoring this Meetup event: Bay Area, CA - https://www.meetup.com/Write-the-Docs-Bay-Area/ Los Angeles, CA - https://www.meetup.com/Write-the-Docs-LA/ Portland, OR - https://www.meetup.com/Write-The-Docs-PDX/ Seattle, WA - https://www.meetup.com/Write-The-Docs-Seattle/ Australia - https://www.meetup.com/Write-the-Docs-Australia/ NOTE: This meeting will be recorded. Also be advised that the Zoom room for this meeting is capped at 100 attendees. If more than 100 individuals RSVP for the event, only the first 100 attendees who enter the Zoom room will be able to attend. ``` ## Publicize the meetup event After creating the new Meetup event, publicize it to the following channels: - Direct Slack message or email to the presenter(s) and/or meetup organizer - Post on the `#meetup-organizers-quorum` Slack channel and tag either the East or West Coast organizers for awareness - Post on the `groups.io` group for Quorum - Send an email to the Write the Docs newsletter team: `[email protected]` ## Host dry run event for presenter(s) and emcee Coordinate with the meetup organizer and/or presenter to schedule a dry-run session if needed. In this meeting you can: - Review the meeting agenda - Tell the presenter and emcee that you will be available 5-10 minutes before the event and will let them in to do a tech check if needed - Do a tech check to make sure everyone is aware of how the technology works - Plan icebreakers for [Poll Everywhere](https://www.polleverywhere.com/) - Create new meetup slides using the [Quorum meetup slides template](https://docs.google.com/presentation/d/1k6eVd4DZNbQhQW0zWlsExHAosY-pkcSQQ6QxDJZ83bc/edit?usp=sharing) NOTE: If you need Poll Everywhere account access, contact Alyssa. ## Wrangling local meetup organizers to RSVP regrets for the event About a week before the event, remind everyone about the upcoming event and ask each local meetup organizer to RSVP their regrets. You'll need to know which organizer doesn't have a representative there so that you will know whether to host a breakout room for them. ## Meetup announcements on the day of the event I recommend sending the following announcement on the day of the event: ``` TITLE: Looking forward to tonight's meetup! Hello, to all of you who have RSVPed for tonight's Write the Docs [East/West] Coast Quorum meetup event! We're pleasantly surprised that so many of you have signed up for today's event and we think you'll enjoy tonight's presentation. 
However, be aware that we currently have more than 100 participants who have RSVPed for this event and our current Zoom account only allows 100 participants. While we don't think that more than 100 people will show up for tonight's event, we will have to cap attendance at 100 if more do arrive. If you are not fortunate enough to join us on the call or are unable to, be aware that we will record this presentation and post it on the Write the Docs YouTube channel soon after tonight's presentation. On that note, here's our agenda for tonight (listed in [Eastern/Pacific] times): 7:00 to 7:10 - Social networking time and icebreaker activities 7:10 to 7:15 - Announcements 7:15 to 7:45 - Presentation and Q&A 7:45 to 8:00 - Breakout rooms by meetup - We'll use Zoom breakout rooms to have people meet with their local meetup organizers to say hi to other people in their area. Local meetup organizers will use that time to talk about local job openings, future meetups, and connect in a smaller group format. Even if you're not an official member of the meetup, you're welcome to stay and join one of the breakout rooms to connect with other attendees. ``` ## Tips for running the actual Zoom call The following are some best practices for running the actual Zoom call: ### Before the Zoom event officially begins - The Zoom coordinator and the emcee should generally not be the same person. - In the Zoom settings, make sure the meeting is automatically recorded to the cloud and that all participants are muted. - The Zoom coordinator should be available 5-10 minutes early and allow the emcee and presenter into the meeting from the waiting room. Leave other participants in the waiting room until the official start time. - Assign co-hosting privileges to the presenter(s) and the emcee so that they can share their screen and as a back-up if your network connection goes down. See [Using co-host in a meeting](https://support.zoom.us/hc/en-us/articles/206330935-Enabling-and-adding-a-co-host) ### During the Zoom event - The Zoom coordinator typically shares the slides from their computer, including the Poll Everywhere icebreakers. - Make sure the Zoom chat window is open and that you're monitoring it. - Have the Poll Everywhere link available in a text file and paste it into the Zoom chat when ready. - While the speaker is presenting, monitor the participants to see if you need to mute anyone. - While the speaker is presenting, set up the breakout rooms for each local meetup. See [Managing Breakout Rooms](https://support.zoom.us/hc/en-us/articles/206476313). ### At the end of the Zoom event - Before the breakout rooms begin, make sure you **thank the speaker** and encourage everyone to mime their applause, use jazz hands, or use their applause emoji. - Explain how to join a breakout room. - Open the breakout rooms. - As breakout rooms start to clear out, you can pull the remaining participants back into the room to encourage them to end their conversations before you end the meeting. ## Handing off the recording Ryan handles the editing and posting of our videos for our events. When the recording is available: - If you recorded on the cloud, send Ryan the link and passcode to download. - If you recorded on your computer, send Ryan the video through a direct message on Slack. 
Ryan also needs the following information in order to create the video title cards: - Presentation title - Presenter name - Date of presentation - Which quorum event it was (East Coast or West Coast) - Name of the host (emcee) and meetup chapter - Presentation description (will become the video description) Ryan will share the link with you when it's available. ## Publishing and announcing the video After the video is available, send one last message to the meeting attendees: ``` TITLE: Video of "presentation title" is now available Thank you to all of you who were able to attend our recent session with the [presenter name]. If you missed the event or would like to revisit the talk, it's now available on the Write the Docs YouTube channel: [link] We have meetups once a quarter, so our next event will be in [month of next meetup]. We hope to see you there! ```
50.5
500
0.773526
eng_Latn
0.998263
d0987d1463d1ab34194550d195d0d341b0ca6e7f
803
md
Markdown
_posts/2015-07-28-Venus-Bridal-BM1700.md
gownthlala/gownthlala.github.io
f86dbdf6fa0e98646ca4cb470fa2928a56e04eec
[ "MIT" ]
null
null
null
_posts/2015-07-28-Venus-Bridal-BM1700.md
gownthlala/gownthlala.github.io
f86dbdf6fa0e98646ca4cb470fa2928a56e04eec
[ "MIT" ]
null
null
null
_posts/2015-07-28-Venus-Bridal-BM1700.md
gownthlala/gownthlala.github.io
f86dbdf6fa0e98646ca4cb470fa2928a56e04eec
[ "MIT" ]
null
null
null
--- layout: post date: 2015-07-28 title: "Venus Bridal BM1700" category: Venus Bridal tags: [Venus Bridal] --- ### Venus Bridal BM1700 Just **$229.99** ### <table><tr><td>BRANDS</td><td>Venus Bridal</td></tr></table> <a href="https://www.readybrides.com/en/venus-bridal/54596-venus-bridal-bm1700.html"><img src="//img.readybrides.com/129158/venus-bridal-bm1700.jpg" alt="Venus Bridal BM1700" style="width:100%;" /></a> <!-- break --><a href="https://www.readybrides.com/en/venus-bridal/54596-venus-bridal-bm1700.html"><img src="//img.readybrides.com/129157/venus-bridal-bm1700.jpg" alt="Venus Bridal BM1700" style="width:100%;" /></a> Buy it: [https://www.readybrides.com/en/venus-bridal/54596-venus-bridal-bm1700.html](https://www.readybrides.com/en/venus-bridal/54596-venus-bridal-bm1700.html)
50.1875
215
0.713574
yue_Hant
0.58931
d099a33928d303ca053a1cb6364a125e1dd30b76
210
markdown
Markdown
_posts/2018-10-22-project-09.markdown
rickylibunao/rickylibunao.github.io
32765baec8a50a6da9b18603cd7f649742e85106
[ "MIT" ]
null
null
null
_posts/2018-10-22-project-09.markdown
rickylibunao/rickylibunao.github.io
32765baec8a50a6da9b18603cd7f649742e85106
[ "MIT" ]
null
null
null
_posts/2018-10-22-project-09.markdown
rickylibunao/rickylibunao.github.io
32765baec8a50a6da9b18603cd7f649742e85106
[ "MIT" ]
null
null
null
--- title: Image 09 layout: default modal-id: 9 date: 2014-07-15 img: 09.jpg alt: boy-scout-smiling project-date: January 2017 client: Interkids Bilingual School category: Photography description: Scouting ---
16.153846
34
0.766667
eng_Latn
0.301222
d099c20110715b3ddf10d0778d5f4af32aac2c85
578
md
Markdown
README.md
muenzpraeger/salesforce-mobile-sdk-example-ios-sirikit
19baa1df5ad3edf75df31e39fac759fdb12f527f
[ "Apache-2.0" ]
null
null
null
README.md
muenzpraeger/salesforce-mobile-sdk-example-ios-sirikit
19baa1df5ad3edf75df31e39fac759fdb12f527f
[ "Apache-2.0" ]
null
null
null
README.md
muenzpraeger/salesforce-mobile-sdk-example-ios-sirikit
19baa1df5ad3edf75df31e39fac759fdb12f527f
[ "Apache-2.0" ]
3
2017-06-08T00:56:01.000Z
2018-02-20T18:11:51.000Z
# salesforce-mobile-sdk-example-ios-sirikit This repository holds a sample iOS application which uses the [Salesforce Mobile SDK](https://github.com/forcedotcom/SalesforceMobileSDK-iOS#documentation) in combination with [SiriKit](https://developer.apple.com/sirikit/). # Demo [![Mobile SDK and SiriKit](http://img.youtube.com/vi/EbAI_vS83ds/2.jpg)](http://www.youtube.com/watch?v=EbAI_vS83ds "Mobile SDK and SiriKit") ## License For licensing see the included [license file](https://github.com/muenzpraeger/salesforce-mobile-sdk-example-ios-sirikit/blob/master/LICENSE.md).
52.545455
224
0.787197
eng_Latn
0.400811
d099dccaf844576c672924e4ae8d9dcb9041732f
3,384
md
Markdown
README.md
Abedalkareem/AMTabView-Android
9066e10e9e15d201e294c1477a30945469942a4c
[ "MIT" ]
8
2019-11-12T06:31:01.000Z
2021-12-19T17:48:22.000Z
README.md
Abedalkareem/AMTabView-Android
9066e10e9e15d201e294c1477a30945469942a4c
[ "MIT" ]
null
null
null
README.md
Abedalkareem/AMTabView-Android
9066e10e9e15d201e294c1477a30945469942a4c
[ "MIT" ]
2
2020-08-28T00:11:46.000Z
2021-05-31T10:34:06.000Z
<p align="center">
  <img src="https://raw.githubusercontent.com/Abedalkareem/AMTabView-Android/master/tabviewlogo.png" >
</p>

## Screenshot

<img src="https://raw.githubusercontent.com/Abedalkareem/AMTabView-Android/master/screenshot.gif" width="240" >

## iOS
It's also available on iOS; you can find it [here](https://github.com/Abedalkareem/AMTabView).

## Example
To run the example project, clone the repo, and run the project.

## Usage
1- Add the view to your XML file.

```
    <com.abedalkareem.tabview.AMTabView
      android:id="@+id/tabView"
      android:layout_width="match_parent"
      android:layout_height="50dp"
      app:layout_constraintBottom_toBottomOf="parent"
      app:layout_constraintEnd_toEndOf="parent"
      app:layout_constraintStart_toStartOf="parent" />
```

2- In the parent view group, set `clipChildren` to false.

```
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  xmlns:tools="http://schemas.android.com/tools"
  android:layout_width="match_parent"
  android:layout_height="match_parent"
  android:background="@color/colorPrimary"
  android:clipChildren="false"
  tools:context=".MainActivity">

  <FrameLayout
    android:layout_width="match_parent"
    android:layout_height="0dp"
    android:id="@+id/fragment_container"
    android:background="@color/colorPrimary"
    app:layout_constraintBottom_toTopOf="@id/tabView"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent" />

  <com.abedalkareem.tabview.AMTabView
    android:id="@+id/tabView"
    android:layout_width="match_parent"
    android:layout_height="50dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
```

3- In the activity, set the `tabsImages` and the `onTabChangeListener` to perform actions when the tab changes.

```
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    val icons = mutableListOf(
      R.drawable.ic_action_tab1,
      R.drawable.ic_action_tab2,
      R.drawable.ic_action_tab3,
      R.drawable.ic_action_tab4
    )
    // set the tab images.
    tabView.tabsImages = icons

    // listen for tab changes.
    tabView.onTabChangeListener = {
      replaceFragmentWith(icons[it])
    }
  }
```

## Customization

```
        app:tabColor="@android:color/holo_red_dark"
        app:selectedTabTintColor="@android:color/black"
        app:unSelectedTabTintColor="@android:color/background_light"
        app:ballColor="@android:color/holo_green_dark"
```

## Installation

Add the JitPack repository in your root build.gradle at the end of repositories:

```
allprojects {
  repositories {
    maven { url 'https://jitpack.io' }
  }
}
```

Add the dependency:

```
dependencies {
  implementation 'com.github.Abedalkareem:AMTabView-Android:Tag'
}
```

## Author

Abedalkareem, [email protected]

## License

AMTabView is available under the MIT license. See the LICENSE file for more info.
27.737705
112
0.719858
eng_Latn
0.260497
d09aba04082c42179aa665f98f91436a4aa272d8
3,116
md
Markdown
blocks/button/button.md
yandex-ui/nanoislands
5916da53f144c5e6d09ba9e0e3247032426d90b5
[ "MIT" ]
19
2015-02-12T13:46:23.000Z
2022-01-18T02:32:57.000Z
blocks/button/button.md
yandex-ui/nanoislands
5916da53f144c5e6d09ba9e0e3247032426d90b5
[ "MIT" ]
51
2015-01-20T09:13:40.000Z
2016-12-20T21:09:27.000Z
blocks/button/button.md
yandex-ui/nanoislands
5916da53f144c5e6d09ba9e0e3247032426d90b5
[ "MIT" ]
11
2015-02-16T10:58:58.000Z
2020-08-28T13:26:07.000Z
### Default button > <div example="button-default"> > </div> > > ```yate > nb-button({ > 'content': 'Button' > }) > ``` * `size: m` * `theme: normal` ### Optional attributes * `'size'` {string} `'s' / 'm' / 'l' / 'xl'` * `'theme'` {string} `'normal' / 'action' (yellow) / 'dark' / 'pseudo' / 'pseudo-inverted' / 'promo' (big yellow)` * `'id'` {string} ... * `'name'` {string} ... * `'class'` {array} `['my_class1', 'my_class2']` — additional classes * `'disabled'` {boolean} — disabled button * `'tabindex'` {string} * `'icon'` {string} ... — link to icon * `'iconText'` {string} ... — symbol for icon * `'content'` {string} ... — content of button * `'attrs'` {object} `{'type': 'submit', 'attr2: 'value2' }` — custom DOM attributes for button * `'static'` {boolean} — block without nanoblocks functionality (JavaScript API) * `'type'` {string} * `'file'` — attach button. This is not DOM type aka `<input type=""/>`, this is instance type. * `'link'` — `<a>` * `'label'` - `<label>` * `'inline'` - `<span>` ### Yate examples #### Types `'type': 'link' 'label' 'inline' 'file'` > <div example="buttons-type"> > </div> > > ```yate > nb-button({ > 'content': 'Link button' > 'type': 'link' > 'attrs': { > 'href': '#' > } > }) > ' ' > nb-button({ > 'content': 'Label button' > 'type': 'label' > 'attrs': { > 'for': 'blah' > } > }) > ' ' > nb-button({ > 'content': 'Span button' > 'type': 'inline' > }) > ' ' > nb-button({ > 'content': 'Attach file' > 'type': 'file' > }) > ``` #### Size > <div example="buttons-size" > > </div> > > ```yate > nb-button({ > 'content': 'Small' > 'size': 's' > }) > ' ' > nb-button({ > 'content': 'Medium' > }) > ' ' > nb-button({ > 'content': 'Large' > 'size': 'l' > }) > ' ' > nb-button({ > 'content': 'Extra Large' > 'size': 'xl' > }) > ' ' > nb-button({ > 'theme': 'promo' > 'content': 'Large' > }) > ' ' > nb-button({ > 'theme': 'promo' > 'size': 'xl' > 'content': 'Extra large' > }) > ``` #### Themes `'theme': 'action' 'pseudo' 'pseudo-inverted' 'clear' 'dark' 'promo' 'flying'` > <div example="buttons-theme" > > </div> > > ```yate > nb-button({ > 'content': 'Action' > 'theme': 'action' > }) > ' ' > nb-button({ > 'content': 'Pseudo' > 'theme': 'pseudo' > }) > ' ' > nb-button({ > 'content': 'Clear' > 'theme': 'clear' > }) > ' ' > nb-button({ > 'content': 'Pseudo' > 'theme': 'pseudo-inverted' > }) > ' ' > nb-button({ > 'content': 'Dark' > 'theme': 'dark' > }) > > ' ' > nb-button({ > 'content': 'Promo' > 'theme': 'promo' > }) > > ' ' > nb-button({ > 'content': 'Flying' > 'theme': 'flying' > }) > ``` #### Icons > <div example="buttons-icon" > > </div> > > ```yate > nb-button({ > 'icon': 'eye' > }) > > ' ' > nb-button({ > 'iconText': '▼' > }) > > ' ' > nb-button({ > 'icon': 'link' > 'content': 'Открыть' > }) > ``` #### Disabled > <div example="button-disabled" > > </div> > > ```yate > nb-button({ > 'disabled': true() > 'content': 'Disabled' > } > ```
17.027322
114
0.477214
yue_Hant
0.125445
d09ad1c2d7052ed0e68256cc8ef6f23285bd4f76
20
md
Markdown
README.md
jaeyholic/Steam-Signup-Form
88701938d5f46617d0e0ab2b677c51df91cf145e
[ "MIT" ]
null
null
null
README.md
jaeyholic/Steam-Signup-Form
88701938d5f46617d0e0ab2b677c51df91cf145e
[ "MIT" ]
null
null
null
README.md
jaeyholic/Steam-Signup-Form
88701938d5f46617d0e0ab2b677c51df91cf145e
[ "MIT" ]
null
null
null
# Steam Signup Form
10
19
0.75
eng_Latn
0.924219
d09b626d9eb67869bf59d04c442519c58fe8f13d
1,014
md
Markdown
dev-libs/nss.md
filakhtov/gentoo-use-docs
4db20ef1693080450ea7acf533e96d029f252f9d
[ "Unlicense" ]
1
2019-05-12T13:36:30.000Z
2019-05-12T13:36:30.000Z
dev-libs/nss.md
filakhtov/gentoo-use-docs
4db20ef1693080450ea7acf533e96d029f252f9d
[ "Unlicense" ]
1
2018-06-13T02:50:47.000Z
2018-06-13T02:50:47.000Z
dev-libs/nss.md
filakhtov/gentoo-use-docs
4db20ef1693080450ea7acf533e96d029f252f9d
[ "Unlicense" ]
2
2018-06-11T21:27:10.000Z
2020-05-05T14:07:42.000Z
# dev-libs/nss ### cacert Apply an additional patch to add a Class 1 and Class 3 certificates from the CACert Inc. association into a built-in trusted root certificates list. These certificates aren't included by default because procedures to obtain them aren't compatible with the Mozilla verification and auditing policies. These certificates are considered insecure by various strict policies and so it is recommended to disable the flag, however it is safe to enable it if that's necessary. ### utils Build and install additional utilities: `addbuiltin`, `atob`, `baddbdir`, `btoa`, `certcgi`, `certutil`, `cmsutil`, `conflict`, `crlutil`, `derdump`, `digest`, `makepqg`, `mangle`, `modutil`, `multinit`, `nonspr10`, `ocspclnt`, `oidcalc`, `p7content`, `p7env`, `p7sign`, `p7verify`, `pk11mode`, `pk12util`, `pp`, `rsaperf`, `selfserv`, `shlibsign`, `signtool`, `signver`, `ssltap`, `strsclnt`, `symkeyutil`, `tstclnt`, `vfychain`, `vfyserv` and their respective MAN pages. It is safe to disable the flag.
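As a quick illustration (assuming a standard Portage setup; the file name under `package.use` is arbitrary), these flags can be set per package:

```
# /etc/portage/package.use/nss
dev-libs/nss utils -cacert
```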
84.5
472
0.739645
eng_Latn
0.980107
d09c692e8e06f56d3cf4327a7c9449c4747a83d2
7,761
md
Markdown
docs/Elm/Graphics/Collage.md
pselm/graphics
7462f57198adb3dee8b54137b2c93e100f7063fe
[ "BSD-3-Clause" ]
null
null
null
docs/Elm/Graphics/Collage.md
pselm/graphics
7462f57198adb3dee8b54137b2c93e100f7063fe
[ "BSD-3-Clause" ]
null
null
null
docs/Elm/Graphics/Collage.md
pselm/graphics
7462f57198adb3dee8b54137b2c93e100f7063fe
[ "BSD-3-Clause" ]
null
null
null
## Module Elm.Graphics.Collage

The collage API is for freeform graphics. You can move, rotate, scale, etc. all sorts of forms including lines, shapes, images, and elements.

Collages use the same coordinate system you might see in an algebra or physics problem. The origin (0,0) is at the center of the collage, not the top left corner as in some other graphics libraries. Furthermore, the y-axis points up, so moving a form 10 units in the y-axis will move it up on screen.

#### `Collage`

``` purescript
newtype Collage
```

##### Instances
``` purescript
Renderable Collage
```

#### `makeCollage`

``` purescript
makeCollage :: Int -> Int -> List Form -> Collage
```

Create a `Collage` with certain dimensions and content. It takes width and height arguments to specify dimensions, and then a list of 2D forms to describe the content.

The forms are drawn in the order of the list, i.e., `collage w h (a : b : Nil)` will draw `b` on top of `a`.

Note that this normally might be called `collage`, but Elm uses that for the function that actually creates an `Element`.

#### `collage`

``` purescript
collage :: Int -> Int -> List Form -> Element
```

Create a collage `Element` with certain dimensions and content. It takes width and height arguments to specify dimensions, and then a list of 2D forms to describe the content.

The forms are drawn in the order of the list, i.e., `collage w h (a : b : Nil)` will draw `b` on top of `a`.

To make a `Collage` without immediately turning it into an `Element`, see `makeCollage`.

#### `toElement`

``` purescript
toElement :: Collage -> Element
```

Turn a `Collage` into an `Element`.

#### `Form`

``` purescript
newtype Form
```

A visual `Form` has a shape and texture. This can be anything from a red square to a circle textured with stripes.

#### `toForm`

``` purescript
toForm :: forall a. Renderable a => a -> Form
```

Turn any `Element` into a `Form`. This lets you use text, gifs, and video in your collage. This means you can move, rotate, and scale an `Element` however you want.

In fact, this works with any `Renderable`, not just Elements.

#### `filled`

``` purescript
filled :: Color -> Shape -> Form
```

Create a filled in shape.

#### `textured`

``` purescript
textured :: String -> Shape -> Form
```

Create a textured shape. The texture is described by some url and is tiled to fill the entire shape.

#### `gradient`

``` purescript
gradient :: Gradient -> Shape -> Form
```

Fill a shape with a gradient.

#### `outlined`

``` purescript
outlined :: LineStyle -> Shape -> Form
```

Outline a shape with a given line style.

#### `traced`

``` purescript
traced :: LineStyle -> Path -> Form
```

Trace a path with a given line style.

#### `text`

``` purescript
text :: Text -> Form
```

Create some text. Details like size and color are part of the `Text` value itself, so you can mix colors and sizes and fonts easily.

#### `outlinedText`

``` purescript
outlinedText :: LineStyle -> Text -> Form
```

Create some outlined text. Since we are just outlining the text, the color is taken from the `LineStyle` attribute instead of the `Text`.

#### `move`

``` purescript
move :: Tuple Float Float -> Form -> Form
```

Move a form by the given amount (x, y). This is a relative translation so `(move (5,10) form)` would move `form` five pixels to the right and ten pixels up.

#### `moveX`

``` purescript
moveX :: Float -> Form -> Form
```

Move a shape in the x direction. This is relative so `(moveX 10 form)` moves `form` 10 pixels to the right.
#### `moveY` ``` purescript moveY :: Float -> Form -> Form ``` Move a shape in the y direction. This is relative so `(moveY 10 form)` moves `form` upwards by 10 pixels. #### `scale` ``` purescript scale :: Float -> Form -> Form ``` Scale a form by a given factor. Scaling by 2 doubles both dimensions, and quadruples the area. #### `rotate` ``` purescript rotate :: Float -> Form -> Form ``` Rotate a form by a given angle. Rotate takes standard Elm angles (radians) and turns things counterclockwise. So to turn `form` 30&deg; to the left you would say, `(rotate (degrees 30) form)`. #### `alpha` ``` purescript alpha :: Float -> Form -> Form ``` Set the alpha of a `Form`. The default is 1, and 0 is totally transparent. #### `group` ``` purescript group :: List Form -> Form ``` Flatten many forms into a single `Form`. This lets you move and rotate them as a single unit, making it possible to build small, modular components. Forms will be drawn in the order that they are listed, as in `collage`. #### `groupTransform` ``` purescript groupTransform :: Transform2D -> List Form -> Form ``` Flatten many forms into a single `Form` and then apply a matrix transformation. Forms will be drawn in the order that they are listed, as in `collage`. #### `Shape` ``` purescript newtype Shape ``` A 2D shape. Shapes are closed polygons. They do not have a color or texture, that information can be filled in later. #### `rect` ``` purescript rect :: Float -> Float -> Shape ``` A rectangle with a given width and height, centered on the origin. #### `oval` ``` purescript oval :: Float -> Float -> Shape ``` An oval with a given width and height. #### `square` ``` purescript square :: Float -> Shape ``` A square with a given edge length, centred on the origin. #### `circle` ``` purescript circle :: Float -> Shape ``` A circle with a given radius. #### `ngon` ``` purescript ngon :: Int -> Float -> Shape ``` A regular polygon with N sides. The first argument specifies the number of sides and the second is the radius. So to create a pentagon with radius 30 you would say: ngon 5 30 #### `polygon` ``` purescript polygon :: List (Tuple Float Float) -> Shape ``` Create an arbitrary polygon by specifying its corners in order. `polygon` will automatically close all shapes, so the given list of points does not need to start and end with the same position. #### `Path` ``` purescript newtype Path ``` A 2D path. Paths are a sequence of points. They do not have a color. #### `segment` ``` purescript segment :: Tuple Float Float -> Tuple Float Float -> Path ``` Create a path along a given line segment. #### `path` ``` purescript path :: List (Tuple Float Float) -> Path ``` Create a path that follows a sequence of points. #### `solid` ``` purescript solid :: Color -> LineStyle ``` Create a solid line style with a given color. #### `dashed` ``` purescript dashed :: Color -> LineStyle ``` Create a dashed line style with a given color. Dashing equals `[8,4]`. #### `dotted` ``` purescript dotted :: Color -> LineStyle ``` Create a dotted line style with a given color. Dashing equals `[3,3]`. #### `LineStyle` ``` purescript type LineStyle = { color :: Color, width :: Float, cap :: LineCap, join :: LineJoin, dashing :: List Int, dashOffset :: Int } ``` All of the attributes of a line style. This lets you build up a line style however you want. You can also update existing line styles with record updates. #### `LineCap` ``` purescript data LineCap = Flat | Round | Padded ``` The shape of the ends of a line. 
##### Instances ``` purescript Eq LineCap ``` #### `LineJoin` ``` purescript data LineJoin = Smooth | Sharp Float | Clipped ``` The shape of the &ldquo;joints&rdquo; of a line, where each line segment meets. `Sharp` takes an argument to limit the length of the joint. This defaults to 10. ##### Instances ``` purescript Eq LineJoin ``` #### `defaultLine` ``` purescript defaultLine :: LineStyle ``` The default line style, which is solid black with flat caps and sharp joints. You can use record updates to build the line style you want. For example, to make a thicker line, you could say: defaultLine { width = 10 }
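A rough usage sketch combining the functions documented above. Assumptions not stated in this module's documentation: `Elm.Color` exports `red` and `black`, and `Data.List.fromFoldable` / `Data.Tuple` are available; adjust module names to your setup.

``` purescript
module Example where

import Elm.Graphics.Collage (collage, filled, outlined, move, defaultLine, ngon, rect)
import Elm.Color (red, black)
import Data.List (fromFoldable)
import Data.Tuple (Tuple(..))

-- A 400x400 collage: a red pentagon plus an outlined, shifted rectangle.
scene = collage 400 400 (fromFoldable
  [ filled red (ngon 5 30.0)
  , move (Tuple 50.0 20.0) (outlined (defaultLine { width = 3.0, color = black }) (rect 80.0 40.0))
  ])
```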
20.423684
125
0.684319
eng_Latn
0.995353
d09cded397ad492760e9fd047d5aa7ab36d26ffa
414
md
Markdown
docs/kakao/com.agoda.kakao.pager2/-view-pager2-actions/scroll-to-end.md
k-kagurazaka/Kakao
342f3491998bda708b0555b17ed6d911c1ae88a3
[ "Apache-2.0" ]
1,213
2017-10-09T04:47:27.000Z
2022-03-29T15:37:44.000Z
docs/kakao/com.agoda.kakao.pager2/-view-pager2-actions/scroll-to-end.md
k-kagurazaka/Kakao
342f3491998bda708b0555b17ed6d911c1ae88a3
[ "Apache-2.0" ]
160
2017-10-09T04:40:35.000Z
2021-05-25T13:26:49.000Z
docs/kakao/com.agoda.kakao.pager2/-view-pager2-actions/scroll-to-end.md
k-kagurazaka/Kakao
342f3491998bda708b0555b17ed6d911c1ae88a3
[ "Apache-2.0" ]
138
2017-10-09T04:38:43.000Z
2022-01-26T22:55:56.000Z
[kakao](../../index.md) / [com.agoda.kakao.pager2](../index.md) / [ViewPager2Actions](index.md) / [scrollToEnd](./scroll-to-end.md) # scrollToEnd `open fun scrollToEnd(): `[`Unit`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-unit/index.html) Overrides [ScrollableActions.scrollToEnd](../../com.agoda.kakao.common.actions/-scrollable-actions/scroll-to-end.md) Scrolls to the last position of the view
37.636364
131
0.724638
yue_Hant
0.333906
d09d19ca1359567a92440f0df31b697e8f5c10fa
1,426
md
Markdown
docs/extensibility/debugger/reference/idebugobject-isreadonly.md
mairaw/visualstudio-docs.pt-br
26480481c1cdab3e77218755148d09daec1b3454
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/debugger/reference/idebugobject-isreadonly.md
mairaw/visualstudio-docs.pt-br
26480481c1cdab3e77218755148d09daec1b3454
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/debugger/reference/idebugobject-isreadonly.md
mairaw/visualstudio-docs.pt-br
26480481c1cdab3e77218755148d09daec1b3454
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: IDebugObject::IsReadOnly | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- vs-ide-sdk
ms.topic: conceptual
f1_keywords:
- IDebugObject::IsReadOnly
helpviewer_keywords:
- IDebugObject::IsReadOnly method
ms.assetid: c460f772-d08a-4b36-81f3-dff6a51a93fd
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: 4238895b236db6dd75cbf384adc78284f34d073f
ms.sourcegitcommit: 240c8b34e80952d00e90c52dcb1a077b9aff47f6
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/23/2018
ms.locfileid: "49936000"
---
# <a name="idebugobjectisreadonly"></a>IDebugObject::IsReadOnly
Determines whether this object is read-only.

## <a name="syntax"></a>Syntax

```cpp
HRESULT IsReadOnly( 
   BOOL* pfIsReadOnly
);
```

```csharp
int IsReadOnly(
   out int pfIsReadOnly
);
```

#### <a name="parameters"></a>Parameters
`pfIsReadOnly`

[out] Returns nonzero (`TRUE`) if this object is read-only; otherwise, returns zero (`FALSE`).

## <a name="return-value"></a>Return value
If successful, returns S_OK; otherwise, returns an error code.

## <a name="remarks"></a>Remarks
A read-only object cannot have its value changed after it is created.

## <a name="see-also"></a>See also
[IDebugObject](../../../extensibility/debugger/reference/idebugobject.md)
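A minimal usage sketch based on the managed signature above (assuming `debugObject` is an `IDebugObject` reference obtained elsewhere; `S_OK` is 0):

```csharp
int hr = debugObject.IsReadOnly(out int isReadOnly);
if (hr == 0 && isReadOnly != 0)
{
    // The object is read-only: its value cannot be changed after creation.
}
```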
26.90566
110
0.725105
por_Latn
0.488054
d09ec41a3b058aa964e516bc3e69f9c60bb18104
1,287
md
Markdown
docs/framework/wcf/diagnostics/wmi/peertransportsecuritysettings.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/wmi/peertransportsecuritysettings.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/wmi/peertransportsecuritysettings.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: PeerTransportSecuritySettings
ms.date: 03/30/2017
ms.assetid: 1df08cbb-68c5-4d36-9f88-a776a8117de8
ms.openlocfilehash: 8b5276eca89d32a45177aa958d4c99d682e30668
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/26/2020
ms.locfileid: "96268948"
---
# <a name="peertransportsecuritysettings"></a>PeerTransportSecuritySettings

PeerTransportSecuritySettings

## <a name="syntax"></a>Syntax

```csharp
class PeerTransportSecuritySettings
{
  string CredentialType;
};
```

## <a name="methods"></a>Methods

The PeerTransportSecuritySettings class does not define any methods.

## <a name="properties"></a>Properties

The PeerTransportSecuritySettings class has the following property:

### <a name="credentialtype"></a>CredentialType

Data type: string

Access type: Read-only

The credential type of the peer security element.

## <a name="requirements"></a>Requirements

|MOF|Declared in Servicemodel.mof.|
|---------|-----------------------------------|
|Namespace|Defined in root\ServiceModel|

## <a name="see-also"></a>See also

- <xref:System.ServiceModel.PeerTransportSecuritySettings>
25.74
77
0.717949
kor_Hang
0.22112
d09f1371912c3269c9f80bc82a6580260a7a74f9
495
md
Markdown
docs/alibaba_aliqin_fc_flow_query.md
aliwuyun/Alidayu
bad623239dae1a15e36ead882c9bb58625841d93
[ "MIT" ]
1
2017-04-11T11:34:59.000Z
2017-04-11T11:34:59.000Z
docs/alibaba_aliqin_fc_flow_query.md
aliwuyun/Alidayu
bad623239dae1a15e36ead882c9bb58625841d93
[ "MIT" ]
null
null
null
docs/alibaba_aliqin_fc_flow_query.md
aliwuyun/Alidayu
bad623239dae1a15e36ead882c9bb58625841d93
[ "MIT" ]
null
null
null
# Data plan top-up query

## 1. Official documentation

http://open.taobao.com/docs/api.htm?apiId=26305

## 2. Parameters and methods

|Official parameter|Corresponding method|Type|Required|Default|Description|
|----|----|----|----|----|----|
|`out_id`|`setOutId($value)`|string|**Required**| |Unique serial number|

## 3. Usage

```php
<?php

use Aliwuyun\Alidayu\AlibabaAliqinFcFlowQuery;
use Alidayu;

class TestController extends Controller
{
    public function send()
    {
        $message = (new AlibabaAliqinFcFlowQuery())
            ->setOutId('123456');

        return Alidayu::send($message);
    }
}
```
15.46875
51
0.587879
yue_Hant
0.505439
d09f64002171e83e1b3db3c9802cf3b73b3114a2
2,559
md
Markdown
aspnetcore/blazor/includes/security/azure-scope-5x.md
JulianTurner/AspNetCore.Docs.de-de
8a23684502306840c8e5b92ac101cf29586fdab8
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/blazor/includes/security/azure-scope-5x.md
JulianTurner/AspNetCore.Docs.de-de
8a23684502306840c8e5b92ac101cf29586fdab8
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/blazor/includes/security/azure-scope-5x.md
JulianTurner/AspNetCore.Docs.de-de
8a23684502306840c8e5b92ac101cf29586fdab8
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
no-loc:
- appsettings.json
- ASP.NET Core Identity
- cookie
- Cookie
- Blazor
- Blazor Server
- Blazor WebAssembly
- Identity
- Let's Encrypt
- Razor
- SignalR
ms.openlocfilehash: 5964554c36e2242b70faee390374828acd2bd860
ms.sourcegitcommit: a49c47d5a573379effee5c6b6e36f5c302aa756b
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 02/16/2021
ms.locfileid: "100551754"
---
When working with a server API registered with AAD, and the app's AAD registration is in a tenant that relies on an [unverified publisher domain](/azure/active-directory/develop/howto-configure-publisher-domain), the App ID URI of your server API app isn't `api://{SERVER API APP CLIENT ID OR CUSTOM VALUE}` but instead has the format `https://{TENANT}.onmicrosoft.com/{SERVER API APP CLIENT ID OR CUSTOM VALUE}`. If that's the case, the default access token scope in `Program.Main` (`Program.cs`) of the *`Client`* app appears similar to the following:

```csharp
options.ProviderOptions.DefaultAccessTokenScopes
    .Add("https://{TENANT}.onmicrosoft.com/{SERVER API APP CLIENT ID OR CUSTOM VALUE}/{DEFAULT SCOPE}");
```

To configure the server API app for a matching audience, set the `Audience` in the *`Server`* API app settings file (`appsettings.json`) so that it matches the app's audience provided by the Azure portal:

```json
{
  "AzureAd": {
    "Authority": "https://login.microsoftonline.com/{TENANT ID}",
    "ClientId": "{SERVER API APP CLIENT ID}",
    "ValidateAuthority": true,
    "Audience": "https://{TENANT}.onmicrosoft.com/{SERVER API APP CLIENT ID OR CUSTOM VALUE}"
  }
}
```

In the preceding configuration, the end of the `Audience` value does **not** include the default scope `/{DEFAULT SCOPE}`.

Example:

`Program.Main` (`Program.cs`) of the *`Client`* app:

```csharp
options.ProviderOptions.DefaultAccessTokenScopes
    .Add("https://contoso.onmicrosoft.com/41451fa7-82d9-4673-8fa5-69eff5a761fd/API.Access");
```

Configure the *`Server`* API app settings file (`appsettings.json`) with a matching audience (`Audience`):

```json
{
  "AzureAd": {
    "Authority": "https://login.microsoftonline.com/e86c78e2-...-918e0565a45e",
    "ClientId": "41451fa7-82d9-4673-8fa5-69eff5a761fd",
    "ValidateAuthority": true,
    "Audience": "https://contoso.onmicrosoft.com/41451fa7-82d9-4673-8fa5-69eff5a761fd"
  }
}
```

In the preceding example, the end of the `Audience` value does **not** include the default scope `/API.Access`.
38.772727
608
0.749121
deu_Latn
0.546861
d09fb2774846d660bdfb4c6821ca6c6c8ebecf84
2,633
md
Markdown
Docs/keyMapping - ES.md
dimateos/TFG_Portals
4cb85e2f88fc8922b0112979104290e0afc4e981
[ "MIT" ]
2
2021-06-30T15:11:35.000Z
2021-12-27T08:13:15.000Z
Docs/keyMapping - ES.md
dimateos/TFG_Portals
4cb85e2f88fc8922b0112979104290e0afc4e981
[ "MIT" ]
null
null
null
Docs/keyMapping - ES.md
dimateos/TFG_Portals
4cb85e2f88fc8922b0112979104290e0afc4e981
[ "MIT" ]
null
null
null
# ES - Application controls The latest version of the application can be found [here](https://github.com/dimateos/TFG_Portals/releases). > General and view: * **`ESC`**: exit the application. * **`ALT`**: lock / release the cursor. * **`V`**: lock / release rotation (and enable orange borders). * **`M`**: toggles the lower view on / off. * **`N`**: switches between top-down and first-person view. * **`Comma`** and `+/-`: adjusts the zoom of the top-down camera. * **`Period`** and `+/-`: adjusts the size of the lower view. > Portal properties: * **`1 through 9`** and **`O`**: toggles portal properties on / off. * **`L`** and `+/-`: adjusts the maximum number of recursion levels. * **`K`** and `+/-`: adjusts the number of skipped recursion levels. * **`J`** and `+/-`: adjusts the thickness of the portals. > Movement and transformation: * **`WASD`** + **`MOUSE movement`**: free first-person movement. * **`MOUSE wheel`**: main camera zoom (click resets the zoom). * **`TAB`**: switches between controllable objects (player and portals). * **`TAB`** and **`B`**: switches control between the top-down camera and the player. * **`ARROW KEYS`**: movement on the global XZ plane of the controlled object. * **`SPACE`** / **`C`**: movement along the global Y axis of the controlled object. * **`R`** + **`Q`** / **`E`**: enables rotation of the controlled object, plus ROLL. * With rotation enabled, **`WASD`** applies PITCH and YAW * **`SHIFT`** / **`CONTROL`**: movement / rotation speed modifiers. * **`T`**: resets the position / rotation of the controlled object to the moment its control started. * **`Y`**: resets the position / rotation of the controlled object to the start of the scene. * **`G`** / **`H`**: saves and loads the full transformation of the controlled object. * **`H`** and **`B`** together: loads the saved position of all objects. > Toggling scene elements and post-processing: * **`1 through 9`**: toggles scene elements on / off. * **`1 through 9`** and **`P`**: selects the post-processing filter for the portals. * **`1 through 9`** and **`B`**: selects the post-processing filter for the whole scene. > Other: * **`F`** and `+/-`: adjusts the thickness of the player's body. * **`X`** / **`Z`**: swaps the main camera with the virtual cameras. * **`I`**: shows / hides the coordinate axes of the player and portals. * **`I`** and **`O`**: shows / hides the coordinate axes of the virtual cameras. * **`I`** and **`F`**: shows / hides the player's body.
57.23913
112
0.64907
spa_Latn
0.979947
d09fd2188240497adca1d53421536446a0591057
659
md
Markdown
README.md
caiocampos/shrtr
bd8961633c41fe6b362e311d55d4a9bf0e90df66
[ "MIT" ]
null
null
null
README.md
caiocampos/shrtr
bd8961633c41fe6b362e311d55d4a9bf0e90df66
[ "MIT" ]
129
2019-11-07T01:16:53.000Z
2022-03-25T08:04:52.000Z
README.md
caiocampos/shrtr
bd8961633c41fe6b362e311d55d4a9bf0e90df66
[ "MIT" ]
1
2022-02-18T09:42:12.000Z
2022-02-18T09:42:12.000Z
# Shrtr [<img src="https://api.travis-ci.org/caiocampos/shrtr.svg?branch=master">](https://travis-ci.org/caiocampos/shrtr) [![codecov](https://codecov.io/gh/caiocampos/shrtr/branch/master/graph/badge.svg)](https://codecov.io/gh/caiocampos/shrtr) ![](https://img.shields.io/david/caiocampos/shrtr.svg) [![DepShield Badge](https://depshield.sonatype.org/badges/caiocampos/shrtr/depshield.svg)](https://depshield.github.io) [![GuardRails Badge](https://badges.guardrails.io/caiocampos/shrtr.svg)](https://www.guardrails.io/) [![License](https://img.shields.io/github/license/caiocampos/shrtr.svg)](LICENSE) URL shortener built with React, Express and MongoDB
54.916667
122
0.758725
yue_Hant
0.219342
d0a025fc4276d022ecd443dd864abd927fb4ec71
18,644
md
Markdown
README.md
asyncy/asyncy-redis
b3913ec931268dca406ea7f703df2842814a3c7c
[ "MIT" ]
1
2019-05-22T06:03:50.000Z
2019-05-22T06:03:50.000Z
README.md
asyncy/asyncy-redis
b3913ec931268dca406ea7f703df2842814a3c7c
[ "MIT" ]
13
2019-08-01T03:24:40.000Z
2019-10-21T09:47:28.000Z
README.md
oms-services/redis
b3913ec931268dca406ea7f703df2842814a3c7c
[ "MIT" ]
4
2019-06-28T10:15:21.000Z
2019-10-18T13:57:15.000Z
# _Redis_ Open Microservice > Wrapper for the Redis key-value store [![Open Microservice Specification Version](https://img.shields.io/badge/Open%20Microservice-1.0-477bf3.svg)](https://openmicroservices.org) [![Open Microservices Spectrum Chat](https://withspectrum.github.io/badge/badge.svg)](https://spectrum.chat/open-microservices) [![Open Microservices Code of Conduct](https://img.shields.io/badge/Contributor%20Covenant-v1.4%20adopted-ff69b4.svg)](https://github.com/oms-services/.github/blob/master/CODE_OF_CONDUCT.md) [![Open Microservices Commitzen](https://img.shields.io/badge/commitizen-friendly-brightgreen.svg)](http://commitizen.github.io/cz-cli/) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](http://makeapullrequest.com) ## Introduction This project is an example implementation of the [Open Microservice Specification](https://openmicroservices.org), a standard originally created at [Storyscript](https://storyscript.io) for building highly-portable "microservices" that expose the events, actions, and APIs inside containerized software. ## Getting Started The `oms` command-line interface allows you to interact with Open Microservices. If you're interested in creating an Open Microservice the CLI also helps validate, test, and debug your `oms.yml` implementation! See the [oms-cli](https://github.com/microservices/oms) project to learn more! ### Installation ``` npm install -g @microservices/oms ``` ## Usage ### Open Microservices CLI Usage Once you have the [oms-cli](https://github.com/microservices/oms) installed, you can run any of the following commands from within this project's root directory: #### Actions ##### set > Sets 'key' to hold 'value'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | value | `any` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run set \ -a key='*****' \ -a value='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### rpush > Insert 'value' at the end of list stored at 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | value | `any` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run rpush \ -a key='*****' \ -a value='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### lpush > Insert 'value' at the head of list stored at 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. 
| | value | `any` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run lpush \ -a key='*****' \ -a value='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### lpop > Removes and returns the first element of the list stored at 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run lpop \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### rpop > Removes and returns the last element of the list stored at 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run rpop \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### blpop > Removes and returns the first element of the list stored at 'key'. When there are no element in the list, the command will not return until an element got added. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run blpop \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### brpop > Removes and returns the last element of the list stored at 'key'. When there are no element in the list, the command will not return until an elements got added. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. 
| ``` shell oms run brpop \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### delete > Removes 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run delete \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### get > Returns the value of 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run get \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### mget > Returns the values of multiple 'keys'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | keys | `list` | `false` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run mget \ -a keys='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### increment > Increments a number stored at 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | by | `int` | `false` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run increment \ -a key='*****' \ -a by='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### decrement > Decrements a number stored at 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | by | `int` | `false` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. 
| | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run decrement \ -a key='*****' \ -a by='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### append > Appends 'value' to a 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | value | `any` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run append \ -a key='*****' \ -a value='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### getSet > Returns the current value of 'key' and overwrites it with 'value'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | value | `any` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run getSet \ -a key='*****' \ -a value='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### setnx > Set a 'key' to 'value' only if the key does not exist yet. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | value | `any` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run setnx \ -a key='*****' \ -a value='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### mset > Sets multiple 'key'/'value' pairs simultaneously. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | pairs | `map` | `false` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run mset \ -a pairs='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### msetnx > Sets multiple 'key'/'value' pairs simultaneously. Only non-existing keys will be set. 
##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | pairs | `map` | `false` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run msetnx \ -a pairs='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### expire > Set a timeout on a 'key'. ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | No description provided. | | seconds | `int` | `true` | None | No description provided. | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms run expire \ -a key='*****' \ -a seconds='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ##### rpop > RPOP a key constantly, and emit the values as events ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | The key to RPOP | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms subscribe rpop \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ,##### lpop > LPOP a key constantly, and emit the values as events ##### Action Arguments | Argument Name | Type | Required | Default | Description | |:------------- |:---- |:-------- |:--------|:----------- | | key | `string` | `true` | None | The key to LPOP | | REDIS_HOST | `string` | `false` | None | No description provided. | | REDIS_PORT | `int` | `false` | None | No description provided. | | REDIS_DB | `string` | `false` | None | No description provided. | | REDIS_PASSWORD | `string` | `false` | None | No description provided. | ``` shell oms subscribe lpop \ -a key='*****' \ -e REDIS_HOST=$REDIS_HOST \ -e REDIS_PORT=$REDIS_PORT \ -e REDIS_DB=$REDIS_DB \ -e REDIS_PASSWORD=$REDIS_PASSWORD ``` ## Contributing All suggestions in how to improve the specification and this guide are very welcome. Feel free share your thoughts in the Issue tracker, or even better, fork the repository to implement your own ideas and submit a pull request. [![Edit redis on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/github/oms-services/redis) This project is guided by [Contributor Covenant](https://github.com/oms-services/.github/blob/master/CODE_OF_CONDUCT.md). Please read out full [Contribution Guidelines](https://github.com/oms-services/.github/blob/master/CONTRIBUTING.md). 
## Additional Resources * [Install the CLI](https://github.com/microservices/oms) - The OMS CLI helps developers create, test, validate, and build microservices. * [Example OMS Services](https://github.com/oms-services) - Examples of OMS-compliant services written in a variety of languages. * [Example Language Implementations](https://github.com/microservices) - Find tooling & language implementations in Node, Python, Scala, Java, Clojure. * [Storyscript Hub](https://hub.storyscript.io) - A public registry of OMS services. * [Community Chat](https://spectrum.chat/open-microservices) - Have ideas? Questions? Join us on Spectrum.
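To tie the individual action listings above together, here is a small, purely illustrative round trip that stores a value and reads it back using the same CLI syntax shown above (the key, value, and Redis connection settings are made-up placeholders; point them at any Redis instance you can reach):

```bash
# Write a value with the `set` action...
oms run set \
  -a key='greeting' \
  -a value='hello' \
  -e REDIS_HOST=localhost \
  -e REDIS_PORT=6379

# ...then read it back with the `get` action.
oms run get \
  -a key='greeting' \
  -e REDIS_HOST=localhost \
  -e REDIS_PORT=6379
```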
36.485323
700
0.610813
kor_Hang
0.378334
d0a032e725f6bbe1890c1bc27d7e0b294e8d2a27
9,473
md
Markdown
src/BUILD_UNIX.md
gasegi/SoftEtherVPN
4d91bc1a07ad22998a16e9793e9f9370eadf4986
[ "Apache-2.0" ]
1
2021-03-05T13:58:29.000Z
2021-03-05T13:58:29.000Z
src/BUILD_UNIX.md
gasegi/SoftEtherVPN
4d91bc1a07ad22998a16e9793e9f9370eadf4986
[ "Apache-2.0" ]
null
null
null
src/BUILD_UNIX.md
gasegi/SoftEtherVPN
4d91bc1a07ad22998a16e9793e9f9370eadf4986
[ "Apache-2.0" ]
null
null
null
This document describes how to build SoftEtherVPN for UNIX based Operating systems - [Requirements](#requirements) * [Install requirements on Centos/RedHat](#install-requirements-on-centosredhat) * [Install Requirements on Debian/Ubuntu](#install-requirements-on-debianubuntu) * [Install Requirements on macOS](#install-requirements-on-macos) - [Build from source code and install](#build-from-source-code-and-install) - [Additional Build Options](#additional-build-options) - [How to Run SoftEther](#how-to-run-softether) * [Start/Stop SoftEther VPN Server](#startstop-softether-vpn-server) * [Start/Stop SoftEther VPN Bridge](#startstop-softether-vpn-bridge) * [Start/Stop SoftEther VPN Client](#startstop-softether-vpn-client) - [About HTML5-based Modern Admin Console and JSON-RPC API Suite](#about-html5-based-modern-admin-console-and-json-rpc-api-suite) * [Built-in SoftEther VPN Server HTML5 Ajax-based Web Administration Console](#built-in-softether-vpn-server-html5-ajax-based-web-administration-console) * [Built-in SoftEther Server VPN JSON-RPC API Suite](#built-in-softether-server-vpn-json-rpc-api-suite) - [Using SoftEther without installation.](#using-softether-without-installation) # Requirements You need to install the following software to build SoftEther VPN for UNIX. - [CMake](https://cmake.org) - C compiler (GCC, Clang, etc) - C Library (BSD libc, GNU libc, musl libc, etc) - POSIX threads library (pthread) - OpenSSL or LibreSSL (crypto, ssl) - make (GNU make, BSD make, etc) - libiconv - readline - ncurses ## Install requirements on Centos/RedHat ```bash sudo yum -y groupinstall "Development Tools" sudo yum -y install cmake ncurses-devel openssl-devel readline-devel zlib-devel ``` ## Install requirements on Debian/Ubuntu ```bash sudo apt -y install cmake gcc g++ libncurses5-dev libreadline-dev libssl-dev make zlib1g-dev ``` ## Install requirements on macOS ```bash /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" brew install cmake openssl readline ``` # Build from source code and install To build the programs from the source code, run the following commands: ```bash git clone https://github.com/SoftEtherVPN/SoftEtherVPN.git cd SoftEtherVPN git submodule init && git submodule update ./configure make -C build make -C build install ``` This will compile and install SoftEther VPN Server, Bridge and Client binaries under your executable path. If any error occurs, please check the above requirements. # Build on musl-based linux To build the programs from the source code when using musl as libc, run the following commands: ```bash export USE_MUSL=YES git clone https://github.com/SoftEtherVPN/SoftEtherVPN.git cd SoftEtherVPN git submodule init && git submodule update ./configure make -C build make -C build install ``` Building without USE_MUSL environment variable set compiles, but produced executables exhibit bad run-time behaviour. # Additional Build Options There are some additional build options useful if you're a distro package maintainer and creating a package of SoftEther VPN. It is recommended that you only specify these options when you understand what happens. ## Specify log, config, PID directories By default, SoftEther VPN writes out all files such as logs, config files, PID files under the same directory as `vpnserver`, `vpnbridge`, `vpnclient` executables. This behaviour is suitable when [using SoftEther without installation](#using-softether-without-installation) however not appropriate using with installation. 
Usually PID files are to put in `/var/run` or `/run`. Logs are `/var/log`. Other variable state information files including config files are `/var/lib` or `/var/db`. These directories can be changed at compile-time by specifying via CMake variables. * `SE_PIDDIR` - PID directory * `SE_LOGDIR` - root log directory * `SE_DBDIR` - config files and variable state directory To specify directories, perform `./configure` like below. ```bash CMAKE_FLAGS="-DSE_PIDDIR=/run/softether -DSE_LOGDIR=/var/log/softether -DSE_DBDIR=/var/lib/softether" ./configure ``` Please note that these directories are not created automatically after installation. Make sure to create these directories before starting SoftEther VPN Server, Bridge or Client. ## Build without [cpu_features](https://github.com/google/cpu_features) SoftEther VPN uses cpu_features library to retrieve CPU features such as available processor instructions. However, cpu_features is not available on some architectures. Whether to build with cpu_features is auto detected but autodetection is not so smart. If you want to build without cpu_features explicitly, perform `./configure` like below. ```bash CMAKE_FLAGS="-DSKIP_CPU_FEATURES" ./configure ``` # How to Run SoftEther ## Start/Stop SoftEther VPN Server To start the SoftEther VPN Server background service, run the following: ```bash vpnserver start ``` To stop the service, run the following: ```bash vpnserver stop ``` To configure the running SoftEther VPN Server service, you can use SoftEther VPN Command Line Management Utility as following: ```bash vpncmd ``` Or you can also use VPN Server Manager GUI Tool on other Windows PC to connect to the VPN Server remotely. You can download the GUI Tool from https://www.softether-download.com/. ## Start/Stop SoftEther VPN Bridge To start the SoftEther VPN Bridge background service, run the following: ```bash vpnbridge start ``` To stop the service, run the following: ```bash vpnbridge stop ``` To configure the running SoftEther VPN Bridge service, you can use SoftEther VPN Command Line Management Utility as following: ```bash vpncmd ``` Or you can also use VPN Server Manager GUI Tool on other Windows PC to connect to the VPN Bridge remotely. You can download the GUI Tool from https://www.softether-download.com/. ## Start/Stop SoftEther VPN Client To start the SoftEther VPN Client background service, run the following: ```bash vpnclient start ``` To stop the service, run the following: ```bash vpnclient stop ``` To configure the running SoftEther VPN Client service, you can use SoftEther VPN Command Line Management Utility as following: ```bash vpncmd ``` Or you can also use VPN Client Manager GUI Tool on other Windows PC to connect to the VPN Client remotely. You can download the GUI Tool from https://www.softether-download.com/. # About HTML5-based Modern Admin Console and JSON-RPC API Suite ## Built-in SoftEther VPN Server HTML5 Ajax-based Web Administration Console We are developing the HTML5 Ajax-based Web Administration Console (currently very limited, under construction) in the embedded HTTPS server on the SoftEther VPN Server. Access to the following URL from your favorite web browser. ``` https://<vpn_server_hostname>:<port>/admin/ ``` For example if your VPN Server is running as the port 5555 on the host at 192.168.0.1, you can access to the web console by: ``` https://192.168.0.1:5555/admin/ ``` Note: Your HTML5 development contribution is very appreciated. 
The current HTML5 pages are written by Daiyuu Nobori (the core developer of SoftEther VPN). He obviously lacks HTML5 development ability. Please kindly consider contributing to SoftEther VPN's development on GitHub. Your code will help everyone running SoftEther VPN Server. ## Built-in SoftEther Server VPN JSON-RPC API Suite The API Suite allows you to easily develop your original SoftEther VPN Server management application to control the VPN Server (e.g. creating users, adding Virtual Hubs, disconnecting specified VPN sessions). You can access the [latest SoftEther VPN Server JSON-RPC Document on GitHub.](https://github.com/SoftEtherVPN/SoftEtherVPN/tree/master/developer_tools/vpnserver-jsonrpc-clients/) - Almost all control APIs, which the VPN Server provides, are available as JSON-RPC API. You can write your own VPN Server management application in your favorite languages (JavaScript, TypeScript, Java, Python, Ruby, C#, ... etc.) - If you are planning to develop your own VPN cloud service, the JSON-RPC API is the best choice for automating operations on the VPN Server. - No need to use any specific API client library since all APIs are provided according to the JSON-RPC 2.0 Specification. You can use your favorite JSON and HTTPS client library to call any of the APIs in your pure runtime environment. - Also, the SoftEther VPN Project provides high-quality JSON-RPC client stub libraries which define all of the API client stub codes. These libraries are written in C#, JavaScript and TypeScript. The Node.js Client Library for VPN Server RPC (vpnrpc) package is also available. # Using SoftEther without installation You can use any SoftEtherVPN component (server, client, bridge) without installing it, if you wish. In this case, please do not run the `make install` command after compiling the source code, and head directly to the **bin/** directory. There you will find the generated binaries for SoftEtherVPN, and those can be used without installing SoftEtherVPN. ************************************ Thank You for Using SoftEther VPN ! By SoftEther VPN Open-Source Project https://www.softether.org/
39.970464
349
0.758155
eng_Latn
0.972096
d0a15508032060109dae7960291e8522afeddfa0
12,767
md
Markdown
articles/active-directory/authentication/howto-password-ban-bad-on-premises-agent-versions.md
beatrizmayumi/azure-docs.pt-br
ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814
[ "CC-BY-4.0", "MIT" ]
39
2017-08-28T07:46:06.000Z
2022-01-26T12:48:02.000Z
articles/active-directory/authentication/howto-password-ban-bad-on-premises-agent-versions.md
beatrizmayumi/azure-docs.pt-br
ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814
[ "CC-BY-4.0", "MIT" ]
562
2017-06-27T13:50:17.000Z
2021-05-17T23:42:07.000Z
articles/active-directory/authentication/howto-password-ban-bad-on-premises-agent-versions.md
beatrizmayumi/azure-docs.pt-br
ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814
[ "CC-BY-4.0", "MIT" ]
113
2017-07-11T19:54:32.000Z
2022-01-26T21:20:25.000Z
--- title: Password Protection agent release history - Azure Active Directory description: Documents release versions and change history behavior services: active-directory ms.service: active-directory ms.subservice: authentication ms.topic: article ms.date: 11/21/2019 ms.author: justinha author: justinha manager: daveba ms.reviewer: jsimmons ms.collection: M365-identity-device-management ms.openlocfilehash: 32ad7199360ca0acc8674f7a4e34bd206f8b335f ms.sourcegitcommit: 910a1a38711966cb171050db245fc3b22abc8c5f ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 03/19/2021 ms.locfileid: "101648757" --- # <a name="azure-ad-password-protection-agent-version-history"></a>Azure AD Password Protection agent version history ## <a name="121720"></a>1.2.172.0 Release date: February 22 2021 It has been almost two years since the GA versions of the on-premises Azure AD Password Protection agents were released. A new update is now available - see the change descriptions below. Thank you to everyone who provided feedback on the product. * The DC agent and proxy agent software now require .NET 4.7.2 to be installed. * If .NET 4.7.2 is not already installed, download and run the installer found at the [.NET Framework 4.7.2 offline installer for Windows](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2). * The AzureADPasswordProtection PowerShell module is now also installed by the DC agent software. * Two new health-related PowerShell cmdlets were added: Test-AzureADPasswordProtectionDCAgent and Test-AzureADPasswordProtectionProxy. * The AzureADPasswordProtection DC agent password filter DLL will now load and run on machines where lsass.exe is configured to run in PPL mode. * Bug fix for the password algorithm that allowed banned passwords shorter than five characters to be incorrectly accepted. * This bug only applies if the on-premises AD minimum password length policy was configured to allow passwords shorter than five characters in the first place. * Other minor bug fixes. The new installers will automatically upgrade older versions of the software. If you have installed the DC agent and proxy software on a single machine (recommended only for test environments), you must upgrade both at the same time. Running older and newer versions of the DC agent and proxy software in a domain or forest is supported, although upgrading all agents to the latest version is recommended as a best practice. Any order of agent upgrades is supported - newer DC agents can communicate through older proxy agents, and older DC agents can communicate through newer proxy agents ## <a name="121250"></a>1.2.125.0 Release date: March 22 2019 * Fix minor typos in the event log messages * Update the EULA to the final general availability version > [!NOTE] > Build 1.2.125.0 is the general availability build. Thanks again to everyone for the product feedback!
## <a name="121160"></a>1.2.116.0 Release date: 3/13/2019 * The Get-AzureADPasswordProtectionProxy and Get-AzureADPasswordProtectionDCAgent cmdlets now report the software version and the current Azure tenant, with the following limitations: * Software version and Azure tenant data are only available for DC agents and proxies running version 1.2.116.0 or later. * Azure tenant data may not be reported until a new registration (or renewal) of the proxy or forest has occurred. * The proxy service now requires .NET 4.7 to be installed. * If .NET 4.7 is not already installed, download and run the installer found at the [.NET Framework 4.7 offline installer for Windows](https://support.microsoft.com/help/3186497/the-net-framework-4-7-offline-installer-for-windows). * On Server Core systems, it may be necessary to pass the /q flag to the .NET 4.7 installer for it to succeed. * The proxy service now supports automatic upgrade. Automatic upgrade uses the Microsoft Azure AD Connect Agent Updater service, which is installed side by side with the proxy service. Automatic upgrade is enabled by default. * Automatic upgrade can be enabled or disabled using the Set-AzureADPasswordProtectionProxyConfiguration cmdlet. The current setting can be queried using the Get-AzureADPasswordProtectionProxyConfiguration cmdlet. * The service binary for the domain controller agent service was renamed to AzureADPasswordProtectionDCAgent.exe. * The service binary for the proxy service was renamed to AzureADPasswordProtectionProxy.exe. Firewall rules may need to be modified accordingly if a third-party firewall is in use. * Note: if an http proxy configuration file was being used in a previous proxy installation, it will need to be renamed (from *proxyservice.exe.config* to *AzureADPasswordProtectionProxy.exe.config*) after this upgrade. * All time-limited functionality checks have been removed from the DC agent. * Minor bug fixes and logging improvements. ## <a name="12650"></a>1.2.65.0 Release date: February 1, 2019 Changes: * The DC agent and proxy services are now supported on Server Core. The minimum operating system requirements are unchanged from before: Windows Server 2012 for DC agents and Windows Server 2012 R2 for proxies. * The Register-AzureADPasswordProtectionProxy and Register-AzureADPasswordProtectionForest cmdlets now support code-based Azure authentication modes. * The Get-AzureADPasswordProtectionDCAgent cmdlet will skip invalid and/or corrupted service connection points. This change fixes the bug where domain controllers sometimes appear multiple times in the output. * The Get-AzureADPasswordProtectionSummaryReport cmdlet will skip invalid and/or corrupted service connection points. This change fixes the bug where domain controllers sometimes appear multiple times in the output. * The proxy PowerShell module is now registered under %ProgramFiles%\WindowsPowerShell\Modules. The machine's PSModulePath environment variable is no longer modified. * A new Get-AzureADPasswordProtectionProxy cmdlet was added to help discover proxies registered in a domain or forest.
* The DC agent uses a new folder in the sysvol share to replicate password policies and other files. Old folder location: `\\<domain>\sysvol\<domain fqdn>\Policies\{4A9AB66B-4365-4C2A-996C-58ED9927332D}` New folder location: `\\<domain>\sysvol\<domain fqdn>\AzureADPasswordProtection` (This change was made to avoid false-positive "orphaned GPO" warnings.) > [!NOTE] > No data migration or sharing is done between the old folder and the new folder. Older versions of the DC agent will continue to use the old location until they are upgraded to this version or later. Once all DC agents are running version 1.2.65.0 or later, the old sysvol folder may be deleted manually. * The DC agent and proxy services will now detect and delete corrupted copies of their respective service connection points. * Each DC agent will periodically delete corrupted and stale service connection points in its domain, for both proxy and DC agent service connection points. Proxy and DC agent service connection points are considered stale if their heartbeat timestamp is more than seven days old. * The DC agent will now renew the forest certificate as needed. * The proxy service will now renew the proxy certificate as needed. * Updates to the password validation algorithm: the global banned password list and the customer-specific banned password list (if configured) are combined before password validation. A given password may now be rejected (failure or audit-only) if it contains tokens from both the global list and the customer-specific list. The event log documentation has been updated to reflect this; see [Monitor Azure AD Password Protection](howto-password-ban-bad-on-premises-monitor.md). * Performance and robustness fixes * Improved logging > [!WARNING] > Time-limited functionality: the DC agent service in this release (1.2.65.0) will stop processing password validation requests as of September 1, 2019. DC agent services in earlier releases (see the list below) will stop processing as of July 1, 2019. The DC agent service in all releases will log 10021 events to the admin event log in the two months leading up to these deadlines. All time-limit restrictions will be removed in the next general availability release. The proxy agent service is not time-limited in any release, but it should still be upgraded to the latest version to take advantage of all subsequent bug fixes and other improvements. ## <a name="12250"></a>1.2.25.0 Release date: November 1, 2018 Fixes: * The domain controller agent and proxy services should no longer fail due to certificate trust failures. * The DC agent and proxy services have fixes for FIPS-compliant machines. * The proxy service will now work correctly in a TLS 1.2-only network environment. * Minor performance and robustness fixes * Improved logging Changes: * The minimum operating system level required for the proxy service is now Windows Server 2012 R2. The minimum operating system level required for the DC agent service remains Windows Server 2012. * The proxy service now requires .NET version 4.6.2.
* The password validation algorithm uses an expanded character normalization table. This change may result in passwords being rejected that were accepted in earlier versions. ## <a name="12100"></a>1.2.10.0 Release date: August 17, 2018 Fixes: * Register-AzureADPasswordProtectionProxy and Register-AzureADPasswordProtectionForest now support multi-factor authentication * Register-AzureADPasswordProtectionProxy requires a WS2012 or later domain controller in the domain to avoid cryptography errors. * The domain controller agent service is more reliable about requesting a new Azure password policy at startup. * The DC agent service will request a new Azure password policy every hour if needed, but will now do so at a randomly selected start time. * The domain controller agent service will no longer cause an indefinite delay in advertising the new domain controller when installed on a server before its promotion as a replica. * The domain controller agent service now honors the "Enable password protection on Windows Server Active Directory" configuration setting * Both the domain controller agent and proxy installers now support in-place upgrade when upgrading to future versions. > [!WARNING] > In-place upgrade from version 1.1.10.3 is not supported and will result in an installation error. To upgrade to version 1.2.10 or later, you must first completely uninstall the domain controller agent and proxy service software, and then install the new version from scratch. Install and configure the Azure Active Directory Password Protection proxy service. It is not necessary to re-register the forest. > [!NOTE] > In-place upgrades of the DC agent software will require a reboot. * The domain controller agent and proxy services now support running on a server configured to use only FIPS-compliant algorithms. * Minor performance and robustness fixes * Improved logging ## <a name="11103"></a>1.1.10.3 Release date: June 15, 2018 Initial public preview release ## <a name="next-steps"></a>Next steps [Deploy Azure AD Password Protection](howto-password-ban-bad-on-premises-deploy.md)
80.295597
768
0.803791
por_Latn
0.999895
d0a175969b6de18465510a5447ef74cf8c6b2d46
248
md
Markdown
docs/python/api/notifications.md
dkumor/connectordb
4b9d5ad6c695e9ee13f479665b6220a26d2734f1
[ "Apache-2.0" ]
179
2016-06-07T22:33:46.000Z
2019-10-24T10:32:51.000Z
docs/python/api/notifications.md
dkumor/connectordb
4b9d5ad6c695e9ee13f479665b6220a26d2734f1
[ "Apache-2.0" ]
41
2016-06-07T17:45:14.000Z
2019-11-01T16:53:20.000Z
docs/python/api/notifications.md
dkumor/connectordb
4b9d5ad6c695e9ee13f479665b6220a26d2734f1
[ "Apache-2.0" ]
20
2016-06-05T17:40:08.000Z
2019-09-08T22:10:32.000Z
# Notifications ## API (python_notifications)= ### Notifications ```{eval-rst} .. automodule:: heedy.notifications :members: :undoc-members: :inherited-members: :show-inheritance: :special-members: __call__, __getitem__ ```
14.588235
43
0.673387
kor_Hang
0.208112
d0a24ecce0127ba0069218c7fb8c0d29988c8b31
464
md
Markdown
docs/no-async-iteration.md
keithamus/eslint-plugin-escompat
2953b9e5b2564551105b1f970b73d3ad06ad7cdb
[ "MIT" ]
3
2019-02-27T22:14:34.000Z
2021-10-30T21:39:01.000Z
docs/no-async-iteration.md
fisker/eslint-plugin-escompat
2953b9e5b2564551105b1f970b73d3ad06ad7cdb
[ "MIT" ]
4
2020-02-05T11:50:26.000Z
2020-04-29T16:37:55.000Z
docs/no-async-iteration.md
fisker/eslint-plugin-escompat
2953b9e5b2564551105b1f970b73d3ad06ad7cdb
[ "MIT" ]
2
2019-03-18T17:52:04.000Z
2020-02-04T20:12:24.000Z
# no-async-iteration This rule prevents the use of Async Iteration - through `for await (... of ...)` statements: ```js for await (const line of readLines(path)) { console.log(line) } ``` These will not be allowed because they are not supported in the following browsers: - Edge < 79 - Safari < 12 - Firefox < 57 - Chrome < 63 This can be safely disabled if you intend to compile code with the `@babel/plugin-proposal-async-generator-functions` Babel plugin.
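If you go the transpilation route mentioned above, a minimal sketch of that setup might look like the following (this assumes an npm-based project and a standard Babel configuration file; adjust for your own build tooling):

```bash
# Install the Babel plugin that transpiles for-await-of (async iteration)
npm install --save-dev @babel/plugin-proposal-async-generator-functions

# Then list "@babel/plugin-proposal-async-generator-functions" under the
# "plugins" array of your Babel configuration (e.g. .babelrc or babel.config.js)
# so for-await-of is compiled away before it reaches the browsers listed above,
# and disable this rule (escompat/no-async-iteration) in your ESLint config.
```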
24.421053
130
0.715517
eng_Latn
0.99577
d0a3ba3beebd3a8b1aa4beaaaaeca5ba536862fc
9,762
md
Markdown
site/website/blog/2020-04-01-upgrading-to-jet-40.md
kwart/hazelcast-jet
b54dcf0cb4b2e3be2a1e98b099c5747f85b97bb6
[ "Apache-2.0" ]
null
null
null
site/website/blog/2020-04-01-upgrading-to-jet-40.md
kwart/hazelcast-jet
b54dcf0cb4b2e3be2a1e98b099c5747f85b97bb6
[ "Apache-2.0" ]
2
2020-07-21T12:40:39.000Z
2020-07-22T12:47:56.000Z
site/website/blog/2020-04-01-upgrading-to-jet-40.md
kwart/hazelcast-jet
b54dcf0cb4b2e3be2a1e98b099c5747f85b97bb6
[ "Apache-2.0" ]
null
null
null
--- title: Upgrading to Jet 4.0 author: Bartok Jozsef authorURL: https://www.linkedin.com/in/bjozsef/ authorImageURL: https://www.itdays.ro/public/images/speakers-big/Jozsef_Bartok.jpg --- As we have announce earlier [Jet 4.0 is out](/blog/2020/03/02/jet-40-is-released)! In this blog post we aim to give you the lower level details needed for migrating from older versions. Jet 4.0 is a major version release. According to the semantic versioning we apply, this means that in version 4.0 some of the API has changed in a breaking way and code written for 3.x may no longer compile against it. ## Jet on IMDG 4.0 Jet 4.0 uses IMDG 4.0, which is also a major release with its own breaking changes. For details see [IMDG Release Notes](https://docs.hazelcast.org/docs/rn/index.html#4-0) and [IMDG Migration Guides](https://docs.hazelcast.org/docs/4.0/manual/html-single/#migration-guides). The most important changes we made and which have affected Jet too are as follows: * We renamed many packages and moved classes around. For details see the [IMDG Release Notes](https://docs.hazelcast.org/docs/rn/index.html#4-0). The most obvious change is that many classes that used to be in the general `com.hazelcast.core` package are now in specific packages like `com.hazelcast.map` or `com.hazelcast.collection`. * `com.hazelcast.jet.function`, the package containing serializable variants of `java.util.function`, is now merged into `com.hazelcast.function`: `BiConsumerEx`, `BiFunctionEx`, `BinaryOperatorEx`, `BiPredicateEx`, `ComparatorEx`, `ComparatorsEx`, `ConsumerEx`, `FunctionEx`, `Functions`, `PredicateEx`, `SupplierEx`, `ToDoubleFunctionEx`, `ToIntFunctionEx`, `ToLongFunctionEx`. * `EntryProcessor` and several other classes and methods received a cleanup of their type parameters. See the [relevant section](https://docs.hazelcast.org/docs/4.0/manual/html-single/#introducing-lambda-friendly-interfaces) in the IMDG Migration Guide. * The term "group" in configuration was replaced with "cluster". See the code snippet below for an example. This changes a Jet Command Line parameter as well (`-g/--groupName` renamed to `-n/--cluster-name`). ```java clientConfig.setClusterName("cluster_name"); //clientConfig.getGroupConfig().setName("cluster_name") ``` * `EventJournalConfig` moved from the top-level Config class to data structure-specific configs (`MapConfig`, `CacheConfig`): ```java config.getMapConfig("map_name").getEventJournalConfig(); //config.getMapEventJournalConfig("map_name") ``` * `ICompletableFuture` was removed and replaced with the JDK-standard `CompletionStage`. This affects the return type of async methods. See the [relevant section](https://docs.hazelcast.org/docs/4.0/manual/html-single/#removal-of-icompletablefuture) in the IMDG Migration Guide. ## Jet API Changes We made multiple breaking changes in Jet’s own APIs too: * `IMapJet`, `ICacheJet` and `IListJet`, which used to be Jet-specific wrappers around IMDG’s standard `IMap`, `ICache` and `IList`, were removed. The methods that used to return these types now return the standard ones. * Renamed `Pipeline.drawFrom` to `Pipeline.readFrom` and `GeneralStage.drainTo` to `GeneralStage.writeTo`: ```java pipeline.readFrom(TestSources.items(1, 2, 3)).writeTo(Sinks.logger()); //pipeline.drawFrom(TestSources.items(1, 2, 3)).drainTo(Sinks.logger()); ``` * `ContextFactory` was renamed to `ServiceFactory` and we added support for instance-wide initialization. createFn now takes `ProcessorSupplier.Context` instead of just `JetInstance`. 
We also added convenience methods in `ServiceFactories` to simplify constructing the common variants: ```java ServiceFactories.sharedService(ctx -> Executors.newFixedThreadPool(8), ExecutorService::shutdown); //ContextFactory.withCreateFn(jet -> Executors.newFixedThreadPool(8)).withLocalSharing(); ServiceFactories.nonSharedService(ctx -> DateTimeFormatter.ofPattern("HH:mm:ss.SSS"), ConsumerEx.noop()); //ContextFactory.withCreateFn(jet -> DateTimeFormatter.ofPattern("HH:mm:ss.SSS")) ``` * `map/filter/flatMapUsingContext` was renamed to `map/filter/flatMapUsingService`: ```java pipeline.readFrom(TestSources.items(1, 2, 3)) .filterUsingService( ServiceFactories.sharedService(pctx -> 1), (svc, i) -> i % 2 == svc) .writeTo(Sinks.logger()); /* pipeline.drawFrom(TestSources.items(1, 2, 3)) .filterUsingContext( ContextFactory.withCreateFn(i -> 1), (ctx, i) -> i % 2 == ctx) .drainTo(Sinks.logger()); */ ``` * `filterUsingServiceAsync` has been removed. Usages can be replaced with `mapUsingServiceAsync`, which behaves like a filter if it returns a `null` future or the returned future contains a `null` result: ```java stage.mapUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<Long> f = new CompletableFuture<>(); executor.submit(() -> f.complete(item % 2 == 0 ? item : null)); return f; }); /* stage.filterUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<Boolean> f = new CompletableFuture<>(); executor.submit(() -> f.complete(item % 2 == 0)); return f; }); */ ``` * `flatMapUsingServiceAsync` has been removed. Usages can be replaced with `mapUsingServiceAsync` followed by non-async `flatMap`: ```java stage.mapUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<List<String>> f = new CompletableFuture<>(); executor.submit(() -> f.complete(Arrays.asList(item + "-1", item + "-2", item + "-3"))); return f; }) .flatMap(Traversers::traverseIterable); /* stage.flatMapUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<Traverser<String>> f = new CompletableFuture<>(); executor.submit(() -> f.complete(traverseItems(item + "-1", item + "-2", item + "-3"))); return f; }) */ ``` * The methods `withMaxPendingCallsPerProcessor(int)` and `withUnorderedAsyncResponses()` were removed from `ServiceFactory`. These properties are relevant only in the context of asynchronous operations and were used in conjunction with `GeneralStage.mapUsingServiceAsync(…​)`. In Jet 4.0 the `GeneralStage.mapUsingServiceAsync(…​)` method has a new variant with explicit parameters for the above settings: ```java stage.mapUsingServiceAsync( ServiceFactories.sharedService(ctx -> Executors.newFixedThreadPool(8)), 2, false, (exec, task) -> CompletableFuture.supplyAsync(() -> task, exec) ); /* stage.mapUsingContextAsync( ContextFactory.withCreateFn(jet -> Executors.newFixedThreadPool(8)) .withMaxPendingCallsPerProcessor(2) .withUnorderedAsyncResponses(), (exec, task) -> CompletableFuture.supplyAsync(() -> task, exec) ); */ ``` * `com.hazelcast.jet.pipeline.Sinks#mapWithEntryProcessor` got a new signature in order to accommodate the improved `EntryProcessor`, which became more lambda-friendly in IMDG (see the [relevant section](https://docs.hazelcast.org/docs/4.0/manual/html-single/#introducing-lambda-friendly-interfaces) in the IMDG Migration Guide). 
The return type of `EntryProcessor` is now an explicit parameter in ``mapWithEntryProcessor``'s method signature: ```java FunctionEx<Map.Entry<String, Integer>, EntryProcessor<String, Integer, Void>> entryProcFn = entry -> (EntryProcessor<String, Integer, Void>) e -> { e.setValue(e.getValue() == null ? 1 : e.getValue() + 1); return null; }; Sinks.mapWithEntryProcessor(map, Map.Entry::getKey, entryProcFn); /* FunctionEx<Map.Entry<String, Integer>, EntryProcessor<String, Integer>> entryProcFn = entry -> (EntryProcessor<String, Integer>) e -> { e.setValue(e.getValue() == null ? 1 : e.getValue() + 1); return null; }; Sinks.mapWithEntryProcessor(map, Map.Entry::getKey, entryProcFn); */ ``` * HDFS source and sink methods are now `Hadoop.inputFormat` and `Hadoop.outputFormat`. * `MetricsConfig` is no longer part of `JetConfig`, but resides in the IMDG `Config` class: ```java jetConfig.getHazelcastConfig().getMetricsConfig().setCollectionFrequencySeconds(1); //jetConfig.getMetricsConfig().setCollectionIntervalSeconds(1); ``` * `Traverser` type got a slight change in the `flatMap` lambda’s generic type wildcards. This change shouldn’t affect anything in practice. * In sources and sinks we changed the method signatures so that the lambda becomes the last parameter, where applicable. * `JetBootstrap.getInstance()` moved to `Jet.bootstrappedInstance()` and now it automatically creates an isolated local instance when not running through `jet submit`. If used from `jet submit`, the behaviour remains the same. * `JobConfig.addResource(…​`) is now `addClasspathResource(…​`). * `ResourceType`, `ResourceConfig` and `JobConfig.getResourceConfigs()` are now labeled as private API and we discourage their direct usage. We also renamed `ResourceType.REGULAR_FILE` to `ResourceType.FILE`, but this is now an internal change. ## Further help In case you encounter any difficulties with migrating to Jet 4.0 feel free to [contact us any time](https://gitter.im/hazelcast/hazelcast-jet).
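Purely as an illustration of the renamed command-line option called out above (`-g/--groupName` → `-n/--cluster-name`), submitting a job with the 4.0 CLI might look roughly like this (the cluster name and jar are made-up placeholders; check `jet --help` for the exact syntax of your distribution):

```bash
# Submit a packaged pipeline to the cluster named "prod-cluster"
bin/jet -n prod-cluster submit my-pipeline.jar
```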
40.17284
161
0.693301
eng_Latn
0.646708
d0a3cda7a34e6f86e619a5a74d4ec2690dea8143
18,153
md
Markdown
articles/hdinsight/kafka/apache-kafka-mirroring.md
discentem/azure-docs
b1495f74a87004c34c5e8112e2b9f520ce94e290
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/hdinsight/kafka/apache-kafka-mirroring.md
discentem/azure-docs
b1495f74a87004c34c5e8112e2b9f520ce94e290
[ "CC-BY-4.0", "MIT" ]
1
2021-12-26T08:14:40.000Z
2021-12-26T08:14:40.000Z
articles/hdinsight/kafka/apache-kafka-mirroring.md
discentem/azure-docs
b1495f74a87004c34c5e8112e2b9f520ce94e290
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Mirror Apache Kafka topics - Azure HDInsight description: Learn how to use Apache Kafka's mirroring feature to maintain a replica of a Kafka on HDInsight cluster by mirroring topics to a secondary cluster. ms.service: hdinsight ms.topic: how-to ms.custom: hdinsightactive ms.date: 11/29/2019 --- # Use MirrorMaker to replicate Apache Kafka topics with Kafka on HDInsight Learn how to use Apache Kafka's mirroring feature to replicate topics to a secondary cluster. You can run mirroring as a continuous process, or intermittently, to migrate data from one cluster to another. In this article, you'll use mirroring to replicate topics between two HDInsight clusters. These clusters are in different virtual networks in different datacenters. > [!WARNING] > Don't use mirroring as a means to achieve fault-tolerance. The offset to items within a topic are different between the primary and secondary clusters, so clients can't use the two interchangeably. If you are concerned about fault tolerance, you should set replication for the topics within your cluster. For more information, see [Get started with Apache Kafka on HDInsight](apache-kafka-get-started.md). ## How Apache Kafka mirroring works Mirroring works by using the [MirrorMaker](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330) tool, which is part of Apache Kafka. MirrorMaker consumes records from topics on the primary cluster, and then creates a local copy on the secondary cluster. MirrorMaker uses one (or more) *consumers* that read from the primary cluster, and a *producer* that writes to the local (secondary) cluster. The most useful mirroring setup for disaster recovery uses Kafka clusters in different Azure regions. To achieve this, the virtual networks where the clusters reside are peered together. The following diagram illustrates the mirroring process and how the communication flows between clusters: :::image type="content" source="./media/apache-kafka-mirroring/kafka-mirroring-vnets2.png" alt-text="Diagram of the mirroring process." border="false"::: The primary and secondary clusters can be different in the number of nodes and partitions, and offsets within the topics are different also. Mirroring maintains the key value that is used for partitioning, so record order is preserved on a per-key basis. ### Mirroring across network boundaries If you need to mirror between Kafka clusters in different networks, there are the following additional considerations: * **Gateways**: The networks must be able to communicate at the TCP/IP level. * **Server addressing**: You can choose to address your cluster nodes by using their IP addresses or fully qualified domain names. * **IP addresses**: If you configure your Kafka clusters to use IP address advertising, you can proceed with the mirroring setup by using the IP addresses of the broker nodes and ZooKeeper nodes. * **Domain names**: If you don't configure your Kafka clusters for IP address advertising, the clusters must be able to connect to each other by using fully qualified domain names (FQDNs). This requires a domain name system (DNS) server in each network that is configured to forward requests to the other networks. When you're creating an Azure virtual network, instead of using the automatic DNS provided with the network, you must specify a custom DNS server and the IP address for the server. After you create the virtual network, you must then create an Azure virtual machine that uses that IP address. Then you install and configure DNS software on it. 
> [!IMPORTANT] > Create and configure the custom DNS server before installing HDInsight into the virtual network. There is no additional configuration required for HDInsight to use the DNS server configured for the virtual network. For more information on connecting two Azure virtual networks, see [Configure a connection](../../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md). ## Mirroring architecture This architecture features two clusters in different resource groups and virtual networks: a primary and a secondary. ### Creation steps 1. Create two new resource groups: |Resource group | Location | |---|---| | kafka-primary-rg | Central US | | kafka-secondary-rg | North Central US | 1. Create a new virtual network **kafka-primary-vnet** in **kafka-primary-rg**. Leave the default settings. 1. Create a new virtual network **kafka-secondary-vnet** in **kafka-secondary-rg**, also with default settings. 1. Create two new Kafka clusters: | Cluster name | Resource group | Virtual network | Storage account | |---|---|---|---| | kafka-primary-cluster | kafka-primary-rg | kafka-primary-vnet | kafkaprimarystorage | | kafka-secondary-cluster | kafka-secondary-rg | kafka-secondary-vnet | kafkasecondarystorage | 1. Create virtual network peerings. This step will create two peerings: one from **kafka-primary-vnet** to **kafka-secondary-vnet**, and one back from **kafka-secondary-vnet** to **kafka-primary-vnet**. 1. Select the **kafka-primary-vnet** virtual network. 1. Under **Settings**, select **Peerings**. 1. Select **Add**. 1. On the **Add peering** screen, enter the details as shown in the following screenshot. :::image type="content" source="./media/apache-kafka-mirroring/hdi-add-vnet-peering.png" alt-text="Screenshot that shows H D Insight Kafka add virtual network peering." border="true"::: ### Configure IP advertising Configure IP advertising to enable a client to connect by using broker IP addresses, instead of domain names. 1. Go to the Ambari dashboard for the primary cluster: `https://PRIMARYCLUSTERNAME.azurehdinsight.net`. 1. Select **Services** > **Kafka**. Select the **Configs** tab. 1. Add the following config lines to the bottom **kafka-env template** section. Select **Save**. ``` # Configure Kafka to advertise IP addresses instead of FQDN IP_ADDRESS=$(hostname -i) echo advertised.listeners=$IP_ADDRESS sed -i.bak -e '/advertised/{/advertised@/!d;}' /usr/hdp/current/kafka-broker/conf/server.properties echo "advertised.listeners=PLAINTEXT://$IP_ADDRESS:9092" >> /usr/hdp/current/kafka-broker/conf/server.properties ``` 1. Enter a note on the **Save Configuration** screen, and select **Save**. 1. If you get a configuration warning, select **Proceed Anyway**. 1. On **Save Configuration Changes**, select **Ok**. 1. In the **Restart Required** notification, select **Restart** > **Restart All Affected**. Then select **Confirm Restart All**. :::image type="content" source="./media/apache-kafka-mirroring/ambari-restart-notification.png" alt-text="Screenshot that shows the Apache Ambari option to restart all affected." border="true"::: ### Configure Kafka to listen on all network interfaces 1. Stay on the **Configs** tab under **Services** > **Kafka**. In the **Kafka Broker** section, set the **listeners** property to `PLAINTEXT://0.0.0.0:9092`. 1. Select **Save**. 1. Select **Restart** > **Confirm Restart All**. ### Record broker IP addresses and ZooKeeper addresses for the primary cluster 1. Select **Hosts** on the Ambari dashboard. 1. Make a note of the IP addresses for the brokers and ZooKeepers. 
The broker nodes have **wn** as the first two letters of the host name, and the ZooKeeper nodes have **zk** as the first two letters of the host name.

    :::image type="content" source="./media/apache-kafka-mirroring/view-node-ip-addresses2.png" alt-text="Screenshot that shows the Apache Ambari view node i p addresses." border="true":::

1. Repeat the previous three steps for the second cluster, **kafka-secondary-cluster**: configure IP advertising, set listeners, and make a note of the broker and ZooKeeper IP addresses.

## Create topics

1. Connect to the primary cluster by using SSH:

    ```bash
    ssh [email protected]
    ```

    Replace `sshuser` with the SSH user name that you used when creating the cluster. Replace `PRIMARYCLUSTER` with the base name that you used when creating the cluster.

    For more information, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).

1. Use the following command to create two environment variables with the Apache ZooKeeper hosts and broker hosts for the primary cluster. Replace strings like `ZOOKEEPER_IP_ADDRESS1` with the actual IP addresses recorded earlier, such as `10.23.0.11` and `10.23.0.7`. The same goes for `BROKER_IP_ADDRESS1`. If you're using FQDN resolution with a custom DNS server, follow [these steps](apache-kafka-get-started.md#getkafkainfo) to get broker and ZooKeeper names.

    ```bash
    # get the ZooKeeper hosts for the primary cluster
    export PRIMARY_ZKHOSTS='ZOOKEEPER_IP_ADDRESS1:2181,ZOOKEEPER_IP_ADDRESS2:2181,ZOOKEEPER_IP_ADDRESS3:2181'

    # get the broker hosts for the primary cluster
    export PRIMARY_BROKERHOSTS='BROKER_IP_ADDRESS1:9092,BROKER_IP_ADDRESS2:9092,BROKER_IP_ADDRESS3:9092'
    ```

1. To create a topic named `testtopic`, use the following command:

    ```bash
    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic testtopic --zookeeper $PRIMARY_ZKHOSTS
    ```

1. Use the following command to verify that the topic was created:

    ```bash
    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $PRIMARY_ZKHOSTS
    ```

    The response contains `testtopic`.

1. Use the following to view the broker host information for this (the primary) cluster:

    ```bash
    echo $PRIMARY_BROKERHOSTS
    ```

    This returns information similar to the following text:

    `10.23.0.11:9092,10.23.0.7:9092,10.23.0.9:9092`

    Save this information. It's used in the next section.

## Configure mirroring

1. Connect to the secondary cluster by using a different SSH session:

    ```bash
    ssh [email protected]
    ```

    Replace `sshuser` with the SSH user name that you used when creating the cluster. Replace `SECONDARYCLUSTER` with the name that you used when creating the cluster.

    For more information, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).

1. Use a `consumer.properties` file to configure communication with the primary cluster. To create the file, use the following command:

    ```bash
    nano consumer.properties
    ```

    Use the following text as the contents of the `consumer.properties` file:

    ```yaml
    bootstrap.servers=PRIMARY_BROKERHOSTS
    group.id=mirrorgroup
    ```

    Replace `PRIMARY_BROKERHOSTS` with the broker host IP addresses from the primary cluster.

    This file describes the consumer information to use when reading from the primary Kafka cluster. For more information, see [Consumer Configs](https://kafka.apache.org/documentation#consumerconfigs) at `kafka.apache.org`.

    To save the file, press Ctrl+X, press Y, and then press Enter.
1. Before configuring the producer that communicates with the secondary cluster, set up a variable for the broker IP addresses of the secondary cluster. Use the following commands to create this variable:

    ```bash
    export SECONDARY_BROKERHOSTS='BROKER_IP_ADDRESS1:9092,BROKER_IP_ADDRESS2:9092,BROKER_IP_ADDRESS3:9092'
    ```

    The command `echo $SECONDARY_BROKERHOSTS` should return information similar to the following text:

    `10.23.0.14:9092,10.23.0.4:9092,10.23.0.12:9092`

1. Use a `producer.properties` file to communicate with the secondary cluster. To create the file, use the following command:

    ```bash
    nano producer.properties
    ```

    Use the following text as the contents of the `producer.properties` file:

    ```yaml
    bootstrap.servers=SECONDARY_BROKERHOSTS
    compression.type=none
    ```

    Replace `SECONDARY_BROKERHOSTS` with the broker IP addresses used in the previous step.

    For more information, see [Producer Configs](https://kafka.apache.org/documentation#producerconfigs) at `kafka.apache.org`.

1. Use the following commands to create an environment variable with the IP addresses of the ZooKeeper hosts for the secondary cluster:

    ```bash
    # get the ZooKeeper hosts for the secondary cluster
    export SECONDARY_ZKHOSTS='ZOOKEEPER_IP_ADDRESS1:2181,ZOOKEEPER_IP_ADDRESS2:2181,ZOOKEEPER_IP_ADDRESS3:2181'
    ```

1. The default configuration for Kafka on HDInsight doesn't allow the automatic creation of topics. You must use one of the following options before starting the mirroring process:

    * **Create the topics on the secondary cluster**: This option also allows you to set the number of partitions and the replication factor. You can create topics ahead of time by using the following command:

        ```bash
        /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic testtopic --zookeeper $SECONDARY_ZKHOSTS
        ```

        Replace `testtopic` with the name of the topic to create.

    * **Configure the cluster for automatic topic creation**: This option allows MirrorMaker to automatically create topics. Note that it might create them with a different number of partitions or a different replication factor than the primary topic.

        To configure the secondary cluster to automatically create topics, perform these steps:

        1. Go to the Ambari dashboard for the secondary cluster: `https://SECONDARYCLUSTERNAME.azurehdinsight.net`.
        1. Select **Services** > **Kafka**. Then select the **Configs** tab.
        1. In the __Filter__ field, enter a value of `auto.create`. This filters the list of properties and displays the `auto.create.topics.enable` setting.
        1. Change the value of `auto.create.topics.enable` to `true`, and then select __Save__. Add a note, and then select __Save__ again.
        1. Select the __Kafka__ service, select __Restart__, and then select __Restart all affected__. When prompted, select __Confirm restart all__.

        :::image type="content" source="./media/apache-kafka-mirroring/kafka-enable-auto-create-topics.png" alt-text="Screenshot that shows how to enable auto create topics in the kafka service." border="true":::

## Start MirrorMaker

> [!NOTE]
> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
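A practical note before you begin: the MirrorMaker command in the next step runs in the foreground of your SSH session and stops when you press Ctrl+C or disconnect. If you want mirroring to keep running after the session closes, one minimal option (not part of the original walkthrough, and using the same property files created earlier) is to wrap the same command with `nohup` and a log file:

```bash
nohup /usr/hdp/current/kafka-broker/bin/kafka-run-class.sh kafka.tools.MirrorMaker \
    --consumer.config consumer.properties \
    --producer.config producer.properties \
    --whitelist testtopic \
    --num.streams 4 > mirrormaker.log 2>&1 &
```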
1. From the SSH connection to the secondary cluster, use the following command to start the MirrorMaker process:

    ```bash
    /usr/hdp/current/kafka-broker/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config consumer.properties --producer.config producer.properties --whitelist testtopic --num.streams 4
    ```

    The parameters used in this example are:

    |Parameter |Description |
    |---|---|
    |`--consumer.config`|Specifies the file that contains consumer properties. You use these properties to create a consumer that reads from the primary Kafka cluster.|
    |`--producer.config`|Specifies the file that contains producer properties. You use these properties to create a producer that writes to the secondary Kafka cluster.|
    |`--whitelist`|A list of topics that MirrorMaker replicates from the primary cluster to the secondary.|
    |`--num.streams`|The number of consumer threads to create.|

    The consumer on the secondary node is now waiting to receive messages.

1. From the SSH connection to the primary cluster, use the following command to start a producer and send messages to the topic:

    ```bash
    /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $PRIMARY_BROKERHOSTS --topic testtopic
    ```

    When you arrive at a blank line with a cursor, type in a few text messages. The messages are sent to the topic on the primary cluster. When done, press Ctrl+C to end the producer process.

1. From the SSH connection to the secondary cluster, press Ctrl+C to end the MirrorMaker process. It might take several seconds to end the process. To verify that the messages were replicated to the secondary cluster, use the following command:

    ```bash
    /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $SECONDARY_BROKERHOSTS --topic testtopic --from-beginning
    ```

    The list of topics on the secondary cluster now includes `testtopic`, which is created when MirrorMaker mirrors the topic from the primary cluster to the secondary. The messages retrieved from the topic are the same as the ones you entered on the primary cluster.

## Delete the cluster

[!INCLUDE [delete-cluster-warning](../includes/hdinsight-delete-cluster-warning.md)]

The steps in this article created clusters in different Azure resource groups. To delete all of the resources created, you can delete the two resource groups created: **kafka-primary-rg** and **kafka-secondary-rg**. Deleting the resource groups removes all of the resources created by following this article, including clusters, virtual networks, and storage accounts.

## Next steps

In this article, you learned how to use [MirrorMaker](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330) to create a replica of an [Apache Kafka](https://kafka.apache.org/) cluster. Use the following links to discover other ways to work with Kafka:

* [Apache Kafka MirrorMaker documentation](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330) at cwiki.apache.org.
* [Kafka Mirror Maker best practices](https://community.cloudera.com/t5/Community-Articles/Kafka-Mirror-Maker-Best-Practices/ta-p/249269)
* [Get started with Apache Kafka on HDInsight](apache-kafka-get-started.md)
* [Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md)
* [Connect to Apache Kafka through an Azure virtual network](apache-kafka-connect-vpn-gateway.md)
59.130293
662
0.742412
eng_Latn
0.983818
d0a3e288610153c556775f11ff7dac6a863de370
17,488
md
Markdown
articles/app-service-web/web-sites-python-configure.md
OpenLocalizationTestOrg/azure-docs-pr15_de-DE
0d89047316de3b54555bd8da4452d449691018ad
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service-web/web-sites-python-configure.md
OpenLocalizationTestOrg/azure-docs-pr15_de-DE
0d89047316de3b54555bd8da4452d449691018ad
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service-web/web-sites-python-configure.md
OpenLocalizationTestOrg/azure-docs-pr15_de-DE
0d89047316de3b54555bd8da4452d449691018ad
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties pageTitle="Konfigurieren von Python mit Azure App-Verwaltungsdienst Web Apps" description="In diesem Lernprogramm werden die Optionen zum Erstellen und konfigurieren einen einfachen Webserver Gateway-Benutzeroberfläche (WSGI) kompatible Python Anwendung auf Azure App Dienst Web Apps." services="app-service" documentationCenter="python" tags="python" authors="huguesv" manager="wpickett" editor=""/> <tags ms.service="app-service" ms.workload="na" ms.tgt_pltfrm="na" ms.devlang="python" ms.topic="article" ms.date="02/26/2016" ms.author="huvalo"/> # <a name="configuring-python-with-azure-app-service-web-apps"></a>Konfigurieren von Python mit Azure App-Verwaltungsdienst Web Apps In diesem Lernprogramm werden die Optionen zum Erstellen und Konfigurieren einer einfachen Web Server Gateway-Benutzeroberfläche (WSGI) kompatiblen Python Anwendungs auf [Azure App Dienst Web Apps](http://go.microsoft.com/fwlink/?LinkId=529714). Es werden zusätzliche Features der Git Bereitstellung, z. B. virtuelle Umgebung und Paketinstallation requirements.txt verwenden. ## <a name="bottle-django-or-flask"></a>Flaschen, Django oder wird? Die Azure Marketplace enthält Vorlagen für die Framework Flaschen, Django und wird. Wenn Sie Web app im App-Verwaltungsdienst Azure Entwickeln, oder Sie nicht mit Git vertraut sind, empfehlen wir, dass Sie eine der folgenden Lernprogramme, führen Sie die eine schrittweise Anleitung zum Erstellen einer Anwendung arbeiten, aus dem Katalog Git Bereitstellung von Windows oder Mac mit enthalten: - [Erstellen von Web-apps mit Flaschen](web-sites-python-create-deploy-bottle-app.md) - [Erstellen von Web-apps mit Django](web-sites-python-create-deploy-django-app.md) - [Erstellen von Web-apps mit wird](web-sites-python-create-deploy-flask-app.md) ## <a name="web-app-creation-on-azure-portal"></a>Web app Erstellung Azure-Portal In diesem Lernprogramm wird davon ausgegangen, einer vorhandenen Azure-Abonnement und den Zugriff auf das Portal Azure. Wenn Sie eine vorhandene Web app nicht verfügen, können Sie eine vom [Azure-Portal](https://portal.azure.com)erstellen. Klicken Sie auf die Schaltfläche ' Neu ' in der oberen linken Ecke, und klicken Sie auf **Web + Mobile** > **Web app**. ## <a name="git-publishing"></a>Git für die Veröffentlichung Konfigurieren Sie anhand der Anweisungen bei der [Lokalen Bereitstellung von Git Azure App Dienst](app-service-deploy-local-git.md)Git Veröffentlichung für Ihre neu erstellten Web app. In diesem Lernprogramm verwendet Git erstellen, verwalten und unsere Python Web app auf App-Verwaltungsdienst Azure veröffentlichen. Nachdem Sie für die Veröffentlichung Git eingerichtet haben, wird ein Repository Git erstellt und Web app zugeordnet. URL des Repositorys wird angezeigt und kann künftig zum Senden der Daten aus der lokalen Entwicklungsumgebung in der Cloud. Zum Veröffentlichen von Applications über Git stellen Sie sicher, dass ein Git Client auch installiert ist, und führen Sie die Anweisungen zur Verfügung gestellt, um Ihrer Web app-Inhalten an die App-Verwaltungsdienst Azure übermitteln. ## <a name="application-overview"></a>Übersicht über die Anwendung In den nächsten Abschnitten werden die folgenden Dateien erstellt. Sie sollten im Stammverzeichnis der Git Repository platziert werden. 
app.py requirements.txt runtime.txt web.config ptvs_virtualenv_proxy.py ## <a name="wsgi-handler"></a>WSGI Ereignishandler WSGI ist ein Python Standard durch [PEP 3333](http://www.python.org/dev/peps/pep-3333/) definieren eine Schnittstelle zwischen dem Webserver und Python beschrieben. Es bietet eine standardisierte Schnittstelle zum Schreiben von verschiedenen Webanwendungen und Framework Python verwenden. Beliebte Python Web Framework verwenden heute WSGI. Azure App Dienst Web Apps bietet, die Sie für alle dass solche Strukturen unterstützen; Darüber hinaus können fortgeschrittene Benutzer auch eigene verfassen so lange der benutzerdefinierte Ereignishandler die WSGI Spezifikation Richtlinien folgt. Hier ist ein Beispiel für eine `app.py` , einen benutzerdefinierten Ereignishandler definiert: def wsgi_app(environ, start_response): status = '200 OK' response_headers = [('Content-type', 'text/plain')] start_response(status, response_headers) response_body = 'Hello World' yield response_body.encode() if __name__ == '__main__': from wsgiref.simple_server import make_server httpd = make_server('localhost', 5555, wsgi_app) httpd.serve_forever() Sie können diese Anwendung lokal mit ausführen `python app.py`, navigieren Sie zu `http://localhost:5555` in Ihrem Webbrowser. ## <a name="virtual-environment"></a>Virtuelle Umgebung Auch die oben genannten Beispiel-app externe Pakete benötigt wird nicht, ist es wahrscheinlich, dass Ihre Anwendung einige erforderlich sind. Zum Verwalten von externen Paket Abhängigkeiten unterstützt Git Azure-Bereitstellung die Erstellung virtueller Umgebungen. Wenn Azure ein requirements.txt im Stammverzeichnis des Repositorys erkennt, erstellt es automatisch eine virtuelle Umgebung mit dem Namen `env`. In diesem Fall nur auf der ersten Bereitstellung, oder während einer beliebigen Bereitstellung nach der ausgewählten Python Runtime geändert hat. Es werden wahrscheinlich eine virtuelle Umgebung für die Entwicklung lokal erstellen möchten, aber nicht in Ihrem Repository Git einzuschließen. ## <a name="package-management"></a>Verwalten von Paketen In requirements.txt aufgelisteten Pakete werden in der virtuellen Umgebung mit Pip automatisch installiert. In diesem Fall auf jede Bereitstellung, aber Pip überspringt Installation, wenn ein Paket bereits installiert ist. Beispiel für `requirements.txt`: azure==0.8.4 ## <a name="python-version"></a>Python-Version [AZURE.INCLUDE [web-sites-python-customizing-runtime](../../includes/web-sites-python-customizing-runtime.md)] Beispiel für `runtime.txt`: python-2.7 ## <a name="webconfig"></a>Web.config Sie müssen zum Erstellen einer web.config-Datei, um anzugeben, wie der Server Anfragen behandelt werden sollen. Beachten Sie, dass, wenn Sie eine Datei web.x.y.config in Ihrem Repository, x.y entspricht der ausgewählten Python Runtime, und Azure wird automatisch die entsprechende Datei als web.config kopiert haben. Die folgenden Beispiele geben web.config basieren auf eine virtuelle Umgebung Proxy-Skript, die im nächsten Abschnitt beschrieben wird. Sie arbeiten mit dem WSGI Ereignishandler im Beispiel verwendete `app.py` über. 
Beispiel für `web.config` für Python 2.7: <?xml version="1.0"?> <configuration> <appSettings> <add key="WSGI_ALT_VIRTUALENV_HANDLER" value="app.wsgi_app" /> <add key="WSGI_ALT_VIRTUALENV_ACTIVATE_THIS" value="D:\home\site\wwwroot\env\Scripts\activate_this.py" /> <add key="WSGI_HANDLER" value="ptvs_virtualenv_proxy.get_virtualenv_handler()" /> <add key="PYTHONPATH" value="D:\home\site\wwwroot" /> </appSettings> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <handlers> <remove name="Python27_via_FastCGI" /> <remove name="Python34_via_FastCGI" /> <add name="Python FastCGI" path="handler.fcgi" verb="*" modules="FastCgiModule" scriptProcessor="D:\Python27\python.exe|D:\Python27\Scripts\wfastcgi.py" resourceType="Unspecified" requireAccess="Script" /> </handlers> <rewrite> <rules> <rule name="Static Files" stopProcessing="true"> <conditions> <add input="true" pattern="false" /> </conditions> </rule> <rule name="Configure Python" stopProcessing="true"> <match url="(.*)" ignoreCase="false" /> <conditions> <add input="{REQUEST_URI}" pattern="^/static/.*" ignoreCase="true" negate="true" /> </conditions> <action type="Rewrite" url="handler.fcgi/{R:1}" appendQueryString="true" /> </rule> </rules> </rewrite> </system.webServer> </configuration> Beispiel für `web.config` für Python 3.4: <?xml version="1.0"?> <configuration> <appSettings> <add key="WSGI_ALT_VIRTUALENV_HANDLER" value="app.wsgi_app" /> <add key="WSGI_ALT_VIRTUALENV_ACTIVATE_THIS" value="D:\home\site\wwwroot\env\Scripts\python.exe" /> <add key="WSGI_HANDLER" value="ptvs_virtualenv_proxy.get_venv_handler()" /> <add key="PYTHONPATH" value="D:\home\site\wwwroot" /> </appSettings> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <handlers> <remove name="Python27_via_FastCGI" /> <remove name="Python34_via_FastCGI" /> <add name="Python FastCGI" path="handler.fcgi" verb="*" modules="FastCgiModule" scriptProcessor="D:\Python34\python.exe|D:\Python34\Scripts\wfastcgi.py" resourceType="Unspecified" requireAccess="Script" /> </handlers> <rewrite> <rules> <rule name="Static Files" stopProcessing="true"> <conditions> <add input="true" pattern="false" /> </conditions> </rule> <rule name="Configure Python" stopProcessing="true"> <match url="(.*)" ignoreCase="false" /> <conditions> <add input="{REQUEST_URI}" pattern="^/static/.*" ignoreCase="true" negate="true" /> </conditions> <action type="Rewrite" url="handler.fcgi/{R:1}" appendQueryString="true" /> </rule> </rules> </rewrite> </system.webServer> </configuration> Statische Dateien werden direkt in den Webserver behandelt werden, ohne zu durchlaufen Python-Code zum Verbessern der Leistung. In den Beispielen oben sollte der Speicherort der statischen Dateien auf dem Datenträger die Position in der URL übereinstimmen. Dies bedeutet, dass eine Anforderung für `http://pythonapp.azurewebsites.net/static/site.css` wird die Datei auf dem Datenträger am dienen `\static\site.css`. `WSGI_ALT_VIRTUALENV_HANDLER`ist die Stelle, an der Sie den Ereignishandler WSGI angeben. In den Beispielen oben Sie `app.wsgi_app` , da der Ereignishandler eine Funktion namens ist `wsgi_app` in `app.py` im Stammordner. `PYTHONPATH`kann angepasst werden, aber wenn Sie alle Ihre Abhängigkeiten in der virtuellen Umgebung installieren, indem Sie sie in requirements.txt angeben, müssen Sie dürfen nicht ändern. 
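One optional addition for troubleshooting: the proxy script described in the next section writes fatal errors to the file named by a `WSGI_LOG` environment variable, if one is set. You can define it as an extra app setting inside either `appSettings` block above; the file name shown here is just a suggestion (any writable path under `D:\home` works):

    <appSettings>
      <!-- existing keys from the examples above ... -->
      <add key="WSGI_LOG" value="D:\home\LogFiles\wsgi_app.log" />
    </appSettings>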
## <a name="virtual-environment-proxy"></a>Virtuelle Umgebung Proxy Das folgende Skript wird verwendet, um den Ereignishandler WSGI abgerufen werden, aktivieren die virtuelle Umgebung und der Protokolldateien Fehler. Es ist generische und ohne Änderungen verwendet werden soll. Inhalt des `ptvs_virtualenv_proxy.py`: # ############################################################################ # # Copyright (c) Microsoft Corporation. # # This source code is subject to terms and conditions of the Apache License, Version 2.0. A # copy of the license can be found in the License.html file at the root of this distribution. If # you cannot locate the Apache License, Version 2.0, please send an email to # [email protected]. By using this source code in any fashion, you are agreeing to be bound # by the terms of the Apache License, Version 2.0. # # You must not remove this notice, or any other, from this software. # # ########################################################################### import datetime import os import sys import traceback if sys.version_info[0] == 3: def to_str(value): return value.decode(sys.getfilesystemencoding()) def execfile(path, global_dict): """Execute a file""" with open(path, 'r') as f: code = f.read() code = code.replace('\r\n', '\n') + '\n' exec(code, global_dict) else: def to_str(value): return value.encode(sys.getfilesystemencoding()) def log(txt): """Logs fatal errors to a log file if WSGI_LOG env var is defined""" log_file = os.environ.get('WSGI_LOG') if log_file: f = open(log_file, 'a+') try: f.write('%s: %s' % (datetime.datetime.now(), txt)) finally: f.close() ptvsd_secret = os.getenv('WSGI_PTVSD_SECRET') if ptvsd_secret: log('Enabling ptvsd ...\n') try: import ptvsd try: ptvsd.enable_attach(ptvsd_secret) log('ptvsd enabled.\n') except: log('ptvsd.enable_attach failed\n') except ImportError: log('error importing ptvsd.\n'); def get_wsgi_handler(handler_name): if not handler_name: raise Exception('WSGI_ALT_VIRTUALENV_HANDLER env var must be set') if not isinstance(handler_name, str): handler_name = to_str(handler_name) module_name, _, callable_name = handler_name.rpartition('.') should_call = callable_name.endswith('()') callable_name = callable_name[:-2] if should_call else callable_name name_list = [(callable_name, should_call)] handler = None last_tb = '' while module_name: try: handler = __import__(module_name, fromlist=[name_list[0][0]]) last_tb = '' for name, should_call in name_list: handler = getattr(handler, name) if should_call: handler = handler() break except ImportError: module_name, _, callable_name = module_name.rpartition('.') should_call = callable_name.endswith('()') callable_name = callable_name[:-2] if should_call else callable_name name_list.insert(0, (callable_name, should_call)) handler = None last_tb = ': ' + traceback.format_exc() if handler is None: raise ValueError('"%s" could not be imported%s' % (handler_name, last_tb)) return handler activate_this = os.getenv('WSGI_ALT_VIRTUALENV_ACTIVATE_THIS') if not activate_this: raise Exception('WSGI_ALT_VIRTUALENV_ACTIVATE_THIS is not set') def get_virtualenv_handler(): log('Activating virtualenv with %s\n' % activate_this) execfile(activate_this, dict(__file__=activate_this)) log('Getting handler %s\n' % os.getenv('WSGI_ALT_VIRTUALENV_HANDLER')) handler = get_wsgi_handler(os.getenv('WSGI_ALT_VIRTUALENV_HANDLER')) log('Got handler: %r\n' % handler) return handler def get_venv_handler(): log('Activating venv with executable at %s\n' % activate_this) import site sys.executable = 
activate_this old_sys_path, sys.path = sys.path, [] site.main() sys.path.insert(0, '') for item in old_sys_path: if item not in sys.path: sys.path.append(item) log('Getting handler %s\n' % os.getenv('WSGI_ALT_VIRTUALENV_HANDLER')) handler = get_wsgi_handler(os.getenv('WSGI_ALT_VIRTUALENV_HANDLER')) log('Got handler: %r\n' % handler) return handler ## <a name="customize-git-deployment"></a>Anpassen der Git Bereitstellung [AZURE.INCLUDE [web-sites-python-customizing-runtime](../../includes/web-sites-python-customizing-deployment.md)] ## <a name="troubleshooting---package-installation"></a>Problembehandlung - Paketinstallation [AZURE.INCLUDE [web-sites-python-troubleshooting-package-installation](../../includes/web-sites-python-troubleshooting-package-installation.md)] ## <a name="troubleshooting---virtual-environment"></a>Problembehandlung - virtuellen Umgebung [AZURE.INCLUDE [web-sites-python-troubleshooting-virtual-environment](../../includes/web-sites-python-troubleshooting-virtual-environment.md)] ## <a name="next-steps"></a>Nächste Schritte Weitere Informationen finden Sie unter der [Python Developer Center](/develop/python/). >[AZURE.NOTE] Wenn Sie mit Azure-App-Verwaltungsdienst Schritte vor dem für ein Azure-Konto anmelden möchten, wechseln Sie zu [App-Verwaltungsdienst versuchen](http://go.microsoft.com/fwlink/?LinkId=523751), in dem Sie eine kurzlebige Starter Web app sofort im App-Dienst erstellen können. Keine Kreditkarten erforderlich; keine Zusagen. ## <a name="whats-changed"></a>Was hat sich geändert * Ein Leitfaden zum Ändern von Websites-App-Dienst finden Sie unter: [Azure-App-Dienst und seinen Einfluss auf die vorhandenen Azure Services](http://go.microsoft.com/fwlink/?LinkId=529714)
45.18863
588
0.677207
deu_Latn
0.811626
d0a3e5464a426c4a70482e125e774cc3b6c23ee4
6,063
md
Markdown
task/buildpacks/0.1/README.md
wumaxd/catalog
3c23c446a970c5e02c011c894e2387e685ca086c
[ "Apache-2.0" ]
null
null
null
task/buildpacks/0.1/README.md
wumaxd/catalog
3c23c446a970c5e02c011c894e2387e685ca086c
[ "Apache-2.0" ]
null
null
null
task/buildpacks/0.1/README.md
wumaxd/catalog
3c23c446a970c5e02c011c894e2387e685ca086c
[ "Apache-2.0" ]
null
null
null
# Cloud Native Buildpacks This build template builds source into a container image using [Cloud Native Buildpacks](https://buildpacks.io). To do that, it uses [builders](https://buildpacks.io/docs/concepts/components/builder/#what-is-a-builder) to run buildpacks against your application. Cloud Native Buildpacks are pluggable, modular tools that transform application source code into OCI images. They replace Dockerfiles in the app development lifecycle, and enable for swift rebasing of images and modular control over images (through the use of builders), among other benefits. This command uses a builder to construct the image, and pushes it to the registry provided. See also [`buildpacks-phases`](../buildpacks-phases) for the deconstructed version of this task, which runs each of the [lifecycle phases](https://buildpacks.io/docs/concepts/components/lifecycle/#phases) individually (this task uses the [creator binary](https://github.com/buildpacks/spec/blob/platform/0.3/platform.md#operations), which coordinates and runs all of the phases). ## Install the Task ``` kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/task/buildpacks/0.1/buildpacks.yaml ``` > **NOTE:** This task is currently only compatible with Tekton **v0.11.0** and above, and CNB Platform API 0.3 (lifecycle v0.7.0 and above). For previous Platform API versions, [see below](#previous-platform-api-versions). ## Parameters * **`BUILDER_IMAGE`**: The image on which builds will run. (must include lifecycle and compatible buildpacks; _required_) * **`CACHE`**: The name of the persistent app cache volume. (_default:_ an empty directory -- effectively no cache) * **`CACHE_IMAGE`**: The name of the persistent app cache image. (_default:_ no cache image) * **`PLATFORM_DIR`**: A directory containing platform provided configuration, such as environment variables. Files of the format `/platform/env/MY_VAR` with content `my-value` will be translated by the lifecycle into environment variables provided to buildpacks. For more information, see the [buildpacks spec](https://github.com/buildpacks/spec/blob/master/buildpack.md#provided-by-the-platform). (_default:_ an empty directory) * **`USER_ID`**: The user ID of the builder image user, as a string value. (_default:_ `"1000"`) * **`GROUP_ID`**: The group ID of the builder image user, as a string value. (_default:_ `"1000"`) * **`PROCESS_TYPE`**: The default process type to set on the image. (_default:_ `"web"`) * **`SKIP_RESTORE`**: Do not write layer metadata or restore cached layers. (clear cache between each run) (_default:_ `"false"`) * **`RUN_IMAGE`**: Reference to a run image to use. (_default:_ run image of the builder) * **`SOURCE_SUBPATH`**: A subpath within the `source` input where the source to build is located. (_default:_ `""`) ### Outputs * **`image`**: An `image`-type `PipelineResource` specifying the image that should be built. ## Workspaces The `source` workspace holds the source to build. See `SOURCE_SUBPATH` above if source is located within a subpath of this input. ## Usage This `TaskRun` will use the `buildpacks` task to build the source code, then publish a container image. 
```
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-run
spec:
  taskRef:
    name: buildpacks
  podTemplate:
    volumes:
    # Uncomment the lines below to use an existing cache
    # - name: my-cache
    #   persistentVolumeClaim:
    #     claimName: my-cache-pvc
    # Uncomment the lines below to provide a platform directory
    # - name: my-platform-dir
    #   persistentVolumeClaim:
    #     claimName: my-platform-dir-pvc
  params:
  - name: SOURCE_SUBPATH
    value: <optional subpath within your source repo, e.g. "apps/java-maven">
  - name: BUILDER_IMAGE
    value: <your builder image tag, see below for suggestions, e.g. "builder-repo/builder-image:builder-tag">
  # Uncomment the lines below to use an existing cache
  # - name: CACHE
  #   value: my-cache
  # Uncomment the lines below to provide a platform directory
  # - name: PLATFORM_DIR
  #   value: my-platform-dir
  resources:
    outputs:
    - name: image
      resourceSpec:
        type: image
        params:
        - name: url
          value: <your output image tag, e.g. "gcr.io/app-repo/app-image:app-tag">
  workspaces:
  - name: source
    persistentVolumeClaim:
      claimName: my-source-pvc
```

### Example builders

Paketo:
- `gcr.io/paketo-buildpacks/builder:base` &rarr; Ubuntu bionic base image with buildpacks for Java, NodeJS and Golang
- `gcr.io/paketo-buildpacks/builder:tiny` &rarr; Tiny base image (bionic build image, distroless run image) with buildpacks for Golang
- `gcr.io/paketo-buildpacks/builder:full-cf` &rarr; cflinuxfs3 base image with buildpacks for Java, .NET, NodeJS, Golang, PHP, HTTPD and NGINX

> NOTE: The `gcr.io/paketo-buildpacks/builder:full-cf` builder requires setting the USER_ID and GROUP_ID parameters to 2000 in order to work.

Heroku:
- `heroku/buildpacks:18` &rarr; heroku-18 base image with buildpacks for Ruby, Java, Node.js, Python, Golang, & PHP

Google:
- `gcr.io/buildpacks/builder:v1` &rarr; Ubuntu 18 base image with buildpacks for .NET, Go, Java, Node.js, and Python

## Previous Platform API Versions

Use one of the following commands to install a previous version of this task. Be sure to also supply a compatible builder image (`BUILDER_IMAGE` input) when running the task (i.e. one that has a lifecycle implementing the expected platform API).

### CNB Platform API 0.2

Commit: [8c34055](https://github.com/tektoncd/catalog/tree/8c34055ea728413fb72af061e7bcbf1859a9fbd6/buildpacks#inputs)

```
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/8c34055ea728413fb72af061e7bcbf1859a9fbd6/buildpacks/buildpacks-v3.yaml
```

### CNB Platform API 0.1

Commit: [5c2ab7d6](https://github.com/tektoncd/catalog/tree/5c2ab7d6c3b2507d43b49577d7f1bee9c49ed8ab/buildpacks#inputs)

```
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/5c2ab7d6c3b2507d43b49577d7f1bee9c49ed8ab/buildpacks/buildpacks-v3.yaml
```
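For completeness, the example `TaskRun` above assumes that a `PersistentVolumeClaim` named `my-source-pvc` (and, if you enable the cache, `my-cache-pvc`) already exists in the namespace. A minimal sketch of such a claim — the storage size and access mode are arbitrary placeholders — looks like this:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```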
47.367188
384
0.747815
eng_Latn
0.918724
d0a4332de9b73cbb3fd871533065bb42c2b1e16b
23,251
md
Markdown
StoreAppProject/Pods/SideMenu/README.md
yazidlouda/StoreApp
eb423f609e31bbbac1a4ccd18b69a8265452cc18
[ "MIT" ]
5,728
2015-12-29T09:22:34.000Z
2022-03-30T10:13:47.000Z
StoreAppProject/Pods/SideMenu/README.md
yazidlouda/StoreApp
eb423f609e31bbbac1a4ccd18b69a8265452cc18
[ "MIT" ]
649
2016-01-14T13:20:09.000Z
2022-03-02T13:56:32.000Z
StoreAppProject/Pods/SideMenu/README.md
sbenson189/storeApp
bba7394e51c085266f9b3ad14cc58477022d9cd0
[ "MIT" ]
863
2015-12-29T09:22:34.000Z
2022-03-26T21:40:09.000Z
# ▤ SideMenu [![CircleCI](https://circleci.com/gh/jonkykong/SideMenu.svg?style=svg)](https://circleci.com/gh/jonkykong/SideMenu) [![Version](https://img.shields.io/cocoapods/v/SideMenu.svg?style=flat-square)](http://cocoapods.org/pods/SideMenu) [![Carthage compatible](https://img.shields.io/badge/Carthage-compatible-4BC51D.svg?style=flat-square)](https://github.com/Carthage/Carthage) [![License](https://img.shields.io/cocoapods/l/SideMenu.svg?style=flat-square)](http://cocoapods.org/pods/SideMenu) [![Platform](https://img.shields.io/cocoapods/p/SideMenu.svg?style=flat-square)](http://cocoapods.org/pods/SideMenu) ### If you like SideMenu, give it a ★ at the top right of this page. #### SideMenu needs your help! If you're a skilled iOS developer and want to help maintain this repository and answer issues asked by the community, please [send me an email](mailto:[email protected]?subject=I%20Want%20To%20Help!). > Hi, I'm Jon Kent and I am an iOS designer, developer, and mobile strategist. I love coffee and play the drums. > * [**Hire me**](mailto:[email protected]?subject=Let's%20build%20something%20amazing) to help you make cool stuff. *Note: If you're having a problem with SideMenu, please open an [issue](https://github.com/jonkykong/SideMenu/issues/new) and do not email me.* > * Check out my [website](http://massappeal.co) to see some of my other projects. > * Building and maintaining this **free** library takes a lot of my time and **saves you time**. Please consider paying it forward by supporting me with a small amount to my [PayPal](https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=contact%40jonkent%2eme&lc=US&currency_code=USD&bn=PP%2dDonationsBF%3abtn_donateCC_LG%2egif%3aNonHosted). (only **13** people have donated since 12/23/15 but **thank you** to those who have!) * **[Overview](#overview)** * [Preview Samples](#preview-samples) * **[Requirements](#requirements)** * **[Installation](#installation)** * [CocoaPods](#cocoapods) * [Carthage](#carthage) * [Swift Package Manager](#swift-package-manager) * **[Usage](#usage)** * [Code-less Storyboard Implementation](#code-less-storyboard-implementation) * [Code Implementation](#code-implementation) * **[Customization](#customization)** * [SideMenuManager](#sidemenumanager) * [SideMenuNavigationController](#sidemenunavigationcontroller) * [SideMenuNavigationControllerDelegate](#sidemenunavigationcontrollerdelegate) * [Advanced](#advanced) * [Known Issues](#known-issues) * [Thank You](#thank-you) * [License](#license) ## Overview SideMenu is a simple and versatile side menu control written in Swift. - [x] **It can be implemented in storyboard without a single line of [code](#code-less-storyboard-implementation).** - [x] Eight standard animation styles to choose from (there's even a parallax effect if you want to get weird). - [x] Highly customizable without needing to write tons of custom code. - [x] Supports continuous swiping between side menus on boths sides in a single gesture. - [x] Global menu configuration. Set-up once and be done for all screens. - [x] Menus can be presented and dismissed the same as any other view controller since this control uses [custom transitions](https://developer.apple.com/library/content/featuredarticles/ViewControllerPGforiPhoneOS/CustomizingtheTransitionAnimations.html). - [x] Animations use your view controllers, not snapshots. - [x] Properly handles screen rotation and in-call status bar height changes. Check out the example project to see it in action! 
### Preview Samples | Slide Out | Slide In | Dissolve | Slide In + Out | | --- | --- | --- | --- | | ![](https://raw.githubusercontent.com/jonkykong/SideMenu/master/etc/SlideOut.gif) | ![](https://raw.githubusercontent.com/jonkykong/SideMenu/master/etc/SlideIn.gif) | ![](https://raw.githubusercontent.com/jonkykong/SideMenu/master/etc/Dissolve.gif) | ![](https://raw.githubusercontent.com/jonkykong/SideMenu/master/etc/InOut.gif) | ## Requirements - [x] Xcode 11. - [x] Swift 5. - [x] iOS 10 or higher. ## Installation ### CocoaPods [CocoaPods](http://cocoapods.org) is a dependency manager for Cocoa projects. You can install it with the following command: ```bash $ gem install cocoapods ``` To integrate SideMenu into your Xcode project using CocoaPods, specify it in your `Podfile`: ```ruby source 'https://github.com/CocoaPods/Specs.git' platform :ios, '10.0' use_frameworks! pod 'SideMenu' # For Swift 5 use: # pod 'SideMenu', '~> 6.0' # For Swift 4.2 (no longer maintained) use: # pod 'SideMenu', '~> 5.0' ``` Then, run the following command: ```bash $ pod install ``` ### Carthage [Carthage](https://github.com/Carthage/Carthage) is a decentralized dependency manager that builds your dependencies and provides you with binary frameworks. You can install Carthage with [Homebrew](http://brew.sh/) using the following command: ```bash $ brew update $ brew install carthage ``` To integrate SideMenu into your Xcode project using Carthage, specify it in your `Cartfile`: ```ogdl github "jonkykong/SideMenu" "master" ``` ### Swift Package Manager The [Swift Package Manager](https://swift.org/package-manager/) is a tool for automating the distribution of Swift code and is integrated into the `swift` compiler. It is in early development, but SideMenu does support its use on supported platforms. Once you have your Swift package set up, adding SideMenu as a dependency is as easy as adding it to the `dependencies` value of your `Package.swift`. ```swift dependencies: [ .package(url: "https://github.com/jonkykong/SideMenu.git", from: "6.0.0") ] ``` ## Usage ### Code-less Storyboard Implementation 1. Create a Navigation Controller for a side menu. Set the `Custom Class` of the Navigation Controller to be `SideMenuNavigationController` in the **Identity Inspector**. Set the `Module` to `SideMenu` (ignore this step if you've manually added SideMenu to your project). Create a Root View Controller for the Navigation Controller (shown as a UITableViewController below). Set up any Triggered Segues you want in that view controller. ![](https://raw.githubusercontent.com/jonkykong/SideMenu/master/etc/Screenshot1.png) 2. Set the `Left Side` property of the `SideMenuNavigationController` to On if you want it to appear from the left side of the screen, or Off/Default if you want it to appear from the right side. ![](https://raw.githubusercontent.com/jonkykong/SideMenu/master/etc/Screenshot2.png) 3. Add a UIButton or UIBarButton to a view controller that you want to display the menu from. Set that button's Triggered Segues action to modally present the Navigation Controller from step 1. ![](https://raw.githubusercontent.com/jonkykong/SideMenu/master/etc/Screenshot3.png) That's it. 
*Note: you can only enable gestures in code.* ### Code Implementation First: ```swift import SideMenu ``` From a button, do something like this: ``` swift // Define the menu let menu = SideMenuNavigationController(rootViewController: YourViewController) // SideMenuNavigationController is a subclass of UINavigationController, so do any additional configuration // of it here like setting its viewControllers. If you're using storyboards, you'll want to do something like: // let menu = storyboard!.instantiateViewController(withIdentifier: "RightMenu") as! SideMenuNavigationController present(menu, animated: true, completion: nil) ``` To dismiss a menu programmatically, do something like this: ``` swift dismiss(animated: true, completion: nil) ``` To use gestures you have to use the `SideMenuManager`. In your `AppDelegate` do something like this: ``` swift // Define the menus let leftMenuNavigationController = SideMenuNavigationController(rootViewController: YourViewController) SideMenuManager.default.leftMenuNavigationController = leftMenuNavigationController let rightMenuNavigationController = SideMenuNavigationController(rootViewController: YourViewController) SideMenuManager.default.rightMenuNavigationController = rightMenuNavigationController // Setup gestures: the left and/or right menus must be set up (above) for these to work. // Note that these continue to work on the Navigation Controller independent of the view controller it displays! SideMenuManager.default.addPanGestureToPresent(toView: self.navigationController!.navigationBar) SideMenuManager.default.addScreenEdgePanGesturesToPresent(toView: self.navigationController!.view) // (Optional) Prevent status bar area from turning black when menu appears: leftMenuNavigationController.statusBarEndAlpha = 0 // Copy all settings to the other menu rightMenuNavigationController.settings = leftMenuNavigationController.settings ``` That's it. ### Customization #### SideMenuManager `SideMenuManager` supports the following: ``` swift /// The left menu. open var leftMenuNavigationController: SideMenuNavigationController? /// The right menu. public var rightMenuNavigationController: SideMenuNavigationController? /** Adds screen edge gestures for both left and right sides to a view to present a menu. - Parameter toView: The view to add gestures to. - Returns: The array of screen edge gestures added to `toView`. */ @discardableResult public func addScreenEdgePanGesturesToPresent(toView view: UIView) -> [UIScreenEdgePanGestureRecognizer] /** Adds screen edge gestures to a view to present a menu. - Parameter toView: The view to add gestures to. - Parameter forMenu: The menu (left or right) you want to add a gesture for. - Returns: The screen edge gestures added to `toView`. */ @discardableResult public func addScreenEdgePanGesturesToPresent(toView view: UIView, forMenu side: PresentDirection) -> UIScreenEdgePanGestureRecognizer /** Adds a pan edge gesture to a view to present menus. - Parameter toView: The view to add a pan gesture to. - Returns: The pan gesture added to `toView`. */ @discardableResult public func addPanGestureToPresent(toView view: UIView) -> UIPanGestureRecognizer ``` #### SideMenuNavigationController `SideMenuNavigationController` supports the following: ``` swift /// Prevents the same view controller (or a view controller of the same class) from being pushed more than once. Defaults to true. 
var allowPushOfSameClassTwice: Bool = true
/// Forces menus to always animate when appearing or disappearing, regardless of a pushed view controller's animation.
var alwaysAnimate: Bool = true
/// The animation options when a menu is displayed. Ignored when displayed with a gesture.
var animationOptions: UIView.AnimationOptions = .curveEaseInOut
/**
 The blur effect style of the menu if the menu's root view controller is a UITableViewController or UICollectionViewController.

 - Note: If you want cells in a UITableViewController menu to show vibrancy, make them a subclass of UITableViewVibrantCell.
 */
var blurEffectStyle: UIBlurEffect.Style? = nil
/// Duration of the remaining animation when the menu is partially dismissed with gestures. Default is 0.35 seconds.
var completeGestureDuration: Double = 0.35
/// Animation curve of the remaining animation when the menu is partially dismissed with gestures. Default is .easeIn.
var completionCurve: UIView.AnimationCurve = .easeIn
/// Duration of the animation when the menu is dismissed without gestures. Default is 0.35 seconds.
var dismissDuration: Double = 0.35
/// Automatically dismisses the menu when another view is presented from it.
var dismissOnPresent: Bool = true
/// Automatically dismisses the menu when another view controller is pushed from it.
var dismissOnPush: Bool = true
/// Automatically dismisses the menu when the screen is rotated.
var dismissOnRotation: Bool = true
/// Automatically dismisses the menu when the app goes to the background.
var dismissWhenBackgrounded: Bool = true
/// Enable or disable a swipe gesture that dismisses the menu. Will not be triggered when `presentingViewControllerUserInteractionEnabled` is set to true. Default is true.
var enableSwipeToDismissGesture: Bool = true
/// Enable or disable a tap gesture that dismisses the menu. Will not be triggered when `presentingViewControllerUserInteractionEnabled` is set to true. Default is true.
var enableTapToDismissGesture: Bool = true
/// The animation initial spring velocity when a menu is displayed. Ignored when displayed with a gesture.
var initialSpringVelocity: CGFloat = 1
/// Whether the menu appears on the right or left side of the screen. Right is the default. This property cannot be changed after the menu has loaded.
var leftSide: Bool = false
/// Width of the menu when presented on screen, showing the existing view controller in the remaining space.
var menuWidth: CGFloat = 240
/// Duration of the animation when the menu is presented without gestures. Default is 0.35 seconds.
var presentDuration: Double = 0.35
/// Enable or disable interaction with the presenting view controller while the menu is displayed. Enabling may make it difficult to dismiss the menu or cause exceptions if the user tries to present an already presented menu. `presentingViewControllerUseSnapshot` must also be set to false. Default is false.
var presentingViewControllerUserInteractionEnabled: Bool = false
/// Use a snapshot for the presenting view controller while the menu is displayed. Useful when layout changes occur during transitions. Not recommended for apps that support rotation. Default is false.
var presentingViewControllerUseSnapshot: Bool = false
/// The presentation style of the menu.
var presentationStyle: SideMenuPresentStyle = .viewSlideOut
/**
 The push style of the menu.

 There are six modes in MenuPushStyle:
 - defaultBehavior: The view controller is pushed onto the stack.
- popWhenPossible: If a view controller already in the stack is of the same class as the pushed view controller, the stack is instead popped back to the existing view controller. This behavior can help users from getting lost in a deep navigation stack. - preserve: If a view controller already in the stack is of the same class as the pushed view controller, the existing view controller is pushed to the end of the stack. This behavior is similar to a UITabBarController. - preserveAndHideBackButton: Same as .preserve and back buttons are automatically hidden. - replace: Any existing view controllers are released from the stack and replaced with the pushed view controller. Back buttons are automatically hidden. This behavior is ideal if view controllers require a lot of memory or their state doesn't need to be preserved.. - subMenu: Unlike all other behaviors that push using the menu's presentingViewController, this behavior pushes view controllers within the menu. Use this behavior if you want to display a sub menu. */ var pushStyle: MenuPushStyle = .default /// Draws `presentationStyle.backgroundColor` behind the status bar. Default is 0. var statusBarEndAlpha: CGFloat = 0 /// The animation spring damping when a menu is displayed. Ignored when displayed with a gesture. var usingSpringWithDamping: CGFloat = 1 /// Indicates if the menu is anywhere in the view hierarchy, even if covered by another view controller. var isHidden: Bool ``` #### SideMenuPresentStyle There are 8 pre-defined `SideMenuPresentStyle` options: ``` swift /// Menu slides in over the existing view. static let menuSlideIn: SideMenuPresentStyle /// The existing view slides out to reveal the menu underneath. static let viewSlideOut: SideMenuPresentStyle /// The existing view slides out while the menu slides in. static let viewSlideOutMenuIn: SideMenuPresentStyle /// The menu dissolves in over the existing view. static let menuDissolveIn: SideMenuPresentStyle /// The existing view slides out while the menu partially slides in. static let viewSlideOutMenuPartialIn: SideMenuPresentStyle /// The existing view slides out while the menu slides out from under it. static let viewSlideOutMenuOut: SideMenuPresentStyle /// The existing view slides out while the menu partially slides out from under it. static let viewSlideOutMenuPartialOut: SideMenuPresentStyle /// The existing view slides out and shrinks to reveal the menu underneath. static let viewSlideOutMenuZoom: SideMenuPresentStyle ``` #### SideMenuNavigationControllerDelegate To receive notifications when a menu is displayed from a view controller, have it adhere to the `SideMenuNavigationControllerDelegate` protocol: ``` swift extension MyViewController: SideMenuNavigationControllerDelegate { func sideMenuWillAppear(menu: SideMenuNavigationController, animated: Bool) { print("SideMenu Appearing! (animated: \(animated))") } func sideMenuDidAppear(menu: SideMenuNavigationController, animated: Bool) { print("SideMenu Appeared! (animated: \(animated))") } func sideMenuWillDisappear(menu: SideMenuNavigationController, animated: Bool) { print("SideMenu Disappearing! (animated: \(animated))") } func sideMenuDidDisappear(menu: SideMenuNavigationController, animated: Bool) { print("SideMenu Disappeared! (animated: \(animated))") } } ``` *Note: setting the `sideMenuDelegate` property on `SideMenuNavigationController` is optional. 
If your view controller adheres to the protocol then the methods will be called automatically.* ### Advanced <details> <summary>Click for Details</summary> #### Multiple SideMenuManagers For simplicity, `SideMenuManager.default` serves as the primary instance as most projects will only need one menu across all screens. If you need to show a different SideMenu using gestures, such as from a modal view controller presented from a previous SideMenu, do the following: 1. Declare a variable containing your custom `SideMenuManager` instance. You may want it to define it globally and configure it in your app delegate if menus will be used on multiple screens. ``` swift let customSideMenuManager = SideMenuManager() ``` 2. Setup and display menus with your custom instance the same as you would with the `SideMenuManager.default` instance. 3. If using Storyboards, subclass your instance of `SideMenuNavigationController` and set its `sideMenuManager` property to your custom instance. This must be done before `viewDidLoad` is called: ``` swift class MySideMenuNavigationController: SideMenuNavigationController { let customSideMenuManager = SideMenuManager() override func awakeFromNib() { super.awakeFromNib() sideMenuManager = customSideMenuManager } } ``` Alternatively, you can set `sideMenuManager` from the view controller that segues to your SideMenuNavigationController: ``` swift override func prepare(for segue: UIStoryboardSegue, sender: Any?) { if let sideMenuNavigationController = segue.destination as? SideMenuNavigationController { sideMenuNavigationController.sideMenuManager = customSideMenuManager } } ``` *Important: displaying SideMenu instances directly over each other is not supported. Use `menuPushStyle = .subMenu` to display multi-level menus instead.* ### SideMenuPresentationStyle If you want to create your own custom presentation style, create a subclass of `SideMenuPresentationStyle` and set your menu's `presentationStyle` to it: ```swift class MyPresentStyle: SideMenuPresentationStyle { override init() { super.init() /// Background color behind the views and status bar color backgroundColor = .black /// The starting alpha value of the menu before it appears menuStartAlpha = 1 /// Whether or not the menu is on top. If false, the presenting view is on top. Shadows are applied to the view on top. menuOnTop = false /// The amount the menu is translated along the x-axis. Zero is stationary, negative values are off-screen, positive values are on screen. menuTranslateFactor = 0 /// The amount the menu is scaled. Less than one shrinks the view, larger than one grows the view. menuScaleFactor = 1 /// The color of the shadow applied to the top most view. onTopShadowColor = .black /// The radius of the shadow applied to the top most view. onTopShadowRadius = 5 /// The opacity of the shadow applied to the top most view. onTopShadowOpacity = 0 /// The offset of the shadow applied to the top most view. onTopShadowOffset = .zero /// The ending alpha of the presenting view when the menu is fully displayed. presentingEndAlpha = 1 /// The amount the presenting view is translated along the x-axis. Zero is stationary, negative values are off-screen, positive values are on screen. presentingTranslateFactor = 0 /// The amount the presenting view is scaled. Less than one shrinks the view, larger than one grows the view. presentingScaleFactor = 1 /// The strength of the parallax effect on the presenting view once the menu is displayed. 
presentingParallaxStrength = .zero } /// This method is called just before the presentation transition begins. Use this to setup any animations. The super method does not need to be called. override func presentationTransitionWillBegin(to presentedViewController: UIViewController, from presentingViewController: UIViewController) {} /// This method is called during the presentation animation. Use this to animate anything alongside the menu animation. The super method does not need to be called. override func presentationTransition(to presentedViewController: UIViewController, from presentingViewController: UIViewController) {} /// This method is called when the presentation transition ends. Use this to finish any animations. The super method does not need to be called. override func presentationTransitionDidEnd(to presentedViewController: UIViewController, from presentingViewController: UIViewController, _ completed: Bool) {} /// This method is called just before the dismissal transition begins. Use this to setup any animations. The super method does not need to be called. override func dismissalTransitionWillBegin(to presentedViewController: UIViewController, from presentingViewController: UIViewController) {} /// This method is called during the dismissal animation. Use this to animate anything alongside the menu animation. The super method does not need to be called. override func dismissalTransition(to presentedViewController: UIViewController, from presentingViewController: UIViewController) {} /// This method is called when the dismissal transition ends. Use this to finish any animations. The super method does not need to be called. override func dismissalTransitionDidEnd(to presentedViewController: UIViewController, from presentingViewController: UIViewController, _ completed: Bool) {} } ``` </details> ## Known Issues * Issue [#258](https://github.com/jonkykong/SideMenu/issues/258). Using `presentingViewControllerUseSnapshot` can help preserve the experience. ## Thank You A special thank you to everyone that has [contributed](https://github.com/jonkykong/SideMenu/graphs/contributors) to this library to make it better. Your support is appreciated! ## License SideMenu is available under the MIT license. See the LICENSE file for more info.
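Pulling the pieces above together, here is a minimal, hypothetical configuration sketch (the view controller name and the specific values are placeholders, and it assumes the properties listed earlier are exposed on `SideMenuNavigationController`):

``` swift
// A minimal sketch: create a menu and apply a few of the settings documented above.
// MenuViewController is a placeholder for your own menu's root view controller.
let menu = SideMenuNavigationController(rootViewController: MenuViewController())
menu.leftSide = true                   // show the menu on the left edge
menu.menuWidth = 280                   // override the default width
menu.presentationStyle = .menuSlideIn  // one of the 8 pre-defined styles
menu.pushStyle = .preserve             // reuse view controllers already in the stack
present(menu, animated: true, completion: nil)
```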
56.987745
436
0.774977
eng_Latn
0.945636
d0a4a611062a0595dd3a61b0b7b9bfef110ab80c
2,991
md
Markdown
docs/code-quality/ca2141-transparent-methods-must-not-satisfy-linkdemands.md
adrianodaddiego/visualstudio-docs.it-it
b2651996706dc5cb353807f8448efba9f24df130
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/code-quality/ca2141-transparent-methods-must-not-satisfy-linkdemands.md
adrianodaddiego/visualstudio-docs.it-it
b2651996706dc5cb353807f8448efba9f24df130
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/code-quality/ca2141-transparent-methods-must-not-satisfy-linkdemands.md
adrianodaddiego/visualstudio-docs.it-it
b2651996706dc5cb353807f8448efba9f24df130
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: CA2141:I metodi Transparent non devono soddisfare i LinkDemand ms.date: 11/04/2016 ms.topic: reference f1_keywords: - CA2141 ms.assetid: 2957f5f7-c511-4180-ba80-752034f10a77 author: gewarren ms.author: gewarren manager: jillfra ms.workload: - multiple ms.openlocfilehash: fdd8ee4633cdc254bcfc5237391120ef887753da ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630 ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 04/23/2019 ms.locfileid: "62806971" --- # <a name="ca2141transparent-methods-must-not-satisfy-linkdemands"></a>CA2141:I metodi Transparent non devono soddisfare i LinkDemand ||| |-|-| |TypeName|TransparentMethodsMustNotSatisfyLinkDemands| |CheckId|CA2141| |Category|Microsoft.Security| |Modifica importante|Interruzione| ## <a name="cause"></a>Causa Un metodo trasparente per la sicurezza chiama un metodo in un assembly che non sia contrassegnato con il <xref:System.Security.AllowPartiallyTrustedCallersAttribute> attributo (APTCA) o un metodo trasparente per la sicurezza soddisfa un <xref:System.Security.Permissions.SecurityAction> `.LinkDemand` per un tipo o un metodo. ## <a name="rule-description"></a>Descrizione della regola Che soddisfa un LinkDemand è un'operazione sensibile di sicurezza che può causare l'elevazione dei privilegi non intenzionale dei privilegi. Codice SecurityTransparent non deve soddisfare i LinkDemand, perché non è soggetta agli stessi requisiti di controllo di sicurezza del codice SecurityCritical. I metodi Transparent negli assembly di livello 1 set di regole di sicurezza causa tutti i LinkDemand soddisfano da convertire in richieste complete in fase di esecuzione, che può causare problemi di prestazioni. Nell'assembly di livello 2 set di regole di sicurezza, i metodi transparent avrà esito negativo per la compilazione del compilatore just-in-time (JIT) se provano a soddisfa un LinkDemand. Negli assembly che utilizzano la sicurezza di livello 2, i tentativi da parte di un metodo trasparente per sicurezza soddisfa un LinkDemand o chiamare un metodo in un assembly APTCA non genera un <xref:System.MethodAccessException>; negli assembly di livello 1 LinkDemand diventano richieste complete. ## <a name="how-to-fix-violations"></a>Come correggere le violazioni Per correggere una violazione di questa regola, contrassegnare il metodo di accesso con il <xref:System.Security.SecurityCriticalAttribute> o <xref:System.Security.SecuritySafeCriticalAttribute> attributo o rimuovere i LinkDemand dal metodo di accesso. ## <a name="when-to-suppress-warnings"></a>Soppressione degli avvisi Non escludere un avviso da questa regola. ## <a name="example"></a>Esempio In questo esempio, un metodo trasparente tenta di chiamare un metodo che presenta un LinkDemand. Questa regola viene generato su questo codice. [!code-csharp[FxCop.Security.CA2141.TransparentMethodsMustNotSatisfyLinkDemands#1](../code-quality/codesnippet/CSharp/ca2141-transparent-methods-must-not-satisfy-linkdemands_1.cs)]
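As a rough English-language illustration of the fix described above (the type and method names here are invented for the example and do not come from the article), marking the calling method as security-critical satisfies the LinkDemand without leaving it transparent:

```csharp
using System.Security;
using System.Security.Permissions;

public static class LegacyApi
{
    // A method protected by a LinkDemand, the pattern the rule warns about.
    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.UnmanagedCode)]
    public static void ProtectedOperation() { }
}

public static class Caller
{
    // Left unannotated this method would be transparent and CA2141 would fire;
    // SecurityCritical (or SecuritySafeCritical) lets it satisfy the LinkDemand.
    [SecurityCritical]
    public static void CallProtectedOperation()
    {
        LegacyApi.ProtectedOperation();
    }
}
```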
65.021739
701
0.815781
ita_Latn
0.967731
d0a549a4c0d962dc5446ab443d340dd1301eeedf
1,250
md
Markdown
README.md
portiny/elasticsearch-nette
1274d26f5a182fd2b659ebf58bfcc35710000ed7
[ "MIT" ]
6
2017-12-09T11:49:04.000Z
2019-01-21T09:06:34.000Z
README.md
portiny/elasticsearch-nette
1274d26f5a182fd2b659ebf58bfcc35710000ed7
[ "MIT" ]
5
2017-12-09T22:28:33.000Z
2020-06-08T13:31:18.000Z
README.md
portiny/elasticsearch-nette
1274d26f5a182fd2b659ebf58bfcc35710000ed7
[ "MIT" ]
5
2017-12-09T18:04:17.000Z
2019-12-06T19:28:19.000Z
# [Portiny/Elasticsearch](https://github.com/portiny/elasticsearch) for Nette Framework [![Build Status](https://img.shields.io/travis/portiny/elasticsearch-nette.svg?style=flat-square)](https://travis-ci.org/portiny/elasticsearch-nette) [![Downloads](https://img.shields.io/packagist/dt/portiny/elasticsearch-nette.svg?style=flat-square)](https://packagist.org/packages/portiny/elasticsearch-nette) [![Latest stable](https://img.shields.io/github/tag/portiny/elasticsearch-nette.svg?style=flat-square)](https://packagist.org/packages/portiny/elasticsearch-nette) [Portiny/Elasticsearch](https://github.com/portiny/elasticsearch) integration extension for Nette Framework. ## Installation The simplest way to install Portiny/Elasticsearch-Nette is using [Composer](http://getcomposer.org/): ```sh $ composer require portiny/elasticsearch-nette ``` ## Resources * [Documentation](https://github.com/portiny/elasticsearch-nette/blob/master/docs/en/index.md) * [Contributing](https://github.com/portiny/portiny/blob/master/CODE_OF_CONDUCT.md) * [Report issues](https://github.com/portiny/portiny/issues) and [send Pull Requests](https://github.com/portiny/portiny/pulls) in the [main Portiny repository](https://github.com/portiny/portiny)
52.083333
197
0.7832
kor_Hang
0.171553
d0a55afe5f5c5e5444a1b92975cafda28f430d42
191
md
Markdown
README.md
DSkhaled/KYACOUBI_Personal_WebPage
bd6ed86ab501725ab35c76096ac2a982aa153c71
[ "CC-BY-3.0" ]
null
null
null
README.md
DSkhaled/KYACOUBI_Personal_WebPage
bd6ed86ab501725ab35c76096ac2a982aa153c71
[ "CC-BY-3.0" ]
null
null
null
README.md
DSkhaled/KYACOUBI_Personal_WebPage
bd6ed86ab501725ab35c76096ac2a982aa153c71
[ "CC-BY-3.0" ]
null
null
null
# My Personal Web Page

This repository contains a web page, which is only a glimpse of what I have done in my professional life. It contains my CV and the certificates I have earned.
63.666667
167
0.790576
eng_Latn
0.999991
d0a5d376bf6ef8920319ec5b0014e381673f3080
174
md
Markdown
inventory-sdk/docs/WaveSuggestionItem.md
KiboSoftware/kibo.sdk.java
5155a7c8a8e1d9e22563c1518b978467eff92f75
[ "MIT" ]
null
null
null
inventory-sdk/docs/WaveSuggestionItem.md
KiboSoftware/kibo.sdk.java
5155a7c8a8e1d9e22563c1518b978467eff92f75
[ "MIT" ]
1
2019-10-02T13:15:06.000Z
2019-10-02T13:15:06.000Z
inventory-sdk/docs/WaveSuggestionItem.md
KiboSoftware/kibo.sdk.java
5155a7c8a8e1d9e22563c1518b978467eff92f75
[ "MIT" ]
1
2022-01-09T04:41:33.000Z
2022-01-09T04:41:33.000Z
# WaveSuggestionItem

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**binName** | **String** | Bin Name |
15.818182
60
0.413793
yue_Hant
0.217154
d0a6eaa73c726e17ac32bf4a8e2ffd0e9d807c58
1,857
md
Markdown
README.md
kytart/goegl
362830877775070b2c5cc81a3a7917f6310e2d99
[ "MIT" ]
null
null
null
README.md
kytart/goegl
362830877775070b2c5cc81a3a7917f6310e2d99
[ "MIT" ]
null
null
null
README.md
kytart/goegl
362830877775070b2c5cc81a3a7917f6310e2d99
[ "MIT" ]
null
null
null
# What's that? <tt>egl</tt> is a [Go](http://golang.org) package for accessing the [EGL](http://en.wikipedia.org/wiki/EGL_\(OpenGL\)) (Embedded Graphics Library). EGL is the gateway to hardware-accelerated graphics, through OpenGL, on many embedded devices. The project was born to access the GPU of the [Raspberry PI](http://raspberrypi.org) (check this [post](https://plus.google.com/u/0/100271912081202470197/posts/LQVYfrj49qA)) but it has now been generalized to be installable with `go get` on other platforms too. This has the benefit that you can develop OpenGL ES 2.0 applications on your desktop computer using [Mesa](http://www.mesa3d.org/egl.html) and deploy them on embedded systems like the Raspberry PI. # Currently supported platforms * Raspberry PI * Xorg * Android (see [Mandala](https://github.com/remogatto/mandala)) # Install The package aims to be multiplatform. To achieve this, two approaches are used: [build constraints](http://golang.org/pkg/go/build) and per-platform/per-implementation initialization [boilerplate code](platform/) (see the sketch at the end of this README). By default egl will use the xorg implementation. ~~~bash $ go get github.com/kytart/goegl # use xorg by default ~~~ To build egl against a particular implementation, use the specific build constraint, for example: ~~~bash $ go get -tags=raspberry github.com/kytart/goegl # install on the raspberry ~~~ On a Debian-like system you will need to install the following prerequisites: ~~~bash $ sudo apt-get install libegl1-mesa-dev libgles2-mesa-dev ~~~ # Usage Please refer to the [examples](https://github.com/remogatto/egl-examples/). # To Do * Add support for other platforms (e.g. android) * Add tests # Credits Thanks to Roger Roach for his [egl/opengles](https://github.com/mortdeus/egles) libraries. I stole a lot from his repository! # License See [LICENSE](LICENSE)
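The per-platform boilerplate mentioned in the Install section works through Go build constraints. A minimal, hypothetical sketch of such a file (the package and function names are placeholders, not taken from this repository):

~~~go
//go:build raspberry
// +build raspberry

// This file is only compiled when the `raspberry` build tag is passed
// (e.g. `go get -tags=raspberry ...`), so platform-specific initialization
// can live here without affecting the other targets.
package platform

import "fmt"

// Init is a stub standing in for the Raspberry Pi specific EGL bootstrap.
func Init() {
	fmt.Println("initializing EGL for the Raspberry Pi")
}
~~~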
28.136364
77
0.760905
eng_Latn
0.92666
d0a7e0ebac8356d4bad6bbbf52fe35b1e8b4aa72
11,388
md
Markdown
_posts/2015-10-10-short-story.md
4dahalibut/4dahalibut.github.io
fb915df47d7e2ecfba3728a5d1315a302c6a8dbf
[ "MIT" ]
null
null
null
_posts/2015-10-10-short-story.md
4dahalibut/4dahalibut.github.io
fb915df47d7e2ecfba3728a5d1315a302c6a8dbf
[ "MIT" ]
null
null
null
_posts/2015-10-10-short-story.md
4dahalibut/4dahalibut.github.io
fb915df47d7e2ecfba3728a5d1315a302c6a8dbf
[ "MIT" ]
null
null
null
--- layout: post title: The House --- ![Cabin](/images/cabin.jpeg) The wind was blowing fast, it felt like nothing could stop its rush. The ragged wooden carriage slowly ground to a stop, its metal wheels grinding their axles, and the tall black horse's breath was whisked away as it gruffly exhaled, worn from its final day of pulling. The husband looked down at his sextant and grinned, glancing upwards to thank the stars for safe guidance. Their longitude and latitude coordinates had been moving slowly closer to the coordinates written in the grant for the better part of two months now, and they finally matched perfectly. He opened the curtain that shielded the main cabin of the carriage, and whispered the news to his sleeping wife. She opened one eye, pointing it at him, and gave him a wry grin. It hadn't been such a bad journey after all. The husband stepped down from the driving seat and walked around the carriage. He reached for his wife's hand as she picked up the hem of her skirt and stepped towards the open back. She suddenly jumped off the trunk bed, inflating the skirt around her, and he caught her, moving her back against the woodwork, hugging her firmly with his head nestled in her neck. She felt as if the embrace would last forever. The husband returned to planning the house late the next morning. The night had been long. He smoothed out on the ground the blueprint he had been working on in the halts between the days of driving, flattening out the in-turned edges with his palms. He had estimated sixty trees timbered, thirty days till erection. It was May, he had time before winter. The man walked around from noon till dusk, scouting for the best spot for the foundation. The land granter who he had signed the papers with told him to look for a place in the sun, with easy drainage for the rain. The wife asked her husband if he had found a suitable setting upon his return to the campsite, as she dealt him a piping hot meal with some of their last potatoes and a beet. He told her there looked to be a suitable location about a thousand feet up a nearby hill. Forty feet to a spring, with a virgin view of the forested valley north of their campsite. ~ As the husband was nearly finished thatching the shingles of their new home, he saw his wife running up to him, getting pushed back down the mountain by the wind. She scrambled from a stump up onto the roof. He could faintly hear her heart flittering like a hummingbird as she whispered into his ear that she had been feeling kicks on the inside of her previously tiny midsection for the past week. When she told him she was sure, he almost fell off the roof. ~ From the top of the hill, the smoke coming out of the house could be seen rising up slowly into the gray sky. Light could barely be seen leaking through the house's tiny windows, covered in hide as they were. Snow thinly covered the ground, little flecks of dirt still showing through where the husband had forayed to cut more firewood. The wife held her newborn son in her arms, soaking up the warmth from the hearth. Her husband pulled on his pipe and read aloud from his tattered copy of Robinson Crusoe, the one book that had made the trip from the coast. His father had given it to him as a parting gift the last time they had seen each other before moving out west, in his childhood home in Annapolis. The man's wife knew the story word for word, but she loved hearing the husband's voice as he read, and smiled when he told her how Crusoe was saved from a life alone on the cold, scary island. 
~ The clouds were carried across the azure sky in a blur, changing in front of the child's eyes. He was walking back towards the house after a day foraging for food and playing in the woods. He saw his younger brother running down the path from his home. The brother jeeringly yelled to him to hurry, his parents were already at the table, waiting until he arrived to eat lunch. The child was famished. He started jogging until the house came into view, and ducked inside to eat. Today though, his mother suggested that they eat food outside, on a bench her child had just finished building the past week. The bench was a big deal, as it was the child's first major contribution to the house. They all brought their dishes out to the log, and ate the good food in near silence. The trees rustled and a thrush called out in the distance. ~ One night, the mother was startled awake by a scream. She bolted upright to see a porcupine dashing out the door, followed closely by her eldest son, beet red in the face and with several quills embedded into his behind. She was crying of laughter as he shamefully told her and his father that he had been woken up to the creature nestled in the bed with him, nibbling at his ear. Bedding with porcupines is not the best idea, she told him. ~ Before the weather got too cold, the younger son had suggested they go for a picnic by a nearby lake, a five mile hike north. The father was adamant that he had too much grain to harvest, before it would spoil with the first frost, but a gleam in his son's eyes convinced him otherwise. They packed wild damson pies, roast duck with potatoes, and their second to last bottle of mint wine they had traded for during the summer past. They had chosen a flat spot on top of a sloped meadow leading down to the rocky shore of the wild blue lake. There were tiny cirrus clouds in the sky, and the sun was warm. The younger son had decided that it was, in fact, still warm enough to go for a swim and rushed to the lake after finishing his lunch, but on his way there had tripped and tumbled down the green hill towards the lake. The husband had thrown his head back in laughter and rolled on after to join him, followed closely by his wife and other son. The walk back was cold, but they didn't mind much. ~ The pale moon shone unhindered by the shadow of the earth over the house. It was late spring and the younger child was lying in the dirt without his jacket, thinking hard, staring up at the milky way. He was working up the courage to steal his parents' old horse to go see a girl. The past Sunday had been Easter, and he had seen her from across the single room school house in the town, which had been turned into a church for the occasion. He had convinced her to ditch service, and they talked about how much they hated the frontier life, and how envious they were of the city boy who had come in with the traveling preacher, and how annoying their parents were. He told her he would visit her that night, his urges towards her were unlike anything he had ever felt. He hitched up his parent's horse and shakily climbed on, and rode off into the night. ~ The wife hadn't stopped crying in a week now. She had found her son's body the night after he had disappeared with the horse. A fox had eaten into his arm and she had seen a bug crawling along the surface of his eye. The husband was shocked that God had done this to them. What had he done to deserve a dead son, and what human deserved only fifteen years of life? It felt like no time at all. 
~ The house no longer radiated a happy feeling. The father kept his head down while working in the fields now, concentrating on the till. The house had a leaky shingle where the last winter's ice had crept into the house, and a bucket had been put on the floor to catch the water as it fell. The summer felt long, and sweltering, and the remaining son studied his schoolwork hard. From the room that the son had added onto the house the previous fall, the sounds of a piece of slate being written on and erased were clearly heard. The son had to make a choice. Should he carry on the farm, staying with his mother and father, or leave, walking into the city to seek his fortune? The house hadn't really been the same for the past five years, but his parents were starting to shoot smiles at each other again when the other entered the room. He knew he could always come back, and his parents would accept him back into their home. ~ The mother greeted the courier with a beaming smile; a letter had come from her child. She rushed out into the fields to read the letter with her husband. He had written the following: Dear beloved mother and father, I hope this letter reaches you well. I have leased an apartment in Chicago, and am gainfully employed as a clerk. If you would ever like to visit, I can easily fit both of you in the new brick house I have just signed a lease on. I am to marry a woman who I am sure you would both take to warmly, and I would love it if you two would come up for the wedding. It is scheduled for later this fall. Yours truly, [REDACTED] It was a clear day, and the husband asked his wife if she desired to go to his son. She replied, they should go in the fall, before it gets cold. ~ One stormy night, right before the fall began, an autumn storm ravaged the valley. Their house groaned and creaked, and the wife shakily asked the husband, “Is there anything that can shield us from the lightning?” He calmly explained to her how the lightning should not worry them as they were, but she still clung to him tightly. They both jumped at the sound of a great crash, and the storm suddenly got much louder. They looked to where their son's room had stood, and could see a giant oak tree in its place. The husband buried his head in his hands and might have let out a scream, but for the fact that he needed to maintain confidence for his wife. The mother thought about writing to her son, but the courier had stopped coming around as often, as they hadn't had mail in almost six months. She didn't have much to write about; she just wanted to hear how his life was proceeding. The past years had passed quickly, the snow melting, trees blossoming and dying, and before she knew it snow was on the ground again. She thought about the days in the sun with her husband and the children, the picnic they had taken out to the lake where they had rolled down the hill, and the porcupine incident. ~ The husband hugged his wife as they huddled under every blanket they had. This was the worst winter he could remember and they were cold, very cold. They talked of small things, what they would plant the next spring and if they would try to set out traps to get some fresh meat the next day. The wife sighed, and said that she was tired. The husband kissed his wife on the head as she nestled into the familiar crook of his warm neck. ~~ The widower sank into his chair, the chair he always sat in, and stirred the pot over the hearth with a long iron spoon. All was quiet as he thought about times past. 
His breath grew ragged and he took a final puff on his old, gnawed pipe. A small plume of smoke came out of his mouth and diffused back into the air, the fire that gave it form holding no power any longer. ~~~ The vine that had been growing over the back-side of the house had crept into the living room. It worked around the base of the old rocking chair and over the hearth. Grass was growing on the path to the overgrown fields, but the house watched over all of this diligently. ~~~~ As the sun was setting over the back of the mountain, the last standing wall of the house slowly sagged over and fell into the wet ground below it. The trees sang mightily with the sound of the wind and the rain pattered onto the leaves.
126.533333
999
0.782227
eng_Latn
1.000005
d0a7e76ce769666271d4302e54ebc2e7c415ac4c
990
mkdn
Markdown
eg/docs/Validation/Class/Directive/Pattern.mkdn
Htbaa/Validation-Class
4da907295fc4dda85cc2dd99e526f3b4a054b645
[ "Artistic-1.0" ]
null
null
null
eg/docs/Validation/Class/Directive/Pattern.mkdn
Htbaa/Validation-Class
4da907295fc4dda85cc2dd99e526f3b4a054b645
[ "Artistic-1.0" ]
null
null
null
eg/docs/Validation/Class/Directive/Pattern.mkdn
Htbaa/Validation-Class
4da907295fc4dda85cc2dd99e526f3b4a054b645
[ "Artistic-1.0" ]
null
null
null
# SYNOPSIS use Validation::Class::Simple; my $rules = Validation::Class::Simple->new( fields => { company_email => { pattern => qr/\@company\.com$/ } } ); # set parameters to be validated $rules->params->add($parameters); # validate unless ($rules->validate) { # handle the failures } # DESCRIPTION Validation::Class::Directive::Pattern is a core validation class field directive that validates simple patterns and complex regular expressions. - alternative argument: an-array-of-something This directive can be passed a regexp object or a simple pattern. A simple pattern is a string where the \`\#\` character matches digits and the \`X\` character matches alphabetic characters. fields => { task_date => { pattern => '##-##-####' }, task_time => { pattern => '##:##:##' } }
24.75
89
0.557576
eng_Latn
0.924257
d0a83c11e067cbcd70334281df47766fca12f1be
844
md
Markdown
_posts/2017-07-14-Stephanie-Allin-Alexa.md
celermarryious/celermarryious.github.io
bcf6ff5049c82e276226a68ba269c11ccca7f970
[ "MIT" ]
null
null
null
_posts/2017-07-14-Stephanie-Allin-Alexa.md
celermarryious/celermarryious.github.io
bcf6ff5049c82e276226a68ba269c11ccca7f970
[ "MIT" ]
null
null
null
_posts/2017-07-14-Stephanie-Allin-Alexa.md
celermarryious/celermarryious.github.io
bcf6ff5049c82e276226a68ba269c11ccca7f970
[ "MIT" ]
null
null
null
--- layout: post date: 2017-07-14 title: "Stephanie Allin Alexa" category: Stephanie Allin tags: [Stephanie Allin] --- ### Stephanie Allin Alexa Just **$339.99** ### <table><tr><td>BRANDS</td><td>Stephanie Allin</td></tr></table> <a href="https://www.readybrides.com/en/stephanie-allin/68841-stephanie-allin-alexa.html"><img src="//img.readybrides.com/160656/stephanie-allin-alexa.jpg" alt="Stephanie Allin Alexa" style="width:100%;" /></a> <!-- break --><a href="https://www.readybrides.com/en/stephanie-allin/68841-stephanie-allin-alexa.html"><img src="//img.readybrides.com/160655/stephanie-allin-alexa.jpg" alt="Stephanie Allin Alexa" style="width:100%;" /></a> Buy it: [https://www.readybrides.com/en/stephanie-allin/68841-stephanie-allin-alexa.html](https://www.readybrides.com/en/stephanie-allin/68841-stephanie-allin-alexa.html)
52.75
224
0.727488
yue_Hant
0.731026
d0a8e0cf5b28ba3f3ff831757cdd7b2e3534aeac
736
md
Markdown
docs-aspnet/html-helpers/navigation/treeview/how-to/bind-to-xml.md
davidda/kendo-ui-core
1cd612b2ccc4d6bdfc222948b74af1c4d37e2d9f
[ "Apache-2.0" ]
2,304
2015-01-04T13:49:57.000Z
2022-03-29T22:48:20.000Z
docs-aspnet/html-helpers/navigation/treeview/how-to/bind-to-xml.md
davidda/kendo-ui-core
1cd612b2ccc4d6bdfc222948b74af1c4d37e2d9f
[ "Apache-2.0" ]
5,952
2015-01-05T03:11:28.000Z
2022-03-31T15:16:01.000Z
docs-aspnet/html-helpers/navigation/treeview/how-to/bind-to-xml.md
davidda/kendo-ui-core
1cd612b2ccc4d6bdfc222948b74af1c4d37e2d9f
[ "Apache-2.0" ]
2,214
2015-01-03T14:30:19.000Z
2022-03-28T18:59:18.000Z
--- title: Bind TreeViews to XML page_title: Bind TreeViews to XML description: "Bind a TreeView to XML in ASP.NET MVC applications." previous_url: /helpers/navigation/treeview/how-to/bind-to-xml slug: howto_bindtoaml_treeviewaspnetmvc position: 0 --- # Bind TreeViews to XML To see the example, refer to the project on how to [bind the TreeView to XML](https://www.telerik.com/support/code-library/binding-to-xml) in ASP.NET MVC applications. ## See Also * [Basic Usage of the TreeView HtmlHelper for ASP.NET MVC (Demo)](https://demos.telerik.com/aspnet-mvc/treeview/index) * [TreeViewBuilder Server-Side API](https://docs.telerik.com/aspnet-mvc/api/Kendo.Mvc.UI.Fluent/TreeViewBuilder) * [TreeView Server-Side API](/api/treeview)
38.736842
167
0.771739
kor_Hang
0.442022
d0a8e3ea11a88c7ba2ce21c786475562f2483659
2,272
md
Markdown
README.md
saghul/django-ssl-client-auth
cf246ffb14d45fcdfafef646fa98b7a3a357ee7c
[ "MIT" ]
1
2015-02-16T09:41:37.000Z
2015-02-16T09:41:37.000Z
README.md
saghul/django-ssl-client-auth
cf246ffb14d45fcdfafef646fa98b7a3a357ee7c
[ "MIT" ]
null
null
null
README.md
saghul/django-ssl-client-auth
cf246ffb14d45fcdfafef646fa98b7a3a357ee7c
[ "MIT" ]
null
null
null
django-ssl-client-auth ====================== SSL authentication backend & middleware for Django for authenticating users with SSL client certificates ## License MIT license, see LICENSE.txt for full text. ## Setup ### SSL Set up nginx and create SSL certificates for your server and set up the paths to the server private key, server certificate and CA certificate used to sign the client certificates. An example configuration file is in samples/nginx.conf If you are on OS X, I suggest OS X Keychain Access for doing this for testing, as it will automatically make your client certificates available in both Google Chrome & Safari. Instructions can be found e.g. http://www.dummies.com/how-to/content/how-to-become-a-certificate-authority-using-lion-s.html On other platforms, there are many tutorials on how to do this with OpenSSL e.g. http://pages.cs.wisc.edu/~zmiller/ca-howto/ Restart your nginx (sudo nginx -s restart), make sure your green unicorn (gunicorn) is running and check that your https:// url loads your application and the _server certificate is valid_. ### This module 1. run setup.py (sudo python setup.py install) or install the latest release using `pip install django_ssl_auth` 2. edit your `settings.py` 1. add `"django_ssl_auth.SSLClientAuthMiddleware"` to your `MIDDLEWARE_CLASSES` 2. add `"django_ssl_auth.SSLClientAuthBackend"` to your `AUTHENTICATION_BACKENDS` #### Configuration There are two things you need to do in `settings.py` (see the sketch at the end of this README): 1. Define a function that can return a dictionary with the fields that are required by your user model, e.g. `USER_DATA_FN = 'django_ssl_auth.fineid.user_dict_from_dn'` is a sample implementation that takes the required fields from the DN of a Finnish government issued ID smart card for the `contrib.auth.models.User`. 2. To automatically create `User`s for all valid certificate holders, set `AUTOCREATE_VALID_SSL_USERS = True` For details, see `testapp/ssltest/settings.py` #### Smart Card support For (Finnish) instructions see `doc/fineid/FINEID.md` ## How to get help Please do ask your questions on http://stackoverflow.com/. I am active there, and more likely to answer you publicly. Also, you can try catching Kimvais on #django@freenode
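As referenced above, a minimal sketch of the resulting `settings.py` (the middleware/backend paths and setting names are the ones from this README; everything else is illustrative):

```python
# settings.py (excerpt)

MIDDLEWARE_CLASSES = [
    # ... your existing middleware ...
    "django_ssl_auth.SSLClientAuthMiddleware",
]

AUTHENTICATION_BACKENDS = (
    "django_ssl_auth.SSLClientAuthBackend",
    "django.contrib.auth.backends.ModelBackend",  # keep normal logins working
)

# Function returning a dict of user-model fields extracted from the certificate DN
USER_DATA_FN = 'django_ssl_auth.fineid.user_dict_from_dn'

# Automatically create a User for every valid client certificate
AUTOCREATE_VALID_SSL_USERS = True
```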
37.245902
247
0.771567
eng_Latn
0.973895
d0a995d828392ad2bdd6c7239921a557291b6dd1
886
md
Markdown
README.md
dhharker/dotfilelinker
a6b5b56a4f919763dbc2ce739ccf29822b03ca04
[ "MIT" ]
null
null
null
README.md
dhharker/dotfilelinker
a6b5b56a4f919763dbc2ce739ccf29822b03ca04
[ "MIT" ]
null
null
null
README.md
dhharker/dotfilelinker
a6b5b56a4f919763dbc2ce739ccf29822b03ca04
[ "MIT" ]
null
null
null
# Dotfile Linker **What** Creates symlinks from a shared folder (with e.g. dropbox or [syncthing](https://syncthing.net)) to your dotfiles. **Why** Makes provisioning new machines easier when you keep your dotfiles in file sync. **How** Badly written `bash` (I don't even know bash scripting so this is just ugly). ## Setup 1. Put `use_synched_configs.sh` somewhere, I keep it in my synced dotfiles folder. 2. Edit the file to reference what you want to copy from your sync to your system. 3. When you setup a new machine, sync the dotfiles and run the script to link them. ## @TODO - [ ] Option to (backup then) delete existing files for selected target files - [ ] Ability to link all files from a synced folder to a target folder without making the target folder itself a link - [ ] Add CLI help to explain the stupid coloured ascii art which indicates target/source state
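For a sense of what the script does under the hood, here is a hypothetical, stripped-down sketch of linking a single dotfile (the paths and file names are placeholders; the real script works from its own list of targets):

```sh
#!/usr/bin/env bash
# Back up any existing real file, then point the dotfile at the synced copy.
SYNC_DIR="$HOME/Sync/dotfiles"   # assumed location of the synced folder
TARGET="$HOME/.bashrc"           # assumed dotfile to link

if [ -e "$TARGET" ] && [ ! -L "$TARGET" ]; then
    mv "$TARGET" "$TARGET.backup"
fi
ln -sf "$SYNC_DIR/bashrc" "$TARGET"
```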
44.3
122
0.748307
eng_Latn
0.996224
d0a99f843537e97fd91fe47a648ef8e0202672ef
11,097
md
Markdown
docs/profiling/vsinstr.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/vsinstr.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
1
2018-10-19T08:00:06.000Z
2018-10-19T08:00:06.000Z
docs/profiling/vsinstr.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: VSInstr | Microsoft Docs ms.custom: ms.date: 11/04/2016 ms.reviewer: ms.suite: ms.technology: vs-ide-debug ms.tgt_pltfrm: ms.topic: article helpviewer_keywords: - performance tools, instrumentation - instrumentation, VSInstr tool - profiling tools,VSInstr - command-line tools, instrumentation - command line, tools - VSInstr - VSInstr tool - performance tools, VSInstr tool ms.assetid: 7b1334f7-f9b0-4a82-a145-d0607bfa8467 caps.latest.revision: "44" author: mikejo5000 ms.author: mikejo manager: ghogen ms.openlocfilehash: f97ce4480ebdf04cce6a129d7c1950ac28df2aaf ms.sourcegitcommit: f40311056ea0b4677efcca74a285dbb0ce0e7974 ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 10/31/2017 --- # <a name="vsinstr"></a>VSInstr L’outil VSInstr est utilisé pour instrumenter des binaires. Il est appelé à l’aide de la syntaxe suivante : ``` VSInstr [/U] filename [/options] ``` Le tableau suivant décrit les options de l’outil VSInstr : |Options|Description| |-------------|-----------------| |**Help** ou **?**|Affiche de l’aide.| |**U**|Écrit la sortie redirigée de la console au format Unicode. Cela doit être la première option spécifiée.| |`@filename`|Spécifie le nom d’un fichier réponse qui contient une option de commande par ligne. N’utilisez pas de guillemets.| |**OutputPath** `:path`|Spécifie un répertoire de destination pour l’image instrumentée. Si aucun chemin de sortie n’est spécifié, le binaire d’origine est renommé en ajoutant « Orig » au nom de fichier dans le même répertoire, et une copie du binaire est instrumentée.| |**Exclude** `:funcspec`|Spécifie une spécification de fonction à exclure de l’instrumentation par les sondes. Cette option est utile quand le profilage d’une insertion de sondes dans une fonction provoque des résultats inattendus ou non souhaités.<br /><br /> N’utilisez pas d’options **Exclude** et **Include** qui font référence à des fonctions dans le même binaire.<br /><br /> Vous pouvez spécifier plusieurs spécifications de fonction avec différentes options **Exclude**.<br /><br /> `funcspec` est défini comme suit :<br /><br /> [espace de noms\<séparateur1>] [classe\<séparateur2>]fonction<br /><br /> \<séparateur1> est `::` pour le code natif, et `.` pour le code managé.<br /><br /> \<séparateur2> est toujours `::`.<br /><br /> **Exclude** est pris en charge avec la couverture du code.<br /><br /> Le caractère générique \* est pris en charge. Par exemple, pour exclure toutes les fonctions dans un espace de noms, utilisez :<br /><br /> nom_espace_de_noms::\*<br /><br /> Vous pouvez utiliser **VSInstr /DumpFuncs** pour répertorier les noms complets des fonctions dans le binaire spécifié.| |**Include** `:funcspec`|Spécifie une spécification de fonction dans un binaire à instrumenter avec les sondes. Toutes les autres fonctions dans les binaires ne sont pas instrumentées.<br /><br /> Vous pouvez spécifier plusieurs spécifications de fonction avec différentes options **Include**.<br /><br /> N’utilisez pas d’options **Include** et **Exclude** qui font référence à des fonctions dans le même binaire.<br /><br /> **Include** n’est pas pris en charge avec la couverture du code.<br /><br /> `funcspec` est défini comme suit :<br /><br /> [espace de noms\<séparateur1>] [classe\<séparateur2>]fonction<br /><br /> \<séparateur1> est `::` pour le code natif, et `.` pour le code managé.<br /><br /> \<séparateur2> est toujours `::`.<br /><br /> Le caractère générique \* est pris en charge. 
Par exemple, pour inclure toutes les fonctions dans un espace de noms, utilisez :<br /><br /> nom_espace_de_noms::\*<br /><br /> Vous pouvez utiliser **VSInstr /DumpFuncs** pour répertorier les noms complets des fonctions dans le binaire spécifié.| |**DumpFuncs**|Répertorie les fonctions dans l’image spécifiée. Aucune instrumentation n’est effectuée.| |**ExcludeSmallFuncs**|Exclut de l’instrumentation les petites fonctions (fonctions courtes qui n’effectuent pas d’appels de fonction). L’option **ExcludeSmallFuncs** permet de réduire la charge liée à l’instrumentation et donc d’accélérer celle-ci.<br /><br /> En outre, l’exclusion des petites fonctions réduit la taille du fichier .vsp et le temps nécessaire pour l’analyse.| |**Mark:**{**Before**`&#124;`**After**`&#124;`**Top**`&#124;`**Bottom**}`,funcname,markid`|Insère une marque de profil (identificateur utilisé pour délimiter les données dans les rapports) que vous pouvez utiliser pour identifier le début ou la fin d’une plage de données dans le fichier de rapport .vsp.<br /><br /> **Before** : juste avant l’entrée dans la fonction cible.<br /><br /> **After** : juste après la sortie de la fonction cible.<br /><br /> **Top** : juste après l’entrée dans la fonction cible.<br /><br /> **Bottom** : juste avant chaque retour dans la fonction cible.<br /><br /> `funcname` : nom de la fonction cible<br /><br /> `Markid` : entier positif (long) à utiliser comme identificateur de la marque de profil.| |**Coverage**|Exécute l’instrumentation de la couverture. Cette option ne peut être utilisée qu’avec les options suivantes : **Verbose**, **OutputPath**, **Exclude** et **Logfile**.| |**Verbose**|L’option **Verbose** permet d’afficher des informations détaillées sur le processus d’instrumentation.| |**NoWarn** `[:[Message Number[;Message Number]]]`|Supprimer la totalité ou une partie spécifique des avertissements.<br /><br /> `Message Number` : numéro d’avertissement. Si `Message Number` est omis, tous les avertissements sont supprimés.<br /><br /> Pour plus d’informations, consultez [Avertissements VSInstr](../profiling/vsinstr-warnings.md).| |**Control** `:{` **Thread** `&#124;` **Process** `&#124;` **Global** `}`|Spécifie le niveau de profilage des options suivantes du contrôle de la collecte de données VSInstr :<br /><br /> **Start**<br /><br /> **StartOnly**<br /><br /> **Suspend**<br /><br /> **StopOnly**<br /><br /> **SuspendOnly**<br /><br /> **ResumeOnly**<br /><br /> **Thread** : spécifie les fonctions de contrôle de collecte de données au niveau du thread. Le profilage est démarré ou arrêté uniquement pour le thread actuel. L’état de profilage des autres threads n’est pas affecté. La valeur par défaut correspond au thread.<br /><br /> **Process** : spécifie les fonctions de contrôle de collecte de données de profilage au niveau du processus. Le profilage démarre ou s’arrête pour tous les threads dans le processus actuel. L’état de profilage des autres processus n’est pas affecté.<br /><br /> **Global** : spécifie les fonctions de contrôle de collecte de données (interprocessus) au niveau global.<br /><br /> Une erreur se produit si vous ne spécifiez pas le niveau de profilage.| |**Start** `:{` **Inside** `&#124;` **Outside** `},funcname`|Limite la collecte de données à la fonction cible et aux fonctions enfants appelées par cette fonction.<br /><br /> **Inside** : insère la fonction StartProfile immédiatement après l’entrée dans la fonction cible. 
Insère la fonction StopProfile juste avant chaque retour dans la fonction cible.<br /><br /> **Outside** : insère la fonction StartProfile juste avant chaque appel à la fonction cible. Insère la fonction StopProfile immédiatement après chaque appel à la fonction cible.<br /><br /> `funcname` : nom de la fonction cible.| |**Suspend** `:{` **Inside** `&#124;` **Outside** `},funcname`|Exclut la collecte de données pour la fonction cible et les fonctions enfants appelées par la fonction.<br /><br /> **Inside** : insère la fonction SuspendProfile immédiatement après l’entrée dans la fonction cible. Insère la fonction ResumeProfile juste avant chaque retour dans la fonction cible.<br /><br /> **Outside** : insère la fonction SuspendProfile juste avant l’entrée dans la fonction cible. Insère la fonction ResumeProfile immédiatement après la sortie de la fonction cible.<br /><br /> `funcname` : nom de la fonction cible<br /><br /> Si la fonction cible contient une fonction StartProfile, la fonction SuspendProfile est insérée avant celle-ci. Si la fonction cible contient une fonction StopProfile, la fonction ResumeProfile est insérée après celle-ci.| |**StartOnly:** `{` **Before** `&#124;` **After** `&#124;` **Top** `&#124;` **Bottom** `},funcname`|Commence la collecte de données pendant une exécution de profilage. Elle insère la fonction API StartProfile à l’emplacement spécifié.<br /><br /> **Before** : juste avant l’entrée dans la fonction cible.<br /><br /> **After** : juste après la sortie de la fonction cible.<br /><br /> **Top** : juste après l’entrée dans la fonction cible.<br /><br /> **Bottom** : juste avant chaque retour dans la fonction cible.<br /><br /> `funcname` : nom de la fonction cible| |**StopOnly:**{**Before**`&#124;`**After**`&#124;`**Top**`&#124;`**Bottom**}`,funcname`|Arrête la collecte de données pendant une exécution de profilage. Elle insère la fonction StopProfile à l’emplacement spécifié.<br /><br /> **Before** : juste avant l’entrée dans la fonction cible.<br /><br /> **After** : juste après la sortie de la fonction cible.<br /><br /> **Top** : juste après l’entrée dans la fonction cible.<br /><br /> **Bottom** : juste avant chaque retour dans la fonction cible.<br /><br /> `funcname` : nom de la fonction cible| |**SuspendOnly:**{**Before**`&#124;`**After**`&#124;`**Top**`&#124;`**Bottom**}`,funcname`|Arrête la collecte de données pendant une exécution de profilage. Elle insère l’API SuspendProfile à l’emplacement spécifié.<br /><br /> **Before** : juste avant l’entrée dans la fonction cible.<br /><br /> **After** : juste après la sortie de la fonction cible.<br /><br /> **Top** : juste après l’entrée dans la fonction cible.<br /><br /> **Bottom** : juste avant chaque retour dans la fonction cible.<br /><br /> `funcname` : nom de la fonction cible<br /><br /> Si la fonction cible contient une fonction StartProfile, la fonction SuspendProfile est insérée avant celle-ci.| |**ResumeOnly:**{**Before**`&#124;`**After**`&#124;`**Top**`&#124;`**Bottom**}`,funcname`|Commence ou reprend la collecte de données pendant une exécution de profilage.<br /><br /> Elle est généralement utilisée pour commencer un profilage après l’arrêt d’un profilage par une option **SuspendOnly**. 
Elle insère une API ResumeProfile à l’emplacement spécifié.<br /><br /> **Before** : juste avant l’entrée dans la fonction cible.<br /><br /> **After** : juste après la sortie de la fonction cible.<br /><br /> **Top** : juste après l’entrée dans la fonction cible.<br /><br /> **Bottom** : juste avant chaque retour dans la fonction cible.<br /><br /> `funcname` : nom de la fonction cible<br /><br /> Si la fonction cible contient une fonction StopProfile, la fonction ResumeProfile est insérée après celle-ci.| ## <a name="see-also"></a>Voir aussi [VSPerfMon](../profiling/vsperfmon.md) [VSPerfCmd](../profiling/vsperfcmd.md) [VSPerfReport](../profiling/vsperfreport.md) [Avertissements VSInstr](../profiling/vsinstr-warnings.md) [Vues Rapport de performances](../profiling/performance-report-views.md)
168.136364
1,109
0.719023
fra_Latn
0.967235
d0a9ef5714af845e69ad8e02aaf31f157aaf7dbc
1,062
md
Markdown
PROBLEMS/leetcode.com/EASY/rising-temperature.md
Roxxum/Coding-Challenges
4212653e9687d002586249df8bb42d17b398f667
[ "MIT" ]
null
null
null
PROBLEMS/leetcode.com/EASY/rising-temperature.md
Roxxum/Coding-Challenges
4212653e9687d002586249df8bb42d17b398f667
[ "MIT" ]
null
null
null
PROBLEMS/leetcode.com/EASY/rising-temperature.md
Roxxum/Coding-Challenges
4212653e9687d002586249df8bb42d17b398f667
[ "MIT" ]
null
null
null
SQL Schema

Table: Weather

+---------------+---------+
| Column Name   | Type    |
+---------------+---------+
| id            | int     |
| recordDate    | date    |
| temperature   | int     |
+---------------+---------+
id is the primary key for this table.
This table contains information about the temperature on a certain day.

Write an SQL query to find the ids of all dates with a higher temperature than the previous day (yesterday).

Return the result table in any order.

The query result format is in the following example.

Example 1:

Input: 
Weather table:
+----+------------+-------------+
| id | recordDate | temperature |
+----+------------+-------------+
| 1  | 2015-01-01 | 10          |
| 2  | 2015-01-02 | 25          |
| 3  | 2015-01-03 | 20          |
| 4  | 2015-01-04 | 30          |
+----+------------+-------------+
Output: 
+----+
| id |
+----+
| 2  |
| 4  |
+----+
Explanation: 
In 2015-01-02, the temperature was higher than the previous day (10 -> 25).
In 2015-01-04, the temperature was higher than the previous day (20 -> 30).
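One possible solution, assuming MySQL as used on LeetCode, is a self-join on consecutive dates that keeps the warmer day:

```sql
SELECT w1.id
FROM Weather AS w1
JOIN Weather AS w2
  ON DATEDIFF(w1.recordDate, w2.recordDate) = 1
WHERE w1.temperature > w2.temperature;
```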
26.55
109
0.509416
eng_Latn
0.951877
d0aa82363bb13ade95fa62ecc72e8a310b5850fb
2,061
md
Markdown
README.md
mangiucugna/serverless-spambot
12be3a4600c678a567934076af514684492de953
[ "MIT" ]
null
null
null
README.md
mangiucugna/serverless-spambot
12be3a4600c678a567934076af514684492de953
[ "MIT" ]
null
null
null
README.md
mangiucugna/serverless-spambot
12be3a4600c678a567934076af514684492de953
[ "MIT" ]
null
null
null
# Spam Bot A Slack bot running on Cloud Functions that you can use to escalate emergencies like production incidents to a number of channels. To invoke the bot in Slack you will use `/[slash command] [optional reason]`; this will spam all the channels configured in the code. ## How to deploy ### Gitlab The project is already provided with a .gitlab-ci.yml that takes care of deploying after each push to master via Cloud Deploy, using the official Cloud SDK docker image, and reads the secrets from the CI variables ## Manually Use deploy.sh to manually deploy the function, but don't forget to fill env.yml with the right content ## How to set up the projects In the unlikely event that all production setup is lost, this is a guide to set up the projects in order to make this code work again. ### Google Cloud - Create a new project (or edit an existing one) and enable the following APIs: `Cloud Functions`, `Cloud Build`. If you are not familiar with it you can look at this [tutorial](https://cloud.google.com/functions/docs/tutorials/slack#before-you-begin) - Add the Google `project_id` to the Gitlab env variable `PROJECT_ID` or make sure that your local Google Cloud SDK is set to the right project id before deploying *If you are using Gitlab CI* - Create a service account with the following permissions: `Service Account User`, `Cloud Functions Developer`, `Cloud Build Service Account` and generate a JSON key - Store the content of the JSON key in the Gitlab env variable `SERVICE_ACCOUNT` ### Slack App Create a new app in Slack; the name doesn't matter, what matters is that you set it up as follows (a minimal sketch of the function handling the slash command is at the end of this README): - Copy the signing secret to a Gitlab env variable called `SLACK_SIGNING_SECRET` or in `env.yml` - Add the following scope permissions: `chat:write`, `chat:write.public` - Create a slash command and add the Cloud Function endpoint URL to it - Add the application to your workspace - Copy the OAuth token to the Gitlab env variable `SLACK_BOT_OAUTH_TOKEN` or in `env.yml` ## License Licensed under the MIT License, read the `LICENSE` file for more information
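As referenced above, a minimal sketch of what the slash-command handler could look like (this is an illustration, not the project's actual code; the environment variable names match this README, while the channel list and function name are hypothetical):

```python
import hashlib
import hmac
import os
import time

import requests

SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]
SLACK_BOT_OAUTH_TOKEN = os.environ["SLACK_BOT_OAUTH_TOKEN"]
CHANNELS = ["#incidents", "#engineering"]  # hypothetical list of channels to spam


def _valid_signature(request) -> bool:
    """Check the Slack v0 signing signature to reject forged requests."""
    timestamp = request.headers.get("X-Slack-Request-Timestamp", "0")
    if abs(time.time() - int(timestamp)) > 300:
        return False  # stale request, possible replay
    base = f"v0:{timestamp}:{request.get_data(as_text=True)}"
    expected = "v0=" + hmac.new(
        SLACK_SIGNING_SECRET.encode(), base.encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, request.headers.get("X-Slack-Signature", ""))


def spam(request):
    """HTTP Cloud Function entry point invoked by the Slack slash command."""
    if not _valid_signature(request):
        return ("invalid signature", 403)
    reason = request.form.get("text", "").strip() or "no reason given"
    for channel in CHANNELS:
        requests.post(
            "https://slack.com/api/chat.postMessage",
            headers={"Authorization": f"Bearer {SLACK_BOT_OAUTH_TOKEN}"},
            json={"channel": channel, "text": f":rotating_light: Escalation: {reason}"},
        )
    return ("Escalation sent to all configured channels", 200)
```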
49.071429
233
0.775352
eng_Latn
0.996833
d0ab068d88beaea28e40768bea284c6e8a1d256e
1,409
md
Markdown
docs/misc/mybase-must-be-followed-by-dot-and-an-identifier.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/misc/mybase-must-be-followed-by-dot-and-an-identifier.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
1
2018-10-19T08:00:06.000Z
2018-10-19T08:00:06.000Z
docs/misc/mybase-must-be-followed-by-dot-and-an-identifier.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "&#39;MyBase&#39; doit &#234;tre suivi de &#39;.&#39; et d’un identificateur | Microsoft Docs" ms.custom: "" ms.date: "11/16/2016" ms.prod: "visual-studio-dev14" ms.reviewer: "" ms.suite: "" ms.technology: - "devlang-visual-basic" ms.tgt_pltfrm: "" ms.topic: "article" f1_keywords: - "bc32027" - "vbc32027" helpviewer_keywords: - "BC32027" ms.assetid: 39e5512c-ef1e-46a3-a96c-277ea24bfee2 caps.latest.revision: 8 caps.handback.revision: 8 author: "stevehoag" ms.author: "shoag" manager: "wpickett" --- # &#39;MyBase&#39; doit &#234;tre suivi de &#39;.&#39; et d’un identificateur `MyBase` n’est pas une véritable variable objet et ne peut pas apparaître seule. Vous pouvez uniquement l’utiliser pour accéder à un membre de la classe de base de l’instance actuelle. **ID d’erreur :** BC32027 ### Pour corriger cette erreur - Si vous voulez accéder à un membre, spécifiez l’opérateur d’accès aux membres \(.\) et un membre de la classe de base après `MyBase`. - Si vous ne voulez pas accéder à un membre, déclarez, puis initialisez une instance de la classe de base, ou supprimez la référence à `MyBase`. ## Voir aussi [MyBase \- supprimer](http://msdn.microsoft.com/fr-fr/52491d06-6451-4f6f-9aa6-8fab59bbc2b9) [Inheritance Basics](/dotnet/visual-basic/programming-guide/language-features/objects-and-classes/inheritance-basics)
38.081081
186
0.703336
fra_Latn
0.89206
d0ab7130143597d133aa8ff94d3dcadcb10d0120
646
md
Markdown
Source/string/917. Reverse Only Letters.md
eatPorkAndSeePigRun/MyLeetcode
9f6b41d4980c25482a215f2ce3b680dcd5696546
[ "MIT" ]
null
null
null
Source/string/917. Reverse Only Letters.md
eatPorkAndSeePigRun/MyLeetcode
9f6b41d4980c25482a215f2ce3b680dcd5696546
[ "MIT" ]
null
null
null
Source/string/917. Reverse Only Letters.md
eatPorkAndSeePigRun/MyLeetcode
9f6b41d4980c25482a215f2ce3b680dcd5696546
[ "MIT" ]
null
null
null
# 917. Reverse Only Letters

## Description:
Given a string `S`, return the "reversed" string where all characters that are not a letter stay in the same place, and all letters reverse their positions.

## Solution:
```c++
class Solution {
public:
    string reverseOnlyLetters(string S) {
        int i = 0, j = S.size() - 1;
        while (i < j) {
            // skip non-letter characters from both ends
            while (i < j && !isalpha(S[i])) ++i;
            while (i < j && !isalpha(S[j])) --j;
            if (i < j) {
                swap(S[i], S[j]);
                ++i;
                --j;
            }
        }
        return S;
    }
};
```
<!-- remark: - -->
17.944444
156
0.448916
eng_Latn
0.972614
d0ac259d0867810077398794f1f9cec851b4d7e1
7,102
md
Markdown
fabric/11401-11885/11733.md
hyperledger-gerrit-archive/fabric-gerrit
188c6e69ccb2e4c4d609ae749a467fa7e289b262
[ "Apache-2.0" ]
2
2021-11-08T08:06:48.000Z
2021-12-03T01:51:44.000Z
fabric/11401-11885/11733.md
cendhu/fabric-gerrit
188c6e69ccb2e4c4d609ae749a467fa7e289b262
[ "Apache-2.0" ]
null
null
null
fabric/11401-11885/11733.md
cendhu/fabric-gerrit
188c6e69ccb2e4c4d609ae749a467fa7e289b262
[ "Apache-2.0" ]
4
2019-12-07T05:54:26.000Z
2020-06-04T02:29:43.000Z
<strong>Project</strong>: fabric<br><strong>Branch</strong>: master<br><strong>ID</strong>: 11733<br><strong>Subject</strong>: [FAB-5365] Fix bad error in peer CLI Deliver<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Jason Yellick - [email protected]<br><strong>Assignee</strong>:<br><strong>Created</strong>: 7/18/2017, 12:55:23 PM<br><strong>LastUpdated</strong>: 7/19/2017, 12:20:35 PM<br><strong>CommitMessage</strong>:<br><pre>[FAB-5365] Fix bad error in peer CLI Deliver The peer CLI currently attempts to print the error status returned by the orderer's Deliver gRPC method. However, the log statement inappropriately uses the '%T' modifier, and prints the type of the status, not the actual status code inside it. Consequently, all deliver errors read the same uninformative error message: Got Status:*orderer.DeliverResponse_Status This CR fixes this log statement to include the status code instead, and additionally enhances the other error messages with pertitent information. Change-Id: I5a3e1dec574bfab178550cf67bc96a66f1896d5b Signed-off-by: Jason Yellick <[email protected]> </pre><h1>Comments</h1><strong>Reviewer</strong>: Jason Yellick - [email protected]<br><strong>Reviewed</strong>: 7/18/2017, 12:55:23 PM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/18/2017, 12:56:35 PM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-verify-z/9893/ (1/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/18/2017, 12:57:36 PM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-verify-x86_64/14241/ (2/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/18/2017, 12:57:51 PM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-verify-end-2-end-x86_64/5744/ (3/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/18/2017, 12:59:21 PM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-verify-behave-x86_64/8290/ (4/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/18/2017, 3:16:17 PM<br><strong>Message</strong>: <pre>Patch Set 1: Verified+1 Build Successful https://jenkins.hyperledger.org/job/fabric-verify-x86_64/14241/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-verify-x86_64/14241 https://jenkins.hyperledger.org/job/fabric-verify-z/9893/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-verify-z/9893 https://jenkins.hyperledger.org/job/fabric-verify-end-2-end-x86_64/5744/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-verify-end-2-end-x86_64/5744 https://jenkins.hyperledger.org/job/fabric-verify-behave-x86_64/8290/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-verify-behave-x86_64/8290</pre><strong>Reviewer</strong>: Yacov Manevich - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 9:50:17 AM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review+2</pre><strong>Reviewer</strong>: Gari Singh - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 9:51:45 
AM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review+2</pre><strong>Reviewer</strong>: Gerrit Code Review - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 9:51:47 AM<br><strong>Message</strong>: <pre>Change has been successfully merged by Gari Singh</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 9:51:54 AM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-merge-z/1799/ (1/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 9:54:52 AM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-merge-behave-x86_64/1277/ (2/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 9:56:04 AM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-merge-x86_64/2286/ (3/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 10:02:44 AM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-merge-end-2-end-x86_64/967/ (4/4)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 7/19/2017, 12:20:35 PM<br><strong>Message</strong>: <pre>Patch Set 1: Build Successful https://jenkins.hyperledger.org/job/fabric-merge-behave-x86_64/1277/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-merge-behave-x86_64/1277 https://jenkins.hyperledger.org/job/fabric-merge-x86_64/2286/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-merge-x86_64/2286 https://jenkins.hyperledger.org/job/fabric-merge-z/1799/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-merge-z/1799 https://jenkins.hyperledger.org/job/fabric-merge-end-2-end-x86_64/967/ : SUCCESS Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/fabric-merge-end-2-end-x86_64/967</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Jason Yellick - [email protected]<br><strong>Uploader</strong>: Jason Yellick - [email protected]<br><strong>Created</strong>: 7/18/2017, 12:55:23 PM<br><strong>GitHubMergedRevision</strong>: [22e1299566b9fabcea48d5c69ee3d1aa20530c07](https://github.com/hyperledger-gerrit-archive/fabric/commit/22e1299566b9fabcea48d5c69ee3d1aa20530c07)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 7/18/2017, 3:16:17 PM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Yacov Manevich - [email protected]<br><strong>Approved</strong>: 7/19/2017, 9:50:17 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Gari Singh - [email protected]<br><strong>Approved</strong>: 7/19/2017, 9:51:45 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: Gari Singh<br><strong>Merged</strong>: 7/19/2017, 9:51:47 AM<br><br></blockquote>
102.927536
1,265
0.763588
kor_Hang
0.451044
d0ac5235cd04ec32291fcf0150a490e3d88b3c2c
1,347
md
Markdown
documentation/choices.md
apersici/weather
e95ceee147ee546f08be331abd11ffeb07ea7345
[ "MIT" ]
null
null
null
documentation/choices.md
apersici/weather
e95ceee147ee546f08be331abd11ffeb07ea7345
[ "MIT" ]
1
2020-12-10T20:03:35.000Z
2020-12-10T20:53:37.000Z
documentation/choices.md
apersici/weather
e95ceee147ee546f08be331abd11ffeb07ea7345
[ "MIT" ]
null
null
null
# Choices - Programming language: **Python** - Framework: **Flask** - WSGI: **Gunicorn** - Authentication: **Basic Authentication** - DBMS: **PostgreSQL** - Client: **Telegram Bot** ------------ ## Authentication The authentication method employed here is the Basic access Authentication method (BA).\ This means that for every created path in the web service a pop up will appear asking for the credentials (username and password) specified in the WWW-Authenticate header. <p align="center"> <img src='images/authentication.jpg' height='250' width='400' /> </p> ------------ ## Data Format The data output is in JSON format. ------------ ## Database The messages coming from the user, in particular the name of the cities the user wants to get information about, are stored in a database.\ This way, the telegram bot will be able to retrieve information on the weather for the last city the user wrote in the chat. <p align="center"> <img src='images/database.jpg' height='150' width='400' /> </p> PostgreSQL was the DBMS adopted. ------------ ## Telegram The client chosen for the realization of this project is a Telegram bot created using [BotFather](http://telegram.me/botfather). \ After obtaining the token to access the HTTP API, a selection of commands, a description, and a photo were added to the newly created bot.
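As a side note on the Authentication choice above, the sketch below shows how HTTP Basic Authentication is typically wired into a Flask route; the credentials, realm, and route name are placeholders for illustration, not the project's real values.

```python
from functools import wraps

from flask import Flask, Response, request

app = Flask(__name__)
USERNAME, PASSWORD = "user", "secret"  # placeholder credentials


def requires_auth(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.authorization
        if not auth or auth.username != USERNAME or auth.password != PASSWORD:
            # The WWW-Authenticate header is what triggers the browser pop-up.
            return Response("Login required", 401,
                            {"WWW-Authenticate": 'Basic realm="weather"'})
        return view(*args, **kwargs)
    return wrapper


@app.route("/weather")
@requires_auth
def weather():
    return {"status": "ok"}  # Flask serializes the dict to JSON
```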
26.94
138
0.715664
eng_Latn
0.989237
d0ac5595ac69a7b8b7844f5d8928013f59f4ffe4
23
md
Markdown
README.md
yahayahu/react-video-timeline
ac6f773dfe1d08c6eb4dd2d21109f5472af25e5b
[ "MIT" ]
null
null
null
README.md
yahayahu/react-video-timeline
ac6f773dfe1d08c6eb4dd2d21109f5472af25e5b
[ "MIT" ]
null
null
null
README.md
yahayahu/react-video-timeline
ac6f773dfe1d08c6eb4dd2d21109f5472af25e5b
[ "MIT" ]
null
null
null
# React Video Timeline
11.5
22
0.782609
kor_Hang
0.539801
d0ac667cc0cfd21657e22dfd78595c98d95485f2
778
md
Markdown
_posts/2011-09-17-geoffrey-west-the-surprising-math-of-cities-and-corporations.md
heiko-braun/hbraun.info
c5ca35ca1eafefc753aeadccc42ac3b8cad088e6
[ "MIT" ]
null
null
null
_posts/2011-09-17-geoffrey-west-the-surprising-math-of-cities-and-corporations.md
heiko-braun/hbraun.info
c5ca35ca1eafefc753aeadccc42ac3b8cad088e6
[ "MIT" ]
2
2018-07-10T09:46:50.000Z
2018-07-10T09:51:19.000Z
_posts/2011-09-17-geoffrey-west-the-surprising-math-of-cities-and-corporations.md
heiko-braun/heiko-braun.github.io
c5ca35ca1eafefc753aeadccc42ac3b8cad088e6
[ "MIT" ]
null
null
null
--- title: 'Geoffrey West: The surprising math of cities and corporations' author: hbraun layout: post permalink: /2011/09/geoffrey-west-the-surprising-math-of-cities-and-corporations/ blogger_blog: - relative-order.blogspot.com blogger_author: - blogger_permalink: - /2011/09/geoffrey-west-surprising-math-of-cities.html dsq_thread_id: - 694975808 categories: - Uncategorized --- Physicist Geoffrey West has found that simple, mathematical laws govern the properties of cities &#8212; that wealth, crime rate, walking speed and many other aspects of a city can be deduced from a single number: the city&#8217;s population. In this mind-bending talk from TEDGlobal he shows how it works and how similar laws hold for organisms and corporations. <!--copy and paste-->
40.947368
363
0.780206
eng_Latn
0.985082
d0ac724972aeb709925d5547f56069d026bdc6e8
12
md
Markdown
README.md
wengongzhu/record
a35b474fca0840c3eac958b69188a3b04d156a84
[ "MIT" ]
null
null
null
README.md
wengongzhu/record
a35b474fca0840c3eac958b69188a3b04d156a84
[ "MIT" ]
null
null
null
README.md
wengongzhu/record
a35b474fca0840c3eac958b69188a3b04d156a84
[ "MIT" ]
null
null
null
# record 记录
4
8
0.666667
eng_Latn
0.999059
d0ac8b7de2538e8763d4f7749766fac79a1c8d00
11,803
md
Markdown
vendor/github.com/codegangsta/cli/CHANGELOG.md
panyingyun/blog
00de93ee85f00bf8e17e37a6c78f82b0c3848d93
[ "MIT" ]
null
null
null
vendor/github.com/codegangsta/cli/CHANGELOG.md
panyingyun/blog
00de93ee85f00bf8e17e37a6c78f82b0c3848d93
[ "MIT" ]
null
null
null
vendor/github.com/codegangsta/cli/CHANGELOG.md
panyingyun/blog
00de93ee85f00bf8e17e37a6c78f82b0c3848d93
[ "MIT" ]
null
null
null
# Change Log **ATTN**: This project uses [semantic versioning](http://semver.org/). ## [Unreleased] ### Added - Flag type code generation via `go generate` - Write to stderr and exit 1 if action returns non-nil error - Added support for TOML to the `altsrc` loader ### Changed - Raise minimum tested/supported Go version to 1.2+ ## [1.18.0] - 2016-06-27 ### Added - `./runtests` test runner with coverage tracking by default - testing on OS X - testing on Windows - `UintFlag`, `Uint64Flag`, and `Int64Flag` types and supporting code ### Changed - Use spaces for alignment in help/usage output instead of tabs, making the output alignment consistent regardless of tab width ### Fixed - Printing of command aliases in help text - Printing of visible flags for both struct and struct pointer flags - Display the `help` subcommand when using `CommandCategories` - No longer swallows `panic`s that occur within the `Action`s themselves when detecting the signature of the `Action` field ## [1.17.0] - 2016-05-09 ### Added - Pluggable flag-level help text rendering via `cli.DefaultFlagStringFunc` - `context.GlobalBoolT` was added as an analogue to `context.GlobalBool` - Support for hiding commands by setting `Hidden: true` -- this will hide the commands in help output ### Changed - `Float64Flag`, `IntFlag`, and `DurationFlag` default values are no longer quoted in help text output. - All flag types now include `(default: {value})` strings following usage when a default value can be (reasonably) detected. - `IntSliceFlag` and `StringSliceFlag` usage strings are now more consistent with non-slice flag types - Apps now exit with a code of 3 if an unknown subcommand is specified (previously they printed "No help topic for...", but still exited 0. This makes it easier to script around apps built using `cli` since they can trust that a 0 exit code indicated a successful execution. - cleanups based on [Go Report Card feedback](https://goreportcard.com/report/github.com/urfave/cli) ## [1.16.0] - 2016-05-02 ### Added - `Hidden` field on all flag struct types to omit from generated help text ### Changed - `BashCompletionFlag` (`--enable-bash-completion`) is now omitted from generated help text via the `Hidden` field ### Fixed - handling of error values in `HandleAction` and `HandleExitCoder` ## [1.15.0] - 2016-04-30 ### Added - This file! - Support for placeholders in flag usage strings - `App.Metadata` map for arbitrary data/state management - `Set` and `GlobalSet` methods on `*cli.Context` for altering values after parsing. - Support for nested lookup of dot-delimited keys in structures loaded from YAML. ### Changed - The `App.Action` and `Command.Action` now prefer a return signature of `func(*cli.Context) error`, as defined by `cli.ActionFunc`. If a non-nil `error` is returned, there may be two outcomes: - If the error fulfills `cli.ExitCoder`, then `os.Exit` will be called automatically - Else the error is bubbled up and returned from `App.Run` - Specifying an `Action` with the legacy return signature of `func(*cli.Context)` will produce a deprecation message to stderr - Specifying an `Action` that is not a `func` type will produce a non-zero exit from `App.Run` - Specifying an `Action` func that has an invalid (input) signature will produce a non-zero exit from `App.Run` ### Deprecated - <a name="deprecated-cli-app-runandexitonerror"></a> `cli.App.RunAndExitOnError`, which should now be done by returning an error that fulfills `cli.ExitCoder` to `cli.App.Run`. 
- <a name="deprecated-cli-app-action-signature"></a> the legacy signature for `cli.App.Action` of `func(*cli.Context)`, which should now have a return signature of `func(*cli.Context) error`, as defined by `cli.ActionFunc`. ### Fixed - Added missing `*cli.Context.GlobalFloat64` method ## [1.14.0] - 2016-04-03 (backfilled 2016-04-25) ### Added - Codebeat badge - Support for categorization via `CategorizedHelp` and `Categories` on app. ### Changed - Use `filepath.Base` instead of `path.Base` in `Name` and `HelpName`. ### Fixed - Ensure version is not shown in help text when `HideVersion` set. ## [1.13.0] - 2016-03-06 (backfilled 2016-04-25) ### Added - YAML file input support. - `NArg` method on context. ## [1.12.0] - 2016-02-17 (backfilled 2016-04-25) ### Added - Custom usage error handling. - Custom text support in `USAGE` section of help output. - Improved help messages for empty strings. - AppVeyor CI configuration. ### Changed - Removed `panic` from default help printer func. - De-duping and optimizations. ### Fixed - Correctly handle `Before`/`After` at command level when no subcommands. - Case of literal `-` argument causing flag reordering. - Environment variable hints on Windows. - Docs updates. ## [1.11.1] - 2015-12-21 (backfilled 2016-04-25) ### Changed - Use `path.Base` in `Name` and `HelpName` - Export `GetName` on flag types. ### Fixed - Flag parsing when skipping is enabled. - Test output cleanup. - Move completion check to account for empty input case. ## [1.11.0] - 2015-11-15 (backfilled 2016-04-25) ### Added - Destination scan support for flags. - Testing against `tip` in Travis CI config. ### Changed - Go version in Travis CI config. ### Fixed - Removed redundant tests. - Use correct example naming in tests. ## [1.10.2] - 2015-10-29 (backfilled 2016-04-25) ### Fixed - Remove unused var in bash completion. ## [1.10.1] - 2015-10-21 (backfilled 2016-04-25) ### Added - Coverage and reference logos in README. ### Fixed - Use specified values in help and version parsing. - Only display app version and help message once. ## [1.10.0] - 2015-10-06 (backfilled 2016-04-25) ### Added - More tests for existing functionality. - `ArgsUsage` at app and command level for help text flexibility. ### Fixed - Honor `HideHelp` and `HideVersion` in `App.Run`. - Remove juvenile word from README. ## [1.9.0] - 2015-09-08 (backfilled 2016-04-25) ### Added - `FullName` on command with accompanying help output update. - Set default `$PROG` in bash completion. ### Changed - Docs formatting. ### Fixed - Removed self-referential imports in tests. ## [1.8.0] - 2015-06-30 (backfilled 2016-04-25) ### Added - Support for `Copyright` at app level. - `Parent` func at context level to walk up context lineage. ### Fixed - Global flag processing at top level. ## [1.7.1] - 2015-06-11 (backfilled 2016-04-25) ### Added - Aggregate errors from `Before`/`After` funcs. - Doc comments on flag structs. - Include non-global flags when checking version and help. - Travis CI config updates. ### Fixed - Ensure slice type flags have non-nil values. - Collect global flags from the full command hierarchy. - Docs prose. ## [1.7.0] - 2015-05-03 (backfilled 2016-04-25) ### Changed - `HelpPrinter` signature includes output writer. ### Fixed - Specify go 1.1+ in docs. - Set `Writer` when running command as app. ## [1.6.0] - 2015-03-23 (backfilled 2016-04-25) ### Added - Multiple author support. - `NumFlags` at context level. - `Aliases` at command level. ### Deprecated - `ShortName` at command level. 
### Fixed - Subcommand help output. - Backward compatible support for deprecated `Author` and `Email` fields. - Docs regarding `Names`/`Aliases`. ## [1.5.0] - 2015-02-20 (backfilled 2016-04-25) ### Added - `After` hook func support at app and command level. ### Fixed - Use parsed context when running command as subcommand. - Docs prose. ## [1.4.1] - 2015-01-09 (backfilled 2016-04-25) ### Added - Support for hiding `-h / --help` flags, but not `help` subcommand. - Stop flag parsing after `--`. ### Fixed - Help text for generic flags to specify single value. - Use double quotes in output for defaults. - Use `ParseInt` instead of `ParseUint` for int environment var values. - Use `0` as base when parsing int environment var values. ## [1.4.0] - 2014-12-12 (backfilled 2016-04-25) ### Added - Support for environment variable lookup "cascade". - Support for `Stdout` on app for output redirection. ### Fixed - Print command help instead of app help in `ShowCommandHelp`. ## [1.3.1] - 2014-11-13 (backfilled 2016-04-25) ### Added - Docs and example code updates. ### Changed - Default `-v / --version` flag made optional. ## [1.3.0] - 2014-08-10 (backfilled 2016-04-25) ### Added - `FlagNames` at context level. - Exposed `VersionPrinter` var for more control over version output. - Zsh completion hook. - `AUTHOR` section in default app help template. - Contribution guidelines. - `DurationFlag` type. ## [1.2.0] - 2014-08-02 ### Added - Support for environment variable defaults on flags plus tests. ## [1.1.0] - 2014-07-15 ### Added - Bash completion. - Optional hiding of built-in help command. - Optional skipping of flag parsing at command level. - `Author`, `Email`, and `Compiled` metadata on app. - `Before` hook func support at app and command level. - `CommandNotFound` func support at app level. - Command reference available on context. - `GenericFlag` type. - `Float64Flag` type. - `BoolTFlag` type. - `IsSet` flag helper on context. - More flag lookup funcs at context level. - More tests &amp; docs. ### Changed - Help template updates to account for presence/absence of flags. - Separated subcommand help template. - Exposed `HelpPrinter` var for more control over help output. ## [1.0.0] - 2013-11-01 ### Added - `help` flag in default app flag set and each command flag set. - Custom handling of argument parsing errors. - Command lookup by name at app level. - `StringSliceFlag` type and supporting `StringSlice` type. - `IntSliceFlag` type and supporting `IntSlice` type. - Slice type flag lookups by name at context level. - Export of app and command help functions. - More tests &amp; docs. ## 0.1.0 - 2013-07-22 ### Added - Initial implementation. 
[Unreleased]: https://github.com/urfave/cli/compare/v1.18.0...HEAD [1.18.0]: https://github.com/urfave/cli/compare/v1.17.0...v1.18.0 [1.17.0]: https://github.com/urfave/cli/compare/v1.16.0...v1.17.0 [1.16.0]: https://github.com/urfave/cli/compare/v1.15.0...v1.16.0 [1.15.0]: https://github.com/urfave/cli/compare/v1.14.0...v1.15.0 [1.14.0]: https://github.com/urfave/cli/compare/v1.13.0...v1.14.0 [1.13.0]: https://github.com/urfave/cli/compare/v1.12.0...v1.13.0 [1.12.0]: https://github.com/urfave/cli/compare/v1.11.1...v1.12.0 [1.11.1]: https://github.com/urfave/cli/compare/v1.11.0...v1.11.1 [1.11.0]: https://github.com/urfave/cli/compare/v1.10.2...v1.11.0 [1.10.2]: https://github.com/urfave/cli/compare/v1.10.1...v1.10.2 [1.10.1]: https://github.com/urfave/cli/compare/v1.10.0...v1.10.1 [1.10.0]: https://github.com/urfave/cli/compare/v1.9.0...v1.10.0 [1.9.0]: https://github.com/urfave/cli/compare/v1.8.0...v1.9.0 [1.8.0]: https://github.com/urfave/cli/compare/v1.7.1...v1.8.0 [1.7.1]: https://github.com/urfave/cli/compare/v1.7.0...v1.7.1 [1.7.0]: https://github.com/urfave/cli/compare/v1.6.0...v1.7.0 [1.6.0]: https://github.com/urfave/cli/compare/v1.5.0...v1.6.0 [1.5.0]: https://github.com/urfave/cli/compare/v1.4.1...v1.5.0 [1.4.1]: https://github.com/urfave/cli/compare/v1.4.0...v1.4.1 [1.4.0]: https://github.com/urfave/cli/compare/v1.3.1...v1.4.0 [1.3.1]: https://github.com/urfave/cli/compare/v1.3.0...v1.3.1 [1.3.0]: https://github.com/urfave/cli/compare/v1.2.0...v1.3.0 [1.2.0]: https://github.com/urfave/cli/compare/v1.1.0...v1.2.0 [1.1.0]: https://github.com/urfave/cli/compare/v1.0.0...v1.1.0 [1.0.0]: https://github.com/urfave/cli/compare/v0.1.0...v1.0.0
35.023739
81
0.69118
eng_Latn
0.892773
d0acc7df01be3691dc82f314564e98796c6d3bb9
145
md
Markdown
README.md
nandorcsupor/smartcontract_lottery
b69228c653331aff6b86ca273a814a07f58c4487
[ "MIT" ]
null
null
null
README.md
nandorcsupor/smartcontract_lottery
b69228c653331aff6b86ca273a814a07f58c4487
[ "MIT" ]
null
null
null
README.md
nandorcsupor/smartcontract_lottery
b69228c653331aff6b86ca273a814a07f58c4487
[ "MIT" ]
null
null
null
1. Users can enter the lottery with ETH, based on a USD entry fee.
2. An admin will choose when the lottery is over.
3. The lottery will select a random winner.
36.25
55
0.765517
eng_Latn
0.99992
d0ad1037fcaea3583699b6a84165e49d5204f37d
2,084
md
Markdown
docs/cloud/concepts/flows.md
aliza-miller/prefect
a537e20a8b81192f12a7df575897596347b8a307
[ "Apache-2.0" ]
null
null
null
docs/cloud/concepts/flows.md
aliza-miller/prefect
a537e20a8b81192f12a7df575897596347b8a307
[ "Apache-2.0" ]
null
null
null
docs/cloud/concepts/flows.md
aliza-miller/prefect
a537e20a8b81192f12a7df575897596347b8a307
[ "Apache-2.0" ]
null
null
null
# Flows Flows can be deployed to Prefect Cloud for scheduling and execution, as well as management of run histories, logs, and other important metrics. ## Deploying a flow from Prefect Core To deploy a flow from Prefect Core, simply use its `deploy()` method: ```python flow.deploy(project_name="<a project name>") ``` Note that this assumes you have already [authenticated](auth.md) with Prefect Cloud. For more information on Flow deployment see [here](../flow-deploy.html). ## Deploying a flow <Badge text="GQL"/> To deploy a flow via the GraphQL API, first serialize the flow to JSON: ```python flow.serialize() ``` Next, use the `createFlow` GraphQL mutation to pass the serialized flow to Prefect Cloud. You will also need to provide a project ID: ```graphql mutation($flow: JSON!) { createFlow(input: { serializedFlow: $flow, projectId: "<project id>" }) { id } } ``` ```json // graphql variables { serializedFlow: <the serialized flow JSON> } ``` ## Flow Versions and Archiving <Badge text="GQL"/> You can control how Cloud versions your Flows by providing a `versionGroupId` whenever you deploy a Flow (exposed via the `version_group_id` keyword argument in `flow.deploy`). Flows which provide the same `versionGroupId` will be considered versions of each other. By default, Flows with the same name in the same Project will be given the same `versionGroupId` and are considered "versions" of each other. Anytime you deploy a new version of a flow, Prefect Cloud will automatically "archive" the old version in place of the newly deployed flow. Archiving means that the old version's schedule is set to "Paused" and no new flow runs can be created. You can always revisit old versions and unarchive them, if for example you want the same Flow to run on two distinct schedules. To archive or unarchive a flow, use the following GraphQL mutations: ```graphql mutation { archiveFlow( input: { flowId: "your-flow-id-here" }) { id } } ``` ```graphql mutation { unarchiveFlow( input: { flowId: "your-flow-id-here" }) { id } } ```
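For illustration, here is a minimal Python sketch of the deployment call described above, pinning the version group explicitly. The flow, project name, and `version_group_id` value are made-up placeholders, not values from this documentation.

```python
from prefect import Flow, task

@task
def say_hello():
    print("hello")

# Build a trivial flow so there is something to deploy.
with Flow("hello-flow") as flow:
    say_hello()

# Deploy to a project (as shown above). Passing the same version_group_id on
# later deploys makes Prefect Cloud treat them as versions of this flow and
# archive the previous one automatically.
flow.deploy(project_name="demo-project", version_group_id="hello-flow-versions")
```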
34.163934
656
0.732726
eng_Latn
0.994927
d0ad4275f8e65f4631e435ae829fc03857a694ed
2,314
md
Markdown
docs/extensibility/sccbeginbatch-function.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/sccbeginbatch-function.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/sccbeginbatch-function.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: SccBeginBatch-Funktion | Microsoft-Dokumentation ms.date: 11/04/2016 ms.topic: conceptual f1_keywords: - SccBeginBatch helpviewer_keywords: - SccBeginBatch function ms.assetid: 33968183-2e15-4e0d-955b-ca12212d1c25 author: gregvanl ms.author: gregvanl manager: jillfra ms.workload: - vssdk ms.openlocfilehash: cff35d2b2df3a09249d232fe904ba65815ce95ae ms.sourcegitcommit: 2193323efc608118e0ce6f6b2ff532f158245d56 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 01/25/2019 ms.locfileid: "55009881" --- # <a name="sccbeginbatch-function"></a>SccBeginBatch-Funktion Diese Funktion wird eine Batch-Sequenz von Quellcodeverwaltungsvorgänge gestartet. Die [SccEndBatch](../extensibility/sccendbatch-function.md) wird aufgerufen, um den Batch beenden. Diese Batches können nicht geschachtelt werden. ## <a name="syntax"></a>Syntax ```cpp SCCRTN SccBeginBatch(void); ``` ### <a name="parameters"></a>Parameter Keine ## <a name="return-value"></a>Rückgabewert Die Source-Steuerelement-Plug-in-Implementierung dieser Funktion muss einen der folgenden Werte zurückgeben: |Wert|Beschreibung| |-----------|-----------------| |SCC_OK|Batches von Vorgängen wurde erfolgreich gestartet.| |SCC_E_UNKNOWNERROR|Nicht spezifischen Fehler.| ## <a name="remarks"></a>Hinweise Source-Control-Batches werden verwendet, um die gleichen Vorgänge über mehrere Projekte oder mehrere Kontexte auszuführen. Batches können verwendet werden, um redundante projektbezogene Dialogfeldern auf die Benutzeroberfläche während eines Vorgangs im Batchmodus zu vermeiden. Die `SccBeginBatch` Funktion und die [SccEndBatch](../extensibility/sccendbatch-function.md) dienen als Funktionspaar Anfang und Ende eines Vorgangs an. Sie sind nicht schachtelbar. `SccBeginBatch` Legt ein Flag, der angibt, dass ein Batchvorgang ausgeführt wird. Während ein Batchvorgangs aktiviert ist, sollte das Quellcodeverwaltungs-Plug-in höchstens ein Dialogfeld für Fragen an den Benutzer vorhanden und die Antwort in diesem Dialogfeld auf alle nachfolgenden Operationen anwenden. ## <a name="see-also"></a>Siehe auch [Datenquellen-Steuerelement-Plug-in-API-Funktionen](../extensibility/source-control-plug-in-api-functions.md) [SccEndBatch](../extensibility/sccendbatch-function.md)
47.22449
544
0.780467
deu_Latn
0.935847
d0ae1389d9301cd1b8e6c4f641ab86d8b6a51ad7
850
md
Markdown
_posts/people-love/25/2019-07-31-hello3307 (369).md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
_posts/people-love/25/2019-07-31-hello3307 (369).md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
_posts/people-love/25/2019-07-31-hello3307 (369).md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
--- id: 3670 title: Salustiano Candia author: chito layout: post guid: http://localhost/mbti/?p=3670 permalink: /hello3670 tags: - say thank you category: Guides --- {: toc} ## Name Salustiano Candia * * * ## Nationality Paraguay * * * ## National Position LB * * * ## Random data * National kit 6 * Club Free Agents * Club Position Res * Club Kit 99 * Club Joining 41605 * Contract Expiry 2020 * Rating 72 * Height 183 cm * Weight 82 kg * Preffered Foot Right * Birth Date 30505 * Preffered Position CB/LB Low / High * Weak foot 2 * Skill Moves 2 * Ball Control 59 * Dribbling 34 * Marking 67 * Sliding Tackle 71 * Standing Tackle 75 * Aggression 90 * Reactions 62 * Attacking Position 17 * Interceptions 66</ul>
8.333333
35
0.596471
eng_Latn
0.398781
d0ae63aa5ee1d7b82a29c0f3f7e196b07485b708
4,691
md
Markdown
school_year_2020_2021/homework_2/README.md
d0ivanov/python-course
d55ce01b1fe03d7959a2eb4869e56c3f5e257d08
[ "MIT" ]
2
2019-12-30T13:26:55.000Z
2020-01-18T14:03:25.000Z
school_year_2020_2021/homework_2/README.md
d0ivanov/python-course
d55ce01b1fe03d7959a2eb4869e56c3f5e257d08
[ "MIT" ]
3
2019-11-05T16:47:54.000Z
2020-10-31T18:50:31.000Z
school_year_2020_2021/homework_2/README.md
d0ivanov/python-course
d55ce01b1fe03d7959a2eb4869e56c3f5e257d08
[ "MIT" ]
24
2019-10-10T19:17:40.000Z
2020-10-25T10:42:00.000Z
# Костенурка [Turtle graphics](http://en.wikipedia.org/wiki/Turtle_graphics) представлява метод за рисуване чрез курсор (костенурката), който движим в двумерна координатна система. Костенурката се поставя на дадено място в координатната система и може да приема следните команди: - Премести се напред - Завърти се наляво - Завърти се надясно Преместването напред винаги е с една позиция спрямо отекущото положение и ориентация. Завъртането винаги е на 90 градуса и винаги е спрямо текущата ориентация ## Вашата задача Имплементирайте клас `Turtle`, чрез който можете да управлявате една такава костенурка. Координатите в координатната ни система могат да бъдат само цели числа - можем да си я представим като двумерен масив. Стойностите на кетките в двумерният масив ще бъдат цели числа и ще означават броят пъти, които костенурката е минала през съответната клетка. Завъртането на костенурката не се брои за посещение на клетката. Класът `Turtle` се инициализира с 2 аргумента - брой редове и брой колони. Ето един пример за работа с класа `Turtle`: turtle = Turtle(3, 3) turtle.spawn_at(0, 0) turtle.move() turtle.turn_right() turtle.move() turtle.move() turtle.turn_left() turtle.move() print(turtle.canvas) => [ [1, 1, 0], [0, 1, 0], [0, 1, 1] ] От примера можете да видите следните неща за класът `Turtle`: - Методът `spawn_at`, който задава началната позиция на костенурката и приема 2 аргумента - ред и колона - Позицията, от която започва да се движи костенурката в следствие на извикване на `spawn_at` се брои за посетена - Методът `move`, койото премества костенурката с една клетка напред в зависимост от текущата ѝ ориентация - Методът `turn_right` завърта костенурката надясно спрямо текущата ѝ ориентация - Методът `turn_left` завърта костенурката надясно спрямо текущата ѝ ориентация - `canvas` е атрибут на класа `Turtle`, който съдържа матрицата, която ползваме за координатна система Ако се опитаме да придвижим костенурката, без преди това да сме извикали метода `spawn_at` поне един път, трябва да получим изключение `RuntimeError.` Стойностите на `canvas` означават броят пъти, киото костенурката е посетила съответната клетка. Това означава, че всяко произволно положително число е валидна стойност за клетка от `canvas`. Ако костенурката излезе извън матрицата, то тя трябва да бъде "прехвърлена" в началото на съответният ред или колона, от която е излязла. Например, ако костенурката е в края на даден ред, ориентирана надясно, и бъде извикан метода `move`, то новата позицията на костенурката трябва да е началото на същият ред и ориентацията трябва да остане непроменена. Например: turtle = Turtle(3, 3) turtle.spawn_at(0, 0) for i in range(9): turtle.move() print(turtle.canvas) => [ [4, 3, 3], [0, 0, 0], [0, 0, 0] ] ## SimpleCanvas Класът `SimpleCanvas` ще ни помага да изобразяваме данните, които атрибутът `canvas` на класа `Turtle` съдържа. `SimpleCanvas` трябва да има един метод - `draw` който връща стринг, представляващ рисунката. _Пиксел_ ще наричаме една клетка от двумерната рисунка. _Интензитет_ на пиксел ще наричаме отношението на броят пъти, в които костенурката е минала през съответния пиксел и максималният брой пъти, в коит костенурката е минала през кой да е пиксел. Ще инициализираме `SimpleCanvas` по следният начин: `SimpleCanvas(canvas, [' ', '*', '@', '#'])`. Първият аргумент е съдържанието на атрибута `canvas` на класа `Turtle`, а вторият аргумент е масив от символи. Всеки елемент в този масив ще задава начин на изобразяване на пикселите според интензитета им. 
Първият елемент на масива отговаря на интензитет 0, а символите след първия се използват за изобразяване на равни интервали на интензитет на всеки пиксел. Например, резултатът от изпълнение на следният код: turtle = Turtle(3, 3) turtle.spawn_at(0, 0) for i in range(9): turtle.move() turtle.turn_right() for i in range(4): turtle.move() turtle.turn_left() turtle.move() turtle.move() turtle.turn_right() turtle.move() canvas = SimpleCanvas(turtle.canvas, [' ', '*', '@', '#']) print(canvas.draw()) #@@ @** * * Символите от примера слгаме по следният начин: - ' ' интензитетът == 0 - '\*' 0 < интентзитет <= 0.(3) - '@' 0.(3) < интензитет <= 0.(6) - '#' 0.(6) < интензитет <= 1.0 ### Забележки - Можете да разчитате, че винаги ще подаваме поне два символа, когато инициализираме `SimpleCanvas`. - Можете да разчитате, че винаги ще има поне един пиксел с поне едно посещение, когато инициализираме `SimpleCanvas`
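To make the specification above concrete, here is a minimal illustrative Python sketch of the `Turtle` class. It is only a sketch, not an official solution: `SimpleCanvas` is omitted, and the initial orientation is assumed to be "facing right", which is consistent with the examples above.

```python
class Turtle:
    """Illustrative sketch of the Turtle described above."""

    # Orientation vectors: right, down, left, up (row delta, column delta).
    DIRECTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

    def __init__(self, rows, cols):
        self.rows = rows
        self.cols = cols
        self.canvas = [[0] * cols for _ in range(rows)]
        self._position = None
        self._direction = 0  # facing right, consistent with the examples above

    def spawn_at(self, row, col):
        self._position = (row, col)
        self.canvas[row][col] += 1  # the spawn cell counts as visited

    def move(self):
        if self._position is None:
            raise RuntimeError("spawn_at must be called before moving")
        d_row, d_col = self.DIRECTIONS[self._direction]
        row = (self._position[0] + d_row) % self.rows  # wrap around the edges
        col = (self._position[1] + d_col) % self.cols
        self._position = (row, col)
        self.canvas[row][col] += 1

    def turn_right(self):
        self._direction = (self._direction + 1) % 4

    def turn_left(self):
        self._direction = (self._direction - 1) % 4
```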
45.543689
459
0.728203
bul_Cyrl
0.999822
d0aee2f3d34a143e225ca66c032e23b3e8b8932f
1,832
md
Markdown
_docs/08-grpc.md
ongzhixian/mini-tools
b127eea5106ea6e9a9f0e0b5c6c4aa50c6f6cbc2
[ "MIT" ]
1
2022-03-10T05:31:57.000Z
2022-03-10T05:31:57.000Z
_docs/08-grpc.md
ongzhixian/mini-tools
b127eea5106ea6e9a9f0e0b5c6c4aa50c6f6cbc2
[ "MIT" ]
null
null
null
_docs/08-grpc.md
ongzhixian/mini-tools
b127eea5106ea6e9a9f0e0b5c6c4aa50c6f6cbc2
[ "MIT" ]
null
null
null
# GRPC

The main benefits of gRPC are:

1. Modern, high-performance, lightweight RPC framework.
2. Contract-first API development, using Protocol Buffers by default, allowing for language-agnostic implementations.
3. Tooling available for many languages to generate strongly-typed servers and clients.
4. Supports client, server, and bi-directional streaming calls.
5. Reduced network usage with Protobuf binary serialization.

These benefits make gRPC ideal for:

1. Lightweight microservices where efficiency is critical.
2. Polyglot systems where multiple languages are required for development.
3. Point-to-point real-time services that need to handle streaming requests or responses.

    dotnet-grpc list --project .\MiniTools.HostApp\MiniTools.HostApp.csproj

# Packages

    Grpc
    Grpc.Tools
    Google.Protobuf
    Grpc.AspNetCore

    dotnet tool install -g dotnet-grpc

https://docs.microsoft.com/en-us/aspnet/core/grpc/dotnet-grpc?view=aspnetcore-6.0

# Problems with GRPC

1. No HTTP/2, no gRPC.
2. Client-server architecture: only the client can initiate events; the server can only respond to them.

Pro GRPC

1. Much easier API versioning
2. Libraries available in pretty much any language
3. Easier to apply in microservice architecture

```
services.AddGrpcClient<Greeter.GreeterClient>(o =>
{
    o.Address = new Uri("https://localhost:5001");
});
```

## Grpc-dotnet

https://blog.jetbrains.com/dotnet/2021/07/19/getting-started-with-asp-net-core-and-grpc/

https://www.telerik.com/blogs/introduction-to-grpc-dotnet-core-and-dotnet-5

## Older (deprecated Grpc.Core)

https://codelabs.developers.google.com/codelabs/cloud-grpc-csharp#1

https://github.com/grpc/grpc/tree/master/examples/csharp/Helloworld

## Remoting (channels)

https://docs.microsoft.com/en-us/troubleshoot/dotnet/csharp/create-remote-server
25.444444
118
0.777293
eng_Latn
0.678772
d0aef4056caaca3534c65ff9dedf850da89596bf
51
md
Markdown
README.md
valkyrienyanko/math
9d28263f9700c4459462515f80ad15db054de1e9
[ "MIT" ]
null
null
null
README.md
valkyrienyanko/math
9d28263f9700c4459462515f80ad15db054de1e9
[ "MIT" ]
null
null
null
README.md
valkyrienyanko/math
9d28263f9700c4459462515f80ad15db054de1e9
[ "MIT" ]
null
null
null
A bunch of utils to help with math-related stuff.
25.5
50
0.784314
eng_Latn
0.999979
d0af67664719156a7e255af98a1743dd4ab0e53c
1,257
md
Markdown
docs/description/AbbreviationAsWordInName.md
codacy/codacy-checkstyle
3c1b63ba0c4c93f8607556a2874777ffe312bed7
[ "Apache-2.0" ]
7
2016-06-05T10:14:29.000Z
2021-05-19T02:09:34.000Z
docs/description/AbbreviationAsWordInName.md
codacy/codacy-checkstyle
3c1b63ba0c4c93f8607556a2874777ffe312bed7
[ "Apache-2.0" ]
25
2016-03-17T15:39:57.000Z
2021-11-15T18:32:28.000Z
docs/description/AbbreviationAsWordInName.md
codacy/codacy-checkstyle
3c1b63ba0c4c93f8607556a2874777ffe312bed7
[ "Apache-2.0" ]
9
2016-03-17T15:11:15.000Z
2020-11-21T19:15:56.000Z
Validates the length of abbreviations (consecutive capital letters) in identifier names, and also allows enforcing camel case naming. Please read the [Google Style Guide](https://checkstyle.org/styleguides/google-java-style-20180523/javaguide.html#s5.3-camel-case) to learn how to avoid long abbreviations in names.

`allowedAbbreviationLength` specifies how many consecutive capital letters are allowed in the identifier. A value of *3* indicates that up to 4 consecutive capital letters are allowed, one after the other, before a violation is printed. The identifier 'MyTEST' would be allowed, but 'MyTESTS' would not be. A value of *0* indicates that only 1 consecutive capital letter is allowed. This is what should be used to enforce strict camel casing. The identifier 'MyTest' would be allowed, but 'MyTEst' would not be.

`ignoreFinal`, `ignoreStatic`, and `ignoreStaticFinal` control whether variables with the respective modifiers are to be ignored. Note that a variable that is both static and final will always be considered under `ignoreStaticFinal` only, regardless of the values of `ignoreFinal` and `ignoreStatic`. So, for example, if `ignoreStatic` is true but `ignoreStaticFinal` is false, then static final variables will not be ignored.
54.652174
100
0.802705
eng_Latn
0.998126
d0b02ceb378681c1a58056a9e314c03508872e7f
17
md
Markdown
README.md
DangerOnTheRanger/project-panzee
9392a3d8c4a0acbac80905e164dca91b691022ab
[ "BSD-2-Clause" ]
5
2015-07-16T20:44:58.000Z
2016-04-19T03:47:46.000Z
README.md
DangerOnTheRanger/project-panzee
9392a3d8c4a0acbac80905e164dca91b691022ab
[ "BSD-2-Clause" ]
6
2015-12-23T07:05:06.000Z
2016-07-03T22:29:03.000Z
README.md
DangerOnTheRanger/project-panzee
9392a3d8c4a0acbac80905e164dca91b691022ab
[ "BSD-2-Clause" ]
null
null
null
# Project-Panzee
8.5
16
0.764706
deu_Latn
0.700442
d0b07e75b27ec2ac8e4500df4ac094b1f81f603b
69
md
Markdown
README.md
Kongduino/EspLoRa_Client
6817caa34287da50ad2a1362d2b74a258f33d70b
[ "MIT" ]
null
null
null
README.md
Kongduino/EspLoRa_Client
6817caa34287da50ad2a1362d2b74a258f33d70b
[ "MIT" ]
null
null
null
README.md
Kongduino/EspLoRa_Client
6817caa34287da50ad2a1362d2b74a258f33d70b
[ "MIT" ]
null
null
null
# EspLoRa_Client

LoRa client for the Arduino Esplora, for my LoRa散步 ("LoRa stroll").
23
51
0.797101
eng_Latn
0.880482
d0b0e7c24bd5f7e564907e8cdeb854babba5cb2c
26
md
Markdown
README.md
manuk-nersisyan/Api
c5e021aa5affe6ca38905b7abcea7bf47a66237b
[ "Apache-2.0" ]
null
null
null
README.md
manuk-nersisyan/Api
c5e021aa5affe6ca38905b7abcea7bf47a66237b
[ "Apache-2.0" ]
null
null
null
README.md
manuk-nersisyan/Api
c5e021aa5affe6ca38905b7abcea7bf47a66237b
[ "Apache-2.0" ]
null
null
null
# Api

A REST API built with Lumen.
8.666667
19
0.730769
eng_Latn
0.991812
d0b0f3a6e5bd62ddb76fc2cc6d14515c56e8aa6c
8,659
md
Markdown
_posts/2020-10-29-MNIST_데이터로_해보는_CNN(Convolution_Neral_Network).md
hmkim312/hmkim312.github.io
70be77b620a08c2635a7ce8fd957348c7689fb1f
[ "MIT" ]
null
null
null
_posts/2020-10-29-MNIST_데이터로_해보는_CNN(Convolution_Neral_Network).md
hmkim312/hmkim312.github.io
70be77b620a08c2635a7ce8fd957348c7689fb1f
[ "MIT" ]
null
null
null
_posts/2020-10-29-MNIST_데이터로_해보는_CNN(Convolution_Neral_Network).md
hmkim312/hmkim312.github.io
70be77b620a08c2635a7ce8fd957348c7689fb1f
[ "MIT" ]
2
2021-09-04T06:44:20.000Z
2022-03-15T00:54:14.000Z
--- title: MNIST 데이터로 해보는 CNN (Convolution Neral Network) author: HyunMin Kim date: 2020-10-29 11:00:00 0000 categories: [Data Science, Deep Learning] tags: [Tensorflow, Neural Net, CNN, Over Fitting, Convolution Filter, Max Pooling, Drop Out, Pandding] --- ## 1. CNN (Convolution Neral Network) --- ### 1.1 CNN <img src="https://user-images.githubusercontent.com/60168331/97574180-2fefe200-1a2e-11eb-8044-9427b3672ad7.png"> - 이미지 영상인식의 혁명같은 CNN - CNN은 이미지의 특징을 검출하여, 분류하는 것 <br> <img src="https://user-images.githubusercontent.com/60168331/97574332-662d6180-1a2e-11eb-9c63-2e441019be24.png"> - CNN은 특징을 찾는 레이어와 분류를 하는 레이어로 구성됨 <br> ### 1.2 Convolutional Filter <img src = "https://user-images.githubusercontent.com/60168331/97574899-3e8ac900-1a2f-11eb-8a7c-7e9a9a661611.gif"> - Convolution : 특정 패턴이 있는지 박스로 훑으며 마킹하는 것 - 위 아래선 필터, 좌우선 필터, 대각선 필터, 각종 필터로 해당 패턴이 그림위에 있는지 확인 - 필터는 이미지의 특징을 찾아내기 위한 파라미터 위 그림에서는 주황색의 3 x 3 행렬 (CNN에서 Filter와 Kernel은 같은 의미로 사용됨) - 필터는 일반적으로 4 x 4 or 3 x 3과 같은 정사각 행렬로 정의됨. - CNN에서 학습을 통해 필터를 구할 수 있음 - CNN은 입력 데이터를 지정된 간격으로 순회하며 채널별로 합성곱을 하고 모든 채널(컬러의 경우 3개)의 합성곱의 합을 Feature Map로 만듬 - 위 그림은 채널이 1개인 입력 데이터를 (3, 3) 크기의 필터로 합성곱하는 과정을 나타냄 <br> ### 1.3 Pooling <img src = "https://user-images.githubusercontent.com/60168331/97575766-8fe78800-1a30-11eb-8239-5bd8f2c3a602.png"> <img src = "https://user-images.githubusercontent.com/60168331/97575782-95dd6900-1a30-11eb-87e9-02001cd26d3c.png"> <img src = "https://user-images.githubusercontent.com/60168331/97575790-9bd34a00-1a30-11eb-8812-fd4106488dce.png"> <img src = "https://user-images.githubusercontent.com/60168331/97575805-a1c92b00-1a30-11eb-9bd7-4a9159ff451b.png"> - 풀링은 점점 더 멀리서 보는것 -> 그림의 크기를 줄이는것 <br> ### 1.4 MaxPooling <img src = "https://user-images.githubusercontent.com/60168331/97576192-303dac80-1a31-11eb-8269-96affc0ec614.png"> - 그림의 사이즈를 점진적으로 줄이는 법 MaxPooling - n x n(pool)을 중요한 정보(Max) 한개로 줄임 - 선명한 정보만 남겨서 판단과 학습이 쉬워지고 노이즈가 줄면서 덤으로 융통성도 확보됨 - 4 x 4 행렬 -> 2 x 2 행렬이 됨 - Stride : 좌우로 몇칸씩 이동할지 설정, 보통 2 x 2 <br> ### 1.5 Conv Layer의 의미 <img src = 'https://user-images.githubusercontent.com/60168331/97576328-5fecb480-1a31-11eb-84b7-a3ab0b19e46f.png'> - Conv : 패턴들을 쌓아가며 점차 복잡한 패턴을 인식 - MaxPooling : 사이즈를 줄여가며, 더욱 추상화 해나감 <br> ### 1.6 CNN 모델 및 코드 <img src = "https://user-images.githubusercontent.com/60168331/97576513-9e826f00-1a31-11eb-89fe-634a6f25a88f.png"> - 위의 내용으로 만든 CNN 모델의 구조와, 파이썬 코드 <br> ### 1.7 Zero Padding <img src = "https://user-images.githubusercontent.com/60168331/97576784-f1f4bd00-1a31-11eb-9b5c-340d0c394b0e.png"> - Zero Padding : 이미지의 귀퉁이가 짤리니, 사이즈 유지를 위해 Conv 전에 0을 모서리에 추가하고 시작함 <br> ### 1.8 Over Fitting - 뉴럴넷에 고양이 이미지를 학습 시켰는데, 테스트 이미지가 학습한 이미지와 달라서 제대로 예측하지 못하는 현상 - 즉, 학습 데이터에 과도하게 Fitting되어 있음, 학습 데이터가 아니면 잘 예측하지 못하는것! <br> ### 1.9 Drop Out <img src = "https://user-images.githubusercontent.com/60168331/97577225-7a735d80-1a32-11eb-8142-8f41c1820d30.png"> - Overfitting을 방지하기 위한 방법 - 학습 시킬때 일부러 정보를 누락시키거나, 중간 중간에 노드를 끄는것 <br> ## 2. 
실습 --- ### 2.1 MNIST Data load ```python from tensorflow.keras import datasets mnist = datasets.mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train, X_test = X_train / 255.0, X_test / 255.0 X_train = X_train.reshape((60000, 28 ,28, 1)) X_test = X_test.reshape((10000, 28 ,28, 1)) ``` - 텐서플로우에서 MNIST 데이터를 불러와서, 데이터 정리 - 255.0으로 나눠준 이유는 이미지가 0 ~ 255 사이의 값을 가지고 있어서, MinMaxScale을 적용한것 <br> ### 2.2 모델 구성 ```python from tensorflow.keras import layers, models model = models.Sequential([ layers.Conv2D(32, kernel_size=(5, 5), strides=(1, 1), padding='same', activation='relu', input_shape=(28, 28, 1)), layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)), layers.Conv2D(64, kernel_size=(2, 2), activation='relu', padding='same'), layers.MaxPooling2D(pool_size=(2, 2)), layers.Dropout(0.25), layers.Flatten(), layers.Dense(1000, activation='relu'), layers.Dense(10, activation='softmax') ]) model.summary() ``` Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_2 (Conv2D) (None, 28, 28, 32) 832 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 14, 14, 32) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 14, 14, 64) 8256 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 7, 7, 64) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 7, 7, 64) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 3136) 0 _________________________________________________________________ dense_2 (Dense) (None, 1000) 3137000 _________________________________________________________________ dense_3 (Dense) (None, 10) 10010 ================================================================= Total params: 3,156,098 Trainable params: 3,156,098 Non-trainable params: 0 _________________________________________________________________ <br> ### 2.3 Fit ```python import time model.compile(optimizer='adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy']) start_time = time.time() hist = model.fit(X_train, y_train, epochs=5, verbose = 1, validation_data=(X_test, y_test)) print(f'Fit Time :{time.time() - start_time}') ``` Epoch 1/5 1875/1875 [==============================] - 35s 19ms/step - loss: 0.1138 - accuracy: 0.9642 - val_loss: 0.0358 - val_accuracy: 0.9877 Epoch 2/5 1875/1875 [==============================] - 37s 20ms/step - loss: 0.0467 - accuracy: 0.9853 - val_loss: 0.0315 - val_accuracy: 0.9909 Epoch 3/5 1875/1875 [==============================] - 39s 21ms/step - loss: 0.0326 - accuracy: 0.9898 - val_loss: 0.0261 - val_accuracy: 0.9916 Epoch 4/5 1875/1875 [==============================] - 40s 21ms/step - loss: 0.0243 - accuracy: 0.9926 - val_loss: 0.0336 - val_accuracy: 0.9893 Epoch 5/5 1875/1875 [==============================] - 41s 22ms/step - loss: 0.0223 - accuracy: 0.9930 - val_loss: 0.0264 - val_accuracy: 0.9917 Fit Time :190.74329090118408 - Accuracy가 0.99...? 
<br> ### 2.4 그래프로 보기 ```python import matplotlib.pyplot as plt plot_target = ['loss' , 'accuracy', 'val_loss', 'val_accuracy'] plt.figure(figsize=(12, 8)) for each in plot_target: plt.plot(hist.history[each], label = each) plt.legend() plt.grid() plt.show() ``` <img src = 'https://user-images.githubusercontent.com/60168331/97581262-b2c96a80-1a37-11eb-8be7-5a53e5cbb6ec.png'> - 학습은 아무 문제없이 잘됨. <br> ### 2.5 Test ```python score = model.evaluate(X_test, y_test) print(f'Test Loss : {score[0]}') print(f'Test Accuracy : {score[1]}') ``` 313/313 [==============================] - 1s 5ms/step - loss: 0.0264 - accuracy: 0.9917 Test Loss : 0.02644716575741768 Test Accuracy : 0.9916999936103821 - Test도 0.99.. - 틀린 데이터가 궁금해짐 <br> ### 2.6 데이터 예측 ```python import numpy as np predicted_result = model.predict(X_test) predicted_labels = np.argmax(predicted_result, axis=1) predicted_labels[:10] ``` array([7, 2, 1, 0, 4, 1, 4, 9, 5, 9]) <br> ### 2.7 틀린 데이터만 모으기 ```python wrong_result = [] for n in range(0, len(y_test)): if predicted_labels[n] != y_test[n]: wrong_result.append(n) len(wrong_result) ``` 83 - 총 1만개 데이터 중에 83개를 틀림 - 정확도 엄청나다.. <br> ### 2.8 틀린 데이터 16개만 직접 그려보기 ```python import random samples = random.choices(population=wrong_result, k =16) plt.figure(figsize=(14, 12)) for idx, n in enumerate(samples): plt.subplot(4, 4, idx + 1) plt.imshow(X_test[n].reshape(28,28), cmap = 'Greys', interpolation='nearest') plt.title('Label ' + str(y_test[n]) + ', Predict ' + str(predicted_labels[n])) plt.axis('off') plt.show() ``` <img src = 'https://user-images.githubusercontent.com/60168331/97581267-b3fa9780-1a37-11eb-93ff-1f5a614e89af.png'> - 직접봐도 틀릴만한 것들. 1% <br> ### 2.9 Model Save ```python model.save('MNIST_CNN_model.h5') ``` - model.save를 사용하여 만든 모델을 저장할 수 있음! <br> ## 3. 요약 --- ### 3.1 요약 - 이미지, 영상의 최강자 CNN을 튜토리얼 해보았다. - 사실 하용호님의 자료가 너무 쉽게 잘 정리되어있어서 참 좋았다. - 딥러닝은 코드를 짜면서도 구성도를 생각해야 해서, 정말 어려운듯 하다.
26.39939
138
0.653193
kor_Hang
0.864925
d0b0fc2d31ddddadbb402400bca1b9d5d0b96fac
1,523
md
Markdown
powerbi-docs/guided-learning/includes/1-5-cleaning-irregular-data.md
Wennyaa/powerbi-docs.zh-cn
c3e453f28ca699d4994a658e7d12ddc520d3a8b2
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/guided-learning/includes/1-5-cleaning-irregular-data.md
Wennyaa/powerbi-docs.zh-cn
c3e453f28ca699d4994a658e7d12ddc520d3a8b2
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/guided-learning/includes/1-5-cleaning-irregular-data.md
Wennyaa/powerbi-docs.zh-cn
c3e453f28ca699d4994a658e7d12ddc520d3a8b2
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- ms.openlocfilehash: fa6296485897b983c3ca4044ffa2875de3326dec ms.sourcegitcommit: 60dad5aa0d85db790553e537bf8ac34ee3289ba3 ms.translationtype: MT ms.contentlocale: zh-CN ms.lasthandoff: 05/29/2019 ms.locfileid: "61264362" --- Power BI 可以从几乎任何来源导入数据,其可视化效果和建模工具最适用于列式数据。 有时数据不采用简单列格式,这种情况常出现在 Excel 电子表格中,适合肉眼查看的表格布局不一定是自动查询的最优选择。 例如,以下电子表格具有跨多个列的标题。 ![](media/1-5-cleaning-irregular-data/1-5_1.png) 幸运的是,Power BI 中的工具能将多列表格快速转化为数据集供你使用。 ## <a name="transpose-data"></a>转置数据 例如,使用**查询编辑器**中的**转置**,你可以对数据进行翻转(即将列变为行,将行变为列),从而将数据分解为可操作的格式。 ![](media/1-5-cleaning-irregular-data/1-5_2.png) 进行数次转置后,如视频所述,表格将开始转换为 Power BI 更容易处理的格式。 ## <a name="format-data"></a>设置数据格式 你可能还需要设置数据格式,以便 Power BI 在导入数据后对其进行适当分类和标识。 通过几种转换(包括 *将行提升为标题* 以分解标题、使用 **填充** 将 *null* 值变为给定列中上方或下方行内找到的值,以及 **逆透视列** ),即可将数据清理为可在 Power BI 中使用的数据集。 ![](media/1-5-cleaning-irregular-data/1-5_3.png) 通过 Power BI,你可以在你的数据上对这些转换进行试验,确定可将数据转换为 Power BI 可处理的列格式的转换类型。 请记住,你采取的所有操作都记录在“查询编辑器”中的“应用的步骤”部分中,因此如果转换未达到预期,只需单击该步骤旁的 **x** 撤消操作即可。 ![](media/1-5-cleaning-irregular-data/1-5_5.png) ## <a name="create-visuals"></a>创建视觉对象 数据 Power BI 可用格式后,即可通过转换和清理数据开始创建视觉对象。 ![](media/1-5-cleaning-irregular-data/1-5_4.png) ## <a name="next-steps"></a>后续步骤 **祝贺你!** 你已经完成了本部分的 Power BI **引导学习**课程。 你现已了解如何 **将数据导入** Power BI Desktop,以及如何 *调整* 或 *转换* 这些数据,因此你可以创建具引人注目的视觉对象。 了解 Power BI 的工作原理以及如何使其 *为你* 服务的下一步,是了解 **建模** 包含的内容。 你已经了解,**数据集**是 Power BI 的基本构建块,但某些数据集可能比较复杂并基于众多不同的数据源。 有时,你需要为所创建数据集添加自己的特殊亮点(或 *字段* )。 在下一个部分中,你将了解如何**建模**以及更多内容。 不见不散!
33.844444
142
0.759028
yue_Hant
0.508509
d0b115e7aa399ab349095874bf60e493bd26bfea
5,103
md
Markdown
concepts/foreach-loops/about.md
HugoRoss/csharp
782086650efff991f61184f3c55bc173ad28955b
[ "MIT" ]
null
null
null
concepts/foreach-loops/about.md
HugoRoss/csharp
782086650efff991f61184f3c55bc173ad28955b
[ "MIT" ]
7
2021-06-28T18:07:33.000Z
2022-03-01T18:07:21.000Z
concepts/foreach-loops/about.md
HugoRoss/csharp
782086650efff991f61184f3c55bc173ad28955b
[ "MIT" ]
null
null
null
# About TODO: about.md files and links.json files are the same for arrays, for-loops and foreach. Consider how to prise these apart of otherwise treat these closely coupled concepts. Data structures that can hold zero or more elements are known as _collections_. An **array** is a collection that has a fixed size/length and whose elements must all be of the same type. Elements can be assigned to an array or retrieved from it using an index. C# arrays are zero-based, meaning that the first element's index is always zero: ```csharp // Declare array with explicit size (size is 2) int[] twoInts = new int[2]; // Assign second element by index twoInts[1] = 8; // Retrieve the second element by index twoInts[1] == 8; // => true // Check the length of the array twoInts.Length == 2; // => true ``` Arrays can also be defined using a shortcut notation that allows you to both create the array and set its value. As the compiler can now tell how many elements the array will have, the length can be omitted: ```csharp // Three equivalent ways to declare and initialize an array (size is 3) int[] threeIntsV1 = new int[] { 4, 9, 7 }; int[] threeIntsV2 = new[] { 4, 9, 7 }; int[] threeIntsV3 = { 4, 9, 7 }; ``` Arrays can be manipulated by either calling an array instance's [methods][array-methods] or [properties][array-properties], or by using the static methods defined in the [`Array` class][array-class]. An array is also a _collection_, which means that you can iterate over _all_ its values using a [`foreach` loop][foreach-statement]: ```csharp char[] vowels = new [] { 'a', 'e', 'i', 'o', 'u' }; foreach (char vowel in vowels) { // Output the vowel System.Console.Write(vowel); } // => aeiou ``` One could use a [`for` loop][for-statement] to iterate over an array: ```csharp char[] vowels = new [] { 'a', 'e', 'i', 'o', 'u' }; for (int i = 0; i < vowels.Length; i++) { // Output the vowel System.Console.Write(vowels[i]); } // => aeiou ``` However, generally a `foreach` loop is preferrable over a `for` loop for the following reasons: - A `foreach` loop is guaranteed to iterate over _all_ values. With a `for` loop, it is easy to miss elements, for example due to an off-by-one error. - A `foreach` loop is more _declarative_, your code is communicating _what_ you want it to do, instead of a `for` loop that communicates _how_ you want to do it. - A `foreach` loop is foolproof, whereas with `for` loops it is easy to have an off-by-one error. - A `foreach` loop works on all collection types, including those that don't support using an indexer to access elements. To guarantee that a `foreach` loop will iterate over _all_ values, the compiler will not allow updating of a collection within a `foreach` loop: ```csharp char[] vowels = new [] { 'a', 'e', 'i', 'o', 'u' }; foreach (char vowel in vowels) { // This would result in a compiler error // vowel = 'Y'; } ``` A `for` loop does have some advantages over a `foreach` loop: - You can start or stop at the index you want. - You can use any (boolean) termination condition you want. - You can skip elements by customizing the incrementing of the loop variable. - You can process collections from back to front by counting down. - You can use `for` loops in scenarios that don't involve collections. Related Topics: - You should be aware that C# supports [multi-dimensional arrays][multi-dimensional-arrays] like `int[,] arr = new int[10, 5]` which can be very useful. 
- You should also be aware that you can instantiate objects of type [`System.Array`][system-array-object] with `Array.CreateInstance`. Such objects are of little use - mainly for interop with VB.NET code. They are not interchangeable with standard arrays (`T[]`). They can have a non-zero lower bound. Both the above topics are discussed more fully in a later exercise. [implicitly-typed-arrays]: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/arrays/implicitly-typed-arrays [array-foreach]: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/arrays/using-foreach-with-arrays [single-dimensional-arrays]: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/arrays/single-dimensional-arrays [array-class]: https://docs.microsoft.com/en-us/dotnet/api/system.array?view=netcore-3.1 [array-properties]: https://docs.microsoft.com/en-us/dotnet/api/system.array?view=netcore-3.1#properties [array-methods]: https://docs.microsoft.com/en-us/dotnet/api/system.array?view=netcore-3.1#methods [foreach-statement]: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/foreach-in [for-statement]: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/for [break-keyword]: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/break [multi-dimensional-arrays]: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/arrays/multidimensional-arrays [system-array-object]: https://docs.microsoft.com/en-us/dotnet/api/system.array.createinstance?view=netcore-3.1#System_Array_CreateInstance_System_Type_System_Int32_
49.067308
341
0.739369
eng_Latn
0.984989
d0b13e06f43d47576d39f25ad968a992dd71e627
532
md
Markdown
README.md
jkilpatr/Kibana-Protector
8bb1d04c8c22b0137743fb9468293265b408b5a5
[ "Apache-2.0" ]
2
2016-12-08T21:38:26.000Z
2016-12-15T11:44:26.000Z
README.md
jkilpatr/Kibana-Protector
8bb1d04c8c22b0137743fb9468293265b408b5a5
[ "Apache-2.0" ]
null
null
null
README.md
jkilpatr/Kibana-Protector
8bb1d04c8c22b0137743fb9468293265b408b5a5
[ "Apache-2.0" ]
null
null
null
# Kibana-Protector

A simple Python proxy that protects Kibana instances facing the outside world from settings changes, while still allowing public viewing.

## Features

Provides a landing page where users complete a captcha (currently a Google captcha) and then proceed to Kibana. The proxy also blocks settings changes to Kibana, although this isn't well tested yet. The idea is a look-but-don't-touch feature set.

It really wouldn't be that difficult to expand this to provide a basic user system for Kibana, but that's not the goal right now.
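To illustrate the look-but-don't-touch idea, here is a minimal sketch of such a filtering proxy in Flask. It is not the project's actual implementation; the upstream URL, blocked paths, and captcha flag are assumptions for demonstration only.

```python
import requests
from flask import Flask, Response, request, session

app = Flask(__name__)
app.secret_key = "change-me"          # required for the session-based captcha flag
KIBANA_URL = "http://localhost:5601"  # assumed upstream Kibana instance

# Assumed examples of "settings" endpoints a read-only proxy would refuse.
BLOCKED_PREFIXES = ("/api/saved_objects", "/api/kibana/settings")


@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # 1. Require a completed captcha (the landing page would set this flag).
    if not session.get("captcha_ok"):
        return Response("Please complete the captcha first.", status=403)
    # 2. Look, don't touch: refuse anything that could change Kibana state.
    if request.method != "GET" or any(("/" + path).startswith(p) for p in BLOCKED_PREFIXES):
        return Response("This Kibana instance is read-only.", status=403)
    # 3. Forward the request to Kibana and relay the answer.
    upstream = requests.get(f"{KIBANA_URL}/{path}", params=request.args)
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))
```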
53.2
131
0.802632
eng_Latn
0.999706
d0b16881418419d8957f94cb4a0fa6f519692692
31
md
Markdown
storage/app/Pages/Posts/location-422.md
irumvanselme/rwanda_insiders
37ee123b9e8264ee4789478edd898e97d353cbdb
[ "MIT" ]
null
null
null
storage/app/Pages/Posts/location-422.md
irumvanselme/rwanda_insiders
37ee123b9e8264ee4789478edd898e97d353cbdb
[ "MIT" ]
null
null
null
storage/app/Pages/Posts/location-422.md
irumvanselme/rwanda_insiders
37ee123b9e8264ee4789478edd898e97d353cbdb
[ "MIT" ]
null
null
null
### Welcome to SECTOR Nyarubuye
31
31
0.774194
kor_Hang
0.961321
d0b1cdc8a4e554f940452bc8b8d99d294a978b3d
9,479
md
Markdown
articles/dns/dns-operations-dnszones.md
macdrai/azure-docs.fr-fr
59bc35684beaba04a4f4c09a745393e1d91428db
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/dns/dns-operations-dnszones.md
macdrai/azure-docs.fr-fr
59bc35684beaba04a4f4c09a745393e1d91428db
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/dns/dns-operations-dnszones.md
macdrai/azure-docs.fr-fr
59bc35684beaba04a4f4c09a745393e1d91428db
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Manage DNS zones in Azure DNS - PowerShell | Microsoft Docs
description: You can manage DNS zones by using Azure PowerShell. This article describes how to update, delete, and create DNS zones in Azure DNS.
services: dns
documentationcenter: na
author: rohinkoul
ms.service: dns
ms.devlang: na
ms.topic: how-to
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 03/19/2018
ms.author: rohink
ms.custom: devx-track-azurepowershell
ms.openlocfilehash: 6abcca9d9888dc8968d7233e7aee6cd76aa215f7
ms.sourcegitcommit: cd9754373576d6767c06baccfd500ae88ea733e4
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/20/2020
ms.locfileid: "94965746"
---
# <a name="how-to-manage-dns-zones-using-powershell"></a>How to manage DNS zones using PowerShell

> [!div class="op_single_selector"]
> * [Portal](dns-operations-dnszones-portal.md)
> * [PowerShell](dns-operations-dnszones.md)
> * [Azure Classic CLI](./dns-operations-dnszones-cli.md)
> * [Azure CLI](dns-operations-dnszones-cli.md)

This article shows you how to manage your DNS zones by using Azure PowerShell. You can also manage your DNS zones by using the cross-platform [Azure CLI](dns-operations-dnszones-cli.md) or the Azure portal.

This guide deals specifically with public DNS zones. For information about using Azure PowerShell to manage private zones in Azure DNS, see [Get started with Azure DNS private zones using Azure PowerShell](private-dns-getstarted-powershell.md).

[!INCLUDE [dns-create-zone-about](../../includes/dns-create-zone-about-include.md)]

[!INCLUDE [dns-powershell-setup](../../includes/dns-powershell-setup-include.md)]

## <a name="create-a-dns-zone"></a>Create a DNS zone

A DNS zone is created by using the `New-AzureRmDnsZone` cmdlet. The following example creates a DNS zone called *contoso.com* in the resource group *MyResourceGroup*:

```powershell
New-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup
```

The following example shows how to create a DNS zone with two [Azure Resource Manager tags](dns-zones-records.md#tags), *project = demo* and *env = test*:

```powershell
New-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup -Tag @{ project="demo"; env="test" }
```

Azure DNS also supports private DNS zones. To learn more about private DNS zones, see [Using Azure DNS for private domains](private-dns-overview.md). For an example of how to create a private DNS zone, see [Get started with Azure DNS private zones using PowerShell](./private-dns-getstarted-powershell.md).

## <a name="get-a-dns-zone"></a>Get a DNS zone

To retrieve a DNS zone, use the `Get-AzureRmDnsZone` cmdlet. This operation returns a DNS zone object corresponding to an existing zone in Azure DNS. The object contains data about the zone (such as the number of record sets), but it does not contain the record sets themselves (see `Get-AzureRmDnsRecordSet`).

```powershell
Get-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup

Name                  : contoso.com
ResourceGroupName     : myresourcegroup
Etag                  : 00000003-0000-0000-8ec2-f4879750d201
Tags                  : {project, env}
NameServers           : {ns1-01.azure-dns.com., ns2-01.azure-dns.net., ns3-01.azure-dns.org., ns4-01.azure-dns.info.}
NumberOfRecordSets    : 2
MaxNumberOfRecordSets : 5000
```

## <a name="list-dns-zones"></a>List DNS zones

By omitting the zone name from `Get-AzureRmDnsZone`, you can enumerate all zones in a resource group. This operation returns an array of zone objects.

```powershell
$zoneList = Get-AzureRmDnsZone -ResourceGroupName MyAzureResourceGroup
```

By omitting both the zone name and the resource group name from `Get-AzureRmDnsZone`, you can enumerate all zones in the Azure subscription.

```powershell
$zoneList = Get-AzureRmDnsZone
```

## <a name="update-a-dns-zone"></a>Update a DNS zone

Changes to a DNS zone resource can be made by using `Set-AzureRmDnsZone`. This cmdlet does not update the DNS record sets in the zone (see [How to manage DNS records](dns-operations-recordsets.md)). It is used only to update properties of the zone resource itself. The writable zone properties are currently limited to the [Azure Resource Manager 'tags' for the zone resource](dns-zones-records.md#tags).

Use one of the following options to update the DNS zone:

### <a name="specify-the-zone-using-the-zone-name-and-resource-group"></a>Specify the zone using the zone name and resource group

This approach replaces the existing zone tags with the values specified.

```powershell
Set-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup -Tag @{ project="demo"; env="test" }
```

### <a name="specify-the-zone-using-a-zone-object"></a>Specify the zone using a $zone object

This approach retrieves the existing zone object, modifies the tags, and then commits the changes. In this way, existing tags can be preserved.

```powershell
# Get the zone object
$zone = Get-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup

# Remove an existing tag
$zone.Tags.Remove("project")

# Add a new tag
$zone.Tags.Add("status","approved")

# Commit changes
Set-AzureRmDnsZone -Zone $zone
```

When using `Set-AzureRmDnsZone` with a $zone object, [Etag checks](dns-zones-records.md#etags) are used to ensure that concurrent changes are not overwritten. The optional `-Overwrite` switch can be used to suppress these checks.

## <a name="delete-a-dns-zone"></a>Delete a DNS zone

DNS zones can be deleted by using the `Remove-AzureRmDnsZone` cmdlet.

> [!NOTE]
> Deleting a DNS zone also deletes all DNS records within the zone. This operation cannot be undone. If the DNS zone is in use, services that use the zone will fail when the zone is deleted.
>
> To protect against accidental zone deletion, see [How to protect DNS zones and records](dns-protect-zones-recordsets.md).

Use one of the following options to delete a DNS zone:

### <a name="specify-the-zone-using-the-zone-name-and-resource-group-name"></a>Specify the zone using the zone name and resource group name

```powershell
Remove-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup
```

### <a name="specify-the-zone-using-a-zone-object"></a>Specify the zone using a $zone object

You can specify the zone to be deleted by using a `$zone` object returned by `Get-AzureRmDnsZone`.

```powershell
$zone = Get-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup
Remove-AzureRmDnsZone -Zone $zone
```

The zone object can also be piped instead of being passed as a parameter:

```powershell
Get-AzureRmDnsZone -Name contoso.com -ResourceGroupName MyAzureResourceGroup | Remove-AzureRmDnsZone
```

As with `Set-AzureRmDnsZone`, specifying the zone by using a `$zone` object enables Etag checks, which ensure that concurrent changes are not discarded. Use the `-Overwrite` switch to suppress these checks.

## <a name="confirmation-prompts"></a>Confirmation prompts

The `New-AzureRmDnsZone`, `Set-AzureRmDnsZone`, and `Remove-AzureRmDnsZone` cmdlets all support confirmation prompts.

Both `New-AzureRmDnsZone` and `Set-AzureRmDnsZone` prompt for confirmation if the `$ConfirmPreference` PowerShell preference variable has a value of `Medium` or lower. Because deleting a DNS zone can have a high impact, the `Remove-AzureRmDnsZone` cmdlet prompts for confirmation if the `$ConfirmPreference` PowerShell variable has any value other than `None`.

Since the default value of `$ConfirmPreference` is `High`, only `Remove-AzureRmDnsZone` prompts for confirmation by default.

You can override the current `$ConfirmPreference` setting by using the `-Confirm` parameter. If you specify `-Confirm` or `-Confirm:$True`, the cmdlet prompts you for confirmation before it runs. If you specify `-Confirm:$False`, the cmdlet does not prompt you.

For more information about `-Confirm` and `$ConfirmPreference`, see [About Preference Variables](/powershell/module/microsoft.powershell.core/about/about_preference_variables).

## <a name="next-steps"></a>Next steps

Learn how to [manage record sets and records](dns-operations-recordsets.md) in your DNS zone.
<br>
Learn how to [delegate your domain to Azure DNS](dns-domain-delegation.md).
<br>
Review the [Azure DNS PowerShell reference documentation](/powershell/module/azurerm.dns).
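To tie the commands above together, here is a small, hedged sketch (not part of the original article) that enumerates the zones in the current subscription and filters them by the *env = test* tag used in the creation example. It uses only the cmdlets shown above plus standard PowerShell (`Where-Object`, `Format-Table`); the tag name and value are illustrative assumptions.

```powershell
# List every zone in the subscription and keep only those tagged env = "test".
$testZones = Get-AzureRmDnsZone | Where-Object { $_.Tags -and $_.Tags["env"] -eq "test" }

# Review the candidates (name, resource group, record count) before acting on them,
# for example before a bulk Remove-AzureRmDnsZone.
$testZones | Format-Table Name, ResourceGroupName, NumberOfRecordSets
```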
53.857955
523
0.774871
fra_Latn
0.92054
d0b21138a8e9b99950ae51551fa848f20e752ed5
16,937
md
Markdown
articles/active-directory/fundamentals/users-default-permissions.md
changeworld/azure-docs.cs-cz
cbff9869fbcda283f69d4909754309e49c409f7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/fundamentals/users-default-permissions.md
changeworld/azure-docs.cs-cz
cbff9869fbcda283f69d4909754309e49c409f7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/fundamentals/users-default-permissions.md
changeworld/azure-docs.cs-cz
cbff9869fbcda283f69d4909754309e49c409f7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Default user permissions - Azure Active Directory | Microsoft Docs
description: Learn about the different user permissions available in Azure Active Directory.
services: active-directory
author: msaburnley
manager: daveba
ms.service: active-directory
ms.subservice: fundamentals
ms.workload: identity
ms.topic: conceptual
ms.date: 02/16/2019
ms.author: ajburnle
ms.reviewer: vincesm
ms.custom: it-pro, seodec18
ms.collection: M365-identity-device-management
ms.openlocfilehash: 227230f2d6f46fae27e2cec69d99390f5054c7db
ms.sourcegitcommit: 07d62796de0d1f9c0fa14bfcc425f852fdb08fb1
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/27/2020
ms.locfileid: "80366252"
---
# <a name="what-are-the-default-user-permissions-in-azure-active-directory"></a>What are the default user permissions in Azure Active Directory?

In Azure Active Directory (Azure AD), all users are granted a set of default permissions. A user's access consists of the type of user, their [role assignments](active-directory-users-assign-role-azure-portal.md), and their ownership of individual objects. This article describes those default permissions and compares the member and guest user defaults. The default user permissions can be changed only in the user settings in Azure AD.

## <a name="member-and-guest-users"></a>Member and guest users

The set of default permissions received depends on whether the user is a native member of the tenant (member user) or whether the user is brought over from another directory as a B2B collaboration guest (guest user). For more information about adding guest users, see [What is Azure AD B2B collaboration?](../b2b/what-is-b2b.md)

* Member users can register applications, manage their own profile photo and mobile phone number, change their own password, and invite B2B guests. In addition, member users can read all directory information (with a few exceptions).
* Guest users have restricted directory permissions. For example, guest users cannot browse information from the tenant beyond their own profile information. However, a guest user can retrieve information about another user by providing the user principal name (UPN) or objectId. A guest user can read properties of the groups they belong to, including group membership, regardless of the **Guest users permissions are limited** setting. A guest cannot view information about any other tenant objects.

Guest default permissions are restrictive by default. Guests can be added to administrator roles, which grant them the full read and write permissions contained in the role. There is one more restriction: the ability of guests to invite other guests. Setting **Guests can invite** to **No** prevents guests from inviting other guests. For how to do this, see [Delegate invitations for B2B collaboration](../b2b/delegate-invitations.md).

To grant guest users the same default permissions as member users, set the **Guest users permissions are limited** option to **No**. This setting grants guest users all of the default member user permissions, and it also allows guests to be added to administrative roles.

## <a name="compare-member-and-guest-default-permissions"></a>Compare member and guest default permissions

**Area** | **Member user permissions** | **Guest user permissions**
------------ | --------- | ----------
Users and contacts | Read all public properties of users and contacts<br>Invite guests<br>Change own password<br>Manage own mobile phone number<br>Manage own photo<br>Invalidate own refresh tokens | Read own properties<br>Read display name, email, sign-in name, photo, user principal name, and user type properties of other users and contacts<br>Change own password
Groups | Create security groups<br>Create Office 365 groups<br>Read all properties of groups<br>Read non-hidden group memberships<br>Read hidden Office 365 group memberships for joined groups<br>Manage properties, ownership, and membership of groups the user owns<br>Add guests to owned groups<br>Manage dynamic membership settings<br>Delete owned groups<br>Restore owned Office 365 groups | Read all properties of groups<br>Read non-hidden group memberships<br>Read hidden Office 365 group memberships for joined groups<br>Manage owned groups<br>Add guests to owned groups (if allowed)<br>Delete owned groups<br>Restore owned Office 365 groups<br>Read properties of groups they belong to, including membership
Applications | Register (create) new applications<br>Read properties of registered and enterprise applications<br>Manage application properties, assignments, and credentials for owned applications<br>Create or delete an application password for a user<br>Delete owned applications<br>Restore owned applications | Read properties of registered and enterprise applications<br>Manage application properties, assignments, and credentials for owned applications<br>Delete owned applications<br>Restore owned applications
Devices | Read all properties of devices<br>Manage all properties of owned devices | No permissions<br>Delete owned devices
Directory | Read all company information<br>Read all domains<br>Read all partner contracts | Read display name and verified domains
Roles and scopes | Read all administrative roles and memberships<br>Read all properties and membership of administrative units | No permissions
Subscriptions | Read all subscriptions<br>Enable service plan member | No permissions
Policies | Read all properties of policies<br>Manage all properties of owned policies | No permissions

## <a name="to-restrict-the-default-permissions-for-member-users"></a>To restrict the default permissions for member users

The default permissions for member users can be restricted in the following ways.

Permission | Setting explanation
---------- | ------------
Users can register applications | Setting this option to No prevents users from creating application registrations. The ability can then be granted back to specific individuals by adding them to the Application Developer role.
Allow users to connect a work or school account with LinkedIn | Setting this option to No prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](https://docs.microsoft.com/azure/active-directory/users-groups-roles/linkedin-user-consent).
Ability to create security groups | Setting this option to No prevents users from creating security groups. Global administrators and user administrators can still create security groups. For how to do this, see [Azure Active Directory cmdlets for configuring group settings](../users-groups-roles/groups-settings-cmdlets.md).
Ability to create Office 365 groups | Setting this option to No prevents users from creating Office 365 groups. Setting this option to Some allows a selected set of users to create Office 365 groups. Global administrators and user administrators can still create Office 365 groups. For how to do this, see [Azure Active Directory cmdlets for configuring group settings](../users-groups-roles/groups-settings-cmdlets.md).
Restrict access to the Azure AD administration portal | Setting this option to No lets non-administrators use the Azure AD administration portal to read and manage Azure AD resources. Setting it to Yes restricts all non-administrators from accessing any Azure AD data in the administration portal. Important: this setting does not restrict access to Azure AD data through PowerShell or other clients such as Visual Studio. When it is set to Yes, you can grant a specific non-admin user the ability to use the Azure AD administration portal by assigning any administrative role, such as the Directory Readers role. This role allows reading the basic directory information that member users have by default (guests and service principals do not).
Ability to read other users | This setting is available only in PowerShell. Setting this flag to $false prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services such as Exchange Online. This setting is meant for special circumstances, and setting this flag to $false is not recommended.

## <a name="object-ownership"></a>Object ownership

### <a name="application-registration-owner-permissions"></a>Application registration owner permissions

When a user registers an application, they are automatically added as an owner for the application. As an owner, they can manage the application's metadata, such as its name and the permissions the app requests. They can also manage the tenant-specific configuration of the application, such as the single sign-on configuration and user assignments. An owner can also add or remove other owners. Unlike global administrators, owners can manage only the applications that they own.

### <a name="enterprise-application-owner-permissions"></a>Enterprise application owner permissions

When a user adds a new enterprise application, they are automatically added as an owner. As an owner, they can manage the tenant-specific configuration of the application, such as the single sign-on configuration, provisioning, and user assignments. An owner can also add or remove other owners. Unlike global administrators, owners can manage only the applications that they own.

### <a name="group-owner-permissions"></a>Group owner permissions

When a user creates a group, they are automatically added as an owner for that group. As an owner, they can manage properties of the group, such as its name, as well as manage group membership. An owner can also add or remove other owners. Unlike global administrators and user administrators, owners can manage only the groups that they own.

To assign a group owner, see [Managing owners of a group](active-directory-accessmanagement-managing-group-owners.md).

### <a name="ownership-permissions"></a>Ownership permissions

The following tables describe the specific permissions that member users have over owned objects in Azure Active Directory. Users have these permissions only on objects that they own.

#### <a name="owned-application-registrations"></a>Owned application registrations

Users can perform the following actions on owned application registrations.

| **Action** | **Description** |
| --- | --- |
| microsoft.directory/applications/audience/update | Update the applications.audience property in Azure Active Directory. |
| microsoft.directory/applications/authentication/update | Update the applications.authentication property in Azure Active Directory. |
| microsoft.directory/applications/basic/update | Update basic properties on applications in Azure Active Directory. |
| microsoft.directory/applications/credentials/update | Update the applications.credentials property in Azure Active Directory. |
| microsoft.directory/applications/delete | Delete applications in Azure Active Directory. |
| microsoft.directory/applications/owners/update | Update the applications.owners property in Azure Active Directory. |
| microsoft.directory/applications/permissions/update | Update the applications.permissions property in Azure Active Directory. |
| microsoft.directory/applications/policies/update | Update the applications.policies property in Azure Active Directory. |
| microsoft.directory/applications/restore | Restore applications in Azure Active Directory. |

#### <a name="owned-enterprise-applications"></a>Owned enterprise applications

Users can perform the following actions on owned enterprise applications. An enterprise application consists of a service principal, one or more application policies, and sometimes an application object in the same tenant as the service principal.

| **Action** | **Description** |
| --- | --- |
| microsoft.directory/auditLogs/allProperties/read | Read all properties (including privileged properties) on auditLogs in Azure Active Directory. |
| microsoft.directory/policies/basic/update | Update basic properties on policies in Azure Active Directory. |
| microsoft.directory/policies/delete | Delete policies in Azure Active Directory. |
| microsoft.directory/policies/owners/update | Update the policies.owners property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update the servicePrincipals.appRoleAssignedTo property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/appRoleAssignments/update | Update the users.appRoleAssignments property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/audience/update | Update the servicePrincipals.audience property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/authentication/update | Update the servicePrincipals.authentication property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/basic/update | Update basic properties on servicePrincipals in Azure Active Directory. |
| microsoft.directory/servicePrincipals/credentials/update | Update the servicePrincipals.credentials property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/delete | Delete servicePrincipals in Azure Active Directory. |
| microsoft.directory/servicePrincipals/owners/update | Update the servicePrincipals.owners property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/permissions/update | Update the servicePrincipals.permissions property in Azure Active Directory. |
| microsoft.directory/servicePrincipals/policies/update | Update the servicePrincipals.policies property in Azure Active Directory. |
| microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on signInReports in Azure Active Directory. |

#### <a name="owned-devices"></a>Owned devices

Users can perform the following actions on owned devices.

| **Action** | **Description** |
| --- | --- |
| microsoft.directory/devices/bitLockerRecoveryKeys/read | Read the devices.bitLockerRecoveryKeys property in Azure Active Directory. |
| microsoft.directory/devices/disable | Disable devices in Azure Active Directory. |

#### <a name="owned-groups"></a>Owned groups

Users can perform the following actions on owned groups.

| **Action** | **Description** |
| --- | --- |
| microsoft.directory/groups/appRoleAssignments/update | Update the groups.appRoleAssignments property in Azure Active Directory. |
| microsoft.directory/groups/basic/update | Update basic properties on groups in Azure Active Directory. |
| microsoft.directory/groups/delete | Delete groups in Azure Active Directory. |
| microsoft.directory/groups/dynamicMembershipRule/update | Update the groups.dynamicMembershipRule property in Azure Active Directory. |
| microsoft.directory/groups/members/update | Update the groups.members property in Azure Active Directory. |
| microsoft.directory/groups/owners/update | Update the groups.owners property in Azure Active Directory. |
| microsoft.directory/groups/restore | Restore groups in Azure Active Directory. |
| microsoft.directory/groups/settings/update | Update the groups.settings property in Azure Active Directory. |

## <a name="next-steps"></a>Next steps

* To learn how to assign Azure AD administrator roles, see [Assign a user to administrator roles in Azure Active Directory](active-directory-users-assign-role-azure-portal.md)
* To learn more about how resource access is controlled in Microsoft Azure, see [Understanding resource access in Azure](../../role-based-access-control/rbac-and-directory-admin-roles.md)
* For more information about how Azure Active Directory relates to your Azure subscription, see [How an Azure subscription is associated with Azure Active Directory](active-directory-how-subscriptions-associated-directory.md)
* [Manage users](add-users-azure-active-directory.md)
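The "Ability to read other users" restriction mentioned above is exposed only through PowerShell. A minimal, hedged sketch follows; the cmdlet and parameter names are assumed from the classic MSOnline module rather than taken from this article, so verify them against the module you actually use.

```powershell
# Hedged sketch: prevent non-admin users from reading other users' directory information.
# Assumes the MSOnline module is installed and the signed-in account is a Global Administrator.
Connect-MsolService

# $false hides other users from non-admins; $true restores the default behavior.
Set-MsolCompanySettings -UsersPermissionToReadOtherUsersEnabled $false

# Check the current value of the flag.
(Get-MsolCompanyInformation).UsersPermissionToReadOtherUsersEnabled
```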
123.627737
837
0.821338
ces_Latn
0.999898
d0b23150fe1bc2eccc8133e88bb6c459f3e136a7
742
md
Markdown
ESXi-EXOS-VM-Requirements.md
extremenetworks/Virtual-EXOS
67b1ae63245a03489e37f55a8f62d1f555a09403
[ "Unlicense" ]
null
null
null
ESXi-EXOS-VM-Requirements.md
extremenetworks/Virtual-EXOS
67b1ae63245a03489e37f55a8f62d1f555a09403
[ "Unlicense" ]
null
null
null
ESXi-EXOS-VM-Requirements.md
extremenetworks/Virtual-EXOS
67b1ae63245a03489e37f55a8f62d1f555a09403
[ "Unlicense" ]
null
null
null
# ESXi requirements for EXOS-VM

When building an EXOS-VM in ESXi, use the ESXi VM settings below to build the VM. Once the VM is built, attach the CD-ROM with the EXOS-VM ISO and boot the VM to install the image. Once installed, remove the ISO from the VM.

The first NIC is the management (Mgmt) port, and the other ports (up to 13 more) are data ports.

**Note: EXOS-VMs can get stuck in (pending AAA) for 1-2 minutes. Be patient, it will finish and allow you to log in.**

### ESXi EXOS-VM settings

* Guest OS: Other Linux 64-bit
* Virtual Sockets: 1
* Cores per Socket: 2
* RAM: 512 MB
* Hard drive space: 4 GB
* Hard Disk:
  * SCSI - LSI Logic Parallel
  * Thick Provision
  * IDE(0:0) - required
  * Mode: Dependent
* CD-ROM: for the EXOS-VM ISO
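For repeatable builds, the same settings can be scripted. The following is a hedged sketch using VMware PowerCLI; PowerCLI itself, plus the host, datastore, port-group, and ISO path names, are assumptions and not part of the original document.

```powershell
# Create the EXOS-VM shell with the settings listed above (names are hypothetical placeholders).
$esxiHost  = "esxi01.example.local"
$datastore = "datastore1"
$mgmtNet   = "Mgmt Network"

Connect-VIServer -Server $esxiHost

New-VM -Name "EXOS-VM" -VMHost $esxiHost -Datastore $datastore `
       -NumCpu 2 -MemoryMB 512 -DiskGB 4 -DiskStorageFormat Thick `
       -GuestId otherLinux64Guest -NetworkName $mgmtNet -CD

# Attach the EXOS-VM ISO, then power on the VM to install the image.
# (Cores-per-socket and the IDE placement of the CD-ROM may still need a check in the VM settings.)
Get-CDDrive -VM "EXOS-VM" |
    Set-CDDrive -IsoPath "[$datastore] iso/EXOS-VM.iso" -StartConnected:$true -Confirm:$false
```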
39.052632
308
0.715633
eng_Latn
0.97709
d0b25d634389c643ba60be18c88d80ac44772d92
715
md
Markdown
pages/common/date.md
vladimyr/tldr
803a51e783ac42fdfbeb5d939f51cee83153a929
[ "CC-BY-4.0" ]
null
null
null
pages/common/date.md
vladimyr/tldr
803a51e783ac42fdfbeb5d939f51cee83153a929
[ "CC-BY-4.0" ]
null
null
null
pages/common/date.md
vladimyr/tldr
803a51e783ac42fdfbeb5d939f51cee83153a929
[ "CC-BY-4.0" ]
null
null
null
# date

> Set or display the system date.
> More information: <https://www.gnu.org/software/coreutils/manual/html_node/date-invocation.html>.

- Display the current date using the default locale's format:

`date +"%c"`

- Display the current date in UTC and ISO 8601 format:

`date -u +"%Y-%m-%dT%H:%M:%S%Z"`

- Display the current date as a Unix timestamp (seconds since the Unix epoch):

`date +%s`

- Display a specific date (represented as a Unix timestamp) using the default format:

`date -d @1473305798`

- Convert a specific date to the Unix timestamp format:

`date -d "{{2018-09-01 00:00}}" +%s --utc`

- Display the current date using the RFC-3339 format (`YYYY-MM-DD hh:mm:ss TZ`):

`date --rfc-3339=s`
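A couple of combined variations that follow from the examples above; GNU `date` is assumed (as elsewhere on this page) and the file name is purely illustrative:

```sh
# Round-trip the current time through a Unix timestamp (GNU date assumed)
ts=$(date +%s)
date -d "@$ts" +"%Y-%m-%dT%H:%M:%S"

# Stamp a copy of a file with today's date (file name is illustrative)
cp notes.txt "notes-$(date +%Y-%m-%d).txt"
```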
24.655172
99
0.700699
eng_Latn
0.831394
d0b2a8f32f8ddd224d9bb4c9c4d204d5a84ebe0f
6,780
md
Markdown
docs/vs-2015/extensibility/how-to-implement-undo-management.md
1DanielaBlanco/visualstudio-docs.es-es
9e934cd5752dc7df6f5e93744805e3c600c87ff0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/how-to-implement-undo-management.md
1DanielaBlanco/visualstudio-docs.es-es
9e934cd5752dc7df6f5e93744805e3c600c87ff0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/how-to-implement-undo-management.md
1DanielaBlanco/visualstudio-docs.es-es
9e934cd5752dc7df6f5e93744805e3c600c87ff0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'How to: Implement undo management | Microsoft Docs'
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- editors [Visual Studio SDK], legacy
- undo management
ms.assetid: 1942245d-7a1d-4a11-b5e7-a3fe29f11c0b
caps.latest.revision: 12
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: f7eb3e3a1bbda905b2f5c5819835b10513d444fb
ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 11/16/2018
ms.locfileid: "51806105"
---
# <a name="how-to-implement-undo-management"></a>How to: Implement undo management
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]

The primary interface used for undo management is <xref:Microsoft.VisualStudio.OLE.Interop.IOleUndoManager>, which is implemented by the environment. To support undo management, you implement independent undo units (that is, <xref:Microsoft.VisualStudio.OLE.Interop.IOleUndoUnit>), each of which can contain several individual steps.

How you implement undo management depends on whether or not your editor supports multiple views. The procedures for each implementation are detailed in the following sections.

## <a name="cases-where-an-editor-supports-a-single-view"></a>Cases where an editor supports a single view

In this scenario, the editor does not support multiple views. There is only one editor and one document, and they support undo. Use the following procedure to implement undo management.

#### <a name="to-support-undo-management-for-a-single-view-editor"></a>To support undo management for a single-view editor

1. From the document view object, call `QueryInterface` on the `IServiceProvider` interface on the window frame for `IOleUndoManager` (`IID_IOLEUndoManager`) to get access to the undo manager.
2. When a view is sited in a window frame, it gets a site pointer, which you can use to call `QueryInterface` for `IServiceProvider`.

## <a name="cases-where-an-editor-supports-multiple-views"></a>Cases where an editor supports multiple views

If you have document/view separation, there is normally one undo manager associated with the document itself. All undo units are placed in the undo manager associated with the document data object. Rather than having each view query for the undo manager (of which there is one per view), the document data object calls <xref:Microsoft.VisualStudio.Shell.Interop.ILocalRegistry2.CreateInstance%2A> to create an instance of the undo manager, specifying a class ID of CLSID_OLEUndoManager. The class ID is defined in the file OCUNDOID.h.

When you use <xref:Microsoft.VisualStudio.Shell.Interop.ILocalRegistry2.CreateInstance%2A> to create your own instance of the undo manager, use the following procedure to hook the undo manager into the environment.

#### <a name="to-hook-your-undo-manager-into-the-environment"></a>To hook your undo manager into the environment

1. Call `QueryInterface` on the object returned from <xref:Microsoft.VisualStudio.Shell.Interop.ILocalRegistry2> for `IID_IOleUndoManager`. Store the pointer to <xref:Microsoft.VisualStudio.OLE.Interop.IOleUndoManager>.
2. Call `QueryInterface` on `IOleUndoManager` for `IID_IOleCommandTarget`. Store the pointer to <xref:Microsoft.VisualStudio.OLE.Interop.IOleCommandTarget>.
3. Relay your <xref:Microsoft.VisualStudio.OLE.Interop.IOleCommandTarget.QueryStatus%2A> and <xref:Microsoft.VisualStudio.OLE.Interop.IOleCommandTarget.Exec%2A> calls to the stored `IOleCommandTarget` interface for the following StandardCommandSet97 commands:

   - cmdidUndo
   - cmdidMultiLevelUndo
   - cmdidRedo
   - cmdidMultiLevelRedo
   - cmdidMultiLevelUndoList
   - cmdidMultiLevelRedoList

4. Call `QueryInterface` on `IOleUndoManager` for `IID_IVsChangeTrackingUndoManager`. Store the pointer to <xref:Microsoft.VisualStudio.TextManager.Interop.IVsChangeTrackingUndoManager>.

   Use the pointer to <xref:Microsoft.VisualStudio.TextManager.Interop.IVsChangeTrackingUndoManager> to call the <xref:Microsoft.VisualStudio.TextManager.Interop.IVsChangeTrackingUndoManager.MarkCleanState%2A>, <xref:Microsoft.VisualStudio.TextManager.Interop.IVsChangeTrackingUndoManager.AdviseTrackingClient%2A>, and <xref:Microsoft.VisualStudio.TextManager.Interop.IVsChangeTrackingUndoManager.UnadviseTrackingClient%2A> methods.

5. Call `QueryInterface` on `IOleUndoManager` for `IID_IVsLinkCapableUndoManager`.
6. Call <xref:Microsoft.VisualStudio.TextManager.Interop.IVsLinkCapableUndoManager.AdviseLinkedUndoClient%2A> with your document, which should implement the <xref:Microsoft.VisualStudio.TextManager.Interop.IVsLinkedUndoClient> interface. When the document is closed, call `IVsLinkCapableUndoManager::UnadviseLinkedUndoClient`.
7. When the document is closed, call `QueryInterface` on the undo manager for `IID_IVsLifetimeControlledObject`.
8. Call <xref:Microsoft.VisualStudio.TextManager.Interop.IVsLifetimeControlledObject.SeverReferencesToOwner%2A>.
9. When changes are made to the document, call <xref:Microsoft.VisualStudio.OLE.Interop.IOleUndoManager.Add%2A> on the manager with an `OleUndoUnit` class. The <xref:Microsoft.VisualStudio.OLE.Interop.IOleUndoManager.Add%2A> method holds a reference to the object, so you typically release it immediately after the <xref:Microsoft.VisualStudio.OLE.Interop.IOleUndoManager.Add%2A> call.

The `OleUndoManager` class represents a single undo stack instance. Therefore, there is one undo manager object for each data entity that is tracked for undo or redo.

> [!NOTE]
> Although the undo manager object is used by the text editor, it is a general component that has no text-editor-specific support built in. If you want to support multiple levels of undo or redo, you can use this object to do so.

## <a name="see-also"></a>See also

<xref:Microsoft.VisualStudio.TextManager.Interop.IVsChangeTrackingUndoManager>
<xref:Microsoft.VisualStudio.TextManager.Interop.IVsLifetimeControlledObject>
[How to: Clear the undo stack](../extensibility/how-to-clear-the-undo-stack.md)
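Step 9 above is where undo actually gets recorded. As a closing illustration, here is a minimal, hedged C++ sketch of an undo unit and of handing it to an already-obtained `IOleUndoManager`. The class, the "Sample edit" description, and the `RecordEdit` helper are illustrative assumptions rather than part of the Visual Studio SDK, and error handling is omitted.

```cpp
// Minimal sketch of a custom undo unit (assumptions noted in the lead-in above).
#include <windows.h>
#include <ocidl.h>     // IOleUndoUnit, IOleUndoManager
#include <cguid.h>     // CLSID_NULL
#include <atlbase.h>   // CComPtr

class CMyUndoUnit : public IOleUndoUnit
{
    LONG m_cRef = 1;
public:
    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_IOleUndoUnit)
        {
            *ppv = static_cast<IOleUndoUnit *>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_cRef); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG count = InterlockedDecrement(&m_cRef);
        if (count == 0) delete this;
        return count;
    }

    // IOleUndoUnit
    STDMETHODIMP Do(IOleUndoManager *pUndoManager)
    {
        // Revert the document change this unit represents. Adding a matching
        // "redo" unit to pUndoManager here is what makes redo work.
        return S_OK;
    }
    STDMETHODIMP GetDescription(BSTR *pBstr)
    {
        *pBstr = SysAllocString(L"Sample edit");   // shown in the Undo drop-down
        return *pBstr ? S_OK : E_OUTOFMEMORY;
    }
    STDMETHODIMP GetUnitType(CLSID *pClsid, LONG *plID)
    {
        *pClsid = CLSID_NULL;   // a sketch; real units may report their own type
        *plID = 0;
        return S_OK;
    }
    STDMETHODIMP OnNextAdd() { return S_OK; }
};

// After the document is modified, hand a unit to the undo manager (step 9 above).
void RecordEdit(IOleUndoManager *pUndoManager)
{
    CComPtr<IOleUndoUnit> unit;
    unit.Attach(new CMyUndoUnit());
    pUndoManager->Add(unit);   // the manager keeps its own reference
}
```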
73.695652
443
0.798673
spa_Latn
0.913847
d0b336a623e392ee1ad0dc721a22098ce0f09be1
11,160
md
Markdown
docs/ssma/mysql/connecting-to-sql-server-mysqltosql.md
william-keller/sql-docs.pt-br
e5218aef85d1f8080eddaadecbb11de1e664c541
[ "CC-BY-4.0", "MIT" ]
1
2021-09-05T16:06:11.000Z
2021-09-05T16:06:11.000Z
docs/ssma/mysql/connecting-to-sql-server-mysqltosql.md
william-keller/sql-docs.pt-br
e5218aef85d1f8080eddaadecbb11de1e664c541
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssma/mysql/connecting-to-sql-server-mysqltosql.md
william-keller/sql-docs.pt-br
e5218aef85d1f8080eddaadecbb11de1e664c541
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Connecting to SQL Server (MySQLToSQL) | Microsoft Docs
description: Learn how to connect to a target instance of SQL Server to migrate MySQL databases. SSMA obtains metadata about databases in SQL Server.
ms.prod: sql
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.technology: ssma
ms.topic: conceptual
helpviewer_keywords:
- connecting to SQL Server 2008, SQL Server permission
- connecting to SQL Server 2008, synchronization
ms.assetid: 08233267-693e-46e6-9ca3-3a3dfd3d2be7
author: nahk-ivanov
ms.author: alexiva
ms.openlocfilehash: 548433b02590ccacf164e9479690f1adadbbc3c4
ms.sourcegitcommit: e8f6c51d4702c0046aec1394109bc0503ca182f0
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/07/2020
ms.locfileid: "87936183"
---
# <a name="connecting-to-sql-server-mysqltosql"></a>Connecting to SQL Server (MySQLToSQL)

To migrate MySQL databases to SQL Server, you must connect to the target instance of SQL Server. When you connect, SSMA obtains metadata about all databases in the instance of SQL Server and displays the database metadata in SQL Server Metadata Explorer. SSMA stores information about which instance of SQL Server you are connected to, but it does not store passwords.

Your connection to SQL Server stays active until you close the project. When you reopen the project, you must reconnect to SQL Server if you want an active connection to the server. You can work offline until you load database objects into SQL Server and migrate data.

Metadata about the instance of SQL Server is not automatically synchronized. Instead, to update the metadata in SQL Server Metadata Explorer, you must manually update the SQL Server metadata. For more information, see the "Synchronizing SQL Server metadata" section later in this topic.

## <a name="required-sql-server-permissions"></a>Required SQL Server permissions

The account that is used to connect to SQL Server requires different permissions depending on the actions that the account performs:

- To convert MySQL objects to [!INCLUDE[tsql](../../includes/tsql-md.md)] syntax, to update metadata from SQL Server, or to save converted syntax to scripts, the account must have permission to log on to the instance of SQL Server.
- To load database objects into SQL Server, the minimum permission requirement is membership in the **db_owner** database role in the target database.

## <a name="establishing-a-sql-server-connection"></a>Establishing a SQL Server connection

Before you convert MySQL database objects to SQL Server syntax, you must establish a connection to the instance of SQL Server where you want to migrate your MySQL database or databases.

When you define the connection properties, you also specify the database where objects and data will be migrated. You can customize this mapping at the MySQL schema level after you connect to SQL Server. For more information, see [Mapping MySQL Databases to SQL Server Schemas &#40;MySQLToSQL&#41;](../../ssma/mysql/mapping-mysql-databases-to-sql-server-schemas-mysqltosql.md)

> [!IMPORTANT]
> Before you try to connect to SQL Server, make sure that the instance of SQL Server is running and can accept connections.

**To connect to SQL Server**

1. On the **File** menu, select **Connect to SQL Server** (this option is enabled after a project is created). If you have previously connected to SQL Server, the command name will be **Reconnect to SQL Server**.
2. In the connection dialog box, enter or select the name of the instance of SQL Server.
   - If you are connecting to the default instance on the local computer, you can enter **localhost** or a dot (**.**).
   - If you are connecting to the default instance on another computer, enter the name of the computer.
   - If you are connecting to a named instance on another computer, enter the computer name followed by a backslash and then the instance name, such as MyServer\MyInstance.
3. If your instance of SQL Server is configured to accept connections on a non-default port, enter the port number that is used for SQL Server connections in the **Server port** box. For the default instance of SQL Server, the default port number is 1433. For named instances, SSMA will try to obtain the port number from the SQL Server Browser service.
4. In the **Authentication** box, select the authentication type to use for the connection. To use the current Windows account, select **Windows Authentication**. To use a SQL Server login, select SQL Server Authentication, and then provide the login name and password.
5. For a secure connection, two controls are added: the **Encrypt Connection** and **TrustServerCertificate** check boxes. The **TrustServerCertificate** check box is visible only when **Encrypt Connection** is checked. When **Encrypt Connection** is checked (true) and **TrustServerCertificate** is unchecked (false), the SQL Server SSL certificate is validated. Validating the server certificate is part of the SSL handshake, and it ensures that the server is the right server to connect to. To ensure this, a certificate must be installed on the client side as well as on the server side.
6. Click Connect.

**Higher version compatibility**

Connecting or reconnecting to later versions of SQL Server is allowed.

1. You can connect to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2008, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2012, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2014, or [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2016 when the project created is [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2005.
2. You can connect to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2012, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2014, or [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2016 when the project created is [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2008, but connecting to lower versions (that is, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2005) is not allowed.
3. You can connect to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2012, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2014, or [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2016 when the project created is [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2012.
4. You can connect only to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2014 or [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2016 when the project created is [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2014.
5. Higher version compatibility does not apply to "SQL Azure".

|Project type vs. target server version|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2005<br />(version: 9.x)|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2008<br />(version: 10.x)|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2012<br />(version: 11.x)|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2014<br />(version: 12.x)|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2016<br />(version: 13.x)|SQL Azure|
|-|-|-|-|-|-|-|
|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2005|Yes|Yes|Yes|Yes|Yes||
|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2008||Yes|Yes|Yes|Yes||
|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2012|||Yes|Yes|Yes||
|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2014||||Yes|Yes||
|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2016|||||Yes||
|SQL Azure||||||Yes|

> [!IMPORTANT]
> Conversion of database objects is performed according to the project type, not according to the version of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] you are connected to. For a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2005 project, the conversion is performed according to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 2005, even if you are connected to a later version of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] (SQL Server 2008/SQL Server 2012/SQL Server 2014/SQL Server 2016).

## <a name="synchronizing-sql-server-metadata"></a>Synchronizing SQL Server metadata

Metadata about SQL Server databases is not automatically updated. The metadata in SQL Server Metadata Explorer is a snapshot of the metadata from when you first connected to SQL Server, or from the last time that you manually updated the metadata. You can manually update the metadata for all databases, or for any individual database or database object.

**To synchronize metadata**

1. Make sure that you are connected to SQL Server.
2. In SQL Server Metadata Explorer, select the check box next to the database or database schema that you want to update. For example, to update the metadata for all databases, select the box next to Databases.
3. Right-click Databases, or an individual database or database schema, and then select **Synchronize with Database**.

## <a name="next-step"></a>Next step

The next step in the migration depends on your project needs:

- To customize the mapping between MySQL schemas and SQL Server databases and schemas, see [Mapping MySQL Databases to SQL Server Schemas &#40;MySQLToSQL&#41;](../../ssma/mysql/mapping-mysql-databases-to-sql-server-schemas-mysqltosql.md)
- To customize configuration options for the projects, see [Setting Project Options &#40;MySQLToSQL&#41;](../../ssma/mysql/setting-project-options-mysqltosql.md)
- To customize the mapping of source and target data types, see [Mapping MySQL and SQL Server Data Types &#40;MySQLToSQL&#41;](../../ssma/mysql/mapping-mysql-and-sql-server-data-types-mysqltosql.md)
- If you do not have to perform any of these tasks, you can convert the MySQL database object definitions into SQL Server object definitions. For more information, see [Converting MySQL Databases &#40;MySQLToSQL&#41;](../../ssma/mysql/converting-mysql-databases-mysqltosql.md)

## <a name="see-also"></a>See also

[Migrating MySQL Databases to SQL Server - Azure SQL DB &#40;MySQLToSql&#41;](../../ssma/mysql/migrating-mysql-databases-to-sql-server-azure-sql-db-mysqltosql.md)
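As a companion to the db_owner requirement described earlier, here is a hedged T-SQL sketch for preparing an account that SSMA can use; the login, password, and database names are illustrative placeholders, not values from this article.

```sql
-- Create a SQL Server login for the migration and give it the minimum recommended
-- access on the target database (names and password are placeholders).
CREATE LOGIN ssma_migration WITH PASSWORD = 'Use-A-Strong-Password-1!';
GO

USE MyTargetDb;
GO

CREATE USER ssma_migration FOR LOGIN ssma_migration;

-- db_owner membership on the target database is the minimum needed to load objects.
ALTER ROLE db_owner ADD MEMBER ssma_migration;
-- On SQL Server 2005/2008, use: EXEC sp_addrolemember 'db_owner', 'ssma_migration';
GO
```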
93
636
0.750179
por_Latn
0.996692
d0b340638b61f786322fbe036440d9a75e1e6011
771
md
Markdown
README.md
dotchang/ofxRealSense2
5f575d601bdcb2a84257e239c7dd0815c0b04f00
[ "Apache-2.0" ]
null
null
null
README.md
dotchang/ofxRealSense2
5f575d601bdcb2a84257e239c7dd0815c0b04f00
[ "Apache-2.0" ]
null
null
null
README.md
dotchang/ofxRealSense2
5f575d601bdcb2a84257e239c7dd0815c0b04f00
[ "Apache-2.0" ]
null
null
null
# ofxRealSense2

Thanks for taking a look. This is a minimal example for using the D415 and D435 depth cameras with openFrameworks (oF). It is based on the Intel® RealSense™ SDK 2.0.

## Screenshot

![sample](https://github.com/mizumasa/ofxRealSense2/blob/master/screenshot.png "sample")

# Requirements

* macOS with Xcode (not supported on Windows or Linux)
* Tested on oF v0.10.0
* Intel RealSense D415 or D435

# Install

## Build the SDK for RealSense

Installation guide: https://github.com/IntelRealSense/librealsense/blob/8e7db477a69741ca413a6b4d6402d19c8aace00e/doc/installation_osx.md

A pre-built library (librealsense2.2.10.1.dylib) is already included. If needed, place the dylib file in the directory "ofxRealSense2/libs/realsense2/lib/osx/".

## Future Plans

* Point Cloud
* Bone Tracking
...
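If you build librealsense yourself instead of using the bundled dylib, copying the result into the add-on is a single step. A hedged sketch follows; the source path is an assumption based on a typical local librealsense build, not something specified by this project:

```sh
# Copy a locally built librealsense dylib into the add-on's osx library folder
cp ~/librealsense/build/librealsense2.dylib ofxRealSense2/libs/realsense2/lib/osx/
```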
22.676471
119
0.76524
eng_Latn
0.712982
d0b44765640720aed83c1db2bfda841bdcd39d72
2,290
md
Markdown
readme.md
JPTIZ/libgba-cpp
44ad7d7ceae0ee92cdccde6fe06bfacb04ba1067
[ "MIT" ]
34
2017-09-01T02:18:04.000Z
2021-11-16T12:26:49.000Z
readme.md
patrickelectric/libgba-cpp
79fe11a42c0a31d654265dc8bd061f2dd47840ed
[ "MIT" ]
14
2018-08-29T17:05:24.000Z
2021-12-25T00:07:54.000Z
readme.md
patrickelectric/libgba-cpp
79fe11a42c0a31d654265dc8bd061f2dd47840ed
[ "MIT" ]
9
2020-10-14T17:24:40.000Z
2022-02-25T03:28:21.000Z
LibGBA - CPP
============

What is it?
-----------

It's a C++ library for GBA homebrew development, focused on a better API and abstraction, so programmers can focus more on game logic and call functions for effects instead of thinking in a too low-level way.

Requirements
------------

A lot of statements are written with C++11/14/17 features, so the requirements are:

- **Toolchain**: DevKitPro (to compile for the GBA ARMv4 ISA and ARM7TDMI processor).
- **Compiler**: GCC 7.1 or above.
- **Build-system**: Meson

Examples
--------

- [AlphaBlending (BG Layers)](tests/alphablend)
- [Generated Tilemaps](tests/gen_tilemap)
- [Interrupts](tests/interrupts)
- [Mode5](tests/mode5)
- [Mosaic](tests/mosaic)
- [Sound](tests/sound)
- [Tilemap](tests/tilemap)
- [Windowing](tests/windowing)

Building
--------

To build this project, after installing the [Requirements](#Requirements) and cloning this repository, enter the cloned repository's directory and run:

```console
$ meson build --cross-file cross_file.ini
```

If you want to build the examples as well, add `-Dbuild-tests=true` to the command:

```console
$ meson build -Dbuild-tests=true
```

Running examples
----------------

After building the examples (see [Building](#Building)), you'll need a GBA emulator. My personal recommendations are:

- [MGBA](https://mgba.io/): an open-source, fully-featured GBA emulator with a lot of cool debugging tools. It can also link with GDB for professional in-depth debugging.
- [Visual Boy Advance-M](https://vba-m.com/): continuation of the almighty legendary VisualBoyAdvance.

Examples are compiled into the `build/tests` directory. So to run, say, the `alphablend` example, you should open `build/tests/alphablend.gba` with your emulator.

Documentation
-------------

- [API Documentation](https://jptiz.github.io/libgba-cpp/index.html)

To-Do's
-------

[Check issues](https://github.com/JPTIZ/libgba-cpp/issues).

Bibliography
------------

There are two main contributions to the knowledge necessary to make this library: one is the greatly well explained tutorials [TONC by J Vijn](http://www.coranac.com/tonc/text/toc.htm) (which I really want to thank him for making), and the other is the great [GBA Technical Manual by Martin Korth](http://problemkaputt.de/gbatek.htm).
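After configuring with Meson as shown above, the compile step and a typical emulator launch look roughly like this. `ninja -C build` is standard Meson usage; the `mgba-qt` binary name depends on how mGBA was installed on your system, so treat it as an assumption:

```console
$ ninja -C build
$ mgba-qt build/tests/alphablend.gba
```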
27.590361
79
0.718341
eng_Latn
0.944487
d0b4caf35fd2750d44e700ce0a08e63abf5c3070
151
md
Markdown
_posts/2020-01-14-briefly-ugh.md
jonkit/jonkit.github.io
a667f7543c14f393f6af2078bf5284c74911c66d
[ "MIT" ]
null
null
null
_posts/2020-01-14-briefly-ugh.md
jonkit/jonkit.github.io
a667f7543c14f393f6af2078bf5284c74911c66d
[ "MIT" ]
null
null
null
_posts/2020-01-14-briefly-ugh.md
jonkit/jonkit.github.io
a667f7543c14f393f6af2078bf5284c74911c66d
[ "MIT" ]
null
null
null
---
layout: briefly
category: briefly
date: 2020-01-14 15:58:19 -0400
permalink: ugh
title: "Ugh"
---

One more test post, he said, believing himself.
16.777778
48
0.708609
eng_Latn
0.953826