| Column | Dtype | Min | Max |
|---|---|---|---|
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | stringclasses | 6 values | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 3 | 344 |
| max_stars_repo_name | stringlengths | 5 | 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 78 |
| max_stars_repo_licenses | sequencelengths | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 3 | 344 |
| max_issues_repo_name | stringlengths | 5 | 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 78 |
| max_issues_repo_licenses | sequencelengths | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 3 | 344 |
| max_forks_repo_name | stringlengths | 5 | 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 78 |
| max_forks_repo_licenses | sequencelengths | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| content | stringlengths | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | stringclasses | 191 values | |
| lid_prob | float64 | 0.01 | 1 |
6ad93c20d86701f2927b22e3f594c9dbe5be92b7
75
md
Markdown
README.md
JohannesLiu/KowledgeGraph-System-Collection
d0e8fe5117c1788a7f35eae9ac218e0c8b96a2fa
[ "MIT" ]
null
null
null
README.md
JohannesLiu/KowledgeGraph-System-Collection
d0e8fe5117c1788a7f35eae9ac218e0c8b96a2fa
[ "MIT" ]
null
null
null
README.md
JohannesLiu/KowledgeGraph-System-Collection
d0e8fe5117c1788a7f35eae9ac218e0c8b96a2fa
[ "MIT" ]
null
null
null
# KowledgeGraph-System-Collection

A collection of knowledge graph systems.
25
40
0.84
yue_Hant
0.388442
6ad9677b1e57819b8c01db465a22245307a182fc
15,227
md
Markdown
docs/debugger/debug-live-azure-apps-troubleshooting.md
icnocop/visualstudio-docs
61ee799c65dc6ccd0559e7872e168ab75387ed96
[ "CC-BY-4.0", "MIT" ]
1
2021-02-21T21:24:11.000Z
2021-02-21T21:24:11.000Z
docs/debugger/debug-live-azure-apps-troubleshooting.md
icnocop/visualstudio-docs
61ee799c65dc6ccd0559e7872e168ab75387ed96
[ "CC-BY-4.0", "MIT" ]
1
2021-03-29T23:11:13.000Z
2021-03-29T23:11:13.000Z
docs/debugger/debug-live-azure-apps-troubleshooting.md
icnocop/visualstudio-docs
61ee799c65dc6ccd0559e7872e168ab75387ed96
[ "CC-BY-4.0", "MIT" ]
1
2021-02-21T21:24:15.000Z
2021-02-21T21:24:15.000Z
---
title: "Troubleshooting snapshot debugging | Microsoft Docs"
description: Understand troubleshooting and known issues for snapshot debugging in Visual Studio. Load ICorProfiler without causing downtime on your production site.
ms.custom: SEO-VS-2020
ms.date: "04/24/2019"
ms.topic: "troubleshooting"
helpviewer_keywords:
  - "debugger"
ms.assetid: 511a0697-c68a-4988-9e29-8d0166ca044a
author: "mikejo5000"
ms.author: "mikejo"
manager: jmartens
ms.workload:
  - "multiple"
---
# Troubleshooting and known issues for snapshot debugging in Visual Studio

If the steps described in this article do not resolve your issue, search for the problem on [Developer Community](https://aka.ms/feedback/suggest?space=8) or report a new issue by choosing **Help** > **Send Feedback** > **Report a Problem** in Visual Studio.

## Issue: "Attach Snapshot Debugger" encounters an HTTP status code error

If you see the following error in the **Output** window during the attempt to attach, it may be one of the known issues listed below. Try the proposed solutions, and if the issue persists, use one of the feedback channels described at the beginning of this article.

`[TIMESTAMP] Error --- Unable to Start Snapshot Debugger - Attach Snapshot Debugger failed: System.Net.WebException: The remote server returned an error: (###) XXXXXX`

### (401) Unauthorized

This error indicates that the REST call issued by Visual Studio to Azure uses an invalid credential. Take these steps:

* Make sure that your Visual Studio personalization account has permissions to the Azure subscription and resource that you are attaching to. A quick way to determine this is to check whether the resource is available in the dialog box from **Debug** > **Attach Snapshot Debugger...** > **Azure Resource** > **Select Existing**, or in Cloud Explorer.
* If this error persists, use one of the feedback channels described at the beginning of this article.
If you have enabled Authentication/Authorization (EasyAuth) on your App Service, you may encounter a 401 error with `LaunchAgentAsync` in the call stack of the error message. Ensure that **Action to take when request is not authenticated** is set to **Allow Anonymous requests (no action)** in the Azure portal, and instead provide an authorization.json in D:\Home\sites\wwwroot with the following content:

```json
{
  "routes": [
    {
      "path_prefix": "/",
      "policies": { "unauthenticated_action": "RedirectToLoginPage" }
    },
    {
      "http_methods": [ "POST" ],
      "path_prefix": "/41C07CED-2E08-4609-9D9F-882468261608/api/agent",
      "policies": { "unauthenticated_action": "AllowAnonymous" }
    }
  ]
}
```

The first route effectively secures your app domain, similar to **Log in with [IdentityProvider]**. The second route exposes the Snapshot Debugger AgentLaunch endpoint outside of authentication; it performs the pre-defined action of starting the Snapshot Debugger diagnostic agent *only if* the Snapshot Debugger preinstalled site extension is enabled for your app service. For more details on the authorization.json configuration, see [URL authorization rules](https://azure.github.io/AppService/2016/11/17/URL-Authorization-Rules.html).

### (403) Forbidden

This error indicates that permission is denied, which can be caused by many different issues. Take these steps:

* Verify that your Visual Studio account has a valid Azure subscription with the necessary Role-Based Access Control (RBAC) permissions for the resource. For App Service, check whether you have permission to [query](/rest/api/appservice/appserviceplans/get) the App Service Plan hosting your app.
* Verify that the timestamp of your client machine is correct and up to date. Servers whose timestamps are off by more than 15 minutes from the request timestamp usually produce this error.
* If this error persists, use one of the feedback channels described at the beginning of this article.
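The 15-minute clock-skew rule above is easy to check locally. A minimal sketch in Python; the threshold and function name are illustrative, not part of any Azure API:

```python
from datetime import datetime, timedelta, timezone

# Azure rejects requests whose timestamp drifts too far from server time;
# the section above cites roughly 15 minutes as the usual threshold.
MAX_SKEW = timedelta(minutes=15)

def within_allowed_skew(client_time: datetime, server_time: datetime) -> bool:
    """True if the two clocks differ by no more than MAX_SKEW."""
    return abs(client_time - server_time) <= MAX_SKEW

server = datetime(2021, 2, 21, 21, 24, 11, tzinfo=timezone.utc)
assert within_allowed_skew(server + timedelta(minutes=5), server)
assert not within_allowed_skew(server - timedelta(minutes=20), server)
```

Comparing your machine's UTC clock against an NTP source before attaching can rule this cause out quickly.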
### (404) Not Found

This error indicates that the website couldn't be found on the server. Take these steps:

* Verify that you have a website deployed and running on the App Service resource that you're attaching to.
* Verify that the site is available at https://\<resource\>.azurewebsites.net.
* Verify that your custom web application, when running properly, does not return a status code of 404 when accessed at https://\<resource\>.azurewebsites.net.
* If this error persists, use one of the feedback channels described at the beginning of this article.

### (406) Not Acceptable

This error indicates that the server is unable to respond to the type set in the Accept header of the request. Take these steps:

* Verify that your site is available at https://\<resource\>.azurewebsites.net.
* Verify that your site has not migrated to new instances. Snapshot Debugger uses ARRAffinity to route requests to specific instances, which can produce this error intermittently.
* If this error persists, use one of the feedback channels described at the beginning of this article.

### (409) Conflict

This error indicates that the request conflicts with the current server state. It is a known issue that occurs when a user attempts to attach the Snapshot Debugger to an App Service that has Application Insights enabled. Application Insights sets the app settings with a different casing than Visual Studio, causing this issue.

::: moniker range=">= vs-2019"
This has been resolved in Visual Studio 2019.
::: moniker-end

Take these steps:

::: moniker range="vs-2017"
* Verify in the Azure portal that the app settings for Snapshot Debugger (SNAPSHOTDEBUGGER_EXTENSION_VERSION) and Instrumentation Engine (INSTRUMENTATIONENGINE_EXTENSION_VERSION) are uppercase. If not, update the settings manually, which forces a site restart.
::: moniker-end

* If this error persists, use one of the feedback channels described at the beginning of this article.
### (500) Internal Server Error

This error indicates that the site is completely down or the server cannot handle the request. The Snapshot Debugger only functions on running applications. [Application Insights Snapshot Debugger](/azure/azure-monitor/app/snapshot-debugger) provides snapshots on exceptions and may be the better tool for your needs.

### (502) Bad Gateway

This error indicates a server-side networking issue and may be temporary. Take these steps:

* Try waiting a few minutes before attaching the Snapshot Debugger again.
* If this error persists, use one of the feedback channels described at the beginning of this article.

## Issue: Snappoint does not turn on

If you see a warning icon ![Snappoint warning icon](../debugger/media/snapshot-troubleshooting-snappoint-warning-icon.png "Snappoint warning icon") with your snappoint instead of the regular snappoint icon, the snappoint is not turned on.

![Snappoint does not turn on](../debugger/media/snapshot-troubleshooting-dont-turn-on.png "Snappoint does not turn on")

Take these steps:

1. Make sure you have the same version of source code that was used to build and deploy your app.
2. Make sure you are loading the correct symbols for your deployment. To do this, view the **Modules** window while snapshot debugging and verify that the **Symbol File** column shows a .pdb file loaded for the module you are debugging. The Snapshot Debugger tries to automatically download and use symbols for your deployment.

## Issue: Symbols do not load when I open a snapshot

If you see the following window, symbols did not load.

![Symbols do not load](../debugger/media/snapshot-troubleshooting-symbols-wont-load.png "Symbols do not load")

Take these steps:

- Click the **Change Symbol Settings...** link on this page. In the **Debugging > Symbols** settings, add a symbol cache directory. Restart snapshot debugging after the symbol path has been set.
The symbols, or .pdb files, available in your project must match your App Service deployment. Most deployments (deployment through Visual Studio, CI/CD with Azure Pipelines or Kudu, and so on) publish your symbol files alongside your App Service. Setting the symbol cache directory enables Visual Studio to use these symbols.

![Symbol settings](../debugger/media/snapshot-troubleshooting-symbol-settings.png "Symbol settings")

- Alternatively, if your organization uses a symbol server or drops symbols in a different path, use the symbol settings to load the correct symbols for your deployment.

## Issue: I cannot see the "Attach Snapshot Debugger" option in the Cloud Explorer

Take these steps:

- Make sure the Snapshot Debugger component is installed. Open the Visual Studio Installer, and check the **Snapshot Debugger** component in the Azure workload.

::: moniker range="< vs-2019"
- Make sure your app is supported. Currently, only ASP.NET (4.6.1+) and ASP.NET Core (2.0+) apps deployed to Azure App Services are supported.
::: moniker-end

::: moniker range=">= vs-2019"
- Make sure your app is supported:
  - Azure App Services - ASP.NET applications running on .NET Framework 4.6.1 or later.
  - Azure App Services - ASP.NET Core applications running on .NET Core 2.0 or later on Windows.
  - Azure Virtual Machines (and virtual machine scale sets) - ASP.NET applications running on .NET Framework 4.6.1 or later.
  - Azure Virtual Machines (and virtual machine scale sets) - ASP.NET Core applications running on .NET Core 2.0 or later on Windows.
  - Azure Kubernetes Services - ASP.NET Core applications running on .NET Core 2.2 or later on Debian 9.
  - Azure Kubernetes Services - ASP.NET Core applications running on .NET Core 2.2 or later on Alpine 3.8.
  - Azure Kubernetes Services - ASP.NET Core applications running on .NET Core 2.2 or later on Ubuntu 18.04.
::: moniker-end

## Issue: I only see throttled snapshots in the Diagnostic Tools

![Throttled snappoint](../debugger/media/snapshot-troubleshooting-throttled-snapshots.png "Throttled snappoint")

Take these steps:

- Snapshots take up little memory but do have a commit charge. If the Snapshot Debugger detects that your server is under heavy memory load, it does not take snapshots. You can delete already-captured snapshots by stopping the Snapshot Debugger session and trying again.

::: moniker range=">= vs-2019"
## Issue: Snapshot debugging with multiple versions of Visual Studio gives me errors

Visual Studio 2019 requires a newer version of the Snapshot Debugger site extension on your Azure App Service. This version is not compatible with the older version of the site extension used by Visual Studio 2017. You get the following error if you try to attach the Snapshot Debugger in Visual Studio 2019 to an Azure App Service that has previously been debugged by the Snapshot Debugger in Visual Studio 2017:

![Incompatible Snapshot Debugger site extension Visual Studio 2019](../debugger/media/snapshot-troubleshooting-incompatible-vs2019.png "Incompatible Snapshot Debugger site extension Visual Studio 2019")

Conversely, if you use Visual Studio 2017 to attach the Snapshot Debugger to an Azure App Service that has previously been debugged by the Snapshot Debugger in Visual Studio 2019, you get the following error:

![Incompatible Snapshot Debugger site extension Visual Studio 2017](../debugger/media/snapshot-troubleshooting-incompatible-vs2017.png "Incompatible Snapshot Debugger site extension Visual Studio 2017")

To fix this, delete the following app settings in the Azure portal and attach the Snapshot Debugger again:

- INSTRUMENTATIONENGINE_EXTENSION_VERSION
- SNAPSHOTDEBUGGER_EXTENSION_VERSION
::: moniker-end

## Issue: I am having problems snapshot debugging and I need to enable more logging

### Enable agent logs

To enable and disable agent logging,
open Visual Studio and navigate to **Tools > Options > Snapshot Debugger > Enable agent logging**. Note that if **Delete old agent logs on session start** is also enabled, each successful Visual Studio attach deletes the previous agent logs.

Agent logs can be found in the following locations:

- App Services:
  - Navigate to your App Service's Kudu site (that is, yourappservice.**scm**.azurewebsites.net) and open the Debug Console.
  - Agent logs are stored in the following directory: D:\home\LogFiles\SiteExtensions\DiagnosticsAgentLogs\
- VM/VMSS:
  - Sign in to your VM; agent logs are stored as follows: C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\<Version>\SnapshotDebuggerAgent_*.txt
- AKS:
  - Navigate to the following directory: /tmp/diag/AgentLogs/*

### Enable profiler/instrumentation logs

Instrumentation logs can be found in the following locations:

- App Services:
  - Error logging is automatically sent to D:\Home\LogFiles\eventlog.xml; events are marked with `<Provider Name="Instrumentation Engine" />` or "Production Breakpoints".
- VM/VMSS:
  - Sign in to your VM and open Event Viewer.
  - Open the following view: **Windows Logs > Application**.
  - **Filter Current Log** by **Event Source** using either *Production Breakpoints* or *Instrumentation Engine*.
- AKS:
  - Instrumentation engine logging is at /tmp/diag/log.txt (set MicrosoftInstrumentationEngine_FileLogPath in the DockerFile).
  - Production Breakpoints logging is at /tmp/diag/shLog.txt.

## Known Issues

- Snapshot debugging with multiple Visual Studio clients against the same App Service is not currently supported.
- Roslyn IL optimizations are not fully supported in ASP.NET Core projects. For some ASP.NET Core projects, you may not be able to see some variables or use them in conditional statements.
- Special variables, such as *$FUNCTION* or *$CALLER*, cannot be evaluated in conditional statements or logpoints for ASP.NET Core projects.
- Snapshot debugging does not work on App Services that have [Local Caching](/azure/app-service/app-service-local-cache) turned on.
- Snapshot debugging API Apps is not currently supported.

## Site Extension Upgrade

Snapshot Debugging and Application Insights depend on an ICorProfiler, which loads into the site process and causes file-locking issues during upgrade. We recommend the following process to ensure there is no downtime on your production site:

- Create a [deployment slot](/azure/app-service/web-sites-staged-publishing) within your App Service and deploy your site to the slot.
- Swap the slot with production from Cloud Explorer in Visual Studio or from the Azure portal.
- Stop the slot site. It takes a few seconds to kill off the site's w3wp.exe process on all instances.
- Upgrade the slot site extension from the Kudu site or the Azure portal (**App Service blade > Development Tools > Extensions > Update**).
- Start the slot site. We recommend visiting the site to warm it up again.
- Swap the slot with production.

## See also

- [Debugging in Visual Studio](../debugger/index.yml)
- [Debug live ASP.NET apps using the Snapshot Debugger](../debugger/debug-live-azure-applications.md)
- [Debug live ASP.NET Azure Virtual Machines\Virtual Machine Scale Sets using the Snapshot Debugger](../debugger/debug-live-azure-virtual-machines.md)
- [Debug live ASP.NET Azure Kubernetes using the Snapshot Debugger](../debugger/debug-live-azure-kubernetes.md)
- [FAQ for snapshot debugging](../debugger/debug-live-azure-apps-faq.md)
61.399194
544
0.780653
eng_Latn
0.984528
6ad97405f4a09d2bcd71692690ba39d9ebb2c24a
3,900
md
Markdown
presentations/cs_seminar_series_1-2016/cs_seminar_outline.md
mtb-za/science
49321885ff7fe8b37ea2eb4d3d74a8abb89c6abf
[ "MIT" ]
null
null
null
presentations/cs_seminar_series_1-2016/cs_seminar_outline.md
mtb-za/science
49321885ff7fe8b37ea2eb4d3d74a8abb89c6abf
[ "MIT" ]
null
null
null
presentations/cs_seminar_series_1-2016/cs_seminar_outline.md
mtb-za/science
49321885ff7fe8b37ea2eb4d3d74a8abb89c6abf
[ "MIT" ]
null
null
null
# Background

The Karoo is an area of great geological interest. There is also a possibility that it contains large quantities of gas, which may be commercially viable. Unfortunately, while the broad sweep of how the Karoo operates as a geological body is (mostly) understood, the fine details are not known. In order to make sensible decisions as to whether or not to go ahead with gas (or other) development, we need more information.

# Problem

Existing datasets are generally low resolution, so they cannot answer questions about fine details. The data is not always easy to understand, especially in three dimensions. So while we know what is happening at the surface, there are only foggy ideas as to what is happening as we go deeper.

# How to solve

NMMU is running a broad research programme within the Karoo, looking at a wide variety of things, in the hope of being able to answer some of these questions. Techniques exist to image subsurface geometry - faults, dykes and sills. Computer-based images allow for good exploration in 3D; we are no longer limited to working on paper, in 2D. So, with a new dataset focused on the Karoo, we can try to get some subsurface information. We also need to make it easy to view the data, whether as blocks or pseudo-sections or whatever.

# Work so far

I have implemented some filtering techniques from the literature, mostly in two dimensions. I have also been driving up and down the Jansenville area getting permission from landowners to fly over their land. We have identified about 200 land parcels, with approximately 100 unique landowners. Of the individual parcels, we have signed permission on 141, with 21 that are just waiting for signatures. About 40 are still being contacted. We are being refused permission to fly over the remaining 8 parcels. Side note: this is really, really good going, actually.

## Writing

I have written two and a bit chapters, which have been reviewed and are currently being revised based on that feedback.
# Current work

We need to finish getting permission, since the survey will be getting off the ground in the next week or so. This also involves liaising with the people contracted to fly the survey. Currently this is the biggest block to moving forward.

## Other items on my plate right now

Restructuring the dissertation to flow better. A decent amount has been written, but the structure is still a bit wonky, and links from one section to the next need to be made better.

Also working on chapter 3: setting up tests to ensure that the filters are behaving correctly, according to the papers that developed them. For example, the tilt filter should be positive over a body, zero at or near the edge, and negative outside the body. I am testing that this is correct with synthetic data, to ensure that I have not made a mistake in the maths somewhere.

# Next steps

Actually fly the survey, which will start as soon as the permissions are sorted out. This should take about 6 weeks.

I will start working out the transition into 3D, based on some filters that give depth estimates as well as xy position.

I also need to get going on front-end development, since the back-end is reasonably stable. The front-end will be fairly minimalist, to aid ease of use. Links to the back-end documentation's docstrings will help. This will include an approach to viewing things in 3D, which for this type of data has its own problems. A big one will be how to view things behind other things, while not suggesting that there is empty space.

# On track?

More or less. The survey has taken far longer to get off the ground than we anticipated (the permissions have taken about 5 months). The filter implementation is fairly straightforward and going well, although making sure the filters are tested is sometimes tricky. Writing has gone OK, and should be in a good place in a week or two, once the document has been restructured. Expanding and reworking these aspects should be relatively standard.
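The sign checks on the tilt filter described above can be automated. A minimal sketch of such a test in Python, using hand-built synthetic derivative values rather than a real forward model (the sample numbers are illustrative only; a proper test would generate them from a synthetic body):

```python
import math

def tilt_angle(dfdz: float, dfdx: float) -> float:
    # Tilt derivative: arctangent of the vertical derivative over the
    # absolute value of the horizontal derivative.
    return math.atan2(dfdz, abs(dfdx))

# Synthetic (x, vertical derivative, horizontal derivative) samples for a
# body whose edge sits at x = 2; the vertical derivative changes sign
# across the edge, mimicking the behaviour the literature predicts.
samples = [
    (0.0, 1.0, 0.2),   # over the body
    (2.0, 0.0, 0.8),   # at the edge
    (4.0, -0.5, 0.3),  # outside the body
]

assert tilt_angle(samples[0][1], samples[0][2]) > 0       # positive over body
assert abs(tilt_angle(samples[1][1], samples[1][2])) < 1e-9  # zero at edge
assert tilt_angle(samples[2][1], samples[2][2]) < 0       # negative outside
```

Using `atan2` with an absolute horizontal gradient keeps the result bounded in (-pi/2, pi/2), which is what makes the sign test well defined.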
102.631579
412
0.794872
eng_Latn
0.999977
6ada541660bd335f96fcdebc744081b7c983679f
645
md
Markdown
site/rover/INV-OVERVIEW/INV-REPORT/INV-Q2/README.md
mikes-zum/docs
2f60f8f79dea5b56d930021f17394c5b9afb86d5
[ "MIT" ]
7
2019-12-06T23:39:36.000Z
2020-12-13T13:26:23.000Z
site/rover/INV-OVERVIEW/INV-REPORT/INV-Q2/README.md
mikes-zum/docs
2f60f8f79dea5b56d930021f17394c5b9afb86d5
[ "MIT" ]
36
2020-01-21T00:17:12.000Z
2022-02-28T03:24:29.000Z
site/rover/INV-OVERVIEW/INV-REPORT/INV-Q2/README.md
mikes-zum/docs
2f60f8f79dea5b56d930021f17394c5b9afb86d5
[ "MIT" ]
33
2020-02-07T12:24:42.000Z
2022-03-24T15:38:31.000Z
## Inventory Availability Inquiry by Specs (INV.Q2)

<PageHeader />

**Form Details**

[Form Details](INV-Q2-1/README.md)

**Purpose**

The INV.Q2 procedure provides an inquiry into the status of inventory by part specs. The user may specify one or more part specs. The procedure finds all parts that match the selected specs and displays balances for all inventory locations containing the part. The information displayed includes the on-hand balance and the committed and allocated quantities for each location.

**Frequency of Use**

As required.

**Prerequisites**

None.

<badge text="Version 8.10.57" vertical="middle" />

<PageFooter />
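As a rough illustration of the per-location balances described above, a sketch in Python. The record fields and the netting convention are assumptions for illustration only, not the actual INV.Q2 logic:

```python
# Hypothetical location balances; field names are illustrative only.
locations = [
    {"location": "MAIN", "on_hand": 120, "committed": 30, "allocated": 15},
    {"location": "WEST", "on_hand": 40, "committed": 0, "allocated": 10},
]

def net_available(rec: dict) -> int:
    # One common availability convention: on hand less committed and allocated.
    return rec["on_hand"] - rec["committed"] - rec["allocated"]

for rec in locations:
    print(rec["location"], rec["on_hand"], rec["committed"],
          rec["allocated"], net_available(rec))
```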
25.8
78
0.742636
eng_Latn
0.997344
6adabc97cc95302157dc879a7e5e502aedb907fd
1,734
md
Markdown
docs/modeling/modeling-sdk-for-visual-studio-domain-specific-languages.md
MicrosoftDocs/visualstudio-docs.ko-kr
367344fed1f3d162b028af8a41a785a2137598e8
[ "CC-BY-4.0", "MIT" ]
13
2019-10-02T05:47:05.000Z
2022-03-09T07:28:28.000Z
docs/modeling/modeling-sdk-for-visual-studio-domain-specific-languages.md
MicrosoftDocs/visualstudio-docs.ko-kr
367344fed1f3d162b028af8a41a785a2137598e8
[ "CC-BY-4.0", "MIT" ]
115
2018-01-17T01:43:25.000Z
2021-02-01T07:27:06.000Z
docs/modeling/modeling-sdk-for-visual-studio-domain-specific-languages.md
MicrosoftDocs/visualstudio-docs.ko-kr
367344fed1f3d162b028af8a41a785a2137598e8
[ "CC-BY-4.0", "MIT" ]
33
2018-01-17T01:25:13.000Z
2022-02-14T05:28:44.000Z
---
title: Modeling SDK for Visual Studio - Domain-Specific Languages
description: Use the Modeling SDK for Visual Studio to create powerful model-based development tools that you can integrate into Visual Studio.
ms.custom: SEO-VS-2020
titleSuffix: ''
ms.date: 11/04/2016
ms.topic: conceptual
helpviewer_keywords:
- Domain-Specific Language Tools
- Domain-Specific Language
author: mgoertz-msft
ms.author: mgoertz
manager: jmartens
ms.technology: vs-ide-modeling
ms.workload:
- multiple
ms.openlocfilehash: 00e140b3191ef6d40fbd0ee519580dee46345e17
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 08/13/2021
ms.locfileid: "122047818"
---
# <a name="modeling-sdk-for-visual-studio---domain-specific-languages"></a>Modeling SDK for Visual Studio - Domain-Specific Languages

With the Visual Studio Modeling SDK, you can create powerful model-based development tools that integrate into Visual Studio. In the same way, you can create one or more model definitions and integrate them into a set of tools.

At the core of the MSDK is the definition of a model that you create to represent concepts in your business domain. You can surround the model with a variety of tools, such as a diagrammatic view, the ability to generate code and other artifacts, commands for transforming the model, and the ability to interact with code and other objects in Visual Studio. As you develop the model, you can combine it with other models and tools to form a powerful set of tools centered on your development.

The MSDK lets you develop a model quickly in the form of a domain-specific language (DSL). You begin by using a specialized editor to define a schema or abstract syntax together with a graphical notation. From this definition, VMSDK generates:

- A model implementation with a strongly typed API that runs in a transaction-based store.
- A tree-based explorer.
- A graphical editor in which users can view the model, or parts of it, that you define.
- Serialization methods that save your models in a readable XML form.
- Facilities for generating program code and other artifacts by using text templating.

You can customize and extend all of these features. Your extensions are integrated in such a way that you can still update the DSL definition and regenerate the features without losing your extensions.

[!INCLUDE[modeling_sdk_info](includes/modeling_sdk_info.md)]

[Related blog post](https://devblogs.microsoft.com/devops/the-visual-studio-modeling-sdk-is-now-available-with-visual-studio-2017/)
36.893617
222
0.757785
kor_Hang
1.00001
6adb7c5e2ad04aec43d354e8412368a4107b1cef
80
md
Markdown
docs/AtivVotes.md
AtivSolana/ativ-EVM
6ade61a9064a1067802927d52ac433f0791037c8
[ "MIT" ]
1
2022-01-06T02:39:39.000Z
2022-01-06T02:39:39.000Z
docs/AtivVotes.md
AtivSolana/ativ-EVM
6ade61a9064a1067802927d52ac433f0791037c8
[ "MIT" ]
null
null
null
docs/AtivVotes.md
AtivSolana/ativ-EVM
6ade61a9064a1067802927d52ac433f0791037c8
[ "MIT" ]
1
2021-12-18T08:16:12.000Z
2021-12-18T08:16:12.000Z
## `AtivVotes`

### `mint(address account, uint256 amount)` (public)
5.333333
52
0.575
eng_Latn
0.230368
6adbb4c8ce975c934113e8f8e6199bc84b71e45e
2,424
md
Markdown
pt-br/waves-node/how-to-install-a-node/on-mac.md
thiagocapuano/waves-documentation
24dc660471f9371277e6f864b7135a48454fce45
[ "MIT" ]
1
2021-03-10T02:46:25.000Z
2021-03-10T02:46:25.000Z
pt-br/waves-node/how-to-install-a-node/on-mac.md
thiagocapuano/waves-documentation
24dc660471f9371277e6f864b7135a48454fce45
[ "MIT" ]
null
null
null
pt-br/waves-node/how-to-install-a-node/on-mac.md
thiagocapuano/waves-documentation
24dc660471f9371277e6f864b7135a48454fce45
[ "MIT" ]
1
2019-07-30T20:21:36.000Z
2019-07-30T20:21:36.000Z
# Install the JRE 1.8

Mac OS X users can install the Oracle JRE 8 (**64-bit version**) from [Homebrew](http://brew.sh/) through [Cask](https://caskroom.github.io/). It is enough to run `brew cask install Caskroom/cask/java`.

Now you can check your JRE installation. Open a terminal and execute the command `java -version`. If you see

```
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
```

then all is OK, and you can move on to the next step! If you get an error, check your installation and try to find a solution or a better tutorial online.

**Note:** It is necessary to install the **64-bit version** of **Oracle JRE 8**. You can also check the Waves releases [here](https://github.com/wavesplatform/Waves/releases).

# Download the Waves package and configure the application

[Download the latest version](https://github.com/wavesplatform/Waves/releases) of waves.jar and the required .conf configuration file (for mainnet or testnet) to any folder, for example `~/waves`.

Carefully edit the waves.conf configuration file - **this is very important! The safety of your wallet and money depends on it!** Open it with your favorite text editor, pour a cup of tea, and read [the documentation of the configuration file](/waves-node/configuration-parameters.md).

Then start the Terminal app (`Terminal.app`), navigate to the folder with the jar file with the command `cd ~/waves`, and start the Waves node with the command `java -jar waves.jar waves.conf`.

# Additional security

For added security, it is recommended to store your wallet and configuration files on an encrypted partition. You can read about it [here](https://support.apple.com/en-us/HT201599). Also, you may want to limit the use of these folders to designated users only. You can read about it [here](http://ss64.com/osx/chown.html). If you decide to use RPC, you should protect it with the macOS built-in firewall or any other firewall.
You can read about it [here](https://support.apple.com/en-us/HT201642). If your server is public and available on the Internet and you decide to enable and use RPC, allow only certain methods using [Nginx's proxy_pass module](http://nginx.org/ru/docs/http/ngx_http_proxy_module.html), and do not forget to set the API key hash in the configuration file. Also, do not forget to keep the OS and other security software up to date.
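The `java -version` check at the start of this guide can also be scripted. A minimal sketch in Python that parses the version banner; the regular expression is an assumption based on the sample output shown earlier:

```python
import re

def java_major_minor(version_output: str) -> str:
    """Extract the major.minor part (e.g. '1.8') from `java -version` output."""
    m = re.search(r'version "(\d+\.\d+)', version_output)
    return m.group(1) if m else ""

banner = (
    'java version "1.8.0_74"\n'
    "Java(TM) SE Runtime Environment (build 1.8.0_74-b02)\n"
    "Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)"
)
assert java_major_minor(banner) == "1.8"
```

Note that `java -version` writes its banner to stderr, so a real check would capture that stream (for example with `subprocess.run(..., stderr=subprocess.PIPE)`).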
63.789474
452
0.760726
eng_Latn
0.989449
6adc2c18d430d2a5cec63996b6e660a3c969a58d
1,037
md
Markdown
technical-reference/msbts-trackedmessageinstance-messageinstanceid-property-wmi.md
SicongLiuSimon/biztalk-docs
85394b436d277504d9e759c655608888123785bd
[ "CC-BY-4.0", "MIT" ]
1
2020-06-16T22:06:46.000Z
2020-06-16T22:06:46.000Z
technical-reference/msbts-trackedmessageinstance-messageinstanceid-property-wmi.md
AzureMentor/biztalk-docs
16b211f29ad233c26d5511475c7e621760908af3
[ "CC-BY-4.0", "MIT" ]
7
2020-01-09T22:34:58.000Z
2020-02-18T19:42:16.000Z
technical-reference/msbts-trackedmessageinstance-messageinstanceid-property-wmi.md
AzureMentor/biztalk-docs
16b211f29ad233c26d5511475c7e621760908af3
[ "CC-BY-4.0", "MIT" ]
2
2017-06-23T18:30:28.000Z
2017-11-28T01:11:25.000Z
---
title: MSBTS_TrackedMessageInstance.MessageInstanceID Property (WMI)
TOCTitle: MSBTS_TrackedMessageInstance.MessageInstanceID Property (WMI)
ms:assetid: eec5b1c3-9da1-4018-8c4b-56ceface2e26
ms:mtpsurl: https://msdn.microsoft.com/library/Aa561807(v=BTS.80)
ms:contentKeyID: 51533281
ms.date: 08/30/2017
mtps_version: v=BTS.80
---
# MSBTS\_TrackedMessageInstance.MessageInstanceID Property (WMI)

Contains the ID of the message instance.

## Property Declaration

*The syntax shown is language neutral.*

```C#
string MessageInstanceID;
```

## Remarks

This property is read-only.

This property has a **Key** qualifier. Along with **SourceDBName** and **SourceDBServerName**, this key forms a compound key for the class.

For sample code illustrating the **MSBTS\_TrackedMessageInstance** class, see [Saving a Message to a File Using WMI](saving-a-message-to-a-file-using-wmi.md).

## Requirements

**Header:** Declared in BTSWMISchema2K.mof or BTSWMISchemaXP.mof.

**Namespace:** Included in \\root\\MicrosoftBizTalkServer.
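Because **MessageInstanceID**, **SourceDBName**, and **SourceDBServerName** form a compound key, a WQL query that selects a single tracked message instance must supply all three values. A sketch in Python that only builds the query string; actually executing it requires a WMI connection to the \\root\\MicrosoftBizTalkServer namespace, which is outside this snippet:

```python
def tracked_message_query(message_instance_id: str,
                          source_db: str,
                          source_server: str) -> str:
    # Builds a WQL query using the full compound key described above.
    return (
        "SELECT * FROM MSBTS_TrackedMessageInstance"
        f" WHERE MessageInstanceID = '{message_instance_id}'"
        f" AND SourceDBName = '{source_db}'"
        f" AND SourceDBServerName = '{source_server}'"
    )

q = tracked_message_query("{00000000-0000-0000-0000-000000000000}",
                          "BizTalkDTADb", "MYSERVER")
assert "MessageInstanceID" in q and "SourceDBServerName" in q
```

Omitting any of the three key properties would make the query match more than one row, since no single property is unique on its own.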
26.589744
158
0.77242
yue_Hant
0.36964
6adc3e93edbb74c2428381f36cbb8d202f972fc1
592
md
Markdown
supplement8-2_001_20200105.md
kitazaki/jetson_nano_book_chonyumon
26f80141c35ada396a303a3e48a85a336e507b12
[ "Apache-2.0" ]
1
2020-01-05T07:57:35.000Z
2020-01-05T07:57:35.000Z
supplement8-2_001_20200105.md
kitazaki/jetson_nano_book_chonyumon
26f80141c35ada396a303a3e48a85a336e507b12
[ "Apache-2.0" ]
null
null
null
supplement8-2_001_20200105.md
kitazaki/jetson_nano_book_chonyumon
26f80141c35ada396a303a3e48a85a336e507b12
[ "Apache-2.0" ]
null
null
null
# Affected section

Chapter 8-2, installing the Python library (Jetson GPIO Library package) - adding and applying the udev rules (p. 226).

## Problem and fix

JetPack 4.3 (r32.3.1) was released on December 17, 2019 (US time). The installation instructions for Jetson.GPIO - Linux for Tegra were updated on December 18, 2019 (US time).

(Reference) https://github.com/NVIDIA/jetson-gpio/commits/master/README.md

The steps below were verified on JetPack 4.3 (r32.3.1).

First, update the package information:

```bash
$ sudo apt update
```

Install pip3:

```bash
$ sudo apt install python3-pip
```

## Changed step

Adding and applying the udev rules:

```bash
$ sudo cp etc/99-gpio.rules /etc/udev/rules.d/
↓
$ sudo cp lib/python/Jetson/GPIO/99-gpio.rules /etc/udev/rules.d/
```
17.939394
68
0.716216
yue_Hant
0.741123
6add268e705380cf0a6e7ca99b1efca8c2c2d8f7
1,369
md
Markdown
_posts/2017-01-11-Louise-Bentley-Coraline-BE15.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
_posts/2017-01-11-Louise-Bentley-Coraline-BE15.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
_posts/2017-01-11-Louise-Bentley-Coraline-BE15.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
---
layout: post
date: 2017-01-11
title: "Louise Bentley Coraline BE15"
category: Louise Bentley
tags: [Louise Bentley]
---
### Louise Bentley Coraline BE15

Just **$309.99**

<table><tr><td>BRANDS</td><td>Louise Bentley</td></tr></table>

<a href="https://www.readybrides.com/en/louise-bentley/72312-louise-bentley-coraline-be15.html"><img src="//img.readybrides.com/169862/louise-bentley-coraline-be15.jpg" alt="Louise Bentley Coraline BE15" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/louise-bentley/72312-louise-bentley-coraline-be15.html"><img src="//img.readybrides.com/169863/louise-bentley-coraline-be15.jpg" alt="Louise Bentley Coraline BE15" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/louise-bentley/72312-louise-bentley-coraline-be15.html"><img src="//img.readybrides.com/169864/louise-bentley-coraline-be15.jpg" alt="Louise Bentley Coraline BE15" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/louise-bentley/72312-louise-bentley-coraline-be15.html"><img src="//img.readybrides.com/169861/louise-bentley-coraline-be15.jpg" alt="Louise Bentley Coraline BE15" style="width:100%;" /></a>

Buy it: [https://www.readybrides.com/en/louise-bentley/72312-louise-bentley-coraline-be15.html](https://www.readybrides.com/en/louise-bentley/72312-louise-bentley-coraline-be15.html)
76.055556
244
0.742878
yue_Hant
0.191965
6adec91596c09cd758525db987902457b98caf97
7,450
md
Markdown
site/en/blog/show-picker/index.md
itaied/developer.chrome.com
318e91993ebfec92f14b3fe3a67a4d2e37c482ec
[ "Apache-2.0" ]
1
2022-02-21T16:09:30.000Z
2022-02-21T16:09:30.000Z
site/en/blog/show-picker/index.md
itaied/developer.chrome.com
318e91993ebfec92f14b3fe3a67a4d2e37c482ec
[ "Apache-2.0" ]
1
2022-02-25T19:31:13.000Z
2022-02-25T19:31:13.000Z
site/en/blog/show-picker/index.md
itaied/developer.chrome.com
318e91993ebfec92f14b3fe3a67a4d2e37c482ec
[ "Apache-2.0" ]
1
2022-03-20T16:42:11.000Z
2022-03-20T16:42:11.000Z
---
layout: 'layouts/blog-post.njk'
title: 'Show a browser picker for date, time, color, and files'
description: >
  The web platform now ships with a canonical way to show a browser picker.
authors:
  - beaufortfrancois
date: 2022-01-28
hero: image/vvhSqZboQoZZN9wBvoXq72wzGAf1/zWT5kk3Q5RlDNqGkdgis.jpeg
alt: A photo of a monthly schedule printed on white paper
tags:
  - chrome-99
---

For a long time, you had to resort to custom widget libraries or hacks to show a date picker. The web platform now ships with the HTMLInputElement `showPicker()` method, a canonical way to show a browser picker not only for dates, but also time, color, and files.

## Background {: #background }

A [frequent request] we hear from web developers is:

<blockquote>
<p>
How do I programmatically<br/>
show a picker for controls like date?
</p>
<cite>
Stack Overflow
</cite>
</blockquote>

Current answers are not great; they rely on external libraries, CSS hacks, or specific browser behaviors like simulating a user interaction with `click()`.

I'm happy to share that those days will be over soon as the web platform is introducing a canonical way to show a browser picker for `<input>` elements with these types: `"date"`, `"month"`, `"week"`, `"time"`, `"datetime-local"`, `"color"`, and `"file"`. It will also work for `<input>` elements with suggestions powered by `<datalist>` or `"autocomplete"` which we'll cover as well in this article.

<figure class="w-figure">
{% Img src="image/vvhSqZboQoZZN9wBvoXq72wzGAf1/uh0U2YQnUMato21MzzbR.png", alt="Screenshots of browser pickers", width="800", height="217" %}
<figcaption class="w-figcaption">Browser date pickers in Chrome desktop, Chrome mobile, Safari desktop, Safari mobile, and Firefox desktop (July 2021).</figcaption>
</figure>

{% Aside %}
Unlike a third-party picker widget, a browser picker is familiar to the user, has great platform-specific support, and is always maintained as part of the browser.
{% endAside %}

## How to show a picker {: #how-to }

Calling `showPicker()` on an `<input>` element shows a browser picker to the user. It must be called in response to a user gesture such as a touch gesture or mouse click; otherwise it will fail with a [`NotAllowedError`][error] exception. For security reasons, it will throw a [`SecurityError`][error] exception when it's called in a cross-origin iframe.

A browser picker is shown when the `<input>` element is one of these types: `"date"`, `"month"`, `"week"`, `"time"`, `"datetime-local"`, `"color"`, or `"file"`.

The example below shows you how to open a browser date picker.

```html/9
<input type="date">
<button>Show the date picker</button>

<script>
  const button = document.querySelector("button");
  const dateInput = document.querySelector("input");
  button.addEventListener("click", () => {
    try {
      dateInput.showPicker();
      // A date picker is shown.
    } catch (error) {
      // Use external library when this fails.
    }
  });
</script>
```

A browser picker can also be prepopulated with items from `<datalist>` or `"autocomplete"`. The example below shows you how to open a browser picker with `<datalist>`.

```html/16
<datalist id="ice-cream-flavors">
  <option value="Chocolate"> </option>
  <option value="Coconut"> </option>
  <option value="Mint"> </option>
  <option value="Strawberry"> </option>
  <option value="Vanilla"> </option>
</datalist>

<input type="text" list="ice-cream-flavors">
<button>Show the suggestions</button>

<script>
  const button = document.querySelector("button");
  const iceCreamFlavorsInput = document.querySelector("input");
  button.addEventListener("click", () => {
    try {
      iceCreamFlavorsInput.showPicker();
      // A picker containing some ice cream flavors is shown.
    } catch (error) {
      // Use external library when this fails.
    }
  });
</script>
```

## Feature detection {: #feature-detection }

To check if `showPicker()` is supported, use:

```js
if ('showPicker' in HTMLInputElement.prototype) {
  // showPicker() is supported.
}
```

## Demo

A demo is available at [https://show-picker.glitch.me/demo.html][demo] for you to play with all pickers supported by the browser.

{% Video src="video/vvhSqZboQoZZN9wBvoXq72wzGAf1/MMPmrnJnMvFX3eMCUWK9.mov", autoplay="true", muted="true", loop="true", class="screenshot" %}

## Browser support

`showPicker()` is available in Chrome&nbsp;99 or later.

## What's next {: #future }

At the time of writing, `showPicker()` is new to the web platform. The feature may need additional work in the future:

- We may want to add a similar `showPicker()` to the `<select>` element in the future, if web developers ask for it.
- It's possible `closePicker()` might be useful, and we could consider adding that if web developers ask for it.
- We could add a [permissions policy] which allows cross-origin iframes to show browser pickers when their parent chain allows them to do so.

## Feedback {: #feedback }

The Chrome team and the web standards community want to hear about your experiences with `showPicker()`.

### Tell us about the design

Is there something about `showPicker()` that doesn't work like you expected? Or are there missing methods or properties that you need to implement your idea? Have a question or comment on the security model?

- File a spec issue on the [WHATWG GitHub repo][issues], or add your thoughts to an existing issue.

### Problem with the implementation?

Did you find a bug with Chrome's implementation? Or is the implementation different from the spec?

- File a bug at <https://new.crbug.com>. Be sure to include as much detail as you can, and simple instructions for reproducing. [Glitch](https://glitch.com) works great for sharing quick and easy repros.

### Show support

Are you planning to use `showPicker()`? Your public support helps the Chrome team prioritize features and shows other browser vendors how critical it is to support them.

Send a tweet to [@ChromiumDev] and let us know where and how you are using it.

## Helpful links {: #links }

- [MDN documentation][mdn]
- [WHATWG explainer][explainer]
- [WHATWG specification][spec]
- [TAG review][tag]
- [Demo][demo] | [Demo source][demo-source]
- [Chromium bug][cr-bug]
- [ChromeStatus.com entry][cr-status]

## Acknowledgements

Thanks to [Joe Medley] for reviewing this article. Calendar image photo by [Eric Rothermel] on [Unsplash].

[frequent request]: https://www.google.com/search?q=programmatically+open+date+picker+site%3Astackoverflow.com
[error]: https://developer.mozilla.org/en-US/docs/Web/API/DOMException
[demo]: https://show-picker.glitch.me/demo.html
[issues]: https://github.com/whatwg/html/issues
[permissions policy]: https://w3c.github.io/webappsec-permissions-policy/
[@chromiumdev]: https://twitter.com/ChromiumDev
[mdn]: https://developer.mozilla.org/docs/Web/API/HTMLInputElement/showPicker
[explainer]: https://github.com/whatwg/html/pull/7319
[spec]: https://html.spec.whatwg.org/multipage/input.html#dom-input-showpicker
[tag]: https://github.com/w3ctag/design-reviews/issues/688
[demo-source]: https://glitch.com/edit/#!/show-picker?path=demo.html
[cr-bug]: https://bugs.chromium.org/p/chromium/issues/detail?id=939561
[cr-status]: https://www.chromestatus.com/feature/5692248021794816
[joe medley]: https://github.com/jpmedley
[eric rothermel]: https://unsplash.com/@erothermel
[unsplash]: https://unsplash.com/photos/FoKO4DpXamQ
34.651163
166
0.729262
eng_Latn
0.949863
6aded5cb43b41ea4493bc8ffa17d701eb734db88
2,592
md
Markdown
docs/generate_data.md
mschauer/BridgeSDEInference.jl
a5859fb56fb03c665f9f925dc8b3fd6003773ae3
[ "MIT" ]
null
null
null
docs/generate_data.md
mschauer/BridgeSDEInference.jl
a5859fb56fb03c665f9f925dc8b3fd6003773ae3
[ "MIT" ]
null
null
null
docs/generate_data.md
mschauer/BridgeSDEInference.jl
a5859fb56fb03c665f9f925dc8b3fd6003773ae3
[ "MIT" ]
null
null
null
[back to README](../README.md)

# Generation of data

There are two short scripts for generating data, available [here](../scripts/simulate_part_obs_save_to_csv.jl) and [here](../scripts/simulate_fpt_save_to_csv.jl). The former generates data for the setting of a partially observed process, the latter is for the setting of the first passage time observations. In both, the law of the target process is controlled (up to some differences in the values of the parameters) by

```julia
param = :simpleConjug
P = FitzhughDiffusion(param, 10.0, -8.0, 15.0, 0.0, 3.0)
```

The starting point needs to be set and transformed to the appropriate parametrisation:

```julia
x0 = ℝ{2}(-0.5, 0.6) # in regular parametrisation
x0 = regularToConjug(x0, P.ϵ, 0.0) # translate to conjugate parametrisation
```

## Partially observed diffusion

Define the time grid over which the underlying process needs to be simulated:

```julia
dt = 1/50000
T = 10.0
tt = 0.0:dt:T
```

And simulate the path

```julia
Random.seed!(4)
XX, _ = simulateSegment(0.0, x0, P, tt)
```

The remaining lines:

```julia
num_obs = 100
skip = div(length(tt), num_obs)
Time = collect(tt)[1:skip:end]
df = DataFrame(time=Time,
               x1=[x[1] for x in XX.yy[1:skip:end]],
               x2=[(i==1 ? x0[2] : NaN) for (i,t) in enumerate(Time)])
```

simply define how the process is observed: the distance between recorded observations as well as which coordinates and the nature of their perturbation (in this example they are not perturbed at all and only the first coordinate is observed). Finally the data can be saved

```julia
CSV.write(FILENAME_OUT, df)
```

## First passage time observations

In this case, `T` in the time grid defines the length of a single segment over which the path is simulated. The simulation needs to be broken down into pieces, because for large overall `T` the path might take up more memory than a computer can handle. `N` defines the number of segments that need to be simulated and pieced together.

The following two:

```julia
upLvl = 0.5
downLvl = -0.5
```

specify the up-crossing level and the down-crossing (reset) level. Then, in line 32:

```julia
recentlyUpSearch = true
```

it says that at the time that the process starts it is assumed that the down-crossing has already occurred. If

```julia
recentlyUpSearch = false
```

then the process would have first needed to reach level `downLvl` before the first time of reaching `upLvl` would be counted. The remaining lines simply go through with the simulation required for this observation setting and finally save the data in

```julia
CSV.write(FILENAME_OUT, df)
```
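The subsampling step in the partially-observed setting (keep every `skip`-th point of the fine grid, observe only the first coordinate) can be sketched outside Julia as well. The following Python sketch is illustrative only — it uses a coarse placeholder path rather than the FitzHugh-Nagumo diffusion simulated by the scripts:

```python
def subsample(tt, path, num_obs):
    """Keep every skip-th point so that roughly num_obs observations remain.

    Mirrors the Julia lines: skip = div(length(tt), num_obs);
    Time = collect(tt)[1:skip:end].
    """
    skip = len(tt) // num_obs            # integer division, like div() in Julia
    times = tt[::skip]
    obs = [x[0] for x in path[::skip]]   # only the first coordinate is observed
    return times, obs

# Tiny illustration with a coarse grid instead of dt = 1/50000.
tt = [i / 10 for i in range(101)]        # 101 grid points on [0, 10]
path = [(t, -t) for t in tt]             # placeholder path, not the FHN SDE
times, obs = subsample(tt, path, 10)
```

Because `skip` is an integer division, the number of retained points can be slightly larger than `num_obs`, exactly as in the Julia snippet.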
47.127273
421
0.744985
eng_Latn
0.997858
6adf2eea7e7ac92de5b4dbc4743a15eb12e9da59
1,601
md
Markdown
content/_index.md
richardcase/aws-modernization-gitops-with-weaveworks
78c412d395bde89fa6161651efd9972760917f41
[ "Apache-2.0" ]
null
null
null
content/_index.md
richardcase/aws-modernization-gitops-with-weaveworks
78c412d395bde89fa6161651efd9972760917f41
[ "Apache-2.0" ]
null
null
null
content/_index.md
richardcase/aws-modernization-gitops-with-weaveworks
78c412d395bde89fa6161651efd9972760917f41
[ "Apache-2.0" ]
null
null
null
+++
title = "GitOps on EKS with Weaveworks"
chapter = true
weight = 1
+++

<div style="text-align: center"> <h2>Introduction to GitOps on EKS with Weaveworks</h2></div>

15 years ago, Git changed the way software teams collaborate and develop software. For new declarative software systems such as Kubernetes, Git can play a key role in deploying, configuring, updating and managing infrastructure as code.
<br><br>
GitOps relies on Git as the single source of truth for declarative infrastructure and applications. With Git at the center of delivery pipelines, developers can make pull requests to accelerate and simplify application deployments and operations tasks to Kubernetes.

In these workshops Weaveworks will teach and demonstrate:

* The 4 principles of GitOps
* How to boost stability and reliability in Kubernetes environments
* How to incorporate a robust security model from the start
* Using GitOps for High Availability and Disaster Recovery on EKS
* Managing governance, risk and compliance (GRC) for Kubernetes on EKS with GitOps
* Accelerating Software Development with GitOps and EKS
* Managing Machine Learning and Artificial Intelligence models w/GitOps on EKS

## Who should take these workshops:

* Anyone who has interest in GitOps & EKS
* Application teams
* Architects & Developers
* Technical leads
* Operations Engineers
* Infrastructure Teams
* Technical Business Leaders

## Prerequisites

* Basic understanding of Kubernetes concepts and architecture
* Basic understanding of infrastructure and application monitoring
* Familiarity with basic unix commands
39.04878
266
0.797626
eng_Latn
0.993483
6adfee9e2e696d664cfe63d4e260e2b027745acb
1,687
md
Markdown
README.md
arsonite/synthia
efea6de455b71be6eae14a7f1ac37369c523a8a8
[ "MIT" ]
null
null
null
README.md
arsonite/synthia
efea6de455b71be6eae14a7f1ac37369c523a8a8
[ "MIT" ]
null
null
null
README.md
arsonite/synthia
efea6de455b71be6eae14a7f1ac37369c523a8a8
[ "MIT" ]
null
null
null
# synthia

A **cross-platform**, **light-weight**, ~~stylish~~ music- & podcast player.

### Features

- Thumbnail-display
- Thumbnail-dependent application color-scheme
- Creating playlists
- Saving playstate of podcast
- Drag- & Drop to add/remove albums/songs

***

### To-Do

- Create a backend version of the code
- Transfer the frontend of the web-version to CSS and JavaScript and convert the current code to only utilize backend-functionality
- Enable uploading and playing music through cloud services onto my server
- Use FileSystem-API of Node to manage music-files

***

### License

**MIT License**

*Copyright (c) 2019*

*Permission is hereby granted, free of charge, to any person obtaining a copy*
*of this software and associated documentation files (the "Software"), to deal*
*in the Software without restriction, including without limitation the rights*
*to use, copy, modify, merge, publish, distribute, sublicense, and/or sell*
*copies of the Software, and to permit persons to whom the Software is*
*furnished to do so, subject to the following conditions:*

*The above copyright notice and this permission notice shall be included in all*
*copies or substantial portions of the Software.*

*THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR*
*IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,*
*FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE*
*AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER*
*LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,*
*OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE*
*SOFTWARE.*
36.673913
131
0.767042
eng_Latn
0.688506
6ae046fc58b6a17caea7351b7ca09eff2811bd09
11,849
md
Markdown
Lync/LyncServer/lync-server-2013-planning-for-central-site-voice-resiliency.md
MikeyMJCO/OfficeDocs-SkypeForBusiness
1401ee484a2bc8e72d96649b0571bb59198f9dab
[ "CC-BY-4.0", "MIT" ]
1
2019-10-24T07:28:55.000Z
2019-10-24T07:28:55.000Z
Lync/LyncServer/lync-server-2013-planning-for-central-site-voice-resiliency.md
MikeyMJCO/OfficeDocs-SkypeForBusiness
1401ee484a2bc8e72d96649b0571bb59198f9dab
[ "CC-BY-4.0", "MIT" ]
null
null
null
Lync/LyncServer/lync-server-2013-planning-for-central-site-voice-resiliency.md
MikeyMJCO/OfficeDocs-SkypeForBusiness
1401ee484a2bc8e72d96649b0571bb59198f9dab
[ "CC-BY-4.0", "MIT" ]
1
2021-10-07T05:33:21.000Z
2021-10-07T05:33:21.000Z
---
title: 'Lync Server 2013: Planning for central site voice resiliency'
ms.reviewer:
ms.author: v-lanac
author: lanachin
TOCTitle: Planning for central site voice resiliency
ms:assetid: 52dd0c3e-cd3c-44cf-bef5-8c49ff5e4c7a
ms:mtpsurl: https://technet.microsoft.com/en-us/library/Gg398347(v=OCS.15)
ms:contentKeyID: 48184164
ms.date: 07/23/2014
manager: serdars
mtps_version: v=OCS.15
---

<div data-xmlns="http://www.w3.org/1999/xhtml">

<div class="topic" data-xmlns="http://www.w3.org/1999/xhtml" data-msxsl="urn:schemas-microsoft-com:xslt" data-cs="http://msdn.microsoft.com/en-us/">

<div data-asp="http://msdn2.microsoft.com/asp">

# Planning for central site voice resiliency in Lync Server 2013

</div>

<div id="mainSection">

<div id="mainBody">

<span> </span>

_**Topic Last Modified:** 2013-10-30_

Increasingly, enterprises have multiple sites spread across the globe. Maintaining emergency services, access to help desk, and the ability to conduct critical business tasks when a central site is out of service is essential for any Enterprise Voice resiliency solution. When a central site becomes unavailable, the following conditions must be met:

- Voice failover must be provided.

- Users who ordinarily register with the Front End pool at the central site must be able to register with an alternative Front End pool. This can be done by creating multiple DNS SRV records, each of which resolves to a Director pool or Front End pool in each of your central sites. You can adjust the priority and weights of the SRV records so that users who are served by that central site get the corresponding Director and Front End pool ahead of those in other SRV records.

- Calls to and from users located at other sites must be rerouted to the PSTN.

This topic describes the recommended solution for securing central site voice resiliency.

<div>

## Architecture and Topology

Planning for voice resiliency at a central site requires a basic understanding of the central role played by the Lync Server 2013 Registrar in enabling voice failover. The Lync Server Registrar is a server role that enables client registration and authentication and provides routing services. It resides along with other components on a Standard Edition server, Front End Server, Director, or Survivable Branch Appliance. A Registrar pool consists of Registrar Services running on the Front End pool and residing at the same site. The Front End pool must be load balanced. DNS load balancing is recommended, but hardware load balancing is acceptable. A Lync client discovers the Front End pool through the following discovery mechanism:

1. DNS SRV record

2. Autodiscovery Web Service (new in Lync Server 2013)

3. DHCP option 120

After the Lync client connects to the Front End pool, it is directed by the load balancer to one of the Front End Servers in the pool. That Front End Server, in turn, redirects the client to a preferred Registrar in the pool.

Each user enabled for Enterprise Voice is assigned to a particular Registrar pool, which becomes that user's primary Registrar pool. At a given site, hundreds or thousands of users typically share a single primary Registrar pool. To account for the consumption of central site resources by any branch site users that rely on the central site for presence, conferencing, or failover, we recommend that you consider each branch site user as though the user were a user registered with the central site. There are currently no limits on the number of branch site users, including users registered with a Survivable Branch Appliance.

To assure voice resiliency in the event of a central site failure, the primary Registrar pool must have a single designated backup Registrar pool located at another site. The backup can be configured by using Topology Builder resiliency settings.

Assuming a resilient WAN link between the two sites, users whose primary Registrar pool is no longer available are automatically directed to the backup Registrar pool. The following steps describe the client discovery and registration process:

1. A client discovers Lync Server through DNS SRV records. In Lync Server 2013, DNS SRV records can be configured to return more than one FQDN to the DNS SRV query. For example, if enterprise Contoso has three central sites (North America, Europe, and Asia-Pacific) and a Director pool at each central site, DNS SRV records can point to the Director pool FQDNs in each of the three locations. As long as the Director pool in one of the locations is available, the client can connect to the first hop Lync Server.

   <div>

   > [!NOTE]
   > Using a Director pool is optional. A Front End pool can be used instead.

   </div>

2. The Director pool informs the Lync client about the user's primary Registrar pool and backup Registrar pool.

3. The Lync client attempts to connect to the user's primary Registrar pool first. If the primary Registrar pool is available, the Registrar accepts the registration. If the primary Registrar pool is unavailable, the Lync client attempts to connect to the backup Registrar pool. If the backup Registrar pool is available and has determined that the user's primary Registrar pool is unavailable (by detecting a lack of heartbeat for a specified failover interval), the backup Registrar pool accepts the user's registration. After the backup Registrar detects that the primary Registrar is again available, the backup Registrar pool will redirect failover Lync clients to their primary pool.

The following figure shows the recommended topology for assuring central site resiliency. The two sites are connected by a resilient WAN link. If the central site becomes unavailable, users who are assigned to that pool are directed to the backup site for registration.

**Recommended topology for central site voice resiliency**

![Topology for central site voice resiliency](images/Gg398347.19ea3e74-8a5c-488c-a34e-fc180ab9a50a(OCS.15).jpg "Topology for central site voice resiliency")

</div>

<div>

## Requirements and Recommendations

The following requirements and recommendations for implementing central site voice resiliency are appropriate for most organizations:

- The sites in which the primary and backup Registrar pools reside should be connected by a resilient WAN link.

- Each central site must contain a Registrar pool consisting of one or more Registrars.

- Each Registrar pool must be load-balanced by using DNS load balancing, hardware load balancing, or both. For detailed information about planning your load balancing configuration, see [Load balancing requirements for Lync Server 2013](lync-server-2013-load-balancing-requirements.md).

- Each user must be assigned to a primary Registrar pool by using either the Lync Server Management Shell **set-CsUser** cmdlet or the Lync Server Control Panel.

- The primary Registrar pool must have a single backup Registrar pool located in a different central site.

- The primary Registrar pool must be configured to fail over to the backup Registrar pool. By default, the primary Registrar is set to fail over to the backup Registrar pool after an interval of 300 seconds. You can change this interval by using the Lync Server 2013 Topology Builder.

- Configure a failover route, as described in the "[Configuring a failover route in Lync Server 2013](lync-server-2013-configuring-a-failover-route.md)" topic in the Planning documentation. When configuring the route, specify a gateway that is located at a different site from the gateway specified in the primary route.

- If the central site contained your primary management server and the site is likely to be down for an extended period, you will need to reinstall your management tools at the backup site; otherwise, you won't be able to change any management settings.

</div>

<div>

## Dependencies

Lync Server depends on the following infrastructure and software components to assure voice resiliency:

<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<tbody>
<tr class="odd">
<td><p><strong>Component</strong></p></td>
<td><p><strong>Function</strong></p></td>
</tr>
<tr class="even">
<td><p>DNS</p></td>
<td><p>Resolving SRV records and A records for server-server and server-client connectivity</p></td>
</tr>
<tr class="odd">
<td><p>Exchange and Exchange Web Services (EWS)</p></td>
<td><p>Contact storage; calendar data</p></td>
</tr>
<tr class="even">
<td><p>Exchange Unified Messaging and Exchange Web Services</p></td>
<td><p>Call logs, voice mail list, voice mail</p></td>
</tr>
<tr class="odd">
<td><p>DHCP Option 120</p></td>
<td><p>If DNS SRV is unavailable, the client will attempt to use DHCP Option 120 to discover the Registrar. For this to work, either a DHCP server must be configured or Lync Server 2013 DHCP must be enabled. For details, see Hardware and Software Requirements for Branch-Site Resiliency in the <a href="lync-server-2013-branch-site-resiliency-requirements.md">Branch-site resiliency requirements for Lync Server 2013</a> section.</p></td>
</tr>
</tbody>
</table>

</div>

<div>

## Survivable Voice Features

If the preceding requirements and recommendations have been implemented, the following voice features will be provided by the backup Registrar pool:

- Outbound PSTN calls

- Inbound PSTN calls, if the telephony service provider supports the ability to fail over to a backup site

- Enterprise calls between users at both the same site and between two different sites

- Basic call handling, including call hold, retrieval, and transfer

- Two-party instant messaging and sharing audio and video between users at the same site

- Call forwarding, simultaneous ringing of endpoints, call delegation, and team call services, but only if both parties to call delegation, or all team members, are configured at the same site.

- Existing phones and clients continue to work.

- Call detail recording (CDR)

- Authentication and authorization

Depending on how they are configured, the following voice features may or may not work when a primary central site is out of service:

- Voice mail deposit and retrieval

  If you want to make Exchange UM available when the primary central site is out of service, you must do one of the following:

  - Change DNS SRV records so that the Exchange UM servers at the central site point to backup Exchange UM servers at another site.

  - Configure each user's Exchange UM dial plan to include Exchange UM servers at both the central site and the backup site, but designate the backup Exchange UM servers as disabled. If the primary site becomes unavailable, the Exchange administrator has to mark the Exchange UM servers at the backup site as enabled.

  If neither of the preceding solutions is possible, then Exchange UM will not be available in the event the central site becomes unavailable.

- Conferencing of all types

  A user who has failed over to a backup site can join a conference that is created or hosted by an organizer whose pool is available, but cannot create or host a conference on his or her own primary pool, which is no longer available. Similarly, other users cannot join conferences that are hosted on the affected user's primary pool.

The following voice features do not work when a primary central site is out of service:

- Conference Auto-Attendant

- Presence and DND-based routing

- Updating call forwarding settings

- Response Group service and Call Park

- Provisioning new phones and clients

- Address Book Web Search

</div>

<div>

## See Also

[Planning for branch-site voice resiliency in Lync Server 2013](lync-server-2013-planning-for-branch-site-voice-resiliency.md)

</div>

</div>

<span> </span>

</div>

</div>

</div>
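The SRV-based discovery described above — clients prefer records by priority, with weight used among records of equal priority — can be sketched as follows. This is an illustrative sketch only: the record values are invented, and where RFC 2782 prescribes weighted random selection among equal-priority records, the sketch deterministically takes the highest weight for clarity.

```python
def pick_srv_target(records):
    """Pick a connection target from a list of DNS SRV record dicts.

    Lower priority wins; among records sharing the lowest priority,
    we take the largest weight (a deterministic simplification of the
    weighted random selection the SRV RFC actually specifies).
    """
    if not records:
        raise ValueError("no SRV records")
    best_priority = min(r["priority"] for r in records)
    candidates = [r for r in records if r["priority"] == best_priority]
    return max(candidates, key=lambda r: r["weight"])["target"]

# Hypothetical records: two equal-priority Director pools plus a backup.
records = [
    {"priority": 10, "weight": 60, "target": "dirpool-na.contoso.com"},
    {"priority": 10, "weight": 40, "target": "dirpool-eu.contoso.com"},
    {"priority": 20, "weight": 100, "target": "dirpool-ap.contoso.com"},
]
target = pick_srv_target(records)
```

If the two priority-10 pools become unreachable, a real client would fall through to the priority-20 record, which is how users served by a failed central site end up registering with a backup site's pool.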
52.662222
737
0.778209
eng_Latn
0.997686
6ae0f314df60dae9b37dfc781965eb142cecb89d
1,586
md
Markdown
o/opragebg/index.md
rugk/gesetze
ce3b4435785e5173e1273b35b00c374f64aa85b2
[ "Unlicense" ]
1
2020-06-20T11:34:20.000Z
2020-06-20T11:34:20.000Z
o/opragebg/index.md
nagy/gesetze
77abca2ceea3b7b89ea70afb13b5dd55415eb124
[ "Unlicense" ]
null
null
null
o/opragebg/index.md
nagy/gesetze
77abca2ceea3b7b89ea70afb13b5dd55415eb124
[ "Unlicense" ]
null
null
null
---
Title: Gesetz über die Gebühren des Oberprüfungsamtes für die höheren technischen Verwaltungsbeamten
jurabk: OPrAGebG
layout: default
origslug: opragebg
slug: opragebg
---

# Act on the fees of the Oberprüfungsamt for senior technical civil servants (Gesetz über die Gebühren des Oberprüfungsamtes für die höheren technischen Verwaltungsbeamten, OPrAGebG)

Date of issue
: 1970-06-23

Published in
: BGBl I: 1970, 805, 818

Status: Last amended by Art. 45 V v. 19.6.2020 I 1328

## § 1

Examination fees may be charged for the administration of the Great State Examination (Große Staatsprüfung) for the senior technical civil service of the Federation by the Oberprüfungsamt für die höheren technischen Verwaltungsbeamten in Frankfurt a.M. The fee for an individual examination may not exceed 200 Deutsche Mark.

## § 2

The Federal Ministry of Transport and Digital Infrastructure is authorized, in agreement with the Federal Ministry of the Interior, Building and Community, to determine the amount of the fees by statutory order, in consultation with the board of trustees of the Oberprüfungsamt. The statutory order may regulate the deferral, remission and reimbursement of the fees in deviation from the provisions of the Administrative Costs Act (Verwaltungskostengesetz) of 23 June 1970 (Federal Law Gazette I p. 821).

## § 3

The following are repealed, insofar as they have become federal law:

1. the Act on the qualification for the senior structural-engineering administrative service of 16 July 1936 (Reich Law Gazette I p. 563),

2. the implementing provision for the Act on the qualification for the senior structural-engineering administrative service of 16 July 1936 (Reich Law Gazette I p. 565).
28.321429
106
0.798865
deu_Latn
0.99814
6ae2ee245d92f0b916831a31aa3beb9f94862e95
5,666
md
Markdown
README.md
stripe-samples/charging-for-multiple-plan-subscriptions
2dcb2211d6f41c27fbbed475c48cd2cfc46707b9
[ "MIT" ]
60
2019-11-21T03:45:47.000Z
2021-06-23T16:59:53.000Z
README.md
stripe-archive/charging-for-multiple-plan-subscriptions
2dcb2211d6f41c27fbbed475c48cd2cfc46707b9
[ "MIT" ]
3
2019-11-19T13:24:08.000Z
2021-06-09T16:34:55.000Z
README.md
stripe-archive/charging-for-multiple-plan-subscriptions
2dcb2211d6f41c27fbbed475c48cd2cfc46707b9
[ "MIT" ]
25
2019-10-23T23:57:46.000Z
2021-03-07T12:43:42.000Z
> <img src="https://stripe.dev/images/badges/archived.png" width="250"> > > This project is deprecated and is no longer being actively maintained. > > Please see the [Subscription use cases](https://github.com/stripe-samples/subscription-use-cases) sample. # Stripe Billing sample subscribing a customer to multiple products This sample shows how to create a customer and subscribe them to multiple products with [Stripe Billing](https://stripe.com/billing). For step by step directions showing how to implement this, use the [Stripe Billing quickstart](https://stripe.com/docs/billing/quickstart) (you may also find [Working with Multiple Products per Subscription](https://stripe.com/docs/billing/subscriptions/multiplan) helpful). ![Purchase demo](./petting-zoo-demo.gif) # Demo Web: See the sample [live](https://bio87.sse.codesandbox.io/) in test mode or [fork](https://codesandbox.io/s/stripe-billing-multiplan-subscription-quickstart-zph6v) the Node implementation on CodeSandbox. iOS and Android: Clone this repo and run the sample server and app locally (see below). Features: - Collect card details 💳 - Subscribe a customer to multiple products in Stripe Billing 🦁🐯🐻 - Apply a discount when a customer purchases more than one product 💰 ## How to run locally This sample includes [5 server implementations](server/README.md) in our most popular languages. You will need a Stripe account with its own set of [API keys](https://stripe.com/docs/development/quickstart#api-keys), as well as a .env file updated with your account's keys. You will also need to [add your phone number to your Stripe account](https://dashboard.stripe.com/phone-verification) in order to use the provided scripts (required in order to pass a credit card number directly to the API through curl). Follow the steps below to run locally. **1. Clone and configure the sample** The Stripe CLI is the fastest way to clone and configure a sample to run locally. 
**Using the Stripe CLI**

If you haven't already installed the CLI, follow the [installation steps](https://github.com/stripe/stripe-cli#installation) in the project README. The CLI is useful for cloning samples and locally testing webhooks and Stripe integrations.

In your terminal shell, run the Stripe CLI command to clone the sample:

```
stripe samples create multiple-plan-subscriptions
```

The CLI will walk you through picking your integration type, server and client languages, and configuring your .env config file with your Stripe API keys.

**Installing and cloning manually**

If you do not want to use the Stripe CLI, you can manually clone and configure the sample yourself:

```
git clone https://github.com/stripe-samples/charging-for-multiple-plan-subscriptions
```

Copy the .env.example file into a file named .env in the folder of the server you want to use. For example:

```
cp .env.example server/node/.env
```

Go to the Stripe [developer dashboard](https://stripe.com/docs/development/quickstart#api-keys) to find your API keys.

```
STRIPE_PUBLISHABLE_KEY=<replace-with-your-publishable-key>
STRIPE_SECRET_KEY=<replace-with-your-secret-key>
```

`CLIENT_DIR` tells the server where the client files are located and does not need to be modified unless you move the server files.

**2. Follow the server instructions on how to run:**

If you used the CLI to install the repo, follow the instructions in server/README.md.

```
cd server # there's a README in this folder with instructions
npm install
npm start
```

If you manually cloned the repo, pick the server language you want and follow the instructions in the server folder README on how to run. For example, if you want to run the Node server:

```
cd server/node # there's a README in this folder with instructions
npm install
npm start
```

**3. Generating Test Products and Prices:**

You'll need to load the products, prices and coupon this sample uses into your Stripe account. 
These objects are defined in products-and-prices.json. Use the Stripe CLI [fixtures](https://stripe.com/docs/cli/fixtures) command to create them in the test mode within your Stripe account: ``` stripe fixtures products-and-prices.json ``` To delete the data you can either delete the objects individually using the [CLI](https://stripe.com/docs/cli/delete) or delete your test data from the developer's page within your [Dashboard](https://dashboard.stripe.com/test/developers) ## FAQ Q: Why did you pick these frameworks? A: We chose the most minimal framework to convey the key Stripe calls and concepts you need to understand. These demos are meant as an educational tool that helps you roadmap how to integrate Stripe within your own system independent of the framework. ## Get support If you found a bug or want to suggest a new [feature/use case/sample], please [file an issue](../../issues). If you have questions, comments, or need help with code, we're here to help: - on [IRC via freenode](https://webchat.freenode.net/?channel=#stripe) - on Twitter at [@StripeDev](https://twitter.com/StripeDev) - on Stack Overflow at the [stripe-payments](https://stackoverflow.com/tags/stripe-payments/info) tag - by [email](mailto:[email protected]) Sign up to [stay updated with developer news](https://go.stripe.global/dev-digest). ## Author(s) - [@abhishek-stripe](https://github.com/abhishek-stripe) - [@camilo-stripe](https://github.com/camilo-stripe) - [@ctrudeau-stripe](https://twitter.com/trudeaucj) - [@dylanw-stripe](https://github.com/dylanw-stripe) - [@markt-stripe](https://github.com/markt-stripe) - [@seanfitz-stripe](https://github.com/seanfitz-stripe) - [@dawn-stripe](https://github.com/dawn-stripe)
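The pricing behavior this sample demonstrates — several subscription products plus a discount once more than one product is chosen — can be sketched as plain arithmetic. This is an illustrative toy only: the prices and the 25% figure are made-up stand-ins (the real sample applies a Coupon object from its fixtures, not client-side math).

```python
def subscription_total(unit_prices_cents, discount_pct=25):
    """Sum the chosen products' prices (in cents) and apply the
    discount only when more than one product is selected."""
    subtotal = sum(unit_prices_cents)
    if len(unit_prices_cents) > 1:
        subtotal -= subtotal * discount_pct // 100
    return subtotal

print(subscription_total([500]))       # 500  (one product: no discount)
print(subscription_total([500, 700]))  # 900  (25% off the 1200 subtotal)
```

In Stripe Billing itself the equivalent effect comes from attaching a coupon to the subscription, so the server, not the client, remains the source of truth for amounts.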
42.924242
286
0.766855
eng_Latn
0.987841
6ae4af624294e0c39314e5005cf40ce5906c362e
1,816
md
Markdown
manual/src/introduction.md
AbstractMachinesLab/lam
f2167ccbe136e2a8622f238e394f687eb0afc943
[ "Apache-2.0" ]
216
2020-11-22T17:22:01.000Z
2022-03-08T12:20:40.000Z
manual/src/introduction.md
AbstractMachinesLab/lam
f2167ccbe136e2a8622f238e394f687eb0afc943
[ "Apache-2.0" ]
8
2020-11-27T11:25:33.000Z
2021-03-07T19:35:17.000Z
manual/src/introduction.md
AbstractMachinesLab/lam
f2167ccbe136e2a8622f238e394f687eb0afc943
[ "Apache-2.0" ]
10
2020-11-30T01:22:22.000Z
2021-11-20T03:13:48.000Z
<div align="center"> <a href="https://lam.run/" target="_blank"> <img width="80" src="https://raw.githubusercontent.com/AbstractMachinesLab/lam/main/docs/lam.png" alt="LAM logo"> </a> <p>&nbsp;</p> </div> **LAM** is a lightweight, universal virtual machine for writing scalable and reliable applications that run natively and on [WebAssembly](https://webassembly.org). It is inspired by [Erlang](https://erlang.org) and [Lua](https://www.lua.org/start.html), and it is compatible with the [Erlang VM](https://erlang.org). LAM lets you reuse the same programming paradigm, known for being **productive**, across your entire application stack. Come join us on [Discord](https://discord.gg/v5aAqKq6Rs)! (Ps: we share a server with [Caramel](https://caramel.run)) ## Features * Runs Natively and on WebAssembly -- pick and choose your runtime! * Easy to Target -- a small and specified bytecode with a text and binary format * Erlang VM compatibility -- run your existing Erlang, Elixir, Caramel, and Gleam code * Seamless multi-core -- built to scale from one to thousands of cores for free * Extreme reliability -- use Erlang's OTP supervision patterns ## Status Still under heavy development! There's plenty of work to be done for it to be fully usable, but we keep a few tracking issues here: * [LAM Specification Status](https://github.com/AbstractMachinesLab/lam/issues/5) * [Targetability Status](https://github.com/AbstractMachinesLab/lam/issues/7) * [WebAssembly and Native Runtime Support](https://github.com/AbstractMachinesLab/lam/issues/8) The Erlang and Elixir ecosystem compatibility is tracked here: * [Erlang VM Compatibility Status](https://github.com/AbstractMachinesLab/lam/issues/4) * [Erlang/Elixir Support Across Runtimes](https://github.com/AbstractMachinesLab/lam/issues/6)
41.272727
117
0.755507
eng_Latn
0.829585
6ae53319e1f6983182b3b32480b338f63ee08512
5,341
md
Markdown
_posts/2019-05-17-Download-sports-of-santa-cruz-county.md
Luanna-Lynde/28
1649d0fcde5c5a34b3079f46e73d5983a1bfce8c
[ "MIT" ]
null
null
null
_posts/2019-05-17-Download-sports-of-santa-cruz-county.md
Luanna-Lynde/28
1649d0fcde5c5a34b3079f46e73d5983a1bfce8c
[ "MIT" ]
null
null
null
_posts/2019-05-17-Download-sports-of-santa-cruz-county.md
Luanna-Lynde/28
1649d0fcde5c5a34b3079f46e73d5983a1bfce8c
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Sports of santa cruz county book A few ordinary braves attended the chiefs, as though he were costumed for a role in a play filled with a Dickensian breakfast?" seriously hurt from this dreadful accident, and gave a kind of laugh. "It seems to be. "You know how it is, but The inside of the Pontiac smelled pleasantly of lemons. " During her short walk, on such wise that they moved the assembly to delight? smokes, interactive personal communications are pure stand like the Big Grove. " Quoth the prefect, but think of the honor of it," Hanlon told them, i, nor is there a trace of child! Harpoon, a truly intelligent. She was bald. "You sounded as though you were in a lot of distress. She gratefully accepted assistance with the housecleaning, "Diamond," diamond being in his estimation the one thing more precious than gold, places her forepaws on the dashboard, and "We'll keep you here. and the space occupied by the spectators is the same as among us? In spite of the August heat, he seizes upon this uncharacteristic suggestion of a potential for mercy. The mourners streamed across the grassy hills and among the headstones for the longest time, whose portraits hung side by side. Very common? The preacher wheeled round and fixed him with an intimidating glare that failed to intimidate. " "We couldn't hide the wrestle we'd had with him, buried alive to make the dead earth rich again. say it. " had provided the police with sports of santa cruz county of Sports of santa cruz county criminal activities that got stranger and more disturbing business. trouble. gyrating. I hope he did the same as Arne competition? 5 ort, Tuhfeh. 
Her maiden name sports of santa cruz county Hickory, like a metal hooked up to utilities, the more he came to understand how tenaciously and ferociously they would defend their freedom to express that dedication, then do as thou wilt, and rapid torrents of melted snow empty themselves problem. They saw me the moment I left the dust cloud! The spirit. In that case, 'Verily. " foot of the hill he came into a lane. He cried out, and I began to oatmeal-colored upholstery, he declared She did not pause in her note writing when she spoke to him, the killer morphs toward more than a of the coal seams do not contain any other fossils than those eternity there existed another eternity, however, you and Aunt properly scarce antiquities, where she'd left dinner unfinished, it appears to me to be improbable that the From the door to the sink, of course, you'd still be nowhere, was squatting on the nine metres long and one deep. See, who all of course would see the vessel and by everything from mere ghosts to hobgoblins, but take these saddle-bags and divide [that which is in] them and take the fourth part [thereof]. For the benefit of the adults, realizing he must have slept for hours, and farther on in the year to three o'clock in reached him and said in a lower voice. "Yes. Things are tightening up. I sports of santa cruz county make it Strangely, and yet again the SUV accelerates. That the vegetation here on the quiet pool, alone in a long coach car. " other wonderfully amusing bits from a studio jungle full of dinosaurs to Fay Wray's uncovered bosom. two-hand grip. Lover's quarrel, but think of the honor of it," Hanlon told them. At last they pulled themselves "Not that trains are any better. waiting for birth, mild as ever, sports of santa cruz county woman cried out again, over one corner of the living room. The dogs are generally harnessed one pair before The hard whack of chopper blades abruptly softens, with a lush crop of there I did not see one. 
Well, and sometimes she was pierced by a sense of loss so poignant that they might have been members of her own family. No distant lowing or bleating or call of voice. Now maniac cops. Go ahead. A traffic accident. She assumed that by some quantum magic, and this man was alone and knew not the perils that beset his way, accompanied by a wheezy whistle of decelerating sleep by the faint rhythmic whisper of hula hips and tiny swirling skirts. Except -of course-for his sports of santa cruz county. " him this time, to the powerful male magnetism that was as much a part of him as his thick blond hair, and a python, was a world-class obsessive, Jerry Pernak. The detail due for a break seemed to have forgotten about it. More than once, running from behind the counter, _for_ "moccassin" _read_ "moccasin, she came forward and said to him. anything or anyone, to his invincible cabinets, i. He sports of santa cruz county hairy No one in the hall. She. " The rag isn't a rag, wearing Army fatigue dress under sports of santa cruz county combat blouse,her once long and wavy head of red hair cut short beneath her cap sports of santa cruz county shorn to regulation length at the back, which perhaps is only perceptible by the winter darkness was changed a second time sports of santa cruz county YOHI HISHA. " And she said, a dazed expression on his face. The animal in such a He suspects this is a killing ground. Good intentions alone can be the cobblestones from which the road to Hell is built; however, ma'am, because the hour, lingering in the most unusual way. The Chironians have left it to us by default, but I guess that's all h is -talk.
593.444444
5,240
0.786557
eng_Latn
0.99995
6ae587fe3a9957b5dacace93ba75af1da144fde7
1,116
md
Markdown
src/markdown/competencies/leadership/accountability/1pt.md
CodeFellows-Curve/curve-front-end
ee70e4f1990f7cc6860eb45c65e133a7bbf10e16
[ "MIT" ]
null
null
null
src/markdown/competencies/leadership/accountability/1pt.md
CodeFellows-Curve/curve-front-end
ee70e4f1990f7cc6860eb45c65e133a7bbf10e16
[ "MIT" ]
2
2019-05-23T23:50:12.000Z
2019-05-24T01:11:44.000Z
src/markdown/competencies/leadership/accountability/1pt.md
CodeFellows-Curve/curve-front-end
ee70e4f1990f7cc6860eb45c65e133a7bbf10e16
[ "MIT" ]
1
2019-06-05T12:24:42.000Z
2019-06-05T12:24:42.000Z
--- category: 'Leadership' proficiency: 'Accountability' summary: 'Behaves with responsibility for one’s role with quality and timeliness of deliverables while accepting responsibility when work does not meet expectations. Works toward a high standard of performance and provides helpful context/information on demand.' milestone: 1 --- ### Milestone 1 summary. Collaboratively administrate turnkey channels whereas virtual e-tailers. #### Example Behaviors + Signal 1. Leverage agile frameworks to provide a robust synopsis for high level overviews. + Signal 2. Iterative approaches to corporate strategy foster collaborative thinking to further the overall value proposition. + Signal 3. Organically grow the holistic world view of disruptive innovation via workplace diversity and empowerment. #### Example Tasks + Example 1. Bring to the table win-win survival strategies to ensure proactive domination. + Example 2. Capitalize on low hanging fruit to identify a ballpark value added activity to beta test. + Example 3. Dramatically engage top-line web services vis-a-vis cutting-edge deliverables.
58.736842
262
0.801075
eng_Latn
0.991892
6ae65eb16bb7132c5b51e040839ae39b4eb9fa6b
761
md
Markdown
docs/candidatos-cores/maria-teresa-borquez-aguila.md
vags97/apruebo-dignidad
b9c1ac2eac47fa34845c40b9b54bb994b4060a53
[ "MIT" ]
null
null
null
docs/candidatos-cores/maria-teresa-borquez-aguila.md
vags97/apruebo-dignidad
b9c1ac2eac47fa34845c40b9b54bb994b4060a53
[ "MIT" ]
2
2021-11-08T23:20:31.000Z
2021-11-10T17:57:04.000Z
docs/candidatos-cores/maria-teresa-borquez-aguila.md
vags97/apruebo-dignidad
b9c1ac2eac47fa34845c40b9b54bb994b4060a53
[ "MIT" ]
null
null
null
--- core: true title: Maria Teresa Borquez Aguila description: Candidato/a a Consejero/a Regional por la Circunscripción de Magallanes image: /media/ad-profile.jpg tags: - CORE - Consejero Regional - Apruebo Dignidad - Magallanes - AO192 - Pacto Por un Chile Digno - Subpacto Partido Comunista E Independientes - Partido Comunista De Chile - Laguna Blanca - Punta Arenas - Rio Verde - San Gregorio circunscripcionProvincial: Magallanes papeleta: AO192 partido: Pacto Por un Chile Digno - Subpacto Partido Comunista E Independientes - Partido Comunista De Chile paginaWeb: facebook: twitter: instagram: youtube: tiktok: --- Hola, mi nombre es Maria Teresa Borquez Aguila y soy candidato/a a Consejero/a Regional por la circunscripcion de Magallanes. Vota AO192.
26.241379
125
0.795007
spa_Latn
0.783089
6ae677877bce80136afaf4d2af3db8a8d5a99897
3,200
md
Markdown
docs/outlook/mapi/hrcreateofflineobj.md
hubalazs/office-developer-client-docs
86d7b65f5c81941b00469fd02f3c957a14f2757b
[ "CC-BY-4.0", "MIT" ]
3
2020-10-26T02:38:53.000Z
2022-02-08T12:13:34.000Z
docs/outlook/mapi/hrcreateofflineobj.md
hubalazs/office-developer-client-docs
86d7b65f5c81941b00469fd02f3c957a14f2757b
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/outlook/mapi/hrcreateofflineobj.md
hubalazs/office-developer-client-docs
86d7b65f5c81941b00469fd02f3c957a14f2757b
[ "CC-BY-4.0", "MIT" ]
1
2020-12-30T07:57:56.000Z
2020-12-30T07:57:56.000Z
---
title: "HrCreateOfflineObj"
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: reference
ms.prod: office-online-server
localization_priority: Normal
ms.assetid: 04d57c1d-ce91-42ce-9f0f-00563092f6f4
description: "Last modified: March 09, 2015"
---

# HrCreateOfflineObj

**Applies to**: Outlook 2013 | Outlook 2016

Creates a MAPI offline object that is used by the provider and store to notify MAPI when the object goes online and offline.

|||
|:-----|:-----|
|Exported by:  <br/> |Msmapi32.dll  <br/> |
|Implemented by:  <br/> |Outlook  <br/> |
|Called by:  <br/> |Client  <br/> |
   
```cpp
STDAPI HrCreateOfflineObj(
  ULONG ulFlags,
  MAPIOFFLINE_CREATEINFO* pCreateInfo,
  IMAPIOfflineMgr** ppOffline
);
```

## Parameters

_ulFlags_
  
> [in] It must be 0.

_pCreateInfo_
  
> [in] A pointer to a **MAPIOFFLINE_CREATEINFO** structure that contains the information needed to create the offline object.

_ppOffline_
  
> [out] A pointer to the **IMAPIOfflineMgr** interface.

## Return value

None. See also: **HrOpenOfflineObj**.

## Example

```cpp
// create/get global offline object to use as parent. 
ZeroMemory(&OfflineCreateInfo, sizeof(OfflineCreateInfo)); 
OfflineCreateInfo.ulSize = sizeof(OfflineCreateInfo); 
OfflineCreateInfo.ulCreateFlags = 0; 
OfflineCreateInfo.pwszProfileName = pszProfileName; 
OfflineCreateInfo.ulCapabilities = ulCapabilities; 
OfflineCreateInfo.pGUID = &GUID_GlobalState; 
OfflineCreateInfo.pInstance = NULL; 
OfflineCreateInfo.pParent = NULL; 
OfflineCreateInfo.pMAPISupport = NULL; 
OfflineCreateInfo.pAggregateInfo = NULL; 
OfflineCreateInfo.pConnectInfo = NULL; 

// Create an offline object for the provider with global as parent. 
ZeroMemory(&OfflineCreateInfo, sizeof(OfflineCreateInfo)); OfflineCreateInfo.ulSize = sizeof(OfflineCreateInfo); OfflineCreateInfo.ulCreateFlags = 0; OfflineCreateInfo.pwszProfileName = pszProfileName; OfflineCreateInfo.ulCapabilities = ulCapabilities; OfflineCreateInfo.pGUID = pGuid; OfflineCreateInfo.pInstance = pInstance; OfflineCreateInfo.pParent = pGlobalOfflineMgr; OfflineCreateInfo.pMAPISupport = NULL; OfflineCreateInfo.pAggregateInfo = NULL; OfflineCreateInfo.pConnectInfo = NULL; // create store offline object which aggregates with the store object and has provider offline object as parent. ZeroMemory(&OfflineCreateInfo, sizeof(OfflineCreateInfo)); OfflineCreateInfo.ulSize = sizeof(OfflineCreateInfo); OfflineCreateInfo.ulCreateFlags = 0; OfflineCreateInfo.pwszProfileName = pszProfileName; OfflineCreateInfo.ulCapabilities = ulCapabilities; OfflineCreateInfo.pGUID = NULL; OfflineCreateInfo.pInstance = NULL; OfflineCreateInfo.pParent = m_pProviderOfflineMgr; OfflineCreateInfo.pMAPISupport = pMAPISup; OfflineCreateInfo.pAggregateInfo = &AggregateInfo; OfflineCreateInfo.pConnectInfo = NULL; ZeroMemory(&AggregateInfo, sizeof(AggregateInfo)); AggregateInfo.ulSize = sizeof(AggregateInfo); AggregateInfo.pOuterObj = (IMsgStore *)this; AggregateInfo.pRefTrackRoot = NULL; ``` ## See also - [MAPIOFFLINE_AGGREGATEINFO](mapioffline_aggregateinfo.md) - [MAPIOFFLINE_CREATEINFO](mapioffline_createinfo.md)
30.769231
135
0.772813
yue_Hant
0.89285
6ae694941b82500dba1e91a25091b3c12587500f
821
md
Markdown
docs/database.md
docsuleman/oldp
8dcaa8e6e435794c872346b5014945ace885adb4
[ "MIT" ]
2
2020-05-02T20:39:39.000Z
2020-05-12T07:00:59.000Z
docs/database.md
Justice-PLP-DHV/oldp
eadf235bb0925453d9a5b81963a0ce53afeb17fd
[ "MIT" ]
null
null
null
docs/database.md
Justice-PLP-DHV/oldp
eadf235bb0925453d9a5b81963a0ce53afeb17fd
[ "MIT" ]
null
null
null
# Database

As database backend you can use all Django-supported db adapters. However, the code is only tested with MySQL and SQLite.

## Schema

![DB Schema](_static/db_schema.png)

[(Show full-size image)](_static/db_schema.png)

## Set encoding

Run the following commands to make MySQL support proper utf-8. (Note: MySQL's `utf8` charset stores at most 3 bytes per character; for full UTF-8 coverage, including 4-byte characters such as emoji, use `utf8mb4` instead.)

```
# Check before
SHOW FULL COLUMNS FROM table_name;

ALTER TABLE logtest CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE logtest DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE logtest CHANGE title title VARCHAR(100) CHARACTER SET utf8 COLLATE utf8_general_ci;

ALTER TABLE tablename MODIFY COLUMN col VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL;

ALTER TABLE courts_court MODIFY COLUMN description CHARACTER SET utf8 COLLATE utf8_general_ci;
```
26.483871
121
0.799026
yue_Hant
0.676826
6ae6d0e7b847e17da7fa3849ff0a867be3f0c6c0
188
md
Markdown
org/docs/measurements/hipscircumference/nl.md
woutervdub/markdown
402011bab2f2b5f1e2072a117b513750726e51f9
[ "MIT" ]
null
null
null
org/docs/measurements/hipscircumference/nl.md
woutervdub/markdown
402011bab2f2b5f1e2072a117b513750726e51f9
[ "MIT" ]
null
null
null
org/docs/measurements/hipscircumference/nl.md
woutervdub/markdown
402011bab2f2b5f1e2072a117b513750726e51f9
[ "MIT" ]
null
null
null
--- title: Heupomtrek --- De **heupomtrek** wordt bovenaan je heupbeenderen gemeten. Om je **heupomtrek** te meten wikkel je de lintmeter rond je heupen, ter hoogte van je heupbeenderen.
26.857143
101
0.75
nld_Latn
0.99123
6ae6e9508700fc279df1990dd68d6d3888ed29e5
1,352
md
Markdown
README.md
aambrioso1/HCCEngineeringSociety
90d69859df0a845355c4bc50671c14cd61c6470b
[ "MIT" ]
null
null
null
README.md
aambrioso1/HCCEngineeringSociety
90d69859df0a845355c4bc50671c14cd61c6470b
[ "MIT" ]
6
2020-10-20T20:24:29.000Z
2021-04-14T18:16:17.000Z
README.md
aambrioso1/HCCEngineeringSociety
90d69859df0a845355c4bc50671c14cd61c6470b
[ "MIT" ]
5
2020-10-21T02:16:03.000Z
2021-01-12T21:34:26.000Z
# HCC Engineering Society Website

The Engineering Society is a club at the Brandon campus of Hillsborough Community College (www.hccfl.edu/campus-life/brandon-campus) in Tampa, Florida.

You can find a repository for the website here: https://github.com/aambrioso1/HCCEngineeringSociety

The club sponsors a programming workshop that meets on Discord (https://discord.com/invite/RjquZWX). In Discord, look for the programming channel. We meet from 9:30 am to 10:30 am on most Fridays. But feel free to chat with us on our channel any time.

Currently the website is in a developmental stage. It is running on a virtual Linux machine hosted by Digital Ocean. The site uses the Flask web application framework (https://flask.palletsprojects.co). It uses WSGI (https://wsgi.readthedocs.io/en/latest/index.html) to communicate with an NGINX (www.fullstackpython.com/nginx.html) webserver.

The plan is to gradually build up the website while learning Python, git, HTML, CSS, linux commands, Flask, and Jinja. We coordinate our efforts with a workflow using git and GitHub.

Right now we need help with the following:

* More design ideas
* Help choosing and installing a Bootstrap template
* Help keeping the site up-to-date

## Thank you to HCC's Brandon Campus Student Government Association for sponsoring our club!!!

(Updated: 6/7/2021)
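The WSGI layer mentioned above — the interface that sits between a Flask application and a server such as NGINX's upstream — can be sketched with Python's standard library alone. This toy app is illustrative (the `app` function and its greeting are invented for the example, not the club's actual code); a Flask app exposes exactly this same callable interface to WSGI servers:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """A minimal WSGI application: a callable that receives the request
    environ dict and a start_response callback, and returns an iterable
    of bytes."""
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    path = environ.get("PATH_INFO", "/")
    return [("Hello from " + path).encode("utf-8")]

# Drive the app the way a WSGI server (gunicorn, uWSGI, wsgiref) would:
environ = {}
setup_testing_defaults(environ)  # fills in PATH_INFO='/', REQUEST_METHOD='GET', etc.

captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(app(environ, start_response))
print(captured["status"])    # 200 OK
print(body.decode("utf-8"))  # Hello from /
```

In the deployed setup, a WSGI server would host this callable and NGINX would proxy requests to it.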
64.380952
345
0.780325
eng_Latn
0.984747
6ae711ff72d0518383dd94af61227a1472ec7cc9
1,534
md
Markdown
desktop-src/SysMon/systemmonitor-copy.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
4
2021-07-26T16:18:49.000Z
2022-02-19T02:00:21.000Z
desktop-src/SysMon/systemmonitor-copy.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
2
2020-04-09T17:00:51.000Z
2020-04-09T18:30:01.000Z
desktop-src/SysMon/systemmonitor-copy.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
2
2020-07-19T02:58:48.000Z
2021-03-06T21:09:47.000Z
--- title: SystemMonitor Copy method description: Copies the control's property settings, list of counters, and counter data to the Clipboard as an HTML object. ms.assetid: 0e045372-71ef-4142-9863-48e6a9331782 keywords: - Copy method SysMon - Copy method SysMon , SystemMonitor interface - SystemMonitor interface SysMon , Copy method topic_type: - apiref api_name: - SystemMonitor.Copy api_location: - Sysmon.ocx api_type: - COM ms.topic: reference ms.date: 05/31/2018 --- # SystemMonitor::Copy method Copies the control's property settings, list of counters, and counter data to the Clipboard as an HTML object. ## Syntax ```VB Sub Copy() ``` ## Parameters This method has no parameters. ## Return value This method does not return a value. ## Remarks Only one copy can exist in the Clipboard. ## Requirements | | | |-------------------------------------|---------------------------------------------------------------------------------------| | Minimum supported client<br/> | Windows 2000 Professional \[desktop apps only\]<br/> | | Minimum supported server<br/> | Windows 2000 Server \[desktop apps only\]<br/> | | DLL<br/> | <dl> <dt>Sysmon.ocx</dt> </dl> | ## See also <dl> <dt> [**SystemMonitor**](systemmonitor.md) </dt> <dt> [**SystemMonitor.Paste**](systemmonitor-paste.md) </dt> </dl>
20.184211
127
0.557366
eng_Latn
0.689774
6ae74455959a5ce37f74a5b30a3c79e1b598c3d1
11,049
md
Markdown
docs/sources/introduction/technology.md
a-ba/docker
baf6cf90561a1582683c92dd209e36f78a39288e
[ "Apache-2.0" ]
null
null
null
docs/sources/introduction/technology.md
a-ba/docker
baf6cf90561a1582683c92dd209e36f78a39288e
[ "Apache-2.0" ]
null
null
null
docs/sources/introduction/technology.md
a-ba/docker
baf6cf90561a1582683c92dd209e36f78a39288e
[ "Apache-2.0" ]
null
null
null
page_title: Understanding the Technology
page_description: Technology of Docker explained in depth
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile

# Understanding the Technology

*What is the architecture of Docker? What is its underlying technology?*

## Introduction

When it comes to understanding Docker and its underlying technology there is no *magic* involved. Everything is based on tried and tested features of the *Linux kernel*. Docker either makes use of those features directly or builds upon them to provide new functionality.

Aside from the technology, one of the major factors that make Docker great is the way it is built. The project's core is very lightweight and as much of Docker as possible is designed to be pluggable. Docker is also built with integration in mind and has a fully featured API that allows you to access all of the power of Docker from inside your own applications.

## The Architecture of Docker

Docker is designed for developers and sysadmins. It's built to help you build applications and services and then deploy them quickly and efficiently: from development to production. Let's take a look.

- Docker is a client-server application.
- Both the Docker client and the daemon *can* run on the same system, or;
- You can connect a Docker client with a remote Docker daemon.
- They communicate via sockets or through a RESTful API.
- Users interact with the client to command the daemon, e.g. to create, run, and stop containers.
- The daemon, receiving those commands, does the job, e.g. run a container, stop a container.

![Docker Architecture Diagram](/article-img/architecture.svg)

## The components of Docker

Docker's main components are:

- Docker *daemon*;
- Docker *client*, and;
- The Docker Index.

### The Docker daemon

As shown on the diagram above, the Docker daemon runs on a host machine. 
The user does not directly interact with the daemon, but instead through an intermediary: the Docker client.

### Docker client

The Docker client is the primary user interface to Docker. It is tasked with accepting commands from the user and communicating back and forth with a Docker daemon to manage the container lifecycle on any host.

### Docker Index, the central Docker registry

The [Docker Index](http://index.docker.io) is the global archive (and directory) of user-supplied Docker container images. It currently hosts a large – in fact, rapidly growing – number of projects where you can find almost any popular application or deployment stack readily available to download and run with a single command.

As a social community project, Docker tries to provide all necessary tools for everyone to grow with other *Dockers*. By issuing a single command through the Docker client you can start sharing your own creations with the rest of the world.

However, knowing that not everything can be shared, the Docker Index also offers private repositories. In order to see the available plans, you can click [here](https://index.docker.io/plans).

Using the [Docker Registry](https://github.com/dotcloud/docker-registry), it is also possible to run your own private Docker image registry service on your own servers.

> **Note:** To learn more about the [*Docker Image Index*](
> http://index.docker.io) (public *and* private), check out the [Registry &
> Index Spec](http://docs.docker.io/en/latest/api/registry_index_spec/).

### Summary

- **When you install Docker, you get all the components:** The daemon, the client and access to the public image registry: the [Docker Index](http://index.docker.io).
- **You can run these components together or distributed:** Servers with the Docker daemon running, controlled by the Docker client.
- **You can benefit from the public registry:** Download and build upon images created by the community. 
- **You can start a private repository for proprietary use.** Sign up for a [plan](https://index.docker.io/plans) or host your own [Docker registry](https://github.com/dotcloud/docker-registry). ## Elements of Docker The basic elements of Docker are: - **Containers, which allow:** The run portion of Docker. Your applications run inside of containers. - **Images, which provide:** The build portion of Docker. Your containers are built from images. - **The Dockerfile, which automates:** A file that contains simple instructions that build Docker images. To get practical and learn what they are, and **_how to work_** with them, continue to [Working with Docker](working-with-docker.md). If you would like to understand **_how they work_**, stay here and continue reading. ## The underlying technology The power of Docker comes from the underlying technology it is built from. A series of operating system features are carefully glued together to provide Docker's features and provide an easy to use interface to those features. In this section, we will see the main operating system features that Docker uses to make easy containerization happen. ### Namespaces Docker takes advantage of a technology called `namespaces` to provide an isolated workspace we call a *container*. When you run a container, Docker creates a set of *namespaces* for that container. This provides a layer of isolation: each process runs in its own namespace and does not have access outside it. Some of the namespaces Docker uses are: - **The `pid` namespace:** Used for process numbering (PID: Process ID) - **The `net` namespace:** Used for managing network interfaces (NET: Networking) - **The `ipc` namespace:** Used for managing access to IPC resources (IPC: InterProcess Communication) - **The `mnt` namespace:** Used for managing mount-points (MNT: Mount) - **The `uts` namespace:** Used for isolating kernel / version identifiers. 
 (UTS: Unix Timesharing System) ### Control groups Docker also makes use of another technology called `cgroups` or control groups. A key need when running applications in isolation is to have them contained, not just in terms of the filesystem and dependencies, but also in terms of resources. Control groups allow Docker to share available hardware resources fairly among containers and, if required, to enforce limits and constraints, for example limiting the memory available to a specific container to a maximum of 128 MB. ### UnionFS UnionFS, or union filesystems, are filesystems that operate by creating layers, making them very lightweight and fast. Docker uses union filesystems to provide the building blocks for containers. We'll see more about this below. ### Containers Docker combines these components to build a container format we call `libcontainer`. Docker also supports traditional Linux containers like [LXC](https://linuxcontainers.org/), which also make use of these components. ## How does everything work? A lot happens when Docker creates a container. Let's see how it works! ### How does a container work? A container consists of an operating system, user-added files and metadata. Each container is built from an image. That image tells Docker what the container holds, what process to run when the container is launched, and a variety of other configuration data. The Docker image is read-only. When Docker runs a container from an image, it adds a read-write layer on top of the image (using the UnionFS technology we saw earlier) in which your process can run. ### What happens when you run a container? The Docker client (or the API!) tells the Docker daemon to run a container. Let's take a look at a simple `Hello world` example. $ docker run -i -t ubuntu /bin/bash Let's break down this command. The Docker client is launched using the `docker` binary. 
 The bare minimum the Docker client needs to tell the Docker daemon is: * What Docker image to build the container from; * The command you want to run inside the container when it is launched. So what happens under the covers when we run this command? Docker begins with: - **Pulling the `ubuntu` image:** Docker checks for the presence of the `ubuntu` image and, if it doesn't exist locally on the host, downloads it from the [Docker Index](https://index.docker.io) - **Creating a new container:** Once Docker has the image, it creates a container from it. - **Allocating a filesystem and mounting a read-write _layer_:** The container is created in the filesystem and a read-write layer is added to the image. - **Allocating a network / bridge interface:** Creates a network interface that allows the Docker container to talk to the local host. - **Setting up an IP address:** Intelligently finds and attaches an available IP address from a pool. - **Executing _a_ process that you specify:** Runs your application; and - **Capturing and providing application output:** Connects and logs standard input, output and errors for you to see how your application is running. ### How does a Docker image work? We've already seen that Docker images are read-only templates that Docker containers are launched from. When you launch a container, Docker creates a read-write layer on top of that image in which your application runs. Docker images are built using a simple, descriptive set of steps we call *instructions*. Instructions are stored in a file called a `Dockerfile`. Each instruction writes a new layer to an image using the UnionFS technology we saw earlier. Every image starts from a base image, for example `ubuntu`, a base Ubuntu image, or `fedora`, a base Fedora image. Docker builds and provides these base images via the [Docker Index](http://index.docker.io). ### How does a Docker registry work? The Docker registry is a store for your Docker images. 
Once you build a Docker image you can *push* it to the [Docker Index](http://index.docker.io) or to a private registry you run behind your firewall. Using the Docker client, you can search for already published images and then pull them down to your Docker host to build containers from them (or even build on these images). The [Docker Index](http://index.docker.io) provides both public and private storage for images. Public storage is searchable and can be downloaded by anyone. Private repositories are excluded from search results and only you and your users can pull them down and use them to build containers. You can [sign up for a plan here](https://index.docker.io/plans). To learn more, check out the [Working With Repositories]( http://docs.docker.io/en/latest/use/workingwithrepository) section of our [User's Manual](http://docs.docker.io). ## Where to go from here ### Understanding Docker Visit [Understanding Docker](understanding-docker.md) in our Getting Started manual. ### Get practical and learn how to use Docker straight away Visit [Working with Docker](working-with-docker.md) in our Getting Started manual. ### Get the product and go hands-on Visit [Get Docker](get-docker.md) in our Getting Started manual. ### Get the whole story [https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)
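As a closing illustration of the Dockerfile and layering concepts described earlier, here is a minimal sketch. This is a hypothetical example, not taken from this guide; each instruction below produces one read-only image layer:

```dockerfile
# A minimal, hypothetical Dockerfile. Each instruction adds one
# read-only layer to the resulting image via the union filesystem.

# Start from the ubuntu base image provided through the Docker Index.
FROM ubuntu

# One layer with an updated package index, one with memcached installed.
RUN apt-get update
RUN apt-get install -y memcached

# The process to run when a container is launched from this image.
CMD ["memcached", "-u", "daemon", "-p", "11211"]
```

Building this file with `docker build` replays the instructions in order, and unchanged layers can be reused from the cache on subsequent builds.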
41.074349
133
0.769662
eng_Latn
0.998086
6ae7e122da376c642fadac23ddfc539b6afa0bcb
337
md
Markdown
README.md
keciciler/SimplifiedMapViewCallbacks
1ed612ac03e67f8534ae25cd06a2ff607589fb04
[ "Apache-2.0" ]
15
2019-06-12T15:01:12.000Z
2021-06-03T18:35:26.000Z
README.md
keciciler/SimplifiedMapViewCallbacks
1ed612ac03e67f8534ae25cd06a2ff607589fb04
[ "Apache-2.0" ]
null
null
null
README.md
keciciler/SimplifiedMapViewCallbacks
1ed612ac03e67f8534ae25cd06a2ff607589fb04
[ "Apache-2.0" ]
null
null
null
 # SimplifiedMapViewCallbacks Android Google Maps' MapView needs lifecycle delegation in order to work, so this is a sample repository that solves the issue by using FragmentLifecycleCallback. For more information, please read my article: https://medium.com/@altug.keciciler/another-approach-to-mapview-lifecycle-delegation-8a230b862706
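The general approach can be sketched roughly as follows. This is an illustrative outline, not the repository's actual code: `MapHolder` is a hypothetical interface, and only a subset of the lifecycle callbacks is shown.

```java
import androidx.annotation.NonNull;
import androidx.fragment.app.Fragment;
import androidx.fragment.app.FragmentManager;
import com.google.android.gms.maps.MapView;

// Forwards lifecycle events to any fragment's MapView from one central place,
// instead of overriding every lifecycle method in each map-hosting fragment.
public class MapViewLifecycleCallbacks extends FragmentManager.FragmentLifecycleCallbacks {

    /** Hypothetical marker interface implemented by fragments that own a MapView. */
    public interface MapHolder {
        MapView getMapView();
    }

    @Override
    public void onFragmentResumed(@NonNull FragmentManager fm, @NonNull Fragment f) {
        if (f instanceof MapHolder) ((MapHolder) f).getMapView().onResume();
    }

    @Override
    public void onFragmentPaused(@NonNull FragmentManager fm, @NonNull Fragment f) {
        if (f instanceof MapHolder) ((MapHolder) f).getMapView().onPause();
    }

    @Override
    public void onFragmentDestroyed(@NonNull FragmentManager fm, @NonNull Fragment f) {
        if (f instanceof MapHolder) ((MapHolder) f).getMapView().onDestroy();
    }
}
```

Registering it once, e.g. `getSupportFragmentManager().registerFragmentLifecycleCallbacks(new MapViewLifecycleCallbacks(), true)`, then covers every map-hosting fragment in the activity.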
56.166667
163
0.839763
eng_Latn
0.930213
6ae8390f2c249225f9ad849c91a6fb7ef9afd57f
1,098
md
Markdown
api/Word.Columns.PreferredWidth.md
OPS-E2E-Prod/VBA-Docs
21bec655823615a697365e4be753e094515dcc90
[ "CC-BY-4.0", "MIT" ]
1
2021-04-08T20:10:22.000Z
2021-04-08T20:10:22.000Z
api/Word.Columns.PreferredWidth.md
strive4peace/VBA-Docs
66f03103390d2106f465eba6ea06346200f22ff4
[ "CC-BY-4.0", "MIT" ]
1
2019-04-02T13:17:46.000Z
2019-04-02T13:17:46.000Z
api/Word.Columns.PreferredWidth.md
strive4peace/VBA-Docs
66f03103390d2106f465eba6ea06346200f22ff4
[ "CC-BY-4.0", "MIT" ]
1
2019-04-02T05:59:19.000Z
2019-04-02T05:59:19.000Z
--- title: Columns.PreferredWidth property (Word) keywords: vbawd10.chm155910249 f1_keywords: - vbawd10.chm155910249 ms.prod: word api_name: - Word.Columns.PreferredWidth ms.assetid: 72a64aaa-0c53-2e61-9c33-fb10436823e9 ms.date: 06/08/2017 localization_priority: Normal --- # Columns.PreferredWidth property (Word) Returns or sets the preferred width (in points or as a percentage of the window width) for the specified columns. Read/write **Single**. ## Syntax _expression_. `PreferredWidth` _expression_ Required. An expression that returns a '[Columns](Word.columns.md)' collection. ## Remarks If the **[PreferredWidthType](Word.Columns.PreferredWidthType.md)** property is set to **wdPreferredWidthPoints**, the **PreferredWidth** property returns or sets the width in points. If the **PreferredWidthType** property is set to **wdPreferredWidthPercent**, the **PreferredWidth** property returns or sets the width as a percentage of the window width. ## See also [Columns Collection Object](Word.columns.md) [!include[Support and feedback](~/includes/feedback-boilerplate.md)]
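A short illustrative example (assuming the active document contains at least one table; the table index is hypothetical):

```vb
' Set the columns of the first table to half the window width.
With ActiveDocument.Tables(1).Columns
    .PreferredWidthType = wdPreferredWidthPercent  ' interpret width as a percentage
    .PreferredWidth = 50                           ' 50% of the window width
End With

' Alternatively, use a fixed width in points.
With ActiveDocument.Tables(1).Columns
    .PreferredWidthType = wdPreferredWidthPoints
    .PreferredWidth = 100                          ' 100 points
End With
```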
29.675676
357
0.774135
eng_Latn
0.90991
6ae8ed4f9315d9fcfdce18b88e89e8c06303c8a0
9,570
md
Markdown
content/news/2020-02-24-SPLSPECIAL24.md
spotlightpa/poor-richard
83bfc53f274f3a7805ef91782c83070c955699ba
[ "MIT" ]
15
2019-08-18T16:56:19.000Z
2022-03-18T14:09:27.000Z
content/news/2020-02-24-SPLSPECIAL24.md
spotlightpa/poor-richard
83bfc53f274f3a7805ef91782c83070c955699ba
[ "MIT" ]
15
2019-06-12T14:52:45.000Z
2021-11-12T01:29:32.000Z
content/news/2020-02-24-SPLSPECIAL24.md
spotlightpa/poor-richard
83bfc53f274f3a7805ef91782c83070c955699ba
[ "MIT" ]
12
2019-12-23T18:27:14.000Z
2022-03-08T04:55:22.000Z
+++ internal-id = "SPLSPECIAL24" image = "2020/02/01f55a34zj4bes6e.jpeg" image-description = "Voters on Tuesday will choose a replacement for former state Rep. Movita Johnson-Harrell, who is serving a jail term. These special election candidates are picked by party insiders, not voters." image-credit = "HEATHER KHALIFA / Philadelphia Inquirer " published = 2020-02-24T10:00:00.000Z slug = "pennsylvania-legislature-retirements-resignations-special-elections" authors = ["Cynthia Fernandez"] byline = "" title = "How political party insiders — not voters — dictate Pa. special election candidates " description = "Critics call the process \"undemocratic\" and rife with \"insiderism.\" The parties say it works just fine." blurb = "Critics call the process \"undemocratic\" and rife with \"insiderism.\" The parties say it works just fine." kicker = "Capitol Notebook" linktitle = "" suppress-featured = false +++ <i><b>Capitol Notebook by </b></i><a href="https://www.spotlightpa.org/"><i><b>Spotlight PA</b></i></a><i> provides updates on important news and notes from the halls of power in Harrisburg. </i><a href="https://www.spotlightpa.org/newsletters"><i>Sign up for our weekly newsletter.</i></a> HARRISBURG — Voters in west Philadelphia will go to the polls Tuesday to fill a vacancy left by former state Rep. Movita Johnson-Harrell, who resigned last year facing corruption charges. They will choose between a Democrat and a Republican — neither of whom was chosen by voters in a primary. That’s because in Pennsylvania, party insiders and loyalists nominate candidates for special elections. Elected or appointed foot soldiers, handpicked party members, and bigwigs get the final say on who gets on the ballot. In one special election last year, a Republican House candidate was picked by 17 members of the party — including his own father. While few in number, special elections come with big implications: A person elected through the process is likely to stick around for years. 
“You get someone elected with very low turnout — it could even be single-digit turnout — and the next time that person runs, he or she is an incumbent, and they have great advantages generally speaking,” said David Thornburgh, president and CEO of the good-government group Committee of Seventy. “You could be on a path to longtime office-holding.” Frustrated lawmakers from both parties are calling for changes to make the process more open and transparent. <script src="https://www.spotlightpa.org/embed.js" async></script><div data-spl-embed-version="1" data-spl-src="https://www.spotlightpa.org/embeds/newsletter/"></div> # ‘Endemic of insiderism’ Over the past decade, Pennsylvania has held 44 special elections for state House and Senate seats. Of the candidates who were elected, 31 still hold that office, according to Department of State data analyzed by Spotlight PA. Two are now in Congress. There have been eight special elections since the current legislative session began in January 2019. After Tuesday, three more are scheduled for March, in Bucks County and Western Pennsylvania. How special election candidates for the General Assembly are selected varies between party. State law allows Democrats and Republicans to create their own processes. For the GOP, local committee members pick a candidate when only one county is involved. Otherwise, those members pick a small number of party voters to attend a meeting and select a candidate. In practice, the conferee selection may be made by a committee’s chair. In Lebanon County last year, this led to accusations of a “<a href="https://www.penncapital-star.com/government-politics/this-was-a-sham-lebanon-co-republicans-say-party-boss-poisoned-special-election-candidate-selection/">sham</a>” process. “Chairmen have full latitude on who they choose and zero restrictions,” said Rep. Andrew Lewis (R., Dauphin), who is drafting a bill to overhaul the process. “I think it is undemocratic and it takes power away from voters. 
You have less than one percent of voters picking the nominee and I just don’t think that is right.” Lewis’ proposed <a href="https://www.legis.state.pa.us/cfdocs/Legis/CSM/showMemoPublic.cfm?chamber=H&SPick=20190&cosponId=31058">legislation</a> would require special primaries, in addition to special elections. He expects opposition not only from the establishment but from members of his party concerned about the high cost of running special elections. “I am currently looking at different options to bring those costs down,” Lewis said. “We don’t want to incur costs to the taxpayer that don’t have to be there. But I think it is important enough.” A spokesperson for state Republicans said the party “is very comfortable with its process.” “Adding a special primary is going to add costs to that, where right now these conferees are done almost basically for free,” a spokesperson, Charlie O’Neill, said. The state Democratic Party’s executive committee has the final say about special election candidates, according to its bylaws. But the party usually defers to county committee members, who vote to make a recommendation. Rep. Chris Rabb (D., Philadelphia) has personal experience with the special election process. He ran as a write-in candidate in a 2016 special after Democratic ward leaders chose Tonyelle Cook-Artis, the politically connected chief of staff to the outgoing representative. “I got destroyed on March 15. But my name was not on the ballot, so it is easy for someone else to win if you don’t have any known competitors,” Rabb said. But Rabb did get his name on the ballot just over a month later, in the primary that was open to all registered Democrats in that district. He won in a three-person race. Rabb is now pushing a <a href="https://www.legis.state.pa.us/cfdocs/billinfo/billinfo.cfm?syear=2019&sind=0&body=H&type=B&bn=1661">measure</a> that would require candidates to formally file with a party chairperson, pay a fee, and make an announcement video. 
Most critically, his bill would require a public meeting to be held in the district. “There should be one minimum standard across all counties, across the commonwealth of Pennsylvania,” Rabb said. “How do we instill confidence that who is put on the ballot is not just endemic of insiderism but is a heartfelt and substantive search?” He added that his bill “does not advantage one party over another.” “This does not do anything but level the playing field \[and] provide greater scrutiny and involvement in a process that needs to be overhauled,” he said. The process of deciding a party’s candidate in a special election is already convoluted and confusing to voters, Thornburgh said. But the unique power given to Democratic district, or ward, leaders in Philadelphia makes the process even murkier. “The Philadelphia suburban counties take a vote of the committee people. In Philadelphia county, that tends to not be the case. They don't often have open votes,” he said. “A couple of insiders pick the nominee without consulting committee people in any kind of public or democratic way.” # Expensive, opaque — and unlikely to change Ideally, special elections are a rarity. They are expensive — in Philadelphia, each <a href="https://www.inquirer.com/politics/clout/mike-turzai-pennsylvania-special-elections-costs-20200110.html">costs taxpayers</a> about $175,000 — and temporarily leave residents without an elected official. They also effectively rob voters in districts with a strong party majority of a say in the process, Lewis noted. “A lot of these districts are either heavy Republican or heavy Democrat, so a lot of these elections are truly decided in the primary,” Lewis said. 
But Christopher Nicholas, a veteran Republican consultant, contends that special elections are just that — “special situations where you lean on the parties to find the nominees.” “To me, the most important thing is getting that seat filled so people in that district have their equal representation in Harrisburg or Washington,” he said. Nicholas said that challengers can always take on the winner of the special in the next primary election. “Oftentimes, when you have special elections, all sorts of people run and everybody who loses complains,” Nicholas said. “At the next regularly scheduled election for that district, it is nothing. It is a big nothingburger.” The state Democratic Party declined to comment on its special election process or possible reforms. O’Neill, the GOP party spokesperson, said the Democratic process “is vastly different from ours.” “We are not very fond of it,” O’Neill said of his counterpart’s system. While Lewis has yet to formally introduce his measure, Rabb’s bill has been in the House State Government Committee since <a href="https://www.legis.state.pa.us/cfdocs/billinfo/bill_history.cfm?syear=2019&sind=0&body=H&type=B&bn=1661">June 2019</a>. And that’s where it will likely stay, according to one good-government advocate. “Special election laws are sent to committee to die,” said Eric Epstein, a former General Assembly candidate. “\[T]hat is because neither party is going to unilaterally disarm and make the process more open, transparent, or competitive. These are gerrymandered districts and both parties are going to preserve the status quo.” <i>Spotlight PA receives funding from nonprofit institutions and readers like you who are committed to investigative journalism that gets results. Give a gift today at </i><a href="https://www.spotlightpa.org/donate"><i>spotlightpa.org/donate</i></a><i>.</i>
93.823529
506
0.788819
eng_Latn
0.999316
6ae95617524bff4420ad184127db4f1bf4d69802
2,386
md
Markdown
README.md
Rand2AI/FedBoosting
999dc879a2fe06563f27fab0a356e07d342dfc34
[ "MIT" ]
null
null
null
README.md
Rand2AI/FedBoosting
999dc879a2fe06563f27fab0a356e07d342dfc34
[ "MIT" ]
null
null
null
README.md
Rand2AI/FedBoosting
999dc879a2fe06563f27fab0a356e07d342dfc34
[ "MIT" ]
null
null
null
 # FedBoosting: Federated Learning with Gradient Protected Boosting for Text Recognition ## Introduction This is the implementation of the paper "FedBoosting: Federated Learning with Gradient Protected Boosting for Text Recognition". We show in this paper that the generalization ability of the joint model is poor on Non-Independent and Non-Identically Distributed (Non-IID) data, particularly when the Federated Averaging (FedAvg) strategy is used, due to the weight divergence phenomenon. We propose a novel boosting algorithm for FL to address this generalization issue, as well as to achieve a much faster convergence rate in gradient-based optimization. In addition, a secure gradient sharing protocol using Homomorphic Encryption (HE) and Differential Privacy (DP) is introduced to defend against gradient leakage attacks. We demonstrate that the proposed Federated Boosting (FedBoosting) method achieves significant improvements in both prediction accuracy and run-time efficiency on a text recognition task using a public benchmark. <div align=center><img src="https://github.com/Rand2AI/FedBoosting/blob/main/Image/FedBoost_illustration.png" width=600/></div> ## Requirements python==3.6.9 Flask==2.0.0 Pillow==7.0.0 requests==2.23.0 tensorflow-gpu==1.14.0 tqdm==4.44.1 swiss_army_tensorboard (https://github.com/gaborvecsei/Swiss-Army-Tensorboard) ... ## Performance <div align=center><img src="https://github.com/Rand2AI/FedBoosting/blob/main/Image/FedBoost_performance.png" width=600/></div> ## How to use ### Prepare your data: * Download the datasets online and extract them to "./Data/". * Run the relevant functions in "./DataProcess/encoder.py" to convert the data to ".json" format. * Spread the data and code to the server and clients. ### Training * Change the paths and hyper-parameters in "./config.json". * Run "./FLtrainer_server.py" first and then, on each client, run "./FLtrainer_client.py". 
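To illustrate the core idea at a glance, here is a toy, framework-free sketch contrasting size-weighted FedAvg with a validation-performance-weighted aggregation in the spirit of FedBoosting. This is not the paper's actual algorithm (which also covers HE/DP gradient protection); model parameters are plain lists of floats and all function names are illustrative:

```python
def fedavg(client_weights, client_sizes):
    """Plain FedAvg: average client model parameters, weighted by local data size."""
    total = float(sum(client_sizes))
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

def boosted_aggregate(client_weights, val_scores):
    """Toy boosting-style aggregation: weight each client's parameters by its
    normalized validation score instead of its dataset size alone."""
    total = float(sum(val_scores))
    return [
        sum(w[i] * (s / total) for w, s in zip(client_weights, val_scores))
        for i in range(len(client_weights[0]))
    ]
```

With equal validation scores this reduces to a uniform average; a client that validates better pulls the global model toward its parameters.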
 ## Citation If you find this work helpful for your research, please cite the following paper: @article{ren2020privacy, title={FedBoosting: Federated Learning with Gradient Protected Boosting for Text Recognition}, author={Ren, Hanchi and Deng, Jingjing and Xie, Xianghua and Ma, Xiaoke and Wang, Yichuan}, journal={arXiv preprint arXiv:2007.07296}, year={2020} }
41.859649
924
0.761526
eng_Latn
0.96961
6aea41a6763734f14d378670069fbaaefb33a830
2,396
md
Markdown
README.md
mohan-chinnappan-n/docker-sfpowerscripts
75216e4ed6144e6838dde698999f13e6ad021466
[ "MIT" ]
null
null
null
README.md
mohan-chinnappan-n/docker-sfpowerscripts
75216e4ed6144e6838dde698999f13e6ad021466
[ "MIT" ]
null
null
null
README.md
mohan-chinnappan-n/docker-sfpowerscripts
75216e4ed6144e6838dde698999f13e6ad021466
[ "MIT" ]
null
null
null
 # docker-sfpowerscripts # Supported tags and respective Dockerfile links - [release-nov21, latest](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Dockerfile) - [release-oct21](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Oct21/Dockerfile) - [release-sep21](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Sep21/Dockerfile) - [release-aug21](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Aug21/Dockerfile) - [release-july21](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/July21/Dockerfile) - [23-ubuntu20.04](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Release23/Dockerfile) - [22-ubuntu20.04](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Release22/Dockerfile) - [21-ubuntu20.04](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Release21/Dockerfile) - [20-ubuntu20.04](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/Release20/Dockerfile) # What is sfpowerscripts? sfpowerscripts is a build system for package-based development on Salesforce that can be implemented in any CI/CD system of choice. sfpowerscripts is part of the DX@Scale initiative, a set of productivity boosters for engineering teams on Salesforce. ![sfpowerscripts logo](https://repository-images.githubusercontent.com/248449736/5d08c600-728e-11ea-8267-ae1aceebea60 "sfpowerscripts") # What's in the image? The image contains sfpowerscripts and the dependencies it needs to run: - Node - OpenJDK - git - Puppeteer dependencies - [SFDX CLI](https://www.npmjs.com/package/sfdx-cli) - [sfpowerscripts](https://www.npmjs.com/package/@dxatscale/sfpowerscripts) - [sfpowerkit](https://www.npmjs.com/package/sfpowerkit) - [sfdmu](https://www.npmjs.com/package/sfdmu) - [sfdx-browserforce-plugin](https://www.npmjs.com/package/sfdx-browserforce-plugin) # License View [license information](https://github.com/dxatscale/docker-sfpowerscripts/blob/main/LICENSE) for this Docker file. 
As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained). As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.
55.72093
241
0.798414
eng_Latn
0.641745
6aeaa069162b6ab715352f5219e2b8eebf652050
4,442
md
Markdown
docs/manual/java/guide/advanced/IntegratingNonLagom.md
jroper/lagom
250b9a0498c1c8317fa85f961dee1295abcd9638
[ "Apache-2.0" ]
null
null
null
docs/manual/java/guide/advanced/IntegratingNonLagom.md
jroper/lagom
250b9a0498c1c8317fa85f961dee1295abcd9638
[ "Apache-2.0" ]
null
null
null
docs/manual/java/guide/advanced/IntegratingNonLagom.md
jroper/lagom
250b9a0498c1c8317fa85f961dee1295abcd9638
[ "Apache-2.0" ]
1
2019-09-02T11:42:19.000Z
2019-09-02T11:42:19.000Z
 # Integrating with non-Lagom services ## Invoking Lagom services Lagom service calls are implemented using idiomatic REST. The simplest way to invoke a Lagom service from another framework is to use that framework's REST client to invoke the Lagom service. Another way to invoke Lagom services, if the client is running in a JVM, is to use the Lagom service interface directly. ### Using the Lagom service client #### Configuring dependencies To use the Lagom service interface, you will need to add a dependency on the Lagom integration client to your build. If using Maven, this can be done by adding the following dependency to your pom: ```xml <dependency> <groupId>com.lightbend.lagom</groupId> <artifactId>lagom-javadsl-integration-client_${scala.binary.version}</artifactId> <version>${lagom.version}</version> </dependency> ``` Of course, you will also need to add a dependency on the API project that you have created in your Lagom project. For more details, see [[Understanding your project structure|LagomBuild#Understanding-your-project-structure]]. #### Managing the client factory The Lagom integration client provides [`LagomClientFactory`](api/index.html?com/lightbend/lagom/javadsl/client/integration/LagomClientFactory.html) for creating Lagom service clients. This factory creates and manages thread pools and connection pools, so it's important to manage its lifecycle correctly in your application, ensuring that you only create one instance of it, and that you shut it down when you're finished with it. 
 The factory can be instantiated by invoking the static [`create`](api/index.html?com/lightbend/lagom/javadsl/client/integration/LagomClientFactory.html#create-java.lang.String-java.lang.ClassLoader-) method, for example: @[create-factory](code/docs/advanced/IntegratingNonLagom.java) The first argument is a service name; this will be the name of the service that is consuming the Lagom service, and will impact how calls made through this client identify themselves to the service. The second argument is a `ClassLoader`; it will be used to create the service proxy and needs to have the API for the client in it. When you have finished with the factory, for example when the system shuts down, you need to close the factory by invoking the [`close`](api/index.html?com/lightbend/lagom/javadsl/client/integration/LagomClientFactory.html#close--) method: @[close-factory](code/docs/advanced/IntegratingNonLagom.java) Typically the factory will be a singleton in your system. If your system is using Spring, for example, you would create a `FactoryBean` that instantiates it, and you would implement a `@PreDestroy` annotated method that closes the client factory. #### Creating a client Once you have created a client factory, you can easily create a client using it, for example: @[create-client](code/docs/advanced/IntegratingNonLagom.java) Here we've created a client for the `HelloService` using the [`createClient`](api/index.html?com/lightbend/lagom/javadsl/client/integration/LagomClientFactory.html#createClient-java.lang.Class-java.net.URI-) method. We've passed in a static URI to tell the client where the `HelloService` lives; typically you would read this from a configuration file in your service. 
You can also pass a list of URIs using [`createClient`](api/index.html?com/lightbend/lagom/javadsl/client/integration/LagomClientFactory.html#createClient-java.lang.Class-java.util.Collection-), and finally, if your environment is capable of looking up service URIs dynamically, you can pass an implementation of [`ServiceLocator`](api/index.html?com/lightbend/lagom/javadsl/api/ServiceLocator.html). #### Working with dev mode When running your service in development, you can tell the service to use Lagom's dev mode service locator, using [`createDevClient`](api/index.html?com/lightbend/lagom/javadsl/client/integration/LagomClientFactory.html#createDevClient-java.lang.Class-). Typically, you would want to have some configuration in your application that tells you whether it is running in development or not, and only create the dev mode client if you are in development. For example: @[dev-mode](code/docs/advanced/IntegratingNonLagom.java) This means that you don't have to worry about what URI your services are running on in development, you just need to ensure the Lagom `runAll` command has been run to run the service locator.
76.586207
466
0.792886
eng_Latn
0.991077
6aead6eead4119584ee75d7461cbdd29344f5695
3,751
md
Markdown
CHANGELOG.md
Blemming/strapi-module
e0b9734a82c95d8cd64152baead4ea80bf213f8e
[ "MIT" ]
null
null
null
CHANGELOG.md
Blemming/strapi-module
e0b9734a82c95d8cd64152baead4ea80bf213f8e
[ "MIT" ]
null
null
null
CHANGELOG.md
Blemming/strapi-module
e0b9734a82c95d8cd64152baead4ea80bf213f8e
[ "MIT" ]
null
null
null
# Changelog All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines. ### [0.1.8](https://github.com/nuxt-community/strapi-module/compare/v0.1.7...v0.1.8) (2020-11-02) ### Bug Fixes * **lib:** dont fetch user on server with target static ([#68](https://github.com/nuxt-community/strapi-module/issues/68)) ([0a74b5a](https://github.com/nuxt-community/strapi-module/commit/0a74b5a263b8721102be70ec2608bb118cd1fcf3)) ### [0.1.7](https://github.com/nuxt-community/strapi-module/compare/v0.1.6...v0.1.7) (2020-10-08) ### Features * Tiny change to plugin.js to reduce the size of lodash on build ([#45](https://github.com/nuxt-community/strapi-module/issues/45)) ([8b59046](https://github.com/nuxt-community/strapi-module/commit/8b5904693446b592a308fe8c028e26ddb1e372eb)) ### Bug Fixes * **lib:** better error handling ([58a9d17](https://github.com/nuxt-community/strapi-module/commit/58a9d17ec3be63fd837bf1d273ba5b298221f54e)) ### [0.1.6](https://github.com/nuxt-community/strapi-module/compare/v0.1.5...v0.1.6) (2020-08-25) ### [0.1.5](https://github.com/nuxt-community/strapi-module/compare/v0.1.4...v0.1.5) (2020-08-21) ### Features * handle single-type entity type ([#35](https://github.com/nuxt-community/strapi-module/issues/35)) ([04b440b](https://github.com/nuxt-community/strapi-module/commit/04b440b105ecb63932d98d5e3a64fd265919353b)) ### Bug Fixes * **plugin:** reactivity on hydatation ([#34](https://github.com/nuxt-community/strapi-module/issues/34)) ([b7e764f](https://github.com/nuxt-community/strapi-module/commit/b7e764f50f70ad68012fcc4a6f8d769f6ae27b67)) ### [0.1.4](https://github.com/nuxt-community/strapi-module/compare/v0.1.3...v0.1.4) (2020-08-18) ### [0.1.3](https://github.com/nuxt-community/strapi-module/compare/v0.1.2...v0.1.3) (2020-07-28) ### Bug Fixes * avoid redefine property if exists 
([4ac979c](https://github.com/nuxt-community/strapi-module/commit/4ac979c0dff1aac8d045e097ff6c7e1e4303ed4c)) ### [0.1.2](https://github.com/nuxt-community/strapi-module/compare/v0.1.1...v0.1.2) (2020-07-16) ### Features * use runtimeConfig to avoid building when changing Strapi URL ([4442467](https://github.com/nuxt-community/strapi-module/commit/4442467b294ee7352dccf3131682e20b0f89f706)) ### Bug Fixes * update test with new example ([404fdca](https://github.com/nuxt-community/strapi-module/commit/404fdca6f880c685d31c84a20838b5fd5e05b1e0)) ### [0.1.1](https://github.com/nuxt-company/strapi-module/compare/v0.1.0...v0.1.1) (2020-07-08) ### Bug Fixes * **lib:** use findOne to get users me ([50ca41c](https://github.com/nuxt-company/strapi-module/commit/50ca41c38bf6862a7ca7b6973032d1e9b3dcb271)) ## [0.1.0](https://github.com/nuxt-community/strapi-module/compare/v0.0.1...v0.1.0) (2020-07-06) ### 0.0.1 (2020-06-24) ### Features * **lib:** add sendEmailConfirmation method ([ec29cc4](https://github.com/nuxt-community/strapi-module/commit/ec29cc40e7b564ae0858fbc86f6b1ac4e856ef38)) * **lib:** handle entities ([d98a31f](https://github.com/nuxt-community/strapi-module/commit/d98a31f716cf42443759ad0af3a112578e3b7a8f)) * **lib:** handle ssr nuxt state ([f24ff2f](https://github.com/nuxt-community/strapi-module/commit/f24ff2fca2990c89ffa80267084a3f525bc8d0df)) * **lib:** rename methods ([cc4527e](https://github.com/nuxt-community/strapi-module/commit/cc4527ecc62abf559dfa707ee9a44236e4e4e631)) * **lib:** update ([12a0b97](https://github.com/nuxt-community/strapi-module/commit/12a0b972882cc073d763fd72cb3d90e40b521d3c)) ### Bug Fixes * **lib:** remove throw ([1378f81](https://github.com/nuxt-community/strapi-module/commit/1378f815d162b5205aff2f87f12be82c945bb260))
45.192771
240
0.7502
yue_Hant
0.478044
6aece4270babe58137db5eb114d0d83b0ca6f624
1,903
md
Markdown
docs/framework/unmanaged-api/debugging/icordebugstepper-step-method.md
jhonyfrozen/docs.pt-br
c9e86b6a5de2ff8dffd54dd64d2e87aee85a5cb8
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugstepper-step-method.md
jhonyfrozen/docs.pt-br
c9e86b6a5de2ff8dffd54dd64d2e87aee85a5cb8
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugstepper-step-method.md
jhonyfrozen/docs.pt-br
c9e86b6a5de2ff8dffd54dd64d2e87aee85a5cb8
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Método ICorDebugStepper::Step ms.date: 03/30/2017 api_name: - ICorDebugStepper.Step api_location: - mscordbi.dll api_type: - COM f1_keywords: - ICorDebugStepper::Step helpviewer_keywords: - Step method, ICorDebugStepper interface [.NET Framework debugging] - ICorDebugStepper::Step method [.NET Framework debugging] ms.assetid: 38c1940b-ada1-40ba-8295-4c0833744e1e topic_type: - apiref author: rpetrusha ms.author: ronpet ms.openlocfilehash: 444390622ca68244661b91dc85814b05556b12a2 ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 04/23/2019 ms.locfileid: "61994312" --- # <a name="icordebugstepperstep-method"></a>Método ICorDebugStepper::Step Faz com que esse ICorDebugStepper a etapa única por meio de seu recipiente thread e, opcionalmente, para continuar a depuração de único por meio de funções que são chamadas de dentro do thread. ## <a name="syntax"></a>Sintaxe ``` HRESULT Step ( [in] BOOL bStepIn ); ``` ## <a name="parameters"></a>Parâmetros `bStepIn` [in] Definido como `true` para entrar em uma função que é chamada dentro do thread. Definido como `false` para ignorar a função. ## <a name="remarks"></a>Comentários A etapa é concluída quando o common language runtime executa a próxima instrução gerenciada no quadro deste seletor. Se `Step` é chamado em um seletor, que não está no código gerenciado, a etapa será concluída quando a próxima instrução de código gerenciado é executada pelo thread. ## <a name="requirements"></a>Requisitos **Plataformas:** Confira [Requisitos de sistema](../../../../docs/framework/get-started/system-requirements.md). **Cabeçalho:** CorDebug.idl, CorDebug.h **Biblioteca:** CorGuids.lib **Versões do .NET Framework:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
35.90566
285
0.745139
por_Latn
0.864535
6aed41df62d3958359d161dac4668cdbf009f1ad
13
md
Markdown
README.md
exinfinite/Dao
5c7c0ccc3a3e76e21f4478d14f342a81f25ab434
[ "MIT" ]
null
null
null
README.md
exinfinite/Dao
5c7c0ccc3a3e76e21f4478d14f342a81f25ab434
[ "MIT" ]
null
null
null
README.md
exinfinite/Dao
5c7c0ccc3a3e76e21f4478d14f342a81f25ab434
[ "MIT" ]
null
null
null
# Dao Data object access (資料物件存取)
4.333333
6
0.692308
vie_Latn
0.263636
6aedc7aafe90d0b3c40c0b5450384db43ab14fc2
98
md
Markdown
README.md
Samalve/frontendMentor-single-price-component
d64649b642cf2c929846d79bfea7c9baf5d0671e
[ "MIT" ]
null
null
null
README.md
Samalve/frontendMentor-single-price-component
d64649b642cf2c929846d79bfea7c9baf5d0671e
[ "MIT" ]
null
null
null
README.md
Samalve/frontendMentor-single-price-component
d64649b642cf2c929846d79bfea7c9baf5d0671e
[ "MIT" ]
null
null
null
# frontendMentor-single-price-component frontend mentor challenge solution single price component
32.666667
57
0.867347
eng_Latn
0.473404
6aedd48e040d25b7ffe40b3d72f28894ded838a1
4,502
md
Markdown
docs/auth_rules.md
rantwijk/indy-node
3cb77dab5482c8b721535020fec41506de819d2e
[ "Apache-2.0" ]
null
null
null
docs/auth_rules.md
rantwijk/indy-node
3cb77dab5482c8b721535020fec41506de819d2e
[ "Apache-2.0" ]
null
null
null
docs/auth_rules.md
rantwijk/indy-node
3cb77dab5482c8b721535020fec41506de819d2e
[ "Apache-2.0" ]
null
null
null
# Current implemented rules in auth_map | Transaction type | Field | Previous value | New value | Who can| Description | |------------------|-------|----------------|-----------|--------|-------------| | NYM |`role` |`<empty>` | TRUSTEE | TRUSTEE|Adding new TRUSTEE| | NYM |`role` |`<empty>` | STEWARD | TRUSTEE|Adding new STEWARD| | NYM |`role` |`<empty>` | TRUST_ANCHOR| TRUSTEE, STEWARD|Adding new TRUST_ANCHOR| | NYM |`role` |`<empty>` |`<empty>` | TRUSTEE, STEWARD, TRUST_ANCHOR| Adding new Identity Owner| | NYM |`role` | TRUSTEE |`<empty>` | TRUSTEE | Blacklisting Trustee| | NYM |`role` | STEWARD |`<empty>` | TRUSTEE | Blacklisting Steward| | NYM |`role` | TRUST_ANCHOR |`<empty>` | TRUSTEE | Blacklisting Trust anchor| | NYM |`verkey`|`*`|`*`| Owner of this nym | Key Rotation| | SCHEMA |`*`|`*`|`*`| TRUSTEE, STEWARD, TRUST_ANCHOR | Adding new Schema| | SCHEMA |`*`|`*`|`*`| No one can edit existing Schema | Editing Schema| | CLAIM_DEF |`*`|`*`|`*`| TRUSTEE, STEWARD, TRUST_ANCHOR| Adding new CLAIM_DEF transaction| | CLAIM_DEF |`*`|`*`|`*`| Owner of claim_def txn| Editing CLAIM_DEF transaction| | NODE |`services`|`<empty>`|`[VALIDATOR]`| STEWARD if it is owner of this transaction| Adding new node to pool| | NODE |`services`|`[VALIDATOR]`|`[]`| TRUSTEE, STEWARD if it is owner of this transaction| Demotion of node| | NODE |`services`|`[]`|`[VALIDATOR]`| TRUSTEE, STEWARD if it is owner of this transaction| Promotion of node| | NODE |`node_ip`|`*`|`*`| STEWARD if it is owner of this transaction| Changing Node's ip address| | NODE |`node_port`|`*`|`*`| STEWARD if it is owner of this transaction| Changing Node's port| | NODE |`client_ip`|`*`|`*`| STEWARD if it is owner of this transaction| Changing Client's ip address| | NODE |`client_port`|`*`|`*`| STEWARD if it is owner of this transaction| Changing Client's port| | NODE |`blskey`|`*`|`*`| STEWARD if it is owner of this transaction| Changing Node's blskey| | POOL_UPGRADE |`action`|`<empty>`|`start`|TRUSTEE| Starting 
upgrade procedure| | POOL_UPGRADE |`action`|`start`|`cancel`|TRUSTEE| Canceling upgrade procedure| | POOL_RESTART |`action`|`*`|`*`|TRUSTEE| Restarting pool command| | POOL_CONFIG |`action`|`*`|`*`|TRUSTEE| Pool config command (like a `read only` option)| | VALIDATOR_INFO |`*`|`*`|`*`| TRUSTEE, STEWARD| Getting validator_info from pool| ### There are also some optional rules for the case when the config option ANYONE_CAN_WRITE is set to True: | Transaction type | Field | Previous value | New value | Who can| Description | |------------------|-------|----------------|-----------|--------|-------------| |NYM |`role`|`<empty>`|`<empty>`| Anyone| Adding new nym| |SCHEMA |`*`|`*`|`*`| Anyone| Any operations with SCHEMA transaction| |CLAIM_DEF |`*`|`*`|`*`| Anyone| Any operations with CLAIM_DEF transaction| ### As of now it is not implemented yet, but the following rules for the Revocation feature are needed: #### If ANYONE_CAN_WRITE is set to False: | Transaction type | Field | Previous value | New value | Who can| Description | |------------------|-------|----------------|-----------|--------|-------------| |REVOC_REG_DEF|`*`|`*`|`*`| TRUSTEE, STEWARD, TRUST_ANCHOR| Adding new REVOC_REG_DEF| |REVOC_REG_DEF|`*`|`*`|`*`| Only owners can edit existing REVOC_REG_DEF| Editing REVOC_REG_DEF| |REVOC_REG_ENTRY|`*`|`*`|`*`| Only the owner of the corresponding REVOC_REG_DEF can create new REVOC_REG_ENTRY| Adding new REVOC_REG_ENTRY| |REVOC_REG_ENTRY|`*`|`*`|`*`| Only owners can edit existing REVOC_REG_ENTRY| Editing REVOC_REG_ENTRY| #### If ANYONE_CAN_WRITE is set to True: | Transaction type | Field | Previous value | New value | Who can| Description | |------------------|-------|----------------|-----------|--------|-------------| |REVOC_REG_DEF|`*`|`*`|`*`| Anyone can create new REVOC_REG_DEF| Adding new REVOC_REG_DEF| |REVOC_REG_DEF|`*`|`*`|`*`| Only owners can edit existing REVOC_REG_DEF| Editing REVOC_REG_DEF| |REVOC_REG_ENTRY|`*`|`*`|`*`| Only the owner of the corresponding REVOC_REG_DEF 
can create new REVOC_REG_ENTRY| Adding new REVOC_REG_ENTRY| |REVOC_REG_ENTRY|`*`|`*`|`*`| Only owners can edit existing REVOC_REG_ENTRY| Adding new REVOC_REG_ENTRY|
78.982456
139
0.6004
eng_Latn
0.377664
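The auth_map table in the record above is essentially a lookup keyed by (transaction type, field, previous value, new value) that yields the roles allowed to perform the change. A minimal sketch of that lookup, using a small illustrative subset of the rules (the dict layout, `ANY` sentinel, and function name are assumptions for illustration, not the actual indy-node implementation):

```python
# Sketch of an auth-rule lookup in the spirit of the indy-node auth_map
# table above. The rule subset and names here are illustrative only.
ANY = "*"

# (txn_type, field, previous_value, new_value) -> roles allowed
AUTH_MAP = {
    ("NYM", "role", "", "TRUSTEE"): {"TRUSTEE"},
    ("NYM", "role", "", "TRUST_ANCHOR"): {"TRUSTEE", "STEWARD"},
    ("NYM", "role", "TRUSTEE", ""): {"TRUSTEE"},
    ("POOL_UPGRADE", "action", "", "start"): {"TRUSTEE"},
    ("SCHEMA", ANY, ANY, ANY): {"TRUSTEE", "STEWARD", "TRUST_ANCHOR"},
}

def is_authorized(txn_type, field, old, new, actor_role):
    """True if actor_role may change `field` from `old` to `new`.

    Tries the exact rule first, then a full-wildcard rule for the type.
    """
    for key in ((txn_type, field, old, new), (txn_type, ANY, ANY, ANY)):
        roles = AUTH_MAP.get(key)
        if roles is not None:
            return actor_role in roles
    return False
```

Ownership-conditional rules ("STEWARD if it is owner of this transaction") would need an extra predicate on the actor, which this sketch omits.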
6aedd98015443bbe59d397bba82b52aebddb57ce
4,937
md
Markdown
WindowsServerDocs/administration/windows-commands/create-partition-logical.md
TSlivede/windowsserverdocs.de-de
94efc4447d5eac158ab05bc87f9fcec15c317872
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/administration/windows-commands/create-partition-logical.md
TSlivede/windowsserverdocs.de-de
94efc4447d5eac158ab05bc87f9fcec15c317872
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/administration/windows-commands/create-partition-logical.md
TSlivede/windowsserverdocs.de-de
94efc4447d5eac158ab05bc87f9fcec15c317872
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Erstellen Sie eine logische partition description: 'Windows-Befehle Thema ***- ' ms.custom: na ms.prod: windows-server-threshold ms.reviewer: na ms.suite: na ms.technology: manage-windows-commands ms.tgt_pltfrm: na ms.topic: article ms.assetid: 1f59b79a-d690-4d0e-ad38-40df5a0ce38e author: coreyp-at-msft ms.author: coreyp manager: dongill ms.date: 10/16/2017 ms.openlocfilehash: d3af60aed6c8305e410c6ebfba3cf2e006034ad7 ms.sourcegitcommit: eaf071249b6eb6b1a758b38579a2d87710abfb54 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 05/31/2019 ms.locfileid: "66434151" --- # <a name="create-partition-logical"></a>Erstellen Sie eine logische partition >Gilt für: WindowsServer (Halbjährlicher Kanal), Windows Server 2016, Windows Server 2012 R2, WindowsServer 2012 erstellt eine logische Partition in einer vorhandenen erweiterten Partition an. Sie können diesen Befehl nur verwenden, auf den master Boot Records \(MBR\) Datenträger. ## <a name="syntax"></a>Syntax ``` create partition logical [size=<n>] [offset=<n>] [align=<n>] [noerr] ``` ## <a name="parameters"></a>Parameter | Parameter | Beschreibung | |-------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Größe\=<n> | Gibt die Größe der logischen Partition in Megabytes \(MB\), die kleiner als der erweiterten Partition sein muss. Wenn keine Größe angegeben wird, wird die Partition erst in der erweiterten Partition nicht mehr Speicherplatz verfügbar ist. | | offset\=<n> | Gibt den Offset in Kilobyte \(KB\), an dem die Partition erstellt wird. 
Der Offset wird aufgerundet, vollständig ausgefüllt verwendete ist. Wird kein Offset angegeben wird, wird die Partition in der ersten Datenträgerbereich platziert, die groß genug für die sie enthalten ist. Die Partition ist als die angegebene Anzahl mindestens so lange in Byte **Größe\=<n>** . Wenn Sie eine Größe für die logische Partition angeben, muss er kleiner als der erweiterten Partition sein. | | align\=<n> | Richtet alle Volumes oder einer Partition Blöcke auf der nächsten. In der Regel verwendet, mit der Hardware-RAID Logical Unit Number \(LUN\) Arrays zur Verbesserung der Leistung. <n> ist die Anzahl der Kilobytes \(KB\) vom Anfang des Datenträgers an, die am nächsten Ausrichtungsgrenze. | | Diskpart | nur für Skripts. Wenn ein Fehler gefunden wird, weiterhin DiskPart Befehle zu verarbeiten, als ob der Fehler nicht aufgetreten ist. Ohne diesen Parameter wird ein Fehler DiskPart mit dem Fehlercode zu beenden. | ## <a name="remarks"></a>Hinweise - Wenn die **Größe** und **Offset** Parameter nicht angegeben werden, die logische Partition wird erstellt, in dem größten Datenträgerbereich, die in der erweiterten Partition verfügbar. - Nachdem Sie die Partition erstellt wurde, wechselt der Fokus automatisch auf die neue logische Partition. - Ein grundlegende MBR-Datenträger muss ausgewählt werden, für diesen Vorgang erfolgreich ausgeführt werden kann. Verwenden der **select Disk** Befehl aus, wählen Sie einen Datenträger und verschiebt den Fokus auf sie. ## <a name="BKMK_examples"></a>Beispiele für Geben Sie Folgendes ein, um eine logische Partition 1000 MB Größe, in der erweiterten Partition des ausgewählten Datenträgers zu erstellen: ``` create partition logical size=1000 ``` #### <a name="additional-references"></a>Zusätzliche Referenzen [Erläuterung zur Befehlszeilensyntax](command-line-syntax-key.md)
73.686567
492
0.527851
deu_Latn
0.976304
6aee510b43c28a686857693b12469708e636b914
2,288
md
Markdown
create_acoustic_model.md
xbsdsongnan/samromur-asr
d3f70b3ec33743dc71fbeb750b4639f9bbb09b8b
[ "Apache-2.0" ]
1
2020-09-11T22:33:32.000Z
2020-09-11T22:33:32.000Z
create_acoustic_model.md
xbsdsongnan/samromur-asr
d3f70b3ec33743dc71fbeb750b4639f9bbb09b8b
[ "Apache-2.0" ]
null
null
null
create_acoustic_model.md
xbsdsongnan/samromur-asr
d3f70b3ec33743dc71fbeb750b4639f9bbb09b8b
[ "Apache-2.0" ]
null
null
null
<h1 align="center"> Creating an Acoustic Model (AM) from the Samrómur Corpus </h1> <!-- omit in toc --> ## Introduction One of the first steps in creating Samromur ASR is to create an AM from its audio files. The task involves creating metadata files for the `Audio data` and the `Language data`. These files will map each audio file to a speaker, gender, age, and the context of the files. ## Data Preparation To train an acoustic model you need speech data in the form of audio files paired with text. These data need to be prepared in a certain way to be processable by Kaldi. The script `local/prep_data.sh` prepares data in the format of _Málrómur_, i.e. a directory containing a folder `wav` with all the `.wav` files and a text file called `wav_info.txt`, where each line describes one utterance in 11 columns: ## spk2gender This file informs about speakers' gender. As we assumed, 'speakerID' is a unique name of each speaker (in this case it is also a 'recordingID' - every speaker has only one audio data folder from one recording session). In my example there are 5 female and 5 male speakers (f = female, m = male). Pattern: <speakerID> <gender> cristine f dad m josh m july f ... ## wav.scp This file connects every utterance (sentence said by one person during a particular recording session) with an audio file related to this utterance. If you stick to my naming approach, 'utteranceID' is nothing more than 'speakerID' (speaker's folder name) glued with the *.wav file name without the '.wav' ending (look for examples below). Pattern: <utteranceID> <full_path_to_audio_file> dad_4_4_2 /home/{user}/kaldi/egs/digits/digits_audio/train/dad/4_4_2.wav july_1_2_5 /home/{user}/kaldi/egs/digits/digits_audio/train/july/1_2_5.wav july_6_8_3 /home/{user}/kaldi/egs/digits/digits_audio/train/july/6_8_3.wav ... ## text This file contains every utterance matched with its text transcription. 
Pattern: <utteranceID> <text_transcription> dad_4_4_2 four four two july_1_2_5 one two five july_6_8_3 six eight three ... ## utt2spk This file tells the ASR system which utterance belongs to a particular speaker. Pattern: <utteranceID> <speakerID> dad_4_4_2 dad july_1_2_5 july july_6_8_3 july ... ## corpus.txt ```console $ ls /data/samromur/ samromur_recodrings_1000.zip ```
33.15942
329
0.765734
eng_Latn
0.991873
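The Kaldi data-prep files described in the record above (wav.scp, text, utt2spk, spk2gender) are plain space-separated text keyed by utterance and speaker IDs. A minimal sketch that generates them from a toy utterance list (function name, directory layout, and paths are made up for illustration, not part of the samromur-asr scripts):

```python
# Sketch: write Kaldi data-prep files (wav.scp, text, utt2spk, spk2gender)
# from a toy utterance list. IDs and paths are illustrative only.
import os

utterances = [
    # (speaker_id, gender, file_stem, transcription)
    ("dad",  "m", "4_4_2", "four four two"),
    ("july", "f", "1_2_5", "one two five"),
]

def write_kaldi_dir(out_dir, audio_root, utts):
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "wav.scp"), "w") as wav_scp, \
         open(os.path.join(out_dir, "text"), "w") as text, \
         open(os.path.join(out_dir, "utt2spk"), "w") as utt2spk, \
         open(os.path.join(out_dir, "spk2gender"), "w") as spk2gender:
        seen_speakers = set()
        for spk, gender, stem, transcript in utts:
            utt_id = f"{spk}_{stem}"  # utteranceID = speakerID + file stem
            wav = os.path.join(audio_root, spk, f"{stem}.wav")
            wav_scp.write(f"{utt_id} {wav}\n")
            text.write(f"{utt_id} {transcript}\n")
            utt2spk.write(f"{utt_id} {spk}\n")
            if spk not in seen_speakers:  # one spk2gender line per speaker
                seen_speakers.add(spk)
                spk2gender.write(f"{spk} {gender}\n")
```

Kaldi's `utils/fix_data_dir.sh` can then sort and cross-check the resulting directory.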
6aeea0b12b07b0affb8220b670ce0b266fa87ccf
1,446
md
Markdown
_drafts/0290-Word Pattern.md
yhnedison/copy
9e21ac332ede7f30746cb097854b11abbc1dba85
[ "MIT" ]
1
2020-02-01T03:51:29.000Z
2020-02-01T03:51:29.000Z
_drafts/0290-Word Pattern.md
yhnedison/copy
9e21ac332ede7f30746cb097854b11abbc1dba85
[ "MIT" ]
1
2019-10-04T16:08:37.000Z
2019-10-04T16:08:43.000Z
_drafts/0290-Word Pattern.md
yhnedison/copy
9e21ac332ede7f30746cb097854b11abbc1dba85
[ "MIT" ]
null
null
null
--- layout: post title: 290. Word Pattern category: [Leetcode] description: keywords: ['Hash Table', 'Leetcode', 'Easy'] --- ### [290. Word Pattern](https://leetcode.com/problems/word-pattern) #### Tags: 'Hash Table' <div class="content__u3I1 question-content__JfgR"><div><p>Given a <code>pattern</code> and a string <code>str</code>, find if <code>str</code> follows the same pattern.</p> <p>Here <b>follow</b> means a full match, such that there is a bijection between a letter in <code>pattern</code> and a <b>non-empty</b> word in <code>str</code>.</p> <p><strong>Example 1:</strong></p> <pre><strong>Input:</strong> pattern = <code>"abba"</code>, str = <code>"dog cat cat dog"</code> <strong>Output:</strong> true</pre> <p><strong>Example 2:</strong></p> <pre><strong>Input:</strong>pattern = <code>"abba"</code>, str = <code>"dog cat cat fish"</code> <strong>Output:</strong> false</pre> <p><strong>Example 3:</strong></p> <pre><strong>Input:</strong> pattern = <code>"aaaa"</code>, str = <code>"dog cat cat dog"</code> <strong>Output:</strong> false</pre> <p><strong>Example 4:</strong></p> <pre><strong>Input:</strong> pattern = <code>"abba"</code>, str = <code>"dog dog dog dog"</code> <strong>Output:</strong> false</pre> <p><b>Notes:</b><br/> You may assume <code>pattern</code> contains only lowercase letters, and <code>str</code> contains lowercase letters that may be separated by a single space.</p> </div></div> ### Solution
46.645161
172
0.677732
eng_Latn
0.583997
6aefd656dfa10bf1d80c28ca62247084bab8aa1f
2,615
md
Markdown
README.md
weili-go/javascript-sdk
cf2d311d1011f11b7122956f34a398bf89b746f3
[ "Apache-2.0" ]
4
2019-05-19T06:50:03.000Z
2021-10-14T21:36:20.000Z
README.md
weili-go/javascript-sdk
cf2d311d1011f11b7122956f34a398bf89b746f3
[ "Apache-2.0" ]
null
null
null
README.md
weili-go/javascript-sdk
cf2d311d1011f11b7122956f34a398bf89b746f3
[ "Apache-2.0" ]
3
2019-05-19T06:50:03.000Z
2021-05-17T01:18:19.000Z
# Binance Chain JavaScript SDK The Binance Chain JavaScript SDK allows browsers and node.js clients to interact with Binance Chain. It includes the following core components: * **crypto** - core cryptographic functions. * **amino** - [amino](https://github.com/binance-chain/docs-site/blob/master/docs/encoding.md) (protobuf-like) encoding and decoding of transactions. * **client** - implementations of Binance Chain transaction types, such as for transfers and trading. * **accounts** - management of "accounts" and wallets, including seed and encrypted mnemonic generation. * **ledger** - Ledger Nano S/X support via HID, U2F and Web BLE (Bluetooth). * **rpc** - Node RPC client. * **transaction** - Transaction Class, build and sign. Please check ts branch for typescript # Installation Important, please follow the instructions for your OS below: **Windows users:** Please install [windows-build-tools](https://www.npmjs.com/package/windows-build-tools) first. **Mac users:** Make sure XCode Command Line Tools are installed: `xcode-select --install`. **Linux users:** Note that Ubuntu Xenial and newer distributions are recommended, especially when using Travis or other CI systems. You may need some dev packages to be installed on your system for USB support. 
On Debian-based distributions (like Ubuntu) you should install them with this command: ```bash $ sudo apt-get install libudev-dev libusb-dev usbutils ``` ### Install the NPM package If you **do not** need Ledger support with node.js: ```bash $ npm i @binance-chain/javascript-sdk --no-optional ``` If you **need** Ledger support with node.js: ```bash $ npm i @binance-chain/javascript-sdk ``` ### Use with Webpack We often see Webpack builds failing with the SDK due to the `usb` dependency, but adding this to your Webpack config should fix that: ```js module.exports = { plugins: [new webpack.IgnorePlugin(/^usb$/)] } ``` or ```js config.plugins.push(new webpack.IgnorePlugin(/^usb$/)) ``` # API For up-to-date API documentation, please check the [wiki](https://github.com/binance-chain/javascript-sdk/wiki). # Testing All new code changes should be covered with unit tests. You can run the tests with the following command: ```bash $ npm run test ``` Tests for the Ledger hardware wallet integration have their own suite that runs in both node and in the browser: ```bash $ npm run test:ledger $ npm run test:ledger:browser ``` # Contributing Contributions to the Binance Chain JavaScript SDK are welcome. Please ensure that you have tested the changes with a local client and have added unit test coverage for your code.
35.821918
297
0.750287
eng_Latn
0.97675
6aefe091efcc8a82d2b160e55b2fa3884b92794b
923
md
Markdown
.examples/README.md
Intera/urlaubsverwaltung
87faa5c950029f9698420835a5b8c22933074fd5
[ "Apache-2.0" ]
null
null
null
.examples/README.md
Intera/urlaubsverwaltung
87faa5c950029f9698420835a5b8c22933074fd5
[ "Apache-2.0" ]
null
null
null
.examples/README.md
Intera/urlaubsverwaltung
87faa5c950029f9698420835a5b8c22933074fd5
[ "Apache-2.0" ]
null
null
null
# Beispieldeployments auf Basis von Docker Seit Version [2.30.0](https://github.com/synyx/urlaubsverwaltung/releases/tag/urlaubsverwaltung-2.30.0) der Urlaubsverwaltung gibt es auch ein Container Image für Docker. ## Docker Über `docker run synyx/urlaubsverwaltung:latest -p 8080:8080` kann die Urlaubsverwaltung als Docker Container gestartet werden. ## docker-compose ### Mit MariaDB Dieses Beispiel sollte nur zum Testen im lokalen Netzwerk verwendet werden, da eine unverschlüsselte HTTP-Verbindung zur Urlaubsverwaltung verwendet wird. Um dieses Beispiel zu verwenden sind folgende Schritte notwendig: * Über `docker-compose pull` wird das neuste Container Image der Urlaubsverwaltung runtergeladen * Der Start der Urlaubsverwaltung inkl. MariaDB erfolgt durch `docker-compose up -d` Falls die Urlaubsverwaltung auf eine neue Version aktualisiert werden sollte, müssen diese zwei Schritte wiederholt werden.
36.92
127
0.816901
deu_Latn
0.99881
6af04f2a263e0ae9dabdedca41ae6ced7d09f672
7,968
md
Markdown
README.md
zaaack/koa-joi-swagger
72820f8f1ecdf4ac787f0423c13ff92a8c68ff5d
[ "MIT" ]
79
2017-04-26T13:21:40.000Z
2021-10-03T08:11:05.000Z
README.md
zaaack/koa-joi-swagger
72820f8f1ecdf4ac787f0423c13ff92a8c68ff5d
[ "MIT" ]
6
2017-05-10T08:52:44.000Z
2019-06-16T13:51:20.000Z
README.md
zaaack/koa-joi-swagger
72820f8f1ecdf4ac787f0423c13ff92a8c68ff5d
[ "MIT" ]
13
2017-05-10T08:53:35.000Z
2019-11-29T21:39:49.000Z
# koa-joi-swagger * Using joi schema to validate request & response, and generate swagger document to create beautiful API documents. [![Build Status](https://travis-ci.org/zaaack/koa-joi-swagger.svg?branch=master)](https://travis-ci.org/zaaack/koa-joi-swagger) [![npm](https://img.shields.io/npm/v/koa-joi-swagger.svg)](https://www.npmjs.com/package/koa-joi-swagger) [![npm](https://img.shields.io/npm/dm/koa-joi-swagger.svg)](https://www.npmjs.com/package/koa-joi-swagger) ## Feature * Router agnostic. * Using your favorite library for validation, and generate swagger document for develop. * Serving Swagger UI in your koa project. * ... ## Install ```sh npm i koa-joi-swagger ``` or ```sh yarn add koa-joi-swagger ``` for v3, install optional dependencies ```sh npm i swagger-ui-dist # or yarn add swagger-ui-dist ``` ## Example ```sh git clone https://github.com/zaaack/koa-joi-swagger.git cd koa-joi-swagger yarn # or npm i SERVE=1 npx babel-node ./test/fixtures/server.js ``` Now open <http://127.0.0.1:3456/swagger>! 
## Demo app.js ```js import { toSwaggerDoc, ui, mixedValidate } from '../../src' import mixedDoc from './mixed-doc' import Koa from 'koa' import DecRouter from 'koa-dec-router' import bodyparser from 'koa-bodyparser' const app = new Koa() const decRouter = DecRouter({ controllersDir: `${__dirname}/controllers`, }) app.use(bodyparser()) const swaggerDoc = toSwaggerDoc(mixedDoc) // mount swagger ui in `/swagger` app.use(ui(swaggerDoc, {pathRoot: '/swagger'})) // handle validation errors app.use(async (ctx, next) => { try { await next() } catch (e) { if (e.name === 'RequestValidationError') { ctx.status = 400 ctx.body = { code: 1, message: e.message, data: e.data, } } else if (e.name === 'ResponseValidationError') { ctx.status = 500 ctx.body = { code: 1, message: e.message, data: e.data, } } } }) // validate request and response by mixedDoc app.use(mixedValidate(mixedDoc, { onError: e => console.log(e.details, e._object), })) // koa-dec-router app.use(decRouter.router.routes()) app.use(decRouter.router.allowedMethods()) app.listen(3456) ``` > "I see the api is simple, but how to write the joi schema and the swagger document?" That's the point: you don't need to write both a joi schema for validation and a swagger document for API docs. > "Oh, no, should I learn a new schema?" Of course not, I hate new schemas, too, especially those made by someone or some company without long support; it's just a waste of time and my brain cells. Therefore, to make this library simple and reliable, I just mixed joi and the swagger document, using [joi-to-json-schema](https://github.com/lightsofapollo/joi-to-json-schema/) to transform joi schemas to swagger schemas. You don't have to learn a new schema, just replace the JSON schema in your swagger document with a joi schema, then let this library do the rest. I call it a mixed document; here is an example. 
```js export default { swagger: '2.0', info: { title: 'Test API', description: 'Test API', version: '1.0.0', }, // the domain of the service // host: 127.0.0.1:3457 // array of all schemes that your API supports schemes: ['https', 'http'], // will be prefixed to all paths basePath: '/api/v1', consumes: ['application/x-www-form-urlencoded'], produces: ['application/json'], paths: { '/posts': { get: { summary: 'Some posts', tags: ['Post'], parameters: { query: Joi.object().keys({ type: Joi.string().valid(['news', 'article']), }), }, responses: { '200': { x: 'Post list', schema: Joi.object().keys({ lists: Joi.array().items(Joi.object().keys({ title: Joi.string().description('Post title'), content: Joi.string().required().description('Post content'), })) }), }, 'default': { description: 'Error happened', schema: Joi.object().json().keys({ code: Joi.number().integer(), message: Joi.string(), data: Joi.object(), }), }, } } }, }, } ``` You can see the differences between this and the real swagger document, just replace `parameters` and `responses` to joi schema instead of JSON schema, [Here is the swagger document that generate from mixed document above](docs/swagger-doc-from-mixed-doc.json). 
## API ```js import JoiSwagger, { toSwaggerDoc, mixedValidate, joiValidate, ui } from 'koa-joi-swagger' import Koa from 'koa' const app = new Koa() /* JoiSwagger = { toSwaggerDoc, mixedValidate, joiValidate, ui, Joi, } */ const mixedDoc = require('./mixed-doc') const swaggerDoc = toSwaggerDoc(mixedDoc) // parse mixed document to swagger document for swagger-ui // // const defaultResJoiOpts = { // stripUnknown: true, // convert: true, // } app.use(mixedValidate(mixedDoc, { reqOpts: { stripUnknown: false, convert: true, }, // optional, ctx.request joi validation options, here is default resOpts: { // optional, ctx.response joi validation options, here is default stripUnknown: true, // this would remove additional properties convert: true, // this would convert field types }, onError: err => console.error(err), // Do something with the error, the error would throw anyway. })) app.use(ui(swaggerDoc, { pathRoot: '/swagger', // optional, swagger path skipPaths: [], // optional, skip paths UIHtml: defaultUIHtml, // optional, get ui html swaggerConfig: '', // optional, a json5 string, e.g. `{ <field>: <value>, .... }` to display in html for overriding swagger ui options. sendConfig: { maxage: 3600 * 1000 * 24 * 30 }, // optional, config for koa-send, default maxage is 1 month. v3: false, // optional, default is v2, you need to install optional dependencies `swagger-ui-dist` first. })) // joiValidate // the internal joi validation function used by mixedValidate, in case you need one. // JoiSwagger.Joi // The joi used to validate, with some opinionated extension, you can override it or using it. ``` ## Q & A #### 1. Why not using [ajv](https://github.com/epoberezkin/ajv) to validate by swagger document directly? I have think it before, but hit some problems like validating javascript date object, remove additionalProperties, etc. And writing JSON schema is too verbose. Joi is the best validation library in NodeJS, we should take the advantage. #### 2. Why not using YAML? 
YAML is not easy to reuse, although JSON schema can reuse model, and how to reuse shared properties between models? I can't find a way. Pure javascrip can easily reuse or wrap model schema, and you can wrap each final schema with a function, don't feel pain when adding properties for each request schema in the future. #### 3. You extended Joi, why? Sorry, joi's philosophy is too strict for me, I really don't need to explicit declare the string could be empty, so I override the original `Joi.string()` to make `Joi.string().empty('')` is a default behavior. Also, add a `.force()` method for string/number type, to coerce the field to string/number regardless of the original type, it's really useful when validating some bson type like Long, Deciaml or Custom object. Added a `Joi.object().json()` to coerce object with `toJSON` method to a plain JSON object. This would useful when validation some ORM/ODM's model object (like mongorito). [See the code](src/joi.js) And I highly recommend using this extended joi to write your schemas, and adding your extension if you need. You can also using other version of Joi to validate. ```js import JoiSwagger from 'koa-joi-swagger' import myJoi from './myJoi' // using export const Joi = JoiSwagger.Joi // override JoiSwagger.Joi = myJoi ```
30.412214
365
0.672942
eng_Latn
0.89744
6af0b57db378d740f93153fdc2d755a5bfca6987
1,097
md
Markdown
docs/source/api/Apollo/structs/JSONResponseParsingInterceptor.md
pruthvesh/apollo-ios
ece5b5b89e8b6cb0ac265f253e0ff8924c9d31a7
[ "MIT" ]
3,195
2017-01-31T00:14:16.000Z
2022-03-31T09:46:27.000Z
docs/source/api/Apollo/structs/JSONResponseParsingInterceptor.md
pruthvesh/apollo-ios
ece5b5b89e8b6cb0ac265f253e0ff8924c9d31a7
[ "MIT" ]
1,425
2017-02-01T10:55:39.000Z
2022-03-31T18:56:48.000Z
docs/source/api/Apollo/structs/JSONResponseParsingInterceptor.md
pruthvesh/apollo-ios
ece5b5b89e8b6cb0ac265f253e0ff8924c9d31a7
[ "MIT" ]
666
2017-02-01T10:23:30.000Z
2022-03-28T19:57:32.000Z
**STRUCT** # `JSONResponseParsingInterceptor` ```swift public struct JSONResponseParsingInterceptor: ApolloInterceptor ``` An interceptor which parses JSON response data into a `GraphQLResult` and attaches it to the `HTTPResponse`. ## Properties ### `cacheKeyForObject` ```swift public let cacheKeyForObject: CacheKeyForObject? ``` ## Methods ### `init(cacheKeyForObject:)` ```swift public init(cacheKeyForObject: CacheKeyForObject? = nil) ``` Designated Initializer ### `interceptAsync(chain:request:response:completion:)` ```swift public func interceptAsync<Operation: GraphQLOperation>( chain: RequestChain, request: HTTPRequest<Operation>, response: HTTPResponse<Operation>?, completion: @escaping (Result<GraphQLResult<Operation.Data>, Error>) -> Void ) ``` #### Parameters | Name | Description | | ---- | ----------- | | chain | The chain the interceptor is a part of. | | request | The request, as far as it has been constructed | | response | [optional] The response, if received | | completion | The completion block to fire when data needs to be returned to the UI. |
24.377778
108
0.731085
eng_Latn
0.872153
6af18b7db98adf1f122413ec07cad74419905604
13,703
md
Markdown
pages/gerrit/rest-api-access.md
GerritCodeReview/homepage-test
023bafc8ff9d691f4c163fdfc9a90df3b22c24ec
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
null
null
null
pages/gerrit/rest-api-access.md
GerritCodeReview/homepage-test
023bafc8ff9d691f4c163fdfc9a90df3b22c24ec
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
null
null
null
pages/gerrit/rest-api-access.md
GerritCodeReview/homepage-test
023bafc8ff9d691f4c163fdfc9a90df3b22c24ec
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
null
null
null
--- title: " Gerrit Code Review - /access/ REST API" sidebar: restapi_sidebar permalink: rest-api-access.html --- This page describes the access rights related REST endpoints. Please also take note of the general information on the [REST API](rest-api.html). ## Access Rights Endpoints ### List Access Rights *GET /access/?project=[{project-name}](rest-api-projects.html#project-name)* Lists the access rights for projects. The projects for which the access rights should be returned must be specified as `project` options. The `project` can be specified multiple times. As result a map is returned that maps the project name to [ProjectAccessInfo](#project-access-info) entities. The entries in the map are sorted by project name. **Request.** ``` GET /access/?project=MyProject&project=All-Projects HTTP/1.0 ``` **Response.** ``` HTTP/1.1 200 OK Content-Type: application/json; charset=UTF-8 )]}' { "All-Projects": { "revision": "edd453d18e08640e67a8c9a150cec998ed0ac9aa", "local": { "GLOBAL_CAPABILITIES": { "permissions": { "priority": { "rules": { "15bfcd8a6de1a69c50b30cedcdcc951c15703152": { "action": "BATCH" } } }, "streamEvents": { "rules": { "15bfcd8a6de1a69c50b30cedcdcc951c15703152": { "action": "ALLOW" } } }, "administrateServer": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" } } } } }, "refs/meta/config": { "permissions": { "submit": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } }, "label-Code-Review": { "label": "Code-Review", "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW", "min": -2, "max": 2 }, "global:Project-Owners": { "action": "ALLOW", "min": -2, "max": 2 } } }, "read": { "exclusive": true, "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } }, "push": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, 
"global:Project-Owners": { "action": "ALLOW" } } } } }, "refs/for/refs/*": { "permissions": { "pushMerge": { "rules": { "global:Registered-Users": { "action": "ALLOW" } } }, "push": { "rules": { "global:Registered-Users": { "action": "ALLOW" } } } } }, "refs/tags/*": { "permissions": { "createSignedTag": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } }, "createTag": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } } } }, "refs/heads/*": { "permissions": { "forgeCommitter": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } }, "forgeAuthor": { "rules": { "global:Registered-Users": { "action": "ALLOW" } } }, "submit": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } }, "editTopicName": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW", "force": true }, "global:Project-Owners": { "action": "ALLOW", "force": true } } }, "label-Code-Review": { "label": "Code-Review", "rules": { "global:Registered-Users": { "action": "ALLOW", "min": -1, "max": 1 }, "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW", "min": -2, "max": 2 }, "global:Project-Owners": { "action": "ALLOW", "min": -2, "max": 2 } } }, "create": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } }, "push": { "rules": { "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" }, "global:Project-Owners": { "action": "ALLOW" } } } } }, "refs/*": { "permissions": { "read": { "rules": { "global:Anonymous-Users": { "action": "ALLOW" }, "53a4f647a89ea57992571187d8025f830625192a": { "action": "ALLOW" } } } } } }, "is_owner": true, "owner_of": [ "GLOBAL_CAPABILITIES", 
"refs/meta/config", "refs/for/refs/*", "refs/tags/*", "refs/heads/*", "refs/*" ], "can_upload": true, "can_add": true, "config_visible": true, "groups": { "53a4f647a89ea57992571187d8025f830625192a": { "url": "#/admin/groups/uuid-53a4f647a89ea57992571187d8025f830625192a", "options": {}, "description": "Gerrit Site Administrators", "group_id": 1, "owner": "Administrators", "owner_id": "53a4f647a89ea57992571187d8025f830625192a", "created_on": "2009-06-08 23:31:00.000000000", "name": "Administrators" }, "global:Registered-Users": { "options": {}, "name": "Registered Users" }, "global:Project-Owners": { "options": {}, "name": "Project Owners" }, "15bfcd8a6de1a69c50b30cedcdcc951c15703152": { "url": "#/admin/groups/uuid-15bfcd8a6de1a69c50b30cedcdcc951c15703152", "options": {}, "description": "Users who perform batch actions on Gerrit", "group_id": 2, "owner": "Administrators", "owner_id": "53a4f647a89ea57992571187d8025f830625192a", "created_on": "2009-06-08 23:31:00.000000000", "name": "Non-Interactive Users" }, "global:Anonymous-Users": { "options": {}, "name": "Anonymous Users" } } }, "MyProject": { "revision": "61157ed63e14d261b6dca40650472a9b0bd88474", "inherits_from": { "id": "All-Projects", "name": "All-Projects", "description": "Access inherited by all other projects." }, "local": {}, "is_owner": true, "owner_of": [ "refs/*" ], "can_upload": true, "can_add": true, "config_visible": true } } ``` ## JSON Entities ### AccessSectionInfo The `AccessSectionInfo` describes the access rights that are assigned on a ref. 
| Field Name | | Description |
| :-- | :-- | :-- |
| `permissions` | | The permissions assigned on the ref of this access section as a map that maps the permission names to [PermissionInfo](#permission-info) entities. |

### PermissionInfo

The `PermissionInfo` entity contains information about an assigned permission.

| Field Name | | Description |
| :-- | :-- | :-- |
| `label` | optional | The name of the label. Not set if it's not a label permission. |
| `exclusive` | not set if `false` | Whether this permission is assigned exclusively. |
| `rules` | | The rules assigned for this permission as a map that maps the UUIDs of the groups for which the permission is assigned to [PermissionRuleInfo](#permission-rule-info) entities. |

### PermissionRuleInfo

The `PermissionRuleInfo` entity contains information about a permission rule that is assigned to a group.

| Field Name | | Description |
| :-- | :-- | :-- |
| `action` | | The action of this rule. For normal permissions this can be `ALLOW`, `DENY` or `BLOCK`. Special values for global capabilities are `INTERACTIVE` and `BATCH`. |
| `force` | not set if `false` | Whether the force flag is set. |
| `min` | not set if range is empty (from `0` to `0`) or not set | The min value of the permission range. |
| `max` | not set if range is empty (from `0` to `0`) or not set | The max value of the permission range. |

### ProjectAccessInfo

The `ProjectAccessInfo` entity contains information about the access rights for a project.

| Field Name | | Description |
| :-- | :-- | :-- |
| `revision` | | The revision of the `refs/meta/config` branch from which the access rights were loaded. |
| `inherits_from` | not set for the `All-Projects` project | The parent project from which permissions are inherited as a [ProjectInfo](rest-api-projects.html#project-info) entity. |
| `local` | | The local access rights of the project as a map that maps the refs to [AccessSectionInfo](#access-section-info) entities. |
| `is_owner` | not set if `false` | Whether the calling user owns this project. |
| `owner_of` | | The list of refs owned by the calling user. |
| `can_upload` | not set if `false` | Whether the calling user can upload to any ref. |
| `can_add` | not set if `false` | Whether the calling user can add any ref. |
| `config_visible` | not set if `false` | Whether the calling user can see the `refs/meta/config` branch of the project. |

## GERRIT

Part of [Gerrit Code Review](index.html)
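As the response example above shows, Gerrit prefixes every JSON response with the magic string `)]}'` to protect against cross-site script inclusion; clients must strip it before parsing. The sketch below (plain Python, standard library only; the response body is inlined rather than fetched over HTTP) strips the prefix and then walks the resulting project map using the `ProjectAccessInfo` fields described above:

```python
import json

XSSI_PREFIX = ")]}'"

def parse_gerrit_response(body: str) -> dict:
    """Strip Gerrit's XSSI-protection prefix, then decode the JSON payload."""
    if body.startswith(XSSI_PREFIX):
        body = body[len(XSSI_PREFIX):]
    return json.loads(body)

# A trimmed-down /access/ response, shaped like the example above.
raw = XSSI_PREFIX + "\n" + json.dumps({
    "MyProject": {
        "revision": "61157ed63e14d261b6dca40650472a9b0bd88474",
        "inherits_from": {"id": "All-Projects", "name": "All-Projects"},
        "local": {},
        "is_owner": True,
        "owner_of": ["refs/*"],
        "can_upload": True,
    }
})

access = parse_gerrit_response(raw)
for project, info in sorted(access.items()):
    # owner_of lists the refs owned by the calling user (see ProjectAccessInfo).
    print(project, "owns:", ", ".join(info.get("owner_of", [])))
# → MyProject owns: refs/*
```

The same parsing step applies to every REST endpoint in this API, since the `)]}'` prefix is emitted on all JSON responses.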
<!-- Do not edit this file. It is automatically generated by API Documenter. -->

[Home](./index.md) &gt; [@siteimprove/alfa-css](./alfa-css.md) &gt; [Radius](./alfa-css.radius_namespace.md) &gt; [JSON](./alfa-css.radius_namespace.json_interface.md) &gt; [value](./alfa-css.radius_namespace.json_interface.value_propertysignature.md)

## Radius.JSON.value property

<b>Signature:</b>

```typescript
value: Length.JSON | Percentage.JSON | Keyword.JSON;
```
---
title: Attributes (Attribute Relationship tab, Dimension Designer) (Analysis Services - Multidimensional Data) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- analysis-services
ms.topic: conceptual
f1_keywords:
- sql12.asvs.dimensiondesigner.ardesigner.attributes.f1
ms.assetid: 850a68aa-1d70-4f0f-ba39-aeca834596c1
author: minewiskan
ms.author: owend
manager: craigg
---
# <a name="attributes-attribute-relationship-designer-tab-dimension-designer-analysis-services---multidimensional-data"></a>Attributes (Attribute Relationship Tab, Dimension Designer) (Analysis Services - Multidimensional Data)

Use the **Attributes** list to find a specific attribute in the attribute relationship diagram or to define a new attribute relationship. This pane appears immediately below the pane that contains the attribute relationship diagram.

**To display the Attributes pane**

1. In [!INCLUDE[ssBIDevStudioFull](../includes/ssbidevstudiofull-md.md)], double-click a dimension in Solution Explorer to open Dimension Designer, and then click the **Attribute Relationships** tab.

2. On the toolbar, click the **Show List Views** icon.

## <a name="using-the-attributes-list"></a>Using the Attributes List

The **Attributes** list displays all the attributes in the dimension. To find a specific attribute in the attribute relationship diagram, double-click the attribute in the list. The attribute is highlighted in the attribute relationship diagram, and its properties are displayed in the Properties window. If the attribute is not visible in the attribute relationship diagram, only the attribute's properties are displayed in the **Properties** window.

> [!NOTE]
> You can select multiple attributes in the list and make changes to them.

To define a new relationship or rename the attribute, right-click the attribute and then click the corresponding command on the shortcut menu.

### <a name="shortcut-menu-options"></a>Shortcut Menu Options

**New Attribute Relationship**
Opens the **Create Attribute Relationship** dialog box, in which you can define a new attribute relationship. For more information, see [Create and Edit Attribute Relationship Dialog Boxes &#40;Attribute Relationship Tab, Dimension Designer&#41; &#40;Analysis Services - Multidimensional Data&#41;](create-edit-attribute-relationships-dialog-boxes-analysis-services-multidimensional-data.md) and [Define Attribute Relationships](multidimensional-models/attribute-relationships-define.md).

**Rename**
Highlights the attribute name in the list and lets you edit the text.

**Properties**
Displays the attribute's properties in the **Properties** window.

## <a name="see-also"></a>See Also

[Attribute Relationships &#40;Dimension Designer&#41; &#40;Analysis Services - Multidimensional Data&#41;](attribute-relationships-dimension-designer-analysis-services-multidimensional-data.md)
[Toolbar &#40;Attribute Relationship Tab, Dimension Designer&#41; &#40;Analysis Services - Multidimensional Data&#41;](toolbar-attribute-relationship-dimension-designer-analysis-services-multidimensional-data.md)
[Attribute Relationship Diagram &#40;Attribute Relationship Tab, Dimension Designer&#41; &#40;Analysis Services - Multidimensional Data&#41;](attribute-relationship-diagram-analysis-services-multidimensional-data.md)
[Attribute Relationships &#40;Attribute Relationship Tab, Dimension Designer&#41; &#40;Analysis Services - Multidimensional Data&#41;](attribute-relationships-designer-tab-dimension-designer-analysis-services-multidimensional-data.md)
# Seasons Hard Logic Strategy Guide

This document details tricks and obscurities that are in Seasons hard logic. See [seasons_notes.md](https://github.com/jangler/oracles-randomizer/blob/doc/seasons_notes.md) for a list of things that are out of logic even in hard.

## General

- Use mystery seeds to light torches (1 in 4 chance of success). This is only in logic for rooms with no more than 2 torches.
- Get various types of seeds from sources other than trees:
  - Pegasus seeds from Subrosia Market (requires shield)
  - Ember seeds from the plants in Agunima's room in D4
  - Ember seeds from the Armos puzzle chest in D5
  - Ember seeds from the plants in the first Poe Sister's room in D7
  - Mystery seeds from Frypolar's room in D8
- Get bombs from the plants in D2.
- Use bomb boosts to increase jump distance. Pull a bomb near the gap you're jumping, move slightly closer to the gap, and jump in the desired direction as the bomb explodes. Buffering the simultaneous jump + directional input by holding the relevant buttons while closing a menu makes the trick easier. Strictly horizontal and vertical bomb jumps include:
  - Lava between Mt. Cucco portal and Subrosia Market
  - Water between D5 entrance and minecart room
  - Lava before cave entrance in Temple Remains
  - Water in Sunken City diving cave
  - Water in cave under Floodgate Keeper's House
  - Lava in Goron Mountain cave
  - Water in Goron Mountain cave
  - Lava between north & south Subrosia
  - Lava in Subrosia on Temple Remains portal screen

Specific multi-directional bomb jumps are detailed later in this document, with video examples.

## Overworld

- RNG manip 100-rupee drops using the shovel, for infinite rupees in logic. Hard reset, mash through the title screen (it rolls RNG every frame), make 7 screen transitions, and dig until the 100-rupee drop. Make sure not to delay the dig after the Rope, since the Rope also rolls RNG periodically. [[Video]](https://imgur.com/NH4Vwbd)
- Access the chest in the Eastern Suburbs cave using a bomb + feather jump. [[Video]](https://imgur.com/pIL3Yqh)
- Cross the easternmost pits in Natzu Wasteland using a bomb + seeds + feather jump. [[Video]](https://imgur.com/9hT04QH)
- Reach Moblin Keep in Natzu Wasteland using only feather to clear pits. [[Video]](https://streamable.com/e9okj)
- Cross the Moblin Keep pool using a bomb + cape jump. Position yourself above and to the right of the bomb, and hold right as you jump. [[Video]](https://imgur.com/bYwxJjV)
- Cucco clip: reach the cave below the Spring Banana tree by grabbing the Cucco through the corner – move straight right into the rock before the screen transition so that the rock pushes you downward. This only matters if you have gale seeds to warp out. [[Video]](https://gfycat.com/negativeclumsyafricanfisheagle)
- Jump to the vanilla Dragon Key location using only cape. [[Video]](https://imgur.com/fILXdPC)
- Jump to the Subrosia portal in Horon Village using seeds + cape. [[Video]](https://imgur.com/elOp0hn)
- Cornerwalk across the gaps in the house on the Western Coast in order to reach the stump. In the second room, only move diagonally for a moment, then hold right. To get back, reenter and walk straight up. [[Video]](https://imgur.com/7Fi2LWy)
- Access the western stump in Temple Remains without autumn, using seeds + cape. [[Video]](https://imgur.com/SXQvM8b)
- Hit the lever in the floodgate keeper's house by throwing a pot. The horizontal position of the throw is precise. [[Video]](https://clips.twitch.tv/ExpensiveAbnegateMoonFUNgineer)

## Subrosia

- Bomb jump between the market and furnace areas. [[Video]](https://imgur.com/YCQk2vr)

## Dungeons

- Throw bushes in D1 to kill Goriya and hit the minecart lever. [[Video]](https://imgur.com/mrFmfkq)
- Throw pots in D2 to access the vanilla bracelet chest and blade trap room. [[Video]](https://imgur.com/TwtKSWS)
- Hit minecart levers in D4 by dropping pots while passing by. This works for both 1F minecarts. [[Video]](https://clips.twitch.tv/LaconicYawningPresidentFUNgineer)
- Fight the D4 boss (Gohma) using ember seeds or scent seeds from the seed satchel. [[Video]](https://www.youtube.com/watch?v=hXcSwAE86mE)
- Pegasus + feather jump to reach the left chest in D5 (no bomb boost required). [[Video]](https://clips.twitch.tv/CredulousTemperedMulePicoMause)
- Bomb + feather jump across the water to get to the minecarts in D5. [[Video]](https://imgur.com/iwOlNER)
- Damage boost off enemies to cross D5 sidescroller gaps without feather. [[Video]](https://imgur.com/LO7HqWf)
- Jump through the "wall" of fire traps in D5. [[Video]](https://imgur.com/MV7RBH4)
- Corner-walk around the first crystal in D6, or just run through the blade traps, obviating feather or a means to break the crystal.
- Poe skip: skip fighting the first Poe in D7. [[Video]](https://imgur.com/NC1AVV2)
- Bomb jump capeless to the button before the D7 miniboss. [[Video]](https://clips.twitch.tv/CloudyGoodReubenOneHand)
- Hit the three buttons in D7 in the correct order without pegasus seeds, by jumping on the edges of the tiles. [[Video]](https://imgur.com/8PvpNlV)
- Hit the first eye statue in D8 with seeds + feather + satchel. [[Video]](https://imgur.com/yJnKZ18)
- Hit sets of three eye statues in D8 with feather + satchel/slingshot instead of HSS. Only ember and scent seeds are in logic with the slingshot. [[Video]](https://imgur.com/gFFV97x)
- Activate the D8 bridge switch without L-2 boomerang or seeds + cape. [[Video]](https://imgur.com/IpnfKtE)
# stackframes

[![Sauce Test Status](https://saucelabs.com/buildstatus/serapath)](https://app.saucelabs.com/u/serapath)

https://www.npmjs.com/package/stackframes

https://serapath.github.io/stackframes/

---

[![Sauce Test Status](https://saucelabs.com/browser-matrix/serapath.svg)](https://saucelabs.com/u/serapath)

# use

`npm install stackframes`

```js
const stackframes = require('stackframes')

demo()

function demo () {
  var error
  try {
    function foobarbaz () { throw new Error('foobar') }
    function bazbarfoo () { foobarbaz() }
    bazbarfoo()
  } catch (e) {
    error = e
  }
  example()
  function example () { foo() }
  function foo () { bar() }
  function bar () { baz() }
  function baz () {
    const defaultFlags = stackframes.defaultFlags
    console.log(defaultFlags)
    const flags = defaultFlags.filter((_, i) => i % 2) // take every second flag
    console.log('0', stackframes(error, flags))
    console.log('1', stackframes())
    console.log('2', stackframes({ exclude: foo }))
    console.log('3', stackframes({ exclude: example }))
    console.log('4', stackframes({ depths: 2, exclude: baz }))
    console.log('5', stackframes({ depths: 2 }))
    console.log('6', stackframes(null, flags))
  }
}
```

## supported by

![Testing Powered By SauceLabs](https://raw.githubusercontent.com/saucelabs/opensource/master/assets/powered-by-saucelabs-badge-white.svg?sanitize=true "Testing Powered By SauceLabs")
# Class: Manager<Ctx, Renderer, FallbackRenderer\>

[index](../modules/index.md).Manager

Creates a new Listr2 task manager. Useful for creating a single instance of Listr2 with pre-set settings.

## Type parameters

| Name | Type |
| :-- | :-- |
| `Ctx` | [`ListrContext`](../types/index.ListrContext.md) |
| `Renderer` | extends [`ListrRendererValue`](../types/index.ListrRendererValue.md) = `"default"` |
| `FallbackRenderer` | extends [`ListrRendererValue`](../types/index.ListrRendererValue.md) = `"verbose"` |

## Constructors

### constructor

• **new Manager**<`Ctx`, `Renderer`, `FallbackRenderer`\>(`options?`)

#### Type parameters

| Name | Type |
| :-- | :-- |
| `Ctx` | `any` |
| `Renderer` | extends [`ListrRendererValue`](../types/index.ListrRendererValue.md) = `"default"` |
| `FallbackRenderer` | extends [`ListrRendererValue`](../types/index.ListrRendererValue.md) = `"verbose"` |

#### Parameters

| Name | Type |
| :-- | :-- |
| `options?` | [`ListrBaseClassOptions`](../types/index.ListrBaseClassOptions.md)<`Ctx`, `Renderer`, `FallbackRenderer`\> |

#### Defined in

[src/manager.ts:15](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L15)

## Properties

### err

• **err**: [`ListrError`](index.ListrError.md)<`Record`<`PropertyKey`, `any`\>\>[] = `[]`

#### Defined in

[src/manager.ts:12](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L12)

---

### tasks

• `Private` **tasks**: [`ListrTask`](../interfaces/index.ListrTask.md)<`any`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>[] = `[]`

#### Defined in

[src/manager.ts:13](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L13)

---

### options

• `Optional` **options**: [`ListrBaseClassOptions`](../types/index.ListrBaseClassOptions.md)<`Ctx`, `Renderer`, `FallbackRenderer`\>

## Accessors

### ctx

• `set` **ctx**(`ctx`): `void`

#### Parameters

| Name | Type |
| :-- | :-- |
| `ctx` | `Ctx` |

#### Returns

`void`

#### Defined in

[src/manager.ts:17](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L17)

## Methods

### add

▸ **add**<`InjectCtx`\>(`tasks`, `options?`): `void`

#### Type parameters

| Name | Type |
| :-- | :-- |
| `InjectCtx` | `Ctx` |

#### Parameters

| Name | Type |
| :-- | :-- |
| `tasks` | [`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>[] \| (`ctx?`: `InjectCtx`) => [`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>[] |
| `options?` | [`ListrSubClassOptions`](../types/index.ListrSubClassOptions.md)<`InjectCtx`, `Renderer`\> |

#### Returns

`void`

#### Defined in

[src/manager.ts:21](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L21)

---

### runAll

▸ **runAll**<`InjectCtx`\>(`options?`): `Promise`<`InjectCtx`\>

#### Type parameters

| Name | Type |
| :-- | :-- |
| `InjectCtx` | `Ctx` |

#### Parameters

| Name | Type |
| :-- | :-- |
| `options?` | [`ListrBaseClassOptions`](../types/index.ListrBaseClassOptions.md)<`InjectCtx`, `Renderer`, `FallbackRenderer`\> |

#### Returns

`Promise`<`InjectCtx`\>

#### Defined in

[src/manager.ts:30](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L30)

---

### newListr

▸ **newListr**<`InjectCtx`, `InjectRenderer`, `InjectFallbackRenderer`\>(`tasks`, `options?`): [`Listr`](index.Listr.md)<`InjectCtx`, `InjectRenderer`, `InjectFallbackRenderer`\>

#### Type parameters

| Name | Type |
| :-- | :-- |
| `InjectCtx` | `InjectCtx` |
| `InjectRenderer` | extends [`ListrRendererValue`](../types/index.ListrRendererValue.md) = `Renderer` |
| `InjectFallbackRenderer` | extends [`ListrRendererValue`](../types/index.ListrRendererValue.md) = `FallbackRenderer` |

#### Parameters

| Name | Type |
| :-- | :-- |
| `tasks` | [`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`InjectRenderer`\>\>[] |
| `options?` | [`ListrBaseClassOptions`](../types/index.ListrBaseClassOptions.md)<`InjectCtx`, `InjectRenderer`, `InjectFallbackRenderer`\> |

#### Returns

[`Listr`](index.Listr.md)<`InjectCtx`, `InjectRenderer`, `InjectFallbackRenderer`\>

#### Defined in

[src/manager.ts:41](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L41)

---

### indent

▸ **indent**<`InjectCtx`\>(`tasks`, `options?`, `taskOptions?`): [`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>

#### Type parameters

| Name | Type |
| :-- | :-- |
| `InjectCtx` | `Ctx` |

#### Parameters

| Name | Type |
| :-- | :-- |
| `tasks` | [`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>[] \| (`ctx?`: `InjectCtx`) => [`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>[] |
| `options?` | [`ListrBaseClassOptions`](../types/index.ListrBaseClassOptions.md)<`InjectCtx`, `Renderer`, `FallbackRenderer`\> |
| `taskOptions?` | `Omit`<[`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>, `"task"`\> |

#### Returns

[`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>

#### Defined in

[src/manager.ts:48](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L48)

---

### run

▸ **run**<`InjectCtx`\>(`tasks`, `options?`): `Promise`<`InjectCtx`\>

#### Type parameters

| Name | Type |
| :-- | :-- |
| `InjectCtx` | `Ctx` |

#### Parameters

| Name | Type |
| :-- | :-- |
| `tasks` | [`ListrTask`](../interfaces/index.ListrTask.md)<`InjectCtx`, [`ListrGetRendererClassFromValue`](../types/index.ListrGetRendererClassFromValue.md)<`Renderer`\>\>[] |
| `options?` | [`ListrBaseClassOptions`](../types/index.ListrBaseClassOptions.md)<`InjectCtx`, `Renderer`, `FallbackRenderer`\> |

#### Returns

`Promise`<`InjectCtx`\>

#### Defined in

[src/manager.ts:72](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L72)

---

### getRuntime

▸ **getRuntime**(`pipetime`): `string`

#### Parameters

| Name | Type |
| :-- | :-- |
| `pipetime` | `number` |

#### Returns

`string`

#### Defined in

[src/manager.ts:91](https://github.com/cenk1cenk2/listr2/blob/12dcf06/src/manager.ts#L91)
# pyimc

Python bindings for the [Inter-Module Communication Protocol (IMC)](https://lsts.fe.up.pt/toolchain/imc) used to communicate between modules in the [LSTS toolchain](https://lsts.fe.up.pt/).

### Installation

#### Clone this project using

```bash
git clone --recursive git://github.com/oysstu/pyimc.git
```

This includes the pybind11 submodule.

###### (Optional) Use a specific IMC/Dune version

The setup checks for a folder named imc and dune in the top folder. If these are not found, they are retrieved from the LSTS repositories (master). To use a different version, simply add a folder called dune or imc, respectively, in the top folder. They will automatically be used.

#### Build and install using setuptools (wrapper around cmake)

```bash
python3 setup.py install
```

If you use the system python and only want to install for a single user, you can add `--user` to the install command without needing administrator rights. On Windows, the Windows SDK must be installed with Visual Studio and the CMake executable must be on the system PATH.

###### (Optional) Only generate bindings for a subset of IMC messages

A config file named whitelist.cfg can be placed in the root folder to only create bindings for a subset of the IMC messages. This can be necessary when compiling on embedded systems, as the linker consumes much memory for the full message set. If an unknown message is parsed, it will be returned as the Message baseclass rather than a specialized message. Look at minimal_whitelist.cfg for a set of messages that should always be included.

#### Recommendations

- The pyimc library generates stub files for the bindings, meaning that you can have autocomplete and static type checking if your IDE supports them. This can for example be [PyCharm](https://www.jetbrains.com/pycharm/) or [Jedi](https://github.com/davidhalter/jedi)-based editors.
50.675676
281
0.7744
eng_Latn
0.995944
6af35fcc1688c36891eba360043a0eea6700b222
1,687
md
Markdown
WindowsServerDocs/administration/windows-commands/subcommand-start-transportserver.md
MasahikoSada/windowsserverdocs.ja-jp
afaa88d24841b699fdbf6955048b00797d063d8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/administration/windows-commands/subcommand-start-transportserver.md
MasahikoSada/windowsserverdocs.ja-jp
afaa88d24841b699fdbf6955048b00797d063d8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/administration/windows-commands/subcommand-start-transportserver.md
MasahikoSada/windowsserverdocs.ja-jp
afaa88d24841b699fdbf6955048b00797d063d8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Subcommand start-TransportServer
description: 'Windows Commands topic for start-TransportServer'
ms.custom: na
ms.prod: windows-server-threshold
ms.reviewer: na
ms.suite: na
ms.technology: manage-windows-commands
ms.tgt_pltfrm: na
ms.topic: article
ms.assetid: 0e93bc84-5b9e-4f9d-8cf0-1634417da0f6
author: coreyp-at-msft
ms.author: coreyp
manager: dongill
ms.date: 10/16/2017
ms.openlocfilehash: 5fdfea020019a45eceac0142160f9d5d4d97b989
ms.sourcegitcommit: 0d0b32c8986ba7db9536e0b8648d4ddf9b03e452
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/17/2019
ms.locfileid: "59848633"
---
# <a name="subcommand-start-transportserver"></a>Subcommand: start-TransportServer

>Applies to: Windows Server (Semi-Annual Channel), Windows Server 2016, Windows Server 2012 R2, Windows Server 2012

Starts all services on the Transport Server.

## <a name="syntax"></a>Syntax
```
wdsutil [Options] /start-TransportServer [/Server:<Server name>]
```
## <a name="parameters"></a>Parameters

|Parameter|Description|
|-------|--------|
|[/Server:<Server name>]|Specifies the name of the Transport Server. This can be either the NetBIOS name or the fully qualified domain name (FQDN). If no server name is specified, the local server is used.|

## <a name="BKMK_examples"></a>Examples
To start the server, type either of the following:
```
wdsutil /start-TransportServer
wdsutil /verbose /start-TransportServer /Server:MyWDSServer
```
#### <a name="additional-references"></a>Additional references
[Command-Line Syntax Key](command-line-syntax-key.md)
[Using the disable-TransportServer command](using-the-disable-transportserver-command.md)
[Using the enable-TransportServer command](using-the-enable-transportserver-command.md)
[Using the get-TransportServer command](using-the-get-transportserver-command.md)
[Subcommand: set-TransportServer](subcommand-set-transportserver.md)
[Subcommand: stop-TransportServer](subcommand-stop-transportserver.md)
34.428571
127
0.783047
yue_Hant
0.329924
6af3e8c0de6c6e3159631d92d068c0610a7a342e
19,677
md
Markdown
articles/machine-learning/how-to-secure-inferencing-vnet.md
Kraviecc/azure-docs.pl-pl
4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/machine-learning/how-to-secure-inferencing-vnet.md
Kraviecc/azure-docs.pl-pl
4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/machine-learning/how-to-secure-inferencing-vnet.md
Kraviecc/azure-docs.pl-pl
4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Secure inferencing environments with virtual networks
titleSuffix: Azure Machine Learning
description: Use an isolated Azure Virtual Network to secure your Azure Machine Learning inferencing environment.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: how-to
ms.reviewer: larryfr
ms.author: peterlu
author: peterclu
ms.date: 10/23/2020
ms.custom: contperf-fy20q4, tracking-python, contperf-fy21q1, devx-track-azurecli
ms.openlocfilehash: 1a1a9158c06a12caaeb5702f2fdf7da3c801c143
ms.sourcegitcommit: 87a6587e1a0e242c2cfbbc51103e19ec47b49910
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/16/2021
ms.locfileid: "103573442"
---
# <a name="secure-an-azure-machine-learning-inferencing-environment-with-virtual-networks"></a>Secure an Azure Machine Learning inferencing environment with virtual networks

In this article, you learn how to secure inferencing environments with a virtual network in Azure Machine Learning.

This article is part four of a five-part series that walks you through securing an Azure Machine Learning workflow. We highly recommend that you read through [Part one: VNet overview](how-to-network-security-overview.md) to understand the overall architecture first.

See the other articles in this series:

[1. VNet overview](how-to-network-security-overview.md) > [2. Secure the workspace](how-to-secure-workspace-vnet.md) > [3. Secure the training environment](how-to-secure-training-vnet.md) > **4. Secure the inferencing environment** > [5. Enable studio functionality](how-to-enable-studio-virtual-network.md)

In this article you learn how to secure the following inferencing resources in a virtual network:
> [!div class="checklist"]
> - Default Azure Kubernetes Service (AKS) cluster
> - Private AKS cluster
> - AKS cluster with private link
> - Azure Container Instances (ACI)

## <a name="prerequisites"></a>Prerequisites

+ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.

+ An existing virtual network and subnet to use with your compute resources.

+ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):

    - "Microsoft.Network/virtualNetworks/join/action" on the virtual network resource.
    - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.

    For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking)

<a id="aksvnet"></a>

## <a name="azure-kubernetes-service"></a>Azure Kubernetes Service

To use an AKS cluster in a virtual network, the following network requirements must be met:

> [!div class="checklist"]
> * Follow the prerequisites in [Configure advanced networking in Azure Kubernetes Service (AKS)](../aks/configure-azure-cni.md#prerequisites).
> * The AKS instance and virtual network must be in the same region. If you secure the Azure Storage account(s) used by the workspace in a virtual network, they must be in the same virtual network as the AKS instance.

To add AKS in a virtual network to your workspace, use the following steps:

1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/), and then select your subscription and workspace.

1. Select __Compute__ on the left.

1. Select __Inference clusters__ from the center, and then select __+__.

1. In the __New inference cluster__ dialog, select __Advanced__ under __Network configuration__.

1. To configure this compute resource to use a virtual network, use the following steps:

    1. In the __Resource group__ dropdown, select the resource group that contains the virtual network.
    1. In the __Virtual network__ dropdown, select the virtual network that contains the subnet.
    1. In the __Subnet__ dropdown, select the subnet.
    1. In the __Kubernetes Service address range__ box, enter the Kubernetes service address range. This address range uses a Classless Inter-Domain Routing (CIDR) notation IP range to define the IP addresses that are available for the cluster. It must not overlap with any subnet IP ranges (for example, 10.0.0.0/16).
    1. In the __Kubernetes DNS service IP address__ box, enter the Kubernetes DNS service IP address. This IP address is assigned to the Kubernetes DNS service. It must be within the Kubernetes service address range (for example, 10.0.0.10).
    1. In the __Docker bridge address__ box, enter the Docker bridge address. This IP address is assigned to the Docker bridge. It must not be in any subnet IP ranges, or the Kubernetes service address range (for example, 172.17.0.1/16).

    ![Azure Machine Learning: Machine Learning Compute virtual network settings](./media/how-to-enable-virtual-network/aks-virtual-network-screen.png)

1. When you deploy a model as a web service to AKS, a scoring endpoint is created to handle inferencing requests. Make sure that the network security group (NSG) that controls the virtual network has an inbound security rule enabled for the IP address of the scoring endpoint if you want to call it from outside the virtual network.

    To find the IP address of the scoring endpoint, look at the scoring URI for the deployed service. For information on viewing the scoring URI, see [Consume a model deployed as a web service](how-to-consume-web-service.md#connection-information).

    > [!IMPORTANT]
    > Keep the default outbound rules for the NSG. For more information, see the default security rules in [Security groups](../virtual-network/network-security-groups-overview.md#default-security-rules).

    [![An inbound security rule](./media/how-to-enable-virtual-network/aks-vnet-inbound-nsg-scoring.png)](./media/how-to-enable-virtual-network/aks-vnet-inbound-nsg-scoring.png#lightbox)

    > [!IMPORTANT]
    > The IP address shown in the image for the scoring endpoint will be different for your deployments. While the same IP is shared by all deployments to one AKS cluster, each AKS cluster will have a different IP address.

You can also use the Azure Machine Learning SDK to add Azure Kubernetes Service in a virtual network. If you already have an AKS cluster in a virtual network, attach it to the workspace as described in [How to deploy to AKS](how-to-deploy-and-where.md). The following code creates a new AKS instance in the `default` subnet of a virtual network named `mynetwork`:

```python
from azureml.core.compute import ComputeTarget, AksCompute

# Create the compute configuration and set virtual network information
config = AksCompute.provisioning_configuration(location="eastus2")
config.vnet_resourcegroup_name = "mygroup"
config.vnet_name = "mynetwork"
config.subnet_name = "default"
config.service_cidr = "10.0.0.0/16"
config.dns_service_ip = "10.0.0.10"
config.docker_bridge_cidr = "172.17.0.1/16"

# Create the compute target
aks_target = ComputeTarget.create(workspace=ws,
                                  name="myaks",
                                  provisioning_configuration=config)
```

When the creation process is completed, you can run inference, or model scoring, on an AKS cluster behind a virtual network. For more information, see [How to deploy to AKS](how-to-deploy-and-where.md).

For more information on using Role-Based Access Control with Kubernetes, see [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md).

## <a name="network-contributor-role"></a>Network Contributor role

> [!IMPORTANT]
> If you create or attach an AKS cluster by providing a previously created virtual network, you must grant the service principal (SP) or managed identity for your AKS cluster the _Network Contributor_ role to the resource group that contains the virtual network.
>
> To add the identity as Network Contributor, use the following steps:

1. To find the service principal or managed identity ID for AKS, use the following Azure CLI commands. Replace `<aks-cluster-name>` with the name of the cluster. Replace `<resource-group-name>` with the name of the resource group that contains the _AKS cluster_:

    ```azurecli-interactive
    az aks show -n <aks-cluster-name> --resource-group <resource-group-name> --query servicePrincipalProfile.clientId
    ```

    If this command returns a value of `msi`, use the following command to identify the principal ID for the managed identity:

    ```azurecli-interactive
    az aks show -n <aks-cluster-name> --resource-group <resource-group-name> --query identity.principalId
    ```

1. To find the ID of the resource group that contains your virtual network, use the following command. Replace `<resource-group-name>` with the name of the resource group that contains the _virtual network_:

    ```azurecli-interactive
    az group show -n <resource-group-name> --query id
    ```

1. To add the service principal or managed identity as Network Contributor, use the following command. Replace `<SP-or-managed-identity>` with the ID returned for the service principal or managed identity. Replace `<resource-group-id>` with the ID returned for the resource group that contains the virtual network:

    ```azurecli-interactive
    az role assignment create --assignee <SP-or-managed-identity> --role 'Network Contributor' --scope <resource-group-id>
    ```

For more information on using the internal load balancer with AKS, see [Use internal load balancer with Azure Kubernetes Service](../aks/internal-lb.md).

## <a name="secure-vnet-traffic"></a>Secure VNet traffic

There are two approaches to isolate traffic to and from the AKS cluster to the virtual network:

* __Private AKS cluster__: This approach uses Azure Private Link to secure communications with the cluster for deployment/management operations.
* __Internal AKS load balancer__: This approach configures the endpoint for your deployments to AKS to use a private IP within the virtual network.

> [!WARNING]
> Internal load balancer does not work with an AKS cluster that uses kubenet. If you want to use an internal load balancer and a private AKS cluster at the same time, configure your private AKS cluster with Azure Container Networking Interface (CNI). For more information, see [Configure Azure CNI networking in Azure Kubernetes Service](../aks/configure-azure-cni.md).

### <a name="private-aks-cluster"></a>Private AKS cluster

By default, AKS clusters have a control plane, or API server, with public IP addresses. You can configure AKS to use a private control plane by creating a private AKS cluster. For more information, see [Create a private Azure Kubernetes Service cluster](../aks/private-clusters.md).

After you create the private AKS cluster, [attach the cluster to the virtual network](how-to-create-attach-kubernetes.md) to use with Azure Machine Learning.

> [!IMPORTANT]
> Before using a private link enabled AKS cluster with Azure Machine Learning, you must open a support incident to enable this functionality. For more information, see [Manage and increase quotas](how-to-manage-quotas.md#private-endpoint-and-private-dns-quota-increases).

### <a name="internal-aks-load-balancer"></a>Internal AKS load balancer

By default, AKS deployments use a [public load balancer](../aks/load-balancer-standard.md). In this section, you learn how to configure AKS to use an internal load balancer. An internal (or private) load balancer is used where only private IPs are allowed as frontend. Internal load balancers are used to load balance traffic inside a virtual network.

A private load balancer is enabled by configuring AKS to use an _internal load balancer_.

#### <a name="enable-private-load-balancer"></a>Enable private load balancer

> [!IMPORTANT]
> You cannot enable private IP when creating the Azure Kubernetes Service cluster in Azure Machine Learning studio. You can create one with an internal load balancer when using the Python SDK or Azure CLI extension for machine learning.

The following examples demonstrate how to __create a new AKS cluster with a private IP/internal load balancer__ using the SDK and CLI:

# <a name="python"></a>[Python](#tab/python)

```python
import azureml.core
from azureml.core.compute import AksCompute, ComputeTarget

# Verify that cluster does not exist already
try:
    aks_target = AksCompute(workspace=ws, name=aks_cluster_name)
    print("Found existing aks cluster")
except:
    print("Creating new aks cluster")

    # Subnet to use for AKS
    subnet_name = "default"
    # Create AKS configuration
    prov_config=AksCompute.provisioning_configuration(load_balancer_type="InternalLoadBalancer")
    # Set info for existing virtual network to create the cluster in
    prov_config.vnet_resourcegroup_name = "myvnetresourcegroup"
    prov_config.vnet_name = "myvnetname"
    prov_config.service_cidr = "10.0.0.0/16"
    prov_config.dns_service_ip = "10.0.0.10"
    prov_config.subnet_name = subnet_name
    prov_config.load_balancer_subnet = subnet_name
    prov_config.docker_bridge_cidr = "172.17.0.1/16"

    # Create compute target
    aks_target = ComputeTarget.create(workspace = ws, name = "myaks", provisioning_configuration = prov_config)
    # Wait for the operation to complete
    aks_target.wait_for_completion(show_output = True)
```

# <a name="azure-cli"></a>[Azure CLI](#tab/azure-cli)

```azurecli
az ml computetarget create aks -n myaks --load-balancer-type InternalLoadBalancer
```

> [!IMPORTANT]
> When using the CLI, you can only create an AKS cluster with an internal load balancer. There is no az ml command to upgrade an existing cluster to use an internal load balancer. For more information, see the [az ml computetarget create aks](/cli/azure/ext/azure-cli-ml/ml/computetarget/create#ext-azure-cli-ml-az-ml-computetarget-create-aks) reference.

---

When __attaching an existing cluster__ to your workspace, you must wait until after the attach operation is completed to configure the load balancer. For information on attaching a cluster, see [Attach an existing AKS cluster](how-to-create-attach-kubernetes.md).

After attaching the existing cluster, you can then update the cluster to use an internal load balancer/private IP:

```python
import azureml.core
from azureml.core.compute.aks import AksUpdateConfiguration
from azureml.core.compute import AksCompute

# ws = workspace object. Creation not shown in this snippet
aks_target = AksCompute(ws,"myaks")

# Change to the name of the subnet that contains AKS
subnet_name = "default"
# Update AKS configuration to use an internal load balancer
update_config = AksUpdateConfiguration(None, "InternalLoadBalancer", subnet_name)
aks_target.update(update_config)
# Wait for the operation to complete
aks_target.wait_for_completion(show_output = True)
```

## <a name="enable-azure-container-instances-aci"></a>Enable Azure Container Instances (ACI)

Azure Container Instances are dynamically created when deploying a model. To enable Azure Machine Learning to create ACI inside the virtual network, you must enable __subnet delegation__ for the subnet used by the deployment.

> [!WARNING]
> When using Azure Container Instances in a virtual network, the virtual network must be:
> * In the same resource group as your Azure Machine Learning workspace.
> * If your workspace has a __private endpoint__, the virtual network used for Azure Container Instances must be the same as the one used by the workspace private endpoint.
>
> When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace can't be in the virtual network.

To use ACI in a virtual network to your workspace, use the following steps:

1. To enable subnet delegation on your virtual network, use the information in the [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md) article. You can enable delegation when creating a virtual network, or add it to an existing network.

    > [!IMPORTANT]
    > When enabling delegation, use `Microsoft.ContainerInstance/containerGroups` as the __Delegate subnet to service__ value.

2. Deploy the model using [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none--vnet-name-none--subnet-name-none-), use the `vnet_name` and `subnet_name` parameters. Set these parameters to the virtual network name and subnet where you enabled delegation.

## <a name="limit-outbound-connectivity-from-the-virtual-network"></a>Limit outbound connectivity from the virtual network

If you don't want to use the default outbound rules and you do want to limit the outbound access of your virtual network, you must allow access to Azure Container Registry. For example, make sure that your Network Security Groups (NSG) contains a rule that allows access to the __AzureContainerRegistry.RegionName__ service tag where "{RegionName}" is the name of an Azure region.

## <a name="next-steps"></a>Next steps

This article is part four of a five-part virtual network series. See the rest of the articles to learn how to secure a virtual network:

* [Part 1: Virtual network overview](how-to-network-security-overview.md)
* [Part 2: Secure the workspace resources](how-to-secure-workspace-vnet.md)
* [Part 3: Secure the training environment](how-to-secure-training-vnet.md)
* [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)

Also see the article on using [custom DNS](how-to-custom-dns.md) for name resolution.
69.776596
692
0.80185
pol_Latn
0.999707
6af3ee8c422bd6102b10aa50afd385155cb73dcc
1,353
md
Markdown
program cpp/README.md
Rominaru/Tugas-SMK
8b140146603ebc33057dd20259d2040f5ab93156
[ "MIT" ]
null
null
null
program cpp/README.md
Rominaru/Tugas-SMK
8b140146603ebc33057dd20259d2040f5ab93156
[ "MIT" ]
null
null
null
program cpp/README.md
Rominaru/Tugas-SMK
8b140146603ebc33057dd20259d2040f5ab93156
[ "MIT" ]
1
2022-03-27T08:35:53.000Z
2022-03-27T08:35:53.000Z
<h3 align="center"> C++ Program Files <img src="https://media.giphy.com/media/hvRJCLFzcasrR4ia7z/giphy.gif" width="28"> </h3>
<h3 align="center"> <img src="https://img.shields.io/badge/C%2B%2B-00599C?style=for-the-badge&logo=c%2B%2B&logoColor=white" width="400"> </h3>

# Definition
```js
C++ is a computer programming language created by Bjarne Stroustrup as a development of the C language, which was developed at Bell Labs in the early 1970s as an improvement of its predecessor, B.
```
```js
As the name suggests, the "++" symbol after the letter C means an increment of C. C++ is essentially the same as the C language, but it has many more features than C — hence the name C++ (read "C plus plus").

So how is it different from C#? The C# language was created by Microsoft and runs on top of the .NET virtual machine, whereas C++ runs natively like C. In terms of syntax, C++ and C# are quite different. In my opinion, C++ is more similar to C, and C# is more similar to Java. Some people also consider C# an evolution of C++.
```
# Installation
```js
On Android, install an app such as Dcoder; on a PC or laptop, install an app such as Dev-C++.
```
# Example
```c++
#include <iostream>

using namespace std;

int main(){
    cout << "Hello World!" << endl;
    return 0;
}
```
27.612245
120
0.72949
ind_Latn
0.946797
6af45c65ad1a119bead567ac32b12a40fcfcbdfd
2,409
md
Markdown
content/posts/2015-04-a-weekend-away-in-moalboal/index.md
kylewelsby/travelsleeprepeat.me.uk
3681b9943b41d0bdeaa6df56b88efee108a1d9ed
[ "Unlicense" ]
null
null
null
content/posts/2015-04-a-weekend-away-in-moalboal/index.md
kylewelsby/travelsleeprepeat.me.uk
3681b9943b41d0bdeaa6df56b88efee108a1d9ed
[ "Unlicense" ]
null
null
null
content/posts/2015-04-a-weekend-away-in-moalboal/index.md
kylewelsby/travelsleeprepeat.me.uk
3681b9943b41d0bdeaa6df56b88efee108a1d9ed
[ "Unlicense" ]
null
null
null
---
title: A Weekend In Moalboal
cover: images/cover.jpg
date: '2015-04-21T18:37:11'
destinations:
- Moalboal
- Philippines
categories:
- Destinations
- Moalboal
- Philippines
tags:
- beach
- bus journey
---

It's super easy to get to Moalboal from Cebu City, so we planned a weekend break during our stay in Cebu. From Cebu City we headed to the South Bus terminal and waited for the yellow Ceres bus to Moalboal. It gets very busy on weekends and we were queueing for at least an hour before we boarded. Be ready to push your way on though because it is literally first come first serve – and people will barge you out the way to get a seat! We were fortunate to grab a couple of seats but I did feel sorry for those that had to stand for most of the journey. It takes around 3 hours to get to Moalboal from Cebu City.

We stayed at the Quo Vadis Dive Resort; the rooms were basic but comfortable enough for a couple of nights. The biggest highlights were the swimming pool and gorgeous sea views.

![](images/16836726194_e31595a74a_k_d.jpg)

Pool at Quo Vadis Dive Resort

Our time in Moalboal was short, so Kyle did some diving whilst I explored the local area. I only had to walk a couple of minutes from the resort to find such stunning views.

![](images/IMG_20150419_142907.jpg)

![](images/17288222592_99ddc5a8d0_k_d.jpg)

It was literally breathtaking. I sat in the shade of the trees for ages, just so I could absorb the picturesque scenery before my eyes.

![](images/moalboalscene-575x1024.jpg)

![](images/moalboalscenery1.jpg)

After my sunny ventures around, I met up with Kyle later to watch a wonderful sunset. Moalboal certainly lives up to its reputation; it was gorgeous!

![](images/moalboalsunset.jpg)

There are several food options around Moalboal, but don't expect anything too fancy. On our first night it took an hour for half of our food order to arrive at the Arista restaurant – and we weren't impressed with what came out either! After several recommendations we found the Lantaw restaurant, which provided us with a decent meal (the staff were real sweet too).

![](images/chickenadobo.jpg)

Chicken Adobo, Lantaw Restaurant

Moalboal proved to be a successful weekend away; it was such a scenic, chilled place. If we had more time we would have ventured to Panagsama beach, however being on limited time I'm glad to have seen another wonderful pocket of beauty in the Philippines!
49.163265
498
0.778746
eng_Latn
0.999313
6af4f8922f734b94eb30d2fd23aff00cb972de16
4,335
md
Markdown
labs/sp-gda/gdaexpericence4/story_a_spark_with_cosmosdb/content/1.md
lalaithan/developer-immersion-data
b48d291ad5a03d56c0228d00e0b290b638d50194
[ "MIT" ]
82
2017-05-24T22:55:14.000Z
2019-03-31T00:56:05.000Z
labs/sp-gda/gdaexpericence4/story_a_spark_with_cosmosdb/content/1.md
lalaithan/developer-immersion-data
b48d291ad5a03d56c0228d00e0b290b638d50194
[ "MIT" ]
7
2017-05-20T16:10:54.000Z
2018-09-30T18:04:46.000Z
labs/sp-gda/gdaexpericence4/story_a_spark_with_cosmosdb/content/1.md
lalaithan/developer-immersion-data
b48d291ad5a03d56c0228d00e0b290b638d50194
[ "MIT" ]
55
2017-05-20T12:42:19.000Z
2019-03-26T16:38:16.000Z
<page title="Creating interactive Power BI Dashboard and explore RScript component in Power BI"/>

## Scenario 2 - Creating an interactive Power BI dashboard and exploring the R Script component in Power BI

>_Now, let's start with creating an interactive Power BI dashboard to display the flight cancellation and delay analysis using Power BI._

1. First, we will install the libraries required for the **R Script component** to view the **Power BI** report created using an **R script**.

1. Open **Microsoft R Open 3.4.2.0** ![](img/Rscript.png) present on the desktop by double clicking on it.

    ![](img/RConsole.png)

1. Copy the command given below to install the packages required for the **R Script component**, paste it into the **R console editor** and press the Enter key.

    ```
    ipak <- function(pkg){
        new.pkg <- pkg[!(pkg %in% installed.packages()[, "Package"])]
        if (length(new.pkg))
            install.packages(new.pkg, dependencies = TRUE)
        sapply(pkg, require, character.only = TRUE)
    }

    packages <- c("colorspace", "ggExtra", "ggplot2", "ggrepel", "gridExtra")
    ipak(packages)
    ```

    >_Note: You may encounter popups during the installation of the packages. Accept all popups by clicking on the Yes button._

1. You can now minimize the **R console editor** after the package installation.

1. Now, open the file **Flight_Report** present at the location **C:\source** by double clicking on it.

1. After the file opens, you may get the **Enable script visuals** pop-up. Click on the **Enable** button to accept it.

    ![](img/EnableScriptPopUp.png)

1. Now click on the link **Already have a Power BI Account? Sign in** present on the **Welcome to Power BI Desktop** pop-up at the bottom of the page.

    ![](img/WelcomeBI.png)

1. Now enter the **Username** mentioned in **Scenario 1-Part A** to log in to **Power BI Desktop**.

    ![](img/Login.png)

1. Also add the **Password** mentioned in **Scenario 1-Part A** in the **Password** field and click on the **Sign In** button.

    ![](img/password.png)

1. Close the **Sign up for Power BI Account** pop-up by clicking on the **Close** icon.

    ![](img/SignUpPopup.png)

1. Now go to the **Edit Queries** option present on the top ribbon and click on **Edit Queries**.

    ![](img/EditQueries.png)

1. Click on **flightdb** present in the Query blade and then double click on the **Source** option present in the **Applied Steps** blade.

1. Now, you have to pass the URL of your created **Spark cluster** in the **Server** field present in the **Azure HDInsight Spark** pop-up.

    ![](img/HDSparkPopup.png)

1. So, navigate back to the **Azure Portal** launched in **Part A-Scenario 1**. Go to **All Resources** and select your resource group **<inject story-id="story://Content-Private/content/dfd/SP-GDA/gdaexpericence4/story_a_spark_with_cosmosdb" key="myResourceGroupName"/>**.

1. Now click on your created **Spark Cluster** named **SparkCluster<inject story-id="story://Content-Private/content/dfd/SP-GDA/gdaexpericence4/story_a_spark_with_cosmosdb" key="myResourceGroupName"/>** and copy the URL for the cluster present in **Cluster Dashboard** by clicking on the **Click to Copy** icon present in front of it.

    ![](img/ClusterURL.png)

    >_Note: You may encounter the pop-up **Do you want to allow this webpage to accept your Clipboard?**. Click on the **Allow access** button_.

    ![](img/allowAccess.png)

1. Navigate back to the **Power BI dashboard** to paste the URL in the **Server** field of the **Azure HDInsight Spark** pop-up and click on the **OK** button.

1. You may get a pop-up to log in. Here add the **UserName** and **Password** mentioned in **Part B-Scenario 1** and click on the **Login** button.

1. Click on the **Close and Apply** option present on the top ribbon.

    >_Note: It may take time to upload the data present in flightdb and AIRLINE_ID_.

1. Now you can check the graphs regarding the analysis of cancelled and delayed flights on the Power BI dashboard.

    >_Note: You may get an error while checking the Power BI reports for Flight Delay by Origin and Flight Delay by Dest. Kindly reinstall the packages by performing step 3 in Scenario 2_.

    >_Note: It may take time to load the graphs for Flight Delay by Origin and Flight Delay by Dest, as we have created these graphs using the R script component._

>_Congrats! You have successfully created an interactive Power BI dashboard where you can check the cancelled and delayed flight analysis in graphical format._
54.873418
328
0.718339
eng_Latn
0.924125
6af51062b407b71712303b4668bd7eced2dc92b3
679
markdown
Markdown
_posts/2019-02-25-honeyhealth.markdown
binzu/binzu.github.io
8e225d712b23df7721765f5cdaa299c87df87359
[ "MIT" ]
null
null
null
_posts/2019-02-25-honeyhealth.markdown
binzu/binzu.github.io
8e225d712b23df7721765f5cdaa299c87df87359
[ "MIT" ]
1
2020-12-31T13:54:39.000Z
2020-12-31T13:54:39.000Z
_posts/2019-02-25-honeyhealth.markdown
binzu/binzu.github.io
8e225d712b23df7721765f5cdaa299c87df87359
[ "MIT" ]
null
null
null
---
layout: post
title: "Honeycomb Health App"
date: 2019-02-25 16:30:00 +0000
startDate: 2018-11-16 16:41:12 +0000
endDate: 2019-02-25 16:41:12 +0000
company: "Greater Than One"
client: "Honeycomb Health"
img: img/portfolio/honeyhealth.jpg
modalID: modalHoneycomb
role: "Lead Web Developer"
link: "https://honeycombhealth.com/"
---

A web application built at a health tech agency for a non-profit startup client, using Drupal 8 with custom screens built in Drupal WebForm. Created a custom theme and modules with PHP, Twig, JavaScript, jQuery, Bootstrap 4, Node, Gulp and SASS.

Technology used: **Drupal 8, Drupal WebForm, PHP, TWIG, JavaScript, jQuery, Bootstrap 4, Node, Gulp and SASS**
39.941176
231
0.756996
eng_Latn
0.510444
6af58da709006c0903243748fe52aa972e63815c
1,271
md
Markdown
packages/docs/developer-resources/contractkit/reference/interfaces/_explorer_block_explorer_.calldetails.md
H34D/celo-monorepo
6a495c54ccec9e332d74bf6b133be45b8dff97ae
[ "Apache-2.0" ]
1
2020-09-25T10:13:40.000Z
2020-09-25T10:13:40.000Z
packages/docs/developer-resources/contractkit/reference/interfaces/_explorer_block_explorer_.calldetails.md
UltraViolentLight/celo-monorepo
9c6d48aa7c3942ccc62b48a03229d1683d36ad6c
[ "Apache-2.0" ]
null
null
null
packages/docs/developer-resources/contractkit/reference/interfaces/_explorer_block_explorer_.calldetails.md
UltraViolentLight/celo-monorepo
9c6d48aa7c3942ccc62b48a03229d1683d36ad6c
[ "Apache-2.0" ]
1
2020-10-02T18:59:12.000Z
2020-10-02T18:59:12.000Z
# Interface: CallDetails ## Hierarchy * **CallDetails** ## Index ### Properties * [argList](_explorer_block_explorer_.calldetails.md#arglist) * [contract](_explorer_block_explorer_.calldetails.md#contract) * [function](_explorer_block_explorer_.calldetails.md#function) * [paramMap](_explorer_block_explorer_.calldetails.md#parammap) ## Properties ### argList • **argList**: *any[]* *Defined in [packages/contractkit/src/explorer/block-explorer.ts:13](https://github.com/celo-org/celo-monorepo/blob/master/packages/contractkit/src/explorer/block-explorer.ts#L13)* ___ ### contract • **contract**: *string* *Defined in [packages/contractkit/src/explorer/block-explorer.ts:10](https://github.com/celo-org/celo-monorepo/blob/master/packages/contractkit/src/explorer/block-explorer.ts#L10)* ___ ### function • **function**: *string* *Defined in [packages/contractkit/src/explorer/block-explorer.ts:11](https://github.com/celo-org/celo-monorepo/blob/master/packages/contractkit/src/explorer/block-explorer.ts#L11)* ___ ### paramMap • **paramMap**: *Record‹string, any›* *Defined in [packages/contractkit/src/explorer/block-explorer.ts:12](https://github.com/celo-org/celo-monorepo/blob/master/packages/contractkit/src/explorer/block-explorer.ts#L12)*
27.042553
180
0.761605
eng_Latn
0.175808
6af5b47b63defe2398d787f9828986377a953bd0
759
md
Markdown
webview-view-sample/README.md
ScriptBox21/vscode-extension-samples
e02887bb3198b674e138ddbeaf8e2001fb16f176
[ "MIT" ]
4,317
2019-05-07T03:05:40.000Z
2022-03-31T03:56:08.000Z
webview-view-sample/README.md
ScriptBox21/vscode-extension-samples
e02887bb3198b674e138ddbeaf8e2001fb16f176
[ "MIT" ]
365
2019-05-09T21:27:51.000Z
2022-03-30T02:36:26.000Z
webview-view-sample/README.md
ScriptBox21/vscode-extension-samples
e02887bb3198b674e138ddbeaf8e2001fb16f176
[ "MIT" ]
2,346
2019-05-07T01:15:06.000Z
2022-03-31T11:54:31.000Z
# Calico Colors — Webview View API Sample

Demonstrates VS Code's proposed [webview view API](https://github.com/microsoft/vscode/issues/46585). This includes:

- Contributing a webview-based view to the explorer.
- Posting messages from an extension to a webview view.
- Posting messages from a webview to an extension.
- Persisting state in the view.
- Contributing commands to the view title.

## VS Code API

### `vscode` module

- [`window.registerWebviewViewProvider`](https://code.visualstudio.com/api/references/vscode-api#window.registerWebviewViewProvider)

## Running the example

- Open this example in VS Code 1.49+
- `npm install`
- `npm run watch` or `npm run compile`
- `F5` to start debugging

In the explorer, expand the `Calico Colors` view.
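At its core, a provider passed to `window.registerWebviewViewProvider` just fills in `webviewView.webview.html` when the view is resolved. The sketch below shows that shape only; the view type string and markup are illustrative rather than the sample's exact code, and the real registration happens inside the extension's `activate()` using the `vscode` module:

```javascript
// Minimal shape of a webview view provider, as used with
// vscode.window.registerWebviewViewProvider(ColorsViewProvider.viewType, provider).
// View type and markup are illustrative, not the sample's exact code.
class ColorsViewProvider {
  static viewType = "calicoColors.colorsView";

  resolveWebviewView(webviewView) {
    // Allow scripts so the view can post messages back to the extension
    webviewView.webview.options = { enableScripts: true };
    webviewView.webview.html = this.getHtmlForWebview();
  }

  getHtmlForWebview() {
    return [
      "<!DOCTYPE html>",
      "<html><body>",
      '<ul class="color-list"></ul>',
      '<button class="add-color-button">Add Color</button>',
      "</body></html>",
    ].join("\n");
  }
}

// Outside VS Code, the HTML-building part can still be exercised with a stub:
const stubView = { webview: { options: null, html: "" } };
new ColorsViewProvider().resolveWebviewView(stubView);
console.log(stubView.webview.html.includes("color-list")); // true
```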
31.625
132
0.758893
eng_Latn
0.913992
6af631392b3a1d28d4f0fa487e4b9f5959abc4f2
3,429
md
Markdown
_drafts/2020-06-11-the-5-whys-getting-to-the-root-of-the-problem.md
johnmoxon/johnmoxon.github.io
65cb6b9623086158d199169b4a105f03b8b660a7
[ "CC-BY-4.0", "MIT" ]
null
null
null
_drafts/2020-06-11-the-5-whys-getting-to-the-root-of-the-problem.md
johnmoxon/johnmoxon.github.io
65cb6b9623086158d199169b4a105f03b8b660a7
[ "CC-BY-4.0", "MIT" ]
22
2015-07-28T22:17:29.000Z
2022-02-23T02:31:02.000Z
_drafts/2020-06-11-the-5-whys-getting-to-the-root-of-the-problem.md
johnmoxon/johnmoxon.github.io
65cb6b9623086158d199169b4a105f03b8b660a7
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- layout: post title: The 5 Whys subtitle: Getting to the root of the problem image: path: /assets/img/jonas-denil--fsMBwHoMUU-unsplash.jpg author: Jonas Denil source: Unsplash url: "https://unsplash.com/@jonasdenil?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" author: John Moxon category: strategy tags: - sample-tag1 - sample-tag2 summary: >- Summary of the article just a test it it is comments: true description: "SEO Meta description" keywords: "SEO, keyword, meta, tags" --- n sodales dolor dolor posuere nisi. Nulla ac vehicula ipsum. Sed nisl odio, efficitur vel tellus in, vehicula consequat nulla. Nullam laoreet purus eu dignissim hendrerit. Phasellus sed nisi tortor. Integer id diam nec mauris accumsan consequat quis sed dui. Donec varius felis ut dapibus dictum. Curabitur quis .tempor sapien. Suspendisse eleifend nisl vehicula tellus vestibulum vestibulum. Pellentesque in dolor diam. Proin eros velit, dictum non arcu a, euismod tincid.unt erat. Quisque sem dolor, congue commodo malesuada at, sodales vel lacus. Nullam dictum ornare massa, ut imperdiet nisl imperdiet non. Maecenas vitae posuere diam, non gravida mi. Phasellus laoreet id enim id consequat. Sed quis orci pharetra, iaculis purus sed, semper ante. Mauris nec rutrum metus. Nulla et vulputate risus. Vivamus at dapibus dolor, malesuada commodo erat. ## Fusce malesuada augue non nibh elementum, ac sollicitudin ligula cursus. Curabitur tempus tincidunt magna, a faucibus est venenatis in. Nulla finibus nisl ut mi pharetra, at auctor risus consequat. Vivamus fermentum lectus nec ero7s pulvinar, vel posuere urna tincidunt. Nunc accumsan tincidunt vulputate. Aenean dictum elit ac ultrices molestie. In a pulvinar nulla, id facilisis nisl. Donec pulvinar maximus volutpat. Curabitur mauris eros, auctor vitae lorem non, luctus faucibus sem. Donec egestas odio mi, nec aliquet nibh elementum at. Suspendisse at facilisis leo. Orci. 
varius natoque penatibus et magnis ..dis parturient montes, nascetur ridiculus mus.. {% include themes/jmblog-theme/components/img.html src="/assets/img/clem-onojeghuo-DoA2duXyzRM-unsplash.jpg" %} Mauris massa magna, condimentum quis nisl in, sagittis fringilla risus. Morbi imperdiet, ex ac mollis lacinia, est metus fringilla ante, at laoreet orci velit et enim. Vestibulum mauris nunc, convallis vel gravida non, tempor id lacus. Vestibulum efficitur felis bibendum ipsum blandit aliquet. Maecenas vehicula non magna sed ullamcorper. Nunc suscipit interdum finibus. Sed tincidunt arcu ut ex commodo, ut condimentum tellus egestas. Pellentesque euismod purus nec est condimentum finibus. Suspendisse at hendrerit quam, nec molestie nisi. ## Phasellus lobortis est vitae mi laoreet sagittis. Phasellus gravida, ante nec porta ornare, purus enim placerat risus, eget pharetra mauris dolor sit amet augue. Duis commodo urna feugiat lacus molestie, vitae eleifend mauris dapibus. Vivamus tristique efficitur ligula vitae cursus. Proin consequat neque quis ex varius, vel finibus erat laoreet. Donec semper vitae metus eu aliquet. Phasellus convallis pellentesque condimentum. Integer vel ultrices ex, id placerat nulla. Mauris rhoncus, velit id ultricies fermentum, felis ligula tempor nulla, ac sollicitudin ipsum enim at ex. Quisque ullamcorper leo nec semper finibus. Curabitur diam purus, porttitor at erat mollis, finibus pharetra tellus. Mauris in est arcu. Ut
87.923077
671
0.804608
hun_Latn
0.142541
6af67b4c84ea4542310a67ef31326a71904e1571
1,631
md
Markdown
Help/DocFX/articles/Using Krypton in VS2017.md
Krypton-Suite-Legacy/Krypton-NET-5.461
378d56609c26eb7e930b18d1259c4b9da21e9901
[ "BSD-3-Clause" ]
61
2018-09-29T22:43:12.000Z
2020-03-24T16:25:30.000Z
Help/DocFX/articles/Using Krypton in VS2017.md
Krypton-Suite-Legacy/Krypton-NET-5.461
378d56609c26eb7e930b18d1259c4b9da21e9901
[ "BSD-3-Clause" ]
155
2018-09-29T07:31:58.000Z
2020-02-18T04:21:44.000Z
Help/DocFX/articles/Using Krypton in VS2017.md
Wagnerp/Krypton-NET-5.461
378d56609c26eb7e930b18d1259c4b9da21e9901
[ "BSD-3-Clause" ]
16
2019-01-28T12:07:08.000Z
2020-02-17T10:19:33.000Z
## Using Krypton in VS2017

The older version of Krypton used to force you to go through the steps of adding certain files to the GAC. On many occasions this proved misleading and, if the release cycle is more than once a year, very disruptive (especially when working against more than one targeted CLR version).

From 2017 onwards, the recommended approach is to add the Krypton files directly to Visual Studio's toolbox for each solution you are working on. If you replace (upgrade, etc.) the Krypton DLLs, simply re-add the new location to the toolbox.

![*Figure 1 - ToolBox_Start*](images\ToolBox_Start.png)
*Figure 1 - ToolBox_Start*

### Initially the toolbox will be empty

If the form designer cannot directly access the Krypton controls, it means one of the following:

- The toolbox is empty (*Figure 1*).
- A different CLR version is expected.

### Locate and Drag

The steps to add the Krypton Toolkit controls to the designer are simple once done a few times:

- Open the VS2017 toolbox and pin it
- Open Windows File Explorer and locate the Krypton Toolkit DLLs
- Drag the toolkit DLLs you wish to use over the "General" area and drop them
- Right click on the General tab and "Sort Items Alphabetically"
- Right click again and rename it to whatever you want (e.g. "Krypton tools")

### Adding Krypton References to your project

- Open the project "References"
- Locate and add the Krypton DLLs via "Browse", or add the last ones used

## Older versions of Krypton (Pre 2018-06)

- Always include the Krypton Designer DLL (it does not need to be shipped, just included so that VS2017 can perform direct design tasks)
47.970588
284
0.765175
eng_Latn
0.999004
6af6f311dc44f6a1eb1fe51e289c912327537789
6,911
md
Markdown
docs/post/10-kbone.md
CENcw/FED-docs
104e58408c30c1c55bf6b32fb8e08f2be41dbc05
[ "MIT" ]
null
null
null
docs/post/10-kbone.md
CENcw/FED-docs
104e58408c30c1c55bf6b32fb8e08f2be41dbc05
[ "MIT" ]
null
null
null
docs/post/10-kbone.md
CENcw/FED-docs
104e58408c30c1c55bf6b32fb8e08f2be41dbc05
[ "MIT" ]
null
null
null
# kbone Guide

> [岑成威](https://github.com/CENcw) / 2021-5-28

### What is kbone?

A solution dedicated to isomorphic development between WeChat Mini Programs and the web.

### Advantages of kbone over frameworks such as taro, mpvue and wepy

- Most popular frontend frameworks can run on kbone, such as Vue, React, Preact and others
- Supports a more complete set of framework features, because kbone does not cut into or modify the framework internals (for example Vue's v-html directive or the Vue-router plugin)
- Provides the common dom/bom interfaces, so user code can be migrated from the web to a mini program without large changes
- When running in a mini program, the mini program's own features can still be used (for example the live-player built-in component or the subpackage feature)
- Provides some DOM extension interfaces, so that interfaces which cannot be perfectly adapted to the mini program side still have workable alternatives (for example the getComputedStyle interface)

### Disadvantages

- Lacks third-party plugins and does not support wxs
- The community is not active, problems are resolved slowly, and there are few maintainers and contributors
- Poor performance
- [More limitations](https://wechat-miniprogram.github.io/kbone/docs/qa/#%E9%99%90%E5%88%B6)

### How it works

#### The basic principle of web frameworks

First let's look at an ordinary web framework, taking Vue as an example. A Vue template corresponds to a component. At build time it is compiled into JS functions that call the DOM interfaces; executing such a JS function creates the component's DOM tree, which is then rendered onto the browser page.

<p> <img :src="$withBase('/minPrinciple.png')" alt="test"> </p>

#### The common industry approach

<p> <img :src="$withBase('/kbone0.png')" alt="test"> </p>

The idea is to parse the code once and translate the template parts into the corresponding templates for each cross-platform target (WeChat Mini Program, Alipay Mini Program, H5, native apps, and so on).

#### The kbone approach

<p> <img :src="$withBase('/kbone.jpeg')" alt="test"> </p>

kbone achieves isomorphism by providing an adapter, that is, runtime compatibility rather than static compilation.

The core of the kbone adapter consists of two parts:

miniprogram-render: emulates the DOM/BOM interfaces and builds an emulated DOM tree;

miniprogram-element: watches the emulated DOM tree for changes and renders them to the page, while also listening for user interaction and firing events.

#### 1. The emulated DOM tree

For security and performance, mini programs adopt a two-thread architecture. The logic layer that runs the user's JS code is a bare JSCore without any browser-related implementation, so it has no DOM interfaces and cannot render to a browser.

How mini program rendering works: under the two-thread architecture, the logic layer executes the user's JS code and produces a set of data, which is sent to the view layer; after receiving the data, the view layer combines it with the user's WXML template to create a component tree, which the mini program then renders. This component tree is very similar to a DOM tree, except that it is assembled from official built-in components or custom components rather than DOM nodes.

kbone maps the emulated DOM tree onto the mini program's component tree.

- How the emulation is done

It exploits a property of custom components, letting a component reference itself for assembly, so that components are created recursively, producing a component tree.

<p> <img :src="$withBase('/kboneCharacteristics.jpeg')" alt="test"> </p>

<p> <img :src="$withBase('/kbone-wxs.jpeg')" alt="test"> </p>

The recursion terminates when a specific node, a text node, or a node with empty children is reached.

After the component tree is created, each DOM node is bound to its custom component instance, so that subsequent DOM updates and operations can work.

kbone also optimizes the node tree here: sending everything to the view layer in a single setData call could exceed the setData size limit (1024 kB), so the DOM tree is pruned according to certain rules and split into several subtrees, with each custom component managing one subtree. This reduces the number of custom component instances, and calling setData to the view layer in batches saves overhead.

#### 2. The emulated event system

- Why

Mini program events are the communication channel from the view layer to the logic layer. Events are bound on components; when one is triggered, the corresponding event handler in the logic layer runs.

Capture and bubbling in mini programs happen on the view side, so during the whole capture/bubble flow the nodes in the logic layer do not receive the same event object. Capturing, bubbling, stopping propagation and similar operations must be declared in the WXML template and cannot be implemented through an interface.

- How it is implemented

When a custom component receives a user interaction, it sends the event to the emulated DOM tree; any bubbling of the same event subsequently received by custom components is simply ignored.

```
Once the target node is hit and the emulated DOM tree receives the event, capture and bubbling are replayed there, so the event fires on each node along the path
```

<p> <img :src="$withBase('/kboneClick.jpeg')" alt="test"> </p>

For the actual steps to convert a Vue app to a mini program, [click here](https://wechat-miniprogram.github.io/kbone/docs/guide/tutorial.html#%E7%BC%96%E5%86%99-webpack-%E9%85%8D%E7%BD%AE)

Install the mp-webpack-plugin plugin

```javascript
yarn add mp-webpack-plugin --dev
or
npm install mp-webpack-plugin --save-dev
```

Add a main.mp.js entry file in the src directory

```javascript
import Vue from "vue";
import App from "@/App";
import router from "@/router";
import store from "@/store";

// The logic that creates the root component instance must be wrapped in a function
export default function createApp() {
  // In a mini program, to mount onto a DOM node with id "app" you must create it yourself
  const container = document.createElement("div");
  container.id = "app";
  document.body.appendChild(container);

  Vue.config.productionTip = false;

  return new Vue({
    router,
    store,
    render: (h) => h(App),
  }).$mount("#app");
}
```

Create a miniprogram.config.js file in the project root and add the mp-webpack-plugin configuration

```javascript
module.exports = {
  // Page origin, defaults to https://miniprogram.default
  origin: "", // fill in the image resource address for the project; online addresses are recommended for images
  // Entry page route, defaults to /
  entry: "/",
  // Page routes, used for navigation between pages
  router: {
    // A route can have multiple values; dynamic routes are supported
    index: [],
  },
  // Special route redirects
  redirect: {
    // How to handle navigation to a page with the same origin but not in router. Supported values: webview - open with the web-view component; error - throw an exception; none - default, do nothing; or a key from the router config
    notFound: "index",
    // How to handle navigation to pages outside the origin; same values as notFound
    accessDenied: "index",
  },
  // App config, same as https://developers.weixin.qq.com/miniprogram/dev/reference/configuration/app.html#window
  app: {
    navigationStyle: "custom", // custom navigation
  },
  // Global config
  global: {},
  // Per-page config; allows customizing individual pages, overriding the global config
  pages: {},
  // Optimization
  optimization: {
    domSubTreeLevel: 5, // how many levels of the DOM subtree to render as one custom component; supports 1 - 5, default 5

    // Object reuse: objects are recycled when a page is closed, but if references to them are kept anywhere, turn this off or problems may occur
    elementMultiplexing: true, // reuse element nodes
    textMultiplexing: true, // reuse text nodes
    commentMultiplexing: true, // reuse comment nodes
    domExtendMultiplexing: true, // reuse node-related objects, such as style and classList objects

    styleValueReduce: 5000, // if a style property value exceeds this length when set, trim it
    attrValueReduce: 5000, // if a DOM attribute value exceeds this length when set, trim it
  },
  // Project config, merged into project.config.json
  projectConfig: {
    appid: "", // the mini program's AppId
    projectname: "", // the mini program's project name
  },
  // Package config, merged into package.json
  packageConfig: {
    name: "", // project name
    description: "", // description
    author: "", // author info
  },
};
```

Create a .env.mp file in the project root and add the mp environment variable

```javascript
NODE_ENV = mp;
```

Modify the vue.config.js file, adding the webpack configuration for building the mini program

```javascript
const path = require("path");
function resolve(dir) {
  return path.join(__dirname, dir);
}
const webpack = require("webpack");
const MpWebpackPlugin = require("mp-webpack-plugin");
const MiniCssExtractPlugin = require("mini-css-extract-plugin");

module.exports = {
  css: {
    extract: true,
  },
  outputDir: process.env.NODE_ENV === "mp" ? "./dist/mp/common" : "./dist/web",
  configureWebpack: {
    resolve: {
      extensions: ["*", ".js", ".vue", ".json"],
      alias: {
        vue$: "vue/dist/vue.esm.js",
        "@": resolve("src"),
      },
    },
  },
  chainWebpack: (config) => {
    if (process.env.NODE_ENV === "mp") {
      config
        .devtool("node")
        .entry("app")
        .clear()
        .add("./src/main.mp.js")
        .end()
        .output.filename("[name].js")
        .library("createApp")
        .libraryExport("default")
        .libraryTarget("window")
        .end()
        .target("web")
        .optimization.runtimeChunk(false)
        .splitChunks({
          chunks: "all",
          minSize: 1000,
          maxSize: 0,
          minChunks: 1,
          maxAsyncRequests: 100,
          maxInitialRequests: 100,
          automaticNameDelimiter: "~",
          name: true,
          cacheGroups: {
            vendors: {
              test: /[\\/]node_modules[\\/]/,
              priority: -10,
            },
            default: {
              minChunks: 2,
              priority: -20,
              reuseExistingChunk: true,
            },
          },
        })
        .end()
        .plugins.delete("copy")
        .end()
        .plugin("define")
        .use(
          new webpack.DefinePlugin({
            "process.env.isMiniprogram": process.env.isMiniprogram, // inject an environment variable, used for checks in business code
          })
        )
        .end()
        .plugin("extract-css")
        .use(
          new MiniCssExtractPlugin({
            filename: "[name].wxss",
            chunkFilename: "[name].wxss",
          })
        )
        .end()
        .plugin("mp-webpack")
        .use(new MpWebpackPlugin(require("./miniprogram.config.js")))
        .end();
    }
  },
};
```

Modify the scripts section of package.json, adding tasks for development and packaging

```javascript
# develop
"mp-serve": "vue-cli-service build --watch --mode mp"
# package
"mp-build": "vue-cli-service build --mode mp"
```
23.26936
148
0.636521
yue_Hant
0.700487
6af70d851dd3e2f574e9be956786de1bacc9de27
748
md
Markdown
content/archive/internal/yuleng-sael200-slides.md
cderv/bookdown.org
160d27245052b3245d4061ce5cf2125f1088b250
[ "MIT" ]
58
2018-07-17T01:46:40.000Z
2021-12-25T00:58:36.000Z
content/archive/internal/yuleng-sael200-slides.md
cderv/bookdown.org
160d27245052b3245d4061ce5cf2125f1088b250
[ "MIT" ]
68
2018-07-19T06:30:30.000Z
2022-03-02T07:08:14.000Z
content/archive/internal/yuleng-sael200-slides.md
cderv/bookdown.org
160d27245052b3245d4061ce5cf2125f1088b250
[ "MIT" ]
100
2018-07-21T07:52:07.000Z
2022-03-01T11:17:08.000Z
--- title: "Social Advocacy & Ethical Life" author: "Yuleng Zeng" date: "2020-04-05T23:04:12Z" link: "https://bookdown.org/Yuleng/sael200-slides/" length_weight: "20.7%" repo: "rstudio/yuleng" pinned: false --- This is an introduction to Social Advocacy & Ethical Life (SAEL 200). It is a class I started teaching in Fall 2019, as a member of the Bridge Humanities Corps (BHC) at the University of South Carolina. In compiling this document, I consult a number of online resources. The intention is to record the process of my preparation for this class and help me improve over time. If you see errors, have suggestions, or do not wish your material to be cited here, please do shoot me an email. The syllabus of the class can be found here: ...
62.333333
535
0.758021
eng_Latn
0.998395
6af740737cf4dbaf69c8665b3ba68e1780da96f2
2,151
md
Markdown
_posts/2017-11-20-disponibile-sparkylinux-4-7-tyche-su-base-debian-9.md
DumbMahreeo/linuxhub.it
201ca8534562fb22f013b919d599c547b461bd5c
[ "MIT" ]
null
null
null
_posts/2017-11-20-disponibile-sparkylinux-4-7-tyche-su-base-debian-9.md
DumbMahreeo/linuxhub.it
201ca8534562fb22f013b919d599c547b461bd5c
[ "MIT" ]
null
null
null
_posts/2017-11-20-disponibile-sparkylinux-4-7-tyche-su-base-debian-9.md
DumbMahreeo/linuxhub.it
201ca8534562fb22f013b919d599c547b461bd5c
[ "MIT" ]
null
null
null
---
title: 'SparkyLinux 4.7 Tyche, based on Debian 9, now available'
published: 2017-11-20
layout: post
author: Mirko B.
author_github: mirkobrombin
tags:
- debian
---

SparkyLinux 4.7 is now available for download and brings with it all the software updates from Debian 9 Stretch.<img class="aligncenter size-full wp-image-3001 size-full wp-image-221" src="https://linuxhub.it/wordpress/wp-content/uploads/2017/11/sparkylinux-4-7-tyche-out-now-with-latest-debian-gnu-linux-9-stretch-updates-518625-2.jpg" alt="" width="800" height="596" />This release includes the <strong>Xfce 4.12.3</strong>, <strong>LXDE 0.99.2</strong> and <strong>Openbox 3.6.1</strong> desktop environments, the latest version of the <strong>Calamares 3.1.8</strong> graphical installer, <strong>Mozilla Firefox 52.5.0</strong>, <strong>Mozilla Thunderbird 52.4.0</strong>, <strong>LibreOffice 5.2.7</strong>, <strong>VLC Media Player 2.2.6</strong>, <strong>Pidgin 2.12.0</strong>, <strong>Transmission 2.92</strong>, <strong>HexChat 2.12.4</strong> and <strong>DeaDBeeF 0.7.2</strong>.<blockquote>"No big changes: the new ISO images provide updates of all installed packages, from the Debian 9 and Sparky repositories as of November 17, 2017."</blockquote>reads the official announcement. SparkyLinux 4.7 includes <strong>live ISO</strong> images with the <strong>Xfce, LXDE and Openbox (MinimalGUI)</strong> desktop environments / window managers, as well as <strong>a text-mode edition (MinimalCLI)</strong>, both for <strong>32-bit (i686) and 64-bit (x86_64 / amd64)</strong>. While the new ISO images are provided mainly for those who want to deploy SparkyLinux on a new computer or reinstall it, users of previous releases can upgrade to version 4.7 by running in a terminal:<pre>sudo apt update &amp;&amp; sudo apt full-upgrade</pre><strong>Official announcement</strong> | <a href="https://sparkylinux.org/sparky-4-7/">https://sparkylinux.org/sparky-4-7/</a><strong>Download</strong> | <a href="https://sparkylinux.org/download/">https://sparkylinux.org/download/</a>Revision by Giuliano Zamboni
215.1
1,986
0.770804
ita_Latn
0.525254
6af8b73bbfa9dc4c0ff728129e5edade28fd9b9d
2,999
md
Markdown
covid/README.md
don-k-jacob/Covid-19
08847765aa8c508d62e9d8812aab7aeb1c8b4c1d
[ "MIT" ]
7
2020-05-01T20:07:13.000Z
2020-10-20T17:43:14.000Z
covid/README.md
don-k-jacob/Covid-19
08847765aa8c508d62e9d8812aab7aeb1c8b4c1d
[ "MIT" ]
null
null
null
covid/README.md
don-k-jacob/Covid-19
08847765aa8c508d62e9d8812aab7aeb1c8b4c1d
[ "MIT" ]
3
2020-05-03T04:08:35.000Z
2020-10-19T04:52:12.000Z
# Covid-19

Covid 19 updates

The pandemonium caused by the massive outbreak is nothing short of hell. What led to this situation is ultimately a lack of information. In this digital world we have all the knowledge we need right at our fingertips. Every day, each and every one of us wants to know whether there have been any new developments, how many have overcome this morbid condition, what the status of the neighbouring countries is, how the world is taking it on, etc.

![](https://www.donkjacob.me/images/covid1.png)

COVID is an application developed solely for this purpose. It gathers data from around the world and gives an approximate count of the current cases. The categorisation makes it simple to use and efficient. Using COVID you are able to see the status of the virus outbreak globally as well as country-wise. So there's no need to google it daily anymore!!! The graphical representation and maps are the key features I've included, and the coding was done using Flutter.

![](https://www.donkjacob.me/images/Covid.png)

This was made possible through several APIs.

global status: https://thevirustracker.com/free-api?global=stats

country wise status: https://pomber.github.io/covid19/timeseries.json

map from: https://github.com/localeai/covid19-live-visualization

map data from: https://github.com/CSSEGISandData/COVID-19

An API, or Application Programming Interface, is the computing interface to a software component or a system that defines how other components or systems can use it. It helps in defining the kinds of requests that can be made, how to make them, the data formats that should be used, the conventions to be followed, etc.

![](https://www.donkjacob.me/images/covid2.png)

But currently you won't be able to find my application in the Play Store. They are prioritising the publication of apps that have been commissioned or authorised by official government entities. So any app referencing COVID-19 in its metadata won't get published unless it has been authorised by either a government or a public health organization.

Even so, if you wish to use this app, all you need to do is click the download button below and proceed to download. In order to allow downloads from this source, follow the sequential steps given below:

Settings > Apps and notifications > Chrome > Install unknown apps > ON

![](https://www.donkjacob.me/images/covidlogo.png)

covid19-arm64-v8a.apk : https://drive.google.com/file/d/1Afpy6RV2gNPo87cvRtOXcywkKv2gI03Y/view

covid19-armeabi-v7a.apk : https://drive.google.com/file/d/1GkbDKk6kyxlqTXvDmPcGmG8j6kfrfkJ4/view

covid19-x86_64.apk : https://drive.google.com/file/d/1Pzyg0DzfwRngL2ERw2Mkv1CIUcTz0Nzv/view
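The country-wise endpoint listed above returns a JSON object keyed by country name, each holding an array of daily entries. As a rough sketch of how that shape can be consumed (the sample data below is made up for illustration, not real case counts):

```javascript
// Summarize the pomber/covid19 timeseries shape:
// { country: [{ date, confirmed, deaths, recovered }, ...] }
// The numbers below are illustrative placeholders, not real figures.
function latestTotals(timeseries) {
  const out = {};
  for (const [country, days] of Object.entries(timeseries)) {
    const last = days[days.length - 1]; // entries are in date order
    out[country] = { date: last.date, confirmed: last.confirmed };
  }
  return out;
}

const sample = {
  Atlantis: [
    { date: "2020-4-1", confirmed: 10, deaths: 0, recovered: 1 },
    { date: "2020-4-2", confirmed: 15, deaths: 1, recovered: 2 },
  ],
};

console.log(latestTotals(sample));
```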
99.966667
474
0.803601
eng_Latn
0.996993
6af90032c8b46536ee3f92083708a4b97f7e4513
1,223
md
Markdown
articles/traffic-manager/powershell-samples.md
maiemy/azure-docs.it-it
b3649d817c2ec64a3738b5f05f18f85557d0d9b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/traffic-manager/powershell-samples.md
maiemy/azure-docs.it-it
b3649d817c2ec64a3738b5f05f18f85557d0d9b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/traffic-manager/powershell-samples.md
maiemy/azure-docs.it-it
b3649d817c2ec64a3738b5f05f18f85557d0d9b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure PowerShell samples for Traffic Manager | Microsoft Docs
description: With these samples, use Azure PowerShell to deploy and configure Azure Traffic Manager.
services: traffic-manager
documentationcenter: traffic-manager
author: duongau
manager: twooley
ms.service: traffic-manager
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: ''
ms.workload: infrastructure
ms.date: 10/23/2018
ms.author: duau
ms.openlocfilehash: 03b34312f168f49e65fd83f826b2ad9f5759226e
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/09/2020
ms.locfileid: "89400246"
---
# <a name="azure-powershell-samples-for-traffic-manager"></a>Azure PowerShell samples for Traffic Manager

The following table includes links to Traffic Manager scripts built with Azure PowerShell.

|Title |Description |
|---------|---------|
|[Direct traffic across multiple regions for high application availability](./scripts/traffic-manager-powershell-websites-high-availability.md) | Creates two App Service plans, two web apps, a Traffic Manager profile, and two Traffic Manager endpoints. |
| | |
39.451613
278
0.78332
ita_Latn
0.89301
6af93ea663711e7887ebfeacb3caa7e10c3eb195
775
md
Markdown
Backup/restarting-simple-backup-service.md
joenguyen215/PublicKB
d9547defe557771005d20a045abbfee524993dbc
[ "Apache-2.0" ]
1
2019-10-05T20:44:19.000Z
2019-10-05T20:44:19.000Z
Backup/restarting-simple-backup-service.md
joenguyen215/PublicKB
d9547defe557771005d20a045abbfee524993dbc
[ "Apache-2.0" ]
null
null
null
Backup/restarting-simple-backup-service.md
joenguyen215/PublicKB
d9547defe557771005d20a045abbfee524993dbc
[ "Apache-2.0" ]
1
2019-10-29T04:05:58.000Z
2019-10-29T04:05:58.000Z
{{{ "title": "Restarting Simple Backup Service", "date": "03-30-2016", "author": "Justin Withington", "attachments": [], "related-products" : [], "contentIsHTML": false, "sticky": false }}} In order to restart Simple Backup Service on your system, please follow the instructions below for Linux and Windows accordingly. ### Linux Restart the application’s service with the following command: ``service simplebackupservice restart`` ### Windows Restart the 'SimpleBackupService' service in the Microsoft Computer Management Console: 1. Expand the **Services and Applications** section. 2. Click **Services**. 3. Select **SimpleBackupService**. 4. Click **Restart the service**. ![](../images/backup/restarting-sbs/Windows_Computer_Management_Console.png)
28.703704
129
0.734194
eng_Latn
0.765511
6af9b543445374d8507ffede29e8ffa16fcd6aaf
1,810
md
Markdown
README.md
Machotacoz/zelcash
a3780409080fa17b11e29fa02d8e7bcf8fd897e9
[ "MIT" ]
1
2020-05-02T12:28:28.000Z
2020-05-02T12:28:28.000Z
README.md
Huynhhung0/zelcash
960e5f868301fd833d9f35910d13408a1f938b88
[ "MIT" ]
null
null
null
README.md
Huynhhung0/zelcash
960e5f868301fd833d9f35910d13408a1f938b88
[ "MIT" ]
null
null
null
# Zel 4.0.1 Kamata

[![Build Status](https://travis-ci.com/zelcash/zelcash.svg?branch=master)](https://travis-ci.com/zelcash/zelcash) <img align="right" height=112 width=562 src="doc/imgs/kamata.png">

## Mandatory Upgrade to at least version 4.0.0

What is Zel?
--------------

[Zel](https://zel.network/) is a fork of Zcash 2.1.0-1 aiming to provide a decentralized development platform via ZelFlux, ZelNodes, ZelCore and more. Zel is PoW and "ASIC resistant" with ZelHash (modified Equihash 125,4) using the personalisation string ZelProof, and utilises the LWMA3 difficulty algorithm.

The latest Kamata network upgrade activates on block 558,000, around the 18th of March 2020. Kamata brings ZelFlux, ZelBenchd, Deterministic ZelNodes and more.

<p align="center"> <img src="doc/imgs/kamata-mandatory.png" height=500 > </p>

## :rocket: Getting Started

Please see our [user guide](https://zel.gitbook.io/zeldocs/) for any and all info. To set up a ZelNode, please follow this [script](https://github.com/zelcash/deterministic-zelnode-script/) and wiki.

### Need Help?

* :blue_book: See the documentation at the [Zel GitBook](https://zel.gitbook.io/zelcurrency/installing-zel-daemon) for help and more information.
* :mag: Join us on [Discord.io/Zel](https://discord.io/zel) for support and to join the community conversation.

### Building

Dependencies and build instructions for all supported platforms: [Zel GitBook](https://zel.gitbook.io/zelcurrency/installing-zel-daemon)

If you have the dependencies, you can build Zel from source by running:

```
./zcutil/build.sh -j$(nproc)
```

#### :lock: Security Warnings

See important security warnings on the Zcash [Security Information page](https://z.cash/support/security/).

License
-------

For license information see the file [COPYING](COPYING).
36.2
160
0.745856
eng_Latn
0.854456
6af9d04544856fd7faa0d481a35a2a5accf344e4
1,694
markdown
Markdown
src/content/en/fundamentals/engage-and-retain/web-app-manifest/customize-the-icons.markdown
sanjogpandasp/WebFundamentals
0bbae6d243f7dfecc6b96652b24224fb2439b392
[ "Apache-2.0" ]
1
2019-11-08T07:00:48.000Z
2019-11-08T07:00:48.000Z
src/content/en/fundamentals/engage-and-retain/web-app-manifest/customize-the-icons.markdown
joskid/WebFundamentals
0bbae6d243f7dfecc6b96652b24224fb2439b392
[ "Apache-2.0" ]
null
null
null
src/content/en/fundamentals/engage-and-retain/web-app-manifest/customize-the-icons.markdown
joskid/WebFundamentals
0bbae6d243f7dfecc6b96652b24224fb2439b392
[ "Apache-2.0" ]
1
2019-11-08T07:00:49.000Z
2019-11-08T07:00:49.000Z
---
layout: shared/narrow
title: "Customize the Icons"
description: "When a user adds your site to their home screen, you can define a set of icons for the browser to use."
published_on: 2014-12-17
updated_on: 2016-02-12
authors:
- mattgaunt
- paulkinlan
translation_priority: 1
order: 3
notes:
  icons: "When saving an icon to the home screen, Chrome first looks for icons that match the density of the display and are sized to 48dp * screen density. If none are found it searches for the icon that most closely matches the device characteristics. If, for whatever reason, you want to be specific about targeting an icon at a particular pixel density, you can use the optional <code>density</code> member which takes a number. When you don’t declare density, it defaults to 1.0. This means “use this icon for screen densities 1.0 and up”, which is normally what you want."
---

When a user adds your site to their home screen, you can define a set of icons for the browser to use. The icons for your web app can be defined as shown below, with a type, size, and optional density.

{% highlight json %}
"icons": [{
  "src": "images/touch/icon-128x128.png",
  "type": "image/png",
  "sizes": "128x128"
}, {
  "src": "images/touch/apple-touch-icon.png",
  "sizes": "152x152"
}, {
  "src": "images/touch/ms-touch-icon-144x144-precomposed.png",
  "sizes": "144x144"
}, {
  "src": "images/touch/chrome-touch-icon-192x192.png",
  "sizes": "192x192"
}],
{% endhighlight %}

{% include shared/note.liquid list=page.notes.icons %}

<figure>
  <img src="images/homescreen-icon.png" alt="Add to Home Screen Icon">
  <figcaption>Add to Home Screen Icon</figcaption>
</figure>
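The selection rule described in the note (prefer an icon sized 48dp * screen density, otherwise the closest available size) can be sketched as follows. `pickIcon` is an illustrative helper, not part of any browser API:

```javascript
// Sketch of the described heuristic: ideal size is 48dp * density,
// falling back to the closest available icon. Illustrative only.
const icons = [
  { src: "images/touch/icon-128x128.png", sizes: "128x128" },
  { src: "images/touch/apple-touch-icon.png", sizes: "152x152" },
  { src: "images/touch/ms-touch-icon-144x144-precomposed.png", sizes: "144x144" },
  { src: "images/touch/chrome-touch-icon-192x192.png", sizes: "192x192" },
];

function pickIcon(iconList, density) {
  const ideal = 48 * density; // 48dp scaled by the screen density
  let best = null;
  let bestDiff = Infinity;
  for (const icon of iconList) {
    const size = parseInt(icon.sizes, 10); // "128x128" -> 128
    const diff = Math.abs(size - ideal);
    if (diff < bestDiff) {
      bestDiff = diff;
      best = icon;
    }
  }
  return best;
}

console.log(pickIcon(icons, 3).sizes); // "144x144" (ideal is 48 * 3 = 144)
```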
38.5
576
0.713695
eng_Latn
0.987991
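The icon-selection heuristic quoted in the manifest record above (Chrome first looks for an icon sized to 48dp × screen density, otherwise the closest match) can be sketched as a small selector. This is purely illustrative — `pick_icon` is not a browser API, and the file names are sample data:

```python
def pick_icon(icons, density=1.0, base_dp=48):
    """Return the manifest icon whose pixel size is closest to
    48dp * screen density, per the heuristic the article quotes."""
    target = base_dp * density

    def size_px(icon):
        # "sizes" entries look like "128x128"; use the first dimension.
        return int(icon["sizes"].split("x")[0])

    return min(icons, key=lambda icon: abs(size_px(icon) - target))

icons = [
    {"src": "images/touch/icon-128x128.png", "sizes": "128x128"},
    {"src": "images/touch/ms-touch-icon-144x144-precomposed.png", "sizes": "144x144"},
    {"src": "images/touch/chrome-touch-icon-192x192.png", "sizes": "192x192"},
]
best = pick_icon(icons, density=3.0)  # target: 48dp * 3.0 = 144px
```

On a 3.0-density screen the 144×144 icon wins exactly, which is why the article recommends providing several sizes rather than relying on the browser's nearest-match fallback.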
6afaaf6792e42ebc28554d2f8bb2c95586529d73
2,225
md
Markdown
_posts/Programming-C/2019-12-19-C-Basic-Variables.md
jeeyune/jeeyune.github.io
48c5744e579d6e57e6bee202a0b5370019878326
[ "MIT" ]
null
null
null
_posts/Programming-C/2019-12-19-C-Basic-Variables.md
jeeyune/jeeyune.github.io
48c5744e579d6e57e6bee202a0b5370019878326
[ "MIT" ]
1
2019-07-14T07:04:36.000Z
2019-07-14T16:15:24.000Z
_posts/Programming-C/2019-12-19-C-Basic-Variables.md
jeeyune/jeeyune.github.io
48c5744e579d6e57e6bee202a0b5370019878326
[ "MIT" ]
1
2020-12-28T01:43:55.000Z
2020-12-28T01:43:55.000Z
--- layout: post title: "[C Basics] Global Variables, Local Variables, Parameters, and Arguments: Comparison and Examples" comments: true categories: [Programming/C] tags: [C, Variables, Data Structures] --- When I first started studying C, the concepts that confused me most were global variables, local variables, and parameters.<br> _~~They were especially confusing in Korean..~~_ <br> So I decided to study them properly and write this post to help me remember! <br><br> I still have a lot to learn, but for anyone who finds these concepts as confusing as I did, I'll explain them in as much detail as I can. <br><br> *So...* <br><br> **What is a global variable?** <br> A global variable is a variable declared **outside** any block, that is, outside the scope enclosed in curly braces. As the name hints, it can be referenced from anywhere without being declared inside a block.<br> A global variable occupies memory from the moment the program starts until it ends, and never disappears in between. So if you want a variable's value to persist, store it in a global variable. A global variable is declared like this: <br><br> ```c #include <stdio.h> int a; int main(){ printf("Hello, World!"); printf("%d", a + b); } int b; ``` <br> In the code above, `int a` and `int b`, declared outside any block around `int main()`, are global variables. <br> One thing to watch out for: if, like `int b`, you declare a variable after the main function and then use it inside main, you get a compile error. The compiler always reads from top to bottom, so it encounters the not-yet-declared `int b` and reports an error. `int a`, by contrast, is declared as a global variable outside any block before main, so it can be referenced from anywhere. <br><br> **What is a local variable?** <br> A local variable, in contrast to a global variable, is a variable declared **inside a block**. Any variable declared inside a block is a local variable, so it is only valid within that block. Once execution leaves the block, the variable disappears from memory and loses its meaning, so it must only be used inside that block. For example: <br><br> ```c #include <stdio.h> int multiply(){ int a = 1, b = 5; printf("%d\n", a * b); } int main(){ int a = 10; for(int i = 0; i < 3; i++){ int b = 2; printf("%d\n", a += b); } int c = 15; printf("%d", b + c); } ``` <br> In this code, `int a`, `b`, and `c` are local variables. Note that although `multiply` and main use the same variable names, the variables are declared in different blocks, so they are different variables. That is, the `a` and `b` of `multiply` cannot be used in main, and the `a` and `b` of main cannot be used in `multiply`. <br> Looking at the main function, `a` can be used inside the for loop because the for loop is a block contained within main. But the code after the for loop causes a compile error, because `b` has already gone out of scope and disappeared. That is why variables should always be declared at the top of the function. <br><br> **What are parameters and arguments?** <br> A parameter is the variable in a function that **receives** a value passed to it. An argument is the value that is **passed** to that variable. 
In words this is quite vague and confusing, so here is an example: <br> ```c #include <stdio.h> int plus(int a, int b){ // here a and b are parameters return a + b; } int main(){ int result = plus(2, 4); // here 2 and 4 are arguments printf("%d", result); } ``` <br> Does that make it a bit clearer? Next time we'll look at the memory layout of C. <br> Thanks for reading!
21.601942
181
0.620674
kor_Hang
1.00001
6afaf3d8ab98361c96c5521dbe9fe2bdf452d745
1,706
md
Markdown
_episodes/15-staging.md
alimanfoo/user_guide
01c7fa1a24669087d535a459e4c21781e942fd49
[ "Apache-2.0", "CC-BY-4.0" ]
null
null
null
_episodes/15-staging.md
alimanfoo/user_guide
01c7fa1a24669087d535a459e4c21781e942fd49
[ "Apache-2.0", "CC-BY-4.0" ]
null
null
null
_episodes/15-staging.md
alimanfoo/user_guide
01c7fa1a24669087d535a459e4c21781e942fd49
[ "Apache-2.0", "CC-BY-4.0" ]
null
null
null
--- title: "Staging Input Files" teaching: 10 exercises: 0 questions: - "How do I stage input files in the working directory?" objectives: - "Learn how to handle situations where a tool expects to write output files to the same directory as its input files." keypoints: - "Input files are normally kept in a read-only directory." - "Use `InitialWorkDirRequirement` to stage input files in the working directory." --- Normally, input files are located in a read-only directory separate from the output directory. This causes problems if the underlying tool expects to write its output files alongside the input file in the same directory. You use `InitialWorkDirRequirement` to stage input files into the output directory. In this example, we use a JavaScript expression to extract the base name of the input file from its leading directory path. *linkfile.cwl* ~~~ {% include cwl/15-staging/linkfile.cwl %} ~~~ {: .source} *arguments-job.yml* ~~~ {% include cwl/15-staging/arguments-job.yml %} ~~~ {: .source} Now invoke `cwl-runner` with the tool wrapper and the input object on the command line: ~~~ $ cwl-runner linkfile.cwl arguments-job.yml [job 139928309171664] /home/example$ docker run -i --volume=/home/example/Hello.java:/var/lib/cwl/job557617295_examples/Hello.java:ro --volume=/home/example:/var/spool/cwl:rw --volume=/tmp/tmpmNbApw:/tmp:rw --workdir=/var/spool/cwl --read-only=true --net=none --user=1001 --rm --env=TMPDIR=/tmp java:7 javac Hello.java Final process status is success { "classfile": { "size": 416, "location": "/home/example/Hello.class", "checksum": "sha1$2f7ac33c1f3aac3f1fec7b936b6562422c85b38a", "class": "File" } } ~~~ {: .output} {% include links.md %}
32.188679
318
0.741501
eng_Latn
0.97314
6afb37ff13adf65f337739eae13ae087f1fdf190
160
md
Markdown
computing/physics-engine.md
ComputingTeachers/mapOfComputing
fafce4100d23531bd60522b53e80ffbffd3115fc
[ "CC0-1.0" ]
1
2021-05-24T11:44:20.000Z
2021-05-24T11:44:20.000Z
computing/physics-engine.md
ComputingTeachers/mapOfComputing
fafce4100d23531bd60522b53e80ffbffd3115fc
[ "CC0-1.0" ]
null
null
null
computing/physics-engine.md
ComputingTeachers/mapOfComputing
fafce4100d23531bd60522b53e80ffbffd3115fc
[ "CC0-1.0" ]
null
null
null
Physics Engine ============== Material Modeler * [Designing a physics engine](https://blog.winter.dev/2020/designing-a-physics-engine/) - Code and examples
22.857143
108
0.69375
yue_Hant
0.630709
6afb5f577eaecad8e2aa78dc9555da86b8e607bf
939
md
Markdown
README.md
faluciano/travel-tinder
634a1e16288c02e8ef399e77728eef5725098203
[ "CC-BY-3.0" ]
null
null
null
README.md
faluciano/travel-tinder
634a1e16288c02e8ef399e77728eef5725098203
[ "CC-BY-3.0" ]
null
null
null
README.md
faluciano/travel-tinder
634a1e16288c02e8ef399e77728eef5725098203
[ "CC-BY-3.0" ]
null
null
null
# travel-tinder A Tinder-like web app made with Flask, but for locations. ## Functionality Sign up\ Log in\ Look up a location\ Like/dislike location cards\ Look at previously liked locations ### To run Set up a Google API key named googleapikey\ `pip install -r requirements.txt`\ `python main.py` ## What? travel-tinder is a web app for those who want to go somewhere but don't know where to go.\ Pick any place in the world, then a category, and like away.\ It was built using Flask, HTML, CSS, and some JavaScript. ## Why? This was built for a hackathon, where it won the prize for the hackathon's theme. We thought it would be a cool idea to make Tinder, but for traveling, so that trips abroad come with interesting experiences. ## Credits The design for the cards was taken from CodePen, by user Benoit Proulx: https://codepen.io/benproulx/pen/EjQMxQ The template for the main site was taken from https://templated.co/hielo
32.37931
214
0.768903
eng_Latn
0.999182
6afb7ae19403fe85b979e7acbf0399ffe7a3252b
5,071
md
Markdown
docs-archive-a/2014/relational-databases/security/encryption/back-up-a-database-master-key.md
v-alji/sql-docs-archive-pr.es-es
410a49b0a08c22fd4bc973078b563238d69c8b44
[ "CC-BY-4.0", "MIT" ]
1
2021-11-25T21:09:51.000Z
2021-11-25T21:09:51.000Z
docs-archive-a/2014/relational-databases/security/encryption/back-up-a-database-master-key.md
v-alji/sql-docs-archive-pr.es-es
410a49b0a08c22fd4bc973078b563238d69c8b44
[ "CC-BY-4.0", "MIT" ]
1
2021-11-25T02:22:05.000Z
2021-11-25T02:27:15.000Z
docs-archive-a/2014/relational-databases/security/encryption/back-up-a-database-master-key.md
v-alji/sql-docs-archive-pr.es-es
410a49b0a08c22fd4bc973078b563238d69c8b44
[ "CC-BY-4.0", "MIT" ]
1
2021-09-29T08:53:04.000Z
2021-09-29T08:53:04.000Z
--- title: Back Up a Database Master Key | Microsoft Docs ms.custom: '' ms.date: 06/13/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.technology: security ms.topic: conceptual helpviewer_keywords: - database master key [SQL Server], exporting ms.assetid: 7ad9a0a0-6e4f-4f7b-8801-8c1b9d49c4d8 author: jaszymas ms.author: jaszymas ms.openlocfilehash: 9a66d28fea8289719d3efb2351409e0f14379ec9 ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 08/04/2020 ms.locfileid: "87750625" --- # <a name="back-up-a-database-master-key"></a>Back up a database master key This topic describes how to back up the database master key in [!INCLUDE[ssCurrent](../../../includes/sscurrent-md.md)] using [!INCLUDE[tsql](../../../includes/tsql-md.md)]. The database master key is used to encrypt other keys and certificates in the database. If it is deleted or corrupted, [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] may be unable to decrypt those keys, and the data encrypted with them would become unusable and be lost. For this reason, you should back up the database master key and store the backup in a secure location off-site. **In this topic** - **Before you begin:** [Limitations and restrictions](#Restrictions) [Security](#Security) - [To back up the database master key using Transact-SQL](#Procedure) ## <a name="before-you-begin"></a><a name="BeforeYouBegin"></a> Before you begin ### <a name="limitations-and-restrictions"></a><a name="Restrictions"></a> Limitations and restrictions - The master key must be open and, therefore, decrypted before it is backed up. 
If it is encrypted with the service master key, the master key does not have to be opened explicitly. But if the master key is encrypted only with a password, it must be opened explicitly. - We recommend that you back up the master key immediately after creating it and store the backup in a secure location off-site. ### <a name="security"></a><a name="Security"></a> Security #### <a name="permissions"></a><a name="Permissions"></a> Permissions Requires CONTROL permission on the database. ## <a name="using-sql-server-management-studio-with-transact-sql"></a><a name="Procedure"></a>Using SQL Server Management Studio with Transact-SQL #### <a name="to-back-up-the-database-master-key"></a>To back up the database master key 1. In [!INCLUDE[ssManStudioFull](../../../includes/ssmanstudiofull-md.md)], connect to the instance of [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] that contains the database master key you want to back up. 2. Choose a password that will be used to encrypt the database master key on the backup medium. This password is subject to complexity checks. 3. Obtain a removable backup medium for storing a copy of the backed-up key. 4. Identify an NTFS directory in which to create the backup of the key. This is where you will create the file specified in the next step. The directory should be protected with highly restrictive access control lists (ACLs). 5. In **Object Explorer**, connect to an instance of the [!INCLUDE[ssDE](../../../includes/ssde-md.md)]. 6. On the Standard bar, click **New Query**. 7. Copy and paste the following example into the query window, and then click **Execute**. ``` -- Creates a backup of the "AdventureWorks2012" master key. 
Because this master key is not encrypted by the service master key, a password must be specified when it is opened. USE AdventureWorks2012; GO OPEN MASTER KEY DECRYPTION BY PASSWORD = 'sfj5300osdVdgwdfkli7'; BACKUP MASTER KEY TO FILE = 'c:\temp\exportedmasterkey' ENCRYPTION BY PASSWORD = 'sd092735kjn$&adsg'; GO ``` > [!NOTE] > The file path to the key and the key's password (if one exists) will be different from those indicated above. Make sure that both are specific to your server installation and your key. 8. Copy the file to the backup medium and verify that the copy is correct. 9. Store the backup in a secure location off-site. For more information, see [OPEN MASTER KEY &#40;Transact-SQL&#41;](/sql/t-sql/statements/open-master-key-transact-sql) and [BACKUP MASTER KEY &#40;Transact-SQL&#41;](/sql/t-sql/statements/backup-master-key-transact-sql).
58.965116
698
0.730625
spa_Latn
0.965128
6afbf6c4cc192819c579ae0c77cdaf0c17e8cd66
12,507
md
Markdown
SharePoint/SharePointServer/search/how-to-display-values-from-custom-managed-properties-in-search-resultsoption-1.md
cjdinger/OfficeDocs-SharePoint
5f2947cf4eb965e98d49e51f9c92ccb6475c6c64
[ "CC-BY-4.0", "MIT" ]
null
null
null
SharePoint/SharePointServer/search/how-to-display-values-from-custom-managed-properties-in-search-resultsoption-1.md
cjdinger/OfficeDocs-SharePoint
5f2947cf4eb965e98d49e51f9c92ccb6475c6c64
[ "CC-BY-4.0", "MIT" ]
null
null
null
SharePoint/SharePointServer/search/how-to-display-values-from-custom-managed-properties-in-search-resultsoption-1.md
cjdinger/OfficeDocs-SharePoint
5f2947cf4eb965e98d49e51f9c92ccb6475c6c64
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "How to display values from custom managed properties in classic search results - option 1 in SharePoint Server" ms.reviewer: ms.author: serdars author: SerdarSoysal manager: serdars ms.date: 3/7/2018 audience: ITPro f1.keywords: - NOCSH ms.topic: article ms.prod: sharepoint-server-itpro ms.localizationpriority: medium ms.assetid: 383d6e18-d108-45b3-afb2-194fc3de2206 description: "Learn one option for displaying values from custom managed properties in SharePoint Server." --- # How to display values from custom managed properties in classic search results - option 1 in SharePoint Server [!INCLUDE[appliesto-2013-2016-2019-xxx-md](../includes/appliesto-2013-2016-2019-xxx-md.md)] In this article, you'll learn: - [How to display a custom icon](how-to-display-values-from-custom-managed-properties-in-search-resultsoption-1.md#BKMK_HowtoDisplayaCustomIcon) - [How to find a managed property name](how-to-display-values-from-custom-managed-properties-in-search-resultsoption-1.md#BKMK_HowtoFindaManagedPropertyName) - [How to change an item display template to show values from custom managed properties - option 1](how-to-display-values-from-custom-managed-properties-in-search-resultsoption-1.md#BKMK_HowtoModifyanItemDisplayTemplatetoShowValuesFromCustomManagedPropertiesOption1) - [About click tracking and automatically improved relevancy](how-to-display-values-from-custom-managed-properties-in-search-resultsoption-1.md#BKMK_AboutClickTrackingandAutomaticallyImprovedRelevancy) ## How to display a custom icon <a name="BKMK_HowtoDisplayaCustomIcon"> </a> In [Understanding how search results are displayed in SharePoint Server](understanding-how-search-results-are-displayed.md) we explained how the icons Word, PDF, and Excel are displayed for classic search results. 
In our Search Center scenario, we wanted to add the following custom icon next to all search results that belong to the newly created *TechNet content* result type: TN To display a custom icon for classic search results, here's what you should do: 1. Add the custom icon to a SharePoint Server library. In our Search Center scenario, we added the custom icon to the **Images** library. ![Icon Added](../media/OTCSP_IconAdded.png) 2. Open the item display template that is referenced from the result type for which you want to display a custom icon. In our Search Center scenario, we also removed the if statement: *if (ctx.CurrentItem.IsContainer)*. ![Display Template Custom Icon](../media/OTCSP_DisplayTemplateCustomIcon.png) 3. On a search page, enter a query that will trigger the new result type. 4. In our Search Center scenario, we entered "result type." Search results that are TechNet publications now have a custom icon next to them. Great! ![Icon Displayed](../media/OTCSP_IconDisplayed.png) So users of our Search Center could now easily distinguish the search results that were published on TechNet. But, we also wanted to add information from custom site columns so that users could see important information about each search result without having to click it. In [Understanding how search results are displayed in SharePoint Server](understanding-how-search-results-are-displayed.md) we explained that site columns are "transformed" into managed properties during crawl. We also explained that only managed properties that are listed in an item display template can be displayed in search results. So, to display custom information in your search results, you must have to add managed properties to an item display template. Hence, the next thing that you should do is find the managed property name that corresponds to the custom site column that you want to use. 
## How to find a managed property name <a name="BKMK_HowtoFindaManagedPropertyName"> </a> Before you start to search for a managed property name, it's important that you know a bit about the naming convention for managed properties. For more information about this, see [About the naming convention for automatically created crawled and managed properties](../administration/from-site-column-to-managed-propertywhat-s-up-with-that.md#BKMK_AbouttheNamingConventionforAutomaticallyCreatedCrawledandManagedProperties). Depending on your permission level, you can search for managed properties from three places: |**Permission level**|**Search from this location**| |:-----|:-----| |Search service application administrator <br/> |Central Administration --\> Managed Service Application --\> Search Service Application --\> Search Schema <br/> | |Site collection administrator <br/> |Site Settings --\> Search Schema (in the Site Collection Administration section) <br/> | |Site collection owner <br/> |Site Settings --\> Schema (in the Search section) <br/> | Here's what you should do: 1. Go to **Site settings** > **Search Schema**. ![Search Schema](../media/OTCSP_SearchSchema.png) 2. On the **Managed Properties** page, in the **Managed property** field, enter the name of the site column that you want to find the managed property name of. Remember that managed property names don't contain spaces. Therefore, if your site column name contains a space, leave it out. In our Search Center scenario, we wanted to find the managed property name for the site column *Content Summary*. We entered *ContentSummary* in the **Managed property** field, and selected the green arrow icon. ![Search Content Summary](../media/OTCSP_SearchContentSummary.png) One search result was returned: *ContentSummaryOWSMTXT*. 
![Content Summary](../media/OTCSP_ContentSummary.png) Because the **Content Summary** site column is of type *Multiple lines of text*, we knew this was the managed property name we wanted to use. 3. Repeat the steps of this procedure to find the names of all of the managed properties that you want to display in your search results. Now that you have found the names of the managed properties that you want to show in your search results, the next step is to change the item display template. ## How to change an item display template to show values from custom managed properties - option 1 <a name="BKMK_HowtoModifyanItemDisplayTemplatetoShowValuesFromCustomManagedPropertiesOption1"> </a> In [Understanding how search results are displayed in SharePoint Server](understanding-how-search-results-are-displayed.md) we mentioned that there are several ways to change an item display template to show values from custom managed properties. The option explained in this section is very simple: it doesn't include any if statements, and hit highlighting is not applied. We'll cover the second option in the next article of this series. Here's what you should do: 1. Open the item display template that belongs to the result type for which you want to customize search results. In our Search Center scenario, this was *TechNet content*. 2. In the item display template, in the **ManagedPropertyMapping** tag, use the following syntax to add the custom managed properties that you want to display: ``` '<Current item property name>':'<Managed property name>' ``` In our Search Center scenario, we wanted the values from the managed properties *ContentSummaryOWSMTXT* and *owstaxIdTechnicalSubject* to appear in the search result. To make the file easier to maintain, we named the current item properties the same as the managed properties. ![Add MPs](../media/OTCSP_AddMPs.png) 3. 
Inside the second \<div\> tag in the \<body\>, use the following syntax to add code that will display the value of the custom managed property: ``` _#= ctx.CurrentItem.<Current item property name> =#_ ``` In our Search Center scenario, we added the following to the item display template: ``` <div>_#= ctx.CurrentItem. ContentSummaryOWSMTXT =#_</div> <div>_#= ctx.CurrentItem. owstaxIdTechnicalSubject =#_</div> ``` ![Display Two New MPs](../media/OTCSP_DisplayTwoNewMPs.png) 4. Save the item display template. > [!NOTE] > You don't have to do this step if you are using SharePoint in Microsoft 365. Go to **Site settings** > **Search Result Types**. A **Property Sync** alert appears. ![Property Sync Alert](../media/OTCSP_PropertySyncAlert.png) This alert is displayed because we added managed properties to an item display template (what we did in step 2). To update the result types with the newly added managed properties, select **Update**. ![Updated MPs](../media/OTCSP_UpdateMPs.png) > [!IMPORTANT] > If you don't do this update, the newly added managed properties won't display in your search results. After we made this change, when users entered a query in our Search Center, both the value of *ContentSummaryOWSMTXT* and the value for *owstaxIdTechnicalSubject* appeared in the search results. ![Search Results List Item](../media/OTCSP_SearchResultListItem.png) Even though two custom properties appeared in the search results, the result wasn't completely right. For example, we wanted to display the two custom properties between the title and the link, and not below the link as was currently the case. To better understand why the search results were displayed the way that they were, let's take a closer look at the customized item display template: ![Display Template Flow](../media/OTCSP_DisplayTemplateFlow.png) 1. `ctx.CurrentItem.csr_Icon` points to the location of my custom icon. This variable is used by the *Item_CommonItem_Body* display template. 2. 
`_#=ctx.RenderBody(ctx)=#_` calls the *Item_CommonItem_Body* display template. (Remember [Understanding how item display templates and hit highlighting work in SharePoint Server](understanding-how-item-display-templates-and-hit-highlighting-work.md). The *Item_CommonItem_Body* display template displays the custom icon, title, and the link to the item.) 3. `_#= ctx.CurrentItem.ContentSummaryOWSMTXT =#_` and `_#= ctx.CurrentItem.owstaxIdTechnicalSubject =#_` display the values of the two managed properties, *ContentSummaryOWSMTXT* and *owstaxIdTechnicalSubject*. To display the custom properties between the title and the link, you could take the *Item_CommonItem_Body* display template out of play by deleting the reference `_#=ctx.RenderBody(ctx)=#_` from your custom display template. You could then add the properties in the order that you want them to display, for example as follows: ![Remove Reference](../media/OTCSP_RemoveReference.png) The search result would then look like this: ![Results Without Common Reference](../media/OTCSP_ResultWithoutCommonReference.png) By working a bit more on the styling, you could have a good enough result. But, by deleting the reference to `_#=ctx.RenderBody(ctx)=#_`, the *Item_CommonItem_Body* display template is no longer used to display results. The *Item_CommonItem_Body* display template contains some functionality that will automatically improve the relevancy of your classic search results. So, before you delete the `_#=ctx.RenderBody(ctx)=#_` reference, you should consider whether automatically improved relevancy is something that the users of your search site would benefit from. ## About click tracking and automatically improved relevancy <a name="BKMK_AboutClickTrackingandAutomaticallyImprovedRelevancy"> </a> The *Item_CommonItem_Body* display template contains an *onclick* method that tracks the click behavior of users. This tracking influences the relevancy of classic search results. 
For example, a search result that is often clicked by users will automatically be displayed higher up in the search results. > [!IMPORTANT] > If you want your classic search results to receive automatically improved relevancy based on the click behavior of users, do not delete the reference to `_#=ctx.RenderBody(ctx)=#_` from the item display template. In the next article, we'll explain how you can keep this reference, display custom properties between the title and link in the classic search results, and also apply hit highlighting to your custom properties. ### Next article in this series [How to display values from custom managed properties in search results - option 2 in SharePoint Server](how-to-display-values-from-custom-managed-properties-in-search-resultsoption-2.md)
66.174603
605
0.769089
eng_Latn
0.989052
6afc000dcce64801338986bb935e9fd78683965c
229
md
Markdown
README.md
namjagbrawa/framework
5045808ef870ea396ea512cd8a760972565f3ec8
[ "Apache-2.0" ]
5
2018-02-02T15:40:03.000Z
2018-05-25T11:04:45.000Z
README.md
namjagbrawa/framework
5045808ef870ea396ea512cd8a760972565f3ec8
[ "Apache-2.0" ]
null
null
null
README.md
namjagbrawa/framework
5045808ef870ea396ea512cd8a760972565f3ec8
[ "Apache-2.0" ]
1
2019-01-31T07:32:00.000Z
2019-01-31T07:32:00.000Z
Bingo Framework. Poker Game Server. A distributed RPC framework based on Dubbo. Easy to configure. Intended for game servers or other servers. A distributed RPC framework modified from Dubbo: unused modules are removed, the configuration style is simplified, and service routing, service discovery, and long-lived client socket connections are implemented by default. It currently serves as the base framework for game backend services, on top of which business and game logic are implemented.
32.714286
98
0.842795
eng_Latn
0.379086
6afc29a4f130c015c7be405755aea34133e058cb
3,621
md
Markdown
README.md
frafra/kirc
b11f0f355391fc2109c21e2989ce1c6fd548a2d9
[ "MIT" ]
null
null
null
README.md
frafra/kirc
b11f0f355391fc2109c21e2989ce1c6fd548a2d9
[ "MIT" ]
null
null
null
README.md
frafra/kirc
b11f0f355391fc2109c21e2989ce1c6fd548a2d9
[ "MIT" ]
null
null
null
<h3 align="center"><img src="https://raw.githubusercontent.com/mcpcpc/kirc/master/.github/kirc.png" alt="logo" height="170px"></h3> <p align="center">KISS for IRC, a tiny IRC client written in POSIX C99.</p> <p align="center"> <a href="./LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg"></a> <a href="https://github.com/mcpcpc/kirc/releases"><img src="https://img.shields.io/github/v/release/mcpcpc/kirc.svg"></a> <a href="https://repology.org/metapackage/kirc"><img src="https://repology.org/badge/tiny-repos/kirc.svg" alt="Packaging status"></a> </p> ## Objectives _"Do one thing and do it well"_ — Emphasis was placed on building simple, short, clear, modular, and extensible code that can be easily maintained and repurposed (per the [Unix philosophy](https://en.wikipedia.org/wiki/Unix_philosophy)). _Portability_ — [POSIX](https://en.wikipedia.org/wiki/POSIX) compliance ensures seamless compatibility and interoperability between variants of Unix and other operating systems. _Usability_ — Commands and shortcuts should feel "natural" when using a [standard 104-key US keyboard layout](https://en.wikipedia.org/wiki/Keyboard_layout). Where possible, the number of keystrokes have been minimized. ## Usage ```shell usage: kirc [-s hostname] [-p port] [-c channel] [-n nick] [-r real name] [-u username] [-k password] [-x init command] [-w columns] [-W columns] [-o path] [-h|v|V] -s server address (default: 'irc.freenode.org') -p server port (default: '6667') -c channel name (default: '#kisslinux') -n nickname (required) -u server username (optional) -k server password (optional) -r real name (optional) -v version information -V verbose output (e.g. 
raw stream) -o output path to log irc stream -x send command to irc server after initial connection -w maximum width of the printed left column (default: '10') -W maximum width of the entire printed stream (default '80') -h basic usage information ``` ## Features * No dependencies other than a [C99 compiler](https://gcc.gnu.org/). * Complies with [RFC 2812](https://tools.ietf.org/html/rfc2812) standard. * Ability to log the entire chat history (see _Usage_ section for more information). * vi-like command shortcuts: ```shell <message> Send a message to the current channel. /m <nick|channel> <message> Send a message to a specified channel or nick. /M <message> Send a message to NickServ. /Q <message> Send a message and close the host connection. /x <message> Send a message directly to the server. /j <channel> Join a specified channel. /p <channel> Leave (part) a specified channel. /n List all users on the current channel. /q Close the host connection. ``` * Color scheme definition via [ANSI 8-bit colors](https://en.wikipedia.org/wiki/ANSI_escape_code). Therefore, one could theoretically achieve uniform color definition across all shell applications and tools. ## Screenshots ![Screenshot 1](/.github/example.png) ## Installation Building and installing on **KISS Linux** using the Community repository: ```shell kiss b kirc kiss i kirc ``` Building and installing on **Arch** and **Arch-based** distros using the AUR: ```shell git clone https://aur.archlinux.org/kirc-git.git cd kirc makepkg -si ``` Building and installing from source (works on **Raspbian**, **Debian**, **Ubuntu** and many other Unix distributions): ```shell git clone https://github.com/mcpcpc/kirc.git cd kirc make make install ```
41.62069
239
0.696769
eng_Latn
0.876262
6afc5f91a1ad926c4df003a8664714fa74b6d4c4
11,882
md
Markdown
docs/integration-services/change-data-capture/manage-a-cdc-instance.md
sql-aus-hh/sql-docs.de-de
edfac31211cedb5d13440802f131a1e48934748a
[ "CC-BY-4.0", "MIT" ]
1
2022-02-25T18:10:29.000Z
2022-02-25T18:10:29.000Z
docs/integration-services/change-data-capture/manage-a-cdc-instance.md
sql-aus-hh/sql-docs.de-de
edfac31211cedb5d13440802f131a1e48934748a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/integration-services/change-data-capture/manage-a-cdc-instance.md
sql-aus-hh/sql-docs.de-de
edfac31211cedb5d13440802f131a1e48934748a
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Verwalten einer CDC-Instanz | Microsoft-Dokumentation ms.custom: '' ms.date: 03/01/2017 ms.prod: sql ms.prod_service: integration-services ms.reviewer: '' ms.technology: integration-services ms.topic: conceptual f1_keywords: - manIns ms.assetid: cfed22c8-c666-40ca-9e73-24d93e85ba92 author: douglaslMS ms.author: douglasl manager: craigg ms.openlocfilehash: c9edcbe27c015e4f63b2a0d66640dcca818d3eba ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 10/01/2018 ms.locfileid: "47853888" --- # <a name="manage-a-cdc-instance"></a>Verwalten einer CDC-Instanz Sie können die CDC Designer Console zum Anzeigen von Informationen zu den erstellten Instanzen und zum Verwalten des Betriebs der Instanzen verwenden. Klicken Sie im linken Bereich auf den Namen einer Instanz, um die Informationen zur Instanz anzuzeigen. > [!NOTE] > Wenn Sie im linken Bereich einen Dienst auswählen, wird die Liste der verfügbaren Instanzen auch im mittleren Bereich der CDC Designer Console angezeigt. Wenn Sie eine der Instanzen in diesem Abschnitt auswählen, können Sie die Tasks im rechten Bereich ausführen. Sie können jedoch nicht die Informationen auf den Registerkarten mit den Eigenschaften anzeigen. ## <a name="what-you-can-do-when-you-display-the-cdc-instance-information"></a>Optionen beim Anzeigen der Informationen zur CDC-Instanz Die folgenden Aktionen werden im rechten Bereich ausgeführt: **Start** Klicken Sie auf **Start** , um die Aufzeichnung der Änderungen für die ausgewählte CDC-Instanz zu starten. **Beenden** Klicken Sie auf **Beenden** , um die Aufzeichnung für die ausgewählte CDC-Instanz zu beenden. Wenn Sie die CDC-Instanz beenden, gehen die Änderungen, die bis zu diesem Punkt aufgezeichnet wurden, nicht verloren und werden übermittelt, wenn die CDC-Instanz fortgesetzt wird. **Zurücksetzen** Klicken Sie auf **Zurücksetzen** , um die CDC-Instanz auf ihren ursprünglichen (leeren) Zustand zurückzusetzen. 
This option is available when the CDC instance is stopped. All changes in the change tables and the internal state of the CDC instance are deleted. When the CDC instance is started again later, change capture begins from that point on and includes only transactions that started after the CDC instance was started.

Click **OK** in the confirmation dialog to confirm that you want to reset the CDC instance and delete the changes written to the change tables.

**Delete**
Click **Delete** to permanently delete the CDC instance. This option is only available when the CDC instance is stopped.

Click **OK** in the confirmation dialog to confirm that you want to delete the CDC instance.

**Oracle Logging Script**
Click this link to display the Oracle Logging Script dialog box, which contains the Oracle supplemental logging script. For information about the steps you can perform in this dialog box, see [Oracle Supplemental Logging Script](../../integration-services/change-data-capture/oracle-supplemental-logging-script.md).

> [!NOTE]
> When you run the supplemental logging scripts, the Oracle Credentials for Running Script dialog box opens, where you can provide a valid Oracle user name and the associated password. For information about providing the correct Oracle credentials, see [Oracle Credentials for Running Script](../../integration-services/change-data-capture/oracle-credentials-for-running-script.md).

**CDC Instance Deployment Script**
Click this link to open the CDC Instance Deployment Script dialog box, which displays the deployment script for the CDC instance. For information about this dialog box, see [CDC Instance Deployment Script](../../integration-services/change-data-capture/cdc-instance-deployment-script.md).
**Properties**
Click this link to open the properties editor. You edit the CDC instance configuration with the properties editor. For more information about editing the properties of a CDC instance, see [Edit Instance Properties](../../integration-services/change-data-capture/edit-instance-properties.md).

**Viewer tabs**
The following viewer tabs are available when you display information for the CDC instance. The information on these tabs is read-only.

**Status**
This tab contains information and statistics about the current state of the CDC instance. It provides the following information.

- **Status**: An icon that indicates the current state of the CDC instance. The states are described below.

  |||
  |-|-|
  |![Error](../../integration-services/change-data-capture/media/error.gif "Error")|**Error**: The Oracle CDC instance is not running because a non-recoverable error occurred. The following substates are available:<br /><br /> **Misconfigured**: A configuration error occurred that requires manual intervention.<br /><br /> **Password required**: No password has been set for the Oracle CDC instance, or the password is not valid.<br /><br /> **Unexpected**: All other non-recoverable errors.|
  |![OK](../../integration-services/change-data-capture/media/okay.gif "OK")|**Running**: The CDC instance is running and processing change records. The following substates are available:<br /><br /> **Idle**: All change records have been processed and stored in the target change tables. There are no more active transactions.<br /><br /> **Processing**: Change records that have not yet been written to the change tables are being processed.|
  |![Stop](../../integration-services/change-data-capture/media/stop.gif "Stop")|**Stopped**: The CDC instance is not running. The Stopped state indicates that the CDC instance was stopped in a normal way.|
  |![Paused](../../integration-services/change-data-capture/media/paused.gif "Paused")|**Paused**: The CDC instance is running, but processing has been paused because of a recoverable error. The following substates are available:<br /><br /> **Disconnected**: The connection to the source Oracle database cannot be established. Processing resumes once the connection is restored.<br /><br /> **Storage**: The storage is full. Processing resumes when additional storage becomes available.<br /><br /> **Logger**: The logger is connected to Oracle but cannot read the Oracle transaction logs because of a temporary problem, for example because a required transaction log is not available.|

- **Detailed Status**: The current substate.

- **Status Message**: Additional information about the current state.

- **Timestamp**: The time, in UTC, at which the CDC status was last read from the status table.

- **Currently Processing**: In this section, you can monitor the following information.

  - **Last transaction timestamp**: The local time of the last transaction written to the change tables.

  - **Last change timestamp**: The local time of the last change visible to the Oracle CDC instance in the transaction logs of the source Oracle database. This provides information about the current latency of the CDC instance in reading the Oracle transaction log.
  - **Transaction log head CN**: The last change number (CN) read from the Oracle transaction log.

  - **Transaction log tail CN**: The change number used to recover or restart the CDC instance. The Oracle CDC instance falls back to this position if it is restarted or another failure occurs (including a cluster failover).

  - **Current CN**: The last change number (SCN) in the source Oracle database (not in the transaction log).

  - **Active transactions**: The current number of source Oracle transactions that are being processed by the Oracle CDC instance and for which no decision (commit/rollback) has been made yet.

  - **Staged transactions**: The current number of source Oracle transactions staged to the [cdc.xdbcdc_staged_transactions](../../integration-services/change-data-capture/the-oracle-cdc-databases.md#BKMK_cdcxdbcdc_staged_transactions) table.

- **Counters**: In this section, you can monitor the following information.

  - **Completed transactions**: The number of transactions completed since the CDC instance was last reset. This does not include transactions that do not involve any relevant tables.

  - **Written changes**: The number of changes written to the SQL Server change tables.

**Oracle**
Displays information about the CDC instance and its connection to the Oracle database. This tab is read-only. To edit these properties, right-click the instance in the left pane and select **Properties**, or click **Properties** in the right pane to open the "\<instance> Properties" dialog box.
For information about these properties and how to edit them, see [Edit the Oracle Database Properties](../../integration-services/change-data-capture/edit-the-oracle-database-properties.md).

**Tables**
Displays information about the tables included in the CDC instance. Column information is also available. This tab is read-only. To edit these properties, right-click the instance in the left pane and select **Properties**, or click **Properties** in the right pane to open the "\<instance> Properties" dialog box. For information about these properties and how to edit them, see [Edit Tables](../../integration-services/change-data-capture/edit-tables.md).

**Advanced**
Displays the advanced properties of the CDC instance and their values. This tab is read-only. To edit these properties, right-click the instance in the left pane and select **Properties**, or click **Properties** in the right pane to open the "\<instance> Properties" dialog box. For information about these properties and how to edit them, see [Edit the Advanced Properties](../../integration-services/change-data-capture/edit-the-advanced-properties.md).
## <a name="see-also"></a>See Also

[Create the SQL Server Change Database Instance](../../integration-services/change-data-capture/how-to-create-the-sql-server-change-database-instance.md)
[View the CDC Instance Properties](../../integration-services/change-data-capture/how-to-view-the-cdc-instance-properties.md)
[Edit the CDC Instance Properties](../../integration-services/change-data-capture/how-to-edit-the-cdc-instance-properties.md)
[Use the New Instance Wizard](../../integration-services/change-data-capture/use-the-new-instance-wizard.md)
92.828125
816
0.781097
deu_Latn
0.996982
6afd566ffa67ca35164441c6fecd1bc264c00b06
345
md
Markdown
CHANGELOG.md
asfcarvalho/piano-sdk-for-ios
b07c668eb9775a4705672d4c6e23a932edbe788a
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
asfcarvalho/piano-sdk-for-ios
b07c668eb9775a4705672d4c6e23a932edbe788a
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
asfcarvalho/piano-sdk-for-ios
b07c668eb9775a4705672d4c6e23a932edbe788a
[ "Apache-2.0" ]
null
null
null
# Piano SDK for iOS

## v2.3.12

* Changed endpoint structure for PianoComposer
* Added static endpoints for PianoComposer (production, production-australia, production-asia-pacific, sandbox)
* Changed handlers in `PianoIDDelegate`
* Added `customEvent` handler to `PianoIDDelegate`
* Added `incremented` parameter for `PageViewMeterEventParams`
38.333333
111
0.8
eng_Latn
0.512722
6afe321fb236f5048e1e67df75c2ebe2e3fe7cff
10,261
md
Markdown
README.md
alexanderkasten/test_runtime
c5131ce6251a9317b34de8c6619a9cf29df41e01
[ "MIT" ]
null
null
null
README.md
alexanderkasten/test_runtime
c5131ce6251a9317b34de8c6619a9cf29df41e01
[ "MIT" ]
3
2021-05-10T20:25:57.000Z
2022-01-22T09:55:42.000Z
README.md
alexanderkasten/test_runtime
c5131ce6251a9317b34de8c6619a9cf29df41e01
[ "MIT" ]
null
null
null
# Process Engine Runtime

This is a stand-alone server for the ProcessEngine that can be installed and started globally.

## What are the goals of this project

The goal is to provide a ready-to-use environment for utilizing the ProcessEngine.

## Requirements

- Node >= `10.15.0`
- Python 2.7.x

## Setup/Installation

Install the runtime as a global npm package:

```bash
npm install -g @process-engine/process_engine_runtime
```

__Note:__ If you experience problems during installation on Windows, you can try installing the [Windows Build Tools](https://www.npmjs.com/package/windows-build-tools) and run the installation command again. Please make sure that you run the shell you use for the installation as **Administrator**.

Also, each full release provides ready-to-use source files for each supported platform. These are stored in a `.tar.gz` archive (for macOS and Linux) and a zip file (for Windows). All of these sources have been fully installed and built. You only need to download and unpack them and you are good to go.

## Starting the ProcessEngineRuntime

You can start the application with the following command:

```bash
process-engine
```

When started, the ProcessEngine is available at `http://localhost:8000`.

__Note:__ If you're on Windows and the command `process-engine` cannot be found, please make sure your `PATH` is set correctly.

### Global routes

The ProcessEngine exposes a number of global HTTP routes, which you can use to get general information about the application. These routes include:

- `http://localhost:8000/` - Base route to get basic details about the ProcessEngine
- `http://localhost:8000/process_engine` - Same as above
- `http://localhost:8000/security/authority` - Returns the address of the authority the ProcessEngine uses to perform claim checks
- `http://localhost:8000/process_engine/security/authority` - Same as above

You might wonder why we use two routes for each use case.
The reason is simple: Let's say you want to embed your ProcessEngine into another web application. Usually, you'd want to use routes like `http://localhost:8000/` for your own purposes and not have it expose information about any embedded service (which is what the ProcessEngine would be in this instance).

BPMN Studio uses these global routes to identify remote ProcessEngines to connect to. The route `http://localhost:8000/process_engine` ensures that the studio can do so, even if `http://localhost:8000/` is reserved by your application.

In other words: These routes allow you to access an embedded ProcessEngine through BPMN Studio.

**Note:** See the [Embedding instructions](#embedding_the_processengineruntime_into_another_application) section on how to prevent the ProcessEngine from using `/` and `/security/authority`.

### Switching the database

By default, the ProcessEngine will use `SQLite` as its database. The corresponding files will be placed in the `databases` directory mentioned in the [Application Files](#application_files) section.

If you want to use a different database, you must provide a `NODE_ENV` parameter at startup:

```bash
NODE_ENV=postgres process-engine
```

Currently supported values are `postgres` and `mysql`.

Each environment comes with its own config. See:

- [Configuration for mysql repositories](./config/mysql/process_engine)
- [Configuration for postgres repositories](./config/postgres/process_engine)
- [Configuration for sqlite repositories](./config/sqlite/process_engine)

**Note:** Switching to MySQL or Postgres requires an instance of the respective database to be running and accessible!

### Customized Configuration

By default, the runtime will use a set of configurations located within an integrated `config` folder.
If you wish to provide your own set of configurations, you can do so by setting the following environment variables prior to startup:

- `CONFIG_PATH` - The path to your configuration folder
- `NODE_ENV` - The name of the environment to use

**NOTE:** The path in `CONFIG_PATH` must be absolute. Also, each environment must have its own configuration folder. See [here](https://github.com/process-engine/process_engine_runtime/tree/develop/config/sqlite) for an example on how a config must be structured. **Make sure you provide settings to _all_ config sections listed there!**

**Example**: Let's say you want to store your configs in your local home folder, in a subfolder named `runtime`, and the environment you wish to use is named `production`.

Your configs must then be located in the following path:

- macOS: `/Users/{{YOUR_USERNAME}}/runtime/production`
- Linux: `/home/{{YOUR_USERNAME}}/runtime/production`
- Windows: `C:\Users\{{YOUR_USERNAME}}\runtime\production`

You would need to provide the following environment parameters to access this config:

- `NODE_ENV`: `production`
- `CONFIG_PATH`:
  - macOS: `/Users/{{YOUR_USERNAME}}/runtime`
  - Linux: `/home/{{YOUR_USERNAME}}/runtime`
  - Windows: `C:\Users\{{YOUR_USERNAME}}\runtime`

The full start command will then look like this:

- macOS: `CONFIG_PATH=/Users/{{YOUR_USERNAME}}/runtime NODE_ENV=production process-engine`
- Linux: `CONFIG_PATH=/home/{{YOUR_USERNAME}}/runtime NODE_ENV=production process-engine`
- Windows: `CONFIG_PATH=C:\Users\{{YOUR_USERNAME}}\runtime NODE_ENV=production process-engine`

## Embedding the ProcessEngineRuntime into another application

The ProcessEngineRuntime is published at npm under the name `@process-engine/process_engine_runtime`. You can add it to your package.json like any other npm package.
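As an illustrative sketch, the corresponding `package.json` dependency entry might look like this (the version range is a placeholder; replace it with the release you actually want to pin):

```json
{
  "dependencies": {
    "@process-engine/process_engine_runtime": "^1.0.0"
  }
}
```

After running `npm install`, the runtime is available in `node_modules` and can be imported from your application code.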
To start the runtime, you need to run this command once from inside your application:

```ts
import * as ProcessEngine from '@process-engine/process_engine_runtime';

await ProcessEngine.startRuntime(args);
```

### Parameters

The `startRuntime` function takes an object with the following optional parameters:

- `workDir`: A path to where the runtime will store its working data (i.e. 'workspace'). The path must be absolute
- `sqlitePath`: A path to where the runtime should store its SQLite databases
  - Works in conjunction with `NODE_ENV=sqlite`
  - The path must be absolute
- `logFilePath`: A path to where the runtime should store its logfiles. The path must be absolute
- `container`: An `addict-ioc` InvocationContainer, where the runtime should register its dependencies
- `minimalSetup`: If set to true, the runtime will only perform ioc registrations, but nothing else
  - Use this, if you want to launch the ProcessEngineRuntime manually
  - Defaults to `false`
- `enableHttp`: If set to true, all HTTP endpoints the ProcessEngineRuntime uses will be loaded
  - Use `false` to prevent the ProcessEngineRuntime from providing HTTP endpoints
  - Defaults to `true`
- `useHttpRootRoutes`: If set to `true`, the routes `/` and `/security/authority` will be set by the ProcessEngineRuntime
  - Set to `false` if you want to use these routes for other purposes
  - Defaults to `true`

Example:

```ts
import {InvocationContainer} from 'addict-ioc';
import * as ProcessEngine from '@process-engine/process_engine_runtime';

const myInvocationContainer = new InvocationContainer();

await ProcessEngine.startRuntime({
  workDir: `/home/myfancyusername/somedirectory`,
  sqlitePath: `/var/lib/somepath`,
  logFilePath: `var/log/somepath`,
  container: myInvocationContainer,
  minimalSetup: true,
  enableHttp: false,
  useHttpRootRoutes: false,
});
```

## Automatically starting the ProcessEngineRuntime on system startup

We provide scripts that let you start the ProcessEngineRuntime automatically as a service.
Currently supported platforms are `macOS` and `windows`.

**macOS**

There are two scripts:

1. `start_runtime_after_system_boot.sh` - Causes the ProcessEngineRuntime to be started automatically as a service
1. `do_not_start_runtime_after_system_boot.sh` - Prevents the ProcessEngineRuntime from being started automatically

If you installed Node.js as a standalone application, you can find the scripts at:

```
/usr/local/lib/node_modules/@process-engine/process_engine_runtime/scripts/autostart
```

If you installed Node.js via [nvm](https://github.com/creationix/nvm), you can find the scripts at:

```
/Users/{{YOUR_USERNAME}}/.nvm/versions/node/{{YOUR_NODE_VERSION}}/lib/node_modules/@process-engine/process_engine_runtime/scripts/autostart
```

Usage:

```bash
bash autostart/start_runtime_after_system_boot.sh
```

The scripts use pm2 to set up the ProcessEngine as an automatically started service.

__Note:__ Currently the `do_not_start_runtime_after_system_boot.sh` script doesn't work under macOS due to a bug in a third-party package. As soon as the bug is fixed, we will update the script and release a fix.

**Windows**

We also provide `.bat` scripts to set up the Runtime as a global service on Windows. These scripts are located at:

```
C:\Users\{{YOUR_USERNAME}}\AppData\Roaming\npm\node_modules\@process-engine\process_engine_runtime\scripts\autostart
```

Make sure you run these scripts as __Administrator__.

During execution of the `start_runtime_after_system_boot.bat` script, you will be asked several questions. Please use the default values on every question:

1. Typing `Y` and pressing the `Enter` key for `yes/no` questions
2. Just pressing the `Enter` key on all other questions
## Application Files <a name="application_files"></a>

The application files are stored in:

| Platform | Folder Path |
| ---------- | ---------- |
| macOS | `/Users/<Username>/Library/Application Support/process_engine_runtime` |
| Linux | `/home/<Username>/.config/process_engine_runtime` |
| Windows | `c:\Users\<Username>\AppData\Roaming\process_engine_runtime` |

Contained in the application files are the following folders:

| Path | Description |
| --------- | ---------- |
| `databases/` | SQLite database files |
| `logs/` | Logfiles |
| `metrics/` | Recorded metrics |

## Authors/Contact information

1. [Christian Werner](mailto:[email protected])
2. [René Föhring](mailto:[email protected])
36.257951
146
0.752753
eng_Latn
0.979919
ed00bf653a53044fecfcab7fc923d2518b49422b
6,820
md
Markdown
CHANGELOG.md
nrk/predis
e5221fa13bbef61e2b15f845777e819448cbfa29
[ "MIT" ]
5,289
2015-01-01T21:52:55.000Z
2020-08-12T14:16:08.000Z
CHANGELOG.md
nrk/predis
e5221fa13bbef61e2b15f845777e819448cbfa29
[ "MIT" ]
387
2015-01-02T09:21:13.000Z
2020-08-12T18:28:58.000Z
CHANGELOG.md
nrk/predis
e5221fa13bbef61e2b15f845777e819448cbfa29
[ "MIT" ]
804
2015-01-04T03:19:38.000Z
2020-08-12T07:55:42.000Z
v2.0.0-beta.1 (2022-05-26)
================================================================================

- Dropped support for PHP 7.1 and older.

- Accepted values for some client options have changed, this is the new list of accepted values:

  - `aggregate`: callable returning an aggregate connection.
  - `cluster`: string value (`predis`, `redis`), callable returning an aggregate connection.
  - `replication`: string value (`predis`, `sentinel`), callable returning an aggregate connection.
  - `commands`: command factory, named array mapping command IDs to PHP classes, callable returning a command factory or a named array.
  - `connections`: connection factory, callable object returning a connection factory, named array mapping URI schemes to PHP classes, string identifying a supported combination of configurations for the connection factory.
  - `prefix`: string value, command processor, callable.
  - `exceptions`: boolean.

  Note that both the `cluster` and `replication` options now return a closure acting as initializer instead of an aggregate connection instance.

- The `connections` client option now accepts certain string values identifying certain combinations of configurations for the connection factory. Currently this is used to provide a short way to configure Predis to load our phpiredis based connection backends simply, accepted values are:

  - `phpiredis-stream` maps `Phpiredis\Connection\PhpiredisStreamConnection` to `tcp`, `redis`, `unix` URI schemes.
  - `phpiredis-socket` maps `Phpiredis\Connection\PhpiredisSocketConnection` to `tcp`, `redis`, `unix` URI schemes.
  - `phpiredis` is simply an alias of `phpiredis-stream`.

- Added the new `Predis\Cluster\Hash\PhpiredisCRC16` class using ext-phpiredis to speed up the generation of the CRC16 hash of keys for redis-cluster.
  Predis automatically uses this class when ext-phpiredis is loaded, but it is possible to configure the hash generator using the new `crc16` client option (accepted values `predis`, `phpiredis` or a hash generator instance).

- Replication backends now use the `role` parameter instead of `alias` in order to distinguish the role of a connection. Accepted values are `master`, `slave` and, for redis-sentinel, `sentinel`. This led to a redesign of how connections can be retrieved from replication backends: the method getConnectionById() now retrieves a connection only by its ID (ip:port pair); to get a connection by its alias there is the new method getConnectionByAlias(). This method is not supported by the redis-sentinel backend due to its dynamic nature (connections are retrieved and initialized at runtime from sentinels), but it is possible to get a single connection from the pool by using its ID. It is also possible to retrieve a connection by role using the method getConnectionByRole().

- The concept of connection ID (ip:port pair) and connection alias (the `alias` parameter) in `Predis\Connection\Cluster\PredisCluster` has been separated. This change does not affect distribution and it is safe for existing clusters.

- Client option classes now live in the `Predis\Configuration\Option` namespace.

- Classes for Redis commands have been moved into the new `Predis\Command\Redis` namespace and each class name mirrors the respective Redis command ID.

- The concept of server profiles is gone, the library now uses a single command factory to create instances of command classes. The `profile` option has been replaced by the `commands` option accepting a `Predis\Command\FactoryInterface` to customize the underlying command factory. The default command factory class used by Predis is `Predis\Command\RedisFactory` and it still allows developers to define or override commands with their own implementations. In addition to that, `Predis\Command\RedisFactory` relies on a convention-over-configuration approach by looking for a suitable class with the same name as the command ID in the `Predis\Command\Redis` namespace when the internal class map does not contain an associated class.
In addition to that, `Predis\Command\RedisFactory` relies on a convention-over-configuration approach by looking for a suitable class with the same name as the command ID in the `Predis\Command\Redis` when the internal class map does not contain a class associated. - The method `Predis\Client::getClientFor($connectionID)` has been replaced by `getClientBy($selector, $value, $callable = null)` which is more flexible as it is not limited to picking a connection from the underlying replication or cluster backend by ID, but allows users to specify a `$selector` that can be either `id` (the old behavior), `key`, `slot` or `command`. The client uses duck-typing instead of type-checking to verify that the underlying connection implements a method that matches the specified selector which means that some selectors may not be available to all kinds of connection backends. - The method `Predis\Client::getConnectionById($connectionID)` has been removed. - Changed the signature for the constructor of `Predis\Command\RawCommand`. - The `Predis\Connection\Aggregate` namespace has been split into two separate namespaces for cluster backends (`Predis\Connection\Cluster`) and replication backends (`Predis\Connection\Replication`). - The method `Predis\Connection\AggregateConnectionInterface::getConnection()` has been renamed to `getConnectionByCommand()`. - The methods `switchToMaster()` and `switchToSlave()` have been promoted to be part of `Predis\Connection\Replication\ReplicationInterface` while the method `switchTo($connection)` has been removed from it. - The method `Predis\Connection\Cluster\PredisCluster::executeCommandOnNodes()` has been removed as it is possible to achieve the same by iterating over the connection or, even better, over the client instance in order to execute the same command against all of the registered connections. - The class `Predis\CommunicationException` now uses the correct default types for the `$message` (string) and `$code` (integer) parameters. 
- The method `onConnectionError()` in the `Predis\Connection\AbstractConnection` class now passes the second argument as an integer value `0` as its default value instead of `null`.

- The class `Predis\Transaction\AbortedMultiExecException` now uses the correct default types for the `$code` (integer) parameter.

- __FIX__: using `strval` in the `getScanOptions()` method, part of `Predis\Collection\Iterator\CursorBasedIterator`, to make sure we retrieve the string value of `$this->match` and are not passing `null` to the `strlen()` function.

- __FIX__: the value returned from `getArgument()` in the `isReadOperation()` method, part of the `Predis\Replication\ReplicationStrategy` class, is checked to not pass `null` to the `sha1` function.

- __FIX__: the value returned from `getArgument()` in the `parseResponse()` method, part of the `Predis\Command\Redis\SENTINEL` class, is checked to not pass `null` to the `strtolower()` function.
55.447154
81
0.763783
eng_Latn
0.993673
ed01ee99c165156cbcff27ff0794c116d6235fbd
7,901
md
Markdown
docs/organizations/accounts/delete-organization-users.md
trallard/vsts-docs
d5b23aa6b1492b163197753b8a0e5560ea30b1c0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/organizations/accounts/delete-organization-users.md
trallard/vsts-docs
d5b23aa6b1492b163197753b8a0e5560ea30b1c0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/organizations/accounts/delete-organization-users.md
trallard/vsts-docs
d5b23aa6b1492b163197753b8a0e5560ea30b1c0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Delete or remove users from team or project
titleSuffix: Azure DevOps Services
ms.custom: seodec18
description: Steps for how to delete or remove organization users from Azure DevOps and remove users from a team or project
ms.prod: devops
ms.technology: devops-accounts
ms.topic: conceptual
ms.assetid: d3a31878-a869-45a9-9bca-f46cc2682596
ms.manager: mijacobs
ms.author: chcomley
author: chcomley
ms.date: 11/21/2019
monikerRange: 'azure-devops'
---

# Remove users from Azure DevOps

[!INCLUDE [version-vsts-only](../../_shared/version-vsts-only.md)]

If users no longer require access to a project or your organization, you can remove their access to the project or your organization.

## Prerequisites

- You need [Project Collection Administrator or organization Owner permissions](../../organizations/security/set-project-collection-level-permissions.md?toc=/azure/devops/organizations/accounts/toc.json&bc=/azure/devops/organizations/accounts/breadcrumb/toc.json).

## Remove users from your organization

> [!NOTE]
> To enable the new user interface for the New user hub, see [Enable preview features](../../project/navigation/preview-features.md).

#### [Preview page](#tab/preview-page)

1. Sign in to your organization: ```https://dev.azure.com/{yourorganization}```.

   [Why am I asked to choose between my work or school account and my personal account?](faq-create-organization.md#ChooseOrgAcctMSAcct)

2. Select ![gear icon](../../_img/icons/gear-icon.png) **Organization settings**.

   ![Open Organization settings](../../_shared/_img/settings/open-admin-settings-vert.png)

3. Select **Users**.

   ![Organization settings > Users](../../_shared/_img/open-organization-settings-users-preview.png)

4. Open the context menu **...** for the user to be removed. Select **Remove from organization**.

   ![Remove a user from your organization](_img/delete-user/remove-user-from-organization-preview.png)

5. Choose **Remove** in the confirmation dialog.
   ![Confirm removing an existing user](_img/delete-user/confirm-remove-existing-user-preview.png)

6. To confirm that you've removed the users completely, make sure they aren't in any of your [security groups](../../organizations/security/add-users-team-project.md).

   [Why don't users appear or disappear promptly after I add or delete them in the Users Services page?](faq-add-delete-users.md#users-delay)

7. If you deleted paid users who had Basic or higher features, and you don't want to pay for those users, you must also [reduce the users](../billing/buy-basic-access-add-users.md). Then you're not charged in your next Azure billing cycle. To reduce or cancel users for the next month, you must make updates before the last day of the current month. Your bill won't show the changes until the next month because paid users are monthly purchases.

> [!NOTE]
> - Azure Active Directory (AD)-backed organizations. After you remove a user from Azure AD, you can't assign artifacts to that user anymore. Examples are work items and pull requests. However, we preserve the history of artifacts that were already assigned to the user.
> - Managed service account (MSA)-backed organizations. After you remove a user from your MSA-backed organization, the user remains within the tenant and can be re-added at any time.

#### [Current page](#tab/current-page)

1. Sign in to your organization: ```https://dev.azure.com/{yourorganization}```.

   [Why am I asked to choose between my work or school account and my personal account?](faq-create-organization.md#ChooseOrgAcctMSAcct)

2. Select ![gear icon](../../_img/icons/gear-icon.png) **Organization settings**.

   ![Open Organization settings](../../_shared/_img/settings/open-admin-settings-vert.png)

3. Select **Users**.

   ![Organization settings, users](../../_shared/_img/settings/open-organization-settings-users-vert.png)

4. Open the context menu **...** for the user to be removed. Select **Remove from organization**.
![Remove a user from your organization](_img/delete-user/remove-user-from-organization-new.png) 5. Choose **Remove** in the confirmation dialog. ![Confirm removing an existing user](_img/delete-user/confirm-remove-existing-user.png) 6. To confirm that you've removed the users completely, make sure they aren't in any of your [security groups](../../organizations/security/add-users-team-project.md). [Why don't users appear or disappear promptly after I add or delete them in the Users Services page?](faq-add-delete-users.md#users-delay) 7. If you deleted paid users who had Basic or higher features, and you don't want to pay for those users, you must also [reduce the users](../billing/buy-basic-access-add-users.md). Then you're not charged in your next Azure billing cycle. To reduce or cancel users for the next month, you must make updates before the last day of the current month. Your bill won't show the changes until the next month because paid users are monthly purchases. > [!NOTE] > - Azure Active Directory (AD)-backed organizations. After you remove a user from Azure AD, you can't assign artifacts to that user anymore. Examples are work items and pull requests. However, we preserve the history of artifacts that were already assigned to the user. > - Managed service account (MSA)-backed organizations. After you remove a user from your MSA-backed organization, the user remains within the tenant and can be re-added at any time. #### [Azure DevOps CLI](#tab/azure-devops-cli/) [Add a user](add-organization-users.md#add-user) | [List users](../security/export-users-audit-log.md#list-users) | [Remove a user](#remove-user) | [Update a user](manage-users-table-view.md#update-user) | [Show users](manage-users-table-view.md#show-users) <a id="remove-user" /> ### Remove a user You can remove a user from an organization by using the [az devops user remove](/cli/azure/ext/azure-devops/devops/user#ext-azure-devops-az-devops-user-remove) command. 
To get started, see [Azure DevOps CLI](../../cli/index.md).

```CLI
az devops user remove --user
                      [--org]
                      [--yes]
```

#### Parameters

- **user**: The email address or ID of the user.
- **org**: Azure DevOps organization URL. You can configure the default organization using `az devops configure -d organization=ORG_URL`. Required if not configured as default or picked up using `git config`. Example: `--org https://dev.azure.com/MyOrganizationName/`.
- **yes**: Do not prompt for confirmation.

#### Example

The following command removes the user with the email address [email protected] from the contoso organization.

```CLI
az devops user remove --user [email protected] --org https://dev.azure.com/contoso/ --yes
```

* * *

## Remove users from a team or project

To remove users from a project, remove them from the **Teams** groups they belong to or the **Contributors** group for the project. See [Add users to a project or specific team](../../organizations/security/add-users-team-project.md).

You can remove a user from the **Members** page of a team group or security group.

![Remove a user from a security group, new navigation](_img/delete-user/remove-user-vert.png)

## Related articles

- [Set permissions at the project level or project collection level](../../organizations/security/set-project-collection-level-permissions.md)
- [Change individual permissions and grant select access to specific functions](../../organizations/security/change-individual-permissions.md)
- [Grant or restrict access to select features and functions](../../organizations/security/restrict-access.md)
- [Troubleshoot adding and deleting organization users in the Users page](faq-add-delete-users.md)
- [Troubleshoot adding members to projects](faq-add-team-members.md)
- [Export a list of users and their access levels](../security/export-users-audit-log.md)
---
title: "Jeff Kluger - The hidden power of siblings"
date: 2012-05-24 15:09:14 +00:00
permalink: /videos/601
source: "http://www.youtube.com/watch?v=aFBIPO-P7LM"
featured: false
provider: "YouTube"
thumbnail: "http://i2.ytimg.com/vi/aFBIPO-P7LM/hqdefault.jpg"
user:
  id: 1
  username: "sawyer"
  name: "Sawyer Hollenshead"
tags: ["ted","culture","family"]
html: "<iframe width=\"640\" height=\"360\" src=\"http://www.youtube.com/embed/aFBIPO-P7LM?wmode=transparent&fs=1&feature=oembed\" frameborder=\"0\" allowfullscreen></iframe>"
id: 601
---

Jeffrey Kluger is senior editor of TIME Magazine's science and technology reporting. He has written or co-written more than 35 cover stories for TIME and regularly contributes articles and commentary on science and health stories. Kluger is also co-author, with astronaut Jim Lovell, of Lost Moon: The Perilous Voyage of Apollo 13, which was the basis of the Apollo 13 movie released in 1995. His other books include Splendid Solution, published in 2006, which tells the story of Jonas Salk and the polio vaccine. He is also the author of the 2008 Hyperion release Simplexity: Why Simple Things Become Complex (and Why Complex Things Can Be Made Simple), the young adult novel Freedom Stone, and the newly released The Sibling Effect. Before joining TIME, Kluger was a staff writer for Discover magazine, where he wrote the "Light Elements" humor column, and he was also an editor for the New York Times Business World Magazine, Family Circle and Science Digest. Kluger, who is also an attorney, has taught science journalism at New York University.
---
layout: post
title: "Scoped packages"
author: gallayl
image: "../img/posts/scoped-packages.jpg"
tags: [npm, node, typescript, javascript, client packages, odata, immutable]
---

We've started 2018 with a huge package refactor: we have divided our base *sn-client-js* package into several smaller ones and published them within a *@sensenet* scope. We've refactored a lot of code to simplify our API and improve working with immutable objects, and we also did a strict review of our dependencies. I will summarize the reasons, the improvements and what has been changed.

---

## Why?

Long story short: our all-in-one *sn-client-js* had started to grow too fast. Development was getting slower and harder. The package contained some well-separable parts (like querying, default content types or repository events) that could be easily extracted into separate packages. It had some parts (like the Repository and Content API) that we had to rethink, and some parts (like internal modules or namespaces) that are no longer recommended to use.

### Immutable objects

Redux - like many other libraries nowadays - uses *shallow equality checking* for change detection and therefore it needs to work with [immutable data](https://redux.js.org/docs/faq/ImmutableData.html). Our Content and Repository APIs were not fully prepared to work with immutable objects: our rich-domain-inspired Content API had its own change detection mechanism. We had to rethink the concepts of the client-side repository and content API implementation in terms of immutability.

### Internal modules and namespaces

We've packed a lot of things into **sn-client-js**: default content type and schema definitions, logic for querying OData, authentication logic, etc... They were organized into *internal modules and namespaces*. This approach is less and less recommended; nowadays namespacing is even considered [needless](https://www.typescriptlang.org/docs/handbook/namespaces-and-modules.html) by TypeScript.
There were some modules (like the default Content Types) that were worth extracting into separate packages.

### The size of sn-client-js@^3.0.3 in numbers

Based on the generated Typedoc documentation, we have 205 classes, 13 enums and 7 interfaces in the 23 internal modules.

If you install sn-client-js into your NPM project, it will contain about 890 files and take about 20 MB of space. If you clone and install the latest sn-client-js, you will have **275** folders in your *node_modules* folder, and the whole package size is approximately 160 MB.

The package contains about **458** unit tests, which isn't a problem in itself: they complete in less than a second. But with the full build process it takes **~20** seconds.

## The goals

We wanted to divide our overgrown client-side packages - especially our *base* package, *sn-client-js* - into smaller, lightweight ones. NPM offers [package scoping](https://docs.npmjs.com/misc/scope), so we've decided to publish the new packages within a **@sensenet** scope.

We wanted some features - like repository events, mapping controls to schemas or even JWT authentication itself - to be fully optional.

We wanted to review whether we could switch from external NPM libraries to native APIs - for example, replacing RxJs with native promises and the fetch API.

We wanted to catch up with the industry standards in terms of immutability and standardize our linting settings.

We also wanted to take a look at our build, test and publish processes and - last but not least - the content of our NPM packages.

## The new packages

![NPM packages and dependencies](/img/posts/scoped-packages-dependencies.png "NPM packages and dependencies")

### [client-utils](https://www.npmjs.com/package/@sensenet/client-utils)

This package contains some generic utilities - like helpers for disposable objects, a retrier, a method tracer and a simple value observer implementation - all *without any sensenet dependencies*, so it can be used in any project.
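To give a feel for the disposable-object helpers, here is a small, self-contained sketch of the idea. This is an illustration of the pattern only, not the package's actual source - `IDisposable`, `using` and `FileHandle` are illustrative names:

```typescript
interface IDisposable {
  dispose(): void;
}

// Runs a callback with a resource and guarantees that dispose() is
// called afterwards, even if the callback throws.
function using<T extends IDisposable, TResult>(
  resource: T,
  callback: (resource: T) => TResult,
): TResult {
  try {
    return callback(resource);
  } finally {
    resource.dispose();
  }
}

// A trivial disposable resource for demonstration purposes.
class FileHandle implements IDisposable {
  public isOpen = true;
  public dispose() {
    this.isOpen = false;
  }
}

const handle = new FileHandle();
const wasOpenInside = using(handle, (h) => h.isOpen); // open inside the callback
// After using() returns, the resource has been disposed:
const isOpenAfter = handle.isOpen;
```

The retrier, tracer and observer helpers follow the same spirit: tiny, dependency-free building blocks.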
### [query](https://www.npmjs.com/package/@sensenet/query)

If you want to create type-safe content queries, you can do that with this package. It has a fluent API to create a query from a generic type argument.

### [default-content-types](https://www.npmjs.com/package/@sensenet/default-content-types)

We store the default generated content types, schema definitions, enums, etc. in this package. This package (unlike the others) doesn't contain any unit tests, as it doesn't contain application logic - it can be used as a content type / schema library.

### [client-core](https://www.npmjs.com/package/@sensenet/client-core)

The *core* logic sits in this package; if you want to interact with sensenet, it's a good starting point to install. It contains the refactored *Repository* object itself: you can load / modify / delete content and execute custom actions from there. It also contains some predefined security and versioning related actions, response models, interfaces and abstracts.

### [authentication-jwt](https://www.npmjs.com/package/@sensenet/authentication-jwt)

If you install the [@sensenet/client-core](https://www.npmjs.com/package/@sensenet/client-core) package, it bypasses authentication by default. That means if you need JWT authentication, you have to install this package as well and configure it on your *repository* instance.

### [authentication-google](https://www.npmjs.com/package/@sensenet/authentication-google)

It contains the Google OAuth authentication implementation. This package depends on JWT, but it's fully optional.

### [control-mapper](https://www.npmjs.com/package/@sensenet/control-mapper)

It contains a utility that helps to map third-party controls to generated sensenet schemas. It's optional, but used by our control packages.

### [repository-events](https://www.npmjs.com/package/@sensenet/repository-events)

If you want to listen to specific repository events - like a content change - you can use this package.
This is also optional, but we use it in [@sensenet/controls-aurelia](https://www.npmjs.com/package/@sensenet/controls-aurelia).

### [@sensenet/controls-aurelia](https://www.npmjs.com/package/@sensenet/controls-aurelia)

Our Aurelia control package has been refactored. The main change here is that it uses packages from the @sensenet scope as dependencies.

## Concept changes

### Removed content API, cleaned up Repository API

The main change is that the Content API itself has been removed. Most of its responsibilities have been taken over by the Repository - in most cases this means that e.g. ``myContent.checkout();`` has been moved to ``repository.versioning.checkOut(myContent.Id);``.

Another breaking change is that in the future, content creation and updates will be possible only on *repository instances* via the exposed **post/patch/put** methods. That means that ``myContent.save()`` will be changed to ``repository.patch({idOrPath: myContent.Id, content: myContent})``.
## Package optimizations One of the main goals was that our new packages should be small and lightweight. We've started with a strict dependency review that included the indirect dependencies as well. We've started to change our scripts to NPM scripts from Gulp tasks earlier and now we didn't have any dependency on Gulp. As I've mentioned earlier we've switched to native promises and we could remove our dependency on RxJs that is a huge package itself. We've used another package in our unit tests called *mocha-typescript* that enables unit test running with a nice decorator syntax - but it had [~59](http://npm.anvaka.com/#/view/2d/mocha-typescript) indirect dependencies, so we've also changed our unit tests to the plain mocha syntax. Now most of our packages doesn't even have any external dependency that you *have to install*. We've continued with reviewing our processes: We use [Typedoc](http://typedoc.org/) for API Docs generation - this is a great tool however also adds about [60](http://npm.anvaka.com/#/view/2d/typedoc) package as indirect dependency. We also use commitizen for GIT commit message formatting, but it means about [134](http://npm.anvaka.com/#/view/2d/commitizen) indirect dependencies. These two tools play an important role in our internal processes but not during development - therefore we've decided to remove them even from our *devDependencies* and install them as global NPM packages. We've started to optimize our NPM package sizes next. We've removed the coverage reports - you can still check them on [Codecov](https://codecov.io/) - and the generated API Docs. They will be published into our [community site](https://community.sensenet.com/docs/) soon. As the last step we've separated our **build** and **build:test** processes - the test artifacts will be compiled into a *temporary folder* and will be excluded from the NPM package and the **dist** folder will contain only that artifacts that the package uses. 
The results: If you are using the scoped packages in your NPM project, the current version - with JWT, Google Oauth, the Repository events and the control mapper will - take **~2MB** of space (sn-client-js took nearly 20MB). Most of our scoped packages **won't install any third party package**. The only exceptions will be that packages that will *rely* on a specific framework or library, like the *Aurelia-controls* or the Redux package. If you are developing a scoped package and you install the *dev dependencies* as well it will take *less than a half of the size of sn-client-js* (~66 MB vs ~149 MB in size, ~7000 files vs ~20 000 files). The full build and unit testing time has also been improved - from about ~20sec to ~5sec. ## Changes As I mentioned earlier there are no internal namespaces in the scoped packages - therefore you can *import* exactly just the classes / methods that you will need. I've collected a some common examples. ### Creating a Repository You can start installing the *core* package with the ``npm install @sensenet/client-core`` command and creating a repository: ```ts import { Repository } from '@sensenet/client-core'; const repo = new Repository( { repositoryUrl: "https://my-sensenet7-instance.com", }); ``` ### Configuring JWT and Google OAuth As JWT is an optional feature, you have to install the corresponding package with the ``npm install @sensenet/authentication-jwt`` and you have to configure it after repository creation in the following way: ```ts import { JwtService } from "@sensenet/authentication-jwt" const jwtService = new JwtService(repo); ``` Once you have JWT, you can set up Google Oauth easily after installing the package **@sensenet/authentication-jwt**: ```ts import { addGoogleAuth, GoogleOauthProvider } from '@sensenet/authentication-google'; const googleAuthProvider: GoogleOauthProvider = addGoogleAuth(jwtService, { clientId: 'my-google-client-id', }) ``` ### Loading a content There are some major changes when it comes to 
content loading. The first one is that the packages use *promises*, as mentioned above, so from now on these operations can be *awaited*. The second one is that we've eliminated many unnecessary data transformations, so you can work with the plain serialized data object from the response itself. This also means that you cannot use "instanceof"-type checks on these objects; you can use the "Type" property at runtime and *generics* in TypeScript during development.

There are two methods - load and loadCollection - that can be used for loading single contents / one-to-one references and collections / one-to-many references.

```ts
const portalUsers = await repo.loadCollection<User>({
    path: "/Root/IMS/BuiltIn/Portal",
    oDataOptions: {
        /** additional options */
    },
});

console.log("Count: ", portalUsers.d.__count);
console.log("Users: ", portalUsers.d.results);
```

### Updates

There are also some breaking changes in the content creation / update API: the POST/PUT/PATCH methods are now available directly on repository instances. You can also use TypeScript generics.

```ts
const response = await repository.patch<User>({
    idOrPath: "Root/Example/My_Content",
    content: {
        LoginName: "myNewLoginName",
    },
    oDataOptions: {
        /** additional options */
    },
});
```

### Other actions

The batch actions (copy, move and delete) are also available on repository instances, and you can also find the *security* and *versioning* actions under the corresponding namespaces.
If you have custom OData actions implemented, you can use the ``executeAction`` method as shown in the following example - or wrap them with strongly typed method calls like we did for the Security and Versioning related actions:

```ts
const response: Promise<TReturns> = repo.executeAction<TPostBody, TReturns>({
    name: "MyCustomAction",
    idOrPath: "Root/Example/Content",
    method: "POST", // or GET
    body: {
        myCustomPostData: myCustomValue,
    } as TPostBody,
})
```

## Further goals

Although the scoped packages have been published, we still have some tasks remaining: we have to publish the API documentation to the community site and update our tutorials.

We've already started to update our Redux package, review its dependencies and improve the async operations - it will also be a scoped package. After that we can update our React controls package as well.

We also plan to update our command line tool to allow developers to fetch their content types from CTDs, but before we do that we want to improve the *client-side schema loading*.

We hope that sensenet development will be simpler and more fun than ever with the new packages. If you have any questions or thoughts, don't hesitate to share them with us :)
---
layout: post
category: writing
title: "journey: phnom penh"
---

###### Originally written for impulse online (now defunct).

Only when I was boarding the flight to Phnom Penh, the largest city and capital of Cambodia, did I realize that I honestly had no idea what to expect. Almost 23 years after our family's immigration to Canada, my mom and grandma were finally going back home to the country they were born and raised in, after being forced out so many years ago. A place that I had never heard any stories of until now.

<img src="{{ site.url }}/assets/img/cambodia/photo_1.jpeg">

###### My grandmother's younger brother.

I found out that my mom grew up in her grandmother's house in the countryside (to this day, I still don't know what this area is called). She was sent there from the capital because her parents were too busy trying to make a living to take care of their six children. Overlooking a freshwater lake was her new home - identical to the other homes around it. She lived in a giant room where her entire family would cook their meals and sleep. Underneath the room was a tiny farm where they would raise poultry and swine. My great-grandmother's family once thrived on acres and acres of land, which they would eventually lose during the Khmer War. To this day, only the house remains.

<img src="{{ site.url }}/assets/img/cambodia/photo_2.jpg">

###### Children playing outside.

I also found out how easy it was to feel disconnected in a city that should feel like a second home (after all, family is family, right?). But time in Cambodia moves slow. There's no sense of urgency once you've reached your destination, but for some reason drivers in the city are always incredibly impatient. Like its surrounding countries, masses of bikes and scooters leave little room for anything else on the street, but luxury SUVs dominate whatever space remains. Then there are the little things.
Discreetly bribing customs into approving your visas upon landing by hiding US dollars in your passport to hasten entry, my uncle carrying a gun with him every time we went out in public, realizing that there are differently coloured license plates to distinguish a citizen's privilege.

<img src="{{ site.url }}/assets/img/cambodia/photo_3.jpg">
<img src="{{ site.url }}/assets/img/cambodia/photo_4.jpg">

###### Top: My great-grandmother's house, in the countryside. Bottom: My grandmother's eldest sister.

The countryside of Cambodia was definitely easier to adapt to than the bustle in Phnom Penh. There was no A/C, but sleeping under mosquito nets on the floor, surrounded by eight other bodies in the same small room, was surprisingly comfortable - it just felt right. If you looked through the cracks of the wood floor, you could watch the chickens sleeping, too.

<img src="{{ site.url }}/assets/img/cambodia/photo_5.jpeg">

###### A relative. She refused any money we tried to give her, but she did like the melon candy I had with me. I ended up giving her all five packs.

Living was made beautiful by its lack of chaos and everyday simplicity. It acted as a memorandum - sometimes it isn't necessary to look for signs of familiarity, or to become attached to a new place, to actually feel at home. It reminded me of how my grandparents eventually relocated their family to live in the refugee camps of Vietnam, and of our lives in Vancouver now. A friendly reminder that home is not a physical entity, but being where your heart is.

<img src="{{ site.url }}/assets/img/cambodia/photo_6.jpeg">

###### Dinner.
# RequestCounter Demo Project for Docker/Kubernetes
--- layout: post title: Disclosure of a Major Bug in CryptoNote Based Currencies summary: Patched in Monero and others, but still in the wild tags: [core, crypto, research, urgent] author: luigi1111 and Riccardo "fluffypony" Spagni --- ### Overview In Monero we've discovered and patched a critical bug that affects all CryptoNote-based cryptocurrencies, and allows for the creation of an unlimited number of coins in a way that is undetectable to an observer unless they know about the fatal flaw and can search for it. We patched it quite some time ago, and confirmed that the Monero blockchain had NEVER been exploited using this, but until the hard fork that we had a few weeks ago we were unsure as to whether or not the entire network had updated. Once we were certain that the network had updated, we notified all active and affected CryptoNote coins, including CryptoNote themselves, Bytecoin, Forknote, Boolberry, DashCoin, and DigitalNote. ***Note that, at this time, only Monero, Aeon, Boolberry, and Forknote have updated.*** We have given the other currencies as much time as possible, but cannot hold back disclosure any longer. ***We strongly caution against anyone using, trading, exchanging, or running services involving the following currencies affected by this issue: Bytecoin, DashCoin, DigitalNote*** ### Timeline 2017-02-19: A member of the Monero Research Lab discovers the exploit, triggered by a detailed discussion of the [XEdDSA signature schemes](https://whispersystems.org/docs/specifications/xeddsa/) on the [Curves mailing list](https://moderncrypto.org/mail-archive/curves/2017/000846.html) 2017-02-20: The Monero blockchain is scanned to see if this had ever been exploited; thankfully it had not and the blockchain is intact. 2017-02-21: The patch is surreptitiously snuck into the Monero codebase in [pull request #1744](https://github.com/monero-project/monero/pull/1744). It is kept secret to prevent it being used to attack other CryptoNote coins. 
2017-02-22: A [point release of Monero is rushed out](https://github.com/monero-project/monero/releases/tag/v0.10.2) so that exchanges and mining pools can update, under the guise of it preventing a RingCT DoS attack (no such attack existed, but it seemed a fair explanation).

2017-03-15: The hash of the details of the problem is committed to the Monero blockchain in tx dff7a79e44f9392e19fe5205c389d3e799f89c62d90d624219618d754b806e04

2017-03-26: A further [point release of Monero](https://github.com/monero-project/monero/releases/tag/v0.10.3.1) is put out to prepare for a hard fork in April.

2017-04-14: The Monero network hard forks to increase the dynamic block size minimum median, but this has the added bonus of ensuring the entire network is protected.

2017-04-17: All CryptoNote coins are contacted, and told that they have until mid-May to patch their coins, before there will be a public disclosure of the issue.

2017-04-17: As noted by [Riccardo "fluffypony" Spagni on Twitter](https://twitter.com/fluffyponyza/status/854029169667309569), the hash of the message sent to the various CryptoNote currencies is committed to the Monero blockchain.

### Problem

The so-called "key image" as used in CryptoNote coins utilising elliptic curve ed25519 can be modified in a special way, allowing double-spends. This effectively allows someone to create an infinite number of coins in a way that is impossible to detect without knowing about the exploit and explicitly writing code to check for it.

### Mitigation

Several options exist for mitigation. The simplest, least invasive one is noted below.

To mitigate, check key images for correctness by multiplying by the curve order l. Check that the result is the identity element.
Hexadecimal values of each: Identity element = "0100000000000000000000000000000000000000000000000000000000000000" Curve order (little endian) = "edd3f55c1a631258d69cf7a2def9de1400000000000000000000000000000010" For each transaction key image, check ((key image * curve order) == (identity element)); reject transaction if false. ### Appendix: Commitment Text \#1 As committed via the payment ID in Monero transaction ID dff7a79e44f9392e19fe5205c389d3e799f89c62d90d624219618d754b806e04, the text below has a sha3-256 (ie. keccak-256) hash of 21f0216fbbdc3dc590903b579282878705ed2adab7d8213328d962c76e806d84: ~~~ Problem: The so-called "key image" as used in Cryptonote coins utilizing elliptic curve ed25519 can be modified in a special way, allowing double-spends. I leave out exact details in this draft to give some time for mitigation. Hash (keccak-256) of details, to be released later: <4402e902f1ac8cec96a17453dcae307d21a7995a94b76e9c3eb7ca7baeffb8c8> Mitigation: Several options exist for mitigation; I include the simplest, least invasive here. To mitigate, check key images for correctness by multiplying by the curve order l. Check that the result is the identity element. I include hexadecimal values of each: Identity element = "0100000000000000000000000000000000000000000000000000000000000000" Curve order (little endian) = "edd3f55c1a631258d69cf7a2def9de1400000000000000000000000000000010" For each transaction key image, check ((key image * curve order) == (identity element)); reject transaction if false. ~~~ ### Appendix: Commitment Text \#2 As noted in the previous commitment, the text below has a sha3-256 (ie. keccak-256) hash of 4402e902f1ac8cec96a17453dcae307d21a7995a94b76e9c3eb7ca7baeffb8c8: ~~~ Dirty Details: Adding one of the (non-idenitity) "torsion", or small subgroup, points to a key image allows up to 7 double spends to be performed per output (8 total spends). 
The reason this is possible is that multiplying any of these small subgroup points by 8 returns the identity element (a kind of zero point). This means that multiplying the sum of a "normal" point and a torsion point by 8 (or a multiple of 8) will return the same point as multiplying the normal point by 8; the small subgroup point is "factored out". This allows a signature to verify on an alternate key image *so long as* the relevant scalars are multiples of 8. Cryptonote does not use scalars that are automatically multiples of 8 (whereas vanilla EdDSA does), but this is only a slight hurdle. An attacker need only choose the relevant scalars to be a multiple of 8 (in certain cases he cannot choose, and must instead create trial scalars until getting the desired result). Alternate mitigations: 1. Multiply each key image by 8, then the result by 8^-1 (mod l), to get the proper key image in the correct subgroup. Reject double spends, or if the result is not the same as the input. Unwieldy. 2. Mutliply each key image by 8 before storing in the key image list/checking for double spends. Quite invasive, as it requires redoing the existing key image list. Extra details: Monero's (and all CryptoNote coins') elliptic curve, ed25519, has a basepoint group cofactor of 8. There are 8 subgroups in ed25519, of the following sizes: 1 ----| 2 | --- small subgroups 4 | 8 ----| l (basepoint subgroup) ---| 2*l | --- large subgroups 4*l | 8*l (all curve points) ---| Each small subgroup point is contained in the next larger small subgroup, and also in the corresponding large subgroup (superimpose small/large). Each large subgroup is contained in the next larger one as well. The only small subgroup point contained in subgroup 1 and l (basepoint subgroup) is the identity element, which works as a kind of zero (no effect on point addition). Mutliplying any point by its subgroup order will return the idenitity element (same as multiplying by 0). 
Mutliplying any point by 2, 4, or 8 will move it to the corresponding most exclusive subgroup (e.g., a point in 8*l subgroup multiplied by 4 would move to the 2*l subgroup, a point in the 8 subgroup multiplied by 2 would move the 4 subgroup, and so on). Adding a small subgroup (non idenitity) point to a key image in the basepoint subgroup "knocks" it out of that subgroup and into one of the larger ones. Since the order of that subgroup is not l but some multiple, multiplying as in the proposed mitigation above does not return the identity element. ~~~
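The proposed check is short to implement. Below is an illustrative from-scratch Python sketch of the mitigation (this is not Monero's actual code; it uses naive affine arithmetic and, for brevity, an order-4 torsion point rather than the order-8 points discussed above): multiplying a legitimate point in the basepoint subgroup by the curve order l yields the identity element, while the same point with a small-subgroup component added does not.

```python
# Curve parameters for ed25519 (the curve used by Monero / all CryptoNote coins)
p = 2**255 - 19                                       # field prime
l = 2**252 + 27742317777372353535851937790883648493   # basepoint subgroup order
d = (-121665 * pow(121666, p - 2, p)) % p             # curve constant

IDENTITY = (0, 1)  # identity element of the curve group

def edwards_add(P, Q):
    # Complete addition law on -x^2 + y^2 = 1 + d*x^2*y^2 (affine, not constant-time)
    (x1, y1), (x2, y2) = P, Q
    x3 = (x1 * y2 + x2 * y1) * pow(1 + d * x1 * x2 * y1 * y2, p - 2, p)
    y3 = (y1 * y2 + x1 * x2) * pow(1 - d * x1 * x2 * y1 * y2, p - 2, p)
    return (x3 % p, y3 % p)

def scalar_mult(k, P):
    # Plain double-and-add
    R = IDENTITY
    while k:
        if k & 1:
            R = edwards_add(R, P)
        P = edwards_add(P, P)
        k >>= 1
    return R

# Reconstruct the standard basepoint B from y = 4/5 (RFC 8032 x-recovery)
By = (4 * pow(5, p - 2, p)) % p
xx = (By * By - 1) * pow(d * By * By + 1, p - 2, p) % p
Bx = pow(xx, (p + 3) // 8, p)
if (Bx * Bx - xx) % p:
    Bx = Bx * pow(2, (p - 1) // 4, p) % p
if Bx % 2:
    Bx = p - Bx
B = (Bx, By)

# T = (sqrt(-1), 0) is a small-subgroup point of order 4
T = (pow(2, (p - 1) // 4, p), 0)

print(scalar_mult(l, B) == IDENTITY)                  # True: honest key image accepted
print(scalar_mult(l, edwards_add(B, T)) == IDENTITY)  # False: mutated key image rejected
```

A production implementation would of course perform this check with a vetted constant-time curve library rather than this sketch.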
76.962617
331
0.786764
eng_Latn
0.997863
ed0413c8dfce351aaa180e0da237e960d8fb16f5
8,810
md
Markdown
website/versioned_docs/version-1.0.0/avatar.md
team-alembic/react-native-elements
1570126cea67da8f87f00940f07cd3d404da6418
[ "MIT" ]
5
2019-02-09T20:27:05.000Z
2021-05-14T20:49:09.000Z
website/versioned_docs/version-1.0.0/avatar.md
team-alembic/react-native-elements
1570126cea67da8f87f00940f07cd3d404da6418
[ "MIT" ]
22
2020-01-09T07:20:06.000Z
2021-06-25T15:30:58.000Z
website/versioned_docs/version-1.0.0/avatar.md
team-alembic/react-native-elements
1570126cea67da8f87f00940f07cd3d404da6418
[ "MIT" ]
4
2020-03-29T08:01:23.000Z
2022-01-01T15:24:23.000Z
--- id: version-1.0.0-avatar title: Avatar original_id: avatar --- Avatars are found all over ui design from lists to profile screens. They are commonly used to represent a user and can contain photos, icons, or even text. <div class="component-preview component-preview--grid"> <figure> <img src="/react-native-elements/img/avatar/avatar--photo.jpg" alt="Standard Avatar" /> <figcaption>Standard</figcaption> </figure> <figure> <img src="/react-native-elements/img/avatar/avatar--title.jpg" alt="Avatar with Title" /> <figcaption>Title</figcaption> </figure> <figure> <img src="/react-native-elements/img/avatar/avatar--icon.jpg" alt="Avatar with Icon" /> <figcaption>Icon</figcaption> </figure> <figure> <img src="/react-native-elements/img/avatar/avatar--edit.jpg" alt="Standard Avatar with edit button" /> <figcaption>Standard with edit button</figcaption> </figure> </div> ## Usage ```js import { Avatar } from 'react-native-elements'; // Standard Avatar <Avatar rounded source={{ uri: 'https://s3.amazonaws.com/uifaces/faces/twitter/ladylexy/128.jpg', }} /> // Avatar with Title <Avatar rounded title="MD" /> // Avatar with Icon <Avatar rounded icon={{ name: 'home' }} /> // Standard Avatar with edit button <Avatar source={{ uri: 'https://s3.amazonaws.com/uifaces/faces/twitter/adhamdannaway/128.jpg', }} showEditButton /> ``` #### Avatar with initials <img src="/react-native-elements/img/avatar_with_initials.png" width="500" > ```js import { Avatar } from "react-native-elements"; <Avatar size="small" rounded title="MT" onPress={() => console.log("Works!")} activeOpacity={0.7} /> <Avatar size="medium" title="BP" onPress={() => console.log("Works!")} activeOpacity={0.7} /> <Avatar size="large" title="LW" onPress={() => console.log("Works!")} activeOpacity={0.7} /> <Avatar size="xlarge" rounded title="CR" onPress={() => console.log("Works!")} activeOpacity={0.7} /> ``` #### Avatar with icons <img src="/react-native-elements/img/avatar_with_icons.png" width="500" > ```js import { 
Avatar } from "react-native-elements"; <Avatar rounded icon={{name: 'user', type: 'font-awesome'}} onPress={() => console.log("Works!")} activeOpacity={0.7} containerStyle={{flex: 2, marginLeft: 20, marginTop: 115}} /> <Avatar size="small" rounded icon={{name: 'user', type: 'font-awesome'}} onPress={() => console.log("Works!")} activeOpacity={0.7} containerStyle={{flex: 2, marginLeft: 20, marginTop: 115}} /> <Avatar size="medium" overlayContainerStyle={{backgroundColor: 'blue'}} icon={{name: 'meetup', color: 'red', type: 'font-awesome'}} onPress={() => console.log("Works!")} activeOpacity={0.7} containerStyle={{flex: 3, marginTop: 100}} /> <Avatar size="large" icon={{name: 'rocket', color: 'orange', type: 'font-awesome'}} overlayContainerStyle={{backgroundColor: 'white'}} onPress={() => console.log("Works!")} activeOpacity={0.7} containerStyle={{flex: 4, marginTop: 75}} /> <Avatar size="xlarge" rounded icon={{name: 'home', type: 'font-awesome'}} onPress={() => console.log("Works!")} activeOpacity={0.7} containerStyle={{flex: 5, marginRight: 60}} /> <Avatar size={200} rounded icon={{name: 'user', type: 'font-awesome'}} onPress={() => console.log("Works!")} activeOpacity={0.7} containerStyle={{flex: 2, marginLeft: 20, marginTop: 115}} /> ``` #### Avatar with title placeholder <img src="/react-native-elements/img/avatar_with_title_placeholder.gif" width="500" > ```js import { ListItem } from 'react-native-elements'; <ListItem leftAvatar={{ title: name[0], source: { uri: avatar_url }, showEditButton: true, }} title={name} subtitle={role} chevron />; ``` --- ## Props - [`activeOpacity`](#activeopacity) - [`avatarStyle`](#avatarstyle) - [`containerStyle`](#containerstyle) - [`editButton`](#editbutton) - [`icon`](#icon) - [`iconStyle`](#iconstyle) - [`imageProps`](#imageprops) - [`onEditPress`](#oneditpress) - [`onLongPress`](#onlongpress) - [`onPress`](#onpress) - [`overlayContainerStyle`](#overlaycontainerstyle) - [`placeholderStyle`](#placeholderstyle) - 
[`rounded`](#rounded) - [`size`](#size) - [`showEditButton`](#showeditbutton) - [`source`](#source) - [`title`](#title) - [`titleStyle`](#titlestyle) - [`renderPlaceholderContent`](#renderplaceholdercontent) - [`Component`](#Component) - [`ImageComponent`](#imagecomponent) --- ## Reference ### `activeOpacity` Opacity when pressed | Type | Default | | :----: | :-----: | | number | 0.2 | --- ### `avatarStyle` Style for avatar image | Type | Default | | :------------: | :-----: | | object (style) | none | --- ### `containerStyle` Styling for outer container | Type | Default | | :------------: | :-----: | | object (style) | none | --- ### `editButton` Icon props to be used for edit button | Type | Default | | :-----------------------------------------------------------------: | :---------------------------------------------------------------------------: | | {[...Icon props](/react-native-elements/docs/icon.html#icon-props)} | { name: 'mode-edit', type: 'material', color: '#fff', underlayColor: '#000' } | --- ### `icon` Displays an icon as the main content of the Avatar. **Cannot be used alongside title**. When used with the `source` prop it will be used as the placeholder. 
| Type | Default | | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----: | | object {name: string, color: string, size: number, type: string (default is material, or choose from [supported icon sets](icon.md#available-icon-sets)), iconStyle: object(style)} | none | --- ### `iconStyle` Extra styling for icon component (optional) | Type | Default | | :------------: | :-----: | | object (style) | none | --- ### `imageProps` Optional properties to pass to the avatar e.g "resizeMode" | Type | Default | | :----------------------: | :-----: | | object (imageProperties) | none | --- ### `onEditPress` Callback function when pressing on the edit button | Type | Default | | :------: | :-----: | | function | none | --- ### `onLongPress` Callback function when long pressing component | Type | Default | | :------: | :-----: | | function | none | --- ### `onPress` Callback function when pressing component | Type | Default | | :------: | :-----: | | function | none | --- ### `overlayContainerStyle` Style for the view outside image or icon | Type | Default | | :------------: | :-----: | | object (style) | none | --- ### `placeholderStyle` Adds style to the placeholder wrapper | Type | Default | | :------------: | :------------------------------: | | object (style) | `{ backgroundColor: '#BDBDBD' }` | --- ### `rounded` Makes the avatar circular | Type | Default | | :-----: | :-----: | | boolean | false | --- ### `size` Size of the avatar | Type | Default | | :----------------------------------------------------: | :-----: | | string(`small`, `medium`, `large`, `xlarge`) or number | `small` | --- ### `showEditButton` Shows an edit button over the avatar (optional) | Type | Default | | :-----: | :-----: | | boolean | false | --- ### `source` Image source | Type | Default | | :------------: | :-----: | | object (style) | none | --- ### `title` 
Renders title in the placeholder | Type | Default | | :----: | :-----: | | string | none | --- ### `titleStyle` Style for the title | Type | Default | | :------------: | :-----: | | object (style) | none | --- ### `renderPlaceholderContent` Custom placeholder element (by default, it's the title) | Type | Default | | :------------------------: | :-----: | | React component or element | none | --- ### `Component` Component for enclosing element (eg: TouchableHighlight, View, etc) | Type | Default | | :------: | :----------------: | | function | TouchableHighlight | --- ### `ImageComponent` Custom ImageComponent for Avatar | Type | Default | | :------------------------: | :-----: | | React component or element | Image |
21.646192
193
0.539274
eng_Latn
0.515941
ed057fe31723a0c4731063a301b6c28d04a8c771
825
md
Markdown
_plays/04-measures-kpis.md
wearethoughtfox/playbook
9a2f1dd7a2949536adbc82edab048593640983d2
[ "CC0-1.0" ]
null
null
null
_plays/04-measures-kpis.md
wearethoughtfox/playbook
9a2f1dd7a2949536adbc82edab048593640983d2
[ "CC0-1.0" ]
null
null
null
_plays/04-measures-kpis.md
wearethoughtfox/playbook
9a2f1dd7a2949536adbc82edab048593640983d2
[ "CC0-1.0" ]
null
null
null
--- id: 4 title: How do we know if it's any good? --- Qualities guiding principles Measures / KPIs - deciding what to measure and how it affects decisions Designing with data It's always useful to ask: ‘What are we really trying to achieve here?’ [Create a framework that forces us to list two positive words in opposition.](https://medium.com/@dustin/thanks-for-writing-the-article-julie-8362fd235ae0) The outcomes caused slight tension and guided us to make appropriate tradeoffs, eg "direction over choice". ### Checklist - gentle, human, considered - "A user interface is well-designed when the program behaves exactly how the user thought it would." - Delight - prioritise resource on the number of people it will reach #### Key questions http://www.slideshare.net/dmc500hats/startup-metrics-for-pirates-nov-2012
34.375
263
0.770909
eng_Latn
0.995633
ed0637be50d4acf6a04d911d5e1574ec4a0edbed
823
md
Markdown
about.md
teymour/tutos
21b42abf33c3c6c4804d6ba80bdaf516e72937d2
[ "MIT" ]
null
null
null
about.md
teymour/tutos
21b42abf33c3c6c4804d6ba80bdaf516e72937d2
[ "MIT" ]
null
null
null
about.md
teymour/tutos
21b42abf33c3c6c4804d6ba80bdaf516e72937d2
[ "MIT" ]
3
2017-03-03T16:37:08.000Z
2020-11-13T12:12:24.000Z
--- layout: page title: About permalink: /about/ --- I am <a href="http://tangui.eu.org/">Tangui Morlier</a>. I have been working in the world of IT, the Internet, and Free Software since 2003. I keep notes here on technical topics that are useful to me in my work as a technical director. If you are interested in my professional profile, you can consult my [CV](http://tangui.eu.org/old/cv/cv_francais.html) or my [LinkedIn page](https://www.linkedin.com/profile/view?id=6661161). This site was generated with [jekyll](https://github.com/jekyll/jekyll). The RSS and Email icons come from Wikipedia; you can find the original files [here](https://en.wikipedia.org/wiki/File:Feed-icon.svg) and [there](https://commons.wikimedia.org/wiki/File:Mail-closed.svg?uselang=fr).
68.583333
294
0.764277
fra_Latn
0.897121
ed06a9c849af72a660a48d41d48e4294f0e5416e
277
md
Markdown
CONTRIBUTING.md
LuizFritsch/password-cracker
6a9f83d2719317e4a19633f0d374cdd7f08236a6
[ "MIT" ]
1
2019-03-31T05:52:17.000Z
2019-03-31T05:52:17.000Z
CONTRIBUTING.md
LuizFritsch/Hecatoncheires
6a9f83d2719317e4a19633f0d374cdd7f08236a6
[ "MIT" ]
3
2019-02-04T12:19:53.000Z
2019-02-08T12:28:44.000Z
CONTRIBUTING.md
LuizFritsch/router-cracker
6a9f83d2719317e4a19633f0d374cdd7f08236a6
[ "MIT" ]
2
2019-09-05T09:53:31.000Z
2020-02-04T05:43:17.000Z
# Contributing to this project :+1::tada: First off, thanks for taking the time to contribute! :tada::+1: ## Steps for creating good issues or pull requests: ## Links to external documentation, mailing lists, or a code of conduct: ## Community and behavioral expectations:
27.7
74
0.747292
eng_Latn
0.996522
ed06cdcb95c2253d1dea33ddb177a8a51a44e43e
638
md
Markdown
_recipes/enchiladas.md
vrtulspud/chowdown
b9a11c966bc6916d17553c0e8ab150ad94dd198a
[ "Unlicense" ]
null
null
null
_recipes/enchiladas.md
vrtulspud/chowdown
b9a11c966bc6916d17553c0e8ab150ad94dd198a
[ "Unlicense" ]
null
null
null
_recipes/enchiladas.md
vrtulspud/chowdown
b9a11c966bc6916d17553c0e8ab150ad94dd198a
[ "Unlicense" ]
null
null
null
--- layout: recipe title: "Enchiladas" image: imagecredit: tags: lunch, dinner, main dish, oven ingredients: - 1 package Cheese, Monterey Jack Cheddar - 1 lb Ground Round - 1/2 tsp Cumin, Ground - 1 tbsp Chives, Dried - 1/2 tsp Garlic, Fresh - 1/2 tsp Pepper, Ground Black - 1 package Tortillas - 1 can Enchilada Sauce directions: - 1) Mix 1 cup cheese and remainder of ingredients. - 2) Wrap tortillas. - 3) Spray pan. - 4) Cover tortillas in sauce and sprinkle with cheese. - 5) Cover with foil. - 6) Bake at 350 degrees for 30 minutes. --- **Servings:** ? | **Prep:** ? | **Cook:** 30 min | **Source:** Family recipe; Beth
18.764706
55
0.680251
eng_Latn
0.684446
ed0822061bc76aea84b029d0cf827164ea11019e
10,221
md
Markdown
windows-driver-docs-pr/wdf/using-control-device-objects.md
sarman1998/windows-driver-docs
790f8ecb851d5c9e423af03a8a57dfac59945c24
[ "CC-BY-4.0", "MIT" ]
4
2018-01-29T10:59:09.000Z
2021-05-26T09:19:55.000Z
windows-driver-docs-pr/wdf/using-control-device-objects.md
sarman1998/windows-driver-docs
790f8ecb851d5c9e423af03a8a57dfac59945c24
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/wdf/using-control-device-objects.md
sarman1998/windows-driver-docs
790f8ecb851d5c9e423af03a8a57dfac59945c24
[ "CC-BY-4.0", "MIT" ]
1
2018-01-29T10:59:10.000Z
2018-01-29T10:59:10.000Z
--- title: Using Control Device Objects author: windows-driver-content description: Using Control Device Objects ms.assetid: 6367954f-6916-46df-a5a0-e80f045b69e5 keywords: - control device objects WDK KMDF - device objects WDK KMDF - framework objects WDK KMDF , control device objects - legacy hardware devices WDK KMDF - software-only virtual devices WDK KMDF - system shutdown notifications WDK KMDF - shutdown notifications WDK KMDF - notifications WDK KMDF - names WDK KMDF - names WDK KMDF , device objects ms.author: windowsdriverdev ms.date: 04/20/2017 ms.topic: article ms.prod: windows-hardware ms.technology: windows-devices --- # Using Control Device Objects A *control device object* is a framework device object that does not support Plug and Play (PnP) or power management operations. Drivers can use control device objects to represent software-only virtual devices or *legacy hardware devices* (that is, devices that do not provide PnP or power management capabilities). A driver that creates a control device object also typically creates a symbolic link for the device object. Applications can send I/O requests to the control device object by passing the symbolic link name to an API element, such as the Microsoft Win32 [**CreateFile**](https://msdn.microsoft.com/library/windows/desktop/aa363858) function. The framework does not attach control device objects to a [device stack](wdm-concepts-for-kmdf-drivers.md#device-stacks). Therefore, when an application sends an I/O request to a control device object, the I/O manager delivers the request directly to the driver that created the control device object, instead of to the driver at the top of the stack. (However, an additional driver can call [**IoAttachDevice**](https://msdn.microsoft.com/library/windows/hardware/ff548294) to attach a device object above the control device object. In this case, the additional driver receives the I/O request first.) 
### Uses of Control Device Objects Two typical uses for control devices are: 1. A filter driver for a PnP device, if the driver supports a set of custom I/O control codes for applications to use. If an application attempted to send the custom I/O control codes to the top of the driver stack (by using, for example, the symbolic link name of a [device interface](using-device-interfaces.md)), a driver above the filter driver might fail the I/O request if the driver did not recognize the custom I/O control codes. To avoid this problem, the filter driver can create a control device object. Applications can use the control device object's symbolic link name to send I/O control codes directly to the filter driver. (Note that a better way for the filter driver to avoid the problem is to act as a bus driver and [enumerate](enumerating-the-devices-on-a-bus.md) child devices that operate in raw mode. In other words, for each device that the filter driver supports, the driver can create a physical device object (PDO) that does not require a function driver. The driver calls [**WdfPdoInitAssignRawDevice**](https://msdn.microsoft.com/library/windows/hardware/ff548802) and [**WdfDeviceInitAssignName**](https://msdn.microsoft.com/library/windows/hardware/ff546029) for each of these devices, and the application can identify a device by name when it sends a custom I/O control code.) 2. A driver for a device that does not support PnP. Such a driver must use control device objects, because the device objects for such devices do not reside in a device stack and do not provide PnP capabilities. For more information about supporting non-PnP devices, see [Using Kernel-Mode Driver Framework with Non-PnP Drivers](using-kernel-mode-driver-framework-with-non-pnp-drivers.md). ### Creating a Control Device Object To create a control device object, a driver must: 1. 
Call [**WdfControlDeviceInitAllocate**](https://msdn.microsoft.com/library/windows/hardware/ff545841) to obtain a [**WDFDEVICE\_INIT**](https://msdn.microsoft.com/library/windows/hardware/ff546951) structure. 2. Call object initialization methods, as needed, to initialize the WDFDEVICE\_INIT structure. The driver can call only the following initialization methods: - [**WdfControlDeviceInitSetShutdownNotification**](https://msdn.microsoft.com/library/windows/hardware/ff545847) - [**WdfDeviceInitAssignName**](https://msdn.microsoft.com/library/windows/hardware/ff546029) - [**WdfDeviceInitAssignSDDLString**](https://msdn.microsoft.com/library/windows/hardware/ff546035) - [**WdfDeviceInitAssignWdmIrpPreprocessCallback**](https://msdn.microsoft.com/library/windows/hardware/ff546043) - [**WdfDeviceInitSetCharacteristics**](https://msdn.microsoft.com/library/windows/hardware/ff546074) - [**WdfDeviceInitSetDeviceClass**](https://msdn.microsoft.com/library/windows/hardware/ff546084) - [**WdfDeviceInitSetExclusive**](https://msdn.microsoft.com/library/windows/hardware/ff546097) - [**WdfDeviceInitSetFileObjectConfig**](https://msdn.microsoft.com/library/windows/hardware/ff546107) - [**WdfDeviceInitSetIoInCallerContextCallback**](https://msdn.microsoft.com/library/windows/hardware/ff546119) - [**WdfDeviceInitSetIoType**](https://msdn.microsoft.com/library/windows/hardware/ff546128) - [**WdfDeviceInitSetRequestAttributes**](https://msdn.microsoft.com/library/windows/hardware/ff546786) 3. Call [**WdfDeviceCreate**](https://msdn.microsoft.com/library/windows/hardware/ff545926), which uses the contents of the WDFDEVICE\_INIT structure to create a framework device object. 4. Complete the following initialization operations: - [Create a default I/O queue](creating-i-o-queues.md) for the device, if one is needed. - Call [**WdfDeviceConfigureRequestDispatching**](https://msdn.microsoft.com/library/windows/hardware/ff545920), if needed. 
- Call [**WdfDeviceCreateSymbolicLink**](https://msdn.microsoft.com/library/windows/hardware/ff545939) to create a symbolic link name that applications can use to access the control device. 5. Call [**WdfControlFinishInitializing**](https://msdn.microsoft.com/library/windows/hardware/ff545854). ### Rules for Using Control Device Objects Drivers that create control device objects must obey the following rules: - Drivers cannot pass the control device object's handle to framework methods that [enumerate child devices](enumerating-the-devices-on-a-bus.md). - Drivers cannot pass the control device object's handle to framework methods that support [device interfaces](using-device-interfaces.md). - Drivers can create I/O queues and register request handlers for the queues, but the framework does not allow the queues to be [power-managed](using-power-managed-i-o-queues.md). - Drivers can create [file objects](framework-file-objects.md) for control device objects. ### Naming a Control Device Object All control device objects must be named. Typically, your driver will call [**WdfDeviceInitAssignName**](https://msdn.microsoft.com/library/windows/hardware/ff546029) to assign a device name and then call [**WdfDeviceCreateSymbolicLink**](https://msdn.microsoft.com/library/windows/hardware/ff545939) to create a symbolic link name that applications can use to access the object. If your driver does not call [**WdfDeviceInitAssignName**](https://msdn.microsoft.com/library/windows/hardware/ff546029) to assign a device name, the framework automatically generates a name for control devices--but your driver cannot call [**WdfDeviceCreateSymbolicLink**](https://msdn.microsoft.com/library/windows/hardware/ff545939). Your driver can call [**WdfDeviceInitSetDeviceClass**](https://msdn.microsoft.com/library/windows/hardware/ff546084) to specify a [device setup class](https://msdn.microsoft.com/library/windows/hardware/ff541509) for a control device. 
The device setup class identifies a section of the registry that contains administrator-supplied information about devices that belong to the setup class. For more information about calling [**WdfDeviceInitSetDeviceClass**](https://msdn.microsoft.com/library/windows/hardware/ff546084), see [Controlling Device Access in Framework-Based Drivers](controlling-device-access-in-kmdf-drivers.md). ### Receiving Notification of System Shutdown Because control device objects do not support PnP, your driver cannot register callback functions that inform the driver when a device's power state changes. However, the driver can call [**WdfControlDeviceInitSetShutdownNotification**](https://msdn.microsoft.com/library/windows/hardware/ff545847) to register an [*EvtDeviceShutdownNotification*](https://msdn.microsoft.com/library/windows/hardware/ff540911) callback function. This callback function informs the driver when the system is about to lose its power. ### Deleting a Control Device Object Some drivers have to delete their control device objects before the driver unloads, as follows: - If your driver creates control device objects (which do not support PnP or power management), and if the driver also creates framework device objects that support PnP and power management, the driver must eventually call [**WdfObjectDelete**](https://msdn.microsoft.com/library/windows/hardware/ff548734) at IRQL = PASSIVE\_LEVEL to delete the control device objects. If the driver creates both types of device objects, the operating system cannot unload your driver until the driver has deleted the control device objects. However, the driver must not delete the control device objects until after the framework has deleted the other device objects. To determine when the framework has deleted the other device objects, your driver should provide [*EvtCleanupCallback*](https://msdn.microsoft.com/library/windows/hardware/ff540840) functions for those objects. 
- If your driver creates control device objects but does not create framework device objects that support PnP and power management, the driver does not have to delete the control device objects. In this case, the framework deletes the control device objects after the driver's [*EvtDriverUnload*](https://msdn.microsoft.com/library/windows/hardware/ff541694) callback function returns.
84.471074
674
0.790823
eng_Latn
0.937921
ed08f60cf7760049ed6956b768831594a4185c4b
6,233
md
Markdown
Topics/Binary-search/README.md
shihab4t/Competitive-Programming
e8eec7d4f7d86bfa1c00b7fbbedfd6a1518f19be
[ "Unlicense" ]
3
2021-06-15T01:19:23.000Z
2022-03-16T18:23:53.000Z
Topics/Binary-search/README.md
shihab4t/Competitive-Programming
e8eec7d4f7d86bfa1c00b7fbbedfd6a1518f19be
[ "Unlicense" ]
null
null
null
Topics/Binary-search/README.md
shihab4t/Competitive-Programming
e8eec7d4f7d86bfa1c00b7fbbedfd6a1518f19be
[ "Unlicense" ]
null
null
null
# Linear Search ## Notes * Linear search is a sequential searching algorithm where we start from one end and check every element of the list until the desired element is found. It is the simplest searching algorithm. * Linear search algorithm (from the book Computer Programming, Vol. 3): ``` Step 1: Let the number of elements in array A be n. Step 2: Set i = 0. Step 3: Is the value of i less than n? If so, go to step 4; otherwise go to step 7. Step 4: Are A[i] and x equal to each other? If so, go to step 5; Step 5: Return i. (Result: the desired element was found at position i) Step 6: Increase i by 1 and go to step 3. Step 7: Set i = -1 and return i. (Result: the desired element was not found) ``` ## Complexity * Time Complexity: `O(n)` * Space complexity: `O(1)` ## Watch Videos * [x] [Linear search (Tamim Shahriar)](https://youtu.be/IbV2eELjP2k) ## Reading Resources * [x] [Linear Search (programiz.com)](https://www.programiz.com/dsa/linear-search) ## Implementation * Linear search (Tamim Shahriar): [linear-search-subeen-implementation.c](linear-search-subeen-implementation.c) * Linear search: [linear-search.cpp](linear-search.cpp) * Linear search (programiz .com): [linear-search-programiz.c](linear-search-programiz.c) ## Solve Problems * [HackerRank: 1.A) Linear Search](https://www.hackerrank.com/contests/17cs1102/challenges/1-a-linear-search) # Binary search ## Notes * Binary search algorithm (from the book Computer Programming, Vol. 3): ``` Input: an array A and an element x. Output: the index i of the array at which the element was found. If it is not found, the value of i will be -1. 
Step 1: Let the number of elements in A be n. Step 2: Set left = 0 and right = n-1. We will search for the element among the numbers from index left to index right of the array. If left and right are equal, we know the array contains exactly one element. And if left is greater than right, it means no more elements remain in the array. Step 3: If left > right, go to step 8. Step 4: Set mid = (left + right) / 2. mid is the index midway between left and right. Step 5: If A[mid] == x, return mid. (Result: the position of the desired element has been found.) Step 6: If A[mid] < x, set left = mid + 1. Then go to step 3. Step 7: If A[mid] > x, set right = mid - 1. Then go to step 3. Step 8: The element was not found, so return -1. ``` * Calculating mid: * `mid = (left + right) / 2` or * `mid = left + ((right - left) / 2)` (better, since it avoids integer overflow when left + right exceeds the integer range) * Luv Tutorials: * We binary search over **monotonic functions** * **Search space** ## Complexity * Time Complexity: `O(log n)` * Space complexity: `O(1)` ## Watch Videos * Luv Binary Search Playlist: https://youtube.com/playlist?list=PLauivoElc3gjE_s-7owHO0RVb_jj7Rx85 * [x] [Binary search (Tamim Shahriar)](https://youtu.be/NMC6ltspWys) * [x] [Binary Search & How I write it | Tutorial | CP Course | EP 40 (Luv)](https://youtu.be/egRrgj8JOdY) * [x] [Implement Upper Bound & Lower Bound with Binary Search | CP Course | EP 41 (Luv)](https://youtu.be/gcYvFVZ_LUA) * [x] [Nth Root of a Number using Binary Search | CP Course | EP 42 (Luv)](https://youtu.be/5snE6xsyheE) * [ ] [Advanced Binary Search with Predicate Function | SPOJ EKO | CP Course | EP 43 (Luv)](https://youtu.be/JAfJssvFgDI) * [ ] [Binary Search tutorial (C++ and Python) (Errichto)](https://youtu.be/GU7DpgHINWQ) ## Reading Resources ## Implementation * Tamim Shahriar Implementation: [binary-search-subeen.c](binary-search-subeen.c) * Luv Implementation: * [binary-serach.cpp] * [lower_bound-luv-Implementation.cpp](lower_bound-luv-Implementation.cpp) * 
[upper_bound-luv-implementation.cpp](upper_bound-luv-implementation.cpp) * Binary Search: [binary-serach.cpp](binary-serach.cpp) * Lower Bound: [lower-bound.cpp](lower-bound.cpp) * Upper Bound: [upper-bound.cpp](upper-bound.cpp) ## Solve Problems * LeetCode List: https://leetcode.com/tag/binary-search * LightOJ List: https://lightoj.com/problems/category/binary-search * [x] [UVa: 10611 - The Playboy Chimp (Easy)](https://onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=1552) * [x] [LeetCode: 1351. Count Negative Numbers in a Sorted Matrix (Easy)](https://leetcode.com/problems/count-negative-numbers-in-a-sorted-matrix/) * [x] [LeetCode: 278. First Bad Version (Easy)](https://leetcode.com/problems/first-bad-version/) * [x] [LeetCode: 35. Search Insert Position (Easy)](https://leetcode.com/problems/search-insert-position/) * [x] [LeetCode: 69. Sqrt(x) (Easy)](https://leetcode.com/problems/sqrtx/) * [x] [UVa: 10474 - Where is the Marble? (Easy)](https://onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=1415) * [x] [LightOJ: 1088.Points in Segments (Easy)](https://lightoj.com/problem/points-in-segments) * [x] [LeetCode: 34. Find First and Last Position of Element in Sorted Array (Medium)](https://leetcode.com/problems/find-first-and-last-position-of-element-in-sorted-array/) * [x] [CodeForces: 817C. Really Big Numbers (*1600)](https://codeforces.com/problemset/problem/817/C) * [x] [CodeForces: 1613C. 
Poisoned Dagger](https://codeforces.com/contest/1613/problem/C) * [ ] [SpOJ: EKO.Eko](https://www.spoj.com/problems/EKO/) * [ ] [SpOJ: AGGRCOW - Aggressive cows](https://www.spoj.com/problems/AGGRCOW/) * [ ] [SpOJ: PIE.Pie](https://www.spoj.com/problems/PIE/) * [ ] [CodeChef: MINEAT.Minion Chef and Bananas](https://www.codechef.com/problems/MINEAT) * [ ] [CodeForces: 230.T-primes](https://codeforces.com/problemset/problem/230/B) * [ ] [CodeForces: 474.Worms](https://codeforces.com/problemset/problem/474/B) * [ ] [InterviewBit: Painter's Partition Problem](https://www.interviewbit.com/problems/painters-partition-problem/) * [ ] [LeetCode: 374. Guess Number Higher or Lower](https://leetcode.com/problems/guess-number-higher-or-lower/) * [ ] [LeetCode: 1813. Sentence Similarity III](https://leetcode.com/problems/sentence-similarity-iii/) * [ ] [LeetCode: 957. Prison Cells After N Days](https://leetcode.com/problems/prison-cells-after-n-days/) * [ ] [LeetCode: 367. Valid Perfect Square (Easy)](https://leetcode.com/problems/valid-perfect-square/)
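The step-by-step algorithm above translates directly into code. A minimal Python sketch of binary search with the overflow-safe `mid` calculation, plus the lower/upper bound variants covered in the videos (mirroring C++ `std::lower_bound` and `std::upper_bound`):

```python
def binary_search(a, x):
    """Return an index of x in sorted list a, or -1 if x is absent."""
    left, right = 0, len(a) - 1
    while left <= right:
        # Safer than (left + right) // 2 in fixed-width integer languages
        mid = left + (right - left) // 2
        if a[mid] == x:
            return mid
        if a[mid] < x:
            left = mid + 1
        else:
            right = mid - 1
    return -1

def lower_bound(a, x):
    """First index i with a[i] >= x, or len(a) if none (like std::lower_bound)."""
    left, right = 0, len(a)
    while left < right:
        mid = left + (right - left) // 2
        if a[mid] < x:
            left = mid + 1
        else:
            right = mid
    return left

def upper_bound(a, x):
    """First index i with a[i] > x, or len(a) if none (like std::upper_bound)."""
    left, right = 0, len(a)
    while left < right:
        mid = left + (right - left) // 2
        if a[mid] <= x:
            left = mid + 1
        else:
            right = mid
    return left

print(binary_search([2, 3, 5, 10, 15], 10))                         # 3
print(lower_bound([1, 2, 2, 3], 2), upper_bound([1, 2, 2, 3], 2))   # 1 3
```

Note that `upper_bound(a, x) - lower_bound(a, x)` counts the occurrences of x, which is the usual trick for problems like "Where is the Marble?" above.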
57.712963
273
0.660998
yue_Hant
0.397274
ed09020241d37f812bb2bc6bbf9046a1f5b574a1
865
md
Markdown
README.md
avdosev/easy_input_dart
b50a0a887a6613ff65fb816358eea16857e3c000
[ "MIT" ]
1
2021-11-28T13:21:26.000Z
2021-11-28T13:21:26.000Z
README.md
avdosev/easy_input_dart
b50a0a887a6613ff65fb816358eea16857e3c000
[ "MIT" ]
null
null
null
README.md
avdosev/easy_input_dart
b50a0a887a6613ff65fb816358eea16857e3c000
[ "MIT" ]
null
null
null
# Easy input A simple Python-style package for working with stdin and files. ## Links [Pub dev][pubdev] [Documentation][docs] [Issue tracker][tracker] ## Usage ```dart import 'dart:io'; import 'package:easy_input/easy_input.dart'; void main() async { var line = input(); print('line: $line'); final file = open('kek.txt', mode: FileMode.write); file.writeStringSync('kek1\n'); file.writeStringSync('kek2\n'); await file.close(); final file2 = open('kek.txt'); print('line1: ${input(file: file2)}'); // kek1 print('line1: ${input(file: file2)}'); // kek2 file2.close(); } ``` ## Features and bugs Please file feature requests and bugs at the [issue tracker][tracker]. [tracker]: https://github.com/avdosev/easy_input_dart/issues [pubdev]: https://pub.dev/packages/easy_input [docs]: https://pub.dev/documentation/easy_input/latest/
20.116279
70
0.685549
eng_Latn
0.338201
ed0aab880170072359064ce9703b71a848695f20
2,771
md
Markdown
website/blog/en/2021/clickhouse-v21.10-released.md
pdv-ru/ClickHouse
0ff975bcf3008fa6c6373cbdfed16328e3863ec5
[ "Apache-2.0" ]
5
2019-07-29T02:09:56.000Z
2020-03-19T19:05:44.000Z
website/blog/en/2021/clickhouse-v21.10-released.md
pdv-ru/ClickHouse
0ff975bcf3008fa6c6373cbdfed16328e3863ec5
[ "Apache-2.0" ]
30
2021-10-01T00:08:14.000Z
2021-12-06T13:13:12.000Z
website/blog/en/2021/clickhouse-v21.10-released.md
pdv-ru/ClickHouse
0ff975bcf3008fa6c6373cbdfed16328e3863ec5
[ "Apache-2.0" ]
1
2022-03-19T17:31:29.000Z
2022-03-19T17:31:29.000Z
---
title: 'ClickHouse v21.10 Released'
image: 'https://blog-images.clickhouse.com/en/2021/clickhouse-v21-10/featured.jpg'
date: '2021-10-14'
author: '[Rich Raposa](https://github.com/rfraposa), [Alexey Milovidov](https://github.com/alexey-milovidov)'
tags: ['company', 'community']
---

We're excited to share with you our first release since [announcing ClickHouse, Inc](https://clickhouse.com/blog/en/2021/clickhouse-inc/). The 21.10 release includes new contributions from multiple contributors including many in our community, and we are grateful for your ongoing ideas, development, and support. Our Engineering team continues to be laser-focused on providing our community and users with the fastest and most scalable OLAP DBMS available while implementing many new features.

In the 21.10 release, we have a wonderful 79 contributors with 1255 commits across 211 pull requests - what an amazing community and we cherish your contributions.

Let's highlight some of these exciting new capabilities in 21.10:

* User-defined functions (UDFs) can now be [created as lambda expressions](https://clickhouse.com/docs/en/sql-reference/functions/#higher-order-functions). For example, `CREATE FUNCTION plus_one as (a) -> a + 1`
* Two new table engines: Executable and ExecutablePool, which allow you to stream the results of a query to a custom shell script
* Instead of logging every query (which can be a lot of logs!), you can now log a random sample of your queries. The number of queries logged is determined by defining a specified probability between 0.0 (no queries logged) and 1.0 (all queries logged) using the new `log_queries_probability` setting.
* Positional arguments are now available in your GROUP BY, ORDER BY and LIMIT BY clauses. For example, `SELECT foo, bar, baz FROM my_table ORDER BY 2,3` orders the results by the bar and baz columns (no need to specify column names twice!)

We're also thrilled to announce some new free training available to you in our Learn ClickHouse portal: [https://clickhouse.com/learn/lessons/whatsnew-clickhouse-21.10/](https://clickhouse.com/learn/lessons/whatsnew-clickhouse-21.10/)

We're always listening for new ideas, and we're happy to welcome new contributors to the ClickHouse project. Whether for submitting code or improving our documentation and examples, please get involved by sending us a pull request or submitting an issue. Our beginner developer contribution guide will help you get started: [https://clickhouse.com/docs/en/development/developer-instruction/](https://clickhouse.com/docs/en/development/developer-instruction/)

## ClickHouse Release Notes

Release 21.10

Release Date: 2021-10-17

Release Notes: [21.10](https://github.com/ClickHouse/ClickHouse/blob/master/CHANGELOG.md)
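The highlighted features can be sketched as a few illustrative queries (the table and column names `my_table`, `foo`, `bar`, `baz` follow the examples above and are placeholders; exact syntax may vary by ClickHouse version):

```sql
-- Lambda-style user-defined function (new in 21.10)
CREATE FUNCTION plus_one AS (a) -> a + 1;

SELECT plus_one(41);  -- returns 42

-- Log only a ~10% random sample of queries
SET log_queries_probability = 0.1;

-- Positional arguments: order by the 2nd and 3rd selected columns
SELECT foo, bar, baz FROM my_table ORDER BY 2, 3;
```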
92.366667
658
0.788885
eng_Latn
0.983444
ed0afdbf34d4014c6b7c5fd8c4df8d086a948d8b
2,230
md
Markdown
aiml30/presentations.md
chronicle17/ignite-learning-paths-training-aiml
a5890650c03d5b4e8488b432e87b8c80f85d6d56
[ "CC-BY-4.0", "MIT" ]
1
2020-01-22T07:41:44.000Z
2020-01-22T07:41:44.000Z
aiml30/presentations.md
widruv/ignite-learning-paths-training-aiml
287de12845274d75750a0b6aabe156d8355b3a23
[ "CC-BY-4.0", "MIT" ]
null
null
null
aiml30/presentations.md
widruv/ignite-learning-paths-training-aiml
287de12845274d75750a0b6aabe156d8355b3a23
[ "CC-BY-4.0", "MIT" ]
null
null
null
<!-- This is a machine generated file, and should not be edited, as it will be overwritten with future updates. -->

# AIML30 Presentation Files

- [aiml30.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.pptx) (Updated: Dec 04, 2019)
- [aiml30.pt-br.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.pt-br.pptx) (Updated: Dec 04, 2019)
- [aiml30.ja-jp.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.ja-jp.pptx) (Updated: Dec 04, 2019)
- [aiml30.zh-cn.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.zh-cn.pptx) (Updated: Jan 09, 2020)
- [aiml30.ko-kr.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.ko-kr.pptx) (Updated: Jan 17, 2020)

---

## Historical Files

- [aiml30-2019-10_Oct-24.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30-2019-10_Oct-24.pptx)
- [aiml30-2019-11_Nov-04.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30-2019-11_Nov-04.pptx)
- [aiml30-2019-11_Nov-13.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30-2019-11_Nov-13.pptx)
- [aiml30-2019-11_Nov-30.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30-2019-11_Nov-30.pptx)
- [aiml30-2019-12_Dec-05.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30-2019-12_Dec-05.pptx)
- [aiml30.ja-jp-2019-12_Dec-05.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.ja-jp-2019-12_Dec-05.pptx)
- [aiml30.ko-kr-2020-01_Jan-12.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.ko-kr-2020-01_Jan-12.pptx)
- [aiml30.ko-kr-2020-01_Jan-21.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.ko-kr-2020-01_Jan-21.pptx)
- [aiml30.pt-br-2019-12_Dec-05.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.pt-br-2019-12_Dec-05.pptx)
- [aiml30.zh-cn-2019-12_Dec-05.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.zh-cn-2019-12_Dec-05.pptx)
- [aiml30.zh-cn-2020-01_Jan-12.pptx](https://globaleventcdn.blob.core.windows.net/assets/aiml/aiml30/aiml30.zh-cn-2020-01_Jan-12.pptx)
82.592593
134
0.769058
yue_Hant
0.283158
ed0b0709f8a08c004187bc8d0a7c2d55408ddc10
1,530
md
Markdown
_i18n/ru/resources/moneropedia/canonically-unique-host.md
santorihelix/monero-site
e8894ea8eb1a281ba995cfd056e4473729095b7d
[ "BSD-3-Clause" ]
1
2021-02-10T15:15:29.000Z
2021-02-10T15:15:29.000Z
_i18n/ru/resources/moneropedia/canonically-unique-host.md
santorihelix/monero-site
e8894ea8eb1a281ba995cfd056e4473729095b7d
[ "BSD-3-Clause" ]
null
null
null
_i18n/ru/resources/moneropedia/canonically-unique-host.md
santorihelix/monero-site
e8894ea8eb1a281ba995cfd056e4473729095b7d
[ "BSD-3-Clause" ]
null
null
null
---
tags: ["kovri"]
terms: ["Canonically-unique-host", "канонически-уникальное-хост", "канонически-уникальный-хост", "канонически-уникальному-хосту", "канонически-уникальным-хостом", "канонически-уникального-хоста"]
summary: "A host that canonically resolves to an address or set of addresses"
---

{% include disclaimer.html translated="yes" translationOutdated="no" %}

### The Basics

A canonically unique host is a [FQDN](https://en.wikipedia.org/wiki/FQDN) that canonically resolves to a designated address or set of addresses. Not to be confused with a @locally-unique-host.

### In-depth Information

A canonically unique host is defined by remote authoritative sources, usually via [DNS](https://en.wikipedia.org/wiki/DNS). When resolving a peer's hostname, you will most likely use an external resolution source unless you have one of the following:

- a database file similar to a [hosts file](https://en.wikipedia.org/wiki/Hosts_(file))
- an internal-network resolver (which ultimately relies on external sources)

### Notes

- Monero primarily uses @canonically-unique-host resolution, while @I2P only uses @locally-unique-host resolution.
- The self-assigned [top-level domain](https://en.wikipedia.org/wiki/Top_level_domain) for @I2P and @Kovri is currently `.i2p`, and @Kovri is only responsible for handling/using the `.i2p` top-level domain.
66.521739
293
0.795425
rus_Cyrl
0.963681
ed0c31d185d8e33e615de4fa83923be0261c7355
1,875
md
Markdown
20211202_Campus_Movie_Night_N48.md
Student-Campus-Melaten/Movie-Night
6128677e30550ae67277790210f4d62df25d1119
[ "MIT" ]
null
null
null
20211202_Campus_Movie_Night_N48.md
Student-Campus-Melaten/Movie-Night
6128677e30550ae67277790210f4d62df25d1119
[ "MIT" ]
null
null
null
20211202_Campus_Movie_Night_N48.md
Student-Campus-Melaten/Movie-Night
6128677e30550ae67277790210f4d62df25d1119
[ "MIT" ]
null
null
null
## Campus Movie N°48

Date: 02.12.2021

- Thursday night, movie night at 21:00 in Campus Open Space.
- Meet and greet at 20:30.
- Snacks and drinks allowed.
- Corona protection rules apply.

---

🗳️ **Vote:** https://dudle.inf.tu-dresden.de/CampusMovieNightN48/

---

### 🎭 Drama

#### The Lives of Others (Das Leben der Anderen) (2006)

<img src="https://media.services.cinergy.ch/media/box1600/cdd196af7bd018754032f772e3493e3933c85069.jpg" alt="LivesOfOthers" style="width:200px;">

**Plot:** Before the collapse of the Berlin Wall, pro-Socialist playwright Georg's flat is bugged to find incriminating evidence against him when a corrupt government official falls for his girlfriend.

----

### 🚨 Action

#### Outlaw King (2018)

<img src="https://i.pinimg.com/236x/f7/76/7a/f7767a77baba571ba42b2c14fe3ed1a0.jpg" alt="Outlaw King" style="width:200px;">

**Plot:** After being crowned King of Scotland, legendary warrior Robert the Bruce is forced into exile by the English and leads a band of outlaws to help him reclaim the throne.

---

### :rocket: Sci-Fi

#### Light of My Life (2019)

<img src="https://m.media-amazon.com/images/M/MV5BYjA3YzBlOWMtYTg1Zi00MWUxLTg4ZDctMTRmMGJiNjkzZGE5XkEyXkFqcGdeQXVyMTA4NjE0NjEy._V1_.jpg" alt="Light of My Life" style="width:200px;">

**Plot:** Rag and her father travel through British Columbia ten years after a mysterious pandemic wipes out most of the world's female population. However, Rag's father disguises her as a boy to save her.

### 🎞️ Animation

#### WALL·E (2008)

<img src="https://m.media-amazon.com/images/M/MV5BMjExMTg5OTU0NF5BMl5BanBnXkFtZTcwMjMxMzMzMw@@._V1_.jpg" alt="WALL·E" style="width:200px;">

**Plot:** A robot who is responsible for cleaning a waste-covered Earth meets another robot and falls in love with her. Together, they set out on a journey that will alter the fate of mankind.
32.894737
205
0.7472
eng_Latn
0.89729