diff --git "a/summarization.csv" "b/summarization.csv"
new file mode 100644
--- /dev/null
+++ "b/summarization.csv"
@@ -0,0 +1,711 @@
+document,summary,title,category,subcategory,word_count,source,id
+"Audit AnnotationsThis page serves as a reference for the audit annotations of the kubernetes.io namespace. These annotations apply to the Event object from API group audit.k8s.io.Note:The following annotations are not used within the Kubernetes API. When you enable auditing in your cluster, audit event data is written using Event from API group audit.k8s.io. The annotations apply to audit events. Audit events are different from objects in the Event API (API group events.k8s.io).k8s.io/deprecatedExample: k8s.io/deprecated: ""true""Value must be ""true"" or ""false"". The value ""true"" indicates that the request used a deprecated API version.k8s.io/removed-releaseExample: k8s.io/removed-release: ""1.22""Value must be in the format ""<MAJOR>.<MINOR>"". It is set to the target removal release on requests made to deprecated API versions that have a target removal release.pod-security.kubernetes.io/exemptExample: pod-security.kubernetes.io/exempt: namespaceValue must be one of user, namespace, or runtimeClass, which correspond to Pod Security Exemption dimensions. This annotation indicates on which dimension the exemption from the PodSecurity enforcement was based.pod-security.kubernetes.io/enforce-policyExample: pod-security.kubernetes.io/enforce-policy: restricted:latestValue must be privileged:<version>, baseline:<version>, or restricted:<version>, which correspond to Pod Security Standard levels accompanied by a version, which must be latest or a valid Kubernetes version in the format v<MAJOR>.<MINOR>.
This annotation informs about the enforcement level that allowed or denied the pod during PodSecurity admission.See Pod Security Standards for more information.pod-security.kubernetes.io/audit-violationsExample: pod-security.kubernetes.io/audit-violations: would violate PodSecurity ""restricted:latest"": allowPrivilegeEscalation != false (container ""example"" must set securityContext.allowPrivilegeEscalation=false), ...Value details an audit policy violation; it contains the Pod Security Standard level that was transgressed as well as the specific policies on the fields that were violated from the PodSecurity enforcement.See Pod Security Standards for more information.apiserver.latency.k8s.io/etcdExample: apiserver.latency.k8s.io/etcd: ""4.730661757s""This annotation indicates the latency incurred inside the storage layer; it accounts for the time it takes to send data to etcd and get the complete response back.The value of this audit annotation does not include the time incurred in admission or validation.apiserver.latency.k8s.io/decode-response-objectExample: apiserver.latency.k8s.io/decode-response-object: ""450.6649ns""This annotation records the time taken to decode the response received from the storage layer (etcd).apiserver.latency.k8s.io/apf-queue-waitExample: apiserver.latency.k8s.io/apf-queue-wait: ""100ns""This annotation records the time that a request spent queued due to API server priorities.See API Priority and Fairness (APF) for more information about this mechanism.authorization.k8s.io/decisionExample: authorization.k8s.io/decision: ""forbid""Value must be forbid or allow. 
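The apiserver.latency.k8s.io/* annotation values above are Go-style duration strings ("4.730661757s", "450.6649ns", "100ns"). A minimal sketch (not part of Kubernetes; the helper name is made up) of converting such a value into seconds for analysis:

```python
# Hypothetical helper: convert the Go-style duration strings used by the
# apiserver.latency.k8s.io/* audit annotations into seconds.
import re

_UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def duration_to_seconds(value: str) -> float:
    """Parse a duration such as "4.730661757s" or "100ns" into seconds."""
    total = 0.0
    # Match each number+unit pair; longer unit names are listed first so that
    # "ms" is not consumed as "m" followed by a stray "s".
    for number, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", value):
        total += float(number) * _UNITS[unit]
    return total

print(duration_to_seconds("100ns"))  # 1e-07
```

This makes it easy to, for example, sum the storage and decode latencies of a single audit event.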
This annotation indicates whether or not a request was authorized in Kubernetes audit logs.See Auditing for more information.authorization.k8s.io/reasonExample: authorization.k8s.io/reason: ""Human-readable reason for the decision""This annotation gives the reason for the decision in Kubernetes audit logs.See Auditing for more information.missing-san.invalid-cert.kubernetes.io/$hostnameExample: missing-san.invalid-cert.kubernetes.io/example-svc.example-namespace.svc: ""relies on a legacy Common Name field instead of the SAN extension for subject validation""Used by Kubernetes version v1.24 and laterThis annotation indicates a webhook or aggregated API server is using an invalid certificate that is missing subjectAltNames. Support for these certificates was disabled by default in Kubernetes 1.19, and removed in Kubernetes 1.23.Requests to endpoints using these certificates will fail. Services using these certificates should replace them as soon as possible to avoid disruption when running in Kubernetes 1.23+ environments.There's more information about this in the Go documentation: X.509 CommonName deprecation.insecure-sha1.invalid-cert.kubernetes.io/$hostnameExample: insecure-sha1.invalid-cert.kubernetes.io/example-svc.example-namespace.svc: ""uses an insecure SHA-1 signature""Used by Kubernetes version v1.24 and laterThis annotation indicates a webhook or aggregated API server is using an insecure certificate signed with a SHA-1 hash. 
Support for these insecure certificates is disabled by default in Kubernetes 1.24, and will be removed in a future release.Services using these certificates should replace them as soon as possible, to ensure connections are secured properly and to avoid disruption in future releases.There's more information about this in the Go documentation: Rejecting SHA-1 certificates.validation.policy.admission.k8s.io/validation_failureExample: validation.policy.admission.k8s.io/validation_failure: '[{""message"": ""Invalid value"", ""policy"": ""policy.example.com"", ""binding"": ""policybinding.example.com"", ""expressionIndex"": ""1"", ""validationActions"": [""Audit""]}]'Used by Kubernetes version v1.27 and later.This annotation indicates that an admission policy validation evaluated to false for an API request, or that the validation resulted in an error while the policy was configured with failurePolicy: Fail.The value of the annotation is a JSON object. The message in the JSON provides the message about the validation failure.The policy, binding and expressionIndex in the JSON identify the name of the ValidatingAdmissionPolicy, the name of the ValidatingAdmissionPolicyBinding and the index in the policy validations of the CEL expressions that failed, respectively.The validationActions field shows what actions were taken for this validation failure. See Validating Admission Policy for more details about validationActions.","Audit AnnotationsThis page serves as a reference for the audit annotations of the kubernetes.io namespace. These annotations apply to Event object from API group audit.k8s.io.Note:The following annotations are not used within the Kubernetes API. When you enable auditing in your cluster, audit event data is written using Event from API group audit.k8s.io. The annotations apply to audit events. 
Audit events are different from objects in the Event API (API group events.k8s.io).k8s.io/deprecatedExam",Audit Annotations,reference,general-reference,685,https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/,k8s_00000_summary +"Change the Reclaim Policy of a PersistentVolumeThis page shows how to change the reclaim policy of a Kubernetes PersistentVolume.Before you beginYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:iximiuz LabsKillercodaKodeKloudPlay with KubernetesTo check the version, enter kubectl version.Why change reclaim policy of a PersistentVolumePersistentVolumes can have various reclaim policies, including ""Retain"", ""Recycle"", and ""Delete"". For dynamically provisioned PersistentVolumes, the default reclaim policy is ""Delete"". This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. This automatic behavior might be inappropriate if the volume contains precious data. In that case, it is more appropriate to use the ""Retain"" policy. With the ""Retain"" policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume will not be deleted. 
Instead, it is moved to the Released phase, where all of its data can be manually recovered.Changing the reclaim policy of a PersistentVolumeList the PersistentVolumes in your cluster:kubectl get pv The output is similar to this:NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s This list also includes the names of the claims that are bound to each volume for easier identification of dynamically provisioned volumes.Choose one of your PersistentVolumes and change its reclaim policy:kubectl patch pv <your-pv-name> -p '{""spec"":{""persistentVolumeReclaimPolicy"":""Retain""}}' where <your-pv-name> is the name of your chosen PersistentVolume.Note:On Windows, you must double quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example:kubectl patch pv <your-pv-name> -p ""{\""spec\"":{\""persistentVolumeReclaimPolicy\"":\""Retain\""}}"" Verify that your chosen PersistentVolume has the right policy:kubectl get pv The output is similar to this:NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s In the preceding output, you can see that the volume bound to claim default/claim3 has reclaim policy Retain. 
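The bash and Windows forms of the patch argument differ only in quoting. A small sketch (illustrative only, not from the Kubernetes docs) that builds the JSON patch body and both quoted forms:

```python
# Build the merge-patch body passed to `kubectl patch pv <your-pv-name> -p ...`
# and print the quoting needed on bash versus Windows shells.
import json

patch = {"spec": {"persistentVolumeReclaimPolicy": "Retain"}}
compact = json.dumps(patch, separators=(",", ":"))

bash_arg = "'" + compact + "'"                       # single-quoted for bash
windows_arg = '"' + compact.replace('"', '\\"') + '"'  # escaped double quotes for Windows

print(bash_arg)
print(windows_arg)
```

Both arguments decode back to the same patch object; only the shell-level escaping changes.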
It will not be automatically deleted when a user deletes claim default/claim3.What's nextLearn more about PersistentVolumes.Learn more about PersistentVolumeClaims.ReferencesPersistentVolumePay attention to the .spec.persistentVolumeReclaimPolicy field of PersistentVolume.PersistentVolumeClaim","Change the Reclaim Policy of a PersistentVolumeThis page shows how to change the reclaim policy of a Kubernetes PersistentVolume.Before you beginYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubern",Change the Reclaim Policy of a PersistentVolume,tasks,manage-clusters,433,https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/,k8s_00001_summary +"Use a User Namespace With a PodFEATURE STATE: Kubernetes v1.33 [beta] (enabled by default: true)This page shows how to configure a user namespace for pods. This allows you to isolate the user running inside the container from the one in the host.A process running as root in a container can run as a different (non-root) user in the host; in other words, the process has full privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace.You can use this feature to reduce the damage a compromised container can do to the host or other pods on the same node. There are several security vulnerabilities rated either HIGH or CRITICAL that were not exploitable when user namespaces are active. It is expected that user namespaces will mitigate some future vulnerabilities too.Without using a user namespace, a container running as root, in the case of a container breakout, has root privileges on the node. 
And if any capabilities were granted to the container, those capabilities are valid on the host too. None of this is true when user namespaces are used.Before you beginYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:iximiuz LabsKillercodaKodeKloudPlay with KubernetesYour Kubernetes server must be at or later than version v1.25.To check the version, enter kubectl version.🛇 This item links to a third party project or product that is not part of Kubernetes itself. More informationThe node OS needs to be LinuxYou need to exec commands in the hostYou need to be able to exec into podsYou need to enable the UserNamespacesSupport feature gateNote:The feature gate to enable user namespaces was previously named UserNamespacesStatelessPodsSupport, when only stateless pods were supported. Only Kubernetes v1.25 through to v1.27 recognise UserNamespacesStatelessPodsSupport.The cluster that you're using must include at least one node that meets the requirements for using user namespaces with Pods.If you have a mixture of nodes and only some of the nodes provide user namespace support for Pods, you also need to ensure that the user namespace Pods are scheduled to suitable nodes.Run a Pod that uses a user namespaceA user namespace for a pod is enabled by setting the hostUsers field of .spec to false. 
For example:pods/user-namespaces-stateless.yaml apiVersion: v1 kind: Pod metadata: name: userns spec: hostUsers: false containers: - name: shell command: [""sleep"", ""infinity""] image: debian Create the pod on your cluster:kubectl apply -f https://k8s.io/examples/pods/user-namespaces-stateless.yaml Exec into the pod and run readlink /proc/self/ns/user:kubectl exec -ti userns -- bash Run this command:readlink /proc/self/ns/user The output is similar to:user:[4026531837] Also run:cat /proc/self/uid_map The output is similar to:0 833617920 65536 Then, open a shell in the host and run the same commands.The readlink command shows the user namespace the process is running in. It should be different when it is run on the host and inside the container.The last number of the uid_map file inside the container must be 65536, on the host it must be a bigger number.If you are running the kubelet inside a user namespace, you need to compare the output from running the command in the pod to the output of running in the host:readlink /proc/$pid/ns/user replacing $pid with the kubelet PID.Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details.You should read the content guide before proposing a change that adds an extra third-party link.","Use a User Namespace With a PodFEATURE STATE: Kubernetes v1.33 [beta] (enabled by default: true)This page shows how to configure a user namespace for pods. 
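The uid_map check described above (inside-ID 0, mapping length 65536 inside the container, a much larger length on the host) can be sketched as a small parser. This is an illustrative helper, not part of any Kubernetes tooling:

```python
# Hypothetical check: parse a /proc/<pid>/uid_map line such as
# "0 833617920 65536" and verify the mapping shape that signals a pod
# running inside a user namespace.
def is_userns_mapping(uid_map_line: str, expected_length: int = 65536) -> bool:
    inside_id, outside_id, length = (int(field) for field in uid_map_line.split())
    # Inside the container the map starts at UID 0 and covers 65536 IDs;
    # on the host the length field is much larger (e.g. 4294967295).
    return inside_id == 0 and length == expected_length

print(is_userns_mapping("0 833617920 65536"))  # True for the pod's mapping
```

Running the same check against the host's uid_map (typically `0 0 4294967295`) returns False, which matches the comparison the task asks you to make.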
This allows you to isolate the user running inside the container from the one in the host.A process running as root in a container can run as a different (non-root) user in the host; in other words, the process has full privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace.You can use th",Use a User Namespace With a Pod,tasks,configure-pods-containers,625,https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/,k8s_00002_summary +"kubeadm initThis command initializes a Kubernetes control plane node.SynopsisRun this command in order to set up the Kubernetes control planeThe ""init"" command executes the following phases:preflight Run pre-flight checks certs Certificate generation /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components /apiserver Generate the certificate for serving the Kubernetes API /apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet /front-proxy-ca Generate the self-signed CA to provision identities for front proxy /front-proxy-client Generate the certificate for the front proxy client /etcd-ca Generate the self-signed CA to provision identities for etcd /etcd-server Generate the certificate for serving etcd /etcd-peer Generate the certificate for etcd nodes to communicate with each other /etcd-healthcheck-client Generate the certificate for liveness probes to healthcheck etcd /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd /sa Generate a private key for signing service account tokens along with its public key kubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file /admin Generate a kubeconfig file for the admin to use and for kubeadm itself /super-admin Generate a kubeconfig file for the super-admin /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes 
/controller-manager Generate a kubeconfig file for the controller manager to use /scheduler Generate a kubeconfig file for the scheduler to use etcd Generate static Pod manifest file for local etcd /local Generate the static Pod manifest file for a local, single-node local etcd instance control-plane Generate all static Pod manifest files necessary to establish the control plane /apiserver Generates the kube-apiserver static Pod manifest /controller-manager Generates the kube-controller-manager static Pod manifest /scheduler Generates the kube-scheduler static Pod manifest kubelet-start Write kubelet settings and (re)start the kubelet wait-control-plane Wait for the control plane to start upload-config Upload the kubeadm and kubelet configuration to a ConfigMap /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap /kubelet Upload the kubelet component config to a ConfigMap upload-certs Upload certificates to kubeadm-certs mark-control-plane Mark a node as a control-plane bootstrap-token Generates bootstrap tokens used to join a node to a cluster kubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap /enable-client-cert-rotation Enable kubelet client certificate rotation addon Install required addons for passing conformance tests /coredns Install the CoreDNS addon to a Kubernetes cluster /kube-proxy Install the kube-proxy addon to a Kubernetes cluster show-join-command Show the join command for control-plane and worker node kubeadm init [flags] Options--apiserver-advertise-address stringThe IP address the API Server will advertise it's listening on. If not set the default network interface will be used.--apiserver-bind-port int32 Default: 6443Port for the API Server to bind to.--apiserver-cert-extra-sans stringsOptional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. 
Can be both IP addresses and DNS names.--cert-dir string Default: ""/etc/kubernetes/pki""The path where to save and store the certificates.--certificate-key stringKey used to encrypt the control-plane certificates in the kubeadm-certs Secret. The certificate key is a hex encoded string that is an AES key of size 32 bytes.--config stringPath to a kubeadm configuration file.--control-plane-endpoint stringSpecify a stable IP address or DNS name for the control plane.--cri-socket stringPath to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.--dry-runDon't apply any changes; just output what would be done.--feature-gates stringA set of key=value pairs that describe feature gates for various features. Options are:ControlPlaneKubeletLocalMode=true|false (BETA - default=true)NodeLocalCRISocket=true|false (BETA - default=true)PublicKeysECDSA=true|false (DEPRECATED - default=false)RootlessControlPlane=true|false (ALPHA - default=false)WaitForAllControlPlaneComponents=true|false (default=true)-h, --helphelp for init--ignore-preflight-errors stringsA list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.--image-repository string Default: ""registry.k8s.io""Choose a container registry to pull control plane images from--kubernetes-version string Default: ""stable-1""Choose a specific Kubernetes version for the control plane.--node-name stringSpecify the node name.--patches stringPath to a directory that contains files named ""target[suffix][+patchtype].extension"". For example, ""kube-apiserver0+merge.yaml"" or just ""etcd.json"". ""target"" can be one of ""kube-apiserver"", ""kube-controller-manager"", ""kube-scheduler"", ""etcd"", ""kubeletconfiguration"", ""corednsdeployment"". 
""patchtype"" can be one of ""strategic"", ""merge"" or ""json"" and they match the patch formats supported by kubectl. The default ""patchtype"" is ""strategic"". ""extension"" must be either ""json"" or ""yaml"". ""suffix"" is an optional string that can be used to determine which patches are applied first alpha-numerically.--pod-network-cidr stringSpecify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.--service-cidr string Default: ""10.96.0.0/12""Use alternative range of IP address for service VIPs.--service-dns-domain string Default: ""cluster.local""Use alternative domain for services, e.g. ""myorg.internal"".--skip-certificate-key-printDon't print the key used to encrypt the control-plane certificates.--skip-phases stringsList of phases to be skipped--skip-token-printSkip printing of the default bootstrap token generated by 'kubeadm init'.--token stringThe token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef--token-ttl duration Default: 24h0m0sThe duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire--upload-certsUpload control-plane certificates to the kubeadm-certs Secret.Options inherited from parent commands--rootfs stringThe path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.Init workflowkubeadm init bootstraps a Kubernetes control plane node by executing the following steps:Runs a series of pre-flight checks to validate the system state before making changes. Some checks only trigger warnings; others are considered errors and will exit kubeadm until the problem is corrected or the user specifies --ignore-preflight-errors=<list-of-errors>.Generates a self-signed CA to set up identities for each component in the cluster. 
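The --token flag documents its format as [a-z0-9]{6}.[a-z0-9]{16}. A minimal sketch (illustrative, not kubeadm's own validation) of checking a candidate token against that pattern:

```python
# Validate a kubeadm bootstrap token against the documented format
# [a-z0-9]{6}.[a-z0-9]{16}, e.g. abcdef.0123456789abcdef
import re

TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

def is_valid_bootstrap_token(token: str) -> bool:
    return TOKEN_RE.fullmatch(token) is not None

print(is_valid_bootstrap_token("abcdef.0123456789abcdef"))  # True
print(is_valid_bootstrap_token("ABCDEF.0123456789abcdef"))  # False: uppercase not allowed
```

Note the dot in the documented pattern is a literal separator, so it is escaped in the regex.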
The user can provide their own CA cert and/or key by dropping it in the cert directory configured via --cert-dir (/etc/kubernetes/pki by default). The API server certs will have additional SAN entries for any --apiserver-cert-extra-sans arguments, lowercased if necessary.Writes kubeconfig files in /etc/kubernetes/ for the kubelet, the controller-manager, and the scheduler to connect to the API server, each with its own identity. Also additional kubeconfig files are written, for kubeadm as administrative entity (admin.conf) and for a super admin user that can bypass RBAC (super-admin.conf).Generates static Pod manifests for the API server, controller-manager and scheduler. In case an external etcd is not provided, an additional static Pod manifest is generated for etcd.Static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory for Pods to create on startup.Once control plane Pods are up and running, the kubeadm init sequence can continue.Apply labels and taints to the control plane node so that no additional workloads will run there.Generates the token that additional nodes can use to register themselves with a control plane in the future. Optionally, the user can provide a token via --token, as described in the kubeadm token documents.Makes all the necessary configurations for allowing node joining with the Bootstrap Tokens and TLS Bootstrap mechanism:Write a ConfigMap for making available all the information required for joining, and set up related RBAC access rules.Let Bootstrap Tokens access the CSR signing API.Configure auto-approval for new CSR requests.See kubeadm join for additional information.Installs a DNS server (CoreDNS) and the kube-proxy addon components via the API server. In Kubernetes version 1.11 and later CoreDNS is the default DNS server. 
Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed.Warning:kube-dns usage with kubeadm is deprecated as of v1.18 and is removed in v1.21.Using init phases with kubeadmkubeadm allows you to create a control plane node in phases using the kubeadm init phase command.To view the ordered list of phases and sub-phases you can call kubeadm init --help. The list will be located at the top of the help screen and each phase will have a description next to it. Note that by calling kubeadm init all of the phases and sub-phases will be executed in this exact order.Some phases have unique flags, so if you want to have a look at the list of available options add --help, for example:sudo kubeadm init phase control-plane controller-manager --help You can also use --help to see the list of sub-phases for a certain parent phase:sudo kubeadm init phase control-plane --help kubeadm init also exposes a flag called --skip-phases that can be used to skip certain phases. The flag accepts a list of phase names and the names can be taken from the above ordered list.An example:sudo kubeadm init phase control-plane all --config=configfile.yaml sudo kubeadm init phase etcd local --config=configfile.yaml # you can now modify the control plane and etcd manifest files sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml What this example would do is write the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. This allows you to modify the files and then skip these phases using --skip-phases. 
By calling the last command you will create a control plane node with the custom manifest files.FEATURE STATE: Kubernetes v1.22 [beta]Alternatively, you can use the skipPhases field under InitConfiguration.Using kubeadm init with a configuration fileCaution:The configuration file is still considered beta and may change in future versions.It's possible to configure kubeadm init with a configuration file instead of command line flags, and some more advanced features may only be available as configuration file options. This file is passed using the --config flag and it must contain a ClusterConfiguration structure and optionally more structures separated by ---\n. Mixing --config with others flags may not be allowed in some cases.The default configuration can be printed out using the kubeadm config print command.If your configuration is not using the latest version it is recommended that you migrate using the kubeadm config migrate command.For more information on the fields and usage of the configuration you can navigate to our API reference page.Using kubeadm init with feature gateskubeadm supports a set of feature gates that are unique to kubeadm and can only be applied during cluster creation with kubeadm init. These features can control the behavior of the cluster. Feature gates are removed after a feature graduates to GA.To pass a feature gate you can either use the --feature-gates flag for kubeadm init, or you can add items into the featureGates field when you pass a configuration file using --config.Passing feature gates for core Kubernetes components directly to kubeadm is not supported. 
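The --feature-gates flag described later in this reference takes comma-separated key=value pairs such as "NodeLocalCRISocket=true". A sketch of parsing that syntax (not kubeadm's implementation; the function name is made up):

```python
# Parse a --feature-gates value such as
# "NodeLocalCRISocket=true,RootlessControlPlane=false" into a dict of bools.
def parse_feature_gates(value: str) -> dict:
    gates = {}
    for pair in filter(None, value.split(",")):
        name, _, raw = pair.partition("=")
        if raw not in ("true", "false"):
            raise ValueError(f"invalid value for feature gate {name!r}: {raw!r}")
        gates[name.strip()] = raw == "true"
    return gates

print(parse_feature_gates("NodeLocalCRISocket=true,RootlessControlPlane=false"))
```

The same key=value list also maps naturally onto the featureGates field of a kubeadm configuration file.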
Instead, it is possible to pass them by Customizing components with the kubeadm API.List of feature gates: ControlPlaneKubeletLocalMode (default: true; Alpha: 1.31; Beta: 1.33), NodeLocalCRISocket (default: true; Alpha: 1.32; Beta: 1.34), WaitForAllControlPlaneComponents (default: true; Alpha: 1.30; Beta: 1.33; GA: 1.34).Note:Once a feature gate goes GA its value becomes locked to true by default.Feature gate descriptions:ControlPlaneKubeletLocalModeWith this feature gate enabled, when joining a new control plane node, kubeadm will configure the kubelet to connect to the local kube-apiserver. This ensures that there will not be a violation of the version skew policy during rolling upgrades.NodeLocalCRISocketWith this feature gate enabled, kubeadm will read/write the CRI socket for each node from/to the file /var/lib/kubelet/instance-config.yaml instead of reading/writing it from/to the annotation kubeadm.alpha.kubernetes.io/cri-socket on the Node object. The new file is applied as an instance configuration patch, before any other user managed patches are applied when the --patches flag is used. It contains a single field containerRuntimeEndpoint from the KubeletConfiguration file format. If the feature gate is enabled during upgrade, but the file /var/lib/kubelet/instance-config.yaml does not exist yet, kubeadm will attempt to read the CRI socket value from the file /var/lib/kubelet/kubeadm-flags.env.WaitForAllControlPlaneComponentsWith this feature gate enabled, kubeadm will wait for all control plane components (kube-apiserver, kube-controller-manager, kube-scheduler) on a control plane node to report status 200 on their /livez or /healthz endpoints. 
These checks are performed on https://ADDRESS:PORT/ENDPOINT.PORT is taken from --secure-port of a component.ADDRESS is --advertise-address for kube-apiserver and --bind-address for the kube-controller-manager and kube-scheduler.ENDPOINT is only /healthz for kube-controller-manager until it supports /livez as well.If you specify custom ADDRESS or PORT in the kubeadm configuration they will be respected. Without the feature gate enabled, kubeadm will only wait for the kube-apiserver on a control plane node to become ready. The wait process starts right after the kubelet on the host is started by kubeadm. You are advised to enable this feature gate in case you wish to observe a ready state from all control plane components during the kubeadm init or kubeadm join command execution.List of deprecated feature gates: PublicKeysECDSA (default: false; Alpha: 1.19; Deprecated: 1.31), RootlessControlPlane (default: false; Alpha: 1.22; Deprecated: 1.31).Feature gate descriptions:PublicKeysECDSACan be used to create a cluster that uses ECDSA certificates instead of the default RSA algorithm. Renewal of existing ECDSA certificates is also supported using kubeadm certs renew, but you cannot switch between the RSA and ECDSA algorithms on the fly or during upgrades. Kubernetes versions before v1.31 had a bug where keys in generated kubeconfig files were set to use RSA, even when you had enabled the PublicKeysECDSA feature gate. This feature gate is deprecated in favor of the encryptionAlgorithm functionality available in kubeadm v1beta4.RootlessControlPlaneSetting this flag configures the kubeadm deployed control plane component static Pod containers for kube-apiserver, kube-controller-manager, kube-scheduler and etcd to run as non-root users. If the flag is not set, those components run as root. 
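The https://ADDRESS:PORT/ENDPOINT rule above, including the kube-controller-manager's /healthz exception, can be sketched as a small helper (illustrative only, not kubeadm code):

```python
# Build the health-check URL kubeadm probes for a control plane component,
# following the ADDRESS:PORT/ENDPOINT scheme described in the docs.
def health_url(component: str, address: str, port: int) -> str:
    # kube-controller-manager only serves /healthz until it supports /livez.
    endpoint = "healthz" if component == "kube-controller-manager" else "livez"
    return f"https://{address}:{port}/{endpoint}"

print(health_url("kube-apiserver", "192.168.0.10", 6443))
```

A status 200 response from each such URL is what WaitForAllControlPlaneComponents waits for.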
You can change the value of this feature gate before you upgrade to a newer version of Kubernetes.List of removed feature gates: EtcdLearnerMode (Alpha: 1.27; Beta: 1.29; GA: 1.32; Removed: 1.33), IPv6DualStack (Alpha: 1.16; Beta: 1.21; GA: 1.23; Removed: 1.24), UnversionedKubeletConfigMap (Alpha: 1.22; Beta: 1.23; GA: 1.25; Removed: 1.26), UpgradeAddonsBeforeControlPlane (Alpha: 1.28; Removed: 1.31).Feature gate descriptions:EtcdLearnerModeWhen joining a new control plane node, a new etcd member will be created as a learner and promoted to a voting member only after the etcd data are fully aligned.IPv6DualStackThis flag helps to configure components for dual-stack networking when the feature is in progress. For more details on Kubernetes dual-stack support see Dual-stack support with kubeadm.UnversionedKubeletConfigMapThis flag controls the name of the ConfigMap where kubeadm stores kubelet configuration data. With this flag not specified or set to true, the ConfigMap is named kubelet-config. If you set this flag to false, the name of the ConfigMap includes the major and minor version for Kubernetes (for example: kubelet-config-1.34). Kubeadm ensures that RBAC rules for reading and writing that ConfigMap are appropriate for the value you set. When kubeadm writes this ConfigMap (during kubeadm init or kubeadm upgrade apply), kubeadm respects the value of UnversionedKubeletConfigMap. When reading that ConfigMap (during kubeadm join, kubeadm reset, kubeadm upgrade...), kubeadm attempts to use the unversioned ConfigMap name first. If that does not succeed, kubeadm falls back to using the legacy (versioned) name for that ConfigMap.UpgradeAddonsBeforeControlPlaneThis feature gate has been removed. It was introduced in v1.28 as a deprecated feature and then removed in v1.31. 
For documentation on older versions, please switch to the corresponding website version.Adding kube-proxy parametersFor information about kube-proxy parameters in the kubeadm configuration see:kube-proxy referenceFor information about enabling IPVS mode with kubeadm see:IPVSPassing custom flags to control plane componentsFor information about passing flags to control plane components see:control-plane-flagsRunning kubeadm without an Internet connectionFor running kubeadm without an Internet connection you have to pre-pull the required control plane images.You can list and pull the images using the kubeadm config images sub-command:kubeadm config images list kubeadm config images pull You can pass --config to the above commands with a kubeadm configuration file to control the kubernetesVersion and imageRepository fields.All default registry.k8s.io images that kubeadm requires support multiple architectures.Using custom imagesBy default, kubeadm pulls images from registry.k8s.io. If the requested Kubernetes version is a CI label (such as ci/latest) gcr.io/k8s-staging-ci-images is used.You can override this behavior by using kubeadm with a configuration file. Allowed customizations are:To provide kubernetesVersion which affects the version of the images.To provide an alternative imageRepository to be used instead of registry.k8s.io.To provide a specific imageRepository and imageTag for etcd or CoreDNS.Image paths between the default registry.k8s.io and a custom repository specified using imageRepository may differ for backwards compatibility reasons. 
For example, one image might have a subpath at registry.k8s.io/subpath/image, but be defaulted to my.customrepository.io/image when using a custom repository.To ensure you push the images to your custom repository in paths that kubeadm can consume, you must:Pull images from the default paths at registry.k8s.io using kubeadm config images {list|pull}.Push images to the paths from kubeadm config images list --config=config.yaml, where config.yaml contains the custom imageRepository, and/or imageTag for etcd and CoreDNS.Pass the same config.yaml to kubeadm init.Custom sandbox (pause) imagesTo set a custom sandbox image, you need to configure your container runtime to use it. Consult the documentation for your container runtime to find out how to change this setting; for selected container runtimes, you can also find advice within the Container Runtimes topic.Uploading control plane certificates to the clusterBy adding the flag --upload-certs to kubeadm init you can temporarily upload the control plane certificates to a Secret in the cluster. Please note that this Secret will expire automatically after 2 hours. The certificates are encrypted using a 32-byte key that can be specified using --certificate-key. 
The same key can be used to download the certificates when additional control plane nodes are joining, by passing --control-plane and --certificate-key to kubeadm join.The following phase command can be used to re-upload the certificates after expiration:kubeadm init phase upload-certs --upload-certs --config=SOME_YAML_FILE Note:A predefined certificateKey can be provided in InitConfiguration when passing the configuration file with --config.If a predefined certificate key is not passed to kubeadm init and kubeadm init phase upload-certs a new key will be generated automatically.The following command can be used to generate a new key on demand:kubeadm certs certificate-key Certificate management with kubeadmFor detailed information on certificate management with kubeadm see Certificate Management with kubeadm. The document includes information about using external CA, custom certificates and certificate renewal.Managing the kubeadm drop-in file for the kubeletThe kubeadm package ships with a configuration file for running the kubelet by systemd. Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm DEB/RPM package.For further information, see Managing the kubeadm drop-in file for systemd.Use kubeadm with CRI runtimesBy default, kubeadm attempts to detect your container runtime. For more details on this detection, see the kubeadm CRI installation guide.Setting the node nameBy default, kubeadm assigns a node name based on a machine's host address. You can override this setting with the --node-name flag. The flag passes the appropriate --hostname-override value to the kubelet.Be aware that overriding the hostname can interfere with cloud providers.Automating kubeadmRather than copying the token you obtained from kubeadm init to each node, as in the basic kubeadm tutorial, you can parallelize the token distribution for easier automation. 
To implement this automation, you must know the IP address that the control plane node will have after it is started, or use a DNS name or an address of a load balancer.Generate a token. This token must have the form <6 character string>.<16 character string>. More formally, it must match the regex: [a-z0-9]{6}\.[a-z0-9]{16}.kubeadm can generate a token for you:kubeadm token generate Start both the control plane node and the worker nodes concurrently with this token. As they come up they should find each other and form the cluster. The same --token argument can be used on both kubeadm init and kubeadm join.Something similar can be done for --certificate-key when joining additional control plane nodes. The key can be generated using:kubeadm certs certificate-key Once the cluster is up, you can use the /etc/kubernetes/admin.conf file from a control plane node to talk to the cluster with administrator credentials, or see Generating kubeconfig files for additional users.Note that this style of bootstrap has some relaxed security guarantees because it does not allow the root CA hash to be validated with --discovery-token-ca-cert-hash (since it's not generated when the nodes are provisioned). 
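The token format above can be checked locally with grep, using the regex from the text. A small sketch (the token value is invented for illustration, not one issued by kubeadm):

```shell
# Validate a bootstrap-token-shaped string against the documented regex.
# The token here is a made-up example, not a real kubeadm token.
TOKEN=abcdef.0123456789abcdef
echo $TOKEN | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo valid || echo invalid
```

A real token would come from kubeadm token generate, as shown above.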
For details, see the kubeadm join.What's nextkubeadm init phase to understand more about kubeadm init phaseskubeadm join to bootstrap a Kubernetes worker node and join it to the clusterkubeadm upgrade to upgrade a Kubernetes cluster to a newer versionkubeadm reset to revert any changes made to this host by kubeadm init or kubeadm join","kubeadm initThis command initializes a Kubernetes control plane node.SynopsisRun this command in order to set up the Kubernetes control planeThe ""init"" command executes the following phases:preflight Run pre-flight checks certs Certificate generation /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components /apiserver Generate the certificate for serving the Kubernetes API /apiserver-kubelet-client Generate the certificate for the API server to connect to",kubeadm init,reference,general-reference,3229,https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/,k8s_00003_summary +"Customizing components with the kubeadm APIThis page covers how to customize the components that kubeadm deploys. For control plane components you can use flags in the ClusterConfiguration structure or patches per-node. For the kubelet and kube-proxy you can use KubeletConfiguration and KubeProxyConfiguration, accordingly.All of these options are possible via the kubeadm configuration API. For more details on each field in the configuration you can navigate to our API reference pages.Note:Customizing the CoreDNS deployment of kubeadm is currently not supported. You must manually patch the kube-system/coredns ConfigMap and recreate the CoreDNS Pods after that. Alternatively, you can skip the default CoreDNS deployment and deploy your own variant. 
For more details on that see Using init phases with kubeadm.Note:To reconfigure a cluster that has already been created see Reconfiguring a kubeadm cluster.Customizing the control plane with flags in ClusterConfigurationThe kubeadm ClusterConfiguration object exposes a way for users to override the default flags passed to control plane components such as the APIServer, ControllerManager, Scheduler and Etcd. The components are defined using the following structures:apiServercontrollerManagerscheduleretcdThese structures contain a common extraArgs field, which consists of name/value pairs. To override a flag for a control plane component:Add the appropriate extraArgs to your configuration.Add flags to the extraArgs field.Run kubeadm init with --config .Note:You can generate a ClusterConfiguration object with default values by running kubeadm config print init-defaults and saving the output to a file of your choice.Note:The ClusterConfiguration object is currently global in kubeadm clusters. This means that any flags that you add will apply to all instances of the same component on different nodes. To apply individual configuration per component on different nodes you can use patches.Note:Duplicate flags (keys), or passing the same flag --foo multiple times, is currently not supported. 
To workaround that you must use patches.APIServer flagsFor details, see the reference documentation for kube-apiserver.Example usage:apiVersion: kubeadm.k8s.io/v1beta4 kind: ClusterConfiguration kubernetesVersion: v1.16.0 apiServer: extraArgs: - name: ""enable-admission-plugins"" value: ""AlwaysPullImages,DefaultStorageClass"" - name: ""audit-log-path"" value: ""/home/johndoe/audit.log"" ControllerManager flagsFor details, see the reference documentation for kube-controller-manager.Example usage:apiVersion: kubeadm.k8s.io/v1beta4 kind: ClusterConfiguration kubernetesVersion: v1.16.0 controllerManager: extraArgs: - name: ""cluster-signing-key-file"" value: ""/home/johndoe/keys/ca.key"" - name: ""deployment-controller-sync-period"" value: ""50"" Scheduler flagsFor details, see the reference documentation for kube-scheduler.Example usage:apiVersion: kubeadm.k8s.io/v1beta4 kind: ClusterConfiguration kubernetesVersion: v1.16.0 scheduler: extraArgs: - name: ""config"" value: ""/etc/kubernetes/scheduler-config.yaml"" extraVolumes: - name: schedulerconfig hostPath: /home/johndoe/schedconfig.yaml mountPath: /etc/kubernetes/scheduler-config.yaml readOnly: true pathType: ""File"" Etcd flagsFor details, see the etcd server documentation.Example usage:apiVersion: kubeadm.k8s.io/v1beta4 kind: ClusterConfiguration etcd: local: extraArgs: - name: ""election-timeout"" value: 1000 Customizing with patchesFEATURE STATE: Kubernetes v1.22 [beta]Kubeadm allows you to pass a directory with patch files to InitConfiguration and JoinConfiguration on individual nodes. 
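As a concrete sketch of such a patches directory (the path and file names are hypothetical), kubeadm expects files named target[suffix][+patchtype].extension, with suffixes ordering the patches alpha-numerically:

```shell
# Hypothetical layout of a kubeadm patches directory.
# The numeric suffixes (0, 1) determine alpha-numeric application order.
mkdir -p /tmp/kubeadm-patches
touch /tmp/kubeadm-patches/etcd.yaml
touch /tmp/kubeadm-patches/kube-apiserver0+merge.yaml
touch /tmp/kubeadm-patches/kube-apiserver1+json.json
ls /tmp/kubeadm-patches | sort
# etcd.yaml
# kube-apiserver0+merge.yaml
# kube-apiserver1+json.json
```

The directory would then be referenced from the patches.directory field of InitConfiguration or JoinConfiguration.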
These patches can be used as the last customization step before component configuration is written to disk.You can pass this file to kubeadm init with --config :apiVersion: kubeadm.k8s.io/v1beta4 kind: InitConfiguration patches: directory: /home/user/somedir Note:For kubeadm init you can pass a file containing both a ClusterConfiguration and InitConfiguration separated by ---.You can pass this file to kubeadm join with --config :apiVersion: kubeadm.k8s.io/v1beta4 kind: JoinConfiguration patches: directory: /home/user/somedir The directory must contain files named target[suffix][+patchtype].extension. For example, kube-apiserver0+merge.yaml or just etcd.json.target can be one of kube-apiserver, kube-controller-manager, kube-scheduler, etcd and kubeletconfiguration.suffix is an optional string that can be used to determine which patches are applied first alpha-numerically.patchtype can be one of strategic, merge or json and these must match the patching formats supported by kubectl. The default patchtype is strategic.extension must be either json or yaml.Note:If you are using kubeadm upgrade to upgrade your kubeadm nodes you must again provide the same patches, so that the customization is preserved after upgrade. To do that you can use the --patches flag, which must point to the same directory. kubeadm upgrade currently does not support a configuration API structure that can be used for the same purpose.Customizing the kubeletTo customize the kubelet you can add a KubeletConfiguration next to the ClusterConfiguration or InitConfiguration separated by --- within the same configuration file. 
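A combined file of this shape might look as follows (a sketch; the kubelet field shown is only an illustrative choice, and the version values are examples):

```yaml
# Illustrative sketch: ClusterConfiguration and KubeletConfiguration
# in one file, separated by ---.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false
```
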
This file can then be passed to kubeadm init and kubeadm will apply the same base KubeletConfiguration to all nodes in the cluster.For applying instance-specific configuration over the base KubeletConfiguration you can use the kubeletconfiguration patch target.Alternatively, you can use kubelet flags as overrides by passing them in the nodeRegistration.kubeletExtraArgs field supported by both InitConfiguration and JoinConfiguration. Some kubelet flags are deprecated, so check their status in the kubelet reference documentation before using them.For additional details see Configuring each kubelet in your cluster using kubeadmCustomizing kube-proxyTo customize kube-proxy you can pass a KubeProxyConfiguration next to your ClusterConfiguration or InitConfiguration to kubeadm init separated by ---.For more details you can navigate to our API reference pages.Note:kubeadm deploys kube-proxy as a DaemonSet, which means that the KubeProxyConfiguration would apply to all instances of kube-proxy in the cluster.","Customizing components with the kubeadm APIThis page covers how to customize the components that kubeadm deploys. For control plane components you can use flags in the ClusterConfiguration structure or patches per-node. For the kubelet and kube-proxy you can use KubeletConfiguration and KubeProxyConfiguration, accordingly.All of these options are possible via the kubeadm configuration API. For more details on each field in the configuration you can navigate to our API reference pages.Note:Custom",Customizing components with the kubeadm API,other,uncategorized,782,https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/,k8s_00004_summary +"Extending KubernetesDifferent ways to change the behavior of your Kubernetes cluster.Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.This guide describes the options for customizing a Kubernetes cluster. 
It is aimed at cluster operators who want to understand how to adapt their Kubernetes cluster to the needs of their work environment. Developers who are prospective Platform Developers or Kubernetes Project Contributors will also find it useful as an introduction to what extension points and patterns exist, and their trade-offs and limitations.Customization approaches can be broadly divided into configuration, which only involves changing command line arguments, local configuration files, or API resources; and extensions, which involve running additional programs, additional network services, or both. This document is primarily about extensions.ConfigurationConfiguration files and command arguments are documented in the Reference section of the online documentation, with a page for each binary:kube-apiserverkube-controller-managerkube-schedulerkubeletkube-proxyCommand arguments and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster operator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.Built-in policy APIs, such as ResourceQuota, NetworkPolicy and Role-based Access Control (RBAC), are built-in Kubernetes APIs that provide declaratively configured policy settings. APIs are typically usable even with hosted Kubernetes services and with managed Kubernetes installations. The built-in policy APIs follow the same conventions as other Kubernetes resources such as Pods. When you use a policy API that is stable, you benefit from a defined support policy like other Kubernetes APIs. For these reasons, policy APIs are recommended over configuration files and command arguments where suitable.ExtensionsExtensions are software components that extend and deeply integrate with Kubernetes. 
They adapt it to support new types and new kinds of hardware.Many cluster administrators use a hosted or distribution instance of Kubernetes. These clusters come with extensions pre-installed. As a result, most Kubernetes users will not need to install extensions and even fewer users will need to author new ones.Extension patternsKubernetes is designed to be automated by writing client programs. Any program that reads and/or writes to the Kubernetes API can provide useful automation. Automation can run on the cluster or off it. By following the guidance in this doc you can write highly available and robust automation. Automation generally works with any Kubernetes cluster, including hosted clusters and managed installations.There is a specific pattern for writing client programs that work well with Kubernetes called the controller pattern. Controllers typically read an object's .spec, possibly do things, and then update the object's .status.A controller is a client of the Kubernetes API. When Kubernetes is the client and calls out to a remote service, Kubernetes calls this a webhook. The remote service is called a webhook backend. As with custom controllers, webhooks do add a point of failure.Note:Outside of Kubernetes, the term “webhook” typically refers to a mechanism for asynchronous notifications, where the webhook call serves as a one-way notification to another system or component. In the Kubernetes ecosystem, even synchronous HTTP callouts are often described as “webhooks”.In the webhook model, Kubernetes makes a network request to a remote service. With the alternative binary Plugin model, Kubernetes executes a binary (program). 
Binary plugins are used by the kubelet (for example, CSI storage plugins and CNI network plugins), and by kubectl (see Extend kubectl with plugins).Extension pointsThis diagram shows the extension points in a Kubernetes cluster and the clients that access it.Kubernetes extension pointsKey to the figureUsers often interact with the Kubernetes API using kubectl. Plugins customise the behaviour of clients. There are generic extensions that can apply to different clients, as well as specific ways to extend kubectl.The API server handles all requests. Several types of extension points in the API server allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the API Access Extensions section.The API server serves various kinds of resources. Built-in resource kinds, such as pods, are defined by the Kubernetes project and can't be changed. Read API extensions to learn about extending the Kubernetes API.The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling, which are described in the Scheduling extensions section.Much of the behavior of Kubernetes is implemented by programs called controllers, which are clients of the API server. Controllers are often used in conjunction with custom resources. Read combining new APIs with automation and Changing built-in resources to learn more.The kubelet runs on servers (nodes), and helps pods appear like virtual servers with their own IPs on the cluster network. Network Plugins allow for different implementations of pod networking.You can use Device Plugins to integrate custom hardware or other special node-local facilities, and make these available to Pods running in your cluster. The kubelet includes support for working with device plugins.The kubelet also mounts and unmounts volumes for pods and their containers. 
You can use Storage Plugins to add support for new kinds of storage and other volume types.Extension point choice flowchartIf you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.Flowchart guide to select an extension approachClient extensionsPlugins for kubectl are separate binaries that add or replace the behavior of specific subcommands. The kubectl tool can also integrate with credential plugins. These extensions only affect an individual user's local environment, and so cannot enforce site-wide policies.If you want to extend the kubectl tool, read Extend kubectl with plugins.API extensionsCustom resource definitionsConsider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such as kubectl.For more about Custom Resources, see the Custom Resources concept guide.API aggregation layerYou can use Kubernetes' API Aggregation Layer to integrate the Kubernetes API with additional services such as for metrics.Combining new APIs with automationA combination of a custom resource API and a control loop is called the controllers pattern. If your controller takes the place of a human operator deploying infrastructure based on a desired state, then the controller may also be following the operator pattern. The Operator pattern is used to manage specific applications; usually, these are applications that maintain state and require care in how they are managed.You can also make your own custom APIs and control loops that manage other resources, such as storage, or to define policies (such as an access control restriction).Changing built-in resourcesWhen you extend the Kubernetes API by adding custom resources, the added resources always fall into a new API group. You cannot replace or change existing API groups. 
Adding an API does not directly let you affect the behavior of existing APIs (such as Pods), whereas API Access Extensions do.API access extensionsWhen a request reaches the Kubernetes API Server, it is first authenticated, then authorized, and is then subject to various types of admission control (some requests are in fact not authenticated, and get special treatment). See Controlling Access to the Kubernetes API for more on this flow.Each of the steps in the Kubernetes authentication / authorization flow offers extension points.AuthenticationAuthentication maps headers or certificates in all requests to a username for the client making the request.Kubernetes has several built-in authentication methods that it supports. It can also sit behind an authenticating proxy, and it can send a token from an Authorization: header to a remote service for verification (an authentication webhook) if those don't meet your needs.AuthorizationAuthorization determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields.If the built-in authorization options don't meet your needs, an authorization webhook allows calling out to custom code that makes an authorization decision.Dynamic admission controlAfter a request is authorized, if it is a write operation, it also goes through Admission Control steps. In addition to the built-in steps, there are several extensions:The Image Policy webhook restricts what images can be run in containers.To make arbitrary admission control decisions, a general Admission webhook can be used. Admission webhooks can reject creations or updates. 
Some admission webhooks modify the incoming request data before it is handled further by Kubernetes.Infrastructure extensionsDevice pluginsDevice plugins allow a node to discover new Node resources (in addition to the builtin ones like cpu and memory) via a Device Plugin.Storage pluginsContainer Storage Interface (CSI) plugins provide a way to extend Kubernetes with support for new kinds of volumes. The volumes can be backed by durable external storage, or provide ephemeral storage, or they might offer a read-only interface to information using a filesystem paradigm.Kubernetes also includes support for FlexVolume plugins, which are deprecated since Kubernetes v1.23 (in favour of CSI).FlexVolume plugins allow users to mount volume types that aren't natively supported by Kubernetes. When you run a Pod that relies on FlexVolume storage, the kubelet calls a binary plugin to mount the volume. The archived FlexVolume design proposal has more detail on this approach.The Kubernetes Volume Plugin FAQ for Storage Vendors includes general information on storage plugins.Network pluginsYour Kubernetes cluster needs a network plugin in order to have a working Pod network and to support other aspects of the Kubernetes network model.Network Plugins allow Kubernetes to work with different networking topologies and technologies.Kubelet image credential provider pluginsFEATURE STATE: Kubernetes v1.26 [stable]Kubelet image credential providers are plugins for the kubelet to dynamically retrieve image registry credentials. The credentials are then used when pulling images from container image registries that match the configuration.The plugins can communicate with external services or use local files to obtain credentials. 
This way, the kubelet does not need to have static credentials for each registry, and can support various authentication methods and protocols.For plugin configuration details, see Configure a kubelet image credential provider.Scheduling extensionsThe scheduler is a special type of controller that watches pods, and assigns pods to nodes. The default scheduler can be replaced entirely, while continuing to use other Kubernetes components, or multiple schedulers can run at the same time.This is a significant undertaking, and almost all Kubernetes users find they do not need to modify the scheduler.You can control which scheduling plugins are active, or associate sets of plugins with different named scheduler profiles. You can also write your own plugin that integrates with one or more of the kube-scheduler's extension points.Finally, the built in kube-scheduler component supports a webhook that permits a remote HTTP backend (scheduler extension) to filter and / or prioritize the nodes that the kube-scheduler chooses for a pod.Note:You can only affect node filtering and node prioritization with a scheduler extender webhook; other extension points are not available through the webhook integration.What's nextLearn more about infrastructure extensionsDevice PluginsNetwork PluginsCSI storage pluginsLearn about kubectl pluginsLearn more about Custom ResourcesLearn more about Extension API ServersLearn about Dynamic admission controlLearn about the Operator pattern","Extending KubernetesDifferent ways to change the behavior of your Kubernetes cluster.Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.This guide describes the options for customizing a Kubernetes cluster. It is aimed at cluster operators who want to understand how to adapt their Kubernetes cluster to the needs of their work environment. 
Developers who are prospective Platform Developers or Kubernetes Pr",Extending Kubernetes,concepts,overview-architecture,1836,https://kubernetes.io/docs/concepts/extend-kubernetes/,k8s_00005_summary +"kubectl api-resourcesSynopsisPrint the supported API resources on the server.kubectl api-resources [flags] Examples # Print the supported API resources kubectl api-resources # Print the supported API resources with more information kubectl api-resources -o wide # Print the supported API resources sorted by a column kubectl api-resources --sort-by=name # Print the supported namespaced resources kubectl api-resources --namespaced=true # Print the supported non-namespaced resources kubectl api-resources --namespaced=false # Print the supported API resources with a specific APIGroup kubectl api-resources --api-group=rbac.authorization.k8s.io Options--api-group stringLimit to resources in the specified API group.--cachedUse the cached list of resources if available.--categories stringsLimit to resources that belong to the specified categories.-h, --helphelp for api-resources--namespaced Default: trueIf false, non-namespaced resources will be returned, otherwise returning namespaced resources by default.--no-headersWhen using the default or custom-column output format, don't print headers (default print headers).-o, --output stringOutput format. One of: (json, yaml, name, wide).--show-managed-fieldsIf true, keep the managedFields when printing objects in JSON or YAML format.--sort-by stringIf non-empty, sort list of resources using specified field. The field can be either 'name' or 'kind'.--verbs stringsLimit to resources that support the specified verbs.Parent Options Inherited--as stringUsername to impersonate for the operation. 
User could be a regular user or a service account in a namespace.--as-group stringsGroup to impersonate for the operation, this flag can be repeated to specify multiple groups.--as-uid stringUID to impersonate for the operation.--cache-dir string Default: ""$HOME/.kube/cache""Default cache directory--certificate-authority stringPath to a cert file for the certificate authority--client-certificate stringPath to a client certificate file for TLS--client-key stringPath to a client key file for TLS--cluster stringThe name of the kubeconfig cluster to use--context stringThe name of the kubeconfig context to use--disable-compressionIf true, opt-out of response compression for all requests to the server--insecure-skip-tls-verifyIf true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure--kubeconfig stringPath to the kubeconfig file to use for CLI requests.--kuberc stringPath to the kuberc file to use for preferences. This can be disabled by exporting KUBECTL_KUBERC=false feature gate or turning off the feature KUBERC=off.--match-server-versionRequire server version to match client version-n, --namespace stringIf present, the namespace scope for this CLI request--password stringPassword for basic authentication to the API server--profile string Default: ""none""Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)--profile-output string Default: ""profile.pprof""Name of the file to write the profile to--request-timeout string Default: ""0""The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.-s, --server stringThe address and port of the Kubernetes API server--storage-driver-buffer-duration duration Default: 1m0sWrites in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction--storage-driver-db string Default: ""cadvisor""database name--storage-driver-host string Default: ""localhost:8086""database host:port--storage-driver-password string Default: ""root""database password--storage-driver-secureuse secure connection with database--storage-driver-table string Default: ""stats""table name--storage-driver-user string Default: ""root""database username--tls-server-name stringServer name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used--token stringBearer token for authentication to the API server--user stringThe name of the kubeconfig user to use--username stringUsername for basic authentication to the API server--version version[=true]--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version--warnings-as-errorsTreat warnings received from the server as errors and exit with a non-zero exit codeSee Alsokubectl - kubectl controls the Kubernetes cluster managerThis page is automatically generated.If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. 
The fix may need to happen elsewhere in the Kubernetes project.",kubectl api-resourcesSynopsisPrint the supported API resources on the server.kubectl api-resources [flags] Examples # Print the supported API resources kubectl api-resources # Print the supported API resources with more information kubectl api-resources -o wide # Print the supported API resources sorted by a column kubectl api-resources --sort-by=name # Print the supported namespaced resources kubectl api-resources --namespaced=true # Print the supported non-namespaced resources kubectl api-reso,kubectl api-resources,reference,api-reference,581,https://kubernetes.io/docs/reference/kubectl/generated/kubectl_api-resources/,k8s_00006_summary +"HorizontalPodAutoscaler WalkthroughA HorizontalPodAutoscaler (HPA for short) automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.This document walks you through an example of enabling HorizontalPodAutoscaler to automatically manage scale for an example web app. This example workload is Apache httpd running some PHP code.Before you beginYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. 
If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:iximiuz LabsKillercodaKodeKloudPlay with KubernetesYour Kubernetes server must be at or later than version 1.23.To check the version, enter kubectl version.If you're running an older release of Kubernetes, refer to the version of the documentation for that release (see available documentation versions).To follow this walkthrough, you also need to use a cluster that has a Metrics Server deployed and configured. The Kubernetes Metrics Server collects resource metrics from the kubelets in your cluster, and exposes those metrics through the Kubernetes API, using an APIService to add new kinds of resource that represent metric readings.To learn how to deploy the Metrics Server, see the metrics-server documentation.If you are running Minikube, run the following command to enable metrics-server:minikube addons enable metrics-server Run and expose php-apache serverTo demonstrate a HorizontalPodAutoscaler, you will first start a Deployment that runs a container using the hpa-example image, and expose it as a Service using the following manifest:application/php-apache.yaml apiVersion: apps/v1 kind: Deployment metadata: name: php-apache spec: selector: matchLabels: run: php-apache template: metadata: labels: run: php-apache spec: containers: - name: php-apache image: registry.k8s.io/hpa-example ports: - containerPort: 80 resources: limits: cpu: 500m requests: cpu: 200m --- apiVersion: v1 kind: Service metadata: name: php-apache labels: run: php-apache spec: ports: - port: 80 selector: run: php-apache To do so, run the following command:kubectl apply -f https://k8s.io/examples/application/php-apache.yaml deployment.apps/php-apache created service/php-apache created Create the HorizontalPodAutoscalerNow that the server is running, create the autoscaler using kubectl. 
The kubectl autoscale subcommand, part of kubectl, helps you do this.You will shortly run a command that creates a HorizontalPodAutoscaler that maintains between 1 and 10 replicas of the Pods controlled by the php-apache Deployment that you created in the first step of these instructions.Roughly speaking, the HPA controller will increase and decrease the number of replicas (by updating the Deployment) to maintain an average CPU utilization across all Pods of 50%. The Deployment then updates the ReplicaSet - this is part of how all Deployments work in Kubernetes - and then the ReplicaSet either adds or removes Pods based on the change to its .spec.Since each Pod requests 200 milli-cores in the manifest you applied, this means an average CPU usage of 100 milli-cores. See Algorithm details for more details on the algorithm.Create the HorizontalPodAutoscaler:kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 horizontalpodautoscaler.autoscaling/php-apache autoscaled You can check the current status of the newly made HorizontalPodAutoscaler by running:# You can use ""hpa"" or ""horizontalpodautoscaler""; either name works OK. kubectl get hpa The output is similar to:NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s (If you see other HorizontalPodAutoscalers with different names, that means they already existed, and that isn't usually a problem.)Note that the current CPU consumption is 0% because there are no clients sending requests to the server (the TARGET column shows the average across all the Pods controlled by the corresponding deployment).Increase the loadNext, see how the autoscaler reacts to increased load. To do this, you'll start a different Pod to act as a client.
The container within the client Pod runs in an infinite loop, sending queries to the php-apache service.# Run this in a separate terminal # so that the load generation continues and you can carry on with the rest of the steps kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c ""while sleep 0.01; do wget -q -O- http://php-apache; done"" Now run:# type Ctrl+C to end the watch when you're ready kubectl get hpa php-apache --watch Within a minute or so, you should see the higher CPU load; for example:NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m and then, more replicas. For example:NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 305% / 50% 1 10 7 3m Here, CPU consumption has increased to 305% of the request. As a result, the Deployment was resized to 7 replicas:kubectl get deployment php-apache You should see the replica count matching the figure from the HorizontalPodAutoscalerNAME READY UP-TO-DATE AVAILABLE AGE php-apache 7/7 7 7 19m Note:It may take a few minutes to stabilize the number of replicas. 
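The jump from 1 to 7 replicas follows the HPA's core proportion rule (see Algorithm details): desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to the configured minimum and maximum. A minimal sketch of that rule in plain Python (illustrative only, not the controller's actual code):

```python
import math

def desired_replicas(current_replicas, current_value, target_value,
                     min_replicas=1, max_replicas=10):
    """Core HPA proportion rule, clamped to the configured bounds."""
    proposed = math.ceil(current_replicas * current_value / target_value)
    return max(min_replicas, min(max_replicas, proposed))

# Walkthrough numbers: 305% observed vs. a 50% target, starting from 1 replica.
print(desired_replicas(1, 305, 50))  # -> 7, matching the example output
```

With the load stopped (0% observed), the same rule proposes 0 replicas, which the minimum of 1 clamps back up, matching the scale-down seen later in the walkthrough.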
Since the amount of load is not controlled in any way, the final number of replicas may differ from this example.Stop generating loadTo finish the example, stop sending the load.In the terminal where you created the Pod that runs a busybox image, terminate the load generation by typing Ctrl+C.Then verify the result state (after a minute or so):# type Ctrl+C to end the watch when you're ready kubectl get hpa php-apache --watch The output is similar to:NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m and the Deployment also shows that it has scaled down:kubectl get deployment php-apache NAME READY UP-TO-DATE AVAILABLE AGE php-apache 1/1 1 1 27m Once CPU utilization dropped to 0, the HPA automatically scaled the number of replicas back down to 1.Autoscaling the replicas may take a few minutes.Autoscaling on multiple metrics and custom metricsYou can introduce additional metrics to use when autoscaling the php-apache Deployment by making use of the autoscaling/v2 API version.First, get the YAML of your HorizontalPodAutoscaler in the autoscaling/v2 form:kubectl get hpa php-apache -o yaml > /tmp/hpa-v2.yaml Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: php-apache spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: php-apache minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 50 status: observedGeneration: 1 lastScaleTime: currentReplicas: 1 desiredReplicas: 1 currentMetrics: - type: Resource resource: name: cpu current: averageUtilization: 0 averageValue: 0 Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resource specified on pod containers.
Notice that you can specify other resource metrics besides CPU. By default, the only other supported resource metric is memory. These resources do not change names from cluster to cluster, and should always be available, as long as the metrics.k8s.io API is available.You can also specify resource metrics in terms of direct values, instead of as percentages of the requested value, by using a target.type of AverageValue instead of Utilization, and setting the corresponding target.averageValue field instead of the target.averageUtilization. metrics: - type: Resource resource: name: memory target: type: AverageValue averageValue: 500Mi There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. These metrics may have names which are cluster specific, and require a more advanced cluster monitoring setup.The first of these alternative metric types is pod metrics. These metrics describe Pods, and are averaged together across Pods and compared with a target value to determine the replica count. They work much like resource metrics, except that they only support a target type of AverageValue.Pod metrics are specified using a metric block like this:type: Pods pods: metric: name: packets-per-second target: type: AverageValue averageValue: 1k The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the object; they only describe it. Object metrics support target types of both Value and AverageValue. With Value, the target is compared directly to the returned metric from the API. With AverageValue, the value returned from the custom metrics API is divided by the number of Pods before being compared to the target. 
The following example is the YAML representation of the requests-per-second metric.type: Object object: metric: name: requests-per-second describedObject: apiVersion: networking.k8s.io/v1 kind: Ingress name: main-route target: type: Value value: 2k If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then choose the one with the highest replica count.For example, if you had your monitoring system collecting metrics about network traffic, you could update the definition above using kubectl edit to look like this:apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: php-apache spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: php-apache minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 50 - type: Pods pods: metric: name: packets-per-second target: type: AverageValue averageValue: 1k - type: Object object: metric: name: requests-per-second describedObject: apiVersion: networking.k8s.io/v1 kind: Ingress name: main-route target: type: Value value: 10k status: observedGeneration: 1 lastScaleTime: currentReplicas: 1 desiredReplicas: 1 currentMetrics: - type: Resource resource: name: cpu current: averageUtilization: 0 averageValue: 0 - type: Object object: metric: name: requests-per-second describedObject: apiVersion: networking.k8s.io/v1 kind: Ingress name: main-route current: value: 10k Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingress were serving a total of 10000 requests per second.Autoscaling on more specific metricsMany metrics pipelines allow you to describe metrics either by name or by a set of additional descriptors called labels. 
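The "choose the highest replica count" behavior described above can be sketched in a few lines of Python (hypothetical helper names, not a Kubernetes API):

```python
import math

def proposal(current_replicas, current_value, target_value):
    # Per-metric proposal: the HPA proportion rule.
    return math.ceil(current_replicas * current_value / target_value)

def desired_replicas(current_replicas, metrics):
    # metrics: (current, target) value pairs, one per metric block in the HPA spec.
    # The controller computes a proposal per metric and takes the largest.
    return max(proposal(current_replicas, c, t) for c, t in metrics)

# CPU at 305% vs a 50% target dominates packets at 1200/s vs a 1k/s target:
print(desired_replicas(1, [(305, 50), (1200, 1000)]))  # -> 7
```

This is why adding metric blocks can only make the workload scale up sooner, never suppress a scale-up demanded by another metric.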
For all non-resource metric types (pod, object, and external, described below), you can specify an additional label selector which is passed to your metric pipeline. For instance, if you collect a metric http_requests with the verb label, you can specify the following metric block to scale only on GET requests:type: Object object: metric: name: http_requests selector: {matchLabels: {verb: GET}} This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multiple series. The selector is additive, and cannot select metrics that describe objects that are not the target object (the target pods in the case of the Pods type, and the described object in the case of the Object type).Autoscaling on metrics not related to Kubernetes objectsApplications running on Kubernetes may need to autoscale based on metrics that don't have an obvious relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case with external metrics.Using external metrics requires knowledge of your monitoring system; the setup is similar to that required when using custom metrics. External metrics allow you to autoscale your cluster based on any metric available in your monitoring system. Provide a metric block with a name and selector, as above, and use the External metric type instead of Object. If multiple time series are matched by the metricSelector, the sum of their values is used by the HorizontalPodAutoscaler. 
External metrics support both the Value and AverageValue target types, which function exactly the same as when you use the Object type.For example if your application processes tasks from a hosted queue service, you could add the following section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks.- type: External external: metric: name: queue_messages_ready selector: matchLabels: queue: ""worker_tasks"" target: type: AverageValue averageValue: 30 When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics API potentially allows access to any metric, so cluster administrators should take care when exposing it.Appendix: Horizontal Pod Autoscaler Status ConditionsWhen using the autoscaling/v2 form of the HorizontalPodAutoscaler, you will be able to see status conditions set by Kubernetes on the HorizontalPodAutoscaler. These status conditions indicate whether or not the HorizontalPodAutoscaler is able to scale, and whether or not it is currently restricted in any way.The conditions appear in the status.conditions field. 
To see the conditions affecting a HorizontalPodAutoscaler, we can use kubectl describe hpa:kubectl describe hpa cm-test Name: cm-test Namespace: prom Labels: Annotations: CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) ""http_requests"" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events: For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or not any backoff-related conditions would prevent scaling. The second, ScalingActive, indicates whether or not the HPA is enabled (i.e. the replica count of the target is not zero) and is able to calculate desired scales. When it is False, it generally indicates problems with fetching metrics. Finally, the last condition, ScalingLimited, indicates that the desired scale was capped by the maximum or minimum of the HorizontalPodAutoscaler. This is an indication that you may wish to raise or lower the minimum or maximum replica count constraints on your HorizontalPodAutoscaler.QuantitiesAll metrics in the HorizontalPodAutoscaler and metrics APIs are specified using a special whole-number notation known in Kubernetes as a quantity. For example, the quantity 10500m would be written as 10.5 in decimal notation. The metrics APIs will return whole numbers without a suffix when possible, and will generally return quantities in milli-units otherwise. 
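The quantity notation maps to decimal values as follows; this toy converter handles only the milli form discussed here (real Kubernetes quantities also allow suffixes such as k, Mi, and Gi):

```python
def quantity_to_float(q: str) -> float:
    """Convert a metrics-API quantity such as '10500m' or '2' to a decimal value."""
    if q.endswith("m"):            # milli-units: 1000m == 1
        return int(q[:-1]) / 1000
    return float(q)

print(quantity_to_float("10500m"))  # -> 10.5
print(quantity_to_float("66m"))     # -> 0.066, the http_requests value shown above
```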
This means you might see your metric value fluctuate between 1 and 1500m, or 1 and 1.5 when written in decimal notation.Other possible scenariosCreating the autoscaler declarativelyInstead of using the kubectl autoscale command to create a HorizontalPodAutoscaler imperatively, you can use the following manifest to create it declaratively:application/hpa/php-apache.yaml apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: php-apache spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: php-apache minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 50 Then, create the autoscaler by executing the following command:kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml horizontalpodautoscaler.autoscaling/php-apache created","HorizontalPodAutoscaler WalkthroughA HorizontalPodAutoscaler (HPA for short) automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.If the lo",HorizontalPodAutoscaler Walkthrough,tasks,configure-pods-containers,2498,https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/,k8s_00007_summary +"kubectl auth can-iSynopsisCheck whether an action is allowed.VERB is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc. TYPE is a Kubernetes resource. Shortcuts and groups will be resolved. NONRESOURCEURL is a partial URL that starts with ""/"". NAME is the name of a particular Kubernetes resource. This command pairs nicely with impersonation.
See --as global flag.kubectl auth can-i VERB [TYPE | TYPE/NAME | NONRESOURCEURL] Examples # Check to see if I can create pods in any namespace kubectl auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace kubectl auth can-i list deployments.apps # Check to see if service account ""foo"" of namespace ""dev"" can list pods in the namespace ""prod"" # You must be allowed to use impersonation for the global option ""--as"" kubectl auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace (""*"" means all) kubectl auth can-i '*' '*' # Check to see if I can get the job named ""bar"" in namespace ""foo"" kubectl auth can-i get jobs.batch/bar -n foo # Check to see if I can read pod logs kubectl auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ kubectl auth can-i get /logs/ # Check to see if I can approve certificates.k8s.io kubectl auth can-i approve certificates.k8s.io # List all allowed actions in namespace ""foo"" kubectl auth can-i --list --namespace=foo Options-A, --all-namespacesIf true, check the specified action in all namespaces.-h, --helphelp for can-i--listIf true, prints all allowed actions.--no-headersIf true, prints allowed actions without headers-q, --quietIf true, suppress output and just return the exit code.--subresource stringSubResource such as pod/log or deployment/scaleParent Options Inherited--as stringUsername to impersonate for the operation.
User could be a regular user or a service account in a namespace.--as-group stringsGroup to impersonate for the operation; this flag can be repeated to specify multiple groups.--as-uid stringUID to impersonate for the operation.--cache-dir string Default: ""$HOME/.kube/cache""Default cache directory--certificate-authority stringPath to a cert file for the certificate authority--client-certificate stringPath to a client certificate file for TLS--client-key stringPath to a client key file for TLS--cluster stringThe name of the kubeconfig cluster to use--context stringThe name of the kubeconfig context to use--disable-compressionIf true, opt-out of response compression for all requests to the server--insecure-skip-tls-verifyIf true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure--kubeconfig stringPath to the kubeconfig file to use for CLI requests.--kuberc stringPath to the kuberc file to use for preferences. This can be disabled by exporting the environment variable KUBECTL_KUBERC=false, or by setting KUBERC=off.--match-server-versionRequire server version to match client version-n, --namespace stringIf present, the namespace scope for this CLI request--password stringPassword for basic authentication to the API server--profile string Default: ""none""Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)--profile-output string Default: ""profile.pprof""Name of the file to write the profile to--request-timeout string Default: ""0""The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h).
A value of zero means don't timeout requests.-s, --server stringThe address and port of the Kubernetes API server--storage-driver-buffer-duration duration Default: 1m0sWrites in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction--storage-driver-db string Default: ""cadvisor""database name--storage-driver-host string Default: ""localhost:8086""database host:port--storage-driver-password string Default: ""root""database password--storage-driver-secureuse secure connection with database--storage-driver-table string Default: ""stats""table name--storage-driver-user string Default: ""root""database username--tls-server-name stringServer name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used--token stringBearer token for authentication to the API server--user stringThe name of the kubeconfig user to use--username stringUsername for basic authentication to the API server--version version[=true]--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version--warnings-as-errorsTreat warnings received from the server as errors and exit with a non-zero exit codeSee Alsokubectl auth - Inspect authorizationThis page is automatically generated.If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. The fix may need to happen elsewhere in the Kubernetes project.","kubectl auth can-iSynopsisCheck whether an action is allowed.VERB is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc. TYPE is a Kubernetes resource. Shortcuts and groups will be resolved. NONRESOURCEURL is a partial URL that starts with ""/"". NAME is the name of a particular Kubernetes resource. This command pairs nicely with impersonation. 
See --as global flag.kubectl auth can-i VERB [TYPE | TYPE/NAME | NONRESOURCEURL] Examples # Check to see if I can create pods in any ",kubectl auth can-i,reference,cli-reference,685,https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/,k8s_00008_summary +"kube-proxy Configuration (v1alpha1)Resource TypesKubeProxyConfigurationFormatOptionsAppears in:LoggingConfigurationFormatOptions contains options for the different logging formats.FieldDescriptiontext [Required]TextOptions[Alpha] Text contains options for logging format ""text"". Only available when the LoggingAlphaOptions feature gate is enabled.json [Required]JSONOptions[Alpha] JSON contains options for logging format ""json"". Only available when the LoggingAlphaOptions feature gate is enabled.JSONOptionsAppears in:FormatOptionsJSONOptions contains options for logging format ""json"".FieldDescriptionOutputRoutingOptions [Required]OutputRoutingOptions(Members of OutputRoutingOptions are embedded into this type.) No description provided.LogFormatFactoryLogFormatFactory provides support for a certain additional, non-default log format.LoggingConfigurationAppears in:KubeProxyConfigurationKubeletConfigurationLoggingConfiguration contains logging options.FieldDescriptionformat [Required]stringFormat Flag specifies the structure of log messages. default value of format is textflushFrequency [Required]TimeOrMetaDurationMaximum time between log flushes. If a string, parsed as a duration (i.e. ""1s"") If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000). Ignored if the selected logging backend writes log messages without buffering.verbosity [Required]VerbosityLevelVerbosity is the threshold that determines which log messages are logged. Default is zero which logs only the most important messages. Higher values enable additional messages. Error messages are always logged.vmodule [Required]VModuleConfigurationVModule overrides the verbosity threshold for individual files. 
Only supported for ""text"" log format.options [Required]FormatOptions[Alpha] Options holds additional parameters that are specific to the different logging formats. Only the options for the selected format get used, but all of them get validated. Only available when the LoggingAlphaOptions feature gate is enabled.LoggingOptionsLoggingOptions can be used with ValidateAndApplyWithOptions to override certain global defaults.FieldDescriptionErrorStream [Required]io.WriterErrorStream can be used to override the os.Stderr default.InfoStream [Required]io.WriterInfoStream can be used to override the os.Stdout default.OutputRoutingOptionsAppears in:JSONOptionsTextOptionsOutputRoutingOptions contains options that are supported by both ""text"" and ""json"".FieldDescriptionsplitStream [Required]bool[Alpha] SplitStream redirects error messages to stderr while info messages go to stdout, with buffering. The default is to write both to stdout, without buffering. Only available when the LoggingAlphaOptions feature gate is enabled.infoBufferSize [Required]k8s.io/apimachinery/pkg/api/resource.QuantityValue[Alpha] InfoBufferSize sets the size of the info stream when using split streams. The default is zero, which disables buffering. Only available when the LoggingAlphaOptions feature gate is enabled.TextOptionsAppears in:FormatOptionsTextOptions contains options for logging format ""text"".FieldDescriptionOutputRoutingOptions [Required]OutputRoutingOptions(Members of OutputRoutingOptions are embedded into this type.) 
No description provided.TimeOrMetaDurationAppears in:LoggingConfigurationTimeOrMetaDuration is present only for backwards compatibility for the flushFrequency field, and new fields should use metav1.Duration.FieldDescriptionDuration [Required]meta/v1.DurationDuration holds the duration- [Required]boolSerializeAsString controls whether the value is serialized as a string or an integerVModuleConfiguration(Alias of []k8s.io/component-base/logs/api/v1.VModuleItem)Appears in:LoggingConfigurationVModuleConfiguration is a collection of individual file names or patterns and the corresponding verbosity threshold.VerbosityLevel(Alias of uint32)Appears in:LoggingConfigurationVerbosityLevel represents a klog or logr verbosity threshold.ClientConnectionConfigurationAppears in:KubeProxyConfigurationKubeSchedulerConfigurationGenericControllerManagerConfigurationClientConnectionConfiguration contains details for constructing a client.FieldDescriptionkubeconfig [Required]stringkubeconfig is the path to a KubeConfig file.acceptContentTypes [Required]stringacceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the default value of 'application/json'. 
This field will control all connections to the server used by a particular client.contentType [Required]stringcontentType is the content type used when sending data to the server from this client.qps [Required]float32qps controls the number of queries per second allowed for this connection.burst [Required]int32burst allows extra queries to accumulate when a client is exceeding its rate.DebuggingConfigurationAppears in:KubeSchedulerConfigurationGenericControllerManagerConfigurationDebuggingConfiguration holds configuration for Debugging related features.FieldDescriptionenableProfiling [Required]boolenableProfiling enables profiling via web interface host:port/debug/pprof/enableContentionProfiling [Required]boolenableContentionProfiling enables block profiling, if enableProfiling is true.LeaderElectionConfigurationAppears in:KubeSchedulerConfigurationGenericControllerManagerConfigurationLeaderElectionConfiguration defines the configuration of leader election clients for components that can run with leader election enabled.FieldDescriptionleaderElect [Required]boolleaderElect enables a leader election client to gain leadership before executing the main loop. Enable this when running replicated components for high availability.leaseDuration [Required]meta/v1.DurationleaseDuration is the duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.renewDeadline [Required]meta/v1.DurationrenewDeadline is the interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. 
This is only applicable if leader election is enabled.retryPeriod [Required]meta/v1.DurationretryPeriod is the duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.resourceLock [Required]stringresourceLock indicates the resource object type that will be used to lock during leader election cycles.resourceName [Required]stringresourceName indicates the name of the resource object that will be used to lock during leader election cycles.resourceNamespace [Required]stringresourceNamespace indicates the namespace of the resource object that will be used to lock during leader election cycles.KubeProxyConfigurationKubeProxyConfiguration contains everything necessary to configure the Kubernetes proxy server.FieldDescriptionapiVersionstringkubeproxy.config.k8s.io/v1alpha1kindstringKubeProxyConfigurationfeatureGates [Required]map[string]boolfeatureGates is a map of feature names to bools that enable or disable alpha/experimental features.clientConnection [Required]ClientConnectionConfigurationclientConnection specifies the kubeconfig file and client connection settings for the proxy server to use when communicating with the apiserver.logging [Required]LoggingConfigurationlogging specifies the options of logging. Refer to Logs Options for more information.hostnameOverride [Required]stringhostnameOverride, if non-empty, will be used as the name of the Node that kube-proxy is running on. If unset, the node name is assumed to be the same as the node's hostname.bindAddress [Required]stringbindAddress can be used to override kube-proxy's idea of what its node's primary IP is.
Note that the name is a historical artifact, and kube-proxy does not actually bind any sockets to this IP.healthzBindAddress [Required]stringhealthzBindAddress is the IP address and port for the health check server to serve on, defaulting to ""0.0.0.0:10256"" (if bindAddress is unset or IPv4), or ""[::]:10256"" (if bindAddress is IPv6).metricsBindAddress [Required]stringmetricsBindAddress is the IP address and port for the metrics server to serve on, defaulting to ""127.0.0.1:10249"" (if bindAddress is unset or IPv4), or ""[::1]:10249"" (if bindAddress is IPv6). (Set to ""0.0.0.0:10249"" / ""[::]:10249"" to bind on all interfaces.)bindAddressHardFail [Required]boolbindAddressHardFail, if true, tells kube-proxy to treat failure to bind to a port as fatal and exitenableProfiling [Required]boolenableProfiling enables profiling via web interface on /debug/pprof handler. Profiling handlers will be handled by metrics server.showHiddenMetricsForVersion [Required]stringshowHiddenMetricsForVersion is the version for which you want to show hidden metrics.mode [Required]ProxyModemode specifies which proxy mode to use.iptables [Required]KubeProxyIPTablesConfigurationiptables contains iptables-related configuration options.ipvs [Required]KubeProxyIPVSConfigurationipvs contains ipvs-related configuration options.nftables [Required]KubeProxyNFTablesConfigurationnftables contains nftables-related configuration options.winkernel [Required]KubeProxyWinkernelConfigurationwinkernel contains winkernel-related configuration options.detectLocalMode [Required]LocalModedetectLocalMode determines mode to use for detecting local traffic, defaults to ClusterCIDRdetectLocal [Required]DetectLocalConfigurationdetectLocal contains optional configuration settings related to DetectLocalMode.clusterCIDR [Required]stringclusterCIDR is the CIDR range of the pods in the cluster. (For dual-stack clusters, this can be a comma-separated dual-stack pair of CIDR ranges.). 
When DetectLocalMode is set to ClusterCIDR, kube-proxy will consider traffic to be local if its source IP is in this range. (Otherwise it is not used.)nodePortAddresses [Required][]stringnodePortAddresses is a list of CIDR ranges that contain valid node IPs, or alternatively, the single string 'primary'. If set to a list of CIDRs, connections to NodePort services will only be accepted on node IPs in one of the indicated ranges. If set to 'primary', NodePort services will only be accepted on the node's primary IPv4 and/or IPv6 address according to the Node object. If unset, NodePort connections will be accepted on all local IPs.oomScoreAdj [Required]int32oomScoreAdj is the oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]conntrack [Required]KubeProxyConntrackConfigurationconntrack contains conntrack-related configuration options.configSyncPeriod [Required]meta/v1.DurationconfigSyncPeriod is how often configuration from the apiserver is refreshed. Must be greater than 0.portRange [Required]stringportRange was previously used to configure the userspace proxy, but is now unused.windowsRunAsService [Required]boolwindowsRunAsService, if true, enables Windows service control manager API integration.DetectLocalConfigurationAppears in:KubeProxyConfigurationDetectLocalConfiguration contains optional settings related to DetectLocalMode optionFieldDescriptionbridgeInterface [Required]stringbridgeInterface is a bridge interface name. When DetectLocalMode is set to LocalModeBridgeInterface, kube-proxy will consider traffic to be local if it originates from this bridge.interfaceNamePrefix [Required]stringinterfaceNamePrefix is an interface name prefix. 
When DetectLocalMode is set to LocalModeInterfaceNamePrefix, kube-proxy will consider traffic to be local if it originates from any interface whose name begins with this prefix.KubeProxyConntrackConfigurationAppears in:KubeProxyConfigurationKubeProxyConntrackConfiguration contains conntrack settings for the Kubernetes proxy server.FieldDescriptionmaxPerCore [Required]int32maxPerCore is the maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore min).min [Required]int32min is the minimum value of connect-tracking records to allocate, regardless of maxPerCore (set maxPerCore=0 to leave the limit as-is).tcpEstablishedTimeout [Required]meta/v1.DurationtcpEstablishedTimeout is how long an idle TCP connection will be kept open (e.g. '2s'). Must be greater than 0 to set.tcpCloseWaitTimeout [Required]meta/v1.DurationtcpCloseWaitTimeout is how long an idle conntrack entry in CLOSE_WAIT state will remain in the conntrack table. (e.g. '60s'). Must be greater than 0 to set.tcpBeLiberal [Required]booltcpBeLiberal, if true, kube-proxy will configure conntrack to run in liberal mode for TCP connections and packets with out-of-window sequence numbers won't be marked INVALID.udpTimeout [Required]meta/v1.DurationudpTimeout is how long an idle UDP conntrack entry in UNREPLIED state will remain in the conntrack table (e.g. '30s'). Must be greater than 0 to set.udpStreamTimeout [Required]meta/v1.DurationudpStreamTimeout is how long an idle UDP conntrack entry in ASSURED state will remain in the conntrack table (e.g. '300s'). Must be greater than 0 to set.KubeProxyIPTablesConfigurationAppears in:KubeProxyConfigurationKubeProxyIPTablesConfiguration contains iptables-related configuration details for the Kubernetes proxy server.FieldDescriptionmasqueradeBit [Required]int32masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using the iptables or ipvs proxy mode. 
Values must be within the range [0, 31].masqueradeAll [Required]boolmasqueradeAll tells kube-proxy to SNAT all traffic sent to Service cluster IPs, when using the iptables or ipvs proxy mode. This may be required with some CNI plugins.localhostNodePorts [Required]boollocalhostNodePorts, if false, tells kube-proxy to disable the legacy behavior of allowing NodePort services to be accessed via localhost. (Applies only to iptables mode and IPv4; localhost NodePorts are never allowed with other proxy modes or with IPv6.)syncPeriod [Required]meta/v1.DurationsyncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0.minSyncPeriod [Required]meta/v1.DurationminSyncPeriod is the minimum period between iptables rule resyncs (e.g. '5s', '1m', '2h22m'). A value of 0 means every Service or EndpointSlice change will result in an immediate iptables resync.KubeProxyIPVSConfigurationAppears in:KubeProxyConfigurationKubeProxyIPVSConfiguration contains ipvs-related configuration details for the Kubernetes proxy server.FieldDescriptionsyncPeriod [Required]meta/v1.DurationsyncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0.minSyncPeriod [Required]meta/v1.DurationminSyncPeriod is the minimum period between IPVS rule resyncs (e.g. '5s', '1m', '2h22m'). A value of 0 means every Service or EndpointSlice change will result in an immediate IPVS resync.scheduler [Required]stringscheduler is the IPVS scheduler to useexcludeCIDRs [Required][]stringexcludeCIDRs is a list of CIDRs which the ipvs proxier should not touch when cleaning up ipvs services.strictARP [Required]boolstrictARP configures arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interfacetcpTimeout [Required]meta/v1.DurationtcpTimeout is the timeout value used for idle IPVS TCP sessions. 
The default value is 0, which preserves the current timeout value on the system.tcpFinTimeout [Required]meta/v1.DurationtcpFinTimeout is the timeout value used for IPVS TCP sessions after receiving a FIN. The default value is 0, which preserves the current timeout value on the system.udpTimeout [Required]meta/v1.DurationudpTimeout is the timeout value used for IPVS UDP packets. The default value is 0, which preserves the current timeout value on the system.KubeProxyNFTablesConfigurationAppears in:KubeProxyConfigurationKubeProxyNFTablesConfiguration contains nftables-related configuration details for the Kubernetes proxy server.FieldDescriptionmasqueradeBit [Required]int32masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using the nftables proxy mode. Values must be within the range [0, 31].masqueradeAll [Required]boolmasqueradeAll tells kube-proxy to SNAT all traffic sent to Service cluster IPs, when using the nftables mode. This may be required with some CNI plugins.syncPeriod [Required]meta/v1.DurationsyncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0.minSyncPeriod [Required]meta/v1.DurationminSyncPeriod is the minimum period between iptables rule resyncs (e.g. '5s', '1m', '2h22m'). 
A value of 0 means every Service or EndpointSlice change will result in an immediate iptables resync.KubeProxyWinkernelConfigurationAppears in:KubeProxyConfigurationKubeProxyWinkernelConfiguration contains Windows/HNS settings for the Kubernetes proxy server.FieldDescriptionnetworkName [Required]stringnetworkName is the name of the network kube-proxy will use to create endpoints and policiessourceVip [Required]stringsourceVip is the IP address of the source VIP endpoint used for NAT when loadbalancingenableDSR [Required]boolenableDSR tells kube-proxy whether HNS policies should be created with DSRrootHnsEndpointName [Required]stringrootHnsEndpointName is the name of hnsendpoint that is attached to l2bridge for root network namespaceforwardHealthCheckVip [Required]boolforwardHealthCheckVip forwards service VIP for health check port on WindowsLocalMode(Alias of string)Appears in:KubeProxyConfigurationLocalMode represents modes to detect local traffic from the nodeProxyMode(Alias of string)Appears in:KubeProxyConfigurationProxyMode represents modes used by the Kubernetes proxy server.Three modes of proxy are available on Linux platforms: iptables, ipvs, and nftables. One mode of proxy is available on Windows platforms: kernelspace.If the proxy mode is unspecified, a default proxy mode will be used (currently this is iptables on Linux and kernelspace on Windows). If the selected proxy mode cannot be used (due to lack of kernel support, missing userspace components, etc) then kube-proxy will exit with an error.This page is automatically generated.If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. 
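Pulling several of the fields documented above together, a minimal kube-proxy configuration file might look like the following sketch; the values shown are illustrative assumptions for an example cluster, not defaults:

```yaml
# Illustrative KubeProxyConfiguration; adjust values for your cluster
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: iptables
clusterCIDR: 10.244.0.0/16
metricsBindAddress: 0.0.0.0:10249
iptables:
  masqueradeBit: 14
  syncPeriod: 30s
  minSyncPeriod: 1s
conntrack:
  maxPerCore: 32768
  tcpEstablishedTimeout: 24h0m0s
```

A file like this is passed to kube-proxy via its --config flag.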
The fix may need to happen elsewhere in the Kubernetes project.","kube-proxy Configuration (v1alpha1)Resource TypesKubeProxyConfigurationFormatOptionsAppears in:LoggingConfigurationFormatOptions contains options for the different logging formats.FieldDescriptiontext [Required]TextOptions[Alpha] Text contains options for logging format ""text"". Only available when the LoggingAlphaOptions feature gate is enabled.json [Required]JSONOptions[Alpha] JSON contains options for logging format ""json"". Only available when the LoggingAlphaOptions feature gate is enabled.JS",kube-proxy Configuration (v1alpha1),reference,api-reference,2072,https://kubernetes.io/docs/reference/config-api/kube-proxy-config.v1alpha1/,k8s_00010_summary +"kubeadm resetPerforms a best effort revert of changes made by kubeadm init or kubeadm join.SynopsisPerforms a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'The ""reset"" command executes the following phases:preflight Run reset pre-flight checks remove-etcd-member Remove a local etcd member. cleanup-node Run cleanup node. kubeadm reset [flags] Options--cert-dir string Default: ""/etc/kubernetes/pki""The path to the directory where the certificates are stored. If specified, clean this directory.--cleanup-tmp-dirCleanup the ""/etc/kubernetes/tmp"" directory--config stringPath to a kubeadm configuration file.--cri-socket stringPath to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.--dry-runDon't apply any changes; just output what would be done.-f, --forceReset the node without prompting for confirmation.-h, --helphelp for reset--ignore-preflight-errors stringsA list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. 
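A typical invocation combining several of the flags above might look like this; the particular selection of flags is illustrative:

```shell
# Revert kubeadm changes on this node without a confirmation prompt,
# also removing /etc/kubernetes/tmp and downgrading the named
# preflight check from an error to a warning
sudo kubeadm reset --force --cleanup-tmp-dir --ignore-preflight-errors=IsPrivilegedUser
```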
Value 'all' ignores errors from all checks.--kubeconfig string Default: ""/etc/kubernetes/admin.conf""The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.--skip-phases stringsList of phases to be skippedOptions inherited from parent commands--rootfs stringThe path to the 'real' host root filesystem. This will cause kubeadm to chroot into the provided path.Reset workflowkubeadm reset is responsible for cleaning up a node local file system from files that were created using the kubeadm init or kubeadm join commands. For control-plane nodes reset also removes the local stacked etcd member of this node from the etcd cluster.kubeadm reset phase can be used to execute the separate phases of the above workflow. To skip a list of phases you can use the --skip-phases flag, which works in a similar way to the kubeadm join and kubeadm init phase runners.kubeadm reset also supports the --config flag for passing a ResetConfiguration structure.Cleanup of external etcd memberskubeadm reset will not delete any etcd data if external etcd is used. This means that if you run kubeadm init again using the same etcd endpoints, you will see state from previous clusters.To wipe etcd data it is recommended you use a client like etcdctl, such as:etcdctl del """" --prefix See the etcd documentation for more information.Cleanup of CNI configurationCNI plugins use the directory /etc/cni/net.d to store their configuration. The kubeadm reset command does not cleanup that directory. Leaving the configuration of a CNI plugin on a host can be problematic if the same host is later used as a new Kubernetes node and a different CNI plugin happens to be deployed in that cluster. 
It can result in a configuration conflict between CNI plugins.To clean up the directory, back up its contents if needed and then execute the following command:sudo rm -rf /etc/cni/net.d Cleanup of network traffic rulesThe kubeadm reset command does not clean any iptables, nftables or IPVS rules applied to the host by kube-proxy. A control loop in kube-proxy ensures that the rules on each node host are synchronized. For additional details please see Virtual IPs and Service Proxies.Leaving the rules without cleanup should not cause any issues if the host is later reused as a Kubernetes node or if it will serve a different purpose.If you wish to perform this cleanup, you can use the same kube-proxy container which was used in your cluster and the --cleanup flag of the kube-proxy binary:docker run --privileged --rm registry.k8s.io/kube-proxy:v1.34.0 sh -c ""kube-proxy --cleanup && echo DONE"" The output of the above command should print DONE at the end. Instead of Docker, you can use your preferred container runtime to start the container.Cleanup of $HOME/.kubeThe $HOME/.kube directory typically contains configuration files and kubectl cache. Leaving the contents of $HOME/.kube/cache in place is not an issue, but there is one important file in the directory: $HOME/.kube/config, which is used by kubectl to authenticate to the Kubernetes API server. After kubeadm init finishes, the user is instructed to copy the /etc/kubernetes/admin.conf file to the $HOME/.kube/config location and grant the current user access to it.The kubeadm reset command does not clean any of the contents of the $HOME/.kube directory. Leaving the $HOME/.kube/config file in place can be problematic depending on who will have access to this host after kubeadm reset was called. 
If the same cluster continues to exist, it is highly recommended to delete the file, as the admin credentials stored in it will continue to be valid.To cleanup the directory, examine its contents, perform backup if needed and execute the following command:rm -rf $HOME/.kube Graceful kube-apiserver shutdownIf you have your kube-apiserver configured with the --shutdown-delay-duration flag, you can run the following commands to attempt a graceful shutdown for the running API server Pod, before you run kubeadm reset:yq eval -i '.spec.containers[0].command = []' /etc/kubernetes/manifests/kube-apiserver.yaml timeout 60 sh -c 'while pgrep kube-apiserver >/dev/null; do sleep 1; done' || true What's nextkubeadm init to bootstrap a Kubernetes control-plane nodekubeadm join to bootstrap a Kubernetes worker node and join it to the cluster","kubeadm resetPerforms a best effort revert of changes made by kubeadm init or kubeadm join.SynopsisPerforms a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'The ""reset"" command executes the following phases:preflight Run reset pre-flight checks remove-etcd-member Remove a local etcd member. cleanup-node Run cleanup node. kubeadm reset [flags] Options--cert-dir string Default: ""/etc/kubernetes/pki""The path to the directory where the certificates are stored. If ",kubeadm reset,reference,general-reference,823,https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/,k8s_00011_summary +"Managing Secrets using Configuration FileCreating Secret objects using resource configuration file.Before you beginYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. 
If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:iximiuz LabsKillercodaKodeKloudPlay with KubernetesCreate the SecretYou can define the Secret object in a manifest first, in JSON or YAML format, and then create that object. The Secret resource contains two maps: data and stringData. The data field is used to store arbitrary data, encoded using base64. The stringData field is provided for convenience, and it allows you to provide the same data as unencoded strings. The keys of data and stringData must consist of alphanumeric characters, -, _ or ..The following example stores two strings in a Secret using the data field.Convert the strings to base64:echo -n 'admin' | base64 echo -n '1f2d1e2e67df' | base64 Note:The serialized JSON and YAML values of Secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the base64 utility on Darwin/macOS, users should avoid using the -b option to split long lines. Conversely, Linux users should add the option -w 0 to base64 commands or the pipeline base64 | tr -d '\n' if the -w option is not available.The output is similar to:YWRtaW4= MWYyZDFlMmU2N2Rm Create the manifest:apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: username: YWRtaW4= password: MWYyZDFlMmU2N2Rm Note that the name of a Secret object must be a valid DNS subdomain name.Create the Secret using kubectl apply:kubectl apply -f ./secret.yaml The output is similar to:secret/mysecret created To verify that the Secret was created and to decode the Secret data, refer to Managing Secrets using kubectl.Specify unencoded data when creating a SecretFor certain scenarios, you may wish to use the stringData field instead. 
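Before moving on, the base64 round trip used earlier can be checked locally, without a cluster; the values match the username example above:

```shell
# Encode the plaintext value, then decode the stored value back,
# to confirm the pair used in the manifest is consistent
encoded=$(printf '%s' admin | base64)
decoded=$(printf '%s' YWRtaW4= | base64 --decode)
echo encoded=$encoded   # YWRtaW4=
echo decoded=$decoded   # admin
```

Using printf '%s' (rather than plain echo) avoids accidentally encoding a trailing newline, which is the same reason the document uses echo -n.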
This field allows you to put a non-base64 encoded string directly into the Secret, and the string will be encoded for you when the Secret is created or updated.A practical example of this might be where you are deploying an application that uses a Secret to store a configuration file, and you want to populate parts of that configuration file during your deployment process.For example, if your application uses the following configuration file:apiUrl: ""https://my.api.com/api/v1"" username: """" password: """" You could store this in a Secret using the following definition:apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: config.yaml: | apiUrl: ""https://my.api.com/api/v1"" username: password: Note:The stringData field for a Secret does not work well with server-side apply.When you retrieve the Secret data, the command returns the encoded values, and not the plaintext values you provided in stringData.For example, if you run the following command:kubectl get secret mysecret -o yaml The output is similar to:apiVersion: v1 data: config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 kind: Secret metadata: creationTimestamp: 2018-11-15T20:40:59Z name: mysecret namespace: default resourceVersion: ""7225"" uid: c280ad2e-e916-11e8-98f2-025000000001 type: Opaque Specify both data and stringDataIf you specify a field in both data and stringData, the value from stringData is used.For example, if you define the following Secret:apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: username: YWRtaW4= stringData: username: administrator Note:The stringData field for a Secret does not work well with server-side apply.The Secret object is created as follows:apiVersion: v1 data: username: YWRtaW5pc3RyYXRvcg== kind: Secret metadata: creationTimestamp: 2018-11-15T20:46:46Z name: mysecret namespace: default resourceVersion: ""7579"" uid: 91460ecb-e917-11e8-98f2-025000000001 
type: Opaque YWRtaW5pc3RyYXRvcg== decodes to administrator.Edit a SecretTo edit the data in the Secret you created using a manifest, modify the data or stringData field in your manifest and apply the file to your cluster. You can edit an existing Secret object unless it is immutable.For example, if you want to change the password from the previous example to birdsarentreal, do the following:Encode the new password string:echo -n 'birdsarentreal' | base64 The output is similar to:YmlyZHNhcmVudHJlYWw= Update the data field with your new password string:apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: username: YWRtaW4= password: YmlyZHNhcmVudHJlYWw= Apply the manifest to your cluster:kubectl apply -f ./secret.yaml The output is similar to:secret/mysecret configured Kubernetes updates the existing Secret object. In detail, the kubectl tool notices that there is an existing Secret object with the same name. kubectl fetches the existing object, plans changes to it, and submits the changed Secret object to your cluster control plane.If you specified kubectl apply --server-side instead, kubectl uses Server Side Apply instead.Clean upTo delete the Secret you have created:kubectl delete secret mysecret What's nextRead more about the Secret conceptLearn how to manage Secrets using kubectlLearn how to manage Secrets using kustomize","Managing Secrets using Configuration FileCreating Secret objects using resource configuration file.Before you beginYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. 
If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:iximiuz LabsK",Managing Secrets using Configuration File,tasks,general-tasks,788,https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/,k8s_00013_summary +"Cloud Controller Manager AdministrationFEATURE STATE: Kubernetes v1.11 [beta]Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the cloud-controller-manager binary allows cloud vendors to evolve independently from the core Kubernetes code.The cloud-controller-manager can be linked to any cloud provider that satisfies cloudprovider.Interface. For backwards compatibility, the cloud-controller-manager provided in the core Kubernetes project uses the same cloud libraries as kube-controller-manager. Cloud providers already supported in Kubernetes core are expected to use the in-tree cloud-controller-manager to transition out of Kubernetes core.AdministrationRequirementsEvery cloud has their own set of requirements for running their own cloud provider integration, it should not be too different from the requirements when running kube-controller-manager. As a general rule of thumb you'll need:cloud authentication/authorization: your cloud may require a token or IAM rules to allow access to their APIskubernetes authentication/authorization: cloud-controller-manager may need RBAC rules set to speak to the kubernetes apiserverhigh availability: like kube-controller-manager, you may want a high available setup for cloud controller manager using leader election (on by default).Running cloud-controller-managerSuccessfully running cloud-controller-manager requires some changes to your cluster configuration.kubelet, kube-apiserver, and kube-controller-manager must be set according to the user's usage of external CCM. 
If the user has an external CCM (not the internal cloud controller loops in the Kubernetes Controller Manager), then --cloud-provider=external must be specified. Otherwise, it should not be specified.Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways:Components that specify --cloud-provider=external will add a taint node.cloudprovider.kubernetes.io/uninitialized with an effect NoSchedule during initialization. This marks the node as needing a second initialization from an external controller before it can be scheduled work. Note that in the event that cloud controller manager is not available, new nodes in the cluster will be left unschedulable. The taint is important since the scheduler may require cloud specific information about nodes such as their region or type (high cpu, gpu, high memory, spot instance, etc).cloud information about nodes in the cluster will no longer be retrieved using local metadata, but instead all API calls to retrieve node information will go through cloud controller manager. This may mean you can restrict access to your cloud API on the kubelets for better security. 
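For example, on each node the kubelet would be started with the external provider flag; the configuration file path here is an assumption for your distribution:

```shell
# Start the kubelet with the external cloud provider;
# cloud-controller-manager will then initialize the node and
# remove the node.cloudprovider.kubernetes.io/uninitialized taint
kubelet --cloud-provider=external --config=/var/lib/kubelet/config.yaml
```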
For larger clusters you may want to consider whether cloud controller manager will hit rate limits since it is now responsible for almost all API calls to your cloud from within the cluster.The cloud controller manager can implement:Node controller - responsible for updating kubernetes nodes using cloud APIs and deleting kubernetes nodes that were deleted on your cloud.Service controller - responsible for load balancers on your cloud against services of type LoadBalancer.Route controller - responsible for setting up network routes on your cloudany other features you would like to implement if you are running an out-of-tree provider.ExamplesIf you are using a cloud that is currently supported in Kubernetes core and would like to adopt cloud controller manager, see the cloud controller manager in kubernetes core.For cloud controller managers not in Kubernetes core, you can find the respective projects in repositories maintained by cloud vendors or by SIGs.For providers already in Kubernetes core, you can run the in-tree cloud controller manager as a DaemonSet in your cluster; use the following as a guideline:admin/cloud/ccm-example.yaml # This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster. # It assumes that your masters can run pods and have the role node-role.kubernetes.io/master # Note that this Daemonset will not work straight out of the box for your cloud; this is # meant to be a guideline. 
--- apiVersion: v1 kind: ServiceAccount metadata: name: cloud-controller-manager namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: system:cloud-controller-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cloud-controller-manager namespace: kube-system --- apiVersion: apps/v1 kind: DaemonSet metadata: labels: k8s-app: cloud-controller-manager name: cloud-controller-manager namespace: kube-system spec: selector: matchLabels: k8s-app: cloud-controller-manager template: metadata: labels: k8s-app: cloud-controller-manager spec: serviceAccountName: cloud-controller-manager containers: - name: cloud-controller-manager # for in-tree providers we use registry.k8s.io/cloud-controller-manager # this can be replaced with any other image for out-of-tree providers image: registry.k8s.io/cloud-controller-manager:v1.8.0 command: - /usr/local/bin/cloud-controller-manager - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here! - --leader-elect=true - --use-service-account-credentials # these flags will vary for every cloud provider - --allocate-node-cidrs=true - --configure-cloud-routes=true - --cluster-cidr=172.17.0.0/16 tolerations: # this is required so CCM can bootstrap itself - key: node.cloudprovider.kubernetes.io/uninitialized value: ""true"" effect: NoSchedule # these tolerations are to have the daemonset runnable on control plane nodes # remove them if your control plane nodes should not run pods - key: node-role.kubernetes.io/control-plane operator: Exists effect: NoSchedule - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule # this is to restrict CCM to only run on master nodes # the node selector may vary depending on your cluster setup nodeSelector: node-role.kubernetes.io/master: """" LimitationsRunning cloud controller manager comes with a few possible limitations. 
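The guideline manifest above can be applied and inspected like any other DaemonSet; the resource names and labels follow the example manifest:

```shell
# Apply the example manifest, then check rollout status and logs
kubectl apply -f ccm-example.yaml
kubectl -n kube-system get daemonset cloud-controller-manager
kubectl -n kube-system logs -l k8s-app=cloud-controller-manager
```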
Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for production workloads.Support for VolumesCloud controller manager does not implement any of the volume controllers found in kube-controller-manager as the volume integrations also require coordination with kubelets. As we evolve CSI (container storage interface) and add stronger support for flex volume plugins, necessary support will be added to cloud controller manager so that clouds can fully integrate with volumes. Learn more about out-of-tree CSI volume plugins here.ScalabilityThe cloud-controller-manager queries your cloud provider's APIs to retrieve information for all nodes. For very large clusters, consider possible bottlenecks such as resource requirements and API rate limiting.Chicken and EggThe goal of the cloud controller manager project is to decouple development of cloud features from the core Kubernetes project. Unfortunately, many aspects of the Kubernetes project have assumptions that cloud provider features are tightly integrated into the project. As a result, adopting this new architecture can create several situations where a request is being made for information from a cloud provider, but the cloud controller manager may not be able to return that information without the original request being complete.A good example of this is the TLS bootstrapping feature in the Kubelet. 
TLS bootstrapping assumes that the Kubelet has the ability to ask the cloud provider (or a local metadata service) for all its address types (private, public, etc) but cloud controller manager cannot set a node's address types without being initialized in the first place which requires that the kubelet has TLS certificates to communicate with the apiserver.As this initiative evolves, changes will be made to address these issues in upcoming releases.What's nextTo build and develop your own cloud controller manager, read Developing Cloud Controller Manager.","Cloud Controller Manager AdministrationFEATURE STATE: Kubernetes v1.11 [beta]Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the cloud-controller-manager binary allows cloud vendors to evolve independently from the core Kubernetes code.The cloud-controller-manager can be linked to any cloud provider that satisfies cloudprovider.Interface. For backwards compatibility, the cloud-controller-manager provided ",Cloud Controller Manager Administration,tasks,manage-clusters,1099,https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/,k8s_00014_summary +"Declare Network PolicyThis document helps you get started using the Kubernetes NetworkPolicy API to declare network policies that govern how pods communicate with each other.Note: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide before submitting a change. More information.Before you beginYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. 
If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:iximiuz LabsKillercodaKodeKloudPlay with KubernetesYour Kubernetes server must be at or later than version v1.8.To check the version, enter kubectl version.Make sure you've configured a network provider with network policy support. There are a number of network providers that support NetworkPolicy, including:AntreaCalicoCiliumKube-routerRomanaWeave NetCreate an nginx deployment and expose it via a serviceTo see how Kubernetes network policy works, start off by creating an nginx Deployment.kubectl create deployment nginx --image=nginx deployment.apps/nginx created Expose the Deployment through a Service called nginx.kubectl expose deployment nginx --port=80 service/nginx exposed The above commands create a Deployment with an nginx Pod and expose the Deployment through a Service named nginx. The nginx Pod and Deployment are found in the default namespace.kubectl get svc,pod NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes 10.100.0.1 443/TCP 46m service/nginx 10.100.0.16 80/TCP 33s NAME READY STATUS RESTARTS AGE pod/nginx-701339712-e0qfq 1/1 Running 0 35s Test the service by accessing it from another PodYou should be able to access the new nginx service from other Pods. 
To access the nginx Service from another Pod in the default namespace, start a busybox container:kubectl run busybox --rm -ti --image=busybox -- /bin/sh In your shell, run the following command:wget --spider --timeout=1 nginx Connecting to nginx (10.100.0.16:80) remote file exists Limit access to the nginx serviceTo limit the access to the nginx service so that only Pods with the label access: true can query it, create a NetworkPolicy object as follows:service/networking/nginx-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: ""true""
The name of a NetworkPolicy object must be a valid DNS subdomain name.Note:NetworkPolicy includes a podSelector which selects the grouping of Pods to which the policy applies. You can see this policy selects Pods with the label app=nginx. The label was automatically added to the Pod in the nginx Deployment. An empty podSelector selects all pods in the namespace.Assign the policy to the serviceUse kubectl to create a NetworkPolicy from the above nginx-policy.yaml file:kubectl apply -f https://k8s.io/examples/service/networking/nginx-policy.yaml networkpolicy.networking.k8s.io/access-nginx created Test access to the service when access label is not definedWhen you attempt to access the nginx Service from a Pod without the correct labels, the request times out:kubectl run busybox --rm -ti --image=busybox -- /bin/sh In your shell, run the command:wget --spider --timeout=1 nginx Connecting to nginx (10.100.0.16:80) wget: download timed out Define access label and test againYou can create a Pod with the correct labels to see that the request is allowed:kubectl run busybox --rm -ti --labels=""access=true"" --image=busybox -- /bin/sh In your shell, run the command:wget --spider --timeout=1 nginx Connecting to nginx (10.100.0.16:80) remote file exists Items on this page refer to third party products or
projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details.You should read the content guide before proposing a change that adds an extra third-party link.","Declare Network PolicyThis document helps you get started using the Kubernetes NetworkPolicy API to declare network policies that govern how pods communicate with each other.Note: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide before submitting a change. More information.Before you beginYou need",Declare Network Policy,tasks,manage-clusters,617,https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/,k8s_00015_summary +"kubectl Quick ReferenceThis page contains a list of commonly used kubectl commands and flags.Note:These instructions are for Kubernetes v1.34. To check the version, use the kubectl version command.Kubectl autocompleteBASHsource <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first. echo ""source <(kubectl completion bash)"" >> ~/.bashrc # add autocomplete permanently to your bash shell. 
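The source <(kubectl completion bash) line above relies on bash process substitution. A minimal, kubectl-free sketch of the same mechanism (the greet function is purely illustrative):

```shell
#!/usr/bin/env bash
# <(cmd) exposes cmd's output as a readable file, which `source`
# then executes in the current shell -- the same way the kubectl
# completion script is loaded without a temporary file.
source <(echo 'greet() { echo "hello $1"; }')
greet world  # prints: hello world
```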
You can also use a shorthand alias for kubectl that also works with completion:alias k=kubectl complete -o default -F __start_kubectl k ZSHsource <(kubectl completion zsh) # set up autocomplete in zsh into the current shell echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell FISHNote:Requires kubectl version 1.23 or above.echo 'kubectl completion fish | source' > ~/.config/fish/completions/kubectl.fish && source ~/.config/fish/completions/kubectl.fish A note on --all-namespacesAppending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:kubectl -AKubectl context and configurationSet which Kubernetes cluster kubectl communicates with and modifies configuration information. See Authenticating Across Clusters with kubeconfig documentation for detailed config file information.kubectl config view # Show Merged kubeconfig settings. # use multiple kubeconfig files at the same time and view merged config KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view # Show merged kubeconfig settings and raw certificate data and exposed secrets kubectl config view --raw # get the password for the e2e user kubectl config view -o jsonpath='{.users[?(@.name == ""e2e"")].user.password}' # get the certificate for the e2e user kubectl config view --raw -o jsonpath='{.users[?(@.name == ""e2e"")].user.client-certificate-data}' | base64 -d kubectl config view -o jsonpath='{.users[].name}' # display the first user kubectl config view -o jsonpath='{.users[*].name}' # get a list of users kubectl config get-contexts # display list of contexts kubectl config get-contexts -o name # get all context names kubectl config current-context # display the current-context kubectl config use-context my-cluster-name # set the default context to my-cluster-name kubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig # configure the URL to a
proxy server to use for requests made by this client in the kubeconfig kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url # add a new user to your kubeconfig that supports basic auth kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword # permanently save the namespace for all subsequent kubectl commands in that context. kubectl config set-context --current --namespace=ggckad-s2 # set a context utilizing a specific username and namespace. kubectl config set-context gce --user=cluster-admin --namespace=foo \ && kubectl config use-context gce kubectl config unset users.foo # delete user foo # short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace) alias kx='f() { [ ""$1"" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f' alias kn='f() { [ ""$1"" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d"" "" -f6 ; } ; f' Kubectl applyapply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes applications in production. See Kubectl Book.Creating objectsKubernetes manifests can be defined in YAML or JSON.
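As a sketch of the JSON alternative just mentioned, the same kind of manifest can be written in JSON form; the Pod name and image here are illustrative:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "example-pod"
  },
  "spec": {
    "containers": [
      {
        "name": "web",
        "image": "nginx"
      }
    ]
  }
}
```

kubectl apply -f accepts a file like this exactly as it would the equivalent YAML.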
The file extensions .yaml, .yml, and .json can be used.kubectl apply -f ./my-manifest.yaml # create resource(s) kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files kubectl apply -f ./dir # create resource(s) in all manifest files in dir kubectl apply -f https://example.com/manifest.yaml # create resource(s) from url (Note: this is an example domain and does not contain a valid manifest) kubectl create deployment nginx --image=nginx # start a single instance of nginx # create a Job which prints ""Hello World"" kubectl create job hello --image=busybox:1.28 -- echo ""Hello World"" # create a CronJob that prints ""Hello World"" every minute kubectl create cronjob hello --image=busybox:1.28 --schedule=""*/1 * * * *"" -- echo ""Hello World"" kubectl explain pods # get the documentation for pod manifests # Create multiple YAML objects from stdin kubectl apply -f - <<EOF ... EOF kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml # Generate spec for running pod nginx and write it into a file called pod.yaml kubectl attach my-pod -i # Attach to Running Container kubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod kubectl exec my-pod -- ls / # Run command in existing pod (1 container case) kubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case) kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case) kubectl debug my-pod -it --image=busybox:1.28 # Create an interactive debugging session within existing pod and immediately attach to it kubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it kubectl top pod # Show metrics for all pods in the default namespace kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory' Copying files and directories to and
from containerskubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally Note:kubectl cp requires that the 'tar' binary is present in your container image. If 'tar' is not present, kubectl cp will fail. For advanced use cases, such as symlinks, wildcard expansion or file mode preservation, consider using kubectl exec.tar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally Interacting with Deployments and Serviceskubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case) kubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case) kubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name kubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by the Deployment kubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases) Interacting with Nodes and clusterkubectl cordon my-node # Mark my-node as unschedulable kubectl drain my-node # Drain my-node in preparation for maintenance kubectl uncordon my-node # Mark my-node
as schedulable kubectl top node # Show metrics for all nodes kubectl top node my-node # Show metrics for a given node kubectl cluster-info # Display addresses of the control plane and services kubectl cluster-info dump # Dump current cluster state to stdout kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state # View existing taints on current nodes. kubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect' # If a taint with that key and effect already exists, its value is replaced as specified. kubectl taint nodes foo dedicated=special-user:NoSchedule Resource typesList all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:kubectl api-resources Other operations for exploring API resources:kubectl api-resources --namespaced=true # All namespaced resources kubectl api-resources --namespaced=false # All non-namespaced resources kubectl api-resources -o name # All resources with simple output (only the resource name) kubectl api-resources -o wide # All resources with expanded (aka ""wide"") output kubectl api-resources --verbs=list,get # All resources that support the ""list"" and ""get"" request verbs kubectl api-resources --api-group=extensions # All resources in the ""extensions"" API group Formatting outputTo output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.Output formatDescription-o=custom-columns=Print a table using a comma separated list of custom columns-o=custom-columns-file=Print a table using the custom columns template in the file-o=go-template=