text: string (lengths 20–57.3k)
labels: class label (4 classes)
Title: [FEATURE] Update available warning Body: **Describe the feature you'd like to see added** Was thinking about adding some kind of way to notify admins that there is an update available. Maybe display something in the console, maybe add a little notice at the footer. Oh yeah, speaking of the footer, it is not centered properly. Might make a pull request later if no one does.
0easy
Title: Add HeyGen Integration Body: Add "Create Avatar Video" block with HeyGen.<br><br>The clear and simple docs are as follows: [https://docs.heygen.com/reference/create-an-avatar-video-v2](https://docs.heygen.com/reference/create-an-avatar-video-v2)<br><br>Be sure to add a new HeyGen credential to the AutoGPT platform to handle the API key.
0easy
Title: ENH: Improvement of `attach_cls_member_docstring` to handle inherited `__doc__` Body: Currently, the function `attach_cls_member_docstring` retrieves the docstring from `member.__doc__`. This approach, however, fails when the member is inherited from a parent class and the `__doc__` is empty. It is notable that the `help` function can access the parent class's `__doc__`. For classes like `xorbits` that do not inherit from the `_mars` class, the `help` documentation would also be empty if `member.__doc__` is empty. This limitation could potentially obscure valuable information, especially for those relying on `help` to understand the functionalities. Thus, I suggest improving `attach_cls_member_docstring` to manage these instances effectively.
0easy
Title: [ENH] Safe arithmetic transforms Body: What bugs me is that there is no pd.Series.log(), pd.Series.exp(), etc. function that can be chained. To make matters worse, if your Series has zeros, np.log on the Series would return -inf for the zero entries, which breaks any downstream computation --- raising a warning while returning np.nan would be a saner alternative. Proposal: add simple chainable operations to pd.Series like - .log(warn=True) - .exp() - .logit(warn=True) [logit(x) = log(x/(1-x))] - .sigmoid() [inverse logit] - .probit(warn=True) [Phi^-1, where Phi is the standard normal CDF] - .normal_cdf() [inverse of probit] - .z_score() [standardize by centering and scaling by the standard deviation]
0easy
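The feature request above (safe arithmetic transforms) can be illustrated with a short sketch. This is editor-added illustration, not pandas API: `safe_log` is a hypothetical standalone helper showing the warn-and-return-NaN behaviour the issue proposes for a chainable `.log(warn=True)`.

```python
import warnings

import numpy as np
import pandas as pd


def safe_log(series: pd.Series, warn: bool = True) -> pd.Series:
    """Element-wise natural log; non-positive entries become NaN instead of -inf."""
    invalid = series <= 0
    if warn and invalid.any():
        warnings.warn(
            f"log undefined for {int(invalid.sum())} non-positive entries; returning NaN"
        )
    # Series.where() replaces invalid positions with NaN before np.log is applied,
    # so no -inf values are ever produced.
    return np.log(series.where(~invalid))


s = pd.Series([0.0, 1.0, np.e])
print(safe_log(s).tolist())  # NaN at the zero entry, no -inf
```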
Title: 'function' object has no attribute 'parameters' when I start to train the backbone in the 10th epoch Body: start training backbone. Traceback (most recent call last): File "../../tools/train.py", line 322, in <module> main() File "../../tools/train.py", line 317, in main train(train_loader, dist_model, optimizer, lr_scheduler, tb_writer) File "../../tools/train.py", line 194, in train optimizer, lr_scheduler = build_opt_lr(model.module, epoch) File "../../tools/train.py", line 77, in build_opt_lr for param in getattr(model.backbone, layer).parameters(): AttributeError: 'function' object has no attribute 'parameters' How can I deal with it?
0easy
Title: Simplify deterministic sampling for probabilistic models in production Body: First of all, thanks for this amazing library! It has made my work a lot easier. **Is your feature request related to a current problem? Please describe.** I'm wrapping a probabilistic forecasting model in a FastAPI endpoint for access by end users. I'm using `QuantileRegression` and the results look great. However, I noticed that the forecasts differ subtly between invocations when feeding in the same input timeseries. This is because the model instance uses its own RNG state that cannot be conveniently seeded from the outside using e.g. `torch.manual_seed(seed_int)` or `np.random.seed(seed_int)`. If I understand correctly, this is managed through the `@random_method` decorator defined here: https://github.com/unit8co/darts/blob/8e93720e6277726089058cf4d039b97de0f57c34/darts/utils/torch.py#L85 My model definition looks like the below. ```python model = NLinearModel( input_chunk_length=24*5, output_chunk_length=24, likelihood=QuantileRegression(quantiles=[0.01, 0.05, 0.2, 0.5, 0.8, 0.95, 0.99]), random_state=42, pl_trainer_kwargs={'deterministic': True} ) ``` **Describe proposed solution** The below is what I'm currently doing to re-seed the RNG before every invocation of `.predict()`. It would be great if there was a cleaner way to prevent `_random_instance` from being re-seeded on every invocation of `predict()` with a new random seed, i.e. to disable the effect of `@random_method`. ```python with torch.random.fork_rng(): # Re-seed the hidden RNG state of the model instance to ensure deterministic sampling. model._random_instance = np.random.RandomState(seed_int) prediction = model.predict( series=ts, n=24, num_samples=500, ) ```
0easy
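A possible way to package the workaround from the issue above into a reusable helper, so the re-seeding is not repeated at every call site. This is a sketch under the assumption that `_random_instance` remains the private attribute consulted by `@random_method`; `deterministic_predict` is a hypothetical name, not part of the darts API.

```python
import numpy as np
import torch


def deterministic_predict(model, series, n, num_samples, seed: int):
    """Re-seed the model's private RNG so repeated calls with the same input match exactly."""
    with torch.random.fork_rng():  # keep the global torch RNG state untouched
        # Assumption: @random_method reads this private attribute before sampling.
        model._random_instance = np.random.RandomState(seed)
        return model.predict(series=series, n=n, num_samples=num_samples)
```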
Title: [Feature]: OpenAI Response API Body: ### 🚀 The feature, motivation and pitch I come across this https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses&example=chain-of-thought#streaming wrt to their new Response API, so we probably also want to add support in vLLM. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
0easy
Title: AttributeError: dlsym(0x3c0c283b0, objc_msgSendSuper_stret): symbol not found Body: ### Describe the bug After upgrading Open Interpreter to version 0.2.0 New Computer and trying to run it I got this error: ```bash ~ $ interpreter --os Traceback (most recent call last): File "/Users/a.negelya/miniconda/bin/interpreter", line 8, in <module> sys.exit(interpreter.start_terminal_interface()) File "/Users/a.negelya/miniconda/lib/python3.10/site-packages/interpreter/core/core.py", line 25, in start_terminal_interface start_terminal_interface(self) File "/Users/a.negelya/miniconda/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 463, in start_terminal_interface __import__(package) File "/Users/a.negelya/miniconda/lib/python3.10/site-packages/pyautogui/__init__.py", line 246, in <module> import mouseinfo File "/Users/a.negelya/miniconda/lib/python3.10/site-packages/mouseinfo/__init__.py", line 100, in <module> from rubicon.objc import ObjCClass, CGPoint File "/Users/a.negelya/miniconda/lib/python3.10/site-packages/rubicon/objc/__init__.py", line 33, in <module> from . import api, collections, runtime, types File "/Users/a.negelya/miniconda/lib/python3.10/site-packages/rubicon/objc/api.py", line 29, in <module> from .runtime import ( File "/Users/a.negelya/miniconda/lib/python3.10/site-packages/rubicon/objc/runtime.py", line 460, in <module> libobjc.objc_msgSendSuper_stret.restype = None File "/Users/a.negelya/miniconda/lib/python3.10/ctypes/__init__.py", line 387, in __getattr__ func = self.__getitem__(name) File "/Users/a.negelya/miniconda/lib/python3.10/ctypes/__init__.py", line 392, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: dlsym(0x3c0c283b0, objc_msgSendSuper_stret): symbol not found ``` Updating and reinstalling `rubicon-objc`, `pyautogui`, `mouseinfo` didn't help. ### Reproduce 1. Open Interpreter to version 0.2.0 2. Run `$ interpreter --os` ### Expected behavior Running Open Interpreter in OS mode ### Screenshots _No response_ ### Open Interpreter version 0.2.0 ### Python version 3.10.10 ### Operating System name and version macOS Sonoma 14.2.1 ### Additional context _No response_
0easy
Title: drop session_key compression Body: Until Django 1.9, the username was restricted to 30 characters. For anonymous users, django-SHOP uses the session_key as a unique identifier. This required encoding the 32-character session_key into a shorter variant using base64 encoding. In Django 1.10 the maximum length of the username increased to 150 characters, so these encoding and decoding steps are no longer required and can be removed from the code.
0easy
Title: ENH: Reorganize the output of the log Body: The current log output is excessive and contains some redundant and invalid information for DEBUG level, which needs to be reorganized to make the log more concise and efficient.
0easy
Title: add inline examples to PythonCallable Body: https://docs.ploomber.io/en/latest/api/_modules/tasks/ploomber.tasks.PythonCallable.html#ploomber.tasks.PythonCallable * simple example with a couple of tasks and a few variations (one product, many products) * link to other relevant sections in the docs and other examples in projects
0easy
Title: [BUG] Error: Frame was detached Body: **SYSTEM INFO-** 1. Patchright- 1.48.0.post0 2. OS- Windows 11 3. Browser- Chromium I have been working on Instagram data scraper bot, switched to patchright from playwright given it's promising undetection features. The bot's playwright scraping code has been tested successfully for long operations but with patchright it's unexpectedly closing context after few extractions. As usual the code is bulky, so will share only relevant parts. Feel free to ask for any relevant code snippets. Context initialisation code is- ``` async def initialize_browser() -> tuple[Browser, BrowserContext]: # Ensure a user data directory exists for Patchright unique_user_data_dir = os.path.join(os.path.expanduser("~"), "Documents", "CustomChromeProfile") if not os.path.exists(unique_user_data_dir): os.makedirs(unique_user_data_dir) else: clear_old_data(unique_user_data_dir) logging.info(f"Using custom user data directory at: {unique_user_data_dir}") # Start Patchright's asynchronous context manager playwright = await async_playwright().start() # Launch a persistent context with recommended settings for undetectability context = await playwright.chromium.launch_persistent_context( user_data_dir=unique_user_data_dir, # Path for persistent user data channel="chrome", # Use Chrome for increased stealth headless=False, # Use headless=False for more undetectable mode (change as needed) no_viewport=True, # Disable viewport resizing for more natural behavior args=[ "--window-size=1920,1080", "--disk-cache-size=500000000" ] ) page = await context.new_page() logging.info("Custom Chrome profile initialized and page opened.") logging.info("Initialized Patchright Browser with persistent context and stealth mode enabled.") return context, page ``` **Expected** -Initialize the context with provided cookies -Open the main scrolling page in primary tab -Process collected post and profile links in a different tab **Actual** - It abruptly closes the context after 10-15 extraction - The terminal log's point towards frame detachment ``` INFO:root:Total links collected so far: 36 C:\Users\HP\AppData\Local\Programs\Python\Python312\Lib\site-packages\patchright\driver\package\lib\server\frames.js:606 if (this.isDetached()) throw new Error('Frame was detached'); ^ Error: Frame was detached at Frame._context (C:\Users\HP\AppData\Local\Programs\Python\Python312\Lib\site-packages\patchright\driver\package\lib\server\frames.js:606:34) Node.js v20.18.0 Error encountered: Page.goto: Connection closed while reading from the driver ``` Help me out with this issue !!! Thanks
0easy
Title: Good First Issue >> Renko Body: **Which version are you running? The lastest version is Github. Pip is for major releases.** ```python import pandas_ta as ta print(ta.version) ``` **Upgrade.** ```sh $ pip install -U git+https://github.com/twopirllc/pandas-ta ``` **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here. Thanks for using Pandas TA!
0easy
Title: Support `ETag` for static files Body: Render the `ETag` header when serving static files. We are already performing `.stat()` on the open file handle, so we should have the required data in order to build an ETag. We could use [the same algorithm](https://serverfault.com/questions/690341/algorithm-behind-nginx-etag-generation) as the popular Nginx web server. When serving, also check the [If-None-Match](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-None-Match) header, and render `HTTP 304 Not Modified` if appropriate.
0easy
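For the ETag issue above: the Nginx scheme referenced there derives the tag from the file's modification time and size, both of which come back from the `.stat()` call already being made. A minimal, framework-agnostic sketch of that idea; the helper names are illustrative, not the project's actual API.

```python
import os


def nginx_style_etag(stat: os.stat_result) -> str:
    """ETag in the Nginx format: hex(mtime)-hex(size), wrapped in double quotes."""
    return '"{:x}-{:x}"'.format(int(stat.st_mtime), stat.st_size)


def is_not_modified(if_none_match: str, etag: str) -> bool:
    """True when the client's If-None-Match header already covers this ETag (or is '*'),
    i.e. the server should answer HTTP 304 Not Modified."""
    if not if_none_match:
        return False
    candidates = [value.strip() for value in if_none_match.split(",")]
    return "*" in candidates or etag in candidates


stat = os.stat(__file__)
etag = nginx_style_etag(stat)
print(etag, is_not_modified(etag, etag))  # e.g. "65a1b2c3-2a4f" True
```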
Title: Profiler Options Documentation improvement of intra-documentation links Body: **Please provide the issue you face regarding the documentation** The hyperlink in the sentence below, in the [Profiler options](https://capitalone.github.io/DataProfiler/docs/0.8.1/html/profiler_example.html#Profiler-options) section of the Data Profiler documentation, needs to point to more in-depth documentation of the profiler options. ``` Full list of options in the Profiler section of the [DataProfiler documentation](https://capitalone.github.io/DataProfiler). ```
0easy
Title: Add warning for repeated validation and stacking using together Body: Repeated validation and stacking used together produce a data leak. When used together we should warn the user.
0easy
Title: Feature: CLI publish command Body: The user should be able to publish a test message right from the command line, something like this: ```bash faststream publish main:app message topic ``` Probably, all arguments after `main:app` can be interpreted as `broker.publish(*args)` and all `--topic=topicname` options can be interpreted as `broker.publish(**kwargs)`. Also, it should support an `--rpc` flag that prints the response to stdout, so it is suitable as a healthcheck container command.
0easy
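For the CLI publish proposal above, a sketch of how the suggested argument handling could split CLI tokens into `broker.publish(*args, **kwargs)`. The splitting function is hypothetical and framework-agnostic; a real command would also need to import the app object from `main:app` and honour the `--rpc` flag.

```python
def split_cli_arguments(tokens: list[str]) -> tuple[list[str], dict[str, str]]:
    """Everything after `main:app` becomes positional args; `--key=value` tokens become kwargs."""
    args: list[str] = []
    kwargs: dict[str, str] = {}
    for token in tokens:
        if token.startswith("--") and "=" in token:
            key, value = token[2:].split("=", 1)
            kwargs[key] = value
        else:
            args.append(token)
    return args, kwargs


# e.g. `faststream publish main:app message --topic=topicname`
args, kwargs = split_cli_arguments(["message", "--topic=topicname"])
print(args, kwargs)  # ['message'] {'topic': 'topicname'}
# the command would then call something like: await broker.publish(*args, **kwargs)
```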
Title: Getting disabled Socket Mode tests back in CI builds Body: As a few Socket Mode related tests started failing only in the GitHub Actions environment, I've disabled them in this commit: https://github.com/slackapi/python-slack-sdk/commit/75b554a5e9c746641aa1805fce0584e0577f378b We would like to get these tests running again in CI builds once we figure out a way to make them stable. ### Category (place an `x` in each of the `[ ]`) - [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client) - [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender) - [ ] **slack_sdk.models** (UI component builders) - [ ] **slack_sdk.oauth** (OAuth Flow Utilities) - [ ] **slack_sdk.socket_mode** (Socket Mode client) - [ ] **slack_sdk.audit_logs** (Audit Logs API client) - [ ] **slack_sdk.scim** (SCIM API client) - [ ] **slack_sdk.rtm** (RTM client) - [ ] **slack_sdk.signature** (Request Signature Verifier) ### Requirements Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
0easy
Title: Raise test coverage above 90% for giotto/diagrams/_utils.py Body: Current test coverage from pytest is 86%
0easy
Title: Remove "run_id" and "delete_existing" options: instead move old memory/workspace folder to "archive" by default Body: The first step in the main file would be to check for memory folder and workspace, if they exist create a new folder in "archive" e.g. with the name "currentdate_currenttime", and move everything there. This would make main.py much nicer, and make it clearly defined that all files, apart from `archive` folder, in the project directory are from the most recent run. (It is also a prerequisite to later add handling of logging to separate files when there are "multiple of the same steps")
0easy
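A sketch of the start-up step described in the issue above: if a previous run left memory or workspace folders behind, move them into an `archive/<currentdate>_<currenttime>` directory before anything else runs. The folder names are taken from the issue; the function name is illustrative.

```python
import shutil
from datetime import datetime
from pathlib import Path


def archive_previous_run(project_dir: Path, folders=("memory", "workspace")) -> None:
    """Move leftover run folders into archive/<currentdate_currenttime> so the
    project directory only contains files from the most recent run."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    destination = project_dir / "archive" / stamp
    for name in folders:
        source = project_dir / name
        if source.exists():
            destination.mkdir(parents=True, exist_ok=True)
            shutil.move(str(source), str(destination / name))


archive_previous_run(Path("."))
```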
Title: Scheduler cache.GetNodeInfo should return IsNotFound error if applicable Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: This [PR](https://github.com/kubernetes/kubernetes/commit/30b0f8bf3bed2c1c146a85ecdfc6b9cb18c5a303) switches NodeLister to scheduler cache for `GetNodeInfo`. However, the new `GetNodeInfo` implementation in the cache never returns IsNotFound error any more. Rather, it returns a normal error: https://github.com/kubernetes/kubernetes/blob/67d750bb2893d7a66d45d435dfc075fb17cc15ee/pkg/scheduler/internal/cache/cache.go#L696-L706 However, we rely on the error type for logging, for example: https://github.com/kubernetes/kubernetes/blob/7cb4850699a4a3c1008382c8b05f245b66ce19ad/pkg/scheduler/algorithm/predicates/predicates.go#L1386-L1394 **What you expected to happen**: The cache `GetNodeInfo()` should return IsNotFound error if applicable. **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: /sig scheduling @draveness cc @ahg-g
0easy
Title: Azure File Premium Persistent Volume fails due to HTTP 400 error Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: When attempting to create an Azure File Premium persistent volume claim, the following error appears in the controller-manager logs and the PVC remains in the the Pending phase. ``` E0924 20:36:48.295697 1 goroutinemap.go:150] Operation for "provision-menagerie/azurefile-premium-storage-class[48ef9365-a696-4cf7-95bd-b8f7ec9c935b]" failed. No retries permitted until 2019-09-24 20:37:20.295656182 +0000 UTC m=+11638.725602350 (durationBeforeRetry 32s). Error: "failed to create share kubernetes-dynamic-pvc-48ef9365-a696-4cf7-95bd-b8f7ec9c935b in account f935d54720bd34a9f9c1f8b: failed to create file share, err: storage: service returned error: StatusCode=400, ErrorCode=InvalidHeaderValue, ErrorMessage=The value for one of the HTTP headers is not in the correct format.\nRequestId:42bed230-801a-0059-7417-7360a8000000\nTime:2019-09-24T20:36:48.2977475Z, RequestInitiated=Tue, 24 Sep 2019 20:36:48 GMT, RequestId=42bed230-801a-0059-7417-7360a8000000, API Version=2016-05-31, QueryParameterName=, QueryParameterValue=" ``` **What you expected to happen**: A persistent volume should be created and bound. **How to reproduce it (as minimally and precisely as possible)**: Resources used: ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true"},"name":"azurefile-premium"},"parameters":{"skuName":"Premium_LRS"},"provisioner":"kubernetes.io/azure-file"} creationTimestamp: "2019-09-24T17:28:52Z" labels: kubernetes.io/cluster-service: "true" name: azurefile-premium resourceVersion: "5365" selfLink: /apis/storage.k8s.io/v1/storageclasses/azurefile-premium uid: e2adbf87-5de6-43d5-af1c-b118bb107f66 parameters: skuName: Premium_LRS provisioner: kubernetes.io/azure-file reclaimPolicy: Delete volumeBindingMode: Immediate --- apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"app":"azurefile-premium-storage-class"},"name":"azurefile-premium-storage-class","namespace":"menagerie"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"azurefile-premium"}} volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-file creationTimestamp: "2019-09-24T20:35:08Z" finalizers: - kubernetes.io/pvc-protection labels: app: azurefile-premium-storage-class name: azurefile-premium-storage-class namespace: menagerie resourceVersion: "49718" selfLink: /api/v1/namespaces/menagerie/persistentvolumeclaims/azurefile-premium-storage-class uid: 48ef9365-a696-4cf7-95bd-b8f7ec9c935b spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi storageClassName: azurefile-premium volumeMode: Filesystem status: phase: Pending ``` **Anything else we need to know?**: Standard_LRS Azure File PVCs/PVs seem to work just fine **Environment**: - Kubernetes version (use `kubectl version`): ``` Client Version: version.Info{Major:"1", Minor:"16", 
GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-19T13:57:45Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} ``` - Cloud provider or hardware configuration: Azure (not AKS) - OS (e.g: `cat /etc/os-release`): CoreOS 2191.5.0 - Kernel (e.g. `uname -a`): `Linux vmss-agent-worker0-11prodva7-baqwm00000W 4.19.66-coreos #1 SMP Mon Aug 26 21:06:50 -00 2019 x86_64 Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz GenuineIntel GNU/Linux` - Install tools: custom internal tooling, broadly similar to AKS engine - Network plugin and version (if this is a network-related bug): - Others:
0easy
Title: Many links in API documentation broken Body: **What happened**: There are many links in API comments that are broken (pointing to https://git.k8s.io/community/contributors/devel/api-conventions.md). One example is here: https://github.com/kubernetes/api/blob/master/core/v1/types.go#L3491; a quick search for that URL currently shows [84 results on github](https://github.com/kubernetes/api/search?q=https%3A%2F%2Fgit.k8s.io%2Fcommunity%2Fcontributors%2Fdevel%2Fapi-conventions.md&unscoped_q=https%3A%2F%2Fgit.k8s.io%2Fcommunity%2Fcontributors%2Fdevel%2Fapi-conventions.md) (though this obviously includes generated files too). These lead to broken user-facing doc links, such as in `oc explain` in OpenShift. **What you expected to happen**: I'm assuming these are supposed to be updated to https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
0easy
Title: Port parsing can overflow (TOB-K8S-015: Overflows when using strconv.Atoi and downcasting the result) Body: This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf) **Description** The strconv.Atoi function parses an int - a machine dependent integer type, which, for 64-bit targets will be int64. There are places throughout the codebase where the result returned from strconv.Atoi is later converted to a smaller type: int16 or int32. This may overflow with a certain input. An example of the issue has been included in Figure 1. ``` v, err := strconv.Atoi(options.DiskMBpsReadWrite) diskMBpsReadWrite = int32(v) ``` Figure 13.1: pkg/cloudprovider/providers/azure/azure_managedDiskController.go:105 Additionally, there are many code paths that parse ports, and do so differently and in a manner lacking checks for a proper port range. An example of this has been identified within kubectl when handling port values. Kubectl has the ability to expose particular Pod ports through the use of kubectl expose. This command uses the function updatePodPorts, which uses strconv.Atoi to parse a string into an integer, then downcasts it to an int32 (Figure 2). ``` // updatePodContainers updates PodSpec.Containers.Ports with passed parameters. func updatePodPorts(params map[string]string, podSpec *v1.PodSpec) (err error) { port := -1 hostPort := -1 if len(params["port"]) > 0 { port, err = strconv.Atoi(params["port"]) // <-- this should parse port as strconv.ParseUint(params["port"], 10, 16) if err != nil { return err } } // (...) // Don't include the port if it was not specified. if len(params["port"]) > 0 { podSpec.Containers[0].Ports = []v1.ContainerPort{ { ContainerPort: int32(port), // <-- this should later just be uint16(port) }, } ``` Figure 13.2: Relevant snippet of the updatePodPorts function. This error has been operationalized into a crash within kubectl when overflowing provided ports. Starting with a standard deployment with no services, we can observe the expected behavior (Figure 3). ``` root@k8s-1:~# cat nginx.yml apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nginx-deployment spec: selector: matchLabels: app: nginx replicas: 1 # tells deployment to run 2 pods matching the template template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 root@k8s-1:~# kubectl create -f nginx.yml deployment.apps/nginx-deployment created root@k8s-1:~# kubectl get pods NAME READY STATUS RESTARTS AGE nginx-deployment-76bf4969df-nskjh 1/1 Running 0 2m14s root@k8s-1:~# kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 30m ``` Figure 13.3: The deployment spec with service and Pod status. To trigger the overflow, we can now update the deployment through the kubectl expose command with an overflown port, overflowing from 4294967377 to 81 (Figure 4). ``` root@k8s-1:/home/vagrant# kubectl expose deployment nginx-deployment --port 4294967377 --target-port 80 service/nginx-deployment exposed ``` Figure 13.4: Overflowing the port parameter. We are now able to observe this overflown port when listing the services with kubectl get services (Figure 5). We are also able to access the service on the overflown port (Figure 6). 
``` root@k8s-1:/home/vagrant# kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 42m nginx-deployment ClusterIP 10.233.25.138 <none> 81/TCP 2s ``` Figure 13.5: The overflown port got exposed. ``` root@k8s-1:/home/vagrant# curl 10.233.25.138:81 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> ... ``` Figure 13.6: The result of curling the overflown service port. Furthering this issue, we are able to also overflow the target port. After deleting the service, we can attempt to overflow the target port as well, which will result in a panic in kubectl (Figure 7 and 8). ``` root@k8s-1:/home/vagrant# kubectl delete service nginx-deployment service "nginx-deployment" deleted root@k8s-1:/home/vagrant# kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 45m ``` Figure 13.7: The deletion of the deployment. ``` root@k8s-1:/home/vagrant# kubectl expose deployment nginx-deployment --port 4294967377 --target-port 4294967376 E0402 09:25:31.888983 3625 intstr.go:61] value: 4294967376 overflows int32 goroutine 1 [running]: runtime/debug.Stack(0xc000e54eb8, 0xc4f1e9b8, 0xa3ce32e2a3d43b34) /usr/local/go/src/runtime/debug/stack.go:24 +0xa7 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/intstr.FromInt(0x100000050, 0xa, 0x100000050, 0x0, 0x0) ... service/nginx-deployment exposed ``` Figure 13.8: The panic in kubectl when overflowing the target port. Despite the panic from kubectl (visible in Figure 8), the service is still exposed (Figure 9) and accessible (Figure 10). ``` root@k8s-1:/home/vagrant# kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 46m nginx-deployment ClusterIP 10.233.59.190 <none> 81/TCP 35s ``` Figure 13.9: The service is exposed despite the kubectl panic and overflow. ``` root@k8s-1:/home/vagrant# curl 10.233.59.190:81 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> ... ``` Figure 13.10: The service is also accessible after overflow. **Exploit Scenario** A value is parsed from a configuration file with Atoi, resulting in an integer. It is then downcasted to a lower precision value, resulting in a potential overflow or underflow which is not raised as an error or panic. **Recommendation** Short term, when parsing strings into fixed-width integer types, use strconv.ParseInt or strconv.ParseUint with appropriate bitSize argument instead of strconv.Atoi. Long term, ensure the validity of data and types. Parse and validate values with common functions. For example the ParsePort (cmd/kubeadm/app/util/endpoint.go:117) utility function parses and validates TCP port values, but it is not well used across the codebase. **Anything else we need to know?**: See #81146 for current status of all issues created from these findings. The vendor gave this issue an ID of TOB-K8S-015 and it was finding 13 of the report. The vendor considers this issue Medium Severity. To view the original finding, begin on page 42 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf) **Environment**: - Kubernetes version: 1.13.4
0easy
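The report above recommends `strconv.ParseUint(s, 10, 16)` so out-of-range ports are rejected instead of silently wrapped by the int32 downcast. The same validation idea, sketched in Python for illustration; note that 4294967377 mod 2^32 reproduces the 81 observed in the report.

```python
def parse_port(value: str) -> int:
    """Parse a TCP/UDP port strictly: digits only, and within the 16-bit range."""
    if not value.isdigit():
        raise ValueError(f"port {value!r} is not a non-negative integer")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port {port} is outside the valid range 1-65535")
    return port


print(parse_port("30220"))   # 30220
print(4294967377 % 2**32)    # 81 -- the value the unchecked downcast silently produced
try:
    parse_port("4294967377")
except ValueError as exc:
    print(exc)               # rejected instead of wrapping to 81
```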
Title: TOB-K8S-004: Pervasive world-accessible file permissions Body: This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf) **Description** Kubernetes uses files and directories to store information ranging from key-value data to certificate data to logs. However, a number of locations have world-writable directories: ``` cluster/images/etcd/migrate/rollback_v2.go:110: if err := os.MkdirAll(path.Join(migrateDatadir, "member", "snap"), 0777); err != nil { cluster/images/etcd/migrate/data_dir.go:49: err := os.MkdirAll(path, 0777) cluster/images/etcd/migrate/data_dir.go:87: err = os.MkdirAll(backupDir, 0777) third_party/forked/godep/save.go:472: err := os.MkdirAll(filepath.Dir(dst), 0777) third_party/forked/godep/save.go:585: err := os.MkdirAll(filepath.Dir(name), 0777) pkg/volume/azure_file/azure_util.go:34: defaultFileMode = "0777" pkg/volume/azure_file/azure_util.go:35: defaultDirMode  = "0777" pkg/volume/emptydir/empty_dir.go:41:const perm os.FileMode = 0777 ``` Figure 7.1: World-writable (0777) directories and defaults Other areas of the system use world-writable files as well: ``` cluster/images/etcd/migrate/data_dir.go:147: return ioutil.WriteFile(v.path, data, 0666) cluster/images/etcd/migrate/migrator.go:120: err := os.Mkdir(backupDir, 0666) third_party/forked/godep/save.go:589: return ioutil.WriteFile(name, []byte(body), 0666) pkg/kubelet/kuberuntime/kuberuntime_container.go:306: if err := m.osInterface.Chmod(containerLogPath, 0666); err != nil { pkg/volume/cinder/cinder_util.go:271: ioutil.WriteFile(name, data, 0666) pkg/volume/fc/fc_util.go:118: io.WriteFile(fileName, data, 0666) pkg/volume/fc/fc_util.go:128: io.WriteFile(name, data, 0666) pkg/volume/azure_dd/azure_common_linux.go:77: if err = io.WriteFile(name, data, 0666); err != nil { pkg/volume/photon_pd/photon_util.go:55: ioutil.WriteFile(fileName, data, 0666) pkg/volume/photon_pd/photon_util.go:65: ioutil.WriteFile(name, data, 0666) ``` Figure 7.2: World-writable (0666) files A number of locations in the code base also rely on world-readable directories and files. For example, Certificate Signing Requests (CSRs) are written to a directory with mode 0755 (world readable and browseable) with the actual CSR having mode 0644 (world-readable): ``` // WriteCSR writes the pem-encoded CSR data to csrPath. // The CSR file will be created with file mode 0644. // If the CSR file already exists, it will be overwritten. // The parent directory of the csrPath will be created as needed with file mode 0755. func WriteCSR(csrDir, name string, csr *x509.CertificateRequest) error { ... if err := os.MkdirAll(filepath.Dir(csrPath), os.FileMode(0755)); err != nil { ... } if err := ioutil.WriteFile(csrPath, EncodeCSRPEM(csr), os.FileMode(0644)); err != nil { ... } ... } ``` Figure 7.3: Documentation and code from cmd/kubeadm/app/util/pkiutil/pki_helpers.go **Exploit Scenario** Alice wishes to migrate some etcd values during normal cluster maintenance. Eve has local access to the cluster’s filesystem, and modifies the values stored during the migration process, granting Eve further access to the cluster as a whole. **Recommendation** Short term, audit all locations that use world-accessible permissions. Revoke those that are unnecessary. Very few files truly need to be readable by any user on a system. Almost none should need to allow arbitrary system users write access. 
Long term, use system groups and extended Access Control Lists (ACLs) to ensure that all files and directories created by Kubernetes are accessible by only those users and groups that should be able to access them. This will ensure that only the appropriate users with the correct Unix-level groups may access data. Kubernetes may describe what these groups should be, or create a role-based system to which administrators may assign users and groups. **Anything else we need to know?**: See #81146 for current status of all issues created from these findings. The vendor gave this issue an ID of TOB-K8S-004 and it was finding 8 of the report. The vendor considers this issue Medium Severity. To view the original finding, begin on page 32 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf) **Environment**: - Kubernetes version: 1.13.4
0easy
Title: Different webhooks with the same name have the same label value in metric webhook_admission_duration_seconds Body: **What happened**: Different webhooks with the same name get the same label value in the metric `webhook_admission_duration_seconds`. Currently, we can add two or more webhooks using the same name. Webhooks with the same name will get the same label value in `webhook_admission_duration_seconds`, so different webhooks are recorded into the same metric series. **What you expected to happen**: Distinguish webhooks that share a name by using their AdmissionConfiguration (MutatingWebhookConfiguration or ValidatingWebhookConfiguration) name, i.e. add a new label `configuration_name` to the metric `webhook_admission_duration_seconds`. **How to reproduce it (as minimally and precisely as possible)**: Create two MutatingWebhookConfigurations which have the same webhook name, like the following: ```yaml apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration metadata: name: mutating-a webhooks: - name: mutating-pod.k8s.io ... --- apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration metadata: name: mutating-b webhooks: - name: mutating-pod.k8s.io ... ``` **Anything else we need to know?**: Nothing yet. **Environment**: Not relevant.
0easy
Title: kube-proxy deprecated flags are misorganized Body: kube-proxy contains a comment that "all flags below are deprecated", but many non-deprecated flags are in that comment block. https://github.com/kubernetes/kubernetes/blob/48848731605845c129945f1de146507ca2e5d8cb/cmd/kube-proxy/app/server.go#L149 Please minimally rearrange the flags, so that deprecated & non-deprecated flags are properly grouped.
0easy
Title: Add-On manager should not be creating kube-system namespace Body: **What happened**: Add-on manager uses kubectl apply to create the kube-system namespace at start-up. This is redundant because the kube-apiserver creates the namespace in NewBootstrapController. This is a problem because if someone uses apply to make changes to the kube-system namespace, the add-on manager will wipe out these changes. **What you expected to happen**: Kube-APIServer has the sole responsibility of creating the kube-system namespace. **How to reproduce it (as minimally and precisely as possible)**: kubectl apply a label to the kube-system namespace. restart the add-on manager. Note the labels have been removed. **Anything else we need to know?**: /sig api-machinery **Environment**: - Kubernetes version (use `kubectl version`): - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`): - Install tools: - Others:
0easy
Title: 1000s of warnings: FindExpandablePluginBySpec err:no volume plugin matched Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!--> **What happened**: In an otherwise-stable bare-metal setup, with only LocalVolume and NFS storage, and without any volume mount activity, I get 80,000+ of these warning messages per day: ``` kube-system.kube-controller-manager kube-controller-manager-borg0.ci W0209 16:21:55.722176 1 plugins.go:845] \ FindExpandablePluginBySpec(vinson-cloud) -> err:no volume plugin matched ``` **What you expected to happen**: No warnings; and if there is a reason to generate these warnings, more info about what operation is being attempted should be added to the warnings message. **How to reproduce it (as minimally and precisely as possible)**: I don't know. This issue emerged after I added about 40 services and 300 volumes on a 3-worker cluster. **Anything else we need to know?**: ``` $ kubectl get storageclasses NAME PROVISIONER AGE local-storage (default) kubernetes.io/no-provisioner 63d nfs-client kubernetes.io/no-provisioner 62d ``` **Environment**: - Kubernetes version (use `kubectl version`): 1.13.0 ``` Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} ``` - Cloud provider or hardware configuration: Bare-metal - OS (e.g. from /etc/os-release): Ubuntu 18.04.1 bionic - Kernel (e.g. `uname -a`): 4.15.0-42-generic - Install tools: kubeadm - Others:
0easy
Title: Creating selectors is expensive in EndpointsController Body: When profiling scalability tests at large scale, it appears that the function `SelectorFromValidatedSet` can consume as much as 10% of CPU. In the profiles it comes purely from GetPodServices in EndpointsController. The reason is that for every single call of GetPodServices: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/endpoint/endpoints_controller.go#L170 we build the selector from scratch for every single service in that namespace: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/listers/core/v1/service_expansion.go#L49 Even though a single such operation is not very expensive, the number of them results in a significant percentage of overall CPU usage. The longer-term solution is to do exactly what was done e.g. for ReplicaSet: https://github.com/kubernetes/api/blob/master/apps/v1/types.go#L723 - use an appropriate type for the selector once we create a v2 Service API. That would mean the in-memory stored services would already have the selector created, which would eliminate this cost completely. I'm wondering if there is something more short-term we can do here. @kubernetes/sig-network-misc @freehan @kubernetes/sig-scalability-misc
0easy
Title: Replace kubernetes/pkg/util/strings with k8s.io/utils/tree/master/strings Body: At some point, the code in https://github.com/kubernetes/kubernetes/tree/master/pkg/util/strings/ was moved to https://github.com/kubernetes/utils/tree/master/strings However we have not started using the code in the kubernetes/utils library. We should switch over to the vendored library and delete the duplicate code in the main repo.
0easy
Title: Replace kubernetes/pkg/util/keymutex with k8s.io/utils/tree/master/keymutex Body: At some point, the code in https://github.com/kubernetes/kubernetes/tree/master/pkg/util/keymutex/ was moved to https://github.com/kubernetes/utils/tree/master/keymutex However we have not started using the code in the kubernetes/utils library. We should switch over to the vendored library and delete the duplicate code in the main repo.
0easy
Title: Replace kubernetes/pkg/util/net/sets with code in k8s.io/utils/net/ Body: At some point, the code in https://github.com/kubernetes/kubernetes/tree/master/pkg/util/net/ (net.go and ipnet.go) was moved to https://github.com/kubernetes/utils/tree/master/net However we have not started using the code in the kubernetes/utils library. We should switch over to the vendored library and delete the duplicate code in the main repo.
0easy
Title: kube-proxy: ipv4/ipv6 mix crashes: KUBE-MARK-DROP':No such file or directory Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!--> Hi there, I'm not entirely sure if this is a bug or a funny configuration fault. Since that exact configuration worked before, I'm still reporting it. **What happened**: Having mixed IPv4 and IPv6 addresses for pods prevented `kube-proxy` (in `iptables` mode) from creating iptables rules. Logged was an error from iptables-save: `Couldn't load target 'KUBE-MARK-DROP':No such file or directory`. Since there weren't a lot iptables rules written before the crashing command, network was completely down on that node. This problem only affected a single node of my 4 node cluster (3 controllers which are also workers, one additional worker). Affected node was a controller node. **What you expected to happen**: Create iptables rules as the cluster did before I started with the upgrades. **How to reproduce it (as minimally and precisely as possible)**: I started today with upgrading my cluster from 1.10.11 to 1.13.2 (installing every latest patch version for every minor in order). This problem started shortly after rolling out 1.12.5 after a node had an unrelated problem and had to be restarted. I think that reboot only uncovered the bug and the other nodes would also be affected after a reboot (or `kube-proxy --cleanup`). I had a funny-but-working config. My containers all have IPv4 addresses, but kubelet was configured to use the IPv6 address of the node (`--node-ip`, `--address`). This resulted in Pods created by a DaemonSet having IPv6 addresses. That exact configuration worked on 1.10.11 and broke somewhere newer. The versions installed today before seeing the bug were 1.10.12, 1.11.7 and 1.12.5. Fixing the kubelet config to use the IPv4 node address in the flags given above, restarting all kubelets and kube-proxies and then deleting all pods created by DaemonSets fixed the issue. **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): stated above - Cloud provider or hardware configuration: virtual machines, different specs - OS (e.g. from /etc/os-release): Debian Stretch (current stable) - Kernel (e.g. `uname -a`): 4.9.0-8-amd64 (debian stretch kernel) - Install tools: custom playbooks for Ansible, can be provided with history regarding this bug - Others: no CNI plugin but static routes
0easy
Title: Need NodePort test for mixed-OS clusters Body: We need a test case that will verify that NodePort works on all nodes in the cluster for a Windows application. Here's an example manual test: Run `kubectl create -f` with this yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: iis-2019 labels: app: iis-2019 spec: replicas: 1 template: metadata: name: iis-2019 labels: app: iis-2019 spec: containers: - name: iis image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 resources: limits: cpu: 1 memory: 800m requests: cpu: .1 memory: 300m ports: - containerPort: 80 nodeSelector: "beta.kubernetes.io/os": windows selector: matchLabels: app: iis-2019 --- apiVersion: v1 kind: Service metadata: name: iis spec: type: NodePort ports: - protocol: TCP port: 80 selector: app: iis-2019 ``` Once the pod is running, get the nodeport and make sure it's accessible on each node IP for that port. ``` $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE iis NodePort 10.0.88.64 <none> 80:30220/TCP 10m kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d azureuser@k8s-master-30163644-0:~$ kubectl get node -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME 3016k8s010 Ready agent 10d v1.13.2 10.240.0.35 <none> Windows Server Datacenter 10.0.17763.253 docker://18.9.0 3016k8s011 Ready agent 10d v1.13.2 10.240.0.96 <none> Windows Server Datacenter 10.0.17763.253 docker://18.9.0 k8s-linuxpool1-30163644-0 Ready agent 10d v1.13.2 10.240.0.4 <none> Ubuntu 16.04.5 LTS 4.15.0-1036-azure docker://3.0.1 k8s-linuxpool1-30163644-1 Ready agent 10d v1.13.2 10.240.0.127 <none> Ubuntu 16.04.5 LTS 4.15.0-1036-azure docker://3.0.1 k8s-master-30163644-0 Ready master 10d v1.13.2 10.255.255.5 <none> Ubuntu 16.04.5 LTS 4.15.0-1036-azure docker://3.0.1 azureuser@k8s-master-30163644-0:~$ curl -s -o /dev/null -w "%{http_code}" http://10.240.0.35:30220 200azureuser@k8s-master-30163644-0:~$ curl -s -o /dev/null -w "%{http_code}" http://10.240.0.96:30220 200azureuser@k8s-master-30163644-0:~$ curl -s -o /dev/null -w "%{http_code}" http://10.240.0.4:30220 200azureuser@k8s-master-30163644-0:~$ curl -s -o /dev/null -w "%{http_code}" http://10.240.0.127:30220 200 ``` /good-first-issue /kind test /sig windows
0easy
Title: `failed to garbage collect required amount of images. Wanted to free 473842483 bytes, but freed 0 bytes` Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!--> **What happened**: I've been seeing a number of evictions recently that appear to be due to disk pressure: ```yaml $$$ kubectl get pod kumo-go-api-d46f56779-jl6s2 --namespace=kumo-main -o yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: 2018-12-06T10:05:25Z generateName: kumo-go-api-d46f56779- labels: io.kompose.service: kumo-go-api pod-template-hash: "802912335" name: kumo-go-api-d46f56779-jl6s2 namespace: kumo-main ownerReferences: - apiVersion: extensions/v1beta1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: kumo-go-api-d46f56779 uid: c0a9355e-f780-11e8-b336-42010aa80057 resourceVersion: "11617978" selfLink: /api/v1/namespaces/kumo-main/pods/kumo-go-api-d46f56779-jl6s2 uid: 7337e854-f93e-11e8-b336-42010aa80057 spec: containers: - env: - redacted... image: gcr.io/<redacted>/kumo-go-api@sha256:c6a94fc1ffeb09ea6d967f9ab14b9a26304fa4d71c5798acbfba5e98125b81da imagePullPolicy: Always name: kumo-go-api ports: - containerPort: 5000 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-t6jkx readOnly: true dnsPolicy: ClusterFirst nodeName: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: default-token-t6jkx secret: defaultMode: 420 secretName: default-token-t6jkx status: message: 'The node was low on resource: nodefs.' phase: Failed reason: Evicted startTime: 2018-12-06T10:05:25Z ``` Taking a look at `kubectl get events`, I see these warnings: ``` $$$ kubectl get events LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 2m 13h 152 gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91 Node Warning ImageGCFailed kubelet, gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s (combined from similar events): failed to garbage collect required amount of images. Wanted to free 473948979 bytes, but freed 0 bytes 37m 37m 1 gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e3127ebc715c3 Node Warning ImageGCFailed kubelet, gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s failed to garbage collect required amount of images. Wanted to free 473674547 bytes, but freed 0 bytes ``` Digging a bit deeper: ```yaml $$$ kubectl get event gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91 -o yaml apiVersion: v1 count: 153 eventTime: null firstTimestamp: 2018-12-07T11:01:06Z involvedObject: kind: Node name: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s uid: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s kind: Event lastTimestamp: 2018-12-08T00:16:09Z message: '(combined from similar events): failed to garbage collect required amount of images. 
Wanted to free 474006323 bytes, but freed 0 bytes' metadata: creationTimestamp: 2018-12-07T11:01:07Z name: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91 namespace: default resourceVersion: "381976" selfLink: /api/v1/namespaces/default/events/gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91 uid: 65916e4b-fa0f-11e8-ae9a-42010aa80058 reason: ImageGCFailed reportingComponent: "" reportingInstance: "" source: component: kubelet host: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s type: Warning ``` There's actually remarkably little here. This message doesn't say anything regarding why ImageGC was initiated or why it was unable recover more space. **What you expected to happen**: Image GC to work correctly, or at least fail to schedule pods onto nodes that do not have sufficient disk space. **How to reproduce it (as minimally and precisely as possible)**: Run and stop as many pods as possible on a node in order to encourage disk pressure. Then observe these errors. **Anything else we need to know?**: n/a **Environment**: - Kubernetes version (use `kubectl version`): ``` Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.7-gke.11", GitCommit:"fa90543563c9cfafca69128ce8cd9ecd5941940f", GitTreeState:"clean", BuildDate:"2018-11-08T20:22:21Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"} ``` - Cloud provider or hardware configuration: GKE - OS (e.g. from /etc/os-release): I'm running macOS 10.14, nodes are running Container-Optimized OS (cos). - Kernel (e.g. `uname -a`): `Darwin D-10-19-169-80.dhcp4.washington.edu 18.0.0 Darwin Kernel Version 18.0.0: Wed Aug 22 20:13:40 PDT 2018; root:xnu-4903.201.2~1/RELEASE_X86_64 x86_64` - Install tools: n/a - Others: n/a <!-- DO NOT EDIT BELOW THIS LINE --> /kind bug
0easy
Title: change service port with AWS NLB, old security group rules not deleted Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!--> **What happened**: I deployed a cluster on AWS and created a LoadBalancer type service. When I modify the port and nodePort of this service, the AWS load balancer's security group only adds a rule for the new port and does not delete the rule for the originally opened port. **What you expected to happen**: The originally opened port should be deleted, keeping only the new port. **How to reproduce it (as minimally and precisely as possible)**: create a service like this: svc.yaml kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-type: nlb spec: selector: app: nginx ports: - name: internet-nlb nodePort: 30080 port: 30080 protocol: TCP targetPort: 80 type: LoadBalancer then the security group will add two rules: Custom TCP Rule | TCP | 30080 | 0.0.0.0/0 | kubernetes.io/rule... Custom TCP Rule | TCP | 30080 | 172.31.0.0/16 | kubernetes.io/rule... then I modify the service port, changing nodePort to 30081 and port to 30081, and the security group becomes Custom TCP Rule | TCP | 30080 | 0.0.0.0/0 | kubernetes.io/rule... Custom TCP Rule | TCP | 30080 | 172.31.0.0/16 | kubernetes.io/rule... Custom TCP Rule | TCP | 30081 | 0.0.0.0/0 | kubernetes.io/rule... Custom TCP Rule | TCP | 30081 | 172.31.0.0/16 | kubernetes.io/rule... I think the rules containing 30080 should be removed, since nothing is open on 30080 now **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): v1.11.2 - Cloud provider or hardware configuration: - OS (e.g. from /etc/os-release): CentOS Linux 7 (Core) - Kernel (e.g. `uname -a`): Linux enn-2-129 3.10.0-693.el7.x86_64 - Install tools: install k8s cluster with binary and Configuration file - Others: <!-- DO NOT EDIT BELOW THIS LINE --> /kind bug
0easy
Title: API docs for job status should clearly explain what fields indicate success or failure Body: The API docs for jobs are unclear as to what fields a caller would use to determine job success or failure in 1.11 (and we haven't updated them in a while) ``` status: conditions: - lastProbeTime: 2018-09-15T19:51:27Z lastTransitionTime: 2018-09-15T19:51:27Z message: Job has reached the specified backoff limit reason: BackoffLimitExceeded status: "True" type: Failed failed: 6 startTime: 2018-09-15T19:46:13Z ``` ``` $ kubectl explain jobs.status KIND: Job VERSION: batch/v1 RESOURCE: status <Object> DESCRIPTION: Current status of a job. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status JobStatus represents the current state of a Job. FIELDS: active <integer> The number of actively running pods. completionTime <string> Represents time when the job was completed. It is not guaranteed to be set in happens-before order across separate operations. It is represented in RFC3339 form and is in UTC. conditions <[]Object> The latest available observations of an object's current state. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ failed <integer> The number of pods which reached phase Failed. startTime <string> Represents time when the job was acknowledged by the job controller. It is not guaranteed to be set in happens-before order across separate operations. It is represented in RFC3339 form and is in UTC. succeeded <integer> The number of pods which reached phase Succeeded. ``` A casual reader would infer that completionTime is set when the job fails (but it isn't). Linking out to the docs is ok, but I shouldn't have to click through the links to understand how to use the Job object. The Job description and the job status fields should *clearly* indicate what fields are set in completion states on the job, because that's 100% of the point of docs. Taking a summary of the jobs-run-to-completion link and putting it in job.status.conditions, job.status, and job would improve the actual utility of our API doc. We should also do a pass on other objects and make the actual in server docs more useful. @kubernetes/sig-apps-bugs
0easy
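For the job-status issue above, the key point the docs should state is that success and failure are surfaced through `status.conditions` entries of type `Complete` or `Failed` with status `"True"`, not through `completionTime` alone. A small sketch of how a caller would read that, treating the status shown in the issue as plain data:

```python
def job_outcome(status: dict):
    """Return 'succeeded', 'failed', or None if the job is still running,
    based solely on status.conditions as described above."""
    for condition in status.get("conditions", []):
        if condition.get("status") != "True":
            continue
        if condition.get("type") == "Complete":
            return "succeeded"
        if condition.get("type") == "Failed":
            return "failed"
    return None


status = {
    "conditions": [
        {
            "type": "Failed",
            "status": "True",
            "reason": "BackoffLimitExceeded",
            "message": "Job has reached the specified backoff limit",
        }
    ],
    "failed": 6,
}
print(job_outcome(status))  # failed
```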
Title: HPA metrics client assumes all resources are namespaced Body: /kind bug The HPA controller's metrics client currently assumes all objects referenced in the `Object` metrics type are namespaced (unless it's namespaces themselves). This is wrong -- we should be using a RESTMapper to figure out the scope. When we do this, we need to make certain that we scope down the HPA's default permissions so that a normal user can't use the HPA to discover metrics that they otherwise shouldn't be able to (e.g. node metrics).
0easy
Title: kube-proxy: node-port ipv6 port open on v4 cluster Body: /kind bug **What happened**: When creating a node port application (in this case the example guestbook) on a host server that is dual stacked and has AAAA and A DNS entries, the node port on the v6 address is open to kube-proxy and hangs. **What you expected to happen**: The v6 address would refuse the connection and the client would fall back onto the v4 address which would connect correctly. **How to reproduce it (as minimally and precisely as possible)**: ``` apiVersion: kubeadm.k8s.io/v1alpha2 kind: MasterConfiguration clusterName: kubernetes api: advertiseAddress: <ipv4 address> networking: dnsDomain: cluster.local podSubnet: 192.168.0.0/16 serviceSubnet: 172.30.0.0/16 bootstrapTokens: - token: v315ij.jrg9nu08ayho78q7 kubeProxy: config: mode: ipvs ipvs: scheduler: lc ``` kubeadm init with the above, and join in some workers. Then install the guestbook application https://kubernetes.io/docs/tutorials/stateless-application/guestbook/ Obtain the node port with `kubectl get services frontend` and telnet to one of the servers on that port from a dual stacked client. The client will connect and refuse to respond to 'GET /' telnet directly to the IPv4 address of the server on the port and all works as expected. **Anything else we need to know?**: kube-proxy is noted as listening on the port. ``` ubuntu@srv-jidtj:~$ sudo ss -l -t -p | grep 31759 LISTEN 0 128 *:31759 *:* users:(("kube-proxy",pid=2268,fd=7)) ``` **Environment**: - Kubernetes version (use `kubectl version`): ubuntu@srv-jidtj:~$ kubectl version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} - Cloud provider or hardware configuration: Brightbox - OS (e.g. from /etc/os-release): Ubuntu 18.04.1 LTS - Kernel (e.g. `uname -a`): Linux srv-jidtj 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux - Install tools: kubeadm 1.11 - Others:
0easy
Title: Literal "$Format" in API server logs Body: /kind bug **What happened**: I see the literal string "(dollar_sign)Format" in the API server logs: ``` I1101 07:05:53.364424 7 wrap.go:42] GET /api/v1/namespaces/kube-system/services/default-http-backend: (1.73008ms) 404 [[glbc/v0.0.0 (linux/amd64) kubernetes/$Format] 127.0.0.1:52702] ``` Note that most lines are fine, it seems that just some binaries don't have the placeholder substituted: ``` I1101 07:06:29.825192 7 wrap.go:42] POST /apis/authorization.k8s.io/v1beta1/subjectaccessreviews: (361.716µs) 201 [[kube-controller-manager/v1.9.0 (linux/amd64) kubernetes/7f9f847/system:serviceaccount:kube-system:certificate-controller] [::1]:50612] ``` **What you expected to happen**: Expected to see kubernetes commit ID or tag name in all cases. **How to reproduce it (as minimally and precisely as possible)**: Run an e2e test, grep for Format in API server log. **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): master - Cloud provider or hardware configuration: GCE
0easy
Title: Move selector immutability check to validation after v1beta1 retires Body: The check for selector immutability is located at `PrepareForUpdate` functions of `deploymentStrategy`, `rsStrategy` and `daemonSetStrategy`. We are not able to have the check at validation before _v1beta1_ API version retires due to breaking change to some tests (discussed in this closed [PR](https://github.com/kubernetes/kubernetes/pull/50348)). Once _v1beta1_ retires, we should move the check to validation.
0easy
Title: Sign error in comment on resource.ScaledValue Body: <!-- This form is for bug reports and feature requests ONLY! If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). --> **Is this a BUG REPORT or FEATURE REQUEST?**: > Uncomment only one, leave it on its own line: > /kind bug > /kind feature **What happened**: I invoked `ScaledValue(-9)` on a resource.Quantity, and got a result that was multiplied by a factor of 10^9. **What you expected to happen**: I expected to get a value that was multiplied by a factor of 10^-9, because the comment says "ScaledValue returns the value of ceil(q * 10^scale)". **How to reproduce it (as minimally and precisely as possible)**: ``` q, _ := resource.ParseQuantity("10") i := q.ScaledValue(-1) ``` print `i`. You will find it is 100, not 1. **Anything else we need to know?**: Look at the meaning of the `AsScaledInt64` method invoked in the integer case. This method returns the quantity expressed as a multiple of 10^scale. 10 is 100 tenths. **Environment**: - Kubernetes version (use `kubectl version`): master
0easy
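A runnable confirmation of the ScaledValue report above, assuming `k8s.io/apimachinery/pkg/api/resource` is available on the module path. It shows that the method returns the quantity expressed in units of 10^scale (rounded up), which is what the doc comment ought to say instead of `ceil(q * 10^scale)`.
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	q := resource.MustParse("10")

	// The doc comment promises ceil(q * 10^scale); for scale -1 that would be 1.
	// The observed result is the value expressed in units of 10^scale, rounded up:
	fmt.Println(q.ScaledValue(-1)) // 100 (10 is 100 tenths)
	fmt.Println(q.ScaledValue(0))  // 10
	fmt.Println(q.ScaledValue(3))  // 1  (10 rounded up to one "thousand")
}
```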
Title: Kubelet does not assign node address on expanded ipv6 address Body: ### What happened? In AWS EKS a issue was discovered in ipv6 only clusters. When nodes with expanded ipv6 address are created `2600:1f14:1d4:d101:0:0:0:ba3d` the string comparison [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/nodestatus/setters.go#L129) ends up comparing `2600:1f14:1d4:d101:0:0:0:ba3d` and `2600:1f14:1d4:d101::ba3d` and does not assign the ipv6 ip to the node object. ### What did you expect to happen? The string comparison should happen on sanitized ipv6 addess so that any different format of the ipv6 address being sent through the string works. ### How can we reproduce it (as minimally and precisely as possible)? On AWS EKS ipv6 clusters, scale the worker nodes until a worker node appears without the full 128bit ipv6 address octet. When such nodes are created, the node objects won't have internal-ip associated with them. ### Anything else we need to know? _No response_ ### Kubernetes version <details> 1.21/1.22/1.23 ```console $ kubectl version # paste output here ``` </details> ### Cloud provider <details> EKS </details> ### OS version <details> Ubuntu ```console # On Linux: $ cat /etc/os-release # paste output here $ uname -a # paste output here # On Windows: C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture # paste output here ``` </details> ### Install tools <details> </details> ### Container runtime (CRI) and and version (if applicable) <details> </details> ### Related plugins (CNI, CSI, ...) and versions (if applicable) <details> </details>
0easy
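A minimal sketch of the sanitized comparison the expanded-IPv6 report above asks for, using only the standard library; it is not the actual `setters.go` code, just the parse-and-compare idea. The expanded and compressed spellings compare equal here, while the raw string comparison in the current code does not.
```go
package main

import (
	"fmt"
	"net"
)

// sameIP compares two address strings by their parsed canonical form, so the
// expanded and compressed spellings of one IPv6 address are treated as equal.
func sameIP(a, b string) bool {
	ipA, ipB := net.ParseIP(a), net.ParseIP(b)
	if ipA == nil || ipB == nil {
		return a == b // fall back to string comparison for non-IP values
	}
	return ipA.Equal(ipB)
}

func main() {
	fmt.Println(sameIP("2600:1f14:1d4:d101:0:0:0:ba3d", "2600:1f14:1d4:d101::ba3d")) // true
	fmt.Println("2600:1f14:1d4:d101:0:0:0:ba3d" == "2600:1f14:1d4:d101::ba3d")       // false
}
```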
Title: v1.Time Stringer implementation does not protect against nil-pointer dereference Body: ### What happened? The `v1.Time` struct implements the `Stringer` interface because the embedded type (`time.Time`) does. This is fine in production environments, even when the `*v1.Time` pointer is `nil` because of the [catchPanic](https://cs.opensource.google/go/go/+/refs/tags/go1.17.2:src/fmt/print.go;l=538-544) function in the Go standard library. This function will catch the nil-pointer dereference when trying to call the `String` method on the `nil` `*v1.Time` object. The result is `<nil>` is written. Again, this is all fine when running a production binary, but causes problems when trying to debug. Specifically, when debugging a program from Goland using `dlv`, the nil-pointer dereference is caught by the debugger and causes execution to pause. It also seems that the debugger does not allow the `catchPanic` function to `recover`. I say this because nil-pointer dereference error occurs constantly going forward (possibly related to the re-queuing nature our operator uses). If the `v1.Time` object had a `String()` method that protected against nil-pointer dereference (like the [IsZero](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/time.go#L58-L64) method does), then this would not be a problem. ### What did you expect to happen? I would expect that the `v1.Time` object had a `String` method that protects against nil-pointer dereference. ### How can we reproduce it (as minimally and precisely as possible)? Running the following program: ``` package main import ( "fmt" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) func main() { t := &v1.Time{} fmt.Printf("%s\n", t) t = (*v1.Time)(nil) fmt.Printf("%s\n", t) fmt.Println(t.String()) } ``` will have the output: ``` 0001-01-01 00:00:00 +0000 UTC <nil> [signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1401f96] goroutine 1 [running]: main.main() $GOPATH/src/temp/main.go:16 +0xb6 exit status 2 ``` ### Anything else we need to know? _No response_ ### Kubernetes version <details> Rancher is currently using `v0.21.0` version of `k8s.io/apimachinery` </details> ### Cloud provider <details> </details> ### OS version <details> ```console # On Linux: $ cat /etc/os-release # paste output here $ uname -a # paste output here # On Windows: C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture # paste output here ``` </details> ### Install tools <details> </details> ### Container runtime (CRI) and and version (if applicable) <details> </details> ### Related plugins (CNI, CSI, ...) and versions (if applicable) <details> </details>
0easy
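A sketch of the nil-safe `String` method proposed in the v1.Time report above. The `Time` type below is a local stand-in mirroring the shape of `metav1.Time` (a struct embedding `time.Time`); it is illustration only, not the apimachinery code.
```go
package main

import (
	"fmt"
	"time"
)

// Time mirrors the shape of metav1.Time for illustration only: a struct
// embedding time.Time, as in k8s.io/apimachinery.
type Time struct {
	time.Time
}

// String is the nil-safe variant the report asks for, guarding the receiver
// the same way metav1.Time.IsZero already does.
func (t *Time) String() string {
	if t == nil {
		return "<nil>"
	}
	return t.Time.String()
}

func main() {
	var t *Time
	fmt.Println(t.String()) // "<nil>" instead of a nil-pointer dereference
}
```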
Title: pkg/proxy/ipvs/README.md says `kube init` but means `kubeadm init` Body: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md#cluster-created-by-kubeadm says: > before running > `kube init --config <path_to_configuration_file>` > to specify the ipvs mode before deploying the cluster. It likely means `kubeadm init`, not `kube init`.
0easy
Title: apiserver/pkg/storage/interfaces.go contains unintended HTML tags Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: My IDE rendered the comments in `interfaces.go` and they were incomprehensible. I looked at the corresponding source, and found stuff like ``` // IndexerFuncs is a mapping from <index name> to function that // for a given object computes <value for that index>. type IndexerFuncs map[string]IndexerFunc ``` Those things set off in angle brackets are taken to be HTML tags. They should be backquoted. #### What you expected to happen: I expected the comments to render comprehensibly. #### How to reproduce it (as minimally and precisely as possible): #### Anything else we need to know?: #### Environment: - Kubernetes version (use `kubectl version`): 1.22.0 - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`): - Install tools: - Network plugin and version (if this is a network-related bug): - Others:
0easy
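A before/after sketch of the comment fix suggested above. The `IndexerFunc` signature is a simplified placeholder so the snippet compiles on its own; the real definitions live in `apiserver/pkg/storage/interfaces.go`.
```go
// Package storagedoc is a standalone illustration of the comment fix.
package storagedoc

// IndexerFunc is a simplified placeholder type so this example compiles;
// the real type lives in apiserver/pkg/storage/interfaces.go.
type IndexerFunc func(obj interface{}) string

// Before (the angle brackets are swallowed as HTML tags by godoc and IDEs):
//   IndexerFuncs is a mapping from <index name> to function that
//   for a given object computes <value for that index>.
//
// After (backquoted, so the placeholders render literally):
// IndexerFuncs is a mapping from `<index name>` to a function that,
// for a given object, computes `<value for that index>`.
type IndexerFuncs map[string]IndexerFunc
```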
Title: imagePullSecrets should log warning if secret does not exist Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: imagePullSecrets can be added to a pod manifest to reference a secret containing registry credentials. Currently, if you add the name of a secret that does not exist, there is no warning or error. #### What you expected to happen: I would expect there to be an error and a failed deployment if the secret does not exist/cannot be found in the namespace. If people are pulling public images, they may still wish to authenticate to a registry, for example dockerhub, before pulling the image, in order to utilise their subscription account rate limits. Currently, if they use imagePullSecrets and the referenced secret does not exist, they may assume they are authenticating to their account when they are not, and will still be able to pull public images without any warning that authentication is not taking place. #### How to reproduce it (as minimally and precisely as possible): Add `imagePullSecrets: myfakesecret` to a pod spec, and pull images from docker.io.
0easy
Title: Add non-serialized metrics tests for Windows Body: #### What happened: - Let's put a metrics CI job in place that isn't `Serial`. - Investigate our test coverage of metrics-server - See why our CI jobs don't test metrics-server - Confirm that we are doing enough in `kubelet_stats` #### What you expected to happen: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/windows/kubelet_stats.go is a good test; let's make it part of all our containerd jobs by removing the Serial tags
0easy
Title: Log attempts to output resp.Body Body: #### What happened: Log messages like `body of failing http response: &{0x531900 0xc0010d2780 0x775180}` are not very informative. /sig cloud-provider #### What you expected to happen: Either actual response body or none of it if this is a security concern. /sig security sig security to confirm it is ok to change this to log an actual response body. #### How to reproduce it (as minimally and precisely as possible): two places it is being logged: - https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/staging/src/k8s.io/legacy-cloud-providers/gce/gcpcredential/credentialutil.go#L63 - https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/credentialprovider/config.go#L206 #### Environment: - Kubernetes version (use `kubectl version`):1.20, but doesn't matter - Cloud provider or hardware configuration:GKE - OS (e.g: `cat /etc/os-release`):COS
0easy
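A self-contained sketch of the difference between logging the response struct and logging a bounded read of the body. The `httptest` server merely stands in for the failing credential endpoint, and a real fix would log via `klog` rather than `fmt`.
```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

func main() {
	// A throwaway server standing in for the failing credential endpoint.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "metadata concealed", http.StatusForbidden)
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Logging the struct is what produces output like "&{0x531900 0xc0010d2780 0x775180}".
	fmt.Printf("unhelpful: %v\n", resp.Body)

	// Reading a bounded amount of the body gives an actionable message.
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))
	fmt.Printf("helpful: body of failing http response: %s", body)
}
```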
Title: Operationexecutor log format problem Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: `I0726 15:05:12.072600 39595 operation_generator.go:974] UnmountDevice succeeded for volume "volume-name" %!(EXTRA string=UnmountDevice succeeded for volume "volume-name" (UniqueName: "fake-plugin/fake-device1") on node "mynodename" ) ` Here the simple and the detailed message are both passed at the same time, which causes the formatting problem. #### What you expected to happen: No format error. For example, only the detailed message is returned. #### How to reproduce it (as minimally and precisely as possible): ```shell cd pkg/kubelet/volumemanager/reconciler go test . -v ``` #### Anything else we need to know?: Found from running pkg/kubelet/volumemanager/reconciler/reconciler_test.go #### Environment: - Kubernetes version (use `kubectl version`): - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`): - Install tools: - Network plugin and version (if this is a network-related bug): - Others: /good-first-issue
0easy
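A deliberately buggy, minimal reproduction of where the `%!(EXTRA string=...)` suffix in the report above comes from: a message without format verbs is used as the format string while a second message is passed as an argument. This shows the mechanism only and is not the exact call in `operation_generator.go`.
```go
package main

import "fmt"

func main() {
	simpleMsg := `UnmountDevice succeeded for volume "volume-name"`
	detailedMsg := `UnmountDevice succeeded for volume "volume-name" (UniqueName: "fake-plugin/fake-device1") on node "mynodename"`

	// Buggy on purpose: simpleMsg contains no format verbs, so the extra
	// argument is reported as %!(EXTRA string=...), just like in the log line above.
	fmt.Printf(simpleMsg, detailedMsg)
	fmt.Println()

	// Fixed: log a single, fully formed message instead of passing both.
	fmt.Println(detailedMsg)
}
```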
Title: (cleanup) Make the netpol/ model.go model stateful Body: #### What happened In https://github.com/kubernetes/kubernetes/pull/102354#discussion_r651517900 @aojea brought up the fact that to get Service IPs we currently need to make an API call, i suppose that is suboptimal . ## in a nutshell... - We *CREATE* services in one place, and this returns the Service IP that we dont use at that time. ``` // kubemanager.go initializeCluster() createdPods = append(createdPods, kubePod) svc, err := k.createService(pod.Service()) if err != nil { return err } ``` (model.go) - We *READ* service IPs in another place, and we waste a query to apiserver for the data we just got above, and threw away. ``` // network_policy.go getK8sModelWithServiceIPs() service := pod.Service() kubeService, err := f.ClientSet.CoreV1().Services(pod.Namespace).Get(context.TODO(), service.Name, metav1.GetOptions{}) if err != nil { ``` So we *should* in an ideal way, find a way to cache the data we get for free in the first bullet, with the function that reads the service data, in the second bullet. basically any solution that makes this easier to deal with is probably a small win for us, so , good first issue to hack on :) ## details This is because we create the model using a function (rather then caching it somehow). The functional approach imo is nice bc the model offers no gaurantees, so, we can call it as much as we want and have less state to carry over. However, i think maybe at some point it would be faster to cache this data, and not call Spec.ClusterIP for ever pod on every test. #### Fix This is a good first issue i think: Take a look at the code path in the netpol suite, and see if you can get it to pass by extending the model class somehow, so that when we converge `getK8SModel()` and `getK8sModelWithServiceIPs()`, into a single call to getting the Model, which uses a cached model, that is capable of *creating itself* somehow, that way, once the model exists, it can be reused. I think to merge this since we're adding state carry over,it would be interesting to measure wether the tests run any faster afterwards. I dont have any strong opinions here, but thought i'd follow this as a follow on to #102354
0easy
Title: Pods with Shutdown status confuse users of graceful node shutdown feature Body: #### What happened: When using the Graceful Node Shutdown feature, the status and reason of a "shutdown" pod are confusing for users. See https://github.com/kubernetes/website/pull/28235/ where I attempt to address this confusion by documenting the behavior. However, it would be great to improve the messages. Perhaps the following renaming: ``` Status: Failed. --------------------------------> Same Reason: Shutdown ----------------------------> Terminated Message: Node is shutting, evicting pods ------> Pod was terminated in response to node shutdown. ``` Reason "Shutdown" is confusing because it may appear as a process of shutting down. A better message will also be useful. #### What you expected to happen: Reason and Message changed for pods terminated in response to Node shutdown. #### How to reproduce it (as minimally and precisely as possible): Shut down the node and observe the pod status. #### Anything else we need to know?: KEP: kubernetes/enhancements#2000 /cc @bobbypage /sig node /help /good-first-issue #### Environment: - Kubernetes version (use `kubectl version`): 1.19+ (with feature enabled)
0easy
Title: Endpoint slice mirroring controller mirrors kubectl kubectl.kubernetes.io/last-applied-configuration Body: #### What happened: If I modify a custom endpoint, the kubectl kubectl.kubernetes.io/last-applied-configuration annotation is mirrored to the generated endpoint slice #### What you expected to happen: Since the kubect last-applied-configuration tracks the status of the object, it shouldn't be mirrored since it doesn't apply to the generated object #### How to reproduce it (as minimally and precisely as possible): 1. Create a custom endpoint 1. Apply an update on any of the fields 1. Check the generated endpoint slice annotations ```yaml kubectl get endpointslices ep-test-ljgn7 -o yaml addressType: IPv4 apiVersion: discovery.k8s.io/v1 endpoints: - addresses: - 193.99.144.80 conditions: ready: true - addresses: - 195.54.164.39 conditions: ready: true kind: EndpointSlice metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"ep-test","namespace":"default"},"subsets":[{"addresses":[{"ip":"193.99.144.80"}],"notReadyAddresses":[{"ip":"195.54.164.39"}],"ports":[{"name":"http","port":80,"protocol":"TCP"},{"name":"https","port":443,"protocol":"TCP"}]}]} creationTimestamp: "2021-06-08T07:35:24Z" generateName: ep-test- generation: 1 labels: endpointslice.kubernetes.io/managed-by: endpointslicemirroring-controller.k8s.io kubernetes.io/service-name: ep-test managedFields: - apiVersion: discovery.k8s.io/v1 fieldsType: FieldsV1 fieldsV1: f:addressType: {} f:endpoints: {} f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/last-applied-configuration: {} f:generateName: {} f:labels: .: {} f:endpointslice.kubernetes.io/managed-by: {} f:kubernetes.io/service-name: {} f:ownerReferences: .: {} k:{"uid":"21dc11be-4794-4db8-93c2-8c9fb208e75e"}: {} f:ports: {} manager: kube-controller-manager operation: Update time: "2021-06-08T07:35:24Z" name: ep-test-ljgn7 ``` /sig network
0easy
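A sketch (not the mirroring controller's actual code) of filtering the annotation map before copying it onto the generated EndpointSlice, so kubectl's last-applied bookkeeping stays on the source Endpoints object only. The annotation key is the one shown in the report; core/v1 also exposes it as a named constant.
```go
package main

import "fmt"

// lastAppliedAnnotation is the kubectl bookkeeping key shown in the report.
const lastAppliedAnnotation = "kubectl.kubernetes.io/last-applied-configuration"

// mirroredAnnotations sketches the filtering the mirroring controller could
// apply: copy the Endpoints annotations except kubectl's last-applied state,
// which only describes the source object, not the generated EndpointSlice.
func mirroredAnnotations(src map[string]string) map[string]string {
	out := make(map[string]string, len(src))
	for k, v := range src {
		if k == lastAppliedAnnotation {
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	src := map[string]string{
		lastAppliedAnnotation: `{"apiVersion":"v1","kind":"Endpoints", ...}`,
		"example.com/owner":   "team-a", // a made-up annotation, kept as-is
	}
	fmt.Println(mirroredAnnotations(src))
}
```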
Title: Get rid of duplicate set of metrics for watch counts Body: Just noticed that we have redundant metrics for watch counts like: ``` # HELP apiserver_longrunning_gauge [ALPHA] Gauge of all active long-running apiserver requests broken out by verb, group, version, resource, scope and component. Not all requests are tracked this way. # TYPE apiserver_longrunning_gauge gauge apiserver_longrunning_gauge{component="apiserver",group="",resource="configmaps",scope="cluster",subresource="",verb="WATCH",version="v1"} 1 apiserver_longrunning_gauge{component="apiserver",group="",resource="configmaps",scope="namespace",subresource="",verb="WATCH",version="v1"} 25 ``` ^ This was added in v1.9 as part of https://github.com/kubernetes/kubernetes/pull/52302 ``` # HELP apiserver_registered_watchers [ALPHA] Number of currently registered watchers for a given resources # TYPE apiserver_registered_watchers gauge apiserver_registered_watchers{group="",kind="ConfigMap",version="v1"} 26 ``` ^ This was added in v1.11 as part of https://github.com/kubernetes/kubernetes/pull/63779 We probably should get rid of the latter since the former is more granular and also includes other long-running calls (like proxy). Both seem to be in alpha status FWIW. /cc @wojtek-t @smarterclayton (authors for those changes) /sig instrumentation /good-first-issue
0easy
Title: HPA desired replicas not respecting minReplicas when set to 1 Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: When deploying a HorizontalPodAutoscaler (HPA) with `minReplicas` set to `1`, the `desired` replicas is set to `0` #### What you expected to happen: I expect that the `desired` replicas should fall with the range of `minReplicas` and `maxReplicas` and, in this case, should be calculated to be `1` #### How to reproduce it (as minimally and precisely as possible): Deploy this manifest: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: busybox namespace: default labels: app: busybox spec: selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: containers: - image: busybox command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600'] imagePullPolicy: IfNotPresent name: busybox --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: busybox namespace: default spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: busybox minReplicas: 1 maxReplicas: 4 targetCPUUtilizationPercentage: 80 ``` The resulting HPA never sets `desired` replica count to the minimum replicas of `1` ``` $ kubectl describe hpa busybox Name: busybox Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Fri, 28 May 2021 09:43:03 -0400 Reference: Deployment/busybox Metrics: ( current / target ) resource cpu on pods (as a percentage of request): <unknown> / 80% Min replicas: 1 Max replicas: 4 Deployment pods: 1 current / 0 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: missing request for cpu Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetResourceMetric 13s horizontal-pod-autoscaler missing request for cpu Warning FailedComputeMetricsReplicas 13s horizontal-pod-autoscaler failed to compute desired number of replicas based on listed metrics for Deployment/default/busybox: invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu ``` #### Anything else we need to know?: I realize that in my example, the HPA is unable to read the resource metric and that may be a contributing factor in the calculation of the `desired` replica count. However, when `minReplicas` is set higher than `1`, then the `desired` replica count is calculated to be vale of `minReplicas`. For example, deploying the same manifest as before with the only change is setting `minReplicas` to `2`. 
```yaml apiVersion: apps/v1 kind: Deployment metadata: name: busybox namespace: default labels: app: busybox spec: selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: containers: - image: busybox command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600'] imagePullPolicy: IfNotPresent name: busybox --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: busybox namespace: default spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: busybox minReplicas: 2 maxReplicas: 4 targetCPUUtilizationPercentage: 80 ``` The resulting HPA has a `desired` replica count set to the minimum replicas of `2` ``` $ k describe hpa busybox Name: busybox Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Fri, 28 May 2021 09:48:25 -0400 Reference: Deployment/busybox Metrics: ( current / target ) resource cpu on pods (as a percentage of request): <unknown> / 80% Min replicas: 2 Max replicas: 4 Deployment pods: 2 current / 2 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: missing request for cpu Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 2m10s horizontal-pod-autoscaler New size: 2; reason: Current number of replicas below Spec.MinReplicas Warning FailedGetResourceMetric 5s (x8 over 113s) horizontal-pod-autoscaler missing request for cpu Warning FailedComputeMetricsReplicas 5s (x8 over 113s) horizontal-pod-autoscaler failed to compute desired number of replicas based on listed metrics for Deployment/default/busybox: invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu ``` The behavior should be consistent no matter the value of `minReplicas`. #### Environment: - Kubernetes version (use `kubectl version`): ``` Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.9-gke.1400", GitCommit:"ec68c7064ea987ad0c7fb63930df96bdefeb93c4", GitTreeState:"clean", BuildDate:"2021-04-07T09:20:04Z", GoVersion:"go1.15.8b5", Compiler:"gc", Platform:"linux/amd64"} ``` - Cloud provider or hardware configuration: Google Kubernetes Engine
0easy
Title: minor netpol test improvements Body: #### What happened: In addition to #102286 , this is another good getting started issue for someone interested in learning more about networkpolicy testing in the k8s netpol/ package - `probeConnectivity` is a little bit long. lets make its args into a struct somehow - investigate failure modes for `probeConnectivity` - what if we send it a bad time like `1S` instead of `1s`, and so on. can we use native go time package to parameterize it so the code doesnt need `timeoutSeconds` and so on. - consolidate the code for making the newWindowsFramework `framework.NodeOSDistroIs("windows") {` - map parameters in the `Model` struct as constants where appropriate, or see if theres a way we can configure this. Maybe split model into "LinuxModel" and "WindowsModel" structs. - Investigate if theres a way we can "sleep" on test failures by reading an easter-egg env var? its really tedious to recompile and then import the sleep when trying to debug a networkpolicy issue. - Make logging verbosity lower , but smart enough to print periodically important stuf, right now we print a ton of `/agnhost connect s-netpol-4776-z-c.netpol-4776-z.svc.cluster.local:80 --timeout=1 --protocol=tcp` type statements . - add comments that are more comprehensive about windows vs linux tests , and look at other ways to further modularize the tests (i.e. we return 3 probe for linux, 1 probe for windows, why and so on) #### What you expected to happen: not a functional issue , but in aggregate this cleanup will be a good way to grow the pool of netpol test maintainers
0easy
Title: Server logs flooded with Skipping invalid IP Body: Originally post on project https://github.com/k3s-io/k3s/issues/3255 <!-- Thanks for helping us to improve K3S! We welcome all bug reports. Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. --> **Environmental Info:** K3s Version: v1.21.0+k3s1 <!-- Provide the output from "k3s -v" --> Node(s) CPU architecture, OS, and Version: Linux 4.19.0-16-cloud-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 <!-- Provide the output from "uname -a" on the node(s) --> Cluster Configuration: 1 server & 3 agents <!-- Provide some basic information on the cluster configuration. For example, "3 servers, 2 agents". --> **Describe the bug:** I use an external cloud controller manager to manage load balancing and services exposition. But on my services spec, I have no ```status.loadBalancer.ingress.IP``` field but a ```status.loadBalancer.ingress.hostname``` instead. Everything works fine but server logs are flooded with entries like ```utils.go:288] Skipping invalid IP:``` <!-- A clear and concise description of what the bug is. --> **Steps To Reproduce:** <!-- Steps to reproduce the behavior. Please include as the first step how you installed K3s on the node(s) (including all flags or environment variables). If you have customized configuration via systemd drop-ins or overrides (https://coreos.com/os/docs/latest/using-systemd-drop-in-units.html) please include those as well. --> - Installed K3s: - Use an external CCM which only provide a hostname to expose service **Expected behavior:** No more logs like ```utils.go:288] Skipping invalid IP:``` <!-- A clear and concise description of what you expected to happen. --> **Actual behavior:** Server logs are flooded with entries like ```utils.go:288] Skipping invalid IP:``` <!-- A clear and concise description of what actually happened. --> **Additional context / logs:** <!-- Add any other context and/or logs about the problem here. --> Should it be a good idea to test consistency of IP here : https://github.com/kubernetes/kubernetes/blob/f235adc4d271c30c4e3b32f48196ae812a6c7100/pkg/proxy/service.go#L204 or here ? https://github.com/kubernetes/kubernetes/blob/f235adc4d271c30c4e3b32f48196ae812a6c7100/pkg/proxy/util/utils.go#L279
0easy
Title: kubelet SyncLoop hangs on "os.Stat" forever if there is an unresponsive NFS volume Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: We saw this several times: after restarting kubelet, kubelet didn't creating new Pods. The healthz endpoint showed that the `syncloop` was stuck: ``` # curl -k https://127.0.0.1:10250/healthz [+]ping ok [+]log ok [-]syncloop failed: reason withheld healthz check failed ``` From the stack trace, the goroutine was stuck in the call of `os.Stat` when doing `HandlePodCleanups`, so no other updates can be processed: ``` goroutine 251 [syscall, 939 minutes]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc001357440, 0xc0015a2378, 0x0, 0x0, 0x0, 0xc0017d3320, 0xc0017d3338, 0x411b98) /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5 syscall.fstatat(0xffffffffffffff9c, 0xc001357320, 0x5e, 0xc0015a2378, 0x0, 0xc001357129, 0x8) /usr/local/go/src/syscall/zsyscall_linux_amd64.go:1480 +0xc8 syscall.Stat(...) /usr/local/go/src/syscall/syscall_linux_amd64.go:65 os.statNolog(0xc001357320, 0x5e, 0x3cf2e80, 0x0, 0x6, 0x7f154449dec8) /usr/local/go/src/os/stat_unix.go:31 +0x6e os.Stat(0xc001357320, 0x5e, 0xc000d56b68, 0x3, 0xc000facb40, 0x48) /usr/local/go/src/os/stat.go:13 +0x4d k8s.io/kubernetes/vendor/k8s.io/mount-utils.(*Mounter).IsLikelyNotMountPoint(0xc000becfc0, 0xc001357320, 0x5e, 0xc001302700, 0x1, 0x1) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/mount-utils/mount_linux.go:284 +0x3c k8s.io/kubernetes/pkg/kubelet.(*Kubelet).getMountedVolumePathListFromDisk(0xc0000c5500, 0xc00097b798, 0x24, 0xc001449e90, 0xc0017d35b0, 0xe73afb5b, 0xd6f7addb2f85b69d, 0x3929dc9) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_getters.go:360 +0x110 k8s.io/kubernetes/pkg/kubelet.(*Kubelet).podVolumesExist(0xc0000c5500, 0xc00097b798, 0x24, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_volumes.go:64 +0xb4 k8s.io/kubernetes/pkg/kubelet.(*Kubelet).cleanupOrphanedPodCgroups(0xc0000c5500, 0x4fe5880, 0xc0017b8e00, 0xc001603920, 0xc00117f600, 0x5, 0x8) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1955 +0x3cd k8s.io/kubernetes/pkg/kubelet.(*Kubelet).HandlePodCleanups(0xc0000c5500, 0x0, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1130 +0x659 k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncLoopIteration(0xc0000c5500, 0xc000c1e900, 0x4fd5fe0, 0xc0000c5500, 0xc000291260, 0xc0002912c0, 0xc001006a20, 0x1) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2000 +0x18e2 k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncLoop(0xc0000c5500, 0xc000c1e900, 0x4fd5fe0, 0xc0000c5500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1867 +0x3f2 k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run(0xc0000c5500, 0xc000c1e900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1447 +0x2a5 created by k8s.io/kubernetes/cmd/kubelet/app.startKubelet /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:1183 +0x5f ``` Listing the volumes of orphaned Pods on this Node, there was an 
unresponsive NFS volume, stat it will hang forever, so the above goroutine should be stuck in cleaning up this Pod: ``` # ls -l volumes/ total 8 drwxr-x--- 3 root root 4096 Apr 28 11:18 kubernetes.io~nfs drwxr-xr-x 2 root root 4096 Apr 28 11:21 kubernetes.io~secret # dmesg |grep "not responding" | tail [106629.084267] nfs: server 10.150.200.91 not responding, timed out [106649.568224] nfs: server 10.150.200.91 not responding, timed out [107730.908287] nfs: server 10.150.200.91 not responding, timed out [107743.196391] nfs: server 10.150.200.91 not responding, timed out [108832.732238] nfs: server 10.150.200.91 not responding, timed out [108845.020241] nfs: server 10.150.200.91 not responding, timed out [109934.556157] nfs: server 10.150.200.91 not responding, timed out [109950.940236] nfs: server 10.150.200.91 not responding, timed out [111032.284325] nfs: server 10.150.200.91 not responding, timed out [111048.668198] nfs: server 10.150.200.91 not responding, timed out ``` The NFS server IP is a service IP, whose backend Pod happened to be scheduled to this Node. However, as `syncLoop` was stuck, it didn't handle Pod Add/Update from apiserver and didn't create the backend Pod, hence a dead lock. Searched PR and issue history, it seems there were several ones fixing NFS volume issues but looks like not covering this case: https://github.com/kubernetes/kubernetes/pull/35038 https://github.com/kubernetes/kubernetes/pull/37255 https://github.com/kubernetes/kubernetes/pull/84206 (however reverted) https://github.com/kubernetes/kubernetes/issues/31272 (still open) Is there anyone still working on it? #### What you expected to happen: The main syncloop of kubelet shouldn't hang due to an unresponsive NFS volume, leading to no Pods can be created. #### How to reproduce it (as minimally and precisely as possible): #### Anything else we need to know?: #### Environment: - Kubernetes version (use `kubectl version`): v1.20.5
0easy
Title: Misnamed feature gate in API server help Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#list-deployment-v1-apps (which is generated automatically) mentions a feature gate named `WatchBookmarks`. This is not the correct name; that feature gate is / was `WatchBookmark`. Also, the feature has graduated. So what happened was I saw an incorrect feature gate name. #### What you expected to happen: The generated docs don't mention `WatchBookmark` _or_ `WatchBookmarks` ie, I think the fix is to remove ```go // If the feature gate WatchBookmarks is not enabled in apiserver, // this field is ignored. ``` #### How to reproduce it (as minimally and precisely as possible): - generate the documentation - read the documentation or, short cut, go and look at https://github.com/kubernetes/apimachinery/blob/8daf28983e6ecf28bd8271925ee135c1179ad13a/pkg/apis/meta/v1/types.go#L360 #### Anything else we need to know?: #### Environment: - Kubernetes version (use `kubectl version`): v1.21 - Cloud provider or hardware configuration: n/a - OS (e.g: `cat /etc/os-release`): n/a - Kernel (e.g. `uname -a`): n/a - Install tools: n/a - Network plugin and version (if this is a network-related bug): - Others: /kind bug /sig api-machinery /priority backlog (IMO)
0easy
Title: CRD structural schema validation doesn't allow having description on openAPIV3Schema.properties[metadata] Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: A CRD with the following schema ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: foos.example.com spec: group: example.com names: kind: Foo listKind: FooList plural: foos singular: foo scope: Cluster versions: - name: v1beta1 served: true storage: true schema: openAPIV3Schema: description: A foo. type: object properties: metadata: type: object description: Additional validation on foo's name. <-- this is the violator properties: name: description: The name may only contain lowercase letters. type: string pattern: '^[a-z]+$' ``` will fail server side validation ``` $ kubectl apply -f crd3.yaml The CustomResourceDefinition "foos.example.com" is invalid: spec.validation.openAPIV3Schema.properties[metadata]: Forbidden: must not specify anything other than name and generateName, but metadata is implicitly specified ``` while removing the description on metadata will make the CRD valid ``` $ cat crd3.yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: foos.example.com spec: group: example.com names: kind: Foo listKind: FooList plural: foos singular: foo scope: Cluster versions: - name: v1beta1 served: true storage: true schema: openAPIV3Schema: description: A foo. type: object properties: metadata: type: object properties: name: description: The name may only contain lowercase letters. type: string pattern: '^[a-z]+$' ``` ``` $ kubectl apply -f crd3.yaml customresourcedefinition.apiextensions.k8s.io/foos.example.com created ``` #### What you expected to happen: Either we allow description, or we make the error message clear. #### How to reproduce it (as minimally and precisely as possible): Create the example CRD above. #### Anything else we need to know?: The validation code is [here](https://github.com/kubernetes/kubernetes/blob/669016067d49110a35769f7db42705cc5f0becff/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/validation.go#L162). If we want to allow setting description, I think we should clear the description field the same way as we do for `metadata.Type` before calling `reflect.DeepEqual`. /good-first-issue /area custom-resources /sig api-machinery #### Environment: - Kubernetes version (use `kubectl version`): HEAD
0easy
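A toy model of the suggested relaxation from the CRD report above: blank out `Description` the same way `Type` is already blanked before the `reflect.DeepEqual` comparison, so a metadata schema that only adds a description is no longer rejected. The `schema` struct below is a stand-in, not the real structural-schema type.
```go
package main

import (
	"fmt"
	"reflect"
)

// schema is a toy stand-in for the structural-schema node used by the real
// validation; only the fields relevant to the comparison are modelled.
type schema struct {
	Type        string
	Description string
}

// equalIgnoringDocFields mirrors the suggested fix: blank out Type (already
// done today) and Description (the proposed addition) before DeepEqual.
func equalIgnoringDocFields(got, want schema) bool {
	got.Type, want.Type = "", ""
	got.Description, want.Description = "", ""
	return reflect.DeepEqual(got, want)
}

func main() {
	fmt.Println(equalIgnoringDocFields(
		schema{Type: "object", Description: "Additional validation on foo's name."},
		schema{Type: "object"},
	)) // true -> a description alone no longer trips the Forbidden error
}
```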
Title: Ensure all external API types have descriptions Body: Working on this issue will teach you about: - codebase structure for APIs and testing scripts - how API types are written for various k8s objects like deployments, pods, etc - the `hack/verify-*` script pattern that's consistently used across k8s ### Background External API types are defined in `types.go` files in: - [`staging/src/k8s.io/api/*/v*/types.go`]( https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/api) - [`staging/src/k8s.io/apiextensions-apiserver/pkg/apis/*/v*/types.go`](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions) - [`staging/src/k8s.io/kube-aggregator/pkg/apis/*/v*/types.go`](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/kube-aggregator/pkg/apis/apiregistration) ### Problem There are many external API types that have missing descriptions/comments today (issue surfaced in https://github.com/kubernetes/kubernetes/pull/99621). For example - in the snippet below, the struct `NamespaceCondition` in file `staging/src/k8s.io/api/core/v1/types.go` does not have descriptions for the `lastTransitionTime`, `reason` and `message` fields. Note - the comment lines with `+` prefix are not considered as valid descriptions. https://github.com/kubernetes/kubernetes/blob/2695ef3f1b0e2fc51ffb1bcb8491a4165587a089/staging/src/k8s.io/api/core/v1/types.go#L5009-L5021 The types.go files that have missing descriptions are listed in [`hack/.descriptions_failures`](https://github.com/kubernetes/kubernetes/blob/master/hack/.descriptions_failures). We need to add descriptions to these types.go files and eventually remove `hack/.descriptions_failures`. ### Solution You can follow this workflow to add miss descriptions for API types. - Select a `types.go` file from the list below. - Post a comment on this issue that you are working on fixing the file so that others don't start working on it too. - Remove the line for this file from [`hack/.descriptions_failures`](https://github.com/kubernetes/kubernetes/blob/master/hack/.descriptions_failures). - Run the [`hack/verify-description.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/verify-description.sh) script. This script will display an error with missing descriptions for the API types in the types.go file. - Add the missing descriptions for the types displayed by the script. - Run the [`hack/verify-description.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/verify-description.sh) script again. There should be no error displayed. - Update autogenerated files by running the following scripts: - [`hack/update-generated-protobuf.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/update-generated-protobuf.sh) - [`hack/update-openapi-spec.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/update-openapi-spec.sh) - [`hack/update-generated-swagger-docs.sh`](https://github.com/kubernetes/kubernetes/blob/master/hackupdate-generated-swagger-docs.sh) - Open a Pull Request with your changes. In the PR body, link to this issue by writing `Ref: #99675`. Do _not_ write `Fixes #99657` because that will autoclose this issue. #### How to ask for help If you need help or have questions, please ask in the _#kubernetes-contributors_ channel on the k8s slack (see [slack.k8s.io](https://slack.k8s.io/) for an invite). 
### Files to fix - [ ] ./staging/src/k8s.io/api/apiserverinternal/v1alpha1/types.go - [ ] ./staging/src/k8s.io/api/apps/v1/types.go - [ ] ./staging/src/k8s.io/api/apps/v1beta1/types.go - [ ] ./staging/src/k8s.io/api/apps/v1beta2/types.go - [ ] ./staging/src/k8s.io/api/authentication/v1/types.go - [ ] ./staging/src/k8s.io/api/authentication/v1beta1/types.go - [ ] ./staging/src/k8s.io/api/authorization/v1/types.go - [ ] ./staging/src/k8s.io/api/authorization/v1beta1/types.go - [ ] ./staging/src/k8s.io/api/autoscaling/v2beta2/types.go - [ ] ./staging/src/k8s.io/api/certificates/v1/types.go - [ ] ./staging/src/k8s.io/api/certificates/v1beta1/types.go - [ ] ./staging/src/k8s.io/api/core/v1/types.go - [x] ./staging/src/k8s.io/api/events/v1/types.go (#99681) - [x] ./staging/src/k8s.io/api/events/v1beta1/types.go (#99681) - [ ] ./staging/src/k8s.io/api/extensions/v1beta1/types.go - [ ] ./staging/src/k8s.io/api/flowcontrol/v1alpha1/types.go - [ ] ./staging/src/k8s.io/api/flowcontrol/v1beta1/types.go - [ ] ./staging/src/k8s.io/api/imagepolicy/v1alpha1/types.go - [ ] ./staging/src/k8s.io/api/networking/v1/types.go - [ ] ./staging/src/k8s.io/api/networking/v1beta1/types.go - [ ] ./staging/src/k8s.io/api/policy/v1beta1/types.go - [ ] ./staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types.go - [ ] ./staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/types.go - [ ] ./staging/src/k8s.io/kube-aggregator/pkg/apis/apiregistration/v1/types.go - [ ] ./staging/src/k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1/types.go
0easy
Title: Switch to new location and version for cfssl and cfssljson (cfssl and cfssljson installed via ./hack/local-cluster-up.sh broken) Body: Looks like the hardcoded URL - `https://pkg.cfssl.org/R1.2/cfssl_linux-amd64` is not working anymore and returns: ``` <html> <head><title>522 Origin Connection Time-out</title></head> <body bgcolor="white"> <center><h1>522 Origin Connection Time-out</h1></center> <hr><center>cloudflare-nginx</center> </body> </html> ``` Having said that - installation from https://github.com/cloudflare/cfssl works.
0easy
Title: windows kube-proxy: fall back to node-local terminating endpoints for external traffic Body: Windows analog for #96371 kube-proxy Should fallback to terminating endpoints `iff`: * externalTrafficPolicy: Local * traffic is from an external source * there are no ready and not terminating endpoints on the node This prevents packet loss in scenarios where a load balancer sends traffic to a node that recently terminated all it's endpoints. **Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:**
0easy
Title: checking UIDs in Windows, follow-on (quick PR) Body: **What happened**: Follow-on to https://github.com/kubernetes/kubernetes/pull/96172. We need to comment on and avoid checking the UID on Windows; this requires some research into what we **should** be checking so that we can add an appropriate TODO. **What you expected to happen**: Let's also check this at the call site, https://github.com/kubernetes/kubernetes/blob/199d48728f33692dc5a5e8b808a064bd3e40b51a/cmd/kubelet/app/server.go#L757 **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): master - OS (e.g: `cat /etc/os-release`): windows
0easy
Title: Fixup bugs in the upstream OpenAPI spec Body: The folks maintaining the Rust Kubernetes API client have noted some discrepancies in our spec. Details are here: - https://github.com/Arnavion/k8s-openapi#works-around-bugs-in-the-upstream-openapi-spec - https://github.com/Arnavion/k8s-openapi/blob/master/k8s-openapi-codegen/src/fixups/upstream_bugs.rs - https://github.com/Arnavion/k8s-openapi/blob/master/k8s-openapi-codegen/src/supported_version.rs So the effort here is to go through the information above, break up the changes into sizable, related chunks, and propose PRs to fix them.
0easy
Title: core v1 Event API needs improvement? Body: **What happened**: ```yaml apiVersion: v1 kind: Event ``` When creating the above resource (for testing), I'm getting this: ``` error: error validating "event-v1.yaml": error validating data: [ValidationError(Event): missing required field "metadata" in io.k8s.api.core.v1.Event, ValidationError(Event): missing required field "involvedObject" in io.k8s.api.core.v1.Event]; if you choose to ignore these errors, turn validation off with --validate=false ``` After checking the source code (https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L4569-L4576): ```go type Event struct { metav1.TypeMeta // +optional metav1.ObjectMeta // Required. The object that this event is about. Mapped to events.Event.regarding // +optional InvolvedObject ObjectReference ... ``` The declaration says metadata is optional, which is wrong, so the generated API spec is wrong. The declaration of `InvolvedObject` is contradictory: it is both "Required" and "+optional". **What you expected to happen**: Be consistent about field definitions. **How to reproduce it (as minimally and precisely as possible)**: ``` kubectl create -f config.yaml ``` **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): 1.19
0easy
Title: client-go prints a klog warning by default in library code which is not allowed Body: **What happened**: A code change was merged that used `klog.Warningf` in library code by default (`NewDefaultClientLoadingRules` changed in a way that impacted all code in the world which uses that default, which is pretty common), which is not allowed by convention in Kube because client libraries may be used in UIs that wish to control output (klog is not consistent with other output). The code change should have had a callback that allowed clients to optionally decide to print, not print to Warningf by default. https://github.com/kubernetes/kubernetes/pull/78185#issuecomment-685765692 We should also document the convention for other reviewers that logging by default in client libraries is disallowed. For instance, a callback would have allowed kubectl to print a human readable message `warning: KUBECONFIG does not exist` in general, and in specific cases (like `kubectl config` or login flows where KUBECONFIG not existing is totally reasonable) not print a message. **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`): - Install tools: - Network plugin and version (if this is a network-related bug): - Others:
0easy
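A rough sketch of the callback-based approach described above, where library code reports the condition and the caller decides whether to print. The names `WarningHandler` and `loadRules` are invented for illustration and are not actual client-go API.
```go
package main

import "fmt"

// WarningHandler is a sketch of the callback the issue asks for: library code
// reports the condition, the caller decides whether and how to surface it.
type WarningHandler func(msg string)

// loadRules stands in for client-go's kubeconfig-loading path; instead of
// calling klog.Warningf directly it invokes the handler if one was provided.
func loadRules(kubeconfigPath string, warn WarningHandler) {
	missing := true // pretend the file does not exist, purely for illustration
	if missing && warn != nil {
		warn(fmt.Sprintf("kubeconfig %q does not exist", kubeconfigPath))
	}
}

func main() {
	// kubectl could choose to print a human-readable warning...
	loadRules("/home/user/.kube/config", func(msg string) { fmt.Println("warning:", msg) })

	// ...while `kubectl config` or a login flow passes nil and stays silent.
	loadRules("/home/user/.kube/config", nil)
}
```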
Title: Analytics link not working in main README.md Body: The analytics link https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/README.md?pixel embedded in [README.md](https://github.com/kubernetes/kubernetes/blob/master/README.md) is not working now. If you open https://github.com/kubernetes/kubernetes, scroll down to the bottom, you will get: ![image](https://user-images.githubusercontent.com/1425903/90073781-ac381c00-dcae-11ea-92ba-a292b89496dc.png) /kind documentation
0easy
Title: kubectl --local is not quite local across all the commands Body: This situation was discovered when I wanted to remove deprecated functionality in https://github.com/kubernetes/kubernetes/pull/90243. Unfortunately, due to how we are constructing client in almost every kubectl command this removal ended up with the following bug https://github.com/kubernetes/kubernetes/issues/90074. The reason for that is that in each command's `Complete` method we create a client in one of the following ways: ```go o.Client, err = batchv1client.NewForConfig(clientConfig) if err != nil { return err } ``` ```go dynamicClient, err := f.DynamicClient() if err != nil { return err } ``` ```go discoveryClient, err := f.ToDiscoveryClient() if err != nil { return err } ``` The problem is that with the deprecated code from #90243 removed all of the above calls will fail when creating a client with: ``` error: Missing or incomplete configuration info. Please point to an existing, complete config file: 1. Via the command-line flag --kubeconfig 2. Via the KUBECONFIG environment variable 3. In your home directory as ~/.kube/config ``` which does not happen if we silently default the server to `http://localhost:8080` in deprecated `getDefaultServer`. There are two possible approaches to this problem: 1. Go through each and every single command and add conditions for `--local` flag and not to create clients when it is specified. 2. Provide a smart mechanism allowing to inject dependencies (in this particular case, we are talking about clients). During SIG-CLI call on July 15th, we agreed that solution 1 is a short-term and rather hacky approach to the problem and we would like to see this being approached as described in no. 2. Any proposals to this should be discussed with @soltysh or during one of sig-cli calls.
0easy
Title: fix staticcheck failures Body: Many of our packages are failing [staticcheck], we should fix all of them. See previously https://github.com/kubernetes/kubernetes/issues/36858 https://github.com/kubernetes/kubernetes/issues/90208 https://github.com/kubernetes/kubernetes/issues/81657 /help To fix these, take a look at [hack/.staticcheck_failures] for a list of currently failing packages. You can check a package by removing it from the list and running `make verify WHAT=staticcheck`. Once the package is no longer failing, please file a PR including the fixes and removing it from the list. Before filing your PR, you should run `hack/update-gofmt.sh` and `hack/update-bazel.sh` to auto format your code and update the build. I recommend keeping PRs scoped down in size for review, fix a package or set of closely related packages, or a certain class of failures, to make it easier for your reviewers to handle. We don't need to fix all failures in one PR, but please avoid 100s of single character PRs, as it does cost time & resources to get each PR reviewed, tested, merged etc. :upside_down_face: I can help review _some_ of these, but many of them will require other reviewers that own the relevant code. EDIT: I also don't recommend `/assign` ing to this one, many people are going to need to work on it and we want more people to join in. You can link to this issue in your PR and comment here that you're working on certain packages to help avoid duplication. EDIT2: Please do NOT put `Fixes #...` in your Pull Request, despite the template. That will close this PR. Instead put something like `Part of #92402` I think each of these own code that is currently failing: /sig testing /sig storage /sig api-machinery /sig architecture /sig cli /sig instrumentation /sig autoscaling /sig cloud-provider [staticcheck]: https://staticcheck.io/ [hack/.staticcheck_failures]: https://github.com/kubernetes/kubernetes/blob/master/hack/.staticcheck_failures
0easy
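A self-contained example of the most common class in these failure lists, SA5011 ("possible nil pointer dereference ... this check suggests that the pointer can be nil"), together with the shape of the usual fix: move the nil check ahead of the dereference.
```go
package main

import "fmt"

type pod struct{ name string }

// lookup may return nil, which is exactly the situation SA5011 is about.
func lookup(found bool) *pod {
	if !found {
		return nil
	}
	return &pod{name: "web-0"}
}

func main() {
	p := lookup(false)

	// SA5011 pattern (do not do this): dereference first, nil-check afterwards.
	//   name := p.name      // possible nil pointer dereference
	//   if p == nil { ... } // this check suggests that the pointer can be nil

	// Fix: guard before the dereference.
	if p == nil {
		fmt.Println("pod not found")
		return
	}
	fmt.Println(p.name)
}
```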
Title: Missing defaults and clarifying info for kube-controller-manager Body: **What happened**: In the KCM we're missing output about a few defaults, and a few of the options could probably use some clarity, e.g. `--allocate-node-cidrs` and `--cloud-controller-manager`; I believe several other options have no `Default:` note. See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/ for details. **What you expected to happen**: All options for the controller-manager should have a similar level of detail, with all `Default` values listed. **How to reproduce it (as minimally and precisely as possible)**: Browse https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/ or run `kube-controller-manager --help` **Anything else we need to know?**: See https://github.com/kubernetes/kubernetes/blob/master/cmd/controller-manager/app/options/kubecloudshared.go **Environment**: - Kubernetes version (use `kubectl version`): master
0easy
Title: API-server repair considers ports with same port number but different protocols as same Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: I created several Workloads which require up to ten ports for both UDP and TCP protocol. Now my logs get spammed with `the node port [PORT] for service [SERVER] was assigned to multiple services; please recreate`. **What you expected to happen**: [repair.go::collectServiceNodePorts](https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/portallocator/controller/repair.go#L203) should consider `PORT-PROTO`-combination instead of only `PORT`. **How to reproduce it (as minimally and precisely as possible)**: Deploy any service with a node-port with both protocols, e.g.: ``` apiVersion: v1 kind: Service metadata: name: test spec: ports: - port: 30080 targetPort: 80 protocol: UDP name: dummy - port: 30080 targetPort: 80 protocol: TCP name: http ``` **Anything else we need to know?**: This was already acknowledged by @thockin here: https://github.com/kubernetes/kubernetes/issues/59119#issuecomment-490760644 However, as this thread addresses a different bug and seems kind of stuck, i'd like to suggest that this might be addressed in a new context. **Environment**: - Kubernetes version (use `kubectl version`): `v1.17.5` - Cloud provider or hardware configuration: `Hetzner Cloud` - OS (e.g: `cat /etc/os-release`): `Debian GNU/Linux 10 (buster)` - Kernel (e.g. `uname -a`): `Linux server 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27) x86_64 GNU/Linux` - Install tools: Deployed via Rancher v2.4.3
0easy
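A toy illustration (not the repair loop itself) of keying node-port bookkeeping by port and protocol instead of port alone, which makes the TCP/UDP pair from the example Service above stop looking like a conflict.
```go
package main

import "fmt"

// portProtocol is a composite key: node port number plus protocol.
type portProtocol struct {
	port     int
	protocol string // "TCP" or "UDP"
}

func main() {
	seen := map[portProtocol]string{} // key -> service that claimed it
	claims := []struct {
		svc string
		key portProtocol
	}{
		{"default/test", portProtocol{30080, "TCP"}},
		{"default/test", portProtocol{30080, "UDP"}},
	}

	conflicts := 0
	for _, c := range claims {
		if owner, dup := seen[c.key]; dup && owner != c.svc {
			fmt.Printf("node port %d/%s was assigned to multiple services\n", c.key.port, c.key.protocol)
			conflicts++
			continue
		}
		seen[c.key] = c.svc
	}
	fmt.Printf("conflicts found: %d\n", conflicts) // 0 with the port+protocol key
}
```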
Title: EnvVarSource doc was wrong Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: When I tried to create a pod with env variables referencing annotations, I got: `spec.containers[0].env[0].valueFrom.fieldRef.fieldPath: Unsupported value: "metadata.annotations": supported values: "metadata.name", "metadata.namespace", "metadata.uid", "spec.nodeName", "spec.serviceAccountName", "status.hostIP", "status.podIP", "status.podIPs"` **What you expected to happen**: It should proceed; the [docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#envvarsource-v1-core) state that `metadata.annotations` is a valid option **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): v1.18.2 - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`): - Install tools: - Network plugin and version (if this is a network-related bug): - Others:
0easy
Title: Fix staticcheck failures Body: from: https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/90183/pull-kubernetes-verify/1250761092716564483 ``` Errors from staticcheck: cmd/kubeadm/app/preflight/checks.go:806:12: possible nil pointer dereference (SA5011) cmd/kubeadm/app/preflight/checks.go:808:7: this check suggests that the pointer can be nil cmd/kubeadm/app/util/config/cluster_test.go:752:11: possible nil pointer dereference (SA5011) cmd/kubeadm/app/util/config/cluster_test.go:749:7: this check suggests that the pointer can be nil pkg/kubelet/stats/cri_stats_provider.go:705:32: possible nil pointer dereference (SA5011) pkg/kubelet/stats/cri_stats_provider.go:701:6: this check suggests that the pointer can be nil pkg/util/ipvs/ipvs_linux_test.go:184:20: the address of a variable cannot be nil (SA4022) pkg/volume/awsebs/aws_ebs_block_test.go:81:5: the address of a variable cannot be nil (SA4022) pkg/volume/azure_dd/azure_dd_block_test.go:70:5: the address of a variable cannot be nil (SA4022) pkg/volume/cinder/cinder_block_test.go:73:5: the address of a variable cannot be nil (SA4022) pkg/volume/gcepd/gce_pd_block_test.go:73:5: the address of a variable cannot be nil (SA4022) pkg/volume/gcepd/gce_pd_test.go:226:17: possible nil pointer dereference (SA5011) pkg/volume/gcepd/gce_pd_test.go:223:5: this check suggests that the pointer can be nil pkg/volume/gcepd/gce_pd_test.go:227:26: possible nil pointer dereference (SA5011) pkg/volume/gcepd/gce_pd_test.go:223:5: this check suggests that the pointer can be nil pkg/volume/rbd/rbd_test.go:89:5: the address of a variable cannot be nil (SA4022) pkg/volume/testing/testing.go:403:16: possible nil pointer dereference (SA5011) pkg/volume/testing/testing.go:404:5: this check suggests that the pointer can be nil pkg/volume/vsphere_volume/vsphere_volume_block_test.go:72:5: the address of a variable cannot be nil (SA4022) test/e2e/scheduling/predicates.go:1055:46: possible nil pointer dereference (SA5011) test/e2e/scheduling/predicates.go:1054:27: this check suggests that the pointer can be nil test/e2e/scheduling/predicates.go:1060:46: possible nil pointer dereference (SA5011) test/e2e/scheduling/predicates.go:1059:27: this check suggests that the pointer can be nil test/e2e/scheduling/predicates.go:1061:27: possible nil pointer dereference (SA5011) test/e2e/scheduling/predicates.go:1059:27: this check suggests that the pointer can be nil test/e2e_node/gke_environment_test.go:306:6: func checkDockerStorageDriver is unused (U1000) vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure_test.go:124:24: possible nil pointer dereference (SA5011) vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure_test.go:121:7: this check suggests that the pointer can be nil vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure_test.go:128:10: possible nil pointer dereference (SA5011) vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure_test.go:125:7: this check suggests that the pointer can be nil vendor/k8s.io/client-go/util/cert/csr_test.go:57:14: possible nil pointer dereference (SA5011) vendor/k8s.io/client-go/util/cert/csr_test.go:51:5: this check suggests that the pointer can be nil vendor/k8s.io/client-go/util/certificate/certificate_store_test.go:319:14: possible nil pointer dereference (SA5011) vendor/k8s.io/client-go/util/certificate/certificate_store_test.go:316:5: this check suggests that the pointer can be nil vendor/k8s.io/legacy-cloud-providers/azure/azure_test.go:1733:14: possible nil pointer dereference (SA5011) 
vendor/k8s.io/legacy-cloud-providers/azure/azure_test.go:1729:5: this check suggests that the pointer can be nil ``` /good-first-issue
0easy
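Most of the failures above are staticcheck's SA5011 ("possible nil pointer dereference") and SA4022 ("the address of a variable cannot be nil"). A self-contained sketch of the SA5011 pattern and its usual fix (the `config` type is made up for illustration):

```go
package main

import "fmt"

type config struct{ name string }

// broken mirrors the SA5011 pattern: the pointer is dereferenced first and
// only afterwards checked for nil — so either the check is dead code or the
// dereference can panic.
func broken(c *config) {
	fmt.Println(c.name) // possible nil pointer dereference (SA5011)
	if c == nil {       // this check suggests that the pointer can be nil
		return
	}
}

// fixed performs the nil check before the first dereference.
func fixed(c *config) {
	if c == nil {
		return
	}
	fmt.Println(c.name)
}

func main() {
	broken(&config{name: "a"})
	fixed(nil)
	fixed(&config{name: "b"})
}
```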
Title: ESIPP [Slow] should handle updates to ExternalTrafficPolicy field is failing in 5k-node tests Body: Example failure: https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/1248219443285200896 Stacktrace: ``` /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3067 Test Panicked /usr/local/go/src/runtime/panic.go:75 Panic: runtime error: index out of range [0] with length 0 Full stack: k8s.io/kubernetes/test/e2e/network.glob..func22.7() /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3117 +0x11f9 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001f12700) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc001f12700) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:119 +0x2b testing.tRunner(0xc001f12700, 0x4b56120) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 ``` The problem seems to be related to the fact that we're running the test on a private cluster (nodes don't have external IP addresses). @mm4tt @kubernetes/sig-network-bugs @kubernetes/sig-scalability-bugs
0easy
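The panic is an unguarded `[0]` index on a list that is empty on private clusters (no external node IPs). A small, hedged sketch of the kind of guard (or skip) the test needs — the helper below is illustrative, not the actual service.go code:

```go
package main

import "fmt"

// firstExternalIP is a hypothetical helper: on private clusters the slice of
// external node IPs is empty, so indexing [0] directly panics with
// "index out of range [0] with length 0".
func firstExternalIP(ips []string) (string, bool) {
	if len(ips) == 0 {
		return "", false
	}
	return ips[0], true
}

func main() {
	if ip, ok := firstExternalIP(nil); ok {
		fmt.Println("using external IP:", ip)
	} else {
		fmt.Println("no external node IPs; skip the test or fall back to internal addresses")
	}
}
```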
Title: Human readable duration formats exact year inconsistently Body: **What happened**: When the duration is **exactly** 2 years or 3 years, human readable duration is returned as `2y0d` or `3y0d`. But when the duration is exactly 4 years, 5 years, etc, the human readable duration is returned as `4y`, `5y`, etc. **What you expected to happen**: The format of an exact year should be consistent. Remove `0d` when duration is an exact year: * `2y0d` should be `2y` * `3y0d` should be `3y` * Check other years... 4, 5, 6, 7... not sure if they also have the problem. **How to reproduce it (as minimally and precisely as possible)**: Because this is time-dependent, it can be difficult to reproduce via the CLI, but you can see the problem illustrated in [this unit test](https://github.com/kubernetes/kubernetes/blob/15bb54c2d2c515ad155b90c2c2ca59eaa1cd3f65/staging/src/k8s.io/apimachinery/pkg/util/duration/duration_test.go#L77) which shows the current behavior: ``` {d: 2 * 365 * 24 * time.Hour, want: "2y0d"}, {d: 2*365*24*time.Hour + 23*time.Hour, want: "2y0d"}, {d: 3 * 365 * 24 * time.Hour, want: "3y0d"}, {d: 8*365*24*time.Hour - time.Millisecond, want: "7y364d"}, {d: 8 * 365 * 24 * time.Hour, want: "8y"}, {d: 8*365*24*time.Hour + 364*24*time.Hour, want: "8y"}, {d: 9 * 365 * 24 * time.Hour, want: "9y"}, ``` The `2y0d` and `3y0d` test cases should be changed to want `2y` and `3y`, and the code should be changed to make that pass. **Anything else we need to know?**: Note, this is not a problem when the duration is exactly 1 year because the human readable duration is still shown in days. So nothing needs to change to handle the case of exactly 1 year. **Environment**: - Kubernetes version (use `kubectl version`): 1.18 - Cloud provider or hardware configuration: n/a - OS (e.g: `cat /etc/os-release`): n/a - Kernel (e.g. `uname -a`): n/a - Install tools: n/a - Network plugin and version (if this is a network-related bug): n/a - Others: n/a
0easy
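A simplified sketch of the year branch the report is about; the real formatter lives in `staging/src/k8s.io/apimachinery/pkg/util/duration`. The point of the fix is simply to omit the `0d` component when the remainder is zero:

```go
package main

import (
	"fmt"
	"time"
)

// humanYears is a stripped-down stand-in for the year branch of HumanDuration.
// It drops the day component whenever it is zero, so exact years format
// consistently as "2y", "3y", "8y", ...
func humanYears(d time.Duration) string {
	const day = 24 * time.Hour
	const year = 365 * day
	years := int(d / year)
	days := int((d % year) / day)
	if days == 0 {
		return fmt.Sprintf("%dy", years)
	}
	return fmt.Sprintf("%dy%dd", years, days)
}

func main() {
	fmt.Println(humanYears(2 * 365 * 24 * time.Hour))              // "2y", not "2y0d"
	fmt.Println(humanYears(8*365*24*time.Hour - time.Millisecond)) // "7y364d"
}
```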
Title: IntStr GetValueFromIntOrPercent assumes all strings are percentages Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: Using [GetValueFromIntOrPercent](https://github.com/kubernetes/apimachinery/blob/c76d55892ea23f279bd999f01f951e78434b6095/pkg/util/intstr/intstr.go#L154-L170) from the `intstr` package in `k8s.io/apimachinery/pkg/util/intstr`, I found there's a difference in behaviour between `{"foo": 1}` and `{"foo": "1"}`, but there is no difference in behaviour between `{"foo": "1"}` and `{"foo": "1%"}` When unmarshalling from JSON, {"foo": "1"} results in the intstr being of type String, this causes the [getIntOrPercentValue](https://github.com/kubernetes/apimachinery/blob/c76d55892ea23f279bd999f01f951e78434b6095/pkg/util/intstr/intstr.go#L176-L183) method to assume that it is a percentage when it is not. So a value of `"1"` becomes identical in behaviour to a value of `"1%"`. **What you expected to happen**: A value of `1` and `"1"` should cause the same behaviour. The code should check for the existence of a `%` in the string before determining it is a percentage **How to reproduce it (as minimally and precisely as possible)**: ``` package main import ( "fmt" "k8s.io/apimachinery/pkg/util/intstr" ) func main() { oneString := intstr.FromString("1") val, err := intstr.GetValueFromIntOrPercent(&oneString, 10, false) if err != nil { panic(err) } fmt.Printf("(String) Expecting %d to equal 1\n", val) oneInt := intstr.FromInt(1) val, err = intstr.GetValueFromIntOrPercent(&oneInt, 10, false) if err != nil { panic(err) } fmt.Printf("(Int) Expecting %d to equal 1\n", val) } ``` Outputs: ``` (String) Expecting 0 to equal 1 (Int) Expecting 1 to equal 1 ``` https://play.golang.org/p/MR3M4UdWYph **Anything else we need to know?**: This was raised in [slack](https://kubernetes.slack.com/archives/C0EG7JC6T/p1583949884224600), @liggitt responded and noted that at first glance, where this is used in tree, the types have [validation](https://github.com/kubernetes/kubernetes/blob/c369cf187ea765c0a2387f2b39abe6ed18c8e6a8/pkg/apis/apps/validation/validation.go#L384-L399) to check string values are valid percents before being accepted and that if we want to fix this, a proper sweep of current in-tree uses should be checked to make sure that updating the behaviour wouldn't break any current uses @liggitt Also suggested it might be worth checking out of tree uses as well to be good citizens and linked to https://grep.app/search?q=.GetValueFromIntOrPercent%28 for helping with this /assign
0easy
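A hedged sketch of the behaviour the report argues for: only treat a string as a percentage when it actually carries a `%` suffix. This is not the current `intstr` implementation, just an illustration of the proposed check:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// intOrPercent parses a string the way the issue suggests: "1%" is a
// percentage, while a bare "1" is an integer rather than silently becoming 1%.
func intOrPercent(s string) (value int, isPercent bool, err error) {
	if strings.HasSuffix(s, "%") {
		v, err := strconv.Atoi(strings.TrimSuffix(s, "%"))
		return v, true, err
	}
	v, err := strconv.Atoi(s)
	return v, false, err
}

func main() {
	fmt.Println(intOrPercent("1"))  // 1 false <nil>
	fmt.Println(intOrPercent("1%")) // 1 true <nil>
}
```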
Title: Missing validation for HPA annotation Body: Recently, I tried to create an HPA object with an empty value for the `autoscaling.alpha.kubernetes.io/conditions` annotation and even though the response from apiserver is a 500, the object itself still got created (I verified this by querying for the key from the etcd). And since then `kubectl get/list hpa` calls are failing because apiserver seems to be unable to parse that particular object. Even worse, I wasn't even able to delete that object through the k8s api (had to delete the key directly from etcd). This was on a v1.14.9 cluster. Here's a minimal hpa object to reproduce the issue (both `v2beta1` and `v2beta2` seem to have it): ``` { "kind": "HorizontalPodAutoscaler", "apiVersion": "autoscaling/v2beta1", "metadata": { "name": "bad-hpa-object", "annotations": { "autoscaling.alpha.kubernetes.io/conditions": "" } }, "spec": { "scaleTargetRef": { "kind": "Deployment", "name": "test", "apiVersion": "apps/v1" }, "maxReplicas": 1 } } ``` Failing "create hpa" request: ``` $ kubectl create -f bad-hpa-object.json --v=8 ... I0302 18:45:37.912801 25603 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"unexpected end of JSON input","code":500} I0302 18:45:37.913015 25603 helpers.go:196] server response object: [{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "error when creating \"hpa.json\": unexpected end of JSON input", "code": 500 }] ``` Kubectl call failing after above request: ``` $ kubectl get hpa Error from server: unexpected end of JSON input ``` But object itself is present in etcd and I manually deleted it: ``` $ ETCDCTL_API=3 etcdctl get --keys-only --prefix "/registry/horizontalpodautoscalers/" /registry/horizontalpodautoscalers/default/bad-hpa-object $ $ etcdctl del /registry/horizontalpodautoscalers/default/bad-hpa-object 1 ``` Which fixed the issue: ``` $ kubectl get hpa No resources found. ``` cc @kubernetes/sig-api-machinery-bugs (I'm tagging apimachinery here because even though this could be a missing validation for a particular API owned by @kubernetes/sig-autoscaling-bugs , I'm wondering why apiserver created the object even though it returned a 500 to the client)
0easy
Title: It's impossible to restart master in private clusters created via kube-up (provider GCE) Body: If you create a private cluster via kube-up using provider GCE, you won't be able to restart your master nodes. If you restart your master, the nodes won't be able to connect to it. This is due to this line - https://github.com/kubernetes/kubernetes/blob/127c47caf49bec4a59a0500f99b1b1d4cdbf340e/cluster/gce/util.sh#L2922 It gets executed on the master on cluster-up, but it won't be re-executed on master restart. We should fix that, especially if we're to implement https://github.com/kubernetes/kubernetes/issues/86484 /sig scalability /priority important-soon
0easy
Title: Security contacts for component-base not specified or out of date Body: The purpose of the `SECURITY_CONTACTS` file for each Kubernetes repository is to provide a list of people who can assist the Kubernetes [Product Security Committee](https://github.com/kubernetes/community/tree/master/committee-product-security) in the event that a security issue related to the repository is discovered or disclosed. As described in the file, those on the list should agree to our [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy). Please update the [`/staging/src/k8s.io/component-base/SECURITY_CONTACTS`](/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-base/SECURITY_CONTACTS) file for the component-base repository. After finding people who are willing to work in this capacity, you should add them to the list, then remove PSC members (except any PSC member who will be working as a security contact for this repository). The list is GitHub usernames, optionally followed by an email address. If no email address is listed, the PSC will use the email address found on git commits made by the listed user. The file may already have people listed who are security contacts. In that case, simply remove any PSC members who aren't also security contacts for the repo. See https://github.com/kubernetes/security/issues/92 for more information /area security /committee product-security /kind cleanup /lifecycle frozen /priority important-soon /sig api-machinery /sig cluster-lifecycle
0easy
Title: Security contacts for cluster-bootstrap not specified or out of date Body: The purpose of the `SECURITY_CONTACTS` file for each Kubernetes repository is to provide a list of people who can assist the Kubernetes [Product Security Committee](https://github.com/kubernetes/community/tree/master/committee-product-security) in the event that a security issue related to the repository is discovered or disclosed. As described in the file, those on the list should agree to our [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy). Please update the [`/staging/src/k8s.io/cluster-bootstrap/SECURITY_CONTACTS`](/kubernetes/kubernetes/blob/master/staging/src/k8s.io/cluster-bootstrap/SECURITY_CONTACTS) file for the cluster-bootstrap repository. After finding people who are willing to work in this capacity, you should add them to the list, then remove PSC members (except any PSC member who will be working as a security contact for this repository). The list is GitHub usernames, optionally followed by an email address. If no email address is listed, the PSC will use the email address found on git commits made by the listed user. The file may already have people listed who are security contacts. In that case, simply remove any PSC members who aren't also security contacts for the repo. See https://github.com/kubernetes/security/issues/92 for more information /area security /committee product-security /kind cleanup /lifecycle frozen /priority important-soon /sig cluster-lifecycle
0easy
Title: Refactor preparePVCDataSourceForProvisioning to support BlockVolume tests in multivolume.go Body: <!-- Feature requests are unlikely to make progress as an issue. Instead, please suggest enhancements by engaging with SIGs on slack and mailing lists. A proposal that works through the design along with the implications of the change can be opened as a KEP: https://git.k8s.io/enhancements/keps#kubernetes-enhancement-proposals-keps --> #### What would you like to be added: In https://github.com/kubernetes/kubernetes/pull/102775 block volume tests were skipped because `preparePVCDataSourceForProvisioning` is not using `utils.CheckWriteToPath / utils.CheckReadFromPath`; there are more details about how these functions are used in `TestConcurrentAccessToSingleVolume`. This test uses [XFS](https://cloud.google.com/container-optimized-os/docs/concepts/supported-filesystems#xfs); if tested using the [GCE PD storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/#gce-pd) it should have `fstype: xfs`. In addition, the test cluster should have the [VolumeSnapshot CRDs and the controllers installed](https://kubernetes.io/docs/concepts/storage/volume-snapshots/) *Things that are needed for a first-time contributor* - a development Kubernetes cluster, preferably on a public cloud instance, although I'm not sure if kind can be used - a CSI driver like the PD CSI Driver, with the storage class set up to use the xfs fstype - the volume snapshot CRDs and controllers installed in the cluster (kubernetes-csi/external-snapshotter#usage) /sig storage /area testing /good-first-issue
0easy
Title: Add test coverage for SelfSubjectRulesReview and node authorizer Body: An integration test should be added to confirm that the node authorizer interacts correctly with the `SelfSubjectRulesReview` API. Test the 4 permutations of node authorizer=on/off with identity isNode=true/false. xref: #90975 #91030
0easy
Title: test/soak/cauldron uses Docker Hub to push/pull image Body: We've deleted all repos from Docker Hub and will be retiring the org/user (ref: https://github.com/kubernetes/k8s.io/issues/237). Apparently the `kubernetes/cauldron` repo is referenced for docker push and pull in https://github.com/kubernetes/kubernetes/blob/master/test/soak/cauldron/Dockerfile It's unclear to me whether we have broken anything, or if this is dead code. I would use https://cs.k8s.io to find out if this is referenced anywhere (aside from sundry vendor dirs). If dead code: remove it. If used: move to test/e2e/images and follow the same pattern used there to push to gcr.io /sig testing /area test /wg k8s-infra /priority important-soon /help /good-first-issue I believe this is a pretty straightforward issue and am happy to review whatever PRs are made for it.
0easy
Title: Add an impersonation test case to the audit E2E test Body: We don't currently have any test coverage of impersonation for audit logging. I think the audit e2e test (https://github.com/kubernetes/kubernetes/blob/master/test/e2e/auth/audit.go) would be a good place to do so. /area audit /area test /sig auth /help /good-first-issue
0easy
Title: destroy the "Expect(err).NotTo(HaveOccurred())" anti-pattern in e2e tests Body: I'm getting tired of seeing flake issues like https://github.com/kubernetes/kubernetes/issues/33647 where every single failure is of the form ``` Expected error: <*errors.errorString | 0xc820412c10>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred ``` This is about as unhelpful an error message as you can get. The problem is that we have ``` go Expect(err).NotTo(HaveOccurred()) ``` peppered throughout the code everywhere, and in this form, you can only figure out what failed if you read the Ginkgo error, remember which line has the line number you actually want, and then try to read surrounding code in that file. Possible improvements given in http://onsi.github.io/gomega/#handling-errors: - Use `Succeed()` - Annotate the assertions - Basically, do anything except just Expect(err).NotTo(HaveOccurred()) cc @kubernetes/sig-testing
0easy
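A sketch of the two suggested replacements. `pollPodRunning` is a made-up placeholder for whatever call produced the error; the annotated gomega assertion and `framework.Failf` are the real APIs being recommended:

```go
package e2e

import (
	"context"

	"github.com/onsi/gomega"
	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/test/e2e/framework"
)

// pollPodRunning is a stand-in for illustration only; a real helper would
// poll the pod phase and return the wait error.
func pollPodRunning(ctx context.Context, pod *v1.Pod) error {
	return nil
}

func waitForPod(ctx context.Context, pod *v1.Pod) {
	err := pollPodRunning(ctx, pod)

	// Before: gomega.Expect(err).NotTo(gomega.HaveOccurred())
	//   -> "timed out waiting for the condition not to have occurred"
	// After: annotate the assertion so the failure says what was being waited on.
	gomega.Expect(err).NotTo(gomega.HaveOccurred(),
		"waiting for pod %s/%s to be Running", pod.Namespace, pod.Name)

	// Or fail explicitly with a fully custom message.
	if err != nil {
		framework.Failf("pod %s/%s never reached Running: %v", pod.Namespace, pod.Name, err)
	}
}
```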
Title: Cleanup: kube-proxy internal naming Body: We have sort of a mish-mash of "service" and "servicePort" and similar. We should think about changing ALL of the places that actually relate to a service-port (a specific port of a service, not the whole service) and rename them to some new term that means "service port". E.g. "svcpt" or "spt" or "sport". See https://github.com/kubernetes/kubernetes/pull/106497/commits/8fffae321d66487f776922e0c313b33af7bf27e3 in pkg/proxy/iptables/proxier.go and friends. e.g. `info.serviceNameString` -> `info.svcptNameString` ?
0easy
Title: Cleanup: pkg/util/ipset Body: As exposed in https://github.com/kubernetes/kubernetes/pull/108351, the validate logic should probably return an `error` and the test should compare the error string with a known value - to prove that it failed for the right reasons. Additionally, the test could use `t.Run()` to print a helpful name, rather than the comments tracking `// case[3]`. Easy change, good beginner case.
0easy
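A small sketch of both suggestions — a validate function that returns an `error`, and a table-driven test using `t.Run()` with named cases and error-string comparison. The `validateEntry` helper is hypothetical, standing in for the real ipset validation logic:

```go
package ipset

import (
	"fmt"
	"testing"
)

// validateEntry is a hypothetical stand-in for the ipset validate logic,
// returning an error so callers (and tests) can see why validation failed.
func validateEntry(ip string) error {
	if ip == "" {
		return fmt.Errorf("empty IP in ipset entry")
	}
	return nil
}

func TestValidateEntry(t *testing.T) {
	tests := []struct {
		name    string
		ip      string
		wantErr string // empty means no error expected
	}{
		{name: "valid entry", ip: "10.0.0.1"},
		{name: "missing IP", ip: "", wantErr: "empty IP in ipset entry"},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			err := validateEntry(tc.ip)
			switch {
			case err == nil && tc.wantErr != "":
				t.Fatalf("expected error %q, got nil", tc.wantErr)
			case err != nil && tc.wantErr == "":
				t.Fatalf("unexpected error: %v", err)
			case err != nil && err.Error() != tc.wantErr:
				t.Fatalf("expected error %q, got %q", tc.wantErr, err.Error())
			}
		})
	}
}
```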
Title: Failed to run the pod with duplicate persistent volumes Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: Create a pod: ```yaml apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: nginx volumeMounts: - name: config-a mountPath: /etc/a subPath: a - name: config-b mountPath: /etc/b subPath: b volumes: - name: config-a persistentVolumeClaim: claimName: test-nfs-claim - name: config-b persistentVolumeClaim: claimName: test-nfs-claim ``` the pod is always `ContainerCreating`, and describes as: ``` Warning FailedMount 4s kubelet Unable to attach or mount volumes: unmounted volumes=[config-a], unattached volumes=[config-a config-b default-token-t2wrn]: timed out waiting for the condition ``` kubelet log: ``` I0225 13:41:14.817380 2005690 desired_state_of_world_populator.go:361] Added volume "config-a" (volSpec="pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9") for pod "64659405-169c-4450-b006-99d00d4018bd" to desired state. I0225 13:41:14.823159 2005690 desired_state_of_world_populator.go:361] Added volume "config-b" (volSpec="pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9") for pod "64659405-169c-4450-b006-99d00d4018bd" to desired state. I0225 13:41:14.905775 2005690 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9" (UniqueName: "kubernetes.io/nfs/64659405-169c-4450-b006-99d00d4018bd-pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9") pod "test" (UID: "64659405-169c-4450-b006-99d00d4018bd") I0225 13:41:15.006182 2005690 reconciler.go:254] Starting operationExecutor.MountVolume for volume "pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9" (UniqueName: "kubernetes.io/nfs/64659405-169c-4450-b006-99d00d4018bd-pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9") pod "test" (UID: "64659405-169c-4450-b006-99d00d4018bd") I0225 13:41:15.006213 2005690 reconciler.go:269] operationExecutor.MountVolume started for volume "pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9" (UniqueName: "kubernetes.io/nfs/64659405-169c-4450-b006-99d00d4018bd-pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9") pod "test" (UID: "64659405-169c-4450-b006-99d00d4018bd") I0225 13:41:15.006319 2005690 nfs.go:246] NFS mount set up: /var/lib/kubelet/pods/64659405-169c-4450-b006-99d00d4018bd/volumes/kubernetes.io~nfs/pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9 false stat /var/lib/kubelet/pods/64659405-169c-4450-b006-99d00d4018bd/volumes/kubernetes.io~nfs/pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9: no such file or directory I0225 13:41:15.006482 2005690 mount_linux.go:171] Mounting cmd (mount) with arguments (-t nfs -o vers=4.1 10.246.250.159:/mnt/nfs/default-test-nfs-claim-pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9 /var/lib/kubelet/pods/64659405-169c-4450-b006-99d00d4018bd/volumes/kubernetes.io~nfs/pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9) I0225 13:41:15.084139 2005690 operation_generator.go:672] MountVolume.SetUp succeeded for volume "pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9" (UniqueName: "kubernetes.io/nfs/64659405-169c-4450-b006-99d00d4018bd-pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9") pod "test" (UID: "64659405-169c-4450-b006-99d00d4018bd") ``` #### What you expected to happen: Deny the pod orchestration with duplicate volumes, OR the pod can be running successfully. 
#### How to reproduce it (as minimally and precisely as possible): Create a pod with the duplicate PVC as above. #### Anything else we need to know?: Root cause: The key of persistent volume info in DSW is `volumeName`, named after `pluginName/podUid-persistentVolumeName`, and the `OuterVolumeSpecName` is a `string`: https://github.com/kubernetes/kubernetes/blob/27c89b9aec73f66529a497910c460af2b25ab6dd/pkg/kubelet/volumemanager/cache/desired_state_of_world.go#L288-L293 When getting the unmounted volumes, it uses `OuterVolumeSpecName`: https://github.com/kubernetes/kubernetes/blob/27c89b9aec73f66529a497910c460af2b25ab6dd/pkg/kubelet/volumemanager/volume_manager.go#L435-L441 In our case, the `config-b` volume will overwrite the volume `kubernetes.io/nfs/64659405-169c-4450-b006-99d00d4018bd-pvc-a0b70cdf-ac23-43f2-a7bc-14467138ffa9` entry in DSW, and the `config-a` volume info is lost in ASW. Solution: two choices: - Consider it an invalid request, and deny creating it. - Construct uniqueVolumeName with `OuterVolumeSpecName` when the volume is a persistent volume. #### Environment: - Kubernetes version (use `kubectl version`): v1.20.4 - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`): - Install tools: - Network plugin and version (if this is a network-related bug): - Others: /sig apps /sig node /sig storage
0easy
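To illustrate the second proposed solution only (names and key format are made up, not the real volumemanager code): including the pod-level `OuterVolumeSpecName` in the key keeps the two pod volumes distinct, so `config-b` no longer replaces `config-a` in the desired state of world:

```go
package main

import "fmt"

// uniqueVolumeKey is purely illustrative; the real uniqueVolumeName logic
// lives in pkg/kubelet/volumemanager and pkg/volume/util.
func uniqueVolumeKey(pluginName, podUID, pvName, outerVolumeSpecName string) string {
	return fmt.Sprintf("%s/%s-%s/%s", pluginName, podUID, pvName, outerVolumeSpecName)
}

func main() {
	a := uniqueVolumeKey("kubernetes.io/nfs", "64659405", "pvc-a0b70cdf", "config-a")
	b := uniqueVolumeKey("kubernetes.io/nfs", "64659405", "pvc-a0b70cdf", "config-b")
	fmt.Println(a != b) // true: the two mounts of the same PVC no longer collide
}
```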
Title: build/README.md suggests using outdated docker-machine Body: <!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: The Docker Machine project is currently in maintenance mode (https://github.com/docker/machine/issues/4537) and there are different ways to utilize remote hosts for Docker containers to build Kubernetes. I believe the **build/README.md** documentation should be updated to reflect this. **Why is this needed**: Docker Machine is in maintenance mode and on the Docker website, it's listed as superseded. Kubernetes docs should recommend a better method before Docker Machine fades into non-maintenance.
0easy
Title: imagePullPolicy when defined without a value defaults to Always not to IfNotPresent Body: <!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: The documentation suggests that "The default pull policy is IfNotPresent". This is not the case when imagePullPolicy is defined under the spec section in a Pod definition without a specific value. Example: ``` apiVersion: v1 kind: Pod metadata: labels: run: nginx-test name: nginx-test spec: containers: - image: nginx name: nginx-test imagePullPolicy: ``` This will result in a Pod with imagePullPolicy set to Always. **Why is this needed**: This is needed for clarification of the default behaviour. It is necessary for the appropriate handling of Helm charts in cases where the imagePullPolicy is empty.
0easy
Title: Incomplete list of operations for admission webhook Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: Working on other doc bugs, I went looking for what calls admission plugins. I found four call sites for validating plugins, documented in https://github.com/kubernetes/website/issues/20673 . They correspond to the verbs CONNECT, CREATE, DELETE, and UPDATE. But only two of those verbs are documented at https://github.com/kubernetes/kubernetes/blob/863ce9726e0aa617f03332f42ff3cebb7abe8fc0/staging/src/k8s.io/api/admissionregistration/v1/types.go#L465 **What you expected to happen**: **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`): - Install tools: - Network plugin and version (if this is a network-related bug): - Others:
0easy
Title: Update docs to indicate that API values of []byte are always base64-encoded Body: <!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: Extend the docs to clarify that, in the various places where the API expects `[]byte`, the value is base64 encoded **Why is this needed**: I was looking at the 1.13 API docs for the conversion webhook and it said ``` `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. ``` It's not obvious that this value should be the base64 encoding of the PEM-encoded contents. <!-- DO NOT EDIT BELOW THIS LINE --> /kind feature
0easy
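For what the requested doc wording would say in practice: in JSON/YAML manifests a Kubernetes API field of type `[]byte` (such as `caBundle`) carries the base64 encoding of the raw bytes — here, the base64 of the PEM text, not the PEM text itself. A small sketch:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The PEM-encoded CA bundle as it exists on disk (contents truncated).
	pem := []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n")

	// What actually goes into the manifest's caBundle field.
	fmt.Println(base64.StdEncoding.EncodeToString(pem))
}
```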
Title: master/staging/README.md is out of date Body: List of directories in: https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io does not match the list here: https://github.com/kubernetes/kubernetes/tree/master/staging /good-first-issue
0easy
Title: update doc.go in staging/ repositories Body: - Look at one of the directories under staging/src/k8s.io/ (Please don't pick more than one directory for one PR as it will cause issues for reviewers) - Find all the files named `doc.go` (example: `find staging/src/k8s.io/client-go/ -name doc.go`) - See the following link; it has a `// import "k8s.io/client-go/tools/remotecommand"` comment, which tells tools how this package needs to be imported. So we should ensure that each doc.go has an import: https://github.com/kubernetes/kubernetes/blob/1fc36a5743bc02d62ddaac6cd41b71f0ef191ce2/staging/src/k8s.io/client-go/tools/remotecommand/doc.go#L20 - Add a description appropriate to the package (similar to the one below) https://github.com/kubernetes/kubernetes/blob/1fc36a5743bc02d62ddaac6cd41b71f0ef191ce2/staging/src/k8s.io/client-go/tools/remotecommand/doc.go#L17-L19 - Look through subdirectories and see if there are any other important directories that are likely to be used by outside users and add a doc.go Why are we doing this? - We prefer to use vanity URLs instead of directly using github.com to refer to code, so this helps ensure that tools honor our wish. See https://golang.org/doc/go1.4#canonicalimports - It would be good to have some package-level documentation since these directories under staging/ are meant to be for use by folks outside of the kubernetes GitHub org. /good-first-issue
0easy
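The shape being asked for, as a sketch for a hypothetical package (the import path and description below are illustrative; each real package would use its own k8s.io vanity path and wording):

```go
// Package example does <one-line description of what the package provides>.
//
// The comment on the package clause below is a canonical import path
// directive (see https://golang.org/doc/go1.4#canonicalimports): tools will
// reject imports that go through github.com/kubernetes/... and insist on the
// k8s.io vanity URL instead.
package example // import "k8s.io/client-go/example"
```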
Title: Lack of documentation for ImagePullBackOff timing Body: Hello, I might have missed it, but I am unable to find documentation about ImagePullBackOff timing. Specifically: how long does it take for a retry to take place after an ImagePullBackOff error occurred? Is this something that can be configured? At the deployment or cluster scope?
0easy
Title: kubectl --v= is underdocumented Body: _(Comes from user)_ The most important thing for cloud support is insight, logs, verbosity. The description of the `--v` argument reads: `--v=0: log level for V logs` - What are the different log levels? - What level is most useful to see all API calls? - What is "V"? - What would be a useful standard reply to a customer? "Please re-run this command with --v=9 and send me the output" With gcloud I normally ask for the output of `--log-http --verbosity=debug`. @kubernetes/kubectl @ymqytw @pwittrock
0easy
Title: Feature gate SupportIPVSProxyMode is GA - when does it get removed? Body: When should it be removed? @rikatz /sig network /kind cleanup
0easy
Title: replace Expect.Equal(someBoolean, ...) Body: ### What would you like to be added? The `test/e2e` tests are full of assertions which compare a boolean against true or false, often without any additional explanation. When those assertions fail, the error message is useless for understanding what went wrong, basically just saying "expected false to be true". All of those assertions should be replaced with `if (<failure check>) framework.Failf(<informative message>)`. Such locations can be found by grepping: ``` $ git grep 'Expect.*Equal([^,]*, *\(true\|false\)' | wc -l 272 ``` Example: ``` test/e2e/storage/vsphere/vsphere_volume_diskformat.go: framework.ExpectEqual(isAttached, true) => if (!isAttached) { framework.Failf("volume %s is not attached", ...) } ``` Some other places compare values. Those should be replaced with a comparison assertion. Example: ``` test/e2e/upgrades/apps/etcd.go: framework.ExpectEqual(ratio > 0.75, true) => gomega.Expect(ratio).To(gomega.BeNumerically(">", 0.75), "ratio too small") ``` It makes sense to split this work into individual PRs, one per OWNERS file under test/e2e. Here's a list of directories: - [ ] test/e2e/apimachinery - [ ] test/e2e/apps - [ ] test/e2e/auth - [ ] test/e2e/autoscaling: https://github.com/kubernetes/kubernetes/pull/106200 - [x] test/e2e/cloud/gcp - [x] test/e2e/cloud - [x] test/e2e/common/network - [ ] test/e2e/common/node - [ ] test/e2e/common/storage: @Ahmed-Aghadi - [x] test/e2e/framework: https://github.com/kubernetes/kubernetes/pull/105939 - [x] test/e2e/framework/providers/vsphere - [x] test/e2e/framework/volume - [x] test/e2e/instrumentation/logging - [ ] test/e2e/instrumentation: https://github.com/kubernetes/kubernetes/pull/106233 - [x] test/e2e/kubectl - [x] test/e2e/lifecycle - [ ] test/e2e/network/netpol @Ahmed-Aghadi - [ ] test/e2e/network @Ahmed-Aghadi - [ ] test/e2e/node: https://github.com/kubernetes/kubernetes/pull/106486 - [ ] test/e2e/scheduling: https://github.com/kubernetes/kubernetes/pull/106486 - [ ] test/e2e/storage: https://github.com/kubernetes/kubernetes/pull/105860 - [x] test/e2e/testing-manifests/storage-csi - [ ] test/e2e/upgrades/apps: https://github.com/kubernetes/kubernetes/pull/105774 - [ ] test/e2e/windows: https://github.com/kubernetes/kubernetes/pull/106220 When creating a PR, use `<directory>: enhance assertions` as the title. Use one commit per PR, squash when addressing feedback. Unless noted otherwise, @chetak123 will work on PRs. Help may be welcome. /sig testing /help ### Why is this needed? Better output for developers when tests fail.
0easy
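A compact sketch of both replacements listed above, with the old assertions kept as comments. `isAttached`, `volumeName`, and `ratio` stand in for values computed by the surrounding test:

```go
package e2e

import (
	"github.com/onsi/gomega"
	"k8s.io/kubernetes/test/e2e/framework"
)

func checkResults(isAttached bool, volumeName string, ratio float64) {
	// Before: framework.ExpectEqual(isAttached, true)
	//   -> on failure: "expected false to be true"
	if !isAttached {
		framework.Failf("volume %s is not attached", volumeName)
	}

	// Before: framework.ExpectEqual(ratio > 0.75, true)
	// After: a comparison assertion that reports the actual value on failure.
	gomega.Expect(ratio).To(gomega.BeNumerically(">", 0.75), "ratio too small")
}
```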