status stringclasses 1 value | repo_name stringclasses 13 values | repo_url stringclasses 13 values | issue_id int64 1 104k | updated_files stringlengths 11 1.76k | title stringlengths 4 369 | body stringlengths 0 254k ⌀ | issue_url stringlengths 38 55 | pull_url stringlengths 38 53 | before_fix_sha stringlengths 40 40 | after_fix_sha stringlengths 40 40 | report_datetime timestamp[ns, tz=UTC] | language stringclasses 5 values | commit_datetime timestamp[us, tz=UTC] |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 19,161 | ["airflow/cli/cli_parser.py"] | airflow-triggerer service healthcheck broken | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Windows 10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Via docker-compose
### What happened
When using the [docker-compose.yml](https://airflow.apache.org/docs/apache-airflow/stable/docker-compose.yaml) example file, the `airflow-triggerer` service shows up in an unhealthy state. It's the only service that shows up this way. I *have* modified this file to add certificates and fix other health checks, so I'm not sure if it's something I did.
```
8ef17f8a28d7 greatexpectations_data_quality_airflow-triggerer "/usr/bin/dumb-init …" 14 minutes ago Up 14 minutes (unhealthy)
```
The healthcheck portion of the `docker-compose.yml` for this service shows:
```
  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
```
When I shell into the container and run this command I get:
```
airflow@8ef17f8a28d7:/opt/airflow$ airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"
WARNING:root:/opt/airflow/logs/scheduler/latest already exists as a dir/file. Skip creating symlink.
usage: airflow jobs check [-h] [--allow-multiple] [--hostname HOSTNAME] [--job-type {BackfillJob,LocalTaskJob,SchedulerJob}] [--limit LIMIT]

Checks if job(s) are still alive

optional arguments:
  -h, --help            show this help message and exit
  --allow-multiple      If passed, this command will be successful even if multiple matching alive jobs are found.
  --hostname HOSTNAME   The hostname of job(s) that will be checked.
  --job-type {BackfillJob,LocalTaskJob,SchedulerJob}
                        The type of job(s) that will be checked.
  --limit LIMIT         The number of recent jobs that will be checked. To disable limit, set 0.

examples:
To check if the local scheduler is still working properly, run:

    $ airflow jobs check --job-type SchedulerJob --hostname "$(hostname)"

To check if any scheduler is running when you are using high availability, run:

    $ airflow jobs check --job-type SchedulerJob --allow-multiple --limit 100
```
From the description, it appears the `job-type` of `TriggererJob` is not a valid parameter to this call.
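A hedged sketch of the kind of CLI change this points at follows; the variable name and the `Arg` helper's exact signature in `airflow/cli/cli_parser.py` are assumptions, not the actual source:
```python
# Hypothetical sketch only: names and location are assumptions about cli_parser.py.
from airflow.cli.cli_parser import Arg  # assumption: the Arg helper is importable like this

ARG_JOB_TYPE_FILTER = Arg(
    ("--job-type",),
    choices=("BackfillJob", "LocalTaskJob", "SchedulerJob", "TriggererJob"),  # TriggererJob added
    help="The type of job(s) that will be checked.",
)
```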
### What you expected to happen
The service should show up as "healthy".
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19161 | https://github.com/apache/airflow/pull/19179 | 10e2a88bdc9668931cebe46deb178ab2315d6e52 | d3ac01052bad07f6ec341ab714faabed913169ce | 2021-10-22T14:17:04Z | python | 2021-10-22T23:32:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,154 | ["airflow/api_connexion/openapi/v1.yaml"] | Add Release date for when an endpoint/field was added in the REST API | ### Description
This will help users know the Airflow version in which a field or an endpoint was added.
We could do this by checking the changelogs of previous Airflow versions and adding a release date to new fields/endpoints corresponding to that version.
Take a look at the dag_run_id below

### Use case/motivation
We recently added the feature here: https://github.com/apache/airflow/pull/19105/files#diff-191056f40fba6bf5886956aa281e0e0d2bb4ddaa380beb012d922a25f5c65750R2305
If you decide to do this, take the opportunity to update the above to 2.3.0.
### Related issues
Here is an issue a user created that could have been avoided: https://github.com/apache/airflow/issues/19101
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19154 | https://github.com/apache/airflow/pull/19203 | df465497ad59b0a2f7d3fd0478ea446e612568bb | 8dfc3cab4bf68477675c901e0678ca590684cfba | 2021-10-22T09:15:47Z | python | 2021-10-27T17:51:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,138 | ["airflow/models/baseoperator.py"] | BaseOperator type hints for retry_delay and max_retry_delay should reveal float option | ### Describe the issue with documentation
`BaseOperator` type hints for `retry_delay` and `max_retry_delay` show `timedelta` only; however, the params also accept `float` seconds values.
Also, the type hint for the `dag` param is missing.
More precise type hints and param descriptions in the docs make it easier to understand the code behavior.
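For illustration, a minimal example of both accepted forms; the operator and values are arbitrary placeholders:
```python
# Both forms are accepted by BaseOperator today; only the timedelta form matches the current hints.
from datetime import timedelta

from airflow.operators.bash import BashOperator

task_a = BashOperator(task_id="a", bash_command="echo 1", retries=2, retry_delay=timedelta(minutes=5))
task_b = BashOperator(task_id="b", bash_command="echo 1", retries=2, retry_delay=300.0)  # float seconds
```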
### How to solve the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19138 | https://github.com/apache/airflow/pull/19142 | aa6c951988123edc84212d98b5a2abad9bd669f9 | 73b0ea18edb2bf8df79f11c7a7c746b2dc510861 | 2021-10-21T19:06:25Z | python | 2021-10-29T07:33:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,135 | ["airflow/decorators/__init__.pyi", "airflow/providers/cncf/kubernetes/decorators/__init__.py", "airflow/providers/cncf/kubernetes/decorators/kubernetes.py", "airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/provider.yaml", "airflow/providers/cncf/kubernetes/python_kubernetes_script.jinja2", "airflow/providers/cncf/kubernetes/python_kubernetes_script.py", "tests/providers/cncf/kubernetes/decorators/__init__.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/system/providers/cncf/kubernetes/example_kubernetes_decorator.py"] | PythonKubernetesOperator and kubernetes taskflow decorator | ### Description
After the implementation of the Docker Taskflow Operator in 2.2, running a simple function on Kubernetes should be quite simple to accomplish.
One major difference I see is how to get the code into the pod.
My preferred way here would be to add an init container which takes care of this, much like how we use a sidecar to extract XCom.
I would also prefer to add more defaults than the KubernetesPodOperator provides, to keep the call as simple and straightforward as possible. We could, for example, default the namespace to the namespace Airflow is running in if it is a Kubernetes deployment. Also, we could generate a reasonable pod name from the DAG ID and task ID.
### Use case/motivation
Being able to run a task as simply as:
```python
@task.kubernetes(image='my_image:1.1.0')
def sum(a, b):
    from my_package_in_version_1 import complex_sum
    return complex_sum(a,b)
```
would be awesome!
It would cleanly separate environments and resources between tasks just as the Docker or Virtualenv operators do, unlike the `@task.python` decorator. We could specify the resources and environment of each task freely via a `resources` argument.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19135 | https://github.com/apache/airflow/pull/25663 | a7cc4678177f25ce2899da8d96813fee05871bbb | 0eb0b543a9751f3d458beb2f03d4c6ff22fcd1c7 | 2021-10-21T16:08:46Z | python | 2022-08-22T17:54:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,103 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py", "tests/api_connexion/schemas/test_task_instance_schema.py"] | REST API: taskInstances and taskInstances/list not returning dag_run_id | ### Description
Hi,
In order to avoid extra calls to the REST API to figure out the dag_run_id linked to a task instance, it would be great to have that information in the response of the following methods of the REST API:
- `/api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances`
- `/api/v1/dags/~/dagRuns/~/taskInstances/list`
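For reference, a call to the batch `list` endpoint above could look roughly like this; the host, credentials, and enabled basic-auth API backend are assumptions about a local deployment:
```python
# Minimal sketch of hitting the batch list endpoint; adjust host/auth for your deployment.
import requests

resp = requests.post(
    "http://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances/list",
    auth=("admin", "admin"),  # assumes the basic_auth API backend is enabled
    json={"dag_ids": ["example_dag"]},
)
print(resp.json())
```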
The current response returns a list of task instances without the `dag_run_id`:
```json
{
  "task_instances": [
    {
      "task_id": "string",
      "dag_id": "string",
      "execution_date": "string",
      "start_date": "string",
      "end_date": "string",
      "duration": 0,
      "state": "success",
      "try_number": 0,
      "max_tries": 0,
      "hostname": "string",
      "unixname": "string",
      "pool": "string",
      "pool_slots": 0,
      "queue": "string",
      "priority_weight": 0,
      "operator": "string",
      "queued_when": "string",
      "pid": 0,
      "executor_config": "string",
      "sla_miss": {
        "task_id": "string",
        "dag_id": "string",
        "execution_date": "string",
        "email_sent": true,
        "timestamp": "string",
        "description": "string",
        "notification_sent": true
      }
    }
  ],
  "total_entries": 0
}
```
Our proposal is to add the dag_run_id in the response:
```diff
{
  "task_instances": [
    {
      "task_id": "string",
      "dag_id": "string",
      "execution_date": "string",
+     "dag_run_id": "string",
      "start_date": "string",
      "end_date": "string",
      "duration": 0,
      "state": "success",
      "try_number": 0,
      "max_tries": 0,
      "hostname": "string",
      "unixname": "string",
      "pool": "string",
      "pool_slots": 0,
      "queue": "string",
      "priority_weight": 0,
      "operator": "string",
      "queued_when": "string",
      "pid": 0,
      "executor_config": "string",
      "sla_miss": {
        "task_id": "string",
        "dag_id": "string",
        "execution_date": "string",
        "email_sent": true,
        "timestamp": "string",
        "description": "string",
        "notification_sent": true
      }
    }
  ],
  "total_entries": 0
}
```
Thanks!
### Use case/motivation
Having the dag_run_id when we get a list of task instances.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19103 | https://github.com/apache/airflow/pull/19105 | d75cf4d60ddbff5b88bfe348cb83f9d173187744 | cc3b062a2bdca16a7b239e73c4dc9e2a3a43c4f0 | 2021-10-20T11:32:02Z | python | 2021-10-20T18:56:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,098 | ["airflow/www/static/js/dag.js", "airflow/www/static/js/dags.js", "airflow/www/static/js/datetime_utils.js"] | Showing approximate Time until next dag run in Airflow UI | ### Description
It will be really helpful if we can add a dynamic message/tooltip which shows the time remaining until the next DAG run in the UI.
### Use case/motivation
Although we have next_run available in the UI, the user has to look at the schedule and work out the time difference between the schedule and the current time. It would be really convenient to have that information available at their fingertips.
### Related issues
None
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19098 | https://github.com/apache/airflow/pull/20273 | c4d2e16197c5f49493c142bfd9b754ea3c816f48 | e148bf6b99b9b62415a7dd9fbfa594e0f5759390 | 2021-10-20T06:11:16Z | python | 2021-12-16T17:17:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,080 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "airflow/utils/dot_renderer.py", "tests/cli/commands/test_dag_command.py", "tests/utils/test_dot_renderer.py"] | Display DAGs dependencies in CLI | ### Description
Recently, we added [a new DAG dependencies view](https://github.com/apache/airflow/pull/13199) to the webserver. It would be helpful if a similar diagram could also be displayed/generated using the CLI. For now, only [one DAG can be displayed](http://airflow.apache.org/docs/apache-airflow/stable/usage-cli.html#exporting-dag-structure-as-an-image) using the CLI.

If anyone is interested, I will be happy to help with the review.
### Use case/motivation
* Keep parity between the CLI and the web server.
* Enable the writing of scripts that use these diagrams, e.g. for attaching them in the documentation.
### Related issues
https://github.com/apache/airflow/pull/13199/files
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19080 | https://github.com/apache/airflow/pull/19985 | 728e94a47e0048829ce67096235d34019be9fac7 | 498f98a8fb3e53c9323faeba1ae2bf4083c28e81 | 2021-10-19T15:48:30Z | python | 2021-12-05T22:11:29Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,078 | ["airflow/www/static/js/graph.js", "airflow/www/views.py"] | TaskGroup tooltip missing actual tooltip and default_args issue | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Just used `airflow standalone`
### What happened
First:
TaskGroup does not show the tooltip that is defined for that group on hover

Second:
Providing an Operator with `task_group=` instead of TaskGroup as context manager ignores that TaskGroup's default_args. Using the code I provided below, the task "end" in section_3 has the wrong owner.
### What you expected to happen
First:
If a TaskGroup is used and the tooltip is defined like so:
```python
with TaskGroup("section_1", tooltip="Tasks for section_1") as section_1:
```
the information provided with `tooltip="..."` should show on hover, as shown in the official documentation at https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html#taskgroups
Second:
Defining a TaskGroup with default_args like
```python
section_3 = TaskGroup("section_3", tooltip="Tasks for section_2", default_args={"owner": "bug"})
```
should result in those default_args being used by tasks defined like so:
```python
end = DummyOperator(task_id="end", task_group=section_3)
```
### How to reproduce
I have tested this with the below code
```python
from datetime import datetime
from airflow.models.dag import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup
with DAG(
    dag_id="task_group_test",
    start_date=datetime(2021, 1, 1),
    catchup=False,
    tags=["example"],
    default_args={"owner": "test"},
) as dag:
    start = DummyOperator(task_id="start")

    with TaskGroup("section_1", tooltip="Tasks for section_1") as section_1:
        task_1 = DummyOperator(task_id="task_1")
        task_2 = BashOperator(task_id="task_2", bash_command="echo 1")
        task_3 = DummyOperator(task_id="task_3")
        task_1 >> [task_2, task_3]

    with TaskGroup(
        "section_2", tooltip="Tasks for section_2", default_args={"owner": "overwrite"}
    ) as section_2:
        task_1 = DummyOperator(task_id="task_1")

    section_3 = TaskGroup("section_3", tooltip="Tasks for section_3", default_args={"owner": "bug"})
    end = DummyOperator(task_id="end", task_group=section_3)

    start >> section_1 >> section_2 >> section_3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19078 | https://github.com/apache/airflow/pull/19083 | d5a029e119eb50e78b5144e5405f2b249d5e4435 | 8745fb903069ac6174134d52513584538a2b8657 | 2021-10-19T15:23:04Z | python | 2021-10-19T18:08:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,077 | ["airflow/providers/microsoft/azure/hooks/data_factory.py", "docs/apache-airflow-providers-microsoft-azure/connections/adf.rst", "tests/providers/microsoft/azure/hooks/test_azure_data_factory.py"] | Add support for more authentication types in Azure Data Factory hook | ### Description
At the moment the Azure Data Factory hook only supports `TokenCredential` (service principal client ID and secret). It would be very useful to have support for other authentication types like managed identity. Preferably, we would create a `DefaultTokenCredential` if the client ID and secret are not provided in the connection.
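As a rough illustration of that fallback; the function name and how it would be wired into the hook are assumptions, only the `azure-identity` classes are real:
```python
# Hedged sketch: fall back to DefaultAzureCredential (covers managed identity / pod identity)
# when no service principal details are configured on the connection.
from azure.identity import ClientSecretCredential, DefaultAzureCredential


def get_adf_credential(client_id=None, client_secret=None, tenant_id=None):
    if client_id and client_secret and tenant_id:
        return ClientSecretCredential(
            tenant_id=tenant_id, client_id=client_id, client_secret=client_secret
        )
    return DefaultAzureCredential()
```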
### Use case/motivation
We're using Airflow on Azure Kubernetes Service and this would allow us to use the pod identity for authentication which is a lot cleaner than creating a service principal.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19077 | https://github.com/apache/airflow/pull/19079 | 9efb989d19e657a2cde2eef98804c5007f148ee1 | ca679c014cad86976c1b2e248b099d9dc9fc99eb | 2021-10-19T14:42:14Z | python | 2021-11-07T16:42:36Z |
closed | apache/airflow | https://github.com/apache/airflow | 19,001 | ["chart/values.schema.json", "chart/values.yaml"] | Slow liveness probe causes frequent restarts (scheduler and triggerer) | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
official docker image
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
I noticed the scheduler was restarting a lot and often ended up in CrashLoopBackoff state, apparently due to a failed liveness probe:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 19m (x25 over 24m) kubelet Back-off restarting failed container
Warning Unhealthy 4m15s (x130 over 73m) kubelet Liveness probe failed:
```
Triggerer also has this issue and also enters CrashLoopBackoff state frequently.
e.g.
```
NAME READY STATUS RESTARTS AGE
airflow-prod-redis-0 1/1 Running 0 2d7h
airflow-prod-scheduler-75dc64bc8-m8xdd 2/2 Running 14 77m
airflow-prod-triggerer-7897c44dd4-mtnq9 1/1 Running 126 12h
airflow-prod-webserver-7bdfc8ff48-gfnvs 1/1 Running 0 12h
airflow-prod-worker-659b566588-w8cd2 1/1 Running 0 147m
```
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 18m (x398 over 11h) kubelet Back-off restarting failed container
Warning Unhealthy 3m32s (x1262 over 12h) kubelet Liveness probe failed:
```
It turns out the liveness probe takes too long to run, so it fails continuously and the scheduler just restarts every 10 minutes.
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
I ran the liveness probe code in a container on k8s and found that it generally takes longer than 5 seconds.
Probably we should increase the default timeout to 10 seconds and possibly reduce the frequency so that it's not wasting as much CPU.
```
❯ keti airflow-prod-scheduler-6956684c7f-swfgb -- bash
Defaulted container "scheduler" out of: scheduler, scheduler-log-groomer, wait-for-airflow-migrations (init)
airflow@airflow-prod-scheduler-6956684c7f-swfgb:/opt/airflow$ time /entrypoint python -Wignore -c "import os
> os.environ['AIRFLOW__CORE__LOGGING_LEVEL'] = 'ERROR'
> os.environ['AIRFLOW__LOGGING__LOGGING_LEVEL'] = 'ERROR'
>
> from airflow.jobs.scheduler_job import SchedulerJob
> from airflow.utils.db import create_session
> from airflow.utils.net import get_hostname
> import sys
>
> with create_session() as session:
> job = session.query(SchedulerJob).filter_by(hostname=get_hostname()).order_by(
> SchedulerJob.latest_heartbeat.desc()).limit(1).first()
>
> print(0 if job.is_alive() else 1)
> "
0
real 0m5.696s
user 0m4.989s
sys 0m0.375s
airflow@airflow-prod-scheduler-6956684c7f-swfgb:/opt/airflow$ time /entrypoint python -Wignore -c "import os
os.environ['AIRFLOW__CORE__LOGGING_LEVEL'] = 'ERROR'
os.environ['AIRFLOW__LOGGING__LOGGING_LEVEL'] = 'ERROR'
from airflow.jobs.scheduler_job import SchedulerJob
from airflow.utils.db import create_session
from airflow.utils.net import get_hostname
import sys
with create_session() as session:
job = session.query(SchedulerJob).filter_by(hostname=get_hostname()).order_by(
SchedulerJob.latest_heartbeat.desc()).limit(1).first()
print(0 if job.is_alive() else 1)
"
0
real 0m7.261s
user 0m5.273s
sys 0m0.411s
airflow@airflow-prod-scheduler-6956684c7f-swfgb:/opt/airflow$
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19001 | https://github.com/apache/airflow/pull/19003 | b814ab43d62fad83c1083a7bc3a8d009c6103213 | 866c764ae8fc17c926e245421d607e4e84ac9ec6 | 2021-10-15T04:33:41Z | python | 2021-10-15T14:32:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,969 | ["airflow/providers/trino/hooks/trino.py", "docs/apache-airflow-providers-trino/connections.rst", "docs/apache-airflow-providers-trino/index.rst", "tests/providers/trino/hooks/test_trino.py"] | Trino JWT Authentication Support | ### Description
It would be great to support JWT authentication in the Trino hook (which can also be used for the Presto hook).
For example, like this:
```
elif extra.get('auth') == 'jwt':
    auth = trino.auth.JWTAuthentication(
        token=extra.get('jwt__token')
    )
```
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18969 | https://github.com/apache/airflow/pull/23116 | 6065d1203e2ce0aeb19551c545fb668978b72506 | ccb5ce934cd521dc3af74b83623ca0843211be62 | 2021-10-14T05:10:17Z | python | 2022-05-06T19:45:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,967 | ["airflow/hooks/dbapi.py", "airflow/providers/oracle/hooks/oracle.py", "tests/providers/oracle/hooks/test_oracle.py"] | DbApiHook.test_connection() does not work with Oracle db | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-oracle==2.0.1
### Deployment
Other
### Deployment details

### What happened
The title and screenshot are self-explanatory.
### What you expected to happen
To have a successful message similar to what I got with SQL Server.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18967 | https://github.com/apache/airflow/pull/21699 | a1845c68f9a04e61dd99ccc0a23d17a277babf57 | 900bad1c67654252196bb095a2a150a23ae5fc9a | 2021-10-14T04:30:00Z | python | 2022-02-26T23:56:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,925 | ["airflow/providers/databricks/operators/databricks.py"] | Add template_ext = ('.json') to databricks operators | ### Description
Add `template_ext = ('.json',)` to the Databricks operators (see the sketch below). It will improve debugging and the way parameters are organized.
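A minimal sketch of what this amounts to; the class below is illustrative only, not the shipped operator:
```python
# Hypothetical sketch: letting the templated `json` field also be loaded from a .json file.
from airflow.models.baseoperator import BaseOperator


class DatabricksJsonTemplatedOperator(BaseOperator):
    template_fields = ("json",)
    template_ext = (".json",)  # e.g. json="params/run.json" would be rendered from that file

    def __init__(self, json=None, **kwargs):
        super().__init__(**kwargs)
        self.json = json or {}

    def execute(self, context):
        # The real operators would submit this payload to the Databricks API.
        self.log.info("Rendered json payload: %s", self.json)
```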
### Use case/motivation
Bigger parameters can live in a file, which is easier to debug and maintain.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18925 | https://github.com/apache/airflow/pull/21530 | 5590e98be30d00ab8f2c821b2f41f524db8bae07 | 0a2d0d1ecbb7a72677f96bc17117799ab40853e0 | 2021-10-12T22:42:56Z | python | 2022-02-12T12:52:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,917 | ["docs/apache-airflow/concepts/xcoms.rst"] | Document xcom clearing behaviour on task retries | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Debian Buster (in Docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
Dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:2.2.0-buster-onbuild-44762
```
Launched via `astro dev start`
**DAG**
```python3
from datetime import timedelta

from airflow.models.dag import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago


def fail_then_succeed(**kwargs):
    val = kwargs["ti"].xcom_pull(key="foo")
    if not val:
        kwargs["ti"].xcom_push(key="foo", value="bar")
        raise Exception("fail")


with DAG(
    dag_id="fail_then_succeed",
    start_date=days_ago(1),
    schedule_interval=None,
) as dag:
    PythonOperator(
        task_id="succeed_second_try",
        python_callable=fail_then_succeed,
        retries=2,
        retry_delay=timedelta(seconds=15),
    )
```
### What happened
The pushed value appears in XCOM, but even on successive retries `xcom_pull` returned `None`
### What you expected to happen
On the second try, the XCOM value pushed by the first try is available, so the task succeeds after spending some time in "up for retry".
### How to reproduce
Run the dag shown above, notice that it fails.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18917 | https://github.com/apache/airflow/pull/19968 | e8b1120f26d49df1a174d89d51d24c6e7551bfdf | 538612c3326b5fd0be4f4114f85e6f3063b5d49c | 2021-10-12T18:54:45Z | python | 2021-12-05T23:03:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,901 | ["setup.py"] | Pandas Not Installed with Pip When Required for Providers Packages | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu 20.04.2 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-hive==2.0.2
### Deployment
Virtualenv installation
### Deployment details
Commands used to install:
```
python3.8 -m venv ./airflow-2.2.0
source airflow-2.2.0/bin/activate
pip install --upgrade pip
export AIRFLOW_VERSION=2.2.0
export PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
export CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
pip install wheel
pip install --upgrade "apache-airflow[amazon,apache.druid,apache.hdfs,apache.hive,async,celery,http,jdbc,mysql,password,redis,ssh]==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
```
### What happened
After installing, Airflow started throwing these errors:
```
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveCliHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveServer2Hook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveMetastoreHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveCliHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveServer2Hook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveMetastoreHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
```
### What you expected to happen
I expected required packages (pandas) to be installed by the pip install command.
### How to reproduce
Install using the commands in the deployment details.
### Anything else
This seems to be related to pandas being made optional in 2.2.0 but not accounting for providers packages still requiring it.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18901 | https://github.com/apache/airflow/pull/18997 | ee8e0b3a2a87a76bcaf088960bce35a6cee8c500 | de98976581294e080967e2aa52043176dffb644f | 2021-10-12T08:23:28Z | python | 2021-10-16T18:21:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,878 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/sensors/base.py", "tests/core/test_config_templates.py"] | Change the Sensors default timeout using airflow configs | ### Description
Add a configuration parameter to set the default `timeout` on `BaseSensorOperator`:
```
[sensors]
default_timeout=
```
### Use case/motivation
By default the sensor timeout is 7 days; this is too much time for some environments and could be reduced to release worker slots faster when a sensor never matches its condition and has no custom timeout.
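A hedged sketch of how `BaseSensorOperator` could consume the proposed option; the section/key follow the snippet above and the fallback mirrors the current 7-day default:
```python
# Sketch only: where this would land in airflow/sensors/base.py is an assumption.
from airflow.configuration import conf

default_sensor_timeout = conf.getfloat("sensors", "default_timeout", fallback=60 * 60 * 24 * 7)
```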
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18878 | https://github.com/apache/airflow/pull/19119 | 0a6850647e531b08f68118ff8ca20577a5b4062c | 34e586a162ad9756d484d17b275c7b3dc8cefbc2 | 2021-10-11T01:12:50Z | python | 2021-10-21T19:33:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,799 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | Add test for https://github.com/apache/airflow/pull/17305 | ### Body
Add unit test for changes in https://github.com/apache/airflow/pull/17305
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18799 | https://github.com/apache/airflow/pull/18806 | e0af0b976c0cc43d2b1aa204d047fe755e4c5be7 | e286ee64c5c0aadd79a5cd86f881fb1acfbf317e | 2021-10-07T09:53:33Z | python | 2021-10-08T03:24:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,777 | ["tests/jobs/test_scheduler_job.py", "tests/test_utils/asserts.py"] | [Quarantine] test_no_orphan_process_will_be_left | ### Body
The `test_no_orphan_process_will_be_left` in `TestSchedulerJob` is really flaky.
Until we know how to fix it, I will quarantine it:
https://github.com/apache/airflow/pull/18691/checks?check_run_id=3811660138
```
=================================== FAILURES ===================================
_____________ TestSchedulerJob.test_no_orphan_process_will_be_left _____________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob object at 0x7fc2f91d2ac8>
    def test_no_orphan_process_will_be_left(self):
        empty_dir = mkdtemp()
        current_process = psutil.Process()
        old_children = current_process.children(recursive=True)
        self.scheduler_job = SchedulerJob(
            subdir=empty_dir, num_runs=1, executor=MockExecutor(do_update=False)
        )
        self.scheduler_job.run()
        shutil.rmtree(empty_dir)
        # Remove potential noise created by previous tests.
        current_children = set(current_process.children(recursive=True)) - set(old_children)
>       assert not current_children
E       AssertionError: assert not {psutil.Process(pid=2895, name='pytest', status='running', started='06:53:45')}
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18777 | https://github.com/apache/airflow/pull/19860 | d1848bcf2460fa82cd6c1fc1e9e5f9b103d95479 | 9b277dbb9b77c74a9799d64e01e0b86b7c1d1542 | 2021-10-06T16:04:33Z | python | 2021-12-13T17:55:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,771 | ["airflow/providers/pagerduty/hooks/pagerduty.py", "airflow/providers/pagerduty/hooks/pagerduty_events.py", "airflow/providers/pagerduty/provider.yaml", "tests/providers/pagerduty/hooks/test_pagerduty.py", "tests/providers/pagerduty/hooks/test_pagerduty_events.py"] | Current implementation of PagerdutyHook requires two tokens to be present in connection | ### Apache Airflow Provider(s)
pagerduty
### Versions of Apache Airflow Providers
apache-airflow-providers-pagerduty==2.0.1
### Apache Airflow version
2.1.4 (latest released)
### Operating System
macOS catalina
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Currently, the PagerdutyHook can be used to access two API's:
- Events API: interface to send alerts. Needs only a integration key/routing key/Events API key. (https://developer.pagerduty.com/docs/get-started/getting-started/#events-api)
- Pagerduty API: interface to interact with account data, project setup, etc. Needs only a Pagerduty rest API Key or account token. (https://developer.pagerduty.com/docs/get-started/getting-started/#rest-api)
In order to interact with the API's, the PagerdutyHook uses two attributes that refer to these api keys:
- `token`: Refers to the account token/rest API key. This attribute is retrieved from the `Password` field in the connection or can be set at initialization of a class instance by passing it into the init method of the class. In the __Init__ method, its value is asserted to be not `None`. The token is **not used** for sending alerts to Pagerduty.
- `routing_key`: Refers to the integration key/Events API key. This attribute is retrieved from the `Extra` field and is used in the `create_event` method to send request to the Events API. In the `create_event` method the value is asserted to be not `None`.
As a result if users want to use the hook to only send events, they will need to provide a connection with a random string as a password (which can't be None) and the actual integration key in the `extra` field. This to me makes no sense.
**Proposed solution:** Rather than having the two api's in a single hook, separate them into two different hooks. This increases code simplicity and maintainability.
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18771 | https://github.com/apache/airflow/pull/18784 | 5d9e5f69b9d9c7d4f4e5e5c040ace0589b541a91 | 923f5a5912785649be7e61c8ea32a0bd6dc426d8 | 2021-10-06T12:00:57Z | python | 2021-10-12T18:00:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,732 | ["airflow/www/static/css/main.css"] | Tooltip element is not removed and overlays another clickable elements | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
Debian GNU/Linux 11 rodete
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
On a clean Airflow install, sometimes there is a problem with accessing clickable elements like the 'Airflow Home' button (logo with name) or the 'DAGs' menu button. This problem can also be replicated in older versions of Airflow.
The issue is caused by tooltip elements that are created each time another element is hovered; they appear as transparent elements at the (0,0) point, which is the top left corner of the interface. That's why they cover the Airflow Home button and sometimes even the DAGs menu button (depending on the size of the tooltip).
This makes some of the elements located near the top left corner unclickable (e.g. the Airflow Home button is clickable, but only its bottom part).
This is a screenshot of the interface with the place where the redundant tooltips appear highlighted. I also show the tooltip elements in the HTML code.

### What you expected to happen
I expect that the tooltips disappear when the element triggering them is not hovered anymore. The Airflow Home button and DAGs button should be clickable.
### How to reproduce
1. Hover over Airflow Home button and see that all of the element can be clicked (note the different pointer).
2. Now hover over any other element that shows a tooltip, e.g. filters for All/Active/Paused DAGs just a bit below.
3. Hover again over Airflow Home button and see that part of it is not clickable.
4. Open devtools and inspect the element that covers the top left corner of the interface.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18732 | https://github.com/apache/airflow/pull/19261 | 7b293c548a92d2cd0eea4f9571c007057aa06482 | 37767c1ba05845266668c84dec7f9af967139f42 | 2021-10-05T09:25:43Z | python | 2021-11-02T17:09:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,703 | ["airflow/api_connexion/endpoints/user_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_user_endpoint.py"] | "Already exists." message is missing while updating user email with existing email id through API | ### Apache Airflow version
2.2.0b2 (beta snapshot)
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro dev start
### What happened
"Already exists." error message is missing while updating user email with existing email id through API.
### What you expected to happen
"Email id already exists" error message should appear
### How to reproduce
1. Use a patch request with the below URL.
`{{url}}/users/{{username}}`
2. In payload use an exiting email id
```
{
"username": "{{username}}",
"password": "password1",
"email": "{{exiting_email}}@example.com",
"first_name": "{{$randomFirstName}}",
"last_name": "{{$randomLastName}}",
"roles":[{ "name": "Op"}]
}
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18703 | https://github.com/apache/airflow/pull/18757 | cf27419cfe058750cde4247935e20deb60bda572 | a36e7ba4176eeacab1aeaf72ce452d3b30f4de3c | 2021-10-04T11:48:44Z | python | 2021-10-06T15:17:34Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,702 | ["airflow/api_connexion/endpoints/user_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_user_endpoint.py"] | Unable to change username while updating user information through API | ### Apache Airflow version
2.2.0b2 (beta snapshot)
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro dev start
### What happened
Unable to change the username while updating user information through the API, though it's possible from the UI.
### What you expected to happen
Either the username should not be editable from the UI, or it should be editable from the API.
### How to reproduce
1. Use a patch request with the below URL.
`{{url}}/users/{{username}}`
2. In the payload, use a different username:
```
{
  "username": "{{username1}}",
  "password": "password1",
  "email": "{{email}}@example.com",
  "first_name": "{{$randomFirstName}}",
  "last_name": "{{$randomLastName}}",
  "roles": [{ "name": "Op" }]
}
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18702 | https://github.com/apache/airflow/pull/18757 | cf27419cfe058750cde4247935e20deb60bda572 | a36e7ba4176eeacab1aeaf72ce452d3b30f4de3c | 2021-10-04T11:39:13Z | python | 2021-10-06T15:17:34Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,664 | ["airflow/providers/oracle/hooks/oracle.py", "tests/providers/oracle/hooks/test_oracle.py"] | [Oracle] Oracle Hook - make it possible to define a schema in the connection parameters | ### Description
Currently the Oracle hook does not set a CURRENT_SCHEMA after connecting to the database.
In a lot of use cases we have production and test databases with separate connections and database schemas, e.g. TEST.Table1, PROD.Table1.
"Hard-coding" the database schema in SQL scripts is not elegant when there are different Airflow instances for development and production.
An option would be to store the database schema in an Airflow Variable and get it into the SQL script with Jinja.
In large SQL files with several tables this is not elegant either, because a query to the metadata database is made for every table.
Why not use the schema parameter of the Airflow connection and execute
`ALTER SESSION SET CURRENT_SCHEMA='SCHEMA'`
right after successfully connecting to the database?
An alternative would be to use the `Connection.current_schema` option of the cx_Oracle library.
https://cx-oracle.readthedocs.io/en/6.4.1/connection.html#Connection.current_schema
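For illustration, a minimal sketch of that cx_Oracle alternative; the connection details are placeholders:
```python
# Sketch only: credentials/dsn are placeholders; current_schema is the documented cx_Oracle attribute.
import cx_Oracle

conn = cx_Oracle.connect(user="scott", password="tiger", dsn="dbhost.example.com/orclpdb1")
conn.current_schema = "TEST"  # same effect as ALTER SESSION SET CURRENT_SCHEMA
```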
### Use case/motivation
It makes query development much easier by storing environment attributes directly in the Airflow connection.
You have full flexibility without touching your SQL script.
It makes separation of test and production environments and connections possible.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18664 | https://github.com/apache/airflow/pull/19084 | 6d110b565a505505351d1ff19592626fb24e4516 | 471e368eacbcae1eedf9b7e1cb4290c385396ea9 | 2021-10-01T09:21:06Z | python | 2022-02-07T20:37:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,634 | ["docs/apache-airflow/upgrading-from-1-10/index.rst"] | Pendulum 1.x -> 2.x with Airflow 1.x -> 2.x Documentation Updates | ### Describe the issue with documentation
With the upgrade from Pendulum 1.x to 2.x that coincides with upgrading from Airflow 1.x to 2.x, there are some breaking changes that aren't mentioned here:
[Upgrading from 1.x to 2.x](https://airflow.apache.org/docs/apache-airflow/stable/upgrading-from-1-10/index.html)
The specific breaking change I experienced is that the .copy() method is completely removed in Pendulum 2.x. It turns out it isn't needed in my case, as each macro that provides a pendulum.Pendulum gives a brand new instance, but it still caught me by surprise.
I also noticed that the stable documentation (2.1.4 at time of writing) for macros still links to Pendulum 1.x:
[Macros reference](https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html)
Specifically the macros: execution_date, prev_execution_date, prev_execution_date_success, prev_start_date_success, next_execution_date
**EDIT**
Another breaking change: .format() now defaults to the alternative formatter in 2.x, meaning
`execution_date.format('YYYY-MM-DD HH:mm:ss', formatter='alternative')`
now throws errors.
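For concreteness, a small illustration of the 2.x behaviour described above (assumes pendulum >= 2 is installed):
```python
import pendulum

dt = pendulum.datetime(2021, 10, 1, 12, 30)
print(dt.format("YYYY-MM-DD HH:mm:ss"))  # fine in 2.x: token-style formatting is the only formatter
# dt.format("YYYY-MM-DD", formatter="alternative")  # TypeError in 2.x: the `formatter` kwarg is gone
```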
### How to solve the problem
The macros reference links definitely need to be changed to Pendulum 2.x. Maybe it would be a nice-to-have in the 1.x -> 2.x documentation, but I was effectively using .copy() in a misguided way, so I wouldn't say it's totally necessary.
**EDIT**
I think it's worth including that
`execution_date.format('YYYY-MM-DD HH:mm:ss', formatter='alternative')`
will throw errors in the 1.x -> 2.x upgrade documentation.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18634 | https://github.com/apache/airflow/pull/18955 | f967ca91058b4296edb507c7826282050188b501 | 141d9f2d5d3e47fe7beebd6a56953df1f727746e | 2021-09-30T11:49:40Z | python | 2021-10-14T17:56:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,632 | ["airflow/www/jest-setup.js", "airflow/www/package.json", "airflow/www/static/js/tree/useTreeData.js", "airflow/www/static/js/tree/useTreeData.test.js", "airflow/www/templates/airflow/tree.html", "airflow/www/views.py", "airflow/www/yarn.lock"] | Auto-refresh doesn't take into account the selected date | ### Apache Airflow version
2.1.3
### Operating System
Debian GNU/Linux 9 (stretch)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
In the DAG tree view, if I select a custom date in the date filter (top left corner) and press "update", DAG runs are correctly filtered to the selected date and number of runs.
However, if the "auto-refresh" toggle is on, when the next tick refresh happens, the date filter is no longer taken into account and the UI displays the actual **latest** x DAG runs status. However, neither the header dates (45° angle) nor the date filter reflect this new time window
I investigated in the network inspector and it seems that the xhr request that fetches dag runs status doesn't contain a date parameter and thus always fetch the latest DAG run data
### What you expected to happen
I expect that the auto-refresh feature preserves the selected time window
### How to reproduce
1. Open a DAG with at least 20 dag runs.
2. Make sure auto-refresh is disabled.
3. Select a filter date earlier than the 10th dag run's start date and a "number of runs" value of 10, then press "update".
4. The DAG tree view should now be focused on the first 10 dag runs.
5. Now toggle auto-refresh and wait for the next tick.
6. The DAG tree view will now display the status of the latest 10 runs, but the header dates will still reflect the old start dates.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18632 | https://github.com/apache/airflow/pull/19605 | 186513e24e723b79bc57e3ca0ade3c73e4fa2f9a | d3ccc91ba4af069d3402d458a1f0ca01c3ffb863 | 2021-09-30T09:45:00Z | python | 2021-11-16T00:24:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,600 | ["airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dags.html"] | Selecting DAG in search dropdown should lead directly to DAG | ### Description
When searching for a DAG in the search box, the dropdown menu suggests matching DAG names. Currently, selecting a DAG from the dropdown menu initiates a search with that DAG name as the search query. However, I think it would be more intuitive to go directly to the DAG.
If the user prefers to execute the search query, they can still have the option to search without selecting from the dropdown.
---
Select `example_bash_operator` from dropdown menu:

---
We are taken to the search result instead of that specific DAG:

### Use case/motivation
When you select a specific DAG from the dropdown, you probably intend to go to that DAG. Even if there were another DAG that began with the name of the selected DAG, both DAGs would appear in the dropdown, so you would still be able to select the one that you want.
For example:

### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18600 | https://github.com/apache/airflow/pull/18991 | 324c31c2d7ad756ce3814f74f0b6654d02f19426 | 4d1f14aa9033c284eb9e6b94e9913a13d990f93e | 2021-09-29T04:22:53Z | python | 2021-10-22T15:35:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,599 | ["airflow/stats.py", "tests/core/test_stats.py"] | datadog parsing error for dagrun.schedule_delay since it is not passed in float type | ### Apache Airflow version
2.1.2
### Operating System
Gentoo/Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
In datadog-agent logs, got parsing error
[ AGENT ] 2021-09-29 03:20:01 UTC | CORE | ERROR | (pkg/dogstatsd/server.go:411 in errLog) | Dogstatsd: error parsing metric message '"airflow.dagrun.schedule_delay.skew:<Period [2021-09-29T03:20:00+00:00 -> 2021-09-29T03:20:00.968404+00:00]>|ms"': could not parse dogstatsd metric value: strconv.ParseFloat: parsing "<Period [2021-09-29T03:20:00+00:00 -> 2021-09-29T03:20:00.968404+00:00]>": invalid syntax
### What you expected to happen
Since the datadog agent expects a float (see https://github.com/DataDog/datadog-agent/blob/6830beaeb182faadac40368d9d781b796b4b2c6f/pkg/dogstatsd/parse.go#L119), the schedule_delay should be a float instead of a timedelta.
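A hedged sketch of the kind of normalisation this implies before the value reaches statsd; where it would live in `airflow/stats.py` is an assumption:
```python
import datetime


def timing_value_to_float(value):
    # dogstatsd timers want a plain float, not the repr of a timedelta/Period object
    if isinstance(value, datetime.timedelta):
        return value.total_seconds() * 1000.0  # milliseconds
    return float(value)
```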
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18599 | https://github.com/apache/airflow/pull/19973 | dad2f8103be954afaedf15e9d098ee417b0d5d02 | 5d405d9cda0b88909e6b726769381044477f4678 | 2021-09-29T04:02:52Z | python | 2021-12-15T10:42:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,545 | ["airflow/www/views.py"] | Unable to add a new user when logged using LDAP auth | ### Discussed in https://github.com/apache/airflow/discussions/18290
<div type='discussions-op-text'>
<sup>Originally posted by **pawsok** September 16, 2021</sup>
### Apache Airflow version
2.1.4 (latest released)
### Operating System
Amazon Linux AMI 2018.03
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
- AWS ECS EC2 mode
- RDS PostgreSQL for DB
- LDAP authentication enabled
### What happened
We upgraded Airflow from 2.0.1 to 2.1.3, and now when I log into Airflow (Admin role) using LDAP authentication and go to Security --> List Users, I cannot see the **add button** ("plus").
**Airflow 2.0.1** (our current version):

**Airflow 2.1.3:**

### What you expected to happen
Option to add a new user (using LDAP auth).
### How to reproduce
1. Upgrade to Airflow 2.1.3
2. Log in to Airflow as LDAP user type
3. Go to Security --> List Users
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/18545 | https://github.com/apache/airflow/pull/22619 | 29de8daeeb979d8f395b1e8e001e182f6dee61b8 | 4e4c0574cdd3689d22e2e7d03521cb82179e0909 | 2021-09-27T08:55:18Z | python | 2022-04-01T08:42:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,512 | ["airflow/models/renderedtifields.py"] | airflow deadlock trying to update rendered_task_instance_fields table (mysql) | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
We have been unable to reproduce this in our testing. Our DAGs use the S3KeySensor task. Sometimes we have up to 100 dag_runs (running the same DAG) waiting for the S3KeySensor to poke the expected S3 documents.
We are regularly seeing this deadlock with mysql:
```
Exception: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: DELETE FROM rendered_task_instance_fields WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND (rendered_task_instance_fields.dag_id, rendered_task_instance_fields.task_id, rendered_task_instance_fields.execution_date) NOT IN (SELECT subq1.dag_id, subq1.task_id, subq1.execution_date
FROM (SELECT rendered_task_instance_fields.dag_id AS dag_id, rendered_task_instance_fields.task_id AS task_id, rendered_task_instance_fields.execution_date AS execution_date
FROM rendered_task_instance_fields
WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s ORDER BY rendered_task_instance_fields.execution_date DESC
LIMIT %s) AS subq1)]
[parameters: ('refresh_hub', 'scorecard_wait', 'refresh_hub', 'scorecard_wait', 1)] Exception trying create activation error: 400:
```
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
Sometimes we wait multiple days for our S3 documents to appear. This deadlock occurs for 30%-40% of the dag_runs of an individual DAG.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18512 | https://github.com/apache/airflow/pull/18616 | b6aa8d52a027c75aaa1151989c68f8d6b8529107 | db2d73d95e793e63e152692f216deec9b9d9bc85 | 2021-09-24T21:40:14Z | python | 2021-09-30T16:50:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,495 | ["airflow/www/views.py", "docs/apache-airflow/howto/email-config.rst"] | apache-airflow-providers-sendgrid==2.0.1 doesn't show in the connections drop down UI | ### Apache Airflow Provider(s)
sendgrid
### Versions of Apache Airflow Providers
I'm running this version of Airflow locally to test the provider modules. I'm interested in sendgrid; however, it doesn't show up as a conn type in the UI, making it unusable.
The other packages I've installed do show up.
```
@linux-2:~$ airflow info
Apache Airflow
version | 2.1.2
executor | SequentialExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | sqlite:////home/airflow/airflow.db
dags_folder | /home/airflow/dags
plugins_folder | /home/airflow/plugins
base_log_folder | /home/airflow/logs
remote_base_log_folder |
System info
OS | Linux
architecture | x86_64
uname | uname_result(system='Linux', node='linux-2.fritz.box', release='5.11.0-34-generic', version='#36~20.04.1-Ubuntu SMP Fri Aug 27 08:06:32 UTC 2021', machine='x86_64', processor='x86_64')
locale | ('en_US', 'UTF-8')
python_version | 3.6.4 (default, Aug 12 2021, 10:51:13) [GCC 9.3.0]
python_location |.pyenv/versions/3.6.4/bin/python3.6
Tools info
git | git version 2.25.1
ssh | OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020
kubectl | Client Version: v1.20.4
gcloud | Google Cloud SDK 357.0.0
cloud_sql_proxy | NOT AVAILABLE
mysql | mysql Ver 8.0.26-0ubuntu0.20.04.2 for Linux on x86_64 ((Ubuntu))
sqlite3 | NOT AVAILABLE
psql | psql (PostgreSQL) 12.8 (Ubuntu 12.8-0ubuntu0.20.04.1)
Providers info
apache-airflow-providers-amazon | 2.0.0
apache-airflow-providers-ftp | 2.0.1
apache-airflow-providers-google | 3.0.0
apache-airflow-providers-http | 2.0.1
apache-airflow-providers-imap | 2.0.1
apache-airflow-providers-sendgrid | 2.0.1
apache-airflow-providers-sqlite | 2.0.1
```
### Apache Airflow version
2.1.2
### Operating System
NAME="Ubuntu" VERSION="20.04.2 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04.2 LTS" VERSION_ID="20.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=focal UBUNTU_CODENAME=focal
### Deployment
Other
### Deployment details
I simply run `airflow webserver --port 8080` to test this on my local machine.
In production we're using a helm chart, but the package also doesn't show up there prompting me to test it out locally with no luck.
### What happened
Nothing
### What you expected to happen
sendgrid provider being available as conn_type
### How to reproduce
run it locally using localhost
### Anything else
nop
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18495 | https://github.com/apache/airflow/pull/18502 | ac4acf9c5197bd96fbbcd50a83ef3266bfc366a7 | be82001f39e5b04fd16d51ef79b3442a3aa56e88 | 2021-09-24T12:10:04Z | python | 2021-09-24T16:12:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,487 | ["airflow/timetables/interval.py"] | 2.1.3/4 queued dag runs changes catchup=False behaviour | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
Amazon Linux 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
Say, for example, you have a DAG that has a sensor. This DAG is set to run every minute, with max_active_runs=1, and catchup=False.
This sensor may pass 1 or more times per day.
Previously, when the sensor was satisfied once on a given day, there was one DAG run for that day; when it was satisfied twice, there were two DAG runs for that day.
With the new queued dag run state, new dag runs are queued for each minute (up to AIRFLOW__CORE__MAX_QUEUED_RUNS_PER_DAG), which seems to be against the spirit of catchup=False.
This means that if a dag run is waiting on a sensor for longer than the schedule_interval, it will still in effect 'catchup' due to the queued dag runs.
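Roughly, the kind of DAG affected looks like this (a simplified sketch; the sensor and its check are placeholders for whatever external condition is being waited on):
```python
from datetime import datetime

from airflow import DAG
from airflow.sensors.python import PythonSensor

with DAG(
    dag_id="sensor_every_minute",
    start_date=datetime(2021, 9, 1),
    schedule_interval="* * * * *",  # every minute
    max_active_runs=1,
    catchup=False,
) as dag:
    # May take much longer than one schedule_interval to succeed
    wait_for_condition = PythonSensor(
        task_id="wait_for_condition",
        python_callable=lambda: False,  # placeholder for the real check
        poke_interval=60,
    )
```
With the new behaviour, while the single active run is blocked on the sensor, runs keep piling up in the queued state and then execute back to back once it clears, which is a catchup in all but name.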
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18487 | https://github.com/apache/airflow/pull/19130 | 3a93ad1d0fd431f5f4243d43ae8865e22607a8bb | 829f90ad337c2ea94db7cd58ccdd71dd680ad419 | 2021-09-23T23:24:20Z | python | 2021-10-22T01:03:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,482 | ["airflow/utils/log/secrets_masker.py"] | No SQL error shown when using the JDBC operator | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
astro dev start
Dockerfile - https://gist.github.com/jyotsa09/267940333ffae4d9f3a51ac19762c094#file-dockerfile
### What happened
Both connections pointed to the same database, where the table "Person" doesn't exist.
When there is a SQL failure in PostgresOperator, the logs say -
```
psycopg2.errors.UndefinedTable: relation "person" does not exist
LINE 1: Select * from Person
```
When there is a SQL failure in JDBCOperator, the logs just say -
```
[2021-09-23, 19:06:22 UTC] {local_task_job.py:154} INFO - Task exited with return code
```
### What you expected to happen
JDBCOperator should show "Task failed with exception" or something similar to what Postgres shows.
### How to reproduce
Use this dag - https://gist.github.com/jyotsa09/267940333ffae4d9f3a51ac19762c094
(Connection extras is in the gist.)
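For reference, a stripped-down version of that DAG looks roughly like this (not the gist itself; connection ids are placeholders, and both connections point at the same database where the table does not exist):
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.jdbc.operators.jdbc import JdbcOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="jdbc_vs_postgres_errors",
    start_date=datetime(2021, 9, 1),
    schedule_interval=None,
) as dag:
    # Fails with a clear psycopg2 "relation does not exist" traceback in the logs
    via_postgres = PostgresOperator(
        task_id="via_postgres",
        postgres_conn_id="my_postgres",
        sql="Select * from Person",
    )

    # Fails too, but the log only shows "Task exited with return code"
    via_jdbc = JdbcOperator(
        task_id="via_jdbc",
        jdbc_conn_id="my_jdbc",
        sql="Select * from Person",
    )
```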
### Anything else
Related: https://github.com/apache/airflow/issues/16564
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18482 | https://github.com/apache/airflow/pull/21540 | cb24ee9414afcdc1a2b0fe1ec0b9f0ba5e1bd7b7 | bc1b422e1ce3a5b170618a7a6589f8ae2fc33ad6 | 2021-09-23T20:01:53Z | python | 2022-02-27T13:07:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,473 | ["airflow/cli/commands/dag_command.py", "airflow/jobs/backfill_job.py", "airflow/models/dag.py", "tests/cli/commands/test_dag_command.py"] | CLI: `airflow dags test { dag w/ schedule_interval=None } ` error: "No run dates were found" | ### Apache Airflow version
2.2.0b2 (beta snapshot)
### Operating System
ubuntu 20.04
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
pip install /path/to/airflow/src
### What happened
Given any DAG initialized with: `schedule_interval=None`
Run `airflow dags test mydagname $(date +%Y-%m-%d)` and get an error:
```
INFO - No run dates were found for the given dates and dag interval.
```
This behavior changed in https://github.com/apache/airflow/pull/15397, it used to trigger a backfill dagrun at the given date.
### What you expected to happen
I expected a backfill dagrun with the given date, regardless of whether it fit into the `schedule_interval`.
If AIP-39 made that an unrealistic expectation, then I'd hope for some way to define unscheduled dags which can still be tested from the command line (which, so far as I know, is the fastest way to iterate on a DAG.).
As it is, I keep changing `schedule_interval` back and forth depending on whether I want to iterate via `astro dev start` (which tolerates `None` but does superfluous work if the dag is scheduled) or via `airflow dags test ...` (which doesn't tolerate `None`).
### How to reproduce
Initialize a DAG with: `schedule_interval=None` and run it via `airflow dags test mydagname $(date +%Y-%m-%d)`
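Any unscheduled DAG will do, e.g. (a minimal sketch; the dag id and task are placeholders):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="mydagname",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,  # the part that breaks `airflow dags test`
) as dag:
    DummyOperator(task_id="noop")
```
Running the command above against it prints "No run dates were found for the given dates and dag interval." instead of starting a backfill run.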
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18473 | https://github.com/apache/airflow/pull/18742 | 5306a6071e1cf223ea6b4c8bc4cb8cacd25d370e | cfc2e1bb1cbf013cae065526578a4e8ff8c18362 | 2021-09-23T15:05:23Z | python | 2021-10-06T19:40:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,465 | ["airflow/providers/slack/operators/slack.py"] | SlackAPIFileOperator filename is not templatable but is documented as is | ### Apache Airflow Provider(s)
slack
### Versions of Apache Airflow Providers
apache-airflow-providers-slack==3.0.0
### Apache Airflow version
2.1.4 (latest released)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
Using a template string for `SlackAPIFileOperator.filename` doesn't apply the templating.
### What you expected to happen
`SlackAPIFileOperator.filename` should work with a template string.
### How to reproduce
Use a template string when using `SlackAPIFileOperator`.
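For example, something like this (a sketch; the connection id, channel and file path are placeholders, and the argument names assume the reworked 3.0.0 operator):
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.slack.operators.slack import SlackAPIFileOperator

with DAG(
    dag_id="slack_file_demo",
    start_date=datetime(2021, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    upload_report = SlackAPIFileOperator(
        task_id="upload_report",
        slack_conn_id="slack_default",
        channel="#reports",
        filename="/tmp/report_{{ ds }}.csv",  # the {{ ds }} is never rendered
        filetype="csv",
    )
```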
### Anything else
Re: PR: https://github.com/apache/airflow/pull/17400/files
In commit: https://github.com/apache/airflow/pull/17400/commits/bbd11de2f1d37e9e2f07e5f9b4d331bf94ef6b97
`filename` was removed from `template_fields`. I believe this was because `filename` was removed as a parameter in favour of `file`. In a later commit `file` was renamed to `filename` but the `template_fields` was not put back. The documentation still states that it's a templated field.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18465 | https://github.com/apache/airflow/pull/18466 | 958b679dae6bbd9def7c60191c3a7722ce81382a | 9bf0ed2179b62f374cad74334a8976534cf1a53b | 2021-09-23T12:22:24Z | python | 2021-09-23T18:11:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,442 | ["airflow/providers/microsoft/azure/hooks/azure_batch.py", "docs/apache-airflow-providers-microsoft-azure/connections/azure_batch.rst", "tests/providers/microsoft/azure/hooks/test_azure_batch.py", "tests/providers/microsoft/azure/operators/test_azure_batch.py"] | Cannot retrieve Account URL in AzureBatchHook using the custom connection fields | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==2.0.0
### Apache Airflow version
2.1.0
### Operating System
Debian GNU/Linux
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
Related to #18124.
When attempting to connect to Azure Batch, the following exception is thrown even though the corresponding "Azure Batch Account URl" field is populated in the Azure Batch connection form:
```
airflow.exceptions.AirflowException: Extra connection option is missing required parameter: `account_url`
```
Airflow Connection:

### What you expected to happen
Airflow tasks should be able to connect to Azure Batch when properly populating the custom connection form. Or, at the very least, the above exception should not be thrown when an Azure Batch Account URL is provided in the connection.
### How to reproduce
1. Install the Microsoft Azure provider and create an Airflow Connection with the type Azure Batch
2. Provide values for at least "Azure Batch Account URl"
3. Finally execute a task which uses the AzureBatchHook
### Anything else
The `_get_required_param()` method is being passed a value of "account_url" from the `Extra` field in the connection form. The `Extra` field is no longer exposed in the connection form, so this value can never be provided there.
```python
def _get_required_param(name):
"""Extract required parameter from extra JSON, raise exception if not found"""
value = conn.extra_dejson.get(name)
if not value:
raise AirflowException(f'Extra connection option is missing required parameter: `{name}`')
return value
...
batch_account_url = _get_required_param('account_url') or _get_required_param(
'extra__azure_batch__account_url'
)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18442 | https://github.com/apache/airflow/pull/18456 | df131471999578f4824a2567ce932a3a24d7c495 | 1d2924c94e38ade7cd21af429c9f451c14eba183 | 2021-09-22T20:07:53Z | python | 2021-09-24T08:04:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,440 | ["airflow/www/views.py"] | Can't Run Tasks from UI when using CeleryKubernetesExecutor | ### Apache Airflow version
2.1.3
### Operating System
ContainerOS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.1.0
apache-airflow-providers-celery==2.0.0
apache-airflow-providers-cncf-kubernetes==2.0.2
apache-airflow-providers-databricks==2.0.1
apache-airflow-providers-docker==2.1.0
apache-airflow-providers-elasticsearch==2.0.2
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-google==5.0.0
apache-airflow-providers-grpc==2.0.0
apache-airflow-providers-hashicorp==2.0.0
apache-airflow-providers-http==2.0.0
apache-airflow-providers-imap==2.0.0
apache-airflow-providers-microsoft-azure==3.1.0
apache-airflow-providers-mysql==2.1.0
apache-airflow-providers-papermill==2.0.1
apache-airflow-providers-postgres==2.0.0
apache-airflow-providers-redis==2.0.0
apache-airflow-providers-sendgrid==2.0.0
apache-airflow-providers-sftp==2.1.0
apache-airflow-providers-slack==4.0.0
apache-airflow-providers-sqlite==2.0.0
apache-airflow-providers-ssh==2.1.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Official Helm Chart on GKE using CeleryKubernetesExecutor.
### What happened
I'm unable to Run Tasks from the UI when using CeleryKubernetesExecutor. It just shows the error "Only works with the Celery or Kubernetes executors, sorry".
<img width="1792" alt="Captura de Pantalla 2021-09-22 a la(s) 17 00 30" src="https://user-images.githubusercontent.com/2586758/134413668-5b638941-bb50-44ff-9349-d53326d1f489.png">
### What you expected to happen
It should be possible to run Tasks from the UI when using CeleryKubernetesExecutor, as it is only a "selective superset" (based on queue) of both the Celery and Kubernetes executors.
### How to reproduce
1. Run an Airflow instance with the CeleryKubernetes executor.
2. Go to any DAG
3. Select a Task
4. Press the "Run" button.
5. Check the error.
### Anything else
I will append an example PR as a comment.
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18440 | https://github.com/apache/airflow/pull/18441 | 32947a464d174517da942a84d62dd4d0b1ff4b70 | dc45f97cbb192882d628428bd6dd3ccd32128537 | 2021-09-22T20:03:30Z | python | 2021-10-07T00:30:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,436 | ["airflow/www/templates/airflow/variable_list.html", "airflow/www/templates/airflow/variable_show.html", "airflow/www/templates/airflow/variable_show_widget.html", "airflow/www/views.py", "airflow/www/widgets.py"] | Add a 'View' button for Airflow Variables in the UI | ### Description
A 'view' (eye) button is added to the Airflow Variables in the UI. This button shall open the selected Variable for viewing (but not for editing), with the actual json formatting, in either a new page or a modal view.
### Use case/motivation
I store variables in the json format, and most of them have > 15 attributes. To view the content of the variables, I either have to open the Variable in editing mode, which I find dangerous since there is a chance I (or another user) could accidentally delete information; or I have to copy the variable as displayed in the Airflow Variables page list and then pass it through a json formatter to get the correct indentation. I would like to have a way to (safely) view the variable in its original format.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18436 | https://github.com/apache/airflow/pull/21342 | 5276ef8ad9749b2aaf4878effda513ee378f4665 | f0bbb9d1079e2660b4aa6e57c53faac84b23ce3d | 2021-09-22T16:29:17Z | python | 2022-02-28T01:41:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,392 | ["airflow/jobs/triggerer_job.py", "tests/jobs/test_triggerer_job.py"] | TriggerEvent fires, and then defers a second time (doesn't fire a second time though). | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
debian buster
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
Helm info:
```
helm repo add astronomer https://helm.astronomer.io
cat <<- 'EOF' | helm install airflow astronomer/airflow --namespace airflow -f -
airflowVersion: 2.2.0
defaultAirflowRepository: myrepo
defaultAirflowTag: myimagetag
executor: KubernetesExecutor
images:
airflow:
repository: myrepo
pullPolicy: Always
pod_template:
repository: myrepo
pullPolicy: Always
triggerer:
serviceAccount:
create: True
EOF
```
Dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:2.2.0-buster-43897
COPY ./dags/ ./dags/
```
[the dag](https://gist.github.com/MatrixManAtYrService/8d1d0c978465aa249e8bd0498cc08031#file-many_triggers-py)
### What happened
Six tasks deferred for a random (but deterministic) amount of time, and the triggerer fired six events. Two of those events then deferred a second time, which wasn't necessary because they had already fired. Looks like a race condition.
Here are the triggerer logs: https://gist.github.com/MatrixManAtYrService/8d1d0c978465aa249e8bd0498cc08031#file-triggerer-log-L29
Note that most deferrals only appear once, but the ones for "Aligator" and "Bear" appear twice.
### What you expected to happen
Six tasks deferred, six tasks fires, no extra deferrals.
### How to reproduce
Run the dag in [this gist](https://gist.github.com/MatrixManAtYrService/8d1d0c978465aa249e8bd0498cc08031) with the kubernetes executor (be sure there's a triggerer running). Notice that some of the triggerer log messages appear more than once. Those represent superfluous computation.
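If the gist is unavailable, a DAG along these lines should exercise the same code path (a rough sketch, not the gist itself; it assumes the 2.2.0 `TimeDeltaSensorAsync`):
```python
import random
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.time_delta import TimeDeltaSensorAsync

with DAG(
    dag_id="many_triggers",
    start_date=datetime(2021, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    rng = random.Random(0)  # random but deterministic delays
    for name in ["aligator", "bear", "cat", "dog", "emu", "fox"]:
        # Each task defers once and should fire exactly one TriggerEvent
        TimeDeltaSensorAsync(
            task_id=name,
            delta=timedelta(seconds=rng.randint(30, 120)),
        )
```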
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18392 | https://github.com/apache/airflow/pull/20699 | 0ebd55e0f8fc7eb26a2b35b779106201ffe88f55 | 16b8c476518ed76e3689966ec4b0b788be935410 | 2021-09-20T21:16:04Z | python | 2022-01-06T23:16:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,369 | ["airflow/www/static/js/graph.js"] | If the DAG contains Tasks Goups, Graph view does not properly display tasks statuses | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
Linux/Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When upgrading to latest release (2.1.4) from 2.1.3, Graph View does not properly display tasks statuses.
See:

All these tasks are in succeed or failed status in the Tree View, for the same Dag Run.
### What you expected to happen
Graph View to display task statuses.
### How to reproduce
As far as I can see, DAGs without task groups are not affected. So, probably just create a DAG with a task group. I can reproduce this locally with the official docker image (python 3.8).
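Something along these lines should do (a sketch; names are placeholders):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup

with DAG(
    dag_id="group_status_demo",
    start_date=datetime(2021, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    start = DummyOperator(task_id="start")

    with TaskGroup(group_id="my_group") as my_group:
        DummyOperator(task_id="inside_a") >> DummyOperator(task_id="inside_b")

    start >> my_group
```
With 2.1.4, the Tree View shows the task statuses for a finished run but the Graph View does not.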
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18369 | https://github.com/apache/airflow/pull/18607 | afec743a60fa56bd743a21e85d718e685cad0778 | 26680d4a274c4bac036899167e6fea6351e73358 | 2021-09-20T09:27:16Z | python | 2021-09-29T16:30:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,363 | ["airflow/api/common/experimental/mark_tasks.py", "airflow/api_connexion/endpoints/dag_run_endpoint.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | http patched an in-progress dagRun to state=failed, but the change was clobbered by a subsequent TI state change. | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
debian buster
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start` image: [quay.io/astronomer/ap-airflow-dev:2.2.0-buster-43897](https://quay.io/repository/astronomer/ap-airflow-dev?tab=tags)
### What happened
I patched a dagRun's state to `failed` while it was running.
```
curl -i -X PATCH "http://localhost:8080/api/v1/dags/steady_task_stream/dagRuns/scheduled__2021-09-18T00:00:00+00:00" -H 'Content-Type: application/json' --user 'admin:admin' \
-d '{ "state": "failed" }'
```
For a brief moment, I saw my change reflected in the UI. Then auto-refresh toggled itself to disabled. When I re-enabled it, the dag_run was back in the "running" state.

Presumably this happened because the new API **only** affects the dagRun. The tasks keep running, so they update the dagRun state and overwrite the patched value.
### What you expected to happen
The UI will not let you set a dagrun's state without also setting TI states. Here's what it looks like for "failed":

Notice that it forced me to fail a task instance. This is what prevents the dag from continuing onward and clobbering my patched value. It's similar when you set it to "success", except every child task gets set to "success".
At the very least, *if* the API lets me patch the dagrun state *then* it should also let me patch the TI states. This way an API user can patch both at once and prevent the clobber. (`/api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}` doesn't yet support the `patch` verb, so I can't currently do this).
Better still would be for the API to make it easy for me to do what the UI does. For instance, patching a dagrun with:
```
{ "state": "failed" }
```
Should return code 409 and a message like:
> To patch steady_task_stream.scheduled__2021-09-18T00:00:00+00:00.state to success, you must also make the following changes: { "task_1":"success", "task_2":"success", ... }. Supply "update_tasks": "true" to do this.
And updating it with:
```
{ "state": "failed",
"update_tasks": "true"}
```
Should succeed and provide feedback about which state changes occurred.
### How to reproduce
Any dag will do here, but here's the one I used:
https://gist.github.com/MatrixManAtYrService/654827111dc190407a3c81008da6ee16
Be sure to run it in an airflow that has https://github.com/apache/airflow/pull/17839, which introduced the patch functionality that makes it possible to reach this bug.
- Unpause the dag
- Make note of the execution date
- Delete the dag and wait for it to repopulate in the UI (you may need to refresh).
- Prepare this command in your terminal, you may need to tweak the dag name and execution date to match your scenario:
```
curl -i -X PATCH "http://localhost:8080/api/v1/dags/steady_task_stream/dagRuns/scheduled__2021-09-18T00:00:00+00:00" -H 'Content-Type: application/json' --user 'admin:admin' \
-d '{ "state": "failed" }'
```
- Unpause the dag again
- Before it completes, run the command to patch its state to "failed"
Note that as soon as the next task completes, your patched state has been overwritten and is now "running" or maybe "success"
### Anything else
@ephraimbuddy, @kaxil mentioned that you might be interested in this.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18363 | https://github.com/apache/airflow/pull/18370 | 13a558d658c3a1f6df4b2ee5d894fee65dc103db | 56d1765ea01b25b42947c3641ef4a64395daec5e | 2021-09-20T03:28:12Z | python | 2021-09-22T20:16:34Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,333 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | No Mass Delete Option for Task Instances Similar to What DAGRuns Have in UI | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
macOS Big Sur 11.3.1
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
default Astronomer deployment with some test DAGs
### What happened
In the UI for DAGRuns there are checkboxes that allow multiple DAGRuns to be selected.

Within the Actions menu on this same view, there is a Delete option which allows multiple DAGRuns to be deleted at the same time.

Task instance view on the UI does not offer the same option, even though Task Instances can be individually deleted with the trash can button.

### What you expected to happen
I expect that the Task Instances can also be bulk deleted, in the same way that DAGRuns can be.
### How to reproduce
Open up Task Instance and DAGRun views from the Browse tab and compare the options in the Actions dropdown menus.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18333 | https://github.com/apache/airflow/pull/18438 | db2d73d95e793e63e152692f216deec9b9d9bc85 | 932a2254064a860d614ba2b7c10c7cb091605c7d | 2021-09-17T19:12:15Z | python | 2021-09-30T17:19:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,329 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/executors/kubernetes_executor.py", "airflow/kubernetes/kubernetes_helper_functions.py", "tests/executors/test_kubernetes_executor.py"] | Add task name, DAG name, try_number, and run_id to all Kubernetes executor logs | ### Description
Every line in the scheduler log pertaining to a particular task instance should be stamped with that task instance's identifying information. In the Kubernetes executor, some lines are stamped only with the pod name instead.
### Use case/motivation
When trying to trace the lifecycle of a task in the kubernetes executor, you currently must search first for the name of the pod created by the task, then search for the pod name in the logs. This means you need to be pretty familiar with the structure of the scheduler logs in order to search effectively for the lifecycle of a task that had a problem.
Some log statements like `Attempting to finish pod` do have the annotations for the pod, which include dag name, task name, and run_id, but others do not. For instance, `Event: podname-a2f2c1ac706 had an event of type DELETED` has no such annotations.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18329 | https://github.com/apache/airflow/pull/29929 | ffc1dbb4acc1448a9ee6576cb4d348a20b209bc5 | 64b0872d92609e2df465989062e39357eeef9dab | 2021-09-17T13:41:38Z | python | 2023-05-25T08:40:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,283 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "chart/templates/_helpers.yaml", "chart/templates/secrets/result-backend-connection-secret.yaml", "chart/values.yaml", "tests/charts/test_basic_helm_chart.py", "tests/charts/test_rbac.py", "tests/charts/test_result_backend_connection_secret.py"] | db+ string in result backend but not metadata secret | ### Official Helm Chart version
1.1.0 (latest released)
### Apache Airflow version
2.1.3 (latest released)
### Kubernetes Version
1.21
### Helm Chart configuration
data:
metadataSecretName: "airflow-metadata"
resultBackendSecretName: "airflow-result-backend"
### Docker Image customisations
_No response_
### What happened
If we only supply 1 secret with
```
connection: postgresql://airflow:[email protected]:5432/airflow?sslmode=disable
```
to use for both the metadata connection and the result backend connection, then we end up with a connection error, because
the result backend connection expects the string to be formatted like
```
connection: db+postgresql://airflow:[email protected]:5432/airflow?sslmode=disable
```
from what I can tell.
### What you expected to happen
I'd expect to be able to use the same secret for both using the same format if they are using the same connection.
### How to reproduce
Make a secret structured like the one above, matching the auto-generated metadataConnection secret.
Use that same secret for the result backend.
Deploy.
### Anything else
Occurs always.
To get around this, we currently make 2 secrets, one with just the `db+` prepended.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18283 | https://github.com/apache/airflow/pull/24496 | 5f67cc0747ea661b703e4c44c77e7cd005cb9588 | 9312b2865a53cfcfe637605c708cf68d6df07a2c | 2021-09-15T22:16:35Z | python | 2022-06-23T15:34:21Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,267 | ["airflow/providers/amazon/aws/transfers/gcs_to_s3.py", "tests/providers/amazon/aws/transfers/test_gcs_to_s3.py"] | GCSToS3Operator : files already uploaded because of wrong prefix | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian 10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### What happened
The operator airflow.providers.amazon.aws.transfers.gcs_to_s3.GCSToS3Operator is buggy: if the argument replace=False is set and the task is retried, then the task will always fail:
```
Traceback (most recent call last):
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_task
result = task_copy.execute(context=context)
File "/home/airflow/gcs/plugins/birdz/operators/gcs_to_s3_operator.py", line 159, in execute
cast(bytes, file_bytes), key=dest_key, replace=self.replace, acl_policy=self.s3_acl_policy
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 61, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 90, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 608, in load_bytes
self._upload_file_obj(file_obj, key, bucket_name, replace, encrypt, acl_policy)
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 653, in _upload_file_obj
raise ValueError(f"The key {key} already exists.")
```
Furthermore, I noted that the argument "prefix", which corresponds to the GCS prefix to search in the bucket, is kept in the destination S3 key.
### What you expected to happen
I expect the operator to print "In sync, no files needed to be uploaded to S3" when a retry happens with replace=False.
And I expect to be able to choose whether or not the GCS prefix is kept in the destination S3 key. This is already handled by other transfer operators like 'airflow.providers.google.cloud.transfers.gcs_to_sftp.GCSToSFTPOperator', which implements a keep_directory_structure argument to keep the folder structure between source and destination.
### How to reproduce
Retry the operator with replace=False
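For example, a transfer task like this (a sketch; bucket names, prefix and connection ids are placeholders, argument names as in the 2.x amazon provider):
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.gcs_to_s3 import GCSToS3Operator

with DAG(
    dag_id="gcs_to_s3_demo",
    start_date=datetime(2021, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    copy_files = GCSToS3Operator(
        task_id="copy_files",
        bucket="my-gcs-bucket",
        prefix="exports/2021",          # this prefix also ends up in the S3 keys
        dest_s3_key="s3://my-s3-bucket/",
        dest_aws_conn_id="aws_default",
        replace=False,                  # a retry then fails with "The key ... already exists."
    )
```
Let the first try upload at least one file, then clear/retry the task: it fails instead of reporting that everything is already in sync.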
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18267 | https://github.com/apache/airflow/pull/22071 | 184a46fc93cf78e6531f25d53aa022ee6fd66496 | c7286e53064d717c97807f7ccd6cad515f88fe52 | 2021-09-15T11:34:01Z | python | 2022-03-08T14:11:49Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,245 | ["airflow/settings.py", "airflow/utils/state.py", "docs/apache-airflow/howto/customize-ui.rst", "tests/www/views/test_views_home.py"] | Deferred status color not distinct enough | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
Mac OSX 11.5.2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
In the tree view, the status color for deferred is very difficult to distinguish from up_for_reschedule and is generally not very distinct. I can’t imagine it’s good for people with colorblindness either.
### What you expected to happen
I expect the status colors to form a distinct palette. While there is already some crowding in the greens, I don't want to see it get worse with the new status.
### How to reproduce
Make a deferred task and view in tree view.
### Anything else
I might suggest something in the purple range like [BlueViolet #8A2BE2](https://www.color-hex.com/color/8a2be2)
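In the meantime, if the `STATE_COLORS` override in `airflow_local_settings.py` honors the new state, a stop-gap could look like this (untested sketch; depending on the version you may need to spell out the full mapping rather than a single key):
```python
# $AIRFLOW_HOME/config/airflow_local_settings.py
STATE_COLORS = {
    "deferred": "mediumpurple",  # something clearly distinct from the greens
}
```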
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18245 | https://github.com/apache/airflow/pull/18414 | cfc2e1bb1cbf013cae065526578a4e8ff8c18362 | e351eada1189ed50abef8facb1036599ae96399d | 2021-09-14T14:47:27Z | python | 2021-10-06T19:45:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,237 | ["airflow/providers/microsoft/azure/hooks/wasb.py"] | Azure wasb hook is creating a container when getting a blob client | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==3.1.1
### Apache Airflow version
2.1.2
### Operating System
OSX
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
The `_get_blob_client` method in the wasb hook is trying to create a container.
### What you expected to happen
I expect to get a container client of an existing container
### How to reproduce
_No response_
### Anything else
I believe the fix is minor, e.g.
```
def _get_blob_client(self, container_name: str, blob_name: str) -> BlobClient:
"""
Instantiates a blob client
:param container_name: The name of the blob container
:type container_name: str
:param blob_name: The name of the blob. This needs not be existing
:type blob_name: str
"""
container_client = self.create_container(container_name)
return container_client.get_blob_client(blob_name)
```
should be changed to
```
def _get_blob_client(self, container_name: str, blob_name: str) -> BlobClient:
"""
Instantiates a blob client
:param container_name: The name of the blob container
:type container_name: str
:param blob_name: The name of the blob. This needs not be existing
:type blob_name: str
"""
container_client = self._get_container_client(container_name)
return container_client.get_blob_client(blob_name)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18237 | https://github.com/apache/airflow/pull/18287 | f8ba4755ae77f3e08275d18e5df13c368363066b | 2dac083ae241b96241deda20db7725e2fcf3a93e | 2021-09-14T10:35:37Z | python | 2021-09-16T16:59:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,216 | ["chart/UPDATING.rst", "chart/templates/NOTES.txt", "chart/templates/flower/flower-ingress.yaml", "chart/templates/webserver/webserver-ingress.yaml", "chart/tests/test_ingress_flower.py", "chart/tests/test_ingress_web.py", "chart/values.schema.json", "chart/values.yaml"] | Helm chart ingress support multiple hostnames | ### Description
When the official airflow helm chart is used to install airflow in k8s, I want to be able to access the airflow UI from multiple hostnames. Currently, given how the ingress resource is structured, this doesn't seem possible, and modifying it needs to take backwards compatibility concerns into account.
### Use case/motivation
Given my company's DNS structure, I need to be able to access the airflow UI running in kubernetes from multiple hostnames.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18216 | https://github.com/apache/airflow/pull/18257 | 4308a8c364d410ea8c32d2af7cc8ca3261054696 | 516d6d86064477b1e2044a92bffb33bf9d7fb508 | 2021-09-13T21:01:13Z | python | 2021-09-17T16:53:47Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,155 | ["tests/core/test_providers_manager.py"] | Upgrade `importlib-resources` version | ### Description
The constraint for `importlib-resources` pins it to [v1.5.0](https://github.com/python/importlib_resources/tree/v1.5.0), which is over a year old. For compatibility's sake (for instance with something like Datapane) I would suggest upgrading it.
### Use case/motivation
Upgrade an old dependency to keep code up to date.
### Related issues
Not that I am aware of, maybe somewhat #12120, or #15991.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18155 | https://github.com/apache/airflow/pull/18215 | dd313a57721918577b6465cd00d815a429a8f240 | b7f366cd68b3fed98a4628d5aa15a1e8da7252a3 | 2021-09-10T19:40:00Z | python | 2021-09-13T23:03:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,146 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Deferred TI's `next_method` and `next_kwargs` not cleared on retries | ### Apache Airflow version
main (development)
### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
If your first try fails and you have retries enabled, the subsequent tries skip right to the final `next_method(**next_kwargs)` instead of starting with `execute` again.
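A rough sketch of the shape of operator that hits this (a hypothetical example, assuming the 2.2.0 `self.defer()`/`TimeDeltaTrigger` API):
```python
from datetime import timedelta

from airflow.exceptions import AirflowException
from airflow.models.baseoperator import BaseOperator
from airflow.triggers.temporal import TimeDeltaTrigger


class WaitThenMaybeFail(BaseOperator):
    def execute(self, context):
        # First try: defer; next_method/next_kwargs are persisted on the TI
        self.defer(
            trigger=TimeDeltaTrigger(timedelta(seconds=10)),
            method_name="execute_complete",
        )

    def execute_complete(self, context, event=None):
        # If this raises and retries are enabled, the next try starts here
        # again instead of going back through execute()
        raise AirflowException("boom")
```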
### What you expected to happen
Just as we reset things like the start date, we should wipe `next_method` and `next_kwargs` so we can retry the task from the beginning.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18146 | https://github.com/apache/airflow/pull/18210 | 9c8f7ac6236bdddd979bb6242b6c63003fae8490 | 9d497729184711c33630dec993b88603e0be7248 | 2021-09-10T14:15:01Z | python | 2021-09-13T18:14:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,136 | ["chart/templates/rbac/security-context-constraint-rolebinding.yaml", "chart/tests/test_scc_rolebinding.py", "chart/values.schema.json", "chart/values.yaml", "docs/helm-chart/production-guide.rst"] | Allow airflow standard images to run in openshift utilising the official helm chart | ### Description
The Airflow helm chart is very powerful and configurable; however, in order to run it in an on-premises OpenShift 4 environment, one must manually create security context constraints or extra RBAC rules to permit the pods to start with arbitrary user ids.
### Use case/motivation
I would like to be able to run Airflow using the provided helm chart in an on-premises OpenShift 4 installation.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18136 | https://github.com/apache/airflow/pull/18147 | 806e4bce9bf827869b4066a14861f791c46179c8 | 45e8191f5c07a1db83c54cf892767ae71a295ba0 | 2021-09-10T10:21:30Z | python | 2021-09-28T16:38:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,124 | ["airflow/providers/microsoft/azure/hooks/adx.py", "tests/providers/microsoft/azure/hooks/test_adx.py"] | Cannot retrieve Authentication Method in AzureDataExplorerHook using the custom connection fields | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==1!3.1.1
### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian GNU/Linux
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
When attempting to connect to Azure Data Explorer, the following exception is thrown even though the corresponding "Authentication Method" field is populated in the Azure Data Explorer connection form:
```
airflow.exceptions.AirflowException: Extra connection option is missing required parameter: `auth_method`
```
Airflow Connection:

### What you expected to happen
Airflow tasks should be able to connect to Azure Data Explorer when properly populating the custom connection form. Or, at the very least, the above exception should not be thrown when an Authentication Method is provided in the connection.
### How to reproduce
1. Install the Microsoft Azure provider and create an Airflow Connection with the type `Azure Data Explorer`.
2. Provide all values for "Auth Username", "Auth Password", "Tenant ID", and "Authentication Method".
3. Finally execute a task which uses the `AzureDataExplorerHook`
### Anything else
Looks like there are a few issues in the `AzureDataExplorerHook`:
- The `get_required_param()` method is being passed a value of "auth_method" from the `Extras` field in the connection form. The `Extras` field is no longer exposed in the connection form, so this value can never be provided there.
```python
def get_required_param(name: str) -> str:
"""Extract required parameter from extra JSON, raise exception if not found"""
value = conn.extra_dejson.get(name)
if not value:
raise AirflowException(f'Extra connection option is missing required parameter: `{name}`')
return value
auth_method = get_required_param('auth_method') or get_required_param(
'extra__azure_data_explorer__auth_method'
)
```
- The custom field mappings for "Tenant ID" and "Authentication Method" are switched, so even if these values are provided in the connection form they will not be used properly in the hook.
```python
@staticmethod
def get_connection_form_widgets() -> Dict[str, Any]:
"""Returns connection widgets to add to connection form"""
from flask_appbuilder.fieldwidgets import BS3PasswordFieldWidget, BS3TextFieldWidget
from flask_babel import lazy_gettext
from wtforms import PasswordField, StringField
return {
"extra__azure_data_explorer__auth_method": StringField(
lazy_gettext('Tenant ID'), widget=BS3TextFieldWidget()
),
"extra__azure_data_explorer__tenant": StringField(
lazy_gettext('Authentication Method'), widget=BS3TextFieldWidget()
),
...
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18124 | https://github.com/apache/airflow/pull/18203 | d9c0e159dbe670458b89a47d81f49d6a083619a2 | 410e6d7967c6db0a968f26eb903d072e356f1348 | 2021-09-09T19:24:05Z | python | 2021-09-18T14:01:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,103 | ["chart/templates/jobs/create-user-job.yaml", "chart/templates/jobs/migrate-database-job.yaml", "chart/tests/test_basic_helm_chart.py"] | Helm Chart Jobs (apache-airflow-test-run-airflow-migrations) does not Pass Labels to Pod | ### Official Helm Chart version
1.1.0 (latest released)
### Apache Airflow version
2.1.3 (latest released)
### Kubernetes Version
1.18
### Helm Chart configuration
Any configuration should replicate this. Here is a simple example:
```
statsd:
enabled: false
redis:
enabled: false
postgresql:
enabled: false
dags:
persistence:
enabled: true
labels:
sidecar.istio.iok/inject: "false"
```
### Docker Image customisations
Base image that comes with helm
### What happened
When I installed Istio (with the Istio proxy etc.) on my namespace, I noticed that the "apache-airflow-test-run-airflow-migrations" job would never fully complete. While investigating, it seemed like the issue was with Istio, so I tried adding a label in my values.yaml (as seen above) to disable the Istio injection -
```
labels:
sidecar.istio.iok/inject: "false"
```
The job picked up this label, but when the job created the pod, the pod did not have this label. It appears there's a mismatch between the job's labels and the labels of the pods it creates.
### What you expected to happen
I expected that any labels associated with the job (from values.yaml) would be inherited by the corresponding pods it creates.
### How to reproduce
1. Install istio on your cluster
2. Create a namespace and add this label to the namespace- `istio-injection: enabled`
3. Add this in your values.yaml `sidecar.istio.iok/inject: "false"` and deploy your helm chart
4. Ensure that your apache-airflow-test-run-airflow-migrations job has the istio inject label, but the corresponding apache-airflow-test-run-airflow-migrations-... pod does **not**
### Anything else
I have a fix for this issue in my forked repo [here](https://github.com/jlouk/airflow/commit/c4400493da8b774c59214078eed5cf7d328844ea)
I've tested this on my cluster and have ensured this removes the mismatch of labels between job and pod. The helm install continues without error. Let me know if there are any other tests you'd like me to run.
Thanks,
John
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18103 | https://github.com/apache/airflow/pull/18403 | 43bd351df17084ec8c670e712da0193503519b74 | a91d9a7d681a5227d7d72876110e31b448383c20 | 2021-09-08T21:16:02Z | python | 2021-09-21T18:43:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,100 | ["airflow/www/static/css/flash.css"] | DAG Import Errors Broken DAG Dropdown Arrow Icon Switched | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
macOS Big Sur 11.3.1
### Versions of Apache Airflow Providers
N/A
### Deployment
Virtualenv installation
### Deployment details
Default installation, with one DAG added for testing.
### What happened
When a DAG has an import error, there is a "DAG Import Errors" dropdown with arrow icons that will tell you the errors with the import.


Once the arrow is pressed it flips from facing right to facing down, and the specific tracebacks are displayed.


By clicking the arrow on the traceback, it displays more information about the error that occurred. That arrow then flips from facing down to facing right.


Notice how the arrows in the UI display conflicting information depending on the direction of the arrow and the element that contains that arrow. For the parent -- DAG Import Errors -- when the arrow is right-facing, it hides information in the child elements, and when the arrow is down-facing it displays the child elements. For the child elements, the arrow behavior is the opposite of what is expected: right-facing shows the information, and down-facing hides it.
### What you expected to happen
I expect that for both the parent and child elements of the DAG Import Errors banner, the information is hidden when the arrow in the GUI faces right, and shown when the arrow faces down.
### How to reproduce
Create a DAG with an import error and the banner will be displayed.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18100 | https://github.com/apache/airflow/pull/18207 | 8ae2bb9bfa8cfd62a8ae5f6edabce47800ccb140 | d3d847acfdd93f8d1d8dc1495cf5b3ca69ae5f78 | 2021-09-08T20:03:00Z | python | 2021-09-13T16:11:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,082 | ["airflow/api/common/experimental/trigger_dag.py", "tests/api/client/test_local_client.py", "tests/operators/test_trigger_dagrun.py", "tests/www/api/experimental/test_dag_runs_endpoint.py"] | TriggerDagRunOperator start_date is not set | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
ubuntu, macos
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==1.4.0
apache-airflow-providers-ftp==1.1.0
apache-airflow-providers-http==1.1.1
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-postgres==1.0.2
apache-airflow-providers-slack==4.0.0
apache-airflow-providers-sqlite==1.0.2
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
We are using TriggerDagRunOperator at the end of a DAG to retrigger the current DAG:
`TriggerDagRunOperator(task_id='trigger_task', trigger_dag_id='current_dag')`
Everything works fine, except the duration is missing in the UI and there are warnings in the scheduler:
`[2021-09-07 15:33:12,890] {dagrun.py:604} WARNING - Failed to record duration of <DagRun current_dag @ 2021-09-07 12:32:17.035471+00:00: manual__2021-09-07T12:32:16.956461+00:00, externally triggered: True>: start_date is not set.`
And in the web UI we can't see the duration in Tree View, and the DAG run has no start date and duration values.
### What you expected to happen
Correct behaviour with start date and duration metrics in web UI.
### How to reproduce
```
import pendulum
import pytz
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
with DAG(
start_date=pendulum.datetime(year=2021, month=7, day=1).astimezone(pytz.utc),
schedule_interval='@daily',
default_args={},
max_active_runs=1,
dag_id='current_dag'
) as dag:
step1 = DummyOperator(task_id='dummy_task')
trigger_self = TriggerDagRunOperator(task_id='trigger_self', trigger_dag_id='current_dag')
step1 >> trigger_self
```
`[2021-09-08 12:53:35,094] {dagrun.py:604} WARNING - Failed to record duration of <DagRun current_dag @ 2021-01-04 03:00:11+00:00: backfill__2021-01-04T03:00:11+00:00, externally triggered: False>: start_date is not set.`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18082 | https://github.com/apache/airflow/pull/18226 | 1d2924c94e38ade7cd21af429c9f451c14eba183 | 6609e9a50f0ab593e347bfa92f56194334f5a94d | 2021-09-08T09:55:33Z | python | 2021-09-24T09:45:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,066 | ["Dockerfile", "chart/templates/workers/worker-deployment.yaml", "docs/docker-stack/entrypoint.rst"] | Chart version 1.1.0 does not gracefully shutdown workers | ### Official Helm Chart version
1.1.0 (latest released)
### Apache Airflow version
2.1.3 (latest released)
### Kubernetes Version
1.19.13
### Helm Chart configuration
```yaml
executor: "CeleryExecutor"
workers:
# Number of airflow celery workers in StatefulSet
replicas: 1
# Below is the default value, it does not work
command: ~
args:
- "bash"
- "-c"
- |-
exec \
airflow celery worker
```
### Docker Image customisations
```dockerfile
FROM apache/airflow:2.1.3-python3.7
ENV AIRFLOW_HOME=/opt/airflow
USER root
RUN set -ex \
&& buildDeps=' \
python3-dev \
libkrb5-dev \
libssl-dev \
libffi-dev \
build-essential \
libblas-dev \
liblapack-dev \
libpq-dev \
gcc \
g++ \
' \
&& apt-get update -yqq \
&& apt-get upgrade -yqq \
&& apt-get install -yqq --no-install-recommends \
$buildDeps \
libsasl2-dev \
libsasl2-modules \
apt-utils \
curl \
vim \
rsync \
netcat \
locales \
sudo \
patch \
libpq5 \
&& apt-get autoremove -yqq --purge\
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER airflow
COPY --chown=airflow:root requirements*.txt /tmp/
RUN pip install -U pip setuptools wheel cython \
&& pip install -r /tmp/requirements_providers.txt \
&& pip install -r /tmp/requirements.txt
COPY --chown=airflow:root setup.py /tmp/custom_operators/
COPY --chown=airflow:root custom_operators/ /tmp/custom_operators/custom_operators/
RUN pip install /tmp/custom_operatos
COPY --chown=airflow:root entrypoint*.sh /
COPY --chown=airflow:root config/ ${AIRFLOW_HOME}/config/
COPY --chown=airflow:root airflow.cfg ${AIRFLOW_HOME}/
COPY --chown=airflow:root dags/ ${AIRFLOW_HOME}/dags
```
### What happened
Using CeleryExecutor, whenever I kill a worker pod that is running a task with `kubectl delete pod` or a `helm upgrade`, the pod gets instantly killed and does not wait for the task to finish or for the end of terminationGracePeriodSeconds.
### What you expected to happen
I expect the worker to finish all its tasks inside the grace period before being killed.
Killing the pod when it's running a task throws this
```bash
k logs -f airflow-worker-86d78f7477-rjljs
* Serving Flask app "airflow.utils.serve_logs" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
[2021-09-07 16:26:25,612] {_internal.py:113} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
/home/airflow/.local/lib/python3.7/site-packages/celery/platforms.py:801 RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=50000 euid=50000 gid=0 egid=0
[2021-09-07 16:28:11,021: WARNING/ForkPoolWorker-1] Running <TaskInstance: test-long-running.long-long 2021-09-07T16:28:09.148524+00:00 [queued]> on host airflow-worker-86d78f7477-rjljs
worker: Warm shutdown (MainProcess)
[2021-09-07 16:28:32,919: ERROR/MainProcess] Process 'ForkPoolWorker-2' pid:20 exited with 'signal 15 (SIGTERM)'
[2021-09-07 16:28:32,930: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:19 exited with 'signal 15 (SIGTERM)'
[2021-09-07 16:28:33,183: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 15 (SIGTERM) Job: 0.')
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/worker.py", line 208, in start
self.blueprint.start(self)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 599, in start
c.loop(*c.loop_args())
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/loops.py", line 83, in asynloop
next(loop)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 303, in create_loop
poll_timeout = fire_timers(propagate=propagate) if scheduled else 1
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 145, in fire_timers
entry()
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 68, in __call__
return self.fun(*self.args, **self.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 130, in _reschedules
return fun(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/gossip.py", line 167, in periodic
for worker in values(workers):
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/utils/functional.py", line 109, in _iterate_values
for k in self:
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/utils/functional.py", line 95, in __iter__
def __iter__(self):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/apps/worker.py", line 285, in _handle_request
raise exc(exitcode)
celery.exceptions.WorkerShutdown: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/billiard/pool.py", line 1267, in mark_as_worker_lost
human_status(exitcode), job._job),
billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 15 (SIGTERM) Job: 0.
-------------- celery@airflow-worker-86d78f7477-rjljs v4.4.7 (cliffs)
--- ***** -----
-- ******* ---- Linux-5.4.129-63.229.amzn2.x86_64-x86_64-with-debian-10.10 2021-09-07 16:26:26
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7ff517d78d90
- ** ---------- .> transport: redis://:**@airflow-redis:6379/0
- ** ---------- .> results: postgresql+psycopg2://airflow:**@db-host:5432/airflow
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
```
### How to reproduce
Run a dag with this airflow configuration
```yaml
executor: "CeleryExecutor"
workers:
replicas: 1
command: ~
args:
- "bash"
- "-c"
# The format below is necessary to get `helm lint` happy
- |-
exec \
airflow celery worker
```
and kill the worker pod
### Anything else
Overwriting the official entrypoint seems to solve the issue
```yaml
workers:
# To gracefully shutdown workers I have to overwrite the container entrypoint
command: ["airflow"]
args: ["celery", "worker"]
```
When the worker gets killed, another worker pod comes online and the old one stays in `Terminating` status; all new tasks go to the new worker.
Below are the logs when the worker gets killed
```bash
k logs -f airflow-worker-5ff95df84f-fznk7
* Serving Flask app "airflow.utils.serve_logs" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
[2021-09-07 16:42:42,399] {_internal.py:113} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
/home/airflow/.local/lib/python3.7/site-packages/celery/platforms.py:801 RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=50000 euid=50000 gid=0 egid=0
[2021-09-07 16:42:53,133: WARNING/ForkPoolWorker-1] Running <TaskInstance: test-long-running.long-long 2021-09-07T16:28:09.148524+00:00 [queued]> on host airflow-worker-5ff95df84f-fznk7
worker: Warm shutdown (MainProcess)
-------------- celery@airflow-worker-5ff95df84f-fznk7 v4.4.7 (cliffs)
--- ***** -----
-- ******* ---- Linux-5.4.129-63.229.amzn2.x86_64-x86_64-with-debian-10.10 2021-09-07 16:42:43
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f69aaa90d50
- ** ---------- .> transport: redis://:**@airflow-redis:6379/0
- ** ---------- .> results: postgresql+psycopg2://airflow:**@db-host:5432/airflow
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
rpc error: code = Unknown desc = Error: No such container: efe5ce470f5bd5b7f84479c1a8f5dc1d5d92cb1ad6b16696fa5a1ca9610602ee%
```
There is no timestamp, but the worker waits for the task to finish before writing `worker: Warm shutdown (MainProcess)`.
Another option I tried was using the following as the entrypoint, which also works:
```bash
#!/usr/bin/env bash
handle_worker_term_signal() {
  # Remove worker from queue
  celery -b $AIRFLOW__CELERY__BROKER_URL -d celery@$HOSTNAME control cancel_consumer default
  while [ $(airflow jobs check --hostname $HOSTNAME | grep "Found one alive job." | wc -l) -eq 1 ]; do
    echo 'Finishing jobs!'
    airflow jobs check --hostname $HOSTNAME --limit 100 --allow-multiple
    sleep 60
  done
  echo 'All jobs finished! Terminating worker'
  kill $pid
  exit 0
}
trap handle_worker_term_signal SIGTERM
airflow celery worker &
pid="$!"
wait $pid
```
Got the idea from this post: https://medium.com/flatiron-engineering/upgrading-airflow-with-zero-downtime-8df303760c96
Thanks!
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18066 | https://github.com/apache/airflow/pull/18068 | 491d81893b0afb80b5f9df191369875bce6e2aa0 | 9e13e450032f4c71c54d091e7f80fe685204b5b4 | 2021-09-07T17:02:04Z | python | 2021-09-10T18:13:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,055 | [".github/workflows/build-images.yml"] | The current CI requires rebasing to main far too often | ### Body
I was wondering why people have to rebase their PRs so often now. This was not the case for quite a while, but the need to rebase to the latest `main` is now far more frequent than it should be. I think we also have a number of cases where the PR build succeeds but breaks `main` after merging. This also did not use to happen as often as we observe now.
I looked a bit closer and I believe the problem is #15944. That PR streamlined some of our CI code, but I just realised it also removed an important feature of the previous setup: namely, using the MERGE commit rather than the original commit from the PR.
Previously, when we built the image, we did not use the original commit from the incoming PR but the merge commit that GitHub generates. Whenever there is no conflict, GitHub performs an automatic merge with `main`, and by default the PR build uses that 'merge' commit and not the original commit.
This means that all PRs, even if they can be cleanly rebased, now use the original commit and are built as if from the original branch point.
Unfortunately, as far as I checked, there is no "merge commit hash" in the "pull_request_target" workflow. Previously, the "build image" workflow used my custom "get_workflow_origin" action to find the merge commit via the GitHub API. This gives a much better user experience because users do not have to rebase to `main` nearly as often as they do now.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18055 | https://github.com/apache/airflow/pull/18060 | 64d2f5488f6764194a2f4f8a01f961990c75b840 | 1bfb5722a8917cbf770922a26dc784ea97aacf33 | 2021-09-07T11:52:12Z | python | 2021-09-07T15:21:39Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,050 | ["airflow/www/views.py", "tests/www/views/test_views_connection.py"] | Duplicating the same connection twice gives "Integrity error, probably unique constraint" | ### Apache Airflow version
2.2.0
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
astro dev start
### What happened
When I tried to duplicate a connection the first time, it created a connection named hello_copy1; duplicating the same connection a second time gives me the following error:
```Connection hello_copy2 can't be added. Integrity error, probably unique constraint.```
### What you expected to happen
It should create a connection named hello_copy3 (the next free suffix), or the error message should be more user friendly.
### How to reproduce
https://github.com/apache/airflow/pull/15574#issuecomment-912438705
### Anything else
Suggested Error
```
Connection hello_copy2 can't be added because it already exists.
```
or
Change the numbering logic of the `_copyN` suffix so that the new name is always unique.
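One possible shape for that suggestion, as a standalone sketch (the helper below is hypothetical and not part of the actual view code):
```python
def next_copy_name(base_name: str, existing_names: set) -> str:
    """Return base_name with the first _copyN suffix that is not taken yet."""
    n = 1
    while f"{base_name}_copy{n}" in existing_names:
        n += 1
    return f"{base_name}_copy{n}"


# hello_copy1 and hello_copy2 already exist, so the next duplicate becomes hello_copy3
print(next_copy_name("hello", {"hello", "hello_copy1", "hello_copy2"}))
```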
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18050 | https://github.com/apache/airflow/pull/18161 | f248a215aa341608e2bc7d9083ca9d18ab756ac4 | 3ddb36578c5020408f89f5532b21dc0c38e739fb | 2021-09-06T17:44:51Z | python | 2021-10-09T14:06:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 18,005 | ["airflow/providers/neo4j/hooks/neo4j.py", "tests/providers/neo4j/hooks/test_neo4j.py", "tests/providers/neo4j/operators/test_neo4j.py"] | Neo4j Hook does not return query response | ### Apache Airflow Provider(s)
neo4j
### Versions of Apache Airflow Providers
apache-airflow-providers-neo4j version 2.0.0
### Apache Airflow version
2.1.0
### Operating System
macOS Big Sur Version 11.4
### Deployment
Docker-Compose
### Deployment details
docker-compose version 1.29.2, build 5becea4c
Docker version 20.10.7, build f0df350
### What happened
```python
neo4j = Neo4jHook(conn_id=self.neo4j_conn_id)
sql = "MATCH (n) RETURN COUNT(n)"
result = neo4j.run(sql)
logging.info(result)
```
If I run the code snippet above, I always get an empty response.
### What you expected to happen
I would like to get the query response.
### How to reproduce
```python
from airflow.models.baseoperator import BaseOperator
from airflow.providers.neo4j.hooks.neo4j import Neo4jHook
import logging
class testNeo4jHookOperator(BaseOperator):
def __init__(
self,
neo4j_conn_id: str,
**kwargs) -> None:
super().__init__(**kwargs)
self.neo4j_conn_id = neo4j_conn_id
def execute(self, context):
neo4j = Neo4jHook(conn_id=self.neo4j_conn_id)
sql = "MATCH (n) RETURN COUNT(n)"
result = neo4j.run(sql)
logging.info(result)
return result
```
I created this custom operator to test the bug.
### Anything else
The bug is related to the way the hook is implemented and how the Neo4j driver works:
```python
def run(self, query) -> Result:
"""
Function to create a neo4j session
and execute the query in the session.
:param query: Neo4j query
:return: Result
"""
driver = self.get_conn()
if not self.connection.schema:
with driver.session() as session:
result = session.run(query)
else:
with driver.session(database=self.connection.schema) as session:
result = session.run(query)
return result
```
In my opinion, the above hook run method should be changed to:
```python
def run(self, query) -> Result:
"""
Function to create a neo4j session
and execute the query in the session.
:param query: Neo4j query
:return: Result
"""
driver = self.get_conn()
if not self.connection.schema:
with driver.session() as session:
result = session.run(query)
return result.data()
else:
with driver.session(database=self.connection.schema) as session:
result = session.run(query)
return result.data()
```
This is because when trying to access `result.data()` (or the result in general) outside the `with` block, the session is already closed and the result is always empty.
I tried this solution and it seems to work.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18005 | https://github.com/apache/airflow/pull/18007 | 0dba2e0d644ab0bd2512144231b56463218a3b74 | 5d2b056f558f3802499eb6d98643433c31d8534c | 2021-09-03T09:11:40Z | python | 2021-09-07T16:17:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,969 | ["airflow/api_connexion/endpoints/dag_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/exceptions.py", "airflow/www/views.py"] | Delete DAG REST API Functionality | ### Description
The stable REST API seems to be missing the functionality to delete a DAG.
This would mirror clicking the "Trash Can" icon in the UI (and could maybe eventually power it).

### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17969 | https://github.com/apache/airflow/pull/17980 | b6a962ca875bc29aa82a252f5c179faff601780b | 2cace945cd35545385d83090c8319525d91f8efd | 2021-09-01T15:45:03Z | python | 2021-09-03T12:46:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,966 | ["airflow/task/task_runner/standard_task_runner.py"] | lack of definitive error message if task launch fails | ### Apache Airflow version
2.1.2
### Operating System
Centos docker image
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Self built docker image on kubernetes
### What happened
If Airflow fails to launch a task (a Kubernetes worker pod in my case), the task logs only a vague error message: "task exited with return code 1".
This is problematic for developers building platforms around Airflow.
### What you expected to happen
Provide a proper error message when a task exits with return code 1 from the standard task runner.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17966 | https://github.com/apache/airflow/pull/17967 | bff580602bc619afe1bee2f7a5c3ded5fc6e39dd | b6a962ca875bc29aa82a252f5c179faff601780b | 2021-09-01T14:49:56Z | python | 2021-09-03T12:39:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,962 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/www/views.py", "docs/apache-airflow/security/webserver.rst", "tests/www/views/test_views_base.py", "tests/www/views/test_views_robots.py"] | Warn if robots.txt is accessed | ### Description
https://github.com/apache/airflow/pull/17946 implements a `/robots.txt` endpoint to block search engines from crawling Airflow in cases where it is (accidentally) exposed to the public Internet.
If we record any GET requests to that endpoint, we'd have a strong warning flag that the deployment is exposed, and could issue a warning in the UI, or even enable some kill-switch on the deployment.
Some deployments are likely intentionally available and rely on auth mechanisms on the `login` endpoint, so there should be a config option to suppress the warnings.
An alternative approach would be to monitor for requests from specific user agents used by crawlers, for the same reason.
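As a rough illustration only, a standalone Flask-style sketch of the recording idea (this is not the actual Airflow webserver code; the in-memory counter and logging-based warning are assumptions):
```python
from flask import Flask

app = Flask(__name__)
robots_txt_hits = 0  # a real implementation would persist this, e.g. in the metadata DB


@app.route("/robots.txt")
def robots_txt():
    global robots_txt_hits
    robots_txt_hits += 1
    # A hit here hints that crawlers (and the public Internet) can reach the instance,
    # so a real implementation could surface a UI warning unless suppressed by config.
    app.logger.warning(
        "robots.txt requested %d time(s); deployment may be publicly exposed", robots_txt_hits
    )
    return "User-agent: *\nDisallow: /\n", 200, {"Content-Type": "text/plain"}
```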
### Use case/motivation
People who accidentally expose Airflow have a slightly higher chance of realising they've done so and tightening their security.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17962 | https://github.com/apache/airflow/pull/18557 | 7c45bc35e767ff21982636fa2e36bc07c97b9411 | 24a53fb6476a3f671859451049407ba2b8d931c8 | 2021-09-01T12:11:26Z | python | 2021-12-24T10:24:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,932 | ["airflow/models/baseoperator.py", "airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | template_ext in task attributes shows incorrect value | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian Buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Managed Services (Astronomer, Composer, MWAA etc.)
### Deployment details
Running `quay.io/astronomer/ap-airflow:2.1.3-buster` Docker image
### What happened
In the task attributes view of a BashOperator, the `template_ext` row doesn't show any value, while I would expect to see `('.sh', '.bash')`.

### What you expected to happen
I expect to see the correct `template_ext` value.
### How to reproduce
Run a BashOperator and browse to Instance Details --> scroll down to the `template_ext` row; a minimal DAG to do this is shown below.
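For convenience, a minimal DAG that reproduces the view in question (the dag_id, date, and command are arbitrary):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG("template_ext_repro", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    BashOperator(task_id="print_date", bash_command="date")
```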
### Anything else
I spent a little time trying to figure out the reason but haven't found the problem yet, so I created this issue to track it.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17932 | https://github.com/apache/airflow/pull/17985 | f7276353ccd5d15773eea6c0d90265650fd22ae3 | ca4f99d349e664bbcf58d3c84139b5f4919f6c8e | 2021-08-31T08:21:15Z | python | 2021-09-02T22:54:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,931 | ["airflow/plugins_manager.py", "airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py", "tests/test_utils/timetables.py"] | Timetable registration a la OperatorLinks | Currently (as implemented in #17414), timetables are serialised by their classes’ full import path. This works most of the time, but not in some cases, including:
* Nested in class or function
* Declared directly in a DAG file without a valid import name (e.g. `12345.py`)
It’s fundamentally impossible to fix some of the cases (e.g. function-local class declaration) due to how Python works, but by requiring the user to explicitly register the timetable class, we can at least expose that problem so users don’t attempt to do that.
However, since the timetable would actually work most of the time without any additional mechanism, I'm also wondering whether we should _require_ registration.
1. Always require registration. A DAG using an unregistered timetable class fails to serialise.
2. Only require registration when the timetable class has wonky import path. “Normal” classes work out of the box without registering, and user sees a serialisation error asking for registration otherwise.
3. Don’t require registration. If a class cannot be correctly serialised, tell the user we can’t do it and the timetable must be declared another way. | https://github.com/apache/airflow/issues/17931 | https://github.com/apache/airflow/pull/17989 | 31b15c94886c6083a6059ca0478060e46db67fdb | be7efb1d30929a7f742f5b7735a3d6fbadadd352 | 2021-08-31T08:13:51Z | python | 2021-09-03T12:15:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,897 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/models/base.py", "docs/apache-airflow/howto/set-up-database.rst", "tests/models/test_base.py"] | Dag tags are not refreshing if case sensitivity changed | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker image on k8s
### What happened
A new DAG version was written for a DAG with the same name. The only difference was that one of the specified tags changed case, for example "test" to "Test". With a huge number of DAGs this causes the scheduler to crash immediately due to a constraint violation.
### What you expected to happen
Tags are refreshed correctly without crashing the scheduler.
### How to reproduce
1. Create a DAG with tags in a running Airflow cluster.
2. Update the DAG, changing the case of one of the tags, for example "test" to "Test" (a minimal example is shown after this list).
3. Watch the scheduler crash continuously.
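A minimal illustration of step 2; the DAG id and operator are arbitrary, and the only change from the previously deployed version is the tag's case:
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

# The previously deployed version of this DAG used tags=["test"];
# redeploying it with only the tag's case changed is enough to hit the constraint.
with DAG("tag_case_demo", start_date=datetime(2021, 1, 1), schedule_interval=None, tags=["Test"]) as dag:
    DummyOperator(task_id="noop")
```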
### Anything else
An alternative option is to make tags case sensitive...
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17897 | https://github.com/apache/airflow/pull/18072 | 79d85573591f641db4b5f89a12213e799ec6dea1 | b658a4243fb5b22b81677ac0f30c767dfc3a1b4b | 2021-08-29T17:14:31Z | python | 2021-09-07T23:17:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,879 | ["airflow/utils/state.py", "tests/utils/test_state.py"] | Cleared tasks get literal 'DagRunState.QUEUED' instead of the value 'queued' | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
CentOS Stream release 8
### Versions of Apache Airflow Providers
None of them are relevant
### Deployment
Virtualenv installation
### Deployment details
mkdir /srv/airflow
cd /srv/airflow
virtualenv venv
source venv/bin/activate
pip install apache-airflow==2.1.3
AIRFLOW_HOME and AIRFLOW_CONFIG is specified via environment variables in /etc/sysconfig/airflow, which is in turn used as EnvironmentFile in systemd service files.
systemctl start airflow-{scheduler,webserver,kerberos}
Python version: 3.9.2
LocalExecutors are used
### What happened
In the web UI, I cleared failed tasks, which were cleared properly, but the DagRun turned black with a literal state value of "DagRunState.QUEUED", so it can't be scheduled again.
### What you expected to happen
DagRun state should be 'queued'.
### How to reproduce
Just clear any tasks in the web UI. I wonder how it could be that nobody noticed this issue.
### Anything else
Here's a patch to fix it. Maybe the `__str__` method should be different, or the database/persistence layer should handle this, but for now this solves the problem.
```patch
--- airflow/models/dag.py.orig 2021-08-28 09:48:05.465542450 +0200
+++ airflow/models/dag.py 2021-08-28 09:47:34.272639960 +0200
@@ -1153,7 +1153,7 @@
confirm_prompt=False,
include_subdags=True,
include_parentdag=True,
- dag_run_state: DagRunState = DagRunState.QUEUED,
+ dag_run_state: DagRunState = DagRunState.QUEUED.value,
dry_run=False,
session=None,
get_tis=False,
@@ -1369,7 +1369,7 @@
confirm_prompt=False,
include_subdags=True,
include_parentdag=False,
- dag_run_state=DagRunState.QUEUED,
+ dag_run_state=DagRunState.QUEUED.value,
dry_run=False,
):
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17879 | https://github.com/apache/airflow/pull/17886 | 332406dae9f6b08de0d43576c4ed176eb49b8ed0 | a3f9c690aa80d12ff1d5c42eaaff4fced07b9429 | 2021-08-28T08:13:59Z | python | 2021-08-30T18:05:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,874 | ["airflow/www/static/js/trigger.js"] | Is it possible to extend the window with parameters in the UI? | ### Description
Is it possible to extend (widen) the window with parameters in the UI? I have simple parameters and they do not fit.
### Use case/motivation
I have simple parameters and they do not fit in the window.
### Related issues
yt
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17874 | https://github.com/apache/airflow/pull/20052 | 9083ecd928a0baba43e369e2c65225e092a275ca | abe842cf6b68472cc4f84dcec1a5ef94ff98ba5b | 2021-08-27T20:38:53Z | python | 2021-12-08T22:14:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,869 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/configuration.py", "airflow/www/extensions/init_views.py", "docs/apache-airflow/security/api.rst"] | More than one url in cors header | **Description**
Enable a list of allowed origins for the CORS headers.
**Use case / motivation**
I'm currently developing a Django application that works with Apache Airflow through its API. From Django, a user runs a DAG and a JS request waits for the DAG to finish. For this I enabled CORS headers in airflow.cfg with the correct URL. But now I have several environments (dev & prod), so it would be useful to be able to set more than one URL for the allowed CORS origin in airflow.cfg.
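For context, a sketch of the kind of configuration being asked for; the section and option name below are illustrative assumptions, so check your airflow.cfg for the exact CORS keys:
```ini
[api]
# Hypothetical: accept several origins instead of a single URL
access_control_allow_origins = https://dev.example.com https://prod.example.com
```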
Thanks in advance :)
| https://github.com/apache/airflow/issues/17869 | https://github.com/apache/airflow/pull/17941 | ace2374c20a9dc4b004237bfd600dd6eaa0f91b4 | a88115ea24a06f8706886a30e4f765aa4346ccc3 | 2021-08-27T12:24:31Z | python | 2021-09-03T14:37:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,852 | ["docs/apache-airflow/howto/connection.rst"] | Connections created via AIRFLOW_CONN_ enviroment variables do not show up in the Admin > Connections or airflow connections list | The connections created using [environment variables like AIRFLOW_CONN_MYCONNID](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#storing-a-connection-in-environment-variables) do not show up in the UI.
They don't show up in `airflow connections list` either, although if you know the conn_id you can `airflow connections get conn_id` and it will find it.
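For reference, a minimal way to reproduce this from a shell (the connection id and URI are placeholders):
```bash
export AIRFLOW_CONN_MY_POSTGRES='postgres://user:pass@localhost:5432/mydb'

airflow connections list              # the env-var connection is not listed
airflow connections get my_postgres   # but it is resolved when requested by id
```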
| https://github.com/apache/airflow/issues/17852 | https://github.com/apache/airflow/pull/17915 | b5da846dd1f27d798dc7dc4f4227de4418919874 | e9bf127a651794b533f30a77dc171e0ea5052b4f | 2021-08-26T14:43:30Z | python | 2021-08-30T16:52:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,826 | ["airflow/www/static/css/dags.css", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "docs/apache-airflow/core-concepts/dag-run.rst", "tests/www/views/test_views_home.py"] | Add another tab in Dags table that shows running now | The DAGs main page has All/Active/Paused.
It would be nice if we could also have a "Running Now" tab which shows all the DAGs that have at least one DAG run currently in the Running state:

For the above example, the new "Running Now" tab should show only 2 rows.
The use case is to easily see which DAGs are currently in progress.
| https://github.com/apache/airflow/issues/17826 | https://github.com/apache/airflow/pull/30429 | c25251cde620481592392e5f82f9aa8a259a2f06 | dbe14c31d52a345aa82e050cc0a91ee60d9ee567 | 2021-08-25T08:38:02Z | python | 2023-05-22T16:05:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,810 | ["airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dags.html"] | "Runs" column being cut off by "Schedule" column in web UI | <!--
Welcome to Apache Airflow!
Please complete the next sections or the issue will be closed.
-->
**Apache Airflow version**:
2.1.3
**OS**:
CentOS Linux 7
**Apache Airflow Provider versions**:
Standard provider package that comes with Airflow 2.1.3 via Docker container
**Deployment**:
docker-compose version 1.29.2, build 5becea4c
**What happened**:
Updated the container from Airflow 2.1.2 to 2.1.3; there is no specific error message.
**What you expected to happen**:
I did not expect an overlap of these two columns.
**How to reproduce it**:
Update the Airflow image from 2.1.2 to 2.1.3 and view the UI in Google Chrome Version 92.0.4515.159 (Official Build) (x86_64).
**Anything else we need to know**:
Tested in Safari Version 14.1.2 (16611.3.10.1.6) and the icons do not even load: the loading "dots" animate but the "Runs" and "Schedule" icons never appear. No special plugins are used and ad blockers are turned off. Refreshing does not help; rebuilding does not help. Widening the window does not fix the issue either, as the other columns adjust with the change in window width.
**Are you willing to submit a PR?**
I am rather new to Airflow and do not have enough experience in fixing Airflow bugs.

| https://github.com/apache/airflow/issues/17810 | https://github.com/apache/airflow/pull/17817 | 06e53f26e5cb2f1ad4aabe05fa12d2db9c66e282 | 96f7e3fec76a78f49032fbc9a4ee9a5551f38042 | 2021-08-24T13:09:54Z | python | 2021-08-25T12:48:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,795 | ["airflow/providers/http/provider.yaml"] | Circular Dependency in Apache Airflow 2.1.3 | **Apache Airflow version**:
2.1.3
**OS**:
Ubuntu 20.04 LTS
**Deployment**:
Bazel
**What happened**:
When I tried to bump my Bazel monorepo from 2.1.2 to 2.1.3, Bazel complains that there is the following circular dependency.
```
ERROR: /github/home/.cache/bazel/_bazel_bookie/c5c5e4532705a81d38d884f806d2bf84/external/pip/pypi__apache_airflow/BUILD:11:11: in py_library rule @pip//pypi__apache_airflow:pypi__apache_airflow: cycle in dependency graph:
//wager/publish/airflow:airflow
.-> @pip//pypi__apache_airflow:pypi__apache_airflow
| @pip//pypi__apache_airflow_providers_http:pypi__apache_airflow_providers_http
`-- @pip//pypi__apache_airflow:pypi__apache_airflow
```
**What you expected to happen**:
No dependency cycles.
**How to reproduce it**:
A concise reproduction will require some effort. I am hoping that there is a quick resolution to this, but I am willing to create a reproduction if it is required to determine the root cause.
**Anything else we need to know**:
Perhaps related to apache/airflow#14128.
**Are you willing to submit a PR?**
Yes. | https://github.com/apache/airflow/issues/17795 | https://github.com/apache/airflow/pull/17796 | 36c5fd3df9b271702e1dd2d73c579de3f3bd5fc0 | 0264fea8c2024d7d3d64aa0ffa28a0cfa48839cd | 2021-08-23T20:52:43Z | python | 2021-08-23T22:47:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,733 | ["scripts/in_container/prod/clean-logs.sh"] | Scheduler-gc fails on helm chart when running 'find' | <!--
Welcome to Apache Airflow!
Please complete the next sections or the issue will be closed.
-->
**Apache Airflow version**: 2.1.3
**OS**: centos-release-7-8.2003.0.el7.centos.x86_64
**Apache Airflow Provider versions**: Irrelevant
**Deployment**: Helm Chart
**Versions**:
```go
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-eks-ad4801", GitCommit:"ad4801fd44fe0f125c8d13f1b1d4827e8884476d", GitTreeState:"clean", BuildDate:"2020-10-20T23:30:20Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-eks-087e67", GitCommit:"087e67e479962798594218dc6d99923f410c145e", GitTreeState:"clean", BuildDate:"2021-07-31T01:39:55Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
```
**What happened**:
Helm deployment succeeds but scheduler pod fails with error:
```log
Cleaning logs every 900 seconds
Trimming airflow logs to 15 days.
find: The -delete action automatically turns on -depth, but -prune does nothing when -depth is in effect. If you want to carry on anyway, just explicitly use the -depth option.
```
**What you expected to happen**:
I expect the `scheduler-gc` container to run successfully, and if it fails, the Helm deployment should fail.
**How to reproduce it**:
1. Build container using breeze from [reference `719709b6e994a99ad2cb8f90042a19a7924acb8e`](https://github.com/apache/airflow/commit/719709b6e994a99ad2cb8f90042a19a7924acb8e)
2. Deploy to helm using this container.
3. Run `kubectl logs airflow-scheduler-XXXX -c scheduler-gc` to see the error.
For a minimal reproduction:
```shell
git clone [email protected]:apache/airflow.git
cd airflow
./breeze build-image --image-tag airflow --production-image
docker run --entrypoint=bash airflow /clean-logs
echo $?
```
**Anything else we need to know**:
It appears the change was made in order to support a specific Kubernetes volume protocol: https://github.com/apache/airflow/pull/17547.
I ran this command locally and got the same message:
```console
$ find "${DIRECTORY}"/logs -type d -name 'lost+found' -prune -name '*.log' -delete
find: The -delete action automatically turns on -depth, but -prune does nothing when -depth is in effect. If you want to carry on anyway, just explicitly use the -depth option.
$ echo $?
1
````
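One way the command could be rewritten so that pruning still works; this is an untested sketch (the age/retention filter is omitted), not necessarily the fix that will land:
```bash
# Avoid -delete (it forces -depth, which defeats -prune); prune 'lost+found'
# and remove the matching log files via xargs instead.
find "${DIRECTORY}"/logs \
  -type d -name 'lost+found' -prune -o \
  -type f -name '*.log' -print0 | xargs -0 -r rm -f
```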
**Are you willing to submit a PR?**
Maybe. I am not very familiar with the scheduler-gc service so I will have to look into it. | https://github.com/apache/airflow/issues/17733 | https://github.com/apache/airflow/pull/17739 | af9dc8b3764fc0f4630c0a83f1f6a8273831c789 | f654db8327cdc5c6c1f26517f290227cbc752a3c | 2021-08-19T14:43:47Z | python | 2021-08-19T19:42:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,727 | ["airflow/www/templates/airflow/_messages.html", "airflow/www/templates/airflow/dags.html", "airflow/www/templates/airflow/main.html", "airflow/www/templates/appbuilder/flash.html", "airflow/www/views.py", "tests/www/views/test_views.py"] | Duplicated general notifications in Airflow UI above DAGs list | **Apache Airflow version**:
2.2.0.dev0 (possibly older versions too)
**OS**:
Linux Debian 11
(BTW Here we have a typo in comment inside bug report template in `.github/ISSUE_TEMPLATE/bug_report.md`: `cat /etc/oss-release` <- double 's'.)
**Apache Airflow Provider versions**: -
**Deployment**:
Docker-compose 1.25.0
**What happened**:
The issue is related to the Airflow UI: it shows duplicated notifications when removing all the tags from the filter input after filtering DAGs.
**What you expected to happen**:
The notifications should not be duplicated.
**How to reproduce it**:
In the main view, above the list of the DAGs (just below the top bar menu), there is a place where notifications appear. Suppose that there are 2 notifications (no matter which). Now try to search DAGs by tag using 'Filter DAGs by tag' input and use a valid tag. After the filtering is done, clear the input either by clicking on 'x' next to the tag or on the 'x' near the right side of the input. Notice that the notifications are duplicated and now you have 4 instead of 2 (each one is displayed twice). The input id is `s2id_autogen1`.
This bug happens only if all the tags are removed from the filtering input. If you remove the tag while there is still another one in the input, the bug will not appear. Also, it is not present while searching DAGs by name using the input 'Search DAGs'.
After search, before removing tags from input:

Duplicated notifications after removing tag from input:

**Are you willing to submit a PR?**
I can try. | https://github.com/apache/airflow/issues/17727 | https://github.com/apache/airflow/pull/18462 | 3857aa822df5a057e0db67b2342ef769c012785f | 18e91bcde0922ded6eed724924635a31578d8230 | 2021-08-19T13:00:16Z | python | 2021-09-24T11:52:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,694 | ["airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/operators/test_snowflake.py"] | Add SQL Check Operators to Snowflake Provider | Add SnowflakeCheckOperator, SnowflakeValueCheckOperator, and SnowflakeIntervalCheckOperator to Snowflake provider.
**Use case / motivation**
The SQL operator, as well as other provider DB operators, already support this functionality. It can prove useful for data quality use cases with Snowflake. It should be relatively easy to implement in the same fashion as [BigQuery's versions](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L101).
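A hypothetical usage sketch of one of the requested operators; the class name, import path, and parameters are assumptions modeled on the existing check-operator pattern, since the Snowflake versions do not exist yet:
```python
# Hypothetical import path and class, shown only to illustrate the intended usage
from airflow.providers.snowflake.operators.snowflake import SnowflakeCheckOperator

check_row_count = SnowflakeCheckOperator(
    task_id="check_row_count",
    snowflake_conn_id="snowflake_default",
    sql="SELECT COUNT(*) FROM my_schema.my_table WHERE load_date = '{{ ds }}'",
)
```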
**Are you willing to submit a PR?**
Yep!
**Related Issues**
Not that I could find. | https://github.com/apache/airflow/issues/17694 | https://github.com/apache/airflow/pull/17741 | 19454940d45486a925a6d890da017a22b6b962de | a8970764d98f33a54be0e880df27f86b311038ac | 2021-08-18T18:39:57Z | python | 2021-09-09T23:41:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,693 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Kubernetes Executor: Tasks Stuck in Queued State indefinitely (or until scheduler restart). | **Apache Airflow version**: 2.1.3
**OS**: Custom Docker Image built from python:3.8-slim
**Apache Airflow Provider versions**:
apache-airflow-providers-cncf-kubernetes==2.0.1
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-http==2.0.0
apache-airflow-providers-imap==2.0.0
**Deployment**: Self managed on EKS, manifests templated and rendered using krane, dags mounted via PVC & EFS
**What happened**: Task remains in queued state
<img width="1439" alt="Screen Shot 2021-08-18 at 12 17 28 PM" src="https://user-images.githubusercontent.com/47788186/129936170-e16e1362-24ca-4ce9-b2f7-978f2642d388.png">
**What you expected to happen**: Task starts running
**How to reproduce it**: I believe it is because a node is removed. I've attached both the scheduler/k8s executor logs and the kubernetes logs. I deployed a custom executor with an extra line in the KubernetesJobWatcher.process_status that logged the Event type for the pod.
```
date,name,message
2021-09-08T01:06:35.083Z,KubernetesJobWatcher,Event: <pod_id> had an event of type ADDED
2021-09-08T01:06:35.084Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:35.085Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:35.085Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:43.390Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:43.390Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:43.390Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:43.391Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:45.392Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:45.393Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
2021-09-08T01:07:33.185Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:07:33.186Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
2021-09-08T01:09:50.478Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:09:50.479Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
2021-09-08T01:09:50.479Z,KubernetesJobWatcher,Event: <pod_id> had an event of type DELETED
2021-09-08T01:09:50.480Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
```
Here are the corresponding EKS Logs
```
@timestamp,@message
2021-09-08 01:06:34 1 factory.go:503] pod etl/<pod_id> is already present in the backoff queue
2021-09-08 01:06:43 1 scheduler.go:742] pod etl/<pod_id> is bound successfully on node ""ip-10-0-133-132.ec2.internal"", 19 nodes evaluated, 1 nodes were found feasible."
2021-09-08 01:07:32 1 controller_utils.go:122] Update ready status of pods on node [ip-10-0-133-132.ec2.internal]
2021-09-08 01:07:32 1 controller_utils.go:139] Updating ready status of pod <pod_id> to false
2021-09-08 01:07:38 1 node_lifecycle_controller.go:889] Node ip-10-0-133-132.ec2.internal is unresponsive as of 2021-09-08 01:07:38.017788167
2021-09-08 01:08:51 1 node_tree.go:100] Removed node "ip-10-0-133-132.ec2.internal" in group "us-east-1:\x00:us-east-1b" from NodeTree
2021-09-08 01:08:53 1 node_lifecycle_controller.go:789] Controller observed a Node deletion: ip-10-0-133-132.ec2.internal
2021-09-08 01:09:50 1 gc_controller.go:185] Found orphaned Pod etl/<pod_id> assigned to the Node ip-10-0-133-132.ec2.internal. Deleting.
2021-09-08 01:09:50 1 gc_controller.go:189] Forced deletion of orphaned Pod etl/<pod_id> succeeded
```
**Are you willing to submit a PR?** I think there is a state & event combination missing from the `process_status` function in the KubernetesJobWatcher. I will submit a PR to fix it.
| https://github.com/apache/airflow/issues/17693 | https://github.com/apache/airflow/pull/18095 | db72f40707c041069f0fbb909250bde6a0aea53d | e2d069f3c78a45ca29bc21b25a9e96b4e36a5d86 | 2021-08-18T17:52:26Z | python | 2021-09-11T20:18:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,684 | ["airflow/www/views.py", "tests/www/views/test_views_base.py", "tests/www/views/test_views_home.py"] | Role permissions and displaying dag load errors on webserver UI | Running Airflow 2.1.2. Setup using oauth authentication (azure). With the public permissions role if a user did not have a valid role in Azure AD or did not login I got all sorts of redirect loops and errors. So I updated the public permissions to: [can read on Website, menu access on My Profile, can read on My Profile, menu access on Documentation, menu access on Docs]. So basically a user not logged in can view a nearly empty airflow UI and click the "Login" button to login. I also added a group to my Azure AD with the role public that includes all users in the subscription. So users can login and it will create an account in airflow for them and they see the same thing as if they are not logged in. Then if someone added them in to a role in the azure enterprise application with a different role when the login they will have what they need. Keeps everything clean and no redirect errors, etc... always just nice airflow screens.
Now one issue I noticed is with the "can read on Website" permission added to a public role the dag errors appears and are not hidden. Since the errors are related to dags I and the user does not have any dag related permissions I would think the errors would not be displayed.
I'm wondering if this is an issue or more of something that should be a new feature? Cause if I can't view the dag should I be able to view the errors for it? Like adding a role "can read on Website Errors" if a new feature or update the code to tie the display of there errors into a role permission that they are related to like can view one of the dags permissions.
Not logged in or logged in as a user who is defaulted to "Public" role and you can see the red errors that can be expanded and see line reference details of the errors:

| https://github.com/apache/airflow/issues/17684 | https://github.com/apache/airflow/pull/17835 | a759499c8d741b8d679c683d1ee2757dd42e1cd2 | 45e61a965f64feffb18f6e064810a93b61a48c8a | 2021-08-18T15:03:30Z | python | 2021-08-25T22:50:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,679 | ["chart/templates/scheduler/scheduler-deployment.yaml"] | Airflow not running as root when deployed to k8s via helm | **Apache Airflow version**: v2.1.2
**Deployment**: Helm Chart + k8s
**What happened**:
helm install with values:
uid=0
gid=0
Airflow pods must run as root.
error:
from container's bash:
root@airflow-xxx-scheduler-7f49549459-w9s67:/opt/airflow# airflow
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
ModuleNotFoundError: No module named 'airflow'
```
**What you expected to happen**:
Airflow should run as root.
I am using the Airflow Helm chart only.
In the pod's describe output I see:
securityContext:
fsGroup: 0
runAsUser: 0 | https://github.com/apache/airflow/issues/17679 | https://github.com/apache/airflow/pull/17688 | 4e59741ff9be87d6aced1164812ab03deab259c8 | 986381159ee3abf3442ff496cb553d2db004e6c4 | 2021-08-18T08:17:33Z | python | 2021-08-18T18:40:21Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,652 | ["airflow/www/security.py", "tests/www/test_security.py"] | Subdags permissions not getting listed on Airflow UI | <!--
Welcome to Apache Airflow!
Please complete the next sections or the issue will be closed.
-->
**Apache Airflow version**: 2.0.2 and 2.1.1
<!-- AIRFLOW VERSION IS MANDATORY -->
**OS**: Debian
<!-- MANDATORY! You can get it via `cat /etc/oss-release` for example -->
**Apache Airflow Provider versions**:
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-imap==2.0.0
apache-airflow-providers-sqlite==2.0.0
<!-- You can use `pip freeze | grep apache-airflow-providers` (you can leave only relevant ones)-->
**Deployment**: docker-compose
<!-- e.g. Virtualenv / VM / Docker-compose / K8S / Helm Chart / Managed Airflow Service -->
<!-- Please include your deployment tools and versions: docker-compose, k8s, helm, etc -->
**What happened**:
<!-- Please include exact error messages if you can -->
**What you expected to happen**:
Permissions for subdags should get listed on list roles on Airflow UI but only the parent dag get listed but not the children dags.
<!-- What do you think went wrong? -->
**How to reproduce it**:
Run airflow 2.0.2 or 2.1.1 with a subdag in the dags directory. And then try to find subdag listing on list roles on Airflow UI.

<!--
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images/screen-casts etc. by drag-dropping the image here.
-->
**Anything else we need to know**:
It appears this problem is there most of the time.
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here inside fenced
``` ``` blocks or inside a foldable details tag if it's long:
<details><summary>x.log</summary> lots of stuff </details>
-->
<!---
This is absolutely not required, but we are happy to guide you in contribution process
especially if you already have a good understanding of how to implement the fix.
Airflow is a community-managed project and we love to bring new contributors in.
Find us in #airflow-how-to-pr on Slack!
-->
| https://github.com/apache/airflow/issues/17652 | https://github.com/apache/airflow/pull/18160 | 3d6c86c72ef474952e352e701fa8c77f51f9548d | 3e3c48a136902ac67efce938bd10930e653a8075 | 2021-08-17T11:23:38Z | python | 2021-09-11T17:48:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,639 | ["airflow/api_connexion/openapi/v1.yaml"] | Airflow API (Stable) (1.0.0) Update a DAG endpoint documentation shows you can update is_active, but the api does not accept it | **Apache Airflow version**:
v2.1.1+astro.2
**OS**:
Ubuntu v18.04
**Apache Airflow Provider versions**:
**Deployment**:
VM
Ansible
**What happened**:
PATCH call to api/v1/dags/{dagID} gives the following response when is_active is included in the update mask and/or body:
```json
{
  "detail": "Only `is_paused` field can be updated through the REST API",
  "status": 400,
  "title": "Bad Request",
  "type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
The API spec clearly indicates is_active is a modifiable field: https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/patch_dag
**What you expected to happen**:
I expected the `is_active` field to be updated on the DAG. Either fix the endpoint or fix the documentation.
**How to reproduce it**:
Send a PATCH call to `api/v1/dags/{dagID}` with `"is_active": true/false` in the body, as shown below.
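For completeness, one way to send such a request (host, credentials, and dag_id are placeholders):
```bash
curl -X PATCH "http://localhost:8080/api/v1/dags/my_dag?update_mask=is_active" \
  --user "admin:admin" \
  -H "Content-Type: application/json" \
  -d '{"is_active": false}'
```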
**Anything else we need to know**:
Occurs every time.
**Are you willing to submit a PR?**
No
| https://github.com/apache/airflow/issues/17639 | https://github.com/apache/airflow/pull/17667 | cbd9ad2ffaa00ba5d99926b05a8905ed9ce4e698 | 83a2858dcbc8ecaa7429df836b48b72e3bbc002a | 2021-08-16T18:01:18Z | python | 2021-08-18T14:56:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,617 | ["tests/providers/alibaba/cloud/operators/test_oss.py", "tests/providers/alibaba/cloud/sensors/test_oss_key.py"] | Switch Alibaba tests to use Mocks | We should also take a look if there is a way to mock the calls rather than having to communicate with the reaal "Alibaba Cloud". This way our unit tests would be even more stable. For AWS we are using `moto` library. Seems that Alibaba has built-in way to use Mock server as backend (?) https://www.alibabacloud.com/help/doc-detail/48978.htm
CC: @Gabriel39 -> would you like to pick up this task? We had a number of transient failures of the unit tests over the last week, because the unit tests were actually reaching out to the cn-hangzhou Alibaba region's servers, which have a lot of stability issues. I am switching to us-east-1 in https://github.com/apache/airflow/pull/17616, but ideally we should not reach out to "real" services in unit tests (we are working on standardizing system tests that will do that and reach out to real services, but it should not happen in unit tests).
| https://github.com/apache/airflow/issues/17617 | https://github.com/apache/airflow/pull/22178 | 9e061879f92f67304f1afafdebebfd7ee2ae8e13 | 03d0c702cf5bb72dcb129b86c219cbe59fd7548b | 2021-08-14T16:34:02Z | python | 2022-03-11T12:47:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,533 | ["airflow/www/static/js/tree.js", "airflow/www/templates/airflow/tree.html"] | Tree-view auto refresh ignores requested root task | Airflow 2.1.2
In Tree View, when you click on a task in the middle of a DAG and then click "Filter Upstream" in the popup, the webpage reloads the tree view with a `root` argument in the URL.
However, when clicking the small Refresh button or enabling auto-refresh, the whole DAG loads back; the `root` argument is ignored.
| https://github.com/apache/airflow/issues/17533 | https://github.com/apache/airflow/pull/17633 | 84de7c53c2ebabef98e0916b13b239a8e4842091 | c645d7ac2d367fd5324660c616618e76e6b84729 | 2021-08-10T11:45:23Z | python | 2021-08-16T16:00:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,516 | ["airflow/dag_processing/processor.py", "airflow/models/dag.py", "tests/dag_processing/test_processor.py", "tests/www/views/test_views_home.py"] | Removed dynamically generated DAG doesn't always remove from Airflow metadb | **Apache Airflow version**: 2.1.0
**What happened**:
We use a python script that reads JSON configs from db to dynamically generate DAGs. Sometimes, when JSON Configs is updated, we expect DAGs to be removed in Airflow. This doesn't always happen. From my observation, one of the following happens:
1) DAG is removed because the python script no longer generates it.
2) The DAG still exists but its run history is empty. When triggering the DAG, the first task is stuck in `queued` status indefinitely.
**What you expected to happen**:
I expect the right behavior to be:
>1) DAG is removed because the python script no longer generates it.
**How to reproduce it**:
* Create a python script in DAG folder that dynamically generates multiple DAGs
* Execute these dynamically generated DAGs a few times
* Use Airflow Variable to toggle (reduce) the # of DAGs generated
* Examine the number of dynamically generated DAGs in the web UI
| https://github.com/apache/airflow/issues/17516 | https://github.com/apache/airflow/pull/17121 | dc94ee26653ee4d3446210520036cc1f0eecfd81 | e81f14b85e2609ce0f40081caa90c2a6af1d2c65 | 2021-08-09T19:37:58Z | python | 2021-09-18T19:52:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,476 | ["airflow/cli/commands/task_command.py", "airflow/utils/log/secrets_masker.py", "tests/cli/commands/test_task_command.py", "tests/utils/log/test_secrets_masker.py"] | Sensitive variables don't get masked when rendered with airflow tasks test | **Apache Airflow version**: 2.1.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): No
**Environment**:
- **Cloud provider or hardware configuration**: No
- **OS** (e.g. from /etc/os-release): MacOS Big Sur 11.4
- **Kernel** (e.g. `uname -a`): -
- **Install tools**: -
- **Others**: -
**What happened**:
With the following code:
```
from airflow import DAG
from airflow.models import Variable
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta
def _extract():
partner = Variable.get("my_dag_partner_secret")
print(partner)
with DAG("my_dag", start_date=datetime(2021, 1 , 1), schedule_interval="@daily") as dag:
extract = PythonOperator(
task_id="extract",
python_callable=_extract
)
```
By executing the command
`
airflow tasks test my_dag extract 2021-01-01
`
The value of the variable my_dag_partner_secret gets rendered in the logs whereas it shouldn't
```
[2021-08-06 19:05:30,088] {taskinstance.py:1303} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=my_dag
AIRFLOW_CTX_TASK_ID=extract
AIRFLOW_CTX_EXECUTION_DATE=2021-01-01T00:00:00+00:00
partner_a
[2021-08-06 19:05:30,091] {python.py:151} INFO - Done. Returned value was: None
[2021-08-06 19:05:30,096] {taskinstance.py:1212} INFO - Marking task as SUCCESS. dag_id=my_dag, task_id=extract, execution_date=20210101T000000, start_date=20210806T131013, end_date=20210806T190530
```
**What you expected to happen**:
The value should be masked like on the UI or in the logs
**How to reproduce it**:
DAG given above
**Anything else we need to know**:
Nop
| https://github.com/apache/airflow/issues/17476 | https://github.com/apache/airflow/pull/24362 | 770ee0721263e108c7c74218fd583fad415e75c1 | 3007159c2468f8e74476cc17573e03655ab168fa | 2021-08-06T19:06:55Z | python | 2022-06-12T20:51:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,469 | ["airflow/models/taskmixin.py", "airflow/utils/edgemodifier.py", "airflow/utils/task_group.py", "tests/utils/test_edgemodifier.py"] | Label in front of TaskGroup breaks task dependencies | **Apache Airflow version**: 2.1.0
**What happened**:
Using a Label in front of a TaskGroup breaks task dependencies, as tasks before the TaskGroup will now point to the last task in the TaskGroup.
**What you expected to happen**:
Task dependencies with or without Labels should be the same (and preferably the Label should be visible outside of the TaskGroup).
Behavior without a Label is as follows:
```python
with TaskGroup("tg1", tooltip="Tasks related to task group") as tg1:
DummyOperator(task_id="b") >> DummyOperator(task_id="c")
DummyOperator(task_id="a") >> tg1
```

**How to reproduce it**:
```python
with TaskGroup("tg1", tooltip="Tasks related to task group") as tg1:
DummyOperator(task_id="b") >> DummyOperator(task_id="c")
DummyOperator(task_id="a") >> Label("testlabel") >> tg1
```

| https://github.com/apache/airflow/issues/17469 | https://github.com/apache/airflow/pull/29410 | d0657f5539722257657f84837936c51ac1185fab | 4b05468129361946688909943fe332f383302069 | 2021-08-06T11:22:05Z | python | 2023-02-18T17:57:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,449 | ["airflow/www/security.py", "tests/www/test_security.py"] | Missing menu items in navigation panel for "Op" role | **Apache Airflow version**: 2.1.1
**What happened**:
For a user with the "Op" role, the following menu items are not visible in the navigation panel, even though the pages are accessible (the role has access to them):
- "Browse" -> "DAG Dependencies"
- "Admin" -> "Configurations"
**What you expected to happen**: available menu items in navigation panel | https://github.com/apache/airflow/issues/17449 | https://github.com/apache/airflow/pull/17450 | 9a0c10ba3fac3bb88f4f103114d4590b3fb191cb | 4d522071942706f4f7c45eadbf48caded454cb42 | 2021-08-05T15:36:55Z | python | 2021-09-01T19:58:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,438 | ["airflow/operators/trigger_dagrun.py", "tests/operators/test_trigger_dagrun.py"] | TriggerDagRunOperator to configure the run_id of the triggered dag | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
**Description**
Airflow 1.10 supported providing a run_id to TriggerDagRunOperator using a [DagRunOrder](https://github.com/apache/airflow/blob/v1-10-stable/airflow/operators/dagrun_operator.py#L30-L33) object that is returned after calling [TriggerDagRunOperator#python_callable](https://github.com/apache/airflow/blob/v1-10-stable/airflow/operators/dagrun_operator.py#L88-L95). With https://github.com/apache/airflow/pull/6317 (Airflow 2.0), this behavior changed and one can no longer provide a run_id to the triggered DAG, which is very odd to say the least.
**Use case / motivation**
I want to be able to provide a run_id to the TriggerDagRunOperator. In my case, I use the TriggerDagRunOperator to trigger 30K DAG runs daily, and it will be very annoying to see them all having unidentifiable run_ids.
After discussing with @ashb
**Suggested fixes can be**
* provide a templated run_id as a parameter to TriggerDagRunOperator (see the sketch after this list).
* restore the old behavior of Airflow 1.10, where DagRunOrder holds the run_id and dag_run_conf of the triggered dag
* create a new sub-class of TriggerDagRunOperator to fully customize it
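A sketch of what the first suggestion could look like from a DAG author's point of view; the `trigger_run_id` parameter shown here is hypothetical (it does not exist at the time of writing) and the naming is only illustrative:
```python
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

trigger = TriggerDagRunOperator(
    task_id="trigger_child",
    trigger_dag_id="child_dag",
    # Hypothetical templated parameter for a user-supplied run_id
    trigger_run_id="batch_{{ ds_nodash }}_{{ task_instance.try_number }}",
    conf={"source": "parent_dag"},
)
```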
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/17438 | https://github.com/apache/airflow/pull/18788 | 5bc64fb923d7afcab420a1b4a6d9f6cc13362f7a | cdaa9aac80085b157c606767f2b9958cd6b2e5f0 | 2021-08-05T09:52:22Z | python | 2021-10-07T15:54:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,437 | ["docs/apache-airflow/faq.rst"] | It's too slow to recognize new dag file when there are a lot of dag files |
**Description**
There are 5000+ DAG files in our production environment. It takes almost 10 minutes to recognize a new DAG file when one is added.
The scheduler machine has 16 CPU cores. The Airflow version is 2.1.0.
**Use case / motivation**
I think there should be a feature to support recognizing new or recently modified DAG files faster.
**Are you willing to submit a PR?**
Maybe I will.
**Related Issues**
| https://github.com/apache/airflow/issues/17437 | https://github.com/apache/airflow/pull/17519 | 82229b363d53db344f40d79c173421b4c986150c | 7dfc52068c75b01a309bf07be3696ad1f7f9b9e2 | 2021-08-05T09:45:52Z | python | 2021-08-10T10:05:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,373 | ["airflow/cli/cli_parser.py", "airflow/executors/executor_loader.py", "tests/cli/conftest.py", "tests/cli/test_cli_parser.py"] | Allow using default celery commands for custom Celery executors subclassed from existing | **Description**
Allow custom executors subclassed from existing (CeleryExecutor, CeleryKubernetesExecutor, etc.) to use default CLI commands to start workers or flower monitoring.
**Use case / motivation**
Currently, users who decide to roll their own custom Celery-based executor cannot use the default commands (e.g. `airflow celery worker`), even though the executor is built on top of the existing CeleryExecutor. If they try to, they'll receive the following error: `airflow command error: argument GROUP_OR_COMMAND: celery subcommand works only with CeleryExecutor, your current executor: custom_package.CustomCeleryExecutor, see help above.`
One workaround is to create a custom entrypoint script for the worker/flower containers/processes, even though they still use the same Celery app as CeleryExecutor. That means unnecessarily maintaining this entrypoint script.
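For context, the executors I mean are nothing more exotic than a thin subclass that reuses the stock Celery app — a minimal sketch (package, class and queue names are made up):

```python
# custom_package/executor.py
from airflow.executors.celery_executor import CeleryExecutor


class CustomCeleryExecutor(CeleryExecutor):
    """Tweaks only how tasks are queued; the underlying Celery app is the stock one."""

    def queue_command(self, task_instance, command, priority=1, queue=None):
        # hypothetical customization: route everything onto a dedicated queue
        super().queue_command(task_instance, command, priority=priority, queue=queue or "custom")
```

With `executor = custom_package.executor.CustomCeleryExecutor` in `airflow.cfg` the scheduler is happy, but `airflow celery worker` refuses to start even though it would talk to exactly the same Celery app.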
I'd suggest two ways of fixing that:
- Check whether the custom executor is subclassed from a Celery executor (this might lead to errors if the custom executor is used to access a different Celery app, which can be a perfectly valid reason for rolling a custom executor)
- Store `app` as an attribute of Celery-based executors and match the one provided by the custom executor against the default one
**Related Issues**
N/A | https://github.com/apache/airflow/issues/17373 | https://github.com/apache/airflow/pull/18189 | d3f445636394743b9298cae99c174cb4ac1fc30c | d0cea6d849ccf11e2b1e55d3280fcca59948eb53 | 2021-08-02T08:46:59Z | python | 2021-12-04T15:19:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,368 | ["airflow/providers/slack/example_dags/__init__.py", "airflow/providers/slack/example_dags/example_slack.py", "airflow/providers/slack/operators/slack.py", "docs/apache-airflow-providers-slack/index.rst", "tests/providers/slack/operators/test.csv", "tests/providers/slack/operators/test_slack.py"] | Add example DAG for SlackAPIFileOperator | The SlackAPIFileOperator is not straightforward, and it might be better to add an example DAG to demonstrate its usage.
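For reference, the kind of minimal example I have in mind is sketched below. The parameter names are written from memory and may not match the operator's current signature exactly, and the connection id and file are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.slack.operators.slack import SlackAPIFileOperator

with DAG("example_slack_file", start_date=datetime(2021, 8, 1), schedule_interval=None) as dag:
    # assumes a Slack API connection named "slack_api_default" holding a bot token
    send_report = SlackAPIFileOperator(
        task_id="send_report",
        slack_conn_id="slack_api_default",
        channel="#random",
        initial_comment="Daily report",
        filename="/tmp/report.csv",
        filetype="csv",
    )
```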
| https://github.com/apache/airflow/issues/17368 | https://github.com/apache/airflow/pull/17400 | c645d7ac2d367fd5324660c616618e76e6b84729 | 2935be19901467c645bce9d134e28335f2aee7d8 | 2021-08-01T23:15:40Z | python | 2021-08-16T16:16:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,348 | ["airflow/providers/google/cloud/example_dags/example_mlengine.py", "airflow/providers/google/cloud/operators/mlengine.py", "tests/providers/google/cloud/operators/test_mlengine.py"] | Add support for hyperparameter tuning on GCP Cloud AI | @darshan-majithiya had opened #15429 to add the hyperparameter tuning PR but it's gone stale. I'm adding this issue to see if they want to pick it back up, or if not, if someone wants to pick up where they left off in the spirit of open source 😄 | https://github.com/apache/airflow/issues/17348 | https://github.com/apache/airflow/pull/17790 | 87769db98f963338855f59cfc440aacf68e008c9 | aa5952e58c58cab65f49b9e2db2adf66f17e7599 | 2021-07-30T18:50:32Z | python | 2021-08-27T18:12:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,340 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/apache/livy/operators/livy.py", "tests/providers/apache/livy/operators/test_livy.py"] | Retrieve session logs when using Livy Operator | **Description**
The Airflow logs generated by the Livy operator currently only state the status of the submitted batch. To view the logs from the job itself, one must go separately to the session logs. I think Airflow should have an option (possibly on by default) to retrieve the session logs after the batch reaches a terminal state, provided a `polling_interval` has been set.
**Use case / motivation**
When debugging a task submitted via Livy, the session logs are the first place to check. For most other tasks, including SparkSubmitOperator, those first-look logs are available directly in the Airflow UI, but for Livy you must go to an external system or write a separate task to retrieve them.
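Today that separate task usually ends up calling Livy's REST API directly — roughly the sketch below (the Livy URL and batch id wiring are simplified and error handling is omitted):

```python
import requests


def fetch_livy_batch_logs(batch_id: int, livy_url: str = "http://livy:8998") -> str:
    """Pull the session/driver log lines for a Livy batch after it finishes."""
    resp = requests.get(f"{livy_url}/batches/{batch_id}/log", params={"from": 0, "size": 10000})
    resp.raise_for_status()
    return "\n".join(resp.json().get("log", []))
```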
**Are you willing to submit a PR?**
I don't yet have a good sense of how challenging this will be to set up and test. I can try but if anyone else wants to go for it, don't let my attempt stop you.
**Related Issues**
None I could find
| https://github.com/apache/airflow/issues/17340 | https://github.com/apache/airflow/pull/17393 | d04aa135268b8e0230be3af6598a3b18e8614c3c | 02a33b55d1ef4d5e0466230370e999e8f1226b30 | 2021-07-30T13:54:00Z | python | 2021-08-20T21:49:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,316 | ["scripts/ci/pre_commit/pre_commit_check_provider_yaml_files.py"] | Docs validation function - Add meaningful errors | The following function just prints a right/left set difference, which is not very meaningful and makes failures difficult to troubleshoot.
```python
def check_doc_files(yaml_files: Dict[str, Dict]):
    print("Checking doc files")
    current_doc_urls = []
    current_logo_urls = []
    for provider in yaml_files.values():
        if 'integrations' in provider:
            current_doc_urls.extend(
                guide
                for guides in provider['integrations']
                if 'how-to-guide' in guides
                for guide in guides['how-to-guide']
            )
            current_logo_urls.extend(
                integration['logo'] for integration in provider['integrations'] if 'logo' in integration
            )
        if 'transfers' in provider:
            current_doc_urls.extend(
                op['how-to-guide'] for op in provider['transfers'] if 'how-to-guide' in op
            )
    expected_doc_urls = {
        "/docs/" + os.path.relpath(f, start=DOCS_DIR)
        for f in glob(f"{DOCS_DIR}/apache-airflow-providers-*/operators/**/*.rst", recursive=True)
        if not f.endswith("/index.rst") and '/_partials' not in f
    }
    expected_doc_urls |= {
        "/docs/" + os.path.relpath(f, start=DOCS_DIR)
        for f in glob(f"{DOCS_DIR}/apache-airflow-providers-*/operators.rst", recursive=True)
    }
    expected_logo_urls = {
        "/" + os.path.relpath(f, start=DOCS_DIR)
        for f in glob(f"{DOCS_DIR}/integration-logos/**/*", recursive=True)
        if os.path.isfile(f)
    }
    try:
        assert_sets_equal(set(expected_doc_urls), set(current_doc_urls))
        assert_sets_equal(set(expected_logo_urls), set(current_logo_urls))
    except AssertionError as ex:
        print(ex)
        sys.exit(1)
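
# --- Not part of the original function: a rough sketch of the kind of output
# --- I would find more meaningful (the helper name below is made up), e.g.
# --- report_set_difference(set(expected_doc_urls), set(current_doc_urls), "how-to guides")
def report_set_difference(expected: set, current: set, what: str) -> None:
    unreferenced = sorted(expected - current)  # files on disk never referenced by any provider.yaml
    dangling = sorted(current - expected)      # provider.yaml entries pointing at files that do not exist
    if unreferenced:
        print(f"{what}: present on disk but not referenced in any provider.yaml: {unreferenced}")
    if dangling:
        print(f"{what}: referenced in provider.yaml but no such file exists: {dangling}")
    if unreferenced or dangling:
        sys.exit(1)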
``` | https://github.com/apache/airflow/issues/17316 | https://github.com/apache/airflow/pull/17322 | 213e337f57ef2ef9a47003214f40da21f4536b07 | 76e6315473671b87f3d5fe64e4c35a79658789d3 | 2021-07-29T14:58:56Z | python | 2021-07-30T19:18:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,276 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | Make platform version an independent parameter of ECSOperator | Currently the `ECSOperator` propagates the `platform_version` parameter either when `launch_type` is `FARGATE` or when a `capacity_provider_strategy` parameter is provided. The `capacity_provider_strategy` case is wrong: a capacity provider strategy can reference an EC2 capacity provider, and in that case `platform_version` should not be propagated to the `boto3` API call. This is impossible with the current logic of `ECSOperator`, because `platform_version` is always propagated in that case and `boto3` does not accept `platform_version` as `None`. To fix this, `platform_version` should be an independent parameter that is propagated only when it is specified, regardless of which `launch_type` or `capacity_provider_strategy` is used. That should also simplify the logic of `ECSOperator`.
I will prepare a PR to fix that. | https://github.com/apache/airflow/issues/17276 | https://github.com/apache/airflow/pull/17281 | 5c1e09cafacea922b9281e901db7da7cadb3e9be | 71088986f12be3806d48e7abc722c3f338f01301 | 2021-07-27T20:44:50Z | python | 2021-08-02T08:05:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,249 | ["dev/README_RELEASE_HELM_CHART.md", "dev/chart/build_changelog_annotations.py"] | Add Changelog Annotation to Helm Chart | Our Helm Chart is missing changelog annotations:
https://artifacthub.io/docs/topics/annotations/helm/

| https://github.com/apache/airflow/issues/17249 | https://github.com/apache/airflow/pull/20555 | 485ff6cc64d8f6a15d8d6a0be50661fe6d04b2d9 | c56835304318f0695c79ac42df7a97ad05ccd21e | 2021-07-27T07:24:22Z | python | 2021-12-29T21:24:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,240 | ["airflow/operators/bash.py", "tests/operators/test_bash.py"] | bash operator overrides environment variables instead of updating them | **Apache Airflow version**: 1.10.15
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
`Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.16-eks-7737de", GitCommit:"7737de131e58a68dda49cdd0ad821b4cb3665ae8", GitTreeState:"clean", BuildDate:"2021-03-10T21:33:25Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}`
**Environment**:
- **Cloud provider or hardware configuration**: EKS
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow-web-84bf9f8865-s5xxg 4.14.209-160.335.amzn2.x86_64 #1 SMP Wed Dec 2 23:31:46 UTC 2020 x86_64
- **Install tools**: helm (fork of the official chart)
- **Others**:
**What happened**:
We started using the `env` parameter of the built-in BashOperator. The goal was to add a single variable to be used as part of the command, but once we used it, all of the other environment variables were ignored.
**What you expected to happen**:
We expected that any environment variable we add via this parameter would be added to (or update) the existing environment.
That expectation turned out to be wrong for this parameter.
**How to reproduce it**:
```python
import os

os.environ["foo"] = "bar"

from datetime import datetime

from airflow import DAG
from airflow.models import TaskInstance
from airflow.operators.bash_operator import BashOperator

dag = DAG(dag_id='anydag', start_date=datetime.now())

# unsuccessful example: the task fails because "foo" is no longer visible
task = BashOperator(
    bash_command='if [ -z "$foo" ]; then exit 1; fi',
    env={"foo1": "bar1"},
    dag=dag,
    task_id='test',
)
ti = TaskInstance(task=task, execution_date=datetime.now())
result = task.execute(ti.get_template_context())

# successful example: without `env`, the inherited environment stays intact
task = BashOperator(
    bash_command='if [ -z "$foo" ]; then exit 1; fi',
    dag=dag,
    task_id='test1',
)
ti = TaskInstance(task=task, execution_date=datetime.now())
result = task.execute(ti.get_template_context())
```
**Anything else we need to know**:
This happens every time it runs.
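A possible workaround in the meantime is to merge the parent environment explicitly — a sketch, reusing the `dag` object from the reproduction snippet above:

```python
import os

from airflow.operators.bash_operator import BashOperator

task_merged = BashOperator(
    task_id='test_merged_env',
    bash_command='if [ -z "$foo" ]; then exit 1; fi',
    # pass the inherited environment explicitly, plus the extra variable
    env={**os.environ, "foo1": "bar1"},
    dag=dag,  # the `dag` defined in the snippet above
)
```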
| https://github.com/apache/airflow/issues/17240 | https://github.com/apache/airflow/pull/18944 | b2045d6d1d4d2424c02d7d9b40520440aa4e5070 | d4a3d2b1e7cf273caaf94463cbfcbcdb77bfc338 | 2021-07-26T21:10:14Z | python | 2021-10-13T19:28:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,235 | ["airflow/www/views.py", "tests/www/views/test_views_connection.py"] | Connection inputs in Extra field are overwritten by custom form widget fields | **Apache Airflow version**:
2.1.0
**What happened**:
There are several hooks that still use optional parameters from the classic `Extra` field. However, when creating the connection the `Extra` field is overwritten with values from the custom fields that are included in the form. Because the `Extra` field is overwritten, these optional parameters cannot be used by the hook.
For example, in the `AzureDataFactoryHook`, If `resource_group_name` or `factory_name` are not provided when initializing the hook, it defaults to the value specified in the connection extras. Using the Azure Data Factory connection form, here is the initial connection submission:

After saving the connection, the `Extra` field is overwritten with the custom fields that use "extras" under the hood:

**What you expected to happen**:
Wavering slightly but I would have initially expected that the `Extra` field wasn't overwritten but updated with both the custom field "extras" plus the originally configured values in the `Extra` field. However, a better UX would be that the values used in the `Extra` field should be separate custom fields for these hooks and the `Extra` field is hidden. Perhaps it's even both?
**How to reproduce it**:
Install either the Microsoft Azure or Snowflake providers, attempt to create a connection for either the Snowflake, Azure Data Factory, Azure Container Volume, or Azure types with the `Extra` field populated prior to saving the form.
**Anything else we need to know**:
Happy to submit PRs to fix this issue. 🚀
| https://github.com/apache/airflow/issues/17235 | https://github.com/apache/airflow/pull/17269 | 76e6315473671b87f3d5fe64e4c35a79658789d3 | 1941f9486e72b9c70654ea9aa285d566239f6ba1 | 2021-07-26T17:16:23Z | python | 2021-07-31T05:35:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,193 | ["airflow/providers_manager.py"] | Custom connection cannot use SelectField | When you have Custom Connection and use SelectField, Connection screen becomes unusable (see #17064)
We should detect the situation and throw an exception when Provider Info is initialized. | https://github.com/apache/airflow/issues/17193 | https://github.com/apache/airflow/pull/17194 | 504294e4c231c4fe5b81c37d0a04c0832ce95503 | 8e94c1c64902b97be146cdcfe8b721fced0a283b | 2021-07-23T15:59:47Z | python | 2021-07-23T18:26:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,192 | ["airflow/providers/salesforce/hooks/salesforce.py", "docs/apache-airflow-providers-salesforce/connections/salesforce.rst", "tests/providers/salesforce/hooks/test_salesforce.py"] | Adding additional login support for SalesforceHook | **Description**
Currently the `SalesforceHook` only supports authentication via username, password, and security token. The Salesforce API used under the hood supports a few other authentication types:
- Direct access via a session ID
- IP filtering
- JWT access
**Use case / motivation**
The `SalesforceHook` should support all authentication types supported by the underlying API.
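For reference, the underlying `simple_salesforce` client already accepts these other credential combinations — roughly as sketched below (keyword names quoted from its docs from memory, so treat them as approximate, and all values are placeholders):

```python
from simple_salesforce import Salesforce

# direct access with a session id obtained elsewhere (e.g. a prior OAuth flow)
sf_session = Salesforce(session_id="00D...session...", instance="na1.salesforce.com")

# IP-filtering / organizationId based login (no security token required)
sf_org = Salesforce(username="user@example.com", password="secret", organizationId="00Dxxxxxxxxxxxx")

# JWT bearer flow
sf_jwt = Salesforce(username="user@example.com", consumer_key="3MVG9...", privatekey_file="/path/to/server.key")
```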
**Are you willing to submit a PR?**
Yes 🚀
**Related Issues**
#8766
| https://github.com/apache/airflow/issues/17192 | https://github.com/apache/airflow/pull/17399 | bb52098cd685497385801419a1e0a59d6a0d7283 | 5c0e98cc770b4f055dbd1c0b60ccbd69f3166da7 | 2021-07-23T15:27:10Z | python | 2021-08-06T13:20:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,186 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | Fix template_ext processing for Kubernetes Pod Operator | The "template_ext" mechanism is useful for automatically loading and jinja-processing files which are specified in parameters of Operators. However this might lead to certain problems for example (from slack conversation):
```
The templated_fields in KubernetesPodOperator seem to cause an airflow jinja2.exceptions.TemplateNotFound error when certain characters appear in a value, e.g. / in env_vars.
This code triggers the error:
env_vars=[
    k8s.V1EnvVar(
        name="GOOGLE_APPLICATION_CREDENTIALS",
        value="/var/secrets/google/service-account.json",
    ),
]
```
I believe the behaviour changed compared to the not-so-distant past. Some of the changes around processing parameters with Jinja recursively caused this template behaviour to also be applied to nested parameters like the ones above.
There were also several discussions and user confusion with this behaviour: #15942, #16922
There are two ways we could improve the situation:
1) limit the "template_extension" resolution to only direct string kwargs passed to the operator (I think this is no brainer and we should do it)
2) propose some "escaping" mechanism, where you could either disable template_extension processing entirely or somehow mark the parameters that should not be treated as templates.
Here I have several proposals:
a) we could add a "skip_template_ext_processing" or similar parameter in BaseOperator <- I do not like it, as many operators rely on this behaviour for a good reason
b) we could add a "template_ext" parameter in the operator that could override the original class-level field (see the sketch further down) <- I like this one a lot
c) we could add a "disable_template_ext_pattern" (str) parameter where we could specify a list of regexps to filter out only specific values <- this one would allow disabling template_ext much more "selectively" - only for certain parameters.
UPDATE: It only affects the Kubernetes Pod Operator, due to its recursive behaviour, and should be fixed there.
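For (b), a workaround that already works today is subclassing and clearing the class-level field; the constructor-argument version proposed above would simply make this unnecessary. A sketch:

```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator


class NoFileTemplateKubernetesPodOperator(KubernetesPodOperator):
    # opt out of file-based template resolution entirely for this operator
    template_ext = ()
```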
| https://github.com/apache/airflow/issues/17186 | https://github.com/apache/airflow/pull/17760 | 3b99225a4d3dc9d11f8bd80923e68368128adf19 | 73d2b720e0c79323a29741882a07eb8962256762 | 2021-07-23T07:55:07Z | python | 2021-08-21T13:57:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,168 | ["airflow/providers/amazon/aws/example_dags/example_local_to_s3.py", "airflow/providers/amazon/aws/transfers/local_to_s3.py", "tests/providers/amazon/aws/transfers/test_local_to_s3.py"] | Add LocalFilesystemtoS3Operator | **Description**
Currently, an S3Hook exists that allows transferring files to S3 via `load_file()`, but there is no operator associated with it. The new load operator would wrap the S3 Hook, so the hook would not have to be used directly.
**Use case / motivation**
Since uploading a local file to S3 with the S3 Hook currently requires writing a Python task with this functionality anyway, the operator could remove a lot of redundant boilerplate code and standardize the local-file-to-S3 load process.
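For illustration, this is roughly the boilerplate the operator would replace (a sketch — the connection id, bucket and paths are made up):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.hooks.s3 import S3Hook


def upload_to_s3() -> None:
    S3Hook(aws_conn_id="aws_default").load_file(
        filename="/tmp/report.csv",
        key="reports/report.csv",
        bucket_name="my-bucket",
        replace=True,
    )


with DAG("local_to_s3_boilerplate", start_date=datetime(2021, 7, 1), schedule_interval=None) as dag:
    upload = PythonOperator(task_id="upload_to_s3", python_callable=upload_to_s3)
```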
**Are you willing to submit a PR?**
Yes
**Related Issues**
Not that I could find.
| https://github.com/apache/airflow/issues/17168 | https://github.com/apache/airflow/pull/17382 | 721d4e7c60cbccfd064572f16c3941f41ff8ab3a | 1632c9f519510ff218656bbc1554c80cb158e85a | 2021-07-22T18:02:17Z | python | 2021-08-14T15:26:26Z |