url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/kubeflow/pipelines/issues/5689
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5689/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5689/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5689/events
|
https://github.com/kubeflow/pipelines/issues/5689
| 895,030,297 |
MDU6SXNzdWU4OTUwMzAyOTc=
| 5,689 |
[Samples] Add v2 compatible built-in samples
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 952830767,
"node_id": "MDU6TGFiZWw5NTI4MzA3Njc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/M",
"name": "size/M",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "james-jwu",
"id": 54086668,
"node_id": "MDQ6VXNlcjU0MDg2NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/54086668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/james-jwu",
"html_url": "https://github.com/james-jwu",
"followers_url": "https://api.github.com/users/james-jwu/followers",
"following_url": "https://api.github.com/users/james-jwu/following{/other_user}",
"gists_url": "https://api.github.com/users/james-jwu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/james-jwu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/james-jwu/subscriptions",
"organizations_url": "https://api.github.com/users/james-jwu/orgs",
"repos_url": "https://api.github.com/users/james-jwu/repos",
"events_url": "https://api.github.com/users/james-jwu/events{/privacy}",
"received_events_url": "https://api.github.com/users/james-jwu/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @james-jwu ",
"Found a bug, in 1.7.0-rc.2 release, the builtin sample is broken.\r\nI think root cause is that, the builtin samples are built using master version of SDK. Master version of v2 compatible SDK is not usable, because the SDK is in master, but the launcher image is pinned to last released version. There may be version skew between them.",
"/cc @chensun \r\nI'll pin builtin samples to use latest released KFP SDK, sounds good?"
] | 2021-05-19T06:12:31 | 2021-08-05T01:53:18 | 2021-08-05T01:53:18 |
CONTRIBUTOR
| null |
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5689/timeline
| null |
completed
| null | null | false |
|
https://api.github.com/repos/kubeflow/pipelines/issues/5688
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5688/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5688/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5688/events
|
https://github.com/kubeflow/pipelines/issues/5688
| 895,028,721 |
MDU6SXNzdWU4OTUwMjg3MjE=
| 5,688 |
[SDK] better compiler error message when using v2 components in v1 mode
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 952830767,
"node_id": "MDU6TGFiZWw5NTI4MzA3Njc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/M",
"name": "size/M",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @chensun "
] | 2021-05-19T06:10:03 | 2021-05-31T20:39:04 | 2021-05-31T20:39:04 |
CONTRIBUTOR
| null |
Here's what I currently get when using v2 lightweight components in KFP v1. This can be a common mistake once we start to build v2-only pipelines, so I think it's worth making the error more readable.
```
Traceback (most recent call last):
File "/tmp/tmp.Kc1muRYxPb", line 716, in <module>
executor_main()
File "/tmp/tmp.Kc1muRYxPb", line 706, in executor_main
executor_input = json.loads(args.executor_input)
File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
```
I'm testing this:
```
import kfp
from kfp import dsl
from kfp.v2.dsl import component, InputArtifact, Artifact
gcs_download_op = kfp.components.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/961b17fa6844e1d79e5d3686bb557d830d7b5a95/components/google-cloud/storage/download_blob/component.yaml'
)
@component
def echo_artifact(msg: InputArtifact(Artifact)):
"""Print an artifact."""
with open(msg.path) as f:
print(f.read())
@dsl.pipeline(
name='Exit Handler',
description=
'Downloads a message and prints it. The exit handler will run after the pipeline finishes (successfully or not).'
)
def pipeline_exit_handler(url='gs://ml-pipeline/shakespeare1.txt'):
"""A sample pipeline showing exit handler."""
exit_task = echo_artifact('exit!')
with dsl.ExitHandler(exit_task):
download_task = gcs_download_op(url)
echo_task = echo_artifact(download_task.output)
```
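For reference, a minimal sketch of compiling this pipeline in the two modes (this assumes a KFP SDK release that exposes `PipelineExecutionMode`, i.e. 1.6+; the package file names are illustrative). Compiling with the default v1 compiler is what leads to the JSONDecodeError above:
```python
from kfp import dsl
from kfp.compiler import Compiler

# Default v1 mode: the v2 lightweight component above is not supported here,
# and the run fails at runtime with the JSONDecodeError quoted in this issue.
Compiler().compile(
    pipeline_func=pipeline_exit_handler, package_path='pipeline_v1.yaml')

# v2-compatible mode: the intended way to run v2 lightweight components.
Compiler(mode=dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
    pipeline_func=pipeline_exit_handler, package_path='pipeline_v2_compat.yaml')
```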
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5688/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5688/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5687
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5687/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5687/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5687/events
|
https://github.com/kubeflow/pipelines/issues/5687
| 895,027,550 |
MDU6SXNzdWU4OTUwMjc1NTA=
| 5,687 |
[Samples] update all core samples to avoid container op
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955884211,
"node_id": "MDU6TGFiZWw5NTU4ODQyMTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/L",
"name": "size/L",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "NikeNano",
"id": 22057410,
"node_id": "MDQ6VXNlcjIyMDU3NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikeNano",
"html_url": "https://github.com/NikeNano",
"followers_url": "https://api.github.com/users/NikeNano/followers",
"following_url": "https://api.github.com/users/NikeNano/following{/other_user}",
"gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions",
"organizations_url": "https://api.github.com/users/NikeNano/orgs",
"repos_url": "https://api.github.com/users/NikeNano/repos",
"events_url": "https://api.github.com/users/NikeNano/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikeNano/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "NikeNano",
"id": 22057410,
"node_id": "MDQ6VXNlcjIyMDU3NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikeNano",
"html_url": "https://github.com/NikeNano",
"followers_url": "https://api.github.com/users/NikeNano/followers",
"following_url": "https://api.github.com/users/NikeNano/following{/other_user}",
"gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions",
"organizations_url": "https://api.github.com/users/NikeNano/orgs",
"repos_url": "https://api.github.com/users/NikeNano/repos",
"events_url": "https://api.github.com/users/NikeNano/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikeNano/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign",
"Hi @difince, if you are looking for issues to contribute, this would be my recommendation. (It can be several PRs too)\r\n\r\nFeel free to reject based on your own interests.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-19T06:08:05 | 2022-03-03T04:05:15 | 2022-03-03T04:05:15 |
CONTRIBUTOR
| null |
context: https://github.com/kubeflow/pipelines/pull/4166
remaining occurrences: https://github.com/kubeflow/pipelines/search?p=2&q=containerop+path%3Asamples%2Fcore
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5687/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5686
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5686/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5686/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5686/events
|
https://github.com/kubeflow/pipelines/issues/5686
| 895,026,806 |
MDU6SXNzdWU4OTUwMjY4MDY=
| 5,686 |
[Samples] organize and document sample folder structure with v2 samples
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955884211,
"node_id": "MDU6TGFiZWw5NTU4ODQyMTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/L",
"name": "size/L",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
open
| false | null |
[] | null |
[
"TODO:\r\n* [ ] document sample folder structure in samples/README.md\r\n* [ ] check samples, mark those only run on v1 with comments\r\n* [ ] check samples, mark those only run on v2 with v2_ in example name, or move into v2 folder.\r\n* [ ] move some v2 samples from samples/test folder to v2 folder",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-05-19T06:06:46 | 2022-03-03T02:05:30 | null |
CONTRIBUTOR
| null |
Propose using the following convention to organize samples:
* all core samples are by default compatible with both v1 and v2.
* samples that only run on v2 should be added either in samples/v2/core folder, or next to a corresponding v1 sample and have a v2 suffix, e.g. exit_handler_v2.py
* core samples that do not currently run on v2, but that we plan to support, can stay in the samples folder; we should add a comment clarifying their status and plan.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5686/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5685
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5685/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5685/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5685/events
|
https://github.com/kubeflow/pipelines/issues/5685
| 895,025,273 |
MDU6SXNzdWU4OTUwMjUyNzM=
| 5,685 |
[Launcher] presubmit unit tests
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 942985146,
"node_id": "MDU6TGFiZWw5NDI5ODUxNDY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/XS",
"name": "size/XS",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Recommend using https://github.com/kubeflow/pipelines/blob/master/test/presubmit-backend-test.sh as a reference"
] | 2021-05-19T06:04:05 | 2021-06-08T05:37:53 | 2021-06-08T05:37:53 |
CONTRIBUTOR
| null |
https://github.com/kubeflow/pipelines/tree/master/v2
This should run:
* go mod tidy
* go test ./... (including tests that use a one-off local MLMD server)
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5685/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5684
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5684/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5684/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5684/events
|
https://github.com/kubeflow/pipelines/issues/5684
| 895,023,219 |
MDU6SXNzdWU4OTUwMjMyMTk=
| 5,684 |
[SDK] better compiler error message when input type mismatch between parameter/artifact
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955884211,
"node_id": "MDU6TGFiZWw5NTU4ODQyMTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/L",
"name": "size/L",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @chensun ",
"Should have been covered by https://github.com/kubeflow/pipelines/pull/5759"
] | 2021-05-19T06:00:46 | 2021-06-08T00:22:40 | 2021-06-08T00:22:40 |
CONTRIBUTOR
| null |
https://github.com/kubeflow/pipelines/blob/f757a8842fd6d1ed3d37ab38c0707d127ea2048c/samples/core/exit_handler/exit_handler_test.py#L24-L34
The download_gcs component has an input of type 'GCS Path'; it produces a cryptic error message when the input is passed a string parameter:
> File "/Users/gongyuan/kfp/pipelines/sdk/python/kfp/compiler/v2_compat.py", line 108, in update_op artifact_info = {"fileInputPath": op.input_artifact_paths[artifact_name]} KeyError: 'GCS path'
Because this can be a common mistake, I think it deserves a better error message that includes:
* a clear indication of what is going wrong, with the relevant context in the error message
* a link to public documentation explaining the rules of parameters vs artifacts
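A minimal illustrative sketch of the mismatch described here (the component URL and values are borrowed from the exit-handler sample in issue 5688 above; this is not the exact code in exit_handler_test.py):
```python
import kfp
from kfp import dsl

# Component whose input is declared with the artifact type 'GCS Path'.
gcs_download_op = kfp.components.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/961b17fa6844e1d79e5d3686bb557d830d7b5a95/components/google-cloud/storage/download_blob/component.yaml'
)

@dsl.pipeline(name='exit-handler')
def pipeline(url: str = 'gs://ml-pipeline/shakespeare1.txt'):
    # `url` is a plain string parameter, while the component declares a
    # 'GCS Path' (artifact) input; in v2-compatible mode this currently
    # surfaces the KeyError quoted above instead of a readable
    # parameter-vs-artifact type error.
    gcs_download_op(url)
```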
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5684/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5683
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5683/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5683/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5683/events
|
https://github.com/kubeflow/pipelines/issues/5683
| 895,022,380 |
MDU6SXNzdWU4OTUwMjIzODA=
| 5,683 |
Update sample pipeline names to recommended resource format
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"@Bobgy, I am a new contributor looking for my first issue. \r\nCould I work on this ? ",
"Thank you @difince ! Yes, this is suitable for first time contributor.\n\nThe desired change is to review all pipelines in samples folder, if they have @pipeline annotation, change the pipeline name to only contain lowercase letters, hyphen and numbers.",
"Example\n\nhttps://github.com/kubeflow/pipelines/blob/63a59a7e765b18c2fad04fa9e8431e82467e0cbb/samples/core/dns_config/dns_config.py#L32\n\nshould be changed to 'dns-config-setting'",
"Thank you @Bobgy for your explanatory response! ",
"@Bobgy Should I keep the uppercase letters or lower them? \r\nhttps://github.com/kubeflow/pipelines/blob/54ac9a6a7173aecbbb30a043b2077e790cac6953/samples/contrib/ibm-samples/watson/watson_train_serve_pipeline.py#L42\r\n\r\n`name='kfp-on-wml-training'` or\r\n`name='KFP-on-WML-training'` ?\r\n\r\nAnother example\r\nhttps://github.com/kubeflow/pipelines/blob/54ac9a6a7173aecbbb30a043b2077e790cac6953/samples/contrib/volume_snapshot_ops/volume_snapshotop_rokurl.py#L28\r\n",
"@Bobgy What about ContainerOp/VolumeOp/VolumeSnapshotOp names? Should they follow the same rules? \r\nhttps://github.com/kubeflow/pipelines/blob/54ac9a6a7173aecbbb30a043b2077e790cac6953/samples/contrib/volume_snapshot_ops/volume_snapshotop_rokurl.py#L40-L41\r\nIf yes, I could open a separate issue about them. ",
"Hi @difince, sorry I wasn't clear on the scope. It's enough updating all samples in samples/core folder.\n\nContainerOp/VolumeOp/VolumeSnapshotOp names do not need any changes. Right now, we are only moving towards restrictor name for pipelines, because they will be context names.",
"And uppercase should be lowered",
"/assign @difince "
] | 2021-05-19T05:59:19 | 2021-05-24T07:36:32 | 2021-05-24T07:36:32 |
CONTRIBUTOR
| null |
Good: "my-pipeline"
Bad: "my_pipeline"
No underscores, spaces, or other special characters.
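A minimal sketch of the kind of change this asks for in a sample's `@dsl.pipeline` decorator (the name 'dns-config-setting' comes from the comments above; the function body and description are illustrative placeholders):
```python
from kfp import dsl

# Bad (illustrative): uppercase letters, spaces, or underscores in the name.
# @dsl.pipeline(name='DNS config', description='...')

# Good: only lowercase letters, numbers, and hyphens.
@dsl.pipeline(
    name='dns-config-setting',
    description='Illustrative placeholder description.',
)
def dns_config_pipeline():
    pass  # pipeline steps omitted
```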
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5683/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5681
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5681/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5681/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5681/events
|
https://github.com/kubeflow/pipelines/issues/5681
| 894,955,643 |
MDU6SXNzdWU4OTQ5NTU2NDM=
| 5,681 |
[feature] generic viewer operator for managing user webapps in Kubeflow
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"I think this would be a great step forward in terms of making it possible to easy run any type of web app with Kubeflow. The example resource from above would need some additional values of course, such as resource limits and requests, etc. Another important value that needs to be added is a `type` classification, so that the various web apps can filter through the different types of viewer (such as the Jupyter Web App and the Tensorboards Web App).\r\n\r\nAnother possibility is to adopt the newer (I think) CRD naming scheme, resulting on something similar to `notebook.webapps.kubefow.org`, `tensorboard.webapps.kubeflow.org` and `filebrowser.webapps.kubeflow.org`. Although I'm not sure if the same reconciliation code can be used for each type with this setup.\r\n\r\nIf going for the first option with a `type` that is specified in the resource, it would be a great feature to have this configurable for the cluster admin. This way, if they have some custom web app that they would like to deploy, the existing controller can be used. Another option would be to have the type not actually affect anything in the reconciliation loop, allowing any arbitrary value to be used.",
"Thank you for starting this discussion @Bobgy! This is a feature that we had also been discussing for a long time https://github.com/kubeflow/kubeflow/issues/3578#issuecomment-654914485, for Notebooks WG. Let my add some insights we had from running Notebooks, which could help us evaluate such a proposal.\r\n\r\nThe first thing I'd like to point out is we should consider exposing a [PodTemplateSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podtemplatespec-v1-core) instead of `ContainerSpec`, in the CR's spec, since it will allow us to configure at least the following things:\r\n1. A PVC to be used with the app's pod\r\n2. The ServiceAccount\r\n3. Affinities and Tolerations\r\n\r\n>With the number of different use-cases we have seen, sounds to me that we'd better leave those domain knowledge to a different layer of abstraction.\r\n\r\nI totally agree with that approach. If we'd like to include different apps then we should be thinking of only the common APIs that someone would need to configure for exposing such apps. \r\n\r\nThen it will be up to another abstraction, for example a managing web app, tο craft CRs for specific applications [Jupyter, TensorBoard, Captum etc], as you mentioned.",
"I'd also like to expose some necessary configurations we've seen for making the Notebook servers run under a prefix. Some apps will be expecting the requrests under the prefix path while some others will always expect requests under `/`.\r\n\r\nThe least necessary configuration for this will be the `spec.http[i].rewrite.uri` field in the VirtualService.\r\n\r\nAlso, we've seen applications, like RStudio, that require the prefix to be included in a custom header in each request.\r\n\r\nThe most flexible solution here would be to allow users to modify the entire spec of a VirtualService. But this will make it more involved to create such CRs, since the user will need to provide the Service name, gateways etc. in the spec.",
"Lastly, I'd also like to point out that an interesting feature would be to allow users to configure the replicas of the underlying Deployment.\r\n\r\nThis will essentially allow users to start/stop the underlying Pods, while still maintaining the CR.\r\n\r\nSo by taking all the above into consideration I'd propose the following iteration:\r\n\r\n**Strawman proposal v2**\r\n\r\n```yaml\r\napiVersion: pipelines.kubeflow.org/v1beta2\r\nkind: Viewer\r\nspec:\r\n ingress:\r\n type: istio.virtualservice # maybe we can have more types\r\n pathRewrite: /\r\n httpHeaders:\r\n - name: X-Forwarded-Prefix\r\n value: /tensorboard/kubeflow/tb-instance\r\n replicas: 1\r\n template:\r\n spec:\r\n containers:\r\n - name: main\r\n image: tensorflow:2.3\r\n command: ['python3', '-m']\r\n arguments: ['tensorboard', '--port', '8080', '--bind-all']\r\n envs:\r\n - name: AWS_SECRET\r\n valueFrom:\r\n - xxxx\r\n port: 8080\r\n```\r\n\r\nWould really like to hear your feedback. Also I believe another useful thing to discuss is how to handle the ports the container exposes and the underlying Service. Should we take for granted that the Service will only be sending traffic to Pod's `8080` port?",
"That already looks great!\r\nI think we can also use ports field instead, because Istio often require named ports to start with its protocol name.\r\n\r\nShall we put this into a design doc now?",
"I was just going to say the same thing about the ports. This will also be useful for pods that expose services on multiple ports (such as metrics for example). ",
"@Bobgy glad to hear! \r\n\r\nI will start working on a first iteration of a design doc so that we can further iterate and evaluate some edge cases. Do you have a template for design docs in pipelines I should follow?",
"@kimwnasptd Great! I am not opinionated about a template, you can use whatever template you prefer.",
"Related to this, would it be helpful to think about enabling generic dashboards through this same method (I'm thinking a dash or flask dashbord, or maybe even r-shiny?). Don't want to overload it, but general dashboards are the other use case that feel similar to this. In some ways I guess it is a generic version of the tensorboard use case",
"@ca-scribner The viewer CRD would be the confidante for these dashboard applications as well. Creating a web app that allows you to launch dashboards is something I’ve discussed with multiple people recently and something I would like to add in the future. Some thought will need to go into the authorization policies, so that non-Kubeflow users can access the dashboards as well (probably by using an OIDC group). A similar functionality will probably be desired for KFServing endpoints as well. ",
"This all sounds perfect, thanks!\n\nOn Mon, Jun 14, 2021 at 15:00 DavidSpek ***@***.***> wrote:\n\n> @ca-scribner <https://github.com/ca-scribner> The viewer CRD would be the\n> confidante for these dashboard applications as well. Creating a web app\n> that allows you to launch dashboards is something I’ve discussed with\n> multiple people recently and something I would like to add in the future.\n> Some thought will need to go into the authorization policies, so that\n> non-Kubeflow users can access the dashboards as well (probably by using an\n> OIDC group). A similar functionality will probably be desired for KFServing\n> endpoints as well.\n>\n> —\n> You are receiving this because you were mentioned.\n>\n>\n> Reply to this email directly, view it on GitHub\n> <https://github.com/kubeflow/pipelines/issues/5681#issuecomment-860919819>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALPFPI6GXB5IBN7ZY6WKUC3TSZGVBANCNFSM45DXWWZA>\n> .\n>\n",
"One thing that will need some careful consideration with the generic viewer is how to deal with RBAC permissions. For example, if you would want to allow a user to create tensorboards, but not a file browser instance. To support this I think it will be necessary to define multiple `Kind`s for the different viewers, but have them share (most of) the reconciliation loop. This then also allows for some domain specific implementations as well. Adding a layer of abstraction above this controller would probably require another controller, partially defeating the purpose of a single unified controller. The different specs would look similar to the following:\r\n\r\n```yaml\r\napiVersion: viewer.kubeflow.org/v1beta2\r\nkind: Tensorboard\r\n....\r\n```\r\n\r\n```yaml\r\napiVersion: viewer.kubeflow.org/v1beta2\r\nkind: Filebrowser\r\n....\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-19T03:34:34 | 2022-03-03T03:05:25 | 2022-03-03T03:05:25 |
CONTRIBUTOR
| null |
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
/area backend
<!-- /area sdk -->
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
<!-- Provide a description of this feature and the user experience. -->
Besides tensorboard, KFP viewer controller supports generic viewers.
A viewer is a long-running container that exposes a webapp through a certain port, along with the required setup to expose it through ingress (e.g., a VirtualService in Istio).
It can help visualize outputs of a pipeline component, but it can also be used outside of KFP like https://github.com/kubeflow/pipelines/issues/5651.
There are a few different use-cases we are currently getting:
* tensorboard (supported)
* file browser https://github.com/kubeflow/pipelines/issues/5651
* captum insights https://captum.ai/docs/captum_insights
* jupyter notebooks / vscode / rstudio (if we unify with Kubeflow notebooks controller)
All of them fit into this category, which makes a generic viewer operator that only abstracts ingress setup and lifecycle control seem like a good fit. The specific configuration for each type of service we want to expose can be supplied by users of the viewer CRD.
#### Strawman Proposal
A generic viewer CRD like the following:
```
apiVersion: pipelines.kubeflow.org/v1beta2
kind: Viewer
spec:
ingress:
type: istio.virtualservice # maybe we can have more type supports
containers:
- name: main
image: tensorflow:2.3
command: ['python3', '-m']
arguments: ['tensorboard', '--port', '8080', '--bind-all']
envs:
- name: AWS_SECRET
valueFrom:
- xxxx
port: 8080
```
This custom resource will be used to setup the webapp for external access with:
* deployment
* service
* virtualservice
* authorizationpolicy
The major value of the generic viewer operator is unifying the resources needed to make such a webapp available to users securely. Also, when this custom resource is created or deleted, the operator will make sure the whole group of resources is created/deleted/updated.
I think the most controversial thing to discuss is whether the viewer should encode domain knowledge about each type of service it starts up. With the number of different use-cases we have seen, it sounds to me that we'd better leave that domain knowledge to a different layer of abstraction. Curious how others think about this.
### What is the use case or pain point?
<!-- It helps us understand the benefit of this feature for your use case. -->
This also helps mitigate the problem that Kubeflow community has two operators to support these features: https://github.com/kubeflow/kubeflow/issues/5921.
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5681/reactions",
"total_count": 8,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5681/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5680
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5680/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5680/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5680/events
|
https://github.com/kubeflow/pipelines/issues/5680
| 894,872,712 |
MDU6SXNzdWU4OTQ4NzI3MTI=
| 5,680 |
v2 compatible on full Kubeflow
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955884211,
"node_id": "MDU6TGFiZWw5NTU4ODQyMTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/L",
"name": "size/L",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @Bobgy ",
"I'm trying to run v2 compatible pipeline on full Kubeflow, as expected, it fails when uploading to minio storage:\r\n```\r\nF0519 07:16:43.219511 1 main.go:56] Failed to execute component: failed to upload output artifact \"output_dataset_one\" to remote storage URI \"minio://mlpipeline/v2/artifacts/two_step_pipeline/two-step-pipeline-z9pxp/preprocess/output_dataset_one\": uploadFile(): unable to complete copying \"/minio/mlpipeline/v2/artifacts/two_step_pipeline/two-step-pipeline-z9pxp/preprocess/output_dataset_one\" to remote storage \"two_step_pipeline/two-step-pipeline-z9pxp/preprocess/output_dataset_one\": failed to close Writer for bucket: blob (key \"two_step_pipeline/two-step-pipeline-z9pxp/preprocess/output_dataset_one\") (code=Unknown): RequestError: send request failed\r\ncaused by: Put \"\r\nhttp://minio-service:9000/mlpipeline/two_step_pipeline/two-step-pipeline-z9pxp/preprocess/output_dataset_one\r\n\": dial tcp: lookup minio-service on 10.11.240.10:53: no such host\r\n```",
"Interesting, when I switch pipeline_root to a gcs path like gs://gongyuan-dev/v2/.\r\nThe pipeline runs successfully, so finding mlmd grpc service host was not a problem.\r\n\r\nI checked how that's configured -- the mlmd grpc service host and port come from k8s env vars:\r\n\r\n```\r\ncommand:\r\n...\r\n- '--mlmd_server_address'\r\n- $(METADATA_GRPC_SERVICE_HOST)\r\n- '--mlmd_server_port'\r\n- $(METADATA_GRPC_SERVICE_PORT)\r\n```\r\n\r\nEDIT: I was wrong, the env vars come from a configmap, see\r\n```\r\n envFrom:\r\n - configMapRef:\r\n name: metadata-grpc-configmap\r\n optional: true\r\n```",
"Because configmaps needs to be deployed, I propose a simpler default mechanism:\r\n1. Find if env vars like `MINIO_SERVICE_SERVICE_HOST` and `MINIO_SERVICE_SERVICE_PORT` exists, if they do, then we are running in KFP single namespace mode, we can use these env vars to connect to minio service. (because the service is called `minio-service`, so it's env vars have double SERVICE words `MINIO_SERVICE_SERVICE_HOST`, we should avoid the service suffix in services in the future.\r\n2. If the env vars do not exist, we can assume we are running in KFP multi user mode, so default minio service should be `minio-service.kubeflow:9000`.\r\n3. If that fails, we'll just tell users it fails\r\n\r\nGoing forward, a proper configuration for artifact storage will need to be introduced -- that's when we can make this configurable.",
"UPDATE: after trying this out, I found that this doesn't work on Kubeflow on GCP, because we do not use the bucket `mlpipeline` by default -- instead, we use a GCS bucket.\r\n\r\nTherefore, I think the current approach is probably fine for full Kubeflow on other platforms with MinIO, but not a good fit for Kubeflow on GCP.\r\nWe still need an easy to use configuration for default bucket + artifact store + credentials (if MinIO)",
"So actually, configurability of credentials is not a must have for making v2compat run on full Kubeflow.\r\n\r\nIt's enough to make default pipeline root configurable. For KF on GCP, we can default to gcs path. For others, using minio://mlpipeline is enough.",
"/reopen\r\nwe need to keep track of integration with Kubeflow 1.4",
"@Bobgy: Reopened this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5680#issuecomment-913917875):\n\n>/reopen\r\n>we need to keep track of integration with Kubeflow 1.4\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>",
"I think one issue that I previously missed is that, we need to make the new task API available from pipeline pods. That requires some istio authorization policy change.",
"TODOs:\r\n* [x] expose task API via Istio authorization policy\r\n* [x] merge https://github.com/kubeflow/gcp-blueprints/pull/284"
] | 2021-05-19T00:03:28 | 2021-09-25T05:51:25 | 2021-09-25T05:51:25 |
CONTRIBUTOR
| null |
UPDATE:
PRs have been written:
* https://github.com/kubeflow/pipelines/pull/5697
* https://github.com/kubeflow/pipelines/pull/5750
* https://github.com/kubeflow/gcp-blueprints/pull/284
===
This issue tracks efforts to make [v2 compatible pipelines](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/) run on [full Kubeflow](https://www.kubeflow.org/docs/components/pipelines/installation/overview/#full-kubeflow-deployment).
Depends on #4649
The major differences from KFP standalone are:
* MLMD grpc service is in kubeflow namespace by default.
* MinIO service is in kubeflow namespace by default.
* In Kubeflow on GCP, the default bucket is on GCS, so it's not `mlpipeline`.
For MLMD grpc service, we are already reading a configmap called `metadata-grpc-configmap`. The configmap is already configured in full Kubeflow for each user namespace: https://github.com/kubeflow/pipelines/blob/7608fcdbb4210c153b4a97b019787c6291473b9f/manifests/kustomize/base/installs/multi-user/pipelines-profile-controller/sync.py#L52-L63
Therefore, it already works out of the box.
Because in Kubeflow on GCP the default bucket is not `mlpipeline`, we need configurations that can set the default pipeline root and credentials for each namespace; this is essentially https://github.com/kubeflow/pipelines/issues/4649 for v2 compatible pipelines.
The following comments are investigation logs towards the conclusion I summarized here.
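As an illustration of the lookup order discussed in the investigation comments, here is a minimal sketch. The actual launcher is written in Go, so this Python snippet is not its implementation, and the default endpoints used for the fallbacks are assumptions for illustration only.

```python
import os


def resolve_mlmd_address() -> str:
    # The env vars are normally injected from the metadata-grpc-configmap;
    # fall back to the full-Kubeflow default service when they are absent.
    host = os.environ.get("METADATA_GRPC_SERVICE_HOST", "metadata-grpc-service.kubeflow")
    port = os.environ.get("METADATA_GRPC_SERVICE_PORT", "8080")
    return f"{host}:{port}"


def resolve_minio_endpoint() -> str:
    # Heuristic proposed in the comments: in-namespace env vars imply a
    # single-namespace (standalone) install; otherwise assume full Kubeflow.
    host = os.environ.get("MINIO_SERVICE_SERVICE_HOST")
    port = os.environ.get("MINIO_SERVICE_SERVICE_PORT")
    if host and port:
        return f"{host}:{port}"
    return "minio-service.kubeflow:9000"
```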
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5680/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5675
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5675/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5675/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5675/events
|
https://github.com/kubeflow/pipelines/issues/5675
| 894,447,262 |
MDU6SXNzdWU4OTQ0NDcyNjI=
| 5,675 |
KFP UI with KFP semantics
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 942897534,
"node_id": "MDU6TGFiZWw5NDI4OTc1MzQ=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/XXL",
"name": "size/XXL",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @zijianjoy ",
"@zijianjoy I want to propose a few more changes:\r\n* [ ] artifact names in ML Metadata tab is empty\r\n \r\n* [ ] Hide ML Metadata tab from pipeline node details, because useful information from the tab is already either in input/output tab or visualizations. ML Metadata info can be navigated to by clicking on the execution name link, so it's still easy to access."
] | 2021-05-18T14:28:43 | 2021-07-01T00:10:23 | 2021-07-01T00:10:23 |
CONTRIBUTOR
| null |
Starting with v2 compatible mode, all the KFP semantics information should be available in MLMD.
Therefore, we can now start to render the KFP UI in Argo-agnostic ways.
This is an umbrella issue to find out things we can improve under this theme.
Known issues:
* [x] https://github.com/kubeflow/pipelines/issues/5670
Depends on https://github.com/kubeflow/pipelines/issues/5669
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5675/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5674
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5674/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5674/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5674/events
|
https://github.com/kubeflow/pipelines/issues/5674
| 894,443,706 |
MDU6SXNzdWU4OTQ0NDM3MDY=
| 5,674 |
Rest API to query pipeline run results
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 942897534,
"node_id": "MDU6TGFiZWw5NDI4OTc1MzQ=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/XXL",
"name": "size/XXL",
"color": "ededed",
"default": false,
"description": null
}
] |
open
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"@Bobgy I am new contributor, looking for my first issue .. Could I work on this one ? \r\nAs I understand a new rest endpoint need to be added ? ",
"Hi @difince, this issue is too complex as first issue, because it also includes designing spec for the rest endpoint",
"Thank you for the response @Bobgy ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-18T14:25:20 | 2022-03-09T01:48:25 | null |
CONTRIBUTOR
| null |
Umbrella issue; this tracks:
1. coming up with a standard spec that describes pipeline run results for all the engines, e.g. we now have Argo, Tekton and Vertex AI pipelines.
1. implementing a REST API for this spec
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5674/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5673
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5673/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5673/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5673/events
|
https://github.com/kubeflow/pipelines/issues/5673
| 894,442,812 |
MDU6SXNzdWU4OTQ0NDI4MTI=
| 5,673 |
[Launcher] support non-root containers in v2 compatible mode
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"It's possible to specify the user that the container should run under. Backend can auto-set that on all containers.",
"@Ark-kun do you mean our backend sets all containers to run as root by default?",
"@Bobgy What is the plan to support non-root containers ? it is difficult to do any PoC with V2 in an enterprise setup due to this. Thanks.",
"Hi @Nagarajj, this isn't on my priority list right now.\r\nSo welcome contributions on this! Feel free to discuss if you need any help.\r\nOr if there are more people chiming in that this is important, we can re-prioritize.",
"I think what needs to be done is basically making sure all the local dirs v2 compatible mode launcher reads from/writes to should be accessible by all non-root users.\r\n\r\n@Nagarajj may I confirm do you require all containers to run as non-root? or is it OK for some KFP system containers to be root? e.g. we have a `kfp-launcher` init container that copies the launcher binary to a shared emptyDir volume. Do you want it to be non-root too?",
"I'd imagine the easiest solution/workaround is to\r\n* mount another emptyDir volume for `/gcs` folder, so it's accessible by all users (because as mentioned, it's part of container contract -- we cannot change the /gcs path)\r\n* change path of `/minio/xxx` and `/s3/xxx` to the same folder as the emptyDir containing the launcher binary\r\n\r\nFor best practice, I think we should move the volume with launcher binary to `/var/run/kfp`, because `/var/run` is runtime variable data. https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html\r\n\r\nSo here's the design:\r\n* an empty dir volume is mounted at `/var/run/kfp` (accessible to all users)\r\n * kfp launcher binary is copied to `/var/run/kfp/bin/launcher`\r\n * input/output s3/minio artifacts are downloaded/written to paths like `/var/run/kfp/artifact/s3/xxx`, `/var/run/kfp/artifact/minio/xxx`\r\n * output parameters are written to paths like `/var/run/kfp/parameter/xxxx`\r\n* an empty dir volume is mounted at `/gcs` (accessible to all users)\r\n * input/output gcs artifacts are downloaded/written to paths like `/gcs/xxxx`\r\n\r\nI think these are not very hard to achieve, maybe I can work on this too when I have some time, but welcome anyone who's interested.",
"> I think what needs to be done is basically making sure all the local dirs v2 compatible mode launcher reads from/writes to should be accessible by all non-root users.\r\n> \r\n> @Nagarajj may I confirm do you require all containers to run as non-root? or is it OK for some KFP system containers to be root? e.g. we have a `kfp-launcher` init container that copies the launcher binary to a shared emptyDir volume. Do you want it to be non-root too?\r\n\r\nIf we can remove restriction on Component container to be root it will be good. kfp-laucher init container can be root as we control that.",
"Thanks for the clarification! I think my above design works under these assumptions.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm working for an enterprise client that is very interested in this issue being resolved. They're in a sensitive data industry, and they are a major target for fraudsters. Hence, they have a need for tight security, and they have a policy of not allowing Kubeflow to run containers as root. Resolving this would allow them to use kubeflow pipelines native artifacts, rather than writing their own detached custom outputs.",
"We also need all container to run a non-root. Since v2compatible is deprecated that should be possible @zijianjoy ",
"hey folks, is there any update on this? I'm guessing current state is that due to 2.0 coming along soon V2_COMPATIBLE wont be worked on? There are several companies running containers in rootless mode - Is. #6530 still up for consideration?"
] | 2021-05-18T14:24:29 | 2022-07-02T00:37:30 | null |
CONTRIBUTOR
| null |
Currently, because the launcher writes input artifacts to paths like:
* `/gcs/xxx`
* `/minio/xxx`
* `/s3/xxxx`
These paths are not accessible by non-root users by default.
When using a component with a non-root image, the launcher fails when preparing input/output artifacts.
Because `/gcs/xxx` is currently a contract for KFP v2 python component wrappers, we cannot change to a different path like `/tmp/gcs/xxx` etc.
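For illustration only, the gist of a fix is to pre-create these directories on a pod-scoped volume and make them writable by any user. The real launcher is implemented in Go; this Python sketch is not its code, just a picture of the idea.

```python
import os
import stat

# Local artifact directories mentioned above.
LOCAL_ARTIFACT_DIRS = ["/gcs", "/minio", "/s3"]


def prepare_artifact_dirs(dirs=LOCAL_ARTIFACT_DIRS) -> None:
    """Create local artifact directories and make them world-writable so a
    component running as a non-root user can read/write artifacts there."""
    for d in dirs:
        os.makedirs(d, exist_ok=True)
        # rwxrwxrwx (chmod 777); acceptable here because the directories are
        # expected to live on a pod-scoped emptyDir volume.
        os.chmod(d, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
```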
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5673/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5673/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5672
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5672/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5672/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5672/events
|
https://github.com/kubeflow/pipelines/issues/5672
| 894,436,156 |
MDU6SXNzdWU4OTQ0MzYxNTY=
| 5,672 |
[Launcher] skip downloading artifacts not used by InputPath in v2 components
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955992480,
"node_id": "MDU6TGFiZWw5NTU5OTI0ODA=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/XL",
"name": "size/XL",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-18T14:18:30 | 2022-03-03T06:05:13 | 2022-03-03T06:05:13 |
CONTRIBUTOR
| null |
As explained in https://github.com/kubeflow/pipelines/issues/5671, we cannot know whether an artifact is consumed via a local path in v2 Python components.
So we first need to figure out how to determine that, and then skip downloading unneeded artifacts.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5672/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5671
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5671/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5671/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5671/events
|
https://github.com/kubeflow/pipelines/issues/5671
| 894,433,179 |
MDU6SXNzdWU4OTQ0MzMxNzk=
| 5,671 |
[Launcher] skip downloading artifacts used by InputUri for v1 components
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 952830767,
"node_id": "MDU6TGFiZWw5NTI4MzA3Njc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/M",
"name": "size/M",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
open
| false | null |
[] | null |
[
"/assign @capri-xiyue ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-05-18T14:15:41 | 2022-04-17T07:27:19 | null |
CONTRIBUTOR
| null |
For v1 components, we can rely on artifact placeholders to understand whether an artifact is used via InputUri or InputPath. Only the input artifacts consumed by InputPath need to be downloaded.
For v2 Python components, we cannot make this distinction at the moment. We can identify v2 Python components by checking whether the root placeholder `{{$}}` is used in the component args/commands.
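A rough sketch of the detection logic. This is illustrative only: the placeholder strings shown are assumptions about how the compiled command line references artifacts, not the launcher's actual implementation.

```python
from typing import List

ROOT_PLACEHOLDER = "{{$}}"  # used by v2 Python components


def is_v2_python_component(cmd: List[str]) -> bool:
    return any(ROOT_PLACEHOLDER in c for c in cmd)


def needs_download(artifact_name: str, cmd: List[str]) -> bool:
    """Only artifacts referenced via a path placeholder need a local copy;
    URI-only references can be passed through without downloading."""
    if is_v2_python_component(cmd):
        return True  # cannot distinguish for v2 Python components yet
    # Hypothetical path placeholder form, e.g. {{$.inputs.artifacts['x'].path}}
    path_ref = f"{{{{$.inputs.artifacts['{artifact_name}'].path}}}}"
    return any(path_ref in c for c in cmd)
```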
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5671/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5670
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5670/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5670/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5670/events
|
https://github.com/kubeflow/pipelines/issues/5670
| 894,428,715 |
MDU6SXNzdWU4OTQ0Mjg3MTU=
| 5,670 |
[UI] inputs/outputs tab in KFP semantics
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 952830767,
"node_id": "MDU6TGFiZWw5NTI4MzA3Njc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/M",
"name": "size/M",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @zijianjoy ",
"### Current INPUT/OUTPUT tab\r\n\r\nRender Input Parameters, Input Artifacts, Output Parameters, Output Artifacts.\r\n\r\nThey are read from Workflow object.\r\n\r\n### Use MLMD\r\n\r\nBased on execution, we can find a list of events to identify artifact and input/output for this execution. Detail info is in https://github.com/kubeflow/pipelines/blob/d9c019641ef9ebd78db60cdb78ea29b0d9933008/third_party/ml-metadata/ml_metadata/proto/metadata_store.proto#L94-L161.",
"### Questions\r\n\r\n1. What is the relationship of Input/Output tab vs the ML metadata tab in https://github.com/kubeflow/pipelines/blob/d9c019641ef9ebd78db60cdb78ea29b0d9933008/frontend/src/pages/ExecutionDetails.tsx#L126-L145? How should we merge them into one? Possible solution: Move the INPUT/OUTPUT/DECLARED_INPUT/DECLARED_OUTPUT to Input/Output tab, and shows only Properties/Custom Properties in ML Metadata tab.\r\n2. How do we differentiate Parameter and Artifact from MLMD? \r\n3. What is the relationship between DECLARED_INPUT and INPUT? How to show them in static pipeline mode?",
"These questions are right to the point! Let me try to explain some context, I don't have a clear answer to some of them, you'll need to do some designing.\r\n\r\n> ### Questions\r\n> 1. What is the relationship of Input/Output tab vs the ML metadata tab in https://github.com/kubeflow/pipelines/blob/d9c019641ef9ebd78db60cdb78ea29b0d9933008/frontend/src/pages/ExecutionDetails.tsx#L126-L145\r\n> ? How should we merge them into one? Possible solution: Move the INPUT/OUTPUT/DECLARED_INPUT/DECLARED_OUTPUT to Input/Output tab, and shows only Properties/Custom Properties in ML Metadata tab.\r\n\r\nIn KFP v1, input / output tab shows info parsed from argo workflows, but ML metadata tab shows info from MLMD. In v2 & v2 compatible, both will come from MLMD (and they are duplicate), so some merging or information rearrangement is necessary as you thought.\r\n\r\nHere's my gut feeling arrangement (mostly similar to your proposal), feel free to discuss:\r\n\r\nInput/output tab\r\n* shows info from MLMD\r\n* in addition to showing preview + download link, we can add a link to MLMD artifact details page\r\n\r\nML Metadata tab\r\n* suggest remove the tab altogether, because in KFP v2 compatible, we do not allow users to customize execution metadata/custom properties, so there's not much left to show\r\n\r\nLink to execution details page should probably be shown all the time (e.g. as the side content title, see below\r\n\r\n\r\n> 2. How do we differentiate Parameter and Artifact from MLMD?\r\n\r\n\r\nThe KFP MLMD data model is that input parameters are logged as `input:<parameter-name>` custom properties of the execution.\r\nOutput parameters are logged as `output:<parameter-name>` custom properties.\r\n\r\nSimilar to PR: https://github.com/kubeflow/pipelines/pull/5793, we will soon standardize to move parameters to fields of a custom property `metadata`. `metadata` is of `Struct` type, so it can include key value pairs like `input:<param>`, `output:<param>` like mentioned above.\r\n\r\nArtifacts are what you already observed in ML metadata tab, they are connected to executions by event.\r\n\r\n> 3. What is the relationship between DECLARED_INPUT and INPUT? How to show them in static pipeline mode?\r\n\r\nAnswer (ref from proto file):\r\n> For example, the DECLARED_INPUT and DECLARED_OUTPUT events are part of the signature of an execution\r\n\r\nhttps://github.com/google/ml-metadata/blob/47150524ee5ceee9766a034c4fbe5427440dd79e/ml_metadata/proto/metadata_store.proto#L100-L138\r\n\r\nThanks for the question, after re-reading the documentation, now I realized I had a wrong understanding of DECLARED_INPUT. For all inputs/outputs in KFP tasks, they should be declared input/outputs because they are part of the KFP component signature. Non-declared inputs/outputs is a concept only TFX uses.\r\n\r\nHowever, until now, because KFP does not have the non-declared inputs/outputs concept, we are logging all inputs & outputs as pure inputs and outputs. We need to confirm whether this is sth we need to change.\r\n\r\nRef: [MLMD Terminology section of KFP v2 design](https://docs.google.com/document/d/1fHU29oScMEKPttDA1Th1ibImAKsFVVt2Ynr4ZME05i0/edit#heading=h.zv2ulr8vrvu)"
] | 2021-05-18T14:11:11 | 2021-06-25T04:55:53 | 2021-06-25T04:55:53 |
CONTRIBUTOR
| null |
The KFP Inputs/Outputs tab in the run details page is currently tightly coupled to Argo.
For v2 compatible pipelines, we can use information from MLMD to render the Inputs/Outputs tab in KFP semantics.
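As a sketch of the MLMD queries involved: the UI itself is TypeScript, so this Python snippet only illustrates the data access, and the MLMD host/port values are assumptions.

```python
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Connect to the MLMD gRPC service (endpoint assumed for illustration).
config = metadata_store_pb2.MetadataStoreClientConfig(
    host="metadata-grpc-service.kubeflow", port=8080)
store = metadata_store.MetadataStore(config)


def list_io_artifacts(execution_id: int):
    """Return (inputs, outputs) artifact lists for one execution."""
    events = store.get_events_by_execution_ids([execution_id])
    input_ids = [e.artifact_id for e in events
                 if e.type == metadata_store_pb2.Event.INPUT]
    output_ids = [e.artifact_id for e in events
                  if e.type == metadata_store_pb2.Event.OUTPUT]
    return (store.get_artifacts_by_id(input_ids),
            store.get_artifacts_by_id(output_ids))
```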
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5670/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5669
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5669/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5669/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5669/events
|
https://github.com/kubeflow/pipelines/issues/5669
| 894,421,727 |
MDU6SXNzdWU4OTQ0MjE3Mjc=
| 5,669 |
[v2compat] finalize pipeline run MLMD schema
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955992480,
"node_id": "MDU6TGFiZWw5NTU5OTI0ODA=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/XL",
"name": "size/XL",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @capri-xiyue ",
"b/187556737",
"One problem I found from https://github.com/kubeflow/pipelines/issues/5668, is that\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/89364315e96ca0e14106ef178336f72265a8e91a/samples/test/metrics_visualization_v2.py#L62\r\nFor now, we log a custom property called \"accuracy\" in the artifact, but if we do not know in advance which metric names are logged, we cannot know which custom properties correspond to scalar metrics.\r\n\r\nTherefore, it seems we need more structured info for metrics.",
"> For now, we log a custom property called \"accuracy\" in the artifact, but if we do not know in advance which metric names are logged, we cannot know which custom properties correspond to scalar metrics.\r\n> Therefore, it seems we need more structured info for metrics.\r\n\r\nSDK allows user to define any mount of `log_metric` call, therefore scalar metric is a dictionary instead of single value. See this example:\r\n\r\n\r\n\r\nSee this SDK implementation: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/io_types.py#L112-L119\r\n\r\nIn order to visualize this scalar metrics, we need to create a table: one column for metric name, another column for metric value.",
"But we also need to distinguish `confidenceMetrics` and `confusionMatrix` from scalar metrics. Otherwise they will appear from the same dictionary which we need to explicitly filter: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/test_data/expected_io_types_classification_metrics.json",
"Needs further work item breakdown"
] | 2021-05-18T14:04:19 | 2021-07-26T11:57:21 | 2021-07-20T05:26:15 |
CONTRIBUTOR
| null |
TODOs:
* [x] Design, figure out the exact list of items we need to change (size/M)
Known work items:
* [x] https://github.com/kubeflow/pipelines/issues/5788 (size/XS)
* [x] https://github.com/kubeflow/pipelines/pull/5842
* [x] #5789 (size/S)
* [x] [v2compat] remove redundant fields “pipeline_name”, “pipeline_run_id” from execution custom properties
* [x] https://github.com/kubeflow/pipelines/issues/5978
* [x] make scalar metrics distinguishable. UPDATE: figured out that we treat all properties of type float as scalar metrics in metadata (custom properties).
* [x] #5985
* [x] #5986
* [x] https://github.com/kubeflow/pipelines/issues/5803 (decided this is not necessary)
* [ ] P2 pipeline root task execution
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5669/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5668
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5668/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5668/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5668/events
|
https://github.com/kubeflow/pipelines/issues/5668
| 894,330,157 |
MDU6SXNzdWU4OTQzMzAxNTc=
| 5,668 |
v2 metrics visualization
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955884211,
"node_id": "MDU6TGFiZWw5NTU4ODQyMTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/L",
"name": "size/L",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"b/186482243",
"/assign @zijianjoy ",
"Do we visualize HTML, Markdown, Tensorboard in the new visualization tab?",
"> Do we visualize HTML, Markdown, Tensorboard in the new visualization tab?\n\nYes, they are tracked in a different issue in P1.",
"We might need to update documentation https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/",
"> We might need to update documentation https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/\r\n\r\nThat's a good point! Note, the update work item is already tracked in https://github.com/kubeflow/pipelines/issues/5666#issue-894322129.",
"Definition of `System.ClassificationMetrics`: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/io_types.py#L122",
"Hello Yuan @Bobgy , what is the goal of this item?\r\n\r\n`add MLMD state verification of this pipeline`\r\n\r\nDoes it mean we check the status of `execution`? Or the status of complete pipeline run? What do we want to visualize when state is success?",
"@zijianjoy I just figured out moving custom properties to `metadata` field in a separate PR, so I think I can take `add MLMD state verification of this pipeline` task too in the same PR.",
"All is done",
"@Bobgy Hi, I'm trying to use test code for visualizing confusion matrix metrics but it seems doesn't work well can you help me please??\r\nhttps://github.com/kubeflow/pipelines/issues/8675"
] | 2021-05-18T12:33:37 | 2023-01-14T15:15:38 | 2021-06-28T06:10:11 |
CONTRIBUTOR
| null |
Components can output metrics artifacts that are rendered in the UI.
Sample pipeline: https://github.com/kubeflow/pipelines/blob/89364315e96ca0e14106ef178336f72265a8e91a/samples/test/metrics_visualization_v2.py
TODOs:
* [x] ~~add MLMD state verification of this pipeline: https://github.com/kubeflow/pipelines/blob/89364315e96ca0e14106ef178336f72265a8e91a/samples/test/metrics_visualization_v2_test.py#L21~~
* [x] scalar metric (no need to support showing scalar metrics in run list for now), wait for resolution of https://github.com/kubeflow/pipelines/issues/5669#issuecomment-845676621
* [x] ROC curve
* [x] confusion matrix
* [x] add v2 metrics visualization section in https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/
Also, we suggest writing new UI code following recent best practices and keeping it separate from existing components. We want to take a gradual approach to modernizing the UI codebase.
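For reference, a minimal sketch of a component producing such metrics artifacts, based on the sample pipeline linked above; the exact import paths and values here are illustrative and depend on the KFP SDK version used.

```python
from kfp.v2.dsl import ClassificationMetrics, Metrics, Output, component


@component
def eval_model(metrics: Output[Metrics],
               clf_metrics: Output[ClassificationMetrics]):
    # Scalar metric rendered in the metrics visualization UI.
    metrics.log_metric("accuracy", 0.92)
    # Confusion matrix: category labels plus the matrix rows.
    clf_metrics.log_confusion_matrix(["cat", "dog"], [[10, 2], [3, 15]])
```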
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5668/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5667
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5667/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5667/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5667/events
|
https://github.com/kubeflow/pipelines/issues/5667
| 894,323,707 |
MDU6SXNzdWU4OTQzMjM3MDc=
| 5,667 |
[v2compat] umbrella - KFP data model based caching
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 942897534,
"node_id": "MDU6TGFiZWw5NDI4OTc1MzQ=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/XXL",
"name": "size/XXL",
"color": "ededed",
"default": false,
"description": null
}
] |
closed
| false |
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @capri-xiyue ",
"Needs further TODO item break down",
"Hi @Bobgy do you have the following in mind ? https://github.com/kubeflow/pipelines/issues/5509\r\nOr will there be a native way such that deleting runs from the pripelines webinterface will also delete the artifacts from minio and make the cacher aware that the files are gone?",
"@capri-xiyue can introduce more, but our implementation is basically what you described.\n\nWhen runs are deleted, their entries will no longer be used for caching.\n\nWe haven't invented a way to delete external data for users yet, because data loss would be very frightening.",
"Hi @zijianjoy, I just realized you'll need to make a small change https://github.com/kubeflow/pipelines/issues/5977 for caching.\r\nYou can coordinate with @capri-xiyue for when and how to do that",
"All P0s are done, great work @capri-xiyue @zijianjoy "
] | 2021-05-18T12:25:56 | 2021-08-20T08:40:11 | 2021-08-20T08:40:11 |
CONTRIBUTOR
| null |
internal tracker: b/181133870
Work Items estimate:
* [x] #5726 (size/L)
* [x] #5816 (size/M)
* [x] #5817(size/M)
* [x] #5818(size/S)
* [x] #5819 (size/M)
* [x] #5820 (size/M)
* [x] #5977
total: 4w, 1d for SDK team
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5667/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5666
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5666/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5666/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5666/events
|
https://github.com/kubeflow/pipelines/issues/5666
| 894,322,129 |
MDU6SXNzdWU4OTQzMjIxMjk=
| 5,666 |
v1 visualizations backward compatible in v2 compatible mode
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 955884211,
"node_id": "MDU6TGFiZWw5NTU4ODQyMTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/L",
"name": "size/L",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 3018359587,
"node_id": "MDU6TGFiZWwzMDE4MzU5NTg3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/launcher",
"name": "area/launcher",
"color": "6CC048",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @chensun \r\nfor SDK\r\n/assign @zijianjoy \r\nfor UI",
"I'd really like to see `mlpipeline-ui-metadata` gone in the future.\r\nBut first there needs to be an alternative to it.\r\nSupporting \"type-based visualizations\" and \"visualizations as components\" will provide a better alternative.",
"@Ark-kun the goal of this issue is to ease migration, when switching to v2-compatible pipelines, people can gradually move to type-based visualization with their existing pipelines.",
"Moving to P1, because Chen does not have time to work on this before 6.4.\r\n\r\nWe may work on the doc update first.",
"I have a few questions regarding writing sample pipelines and visualizing on UI:\r\n\r\n#### Sample Pipeline\r\n\r\nIs there a sample pipeline which writes v1 `mlpipeline-ui-metadata` artifact in V2 compatible mode using Python? It will be great to have a pipeline to create both V1 and V2 visualization, so I can test the scenario where both V1 and V2 artifacts exist in single execution. Furthermore:\r\n\r\n##### mlpipeline-ui-metadata Artifact naming in python code\r\n\r\nI tried to write the following component, but was not successful because python doesn't accept `-` in parameter name `mlpipeline-ui-metadata`:\r\n\r\n```\r\n@component\r\ndef visall(mlpipeline-ui-metadata: Output[Artifact]):\r\n import json\r\n \r\n metadata = {\r\n 'outputs' : [\r\n # Markdown that is hardcoded inline\r\n {\r\n 'storage': 'inline',\r\n 'source': '# Inline Markdown\\n[A link](https://www.kubeflow.org/)',\r\n 'type': 'markdown',\r\n }]\r\n }\r\n with open(mlpipeline-ui-metadata.path, 'w') as metadata_file:\r\n json.dump(metadata, metadata_file)\r\n```\r\n\r\nI slightly changed the name to something else so I can preview the content in output artifact. Here is the content:\r\n\r\n```\r\n{\"outputs\": [{\"storage\": \"inline\", \"source\": \"# Inline Markdown\\n[A link](https://www.kubeflow.org/)\", \"type\": \"markdown\"}]}\r\n```\r\n\r\n##### Source data sample for each visualization\r\n\r\nThere are 6 different V1 outputs in https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#available-output-viewers, but I am looking for some sample data in a csv file or other format to be able to store in the visualization content. See below:\r\n\r\n* Confusion matrix\r\n * `CONFUSION_MATRIX_CSV_FILE` and `vocab` from the doc example.\r\n* Markdown\r\n * A google cloud storage path for Markdown file (what kind of permission we need to provide?)\r\n* ROC curve\r\n * `roc_file` file\r\n* Table\r\n * `prediction_results` file\r\n* TensorBoard\r\n * `args.job_dir` path\r\n* Web app\r\n * A google cloud storage path for HTML file.\r\n\r\n\r\n##### Read file permission and test cases\r\n\r\nIf we use a gcs path or minio path for storing the visualization data, what kind of permission do we need to give to KFP for accessing those files? Can they be added to the KFP repo?\r\n\r\n",
"> I have a few questions regarding writing sample pipelines and visualizing on UI:\n> \n> #### Sample Pipeline\n> \n> Is there a sample pipeline which writes v1 `mlpipeline-ui-metadata` artifact in V2 compatible mode using Python? It will be great to have a pipeline to create both V1 and V2 visualization, so I can test the scenario where both V1 and V2 artifacts exist in single execution. Furthermore:\n\nIt's blocked by https://github.com/kubeflow/pipelines/issues/5831. Ideally, all v1 samples should be able to run in v2 compatible mode.\n\n> ##### mlpipeline-ui-metadata Artifact naming in python code\n> \n> I tried to write the following component, but was not successful because python doesn't accept `-` in parameter name `mlpipeline-ui-metadata`:\n> \n> ```\n> @component\n> def visall(mlpipeline-ui-metadata: Output[Artifact]):\n> import json\n> \n> metadata = {\n> 'outputs' : [\n> # Markdown that is hardcoded inline\n> {\n> 'storage': 'inline',\n> 'source': '# Inline Markdown\\n[A link](https://www.kubeflow.org/)',\n> 'type': 'markdown',\n> }]\n> }\n> with open(mlpipeline-ui-metadata.path, 'w') as metadata_file:\n> json.dump(metadata, metadata_file)\n> ```\n> \n> I slightly changed the name to something else so I can preview the content in output artifact. Here is the content:\n> \n> ```\n> {\"outputs\": [{\"storage\": \"inline\", \"source\": \"# Inline Markdown\\n[A link](https://www.kubeflow.org/)\", \"type\": \"markdown\"}]}\n> ```\n\nTraditionally, KFP v1 sanitizes artifact names by turning all letters to lower case and all non letter chars to `-`. Therefore, you don't need to match the exact name in python.\n\nI think it's worth considering letting UI do the name sanitization for this specific artifact name if sdk v2 doesn't.\n\n> ##### Source data sample for each visualization\n> \n> There are 6 different V1 outputs in https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#available-output-viewers, but I am looking for some sample data in a csv file or other format to be able to store in the visualization content. See below:\n> \n> * Confusion matrix\n> * `CONFUSION_MATRIX_CSV_FILE` and `vocab` from the doc example.\n> * Markdown\n> * A google cloud storage path for Markdown file (what kind of permission we need to provide?)\n> * ROC curve\n> * `roc_file` file\n> * Table\n> * `prediction_results` file\n> * TensorBoard\n> * `args.job_dir` path\n> * Web app\n> * A google cloud storage path for HTML file.\n\nNot quite following, do you want samples for these cases?\n\n> ##### Read file permission and test cases\n> \n> If we use a gcs path or minio path for storing the visualization data, what kind of permission do we need to give to KFP for accessing those files? Can they be added to the KFP repo?\n\nKFP UI server fetches these artifacts, by default we already mount MinIO secrets on it. On GCP it gets cluster default service account permission. So you don't need to worry about permission, users are responsible for setting them in their cluster.\n\n",
"Thank you @Bobgy for the answers!\r\n\r\n> It's blocked by #5831. Ideally, all v1 samples should be able to run in v2 compatible mode.\r\n\r\n So we cannot create python component with `-` parameter name, and cannot create yaml component with space yet. Can we use lightweight python component approach to create v1 mlpipeline_ui_metadata in v2 compatible mode? https://github.com/kubeflow/pipelines/blob/master/samples/core/lightweight_component/lightweight_component.ipynb\r\n\r\nOr is there any other approach to create a pipeline while waiting for the blocking issue to resolve?\r\n\r\n\r\n> \r\n> Traditionally, KFP v1 sanitizes artifact names by turning all letters to lower case and all non letter chars to `-`. Therefore, you don't need to match the exact name in python.\r\n> \r\n> I think it's worth considering letting UI do the name sanitization for this specific artifact name if sdk v2 doesn't.\r\n> \r\n\r\nI would like to learn more about `sanitization from UI` approach: Does this logic already handle the sanitization logic on SDK side? https://github.com/kubeflow/pipelines/pull/5832/files#diff-6e6c99d682c8b3b8d586a8c50d381dee529c0c58622316532c6574c4f511b655\r\n\r\nAs a result, how does user define a component in v1 fashion but is runnable on v2 compatible mode? More specifically, how does user define a `mlpipeline-ui-metadata` parameter in python? Currently I tried this component definition `def visall(mlpipeline_ui_metadata: Output[Artifact])` but SDK fails with `TypeError: 'ContainerOp' object is not callable`.\r\n\r\n\r\n> Not quite following, do you want samples for these cases?\r\n> \r\n\r\nSorry for the confusion, yes I want to have a simplified sample for these output viewer case. For example: we can define a simple confusion matrix with only 2 labels, but with actual `csv file` and `vocab` living in KFP repo, or hardcoded confusion matrix values as an array in python code. so they are convenient to use. Because these placeholders are referenced in kubeflow doc: https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#available-output-viewers.\r\n\r\nIs there any existing sample which can achieve this goal? I found this file https://github.com/kubeflow/pipelines/blob/master/components/local/confusion_matrix/component.yaml but was stuck at trying to create a `GCSPath` object for `Predictions` parameter.\r\n\r\n \r\n> KFP UI server fetches these artifacts, by default we already mount MinIO secrets on it. On GCP it gets cluster default service account permission. So you don't need to worry about permission, users are responsible for setting them in their cluster.\r\n\r\nSounds good, thank you for clarifying !",
"@zijianjoy https://www.github.com/kubeflow/pipelines/tree/f3e368410e2d29cf880cd6debc95572b636dd208/samples%2Fcore%2Fvisualization%2Ftensorboard_minio.py\n\nYou can adapt this example first. The only way to write a visualization component for both v1 and v2 compatible right now is to write a yaml component with artifact name `mlpipeline-ui-metadata` (it doesn't have spaces in the name).\n\nAnother way for lightweight components is to return the json metadata as NamedTuple. See example https://github.com/kubeflow/pipelines/blob/master/samples/core/lightweight_component/lightweight_component.ipynb.\nI think it also works for v2 compatible, but you may need to confirm.\n\nNo, I don't have any existing examples. Existing examples are all around a specific use-case. It would be very helpful building an example with all the available visualizations simply hard coded.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-05-18T12:24:08 | 2022-03-02T21:50:58 | 2022-03-02T21:50:58 |
CONTRIBUTOR
| null |
Support v1 visualizations:
https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/
Changes needed:
* [x] Doc: update https://kubeflow.org/docs/components/pipelines/sdk/output-viewer/ according to https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/
* [x] https://github.com/kubeflow/pipelines/pull/5832 SDK: remove hacks for mlpipeline-ui-metadata artifact in v2 compatible mode -- result: we should be able to see mlpipeline-ui-metadata as a usual artifact in MLMD.
* [x] UI: also supports visualizing mlpipeline-ui-metadata artifact from MLMD
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5666/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5664
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5664/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5664/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5664/events
|
https://github.com/kubeflow/pipelines/issues/5664
| 894,161,142 |
MDU6SXNzdWU4OTQxNjExNDI=
| 5,664 |
[sdk] Upload new SDK v2 JSON pipeline to Kubeflow 1.3
|
{
"login": "juangon",
"id": 1306127,
"node_id": "MDQ6VXNlcjEzMDYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juangon",
"html_url": "https://github.com/juangon",
"followers_url": "https://api.github.com/users/juangon/followers",
"following_url": "https://api.github.com/users/juangon/following{/other_user}",
"gists_url": "https://api.github.com/users/juangon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juangon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juangon/subscriptions",
"organizations_url": "https://api.github.com/users/juangon/orgs",
"repos_url": "https://api.github.com/users/juangon/repos",
"events_url": "https://api.github.com/users/juangon/events{/privacy}",
"received_events_url": "https://api.github.com/users/juangon/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-18T09:14:03 | 2022-03-03T04:05:38 | 2022-03-03T04:05:38 |
NONE
| null |
### Environment
* KFP version:
* KFP SDK version: 1.6.0.rc0
* All dependencies version:
### Steps to reproduce
Within a Kubeflow Jupyter notebook, create a v2 pipeline. It seems a JSON file should be produced when compiling with the Compiler().compile method.
Upload the JSON file to the Kubeflow 1.3 pipeline UI as a new pipeline.
Kubeflow will then return an error.
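For reference, a minimal sketch of the kind of compile step described above (assuming the kfp 1.6.0rc0 v2 compiler; the component and names are illustrative, not from this report):
```
import kfp
from kfp import components
from kfp.v2 import compiler

# Hypothetical lightweight component, only here to make the sketch self-contained.
def say_hello(name: str) -> str:
    return 'Hello, ' + name

hello_op = components.create_component_from_func(say_hello)

@kfp.dsl.pipeline(name='hello-pipeline')
def hello_pipeline(name: str = 'KFP'):
    hello_op(name=name)

# The v2 compiler writes a JSON pipeline spec, which is the file uploaded to the UI.
compiler.Compiler().compile(
    pipeline_func=hello_pipeline,
    package_path='pipeline.json',
)
```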
### Expected result
Should upload json pipeline file successfully
### Materials and Reference
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5664/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5663
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5663/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5663/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5663/events
|
https://github.com/kubeflow/pipelines/issues/5663
| 893,904,867 |
MDU6SXNzdWU4OTM5MDQ4Njc=
| 5,663 |
[marketplace] Kubeflow Pipelines doesn't work on Kubernetes 1.19 or above
|
{
"login": "revathijay",
"id": 13513068,
"node_id": "MDQ6VXNlcjEzNTEzMDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/13513068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/revathijay",
"html_url": "https://github.com/revathijay",
"followers_url": "https://api.github.com/users/revathijay/followers",
"following_url": "https://api.github.com/users/revathijay/following{/other_user}",
"gists_url": "https://api.github.com/users/revathijay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/revathijay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/revathijay/subscriptions",
"organizations_url": "https://api.github.com/users/revathijay/orgs",
"repos_url": "https://api.github.com/users/revathijay/repos",
"events_url": "https://api.github.com/users/revathijay/events{/privacy}",
"received_events_url": "https://api.github.com/users/revathijay/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Do any other pipelines run successfully?",
"I had the same issue (with a custom pipeline), this started to happen after i launched a large number of runs and i had to stop the workloads from the GKE panel because the Kubeflow Pipelines dashboard wasnt responding.\r\n\r\nRight now i cant run any kind of pipeline, not even the examples",
"I tried to reinstall the KFP using GCP Marketplace and didnt worked, i tried to delete the cluster and recreate everything from scratch and the KFP doesnt work anymore (i've also deleted the artifacts from GCP Bucket). Do you have any clues @Ark-kun ?\r\n\r\nI've never experienced this issue before",
"Hello @cabjr , Kubeflow Pipelines doesn't work on Kubernetes 1.19 or above, would you like to confirm the k8s version of your Kubeflow cluster? If it is the case, you might need to create a new cluster with K8s 1.18, and deploy Kubeflow pipelines on it.\r\n\r\nReference: https://github.com/kubeflow/pipelines/issues/1654",
"I finally got it working. it was because the creation of the cluster was blocked from the marketplace page. And I had to tweak the creation of the cluster manually with the rt set of options and the pipeline started working."
] | 2021-05-18T03:15:42 | 2021-05-21T03:10:04 | 2021-05-21T03:10:04 |
NONE
| null |
### What steps did you take
Tried out the default Kubeflow pipeline for the TFX taxi prediction model in GCP Marketplace, Kubeflow Pipelines version 1.4.1.
<!-- A clear and concise description of what the bug is.-->
### What happened:
The error message for the csvexamplegen step is as follows.
This step is in Error state with this message: failed to save outputs: Error response from daemon: No such container: b715c336e667d305429003d1ce1c9b795ced595aa76b2bf252c679d2947b3643
<img width="1414" alt="Screen Shot 2021-05-18 at 1 07 09 pm" src="https://user-images.githubusercontent.com/13513068/118584703-dccb5a00-b7da-11eb-82be-9c7f3812c6f1.png">
### What did you expect to happen:
The pipeline was getting executed end to end earlier. The main container itself for the first step executes successfully, but it's failing at the wait container stage.
### Environment:
<!-- Please fill in those that seem relevant. -->
* How do you deploy Kubeflow Pipelines (KFP)?
* GCP AI pipeline V(1.4.1)
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version:
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Anything else you would like to add:
<!-- Miscellaneous information that will assist in solving the issue.-->
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5663/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5659
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5659/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5659/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5659/events
|
https://github.com/kubeflow/pipelines/issues/5659
| 893,315,471 |
MDU6SXNzdWU4OTMzMTU0NzE=
| 5,659 |
[sdk] BadRequestError: Request header error: there is no user identity header.
|
{
"login": "HelmiGH",
"id": 46866583,
"node_id": "MDQ6VXNlcjQ2ODY2NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/46866583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HelmiGH",
"html_url": "https://github.com/HelmiGH",
"followers_url": "https://api.github.com/users/HelmiGH/followers",
"following_url": "https://api.github.com/users/HelmiGH/following{/other_user}",
"gists_url": "https://api.github.com/users/HelmiGH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HelmiGH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HelmiGH/subscriptions",
"organizations_url": "https://api.github.com/users/HelmiGH/orgs",
"repos_url": "https://api.github.com/users/HelmiGH/repos",
"events_url": "https://api.github.com/users/HelmiGH/events{/privacy}",
"received_events_url": "https://api.github.com/users/HelmiGH/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-17T13:08:33 | 2022-03-03T04:05:36 | 2022-03-03T04:05:36 |
NONE
| null |
### Environment
* KFP version:
* KFP SDK version: 1.0.4
* All dependencies version:
kfp (1.4.0)
kfp-pipeline-spec (0.1.7)
kfp-server-api (1.4.1)
### Steps to reproduce
```
import kfp

client = kfp.Client()
run_result = client.create_run_from_pipeline_func(ml_pipeline,
experiment_name=experiment_name,
run_name=run_name,
namespace="internship",
arguments=arguments)
```
I compiled the pipeline and I want to create a run directly from the terminal by running `python script.py`.
I was able to create a run in a Jupyter notebook inside the notebook server by adding an Envoy filter, but I couldn't make it work from the terminal.
But I got
```
HTTP response headers: HTTPHeaderDict({'Audit-Id': '94ed6020-ba67-4a74-9f5c-d3dee5c77cfc', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Mon, 17 May 2021 10:21:24 GMT', 'Trailer': 'Grpc-Trailer-Content-Type', 'Transfer-Encoding': 'chunked'})
HTTP response body: {"error":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","message":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","code":10,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Request header error: there is no user identity header.","error_details":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header."}]}
```
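For reference, in multi-user deployments the client usually has to carry the session cookie (or another identity header) explicitly when it runs outside the notebook; a minimal sketch with placeholder values (assumed, not taken from this report):
```
import kfp

# Placeholder values; the cookie is typically copied from the browser after logging in.
host = 'http://<istio-ingressgateway-or-alb-address>/pipeline'
cookies = 'authservice_session=<session-cookie>'

client = kfp.Client(host=host, cookies=cookies)
print(client.list_experiments(namespace='internship'))
```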
### Expected result
to get a link to the pipeline ui to see the run
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5659/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5659/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5656
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5656/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5656/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5656/events
|
https://github.com/kubeflow/pipelines/issues/5656
| 892,980,876 |
MDU6SXNzdWU4OTI5ODA4NzY=
| 5,656 |
[feature] consistently support object stores
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] |
open
| false | null |
[] | null |
[
"Sorry for the slow response on this, but I can confirm that `go-cloud` supports IAM Roles for Service Accounts in order to access objects on S3.\r\n\r\nSupport was added with this PR: https://github.com/google/go-cloud/pull/2773\r\n\r\nI've also tested it independently. The IAM role needs to be attached to a ServiceAccount via an annotation as follows:\r\n\r\n```\r\napiVersion: v1\r\nkind: ServiceAccount\r\nmetadata:\r\n name: go-cloud-test\r\n namespace: mynamespace\r\n annotations:\r\n eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/go-cloud-test\r\n```\r\n\r\nAny Pod that is started with this ServiceAccount will then inherit the role. (`go-cloud` uses the `aws-cli` in order to resolve the attached role into a temporary session). As long as the role `arn:aws:iam::123456789012:role/go-cloud-test` has the necessary policies and trust relationships attached to it, the following psuedo-deployment should work out of the box:\r\n\r\n\r\n```\r\napiVersion: apps/v1\r\nkind: Deployment\r\nmetadata:\r\n name: go-cloud-test\r\n namespace: mynamespace\r\n labels:\r\n app: go-cloud-test\r\nspec:\r\n replicas: 1\r\n selector:\r\n matchLabels:\r\n app: go-cloud-test\r\n template:\r\n metadata:\r\n labels:\r\n app: go-cloud-test\r\n spec:\r\n serviceAccount: go-cloud-test\r\n containers:\r\n - name: go-cloud-test\r\n image: my/testimage\r\n workingDir: /go/src/go-cloud/samples/gocdk-blob #see https://github.com/google/go-cloud/tree/v0.23.0/samples/gocdk-blob\r\n command: [\"/bin/sh\",\"-c\"]\r\n args: [\"go run main.go download s3://my-test-bucket-4134451 hello.txt > foo.txt]\r\n```\r\n\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"/lifecycle freeze"
] | 2021-05-17T06:48:46 | 2022-04-26T15:17:17 | null |
CONTRIBUTOR
| null |
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
/area backend
<!-- /area sdk -->
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
<!-- Provide a description of this feature and the user experience. -->
Consistently support different object stores (MinIO, S3, GCS, Azure Blob, ...) in KFP, whether it's for UI preview / visualization, metrics, or pipeline tasks.
Propose to unify on the Go CDK: https://gocloud.dev/.
Changes:
1. In https://bit.ly/kfp-v2-compatible mode, KFP pipeline tasks will use Go CDK to upload/download artifacts in pipeline containers.
2. Implement support for different artifact stores in KFP API server, also make it deployable standalone in user namespaces.
3. Remove artifact API implementation in KFP UI.
### What is the use case or pain point?
<!-- It helps us understand the benefit of this feature for your use case. -->
1. Reduce tech debt
The KFP UI uses the MinIO JS client, GCS client, etc. to access some object stores.
The KFP API server uses the MinIO Go client and therefore can only access MinIO.
Argo Workflows supports several types of object stores natively too (but not sure what it uses).
The difference in implementation makes it hard to achieve the same level of support for different object stores across KFP features. It also duplicates the effort of maintaining object-store access.
e.g. KFP v1 metrics are only supported with MinIO, because the KFP API server only supports MinIO.
2. Consistently support object store features:
e.g. IRSA for S3: https://github.com/kubeflow/pipelines/issues/3405#issuecomment-841474419
or workload identity for GCS (needed an upgrade for argo)
### Is there a workaround currently?
<!-- Without this feature, how do you accomplish your task today? -->
No
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5656/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5656/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5655
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5655/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5655/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5655/events
|
https://github.com/kubeflow/pipelines/issues/5655
| 892,809,767 |
MDU6SXNzdWU4OTI4MDk3Njc=
| 5,655 |
Failed to start Tensorboard app
|
{
"login": "rylynchen",
"id": 1971286,
"node_id": "MDQ6VXNlcjE5NzEyODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1971286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rylynchen",
"html_url": "https://github.com/rylynchen",
"followers_url": "https://api.github.com/users/rylynchen/followers",
"following_url": "https://api.github.com/users/rylynchen/following{/other_user}",
"gists_url": "https://api.github.com/users/rylynchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rylynchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rylynchen/subscriptions",
"organizations_url": "https://api.github.com/users/rylynchen/orgs",
"repos_url": "https://api.github.com/users/rylynchen/repos",
"events_url": "https://api.github.com/users/rylynchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/rylynchen/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"You can follow the `deploy ml-pipeline-ui to support tensorboard viewer` in https://github.com/kubeflow/pipelines/blob/master/docs/config/volume-support.md to mount this volume for tensorboard. Note that this feature is in alpha phase, so it might not be stable.",
"> You can follow the `deploy ml-pipeline-ui to support tensorboard viewer` in https://github.com/kubeflow/pipelines/blob/master/docs/config/volume-support.md to mount this volume for tensorboard. Note that this feature is in alpha phase, so it might not be stable.\r\n\r\nThanks for your reply!"
] | 2021-05-17T01:18:53 | 2021-05-21T10:33:34 | 2021-05-21T10:33:34 |
NONE
| null |
### What steps did you take
Hi, I created a volume named 'nfs' and mounted it to the training pod in the pipeline. The training code generates a TensorBoard dir in the mounted dir. But when I go to look at the TensorBoard view, I get this error:

pipeline code:
```
from kfp import dsl
from kubernetes.client import V1Volume

def train_op(volume, image, cmd):
    # mount volume
nfsVolume = dsl.PipelineVolume(volume=V1Volume(name="nfs",nfs={"server":"xxx","path":"/"}))
svol = dsl.PipelineVolume(name='shm-vol', empty_dir={'medium': 'Memory'})
op = dsl.ContainerOp(
name="train",
image=image,
arguments=["/bin/sh", "-c", "%s" % (cmd)],
pvolumes={
"/output": nfsVolume,
'/dev/shm': svol,
},
output_artifact_paths={
'mlpipeline-ui-metadata': '/mlpipeline-ui-metadata.json',
},
)
op.container.set_cpu_limit("0.5")
op.container.set_gpu_limit("2")
op.container.set_memory_limit("50G")
return op
```
My training code generate tensorboard data in `/output/runs`, and the create metadata code:
```
import json

metadata = {
'outputs' : [
{
'type': 'tensorboard',
'source': 'volume://nfs/runs',
}
]
}
with open('/mlpipeline-ui-metadata.json', 'w') as f:
print(metadata)
json.dump(metadata, f)
```
### What happened:

### What did you expect to happen:
Tensorboard view show data
### Environment:
* pipeline version: 1.0.0
Does anyone have any good ideas?
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5655/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5653
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5653/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5653/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5653/events
|
https://github.com/kubeflow/pipelines/issues/5653
| 892,614,984 |
MDU6SXNzdWU4OTI2MTQ5ODQ=
| 5,653 |
[backend] <Bug Name>
|
{
"login": "Attila111111",
"id": 73070542,
"node_id": "MDQ6VXNlcjczMDcwNTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/73070542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Attila111111",
"html_url": "https://github.com/Attila111111",
"followers_url": "https://api.github.com/users/Attila111111/followers",
"following_url": "https://api.github.com/users/Attila111111/following{/other_user}",
"gists_url": "https://api.github.com/users/Attila111111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Attila111111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Attila111111/subscriptions",
"organizations_url": "https://api.github.com/users/Attila111111/orgs",
"repos_url": "https://api.github.com/users/Attila111111/repos",
"events_url": "https://api.github.com/users/Attila111111/events{/privacy}",
"received_events_url": "https://api.github.com/users/Attila111111/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[] | 2021-05-16T07:28:11 | 2021-05-18T08:28:46 | 2021-05-18T08:28:46 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version:
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
<!-- What should the correct behavior be? -->
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5653/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5651
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5651/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5651/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5651/events
|
https://github.com/kubeflow/pipelines/issues/5651
| 892,457,235 |
MDU6SXNzdWU4OTI0NTcyMzU=
| 5,651 |
[feature] Add PVC File Browser to Viewer CRD
|
{
"login": "DavidSpek",
"id": 28541758,
"node_id": "MDQ6VXNlcjI4NTQxNzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/28541758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidSpek",
"html_url": "https://github.com/DavidSpek",
"followers_url": "https://api.github.com/users/DavidSpek/followers",
"following_url": "https://api.github.com/users/DavidSpek/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidSpek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidSpek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidSpek/subscriptions",
"organizations_url": "https://api.github.com/users/DavidSpek/orgs",
"repos_url": "https://api.github.com/users/DavidSpek/repos",
"events_url": "https://api.github.com/users/DavidSpek/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidSpek/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"👍🏽\r\nThis will come in very handy for my users!!",
"This would also be very helpful for us. Especially when trying to figure out which PVCs can be safely cleaned up.",
"Looks great! Really handy feature and very much in line with the sort of functionality our our clients are looking for. What would need happen in order to implement this?",
"This approach would be very useful for us. It would also help with a related functionality; ultimately artifacts that researchers create within these notebooks will need to be extracted out and sent to a vetting team to assess possible disclosures. This strategy would make it really easy for us to build a workflow for that use-case.",
"Yeah this would definitely be helpful. Having better support for this would make managing PVCs much easier, which in turn would make managing data generated by pipelines much easier. Would be great for use cases that need some mix of automation and interactivity. ",
"Hi folks, I'm sending out a proposal for building a generic viewer, see https://github.com/kubeflow/pipelines/issues/5681.\r\nCurious about your thoughts there.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-15T14:34:04 | 2022-03-03T06:05:09 | 2022-03-03T06:05:09 |
CONTRIBUTOR
| null |
### Feature Area
/area backend
### What feature would you like to see?
Add a `PVC Viewer`/`File Browser` spec to the Viewer controller.
### What is the use case or pain point?
This feature would allow people to launch a "File Browser" pod, which allows them to easily drag and drop files into a PVC or download files and folders present in a PVC. For KFP users, this can be useful if a pipeline step produces some form of data in a PVC that the user would like to easily download. The same CR would then be used to extend the Volumes Web App with a button that allows people to start a "File Browser" pod to easily upload and download file to and from their PVCs (most likely for the PVCs used with notebook servers). I currently have built a dedicated controller for this, but given that the pipelines viewer CRD is built to be extendable I thought it would make sense to add it to an existing controller. A demo of this functionality can be seen below.

### Is there a workaround currently?
There currently is no easy way for users to upload and download data to and from their PVCs, which is a large hurdle for people that have local data that they would like to use within their notebooks or pipelines.
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5651/reactions",
"total_count": 12,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5651/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5640
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5640/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5640/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5640/events
|
https://github.com/kubeflow/pipelines/issues/5640
| 890,462,450 |
MDU6SXNzdWU4OTA0NjI0NTA=
| 5,640 |
No experiment gets created by kfp.Client()
|
{
"login": "farshadsm",
"id": 25828445,
"node_id": "MDQ6VXNlcjI1ODI4NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25828445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farshadsm",
"html_url": "https://github.com/farshadsm",
"followers_url": "https://api.github.com/users/farshadsm/followers",
"following_url": "https://api.github.com/users/farshadsm/following{/other_user}",
"gists_url": "https://api.github.com/users/farshadsm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farshadsm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farshadsm/subscriptions",
"organizations_url": "https://api.github.com/users/farshadsm/orgs",
"repos_url": "https://api.github.com/users/farshadsm/repos",
"events_url": "https://api.github.com/users/farshadsm/events{/privacy}",
"received_events_url": "https://api.github.com/users/farshadsm/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1124118036,
"node_id": "MDU6TGFiZWwxMTI0MTE4MDM2",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/client",
"name": "area/sdk/client",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
open
| false | null |
[] | null |
[
"Can you please try some other `Client` methods like list_runs?\r\nThere might be auth error.",
"I executed `client.list_runs()`, where I defined `client` object in my first message, and I got the following result:\r\n`{'experiments': None, 'next_page_token': None, 'total_size': None}`\r\n\r\nWhen I executed `client.create_experiment(name=\"kubeflow\")`, no valid experiment was created. I checked the `Experiment details`, and I found following error message:\r\n`\"error\":\"Failed to authorize the request.: Failed to authorize with the experiment ID.: Failed to get namespace from experiment ID.: ResourceNotFoundError: Experiment None not found.\"`\r\n\r\nI also think the problem is with authentication. I followed https://www.kubeflow.org/docs/distributions/aws/pipeline/ to authenticate Kubeflow Pipeline using SDK inside cluster. But it seems it didn't work out properly for me.",
"@PatrickXYS Hello Patrick, this looks like authentication issue, would you like to take a look?",
"I think I need to follow the instructions in this url, https://www.kubeflow.org/docs/distributions/aws/authentication/authentication/, to provide TLS authentication for kubeflow. I'll try it and will let you know the outcome.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm not working on this issue since unassign myself",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any progress on this ?",
"> I think I need to follow the instructions in this url, https://www.kubeflow.org/docs/distributions/aws/authentication/authentication/, to provide TLS authentication for kubeflow. I'll try it and will let you know the outcome.\r\n\r\nIt looks like that page is no longer maintained. Did you figure out how to enable TLS authentication on Kubeflow?"
] | 2021-05-12T20:33:19 | 2022-10-06T19:44:08 | null |
NONE
| null |
Hi,
I've deployed Kubeflow onto an AWS EKS cluster. I tried to create and run a Kubeflow pipeline using the notebook UI inside the Kubeflow cluster. However, no experiment gets created. I followed the example in https://www.kubeflow.org/docs/components/pipelines/sdk/build-pipeline/ to build the pipeline. Furthermore, I followed the instructions in https://www.kubeflow.org/docs/distributions/aws/pipeline/ to authenticate Kubeflow Pipelines using the SDK inside the cluster. Below, you can find how I did the authentication:
```
import kfp

authservice_session = '<cookie I obtained from the notebook url, which is http://localhost:8080/_/jupyter/>'
ALB_ADDRESS = '<value of the Address field obtained by executing "kubectl get ingress -n istio-system">'
HOST = ALB_ADDRESS + '/pipeline'
client = kfp.Client(host=HOST, cookies=authservice_session)
```
When I executed the following command, I got an error.
```
client.create_run_from_pipeline_package(
    pipeline_file='pipeline.yaml',
    arguments={
        'url': 'https://storage.googleapis.com/ml-pipeline-playground/iris-csv-files.tar.gz'
    })
```
The error is:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-56afc4f05e24> in <module>
2 pipeline_file='pipeline.yaml',
3 arguments={
----> 4 'url': 'https://storage.googleapis.com/ml-pipeline-playground/iris-csv-files.tar.gz'
5 })
/usr/local/lib/python3.6/dist-packages/kfp/_client.py in create_run_from_pipeline_package(self, pipeline_file, arguments, run_name, experiment_name, namespace)
739 '%Y-%m-%d %H-%M-%S'))
740 experiment = self.create_experiment(name=experiment_name, namespace=namespace)
--> 741 run_info = self.run_pipeline(experiment.id, run_name, pipeline_file, arguments)
742 return RunPipelineResult(self, run_info)
743
/usr/local/lib/python3.6/dist-packages/kfp/_client.py in run_pipeline(self, experiment_id, job_name, pipeline_package_path, params, pipeline_id, version_id)
548 import IPython
549 html = ('<a href="%s/#/runs/details/%s" target="_blank" >Run details</a>.'
--> 550 % (self._get_url_prefix(), response.run.id))
551 IPython.display.display(IPython.display.HTML(html))
552 return response.run
AttributeError: 'NoneType' object has no attribute 'id'
```
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5640/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5638
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5638/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5638/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5638/events
|
https://github.com/kubeflow/pipelines/issues/5638
| 889,576,543 |
MDU6SXNzdWU4ODk1NzY1NDM=
| 5,638 |
[frontend] File format / path of the TFX artifacts from TFDV should be updated
|
{
"login": "jiyongjung0",
"id": 869152,
"node_id": "MDQ6VXNlcjg2OTE1Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/869152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiyongjung0",
"html_url": "https://github.com/jiyongjung0",
"followers_url": "https://api.github.com/users/jiyongjung0/followers",
"following_url": "https://api.github.com/users/jiyongjung0/following{/other_user}",
"gists_url": "https://api.github.com/users/jiyongjung0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiyongjung0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiyongjung0/subscriptions",
"organizations_url": "https://api.github.com/users/jiyongjung0/orgs",
"repos_url": "https://api.github.com/users/jiyongjung0/repos",
"events_url": "https://api.github.com/users/jiyongjung0/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiyongjung0/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"FYI, @1025kb ",
"Still same",
"@zijianjoy did you already fix this in your recent PR?",
"Yes the change has been applied in https://github.com/kubeflow/pipelines/pull/6061/files#diff-388538ba7580c2729c010006b01837a9dbf9a04702c185970d025ab5dbcd22ba."
] | 2021-05-12T03:32:12 | 2021-07-19T06:11:37 | 2021-07-19T06:11:37 |
CONTRIBUTOR
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)? - GCP Cloud AI Pipelines
* KFP version: 1.4.1
### Steps to reproduce
1. Use TFX taxi template to create a pipeline
2. Add StatisticsGen component and run the pipeline.(Step 5)
3. Click "Visualizations" of `StatisticsGen` component after the run finished successfully.
- This will result in `OSError: Invalid input path gs://jiyongjung-gke-kubeflowpipelines-default/tfx_pipeline_output/my_pipeline/StatisticsGen/statistics/257/eval/stats_tfrecord.`.
### Expected result
Visualization of TFDV analysis.
### Materials and Reference
TFX changed the file path and format of several artifact types recently. For example, [cl for statistics artifact](https://github.com/tensorflow/tfx/commit/2a1a00a675760e6c96887aa59ba67a91e0f8a4cc). But it seems like it is not propagated to KFP. [Related code](https://github.com/kubeflow/pipelines/blob/d9c019641ef9ebd78db60cdb78ea29b0d9933008/frontend/src/lib/OutputArtifactLoader.ts#L276-L327).
TFX started to record artifacts' [generated version as custom properties](https://github.com/tensorflow/tfx/blob/c78ed9678f790a55b444d8f4c57f584f3b3a4d53/tfx/types/artifact_utils.py#L36), so we might be able to use it to support both versions.
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5638/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5638/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5635
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5635/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5635/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5635/events
|
https://github.com/kubeflow/pipelines/issues/5635
| 887,835,576 |
MDU6SXNzdWU4ODc4MzU1NzY=
| 5,635 |
New kfp sdk release (1.5.0)
|
{
"login": "HenryW95",
"id": 19846755,
"node_id": "MDQ6VXNlcjE5ODQ2NzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/19846755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HenryW95",
"html_url": "https://github.com/HenryW95",
"followers_url": "https://api.github.com/users/HenryW95/followers",
"following_url": "https://api.github.com/users/HenryW95/following{/other_user}",
"gists_url": "https://api.github.com/users/HenryW95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HenryW95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HenryW95/subscriptions",
"organizations_url": "https://api.github.com/users/HenryW95/orgs",
"repos_url": "https://api.github.com/users/HenryW95/repos",
"events_url": "https://api.github.com/users/HenryW95/events{/privacy}",
"received_events_url": "https://api.github.com/users/HenryW95/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"\"We're planning an SDK release this week. Probably have an RC version out mid-week at least.\"\r\nFrom: https://github.com/kubeflow/pipelines/issues/5254#issuecomment-835991598",
"Thank you! I see it's been released now"
] | 2021-05-11T16:40:01 | 2021-05-25T13:41:11 | 2021-05-25T13:41:11 |
NONE
| null |
The new use_k8s_secret function in the kfp_server_api sdk is very useful but it's not yet released in the kfp library (https://pypi.org/project/kfp/). Is there a plan for when the next release will be? Thanks much!
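For context, a minimal sketch of how this helper is typically applied once it is available in the SDK (the secret and env var names are illustrative assumptions):
```
import kfp
from kfp import dsl
from kfp.onprem import use_k8s_secret

def echo_op():
    return dsl.ContainerOp(
        name='echo',
        image='alpine',
        command=['sh', '-c', 'echo "$MY_PASSWORD"'],
    )

@dsl.pipeline(name='secret-example')
def secret_pipeline():
    # Exposes the key 'password' of the k8s secret 'my-secret' as the env var MY_PASSWORD.
    echo_op().apply(use_k8s_secret(
        secret_name='my-secret',
        k8s_secret_key_to_env={'password': 'MY_PASSWORD'},
    ))
```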
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5635/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5635/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5627
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5627/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5627/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5627/events
|
https://github.com/kubeflow/pipelines/issues/5627
| 885,304,899 |
MDU6SXNzdWU4ODUzMDQ4OTk=
| 5,627 |
[05/10/2021] Presubmit failure: kubeflow-pipelines-tfx-python36
|
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619528,
"node_id": "MDU6TGFiZWw5MzA2MTk1Mjg=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/testing",
"name": "area/testing",
"color": "00daff",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"The file was deleted by https://github.com/tensorflow/tfx/pull/3700."
] | 2021-05-10T23:01:33 | 2021-05-11T06:10:42 | 2021-05-11T06:10:42 |
COLLABORATOR
| null |
https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5604/kubeflow-pipelines-tfx-python36/1391884823848554496
```
+ cd /home/prow/go/src/github.com/kubeflow/pipelines/tfx/tfx/examples/chicago_taxi_pipeline
+ python3 taxi_pipeline_kubeflow_gcp_test.py
python3: can't open file 'taxi_pipeline_kubeflow_gcp_test.py': [Errno 2] No such file or directory
```
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5627/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5620
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5620/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5620/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5620/events
|
https://github.com/kubeflow/pipelines/issues/5620
| 881,550,704 |
MDU6SXNzdWU4ODE1NTA3MDQ=
| 5,620 |
ERROR:Exception when calling CoreV1Api->create_namespaced_pod: (404) Reason: Not Found
|
{
"login": "jocelynbaduria",
"id": 62075076,
"node_id": "MDQ6VXNlcjYyMDc1MDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/62075076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jocelynbaduria",
"html_url": "https://github.com/jocelynbaduria",
"followers_url": "https://api.github.com/users/jocelynbaduria/followers",
"following_url": "https://api.github.com/users/jocelynbaduria/following{/other_user}",
"gists_url": "https://api.github.com/users/jocelynbaduria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jocelynbaduria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jocelynbaduria/subscriptions",
"organizations_url": "https://api.github.com/users/jocelynbaduria/orgs",
"repos_url": "https://api.github.com/users/jocelynbaduria/repos",
"events_url": "https://api.github.com/users/jocelynbaduria/events{/privacy}",
"received_events_url": "https://api.github.com/users/jocelynbaduria/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"@Ark-kun Hello Alexey, would you like to look at this example? Thank you!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-09T01:01:13 | 2022-03-03T04:05:35 | 2022-03-03T04:05:35 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)? In GCP AI Platform
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version: kfp-1.0.0
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. Successfully installed docstring-parser-0.7.3 fire-0.4.0 kfp-1.4.0 kfp-pipeline-spec-0.1.7
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
This is throwing an error - 2021-05-07 22:16:27:ERROR:Exception when calling CoreV1Api->create_namespaced_pod: (404)
Reason: Not Found
Although I have configured it with kubectl in GCP.

-->
### Expected result
```python
# Target images for creating components should be added in GCP container registry.
PREPROCESS_IMG = 'gcr.io/%s/ms-coco/preprocess:latest' % PROJECT_NAME
TOKENIZE_IMG = 'gcr.io/%s/ms-coco/tokenize:latest' % PROJECT_NAME
TRAIN_IMG = 'gcr.io/%s/ms-coco/train:latest' % PROJECT_NAME
PREDICT_IMG = 'gcr.io/%s/ms-coco/predict:latest' % PROJECT_NAME
```
<!-- What should the correct behavior be? -->
The container registry in GCP should be updated.
- Currently it is empty; the added image is not there.

### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
sample code - https://github.com/kubeflow/pipelines/blob/master/samples/contrib/image-captioning-gcp/Image%20Captioning%20TF%202.0.ipynb
I run it in GCP Kubeflow AI Platform notebook.
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5620/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5617
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5617/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5617/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5617/events
|
https://github.com/kubeflow/pipelines/issues/5617
| 880,533,520 |
MDU6SXNzdWU4ODA1MzM1MjA=
| 5,617 |
[pH] ask contributors use copyright header "The Kubeflow Authors"
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"presubmit can be implemented by using `addlicense` go CLI tool and then verify no changes have been made\r\n\r\nAlso we can recommend \"Copyright Inserter\" vscode plugin: https://marketplace.visualstudio.com/items?itemName=minherz.copyright-inserter",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-08T08:58:20 | 2022-03-03T06:05:17 | 2022-03-03T06:05:17 |
CONTRIBUTOR
| null |
After https://github.com/kubeflow/pipelines/pull/5587,
we should stick to "The Kubeflow Authors" copyright header.
But people may miss the new convention.
Proposal:
1. p0 add another PR checklist to educate contributors
2. p2 add presubmit (not sure how we can actually achieve that)
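For the presubmit idea, a minimal sketch of a check along the lines suggested in the comments (scan source files and fail when the agreed header string is missing); the file extensions and the exact header wording here are assumptions:

```python
# Minimal presubmit sketch: exit non-zero if a source file lacks the agreed header.
# The header string and the file extensions checked are assumptions.
import pathlib
import sys

HEADER = "The Kubeflow Authors"
EXTENSIONS = {".py", ".go", ".ts", ".tsx"}

def files_missing_header(root: str = ".") -> list:
    bad = []
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.suffix in EXTENSIONS:
            head = path.read_text(errors="ignore")[:2048]  # headers live at the top
            if HEADER not in head:
                bad.append(str(path))
    return bad

if __name__ == "__main__":
    offenders = files_missing_header()
    if offenders:
        print("Files missing the copyright header:")
        print("\n".join(offenders))
        sys.exit(1)
```

An equivalent approach is running `addlicense` (as suggested in the comments) and failing the check if it would modify any file.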
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5617/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5614
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5614/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5614/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5614/events
|
https://github.com/kubeflow/pipelines/issues/5614
| 880,327,800 |
MDU6SXNzdWU4ODAzMjc4MDA=
| 5,614 |
[05/07/2021] Presubmit failure: kubeflow-pipelines-tfx-python36
|
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619528,
"node_id": "MDU6TGFiZWw5MzA2MTk1Mjg=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/testing",
"name": "area/testing",
"color": "00daff",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Package installed:\r\n\r\nSuccessfully installed Babel-2.9.1 Flask-Babel-1.0.0 Flask-JWT-Extended-3.25.1 Flask-OpenID-1.2.5 Flask-SQLAlchemy-2.5.1 Mako-1.1.4 MarkupSafe-1.1.1 Send2Trash-1.5.0 WTForms-2.3.3 alembic-1.6.2 apache-airflow-2.0.2 apache-airflow-providers-ftp-1.1.0 apache-airflow-providers-http-1.1.1 apache-airflow-providers-imap-1.0.1 apache-airflow-providers-mysql-1.1.0 apache-airflow-providers-sqlite-1.0.2 apache-beam-2.29.0 apispec-3.3.2 argcomplete-1.12.3 argon2-cffi-20.1.0 astunparse-1.6.3 async-generator-1.10 avro-python3-1.9.2.1 backcall-0.2.0 bleach-3.3.0 blinker-1.4 cached-property-1.5.2 cattrs-1.0.0 clickclick-20.10.2 colorama-0.4.4 colorlog-5.0.1 commonmark-0.9.1 connexion-2.7.0 crcmod-1.7 croniter-0.3.37 cryptography-3.4.7 dataclasses-0.8 decorator-5.0.7 defusedxml-0.7.1 dill-0.3.1.1 dnspython-2.1.0 docker-4.4.4 docopt-0.6.2 docutils-0.17.1 email-validator-1.1.2 entrypoints-0.3 fastavro-1.4.0 fasteners-0.16 flask-1.1.2 flask-appbuilder-3.2.3 flask-caching-1.10.1 flask-login-0.4.1 flask-wtf-0.14.3 flatbuffers-1.12 future-0.18.2 gast-0.3.3 google-api-python-client-1.12.8 google-apitools-0.5.31 google-auth-httplib2-0.1.0 google-auth-oauthlib-0.4.4 google-cloud-aiplatform-0.7.1 google-cloud-bigquery-2.16.0 google-cloud-bigtable-1.7.0 google-cloud-datastore-1.15.3 google-cloud-dlp-1.0.0 google-cloud-language-1.3.0 google-cloud-pubsub-1.7.0 google-cloud-spanner-1.19.1 google-cloud-videointelligence-1.16.1 google-cloud-vision-1.0.0 google-pasta-0.2.0 graphviz-0.16 grpc-google-iam-v1-0.12.3 grpcio-1.37.1 grpcio-gcp-0.2.2 gunicorn-19.10.0 h5py-2.10.0 hdfs-2.6.0 httplib2-0.17.4 importlib-metadata-1.7.0 importlib-resources-1.5.0 inflection-0.5.1 ipykernel-5.5.4 ipython-7.16.1 ipython-genutils-0.2.0 ipywidgets-7.6.3 iso8601-0.1.14 isodate-0.6.0 itsdangerous-1.1.0 jedi-0.18.0 jinja2-2.11.3 joblib-0.14.1 jupyter-client-6.1.12 jupyter-core-4.7.1 jupyterlab-pygments-0.1.2 jupyterlab-widgets-1.0.0 keras-preprocessing-1.1.2 keras-tuner-1.0.1 kubernetes-11.0.0 lazy-object-proxy-1.6.0 lockfile-0.12.2 markdown-3.3.4 marshmallow-3.11.1 marshmallow-enum-1.5.1 marshmallow-oneofschema-2.1.0 marshmallow-sqlalchemy-0.23.1 mistune-0.8.4 ml-metadata-0.30.0 more-itertools-8.7.0 mysql-connector-python-8.0.22 mysqlclient-2.0.3 natsort-7.1.1 nbclient-0.5.3 nbconvert-6.0.7 nbformat-5.1.3 nest-asyncio-1.5.1 notebook-6.3.0 numpy-1.19.5 oauth2client-4.1.3 openapi-schema-validator-0.1.5 openapi-spec-validator-0.3.0 opt-einsum-3.3.0 pandas-1.1.5 pandocfilters-1.4.3 parso-0.8.2 pendulum-2.1.2 pep562-1.0 pexpect-4.8.0 pickleshare-0.7.5 pluggy-0.13.1 portpicker-1.3.1 prison-0.1.3 prometheus-client-0.10.1 prompt-toolkit-3.0.18 proto-plus-1.18.1 psutil-5.8.0 ptyprocess-0.7.0 py-1.10.0 pyarrow-2.0.0 pydot-1.4.2 pygments-2.9.0 pyjwt-1.7.1 pymongo-3.11.4 pytest-5.4.3 python-daemon-2.3.0 python-editor-1.0.4 python-nvd3-0.15.0 python-slugify-4.0.1 python3-openid-3.2.0 pytzdata-2020.1 pyzmq-22.0.3 rich-9.2.0 scikit-learn-0.24.2 scipy-1.5.4 setproctitle-1.2.2 sqlalchemy-1.3.24 sqlalchemy-jsonfield-1.0.0 sqlalchemy-utils-0.37.2 swagger-ui-bundle-0.0.8 tenacity-6.2.0 tensorboard-2.5.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.0 tensorflow-2.4.1 tensorflow-data-validation-0.30.0 tensorflow-estimator-2.4.0 tensorflow-hub-0.9.0 tensorflow-metadata-0.30.0 tensorflow-model-analysis-0.30.0 tensorflow-serving-api-2.4.1 tensorflow-transform-0.30.0 terminado-0.9.4 terminaltables-3.1.0 testpath-0.4.4 text-unidecode-1.3 tfx-bsl-0.30.0 tfx-dev-0.31.0.dev0 threadpoolctl-2.1.0 tornado-6.1 tqdm-4.60.0 traitlets-4.3.3 
typing-3.7.4.3 typing-extensions-3.7.4.3 unicodecsv-0.14.1 uritemplate-3.0.1 wcwidth-0.2.5 webencodings-0.5.1 werkzeug-1.0.1 widgetsnbextension-3.5.1",
"The test started to fail when we have `google-cloud-bigquery` 2.16.0 installed, and that is a result of https://github.com/tensorflow/tfx/pull/3644\r\nPrevious succeeded runs have `google-cloud-bigquery` 1.28.0. \r\n\r\nIf I'm not mistaken, the module `proto` from `AttributeError: module 'proto' has no attribute 'module'` should be from [proto-plus](https://pypi.org/project/proto-plus/), and we installed `proto-plus-1.18.1` per https://github.com/kubeflow/pipelines/issues/5614#issuecomment-835179329.\r\n\r\nLooking at [this code](https://github.com/googleapis/proto-plus-python/blob/e25dd7d65273a834f66d3be0445b88011103dc81/proto/__init__.py#L21), `proto.module` should exist. And this line has been there for a long time.\r\n\r\nSo now I suspect there was some other module or likely a variable named `proto` that shadowed this `proto` module package `proto-plus`.\r\n\r\nFor a temporary fix, we can pin to use `google-cloud-bigquery<2`: https://github.com/kubeflow/pipelines/pull/5616"
] | 2021-05-08T05:44:44 | 2021-05-08T10:57:07 | 2021-05-08T10:57:07 |
COLLABORATOR
| null |
https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5594/kubeflow-pipelines-tfx-python36/1390893518523731968
```
Traceback (most recent call last):
File "kubeflow_dag_runner_test.py", line 26, in <module>
from tfx.extensions.google_cloud_big_query.example_gen import component as big_query_example_gen_component
File "/usr/local/lib/python3.6/site-packages/tfx/extensions/google_cloud_big_query/example_gen/component.py", line 26, in <module>
from tfx.extensions.google_cloud_big_query.example_gen import executor
File "/usr/local/lib/python3.6/site-packages/tfx/extensions/google_cloud_big_query/example_gen/executor.py", line 26, in <module>
from google.cloud import bigquery
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery/__init__.py", line 35, in <module>
from google.cloud.bigquery.client import Client
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery/client.py", line 59, in <module>
from google.cloud.bigquery import _pandas_helpers
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery/_pandas_helpers.py", line 44, in <module>
from google.cloud.bigquery import schema
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery/schema.py", line 19, in <module>
from google.cloud.bigquery_v2 import types
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery_v2/__init__.py", line 19, in <module>
from .types.encryption_config import EncryptionConfiguration
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery_v2/types/__init__.py", line 18, in <module>
from .encryption_config import EncryptionConfiguration
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery_v2/types/encryption_config.py", line 24, in <module>
__protobuf__ = proto.module(
AttributeError: module 'proto' has no attribute 'module'
```
/area testing
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5614/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5606
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5606/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5606/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5606/events
|
https://github.com/kubeflow/pipelines/issues/5606
| 878,868,513 |
MDU6SXNzdWU4Nzg4Njg1MTM=
| 5,606 |
Error: persistentvolumeclaim "azure-managed-disk" not found
|
{
"login": "pangshengwei",
"id": 19318871,
"node_id": "MDQ6VXNlcjE5MzE4ODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19318871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pangshengwei",
"html_url": "https://github.com/pangshengwei",
"followers_url": "https://api.github.com/users/pangshengwei/followers",
"following_url": "https://api.github.com/users/pangshengwei/following{/other_user}",
"gists_url": "https://api.github.com/users/pangshengwei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pangshengwei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pangshengwei/subscriptions",
"organizations_url": "https://api.github.com/users/pangshengwei/orgs",
"repos_url": "https://api.github.com/users/pangshengwei/repos",
"events_url": "https://api.github.com/users/pangshengwei/events{/privacy}",
"received_events_url": "https://api.github.com/users/pangshengwei/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[] | 2021-05-07T12:00:19 | 2021-05-07T12:00:38 | 2021-05-07T12:00:38 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version:
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
<!-- What should the correct behavior be? -->
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5606/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5605
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5605/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5605/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5605/events
|
https://github.com/kubeflow/pipelines/issues/5605
| 878,867,414 |
MDU6SXNzdWU4Nzg4Njc0MTQ=
| 5,605 |
Error: persistentvolumeclaim "azure-managed-disk" not found
|
{
"login": "pangshengwei",
"id": 19318871,
"node_id": "MDQ6VXNlcjE5MzE4ODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19318871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pangshengwei",
"html_url": "https://github.com/pangshengwei",
"followers_url": "https://api.github.com/users/pangshengwei/followers",
"following_url": "https://api.github.com/users/pangshengwei/following{/other_user}",
"gists_url": "https://api.github.com/users/pangshengwei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pangshengwei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pangshengwei/subscriptions",
"organizations_url": "https://api.github.com/users/pangshengwei/orgs",
"repos_url": "https://api.github.com/users/pangshengwei/repos",
"events_url": "https://api.github.com/users/pangshengwei/events{/privacy}",
"received_events_url": "https://api.github.com/users/pangshengwei/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "aronchick",
"id": 51317,
"node_id": "MDQ6VXNlcjUxMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/51317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aronchick",
"html_url": "https://github.com/aronchick",
"followers_url": "https://api.github.com/users/aronchick/followers",
"following_url": "https://api.github.com/users/aronchick/following{/other_user}",
"gists_url": "https://api.github.com/users/aronchick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aronchick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aronchick/subscriptions",
"organizations_url": "https://api.github.com/users/aronchick/orgs",
"repos_url": "https://api.github.com/users/aronchick/repos",
"events_url": "https://api.github.com/users/aronchick/events{/privacy}",
"received_events_url": "https://api.github.com/users/aronchick/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "aronchick",
"id": 51317,
"node_id": "MDQ6VXNlcjUxMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/51317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aronchick",
"html_url": "https://github.com/aronchick",
"followers_url": "https://api.github.com/users/aronchick/followers",
"following_url": "https://api.github.com/users/aronchick/following{/other_user}",
"gists_url": "https://api.github.com/users/aronchick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aronchick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aronchick/subscriptions",
"organizations_url": "https://api.github.com/users/aronchick/orgs",
"repos_url": "https://api.github.com/users/aronchick/repos",
"events_url": "https://api.github.com/users/aronchick/events{/privacy}",
"received_events_url": "https://api.github.com/users/aronchick/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @aronchick \r\nDo you have a good contact for this issue? This is one of the azure tutorials",
"`kubectl delete pvc azure-managed-disk --namespace kubeflow`\r\n\r\nIf you look at the PVC's YAML you'll see it is created in namespace `kubeflow` which is not the default namespace and hence you aren't actually querying for.\r\n\r\nhttps://github.com/kubeflow/examples/blob/master/pipelines/azurepipeline/kubernetes/pvc.yaml#L5",
"In general the problem is that the PVC must exist in the same Kubernetes namespace as your Kubeflow pipeline. Just edit the PVC.yaml and change the namespace to wherever you are running your pipeline (probably the default namespace), then apply. ",
"/close",
"@berndverst: Closing this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5605#issuecomment-849764877):\n\n>/close\r\n>/woof\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>"
] | 2021-05-07T11:59:36 | 2021-05-27T16:17:51 | 2021-05-27T16:17:26 |
NONE
| null |
**Question:**
I've followed the [azure end-to-end pipeline](https://www.kubeflow.org/docs/distributions/azure/azureendtoend/) tutorial, but am getting this error when running an experiment; does anyone know what the problem might be?

I've run `kubectl apply -f pvc.yaml` but somehow I don't see the volume when I do `kubectl get pvc`... am I supposed to?
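As the comments point out, the PVC from the tutorial is created in the `kubeflow` namespace, so a plain `kubectl get pvc` (default namespace) will not show it. A minimal sketch for checking where the claim actually landed, assuming the official `kubernetes` Python client (the same check can be done with `kubectl get pvc -n kubeflow`):

```python
# Minimal sketch using the kubernetes Python client (an assumption; kubectl works too).
# Lists PVCs in the default and kubeflow namespaces to see where the claim was created.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for ns in ("default", "kubeflow"):
    claims = v1.list_namespaced_persistent_volume_claim(namespace=ns)
    print(ns, [c.metadata.name for c in claims.items])
```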

I also don't understand why I am unable to delete the PVC.

PVC.yaml

pipeline.py

|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5605/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5602
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5602/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5602/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5602/events
|
https://github.com/kubeflow/pipelines/issues/5602
| 878,372,645 |
MDU6SXNzdWU4NzgzNzI2NDU=
| 5,602 |
[sdk] pyinstaller ("import kfp") -> OSError: could not get source code
|
{
"login": "zhao1157",
"id": 12959339,
"node_id": "MDQ6VXNlcjEyOTU5MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/12959339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhao1157",
"html_url": "https://github.com/zhao1157",
"followers_url": "https://api.github.com/users/zhao1157/followers",
"following_url": "https://api.github.com/users/zhao1157/following{/other_user}",
"gists_url": "https://api.github.com/users/zhao1157/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhao1157/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhao1157/subscriptions",
"organizations_url": "https://api.github.com/users/zhao1157/orgs",
"repos_url": "https://api.github.com/users/zhao1157/repos",
"events_url": "https://api.github.com/users/zhao1157/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhao1157/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-07T03:03:07 | 2022-03-03T04:05:20 | 2022-03-03T04:05:20 |
NONE
| null |
### Environment
* KFP SDK version: 1.20, 1.3.0, 1.4.0
### Steps to reproduce
```python
# file name: a.py
import kfp
print ("xxx")
```
`pyinstaller -F a.py` creates an executable `a` in ./dist/
run the executable `./dist/a`
### Expected result
```
xxx
```
But got the following error
### Issues
```python
Traceback (most recent call last):
File "a.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 540, in exec_module
File "kfp/__init__.py", line 21, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 540, in exec_module
File "kfp/components/__init__.py", line 15, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 540, in exec_module
File "kfp/components/_airflow_op.py", line 23, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 540, in exec_module
File "kfp/components/_python_op.py", line 31, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 540, in exec_module
File "kfp/components/_components.py", line 29, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 540, in exec_module
File "kfp/components/_data_passing.py", line 73, in <module>
File "inspect.py", line 973, in getsource
File "inspect.py", line 955, in getsourcelines
File "inspect.py", line 786, in findsource
OSError: could not get source code
[300906] Failed to execute script a
```
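The traceback ends in `inspect.getsource` being called at import time from `kfp/components/_data_passing.py`, and PyInstaller does not bundle `.py` sources for collected packages by default, which is consistent with the `OSError: could not get source code`. A minimal workaround sketch, assuming PyInstaller's hook mechanism (the hook file and approach are assumptions, not an officially supported fix):

```python
# hook-kfp.py -- minimal PyInstaller hook sketch (an assumption, not an official fix).
# Bundles kfp's .py files so inspect.getsource() can find them in the frozen app.
from PyInstaller.utils.hooks import collect_data_files

datas = collect_data_files("kfp", include_py_files=True)
```

The build would then be run with the hook directory on the search path, e.g. `pyinstaller -F a.py --additional-hooks-dir=.`.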
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5602/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5598
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5598/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5598/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5598/events
|
https://github.com/kubeflow/pipelines/issues/5598
| 877,557,807 |
MDU6SXNzdWU4Nzc1NTc4MDc=
| 5,598 |
[backend] Cache Server is unable to deploy with external database using SSL
|
{
"login": "maganaluis",
"id": 15258405,
"node_id": "MDQ6VXNlcjE1MjU4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/15258405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maganaluis",
"html_url": "https://github.com/maganaluis",
"followers_url": "https://api.github.com/users/maganaluis/followers",
"following_url": "https://api.github.com/users/maganaluis/following{/other_user}",
"gists_url": "https://api.github.com/users/maganaluis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maganaluis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maganaluis/subscriptions",
"organizations_url": "https://api.github.com/users/maganaluis/orgs",
"repos_url": "https://api.github.com/users/maganaluis/repos",
"events_url": "https://api.github.com/users/maganaluis/events{/privacy}",
"received_events_url": "https://api.github.com/users/maganaluis/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Can we consider this fixed after merging https://github.com/kubeflow/pipelines/pull/5599 @maganaluis ? ",
"@NikeNano Yes, I missed tagging this issue to the PR. "
] | 2021-05-06T14:26:53 | 2021-05-17T13:40:19 | 2021-05-17T13:40:19 |
CONTRIBUTOR
| null |
### Environment
Using Kustomize.
* KFP version:
KFP 1.3
* KFP SDK version:
KFP 1.4
### Steps to reproduce
We are unable to deploy the Cache Server with an external database which uses SSL. In order to set up the correct connection parameters, the configuration should allow for extra parameters.
### Expected result
We should be able to pass extra parameters to the Cache Server deployment:
```
...
- name: DBCONFIG_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: password
- name: DBCONFIG_EXTRAPARAMS
value: '{"tls": "true"}'
- name: NAMESPACE_TO_WATCH
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args: ["--db_driver=$(DBCONFIG_DRIVER)",
"--db_host=$(DBCONFIG_HOST_NAME)",
"--db_port=$(DBCONFIG_PORT)",
"--db_name=$(DBCONFIG_DB_NAME)",
"--db_user=$(DBCONFIG_USER)",
"--db_password=$(DBCONFIG_PASSWORD)",
"--db_extra_params=$(DBCONFIG_EXTRAPARAMS)",
"--namespace_to_watch=$(NAMESPACE_TO_WATCH)",
]
imagePullPolicy: Always
...
```
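For illustration only, the intended effect of `DBCONFIG_EXTRAPARAMS` is that each key/value pair ends up appended to the MySQL connection string as a query parameter; a small sketch of that mapping (the real cache server is written in Go, so this is only a model of the expected behaviour, not the actual implementation):

```python
# Illustrative sketch only: shows how a JSON extra-params map would be appended
# to a go-sql-driver style MySQL DSN as query parameters.
import json
from urllib.parse import urlencode

def build_dsn(user, password, host, port, db, extra_params_json="{}"):
    extra = json.loads(extra_params_json)          # e.g. '{"tls": "true"}'
    query = f"?{urlencode(extra)}" if extra else ""
    return f"{user}:{password}@tcp({host}:{port})/{db}{query}"

print(build_dsn("cache", "secret", "mysql.example.com", 3306, "cachedb", '{"tls": "true"}'))
# -> cache:secret@tcp(mysql.example.com:3306)/cachedb?tls=true
```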
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5598/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5597
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5597/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5597/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5597/events
|
https://github.com/kubeflow/pipelines/issues/5597
| 877,426,995 |
MDU6SXNzdWU4Nzc0MjY5OTU=
| 5,597 |
Unable to artifact output files in aws s3
|
{
"login": "swamat",
"id": 53515128,
"node_id": "MDQ6VXNlcjUzNTE1MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/53515128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swamat",
"html_url": "https://github.com/swamat",
"followers_url": "https://api.github.com/users/swamat/followers",
"following_url": "https://api.github.com/users/swamat/following{/other_user}",
"gists_url": "https://api.github.com/users/swamat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swamat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swamat/subscriptions",
"organizations_url": "https://api.github.com/users/swamat/orgs",
"repos_url": "https://api.github.com/users/swamat/repos",
"events_url": "https://api.github.com/users/swamat/events{/privacy}",
"received_events_url": "https://api.github.com/users/swamat/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "PatrickXYS",
"id": 23116624,
"node_id": "MDQ6VXNlcjIzMTE2NjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/23116624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PatrickXYS",
"html_url": "https://github.com/PatrickXYS",
"followers_url": "https://api.github.com/users/PatrickXYS/followers",
"following_url": "https://api.github.com/users/PatrickXYS/following{/other_user}",
"gists_url": "https://api.github.com/users/PatrickXYS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PatrickXYS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PatrickXYS/subscriptions",
"organizations_url": "https://api.github.com/users/PatrickXYS/orgs",
"repos_url": "https://api.github.com/users/PatrickXYS/repos",
"events_url": "https://api.github.com/users/PatrickXYS/events{/privacy}",
"received_events_url": "https://api.github.com/users/PatrickXYS/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "PatrickXYS",
"id": 23116624,
"node_id": "MDQ6VXNlcjIzMTE2NjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/23116624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PatrickXYS",
"html_url": "https://github.com/PatrickXYS",
"followers_url": "https://api.github.com/users/PatrickXYS/followers",
"following_url": "https://api.github.com/users/PatrickXYS/following{/other_user}",
"gists_url": "https://api.github.com/users/PatrickXYS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PatrickXYS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PatrickXYS/subscriptions",
"organizations_url": "https://api.github.com/users/PatrickXYS/orgs",
"repos_url": "https://api.github.com/users/PatrickXYS/repos",
"events_url": "https://api.github.com/users/PatrickXYS/events{/privacy}",
"received_events_url": "https://api.github.com/users/PatrickXYS/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Can someone please help on this",
"Update: My pipeline is running on namespace 'admin'. created secret file for namespace admin and kubeflow.\r\nUpdated deployment ml-pipeline-ui in kubeflow namespace to use aws-secret. Still in kubeflow UI am getting the below error:\r\n\r\nFailed to get object in bucket 'bucketname' at path README.md: S3Error: Access Denied",
"/assign @PatrickXYS ",
"Can someone please help on this",
"/cc @DavidSpek \nIn case you want to comment on this",
"@Bobgy Thanks for the ping. Our deployment for AWS includes a couple different methods to integrate S3 and Pipelines, so I'll see if I can spin up a test cluster with a similar implementation to this one and replicate the behavior.\r\n\r\nI did notice this behavior while I was experimenting with using upstream Argo Workflows. I doubt it is related, but I haven't looked into it yet so I thought it might be useful to mention here as well.",
"Can someone help on this",
"I am able to artifact files and logs in s3 now after following this https://github.com/e2fyi/kubeflow-aws/tree/master/pipelines"
] | 2021-05-06T12:07:19 | 2021-07-09T10:50:23 | 2021-07-09T10:50:23 |
NONE
| null |
### What steps did you take
I installed kubeflow v1.2 in AWS following official documentation -https://www.kubeflow.org/docs/distributions/aws/pipeline/
### What happened:
Created a notebook server and executed a Kubeflow pipeline. I am able to view artifacts in the UI stored in the MinIO location. Created the following secret.yaml file:
```
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: kubeflow
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <YOUR_BASE64_ACCESS_KEY>
  AWS_SECRET_ACCESS_KEY: <YOUR_BASE64_SECRET_ACCESS>
```
Updated deployment ml-pipeline-ui to use AWS credential environment variables by running kubectl edit deployment ml-pipeline-ui -n kubeflow.
### What did you expect to happen:
Kubeflow to store the files in s3
### Environment:
AWS-ubuntu eks cluster
* How do you deploy Kubeflow Pipelines (KFP)? --https://www.kubeflow.org/docs/distributions/aws/pipeline/
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version: 1.2
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Anything else you would like to add:
After applying the secret YAML I am able to connect to S3 to read and write files using use_aws_secret, but artifacts are not logged in S3 automatically; they are logged in MinIO alone. How can artifacts be logged in S3?
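For reference, a minimal sketch of applying the secret to a pipeline task with `kfp.aws.use_aws_secret` (the image, bucket, and op are placeholders; only the secret name comes from this issue):

```python
# Minimal sketch, assuming kfp.aws.use_aws_secret and the "aws-secret" created above.
# The component, image and bucket name are placeholders for illustration only.
import kfp
from kfp import dsl
from kfp.aws import use_aws_secret

@dsl.pipeline(name="s3-demo")
def s3_demo():
    task = dsl.ContainerOp(
        name="list-bucket",
        image="amazon/aws-cli:2.2.4",
        command=["aws", "s3", "ls", "s3://my-bucket"],
    )
    # Injects AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the Kubernetes secret.
    task.apply(use_aws_secret("aws-secret", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"))
```

Note that this only gives the task's container access to S3; where run artifacts themselves are written appears to be controlled by the workflow controller's artifact repository configuration, which is what the guide linked in the comments changes.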
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5597/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5597/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5593
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5593/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5593/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5593/events
|
https://github.com/kubeflow/pipelines/issues/5593
| 876,939,987 |
MDU6SXNzdWU4NzY5Mzk5ODc=
| 5,593 |
[feature] Choose what metrics to show on the individual experiment page
|
{
"login": "yuhuishi-convect",
"id": 74702693,
"node_id": "MDQ6VXNlcjc0NzAyNjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/74702693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuhuishi-convect",
"html_url": "https://github.com/yuhuishi-convect",
"followers_url": "https://api.github.com/users/yuhuishi-convect/followers",
"following_url": "https://api.github.com/users/yuhuishi-convect/following{/other_user}",
"gists_url": "https://api.github.com/users/yuhuishi-convect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuhuishi-convect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuhuishi-convect/subscriptions",
"organizations_url": "https://api.github.com/users/yuhuishi-convect/orgs",
"repos_url": "https://api.github.com/users/yuhuishi-convect/repos",
"events_url": "https://api.github.com/users/yuhuishi-convect/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuhuishi-convect/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2152751095,
"node_id": "MDU6TGFiZWwyMTUyNzUxMDk1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen",
"name": "lifecycle/frozen",
"color": "ededed",
"default": false,
"description": null
}
] |
open
| false | null |
[] | null |
[
"We are thinking about implementing this feature in KFP v2, because a basic requirement is that\r\n\r\nIn v2, we envision that each metric can be a separate artifact.\r\nAnd pipeline DSL can allow users to return artifacts they select.\r\n\r\nWith these two new features, it'll be possible to select pipeline level output artifacts and only metrics returned to pipeline output are associated with the pipeline.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"/lifecycle frozen"
] | 2021-05-05T23:36:33 | 2021-09-12T13:25:13 | null |
NONE
| null |
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
/area frontend
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
For runs that output many metrics, the current experiment page only displays a limited number of them.
It would be good to add a selector that lets users choose which metrics to display, so they can be used for sorting and further analysis.
Example:
Run that generates many metrics

Only 2 metrics are displayed on the experiment page

### What is the use case or pain point?
To analyze runs that output many metrics and reduce the "noisy" information displayed on the run comparison page
### Is there a workaround currently?
The "compare runs" function can somehow achieve the purpose. But an overview and sorting over the "focus" metrics is better.
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5593/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5590
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5590/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5590/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5590/events
|
https://github.com/kubeflow/pipelines/issues/5590
| 876,343,608 |
MDU6SXNzdWU4NzYzNDM2MDg=
| 5,590 |
[backend] Pipelines 1.5.0 api-server argo version mismatch & samples cannot be compiled
|
{
"login": "mgiessing",
"id": 40735330,
"node_id": "MDQ6VXNlcjQwNzM1MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/40735330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mgiessing",
"html_url": "https://github.com/mgiessing",
"followers_url": "https://api.github.com/users/mgiessing/followers",
"following_url": "https://api.github.com/users/mgiessing/following{/other_user}",
"gists_url": "https://api.github.com/users/mgiessing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mgiessing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mgiessing/subscriptions",
"organizations_url": "https://api.github.com/users/mgiessing/orgs",
"repos_url": "https://api.github.com/users/mgiessing/repos",
"events_url": "https://api.github.com/users/mgiessing/events{/privacy}",
"received_events_url": "https://api.github.com/users/mgiessing/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Hi @mgiessing, this issue has been fixed in latest version. Can you confirm?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-05T11:18:28 | 2022-03-03T06:05:18 | 2022-03-03T06:05:18 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Pipelines is not yet deployed, I want to build the api-server docker image
* KFP version:
1.5.0
* KFP SDK version:
1.5.0
### Steps to reproduce
This is difficult to reproduce as you'd need a ppc64le system; however, the problem is not related to the underlying platform, since the error indicates a wrong argo version and also happened on an x86 MacBook Pro. Essentially these steps have been performed:
```
git clone https://github.com/kubeflow/pipelines.git
cd pipelines && git checkout 1.5.0
## changing some ppc64le specifics in the dockerfile ##
docker build -t api-server:1.5.0.ppc64le -f backend/Dockerfile .
```
I've also built the image on my x86 MacBook and got the same errors during the `dsl-compile ...` command.
See:
https://github.com/kubeflow/pipelines/blob/1.5.0/backend/Dockerfile#L35
and especially this line fails:
https://github.com/kubeflow/pipelines/blob/1.5.0/backend/Dockerfile#L40
### Expected result
I'd expect this to run without issues.
### Materials and Reference
#### Error output / docker build log ppc64le-linux
```
[...]
Step 27/42 : RUN set -e; < /samples/sample_config.json jq .[].file --raw-output | while read pipeline_yaml; do pipeline_py="${pipeline_yaml%.yaml}"; mv "$pipeline_py" "${pipeline_py}.tmp"; echo 'import kfp; kfp.components.default_base_image_or_builder="gcr.io/google-appengine/python:2020-03-31-141326"' | cat - "${pipeline_py}.tmp" > "$pipeline_py"; /bin/bash -c "source activate py37 && dsl-compile --py \"$pipeline_py\" --output \"$pipeline_yaml\" || python3 \"$pipeline_py\""; done
---> Running in eb21ce3bd146
/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
Traceback (most recent call last):
File "/root/miniconda3/envs/py37/bin/dsl-compile", line 33, in <module>
sys.exit(load_entry_point('kfp==1.5.0', 'console_scripts', 'dsl-compile')())
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 104, in main
not args.disable_type_check,
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 91, in compile_pyfile
_compile_pipeline_function(pipeline_funcs, function_name, output_path, type_check)
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 54, in _compile_pipeline_function
raise ValueError('A function with @dsl.pipeline decorator is required in the py file.')
ValueError: A function with @dsl.pipeline decorator is required in the py file.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:From /samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py:75: external_input (from tfx.utils.dsl_utils) is deprecated and will be removed in a future version.
Instructions for updating:
external_input is deprecated, directly pass the uri to ExampleGen.
WARNING:absl:The "input" argument to the CsvExampleGen component has been deprecated by "input_base". Please update your usage as support for this argument will be removed soon.
/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
Traceback (most recent call last):
File "/root/miniconda3/envs/py37/bin/dsl-compile", line 33, in <module>
sys.exit(load_entry_point('kfp==1.5.0', 'console_scripts', 'dsl-compile')())
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 104, in main
not args.disable_type_check,
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 91, in compile_pyfile
_compile_pipeline_function(pipeline_funcs, function_name, output_path, type_check)
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 54, in _compile_pipeline_function
raise ValueError('A function with @dsl.pipeline decorator is required in the py file.')
ValueError: A function with @dsl.pipeline decorator is required in the py file.
/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
Traceback (most recent call last):
File "/root/miniconda3/envs/py37/bin/dsl-compile", line 33, in <module>
sys.exit(load_entry_point('kfp==1.5.0', 'console_scripts', 'dsl-compile')())
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 104, in main
not args.disable_type_check,
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 91, in compile_pyfile
_compile_pipeline_function(pipeline_funcs, function_name, output_path, type_check)
File "/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 58, in _compile_pipeline_function
raise ValueError('There are multiple pipelines: %s. Please specify --function.' % func_names)
ValueError: There are multiple pipelines: ['flipcoin_pipeline', 'flipcoin_exit_pipeline']. Please specify --function.
/root/miniconda3/envs/py37/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
Removing intermediate container eb21ce3bd146
---> 6bd1cd1530e1
Step 28/42 : FROM ppc64le/debian:stretch
[...]
```
#### Error output / docker build log x86-Macbook
```
[...]
Step 21/36 : RUN set -e; < /samples/sample_config.json jq .[].file --raw-output | while read pipeline_yaml; do pipeline_py="${pipeline_yaml%.yaml}"; mv "$pipeline_py" "${pipeline_py}.tmp"; echo 'import kfp; kfp.components.default_base_image_or_builder="gcr.io/google-appengine/python:2020-03-31-141326"' | cat - "${pipeline_py}.tmp" > "$pipeline_py"; dsl-compile --py "$pipeline_py" --output "$pipeline_yaml" || python3 "$pipeline_py"; done
---> Running in a008c231c82d
/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
2021-05-05 11:10:07.987152: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-05-05 11:10:07.987241: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
Traceback (most recent call last):
File "/usr/local/bin/dsl-compile", line 33, in <module>
sys.exit(load_entry_point('kfp==1.5.0', 'console_scripts', 'dsl-compile')())
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 104, in main
not args.disable_type_check,
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 91, in compile_pyfile
_compile_pipeline_function(pipeline_funcs, function_name, output_path, type_check)
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 54, in _compile_pipeline_function
raise ValueError('A function with @dsl.pipeline decorator is required in the py file.')
ValueError: A function with @dsl.pipeline decorator is required in the py file.
2021-05-05 11:10:13.142424: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-05-05 11:10:13.142485: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
WARNING:absl:From /samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py:75: external_input (from tfx.utils.dsl_utils) is deprecated and will be removed in a future version.
Instructions for updating:
external_input is deprecated, directly pass the uri to ExampleGen.
WARNING:absl:The "input" argument to the CsvExampleGen component has been deprecated by "input_base". Please update your usage as support for this argument will be removed soon.
/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
Traceback (most recent call last):
File "/usr/local/bin/dsl-compile", line 33, in <module>
sys.exit(load_entry_point('kfp==1.5.0', 'console_scripts', 'dsl-compile')())
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 104, in main
not args.disable_type_check,
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 91, in compile_pyfile
_compile_pipeline_function(pipeline_funcs, function_name, output_path, type_check)
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 54, in _compile_pipeline_function
raise ValueError('A function with @dsl.pipeline decorator is required in the py file.')
ValueError: A function with @dsl.pipeline decorator is required in the py file.
/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
Traceback (most recent call last):
File "/usr/local/bin/dsl-compile", line 33, in <module>
sys.exit(load_entry_point('kfp==1.5.0', 'console_scripts', 'dsl-compile')())
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 104, in main
not args.disable_type_check,
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 91, in compile_pyfile
_compile_pipeline_function(pipeline_funcs, function_name, output_path, type_check)
File "/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/main.py", line 58, in _compile_pipeline_function
raise ValueError('There are multiple pipelines: %s. Please specify --function.' % func_names)
ValueError: There are multiple pipelines: ['flipcoin_pipeline', 'flipcoin_exit_pipeline']. Please specify --function.
/usr/local/lib/python3.7/site-packages/kfp-1.5.0-py3.7.egg/kfp/compiler/compiler.py:1117: UserWarning: Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.
warnings.warn("Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work.")
Removing intermediate container a008c231c82d
---> 6ab54eda2f90
Step 22/36 : FROM debian:stretch
[...]
```
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5590/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5589
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5589/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5589/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5589/events
|
https://github.com/kubeflow/pipelines/issues/5589
| 876,327,632 |
MDU6SXNzdWU4NzYzMjc2MzI=
| 5,589 |
SDK - Creating lightweight component inside lightweight component function leads to wrong inner component
|
{
"login": "Shuai-Xie",
"id": 18352713,
"node_id": "MDQ6VXNlcjE4MzUyNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/18352713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shuai-Xie",
"html_url": "https://github.com/Shuai-Xie",
"followers_url": "https://api.github.com/users/Shuai-Xie/followers",
"following_url": "https://api.github.com/users/Shuai-Xie/following{/other_user}",
"gists_url": "https://api.github.com/users/Shuai-Xie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shuai-Xie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shuai-Xie/subscriptions",
"organizations_url": "https://api.github.com/users/Shuai-Xie/orgs",
"repos_url": "https://api.github.com/users/Shuai-Xie/repos",
"events_url": "https://api.github.com/users/Shuai-Xie/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shuai-Xie/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1122445895,
"node_id": "MDU6TGFiZWwxMTIyNDQ1ODk1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/components",
"name": "area/sdk/components",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"For quick debug try:\r\n```python\r\n def inpod_pipeline_func(a: int, b: int):\r\n add_task = add(a, b)\r\n print(add_task.outputs)\r\n show_result(add_task.outputs['func_name'], add_task.outputs['result'])\r\n```",
"Thanks for your quick reply. I've made two tests with your guidance.\r\n\r\n## Test 1\r\n\r\n### 1.1. Pipeline function defined in a component\r\n\r\nIn the `inpod_pipeline_func`, I directly print the `add_task.outputs` as below and don't fetch values by keys in the `show_result` function.\r\n\r\n```py\r\ndef create_pipeline_in_pod(data_dir: str):\r\n import kfp\r\n import kfp.dsl as dsl\r\n from kfp.components import func_to_container_op\r\n from typing import NamedTuple\r\n\r\n print('create_pipeline_in_pod')\r\n\r\n EXP_NAME = 'demo'\r\n RUN_NAME = 'pipeline_in_pod'\r\n\r\n @func_to_container_op\r\n def add(\r\n a: int,\r\n b: int # todo: can't recognize return type\r\n ) -> NamedTuple('output', [('func_name', str), ('result', int)]):\r\n from collections import namedtuple\r\n output = namedtuple('output', ['func_name', 'result'])\r\n return output('add_two_numbers', a + b)\r\n\r\n @func_to_container_op\r\n def show_result(result):\r\n print(f'result: {result}')\r\n\r\n @dsl.pipeline(name=RUN_NAME)\r\n def inpod_pipeline_func(a: int, b: int):\r\n add_task = add(a, b)\r\n print(add_task.outputs)\r\n show_result(add_task.outputs)\r\n\r\n # compile and submit\r\n print('begin compile in pod...')\r\n kfp.compiler.Compiler().compile(inpod_pipeline_func,\r\n f'{data_dir}/{RUN_NAME}.yaml')\r\n print('finish compile in pod')\r\n\r\n client = kfp.Client(host='my_k8s_master_ip:8888', namespace='kubeflow')\r\n client.create_run_from_pipeline_func(inpod_pipeline_func,\r\n arguments={\r\n 'a': '1',\r\n 'b': '2'\r\n },\r\n experiment_name=EXP_NAME,\r\n run_name=RUN_NAME)\r\n print('submit pipeline in pod')\r\n```\r\n\r\n### 1.2. Pipeline logs \r\n\r\nIt shows that `add_task.outputs` is `{}`, which means that no small values are passed in the pipeline. So I guess the return type definition in the `add` function may not be recognized by the `kfp.compiler` in the component.\r\n\r\n\r\n```\r\ncreate_pipeline_in_pod\r\nbegin compile in pod...\r\n{}\r\n/usr/local/lib/python3.7/site-packages/kfp/components/_data_passing.py:168: UserWarning: Missing type name was inferred as \"JsonObject\" based on the value \"{}\".\r\n warnings.warn('Missing type name was inferred as \"{}\" based on the value \"{}\".'.format(type_name, str(value)))\r\nfinish compile in pod\r\n{}\r\nsubmit pipeline in pod\r\n```\r\n\r\n### 1.3. Generated Pipeline Graph\r\n\r\nMore importantly, I found the components of the just created `pipeline_in_pod` pipeline are not connected like below. I guess I should adopt another experiment by passing values with PV.\r\n\r\n\r\n\r\n\r\n## Test 2\r\n\r\n### 2.1. Pipeline function defined with only one passing value\r\n\r\nAnd I tested a more easy version with only one passing value below. \r\n\r\nMaybe a bit weird, I still use `add_task.outputs` to fetch the outputs of the `add` component. But this modification makes me pass the compilation successfully. 
And the passing value is `{}`, too.\r\n\r\n```py\r\ndef create_pipeline_in_pod(data_dir: str):\r\n import kfp\r\n import kfp.dsl as dsl\r\n from kfp.components import func_to_container_op\r\n\r\n print('create_pipeline_in_pod')\r\n\r\n EXP_NAME = 'demo'\r\n RUN_NAME = 'pipeline_in_pod'\r\n\r\n @func_to_container_op\r\n def add(a: int, b: int) -> int:\r\n return a + b\r\n\r\n @func_to_container_op\r\n def show_result(result):\r\n print(f'result: {result}')\r\n\r\n @dsl.pipeline(name=RUN_NAME)\r\n def inpod_pipeline_func(a: int, b: int):\r\n add_task = add(a, b)\r\n print(add_task.outputs)\r\n show_result(add_task.outputs)\r\n\r\n # compile and submit\r\n print('begin compile in pod...')\r\n kfp.compiler.Compiler().compile(inpod_pipeline_func,\r\n f'{data_dir}/{RUN_NAME}.yaml')\r\n print('finish compile in pod')\r\n\r\n client = kfp.Client(host='my_k8s_master_ip:8888', namespace='kubeflow')\r\n client.create_run_from_pipeline_func(inpod_pipeline_func,\r\n arguments={\r\n 'info': 'hello world',\r\n },\r\n experiment_name=EXP_NAME,\r\n run_name=RUN_NAME)\r\n print('submit pipeline in pod')\r\n```\r\n\r\n### 2.2. Pipeline logs \r\n\r\n```shell\r\nWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: \r\nhttps://pip.pypa.io/warnings/venv\r\ncreate_pipeline_in_pod\r\nbegin compile in pod...\r\n{}\r\n/usr/local/lib/python3.7/site-packages/kfp/components/_data_passing.py:168: UserWarning: Missing type name was inferred as \"JsonObject\" based on the value \"{}\".\r\n warnings.warn('Missing type name was inferred as \"{}\" based on the value \"{}\".'.format(type_name, str(value)))\r\nfinish compile in pod\r\n{}\r\nsubmit pipeline in pod\r\n```\r\n\r\n### 2.3. Generated Pipeline Graph\r\n\r\nSame to 1.3.\r\n\r\n\r\n\r\n\r\n\r\nThanks very much. I'll keep following up on this issue.\r\n\r\nWarm Regrads.",
"### Some new discoveries of the not connected components\r\n\r\nIn my experiment, `Add` and `Show result` components are created by the pipeline defined in the `create_pipeline_in_pod` function. The two components are not connected in https://github.com/kubeflow/pipelines/issues/5589#issuecomment-833293618\r\n\r\nI try to define the pipeline workflow explicitly with `kfp.dsl.ContainerOp`'s `after()` function as below.\r\n\r\n```py\r\ndef create_add_two_numbers_pipeline_in_pod(data_dir: str):\r\n import kfp\r\n import kfp.dsl as dsl\r\n from kfp.components import func_to_container_op\r\n\r\n print('create_pipeline_in_pod')\r\n\r\n EXP_NAME = 'demo'\r\n RUN_NAME = 'add_two_numbers_in_pod'\r\n\r\n def add(a: int, b: int) -> int:\r\n return a + b\r\n\r\n def show_result(result):\r\n print(f'result: {result}')\r\n\r\n @dsl.pipeline(name=RUN_NAME)\r\n def inpod_pipeline_func(a: int, b: int):\r\n add_op = func_to_container_op(add)(a, b)\r\n print(add_op.outputs)\r\n show_op = func_to_container_op(show_result)(add_op.outputs)\r\n show_op.after(add_op) # note: add after() here\r\n```\r\n\r\nThe generated graph is ok, but the passing value is still `{}`.\r\n\r\n\r\n\r\n\r\nIf I annotate `show_op.after(add_op)` as below, the two components are not connected again.\r\n\r\n```py\r\ndef create_add_two_numbers_pipeline_in_pod(data_dir: str):\r\n import kfp\r\n import kfp.dsl as dsl\r\n from kfp.components import func_to_container_op\r\n\r\n print('create_pipeline_in_pod')\r\n\r\n EXP_NAME = 'demo'\r\n RUN_NAME = 'add_two_numbers_in_pod'\r\n\r\n def add(a: int, b: int) -> int:\r\n return a + b\r\n\r\n def show_result(result):\r\n print(f'result: {result}')\r\n\r\n @dsl.pipeline(name=RUN_NAME)\r\n def inpod_pipeline_func(a: int, b: int):\r\n add_op = func_to_container_op(add)(a, b)\r\n print(add_op.outputs)\r\n show_op = func_to_container_op(show_result)(add_op.outputs)\r\n # show_op.after(add_op)\r\n```\r\n\r\n\r\n\r\n",
"### Test results of `InputPath` and `OutputPath`\r\n\r\nI've tried to use `InputPath` and `OutputPath` to pass values in a local file with the help of `minio`.\r\n\r\nThe codes are updated as below.\r\n\r\n```py\r\ndef create_add_two_numbers_pipeline_in_pod(data_dir: str):\r\n import kfp\r\n import kfp.dsl as dsl\r\n from kfp.components import func_to_container_op, InputPath, OutputPath\r\n\r\n print('create_pipeline_in_pod')\r\n\r\n EXP_NAME = 'demo'\r\n RUN_NAME = 'add_two_numbers_in_pod'\r\n\r\n @func_to_container_op\r\n def add(a: int, b: int, output_sum_path: OutputPath(str)): # pass by path\r\n with open(output_sum_path, 'w') as f:\r\n f.write(str(a + b))\r\n\r\n @func_to_container_op\r\n def show_result(input_sum_path: InputPath(str)): # pass by path\r\n with open(input_sum_path, 'r') as f:\r\n res = f.readline()\r\n print('result:', res)\r\n\r\n @dsl.pipeline(name=RUN_NAME)\r\n def inpod_pipeline_func(a: int, b: int):\r\n add_task = add(a, b)\r\n show_result(add_task.output)\r\n\r\n if __name__ == '__main__':\r\n # compile and submit\r\n print('begin compile in pod...')\r\n kfp.compiler.Compiler().compile(inpod_pipeline_func,\r\n f'{data_dir}/{RUN_NAME}.yaml')\r\n print('finish compile in pod')\r\n\r\n # run pipeline outside\r\n client = kfp.Client(host='my_k8s_master_ip:8888', namespace='kubeflow')\r\n client.create_run_from_pipeline_func(inpod_pipeline_func,\r\n arguments={\r\n 'a': '1',\r\n 'b': '2'\r\n },\r\n experiment_name=EXP_NAME,\r\n run_name=RUN_NAME)\r\n print('submit pipeline in pod')\r\n```\r\n\r\nI got new errors as below. `OutputPath` is regarded as a common argument.\r\n\r\n```\r\nAdd() missing 1 required positional argument: 'output_sum_path'\r\n```\r\n\r\n\r\n\r\nBest Regards.",
"## Maybe a barely satisfactory solution\r\n\r\nIn the past days, I found it works well if I created a pipeline in a running pod.\r\n\r\nFor example :\r\n\r\n```sh\r\n# start a running python pod\r\n$ kubectl apply -f python37.yaml\r\n\r\n# go into the pod, and execute the pipeline code inside the pod\r\n$ kubectl exec -it python37 bash\r\n\r\n# in the shell of the running pod\r\n# use vim to write create_pipeline_in_pod.py\r\n...\r\n\r\n# execute\r\n$ python create_pipeline_in_pod.py\r\n```\r\n\r\nAnd I found the newly created `add_two_numbers_inpod` pipeline running well.\r\n\r\nSo I made a small change to the original code with 2 steps:\r\n\r\n1. save the code of `create_pipeline_in_pod()` into a python file like `create_pipeline_in_pod.py`\r\n2. use `os.system()` to execute the program.\r\n\r\nThe demo program is below.\r\n\r\n```python\r\nimport kfp\r\nimport kfp.dsl as dsl\r\nfrom kfp.components import func_to_container_op\r\n\r\nEXP_NAME = 'demo'\r\nRUN_NAME = 'create_pipeline_in_pod'\r\n\r\n\r\ndef create_add_two_numbers_pipeline_in_pod():\r\n import os\r\n import textwrap\r\n\r\n code = \"\"\"\\\r\n import kfp\r\n import kfp.dsl as dsl\r\n from kfp.components import func_to_container_op\r\n from typing import NamedTuple\r\n\r\n print('create pipeline in pod')\r\n\r\n EXP_NAME = 'automl'\r\n RUN_NAME = 'add_two_numbers_inpod'\r\n\r\n\r\n @func_to_container_op\r\n def add(a: int,\r\n b: int) -> NamedTuple('output', [('func_name', str), ('result', int)]):\r\n from collections import namedtuple\r\n output = namedtuple('output', ['func_name', 'result'])\r\n return output('add_two_numbers', a + b)\r\n\r\n\r\n @func_to_container_op\r\n def show_result(func_name, result):\r\n print(f'{func_name} result: {result}')\r\n\r\n\r\n @dsl.pipeline(name=RUN_NAME)\r\n def inpod_pipeline_func(a: int, b: int):\r\n add_task = add(a, b)\r\n show_result(add_task.outputs['func_name'], add_task.outputs['result'])\r\n\r\n\r\n if __name__ == '__main__':\r\n client = kfp.Client(host='my_k8s_master_ip:8888')\r\n client.create_run_from_pipeline_func(inpod_pipeline_func,\r\n arguments={\r\n 'a': '1',\r\n 'b': '2'\r\n },\r\n experiment_name=EXP_NAME,\r\n run_name=RUN_NAME)\r\n print('submit pipeline in pod')\r\n \"\"\"\r\n\r\n file_path = './create_pipeline_in_pod.py'\r\n\r\n # write\r\n with open(file_path, 'w') as f:\r\n f.write(textwrap.dedent(code))\r\n print('\\nwrite codes')\r\n\r\n # read\r\n print('\\nread codes')\r\n with open(file_path, 'r') as f:\r\n for line in f.readlines():\r\n print(line, end='')\r\n\r\n print('\\nrun codes')\r\n os.system(f'python {file_path}')\r\n\r\n\r\ninpipeline_op = func_to_container_op(create_add_two_numbers_pipeline_in_pod,\r\n base_image='shuaix/kfp_fairing:1.0')\r\n\r\n\r\n# Define the pipeline\r\[email protected](name=RUN_NAME)\r\ndef pipeline_func():\r\n inpipeline_op()\r\n\r\n\r\nif __name__ == '__main__':\r\n client = kfp.Client(host='my_k8s_master_ip:8888')\r\n client.create_run_from_pipeline_func(pipeline_func,\r\n arguments={},\r\n experiment_name=EXP_NAME,\r\n run_name=RUN_NAME)\r\n print('submit pipeline')\r\n```\r\n\r\nThe pipeline logs\r\n\r\n```\r\nwrite codes\r\n\r\nread codes\r\n\r\nimport kfp\r\nimport kfp.dsl as dsl\r\nfrom kfp.components import func_to_container_op\r\nfrom typing import NamedTuple\r\nprint('create pipeline in pod')\r\nEXP_NAME = 'automl'\r\nRUN_NAME = 'add_two_numbers_inpod'\r\n@func_to_container_op\r\ndef add(a: int,\r\n b: int) -> NamedTuple('output', [('func_name', str), ('result', int)]):\r\n from collections import namedtuple\r\n 
output = namedtuple('output', ['func_name', 'result'])\r\n return output('add_two_numbers', a + b)\r\n@func_to_container_op\r\ndef show_result(func_name, result):\r\n print(f'{func_name} result: {result}')\r\[email protected](name=RUN_NAME)\r\ndef inpod_pipeline_func(a: int, b: int):\r\n add_task = add(a, b)\r\n show_result(add_task.outputs['func_name'], add_task.outputs['result'])\r\nif __name__ == '__main__':\r\n client = kfp.Client(host='10.252.192.47:8888')\r\n client.create_run_from_pipeline_func(inpod_pipeline_func,\r\n arguments={\r\n 'a': '1',\r\n 'b': '2'\r\n },\r\n experiment_name=EXP_NAME,\r\n run_name=RUN_NAME)\r\n print('submit pipeline in pod')\r\n\r\nrun codes\r\ncreate pipeline in pod\r\nsubmit pipeline in pod\r\n```\r\n\r\nBoth of the pipelines run successfully.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nBest Regards.\r\n",
"I am having a related issue (with the same `kfp*` package versions as OP), and it appears that the source of the problem is that multiple outputs is broken. To illustrate, the following test fails, and the only key in `called_multi_output_op.outputs` is `\"Output\"`.\r\n\r\n```python\r\nfrom __future__ import annotations\r\nfrom typing import NamedTuple\r\nfrom kfp import components\r\n\r\ndef test_multiple_outputs() -> None:\r\n def multi_ouptput_func(\r\n a: str, b: str\r\n ) -> NamedTuple(\"MultiOutputNoOp\", [(\"a\", str), (\"b\", str)]): # type: ignore[valid-type]\r\n from collections import namedtuple\r\n\r\n MultiOutputNoOp = namedtuple(\"MultiOutputNoOp\", [\"a\", \"b\"])\r\n return MultiOutputNoOp(a=a, b=b)\r\n\r\n multi_ouptput_op = components.func_to_container_op(multi_ouptput_func)\r\n called_multi_output_op = multi_ouptput_op(a=\"some\", b=\"args\")\r\n\r\n assert all(k in called_multi_output_op.outputs for k in [\"a\", \"b\"]), list(\r\n called_multi_output_op.outputs\r\n )\r\n\r\n```",
"It looks like the issue in my test was the `from __future__ import annotations` import. If I remove that, then the test passes. So @Shuai-Xie , you might be running on python 3.10 locally, and it works in a running pod because you're using python 3.7?",
"Thanks for your kind reply @jaminsore. But the problem may be not a version issue since the program can run successfully in a running pod if I use this method https://github.com/kubeflow/pipelines/issues/5589#issuecomment-835101733. By the way, I use python 3.6.13 both in the host and pod mode.",
"I did a small investigation.\r\n\r\nThe easiest way to debug component-related issues like that is to look at the component definition:\r\n\r\n```python\r\nop = func_to_container_op(create_pipeline_in_pod, packages_to_install=['kfp'], output_component_file='nested.component.yaml')\r\n```\r\n\r\nWe immediately see the issue that was causing the weird behavior:\r\n\r\n```yaml\r\n @func_to_container_op\r\n def add(\r\n a,\r\n b # todo: can't recognize return type\r\n ):\r\n from collections import namedtuple\r\n output = namedtuple('output', ['func_name', 'result'])\r\n return output('add_two_numbers', a + b)\r\n```\r\n\r\nAll type annotations are gone. Most importantly the output type annotations.\r\n\r\nYes, the `create_component_from_func`/`func_to_container_op` does that.\r\n\r\nWhy?\r\n\r\nWhen not using code pickling (the default), the function must be self-contained. However, the types used in the function signature are \"outside\" the function body.\r\n\r\nThe code only really needs to strip annotations from the outer function, but it strips all function annotations inside.\r\n\r\nThis can be considered a somewhat minor bug.\r\n\r\nCan you please describe the scenario where you want to do static nested component authoring?\r\n\r\nI see multiple viable workarounds:\r\n\r\n1. Turn the pipeline into a graph component using `kfp.components.create_graph_component_from_pipleine_func`\r\n\r\n2. Do not put pipeline code inside the component. You can get the pipeline code from an input or from some URL.\r\n\r\nI'd split the outer pipeline into 2 or 3 components: Get pipeline code, [compile pipeline], Submit run",
"Hi, @Ark-kun. Many thanks for your detailed analysis and a clear experiment. \r\n\r\nI'd like to describe my scenario first. \r\n\r\nAssuming the **outer pipeline** has two components `C1` and `C2`, both of them provide resident services.\r\n\r\n- `C1`: a webui, which receives the user input, e.g. task categories: `[classification, detection]`\r\n- `C2`: a pipeline manager, which generates **new (inner) pipelines** doing classification or detection tasks based on the user input.\r\n\r\nCurrently, I can't find an elegant way to launch new pipelines in `C2`.\r\n\r\n---\r\n\r\nI've tried your two solutions.\r\n\r\n### (1) using `kfp.components.create_graph_component_from_pipleine_func`\r\n\r\nI guess you may want me to create an inner pipeline as below. This way can create a pipeline successfully. But the component functions of the inner pipeline are defined in the same scope of the outer pipeline.\r\n\r\n```py\r\nimport kfp\r\nimport kfp.dsl as dsl\r\nimport kfp.components as comps\r\nfrom typing import NamedTuple\r\n\r\nEXP_NAME = 'automl'\r\nRUN_NAME = 'outer_pipeline'\r\n\r\n\r\ndef add(a: int, b: int) -> NamedTuple('output', [('func_name', str), ('result', int)]):\r\n from collections import namedtuple\r\n output = namedtuple('output', ['func_name', 'result'])\r\n return output('add_two_numbers', a + b)\r\n\r\n\r\ndef show_result(func_name, result):\r\n print(f'{func_name} result: {result}')\r\n\r\n\r\ndef inner_pipeline_func(a: int, b: int):\r\n add_task = comps.create_component_from_func(add)(a, b)\r\n comps.create_component_from_func(show_result)(add_task.outputs['func_name'], add_task.outputs['result'])\r\n\r\n\r\[email protected](name=RUN_NAME)\r\ndef pipeline_func(a: int, b: int):\r\n # create a graph component from (inner) pipeline func, which is in the same scope as the outer one.\r\n create_pipeline_task = comps.create_graph_component_from_pipeline_func(inner_pipeline_func)(a, b)\r\n\r\n\r\nif __name__ == '__main__':\r\n client = kfp.Client(host='my_k8s_master_ip:8888', namespace='kubeflow')\r\n client.create_run_from_pipeline_func(pipeline_func,\r\n arguments={\r\n 'a': '1',\r\n 'b': '2'\r\n },\r\n experiment_name=EXP_NAME,\r\n run_name=RUN_NAME)\r\n print('submit pipeline')\r\n```\r\n\r\n### (2) Do not put pipeline code inside the component.\r\n\r\nYou're right. I can't define the inner pipeline code directly inside the component now. \r\n\r\nThis way https://github.com/kubeflow/pipelines/issues/5589#issuecomment-835101733 works by writing a complete inner pipeline code to a python file inside the container launched by the component. So next I want to have a try by defining some template pipelines inside a customized container in advance.\r\n\r\nThanks again!",
">a pipeline manager, which generates new (inner) pipelines\r\nShould it be *generating* new pipelines or just *launching* new pipeline runs? You can easily launch a pipeline or component from a component.\r\n\r\nThe issue only occurs when calling `create_component_from_func` inside function passed to `create_component_from_func`. It won't occur when they're called sequentially for example.\r\n\r\nCould you create, say 5 components/pipelines for different tasks, then create a \"manager\" pipeline that launches one of them based on input?",
">I can't define the inner pipeline code directly inside the component now.\r\n\r\nYou cannot define it in the component function (used with `create_component_from_func `). But it can be anywhere else. E.g. You could load a function from a python module and use `create_component_from_func ` on it. Or you can download the source from GitHub. It's about the location of the source code. You cannot have component function source within component function source code. But only that is restricted.",
"Another option to work around the issue - make either inner or outer component function without using `create_component_from_func` - you can create component.yaml directly.\r\n\r\nYou could even generate component.yaml using your initial approach, then fix the generated code inside the component.yaml.",
"Hi @Ark-kun, thanks for your suggestion. I think I've got your points. The current python sdk features of kubeflow pipeline can meet my demands now. \r\n\r\n**In the past few days, I've connected the pipeline component functions with the native `@staticmethod` of python class. By doing this, I can define a `specific kubeflow pipeline template` with `a specific python class` now. And these customized template Pipeline Classes (e.g. `AddPipeline` below) will be included in a self-built container image `C1`. Then I can directly import a pre-defined pipeline in a component function by using `C1` container as the `base_image`. I don't need to inject the python code in this way https://github.com/kubeflow/pipelines/issues/5589#issuecomment-835101733 now.**\r\n\r\n\r\n## Demo\r\n\r\nHere is an example.\r\n\r\n### 1. Define a `AddPipeline` class\r\n\r\nIn this class,\r\n\r\n- `add()` and `show_result()` are two component functions, **which are declared as `staticmethod` of this class**.\r\n- `pipeline_func(self, ..)` defines the pipeline workflow, which is not a static method.\r\n- `run(self, ..)` submits a pipeline instance, which is not a static method, too.\r\n\r\n```py\r\nimport kfp\r\nfrom kfp.components import func_to_container_op\r\nfrom typing import NamedTuple\r\n\r\n\r\nclass AddPipeline:\r\n def __init__(self,\r\n a: int,\r\n b: int,\r\n namespace: str = 'kubeflow',\r\n exp_name: str = 'Default',\r\n run_name: str = 'run') -> None:\r\n self.a = a\r\n self.b = b\r\n self.namespace = namespace\r\n self.exp_name = exp_name\r\n self.run_name = run_name\r\n\r\n @staticmethod\r\n def add(a: int, b: int) -> NamedTuple('output', [('func_name', str), ('result', int)]):\r\n from collections import namedtuple\r\n output = namedtuple('output', ['func_name', 'result'])\r\n return output('add_two_numbers', a + b)\r\n\r\n @staticmethod\r\n def show_result(func_name, result):\r\n print(f'{func_name} result: {result}')\r\n\r\n def pipeline_func(self, a: int, b: int):\r\n add_task = func_to_container_op(self.add)(a, b)\r\n func_to_container_op(self.show_result)(add_task.outputs['func_name'], add_task.outputs['result'])\r\n\r\n def run(self):\r\n client = kfp.Client(host='my_k8s_master_ip:8888')\r\n client.create_run_from_pipeline_func(self.pipeline_func,\r\n arguments={\r\n 'a': self.a,\r\n 'b': self.b,\r\n },\r\n experiment_name=self.exp_name,\r\n run_name=self.run_name)\r\n print('submit pipeline')\r\n```\r\n\r\n### 2. Using the `AddPipeline` class in a component function of the outer pipeline\r\n\r\nBelow are the outer pipeline codes. 
\r\n\r\nAfter instantiating the `AddPipeline` class with the related arguments in `create_job_pipeline_in_pod()`, I can run an inner pipeline with `add_pipeline.run()` method.\r\n\r\n**Note: I use `C1` as the base image for `create_job_pipeline_in_pod()`.**\r\n\r\n```py\r\nimport kfp\r\nimport kfp.dsl as dsl\r\nfrom kfp.components import func_to_container_op\r\n\r\nRUN_NAME = 'create_add_pipeline_in_pod'\r\n\r\n\r\ndef create_job_pipeline_in_pod(a: int, b: int) -> None:\r\n from pipelines.add_pipeline import AddPipeline\r\n add_pipeline = AddPipeline(a=a, b=b, run_name='add_two_numbers_inpod')\r\n add_pipeline.run()\r\n\r\n\r\[email protected](name=RUN_NAME)\r\ndef pipeline_func(a: int, b: int):\r\n # Note: I use `C1` container image as the base image here.\r\n func_to_container_op(create_job_pipeline_in_pod, base_image='C1:1,0')(a, b)\r\n\r\n\r\nif __name__ == '__main__':\r\n client = kfp.Client(host='my_k8s_master_ip:8888')\r\n client.create_run_from_pipeline_func(pipeline_func, arguments={'a': 1, 'b': 2}, run_name=RUN_NAME)\r\n print('submit pipeline')\r\n```\r\n\r\nBelow are the launched outer and inner pipelines.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nBest Regards.",
">(1) using kfp.components.create_graph_component_from_pipleine_func\r\n>I guess you may want me to create an inner pipeline as below. This way can create a pipeline successfully. But the component functions of the inner pipeline are defined in the same scope of the outer pipeline.\r\n\r\nI'm not sure I understand why this is a problem.\r\n\r\n>two component functions, which are declared as staticmethod of this class.\r\n\r\nIs there a reason why they cannot just be normal functions outside of the class? I'm not sure I understand why the class is needed.\r\n\r\nP.S.\r\nThe low-level scenario you're showing seems pretty weird. I wonder whether there is a simpler way to do what you're doing.\r\nFor example, `func_to_container_op(self.add)(a, b)` is a pretty weird construct.\r\nThe usual flow is:\r\n\r\nCreate shareable components:\r\n```python\r\ndef my_func1(): pass\r\n\r\ncreate_component_from_func(my_func1, output_component_file='my_func1/component.yaml')\r\n# Commit `component.yaml` and push to GitHub\r\n```\r\n[Optional] then optionally build a pipeline component:\r\n\r\n```python\r\nmy_func1_op = load_component_from_url('https://.../my_func1/component.yaml')\r\nmy_func2_op = load_component_from_url('https://.../my_func2/component.yaml')\r\n\r\ndef my_graph1():\r\n result1 = my_func1_op().outputs['out11']\r\n result2 = my_func2_op().outputs['out21']\r\n return {\r\n 'GraphOut1': result1,\r\n 'GraphOut2': result2,\r\n }\r\n\r\ncreate_graph_component_from_pipeline_func(my_graph1, output_component_file='my_graph1/component.yaml')\r\n# Commit `component.yaml` and push to GitHub\r\n```\r\n\r\nThen write a pipeline\r\n```python\r\nmy_graph1_op = load_component_from_url('https://.../my_graph1/component.yaml')\r\nmy_func3_op = load_component_from_url('https://.../my_func3/component.yaml')\r\n\r\ndef my_pipeline():\r\n graph_result1 = my_graph1_op().outputs['GraphOut1']\r\n my_func3_op(in1=graph_result1 )\r\n```\r\nand submit it for execution:\r\n```python\r\nkfp.Client().create_run_from_pipeline_func(my_pipeline)\r\n```\r\nOr compile it:\r\n```python\r\nkfp.compiler.Compiler()compile(my_pipeline, 'my.pipeline.yaml')\r\n```\r\nAnd submit the compiled pipeline:\r\n```python\r\nkfp.Client().create_run_from_pipeline_package('my.pipeline.yaml')\r\n```\r\n\r\nThings are built step by step. Components are reused between pipelines (and graph components).\r\n\r\nI'm not sure I understand the need to do everything recursively, not just sequentially.",
"Hi, @Ark-kun. Thanks for your quick reply. I agree with the above pipeline-construct process.\r\n\r\n>Is there a reason why they cannot just be normal functions outside of the class? I'm not sure I understand why the class is needed.\r\n\r\nOf course, they can. Perhaps the reason for my weird way is that I what to include everything need to construct a pipeline in a class scope :joy:. It can make the codes in `create_job_pipeline_in_pod()` simpler.\r\n\r\nAll in all, my problem has been solved with your help.\r\n\r\nMany thanks. Hope you will enjoy a good weekend :blush:. "
] | 2021-05-05T10:57:40 | 2021-06-03T00:19:18 | 2021-06-03T00:19:18 |
NONE
| null |
### Environment
* Kubernetes: 1.18.9
* KFP version: 1.4.0
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP SDK version: 1.4.0
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* All dependencies version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
```shell
$ pip list | grep kfp
kfp 1.4.0
kfp-pipeline-spec 0.1.7
kfp-server-api 1.5.0
```
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
I want to launch a new pipeline (not a local run) from one component, named `create_pipeline_in_pod`, of the pipeline named `outer_pipeline`. The demo code is below.
```python
import kfp
import kfp.dsl as dsl
from kfp.components import func_to_container_op

EXP_NAME = 'demo'
RUN_NAME = 'outer_pipeline'


def create_pipeline_in_pod(data_dir: str):
    import kfp
    import kfp.dsl as dsl
    from kfp.components import func_to_container_op
    from typing import NamedTuple

    print('create_pipeline_in_pod')

    EXP_NAME = 'demo'
    RUN_NAME = 'pipeline_in_pod'

    @func_to_container_op
    def add(
        a: int,
        b: int  # todo: can't recognize return type
    ) -> NamedTuple('output', [('func_name', str), ('result', int)]):
        from collections import namedtuple
        output = namedtuple('output', ['func_name', 'result'])
        return output('add_two_numbers', a + b)

    @func_to_container_op
    def show_result(func_name, result):
        print(f'{func_name} result: {result}')

    @dsl.pipeline(name=RUN_NAME)
    def inpod_pipeline_func(a: int, b: int):
        add_task = add(a, b)
        show_result(add_task.outputs['func_name'], add_task.outputs['result'])

    if __name__ == '__main__':
        # compile and submit
        print('begin compile in pod...')
        kfp.compiler.Compiler().compile(inpod_pipeline_func,
                                        f'{data_dir}/{RUN_NAME}.yaml')
        print('finish compile in pod')

        # run pipeline outside
        client = kfp.Client(host='my_k8s_master_ip:8888', namespace='kubeflow')
        client.create_run_from_pipeline_func(inpod_pipeline_func,
                                             arguments={
                                                 'a': '1',
                                                 'b': '2'
                                             },
                                             experiment_name=EXP_NAME,
                                             run_name=RUN_NAME)
        print('submit pipeline in pod')


op = func_to_container_op(create_pipeline_in_pod, packages_to_install=['kfp'])


# Define the pipeline
@dsl.pipeline(name=RUN_NAME)
def pipeline_func(data_dir: str):
    vop = dsl.VolumeOp(
        name='create_pvc',
        resource_name='data_volume',
        size='1Gi',  # if size not change, then use the same pvc
        storage_class='local-path',
        modes=dsl.VOLUME_MODE_RWO)
    op(data_dir).add_pvolumes({data_dir: vop.volume})


if __name__ == '__main__':
    print('begin compile...')
    kfp.compiler.Compiler().compile(pipeline_func, f'{RUN_NAME}.yaml')
    print('finish compile')

    client = kfp.Client(host='my_k8s_master_ip:8888', namespace='kubeflow')
    client.create_run_from_pipeline_func(pipeline_func,
                                         arguments={'data_dir': '/mnt'},
                                         experiment_name=EXP_NAME,
                                         run_name=RUN_NAME)
    print('submit pipeline')
```
The above code fails when executing `kfp.compiler.Compiler().compile(inpod_pipeline_func, f'{data_dir}/{RUN_NAME}.yaml')`.
The error messages shown in the Kubeflow Pipelines UI are below.
```sh
create_pipeline_in_pod
begin compile in pod...
Traceback (most recent call last):
File "/tmp/tmp.aHWqREfBIL", line 50, in <module>
_outputs = create_pipeline_in_pod(**_parsed_args)
File "/tmp/tmp.aHWqREfBIL", line 32, in create_pipeline_in_pod
f'{data_dir}/{RUN_NAME}.yaml')
File "/usr/local/lib/python3.7/site-packages/kfp/compiler/compiler.py", line 975, in compile
package_path=package_path)
File "/usr/local/lib/python3.7/site-packages/kfp/compiler/compiler.py", line 1029, in _create_and_write_workflow
pipeline_conf)
File "/usr/local/lib/python3.7/site-packages/kfp/compiler/compiler.py", line 861, in _create_workflow
pipeline_func(*args_list, **kwargs_dict)
File "/tmp/tmp.aHWqREfBIL", line 26, in inpod_pipeline_func
show_result(add_task.outputs['func_name'], add_task.outputs['result'])
KeyError: 'func_name'
```
I've run three ablation tests.
#### Ablation test 1: Directly compile the code inside the `create_pipeline_in_pod` function
It compiles perfectly and the Kubeflow Pipelines UI shows the correct results.
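Concretely, ablation test 1 amounts to moving the inner pipeline definition to module scope and compiling it there. A minimal standalone sketch of that version (the output path is an arbitrary placeholder, not from the original report) could look like this:
```python
import kfp
import kfp.dsl as dsl
from kfp.components import func_to_container_op
from typing import NamedTuple


@func_to_container_op
def add(a: int, b: int) -> NamedTuple('output', [('func_name', str), ('result', int)]):
    from collections import namedtuple
    output = namedtuple('output', ['func_name', 'result'])
    return output('add_two_numbers', a + b)


@func_to_container_op
def show_result(func_name, result):
    print(f'{func_name} result: {result}')


@dsl.pipeline(name='pipeline_in_pod')
def inpod_pipeline_func(a: int, b: int):
    add_task = add(a, b)
    # At module scope the NamedTuple return annotation is preserved, so both
    # named outputs ('func_name' and 'result') are available here.
    show_result(add_task.outputs['func_name'], add_task.outputs['result'])


if __name__ == '__main__':
    # Compiles without the KeyError seen in the nested-component case above.
    kfp.compiler.Compiler().compile(inpod_pipeline_func, 'pipeline_in_pod.yaml')
```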
#### Ablation test 2: Define an inner pipeline without passing small values
It also compiles perfectly and the Kubeflow Pipelines UI shows the correct results for two runs. (Yes, the two runs run in parallel under the `demo` experiment.) The code of the new `create_pipeline_in_pod` function is below.
```python
def create_pipeline_in_pod(data_dir: str):
    import kfp
    import kfp.dsl as dsl
    from kfp.components import func_to_container_op

    print('create_pipeline_in_pod')

    EXP_NAME = 'demo'
    RUN_NAME = 'pipeline_in_pod'

    @func_to_container_op
    def show_result(info):
        print(info)

    @dsl.pipeline(name=RUN_NAME)
    def inpod_pipeline_func(info):
        show_result(info)

    # compile and submit
    print('begin compile in pod...')
    kfp.compiler.Compiler().compile(inpod_pipeline_func,
                                    f'{data_dir}/{RUN_NAME}.yaml')
    print('finish compile in pod')

    client = kfp.Client(host='my_k8s_master_ip:8888', namespace='kubeflow')
    client.create_run_from_pipeline_func(inpod_pipeline_func,
                                         arguments={
                                             'info': 'hello world',
                                         },
                                         experiment_name=EXP_NAME,
                                         run_name=RUN_NAME)
    print('submit pipeline in pod')
```
#### Ablation test 3: Change the `add` function name to `_add` or `add_here`
Still the same error messages as above.
### Expected result
<!-- What should the correct behavior be? -->
The pipeline defined inside the `create_pipeline_in_pod` component compiles perfectly.
Thanks a lot.
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5589/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5588
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5588/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5588/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5588/events
|
https://github.com/kubeflow/pipelines/issues/5588
| 876,268,627 |
MDU6SXNzdWU4NzYyNjg2Mjc=
| 5,588 |
Managed storage default sql username is not root as described
|
{
"login": "halvgaard",
"id": 33832493,
"node_id": "MDQ6VXNlcjMzODMyNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33832493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/halvgaard",
"html_url": "https://github.com/halvgaard",
"followers_url": "https://api.github.com/users/halvgaard/followers",
"following_url": "https://api.github.com/users/halvgaard/following{/other_user}",
"gists_url": "https://api.github.com/users/halvgaard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/halvgaard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/halvgaard/subscriptions",
"organizations_url": "https://api.github.com/users/halvgaard/orgs",
"repos_url": "https://api.github.com/users/halvgaard/repos",
"events_url": "https://api.github.com/users/halvgaard/events{/privacy}",
"received_events_url": "https://api.github.com/users/halvgaard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"This issue gives the same errors as found here: https://stackoverflow.com/questions/66600365/gcp-ai-platform-pipelines-kfp-use-managed-storage-cloudsqlproxy-issue#",
"Thanks for reporting! This is a sub problem of https://github.com/kubeflow/pipelines/issues/5717.\r\nLet's track this issue in the more generic issue"
] | 2021-05-05T09:44:48 | 2021-06-01T08:13:55 | 2021-06-01T08:13:55 |
NONE
| null |
@Svendegroote91 Sorry for the late reply, reading through your description, it sounds like
> database username & password left empty to default to root user without password.
this part is wrong, you need to type `root` into database username, it's not the default when you leave it to empty.
_Originally posted by @Bobgy in https://github.com/kubeflow/pipelines/issues/4973#issuecomment-768226895_
The GCP AI Platform Pipelines description for managed storage also says that if the username is left empty it defaults to 'root', but the actual default is an empty string, at least when a password is provided. I had to type in 'root' to make it work.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5588/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5586
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5586/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5586/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5586/events
|
https://github.com/kubeflow/pipelines/issues/5586
| 875,992,333 |
MDU6SXNzdWU4NzU5OTIzMzM=
| 5,586 |
[Testing] samples v2 test fail with CreateContainerError
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"There were brief periods the node was in NotReady state:\r\n\r\n> Normal NodeNotReady 9m40s (x10 over 40h) kubelet Node gke-kfp-standalone-1-kfp-standalone-1-60baa0a0-mr50 status is now: NodeNotReady\r\n\r\nbut it recovered soon",
"EDIT: this is unrelated to this issue\r\n\r\nAnother symptom:\r\n> two-step-pipeline-szblc Succeeded 6d\r\n\r\nthere are workflow resources with more than 1d age, but not GCed",
"I couldn't figure out more information, the node looks healthy on GKE, but errors keep happening. The node is also still reported as healthy, so it's not auto-repaired.\r\n\r\nI'm going to taint the node first to unblock test infra and ask if there are ways to automatically handle unhealthy nodes like this.\r\n```bash\r\nkubectl taint nodes gke-kfp-standalone-1-kfp-standalone-1-60baa0a0-mr50 unhealthy=true:NoSchedule\r\n```",
"This issue might not be node related, after adding taint for mr50 node, pods on gke-kfp-standalone-1-kfp-standalone-1-df8a273f-syvn node start to fail with the same error message.",
"a second type of error when running v2 components: https://4e18c21c9d33d20f-dot-datalab-vm-staging.googleusercontent.com/#/runs/details/eefcbe66-e9b2-4da3-9bbb-5d4d66346365\r\n\r\n> Failed to execute component: rpc error: code = Aborted desc = mysql_query aborted: errno: 1213, error: Deadlock found when trying to get lock; try restarting transaction",
"Also the `CreateContainerError` error seems most common on cached pods, it might be a problem with pods running really fast?",
"BTW, the same tests on that PR already run successfully on my own cluster, so sth is wrong with the cluster. I'm going to try upgrading cluster and node version and see if that helps. (that also changes all nodes)",
"Hmm, upgrading nodes to control plane version solved the issue. Let's follow up again if this happens the second time"
] | 2021-05-05T02:47:23 | 2021-05-05T06:15:42 | 2021-05-05T06:14:46 |
CONTRIBUTOR
| null |
There are multiple types of errors I'm observing when running tests:
1. Pods fail with CreateContainerError
> Warning Failed 103s kubelet Error: context deadline exceeded
Warning FailedSync 91s (x2 over 102s) kubelet error determining status: rpc error: code = Unknown desc = Error: No such container: a3311041ff774eb4530aa3a46c05f42af0cfb1bfedfaa8569188afff225bdd10
In the pod logs, there's an error message: `failed to try resolving symlinks in path "/var/log/pods/kubeflow_pipeline-with-loop-static-tsdhk-940757082_3a3f5c55-7210-45aa-9b48-cbc9396d310b/main/0.log": lstat /var/log/pods/kubeflow_pipeline-with-loop-static-tsdhk-940757082_3a3f5c55-7210-45aa-9b48-cbc9396d310b/main/0.log: no such file or directory`, but I don't know where it comes from.
When I try to find what is failing with `kubectl get pod -o wide | grep CreateContainerError`,
the failures all happen on one node, `gke-kfp-standalone-1-kfp-standalone-1-60baa0a0-mr50`.
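As a rough illustration (not part of the original report), the same triage can be scripted with the Python Kubernetes client to confirm that the failing pods are concentrated on one node; the `kubeflow` namespace and the exact waiting reason string are assumptions:
```python
from collections import Counter

from kubernetes import client, config


def count_create_container_errors_by_node(namespace: str = 'kubeflow') -> Counter:
    """Group pods whose containers are stuck in CreateContainerError by node."""
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    core_api = client.CoreV1Api()
    failures_per_node = Counter()
    for pod in core_api.list_namespaced_pod(namespace=namespace).items:
        for status in (pod.status.container_statuses or []):
            waiting = status.state.waiting if status.state else None
            if waiting and waiting.reason == 'CreateContainerError':
                failures_per_node[pod.spec.node_name] += 1
    return failures_per_node


if __name__ == '__main__':
    for node, count in count_create_container_errors_by_node().most_common():
        print(f'{node}: {count} failing containers')
```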
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5586/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5584
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5584/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5584/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5584/events
|
https://github.com/kubeflow/pipelines/issues/5584
| 875,318,560 |
MDU6SXNzdWU4NzUzMTg1NjA=
| 5,584 |
[backend] metadata_execution_id is empty in ExecutionOutput of cache record due to race condition
|
{
"login": "ekesken",
"id": 47163,
"node_id": "MDQ6VXNlcjQ3MTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/47163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekesken",
"html_url": "https://github.com/ekesken",
"followers_url": "https://api.github.com/users/ekesken/followers",
"following_url": "https://api.github.com/users/ekesken/following{/other_user}",
"gists_url": "https://api.github.com/users/ekesken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekesken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekesken/subscriptions",
"organizations_url": "https://api.github.com/users/ekesken/orgs",
"repos_url": "https://api.github.com/users/ekesken/repos",
"events_url": "https://api.github.com/users/ekesken/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekesken/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1863015205,
"node_id": "MDU6TGFiZWwxODYzMDE1MjA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/metadata-writer",
"name": "area/metadata-writer",
"color": "60fc35",
"default": false,
"description": ""
},
{
"id": 1984652455,
"node_id": "MDU6TGFiZWwxOTg0NjUyNDU1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/execution_cache",
"name": "area/execution_cache",
"color": "edd387",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "rui5i",
"id": 31815555,
"node_id": "MDQ6VXNlcjMxODE1NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/31815555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rui5i",
"html_url": "https://github.com/rui5i",
"followers_url": "https://api.github.com/users/rui5i/followers",
"following_url": "https://api.github.com/users/rui5i/following{/other_user}",
"gists_url": "https://api.github.com/users/rui5i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rui5i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rui5i/subscriptions",
"organizations_url": "https://api.github.com/users/rui5i/orgs",
"repos_url": "https://api.github.com/users/rui5i/repos",
"events_url": "https://api.github.com/users/rui5i/events{/privacy}",
"received_events_url": "https://api.github.com/users/rui5i/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"I understand the issue.\r\nI wonder how reliable `metadata_execution_id` is. Have you seen any cases where Metadata Writer failed to set it?\r\n\r\nWe're a little bit hesitant to introduce a dependency between cache server and metadata writer services.",
"> I wonder how reliable metadata_execution_id is.\r\n\r\nI believe instead of purely relying on `metadata_execution_id`, we should add another annotation in `metadata-writer` when create event arrives for the pod, like `pipelines.kubeflow.org/metadata_enabled`, if it's not a tfx job it should be set to true, and in cache-server it should wait until metadata_execution_id gets filled for the metadata enabled pods.\r\n\r\n> Have you seen any cases where Metadata Writer failed to set it?\r\n\r\nwe had issues from time to time, but I can't give any concrete example/scenario, we were just mitigating the problems by deleting metadata-grpc and metadata-writer pods blindly without checking for a root cause.\r\n\r\n> We're a little bit hesitant to introduce a dependency between cache server and metadata writer services.\r\n\r\nBut this has a serious impact for users, no? if this metadata linking is broken in the cache entry, they won't be able to see anything in metadata tab in UI for re-executions of the same step.\r\n",
"The issue is avoided in v2 compatible and v2 caching design.\r\nSee bit.ly/kfp-v2-compatible and bit.ly/kfp-v2"
] | 2021-05-04T10:20:28 | 2021-06-11T00:49:43 | 2021-06-11T00:49:43 |
CONTRIBUTOR
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
We deploy it via ArgoCD.
* KFP version: 1.0.4
Two components involved in issue:
* ml-pipeline/metadata-writer:1.0.4
* ml-pipeline/cache-server:1.0.4
### Steps to reproduce
This is a race condition between metadata-writer and cache-server, so it is not easy to reproduce, but we observe it quite frequently.
If `cache-server/watcher` receives the pod-completed event before `metadata_writer` can inject the `pipelines.kubeflow.org/metadata_execution_id` label, the metadata link is missing from the cache record forever.
### Expected result
The `pipelines.kubeflow.org/metadata_execution_id` label value in the ExecutionOutput field of the cache record should reflect the real id instead of an empty value, so that the linked metadata remains visible in the UI for later re-runs of cached steps.
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
Both components watch for pod events to take action; see:
https://github.com/kubeflow/pipelines/blob/1.0.4/backend/src/cache/server/watcher.go#L30-L91
https://github.com/kubeflow/pipelines/blob/1.0.4/backend/metadata_writer/src/metadata_writer.py#L129-L258
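For illustration only, a minimal Python sketch of the mitigation suggested in the comments above: have metadata-writer mark pods it will process, so the cache server knows to wait for `metadata_execution_id` instead of persisting an empty value. The label name and helper are assumptions, not part of the current implementation.
```python
# Hypothetical sketch of the suggested mitigation; not the actual metadata-writer code.
from kubernetes import client, config

METADATA_ENABLED_LABEL = "pipelines.kubeflow.org/metadata_enabled"  # assumed name

def mark_pod_metadata_enabled(namespace: str, pod_name: str) -> None:
    """Label the pod so the cache server can wait for metadata_execution_id."""
    config.load_incluster_config()  # metadata-writer runs inside the cluster
    api = client.CoreV1Api()
    patch = {"metadata": {"labels": {METADATA_ENABLED_LABEL: "true"}}}
    api.patch_namespaced_pod(name=pod_name, namespace=namespace, body=patch)
```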
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5584/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5579
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5579/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5579/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5579/events
|
https://github.com/kubeflow/pipelines/issues/5579
| 874,306,190 |
MDU6SXNzdWU4NzQzMDYxOTA=
| 5,579 |
In the metacontroller statefulset, the container securityContext block sets allowPrivilegeEscalation and privileged to true. Does the controller container really need those privileges?
|
{
"login": "orugantichetan",
"id": 69839506,
"node_id": "MDQ6VXNlcjY5ODM5NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/69839506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orugantichetan",
"html_url": "https://github.com/orugantichetan",
"followers_url": "https://api.github.com/users/orugantichetan/followers",
"following_url": "https://api.github.com/users/orugantichetan/following{/other_user}",
"gists_url": "https://api.github.com/users/orugantichetan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orugantichetan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orugantichetan/subscriptions",
"organizations_url": "https://api.github.com/users/orugantichetan/orgs",
"repos_url": "https://api.github.com/users/orugantichetan/repos",
"events_url": "https://api.github.com/users/orugantichetan/events{/privacy}",
"received_events_url": "https://api.github.com/users/orugantichetan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"@orugantichetan can you raise the issue in metacontroller repo instead?"
] | 2021-05-03T07:19:41 | 2021-06-11T00:50:37 | 2021-06-11T00:50:37 |
NONE
| null |
/kind question
/kind/question
In the metacontroller statefulset, the container securityContext block sets allowPrivilegeEscalation and privileged to true. Does the container really need those privileges?
In the metacontroller repo those privileged permissions are not set: https://github.com/metacontroller/metacontroller/blob/v0.3.0/manifests/metacontroller.yaml
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5579/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5578
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5578/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5578/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5578/events
|
https://github.com/kubeflow/pipelines/issues/5578
| 874,272,402 |
MDU6SXNzdWU4NzQyNzI0MDI=
| 5,578 |
Can metacontroller be upgraded to the latest version from v0.3.0?
|
{
"login": "orugantichetan",
"id": 69839506,
"node_id": "MDQ6VXNlcjY5ODM5NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/69839506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orugantichetan",
"html_url": "https://github.com/orugantichetan",
"followers_url": "https://api.github.com/users/orugantichetan/followers",
"following_url": "https://api.github.com/users/orugantichetan/following{/other_user}",
"gists_url": "https://api.github.com/users/orugantichetan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orugantichetan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orugantichetan/subscriptions",
"organizations_url": "https://api.github.com/users/orugantichetan/orgs",
"repos_url": "https://api.github.com/users/orugantichetan/repos",
"events_url": "https://api.github.com/users/orugantichetan/events{/privacy}",
"received_events_url": "https://api.github.com/users/orugantichetan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"We will upgrade metacontroller and investigate if there is any breaking change in this major version bump.\r\n\r\nAnother option is to remove metacontroller, and use kubeoperator or helm instead.",
"The following is possible. If you are interested in a merge request please tell me.\r\n\r\nI \r\n1. upgraded to 2.0.4 \r\n2. removed insane permissions (e.g. privileged root container) \r\n3. removed insane cpu and memory ressources (4 cores 4 GB)\r\n4. Added networkattachmentdefinitions for rootless istio-cni on Openshift and updated the profile-controller accordingly\r\n5. restricted the namespaces to kubeflow profile namespaces\r\n6. Reduced to a sane amount of logging\r\n\r\nSadly it still needs cluster admin-rights, but mabye someone has an idea on how to restrict it."
] | 2021-05-03T06:27:59 | 2021-09-14T12:36:32 | 2021-09-14T12:36:32 |
NONE
| null |
/kind question
/kind/question
Can metacontroller be upgraded to the latest version from v0.3.0?
From v1.2.0, metacontroller has support for a distroless image.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5578/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5577
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5577/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5577/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5577/events
|
https://github.com/kubeflow/pipelines/issues/5577
| 874,046,227 |
MDU6SXNzdWU4NzQwNDYyMjc=
| 5,577 |
[sdk] `kfp run submit` fails when parameter values contain '=' with "dictionary update sequence element"
|
{
"login": "lebiathan",
"id": 7630467,
"node_id": "MDQ6VXNlcjc2MzA0Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7630467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lebiathan",
"html_url": "https://github.com/lebiathan",
"followers_url": "https://api.github.com/users/lebiathan/followers",
"following_url": "https://api.github.com/users/lebiathan/following{/other_user}",
"gists_url": "https://api.github.com/users/lebiathan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lebiathan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lebiathan/subscriptions",
"organizations_url": "https://api.github.com/users/lebiathan/orgs",
"repos_url": "https://api.github.com/users/lebiathan/repos",
"events_url": "https://api.github.com/users/lebiathan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lebiathan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "NikeNano",
"id": 22057410,
"node_id": "MDQ6VXNlcjIyMDU3NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikeNano",
"html_url": "https://github.com/NikeNano",
"followers_url": "https://api.github.com/users/NikeNano/followers",
"following_url": "https://api.github.com/users/NikeNano/following{/other_user}",
"gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions",
"organizations_url": "https://api.github.com/users/NikeNano/orgs",
"repos_url": "https://api.github.com/users/NikeNano/repos",
"events_url": "https://api.github.com/users/NikeNano/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikeNano/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "NikeNano",
"id": 22057410,
"node_id": "MDQ6VXNlcjIyMDU3NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikeNano",
"html_url": "https://github.com/NikeNano",
"followers_url": "https://api.github.com/users/NikeNano/followers",
"following_url": "https://api.github.com/users/NikeNano/following{/other_user}",
"gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions",
"organizations_url": "https://api.github.com/users/NikeNano/orgs",
"repos_url": "https://api.github.com/users/NikeNano/repos",
"events_url": "https://api.github.com/users/NikeNano/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikeNano/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign\r\n\r\nWill take a look. "
] | 2021-05-02T21:14:47 | 2021-05-16T07:44:45 | 2021-05-16T07:44:45 |
NONE
| null |
### Environment
* KFP version:
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP SDK version: 1.4.0, 1.5.0
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* All dependencies version: n/a
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Steps to reproduce
* Compile test pipeline below (run py script)
```py
import kfp as kfp
@kfp.components.func_to_container_op
def print_func(param: str):
print(str(param))
return
@kfp.dsl.pipeline(name='pipeline')
def pipeline(param: str):
print_func(param)
return
if __name__ == '__main__':
kfp.compiler.Compiler().compile(pipeline, __file__ + ".zip")
```
* Upload to KF
* Grab the `pipeline-id` to use it in the command line sdk
* Using the command line SDK, run the following:
```sh
$ kfp run submit --pipeline-id <pipeline-id> --experiment-name 'Test = parsing' param=12345 # Succeeds
$ kfp run submit --pipeline-id <pipeline-id> --experiment-name 'Test = parsing' param=some_name=4567 # Fails
dictionary update sequence element #0 has length 3; 2 is required
```
Running the parameter directly from the KFP UI with value `some_name=4567` works fine and prints `some_name=4567`.
#### Problem Explanation
The pipeline expects an input parameter named `param` to run and it prints the value of that param. The SDK will parse pipeline parameters on the `=` sign [here](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/cli/run.py#L79). The value contains an `=` sign, so we are trying to update the `dict()` with `('param', 'some_name', '4567')` when the goal is to update it with `('param', 'some_name=4567')`.
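The failure can be reproduced in plain Python, independent of KFP (illustrative only):
```py
args = ["param=some_name=4567"]

# Splitting on every '=' produces a 3-element sequence, which dict() rejects:
dict(arg.split("=") for arg in args)
# ValueError: dictionary update sequence element #0 has length 3; 2 is required
```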
#### Problem Significance
The equality sign `=` is typically used to partition datasets in distributed storage (S3, HDFS, etc) leading to paths like
```
hdfs://path/to/table/year_partition=2020/month_partition=01/other_partition=other_value/file1.parquet
hdfs://path/to/table/year_partition=2020/month_partition=02/other_partition=other_value/file1.parquet
...
```
Due to the problem discussed above, we cannot use the CLI SDK to provide parameters that contain the `=` sign. However, we _can_ use the KFP UI and pass such parameters, leading to inconsistent behavior.
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
<!-- What should the correct behavior be? -->
Running
```sh
$ kfp run submit --pipeline-id <pipeline-id> --experiment-name 'Test = parsing' param=some_name=4567
```
should succeed just like running the pipeline via the KFP UI with the value `some_name=4567` for the run parameter **param**.
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
#### Suggested Solution
[This line](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/cli/run.py#L79) should be
```py
arg_dict = dict(arg.split('=', maxsplit=1) for arg in args)
```
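With `maxsplit=1`, only the first `=` separates the name from the value, so the rest of the string is preserved:
```py
args = ["param=some_name=4567", "other=plain_value"]
arg_dict = dict(arg.split("=", maxsplit=1) for arg in args)
print(arg_dict)  # {'param': 'some_name=4567', 'other': 'plain_value'}
```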
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5577/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5576
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5576/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5576/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5576/events
|
https://github.com/kubeflow/pipelines/issues/5576
| 873,897,681 |
MDU6SXNzdWU4NzM4OTc2ODE=
| 5,576 |
[backend] pipelines upload fails without parameters
|
{
"login": "padrian2s",
"id": 423459,
"node_id": "MDQ6VXNlcjQyMzQ1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/423459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padrian2s",
"html_url": "https://github.com/padrian2s",
"followers_url": "https://api.github.com/users/padrian2s/followers",
"following_url": "https://api.github.com/users/padrian2s/following{/other_user}",
"gists_url": "https://api.github.com/users/padrian2s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padrian2s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padrian2s/subscriptions",
"organizations_url": "https://api.github.com/users/padrian2s/orgs",
"repos_url": "https://api.github.com/users/padrian2s/repos",
"events_url": "https://api.github.com/users/padrian2s/events{/privacy}",
"received_events_url": "https://api.github.com/users/padrian2s/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign",
"I don`t get this issue on the latest version of Kubeflow pipelines. Could which of the `XGB` examples did you use @padrian2s.",
"I use the default XGB after KF install, the one that is already published into DB. \r\n@NikeNano ",
"I vaguely remember this is already fixed in later KFP version",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-05-02T09:46:21 | 2022-03-03T06:05:11 | 2022-03-03T06:05:11 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
deploy manually (kustomize build)
* KFP version:
KFP 1.3
* KFP SDK version:
kfp 1.4.0
kfp-pipeline-spec 0.1.7
kfp-server-api 1.5.0
### Steps to reproduce
Upload the XGBoost sample pipeline, which has no pipeline input parameters.
{"error_message":"Error creating pipeline: Create pipeline failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: json: cannot unmarshal array into Go value of type v1alpha1.Workflow","error_details":"Error creating pipeline: Create pipeline failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: json: cannot unmarshal array into Go value of type v1alpha1.Workflow"}
### Expected result
TBD
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5576/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5565
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5565/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5565/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5565/events
|
https://github.com/kubeflow/pipelines/issues/5565
| 871,318,727 |
MDU6SXNzdWU4NzEzMTg3Mjc=
| 5,565 |
[frontend] The sort and filter of artifacts do not work
|
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"<img width=\"1192\" alt=\"Screen Shot 2021-04-29 at 11 33 56 AM\" src=\"https://user-images.githubusercontent.com/52932582/116601525-763fd400-a8df-11eb-86aa-9be8708b2715.png\">\r\n",
"Duplicate of https://github.com/kubeflow/pipelines/issues/3226"
] | 2021-04-29T18:35:05 | 2021-05-18T14:30:44 | 2021-05-18T14:30:44 |
CONTRIBUTOR
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)? Kubeflow Pipeline Standalone with GCP
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version: 1.5.0
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
### Steps to reproduce
Submit a pipeline which will generate artifacts, then submit the pipeline run.
Go to the UI page that shows the artifact list and click `Created At` to sort by creation time. In addition, the filter also does not work.
<!-- Deploy a pipeline which will generate artifacts, submit a pipeline run
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
<!-- What should the correct behavior be? -->
The sort behavior should work
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5565/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5564
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5564/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5564/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5564/events
|
https://github.com/kubeflow/pipelines/issues/5564
| 870,726,124 |
MDU6SXNzdWU4NzA3MjYxMjQ=
| 5,564 |
[Doc] Update "Visualize Results in the Pipelines UI" page
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Update area:\r\n\r\n```\r\n with open('/mlpipeline-ui-metadata.json', 'w') as f:\r\n json.dump(metadata, f)\r\n```",
"Example of outputting mlpipeline-ui-metadata in component yaml component: https://github.com/kubeflow/pipelines/blob/d362cab2046b92efa5f015c6a980bc9371b4b63b/components/tensorflow/tensorboard/prepare_tensorboard/component.yaml\r\n\r\nExample of outputting mlpipeline-ui-metadata in python lightweight component -- option 1 as return value:\r\nhttps://github.com/kubeflow/pipelines/blob/d9c019641ef9ebd78db60cdb78ea29b0d9933008/samples/core/lightweight_component/lightweight_component.ipynb\r\n\r\nwe don't have a good example for emitting mlpipeline-ui-metadata with OutputPath in lightweight component, I think we need an example. For OutputPath, an example is https://github.com/kubeflow/pipelines/blob/d9c019641ef9ebd78db60cdb78ea29b0d9933008/components/dataset_manipulation/split_data_into_folds/in_CSV/component.py#L16\r\n"
] | 2021-04-29T07:57:41 | 2021-07-16T00:41:56 | 2021-07-16T00:41:56 |
CONTRIBUTOR
| null |
https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/
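For context, the page documents writing an `mlpipeline-ui-metadata.json` file from a component. A minimal sketch of that pattern in a lightweight Python component (the function and path argument are illustrative; in a real component the path would typically come from `kfp.components.OutputPath()`):
```python
import json

def visualize(mlpipeline_ui_metadata_path: str):
    """Write UI metadata that the KFP UI renders as an inline markdown viewer."""
    metadata = {
        "outputs": [
            {"type": "markdown", "storage": "inline", "source": "# Hello from a component"}
        ]
    }
    with open(mlpipeline_ui_metadata_path, "w") as f:
        json.dump(metadata, f)
```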
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5564/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5560
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5560/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5560/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5560/events
|
https://github.com/kubeflow/pipelines/issues/5560
| 870,155,056 |
MDU6SXNzdWU4NzAxNTUwNTY=
| 5,560 |
[backend] OutputArtifact path is None
|
{
"login": "wilbry",
"id": 36278506,
"node_id": "MDQ6VXNlcjM2Mjc4NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36278506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilbry",
"html_url": "https://github.com/wilbry",
"followers_url": "https://api.github.com/users/wilbry/followers",
"following_url": "https://api.github.com/users/wilbry/following{/other_user}",
"gists_url": "https://api.github.com/users/wilbry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilbry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilbry/subscriptions",
"organizations_url": "https://api.github.com/users/wilbry/orgs",
"repos_url": "https://api.github.com/users/wilbry/repos",
"events_url": "https://api.github.com/users/wilbry/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilbry/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2975820904,
"node_id": "MDU6TGFiZWwyOTc1ODIwOTA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/v2",
"name": "area/v2",
"color": "A27925",
"default": false,
"description": ""
}
] |
open
| false |
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"I've tested as similar approach using 1.6.0.rc0 here:\r\n\r\n```\r\nimport kfp\r\nimport kfp.components as comp\r\nimport kfp.v2.dsl as dsl\r\nfrom kfp.v2 import compiler\r\nfrom kfp.v2.dsl import (\r\n component,\r\n Input,\r\n Output,\r\n Artifact,\r\n Dataset,\r\n Metrics,\r\n Model\r\n)\r\n\r\n\r\nfrom typing import NamedTuple\r\n#import kfp.components as comp\r\n#from kfp import compiler\r\n#import kfp.dsl as dsl\r\n#from kfp.components import OutputPath\r\n#from kfp.components import InputPath\r\n\r\n@component(\r\n packages_to_install=['sklearn'],\r\n output_component_file='download_data_component_sdk_v2.yaml'\r\n)\r\ndef download_data(output_data: Output[Dataset]):\r\n\r\n import json\r\n\r\n import argparse\r\n from pathlib import Path\r\n\r\n from sklearn.datasets import load_breast_cancer\r\n from sklearn.model_selection import train_test_split\r\n\r\n # Gets and split dataset\r\n x, y = load_breast_cancer(return_X_y=True)\r\n x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)\r\n\r\n # Creates `data` structure to save and \r\n # share train and test datasets.\r\n data = {'x_train' : x_train.tolist(),\r\n 'y_train' : y_train.tolist(),\r\n 'x_test' : x_test.tolist(),\r\n 'y_test' : y_test.tolist()}\r\n\r\n # Creates a json object based on `data`\r\n data_json = json.dumps(data)\r\n\r\n # Saves the json object into a file\r\n with open(output_data.path, 'w') as out_file:\r\n json.dump(data_json, out_file)\r\n\r\n@component(\r\n packages_to_install=['sklearn'],\r\n output_component_file='train_component_sdk_v2.yaml'\r\n)\r\ndef train(trainData: Input[Dataset], modelData: Output[Model], mlpipeline_metrics: Output[Metrics])-> str:\r\n import json\r\n from typing import NamedTuple\r\n from collections import namedtuple\r\n from sklearn.metrics import accuracy_score\r\n from sklearn.tree import DecisionTreeClassifier\r\n from joblib import dump\r\n import os\r\n\r\n # Open and reads file \"data\"\r\n with open(trainData) as data_file:\r\n data = json.load(data_file) \r\n \r\n data = json.loads(data)\r\n\r\n x_train = data['x_train']\r\n y_train = data['y_train']\r\n x_test = data['x_test']\r\n y_test = data['y_test']\r\n \r\n # Initialize and train the model\r\n model = DecisionTreeClassifier(max_depth=3)\r\n model.fit(x_train, y_train)\r\n\r\n # Get predictions\r\n y_pred = model.predict(x_test)\r\n \r\n # Get accuracy\r\n accuracy = accuracy_score(y_test, y_pred)\r\n\r\n # Save output into file\r\n #with open(args.accuracy, 'w') as accuracy_file:\r\n # accuracy_file.write(str(accuracy))\r\n\r\n # Exports two sample metrics:\r\n metrics = {\r\n 'metrics': [{\r\n 'name': 'accuracy',\r\n 'numberValue': float(accuracy),\r\n 'format': \"PERCENTAGE\"\r\n }]} \r\n \r\n with open(mlpipeline_metrics.path, 'w') as f:\r\n json.dump(metrics, f)\r\n \r\n \r\n dump(model, os.path.join(modelData, 'model.joblib'))\r\n \r\n return modelData.uri\r\n\r\nkfserving_op = comp.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/kfserving/component.yaml')\r\n\r\ndef kubeflow_deploy_op():\r\n return dsl.ContainerOp(\r\n name = 'deploy',\r\n image = KUBEFLOW_DEPLOYER_IMAGE,\r\n arguments = [\r\n '--model-export-path', model_path,\r\n '--server-name', model_name,\r\n ]\r\n )\r\n\r\[email protected](\r\n name='iris-deploy-pipeline-3',\r\n description='Pipeline and deploy for IRIS.'\r\n)\r\ndef sklearn_pipeline(my_num: int = 1000, \r\n my_name: str = 'some text', \r\n my_url: str = 'http://example.com'):\r\n download_task = 
download_data() # The download_data_op factory function returns\r\n # a dsl.ContainerOp class instance. \r\n\r\n train_task = train(download_task.output) \r\n \r\n kfserving_op(action='apply', model_name='sklearn-example',model_uri=train_task.outputs['Output'],framework='sklearn')\r\n \r\n# Specify argument values for your pipeline run.\r\narguments = {'a': '7', 'b': '8'}\r\n\r\n# Create a pipeline run, using the client you initialized in a prior step.\r\ncompiler.Compiler().compile(\r\n pipeline_func=sklearn_pipeline,\r\n package_path='sklearn_pipeline_sdk_v2.yaml')\r\n```\r\n\r\nUnfortunately it throws a weird error:\r\n\r\nTypeError: Passing value \"True\" with type \"String\" (as \"Parameter\") to component input \"Enable Istio Sidecar\" with type \"Bool\" (as \"Artifact\") is incompatible. Please fix the type of the component input.",
"The packaged KFServing component isn't currently compatible with V2, due to the Bool type no longer being supported as far as I know. The issue I got happens when that is fixed. \r\n",
"/assign",
"Hi @wilbry, thank you for your efforts porting kfserving component to use v2 semantics!\r\nSorry it took a while until I have time to investigate.\r\n\r\nYour initial post uses OutputArtifact, but it doesn't exist. Please refer to our updated documentation: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/python-function-components/\r\n",
"OutputArtifact(Model) should be changed to Output[Model]\r\n\r\n@wilbry can you share your kfserving_v2.yaml component? I fixed your error, but cannot proceed, because I'm not sure what changes you made to kfserving_v2.yaml.",
"Quite a few fixes have been released to v2 compatible mode released in kfp sdk 1.7.0 and KFP backend 1.7.0-rc.3 is out. It'll be super helpful if you can have a try on the latest releases.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-04-28T16:53:15 | 2022-03-03T00:05:17 | null |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Deployed Standalone on k3s
* KFP version:
1.5.0
* KFP SDK version:
1.5.0rc3
### Steps to reproduce
Create a lightweight component using OutputArtifact, compile it with the V1 compiler in V2-compatible mode, and run it through the UI. The following error appears, I think while trying to create the output artifact directory.
```bash
Traceback (most recent call last):
File "/tmp/tmp.VtSD5WzDcx", line 594, in <module>
executor_main()
File "/tmp/tmp.VtSD5WzDcx", line 588, in executor_main
function_to_execute=function_to_execute)
File "/tmp/tmp.VtSD5WzDcx", line 353, in __init__
artifacts_list[0])
File "/tmp/tmp.VtSD5WzDcx", line 366, in _make_output_artifact
return OutputArtifact(artifact_type=type(artifact), artifact=artifact)
File "/tmp/tmp.VtSD5WzDcx", line 296, in __init__
os.makedirs(self.path, exist_ok=True)
File "/usr/local/lib/python3.7/os.py", line 208, in makedirs
head, tail = path.split(name)
File "/usr/local/lib/python3.7/posixpath.py", line 107, in split
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
F0428 16:41:59.832704 37 main.go:56] Failed to successfuly execute component: exit status 1
goroutine 1 [running]:
github.com/golang/glog.stacks(0xc000c80300, 0xc000652000, 0x61, 0xb6)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:769 +0xb9
github.com/golang/glog.(*loggingT).output(0x2890520, 0xc000000003, 0xc0000fe000, 0x27d6a2a, 0x7, 0x38, 0x0)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:720 +0x3b3
github.com/golang/glog.(*loggingT).printf(0x2890520, 0x3, 0x1b2c9c8, 0x2b, 0xc000a65f58, 0x1, 0x1)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:655 +0x153
github.com/golang/glog.Fatalf(...)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:1148
main.main()
/build/cmd/launch/main.go:56 +0x357
```
### Expected result
The artifact should be created without error, so that the pipeline can run successfully.
### Materials and Reference
I am working on a sample case in response to the discussion taking place in #5453. The code at the moment is
```python
from kfp import components
from kfp.dsl.io_types import OutputArtifact, Model
from kfp.dsl import pipeline, PipelineExecutionMode
from kfp.compiler import Compiler
def train_model(model: OutputArtifact(Model)) -> str:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from joblib import dump
import os
fake_features, fake_labels = make_classification(n_features = 5)
classifier = RandomForestClassifier(n_estimators=3)
classifier.fit(fake_features, fake_labels)
dump(classifier, os.path.join(model.path, 'model.joblib'))
return model.uri
train_model_op = components.create_component_from_func_v2(train_model, packages_to_install=['sklearn'])
kf_serving_op = components.load_component_from_file('./components/kfserving_v2.yaml')
@pipeline(
name='output_artifact_sample',
description='Demonstrate using OutputArtifact URI information in next component',
pipeline_root='minio://sample_output_artifiact_root'
)
def simple_kf_serving_pipeline():
train_step = train_model_op()
kf_serving_op(
action = 'apply',
model_name = 'output_artifact_sample',
model_uri = train_step.outputs['Output'],
framework = 'sklearn',
service_account= 'sa',
namespace='default')
def main():
Compiler(mode = PipelineExecutionMode.V2_COMPATIBLE).compile(simple_kf_serving_pipeline, 'sample_artifact_pipeline.yaml')
if __name__ == "__main__":
main()
```
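The later comments suggest using the v2-style `Output[Model]` annotation instead of `OutputArtifact`. A hedged sketch of what the training step might look like with that API (package names and the directory handling are assumptions, not a verified fix):
```python
from kfp.v2.dsl import component, Output, Model

@component(packages_to_install=["scikit-learn", "joblib"])
def train_model(model: Output[Model]) -> str:
    import os
    from joblib import dump
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    features, labels = make_classification(n_features=5)
    classifier = RandomForestClassifier(n_estimators=3)
    classifier.fit(features, labels)

    # model.path is populated by the launcher; treated as a directory here.
    os.makedirs(model.path, exist_ok=True)
    dump(classifier, os.path.join(model.path, "model.joblib"))
    return model.uri
```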
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5560/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5560/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5558
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5558/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5558/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5558/events
|
https://github.com/kubeflow/pipelines/issues/5558
| 870,072,899 |
MDU6SXNzdWU4NzAwNzI4OTk=
| 5,558 |
[feature] Configurable KFP Artifact types while storing to Minio
|
{
"login": "rado-parrak",
"id": 11147983,
"node_id": "MDQ6VXNlcjExMTQ3OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/11147983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rado-parrak",
"html_url": "https://github.com/rado-parrak",
"followers_url": "https://api.github.com/users/rado-parrak/followers",
"following_url": "https://api.github.com/users/rado-parrak/following{/other_user}",
"gists_url": "https://api.github.com/users/rado-parrak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rado-parrak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rado-parrak/subscriptions",
"organizations_url": "https://api.github.com/users/rado-parrak/orgs",
"repos_url": "https://api.github.com/users/rado-parrak/repos",
"events_url": "https://api.github.com/users/rado-parrak/events{/privacy}",
"received_events_url": "https://api.github.com/users/rado-parrak/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] |
open
| false | null |
[] | null |
[
"Not that familiar with kfserving, but with respect to #5453.\r\n\r\nCan I assume there are 2 separate issues?\r\n\r\n- for model retrieval, kfserving only support local model or http/https model (#5453)\r\n- for remote model artifacts, it expects an unarchived model binary? (This ticket)\r\n\r\n\r\n",
"Yes indeed, these are two separate issues, stemming from 2 separate use cases. Thx.",
"Love this idea. I am new to Kubeflow and was implementing a \"normal\" pipeline but looks like model serving from another pipeline step that trains model isn't well supported, and found this issue.",
"#1640 probably can solve this issue w/o changing the sdk.",
"FYI, v2 compatible mode does not compress artifacts by default already.\r\nhttps://www.kubeflow.org/docs/components/pipelines/sdk/v2/",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @rado-parrak , may I ask eventually how your team solve this case? We meet the exact same case, and are not quite sure which way to proceed...thanks!",
"Hi @yangyang919, we actually did not solve it at the time. I left the project a couple of months ago, so I do not know if there was a resolution later. Sorry for being a dead end.",
"> FYI, v2 compatible mode does not compress artifacts by default already. https://www.kubeflow.org/docs/components/pipelines/sdk/v2/\r\n\r\nHi @Bobgy , could you point me where I can find this setting? Coz I found the model still gets compressed after pipeline running"
] | 2021-04-28T15:24:38 | 2022-02-21T08:55:19 | null |
NONE
| null |
### Feature Area
/area sdk
### What feature would you like to see?
We are trying to marry "KF Pipelines + KF Artifacts + KF Serving" in a nice use-case where we train a (sklearn) model using KFP, store it as an output artifact to the artifact store of pipelines (minio) and then deploy that model directly from the artifact store using KFServing.
### What is the use case or pain point?
The problem we are facing is that output artifacts are stored as .tgz by default, so even a .joblib model blob is wrapped into a .tgz when the artifact is created in minio. On the other end, the default KFServing application server expects a .joblib file, not a .tgz.
There are at least two ways to tackle this issue:
(1) On the KFP side, by adding some kind of flag that would allow dumping some artifacts to minio uncompressed (i.e. directly as .joblib in our case) - something we already discussed with @eterna2 in the Kubeflow Slack, or
(2) On the KFServing side, by adding functionality to the storage-initializer so that it can unpack the .tgz at inference service creation time. This suggestion was made in this [ticket](https://github.com/kubeflow/kfserving/issues/1569).
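As a rough illustration of option (2), unpacking such an archive takes only a few lines with the standard library (this is not the storage-initializer's actual code, just a sketch of the behavior being requested):
```python
import tarfile

def extract_model_archive(archive_path: str, target_dir: str) -> None:
    """Unpack a model .tgz (e.g. pulled from minio) so the server can find model.joblib."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(path=target_dir)
```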
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5558/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5558/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5556
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5556/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5556/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5556/events
|
https://github.com/kubeflow/pipelines/issues/5556
| 869,323,554 |
MDU6SXNzdWU4NjkzMjM1NTQ=
| 5,556 |
[Multi User] Kubeflow Pipelines Missing Headers
|
{
"login": "RoyerRamirez",
"id": 29738034,
"node_id": "MDQ6VXNlcjI5NzM4MDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/29738034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RoyerRamirez",
"html_url": "https://github.com/RoyerRamirez",
"followers_url": "https://api.github.com/users/RoyerRamirez/followers",
"following_url": "https://api.github.com/users/RoyerRamirez/following{/other_user}",
"gists_url": "https://api.github.com/users/RoyerRamirez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RoyerRamirez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RoyerRamirez/subscriptions",
"organizations_url": "https://api.github.com/users/RoyerRamirez/orgs",
"repos_url": "https://api.github.com/users/RoyerRamirez/repos",
"events_url": "https://api.github.com/users/RoyerRamirez/events{/privacy}",
"received_events_url": "https://api.github.com/users/RoyerRamirez/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[] | 2021-04-27T22:42:07 | 2021-05-18T17:59:44 | 2021-05-18T17:59:44 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Hi, we deployed Kubeflow using the [kfctl_k8s_istio.v1.2.0.yaml](https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml) file.
* KFP version:
Kubeflow Pipelines Version: 1.0.4
* KFP SDK version:
v1.2.0
### Steps to reproduce
We set up Identity-Aware Proxy, all the traffic is routed to the istio gateway, and then we're redirected to the Central Dashboard. Once we arrive there, everything works as expected. Whenever we navigate to **Pipelines --> Experiments**, we get the following error message:
```
{"error":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","message":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","code":10,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Request header error: there is no user identity header.","error_details":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header."}]}
```
We see this same error message only under **Pipelines --> Experiments** and under **Pipelines --> Archive**. All the other components of Kubeflow, like Katib and Jupyter, are working just fine. I also double-checked all our header configs. We're currently passing the following under _**Kubeflow-config**_:
```
cluster-name
clusterDomain: cluster.local
istio-namespace: istio-system
userid-header: X-Goog-Authenticated-User-Email
userid-prefix: accounts.google.com:
```
After doing some more poking around, I noticed that an Envoy Filter was created under the _**istio-system**_ namespace that was adding the _**[email protected]**_ user as a header. However, if you look at the configs, it's applying the **_kubeflow-userid_** instead of the _**x-goog-authenticated-user-email**_. I modified it, and it didn't seem to impact anything.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-user-filter
  namespace: istio-system
spec:
  filters:
  - filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          request_handle:headers():add("kubeflow-userid","[email protected]")
        end
    filterName: envoy.lua
    filterType: HTTP
    insertPosition:
      index: FIRST
    listenerMatch:
      listenerType: GATEWAY
  workloadLabels:
    app: istio-ingressgateway
```
Your guidance to resolve the header issue would be much appreciated.
### Expected result
We expect the **Kubeflow Pipeline Experiments** and **Kubeflow Pipeline Archives** to load correctly for each user. Right now, they're not loading for anyone.
Thank you for your time,
Royer R Ruiz
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5556/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5555
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5555/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5555/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5555/events
|
https://github.com/kubeflow/pipelines/issues/5555
| 869,309,587 |
MDU6SXNzdWU4NjkzMDk1ODc=
| 5,555 |
[feature] Allow jobs to be scheduled on AWS Fargate
|
{
"login": "yuhuishi-convect",
"id": 74702693,
"node_id": "MDQ6VXNlcjc0NzAyNjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/74702693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuhuishi-convect",
"html_url": "https://github.com/yuhuishi-convect",
"followers_url": "https://api.github.com/users/yuhuishi-convect/followers",
"following_url": "https://api.github.com/users/yuhuishi-convect/following{/other_user}",
"gists_url": "https://api.github.com/users/yuhuishi-convect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuhuishi-convect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuhuishi-convect/subscriptions",
"organizations_url": "https://api.github.com/users/yuhuishi-convect/orgs",
"repos_url": "https://api.github.com/users/yuhuishi-convect/repos",
"events_url": "https://api.github.com/users/yuhuishi-convect/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuhuishi-convect/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
open
| false | null |
[] | null |
[
"@yuhuishi-convect we might want to switch to argo v3 emissary executor: https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary\r\nwhich doesn't require privileged permission.\r\n",
"This can be resolved by https://github.com/kubeflow/pipelines/issues/1654, when we switch to that executoor.",
"I got a walkaround under version 1.2 to allow scheduling jobs onto Fargate nodes. Here are the things I did:\r\n\r\n1. Switch the Argo workflow executor to `k8sapi`. \r\n```\r\nkubectl edit cm workflow-controller-configmap -n kubeflow\r\n```\r\nand change `containerRuntimeExecutor` from `pns` to `k8sapi`\r\n\r\n\r\n2. Modify the components to use `emptyDir` as the output location. \r\nFor example, I have a following helper function\r\n```\r\ndef mount_empty_dir(task: kfp.dsl.ContainerOp) -> kfp.dsl.ContainerOp:\r\n from kubernetes import client as k8s_client\r\n task = task.add_volume(\r\n k8s_client.V1Volume(\r\n empty_dir={},\r\n name=\"output-empty-dir\"\r\n )\r\n )\r\n\r\n task.container.add_volume_mount(\r\n k8s_client.V1VolumeMount(\r\n mount_path=\"/tmp/outputs\",\r\n name=\"output-empty-dir\"\r\n )\r\n )\r\n\r\n return task\r\n```\r\nThen apply the transformation to every `op` in the pipeline\r\n\r\n```\r\npipeline_conf.add_op_transformer(\r\n mount_empty_dir\r\n)\r\n```\r\n\r\n3. Hint an `op` can be scheduled on Fargate (this is specific to your Fargate settings). For my case, I am using the rule \r\n> Any pod that has a label `fargate-schedulable=true` under `kubeflow` namespace can be put on Fargate. \r\n\r\nSo in the pipeline\r\n\r\n```\r\ntask.add_pod_label(\"fargate-schedulable\", \"true\")\r\n```\r\nwill hint the task can be scheduled on Fargate.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"With KFP 1.7.0-rc.2, we support using argo emissary executor and it should be able to run on fargate.\n\n(I verified it works on GKE autopilot)",
"> With KFP 1.7.0-rc.2, we support using argo emissary executor and it should be able to run on fargate.\r\n> \r\n> (I verified it works on GKE autopilot)\r\n\r\nThanks for the update @Bobgy. Will this executor mode be enabled by default under 1.17 or editing `workflow-controller-configmap` is still needed?\r\n```\r\nkubectl edit cm workflow-controller-configmap -n kubeflow\r\n```",
"1.7.0-rc.2 default to emissary, but I am reverting that.\nI will add a separate manifest env for emissary.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-04-27T22:15:47 | 2022-03-03T00:05:28 | null |
NONE
| null |
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
/area backend
<!-- /area sdk -->
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
I am trying to run a large number of kubeflow pipeline jobs on AWS [Fargate](https://aws.amazon.com/fargate/?nc=sn&loc=0&whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&fargate-blogs.sort-by=item.additionalFields.createdDate&fargate-blogs.sort-order=desc).
The kubeflow pipeline components are deployed on AWS EKS. While the EKS has a Fargate profile that allows scheduling pods onto *virtual* nodes, Kubeflow pipeline jobs contain privileged containers that prevent them from using Fargate machine resources (https://docs.aws.amazon.com/eks/latest/userguide/fargate.html).
### What is the use case or pain point?
This feature enables more cost-efficient job scheduling since many jobs (e.g., hyperparameter tuning, scenario analysis ...) are *ephemeral*, so scheduling them on a serverless machine pool such as the one provided by Fargate makes more sense. This avoids the need to reserve a pool of nodes upfront while still supporting *burst*-type workloads.
However, kubeflow pipeline jobs use privileged containers that are not supported by Fargate. For example, the `wait` container
```
containers:
- name: wait
image: 'gcr.io/ml-pipeline/argoexec:v2.7.5-license-compliance'
command:
- argoexec
- wait
env:
- name: ARGO_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ARGO_CONTAINER_RUNTIME_EXECUTOR
value: pns
resources: {}
volumeMounts:
- name: podmetadata
mountPath: /argo/podmetadata
- name: mlpipeline-minio-artifact
readOnly: true
mountPath: /argo/secret/mlpipeline-minio-artifact
- name: input-artifacts
mountPath: /mainctrfs/tmp/inputs/config/data
subPath: config
- name: input-artifacts
mountPath: /mainctrfs/tmp/inputs/data/data
subPath: convect-prepare-data-out_path
- name: pipeline-runner-token-j2fm7
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- SYS_PTRACE
```
needs further configurations under `securityContext`.
I am wondering if there are any workarounds or better solutions to make the jobs schedulable on serverless resource pools such as Fargate.
### Is there a workaround currently?
I do not see any solutions so far.
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5555/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5555/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5553
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5553/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5553/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5553/events
|
https://github.com/kubeflow/pipelines/issues/5553
| 869,202,635 |
MDU6SXNzdWU4NjkyMDI2MzU=
| 5,553 |
[backend] Local install: proxy-agent error "Could not resolve host: metadata.google.internal"
|
{
"login": "soleares",
"id": 494198,
"node_id": "MDQ6VXNlcjQ5NDE5OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/494198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soleares",
"html_url": "https://github.com/soleares",
"followers_url": "https://api.github.com/users/soleares/followers",
"following_url": "https://api.github.com/users/soleares/following{/other_user}",
"gists_url": "https://api.github.com/users/soleares/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soleares/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soleares/subscriptions",
"organizations_url": "https://api.github.com/users/soleares/orgs",
"repos_url": "https://api.github.com/users/soleares/repos",
"events_url": "https://api.github.com/users/soleares/events{/privacy}",
"received_events_url": "https://api.github.com/users/soleares/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Yes, for local install, you should use\r\n```\r\nkubectl apply -k \"github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION\"\r\n```\r\n\r\nwe should update documentation in KFP standalone: https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/",
"Actually, you should follow https://www.kubeflow.org/docs/components/pipelines/installation/localcluster-deployment/ to deploy locally",
"Thanks @Bobgy, you are correct. I followed the local guide for 1.3 but for some reason with 1.5 I used the incorrect manifest."
] | 2021-04-27T19:42:14 | 2021-04-30T02:23:38 | 2021-04-30T02:23:38 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
```
# Install k3d kubectl
brew install k3d kubectl
k3d cluster create kubeflow
k3d kubeconfig merge ml-kubeflow --kubeconfig-switch-context
# Install Kubeflow Pipelines
export PIPELINE_VERSION=1.5.0
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION"
```
* KFP version:
1.5.0
* KFP SDK version:
N/A
### Steps to reproduce
1. Install using steps above on MacOS w/ latest Docker running.
2. `kubectl get pods -A`
3. `proxy-agent-<suffix>` pod shows Error status
```
kubeflow proxy-agent-<suffix> 0/1 CrashLoopBackOff 13 49m
```
4. `kubectl logs proxy-agent-<suffix> -n kubeflow` shows the following error:
```
++ curl http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: metadata.google.internal
```
### Expected result
`proxy-agent-<suffix>` pod shows Running status
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
I've seen this happen in versions 1.4.1 and 1.5. I would guess there's something GCP specific in the `dev` env manifest but I'm not sure.
```
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION"
```
I don't remember this happening in 1.3 and I think the manifest was different in the 1.3 install instructions and it used the `platform-agnostic-pns` env.
```
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION"
```
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5553/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5553/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5548
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5548/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5548/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5548/events
|
https://github.com/kubeflow/pipelines/issues/5548
| 868,350,145 |
MDU6SXNzdWU4NjgzNTAxNDU=
| 5,548 |
[sdk] mlpipelines-metrics path is wrong when using kfp.components.load_component_from_text
|
{
"login": "santiagoolivar2017",
"id": 22083743,
"node_id": "MDQ6VXNlcjIyMDgzNzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/22083743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santiagoolivar2017",
"html_url": "https://github.com/santiagoolivar2017",
"followers_url": "https://api.github.com/users/santiagoolivar2017/followers",
"following_url": "https://api.github.com/users/santiagoolivar2017/following{/other_user}",
"gists_url": "https://api.github.com/users/santiagoolivar2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santiagoolivar2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santiagoolivar2017/subscriptions",
"organizations_url": "https://api.github.com/users/santiagoolivar2017/orgs",
"repos_url": "https://api.github.com/users/santiagoolivar2017/repos",
"events_url": "https://api.github.com/users/santiagoolivar2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/santiagoolivar2017/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"> /mlpipeline-metrics.json, but instead I am getting /tmp/outputs/MLPipeline_Metrics/data\r\n\r\nThis is by design. Metrics is a normal output and has same style of paths as other outputs. Also, putting files in root is problematic for multiple reasons. It's not possible to mount volumes to the root to grab the files.\r\n\r\nIf the UI works when the file is written to `/mlpipeline-metrics.json`, but not `/tmp/outputs/MLPipeline_Metrics/data` then it's a bug.\r\n\r\nWhat version of the SDK and backend are you using? (AFAIK, this is part of the bug template).\r\n\r\nDo you have a screenshot of Input/Output in the UX?",
"Thanks @Ark-kun.\r\nWhen the output is written to `/mlpipeline-metrics.json` the UI shows the metrics properly, and when it is saved to `/tmp/outputs/MLPipeline_Metrics/data` the UI doesn't recognize it as metrics and doesn't show any.\r\n\r\nI am using kfp version 1.4.0 and Kubeflow Pipelines version 1.4.1\r\n",
"@santiagoolivar2017 can you provide a full sample pipeline to reproduce this issue?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-27T01:28:35 | 2022-03-03T03:05:15 | 2022-03-03T03:05:15 |
NONE
| null |
Let me describe what my problem is:
- I am using load_component_from_text to define my components
- In the pipeline I am generating a mlpipeline-metrics artifact
- After compiling the pipeline I am checking the yaml file that is created
- Within the yaml file, I was expecting the path of this artifact to be **/mlpipeline-metrics.json**, but instead I am getting **/tmp/outputs/MLPipeline_Metrics/data**
- This is causing Kubeflow UI to not show any output metrics.
- I am changing the paths on the yaml file to be **/mlpipeline-metrics.json** (as expected) and everything works as expected
Before, I used to do this with ContainerOp and the paths seemed to be generated properly, which makes me think that there may be something wrong with how the paths are generated when the pipeline is compiled.
### Steps to reproduce
```
load_component_from_text('''
name: name_of_component
outputs:
- {name: MLPipeline Metrics, type: Metrics}
implementation:
container:
image: image_name
command: [python, -u, script_name.py]
args: [
...
--metrics_path, {outputPath: MLPipeline Metrics}]
''')
```
### Expected result
yaml file with that contains the following information
```
name: model-training
container:
args: [ ...
--metrics_path, /mlpipeline-metrics.json]
command: [python, -u, script_name.py]
image: image_name
...
...
outputs:
artifacts:
- {name: mlpipeline-metrics, path: /mlpipeline-metrics.json}
```
### Instead of the expected output I am getting the next output
```
name: model-training
container:
args: [ ...
--metrics_path, /tmp/outputs/MLPipeline_Metrics/data]
command: [python, -u, script_name.py]
image: image_name
...
...
outputs:
artifacts:
- {name: mlpipeline-metrics, path:/tmp/outputs/MLPipeline_Metrics/data}
```
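### Reference: writing metrics to the injected path
For reference, a minimal sketch (assuming KFP v1 metrics conventions) of what `script_name.py` can do so it works regardless of where the compiler places the metrics output — it simply writes the metrics JSON to whatever path is passed via `--metrics_path`; the metric name and value below are made-up placeholders:
```python
# Hypothetical script_name.py used by the component above.
# It writes the KFP v1 metrics JSON to whatever path the compiler injects
# for {outputPath: MLPipeline Metrics}, instead of hard-coding /mlpipeline-metrics.json.
import argparse
import json
import os

parser = argparse.ArgumentParser()
parser.add_argument("--metrics_path", type=str, required=True)
args = parser.parse_args()

metrics = {
    "metrics": [
        {"name": "accuracy-score", "numberValue": 0.95, "format": "PERCENTAGE"},
    ]
}

# The injected path (e.g. /tmp/outputs/MLPipeline_Metrics/data) may not exist yet.
os.makedirs(os.path.dirname(args.metrics_path), exist_ok=True)
with open(args.metrics_path, "w") as f:
    json.dump(metrics, f)
```
Whether the UI then surfaces metrics written to a non-root path depends on the backend version, as discussed in the comments.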
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5548/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5547
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5547/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5547/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5547/events
|
https://github.com/kubeflow/pipelines/issues/5547
| 867,837,577 |
MDU6SXNzdWU4Njc4Mzc1Nzc=
| 5,547 |
[Test Infra] v2 sample test failing with run timeout 4-26
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Strange, the temporary error goes away by itself. It might be some temporary infra issue",
"closing for now, we can reopen if we find similar issues"
] | 2021-04-26T15:36:09 | 2021-04-27T13:56:22 | 2021-04-27T13:56:22 |
CONTRIBUTOR
| null |
Example: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5546/kubeflow-pipelines-samples-v2/1386628232907853824
One test running: https://4e18c21c9d33d20f-dot-datalab-vm-staging.googleusercontent.com/#/runs/details/45981700-5acd-4d4e-b77d-3a1780f9ab68
All PRs are failing on this.
However, even if we wait for a while and check the run status, the run still fails in the end.
This needs more investigation than just increasing the timeout.
## Details
some symptoms I noticed:
1. kaniko container build took a really long time at the "INFO[0347] Taking snapshot of full filesystem..." step
2. many pods fail with `CreateContainerError`, but at the time I checked, the pod events had been GCed. I need to verify on a container while it is running to understand why.
## Guesses
Because of 2, it might be worth trying to pin Kaniko to an older version; maybe our latest one has some problems.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5547/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5545
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5545/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5545/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5545/events
|
https://github.com/kubeflow/pipelines/issues/5545
| 867,225,810 |
MDU6SXNzdWU4NjcyMjU4MTA=
| 5,545 |
pipeline_conf.set_ttl_seconds_after_finished(seconds=10) NOT WORKING
|
{
"login": "prasadkyp7",
"id": 80782583,
"node_id": "MDQ6VXNlcjgwNzgyNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/80782583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prasadkyp7",
"html_url": "https://github.com/prasadkyp7",
"followers_url": "https://api.github.com/users/prasadkyp7/followers",
"following_url": "https://api.github.com/users/prasadkyp7/following{/other_user}",
"gists_url": "https://api.github.com/users/prasadkyp7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prasadkyp7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prasadkyp7/subscriptions",
"organizations_url": "https://api.github.com/users/prasadkyp7/orgs",
"repos_url": "https://api.github.com/users/prasadkyp7/repos",
"events_url": "https://api.github.com/users/prasadkyp7/events{/privacy}",
"received_events_url": "https://api.github.com/users/prasadkyp7/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"I think [this issue](https://github.com/kubeflow/pipelines/issues/5234) could guide you to a solution @prasadkyp7 ",
"+1 to NikeNano's answer",
"we upgraded kubeflow to latest version but still TTL is not working. \r\nNeed an urgent help on this\r\n",
"@prasadkyp7 can you add more details?\r\nWhat config did you set? What version are you using etc?\r\n\r\nRef: https://github.com/kubeflow/pipelines/issues/5234#issuecomment-805599988 this is my recommendation",
"@Bobgy i tried as you suggested , it worked.\r\nBut the problem here is . TTL is deleting the pods but not the run details. Because of this , run details SQL Table is becoming full.\r\nwhen i tried to delete the pipelines using API it deletes both pods and run details. we want TTL to delete both pod and run details. can you suggest a way to do this ?",
"I see, not deleting runs in the DB is the expected behavior.\nWe don't have TTL feature for the DB. So far, we recommend building a Cron pipeline/job that calls KFP API to list, filter old runs and GC them. Do you see any blockers in that direction?"
] | 2021-04-26T03:43:39 | 2021-08-12T14:27:07 | 2021-06-11T00:54:14 |
NONE
| null |
* How did you deploy Kubeflow Pipelines (KFP)?
**_We are using a full Kubeflow deployment on AWS Cloud_**
* KFP version:
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
Kubeflow UI is not showing us any version
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
kfp 1.4.0
kfp-pipeline-spec 0.1.6
kfp-server-api 1.4.1
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
1. Create the pipeline code.
2. Set the TTL using the pipeline config:
pipeline_conf = kfp.dsl.PipelineConf()
x = pipeline_conf.set_ttl_seconds_after_finished(seconds=10)
3. Compile the pipeline using
kfp.compiler.Compiler().compile(loadTrain_pipeline, 'loadTrain_pipeline_2.zip', pipeline_conf=x) and upload it in the UI.
4. Test this pipeline by creating a run.
EXPECTED RESULT:
Having set the TTL to 10 seconds, the pods created must be deleted after 10 seconds.
But it is not deleting the pods; in my case I am creating thousands of pods, and since the successful pods are not deleted the Kubeflow UI is crashing.
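For the run records that pile up in the DB (which the TTL does not remove, per the comments above), a periodic cleanup job that calls the KFP API is one option. A rough sketch, assuming the kfp 1.x SDK client — the 30-day cutoff, page size, and status filter are arbitrary choices, and the method names should be double-checked against the installed SDK version:
```python
# Rough sketch of a periodic cleanup job that deletes old, finished runs via the KFP API.
# Method names/fields are per kfp SDK 1.x; verify against your installed version.
from datetime import datetime, timedelta, timezone

import kfp

client = kfp.Client()  # assumes in-cluster access or a configured host/credentials
cutoff = datetime.now(timezone.utc) - timedelta(days=30)  # arbitrary retention window

page_token = ""
while True:
    resp = client.list_runs(page_token=page_token, page_size=100, sort_by="created_at asc")
    for run in resp.runs or []:
        # created_at is assumed to be a timezone-aware datetime from kfp-server-api.
        if run.created_at < cutoff and run.status in ("Succeeded", "Failed", "Error", "Skipped"):
            client.runs.delete_run(id=run.id)  # removes the run record from the DB
    page_token = resp.next_page_token
    if not page_token:
        break
```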
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5545/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5544
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5544/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5544/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5544/events
|
https://github.com/kubeflow/pipelines/issues/5544
| 867,060,318 |
MDU6SXNzdWU4NjcwNjAzMTg=
| 5,544 |
[sdk] input parameter created without default value
|
{
"login": "amitza",
"id": 9674751,
"node_id": "MDQ6VXNlcjk2NzQ3NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9674751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amitza",
"html_url": "https://github.com/amitza",
"followers_url": "https://api.github.com/users/amitza/followers",
"following_url": "https://api.github.com/users/amitza/following{/other_user}",
"gists_url": "https://api.github.com/users/amitza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amitza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amitza/subscriptions",
"organizations_url": "https://api.github.com/users/amitza/orgs",
"repos_url": "https://api.github.com/users/amitza/repos",
"events_url": "https://api.github.com/users/amitza/events{/privacy}",
"received_events_url": "https://api.github.com/users/amitza/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"This is expected. Your pipeline doesn't specify any default value for param.\r\nOne option is to provide a default value when you define the pipeline function, for instance:\r\n```\r\n @kfp.dsl.pipeline(name='pipeline')\r\n def pipeline(param: int = 10):\r\n ...\r\n```\r\nAnother option, probably more common, is to supply the value at pipeline submission time. through `arguments`:\r\nhttps://github.com/kubeflow/pipelines/blob/289d0f57be5533a15b93c520049e7b35578ab8e0/sdk/python/kfp/_client.py#L705-L721",
"+1 to what Chen said.\r\nThere is no default value. Specify it and the pipeline will work.\r\n\r\nP.S. We do not pledge to support submitting the compiled pipelines to Argo. This is implementation detail and there are some backend-side workflow changed."
] | 2021-04-25T17:01:55 | 2021-05-18T02:54:24 | 2021-05-18T02:54:24 |
NONE
| null |
Trying to work with the KFP SDK directly against Argo Workflows: when compiling a pipeline with an input parameter, no default value is created, so the workflow is invalid.
### Environment
* KFP version: not relevant, submitting directly to argocd
* KFP SDK version: 1.5.0
### Steps to reproduce
Run the following python code and open the created file (pipe.yaml):
```python
import kfp as kfp
@kfp.components.func_to_container_op
def print_func(param: dict):
print(str(param))
@kfp.components.func_to_container_op
def list_func(param: int) -> list:
return list(range(param))
@kfp.dsl.pipeline(name='pipeline')
def pipeline(param: int):
list_func_op = list_func(param)
with kfp.dsl.ParallelFor([{'a': 1, 'b': 10}, {'a': 2, 'b': 20}]) as param:
print_func(param)
if __name__ == '__main__':
workflow_dict = kfp.compiler.Compiler()._create_workflow(pipeline)
del workflow_dict['spec']['serviceAccountName']
kfp.compiler.Compiler._write_workflow(workflow_dict, 'pipe.yaml')
```
When submitting to Argo Workflows, the following error occurs:
```
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Trailer': 'Grpc-Trailer-Content-Type', 'Date': 'Sun, 25 Apr 2021 16:37:21 GMT', 'Transfer-Encoding': 'chunked'})
HTTP response body: {"code":2,"message":"spec.arguments.param.value is required"}
```
### Expected result
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: pipeline-
annotations: {pipelines.kubeflow.org/kfp_sdk_version: 1.4.0, pipelines.kubeflow.org/pipeline_compilation_time: '2021-04-25T19:37:22.235892',
pipelines.kubeflow.org/pipeline_spec: '{"inputs": [{"name": "param", "type": "Integer"}],
"name": "pipeline"}'}
labels: {pipelines.kubeflow.org/kfp_sdk_version: 1.4.0}
spec:
entrypoint: pipeline
templates:
- name: for-loop-2
inputs:
parameters:
- {name: loop-item-param-1}
dag:
tasks:
- name: print-func
template: print-func
arguments:
parameters:
- {name: loop-item-param-1, value: '{{inputs.parameters.loop-item-param-1}}'}
- name: list-func
container:
args: [--param, '{{inputs.parameters.param}}', '----output-paths', /tmp/outputs/Output/data]
command:
- sh
- -ec
- |
program_path=$(mktemp)
printf "%s" "$0" > "$program_path"
python3 -u "$program_path" "$@"
- |
def list_func(param):
return list(range(param))
def _serialize_json(obj) -> str:
if isinstance(obj, str):
return obj
import json
def default_serializer(obj):
if hasattr(obj, 'to_struct'):
return obj.to_struct()
else:
raise TypeError("Object of type '%s' is not JSON serializable and does not have .to_struct() method." % obj.__class__.__name__)
return json.dumps(obj, default=default_serializer, sort_keys=True)
import argparse
_parser = argparse.ArgumentParser(prog='List func', description='')
_parser.add_argument("--param", dest="param", type=int, required=True, default=argparse.SUPPRESS)
_parser.add_argument("----output-paths", dest="_output_paths", type=str, nargs=1)
_parsed_args = vars(_parser.parse_args())
_output_files = _parsed_args.pop("_output_paths", [])
_outputs = list_func(**_parsed_args)
_outputs = [_outputs]
_output_serializers = [
_serialize_json,
]
import os
for idx, output_file in enumerate(_output_files):
try:
os.makedirs(os.path.dirname(output_file))
except OSError:
pass
with open(output_file, 'w') as f:
f.write(_output_serializers[idx](_outputs[idx]))
image: python:3.7
inputs:
parameters:
- {name: param}
outputs:
artifacts:
- {name: list-func-Output, path: /tmp/outputs/Output/data}
metadata:
annotations: {pipelines.kubeflow.org/component_spec: '{"implementation": {"container":
{"args": ["--param", {"inputValue": "param"}, "----output-paths", {"outputPath":
"Output"}], "command": ["sh", "-ec", "program_path=$(mktemp)\nprintf \"%s\"
\"$0\" > \"$program_path\"\npython3 -u \"$program_path\" \"$@\"\n", "def
list_func(param):\n return list(range(param))\n\ndef _serialize_json(obj)
-> str:\n if isinstance(obj, str):\n return obj\n import json\n def
default_serializer(obj):\n if hasattr(obj, ''to_struct''):\n return
obj.to_struct()\n else:\n raise TypeError(\"Object of
type ''%s'' is not JSON serializable and does not have .to_struct() method.\"
% obj.__class__.__name__)\n return json.dumps(obj, default=default_serializer,
sort_keys=True)\n\nimport argparse\n_parser = argparse.ArgumentParser(prog=''List
func'', description='''')\n_parser.add_argument(\"--param\", dest=\"param\",
type=int, required=True, default=argparse.SUPPRESS)\n_parser.add_argument(\"----output-paths\",
dest=\"_output_paths\", type=str, nargs=1)\n_parsed_args = vars(_parser.parse_args())\n_output_files
= _parsed_args.pop(\"_output_paths\", [])\n\n_outputs = list_func(**_parsed_args)\n\n_outputs
= [_outputs]\n\n_output_serializers = [\n _serialize_json,\n\n]\n\nimport
os\nfor idx, output_file in enumerate(_output_files):\n try:\n os.makedirs(os.path.dirname(output_file))\n except
OSError:\n pass\n with open(output_file, ''w'') as f:\n f.write(_output_serializers[idx](_outputs[idx]))\n"],
"image": "python:3.7"}}, "inputs": [{"name": "param", "type": "Integer"}],
"name": "List func", "outputs": [{"name": "Output", "type": "JsonArray"}]}',
pipelines.kubeflow.org/component_ref: '{}', pipelines.kubeflow.org/arguments.parameters: '{"param":
"{{inputs.parameters.param}}"}'}
- name: pipeline
inputs:
parameters:
- {name: param}
dag:
tasks:
- name: for-loop-2
template: for-loop-2
arguments:
parameters:
- {name: loop-item-param-1, value: '{{item}}'}
withItems:
- {a: 1, b: 10}
- {a: 2, b: 20}
- name: list-func
template: list-func
arguments:
parameters:
- {name: param, value: '{{inputs.parameters.param}}'}
- name: print-func
container:
args: [--param, '{{inputs.parameters.loop-item-param-1}}']
command:
- sh
- -ec
- |
program_path=$(mktemp)
printf "%s" "$0" > "$program_path"
python3 -u "$program_path" "$@"
- |
def print_func(param):
print(str(param))
import json
import argparse
_parser = argparse.ArgumentParser(prog='Print func', description='')
_parser.add_argument("--param", dest="param", type=json.loads, required=True, default=argparse.SUPPRESS)
_parsed_args = vars(_parser.parse_args())
_outputs = print_func(**_parsed_args)
image: python:3.7
inputs:
parameters:
- {name: loop-item-param-1}
metadata:
annotations: {pipelines.kubeflow.org/component_spec: '{"implementation": {"container":
{"args": ["--param", {"inputValue": "param"}], "command": ["sh", "-ec",
"program_path=$(mktemp)\nprintf \"%s\" \"$0\" > \"$program_path\"\npython3
-u \"$program_path\" \"$@\"\n", "def print_func(param):\n print(str(param))\n\nimport
json\nimport argparse\n_parser = argparse.ArgumentParser(prog=''Print func'',
description='''')\n_parser.add_argument(\"--param\", dest=\"param\", type=json.loads,
required=True, default=argparse.SUPPRESS)\n_parsed_args = vars(_parser.parse_args())\n\n_outputs
= print_func(**_parsed_args)\n"], "image": "python:3.7"}}, "inputs": [{"name":
"param", "type": "JsonObject"}], "name": "Print func"}', pipelines.kubeflow.org/component_ref: '{}',
pipelines.kubeflow.org/arguments.parameters: '{"param": "{{inputs.parameters.loop-item-param-1}}"}'}
arguments:
parameters:
- {name: param, value: <some default value>}
```
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
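As a workaround (per the replies above), the parameter either needs a default in the pipeline signature or a value supplied at submission time. A minimal sketch building on the repro snippet above; the value `10` is an arbitrary example:
```python
import kfp

# Option 1: give the pipeline parameter a default value (10 is arbitrary).
@kfp.dsl.pipeline(name='pipeline')
def pipeline_with_default(param: int = 10):
    list_func(param)  # list_func as defined in the repro above

# Option 2: keep the parameter required and pass its value at submission time.
if __name__ == '__main__':
    client = kfp.Client()
    # `pipeline` is the function from the repro above.
    client.create_run_from_pipeline_func(pipeline, arguments={'param': 10})
```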
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5544/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5537
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5537/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5537/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5537/events
|
https://github.com/kubeflow/pipelines/issues/5537
| 866,589,985 |
MDU6SXNzdWU4NjY1ODk5ODU=
| 5,537 |
[bug] Failed to retry run (Can not trigger retry from UI when the failed step has dependent steps)
|
{
"login": "yanp",
"id": 5286182,
"node_id": "MDQ6VXNlcjUyODYxODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5286182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanp",
"html_url": "https://github.com/yanp",
"followers_url": "https://api.github.com/users/yanp/followers",
"following_url": "https://api.github.com/users/yanp/following{/other_user}",
"gists_url": "https://api.github.com/users/yanp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanp/subscriptions",
"organizations_url": "https://api.github.com/users/yanp/orgs",
"repos_url": "https://api.github.com/users/yanp/repos",
"events_url": "https://api.github.com/users/yanp/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanp/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @capri-xiyue "
] | 2021-04-24T04:17:49 | 2021-05-26T16:01:42 | 2021-05-26T16:01:42 |
NONE
| null |
### What steps did you take
This issue happened after we upgraded to Kubeflow Pipelines `1.5.0`
Minimum example to reproduce:
```python
import kfp
@kfp.components.create_component_from_func
def fail_func():
raise ValueError("failure")
@kfp.components.create_component_from_func
def dummy_func():
print("hello")
@kfp.dsl.pipeline(name="my-pipeline")
def pipeline():
task_1 = fail_func()
task_2 = dummy_func().after(task_1)
if __name__ == "__main__":
client = kfp.Client()
client.create_run_from_pipeline_func(
pipeline,
arguments={},
experiment_name="a-kfp-test",
run_name="Test manual retry",
)
```
### What happened:
After the `Fail func` fails, if click `Retry` on UI the following error message would show up:
>"error":"Retry run failed.: InternalServerError: Workflow cannot be retried with node xxx in Omitted phase: workflow cannot be retried","code":13,...
### Environment:
* How do you deploy Kubeflow Pipelines (KFP)?
Kubeflow Pipelines Standalone on EKS, also reproduced on Minikube.
* KFP version:
`1.5.0`
* KFP SDK version:
kfp 1.4.0
kfp-pipeline-spec 0.1.7
kfp-server-api 1.5.0
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5537/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5537/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5534
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5534/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5534/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5534/events
|
https://github.com/kubeflow/pipelines/issues/5534
| 865,983,030 |
MDU6SXNzdWU4NjU5ODMwMzA=
| 5,534 |
load_pipeline_from_file
|
{
"login": "iuiu34",
"id": 30587996,
"node_id": "MDQ6VXNlcjMwNTg3OTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/30587996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iuiu34",
"html_url": "https://github.com/iuiu34",
"followers_url": "https://api.github.com/users/iuiu34/followers",
"following_url": "https://api.github.com/users/iuiu34/following{/other_user}",
"gists_url": "https://api.github.com/users/iuiu34/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iuiu34/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iuiu34/subscriptions",
"organizations_url": "https://api.github.com/users/iuiu34/orgs",
"repos_url": "https://api.github.com/users/iuiu34/repos",
"events_url": "https://api.github.com/users/iuiu34/events{/privacy}",
"received_events_url": "https://api.github.com/users/iuiu34/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"@luiu1234 What is the reason for preferring yaml over python for loading pipeline from file? We are currently heading towards the direction of supporting pipeline construction using python only. \r\n\r\n/cc @chensun ",
"Hi @zijianjoy ,\r\nreason is consistency.\r\nFor me is (a little bit) inconsistent to use use yaml in one side (features) & py in the other (pipelines).\r\n\r\nBut if you're moving to py in both sides then that's fine by me. ",
"Sorry, I missed this. Thank you @zijianjoy for bringing this to my attention.\r\n\r\n> We are currently heading towards the direction of supporting pipeline construction using python only.\r\n\r\nJust to clarify, Python has always been the only option to write pipelines. Component yaml is designed for authoring components only but never the pipelines. \r\n\r\n> Note: this is not to be confused with the static yaml that is upload to gcp kubeflow. This pipeline.yaml will contain same info as pipeline.py and then will generate the static.yaml\r\n\r\n@Iuiu1234, let's think about what actions you may take between loading this pipeline.yaml and generating the static.yaml (which is an Argo workflow template yaml). There's none. \r\nAs of today, you cannot (re)use a pipeline in another pipeline definition (this is actually something we're going to explore soon). So for now the only available use case after loading a pipeline is to submit it for run (or upload it for later run). In that sense, we are doing exactly what we should do -- you can load the compilation result but not some intermediate representation of a pipeline. Does that make sense?\r\n\r\n",
"> For me is (a little bit) inconsistent to use use yaml in one side (features) & py in the other (pipelines).\r\n\r\nAgree that inconsistence exists today. Also keep in mind that there're difference between component and pipeline as of today. When you write a component using yaml, you are only defining a template, like you're defining a function. But the pipeline contains how do you use such templates -- which arguments you passed to call a function. The current component yaml is not capable of expressing the later. In addition, it cannot express `dsl.ParallelFor`, `dsl.Condition`, for instance.\r\n\r\n> But if you're moving to py in both sides then that's fine by me.\r\n\r\nYes, we're evaluating making Python the only choice for both sides.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-23T10:34:14 | 2022-03-03T03:05:21 | 2022-03-03T03:05:21 |
NONE
| null |
Hi,
you can create components as YAML and load them with `load_component_from_file`.
Why isn't there the same feature for pipelines?
Instead of using `@dsl.pipeline`, define the pipeline also as YAML, and then just have a .py script with a `load_pipeline_from_file` function. Or, even better, pass `--pipeline-file` as a parameter when uploading the pipeline to Kubeflow?
Note: this is not to be confused with the static YAML that is uploaded to GCP Kubeflow. This pipeline.yaml would contain the same info as pipeline.py and would then generate the static.yaml.
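For contrast, this is roughly what the component-side loading that already exists looks like today — the ask is an equivalent `load_pipeline_from_file` for the pipeline itself. The file name and the `epochs` input below are placeholders:
```python
import kfp
from kfp import components

# Components can already be authored in YAML and loaded from a file:
train_op = components.load_component_from_file("train_component.yaml")  # placeholder path

# ...but the pipeline wiring itself still has to be written in Python:
@kfp.dsl.pipeline(name="example-pipeline")
def pipeline(epochs: int = 5):
    train_op(epochs=epochs)  # assumes the YAML component declares an `epochs` input

# There is no analogous kfp.load_pipeline_from_file("pipeline.yaml") today.
```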
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5534/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5533
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5533/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5533/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5533/events
|
https://github.com/kubeflow/pipelines/issues/5533
| 865,891,790 |
MDU6SXNzdWU4NjU4OTE3OTA=
| 5,533 |
[backend] minio/mysql pod pending because PV provisioned to wrong zone in regional cluster
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"In GCP, it's recommended to use GCS & Cloud SQL, so we can avoid this problem.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-23T08:42:53 | 2022-03-03T03:05:16 | 2022-03-03T03:05:16 |
CONTRIBUTOR
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
kfp standalone
* KFP version: 1.5.0
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
### Steps to reproduce
1. deploy a regional cluster
2. deploy KFP standalone following https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/
3. observe whether PVs are provisioned in zones that have nodes
4. Sometimes it isn't, e.g. I saw that minio-pvc was bound to a PV in us-central1-b, while my regional cluster only has nodes in us-central1-{a,c,f}.
5. so my minio pod was stuck pending
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
PVs should be provisioned in zones that have nodes
<!-- What should the correct behavior be? -->
### Materials and Reference
https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
In GCP, the default storage class has `volumeBindingMode: immediate`, so volume is provisioned immediately without taking into consideration which nodes we have.
We should instead use a storage class that supports `WaitForFirstConsumer`; e.g., in GCP we can use [standard-rwo](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver), which is configured with `WaitForFirstConsumer`.
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5533/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5529
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5529/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5529/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5529/events
|
https://github.com/kubeflow/pipelines/issues/5529
| 865,256,745 |
MDU6SXNzdWU4NjUyNTY3NDU=
| 5,529 |
Kubeflow crashing
|
{
"login": "prasadkyp7",
"id": 80782583,
"node_id": "MDQ6VXNlcjgwNzgyNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/80782583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prasadkyp7",
"html_url": "https://github.com/prasadkyp7",
"followers_url": "https://api.github.com/users/prasadkyp7/followers",
"following_url": "https://api.github.com/users/prasadkyp7/following{/other_user}",
"gists_url": "https://api.github.com/users/prasadkyp7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prasadkyp7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prasadkyp7/subscriptions",
"organizations_url": "https://api.github.com/users/prasadkyp7/orgs",
"repos_url": "https://api.github.com/users/prasadkyp7/repos",
"events_url": "https://api.github.com/users/prasadkyp7/events{/privacy}",
"received_events_url": "https://api.github.com/users/prasadkyp7/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Hello @prasadkyp7 ,\r\n\r\nWould you like to provide more detail on the deployment environment?\r\n\r\nAre.you using Full Kubeflow deployment, or KFP standalone? What kind of cloud provider you are using?\r\n\r\nVersioning questions:\r\n1. How do you deploy Kubeflow Pipelines (KFP)?\r\n2. KFP version:\r\n3. KFP SDK version:\r\n\r\nWhat are the detailed steps to reproduce this issue?",
"We are using Full kubeflow deployment running on AWS cloud.\r\nBuild version is not showing for us in Kubeflow UI (Ui shows as build version:dev_local)\r\n\r\nSteps to reproduce:\r\n\r\n1.create a pipeline using kfp.compiler.Compiler().compile(pipeline,'pipeline.zip').\r\n2. Upload the pipeline zip file using kubeflow UI.\r\n3. Create a run by using kfp api \r\n4. payload for api-\r\npayload={\r\n \"name\":\"kfp_api_run\",\r\n \"pipeline_spec\":{\r\n \"pipeline_id\":\"pipeline id of this pipeline\",\r\n \"parameters\":[{\r\n \"name\":\"p\",\r\n \"value\":\"json value input\"\r\n }\r\n ]\r\n }\r\n}",
"Please provide the solution asap",
"I am suspecting this issue is related to resource limitation (or open file cap), because you mentioned that this error happens when there are many pipelines. But I don't have access to AWS, so cannot reproduce this issue. \r\n\r\nCan you check your max file descriptors on AWS Kubernetes?",
"The crash is not because of resource limitation . we have large nodes in AWS cluster. \r\n\r\nThe crash is because KUBEFLOW Is not deleting the pods in EKS even after the Pipelines are success. we tried deleting the pods using pipeline_conf.set_ttl_seconds_after_finished(seconds=10) . But this is not working.\r\n\r\nCan you please tell us how to delete the pods of pipeline on its own after they get succeed .\r\n\r\nThis is big problem with kubeflow. Need an urgent guidance on this",
"You can delete the pipeline runs from KFP UI: First `Archive` the run, then `Delete` the run: \r\n\r\n\r\n\r\n\r\nAlternatively, you can also use the Python SDK to delete the run. API documentation is in https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.server_api.html#kfp_server_api.api.run_service_api.RunServiceApi.delete_run\r\n\r\n\r\nAfter you delete the run, the pods associated with this run will be deleted.\r\n",
"i have already tried the second one .\r\n\r\nBut the problem is, in real time we are running thousands of pods and to delete using kfp.client.runs.delete_run(run_id) we need run id which is unique and is determined by kubeflow but not us which makes very difficult to delete the thousands of runs manually.\r\n\r\nIs there a way to delete the runs/pods on its own after the completion of run?",
"There is a difference between Pods and runs. Runs are not Kubernetes objects and are just stored in the DB. I'm not sure your problem is caused by runs. Are you sure the runs are the problem, not Pods and Workflows?\r\n\r\nAFAIK, there is some TTL on Pods as I see them removed after pretty short time.",
"FYI You can set a different TTL to garbage collect the pod. By default it's 1 day\r\nhttps://github.com/kubeflow/pipelines/blob/5d0f3a3d32f35d3dbca9cd27f4ee476e87f6d41e/manifests/gcp_marketplace/chart/kubeflow-pipelines/templates/pipeline.yaml#L517",
"Each Run from Kubeflow is creating a pod in a kubernetes. When a Run completes, the pod in kubernetes is not deleted and is staying there in successful state. So, if we delete the Run from kubeflow it will delete both the run details and pod as well.\r\nMy problem here is as 1000's number of success/ failed pods/ run are staying and not deleted-kubeflow is crashing.\r\n\r\nSo is there a way in kubeflow to delete 1000's of runs on its own after completion.\r\n\r\ni tried to set TTL using pipeline_conf.set_ttl_seconds_after_finished(seconds=10). But its not working , it is not deleting the pods.\r\n\r\nPlease let me know a way to delete the runs after completion. This is big blocker for us in using kubeflow. \r\n",
"any update on how to delete the runs on its own after completion?",
"when we run thousands of jobs daily without deleting the runs, the details of run which is stored in the table is over loaded and we are not able to create the runs with the error.Failed to create a new run.: InternalServerError: Failed to store run x9hqb to table: Error 1114: The table 'run_details' is full\"}]}\r\n\r\n\r\nso we have to delete the runs on its own after completion, i tried to set TTL using pipeline_conf.set_ttl_seconds_after_finished(seconds=10). it didnot work.\r\n\r\nThere is no documentation or resource available to delete the runs automatically, this is a big problem and big blocker. \r\n\r\nReally Appreciate if you help us in providing a solution to this probelm asap\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-22T17:50:23 | 2022-03-03T04:05:33 | 2022-03-03T04:05:33 |
NONE
| null |
When we run many pipelines (more than 500) using the KFP API, the entire Kubeflow deployment and UI crash. When I look in Kubernetes, it shows this error: `failed to create a watcher for certificate files: inotify_init: too many open files`. We are not able to track down or understand this error, but the crash only happens when running many pipelines. Any help would be greatly appreciated.
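For reference, a small sketch for inspecting the kernel inotify limits that this error points at (standard Linux procfs paths; run it on the affected node):
```python
# Sketch: print the inotify limits relevant to
# "inotify_init: too many open files" on the current node.
from pathlib import Path

for name in ("max_user_instances", "max_user_watches", "max_queued_events"):
    value = Path(f"/proc/sys/fs/inotify/{name}").read_text().strip()
    print(f"fs.inotify.{name} = {value}")
```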
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5529/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5524
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5524/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5524/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5524/events
|
https://github.com/kubeflow/pipelines/issues/5524
| 864,591,295 |
MDU6SXNzdWU4NjQ1OTEyOTU=
| 5,524 |
[Test infra] 2021.4.22 -- v2 sample test failing -- API Exception 502
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Deleted the unhealthy node",
"I'm not seeing google cloud automatically bringing up a new node, so I edited in UI directly. I'm not sure what the canonical process should be.",
"after deleting the unhealthy node, it recovered",
"Closing the issue, but maybe we need to figure out next time how unhealthy nodes can be removed automatically. (to clarify, the node was said to be healthy by kubernetes, but all workload on it was stuck)"
] | 2021-04-22T06:31:17 | 2021-04-22T07:54:21 | 2021-04-22T07:54:21 |
CONTRIBUTOR
| null |
I found that all the pods on node `gke-kfp-standalone-1-kfp-standalone-1-60baa0a0-07jc` are stuck in either Terminating or ContainerCreating, so I'm going to delete this node.
Test link: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5515/kubeflow-pipelines-samples-v2/1385113976681009152
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5524/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5521
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5521/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5521/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5521/events
|
https://github.com/kubeflow/pipelines/issues/5521
| 864,492,268 |
MDU6SXNzdWU4NjQ0OTIyNjg=
| 5,521 |
[frontend] tensorboard 2.3+ cannot load in KFP visualization
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1499519734,
"node_id": "MDU6TGFiZWwxNDk5NTE5NzM0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/upstream_issue",
"name": "upstream_issue",
"color": "006b75",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-22T03:01:37 | 2022-03-03T06:05:25 | 2022-03-03T06:05:25 |
CONTRIBUTOR
| null |
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
1. customize the tensorboard image with https://github.com/kubeflow/pipelines/pull/5515, using image tensorflow/tensorflow:2.3.2
2. start tensorboard in the KFP UI.
3. click the tensorboard link.
4. The TB UI loads, but fails with the error message `Failed to fetch runs` and redirects to the root path `/`.
5. Subsequent network requests to TB fail, because the root path does not proxy to tensorboard.
### Expected result
<!-- What should the correct behavior be? -->
The TB UI should work properly.
The TB URL should be something like `https://71a74112589c16e8-dot-asia-east1.pipelines.googleusercontent.com/apis/v1beta1/_proxy/viewer-0004e5ec1428299e43070b9c17d48e201c8a6f7c-service.kubeflow.svc.cluster.local:80/tensorboard/viewer-0004e5ec1428299e43070b9c17d48e201c8a6f7c/`,
but instead it is `https://71a74112589c16e8-dot-asia-east1.pipelines.googleusercontent.com/#scalars`.
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
This is clearly an upstream issue: I verified the same steps with `tensorflow:2.2.2` and `gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest`, and everything worked as expected.
I also verified that tensorflow/tensorflow:2.4.0 has the same problem.
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5521/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5521/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5519
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5519/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5519/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5519/events
|
https://github.com/kubeflow/pipelines/issues/5519
| 863,892,346 |
MDU6SXNzdWU4NjM4OTIzNDY=
| 5,519 |
[feature] GCP Cloud ML Engine Component Support Local Training
|
{
"login": "gevmin94",
"id": 19888899,
"node_id": "MDQ6VXNlcjE5ODg4ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/19888899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gevmin94",
"html_url": "https://github.com/gevmin94",
"followers_url": "https://api.github.com/users/gevmin94/followers",
"following_url": "https://api.github.com/users/gevmin94/following{/other_user}",
"gists_url": "https://api.github.com/users/gevmin94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gevmin94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gevmin94/subscriptions",
"organizations_url": "https://api.github.com/users/gevmin94/orgs",
"repos_url": "https://api.github.com/users/gevmin94/repos",
"events_url": "https://api.github.com/users/gevmin94/events{/privacy}",
"received_events_url": "https://api.github.com/users/gevmin94/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @Bobgy \r\nsounds like an useful idea",
"I think that the existing component cannot really do that. It uses REST client to call the GCP API and cannot do what gcloud local is doing. So the easiest solution is to create a new component for this purpose.\r\n\r\nBut\r\n\r\nWhat do you think about using custom containers for training?\r\nIt's easy to run them locally and there is beta support for running them on GCP: https://cloud.google.com/ai-platform-unified/docs/training/create-custom-job#create_custom_job-gcloud",
"I'm not sure if I clearly understand custom containers for training usage in KFP. If we are intended to create a custom training job independent from the kubeflow pipeline, then it would be a good choice, however, the local e2e KFP pipeline development and before GCP upload testing gap still will be there. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-21T13:27:00 | 2022-03-03T04:05:19 | 2022-03-03T04:05:19 |
NONE
| null |
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area samples -->
[GCP Components](https://github.com/kubeflow/pipelines/tree/master/components/gcp/ml_engine)
### What feature would you like to see?
Support for running an [AI Platform training job locally](https://cloud.google.com/sdk/gcloud/reference/ml-engine/local/train).
### What is the use case or pain point?
The pain point is pipeline testing: testing changes by running the pipeline in GCP takes a long time and adds extra cost, because a new machine is spun up every time during the training-component development phase. With local training support, developers could iterate fully locally and upload the pipeline to GCP once it is ready.
### Is there a workaround currently?
Writing the training component from scratch.
<!-- Without this feature, how do you accomplish your task today? -->
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5519/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5518
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5518/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5518/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5518/events
|
https://github.com/kubeflow/pipelines/issues/5518
| 863,883,639 |
MDU6SXNzdWU4NjM4ODM2Mzk=
| 5,518 |
[bug] Lexical error in pipeline samples
|
{
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @difince ",
"Updated the text with https://github.com/kubeflow/pipelines/commit/acc2c2aaafa6c3a6b0cc90eef5d61cfd43440aef"
] | 2021-04-21T13:17:53 | 2021-04-22T21:31:15 | 2021-04-22T21:31:15 |
MEMBER
| null |
### What steps did you take
It is just a lexical error (a typo) in a pipeline sample file - [Data passing in python components - Files.py](https://github.com/kubeflow/pipelines/blob/d79071c0bef19442483abc101769a0d893e72f42/samples/tutorials/Data%20passing%20in%20python%20components/Data%20passing%20in%20python%20components%20-%20Files.py)
> Small lists, dictionaries and JSON structures are fine, but keep an eye on the size and consider switching to file-based data passing methods **taht** are more suitable for bigger data (more than several kilobytes) or binary data.
<!-- A clear and concise description of what the bug is.-->
### What did you expect to happen:
**taht** should be spelled correctly as **that**.
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
/area samples
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5518/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5513
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5513/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5513/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5513/events
|
https://github.com/kubeflow/pipelines/issues/5513
| 863,200,683 |
MDU6SXNzdWU4NjMyMDA2ODM=
| 5,513 |
Large inline Markdown visualization causes OOM
|
{
"login": "jackwhelpton",
"id": 6883147,
"node_id": "MDQ6VXNlcjY4ODMxNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackwhelpton",
"html_url": "https://github.com/jackwhelpton",
"followers_url": "https://api.github.com/users/jackwhelpton/followers",
"following_url": "https://api.github.com/users/jackwhelpton/following{/other_user}",
"gists_url": "https://api.github.com/users/jackwhelpton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackwhelpton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackwhelpton/subscriptions",
"organizations_url": "https://api.github.com/users/jackwhelpton/orgs",
"repos_url": "https://api.github.com/users/jackwhelpton/repos",
"events_url": "https://api.github.com/users/jackwhelpton/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackwhelpton/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"I'm going to experiment with uploading this Markdown content to GCS, and see if that is handled better.\r\n\r\nUPDATE: the same happens with .md files hosted in GCS. They're large but not staggeringly so (11.3MB), but still cause out-of-memory errors in the browser tab.",
"Hi @jackwhelpton, this seems like an expected limitation on the UI side.\r\n\r\n1. I'm curious why a markdown file needs to be 20MB+, can people actually read it in KFP UI?\r\n2. we might be able to optimize UI code to specifically optimize for this case, but there's a limit we can achieve, I wonder what would be your requirement of the largest file you want to visualize.",
"I suspected that would be the case: I've worked around it by including only a limited amount of the data in the markdown that is displayed, pushing the full report out to GCS and including a link in the abridged version.\r\n\r\nTo give more background, what I'm displaying is a list of warnings/errors detected with a data set, kind of a \"lint\" for training and eval data. I was going to try committing the markdown and seeing if GitHub can display it, just as a test for what the browser itself can handle.\r\n\r\nIt might suffice to have something in the documentation that explains roughly where these limits lie: I'm not sure it's vital to fix this. The combination of \"here are the first 50 errors\" and \"here's where to get a complete log file\" is probably the way to go here.",
"Totally agree with you!\nSo todo for this issue is updating docs to inform visualization size limits.\n\nAlso, it can probably be nice if UI code could detect that the artifact is too big -- cannot display.",
"/assign @zijianjoy "
] | 2021-04-20T20:13:23 | 2021-05-25T04:10:09 | 2021-05-25T04:10:09 |
CONTRIBUTOR
| null |
### Environment
* KFP version: 1.0.1
### Steps to reproduce
Output a markdown artifact containing a large amount of Markdown inline:
```python
metadata = {
"outputs": [
{
"storage": "inline",
"source": "<large markdown string: ~20MB>",
"type": "markdown",
}]
}
```
### Expected result
The Visualizations tab and Run output should render the Markdown.
Instead, the raw markup is shown and, after some time spent loading, the Chrome tab crashes with SBOX_FATAL_MEMORY_EXCEEDED.
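A hypothetical mitigation, in line with the workaround discussed in the issue comments, is to keep only a short inline preview and upload the full report to GCS. The `google-cloud-storage` client, the bucket, and the helper name below are assumptions.
```python
# Sketch: truncate the inline markdown and link to the full report stored in GCS.
from google.cloud import storage


def publish_report(markdown: str, bucket: str, blob_name: str, preview_lines: int = 50) -> dict:
    blob = storage.Client().bucket(bucket).blob(blob_name)
    blob.upload_from_string(markdown, content_type="text/markdown")
    preview = "\n".join(markdown.splitlines()[:preview_lines])
    link = f"https://storage.cloud.google.com/{bucket}/{blob_name}"
    source = f"{preview}\n\n[Full report]({link})"
    return {"outputs": [{"storage": "inline", "source": source, "type": "markdown"}]}
```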
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5513/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5512
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5512/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5512/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5512/events
|
https://github.com/kubeflow/pipelines/issues/5512
| 863,010,858 |
MDU6SXNzdWU4NjMwMTA4NTg=
| 5,512 |
Missing Markdown support: horizontal rule, emojis
|
{
"login": "jackwhelpton",
"id": 6883147,
"node_id": "MDQ6VXNlcjY4ODMxNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackwhelpton",
"html_url": "https://github.com/jackwhelpton",
"followers_url": "https://api.github.com/users/jackwhelpton/followers",
"following_url": "https://api.github.com/users/jackwhelpton/following{/other_user}",
"gists_url": "https://api.github.com/users/jackwhelpton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackwhelpton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackwhelpton/subscriptions",
"organizations_url": "https://api.github.com/users/jackwhelpton/orgs",
"repos_url": "https://api.github.com/users/jackwhelpton/repos",
"events_url": "https://api.github.com/users/jackwhelpton/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackwhelpton/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
open
| false |
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"As a workaround, using the character ⚠️ directly works; this doesn't help with, for example, ℹ️ (`:information_source:`).\r\n\r\nThis doesn't help with the horizontal rule, either.",
"I'm expanding this ticket as I find other missing markdown features: let me know if each one should be spun off as a separate ticket.",
"Hello @jackwhelpton , feel free to add more missing markdown features in this ticket. We can update the issue description to match the scope as well. Ideally we should use another markdown render which can fit all markdown feature requests. \r\n\r\nhttps://github.com/kubeflow/pipelines/blob/226079338b38b85ae3b63a5e289c6ca0cd9511d0/frontend/src/components/viewers/MarkdownViewer.tsx#L111-L114",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-04-20T16:43:45 | 2022-03-11T05:40:26 | null |
CONTRIBUTOR
| null |
### Environment
* KFP version: 1.0.1
### Steps to reproduce
Output a markdown artifact containing the missing features: a horizontal rule, and an emoji, e.g. `:warning:`
```python
metadata = {
"outputs": [
{
"storage": "inline",
"source": "line1\n---\n:warning: Item had multiple genre labels",
"type": "markdown",
}]
}
```
### Expected result
The Visualizations tab and Run output should render the horizontal rule and emoji:
line1
---
⚠️ Item had multiple genre labels
Instead, the original markup code is visible:
`line1 --- :warning: Item had multiple genre labels`
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5512/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5511
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5511/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5511/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5511/events
|
https://github.com/kubeflow/pipelines/issues/5511
| 862,959,179 |
MDU6SXNzdWU4NjI5NTkxNzk=
| 5,511 |
Can the workflow controller be upgraded to argo v2.12.6?
|
{
"login": "midhun1998",
"id": 24776450,
"node_id": "MDQ6VXNlcjI0Nzc2NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/24776450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/midhun1998",
"html_url": "https://github.com/midhun1998",
"followers_url": "https://api.github.com/users/midhun1998/followers",
"following_url": "https://api.github.com/users/midhun1998/following{/other_user}",
"gists_url": "https://api.github.com/users/midhun1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/midhun1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/midhun1998/subscriptions",
"organizations_url": "https://api.github.com/users/midhun1998/orgs",
"repos_url": "https://api.github.com/users/midhun1998/repos",
"events_url": "https://api.github.com/users/midhun1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/midhun1998/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Hi @midhun1998, kubeflow 1.3 is about to release and argo workflow controller has been upgraded to 2.12.9",
"Thanks for letting me know @Bobgy . I'm really excited for Kubeflow 1.3 🎉. But we are planning to upgrade to Kubeflow 1.3 only post some analysis(Could take a while🙁) . In the Meantime, is it possible to upgrade the current workflow controller as asked in my first comment? ",
"Not sure, I think you should be good. Have a try?",
"Cool. Will test it and update here incase someone else is in the same situation. 👍🏻",
"Haven't done it. Found an alternative way to achieve what we wanted. Closing this."
] | 2021-04-20T15:47:16 | 2021-05-16T13:41:24 | 2021-05-16T13:41:24 |
MEMBER
| null |
Hi,
We are looking to upgrade the workflow controller to v2.12.6 based on #4553. The currently installed version of the Argo workflow controller is v2.3, which ships with Kubeflow v1.2. The main reason to upgrade is that Argo v2.3 outputs only a few workflow metrics.
Will there be any incompatibilities? If not, is changing the image enough, or should we upgrade any other components as well? Please advise, @Bobgy (#5232).
/kind question
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5511/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5509
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5509/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5509/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5509/events
|
https://github.com/kubeflow/pipelines/issues/5509
| 862,543,934 |
MDU6SXNzdWU4NjI1NDM5MzQ=
| 5,509 |
[feature] Artifacts validity period
|
{
"login": "jw-websensa",
"id": 80677462,
"node_id": "MDQ6VXNlcjgwNjc3NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/80677462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jw-websensa",
"html_url": "https://github.com/jw-websensa",
"followers_url": "https://api.github.com/users/jw-websensa/followers",
"following_url": "https://api.github.com/users/jw-websensa/following{/other_user}",
"gists_url": "https://api.github.com/users/jw-websensa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jw-websensa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jw-websensa/subscriptions",
"organizations_url": "https://api.github.com/users/jw-websensa/orgs",
"repos_url": "https://api.github.com/users/jw-websensa/repos",
"events_url": "https://api.github.com/users/jw-websensa/events{/privacy}",
"received_events_url": "https://api.github.com/users/jw-websensa/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1126834402,
"node_id": "MDU6TGFiZWwxMTI2ODM0NDAy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components",
"name": "area/components",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Hello @jw-websensa , currently we recommend using minio rule to set expiration date: https://docs.min.io/docs/minio-bucket-lifecycle-guide.html. We will learn more about the usability and improve in the future.",
"Hello @zijianjoy, thank you for your response, I am aware about minio rules, but I think it will not work very well as cacher will still find those expired artifacts as available.",
"Thank you @jw-websensa for the extra context, we will consider this in an end-to-end fashion for configuring artifact lifetime in the coming versions."
] | 2021-04-20T08:10:00 | 2021-05-04T21:49:26 | 2021-04-30T00:57:36 |
NONE
| null |
### Feature Area
/area backend
/area components
### What feature would you like to see?
Pipeline run artifacts validity period.
### What is the use case or pain point?
I'm using Kubeflow on an on-premises Kubernetes cluster.
Because we use the Rancher local-path provisioner as the storage class, it is crucial to control disk usage (this class does not enforce PVC limits).
I wonder whether it is possible to set some kind of validity period for each artifact so that the oldest ones are deleted periodically and automatically.
### Is there a workaround currently?
I was thinking of preparing a Kubernetes CronJob to do it manually, but I don't know the pipelines architecture in detail, and I have already noticed that deleting from MinIO is not enough (the cache server still reports the deleted artifacts as available).
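For context, here is a minimal sketch of the MinIO bucket-lifecycle approach suggested in the comments. The endpoint, credentials, bucket name `mlpipeline`, and `artifacts/` prefix are common KFP defaults but should be treated as assumptions, and the caching caveat above still applies.
```python
# Sketch: expire pipeline artifacts in MinIO after 30 days via a lifecycle rule.
from minio import Minio
from minio.commonconfig import ENABLED, Filter
from minio.lifecycleconfig import Expiration, LifecycleConfig, Rule

client = Minio(
    "minio-service.kubeflow:9000",
    access_key="minio",
    secret_key="minio123",
    secure=False,
)
config = LifecycleConfig([
    Rule(
        ENABLED,
        rule_filter=Filter(prefix="artifacts/"),
        rule_id="expire-old-artifacts",
        expiration=Expiration(days=30),
    ),
])
client.set_bucket_lifecycle("mlpipeline", config)
```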
---
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5509/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5509/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5506
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5506/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5506/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5506/events
|
https://github.com/kubeflow/pipelines/issues/5506
| 862,351,938 |
MDU6SXNzdWU4NjIzNTE5Mzg=
| 5,506 |
[v2 sample test] separate samples from tests?
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"@Ark-kun what kind of folder structure do you propose?",
"I don't strongly feel that we need to hide tests, maybe users also want to know how they can\r\n* test their pipelines\r\n* extract results from their pipelines in an external system\r\n\r\nAlso, for user facing samples, they are already in separate folders, so each folder only contains a few files. It's very easy to identify that some files ending with `_test.py` are the tests.",
"Also, we want new sample contributors to be able to easily setup a new test.",
"> @Ark-kun what kind of folder structure do you propose?\r\n\r\nI wonder whether we can reduce each test case to just an item in the test config YAML.\r\n```yaml\r\n- pipelineCodePath: path/to/sample.py\r\n compile:\r\n mode: V2Compat\r\n pipelineFunc: my_pipeline # Optional\r\n arguments: # Optional\r\n param1: value1\r\n param2: value2\r\n- pipelineCodePath: path/to/sample2.py\r\n...\r\n```\r\n\r\nP.S. This is only for trivial tests (compile, submit, wait for success). For non-trivial ones I fully support keeping tests near the samples. I'm not sure non-trivial tests exist, since such tests can/should be part of the sample.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-20T02:48:18 | 2022-03-03T06:05:25 | 2022-03-03T06:05:25 |
CONTRIBUTOR
| null |
> I think we might want to have some separation between what we show to the users and the internal test infra machinery.
> I'm not sure we should show the users with that test infra code.
_Originally posted by @Ark-kun in https://github.com/kubeflow/pipelines/pull/5473#discussion_r616266440_
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5506/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5505
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5505/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5505/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5505/events
|
https://github.com/kubeflow/pipelines/issues/5505
| 862,291,747 |
MDU6SXNzdWU4NjIyOTE3NDc=
| 5,505 |
[v2 sample test] auto scan all files with suffix _test.py instead of config.yaml
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"/cc @chensun @neuromage @Ark-kun ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-20T02:00:20 | 2022-03-03T06:05:35 | 2022-03-03T06:05:35 |
CONTRIBUTOR
| null |
# Problem
We need to add an entry to [samples/test/config.yaml](https://github.com/kubeflow/pipelines/blob/master/samples/test/config.yaml) each time we add a new sample. This is error-prone and arguably unnecessary, because the config.yaml file doesn't contain much information so far.
# Proposal
We could simply scan the samples folder for all files with the `_test.py` suffix and run them (see the sketch below). WDYT?
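A minimal sketch of what that discovery step could look like (standard library only; the `samples` directory layout is assumed):
```python
# Sketch: discover sample tests purely by their `_test.py` suffix.
from pathlib import Path


def discover_sample_tests(samples_dir: str = "samples"):
    """Return all sample test files under `samples_dir`, sorted for stable ordering."""
    return sorted(Path(samples_dir).rglob("*_test.py"))


if __name__ == "__main__":
    for test_file in discover_sample_tests():
        print(test_file)
```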
Originally posted in https://github.com/kubeflow/pipelines/pull/5473#discussion_r616284827
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5505/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5507
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5507/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5507/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5507/events
|
https://github.com/kubeflow/pipelines/issues/5507
| 862,524,121 |
MDU6SXNzdWU4NjI1MjQxMjE=
| 5,507 |
[kubeflow v1.3.0-rc.1] cache server certificate secret not being created
|
{
"login": "raffaelespazzoli",
"id": 6179036,
"node_id": "MDQ6VXNlcjYxNzkwMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6179036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raffaelespazzoli",
"html_url": "https://github.com/raffaelespazzoli",
"followers_url": "https://api.github.com/users/raffaelespazzoli/followers",
"following_url": "https://api.github.com/users/raffaelespazzoli/following{/other_user}",
"gists_url": "https://api.github.com/users/raffaelespazzoli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raffaelespazzoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raffaelespazzoli/subscriptions",
"organizations_url": "https://api.github.com/users/raffaelespazzoli/orgs",
"repos_url": "https://api.github.com/users/raffaelespazzoli/repos",
"events_url": "https://api.github.com/users/raffaelespazzoli/events{/privacy}",
"received_events_url": "https://api.github.com/users/raffaelespazzoli/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"Hi @raffaelespazzoli, can you report your kubernetes version?",
"Also transferred to KFP repo",
"@Bobgy yeah, I intended to open this on the manifests repo. sorry for this., can you add the link to where you transferred it to?\r\nI'm seeing this issue with both minkube 1.17.1 and openshift 4.7 (corresponding to kube 1.20). In both cases I'm using the https://github.com/kubeflow/manifests/apps/pipeline/upstream/env/platform-agnostic-multi-user/?ref=v1.3.0-rc.1 manifests.\r\n\r\nWhen I manually added the cert, I started getting this error from the cache-server log:\r\n```\r\n2021/04/20 12:02:00 open /etc/webhook/certs/cert.pem: no such file or directory\r\n```\r\nit should try to open `tls.crt` not `cert.pem`...\r\n\r\n",
"@raffaelespazzoli the issue seems to be KFP specific, so I suggest putting it here. You can ping corresponding distribution owners here too.",
"cache-deployer deployment is responsible for creating the cert, can you report what logs are there?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-19T21:08:20 | 2022-03-03T06:05:36 | 2022-03-03T06:05:36 |
NONE
| null |
I'm trying to install kubeflow v1.3.0-rc.1 with the official manifests, and it seems to me that the manifests for creating the certs for the cache-server are not present.
It is still hard to say for certain, as the kustomize composition is very hard to follow. That said, thanks for having removed kfctl and kfdef.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5507/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5504
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5504/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5504/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5504/events
|
https://github.com/kubeflow/pipelines/issues/5504
| 861,319,830 |
MDU6SXNzdWU4NjEzMTk4MzA=
| 5,504 |
[backend] Managed Storage Option Fails on KFP
|
{
"login": "bdoohan-goog",
"id": 74371449,
"node_id": "MDQ6VXNlcjc0MzcxNDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/74371449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bdoohan-goog",
"html_url": "https://github.com/bdoohan-goog",
"followers_url": "https://api.github.com/users/bdoohan-goog/followers",
"following_url": "https://api.github.com/users/bdoohan-goog/following{/other_user}",
"gists_url": "https://api.github.com/users/bdoohan-goog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bdoohan-goog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bdoohan-goog/subscriptions",
"organizations_url": "https://api.github.com/users/bdoohan-goog/orgs",
"repos_url": "https://api.github.com/users/bdoohan-goog/repos",
"events_url": "https://api.github.com/users/bdoohan-goog/events{/privacy}",
"received_events_url": "https://api.github.com/users/bdoohan-goog/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"After some offline troubleshooting, found that\r\n> Artifact storage Cloud Storage bucket: With managed storage, Kubeflow Pipelines stores pipeline artifacts in a Cloud Storage bucket. Specify the name of the bucket you want Kubeflow Pipelines to store artifacts in. If the specified bucket doesn't exist, the Kubeflow Pipelines deployer automatically creates a bucket for you in the us-central1 region.\r\n\r\nnote the bucket name inputted should NOT contain \"gs://\" prefix.",
"@Bobgy , that fixed the issue. Thank you! It would be great if the KFP setup UI had more proactive errors, alerting the user as soon as they enter in the \"gs://\". Thank you once again!"
] | 2021-04-19T13:38:12 | 2021-04-20T14:45:41 | 2021-04-20T14:45:41 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
On Google Cloud Platform via "AI Platform" > "Pipelines" > "New Pipeline"
* KFP version:
* 1.4.1
* KFP SDK version:
kfp 1.4.0
kfp-pipeline-spec 0.1.7
kfp-server-api 1.4.1
### Steps to reproduce
1) Log into console.cloud.google.com
2) Navigate to AI Platform > Kubeflow Pipelines
3) Click "New Instance" and then "Configure"
4) Choose your cluster and Namespace
5) Select "Use managed storage" and list Cloud SQL instance and Google Cloud Storage bucket
6) Click deploy
### Expected result
I expect there to be a working KFP instance that allows me to run the sample Chicago Taxi dataset pipeline. Instead I receive this error: `Error: failed to retrieve list of pipelines. Click Details for more information.`
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
@Bobgy , I messaged you in the Slack channel on this.
The GCP walkthrough for this option is available here: https://cloud.google.com/ai-platform/pipelines/docs/setting-up#full-access
#4356 #3332
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5504/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5503
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5503/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5503/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5503/events
|
https://github.com/kubeflow/pipelines/issues/5503
| 861,240,778 |
MDU6SXNzdWU4NjEyNDA3Nzg=
| 5,503 |
[sdk] Bug when using kfp.dsl.ParallelFor with kfp.components.create_component_from_func
|
{
"login": "meowcakes",
"id": 3435150,
"node_id": "MDQ6VXNlcjM0MzUxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3435150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meowcakes",
"html_url": "https://github.com/meowcakes",
"followers_url": "https://api.github.com/users/meowcakes/followers",
"following_url": "https://api.github.com/users/meowcakes/following{/other_user}",
"gists_url": "https://api.github.com/users/meowcakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meowcakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meowcakes/subscriptions",
"organizations_url": "https://api.github.com/users/meowcakes/orgs",
"repos_url": "https://api.github.com/users/meowcakes/repos",
"events_url": "https://api.github.com/users/meowcakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/meowcakes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-19T12:08:23 | 2022-03-03T06:05:34 | 2022-03-03T06:05:34 |
CONTRIBUTOR
| null |
### Environment
* KFP version: 1.1
* KFP SDK version: 1.4
* All dependencies version:
kfp 1.4.0
kfp-pipeline-spec 0.1.7
kfp-server-api 1.4.1
### Steps to reproduce
Run the following pipeline:
```python
import kfp

@kfp.components.create_component_from_func
def foo() -> list:
    return [1,2,3]

@kfp.components.create_component_from_func
def bar() -> list:
    return [{'a': 1}, {'a': 2}, {'a': 3}]

@kfp.components.create_component_from_func
def baz(a: dict) -> list:
    print(type(a))
    print(a)

@kfp.dsl.pipeline(
    name='temp',
    description='temp'
)
def temp():
    a = foo()
    b = bar()

    with kfp.dsl.ParallelFor(a.output) as item:
        baz(item)

    with kfp.dsl.ParallelFor(b.output) as item:
        baz(item)
```
### Expected result
The parallel for over `b` should create 3 tasks, with each task printing `<class 'dict'>` followed by `{'a': x}`. The parallel for over `a` should result in a type error (`int` cannot be converted to `dict`).
Instead, the parallel for over `b` creates a single task `for-loop-1`, which fails to run with error message `This step is in Error state with this message: failed to resolve {{item}}`. The parallel for over `a` runs successfully and prints `<class 'int'>` followed by `x`.
This only seems to be an issue when using `create_component_from_func`; if I create the component spec using YAML it works.
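For illustration, a minimal sketch of the YAML-based workaround (a hand-written component spec loaded with `load_component_from_text`); the `JsonObject` type annotation and the base image here are illustrative assumptions:

```python
import kfp

# Hand-written component spec, roughly equivalent to baz() above but defined
# as YAML text instead of via create_component_from_func.
baz_from_yaml = kfp.components.load_component_from_text("""
name: Baz
inputs:
- {name: a, type: JsonObject}
implementation:
  container:
    image: python:3.7
    command:
    - python3
    - -c
    - |
      import json, sys
      a = json.loads(sys.argv[1])
      print(type(a))
      print(a)
    - {inputValue: a}
""")
```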
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5503/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5503/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5496
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5496/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5496/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5496/events
|
https://github.com/kubeflow/pipelines/issues/5496
| 859,917,466 |
MDU6SXNzdWU4NTk5MTc0NjY=
| 5,496 |
[feature] graph_component accepting non-PipelineParam arguments
|
{
"login": "Udiknedormin",
"id": 20307949,
"node_id": "MDQ6VXNlcjIwMzA3OTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/20307949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Udiknedormin",
"html_url": "https://github.com/Udiknedormin",
"followers_url": "https://api.github.com/users/Udiknedormin/followers",
"following_url": "https://api.github.com/users/Udiknedormin/following{/other_user}",
"gists_url": "https://api.github.com/users/Udiknedormin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Udiknedormin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Udiknedormin/subscriptions",
"organizations_url": "https://api.github.com/users/Udiknedormin/orgs",
"repos_url": "https://api.github.com/users/Udiknedormin/repos",
"events_url": "https://api.github.com/users/Udiknedormin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Udiknedormin/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "neuromage",
"id": 206520,
"node_id": "MDQ6VXNlcjIwNjUyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neuromage",
"html_url": "https://github.com/neuromage",
"followers_url": "https://api.github.com/users/neuromage/followers",
"following_url": "https://api.github.com/users/neuromage/following{/other_user}",
"gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neuromage/subscriptions",
"organizations_url": "https://api.github.com/users/neuromage/orgs",
"repos_url": "https://api.github.com/users/neuromage/repos",
"events_url": "https://api.github.com/users/neuromage/events{/privacy}",
"received_events_url": "https://api.github.com/users/neuromage/received_events",
"type": "User",
"site_admin": false
},
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"/assign @neuromage @chensun ",
"@neuromage @chensun Any progress regarding this issue?",
"@Ark-kun We want to support non-PipelineParam arguments on graph_component so that we can define the graph function with some default arguments. Do you think it makes sense to implement this feature? Thank you. ",
"@Tomcli @Udiknedormin I consider this issue to be a compiler bug or missing feature.\r\nIt makes sense to fix that, but it can be non-trivial due to the general compiler architecture.\r\nIn most cases, the compiler is \"task-oriented\", not \"component-oriented\". It generates Argo templates based on specific tasks and usages (e.g. whether constant or reference was passed as argument), instead of generating templates from components and then passing arguments to those templates. The current approach has many issues. Such approach is especially problematic for recursion, since the compiler cannot analyze the usage sites infinitely and must generate a single template that works for both first and subsequent executions.\r\n\r\nThere might or might not be an easy way to fix this issue.\r\n\r\nMaybe the `graph_component` decorator wrapper code is where this issue can be worked around.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-16T15:25:15 | 2022-03-03T04:05:39 | 2022-03-03T04:05:39 |
CONTRIBUTOR
| null |
### Feature Area
/area sdk
### What feature would you like to see?
`@graph_component` decorator accepting literal values, not just `PipelineParam`s. Currently, if a literal is passed, e.g. `0`, then the following error will be raised:
```
ValueError: arguments to {function_name} should be PipelineParams.
```
### What is the use case or pain point?
If a sequential recursive loop is to be implemented, it may start with a literal:
```python
@graph_component
def sequential_loop(iter_num: int, i: int = 0):
    ...
    incr_i = ...  # increase the loop counter (e.g. via python)
    with dsl.Condition(incr_i < iter_num):
        sequential_loop(iter_num, incr_i)

@dsl.pipeline("main")
def main(iter_num: int):
    sequential_loop(iter_num)  # throws error!
```
It's even worse if such loop is somewhere deeper in the call stack.
I think it can be done by using `@graph_component` to wrap the function's arguments into some type that could be treated like PipelineParam inside of an OpsGroup (i.e. not be inlined) and like a literal outside of it (so that a literal is passed to the OpsGroup's parameter).
I imagine it to work similarly to `LoopArguments`.
### Is there a workaround currently?
```python
@graph_component
def sequential_loop(iter_num: int, i: int = 0):
    ...
    incr_i = ...  # increase the loop counter (e.g. via python)
    with dsl.Condition(incr_i < iter_num):
        sequential_loop(iter_num, incr_i)

@dsl.pipeline("main")
def main(iter_num: int, start_idx: int = 0):
    sequential_loop(iter_num, start_idx)
```
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5496/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5496/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5493
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5493/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5493/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5493/events
|
https://github.com/kubeflow/pipelines/issues/5493
| 859,415,028 |
MDU6SXNzdWU4NTk0MTUwMjg=
| 5,493 |
default resource overhead for each individual profile
|
{
"login": "hsinhoyeh",
"id": 2792682,
"node_id": "MDQ6VXNlcjI3OTI2ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2792682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsinhoyeh",
"html_url": "https://github.com/hsinhoyeh",
"followers_url": "https://api.github.com/users/hsinhoyeh/followers",
"following_url": "https://api.github.com/users/hsinhoyeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hsinhoyeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsinhoyeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsinhoyeh/subscriptions",
"organizations_url": "https://api.github.com/users/hsinhoyeh/orgs",
"repos_url": "https://api.github.com/users/hsinhoyeh/repos",
"events_url": "https://api.github.com/users/hsinhoyeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsinhoyeh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Hello @hsinhoyeh , you can disable the pipeline pods by setting the following label from your profile's namespace labels: `pipelines.kubeflow.org/enabled: false`.\r\n\r\nYou can do this for all the profiles that don't require Kubeflow Pipelines. For future roadmap, we plan to remove visualization server too. \r\n\r\nDoes this solve your issue?",
"This is by design: https://docs.google.com/document/d/1YNxKUbJLnBRL7DbPn76fsShkQx5Q5jTc-iXfLmLt1FU/edit?usp=sharing.\r\n\r\nBTW, the mechanism @zijianjoy introduced is only available in Kubeflow 1.3.\r\n\r\nTo add to that, the deployment script https://github.com/kubeflow/pipelines/blob/a2d3f234eac1eff97f4ebdd99c69460a43ec0695/manifests/kustomize/base/installs/multi-user/pipelines-profile-controller/sync.py is designed so that you can customize to your own requirements.",
"@zijianjoy thanks, it did solve my issue.\r\n@Bobgy seems that the design doc has access control, I got access denied.",
"@hsinhoyeh you can request access",
"@zijianjoy @Bobgy this flag seems to disable all pipeline components (see [here](https://github.com/kubeflow/pipelines/blob/a2d3f234eac1eff97f4ebdd99c69460a43ec0695/manifests/kustomize/base/installs/multi-user/pipelines-profile-controller/sync.py#L33)), not just these two components as I mentioned earlier. do you have any suggestion on how we can disable it without modifying python code?",
"Not really, if you only want to disable the two services, you need to modify the sync.py script.",
"I agree with @hsinhoyeh. @Bobgy could you provide me the access to the document (I sent the request). I would like to understand why these services are launched per user.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-16T03:09:15 | 2022-03-03T04:05:37 | 2022-03-03T04:05:37 |
CONTRIBUTOR
| null |
**Question:**
Not sure whether this question has been discussed before, but I am curious about the resource overhead for each individual profile.
In the current configuration (in v1.2 and v1.3), when creating a single profile, two pods are created by default:
```
ml-pipeline-ui-artifact-c8fbcb8f9-k88bz
ml-pipeline-visualizationserver-6b78c9646-s7q4c
```
Each one carries an additional istio sidecar container (which requests 100m CPU / 128Mi memory).
So in total, the CPU requested is 100m * 2 + 128m * 2 ≈ 0.5 CPU, and the memory requested is 128Mi * 2 + 128Mi * 2 = 512Mi
(even when the profile has only just been created and no further actions have been taken).
So say we create 100 profiles (one per participant) but only 20% of them are heavy users (the 80-20 rule); our system then wastes 100 * 80% * 0.5 CPU units (and 100 * 80% * 512Mi of memory).
This seems more like resource leakage than an intentional design. Any thoughts? Or documents? Thanks.
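For concreteness, a back-of-the-envelope version of the numbers above (the per-container requests are the figures quoted in this issue and should be treated as assumptions about the default manifests, not measurements):

```python
# Rough estimate of the idle per-profile overhead described above.
PODS_PER_PROFILE = 2                      # ml-pipeline-ui-artifact + ml-pipeline-visualizationserver
SIDECAR_CPU_M, SIDECAR_MEM_MI = 100, 128  # istio sidecar requests (assumed)
MAIN_CPU_M, MAIN_MEM_MI = 128, 128        # main container requests (assumed)

cpu_per_profile_m = PODS_PER_PROFILE * (SIDECAR_CPU_M + MAIN_CPU_M)     # 456m ~= 0.5 CPU
mem_per_profile_mi = PODS_PER_PROFILE * (SIDECAR_MEM_MI + MAIN_MEM_MI)  # 512Mi

profiles, idle_ratio = 100, 0.8  # 100 profiles, 80% of them mostly idle
print(f"idle CPU    ~= {profiles * idle_ratio * cpu_per_profile_m / 1000:.1f} cores")
print(f"idle memory ~= {profiles * idle_ratio * mem_per_profile_mi / 1024:.1f} Gi")
```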
---
transferred from https://github.com/kubeflow/kubeflow/issues/5832
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5493/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5484
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5484/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5484/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5484/events
|
https://github.com/kubeflow/pipelines/issues/5484
| 858,959,259 |
MDU6SXNzdWU4NTg5NTkyNTk=
| 5,484 |
[bug] Confusion matrix component task fails
|
{
"login": "Mullefa",
"id": 4085817,
"node_id": "MDQ6VXNlcjQwODU4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4085817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mullefa",
"html_url": "https://github.com/Mullefa",
"followers_url": "https://api.github.com/users/Mullefa/followers",
"following_url": "https://api.github.com/users/Mullefa/following{/other_user}",
"gists_url": "https://api.github.com/users/Mullefa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mullefa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mullefa/subscriptions",
"organizations_url": "https://api.github.com/users/Mullefa/orgs",
"repos_url": "https://api.github.com/users/Mullefa/repos",
"events_url": "https://api.github.com/users/Mullefa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mullefa/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1126834402,
"node_id": "MDU6TGFiZWwxMTI2ODM0NDAy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components",
"name": "area/components",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thank you @Mullefa for identifying this issue! Would you like to create a PR to bump up python version for the fix?",
"Thank you for identifying the issue.\r\n\r\nWe're interested to know more about your pipeline. What is the upstream component you use with the confusion matrix component?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-15T14:47:42 | 2022-03-03T03:05:22 | 2022-03-03T03:05:22 |
NONE
| null |
### What steps did you take
I created a task in a pipeline using the [confusion matrix](https://github.com/kubeflow/pipelines/tree/ab63956f3a61d4d11b27ac26f097e1784588fed9/components/local/confusion_matrix) component.
### What happened:
The task failed with the following message:
```
Traceback (most recent call last):
File "/ml/confusion_matrix.py", line 29, in <module>
import pandas as pd
ImportError: No module named pandas
```
I believe this is because the Python dependencies are [installed for Python 3](https://github.com/kubeflow/pipelines/blob/master/components/local/base/Dockerfile), whereas the command in the component definition is `python2 ...`.
### What did you expect to happen:
For the task to complete successfully.
### Environment:
<!-- Please fill in those that seem relevant. -->
* How do you deploy Kubeflow Pipelines (KFP)?
GCP AI Platform Pipelines
* KFP version:
1.4.1
* KFP SDK version:
1.4.0
### Anything else you would like to add:
<!-- Miscellaneous information that will assist in solving the issue.-->
If this issue is legitimate (and I haven't misinterpreted the error), I believe the [roc component](https://github.com/kubeflow/pipelines/tree/ab63956f3a61d4d11b27ac26f097e1784588fed9/components/local/roc) will suffer from the same issue.
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
/area components
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5484/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5483
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5483/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5483/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5483/events
|
https://github.com/kubeflow/pipelines/issues/5483
| 858,643,385 |
MDU6SXNzdWU4NTg2NDMzODU=
| 5,483 |
[feature] Make MLMD grpc API available through python REST client
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Hmmm, this is exactly kubeflow/metadata server: https://github.com/kubeflow/metadata/tree/master/api\r\nThey used grpc-gateway to generate a REST python client for Metadata API.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-15T08:39:32 | 2022-03-03T06:05:27 | 2022-03-03T06:05:27 |
CONTRIBUTOR
| null |
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
/area backend
<!-- /area sdk -->
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
<!-- Provide a description of this feature and the user experience. -->
Instead of https://github.com/grpc/grpc-web, which we are currently using,
use https://github.com/grpc-ecosystem/grpc-gateway to make the MLMD gRPC API accessible from outside the cluster as a REST endpoint with clients in multiple languages, including Python.
### What is the use case or pain point?
There is no Python client for the MLMD gRPC API through the grpc-web proxy!
<!-- It helps us understand the benefit of this feature for your use case. -->
grpc-web only supports javascript/typescript clients, see https://github.com/grpc/grpc-web.
I cannot see any documentation on how other client languages may be supported.
On the other hand, grpc-gateway supports exposing the gRPC API as a REST API. It also generates swagger API definitions, which we can use to generate clients for all language variants, including Python, our users' primary language.
### Is there a workaround currently?
<!-- Without this feature, how do you accomplish your task today? -->
We can only `kubectl port-forward` the gRPC service to localhost, then use the Python gRPC client to connect to localhost.
However, there is no way to use a Python gRPC client to access the metadata API through an HTTP proxy that does not support gRPC (e.g. the [inverting proxy](https://github.com/google/inverting-proxy) which we deploy for KFP standalone on GCP).
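For illustration, a sketch of that workaround (the service name `metadata-grpc-service` in namespace `kubeflow` and port 8080 are assumptions based on a typical KFP install and may differ in your deployment):

```python
# In a separate shell, forward the MLMD gRPC service to localhost (assumed service/port):
#   kubectl port-forward -n kubeflow svc/metadata-grpc-service 8080:8080
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

config = metadata_store_pb2.MetadataStoreClientConfig()
config.host = "localhost"
config.port = 8080

store = metadata_store.MetadataStore(config)
print(store.get_artifact_types())  # simple smoke test against the forwarded gRPC API
```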
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5483/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5480
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5480/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5480/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5480/events
|
https://github.com/kubeflow/pipelines/issues/5480
| 858,437,230 |
MDU6SXNzdWU4NTg0MzcyMzA=
| 5,480 |
[backend] go get github.com/kubeflow/pipelines got k8s.io/kubernetes@v0.17.9: invalid version: unknown revision v0.17.9
|
{
"login": "motocoder-cn",
"id": 82557955,
"node_id": "MDQ6VXNlcjgyNTU3OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/82557955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/motocoder-cn",
"html_url": "https://github.com/motocoder-cn",
"followers_url": "https://api.github.com/users/motocoder-cn/followers",
"following_url": "https://api.github.com/users/motocoder-cn/following{/other_user}",
"gists_url": "https://api.github.com/users/motocoder-cn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/motocoder-cn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/motocoder-cn/subscriptions",
"organizations_url": "https://api.github.com/users/motocoder-cn/orgs",
"repos_url": "https://api.github.com/users/motocoder-cn/repos",
"events_url": "https://api.github.com/users/motocoder-cn/events{/privacy}",
"received_events_url": "https://api.github.com/users/motocoder-cn/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
open
| false | null |
[] | null |
[
"Thank you @ZhengJuCn for filing this request! Would you like to make a PR for the go get module supportability?",
"cc @Bobgy @capri-xiyue ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"did it fixed? :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The bug still exists, at the same time, I also tried to go get k8s@latest but it didn't work 😭 "
] | 2021-04-15T03:22:26 | 2022-05-22T05:02:31 | null |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Installed pipelines with Kubeflow 1.3.0-rc.0, using the kubeflow/manifests repo example manifest.
* KFP version:
Based on the pipeline docker image, the pipeline version is 1.5.0-rc.2.
* KFP SDK version:
Not used.
### Steps to reproduce
Just run `go get github.com/kubeflow/pipelines`.
I then got the error `go get k8s.io/kubernetes@v0.17.9: k8s.io/kubernetes@v0.17.9: invalid version: unknown revision v0.17.9`.
### Expected result
`go get` of the pipelines module succeeds.
### Materials and Reference
**error message:**
```
go: github.com/kubeflow/pipelines upgrade => v0.0.0-20210414214412-ab63956f3a61
get "k8s.io/kubernetes": found meta tag get.metaImport{Prefix:"k8s.io/kubernetes", VCS:"git", RepoRoot:"https://github.com/kubernetes/kubernetes"} at //k8s.io/kubernetes?go-get=1
go get: github.com/kubeflow/pipelines@v0.0.0-20210414214412-ab63956f3a61 requires
	k8s.io/kubernetes@v0.17.9: reading k8s.io/kubernetes/go.mod at revision v0.17.9: unknown revision v0.17.9
```
**In the kubernetes repo I found a related issue:**
k8s.io/kubernetes is not intended to be used with go get
the published staging repos can be used that way (e.g. go get k8s.io/api@<version>)
If you want to depend on k8s.io/kubernetes with go modules, you would need to define require directives for matching levels of all of the published staging repositories in your go.mod file
Is this a known error? Could someone help solve this problem? Thanks.
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5480/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5480/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5476
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5476/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5476/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5476/events
|
https://github.com/kubeflow/pipelines/issues/5476
| 858,229,278 |
MDU6SXNzdWU4NTgyMjkyNzg=
| 5,476 |
04/14/21 - presubmit kubeflow-pipelines-tfx-python36 failing
|
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[] | 2021-04-14T20:13:26 | 2021-04-14T21:44:12 | 2021-04-14T21:44:12 |
COLLABORATOR
| null |
```
2021-04-14 09:52:20.924348: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "taxi_pipeline_kubeflow_gcp_test.py", line 24, in <module>
    from tfx.examples.chicago_taxi_pipeline import taxi_pipeline_kubeflow_gcp
  File "/usr/local/lib/python3.6/site-packages/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow_gcp.py", line 38, in <module>
    from tfx.extensions.google_cloud_ai_platform.pusher import executor as ai_platform_pusher_executor
  File "/usr/local/lib/python3.6/site-packages/tfx/extensions/google_cloud_ai_platform/pusher/executor.py", line 26, in <module>
    from tfx.extensions.google_cloud_ai_platform import runner
  File "/usr/local/lib/python3.6/site-packages/tfx/extensions/google_cloud_ai_platform/runner.py", line 32, in <module>
    from tfx.extensions.google_cloud_ai_platform import training_clients
  File "/usr/local/lib/python3.6/site-packages/tfx/extensions/google_cloud_ai_platform/training_clients.py", line 22, in <module>
    from google.cloud.aiplatform import gapic
ModuleNotFoundError: No module named 'google.cloud.aiplatform'
```
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5476/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5475
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5475/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5475/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5475/events
|
https://github.com/kubeflow/pipelines/issues/5475
| 858,103,940 |
MDU6SXNzdWU4NTgxMDM5NDA=
| 5,475 |
[components] BrokenPipeError: [Errno 32] Broken pipe issue during mlengine_train_op hyperparameters optimization job
|
{
"login": "mustafaghaliUnity",
"id": 72842150,
"node_id": "MDQ6VXNlcjcyODQyMTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/72842150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mustafaghaliUnity",
"html_url": "https://github.com/mustafaghaliUnity",
"followers_url": "https://api.github.com/users/mustafaghaliUnity/followers",
"following_url": "https://api.github.com/users/mustafaghaliUnity/following{/other_user}",
"gists_url": "https://api.github.com/users/mustafaghaliUnity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mustafaghaliUnity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mustafaghaliUnity/subscriptions",
"organizations_url": "https://api.github.com/users/mustafaghaliUnity/orgs",
"repos_url": "https://api.github.com/users/mustafaghaliUnity/repos",
"events_url": "https://api.github.com/users/mustafaghaliUnity/events{/privacy}",
"received_events_url": "https://api.github.com/users/mustafaghaliUnity/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1126834402,
"node_id": "MDU6TGFiZWwxMTI2ODM0NDAy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components",
"name": "area/components",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"I am experiencing the same issue when submitting a pyspark job to a dataproc cluster with this component: https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.2/components/gcp/dataproc/submit_pyspark_job/component.yaml",
"I think I was able to resolve the issue by applying K8s secret to authenticate the component where this failure happens using kfp gcp extension \r\n`task_op.apply(gcp.use_gcp_secret(\"your-K8s-secret\")) `",
"> I think I was able to resolve the issue by applying K8s secret to authenticate the component where this failure happens using kfp gcp extension\r\n> `task_op.apply(gcp.use_gcp_secret(\"your-K8s-secret\")) `\r\n\r\nThanks for following up! Is the task_op the hptune_op you have in your original example?",
"@pupilye yes ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-04-14T17:23:12 | 2021-09-07T15:31:13 | 2021-09-07T15:31:13 |
NONE
| null |
### Environment
GCP AI platform
* How did you deploy Kubeflow Pipelines (KFP)?
AI platform deployment
* KFP version:
1.4.0
* KFP SDK version:
1.4.0
### Steps to reproduce
```python
mlengine_train_op = comp.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.0/components/gcp/ml_engine/train/component.yaml')

hptune_op = mlengine_train_op(
    project_id=settings.project_id,
    region=settings.region,
    args=dict_to_args(hptune_job_ags),
    job_dir=os.path.join(settings.gcs_working_dir),
    master_image_uri=self.settings.training_job.master_image_uri,  # using a custom container to run the training
    worker_image_uri="",
    training_input=training_input_dict,  # this includes the hyperparameters and the SA
    job_id_prefix="some_prefix",
    job_id=""
)
```
### The log of this component run is:
```
INFO:googleapiclient.discovery:URL being requested: GET
https://ml.googleapis.com/...?alt=json
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/ml/kfp_component/launcher/__main__.py", line 45, in <module>
main()
File "/ml/kfp_component/launcher/__main__.py", line 42, in main
launch(args.file_or_module, args.args)
File "/ml/kfp_component/launcher/launcher.py", line 45, in launch
return fire.Fire(module, command=args, name=module.__name__)
File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
component_trace = _Fire(component, args, context, name)
File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 366, in _Fire
component, remaining_args)
File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 542, in _CallCallable
result = fn(*varargs, **kwargs)
File "/ml/kfp_component/google/ml_engine/_train.py", line 105, in train
job_dir_output_path=job_dir_output_path)
File "/ml/kfp_component/google/ml_engine/_create_job.py", line 62, in create_job
job_dir_output_path=job_dir_output_path,
File "/ml/kfp_component/google/ml_engine/_create_job.py", line 89, in execute_and_wait
job_dir_output_path=self._job_dir_output_path,
File "/ml/kfp_component/google/ml_engine/_common_ops.py", line 93, in wait_for_job_done
job = ml_client.get_job(project_id, job_id)
File "/ml/kfp_component/google/ml_engine/_client.py", line 69, in get_job
name=job_name).execute()
File "/usr/local/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/googleapiclient/http.py", line 846, in execute
method=str(self.method), body=self.body, headers=self.headers)
File "/usr/local/lib/python3.7/site-packages/googleapiclient/http.py", line 183, in _retry_request
raise exception
File "/usr/local/lib/python3.7/site-packages/googleapiclient/http.py", line 164, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google_auth_httplib2.py", line 209, in request
self.credentials.before_request(self._request, method, uri, request_headers)
File "/usr/local/lib/python3.7/site-packages/google/auth/credentials.py", line 133, in before_request
self.refresh(request)
File "/usr/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 111, in refresh
self._retrieve_info(request)
File "/usr/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 88, in _retrieve_info
request, service_account=self._service_account_email
File "/usr/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 234, in get_service_account_info
return get(request, path, params={"recursive": "true"})
File "/usr/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 150, in get
response = request(url=url, method="GET", headers=_METADATA_HEADERS)
File "/usr/local/lib/python3.7/site-packages/google_auth_httplib2.py", line 120, in __call__
url, method=method, body=body, headers=headers, **kwargs
File "/usr/local/lib/python3.7/site-packages/googleapiclient/http.py", line 1724, in new_request
redirections, connection_type)
File "/usr/local/lib/python3.7/site-packages/httplib2/__init__.py", line 1991, in request
cachekey,
File "/usr/local/lib/python3.7/site-packages/httplib2/__init__.py", line 1651, in _request
conn, request_uri, method, body, headers
File "/usr/local/lib/python3.7/site-packages/httplib2/__init__.py", line 1558, in _conn_request
conn.request(method, request_uri, body, headers)
File "/usr/local/lib/python3.7/http/client.py", line 1277, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1323, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1272, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1032, in _send_output
self.send(msg)
File "/usr/local/lib/python3.7/http/client.py", line 993, in send
self.sock.sendall(data)
BrokenPipeError: [Errno 32] Broken pipe
```
### Materials and Reference
referring to [Ml_engine_train_op component ](https://github.com/kubeflow/pipelines/blob/master/components/gcp/ml_engine/train/README.md)
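Per the follow-up discussion on this issue, the failure went away after authenticating the training step with a Kubernetes secret via the KFP GCP extension. A minimal sketch (the secret name `user-gcp-sa` is the conventional default and an assumption here):

```python
from kfp import gcp

# Mount GCP credentials from a Kubernetes secret onto the training step so the
# component can refresh its service-account token instead of relying on the
# node metadata server.
hptune_op.apply(gcp.use_gcp_secret("user-gcp-sa"))
```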
---
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5475/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5475/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5474
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5474/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5474/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5474/events
|
https://github.com/kubeflow/pipelines/issues/5474
| 858,018,271 |
MDU6SXNzdWU4NTgwMTgyNzE=
| 5,474 |
[Discussion] Future of Run UUID placeholders: `[[RunUUID]]` vs `kfp.dsl.RUN_ID_PLACEHOLDER`
|
{
"login": "StefanoFioravanzo",
"id": 3354305,
"node_id": "MDQ6VXNlcjMzNTQzMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3354305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StefanoFioravanzo",
"html_url": "https://github.com/StefanoFioravanzo",
"followers_url": "https://api.github.com/users/StefanoFioravanzo/followers",
"following_url": "https://api.github.com/users/StefanoFioravanzo/following{/other_user}",
"gists_url": "https://api.github.com/users/StefanoFioravanzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StefanoFioravanzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StefanoFioravanzo/subscriptions",
"organizations_url": "https://api.github.com/users/StefanoFioravanzo/orgs",
"repos_url": "https://api.github.com/users/StefanoFioravanzo/repos",
"events_url": "https://api.github.com/users/StefanoFioravanzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/StefanoFioravanzo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false | null |
[] | null |
[
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-14T15:35:15 | 2022-03-03T06:05:27 | 2022-03-03T06:05:26 |
MEMBER
| null |
After merging #4995 we realized that there is some conflicting behaviour between the `[[RunUUID]]` macro and `kfp.dsl.RUN_ID_PLACEHOLDER`, which points to `{{workflow.uid}}` (related to #3709).
I will bring some of the discussion from #4995 here so we can continue it in this issue.
@Bobgy says:
> To reduce backward compatibility concerns, I'd suggest the following rollout plan:
>
> 1. Now KFP api server supports both `[[RunUUID]]` and `{{workflow.uid}}` as KFP run ID placeholder.
> 2. After a certain period (3 months?), change kfp.dsl.RUN_ID_PLACEHOLDER to [[RunUUID]]. (Wait 3 months, because there are people who upgrade their SDK a lot more than their KFP cluster.)
> 3. After another certain period (3 months?), merge #3709. (Again, the wait period is to allow old SDK users to keep using the new KFP api server without problems.)
@StefanoFioravanzo says:
> > Now KFP api server supports both [[RunUUID]] and {{workflow.uid}} as KFP run ID placeholder.
>
> Indeed, and I think this is actually good. This is why:
>
> We cannot just update `kfp.dsl.RUN_ID_PLACEHOLDER` to be `[[RunUUID]]` because the current placeholder is an Argo macro, which will be replaced anywhere in the workflow, while `[[RunUUID]]` works only for pipeline parameters.
> We should keep support for `[[RunUUID]]` in one-off runs because it is a consistent mechanism to retrieve the Run UUID in pipeline parameters. People use this in SWFs and they expect it to work in one-off runs as well (we've seen this with our customers).
> The KFP Python SDK placeholder can be used as well, it's more generic and it does overlap a little bit with `[[RunUUID]]`, but this doesn't mean we need to support just one or the other. They are two different mechanisms that happen to overlap in this particular case (run id as a pipeline parameter). I do still think that merging #3709 would make things cleaner, but it's an orthogonal discussion.
>
> Maybe, as a longer term plan:
>
> It's the KFP API server that replaces the Argo string with the run id throughout the workflow. That's how we can get the run id using that string. We can make the API server also replace all macro (`[[ * ]]`) occurrences throughout the workflow the exact same way. Then we can phase out the Argo string.
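For illustration only (this snippet is not from the linked PRs): a minimal sketch, assuming the KFP v1 SDK, of how the placeholder is typically passed to a component as a pipeline parameter today.
```python
# Illustrative only (assumed KFP v1 SDK usage, not code from this issue):
# passing the run ID placeholder to a component as a pipeline parameter.
from kfp import dsl
from kfp.components import create_component_from_func


def echo_run_id(run_id: str):
    """Prints whatever run ID the backend substituted at runtime."""
    print(f'This run has ID: {run_id}')


echo_run_id_op = create_component_from_func(echo_run_id)


@dsl.pipeline(name='run-id-placeholder-demo')
def run_id_pipeline():
    # dsl.RUN_ID_PLACEHOLDER currently expands to the Argo macro
    # {{workflow.uid}}; the discussion above is about whether it should
    # instead expand to the KFP-native [[RunUUID]] macro.
    echo_run_id_op(run_id=dsl.RUN_ID_PLACEHOLDER)
```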
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5474/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5474/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5471
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5471/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5471/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5471/events
|
https://github.com/kubeflow/pipelines/issues/5471
| 857,683,331 |
MDU6SXNzdWU4NTc2ODMzMzE=
| 5,471 |
[feature] support customizing Tensorboard images
|
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"I think option 1 and 2 can probably both be useful",
"Some feature requests that is related to this https://github.com/kubeflow/pipelines/issues/1641 https://github.com/kubeflow/pipelines/issues/4714",
"Note, a prerequisite for implementing this feature is supporting passing full image to KFP viewer API. Currently we can only pass tensorboard version there: see https://github.com/kubeflow/pipelines/blob/2c682299202ad019bb2b2bb2ac8903c72ab245cf/frontend/src/components/viewers/Tensorboard.tsx#L215-L229 and https://github.com/kubeflow/pipelines/blob/2c682299202ad019bb2b2bb2ac8903c72ab245cf/frontend/server/handlers/tensorboard.ts#L69",
"@Bobgy Being able to pass full image paths is a must for most on-prem installations since they use an internal docker registry. For instance, our k8s klusters have no external access and any tensorflow-based image needs our internal certificates added.",
"/assign\r\nI will work on this shortly, the best way to help is confirm whether above suggested options fit your use-case.\r\n\r\nI will implement option 2 directly, because that seems like the fundamental requirement.",
"@Bobgy This seems like a good option.",
"@Bobgy Thanks, btw which version can I expect it to land on?\r\n",
"@lechwolowski 1.5.0 has been too close to a final release, so you should expect it to land on 1.6.0, but I can share a prebuilt image you can put in your cluster first to get this feature. KFP UI server has been very stable, so I won't expect much incompatibilities."
] | 2021-04-14T08:59:38 | 2021-05-11T07:45:42 | 2021-05-11T07:45:42 |
CONTRIBUTOR
| null |
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
/area frontend
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
<!-- Provide a description of this feature and the user experience. -->
Allow users to customize the tensorboard image they want to use in KFP component visualizations.
https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/
There are mainly two ways I can think of to support this feature:
Option 1:
1. add an input textbox for image in Tensorboard plugin that users can directly edit
2. (optional) let the textbox remember historic entries
Option 2 - let the component output decide (see the hypothetical sketch below)
1. when a component emits visualization metadata, add a new field `image` where it can specify the tensorboard image
2. KFP UI loads the `mlpipeline-ui-metadata` and defaults to showing the image specified in the component metadata
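For illustration only, a minimal sketch of what a component could write under Option 2. The `image` field is the proposed addition and is not part of the current `mlpipeline-ui-metadata` schema; the image reference is a made-up example.
```python
# Hypothetical sketch: the `image` field is the proposed Option 2 addition and
# is NOT supported by the current mlpipeline-ui-metadata schema.
import json


def write_ui_metadata(metadata_path: str, log_dir: str) -> None:
    """Writes visualization metadata that names the Tensorboard image to use."""
    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            'source': log_dir,
            # Proposed field: the Tensorboard image the KFP UI should launch.
            'image': 'gcr.io/my-project/my-tensorboard:2.4.1',  # made-up image
        }]
    }
    with open(metadata_path, 'w') as f:
        json.dump(metadata, f)
```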
### What is the use case or pain point?
<!-- It helps us understand the benefit of this feature for your use case. -->
Existing issues that can be addressed by this feature: https://github.com/kubeflow/pipelines/issues/5449
It's currently not possible to choose any tensorboard image version unless it is [hard-coded in KFP UI](https://github.com/kubeflow/pipelines/blob/2c682299202ad019bb2b2bb2ac8903c72ab245cf/frontend/src/components/viewers/Tensorboard.tsx#L216).
Users or pipeline components know best which tensorboard version/image can visualize their output, not the KFP cluster operator.
### Is there a workaround currently?
<!-- Without this feature, how do you accomplish your task today? -->
No
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5471/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5471/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5470
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5470/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5470/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5470/events
|
https://github.com/kubeflow/pipelines/issues/5470
| 857,583,162 |
MDU6SXNzdWU4NTc1ODMxNjI=
| 5,470 |
Assigning copyright to the project authors
|
{
"login": "jdavis34265",
"id": 82343735,
"node_id": "MDQ6VXNlcjgyMzQzNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/82343735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jdavis34265",
"html_url": "https://github.com/jdavis34265",
"followers_url": "https://api.github.com/users/jdavis34265/followers",
"following_url": "https://api.github.com/users/jdavis34265/following{/other_user}",
"gists_url": "https://api.github.com/users/jdavis34265/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jdavis34265/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdavis34265/subscriptions",
"organizations_url": "https://api.github.com/users/jdavis34265/orgs",
"repos_url": "https://api.github.com/users/jdavis34265/repos",
"events_url": "https://api.github.com/users/jdavis34265/events{/privacy}",
"received_events_url": "https://api.github.com/users/jdavis34265/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1682717377,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzc3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/discussion",
"name": "kind/discussion",
"color": "ecfc15",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "theadactyl",
"id": 6440149,
"node_id": "MDQ6VXNlcjY0NDAxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6440149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theadactyl",
"html_url": "https://github.com/theadactyl",
"followers_url": "https://api.github.com/users/theadactyl/followers",
"following_url": "https://api.github.com/users/theadactyl/following{/other_user}",
"gists_url": "https://api.github.com/users/theadactyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theadactyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theadactyl/subscriptions",
"organizations_url": "https://api.github.com/users/theadactyl/orgs",
"repos_url": "https://api.github.com/users/theadactyl/repos",
"events_url": "https://api.github.com/users/theadactyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/theadactyl/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"@jdavis34265 I think this is a very good point!\r\nThere were some early usages of \"The Kubeflow Authors\" in pipelines repo too, see https://github.com/kubeflow/pipelines/search?p=1&q=the+kubeflow+authors. However, we are not enforcing this style consistently.\r\n\r\nI think it's reasonable to discuss this proposal, and if we accept it. IIUC, we should ask all the new contributions to copyright to `The Kubeflow Authors`. However, I'm not a lawyer, but my understand was that we cannot change existing headers. And also, copyrighting code to `The Kubeflow Authors` does not change the fact that copyright for a line of code still belongs to the company of that contributor.\r\n\r\nCan I confirm the major rationale for making the change is making the project give credits to its contributors? (just confirmation, I think that's a good enough reason)",
"Our conclusion:\r\n\r\nGoogle will initiate a PR to update all Google LLC copyright headers to \"The Kubeflow Authors\".\r\nGoogle does not have the legal right to change other copyright headers.\r\n\r\nGoing forward, we'll suggest people to stick to \"The Kubeflow Authors\" headers.",
"Sound good!",
"This is great!\r\n\r\nAs https://opensource.google/docs/releasing/authors/ instructs, there should be an `AUTHORS` file. Do we have one? If not, how do we plan on populating it and where should it live (in which repository)?",
"Thanks for bringing this up @elikatsis!\r\nI'll put up a PR to add the AUTHORS file."
] | 2021-04-14T06:55:39 | 2021-06-01T09:49:05 | 2021-06-01T09:49:05 |
NONE
| null |
I wanted to start a discussion about the open-source nature of the Kubeflow projects.
Most Kubeflow projects already follow the open-source guidelines, but I've noticed some issues in the Kubeflow Pipelines project.
I see that Kubeflow Pipelines has a great developer community that plays a big part in development.
Despite that, I've noticed that all copyrights are being assigned to the Google LLC corporation.
Most open-source projects, including major projects by Google (TensorFlow, Kubernetes, etc.), assign copyright to the project contributors.
To improve the openness of the Kubeflow Pipelines project **I propose to change the license messages to: "Copyright [year] The Kubeflow Pipelines Authors"**.
I found explicit guidance for this in the Google's open-source guidelines: https://opensource.google/docs/releasing/authors/
**Update the copyright statements in your LICENSE file and all file headers to list “The [Project] Authors” rather than “Google”:
Before: Copyright 2014 Google LLC
After: Copyright 2014 The [Project Name] Authors.**
---
Some examples of Google projects adopting this approach:
TensorFlow: "Copyright 2015 The TensorFlow Authors." https://github.com/tensorflow/tensorflow/blob/master/tensorflow/__init__.py
Knative: "Copyright <year> The Knative Authors" https://knative.dev/community/contributing/repository-guidelines/
Kubernetes: "Copyright 2015 The Kubernetes Authors." https://github.com/kubernetes/kubernetes/blob/master/hack/update-vendor-licenses.sh
Go: "Copyright (c) 2009 The Go Authors." https://golang.org/LICENSE
Kubeflow:
"Copyright 2018 The Kubeflow Authors" https://github.com/kubeflow/common/blob/master/hack/boilerplate/boilerplate.go.txt https://github.com/kubeflow/katib/issues/20
"Copyright 2020 kubeflow.org." https://github.com/kubeflow/kfserving/blob/master/python/kfserving/kfserving/__init__.py
/cc @theadactyl @Bobgy @Ark-kun @chensun @SinaChavoshi @animeshsingh @elikatsis @NikeNano @Tomcli @RedbackThomson @eterna2 @zijianjoy @surajkota @Jeffwan @StefanoFioravanzo @jinchihe @DavidSpek
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5470/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5470/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5467
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5467/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5467/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5467/events
|
https://github.com/kubeflow/pipelines/issues/5467
| 857,285,186 |
MDU6SXNzdWU4NTcyODUxODY=
| 5,467 |
[bug] v2 Sample test: mysql_query aborted: errno: 1213, error: Deadlock found when trying to get lock; try restarting transaction
|
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "neuromage",
"id": 206520,
"node_id": "MDQ6VXNlcjIwNjUyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neuromage",
"html_url": "https://github.com/neuromage",
"followers_url": "https://api.github.com/users/neuromage/followers",
"following_url": "https://api.github.com/users/neuromage/following{/other_user}",
"gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neuromage/subscriptions",
"organizations_url": "https://api.github.com/users/neuromage/orgs",
"repos_url": "https://api.github.com/users/neuromage/repos",
"events_url": "https://api.github.com/users/neuromage/events{/privacy}",
"received_events_url": "https://api.github.com/users/neuromage/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "neuromage",
"id": 206520,
"node_id": "MDQ6VXNlcjIwNjUyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neuromage",
"html_url": "https://github.com/neuromage",
"followers_url": "https://api.github.com/users/neuromage/followers",
"following_url": "https://api.github.com/users/neuromage/following{/other_user}",
"gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neuromage/subscriptions",
"organizations_url": "https://api.github.com/users/neuromage/orgs",
"repos_url": "https://api.github.com/users/neuromage/repos",
"events_url": "https://api.github.com/users/neuromage/events{/privacy}",
"received_events_url": "https://api.github.com/users/neuromage/received_events",
"type": "User",
"site_admin": false
},
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"P.S. The error is intermittent and disappeared after re-run.",
"It seems we should add retry when calling KFP API",
"https://4e18c21c9d33d20f-dot-datalab-vm-staging.googleusercontent.com/#/runs/details/26a16bda-2f69-4cc1-baf1-cde02947a64e\r\n\r\nI'm seeing this once too, among many pipelines in one test, this was the only one that failed. This seems flaky.",
"This issue got fixed by https://github.com/kubeflow/pipelines/pull/5629"
] | 2021-04-13T20:10:19 | 2021-05-18T21:47:48 | 2021-05-18T21:47:47 |
CONTRIBUTOR
| null |
https://4e18c21c9d33d20f-dot-datalab-vm-staging.googleusercontent.com/#/runs/details/bb97866a-d8ad-44a1-bd8e-c28f8df1a560
```
1.1
F0413 20:03:23.231673 1 main.go:56] Failed to successfuly execute component: rpc error: code = Aborted desc = mysql_query aborted: errno: 1213, error: Deadlock found when trying to get lock; try restarting transaction
goroutine 1 [running]:
github.com/golang/glog.stacks(0xc000088e00, 0xc00022a280, 0xe0, 0x135)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:769 +0xb9
github.com/golang/glog.(*loggingT).output(0x10ff320, 0xc000000003, 0xc000430380, 0x1082606, 0x7, 0x38, 0x0)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:720 +0x3b3
github.com/golang/glog.(*loggingT).printf(0x10ff320, 0x3, 0xc1e218, 0x2b, 0xc00063ff58, 0x1, 0x1)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:655 +0x153
github.com/golang/glog.Fatalf(...)
/go/pkg/mod/github.com/golang/[email protected]/glog.go:1148
main.main()
/build/cmd/launch/main.go:56 +0x357
```
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5467/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5462
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5462/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5462/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5462/events
|
https://github.com/kubeflow/pipelines/issues/5462
| 856,404,147 |
MDU6SXNzdWU4NTY0MDQxNDc=
| 5,462 |
[frontend] Pipeline generic installation does not include a virtual service
|
{
"login": "nakfour",
"id": 18536575,
"node_id": "MDQ6VXNlcjE4NTM2NTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/18536575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakfour",
"html_url": "https://github.com/nakfour",
"followers_url": "https://api.github.com/users/nakfour/followers",
"following_url": "https://api.github.com/users/nakfour/following{/other_user}",
"gists_url": "https://api.github.com/users/nakfour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakfour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakfour/subscriptions",
"organizations_url": "https://api.github.com/users/nakfour/orgs",
"repos_url": "https://api.github.com/users/nakfour/repos",
"events_url": "https://api.github.com/users/nakfour/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakfour/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Thank you @nakfour for submitting this request!\r\n\r\n@Bobgy and I have discussed about this issue and our initial thinking is to perform following refactorings:\r\n\r\n1. For `ml-pipeline-ui` VirtualService, we need to create a folder called `ml-pipeline-ui` under https://github.com/kubeflow/pipelines/tree/master/manifests/kustomize/base/pipeline, and put all UI related items, including virtual service file there.\r\n2. For `metadata-grpc` VirtualService, we will create a folder called `virtual-service` under `options` folder: https://github.com/kubeflow/pipelines/tree/master/manifests/kustomize/base/metadata/options. And put the virtual service definition to that folder. And put the remaining files under `istio` option to a new folder called `multi-user` option. Also we need to add a README to explain that if you want to use virtual service, you can install `virtual-service` option.",
"For now, you can use the `kubeflow/pipelines` instead of `kubeflow/manifests` repo for the virtual service definition: https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/base/metadata/options/istio/virtual-service.yaml",
"@zijianjoy virtual service for ml-pipeline-ui should still be put in options folder, because it's not needed for env without istio.",
"@Bobgy @zijianjoy sounds good. since these VRs are common for all options to install pipelines, good idea to place them under base/pipeline"
] | 2021-04-12T22:21:53 | 2021-04-19T16:44:17 | 2021-04-19T16:44:17 |
MEMBER
| null |
When installing "platform-agnostic" and trying to get to the pipeline UI from the Kubeflow central dashboard, we get "Sorry, /pipeline/ is not a valid page". Checking for virtualservices in namespace kubeflow, there are none for pipelines. Looking at the code, we dont see a VirtualService created : https://github.com/kubeflow/manifests/tree/master/apps/pipeline/upstream/base/installs/generic
While multi-user install does include a virtualservice: https://github.com/kubeflow/manifests/blob/master/apps/pipeline/upstream/base/installs/multi-user/virtual-service.yaml
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5462/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5462/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5460
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5460/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5460/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5460/events
|
https://github.com/kubeflow/pipelines/issues/5460
| 856,305,867 |
MDU6SXNzdWU4NTYzMDU4Njc=
| 5,460 |
mlpipeline-ui-metadata.json not shown in output artifacts, using common pipeline volume.
|
{
"login": "jagadeeshi2i",
"id": 46392704,
"node_id": "MDQ6VXNlcjQ2MzkyNzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46392704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jagadeeshi2i",
"html_url": "https://github.com/jagadeeshi2i",
"followers_url": "https://api.github.com/users/jagadeeshi2i/followers",
"following_url": "https://api.github.com/users/jagadeeshi2i/following{/other_user}",
"gists_url": "https://api.github.com/users/jagadeeshi2i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jagadeeshi2i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jagadeeshi2i/subscriptions",
"organizations_url": "https://api.github.com/users/jagadeeshi2i/orgs",
"repos_url": "https://api.github.com/users/jagadeeshi2i/repos",
"events_url": "https://api.github.com/users/jagadeeshi2i/events{/privacy}",
"received_events_url": "https://api.github.com/users/jagadeeshi2i/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"> fileOutputs: \r\n> MLPipeline UI metadata: /mlpipeline-ui-metadata.json\r\n\r\nWe do not support `fileOutputs`. It was deprecated from the start. Please use `outputPath` placeholder instead.\r\n\r\n> args:\r\n> - {outputPath: MLPipeline UI metadata}\r\n\r\nThis is correct. Thank you.\r\n\r\nLet's debug your issue.\r\n\r\nAre you seeing any outputs in the `Inputs/Outputs` tab?\r\n\r\nWhat are you writing to the json file?",
"My output parameters shows\r\n\r\n**Output parameters**\r\n```\r\nmlpipeline-ui-metadata-subpath artifact_data/b7a15847-3e46-46c4-8ebb-f1b6eefd05ba_pytorch-vz52r-2186059130/mlpipeline-ui-metadata\r\ntraining-output-subpath artifact_data/b7a15847-3e46-46c4-8ebb-f1b6eefd05ba_pytorch-vz52r-2186059130/training-output\r\ntraining-tensorboard_root-subpath artifact_data/b7a15847-3e46-46c4-8ebb-f1b6eefd05ba_pytorch-vz52r-2186059130/training-tensorboard_root\r\n```\r\nData written to json\r\n```\r\n\r\n{\r\n 'outputs' : [{\r\n 'type': 'tensorboard',\r\n 'source': '/tmp/outputs/tensorboard_root/data/cifar10_lightning_kubeflow'\r\n }]\r\n }\r\n```\r\n\r\nWriting to file: /tmp/outputs/MLPipeline_UI_metadata/data\r\n\r\n",
">subpath\r\n\r\nOh, it looks like you're using volume-based data passing strategy.\r\nUnfortunately, it's currently not supported by the UX.",
"@Ark-kun @paveldournov @Bobgy Can you please comment on how exactly the tensorboard configuration and logs are getting picked up so that the output can be shown from pvc/volume location and visualized in the UI: https://github.com/kubeflow/pipelines/blob/master/components/tensorflow/tensorboard/prepare_tensorboard/component.yaml",
"Discussed over zoom, this approach doesn't work currently. As @Ark-kun replied above, when passing data using volume, KFP UI cannot access the files.",
"> > subpath\r\n> \r\n> Oh, it looks like you're using volume-based data passing strategy.\r\n> Unfortunately, it's currently not supported by the UX.\r\n\r\n@Ark-kun - this followed your earlier recommendation for creating a single volume that is automatically mounted to all pipeline steps. IIUC this does allow the UI to show references/paths to the data, but loading the data in Ui needs access to the volume from the UI microservice, which isn't available.\n\n@Bobgy - can we treat the UI metadata files separately from the artifacts (as in - rely on the Argo logic for copying them to minio as they are likely small files). This way the files will be available for UI. ",
"@paveldournov although this approach is technically possible, it too hacky and currently needs too many manual deployment configurations to get it working e2e.\r\n\r\n1. tensorboard pod needs a config to mount volumes\r\n2. KFP ui needs an uri scheme to understand an artifact is on a volume\r\n\r\nAll of these need to be configured by cluster operator and very fragile.",
"> @Ark-kun - this followed your earlier recommendation for creating a single volume that is automatically mounted to all pipeline steps.\r\n\r\nYes.\r\nWhen data is too big to use normal Argo artifact passing, I still recommend the built-in volume-based data-passing over other volume-based approaches like the VolumeOp and using components that only work with volumes.\r\n\r\nWhen the data is not as big, the standard mode of artifact passing is easier.\r\n\r\nThe idea is that the pipeline author develops and debugs the pipeline using medium-size data. But if the production data is too big, the same pipeline can be submitted with volume-based data passing when working with production data loads.\r\n\r\nWhen the big data is in a volume, its pretty hard for UX to mount that data for visualization, since the volume is not mounted to frontend or backend.\r\n\r\nMany visualization configs are not self-contained. For self-contained visualization configs we could make a change to the volume-based data passing so that it leaves the `MLPipeline UI metadata` output as Argo artifact.\r\n\r\nJudging by `'source': '/tmp/outputs/tensorboard_root/data/cifar10_lightning_kubeflow'`, the visualization is not self-contained. I'm not 100% this will work even with normal data passing without volumes. AFAIK, the Tensorboard visualization only works when logs are uploaded to some URL that is accessible from the Frontend.",
"Closing because question answered, we do not currently have good enough support for visualization + volume based data passing.",
"Is there a workaround for that? "
] | 2021-04-12T19:46:14 | 2021-05-19T06:33:39 | 2021-05-18T13:19:08 |
MEMBER
| null |
### What steps did you take
## Approach 1:
https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/
As mentioned in the above link, we tried to write mlpipeline-ui-metadata.json to `/mlpipeline-ui-metadata.json`.
This throws the below error:
`invalid mount config for type "bind": invalid specification: destination can't be '/`
Sample code from component yaml
```
name: Training
description: |
Training for PyTorch.
inputs:
- {name: dataset}
- {name: params, type: JsonArray}
outputs:
- {name: output}
- {name: tensorboard_root}
- {name: MLPipeline UI metadata, type: UI metadata}
implementation:
container:
image: "pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime"
.....
fileOutputs:
MLPipeline UI metadata: /mlpipeline-ui-metadata.json
```
## Approach 2:
Tried writing the metadata information to a custom path as well using outputPath keyword.
We did not face the error from the previous approach, yet we could not find the tar file in the output artifacts.
Sample code from component yaml
```
name: Training
description: |
Training for PyTorch.
inputs:
- {name: dataset}
- {name: params, type: JsonArray}
outputs:
- {name: output}
- {name: tensorboard_root}
- {name: MLPipeline UI metadata, type: UI metadata}
implementation:
container:
image: "pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime"
.....
args:
- {outputPath: MLPipeline UI metadata}
```
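For illustration only (not from the original report): a minimal sketch of what the training container's entrypoint could do with the path injected for `{outputPath: MLPipeline UI metadata}`, assuming standard (non-volume) artifact passing and a made-up log location.
```python
# Sketch under assumptions: standard artifact passing, made-up gs:// location.
import json
import os
import sys


def write_ui_metadata(output_path: str, log_dir: str) -> None:
    """Writes the UI metadata JSON to the path KFP substituted into the args."""
    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            # Must be a location the KFP UI can actually reach (e.g. gs:// or
            # s3://); a path on a pipeline volume is not visible to the UI.
            'source': log_dir,
        }]
    }
    os.makedirs(os.path.dirname(output_path) or '.', exist_ok=True)
    with open(output_path, 'w') as f:
        json.dump(metadata, f)


if __name__ == '__main__':
    # sys.argv[1] is the path injected for {outputPath: MLPipeline UI metadata}.
    write_ui_metadata(sys.argv[1], 'gs://my-bucket/logs/cifar10')  # made-up URI
```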
### What happened:
In approach 1, the training failed with the given error, and in approach 2, the metadata file was not found.
### What did you expect to happen:
Metadata information should be shown in output artifacts and the corresponding visualization should be shown.
### Environment:
* How do you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version: 1.4.1
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version:
kfp 1.4.0
kfp-pipeline-spec 0.1.7
kfp-server-api 1.4.1
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Anything else you would like to add:
<!-- Miscellaneous information that will assist in solving the issue.-->
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5460/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5457
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5457/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5457/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5457/events
|
https://github.com/kubeflow/pipelines/issues/5457
| 855,415,989 |
MDU6SXNzdWU4NTU0MTU5ODk=
| 5,457 |
[frontend] Some frontend tests are locale-dependent
|
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Good catch! +1 to set a locale during test"
] | 2021-04-11T21:50:39 | 2021-04-15T15:29:13 | 2021-04-15T15:29:13 |
CONTRIBUTOR
| null |
When I run the frontend tests on my machine, I get many errors:
```
Test Suites: 9 failed, 44 passed, 53 total
Tests: 34 failed, 1038 passed, 1072 total
Snapshots: 14 failed, 307 passed, 321 total
```
I expect the tests to pass under any system locale.
Perhaps the tests can specify an explicit locale that the date functions will use.
A couple of the errors:
```
● PipelineDetails › closes side panel when close button is clicked
expect(received).toMatchSnapshot()
Snapshot name: `PipelineDetails closes side panel when close button is clicked 1`
- Snapshot
+ Received
@@ -101,11 +101,11 @@
className="summaryKey"
>
Uploaded on
</div>
<div>
- 9/5/2018, 4:03:02 AM
+ 2018-9-5 05:03:02
</div>
<div
className="summaryKey"
>
Description
654 | tree.find('SidePanel').simulate('close');
655 | expect(tree.state('selectedNodeId')).toBe('');
> 656 | expect(tree).toMatchSnapshot();
| ^
657 | });
658 |
659 | it('shows correct versions in version selector', async () => {
at Object.toMatchSnapshot (src/pages/PipelineDetails.test.tsx:656:18)
```
BTW, see how the time differs by an hour.
```
FAIL src/components/Trigger.test.tsx (30.921s)
● Trigger › cron › builds a 1-minute cron trigger with specified start date
expect(jest.fn()).toHaveBeenLastCalledWith(...expected)
Expected: {"catchup": true, "maxConcurrentRuns": "10", "trigger": {"cron_schedule": {"cron": "0 * * * * ?", "end_time": undefined, "start_time": 2018-03-23T07:53:00.000Z}}}
Received
3: {"catchup": true, "maxConcurrentRuns": "10", "trigger": {"cron_schedule": {"cron": "0 * * * * ?", "end_time": undefined, "start_time": 2018-12-21T15:53:00.000Z}}}
-> 4
@@ -3,9 +3,9 @@
"maxConcurrentRuns": "10",
"trigger": Object {
"cron_schedule": Object {
"cron": "0 * * * * ?",
"end_time": undefined,
- "start_time": 2018-03-23T07:53:00.000Z,
+ "start_time": 2018-03-23T15:53:00.000Z,
},
},
},
```
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5457/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5457/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5453
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5453/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5453/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5453/events
|
https://github.com/kubeflow/pipelines/issues/5453
| 854,657,939 |
MDU6SXNzdWU4NTQ2NTc5Mzk=
| 5,453 |
[Discuss] Should KFServing Component use InputPath?
|
{
"login": "wilbry",
"id": 36278506,
"node_id": "MDQ6VXNlcjM2Mjc4NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36278506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilbry",
"html_url": "https://github.com/wilbry",
"followers_url": "https://api.github.com/users/wilbry/followers",
"following_url": "https://api.github.com/users/wilbry/following{/other_user}",
"gists_url": "https://api.github.com/users/wilbry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilbry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilbry/subscriptions",
"organizations_url": "https://api.github.com/users/wilbry/orgs",
"repos_url": "https://api.github.com/users/wilbry/repos",
"events_url": "https://api.github.com/users/wilbry/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilbry/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1126834402,
"node_id": "MDU6TGFiZWwxMTI2ODM0NDAy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components",
"name": "area/components",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "animeshsingh",
"id": 3631320,
"node_id": "MDQ6VXNlcjM2MzEzMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3631320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/animeshsingh",
"html_url": "https://github.com/animeshsingh",
"followers_url": "https://api.github.com/users/animeshsingh/followers",
"following_url": "https://api.github.com/users/animeshsingh/following{/other_user}",
"gists_url": "https://api.github.com/users/animeshsingh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/animeshsingh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/animeshsingh/subscriptions",
"organizations_url": "https://api.github.com/users/animeshsingh/orgs",
"repos_url": "https://api.github.com/users/animeshsingh/repos",
"events_url": "https://api.github.com/users/animeshsingh/events{/privacy}",
"received_events_url": "https://api.github.com/users/animeshsingh/received_events",
"type": "User",
"site_admin": false
},
{
"login": "Tomcli",
"id": 10889249,
"node_id": "MDQ6VXNlcjEwODg5MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10889249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tomcli",
"html_url": "https://github.com/Tomcli",
"followers_url": "https://api.github.com/users/Tomcli/followers",
"following_url": "https://api.github.com/users/Tomcli/following{/other_user}",
"gists_url": "https://api.github.com/users/Tomcli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tomcli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tomcli/subscriptions",
"organizations_url": "https://api.github.com/users/Tomcli/orgs",
"repos_url": "https://api.github.com/users/Tomcli/repos",
"events_url": "https://api.github.com/users/Tomcli/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tomcli/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"I like the suggestion. Having file-based inputs always helps.\r\nI think that KFServe consuming the model as file, rather than as URL is significantly easier for the pipeline authors.\r\n\r\nThe change might be non-trivial, since the KFServing component needs to copy the data to the serving pod[s].\r\n\r\nYou can try a naïve draft variant and see what happens (most likely it will fail): https://raw.githubusercontent.com/kubeflow/pipelines/2f6fd6ca7e19a39f8d838dc83f9d4ae059d354a9/components/kubeflow/kfserving/from_artifact/component.yaml\r\n\r\nPerhaps the KFServe community can help with this improvement.",
"I think you are correct that it may be non-trivial. I took the component.yaml you provided and modified the kfservingdeployer.py file in src to append `\"file://\"` if no prefix was given, but upon running it, I realize that kubeflow is most likely attaching the storage to the pod running the code in kfservingdeployer.py, and not the pod that gets launched by KFServing, so you still end up with an error like\r\n\r\n```\r\nRuntimeError: Local path file:///tmp/inputs/Model/data does not exist.\r\n```\r\n\r\nI was considering using the Minio URL, but I am not sure if that is possible to get from the pipeline param dynamically. I will keep playing around with this and let you know if I make any progress, but would be happy for others to jump in with their suggestions. ",
"Seems that there is a way to pass [PVC URI](https://github.com/kubeflow/kfserving/blob/a2e835f9c9d057e7bc04160ba903f715c175c757/pkg/webhook/admission/pod/storage_initializer_injector.go#L116), but I am a bit out of my depth to know how to format that using only the PipelineParam. ",
"I don't really know much about KFServing.\r\nThe general solution might be as follows:\r\n\r\n* KFServe Launcher component creates a KFServe pod with an init container which will wait to receive the model. The init and main containers share an emptyDir volume where the model will be placed.\r\n* The launcher component uses `kubectl cp` to copy the model into the init container FS, into the shared emptyDir volume. It then writes a completion flag file to indicate that the copying if finished.\r\n* The init container notices the completion flag file and exists.\r\n* The main container starts and reads the model from the emptyDir volume file.\r\n\r\nThe wait for model polling could also be integrated into KFServe itself.\r\n\r\nWDYT?",
"I also don't have much KFServing knowledge. I saw it it was part of the overall Kubeflow framework and wanted to use it to deploy models trained using KFP. Do you know who might be good to loop into this discussion or should I open an issue over in kfserving? ",
"Looking into this more, I found an earlier comment where you recommend explicit persisting: [#4381](https://github.com/kubeflow/pipelines/issues/4381#issuecomment-675083510). I think that is the path forward right now. Maybe this is outside the purview of this project, but for cloud providers that provide managed service, this may lead to duplicating the storage, so it would be nice to be able to know where the item was persisted already in that situation.\r\n\r\nThis may also be resolved in v2 of the SDK, using OutputArtifact rather than OutputPath. Please let me know if I am misunderstanding this.\r\n\r\nThanks",
"Agree with\r\n\r\n> This may also be resolved in v2 of the SDK, using OutputArtifact rather than OutputPath. Please let me know if I am misunderstanding this.\r\n\r\n/cc @neuromage @chensun \r\nI think we are missing a good example component & pipeline to demonstrate this use-case.\r\nWith the v2 InputArtifact input type, it's possible to get uri of the artifact in the component, so that it could directly pass the gcs/s3 uri to KFServing component.\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/289d0f57be5533a15b93c520049e7b35578ab8e0/samples/test/lightweight_python_functions_v2_pipeline.py#L37 is probably the closest example.",
"I think this is a good path forward. I have a local version of the KFServing component I have edited to be compatible with the v2 pipeline compilation, but I can't test it because I am having trouble determining how to run a v2 pipeline on a stand-alone installation (if it is even possible at this point)",
"@chensun and @Bobgy can you provide @wilbry with some pointers here for how to run v2 pipelines? ",
"> for cloud providers that provide managed service, this may lead to duplicating the storage\r\n\r\nModels are pretty small, so this might be a small price to pay for convenience.\r\n\r\nEven for cloud services, a supported URI schema might not mean that the service may be called directly without copying the data. As an example, many existing GCS buckets cannot be used with Google Cloud AutoML service. That service only supports single-region buckets in the `us-central1` region. Just giving a URI to a component or service does not guarantee that that URI would be used successfully. Using File-based I/O and shifting the data staging responsibility to components makes the components more portable and robust.",
"Updated documentation for v2 compatible mode in https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility/",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-09T16:04:00 | 2022-03-03T03:05:17 | 2022-03-03T03:05:17 |
NONE
| null |
Apologies if this is the wrong project, I am not sure if this component is developed here or over in kfserving, but should there be a version of KFServing (or maybe a way to do it with one version) such that the URI is an InputPath so a newly trained model can be served as part of a pipeline? I have searched the issues for a few days trying to see if this has been discussed before and have come up empty.
https://github.com/kubeflow/pipelines/blob/02dbcfa062ea975fcf8793b711640ed20442389e/components/kubeflow/kfserving/component.yaml#L6
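For illustration, a minimal sketch of the kind of file-based interface being asked about could look like the lightweight component below. This is hypothetical; it is not the actual KFServing component, and a real deployer would still need to stage the local file somewhere the serving pods can read it:

```python
from kfp.components import InputPath, create_component_from_func

def deploy_model(model_path: InputPath('Model'), deployment_name: str = 'my-model'):
    # Hypothetical body: a real deployer would copy the local model file to
    # storage reachable by the serving pods before creating the InferenceService.
    print(f'Would deploy "{deployment_name}" from model staged at {model_path}')

deploy_model_op = create_component_from_func(deploy_model, base_image='python:3.7')
```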
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5453/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5452
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5452/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5452/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5452/events
|
https://github.com/kubeflow/pipelines/issues/5452
| 854,595,566 |
MDU6SXNzdWU4NTQ1OTU1NjY=
| 5,452 |
[backend] "platform-agnostic" missing "Viewer" CRD
|
{
"login": "nakfour",
"id": 18536575,
"node_id": "MDQ6VXNlcjE4NTM2NTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/18536575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakfour",
"html_url": "https://github.com/nakfour",
"followers_url": "https://api.github.com/users/nakfour/followers",
"following_url": "https://api.github.com/users/nakfour/following{/other_user}",
"gists_url": "https://api.github.com/users/nakfour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakfour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakfour/subscriptions",
"organizations_url": "https://api.github.com/users/nakfour/orgs",
"repos_url": "https://api.github.com/users/nakfour/repos",
"events_url": "https://api.github.com/users/nakfour/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakfour/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"cc @Bobgy @zijianjoy ",
"Thanks for the report @nakfour!\r\nI want to confirm you are doing the right thing, read KFP standalone installation guide in https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/#deploying-kubeflow-pipelines.\r\n\r\nCRDs are installed separated in:\r\n\r\n```bash\r\nkubectl apply -k \"github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION\"\r\n```\r\n\r\nDid you include `cluster-scoped-resources` in your kustomization.yaml?",
"@Bobgy we followed the instructions from here https://github.com/kubeflow/manifests#install-individual-components and replaced \"platform-agnostic-multi-user\" with \"platform-agnostic-multi-user\"\r\nAs you can see the two are kustomizations are very different\r\nhttps://github.com/kubeflow/manifests/blob/master/apps/pipeline/upstream/env/platform-agnostic/kustomization.yaml\r\nhttps://github.com/kubeflow/manifests/blob/master/apps/pipeline/upstream/env/platform-agnostic-multi-user/kustomization.yaml",
"Yes, the difference is by design. Platform-agnostic doesn't include crds, you need to include separately like my last comment.\n\n@yanniszark I think we should add documentation for standalone installation in kubeflow/manifests to avoid confusion like this",
"@Bobgy please elaborate why by design? I find it hard to believe they are both different, it should be the same",
"There are 3 major ways to deploy KFP:\n\n* Namespaced KFP standalone.\n* Multi-user KFP.\n* Multiple namespaced KFP standalone operating independently for several namespaces. Typically an administrator installs the cluster-scoped resources and namespace owners can install KFP by themselves.\n\nThe minimum maintenance way to support all the three types is to default to include cluster scoped resources for multi-user install and not include them for namespaced installs.\n\nI am open for suggestions. Currently, I plan to document the difference better.",
"@Bobgy thanks I understand. My suggestion is to add more information on the main Kubeflow installation Readme here: https://github.com/kubeflow/manifests#install-individual-components\r\nExplain one option, and point to more documentation to install the other options you just explained."
] | 2021-04-09T14:49:23 | 2021-04-19T16:44:17 | 2021-04-19T16:44:17 |
MEMBER
| null |
As part of the effort to get the Openshift distribution ready for KF 1.3 release, we tried to install the "platform-agnostic" pipeline kustomization and it is missing the "Viewer" CRD. Also discussed here https://github.com/kubeflow/manifests/issues/1810.
PR here: https://github.com/kubeflow/manifests/blob/63c9026ee3e01ad8514882358d57d944d3a92b52/distributions/stacks/openshift/application/pipeline-agnostic/kustomization.yaml
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5452/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5450
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5450/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5450/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5450/events
|
https://github.com/kubeflow/pipelines/issues/5450
| 854,252,011 |
MDU6SXNzdWU4NTQyNTIwMTE=
| 5,450 |
[backend] Sample pipeline "[Demo] TFX - Taxi tip prediction model trainer" failed with "Invalid bucket name: '{{kfp-default-bucket}}'"
|
{
"login": "myoshimu",
"id": 8294746,
"node_id": "MDQ6VXNlcjgyOTQ3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8294746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myoshimu",
"html_url": "https://github.com/myoshimu",
"followers_url": "https://api.github.com/users/myoshimu/followers",
"following_url": "https://api.github.com/users/myoshimu/following{/other_user}",
"gists_url": "https://api.github.com/users/myoshimu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myoshimu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myoshimu/subscriptions",
"organizations_url": "https://api.github.com/users/myoshimu/orgs",
"repos_url": "https://api.github.com/users/myoshimu/repos",
"events_url": "https://api.github.com/users/myoshimu/events{/privacy}",
"received_events_url": "https://api.github.com/users/myoshimu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1606220157,
"node_id": "MDU6TGFiZWwxNjA2MjIwMTU3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/deployment/marketplace",
"name": "area/deployment/marketplace",
"color": "d2b48c",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @myoshimu, you can edit the parameter and replace {{kfp-default-bucket}} with your own bucket.\r\n\r\nbut you are right, it should be replaced automatically during installation, it's strange that failed",
"/assign @zijianjoy ",
"/assign @capri-xiyue ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @myoshimu ,\r\nI tested with the same KFP version using with [this step](https://cloud.google.com/ai-platform/pipelines/docs/getting-started\r\n). The \"[Demo] TFX - Taxi tip prediction model trainer\" works.(Screenshot attached)\r\n<img width=\"1126\" alt=\"Screen Shot 2021-07-30 at 10 58 50 AM\" src=\"https://user-images.githubusercontent.com/52932582/127693862-da1865d2-d64c-4163-8b13-ed9e781706bd.png\">\r\n<img width=\"1052\" alt=\"Screen Shot 2021-07-30 at 11 00 31 AM\" src=\"https://user-images.githubusercontent.com/52932582/127693870-00c53692-2313-439f-bc1c-f916f104ed5c.png\">\r\n\r\nCan you help check whether you deploy the KFP via GCP Marketplace https://cloud.google.com/ai-platform/pipelines/docs/getting-started#set_up_your_instance or you deploy KFP via standalone deployment https://cloud.google.com/ai-platform/pipelines/docs/getting-started#set_up_your_instance and you don't apply manifests needed for gcp patched configurations during standalone deployment?\r\n\r\nYou can verify this via running `kubectl get configmaps -n {your-namespacce}` to see whether you have `gcp-default-config`(if you have this, it means you deploy KFP via marketplace).\r\nIf you don't have `gcp-default-config`, you have `pipeline-install-config` configmap and you get empty string when you run `k get deployments.apps -n {your-name-space} -o yaml | grep \"HAS_DEFAULT_BUCKET\"`, it means you install KFP via install kfp via standalone deployment and you don't apply manifests needed for gcp patched configurations during standalone deployment.\r\n\r\nIf you install kfp via standalone deployment and you don't apply manifests needed for gcp patched configurations during standalone deployment, it is expected behavior that '{{kfp-default-bucket}}' doesn't get replaced with valid value and you will get invalid argument error. ",
"@capri-xiyue Hi! Is it possible to create a single example that does not require me to set up a gcp account? It would be great if I can just run example, without reading on a side that kfp actually sets some hidden env. variables(e.g. {{kfp-default-bucket}}) \r\n\r\nIt is very confusing for users that if I deployed my kfp cluster on aws, I cannot run any examples, and basically on my own.",
"> @capri-xiyue Hi! Is it possible to create a single example that does not require me to set up a gcp account? It would be great if I can just run example, without reading on a side that kfp actually sets some hidden env. variables(e.g. {{kfp-default-bucket}})\r\n> \r\n> It is very confusing for users that if I deployed my kfp cluster on aws, I cannot run any examples, and basically on my own.\r\n\r\nYou don't have to set up a gcp account by yourself to run the example when you follow this doc https://cloud.google.com/ai-platform/pipelines/docs/getting-started. It will use the default gcp service account. Or do you mean you don't want to use gcp and you need some docs regarding how to run a example when you deploy the kfp cluster on aws or your own cluster?"
] | 2021-04-09T07:29:08 | 2021-12-13T19:22:55 | 2021-12-13T19:22:55 |
NONE
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Deployed the KFP from GCP marketplace with [this step](https://cloud.google.com/ai-platform/pipelines/docs/getting-started).
* KFP version:
1.4.1
* KFP SDK version:
1.4.1
### Steps to reproduce
1) Deploy KFP with the step below
https://cloud.google.com/ai-platform/pipelines/docs/getting-started
2) From the Google Cloud AI Platform Pipelines, open Pipeline Dashboard.
3) Go "Pipelines"
4) Open "[Demo] TFX - Taxi tip prediction model trainer"
5) Choose "Create Run" and you can see the csvexamplegen failed.
6) The log shows following error:
```
2021-04-09 06:48:54.757999: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib
2021-04-09 06:48:54.758104: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
INFO:absl:tensorflow_ranking is not available: No module named 'tensorflow_ranking'
INFO:absl:tensorflow_text is not available: No module named 'tensorflow_text'
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:apache_beam.typehints.native_type_compatibility:Using Any for unsupported type: typing.MutableMapping[str, typing.Any]
INFO:absl:Running driver for CsvExampleGen
INFO:absl:MetadataStore with gRPC connection initialized
INFO:absl:select span and version = (0, None)
INFO:absl:latest span and version = (0, None)
INFO:absl:Adding KFP pod name parameterized-tfx-oss-lgngg-449390883 to execution
INFO:absl:Adding KFP pod name parameterized-tfx-oss-lgngg-449390883 to execution
/usr/local/lib/python3.7/dist-packages/setuptools/distutils_patch.py:26: UserWarning: Distutils was imported before Setuptools. This usage is discouraged and may exhibit undesirable behaviors or errors. Please use Setuptools' objects directly or at least import Setuptools first.
"Distutils was imported before Setuptools. This usage is discouraged "
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/tfx/orchestration/kubeflow/container_entrypoint.py", line 360, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/tfx/orchestration/kubeflow/container_entrypoint.py", line 353, in main
execution_info = launcher.launch()
File "/usr/local/lib/python3.7/dist-packages/tfx/orchestration/launcher/base_component_launcher.py", line 198, in launch
self._exec_properties)
File "/usr/local/lib/python3.7/dist-packages/tfx/orchestration/launcher/base_component_launcher.py", line 167, in _run_driver
component_info=self._component_info)
File "/usr/local/lib/python3.7/dist-packages/tfx/dsl/components/base/base_driver.py", line 311, in pre_execution
component_info=component_info)
File "/usr/local/lib/python3.7/dist-packages/tfx/components/example_gen/driver.py", line 141, in _prepare_output_artifacts
base_driver._prepare_output_paths(example_artifact) # pylint: disable=protected-access
File "/usr/local/lib/python3.7/dist-packages/tfx/dsl/components/base/base_driver.py", line 46, in _prepare_output_paths
if fileio.exists(artifact.uri):
File "/usr/local/lib/python3.7/dist-packages/tfx/dsl/io/fileio.py", line 63, in exists
return _get_filesystem(path).exists(path)
File "/usr/local/lib/python3.7/dist-packages/tfx/dsl/io/plugins/tensorflow_gfile.py", line 54, in exists
return tf.io.gfile.exists(path)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py", line 268, in file_exists_v2
_pywrap_file_io.FileExists(compat.path_to_bytes(path))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Error executing an HTTP request: HTTP response code 400 with body '{
"error": {
"code": 400,
"message": "Invalid bucket name: '{{kfp-default-bucket}}'",
"errors": [
{
"message": "Invalid bucket name: '{{kfp-default-bucket}}'",
"domain": "global",
"reason": "invalid"
}
]
}
}
'
when reading metadata of gs://{{kfp-default-bucket}}/tfx_taxi_simple/12cd6e2d-d7a7-4c4b-a1ad-970c24cd3166/CsvExampleGen/examples/11
Runtime execution graph. Only steps that are currently running or have already completed are shown.
```

### Expected result
The sample pipeline successfully complete.
### Materials and Reference
It looks like the bucket 'tfx_taxi_simple' does not exist.
https://github.com/kubeflow/pipelines/blob/master/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py
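As a workaround sketch (not a fix for the missing substitution), a run can be created with the bucket set explicitly via the SDK. This assumes the demo pipeline exposes a `pipeline-root` parameter and that the compiled package is available locally; the parameter name and file name below are illustrative:

```python
import kfp

client = kfp.Client(host='<your KFP endpoint>')
run = client.create_run_from_pipeline_package(
    'parameterized_tfx_oss.yaml',  # compiled demo pipeline package (illustrative name)
    arguments={'pipeline-root': 'gs://YOUR_BUCKET/tfx_taxi_simple'},
    run_name='taxi-demo-with-explicit-bucket',
)
```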
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5450/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5449
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5449/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5449/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5449/events
|
https://github.com/kubeflow/pipelines/issues/5449
| 854,139,260 |
MDU6SXNzdWU4NTQxMzkyNjA=
| 5,449 |
How to enable PyTorch Profiler Tensorboard plugin in Tensorboard viewer?
|
{
"login": "chauhang",
"id": 4461127,
"node_id": "MDQ6VXNlcjQ0NjExMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4461127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chauhang",
"html_url": "https://github.com/chauhang",
"followers_url": "https://api.github.com/users/chauhang/followers",
"following_url": "https://api.github.com/users/chauhang/following{/other_user}",
"gists_url": "https://api.github.com/users/chauhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chauhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chauhang/subscriptions",
"organizations_url": "https://api.github.com/users/chauhang/orgs",
"repos_url": "https://api.github.com/users/chauhang/repos",
"events_url": "https://api.github.com/users/chauhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/chauhang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @chauhang, thank you for the report!\r\nIt's not currently possible, but I think the best way to support this is:\r\n1. you build a tensorboard image that already installs torch-tb-profiler\r\n2. you use https://github.com/kubeflow/pipelines/issues/5471 to switch to your custom tensorboard image\r\n\r\nHow does that sound to you?"
] | 2021-04-09T04:10:29 | 2021-05-11T07:45:43 | 2021-05-11T07:45:43 |
CONTRIBUTOR
| null |
We are working on KFP components for PyTorch and trying to figure out how to see the PyTorch Profiler Tensorboard plugin in the KFP Tensorboard viewer. It requires additional `pip install torch-tb-profiler`. For more details please [see]( https://github.com/pytorch/kineto/tree/master/tb_plugin).
What will be the steps to get this addon installed so that it starts showing up in the KFP Tensorboard viewer?
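For background, the traces that this plugin visualizes are produced from the training code roughly as in the sketch below (assuming `torch` and `torch-tb-profiler` are installed; the log directory is illustrative and would be whatever directory the Tensorboard viewer is pointed at):

```python
import torch
from torch.profiler import profile, schedule, tensorboard_trace_handler

model = torch.nn.Linear(16, 4)
data = torch.randn(8, 16)

with profile(
    schedule=schedule(wait=1, warmup=1, active=3, repeat=1),
    on_trace_ready=tensorboard_trace_handler('./tb_logs'),  # directory the viewer reads
) as prof:
    for _ in range(6):
        model(data).sum().backward()  # stand-in for a real training step
        prof.step()                   # tell the profiler one step finished
```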
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5449/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5449/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5444
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5444/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5444/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5444/events
|
https://github.com/kubeflow/pipelines/issues/5444
| 853,306,118 |
MDU6SXNzdWU4NTMzMDYxMTg=
| 5,444 |
[frontend] mismatched frontend api client
|
{
"login": "algs",
"id": 13183602,
"node_id": "MDQ6VXNlcjEzMTgzNjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/13183602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/algs",
"html_url": "https://github.com/algs",
"followers_url": "https://api.github.com/users/algs/followers",
"following_url": "https://api.github.com/users/algs/following{/other_user}",
"gists_url": "https://api.github.com/users/algs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/algs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/algs/subscriptions",
"organizations_url": "https://api.github.com/users/algs/orgs",
"repos_url": "https://api.github.com/users/algs/repos",
"events_url": "https://api.github.com/users/algs/events{/privacy}",
"received_events_url": "https://api.github.com/users/algs/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"1. https://github.com/kubeflow/pipelines/blob/9116a5a162ac36a2615d4185d879dabfd0d7ad9a/backend/api/experiment.proto#L178-L182 has not been changed. \r\n\r\n2. swagger file has been changed in this PR https://github.com/kubeflow/pipelines/pull/4906/files#r573449242\r\n\r\nI think it's due to higher version generator. \r\n\r\n3. the good thing is frontend won't break because request param is still `storagestate`. But this is not a good experience. One thing we can detect this kind of issue earlier is to always generate apis in presubmit test. Otherwise, build will use local generate sdks (old version) which will pass tests.\r\n",
"@algs Feel free to help improve it once @NikeNano @Bobgy (who work on https://github.com/kubeflow/pipelines/issues/3250) confirms. \r\n\r\n",
"Thank you for the investigations!\nYes, we should better regenerate frontend client after proto change.\n\nDo you want to submit a PR?",
"@Bobgy Sure, I'll submit a PR to fix this issue"
] | 2021-04-08T10:10:33 | 2021-04-17T01:52:14 | 2021-04-17T01:52:14 |
CONTRIBUTOR
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version: 1.5.0-rc
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
### Steps to reproduce
1. clone the repo with latest version
2. cd frontend
3. npm run apis
4. git status and get modified files
5. found that the definition of `ExperimentStorageState` changed into `ApiExperimentStorageState`, while the import statements in `frontend/src/pages/ExperimentDetails.tsx`, `frontend/src/components/ExperimentList.tsx`, etc. still `import ExperimentStorageState`, leading to import errors
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
No import errors; frontend and backend stay consistent.
<!-- What should the correct behavior be? -->
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
cc @Jeffwan
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5444/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5444/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5441
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5441/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5441/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5441/events
|
https://github.com/kubeflow/pipelines/issues/5441
| 852,490,729 |
MDU6SXNzdWU4NTI0OTA3Mjk=
| 5,441 |
[feature] Can reusable component output data be downloaded via SDK?
|
{
"login": "huihuang2015",
"id": 10718854,
"node_id": "MDQ6VXNlcjEwNzE4ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/10718854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huihuang2015",
"html_url": "https://github.com/huihuang2015",
"followers_url": "https://api.github.com/users/huihuang2015/followers",
"following_url": "https://api.github.com/users/huihuang2015/following{/other_user}",
"gists_url": "https://api.github.com/users/huihuang2015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huihuang2015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huihuang2015/subscriptions",
"organizations_url": "https://api.github.com/users/huihuang2015/orgs",
"repos_url": "https://api.github.com/users/huihuang2015/repos",
"events_url": "https://api.github.com/users/huihuang2015/events{/privacy}",
"received_events_url": "https://api.github.com/users/huihuang2015/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"The intermediate data storage is considered to be implementation detail.\r\nWe advice our users to make any data export explicit. This way they can easily control the location of the exported data.\r\nIf you want to put the artifact into a GCS bucket, just pass it to an `Upload to GCS` component.\r\nSee this pipeline for example:\r\nhttps://github.com/kubeflow/pipelines/blob/a80421191db917322ff312626409526b0a76aa68/samples/core/continue_training_from_prod/continue_training_from_prod.py#L94\r\n\r\nP.S. We could expose the ReadArtifact backend API method, but it would be pretty hard to use.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-04-07T14:49:07 | 2022-04-18T17:27:34 | 2022-04-18T17:27:34 |
NONE
| null |
### Feature Area
<area>sdk</area>
### What feature would you like to see?
SDK to download component output artifact after run
### What is the use case or pain point?
Output artifacts can only be downloaded via the UI, not via the SDK, which makes automation difficult.
### Is there a workaround currently?
Don't know how to do it now.
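One workaround sketch, in line with the explicit-export advice given in the comments, is to add an upload step to the pipeline so the artifact ends up at a URI you control and can fetch with any storage client. The component URL, argument order, and output names below are illustrative and may differ from the actual component definitions:

```python
from kfp import dsl
from kfp.components import OutputPath, create_component_from_func, load_component_from_url

# Illustrative URL: point this at whichever "Upload to GCS" component you use.
upload_to_gcs_op = load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/master/'
    'components/google-cloud/storage/upload_to_explicit_uri/component.yaml')

def train(model_path: OutputPath()):
    with open(model_path, 'w') as f:
        f.write('dummy model')  # stand-in for a real training artifact

train_op = create_component_from_func(train)

@dsl.pipeline(name='export-model-to-gcs')
def export_pipeline(target_uri: str = 'gs://YOUR_BUCKET/models/model.txt'):
    training_task = train_op()
    # Explicit export: afterwards the artifact can be downloaded with gsutil or
    # any GCS client, independent of KFP's internal artifact storage.
    upload_to_gcs_op(training_task.outputs['model'], target_uri)
```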
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5441/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5440
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5440/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5440/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5440/events
|
https://github.com/kubeflow/pipelines/issues/5440
| 852,237,880 |
MDU6SXNzdWU4NTIyMzc4ODA=
| 5,440 |
[backend] grpc: received message larger than max
|
{
"login": "ConverJens",
"id": 61828156,
"node_id": "MDQ6VXNlcjYxODI4MTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/61828156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ConverJens",
"html_url": "https://github.com/ConverJens",
"followers_url": "https://api.github.com/users/ConverJens/followers",
"following_url": "https://api.github.com/users/ConverJens/following{/other_user}",
"gists_url": "https://api.github.com/users/ConverJens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ConverJens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ConverJens/subscriptions",
"organizations_url": "https://api.github.com/users/ConverJens/orgs",
"repos_url": "https://api.github.com/users/ConverJens/repos",
"events_url": "https://api.github.com/users/ConverJens/events{/privacy}",
"received_events_url": "https://api.github.com/users/ConverJens/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] |
closed
| false |
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Can you provide a sample for us to verify the problem?",
"In addition, can you also deploy latest KFP to see whether it gets fixed?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-04-07T09:58:09 | 2021-12-13T19:23:13 | 2021-12-13T19:23:13 |
CONTRIBUTOR
| null |
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
kfctl and KubeFlow v1.1.0 istio dex
* KFP version:
v1.0.0
* KFP SDK version:
v1.1.2
### Steps to reproduce
Latest version of TFX (0.29.0rc0) introduced an IR of the pipeline definitions. For larger pipelines (>20 nodes) these are quite large (>5Mb), which exceeds the default gRPC limit and the UI gives the following error message:
`{"error":"grpc: received message larger than max (5429435 vs. 4194304)","message":"grpc: received message larger than max (5429435 vs. 4194304)","code":8}`
I updated the backend limit in the deployment to be twice the default size:
- name: DBCONFIG_GROUPCONCATMAXLEN
  value: "8388608"
under upstream/base/pipeline/ml-pipeline-apiserver-deployment.yaml and I can exec into the pod and verify that the value is indeed set.
Despite this, the UI still raised an error with the default size of 4194304.
I removed and redeployed all pipeline components as well as the mysql DB but this didn't change anything.
To be on the safe side I exec:ed into the api-server and ran:
sed -i 's/4194304/8388608/g' config.json
which updated config.json and viper should then repopulate this field but still no difference.
Following a comment in this issue kubeflow/pipelines#2310, I also tested running:
set GLOBAL group_concat_max_len=8388608;
but this had no effect either.
### Expected result
Changing the gRPC message limit in the DB config should also take effect in the UI.
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5440/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5439
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5439/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5439/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5439/events
|
https://github.com/kubeflow/pipelines/issues/5439
| 852,228,410 |
MDU6SXNzdWU4NTIyMjg0MTA=
| 5,439 |
[feature] Update MLMD version and add process to keep this up to date.
|
{
"login": "DavidSpek",
"id": 28541758,
"node_id": "MDQ6VXNlcjI4NTQxNzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/28541758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidSpek",
"html_url": "https://github.com/DavidSpek",
"followers_url": "https://api.github.com/users/DavidSpek/followers",
"following_url": "https://api.github.com/users/DavidSpek/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidSpek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidSpek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidSpek/subscriptions",
"organizations_url": "https://api.github.com/users/DavidSpek/orgs",
"repos_url": "https://api.github.com/users/DavidSpek/repos",
"events_url": "https://api.github.com/users/DavidSpek/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidSpek/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] |
open
| false | null |
[] | null |
[
"Note, updating MLMD is fairly complex, we need to update all the 7 locations listed in https://github.com/kubeflow/pipelines/blob/1128631cd5b4335080fb70ca35f864dbedb8b2cf/manifests/kustomize/base/metadata/base/metadata-grpc-deployment.yaml#L19-L24\r\n\r\nSome of them are not well documented, so I'd suggest delaying the upgrade to next release -- unless you have absolute reasons.",
"@Bobgy I have no particular reason for updating it, just something I noticed. Once things settle down a bit more I think it would be interesting to setup Renovate with the regex engine so that it can handle updating MLMD everywhere in the repo in a single PR so it can be tested properly. How does that sound?",
"Part of the upgrade requires regenerating a proto client using later proto definitions, so it's unlikely we can configure renovate to do this.\n\nBut it's very helpful to automate the upgrade, we may add this script to release process",
"You can have Renovate execute a script as well, so it might still be possible to fully automate. ",
"Happy to see that theMLMD version has been upgraded in the latest release. @Bobgy is having this managed by Renovate something you would want to implement? I've been doing quite a bit of stuff with Renovate lately to automate keeping all the manifests for Kubeflow up to date so I might be able to help here.",
"@DavidSpek did you figure out the scriptable update? MLMD would need this, because one step is pulling in new proto definitions and regenerate proto clients.",
"Do you maybe have a link to the script that needs to be run and the locations where MLMD needs to be updated? It would save me some time looking for it. If not, it isn't a problem either. Then I can test on my fork to try and get this setup.\r\nRenovate has support for running scripts if it is self-hosted, but I'm not sure if that is an option or a path you want to go down. However, I think it might be possible to use another setup to run that script or pull those definitions into the PR. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"/cc @zijianjoy \nYour new script addresses first part of this issue",
"Thank you David and Yuan! This is the link to the MLMD upgrade script guide: https://github.com/kubeflow/pipelines/tree/master/third_party/ml-metadata#upgrade-mlmd-versions",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@zijianjoy @Bobgy @chensun, I see that we are now very behind again on the `ml-metadata` version, we are still running `1.5.0` from 2021, but [`1.13.1` exists](https://github.com/google/ml-metadata/releases).\r\n\r\nAny chance someone can manually do another update? ",
"We are currently waiting for ML Metadata 1.14 to be released. Once it is available, we will upgrade to this version."
] | 2021-04-07T09:47:06 | 2023-08-07T18:46:16 | null |
CONTRIBUTOR
| null |
Currently pipelines uses gcr.io/tfx-oss-public/ml_metadata_store_server version `0.25.1` but `0.29.0` is the latest version. The image tag in [metadata-grpc-deployment.yaml](https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/base/metadata/base/metadata-grpc-deployment.yaml) needs updating.
To automate this process, all image tags should be listed in the `images:` section of the `kustomization.yaml` files ([here](https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/base/metadata/base/kustomization.yaml) for this specific case). Once that is done, Renovate can be set up to create PRs updating the image tags of the manifests.
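For illustration, the `images:` entry for this specific case might look roughly like this (image name copied from the current manifest, tag is the version mentioned above):

```yaml
# manifests/kustomize/base/metadata/base/kustomization.yaml (sketch)
images:
  - name: gcr.io/tfx-oss-public/ml_metadata_store_server
    newTag: "0.29.0"
```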
@Bobgy Would you like me to go through the manifests and set the images in the `kustomization.yaml` files and add the necessary Renovate config for kustomize?
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5439/timeline
| null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5438
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5438/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5438/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5438/events
|
https://github.com/kubeflow/pipelines/issues/5438
| 852,160,761 |
MDU6SXNzdWU4NTIxNjA3NjE=
| 5,438 |
build_python_component without GCS
|
{
"login": "aronla",
"id": 11004278,
"node_id": "MDQ6VXNlcjExMDA0Mjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/11004278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aronla",
"html_url": "https://github.com/aronla",
"followers_url": "https://api.github.com/users/aronla/followers",
"following_url": "https://api.github.com/users/aronla/following{/other_user}",
"gists_url": "https://api.github.com/users/aronla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aronla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aronla/subscriptions",
"organizations_url": "https://api.github.com/users/aronla/orgs",
"repos_url": "https://api.github.com/users/aronla/repos",
"events_url": "https://api.github.com/users/aronla/events{/privacy}",
"received_events_url": "https://api.github.com/users/aronla/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1480390317,
"node_id": "MDU6TGFiZWwxNDgwMzkwMzE3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/sdk/containers",
"name": "sdk/containers",
"color": "f9d0c4",
"default": false,
"description": ""
},
{
"id": 1682627575,
"node_id": "MDU6TGFiZWwxNjgyNjI3NTc1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/misc",
"name": "kind/misc",
"color": "c2e0c6",
"default": false,
"description": "types beside feature and bug"
}
] |
closed
| false |
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"According to @chensun , the `staging_gcs_path` is a required field in sdk code. \r\nThis `kfp.compile.build_python_component` was deprecated before but was revived recently. We may want to deprecate it again sometime in the future.",
"Could you try using `kfp.components.create_component_from_func` instead? You can base the component on an image you build and push independently.",
"Hi and thanks for your answers! Yes, I can build components using `kfp.components.create_component_from_func` sucessfully. But the functionality is different I guess - `build_python_component` creates a docker-image and `create_component_from_func` just \"injects\" the code into the manifest and runs it on a base-image.\r\n\r\nTo be honest, I was just trying to learn the SDK using the tutorial and a local minikube session and I stumbled on this part, but happy to close.\r\nThanks for your help!"
] | 2021-04-07T08:30:31 | 2021-04-13T11:43:57 | 2021-04-13T11:43:57 |
NONE
| null |
Hi! Is it possible to use `kfp.compiler.build_python_component` without using Google Cloud Storage (for the parameter `staging_gcs_path`)?
I'm trying to run the [pipeline SDK tutorial](https://v0-7.kubeflow.org/docs/pipelines/sdk/sdk-overview/) locally, and this step
```python
# TARGET_IMAGE and my_python_func are defined earlier in the tutorial
from kfp import compiler

OUTPUT_DIR = 'local_tmp_dir/'
my_op = compiler.build_python_component(
    component_func=my_python_func,
    staging_gcs_path=OUTPUT_DIR,
    target_image=TARGET_IMAGE)
```
outputs
`ValueError: Error: local_tmp_dir/ should be a GCS path.`
Thank you! :)
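For completeness, a minimal sketch of the GCS-free alternative suggested in the comments above (the base image is just a placeholder; any image you build and push yourself should work):

```python
import kfp.components as comp

def add(a: float, b: float) -> float:
    return a + b

# Packages the function into a reusable component without any GCS staging.
add_op = comp.create_component_from_func(add, base_image='python:3.7')
```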
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5438/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5436
|
https://api.github.com/repos/kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines/issues/5436/labels{/name}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5436/comments
|
https://api.github.com/repos/kubeflow/pipelines/issues/5436/events
|
https://github.com/kubeflow/pipelines/issues/5436
| 851,814,320 |
MDU6SXNzdWU4NTE4MTQzMjA=
| 5,436 |
Using a Google Service Accounts on AI Platform Pipeline Deployment to access Google Secret Manager
|
{
"login": "bkhuong",
"id": 40499864,
"node_id": "MDQ6VXNlcjQwNDk5ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/40499864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkhuong",
"html_url": "https://github.com/bkhuong",
"followers_url": "https://api.github.com/users/bkhuong/followers",
"following_url": "https://api.github.com/users/bkhuong/following{/other_user}",
"gists_url": "https://api.github.com/users/bkhuong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bkhuong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bkhuong/subscriptions",
"organizations_url": "https://api.github.com/users/bkhuong/orgs",
"repos_url": "https://api.github.com/users/bkhuong/repos",
"events_url": "https://api.github.com/users/bkhuong/events{/privacy}",
"received_events_url": "https://api.github.com/users/bkhuong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2021-04-06T21:12:34 | 2021-04-08T17:32:18 | 2021-04-08T17:32:18 |
NONE
| null |
Based on this [documentation](https://www.kubeflow.org/docs/gke/pipelines/authentication-pipelines/), it looks like we cannot use any Google Service Account other than the Compute Engine default service account when we deploy Kubeflow Pipelines through AI Platform Pipelines:
> For AI Platform Pipelines, Compute Engine default service account is the only supported option.
Is this still true?
On the pipelines UI, it looks like we're able to provide a Kubernetes Service Account:

I've tried binding my Kubernetes Service Account to a Google Service Account following the [workload identity process](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity), but it doesn't seem to be working. So I might be missing the point of having this option if it will always use the Compute Engine default service account.
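The binding I attempted follows the standard Workload Identity steps, roughly (all names below are placeholders for my project, namespace, and accounts):

```sh
# Allow the Kubernetes SA to impersonate the Google SA.
gcloud iam service-accounts add-iam-policy-binding \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Annotate the Kubernetes SA so GKE knows which Google SA it maps to.
kubectl annotate serviceaccount KSA_NAME -n NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```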
That being said, is it then possible for a pipeline deployed on AI Platform Pipelines to access Google Secret Manager? It looks like the Compute Engine default service account does not have the right permissions to access secrets from there.
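If a different account could be used, I'd expect granting it the accessor role to be all that's needed, e.g. (placeholder names):

```sh
# Grant the service account read access to secret payloads.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
  --role roles/secretmanager.secretAccessor
```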
|
{
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/kubeflow/pipelines/issues/5436/timeline
| null |
completed
| null | null | false |