# dagger/dagger#4668 — Lazy executions are confusing to understand and sometimes don't work as expected

| Field | Value |
| --- | --- |
| status | closed |
| repo_name | dagger/dagger |
| repo_url | https://github.com/dagger/dagger |
| issue_id | 4668 |
| issue_url | https://github.com/dagger/dagger/issues/4668 |
| pull_url | https://github.com/dagger/dagger/pull/4716 |
| before_fix_sha | f2a62f276d36918b0e453389dc7c63cad195da59 |
| after_fix_sha | ad722627391f3e7d5bf51534f846913afc95d555 |
| report_datetime | 2023-02-28T17:37:30Z |
| language | go |
| commit_datetime | 2023-03-07T23:54:56Z |
| updated_file | sdk/go/api.gen.go |
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of each, with the status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
Some users reported that part of their pipeline wasn't being executed, and that they had to add a `WithExec(nil)` call to make it work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that because there's an `Entrypoint`/`Cmd` in the Docker image, it should work; but those calls only update the Dagger container's metadata. There's nothing to run — it's equivalent to the following:
```go
_, err := ctr.
	WithEntrypoint([]string{}).
	WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
		Args: []string{"/bin/sh"},
	}).
	ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command** — i.e., the equivalent of a `RUN` instruction in a `Dockerfile`, or of running a container with `docker run`.
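By contrast, here is a minimal sketch (assuming a connected client `c`) where `Stdout` does return output, because the chain ends in an actual exec:
```go
// Works: the chain ends in an exec, so there is a "last executed command".
out, err := c.Container().
	From("alpine").
	WithExec([]string{"echo", "hello"}).
	Stdout(ctx) // "hello\n"
```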
### Workaround
Add a `WithExec()` to tell Dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` executes the **entrypoint and default args** configured in the Dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you'll see that `Build()` is called and the image is published: `Publish` doesn't depend on an execution, but `Build` is still a dependency.
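For example, a minimal sketch of the note above (the registry address is a placeholder):
```go
// Publish triggers Build (a dependency) but never executes the entrypoint.
addr, err := client.Container().
	Build(src).
	Publish(ctx, "ttl.sh/hello-dagger:1h") // placeholder address
```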
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` needs to run the *postgres* container so that *app* can connect to it, which is why the `WithExec` (with `nil` for the default entrypoint and arguments) is needed here too.
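Putting it together, a hedged end-to-end sketch (the `POSTGRES_PASSWORD` value and the `pg_isready` probe are illustrative, not part of the original report):
```go
db := client.Container().From("postgres").
	WithEnvVariable("POSTGRES_PASSWORD", "secret"). // the image won't start without one
	WithExposedPort(5432).
	WithExec(nil) // run the default entrypoint so the service actually starts

// The bound service is reachable from this container under the alias "db".
out, err := client.Container().From("postgres").
	WithServiceBinding("db", db).
	WithExec([]string{"pg_isready", "-h", "db", "-p", "5432"}).
	Stdout(ctx) // "db:5432 - accepting connections"
```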
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (which depend on `WithExec`) are used on a container that hasn't been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments: if you're using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test whether a `Dockerfile` build succeeds, and **don't want to execute the entrypoint** (e.g., a long-running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case, users are using `ExitCode` only as a way to trigger the build, since they don't want to `Publish` either. It's the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn't even needed to build, so `ExitCode` isn't a good choice here. It's simpler to use another field, such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However, this isn't intuitive and is clearly a workaround (`Entries` isn't meant for this).
### Proposal
Perhaps the best solution is a general synchronization primitive (#5065) that simply forces the lazy pipeline to be resolved, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
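With such a primitive, checking that a `Dockerfile` build succeeds becomes a one-liner. A sketch, assuming the proposed `Sync` returns the container and an error, inside a function that itself returns `error`:
```go
// Forces the lazy pipeline (including Build) to resolve; nothing is executed.
// Build errors surface in err.
if _, err := client.Container().Build(src).Sync(ctx); err != nil {
	return fmt.Errorf("Dockerfile build failed: %w", err)
}
```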
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK's codegen (the feature that introspects the API and builds a Dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and returns the **value from the response**.
So the "rule of thumb" is based on the need to make a request to the GraphQL server. The problem is that this may not be immediately clear, and since the syntax varies by language, there are different "rules" to understand — as the sketch below illustrates for Go.
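A minimal sketch of the Go rule of thumb:
```go
// Lazy: no context, no error — just builds up the pipeline, no API request.
ctr := c.Container().From("alpine").WithExec([]string{"echo", "hi"})

// Executing: takes a context and returns an error — makes a GraphQL request.
out, err := ctr.Stdout(ctx)
```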
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes means different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **“Engine” execution** (i.e., doing actual work in BuildKit)
*ID* fields like `Container.ID`, for example, make a request to the API but don't do any actual work to build the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558) and by keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
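For instance, a sketch of the *ID* case (standard Go SDK call):
```go
// Makes a GraphQL request and returns the container's ID,
// but does no actual build work in BuildKit yet.
id, err := ctr.ID(ctx)
```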
More importantly, users have been using `.ExitCode(ctx)` as the go-to solution to “synchronize” the laziness, but as we've seen in the issues above, it triggers the container to execute, and there are cases where you don't want that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the go-to solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we then go back to needing `WithExec(nil)`, because `.Sync()` can't assume you want to execute the container.
That's a catch-22! **There's no single execute function** to “rule them all”.
It requires the user to have a good enough grasp of these concepts and the Dagger model to choose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the most explicit solution proposed was a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## <a name="docs"></a>Issue: Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). This is even more important if the “pipeline builder” model above isn't viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users who were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unanimous 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
## `chunk_content` — fragments of `sdk/go/api.gen.go`

Consecutive fragments of the generated Go SDK client from the updated file.

### Fragment 1 (`Directory` methods)
```go
	return &Directory{
		q: q,
		c: r.c,
	}
}

func (r *Directory) WithTimestamps(timestamp int) *Directory {
	q := r.q.Select("withTimestamps")
	q = q.Arg("timestamp", timestamp)
	return &Directory{
		q: q,
		c: r.c,
	}
}

func (r *Directory) WithoutDirectory(path string) *Directory {
	q := r.q.Select("withoutDirectory")
	q = q.Arg("path", path)
	return &Directory{
		q: q,
		c: r.c,
	}
}

func (r *Directory) WithoutFile(path string) *Directory {
	q := r.q.Select("withoutFile")
	q = q.Arg("path", path)
	return &Directory{
		q: q,
		c: r.c,
	}
}

type EnvVariable struct {
```
### Fragment 2 (`EnvVariable` methods)
```go
	q *querybuilder.Selection
	c graphql.Client
}

func (r *EnvVariable) Name(ctx context.Context) (string, error) {
	q := r.q.Select("name")
	var response string
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}

func (r *EnvVariable) Value(ctx context.Context) (string, error) {
	q := r.q.Select("value")
	var response string
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}

type File struct {
```
### Fragment 3 (`File` methods)
```go
	q *querybuilder.Selection
	c graphql.Client
}

func (r *File) Contents(ctx context.Context) (string, error) {
	q := r.q.Select("contents")
	var response string
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}

func (r *File) Export(ctx context.Context, path string) (bool, error) {
	q := r.q.Select("export")
	q = q.Arg("path", path)
	var response bool
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}

func (r *File) ID(ctx context.Context) (FileID, error) {
	q := r.q.Select("id")
	var response FileID
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}

func (r *File) XXX_GraphQLType() string {
	return "File"
```
### Fragment 4 (`File` methods, continued)
```go
}

func (r *File) XXX_GraphQLID(ctx context.Context) (string, error) {
	id, err := r.ID(ctx)
	if err != nil {
		return "", err
	}
	return string(id), nil
}

func (r *File) Secret() *Secret {
	q := r.q.Select("secret")
	return &Secret{
		q: q,
		c: r.c,
	}
}

func (r *File) Size(ctx context.Context) (int, error) {
	q := r.q.Select("size")
	var response int
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}

func (r *File) WithTimestamps(timestamp int) *File {
	q := r.q.Select("withTimestamps")
	q = q.Arg("timestamp", timestamp)
	return &File{
		q: q,
		c: r.c,
	}
}

type GitRef struct {
```
### Fragment 5 (`GitRef` methods)
```go
	q *querybuilder.Selection
	c graphql.Client
}

func (r *GitRef) Digest(ctx context.Context) (string, error) {
	q := r.q.Select("digest")
	var response string
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}

type GitRefTreeOpts struct {
```
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test whether a `Dockerfile` build succeeds, and **don’t want to execute the entrypoint** (e.g., a long-running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case, users are using `ExitCode` only as a way to trigger the build when they don’t want to `Publish` either. It’s the same problem as above, but the intent is different.
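In other words, the pattern in the wild looks something like this sketch, where running the entrypoint is an unwanted side effect:

```go
// Intent: "does the Dockerfile build?" — but ExitCode also runs the
// entrypoint, which may be a long-running server.
if _, err := client.Container().Build(src).ExitCode(ctx); err != nil {
	return fmt.Errorf("build check failed: %w", err)
}
```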
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However, this isn’t intuitive and is clearly a workaround (these fields weren’t meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
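As proposed, `Sync` would evaluate the pipeline and return the same `*Container`, so it chains naturally. A sketch based on the proposal (not a final API):

```go
// Sync forces the build to run; the returned container can be
// discarded or reused for further chaining.
ctr, err := client.Container().Build(src).Sync(ctx)
if err != nil {
	return err
}
_ = ctr
```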
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a Dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and returns the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server. The problem is that this may not be immediately clear, and the syntax varies by language, so there are different “rules” to understand.
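In Go, the contrast looks like this: only the call that takes a `ctx` and returns an `error` actually hits the server:

```go
// Lazy: builds up the query; no API request is made yet.
ctr := client.Container().
	From("alpine").
	WithExec([]string{"uname", "-a"})

// Executing: a leaf field — takes ctx, returns (value, error), sends the request.
out, err := ctr.Stdout(ctx)
```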
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” can mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **“Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields, like `Container.ID`, make a request to the API but don’t do any of the actual work of building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
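For example (a sketch; `ID` is a leaf field, so it makes an API request, but the identifier it returns only references the lazy pipeline):

```go
// Queries the API for an identifier of the container definition,
// without building or running the container itself.
id, err := ctr.ID(ctx)
```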
More importantly, users have been using `.ExitCode(ctx)` as the go-to solution to “synchronize” the laziness, but as we’ve seen in the issues above, it triggers container execution, and there are cases where you don’t want that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the go-to way to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we’re then back to needing `WithExec(nil)`, because `.Sync()` can’t assume you want to execute the container.
That’s a catch-22! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp of these concepts and the Dagger model to choose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the most explicit solution proposed was a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change**, so it’s not seen as a viable solution right now
- No great solution for grabbing output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## <a name="docs"></a>Issue: Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users who were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unanimous 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
SSHKnownHosts string
SSHAuthSocket *Socket
}
func (r *GitRef) Tree(opts ...GitRefTreeOpts) *Directory {
q := r.q.Select("tree")
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].SSHKnownHosts) {
q = q.Arg("sshKnownHosts", opts[i].SSHKnownHosts)
break
}
}
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].SSHAuthSocket) {
q = q.Arg("sshAuthSocket", opts[i].SSHAuthSocket)
break
}
}
return &Directory{
q: q,
c: r.c,
}
}
type GitRepository struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
q *querybuilder.Selection
c graphql.Client
}
func (r *GitRepository) Branch(name string) *GitRef {
q := r.q.Select("branch")
q = q.Arg("name", name)
return &GitRef{
q: q,
c: r.c,
}
}
func (r *GitRepository) Branches(ctx context.Context) ([]string, error) {
q := r.q.Select("branches")
var response []string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *GitRepository) Commit(id string) *GitRef {
q := r.q.Select("commit")
q = q.Arg("id", id)
return &GitRef{
q: q,
c: r.c,
}
}
func (r *GitRepository) Tag(name string) *GitRef {
q := r.q.Select("tag")
q = q.Arg("name", name)
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
return &GitRef{
q: q,
c: r.c,
}
}
func (r *GitRepository) Tags(ctx context.Context) ([]string, error) {
q := r.q.Select("tags")
var response []string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
type Host struct {
q *querybuilder.Selection
c graphql.Client
}
type HostDirectoryOpts struct {
Exclude []string
Include []string
}
func (r *Host) Directory(path string, opts ...HostDirectoryOpts) *Directory {
q := r.q.Select("directory")
q = q.Arg("path", path)
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].Exclude) {
q = q.Arg("exclude", opts[i].Exclude)
break
}
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
}
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].Include) {
q = q.Arg("include", opts[i].Include)
break
}
}
return &Directory{
q: q,
c: r.c,
}
}
func (r *Host) EnvVariable(name string) *HostVariable {
q := r.q.Select("envVariable")
q = q.Arg("name", name)
return &HostVariable{
q: q,
c: r.c,
}
}
func (r *Host) UnixSocket(path string) *Socket {
q := r.q.Select("unixSocket")
q = q.Arg("path", path)
return &Socket{
q: q,
c: r.c,
}
}
type HostWorkdirOpts struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
Exclude []string
Include []string
}
func (r *Host) Workdir(opts ...HostWorkdirOpts) *Directory {
q := r.q.Select("workdir")
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].Exclude) {
q = q.Arg("exclude", opts[i].Exclude)
break
}
}
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].Include) {
q = q.Arg("include", opts[i].Include)
break
}
}
return &Directory{
q: q,
c: r.c,
}
}
type HostVariable struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” can mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **“Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID`, for example, make a request to the API but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and by keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
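For example (a minimal Go sketch), fetching an ID is a query execution without any engine work:
```go
// Makes a GraphQL API request and returns an ID, but doesn't
// build or run anything in BuildKit yet.
id, err := client.Container().From("alpine").ID(ctx)
```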
More importantly, users have been using `.ExitCode(ctx)` as the go-to solution to “synchronize” the laziness, but as we’ve seen in the issues above, it triggers the container to execute, and there are cases where you don’t want that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the go-to solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we then go back to needing `WithExec(nil)`, because `.Sync()` can’t assume you want to execute the container.
That’s a catch-22! **There’s no single execute function** to “rule them all”.
It requires users to have a good enough grasp of these concepts and the Dagger model to choose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
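And under the #5065 proposal, executing the entrypoint through `Sync` would still require the explicit `WithExec(nil)` (a sketch):
```go
// exec the entrypoint (and build, since it's a dependency)
_, err := c.Container().Build(src).WithExec(nil).Sync(ctx)
```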
### Proposal
During the “implicit vs. explicit” discussions, the most explicit solution proposed was a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change**, so it’s not seen as a viable solution right now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## <a name="docs"></a>Issue: Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unanimous 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
q *querybuilder.Selection
c graphql.Client
}
func (r *HostVariable) Secret() *Secret {
q := r.q.Select("secret")
return &Secret{
q: q,
c: r.c,
}
}
func (r *HostVariable) Value(ctx context.Context) (string, error) {
q := r.q.Select("value")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
type Label struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
q *querybuilder.Selection
c graphql.Client
}
func (r *Label) Name(ctx context.Context) (string, error) {
q := r.q.Select("name")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Label) Value(ctx context.Context) (string, error) {
q := r.q.Select("value")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
type Port struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
q *querybuilder.Selection
c graphql.Client
}
func (r *Port) Description(ctx context.Context) (string, error) {
q := r.q.Select("description")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Port) Port(ctx context.Context) (int, error) {
q := r.q.Select("port")
var response int
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Port) Protocol(ctx context.Context) (NetworkProtocol, error) {
q := r.q.Select("protocol")
var response NetworkProtocol
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
type Project struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
q *querybuilder.Selection
c graphql.Client
}
func (r *Project) Extensions(ctx context.Context) ([]Project, error) {
q := r.q.Select("extensions")
var response []Project
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Project) GeneratedCode() *Directory {
q := r.q.Select("generatedCode")
return &Directory{
q: q,
c: r.c,
}
}
func (r *Project) Install(ctx context.Context) (bool, error) {
q := r.q.Select("install")
var response bool
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Project) Name(ctx context.Context) (string, error) {
q := r.q.Select("name")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Project) Schema(ctx context.Context) (string, error) {
q := r.q.Select("schema")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Project) SDK(ctx context.Context) (string, error) {
q := r.q.Select("sdk")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Client) CacheVolume(key string) *CacheVolume {
q := r.q.Select("cacheVolume")
q = q.Arg("key", key)
return &CacheVolume{
q: q,
c: r.c,
}
}
type ContainerOpts struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users who were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unanimous 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
ID ContainerID
Platform Platform
}
func (r *Client) Container(opts ...ContainerOpts) *Container {
q := r.q.Select("container")
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].ID) {
q = q.Arg("id", opts[i].ID)
break
}
}
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].Platform) {
q = q.Arg("platform", opts[i].Platform)
break
}
}
return &Container{
q: q,
c: r.c,
}
}
func (r *Client) DefaultPlatform(ctx context.Context) (Platform, error) {
q := r.q.Select("defaultPlatform")
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
var response Platform
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
type DirectoryOpts struct {
ID DirectoryID
}
func (r *Client) Directory(opts ...DirectoryOpts) *Directory {
q := r.q.Select("directory")
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].ID) {
q = q.Arg("id", opts[i].ID)
break
}
}
return &Directory{
q: q,
c: r.c,
}
}
func (r *Client) File(id FileID) *File {
q := r.q.Select("file")
q = q.Arg("id", id)
return &File{
q: q,
c: r.c,
}
}
type GitOpts struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
KeepGitDir bool
ExperimentalServiceHost *Container
}
func (r *Client) Git(url string, opts ...GitOpts) *GitRepository {
q := r.q.Select("git")
q = q.Arg("url", url)
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].KeepGitDir) {
q = q.Arg("keepGitDir", opts[i].KeepGitDir)
break
}
}
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].ExperimentalServiceHost) {
q = q.Arg("experimentalServiceHost", opts[i].ExperimentalServiceHost)
break
}
}
return &GitRepository{
q: q,
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
c: r.c,
}
}
func (r *Client) Host() *Host {
q := r.q.Select("host")
return &Host{
q: q,
c: r.c,
}
}
type HTTPOpts struct {
ExperimentalServiceHost *Container
}
func (r *Client) HTTP(url string, opts ...HTTPOpts) *File {
q := r.q.Select("http")
q = q.Arg("url", url)
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].ExperimentalServiceHost) {
q = q.Arg("experimentalServiceHost", opts[i].ExperimentalServiceHost)
break
}
}
return &File{
q: q,
c: r.c,
}
}
type PipelineOpts struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
Description string
Labels []PipelineLabel
}
func (r *Client) Pipeline(name string, opts ...PipelineOpts) *Client {
q := r.q.Select("pipeline")
q = q.Arg("name", name)
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].Description) {
q = q.Arg("description", opts[i].Description)
break
}
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test if a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., long running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However this isn’t intuitive and is clearly a workaround (not meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces resolving the laziness in the pipeline, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and return the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server, the problem is that it may not be immediately clear and the syntax can vary depending on the language so there’s different “rules” to understand.
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes mean different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **”Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
More importantly, users have been using `.ExitCode(ctx)` as the goto solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute and there’s cases where you don’t want to do that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the goto solution to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we now go back to needing `WithExec(nil)` because `.Sync()` can’t assume you want to execute the container.
That’s a catch 22 situation! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp on these concepts and the Dagger model to chose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change** so it’s not seen as a viable solution now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## Issue: <a name="docs"></a>Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unamious 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
}
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].Labels) {
q = q.Arg("labels", opts[i].Labels)
break
}
}
return &Client{
q: q,
c: r.c,
}
}
func (r *Client) Project(name string) *Project {
q := r.q.Select("project")
q = q.Arg("name", name)
return &Project{
q: q,
c: r.c,
}
}
func (r *Client) Secret(id SecretID) *Secret {
q := r.q.Select("secret")
q = q.Arg("id", id)
return &Secret{
q: q,
c: r.c,
}
}
type SocketOpts struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
## Summary
Developers are often confused by a property of the Dagger engine called "laziness": pipelines are executed at the latest possible moment, to maximize performance. There are several dimensions to this problem; below is an overview of different dimensions, and status of possible solutions.
| Issues | Proposals |
| ------------------------------------------- | ------------------------ |
| [No `withExec`](#with-exec) | <li>#4833</li> |
| [`Dockerfile` build (without exec)](#build) | <li>#5065</li> |
| [Implicit query execution](#implicit) | <li>#5065</li> |
| [Multiple ways to execute](#multiple) | “Pipeline builder” model |
| [Documentation](#docs) | <li>#3617</li> |
## <a name="with-exec"></a>Issue: no `withExec`
We had some users report that part of their pipeline wasn't being executed, for which they had to add a `WithExec(nil)` statement for it to work:
```go
_, err := c.Container().Build(src).ExitCode(ctx) // doesn't work
_, err := c.Container().From("alpine").ExitCode(ctx) // same thing
```
### Explanation
Users may assume that since they know there’s an `Entrypoint`/`Cmd` in the docker image it should work, but it’s just updating the dagger container metadata. There’s nothing to run, it’s equivalent to the following:
```go
_, err := ctr.
WithEntrypoint([]string{}).
WithDefaultArgs(dagger.ContainerWithDefaultArgsOpts{
Args: []string{"/bin/sh"},
})
ExitCode(ctx) // nothing to execute!
```
`ExitCode` and `Stdout` only return something for the **last executed command**. That means the equivalent of a `RUN` instruction in a `Dockerfile` or running a container with `docker run`.
### Workaround
Add a `WithExec()` to tell dagger to execute the container:
```diff
_, err := client.Container().
Build(src).
+ WithExec(nil).
ExitCode(ctx)
```
The empty (`nil`) argument to `WithExec` will execute the **entrypoint and default args** configured in the dagger container.
> **Note**
> If you replace the `.ExitCode()` with a `Publish()`, you see that `Build()` is called and the image is published, because `Publish` doesn’t depend on execution but `Build` is still a dependency.
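For instance (a sketch; the registry address is hypothetical):
```go
// Publish triggers the Build dependency but never runs the entrypoint.
addr, err := client.Container().Build(src).Publish(ctx, "registry.example.com/app:latest")
```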
The same is true for a bound service:
```diff
db := client.Container().From("postgres").
WithExposedPort(5432).
+ WithExec(nil)
ctr := app.WithServiceBinding("db", db)
```
Here, `WithServiceBinding` clearly needs to execute/run the *postgres* container so that *app* can connect to it, so we need the `WithExec` here too (with `nil` for default entrypoint and arguments).
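Putting the pieces together, a minimal sketch (assuming `client` is a connected `*dagger.Client`, `app` is a container built earlier, and `./test-db` is a hypothetical command inside it):
```go
// Run the postgres image's default entrypoint/args as a bound service.
db := client.Container().From("postgres").
	WithExposedPort(5432).
	WithExec(nil)

// The app container can now reach the service at the "db" alias.
out, err := app.
	WithServiceBinding("db", db).
	WithExec([]string{"./test-db"}).
	Stdout(ctx)
```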
### Proposals
To avoid astonishment, a fix was added (#4716) to raise an error when fields like `.ExitCode` or `.WithServiceBinding` (that depend on `WithExec`) are used on a container that hasn’t been executed.
However, perhaps a better solution is to implicitly execute the entrypoint and default arguments because if you’re using a field that depends on an execution, we can assume that you mean to execute the container.
This is what #4833 proposes, meaning the following would now work as expected by users:
```go
// ExitCode → needs execution so use default exec
_, err := c.Container().From("alpine").ExitCode(ctx)
// WithServiceBinding → needs execution so use default exec
db := client.Container().From("postgres").WithExposedPort(5432)
ctr := app.WithServiceBinding("db", db)
```
```[tasklist]
### No `withExec`
- [x] #4716
- [ ] #4833
```
## <a name="build"></a>Issue: `Dockerfile` build (without exec)
Some users just want to test whether a `Dockerfile` build succeeds or not, and **don’t want to execute the entrypoint** (e.g., a long-running executable):
```go
_, err = client.Container().Build(src).ExitCode(ctx)
```
In this case users are just using `ExitCode` as a way to trigger the build when they also don’t want to `Publish`. It’s the same problem as above, but the intent is different.
### Workarounds
With #4919, you’ll be able to skip the entrypoint:
```go
_, err = client.Container().
Build(src).
WithExec([]string{"/bin/true"}, dagger.ContainerWithExecOpts{
SkipEntrypoint: true,
}).
ExitCode(ctx)
```
But executing the container isn’t even needed to build, so `ExitCode` isn’t a good choice here. It’s just simpler to use another field such as:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Rootfs().Entries(ctx)
```
However, this isn’t intuitive and is clearly a workaround (`Entries` isn’t meant for this).
### Proposal
Perhaps the best solution is to use a general synchronization primitive (#5065) that simply forces the lazy pipeline to resolve, especially since the result is discarded in the above workarounds:
```diff
- _, err = client.Container().Build(src).ExitCode(ctx)
+ _, err = client.Container().Build(src).Sync(ctx)
```
```[tasklist]
### `Dockerfile` build (without exec)
- [x] #4919
- [ ] #5065
```
## <a name="implicit"></a>Issue: Implicit query execution
Some functions are “lazy” and don’t result in a [query execution](http://spec.graphql.org/October2021/#sec-Execution) (e.g., `From`, `Build`, `WithXXX`), while others execute (e.g., `ExitCode`, `Stdout`, `Publish`).
It’s not clear to some users which is which.
### Explanation
The model is implicit, with a “rule of thumb” in each language to hint which ones execute:
- **Go:** functions taking a context and returning an error
- **Python** and **Node.js:** `async` functions that need an `await`
Essentially, each SDK’s codegen (the feature that introspects the API and builds a dagger client that is idiomatic in each language) **transforms [leaf fields](http://spec.graphql.org/October2021/#sec-Leaf-Field-Selections) into an implicit API request** when called, and returns the **value from the response**.
So the “rule of thumb” is based on the need to make a request to the GraphQL server. The problem is that this may not be immediately clear, and the syntax varies by language, so there are different “rules” to understand.
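As a rough illustration (simplified; the real generated code lives in each SDK, e.g. `sdk/go/api.gen.go`), a lazy field only extends the query selection, while a leaf field binds a response value and executes:
```go
// Lazy: returns a new selection, no request is made.
func (r *Container) From(address string) *Container {
	q := r.q.Select("from")
	q = q.Arg("address", address)
	return &Container{
		q: q,
		c: r.c,
	}
}

// Leaf: executes the query and returns the response value.
func (r *Container) Stdout(ctx context.Context) (string, error) {
	q := r.q.Select("stdout")
	var response string
	q = q.Bind(&response)
	return response, q.Execute(ctx, r.c)
}
```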
This was discussed in:
- #3555
- #3558
### Proposal
The same [Pipeline Synchronization](https://github.com/dagger/dagger/issues/5065) proposal from the previous issue helps make this a bit more explicit:
```go
_, err := ctr.Sync(ctx)
```
```[tasklist]
### Implicit query execution
- [x] #3555
- [x] #3558
- [ ] #5065
```
## <a name="multiple"></a>Issue: Multiple ways to execute
“Execution” sometimes means different things:
- **Container execution** (i.e., `Container.withExec`)
- **[Query execution](http://spec.graphql.org/October2021/#sec-Execution)** (i.e., making a request to the GraphQL API)
- **“Engine” execution** (i.e., doing actual work in BuildKit)
The *ID* fields like `Container.ID` for example, make a request to the API, but don’t do any actual work building the container. We reduced the scope of the issue in the SDKs by avoiding passing IDs around (#3558), and keeping the pipeline as lazy as possible until an output is needed (see [Implicit query execution](#implicit) above).
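For example (sketch):
```go
// One API round-trip to compute an ID; no build work happens in BuildKit.
id, err := c.Container().From("alpine").ID(ctx)
```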
More importantly, users have been using `.ExitCode(ctx)` as the go-to solution to “synchronize” the laziness, but as we’ve seen in the above issues, it triggers the container to execute, and there are cases where you don’t want that.
However, adding the general `.Sync()` (#4205) to fix that may make people shift to using it as the go-to way to “resolve” the laziness instead (“synchronize”), which actually makes sense. The problem is that we then go back to needing `WithExec(nil)`, because `.Sync()` can’t assume you want to execute the container.
That’s a catch-22! **There’s no single execute function** to “rule them all”.
It requires the user to have a good enough grasp of these concepts and the Dagger model to choose the right function for each purpose:
```go
// exec the container (build since it's a dependency)
c.Container().Build(src).ExitCode(ctx)
// just build (don't exec)
c.Container().Build(src).Sync(ctx)
```
### Proposal
During the “implicit vs explicit” discussions, the proposal for the most explicit solution was for a “pipeline builder” model (https://github.com/dagger/dagger/issues/3555#issuecomment-1301327344).
The idea was to make a clear separation between **building** the lazy pipeline and **executing** the query:
```go
// ExitCode doesn't implicitly execute query here! Still lazy.
// Just setting expected output, and adding exec as a dependency.
// Build is a dependency for exec so it also runs.
q := c.Container().Build(src).ExitCode()
// Same as above but don't care about output, just exec.
q := c.Container().Build(src).WithExec(nil)
// Same as above but don't want to exec, just build.
q := c.Container().Build(src)
// Only one way to execute query!
client.Query(q)
```
### Downsides
- It’s a big **breaking change**, so it’s not seen as a viable solution right now
- No great solution to grab output values
- More boilerplate for simple things
### Solution
Embrace the laziness!
## <a name="docs"></a>Issue: Documentation
We have a guide on [Lazy Evaluation](https://docs.dagger.io/api/975146/concepts#lazy-evaluation) but it’s focused on the GraphQL API and isn’t enough to explain the above issues.
We need better documentation to help users understand the “lazy DAG” model (https://github.com/dagger/dagger/issues/3617). It’s even more important if the “pipeline builder” model above isn’t viable.
```[tasklist]
### Documentation
- [x] #3622
- [ ] #3617
```
## Affected users
These are only some examples of users that were affected by this:
- from @RonanQuigley
> DX or a Bug: In order to have a dockerfile's entrypoint executed, why did we need to use a dummy withExec? There was a unanimous 😩 in our team call after we figured this out.
- https://discord.com/channels/707636530424053791/708371226174685314/1079926439064911972
- https://discord.com/channels/707636530424053791/1080160708123185264/1080174051965812766
- #5010
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
ID SocketID
}
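// Socket selects a socket, optionally scoped by its ID via SocketOpts.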
func (r *Client) Socket(opts ...SocketOpts) *Socket {
q := r.q.Select("socket")
for i := len(opts) - 1; i >= 0; i-- {
if !querybuilder.IsZeroValue(opts[i].ID) {
q = q.Arg("id", opts[i].ID)
break
}
}
return &Socket{
q: q,
c: r.c,
}
}
type Secret struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
q *querybuilder.Selection
c graphql.Client
}
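// ID returns the identifier of this secret; calling it executes a query against the API.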
func (r *Secret) ID(ctx context.Context) (SecretID, error) {
q := r.q.Select("id")
var response SecretID
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Secret) XXX_GraphQLType() string {
return "Secret"
}
func (r *Secret) XXX_GraphQLID(ctx context.Context) (string, error) {
id, err := r.ID(ctx)
if err != nil {
return "", err
}
return string(id), nil
}
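// Plaintext returns the secret's value in plaintext; calling it executes a query against the API.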
func (r *Secret) Plaintext(ctx context.Context) (string, error) {
q := r.q.Select("plaintext")
var response string
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
type Socket struct {
q *querybuilder.Selection
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,668 |
Lazy executions are confusing to understand and sometimes don't work as expected
|
https://github.com/dagger/dagger/issues/4668
|
https://github.com/dagger/dagger/pull/4716
|
f2a62f276d36918b0e453389dc7c63cad195da59
|
ad722627391f3e7d5bf51534f846913afc95d555
| 2023-02-28T17:37:30Z |
go
| 2023-03-07T23:54:56Z |
sdk/go/api.gen.go
|
c graphql.Client
}
func (r *Socket) ID(ctx context.Context) (SocketID, error) {
q := r.q.Select("id")
var response SocketID
q = q.Bind(&response)
return response, q.Execute(ctx, r.c)
}
func (r *Socket) XXX_GraphQLType() string {
return "Socket"
}
func (r *Socket) XXX_GraphQLID(ctx context.Context) (string, error) {
id, err := r.ID(ctx)
if err != nil {
return "", err
}
return string(id), nil
}
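// CacheSharingMode determines how a cache volume can be shared between pipeline runs.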
type CacheSharingMode string
const (
Locked CacheSharingMode = "LOCKED"
Private CacheSharingMode = "PRIVATE"
Shared CacheSharingMode = "SHARED"
)
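// NetworkProtocol is the transport protocol of an exposed port.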
type NetworkProtocol string
const (
Tcp NetworkProtocol = "TCP"
Udp NetworkProtocol = "UDP"
)
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
Found while debugging a large job that was running on a small (20 GB) disk. The small disk size caused BuildKit to very aggressively prune almost the whole local cache, but that pruning happened in parallel with an export, which caused a layer to not be found and the export to fail.
The very temporary workaround is to use a larger disk, but that will just make the issue rarer.
The fix on our end is to grab a lease on the content in the content store for the window between the client handing off the export request and the S3 cache manager actually pushing it. We should be able to just pass the content store and the lease manager to the S3 manager, since both are available in our main func.
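A rough sketch of that idea, assuming containerd's `leases` package is what ends up being wired through (the helper below is hypothetical, not the actual patch):
```go
import (
	"context"
	"time"

	"github.com/containerd/containerd/leases"
)

// exportWithLease holds a lease for the duration of push so that
// concurrent garbage collection cannot prune the blobs being exported.
func exportWithLease(ctx context.Context, lm leases.Manager, push func(context.Context) error) error {
	lease, err := lm.Create(ctx, leases.WithRandomID(), leases.WithExpiration(time.Hour))
	if err != nil {
		return err
	}
	// Release the lease once the push finishes so GC can reclaim the content.
	defer lm.Delete(context.Background(), lease)

	return push(leases.WithLease(ctx, lease.ID))
}
```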
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
package main
import (
"context"
"crypto/tls"
"crypto/x509"
goerrors "errors"
"fmt"
"net"
"os"
"os/user"
"path/filepath"
"sort"
"strconv"
"strings"
"time"
"github.com/containerd/containerd/pkg/seed"
"github.com/containerd/containerd/pkg/userns"
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/remotes/docker"
"github.com/containerd/containerd/sys"
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
sddaemon "github.com/coreos/go-systemd/v22/daemon"
daggerremotecache "github.com/dagger/dagger/engine/remotecache"
"github.com/docker/docker/pkg/reexec"
"github.com/gofrs/flock"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
"github.com/moby/buildkit/cache/remotecache"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/cmd/buildkitd/config"
"github.com/moby/buildkit/control"
"github.com/moby/buildkit/executor/oci"
"github.com/moby/buildkit/frontend"
dockerfile "github.com/moby/buildkit/frontend/dockerfile/builder"
"github.com/moby/buildkit/frontend/gateway"
"github.com/moby/buildkit/frontend/gateway/forwarder"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/solver/bboltcachestorage"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/appdefaults"
"github.com/moby/buildkit/util/archutil"
"github.com/moby/buildkit/util/bklog"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/profiler"
"github.com/moby/buildkit/util/resolver"
"github.com/moby/buildkit/util/stack"
"github.com/moby/buildkit/util/tracing/detect"
_ "github.com/moby/buildkit/util/tracing/detect/jaeger"
_ "github.com/moby/buildkit/util/tracing/env"
"github.com/moby/buildkit/util/tracing/transform"
"github.com/moby/buildkit/version"
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
"github.com/moby/buildkit/worker"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"go.etcd.io/bbolt"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"go.opentelemetry.io/otel/propagation"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
"go.opentelemetry.io/otel/trace"
tracev1 "go.opentelemetry.io/proto/otlp/collector/trace/v1"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
)
const (
autoMode = "auto"
)
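// init registers build metadata, seeds math/rand, short-circuits reexec'd child processes, and installs the trace recorder.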
func init() {
apicaps.ExportedProduct = "buildkit"
stack.SetVersionInfo(version.Version, version.Revision)
seed.WithTimeAndRand()
if reexec.Init() {
os.Exit(0)
}
detect.Recorder = detect.NewTraceRecorder()
}
var propagators = propagation.NewCompositeTextMapPropagator(propagation.TraceContext{}, propagation.Baggage{})
type workerInitializerOpt struct {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
config *config.Config
sessionManager *session.Manager
traceSocket string
}
type workerInitializer struct {
fn func(c *cli.Context, common workerInitializerOpt) ([]worker.Worker, error)
priority int
}
var (
appFlags []cli.Flag
workerInitializers []workerInitializer
)
func registerWorkerInitializer(wi workerInitializer, flags ...cli.Flag) {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
workerInitializers = append(workerInitializers, wi)
sort.Slice(workerInitializers,
func(i, j int) bool {
return workerInitializers[i].priority < workerInitializers[j].priority
})
appFlags = append(appFlags, flags...)
}
func main() {
cli.VersionPrinter = func(c *cli.Context) {
fmt.Println(c.App.Name, version.Package, c.App.Version, version.Revision)
}
app := cli.NewApp()
app.Name = "buildkitd"
app.Usage = "build daemon"
app.Version = version.Version
defaultConf, err := defaultConf()
if err != nil {
fmt.Fprintf(os.Stderr, "%+v\n", err)
os.Exit(1)
}
rootlessUsage := "set all the default options to be compatible with rootless containers"
if userns.RunningInUserNS() {
app.Flags = append(app.Flags, cli.BoolTFlag{
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
Name: "rootless",
Usage: rootlessUsage + " (default: true)",
})
} else {
app.Flags = append(app.Flags, cli.BoolFlag{
Name: "rootless",
Usage: rootlessUsage,
})
}
groupValue := func(gid *int) string {
if gid == nil {
return ""
}
return strconv.Itoa(*gid)
}
app.Flags = append(app.Flags,
cli.StringFlag{
Name: "config",
Usage: "path to config file",
Value: defaultConfigPath(),
},
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output in logs",
},
cli.StringFlag{
Name: "root",
Usage: "path to state directory",
Value: defaultConf.Root,
},
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
cli.StringSliceFlag{
Name: "addr",
Usage: "listening address (socket or tcp)",
Value: &cli.StringSlice{defaultConf.GRPC.Address[0]},
},
cli.StringFlag{
Name: "group",
Usage: "group (name or gid) which will own all Unix socket listening addresses",
Value: groupValue(defaultConf.GRPC.GID),
},
cli.StringFlag{
Name: "debugaddr",
Usage: "debugging address (eg. 0.0.0.0:6060)",
Value: defaultConf.GRPC.DebugAddress,
},
cli.StringFlag{
Name: "tlscert",
Usage: "certificate file to use",
Value: defaultConf.GRPC.TLS.Cert,
},
cli.StringFlag{
Name: "tlskey",
Usage: "key file to use",
Value: defaultConf.GRPC.TLS.Key,
},
cli.StringFlag{
Name: "tlscacert",
Usage: "ca certificate to verify clients",
Value: defaultConf.GRPC.TLS.CA,
},
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
cli.StringSliceFlag{
Name: "allow-insecure-entitlement",
Usage: "allows insecure entitlements e.g. network.host, security.insecure",
},
)
app.Flags = append(app.Flags, appFlags...)
app.Action = func(c *cli.Context) error {
if os.Geteuid() > 0 {
return errors.New("rootless mode requires to be executed as the mapped root in a user namespace; you may use RootlessKit for setting up the namespace")
}
ctx, cancel := context.WithCancel(appcontext.Context())
defer cancel()
cfg, err := config.LoadFile(c.GlobalString("config"))
if err != nil {
return err
}
if err := setDaggerDefaults(&cfg); err != nil {
return err
}
setDefaultConfig(&cfg)
if err := applyMainFlags(c, &cfg); err != nil {
return err
}
logrus.SetFormatter(&logrus.TextFormatter{FullTimestamp: true})
if cfg.Debug {
logrus.SetLevel(logrus.DebugLevel)
}
if cfg.GRPC.DebugAddress != "" {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
if err := setupDebugHandlers(cfg.GRPC.DebugAddress); err != nil {
return err
}
}
tp, err := detect.TracerProvider()
if err != nil {
return err
}
streamTracer := otelgrpc.StreamServerInterceptor(otelgrpc.WithTracerProvider(tp), otelgrpc.WithPropagators(propagators))
unary := grpc_middleware.ChainUnaryServer(unaryInterceptor(context.Background(), tp), grpcerrors.UnaryServerInterceptor)
stream := grpc_middleware.ChainStreamServer(streamTracer, grpcerrors.StreamServerInterceptor)
opts := []grpc.ServerOption{grpc.UnaryInterceptor(unary), grpc.StreamInterceptor(stream)}
server := grpc.NewServer(opts...)
root, err := filepath.Abs(cfg.Root)
if err != nil {
return err
}
cfg.Root = root
if err := os.MkdirAll(root, 0700); err != nil {
return errors.Wrapf(err, "failed to create %s", root)
}
lockPath := filepath.Join(root, "buildkitd.lock")
lock := flock.New(lockPath)
locked, err := lock.TryLock()
if err != nil {
return errors.Wrapf(err, "could not lock %s", lockPath)
}
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
if !locked {
return errors.Errorf("could not lock %s, another instance running?", lockPath)
}
defer func() {
lock.Unlock()
os.RemoveAll(lockPath)
}()
controller, remoteCacheDoneCh, err := newController(ctx, c, &cfg)
if err != nil {
return err
}
defer controller.Close()
controller.Register(server)
ents := c.GlobalStringSlice("allow-insecure-entitlement")
if len(ents) > 0 {
cfg.Entitlements = []string{}
for _, e := range ents {
switch e {
case "security.insecure":
cfg.Entitlements = append(cfg.Entitlements, e)
case "network.host":
cfg.Entitlements = append(cfg.Entitlements, e)
default:
return errors.Errorf("invalid entitlement : %s", e)
}
}
}
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
memListener := newInMemListener(server)
memClient, err := memListener.NewClient(ctx)
if err != nil {
return err
}
daggerClient, stopOperatorSession, err := NewOperatorClient(ctx, memClient)
if err != nil {
return err
}
defer stopOperatorSession()
stopCacheMountSync, err := daggerremotecache.StartCacheMountSynchronization(ctx, daggerClient)
if err != nil {
cancel()
bklog.G(ctx).WithError(err).Error("failed to start cache mount synchronization")
return err
}
errCh := make(chan error, 1)
if err := serveGRPC(cfg.GRPC, server, errCh); err != nil {
return err
}
select {
case serverErr := <-errCh:
err = serverErr
cancel()
case <-ctx.Done():
err = ctx.Err()
}
bklog.G(ctx).Infof("stopping server")
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
stopCacheSyncCtx, cancelCacheSync := context.WithTimeout(context.Background(), 300*time.Second)
defer cancelCacheSync()
stopCacheMountSyncErr := stopCacheMountSync(stopCacheSyncCtx)
if stopCacheMountSyncErr != nil {
bklog.G(ctx).WithError(stopCacheMountSyncErr).Error("failed to stop cache mount synchronization")
}
err = goerrors.Join(err, stopCacheMountSyncErr)
stopOperatorSession()
if os.Getenv("NOTIFY_SOCKET") != "" {
notified, notifyErr := sddaemon.SdNotify(false, sddaemon.SdNotifyStopping)
bklog.G(ctx).Debugf("SdNotifyStopping notified=%v, err=%v", notified, notifyErr)
}
select {
case <-remoteCacheDoneCh:
case <-time.After(60 * time.Second):
}
server.GracefulStop()
return err
}
app.After = func(_ *cli.Context) error {
return detect.Shutdown(context.TODO())
}
profiler.Attach(app)
if err := app.Run(os.Args); err != nil {
fmt.Fprintf(os.Stderr, "buildkitd: %+v\n", err)
os.Exit(1)
}
}
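// serveGRPC starts the server on every configured listener and forwards the first serve error to errCh.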
func serveGRPC(cfg config.GRPCConfig, server *grpc.Server, errCh chan error) error {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
addrs := cfg.Address
if len(addrs) == 0 {
return errors.New("--addr cannot be empty")
}
tlsConfig, err := serverCredentials(cfg.TLS)
if err != nil {
return err
}
eg, _ := errgroup.WithContext(context.Background())
listeners := make([]net.Listener, 0, len(addrs))
for _, addr := range addrs {
l, err := getListener(addr, *cfg.UID, *cfg.GID, tlsConfig)
if err != nil {
for _, l := range listeners {
l.Close()
}
return err
}
listeners = append(listeners, l)
}
if os.Getenv("NOTIFY_SOCKET") != "" {
notified, notifyErr := sddaemon.SdNotify(false, sddaemon.SdNotifyReady)
logrus.Debugf("SdNotifyReady notified=%v, err=%v", notified, notifyErr)
}
for _, l := range listeners {
func(l net.Listener) {
eg.Go(func() error {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
defer l.Close()
logrus.Infof("running server on %s", l.Addr())
return server.Serve(l)
})
}(l)
}
go func() {
errCh <- eg.Wait()
}()
return nil
}
func defaultConfigPath() string {
if userns.RunningInUserNS() {
return filepath.Join(appdefaults.UserConfigDir(), "buildkitd.toml")
}
return filepath.Join(appdefaults.ConfigDir, "buildkitd.toml")
}
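// defaultConf loads the default config file, tolerating a missing file and falling back to built-in defaults.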
func defaultConf() (config.Config, error) {
cfg, err := config.LoadFile(defaultConfigPath())
if err != nil {
var pe *os.PathError
if !errors.As(err, &pe) {
return config.Config{}, err
}
logrus.Warnf("failed to load default config: %v", err)
}
setDefaultConfig(&cfg)
return cfg, nil
}
func setDefaultNetworkConfig(nc config.NetworkConfig) config.NetworkConfig {
|
closed
|
dagger/dagger
|
https://github.com/dagger/dagger
| 4,748 |
remotecache: error exporting manifest during concurrent garbage collection
|
https://github.com/dagger/dagger/issues/4748
|
https://github.com/dagger/dagger/pull/4758
|
076b60be9a01c1e42a47ccf23f81405c2c640c9b
|
67c7e7635cf4ea0e446e2fed522a3e314c960f6a
| 2023-03-10T21:02:05Z |
go
| 2023-03-14T16:43:00Z |
cmd/engine/main.go
|
    if nc.Mode == "" {
        nc.Mode = autoMode
    }
    if nc.CNIConfigPath == "" {
        nc.CNIConfigPath = appdefaults.DefaultCNIConfigPath
    }
    if nc.CNIBinaryPath == "" {
        nc.CNIBinaryPath = appdefaults.DefaultCNIBinDir
    }
    return nc
}

func setDefaultConfig(cfg *config.Config) {
    orig := *cfg
    if cfg.Root == "" {
        cfg.Root = appdefaults.Root
    }
    if len(cfg.GRPC.Address) == 0 {
        cfg.GRPC.Address = []string{appdefaults.Address}
    }
    if cfg.Workers.OCI.Platforms == nil {
        cfg.Workers.OCI.Platforms = formatPlatforms(archutil.SupportedPlatforms(false))
    }
    if cfg.Workers.Containerd.Platforms == nil {
        cfg.Workers.Containerd.Platforms = formatPlatforms(archutil.SupportedPlatforms(false))
    }
    cfg.Workers.OCI.NetworkConfig = setDefaultNetworkConfig(cfg.Workers.OCI.NetworkConfig)
    cfg.Workers.Containerd.NetworkConfig = setDefaultNetworkConfig(cfg.Workers.Containerd.NetworkConfig)
    if userns.RunningInUserNS() {
        if u := os.Getenv("USER"); u != "" && u != "root" {
            if orig.Root == "" {
                cfg.Root = appdefaults.UserRoot()
            }
            if len(orig.GRPC.Address) == 0 {
                cfg.GRPC.Address = []string{appdefaults.UserAddress()}
            }
            appdefaults.EnsureUserAddressDir()
        }
    }
}

func applyMainFlags(c *cli.Context, cfg *config.Config) error {
    if c.IsSet("debug") {
        cfg.Debug = c.Bool("debug")
    }
    if c.IsSet("root") {
        cfg.Root = c.String("root")
    }
    if c.IsSet("addr") || len(cfg.GRPC.Address) == 0 {
        cfg.GRPC.Address = c.StringSlice("addr")
    }
    if c.IsSet("allow-insecure-entitlement") {
        cfg.Entitlements = c.StringSlice("allow-insecure-entitlement")
    }
    if c.IsSet("debugaddr") {
        cfg.GRPC.DebugAddress = c.String("debugaddr")
    }
    if cfg.GRPC.UID == nil {
        uid := os.Getuid()
        cfg.GRPC.UID = &uid
    }
    if cfg.GRPC.GID == nil {
        gid := os.Getgid()
        cfg.GRPC.GID = &gid
    }
    if group := c.String("group"); group != "" {
        gid, err := grouptoGID(group)
        if err != nil {
            return err
        }
        cfg.GRPC.GID = &gid
    }
    if tlscert := c.String("tlscert"); tlscert != "" {
        cfg.GRPC.TLS.Cert = tlscert
    }
    if tlskey := c.String("tlskey"); tlskey != "" {
        cfg.GRPC.TLS.Key = tlskey
    }
    if tlsca := c.String("tlscacert"); tlsca != "" {
        cfg.GRPC.TLS.CA = tlsca
    }
    return nil
}

func grouptoGID(group string) (int, error) {
    if group == "" {
        return os.Getgid(), nil
    }
    var (
        err error
        id  int
    )
    if id, err = strconv.Atoi(group); err == nil {
        return id, nil
    } else if err.(*strconv.NumError).Err != strconv.ErrSyntax {
        return 0, err
    }
    ginfo, err := user.LookupGroup(group)
    if err != nil {
        return 0, err
    }
    group = ginfo.Gid
    if id, err = strconv.Atoi(group); err != nil {
        return 0, err
    }
    return id, nil
}

func getListener(addr string, uid, gid int, tlsConfig *tls.Config) (net.Listener, error) {
    addrSlice := strings.SplitN(addr, "://", 2)
    if len(addrSlice) < 2 {
        return nil, errors.Errorf("address %s does not contain proto, you meant unix://%s ?",
            addr, addr)
    }
    proto := addrSlice[0]
    listenAddr := addrSlice[1]
    switch proto {
    case "unix", "npipe":
        if tlsConfig != nil {
            logrus.Warnf("TLS is disabled for %s", addr)
        }
        return sys.GetLocalListener(listenAddr, uid, gid)
    case "fd":
        return listenFD(listenAddr, tlsConfig)
    case "tcp":
        l, err := net.Listen("tcp", listenAddr)
        if err != nil {
            return nil, err
        }
        if tlsConfig == nil {
            logrus.Warnf("TLS is not enabled for %s. enabling mutual TLS authentication is highly recommended", addr)
            return l, nil
        }
        return tls.NewListener(l, tlsConfig), nil
    default:
        return nil, errors.Errorf("addr %s not supported", addr)
    }
}
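
// getListener only accepts proto-prefixed addresses, split on "://".
// Illustrative values (the socket path and fd number are examples, not
// mandated defaults):
//
//     unix:///run/buildkit/buildkitd.sock   -> local socket owned by uid:gid
//     tcp://127.0.0.1:1234                  -> plain TCP, TLS-wrapped when configured
//     fd://3                                -> inherit an already-open file descriptor
//
// Note that TLS is deliberately skipped for unix/npipe sockets, with a
// warning if a TLS config was supplied anyway.
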
func unaryInterceptor(globalCtx context.Context, tp trace.TracerProvider) grpc.UnaryServerInterceptor {
    withTrace := otelgrpc.UnaryServerInterceptor(otelgrpc.WithTracerProvider(tp), otelgrpc.WithPropagators(propagators))
    return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
        ctx, cancel := context.WithCancel(ctx)
        defer cancel()
        go func() {
            select {
            case <-ctx.Done():
            case <-globalCtx.Done():
                cancel()
            }
        }()
        if strings.HasSuffix(info.FullMethod, "opentelemetry.proto.collector.trace.v1.TraceService/Export") {
            return handler(ctx, req)
        }
        resp, err = withTrace(ctx, req, info, handler)
        if err != nil {
            logrus.Errorf("%s returned error: %v", info.FullMethod, err)
            if logrus.GetLevel() >= logrus.DebugLevel {
                fmt.Fprintf(os.Stderr, "%+v", stack.Formatter(grpcerrors.FromGRPC(err)))
            }
        }
        return
    }
}

func serverCredentials(cfg config.TLSConfig) (*tls.Config, error) {
    certFile := cfg.Cert
    keyFile := cfg.Key
    caFile := cfg.CA
    if certFile == "" && keyFile == "" {
        return nil, nil
    }
    err := errors.New("you must specify key and cert file if one is specified")
    if certFile == "" {
        return nil, err
    }
    if keyFile == "" {
        return nil, err
    }
    certificate, err := tls.LoadX509KeyPair(certFile, keyFile)
    if err != nil {
        return nil, errors.Wrap(err, "could not load server key pair")
    }
    tlsConf := &tls.Config{
        Certificates: []tls.Certificate{certificate},
        MinVersion:   tls.VersionTLS12,
    }
    if caFile != "" {
        certPool := x509.NewCertPool()
        ca, err := os.ReadFile(caFile)
        if err != nil {
            return nil, errors.Wrap(err, "could not read ca certificate")
        }
        if ok := certPool.AppendCertsFromPEM(ca); !ok {
            return nil, errors.New("failed to append ca cert")
        }
        tlsConf.ClientAuth = tls.RequireAndVerifyClientCert
        tlsConf.ClientCAs = certPool
    }
    return tlsConf, nil
}
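
// serverCredentials returns nil (plaintext) only when neither cert nor key
// is set; supplying just one of the two is an error. When cfg.CA is also
// set, the listener requires and verifies client certificates, i.e. full
// mutual TLS rather than server-only TLS.
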
func newController(ctx context.Context, c *cli.Context, cfg *config.Config) (*control.Controller, <-chan struct{}, error) {
    sessionManager, err := session.NewManager()
    if err != nil {
        return nil, nil, err
    }
    tc, err := detect.Exporter()
    if err != nil {
        return nil, nil, err
    }
    var traceSocket string
    if tc != nil {
        traceSocket = filepath.Join(cfg.Root, "otel-grpc.sock")
        if err := runTraceController(traceSocket, tc); err != nil {
            logrus.Warnf("failed set up otel-grpc controller: %v", err)
            traceSocket = ""
        }
    }
    wc, err := newWorkerController(c, workerInitializerOpt{
        config:         cfg,
        sessionManager: sessionManager,
        traceSocket:    traceSocket,
    })
    if err != nil {
        return nil, nil, err
    }
    frontends := map[string]frontend.Frontend{}
    frontends["dockerfile.v0"] = forwarder.NewGatewayForwarder(wc, dockerfile.Build)
    frontends["gateway.v0"] = gateway.NewGatewayFrontend(wc)
    cacheStorage, err := bboltcachestorage.NewStore(filepath.Join(cfg.Root, "cache.db"))
    if err != nil {
        return nil, nil, err
    }
    historyDB, err := bbolt.Open(filepath.Join(cfg.Root, "history.db"), 0600, nil)
    if err != nil {
        return nil, nil, err
    }
    resolverFn := resolverFunc(cfg)
    w, err := wc.GetDefault()
    if err != nil {
        return nil, nil, err
    }
    cacheExporterFunc, cacheImporterFunc, remoteCacheDoneCh, err := daggerremotecache.StartDaggerCache(ctx,
        sessionManager, w.ContentStore(), resolverFn)
    if err != nil {
        return nil, nil, err
    }
    remoteCacheExporterFuncs := map[string]remotecache.ResolveCacheExporterFunc{
        "dagger": cacheExporterFunc,
    }
    remoteCacheImporterFuncs := map[string]remotecache.ResolveCacheImporterFunc{
        "dagger": cacheImporterFunc,
    }
    ctrler, err := control.NewController(control.Opt{
        SessionManager:            sessionManager,
        WorkerController:          wc,
        Frontends:                 frontends,
        ResolveCacheExporterFuncs: remoteCacheExporterFuncs,
        ResolveCacheImporterFuncs: remoteCacheImporterFuncs,
        CacheKeyStorage:           cacheStorage,
        Entitlements:              cfg.Entitlements,
        TraceCollector:            tc,
        HistoryDB:                 historyDB,
        LeaseManager:              w.LeaseManager(),
        ContentStore:              w.ContentStore(),
        HistoryConfig:             cfg.History,
    })
    if err != nil {
        return nil, nil, err
    }
    return ctrler, remoteCacheDoneCh, nil
}

func resolverFunc(cfg *config.Config) docker.RegistryHosts {
    return resolver.NewRegistryConfig(cfg.Registries)
}

func newWorkerController(c *cli.Context, wiOpt workerInitializerOpt) (*worker.Controller, error) {
    wc := &worker.Controller{}
    nWorkers := 0
    for _, wi := range workerInitializers {
        ws, err := wi.fn(c, wiOpt)
        if err != nil {
            return nil, err
        }
        for _, w := range ws {
            p := w.Platforms(false)
            logrus.Infof("found worker %q, labels=%v, platforms=%v", w.ID(), w.Labels(), formatPlatforms(p))
            archutil.WarnIfUnsupported(p)
            if err = wc.Add(w); err != nil {
                return nil, err
            }
            nWorkers++
        }
    }
    if nWorkers == 0 {
        return nil, errors.New("no worker found, rebuild the buildkit daemon?")
    }
    defaultWorker, err := wc.GetDefault()
    if err != nil {
        return nil, err
    }
    logrus.Infof("found %d workers, default=%q", nWorkers, defaultWorker.ID())
    logrus.Warn("currently, only the default worker can be used.")
    return wc, nil
}

func attrMap(sl []string) (map[string]string, error) {
    m := map[string]string{}
    for _, v := range sl {
        parts := strings.SplitN(v, "=", 2)
        if len(parts) != 2 {
            return nil, errors.Errorf("invalid value %s", v)
        }
        m[parts[0]] = parts[1]
    }
    return m, nil
}

func formatPlatforms(p []ocispecs.Platform) []string {
    str := make([]string, 0, len(p))
    for _, pp := range p {
        str = append(str, platforms.Format(platforms.Normalize(pp)))
    }
    return str
}

func parsePlatforms(platformsStr []string) ([]ocispecs.Platform, error) {
    out := make([]ocispecs.Platform, 0, len(platformsStr))
    for _, s := range platformsStr {
        p, err := platforms.Parse(s)
        if err != nil {
            return nil, err
        }
        out = append(out, platforms.Normalize(p))
    }
    return out, nil
}

func getGCPolicy(cfg config.GCConfig, root string) []client.PruneInfo {
    if cfg.GC != nil && !*cfg.GC {
        return nil
    }
    if len(cfg.GCPolicy) == 0 {
        cfg.GCPolicy = config.DefaultGCPolicy(root, cfg.GCKeepStorage)
    }
    out := make([]client.PruneInfo, 0, len(cfg.GCPolicy))
    for _, rule := range cfg.GCPolicy {
        out = append(out, client.PruneInfo{
            Filter:       rule.Filters,
            All:          rule.All,
            KeepBytes:    rule.KeepBytes,
            KeepDuration: time.Duration(rule.KeepDuration) * time.Second,
        })
    }
    return out
}

func getBuildkitVersion() client.BuildkitVersion {
    return client.BuildkitVersion{
        Package:  version.Package,
        Version:  version.Version,
        Revision: version.Revision,
    }
}

func getDNSConfig(cfg *config.DNSConfig) *oci.DNSConfig {
    var dns *oci.DNSConfig
    if cfg != nil {
        dns = &oci.DNSConfig{
            Nameservers:   cfg.Nameservers,
            Options:       cfg.Options,
            SearchDomains: cfg.SearchDomains,
        }
    }
    return dns
}

func parseBoolOrAuto(s string) (*bool, error) {
    if s == "" || strings.EqualFold(s, autoMode) {
        return nil, nil
    }
    b, err := strconv.ParseBool(s)
    return &b, err
}

func runTraceController(p string, exp sdktrace.SpanExporter) error {
    server := grpc.NewServer()
    tracev1.RegisterTraceServiceServer(server, &traceCollector{exporter: exp})
    uid := os.Getuid()
    l, err := sys.GetLocalListener(p, uid, uid)
    if err != nil {
        return err
    }
    if err := os.Chmod(p, 0666); err != nil {
        l.Close()
        return err
    }
    go server.Serve(l)
    return nil
}

type traceCollector struct {
    *tracev1.UnimplementedTraceServiceServer
    exporter sdktrace.SpanExporter
}

func (t *traceCollector) Export(ctx context.Context, req *tracev1.ExportTraceServiceRequest) (*tracev1.ExportTraceServiceResponse, error) {
    err := t.exporter.ExportSpans(ctx, transform.Spans(req.GetResourceSpans()))
    if err != nil {
        return nil, err
    }
    return &tracev1.ExportTraceServiceResponse{}, nil
}

updated_file | engine/remotecache/cache.go
chunk_content |
package remotecache

import (
    "context"
    "os"
    "strings"

    "dagger.io/dagger"
    "github.com/containerd/containerd/content"
    "github.com/containerd/containerd/remotes/docker"
    "github.com/dagger/dagger/internal/engine"
    "github.com/moby/buildkit/cache/remotecache"
    "github.com/moby/buildkit/cache/remotecache/azblob"
    "github.com/moby/buildkit/cache/remotecache/gha"
    registryremotecache "github.com/moby/buildkit/cache/remotecache/registry"
    "github.com/moby/buildkit/cache/remotecache/s3"
    "github.com/moby/buildkit/session"
    "github.com/moby/buildkit/util/bklog"
    ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/pkg/errors"
)

func StartDaggerCache(ctx context.Context, sm *session.Manager, cs content.Store, hosts docker.RegistryHosts) (remotecache.ResolveCacheExporterFunc, remotecache.ResolveCacheImporterFunc, <-chan struct{}, error) {
    cacheType, attrs, err := cacheConfigFromEnv()
    if err != nil {
        return nil, nil, nil, err
    }
    doneCh := make(chan struct{}, 1)
    var s3Manager *s3CacheManager
    if cacheType == experimentalDaggerS3CacheType {
        s3Manager, err = newS3CacheManager(ctx, attrs, doneCh)
        if err != nil {
            return nil, nil, nil, err
        }
    }
    return resolveCacheExporterFunc(sm, hosts, s3Manager), resolveCacheImporterFunc(sm, cs, hosts, s3Manager), doneCh, nil
}

func resolveCacheExporterFunc(sm *session.Manager, resolverFn docker.RegistryHosts, s3Manager *s3CacheManager) remotecache.ResolveCacheExporterFunc {
    return func(ctx context.Context, g session.Group, userAttrs map[string]string) (remotecache.Exporter, error) {
        cacheType, attrs, err := cacheConfigFromEnv()
        if err != nil {
            return nil, err
        }
        var impl remotecache.Exporter
        switch cacheType {
        case "registry":
            impl, err = registryremotecache.ResolveCacheExporterFunc(sm, resolverFn)(ctx, g, attrs)
        case "gha":
            impl, err = gha.ResolveCacheExporterFunc()(ctx, g, attrs)
        case "s3":
            impl, err = s3.ResolveCacheExporterFunc()(ctx, g, attrs)
        case experimentalDaggerS3CacheType:
            impl = newS3CacheExporter(s3Manager)
        case "azblob":
            impl, err = azblob.ResolveCacheExporterFunc()(ctx, g, attrs)
        default:
            bklog.G(ctx).Debugf("unsupported cache type %s, defaulting export off", cacheType)
        }
        if err != nil {
            return nil, err
        }
        if userAttrs != nil {
            userAttrs["mode"] = attrs["mode"]
        }
        return impl, nil
    }
}

func resolveCacheImporterFunc(sm *session.Manager, cs content.Store, hosts docker.RegistryHosts, s3Manager *s3CacheManager) remotecache.ResolveCacheImporterFunc {
    return func(ctx context.Context, g session.Group, userAttrs map[string]string) (remotecache.Importer, ocispecs.Descriptor, error) {
        cacheType, attrs, err := cacheConfigFromEnv()
        if err != nil {
            return nil, ocispecs.Descriptor{}, err
        }
        var impl remotecache.Importer
        var desc ocispecs.Descriptor
        switch cacheType {
        case "registry":
            impl, desc, err = registryremotecache.ResolveCacheImporterFunc(sm, cs, hosts)(ctx, g, attrs)
        case "gha":
            impl, desc, err = gha.ResolveCacheImporterFunc()(ctx, g, attrs)
        case "s3":
            impl, desc, err = s3.ResolveCacheImporterFunc()(ctx, g, attrs)
        case experimentalDaggerS3CacheType:
            impl = s3Manager
        case "azblob":
            impl, desc, err = azblob.ResolveCacheImporterFunc()(ctx, g, attrs)
        default:
            bklog.G(ctx).Debugf("unsupported cache type %s, defaulting to noop", cacheType)
            impl = &noopImporter{}
        }
        if err != nil {
            return nil, ocispecs.Descriptor{}, err
        }
        return impl, desc, nil
    }
}

func StartCacheMountSynchronization(ctx context.Context, daggerClient *dagger.Client) (func(ctx context.Context) error, error) {
    stop := func(ctx context.Context) error { return nil }
    cacheType, attrs, err := cacheConfigFromEnv()
    if err != nil {
        return stop, err
    }
    switch cacheType {
    case "experimental_dagger_s3":
        stop, err = startS3CacheMountSync(ctx, attrs, daggerClient)
    default:
        bklog.G(ctx).Debugf("unsupported cache type %s, defaulting to no cache mount synchronization", cacheType)
    }
    return stop, err
}

func cacheConfigFromEnv() (string, map[string]string, error) {
    envVal, ok := os.LookupEnv(engine.CacheConfigEnvName)
    if !ok {
        return "", nil, nil
    }
    kvs := strings.Split(envVal, ",")
    if len(kvs) == 0 {
        return "", nil, nil
    }
    attrs := make(map[string]string)
    for _, kv := range kvs {
        parts := strings.SplitN(kv, "=", 2)
        if len(parts) != 2 {
            return "", nil, errors.Errorf("invalid form for cache config %q", kv)
        }
        attrs[parts[0]] = parts[1]
    }
    typeVal, ok := attrs["type"]
    if !ok {
        return "", nil, errors.Errorf("missing type in cache config: %q", envVal)
    }
    delete(attrs, "type")
    return typeVal, attrs, nil
}
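
// cacheConfigFromEnv expects a single comma-separated key=value list with a
// mandatory "type" key in the env var named by engine.CacheConfigEnvName.
// An illustrative value (attribute names other than "type" depend on the
// chosen backend and are assumptions here):
//
//     type=registry,ref=registry.example.com/my/cache:latest,mode=max
//
// which parses to cacheType "registry" and attrs {"ref": "...", "mode": "max"}.
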

updated_file | engine/remotecache/s3.go
chunk_content |
package remotecache

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "os"
    "strings"
    "sync"
    "time"

    awsConfig "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/smithy-go"
    "github.com/containerd/containerd/content"
    "github.com/moby/buildkit/cache/remotecache"
    v1 "github.com/moby/buildkit/cache/remotecache/v1"
    "github.com/moby/buildkit/solver"
    "github.com/moby/buildkit/util/bklog"
    "github.com/moby/buildkit/util/compression"
    "github.com/moby/buildkit/worker"
    "github.com/opencontainers/go-digest"
    ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/pkg/errors"
)

const (
    blobsSubprefix                = "blobs/"
    manifestsSubprefix            = "manifests/"
    cacheMountsSubprefix          = "cacheMounts/"
    experimentalDaggerS3CacheType = "experimental_dagger_s3"
)

type settings map[string]string

func (s settings) bucket() string {
    b := s["bucket"]
    if b == "" {
        b = os.Getenv("AWS_BUCKET")
    }
    return b
}

func (s settings) region() string {
    r := s["region"]
    if r == "" {
        r = os.Getenv("AWS_REGION")
    }
    return r
}

func (s settings) prefix() string {
    return s["prefix"]
}

func (s settings) name() string {
    return s["name"]
}

func (s settings) endpointURL() string {
    return s["endpoint_url"]
}

func (s settings) usePathStyle() bool {
    return s["use_path_style"] == "true"
}

func (s settings) accessKey() string {
    return s["access_key_id"]
}

func (s settings) secretKey() string {
    return s["secret_access_key"]
}

func (s settings) sessionToken() string {
    return s["session_token"]
}

func (s settings) serverImplementation() string {
    v := s["server_implementation"]
    if v == "" {
        return "AWS"
    }
    return v
}

func (s settings) synchronizedCacheMounts() []string {
    split := strings.Split(s["synchronized_cache_mounts"], ";")
    if len(split) == 1 && split[0] == "" {
        return nil
    }
    return split
}
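
// Combining the accessors above, an experimental_dagger_s3 configuration
// could look like this (illustrative values only; bucket and region fall
// back to AWS_BUCKET / AWS_REGION when unset):
//
//     type=experimental_dagger_s3,bucket=my-cache,region=us-east-1,prefix=ci/,name=engine-a,synchronized_cache_mounts=gomod;gobuild
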
type s3CacheManager struct {
    mu              sync.Mutex
    config          v1.CacheConfig
    descProviders   v1.DescriptorProvider
    exportRequested chan struct{}
    settings          settings
    s3Client          *s3.Client
    s3UploadManager   *manager.Uploader
    s3DownloadManager *manager.Downloader
}

func newS3CacheManager(ctx context.Context, attrs map[string]string, doneCh chan<- struct{}) (*s3CacheManager, error) {
    m := &s3CacheManager{
        descProviders:   v1.DescriptorProvider{},
        exportRequested: make(chan struct{}, 1),
        settings:        settings(attrs),
    }
    cfg, err := awsConfig.LoadDefaultConfig(ctx, awsConfig.WithRegion(m.settings.region()))
    if err != nil {
        return nil, errors.Errorf("Unable to load AWS SDK config, %v", err)
    }
    m.s3Client = s3.NewFromConfig(cfg, func(options *s3.Options) {
        if m.settings.accessKey() != "" && m.settings.secretKey() != "" {
            options.Credentials = credentials.NewStaticCredentialsProvider(m.settings.accessKey(), m.settings.secretKey(), m.settings.sessionToken())
        }
        if m.settings.endpointURL() != "" {
            options.UsePathStyle = m.settings.usePathStyle()
            options.EndpointResolver = s3.EndpointResolverFromURL(m.settings.endpointURL())
        }
    })
    m.s3UploadManager = manager.NewUploader(m.s3Client)
    m.s3DownloadManager = manager.NewDownloader(m.s3Client)
    go func() {
        defer close(doneCh)
        var shutdown bool
        for {
            select {
            case <-m.exportRequested:
            case <-ctx.Done():
                shutdown = true
            }
            exportCtx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
            defer cancel()
            if err := m.export(exportCtx); err != nil {
                bklog.G(ctx).WithError(err).Error("failed to export s3 cache")
            }
            if shutdown {
                return
            }
        }
    }()
    if err := m.importFromPool(ctx); err != nil {
        return nil, err
    }
    go func() {
        for {
            select {
            case <-time.After(5 * time.Minute):
            case <-ctx.Done():
                return
            }
            if err := m.importFromPool(ctx); err != nil {
                bklog.G(ctx).WithError(err).Error("failed to import s3 cache")
            }
        }
    }()
    return m, nil
}
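
// newS3CacheManager thus owns two background goroutines: one drains
// exportRequested (and runs one final export when ctx is cancelled before
// closing doneCh), and one re-imports the shared pool every five minutes so
// manifests exported by other engines become visible.
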
func (m *s3CacheManager) mergeChains(ctx context.Context, chains *v1.CacheChains) error {
    m.mu.Lock()
    defer m.mu.Unlock()
    if err := v1.ParseConfig(m.config, m.descProviders, chains); err != nil {
        return err
    }
    newConfig, newProviders, err := chains.Marshal(ctx)
    if err != nil {
        return err
    }
    m.config = *newConfig
    m.descProviders = newProviders
    return nil
}

func (m *s3CacheManager) requestExport() {
    select {
    case m.exportRequested <- struct{}{}:
    default:
    }
}
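
// exportRequested is a buffered channel of capacity one, so this
// non-blocking send coalesces bursts of requests: while one export is
// already pending, further calls are no-ops instead of queueing duplicates.
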
func (m *s3CacheManager) copyConfig() (v1.CacheConfig, v1.DescriptorProvider, error) {
    m.mu.Lock()
    defer m.mu.Unlock()
    data, err := json.Marshal(m.config)
    if err != nil {
        return v1.CacheConfig{}, nil, err
    }
    var config v1.CacheConfig
    if err := json.Unmarshal(data, &config); err != nil {
        return v1.CacheConfig{}, nil, err
    }
    descriptors := v1.DescriptorProvider{}
    for k, v := range m.descProviders {
        descriptors[k] = v
    }
    return config, descriptors, nil
}

func (m *s3CacheManager) export(ctx context.Context) error {
    cacheConfig, descs, err := m.copyConfig()
    if err != nil {
        return err
    }
    for i, l := range cacheConfig.Layers {
        dgstPair, ok := descs[l.Blob]
        if !ok {
            return errors.Errorf("missing blob %s", l.Blob)
        }
        if dgstPair.Descriptor.Annotations == nil {
            return errors.Errorf("invalid descriptor without annotations")
        }
        v, ok := dgstPair.Descriptor.Annotations["containerd.io/uncompressed"]
        if !ok {
            return errors.Errorf("invalid descriptor without uncompressed annotation")
        }
        diffID, err := digest.Parse(v)
        if err != nil {
            return errors.Wrapf(err, "failed to parse uncompressed annotation")
        }
        key := m.blobKey(dgstPair.Descriptor.Digest)
        exists, err := m.s3KeyExists(ctx, key)
        if err != nil {
            return errors.Wrapf(err, "failed to check file presence in cache")
        }
        if !exists {
            bklog.G(ctx).Debugf("s3 exporter: uploading blob %s", l.Blob)
            blobReader, err := dgstPair.Provider.ReaderAt(ctx, dgstPair.Descriptor)
            if err != nil {
                return err
            }
            if err := m.uploadToS3(ctx, key, content.NewReader(blobReader)); err != nil {
                return errors.Wrap(err, "error writing layer blob")
            }
        }
        la := &v1.LayerAnnotations{
            DiffID:    diffID,
            Size:      dgstPair.Descriptor.Size,
            MediaType: dgstPair.Descriptor.MediaType,
        }
        if v, ok := dgstPair.Descriptor.Annotations["buildkit/createdat"]; ok {
            var t time.Time
            if err := (&t).UnmarshalText([]byte(v)); err != nil {
                return err
            }
            la.CreatedAt = t.UTC()
        }
        cacheConfig.Layers[i].Annotations = la
    }
    configBytes, err := json.Marshal(cacheConfig)
    if err != nil {
        return err
    }
    if err := m.uploadToS3(ctx, m.manifestKey(), bytes.NewReader(configBytes)); err != nil {
        return errors.Wrapf(err, "error writing manifest: %s", m.manifestKey())
    }
    return nil
}
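
// export streams every layer blob out of the local content store while it
// uploads. If buildkit's garbage collector prunes a blob between the
// client's export request and the ReaderAt call above, the export fails --
// the race this issue describes. The direction sketched in the issue body is
// to hold a containerd lease over that window so GC cannot collect the
// referenced content. A minimal sketch, assuming the manager is handed a
// leases.Manager from main (the helper name and the wiring are assumptions,
// not the merged fix):
//
//     func withLease(ctx context.Context, lm leases.Manager, fn func(context.Context) error) error {
//         l, err := lm.Create(ctx, leases.WithRandomID(), leases.WithExpiration(time.Hour))
//         if err != nil {
//             return err
//         }
//         defer lm.Delete(context.Background(), l)
//         return fn(leases.WithLease(ctx, l.ID))
//     }
//
// Content resolved through the returned context stays pinned until the
// lease is deleted, keeping the blobs readable for the whole upload.
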
func (m *s3CacheManager) importFromPool(ctx context.Context) error {
    var manifestKeys []string
    listObjectsPages := s3.NewListObjectsV2Paginator(m.s3Client, &s3.ListObjectsV2Input{
        Bucket: aws.String(m.settings.bucket()),
        Prefix: aws.String(m.manifestsPrefix()),
    })
    for listObjectsPages.HasMorePages() {
        listResp, err := listObjectsPages.NextPage(ctx)
        if err != nil {
            if !isS3NotFound(err) {
                return errors.Wrapf(err, "error listing s3 objects")
            }
        }
        for _, obj := range listResp.Contents {
            manifestKeys = append(manifestKeys, *obj.Key)
        }
    }
    configs := make([]v1.CacheConfig, 0, len(manifestKeys))
    descProvider := v1.DescriptorProvider{}
    for _, manifestKey := range manifestKeys {
        configBuffer := manager.NewWriteAtBuffer([]byte{})
        if err := m.downloadFromS3(ctx, manifestKey, configBuffer); err != nil {
return errors.Wrapf(err, "error reading manifest: %s", manifestKey)
}
var config v1.CacheConfig
if err := json.Unmarshal(configBuffer.Bytes(), &config); err != nil {
return err
}
configs = append(configs, config)
for _, l := range config.Layers {
providerPair, err := m.descriptorProviderPair(l)
if err != nil {
return err
}
descProvider[l.Blob] = *providerPair
}
}
for _, config := range configs {
chain := v1.NewCacheChains()
if err := v1.ParseConfig(config, descProvider, chain); err != nil {
return err
}
if err := m.mergeChains(ctx, chain); err != nil {
return err
}
}
return nil
}
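// Resolve materializes a solver.CacheManager view over the currently merged
// cache configuration so buildkit can query it for cache hits.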
func (m *s3CacheManager) Resolve(ctx context.Context, _ ocispecs.Descriptor, id string, w worker.Worker) (solver.CacheManager, error) {
config, providers, err := m.copyConfig()
if err != nil {
return nil, err
}
chains := v1.NewCacheChains()
if err := v1.ParseConfig(config, providers, chains); err != nil {
return nil, err
}
keyStore, resultStore, err := v1.NewCacheKeyStorage(chains, w)
if err != nil {
return nil, err
}
return solver.NewCacheManager(ctx, id, keyStore, resultStore), nil
}
func (m *s3CacheManager) blobKey(dgst digest.Digest) string {
return m.settings.prefix() + blobsSubprefix + dgst.String()
}
func (m *s3CacheManager) manifestsPrefix() string {
return m.settings.prefix() + manifestsSubprefix
}
func (m *s3CacheManager) manifestKey() string {
return m.manifestsPrefix() + m.settings.name()
}
func (m *s3CacheManager) s3KeyExists(ctx context.Context, key string) (bool, error) {
_, err := m.s3Client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(m.settings.bucket()),
Key: aws.String(key),
})
if err != nil {
if isS3NotFound(err) {
return false, nil
}
return false, err
}
return true, nil
}
func (m *s3CacheManager) uploadToS3(ctx context.Context, key string, contents io.Reader) error {
_, err := m.s3UploadManager.Upload(ctx, &s3.PutObjectInput{
Bucket: aws.String(m.settings.bucket()),
Key: aws.String(key),
Body: contents,
})
return err
}
func (m *s3CacheManager) downloadFromS3(ctx context.Context, key string, dest io.WriterAt) error {
_, err := m.s3DownloadManager.Download(ctx, dest, &s3.GetObjectInput{
Bucket: aws.String(m.settings.bucket()),
Key: aws.String(key),
})
return err
}
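// descriptorProviderPair rebuilds an OCI descriptor for a cached layer from
// its recorded annotations (media type, size, diffID, createdAt) and pairs it
// with this manager as the content provider for the blob.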
func (m *s3CacheManager) descriptorProviderPair(layer v1.CacheLayer) (*v1.DescriptorProviderPair, error) {
if layer.Annotations == nil {
return nil, errors.Errorf("missing annotations for layer %s", layer.Blob)
}
annotations := map[string]string{}
if layer.Annotations.DiffID == "" {
return nil, errors.Errorf("missing diffID for layer %s", layer.Blob)
}
annotations["containerd.io/uncompressed"] = layer.Annotations.DiffID.String()
if !layer.Annotations.CreatedAt.IsZero() {
createdAt, err := layer.Annotations.CreatedAt.MarshalText()
if err != nil {
return nil, err
}
annotations["buildkit/createdat"] = string(createdAt)
}
return &v1.DescriptorProviderPair{
Provider: m,
Descriptor: ocispecs.Descriptor{
MediaType: layer.Annotations.MediaType,
Digest: layer.Blob,
Size: layer.Annotations.Size,
Annotations: annotations,
},
}, nil
}
func (m *s3CacheManager) ReaderAt(ctx context.Context, desc ocispecs.Descriptor) (content.ReaderAt, error) {
return &s3ReaderAt{
ctx: ctx,
client: m.s3Client,
bucket: m.settings.bucket(),
key: m.blobKey(desc.Digest),
size: desc.Size,
}, nil
}
type s3CacheExporter struct {
*v1.CacheChains
manager *s3CacheManager
}
var _ remotecache.Exporter = &s3CacheExporter{}
func newS3CacheExporter(manager *s3CacheManager) *s3CacheExporter {
return &s3CacheExporter{
CacheChains: v1.NewCacheChains(),
manager: manager,
}
}
func (e *s3CacheExporter) Name() string {
return "dagger-s3-exporter"
}
func (e *s3CacheExporter) Config() remotecache.Config {
return remotecache.Config{
Compression: compression.New(compression.Zstd),
}
}
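// Finalize merges the cache chains recorded during the solve into the shared
// manager state and signals the manager to export the merged state to S3.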
func (e *s3CacheExporter) Finalize(ctx context.Context) (map[string]string, error) {
err := e.manager.mergeChains(ctx, e.CacheChains)
if err != nil {
return nil, err
}
e.manager.requestExport()
return nil, nil
}
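// isS3NotFound reports whether err is an S3 "NoSuchKey" or "NotFound" API error.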
func isS3NotFound(err error) bool {
var errapi smithy.APIError
return errors.As(err, &errapi) && (errapi.ErrorCode() == "NoSuchKey" || errapi.ErrorCode() == "NotFound")
}
type s3ReaderAt struct {
ctx context.Context
client *s3.Client
bucket string
key string
size int64
body io.ReadCloser
offset int64
}
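// ReadAt implements content.ReaderAt over a single S3 object. The response
// body is kept open between calls so sequential reads reuse one ranged GET;
// a non-sequential offset closes the old body and issues a fresh request.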
func (r *s3ReaderAt) ReadAt(p []byte, off int64) (int, error) {
if r.body == nil || off != r.offset {
resp, err := r.client.GetObject(r.ctx, &s3.GetObjectInput{
Bucket: aws.String(r.bucket),
Key: aws.String(r.key),
Range: aws.String(fmt.Sprintf("bytes=%d-", off)),
})
if err != nil {
return 0, err
}
if r.body != nil {
bklog.G(r.ctx).Debugf("non-sequential read in s3ReaderAt for key %s, %d != %d", r.key, off, r.offset)
r.body.Close()
}
r.body = resp.Body
r.offset = off
}
n, err := r.body.Read(p)
r.offset += int64(n)
return n, err
}
func (r *s3ReaderAt) Size() int64 {
return r.size
}
func (r *s3ReaderAt) Close() error {
if r.body != nil {
return r.body.Close()
}
return nil
}
status: closed
repo_name: dagger/dagger
repo_url: https://github.com/dagger/dagger
issue_id: 4801
title: Using a secret from `setSecret` with `withRegistryAuth` fails
body:
@vikram-dagger tried to use the new version of secrets with registry authentication and it failed. Here is a repro:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()
	c, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	sec := c.SetSecret("my-secret-id", "yolo")
	stdout, err := c.Container().WithRegistryAuth("localhost:8888", "YOLO", sec).Publish(ctx, "localhost:8888/myorg/myapp")
	if err != nil {
		panic(err)
	}
	fmt.Println(stdout)
}
```

Result:

```console
panic: input:1: container.withRegistryAuth plaintext: empty secret?
Please visit https://dagger.io/help#go for troubleshooting guidance.
goroutine 1 [running]:
main.main()
	/home/dolanor/src/daggerr/vikram-setsecret-empty/main.go:21 +0x1b1
exit status 2
```

issue_url: https://github.com/dagger/dagger/issues/4801
pull_url: https://github.com/dagger/dagger/pull/4809
before_fix_sha: 0fa00a1f4905be1eb6fb017f3c87e0a09112c586
after_fix_sha: aaba659eccbc858a0f330c5178cb7ea20f997c94
report_datetime: 2023-03-21T15:16:35Z
language: go
commit_datetime: 2023-03-22T08:14:34Z
updated_file: core/integration/container_test.go
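The essence of the regression being guarded against is that a client-set secret must keep its plaintext end to end. Below is a minimal sketch of such a check in the style of the integration tests that follow, assuming the Go SDK's Secret.Plaintext accessor; the test name is illustrative.

```go
func TestSetSecretPlaintextRoundTrip(t *testing.T) {
	ctx := context.Background()
	c, err := dagger.Connect(ctx)
	require.NoError(t, err)
	defer c.Close()

	// A secret created with SetSecret must resolve back to its plaintext
	// instead of the "empty secret" reported in the issue.
	sec := c.SetSecret("my-secret-id", "yolo")
	plaintext, err := sec.Plaintext(ctx)
	require.NoError(t, err)
	require.Equal(t, "yolo", plaintext)
}
```

The row's chunk from core/integration/container_test.go follows.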
package core
import (
"context"
_ "embed"
"encoding/base64"
"errors"
"fmt"
"io"
"net"
"os"
"path/filepath"
"strings"
"testing"
"dagger.io/dagger"
"github.com/dagger/dagger/core"
"github.com/dagger/dagger/core/schema"
"github.com/dagger/dagger/internal/testutil"
"github.com/moby/buildkit/identity"
"github.com/stretchr/testify/require"
)
func TestContainerScratch(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
ID string
Fs struct {
Entries []string
}
}
}{}
err := testutil.Query(
`{
container {
id
fs {
entries
}
}
}`, &res, nil)
require.NoError(t, err)
require.Empty(t, res.Container.Fs.Entries)
}
func TestContainerFrom(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
Fs struct {
File struct {
Contents string
}
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
fs {
file(path: "/etc/alpine-release") {
contents
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, res.Container.From.Fs.File.Contents, "3.16.2\n")
}
func TestContainerBuild(t *testing.T) {
ctx := context.Background()
c, err := dagger.Connect(ctx)
require.NoError(t, err)
defer c.Close()
contextDir := c.Directory().
WithNewFile("main.go",
`package main
import "fmt"
import "os"
func main() {
for _, env := range os.Environ() {
fmt.Println(env)
}
}`)
t.Run("default Dockerfile location", func(t *testing.T) {
src := contextDir.
WithNewFile("Dockerfile",
`FROM golang:1.18.2-alpine
WORKDIR /src
COPY main.go .
RUN go mod init hello
RUN go build -o /usr/bin/goenv main.go
ENV FOO=bar
CMD goenv
`)
env, err := c.Container().Build(src).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, env, "FOO=bar\n")
})
t.Run("custom Dockerfile location", func(t *testing.T) {
src := contextDir.
WithNewFile("subdir/Dockerfile.whee",
`FROM golang:1.18.2-alpine
WORKDIR /src
COPY main.go .
RUN go mod init hello
RUN go build -o /usr/bin/goenv main.go
ENV FOO=bar
CMD goenv
`)
env, err := c.Container().Build(src, dagger.ContainerBuildOpts{
Dockerfile: "subdir/Dockerfile.whee",
}).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, env, "FOO=bar\n")
})
t.Run("subdirectory with default Dockerfile location", func(t *testing.T) {
src := contextDir.
WithNewFile("Dockerfile",
`FROM golang:1.18.2-alpine
WORKDIR /src
COPY main.go .
RUN go mod init hello
RUN go build -o /usr/bin/goenv main.go
ENV FOO=bar
CMD goenv
`)
sub := c.Directory().WithDirectory("subcontext", src).Directory("subcontext")
env, err := c.Container().Build(sub).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, env, "FOO=bar\n")
})
t.Run("subdirectory with custom Dockerfile location", func(t *testing.T) {
src := contextDir.
WithNewFile("subdir/Dockerfile.whee",
`FROM golang:1.18.2-alpine
WORKDIR /src
COPY main.go .
RUN go mod init hello
RUN go build -o /usr/bin/goenv main.go
ENV FOO=bar
CMD goenv
`)
sub := c.Directory().WithDirectory("subcontext", src).Directory("subcontext")
env, err := c.Container().Build(sub, dagger.ContainerBuildOpts{
Dockerfile: "subdir/Dockerfile.whee",
}).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, env, "FOO=bar\n")
})
t.Run("with build args", func(t *testing.T) {
src := contextDir.
WithNewFile("Dockerfile",
`FROM golang:1.18.2-alpine
ARG FOOARG=bar
WORKDIR /src
COPY main.go .
RUN go mod init hello
RUN go build -o /usr/bin/goenv main.go
ENV FOO=$FOOARG
CMD goenv
`)
env, err := c.Container().Build(src).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, env, "FOO=bar\n")
env, err = c.Container().Build(src, dagger.ContainerBuildOpts{BuildArgs: []dagger.BuildArg{{Name: "FOOARG", Value: "barbar"}}}).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, env, "FOO=barbar\n")
})
t.Run("with target", func(t *testing.T) {
src := contextDir.
WithNewFile("Dockerfile",
`FROM golang:1.18.2-alpine AS base
CMD echo "base"
FROM base AS stage1
CMD echo "stage1"
FROM base AS stage2
CMD echo "stage2"
`)
output, err := c.Container().Build(src).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, output, "stage2\n")
output, err = c.Container().Build(src, dagger.ContainerBuildOpts{Target: "stage1"}).WithExec([]string{}).Stdout(ctx)
require.NoError(t, err)
require.Contains(t, output, "stage1\n")
require.NotContains(t, output, "stage2\n")
})
}
func TestContainerWithRootFS(t *testing.T) {
t.Parallel()
ctx := context.Background()
c, err := dagger.Connect(ctx)
require.NoError(t, err)
defer c.Close()
alpine316 := c.Container().From("alpine:3.16.2")
alpine316ReleaseStr, err := alpine316.File("/etc/alpine-release").Contents(ctx)
require.NoError(t, err)
alpine316ReleaseStr = strings.TrimSpace(alpine316ReleaseStr)
dir := alpine316.Rootfs()
exitCode, err := c.Container().WithEnvVariable("ALPINE_RELEASE", alpine316ReleaseStr).WithRootfs(dir).WithExec([]string{
"/bin/sh",
"-c",
"test -f /etc/alpine-release && test \"$(head -n 1 /etc/alpine-release)\" = \"$ALPINE_RELEASE\"",
}).ExitCode(ctx)
require.NoError(t, err)
require.Equal(t, exitCode, 0)
alpine315 := c.Container().From("alpine:3.15.6")
varVal := "testing123"
alpine315WithVar := alpine315.WithEnvVariable("DAGGER_TEST", varVal)
varValResp, err := alpine315WithVar.EnvVariable(ctx, "DAGGER_TEST")
require.NoError(t, err)
require.Equal(t, varVal, varValResp)
alpine315ReplacedFS := alpine315WithVar.WithRootfs(dir)
varValResp, err = alpine315ReplacedFS.EnvVariable(ctx, "DAGGER_TEST")
require.NoError(t, err)
require.Equal(t, varVal, varValResp)
releaseStr, err := alpine315ReplacedFS.File("/etc/alpine-release").Contents(ctx)
require.NoError(t, err)
require.Equal(t, "3.16.2\n", releaseStr)
}
func TestContainerExecExitCode(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
WithExec struct {
ExitCode *int
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
withExec(args: ["true"]) {
exitCode
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.NotNil(t, res.Container.From.WithExec.ExitCode)
require.Equal(t, 0, *res.Container.From.WithExec.ExitCode)
/*
It's not currently possible to get a nonzero exit code back because
Buildkit raises an error.
We could perhaps have the shim mask the exit status and always exit 0, but
we would have to be careful not to let that happen in a big chained LLB
since it would prevent short-circuiting.
We could only do it when the user requests the exitCode, but then we would
actually need to run the command _again_ since we'd need some way to tell
the shim what to do.
Hmm...
err = testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
withExec(args: ["false"]) {
exitCode
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, res.Container.From.WithExec.ExitCode, 1)
*/
}
func TestContainerExecStdoutStderr(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
WithExec struct {
Stdout string
Stderr string
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
withExec(args: ["sh", "-c", "echo hello; echo goodbye >/dev/stderr"]) {
stdout
stderr
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, res.Container.From.WithExec.Stdout, "hello\n")
require.Equal(t, res.Container.From.WithExec.Stderr, "goodbye\n")
}
func TestContainerExecStdin(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
WithExec struct {
Stdout string
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
withExec(args: ["cat"], stdin: "hello") {
stdout
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, res.Container.From.WithExec.Stdout, "hello")
}
func TestContainerExecRedirectStdoutStderr(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
WithExec struct {
Out, Err struct {
Contents string
}
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
withExec(
args: ["sh", "-c", "echo hello; echo goodbye >/dev/stderr"],
redirectStdout: "out",
redirectStderr: "err"
) {
out: file(path: "out") {
contents
}
err: file(path: "err") {
contents
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, res.Container.From.WithExec.Out.Contents, "hello\n")
require.Equal(t, res.Container.From.WithExec.Err.Contents, "goodbye\n")
c, ctx := connect(t)
defer c.Close()
execWithMount := c.Container().From("alpine:3.16.2").
WithMountedDirectory("/mnt", c.Directory()).
WithExec([]string{"sh", "-c", "echo hello; echo goodbye >/dev/stderr"}, dagger.ContainerWithExecOpts{
RedirectStdout: "/mnt/out",
RedirectStderr: "/mnt/err",
})
stdout, err := execWithMount.File("/mnt/out").Contents(ctx)
require.NoError(t, err)
require.Equal(t, "hello\n", stdout)
stderr, err := execWithMount.File("/mnt/err").Contents(ctx)
require.NoError(t, err)
require.Equal(t, "goodbye\n", stderr)
_, err = execWithMount.Stdout(ctx)
require.Error(t, err)
require.Contains(t, err.Error(), "stdout: no such file or directory")
_, err = execWithMount.Stderr(ctx)
require.Error(t, err)
require.Contains(t, err.Error(), "stderr: no such file or directory")
}
func TestContainerExecWithWorkdir(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
WithWorkdir struct {
WithExec struct {
Stdout string
}
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
withWorkdir(path: "/usr") {
withExec(args: ["pwd"]) {
stdout
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, res.Container.From.WithWorkdir.WithExec.Stdout, "/usr\n")
}
func TestContainerExecWithUser(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
User string
WithUser struct {
User string
WithExec struct {
Stdout string
}
}
}
}
}{}
t.Run("user name", func(t *testing.T) {
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
user
withUser(name: "daemon") {
user
withExec(args: ["whoami"]) {
stdout
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, "", res.Container.From.User)
require.Equal(t, "daemon", res.Container.From.WithUser.User)
require.Equal(t, "daemon\n", res.Container.From.WithUser.WithExec.Stdout)
})
t.Run("user and group name", func(t *testing.T) {
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
user
withUser(name: "daemon:floppy") {
user
withExec(args: ["sh", "-c", "whoami; groups"]) {
stdout
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, "", res.Container.From.User)
require.Equal(t, "daemon:floppy", res.Container.From.WithUser.User)
require.Equal(t, "daemon\nfloppy\n", res.Container.From.WithUser.WithExec.Stdout)
})
t.Run("user ID", func(t *testing.T) {
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
user
withUser(name: "2") {
user
withExec(args: ["whoami"]) {
stdout
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, "", res.Container.From.User)
require.Equal(t, "2", res.Container.From.WithUser.User)
require.Equal(t, "daemon\n", res.Container.From.WithUser.WithExec.Stdout)
})
t.Run("user and group ID", func(t *testing.T) {
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
user
withUser(name: "2:11") {
user
withExec(args: ["sh", "-c", "whoami; groups"]) {
stdout
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Equal(t, "", res.Container.From.User)
require.Equal(t, "2:11", res.Container.From.WithUser.User)
require.Equal(t, "daemon\nfloppy\n", res.Container.From.WithUser.WithExec.Stdout)
})
}
func TestContainerExecWithEntrypoint(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
Entrypoint []string
WithEntrypoint struct {
Entrypoint []string
WithExec struct {
Stdout string
}
WithEntrypoint struct {
Entrypoint []string
}
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
entrypoint
withEntrypoint(args: ["sh", "-c"]) {
entrypoint
withExec(args: ["echo $HOME"]) {
stdout
}
withEntrypoint(args: []) {
entrypoint
}
}
}
}
}`, &res, nil)
require.NoError(t, err)
require.Empty(t, res.Container.From.Entrypoint)
require.Equal(t, []string{"sh", "-c"}, res.Container.From.WithEntrypoint.Entrypoint)
require.Equal(t, "/root\n", res.Container.From.WithEntrypoint.WithExec.Stdout)
require.Empty(t, res.Container.From.WithEntrypoint.WithEntrypoint.Entrypoint)
}
func TestContainerWithDefaultArgs(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {
Entrypoint []string
DefaultArgs []string
WithExec struct {
Stdout string
}
WithDefaultArgs struct {
Entrypoint []string
DefaultArgs []string
}
WithEntrypoint struct {
Entrypoint []string
DefaultArgs []string
WithExec struct {
Stdout string
}
WithDefaultArgs struct {
Entrypoint []string
DefaultArgs []string
WithExec struct {
Stdout string
}
}
}
}
}
}{}
err := testutil.Query(
`{
container {
from(address: "alpine:3.16.2") {
entrypoint
defaultArgs
withDefaultArgs {
entrypoint
defaultArgs
}
withEntrypoint(args: ["sh", "-c"]) {
entrypoint
defaultArgs
withExec(args: ["echo $HOME"]) {
stdout
}
withDefaultArgs(args: ["id"]) {
entrypoint
defaultArgs
withExec(args: []) {
stdout
}
}
}
}
}
}`, &res, nil)
t.Run("default alpine (no entrypoint)", func(t *testing.T) {
require.NoError(t, err)
require.Empty(t, res.Container.From.Entrypoint)
require.Equal(t, []string{"/bin/sh"}, res.Container.From.DefaultArgs)
})
t.Run("with nil default args", func(t *testing.T) {
require.Empty(t, res.Container.From.WithDefaultArgs.Entrypoint)
require.Empty(t, res.Container.From.WithDefaultArgs.DefaultArgs)
})
t.Run("with entrypoint set", func(t *testing.T) {
require.Equal(t, []string{"sh", "-c"}, res.Container.From.WithEntrypoint.Entrypoint)
require.Equal(t, []string{"/bin/sh"}, res.Container.From.WithEntrypoint.DefaultArgs)
})
t.Run("with exec args", func(t *testing.T) {
require.Equal(t, "/root\n", res.Container.From.WithEntrypoint.WithExec.Stdout)
})
t.Run("with default args set", func(t *testing.T) {
require.Equal(t, []string{"sh", "-c"}, res.Container.From.WithEntrypoint.WithDefaultArgs.Entrypoint)
require.Equal(t, []string{"id"}, res.Container.From.WithEntrypoint.WithDefaultArgs.DefaultArgs)
require.Equal(t, "uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)\n", res.Container.From.WithEntrypoint.WithDefaultArgs.WithExec.Stdout)
})
}
func TestContainerExecWithEnvVariable(t *testing.T) {
t.Parallel()
res := struct {
Container struct {
From struct {