diff --git a/spec/Appendix E -- Examples.md b/spec/Appendix E -- Examples.md
new file mode 100644
index 000000000..a2a1b4d2d
--- /dev/null
+++ b/spec/Appendix E -- Examples.md
@@ -0,0 +1,219 @@
+# E. Appendix: Examples
+
+## Incremental Delivery Examples
+
+### Example 1 - A query containing both defer and stream
+
+```graphql example
+query {
+  person(id: "cGVvcGxlOjE=") {
+    ...HomeWorldFragment @defer(label: "homeWorldDefer")
+    name
+    films @stream(initialCount: 1, label: "filmsStream") {
+      title
+    }
+  }
+}
+fragment HomeWorldFragment on Person {
+  homeWorld {
+    name
+  }
+}
+```
+
+The response to this request will be an _incremental stream_ consisting of an
+_initial incremental stream result_ followed by one or more _incremental stream
+update results_.
+
+The _initial incremental stream result_ has:
+
+- a {"data"} entry containing the results of the GraphQL operation except for
+  the `@defer` and `@stream` selections;
+- a {"pending"} entry containing two _incremental pending notices_, one for the
+  `@defer` selection and one for the `@stream` selection, indicating that these
+  results will be delivered in a later _incremental stream update result_;
+- a {"hasNext"} entry with the value {true}, indicating that the response is not
+  yet complete.
+
+If an error were to occur, the result would also contain an {"errors"} entry;
+none occurs in this example.
+
+```json example
+{
+  "data": {
+    "person": {
+      "name": "Luke Skywalker",
+      "films": [{ "title": "A New Hope" }]
+    }
+  },
+  "pending": [
+    { "id": "0", "path": ["person"], "label": "homeWorldDefer" },
+    { "id": "1", "path": ["person", "films"], "label": "filmsStream" }
+  ],
+  "hasNext": true
+}
+```
+
+Depending on the behavior of the backend and the time at which the deferred and
+streamed resources resolve, the stream may produce results in different orders.
+In this example, our first _incremental stream update result_ contains the
+deferred data and the first streamed list item.
There is one _incremental
+completion notice_, indicating that the deferred data has been completely
+delivered.
+
+```json example
+{
+  "incremental": [
+    {
+      "id": "0",
+      "data": { "homeWorld": { "name": "Tatooine" } }
+    },
+    {
+      "id": "1",
+      "items": [{ "title": "The Empire Strikes Back" }]
+    }
+  ],
+  "completed": [{ "id": "0" }],
+  "hasNext": true
+}
+```
+
+The second _incremental stream update result_ contains the final stream results.
+In this example, the underlying iterator does not close synchronously, so
+{"hasNext"} is set to {true}. If this iterator did close synchronously,
+{"hasNext"} could be set to {false} and make this the final incremental stream
+update result.
+
+```json example
+{
+  "incremental": [
+    {
+      "id": "1",
+      "items": [{ "title": "Return of the Jedi" }]
+    }
+  ],
+  "hasNext": true
+}
+```
+
+When the underlying iterator of the `films` field closes, there is no more data
+to deliver, so the third and final _incremental stream update result_ sets
+{"hasNext"} to {false} to indicate the end of the _incremental stream_.
+
+```json example
+{
+  "hasNext": false
+}
+```
+
+### Example 2 - A query containing overlapping defers
+
+```graphql example
+query {
+  person(id: "cGVvcGxlOjE=") {
+    ...HomeWorldFragment @defer(label: "homeWorldDefer")
+    ...NameAndHomeWorldFragment @defer(label: "nameAndWorld")
+    firstName
+  }
+}
+fragment HomeWorldFragment on Person {
+  homeWorld {
+    name
+    terrain
+  }
+}
+
+fragment NameAndHomeWorldFragment on Person {
+  firstName
+  lastName
+  homeWorld {
+    name
+  }
+}
+```
+
+In this example, the response is an _incremental stream_ of the following
+results.
+
+The _initial incremental stream result_ contains the results of the `firstName`
+field. Even though it is also present in the `NameAndHomeWorldFragment`, it must
+be returned in the _initial incremental stream result_ because it is also
+defined outside of any fragments with the `@defer` directive.
Additionally, there are
+two _incremental pending notices_ indicating that results for both `@defer`s in
+the query will be delivered in later _incremental stream update results_.
+
+```json example
+{
+  "data": {
+    "person": {
+      "firstName": "Luke"
+    }
+  },
+  "pending": [
+    { "id": "0", "path": ["person"], "label": "homeWorldDefer" },
+    { "id": "1", "path": ["person"], "label": "nameAndWorld" }
+  ],
+  "hasNext": true
+}
+```
+
+In this example, the first _incremental stream update result_ contains the
+deferred data from `HomeWorldFragment`. There is one _incremental completion
+notice_, indicating that `HomeWorldFragment` has been completely delivered.
+Because the `homeWorld` field is present in two separate `@defer`s, it is
+separated into its own _incremental result_. In this example, this incremental
+result contains the id `"0"`, but since the `name` field was included in both
+`HomeWorldFragment` and `NameAndHomeWorldFragment`, an id of `"1"` would also be
+a valid response.
+
+The second _incremental result_ in this _incremental stream update result_
+contains the data for the `terrain` field. This _incremental result_ contains a
+{"subPath"} entry to indicate to clients that the _response position_ of this
+result can be determined by concatenating: the path from the _incremental
+pending notice_ for id `"0"`, and the value of this {"subPath"} entry.
+
+```json example
+{
+  "incremental": [
+    {
+      "id": "0",
+      "data": { "homeWorld": { "name": "Tatooine" } }
+    },
+    {
+      "id": "0",
+      "subPath": ["homeWorld"],
+      "data": { "terrain": "desert" }
+    }
+  ],
+  "completed": [{ "id": "0" }],
+  "hasNext": true
+}
+```
+
+The second _incremental stream update result_ contains the remaining data from
+the `NameAndHomeWorldFragment`. `lastName` is the only remaining field from this
+selection that has not been delivered in a previous result.
With this field now +delivered, clients are informed that the `NameAndHomeWorldFragment` has been +completed by the presence of the associated _incremental completion notice_. +Additionally, {"hasNext"} is set to {false} indicating the end of the +_incremental stream_. + +This example demonstrates that it is necessary for clients to process the entire +incremental stream, as both the initial data and previous incremental results +(with a potentially different value for {"id"}) may be required to complete a +deferred fragment. + +```json example +{ + "incremental": [ + { + "id": "1", + "data": { "lastName": "Skywalker" } + } + ], + "completed": [{ "id": "1" }], + "hasNext": false +} +``` diff --git a/spec/GraphQL.md b/spec/GraphQL.md index 7ac0717ac..0dbdfcdb0 100644 --- a/spec/GraphQL.md +++ b/spec/GraphQL.md @@ -60,4 +60,6 @@ working draft release can be found at # [Appendix: Specified Definitions](Appendix%20D%20--%20Specified%20Definitions.md) +# [Appendix: Examples](Appendix%20E%20--%20Examples.md) + # [Appendix: Licensing](../LICENSE.md) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 1be92a0ea..15f0e4ad5 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -812,8 +812,8 @@ And will yield the subset of each object type queried: When querying an Object, the resulting mapping of fields are conceptually ordered in the same order in which they were encountered during execution, excluding fragments for which the type does not apply and fields or fragments -that are skipped via `@skip` or `@include` directives. This ordering is -correctly produced when using the {CollectFields()} algorithm. +that are skipped via `@skip` or `@include` directives or postponed via `@defer`. +This ordering is correctly produced when using the {CollectFields()} algorithm. Response serialization formats capable of representing ordered maps should maintain this ordering. 
Serialization formats which can only represent unordered
@@ -2084,6 +2084,15 @@ GraphQL implementations that support the type system definition language must
 provide the `@deprecated` directive if representing deprecated portions of the
 schema.
 
+GraphQL implementations may provide the `@defer` and/or `@stream` directives. If
+either or both of these directives are provided, they must conform to the
+requirements defined in this specification.
+
+Note: The [Directives Are Defined](#sec-Directives-Are-Defined) validation rule
+ensures that GraphQL operations can only include directives available on the
+schema; thus operations including `@defer` or `@stream` directives can only be
+executed by a GraphQL service that supports them.
+
 GraphQL implementations that support the type system definition language should
 provide the `@specifiedBy` directive if representing custom scalar definitions.
 
@@ -2321,3 +2330,121 @@ input UserUniqueCondition @oneOf {
   organizationAndEmail: OrganizationAndEmailInput
 }
 ```
+
+### @defer
+
+```graphql
+directive @defer(
+  if: Boolean! = true
+  label: String
+) on FRAGMENT_SPREAD | INLINE_FRAGMENT
+```
+
+The `@defer` directive may be provided on a fragment spread or inline fragment
+to indicate that execution of the related selection set should be deferred. When
+a request includes the `@defer` directive, it may return an _incremental stream_
+consisting of an _initial incremental stream result_ containing all non-deferred
+data, followed by one or more _incremental stream update results_ including
+deferred data.
+
+The `@include` and `@skip` directives take precedence over `@defer`.
+
+```graphql example
+query myQuery($shouldDefer: Boolean! = true) {
+  user {
+    name
+    ...someFragment @defer(if: $shouldDefer, label: "someLabel")
+  }
+}
+fragment someFragment on User {
+  id
+  profile_picture {
+    uri
+  }
+}
+```
+
+#### @defer Arguments
+
+- `if: Boolean!
= true` - When `true`, fragment _should_ be deferred (see
+  [Client Handling of `@defer`/`@stream`](#sec-Client-handling-of-defer-stream)).
+  When `false`, fragment must not be deferred. Defaults to `true`.
+- `label: String` - An optional string literal used by GraphQL clients to
+  identify data in the _incremental stream_ and associate it with the
+  corresponding defer directive. If provided, the GraphQL service must include
+  this label in the corresponding _incremental pending notice_ within the
+  _incremental stream_. The `label` argument must be unique across all `@defer`
+  and `@stream` directives in the document. Variables are disallowed (via
+  [Defer And Stream Directive Labels Are Unique](#sec-Defer-And-Stream-Directive-Labels-Are-Unique))
+  because their values may not be known during validation.
+
+### @stream
+
+```graphql
+directive @stream(
+  if: Boolean! = true
+  label: String
+  initialCount: Int! = 0
+) on FIELD
+```
+
+The `@stream` directive may be provided for a field whose type incorporates a
+`List` type modifier. The directive enables returning a partial list initially,
+followed by additional items in one or more _incremental stream update results_.
+If the field type incorporates multiple `List` type modifiers, only the
+outermost list is streamed.
+
+Note: The mechanism through which items are streamed is implementation-defined
+and may use technologies such as asynchronous iterators.
+
+The `@include` and `@skip` directives take precedence over `@stream`.
+
+```graphql example
+query myQuery($shouldStream: Boolean! = true) {
+  user {
+    friends(first: 10)
+      @stream(if: $shouldStream, label: "friendsStream", initialCount: 5) {
+      name
+    }
+  }
+}
+```
+
+#### @stream Arguments
+
+- `if: Boolean! = true` - When `true`, field _should_ be streamed (see
+  [Client Handling of `@defer`/`@stream`](#sec-Client-handling-of-defer-stream)).
+  When `false`, the field must behave as if the `@stream` directive were not
+  present; it must not be streamed and all of the list items must be included.
+  Defaults to `true`.
+- `label: String` - An optional string literal used by GraphQL clients to
+  identify data in the _incremental stream_ and associate it with the
+  corresponding stream directive. If provided, the GraphQL service must include
+  this label in the corresponding _incremental pending notice_ within the
+  _incremental stream_. The `label` argument must be unique across all `@defer`
+  and `@stream` directives in the document. Variables are disallowed (via
+  [Defer And Stream Directive Labels Are Unique](#sec-Defer-And-Stream-Directive-Labels-Are-Unique))
+  because their values may not be known during validation.
+- `initialCount: Int! = 0` - The number of list items to include initially when
+  completing the parent selection set. If omitted, defaults to `0`. An execution
+  error will be raised if the value of this argument is less than `0`. When the
+  size of the list is greater than or equal to the value of `initialCount`, the
+  GraphQL service _must_ initially include at least as many list items as the
+  value of `initialCount` (see
+  [Client Handling of `@defer`/`@stream`](#sec-Client-handling-of-defer-stream)).
+
+### Client Handling of @defer/@stream
+
+The ability to defer and/or stream data can have a potentially significant
+impact on application performance. Developers generally need clear, predictable
+control over their application's performance. It is highly recommended that
+GraphQL services honor the `@defer` and `@stream` directives on each execution.
+However, the specification allows advanced use cases where the service can
+determine that it is more performant to not defer and/or stream. Services can
+make this determination on a case-by-case basis; e.g. in a single operation, one
+or more `@defer` and/or `@stream` directives may be acted upon while others are
+ignored.
+Therefore, GraphQL clients _must_ be able to process a _response_ that ignores
+individual `@defer` and/or `@stream` directives. This also applies to the
+`initialCount` argument on the `@stream` directive. Clients must be able to
+process a streamed field result that contains more initial list items than were
+specified in the `initialCount` argument.
diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md
index c48a6ba4a..aee87b90f 100644
--- a/spec/Section 5 -- Validation.md
+++ b/spec/Section 5 -- Validation.md
@@ -560,6 +560,7 @@ FieldsInSetCanMerge(set):
   {set} including visiting fragments and inline fragments.
 - Given each pair of distinct members {fieldA} and {fieldB} in {fieldsForName}:
   - {SameResponseShape(fieldA, fieldB)} must be true.
+  - {SameStreamDirective(fieldA, fieldB)} must be true.
   - If the parent types of {fieldA} and {fieldB} are equal or if either is not
     an Object Type:
     - {fieldA} and {fieldB} must have identical field names.
@@ -595,6 +596,16 @@ SameResponseShape(fieldA, fieldB):
     - If {SameResponseShape(subfieldA, subfieldB)} is {false}, return {false}.
 - Return {true}.
 
+SameStreamDirective(fieldA, fieldB):
+
+- If neither {fieldA} nor {fieldB} has a directive named `stream`:
+  - Return {true}.
+- If both {fieldA} and {fieldB} have a directive named `stream`:
+  - Let {streamA} be the directive named `stream` on {fieldA}.
+  - Let {streamB} be the directive named `stream` on {fieldB}.
+  - If {streamA} and {streamB} have identical sets of arguments, return {true}.
+- Return {false}.
+
 Note: In prior versions of the spec the term "composite" was used to signal a
 type that is either an Object, Interface or Union type.
 
@@ -1695,6 +1706,174 @@ query ($foo: Boolean = true, $bar: Boolean = false) {
 }
 ```
 
+### Defer And Stream Directives Are Used On Valid Root Field
+
+**Formal Specification**
+
+- For every {directive} in a document.
+- Let {directiveName} be the name of {directive}.
+- Let {mutationType} be the root Mutation type in {schema}.
+- Let {subscriptionType} be the root Subscription type in {schema}.
+- If {directiveName} is "defer" or "stream":
+  - The parent type of {directive} must not be {mutationType} or
+    {subscriptionType}.
+
+**Explanatory Text**
+
+The defer and stream directives are not allowed to be used on root fields of the
+mutation or subscription type.
+
+For example, the following document will not pass validation because `@defer`
+has been used on a root mutation field:
+
+```raw graphql counter-example
+mutation {
+  ... @defer {
+    mutationField
+  }
+}
+```
+
+### Defer And Stream Directives Are Used On Valid Operations
+
+**Formal Specification**
+
+- Let {subscriptionFragments} be the empty set.
+- For each {operation} in a document:
+  - If {operation} is a subscription operation:
+    - Let {fragments} be every fragment referenced by that {operation}
+      transitively.
+    - For each {fragment} in {fragments}:
+      - Let {fragmentName} be the name of {fragment}.
+      - Add {fragmentName} to {subscriptionFragments}.
+- For every {directive} in a document:
+  - Let {directiveName} be the name of {directive}.
+  - If {directiveName} is not "defer" or "stream":
+    - Continue to the next {directive}.
+  - Let {ancestor} be the ancestor operation or fragment definition of
+    {directive}.
+  - If {ancestor} is a fragment definition:
+    - If the fragment name of {ancestor} is not present in
+      {subscriptionFragments}:
+      - Continue to the next {directive}.
+  - If {ancestor} is not a subscription operation:
+    - Continue to the next {directive}.
+  - Let {if} be the argument named "if" on {directive}.
+  - {if} must be defined.
+  - Let {argumentValue} be the value passed to {if}.
+  - {argumentValue} must be a variable, or the boolean value "false".
+
+**Explanatory Text**
+
+The defer and stream directives cannot be used to defer or stream data in
+subscription operations. If these directives appear in a subscription operation,
+they must be disabled using the "if" argument.
This rule will not permit any
+defer or stream directives on a subscription operation that cannot be disabled
+using the "if" argument.
+
+For example, the following document will not pass validation because `@defer`
+has been used in a subscription operation with no "if" argument defined:
+
+```raw graphql counter-example
+subscription sub {
+  newMessage {
+    ... @defer {
+      body
+    }
+  }
+}
+```
+
+### Defer And Stream Directive Labels Are Unique
+
+**Formal Specification**
+
+- Let {labelValues} be an empty set.
+- For every {directive} in the document:
+  - Let {directiveName} be the name of {directive}.
+  - If {directiveName} is "defer" or "stream":
+    - For every {argument} in {directive}:
+      - Let {argumentName} be the name of {argument}.
+      - Let {argumentValue} be the value passed to {argument}.
+      - If {argumentName} is "label":
+        - {argumentValue} must not be a variable.
+        - {argumentValue} must not be present in {labelValues}.
+        - Append {argumentValue} to {labelValues}.
+
+**Explanatory Text**
+
+The `@defer` and `@stream` directives each accept an argument "label". This
+label may be used by GraphQL clients to uniquely identify response payloads. If
+a label is passed, it must not be a variable and it must be unique across all
+other `@defer` and `@stream` directives in the document.
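The label constraints described above are mechanical to check. As a non-normative sketch, the following TypeScript collects label violations the way the formal specification does; the `DirectiveUsage` shape is a hypothetical, simplified stand-in for the AST nodes a real validator would visit:

```typescript
// Hypothetical, simplified representation of a directive usage: the
// directive name plus its "label" argument, which is either a string
// literal or a variable reference.
interface DirectiveUsage {
  name: string; // e.g. "defer" or "stream"
  label?: { kind: "StringValue" | "Variable"; value: string };
}

// Returns one error message per violation: a variable label, or a label
// string already seen on another @defer/@stream directive.
function validateLabels(usages: DirectiveUsage[]): string[] {
  const errors: string[] = [];
  const seen = new Set<string>();
  for (const usage of usages) {
    if (usage.name !== "defer" && usage.name !== "stream") continue;
    const label = usage.label;
    if (label === undefined) continue; // label is optional
    if (label.kind === "Variable") {
      errors.push(`@${usage.name} label must not be a variable`);
      continue;
    }
    if (seen.has(label.value)) {
      errors.push(`duplicate @defer/@stream label "${label.value}"`);
      continue;
    }
    seen.add(label.value);
  }
  return errors;
}
```

A real validator would walk the document AST and attach source locations to each error; this sketch only demonstrates the variable and uniqueness constraints.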
+
+For example, the following document is valid:
+
+```graphql example
+{
+  dog {
+    ...fragmentOne
+    ...fragmentTwo @defer(label: "dogDefer")
+  }
+  pets @stream(label: "petStream") {
+    name
+  }
+}
+
+fragment fragmentOne on Dog {
+  name
+}
+
+fragment fragmentTwo on Dog {
+  owner {
+    name
+  }
+}
+```
+
+For example, the following document will not pass validation because the same
+label is used in different `@defer` and `@stream` directives:
+
+```raw graphql counter-example
+{
+  dog {
+    ...fragmentOne @defer(label: "MyLabel")
+  }
+  pets @stream(label: "MyLabel") {
+    name
+  }
+}
+
+fragment fragmentOne on Dog {
+  name
+}
+```
+
+### Stream Directives Are Used On List Fields
+
+**Formal Specification**
+
+- For every {directive} in a document.
+- Let {directiveName} be the name of {directive}.
+- If {directiveName} is "stream":
+  - Let {adjacent} be the AST node the directive affects.
+  - {adjacent} must be a List type.
+
+**Explanatory Text**
+
+GraphQL directive locations do not provide enough granularity to distinguish the
+type of fields used in a GraphQL document. Since the stream directive is only
+valid on list fields, an additional validation rule must be used to ensure it is
+used correctly.
+
+For example, the following document will only pass validation if `field` is
+defined as a List type in the associated schema.
+
+```graphql counter-example
+query {
+  field @stream(initialCount: 0)
+}
+```
+
 ## Variables
 
 ### Variable Uniqueness
diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 5bde7a6c1..4a38ca7af 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -276,14 +276,16 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue):
 - Let {subscriptionType} be the root Subscription type in {schema}.
 - Assert: {subscriptionType} is an Object type.
 - Let {selectionSet} be the top level selection set in {subscription}.
-- Let {collectedFieldsMap} be the result of {CollectFields(subscriptionType, - selectionSet, variableValues)}. +- Let {collectedFieldsMap} and {newDeferUsages} be the result of + {CollectFields(subscriptionType, selectionSet, variableValues)}. +- Assert: {newDeferUsages} is empty. - If {collectedFieldsMap} does not have exactly one entry, raise a _request error_. - Let {fields} be the value of the first entry in {collectedFieldsMap}. -- Let {fieldName} be the name of the first entry in {fields}. Note: This value - is unaffected if an alias is used. -- Let {field} be the first entry in {fields}. +- Let {fieldDetails} be the first entry in {fields}. +- Let {field} be the corresponding entry on {fieldDetails}. +- Let {fieldName} be the field name of {field}. Note: This value is unaffected + if an alias is used. - Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType, field, variableValues)}. - Let {sourceStream} be the result of running @@ -383,15 +385,25 @@ then executed, returning the resulting {data} and {errors}. ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, executionMode): -- Let {collectedFieldsMap} be the result of {CollectFields(objectType, - selectionSet, variableValues)}. -- Let {data} be the result of running - {ExecuteCollectedFields(collectedFieldsMap, objectType, initialValue, - variableValues)} _serially_ if {executionMode} is {"serial"}, otherwise - _normally_ (allowing parallelization)). +- Let {collectedFieldsMap} and {newDeferUsages} be the result of + {CollectFields(objectType, selectionSet, variableValues)}. +- Let {executionPlan} be the result of {BuildExecutionPlan(collectedFieldsMap)}. +- Let {data} and {work} be the result of {ExecuteExecutionPlan(newDeferUsages, + executionPlan, objectType, initialValue, variableValues, executionMode)}. - Let {errors} be the list of all _execution error_ raised while executing the - selection set. 
-- Return an unordered map containing {data} and {errors}. + execution plan. +- Let {tasks} and {streams} be the corresponding entries on {work}. +- If {tasks} is empty and {streams} is empty, return an unordered map containing + {data} and {errors}. +- Let {incrementalStreamResults} be the result of {YieldIncrementalResults(data, + errors, work)}. +- Wait for the first result in {incrementalStreamResults} to be available. +- Let {initialIncrementalStreamResult} be that result. +- Return {initialIncrementalStreamResult} and + {BatchIncrementalResults(incrementalStreamResults)}. + +Note: {ExecuteExecutionPlan()} does not directly raise execution errors from the +incremental portion of the Execution Plan. ### Field Collection @@ -405,7 +417,7 @@ name_ and its associated _field set_. A _collected fields map_ may be produced from a selection set via {CollectFields()} or from the selection sets of all entries of a _field set_ via {CollectSubfields()}. -:: A _field set_ is an ordered set of selected fields that share the same +:: A _field set_ is an ordered set of Field Details that share the same _response name_ (the field alias if defined, otherwise the field's name). Validation ensures each field in the set has the same name and arguments, however each may have different subfields (see: @@ -439,10 +451,46 @@ The depth-first-search order of each _field set_ produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. -CollectFields(objectType, selectionSet, variableValues, visitedFragments): +CollectFields(objectType, selectionSet, variableValues, deferUsage, +visitedFragments): + +The {CollectFields()} algorithm makes use of the following data types: + +Defer Usage Records are unordered maps representing the usage of a `@defer` +directive within a given operation. 
Defer Usages are "abstract" in that they +include information about the `@defer` directive from the AST of the GraphQL +document. A single Defer Usage may be used to create many "concrete" Delivery +Groups when a `@defer` is included within a list type. + +Defer Usages contain the following information: + +- {label}: the `label` argument provided by the given `@defer` directive, if + any, otherwise {undefined}. +- {parentDeferUsage}: a Defer Usage corresponding to the `@defer` directive + enclosing this `@defer` directive, if any, otherwise {undefined}. + +The {parentDeferUsage} entry is used to build distinct Execution Groups as +discussed within the Execution Plan Generation section below. + +Field Details Records are unordered maps containing the following entries: + +- {field}: the Field selection. +- {deferUsage}: the Defer Usage enclosing the selection, if any, otherwise + {undefined}. + +A Collected Fields Map is an ordered map of _response name_ to lists of Field +Details. + +The {CollectFields()} algorithm returns: + +- {collectedFieldsMap}: the Collected Fields Map for the fields in the selection + set. +- {newDeferUsages}: a list of new Defer Usages encountered during this field + collection. - If {visitedFragments} is not provided, initialize it to the empty set. - Initialize {collectedFieldsMap} to an empty ordered map of ordered sets. +- Initialize {newDeferUsages} to an empty list. - For each {selection} in {selectionSet}: - If {selection} provides the directive `@skip`, let {skipDirective} be that directive. @@ -457,15 +505,26 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {selection} is a {Field}: - Let {responseName} be the _response name_ of {selection} (the alias if defined, otherwise the field name). + - Let {fieldDetails} be a new unordered map containing {field} and + {deferUsage}. + - Set the corresponding entries on {fieldDetails} to {selection} and + {deferUsage}, respectively. 
- Let {fieldsForResponseName} be the _field set_ value in {collectedFieldsMap} for the key {responseName}; otherwise create the entry with an empty ordered set. - - Add {selection} to the {fieldsForResponseName}. + - Add {fieldDetails} to the {fieldsForResponseName}. - If {selection} is a {FragmentSpread}: - Let {fragmentSpreadName} be the name of {selection}. + - If {selection} provides the directive `@defer` and its {if} argument is + not {false} and is not a variable in {variableValues} with the value + {false}: + - Let {deferDirective} be that directive. + - If this execution is for a subscription operation, raise an _execution + error_. - If {fragmentSpreadName} is in {visitedFragments}, continue with the next {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. + - If {deferDirective} is not defined: + - Add {fragmentSpreadName} to {visitedFragments}. - Let {fragment} be the Fragment in the current Document whose name is {fragmentSpreadName}. - If no such {fragment} exists, continue with the next {selection} in @@ -474,31 +533,51 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - Let {fragmentCollectedFieldsMap} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {responseName} and {fragmentFields} in + - If {deferDirective} is defined: + - Let {label} be the corresponding entry on {deferDirective}. + - Let {parentDeferUsage} be {deferUsage}. + - Let {fragmentDeferUsage} be an unordered map containing {label} and + {parentDeferUsage}. + - Otherwise, let {fragmentDeferUsage} be {deferUsage}. 
+ - Let {fragmentCollectedFieldsMap} and {fragmentNewDeferUsages} be the + result of calling {CollectFields(objectType, fragmentSelectionSet, + variableValues, fragmentDeferUsage, visitedFragments)}. + - For each {responseName} and {fragmentFieldSet} in {fragmentCollectedFieldsMap}: - Let {fieldsForResponseName} be the _field set_ value in {collectedFieldsMap} for the key {responseName}; otherwise create the entry with an empty ordered set. - - Add each item from {fragmentFields} to {fieldsForResponseName}. + - Add each item from {fragmentFieldSet} to {fieldsForResponseName}. + - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. - If {selection} is an {InlineFragment}: - Let {fragmentType} be the type condition on {selection}. - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - Let {fragmentCollectedFieldsMap} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {responseName} and {fragmentFields} in + - If {selection} provides the directive `@defer` and its {if} argument is + not {false} and is not a variable in {variableValues} with the value + {false}: + - Let {deferDirective} be that directive. + - If this execution is for a subscription operation, raise an _execution + error_. + - If {deferDirective} is defined: + - Let {label} be the corresponding entry on {deferDirective}. + - Let {parentDeferUsage} be {deferUsage}. + - Let {fragmentDeferUsage} be an unordered map containing {label} and + {parentDeferUsage}. + - Otherwise, let {fragmentDeferUsage} be {deferUsage}. + - Let {fragmentCollectedFieldsMap} and {fragmentNewDeferUsages} be the + result of calling {CollectFields(objectType, fragmentSelectionSet, + variableValues, fragmentDeferUsage, visitedFragments)}. 
+ - For each {responseName} and {fragmentFieldSet} in {fragmentCollectedFieldsMap}: - Let {fieldsForResponseName} be the _field set_ value in {collectedFieldsMap} for the key {responseName}; otherwise create the entry with an empty ordered set. - - Append each item from {fragmentFields} to {fieldsForResponseName}. -- Return {collectedFieldsMap}. + - Add each item from {fragmentFieldSet} to {fieldsForResponseName}. + - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. +- Return {collectedFieldsMap} and {newDeferUsages}. DoesFragmentTypeApply(objectType, fragmentType): @@ -515,6 +594,10 @@ DoesFragmentTypeApply(objectType, fragmentType): Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` directives may be applied in either order since they apply commutatively. +Note: When completing a List field, the {CollectFields} algorithm is invoked +with the same arguments for each element of the list. GraphQL Services may +choose to memoize their implementations of {CollectFields}. + **Merging Selection Sets** In order to execute the sub-selections of an object typed field, all _selection @@ -550,20 +633,23 @@ resolved in the same phase with the same value. CollectSubfields(objectType, fields, variableValues): - Let {collectedFieldsMap} be an empty ordered map of ordered sets. -- For each {field} in {fields}: +- Let {newDeferUsages} be an empty list. +- For each {fieldDetails} in {fields}: + - Let {field} and {deferUsage} be the corresponding entries on {fieldDetails}. - Let {fieldSelectionSet} be the selection set of {field}. - If {fieldSelectionSet} is null or empty, continue to the next field. - - Let {fieldCollectedFieldsMap} be the result of {CollectFields(objectType, - fieldSelectionSet, variableValues)}. - - For each {responseName} and {subfields} in {fieldCollectedFieldsMap}: + - Let {subCollectedFieldsMap} and {subNewDeferUsages} be the result of + {CollectFields(objectType, fieldSelectionSet, variableValues, deferUsage)}. 
+ - For each {responseName} and {subfields} in {subCollectedFieldsMap}: - Let {fieldsForResponseName} be the _field set_ value in {collectedFieldsMap} for the key {responseName}; otherwise create the entry with an empty ordered set. - - Add each fields from {subfields} to {fieldsForResponseName}. -- Return {collectedFieldsMap}. + - Add each item from {subfields} to {fieldsForResponseName}. + - Append all items in {subNewDeferUsages} to {newDeferUsages}. +- Return {collectedFieldsMap} and {newDeferUsages}. -Note: All the {fields} passed to {CollectSubfields()} share the same _response -name_. +Note: All the {fieldDetailsList} passed to {CollectSubfields()} share the same +_response name_. ### Executing Collected Fields @@ -575,23 +661,34 @@ collected fields map, producing an entry in the result map with the same _response name_ key. ExecuteCollectedFields(collectedFieldsMap, objectType, objectValue, -variableValues): +variableValues, path, deferUsageSet, deferMap): - Initialize {resultMap} to an empty ordered map. +- Initialize {groups}, {tasks}, and {streams} to empty lists. - For each {responseName} and {fields} in {collectedFieldsMap}: - - Let {fieldName} be the name of the first entry in {fields}. Note: This value - is unaffected if an alias is used. + - Let {fieldDetails} be the first entry in {fields}. + - Let {field} be the corresponding entry on {fieldDetails}. + - Let {fieldName} be the field name of {field}. Note: This value is unaffected + if an alias is used. - Let {fieldType} be the return type defined for the field {fieldName} of {objectType}. - If {fieldType} is defined: - - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType, - fields, variableValues)}. + - Let {responseValue} and {fieldWork} be the result of + {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, + path, deferUsageSet, deferMap)}. + - Let {fieldGroups}, {fieldTasks}, and {fieldStreams} be the corresponding + entries on {fieldWork}. 
- Set {responseValue} as the value for {responseName} in {resultMap}. -- Return {resultMap}. + - For each {fieldGroup} in {fieldGroups}: + - If {groups} does not contain an equivalent {fieldGroup}, append + {fieldGroup} to {groups}. + - Append all items in {fieldTasks} to {tasks}. + - Append all items in {fieldStreams} to {streams}. +- Return {resultMap} and an unordered map containing {groups}, {tasks}, and + {streams}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the [Field Collection](#sec-Field-Collection) -section. +is explained in greater detail in the Field Collection section below. **Errors and Non-Null Types** @@ -719,16 +816,19 @@ first coerces any provided argument values, then resolves a value for the field, and finally completes that value either by recursively executing another selection set or coercing a scalar value. -ExecuteField(objectType, objectValue, fieldType, fields, variableValues): +ExecuteField(objectType, objectValue, fieldType, fieldDetailsList, +variableValues, path, deferUsageSet, deferMap): -- Let {field} be the first entry in {fields}. +- Let {fieldDetails} be the first entry in {fieldDetailsList}. +- Let {field} be the corresponding entry on {fieldDetails}. - Let {fieldName} be the field name of {field}. +- Append {fieldName} to {path}. - Let {argumentValues} be the result of {CoerceArgumentValues(objectType, field, variableValues)}. - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName, argumentValues)}. -- Return the result of {CompleteValue(fieldType, fields, resolvedValue, - variableValues)}. +- Return the result of {CompleteValue(fieldType, fieldDetailsList, + resolvedValue, variableValues, path, deferUsageSet, deferMap)}. ### Coercing Field Arguments @@ -825,34 +925,61 @@ the expected return type. If the return type is another Object type, then the field execution process continues recursively by collecting and executing subfields. 
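As a non-normative illustration, the revised {ExecuteField()} flow above, which now threads a {path} through to value completion, might be sketched as follows. The helper names (`resolve_field_value`, `complete_value`) and the dictionary shape of a field details record are assumptions of this sketch, not part of the specification.

```python
# Non-normative sketch of the revised ExecuteField flow. The two callables
# stand in for the ResolveFieldValue and CompleteValue spec algorithms.
def execute_field(object_value, field_details_list, path,
                  resolve_field_value, complete_value):
    field = field_details_list[0]["field"]   # first field details record
    field_path = path + [field["name"]]      # extend the response path
    resolved = resolve_field_value(object_value, field["name"])
    # Completion receives the full field details list and the extended path.
    return complete_value(field_details_list, resolved, field_path)
```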
-CompleteValue(fieldType, fields, result, variableValues): +CompleteValue(fieldType, fieldDetailsList, result, variableValues, path, +deferUsageSet, deferMap): - If the {fieldType} is a Non-Null type: - Let {innerType} be the inner type of {fieldType}. - - Let {completedResult} be the result of calling {CompleteValue(innerType, - fields, result, variableValues)}. + - Let {completedResult} and {work} be the result of calling + {CompleteValue(innerType, fieldDetailsList, result, variableValues, path, + deferUsageSet, deferMap)}. - If {completedResult} is {null}, raise an _execution error_. - - Return {completedResult}. + - Return {completedResult} and {work}. - If {result} is {null} (or another internal value similar to {null} such as - {undefined}), return {null}. + {undefined}), return {null} and an unordered map containing empty lists for + {groups}, {tasks}, and {streams}. - If {fieldType} is a List type: - If {result} is not a collection of values, raise an _execution error_. - Let {innerType} be the inner type of {fieldType}. - - Return a list where each list item is the result of calling - {CompleteValue(innerType, fields, resultItem, variableValues)}, where - {resultItem} is each item in {result}. + - Return the result of {CompleteListValue(innerType, fieldDetailsList, result, + variableValues, path, deferUsageSet, deferMap)}. - If {fieldType} is a Scalar or Enum type: - - Return the result of {CoerceResult(fieldType, result)}. + - Return the result of {CoerceResult(fieldType, result)} and an unordered map + containing empty lists for {groups}, {tasks}, and {streams}. - If {fieldType} is an Object, Interface, or Union type: - If {fieldType} is an Object type. - Let {objectType} be {fieldType}. - Otherwise if {fieldType} is an Interface or Union type. - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - - Let {collectedFieldsMap} be the result of calling - {CollectSubfields(objectType, fields, variableValues)}. 
- - Return the result of evaluating {ExecuteCollectedFields(collectedFieldsMap, - objectType, result, variableValues)} _normally_ (allowing for - parallelization). + - Let {collectedFieldsMap} and {newDeferUsages} be the result of calling + {CollectSubfields(objectType, fieldDetailsList, variableValues)}. + - Let {executionPlan} be the result of {BuildExecutionPlan(collectedFieldsMap, + deferUsageSet)}. + - Return the result of {ExecuteExecutionPlan(newDeferUsages, executionPlan, + objectType, result, variableValues, "normal", path, deferUsageSet, + deferMap)}. + +CompleteListValue(innerType, fieldDetailsList, result, variableValues, path, +deferUsageSet, deferMap): + +- Initialize {items}, {groups}, {tasks}, and {streams} to empty lists. +- Let {index} be {0}. +- For each {resultItem} of {result}: + - Let {itemPath} be {path} with {index} appended. + - Let {completedItem} and {itemWork} be the result of calling + {CompleteValue(innerType, fieldDetailsList, resultItem, variableValues, + itemPath, deferUsageSet, deferMap)}. + - Let {itemGroups}, {itemTasks}, and {itemStreams} be the corresponding + entries on {itemWork}. + - Append {completedItem} to {items}. + - For each {itemGroup} in {itemGroups}: + - If {groups} does not contain an equivalent {itemGroup}, append {itemGroup} + to {groups}. + - Append all items in {itemTasks} to {tasks}. + - Append all items in {itemStreams} to {streams}. + - Increment {index} by {1}. +- Return {items} and an unordered map containing {groups}, {tasks}, and + {streams}. **Coercing Results** @@ -939,3 +1066,307 @@ position_ must resolve to {null}. If the `List` type is also wrapped in a If every _response position_ from the root of the request to the source of the execution error has a `Non-Null` type, then the {"data"} entry in the _execution result_ should be {null}. 
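The aggregation performed by {CompleteListValue()} above can be illustrated with a non-normative sketch; the `(value, work)` pair shape and the `groups`/`tasks`/`streams` dictionary keys are assumptions of this sketch.

```python
# Non-normative sketch of CompleteListValue: complete each list item, then
# merge each item's incremental work, de-duplicating equivalent groups.
def complete_list_value(complete_value, result, path):
    items, groups, tasks, streams = [], [], [], []
    for index, result_item in enumerate(result):
        item_path = path + [index]
        completed_item, item_work = complete_value(result_item, item_path)
        items.append(completed_item)
        for item_group in item_work["groups"]:
            if item_group not in groups:  # keep only one equivalent group
                groups.append(item_group)
        tasks.extend(item_work["tasks"])
        streams.extend(item_work["streams"])
    return items, {"groups": groups, "tasks": tasks, "streams": streams}
```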
+ +### Execution Plan Generation + +A _collected fields map_ may contain fields that have been deferred by the use +of the `@defer` directive on their enclosing fragments. Given a _collected +fields map_, {BuildExecutionPlan()} generates an execution plan by partitioning +the _collected fields map_ as specified by the operation's use of `@defer` and +the requirements of the incremental response format. An execution plan consists +of a single new _collected fields map_ containing the fields that do not require +deferral, and a map of new _collected fields maps_ where the keys represent sets +of Defer Usages containing those fields. + +BuildExecutionPlan(originalCollectedFieldsMap, parentDeferUsages): + +- If {parentDeferUsages} is not provided, initialize it to the empty set. +- Initialize {collectedFieldsMap} to an empty ordered map. +- Initialize {newCollectedFieldsMaps} to an empty unordered map. +- Let {executionPlan} be an unordered map containing {collectedFieldsMap} and + {newCollectedFieldsMaps}. +- For each {responseName} and {fieldsForResponseName} of + {originalCollectedFieldsMap}: + - Let {filteredDeferUsageSet} be the result of + {GetFilteredDeferUsageSet(fieldsForResponseName)}. + - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}: + - Set the entry for {responseName} in {collectedFieldsMap} to + {fieldsForResponseName}. + - Otherwise: + - Let {newCollectedFieldsMap} be the entry in {newCollectedFieldsMaps} for + any equivalent set to {filteredDeferUsageSet}; if no such map exists, + create it as an empty ordered map. + - Set the entry for {responseName} in {newCollectedFieldsMap} to + {fieldsForResponseName}. +- Return {executionPlan}. + +GetFilteredDeferUsageSet(fieldDetailsList): + +- Initialize {filteredDeferUsageSet} to the empty set. +- For each {fieldDetails} of {fieldDetailsList}: + - Let {deferUsage} be the corresponding entry on {fieldDetails}. 
+ - If {deferUsage} is not defined: + - Remove all entries from {filteredDeferUsageSet}. + - Return {filteredDeferUsageSet}. + - Add {deferUsage} to {filteredDeferUsageSet}. +- For each {deferUsage} in {filteredDeferUsageSet}: + - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. + - While {parentDeferUsage} is defined: + - If {parentDeferUsage} is contained by {filteredDeferUsageSet}: + - Remove {deferUsage} from {filteredDeferUsageSet}. + - Continue to the next {deferUsage} in {filteredDeferUsageSet}. + - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}. +- Return {filteredDeferUsageSet}. + +### Yielding Incremental Stream Results + +The procedure for yielding an _incremental stream_ uses a generic incremental +work queue and maps its event stream into an _initial incremental stream result_ +followed by zero or more _incremental stream update result_. + +YieldIncrementalResults(data, errors, work): + +- Let {initialGroups}, {initialStreams}, and {workEventStream} be the result of + {CreateWorkQueue(work)}. +- Let {pending} be the result of {GetPendingEntry(initialGroups, + initialStreams)}. +- Let {hasNext} be {true}. +- Yield an unordered map containing {data}, {errors}, {pending}, and {hasNext}. +- Let {incrementalStreamUpdateResults} be the result of + {MapIncrementalWorkEventsToResponseEvent(workEventStream)}. +- For each {incrementalStreamUpdateResult} in {incrementalStreamUpdateResults}: + - Yield {incrementalStreamUpdateResult}. +- Complete this incremental stream. + +### Mapping Work Events to Response Events + +The {MapIncrementalWorkEventsToResponseEvent()} algorithm maps each batch of +work queue events into an _incremental stream update result_. + +MapIncrementalWorkEventsToResponseEvent(workEventStream): + +- Let {idMap} be an empty map. +- Let {nextID} be {0}. 
+- Return a new event stream {responseEventStream} which yields events as + follows: +- For each {batch} emitted by {workEventStream}: + - Initialize {pending}, {incremental}, and {completed} to empty lists. + - Let {hasNext} be {true}. + - For each {event} in {batch}: + - If {event} is {GROUP_VALUES}: + - Let {group} and {values} be the corresponding entries on {event}. + - For each {value} in {values}: + - Append {GetIncrementalEntry(group, value)} to {incremental}. + - If {event} is {GROUP_SUCCESS}: + - Let {group}, {newGroups}, and {newStreams} be the corresponding entries + on {event}. + - Append {GetCompletedEntry(group)} to {completed}. + - Append all items in {GetPendingEntry(newGroups, newStreams)} to + {pending}. + - If {event} is {GROUP_FAILURE}: + - Let {group} and {error} be the corresponding entries on {event}. + - Let {groupErrors} be a list containing {error}. + - Append {GetCompletedEntry(group, groupErrors)} to {completed}. + - If {event} is {STREAM_VALUES}: + - Let {stream}, {values}, {newGroups}, and {newStreams} be the + corresponding entries on {event}. + - Let {id} be the result of {EnsureID(stream)}. + - Initialize {items} and {streamErrors} to empty lists. + - For each {value} in {values}: + - Append the stream item entry from {value} to {items}. + - If {value} contains {errors}, append each such error to + {streamErrors}. + - Let {incrementalEntry} be an unordered map containing {id} and {items}. + - If {streamErrors} is not empty, set the corresponding entry on + {incrementalEntry} to {streamErrors}. + - Append {incrementalEntry} to {incremental}. + - Append all items in {GetPendingEntry(newGroups, newStreams)} to + {pending}. + - If {event} is {STREAM_SUCCESS}: + - Let {stream} be the corresponding entry on {event}. + - Append {GetCompletedEntry(stream)} to {completed}. + - If {event} is {STREAM_FAILURE}: + - Let {stream} and {error} be the corresponding entries on {event}. + - Let {streamErrors} be a list containing {error}. 
+ - Append {GetCompletedEntry(stream, streamErrors)} to {completed}. + - If {event} is {WORK_QUEUE_TERMINATION}: + - Let {hasNext} be {false}. + - Yield the result of {GetIncrementalStreamUpdateResult(hasNext, completed, + incremental, pending)}. + +The following algorithms have access to {idMap} and {nextID}. + +EnsureID(node): + +- If {idMap} has an entry for {node}, return that entry. +- Let {id} be {nextID} converted to a string. +- Set the entry for {node} in {idMap} to {id}. +- Increment {nextID} by {1}. +- Return {id}. + +GetPendingEntry(newGroups, newStreams): + +- Initialize {pending} to an empty list. +- For each {group} of {newGroups}: + - Let {id} be the result of {EnsureID(group)}. + - Let {path} and {label} be the corresponding entries on {group}. + - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}. + - Append {pendingEntry} to {pending}. +- For each {stream} of {newStreams}: + - Let {id} be the result of {EnsureID(stream)}. + - Let {path} and {label} be the corresponding entries on {stream}. + - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}. + - Append {pendingEntry} to {pending}. +- Return {pending}. + +GetIncrementalEntry(group, value): + +- Let {id} be the result of {EnsureID(group)}. +- Let {groupPath} be the path entry on {group}. +- Let {path}, {data}, and {errors} be the corresponding entries on {value}. +- Let {subPath} be the portion of {path} not contained by {groupPath}. +- Let {incrementalEntry} be an unordered map containing {id} and {data}. +- If {errors} is not empty, set the corresponding entry on {incrementalEntry} to + {errors}. +- If {subPath} is not empty, set the corresponding entry on {incrementalEntry} + to {subPath}. +- Return {incrementalEntry}. + +GetCompletedEntry(node, errors): + +- Let {id} be the result of {EnsureID(node)}. +- Let {completedEntry} be an unordered map containing {id}. 
+- If {errors} is provided and not empty, set the corresponding entry on + {completedEntry} to {errors}. +- Return {completedEntry}. + +GetIncrementalStreamUpdateResult(hasNext, completed, incremental, pending): + +- Let {incrementalStreamUpdateResult} be an unordered map containing {hasNext}. +- If {incremental} is not empty: + - Set the corresponding entry on {incrementalStreamUpdateResult} to + {incremental}. +- If {completed} is not empty: + - Set the corresponding entry on {incrementalStreamUpdateResult} to + {completed}. +- If {pending} is not empty: + - Set the corresponding entry on {incrementalStreamUpdateResult} to {pending}. +- Return {incrementalStreamUpdateResult}. + +### Batching Incremental Stream Update Results + +BatchIncrementalResults(incrementalStreamUpdateResults): + +- Return a new stream {batchedIncrementalStreamUpdateResults} which yields + events as follows: +- While {incrementalStreamUpdateResults} is not closed: + - Let {availableIncrementalStreamUpdateResults} be a list of one or more + _incremental stream update result_ available on + {incrementalStreamUpdateResults}. + - Let {batchedIncrementalStreamUpdateResult} be an unordered map created by + merging the items in {availableIncrementalStreamUpdateResults} into a single + unordered map, concatenating list entries as necessary, and setting + {hasNext} to the value of {hasNext} on the final item in the list. + - Yield {batchedIncrementalStreamUpdateResult}. + +## Executing an Execution Plan + +Executing an execution plan consists of two tasks that may be performed in +parallel. The first task is simply the execution of the non-deferred collected +fields map. The second task is to use the partitioned collected fields maps +within the execution plan to generate Execution Group tasks and combine those +tasks with any nested incremental work. 
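The two parallelizable tasks described above might be sketched as follows. This is non-normative; the two callables stand in for {ExecuteCollectedFields()} and {CollectExecutionGroups()}, and the result shapes are assumptions of this sketch.

```python
import asyncio

# Non-normative sketch: resolve the non-deferred fields while the deferred
# Execution Group tasks are collected; the two may run in parallel.
async def execute_plan_tasks(execute_collected_fields, collect_execution_groups):
    (data, work), group_tasks = await asyncio.gather(
        execute_collected_fields(),
        collect_execution_groups(),
    )
    # Combine the execution-group tasks with any nested incremental work.
    work["tasks"] = work["tasks"] + list(group_tasks)
    return data, work
```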
+ +ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, objectValue, +variableValues, executionMode, path, deferUsageSet, deferMap): + +- If {path} is not provided, initialize it to an empty list. +- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path, + deferMap)}. +- Let {collectedFieldsMap} and {newCollectedFieldsMaps} be the corresponding + entries on {executionPlan}. +- Allowing for parallelization, perform the following steps: + - Let {data} and {work} be the result of running + {ExecuteCollectedFields(collectedFieldsMap, objectType, objectValue, + variableValues, path, deferUsageSet, newDeferMap)} _serially_ if + {executionMode} is {"serial"}, _normally_ (allowing parallelization) + otherwise. + - Let {executionGroupTasks} be the result of + {CollectExecutionGroups(objectType, objectValue, variableValues, + newCollectedFieldsMaps, path, newDeferMap)}. +- Let {groups}, {tasks}, and {streams} be the corresponding entries on {work}. +- Append all items in {executionGroupTasks} to {tasks}. +- For each {task} in {executionGroupTasks}: + - Let {deferredFragments} be the Deferred Fragments incrementally completed by + {task}. + - For each {deferredFragment} in {deferredFragments}: + - If {groups} does not contain an equivalent {deferredFragment}, append + {deferredFragment} to {groups}. +- Return {data} and {work}. + +### Mapping @defer Directives to Delivery Groups + +Because `@defer` directives may be nested within list types, a map is required +to associate a Defer Usage record as recorded within Field Details Records and +an actual Deferred Fragment so that any additional Execution Groups may be +associated with the correct Deferred Fragment. The {GetNewDeferMap()} algorithm +creates that map. Given a list of new Defer Usages, the actual path at which the +fields they defer are spread, and an initial map, it returns a new map +containing all entries in the provided defer map, as well as new entries for +each new Defer Usage. 
+
+GetNewDeferMap(newDeferUsages, path, deferMap):
+
+- If {newDeferUsages} is empty, return {deferMap}.
+- Let {newDeferMap} be a new unordered map containing all entries in {deferMap}.
+- For each {deferUsage} in {newDeferUsages}:
+  - Let {parentDeferUsage} and {label} be the corresponding entries on
+    {deferUsage}.
+  - Let {parent} be the entry in {deferMap} for {parentDeferUsage}.
+  - Let {newDeferredFragment} be an unordered map containing {parent}, {path},
+    and {label}.
+  - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
+- Return {newDeferMap}.
+
+### Collecting Execution Groups
+
+The {CollectExecutionGroups()} algorithm is responsible for creating the
+Execution Group tasks for each partitioned collected fields map. It uses the map
+created by the {GetNewDeferMap()} algorithm to associate each Execution Group
+with the correct Deferred Fragment.
+
+CollectExecutionGroups(objectType, objectValue, variableValues,
+newCollectedFieldsMaps, path, deferMap):
+
+- Initialize {executionGroupTasks} to an empty list.
+- For each {deferUsageSet} and {collectedFieldsMap} in {newCollectedFieldsMaps}:
+  - Let {deferredFragments} be an empty list.
+  - For each {deferUsage} in {deferUsageSet}:
+    - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
+    - Append {deferredFragment} to {deferredFragments}.
+  - Let {executionGroupTask} represent the future execution of
+    {ExecuteExecutionGroup(collectedFieldsMap, objectType, objectValue,
+    variableValues, path, deferUsageSet, deferMap)}, incrementally completing
+    {deferredFragments} at {path}.
+  - Append {executionGroupTask} to {executionGroupTasks}.
+  - Schedule initiation of execution of {executionGroupTask} following any
+    implementation-specific deferral.
+- Return {executionGroupTasks}.
+
+Note: {executionGroupTask} can be safely initiated without blocking
+higher-priority data once any of {deferredFragments} are released as pending.
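As a non-normative sketch, {GetNewDeferMap()} might be implemented as follows, assuming Defer Usages are identified by opaque keys and carry hypothetical `parentKey` and `label` fields; the exact record shapes are assumptions of this sketch.

```python
# Non-normative sketch of GetNewDeferMap: create a Deferred Fragment record
# for each new Defer Usage, preserving all entries of the incoming map.
def get_new_defer_map(new_defer_usages, path, defer_map):
    if not new_defer_usages:
        return defer_map
    new_defer_map = dict(defer_map)  # copy all existing entries
    for key, usage in new_defer_usages.items():
        new_defer_map[key] = {
            "parent": new_defer_map.get(usage["parentKey"]),  # parent fragment, if any
            "path": list(path),   # path at which the deferred fields are spread
            "label": usage["label"],
        }
    return new_defer_map
```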
+ +The {ExecuteExecutionGroup()} algorithm is responsible for actually executing +the deferred collected fields map and collecting the result and any raised +errors. + +ExecuteExecutionGroup(collectedFieldsMap, objectType, objectValue, +variableValues, path, deferUsageSet, deferMap): + +- Let {data} and {work} be the result of running + {ExecuteCollectedFields(collectedFieldsMap, objectType, objectValue, + variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing + parallelization). +- Let {errors} be the list of all _execution error_ raised while executing + {ExecuteCollectedFields()}. +- Return an unordered map containing {data}, {errors}, and {work}. diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 4ece8639d..3bb90807f 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -10,7 +10,8 @@ the case that any _execution error_ was raised and replaced with {null}. ## Response Format :: A GraphQL request returns a _response_. A _response_ is either an _execution -result_, a _response stream_, or a _request error result_. +result_, a _response stream_, an _incremental stream_, or a _request error +result_. ### Execution Result @@ -43,6 +44,14 @@ value of this entry is described in the "Extensions" section. subscription and the request included execution. A response stream must be a stream of _execution result_. +### Incremental Stream + +:: A GraphQL request returns an _incremental stream_ when the GraphQL service +has deferred or streamed data as a result of the `@defer` or `@stream` +directives. When the result of the GraphQL operation is an incremental stream, +the first payload will be an _initial incremental stream result_, optionally +followed by one or more _incremental stream update result_. + ### Request Error Result :: A GraphQL request returns a _request error result_ when one or more _request @@ -70,6 +79,80 @@ The _request error result_ map must not contain an entry with key {"data"}. 
The _request error result_ map may also contain an entry with key `extensions`.
The value of this entry is described in the "Extensions" section.

+### Initial Incremental Stream Result
+
+:: An _initial incremental stream result_ contains the result of executing any
+non-deferred selections, along with any errors that occurred during their
+execution, as well as details of any future _incremental stream update result_
+to be expected. An initial incremental stream result must be the first payload
+yielded by an _incremental stream_.
+
+An _initial incremental stream result_ must be a map.
+
+The _initial incremental stream result_ must contain entries with keys {"data"},
+{"pending"}, and {"hasNext"}, and may contain entries with keys {"errors"},
+{"incremental"}, {"completed"}, and {"extensions"}.
+
+The values of {"data"}, {"errors"}, and {"extensions"} are defined in the same
+way as in an _execution result_, as described in the "Data", "Errors", and
+"Extensions" sections below.
+
+The value of {"hasNext"} must be {false} if the initial incremental stream
+result is the last response of the incremental stream. Otherwise, {"hasNext"}
+must be {true}.
+
+The value of {"pending"} must be a non-empty list of _incremental pending
+notice_. Each _incremental pending notice_ must be a map as described in the
+"Incremental Pending Notice" section below.
+
+The value of {"incremental"}, if present, must be a non-empty list of
+_incremental result_. Each _incremental result_ must be a map as described in
+the "Incremental Result" section below.
+
+The value of {"completed"}, if present, must be a non-empty list of _incremental
+completion notice_. Each _incremental completion notice_ must be a map as
+described in the "Incremental Completion Notice" section below.
+
+Note: A GraphQL service is permitted to include incrementally delivered data in
+the _initial incremental stream result_.
For example, a GraphQL middleware layer, such
+as a caching CDN or proxy service, may wish to intercept and rewrite the
+_incremental stream_ before delivering it to a client. This service may collect
+some or all of the _incremental pending notice_, _incremental result_, and
+_incremental completion notice_ from the entire _incremental stream_ of the
+upstream service, and construct a new incremental stream containing a single
+payload: an _initial incremental stream result_ containing all of the
+intercepted incremental pending notices, incremental results, and incremental
+completion notices, and the {"hasNext"} entry set to {false}. This would allow
+the client to efficiently render the entire result without having to process
+multiple payloads.
+
+### Incremental Stream Update Result
+
+:: An _incremental stream update result_ contains the result of executing any
+deferred selections, along with any errors that occurred during their execution,
+as well as details of any future _incremental stream update result_ to be
+expected. All payloads yielded by an _incremental stream_, except the first,
+must be incremental stream update results.
+
+An _incremental stream update result_ must be a map.
+
+The _incremental stream update result_ must contain an entry with the key
+{"hasNext"}, and may contain entries with the keys {"pending"}, {"incremental"},
+{"completed"}, and {"extensions"}. Unlike the _initial incremental stream
+result_, an _incremental stream update result_ must not contain entries with
+keys {"data"} or {"errors"}.
+
+The value of {"hasNext"} must be {true} for all but the last response in the
+_incremental stream_. Otherwise, {"hasNext"} must be {false}.
+
+The values of {"pending"}, {"incremental"}, and {"completed"}, if present, are
+defined in the same way as in an _initial incremental stream result_, as
+described in the "Incremental Pending Notice", "Incremental Result", and
+"Incremental Completion Notice" sections below.
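The structural rules for an _incremental stream update result_ can be illustrated with a non-normative checker; `is_last` is an assumed flag indicating the final payload of the stream.

```python
# Non-normative sketch of the structural rules for an incremental stream
# update result; not part of the specification.
def is_valid_update_result(result, is_last):
    if "data" in result or "errors" in result:
        return False  # only the initial incremental stream result carries these
    if result.get("hasNext") != (not is_last):
        return False  # hasNext is false only on the final payload
    for key in ("pending", "incremental", "completed"):
        if key in result and not result[key]:
            return False  # these entries must be non-empty when present
    return True
```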
+
+The value of {"extensions"}, if present, is defined in the same way as in an
+_execution result_, as described in the "Extensions" section below.
+
 ### Response Position

@@ -94,6 +177,9 @@ represents a path in the response, not in the request.

When a _response path_ is present on an _error result_, it identifies the
_response position_ which raised the error.

+When a _response path_ is present on an _incremental pending notice_, it
+identifies the _response position_ of the incremental data update.
+
A single field execution may result in multiple response positions. For example,

```graphql example
@@ -323,17 +409,186 @@ discouraged.

### Extensions

-The {"extensions"} entry in an _execution result_ or _request error result_, if
-set, must have a map as its value. This entry is reserved for implementers to
+The {"extensions"} entry in an _execution result_, _request error result_,
+_initial incremental stream result_, or an _incremental stream update result_,
+if set, must have a map as its value. This entry is reserved for implementers to
 extend the protocol however they see fit, and hence there are no additional
 restrictions on its contents.
+
+### Incremental Pending Notice
+
+:: An _incremental pending notice_ is used to communicate to clients that the
+GraphQL service has chosen to incrementally deliver data associated with a
+`@defer` or `@stream` directive. Each incremental pending notice corresponds to
+a specific `@defer` or `@stream` directive located at a _response position_ in
+the response data. The presence of an incremental pending notice indicates that
+clients should expect the associated data in either the current response or one
+of the following responses.
+
+**Incremental Pending Notice Format**
+
+An _incremental pending notice_ must be a map.
+
+An _incremental pending notice_ must contain entries with the keys {"id"} and
+{"path"}, and may contain an entry with key {"label"}.
+
+The value of {"id"} must be a string.
This {"id"} should be used by clients to
+correlate incremental pending notices with each _incremental result_ and
+_incremental completion notice_. The {"id"} value must be unique across the
+entire _incremental stream_ response. There must not be any other incremental
+pending notice in the _incremental stream_ with the same {"id"}.
+
+The value of {"path"} must be a _response position_. When the incremental
+pending notice is associated with a `@stream` directive, it indicates the list
+at this _response position_ is not known to be complete. Clients should expect
+the GraphQL service to incrementally deliver the remaining items of this list.
+When the incremental pending notice is associated with a `@defer` directive, it
+indicates that the response fields contained in the deferred fragment are not
+known to be complete. Clients should expect the GraphQL service to incrementally
+deliver the remainder of the fields contained in the deferred fragment at this
+_response position_.
+
+If the associated `@defer` or `@stream` directive contains a `label` argument,
+the incremental pending notice must contain an entry {"label"} with the value of
+this argument. Clients should use this entry to differentiate the _incremental
+pending notices_ for different deferred fragments at the same _response
+position_.
+
+If an incremental pending notice is not returned for a `@defer` or `@stream`
+directive, clients must assume that the GraphQL service chose not to
+incrementally deliver this data, and the data can be found either in the
+{"data"} entry in the _initial incremental stream result_, or in a prior
+_incremental stream update result_ in the _incremental stream_.
+
+:: The _associated incremental pending notice_ of an _incremental result_ or
+_incremental completion notice_ is the _incremental pending notice_ whose {"id"}
+entry has the same value as the {"id"} entry of the given incremental result or
+incremental completion notice.
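As a non-normative illustration, a client might index _incremental pending notices_ by {"id"} to correlate later entries, relying on the uniqueness requirement above; the helper names are assumptions of this sketch.

```python
# Non-normative sketch: index pending notices by id so that later incremental
# results and completion notices can be correlated; ids must be unique.
def index_pending_notices(pending):
    by_id = {}
    for notice in pending:
        if notice["id"] in by_id:
            raise ValueError("duplicate pending notice id: " + notice["id"])
        by_id[notice["id"]] = notice
    return by_id

def associated_pending_notice(by_id, entry):
    # The associated pending notice shares the entry's id value.
    return by_id[entry["id"]]
```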
+
+### Incremental Result
+
+:: The _incremental result_ is used to deliver data that the GraphQL service has
+chosen to incrementally deliver. An incremental result may be either an
+_incremental list result_ or an _incremental object result_.
+
+An _incremental result_ must be a map.
+
+Every _incremental result_ must contain an entry with the key {"id"}, the value
+of which is a string referencing its _associated incremental pending notice_.
+The associated incremental pending notice must appear either in the _initial
+incremental stream result_, in a prior _incremental stream update result_, or in
+the same _incremental stream update result_ as the _incremental result_ that
+references it.
+
+#### Incremental List Result
+
+:: An _incremental list result_ is an _incremental result_ used to deliver
+additional list items for a list field with a `@stream` directive. The
+_associated incremental pending notice_ for this _incremental list result_ must
+be associated with a `@stream` directive.
+
+The _response position_ for an _incremental list result_ is the {"path"} entry
+from its _associated incremental pending notice_.
+
+**Incremental List Result Format**
+
+Every _incremental list result_ must contain an {"items"} entry. The {"items"}
+entry must contain a list of additional list items for the list field in the
+incremental list result's _response position_. The value of this entry must be a
+list of the same type as the response field at this _response position_.
+
+If any _execution error_ were raised during the execution of the results in
+{"items"} and these errors propagated to the _response position_ of the
+_incremental list result_ (i.e. the streamed list), or a parent response
+position of the incremental list result's response position (i.e. a parent of
+the streamed list), the incremental list result is considered failed and should
+not be included in the _incremental stream_.
When an incremental list result fails in this way, the errors that caused the
failure will be included in an _incremental completion notice_.

If any _execution errors_ were raised during the execution of the results in
{"items"} and no such error propagated to the _response position_ of the
_incremental list result_ or to a parent response position of the incremental
list result's response position, the incremental list result must contain an
entry with the key {"errors"} containing these execution errors. The value of
this entry is described in the "Errors" section.

#### Incremental Object Result

:: An _incremental object result_ is an _incremental result_ used to deliver
additional response fields that were contained in one or more fragments with a
`@defer` directive. The _associated incremental pending notice_ for this
_incremental object result_ must be associated with a `@defer` directive.

**Incremental Object Result Format**

An _incremental object result_ may contain a {"subPath"} entry. If such an
entry is present, the _response position_ of the incremental object result is
the result of appending the value of this {"subPath"} to the value of the
{"path"} entry of the _associated incremental pending notice_. If no
{"subPath"} entry is present, the _response position_ is the value of the
associated incremental pending notice's {"path"} entry.

An _incremental object result_ may be used to deliver data for response fields
that were contained in more than one deferred fragment.

In that case, the _associated incremental pending notice_ of the incremental
object result must be one of the _incremental pending notices_ corresponding to
a fragment that contained the delivered response fields. If these incremental
pending notices have {"path"} entries of varying lengths, one of the
incremental pending notices with the longest {"path"} must be chosen, to
minimize the size of the {"subPath"}.

Every _incremental object result_ must contain a {"data"} entry.
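
For example, an _incremental object result_ delivering deferred fields beneath
a nested position might look like this (the {"id"}, {"subPath"}, and field
values are purely illustrative); its _response position_ is the associated
incremental pending notice's {"path"} with `["homeWorld"]` appended:

```json example
{
  "id": "0",
  "subPath": ["homeWorld"],
  "data": { "terrain": "desert" }
}
```
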
The {"data"} +entry must contain a map of additional response fields. The {"data"} entry in an +incremental object result will be of the type of the field at the incremental +object result's _response position_. + +If any _execution error_ were raised during the execution of the results in +{"data"} and these errors propagated to a parent _response position_ of the +_incremental object result_'s response position, the incremental object result +is considered failed and should not be included in the incremental stream. The +error that caused this failure will be included in an _incremental completion +notice_. + +If any _execution error_ were raised during the execution of the results in +{"data"} and no such error propagated to a parent _response position_ of the +_incremental object result_'s response position, the incremental object result +must contain an entry with key {"errors"} containing these execution errors. The +value of this entry is described in the "Errors" section. + +### Incremental Completion Notice + +:: An _incremental completion notice_ is used to communicate that the GraphQL +service has completed the incremental delivery of the data associated with the +_associated incremental pending notice_. The corresponding data must have been +completed in the same _initial incremental stream result_ or _incremental stream +update result_ in which this incremental completion notice appears. + +**Incremental Completion Notice Format** + +An _incremental completion notice_ must be a map. + +An _incremental completion notice_ must contain an entry with the key {"id"}, +and may contain an entry with the key {"errors"}. + +The value of {"id"} must be a string referencing its _associated incremental +pending notice_. 
The associated incremental pending notice must appear either in the _initial
incremental stream result_, in a prior _incremental stream update result_, or
in the same _incremental stream update result_ as the _incremental completion
notice_ that references it.

The value of {"errors"}, if present, informs clients that the delivery of the
data for the _associated incremental pending notice_ has failed, due to an
_execution error_ propagating to a parent _response position_ of the
_incremental result_'s response position. The {"errors"} entry must contain
these execution errors. The value of this entry is described in the "Errors"
section.

### Additional Entries

To ensure future changes to the protocol do not break existing services and
clients, any of the maps described in the "Response" section (with the
exception of {"extensions"}) must not contain any entries other than those
described above. Clients must ignore any entries other than those described
above.

## Serialization Format