7.4.2.2 Standardization of BBS#
BBS# is currently being standardized by AFNOR (the French Standardization Association). Also note that a new standard on Attribute-Based Credentials has been launched by ISO/IEC SC 27 (ISO/IEC AWI 24843 - Information security - Attribute-Based Credentials). Orange and the Austrian Institute of Technology (AIT) will be the editors of this new project, which might include the BBS/BBS# family of protocols.
7.4.3 Feasibility of using BBS+ or BBS# with W3C VCDM and mdoc
7.4.3.1 BBS+ applied to W3C VCDM
The analysis in clause 5.4.2.2 concludes that, if ISO/IEC 24843 [i.185] and/or ISO/IEC CD 27565 [i.191] standardize BBS+ according to IRTF CFRG BBS, then W3C BBS Cryptosuite v2023 [i.267] can be enhanced to reference such an ISO standard. In such a scenario, the W3C Verifiable Credential Data Integrity 1.0 specification [i.263] would refer to an ISO compliant version of W3C BBS Cryptosuite v2023. That would in turn mean that the W3C Verifiable Credentials Data Model v2.0, in conjunction with W3C Verifiable Credential Data Integrity 1.0, would be underpinned with an ISO standardized version of BBS+. It should however be observed that the ARF [i.71] requires the JSON PID to be compliant with the W3C Verifiable Credentials Data Model v1.1 with JWT encoding. Since an ISO standardized version of BBS+ would require W3C Verifiable Credentials Data Model v2.0 [i.265] with JSON-LD encoding, it would not be compatible with the ARF.

NOTE: It is not entirely clear what the ARF text requires in terms of W3C VCDM compliance. Section 6.2.2, Table 3 in the ARF text [i.71] requires that the presentation of an attestation is compliant with W3C VCDM 1.1, which means that the presentation includes verifiable statements about subject-predicate-value triplets that can be modelled as a graph. Section 7.5.3 of [i.71] requires that the issuance is compliant with the W3C VCDM 1.1. However, section 7.5.3 of [i.71] also requires that attestations are JWT based (optional support only for JSON-LD) and secured using SD-JWT. It is not clear how this compliance is to be achieved, i.e. whether enveloping and/or mapping is intended, and how enveloping would work with selective disclosure. The present document recommends using SD-JWT VC and relying on a mapping approach to ensure VCDM 1.1 compliance. If SD-JWT VCs are used, it is not clear how BBS+ can secure such attestations.

Hence, in order to support an ISO standardized version of BBS+, it is recommended to update the ARF to allow for W3C Verifiable Credentials Data Model v2.0, or preferably to specify such a format in the forthcoming ETSI TS 119 472-1 [i.97] standard on (Q)EAA profiles.

Note that for a conversion between a JSON or JSON-LD based document and multi-message signature schemes (such as BBS), choices have to be made that have an impact on the complexity of the overall signature scheme and possibly also on Zero-Knowledge features. The W3C Data Integrity BBS Cryptosuites v1.0 [i.255] has made such choices for the data transformation, which are currently optimized for the three variants of the IETF BBS drafts. Following the transformation of W3C DI BBS, the individual messages in the BBS signature scheme hold the full RDF canonicalized messages (see clause E.1.1 for examples of n-quads), such as:

[ … "_:b3 <https://windsurf.grotto-networking.com/selective#boards> _:b2 .\n", "_:b3 <https://windsurf.grotto-networking.com/selective#sailNumber> \"Earth101\" .\n" … ]

This choice encodes the semantics (n-quads) together with the values, which works well with the current set of features of the IETF BBS drafts, but would make more complex proofs over values, like range proofs and equality proofs, significantly more difficult to construct and implement. Those trade-offs should be carefully taken into consideration when choosing data and container formats for BBS+ based credentials. Ecosystems should understand the more advanced features they want to implement before making those choices.

There are other options for container formats for BBS+, like JSON Web Proofs (see clause 5.5.1 for more details on JWP).
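To make the transformation concrete, the following minimal sketch (in Python) shows how the canonicalized n-quads above could be handed to a BBS multi-message signing operation as individual messages. The `bbs_sign` function is a hypothetical stand-in for a library implementing the IRTF CFRG BBS operations; the point is only that each message carries a full n-quad, so semantics and value are disclosed (or hidden) together.

```python
# Minimal sketch: each RDF-canonicalized n-quad becomes one message in the
# BBS multi-message signature, following the W3C DI BBS transformation idea.
# bbs_sign is a hypothetical stand-in for an IRTF CFRG BBS implementation.

def bbs_sign(secret_key: bytes, header: bytes, messages: list) -> bytes:
    """Placeholder for the CFRG BBS Sign operation over octet-string messages."""
    raise NotImplementedError("use a real BBS library here")

messages = [
    '_:b3 <https://windsurf.grotto-networking.com/selective#boards> _:b2 .\n',
    '_:b3 <https://windsurf.grotto-networking.com/selective#sailNumber> "Earth101" .\n',
]

# One signature covers all n-quads; a derived proof can later disclose only
# a subset, e.g. index 1 (the sail number), while keeping the rest hidden.
disclosed_indexes = [1]
disclosed = {i: messages[i] for i in disclosed_indexes}

# signature = bbs_sign(secret_key=b"...", header=b"", messages=[m.encode() for m in messages])
```

Because the value "Earth101" is embedded inside the committed n-quad string, a range or equality proof over the value alone would have to reach inside a committed message, which is what makes such proofs harder with this encoding.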
7.4.3.2 BBS# applied to mdoc
BBS# can be made compatible with the ISO mDL device retrieval flow, for which selective disclosure is based on salted attribute digests. The use of BBS# with mdoc requires slight modifications to the BBS# issuance and selective disclosure protocols described in clause 4.4.3. A summary of how BBS# can be applied to the MSO is given below.

The issuer creates a MAC_BBS authentication tag σ on the user's mdoc authentication key pk and on the L salted attribute digests {h_1, ..., h_L}. The user's Mobile Security Object (MSO), in the terminology of ISO/IEC 18013-5 [i.181], consists of its public key pk, the digests {h_1, ..., h_L} and the MAC_BBS authentication tag on these data: MSO = (pk, {h_1, ..., h_L}, σ).

During the ISO mDL device retrieval flow, which involves selective disclosure, the user creates a signature on the set of data referred to as "DeviceAuthenticationBytes" in the ISO/IEC 18013-5 [i.181] standard. The signature is a proof that the MSO originates from the user, who holds the underlying MAC_BBS authentication tag σ on the attributes disclosed to the relying party. A complete description of how BBS# can be applied to the ISO mDL device retrieval flow is available in Annex G.
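The following minimal sketch (Python, illustrative only) shows the data such a modified MSO would carry: ISO mdoc style salted attribute digests, the user's authentication key pk, and an issuer tag over both. The `mac_bbs` function and the key bytes are placeholders; the real tag is the algebraic MAC of clause 4.4.3, not a hash.

```python
import hashlib
import os

def salted_digest(salt: bytes, element: bytes) -> bytes:
    """ISO mdoc style digest over a random salt and a data element value."""
    return hashlib.sha256(salt + element).digest()

# Hypothetical user mdoc authentication public key pk (placeholder bytes).
pk_user = bytes.fromhex("04" + "ab" * 64)

attributes = {"family_name": b"Doe", "age_over_18": b"\x01"}
salts = {name: os.urandom(16) for name in attributes}
digests = {name: salted_digest(salts[name], value)
           for name, value in attributes.items()}

def mac_bbs(issuer_key: bytes, pk: bytes, digest_list: list) -> bytes:
    """Placeholder for the BBS# algebraic MAC over pk and the L digests.
    The real construction is an algebraic MAC (clause 4.4.3), not a hash;
    this stub only fixes which inputs the tag commits to."""
    return hashlib.sha256(issuer_key + pk + b"".join(digest_list)).digest()

# MSO = (pk, {h_1, ..., h_L}, sigma) as described above.
mso = {
    "deviceKey": pk_user,
    "valueDigests": digests,
    "tag": mac_bbs(b"demo-issuer-secret", pk_user, sorted(digests.values())),
}
```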
7.4.3.3 BBS# applied to W3C VCDM
BBS# is considered to be compatible with the W3C Verifiable Credentials Data Model (VCDM) v2.0, given the following prerequisites:
• W3C VCDM v2.0 is compatible with BBS/BBS+ as stated in clause 7.4.3.1, and the credential format can be preserved for BBS#.
• W3C VCDM v2.0 leverages the Data Integrity BBS Cryptosuite for proofs, which can be extended to BBS#.
• Additional BBS# specific attributes, such as non-revocation proofs, can be defined as extensions to W3C VCDM.
7.4.4 Post-quantum considerations for BBS+ and BBS#
As discussed in clause 4.4.2.6, and as further elaborated on in clause 9, BBS+ multi-message signatures and disclosures that are generated in a pre-quantum world will remain confidential in a post-quantum world. Put differently, a computationally unbounded attacker will not be able to reveal either undisclosed BBS+/BBS# messages or the hidden signature value. As regards BBS#, and as discussed in clause 4.4.3.4, the (Gap) q-SDH assumption is not quantum-safe, so an attacker in a post-quantum world will be able to forge BBS# credentials.

In a post-quantum world, however, neither BBS+ nor BBS# can maintain data integrity and authenticity. An attacker with a quantum computer can derive the signer's private key from the public key and forge new signatures and proofs. Clause 9 discusses the prerequisites of this attack, its potential impact, and how to protect against it in greater detail.
7.4.5 Conclusions of using BBS+ and BBS# applied to eIDAS2
7.4.5.1 Conclusions of applying BBS+ to eIDAS2
An analysis of the BBS+ scheme applied to an eIDAS2 context results in the following observations and recommendations:
• The BBS+ algorithm would need to be standardized according to ISO/IEC 24843 [i.185] in order to comply with the EU regulation 1025/2012 [i.105] on standardization.
• A standardized profile of W3C BBS Cryptosuite v2023 would need to reference the ISO standardized version of BBS+. It is recommended that ETSI TC ESI standardize such a profile.
• A standardized (Q)EAA/PID profile of the W3C Verifiable Credentials Data Model (VCDM) v2.0, in conjunction with W3C Verifiable Credential Data Integrity (VCDI) 1.0, would need to be specified, and reference the standardized W3C BBS Cryptosuite v2023. It is recommended that ETSI TC ESI standardizes such profiles if attestation formats are to be W3C VCDM compliant and secured using BBS+.
• The issuing QTSPs/PIDPs would need to implement such ETSI standards in order to issue (Q)EAAs/PIDs compliant with the ARF and signed with the BBS+ algorithm.
• The BBS+ signature verifier corresponds to an eIDAS2 relying party (that will validate the BBS+ multi-message signatures generated by the (Q)EAA/PID).
• The eIDAS2 relying party should use the eIDAS2 EU TL to retrieve the QTSP/PIDP trust anchor.
• The eIDAS2 relying party should validate the BBS+ multi-message signature (finalized by the EUDI Wallet) according to the principles described in the IRTF CFRG BBS specification (or the future ISO standard on BBS+); the issuer's signature should be validated by using the QTSP/PIDP trust anchor.
NOTE: The BBS+ algorithm would cater for full unlinkability.
• The EUDI Wallets need to support the BBS+ algorithm in cryptographic keys management systems as specified in clause 6.5.3 of the ARF [i.71]. As described in clause 7.6, such cryptographic keys management systems with support for BBS+ could preferably be remote HSMs (with BBS+ support) or SIM-cards with support for BBS_MAC/BBS+ (see clause 6.6.4).
• A long term (Q)EAA/PID based on BBS+ should be used in a pre-quantum world only. The QTSP/PIDP should plan for migrating to quantum-safe cryptographic algorithms in a post-quantum world.
These observations and recommendations should be considered with respect to selective disclosure for ETSI TS 119 462 [i.95], ETSI TS 119 471 [i.96] and ETSI TS 119 472-1 [i.97].
7.4.5.2 Conclusions of applying BBS# to eIDAS2
An analysis of the BBS# scheme applied to an eIDAS2 context results in the following observations and recommendations:
• The BBS# algorithm would need to be standardized in order to comply with the EU regulation 1025/2012 [i.105] on standardization.
• BBS# could be made compatible with the mdoc and SD-JWT formats, and, if standardized, this would bring full unlinkability to the associated protocols.
• The issuing QTSPs/PIDPs would need to implement such ETSI standards in order to issue (Q)EAAs/PIDs compliant with the ARF and signed with the BBS# algorithm.
• The BBS# signature verifier corresponds to an eIDAS2 relying party (that will validate the BBS# multi-message signatures generated by the (Q)EAA/PID).
• The eIDAS2 relying party should use the eIDAS2 EU TL to retrieve the QTSP/PIDP trust anchor.
• BBS# is compatible with the existing security infrastructure (secure elements and HSMs) that could be used for WSCDs. BBS# optimizes performance and security when deployed in combination with an HSM based WSCD.
• BBS# can use either pairing-friendly curves, in which case it does not require additional interactions with the issuer for the proof, or pairing-free curves. Hence, BBS# leverages a SOG-IS approved holder binding cryptographic protocol (ECDSA), yet preserves all privacy properties of BBS/BBS+.
The BBS# signature scheme would meet the requirements of the EUDI Wallet cryptographic keys management systems as specified in clause 6.5.3 of the ARF [i.71]. Any ECDSA capable certified WSCD would suffice, combined with the proper software support in the EUDI Wallet.
BBS# is not forgery safe in a post-quantum world but supports everlasting privacy, which offers a reasonably long window of opportunity. A long term (Q)EAA/PID based on BBS# should be used in a pre-quantum world only. The QTSP/PIDP should plan for migrating to quantum-safe cryptographic algorithms in a post-quantum world.
At this stage, BBS# matches the ZKP scheme security requirements while at the same time being efficient and non-circuit based. Furthermore, BBS# optimizes deployment with HSM based WSCDs, which seem to be the preferred solutions for several EUDI Wallets across the EU Member States. BBS# still lacks standardization but would be easily amenable to the mdoc format and is already used commercially by dock.io. An overview of the implementation of a trust model based on BBS# is described in [i.217].

7.5 Feasibility of programmable ZKPs applied to eIDAS2 (Q)EAAs
7.5.1 Background and existing solutions
As discussed in clause 6.5, there exist two implementations of ZKP schemes (zk-SNARKs) that are utilized for sharing selectively disclosed attributes and revocation status information.

The Cinderella project (see clause 6.5.2) has integrated support for zk-SNARKs in TLS software libraries, which allows Cinderella pseudo-certificates with selected attributes and optional OCSP stapled responses to be communicated over the TLS handshake. More specifically, the Belgian, Estonian, and Spanish national eID smartcards with X.509 QCs have been successfully tested with the Cinderella TLS implementation. Hence, the existing eIDAS PKI infrastructure is re-used without modifications. Configuring or refreshing the Cinderella pseudo-certificates can take up to nine minutes, and should therefore be performed offline, but the online verification takes only 10 ms.

The zk-creds project (see clause 6.5.3) has implemented anonymous credentials by using ZKP for credentials derived from ICAO compliant eMRTDs (passports). The ZKP is essentially generated based on the eMRTD's Data Group 1, which contains the textual information available on the eMRTD's data page and the Machine Readable Zone: name, issuing state, date of birth, and passport expiry.

Hence, the Cinderella and zk-creds projects have demonstrated with their prototypes that ZKP schemes can be used with existing digital identity infrastructures to share selected attributes of X.509 certificates and ICAO eMRTDs.
7.5.2 Extensions to EUDI Wallets, relying parties and protocols
In order for an EUDI Wallet to use zk-SNARKs with existing credentials (such as X.509 certificates), a circuit compiler (such as the Geppetto compiler) is needed to integrate the zk-SNARK client circuits into the EUDI Wallet. Furthermore, the authentication protocol (such as TLS) needs to be enhanced in order to generate pseudo-certificates that can be validated by the relying party (TLS server). The EUDI Wallet would need to download the trusted roots based on the EU Trusted List (TL) in order to validate the status of the X.509 certificate and the optional OCSP response. The relying party needs to be extended in order to validate the pseudo-certificates and the proof of the OCSP response. The Cinderella project has demonstrated that this is feasible with TLS and X.509 certificates. In a similar fashion, the zk-creds project has demonstrated that it is possible to share selected attributes of an ICAO eMRTD by using ZKP schemes.

Since the ARF specifies mdoc and mandates W3C VCDM compliance for the PID formats, it would be of interest to investigate whether the EUDI Wallet could be extended with zk-SNARK client circuits and policy templates that can generate selected attributes of pseudo-versions of mdocs and/or W3C VCDM compliant VCs (e.g. SD-JWT VC with mapping) and optional stapled revocation information.

Furthermore, the ARF [i.71] specifies OID4VP [i.214] as the presentation protocol for the EUDI Wallet. Hence, it would be of interest to specify a profile of OID4VP with a DIF Presentation Definition (OID4VP request) [i.81] and DIF Presentation Submission (OID4VP response) [i.81] that could use programmable ZKP schemes to present selected attributes of pseudo-versions of mdocs and/or W3C VCDM compliant VCs and optional stapled revocation information; an illustrative request is sketched below. Since zk-SNARKs can cater for full unlinkability, this feature would be inherited by the EUDI Wallets as well. Also, it is recommended to select zk-SNARKs that are plausibly quantum computing safe (see Table A.4).
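As an illustration of the baseline request shape such a profile would extend, the following sketch shows a DIF Presentation Definition asking for a single selectively disclosed claim. The identifiers, format entry and claim path are hypothetical examples, and a ZKP-capable profile would add further proof parameters on top of this structure.

```python
# Illustrative DIF Presentation Definition (the OID4VP request payload);
# ids, format entry and claim path are hypothetical examples.
presentation_definition = {
    "id": "example-age-check",
    "input_descriptors": [
        {
            "id": "pid-age-over-18",
            "format": {"vc+sd-jwt": {}},   # or an mdoc format identifier
            "constraints": {
                "limit_disclosure": "required",  # request selective disclosure
                "fields": [
                    {"path": ["$.age_over_18"]},
                ],
            },
        }
    ],
}
```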
7.5.3 Conclusions of programmable ZKPs applied to eIDAS2 (Q)EAAs
An analysis of the ZKP scheme applied to (Q)EAAs, QCs or PIDs in an eIDAS2 context results in the following observations and recommendations:
• The EUDI Wallets would need to be extended with programmable ZKP circuits and policy templates in order to generate pseudo-credentials with selected attributes of (Q)EAAs, QCs or PIDs and optional stapled revocation information. The EUDI Wallet should use the eIDAS2 EU TL to retrieve the QTSP/PIDP trust anchor. The zk-SNARK trusted roots would need to be configured as well.
• The issuing QTSPs/PIDPs can re-use the existing eIDAS framework and related ETSI standards in order to issue QCs. The eIDAS2 framework and planned ETSI standards for issuance of (Q)EAAs/PIDs can also be used without modifications. The QTSP/PIDP trust anchor can be published at an eIDAS2 EU TL.
• The verifier corresponds to an eIDAS2 relying party (that will validate zk-SNARK proofs and pseudo-credentials generated out of the (Q)EAA/QC/PID). The eIDAS2 relying parties would need to be extended with zk-SNARK circuits and policy templates in order to validate the pseudo-credentials and stapled revocation information.
NOTE: The zk-SNARK scheme would cater for full unlinkability.
• zk-SNARKs that are plausibly quantum computing safe (see Table A.4) should be used.
• OID4VP would need to be extended in order for an EUDI Wallet to present the pseudo-credentials with selected attributes and stapled revocation information to a relying party.
These observations and recommendations should be considered with respect to selective disclosure for ETSI TS 119 462 [i.95], ETSI TS 119 471 [i.96] and ETSI TS 119 472-1 [i.97]. Implementations of the programmable ZKP schemes in the EUDI Wallets and relying parties may be implemented and evaluated as part of the eIDAS2 large scale pilots.
7.6 Secure storage of PID/(Q)EAA keys in EUDI Wallet
7.6.1 General
The mdoc authentication key and SD-JWT holder binding keys should be protected in the device's Trusted Execution Environment (TEE) or a Secure Element (SE). The user should be able to access the mdoc authentication key and SD-JWT holder binding key by authenticating with a PIN-code or the use of biometrics. There exist implementations and large scale deployments of mdoc for Apple iOS® and Google Android®, which utilize Secure Elements to protect the mdoc authentication key.

Several mdoc and SD-JWT data elements are PII and should therefore be stored securely. Encryption at rest of the SD-JWT is recommended, and if possible the SE/TEE should be used to perform the encryption, with keys protected by the SE/TEE; otherwise the mdoc and SD-JWT should be stored in the SE/TEE. Alternatively, the ISO MSO or SD-JWT keys could be protected in a remote HSM or external device, which are the other cryptographic keys management systems specified in clause 6.5.3 of the ARF [i.71]. The ARF [i.71], clause 6.5.3 and Table 5, also specifies how to store and access the PID/(Q)EAA cryptographic keys in a device used by the EUDI Wallet.

Since BBS+ is not (yet) selected for any PID format, there is no specification in the ARF about storage of or access to BBS+ credentials and keys. However, the research paper "Improved Algebraic MACs and Practical Keyed-Verification Anonymous Credentials" [i.15] describes how to efficiently implement a BBS_MAC/BBS+ variant on a SIM-card, which can be considered as an external cryptographic device that can be accessed by a mobile device. It is also plausible that HSMs will in the near future be equipped with the BBS+ algorithm, which would then allow the EUDI Wallets to access BBS+ credentials and keys in a remote HSM. It is however unlikely that BBS+ will be implemented in embedded Secure Elements in the near future.

BBS# can leverage any ECDSA (or ECSDSA) signature capable WSCD, making it suitable for the same widely available security infrastructures deployed today as any other ECDSA based solution, be it a secure element in or next to a phone or an HSM deployed in a secure environment. As noted above, BBS# is particularly well suited to HSM deployments. In addition to the WSCD, though, randomization keys have to be managed inside the EUDI Wallet's WSCA or WSCI. Some keys are ephemeral and do not need to be stored in the EUDI Wallet after the transaction (typically those used for presentations), while others need to be stored for a longer time (typically those used to randomize the keys provided to issuers), as they have to be retrieved later on to perform proofs. This means that a key generation mechanism has to be managed in the EUDI Wallet's WSCA or WSCI, typically leveraging smartphone capabilities such as TEEs or secure elements when available (irrespective of the WSCD location). As explained above, this part of the key is necessary for security but it is not critical, contrary to the part stored in the WSCD.

From a regulatory perspective, article 5c of eIDAS2 [i.103] specifies the legal requirements on EUDI Wallet certification, which will be defined in a Commission Implementing Regulation (CIR). This CIR will in turn refer to ENISA's EU Cybersecurity Certification (EUCC) scheme, which may regulate the certification requirements on protection of the PID/(Q)EAA as mdoc and SD-JWT. Furthermore, CEN TC/224 WG17 may specify Common Criteria Protection Profiles (CC PP) on how to protect the PID/(Q)EAA and associated cryptographic keys related to the ENISA EU-CC; such an EUDI Wallet CC PP may be based on TC/224 WG17 [i.54]. Also, TC/224 WG20 [i.55] is specifying how to onboard the PID to an EUDI Wallet, which involves the associated cryptographic key protection as well. Other certification standards that may underpin the ENISA EU-CC scheme are the GlobalPlatform TEE Protection Profile [i.118] and the Eurosmart PP-0117 Protection Profile for Secure Sub-System in System-on-Chip (3S in SoC) [i.106].

Additional recommendations on how to store and protect credentials and the associated cryptographic keys in a digital wallet are available in the DIF Wallet Security [i.82], ISO/IEC CD 23220-6 [i.188] and W3C Universal Wallet [i.262] specifications.

NOTE: Complete descriptions of storage of PID/(Q)EAA, protection of cryptographic keys and EUDI Wallet certifications go beyond the scope of the present document, but an overview is provided in the present clause since the cryptographic keys are of relevance to selective disclosure of PID/(Q)EAA in the formats of mdoc and SD-JWT.
7.6.2 Key splitting technique (relevant for BBS#)
The theory behind the BBS# key splitting technique is described in clause 5.2 of "Making BBS Anonymous Credentials eIDAS 2.0 Compliant" [i.78]. The present clause analyzes how BBS# key splitting can be applied to the EUDI Wallet.

The splitting technique of BBS# is similar to SECDSA, except that the "blinded"/"randomized" public key is included, for security reasons, in the message to be signed by the WSCD. Without it, the BBS# splitting technique would be vulnerable to a simple related-key attack (which unfortunately applies to SECDSA and therefore means that SECDSA is insecure).

The splitting technique of BBS# basically distributes the keys, and the associated computations, between the WSCD on one side and the EUDI Wallet's WSCA/WSCI on the other. The WSCD part is the Wallet Secure Cryptographic Device, which has to be AVA_VAN.5 certified. The WSCD hence protects the private key (or its cryptographic primitives). The WSCA/WSCI parts are responsible for randomization, which ensures unlinkability both at issuance - each VC can have its own unique public key - and at presentation. The WSCA/WSCI also create an additional security layer for "centralized" types of WSCDs (like HSMs) by descaling the effects of a potential HSM takeover: the attacker would also need to retrieve each of the random keys of each of the VCs from each of the EUDI Wallets' WSCA/WSCI. While these keys might not be as well protected as the WSCD keys, the bare fact that they reside on other platforms breaks the global reach of a successful HSM takeover.

Finally, whenever a "combined" proof needs to be performed leveraging multiple VCs in a single VP, as all VPs use the same unique WSCD key, the calculation can be performed only once and the rest of the computations are then diversified locally on the wallet. This avoids the complex issue of having to authorize multiple transactions on a single WSCD with a single unlocking action from the end user, and also reduces the load on the HSM when performing combined proofs, thus increasing its scalability.
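The following toy sketch (Python) illustrates the multiplicative key split described above, using exponentiation in a small Schnorr group as a stand-in for the elliptic-curve arithmetic used in practice. The parameters are demo-sized and insecure, and the actual BBS#/SECDSA-style signing flow from [i.78] is reduced to the one data-flow property that matters here: the randomized public key is bound into what the WSCD signs.

```python
import secrets

# Demo Schnorr group (NOT secure): p = 2q + 1, subgroup of order q;
# it stands in for the elliptic curve group used by BBS#/ECDSA in practice.
p, q = 2039, 1019
g = 4  # generator of the order-q subgroup

# 1) The WSCD generates and guards the long-term private key sk;
#    only the public key pk = g^sk ever leaves the WSCD.
sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

# 2) Per credential, the wallet's WSCA/WSCI draws a fresh blinding factor b
#    and derives the randomized public key pk_b = pk^b = g^(sk*b), so each
#    VC can carry its own, unlinkable public key.
b = secrets.randbelow(q - 1) + 1
pk_b = pow(pk, b, p)

# 3) Crucially, pk_b is bound into the message the WSCD signs; per [i.78],
#    omitting it enables the related-key attack that breaks SECDSA.
message_to_sign = (b"DeviceAuthenticationBytes", pk_b)

# The effective per-credential secret is sk*b mod q: compromising the WSCD
# alone (sk) or the wallet alone (b) is not enough to impersonate the user.
assert pow(g, (sk * b) % q, p) == pk_b
```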
7.7 The proportionality of privacy goals
7.7.1 General
The present clause examines the complexity costs and practical implications of key privacy goals, structured around the two principal events where privacy preservation is most relevant: issuance and presentation. It focuses on core privacy objectives (issuer and verifier unlinkability, selective disclosure, pseudonymity, and unlinkable revocation) and discusses whether these are proportionate given the practical feasibility of the technical approaches used to achieve them. Rather than providing exhaustive coverage, the aim is to support a systematic evaluation of the trade-offs between privacy and practical feasibility across a set of representative scenarios.

Different Levels of Assurance (LoA) significantly influence the cost of achieving privacy goals. For example, many PID issuers operating at LoA High face legal and operational constraints that limit the use of certain privacy-preserving technologies. Relying on salted attribute digests for selective disclosure can make it prohibitively expensive to meet legal requirements for full unlinkability through technical means alone. In contrast, private actors issuing at LoA Substantial encounter significantly lower complexity costs.

Efforts to reduce the cost of achieving key privacy goals in LoA High issuance contexts (such as standardization, broader hardware vendor support, and changes to operational and legal frameworks) are underway. The results presented below are subject to change as cost-reduction efforts advance and should be interpreted within the appropriate LoA context.
7.7.2 Issuance
Issuance requires identity verification and Proof of Possession (PoP) of a hardware-protected key (the hardware protection is especially burdensome at LoA High). The PoP inherently links the attestation to the user's identity, as the issuer knows who receives which attestation and when. Issuer unlinkability (assuming issuer-verifier collusion) requires that no value in the attestation reduces uncertainty about the user's identity beyond the disclosed identity attributes. Any value in the attestation - timestamps, salts, the issuer's signature - can be linked to the previously identified user. While many of these values can be blinded, achieving issuer unlinkability at LoA High incurs prohibitive complexity costs if limited to conventional cryptography, due to the requirement of cut-and-choose based issuance (costs are high even if more recent schemes and solutions are considered, e.g. BBS+ or ZKP layering on top of the core wallet formats, as this requires certification, hardware support, and/or standardization efforts). Consequently, preventing issuer linkability - especially under collusion - is presently infeasible at LoA High (but available for private actors operating at LoA Substantial). As a result, most deployments of PID issuance have to accept this limitation, rely on regulatory mechanisms to prevent issuer collusion, and focus privacy protections on the presentation phase instead.

NOTE: Two main approaches are being explored to address the privacy limitations of core wallet formats. One approach is to layer ZKPs on top of an attestation. The other is to rely on private actors who can issue identity attestations at LoA Substantial, based on an underlying PID. Both approaches have distinct strengths and limitations, and efforts are ongoing to definitively assess their relative advantages or suitability within the EUDIW context.

With the above in mind, and before elaborating on why issuer unlinkability is presently practically infeasible at LoA High, there are two primary issuance models to consider:
1) Request-based issuance, in which a user explicitly requests a credential from an issuer following successful authentication.
2) Scheduled issuance, where credentials are issued proactively - e.g. after enrollment or at regular intervals - without a direct request for each individual credential.

In request-based issuance, the attestation is tailored to a specific verifier's request. This model increases some privacy risks but simplifies others:
• Revealed beyond attributes: Request timing and the attribute set reduce uncertainty about the target service and may uniquely identify it via auxiliary data (e.g. service registries).
• Issuer unlinkability: Requires removing all correlation handles (salts, timestamps, signature, public PoP key). No practical solution presently achieves this at acceptable cost at LoA High, but ongoing efforts show promise. Solutions exist at LoA Substantial.
• Verifier unlinkability: Achievable with short-lived, single-use attestations (assuming no issuer-verifier collusion) at LoA High. Achievable at LoA Substantial even assuming issuer-verifier collusion.
• Selective disclosure: Not needed. The attestation is scoped to the verifier's request, unless such scoping introduces a privacy risk necessitating selective disclosure.
• Pseudonyms: Easy to implement, but of limited value if colluding issuers are assumed when the users authenticate with identifying information.
• Validity status mechanism: Privacy can be preserved by using explicit validity periods, which reduce linkability compared to mechanisms such as status lists or online revocation services. However, there is currently no standardized or widely deployed method to maintain privacy when explicit revocation is required. That said, several research initiatives show promise and are approaching trial readiness.
• PoA mechanism: The issuer will always include the user's PoP key in the attestation, and the PoA links every key to a user with one and the same verified identity. While blinding the PoP key is technically feasible (but requires great care), it only achieves issuer unlinkability if all other correlation handles are blinded as well.

NOTE 1: The above purposefully ignores non-technical measures such as audits or organizational mechanisms, where each eliminated correlation handle has a positive impact.
NOTE 2: While it is possible to use long lived attestations in a request-based model, the benefits are unclear.

These considerations highlight a core issue: issuer unlinkability cannot be achieved by blinding a single value in isolation. While blinding individual values is often feasible, the issuer observes the full authenticated context, and any remaining correlation handle can enable linkage. This risk is further exacerbated by the issuer's ability to structure value combinations to increase linkability - for example, using unique combinations of nbf, iat, and exp. Consequently, issuer unlinkability requires blinding all handles and standardizing certain attributes (like nbf, iat, and exp), which is likely prohibitively costly at LoA High using only conventional cryptography. Thus, legal requirements for issuer unlinkability cannot be met without additional measures, such as ZKP layering or issuing identity attestations from a PID using more suitable signature schemes at LoA Substantial.

In scheduled issuance, the attestation includes all user attributes. This increases some privacy risks while reducing others:
• Revealed beyond attributes: The issuer cannot infer the target service from the timing or the attribute set.
• Issuer unlinkability: Same as request-based; eliminating all correlation handles is impractical at LoA High but otherwise achievable.
• Verifier unlinkability: Achievable with short-lived, single-use attestations (assuming no issuer-verifier collusion), possibly using batch issuance at LoA High. Long-lived, multi-show use requires cryptographic protections (e.g. ZKPs) to avoid reuse of correlatable values, but these are only effective if all correlation handles are eliminated.
• Selective disclosure: Required. Salted attribute hashes offer a practical compromise for verifier unlinkability. Advanced schemes can support issuer unlinkability if all correlation handles are eliminated.
• Pseudonyms: Same as in request-based issuance; limited utility under identifying authentication.
• Validity status mechanism: Short-lived attestations can use embedded validity periods, which limits linkability. Long-lived ones require a practical privacy-preserving revocation mechanism that can scale, which remains an open challenge (trials of promising candidates are planned).
• PoA mechanism: Same as request-based; unlinkability depends on blinding all correlation handles.

In summary, issuer linkability remains a fundamental challenge due to the reliance on an authenticated context during issuance. The issuer can use any value in the attestation - keys, timestamps, attributes - either in isolation or jointly, to reduce uncertainty about the identity subject. Full unlinkability would require eliminating all correlation handles, which is practically impossible using conventional cryptography at LoA High.

When discussing these correlation handles, important considerations include, but are not limited to:
• Issued set vs. eligible set: The issuer's identification scope is limited to users who have received attestations, reducing uncertainty from the broader eligible population (e.g. all individuals eligible for a driver's license) to the actual issuance set. This allows the issuer to perform re-identification within a smaller, more tractable group.
• Crafted attribute combinations: The issuer can construct unique identifiers by combining multiple attributes with high variability. For example, using a 60-second resolution for temporal fields such as nbf, exp, and iat yields 60³ = 216 000 possible combinations (see the sketch after this list). Additional claims like aud, scope, etc. (or even custom attributes) further increase identifiability. Mitigations may be costly.
• Structured randomness: Fields intended to appear random can be structured to encode user-identifying information. The issuer can create specific bit patterns to represent users in values that are indistinguishable from random.

To conclude: while request-based issuance offers privacy advantages by avoiding the use of unique salts or commitments and eliminating the need for revocation checks (both sources of linkability), the cost of using technical mechanisms to eliminate all correlation handles is disproportionate to the privacy benefits offered at LoA High, but achievable at LoA Substantial.
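The following toy calculation (Python) illustrates the crafted-combination risk referenced in the list above: a hypothetical malicious issuer packs a user index into the minute fields of nbf, iat and exp, exactly matching the 60³ = 216 000 figure.

```python
# Toy sketch of a crafted-combination covert channel: a user index is packed
# into the minute fields of nbf, iat and exp. With 60 values per field,
# 60**3 = 216 000 users can be tagged without any single field looking odd.

def encode_user(user_index: int) -> tuple:
    assert 0 <= user_index < 60 ** 3
    nbf_min = user_index % 60
    iat_min = (user_index // 60) % 60
    exp_min = user_index // 3600
    return nbf_min, iat_min, exp_min

def decode_user(nbf_min: int, iat_min: int, exp_min: int) -> int:
    return nbf_min + 60 * iat_min + 3600 * exp_min

combo = encode_user(123_456)
assert decode_user(*combo) == 123_456  # a colluding verifier recovers the tag
```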
7.7.3 Presentation
Presentation differs from issuance in two main ways that impact privacy. First, user authentication does not necessarily reveal an identity to the verifier (in contrast to issuance, where the issuer identifies the user prior to issuance). Second, presentation occurs in the context of service access, which can expose behavioural patterns and enable profiling. It is therefore essential that verifier unlinkability is achieved.

If issuers and verifiers collude, privacy is as difficult to preserve as during issuance at LoA High, due to the infeasibility of issuer unlinkability discussed in the previous clause. However, under a more realistic trust model - where issuers do not collude with verifiers, but verifiers may collude with each other - several privacy goals become achievable. In this setting, techniques such as selective disclosure, zero-knowledge proofs, and short-lived attestations are particularly effective in limiting linkability and preserving user privacy. At LoA Substantial, actors have several cost effective opportunities that work even under the more adversarial trust assumption of issuer-verifier collusion.

Several verifier-side steps during presentation carry privacy implications:
1) Wallet instance validation: If verifiers have to validate the wallet, this step has to be privacy-preserving. Short-lived attestations mitigate the issue if the verifier can rely on issuer-side validation. In contrast, long-lived attestations require wallet validation mechanisms that avoid introducing correlation handles. While such mechanisms exist, they increase complexity and may outweigh the benefits of long-term credentials.
2) Parsing the presentation: Disclosed attributes, metadata, and validity information can all enable correlation if any value - such as salts, PoP keys, or issuer signatures - is reused. Verifier unlinkability requires eliminating all correlation handles and minimizing auxiliary linkability. For example, index assignment in validity status lists should be randomized to avoid correlation based on sequential ordering. Similarly, static boolean values (e.g. age_over_18: true) require external validity context (e.g. nbf) to be meaningful; a major privacy risk when using long-lived attestations. The validity context can be blinded, but the techniques used for blinding the context could just as well be adopted to instead blind the age value (e.g. a Bulletproof range proof).
3) Proof of Association (PoA): When combining attributes from multiple attestations or linking PoP keys, the association proof has to be unlinkable and preferably third-party deniable. This can be achieved using, for example, interactive discrete log equivalence (DLEQ) ZKPs (see the sketch after this list).
4) Attribute value verification: Privacy-preserving techniques like Bulletproofs or other range proofs can be used to verify attribute values. Issuers have to ensure that all blinded value commitments and cryptographic hash digests are unique to ensure verifier unlinkability.
5) Validity checks: Validity checks have to be privacy-preserving to avoid introducing stable identifiers (e.g. revocation list positions). The simplest approach is to use short-lived attestations with embedded expiration, avoiding online revocation checks entirely. For long lived attestations, several privacy preserving revocation solution proposals approach trial readiness.
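As an illustration of step 3, the following toy sketch (Python) shows a Chaum-Pedersen DLEQ proof that two public values share the same secret exponent. For compactness it uses the non-interactive Fiat-Shamir variant over a demo-sized (insecure) group; the interactive variant recommended above for third-party deniability replaces the hash with a verifier-chosen challenge.

```python
import hashlib
import secrets

# Demo Schnorr group (NOT secure): p = 2q + 1, subgroup of order q.
p, q = 2039, 1019
g, h = 4, 9  # two generators of the order-q subgroup

def fs_challenge(*elems) -> int:
    """Fiat-Shamir challenge over the proof transcript."""
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover: the same secret x underlies both public values A and B
# (e.g. PoP keys from two disjoint attestations).
x = secrets.randbelow(q - 1) + 1
A, B = pow(g, x, p), pow(h, x, p)

r = secrets.randbelow(q - 1) + 1
t1, t2 = pow(g, r, p), pow(h, r, p)
c = fs_challenge(g, h, A, B, t1, t2)
s = (r + c * x) % q

# Verifier: both equations hold iff log_g(A) == log_h(B), without ever
# learning x itself.
assert pow(g, s, p) == (t1 * pow(A, c, p)) % p
assert pow(h, s, p) == (t2 * pow(B, c, p)) % p
```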
7.7.4 Prioritizing privacy goals given the costs
Verifier unlinkability is achievable today at reasonable complexity, both at LoA High and Substantial, with conventional cryptography, making the presentation phase the primary point for enforcing privacy in a digital identity system. This shifts some burden to issuance, which has to ensure unique salts, signatures, and PoP keys. Issuer unlinkability remains particularly challenging at LoA High, but approaches using ZKP and advanced signature schemes show promise. However, eliminating one or more values in isolation is ineffective as long as other potential correlation handles remain (both at the attribute level and through structured combinations). At LoA High, the recommendation is, as shown in Table 2, to prioritize verifier unlinkability first, and pursue more advanced privacy goals as supporting technologies mature and the understanding of privacy risks from structured combinations improves.

Table 2: Privacy goals and their feasibility at LoA High

| Privacy Goal | Problem Source(s) | Feasible today? | Recommended Approach |
|---|---|---|---|
| Issuer unlinkability | Authenticated context; any unique value identifies the user | No | Mitigate through regulation or policy. Verifiers can leak malicious issuer behaviour. |
| Verifier unlinkability | Reuse of static values (PoP keys, salts, unique IDs, revocation data) | Yes, with short-lived attestations; revocation remains challenging | Use short-lived, single-use attestations; eliminate reused values. ZK overlays (e.g. over ECDSA and range proofs). |
| Association unlinkability | Non-deniable binding between disjoint attestations | Yes | Use ephemeral or interactive ZKPs (e.g. DLEQ); avoid persistent proofs. |
| Minimal disclosure | Full disclosure of attributes, inflexible values | Yes | Use selective disclosure and range proofs with unique computational inputs (e.g. Pedersen commitments or hash digests). Signature schemes with inherent disclosure capabilities. |
| Prevent service inference | Request timing or highly specific attribute sets | Yes | Use scheduled issuance; avoid public service catalogues with request-based flows. |

At LoA Substantial, where actors operate under fewer constraints, privacy goals are significantly more attainable using solutions such as ZKP layering or advanced signature schemes (e.g. BBS+ or BBS#). It remains unclear whether ZKP layering is feasible at LoA High, or when advanced signature schemes will be acceptable for issuing LoA High attestations. The challenges associated with privacy preserving validity status checks are discussed next.
8 Privacy aspects of revocation and validity checks
8.1 Introduction to revocation and validity checks
Given that eIDAS2 article 5a.16(a), as well as recitals 14, 15, and 59, requires that selective disclosure and unlinkability are achieved in ways that prevent data linkability, the data unlinkability requirement has to be extended to validity status checks. Herein, the focus includes only options that fall under "state of the art" (solutions that have been deployed on a market), as stipulated in GDPR [i.102] articles 25, 26, and 32, and those approaches that are "experimental" (solutions where technical feasibility has been demonstrated but where market deployments are still lacking).

In addition to this, eIDAS2 article 5a.16 should be considered, where it is stated:

"The technical framework of the European Digital Identity Wallet shall: (a) not allow providers of electronic attestations of attributes or any other party, after the issuance of the attestation of attributes, to obtain data that allows transactions or user behaviour to be tracked, linked or correlated, or knowledge of transactions or user behaviour to be otherwise obtained, unless explicitly authorised by the user;"

Hence, revocation services and validity status check services should avoid collecting revocation information about the EUDI Wallet and its (Q)EAAs. Furthermore, a validity status check (e.g. due to revocation) can be conceptualized as a set (non-)membership proof, and alternatives that limit correlation handles and uncertainty reduction are discussed. For completeness, the text also mentions well known options that may not be suitable as a validity status check approach.

NOTE 1: Both (Q)EAAs and PIDs may be considered with respect to revocation and validity status checks; only the term (Q)EAA is used for readability throughout clause 8.
NOTE 2: (Q)EAAs or PIDs may contain unique identifiers or serial numbers; only the term identifier is used for readability throughout clause 8.
NOTE 3: Issuers can use explicit validity periods as an alternative to the techniques mentioned below.
8.2 Online certificate status protocol (OCSP)
The Online Certificate Status Protocol (OCSP) is an internet protocol specified in IETF RFC 6960 [i.160] that is designed to obtain and check the current validity status of a digital X.509 PKIX certificate. However, OCSP was not designed with privacy in mind and therefore lacks certain privacy properties. The OCSP protocol submits the unique identifier of a (Q)EAA to an OCSP responder, which checks the revocation status of the X.509 PKIX certificate against a revocation database and returns an OCSP response with status 'good', 'revoked', or 'unknown'. From a privacy perspective, OCSP thus risks revealing more information to the OCSP responder than the user intended. However, OCSP could work for (Q)EAAs containing an identifier or serial number, specifically with respect to:

• OCSP Must-Staple: In an OCSP stapling scenario, the EUDI Wallet itself would query the OCSP responder at regular intervals in order to obtain a signed and time-stamped OCSP response for the user's (Q)EAA. The EUDI Wallet would then append the OCSP response when presenting the (Q)EAA to the verifier. OCSP stapling is supported by TLS in the Certificate Status Request extension (see section 8 in IETF RFC 6066 [i.159]).
8.3 Revocation lists
A Revocation List (RL) is a mature and widely utilized validity status check mechanism. For detailed examples, see IETF RFC 5280 [i.156], which specifies the Certificate Revocation List (CRL) profile for PKIX X.509 certificates, and IETF RFC 6818 [i.161], which updates IETF RFC 5280 [i.156]. Commonly, a RL is a signed list of identifiers or serial numbers associated with the (Q)EAAs that have been revoked before they expired. Since the identifiers are unique and thus perfectly correlate with the associated (Q)EAAs, any solution that relies on a RL needs to consider the following privacy aspects:

• Single-show attestations, whereby each (Q)EAA has a unique identifier or serial number. This concept is equivalent to the atomic (Q)EAAs that are described in clause 4.2. Hence, the RL will contain different identifiers for the user's set of atomic (Q)EAAs.
• Range requests, whose effectiveness depends on the size of the RL. The privacy provided by a RL is proportionate to its size. In the extreme case of one revoked identifier in a RL, the RL provider will be able to identify which (Q)EAA the verifier or user needs to check. The larger the RL is, the more difficult it is for a RL provider to correlate the user's (Q)EAA with the requests to the RL provider.

Additionally, a RL needs to consider the event where a batch of (Q)EAAs changes status at once. In such a scenario, verifiers can collude and compare the (Q)EAA identifiers with the simultaneous validity status changes to learn more about which (Q)EAAs describe the same subject. Cryptographic techniques such as Private Set Intersection (PSI) or Private Information Retrieval (PIR) may prove helpful as solutions (a PSI sketch is given after this list):

• Private Set Intersection [i.202] is a secure multiparty cryptographic technique that allows two parties holding sets to compare encrypted versions of these sets in order to compute the intersection. In this scenario, neither party reveals anything to the counterparty except for the elements in the intersection.
• Private Information Retrieval [i.26] is a protocol that allows a client to retrieve an element of a database without the owner of that database being able to determine which element was selected.
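The following toy sketch (Python) illustrates the classic Diffie-Hellman-based PSI idea with demo-sized (insecure) parameters: both parties blind hashed identifiers with their private exponents, so only elements present on both sides end up equal. Production PSI protocols [i.202] use vetted constructions; this is only a sketch of the principle.

```python
import hashlib
import secrets

# Demo Schnorr group (NOT secure): p = 2q + 1, subgroup of order q.
p, q = 2039, 1019

def hash_to_group(element: bytes) -> int:
    """Toy hash-to-group: squaring lands the value in the order-q subgroup."""
    e = int.from_bytes(hashlib.sha256(element).digest(), "big") % p
    return pow(e, 2, p)

a = secrets.randbelow(q - 2) + 2   # e.g. the status provider's secret exponent
b = secrets.randbelow(q - 2) + 2   # e.g. the verifier's secret exponent

revoked = [b"id-103", b"id-245"]   # provider's set
to_check = [b"id-245", b"id-999"]  # verifier's set

# Round 1: each party blinds its own hashed elements with its exponent.
blinded_revoked = [pow(hash_to_group(e), a, p) for e in revoked]
blinded_checked = [pow(hash_to_group(e), b, p) for e in to_check]

# Round 2: each party raises the other's values to its own exponent, so any
# common element ends up as the same value H(e)^(a*b) on both sides.
final_from_provider = {pow(v, b, p) for v in blinded_revoked}
final_from_verifier = {pow(v, a, p) for v in blinded_checked}

shared = final_from_provider & final_from_verifier
# Only the common identifier (id-245) is learned; the rest stay blinded.
assert pow(hash_to_group(b"id-245"), (a * b) % q, p) in shared
```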
8.4 Validity status lists
A validity Status List (SL) is a bit vector that is issued and signed by an issuer (a QTSP in eIDAS2 terms). The validity status of a (Q)EAA is represented using either a single bit or multiple bits in the SL bit vector. The (Q)EAA identifier is mapped to an index in the status list. The validity status check of the (Q)EAA is performed by checking the binary value of the bit(s) indexed in the status list bit vector. If the binary value of the indexed position in the status list is 1 (one), the (Q)EAA is revoked; if it is 0 (zero), it is not revoked.

EXAMPLE: The (Q)EAA with the identifier 49361 is mapped to the status list index 136547. In the status list bit vector, the indexed position 136547 holds a binary value of 0 (zero). Hence, the (Q)EAA is not revoked in this example.

The W3C Verifiable Credentials working group has specified "Bitstring Status List v1.0 - Privacy-preserving status information for Verifiable Credentials" [i.254] with details on how to issue status lists and check the validity status of Verifiable Credentials. IETF has specified "OAuth Status List" [i.153], which defines status list data structures for representing the status of JSON Web Tokens (JWTs) and CBOR Web Tokens (CWTs). Status lists have the following features:

• The validity status list bit vector per se does not reveal any information about the (Q)EAA's identifier, which is a privacy preserving feature. (PKIX CRLs contain the serial numbers of the revoked PKIX X.509 certificates.)
• The size of a status list can be relatively small. The size after compression depends on the final revocation rate and on whether or not the index assignments are random. Uncompressed, a status list for 100 000 (Q)EAAs is roughly 12,5 kB in size. This is beneficial for performance and bandwidth reasons when a verifier downloads the status list. (PKIX CRLs contain more metadata about the revoked PKIX X.509 certificates and are therefore considerably larger.)
• A verifier can retrieve the entire status list without revealing which index it will check, which is a privacy preserving feature. (An OCSP request contains the PKIX X.509 certificate serial number, which reveals which certificate a verifier needs to check.)

As with RLs, the identifier is a unique correlation handle. Consequently, any solution that relies on a SL needs to also consider the following privacy preserving aspects:

• Single-show attestations, range requests, and/or PSI (cardinality), possibly ZKP, as described for RLs.
• Randomized index assignment. The index associated with each (Q)EAA is randomly assigned over the entire set of possible (Q)EAAs. Consequently, chunks of the status list cannot be derived based on e.g. issuance or expiration time.
• Hiding of still valid (Q)EAAs. Status list sizes that equal the number of issued (Q)EAAs allow an attacker to learn information about still valid (Q)EAAs.

As with RLs, a SL also needs to consider events where a batch of (Q)EAAs changes status at once. Private Set Intersection and Private Information Retrieval techniques are therefore recommended to be considered.
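A minimal sketch (Python) of the bit-vector lookup, reusing the example above (index 136547, value 0). Note that the actual bit ordering within a byte differs between specifications ([i.254] vs. [i.153]); least-significant-bit-first is assumed here purely for illustration.

```python
def status_bit(status_list: bytes, index: int) -> int:
    """Return the status bit for the (Q)EAA mapped to `index` (1 = revoked)."""
    byte_pos, bit_pos = divmod(index, 8)
    return (status_list[byte_pos] >> bit_pos) & 1

# Uncompressed bit vector for 1 000 000 (Q)EAAs: 125 000 bytes
# (the 100 000-entry list mentioned above is 12 500 bytes, i.e. 12,5 kB).
status_list = bytearray(125_000)

# Example from the text: the bit at index 136547 is 0, i.e. not revoked.
assert status_bit(status_list, 136_547) == 0

# The issuer revokes that (Q)EAA by setting its bit and re-signing the list.
byte_pos, bit_pos = divmod(136_547, 8)
status_list[byte_pos] |= 1 << bit_pos
assert status_bit(status_list, 136_547) == 1
```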
8.5 Cryptographic accumulators
A cryptographic accumulator allows the aggregation of many values into a fixed-length digest called the accumulator value. Furthermore, and in contrast to cryptographic hash functions, it is possible to verify whether an element is accumulated or not. Asymmetric accumulators rely on a so-called (non-)membership witness. Symmetric accumulators do not require a witness for membership testing. Negative accumulators support non-membership witnesses, positive ones support membership witnesses, and universal ones support both.

A Bloom filter is an append-only data structure that can be used for a set of (non-)membership tests without any witness. These tests allow for false positives but not for false negatives. Put differently, a Bloom filter test will either yield that the tested element is possibly in the set, or that it is definitely not in the set. Multiple Bloom filters can be chained so that the false positives are included in a second Bloom filter that tests for the opposite value (e.g. the first Bloom filter tests for revocation; the second is a non-revocation test). This process can be repeated indefinitely to create a Bloom filter cascade with a sufficiently low false-positive rate; a sketch is given at the end of the present clause. In contrast to a RL or SL, a Bloom filter does not directly reveal information about the set elements. Any validity status change is probabilistic, which means that colluding entities cannot know whether the changes reflect a simultaneous validity status change (e.g. a revocation of a batch issued (Q)EAA) or a false positive. However, the probabilities depend on the Bloom filter, and it has to be designed with care, as colluding verifiers can use any Bloom filter based approach that has a sufficiently low false-positive rate to link together an attestation batch in the event of a validity status change.

Many other cryptographic accumulators exist besides Bloom filters. This text mentions Bloom filters specifically due to the focus on market deployed techniques, yet the alignment of Bloom filters with general-purpose ZKPs to achieve unlinkability remains unexplored. Other examples of market deployed solutions exist, e.g. the accumulator scheme used in Hyperledger AnonCreds [i.131] and by the IRMA [i.173] project, which is an implementation of the Idemix [i.136] attribute-based credential scheme. It is also worth mentioning more recent work that demonstrates how witness updates can be done in a privacy friendly batch update, meaning that the witness update is the same for all users.

Camenisch and Lysyanskaya introduced the concept of dynamic accumulators in their paper "Dynamic accumulators and application to efficient revocation of anonymous credentials" [i.44] in 2002. A dynamic accumulator allows for dynamically adding or deleting a value, such that the cost of adding or deleting is independent of the number of accumulated values. The paper also provides a construction of a dynamic accumulator and an efficient zero-knowledge proof scheme, which can be proven secure under the strong RSA assumption. Such a construction of dynamic accumulators enables efficient revocation of anonymous credentials and membership revocation for group signature and identity escrow schemes. Furthermore, the first dynamic universal accumulator was introduced in 2009 in a paper by Au, Tsang, Susilo and Mu that describes how dynamic universal accumulators for DDH groups can be applied to attribute-based anonymous credential systems [i.13]. Moreover, Nguyen described accumulators from bilinear pairings and applications in a paper published in 2005 [i.204], which was extended in 2008 by Damgård and Triandopoulos in their paper "Supporting Non-membership Proofs with Bilinear-map Accumulators" [i.76]. Recently, in 2022, the research in this field was extended by Vitto and Biryukov in their paper "Dynamic Universal Accumulator with Batch Update over Bilinear Groups" [i.247].

Pairing free accumulators also exist that function with the same kind of scheme as BBS#. The BBS# scheme could be mutualized with a flow from the verifier to the issuer, which is described in option 1 in clause 4.4.3.3.3. Option 1 is recommended for performance reasons and because the holder can be offline. This setup assumes that the issuer is also the accumulator issuer (which should be the case in most, if not all, situations).

Hence, cryptographic accumulators, including dynamic accumulators and universal dynamic accumulators, are worth considering for revocation schemes when privacy requirements are high. Recent work has focused specifically on how accumulators can be used for revocation of the core EUDI Wallet formats [i.112].
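The following sketch (Python) illustrates the Bloom filter cascade described above: each level is built over exactly the elements the previous level misclassified, alternating between "revoked" and "not revoked", until no misclassifications remain. Parameters are demo-sized, and the construction only behaves correctly for identifiers that were actually issued.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter; k bit positions are derived from SHA-256."""

    def __init__(self, m_bits: int, k: int, items=()):
        self.m, self.k = m_bits, k
        self.bits = bytearray((m_bits + 7) // 8)
        for item in items:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest, "big") % self.m

    def __contains__(self, item: bytes) -> bool:
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

def build_cascade(revoked, valid, m_bits=1024, k=3):
    """Level i is built over the elements level i-1 got wrong, alternating
    between the revoked and the still-valid side, until nothing is left."""
    cascade, include, exclude = [], set(revoked), set(valid)
    while include:
        level = Bloom(m_bits, k, include)
        cascade.append(level)
        include, exclude = {e for e in exclude if e in level}, include
    return cascade

def is_revoked(cascade, identifier: bytes) -> bool:
    for depth, level in enumerate(cascade):
        if identifier not in level:
            return depth % 2 == 1  # falling out at an odd level means revoked
    return len(cascade) % 2 == 1   # survived all levels

revoked = {b"id-%d" % i for i in range(50)}
valid = {b"id-%d" % i for i in range(50, 1000)}
cascade = build_cascade(revoked, valid)
assert all(is_revoked(cascade, e) for e in revoked)
assert not any(is_revoked(cascade, e) for e in valid)
```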
8.6 Using programmable ZKP schemes for revocation checks
As described in clause 6.5.1, it is possible to design anonymous credentials from programmable ZKPs (typically zk-SNARKs) and existing digital identities (such as X.509 certificates). Furthermore, the revocation and validity status checks can be performed at the digital wallet, whilst the validation results, selected attributes and predicates are shared with the verifier. Hence, any type of revocation verification protocol, even OCSP, can be implemented at the digital wallet, while still providing privacy for the user.
8.7 Conclusions on validity status checks
The present clause introduces the topic of revocation and validity status checks in the context of selective disclosure capable and unlinkable (Q)EAAs. If explicit (and short) validity periods are not used as an alternative, then it is recommended that the validity status check employed does not introduce a correlation handle in cases where selective disclosure and unlinkability are required. Concretely put, long lived (Q)EAAs that support selective disclosure and unlinkability using the mechanisms described in the present document:

• Are recommended to use OCSP in Must-Staple mode.
• May use validity Status List bit vectors rather than CRLs.
• Cannot rely on Revocation Lists or validity Status Lists without the additional privacy considerations detailed above. Seemingly, the use of Revocation Lists or Status Lists requires some form of Private Information Retrieval or Private Set Intersection technique in order not to undermine selective disclosure and unlinkability.
• Can use cryptographic accumulators where possible, given the associated complexity. Bloom filters represent an easy first step, whereas universal dynamic accumulators with public batch witness updates represent an interesting possibility for the future development of validity status checks of anonymized credentials and zero knowledge proofs. There is also work focused specifically on accumulators for the EUDI Wallet core formats [i.112].
• May be combined with ZKP schemes (such as zk-SNARKs) such that the validity status checks are performed at the digital wallet, and only the relevant information is disclosed to the verifier.

NOTE: Revocation checks can be considered as a predicate - a computation on the revocation ID included in the (Q)EAA's cryptographic meta-data and additional public information about the revocation registry. However, there may be further non-public inputs going into this computation provided by the holder to perform an inclusion check, e.g. a Merkle proof.

Ultimately, there is no suitable validity status mechanism that is simple, mature in terms of standards, and that matches the unlinkability requirements of (Q)EAAs capable of selective disclosure and data unlinkability. Where selective disclosure and unlinkability are required, it is presently advisable to rely on short lived (Q)EAAs with explicit validity periods. Where users are identified, and/or when using formats based on salted attribute hashes where full unlinkability guarantees cannot be made, standard solutions like RLs and SLs are suitable.
9 Post-quantum considerations
9.1 General remarks
The recent years have witnessed significant advances in the area of quantum computing, which has led to a reconsideration of the threats posed by quantum algorithms such as the one devised by Shor [i.235] in 1994. The latter algorithm could indeed be used to attack the mathematical problems underlying most current asymmetric cryptographic algorithms, including several of those presented in the present document. While there is still a lot of uncertainty surrounding the advent of a Cryptographically Relevant Quantum Computer (CRQC), it can be noted that the questions of whether a CRQC will be built, and when, have become less crucial, for at least two reasons.
The first one is that the confidentiality of current data is already at risk, because data encrypted today using non-quantum-safe cryptographic algorithms could be stored and then decrypted by a CRQC in the future. This attack is broadly known as "store now, decrypt later" and questions the value of precisely predicting the date of the Q-day (the advent of a CRQC). Indeed, knowing whether the problem will occur in 2030, 2032, etc. is only relevant if it is considered, for example, that leaking sensitive data is acceptable in 2032 but not in 2030. On this note, NIST IR 8547 "Transition to Post-Quantum Cryptography Standards" [i.209] has declared that ECDSA, EdDSA and RSA are disallowed after 2035. As most use cases are unlikely to define data shelf life at such granularity, any data with long-term sensitivity should be considered to require protection as of now.
The second reason is that most cybersecurity agencies worldwide urge organizations to initiate the transition to quantum-safe cryptography (also known as post-quantum cryptography) as soon as possible (see, e.g. [i.2] and [i.209]). This has already become mandatory for some systems in the US [i.243]. Transition is thus likely to become necessary for compliance and interoperability reasons, regardless of the actual advances in quantum computing.
In this regard, it is important to assess the impact of quantum computers on the QEAA systems described in the present document. To this end, it is first necessary to clarify the actual consequences of Shor's algorithm. In particular, the fact that the latter solves the main mathematical problems underlying elliptic curve, finite field and RSA cryptography does not mean that every security assurance provided by a cryptographic mechanism implemented in these settings is lost. Indeed, a security property of such a cryptographic mechanism may rely on a different problem, or even be proved unconditionally, that is, regardless of the computational power of the adversary. In such cases, the security property remains even in the presence of a quantum computer. This is fortunately the case for many QEAA constructions presented in the present document and, while every construction will require a dedicated quantum risk assessment, the following general comments can be made:
1) QEAA systems based on multi-message signature schemes often achieve unconditional privacy, which means that their privacy is not affected by quantum algorithms. This is for example the case for anonymous credentials based on BBS+ (clause 4.4.2), BBS# (clause 4.4.3), CL (clause 4.4.1) and PS-MS (clause 4.4.5) signature schemes. This also holds true for some of the extensions discussed in the present document, such as [i.240] and [i.232], and for the KVAC scheme in [i.15].
This property can however be lost by some variants thereof, such as the DAA systems presented in clause 6.4.2. It is therefore important to understand that unconditional privacy is not a property inherent to these multi-message signature schemes, but only results from a careful design of the QEAA system. Any modification of the latter (e.g. to add a new feature) might then affect this property.
2) In QEAA systems based on salted attribute hashes, the privacy of non-disclosed attributes is protected by the salt entropy, which prevents exhaustive search (see the illustrative sketch at the end of this clause). While a quantum computer could theoretically improve this exhaustive search by running Grover's algorithm, it can be noted that the actual performance of the latter is still unclear. In the worst case, it would only lead to a quadratic speedup, which means that doubling the salt size would be sufficient to retain the same security assurances as these systems enjoy today against non-quantum adversaries.
3) Conversely, for all these systems, an adversary equipped with a quantum computer will be able to forge valid attestations by solving the underlying mathematical problem. In other words, a quantum adversary will be able to break the authenticity of QEAA systems but not (in most cases) their privacy. This subtlety is far from insignificant, as it means that all QEAA systems achieving unconditional privacy are immune to the store now, decrypt later attack, and so could postpone their transition as long as it is completed before the Q-day.
Finally, it can be noted that several cybersecurity agencies recommend the use of so-called hybrid mechanisms, that is, mechanisms combining current cryptographic algorithms and post-quantum ones. In such a case, the systems presented in the present document will not have to be discarded, but simply complemented with post-quantum solutions.
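To illustrate why the salt entropy in point 2 is the decisive parameter, the following Python sketch (illustrative only, with a deliberately tiny 16-bit salt) brute-forces a salted attribute hash for a guessable attribute value; with realistic salts of 128 bits or more the same search is infeasible, and doubling the salt size compensates even for a full quadratic Grover speedup.

import hashlib, secrets

attribute = b"1990-05-17"        # a guessable attribute value (a birth date)
salt = secrets.token_bytes(2)    # deliberately tiny 16-bit salt
digest = hashlib.sha256(salt + attribute).digest()

# Exhaustive search over the tiny salt space recovers the hidden attribute.
for guess in range(2 ** 16):
    if hashlib.sha256(guess.to_bytes(2, "big") + attribute).digest() == digest:
        print(f"salt recovered after {guess + 1} attempts")
        break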
9.2 Post-quantum computing threats
Despite the current level of trepidation, a quantum computer capable of cryptanalysis remains a speculative prospect for the remote future. While a remote risk, the emergence of one with the computational power to execute algorithms like Shor's [i.235] or Grover's [i.126] could significantly affect the proposed solutions. To fully realize the impact of quantum computers, it is important to understand three things: 1) when they become a threat; 2) how quickly an attack can be performed; and consequently 3) what they threaten.
One way to assess when a quantum computer becomes a threat is to look at the requirements for launching a particular attack. These requirements can be expressed in logical qubits (a collection of physical qubits that protects against errors, where each logical qubit acts as the unit of information analogous to a classical bit). Proos and Zalka [i.225] show that computing the ECDL on an elliptic curve defined over an n-bit field requires roughly 6n qubits, disregarding degradation and error rates. Since real hardware does suffer from degradation and errors, it makes more sense to discuss logical qubits and to estimate the number of physical qubits for various degradation and error rates. For one reasonable estimate, Roetteler et al. 2017 [i.230] conclude that the ECDL on an elliptic curve defined over an n-bit prime field can be computed with at most 9n + 2 × ceil(log2(n)) + 10 qubits. This means that 2330 logical qubits (for n = 256: 9 × 256 + 2 × 8 + 10 = 2 304 + 16 + 10 = 2330) are required to perform NIST P-256 point addition, and the full Shor algorithm on NIST P-256 would require 1,26 × 10^11 universal gates. A final but important consideration relating to the when is that once a malicious and extremely well-resourced entity is equipped with a quantum resource, it has to choose what to employ this resource on.
Another important consideration is to estimate how quickly the attack, once possible, can be performed. This is important because the time frame for the attack determines both the required size of the quantum computer and what threat it poses. It is thus incorrect to assume that the emergence of a quantum computer capable of cryptanalysis immediately renders all classical cryptography obsolete; an attacker will deploy their quantum computers carefully, and each attack takes time. It is difficult to provide an exact size estimation for a given time frame, given the many assumptions that need to be made about how a future quantum computer may operate. But with reasonable assumptions, Webber et al. 2022 [i.252] estimate that breaking 256-bit elliptic curve cryptography within a day would require a quantum computer with 13 million physical qubits running Shor's algorithm [i.235].
After examining the conditions under which a quantum computer could pose a threat and the associated timeframes, the next crucial consideration is to identify the specific targets such a quantum computer would jeopardize within a defined timeframe. This elucidates the threats posed to (Q)EAAs and provides insights into potential countermeasures that prospective (Q)EAA issuers and users can take. The most significant threat, the Harvest Now, Decrypt Later (HNDL) threat, arises when a quantum computer is utilized on sensitive ciphertext. In this scenario, an attacker monitors the key agreement between two actors, collects the ciphertext, and employs their quantum computers to find the negotiated symmetric decryption key. The threat here is one against confidentiality, i.e.
the extraction of information about the signed message that the signer did not intend to disclose, or the signature value itself in ZKP-capable signature schemes. The timeframe for such an attack can span the entire duration during which the encrypted data retains its sensitivity. Where a (Q)EAA contains information at risk of an HNDL attack, the quantum threat necessitates that the (Q)EAA Provider abstains from using encryption schemes, and/or key sizes, against which quantum computers pose a threat. A (Q)EAA Provider has many possible alternatives to rely on, such as quantum-safe algorithms, zero-knowledge proofs that are quantum resistant (e.g. those based on cryptographic hash functions), increased key sizes, or Oblivious Pseudo-Random Functions, to name a few. However, Providers are recommended to take great care with the mitigating steps they take and to be entirely sure that these protect against an HNDL attack.
Another risk is that of signature and proof forging, which is arguably more relevant to the topic of the present document. Here, the risk is considerably lower due to the time frames involved. Note that an attacker cannot begin the attack without knowledge of some public material (e.g. a public key) derived from the sensitive cryptographic material. The threat here is one against integrity and authenticity, i.e. that the attacker would need to forge signatures, disclosures, and/or proofs. Note also that the attacker does not have the same time frames at their disposal as in the case of an HNDL attack, as the attack target is not a decryption key that can be used on pre-collected sensitive ciphertext. Actors may deploy frequent key rotation and rely on short-lived attestations to mitigate the quantum threat. The potential use of one-time signing and proof keys provides excellent protection against an attacker with a quantum computer. Frequent key rotation, or even one-time use of keys, is likely viable for the foreseeable future given existing development trajectories. Once the threat level is sufficiently high, actors can move to alternative signature algorithms (e.g. CRYSTALS Dilithium) and post-quantum safe zero-knowledge solutions.
EXAMPLE: The complexity of forging documents that have been digitally signed in a pre-quantum world can be illustrated by this example. Assume that Alice digitally signs a document in the pre-quantum world. The signed document is also time-stamped by a trusted time-stamping authority. She stores the digitally signed document in an archive, which has an audit log where each log entry is digitally signed and each signed log entry is added to a chain of hashes of previous log entries. In a post-quantum world, the attacker Bob will be able to derive Alice's private key from her public key in the X.509 certificate. Hence, he can create a forged document and sign it with her private key and certificate. However, in order to replace the existing signed document, which is archived, Bob would also need to attack the time-stamping authority to generate a forged time-stamp (with a rewound clock). He would also need to attack the archive to delete the existing document, replace it with the forged document, and finally forge the signed audit log and hash chain of log entries. Such an attack is extremely complicated to perform, even with the use of quantum computers.
The related concept of everlasting privacy, which is typically applied to e-voting schemes, aims at ensuring that electronic votes will remain secret and secure also in the future. For more information on everlasting privacy, the following research papers are recommended: "Practical Everlasting Privacy" [i.8] by Arapinis et al., "Towards everlasting privacy and efficient coercion resistance in remote electronic voting" [i.123] by Grontas et al., "Improvements in Everlasting Privacy: Efficient and Secure Zero Knowledge Proofs" [i.128] by Haines et al., and "SoK: Secure e-voting with everlasting privacy" [i.129] by Haines et al.
9.3 Post-quantum computing solutions
Although (Q)EAA systems are not immediately threatened by quantum computing, as explained in clause 9.2, they will eventually have to migrate to post-quantum cryptography, at the latest before the Q-day. In the case of salted attribute hashes, the main component vulnerable to quantum computers is the signature scheme used to sign the hash values. Transition to post-quantum cryptography will then mostly consist of replacing this signature scheme with a post-quantum counterpart, such as the NIST standard FIPS 204 [i.207]. The case of (Q)EAAs based on multi-message signature schemes is more complex, as post-quantum variants will be needed not only for these particular signature schemes but also for the related zero-knowledge proof systems. This is today a very active research area whose main advances are presented in clause 9.4.
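The following Python sketch (illustrative only: a keyed HMAC stands in for the signature primitive so that the sketch stays runnable) shows that in a salted attribute hashes design the signed payload is simply a set of digests, so the migration boils down to swapping the sign/verify pair for a post-quantum one, such as an FIPS 204 (ML-DSA) implementation.

import hashlib, hmac, json, secrets

def disclosure_digest(salt: bytes, key: str, value) -> str:
    # Salted attribute hash, in the style of SD-JWT/MSO formats.
    return hashlib.sha256(json.dumps([salt.hex(), key, value]).encode()).hexdigest()

signing_key = secrets.token_bytes(32)  # replace with an ML-DSA key pair when migrating
digests = [disclosure_digest(secrets.token_bytes(16), k, v)
           for k, v in [("family_name", "Doe"), ("age", 25)]]
payload = json.dumps({"_sd": sorted(digests)}).encode()

# The only quantum-vulnerable step: replace this call with ML-DSA signing.
signature = hmac.new(signing_key, payload, hashlib.sha256).digest()
assert hmac.compare_digest(signature, hmac.new(signing_key, payload, hashlib.sha256).digest())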
9.4 Lattice-based anonymous credentials schemes
9.4.1 Background
The transition to post-quantum cryptography is an enormous challenge for cryptographers and the IT security industry as a whole. There have been significant advances, such as the recent NIST standards on Post-Quantum Safe (PQS) cryptography. However, these NIST standards have so far only focused on general cryptographic mechanisms, such as digital signatures or key exchange, whilst there are not yet any similar PQS standardization efforts for blind signatures, group signatures, and anonymous credentials.
Nevertheless, there are cryptographic research initiatives in the field of PQS multi-message signatures and anonymous credentials. In 2016, Libert et al. published the research paper "Signature Schemes with Efficient Protocols and Dynamic Group Signatures from Lattice Assumptions" [i.197]. The result of this research indicated that anonymous credential schemes based on plausibly PQS lattice cryptography generate signature and proof sizes in the magnitude of several hundreds of MB. This lattice-based scheme is however outdated, and the research to improve the performance and proof sizes has continued, as described in clause 9.4.2.
Another option is to apply PQS zk-SNARKs to the Cinderella project (see clause 6.5.2), whereby PQS ZKPs can be derived from X.509 certificates. Potential PQS zk-SNARKs for such a setup are Spartan [i.200], Virgo [i.273] or Ligero [i.20]. Furthermore, the X.509 certificates would need to be signed with PQS cryptographic algorithms, such as CRYSTALS Dilithium [i.75]. There are also programmatic issues to be resolved with such an integration, such as patching the vulnerability in the Geppetto compiler.
Hence, until recently there have essentially been two alternatives for achieving a plausibly PQS ZKP system: a system with large signatures and proofs that relies upon lattice-based cryptographic algorithms, or a system based on ad hoc integrations of PQS zk-SNARKs. The research on how to improve the performance and proof sizes of PQS ZKP systems has however progressed in recent years, as further described in clause 9.4.2.
9.4.2 Research on effective lattice-based anonymous credentials
In order to address the issue of large signatures, cryptographic research is currently being performed on PQS anonymous credentials with small signature sizes. In 2022, Jeudy et al. published the cryptographic research paper "Lattice Signature with Efficient Protocols, Application to Anonymous Credentials" [i.192]. The paper introduced a new construction based on both standard and structured lattices, which resulted in significant performance improvements. In particular, the size of a signature proof was reduced to less than 650 KB. Based on Jeudy's research, Dutto et al. proposed a PQS ZKP scheme in their paper "Toward a Post-Quantum Zero-Knowledge Verifiable Credential System for Self-Sovereign Identity" [i.83], which describes PQS variants of BBS+ and CL-signatures based on a lattice-based scheme.
The research by Jeudy et al. was continued in 2024 by Argo et al., who published the research paper "Practical Post-Quantum Signatures for Privacy" [i.9], which proposes privacy-preserving Signatures with Efficient Protocols (SEP). The SEP is lattice-based and generates short signatures that are PQS. Furthermore, the SEP has been integrated with an anonymous credential system, resulting in anonymous credentials of less than 80 KB. The source code of this project is published in the repository "Lattice Anonymous Credentials" [i.10].
Furthermore, Bootle et al. published the research paper "A Framework for Practical Anonymous Credentials from Lattices" [i.29] in 2023. Their paper introduces a framework for practical anonymous credential schemes based on a new family of lattices. The security of this lattice scheme is based on the difficulty of generating a pre-image for an element, given short pre-images of random elements in a set. Such a framework can be used to implement efficient privacy-preserving cryptographic primitives for blind signatures, anonymous credentials, and group signatures.
Hence, there are several cryptographic research initiatives that aim to devise anonymous credentials and privacy-preserving signature schemes that are PQS with efficient, small signature proofs.
10 Conclusions
The eIDAS2 regulation and the Architecture and Reference Framework (ARF) define regulatory requirements on selective disclosure and unlinkability for the EUDI Wallet. The present document provides a comprehensive analysis of signature schemes, credential formats and protocols that cater for selective disclosure, unlinkability, and predicates. Since the ARF specifies that a PID Provider can issue any PID in both the format specified in ISO/IEC 18013-5 [i.181] and the SD-JWT VC format, the present document analyses ISO mDL and SD-JWT VC.
The ISO mDL specified mdoc and the IETF SD-JWT formats and related presentation protocols cater for selective disclosure using a salted attribute hashes approach. Both MSO and SD-JWT support SOG-IS approved cryptographic algorithms and can also be used with quantum-safe cryptography in the future. The conclusion is thus that MSO (as detailed in ISO mDL) as well as the SD-JWT approach meet the eIDAS2 regulatory and technical requirements on selective disclosure when defined as revealing at least one attribute from a single PID or (Q)EAA. Neither format supports selective disclosure of at least two attributes from multiple distinct PIDs/(Q)EAAs. Neither format supports predicates, although the present document also proposes a new approach to calculate predicates based on hash chains in conjunction with salted attribute hashes, which can be used for dynamically deriving statements about the user without revealing the attribute values.
In addition to limited selective disclosure capabilities, the major drawback with mdoc and SD-JWT is the lack of unlinkability. Neither of the formats supports issuer unlinkability or full unlinkability, and verifier unlinkability encumbers the issuer. In order to achieve verifier unlinkability, batches of MSOs or SD-JWTs with unique random salts need to be issued to each EUDI Wallet, so that a refreshed MSO or SD-JWT is presented to each relying party. When the PID Provider (PIDP) or QTSP supports such batch issuance, both MSO and SD-JWT can support verifier unlinkability.
The present document gives recommendations on how eIDAS2 compliant PIDPs or QTSPs can issue PIDs/(Q)EAAs in the form of mdoc and/or SD-JWT that cater for selective disclosure. The present document notes that SD-JWT can provide selective disclosure capability also for attestations that use JSON-LD and linked data proofs, but advises against it (support for data integrity proofs is lacking, and there exist security concerns with polyglot parsing).
There are many similarities between the mdoc issuers and the eIDAS2 QTSPs or PID Providers, which could be harmonized in ETSI TS 119 471 [i.96] and ETSI TS 119 472-1 [i.97] that will standardize the issuance policies and profiles of (Q)EAAs. More specifically, the MSO could be issued by an eIDAS2 QTSP certification authority, meaning that the EU trusted lists can be used to retrieve revocation information and trust anchors when validating the MSO signature. ETSI TS 119 495 [i.93], which specifies certificate profiles and TSP policies for Open Banking and PSD2, may partially be re-used for the issuance of ISO mdocs as (Q)EAAs. The same principles could be applied to QTSPs and PID Providers that will issue PIDs/(Q)EAAs in conjunction with SD-JWT, although the existing specifications do not specify the issuance policies in detail.
Furthermore, there are recommendations on how to store the MSO and the SD-JWT VC compliant JWT representation in the EUDI Wallet, and how to present selectively disclosed attributes to eIDAS2 relying parties. The presentation protocols for the ISO mDL and OID4VP are specified in the ARF, and the present document describes how to use these protocols for selective disclosure of attributes in mdoc and SD-JWT.
The multi-message signature schemes, on the other hand, are designed to provide selective disclosure and full unlinkability. Such multi-message signature schemes are BBS+, CL-signatures, PS-MS signatures and Mercurial signatures. However, such signature schemes are based on pairing-based elliptic curve cryptographic algorithms that are not yet fully standardized. So far, ISO/IEC 20008 [i.184] has standardized single-message signature schemes that underpin BBS and PS-MS, but they are not sufficient for PID formats and (Q)EAAs that require multi-message signature schemes. However, ISO/IEC 24843 [i.185] intends to standardize BBS+ with blinded signatures, which may allow for a future standard that could be used in compliance with the EUDI Wallet requirements on selective disclosure and unlinkability in eIDAS2. Furthermore, there are cryptographic research projects, such as MoniPoly, where undisclosed attributes have no impact on the proof size.
BBS# [i.78] is a variant of BBS/BBS+ that has been designed to meet several stringent requirements put forth in the eIDAS 2.0 regulation. More precisely, BBS# removes the need for pairings and pairing-friendly curves (which are not standardized and not supported by trusted phone hardware) and can be combined with SOG-IS sanctioned protocols for the implementation of the holder binding feature. The BBS# scheme can be made format compatible with mdoc and SD-JWT, thus catering for full unlinkability of mdoc and SD-JWT.
Another interesting approach to achieve solutions for the EUDI Wallet with selective disclosure and full unlinkability are the systems that combine ZKP schemes (such as zk-SNARKs) with existing digital identity infrastructures (such as X.509 certificates or ICAO eMRTD). There are existing research projects, such as Cinderella, Crescent and zk-creds, that have succeeded in implementing prototypes where zk-SNARKs are used to generate pseudo-certificates that share selected attributes from the (Q)EAAs and derived revocation information. Furthermore, the research on "Anonymous credentials from ECDSA" ("zk-mdoc") provides a ZKP solution for the existing ISO mDL protocols. These projects are still in the research phase, but may be considered for the EUDI Wallet and eIDAS2 relying parties.
In order to achieve privacy-preserving features for revocation and validity status checks, it is recommended to use OCSP in Must-Staple mode, implement Revocation Lists or validity Status Lists with additional privacy techniques such as Private Information Retrieval or Private Set Intersection, and use cryptographic accumulators where possible given the associated complexity. If ZKP schemes (such as zk-SNARKs) are combined with existing (Q)EAAs (such as X.509), the status validity checks are performed at the EUDI Wallet, and only the relevant information is disclosed to the verifier.
Annex A: Comparison of selective disclosure mechanisms
A.1 Selective disclosure signature schemes
Table A.1 provides a comparison of the investigated selective disclosure signature schemes.
Table A.1: Comparison of selective disclosure signature schemes

Signature scheme | Cryptography | Plausible quantum-safe | Unlinkability | Predicates | Reference

Category: Atomic attribute (Q)EAAs
Atomic attribute (Q)EAAs | Conditional: depends on the signature on the credential | Yes, the (Q)EAAs can be signed with QSC algorithms. | Verifier unlinkable attestations can be achieved. Fully unlinkable (Q)EAAs are not possible. | No dynamic predicates are supported. Workaround: enrol for atomic attributes with Boolean attributes. | See clause 4.2

Category: Salted attribute hashes
Salted attribute hashes | Salted attribute hashes, signed with RSA, ECC, or QSC | Yes, the (Q)EAAs can be signed with QSC algorithms. | Verifier unlinkability can be achieved if unique salts are used when creating the salted attribute hashes, but the schemes are not protected against issuer linkability. | No dynamic predicates are supported. Workaround: set Boolean attributes in the PID/(Q)EAA. | See clause 4.3
ACDC | Salted attribute hashes structured in a Directed Acyclic Graph | Yes | Verifier unlinkability can be achieved if unique salts are used when creating the salted attribute hashes, but the schemes are not fully unlinkable. | No dynamic predicates are supported. Workaround: set Boolean attributes in the PID/(Q)EAA. | See clause 4.3.8
Gordian Envelopes | Salted attribute hashes structured in a Directed Acyclic Graph | Yes | Verifier unlinkability can be achieved if unique salts are used when creating the salted attribute hashes, but the schemes are not fully unlinkable. | No dynamic predicates are supported. Workaround: set Boolean attributes in the PID/(Q)EAA. | See clause 4.3.9
HashWires | Salted attribute hashes structured in a chain of hashes | Yes | Verifier unlinkability can be achieved if unique salts are used when creating the salted attribute hashes, but the schemes are not fully unlinkable. | HashWires supports range proofs that can be combined with selectively disclosed salted hashes of attributes (see clause 4.3.7). | See clause 4.3.7

Category: Multi-message signature schemes
BBS+ signatures | Multi-message signature scheme based on ECC bilinear pairings | ZKPs generated pre-quantum will remain plausibly safe post-quantum. BBS+ is plausibly vulnerable in a post-quantum world. | Fully unlinkable with blinded signatures. | Yes (in theory) | See clause 4.4.2
BBS# signatures | Multi-message signature scheme based on conventional elliptic curves (such as the NIST P-256 curve) | ZKPs generated pre-quantum will remain plausibly safe post-quantum. BBS# is plausibly vulnerable in a post-quantum world. | Fully unlinkable with blinded signatures. | Yes (in theory) | See clause 4.4.3
Camenisch-Lysyanskaya (CL) signatures | Multi-message signature scheme based on the strong RSA assumption | ZKPs generated pre-quantum will remain plausibly safe post-quantum. CL-signatures are plausibly vulnerable in a post-quantum world. | Fully unlinkable with blinded signatures. | Yes (in theory) | See clause 4.4.1
Mercurial Signatures | Multi-message signature scheme based on Decisional Diffie-Hellman (DDH) | ZKPs generated pre-quantum will remain plausibly safe post-quantum. MS is plausibly vulnerable in a post-quantum world. | Fully unlinkable with blinded signatures. | Yes (in theory) | See clause 4.4.4
Pointcheval-Sanders Multi-Signatures (PS-MS) | Multi-message signature scheme based on improved CL-signatures | ZKPs generated pre-quantum will remain plausibly safe post-quantum. PS-MS is plausibly vulnerable in a post-quantum world. | Fully unlinkable with blinded signatures. | Yes (in theory) | See clause 4.4.5

Category: Proofs for arithmetic circuits (programmable ZKPs)
Bulletproofs | Proofs for arithmetic circuits based on Fiat-Shamir heuristics | No | Yes | Yes | See clause 4.5.4
zk-SNARKs | Proofs for arithmetic circuits based on the various mechanisms in clause A.4 | Some zk-SNARK schemes are QSC, see Table A.4. | Yes | Yes | See clauses 4.5.2 and A.4
zk-STARKs | Proofs for arithmetic circuits based on various mechanisms | Yes | Yes | Yes | See clause 4.5.3

A.2 (Q)EAA formats with selective disclosure
Table A.2 provides a comparison of the investigated credential formats with selective disclosure.

Table A.2: Comparison of credential formats with selective disclosure

(Q)EAA format | Scheme | Encoding | Maturity | Reference

Category: Atomic attribute credentials
IETF X.509 attribute certificates | Atomic attribute (Q)EAAs | ASN.1/DER | X.509 attribute certificate (IETF RFC 5755 [i.158]) is an IETF PKIX standard | See clause 5.2.2
W3C Verifiable Credentials | Atomic attribute (Q)EAAs | JSON-LD or JWT | W3C VC Data Model [i.264] is a standard | See clause 5.2.3

Category: Salted attribute hashes
IETF SD-JWT | Salted attribute hashes | JSON (JWT) | IETF SD-JWT draft standard [i.155], several reference implementations | See clause 5.3.2.1
IETF SD-JWT VC | Salted attribute hashes | JSON (JWT) | IETF SD-JWT VC draft standard [i.143], several reference implementations | See clause 5.3.2.2
ISO/IEC 18013-5 [i.181] Mobile Security Object (MSO) | Salted attribute hashes | CBOR/CDDL (COSE) | ISO/IEC 18013-5 [i.181], implemented in several wallets, deployed in the US | See clause 5.3.3

Category: Multi-message signature schemes
Hyperledger AnonCreds | CLRSA-signatures | JSON (JWS) | Deployed in Government of British Columbia, IDunion, and the IATA Travel Pass | See clause 5.4.4
W3C VC with ZKP | Various MMS schemes, CL-signatures explicitly referenced | JSON (LD) | W3C VC Data Model [i.264], implemented in several wallets | See clause 5.4.1
W3C VC Data Integrity with BBS+ signatures | BBS+ signatures | JSON (LD) | W3C VC Data Integrity [i.263] | See clause 5.4.2
W3C VC Data Integrity with ECDSA-SD | ECDSA-SD signatures | JSON (LD) | W3C VC Data Integrity [i.263] | See clause 5.4.3

Category: JSON container formats
IETF JSON Web Proof | Flexible: CL-signatures, BBS+, etc. | JSON (JWS) | IETF JSON Web Proof draft standard [i.90] | See clause 5.5.1
W3C JSON Web Proofs For Binary Merkle Trees | Merkle trees | JSON Web Proofs | W3C draft specification | See clause 5.5.1
JSON Web Zero Knowledge (JWZ) | Zero-knowledge proofs, for example Groth-16 | JSON Web Proofs | Part of Iden3 protocol stack, several reference implementations | See clause 5.5.3

A.3 Selective disclosure systems and protocols
Table A.3 provides a comparison of the investigated selective disclosure protocols.
Table A.3: Comparison of selective disclosure systems and protocols

Protocol | Credentials | Protocol | Maturity | Reference

Category: Atomic attribute (Q)EAAs
IETF X.509 attribute certificate (protocol) | IETF X.509 attribute certificates | Attribute certificate authorization protocol | X.509 attribute certificate [i.158] is an IETF PKIX standard | See clause 6.2.1
VC-FIDO | W3C Verifiable Credentials | VC-FIDO | Deployed as a prototype at the NHS in the UK | See clause 6.2.2

Category: Salted attribute hashes protocols
Singapore's Smart Nation OpenAttestation | Document Integrity credentials | OpenAttestation protocol [i.211] | Deployed in Singapore's Smart Nation | See clause 6.3.1

Category: Multi-message signature schemes
Hyperledger AnonCreds (protocol) | AnonCreds [i.131] based on CLRSA-signatures | Hyperledger Aries protocol [i.132] in conjunction with Hyperledger AnonCreds SDK [i.131] | Deployed in Government of British Columbia, IDunion, and the IATA Travel Pass | See clause 6.4.1
Direct Anonymous Attestation (DAA) | DAA credentials | ISO/IEC 20008-2 [i.184] | Deployed at large scale by TCG in TPM 2.0 and Intel® in EPID 2.0 | See clause 6.4.2
Iden3 | W3C Verifiable Credentials with Iden3 Signature Schemes | Verifiable Credentials with BJJ Signature [i.138] and Verifiable Credentials with SMT Signature [i.140] | Web2 and Web3 projects performed at organizations such as the Ethereum Foundation, Deutsche Bank, HSBC, Kaleido, Rarimo, and others are using the Iden3 stack | See clause 6.9

Category: Proofs for arithmetic circuits solutions
Cinderella | X.509 certificates | zk-SNARK (Pinocchio) | In research phase | See clause 6.5.2
zk-creds | ICAO eMRTDs | zk-SNARK (Pinocchio) | In research phase | See clause 6.5.3
Anonymous credentials from ECDSA | mdoc [i.181] | ECDSA | Implemented in a prototype of Google® Wallet | See clause 6.5.4
Crescent | JWT and mdoc [i.181] | Sigma-protocols combined with zk-SNARK (Groth16) | In research phase | See clause 6.5.5

Category: ABC (Attribute Based Credentials)
Idemix | Idemix ABC credentials [i.136] based on CL-signatures | Idemix ABC protocol [i.136] | Implemented by IBM®, Hyperledger Fabric [i.133], the IRMA project [i.227], and the EU projects PrimeLife [i.224] and ABC4Trust [i.137] | See clause 6.6.1
U-Prove | U-Prove ABC credentials [i.201] | U-Prove ABC protocol [i.201] | Implemented in the Microsoft® Identity Metasystem and the EU project ABC4Trust [i.137] | See clause 6.6.2
ISO/IEC 18370 [i.183] | U-Prove ABC credentials [i.201] | ISO/IEC 18370 [i.183] | Implemented in U-Prove solutions, security flaws detected | See clause 6.6.3
Keyed-Verification Anonymous Credentials (KVAC) | Keyed-Verification Anonymous Credentials | BBS_MAC+ [i.15] | Implemented as a prototype on SIM cards | See clause 6.6.4
FIDO-AC | ICAO eMRTDs | FIDO2 (WebAuthn) | In research phase | See clause 6.6.5

Category: ISO mobile driving license (ISO mDL)
ISO/IEC 18013-5 [i.181] (device retrieval) | ISO/IEC 18013-5 [i.181] mDL/MSO [i.181] | ISO mDL/MSO over BLE/NFC | ISO standard, implemented in several wallets, deployed in the US | See clause 6.7.2
ISO/IEC 18013-7 [i.182] (unattended) | ISO/IEC 18013-5 [i.181] mDL/MSO [i.181] | SIOP2 [i.216], OID4VP [i.214] | Draft ISO/IEC CD 18013-7 [i.182] standard, correlated with ISO/IEC CD 23220-4 [i.187] | See clause 6.7.4
ISO/IEC 23220-4 [i.187] | ISO mDL [i.181], SD-JWT [i.155], etc. | SIOP2 [i.216], OID4VP [i.214] | Draft standard, correlated with ISO/IEC CD 18013-7 [i.182] | See clause 6.7.5
ISO/IEC 18013-5 [i.181] (server retrieval) | OpenID Connect ID-Token [i.212] | OpenID Connect (OIDC) Core [i.212] | ISO standard, implemented in several wallets, deployed in the US | See clause 6.7.3

Category: OpenID for Verifiable Credentials (OpenID4VCI)
OpenID for Verifiable Credential Issuance (OpenID4VCI) | ISO mDL [i.181], SD-JWT [i.155], etc. | OpenID4VCI [i.213] | Draft standard, implemented in several wallets and pilot projects | See clause 6.8.1
OpenID for Verifiable Presentations (OpenID4VP) | ISO mDL [i.181], SD-JWT [i.155], etc. | OpenID4VP [i.214] | Draft standard, implemented in several wallets and pilot projects | See clause 6.8.2
OpenID4VC High Assurance Interoperability Profile (HAIP) | ISO mDL [i.181], SD-JWT [i.155], etc. | HAIP [i.215] | Draft standard, implemented in several wallets and pilot projects | See clause 6.8.3

A.4 zk-SNARK protocols
Table A.4 provides a comparison of the different zk-SNARK protocols. The comparison is based on transparency, universality, and plausible quantum-safety. A transparent protocol is defined as one that does not require any trusted setup and uses public randomness. A universal protocol is defined as one that does not require a separate trusted setup for each circuit. A plausibly quantum-safe protocol is one that is not considered to be vulnerable to attacks by quantum computing algorithms.

Table A.4: Comparison of zk-SNARK protocols

Protocol | Published | Transparent | Universal | Quantum-safe
Pinocchio [i.220] | 2013 | No | No | No
Geppetto [i.72] | 2015 | No | No | No
TinyRAM [i.19] | 2013 | No | No | No
Buffet [i.249] | 2015 | No | No | No
ZoKrates [i.85] | 2018 | No | No | No
xJsnark [i.195] | 2018 | No | No | No
vnTinyRAM [i.21] | 2014 | No | Yes | No
MIRAGE [i.194] | 2020 | No | Yes | No
Sonic [i.198] | 2019 | No | Yes | No
Marlin [i.66] | 2020 | No | Yes | No
PLONK [i.116] | 2019 | No | Yes | No
Spartan [i.200] | 2019 | No | Yes | Yes
SuperSonic [i.39] | 2020 | Yes | Yes | No
Hyrax [i.250] | 2018 | Yes | Yes | No
Halo [i.31] | 2019 | Yes | Yes | No
Virgo [i.273] | 2020 | Yes | Yes | Yes
Ligero [i.4] | 2017 | Yes | Yes | Yes
Aurora [i.20] | 2019 | Yes | Yes | Yes
Groth16 [i.124] | 2016 | No | No | No
Ligetron [i.248] | 2024 | No | Yes | Yes

Annex B: Hash wires
B.1 HashWires applied on inequality tests
B.1.1 Using a hash chain for inequality tests
A fundamental building block in HashWires is hash chains. Given two collision-resistant hash functions H1 and H2, an integer value k up to some maximum, and a random value r, the issuer computes the commitment C = H2(H1^k(r)). Here, H1^k(∙) represents k iterations of the function H1, such that the digest H1^(k-1)(r) is the pre-image of H1^k(r). The issuer signs C and sends r together with the signed commitment to the user. The user can now produce a hash chain of the same length as a threshold t by computing the range proof p = H1^(k-t)(r). The user signs a presentation containing p, and the verifier checks if H2(H1^t(p)) = C. If the check passes, the verifier knows that C is the commitment to some value k ≥ t, but does not learn k.
Figure B.1: A hash chain based inequality test
In Figure B.1, the issuer signs the leftmost bold box representing the commitment C. The user presents the dotted bold lined box representing the threshold value t. The verifier accepts p as a proof for the inequality k ≥ t. Note that for an age proof, the value k should represent the user's actual age at the time of issuance and that C represents the minimum age value 0.
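A minimal Python sketch of the scheme above, assuming SHA-256 is used for both H1 and H2 with simple prefix-based domain separation (an illustrative choice; see the notes below):

import hashlib, secrets

def H1(b: bytes) -> bytes: return hashlib.sha256(b"H1" + b).digest()
def H2(b: bytes) -> bytes: return hashlib.sha256(b"H2" + b).digest()

def iterate(f, x: bytes, times: int) -> bytes:
    for _ in range(times):
        x = f(x)
    return x

k = 42                        # committed value, e.g. the age at issuance
r = secrets.token_bytes(32)   # issuer's random seed
C = H2(iterate(H1, r, k))     # commitment, signed by the issuer

t = 18                        # threshold requested by the verifier
p = iterate(H1, r, k - t)     # holder's range proof for k >= t

assert H2(iterate(H1, p, t)) == C   # verifier's check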
NOTE 1: The hash functions H1 and H2 should be listed in the SOG-IS table of agreed hash functions [i.237].
NOTE 2: The digital signature scheme should be listed in the SOG-IS table of agreed signature schemes [i.237].
NOTE 3: The use of digital signatures that are QSC should be possible.
NOTE 4: The verifier does not learn the value k, nor any H1^j(r) where j < k - t.
NOTE 5: A single hash function with two different salts, or a keyed HMAC with two keys, are both alternatives to H1 and H2.
When considering non-negative integers, one obvious representation is that the first digest represents the maximum value, and each subsequent digest represents a decrement by 1. The problem with that approach is that it does not scale. Take for instance age over or equal to proofs. Here, the user should be able to prove that their age is equal to or above 18 the very day they turn 18, but not before. A hash chain for 18 years in days requires roughly 6 575 digests. This is further exacerbated by the batch issuance requirement for PIDs and (Q)EAAs to prevent verifier collusion (the Provider would need to create a new hash chain for every attestation, since the commitment would be correlatable even with a salt). Also, each verifier needs to recompute the threshold length of the chain at every presentation. With ~450 million EU citizens, and potentially multifold more inequality tests for age-based services, optimization is required.
B.1.2 Using multiple hash chains for inequality tests
The optimization presented in the HashWires paper ensures that the commitment generation, proof and verification, and proof size all scale well even for very large n-digit numbers. The core idea is to rely on multiple hash chains. However, instead of representing decrements starting from the maximum number, each digest represents the commitment to the digits of a number d_n·10^n + d_(n-1)·10^(n-1) + ... + d_0·10^0.
For instance, using the commitments to the coefficients in 22 = 2·10^1 + 2·10^0, a user could generate a proof for the inequality x ≥ 10. Note, however, that the user would not be able to use that commitment to prove x ≥ 13 without revealing a lot more information than necessary (more specifically, the user would need to reveal commitments to 20, i.e. prove x ≥ 20). Chalkias et al. [i.58] here describe the idea of Minimum Dominating Partitions (MDP) to address the above problem. In the HashWires paper, there is a formal definition of MDP, which relies on the idea that a number dominates another number if each digit of the former is greater than or equal to the corresponding digit of the latter. The authors present an algorithm that takes a non-negative integer as input and outputs one or more non-negative integers that represent numbers that dominate other numbers, where the collection of numbers output can dominate any other number in the entire range of the requested inequality. A simpler explanation is that the MDP is generated using a recursive function that takes a number as input, and outputs the first (largest) number that the input cannot dominate. That new output number then becomes the new input number, and so on. For instance, using base 10, the number 84 can dominate 84, 83, 82, 81, 80 but not 79. Subsequently, 79 can dominate all numbers down to 0. So MDP(84) = {84, 79}. Similarly, MDP(3413) = {3413, 3409, 3399, 2999}. Given a set of MDP partitions, the user can use hash chains to dominate any number up to and including the first element, by simply picking the element that dominates the requested threshold value. For instance, given MDP(3413) = {3413, 3409, 3399, 2999}, the user can use the 2999 element to prove x ≥ 376.
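The following Python sketch (the function name and loop structure are illustrative, not taken from the HashWires paper) implements this recursive rule: repeatedly replace the current value with the largest number it cannot dominate, by rounding down just above the least significant non-maximal digit and filling the lower positions with maximal digits.

def mdp(value: int, base: int = 10) -> list[int]:
    partitions = [value]
    while True:
        j = 0
        while (value // base**j) % base == base - 1:
            j += 1  # skip positions already holding the maximum digit
        # Largest number the current value cannot dominate.
        value = (value // base**(j + 1)) * base**(j + 1) - 1
        if value < 0:
            return partitions
        partitions.append(value)

assert mdp(84) == [84, 79]
assert mdp(3413) == [3413, 3409, 3399, 2999]
assert mdp(int("312", 4), base=4) == [int(s, 4) for s in ("312", "303", "233")]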
When the user can use more than a single element from the MDP to dominate the threshold number, the user picks the element that reveals the least amount of information.
Figure B.2: Basic HashWires commitment
Figure B.2 illustrates a basic HashWires commitment to the number 312 in base 4, with MDP(312) = {312, 303, 233}. Each hash chain represents a commitment to a specific digit in each MDP partition. A further optimization can be made by reusing the same hash chain for multiple different commitments. The idea here is to generate one hash chain per digit in the largest number, with the length of the hash chain being the largest value of any digit in any MDP partition.
Figure B.3: Optimized HashWires commitment
Figure B.3 shows an optimized HashWires commitment to the number 312 in base 4 with MDP(312) = {312, 303, 233}. Each hash chain represents the commitments to the digit values of each partition. The green dotted line illustrates how the values are sourced for the third digit in each MDP partition. Hash chains are coloured to correspond to their commitments, i.e. the second digit in each MDP partition would source its commitment from the middle hash chain, and the first digit in each partition would source commitments from the rightmost hash chain.
The optimized HashWires approach is orders of magnitude more efficient than using a single hash chain. Specifically, MDP(6575) = {6575, 6569, 6499, 5999} (18 years in days) requires 3 + 6 + 9 + 9 + 9 = 36 hash operations (three for the seeds, then 6 for the fourth digit, and then 9 for each subsequent digit). In fact, using base 10, the maximum possible number of hash chains will never exceed the number of digits multiplied by 10.
One concern with the optimized HashWires approach is that it may leak information about the partitions, and thus reveal the user's actual number. To avoid such leaks, the authors of the HashWires paper suggest the use of an accumulator that can hide the actual commitments. While the use of an accumulator addresses the concern, it is also not necessary when the attestation format is capable of selectively disclosing the particular commitment that the user needs to prove the inequality, and when attestations are batch issued and used only once (that is not to say that the issuer cannot choose to include the accumulator value as a selectively disclosable value).
B.1.3 Protecting optimized HashWires with SD-JWT or MSO
The MDP partitions leak information about the number in several ways. Therefore, it is important that the user only reveals the exact commitment that is required for the requested threshold inequality proof. The original HashWires paper achieves this using an accumulator, but it is also possible to rely on the selective disclosure capabilities of SD-JWT and MSO. For reasons of readability, illustrative examples are given using SD-JWT and without an accumulator, but the concept is equally applicable to MSO and every other salted attribute hashes based approach.
NOTE: Combining HashWires range proofs with selectively disclosed salted hashes of attributes is suggested by Peter Lee Altmann (Swedish Digitalization Agency) and Sebastian Elfors (IDnow) to the present document. The idea is not peer reviewed and is meant primarily to illustrate the idea of a PID/(Q)EAA Provider signing computational inputs and parameters to enable dynamic predicates, e.g. inequality tests.
With modifications, the proposal could enhance the mdoc [i.181] and IETF SD-JWT [i.155] standards to cater for predicate proofs in addition to selectively disclosing claims.
Consider an optimized HashWires commitment for an n-digit number: a tuple of hash chain roots and seeds (r_1, ..., r_n, s_1, ..., s_n), where r_i denotes the hash chain root for digit position i in each MDP partition for a value x, and s_i denotes the seed used in H(∙) to generate the first value of the hash chain for each digit position i. Each MDP partition is a combination of hash roots. For instance, MDP(6575) = {6575, 6569, 6499, 5999} would require four seeds, resulting in four hash chains, one for each digit. The corresponding hash chain lengths for 6575 are 6 for the 10^3 digit, and 9 for each of the 10^2, 10^1 and 10^0 digits. Denoting by C_i(d) the value at height d in the hash chain for digit position i, more precisely:
• 6575 requires the commitment: C_1(6), C_2(5), C_3(7), C_4(5)
• 6569 requires the commitment: C_1(6), C_2(5), C_3(6), C_4(9)
• 6499 requires the commitment: C_1(6), C_2(4), C_3(9), C_4(9)
• 5999 requires the commitment: C_1(5), C_2(9), C_3(9), C_4(9)
Each commitment is required to be included in a disclosure, and then signed as part of the SD-JWT or MSO. The PID/(Q)EAA Provider is required to also include a number of decoy digests to hide the number of MDP partitions, or alternatively commit only to an accumulator value (e.g. a Merkle Tree as proposed in the original HashWires paper, or the digest over the concatenation of all the decoys and commitments). In Figure B.3, and in the example below, the commitments are included as separate disclosures for illustrative purposes only.
Figure B.4: Optimized HashWires commitment using SD-JWT
Figure B.4 illustrates an optimized HashWires commitment to the number 312 in base 4 protected by the _sd object suitable for an SD-JWT. Each commitment to the three partitions is salted (box with S), contains an MDP partition identifier key, and the hash chain roots for each MDP partition. The hash over the salt, key, and commitment is included in the _sd (red highlights). The other digests in the _sd object are decoys to hide the number of MDP partitions the user has. Each commitment is included as a disclosable value for illustrative purposes. Optionally, an issuer could instead add the commitments to an accumulator, which would be disclosable. This is an illustration of HashWires, although implementations may differ.
EXAMPLE: The random values needed to initiate each hash chain with H(∙). The values are not sent to the verifier.
{
  "10^0": "f6a23b90b9f07f34f33dfd4e5de87adab167b6ea9eb060163e741ac26f16edc1",
  "10^1": "3026950fd2d2c6c7e23c8a8b0a80928d5cdac0f953699a96e02c1033379ed392",
  "10^2": "d942fdb1d9c3274a257154ef2f6f66161ea5872163dbb8daa40c7496e5365242",
  "10^3": "ba0acaf18a6a966a3eecbb791e9e22bc45d3a1183ff47342ab9cbde4635a828c",
  "10^4": "f32da5b457d45e0e6113d744fff316a1882f77fbf6ef5f92456faf84dfc8bd02"
}
The disclosure of the commitment to the partition 13699, using the format ["salt", "key", <value>]:
["TpPrKdZ73ZR7JoUU-FCiTYvlQ4-QQ5ab9V2Z-cXze8E", "0",
 ["927eb07e71c648f73bec94e03d29cb41a0efc4f247a999d49f1318e3e8afbb84",
  "b4b2a297499d63dd1ae5ee64c1aa21667b43b8974be3b3e17273005951413a56",
  "854983f72c56c0102cac32edcce8b7c52365edc793cdba37d5603221b21d0a95",
  "040be38408070da03bd6ca9e63999fac072adc20e1ba6f4513861db317a82a54",
  "ad1a9492c27be7d33c7d00e33b0ca223e02a07440394b4036ded6f1f2c990c7a"]]
The base64url encoded SHA256 digest included in the _sd: "zDHz3CX-akEjrDddMc8RYemeUCmEN0yjT1JIM_KXJd4"
NOTE 1: The user is required to only disclose the particular partition it uses to generate the inequality proof.
NOTE 2: The issuer can combine the disclosure digests into a single value using an accumulator or by concatenating the disclosure digests and the decoys. Implementation specific profiles are required.
The user, given a threshold value t, is required to select the partition that can generate the hash chains required for the inequality x ≥ t. The user sends the disclosure of the commitment required for the inequality test, and the threshold values for each digit. The verifier can compute the hash chains using the threshold value for each digit, and compare the root hashes with the issuer-signed commitments in the SD-JWT or MSO. If the signature is verified, the verifier accepts the inequality test.
B.1.4 Less than or equal to and range proofs
Any range proof, a ≤ x ≤ b, can be constructed using two inequality tests, one proving the inequality at the lower bound and the other at the upper bound. The above demonstrates an inequality test of type a ≤ x. To generate a less than or equal to (x ≤ b) proof, it is necessary to extend the above described approach. Using a whole number K, the issuer can generate a commitment to K - x, so that the upper bound test becomes the inequality K - x ≥ K - b. Both inequality tests rely solely on hash digests, and combined they can generate any valid range proof using issuer-signed commitments.
EXAMPLE:
Figure B.5: Hash chain based range proof
Figure B.5 illustrates a hash chain based range proof for the range 4 ≤ x ≤ 8. The issuer signs the bold commitments to both the lower bound test 4 ≤ x and the upper bound test x ≤ 8. The user presents both inequality tests to the verifier. The verifier combines the two proofs for inequality tests into a range proof, and accepts the range proof if the issuer's signature over the commitments is valid.
NOTE 1: For a range proof, the issuer is required to sign the parameter K used for the inequality K - x ≥ K - b.
NOTE 2: The attestation issuance date impacts the proof that the user generates. A user generates a proof on an inequality test not for the requested threshold t directly, but subtracts the difference between the issuance date and the presentation date. A similar logic applies for age under or equal to proofs, as well as for range proofs.
HashWires represents an efficient way to generate inequality tests and range proofs using only SHA256. Running 70 000 loops on a dual core 2,2 GHz processor, it takes 72 µs ± 5,58 µs to generate the commitment for a 3-digit inequality test, and 156 µs ± 31,7 µs for a 6-digit one. The proof size is constant and the verification is faster than the generation.
B.2 Hash chain code example
This annex contains a Python code example of how to use hash chains to calculate a predicate of a user's age.
import secrets
from hashlib import sha256

# Get the user's age
while True:
    try:
        age = int(float(input("Enter your age: ")))
        if age < 0:
            raise ValueError
        break
    except ValueError:
        print("Enter a non negative number.")

# The issuer generates a seed and the commitment the user will need.
seed = secrets.token_bytes()
commitment = sha256(seed)
hash_chain = [commitment.hexdigest().encode('ascii')]

# The issuer then generates the hash chain.
for i in range(age):
    commitment = sha256(commitment.hexdigest().encode('ascii'))
    hash_chain.append(commitment.hexdigest().encode('ascii'))

# The hash chain is reversed so that the index values equal age
hash_chain.reverse()

# The issuer includes the following claim in the signed attestation
age_is_zero = hash_chain[0]

# The verifier wants a proof for age_over_n
n = 10
age_proof = None

# The user has to generate the following age proof
assert isinstance(n, int) and n >= 0, "The value is a non-negative integer."
try:
    age_proof = hash_chain[n] if n != 0 else age_is_zero
    print(f"The proof value is: {age_proof}")
    print(f"Copy this value for the next cell's input prompt: {age_proof.decode('ascii')}")
except IndexError:
    print(f"The user does not have a long enough hash chain for the required age proof of {n}")

# The user sends the age proof to the verifier, who verifies the chain length
age_proof_test = input("Copy paste the provided value from the previous cell: ")
age_proof_test = age_proof_test.encode('ascii')
above_n = False
if n == 0 and age_proof_test == age_is_zero:
    above_n = True
else:
    for i in range(n):
        age_proof_test = sha256(age_proof_test).hexdigest().encode('ascii')
    above_n = True if age_proof_test == age_is_zero else False
print(f"The user provided valid proof for the age is equal to or greater than {n} test: {above_n}")
B.3 HashWires for SD-JWT and MSO
Code examples in Python and descriptions on how to use HashWires for inequality tests for SD-JWT and MSO have been provided by Peter Lee Altmann at the repository [i.6].
Annex C: Post-quantum safe zero-knowledge proofs and anonymous credentials
C.1 General
This annex describes research and innovations of new types of ZKP schemes. These innovative ZKP schemes are still being researched at an academic level and are not yet standardized, so they cannot be considered for the EUDI Wallet at the time of writing (August 2025). Nevertheless, the research on ZKP schemes is described in this annex, since the schemes may be implemented and standardized later, which could be of interest for future standardization of the EUDI Wallet.
C.2 Quantum physics applied on ZKP schemes
C.2.1 Background
The advent of quantum computers is typically considered a disruption for classic cryptography. In 1994, Peter Shor published the paper "Algorithms for quantum computation: discrete logarithms and factoring" [i.235], which described how quantum computers can use certain algorithms for finding discrete logarithms and factoring integers. As a consequence, classic asymmetric cryptographic algorithms such as RSA and ECDSA, which are based on the integer factorization and discrete logarithm problems, are vulnerable to quantum computing attacks in a post-quantum world.
One countermeasure is to employ Quantum-Safe Cryptography (QSC) algorithms, i.e. cryptographic algorithms (typically public-key algorithms) that are expected to be secure against a cryptanalytic attack by quantum computers. NIST conducts a research program [i.210] to identify candidates for QSC algorithms that can be standardized. The signature scheme finalists (December 2023) are FALCON [i.75], FIPS 204 [i.207] (based on CRYSTALS Dilithium [i.75]) and FIPS 205 [i.208] (based on SPHINCS+ [i.238]). Furthermore, Dutto et al. have published the paper "Toward a Post-Quantum Zero-Knowledge Verifiable Credential System for Self-Sovereign Identity" [i.83], which analyses quantum-safe variants of BBS+ and CL-signatures based on a lattice-based scheme.
The paper also identifies the open issues for achieving VCs suitable for selective disclosure, non-interactive renewal mechanisms, and efficient revocation.
NOTE: The countermeasures above describe lattice-based or hash-based algorithms that are executed on classic computers with the intention to protect against quantum computing attacks with Shor's algorithm; the QSC algorithms per se are not designed for quantum computers.
In contrast to quantum computing attacks on classic cryptography, quantum physics and quantum computers can be used as an advantage when designing cryptographic protocols for a post-quantum world. There exist Quantum Key Distribution (QKD) protocols and quantum-based ZKP schemes, which are described in the following clauses.
C.2.2 Quantum Key Distribution (QKD)
The most mature quantum cryptographic application is Quantum Key Distribution (QKD), which utilizes quantum mechanics to share a random secret key between two parties, which can then be used to encrypt and decrypt messages. A unique property of quantum key distribution is the ability to detect if any third party has tried to eavesdrop on the communication channel between the two parties. The first QKD scheme was BB84 [i.24], invented by Charles Bennett and Gilles Brassard in 1984. BB84 is based on Heisenberg's uncertainty principle and uses the polarization state of photons to encode key bits, which means that the quantum data encoded as photons cannot be copied or measured without disturbing the key exchange protocol. There exist several commercial products that implement QKD schemes, which can be used for example to share symmetric AES keys. A tutorial on QKD with more information on this subject is published by IEEE [i.274].
C.2.3 Quantum physics applied to the graph 3-colouring ZKP scheme
The graph 3-colouring (G3C) problem is a classic problem that was introduced already in 1856. The graph 3-colouring problem takes as input a graph (G) and decides whether it can be coloured using only three (3) colours, such that no two adjacent vertices (nodes) have the same colour. The graph 3-colouring problem is proven to be NP-complete. The graph 3-colouring problem can be used as a ZKP scheme as described below.
Let G be a graph with n vertices and define the set of vertices as V = {v1, ..., vn}. Also define the set of edges as E = {ei,j}, where ei,j is the edge between vertices vi and vj. The graph G is known to both parties. The prover's private knowledge is the 3-colouring of the graph G, whilst the verifier only knows the graph shape (with black "hidden" colours). The protocol is executed as follows:
1) Prover: Randomly permute the 3 colours of graph G. Commit to the permutation of the colours of all vertices, such that ci = P(vi, colour of vi).
2) Prover: Share the graph G (with black "hidden" colours) with the verifier.
3) Verifier: Select edge ei,j and send ei,j to the prover.
4) Prover: Open ci and cj.
5) Verifier: Accept if ci ≠ cj, else reject.
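One round of this protocol can be sketched in runnable Python as follows (toy graph and colouring; salted SHA-256 digests stand in for the commitment function P):

import hashlib, secrets, random

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
colouring = {0: "red", 1: "blue", 2: "green", 3: "red"}  # prover's secret

# 1) Prover permutes the colours and commits to each vertex colour.
colours = ["red", "blue", "green"]
perm = dict(zip(colours, random.sample(colours, 3)))
salts = {v: secrets.token_bytes(16) for v in colouring}
commit = {v: hashlib.sha256(salts[v] + perm[c].encode()).hexdigest()
          for v, c in colouring.items()}

# 2)-3) Verifier picks a random edge as the challenge.
i, j = random.choice(edges)

# 4) Prover opens the two committed vertices.
opening = {v: (salts[v], perm[colouring[v]]) for v in (i, j)}

# 5) Verifier checks the openings and that the colours differ.
ok = all(hashlib.sha256(s + c.encode()).hexdigest() == commit[v]
         for v, (s, c) in opening.items())
assert ok and opening[i][1] != opening[j][1]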
The classic graph 3-colouring ZKP scheme can be transposed to the quantum world. Simply put, large entangled quantum states are utilized for a graph in a quantum computer, equivalent to how the colour permutations are computed on a graph in a classic computer. The quantum graphs may also be shared between the prover and verifier by using quantum key distribution as described in the previous clause. The paper "Experimental relativistic zero-knowledge proofs" [i.4] describes how the graph 3-colouring ZKP can be implemented in a way that is theoretically quantum computing safe.

The quantum cryptography behind the graph 3-colouring ZKP schemes goes beyond the scope of the present document. For further reading the following research papers are recommended: "Zero-knowledge against quantum attacks" [i.251] by Watrous, "Post-quantum Efficient Proof for Graph 3-Coloring Problem" [i.87] by Ebrahimi, and "Zero-knowledge proof systems for QMA" [i.35] by Broadbent et al.

C.2.4 ZKP using the quantum Internet (based on Schnorr's algorithm)

Another quantum ZKP scheme is based on Schnorr's non-interactive zero-knowledge proof [i.168]. Assume that the prover wants to prove that it knows the secret value x such that Y = g^x mod p, for prime p and generator g, with g, p, and Y public. Schnorr's algorithm can then be performed as follows:

1) The prover chooses the value r and calculates t = g^r mod p. The prover sends the value t to the verifier.
2) The verifier sends the random value c to the prover.
3) The prover calculates s = r + cx, and sends the value s to the verifier.
4) The verifier checks that g^s ≡ t × Y^c mod p.

Schnorr's algorithm can be proven correct as follows:

t × Y^c ≡ g^r × (g^x)^c mod p ≡ g^(r+cx) mod p ≡ g^s mod p

Carney [i.51] has described how to replace the use of the generator g in Schnorr's scheme with a quantum mechanical qubit rotation, and how to perform zero-knowledge proofs using quantum algorithms over the quantum Internet. The applied quantum cryptography goes beyond the scope of the present document, but for further reading the paper "On Zero-Knowledge Proofs over the Quantum Internet" [i.51] is recommended.
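Before turning to the quantum variant, the classic protocol above can be illustrated with a minimal Python sketch. The toy parameters below (the prime p = 23 with primitive root g = 5) are illustrative only; a real deployment would use a large, standardized group.

import secrets

# Toy parameters for illustration only: g = 5 is a primitive root modulo the prime
# p = 23, so exponents are reduced modulo the group order p - 1.
p, g = 23, 5

x = secrets.randbelow(p - 2) + 1   # the prover's secret value
Y = pow(g, x, p)                   # public value Y = g^x mod p

# 1) The prover chooses r and sends t = g^r mod p to the verifier.
r = secrets.randbelow(p - 2) + 1
t = pow(g, r, p)

# 2) The verifier sends a random challenge c.
c = secrets.randbelow(p - 2) + 1

# 3) The prover calculates s = r + cx (reduced modulo the group order).
s = (r + c * x) % (p - 1)

# 4) The verifier checks that g^s equals t * Y^c mod p.
assert pow(g, s, p) == (t * pow(Y, c, p)) % p
print("Proof accepted: the prover knows x without revealing it")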
C.2.5 Conclusions on quantum ZKP schemes

Quantum cryptography takes advantage of quantum computers to design new cryptographic protocols for a post-quantum world. The Quantum Key Distribution (QKD) schemes are rather mature and are implemented in several commercial products. Hence, the QKD schemes may be used for sharing keys between two parties that use classic ZKP schemes.

Several quantum cryptographic algorithms for use with ZKP are also being developed. The classic graph 3-colouring scheme and Schnorr's algorithm have been transposed into quantum cryptographic algorithms. There are also relativistic quantum ZKP protocols [i.4] with promising applications for identification tasks and blockchain applications such as cryptocurrencies or smart contracts.

The quantum ZKP schemes are still being researched at an academic level and are not yet standardized, so they cannot be considered for the EUDI Wallet yet. It is however worthwhile to monitor the research and development of quantum ZKP schemes: if the quantum ZKP schemes get standardized and implemented in commercial products, they could be considered for a future revision of the eIDAS regulation.

Annex D: EUDI Wallet used with ISO mDL flows

D.1 EUDI Wallet used with ISO mDL device retrieval flow

D.1.1 Overview of the ISO mDL device retrieval flow

The scope of the present clause is to describe how the EUDI Wallet can present ISO mDL selectively disclosed elements over the ISO mDL device retrieval flow, and how eIDAS2 trust services can be used to support this process.

NOTE: The ISO mDL device retrieval flow is mandatory for the EUDI Wallet according to the ARF [i.71].

The ISO mDL device retrieval flow is described in ISO/IEC 18013-5 [i.181], sections 6.3.2, 6.3.2.1 (as flow 1) and 6.3.2.4. The present clause will not repeat the entire ISO mDL device retrieval process, although a brief summary is provided below for readability, with references to ISO/IEC 18013-5 [i.181]. The ISO mDL device retrieval flow is illustrated in Figure D.1.

Figure D.1: Overview of the ISO mDL device retrieval flow

On a high level, the ISO mDL device retrieval flow can be divided into the following phases, where the ISO mDL reader is equivalent to an attended eIDAS2 relying party:

• Initialization phase, whereby the ISO mDL app is activated either by the user or triggered by NFC contact with the ISO mDL reader (see ISO/IEC 18013-5 [i.181], section 6.3.2.2 for more information).
• Device engagement phase, whereby the ephemeral device key EDeviceKey is generated, and the device engagement structure is transferred over NFC or as a QR-code. The device engagement structure contains parameters for the device retrieval transfer options TransferMethod and TransferOptions (see ISO/IEC 18013-5 [i.181], sections 6.3.2.3, 9.1.1, 8.2.1, 8.2.2 and 8.2.1.1 for more information).
• Data retrieval phase, whereby the EReaderKey, SKReader and SKDevice keys are generated to establish an encryption session. The ISO mDL reader then transmits the mDL Reader Request and the ISO mDL replies with the mDL Response (see ISO/IEC 18013-5 [i.181], sections 9.1, 9.1.1, 8.3.2.1.2 and 8.3.2.2.2 for more information).

As regards selective disclosure, the mDL Reader Request contains a list of the DataElements the mDL Reader requests from the mDL app. Upon the user's consent, the mDL app will reply with the mDL Response with the selected DataElements in the DeviceSignedItems. The DeviceSignedItems object is signed by the mDL Authentication Key, to which the user is authenticated with a PIN-code or biometrics (see ISO/IEC 18013-5 [i.181], sections 8.3.2.1.2 and 8.3.2.2.2 for more information).

The selected DataElements will be hashed at the mDL reader and compared with the corresponding hash values in the MSO. ISO/IEC 18013-5 [i.181], section 9.1.2.3 describes how the relying party validates the MSO signature and how to check that the hashed mDL mdoc elements match the hash values in the MSO. More specifically, ISO/IEC 18013-5 [i.181], section 9.1.2.3 specifies in detail how the mDL reader validates the certificate chain of the IACA trust anchor and the Issuing Authority's MSO signer certificate. ISO/IEC 18013-5 [i.181], Annex C describes the ISO mDL VICAL, which points to the IACA trust anchor and revocation information.
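The digest comparison in this validation step can be sketched as follows in Python. The byte strings below are hypothetical placeholders: in ISO/IEC 18013-5 [i.181] each disclosed element is a CBOR-encoded IssuerSignedItem containing a per-element random salt, and the MSO itself is protected by a COSE_Sign1 signature whose validation is not shown here.

from hashlib import sha256

# Hypothetical placeholders for CBOR-encoded IssuerSignedItem structures.
disclosed_items = {
    "family_name": b"<random salt 1>|family_name|Doe",
    "age_over_18": b"<random salt 2>|age_over_18|true",
}

# The MSO carries one digest per data element; here the digests are simulated
# from the same bytes for the purpose of the illustration.
mso_value_digests = {name: sha256(item).digest() for name, item in disclosed_items.items()}

def verify_disclosed_items(items, value_digests):
    """After the COSE signature over the MSO has been validated (not shown), hash each
    disclosed element and compare it with the corresponding digest in the MSO."""
    return all(sha256(item).digest() == value_digests.get(name)
               for name, item in items.items())

print(verify_disclosed_items(disclosed_items, mso_value_digests))  # True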
D.1.2 Analysis of the ISO mDL device retrieval flow for eIDAS2

An analysis of the ISO mDL device retrieval flow applied to an eIDAS2 context results in the following observations and recommendations:

• The ISO mDL app should be part of an EUDI Wallet.
• The ISO mDL Issuing Authority corresponds to a QTSP, PIDP and/or an EUDI Wallet provider.
• The mDL Reader corresponds to a device retrieval eIDAS2 relying party (that will validate the ISO mDL as a (Q)EAA/PID).
• The recommendations in clause 7.2.1 on how a QTSP/PIDP supervised under eIDAS2 can operate as an ISO mDL IACA should be observed.
• The recommendations in clause 7.2.1 on how an eIDAS2 EU TL should be formatted to be compatible as an ISO mDL VICAL, or vice versa, should be observed.
• The eIDAS2 relying party should use the eIDAS2 EU TL (which is equivalent to an ISO mDL VICAL) to retrieve the QTSP/PIDP trust anchor (which is equivalent to the IACA trust anchor).
• The eIDAS2 relying party should validate the MSO (submitted by the ISO mDL app in the mDL Response) according to the principles in ISO/IEC 18013-5 [i.181], section 9.1.2.3, by using the QTSP/PIDP trust anchor.
• The MSOs in the EUDI Wallet ISO mDL app should be unique as described in clause 7.2.1 to cater for verifier unlinkability when validated by the relying party.

NOTE 1: The ISO mDL MSO does not enable unlinkability; it only enables selective disclosure.

NOTE 2: While issuer unlinkability is impossible to achieve, verifier unlinkability can be achieved by having the QTSP/PIDP issue batches of MSOs, each with unique salts, signatures, and DeviceKey elements. This will require an operational procedure of issuing multiple MSOs to each device on a regular basis, which may result in an additional operational cost for the QTSP/PIDP. Operational costs may be lessened by relying on an HDK function as described in clause 4.3.4.2, whereby the issuer only needs to keep track of a single DeviceKey element and use it to derive unique per-MSO DeviceKey elements that the user can derive the corresponding private keys for (see the illustrative sketch at the end of the present clause).

• The MSO is signed by the QTSP/PIDP with a COSE formatted signature, which allows for SOG-IS approved cryptographic algorithms [i.237] and for QSC for future use [i.149].

These observations and recommendations should be considered with respect to selective disclosure for ETSI TS 119 462 [i.95], ETSI TS 119 471 [i.96] and ETSI TS 119 472-1 [i.97].
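The HDK idea referenced in NOTE 2 above can be sketched as follows. The sketch uses a toy multiplicative group instead of the elliptic curves an actual HDK scheme would use, and the seed and names are hypothetical; it only illustrates how one root key can yield a batch of unique, matching key pairs.

from hashlib import sha256
import secrets

# Toy group for illustration: g = 5 is a primitive root modulo the prime p = 23,
# so the group order is q = p - 1 (a real HDK scheme would use e.g. P-256).
p, g = 23, 5
q = p - 1

# Root device key pair: sk stays in the wallet, pk is registered with the issuer once.
sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def blinding_factor(seed, index):
    # Deterministic per-MSO factor that issuer and wallet can both compute.
    return int.from_bytes(sha256(seed + index.to_bytes(4, "big")).digest(), "big") % q

seed = b"shared-derivation-context"  # hypothetical shared derivation context

# The issuer derives a unique DeviceKey for each MSO in the batch from pk alone...
pk_batch = [(pk * pow(g, blinding_factor(seed, i), p)) % p for i in range(10)]

# ...and the wallet derives the matching private keys from sk alone.
sk_batch = [(sk + blinding_factor(seed, i)) % q for i in range(10)]

assert all(pow(g, s, p) == P for s, P in zip(sk_batch, pk_batch))
print("Each MSO in the batch carries a unique DeviceKey derived from one root key")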
D.2 EUDI Wallet used with ISO mDL server retrieval flow

D.2.1 Overview of the ISO mDL server retrieval flows

The scope of the present clause is to describe how the EUDI Wallet can present ISO mDL selectively disclosed elements over the ISO mDL server retrieval flow, and how eIDAS2 trust services can be used to support this process.

NOTE: The ISO mDL server retrieval flow is NOT mentioned by the ARF, but may need to be used by national or specific implementations that need to be interoperable with ISO mDL.

The ISO mDL server retrieval flow can be initialized as a hybrid device/server process (see clause D.2.2) or as a server process (see clause D.2.3). Once the ISO mDL server retrieval flow has been initialized, it continues with either the WebAPI flow (see clause D.2.4) or the OpenID Connect (OIDC) flow (see clause D.2.6). Clause D.2 will not repeat the entire ISO mDL server retrieval process, although a brief summary is provided below for readability, with references to ISO/IEC 18013-5 [i.181].

D.2.2 ISO mDL flow initialization

The initialization of the ISO mDL device and server retrieval flows is described in ISO/IEC 18013-5 [i.181], sections 6.3.2, 6.3.2.1 (as flow 2) and 6.3.2.4. The ISO mDL device/server data retrieval flow is illustrated in Figure D.2.

Figure D.2: ISO mDL flow initialization

On a high level, the ISO mDL device/server retrieval flow can be divided into the following phases (where the ISO mDL reader is equivalent to an eIDAS2 relying party):

• Initialization phase, whereby the ISO mDL app is activated either by the user or triggered by NFC contact with the ISO mDL reader (see ISO/IEC 18013-5 [i.181], section 6.3.2.2 for more information).
• Device engagement phase, whereby the ephemeral device key EDeviceKey is generated, and the device engagement structure is transferred over NFC or as a QR-code (see ISO/IEC 18013-5 [i.181], sections 6.3.2.3, 9.1.1, 8.2.1 and 8.2.2 for more information).
• Data retrieval phase, whereby the EReaderKey, SKReader and SKDevice keys are generated to establish an encryption session. The ISO mDL reader then transmits the mDL Reader Request including the server retrieval request, and the ISO mDL replies with the mDL Response including the server retrieval information (see ISO/IEC 18013-5 [i.181], sections 9.1, 9.1.1, 8.3.2.1.2.1 and 8.3.2.1.2.2 for more information).

The ISO mDL online data retrieval flow continues with either the WebAPI (see clause D.2.4) or OIDC (see clause D.2.6).

D.2.3 ISO mDL server retrieval flow initialization

The ISO mDL server retrieval flow initialization is described in ISO/IEC 18013-5 [i.181], sections 6.3.2, 6.3.2.1 (as flow 3) and 6.3.2.4. The ISO mDL server retrieval flow initialization is illustrated in Figure D.3.

Figure D.3: ISO mDL server retrieval flow initialization

On a high level, the ISO mDL server retrieval flow can be divided into the following phases (where the ISO mDL reader is equivalent to an eIDAS2 relying party):

• Initialization phase, whereby the ISO mDL app is activated either by the user or triggered by NFC contact with the ISO mDL reader (see ISO/IEC 18013-5 [i.181], section 6.3.2.2 for more information).
• Device engagement phase, whereby the ephemeral device key EDeviceKey is generated, and the device engagement structure is transferred over NFC or as a QR-code. The device engagement structure contains parameters for the online transfer options WebAPI or OIDC (see ISO/IEC 18013-5 [i.181], sections 6.3.2.3, 9.1.1, 8.2.1, 8.2.2 and 8.2.1.1 for more information).

The ISO mDL server retrieval flow continues with either the WebAPI (see clause D.2.4) or OIDC (see clause D.2.6).

D.2.4 ISO mDL server retrieval WebAPI flow

The ISO mDL server retrieval flow is described in ISO/IEC 18013-5 [i.181], section 8.3.2.2 and the WebAPI calls are specified in ISO/IEC 18013-5 [i.181], section 8.3.2.2.2. The ISO mDL WebAPI server retrieval flow is illustrated in Figure D.4.

Figure D.4: ISO mDL server retrieval WebAPI flow

As regards selective disclosure, the mDL Reader submits a server retrieval WebAPI Request with a list of requested DataElements to the Issuing Authority.
Upon the user's consent, the Issuing Authority will reply with the mDL Response with the selected and disclosed DataElements (see ISO/IEC 18013-5 [i.181], section 8.3.2.2.2 for more information).

D.2.5 Analysis of the ISO mDL server retrieval WebAPI flow for eIDAS2

An analysis of the ISO mDL WebAPI server retrieval flow applied to an eIDAS2 context results in the following observations and recommendations:

• The ISO mDL app should be part of an EUDI Wallet.
• The ISO mDL Issuing Authority corresponds to a QTSP, PIDP and/or an EUDI Wallet provider.
• The mDL Reader corresponds to an eIDAS2 relying party, which will connect to the ISO mDL Issuing Authority over the WebAPI to request information about the user.

NOTE 1: eIDAS2 [i.103] Article 5a.14 states: "The provider of the European Digital Identity Wallet shall neither collect information about the use of the European Digital Identity Wallet which is not necessary for the provision of European Digital Identity Wallet services, nor combine person identification data or any other personal data stored or relating to the use of the European Digital Identity Wallet with personal data from any other services offered by that provider or from third-party services which are not necessary for the provision of European Digital Identity Wallet services, unless the user has expressly requested otherwise." If the ISO mDL Issuing Authority also has the role of an eIDAS2 European Digital Identity Wallet provider, the statement in eIDAS2 Article 5a.14 may require additional privacy considerations when server retrieval is used.

NOTE 2: eIDAS2 [i.103] Article 5a.16 states: "The technical framework of the European Digital Identity Wallet shall: (a) not allow providers of electronic attestations of attributes or any other party, after the issuance of the attestation of attributes, to obtain data that allows transactions or user behaviour to be tracked, linked or correlated, or knowledge of transactions or user behaviour to be otherwise obtained, unless explicitly authorized by the user". If the ISO mDL Issuing Authority also has the role of an eIDAS2 QTSP/PIDP, the statement in eIDAS2 Article 5a.16(a) may imply that server retrieval is not possible unless explicitly approved by the user.

• The ISO mDL Issuing Authority may deploy QWACs in order to prove its authenticity over TLS to the connecting relying parties.
• The WebAPI token is a JWT that is signed by the ISO mDL Issuing Authority OIDC Authorization Server. The JWT signer certificate should be issued by an IACA, which in the eIDAS2 context is also a QTSP.
• The ISO mDL Reader, which is an eIDAS2 relying party, should use the ISO mDL VICAL (EU TL) to retrieve the IACA trust anchor (QTSP trust anchor).
• The WebAPI JWT is signed by the QTSP/PIDP with a JOSE formatted signature, which allows for SOG-IS approved cryptographic algorithms [i.237] and for QSC for future use [i.149].

These observations and recommendations should be considered with respect to selective disclosure for ETSI TS 119 462 [i.95], ETSI TS 119 471 [i.96] and ETSI TS 119 472-1 [i.97].
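As an illustration of the JWT-related bullets above, the following minimal sketch uses the PyJWT library to validate a WebAPI token, assuming the relying party has already resolved the Issuing Authority's JWT signer certificate to a public key via the IACA trust anchor on the trusted list. The function name, parameters and required claims are hypothetical choices, not a normative profile.

import jwt  # PyJWT

def verify_webapi_token(token, issuer_public_key):
    """Verify the JOSE signature and required claims of a WebAPI JWT; raises on failure."""
    return jwt.decode(
        token,
        key=issuer_public_key,
        algorithms=["ES256"],                # an example of a SOG-IS approved algorithm
        options={"require": ["iss", "iat"]}, # illustrative claim requirements
    )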
D.2.6 ISO mDL server retrieval OIDC flow

The ISO mDL server retrieval flow is described in ISO/IEC 18013-5 [i.181], section 8.3.2.2 and the OIDC calls are specified in ISO/IEC 18013-5 [i.181], section 8.3.3.2.2. The ISO mDL OIDC server retrieval flow is illustrated in Figure D.5.

Figure D.5: ISO mDL server retrieval OIDC flow

As regards selective disclosure, the mDL Reader (OIDC client) submits a server retrieval OIDC Request with the requested data elements (JWT claims) to the Issuing Authority, which operates an OIDC Authorization Server. This activates the OIDC authorization code flow [i.212]. Based on the user's consent, the Issuing Authority (OIDC Authorization Server) will reply to the mDL Reader (OIDC client) with the OIDC Token with the selected and disclosed JWT claims about the user (see ISO/IEC 18013-5 [i.181], section 8.3.3.2.2 and Annex D.4.2.2 for more information about the OIDC workflow).

D.2.7 Analysis of the ISO mDL OIDC server retrieval flow applied to eIDAS2

An analysis of the ISO mDL OIDC server retrieval flow applied to an eIDAS2 context results in the following observations and recommendations:

• The ISO mDL app should be part of an EUDI Wallet.
• The ISO mDL Issuing Authority corresponds to a QTSP, PIDP and/or an EUDI Wallet provider.
• The ISO mDL Issuing Authority operates an OIDC Authorization Server, which supports the OIDC authorization code flow.
• The mDL Reader corresponds to an eIDAS2 relying party, which is registered as an OIDC client with the ISO mDL Issuing Authority OIDC Authorization Server. The mDL Reader will connect to the ISO mDL Issuing Authority over OIDC to request information about the user.

NOTE 1: eIDAS2 [i.103] Article 5a.14 states: "The provider of the European Digital Identity Wallet shall neither collect information about the use of the European Digital Identity Wallet which is not necessary for the provision of European Digital Identity Wallet services, nor combine person identification data or any other personal data stored or relating to the use of the European Digital Identity Wallet with personal data from any other services offered by that provider or from third-party services which are not necessary for the provision of European Digital Identity Wallet services, unless the user has expressly requested otherwise." If the ISO mDL Issuing Authority also has the role of an eIDAS2 European Digital Identity Wallet provider, the statement in eIDAS2 Article 5a.14 may require additional privacy considerations when server retrieval is used.

NOTE 2: eIDAS2 [i.103] Article 5a.16 states: "The technical framework of the European Digital Identity Wallet shall: (a) not allow providers of electronic attestations of attributes or any other party, after the issuance of the attestation of attributes, to obtain data that allows transactions or user behaviour to be tracked, linked or correlated, or knowledge of transactions or user behaviour to be otherwise obtained, unless explicitly authorized by the user". If the ISO mDL Issuing Authority also has the role of an eIDAS2 QTSP/PIDP, the statement in eIDAS2 Article 5a.16(a) may imply that server retrieval is not possible unless explicitly approved by the user.

• The ISO mDL Issuing Authority may deploy QWACs in order to prove its authenticity over TLS to the connecting relying parties.
• The OIDC Token is a JWT that is signed by the ISO mDL Issuing Authority OIDC Authorization Server. The JWT signer certificate should be issued by an IACA, which in the eIDAS2 context is also a QTSP.
• The ISO mDL Reader, which is an eIDAS2 relying party, should use the ISO mDL VICAL (EU TL) to retrieve the IACA trust anchor (QTSP trust anchor).
• The OIDC token JWT is signed by the QTSP/PIDP with a JOSE formatted signature, which allows for SOG-IS approved cryptographic algorithms [i.237] and for QSC for future use [i.149].

These observations and recommendations should be considered with respect to selective disclosure for ETSI TS 119 462 [i.95], ETSI TS 119 471 [i.96] and ETSI TS 119 472-1 [i.97].

D.3 EUDI Wallets used with ISO/IEC 18013-7 for unattended flow

D.3.1 Overview of the ISO/IEC 18013-7 flows

The ISO/IEC CD 18013-7 [i.182] draft standard extends ISO/IEC 18013-5 [i.181] with the unattended flow, i.e. the server retrieval flow whereby an ISO mDL app connects directly to an mDL reader that is hosted as a web server application. ISO/IEC CD 18013-7 [i.182] is backward compatible with the protocols in ISO/IEC 18013-5 [i.181].

NOTE: Since the ISO mDL app connects directly to the web hosted mDL reader without involving any issuer, this flow preserves the user's privacy as required in eIDAS2 [i.103], Article 5a.16.

The ISO/IEC CD 18013-7 [i.182] unattended flow is designed based on the following protocols:

• Device Retrieval from an ISO mDL app to a web server application over HTTPS POST; this flow is described in clause D.3.2.
• OpenID for Verifiable Presentations (OID4VP) [i.214] in conjunction with Self-Issued OpenID Provider v2 (SIOP2) [i.216]; this flow is described in clause D.3.3.

D.3.2 ISO/IEC 18013-7 Device Retrieval flow

The general data retrieval architecture is described in ISO/IEC 18013-5 [i.181], section 6.3.2.4. The ISO/IEC CD 18013-7 [i.182] draft standard describes device retrieval of data for unattended (i.e. online web application) use cases. The ISO mDL app and the ISO mDL reader support device retrieval using the mDL request and response as specified in ISO/IEC 18013-5 [i.181], section 8.3.2.1.

ISO/IEC CD 18013-7 [i.182] adds Annex A, which specifies the Reader Engagement phase that takes place before the Device Engagement phase in ISO/IEC 18013-5 [i.181]. The Reader Engagement structure contains the parameter RetrievalOptions, which in turn includes the RestApiOptions that defines the URI and REST API parameters for the HTTPS connection to the web hosted mDL Reader. The ISO/IEC CD 18013-7 [i.182] unattended online retrieval flow is illustrated in Figure D.6.

Figure D.6: ISO mDL unattended Device Retrieval flow

When the mDL Response has been retrieved and parsed by the ISO mDL reader/verifier, the mDL selected attributes and MSO are verified according to the same process as for the ISO mDL device retrieval flow (clause 7.2.3). As regards selective disclosure for the ISO mDL unattended Device Retrieval flow, the same principles and recommendations apply as for the ISO mDL device retrieval flow (clause 7.2.3). However, the ISO/IEC CD 18013-7 [i.182] specification is not referred to by the ARF [i.71], although the associated specification ISO/IEC CD 23220-4 [i.187] is mentioned in the ARF.

D.3.3 ISO/IEC 18013-7 OID4VP/SIOP2 flow

As an alternative to the unattended Device Retrieval flow, ISO/IEC CD 18013-7 [i.182] specifies an unattended (online) flow based on OID4VP [i.214] with SIOP2 [i.216]. The OID4VP/SIOP2 flow is defined in Annex B of ISO/IEC CD 18013-7 [i.182]. Furthermore, the OID4VP/SIOP2 protocol is based on the ISO/IEC CD 23220-4 [i.187] profile for presentations of ISO mDL. Note that the present clause is about an mDL presentation with OID4VP; see clause 6.7.2 for a general description of OID4VP.
The ISO/IEC CD 18013-7 [i.182] unattended OID4VP/SIOP2 flow is illustrated in Figure D.7.

Figure D.7: ISO mDL unattended OID4VP/SIOP2 flow

When the OID4VP Response, which contains the mDL Response, has been retrieved and parsed by the ISO mDL reader/verifier, the mDL selected attributes and MSO are verified according to the same process as for the ISO mDL device retrieval flow (clause 7.2.3). As regards selective disclosure for the ISO mDL unattended OID4VP/SIOP2 flow, the same principles and recommendations apply as for the ISO mDL device retrieval flow (clause 7.2.3). However, the ISO/IEC CD 18013-7 [i.182] specification is not referred to by the ARF [i.71], although the associated specification ISO/IEC CD 23220-4 [i.187] is mentioned in the ARF.

NOTE: ISO/IEC CD 23220-4 [i.187] is mentioned as a target in the ARF [i.71], but is not mandatory since it is not yet published. If ISO/IEC CD 23220-4 [i.187] includes ISO/IEC 18013-5 [i.181] proximity as well as OID4VCI and OID4VP, then ISO/IEC 23220-4 is likely to be mandatory in a future version of the ARF.

Annex E: A primer on W3C VCDM & SD-JWT

E.1 Overview of W3C Verifiable Credential Data Model (VCDM)

E.1.1 W3C VC, JSON-LD, data integrity proofs, and linked data signatures

The W3C Verifiable Credential Data Model (VCDM) is a way to express verifiable electronic attestations of attributes on the Web. At its core, a W3C Verifiable Credential (VC) is a standardized digital format for presenting and exchanging verifiable claims (in essence, statements expressed using subject-property-value relationships) about individuals, organizations, or things. These claims can be expressed as attributes in an electronic attestation of attributes. Specifically designed for the Web, the W3C VCDM aims to enable users to present attribute assertions from potentially different issuers and about potentially different identity subjects. These assertions can be organized into information graphs expressing subject-property-value relationships (e.g. Credential-type-DrivingLicense). The W3C VCDM is an open standard and is designed to be interoperable across different systems and platforms and to support a wide range of applications.

The W3C VCDM v1.1 [i.264] describes an issuer-holder-verifier based model for digital "verifiable credentials" (defined as a "set of one or more claims made by an issuer" that are also "tamper-evident [with] authorship that can be cryptographically verified"). Specifically, the VCDM v1.1 aims to improve the ease of expressing digital credentials while also ensuring a high degree of privacy.

EXAMPLE: A trusted authority, such as a PID Provider, could construct a W3C VCDM compliant attestation containing the PID attributes and sign these with their private key. The user (assumed herein to be the identity subject of the VC) can then create a Verifiable Presentation (VP) using one or more VCs and present attributes to a verifier. The resulting W3C VC is verifiable by any verifier who has access to the required cryptographic keys. The proof mechanism could then support privacy features such as selective disclosure and/or unlinkable verifiable presentations.

The VCDM 1.1 text mandates that claims about a subject can be made tamper evident, that these claims are expressed in the form of subject-property-value relationships, and that it is possible to organize these claims into an information graph.
However, it is not required that the claims or the proof are expressed as a graph in the attestation. To date, the VCDM 1.1 text has principally focused on JSON-LD type attestations. W3C VCDM support for JSON only has been limited. The lack of JSON-only support is problematic since the ARF prohibits the use of linked data proofs for the PID and only optionally supports JSON-LD. The ARF text mandates that the PID is issued as a JWT and that it is secured using SD-JWT.

After the publication of VCDM v1.1, the W3C VC WG has been working on VCDM 2.0 to make the standard more flexible and able to support multiple formats and signature algorithms. Work was ongoing to support the representation of verifiable claims in multiple ways, including JSON, JSON-LD, or any other data representation syntax capable of expressing the data model, such as XML, YAML, or CBOR, as long as there is a mapping defined back to the base data model defined in the VCDM document (which relies on JSON-LD). This work was ongoing as several outstanding issues remained unsolved. However, recently the W3C VC WG has argued strongly in favour of removing the securing of JSON and non linked data formats from the specification (see W3C VC WG issue #88 [i.260]). This means that the W3C VCDM is likely to evolve in a direction that will not address the outstanding issues with the underspecified JSON sections, which include key details such as how to do the required transformations or mappings. By extension, it is likely also that the proposed W3C work on how to secure a (W3C) VC using JSON [i.169] will be postponed until further notice. It is worth noting that the W3C VC WG charter does not specify particular media types, but that there does not exist a consensus within the WG to pursue JSON.

Regardless of the debate outcome, each VC and VP includes fields for specifying the signature schemes used to sign the claim or the presentation of a claim, respectively (i.e. whether the verification of the proof is calculated against the data transmitted or against a transformation such as another data model or an information graph). Since the debate outcome is presently unknown, the text herein describes the solutions presently mentioned by VCDM v1.1, which are JSON Web Token and Data Integrity Proofs. Each will be described, with illustrations of possible solutions to still outstanding issues for the JWT based approach. The data integrity proofs will only be briefly explained, to help readers understand why some of the ideological differences may make it difficult to secure a W3C VC using SD-JWT without a proper specification on how to secure a W3C VC using JSON. Finally, the potential of relying only on SD-JWT VC for the attestation and a use case specific mapping to VCDM 1.1 will be discussed, as it represents the most suitable selective disclosure alternative considering the ongoing debates.

E.1.2 W3C VC, JSON-LD, data integrity proofs, and linked data signatures

There are many concepts surrounding the W3C VCDM v1.1, including JSON-LD, data integrity proofs, and linked data signatures. The first, JSON-LD, will be explained in detail below, but it is helpful to first explain how the other two relate to JSON-LD. Data integrity proofs are defined by the W3C as "a set of attributes that represent a digital proof and the parameters required to verify it". Put differently, a data integrity proof provides information about the proof mechanism, the parameters required to verify that proof, and the proof value itself.
This information is provided using Linked Data vocabularies in a JSON-LD formatted attestation. Linked data signatures are a proposed way to sign data expressed in linked data formats such as JSON-LD. Linked data signatures sign the underlying information graph as opposed to the payload itself. More specifically, the graph is normalized into a byte stream that is signed. The corresponding verification can be of the graph of information, and not necessarily the syntax specific content itself, meaning that the same digital signature would validate information expressed in multiple compatible syntaxes without necessitating syntax specific proofs (see W3C VC Data Integrity v1.0, where this idea is explored in detail).

To understand what a W3C VCDM v1.1 compliant attestation would look like, it is necessary to understand its core format, JSON-LD. While JSON-LD is similar to JSON, a key difference is that it uses a property called "@context" to link attributes to descriptions that provide semantic clarity on how to unambiguously interpret each attribute. Each attribute is expressed in the form of subject-predicate-object triples that essentially describe an information graph. Consider the following example of a JSON-LD document describing a person. The attributes name and jobTitle are mapped to concepts in the schema.org vocabulary as detailed in the "@context".

{
  "@context": "http://schema.org/",
  "@id": "https://me.example.com",
  "@type": "Person",
  "name": "John Doe",
  "jobTitle": "ETSI TR editor"
}

The @context allows the JSON-LD to be mapped to a Resource Description Framework (RDF) model and thus an information graph. The information graph for the above looks as follows:

Figure E.1: Example of W3C VCDM v1.1 graph

And the W3C VCDM v1.1 graph triples are as follows:

Table E.1: Example of W3C VCDM v1.1 graph triples

Subject                  Predicate                                          Object
https://me.example.com   http://www.w3.org/1999/02/22-rdf-syntax-ns#type    http://schema.org/Person
https://me.example.com   http://schema.org/jobTitle                         ETSI TR editor
https://me.example.com   http://schema.org/name                             John Doe

And the associated N-Quads (a syntax for RDF datasets) are:

1) <https://me.example.com> <http://schema.org/jobTitle> "ETSI TR editor" .
2) <https://me.example.com> <http://schema.org/name> "John Doe" .
3) <https://me.example.com> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .

The benefit of the above is that it does not matter what syntax is used to describe the underlying information graph, as all syntaxes would describe the same model and thus enable a mapping to the exact same N-Quads.

NOTE: Since data integrity proofs sign the N-Quads containing triples, as opposed to only the object, they do not fully support predicates that rely on algebraic manipulations of the object. For instance, while it is possible to check for message equality, it is not possible to check whether one value is larger than another. Consequently, the signature scheme used to sign the N-Quads may support more predicates than the N-Quads allow (e.g. a range proof may be supported by the signature scheme, but the N-Quad may limit the predicate to an equality test).

To enable selective disclosure of a W3C VCDM v1.1 attestation using data integrity proofs and linked data proofs, an issuer would need a proof mechanism that can logically order the N-Quads in such a way that the verifier knows that the presented attributes are properly paired. One way is to use the N-Quad message digests as leaf nodes of a Merkle tree and include the Merkle root in the attestation. Another, assuming that the issuer is comfortable with using JSON-LD and linked data proofs only, is to include N-Quad messages as selectively disclosable values in an SD-JWT "_sd" array (see clause 7.3.1.2 for a detailed description of how to generate a disclosure according to [i.155], IETF OAUTH: "Selective Disclosure for JWTs (SD-JWT)") and let the user present only the parts of the information graph that the verifier needs. To date, the most well developed solution relies on the bbs-2022 cryptosuite, which supports JSON-LD + data integrity proofs + linked data proofs. Including triples in SD-JWT is not entirely straightforward and would require additional specification.
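The Merkle tree option can be sketched as follows in Python, using the N-Quads from the example above as leaves. This is a simplified illustration: a complete scheme would additionally specify per-leaf salting, a canonical leaf ordering, and the inclusion proofs the user presents alongside the disclosed N-Quads.

from hashlib import sha256

# The N-Quads of the example graph serve as the (unsalted, for brevity) leaves.
nquads = [
    '<https://me.example.com> <http://schema.org/jobTitle> "ETSI TR editor" .',
    '<https://me.example.com> <http://schema.org/name> "John Doe" .',
    '<https://me.example.com> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .',
]

def merkle_root(leaves):
    """Pairwise-hash the leaf digests until a single root remains."""
    level = [sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[k] + level[k + 1]).digest()
                 for k in range(0, len(level), 2)]
    return level[0]

# The issuer would sign this root inside the attestation; the user later discloses
# selected N-Quads together with their Merkle inclusion paths.
print(merkle_root([q.encode() for q in nquads]).hex())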
To conclude, JSON-LD is a way to express linked data, and JSON-LD based attestations may include data integrity proofs that also rely on linked data for their verification. When also using linked data proofs, issuers can issue (Q)EAAs that are highly optimized for semantic interoperability. However, it is not entirely clear how selective disclosure and predicates would work in the context of PID/(Q)EAAs. Supporting cryptosuites like bbs-2022 are based on primitives that the public sector is unlikely to use, since they are not considered plausibly quantum safe. Solutions like SD-JWT can support linked data proofs, but it is not entirely clear how they could be combined with data integrity proofs (and what the benefits would be), as SD-JWT was designed with JWT based attestations in mind.

Having described how W3C VCDM v1.1 compliant attestations can be secured using SD-JWT also for JSON-LD and linked data signatures, attention now turns to JWT based W3C VCs and SD-JWT.

E.1.3 JWT based W3C VC

One popular proof format that is actively used in several implementations is the JSON Web Token (IETF RFC 7519 [i.165]). A JWT encodes claims as a JSON object contained in a JSON Web Signature (JWS) (IETF RFC 7515 [i.163]) or JWE (IETF RFC 7516 [i.164]). A user could present a VP with the VC claims using JWT as described in example 32 of the W3C VC Data Model [i.264]. The decoded JWT contains the presentation as exemplified next.

{
  ...,
  "verifiableCredential": [
    "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6ImRpZDpleGFtcGxlOmFiZmUxM2...QGbg"
  ]
}

The VC contained within (the base64url encoded string in the verifiableCredential array above) contains the following information about the identity subject.

{
  ...,
  "credentialSubject": {
    "degree": {
      "type": "BachelorDegree",
      "name": "<span lang='fr-CA'>Baccalauréat en musiques numériques</span>"
    }
  }
}

The VC contains the attribute in cleartext. Typically, a signed JWT containing identity data cannot support use cases where the JWT is issued once and then presented multiple times by a user who seeks to disclose only the attributes necessary for the service. In and of itself, the W3C VC standard only supports, but does not enforce, selective disclosure by design. The standard is flexible and supports multiple selective disclosure techniques. However, until recently these selective disclosure techniques have relied on multi-message signature schemes like the bbs-2022 suite.

NOTE: The text below assumes that there is a way to secure JSON for W3C VCDM v1.1 and ignores the ongoing debate on the topic within the W3C VC WG.
E.2 SD-JWT based attestations

E.2.1 General

To support selective disclosure in JWTs, Fett, Yasuda, and Campbell (2023) specify the Selective Disclosure JSON Web Token (SD-JWT) in the Internet Engineering Task Force (IETF) draft document [i.155] entitled "Selective Disclosure for JWTs (SD-JWT)". At its core, an SD-JWT is a digitally signed JSON document that can contain salted attribute hashes that the user can selectively disclose using disclosures that are kept outside the SD-JWT document. This allows the user to share only those PID attributes that are strictly necessary for a particular service.

NOTE 1: SD-JWT is generally applicable to selective disclosure of JWTs that are not bound to the W3C VCDM v1.1. The W3C VCDM v1.1 contains sections that describe how a VC can be JSON encoded in a JWT and then protected using JWS/JWE. Correspondingly, SD-JWT specifies how any JWT can support selective disclosure. But the joint utilization of the two is not straightforward.

NOTE 2: An SD-JWT supports selective disclosure solutions that require a clear logical ordering of data. It does not support algebraic manipulations of data.

Each SD-JWT contains a header, payload, and signature. The header contains metadata about the token, including the type and the signing algorithm used. The signature is generated using the PID Provider's private key. The payload includes the proof object that enables the selective disclosure of attributes. Each disclosure contains a salt, a cleartext claim name, and a cleartext claim value. The issuer then computes the hash digest of each disclosure and includes each digest in the attestation it signs and issues.

Using the proof object and the user shared disclosures, the verifier can verify that the disclosed claims were part of the original attestation. To do so, the verifier first verifies the issuer's signature over the entire SD-JWT. The verifier then calculates the digest over each shared disclosure and checks that the digest is included in the signed SD-JWT. Since the SD-JWT includes only digests of disclosable attributes, the verifier can only learn about claim names and claim values that are disclosed by the user or that are included as cleartext claims. The verifier cannot learn about any other claim names or values, as these are included in the SD-JWT as salted attribute digests.

The IETF SD-JWT draft specification 07 [i.155] of 2023-12-11 details the exact process of creating a disclosure in section 5.2. In essence, for each disclosable claim, the issuer generates and associates a random salt with each key value pair, and encodes the byte representation of these as base64url. An example of a disclosure is shown in Figure E.2.

Figure E.2: Example of SD-JWT disclosure

Figure E.2 illustrates how the byte representation of the JSON-encoded array containing the salt, key, and value is base64url-encoded into the disclosure.

NOTE: A linked data signature could be included in the _sd array, but it is not entirely clear how to handle triples in the disclosure. One option could be to set the subject to the sub property in the attestation and to only include predicates in the disclosures as: [<salt>, <predicate>, <object>].

To embed a disclosure in the SD-JWT, the issuer hashes each disclosure using a specified hash algorithm. The base64url encoded bytes of the digest, and not the disclosure itself, are then included in the SD-JWT in the claim _sd, which contains only an array of strings, each being the digest of a disclosure or of a random number (used to hide the original number of disclosures). This array is randomized so that the order of attribute disclosures is not always the same.
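The disclosure and digest construction described above can be sketched in a few lines of Python. The claim name and value are hypothetical examples; the draft specification additionally defines array-element disclosures, decoy digests and exact serialization rules that are omitted here for brevity.

import json
import secrets
from base64 import urlsafe_b64encode
from hashlib import sha256

def b64url(data):
    return urlsafe_b64encode(data).rstrip(b"=").decode()

# A disclosure is the base64url encoding of the JSON array [salt, claim name, claim value].
salt = b64url(secrets.token_bytes(16))
disclosure = b64url(json.dumps([salt, "family_name", "Doe"]).encode())

# Only the digest of the disclosure is embedded in the signed payload's "_sd" array.
digest = b64url(sha256(disclosure.encode()).digest())
print({"_sd": [digest], "_sd_alg": "sha-256"})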
The SD-JWT specification supports selectively disclosable claims in both flat and more complex nested data structures. The issuer can therefore decide for each key individually, on each level of the JSON, whether or not the key should be selectively disclosable. The _sd claim is included in the SD-JWT at the same level as the original claim. Selectively disclosable claims can in turn include other objects with selectively disclosable claims. Below, only the flat and the nested data structures are exemplified, but others are possible too.

Table E.2: Example of SD-JWT using a flat data structure

Contents:
["imQfGj1_M0El76kdvf7Daw", "address", {"street_address": "Schulstr. 12", "locality": "Schulpforta", "region": "Sachsen-Anhalt", "country": "DE"}]

Disclosure:
WyJpbVFmR2oxX00wRWw3NmtkdmY3RGF3IiwgImFkZHJlc3MiLCB7InN0cmVldF9hZGRyZXNzIjogIlNjaHVsc3RyLiAxMiIsICJsb2NhbGl0eSI6ICJTY2h1bHBmb3J0YSIsICJyZWdpb24iOiAiU2FjaHNlbi1BbmhhbHQiLCAiY291bnRyeSI6ICJERSJ9XQ

Digest:
FphFFpj1vtr0rpYK-14fickGKMg3zf1fIpJXxTK8PAE

_sd value:
{
  "_sd": [
    "FphFFpj1vtr0rpYK-14fickGKMg3zf1fIpJXxTK8PAE"
  ],
  ...,
  "_sd_alg": "sha-256"
}

Table E.3: Example of nested SD-JWT with the sub-claim country in cleartext

Contents:
["QSNIhu_n6a1rI8_2eNARCQ", "street_address", "Schulstr. 12"],
["QPkblxTnbSLL94I2fZIbHA", "locality", "Schulpforta"],
["jR-Yed08AEo4gcogpT5_UA", "region", "Sachsen-Anhalt"]

Disclosures:
WyJRU05JaHVfbjZhMXJJOF8yZU5BUkNRIiwgInN0cmVldF9hZGRyZXNzIiwgIlNjaHVsc3RyLiAxMiJd,
WyJRUGtibHhUbmJTTEw5NEkyZlpJYkhBIiwgImxvY2FsaXR5IiwgIlNjaHVscGZvcnRhIl0,
WyJqUi1ZZWQwOEFFbzRnY29ncFQ1X1VBIiwgInJlZ2lvbiIsICJTYWNoc2VuLUFuaGFsdCJd

Digests:
"G_FeM1D-U3tDJcHB7pwTNEElLal9FE9PUs0klHgeM1c",
"KlG6HEM6XWbymEJDfyDY4klJkQQ9iTuNG0LQXnE9mQ0",
"ffPGyxFBnNA1r60g2f796Hqq3dBGtaOogpnIBgRGdyY"

_sd value:
{
  "address": {
    "_sd": [
      "G_FeM1D-U3tDJcHB7pwTNEElLal9FE9PUs0klHgeM1c",
      "KlG6HEM6XWbymEJDfyDY4klJkQQ9iTuNG0LQXnE9mQ0",
      "ffPGyxFBnNA1r60g2f796Hqq3dBGtaOogpnIBgRGdyY"
    ],
    "country": "DE"
  },
  ...,
  "_sd_alg": "sha-256"
}

The QTSP/PIDP will have to send the raw claim values contained in the SD-JWT, together with the salts, to the EUDI Wallet user. The SD-JWT standard requires that the data format for sending the SD-JWT and the disclosures to the EUDI Wallet user is a series of base64url-encoded values in what is called the Combined Format for Issuance, which looks as follows:

<JWT>~<Disclosure 1>~<Disclosure 2>~...~<Disclosure n>~<optional Holder Binding JWT>

Note the separation between the values using ~. The specific ways the ~ character should be used are defined in section 5 of the SD-JWT 07 specification. When the EUDI Wallet user receives the attestation from the QTSP/PIDP, the SD-JWT standard requires that the user verifies the disclosures. The user does so by extracting the disclosures and the SD-JWT from the Combined Format for Issuance, hashing each disclosure, and accepting the SD-JWT only if each resulting digest exists in the _sd array.
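This wallet-side check can be sketched as follows, with hypothetical helper names. The sketch splits a Combined Format for Issuance string on the ~ separator and accepts the disclosures only if the digest of every disclosure appears in the _sd array of the (already signature-verified) SD-JWT payload.

from base64 import urlsafe_b64encode
from hashlib import sha256

def b64url(data):
    return urlsafe_b64encode(data).rstrip(b"=").decode()

def check_disclosures(combined_issuance, sd_array):
    """Split <JWT>~<Disclosure 1>~...~<Disclosure n> and verify each disclosure digest.
    Any trailing holder binding JWT would need to be separated first; omitted here."""
    parts = [p for p in combined_issuance.split("~") if p]
    _sd_jwt, disclosures = parts[0], parts[1:]
    return all(b64url(sha256(d.encode()).digest()) in sd_array for d in disclosures)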
Relatedly, during presentation, the user sends the SD-JWT and the n disclosures to the verifier as a series of base64url encoded values in what is called the Combined Format for Presentation (also called SD-JWT+KB), which looks as follows:

<JWT>~<Disclosure 1>~<Disclosure 2>~...~<Disclosure n>~<optional Holder Binding JWT>

The verifier checks that the issuer's signature is valid over the SD-JWT, that the disclosure digests are part of the SD-JWT, and, if applicable, that the Holder binding is valid (for the specific steps see section 8 of the SD-JWT 07 specification). Having described JSON secured W3C VCs and how SD-JWT can ensure selective disclosure of JWT based attestations, the text next discusses the potential joint utilization of both W3C VCs and SD-JWT, and why it is not as straightforward as it may appear.

E.2.2 SD-JWT VC

The IETF SD-JWT VC draft specification [i.143] provides a format that is optimized for the transport of the credential, including the disclosures, without further encoding. It is not designed to be embedded into any envelopes. It is arguably better to simply rely on JSON only claims for SD-JWT VC and recreate the W3C VCDM using a mapping algorithm. This option does not require the issuer to use linked data proofs (the ARF text does not allow the use of linked data proofs for the PID attestation), includes identity subject claims in an SD-JWT VC, and uses a transformation to map the SD-JWT VC claims to a W3C VCDM 1.1 compliant information graph. Relying on SD-JWT VC and mapping would circumvent the aforementioned four difficulties and also adhere strictly to the design logic of a particular solution approach. An example is provided next.

{
  "alg": "ES256",
  "typ": "dc+sd-jwt",
  <other header info>
}
.
NOTE 2: It should also be observed that SD-JWT VC is referenced by the OpenID4VC High Assurance Interoperability Profile (HAIP) [i.215], which is a profile of OpenID for Verifiable Credentials. E.2.3 SD-JWT and multi-show unlinkable disclosures Because every SD-JWT disclosure contains a unique salt, this unique salt acts as an identifier for the entire SD-JWT. Put differently, it is enough for a malicious issuer to receive a single disclosure from a colluding verifier for the issuer to uniquely identify the identity subject. Similarly, colluding verifiers could compare salt values to link together presentations from the same user (see clause 9.4 in the SD-JWT [i.155] specification for additional details). While it is impossible to prevent issuers from identifying the user based on the unique salt in the salted attribute hashes approach, it is possible to enable multi-show verifier unlinkable disclosures even if verifiers collude or if a single curious verifier attempts to learn more about the user than what is disclosed in each presentation. To achieve complete multi-show unlinkability it is required that: 1) each SD-JWT VC contains only unique salts (even for the same claim); and 2) each SD-JWT VC is associated with a unique cryptographic key material used for device binding and/or holder binding (denoted as "holder binding key" in the context of SD-JWT). Consequently, issuers are required to rely on batch issuance of SD-JWT to the EUDI Wallet if device retrieval functionality is desired (in an online scenario, the user can request a new SD-JWT on demand). NOTE: To reduce the burden on issuers, it is possible to introduce a limit on the number of uses of each SD-JWT. The user's SD-JWTs would then be linkable in a portion of their presentations. ETSI ETSI TR 119 476-1 V1.3.1 (2025-08) 155 EXAMPLE: A user is given 10 PID attestations as SD-JWT VCs. The user presents the first 9 SD-JWT VCs once and the 10th twice. Out of the 11 presentations, two are linkable. E.2.4 Predicates in SD-JWT Similar to MSO, an SD-JWT was not designed to support predicates that can be dynamically computed (e.g. to compute an age over proof from the birth date). Here too, the recommendation is to use static claims with Boolean values such as "age_over_NN": "True". However, as presented above in clause 4.3.6, it is possible to rely on issuer signed computational inputs and parameters to enable dynamic predicate support in SD-JWT. E.3 W3C VCDM 2.0 with SD-JWT There are currently two constructions that combine W3C VCDM or certain aspects of it and SD-JWT: • W3C: Securing Verifiable Credentials using JOSE and COSE • OpenID4VP: SD-JWT VCLD profile The W3C specification "Securing Verifiable Credentials using JOSE and COSE" [i.261] defines how to secure credentials and presentations conforming to W3C VCDM 2.0 with JSON Object Signing and Encryption (JOSE), CBOR Object Signing and Encryption (COSE) and SD-JWT. This specification provides a proof mechanism that describes how to use SD-JWT to secure documents conforming to W3C CDM 2.0 using SD-JWT. The payload can be used as is without any kind of transformation algorithms. At the time of writing the present document (in August 2025), the SD-JWT section is marked as "Features related to [SD-JWT] are at risk and will be removed from the specification if the IETF standardization process occurs after this specification's timeline for reaching a Proposed Recommendation…", it is unclear if the SD-JWT section will remain in the final version of the specification. 
SD-JWT VCLD is a profile defined in Appendix B.3.7 of OID4VP [i.214] that extends the SD-JWT VC credential format and allows for the incorporation of JSON-LD based payloads (such as W3C VCDM), but keeps some of the core mechanisms of SD-JWT VC. The core idea of this profile is to have sequential processing rules, by first applying the SD-JWT VC processing rules and then the JSON-LD processing rules on the output of the SD-JWT VC processing. This construction aims to introduce a clear separation between the security relevant (SD-JWT VC) and the business-logic relevant (JSON-LD) parts by introducing a new claim "ld" that contains the entire JSON-LD payload. This allows existing SD-JWT VC implementations to be extended with JSON-LD payloads in a clearly defined manner.

Annex F: Business models and unlinkability

F.1 General

In a digital identity ecosystem it is often the case that the QTSP needs to invoice the relying party for the digital transactions it consumes.

EXAMPLE: A QTSP issues a Qualified Certificate to a user. The relying party is a bank with whom the user wants to sign a digital agreement. Hence, the user signs the digital agreement with a Qualified Electronic Signature by using its Qualified Certificate. Next, the relying party verifies the Qualified Electronic Signature and the corresponding Qualified Certificate. In order to check the status of the Qualified Certificate, the relying party sends an OCSP request to the QTSP. The QTSP counts the OCSP transactions from the relying party and can invoice the relying party accordingly.

The example above illustrates how QTSPs under eIDAS1 have been able to keep track of the usage of their issued Qualified Certificates and have been able to invoice the relying parties accordingly. The legal conditions have however changed under eIDAS2, as Article 5a.16 states:

"The technical framework of the European Digital Identity Wallet shall: (a) not allow providers of electronic attestations of attributes or any other party, after the issuance of the attestation of attributes, to obtain data that allows transactions or user behaviour to be tracked, linked or correlated, or knowledge of transactions or user behaviour to be otherwise obtained, unless explicitly authorised by the user;"

More specifically, with full unlinkability it is not possible for the QTSPs to, on their own, keep track of how the (Q)EAAs are shared and with which relying parties. This boils down to one question: How can the QTSPs invoice the relying parties without knowing how attestations are used and when? The clauses below present various options for how to design a business model for QTSPs that operate under eIDAS2 with (Q)EAAs being shared with full unlinkability.

F.2 ETSI TR 119 479-2

ETSI ESI has released an early draft of ETSI TR 119 479-2 "EAA Extended Validation Services Framework and Application" [i.92]. The draft proposes a technical solution intended to enable QTSPs to invoice relying parties while claiming to preserve full unlinkability of (Q)EAAs/PIDs. This solution, termed "Cyphered VC Presentation", is further described in [i.92].

F.3 Anonymous usage data aggregation

F.3.1 General

To enable accurate billing and fair compensation for issuers in the EUDIW ecosystem, it is essential to collect data on attestation usage. At the same time, this collection has to uphold strict user privacy guarantees.
While various anonymous data aggregation techniques exist, many rely on security assumptions or adversarial models that do not align with the EUDIW context, or they fail to scale efficiently and introduce complex flows that require protocol changes. The method outlined here is purpose-built to balance privacy, integrity, and performance, making it suitable for practical deployment at the required scale.

Accurate aggregate usage counts are required to support billing models, which may involve differentiated pricing based on attestation type, verifier, or a combination of both. Ideally, aggregation should rely on parties without incentives to misreport, and where auditability or certification can replace complex cryptographic guarantees. Both issuers and verifiers have financial motives to distort data: issuers to inflate usage, and verifiers to downplay it. In contrast, users operate certified wallet devices that provide a trusted execution environment for computing usage data, which they have no incentive to falsify.

Users, however, require strong privacy guarantees. Usage data has to be collected in a way that preserves anonymity and prevents linkability. Output privacy is not essential, as billing data is shared only with the relevant parties, and there is no requirement to publish aggregate statistics.

One viable approach that achieves both scalability and privacy-preserving aggregation is a multiparty private sum protocol. This method provides strong privacy under minimal trust assumptions (it requires only a single honest server) and supports high performance. However, it only enables efficient and accurate aggregation when all participants behave honestly. Importantly, an accurate aggregation result does not guarantee that the underlying data is truthful; for instance, an issuer may manipulate EUDIW interactions to artificially inflate usage counts. Consequently, usage data aggregation, regardless of approach, is most suitable for deployments with certified wallet software, audited issuers, and regulated aggregators. While it is technically feasible to limit the impact of malicious users (including issuers pretending to be users) and/or servers, the associated performance overhead, particularly in billing models with price differentiation by attestation type or service pair, can make the approach impractical at scale.

F.3.2 The billing model and private sum process

It is beneficial to list three different pricing structures, as these impact performance:

• Flat-rate models require only aggregate usage counts per issuer and verifier. This significantly simplifies private sum computations, as there is no longer a need to keep track of tuples.
• Type-based pricing necessitates tracking usage per issuer and possibly per attestation type (e.g. driver's licence, PID etc.). This makes the private sum computation slightly more cumbersome, but it is entirely manageable.
• Pair-specific pricing sets different prices per (issuer, verifier) pair. This should be minimized to the extent possible to avoid complexity and overhead.

Assuming a manageable amount of pair-specific pricing, the core mechanism of the private sum is additive secret sharing. Specifically:

1) The wallet logs a usage event as an (issuance_info, service_id) tuple and increments a local counter for that tuple.
2) When required, the wallet splits the counter into N random shares and sends each share to a distinct aggregation server, tagged with the corresponding tuple.
3) Aggregation servers independently collect shares across users, optionally applying tuple-specific pricing policies.
4) Aggregation servers then reveal the sums of the shares for each tuple and combine their results to compute the total usage counts.
5) Each aggregation server can then prepare billing information and, if required, add further privacy enhancing measures to protect user privacy.
6) Each aggregation server can also compute aggregate statistics such as total count, most used service, average users per service, etc.

User privacy is preserved as long as at least one server remains honest and each tuple is used by at least 100 users. If rare use of tuples presents a privacy concern, it is possible to add carefully selected decoy tuples with count 0 to enable deniability. Accurate totals require both correct user-submitted shares and honest behaviour by all aggregation servers (e.g. a regulated clearing house, the wallet provider, or a Member State appointed actor). Ensuring correct user reports can be achieved with relatively low overhead, but defending against malicious servers entails a significant performance cost.

To illustrate the core of the private sum protocol, consider three users who wish to compute the sum of their local counts using three trusted aggregation servers, under a prime modulus p. Each user performs the following steps:

1) Let the user's private input be x ∈ [0, p−1].
2) Generate two random values (s1, s2) ∈ [0, p−1].
3) Compute the third share as: s3 = (x − s1 − s2) mod p.
4) Send each of the three shares (s1, s2, s3) to a distinct aggregation server.

The remaining users repeat the same process with their respective inputs. Each server receives one share from each user and sums them locally. Once all servers broadcast their local sums to each other, the final result is computed by summing the three totals modulo p. This final sum is correct because the random values cancel out, leaving only the true sum of the original user inputs. At the same time, privacy is preserved: each server sees only one randomized share per user and cannot infer individual input values from the aggregated data.
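The example above can be turned into a short sketch. The following is a minimal illustration of the share-splitting and recombination steps only, assuming three honest servers and a prime modulus p agreed on by all parties; it is not a hardened implementation of the full protocol.

import secrets

p = 2**61 - 1  # illustrative prime modulus agreed on by all parties

def share(value: int, n_servers: int = 3) -> list[int]:
    """Split a private count into n_servers additive shares modulo p."""
    shares = [secrets.randbelow(p) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % p)  # shares sum to value mod p
    return shares

# Each of the three users splits its private count; share i goes to server i.
user_counts = [4, 7, 2]
per_server = list(zip(*(share(c) for c in user_counts)))

# Each server sums the one share it received from every user; the servers
# then combine their totals, and the randomness cancels out.
server_totals = [sum(col) % p for col in per_server]
total = sum(server_totals) % p
assert total == sum(user_counts) % p  # here: 13

No single server can reconstruct an individual count, since each server only ever sees one uniformly random share per user.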
F.3.3 Alternative approach optimized for compatibility

The private sum method requires coordination among multiple parties, which can be challenging to reach an agreement on, and does not fully leverage the trust assumptions already present in the EUDIW ecosystem. Specifically, if any single party - such as the EUDIW Provider - can be trusted, then a simpler and more autonomous approach becomes feasible. Importantly, this alternative works with existing protocols and can be implemented independently by each Member State.

Assuming the EUDIW Provider can act as a trusted aggregation server, the following approach supports scalable usage reporting with reasonable privacy:

1) The wallet logs a usage event as an (issuance_info, service_id) tuple and increments a local counter for that tuple.
2) It then obtains a PID from the PID Provider, including a single-use Proof-of-Possession (PoP) key with no attribute disclosures.
3) The wallet authenticates to the EUDIW Provider via a dedicated endpoint used solely for usage reporting.
4) The EUDIW Provider verifies the PID and requests the usage data recorded in step 1.

The above approach enables aggregation using a trusted party, where the submitting user's privacy is reasonably protected (the EUDIW Provider only uses the PID as proof that the user is valid and sees only a single-use key). Aggregation is now possible with the user-submitted values. Additionally, it is now easier to detect possibly fraudulent usage reports, since the EUDIW Provider sees the usage numbers in the clear (and the EUDIW Provider can submit PID keys to the PID Issuer for identification).

Annex G: BBS# applied to ISO mDL

G.1 General

BBS# can be made compatible with ISO mDL or IETF SD-JWT. However, this requires slight modifications to the issuance and selective disclosure protocols described in clause 4.4.3. The present clause describes the modifications necessary for applying BBS# to achieve selective disclosure for the ISO mDL device retrieval flow.

NOTE: The same principles can also be used for applying BBS# to the holder binding key used for signing IETF SD-JWT.

G.2 Setup

Let G denote a cyclic group of prime order p, let g̃, g, h, h1, h2, ⋯, hL be L+3 random generators of G, let x be the issuer's private key and PK_I = g̃^x the corresponding public key. Furthermore, it is assumed that the issuer has published L public values, randomly chosen from [1,…,p] and denoted {Ki} for i = 1,…,L, as well as another integer (also public), randomly selected from [1,…,p] and denoted Ud (for "undisclosed") in the following.

NOTE: These public values (which can be the empty value) will be used for all VC issuances carried out by this issuer.

Let sk denote the user's hardware-protected device key, pk = h^sk the corresponding public key and (a1, a2, a3, …, aL) their attributes (known to the issuer). This pair of keys (sk, pk) corresponds to the mDL authentication key in the ISO mDL terminology.

G.3 Issuance

The issuer first computes L digests (cryptographic hashes), one for each attribute. Each of these L digests will be labelled with a unique digest identifier denoted HIDi. The digest, denoted Hi, is computed for each attribute using its digest identifier (HIDi), the attribute identifier (denoted AIDi), the value of the attribute (ai) and the public value (Ki) generated and associated with this attribute during the set-up of this credential schema:

Hi = Hash(HIDi || AIDi || ai || Ki)

where Hash denotes a cryptographic hash function producing digests in [1,…,p] (such as SHA-256, for example).

NOTE: The ISO/IEC 18013-5 [i.181] standard requires this representation of attributes. It is understood that BBS# would also work with any other representation of attributes.

The issuer creates a MACBBS authentication tag σ on the user's mDL authentication key pk (of a signature scheme supporting key blinding or randomization) and on the L digests {Hi}. The tag σ = (A, e), where e is a value randomly chosen by the issuer from [1,…,p], represents the user's credential and authenticates both the user's attributes and their mDL authentication key, where:

A = (g · pk · h1^H1 ⋯ hL^HL)^(1/(x+e)) = (g · h^sk · h1^H1 ⋯ hL^HL)^(1/(x+e))

The issuer then transmits σ to the user. The user's Verifiable Credential (VC) (or Mobile Security Object - MSO in the terminology of ISO/IEC 18013-5 [i.181]) consists of their public key pk, the digests {Hi} and the tag on these data: VC = (pk, {Hi}, σ). The secret data associated with this VC is sk.
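The per-attribute digest computation above can be illustrated as follows. This is a minimal sketch assuming SHA-256 as the hash function, a simple byte-level concatenation of the four fields, and illustrative identifier values; the exact field encoding is not fixed by the present clause.

import hashlib

# Illustrative prime order p of the group G (here the NIST P-256 group order).
p = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def attribute_digest(hid: str, aid: str, value: str, k_pub: int) -> int:
    """Compute Hi = Hash(HIDi || AIDi || ai || Ki) as an integer modulo p."""
    data = f"{hid}||{aid}||{value}||{k_pub}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

# Hypothetical mDL attribute; Ki is the public value published at setup time.
h1 = attribute_digest("digest-01", "family_name", "Mustermann", k_pub=123456789)

In a deployment, the encoding of the concatenated fields would have to be fixed precisely, since any ambiguity would make the digest comparison in clause G.5 fail.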
G.4 Selective disclosure

In the following, D will denote the list of indices of the attributes requested by the relying party, which is also the verifier. For example, D = {1,5,7} means that the relying party wants the user to reveal the attributes a1, a5 and a7. For each attribute not belonging to D, the user will send the value Ud to the relying party.

During a Verifiable Presentation (VP) of the user's attributes (or a subset of them) to the relying party, the EUDI Wallet will first randomize their mDL authentication key pk (either additively, if ECSDSA is used on the user's secure cryptographic device, or multiplicatively in the case of ECDSA) as well as their tag (i.e. their verifiable credential) σ. These randomized versions are denoted pkBlind and σBlind respectively. To guarantee the freshness of the VP, the user will then create a signature μ, using the private key associated with pkBlind, on the set of data referred to as "DeviceAuthenticationBytes" in the ISO/IEC 18013-5 [i.181] standard and denoted mDAB in the following. Furthermore, two ZKPs, denoted π and π', are calculated, proving knowledge of (a) two random factors (r, r'), (b) a credential σ and (c) a public key pk such that: (1) σBlind is a randomized version of σ under the random factor r; (2) pkBlind is a randomized version of pk under the random factor r'; and (3) σ is a valid MACBBS authentication tag on the disclosed attributes requested by the verifier.

NOTE: The DeviceAuthenticationBytes includes the nonce (or any other equivalent element specific to the current session with the relying party, which helps prevent replay attacks of VPs), possibly the set of data disclosed to the relying party, and other contextual data. However, the ISO/IEC 18013-5 [i.181] standard is not very explicit about the exact content of the "DeviceAuthenticationBytes".

The signature μ is a proof that the VP originates from the user holding the underlying credential σ on the attributes disclosed to the verifier. The VP consists of the following elements:

VP = ({Vi}_{i∈D}, {Ud}_{i∉D}, pkBlind, Θ, mDAB, μ)

where Vi = HIDi || AIDi || ai || Ki for i ∈ D represents the set of disclosed attributes with their associated verification values, and Θ = (σBlind, π, π') groups the randomized tag with the two ZKPs.

G.5 Verification

Upon receipt of VP = ({Vi}_{i∈D}, {Ud}_{i∉D}, pkBlind, Θ, mDAB, μ), the relying party first computes H'i = Hash(HIDi || AIDi || ai || Ki) for each i ∈ D and verifies that H'i = Hi for each i ∈ D; this digest verification proceeds exactly as in the ISO/IEC 18013-5 [i.181] standard. The relying party then checks that the signature μ is valid on mDAB, using pkBlind, and then verifies the validity of σBlind on the {Hi}_{i∈D} and the Ud values, using PK_I. This last verification consists in verifying that the ZKPs π and π' are both valid, using the corresponding verification algorithms of these two ZKPs. If all these checks are successful, this proves that the attributes {ai}_{i∈D} have been certified by the issuer and that the VP indeed originates from the user whose attributes are the {ai}_{i∈D}.
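The first of these checks, recomputing the digests over the disclosed values, can be sketched as follows, reusing the hypothetical attribute_digest function from the sketch after clause G.3. The remaining checks (the signature μ over mDAB and the ZKPs π and π') require the underlying group and proof implementations and are not sketched here.

def verify_disclosed_digests(disclosed, authenticated_digests) -> bool:
    """Recompute H'i = Hash(HIDi || AIDi || ai || Ki) for each i in D and
    compare it with the digest Hi authenticated by the issuer's tag."""
    for i, hid, aid, value, k_pub in disclosed:
        if attribute_digest(hid, aid, value, k_pub) != authenticated_digests[i]:
            return False  # a disclosed attribute does not match its digest
    return True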
Annex I: Change history

Date | Version | Information about changes
August 2023 | V1.1.1 | Publication
January 2024 | V1.1.2 | Stable draft with updates made according to ESI(23)000072 "Comments on ETSI TR 119 476 V1.1.1 for the revision to ETSI TR 119 476 v1.2.1".
February 2024 | V1.1.3 | Stable draft with updates made according to ESI(24)082054 "Resolved collated comments on ETSI TR 119 476 v1.1.2".
March 2024 | V1.1.4 | Editorial edits based on feedback from ETSI's directorate.
April 2024 | V1.1.5 | Final draft with updates made according to ESI(24)82b004 "Resolved collated comments on ETSI TR 119 476 v1.1.4".
April 2025 | V1.2.2 | Early draft with updates made according to ESI(25)000077 "Collated resolved comments on ETSI TR 119 476 v1.2.1".
June 2025 | V1.2.3 | Stable draft with updates made according to ESI(25)000376 "Resolved comments on ETSI TR 119 476 v1.2.2".
June 2025 | V1.2.4 | Final draft with editorial edits based on feedback from ETSI's directorate.
June 2025 | V1.2.5 | Final draft with editorial edits based on additional feedback from ETSI's directorate.

History

Document history
V1.1.1 | August 2023 | Publication as ETSI TR 119 476
V1.2.1 | July 2024 | Publication as ETSI TR 119 476
V1.3.1 | August 2025 | Publication