IMPERIALVIOLET
But, on a 4GHz Skylake, proving takes 18 seconds and verification takes 13 seconds. That's not really practical, but there is a lot of room for optimisation and for multiple cores to be used concurrently. The proof is 70,450 bytes, dominated by the 2154 secret-input commitments.
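As a back-of-the-envelope check on the "dominated by" claim, the quoted sizes line up if one assumes (hypothetically; the excerpt doesn't say) that each secret-input commitment is a 32-byte group element:

```python
# Hypothetical sanity check on the quoted proof size. The 32-byte
# commitment size is an assumption, not stated in the text.
NUM_COMMITMENTS = 2154
COMMITMENT_BYTES = 32  # assumption: commitments are 32-byte curve points

commitment_total = NUM_COMMITMENTS * COMMITMENT_BYTES
remainder = 70_450 - commitment_total
print(commitment_total)  # 68928 bytes of commitments
print(remainder)         # 1522 bytes for everything else
```

Under that assumption the commitments account for roughly 98% of the proof, which matches the text's characterisation.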
Based on that name, all the assembler directives begin with .cfi_. Next we need to define the Canonical Frame Address (CFA). This is the value of the stack pointer just before the CALL instruction in the parent function. In the diagram above, it's the value indicated by “RSP value before CALL”.
Revocation checking and Chrome's CRL (05 Feb 2012) When a browser connects to an HTTPS site it receives signed certificates which allow it to verify that it's really connecting to the domain that it should be connecting to. In those certificates are pointers to services, run by the Certificate Authorities (CAs) that issued the certificate, that…
Agility itself. Cryptographic agility is a huge cost. Implementing and supporting multiple algorithms means more code. More code begets more bugs. More things in general means less academic focus on any one thing, and less testing and code-review per thing. Any increase in the number of options also means more combinations and a higher chance…
Register creates a new key-pair. Authenticate signs with an existing key-pair, after the user confirms physical presence, and Check confirms whether or not a key-pair is known to a security key. In more detail, Register takes a 32-byte challenge and a 32-byte appID. These are intended to be SHA-256 hashes, but are opaque and can be anything.
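A minimal sketch of how a relying party might derive those two 32-byte values; the appID URL and variable names here are illustrative, not taken from any spec:

```python
import hashlib
import os

# The Register message takes two 32-byte values. Per the text they are
# opaque, but are intended to be SHA-256 hashes: conventionally one of
# the server's challenge material and one of the appID string.
app_id = "https://example.com"  # hypothetical relying party identifier
challenge = os.urandom(32)      # fresh random challenge, already 32 bytes

app_param = hashlib.sha256(app_id.encode()).digest()
assert len(app_param) == 32

register_payload = challenge + app_param  # 64 bytes sent to the key
assert len(register_payload) == 64
```

Because the values are opaque to the security key, anything 32 bytes long would be accepted; hashing is simply the conventional way to bind arbitrary-length data into the fixed-size fields.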
There is a performance cost: AES-GCM-SIV encryption runs at about 70% the speed of AES-GCM, although decryption runs at the same speed. (Measured using current BoringSSL on an Intel Skylake chip with 8KiB messages.) But, in any situation where you don't have a watertight argument for nonce uniqueness, that might be pretty cheap compared to the…
A PRACTICAL UNIX CAPABILITY SYSTEM. Abstract: This report seeks to document the development of a capability security system based on a Linux kernel and to follow through the implications of such a system.
When registering a security key for a username-free login, the important differences are that you need to make requireResidentKey true, set userVerification to required, and set a meaningful user ID. In WebAuthn terms, a “resident” credential is one that can be discovered without knowing its ID. Generally, most security keys operate…
Encrypting streams (27 Jun 2014) When sending data over the network, chunking is pretty much a given. TLS has a maximum record size of 16KB and this fits neatly with authenticated encryption APIs which all operate on an entire message at once. But file encryption frequently gets this wrong. Take OpenPGP: it bulk encrypts the data and sticks a…
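The chunking approach described above can be sketched as follows. This is an illustrative toy, not the scheme from any particular design: HMAC-SHA256 stands in for a real AEAD, and the record layout (8-byte sequence number plus a final-chunk flag) is invented for the example.

```python
import hmac
import hashlib
import struct

CHUNK = 16 * 1024  # 16KB chunks, mirroring TLS's record-size limit

def seal_stream(key: bytes, data: bytes) -> list:
    """Authenticate each chunk together with its sequence number and a
    final-chunk flag, so reordering, truncation and dropped chunks are
    all detectable when reading the stream back."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    records = []
    for seq, chunk in enumerate(chunks):
        header = struct.pack(">QB", seq, int(seq == len(chunks) - 1))
        tag = hmac.new(key, header + chunk, hashlib.sha256).digest()
        records.append(header + chunk + tag)
    return records

def open_stream(key: bytes, records: list) -> bytes:
    out = []
    for seq, record in enumerate(records):
        header, chunk, tag = record[:9], record[9:-32], record[-32:]
        expect = hmac.new(key, header + chunk, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("bad chunk MAC")
        got_seq, final = struct.unpack(">QB", header)
        if got_seq != seq or bool(final) != (seq == len(records) - 1):
            raise ValueError("chunk reordered or stream truncated")
        out.append(chunk)
    return b"".join(out)
```

The sequence number is what prevents an attacker from reordering chunks, and the final-chunk flag is what turns silent truncation into a hard error.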
The attacker also gets to choose any page on your HTTPS site with the problem. That includes pages that you don't expect to be served over HTTPS, but happen to be mapped. If you have this problem anywhere, on any HTTPS page, the attacker wins. With complex sites, it's very difficult to ensure that this doesn't happen.
RISC-V assembly (31 Dec 2016) RISC-V is a new, open instruction set. Fabrice Bellard wrote a Javascript emulator for it that boots Linux here (more info). I happen to have just gotten a physical chip that implements it too (one of these) and what's cool is…
Security Keys are (generally) USB-connected hardware fobs that are capable of key generation and oracle signing. Websites can “enroll” a security key by asking it to generate a public key bound to an “appId” (which is limited by the browser based on the site's origin).
This is most useful at the beginning of compression when there wouldn't otherwise be any text to refer back to. The problem that CRIME highlights is that sensitive cookie data and an attacker controlled path is compressed together in the same context. Cookie data makes up most of the red, uncompressed bytes in the diagram.
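The leak described above can be demonstrated in a few lines: when an attacker-controlled path is compressed in the same context as a secret cookie, the compressed length shrinks as the attacker's guess converges on the secret. The cookie name and value below are invented for illustration.

```python
import zlib

# A made-up secret that rides along with every request.
SECRET = b"Cookie: session=s3cr3tval\r\n"

def request_len(path: bytes) -> int:
    """Compressed length of a toy request that mixes the attacker's
    path with the secret cookie in one compression context."""
    request = b"GET /" + path + b" HTTP/1.1\r\n" + SECRET
    return len(zlib.compress(request, 9))

matching = request_len(b"?q=session=s3cr3tval")  # long shared substring
wrong = request_len(b"?q=session=qwortzubx")     # short shared substring

# The correct guess compresses at least as well as an equal-length
# wrong guess, so the length itself leaks information about the cookie.
assert matching <= wrong
```

This is exactly why mixing attacker-controlled and secret data in one compression context is dangerous: the ciphertext hides the bytes but not the length.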
Certificate Transparency (06 Nov 2012) These are my notes for a talk that I gave today at IETF 85. For more details, see the draft. Certificates are public statements that everyone trusts, but they aren't public record. Other critical details about companies are generally public record, their address, directors etc, but not their public key.
PROBING THE VIABILITY OF TCP EXTENSIONS. Adam Langley, Google Inc, agl@google.com. Abstract: TCP was designed with extendibility in mind, chiefly reflected in the options mechanism.
If you do not know what ACVP is then you should read no further. If you think it might be useful then what you're actually looking for is Wycheproof; ACVP is only for those who have no choice. If you're still reading and you're vaguely aware that your previous CAVP infrastructure isn't applicable any longer, and that you'll need to deal with ACVP next time, then you might be interested in…
Thomas Ptacek laid out a number of arguments against DNSSEC recently (and in a follow-up). We don't fully agree on everything, but it did prompt me to write why, even if you assume DNSSEC, DANE (the standard for speaking about the intersection of TLS and DNSSEC) is not a foregone conclusion in web browsers. There are two ways that you might wish to use DANE in a web browser: either to…
A shallow survey of formal methods for C code (07 Sep 2014) Two interesting things in formally verified software happened recently. The big one was the release of SeL4 - a formally verified L4 microkernel. The second was much smaller, but closer to my usual scope: a paper which showed the correctness of sections of a couple of the assembly…
Last time I reviewed various security keys at a fairly superficial level: basic function, physical characteristics etc. This post considers lower-level behaviour. Security Keys implement the FIDO U2F spec, which borrows a lot from ISO 7816-4. Each possible transport (i.e. USB, NFC, or Bluetooth) has its own spec for how to encapsulate the U2F messages over that transport (e.g. here's the USB one).
EXPLORING JPEG
Exploring JPEG. This file is both an HTML file and a literate Haskell program. If you rename it to .lhs you can compile it with GHC 6.6. This is a functional, if limited, JPEG decoder. It only decodes grayscale, 8-bit images and is overly sensitive to the options used. I thought that people might like to learn a little about the JPEG standard.
CECPQ1 was the experiment in post-quantum confidentiality that my colleague, Matt Braithwaite, and I ran in 2016. It's about time for CECPQ2. I've previously written about the experiments in Chrome which led to the conclusion that structured lattices were likely the best area in which to look for a new key-exchange mechanism at the current time. Thanks to the NIST process we now have a great…
memcpy (and friends) with NULL pointers (26 Jun 2016) The C standard (ISO/IEC 9899:2011) has a sane-seeming definition of memcpy (section 7.24.2.1): The memcpy function copies n characters from the object pointed to by s2 into the object pointed to by s1. Apart from a prohibition on passing overlapping objects, I think every C programmer…
Matching primitive strengths (25 May 2014) It's a common, and traditional, habit to match the strengths of cryptographic primitives. For example, one might design at the 128-bit security level and pick AES-128, SHA-256 and P-256. Or if someone wants “better” security, one might target a 192-bit security level with AES-192, SHA-384 and P-384.
PKCS#1 version 1.5 wanted to include an identifier for the hash function that's being used, inside the signature. This is a fine idea, but they did it by encoding the algorithm and hash value with ASN.1. This caused many implementations to include the complexity of an ASN.1 parser inside signature validation and that let the bugs in.
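In practice the ASN.1 DigestInfo structure for a given hash is a fixed byte prefix, so a verifier can sidestep the parser entirely and compare against a constant. A sketch for SHA-256 (the prefix constant is the well-known one from RFC 8017; the function name is mine):

```python
import hashlib

# The DER encoding of DigestInfo for SHA-256 is a fixed 19-byte prefix
# followed by the 32-byte hash. Comparing against this constant avoids
# putting an ASN.1 parser inside signature validation.
SHA256_DIGEST_INFO_PREFIX = bytes.fromhex(
    "3031300d060960864801650304020105000420"
)

def digest_info(message: bytes) -> bytes:
    """The byte string that a PKCS#1 v1.5 signature over `message`
    should contain after the padding is stripped."""
    return SHA256_DIGEST_INFO_PREFIX + hashlib.sha256(message).digest()

assert len(digest_info(b"hello")) == 19 + 32
```

A verifier built this way hashes the message, concatenates the constant, and does a single full comparison against the recovered padding, which is far harder to get wrong than parsing.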
DNSSEC authenticated HTTPS in Chrome (16 Jun 2011) Update: this has been removed from Chrome due to lack of use. DNSSEC validation of HTTPS sites has been hanging around in Chrome for nearly a year now. But it's now enabled by default in the current canary and dev channels of Chrome and is on schedule to go stable with Chrome 14.
But, on a 4GHz Skylake, proving takes 18 seconds and verification takes 13 seconds. That's not really practical, but there is a lot of room for optimisation and for multiple cores to be used concurrently. The proof is 70 450 bytes, dominated by the 2154 secret-inputcommitments.
IMPERIALVIOLET
Based on that name, all the assembler directives begin with .cfi_. Next we need to define the Canonical Frame Address (CFA). This is the value of the stack pointer just before the CALL instruction in the parent function. In the diagram above, it's the value indicated by “RSP value before CALL”.IMPERIALVIOLET
When registering a security key for a username-free login, the important differences are that you need to make requireResidentKey true, set userVerification to required, and set a meaningful user ID. In WebAuthn terms, a “resident” credential is one that can be discovered without knowing its ID. Generally, most security keysoperate
IMPERIALVIOLET
Revocation checking and Chrome's CRL (05 Feb 2012) When a browser connects to an HTTPS site it receives signed certificates which allow it to verify that it's really connecting to the domain that it should be connecting to. In those certificates are pointers to services, run by the Certificate Authorities (CAs) that issued the certificate, thatIMPERIALVIOLET
Register creates a new key-pair. Authenticate signs with an existing key-pair, after the user confirms physical presence, and Check confirms whether or not a key-pair is known to a security key. In more detail, Register takes a 32-byte challenge and a 32-byte appID. These are intended to be SHA-256 hashes, but are opaque and can be anything.IMPERIALVIOLET
Agility itself. Cryptographic agility is a huge cost. Implementing and supporting multiple algorithms means more code. More code begets more bugs. More things in general means less academic focus on any one thing, and less testing and code-review per thing. Any increase in the number of options also means more combinations and a higher chanceIMPERIALVIOLET
There is a performance cost: AES-GCM-SIV encryption runs at about 70% the speed of AES-GCM, although de cryption runs at the same speed. (Measured using current BoringSSL on an Intel Skylake chip with 8KiB messages.) But, in any situation where you don't have a watertight argument for nonce uniqueness, that might be pretty cheap compared tothe
IMPERIALVIOLET
Thomas Ptacek laid out a number of arguments against DNSSEC recently (and in a follow up).We don't fully agree on everything, but it did prompt me to write why, even if you assume DNSSEC, DANE (the standard for speaking about the intersection of TLS and DNSSEC) is not a foregone conclusion in web browsers. There are two ways that you might wish to use DANE in a web browser: either toIMPERIALVIOLET
CECPQ1 was the experiment in post-quantum confidentiality that my colleague, Matt Braithwaite, and I ran in 2016. It's about time for CECPQ2. I've previously written about the experiments in Chrome which lead to the conclusion that structured lattices were likely the best area in which to look for a new key-exchange mechanism at the current time. Thanks to the NIST process we now have a greatIMPERIALVIOLET
IMPERIALVIOLET
But, on a 4GHz Skylake, proving takes 18 seconds and verification takes 13 seconds. That's not really practical, but there is a lot of room for optimisation and for multiple cores to be used concurrently. The proof is 70 450 bytes, dominated by the 2154 secret-inputcommitments.
IMPERIALVIOLET
Based on that name, all the assembler directives begin with .cfi_. Next we need to define the Canonical Frame Address (CFA). This is the value of the stack pointer just before the CALL instruction in the parent function. In the diagram above, it's the value indicated by “RSP value before CALL”.IMPERIALVIOLET
When registering a security key for a username-free login, the important differences are that you need to make requireResidentKey true, set userVerification to required, and set a meaningful user ID. In WebAuthn terms, a “resident” credential is one that can be discovered without knowing its ID. Generally, most security keys operate …
Revocation checking and Chrome's CRL (05 Feb 2012) When a browser connects to an HTTPS site it receives signed certificates which allow it to verify that it's really connecting to the domain that it should be connecting to. In those certificates are pointers to services, run by the Certificate Authorities (CAs) that issued the certificate, that …
Register creates a new key-pair. Authenticate signs with an existing key-pair, after the user confirms physical presence, and Check confirms whether or not a key-pair is known to a security key. In more detail, Register takes a 32-byte challenge and a 32-byte appID. These are intended to be SHA-256 hashes, but are opaque and can be anything.
Agility itself. Cryptographic agility is a huge cost. Implementing and supporting multiple algorithms means more code. More code begets more bugs. More things in general means less academic focus on any one thing, and less testing and code-review per thing. Any increase in the number of options also means more combinations and a higher chance …
The attacker also gets to choose any page on your HTTPS site with the problem. That includes pages that you don't expect to be served over HTTPS, but happen to be mapped. If you have this problem anywhere, on any HTTPS page, the attacker wins. With complex sites, it's very difficult to ensure that this doesn't happen.
Encrypting streams (27 Jun 2014) When sending data over the network, chunking is pretty much a given. TLS has a maximum record size of 16KB and this fits neatly with authenticated encryption APIs which all operate on an entire message at once. But file encryption frequently gets this wrong. Take OpenPGP: it bulk encrypts the data and sticks a …
Matching primitive strengths (25 May 2014) It's a common, and traditional, habit to match the strengths of cryptographic primitives. For example, one might design at the 128-bit security level and pick AES-128, SHA-256 and P-256. Or if someone wants “better” security, one might target a 192-bit security level with AES-192, SHA-384 and P-384.
PKCS#1 version 1.5 wanted to include an identifier for the hash function that's being used, inside the signature. This is a fine idea, but they did it by encoding the algorithm and hash value with ASN.1. This caused many implementations to include the complexity of an ASN.1 parser inside signature validation and that let the bugs in.
DNSSEC authenticated HTTPS in Chrome (16 Jun 2011) Update: this has been removed from Chrome due to lack of use. DNSSEC validation of HTTPS sites has been hanging around in Chrome for nearly a year now. But it's now enabled by default in the current canary and dev channels of Chrome and is on schedule to go stable with Chrome 14.

Probing the viability of TCP extensions. Adam Langley, Google Inc, agl@google.com. Abstract: TCP was designed with extendibility in mind, chiefly reflected in the options mechanism.
ACVP (23 Dec 2020). If you do not know what ACVP is then you should read no further. If you think it might be useful then what you're actually looking for is Wycheproof; ACVP is only for those who have no choice. If you're still reading and you're vaguely aware that your previous CAVP infrastructure isn't applicable any longer, and that you'll need to deal with ACVP next time, then you …
(This post uses x86-64 for illustration throughout. The fundamentals are similar for other platforms but will need some translation that I don't cover here.)
When a browser connects to an HTTPS site it receives signed certificates which allow it to verify that it's really connecting to the domain that it should be connecting to.
Last time I reviewed various security keys at a fairly superficial level: basic function, physical characteristics etc. This post considers lower-level behaviour. Security Keys implement the FIDO U2F spec, which borrows a lot from ISO 7816-4. Each possible transport (i.e. USB, NFC, or Bluetooth) has its own spec for how to encapsulate the U2F messages over that transport (e.g. here's the USB one).
A lot of mistakes were made in the 1990s—we really didn't know what we were doing. Phil Rogaway did, but sadly not enough people listened to him; probably because they were busy fighting the US Government, which was trying to ban the whole field of study at the time.
Most readers of this blog will be familiar with the traditional security key user experience: you register a token with a site then, when logging in, you enter a username and password as normal but are also required to press a security key in order for it to sign a challenge from the website.
AEADs combine encryption and authentication in a way that provides the properties that people generally expect when they “encrypt” something.
On Wednesday, Chrome and Mozilla did coordinated updates to fix an RSA signature verification bug in NSS — the crypto library that handles SSL in Firefox and (currently) Chrome on most platforms. The updates should be well spread now and the bug has been detailed on Reddit, so I think it's safe to talk about. (Hilariously, on the same day, bash turned out to have a little security issue and …
(These are my notes from the first half of my talk at HOPE9 last weekend. I write notes like these not as a script, but so that I have at least some words ready in my head when I'm speaking. …
ADAM LANGLEY'S WEBLOG

ACVP (23 DEC 2020)
If you do not know what ACVP is then you should read no further. If you think it might be useful then what you're actually looking for is Wycheproof; ACVP is only for those who have no choice. If you're still reading and you're vaguely aware that your previous CAVP infrastructure isn't applicable any longer, and that you'll need to deal with ACVP next time, then you might be interested in BoringSSL's ACVP infrastructure. We have a number of different FIPS modules to test and wanted something generic rather than repeating the bespoke-per-module way that we handled CAVP. We also need to test not just BoringCrypto (a software module) but also embedded devices. The result, acvptool,
lives within the BoringSSL repo and can translate ACVP JSON into a series of reasonably simple IPC calls that a “module wrapper” speaks over stdin/stdout. BoringSSL's module wrapper is the reference implementation, but there's also a tiny one for testing that could easily be repurposed to forward over a serial link, etc, for embedded devices. It's reasonably likely that you'll find _some_ case that's not handled, but the code is just Go throwing around JSON so you should be able to extend it
without too much bother. But, for the cases that are already handled, the weird undocumented quirks that'll otherwise consume hours of your life are taken care of.

LETTER TO 20 YEARS AGO (06 SEP 2020)

I noticed that I have not posted anything here in 2020! There's a bunch of reasons for this: the work I'm doing at the moment does not lend itself so well to blog posts, and life intervenes, leaving less time for personal projects. But in order to head off the risk that I'll post nothing at all this year I pulled something from one of my notebooks. 2020 is a round number so I decided to do some reflection and this was a letter that I imagined writing to myself 20 years ago. It is very much a letter to me! The topics are going to be quite specific and if you weren't paying attention to the computing industry in the year 2000 I'm not sure how much of it will make sense. But these are the points that I think me of 20 years ago would have wondered about.
You must be thinking that computers will be crazy fast by now. Yes…ish. It's complicated, and that's going to be a theme here. You've been hearing from Intel that the NetBurst chips will hit 10GHz in a few years, so with another few doublings what will we have by now? 50GHz? Actually common values are around 3.5GHz. Some reach 5GHz, but only in bursts. Intel never hit 10GHz and nor did anybody else. It's better than it sounds: instructions per clock are up a lot, so each cycle is worth more. (Although maybe we'll get to some of the issues that caused!) More importantly, all systems are multiprocessor now. It's physically a single chip, but inside is often 8- to 32-way SMT. Yep, that's cool. And yep, it only helps for certain sorts of workloads. Multithreaded programming is not going away. Memory? 10s of gigabytes is common. Hard drives? It's nearly all flash now. You can still buy hard drives and they're huge and cheap, but the speed of flash is pretty sweet. Computers really are quite substantially faster — don't be too put off by the clock speeds.

Your day-to-day life is a bunch of xterms and a web browser. Nothing's changed; you are dramatically underestimating the importance of path dependence. Terminals are still emulating a fancy VT-100 and sometimes they get messed up and need a reset. No fixes there. It's still bash or zsh; nearly unchanged from your time. The kernel has been fixed a little: you can now get a handle to a process, so no more PID races. You can open a file relative to a directory descriptor and you can create an unlinked file in a directory and link it later. Yes it's good that these things are possible now, but it is not a fundamental change and it took a long time. Actually you know what? _Windows_ grew a much smarter shell, leapfrogging Linux in several respects. They had hardly moved forward since DOS so it was easier there, perversely. So innovation must have happened higher up where there was new ground and little legacy, right?
What about the semantic web? How did that turn out? Not well. We don't have lots of data in machine-readable formats and fancy GUIs so that anyone can create automation. Information is somewhere between impossible and a huge pain to access. You've read The Dilbert Future by now and its ‘confusopoly’ concept is much closer to the mark. The Semantic Web stuff failed so badly that nobody even tries any longer. (I'm afraid Scott Adams won't seem so wholesome in the future either.) The closest you'll get is that your web browser can fill out your name, address, and credit card details. And it has to work really hard to do that because there's almost no assistance from web pages. Go find _Weaving the Web_ and throw it away.

Something more positive: bandwidth! You are using a dial-up that tops out at 5 KB/s and charges by the minute. You use a local proxy that keeps a copy of everything so that viewed pages are available offline and it lets you mark missing pages for batch fetching to reduce the cost. This problem is now solved. You can assume that any house in a city can get an always-on, several 10s of Mb/s connection. It's not as cheap as it could be but it's a standard household expense now. (Note: past me doesn't live in the US! —agl.) Also, everyone carries an impossibly fancy PDA that has that level of connection _wirelessly_ and _everywhere_. I don't need to equivocate here, connectivity is solved in the sorts of places you're likely to live.

But … there's a second edge to that sword. This connectivity can be a bit … much? There are some advantages to having the internet be stationary, metered, and behind 30 seconds of banshee wailing and static. Imagine your whole social life getting run through IRC, and that you're always connected. It's tough to explain but there's a problem. But these PDAs? They have GPS and maps. Nobody gets lost anymore. Nobody carries paper street maps in their car. Connectivity can be pretty sweet.
This next bit is going to upset you a little: the whole Palladium / trusted boot stuff never took off on the desktop, but these PDAs are pretty locked down. One type of them is completely locked down and you can't run non-approved software. The other will make you jump through hoops and, even then, you can't access the data of other programs. On the latter sort you can install a completely custom OS most of the time, but there's attestation and some things won't cooperate. This is still playing out and people are fighting over the details (because of money, of course). It remains a concern, but you underestimate the benefits of this sort of system. Your idea that people should own their own computers because they're critical tools isn't _wrong_, but it is elitist. For the vast majority of people, their desktops degrade into a fragile truce with a whole ecosystem of malware and near-malware. Maybe it's their “fault” for having installed it, but these PDAs are so popular, in part, because they're hard to screw up. Bad stuff does get through the approval process, but it cannot mess things up to the wipe-and-reinstall level that desktops reach. The jury is still out about whether we will regret this, but you're wrong about the viability of giving people Windows XP and getting a good result.

Back on a positive note: the music industry switched to a $15 a month stream-whatever-you-want model and it works fine. You were completely right about this. Music still exists and it still pays a few at the top large sums and the rest very little. The music industry itself didn't sort this out though, other companies did it for them. What you're missing is that you're not taking things far enough: companies also did this for TV and (many) movies. There are still rips of this stuff on BitTorrent, but it's not a live issue because people pay the subscription for the ease, by and large. In fact, access to scientific papers is a hotter issue now!
Basically, rates of change are really uneven.

REAL-WORLD MEASUREMENTS OF STRUCTURED-LATTICES AND SUPERSINGULAR ISOGENIES IN TLS (30 OCT 2019)

This is the third in a series of posts about running experiments on post-quantum confidentiality in TLS. The first detailed experiments that measured the estimated network overhead of three families of post-quantum key exchanges. The second detailed the choices behind a specific structured-lattice scheme. This one gives details of a full, end-to-end measurement of that scheme and a supersingular isogeny scheme, SIKE/p434. This was done in collaboration with Cloudflare, who integrated Microsoft's SIKE code into BoringSSL for the tests, and ran the server-side of the experiment.

SETUP
Google Chrome installs, on Dev and Canary channels, and on all platforms except iOS, were randomly assigned to one of three groups: control (30%), CECPQ2 (30%), or CECPQ2b (30%). (A random ten percent of installs did not take part in the experiment so the numbers only add up to 90.) CECPQ2 is the hybrid X25519+structured-lattice scheme previously described. CECPQ2b is
the name that we gave to the combination of X25519 and the SIKE/p434 scheme.
Because optimised assembly implementations are labour-intensive to write, they were only available/written for AArch64 and x86-64. Because SIKE is computationally expensive, it wasn't feasible to enable it without an assembly implementation, thus only AArch64 and x86-64 clients were included in the experiment and ARMv7 and x86 clients did not contribute to the results even if they were assigned to one of the experiment groups. Cloudflare servers were updated to include support for both CECPQ2 and CECPQ2b, and to support an empty TLS extension that indicated that they were part of the experiment. Depending on the experiment group, Chrome would either offer CECPQ2, CECPQ2b, or just non-post-quantum options, in its TLS 1.3 handshake, along with the signaling extension to indicate which clients were part of the control group. Measurements were taken of how long TLS handshakes took to complete using Chrome's metrics system. Chrome knew which servers were part of the experiment because they echoed the signaling extension, thus all three groups were measuring handshake duration against the same set of servers.
After this phase of the trial was complete, client-side measurements were disabled and Chrome Canary was switched to a mode where it randomly picked one of CECPQ2, CECPQ2b, or neither to offer. This enabled some additional, server-side measurements to ensure that nothing unexpected was occurring. (Cloudflare has a significantly more detailed write up of this experiment.)
BIASES
We're aware of a couple of biases and these need to be kept in mind when looking at the results. Firstly, since ARMv7 and x86 platforms were excluded, the population was significantly biased towards more powerful CPUs. This will make supersingular isogenies look better. Also, we've seen from past experiments that Canary and Dev Chrome users tend to have worse networks than the Chrome user population as a whole, and this too will tend to advantage supersingular isogenies since they require less network traffic.

RESULTS
Here are histograms of client-side results, first from Windows (representing desktops/laptops) and then from Android (representing mobile devices):
[Two histograms: “TLS handshake latency (Windows)” and “TLS handshake latency (Android)”; x-axis: TLS handshake time (ms), 1–10000; series: Control, CECPQ2, CECPQ2b.]

From the histograms we can see that the CECPQ2b (SIKE) group shifts visibly to the right (i.e. slower) in both cases. (On Android, a similar but smaller shift is seen for CECPQ2.) Despite the advantages of removing the slower clients and experimenting with worse-than-usual networks, the computational demands of SIKE out-weigh the reduced network traffic. Only for the slowest 5% of connections are the smaller messages of SIKE a net advantage. Cloudflare have a much more detailed analysis of the server-side results, which are very similar.

CONCLUSION
While there may be cases where the smaller messages of SIKE are a decisive advantage, that doesn't appear to be the case for TLS, where the computational advantages of structured lattices make them a more attractive choice for post-quantum confidentiality.

USERNAME (AND PASSWORD) FREE LOGIN WITH SECURITY KEYS (10 AUG 2019)
Most readers of this blog will be familiar with the traditional security key user experience: you register a token with a site then, when logging in, you enter a username and password as normal but are also required to press a security key in order for it to sign a challenge from the website. This is an effective defense against phishing, phone number takeover, etc. But modern security keys are capable of serving the roles of username and password too, so the user experience can just involve clicking a login button, pressing the security key, and (perhaps) entering a locally-validated PIN if the security key doesn't do biometrics. This is possible with the recently released Chromium 76 and also with Edge or Firefox on current versions of Windows.
On the plus side, this one-button flow frees users from having to remember and type their username and password for a given site. It also avoids sites having to receive and validate a password, potentially avoiding both having a password database (which, even with aggressively slow hashes, will leak many users' passwords if disclosed), and removing any possibility of accidentally logging the plaintext values (which both Google and Facebook have done recently). On the negative side, users will need a modern security key (or Windows Hello-enabled computer) and may still need to enter a PIN.
Which security keys count as “modern”? For most people it'll mean a series-5 black Yubikey or else a blue Yubikey that has a faint “2” printed on the upper side. Of course, there are other manufacturers who make security keys and, if it advertises “CTAP2” support, there's a good chance that it'll work too. But those Yubikeys certainly do.
In practical terms, web sites exercise this capability via WebAuthn, the same API that handles the traditional security key flow. (I'm not going to go into much detail about how to use WebAuthn. Readers wanting more introductory information can see what I've written previously or else see one of the several tutorials that come up in a Google search.)
When registering a security key for a username-free login, the important differences are that you need to make requireResidentKey true, set userVerification to required, and set a meaningful user ID.
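The user ID should be an opaque value rather than, say, an e-mail address. As a sketch of one way to produce such a value (the function names and key handling here are illustrative, not from the post): wrap the account's database primary key with AES-256-GCM under a key dedicated solely to this purpose.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hypothetical 256-bit secret used *only* for wrapping user IDs.
const KEY = randomBytes(32);

// Wrap a database primary key into an opaque WebAuthn user ID.
function wrapUserId(primaryKey: string): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ct = Buffer.concat([cipher.update(primaryKey, "utf8"), cipher.final()]);
  // Layout: iv ‖ ciphertext ‖ tag. WebAuthn user IDs may be at most 64 bytes.
  return Buffer.concat([iv, ct, cipher.getAuthTag()]);
}

// Recover the primary key from a user ID returned in an assertion.
function unwrapUserId(userId: Buffer): string {
  const iv = userId.subarray(0, 12);
  const tag = userId.subarray(userId.length - 16);
  const ct = userId.subarray(12, userId.length - 16);
  const d = createDecipheriv("aes-256-gcm", KEY, iv);
  d.setAuthTag(tag);
  return Buffer.concat([d.update(ct), d.final()]).toString("utf8");
}
```

Since a fresh random nonce is used on each call, wrap once per user and persist the result, so that later registrations present the same user ID and overwrite, rather than duplicate, the credential on the key.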
In WebAuthn terms, a “resident” credential is one that can be discovered without knowing its ID. Generally, most security keys operate statelessly, i.e. the credential ID is an encrypted private seed, and the security key doesn't store any per-credential information itself. Thus the credential ID is required for the security key to function so the server sends a list of them to the browser during login, implying that the server already knows which user is logging in. Resident keys, on the other hand, require some state to be kept by the security key because they can be used without presenting their ID first. (Note that, while resident keys require some state to be kept, security keys are free to keep state for non-resident keys too: resident vs non-resident is all about whether the credential ID is needed.)

User verification is about whether the security key is providing one or two authentication factors. With the traditional experience, the security key is something you have and the password is something you know. In order to get rid of the password, the security key now needs to provide two factors all by itself. It's still something you have so the second security key factor becomes a PIN (something you know) or a biometric (something you are).

That begs the question: what's the difference between a PIN and a password? On the surface: nothing. A security key PIN is an arbitrary string, not limited to numbers. (I think it was probably considered too embarrassing to call it a password since FIDO's slogan is “solving the world's password problem”.) So you should think of it as a password, but it is a password with some deeper advantages: firstly, it doesn't get sent to web sites, so they can't leak it and people can safely use a single password everywhere. Secondly, brute-force resistance is enforced by the hardware of the security key, which will only allow eight attempts before locking and requiring a reset.
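Putting the registration-time parameters together, the creation options might look like the following sketch. The rp and user values are placeholders of my own; -7 is the COSE identifier for ECDSA with P-256; credProtect is the optional protection extension mentioned later in the post.

```typescript
// Sketch of creation options for a resident (discoverable) credential.
function residentKeyCreationOptions(
  challenge: Uint8Array,
  userId: Uint8Array, // opaque user ID; must not be personally identifying
  name: string,
  displayName: string,
) {
  return {
    rp: { id: "example.com", name: "Example" }, // placeholder relying party
    user: { id: userId, name, displayName },
    challenge,
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ECDSA with P-256
    authenticatorSelection: {
      requireResidentKey: true,     // ask for a discoverable credential
      userVerification: "required", // PIN or biometric as the second factor
    },
    extensions: { credProtect: 2 }, // don't disclose without user verification
  };
}
```

In the browser this object would be passed as `navigator.credentials.create({ publicKey: … })`, with the challenge and user ID supplied by the server.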
Still, it'll be nice when biometrics are common in security keys.

A user ID is an opaque identifier that should _not_ be personally identifying. Most systems will have some database primary key that identifies a user and, if using that as a WebAuthn user ID, ensure that you encrypt it first with a key that is _only used for this purpose_. That way it doesn't matter if those primary keys surface elsewhere too. Security keys will only store a single credential for a given pair of domain name and user ID. So, if you register a second credential with the same user ID on the same domain, it'll overwrite the first.

The fact that you can register more than one credential for a given domain means that it's important to set the metadata correctly when creating a resident credential. This isn't unique to resident keys, but it's much more important in this context. The user name and displayName will be shown by the browser during login when there's more than one credential for a domain. Also the relying party name and displayName will be shown in interfaces for managing the contents of a security key.

When logging in, WebAuthn works as normal except you leave the list of credential IDs
empty and set userVerification to required. That triggers the resident-credential flow and the resulting credential will include the user ID, with which you look up the user and their set of registered public keys, and then validate the public key and other parameters.
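The login side can be sketched like this (the rpId value is a placeholder of my own):

```typescript
// Sketch of request options for a username-free login. Leaving the list of
// allowed credential IDs empty triggers the resident-credential flow.
function usernameFreeRequestOptions(challenge: Uint8Array) {
  return {
    rpId: "example.com",          // placeholder relying-party ID
    challenge,
    allowCredentials: [],         // empty: the key discovers the credential
    userVerification: "required", // PIN or biometric required
  };
}
```

The assertion returned by `navigator.credentials.get({ publicKey: … })` then carries a userHandle (the user ID set at registration) with which the server looks up the account and its registered public keys.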
Microsoft have a good test site (enter any username) where you can experiment with crafting different WebAuthn requests.

EXPOSED CREDENTIALS
In order to support the above, security keys obviously have a command that says “what credentials do you have for domain _x_?”. But what level of authentication is needed to run that command is a little complex.

While it doesn't matter for the web, one might want to use security keys to act as, for example, door access badges; especially over NFC. In that case one probably doesn't want to bother with a PIN etc. Thus the pertinent resident credentials would have to be discoverable and exercisable given only physical presence. But in a web context, perhaps you don't want your security key to indicate that it has a credential stored for cia.gov (or mss.gov.cn) to anyone who can plug it in.

Current security keys, however, will disclose whether they have a resident credential for a given domain, and the user ID and public key for that credential, to anyone with physical access. (Which is one reason why user IDs should not be identifying.)

Future security keys will have a concept of a per-credential protection level which will prevent them from being disclosed without user verification (i.e. PIN or biometrics), or without knowing their random credential ID. While Chromium will configure credential protection automatically if supported, other browsers may not. Thus it doesn't hurt to set credProtect: 2 in the extensions dictionary during registration.

ZERO-KNOWLEDGE ATTESTATION (01 JAN 2019)
U2F/FIDO tokens (a.k.a. “Security Keys”) are a solid contender for doing something about the effectiveness of phishing and so I believe they're pretty important. I've written a fairly lengthy introduction to them previously and, as mentioned there, one concerning aspect of their design is that they permit attestation: when registering a key it's possible for a site to learn a cryptographically authenticated make, model, and batch. As a browser vendor who has dealt with User-Agent sniffing, and as a large-site operator who has dealt with certificate pervasiveness issues, that's quite concerning for public sites.

It's already the case that one significant financial site has enforced a single-vendor policy using attestation (i.e. you can only register a token made by that vendor). That does not feel very congruent with the web, where any implementation that follows the standards is supposed to be a first-class citizen. (Sure, we may have undermined that with staggering levels of complexity, but that doesn't discredit the worth of the goal itself.)

Even in cases where a site's intended policy is more reasonable (say, they want to permit all tokens with some baseline competence), there are strong grounds for suspecting that things won't turn out well. Firstly, the policies of any two sites may not completely align, leading to a crappy user experience where a user needs multiple tokens to cover all the sites that they use, and also has to remember which works where. Secondly, sites have historically not been so hot about staying up-to-date. New token vendors may find themselves excluded from the market because it's not feasible to get every site to update their attestation whitelists. That feels similar to past issues with User-Agent headers, but the solution there was to spoof other browsers. Since attestation involves a cryptographic signature, that answer doesn't work here.

So the strong recommendation for public sites is not to request attestation and not to worry about it. The user, after all, has control of the browser once logged in, so it's not terribly clear what threats it would address. However, if we assume that certain classes of sites probably are going to use attestation, then users have a collective interest in those sites enforcing the same, transparent standard, and in them keeping their attestation metadata current. But without any impetus towards those ends, that's not going to happen. Which begs the question: can browsers do something about that?

Ultimately, in such a world, sites only operate on a single bit of information about any registration: was this public key generated in a certified device or not? The FIDO Alliance wants to run the certification process, so then the problem reduces down to providing that bit to the site. Maybe they would simply trust the browser to send it: the browser could keep a current copy of the attestation metadata and tell the site whether the device is certified or not. I don't present that as a straw-man: if the site's aim is just to ensure that the vast majority of users aren't using some backdoored token that came out of a box of breakfast cereal then it might work, and it's certainly simple for the site. But that would be a short blog post, and I suspect that trusting the browser probably wouldn't fly in some cases.

So what we're looking for is something like a group signature scheme, but we can't change existing tokens. So we need to retrospectively impose a group signature on top of signers that are using vanilla P-256 ECDSA.
ZERO-KNOWLEDGE PROOFS

It is a surprising but true result in cryptography that it's possible to create a convincing proof of any statement in NP that discloses nothing except the truth of the statement. As an example of such a statement, we might consider “I know a valid signature of message _x_ from one of the public keys in this set”. That's a pretty dense couple of sentences but, rather than write an introduction to zero-knowledge proofs here, I'm going to refer you to Matthew Green's posts. He does a better job than I would.

I obviously didn't pick that example at random. If there was a well-known set of acceptable public keys (say, as approved by the FIDO Alliance) then a browser could produce a zero-knowledge proof that it knew a valid attestation signature from one of those keys, without disclosing anything else, notably without disclosing _which_ public key was used. That could serve as an “attestation valid” bit, as hypothesised above, that doesn't require trusting the browser.

As a concrete instantiation of zero-knowledge proofs for this task, I'll be using Bulletproofs. (See zkp.science for a good collection of many different ZK systems. Also, dalek-cryptography have excellent notes on Bulletproofs; Cathie Yun and Henry de Valence from that group were kind enough to help me with a question about Bulletproofs too.)

The computational model for Bulletproofs is an
acyclic graph where public and secret inputs enter and each node either adds or multiplies all its inputs. Augmenting that are linear constraints on the nodes of the circuit. In the tool that I wrote for generating these circuits, this is represented as a series of equations where the only operations are multiplication, addition, and subtraction. Here are some primitives that hopefully convince you that non-trivial functions can be built from this:

* IsBit(x): x² - x = 0
* NOT(x): 1 - x
* AND(x, y): x × y
* OR(x, y): x + y - (x × y)
* XOR(x, y): x + y - 2(x × y)

PP-256
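Before moving past bits: a quick sanity check of those gate equations, evaluated in Python over a small prime field (any odd prime will do for the demonstration; Bulletproofs would use the curve's scalar field):

```python
P = 101  # toy prime field; stands in for the Bulletproof scalar field

def is_bit(x):  return (x * x - x) % P == 0   # constraint: x² - x = 0
def not_(x):    return (1 - x) % P
def and_(x, y): return (x * y) % P
def or_(x, y):  return (x + y - x * y) % P
def xor_(x, y): return (x + y - 2 * x * y) % P

# The gadgets agree with the usual Boolean operators on {0, 1}.
assert is_bit(0) and is_bit(1) and not is_bit(2)
for x in (0, 1):
    assert not_(x) == 1 - x
    for y in (0, 1):
        assert and_(x, y) == (x & y)
        assert or_(x, y)  == (x | y)
        assert xor_(x, y) == (x ^ y)
```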
Using single bit values in an arithmetic circuit certainly works, but it's inefficient. Getting past single-bit values, the arithmetic circuits in Bulletproofs don't work in ℤ (i.e. arbitrary-length integers); rather they work over a finite field. Bulletproofs are built on top of an elliptic curve and the finite field of the arithmetic circuit is the _scalar field_ of that curve.

When dealing with elliptic curves (as used in cryptography) there are two finite fields in play: the _x_ and _y_ coordinates of the points on the curve are in the _coordinate field_ of the curve. Multiples of the base point (B) then generate a prime number (_n_) of points in the group before cycling back to the base point. So xB + yB = (x + y mod n)B — i.e. you can reduce the multiple mod n before multiplying because it'll give the same result. Since _n_ is prime, reduction mod n gives a field, the scalar field. (I'm omitting powers of primes, cofactors, and some other complications in the above, but it'll serve.)

So Bulletproofs work in the scalar field of whatever elliptic curve they're implemented with, but we want to build P-256 ECDSA verification inside of a Bulletproof, and that involves lots of operations in P-256's coordinate field. So, ideally, the Bulletproofs need to work on a curve whose scalar field is equal to P-256's coordinate field. Usually when generating a curve, one picks the coordinate field to be computationally convenient, iterates other parameters until the curve meets standard security properties, and the scalar field is whatever it ends up as. However, after some quality time with “Constructing elliptic curves of prime order” (Bröker & Stevenhagen) and Sage, we find that y² = x³ - 3x + B over GF(PP) where:

* B = 0x671f37e49d38ff3b66fac0bdbcc1c1d8b9f884cf77f0d0e90271026e6ef4b9a1
* PP = 0xffffffff000000010000000000000000aaa0c132719468089442c088a05f455d

… gives a curve with the correct number of points, and which seems plausibly secure based on the SafeCurves criteria. (A more exhaustive check would be needed before using it for real, but it'll do for a holiday exploration.) Given its relationship to P-256, I called it “PP-256” in the code.

ECDSA VERIFICATION
Reviewing the ECDSA verification algorithm,
the public keys and message hash are obviously public inputs. The _r_ and _s_ values that make up the signature cannot both be public because then the verifier could just try each public key and find which one generated the signature. However, _one_ of _r_ and _s_ can be public. From the generation algorithm, _r_ is the x-coordinate of a random point and _s_ is blinded by the inverse of the nonce. So, on its own, neither _r_ nor _s_ discloses any information, and so one of them can just be given to the verifier—moving work outside of the expensive zero-knowledge proof. (I'm not worrying about tokens trying to use a covert channel here but, if you do worry about that, see True2F.)
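For reference, here is what plain ECDSA generation and verification look like outside any proof — a minimal, slow, pure-Python sketch over P-256 (no hashing, no input validation; never use this for real):

```python
import os

# P-256 domain parameters (a = -3; b is not needed for the group law).
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
     0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

def add(P1, P2):
    # Affine point addition; None is the point at infinity.
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0: return None
    if P1 == P2:
        lam = (3 * x1 * x1 - 3) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P1):
    # Double-and-add scalar multiplication.
    acc = None
    while k:
        if k & 1: acc = add(acc, P1)
        P1, k = add(P1, P1), k >> 1
    return acc

def sign(d, e):
    # r is the x-coordinate of a random point; s is blinded by the
    # inverse of the nonce k -- hence neither alone reveals anything.
    while True:
        k = int.from_bytes(os.urandom(32), 'big') % n or 1
        r = mul(k, G)[0] % n
        s = (e + r * d) * pow(k, -1, n) % n
        if r and s: return r, s

def verify(Q, e, r, s):
    u1 = e * pow(s, -1, n) % n
    u2 = r * pow(s, -1, n) % n
    X = add(mul(u1, G), mul(u2, Q))
    return X is not None and X[0] % n == r

d = 12345               # toy private key
Q = mul(d, G)           # public key
r, s = sign(d, e=67890) # e = message hash, already truncated
assert verify(Q, 67890, r, s)
assert not verify(Q, 67891, r, s)
```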
If we disclose _s_ to the verifier directly then what's left inside the zero-knowledge proof is 1) selecting the public key; 2) checking that the secret _r_ is in range; 3) u₂ = r/s mod n; 4) scalar-multiplication of the public key by u₂; 5) adding in the (now) public multiple of the base point; and 6) showing that the x-coordinate of the resulting point equals the original _r_, mod n.

The public key is stored as a 4-tooth comb, which is a precomputed form that speeds up scalar multiplications. It consists of 30 values. The main measure that we want to minimise in the arithmetic circuit is the number of multiplications where both inputs are secret. When selecting from _t_ possible public keys the prover supplies a secret _t_-bit vector where only one of the bits is set. The proof shows that each value is, indeed, either zero or one using IsBit (from above, at a cost of one multiply per bit), and that exactly one bit is set by requiring that the sum of the values equals one. Each of the 30_t_ public-key values is multiplied by one of the bits and summed to select exactly one key.

Rather than checking that the secret _r_ is within range, which would cost 512 multiplies, we just check that it's not equal to zero mod n. That's the important condition here since an out-of-range _r_ is otherwise just an encoding error. Showing that a number is not zero mod n just involves showing that it's not equal to zero or _n_, as 2_n_ is outside of the arithmetic circuit field. Proving a ≠ b is easy: the prover just provides an inverse for a - b (since zero doesn't have an inverse) and the proof shows that (a - b) × (a - b)⁻¹ = 1.
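The two gadgets in that paragraph — one-hot selection of a public key and the inverse-based a ≠ b check — are easy to mimic outside the proof system. A sketch of the constraints over a toy field, where the bit vector and the inverse are the prover-supplied witnesses:

```python
P = 2**61 - 1  # toy prime field standing in for PP-256's scalar field

def check_one_hot(bits):
    # IsBit on every entry costs one multiply each; the sum-to-one
    # condition is a free linear constraint.
    assert all((b * b - b) % P == 0 for b in bits)
    assert sum(bits) % P == 1

def select(bits, values):
    # Multiply each candidate by its bit and sum: picks exactly one.
    return sum(b * v for b, v in zip(bits, values)) % P

def check_nonzero(a, b, inv):
    # Prover supplies inv = (a - b)⁻¹; zero has no inverse, so this
    # constraint is only satisfiable when a ≠ b.
    assert (a - b) * inv % P == 1

keys = [17, 23, 42]
bits = [0, 1, 0]        # prover's secret selector
check_one_hot(bits)
assert select(bits, keys) == 23

a, b = 7, 12
check_nonzero(a, b, pow(a - b, -1, P))
```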
Calculating r/s mod n is the most complex part of the whole proof! Since the arithmetic circuit is working mod P-256's _p_, working mod n (which is the order of P-256—slightly less than _p_) is awkward. The prover gives a bit-wise breakdown of _r_; the proof does the multiplication as three words of 86, 86, and 84 bits; the prover supplies the values for the carry-chain (since bit-shifts aren't a native operation in the arithmetic circuit); the prover then gives the result in the form a×n + b, where _b_ is a 256-bit number; and the proof does another multiplication and carry-chain to check that the results are equal. All for a total cost of 2152 multiplication nodes!

After that, the elliptic curve operation itself is pretty easy. Using the formulae from “Complete addition formulas for prime order elliptic curves” (Renes, Costello, and Batina) it takes 5365 multiplication nodes to do a 4-tooth comb scalar-mult with a secret scalar and a secret point. Then a final 17 multiplication nodes add in the public base-point multiple, supply the inverse to convert to affine form, and check that the resulting x-coordinate matches the original _r_ value. The circuit does not reduce the x-coordinate mod n in order to save work: for P-256, that means that around one in 2¹²⁸ signatures may be incorrectly rejected, but that's below the noise floor of arithmetic errors in CPUs. Perhaps if this were to be used in the real world, that would be worth doing correctly, but I go back to work tomorrow so I'm out of time.

In total, the full circuit contains 7534 multiplication nodes, 2154 secret inputs, and 17 236 constraints. (Pratyush Mishra points out that affine formulae would be more efficient than projective since inversion is cheap in this model. Oops!)
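The relation that all that limb arithmetic enforces is easy to state outside the circuit: with u₂ = r/s mod n supplied by the prover, the check is that s·u₂ equals a·n + r for a prover-supplied quotient a, with the remainder ranged to 256 bits. In plain Python (signature values here are arbitrary, just for the demo):

```python
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551  # P-256 order

# Example signature values (arbitrary for the demo).
r = 0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef % n
s = 0xfedcba0987654321fedcba0987654321fedcba0987654321fedcba0987654321 % n

u2 = r * pow(s, -1, n) % n   # the value the prover wants to use
a, rem = divmod(s * u2, n)   # prover-supplied quotient and remainder

# The circuit checks this product/decomposition equality (via 86/86/84-bit
# limbs and carry chains, since it can't divide): s·u₂ = a·n + r.
assert s * u2 == a * n + rem
assert rem == r
assert rem < 2**256
```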
IMPLEMENTATION
My tool for generating the matrices that Bulletproofs operate on outputs 136KB of LZMA-compressed data for the circuit described above. In some contexts, that amount of binary size would be problematic, but it's not infeasible. There is also quite a lot of redundancy: the data includes instructions for propagating the secret inputs through the arithmetic circuit, but it also includes matrices from which that information could be derived.

The implementation is based on BoringSSL's generic-curve code. It doesn't even use Shamir's trick for multi-scalar multiplication of curve points, it doesn't use Montgomery form in a lot of places, and it doesn't use any of the optimisations described in the Bulletproofs paper. In short, the following timings are _extremely_ pessimistic and should not be taken as any evidence about the efficiency of Bulletproofs. But, on a 4GHz Skylake, proving takes 18 seconds and verification takes 13 seconds. That's not really practical, but there is a lot of room for optimisation and for multiple cores to be used concurrently.

The proof is 70 450 bytes, dominated by the 2154 secret-input commitments. That's not very large by the standards of today's web pages. (And Dan Boneh points out that I should have used a vector commitment to the secret inputs, which would shrink the proof down to just a few hundred bytes.)

INTERMEDIATES AND FIDO2

One important limitation of the above is that it only handles one level of signatures. U2F allows an intermediate certificate to be provided so that only less-frequently-updated roots need to be known a priori. With support for only a single level of signatures, manufacturers would have to publish their intermediates too. (But we already require that for the WebPKI.)

Another issue is that it doesn't work with the updated FIDO2 standard. While only a tiny fraction of Security Keys are FIDO2-based so far, that's likely to increase. With FIDO2, the model of the device is also included in the signed message, so the zero-knowledge proof would also have to show that a SHA-256 preimage has a certain structure. While Bulletproofs are quite efficient for implementing elliptic curves, a binary-based algorithm like SHA-256 is quite expensive: the Bulletproofs paper notes a SHA-256 circuit using 25 400 multiplications. There may be a good solution in combining different zero-knowledge systems based on “Efficient Zero-Knowledge Proof of Algebraic and Non-Algebraic Statements with Applications to Privacy Preserving Credentials” (Chase, Ganesh, Mohassel), but that'll have to be future work.

Happy new year.