Dear All,
I admit that I am only just hanging in there in this debate, but I
think I have a possible "solution 5" to throw in as a contender.
Looking at the problem a bit laterally, we have RasMessage in UDP
packets that we want to sign, and H323-UserInformation in the UUIE that
we want to sign. Currently these are the only chunks of ASN.1 in these
fields.
We could add a second piece of ASN.1 into these fields (UDP packet and
UUIE) that contains the signatures, such as:
H323Extra ::= CHOICE
{
    icv ICV,
    ...
}
This would be a separate ASN.1 tree. Therefore in a RAS UDP packet you
would get:
RasMessage chunk of ASN.1
H323Extra chunk of ASN.1 typically containing signature
Similarly in the UUIE, you would have
H323-UserInformation chunk of ASN.1
H323Extra chunk of ASN.1
Note that all the key ids and time stamps etc., would remain in the
RasMessage and H323-UserInformation parts (so they get signed).
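Pete's two-chunk layout could be sketched roughly as below. This is a minimal Python sketch, not an implementation: the HMAC-SHA-1 key, the one-byte stand-in for a PER-encoded H323Extra CHOICE, and the helper names are all invented for the example; a real implementation would use an actual PER codec and the negotiated integrity algorithm.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical pre-agreed key

def build_ras_packet(ras_message_per: bytes) -> bytes:
    """Build the UDP payload as two back-to-back ASN.1 chunks: the
    PER-encoded RasMessage followed by an H323Extra chunk whose icv
    alternative signs the first chunk."""
    icv = hmac.new(SECRET_KEY, ras_message_per, hashlib.sha1).digest()
    return ras_message_per + encode_h323_extra_icv(icv)

def parse_ras_packet(payload: bytes, ras_len: int):
    """Split the payload back into its two chunks and verify the ICV.
    ras_len would come from decoding the first chunk with a real PER
    decoder."""
    ras_per, extra_per = payload[:ras_len], payload[ras_len:]
    icv = decode_h323_extra_icv(extra_per)
    expected = hmac.new(SECRET_KEY, ras_per, hashlib.sha1).digest()
    return ras_per, hmac.compare_digest(icv, expected)

# Stand-ins for a real PER codec of the H323Extra CHOICE
# (one byte of choice index, then the icv octets):
def encode_h323_extra_icv(icv: bytes) -> bytes:
    return b"\x00" + icv  # choice index 0 = icv

def decode_h323_extra_icv(chunk: bytes) -> bytes:
    assert chunk[0] == 0  # choice index 0 = icv
    return chunk[1:]
```

Because the key IDs and timestamps stay inside the RasMessage chunk, the signature in the trailing H323Extra chunk covers them automatically.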
I agree this is not beautiful, but it does not require multiple ASN.1
encodings, and doesn't radically change the format of the message
depending on whether you want to sign it or not (as solution 4 seems
to).
I hope this sounds vaguely as though I know what I'm talking about!!!
Regards,
Pete
=================================
Pete Cordell
BT Labs
E-Mail: pete.cordell@bt-sys.bt.co.uk
Tel: +44 1473 646436
Fax: +44 1473 645499
=================================
----------
From: Bancroft Scott[SMTP:baos@OSS.COM]
Sent: 28 June 1998 17:35
To: ITU-SG16@MAILBAG.INTEL.COM
Subject: Re: ASN.1 accross revisions
On Wed, 24 Jun 1998, Pekka Pessi wrote:
The current H.225.0 algorithm for calculating the ICV does not work well
with the PER extension mechanism.
Agreed.
The above algorithm (as far as I have understood it) assumes that
the sender and receiver can encode the PDU in exactly the same way.
It is a reasonable assumption, as PER produces only one possible
(canonical) encoding for a given PDU.
PER is not inherently canonical. To get canonical behavior you have to
either do as this set of standards has been doing and stay away from
the types REAL and SET OF, or you have to use Canonical PER Aligned or
Canonical PER Unaligned.
However, there is a serious problem: the
canonical representation changes if the ASN.1 SEQUENCE type is extended
later.
Correct.
When encoding this, the length of the extension bitfield is 2. However,
recipient B is using an extended version of Foo like this:
Foo ::= SEQUENCE
{
    bar INTEGER (0..127),
    ...,
    baz INTEGER (0..255) OPTIONAL,
    integrityCheckValue ICV OPTIONAL,
    importantExtension SomeType OPTIONAL
}
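The version mismatch can be illustrated with a small sketch. It assumes, reconstructing the elided original definition, that the sender's older Foo declared only baz and integrityCheckValue as extension additions, and the bitfield logic is a deliberate simplification of X.691's extension-presence bit-field:

```python
def extension_bitfield(spec_additions, value):
    """One bit per extension addition defined in the *type* (per X.691),
    not per addition actually present in the value."""
    return "".join("1" if name in value else "0" for name in spec_additions)

# Reconstructed assumption: the sender's (older) Foo declares only baz
# and integrityCheckValue; the receiver's adds importantExtension.
v1_spec = ["baz", "integrityCheckValue"]
v2_spec = ["baz", "integrityCheckValue", "importantExtension"]

value = {"bar": 1, "baz": 2, "integrityCheckValue": "icv-bits"}

print(extension_bitfield(v1_spec, value))  # '11'  - sender emits 2 bits
print(extension_bitfield(v2_spec, value))  # '110' - receiver re-encodes 3 bits
```

The same abstract value thus re-encodes to different octets at the receiver, so an ICV computed by re-encoding will not match.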
A lot of research has gone into how to most effectively create ICVs
by the designers of the Secure Electronic Transaction protocol (SET),
the e-check protocol and others. That's why in SET you see the likes of:
UnsignedCertificate ::= SEQUENCE {
    version              [0] CertificateVersion,
    serialNumber         CertificateSerialNumber,
    signature            AlgorithmIdentifier {{SignatureAlgorithms}},
    issuer               Name,
    validity             Validity,
    subject              Name,
    subjectPublicKeyInfo SubjectPublicKeyInfo {{SupportedAlgorithms}},
    issuerUniqueID       [1] IMPLICIT UniqueIdentifier OPTIONAL,
    subjectUniqueID      [2] IMPLICIT UniqueIdentifier OPTIONAL,
    extensions           [3] Extensions  -- Required for SET usage
}
-- Compute the encrypted hash of this value if issuing a certificate,
-- or recompute the issuer's signature on this value if validating a
-- certificate.
--
EncodedCertificate ::= TYPE-IDENTIFIER.&Type (UnsignedCertificate)

Certificate ::= SIGNED {
    EncodedCertificate
} ( CONSTRAINED BY { -- Verify Or Sign Certificate -- } )

SIGNED { ToBeSigned } ::= SEQUENCE {
    toBeSigned ToBeSigned,
    algorithm  AlgorithmIdentifier {{SignatureAlgorithms}},
    signature  BIT STRING
}
The key thing to notice is that they DO NOT attempt to sign the
unencoded certificate (i.e., UnsignedCertificate). The actual signing
occurs on an open type which has been constrained to carry the value of
the particular type that is to be signed (TYPE-IDENTIFIER.&Type
(UnsignedCertificate)). In so doing, a) they clearly identify that the
type should be encoded independently of the ICV, b) they avoid having
to re-encode the data after the ICV has been calculated, c) the type to
which the ICV is to be applied can be extended with no problem, and d)
they prepared themselves for the possible use of PER by utilizing an
open type to carry the value to be signed - something that is *crucial*
to signing PER-encoded data.
Problems with Non-OPTIONAL Extensions
Another problem with some ASN.1 compilers is the inclusion of
non-OPTIONAL extensions. Let us assume that software C uses the
following ASN.1 definition for Foo:
Foo ::= SEQUENCE
{
    bar INTEGER (0..127),
    ...,
    baz INTEGER (0..255) OPTIONAL,
    integrityCheckValue ICV OPTIONAL,
    importantExtension SomeType OPTIONAL,
    criticalExtension OtherType
}
There are two kinds of problems with an extension like
criticalExtension. First, the encoder may try to ensure that all
encoded PDUs conform to the specification and signal an error when a
PDU without criticalExtension is encoded.
That's the correct behavior only if you are originating a message.
Encoders should not complain about criticalExtension being absent
if they are not originating the message.
Another problem is that the intermediate representation produced by the
ASN.1 compiler may not provide a means for the application to express
that criticalExtension is not present. (In other words, the produced
structure usually contains a flag telling whether an optional field is
present or not. Such flags are not included when the field is not
optional.)
It is an error for encoders not to encode extension additions that are
mandatory but missing if such extension additions do not occur within
an extension addition group and if no other extension addition values
follow the missing but mandatory extension. (I don't believe H.225
employs extension addition groups (i.e., the "[[" and "]]" notation).)
This is indicated in X.680:1997 clause 7.1 (X.680:1994 Amd.1 clause
6.1), which unconditionally mandates that it be possible for decoded
components that are defined to both the sender and receiver to be
re-encodable by the receiver.
The presence or absence of the OPTIONAL flag in an extension does not
change the PER encoding of the SEQUENCE. In order to avoid the
previously mentioned problems, an application may use a version of the
ASN.1 notation that has an extra OPTIONAL keyword after each extension.
Neat! A nice and simple solution for use with such compilers.
Solution 1: Clarification to the PER Encoding Process
The text in the PER document (X.691, 1994) is somewhat ambiguous about
how many bits should be included in the extension present bitfield of a
SEQUENCE. To quote verbatim: "Let the number of extension additions in
the type being encoded be "n", then a bit-field with "n" bits shall be
produced for addition to the field-list." (Is the "type being encoded"
the abstract syntax or an actual value like { bar 1, baz 2 }?) However,
the 0 bits at the end of the extension present bitfield can be left out
without changing the resulting semantics: the corresponding extensions
are not present.

Note that it says the *type* being encoded, not the value; the two are
distinct.
It is true that they can be left out without changing the semantics,
but this is not what PER mandates. (It should have mandated what you
suggest, but hindsight is 20-20.) Unless everyone encodes according to
PER, its canonical nature will be lost.
As a result, the PER encoding does not change after a new extension
is added to the ASN.1 specification.
This solution, while leaving the H.225.0 v2 protocol as it is,
nevertheless requires changes to some existing ASN.1 compilers and, in
the worst case, to the X.691 standard text, too.
Yes, this would work, but as you point out, it requires that PER be
changed.
Solution 2: Hack
The receiving application does not decode and then re-encode the PDU,
but rather removes the ICV from the encoded PDU. In practice, this
requires that the application can identify PER-encoded fields within
the PDU and can regenerate them, i.e., it has effectively the same
functionality as a PER encoder/decoder.
Solution 3: Calculating ICV Differently
The following algorithm for generating and checking the ICV makes it
possible to avoid all the previously mentioned problems. The problems
are avoided by breaking the protocol layering: the application changes
the PER-encoded PDU directly:
integrityCheckValue - provides improved message integrity/message
authentication of the RAS messages. The cryptographically based
integrity check value is computed by the sender applying a negotiated
integrity algorithm and the secret key upon the entire message. Prior
to the integrityCheckValue computation, an ICV with a previously agreed
magic value (or key when using MDC) is inserted into this field. The
magic value contains the same algorithmOID and exactly as many bits in
the icv BIT STRING as the computed value. After computation, the sender
replaces the magic value with the computed integrity check value and
transmits the message. The receiver decodes the message, replaces the
received integrity value with the magic value, calculates the ICV, and
compares it with the received value.
NOTE: The sender or receiver can encode the ICV separately and replace
it directly within the encoded PDU when the magic ICV and the computed
ICV have exactly the same length. When replacing the ICV value within
an encoded PDU, re-encoding the whole PDU can be avoided.
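As a rough sketch of the magic-value procedure above (Python; the key, the magic constant, HMAC-SHA-1 as the integrity algorithm, and the assumption that the icv BIT STRING occupies whole octets at a known offset are all invented for the example):

```python
import hashlib
import hmac

MAGIC_ICV = b"\xaa" * 20       # previously agreed magic value,
                               # same length as an HMAC-SHA-1 ICV
SECRET_KEY = b"shared-secret"  # hypothetical negotiated key

def sign_pdu(encoded_pdu: bytes, icv_offset: int) -> bytes:
    """Insert the magic value at the ICV's position, compute the ICV over
    the whole encoded PDU, then splice the real ICV back in.  Assumes the
    icv BIT STRING occupies len(MAGIC_ICV) whole octets at icv_offset."""
    with_magic = (encoded_pdu[:icv_offset] + MAGIC_ICV
                  + encoded_pdu[icv_offset + len(MAGIC_ICV):])
    icv = hmac.new(SECRET_KEY, with_magic, hashlib.sha1).digest()
    return with_magic[:icv_offset] + icv + with_magic[icv_offset + len(icv):]

def verify_pdu(encoded_pdu: bytes, icv_offset: int) -> bool:
    """Swap the received ICV for the magic value, recompute, and compare."""
    received = encoded_pdu[icv_offset:icv_offset + len(MAGIC_ICV)]
    with_magic = (encoded_pdu[:icv_offset] + MAGIC_ICV
                  + encoded_pdu[icv_offset + len(MAGIC_ICV):])
    expected = hmac.new(SECRET_KEY, with_magic, hashlib.sha1).digest()
    return hmac.compare_digest(received, expected)
```

Because the magic value and the computed ICV are the same length, the splice never changes any length determinants, so no re-encoding is needed.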
I don't like this too much because of the possibility, though low, of
the data containing the magic ICV value. Still, I prefer it to
solutions 1 and 2.
Solution 4:
A fourth alternative is to treat the encoded RasMessage as an octet
string. The ICV can be calculated over that octet string and appended
to the PDU at a separate layer. E.g., the RAS PDU could be defined as
follows:
RasMessage ::= CHOICE {
    -- all previous RasMessage CHOICEs are here
    authenticatedRasMessage SEQUENCE {
        plainRasMessage OCTET STRING,
        icv ICV,
        ...
    }
}
This is effectively what is done in SET and other security-conscious
protocols, though they typically use an open type instead of an octet
string.
Alternate Solution 1:
To do exactly as is done in SET, you would define RasMessage without an
ICV, and define a container, AuthenticatedRasMessage, say, that carries
the encoded RasMessage as an open type value (i.e., octet-aligned, and
padded with 0-bits at the end to ensure that it is an integral multiple
of 8 bits) followed by the ICV value. In other words:
AuthenticatedRasMessage ::= SEQUENCE {
    plainRasMessage TYPE-IDENTIFIER.&Type (RasMessage),
    icv ICV
}
Using this approach, the RasMessage (which would be defined without an
ICV) would be encoded, then the ICV would be calculated using the
encoded RasMessage, then the AuthenticatedRasMessage would be encoded.
This approach is efficient because the RasMessage does not have to be
encoded without the ICV value and then re-encoded with it, nor does it
require that the encoded value be tinkered with in any way.
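The encode-then-sign flow might be sketched as follows (a minimal Python sketch: the single-byte length prefix merely stands in for PER's length determinant on the open type, and HMAC-SHA-1 stands in for the negotiated integrity algorithm; the function names are invented for the example):

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical negotiated key

def length_prefixed(b: bytes) -> bytes:
    """Stand-in for PER's length determinant on an open type
    (real PER uses a variable-length determinant; one byte here)."""
    return bytes([len(b)]) + b

def make_authenticated_ras(ras_message_per: bytes) -> bytes:
    """Encode-then-sign: the already PER-encoded RasMessage (an
    octet-aligned open type value) is hashed as-is, then wrapped
    together with the ICV.  RasMessage is never re-encoded."""
    icv = hmac.new(SECRET_KEY, ras_message_per, hashlib.sha1).digest()
    return length_prefixed(ras_message_per) + length_prefixed(icv)

def check_authenticated_ras(wrapped: bytes):
    """Unwrap, then verify the ICV over the untouched inner encoding."""
    n = wrapped[0]
    ras_per = wrapped[1:1 + n]
    icv = wrapped[2 + n:2 + n + wrapped[1 + n]]
    expected = hmac.new(SECRET_KEY, ras_per, hashlib.sha1).digest()
    return ras_per, hmac.compare_digest(icv, expected)
```

Note that the receiver verifies against the exact octets it received, so extensions added to RasMessage in later versions cause no ICV mismatch.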
This technique is good only if backward compatibility is not an issue
(i.e., if you don't already have a deployed version of H.225), for the
open type carries a length component that an ordinary encoded
RasMessage does not have.
Alternate Solution 2:
If backward compatibility is an issue then Solution 4 above is a better
alternative, though I would change it to:
RasMessage ::= CHOICE {
    -- all previous RasMessage CHOICEs are here
    authenticatedRasMessage AuthenticatedRasMessage
}
An open type and an octet string produce identical encodings, but I
have a preference for the open type notation because it allows you to
clearly identify what the value is (in this case an encoded
RasMessage), and because it can simplify implementations that may not
have a need to authenticate the RasMessage upon decoding (e.g., line
monitors).
I am not intimate with H.225, so I don't know for sure whether Solution
4 or this minor variation is backward compatible. It is if the ICV was
just introduced; otherwise, I don't think it is.
--------------------------------------------------------------------------
Bancroft Scott                                   Toll Free: 1-888-OSS-ASN1
Open Systems Solutions, Inc.                 International: 1-609-987-9073
baos@oss.com                                  Tech Support: 1-732-249-5107
http://www.oss.com                                     Fax: 1-732-249-4636