Thursday, October 3, 2019

In the beginning there was a schema.


Recently we developed a simple in-house JSON-based protocol for fetching user details from a directory, where apps would submit a ticket (a string identifying a user) to get user info. For example:
app request:
{ "ticket" : "T9yIC4c2mzR" }

server response:
{
   "authorized" : true,
   "name" : "John Johnson",
   "email" : "jj@....",
   "licenses" : [....]
}

To communicate in JSON we utilized our own DCodec library, which is capable not only of JSON serialization, but also of JSON validation against an ASN.1 schema when one is provided. So we defined this ASN.1 schema:
UserInfo ::= SEQUENCE 
{
  authorized  BOOLEAN DEFAULT FALSE, 
  name        UTF8String (SIZE (3..161)),
  email       UTF8String (SIZE (5..100)),
  licenses    SEQUENCE (SIZE (0..MAX)) OF License
}
...

Since the protocol was simple, the schema was not mandatory (some nodes couldn’t use it even if they wanted to), but it was convenient as informal documentation for developers (defining the message structure, field optionality, value ranges, defaults, etc.), as well as for message validation. Yet there were a few more benefits of having a schema, which we realized only after we implemented it.
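The in-house DCodec library handles this validation in practice; purely as an illustration, here is a minimal hand-rolled Python sketch that mirrors the ASN.1 constraints above (the function name and error strings are hypothetical):

```python
# Minimal hand-rolled validator mirroring the UserInfo ASN.1 constraints.
# A sketch only; the real system validates with the in-house DCodec library.

ALLOWED_FIELDS = {"authorized", "name", "email", "licenses"}

def validate_user_info(msg):
    """Return (ok, reason) for a decoded UserInfo message."""
    extra = set(msg) - ALLOWED_FIELDS
    if extra:
        return False, "unexpected field(s): %s" % ", ".join(sorted(extra))
    if not isinstance(msg.get("authorized", False), bool):
        return False, "authorized must be BOOLEAN"
    name = msg.get("name")
    if not isinstance(name, str) or not 3 <= len(name) <= 161:
        return False, "name violates SIZE (3..161)"
    email = msg.get("email")
    if not isinstance(email, str) or not 5 <= len(email) <= 100:
        return False, "email violates SIZE (5..100)"
    if not isinstance(msg.get("licenses"), list):
        return False, "licenses must be SEQUENCE OF"
    return True, "ok"
```

A message with an unexpected field or an out-of-range string is rejected at the boundary instead of silently flowing into the app logic.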

The logic for handling the UserInfo.authorized field was very simple: the user is authorized only when the field is set to TRUE; any other condition, including something being wrong with the message (e.g. it wasn't received), results in a non-authorized user. Such logic proved correct during testing, so the code went into production.
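This fail-closed rule can be sketched in a couple of lines of Python (a hypothetical helper, not the actual app code):

```python
def is_authorized(response):
    # Fail closed: only an explicit boolean true authorizes the user.
    # A missing, malformed, or absent response yields "not authorized".
    return isinstance(response, dict) and response.get("authorized") is True
```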

While in production, on rare occasions, we noticed that some responses included an extra field (the directory server's way of reporting an internal problem):
{
   "error" : 5,
   "authorized" : false, 
   "name" : "", 
   "email" : "", 
   "licenses" : []
}  

We’d never have caught the problem of a message carrying an extra field if not for the “invalid message” events logged due to the schema mismatch: from the app's point of view nothing was wrong, the user simply appeared to be “non-authorized”, so there was no visible interruption.

Hence we updated the schema to allow a new “error” field, but we immediately caught another schema mismatch. This time it was a SIZE violation for “name” and “email”, which cannot be empty. We realized we were reusing a single message for two different cases, success and failure of fetching user info, so we changed the schema once more to add a separate message type for errors:
Response ::= CHOICE 
{
  user  UserInfo, -- successful response
  error Error     -- unsuccessful response
}

UserInfo ::= SEQUENCE 
{
  authorized  BOOLEAN DEFAULT FALSE, 
  name        UTF8String (SIZE (3..161)),
  email       UTF8String (SIZE (5..100)),
  licenses    SEQUENCE (SIZE (0..MAX)) OF License
}

Error ::= SEQUENCE
{
  code        INTEGER (0..255), 
  description UTF8String (SIZE (0..256))
}
Now the response could be either user info, say:
{"user":{"authorized":true, "name":"John Johnson", .... }}
or an “error”, like:
{"error":{"code":5, "description":"...."}}
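Dispatching on the CHOICE alternative is then a single branch. A Python sketch, with hypothetical handler logic:

```python
def handle_response(response):
    """Dispatch on whichever CHOICE alternative is present.

    A valid Response encoding carries exactly one alternative:
    "user" for success or "error" for failure.
    """
    if "user" in response:
        return ("user", response["user"].get("authorized") is True)
    if "error" in response:
        return ("error", response["error"]["code"])
    raise ValueError("not a valid Response CHOICE")
```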

Without the schema we could have just patched the app code (“it’s not my problem”) by ignoring the error, allowing empty fields, or custom-handling every issue somewhere and somehow. Doing so would have masked the problem and/or spread the validation logic all over the application stack.

Another not-so-obvious benefit of using a schema is that the data definition can be stored together with the app sources, so the data becomes a versioned part of Infrastructure as Code, Continuous Integration, and Continuous Deployment, following DevOps best practices.


WITH SCHEMA
  • Clear data-validation boundary at the point where data enters the system (layer, node).
  • Catch the problem early.
  • Data and code are in sync, defined and implemented together.
  • Precise logic for data definition/validation.
WITHOUT SCHEMA
  • Debug the entire app stack to pinpoint the data at its consumption point.
  • Vendor-specific validation logic.
  • Certain conditions might be left unchecked/undetected (e.g. outside the app logic).

Tuesday, August 20, 2019

New Use of ASN.1 - Nuclear Instrumentation


Among various new uses of ASN.1, nuclear instrumentation has taken ASN.1 to new heights in the critical world of sensors. The IEC 63047 International Standard, “Nuclear instrumentation – Data format for list mode digital data acquisition used in radiation detection and measurement”, uses ASN.1 to encode digital data acquired from various sensors, including radiation, environmental-condition, and geolocation sensors. The standard specifies the ASN.1 encoding format of the binary data representing measurements of the signals those sensors generate.

For example, the radiation level detected by a sensor (called an event) is passed to an Amplifier/Anti-aliasing filter and an Analog to Digital converter (ADC). The digital signal is then processed by a DSP/FPGA to extract the pulse characteristics, e.g. shape, energy, timestamp, etc. This information is then encoded in ASN.1 COER (Canonical Octet Encoding Rules) and stored in a data file, or streamed to remote computers (receivers). It’s also possible to combine the data acquired from different sensors into a single data file or stream.


The IEC 63047 standard group chose ASN.1 and COER to specify data structures and encoding format because ASN.1 is an international standard that is vendor-, platform-, and language-independent. ASN.1 supports an extensibility feature which ensures backward and forward compatibility between different editions of the standard: applications implemented using a revised edition of the standard can exchange encodings with applications implemented using previous editions, and vice versa. The use of Canonical OER ensures that there is exactly one binary representation for each possible data value, which allows digital signatures and encryption to protect the encodings when written to a file or a stream. These characteristics, often taken for granted (international standardization; vendor, platform, and language independence; extensibility ensuring backward and forward compatibility; canonical encodings), made ASN.1 the ideal choice for such critical solutions.

All the encodings stored to an IEC 63047 data file or sent to the receivers are of the Listmodedata ASN.1 type, which is a CHOICE of a “Header”, an “EventList”, or a “Footer” type. The first encoding should be of the “Header” type, followed by a set of encodings of the “EventList” type, and the last encoding should be of the “Footer” type (see the diagram below). The EventList type is defined as a SEQUENCE containing a SEQUENCE OF “Event”, where an “Event” is a CHOICE among different event types.
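As a rough sketch of that framing rule, assuming a decoder that reports which CHOICE alternative each record carries (the kind strings below are hypothetical, not part of the standard):

```python
def check_listmodedata_order(kinds):
    """Verify the Header / EventList* / Footer ordering of a decoded
    sequence of Listmodedata records (kind strings are illustrative)."""
    return (len(kinds) >= 2
            and kinds[0] == "header"
            and kinds[-1] == "footer"
            and all(k == "eventList" for k in kinds[1:-1]))
```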

The ASN.1 specification uses IEEE 754 single- and double-precision REAL types, which yield very efficient encodings. The OER encodings of these REAL types are identical to the in-memory IEEE 754 representations of the values, so values can be copied directly between the encodings and the in-memory representations, making encoding and decoding extremely efficient.
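This direct-copy property can be illustrated with Python's struct module (the value is made up; on little-endian hosts the copy additionally involves a byte swap, since OER stores the bytes big-endian):

```python
import struct

# An illustrative double-precision measurement, e.g. a pulse energy in keV.
value = 662.5

# OER encodes an IEEE 754 double as its raw 8-byte big-endian bit pattern,
# so encoding is a straight copy of the value's binary representation.
encoded = struct.pack(">d", value)
assert len(encoded) == 8

# Decoding is the inverse copy; no conversion and no loss of precision.
decoded = struct.unpack(">d", encoded)[0]
assert decoded == value
```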


The OSS ASN.1 Tools, available for the C, C++, Java, and C# programming languages, can be used to generate the COER encodings of the “Header”, “EventList”, and “Footer” types. The ASN.1 compiler maps the Listmodedata ASN.1 type to a structure/class of the programming language in use. An encoder application can fill an instance of the structure/class and call the high-performance COER encoder to create the IEC 63047 encoding. On the receiver side, the encoding can be decoded, using the COER decoder, into an instance of the structure/class. Encoder/decoder applications built using the OSS ASN.1 Tools need not be aware of the intricacies of ASN.1 or COER.

Please visit the OSS website to learn more about the OSS ASN.1 Tools and to access their documentation.

Sunday, May 19, 2019

ASN.1 and Interledger


Interledger is an open protocol suite for the exchange of payments across different ledgers (banks, blockchains, cryptocurrencies, etc.). It is a standard way of bridging financial systems. Like the Internet Protocol (IP), it routes “packets of money” across independent payment networks. The Interledger architecture consists of three types of nodes: a sender, a receiver, and a connector. As their names suggest, the sender initiates a payment request for the receiver, and the request is routed through one or more connectors.

The Interledger protocol suite is divided into layers of protocols, each with different responsibilities. The Ledger protocols represent the existing money systems that Interledger connects to. The Interledger Protocol (ILP) is the core protocol of the entire suite; its packets pass through all participants in the chain, from the sender, through the connectors, to the receiver. ILP is compatible with any type of currency and underlying ledger system. The Transport layer protocols are used for end-to-end communication between senders and receivers. The protocols at the Application layer communicate details beyond the minimum information needed to complete a payment.


The protocols used in the Transport and Interledger layers are specified in ASN.1, a standardized syntax notation to define message structures and encodings in a platform-independent way. Interledger messages are encoded according to the ASN.1 Octet Encoding Rules (OER). Using ASN.1 to define and encode protocol messages allows Interledger applications to interoperate irrespective of their choice of platform and programming language. OER encodings are simple, compact, and very easy to parse. ASN.1 tools - commercially licensed as well as open source - are available to assist with implementation.

Interledger uses advanced features of ASN.1 to make future upgrades of the protocols extremely easy to incorporate and fully backward compatible. For example, if Interledger adds another message type to BilateralTransferProtocolPacket (see the ASN.1 excerpt below), they will only need to define the message and add it to CallSet, without worrying about backward compatibility.
CALL ::= CLASS {
    &typeId UInt8 UNIQUE,
    &Type
} WITH SYNTAX {&typeId &Type}
CallSet CALL ::= {
    {1 Response} |
    {2 Error} |
    {3 Prepare} |
    {4 Fulfill} |
    {5 Reject} |
    {6 Message} |
    {7 Transfer} |
    {8 NewMessageType}
}
BilateralTransferProtocolPacket ::= SEQUENCE {
    -- One byte type ID
    type CALL.&typeId ({CallSet}),
    -- Used to associate requests and corresponding responses
    requestId UInt32,
    -- Length-prefixed main data
    data CALL.&Type ({CallSet}{@type})
}
NewMessageType ::= SEQUENCE {
    -- Add message fields here
}

The OSS ASN.1 Tools can be used to implement the Transport and Interledger layer protocols. The Interledger ASN.1 specification is passed to the ASN.1 compiler to generate programming-language-specific structures or classes. The compiler-generated code, along with the high-performance OSS ASN.1 runtime libraries, is used to create applications for the sender, the receiver, and the connectors. ASN-1Step, a GUI-based product, can be used to view, create, and modify Interledger messages.

The OSS ASN.1 Tools include a sample Interledger program which demonstrates how an implementer of the Interledger Bilateral Transfer Protocol (BTP) can use the Tools to serialize and parse BTP messages. The sample simulates the communication between two BTP peers and implements two scenarios: a) a payment request is successfully processed, and b) a payment request is rejected because it has expired.

Please visit the OSS website to find more information about the OSS ASN.1 products, to download a trial, and/or to access the online documentation. If you have any questions about using the OSS ASN.1 products in Interledger applications, please contact info@oss.com.