
Deribit extends multicast market data to AWS

Hugo van Duijn Cloud Consultant
Publish date: 20 March 2023

Deribit called upon the cloud networking expertise of CloudNation to help extend its multicast market data to AWS. In this blog, CloudNation consultants Hugo van Duijn and Joost Wolfsen share how they deployed this solution using the newest Amazon technology.

 

The partnership of Deribit & CloudNation

After initially extending its trading services to AWS over a private low-latency connection, the crypto platform has now also made its market data available to customers in AWS through multicast feeds. Until recently, Deribit customers that wanted to receive the Deribit multicast service needed to be co-located or cross-connected in Equinix LD4. Thanks to this project, the service is now also available to customers in the AWS London and Tokyo regions.

Due to recent developments in multicast support within the AWS platform, extending the multicast service to AWS became a logical move. To do this, we used a model similar to the existing TCP-based trading service, which had been extended to AWS by Advanced AWS Consulting Partner CloudNation in 2021. Deribit once again called upon the cloud networking expertise of CloudNation to deploy this solution based on the newest Amazon technology.

 

The Multicast challenge

Because the multicast service is a great success for customers that are cross-connected or co-located, Deribit wanted to extend it to customers located in the AWS cloud, preferably in the form of a low-latency solution that is stable, requires little maintenance and is simple to manage. And above all: it should get the multicast feeds from the Deribit datacenter to the customers.


The solution should explicitly not use the PIM (Protocol Independent Multicast) routing protocol for connections to customers, because it causes a lot of management overhead. Instead, it should rely on IGMPv2 to handle multicast group memberships. Furthermore, it should be possible to control which AWS accounts are able to receive multicast traffic and which are not, preferably administered with Terraform, the infrastructure-as-code solution used for the existing AWS deployment.

Another requirement was added once the first tests produced a successful proof of concept in the eu-west-2 (London) AWS region: extending the service to the ap-northeast-1 (Tokyo) AWS region, where Deribit already has a presence delivering a different service to its customers.

 

The attempted solution with Juniper

One of the major limitations was that Direct Connect does not support multicast traffic. To work around this, AWS released a document on May 5th 2022 with several reference architectures for multicast networking with AWS Transit Gateway, which was the trigger to start this project. One of the reference architectures describes how multicast traffic can be exchanged with an on-premises data center by using GRE tunneling and PIM multicast routing.

[Diagram: AWS reference architecture for multicast to an on-premises data center using GRE tunneling and PIM routing]
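As an illustration of the GRE-based approach (not the Juniper or Deribit configuration used later in the project), a generic Linux GRE tunnel endpoint can be sketched as follows, with all addresses as placeholders:

#!/bin/bash
# Generic illustration of a GRE tunnel endpoint on Linux; all addresses
# are placeholders, not the actual Deribit or Juniper configuration.
LOCAL_IP=10.0.1.10      # tunnel endpoint inside the VPC
REMOTE_IP=192.0.2.10    # on-premises endpoint reachable over Direct Connect

# Create the GRE interface, give it a point-to-point address and bring it up
sudo ip tunnel add gre1 mode gre local "$LOCAL_IP" remote "$REMOTE_IP" ttl 64
sudo ip addr add 172.16.0.1/30 dev gre1
sudo ip link set gre1 multicast on
sudo ip link set gre1 up
# In the reference architecture, PIM multicast routing then runs across gre1
# to exchange multicast routes with the data center.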

The other relevant architecture describes how an AWS Transit Gateway can be used to send multicast traffic from a VPC to a VPC in another AWS account by using shared Transit Gateway attachments.

[Diagram: AWS reference architecture for sharing multicast traffic between VPCs in different AWS accounts via shared Transit Gateway attachments]

Based on these architectures, a start was made on getting multicast traffic from Deribit in Equinix LD4 into AWS. This was deemed the hardest step, and failure to achieve it could jeopardize the whole project. Since it was not yet certain that all requirements could be met, it was decided to build a proof of concept for transporting multicast traffic into AWS via a GRE tunnel.

To set up GRE tunneling between LD4 and AWS you need a third-party networking solution in AWS, which can be found on the AWS Marketplace. Well-known vendors such as Fortinet, Cisco and Juniper have Marketplace offerings, usually in the form of a virtual appliance that is deployed as an EC2 instance in your AWS account.

The first choice for this proof of concept was Juniper, which showed favorable pricing compared to other providers of GRE-capable Marketplace appliances. Furthermore, the Juniper documentation on GRE tunneling and multicast traffic was quite extensive, and it was also a platform the CloudNation engineers had experience with.

After deploying a Juniper appliance in the Deribit AWS VPC, it was quite simple to build a GRE tunnel to the Deribit hardware in LD4 over the existing Direct Connect between AWS and LD4. Once the tunnel was up, we unfortunately discovered that the Deribit hardware in LD4 did not have the functionality needed to route multicast traffic over the GRE tunnel, so this reference architecture was discarded.

That meant going back to the drawing board for the part of the architecture responsible for getting multicast traffic into AWS. The second reference architecture, for sharing multicast traffic with other accounts, was still usable though.

 

Docker containers and TCP encapsulated Multicast

Since getting multicast traffic into the VPC using Direct Connect and GRE was not an option, the multicast had to originate from within AWS. By having the Deribit and CloudNation engineers brainstorm together and really think out of the box, a new direction was found. Deribit had already created a container running an MCAST-RELAY application that consumes multicast encapsulated in TCP packets, sent from the datacenter and received through Direct Connect and the Transit Gateway, and outputs it as native multicast.
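To illustrate the concept only (the MCAST-RELAY application itself is Deribit's own and is not shown here), such a relay can be sketched with socat, using a placeholder group address and port and ignoring the message framing a real relay has to handle:

# Conceptual sketch: accept TCP connections carrying the feed and re-emit
# the payload as UDP multicast into the VPC. Group 239.255.0.1 and port 6666
# are placeholders; message framing is ignored.
socat TCP4-LISTEN:6666,reuseaddr,fork \
      UDP4-DATAGRAM:239.255.0.1:6666,ip-multicast-ttl=16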

We chose to run the container in Amazon Elastic Container Service (ECS), based on a design described in the AWS blog post Running multicast-enabled containers on AWS.

Some very important takeaways were:

  • Fargate does not support multicast, so the EC2 (Linux) launch type must be used;
  • The ECS containers need to use the awsvpc networking mode;
  • IGMPv3 is the default but is not supported, so the ECS EC2 instances had to be bootstrapped with the user data script below:
#!/bin/bash -xe
cat >/etc/sysctl.d/99-igmpv2.conf <<EOF
# Force kernel to use IGMP v2 rather than default to v3
net.ipv4.conf.all.force_igmp_version=2
EOF
sysctl -p /etc/sysctl.d/99-igmpv2.conf
echo ECS_CLUSTER=multicast >> /etc/ecs/ecs.config
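Whether the setting has taken effect can be checked on a running container instance, for example with:

# Verify the kernel is using IGMPv2 on the container instance
sysctl net.ipv4.conf.all.force_igmp_version   # should report "= 2"
cat /proc/net/igmp                            # the querier column should show V2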

With this in mind, the ECS environment was created and a Network Load Balancer was added with the cross-zone load balancing option enabled. This meant traffic could be sent to the load balancer DNS name and received by the container. If a complete Availability Zone were to become unavailable, downtime would be limited to the time it takes for the change to the load balancer DNS record to propagate. On the container side, ECS takes care of redeploying a crashed container using its task definition, or a crashed instance using an auto scaling group.
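For illustration, the load balancer part can be sketched with the AWS CLI (the actual deployment is done with Terraform; the internal scheme, names and subnet IDs below are placeholders or assumptions):

# Create a Network Load Balancer and enable cross-zone load balancing
NLB_ARN=$(aws elbv2 create-load-balancer \
  --name multicast-relay-nlb \
  --type network \
  --scheme internal \
  --subnets subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn "$NLB_ARN" \
  --attributes Key=load_balancing.cross_zone.enabled,Value=true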

During the testing phase everything looked good at first: the load balancer target groups showed healthy targets, but they became unhealthy after 30 minutes. Tweaking the health check settings to their maximum delays extended this period to four hours, which was still not acceptable. We suspected the TCP health checks created a SYN flood within the container, causing the checks to fail after a number of tries. The Deribit dev team helped out by adding a proper HTTP health check endpoint to the container, which solved the issue.
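Sketched with the AWS CLI, such a target group could look as follows; the health check port and path (8080, /health) are assumptions, not Deribit's actual values:

# TCP target group for the feed with an HTTP health check
# (health check port/path are assumed values; target-type ip matches awsvpc tasks)
aws elbv2 create-target-group \
  --name multicast-relay-prod \
  --protocol TCP --port 6666 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip \
  --health-check-protocol HTTP \
  --health-check-port 8080 \
  --health-check-path /health \
  --health-check-interval-seconds 30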

Some CloudWatch alarms for ECS were added to make sure notifications are sent when problems arise. Since there should essentially always be traffic, we chose to monitor both incoming and outgoing traffic to and from the containers: if traffic is coming out of the container, there is a high chance everything is working as intended. A basic alarm on the ECS service itself is also in place.
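As an example of such an alarm, the sketch below alerts when no outgoing traffic is seen; it assumes ECS Container Insights is enabled and uses its NetworkTxBytes metric, and all names, periods and the SNS topic are placeholders:

# Alarm when the relay service stops sending traffic (assumes Container Insights)
aws cloudwatch put-metric-alarm \
  --alarm-name multicast-relay-no-egress \
  --namespace ECS/ContainerInsights \
  --metric-name NetworkTxBytes \
  --dimensions Name=ClusterName,Value=multicast Name=ServiceName,Value=mcast-relay \
  --statistic Sum --period 300 \
  --comparison-operator LessThanOrEqualToThreshold --threshold 0 \
  --evaluation-periods 3 \
  --treat-missing-data breaching \
  --alarm-actions arn:aws:sns:eu-west-2:111122223333:multicast-alerts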



Sharing with customers

With multicast traffic available in the AWS VPC, originating from the containers, it was time to build the second part of the architecture: sharing it with other AWS accounts, in this case the consumers of the multicast feed. The concept was based on the AWS reference architecture in which a shared Transit Gateway attachment is used.

To facilitate sharing the multicast feeds with other AWS accounts, a second Transit Gateway was deployed with multicast support enabled. This Transit Gateway was attached to the VPC in which the containers are running and can thus route the traffic to other Transit Gateway attachments. The idea is that a second attachment of this Transit Gateway can be shared with customers via AWS Resource Access Manager (RAM) and then attached to the customer VPC. In the end, both the multicast-enabled Transit Gateway and the Transit Gateway attachment for customers are shared with the customers in a single AWS RAM resource share.
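Sketched with the AWS CLI (the real deployment is done with Terraform; IDs and account numbers are placeholders and the multicast domain options are assumptions based on the IGMPv2 requirement), the building blocks look like this:

# Multicast-enabled Transit Gateway with an IGMPv2 multicast domain
TGW_ID=$(aws ec2 create-transit-gateway \
  --options MulticastSupport=enable \
  --query 'TransitGateway.TransitGatewayId' --output text)

aws ec2 create-transit-gateway-multicast-domain \
  --transit-gateway-id "$TGW_ID" \
  --options Igmpv2Support=enable,AutoAcceptSharedAssociations=enable

# Share the Transit Gateway with a customer account via AWS RAM; in the actual
# deployment the customer-facing attachment is part of the same resource share
aws ram create-resource-share \
  --name deribit-multicast-share \
  --resource-arns "arn:aws:ec2:eu-west-2:111122223333:transit-gateway/$TGW_ID" \
  --principals 999988887777 \
  --allow-external-principals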

Solution overview

With both the ingress of multicast traffic into AWS and the egress to a customer VPC covered, let's take a look at the solution as a whole.

The multicast traffic originates in the Deribit datacenter in London, is encapsulated in TCP packets, and is then forwarded to the VPC hosting the ECS containers via a Transit Gateway attached to the Direct Connect gateway. The containers are reached via a Network Load Balancer listening on TCP ports 6666 and 6667, one port for the development feed and one for the production feed. The containers, hosted in an ECS cluster on EC2 instances, decapsulate the multicast traffic and send it out into the VPC. The multicast-enabled Transit Gateway in turn picks up the traffic and routes it to the other Transit Gateway attachments. The customer can then join the multicast group via IGMP from an EC2 instance in their VPC and process the multicast feeds.
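On the customer side, a quick way to verify that a feed is arriving is to join the group from an EC2 instance, for example with socat; the group address, port and interface name below are placeholders:

# Join the multicast group and dump the first datagram received; joining
# triggers the IGMPv2 membership report that the Transit Gateway acts on
socat -u UDP4-RECVFROM:6666,ip-add-membership=239.255.0.1:eth0 - | hexdump -C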

For the deployment in the AWS Tokyo region the same design pattern is followed, the only difference being an extra step in the connectivity between the Deribit datacenter in London and the containers in AWS. Here a Transit Gateway peering is used between a Transit Gateway attached to the multicast VPC in Tokyo and the Transit Gateway attached to the Direct Connect in London. Transit Gateway peering provides low-latency connections between the two geographic regions via the AWS global backbone.
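Sketched with the AWS CLI (IDs and the account number are placeholders), setting up such a peering means creating the attachment from one region and accepting it in the other:

# Request the peering from the Tokyo Transit Gateway to the London one
aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-0aaaa1111bbbb2222c \
  --peer-transit-gateway-id tgw-0dddd3333eeee4444f \
  --peer-account-id 111122223333 \
  --peer-region eu-west-2 \
  --region ap-northeast-1

# Accept the attachment on the London side
aws ec2 accept-transit-gateway-peering-attachment \
  --transit-gateway-peering-attachment-id tgw-attach-0123456789abcdef0 \
  --region eu-west-2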

[Diagram: overview of the complete Deribit multicast solution in AWS]

Other relevant AWS services used are:

  • Elastic Container Registry (ECR): used as the repository for the Docker container images;
  • EC2 Auto Scaling: used to ensure automatic provisioning of container hosting capacity, mostly relevant in case of an Availability Zone failure;
  • CloudWatch: used to log output from the Docker containers.

Besides using AWS services, the project is managed with Terraform as the infrastructure-as-code tool. Terraform code stored in a GitHub repository is executed through Terraform Cloud against the Deribit AWS accounts, ensuring that the solution is deployed in a way that minimizes the chance of human error, eliminates configuration drift, and is consistent and repeatable across AWS regions.

 

Conclusion

To conclude, the engineers working on this project are all very enthusiastic about the solution that was built and about the project as a whole. Multicast traffic in AWS is not something you see every day, which made it a nice challenge and very different from the day-to-day projects. The collaboration between Deribit and CloudNation has always been pleasant and was definitely a factor in the success of this project, especially when the GRE tunnel approach did not work out.

Working with relatively new AWS features is always interesting because reference materials are not abundant. In this case the reference materials for getting multicast traffic from on-premises into the cloud did not suit our use case, but with some creativity, and by building on the customer's experience with multicast traffic on-premises, we were able to build a solid solution in the cloud. The AWS multicast features used in this project made a good impression and are a definite game changer for anyone looking to extend their multicast services to the cloud!

 

Written by Hugo van Duijn and Joost Wolfsen

Do you want to know more about what CloudNation can do for you?

Contact our Cloud Networking experts to discover the possibilities.

Let's talk