Alternative Implementations

The gdcc depends on several concepts that can be configured to use different underlying implementations.

This page contains some performance measurements when using different implementations.

The comparisons were done on a local developer laptop, so they can't be directly compared with the measurements against the development cluster.

We also mention some changes we could make to improve the performance of these implementations.

Cache

We use Microsoft's IDistributedCache interface to abstract the caching implementation.

The IDistributedCache can be configured to use different data stores underneath. We currently use the SQL Server implementation, but we have tested the Redis and InMemory implementations as well. The InMemory implementation is not suitable for production, but it is useful for evaluating the overhead of our own code.
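As a sketch of how the backing store is swapped, assuming standard ASP.NET Core dependency injection and the Microsoft.Extensions.Caching.* provider packages (the connection strings and table name below are placeholders, not our actual configuration):

```csharp
// In ConfigureServices / Program.cs. Exactly one of these
// registrations is active at a time; all of them satisfy IDistributedCache.

// InMemory: no external dependency, not suitable for production.
services.AddDistributedMemoryCache();

// Redis: requires the Microsoft.Extensions.Caching.StackExchangeRedis package.
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
});

// SQL Server: requires the Microsoft.Extensions.Caching.SqlServer package.
services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = "Server=localhost;Database=CacheDb;Trusted_Connection=True"; // placeholder
    options.SchemaName = "dbo";
    options.TableName = "CacheEntries"; // hypothetical table name
});
```

Because consumers only depend on IDistributedCache, switching stores is a one-line change in the service registration.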

Below are some numbers comparing the performance of the three implementations. For these measurements, both Redis and SQL Server run in local Docker containers next to the test code.

Data Store   1x10 lookup (ms)   1x100 lookup (ms)   1x1000 lookup (ms)   1x5000 lookup (ms)
InMemory     0.04               0.15                3-6                  11
Redis        3                  16                  162                  831
SQL Server   33                 47                  413                  2033
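The lookups can be timed with a simple loop against the IDistributedCache interface. This is a sketch of the measurement approach, not the actual test code (key names and payload are made up):

```csharp
using System.Diagnostics;
using Microsoft.Extensions.Caching.Distributed;

static async Task<double> MeasureLookupsAsync(IDistributedCache cache, int count)
{
    // Seed the cache so every lookup is a hit.
    for (int i = 0; i < count; i++)
        await cache.SetAsync($"bench:{i}", new byte[] { 1, 2, 3 });

    // Time `count` sequential lookups.
    var sw = Stopwatch.StartNew();
    for (int i = 0; i < count; i++)
        await cache.GetAsync($"bench:{i}");
    sw.Stop();

    return sw.Elapsed.TotalMilliseconds;
}
```

Because the same loop runs against all three registrations, the differences in the table reflect the data store rather than the test harness.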

Some notes:

  • Anecdotal evidence shows that Redis writes are slower than SQL Server writes.
  • The IDistributedCache doesn't allow for bundling multiple updates into a single transaction. We could implement this ourselves to drastically improve performance, at the cost of increased complexity and potential for bugs.
  • Given that the REST overhead alone is 80 ms (see Performance Details), we've decided to favour maintainability over performance so far.
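IDistributedCache exposes only single-key operations, so bundling updates would mean bypassing the abstraction and talking to the underlying store directly. A hedged sketch of what that could look like for the Redis backend, using the StackExchange.Redis client (the helper name and entries are illustrative):

```csharp
using StackExchange.Redis;

// IDistributedCache has no multi-key write, so this drops down to the
// underlying StackExchange.Redis client and pipelines all writes together.
static async Task BatchedWriteAsync(IDatabase db, IReadOnlyDictionary<string, byte[]> entries)
{
    var batch = db.CreateBatch();
    var pending = new List<Task>(entries.Count);

    foreach (var (key, value) in entries)
        pending.Add(batch.StringSetAsync(key, value)); // queued, not yet sent

    batch.Execute();             // flush all queued commands in one roundtrip
    await Task.WhenAll(pending); // await the individual results
}
```

For SQL Server the equivalent would be wrapping multiple upserts in a single transaction. Either way, the coupling to a specific store is exactly the added complexity mentioned above.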

Distributed Task Queue

We use Azure Service Bus and Azure Storage for our distributed task queue implementation.

We have also tested RabbitMQ and Redis as alternatives. The numbers cannot be directly compared with the measurements against the development cluster, since in these tests RabbitMQ and Redis were running locally.

Additionally the numbers below are from an earlier version, and from a different geographical location.

  • The tests were done using multiple 1x1 lookups in sequence, with the caching disabled.
  • The backend was running locally.
  • The Azure Service Bus implementation has extra overhead from calling into the cloud.
  • Ping to Ireland (North Europe data center): 60 ms
Implementation                      10 1x1 lookups (ms)   100 1x1 lookups (ms)   1000 1x1 lookups (ms)
Azure Service Bus + Azure Storage   4478                  47033                  404710
RabbitMQ + Redis                    894                   9524                   104039

Even taking the extra overhead of the cloud into account, the Azure Service Bus implementation is significantly slower than RabbitMQ and Redis, which at least indicates that further testing would be interesting.