Supra

Supra Distributed VRF

January 26, 2023 - 9 min read

The following blog post is the Supra Distributed VRF litepaper. For a more in-depth description of our distributed oracle protocol, download the full whitepaper here.


Randomness in Computing

Randomness is one of the most important ingredients for achieving security in computing systems. In particular, randomness is an essential part of the design of any public-key cryptography scheme. For example, generating truly random numbers is essential for the successful implementation of cryptographic methods such as multiparty computation and zero-knowledge proofs. Computational randomness also serves other purposes, such as selecting online lottery winners, awarding gaming prizes, resolving chance events, and running other kinds of randomized selection processes.

What is “Good” Randomness?

As mentioned, generating good random numbers is essential for many types of computational processes, but what is a “good” random number? Good random numbers are:

  1. Unpredictable: no one can compute them ahead of time; 
  2. Unbiasable: good random numbers are not subject to biases that would make specific numbers more likely to appear than others; and 
  3. Publicly verifiable: anyone can verify whether the number is computed correctly.

Technically speaking, unpredictability is a stronger property than unbiasability. Any bias toward particular values gives an adversary an advantage in predicting the output, whereas a number can be perfectly unbiased and yet still be predictable.

Verifiable Random Functions

An ideal source of randomness can be modeled as a function: it takes a number as input and returns a number that has all of the above properties. Verifiable Random Functions (VRFs) are cryptographic objects that, given an input and a randomly chosen secret key, produce outputs that are "as good as random" (technically, pseudorandom).

This means that as long as the key is kept secret, all the above properties are satisfied for all practical purposes (technically, against probabilistic polynomial-time adversaries), thwarting the kinds of attacks that randomness is meant to rule out. However, once the key becomes known, the function becomes predictable, and thus it no longer produces "good" randomness.

It is important to emphasize that even a short key (e.g., a 128-bit key) can produce many pseudorandom strings, one for each input. Therefore, the challenge boils down to keeping the secret key secure. In the Web3 environment, maintaining a secret key and computing randomness on-chain becomes quite challenging (and costly). Therefore, the best approach is to compute the VRF off-chain and subsequently verify its legitimacy on-chain with a cryptographic proof.
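To make the shape of this interface concrete, here is a minimal sketch of the three operations a VRF exposes; the names and types are illustrative placeholders, not Supra's actual API.

```python
# Illustrative shape of a VRF interface; names and types are placeholders,
# not Supra's actual API. The key point is the split between off-chain
# evaluation (needs the secret key) and on-chain verification (needs only
# the public key, the output, and the proof).
from typing import Protocol


class VRF(Protocol):
    def keygen(self) -> tuple[bytes, bytes]:
        """Return (secret_key, public_key); the secret key never goes on-chain."""
        ...

    def evaluate(self, secret_key: bytes, vrf_input: bytes) -> tuple[bytes, bytes]:
        """Off-chain: return (output, proof). The output looks random to anyone
        who does not hold the secret key."""
        ...

    def verify(self, public_key: bytes, vrf_input: bytes,
               output: bytes, proof: bytes) -> bool:
        """On-chain: anyone can check that the output was computed correctly."""
        ...
```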

Distributed VRFs

In contrast to our model, centralized VRF computation relies on a single node to store the secret key. That node becomes a single point of failure: compromising it renders every random number predictable, leaving the protocol or application that consumes the randomness prone to attacks, violating assumptions about fair competition, and clashing with the Web3 ethos of decentralization. Therefore, instead of a centralized VRF, we opt for what we call a distributed VRF (DVRF), an extension of a VRF with stronger decentralization properties.

In contrast to a centralized VRF, no single node has access to the entire secret key in our DVRF framework. During the setup phase, the secret key is shared among a number (say N) of nodes in a cryptographically secure way (using Shamir's secret sharing) such that any T+1 out of N (N > T) nodes can jointly reconstruct the entire secret key, but any T or fewer nodes learn nothing about it. Consequently, the system is resilient to the compromise of up to T nodes, and at least T+1 nodes need to participate for a successful execution. The parameters N and T are chosen to balance the DVRF's security and utility.
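To illustrate the sharing step, here is a minimal sketch of T-of-N Shamir secret sharing over a prime field; the field prime, thresholds, and helper names are toy choices for illustration, not Supra's production parameters.

```python
# A toy sketch of T-of-N Shamir secret sharing over a prime field, as used
# conceptually in the DVRF setup phase. The prime, node count, and threshold
# below are illustrative stand-ins, not production parameters.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy field

def share_secret(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` so that any t+1 shares reconstruct it, but t reveal nothing."""
    # Random degree-t polynomial f with f(0) = secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]  # node i holds the share (i, f(i))

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate f(0) from any t+1 shares."""
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = (num * -x_j) % PRIME
                den = (den * (x_i - x_j)) % PRIME
        secret = (secret + y_i * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share_secret(secret=123456789, n=7, t=3)
assert reconstruct(shares[:4]) == 123456789  # any T+1 = 4 shares suffice
```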

Therefore, the VRF computation is done collectively by at least T+1 nodes. Each node computes a partial evaluation using its key share, and anyone can publicly aggregate these to compute the final output. This means that the secret key is never reconstructed in one place and remains distributed at all times.
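The aggregation step can be sketched in the same toy setting: each node applies its share to a hash of the input, and anyone can combine any T+1 results with Lagrange coefficients. In the real DVRF the share is applied to a hashed-to-curve group element (a partial BLS signature), which is one-way and therefore never leaks the share; the scalar arithmetic below only illustrates the interpolation.

```python
# Toy partial evaluation and public aggregation, continuing the Shamir
# sketch above. Here the "group operation" is plain modular multiplication
# so the Lagrange interpolation is easy to follow; in the real DVRF each
# node multiplies a hashed-to-curve point by its share (a partial BLS
# signature), so shares and key are never exposed.
import hashlib

PRIME = 2**127 - 1  # same toy field as in the Shamir sketch

def lagrange_at_zero(xs: list[int]) -> list[int]:
    """Coefficients that interpolate f(0) from the points {(x, f(x)) : x in xs}."""
    coeffs = []
    for x_i in xs:
        num, den = 1, 1
        for x_j in xs:
            if x_j != x_i:
                num = (num * -x_j) % PRIME
                den = (den * (x_i - x_j)) % PRIME
        coeffs.append(num * pow(den, -1, PRIME) % PRIME)
    return coeffs

def partial_eval(share: tuple[int, int], vrf_input: bytes) -> tuple[int, int]:
    """Node-side: apply the key share to a hash of the VRF input."""
    h = int.from_bytes(hashlib.sha256(vrf_input).digest(), "big") % PRIME
    index, value = share
    return index, (value * h) % PRIME  # scalar stand-in for H(x)^{sk_i}

def aggregate(partials: list[tuple[int, int]]) -> int:
    """Anyone can combine any T+1 valid partial evaluations into the full one."""
    lams = lagrange_at_zero([i for i, _ in partials])
    return sum(lam * p for lam, (_, p) in zip(lams, partials)) % PRIME

# With shares produced by share_secret(secret, n=7, t=3) above, any 4 of
# the 7 partial evaluations aggregate to (secret * H(input)) mod PRIME.
```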

Importantly, the VRF verification procedure is totally agnostic to whether the output was produced by a DVRF or a centralized VRF (or to the values of N and T used). This renders the framework flexible, similar to threshold signatures. Finally, the setup phase can be fully decentralized by deploying a distributed key-generation protocol.

DVRF Requirements

Despite the benefits, using a DVRF introduces new security challenges when compared to other VRF models. Apart from satisfying all the above properties, a DVRF should also be consistent, robust, available, and strongly pseudorandom, where:

  1. Consistency – it does not matter which T+1 nodes participate; the verification procedure remains completely agnostic of the VRF generation procedure, which supports agility in the architecture (for example, T can be changed as requirements evolve); 
  2. Robustness – the aggregation procedure detects illegitimate partial evaluations and discards them, so an honestly aggregated VRF output is always correct and misbehaving participants are immediately accountable; 
  3. Availability – no matter what the adversary does (including, for example, not responding), as long as it compromises no more than T nodes, a correct VRF output can still be computed; this is essential for liveness; 
  4. Strong pseudorandomness – the VRF output remains pseudorandom even when up to T nodes are compromised and possibly colluding.

Our design choices satisfy all of these properties in a provable manner, as we elaborate below.

DVRF Designs

Our approach uses a construction called GLOW [Galindo et al. – Euro S&P’21], which is heavily based on BLS threshold signatures. Recall that a T out of N threshold signature scheme enables any T+1 of the N nodes to generate a signature. This means that the consistency of our DVRF design follows straightforwardly. The DVRF design additionally requires that each partial signature is verified by the aggregator during aggregation; for this, GLOW uses a special-purpose zero-knowledge proof (ZKP).

The use of a ZKP not only enables the verification of partial signatures, but also ensures strong pseudorandomness. An alternative approach (followed by Dfinity’s distributed randomness beacon) uses partial BLS verification instead of a ZKP. While this still guarantees robustness, it suffers from two drawbacks compared to GLOW:

  1. It cannot be proven strongly pseudorandom, as plain BLS verification does not use ZKPs;
  2. The partial verification involves a pairing computation, which is about 2.5x slower than ZKP verification.

The final VRF output is computed simply as a hash of the threshold signature, while the signature itself is supplied as the proof. This means the VRF output can be verified by checking that the proof is a valid BLS signature and that the output is its hash. Finally, for availability, we allow a corruption threshold T such that N is at least 2T+1. This ensures that even if T nodes are compromised and stay silent, there remain T+1 honest nodes who can compute the VRF correctly. This is how all the important properties are met.
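As a small illustration of the output/proof relation just described (output = hash of a BLS signature, signature = proof), here is a single-key sketch assuming the open-source py_ecc BLS ciphersuite is available; the threshold sharing and GLOW's ZKPs for partial evaluations are omitted.

```python
# Single-key sketch of "output = hash of a BLS signature", assuming the
# open-source py_ecc package (pip install py_ecc). The threshold sharing
# and GLOW's zero-knowledge proofs for partial evaluations are omitted;
# this only shows how a final (output, proof) pair is formed and checked.
import hashlib
from py_ecc.bls import G2ProofOfPossession as bls

sk = bls.KeyGen(b"\x01" * 32)   # demo key material only
pk = bls.SkToPk(sk)

def vrf_evaluate(secret_key: int, vrf_input: bytes) -> tuple[bytes, bytes]:
    proof = bls.Sign(secret_key, vrf_input)    # the BLS signature doubles as the proof
    output = hashlib.sha256(proof).digest()    # VRF output = hash of the signature
    return output, proof

def vrf_verify(public_key: bytes, vrf_input: bytes,
               output: bytes, proof: bytes) -> bool:
    return (bls.Verify(public_key, vrf_input, proof)
            and output == hashlib.sha256(proof).digest())

out, prf = vrf_evaluate(sk, b"lottery-round-7")
assert vrf_verify(pk, b"lottery-round-7", out, prf)
```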

Output Private DVRF

Whereas the above design suffices for applications where the produced random numbers are made immediately public, it severely limits the way the VRF service can be utilized:

i) First, a request cannot be made in advance so that the randomness is already available when, say, a game round begins; the request has to be synchronized with the application, which introduces significant latency overhead.

ii) Second, as the output is public, it cannot be reused in the future (for example, by applying a counter-mode hash to derive multiple random values for use at different times). This also results in significant gas-cost and service-fee overhead, as the requester has to make an individual request each time a new random number is needed.

In a nutshell, this forces users to carefully design their games/services so that players/clients cannot exploit the public VRF outputs, and to ensure that the latency and monetary overheads remain affordable.

To resolve this, we put forward the novel notion of Output Private DVRF (or PVRF). A PVRF must satisfy all the properties of a DVRF, plus a fifth property, output privacy: the VRF output is hidden from every entity except the user application, though the user may make it public later.

Based on GLOW, we propose a new design that satisfies our PVRF definition. Specifically, the user blinds the input (though input privacy is not required) using a blinding factor and provides the blinded input to the VRF nodes along with a zero-knowledge proof of correct blinding. The VRF nodes then evaluate on the blinded input to produce blinded partial outputs, which are aggregated into a blinded output. The user unblinds this output locally with the same blinding factor whenever it is needed.
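The blind/unblind cancellation can be seen in a toy model where the evaluation is just multiplication by the key modulo a prime; in the actual PVRF these are elliptic-curve operations on a hashed-to-curve point, and the user additionally supplies a ZKP of correct blinding.

```python
# Toy illustration of output blinding: all values are scalars modulo a
# prime so the cancellation is easy to see. In the actual PVRF, H(x), the
# blinded input, and the partial outputs are elliptic-curve points, the
# "multiplications" are scalar multiplications, and the user additionally
# supplies a zero-knowledge proof of correct blinding; none of that is
# modeled here.
import hashlib
import secrets

Q = 2**127 - 1             # toy prime modulus
sk = secrets.randbelow(Q)  # stands in for the (distributed) VRF key

def hash_to_scalar(vrf_input: bytes) -> int:
    return int.from_bytes(hashlib.sha256(vrf_input).digest(), "big") % Q

# 1. The user blinds the input with a random factor r.
h = hash_to_scalar(b"match-outcome-99")
r = secrets.randbelow(Q - 1) + 1
blinded_input = (h * r) % Q

# 2. Nodes evaluate on the blinded input only (the true output stays hidden).
blinded_output = (blinded_input * sk) % Q

# 3. The user unblinds locally, whenever the value is actually needed.
unblinded = (blinded_output * pow(r, -1, Q)) % Q
assert unblinded == (h * sk) % Q   # same value an unblinded evaluation would give
```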

While the PVRF design requires a few changes, the feature comes with virtually no extra overhead and therefore does not compromise efficiency. However, the security argument becomes significantly more challenging due to the blinding. Output privacy substantially expands the range of useful applications of the VRF. It should be noted that this concept has never before been considered in the traditional cryptographic literature, making it an important conceptual contribution to cryptography.

SUPRA VRF Framework


Now, let’s briefly describe the Supra VRF framework, as depicted below in Figure 1. We start with our basic framework; the steps are enumerated in order of execution. In the first step, the user provides an input to the smart contract, which creates a VRF input. The VRF input is constructed carefully so that queries are easy to track and so that malicious behaviors, such as repeating queries or selectively rejecting outputs to introduce bias, are prevented.

The VRF input is fetched by the Relay/Aggregator nodes, which forward it to the VRF service nodes (called VRF clans in Supra terminology). The nodes in the VRF clan return individually computed partial evaluations, which the Relay/Aggregator nodes aggregate. The aggregated output is returned to the calling contract, which verifies the VRF output and then invokes the user callback function with it.

Figure 1: The basic Supra DVRF framework.
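To summarize the flow, here is a hypothetical sketch of the Relay/Aggregator role; every name below (the protocols, methods, and callbacks) is a placeholder for illustration rather than Supra's actual interfaces.

```python
# Hypothetical sketch of the Relay/Aggregator loop described above. All
# protocol and method names here are placeholders, not Supra's API.
from typing import Callable, Protocol


class ClanNode(Protocol):
    def partial_eval(self, vrf_input: bytes) -> bytes: ...

class CallingContract(Protocol):
    def fetch_vrf_input(self) -> bytes: ...
    def fulfill(self, output: bytes, proof: bytes) -> None: ...

def relay_round(
    contract: CallingContract,
    clan: list[ClanNode],
    t: int,
    verify_partial: Callable[[bytes, bytes], bool],
    aggregate: Callable[[list[bytes]], tuple[bytes, bytes]],
) -> None:
    vrf_input = contract.fetch_vrf_input()          # Steps 1-2: request created on-chain
    partials: list[bytes] = []
    for node in clan:
        partial = node.partial_eval(vrf_input)      # Steps 3-4: clan nodes evaluate
        if verify_partial(vrf_input, partial):      # robustness: discard bad partials
            partials.append(partial)
        if len(partials) == t + 1:                  # any T+1 valid partials suffice
            break
    output, proof = aggregate(partials)             # Step 5: public aggregation
    contract.fulfill(output, proof)                 # Steps 6-7: on-chain verify + callback
```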

Optimized DVRF Framework

To optimize the basic framework, we allow multiple VRF requests to be handled at the same time, as depicted in Figure 2. In this setting, once the partial evaluations for all requests are returned and aggregated at the Relay/Aggregator node (Steps 5 and 6 below), the resulting VRF outputs are sent to the Supra SMR service.

The Supra SMR service then verifies each of them separately. If all of them succeed, it produces a signature in the format expected by the calling chain (for example, ECDSA on Ethereum) over the entire set of VRF outputs.

The “batched” signature is sent along with the VRF outputs to the calling contract, which then verifies only that signature (instead of verifying each VRF output separately). This saves a significant amount of gas (for example, on Ethereum, batching 5 outputs reduces gas costs by roughly 30x), because verifying each VRF output involves complex operations such as bilinear pairings and hashing to group elements, which cost about 200k gas on Ethereum, compared to roughly 30k gas for an ECDSA verification.
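A back-of-the-envelope check using the figures quoted above shows where the roughly 30x saving comes from (these are approximate costs; real numbers vary by implementation):

```python
# Rough arithmetic with the per-operation costs quoted above (approximate;
# actual Ethereum gas costs vary by implementation and calldata size).
PAIRING_VERIFY_GAS = 200_000   # per on-chain BLS/VRF verification
ECDSA_VERIFY_GAS = 30_000      # one ECDSA check covering the whole batch
BATCH_SIZE = 5

individual = BATCH_SIZE * PAIRING_VERIFY_GAS   # 1,000,000 gas without batching
batched = ECDSA_VERIFY_GAS                     # plus some cheap hashing, ignored here
print(f"~{individual / batched:.0f}x saving")  # ~33x, in line with the ~30x above
```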

Figure 2: The optimized Supra DVRF framework with batched verification via the SMR service.

SUPRA PVRF Framework

The PVRF framework, sketched in Figure 3, is quite similar to the framework above, except that there is an additional round of interaction between the contract and the user: once the input is created by the contract, it is sent back to the user for blinding and for computing a ZKP of correct blinding, as depicted below in Steps 1, 2, and 3. The rest of the flow stays almost the same. As before, this can be straightforwardly optimized using the SMR service.

Figure 3: The Supra PVRF framework.

Of course, using DVRFs rather than centralized VRFs involves slight trade-offs in computational efficiency, communication cost, and system complexity. Nevertheless, the security gains significantly outweigh these drawbacks. Above all, a distributed VRF truly captures the ethos of decentralization that lies at the heart of blockchain technology.

Conclusion

Supra provides a fast, reliable, decentralized VRF service with novel functionality and optimization techniques. We deploy a state-of-the-art distributed VRF primitive (namely GLOW), which is provably more secure than existing deployments (due to strong pseudorandomness) and significantly more efficient thanks to its use of ZKPs.

This means that Supra can provide the first Private VRF service without adding any significant overhead. This is a novel concept in the VRF literature and is accompanied by a rigorous and provable security analysis (soon to be published as a research paper). Finally, we provide generic optimization techniques in our framework to significantly increase its efficiency using Supra’s SMR service.

If you want to unravel the technical mysteries of our distributed VRF design via a formal write-up, theorems and proofs, please check out the full Supra DVRF whitepaper.
