Open Access Article

Statically Aggregate Verifiable Random Functions and Application to E-Lottery

1 Department of Computer Science and Engineering, Chalmers University of Technology, SE-41296 Gothenburg, Sweden
2 School of Computer Science, University of St. Gallen, CH-9000 St. Gallen, Switzerland
* Author to whom correspondence should be addressed.
Received: 29 October 2020 / Revised: 29 November 2020 / Accepted: 9 December 2020 / Published: 13 December 2020
(This article belongs to the Special Issue Cryptographic Protocols 2020)

Abstract

Cohen, Goldwasser, and Vaikuntanathan (TCC’15) introduced the concept of aggregate pseudo-random functions (PRFs), which allow efficiently computing the aggregate of PRF values over exponential-sized sets. In this paper, we explore the aggregation augmentation of verifiable random functions (VRFs), introduced by Micali, Rabin, and Vadhan (FOCS’99), as well as its application to e-lottery schemes. We introduce the notion of static aggregate verifiable random functions (Agg-VRFs), which perform aggregation for VRFs in a static setting. Our contributions can be summarized as follows: (1) we define static aggregate VRFs, which allow the efficient aggregation of VRF values and the corresponding proofs over super-polynomially large sets; (2) we present a static Agg-VRF construction over bit-fixing sets with respect to product aggregation, based on the q-decisional Diffie–Hellman exponent assumption; (3) we evaluate the performance of our static Agg-VRF instantiation against a standard (non-aggregate) VRF in terms of the time cost of the aggregation and verification processes, which shows that Agg-VRFs considerably lower the verification time for large sets; and (4) by employing Agg-VRFs, we propose an improved e-lottery scheme based on the framework of Chow et al.’s VRF-based e-lottery proposal (ICCSA’05). We evaluate the performance of Chow et al.’s e-lottery scheme and our improved scheme; the latter shows a significant improvement in the efficiency of generating the winning number and of player verification.
Keywords: pseudorandom functions; verifiable random functions; aggregate pseudorandom functions; aggregate verifiable random functions

1. Introduction

Verifiable random functions (VRFs), initially introduced by Micali, Rabin, and Vadhan [1], can be seen as the public key equivalent of pseudorandom functions (PRFs) that, besides the pseudorandomness property (i.e., the function looks random at any input x), also provide the property of verifiability. More precisely, VRFs are defined by a pair of public and secret keys (pk, sk) in such a way that they provide not only the efficient computation of the pseudorandom function f_sk(x) = y for any input x but also a non-interactive, publicly verifiable proof π_sk(x) that, given access to pk, allows the efficient verification of the statement f_sk(x) = y for all inputs x. VRFs have been shown to be very useful in multiple application scenarios, including key distribution centres [2], non-interactive lottery systems used in micropayments [3], domain name security extensions (DNSSEC) [4,5,6], e-lottery schemes [7], and proof-of-stake blockchain protocols such as Ouroboros Praos [8,9].
Cohen, Goldwasser, and Vaikuntanathan [10] were the first to investigate how to answer aggregate queries for PRFs over exponential-sized sets and introduced a type of augmented PRFs, called aggregate pseudo-random functions, which significantly enriched the existing family of (augmented) PRFs, including constrained PRFs [11], key-homomorphic PRFs [12], and distributed PRFs [2]. Inspired by the idea of aggregate PRFs [10], in this paper we explore the aggregation of VRFs and introduce a new cryptographic primitive, static aggregate verifiable random functions (static Agg-VRFs), which allow not only efficient aggregation of both function values and proofs but also verification of the correctness of the aggregated results.
Aggregate VRFs allow the efficient aggregation of a large number of function values, as well as the efficient verification of the correctness of the aggregated function result by employing the corresponding aggregated proof. Let us give an example to illustrate this property. Consider a cloud-assisted computing setting where a VRF can be employed in the client–server model, i.e., Alice is given access to a random function where the function description (or the secret key) is stored by a server (seen as the random value provider). Whenever Alice requests an arbitrary bit-string x, the server simply computes the function value y = f(x) together with the corresponding proof π and returns the tuple (x, y, π) to Alice. Alice may also request the aggregation (such as the product) of the function values over a large number of points (e.g., x_1, x_2, …, x_n, which may match some pattern, such as sharing the same bits at some bit locations). In this case, aggregate VRFs allow the server to compute the product of f(x_1), …, f(x_n) efficiently, instead of first evaluating f(x_1), …, f(x_n) and then calculating their product. On receiving either the function value y of an individual input or the aggregated function value y_agg over multiple inputs, Alice needs to verify the correctness of the returned value. VRFs allow the verification of the correctness of y using π. To verify the correctness of y_agg, there is a trivial way, namely first verifying (x_i, y_i, π_i) for i = 1, …, n using the verification algorithm of VRFs and then checking whether y_agg = ∏_{i=1}^{n} y_i, but its running time depends on the number n. Via aggregate VRFs, the verification of y_agg can be achieved much more efficiently by using the aggregated proof π_agg that is generated by the server and returned to Alice along with y_agg.
A representative application of aggregate VRFs is in e-lottery schemes. More precisely, aggregate VRFs can be employed in VRF-based e-lottery schemes [7], where a random number generation mechanism is required to provide not only a winning number but also the public verifiability of the winning result, which guarantees that the dealer cannot cheat in the random number generation process. In this paper, we provide an e-lottery scheme with a significant gain in the efficiency of generating the winning numbers and verifying the winning results. In a nutshell, VRF-based e-lottery schemes [7] proceed as follows. Initially, the dealer generates a secret/public key pair (sk, pk) of a VRF and publishes the public key pk, together with a parameter T associated with the time during which the dealer must release the winning ticket value (this is the input parameter controlling the time complexity of the delaying function D(·)). To purchase a ticket, a player chooses his bet number s and obtains a ticket ticket_i from the dealer (please refer to Section 4.2 for the generation of ticket_i on a bet number in detail). The dealer links the ticket to a hash chain, which could be created as chain_1 := H(ticket_1) and chain_i := H(chain_{i−1} || ticket_i) for i > 1, and publishes chain_j, where j is the number of tickets sold so far. To generate the random winning number, the dealer first computes the VRF as (w_0, π_0) = (f_sk(d), π_sk(d)) on d = D(h), where h is the final value of the hash chain (i.e., supposing n tickets are sold, h := H(chain_n)). Assume that the numbers used in the lottery game are {1, 2, …, N_max}. If w_0 > N_max, then the dealer iteratively applies the VRF on w_{i−1}d to obtain (w_i, π_i) = (f_sk(w_{i−1}d), π_sk(w_{i−1}d)). Suppose that, within T units of time after the closing of the lottery session, after applying the VRF t times, the dealer obtains (w_t, π_t) such that w_t ≤ N_max.
Afterwards, the dealer publishes (w_t, π_t) as the winning number and the corresponding proof, as well as all the intermediate tuples (w_0, π_0), …, (w_{t−1}, π_{t−1}). A player wins if s = w_t. To verify the validity of a winning number w_t, each player verifies the validity of (w_i, π_i) for i = 0, …, t.
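To make the winner-generation loop above concrete, here is a minimal Python sketch of the iteration. It is only an illustration: a SHA-256-based stand-in replaces the real VRF (so there is no genuine publicly verifiable proof), and the key, output range, and the way w_{i−1} is combined with d (plain concatenation) are hypothetical choices, not the scheme's actual parameters.

```python
import hashlib

N_MAX = 999_999            # lottery numbers are {1, ..., N_max} (toy value)
SECRET_KEY = b"dealer-sk"  # stand-in for the VRF secret key (illustrative)

def vrf_eval(sk: bytes, x: bytes):
    """Stand-in for (f_sk(x), pi_sk(x)): a real VRF returns a publicly
    verifiable proof; here the 'proof' is just the raw digest."""
    digest = hashlib.sha256(sk + x).digest()
    # map into a 24-bit range so the rejection loop below terminates quickly
    return int.from_bytes(digest[:3], "big"), digest

def winning_number(d: bytes):
    """Chow et al.-style loop: w_0 = f_sk(d); while w_i > N_max, re-apply
    the VRF to w_{i-1} combined with d (combined here by concatenation,
    an illustrative choice)."""
    w, pi = vrf_eval(SECRET_KEY, d)
    trail = [(w, pi)]      # all intermediate (w_i, pi_i) must be published
    while w > N_MAX:
        w, pi = vrf_eval(SECRET_KEY, w.to_bytes(4, "big") + d)
        trail.append((w, pi))
    return w, trail

w, trail = winning_number(b"final-hash-chain-value")
print(w, len(trail))
```

The length of `trail` is exactly the storage and verification burden the paper sets out to remove: every player must check every intermediate tuple.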
Chow et al.’s e-lottery scheme [7] seems very promising in the ideal case where, after a small number t of VRF applications, a function value w_t with w_t ≤ N_max is obtained. Otherwise, the dealer needs to evaluate the VRF more times, while the player needs to verify the correctness of more tuples in order to verify the winning result; the latter leads to large computational overhead and requires storing all intermediate tuples of VRF function values and corresponding proofs, both for the dealer and for the player.
Observe that both the evaluation and the verification of multiple VRF function value/proof pairs are time consuming. By using our aggregate VRF instantiation, we improve the e-lottery by having the dealer evaluate the aggregate VRF at most twice to obtain a random winning number together with the corresponding proof, so that only a single pair needs to be verified. This reduces the amount of data written to the dealer’s storage space and also decreases the computational cost of the verification process for each player.
Our Contribution. We introduce the notion of static aggregate verifiable random functions (static Agg-VRFs). Briefly, a static Agg-VRF is a family of keyed functions each associated with a pair of keys, such that, given the secret key, one can compute the aggregation function for both the function values and the proofs of the VRFs over super-polynomially large sets in polynomial time, while, given the public key, the correctness of the aggregate function values could be checked by the corresponding aggregated proof. It is very important that the sizes of the aggregated function values and proofs should be independent of the size of the set over which the aggregation is performed. The security requirement of a static Agg-VRF states that access to an aggregate oracle provides no advantage to the ability of a polynomial time adversary to distinguish the function value from a random value, even when the adversary could query an aggregation of the function values over a specific set (of possibly super-polynomial size) of his choice.
In this paper, the aggregation operation we consider is the product of all the VRF values and proofs over inputs belonging to a super-polynomially large set. We show how to compute the product aggregation over a super-polynomial-size set in polynomial time, since it is impossible to directly compute the product of a super-polynomial number of values. More specifically, we show how to achieve a static Agg-VRF under Hohenberger and Waters’ VRF scheme [13] for the product aggregation with respect to a bit-fixing set. We stress that, after revisiting the JN-VRF scheme [14] proposed by Jager and Niehues (currently the most efficient VRF with full adaptive security in the standard model), we find that, even though JN-VRF enjoys almost the same framework as HW-VRF, it is impossible to efficiently perform product aggregation of a super-polynomial number of values f_sk(H_AHF(x)) over bit-fixing sets: an admissible hash function H_AHF is applied to the inputs x before evaluating the function value and the corresponding proof, which destroys the shared bit pattern of the inputs in a bit-fixing set.
We implemented and evaluated the performance of our proposed static aggregate VRF in comparison to a standard (non-aggregate) VRF for inputs of different lengths, i.e., 56, 128, 256, 512, and 1024 bits, in terms of the time cost of aggregating the function values, aggregating the proofs, and verifying the aggregation. In all cases, our aggregate VRFs present a significant computational advantage and are more efficient than standard VRFs. Furthermore, by employing aggregate VRFs for bit-fixing sets, we propose an improved e-lottery scheme based on the framework of Chow et al.’s VRF-based e-lottery proposal [7], mainly by modifying the winning result generation phase and the player verification phase. We implemented and tested the performance of both Chow et al.’s and our improved e-lottery scheme. Our improved scheme shows a significant improvement in efficiency in comparison to Chow et al.’s scheme.
Core Technique. We present a construction of static aggregate VRFs which performs the product aggregation over a bit-fixing set, following Hohenberger and Waters’ [13] VRF scheme. A bit-fixing set consists of bit-strings which match a particular bit pattern. It can be defined by a pattern string v ∈ {0,1,∗}^{poly(λ)} as S_v = { x ∈ {0,1}^{poly(λ)} : ∀i, x_i = v_i or v_i = ∗ }. The evaluation of the VRF on input x = x_1 x_2 ⋯ x_ℓ is defined as y = e(g,h)^{u_0 ∏_{i=1}^{ℓ} u_i^{x_i}}, where g, h, U_0 = g^{u_0}, …, U_ℓ = g^{u_ℓ} are public keys and u_0, …, u_ℓ are kept secret. The corresponding proofs of the VRF are given using a step-ladder approach, namely, for j = 1 to ℓ, π_j = g^{∏_{i=1}^{j} u_i^{x_i}}, and π_{ℓ+1} = g^{u_0 ∏_{i=1}^{ℓ} u_i^{x_i}}.
Let Fixed(v) = { i ∈ [ℓ] : v_i ∈ {0,1} } and |Fixed(v)| = τ. To aggregate the VRF, let π_0^agg = g^{2^{ℓ−τ}}; for i = 1, …, ℓ, we compute
π_i^agg = (π_{i−1}^agg)^{u_i^{v_i}} if i ∈ Fixed(v), and π_i^agg = (π_{i−1}^agg)^{(u_i+1)/2} if i ∉ Fixed(v),
and π_{ℓ+1}^agg = (π_ℓ^agg)^{u_0}. The aggregated function value is computed as
y_agg = e(g,h)^{u_0 · ∏_{i∈Fixed(v)} u_i^{v_i} · ∏_{i∈[ℓ]∖Fixed(v)} (u_i+1)}.
The aggregation verification algorithm checks the following equations: for i = 1, …, ℓ,
e(g, π_i^agg) = e(π_{i−1}^agg, g) if i ∈ Fixed(v) and v_i = 0; e(π_{i−1}^agg, U_i) if i ∈ Fixed(v) and v_i = 1; e(π_{i−1}^agg, g · U_i)^{1/2} if i ∉ Fixed(v);
and, finally, e(π_{ℓ+1}^agg, g) = e(π_ℓ^agg, U_0) and e(π_{ℓ+1}^agg, h) = y_agg.
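The reason product aggregation over a bit-fixing set is feasible is an exponent identity: summing u_0 ∏_i u_i^{x_i} over all x ∈ S_v factors into u_0 · ∏_{i∈Fixed(v)} u_i^{v_i} · ∏_{i∉Fixed(v)} (u_i + 1), since each free bit contributes a factor (1 + u_i). The following Python sketch checks this identity with toy, hypothetical parameters (p, u_0, the u_i, and the pattern v are all illustrative; no pairings are involved, only exponent arithmetic):

```python
from itertools import product

p = 1_000_003              # toy prime (stand-in for the bilinear group order)
v = "10**1*"               # hypothetical pattern: tau = 3 fixed positions
u0 = 7
u = [3, 5, 11, 13, 17, 19] # toy secret exponents u_1, ..., u_l

fixed = [i for i, c in enumerate(v) if c in "01"]
free = [i for i, c in enumerate(v) if c == "*"]

# Closed form: u0 * prod_{fixed i} u_i^{v_i} * prod_{free i} (u_i + 1)  mod p
closed = u0
for i in fixed:
    closed = closed * pow(u[i], int(v[i]), p) % p
for i in free:
    closed = closed * (u[i] + 1) % p

# Brute force: sum of u0 * prod_i u_i^{x_i} over all 2^{l - tau} strings in S_v
brute = 0
for bits in product("01", repeat=len(free)):
    x = list(v)
    for i, b in zip(free, bits):
        x[i] = b
    term = u0
    for i, xi in enumerate(x):
        term = term * pow(u[i], int(xi), p) % p
    brute = (brute + term) % p

print(closed, brute)  # the two exponents agree
```

The closed form costs O(ℓ) multiplications, while the brute-force sum costs O(2^{ℓ−τ}); this gap is exactly what makes aggregation over super-polynomially large sets tractable.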
Improved Efficiency. We provide some highlights on the achieved efficiency.
Efficiency of Aggregate VRF. The construction of our static aggregate VRF for a bit-fixing (BF) set achieves high performance in the verification process, since it takes only O(ℓ) bilinear pairing operations, even when verifying an exponentially large set of function values, where ℓ denotes the input length. The experimental results show that, even for 1024-bit inputs, the aggregation of 2^{1004} pairs of function values/proofs can be computed very efficiently, in 6881 ms. Moreover, the time required to verify the aggregated function value/proof of these 2^{1004} pairs increases by only 50% compared with the verification time for a single function value/proof pair of the standard VRF. Section 3.2 and Section 3.3 present a detailed efficiency discussion and our experimental tests and comparisons.
Efficiency of Improved E-Lottery Scheme. We test the performance of Chow et al.’s e-lottery scheme [7] and our improved (aggregate-VRF-based) counterpart and make a comparison. In our improved e-lottery scheme, the computation of the aggregate function value/proof pair and its verification are performed via a single step of the Aggregation and AggVerify algorithms, respectively, while Chow et al.’s e-lottery scheme proceeds in t steps. We performed experiments on Chow et al.’s scheme to see how large t is before the dealer obtains (w_t, π_t) such that w_t ≤ N_max, thus determining the computation time for the corresponding multiple function evaluations and verifications. In the experiments, we ran Chow et al.’s scheme 10 times and took the median over all runs. We observed t ≈ 2, and each run took ≈100 s for winner generation and ≈5 s for player verification. In our improved version, the generation of the winning ticket costs less than 90 s, and the verification time decreases to ≈2.5 s, which shows a significant improvement in efficiency.
Related work. We summarize the relevant state of the art.
Verifiable Random Functions. Hohenberger and Waters’ VRF scheme [13] is the first to achieve all the desired properties for a VRF (we say that a VRF scheme has all the desired properties if it allows an exponential-sized input space, achieves full adaptive security, and is based on a non-interactive assumption). Earlier, there were several VRF proposals [15,16,17], all of which have some limitation: they only allow a polynomial-sized input space, do not achieve fully adaptive security, or are based on an interactive assumption. By now, there are also many constructions of VRFs with all the desired properties based on the decisional Diffie–Hellman assumption (DDH) or the decision linear assumption (DLIN), presenting different security losses [18,19,20,21,22]. Kohl [22] provided a detailed summary and comparison of all existing efficient constructions of VRFs in terms of the underlying assumption, the sizes of the verification key and the corresponding proof, and the associated security loss. Recently, Jager and Niehues [14] provided the most efficient VRF scheme with adaptive security in the standard model, relying on computational admissible hash functions.
Aggregate Pseudorandom Functions. Cohen et al. [10] introduced the notion of aggregate PRFs, which is a family of functions indexed by a secret key with the functionality that, given the secret key, anyone is able to aggregate the values of the function over super-polynomially many PRF values with only a polynomial-time computation. They also proposed constructions of aggregate PRFs under various cryptographic hardness assumptions (one-way functions and sub-exponential hardness of the Decisional Diffie–Hellman assumption) for different types of aggregation operators such as sums and products and for several set systems including intervals, bit-fixing sets, and sets that can be recognized by polynomial-size decision trees and read-once Boolean formulas. In this paper, we explore how to aggregate VRFs, which involves efficient aggregations both on the function evaluations and on the corresponding proofs, while providing verifiability for the correctness of aggregated function value via corresponding proof.
E-lottery Schemes/Protocols. In 2005, Chow et al. [7] proposed an e-lottery scheme using a verifiable random function (VRF) and a delay function. To reduce the complexity in the (purchaser) verification phase, Liu et al. [23] improved Chow et al.’s scheme by proposing a multi-level hash chain to replace the original linear hash chain, as well as a hash-function-based delay function, which is more suitable for e-lottery networks with mobile portable terminals. Based on the secure one-way hash function and the factorization problem in RSA, Lee and Chang [24] presented an electronic t-out-of-n lottery on the Internet, which allows lottery players to simultaneously select t out of n numbers in a ticket without iterative selection. Given that the previous schemes [7,23,24] offer single participant lottery purchases on the Internet, Chen et al. [25] proposed an e-lottery purchase protocol that supports the joint purchase from multi-participants that enables them to safely and fairly participate in a mobile environment. Aiming to provide an online lottery protocol that does not rely on a trusted third party, Grumbach and Riemann [26] proposed a novel distributed e-lottery protocol based on the centralized e-lottery of Chow et al. [7] and incorporated the aforementioned multi-level hash chain verification phase of Liu et al. [23]. Considering that the existing works on e-lottery focus either on providing new functionalities (such as decentralization or threshold) or improving the hash chain or delay function, the building block of VRFs has received little attention. In this paper, we explore how to improve the efficiency of Chow et al.’s [7] e-lottery scheme by using aggregate VRFs.

2. Preliminaries

2.1. Verifiable Random Functions

Let F : K × X → Y × P be an efficiently computable function, where the key space K, domain X, range Y, and proof space P depend on the security parameter λ. Consider (Setup, Eval, Verify) as the following algorithms:
  • Setup(1^λ) → (sk, pk) takes as input a security parameter λ and outputs a key pair (pk, sk). We say that sk is the secret key and pk is the verification key.
  • Eval(sk, x) → (y, π) takes as input the secret key sk and x ∈ X and outputs a function value y ∈ Y and a proof π ∈ P. We write Fun_sk(x) to denote the function value y and Prove_sk(x) to denote the proof of correctness computed by Eval on input (sk, x).
  • Verify(pk, x, y, π) → {0,1} takes as input the verification key pk, x ∈ X, y ∈ Y, and the proof π ∈ P and outputs a bit.
Definition 1.
We say that (Setup, Eval, Verify) is a verifiable random function (VRF) if all the following properties hold.
1. Provability: For all (pk, sk) ← Setup(1^λ) and inputs x ∈ X, it holds that if (y, π) ← Eval(sk, x), then Verify(pk, x, y, π) = 1.
2. Uniqueness: For all pk (not necessarily generated by Setup) and inputs x ∈ X, there does not exist a tuple (y_0, y_1, π_0, π_1) such that (1) y_0 ≠ y_1 and (2) Verify(pk, x, y_0, π_0) = Verify(pk, x, y_1, π_1) = 1.
3. Pseudorandomness: For all p.p.t. attackers D = (D_1, D_2), there exists a negligible function μ(λ) such that:
Pr[(pk, sk) ← Setup(1^λ); (x*, st) ← D_1^{Eval(sk,·)}(pk); y_0 = Fun_sk(x*); y_1 ← Y; b ← {0,1}; b′ ← D_2^{Eval(sk,·)}(y_b, st) : b′ = b ∧ x* ∉ L_Eval] ≤ 1/2 + μ(λ),
where L_Eval denotes the set of all inputs that D queries to its oracle Eval.
Here, we note that Eval and Fun denote two different functions. The former denotes the function that outputs both function value y and proof π , while the latter denotes the function that only outputs function value y.

2.2. Bilinear Maps and the HW-VRF Scheme

Bilinear Groups. Let G and G_T be algebraic groups. A bilinear map is an efficient mapping e : G × G → G_T which is both bilinear (for all g ∈ G and a, b ∈ Z_p, e(g^a, g^b) = e(g,g)^{ab}) and non-degenerate (if g generates G, then e(g,g) ≠ 1).
Assumption A1
(q-Decisional Diffie–Hellman Exponent (q-DDHE)). Let G, G_T be groups of prime order p ∈ Θ(2^λ). For all p.p.t. adversaries A, there exists a negligible function μ such that:
Pr[g, h ← G; a ← Z_p; y_0 = e(g,h)^{a^q}; y_1 ← G_T; b ← {0,1}; b′ ← A(g, h, g^a, …, g^{a^{q−1}}, g^{a^{q+1}}, …, g^{a^{2q}}, y_b) : b′ = b] ≤ 1/2 + μ(λ).
HW-VRF Scheme. Here, we describe one of the elegant constructions of VRFs proposed by Hohenberger and Waters [13] (abbreviated as the HW-VRF scheme), which is employed as the basis for our aggregate VRF scheme. HW-VRF is the first fully secure VRF with an exponential-size input space; it is built from the Naor–Reingold PRF [27], and its security is based on a so-called q-type complexity assumption, namely the q-DDHE assumption. It is built as follows.
  • Setup(1^λ, 1^ℓ): The setup algorithm takes as input the security parameter λ as well as the input length ℓ. It first runs G(1^λ) to obtain the description of the groups G, G_T and of a bilinear map e : G × G → G_T. The description of G contains the generators g, h ∈ G. Let {0,1}^ℓ be the input space. It next selects random values u_0, u_1, …, u_ℓ ← Z_p and sets U_0 = g^{u_0}, U_1 = g^{u_1}, …, U_ℓ = g^{u_ℓ}. It then sets the keys as: pk = (g, h, U_0, U_1, …, U_ℓ), sk = (u_0, u_1, …, u_ℓ).
  • Fun(sk, x): For x ∈ {0,1}^ℓ, the function Fun_sk evaluates x = x_1 x_2 ⋯ x_ℓ as:
    y = Fun_sk(x) = e(g,h)^{u_0 ∏_{i=1}^{ℓ} u_i^{x_i}}.
  • Prove(sk, x): This algorithm outputs a proof π, which is comprised as follows. For j = 1 to ℓ it computes π_j = g^{∏_{i=1}^{j} u_i^{x_i}}, and it sets π_{ℓ+1} = g^{u_0 ∏_{i=1}^{ℓ} u_i^{x_i}}. Set π = (π_1, …, π_ℓ, π_{ℓ+1}).
  • Verify(pk, x, y, π): Let π = (π_1, …, π_ℓ, π_{ℓ+1}). To verify that y was computed correctly, proceed step by step by checking that
    e(π_1, g) = e(g, g) if x_1 = 0, and e(π_1, g) = e(U_1, g) otherwise;
    then, for i = 2, …, ℓ, it checks whether the following equations are satisfied:
    e(π_i, g) = e(π_{i−1}, g) if x_i = 0, and e(π_i, g) = e(π_{i−1}, U_i) otherwise.
    Finally, it checks that e(π_{ℓ+1}, g) = e(π_ℓ, U_0) and e(π_{ℓ+1}, h) = y. It outputs 1 if and only if all checks verify; otherwise, it outputs 0.
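To illustrate the algebra of the step-ladder checks (and nothing about security), one can model each group element g^a by its exponent a and the pairing e(g^a, g^b) by a·b mod q. The Python sketch below is only such a toy model, with hypothetical parameters; in it the modeled "public key" trivially reveals the secrets, so it demonstrates only that honestly computed ladder proofs satisfy every verification equation:

```python
import random

q = 2_147_483_647  # toy prime; we model g^a by the exponent a itself and the
                   # pairing e(g^a, g^b) by a*b mod q ("exponent world")

def e(A, B):                                 # toy pairing on modeled elements
    return A * B % q

l = 8
rng = random.Random(1)
u0 = rng.randrange(1, q)
u = [rng.randrange(1, q) for _ in range(l)]  # secret key (u0, u1, ..., ul)
U0, U = u0, u[:]          # modeled public key: U_0 = g^{u0}, U_i = g^{u_i}
g, h = 1, rng.randrange(1, q)                # modeled generators g = g^1, h

def evaluate(x):
    """Fun and Prove of HW-VRF in the toy model (step-ladder proofs)."""
    pi, acc = [], 1
    for j, xj in enumerate(x):               # pi_j = g^{prod_{i<=j} u_i^{x_i}}
        acc = acc * (u[j] if xj else 1) % q
        pi.append(acc)
    pi.append(u0 * acc % q)                  # pi_{l+1} = g^{u0 prod u_i^{x_i}}
    y = e(pi[-1], h)                         # y = e(g, h)^{u0 prod u_i^{x_i}}
    return y, pi

def verify(x, y, pi):
    ok = e(pi[0], g) == (e(g, g) if x[0] == 0 else e(U[0], g))
    for i in range(1, l):
        ok &= e(pi[i], g) == (e(pi[i - 1], g) if x[i] == 0
                              else e(pi[i - 1], U[i]))
    ok &= e(pi[l], g) == e(pi[l - 1], U0) and e(pi[l], h) == y
    return ok

x = [1, 0, 1, 1, 0, 0, 1, 0]
y, pi = evaluate(x)
print(verify(x, y, pi))
```

Tampering with y or any π_j breaks at least one ladder equation, which is the behaviour the real pairing-based checks enforce without revealing the u_i.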

3. Static Aggregate VRFs

In a (static) aggregate PRF [10] (we call the aggregate PRF proposed by Cohen, Goldwasser, and Vaikuntanathan [10] static since their aggregation algorithm takes the secret key of the PRF as input), there is an additional aggregation algorithm which, given the secret key, can efficiently compute the aggregated result of all function values over a set of inputs in polynomial time, even if the input set is of super-polynomial size. In an aggregate VRF, similarly to an aggregate PRF, an additional aggregation algorithm is added to the ordinary VRF [1]; thus, aggregate VRFs can be regarded as an extension of ordinary VRFs. A static aggregate VRF differs from a static aggregate PRF [10] in that, given the secret key, the aggregation operation is performed not only on the function values but also on the corresponding proofs. Moreover, the resulting aggregate function value can be publicly verified using the aggregate proof (together with the public key and the input subset), which proves that the aggregate function value is the correct aggregation of all function values over the input subset.
Cohen, Goldwasser, and Vaikuntanathan [10] were the first to consider the notion of aggregate PRFs over super-polynomially large but efficiently recognizable set classes. In their model, they treat an efficiently recognizable set ensemble as a family of predicates, i.e., for any set S there exists a polynomial-size boolean circuit C : {0,1}* → {0,1} such that x ∈ S if and only if C(x) = 1. Boneh and Waters [11] also employed such a predicate to define the concept of constrained PRFs with respect to a constrained set. In this paper, we employ this concept and formalization of efficiently recognizable sets in the definition of static aggregate VRFs.
Recall that a verifiable random function (VRF) [1] is a function F : K × X → Y × P defined over a secret key space K, a domain X, a range Y, and a proof space P (these sets may be parameterized by the security parameter λ). Let Fun : K × X → Y denote the mapping of random function evaluations on arbitrary inputs and Prove : K × X → P denote the mapping of proof evaluations on inputs, each of which can be computed by a deterministic polynomial-time algorithm.
Let Ψ_λ : (Y_λ, P_λ)* → (Y_λ, P_λ) be the aggregation function that takes as input multiple pairs of values from the range Y_λ and the proof space P_λ of the function family and aggregates them, outputting an aggregated function value in the range Y_λ and the corresponding aggregated proof in the proof space P_λ.
Definition 2
(Static Aggregate VRF). Let F = {F_λ}_{λ∈N} be a VRF function family where each function F ∈ F_λ : K × X → Y × P, computable in polynomial time, is defined over a key space K, a domain X, a range Y, and a proof space P. Let S be an efficiently recognizable ensemble of sets {S_λ}_λ where, for any S ∈ S, S ⊆ X, and let Ψ_λ : (Y_λ, P_λ)* → (Y_λ, P_λ) be an aggregation function. We say that F is an (S, Ψ)-static aggregate verifiable random function family (abbreviated (S, Ψ)-sAgg-VRFs) if it satisfies:
  • Efficient aggregation: There exists an efficient (polynomial-time) algorithm Aggregate_{F,S,Ψ}(sk, S) → (y_agg, π_agg) which, on input the secret key sk of a VRF and a set S ∈ S, outputs aggregated results (y_agg, π_agg) ∈ Y × P such that, for any S ∈ S, Aggregate_{F,S,Ψ}(sk, S) = Ψ(F_sk(x_1), …, F_sk(x_{|S|})), where F_sk(x_i) = (y_i = Fun_sk(x_i), π_i = Prove_sk(x_i)) for i = 1, …, |S|;
  • Verification for aggregation: There exists an efficient (polynomial-time) algorithm AggVerify(pk, S, y_agg, π_agg) → {0,1} which, on input the aggregated function value y_agg and the proof π_agg for a set S ∈ S of the domain, verifies whether y_agg = Ψ(Fun_sk(x_1), …, Fun_sk(x_{|S|})) using the aggregated proof π_agg.
  • Correctness of aggregated values: For all (pk, sk) ← Setup(1^λ), sets S ∈ S, and aggregate functions Ψ ∈ Ψ_λ, let (y, π) ← Eval(sk, x) and (y_agg, π_agg) ← Aggregate_{F,S,Ψ}(sk, S); then AggVerify(pk, S, y_agg, π_agg) = 1.
  • Pseudorandomness: For all p.p.t. attackers D = (D_1, D_2), there exists a negligible function μ(λ) such that:
    Pr[(pk, sk) ← Setup(1^λ); (x*, st) ← D_1^{Eval(sk,·), Aggregate_{F,S,Ψ}(sk,·)}(pk); b ← {0,1}; y_0 = Fun_sk(x*); y_1 ← Y; b′ ← D_2^{Eval(sk,·), Aggregate_{F,S,Ψ}(sk,·)}(y_b, st) : b′ = b ∧ C_{S_i}(x*) = 0 for all S_i ∈ L_Agg ∧ x* ∉ L_Eval] ≤ 1/2 + μ(λ),
    where L_Eval is the set of all inputs that D queries to its oracle Eval, L_Agg consists of all the sets S_i that D queries to its oracle Aggregate, and C_{S_i} is the polynomial-size boolean circuit that recognizes the set S_i.
  • Compactness: There exists a polynomial poly(·) such that, for every λ ∈ N, x ∈ X, set S ∈ S, and aggregate function Ψ ∈ Ψ_λ, it holds with overwhelming probability over (pk, sk) ← Setup(1^λ), (y, π) ← Eval(sk, x), and Aggregate_{F,S,Ψ}(sk, S) → (y_agg, π_agg) that the resulting aggregated value y_agg and aggregated proof π_agg satisfy |y_agg|, |π_agg| ≤ poly(λ, |x|). In particular, the sizes of y_agg and π_agg are independent of the size of the set S.
We stress that the set S over which the aggregation is performed can be super-polynomially large. Clearly, given an exponential number of values F_sk(·), it is impossible to aggregate them directly; yet we show how to efficiently compute the aggregation function on an exponentially large set for a concrete VRF, given the secret key.
Some explanations on the notion of static aggregate VRFs. Firstly, the algorithm Aggregate_{F,S,Ψ} achieves an efficient aggregation of function values/proofs over super-polynomially large sets S in polynomial time. We stress that our aim is to handle super-polynomially large sets, since, for any constant-size set, the (product) aggregation can be computed trivially given the function value/proof pairs on all inputs in the set. Secondly, the verification algorithm AggVerify is employed to efficiently verify the correctness of the aggregated function value y_agg. Given {(x_i, y_i, π_i)}_{i=1}^{|S|} and the aggregated function value y_agg, there is a trivial way to verify the correctness of y_agg: verify the correctness of each tuple (x_i, y_i, π_i) for i = 1, …, |S| and then check whether y_agg = ∏_{i=1}^{|S|} y_i; but this is not computable in polynomial time if S is a super-polynomially large set. Therefore, our main concern is to achieve efficient verification of y_agg via the corresponding proof π_agg, whose size is independent of the size of S. Thirdly, the condition AggVerify(pk, S, y_agg, π_agg) = 1 is interpreted as: the value y_agg is a correct aggregation of {Fun_sk(x_i)}_{i=1}^{|S|}, i.e., y_agg = Ψ(Fun_sk(x_1), …, Fun_sk(x_{|S|})), as certified by the corresponding proof π_agg. We note that verification for the aggregation does not violate the uniqueness of the underlying basic VRF. Indeed, there may exist different sets S_1 and S_2 that result in the same y_agg, but the uniqueness for any input point x ∈ S_1 (or x ∈ S_2) always holds. Looking ahead, in our instantiation of aggregate VRFs, finding two sets S_1 ≠ S_2 such that ∏_{x_i ∈ S_1} Fun_sk(x_i) = ∏_{x_i ∈ S_2} Fun_sk(x_i) is computationally hard without knowledge of sk.
Lastly, the condition AggVerify ( p k , S , y agg , π agg ) = 1 does not imply Verify ( p k , x i , y i , π i ) = 1 for all i = 1 , , S , since by maintaining a correct pair ( y agg , π agg ) , we always can alter any two tuples as ( x i , y i · r , π i ) and ( x j , y j · r 1 , π j ) for any random r G , which means Verify ( x i , y i · r , π i ) = Verify ( x j , y j · r 1 , π j ) = 0 .

3.1. A Static Aggregate VRF for Bit-Fixing Sets

We now propose a static aggregate VRF whose aggregation function computes products over bit-fixing sets. In a nutshell, a bit-fixing set consists of bit-strings that match a particular bit pattern. We naturally represent such a set by a string in $\{0,1,*\}^{\mathrm{poly}(\lambda)}$, with 0 and 1 indicating fixed bit locations and $*$ indicating a free bit location. Formally, for a pattern string $v \in \{0,1,*\}^{\mathrm{poly}(\lambda)}$, we define the bit-fixing set as $S_v = \{x \in \{0,1\}^{\mathrm{poly}(\lambda)} : \forall i,\; x_i = v_i \text{ or } v_i = *\}$.
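To make the set definition concrete, here is a minimal Python sketch (the function name and the string-based encoding of patterns are our own illustration) that tests membership of a bit-string in $S_v$:

```python
def in_bitfixing_set(x: str, v: str) -> bool:
    """Return True iff x lies in the bit-fixing set S_v.

    x is a bit-string over {0,1}; v is a pattern over {0,1,*},
    where '*' marks a free position and '0'/'1' fix that bit.
    """
    return len(x) == len(v) and all(p == '*' or b == p for b, p in zip(x, v))
```

For the pattern `v = "0*1*"`, the set $S_v$ contains exactly the four strings 0010, 0011, 0110, and 0111.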
We show, based on an elegant VRF construction proposed by Hohenberger and Waters [13] (abbreviated as the HW-VRF scheme), how to compute the product aggregation function over a bit-fixing set in polynomial time, thus yielding a static aggregate VRF. Please refer to Section 2.2 for a detailed description of the HW-VRF scheme. The aggregation algorithm for bit-fixing sets takes as input the VRF secret key $sk$ and a string $v \in \{0,1,*\}^{\ell}$. Let $\mathrm{Fixed}(v) = \{i \in [\ell] : v_i \in \{0,1\}\}$ and $|\mathrm{Fixed}(v)| = \tau$. The aggregation algorithm and the verification algorithm for an aggregated function value and the corresponding proof work as follows:
  • $\mathsf{Aggregate}(sk, v)$:
    Let $\pi_0^{agg} := g^{2^{\ell-\tau}}$. We define the aggregated proof as $\pi^{agg} = (\pi_1^{agg}, \dots, \pi_\ell^{agg}, \pi_{\ell+1}^{agg})$, where for $i = 1, \dots, \ell$,
    $$\pi_i^{agg} = \begin{cases} (\pi_{i-1}^{agg})^{u_i^{v_i}} & \text{if } i \in \mathrm{Fixed}(v), \\ (\pi_{i-1}^{agg})^{(u_i+1)/2} & \text{if } i \notin \mathrm{Fixed}(v), \end{cases}$$
    and $\pi_{\ell+1}^{agg} = (\pi_\ell^{agg})^{u_0}$. The aggregated function value is defined as:
    $$y_{agg} = e(g, h)^{u_0 \cdot \left(\prod_{i \in \mathrm{Fixed}(v)} u_i^{v_i}\right) \cdot \left(\prod_{i \in [\ell] \setminus \mathrm{Fixed}(v)} (u_i + 1)\right)}.$$
  • $\mathsf{AggVerify}(pk, v, y_{agg}, \pi^{agg})$:
    Parse $\pi^{agg} = (\pi_1^{agg}, \dots, \pi_\ell^{agg}, \pi_{\ell+1}^{agg})$ and let $\pi_0^{agg} = g^{2^{\ell-\tau}}$. The aggregation verification algorithm checks whether the following equations are satisfied: for $i = 1, \dots, \ell$,
    $$e(g, \pi_i^{agg}) = \begin{cases} e(\pi_{i-1}^{agg}, g) & \text{if } i \in \mathrm{Fixed}(v) \text{ and } v_i = 0, \\ e(\pi_{i-1}^{agg}, U_i) & \text{if } i \in \mathrm{Fixed}(v) \text{ and } v_i = 1, \\ e(\pi_{i-1}^{agg}, g \cdot U_i)^{1/2} & \text{if } i \notin \mathrm{Fixed}(v), \end{cases}$$
    and $e(\pi_{\ell+1}^{agg}, g) = e(\pi_\ell^{agg}, U_0)$ and $e(\pi_{\ell+1}^{agg}, h) = y_{agg}$. Output 1 if and only if all checks pass; otherwise, output 0.
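The reason a product over the super-polynomially large set $S_v$ collapses to the closed form in $y_{agg}$ is the identity $\sum_{x \in S_v} \prod_i u_i^{x_i} = (\prod_{i \in \mathrm{Fixed}(v)} u_i^{v_i}) \cdot (\prod_{i \notin \mathrm{Fixed}(v)} (u_i + 1))$: each free bit contributes a factor $(1 + u_i)$. A small Python sketch (with toy integer exponents $u_i$ rather than elements of $\mathbb{Z}_p$, purely our own illustration) checks this identity by brute force:

```python
from itertools import product

def brute_sum(u, v):
    # Enumerate all x in S_v and sum prod_i u_i^{x_i} (exponential in free bits)
    free = [i for i, c in enumerate(v) if c == '*']
    total = 0
    for bits in product((0, 1), repeat=len(free)):
        x = list(v)
        for i, b in zip(free, bits):
            x[i] = str(b)
        term = 1
        for ui, xi in zip(u, x):
            if xi == '1':
                term *= ui
        total += term
    return total

def closed_form(u, v):
    # Fixed bit i contributes u_i^{v_i}; each free bit contributes (u_i + 1)
    out = 1
    for ui, c in zip(u, v):
        if c == '1':
            out *= ui
        elif c == '*':
            out *= ui + 1
    return out
```

For `u = (2, 3, 5, 7)` and `v = "1**0"`, both functions return 48; the aggregation algorithm in effect raises $e(g,h)^{u_0}$ to this closed-form exponent without ever enumerating $S_v$.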
Letting $\mathcal{S}^{BF} = \{\mathcal{S}_{(\lambda)}^{BF}\}_{\lambda \in \mathbb{N}}$, where $\mathcal{S}_{(\lambda)}^{BF} = \{0,1,*\}^{\ell}$ denotes the family of bit-fixing sets over $\{0,1\}^{\ell}$, we now prove the following theorem:
Theorem 1.
Let $\epsilon > 0$ be a constant. Choose the security parameter $\lambda = \Omega(1/\epsilon)$, and assume the $(2^{\lambda^{\epsilon}}, 2^{-\lambda^{\epsilon}})$-hardness of q-DDHE over the groups $\mathbb{G}$ and $\mathbb{G}_T$. Then, the collection of verifiable random functions $\mathcal{F}$ defined above is a secure aggregate VRF with respect to the subsets $\mathcal{S}^{BF}$ and the product aggregation function over $\mathbb{G}$ and $\mathbb{G}_T$.
Compactness follows immediately, since the aggregated function value $y_{agg} \in \mathbb{G}_T$ and the aggregated proof $\pi^{agg} = (\pi_1^{agg}, \dots, \pi_{\ell+1}^{agg}) \in \mathbb{G}^{\ell+1}$ have sizes independent of the size of the bit-fixing set $S_v$, i.e., of $2^{\ell-\tau}$.
The proof of pseudorandomness is similar to that of the HW-VRF scheme in [13], since our static aggregate VRF is built on top of HW-VRF; the only additional step in the proof is simulating the responses to aggregation queries. Here, we provide the simulation routine that the q-DDHE solver executes to act as a challenger in the pseudorandomness game of the aggregate VRFs. The detailed analysis of the game sequence is similar to the related descriptions in [13].
Proof of Theorem 1.
Let $Q(\lambda)$ be a polynomial upper bound on the number of queries made by a p.p.t. distinguisher $D$ to the oracles $\mathsf{Eval}$ and $\mathsf{Aggregate}$. We use $D$ to create an adversary $B$ such that, if $D$ wins the pseudorandomness game for aggregate VRFs with probability $\frac{1}{2} + \epsilon$, then $B$ breaks the q-DDHE assumption with probability $\frac{1}{2} + \frac{3\epsilon}{64 Q (\ell+1)}$, where $q = 4Q(\ell+1)$ and $\ell$ is the input length of the static Agg-VRFs.
Given $(\mathbb{G}, p, g, h, g^a, \dots, g^{a^{q-1}}, g^{a^{q+1}}, \dots, g^{a^{2q}}, y)$, to distinguish $y = e(g, h)^{a^q}$ from $y \xleftarrow{\$} \mathbb{G}_T$, $B$ proceeds as follows:
$\mathsf{Setup}$. $B$ sets $m = 4Q$ and chooses an integer $k \xleftarrow{\$} [0, \ell]$. It then picks random integers $r_1, \dots, r_\ell, r'$ from the interval $[0, m-1]$ and random elements $s_1, \dots, s_\ell, s' \in \mathbb{Z}_p$, which are all kept internal by $B$.
For $x \in \{0,1\}^{\ell}$, let $x_i$ denote the $i$th bit of $x$. Define the following functions:
$$C(x) = m(1+k) + r' + \sum_{i=1}^{\ell} x_i r_i, \quad \hat{C}(x, i) = \sum_{j=1}^{i} x_j r_j, \quad J(x) = s' \prod_{i=1}^{\ell} s_i^{x_i}, \quad \hat{J}(x, i) = \prod_{j=1}^{i} s_j^{x_j}.$$
$B$ sets $U_0 = (g^{a^{m(1+k)+r'}})^{s'}$ and $U_i = (g^{a^{r_i}})^{s_i}$ for $i = 1, \dots, \ell$. It sets the public key as $(\mathbb{G}, p, g, h, U_0, \dots, U_\ell)$, and the secret key implicitly consists of the values $u_0 = a^{m(1+k)+r'} s'$ and $\{u_i = a^{r_i} s_i\}_{i \in [\ell]}$.
$\mathsf{Oracle\ Queries\ to\ Eval}(sk, \cdot)$. The distinguisher $D$ makes queries for VRF evaluations and proofs. On receiving an input $x$, $B$ first checks whether $C(x) = q$ and aborts if so. Otherwise, it defines the function value as $F(x) = e((g^{a^{C(x)}})^{J(x)}, h)$ and the corresponding proof as $\pi = (\pi_0, \pi_1, \dots, \pi_\ell)$, where $\pi_0 = (g^{a^{C(x)}})^{J(x)}$ and $\pi_i = (g^{a^{\hat{C}(x,i)}})^{\hat{J}(x,i)}$ for $i = 1, \dots, \ell$. Note that for any $x \in \{0,1\}^{\ell}$, it holds that:
  • The maximum value of $C(x)$ is $m(1+\ell) + (1+\ell)(m-1) = (2m-1)(1+\ell) < 2m(1+\ell) = 2q$.
  • The maximum value of $\hat{C}(x, i)$ is $(m-1)\ell < m(1+\ell) = q$ for $i \in [\ell]$.
As a result, whenever $C(x) \neq q$, $B$ can answer the $\mathsf{Eval}$ query.
$\mathsf{Oracle\ Queries\ to\ Aggregate}_{F_{sk}, S, \Psi}(\cdot)$. The distinguisher $D$ also makes queries for aggregate values. On receiving a pattern string $v \in \{0,1,*\}^{\ell}$, $B$ uses the above secret key to compute the aggregated proof and the aggregated function value. More precisely, $B$ answers the query $\mathsf{Aggregate}_{F_{sk}, S, \Psi}(S_v)$ as follows. Let $\pi_0^{agg} := g^{2^{\ell-\tau}}$. Since the aggregated proof is defined as $\pi^{agg} = (\pi_1^{agg}, \dots, \pi_\ell^{agg}, \pi_{\ell+1}^{agg})$, where, for $i = 1, \dots, \ell$,
$$\pi_i^{agg} = \begin{cases} (\pi_{i-1}^{agg})^{u_i^{v_i}} & \text{if } i \in \mathrm{Fixed}(v), \\ (\pi_{i-1}^{agg})^{(u_i+1)/2} & \text{if } i \notin \mathrm{Fixed}(v), \end{cases}$$
and $\pi_{\ell+1}^{agg} = (\pi_\ell^{agg})^{u_0}$, $B$ concretely computes:
$$\pi_1^{agg} = \begin{cases} (g^{a^{v_1 r_1} s_1^{v_1}})^{2^{\ell-\tau}} & \text{if } 1 \in \mathrm{Fixed}(v), \\ g^{2^{\ell-\tau-1}(1 + a^{r_1} s_1)} & \text{if } 1 \notin \mathrm{Fixed}(v), \end{cases}$$
and, for $j = 2, \dots, \ell$, $\pi_j^{agg} = g^{2^{\ell-\tau-\bar{\tau}_j} \prod_{i \in [j] \cap \mathrm{Fixed}(v)} (a^{r_i} s_i)^{v_i} \prod_{i \in \mathrm{Flex}(v_j)} (1 + a^{r_i} s_i)}$, where $\mathrm{Flex}(v_j) := \{i \in [\ell] : i \leq j \wedge v_i = *\}$ and $\bar{\tau}_j := |\mathrm{Flex}(v_j)|$. These values can be computed by $B$ through its knowledge of the $r_i, s_i$. The value
$$\pi_{\ell+1}^{agg} = g^{\prod_{i \in \mathrm{Fixed}(v)} (a^{r_i} s_i)^{v_i} \cdot \prod_{i \in [\ell] \setminus \mathrm{Fixed}(v)} (1 + a^{r_i} s_i) \cdot a^{m(1+k)+r'} \cdot s'}$$
can be handled similarly using $m, k, r', s'$. The aggregated function value is then defined as $y_{agg} = e(\pi_{\ell+1}^{agg}, h)$.
$\mathsf{Challenge}$. $D$ sends a challenge input $x^*$ under the condition that $x^*$ was never queried to the $\mathsf{Eval}$ oracle. If $C(x^*) = q$, $B$ returns the value $y$; when $D$ responds with a bit $b'$, $B$ outputs $b'$ as the guess to its own q-DDHE challenger. If $C(x^*) \neq q$, $B$ outputs a random bit as its guess. This completes our description of the q-DDHE adversary $B$. □
Remark 1.
Discussion on the impossibility of product aggregation on JN-VRF for bit-fixing sets. Recently, based on the q-DDH assumption, Jager and Niehues [14] proposed the currently most efficient VRFs (abbreviated as the JN-VRF scheme) with full adaptive security in the standard model. JN-VRF shares almost the same framework as HW-VRF; the only difference is that the former applies an admissible hash function $H_{AHF}$ to the input $x$ before evaluating the function value and the corresponding proof, while the latter does not. We stress that the hash function $H_{AHF}: \{0,1\}^{\ell} \to \{0,1\}^{n}$ destroys the nice pattern shared by the inputs in a bit-fixing set. That is, for the inputs $x \in S_v$ (i.e., for all $i \in [\ell]$, $x_i = v_i$ or $v_i = *$), there does not exist a bit-string $v' \in \{0,1,*\}^{n}$ such that $H_{AHF}(x) = h_1 \cdots h_n \in S_{v'}$, where $S_{v'} = \{h \in \{0,1\}^{n} : \forall j,\; h_j = v'_j \text{ or } v'_j = *\}$; otherwise, it would be possible to find collisions of $H_{AHF}$. Therefore, given exponentially many values $F_{sk}(H_{AHF}(x))$, it is impossible to perform product aggregation over them efficiently using the same technique as in the last subsection.

3.2. Efficiency Analysis

Analysis of Costs. The instantiation in Section 3.1 is very compact, since the aggregated function value consists of a single element of $\mathbb{G}_T$, while the aggregated proof is composed of $\ell+1$ elements of $\mathbb{G}$; both are independent of the size of the set $S$. The $\mathsf{Aggregate}$ algorithm requires at most $\ell$ multiplications plus one exponentiation to compute $y_{agg}$ and $\ell+2$ exponentiations to evaluate $\pi^{agg}$, which is far less computation than the $2^{\ell-\tau}$ multiplications needed to obtain $y_{agg}$ and the $2^{\ell-\tau} \cdot (\ell+1)$ multiplications needed to obtain $\pi^{agg}$ over all $2^{\ell-\tau}$ inputs in $S$. The $\mathsf{AggVerify}$ algorithm requires at most $2\ell+3$ pairing operations, while $2^{\ell-\tau} \cdot (2\ell+3)$ pairings are needed to verify the $2^{\ell-\tau}$ function values/proofs on all inputs in $S$ one by one.
We summarize the costs of the $\mathsf{Aggregate}$ and $\mathsf{AggVerify}$ algorithms in Table 1, where MUL stands for a multiplication operation, EXP for an exponentiation operation, and ADD for an addition operation.

3.3. Implementation and Experimental Results

Choice of elliptic curves and pairings. In our implementation, we use Type A curves as described in [28], which can be defined as follows. Let $q$ be a prime satisfying $q \equiv 3 \bmod 4$ and let $p$ be some odd prime dividing $q+1$. Let $E$ be the elliptic curve defined by the equation $y^2 = x^3 + x$ over $\mathbb{F}_q$; then $E(\mathbb{F}_q)$ is supersingular, $\#E(\mathbb{F}_q) = q+1$, $\#E(\mathbb{F}_{q^2}) = (q+1)^2$, and $\mathbb{G} = E(\mathbb{F}_q)[p]$ is a cyclic group of order $p$ with embedding degree $k = 2$. Given the distortion map $\Psi(x, y) = (-x, iy)$, where $i$ is a square root of $-1$, $\Psi$ maps points of $E(\mathbb{F}_q)$ to points of $E(\mathbb{F}_{q^2}) \setminus E(\mathbb{F}_q)$; if $f$ denotes the Tate pairing on the curve $E(\mathbb{F}_{q^2})$, then defining $e: \mathbb{G} \times \mathbb{G} \to \mathbb{F}_{q^2}$ by $e(P, Q) = f(P, \Psi(Q))$ gives a nondegenerate bilinear map. For more details about the choice of parameters, please refer to [28]. In our case, we use the standard parameters proposed by Lynn [28] (https://crypto.stanford.edu/pbc/), where $q$ has 126 bits and $p = 730750818665451621361119245571504901405976559617$. To generate random elements, we use libsodium (https://libsodium.gitbook.io/). Our implementation uses the C programming language and the GNU Multiple Precision Arithmetic Library (GMP) for big-number arithmetic. We use GCC version 10.0.1 with the following compilation flags: “-O3 -m64 -fPIC -pthread -MMD -MP -MF”.
Implementing HW-VRF. In our implementation, we use the bilinear pairing implemented by Lynn [28] for the BLS signature scheme. When computing the function value $\mathsf{Fun}_{sk}(x) = e(g, h)^{u_0 \prod_{i=1}^{\ell} u_i^{x_i}}$, one would usually first compute the pairing $e(g, h)$ and then perform the exponentiation. However, exponentiation of an element of $\mathbb{G}_T$ is expensive. To improve the efficiency of computing $\mathsf{Fun}_{sk}(x)$, we use the identity $e(g, h)^{ab} = e(g^a, h^b)$ and calculate $\mathsf{Fun}_{sk}(x)$ as $e(g^{u_0}, h^{\prod_{i=1}^{\ell} u_i^{x_i}})$. Since computing $g^a$ (or $h^b$) corresponds to the scalar multiplication of a point $P$ (or $Q$) by a scalar $a$ (or $b$), this trick avoids the exponentiation of an element of $\mathbb{G}_T$ at the cost of two scalar multiplications of curve points.
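The bilinearity behind this trick can be illustrated without a pairing library. The Python sketch below uses a toy bilinear map $e(P, Q) = PQ \bmod n$ over the additive groups $(\mathbb{Z}_n, +)$, so "exponentiation" $g^a$ becomes the cheap scalar multiplication $a \cdot g$; the map, the modulus, and all constants are our own toy stand-ins, not the Type A pairing of the actual implementation:

```python
# Toy symmetric bilinear map over additive groups (Z_n, +):
#   e(P, Q) = P * Q mod n, hence e(a*g, b*h) = a*b * e(g, h) mod n.
n = 1_000_003  # a small prime, standing in for the group order

def e(P: int, Q: int) -> int:
    return P * Q % n

g, h = 12345, 67890
a, b = 111, 222

lhs = a * b * e(g, h) % n       # "e(g,h)^{ab}": one target-group exponentiation
rhs = e(a * g % n, b * h % n)   # two cheap scalar multiplications, then the map
assert lhs == rhs
```

In the real scheme the two sides cost very differently: the right-hand side trades an expensive $\mathbb{G}_T$ exponentiation for two point scalar multiplications.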
Implementing our static Agg-VRFs. Since $p$ is fixed, when calculating the aggregated proof as $\pi_i^{agg} := (\pi_{i-1}^{agg})^{(u_i+1)/2}$, we can precompute the inverse of 2 and thus only need to compute $(\pi_{i-1}^{agg})^{(u_i+1) \cdot \mathrm{inv}(2)}$ as the scalar multiplication of a curve point by the scalar $(u_i+1) \cdot \mathrm{inv}(2)$. We use a similar approach when computing $e(\pi_{i-1}^{agg}, g \cdot U_i)^{1/2}$: in this case, we always evaluate $e((\pi_{i-1}^{agg})^{\mathrm{inv}(2)}, g \cdot U_i)$. Again, $(\pi_{i-1}^{agg})^{\mathrm{inv}(2)}$ corresponds to the scalar multiplication of a point by $\mathrm{inv}(2)$, while $g \cdot U_i$ corresponds to the addition of two points on the elliptic curve.
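Halving in the exponent is sound because the group has odd prime order $p$, so 2 is invertible modulo $p$. A small Python check (toy parameters $p = 11$, $q = 23$, and $g = 2$ of order 11 in $\mathbb{Z}_{23}^*$, all our own) confirms that raising to $(u+1) \cdot \mathrm{inv}(2)$ and then squaring recovers $g^{u+1}$:

```python
p, q = 11, 23          # toy: g = 2 has prime order p = 11 in Z_23^*
g = 2
inv2 = pow(2, -1, p)   # precomputed once, as in the implementation

for u in range(10):
    # (pi^{agg})^{(u+1)/2} analogue: exponentiate by (u+1)*inv(2) mod p ...
    half = pow(g, (u + 1) * inv2 % p, q)
    # ... so that squaring gives back g^{u+1}
    assert half * half % q == pow(g, u + 1, q)
```

The same reasoning justifies moving the $1/2$ from the pairing output onto the cheaper $\mathbb{G}$-side operand in $\mathsf{AggVerify}$.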
Comparison. We tested the performance of our static Agg-VRFs against a standard (non-aggregate) VRF for five input lengths: 56, 128, 256, 512, and 1024 bits. In all cases, we set the number of fixed bits to 20, so the natural comparison is between our aggregated VRF and the evaluation and verification of $2^{36}$, $2^{108}$, $2^{236}$, $2^{492}$, and $2^{1004}$ VRF values, respectively. To perform our comparisons, we recorded the verification time for 100 pairs of function values and corresponding proofs when verification is performed one by one (i.e., without aggregation), against the corresponding performance of our proposed static aggregate VRF. Obviously, $100 \ll 2^{36} \ll 2^{108} \ll 2^{236} \ll 2^{492} \ll 2^{1004}$; in fact, any number smaller than $2^{36}$ would do, and we chose 100 to keep the running time of the standard (non-aggregate) VRF sensible. Taking the 56-bit input length with 20 fixed bits as an example, the bit-fixing set contains $2^{36}$ elements, so one should in principle measure the verification time for $2^{36}$ value/proof pairs, which is drastically larger than the running time for only 100 pairs. Thus, showing that our aggregate VRF is much more efficient than the evaluation and verification of 100 VRF values immediately implies that it is more efficient than the evaluation and verification of $2^{36}$, $2^{108}$, $2^{236}$, $2^{492}$, and $2^{1004}$ VRF values, respectively.
Table 2 shows the results of our experiments. The column “Verify” corresponds to the time required to verify a single function value/proof pair. We measured how long it takes to aggregate all the function values and proofs for inputs belonging to the bit-fixing set, as well as the verification time for the aggregated function value/proof. The column “Total Verification” gives the total time required to verify 100 pairs of function values/proofs via the standard VRF (i.e., one-by-one verification), while the column “AggVerify” gives the time needed to verify the aggregated value/proof via the aggregate VRF (i.e., the aggregated verification algorithm). The experimental results show that, even for 1024-bit inputs, the aggregation of $2^{1004}$ function value/proof pairs can be computed very efficiently, in 6881 ms. Moreover, the time required to verify the aggregated function value/proof of $2^{1004}$ pairs is only about 50% higher than the verification time for a single function value/proof pair of HW-VRF.
We stress that our implementation is hardware independent; the only requirement is a compiler that can translate C code to the target architecture. To estimate the effect of running our code for HW-VRF and our aggregate VRFs on architectures with different clock frequencies, we took the original runs with 56, 128, 256, 512, and 1024 bits and scaled the measured times by the ratio between the frequencies, as shown in Table 3. Across the different frequencies (GHz), the verification time for the aggregated function values/proofs is 30–50% higher than that for a single function value/proof pair of HW-VRF, as shown in Table 3.
Moreover, we performed experiments for input lengths $\ell = 256$ (depicted in Figure 1b) and $\ell = 1024$ (Figure 1a), choosing different numbers $\tau$ of fixed bits to observe the variation of the time spent on the aggregation and verification processes. For $\ell = 256$, we ran experiments for three cases: the worst case, where all $\tau$ fixed bits are 1; the best case, where all $\tau$ fixed bits are 0; and the average case, where the $\tau$ fixed bits are chosen at random from $\{0,1\}$. In the worst case, the $\mathsf{Aggregate}$ algorithm requires 256 multiplications plus 1 exponentiation to compute $y_{agg}$ and 258 exponentiations to evaluate $\pi^{agg}$, while the $\mathsf{AggVerify}$ algorithm requires 515 pairing operations, as shown in Figure 1b with the square-dot dashed line; these cost almost the same amount of time for different $\tau$. In the best case, the $\mathsf{Aggregate}$ algorithm requires $256 - \tau$ multiplications plus 1 exponentiation to compute $y_{agg}$ and $258 - \tau$ exponentiations to evaluate $\pi^{agg}$, while the $\mathsf{AggVerify}$ algorithm requires $516 - \tau$ pairing operations, as shown in Figure 1b with the round-dot dashed line, where the running time decreases as $\tau$ increases. The average case, shown with solid lines in Figure 1b, lies between the best and worst cases. For $\ell = 1024$, we show the time spent on the aggregation and verification algorithms in the average case, i.e., for randomly chosen $\tau$ fixed bits.

4. Application to the E-Lottery Scheme

4.1. Discussion on the Practical Instantiation of Chow et al.’s E-Lottery

Chow et al. [7] proposed an e-lottery scheme based on VRFs, in which a random number generation mechanism determines not only a winning number (chosen from a predefined domain) but also provides public verifiability of the winning result, guaranteeing that the dealer cannot cheat in the random number generation process. Let the numbers used in the lottery game be $\{1, 2, \dots, N_{max}\}$. To generate the winning number, Chow et al. [7] employed a VRF mapping bit-strings of length $k$ to bit-strings of length $l$. More precisely, the dealer first computes $(w_0, \pi_0) := (\mathsf{VRF.Fun}(sk_{VRF}, d), \mathsf{VRF.Prove}(sk_{VRF}, d))$, where $d$ is the hash value of all tickets sold so far. If $w_0 > N_{max}$, the dealer iteratively calculates $(w_i, \pi_i) := (\mathsf{VRF.Fun}(sk_{VRF}, w_{i-1} \| d), \mathsf{VRF.Prove}(sk_{VRF}, w_{i-1} \| d))$ for $i = 1, 2, \dots$, until obtaining $(w_t, \pi_t)$ such that $w_t \leq N_{max}$. Afterwards, the dealer publishes the final $(w_t, \pi_t)$ together with the intermediate tuples $(w_i, \pi_i)$, $i = 0, 1, \dots, t-1$, of VRF function values and corresponding proofs.
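The iterative generation loop can be sketched as follows, with a hash-based mock standing in for the VRF (the real scheme uses a pairing-based VRF that also outputs proofs; `vrf_mock`, the 64-bit output, and `N_MAX` are our own toy parameters):

```python
import hashlib

N_MAX = 2**60  # toy winning-number bound

def vrf_mock(sk: bytes, inp: bytes) -> int:
    # Stand-in for (VRF.Fun, VRF.Prove); a real VRF also returns a proof pi_i
    return int.from_bytes(hashlib.sha256(sk + inp).digest()[:8], "big")

def winning_number(sk: bytes, d: bytes):
    """Iterate w_i = VRF(w_{i-1} || d) until w_t <= N_MAX; return (w_t, trail)."""
    trail = []                      # intermediate w_i the dealer must publish
    w = vrf_mock(sk, d)
    while w > N_MAX:
        trail.append(w)
        w = vrf_mock(sk, w.to_bytes(8, "big") + d)
    return w, trail
```

The player must re-verify every element of `trail` plus the final value, one proof each; this per-iteration overhead is exactly what the aggregate-VRF variant below removes.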
To instantiate such an e-lottery scheme, a practical and concrete VRF scheme is needed. To the best of our knowledge, so far, most of the efficient instantiations of VRFs are based on bilinear maps (e.g., [13,14,18,19,22]), where the input spaces of VRFs are defined over binary strings and the function values on inputs are elements in a group G T , i.e., w i G T , while the verification needs to compute some pairings.
Chow et al.'s construction [7] is very promising in the ideal case where, after a small number $t$ of VRF applications, a function value $w_t$ with $w_t \leq N_{max}$ is obtained. But what if $t$ is large? Then the dealer must evaluate the VRF many more times, and the player must verify the correctness of correspondingly many tuples in order to check the winning result, which imposes more pairing computations on both the dealer and the player.
In the following section, we show how we modify Chow et al.’s e-lottery scheme [7] by employing our aggregate VRF, in order to reduce the computational overhead for the verification process of each player.

4.2. An E-Lottery Scheme Based on Aggregate VRFs

Observe that each intermediate tuple $(w_i, \pi_i)$ is a VRF function value/proof on an input $w_{i-1} \| d$; these inputs share the same last $|d|$ bits and thus form a bit-fixing set. We can therefore compress all the intermediate tuples into a single function value $w_{agg} = e(g, h)^{u_0 \cdot (\prod_{i=\ell-|d|+1}^{\ell} u_i^{d_i}) \cdot (\prod_{i=1}^{\ell-|d|} (u_i+1))}$ with a corresponding proof $\pi_{agg}$. If the exponent of $w_{agg}$ is less than $N_{max}$, $w_{agg}$ can be set as the winning number. Otherwise, the dealer outputs a value $w'_{agg} = e(g, h)^{u_0 \cdot (\prod_{i=\ell-|d|+1}^{\ell} u_i^{d_i}) \cdot (\prod_{i=1, i \notin \{j_1, \dots, j_k\}}^{\ell-|d|} (u_i+1))}$, for indices $j_1, \dots, j_k \in [1, \ell-|d|]$, as a winning number whose exponent is less than $N_{max}$. The player then only needs to verify the correctness of a single aggregated function value/proof via the efficient aggregation verification algorithm. This approach reduces not only the amount of data written to the dealer's storage but also the computational cost of each player's verification. Our improved e-lottery scheme shares the same framework as Chow et al.'s scheme [7], with modifications in the winning result generation phase and the player verification phase. The concrete description of our improved e-lottery scheme follows.
Let $H: \{0,1\}^* \to \{0,1\}^{\ell_H}$ be a collision-resistant hash function and $D: \{0,1\}^{\ell_H} \to \{0,1\}^{\ell_D}$ a verifiable delay function (VDF); please refer to Appendix A for a detailed description of the VDF that we use. Let VRF be a function with input space $\{0,1\}^{\ell}$, where $\ell > \ell_D$, function value space $\mathbb{G}_T$, and proof space $\mathbb{G}^{\ell+1}$. Suppose any element of the group $\mathbb{G}_T$ is represented as a bit-string of length $\ell_{bt}$.
Setup:
Dealer side:
  • Generate a public/secret key pair of VRF as ( pk VRF , sk VRF ) VRF . Setup ( 1 λ ) and key pair of a signature scheme as ( pk SIG , sk SIG ) SIG . Setup ( 1 λ ) .
  • Choose an arbitrary integer $N_{max} \in \mathbb{Z}_p^*$. The numbers used in the lottery game are $\{1, 2, \dots, N_{max}\}$.
  • Publish a collision-resistant hash function, public key of VRF pk VRF , public key of signature scheme pk SIG , the delaying function D ( · ) , and the amount of time T in which the dealer must release the generated winning ticket value.
Ticket Purchase:
Player side:
  • The player chooses $x \in \mathbb{Z}_p^*$ as the bet number and randomly samples $r \in \mathbb{Z}_p^*$; $r$ is kept secret.
  • The player obtains a sequence number s of the ticket from the dealer.
  • Compute $H(x \| s \| r)$, and send $\mathrm{ticket}_i = s \| (x \oplus r) \| H(x \| s \| r)$ to the dealer.
Dealer side:
  • The dealer generates a signature on $\mathrm{ticket}_i$ as $\sigma_i \leftarrow \mathsf{SIG.Sign}(sk_{SIG}, \mathrm{ticket}_i)$ and returns $\sigma_i$ to the player to acknowledge receipt of the player's purchase request.
  • The dealer creates the state of $\mathrm{ticket}_1$ as $st_1 := H(\mathrm{ticket}_1)$, and $st_i := H(st_{i-1} \| \mathrm{ticket}_i)$ for $i = 2, 3, \dots$.
  • The dealer generates blocks containing: (1) the current state $st_i \in \{0,1\}^{\ell_H}$; (2) the ticket $\mathrm{ticket}_i$; and (3) the signature $\sigma_i$ on $\mathrm{ticket}_i$ under $sk_{SIG}$, e.g., with the following block structure:
    $B_i = (st_i, \mathrm{ticket}_i, \sigma_i) := (H(st_{i-1} \| \mathrm{ticket}_i), \mathrm{ticket}_i, \sigma_i)$.
  • The dealer links all blocks into a blockchain, i.e., a sequence of blocks $B_1, B_2, \dots$.
  • The dealer publishes a blockchain $C = B_1, B_2, \dots, B_n$, where $n$ is the number of tickets sold so far. The length of a chain, $\mathrm{len}(C) = n$, is its number of blocks. The structure of a blockchain $C$ is depicted in Figure 2.
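The ticket chain above can be sketched in a few lines of Python (SHA-256 as $H$, byte concatenation for $\|$; the signature $\sigma_i$ is elided for brevity):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(tickets):
    """Link tickets into blocks: st_1 = H(ticket_1), st_i = H(st_{i-1} || ticket_i)."""
    chain, st = [], b""
    for i, t in enumerate(tickets):
        st = H(t) if i == 0 else H(st + t)
        chain.append((st, t))      # a real block also carries the signature sigma_i
    return chain

def verify_chain(chain) -> bool:
    """Player-side check: recompute every state from the tickets."""
    return chain == build_chain([t for _, t in chain])
```

Because each state hashes the previous one, tampering with any earlier ticket invalidates all later states, which is what lets players audit the final state $st_n$.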
Winning Result Generation:
Dealer side:
  • Let the final state of the blockchain $C$ be $st_n$; the dealer computes $d := D(st_n)$ via the delay function and publishes $d$.
  • Pad $d \in \{0,1\}^{\ell_D}$ with $*$ as $\tilde{d} := *^{\ell-\ell_D} \| d$, and write $\tilde{d} = *^{\ell-\ell_D} \| d_1 \cdots d_{\ell_D}$. Define the set $S_{\tilde{d}} := \{\xi \in \{0,1\}^{\ell} : \forall i \in [\ell],\; \xi_i = \tilde{d}_i \text{ or } \tilde{d}_i = *\}$.
  • The dealer calculates the product aggregation of all $|S_{\tilde{d}}| = 2^{\ell-\ell_D}$ function values and corresponding proofs by using the efficient aggregation algorithm as $(y_{agg}, \pi_{agg}) := \mathsf{Aggregate}(sk_{VRF}, \tilde{d})$. More precisely, since $sk_{VRF} = (u_0, u_1, \dots, u_\ell)$, as defined by the HW-VRF scheme in Section 2.2, we have $y_{agg} = e(g, h)^{u_0 \cdot (\prod_{i=\ell-\ell_D+1}^{\ell} u_i^{\tilde{d}_i}) \cdot (\prod_{i=1}^{\ell-\ell_D} (u_i+1))}$.
  • Let $\mathrm{EXP}(y_{agg}) := u_0 \cdot (\prod_{i=\ell-\ell_D+1}^{\ell} u_i^{\tilde{d}_i}) \cdot (\prod_{i=1}^{\ell-\ell_D} (u_i+1))$. The dealer checks whether $\mathrm{EXP}(y_{agg}) \pmod{p-1} \leq N_{max}$. If so, it sets $y_{win} := y_{agg}$ as the winning result.
  • Otherwise, the dealer chooses a random index $\zeta \in [1, \ell-\ell_D]$ and uses $\zeta$ to define a new pattern $\tilde{d}' = *^{\zeta-1} \| 0 \| *^{\ell-\ell_D-\zeta} \| d_1 \cdots d_{\ell_D}$. The dealer computes $\mathrm{EXP}(\tilde{d}') := u_0 \cdot (\prod_{i=\ell-\ell_D+1}^{\ell} u_i^{\tilde{d}_i}) \cdot (\prod_{i=1, i \neq \zeta}^{\ell-\ell_D} (u_i+1))$ and checks whether $\mathrm{EXP}(\tilde{d}') \pmod{p-1} \leq N_{max}$. Once it finds a $\zeta \in [1, \ell-\ell_D]$ such that $\mathrm{EXP}(\tilde{d}') \pmod{p-1} \leq N_{max}$, the dealer sets $y_{win} := e(g, h)^{\mathrm{EXP}(\tilde{d}')}$ as the winning result and computes the corresponding proof by using the efficient aggregation algorithm $\mathsf{Aggregate}(sk_{VRF}, \tilde{d}')$.
  • The dealer publishes the winning result and its proof $(y_{win}, \pi_{win})$, together with the corresponding pattern $\tilde{d}$ (or $\tilde{d}'$), within $\Delta$ units of time after the closing of the lottery session.
Prize Claiming:
  • The player checks whether $e(g, h)^x = y_{win}$. If so, the player wins.
  • The player submits ( s , r ) to the dealer.
  • The dealer checks whether there exists a ticket $\mathrm{ticket}_i$ in the blockchain $C$ such that $\mathrm{ticket}_i = s \| (x \oplus r) \| H(x \| s \| r)$.
  • If it is true, the dealer checks whether the tuple ( s , r ) has already been published (i.e., the prize has been claimed by someone already).
  • If the prize is not yet claimed, the dealer pays the player and publishes ( s , r ) .
Player Verification:
Player side:
  • The player checks whether his/her ticket(s) is/are included in the blockchain C and checks whether the final state st n of the blockchain C is correct.
  • The player verifies the correctness of d by using the verification algorithm of VDF.
  • The player parses $\tilde{d}' = *^{\zeta-1} \| 0 \| *^{\ell-\ell_D-\zeta} \| d_1 \cdots d_{\ell_D}$ and checks whether $d = d_1 \cdots d_{\ell_D}$.
  • The player verifies the correctness of $y_{win}$ by running the verification algorithm $b \leftarrow \mathsf{AggVerify}(pk_{VRF}, \tilde{d}', y_{win}, \pi_{win})$.
  • For each winning ticket published, the players verify the validity of $s \| (x \oplus r) \| H(x \| s \| r)$.

4.3. Implementation and Comparison on Chow et al.’s/Improved E-Lottery

To implement Chow et al.'s e-lottery scheme, we use the instantiation of HW-VRF presented in Section 2.2, while our aggregate VRF scheme from Section 3.1 is used to implement the improved counterpart. When implementing the winning result generation phase and the player verification phase, we reuse part of the code from our implementation in Section 3.3. For the required delay function, we use the VDF instantiation proposed by Pietrzak [29] (see Appendix A for details), defined as $D(x) := x^{2^T} \pmod{N}$, where $N = p_1 q_1$ is a product of two large, distinct primes $p_1$ and $q_1$, $x$ is a random element of $\mathbb{Z}_N^*$, and $T$ is the timing parameter. For the signature scheme, we used the EdDSA [30] implementation provided by the libsodium library (https://libsodium.gitbook.io/doc/).
Parameters. In our proof-of-concept implementation, we consider Pietrzak's VDF scheme [29] with the following parameters: the statistical soundness security parameter of a proof is $\lambda_{Sound} = 100$; the time parameter is $T = 2^{40}$; the bit-length of each of the two safe primes $p_1$ and $q_1$ is $\lambda_{QR}/2 = 256$; and the bit-length of the RSA modulus $N = p_1 q_1$ is $\lambda_{RSA} = 512$. (We set $\lambda_{RSA} = 512$ as a toy example; it can easily be scaled to 1024 or 2048 by adjusting the corresponding parameters of the hash function as well as the input/output spaces of the VRFs.) The VDF then defines a function $D: \{0,1\}^{256} \to \{0,1\}^{512}$. We use SHA-256 as the hash function $H: \{0,1\}^* \to \{0,1\}^{256}$. The inputs/outputs of the VRFs are set to 1024-bit binary strings. We randomly choose an integer $N_{max} \in \mathbb{Z}_p$ such that $1 \leq N_{max} < p-1$, and the numbers used in the lottery game are $\{1, 2, \dots, N_{max}\}$. For every outcome $w \in \mathbb{G}_T$ of a VRF evaluation, to check whether $w$ can be set as a winning number, the dealer actually checks whether the discrete logarithm of $w$ to the base $e(g, h)$ lies in $\{1, 2, \dots, N_{max}\}$. In fact, as explained in the implementation of HW-VRF and the aggregate VRF in Section 3.3, to obtain the function value $\mathsf{Fun}_{sk}(x) = e(g, h)^{u_0 \prod_{i=1}^{\ell} u_i^{x_i}}$, the exponent $u_0 \prod_{i=1}^{\ell} u_i^{x_i}$ is computed beforehand, which makes it easy to decide whether $w$ can serve as a winning number.
Results. Table 4 shows the implementation results for both Chow et al.'s and our improved e-lottery scheme, in particular the time spent on winning number generation and player verification. We employ a small blockchain scenario with 1000 blocks, i.e., 1000 random tickets in the system. The running times in Table 4 are for the experimental scenario in which, in Chow et al.'s e-lottery scheme, the VRF is applied only twice before a function value with exponent $\mathrm{EXP}(w_1) \pmod{p-1} \leq N_{max}$ is obtained, while in our scheme the exponent of $y_{agg}$ exceeds $N_{max}$, i.e., $\mathrm{EXP}(y_{agg}) \pmod{p-1} > N_{max}$, so that extra time is spent searching for another $y_{agg}$ with exponent below $N_{max}$. We consider this case because it best exhibits the efficiency gain of our e-lottery scheme over Chow et al.'s: the fewest iterative VRF applications occur in Chow et al.'s scheme, while the extra step of recomputing $y_{agg}$ occurs in ours. The experimental results show that our scheme takes less time to generate the winning ticket than Chow et al.'s, and that our player verification phase takes almost half the time of theirs.

5. Conclusions

Inspired by the idea of aggregate PRFs [10], in this paper we investigated how to efficiently aggregate VRF values and their corresponding proofs. We introduced the notion of static aggregate VRFs and showed how to achieve them under Hohenberger and Waters' VRF scheme [13] for product aggregation over bit-fixing sets, based on the q-decisional Diffie–Hellman exponent assumption. Furthermore, we applied our aggregate VRFs to improve the efficiency of the winning number generation and player verification phases of a prior VRF-based e-lottery scheme. We tested the performance of our static aggregate VRFs and of the improved e-lottery scheme, and both show significant computational advantages and efficiency gains compared to their original counterparts.
As future work, it would be interesting to explore whether static aggregate VRFs can be realized for much more expressive sets, such as sets recognized by polynomial-size decision trees, read-once Boolean formulas, or polynomial-size Boolean circuits. Furthermore, it would be interesting to consider a public dynamic aggregate VRF, which allows taking any two fresh (aggregate) function values and proofs and combining them into a new aggregate function value and proof, without requiring the exponential number of inputs (from an exponentially large set) to be known in advance, as in static aggregation.

Author Contributions

Conceptualization, B.L. and A.M.; methodology, B.L.; implementation, G.B.; formal analysis, B.L.; writing—original draft preparation, B.L. and G.B.; writing—review and editing, A.M.; supervision, A.M.; project administration, B.L.; and funding acquisition, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the GENIE Chalmers CryptoQuaC project, the VR PRECIS project, the WASP expedition project “Massive, Secure, and Low-Latency Connectivity for IoT Applications”, and the National Nature Science Foundation of China (No. 61972124).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PRFs: Pseudorandom functions
VRFs: Verifiable random functions
Agg-VRFs: Aggregate verifiable random functions
DDHE: Decisional Diffie–Hellman Exponent

Appendix A. Verifiable Delay Functions (VDFs)

Here, we give an instantiation of VDFs proposed by Pietrzak [29], which is used in our improved e-lottery scheme.
Given as input an RSA group $\mathbb{Z}_N^*$, where $N = pq$ is a product of two large, distinct primes $p$ and $q$, a random element $x \in \mathbb{Z}_N^*$, and a timing parameter $T$, compute $D(x) = x^{2^T} \pmod{N}$. Anyone with knowledge of $\phi(N) = (p-1)(q-1)$ can efficiently compute $D(x)$ with two exponentiations, by first computing $e = 2^T \pmod{\phi(N)}$ and then $x^e$. Without $\phi(N)$ (or, equivalently, the factorization of $N$), computing $D(x)$ requires $T$ sequential squarings in the group $\mathbb{Z}_N^*$, even on a parallel computer with polynomially many processors.
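The two evaluation paths can be sketched directly in Python (the toy 10-bit primes and small $T$ are our own illustration; a real deployment uses $\lambda_{RSA}$-bit moduli and $T \approx 2^{40}$):

```python
def vdf_slow(x: int, T: int, N: int) -> int:
    # Without the factorization: T sequential modular squarings
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def vdf_fast(x: int, T: int, p: int, q: int) -> int:
    # With the trapdoor phi(N): reduce the exponent 2^T mod phi(N) first
    N, phi = p * q, (p - 1) * (q - 1)
    return pow(x, pow(2, T, phi), N)

p, q = 1009, 1013  # toy primes; the shortcut needs gcd(x, N) = 1
assert vdf_slow(5, 1000, p * q) == vdf_fast(5, 1000, p, q)
```

The equality holds by Euler's theorem; the point of the VDF is that `vdf_fast` is available only to the party holding the factorization.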
To convince any verifier that the published value y is correct, namely that y = x^(2^T) (mod N), Pietrzak [29] provided a succinct public-coin interactive argument for the language
L := { (Z_N^*, x, y, T) : y = x^(2^T) in Z_N^* }.
The prover P and verifier V proceed as follows.
  • P computes μ = x^(2^(T/2)) (mod N) and sends μ to V.
  • V sends to P a random r in Z_(2^λ).
  • Both P and V compute x_1 ← x^r · μ and y_1 ← μ^r · y.
  • P and V recursively engage in an interactive proof for the statement (Z_N^*, x_1, y_1, T/2) ∈ L, namely that y_1 = x_1^(2^(T/2)) in Z_N^*.
This subprotocol is repeated log_2 T times, each time halving the time parameter T, until T = 1, at which point V can efficiently verify the claim y = x^2 itself.
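The halving recursion can be sketched as below. This is a minimal illustration with toy parameters: the interactive verifier's random challenge r is replaced here by a Fiat–Shamir-style hash (the paper describes the interactive version), all names are ours, and the modulus is far too small to be secure.

```python
import hashlib

p, q = 1009, 1013            # hypothetical tiny primes, illustration only
N = p * q
LAM = 16                     # hypothetical small challenge space 2^lambda

def square_T(x, T, N):
    """Compute x^(2^T) mod N by T sequential squarings."""
    for _ in range(T):
        x = x * x % N
    return x

def challenge(x, y, mu, T):
    """Fiat-Shamir stand-in for the verifier's random r in Z_{2^lambda}."""
    h = hashlib.sha256(f"{x},{y},{mu},{T}".encode()).digest()
    return int.from_bytes(h, "big") % (2 ** LAM)

def prove(x, y, T):
    """Produce the halving proof for y = x^(2^T); T must be a power of two."""
    proof = []
    while T > 1:
        mu = square_T(x, T // 2, N)       # mu = x^(2^(T/2))
        r = challenge(x, y, mu, T)
        proof.append(mu)
        x = pow(x, r, N) * mu % N         # x1 = x^r * mu
        y = pow(mu, r, N) * y % N         # y1 = mu^r * y
        T //= 2
    return proof

def verify(x, y, T, proof):
    """Replay the halving steps; at T = 1 check y = x^2 directly."""
    for mu in proof:
        r = challenge(x, y, mu, T)
        x = pow(x, r, N) * mu % N
        y = pow(mu, r, N) * y % N
        T //= 2
    return T == 1 and y == x * x % N

T = 2 ** 6
x = 7
y = square_T(x, T, N)                     # honest evaluation
assert verify(x, y, T, prove(x, y, T))
```

The invariant is that y_1 = x_1^(2^(T/2)) whenever y = x^(2^T), so an honest proof of log_2 T group elements always verifies.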

References

  1. Micali, S.; Rabin, M.; Vadhan, S. Verifiable random functions. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039), New York, NY, USA, 17–19 October 1999; pp. 120–130. [Google Scholar]
  2. Naor, M.; Pinkas, B.; Reingold, O. Distributed Pseudo-Random Functions and KDCs. In EUROCRYPT 1999; Springer: Berlin/Heidelberg, Germany, 1999; pp. 327–346. [Google Scholar]
  3. Micali, S.; Rivest, R.L. Micropayments Revisited. In CT-RSA 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 149–163. [Google Scholar]
  4. Papadopoulos, D.; Wessels, D.; Huque, S.; Naor, M.; Včelák, J.; Reyzin, L.; Goldberg, S. Making NSEC5 Practical for DNSSEC; Cryptology ePrint Archive, Report 2017/099; IACR 2017; Available online: https://eprint.iacr.org/2017/099 (accessed on 8 February 2017).
  5. Goldberg, S.; Naor, M.; Papadopoulos, D.; Reyzin, L.; Vasant, S.; Ziv, A. NSEC5: Provably Preventing DNSSEC Zone Enumeration; NDSS: New York, NY, USA, 2015. [Google Scholar]
  6. Papadopoulos, D.; Wessels, D.; Huque, S.; Naor, M.; Vcelák, J.; Reyzin, L.; Goldberg, S. Can NSEC5 be practical for DNSSEC deployments? IACR Cryptol. Eprint Arch. 2017, 2017, 99. [Google Scholar]
  7. Chow, S.S.M.; Hui, L.C.K.; Yiu, S.M.; Chow, K.P. An e-Lottery Scheme Using Verifiable Random Function. In ICCSA 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 651–660. [Google Scholar]
  8. David, B.; Gaži, P.; Kiayias, A.; Russell, A. Ouroboros Praos: An Adaptively-Secure, Semi-Synchronous Proof-of-Stake Blockchain. In EUROCRYPT 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 66–98. [Google Scholar]
  9. Badertscher, C.; Gazi, P.; Kiayias, A.; Russell, A.; Zikas, V. Ouroboros Genesis: Composable Proof-of-Stake Blockchains with Dynamic Availability; Technical Report; Cryptology ePrint Archive, Report 2018/378; IACR 2018; Available online: https://eprint.iacr.org/2018/378.pdf (accessed on 25 April 2018).
  10. Cohen, A.; Goldwasser, S.; Vaikuntanathan, V. Aggregate Pseudorandom Functions and Connections to Learning. In TCC 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 61–89. [Google Scholar]
  11. Boneh, D.; Waters, B. Constrained Pseudorandom Functions and Their Applications. In ASIACRYPT 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 280–300. [Google Scholar]
  12. Boneh, D.; Lewi, K.; Montgomery, H.; Raghunathan, A. Key homomorphic PRFs and their applications. In CRYPTO 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 410–428. [Google Scholar]
  13. Hohenberger, S.; Waters, B. Constructing verifiable random functions with large input spaces. In EUROCRYPT 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 656–672. [Google Scholar]
  14. Jager, T.; Niehues, D. On the Real-World Instantiability of Admissible Hash Functions and Efficient Verifiable Random Functions. In SAC 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 303–332. [Google Scholar]
  15. Lysyanskaya, A. Unique Signatures and Verifiable Random Functions from the DH-DDH Separation. In CRYPTO 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 597–612. [Google Scholar]
  16. Dodis, Y. Efficient Construction of (Distributed) Verifiable Random Functions. In PKC 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 1–17. [Google Scholar]
  17. Dodis, Y.; Yampolskiy, A. A Verifiable Random Function with Short Proofs and Keys. In PKC 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 416–431. [Google Scholar]
  18. Jager, T. Verifiable Random Functions from Weaker Assumptions. In TCC 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 121–143. [Google Scholar]
  19. Hofheinz, D.; Jager, T. Verifiable Random Functions from Standard Assumptions. In TCC 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 336–362. [Google Scholar]
  20. Yamada, S. Asymptotically Compact Adaptively Secure Lattice IBEs and Verifiable Random Functions via Generalized Partitioning Techniques. In CRYPTO 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 161–193. [Google Scholar]
  21. Katsumata, S. On the Untapped Potential of Encoding Predicates by Arithmetic Circuits and Their Applications. In ASIACRYPT 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 95–125. [Google Scholar]
  22. Kohl, L. Hunting and Gathering—Verifiable Random Functions from Standard Assumptions with Short Proofs. In PKC 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 408–437. [Google Scholar]
  23. Liu, Y.; Hu, L.; Liu, H. Using an efficient hash chain and delaying function to improve an e-lottery scheme. Int. J. Comput. Math. 2007, 84, 967–970. [Google Scholar] [CrossRef]
  24. Lee, J.S.; Chang, C.C. Design of electronic t-out-of-n lotteries on the Internet. Comput. Stand. Interfaces 2009, 31, 395–400. [Google Scholar] [CrossRef]
  25. Chen, C.L.; Chiang, M.L.; Lin, W.C.; Li, D.K. A novel lottery protocol for mobile environments. Comput. Electr. Eng. 2016, 49, 146–160. [Google Scholar] [CrossRef]
  26. Grumbach, S.; Riemann, R. Distributed Random Process for a Large-Scale Peer-to-Peer Lottery. In Distributed Applications and Interoperable Systems; Chen, L.Y., Reiser, H.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 34–48. [Google Scholar]
  27. Naor, M.; Reingold, O. Number-theoretic constructions of efficient pseudo-random functions. J. ACM (JACM) 2004, 51, 231–262. [Google Scholar] [CrossRef]
  28. Lynn, B. On the Implementation of Pairing-Based Cryptosystems. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 2007. [Google Scholar]
  29. Pietrzak, K. Simple Verifiable Delay Functions. In Leibniz International Proceedings in Informatics (LIPIcs); ITCS 2019; Blum, A., Ed.; Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik: Dagstuhl, Germany, 2018; Volume 124, pp. 60:1–60:15. [Google Scholar] [CrossRef]
  30. Bernstein, D.J.; Duif, N.; Lange, T.; Schwabe, P.; Yang, B.Y. High-speed high-security signatures. J. Cryptogr. Eng. 2012, 2, 77–89. [Google Scholar] [CrossRef]
Figure 1. Time in milliseconds with respect to different numbers of fixed bits τ in the aggregate VRFs, considering the cases of input length ℓ = 256 and ℓ = 1024.
Figure 2. A blockchain C with n tickets sold so far.
Table 1. The computation operations for the static aggregate VRF scheme with respect to bit-fixing sets.
Scheme | Assump. | Input Length | Cost for Aggregating Function Value | Cost for Aggregating Proof | Cost on Verification for Aggregation
HW-VRF [13] | q-DDHE | ℓ | 2^τ MUL on G_T | 2^τ · (ℓ + 1) MUL on G | 2^τ · (ℓ + 1) bilinear pairings
Our static Agg-VRFs for bit-fixing sets | q-DDHE | ℓ | ℓ MUL on Z_p & one EXP | (2ℓ + 3) EXP | (2ℓ + 3) bilinear pairings
Table 2. Running time (milliseconds) of the aggregate VRFs and standard (non-aggregate) VRF.
(Columns 2–4 report the HW-VRFs [13]; columns 5–7 report the aggregate VRFs.)
Input Length (bits) | Verify (ms) | Blocks Num. | Total Verification (ms) | Fixed-Bit Size | Aggregate (ms) | Our AggVerify (ms)
56 | 89 | 100 | 9492 | 20 | 41 | 122
128 | 197 | 100 | 22371 | 20 | 196 | 298
256 | 472 | 100 | 52199 | 20 | 602 | 579
512 | 842 | 100 | 95233 | 20 | 1924 | 1212
1024 | 1556 | 100 | 164129 | 20 | 6881 | 2459
Table 3. Estimated verification times for different CPU frequencies.
Size | Frequency (GHz) | Time (ms) for AggVerify | Time (ms) for Verify of HW-VRFs [13]
56 | 1.6 | 122 | 89
56 | 2.1 | 85 | 62
56 | 3.0 | 70 | 51
128 | 1.6 | 298 | 197
128 | 2.1 | 208 | 138
128 | 3.0 | 172 | 114
256 | 1.6 | 579 | 472
256 | 2.1 | 405 | 330
256 | 3.0 | 335 | 274
512 | 1.6 | 1212 | 842
512 | 2.1 | 848 | 589
512 | 3.0 | 702 | 488
1024 | 1.6 | 2459 | 1556
1024 | 2.1 | 1696 | 1089
1024 | 3.0 | 1426 | 902
Table 4. Running time (seconds) of the e-lottery schemes.
Lottery Scheme | Winning Number Generation | Player Verification
Scheme in [7] | 96.12992 s | 4.384418 s
Our Scheme | 90.13322 s | 2.782610 s
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.