We use the techniques in [14] and adapt the scheme provided in [10]. Even though we changed the revocation token generation with respect to the work in [10], the new token generation has no impact on the correctness of the scheme, because we check the signer's authenticity against RL separately and prove the honesty of the token independently in the zero-knowledge proof.
For all n, N, all (gpk, gmsk, gsk, grt) returned by KeyGen(n, N), all d ∈ {1, …, N}, and all messages M, we have Verify(gpk, RL, M, Sign(gpk, gsk[d], grt[d], M)) = Valid whenever grt[d] ∉ RL, and Open(gmsk, M, Sign(gpk, gsk[d], grt[d], M)) = d.
We use the proof of correctness given in [10] and prove the correctness of Open via the following lemma.

Lemma 1 ([…], Lemma 4). Let S be a set of non-zero vectors in Z_q^m and let B be sampled uniformly at random over Z_q^{n×m}. Over the randomness of B, the probability that B·x = 0 mod q for some x ∈ S is at most |S| · q^{−n}.
Proof. Fix a non-zero vector x ∈ S. Then the vector B·x mod q is uniformly distributed over Z_q^n. It then follows that Pr[B·x = 0 mod q] = q^{−n}. Applying a union bound over the vectors in S, we get the claim. □
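The probability in the proof can be sanity-checked numerically. The following toy experiment (our own illustration, with parameters far smaller than any real instantiation) fixes a non-zero x, samples many uniform matrices B over Z_q^{n×m}, and compares the empirical frequency of B·x = 0 mod q with the predicted q^{−n}.

```python
import numpy as np

# Toy check of the union-bound step (illustrative parameters only):
# for a fixed non-zero x, a uniform B in Z_q^{n x m} sends x to a
# uniform vector B.x mod q, so Pr[B.x = 0 mod q] = q^(-n).
rng = np.random.default_rng(0)
q, n, m = 5, 2, 4
x = np.array([1, 0, 4, 2])                    # fixed non-zero vector in Z_q^m

trials = 200_000
B = rng.integers(0, q, size=(trials, n, m))   # many independent uniform matrices
prod = (B @ x) % q                            # shape (trials, n)
empirical = np.all(prod == 0, axis=1).mean()

print(empirical, q ** (-n))                   # empirical frequency vs q^(-n) = 0.04
```

With these toy parameters q^{−n} = 1/25, and the empirical frequency lands close to it; the union bound then scales this per-vector probability by |S|.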
An honest signer is able to obtain a valid witness for the underlying argument system with overwhelming probability. Moreover, the verification algorithm Verify will not return Invalid at the signature check after Step 6, because Step 6 validates the signature using the underlying zero-knowledge interactive protocol. In Step 7 of the verification algorithm Verify, the required vector can be derived for every i.
If the verification algorithm Verify outputs Valid, then the condition of Step 7 holds for all i, which means the signer's token does not belong to RL. If there existed an index i at which the signer's token matched the i-th entry of RL, the corresponding vector would be zero and the signature would not pass Step 7 of the verification process. Conversely, suppose the signer is unrevoked, i.e., for every i the corresponding vector is non-zero. According to Lemma 1, the values computed in Step 7 are then non-zero with overwhelming probability, and the signature passes the check.
In our scheme, the revocation token of a member d is generated using the public parameters A and w. Thus, the underlying zero-knowledge protocol of the verification algorithm checks in zero-knowledge that the user's token satisfies this relation. Hence, using only the public parameters, and without revealing the token or the signer's identity, the signer can prove that the token is not faked. Consequently, dishonest signers cannot generate a valid signature with a fake token that passes signature verification. This confirms that only correctly and honestly generated signatures pass signature verification.
Moreover, if the index of the signer is correctly encrypted in the ciphertext c at the time of signing, then the tracing algorithm Open returns the index of the signer correctly. Correct encryption of the index is guaranteed at the signing stage because no member can pass the underlying interactive protocol without correctly encrypting the index via an LWE function. In addition, Verify returns Invalid if the ciphertext c is not a correct encryption of the index, because the signature cannot pass the underlying interactive protocol's check without one. This proves the correctness of the encryption of the index and that the tracing algorithm outputs the index of the correct signer.
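To make the role of the ciphertext concrete, here is a hedged Regev-style toy (not the paper's actual encryption; the parameters, names, and bit-by-bit encoding are ours) showing how an index encrypted under LWE is recovered by the holder of the decryption secret, mirroring what Open does with gmsk.

```python
import numpy as np

# Hedged Regev-style toy (NOT the paper's scheme): encrypt the signer's
# index bit by bit under LWE; the tracing secret s recovers it, as Open does.
rng = np.random.default_rng(1)
n, m, q = 8, 64, 521            # q > 8m keeps decryption exact in this toy

A = rng.integers(0, q, size=(n, m))
s = rng.integers(0, q, size=n)  # decryption secret (playing the role of gmsk)
e = rng.integers(-1, 2, size=m) # small error in {-1, 0, 1}
p = (s @ A + e) % q             # public LWE vector

def enc_bit(b):
    r = rng.integers(0, 2, size=m)           # fresh binary coins per bit
    return (A @ r) % q, int((p @ r + b * (q // 2)) % q)

def dec_bit(c1, c2):
    v = (c2 - s @ c1) % q                    # = e.r + b*(q//2) mod q
    return int(q // 4 < v < 3 * q // 4)      # near q/2 means bit 1

d = 11                                        # toy signer index (4 bits)
cts = [enc_bit((d >> i) & 1) for i in range(4)]
recovered = sum(dec_bit(*c) << i for i, c in enumerate(cts))
print(recovered)                              # 11: Open-style recovery succeeds
```

Because the accumulated error e·r is bounded by m < q/8, decryption here is exact; an incorrectly formed ciphertext would decrypt to an unrelated index, which is what the interactive protocol's check rules out.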
6.2. Almost-Full Anonymity
Theorem 1. The proposed VLR group signature scheme of Section 5.1 satisfies almost-full anonymity in the random oracle model under the LWE assumption.
We demonstrate the anonymity of our scheme using a sequence of eight indistinguishable games, where the first is the real anonymity experiment and the last is independent of the challenger's bit b.
Game G_0: This is the real anonymity experiment. Assume the adversary A has advantage ε in this game. At the beginning, the challenger C produces the keys gpk, gmsk, gsk, and grt using KeyGen(n, N), and hands gpk and gsk over to A. The adversary A is allowed to query revocation tokens and the opening of any signature: when A asks for the token of a user d, the challenger C outputs grt[d]; when A queries the opening of a signature, C answers using gmsk. In the challenge phase, A sends a message M and two indices d_0 and d_1 such that A has made no revocation query for either user. Then, C picks a random bit b and sends back the challenge signature, generated as Sign(gpk, gsk[d_b], grt[d_b], M). The adversary A may still query the opening oracle, except on the challenge signature, and may not make revocation queries for d_0 or d_1. The adversary's task is to guess the index used to produce the challenge signature. Finally, A returns a bit b'; if b' = b, then A wins.
Game G_1: This game makes a minor modification to G_0. The key pair that is created at signature generation in the real game is instead produced at the initial stage of the game. Thus, in the query phase, if A queries the opening oracle with a signature containing this pre-generated key, the challenger C sends a random bit and terminates the game. As the key pair is created at the start, it has no relation to the adversary's queries; accordingly, the probability of this event is negligible. Moreover, after the challenge signature is sent, if the adversary queries a valid signature containing the same key, then that signature is a forgery, so the challenger terminating the game is immaterial. Without loss of generality, we assume that A does not ask for the opening of such a valid signature.
Game G_2: Here, we replace the encrypting matrices with randomly sampled ones, and we program the random oracle according to B. In the real anonymity game, B is obtained from GenTrap and is generated at signature generation; in this game, we take B uniformly at random. The challenger samples G to respond to opening-oracle queries; this G is used to answer the openings, and the pair (ovk, G) is saved for reuse when A repeats the same queries. In the challenge phase, the random oracle is programmed accordingly. As the distribution of G is statistically close to uniform over its domain and the distributions of the replaced matrices are statistically close to those of the real game [29], the games G_1 and G_2 are statistically indistinguishable.
Game G_3: In this game, C simulates the challenge signature without producing a legitimate non-interactive proof, as discussed in Section 4. For each repetition, C takes a simulated challenge and executes the interactive protocol, then programs the random oracle accordingly. The challenge signature is statistically close to the signatures in the earlier games because the argument system is statistically zero-knowledge. Therefore, G_3 is indistinguishable from G_2.
Game G_4: The challenger replaces the real revocation token used to generate the challenged signature with a vector t sampled uniformly at random, and computes the token-dependent part of the signature from t exactly as before: V is uniformly random, the error is sampled from the error distribution, and only the token itself is replaced by t. As the rest of the game is the same as G_3, this game is statistically indistinguishable from G_3.
Game G_5: In this game, we sample the token-dependent vector v uniformly, thereby making the revocation token totally independent of the challenge bit b. In the previous game, the pair (V, v) is a proper LWE instance; in this game, we replace v with a truly uniformly sampled y. As the LWE problem is hard (Section 2), the games G_4 and G_5 are computationally indistinguishable. Suppose there is an algorithm B for solving the LWE problem. Then B can interact with A by answering the queries that A makes: when A queries the revocation token of any group member, B can simply answer with a uniformly random value instead of providing grt[d]. The rest of the game is the same as the proof given in the previous game. If the adversary A could distinguish whether the revocation token is generated honestly or chosen at random, then the algorithm B would succeed; however, this contradicts the hardness of the LWE problem.
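The distinguishing logic of this reduction can be illustrated with a toy decision-LWE instance (our own sketch, with toy parameters, not the scheme's): the holder of the secret s can strip off s·A and see whether a small error remains, while decision-LWE asserts that without s the two vectors cannot be told apart efficiently.

```python
import numpy as np

# Toy decision-LWE illustration (our sketch, toy parameters): the real
# token-derived vector v = s.A + e versus a uniform replacement y.
rng = np.random.default_rng(2)
n, m, q = 16, 256, 257

A = rng.integers(0, q, size=(n, m))
s = rng.integers(0, q, size=n)
e = rng.integers(-4, 5, size=m)     # small error
v = (s @ A + e) % q                 # "real" LWE vector (the G_4 side)
y = rng.integers(0, q, size=m)      # uniform replacement (the G_5 side)

def centered_residual(t):
    """Subtract s.A (possible only with the secret s) and center mod q."""
    r = (t - s @ A) % q
    return np.where(r > q // 2, r - q, r)

# With s: the real sample leaves only the small error; the uniform one does not.
print(np.abs(centered_residual(v)).max())   # at most 4 (the error bound)
print(np.abs(centered_residual(y)).max())   # large with overwhelming probability
```

The reduction works because the hop from v to y changes nothing else: any adversary noticing the hop is exactly this distinguisher, without ever holding s.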
Game G_6: In this game, we modify the creation of the ciphertext so that its components are uniform. The components that previously encrypted the index corresponding to the challenger's bit are now computed from uniformly random values. As the rest of the game is the same as G_5 and the LWE problem is hard to solve, the games G_5 and G_6 are indistinguishable. Indeed, if A succeeded in distinguishing the two games, then A could also solve the LWE problem: he/she could distinguish the real LWE pairs from the corresponding uniform pairs, which conflicts with the LWE assumption.
Game G_7: The challenger makes the challenge signature totally independent of the bit b by replacing the remaining b-dependent components with uniformly random values. The games G_6 and G_7 are statistically indistinguishable. As this game is independent of the challenger's bit b, the advantage of the adversary in winning the game is 0.
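Writing G_0 for the real experiment and G_7 for the final game (labels ours, for exposition), the hops combine by the standard hybrid argument:

```latex
\begin{aligned}
\mathrm{Adv}(\mathcal{A})
  &= \Bigl|\Pr[\mathcal{A}\ \text{wins}\ G_0] - \tfrac12\Bigr| \\
  &\le \sum_{i=0}^{6}\Bigl|\Pr[\mathcal{A}\ \text{wins}\ G_i]
        - \Pr[\mathcal{A}\ \text{wins}\ G_{i+1}]\Bigr|
     + \Bigl|\Pr[\mathcal{A}\ \text{wins}\ G_7] - \tfrac12\Bigr|.
\end{aligned}
```

Each of the seven hop terms is negligible (by statistical closeness or LWE hardness, as argued above), and the final term is 0 because the last game is independent of b.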
Therefore, the games above show that the adversary's advantage against the almost-full anonymity of the scheme is negligible. This concludes the proof of anonymity.
In the random oracle model, we say that our VLR group signature scheme is traceable if the SIS problem is hard.
Lemma 2 ([…]). For any m and any norm bound, solving a random instance of the SIS or ISIS problem with non-negligible probability is at least as hard as approximating the SIVP problem on any lattice of dimension n to within certain factors.
Theorem 2 ([…]). If there is a traceability adversary A with success probability ϵ and running time T, then there is an algorithm that solves the SIS problem with success probability and running time polynomially related to ϵ, T, and the number of queries made to the random oracle H.
Based on Lemma 2 and Theorem 2, we prove that the proposed scheme is traceable in the random oracle model.
Suppose there were an adversary A who could break the computational binding property of the commitment scheme COM with non-negligible probability. Then A could also solve the underlying hard lattice problem. Thus, without loss of generality, we assume that COM is computationally binding.
Let F be a PPT algorithm that solves the SIS problem with non-negligible probability by exploiting A. The algorithm F is given the verification key; F then produces a key pair and interacts with the adversary A by sending gpk and responding to A's queries as below.
Signature queries: If the adversary A asks for a signature of user d on a message M, then F outputs a simulated signature in which the non-interactive proof is simulated without using the legitimate secret key and the other components are generated faithfully. The zero-knowledge property of the underlying interactive protocol guarantees that this simulated signature is indistinguishable from a legitimate group signature.
Corruption queries: The corruption set CU is initially empty. If the adversary A asks for the secret signing key of a user d, then F adds d to CU and returns gsk[d].
Queries to the random oracles are handled by consistently returning uniformly random values from the appropriate ranges. For each k, the answer to the k-th query is recorded so that repeated queries receive the same response.
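This "consistent uniformly random answers" simulation is the standard lazy-sampling technique; a minimal sketch (names ours):

```python
import secrets

# Lazy-sampling simulation of a random oracle (standard technique; names ours):
# fresh queries get uniform answers, repeated queries get the cached answer.
class RandomOracle:
    def __init__(self, out_range):
        self.out_range = out_range
        self.table = {}                      # query -> cached uniform answer

    def query(self, x):
        if x not in self.table:              # first time seen: sample uniformly
            self.table[x] = secrets.randbelow(self.out_range)
        return self.table[x]                 # repeats: same answer (consistency)

H = RandomOracle(out_range=2**16)
a = H.query(b"tuple-1")
print(a == H.query(b"tuple-1"))              # True: consistency on repeats
```

The table is also what lets the reduction later replay A with the same answers up to a chosen query and fresh answers afterwards, as the forking step below requires.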
Finally, A outputs a message M*, revocation data RL*, and a non-trivial forged signature sig* that fulfills the requirements of the traceability game: Verify(gpk, RL*, M*, sig*) = Valid, and Open on sig* either fails or outputs a user index outside the corrupted coalition.
Now let us show how F exploits the forgery.
We assume that A queried the random oracle on the tuple associated with the forged signature; otherwise, A could produce a valid proof only with negligible probability. Thus, with non-negligible probability, there exists a certain index k* such that the k*-th oracle query involves the tuple of the forgery. Then, for a fixed k*, F runs A many times with the same input as in the original run. In each repeated execution, A receives the same answers for the first k* − 1 queries as in the initial run, and from the k*-th query onward receives fresh random values. As stated in the forking lemma ([37], Lemma 7), with probability larger than 1/2, the algorithm F obtains a 3-fork involving the tuple after a polynomially bounded number of executions of A. Let the random-oracle answers on the three fork branches be the three challenges. A simple computation shows that, with noticeable probability, there is a position i at which the three challenges take pairwise distinct values.
Conditioned on the existence of such an index i, one parses the three forgeries related to the fork branches to obtain the corresponding responses.
Then, by employing the knowledge extractor of the underlying argument system, we can obtain witness vectors satisfying the relations of the argument system.
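The extraction step can be illustrated on a toy sigma protocol. The sketch below uses a Schnorr-style discrete-log protocol (our own example, not the paper's lattice-based argument) purely to show the mechanics: two accepting transcripts that share a commitment but carry distinct challenges let the extractor compute the witness.

```python
# Toy knowledge extraction after forking (Schnorr-style, names/parameters ours):
# two accepting transcripts with the same commitment t but distinct challenges
# c1 != c2 reveal the witness x via special soundness.
p, r, g = 23, 11, 2          # tiny prime-order subgroup: ord(g) = r mod p

x = 7                        # witness (discrete log)
y = pow(g, x, p)             # public statement

k = 5                        # prover's commitment randomness
t = pow(g, k, p)             # commitment, shared by both fork branches
c1, c2 = 3, 9                # two distinct challenges from the fork
s1 = (k + c1 * x) % r        # response on branch 1
s2 = (k + c2 * x) % r        # response on branch 2

# both transcripts verify: g^s == t * y^c (mod p)
assert pow(g, s1, p) == t * pow(y, c1, p) % p
assert pow(g, s2, p) == t * pow(y, c2, p) % p

# extractor: x = (s1 - s2) / (c1 - c2) mod r
extracted = (s1 - s2) * pow(c1 - c2, -1, r) % r
print(extracted)             # recovers the witness 7
```

In the paper's setting a 3-fork plays the same role: three transcripts with pairwise distinct challenges at position i are what the extractor of the underlying argument system needs to pull out the witness vectors.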
We can check that the extracted ciphertext is a correct encryption of the extracted index, that the tracing algorithm Open returns this index, and that the verification conditions of the traceability game are satisfied.
It then follows that the extracted values yield a non-trivial solution, so F obtains a valid forgery for the underlying problem. Furthermore, the analysis of the forged signature shows that if A succeeds with non-negligible probability in polynomial time, then so does F.
This completes the proof of traceability.