Article

Foundations of Programmable Secure Computation

by Sven Laur 1,† and Pille Pullonen-Raudvere 2,*,†
1 Institute of Computer Science, University of Tartu, Narva mnt 18, 51009 Tartu, Estonia
2 Cybernetica AS, Mäealuse 2/1, 12618 Tallinn, Estonia
* Author to whom correspondence should be addressed.
† Both authors contributed equally to this work.
Submission received: 30 April 2021 / Revised: 11 August 2021 / Accepted: 18 August 2021 / Published: 21 August 2021
(This article belongs to the Special Issue Secure Multiparty Computation)

Abstract: This paper formalises the security of programmable secure computation, focusing on simplifying the security proofs of new algorithms for existing computation frameworks. The security of the frameworks is usually well established, but the security proofs of the algorithms are often more intuitive than rigorous. This work specifies a transformation from the usual hybrid execution model to an abstract model that is closer to this intuition. We establish various preconditions that are satisfied by natural secure computation frameworks and protocols, thus showing that the intuitive proofs mostly suffice. More elaborate protocols might still need additional proof details.

1. Introduction

Over the years of secure multiparty computation (MPC) research, many different frameworks [1,2,3,4,5,6,7] and applications [8,9,10,11,12,13] have been developed. Among the applications, some are tailored to a specific framework, while others are more general and simply assume some underlying computation capabilities. In particular, when developing a new application or algorithm for MPC, it would be best to have standard notions for specifying the requirements (types of functionalities, data types and security assumptions) that the algorithm places on the MPC framework. This would give a basis for both the applicability of the algorithm and its security proof.
The security proofs and claims of programmable MPC frameworks are usually well documented and follow the best practices of universally composable security [14]. Therefore, we are given guarantees that everything from protocol inputs to protocol outputs remains secure independently of the context where the protocol is used. However, the standard set of operations in MPC frameworks is quite small, essentially supporting linear combinations, multiplication, giving inputs and getting outputs. In addition, these frameworks are often specified as one monolithic secure functionality, for example, the arithmetic black box (ABB) [15]. The ABB is essentially a representation of a secure computer into which one can load values and issue computation commands.
Building all the other algorithms and building blocks needed for bigger applications, such as secure machine learning, is a separate task. Building blocks like the equality check in Algorithm 1 that do not release intermediate values are straightforward to model in the ABB. In this case, one can issue this code as commands to the ABB, which then gives out the desired outcome. However, note that instead of the shared values, the inputs would be private inputs of the participants. If the ABB is secure, then the output z is computed securely and, for example, if the ABB operates on finite fields, then this protocol is also correct. Formally, there is no good way to add primitive operations inside the ABB, as there is no access to the internal representation of the intermediate data. Whenever a new operation is added, we have to formally prove the security of the whole ABB anew. Still, the ABB is the best abstraction for defining quite generic new primitives for secure computation; for real-world uses, see [16,17].
Many algorithms can be sped up by releasing intermediate values. For example, consider the sorting algorithm in Algorithm 2, where the comparison result b is published in the middle of the algorithm and the elements are ordered based on it. The value b can be seen as given out by the ABB, after which a new command can be issued to reorder the values as necessary. In addition, there is no way in the ABB to actually return the intermediate representation of the secure values [[k]] and [[m]]. Therefore, the effect of such a protocol is that there are intermediate representations of m and k inside the ABB, but their order and use is defined by the follow-up commands sent to the ABB. In this case, the security of the ABB is not necessarily sufficient to give security guarantees. Additional reasoning must be carried out, as the published values and the actions based on them happen outside of the ABB.
Algorithm 1 Equality Check
Input: Two shared values [[x]] and [[y]]
Output: 0 if x = y, a non-zero value otherwise
  Generate a shared random non-zero value [[r]].
  Compute [[z]] = ([[x]] − [[y]]) · [[r]].
  Publish [[z]] as z.
  return z
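For concreteness, the following sketch shows how Algorithm 1 could be driven as commands against a hypothetical ABB-style interface. The Abb class, its method names and the handle-based API are our illustrative assumptions and not part of any concrete framework; a real ABB would be a distributed functionality rather than a local object.

```python
import secrets

class Abb:
    """A toy arithmetic black box over a prime field (illustration only)."""

    def __init__(self, p: int = 2**61 - 1):
        self.p = p
        self._store = {}   # handle -> secret value; callers never see this
        self._next = 0

    def _new(self, value: int) -> int:
        handle, self._next = self._next, self._next + 1
        self._store[handle] = value % self.p
        return handle

    def input(self, value: int) -> int:
        """A party loads a private value and receives an opaque handle."""
        return self._new(value)

    def random_nonzero(self) -> int:
        """Create a handle to a shared random non-zero value."""
        return self._new(1 + secrets.randbelow(self.p - 1))

    def sub(self, a: int, b: int) -> int:
        return self._new(self._store[a] - self._store[b])

    def mul(self, a: int, b: int) -> int:
        return self._new(self._store[a] * self._store[b])

    def publish(self, a: int) -> int:
        """The only way a value ever leaves the black box."""
        return self._store[a]


def equality_check(abb: Abb, x: int, y: int) -> int:
    """Algorithm 1: z = (x - y) * r is 0 iff x == y; otherwise it is a
    uniformly random non-zero field element, hiding everything about x - y."""
    r = abb.random_nonzero()
    z = abb.mul(abb.sub(x, y), r)
    return abb.publish(z)


abb = Abb()
print(equality_check(abb, abb.input(42), abb.input(42)))  # 0
print(equality_check(abb, abb.input(42), abb.input(7)))   # random non-zero
```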
Our view of secure computation adds an explicit way to consider such choices and published values, as well as to consider each operation, such as comparison or addition, individually and not just as one functionality. In short, our goal is to define an abstract execution environment for secure protocols that contains only the details relevant for the security analysis. These details are the ones that can be easily seen from algorithms written down in pseudocode, like our two examples: one considers secure values [[x]], various computations can be carried out with them, and a special focus is on the published values x. For example, if some comparison-based sorting algorithm is defined similarly to Algorithm 2, then special care should be taken to analyse what the published values can leak. For example, they may leak something about the number of equal values in the input.
Algorithm 2 Two-element Comparison-Based Sorting
Input: Two shared values [[x]] and [[y]]
Output: Fresh shares of x and y with the larger value first
  Shuffle [[x]] and [[y]] to learn [[k]], [[m]] where {m, k} = {x, y}.
  Compute the private comparison [[b]] = ([[k]] ≥ [[m]]), where b = 1 if k ≥ m.
  Publish [[b]] as b.
  return [[k]], [[m]] if b = 1 else [[m]], [[k]]
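The same style of sketch for Algorithm 2 makes the published intermediate value explicit. It reuses the Abb class and the secrets import from the sketch above; shuffle2 and ge are assumed primitives of our hypothetical ABB, and the direct comparison is only a toy stand-in for a real bounded-integer comparison protocol.

```python
def shuffle2(abb: Abb, x: int, y: int) -> tuple[int, int]:
    """Assumed ABB primitive: re-share x and y in a secret random order."""
    k, m = (x, y) if secrets.randbelow(2) == 0 else (y, x)
    return abb._new(abb._store[k]), abb._new(abb._store[m])

def ge(abb: Abb, k: int, m: int) -> int:
    """Assumed ABB primitive: a fresh handle to b = 1 if k >= m, else 0."""
    return abb._new(1 if abb._store[k] >= abb._store[m] else 0)

def sort2(abb: Abb, x: int, y: int) -> tuple[int, int]:
    """Algorithm 2: only the comparison bit b is published, and the prior
    shuffle ensures that b is independent of the original input order."""
    k, m = shuffle2(abb, x, y)
    b = abb.publish(ge(abb, k, m))   # the single published intermediate value
    return (k, m) if b == 1 else (m, k)
```

Note that if x = y, the published bit is 1 regardless of the shuffle, which is exactly the kind of leakage about equal values mentioned above.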
From the viewpoint of building secure computation algorithms, it is easier to think of secure functionalities for individual protocols, like addition, multiplication, comparison, equality checks or bit decomposition. Essentially, if we had such small secure protocols, then any algorithm described in terms of them could immediately be implemented with any concrete instantiations of these protocols, while the composition theorem guarantees the security of the algorithm. Therefore, we have a conflict of interest between the frameworks, which are specified as secure computational units, and algorithm development, which benefits from considering the primitive operations of the computational unit individually. Either the security proofs of concrete algorithms are very generic and appeal more to intuition (e.g., that only published values need to be analysed) or the algorithm is proven secure with respect to some fixed MPC framework, e.g., [18,19,20]. In the first case, we lose the rigour and good security definitions given by detailed security proofs. In the second, we do not explore the full setting where the algorithm could be applicable, and the proofs must be redone whenever it is implemented with different basic primitives and protection schemes.
Studying the separate arithmetic protocols as individual secure components is fairly straightforward for passively secure frameworks that operate without private setup parameters. For example, earlier protocol development [21,22,23] focused the proofs on a fixed representation of the secure values and simply stated that the protocols are more generally applicable in practice. However, for frameworks that use private setup parameters, like shared keys, in their operation, the monolithic functionality is a natural and nicer choice. Essentially, the monolithic functionality allows hiding the setup inside the protocol and using common flavours of composable security. If we want to consider monolithic functionalities by their components, then we need to take the joint state of the components into account. This could be achieved, for example, by using the joint-state UC framework [24] for the security proofs.
If we use a secure computation functionality as a starting point for defining a new algorithm, then we can base the algorithm on the ideal functionality of the framework. Therefore, the proof of the algorithm is in a hybrid model assuming interactions with the specification of the underlying computation functionality. The resulting hybrid model is usually still more complex than desired. A malicious adversary could possibly change the scheduling of subprotocols, create unplanned subprotocol instances, alter intermediate values or cause significant local computations. Most proofs first analyse the security in the abstract setting where shares are treated as non-malleable secure storage and focus on analysing only the values that are explicitly revealed.
We rigorously formalise the abstract execution model and study under which conditions the abstract and the full hybrid model are equivalent. We first establish the foundations of specifying secure computation environments and the security of both their computation protocols and secure storage in Section 2. We call the combination of the storage and the protocols the secure protection domain. Second, we study how to extend the protection domain with a new protocol. Third, we show how such security proofs can be done in an abstract setting when making some natural assumptions about the protection domain and the protocol. In doing this, we formalise the intuition that in most algorithms only the values that are public or malleable need proper discussion in a security proof. The abstract model and all relevant conditions are derived in Section 3. We specify the abstract model in a sequence of steps that each simplify some aspect of the hybrid model. In Section 4, we summarise the abstract execution model as well as the conditions under which it can be used. In addition, we explore why these conditions are satisfied by most programmable secure computation frameworks and primitive protocols. For protection domains, we specify the properties that have to be met in order for the storage to be secure and flexible enough to allow secure computation. In addition, we define a canonical form for reasonable functionalities defining the primitive operations for secure computation. Essentially, we assume that all functionalities are such that their output depends on the input values and not on the format of the protection. The storage allows for some homomorphic modifications but does not reveal information about the stored values. We also study the properties of the secure computation protocols that can realise these functionalities. Overall, we note that as long as some parties in the computation remain honest, they should also have control over which computations can be executed with the private values. Therefore, the adversarial actions are quite limited as long as the protocols are able to deal with malformed inputs and have reasonable semantics.

2. Materials and Methods

Modelling MPC protocols as an asynchronous distributed system requires many low-level details that cannot be neglected in the definitions and proofs. We define a visual representation for the reactive simulatability framework (RSIM) [25,26,27] to visualise the main insights and to sketch how the arguments can be fleshed out into complete proofs. RSIM is one formalisation of composable security; thus, showing security according to its definitions guarantees security in all contexts where the protocol might be used.
In this section, we first describe the RSIM framework and our visual notation for it. Second, we discuss general privacy definitions based on observational equivalence and different models of security and composition. Third, we establish the meta-theorem showing what we have to prove in the following to establish that proofs in the abstract model are sufficient for security in the hybrid execution model. Finally, we describe the secure protection domains and the core assumptions that we make regarding them and their execution environments.

2.1. Asynchronous Systems and Visual Notation

This section describes the core of the RSIM model for adaptive adversaries, where a system is described by a fixed set of machines; for more details see [25,26,27]. An asynchronous distributed system in RSIM consists of machines, which we denote as boxes, and communication buffers, denoted by bullets. All machines communicate with the outside world through ports. We denote input ports as white squares and output ports as grey squares.
Standard buffers have three ports: input, output and clocking. In our notation, these ports are never drawn, as they are always used to connect ports of the machines. Instead, we denote a buffer as an arrow with a bullet. We extend this model by adding buffers which leak information to the machine that clocks them. These can be used to ensure that the adversary learns some meta-information about the messages, such as the subprotocol instance that receives the message. For visual clarity, we omit these details and use an arrow with a dotted bullet for a leaky buffer. We use dedicated notation for sender-clocked and receiver-clocked buffers and omit port squares if they are deducible from context. By default, all buffers are clocked by the adversary. The notation is illustrated in Figure 1. A message written to the input port of a buffer is appended to an internal queue of messages q_1, …, q_n. A leaky buffer also has a corresponding queue of leaks ℓ_1, …, ℓ_n that is kept in sync. Leaks can be fetched using a dedicated port, thus the clocking machine must have at least one input port to receive the leakage. The full construction can be found in Appendix A. An input i ∈ ℤ to the standard clocking port causes q_i to be removed from the queue and written to the output port. The empty output ϵ is written to the output if the input is out of range.
The input–output behaviour of a machine is determined by a state update function δ : S × I → S × O, where S is the state space, I is the product of the domains of all input ports and O is the product of the domains of all output ports, including clocking ports. All domains must contain the empty output ϵ. A machine can clock at most one buffer, and thus only one clocking output can be non-empty. Execution rules also assure that exactly one input is non-empty when a machine is invoked, except for the master scheduler, which can be invoked with empty inputs. As a result, a machine can clock only a single sender-clocked buffer, and leaks cannot reach the clocker without explicit polling.
One machine is declared the master scheduler and manages all undefined execution timings. In our setting, this machine is always either the adversary or the simulator. At the start of the computation, the master scheduler is invoked. The scheduler writes to its output ports and clocks one buffer to start the chain of state transformations. When a machine writes a message to an output port, it is absorbed by the buffer and control goes back to the machine. When a message is written to a clocking port, the corresponding buffer releases the specified message and control goes to the receiver. When a machine stops execution without clocking anything, control goes to the master scheduler. The execution stops when the master scheduler reaches an end state and becomes inactive.
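The buffer semantics above can be mirrored in code: writes are queued and nothing is delivered until the clocking machine selects a message, while a leaky variant keeps a synchronised queue of leaks for the clocker. The class and method names below are our own illustrative choices, not part of the RSIM formalism.

```python
EPSILON = None  # the empty output ϵ

class Buffer:
    """RSIM-style buffer: writes are queued; delivery happens only on clocking."""

    def __init__(self):
        self.queue = []

    def write(self, message):
        self.queue.append(message)      # control returns to the writer

    def clock(self, i: int):
        """Deliver the i-th queued message (1-indexed); out-of-range inputs
        produce the empty output ϵ."""
        if 1 <= i <= len(self.queue):
            return self.queue.pop(i - 1)
        return EPSILON


class LeakyBuffer(Buffer):
    """Also keeps a queue of leaks (e.g. message metadata) aligned with the
    message queue; the clocking machine fetches leaks over a dedicated port."""

    def __init__(self):
        super().__init__()
        self.leaks = []

    def write(self, message, leak=None):
        super().write(message)
        self.leaks.append(leak)

    def clock(self, i: int):
        if 1 <= i <= len(self.queue):
            self.leaks.pop(i - 1)       # keep both queues positionally aligned
        return super().clock(i)

    def fetch_leaks(self):
        """The dedicated leak port: requires the clocker to have an input port."""
        return list(self.leaks)
```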
A collection C is a finite set of machines and buffers. It is closed if all its buffers are connected to ports and vice versa. A free connector is a connector that has one end attached to a buffer in a collection while the other end is not attached to any machine in the collection. Similarly, a free port is a port that belongs to a machine in a collection and is not connected to any buffer. An extended collection does not have free ports and a reduced collection does not have free connectors.
Collections C_1 and C_2 have matching interfaces if the collections can be merged by joining free port and connector pairs while respecting the restrictions posed by destination labels and ensuring that no two ports expect the same connection. Let the shorthand C_1⟨C_2⟩ denote the resulting collection. The notation emphasises that C_2 is a distributed subroutine that matches the structural restrictions posed by C_1⟨·⟩ calling it. We also use the shorthand C_1⟨C_2, C_3⟩ for C_1⟨C_2⟩⟨C_3⟩ to emphasise that C_1⟨·⟩ is the outer environment, although the concept is inherently symmetric. In this setting, the interface of C_2 can be partitioned into two sets according to the target collection. We refer to these as sub-interfaces.
We visualise the interface of an extended collection as a dashed border surrounding its machines and buffers. Free connectors must reach a port of the right type on the border. For clarity, we label these interface ports with the names of their host machines, e.g., which buffers must be connected to the adversary A or the environment Env.

2.2. Security through Observational Equivalence

Collections C_1 and C_2 have identical interfaces if there exists a one-to-one mapping between interface elements that respects port types and destination labels. A distinguisher D⟨·⟩ is a reduced collection that has a matching interface and a dedicated machine D* with two end states 0 and 1. Let D⟨C_i⟩ denote the end state of D* when the collection stops. The strongest form of equivalence, perfect observational equivalence C_1 ≡ C_2, means that Pr[D⟨C_1⟩ = 1] = Pr[D⟨C_2⟩ = 1] for any valid distinguisher D⟨·⟩. Perfect observational equivalence indicates that C_1 and C_2 realise the same functionality modulo implementation details that are encapsulated by the collection border. It is unattainable for cryptographic constructions, as security inherently emerges from the asymmetry between honest and corrupted parties.
Let Π be a collection that models a protocol. Then, the interface naturally splits into two sub-interfaces. A service interface specifies how to call out the protocol. An adversarial interface exposes protocol weaknesses to the adversary. The set of adversaries A is compatible with Π and Env if Env⟨Π, A⟩ is a well-defined and closed collection for any A ∈ A. Note that the definition allows collections Env⟨Π, A⟩ where Env and A communicate. Similarly, we can define a set of compatible environments E.
Definition 1
(Security). Let Π_1 and Π_2 be collections with an identical service interface and let E be the set of compatible environments. Let A_1, A_2 be the sets of compatible adversaries. Then, Π_1 is as secure as Π_2 if there exists a function ρ : A_1 → A_2 such that Env⟨Π_1, A_1⟩ ≡ Env⟨Π_2, ρ(A_1)⟩ for all A_1 ∈ A_1 and Env ∈ E.
Let Π_1 ⪯ Π_2 denote that Π_1 is as secure as Π_2. The notation is justified as the relation is reflexive and transitive for appropriate sets of adversaries. The corresponding equivalence relation Π_1 ≃ Π_2 ⇔ Π_1 ⪯ Π_2 ∧ Π_2 ⪯ Π_1 captures protocols with similar security properties. Maximal elements of the relation identify maximally secure protocols, also known as ideal implementations.
This definition allows us to specify a wide spectrum of security definitions [28,29,30,31]. We can consider only nonuniform polynomial adversaries or different corruption models, for example, choosing between static and adaptive adversaries, or semi-honest and active security [32,33]. The protocol Π_2 determines the set of unavoidable attacks. By tweaking the implementation of Π_2, we can model fairness [34,35], selective failure (abort) [36,37] and security against covert adversaries [38]. The exact definition of plausible environments determines how and where the protocol can be used securely. Restrictions on the correspondence ρ define various flavours of black-box [39] and white-box security [40,41] or specify tightness requirements like polynomial and superpolynomial simulation [42,43]. Restrictions on A_1 and A_2 usually fix the model of corruption, while constraints on E place restrictions on protocol scheduling.
Theorem 1
(Secure two-system composition). Assume that we have three collections Π_e, Π_1, Π_2 such that the collections Π_e⟨Π_1⟩ and Π_e⟨Π_2⟩ are well-defined and have an identical service interface. Let E be the set of compatible environments and let ψ : E → E* be the natural construction ψ(Env) = Env⟨Π_e⟩. Then, a construction ϕ : A_1 → A_2 that proves Π_1 ⪯ Π_2 for the set of environments E* is also a proof of Π_e⟨Π_1⟩ ⪯ Π_e⟨Π_2⟩ for the set of environments E.
The theorem is particularly useful when the sets of plausible environments and adversaries are closed, i.e., E = E* and A_1 = A_2. As security is commonly defined against nonuniform polynomial-time adversaries, the second constraint is trivially satisfied. The first constraint is satisfied when the environments consist of all sequential compositions of poly-time subprotocols. The resulting sequential composition theorems [29,30,44] play a central role in cryptography. Alternatively, we can consider the set of all concurrent compositions of poly-time subprotocols. The resulting security notion is known as universal composability (UC) and has many flavours [14,25,45,46,47,48,49,50] which differ in minor details. Most formalisations assume that machines and the connections between them remain unaltered during the execution, while Canetti's second formalisation of universal composability [51] allows dynamic reconfiguration of the environment. We consider an extension of the RSIM model [25,27] which has leaky buffers for proper modelling of secure communication channels. The resulting adaptive-adversary RSIM is very close to the simplified version of UC (SUC) that was defined to characterise MPC protocols [52]. Our formalisation of MPC in RSIM gives us more flexibility to split protocols into components in order to modularise the proofs and transformations.

2.3. Soundness and Completeness Theorems

Our main contribution is a description of an abstract execution model which hides all irrelevant technical details while security proofs in this model remain sound and complete, making it a suitable replacement for the hybrid execution model. Here, soundness means that a proof in the abstract setting implies a proof in the original execution model. Completeness, conversely, means that if there is a proof in the original model, then there is also a proof in the abstract model.
Let Π_1 and Π_2 be the protocols of interest, and let Π_1* and Π_2* be their counterparts in the abstract execution model. Let E and E* denote the sets of environments for the original and abstract execution models. Let A_1, A_2 and A_1*, A_2* denote the sets of plausible adversaries. To show that security proofs in the abstract model are sound and complete, we define three explicit constructions and their semi-inverses
$$\begin{aligned}
\psi &: \mathcal{E} \to \mathcal{E}^*, & \psi^* &: \mathcal{E}^* \to \mathcal{E},\\
\phi_1 &: \mathcal{A}_1 \to \mathcal{A}_1^*, & \phi_1^* &: \mathcal{A}_1^* \to \mathcal{A}_1,\\
\phi_2 &: \mathcal{A}_2 \to \mathcal{A}_2^*, & \phi_2^* &: \mathcal{A}_2^* \to \mathcal{A}_2
\end{aligned} \tag{1}$$
which satisfy the following three pairs of equivalence relations
$$\begin{aligned}
&\forall \mathsf{Env} \in \mathcal{E} : \forall A_1 \in \mathcal{A}_1 : & \mathsf{Env}\langle\Pi_1, A_1\rangle &\equiv \psi(\mathsf{Env})\langle\Pi_1^*, \phi_1(A_1)\rangle\\
&\forall \mathsf{Env}^* \in \mathcal{E}^* : \forall A_1^* \in \mathcal{A}_1^* : & \mathsf{Env}^*\langle\Pi_1^*, A_1^*\rangle &\equiv \psi^*(\mathsf{Env}^*)\langle\Pi_1, \phi_1^*(A_1^*)\rangle
\end{aligned} \tag{2}$$
$$\begin{aligned}
&\forall \mathsf{Env} \in \mathcal{E} : \forall A_2 \in \mathcal{A}_2 : & \mathsf{Env}\langle\Pi_2, A_2\rangle &\equiv \psi(\mathsf{Env})\langle\Pi_2^*, \phi_2(A_2)\rangle\\
&\forall \mathsf{Env}^* \in \mathcal{E}^* : \forall A_2^* \in \mathcal{A}_2^* : & \mathsf{Env}^*\langle\Pi_2^*, A_2^*\rangle &\equiv \psi^*(\mathsf{Env}^*)\langle\Pi_2, \phi_2^*(A_2^*)\rangle
\end{aligned} \tag{3}$$
$$\begin{aligned}
&\forall \mathsf{Env} \in \mathcal{E} : \forall A_2 \in \mathcal{A}_2 : & \mathsf{Env}\langle\Pi_2, A_2\rangle &\equiv \psi^*(\psi(\mathsf{Env}))\langle\Pi_2, A_2\rangle\\
&\forall \mathsf{Env}^* \in \mathcal{E}^* : \forall A_2^* \in \mathcal{A}_2^* : & \mathsf{Env}^*\langle\Pi_2^*, A_2^*\rangle &\equiv \psi(\psi^*(\mathsf{Env}^*))\langle\Pi_2^*, A_2^*\rangle
\end{aligned} \tag{4}$$
Note that the constructions (1) together with the equivalence relations (2)–(4) define the commutative square in Figure 2 with equivalence guarantees for its individual elements, where for brevity the pairs (Env, Env*), (A_1, A_1*) and (A_2, A_2*) are defined through up or down arrows depending on the direction of traversal. As a result, the existence of ρ implies the existence of ρ*, and vice versa.
Theorem 2.
Let ψ : E → E*, ϕ_1 : A_1 → A_1* and ϕ_2 : A_2 → A_2* be constructions with semi-inverses ψ*, ϕ_1*, ϕ_2* that satisfy the equivalence relations (2)–(4). Then, Π_1 ⪯ Π_2 for the environments E if and only if Π_1* ⪯ Π_2* for the environments E*.
Proof. 
For the proof, we simply trace the equivalence square depicted in Figure 2.
Soundness. Assume that there exists ρ* : A_1* → A_2* such that
$$\forall \mathsf{Env}^* \in \mathcal{E}^* : \forall A_1^* \in \mathcal{A}_1^* : \quad \mathsf{Env}^*\langle\Pi_1^*, A_1^*\rangle \equiv \mathsf{Env}^*\langle\Pi_2^*, \rho^*(A_1^*)\rangle. \tag{5}$$
The equivalence relations (2)–(5) assure that
$$\begin{aligned}
\mathsf{Env}\langle\Pi_1, A_1\rangle &\equiv \psi(\mathsf{Env})\langle\Pi_1^*, \phi_1(A_1)\rangle\\
\psi(\mathsf{Env})\langle\Pi_1^*, \phi_1(A_1)\rangle &\equiv \psi(\mathsf{Env})\langle\Pi_2^*, \rho^*(\phi_1(A_1))\rangle\\
\psi(\mathsf{Env})\langle\Pi_2^*, \rho^*(\phi_1(A_1))\rangle &\equiv \psi^*(\psi(\mathsf{Env}))\langle\Pi_2, \phi_2^*(\rho^*(\phi_1(A_1)))\rangle\\
\psi^*(\psi(\mathsf{Env}))\langle\Pi_2, \phi_2^*(\rho^*(\phi_1(A_1)))\rangle &\equiv \mathsf{Env}\langle\Pi_2, \phi_2^*(\rho^*(\phi_1(A_1)))\rangle.
\end{aligned}$$
The claim follows as we can define ρ = ϕ_2* ∘ ρ* ∘ ϕ_1, and the transitivity of the equivalence relation proves the equivalence Env⟨Π_1, A_1⟩ ≡ Env⟨Π_2, ρ(A_1)⟩.
Completeness. Assume that there exists ρ : A_1 → A_2 such that
$$\forall \mathsf{Env} \in \mathcal{E} : \forall A_1 \in \mathcal{A}_1 : \quad \mathsf{Env}\langle\Pi_1, A_1\rangle \equiv \mathsf{Env}\langle\Pi_2, \rho(A_1)\rangle. \tag{6}$$
The equivalence relations (2)–(4) and (6) assure that
$$\begin{aligned}
\mathsf{Env}^*\langle\Pi_1^*, A_1^*\rangle &\equiv \psi^*(\mathsf{Env}^*)\langle\Pi_1, \phi_1^*(A_1^*)\rangle\\
\psi^*(\mathsf{Env}^*)\langle\Pi_1, \phi_1^*(A_1^*)\rangle &\equiv \psi^*(\mathsf{Env}^*)\langle\Pi_2, \rho(\phi_1^*(A_1^*))\rangle\\
\psi^*(\mathsf{Env}^*)\langle\Pi_2, \rho(\phi_1^*(A_1^*))\rangle &\equiv \psi(\psi^*(\mathsf{Env}^*))\langle\Pi_2^*, \phi_2(\rho(\phi_1^*(A_1^*)))\rangle\\
\psi(\psi^*(\mathsf{Env}^*))\langle\Pi_2^*, \phi_2(\rho(\phi_1^*(A_1^*)))\rangle &\equiv \mathsf{Env}^*\langle\Pi_2^*, \phi_2(\rho(\phi_1^*(A_1^*)))\rangle.
\end{aligned}$$
The claim follows as we can define ρ* = ϕ_2 ∘ ρ ∘ ϕ_1*, and the transitivity of the equivalence relation proves Env*⟨Π_1*, A_1*⟩ ≡ Env*⟨Π_2*, ρ*(A_1*)⟩. □
In many cases, the security definitions limit the resource consumption of the parties, e.g., the adversaries are polynomial; in such cases ψ, ϕ, ρ have to preserve these restrictions. We apply this theorem to show that we can hide the vast majority of technical details when analysing the security of a compound protocol. We split the construction into four major blocks. In Section 3.1, we show that certain attack techniques do not help the adversary when the protocol Π satisfies natural requirements on message scheduling. Thus, we can consider only a subset of adversaries, i.e., we partition A_1 and A_2 and choose a canonical representative for each class. As a result, the environment remains the same during the abstraction, and ϕ_1* : A_1* → A_1, ϕ_2* : A_2* → A_2 are identity functions.
In Section 3.2, we separate the state from the protocol participants and replace message passing with a shared memory. Again the environment remains the same, but now A_1* ⊄ A_1 and A_2* ⊄ A_2. As a result, we need to explicitly define ϕ_1* and ϕ_2*. In Section 3.3, we expose the internals of the ideal functionalities to further simplify the memory model and remove the share representation. We again explicitly define ϕ_1* and ϕ_2*. In Section 3.4, we define the abstract model by simplifying the environment to a simple representative class of environments. Fortunately, we can define ψ and ψ* so that the observational equivalences
$$\forall \mathsf{Env} \in \mathcal{E} : \psi^*(\psi(\mathsf{Env})) \equiv \mathsf{Env}, \qquad \forall \mathsf{Env}^* \in \mathcal{E}^* : \psi(\psi^*(\mathsf{Env}^*)) \equiv \mathsf{Env}^*$$
hold and the last pair of equivalence relations (4) follows directly.

2.4. Programmable Multiparty Computation

Most platforms for multiparty computation, see, e.g., [1,2,4], consist of secure storage and a system of primitive protocols operating on top of the storage. As a result, one can safely combine primitive instructions to implement any algorithm; thus we call such frameworks programmable. In this section, we formalise the building blocks and show how one can extend an existing secure computation instruction set with new primitives. Our work falls into a long line of MPC formalisations [14,25,29,52,53,54], where we focus on specifying programmable secure computation. First, we discuss storage properties; then we formalise two flavours of computations of the secure computation engine and define protection domains as our abstraction of a secure computation framework. For protection domains, we discuss their security and the natural conditions on environments and adversaries that we expect in the following security analysis.

2.4.1. Security of Distributed Storage Domains

A modular design of multiparty computation protocols requires the ability to store intermediate values. Secure storage can be built on top of different primitives, such as secret sharing, encryption, commitments, trusted hardware or a combination of different schemes. We develop the formalism for secret sharing; however, the abstract description for storing and retrieving values is universal. A secure storage domain δ is defined by two algorithms S_δ and R_δ which can use parameters from the shared setup F. The machine S_δ distributes an input x ∈ X_δ into shares. The machine R_δ converts shares back to the original value or returns a special failure symbol ⊥. Throughout the paper, we explicitly assume that the behaviour of S_δ and R_δ does not depend on previous queries.
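For concreteness, the sketch below instantiates S_δ and R_δ for plain additive secret sharing over a prime field, a storage domain that needs no shared setup F. The function names and modulus are our illustrative choices; later sketches in this section reuse these definitions.

```python
import secrets

P = 2**61 - 1  # a prime modulus for the value domain

def share(x: int, n: int) -> list[int]:
    """S_delta: split x into n additive shares modulo P; any n - 1 shares
    are uniformly random and reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """R_delta: recombine the shares. Plain additive sharing cannot detect
    tampering, so the failure symbol ⊥ never occurs here; robust or
    verifiable schemes would add that check."""
    return sum(shares) % P

x = 123456789
assert reconstruct(share(x, 3)) == x
```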
An adversarial structure A δ defines which subsets of parties can be corrupted without losing security properties. We define privacy and integrity properties for secure storage through observational equivalence. Our definitions are generalisations of privacy and recoverability of secret sharing schemes [55] and are tailored toward the concrete application of secret sharing as a storage domain.
Intuitively, a storage is hiding if no information about the stored value leaks from shares captured by the adversary. However, there are three subtle issues: First, the outcomes of S δ and R δ could leak information about private parameters or other shared values. Second, in security proofs we often want to simulate shares for some values and use the remaining shares without changes. Third, we need to specify what happens when the adversary corrupts more parties than expected. Formally, we define the hiding property through two collections B 0 and B 1 that have identical layout, see Figure 3a. Machines L and L * are for storing values and the corresponding shares. The state of L is a one-dimensional array s . The state of L * consists of a two-dimensional array s * for shares and a one-dimensional array b that specifies how the shares will be generated.
An adversary A can adaptively specify the values of s[ℓ] and b[ℓ], but each location can only be set once. The adversary A can also read and write the shares s*[k, i_k] of the corrupted parties P_{i_k}. A static adversary must send the list of corrupted parties to L* before any value is shared, while an adaptive adversary can issue corruption calls at any moment. When s*[ℓ, i] is queried and the location is uninitialised, L* initiates an update cycle. The machine L* always asks L to share the value s[ℓ] using S_δ in collection B_0. In collection B_1, the value s[ℓ] is shared only if b[ℓ] = 0. If b[ℓ] = 1, then L* asks a share simulator S_δ* to create the share s*[ℓ, i]. A share simulator S_δ* is an efficient and potentially stateful machine, which can query values s[ℓ] from L only after the set of corrupted parties no longer belongs to A_δ. Finally, A can place a reconstruction order for s*[ℓ]. The machine L* sends s*[ℓ] to R_δ if b[ℓ] = 0. Otherwise, L* first asks L to share the value s[ℓ] and then forwards the shares to R_δ. In both cases, R_δ sends the output back to A.
Definition 2
(Hiding storage). A storage domain δ is perfectly hiding if no adversary can distinguish configurations B 0 and B 1 . A storage domain δ is hiding for A if the advantage is negligible for any adversary from A .
Many secret sharing schemes do not use private setup parameters. As a result, different sharings are independent from each other and it is sufficient to prove the simulatability of a single sharing. In the case of adaptive corruption, the secret sharing scheme must be efficiently patchable [56], as the simulator must progressively disclose shares of an unknown value. The existence of a trusted setup F allows one to achieve integrity even with an honest minority. However, different shares are then correlated with each other due to the shared setup parameters, and we cannot reduce hiding to simpler security notions. In this case, we assume that S_δ* uses the setup parameters of the corrupted parties.
Note that the hiding property does not guarantee privacy throughout the entire period of computations. Instead, each storage domain can define A δ to specify which parties can be corrupted while still maintaining privacy. For instance, the adversary who corrupts P i learns its local state which is a separate trivial storage domain. Values in the public domain become visible as soon as the adversary corrupts some participant.
As the adversary can always change the shares under its control, the security of a compound protocol relies on the integrity of stored values. Robust secret sharing guarantees that values cannot be altered, while verifiable secret sharing allows corrupted values to be detected. Modification awareness for a storage domain δ is defined through an efficient extractor machine E_δ and an efficient operation ⊕_δ. Let x = (x_1, …, x_n) be the original secret sharing, where x_i is the share of P_i, let x̂ = (x̂_1, …, x̂_n) be its adversarial modification and let A be the set of corrupted parties. Then, the extractor gets (x_i)_{i∈A}, (x̂_i)_{i∈A} together with the setup parameters of A as input and has to output a difference Δ such that R_δ(x) ⊕_δ Δ = R_δ(x̂). We denote reconstruction failures by ⊥ and we expect that the modification operator is such that a ⊕_δ ⊥ = ⊥ and ⊥ ⊕_δ a = ⊥ for any a in the value domain. The modification function generalises the observation that in many MPC protocols adversarial modifications result in additive changes to the value [57].
Intuitively, a storage domain is modification aware if there exists a good extractor machine E_δ which cannot be fooled by an adversary. The success of an adversary is defined through a collection B_3 that also contains the machines L and L*, see Figure 3b. A two-dimensional array s* forms the entire state of L*. As before, an adversary A can adaptively specify the values of s[ℓ], but each value can be set only once. The adversary A has the power to corrupt parties and to read and modify the shares of corrupted parties by interacting with E_δ. The extractor E_δ just forwards this communication between A and L*. Queries to uninitialised locations s*[ℓ] lead to the same update cycle as in B_0, i.e., S_δ generates shares from s[ℓ]. During a share update query, E_δ additionally computes the difference Δ and sends a pair (ℓ, Δ) to L. As a response, L updates the value s[ℓ] = s[ℓ] ⊕_δ Δ and gives control back to E_δ. Each sharing in L* can be updated by A at most once. The limit on modification attempts eliminates trivial attacks where the adversary first invalidates its shares and then changes them back to the original values. For many storage schemes this would cause E_δ to fail, as E_δ is stateless and has no knowledge of the previous modification or share values, and thus has to assume that ⊥ ⊕_δ Δ = ⊥ for any Δ. The adversary A can also place reconstruction orders for s*[ℓ]. Given such an order, L* sends s*[ℓ] to R_δ, who sends the output back to A. The adversary A wins the game if the outcome of R_δ differs from s[ℓ] and the set of corrupted parties is in A_δ.
Definition 3
(Modification awareness). A storage domain δ is modification aware against a class of adversaries A if the advantage against the modification game is negligible for any A A .
For robust secret sharing, the extractor E δ always outputs the neutral element of the modification operator because the adversary cannot affect the shared value. Local storage is robust by definition as the adversary cannot alter local state before corrupting the party and after corruption the definition poses no restrictions to modifications. Extractor E δ for verifiable secret sharing can output or the neutral element because the adversary can either invalidate the shared value or create a different sharing of the same value. Therefore, for most cases the potential changes to the value are quite limited.
On the other hand, modification awareness does not guarantee that the adversary can efficiently find a share modification for any potential change Δ of the value. A two-way extractor E_δ can handle such requests. The success of an adversary against the two-way extractor E_δ is defined through a collection B_4 that has the same layout as B_3 in Figure 3b. An adversary A can adaptively initialise and update the values of s[ℓ]. For an update, A has to provide Δ to L. As a response, L sets s[ℓ] = s[ℓ] ⊕_δ Δ and sends (ℓ, Δ) to E_δ. Given (ℓ, Δ) from L, the extractor E_δ first fetches (x_i)_{i∈A} from L* and computes new shares (x̂_i)_{i∈A} for the corrupted parties such that R_δ(x) ⊕_δ Δ = R_δ(x̂). Finally, E_δ sends (x̂_i)_{i∈A} to L* and returns control to L, which returns control to A. The rest of the collection specification is identical to B_3. As before, A must issue a reconstruction order for a location s*[ℓ] at the end of the game. The adversary breaks the two-way extractor if the outcome of R_δ differs from s[ℓ] and the set of corrupted parties A is in A_δ.
Definition 4
(Limited control). A class of adversaries has a limited control over a storage domain δ if the advantage in the collection B 4 is negligible for any adversary from the class.
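Continuing the additive-sharing sketch from above, both directions of extraction are easy to realise there: the induced change to the stored value is exactly the sum of the share changes, and any requested Δ can be realised by shifting a single corrupted share. The function names are ours and the code only illustrates the definitions.

```python
def extract_delta(old: dict[int, int], new: dict[int, int]) -> int:
    """E_delta for additive sharing: the change to the stored value equals
    the sum of the changes to the corrupted shares (keys are party indices)."""
    return sum(new[i] - old[i] for i in old) % P

def apply_delta(shares: dict[int, int], delta: int) -> dict[int, int]:
    """The two-way direction: realise a requested change Δ by shifting one
    corrupted share, so reconstruction moves by exactly Δ."""
    patched = dict(shares)
    i = next(iter(patched))
    patched[i] = (patched[i] + delta) % P
    return patched
```

For a robust scheme, extract_delta would instead always output the neutral element, and for a verifiable scheme either the neutral element or ⊥, matching the discussion above.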

2.4.2. Canonical Description of Ideal Functionalities

An idealised computation F_p can be formalised in many different ways. We consider a decomposable functionality working on the data representation of the given protection domain and using the setup from the protection domain. In general, the inputs and outputs of an ideal functionality F_p may belong to several storage domains, e.g., some of them may be secret shared while others are local variables. The entire computational process can be split into rounds, where each round consists of three phases: reconstruction, computation and sharing, as in Figure 4. Machines S and R are functionalities for sharing and reconstruction which internally execute the functionalities S_δ and R_δ of the individual storage domains, and F is a combined setup procedure. Machine T_R gathers inputs and interacts with R to reconstruct the values. These values are passed to F_p*, which evaluates a stateful function and sends the outputs to T_S together with a storage domain for each output. T_S interacts with S to share the outputs. If a reconstruction fails, then F_p* also outputs ⊥, and the storage domain has to have a way to generate shares of ⊥. Most notably, there must be a canonical way to create shares of ⊥ for verifiable secret sharing. Note that it is always possible to define functionalities that ignore some inputs, in which case this condition may be unnecessary. However, if an input is used in the computation of the functionality, then a malformed input can only result in a malformed output; otherwise, the functionality leaks information or introduces selective failures. The adversary A can interact with F_p through T_R and T_S. Machines T_R and T_S can coordinate their actions through a receiver-clocked buffer between T_R and T_S. Note that S, R and T_R all receive setup parameters; however, there is an important distinction in that we expect only S and R to get the private parameters of the different parties, while T_R learns only the public parameters. The latter is necessary to correctly define F_p*, because the public parameters might, for example, specify the modulus for all computations.
The corruption mode for F_p is defined through communication between T_R, T_S and A. All responses to A must be computable from the inputs and outputs of the protocol instance and the setup parameters received by F_p. For robust protocols, A is disconnected from F_p. For fair protocols, A can send only abort signals to T_S but gets no information from F_p. For protocols without fairness, the adversary can see the corrupted parties' outputs before deciding to abort. After an abort, all parties get shares of ⊥ as output. Protocol instances inside F_p are distinguished by instance tags sent by the protocol participants I_i. All protocol instances are run concurrently and independently.
Definition 5
(Canonical ideal functionality). An ideal functionality F_p has a standard corruption mode if all outputs are generated by S and the adversary cannot learn anything about the shares of honest parties beyond what is revealed by the published values. A functionality F_p is in canonical form if it is a collection of T_R, R, F_p*, T_S and S with the internal structure specified above, has a standard corruption mode, and always outputs ⊥ if any input is ⊥.
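To make the round structure concrete, here is a minimal sketch of one canonical round on top of the additive sharing above, with T_R, F_p* and T_S collapsed into a single function. All names are ours, and the ⊥-handling is only schematic: the additive scheme treats every share vector as well-formed, so BOT never actually occurs there.

```python
BOT = None  # stands for the failure symbol ⊥

def canonical_round(fp_star, n_outputs, input_sharings, n_parties):
    """One round of a canonical functionality: reconstruct, compute, share.
    `reconstruct` is assumed to return BOT for malformed sharings, and the
    storage domain is assumed to provide a canonical sharing of BOT."""
    values = [reconstruct(s) for s in input_sharings]  # reconstruction phase (T_R with R)
    if any(v is BOT for v in values):
        outputs = [BOT] * n_outputs                    # ⊥ in implies ⊥ out
    else:
        outputs = fp_star(*values)                     # computation phase (F_p*)
    return [share(y, n_parties) for y in outputs]      # sharing phase (T_S with S)

# e.g. a shared multiplication functionality over two inputs:
out = canonical_round(lambda a, b: [(a * b) % P], 1, [share(3, 3), share(5, 3)], 3)
assert reconstruct(out[0]) == 15
```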

2.4.3. Canonical Description of Local Functionalities

Ideal functionalities define operations computed together with other parties. Each party may also perform local operations on its shares. In principle, all kinds of local operations are possible. However, the storage domain usually defines a set of meaningful local operations, namely those that correspond to well-defined operations on the shared values. For example, local linear operations are possible for linear sharing schemes. By definition, the output share of a local functionality depends only on the input shares of this party; therefore, local functionalities cannot be described as canonical ideal functionalities.
Definition 6
(Meaningful local operation). A local operation G_q implements a function g_q if (y_1, …, y_t) = g_q(x_1, …, x_s) for any input, where x_1, …, x_s are the values reconstructed from the input shares of G_q and y_1, …, y_t are the values reconstructed from the output shares of G_q.
In the following, we represent local operations as G_1, …, G_g, where each G_q is a collection of machines {G_{q,j}}_{j∈J_q} implementing local operations carried out by the parties in J_q. We assume that each local operation consists of a single round. More complex local computations can be implemented as a series of local computations.
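For the additive scheme above, scalar multiplication by a public constant is a meaningful local operation in the sense of Definition 6; the sketch below is again illustrative only.

```python
def local_scale(c: int, shares: list[int]) -> list[int]:
    """G_q for scalar multiplication: each party multiplies its own share by
    the public constant c, with no interaction between parties."""
    return [(c * s) % P for s in shares]

# Meaningful: the reconstructed value scales by exactly c.
x = 12345
assert reconstruct(local_scale(7, share(x, 3))) == (7 * x) % P
```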

2.4.4. Security of Protection Domains

A protection domain consists of storage domains and computation protocols Π_1, …, Π_k. For instance, a secure computation engine in the arithmetic black box model [15] is a specific protection domain with no explicit access to the stored values. A protection domain is also a refinement of the standard MPC deployment model [11] which divides participants into input, result and computing parties.
Let F_1, …, F_k be the canonical ideal functionalities that the protection domain should implement. We simplify the ideal functionalities by joining their sharing and reconstruction components. More formally, let R_u be a machine that has k port pairs for reconstruction. A query to the p-th pair is sent internally to the machine R that is part of the ideal functionality F_p, and the reply is routed back to the corresponding output port. Let S_u be the analogous extension of the machine S. Note that R_u and S_u have only a single port pair for F. Let F̂_p be the collection we obtain by removing the components R and S from F_p, as depicted in Figure 4b. Then, the ideal functionality for the protection domain is defined through the collection F̂_1, …, F̂_k, R_u, S_u where the F̂_p use only public parameters. Let the corresponding extended collection be denoted as F_pd. The collection F_1, …, F_k is observationally equivalent to F_pd provided that the trusted setup F sends the same setup parameters to all of F_1, …, F_k.
Definition 7.
A protection domain has a modular representation if the collections F⟨F_pd⟩ and F⟨F_1, …, F_k⟩ are observationally equivalent.
As a next step, we need to specify the class of reasonable environments. A typical compound protocol in an MPC platform takes shares as input and produces shares as output. The latter causes subtle issues. Note that a universally composable protocol must remain secure even if the adversary knows all inputs, while a verifiable secret sharing could be secure only if the adversary knows only a limited set of inputs. Consequently, universal composability is unachievable, as the adversary can alter shared values without detection. However, these protocols do not run in a generic environment, and therefore we should study their security in a more restricted setting where we separate the outside environment from the other computations that happen in the protection domain. This allows us to make reasonable assumptions about the visibility of the output shares. In the most general case, the best we can achieve in the presence of a shared setup is joint-state universal composability [24], while joint-state sequential composition is the absolute minimum.
We define the set of plausible environments through a Cartesian product {F} × E_o × P, where F is the trusted setup, P specifies all plausible compound protocols Π_e, and E_o specifies the outer environments Env_o in which Π_e might be executed. The inner environment Π_e⟨·⟩ specifies the computations done in the MPC framework, while Env_o⟨·⟩ is an outer environment representing the rest of the world in which the compound protocol should preserve security.
Definition 8.
A list of protocols Π_1, …, Π_k with a shared setup F is a secure protection domain if F⟨Π_e⟨Π_1, …, Π_k⟩⟩ ⪯ F⟨Π_e⟨F_pd⟩⟩ for any Π_e ∈ P and Env_o ∈ E_o, provided that Env_o⟨F⟨Π_e⟨Π_1, …, Π_k⟩⟩, A⟩ is a well-defined closed collection.
The signature of a protection domain ( E , P , F 1 , , F k ) determines what can be computed with the protection domain and what restrictions must be met to preserve security. The actual security guarantees are specified in terms of plausible adversaries A 1 and A 2 which depend on the environment. Each protection domain also specifies an adversary structure A δ which lists all sets of parties that the adversary can corrupt if we want to keep security guarantees. The adversary structure is limited by the secure storage domains of the protection domain and the security properties of the individual protocols in the protection domain. The corruption mode specifies the type of tolerated adversaries ranging from static semi-honest to adaptive malicious adversaries.

2.4.5. Secure Extension of Protection Domains

To modularise proofs, a protection domain is often defined through a minimal set of protocols Π_1, …, Π_k that implement ideal functionalities F_1, …, F_k. After that, each new primitive F_0 is added by defining a protocol Π⟨Π_1, …, Π_k⟩ and proving its security. To establish basic security, we need to prove F⟨Π⟨Π_1, …, Π_k⟩⟩ ⪯ F⟨F_0⟩. Such proofs usually follow a two-phase strategy where one proves
F⟨Π⟨Π_1, …, Π_k⟩⟩ ⪯ F⟨Π⟨F_pd⟩⟩ ⪯ F⟨F_0⟩.
The first step follows directly from the security definition of a protection domain. Thus, the bulk of the proof must be carried out in the hybrid execution model where the real protocol implementations are replaced with F_1, …, F_k.
To show the validity of the extension, we must analyse the extended protection domain Π⟨Π_1, …, Π_k⟩, Π_1, …, Π_k when using the ideal implementation F_pd. For that, we have to analyse compound protocols Π_e⟨Π⟨Π_1, …, Π_k⟩, Π_1, …, Π_k⟩. It is easy to restructure these compound protocols into observationally equivalent collections Π**⟨Π_1, …, Π_k, Π_1, …, Π_k⟩. For example, Π** can be obtained by joining P_i and P_i* in Figure 5. Formally, the list of protocols Π_1, …, Π_k, Π_1, …, Π_k may not be a secure protection domain, as each protocol occurs twice. It is easy to see that a secure protection domain is securely extendable provided that each protocol instance is independent of the other instances of the same protocol. The next theorem describes the conditions under which the second proof stage can be generalised.
Definition 9.
Protocols Π_1, …, Π_k with a shared setup F are a securely extendable protection domain if the list of protocols Π_1, Π_1, …, Π_k, Π_k with F is also a secure protection domain.
Theorem 3.
Let Π_1, …, Π_k be a securely extendable protection domain with a shared setup F. Let Π⟨Π_1, …, Π_k⟩ be as secure as an ideal functionality F_0. Then, Π⟨Π_1, …, Π_k⟩, Π_1, …, Π_k is a secure protection domain for compound protocols P provided that Π⟨F_1, …, F_k⟩ is a secure protection domain for the set of compound protocols P* = {Π_e⟨F_1, …, F_k⟩⟨·⟩ : Π_e ∈ P}.
Proof. 
Let us fix a target signature (E, P, F_0, …, F_k) for the extended domain, and let Π_e⟨·⟩ ∈ P be a compound protocol. Let A_1 be the set of adversaries against Π_e⟨Π⟨Π_1, …, Π_k⟩, Π_1, …, Π_k⟩. As we can restructure the compound protocol into the observationally equivalent form Π**⟨Π_1, …, Π_k, Π_1, …, Π_k⟩, we can replace all protocols with their ideal implementations, and we need to show
F⟨Π_e⟨Π⟨F_1, …, F_k⟩, F_1, …, F_k⟩⟩ ⪯ F⟨Π_e⟨F_0, …, F_k⟩⟩
for the environments E and adversaries A_2. By pushing F_1, …, F_k into Π_e, we obtain a compound protocol Π_e⟨F_1, …, F_k⟩⟨·⟩ ∈ P* that interfaces only with Π⟨F_1, …, F_k⟩, A ∈ A_2 and Env_o ∈ E_o. Indeed, the construction just removes the interface boundaries between Π_e and the F_i, and nothing else changes. By the assumptions, Π_e⟨F_1, …, F_k⟩⟨·⟩ is a valid compound protocol for Π⟨F_1, …, F_k⟩, and thus over the environments E
F⟨Π_e⟨F_1, …, F_k⟩⟨Π⟨F_1, …, F_k⟩⟩⟩ ⪯ F⟨Π_e⟨F_1, …, F_k⟩⟨F_0⟩⟩.
The latter completes the proof, as we can push F_1, …, F_k out of the compound protocol to obtain Π_e⟨F_0, F_1, …, F_k⟩. □
In other words, it suffices to analyse the security of Π⟨F_1, …, F_k⟩ against the adversaries A_2 implicitly defined by the definition of securely extendable protection domains with respect to the compound protocols P* and the environments E.

2.4.6. Restrictions to Environments and Adversaries

It is usually impossible to prove F⟨Π_e⟨Π_1, …, Π_k⟩⟩ ⪯ F⟨F⟩ for a canonical F if Π_e leaks the joint state to the environment Env_o. In particular, no shares can pass the service interface between Env_o and Π_e, nor can Π_e send outputs that depend on the private setup parameters. We can enforce this constraint structurally by including dedicated protocols in the protection domain which allow parties to securely share and reconstruct values, i.e., protocols that are equivalent to securely applying S_u and R_u to inputs and outputs.
Let us consider the inner environment Π_e that shares the state with Π. Note that the inner environment Π_e can carry out the same actions as the protocol Π; we prefer this notation to clearly distinguish it from Env, which also represents the rest of the world and all actions possible there. In the following, an Env compatible with Π means Env_o⟨Π_e⟩, that is, the full environment against the protocol Π. Due to nesting, the same physical entity is represented by different machines in different collections, such as P_i* and P_i in Figure 5. In principle, a protocol participant can communicate with many machines from the inner environment Π_e. However, simple physical considerations suggest that each protocol node P_i should have only one parent node P_i* that provides inputs to P_i and receives its outputs. By duplicating protocols, we can always reach a configuration where P_i communicates only with a single parent node P_i*. We consider only inner environments P that satisfy this restriction. In addition, we assume that the state of P_i allows restoring all computations that occurred before it was corrupted (e.g., inputs and random choices are stored).
Definition 10
(Generic adversary). We call the class of adversaries against a protocol Π and the inner environment Π e generic if the only restrictions on the adversary are the port compatibility with the protocol Π, the inner environment Π e and the environment Env o .
A real-world adversary corrupts physical hardware or administrators, thus it is natural to assume that it will corrupt all machines hosted by it. Therefore, we assume that either all machines representing a party are corrupted or none are. Sometimes many logical protocol participants are represented by one physical party and in such cases it is also reasonable to assume that they are also all corrupted together.
Definition 11
(Coherent adversary). A coherent adversary always corrupts P i and P i * simultaneously, i.e., A sends a corruption call to P i immediately after P i * responds to a corruption call, or vice versa.
Lemma 1.
Any generic adversary can be extended to a coherent adversary provided that adversary structures for Π e and Π are compatible.

3. Results

In this section, we show the required transformations from the hybrid protocol to the abstract execution model. We show that any hybrid protocol satisfying some conditions can be translated to the abstract setting and vice versa. Therefore, we fulfil the requirements of Theorem 2 by defining ψ, ϕ_1, ϕ_2 and their semi-inverses and showing that the protocol designer only has to define ρ in the abstract execution model. We consider the protocol Π as a subprotocol of Π_e, which represents the rest of the computations in the protection domain, with the outer environment Env using and controlling the protection domain. The combination of Π_e and Env forms the class E_Π of environments for the protocol Π.
We define the transformation to the abstract model in small steps in order to make it clear where the different conditions come from and to make it easier to argue the correctness of these transformations. In Section 3.1, we show that it is sufficient to limit adversarial capabilities. In Section 3.2, we modify the protocol description and adversary to use a shared memory and limit adversarial actions to only modifications of real protocol messages. In Section 3.3, we show how to remove share representation and only give the adversary the access allowed by the limited control property of the storage domain. Finally, in Section 3.4 we arrive at the abstract execution model.

3.1. Minimal Requirements to Message Scheduling

In the following, we show that under certain natural restrictions on the protocol Π, the adversary's ability to influence the execution is rather limited. All attacks can be accomplished by only modifying the state of corrupted parties while keeping them running. This is the first substantial step towards the abstract model, as these attacks preserve the structure of the computations. In terms of the soundness theorem, we define a universal construction ϕ_lazy that achieves
$$\forall \Pi \in \mathcal{P} : \forall \mathsf{Env} \in \mathcal{E}_\Pi : \forall A \in \mathcal{A}_{\Pi,\mathsf{Env}} : \quad \mathsf{Env}\langle\Pi, A\rangle \equiv \mathsf{Env}\langle\Pi, \phi_{\mathrm{lazy}}(A)\rangle$$
where P is the set of protocols that use F_0, …, F_k and E_Π is the set of environments where the protocol Π is intended to run, meaning they contain the outer environment Env and the inner environment Π_e. The main result is shown in Theorem 4.

3.1.1. Basics of Protocol Execution

The formal description of a participant of Π is quite complicated, as it must be able to execute several instances of the protocol in parallel and correctly handle corruption queries from the adversary A . To simplify matters, we represent the participant P i as a collection of two machines I i and Z i , where I i interprets the original protocol without modifications and Z i models the effects of corruption by switching communication between the adversary A and other machines.
The corresponding collection is depicted in Figure 6 together with the numbering of the port pairs and the names of the machines connecting to them. The zeroth port pair between I_i and Z_i is for communicating with the adversary. The next k port pairs between I_i and Z_i are for calling out subprotocols. The last port pair between Z_i and I_i is for communicating with the inner environment Π_e. All these ports are directly matched with port pairs between Z_i and the corresponding external machines. Note that all our buffers come in pairs; thus we also use the shorthand b_p^+, b_p^- to specify the pair, where b_p^+ is outgoing from P_i and b_p^- is incoming to the party. This notation can be extended with additional indices if it is important to consider many parties or ideal functionalities at once.
When Z_i is in the honest state, it mediates communication between the matching ports. Every time Z_i receives a message, it writes the respective message and also clocks it to I_i. When receiving a message from I_i, it also gives control back to I_i after writing the message to the output port. When Z_i is corrupted, A can order Z_i to write a message to any of its output ports, and all messages arriving at the input ports of Z_i are forwarded to A. Additionally, A can issue a special Reveal message to I_i to receive the internal state of I_i. Each message sent between P_i and F_p is a triple (t_1, t_2, m) where t_1 specifies the instance of the protocol Π and t_2 the instance of the sub-protocol called by I_i. Similarly, a message sent between P_i and Π_e is a triple (t_1, t_2, m) where t_2 specifies the instance of the protocol Π and t_1 the instance of Π_e calling it.
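The corruption switch Z_i can be pictured as a small class; the deliver/write callbacks and method names are our illustrative assumptions rather than part of the formal model, which phrases the same behaviour in terms of ports and clocking.

```python
class CorruptionSwitch:
    """Z_i: mediates all traffic of the honest interpreter I_i."""

    def __init__(self, interpreter, adversary, write_out):
        self.interpreter = interpreter   # I_i, always runs the honest code
        self.adversary = adversary       # A, an object with a deliver() method
        self.write_out = write_out       # callback writing (port, msg) to an output buffer
        self.corrupted = False

    def on_message(self, port, msg):
        """A message arrives on one of Z_i's input ports."""
        if self.corrupted:
            self.adversary.deliver(port, msg)    # corrupted: A sees all traffic
        else:
            self.interpreter.deliver(port, msg)  # honest: forward and clock to I_i

    def order_write(self, port, msg):
        """Corrupted only: A orders Z_i to write msg to any output port."""
        assert self.corrupted
        self.write_out(port, msg)

    def reveal(self):
        """Corrupted only: the special Reveal message returns the state of I_i."""
        assert self.corrupted
        return self.interpreter.state
```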

3.1.2. Tight Message Scheduling

The structure of the subprotocols Π_1, …, Π_k determines what the adversary can do with ingoing and outgoing messages. For example, consider a protocol where P_i submits several inputs x_1, x_2, …, x_ℓ without any reply from F_p. Then, the adversary can trivially reorder the inputs by delaying messages. Although sequence numbers can be added to fix the intended order of messages, we still cannot guarantee the arrival of x_i. Only a reply to P_i after the ℓ-th input stops the flow of inputs which the adversary can reorder. Therefore, sending x_1, …, x_ℓ one-by-one has no theoretical benefit over a single message (x_1, …, x_ℓ). In practice, one could still stream these as individual messages, but this does not affect the theoretical communication model. The same argument invalidates the utility of piecewise release of outputs. As a result, neither P_i nor F_p should send a new message before they get a reply from their respondent. A reply fixes the time-point in a protocol after which an input or an output is committed and cannot be changed.
A protocol might include a party without its consent by sending outputs to P_i before it has sent any inputs. In practical protocol constructions, we always know when P_i is going to participate in a protocol, and thus we can always assume that all parties provide an input before receiving any outputs. The following definition summarises the minimal requirements for protocol constructions to be secure against network delays. Communication patterns of such protocols may still depend on inputs or outputs. For example, a party can submit an unbounded number of inputs that depend on previous replies.
Definition 12
(Tight message scheduling). An ideal functionality F_p has tight message scheduling if neither P_i nor F_p can send two consecutive messages to their recipient. Additionally, P_i must send the first message before receiving anything from F_p, and both F_p and P_i must know when the other stops sending messages for a given protocol instance.
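To make the alternation requirement concrete, here is a minimal Python sketch of a channel that rejects two consecutive sends from the same endpoint and forces the party to send first; the names TightChannel and send are illustrative only.

```python
# Sketch of the alternation imposed by tight message scheduling (Definition 12):
# neither endpoint may send twice in a row, and the party must send first.

class TightChannel:
    """Tracks whose turn it is on a (P_i, F_p) channel for one protocol instance."""

    def __init__(self):
        self.turn = "party"          # the party must send the first message

    def send(self, sender, message):
        if sender != self.turn:
            raise RuntimeError(f"{sender} would send two consecutive messages")
        # after a message the turn flips, so consecutive sends are impossible
        self.turn = "functionality" if sender == "party" else "party"
        return message

channel = TightChannel()
channel.send("party", ("t1", "t2", "x"))           # party commits its input
channel.send("functionality", ("t1", "t2", "ok"))  # reply commits the round
```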

3.1.3. Robustness against Malformed Inputs

As a corrupted party can arbitrarily deviate from a protocol specification, we must relate its messages to the state progression in the honest protocol run. For that, we show that a corrupted party can always run the interpreter honestly and deliver all messages from ideal functionalities to the interpreter instantly.
Definition 13
(Semi-simplistic adversary). An adversary is semi-simplistic if it fulfils the following conditions for every corrupted P_i.
(a)
The adversary clocks any outgoing buffer b_p^+ and any incoming buffer b_p^− connected to an honest party only when all incoming buffers b_p^− connected to corrupted parties are empty.
(b)
Upon receiving a message from Z_i that comes from Π_e, F_1, …, F_k, the adversary immediately orders Z_i to forward it to I_i without changes.
(c)
The adversary can send arbitrary messages to Π_e, F_1, …, F_k on behalf of P_i.
(d)
The adversary can fetch the state of I i .
(e)
The adversary gives no other orders to Z i .
Conditions (a)–(b) formalise instant message delivery that preserves the order of messages: if P_i receives m_1 before m_2, then I_i must receive m_1 before m_2. Furthermore, corrupted parties receive messages earlier than honest parties. Conditions (c)–(e) guarantee that the adversary cannot directly manipulate the state of I_i, thus I_i is running honestly.
Lemma 2.
For any adversary A, there exists an equivalent semi-simplistic adversary A*. The overhead in computational complexity can be arbitrary.
Proof. 
Adversarial behaviour. When a party P_i is not corrupted, A* just does whatever A does. Whenever A corrupts P_i, the new adversary A* also corrupts P_i and sends the Reveal message to get the internal state of I_i. After that, A* can internally simulate the interpreter by initialising it with this state. Let I*_i denote the corresponding virtual interpreter. Whenever A* gets an incoming message destined for the interpreter, it forwards it to I_i without changes. If I_i sends back a reply, A* deletes it. If I_i does not send a reply, then control still goes to A* when I_i stops. After that, A* passes the original message to A. Whenever A wants to send a message to I_i, A* sends it to I*_i instead. If I*_i sends back a reply, A* forwards it to A. If A wants to send a message to other parties, A* forwards it to Z_i. As a result, all incoming messages reach I_i without changes and right after being received by Z_i, while A* sends out exactly the same messages as A.
Modified clocking. To guarantee that A* can always keep a buffer b_p^− empty, we must make another modification. First, the adversary A* can keep the buffer b_p^− empty by clocking it immediately when F_p writes to it for a corrupted P_i. This fulfils conditions (a)–(b). The timing of I_i might change, but we only need to preserve the behaviour of A. For that, A* stores the messages and internally simulates the buffers b_p^− to A.
Complexity. The overhead in computational complexity consists of copying and running the interpreter: the state of I_i must be copied and its further actions must be simulated as I*_i. Simulated interpretation comes with at most a polynomial slowdown, as the state is of polynomial size. As the incoming messages of the interpreter are potentially altered by the actions of A, it is not guaranteed to terminate. Therefore, we can give no overhead bound in this construction. Nevertheless, A* is semi-simplistic. □
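The following Python fragment is a rough sketch of this construction, under the simplifying assumption that the interpreter is a deterministic object exposing receive and state; all class names are illustrative.

```python
# Rough sketch of the wrapper from Lemma 2: real messages are forwarded to the
# honest interpreter unchanged, while the inner adversary A plays against a
# cloned, virtual copy initialised from the revealed state.

import copy

class HonestInterpreter:
    def __init__(self):
        self.state = {"log": []}

    def receive(self, msg):
        self.state["log"].append(msg)    # honest processing, no reply here

class SemiSimplisticWrapper:
    def __init__(self, real):
        self.real = real
        self.virtual = copy.deepcopy(real)   # initialised from Reveal(state)

    def on_incoming(self, msg):
        self.real.receive(msg)               # condition (b): instant delivery
        return msg                           # A still sees the original message

    def on_inner_send_to_interpreter(self, msg):
        self.virtual.receive(msg)            # A's tampering hits only the clone

w = SemiSimplisticWrapper(HonestInterpreter())
w.on_incoming(("t1", "t2", "x"))
w.on_inner_send_to_interpreter(("t1", "t2", "bogus"))
print(w.real.state)      # honest run is unaffected by the tampering
print(w.virtual.state)   # the clone absorbed the adversarial message
```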
Adversarial actions may lead to unexpected inputs, and thus the interpreter I_i is not guaranteed to terminate. Overall, there are three ways to overload the interpreter. First, it may get unexpected messages from ideal functionalities or the environment; the interpreter should be able to ignore such messages. Second, the adversary might be able to trick the environment or an ideal functionality into sending overly long inputs to the interpreter; these attacks are harmless as long as the interpreter knows the maximal input length and ignores the rest. Third, the adversary might trick the interpreter into doing expensive local computations; this is a serious concern unless the amount of local computation is bounded.
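The three defensive measures can be summarised in a short sketch. The bounds MAX_LEN and STEP_BUDGET and the function robust_receive are illustrative assumptions; they merely mirror the three countermeasures listed above.

```python
# Sketch of the three defensive measures that make an interpreter robust
# against malformed inputs: drop unexpected messages, truncate over-long
# inputs, and cap the amount of local work per activation.

MAX_LEN = 1024          # maximal input length known to the interpreter
STEP_BUDGET = 10_000    # bound on local computation per activation

def robust_receive(expected_tags, msg, budget=STEP_BUDGET):
    tag, payload = msg
    if tag not in expected_tags:        # 1. ignore unexpected messages
        return None
    payload = payload[:MAX_LEN]         # 2. ignore everything past the bound
    steps, result = 0, []
    for item in payload:                # 3. abort if local work exceeds budget
        steps += 1
        if steps > budget:
            return None
        result.append(item)
    return result

print(robust_receive({("t1", "t2")}, (("t1", "t2"), list(range(5)))))
```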
Definition 14
(Robustness against malformed inputs). A protocol is robust against malformed inputs if the running time of the interpreter is polynomial for all semi-simplistic adversaries.
Corollary 1.
If a protocol is robust against malformed inputs, then semi-simplistic and generic adversaries are equivalent to each other.
Proof. 
The robustness guarantees that the construction introduced in Lemma 2 has a polynomial overhead. Each time A* invokes I_i, it is guaranteed to stop and pass control back to A*. As the number of times A* invokes I_i is bounded by the running time of A, the total running time of I_i can only be polynomially larger than the running time of A. In addition, any semi-simplistic adversary is also a generic adversary. □

3.1.4. Security against Rushed Execution

A semi-simplistic adversary may create messages that are dropped by recipients if they are not ready to process them. Let a tuple (i, p, t_1, t_2) denote the event where I_i writes a message (t_1, t_2, m) to the output port p, and let (i, p, t_1, t_2, ∗) denote the event where the recipient F_p writes a reply (t_1, t_2, m) to its output port. Note that for semi-simplistic adversaries the interpreters are always running honestly.
Definition 15
(Input and output signature). Let the input signature for a particular round of computations of a canonical ideal functionality F_p (Definition 5) be the set of parties that must provide inputs before T_R forwards the recovered inputs to F*_p, and let the output signature be the set of parties that receive output shares from T_S.
Definition 16.
We say that a round of computation is rushed if the ideal functionality F p executes the computation before some interpreter I i in the input signature has computed its input to this round. A protocol is secure against rushed execution for the set of environments E if no semi-simplistic adversary from the class of adversaries A can rush a round of computation.
The explicit limitation on the set of adversaries is necessary, as there may be a gap between the protocols that unbounded adversaries can rush and those that polynomial-time adversaries can rush. Usually, one considers only polynomial-time adversaries. Security against rushing allows us to avoid situations where semi-simplistic adversaries desynchronise a protocol by sending out messages (t_1, t_2, m) long before the event (i, p, t_1, t_2, ∗) takes place.
Definition 17
(Lazy adversary). A semi-simplistic adversary is lazy if it always waits for the (i, p, t_1, t_2, ∗) signal from I_i before clocking a message (t_1, t_2, m) out of the buffer b_p^+, and it always clocks at most one message with the right tags per signal.
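The following sketch illustrates the lazy clocking discipline, under the assumption that the (i, p, t_1, t_2, ∗) events are delivered to the adversary as explicit signals; the class LazyClock is an illustrative name.

```python
# Sketch of lazy clocking (Definition 17): a clock request for a message with
# tags (t1, t2) in buffer b_p^+ is buffered until the matching interpreter
# event has occurred, and each signal releases at most one message.

class LazyClock:
    def __init__(self):
        self.ready = set()      # events (i, p, t1, t2) already signalled
        self.pending = []       # clock requests waiting for their event

    def interpreter_event(self, i, p, t1, t2):
        self.ready.add((i, p, t1, t2))
        self.pending, held = [], self.pending
        for req in held:        # release requests whose event has now occurred
            self.clock(*req)

    def clock(self, i, p, t1, t2):
        if (i, p, t1, t2) in self.ready:
            self.ready.discard((i, p, t1, t2))  # at most one message per signal
            print(f"clocked ({t1},{t2}) out of buffer b_{p}^+ of party {i}")
        else:
            self.pending.append((i, p, t1, t2))

lc = LazyClock()
lc.clock(1, 2, "t1", "t2")              # too early: held back
lc.interpreter_event(1, 2, "t1", "t2")  # event occurs, message is released
```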
Lemma 3.
Assume that the ideal functionalities F_1, …, F_k and the protocol Π have tight scheduling. If a protocol Π is secure against rushed execution, then any semi-simplistic adversary can be converted to an equivalent lazy semi-simplistic adversary.
Proof. 
Assume that all ideal functionalities have unlimited buffering. Let A* be a modified adversary that internally runs the original adversary A and monitors which messages are written to outgoing buffers and when they are clocked. This allows A* to catch all events where A tries to clock a message (t_1, t_2, m) out of a buffer b_p^+ before the interpreter I_i has produced the event (i, p, t_1, t_2, ∗). For these events, A* catches the clocking signal and forwards it as soon as the event (i, p, t_1, t_2, ∗) occurs. Note that thanks to tight scheduling, the tags t_1, t_2 and the port uniquely determine the buffer and the message of the protocol, and it is clear which message can be clocked.
Such delays have no effect on the execution. If the round of computation could be completed before the event (i, p, t_1, t_2, ∗), this would be a prohibited rushing event. As we have security against rushing, T_R still waits for inputs when A* clocks (t_1, t_2, m). If A* never clocks a message, then the event (i, p, t_1, t_2, ∗) never occurs, and thus security against rushing guarantees that the corresponding round of computation is never completed or this party was not in the input signature. Consequently, the overall execution does not change.
In the general case, F_p might keep (t_1, t_2, m) after a delay while it is dropped in the original run. The reverse is not possible, as no message can take the place of the delayed (t_1, t_2, m), because tightness ensures that P_i has only one outstanding message for instance t_2 of F_p. As A* knows all messages sent and received by F_p, tight message scheduling guarantees that A* knows which rounds of computation are completed or pending. Therefore, A* can efficiently compute whether a message will be dropped or not. Thus, we can always convert A to a lazy adversary that never clocks a message that would be dropped.
Note that the communication between Π and Π_e is similar, except that Π is in the role of F_p (with tight scheduling), and we can only convert the adversary to lazy clocking if Π_e is secure against rushing. Otherwise, for a coherent adversary, we know that the adversary can only create messages in the buffers between a corrupted party in Π and Π_e. Therefore, delays can affect only corrupted buffers and, without loss of generality, we can assume that instead of clocking such a message the adversary performs the follow-up actions of the corrupted party in Π_e directly. □
Theorem 4.
If a protocol is robust against malformed inputs and is secure against rushed execution, then lazy semi-simplistic and generic adversaries are equivalent to each other.
Proof. 
From Corollary 1, we know that generic adversaries are equivalent to semi-simplistic adversaries. Lemma 3 proves that any semi-simplistic adversary can be transformed into a lazy semi-simplistic adversary. In turn, each lazy semi-simplistic adversary is also a semi-simplistic adversary. □
Theorem 5
(Characterisation of rushing). A semi-simplistic adversary can rush a round of computation π for a functionality with tight scheduling only if one of the following holds for a corrupted P i in the input signature.
(a)
The round π is not in the program code of the interpreter I i .
(b)
The interpreter I i needs an input from Π e to submit an input to π.
(c)
The interpreter I_i needs an output from a round of computation π′ to submit an input to π, and π′ is executed concurrently with or after π.
Proof. 
Let a party P_i be in the input signature of a rushed round of computation π such that its interpreter I_i computes the input for the round after π is completed, and let b_p^+ be the corresponding outgoing buffer. As T_R does not proceed without the input from P_i, the input had to be present in b_p^+. For honest parties, only I_i can create such inputs. Consequently, P_i must have been corrupted before the input to π was clocked.
There are two options why the interpreter I_i cannot produce the input before this clocking event. First, the program code of I_i never computes inputs for π, i.e., π is not scheduled. Second, I_i must be waiting for a message m to arrive through some incoming buffer b_q^− before it can compute the input to π. For semi-simplistic adversaries, the buffer b_q^− is always emptied before the clocking of b_p^+. Thus, the input m must be created by another round of computation π′ or by inputs from Π_e that are still incomplete. □
The possibility of (b) can be eliminated by the design of Π or Π_e. We can include Byzantine agreement in Π to make sure that no honest party starts π before the necessary inputs are received from Π_e, or we can restrict Π_e to give inputs in a manner that all parties who execute π receive their inputs in one go. Let F_0 be the canonical ideal functionality for Π. Then, we can analyse whether the adversary can rush F_0 in the configuration ⟨Env, F, F_0, F_1, …, F_k⟩ using Theorem 5. If rushing is impossible, then all corrupted parties in round π are guaranteed to get input from Π_e before executing π, and we need to exclude only the possibilities (a) and (c).
The majority of multiparty computation protocols operate with values that are secret-shared among all participants. Consequently, no computation round can be rushed if the protocol consists of sequential execution of subprotocols and some party is honest. In particular, if a protocol description is symmetric for all parties and the input functionalities are also symmetric, then the dependencies between rounds of computation are the same for all parties, and no round is computed without the inputs from honest parties.

3.2. Shared Memory and Simplistic Adversaries

Lazy semi-simplistic adversaries are quite restricted, as they clock messages to F_p according to the protocol specification. Still, they can send out more messages than are finally clocked. We define a shared memory model where such attacks can be carried out by modifying a limited set of memory locations, and call the corresponding adversary simplistic. In more formal terms, we define a ⋈-operator that acts on protocols and their components, together with a universal construction ϕ_⋈ and its semi-inverse ϕ*_⋈ that achieve
$$\forall \Pi \in \mathcal{P}\;\; \forall \mathsf{Env} \in \mathcal{E}_{\Pi}\;\; \forall \mathcal{A} \in \mathcal{A}^{\mathsf{lazy}}_{\Pi,\mathsf{Env}}: \quad \langle \mathsf{Env}, \Pi, \mathcal{A} \rangle \equiv \langle \mathsf{Env}, \Pi^{\bowtie}, \phi_{\bowtie}(\mathcal{A}) \rangle$$
$$\forall \Pi \in \mathcal{P}\;\; \forall \mathsf{Env} \in \mathcal{E}_{\Pi}\;\; \forall \mathcal{A} \in \mathcal{A}_{\Pi^{\bowtie},\mathsf{Env}}: \quad \langle \mathsf{Env}, \Pi^{\bowtie}, \mathcal{A} \rangle \equiv \langle \mathsf{Env}, \Pi, \phi^{*}_{\bowtie}(\mathcal{A}) \rangle$$
where P is the set of protocols satisfying the above restrictions that use F_0, …, F_k in a black-box way and E_Π is the set of compatible environments. The first part, constructing the memory model and the simplistic adversary, is shown in Theorems 6 and 7 through a ⋄-operator. This is extended to memory alignment in Theorem 8.

3.2.1. Interpreter Specification

The changes to the memory model alter the interpreters as well as the communication patterns. The interpreter I_i in Π is a universal random access machine with two special communication instructions DmaCall and Send. All instances of I_i share a program p. The internal state of I_i is a three-dimensional array s[t, δ, ℓ] where t specifies a protocol instance, δ a storage domain and ℓ a memory location. A protocol instance t can access only its slice s[t, ·, ·]. Initially, the state s is empty and there are no active protocol instances. Π_e launches a new protocol instance by sending a special message Init(t_1, t_2, δ, m) to I_i. Upon initialisation, I_i launches a new protocol instance t_2 with the input m of type δ and stores t_1 as the instance of the parent protocol.
An instruction DmaCall(t, p, α, β) initiates a query–response round where the vector α = ((δ_1, ℓ_1), …, (δ_u, ℓ_u)) specifies the memory locations to be assembled into a tuple m, and β = ((δ_1, ℓ_1), …, (δ_v, ℓ_v)) specifies the locations where the elements of the response tuple are stored. The respondent and its protocol instance are fixed by the port number p and the instance tag t. The instruction is not complete until the response arrives. An instruction Send(t, p, α) initiates an analogous communication without a response.
Tags t_1 and t_2 encode the instance of a caller and a callee in all triples (t_1, t_2, m). Therefore, the outcome of DmaCall and Send instructions depends on the port p. As ports 1, …, k are meant for subprotocols, outgoing messages must be of the form (t_1, t_2, m) where t_1 is the current protocol instance and t_2 fixes the instance of a subprotocol.
The remaining instructions formalise type-safe memory manipulation and conditional jumps. Conditional jumps in the program can occur only on public or local variables. A special local storage domain is used for storing the state of the instruction interpreter, including program counters and memory locations of incomplete DmaCall instructions. This state can be quite complex for interpreter types that concurrently execute several protocol instructions.
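A minimal Python sketch of this interpreter interface may clarify how DmaCall instructions are resolved against the state s[t, δ, ℓ]; the class Interpreter and its methods are illustrative, and the write-once check anticipates Definition 18 below.

```python
# Sketch of an interpreter instance: the state is the three-dimensional array
# s[t, delta, l]; DmaCall assembles a tuple m from the locations alpha and
# records beta as the locations where the reply is to be stored.

class Interpreter:
    def __init__(self):
        self.state = {}                  # s[(t, delta, l)] -> value
        self.parent = {}                 # instance -> parent instance

    def init_instance(self, t1, t2, delta, message):
        # Init(t1, t2, delta, m): launch instance t2 with input m of type delta
        for loc, value in enumerate(message):
            self.state[(t2, delta, loc)] = value
        self.parent[t2] = t1

    def dma_call(self, t, port, alpha, beta):
        # assemble the tuple m from locations alpha; beta receives the reply
        m = tuple(self.state[(t, delta, loc)] for delta, loc in alpha)
        return ("DmaCall", port, (self.parent[t], t, m), beta)

    def store_reply(self, t, beta, reply):
        for (delta, loc), value in zip(beta, reply):
            assert (t, delta, loc) not in self.state   # write-once discipline
            self.state[(t, delta, loc)] = value

ip = Interpreter()
ip.init_instance("t1", "t2", "shared", [3, 4])
print(ip.dma_call("t2", 1, [("shared", 0), ("shared", 1)], [("shared", 2)]))
```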
Definition 18.
A program p is well-formed if the following holds.
(a)
Each memory location s[t, δ, ℓ] can be assigned only once.
(b)
No instruction can read a memory location before it is initialised.
(c)
A new message with tag (t_1, t_2) is never written to the output port towards Π_e before reading a message with tag (t_1, t_2) from the input port from Π_e.
(d)
For instructions DmaCall(t, p, α, β) and Send(t, p, α), no other program instruction can read the memory locations in the vector α or write to the memory locations in β.
Note that after the locations α have been used as an input to some ideal functionality, only F and A are allowed to read them, and only F writes to the return locations β. For conceptual clarity and brevity, we consider only well-formed programs. Well-formedness is largely a property of the concrete implementation of the protocol; hence, we may also speak of a protocol with a well-formed implementation.

3.2.2. Shared Memory Model for Communication

Replacing message passing with shared memory allows us to merge the individual states of the interpreters into a consolidated memory and limits the adversary to working only with messages sent by I′_i. Let I′_i be a stateless interpreter, let F_io be a collection of the parts of the interpreters that deal with protocol inputs and outputs, and let M_i be a memory machine for storing the internal state of I′_i. We introduce F_io to make some follow-up steps of the transformation clearer later on. F_io contains the input–output modules of each interpreter in the protocol. When F_io gives an input to I′_i, it writes the input to memory and clocks a notification to I′_i. When F_io receives an output from I′_i, it reads the output from memory, writes it to the buffer towards Π_e and gives control back to I′_i. Let F′_1, …, F′_k be modified ideal functionalities with access to the shared memories M_1, …, M_n, as depicted in Figure 7.
The memory M_i stores the state of I′_i as a three-dimensional array s. Each buffer pair can be used to access and modify s. Given an input Fetch(t, δ, ℓ), M_i returns (t, s[t, δ, ℓ]). Given an input Set(t, δ, ℓ, m), M_i updates the state by setting s[t, δ, ℓ] = m and replies Ok(t). For convenience, there are also commands for block reads and writes. The communication between I′_i and F′_p goes through the exchange of memory locations. The interpreter I′_i translates DmaCall(t_2, p, α, β) into a message (t_1, t_2, α, β), where t_1 is the caller instance, α specifies the locations of the message components and β the locations for the reply. Send instructions are translated into triples (t_1, t_2, α). A recipient F′_p or F_io queries M_i to assemble the message m and processes the resulting triple (t_1, t_2, m) as in the original setting. When a reply (m_1, …, m_u) is generated, all elements m_j are stored to the memory locations s[t_1, δ_j, ℓ_j] and a special message (t_1, t_2, ϵ) is sent back. Upon receiving Init(t_1, t_2, δ, m) for P_i and protection domain δ, F_io writes the components of the input m to some default memory locations s[t_2, δ_j, ℓ_j] and sends Init(t_1, t_2, δ, α) to I′_i.
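The following sketch models M_i as a dictionary-backed machine that answers Fetch and Set queries; the error cases enforce the well-formedness conditions of Definition 18 and the names are illustrative.

```python
# Sketch of the memory machine M_i: a write-once, three-dimensional state
# s[t, delta, l] accessed through Fetch and Set commands.

class Memory:
    def __init__(self):
        self.s = {}

    def fetch(self, t, delta, loc):
        if (t, delta, loc) not in self.s:
            raise KeyError("read before initialisation violates Definition 18(b)")
        return (t, self.s[(t, delta, loc)])

    def set(self, t, delta, loc, value):
        if (t, delta, loc) in self.s:
            raise KeyError("double write violates Definition 18(a)")
        self.s[(t, delta, loc)] = value
        return ("Ok", t)

m = Memory()
m.set("t2", "shared", 0, 42)
print(m.fetch("t2", "shared", 0))   # -> ('t2', 42)
```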
The machine Z′_i simulates the original execution of the protocol Π to A. For that, Z′_i must simulate the missing machine Z_i and the buffers b_p^−, b_p^+ between Z_i and F_p. Let the buffers connecting I′_i to F′_p be c_p^+ and c_p^−, analogously to b_p^+, b_p^− between Z_i and F_p. Z′_i can communicate with M_i and clock the buffers c_p^+ and c_p^−, but cannot access M_i before the adversary A issues a corruption call. Z′_i modifies only the memory locations α of incomplete DmaCall(t, p, α, β) and Send(t, p, α) calls. Define Π^⋄ as an extended collection consisting of the machines I′_1, …, I′_n, M_1, …, M_n, F′_0, …, F′_k, F_io together with all buffers attached to these machines. Let the corresponding adversarial construction ϕ_⋄(A) be defined as a reduced collection consisting of A, Z′_1, …, Z′_n.
Theorem 6.
Let Π be a protocol with a well-formed implementation and let E_Π be the set of compatible environments. Then,
$$\forall \mathsf{Env} \in \mathcal{E}_{\Pi}\;\; \forall \mathcal{A} \in \mathcal{A}^{\mathsf{lazy}}_{\Pi,\mathsf{Env}}: \quad \langle \mathsf{Env}, \Pi, \mathcal{A} \rangle \equiv \langle \mathsf{Env}, \Pi^{\diamond}, \phi_{\diamond}(\mathcal{A}) \rangle$$
where A^lazy_{Π,Env} is the set of compatible lazy semi-simplistic adversaries.
Proof sketch.
Let us define another extended collection Π̂ that consists of the machines I′_1, …, I′_n, M_1, …, M_n, Z′_1, …, Z′_n, F′_0, …, F′_k, F_io together with all attached buffers. We get a trivial equivalence ⟨Env, Π̂, A⟩ ≡ ⟨Env, Π^⋄, ϕ_⋄(A)⟩ for any A and Env, as the two configurations contain the same machines grouped differently. Therefore, it is sufficient to prove that ⟨Env, Π, A⟩ ≡ ⟨Env, Π̂, A⟩ for any Env and A that meet the restrictions. Although the collections Π and Π̂ are quite different, there is a natural matching between the machines and their internal states. Initially, the internal states of F_p and F′_p coincide. The same is true for the interpreters I_i and I′_i, although their internal state s is stored in different locations. Therefore, we can run the standard bisimulation argument and show that the actions of A or Env cannot make the execution diverge into non-equivalent states.
First, define Z′_i so that it achieves the goal while ignoring the memory access restrictions. As M_i contains the entire state of I′_i, Z′_i can internally replicate all computations of I_i and fetch all messages sent by F′_p from M_i. Therefore, Z′_i can perfectly simulate Z_i and the buffers b_p^+ and b_p^− provided that the states of I_i and I′_i and of F_p and F′_p have not diverged yet. However, note that only the messages from and the responses to corrupted parties need to be read from the memory; thus, the same Z′_i can easily also satisfy the memory access restrictions.
As A is lazy semi-simplistic, we know that I′_i creates a message before F′_p reads the corresponding input. Therefore, I′_i translates the DmaCall(t_2, p, α, β) and Send(t_2, p, α) instructions before F′_p must fetch the corresponding input from the memory. Property (a) of a well-formed program guarantees that Z′_i can swap the values in the locations α just before it clocks the address tuple to F′_p. Property (d) guarantees that the change does not alter the further actions of I′_i. Consequently, F_p and F′_p always get the same inputs, and thus the executions of the ideal functionalities in the two worlds give the same results. Therefore, I_i and I′_i also get the same results from the ideal functionalities, and the simulation done by Z′_i is perfect. □
To show the full equivalence of the two execution models, we need to define the class of simplistic adversaries A_{Π^⋄,Env} against Π^⋄ and Env that are produced by ϕ_⋄, and then define a semi-inverse ϕ*_⋄ with the right properties.
Definition 19
(Simplistic adversary). An adversary is simplistic if it satisfies the following.
(a)
The adversary clocks any outgoing buffer b_p^+ and any incoming buffer b_p^− to an honest party only when all incoming buffers b_p^− to corrupted parties are empty.
(b)
The adversary can modify the state of the corrupted party only in the locations α of pending DmaCall(t, p, α, β) and Send(t, p, α) instructions. These changes are made before the corresponding tuple is clocked to F′_p, and each value is modified at most once.
Corollary 2.
For any lazy semi-simplistic adversary A and for any well-formed implementation of Π, the construction ϕ_⋄(A) is a simplistic adversary.
Proof. 
The way Z′_i alters the memory just before clocking b_p^+ in the proof of Theorem 6 guarantees that property (b) of a simplistic adversary is satisfied. The clocking rules for ϕ_⋄(A) are analogous to those of the semi-simplistic adversary A and are therefore satisfied, as ϕ_⋄(A) preserves clocking with respect to the matched buffers in the two configurations. □
We specify the semi-inverse through simulator machines Z*_i that go between Z_i and A. The simulator translates clocking signals, provides read-only access to the state of I_i and translates memory writes into actual protocol messages. As before, let ϕ*_⋄(A) be the reduced collection consisting of A, Z*_1, …, Z*_n.
Theorem 7.
Let Π be a protocol with a well-formed implementation and let E be the set of compatible environments. Then,
$$\forall \mathsf{Env} \in \mathcal{E}\;\; \forall \mathcal{A} \in \mathcal{A}_{\Pi^{\diamond},\mathsf{Env}}: \quad \langle \mathsf{Env}, \Pi^{\diamond}, \mathcal{A} \rangle \equiv \langle \mathsf{Env}, \Pi, \phi^{*}_{\diamond}(\mathcal{A}) \rangle$$
where A_{Π^⋄,Env} is the set of simplistic adversaries compatible with the protocol and the environment. The resulting adversary ϕ*_⋄(A) is lazy semi-simplistic.
Proof sketch.
Let us define another extended collection Π̂ that consists of the machines I_1, …, I_n, Z_1, …, Z_n, Z*_1, …, Z*_n, F_0, …, F_k together with all buffers attached to the machines. Then, it is sufficient to prove that the equivalence ⟨Env, Π^⋄, A⟩ ≡ ⟨Env, Π̂, A⟩ holds for any Env and A, as ⟨Env, Π̂, A⟩ ≡ ⟨Env, Π, ϕ*_⋄(A)⟩ by construction.
We can use the same natural matching between machines and their internal states as in Theorem 6. The equivalence is obvious when A does not corrupt M_i: the machine Z*_i just forwards all clocking signals and the equivalence follows from the construction of I′_i and F′_p. To fetch a value s[t, δ, ℓ], the machine Z*_i sends a Reveal(t, δ, ℓ) message to Z_i and forwards the response. After the first Reveal call, Z_i is corrupted and Z*_i will instantly forward the communication from F_p to I_i.
When A issues memory modification instructions, Z*_i can locally store all altered values s[t, δ_i, ℓ_i]. As the adversary A is simplistic, it alters only memory locations related to DmaCall(t, p, α, β) and Send(t, p, α) instructions. Let the tuples (t_1, t_2, m) and (t_1, t_2, α, β) denote the outcomes of I_i and I′_i for these instructions. Then, by construction, A can clock a tuple (t_1, t_2, α, β) only after (t_1, t_2, m) is written to b_p^+. Moreover, A does not change values in the locations α after (t_1, t_2, α, β) is clocked to F′_p. By property (d) of a well-formed implementation, these changes alter only the corresponding message m̂ assembled by F′_p and nothing more. Therefore, Z*_i must always check whether A has altered any of the locations α. If there are no changes, Z*_i clocks the message (t_1, t_2, m) to F_p. If there are changes, Z*_i orders Z_i to write the modified message (t_1, t_2, m̂) to b_p^+. After that, Z*_i can clock (t_1, t_2, m̂) and ignore the original message (t_1, t_2, m). As a result, F_p and F′_p get the same message at the same time and the equivalence is preserved. As A is simplistic, the clocking and modification rules carry over and the resulting adversary is a lazy semi-simplistic adversary. □

3.2.3. Memory Alignment and Protocol Specification

Let the global state gs be a four-dimensional array that combines the states of all machines M_1, …, M_n. More precisely, let gs[t, δ, ℓ, i] = s_i[t, δ, ℓ] for parties P_i in the storage domain δ and gs[t, δ, ℓ, i] = ϵ for parties P_i outside the domain δ. For brevity, let gs[t, δ, ℓ] denote the tuple (gs[t, δ, ℓ, 1], …, gs[t, δ, ℓ, n]).
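A small sketch of how gs is assembled from the per-party states may be helpful; the helper global_state and the domain map are illustrative assumptions.

```python
# Sketch of merging per-party states into the global state gs:
# gs[t, delta, l, i] = s_i[t, delta, l] for parties inside the storage domain
# and a placeholder epsilon outside it.

EPSILON = None          # stands for the empty entry of parties outside delta

def global_state(party_states, domain_members, t, delta, loc):
    """Return gs[t, delta, l] = (gs[t, delta, l, 1], ..., gs[t, delta, l, n])."""
    return tuple(
        party_states[i].get((t, delta, loc)) if i in domain_members[delta]
        else EPSILON
        for i in sorted(party_states)
    )

states = {1: {("t", "add", 0): 5}, 2: {("t", "add", 0): 7}, 3: {}}
print(global_state(states, {"add": {1, 2}}, "t", "add", 0))   # -> (5, 7, None)
```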
Definition 20
(Memory-alignment). A well-formed protocol uses an ideal functionality in a memory-aligned manner if each individual input is reconstructed from gs[t, δ, ℓ] and each output is shared to gs[t, δ, ℓ]. This restriction must hold regardless of adversarial behaviour.
Note that memory-aligned usage is a property of the implementation rather than of the protocol: it is easy to define implementations where memory locations are not aligned in subprotocol calls. However, simple incremental addressing is enough to achieve memory alignment for all ideal functionalities when there are no local operations. Unrestricted local operations can easily break the alignment if some parties carry them out and others do not. However, alignment could be achieved by giving a dedicated memory region to local computation outputs that are not inputs to some ideal functionality.
From now on, we treat local operations explicitly as external components rather than internal affairs of the interpreter. We represent local operations reading and writing to M as fragmented functionalities G_1, …, G_m where each G_q is a collection of machines {G_{q,j}}_{j∈J_q} implementing local operations. Formally, we need new interpreters I_i and memory machines M_i that have additional sender-clocked port pairs for communicating with G_1, …, G_m. To initiate an operation, I_i sends a tuple of locations (t, α, β) to G_q, which performs the computation and gives control back to I_i. As before, α determines the locations of the inputs and β the locations of the outputs in the state of M_i.
We add a sender-clocked buffer pair between G_{q,i} and F, as local computations may utilise the setup parameters of the respective party P_i. As a result, all local operations can be pushed out from I_i to G_{q,i} without changing the overall execution, as shown in Figure 8.
Definition 21
(Canonical protocol). Let F_1, …, F_k, F_io, G_1, …, G_m denote the ideal functionalities used in a well-formed protocol. A protocol specification is in a canonical form if
(a)
all conditional jumps are based on local values,
(b)
the remaining local operations are implemented with G_1, …, G_m, and
(c)
all ideal functionalities are used in a memory-aligned manner.
These conditions must hold regardless of the adversarial behaviour.
Similarly to the previous cases, any well-formed protocol can also be represented in the canonical form. Note that in a canonical protocol specification, the interpreter I_i can be isolated from the trusted setup F, as it runs no computations on its own. A canonical protocol specification guarantees that the protection domain F^pd operates with aligned memory locations. Therefore, the entire collection of memory modules M_1, …, M_n can be replaced by a single memory module M with the same set of port pairs that keeps the internal state gs to answer all queries. To reconstruct an input or store an output, F̂_p always addresses a block gs[t, δ, ℓ].
This allows us to define a collection F′^pd that meets the specification of F^pd by replacing R_u and S_u with machines R′_u and S′_u which directly communicate with the shared memory M. Given an input (t, δ, ℓ), the machine R′_u fetches gs[t, δ, ℓ], reconstructs the underlying value x and sends it back to M. Given an input (t, δ, ℓ, y), the machine S′_u computes shares for y and writes them to gs[t, δ, ℓ].
To preserve compatibility, we replace T_R and T_S with machines T′_R and T′_S that consolidate memory locations instead of messages. The machine T′_R collects location tuples (t_1, t_2, α, β) instead of incoming messages. When all tuples for a particular round of computation have arrived, T′_R extracts all unique input locations and reconstructs the inputs with the help of R′_u. After that, it sends the output locations to T′_S and proceeds as T_R. Whenever F*_p wakes up T′_S, it first clocks in a message from T′_R that contains the output locations and uses S′_u to share the outputs. To isolate T′_R and T′_S from the memory, we add a separate machine T_M between T′_R, T′_S and A. This machine T_M manages all cases where T′_R or T′_S give any values to A, by receiving the locations from T′_R or T′_S and retrieving the necessary values itself. For clarity, let us define F̂_p as the collection of T′_R, F*_p, T′_S, as we push T_M into the adversary in the next step.
Thanks to T_M, F̂_p can still leak inputs, outputs and their locations, or perform selective aborts based on values. The adversary gets limited access to the memory M through T_M. Note that we are working with the canonical ideal functionalities as specified in Section 2.4.2, which limits T_M and the subsequent adversaries to only read the shares of corrupted parties. F′_io is defined based on F_io with the difference that it writes the values to M and sends only the location information to I_i. Define Π^⋈ as an extended collection consisting of the machines I_1, …, I_n, M, F̂_1, …, F̂_k, G_{1,1}, …, G_{m,n}, F′_io with all attached buffers. Let ϕ_⋈(A) be the reduced collection consisting of A and all T_M. Therefore, the new adversary expects memory locations from the ideal functionality and fetches the values of corrupted parties from the memory M.
Theorem 8.
Let Π be a well-formed protocol specification in a canonical form and let E be the set of compatible environments. Then,
$$\forall \mathsf{Env} \in \mathcal{E}\;\; \forall \mathcal{A} \in \mathcal{A}_{\Pi^{\diamond},\mathsf{Env}}: \quad \langle \mathsf{Env}, \Pi^{\diamond}, \mathcal{A} \rangle \equiv \langle \mathsf{Env}, \Pi^{\bowtie}, \phi_{\bowtie}(\mathcal{A}) \rangle$$
$$\forall \mathsf{Env} \in \mathcal{E}\;\; \forall \mathcal{A} \in \mathcal{A}_{\Pi^{\bowtie},\mathsf{Env}}: \quad \langle \mathsf{Env}, \Pi^{\bowtie}, \mathcal{A} \rangle \equiv \langle \mathsf{Env}, \Pi^{\diamond}, \phi^{*}_{\bowtie}(\mathcal{A}) \rangle$$
where A_{Π^⋄,Env} is the set of compatible simplistic adversaries for Π^⋄ and A_{Π^⋈,Env} the corresponding set for Π^⋈. The resulting adversaries ϕ_⋈(A) and ϕ*_⋈(A) are simplistic.
Proof. 
The buffers between the interpreters I_j and the adversary are the same in both worlds, and the adversary accesses the same interpreter state through M_i and M. The differences are joining the modules M_i into M, introducing the G_{q,i} as separate functionalities and decomposing F′_p into T′_R, T′_S and T_M, which also interact with M.
All memory addresses are written once by a well-formed protocol, thus the outputs of G_{q,i} are never overwritten and the adversary can access them from M when needed. In addition, the buffers connecting G_{q,i} to I_i and M are sender-clocked, and therefore this separation is invisible to A. Hence, separating the local functionalities is equivalent to running them inside I_i, where I_i writes all outputs to M. As all ideal functionalities are canonical, we can replace all instances of S_u and R_u with S′_u and R′_u provided that we rewire the memory accesses to M.
The adversary ϕ_⋈(A) is simplistic, as it respects the clocking rules of the simplistic A and the construction changes neither the clocking nor the modified memory locations. Similarly, transforming A into an equivalent adversary in the other direction is straightforward and can be accomplished by obtaining the values that would be read from M from the communication with the ideal functionalities. □

3.3. Reduction to Abstract Memory Model

Simplistic adversaries are extremely restricted. They can alter a fixed set of memory locations and decide when ideal functionalities and interpreters perform their computations. As a protection domain F^pd consists of ideal functionalities F̂_1, …, F̂_k which use two universal machines R′_u and S′_u for reconstruction and sharing, it is straightforward to define an intermediate memory module M_0 that stores the reconstructed values. With additional simplifications, we can restate all operations in terms of M_0 and quantify memory modifications in terms of the underlying values instead of shares. In more formal terms, we define a *-operator that acts on protocols and their components together with a universal construction ϕ_* : A → A* and its semi-inverse ϕ*_* : A* → A that achieve
$$\forall \Pi \in \mathcal{P}\;\; \forall \mathsf{Env} \in \mathcal{E}_{\Pi}\;\; \forall \mathcal{A} \in \mathcal{A}: \quad \langle \mathsf{Env}, \Pi, \mathcal{A} \rangle \equiv \langle \mathsf{Env}, \Pi^{*}, \phi_{*}(\mathcal{A}) \rangle$$
$$\forall \Pi \in \mathcal{P}\;\; \forall \mathsf{Env} \in \mathcal{E}_{\Pi}\;\; \forall \mathcal{A}^{*} \in \mathcal{A}^{*}: \quad \langle \mathsf{Env}, \Pi^{*}, \mathcal{A}^{*} \rangle \equiv \langle \mathsf{Env}, \Pi, \phi^{*}_{*}(\mathcal{A}^{*}) \rangle$$
where P is the set of protocols with a canonical specification and E_Π is the set of compatible environments. The forward transformation is covered step by step throughout this section and the reverse direction is summarised in Lemma 12.

3.3.1. Introduction of Abstract Memory

We split the memory M of values and shares and introduce a dedicated memory module M_0 for the values. The internal state of M_0 is a three-dimensional array s_0 such that an entry s_0[t, δ, ℓ] contains the value corresponding to the shares gs[t, δ, ℓ] in M at all critical time-steps. Then, we place machines R*_u and S*_u between M_0 and M together with a pair of sender-clocked buffers for synchronisation, as depicted in Figure 9. Recall that T′_R only receives public parameters. Local variables and the state of each party are also stored in M_0. For that, we replace the pair of sender-clocked buffers between each interpreter I_i and M with a corresponding buffer pair between I*_i and M_0. M_0 allows the adversary to read and write the local variables of corrupted parties.
We synchronise the states s_0 and gs as follows. When a machine queries a share from a location (t, δ, ℓ) and gs[t, δ, ℓ] is empty or invalid, M writes Share(t, δ, ℓ) to the synchronisation buffer. Then, M_0 uses S*_u to share s_0[t, δ, ℓ] and gives control back to M, which uses gs[t, δ, ℓ] to complete the query. Whenever gs[t, δ, ℓ] is updated, M writes Update(t, δ, ℓ) to the synchronisation buffer. After that, M_0 uses R*_u to update s_0[t, δ, ℓ] and gives control back to M. Finally, M_0 must write Invalid(t, δ, ℓ) to the synchronisation buffer when the value of s_0[t, δ, ℓ] is updated. After this, M marks the location (t, δ, ℓ) as invalid and gives control back to M_0. This mechanism guarantees that M_0 and M always give coherent replies to other machines.
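The following sketch mirrors this synchronisation logic, using additive sharing purely as an illustrative assumption; regenerating shares, reconstructing values and dropping stale shares correspond to the three synchronisation messages Share, Update and Invalid.

```python
# Sketch of the synchronisation between the share memory M and the abstract
# value memory M_0, with additive sharing standing in for S_u^* and R_u^*.

import random

def share(value, n=3, mod=2**16):
    parts = [random.randrange(mod) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % mod)
    return parts                     # role of S_u^*

def reconstruct(parts, mod=2**16):
    return sum(parts) % mod          # role of R_u^*

class SyncedMemory:
    def __init__(self):
        self.values, self.shares = {}, {}   # s_0 and gs for one location set

    def query_shares(self, loc):
        if loc not in self.shares:          # empty or invalid: Share(t, d, l)
            self.shares[loc] = share(self.values[loc])
        return self.shares[loc]

    def update_shares(self, loc, parts):    # Update(t, d, l): refresh s_0
        self.shares[loc] = parts
        self.values[loc] = reconstruct(parts)

    def set_value(self, loc, value):        # Invalid(t, d, l): drop shares
        self.values[loc] = value
        self.shares.pop(loc, None)

mem = SyncedMemory()
mem.set_value(("t", "add", 0), 41)
print(reconstruct(mem.query_shares(("t", "add", 0))))   # -> 41
```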
As a result, we can connect F̂_p directly to M_0 instead of R′_u and S′_u without changing the execution; let the resulting functionality be F̂*_p. When T′_R queries (t, δ, ℓ), the machine M_0 replies with s_0[t, δ, ℓ]. When T′_S wants to share a value x to a location (t, δ, ℓ), the machine M_0 sets s_0[t, δ, ℓ] = x and marks the location (t, δ, ℓ) as invalid. The latter allows us to replace a sub-collection inside Π^⋈ without changing the execution outcome. More precisely, let F_0 be an extended collection consisting of F̂_1, …, F̂_k, R′_u, S′_u, M and let F_1 be an extended collection consisting of F̂*_1, …, F̂*_k, M_0, R*_u, S*_u, M.
Theorem 9.
Collections F_0 and F_1 are observationally equivalent for a well-formed protocol specification in a canonical form.
Proof. 
The substitution does not change the communication between the collection and F. Therefore, it is sufficient to show that the machines M, F̂*_1, …, F̂*_k run in the same order and provide identical replies to A and Π_e. The latter is sufficient, as communication with the other machines in the collection is sender-clocked and invisible to outside observers.
In particular, we need to show that when a machine queries gs[t, δ, ℓ], its value is the same in both collections. Assume that so far all queries have yielded identical results. In F_0, the response to T′_R is computed from gs[t, δ, ℓ], while in F_1 the machine M_0 responds with s_0[t, δ, ℓ]. By construction, the value of s_0[t, δ, ℓ] is refreshed whenever gs[t, δ, ℓ] is updated. Consequently, the reply s_0[t, δ, ℓ] contains the reconstruction of gs[t, δ, ℓ], as in F_0.
Assume that the location gs[t, δ, ℓ] of a new query is initialised. If T′_S was the last machine to update gs[t, δ, ℓ] in F_0, then it was also the last to update s_0[t, δ, ℓ] in F_1. Therefore, s_0[t, δ, ℓ] contains the correct value. By construction, the update marked the location gs[t, δ, ℓ] as invalid, and thus the reply yields shares generated by S*_u. For other machines, the memory cells are updated identically in both collections. If gs[t, δ, ℓ] is uninitialised in F_0, then no machine has updated this location. Thus, T′_S could not have initialised s_0[t, δ, ℓ] in F_1, nor could any other machine have initialised gs[t, δ, ℓ]. □

3.3.2. Extended Modification-Awareness

The adversary can modify the shares of corrupted parties and can therefore change the values in M_0. Here, we study under which assumptions the adversary itself can update s_0[t, δ, ℓ] after modifying the shares gs[t, δ, ℓ]. The concept of modification awareness (Definition 3) is meant to tackle this, but some interactions in the computations are not captured by the definition. We can apply the extractor from modification awareness in our current setting, but its success probability is not necessarily the same.
We define a collection F_2 where we place an extractor E between M_0, M and A, as in Figure 10. The extractor E forwards messages between A and M and simultaneously extracts the changes Δ corresponding to share modifications. These modifications are sent to M_0, which updates the state s_0 accordingly. A slightly modified machine M does not issue an update for a location s_0[t, δ, ℓ] when the extractor E alters the location gs[t, δ, ℓ].
Definition 22.
An adversarial modification of gs[t, δ, ℓ] is oblique if the extractor E fails to correctly update the value of s_0[t, δ, ℓ].
Theorem 10.
Collections F_1 and F_2 are observationally equivalent for an environment–adversary pair provided that the extraction failure probability is negligible.
Proof. 
As the machine E forwards the communication between A and M without modification, the states of the configurations can diverge only if the values of s_0[t, δ, ℓ] differ when M_0 replies to some query. It is straightforward to see that the latter occurs only if the extraction fails for gs[t, δ, ℓ]. □
Definition 23.
A local operation is transparent if any modification of its output shares can be effectively converted to a modification of its inputs, and an oblique modification is mapped to an oblique modification.
Common local operations—copying and linear combination—do not contribute to extraction failures, as they are reversible and thus oblique modifications can be back-propagated. To bound the probability of extraction failures, we need to reason about the environment Π_e and the protocol Π together, as oblique changes may enter the protocol through inputs. For instance, in verifiable storage domains, an adversary can corrupt a parent party in Π_e and invalidate its input shares for Π. As A knows the original shares, it can modify the invalid shares back to the originals inside Π. The resulting modification is oblique: by definition, M_0 holds ⊥ in that location, meaning that the modification function always results in ⊥ independently of the extractor outcome.
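A toy example with additive sharing shows both extraction and back-propagation through a linear local operation; the modulus, the helper names and the concrete operation y = a·x + b are assumptions made for illustration.

```python
# Toy illustration of modification extraction for additive sharing: when a
# simplistic adversary shifts a corrupted share by some delta, the extractor E
# can read that delta off and apply the same shift to the stored value s_0.
# Linear local operations are transparent: an output modification corresponds
# to a well-defined input modification.

MOD = 2**16

def extract_delta(old_share, new_share):
    return (new_share - old_share) % MOD        # role of the extractor E

old, new = 1234, 1300                            # adversary rewrites one share
delta = extract_delta(old, new)

s0 = 41                                          # value stored in M_0
s0_updated = (s0 + delta) % MOD                  # E keeps M_0 consistent

# back-propagation through a linear local operation y = a*x + b: a shift of
# the output by delta corresponds to shifting the input x by delta / a
a, b = 3, 7
input_delta = (delta * pow(a, -1, MOD)) % MOD    # a is odd, hence invertible
print(delta, s0_updated, input_delta)
```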
Non-canonical ideal functionalities are a potential source of oblique modifications. For example, consider a weird flavour of non-fair computation where the adversary learns the output shares of honest parties before it can invalidate some of them. To create an oblique modification, the adversary can later corrupt a party with an invalidated share and replace it with the original. Unconventional local functionalities can also create oblique modifications simply by invalidating the input shares: as the adversary knows the original shares, it can create an oblique modification by restoring the valid share.
Lemma 4.
Assume that all local operations are transparent and all ideal functionalities are in a canonical form. Assume that S_δ generates all protocol inputs for the domain δ and that the inputs of S_δ are determined by the adversary. Then, the probability of extraction failures in a well-formed program is negligible for simplistic adversaries provided that every storage domain is modification-aware.
Proof. 
Let A be a simplistic adversary that creates an extraction failure in a storage domain δ in some protocol instance with probability ε. Then, we can define a new adversary B against the modification awareness of the storage domain δ (see Figure 3b).
First, note that the adversary B can perfectly simulate S and R, as it has direct access to S_δ (inputting values through L and reading from L* through E_δ) and to R_δ in the modification game, and it can run the trusted setup for all other domains. As a consequence, B can perfectly simulate protocol inputs and the ideal functionalities F_p. The simulation of the protocol is straightforward, as the flow of the interpreters depends only on local values, and new local values can be created by ideal functionalities or local operations. As B learns the entire output of S_τ for protection domains τ ≠ δ, the shares gs[t, τ, ℓ] can be directly computed. The requested shares of δ can be computed by backtracking all dependencies to shares generated by S_δ and then redoing all local operations.
Next, we add back-propagation of modified shares for the domain δ. By definition, a simplistic adversary A can modify only the inputs of ideal functionalities. If a modified share was generated by S_δ, then this modification can be used to win the modification game. If the modified share was computed by a local functionality, then, due to transparency, B can always back-propagate the change to a change of one input of G_q, and an oblique change is guaranteed to remain oblique. This process can be repeated until B reaches a protocol input or an output of an ideal functionality F_p. These are the potential challenge shares generated by S_δ.
Each input modification leads to a modification of a potential challenge. The total number of such modifications is bounded by the number of inputs to ideal functionalities that have domain δ. Let the corresponding number be n. As we cannot check which modification is oblique, B must pick one of them at random as the challenge modification when it generates the shares. As a result, B succeeds in the modification awareness game with probability ε/n. The claim follows, as ε is negligible whenever ε/n is negligible. □
Corollary 3.
If all local operations are transparent, all ideal functionalities are in a canonical form and the environment is simulatable, then the probability of extraction failures in a well-formed program with modification-aware storage domains is negligible for simplistic adversaries.
Proof. 
The proof of Lemma 4 generalises to the class of environments that can be simulated in the modification awareness game. Simulation means running the environment, analogously to the protocol, as part of the modification game. Therefore, we require the assumption that Π_e does not leak the setup parameters or the joint state. In addition, in such a simulation we can notice the modifications to the inputs of Π done in Π_e that could be oblique. Such modifications can be used or propagated similarly to the modifications of the values of Π in Lemma 4. □

3.3.3. Meaningful Local Operations

Local operations cause another state update procedure where the information flows from the machine M to M_0. Meaningful local operations (Definition 6) produce output shares such that the state s_0 can be updated from itself. Note that a value s_0[t, δ, ℓ] in M_0 is read-only for F*_p. An adversary can obliviously update s_0[t, δ, ℓ] by sending differences Δ to M_0 through E. The canonical protocol description guarantees that F*_p reads s_0[t, δ, ℓ] only after the location gs[t, δ, ℓ] has been initialised. Thus, if a local operation G_q initialises gs[t, δ, ℓ], then it always completes before the read of s_0[t, δ, ℓ].
This allows us to define a new collection F_3 where machines G^0_q update the state s_0 instead of R*_u. Each G^0_q interacts with the interpreters {I_j}_{j∈J_q} and the abstract memory M_0, and receives public parameters from F. For that, we slightly modify the behaviour of I_i, M and M_0. Whenever I_i calls G_q, it also calls G^0_q after it gets the control back. The machine G^0_q immediately checks whether the operation has already been computed in M_0 or whether there are enough inputs in M_0 to complete the local operation in M_0. If all inputs are present and the operation has not been executed yet, then G^0_q reads the inputs from s_0, computes the outputs according to g_q and writes them to s_0. The modified M does not issue an update for a location s_0[t, δ, ℓ] when the functionality G_q alters the location gs[t, δ, ℓ], but otherwise behaves as before.
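The behaviour of G^0_q can be sketched as follows; the operation g_q and the class AbstractMirror are illustrative stand-ins for a deterministic local operation mirrored on M_0.

```python
# Sketch of mirroring a local operation on the abstract memory (collection F_3):
# after the share-level machine G_q runs, its counterpart G_q^0 recomputes the
# same deterministic function g_q directly on the values in s_0, provided all
# inputs are present and the output has not been computed yet.

def g_q(x, y):
    return (x + y) % 2**16           # some deterministic local operation

class AbstractMirror:
    def __init__(self):
        self.s0 = {}                 # abstract memory M_0

    def local_op(self, in_locs, out_loc):
        if out_loc in self.s0:       # operation already reflected in M_0
            return
        if not all(loc in self.s0 for loc in in_locs):
            return                   # wait until all inputs reach M_0
        self.s0[out_loc] = g_q(*(self.s0[loc] for loc in in_locs))

mirror = AbstractMirror()
mirror.s0[("t", "add", 0)], mirror.s0[("t", "add", 1)] = 5, 7
mirror.local_op([("t", "add", 0), ("t", "add", 1)], ("t", "add", 2))
print(mirror.s0[("t", "add", 2)])    # -> 12
```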
Theorem 11.
Collections F_2 and F_3 are observationally equivalent provided that the well-formed protocol specification is in a canonical form and all local operations are meaningful and implement deterministic functionalities.
Proof. 
The modifications to the collection F_2 do not change the order of machine activations. Indeed, the changes in M eliminate only a ping-pong interaction between M and M_0, and the changes in I_i inject a ping-pong interaction between I_i and G^0_q. As the communication in these interactions is sender-clocked, the other machines can distinguish the collections only based on the states of M_0 and M. Clearly, the modifications do not change the state of M as long as the responses to queries gs[t, δ, ℓ] and s_0[t, δ, ℓ] remain the same in both collections. Therefore, it is sufficient to consider only the responses to queries s_0[t, δ, ℓ].
Only an ideal functionality F*_p (or F_io) can query s_0[t, δ, ℓ]. Therefore, it is sufficient to consider the case when G_q updates gs[t, δ, ℓ], as otherwise both collections behave identically. For well-formed programs, the value gs[t, δ, ℓ] is never fetched by F*_p before all interpreters have defined their shares. Although gs[t, δ, ℓ] may depend on several local operations, we know that all of them complete by that time. As the adversary never overwrites the inputs of local computations, we can inductively prove that the update chain gives the same result for s_0[t, δ, ℓ] as the reconstruction step in F_2. □
The result can be generalised to meaningful non-deterministic local functionalities. First, all inputs to G_q must have the correct distribution, or otherwise we cannot apply the definition. Second, the inputs of local functionalities with non-deterministic outputs must be independent. For instance, if the local functionality G_q is deterministic and we re-run it on the same shares, we get the same output, while g_q gives two independent outputs.

3.3.4. Isolation of Protocol Outputs

By construction, all protocol outputs are obtained by releasing shares of type gs[t, δ, ℓ] even if the outputs are locally private values. As a next step, we modify the construction so that all output shares are generated anew by S*_u. This is a major prerequisite for isolating the adversary from the memory M. We define a new collection F_4 in Figure 11 where F_io is replaced by R^+_u to construct inputs and S^+_u to create outputs. S^+_u always gives control back to the I_i that called it. R^+_u writes the value to M_0 as soon as sufficient shares have been received and always forwards the signal to the party I_i whose input was clocked to it. All the added buffers are sender-clocked. Send instructions from I_i are forwarded only to S^+_u. The behaviour of S^+_u differs from F_io. The machine S^+_u keeps a special cache L of published values which is initially empty. For each input Send or DmaCall from I_i, it extracts all locations (t, δ, ℓ) of output shares. If a triple (t, δ, ℓ) ∉ L, the machine S^+_u fetches the corresponding value from M_0, computes the shares and stores them into L[t, δ, ℓ]. If (t, δ, ℓ) ∈ L, then it uses the shares in L[t, δ, ℓ].
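The caching behaviour of S^+_u can be sketched as follows; the additive share helper and the class OutputSharer are illustrative assumptions.

```python
# Sketch of the output cache L kept by S_u^+ (collection F_4): output shares
# are generated anew from the value in M_0 the first time a location is
# published and reused afterwards, so repeated releases stay consistent.

import random

def share(value, n=3, mod=2**16):
    parts = [random.randrange(mod) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % mod]

class OutputSharer:
    def __init__(self, abstract_memory):
        self.m0 = abstract_memory    # read-only view of M_0
        self.cache = {}              # the cache L of published values

    def publish(self, loc):
        if loc not in self.cache:    # first release: share the value freshly
            self.cache[loc] = share(self.m0[loc])
        return self.cache[loc]       # later releases reuse the cached shares

s = OutputSharer({("t", "add", 2): 12})
assert s.publish(("t", "add", 2)) == s.publish(("t", "add", 2))
```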
Definition 24.
A protocol–environment pair ⟨Env, Π⟩ is in an output-isolated configuration for a class of adversaries A and resource consumption constraints if there is a construction ϕ such that ⟨Env, F_3, A⟩ ≡ ⟨Env, F_4, ϕ(A)⟩ for all A ∈ A and the construction ϕ satisfies the resource constraints. A protocol Π is output-isolated if this property holds for any environment.
Any protocol where all outputs are returned by a deterministic S (e.g., local values) or where no shared outputs are returned is output-isolated. The value L[t, δ, ℓ] is guaranteed to have the same distribution as gs[t, δ, ℓ] when the outputs are computed by an ideal functionality. However, the behaviour of F_p may still reveal the discrepancy. For instance, A may decide whether to abort or continue based on the output shares of honest parties if F_p can reveal them to A. The latter creates a discrepancy that is impossible to resolve in F_4; it is also an explicit weakness of the protocol. Any protocol can be converted to an output-isolated protocol by resharing the outputs before returning them. Hence, almost all functionalities used in practice can be implemented with isolated outputs.
Lemma 5.
A canonical well-formed protocol specification is output-isolated for coherent adversaries if all outputs are computed by ideal functionalities with the standard corruption mode, the output shares are not used further in computations, and they are immediately returned as outputs.
Proof. 
It is straightforward to see that R^+_u in F_4 processes inputs identically to F_io in F_3. Therefore, we need to consider only the output generation. In F_3, output generation starts when F*_p writes its output to s_0[t, δ, ℓ]. At some point, the adversary clocks the corresponding Ok message from F*_p to some I_i. Then I_i writes a DmaCall or Send message to push its share of gs[t, δ, ℓ] to Π_e, and F_io fetches the corresponding part of gs[t, δ, ℓ]. By construction, gs[t, δ, ℓ] is created by S*_u from s_0[t, δ, ℓ]. In F_4, I_i sends the DmaCall or Send message to S^+_u, which uses L[t, δ, ℓ] instead of gs[t, δ, ℓ] to create the output. By construction, gs[t, δ, ℓ] and L[t, δ, ℓ] have the same distribution. However, there is a discrepancy between gs[t, δ, ℓ] and the protocol output.
Note that only A can observe the discrepancy, as honest parties do not use output shares in further computations. A coherent A has also corrupted the parent node P*_i in the inner environment and thus can fetch the shares gs[t, δ, ℓ] from the outputs sent to P*_i. For that, we must guarantee that the outputs arrive before A wants to read the i-th share of gs[t, δ, ℓ]. The latter is straightforward, as the output shares are published as soon as they become available to A, and A can always clock the output to P*_i. □
Lemma 6.
Any protocol is output-isolated for the class of environments that do not use randomised output shares at all.
Proof. 
Note that the changes introduced by F_4 alter only the output shares given to the environment and only for a non-deterministic S. As a result, the adversary can compute the protocol outputs based on the state of the corrupted party P_i instead of the actual protocol output reaching the parent node P*_i. This hides the discrepancy between F_3 and F_4, as these shares are never used in any other protocol. □
Lemma 7.
Any protocol where output shares have the same distribution as freshly generated shares is output-isolated for the class of adversaries that do not access the protocol state at all.
Proof. 
By assumption, all protocol outputs are identically distributed in F_3 and F_4. As the adversary A learns nothing about the protocol state, it cannot detect the discrepancy; nor can the adversary alter the protocol execution, as it cannot alter the protocol state. □
Theorem 12.
Let F be a canonical ideal functionality that does not leak shares to the adversary and let Π be a protocol that is as secure as F for a class of environments E and adversaries A. Then, Π is also output-isolated for the same classes of adversaries and environments.
Proof. 
Let A ∈ A be an adversary against the original protocol–environment pair ⟨Env, Π⟩, and let ϕ be the construction that proves the security of the protocol. Then, ϕ(A) is an ideal adversary that defines protocol inputs and receives protocol outputs in F. As the adversary ϕ(A) does not learn values from the ideal functionality F, its interface is compatible with ⟨Env, Π⟩. By definition, Π is equivalent to F for all adversaries that behave honestly. Indeed, if an adversary B does not corrupt parties, then ϕ(B) cannot corrupt parties either. The same must be true for the adversary ϕ(A): although ϕ(A) corrupts some parties and alters their inputs, these parties remain honest in the protocol. As a result, ⟨Env, Π, A⟩ ≡ ⟨Env, Π, ϕ(A)⟩. The claim follows, as ϕ(A) satisfies the assumptions of Lemma 7. □
A protocol is secure if any adversary can be converted to an adversary against the ideal implementation. Therefore, all adversaries against F_4 can be modified to adversaries that do not read the output shares from M at all. This means that output-isolation is necessary for security in general. The intuition is that since these shares are updated during outputting and not used in the protocol, reading them is only relevant in Π_e. Thus, in the following we consider only such adversaries.

3.3.5. Complete Memory Isolation

The transformation defining protocol output isolation in the previous section cuts the last link between the memory M and the environment Env . Only the actions of the adversary A can now depend on gs ; all local computations and ideal functionalities only use s 0 . Thus, honest execution does not require gs , and in this section we show how A can simulate gs for hiding storage domains. This lets us remove M from the protocol description completely.
First, we minimise the number of memory locations accessed by A . Memory locations gs [ t , δ , ] can be divided into three classes: protocol inputs, outputs of ideal functionalities, and outputs of local computations. A coherent adversary can always extract protocol inputs from the parent node when the latter submits them to R u + . Simulation of local computations G q is straightforward provided that we know the inputs of G q . Only the outputs of ideal functionalities must be fetched from the memory M . In the following, we construct an adversary that reads from M only the values written by ideal functionalities and does not touch other memory locations.
Let A o be the class of adversaries that only read the outputs of F p from M . More formally, let A o ∈ A o be the new adversary that internally runs A and clones the interpreters I 1 , … , I n to answer adversarial queries about corrupted shares of gs [ t , δ , ] . To reconstruct the state of a newly corrupted party P i , A o has to redo all computations done by P i so far. For that, A o must get all protocol inputs submitted by P i * . We use the assumption that the state of P i * contains enough information for A o to repeat all computations, so the input extraction becomes trivial. The adversary A o must also access the local variables and outcomes of random choices of P i stored in M 0 . The output shares of F p are queried from M as usual. As a result, A o can rerun a clone of I i to recreate the state of P i . In addition to the real values from M , A o uses the inputs known to it to fill the simulated memory with the protocol inputs and local computation outputs that A can access.
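The replay idea can be pictured with a small sketch. The data layout and program encoding below are hypothetical stand-ins for the interpreter I i and its instruction list: once the protocol inputs, the recorded random choices from M 0 , and the outputs of F p read from M are known, every remaining state variable of a corrupted party is a deterministic function of these values.

```python
# Hedged sketch of the replay performed by A_o; names are illustrative only.
def replay_party(program, inputs, random_tape, func_outputs):
    """Recompute a corrupted party's state from values the adversary can obtain."""
    sources = {"input": iter(inputs),        # extracted from the parent node P_i*
               "random": iter(random_tape),  # random choices stored in M_0
               "ideal": iter(func_outputs)}  # outputs of F_p, the only reads from M
    state = {}
    for var, kind in program:                # each step consumes one known value
        state[var] = next(sources[kind])
    return state

# Example: a three-step program touching each source once.
state = replay_party([("a", "input"), ("r", "random"), ("b", "ideal")],
                     inputs=[7], random_tape=[3], func_outputs=[21])
```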
Lemma 8.
Assume that we can rerun all computations for any corrupted parent node. Then, Env F 4 , A ≡ Env F 4 , A o for any coherent simplistic adversary A .
Proof. 
By construction, A o repeats all computations of corrupted parties as necessary and therefore answers every query of A with exactly the value A would have read from M , while reading only the outputs of F p from M itself. □
Clearly, the outcome of Env F 4 , A o depends only on the locations gs [ t , δ , ] that are read by the adversary, namely the outputs of the ideal functionalities F p . As these locations are directly updated by S u * , we can remove the remaining update mechanisms for protocol inputs and local operations. Let F 5 be a simplified collection where R u + does not update M and is not connected to it. Interpreters I 1 , … , I n activate only G 1 0 , … , G m 0 , and the machines G 1 , … , G m are removed. Note that while the shares are not read, the adversary can still modify all inputs of ideal functionalities and the outputs returned to Π e . Therefore, we disconnect E from M and join it to A o so that A o can supply the extractor with the recomputed values that should be in the memory M . Let A o E be the new adversary combining A o and E , and let A o E be the respective class of adversaries that submit direct coherent modifications to M 0 and M .
Lemma 9.
For any A o ∈ A o defined as above, we have Env F 4 , A o ≡ Env F 5 , A o E .
Proof. 
By definition, A o reads only the outputs of F p from M , and these are not affected by the changes, as they are written after F p executes. Nothing other than the values in M is affected by the transformation to F 5 . Note that the extractor E still sees valid shares and works in the same manner as before; it is merely considered to be part of the adversary. □
To shield M completely from A o E , we use the simulator S δ o from the hiding property definition (Definition 2) and assume that the adversary A does not read the output shares, as discussed for output-isolation. Let A l c be the adversary that runs A o E internally but uses the simulators S δ * for each storage domain instead of the values in M . We denote by A l c the class of limited control adversaries (Definition 4) that do not interact with M .
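For intuition, such a share simulator is trivial for schemes whose unqualified share sets are uniform. A minimal sketch, assuming plain additive sharing rather than the abstract Definition 2:

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus

def simulate_corrupted_shares(corrupted):
    """S_delta-style simulator: with additive sharing, any set of at most n-1
    shares is uniformly distributed regardless of the secret, so the simulator
    samples uniformly without ever touching the values in M_0."""
    return {i: secrets.randbelow(P) for i in corrupted}

fake = simulate_corrupted_shares({0, 2})  # indistinguishable from real shares
```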
Lemma 10.
Let Env be an environment that uses storage domains without private parameters. Then, Env F 5 , A o E ≡ Env F 5 , A l c for any adversary A o E ∈ A o E that does not read the output memory locations and the adversary A l c defined above from A o E .
Proof. 
W.l.o.g., we can consider only the case where the protocol uses a single storage domain δ , as there are no parameters that might be shared between domains. Define an adversary B against the hiding games so that the first hiding game is equivalent to Env F 5 , A o E and the second to Env F 5 , A l c . First, note that B needs to internally simulate all computations of Env . As Env does not use storage domains with private parameters, B can internally evaluate S δ and R δ for any storage domain δ and recreate all computations inside Env . Second, note that the machines I 1 , … , I n , F 1 , … , F k , G 1 0 , … , G m 0 , M 0 , E , S u + , R u + form a subcollection Z that interacts with M through the machine S u * . The adversary A can clock some buffers inside Z and interact with F 1 , … , F k and E . After simulating Env , the adversary B can produce all inputs to the protocol Π .
For the outputs gs [ t , δ , ] of the ideal functionalities, the adversary B interacts with the hiding game. It corrupts locations when A o E issues corruption calls. When an ideal functionality F p wants to write x to gs [ t , δ , ] , B remaps the location ( t , δ , ) to κ and sets s 0 [ κ ] = x by interacting with L . It also sets b [ κ ] = 1 to mark that the value is inside the protocol scope. In one game this does not affect the outcome, but in the second it means that all shares will be simulated. When A o queries corrupted shares of gs [ t , δ , ] , the adversary B fetches the corresponding shares from L * .
The first hiding game is equivalent to Env F 5 , A o E , as the shares are generated by S δ . When B is in the second game, the shares of the protocol are simulated because b [ κ ] = 1 , and the second game is equivalent to the definition of Env F 5 , A l c . Most importantly, note that the adversary A o E modifying the memory locations does not affect the interaction with the hiding game: B always simulates the computations to learn the new value in M 0 , which it can put into L in the game. B succeeds because A o E supplies the coherent modification of the value together with the share modification for M 0 and M . Note that in case A o E uses the extractor, this proof also covers the case where the hiding property is somehow invalidated by the extraction. □
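The two games B plays can be pictured as follows. This is a hedged toy version that reuses the paper's names L , L * and b [ κ ] ; the additive scheme and the fixed party count are illustrative assumptions, and only unqualified share sets (at most N − 1 indices) are ever exposed, as the hiding property requires.

```python
import secrets

P, N = 2**61 - 1, 3  # illustrative modulus and party count

class HidingGame:
    def __init__(self, simulated):
        self.simulated = simulated       # game 0: real shares, game 1: simulated
        self.L, self.L_star, self.b = {}, {}, {}

    def write(self, kappa, x, in_scope):
        """F_p writes x to location kappa; b[kappa] = 1 marks protocol scope."""
        self.L[kappa], self.b[kappa] = x, in_scope
        if self.simulated and in_scope:  # simulated shares: pure randomness
            self.L_star[kappa] = [secrets.randbelow(P) for _ in range(N)]
        else:                            # real additive sharing of x
            sh = [secrets.randbelow(P) for _ in range(N - 1)]
            self.L_star[kappa] = sh + [(x - sum(sh)) % P]

    def corrupt_share(self, kappa, i):
        """B fetches corrupted shares from L*; at most N-1 indices are queried."""
        return self.L_star[kappa][i]
```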
The same proof can be generalised to storage domains with private setup parameters if the environment can be simulated inside the hiding games, meaning that all sharing operations inside the environment are carried out inside the hiding game with b [ κ ] = 0 . As before, this is possible if Π e does not reveal setup parameters that would invalidate the hiding property. For the adversary A l c , the memory M has become meaningless; thus, we define the limited control execution model F 6 from F 5 by removing the connections between M 0 and M , including S u * . This setup is shown in Figure 12.
Lemma 11.
Env F 5 , A l c ≡ Env F 6 , A l c for any A l c ∈ A l c .
Proof. 
The claim is trivial, as A l c does not communicate with the memory M or with S u * . □
Corollary 4.
Any coherent adversary against the hybrid model can be transformed into an equivalent coherent limited control adversary against the same protocol in F 6 for hiding and modification-aware storage domains and for well-formed protocols that are in a canonical form and use meaningful transparent local operations.
Proof. 
We start from a coherent adversary, which loses no generality compared to a generic adversary (Lemma 1). In Lemma 2 and Corollary 1, we show that if a protocol is robust against malformed inputs, then any generic adversary is equivalent to a semi-simplistic adversary. In Lemma 3 and Theorem 4, we show the equivalence of semi-simplistic and lazy semi-simplistic adversaries if all functionalities have tight scheduling and the protocol is secure against rushing. In Theorem 6 and Corollary 2, we conclude that it is sufficient to consider only simplistic adversaries when running well-formed programs. We then show that we can separate the value and share memories (Theorems 8 and 9) and limit their interactions (Theorems 10 and 11). Assuming output isolation, we continue the simplifications in Lemmas 8–10 to move from real shares to simulated shares. Finally, in Lemma 11 we conclude that it suffices to consider the limited control execution model. □

3.3.6. From Limited Control to the Hybrid Model

Lemma 12.
Any coherent limited control adversary against F 6 can be transformed to an equivalent coherent simplistic adversary against the same protocol in F 0 for storage domains with limited control and negligible extraction failure, and for well-formed protocols that are in a canonical form and use meaningful transparent local operations.
Proof. 
Theorems 8, 10 and 11 guarantee that F 0 , F 1 , F 2 and F 3 are observationally equivalent for any adversary and any well-formed protocol specification in a canonical form that uses meaningful local functionalities and storage with negligible extraction failure. However, for the step from F 3 to F 6 we modified the adversary, and we have to consider how any coherent adversary against F 6 can be transformed into a simplistic coherent adversary against F 3 .
Consider the adaptor machine that connects to the abstract adversary, which only interacts with M 0 and does clocking. If the protocol uses a storage domain with limited access, then all modifications to the values in M 0 can be translated to modifications of the shares in M in F 3 . Note that, by definition, the new adversary against F 3 reads from the memory only the values that are modified and uses them only to compute the modification. If the extraction has negligible failure, then the values in M 0 are the same in F 3 and F 6 after this change. The added memory M and the changed interaction with Π e do not change the view of the adversary or the values returned to Π e . □
Corollary 5.
For every coherent limited control adversary A l c against the protocol Π * in F 6 there exists a hybrid adversary ϕ * ( A l c ) such that Env Π * , A l c ≡ Env Π , ϕ * ( A l c ) , provided that the protocols are secure against rushing and malformed inputs, have a well-formed specification, are in a canonical form, use meaningful local operations for deterministic functionalities, and use storage domains with limited access and negligible extraction failure.
Proof. 
Semi-simplistic and lazy semi-simplistic adversaries are equivalent to the generic adversary, as shown in Corollary 1 and Theorem 4 for protocols secure against rushing. Simplistic adversaries can be transformed to equivalent lazy semi-simplistic adversaries for the same well-formed protocol in the respective protocol description, as shown in Theorem 7. Theorem 8 shows that storing values in M in an aligned manner can be reversed to any memory layout and that G i can be merged into the interpreters. Lemma 12 shows that any limited control adversary can be transformed to an equivalent simplistic adversary for the canonical protocol. Therefore, we have shown that any limited control adversary can be transformed to an equivalent adversary in the hybrid protocol. □

3.4. Abstract Model

The results from the previous sections allow us to consider an execution model where the environment interacts with a system consisting of the interpreters I 1 , … , I n , a semi-protected memory module M 0 , a library of idealised functionalities G 1 0 , … , G m 0 and F 1 , … , F k for local and global operations, and the modules R u + and S u + for storing and fetching values from the memory M 0 . The environment for the protocol in question consists of the rest of the computation, represented by Π e , and the outer environment Env o representing the rest of the world, as in Figure 12. The left-hand side of the figure is completely stated in terms of abstract memory and public parameters, whereas the right-hand side, especially Π e , is still a complex network of nodes exchanging shares.
In Appendix C, we discuss further means to simplify the collection F 6 in the many cases where the adversary does not really need individual buffers to clock each input and output of F p . For example, this is possible in the usual case where the execution of F p is sequential within one protocol instance. These simplifications also carry over to the abstract execution model.

3.4.1. Abstract Execution Environment

Consider a restricted class of environments E r where the inner environment Π e is trivial, i.e., the parent parties P i * just forward inputs and outputs between Env o and the protocol Π that we are considering. Each instance of Π e runs only a single instance of Π . All inputs provided by Env o are first shared in Π e using S δ for the protection domain where these values reside and then reconstructed by R u + in Π . The output of S u + is reconstructed by the relevant reconstruction functionality R δ in Π e before it reaches Env o . Therefore, the main component of Env r is Env o , and Π e is almost invisible.
For adversaries that do not read the state of corrupted parent nodes P i * , we can simplify Π e to F 7 as depicted in Figure 13. Machines O i n and O o u t handle protocol inputs and outputs. Buffers between O i n and O o u t allow instant sharing of their states. In F 6 , I i interacts with S u + only when it executes DmaCall or Send instructions. As there is only one instance of Π for each instance of Π e , we modify the interpreter I i in F 7 so that it sends a message to O o u t instead. As the buffer is clocked by the adversary, the interpreter does not lose control and can carry on as usual. By definition, the interpreter I i expects an input from R u + to launch a new protocol instance or as a reply to a DmaCall instruction. In F 7 , the machine O i n sends the corresponding inputs. No changes are made to Env o , which still submits and receives plain values.
Machines O i n and O o u t simulate the inner environment Π e with instant clocking. The machine O o u t collects incoming instructions and forwards each DmaCall to O i n , which immediately gives control back. After that, O o u t treats the instruction as a message from S u + and simulates the reactions of P i * and R δ . When R δ releases some values s 0 [ t , δ , ] , O o u t fetches them from the memory and sends them to Env o . The machine O i n simulates the actions of S δ , P i * and R u + . Based on the inputs from O o u t , it knows what messages the interpreters I 1 , … , I n expect and can thus assign correct memory locations to all messages sent to R u + . When some message arrives at R u + , O i n writes the corresponding values to M 0 and forwards the response of R u + to the correct interpreter. Fast clocking inside the simulation guarantees that O i n creates responses no later than in F 6 , and O o u t writes the outputs to Env o as soon as all release instructions have arrived. As the adversary A clocks the buffer, A can reorder and delay protocol inputs exactly as in F 6 .
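A rough sketch may help fix the picture of the two forwarding machines. The message kinds and method names below are hypothetical; the real O i n and O o u t are RSIM machines whose buffers are clocked by the adversary.

```python
class OOut:
    """Collects instructions from interpreters and simulates P_i* and R_delta."""
    def __init__(self, o_in, env_o, mem0):
        self.o_in, self.env_o, self.mem0 = o_in, env_o, mem0

    def receive(self, kind, payload):
        if kind == "DmaCall":       # forward to O_in, which replies instantly
            self.o_in.handle(payload)
        elif kind == "Release":     # R_delta released a value: hand it to Env_o
            self.env_o.append(self.mem0[payload])

class OIn:
    """Simulates S_delta, P_i* and R_u+: writes values to M_0 and resumes I_i."""
    def __init__(self, interpreters, mem0):
        self.interpreters, self.mem0 = interpreters, mem0

    def handle(self, msg):
        location, value, party = msg
        self.mem0[location] = value          # assign the expected memory location
        self.interpreters[party](location)   # forward the response of R_u+
```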
Lemma 13.
Let A be the class of adversaries against the collection F 6 that do not observe the state of corrupted parent nodes. Let A * be the class of adversaries against F 7 and Π * be the protocol. Then there exist transformations ϕ and ϕ * such that
∀ Env ∈ E r ∀ A ∈ A : Env Π * , A ≡ Env * Π * , ϕ ( A ) ,
∀ Env ∈ E r ∀ A * ∈ A * : Env * Π * , A * ≡ Env Π * , ϕ * ( A * ) ,
where Env is the restricted environment in F 6 and Env * is the corresponding environment in F 7 that contain the same Env o .
Proof. 
First, we convert an adversary A against F 6 to an adversary A * against F 7 . W.l.o.g., we can assume that the original adversary A immediately clocks the buffers from P i * to R u + , as this alters only the order in which values are reconstructed. The adversary A can correct the ordering and timing by clocking the leaky buffer between R δ and Env r . For the same reason, we can assume that A immediately clocks the buffers from S u + to P i * .
As the adversary A * clocks the buffers from I i to O o u t instead of the buffers from S u + to P i * , and the buffers from O i n to I i instead of the buffers from P i * to R u + , the machines O o u t and O i n carry out a perfect simulation. The claim follows as the transformation is reversible. □
In practical deployments, protocol inputs may come from different input parties. However, as we cannot rule out coalitions between input parties, we must include the class E r in the set of potential environments E . Consequently, we have established that a protocol can be secure only if it is secure in the abstract execution model defined as follows.
Definition 25.
Abstract execution environment Env abs for a protocol Π * is defined as the collection of O i n , O o u t and Env o in F 7 . Let E abs denote the set of all abstract environments.
The abstract execution environment is very close to the minimal attack model. The adversary A gets the setup information of the corrupted parties and learns the values of protocol inputs and outputs. Adversary A can also read and write some values in M 0 and interact with the ideal functionalities F 1 , … , F k if there is a corresponding interface. Finally, the adversary can influence the order of execution by clocking the leaky buffers between the interpreters and the other machines. Depending on the environment, the adversary can learn additional information about protocol inputs and outputs, but nothing more.
Definition 26.
An environment class E is embeddable into the abstract model for the class of adversaries A if there exist transformations ϕ : A A * and ψ : E E a b s such that
∀ Env ∈ E ∀ A ∈ A : Env Π * , A ≡ ψ ( Env ) Π * , ϕ ( A ) .
For most protection domains, it is quite easy, albeit tedious, to prove that the relevant environment classes are embeddable into the abstract model. In fact, we have already used similar assumptions in Section 3.3 in the form of the simulatability of Π e .
For embedding, we need to push all interactions between A and Π e to Env o . The modified adversary ϕ ( A ) internally runs A and forwards all queries for Π e to Env o , which internally simulates Π e to provide correct responses. If all parameters generated by the setup F are public, then Env o can just redo all computations in Π e . There is a small caveat, as Env learns the protocol outputs only when all parties have sent them to R . Thus, Env o must simulate the initial shares of corrupted parties without knowing the secrets. The latter implies that the storage domains for the outputs of Π e must be hiding.
A similar simulation is possible for protocols with private setup parameters as long as Env abs can simulate interactions with Π e knowing only the protocol outputs and the public parameters generated by F . In particular, note that the generic inner environment Π e is a protocol for which all inputs are generated by S δ inside Π e and by S u + in Π * , and all outputs are processed by R δ when given to Env and by R u + when given to Π * . By placing the same restrictions on Π e as on Π , we can carry out exactly the same simplification operations as we did to arrive at Π * from Π . As the modification extractor and the share simulator use only the public parameters and the parameters of corrupted parties, the modified environment does not need parameters it cannot get.

3.4.2. Security in the Abstract Model

Throughout the paper we have transformed the initial protocol Π into the corresponding protocol Π * in the abstract execution environment. We call Π * the abstraction of Π .
Definition 27
(Security in the abstract world). Let Π 1 and Π 2 be protocols and Π 1 * and Π 2 * their abstractions. Let A 1 * and A 2 * be the abstractions of the classes of adversaries A 1 and A 2 against the original protocols. Then, Π 1 is as secure as Π 2 in the abstract model if there exists a function ρ * : A 1 * → A 2 * such that Env abs Π 1 * , A 1 * ≡ Env abs Π 2 * , ρ * ( A 1 * ) for all A 1 * ∈ A 1 * and Env abs ∈ E abs .
By definition, security can be defined with respect to any other protocol Π 2 . In practice, our goal is to prove that the protocol Π is as secure as some canonical ideal functionality F 0 . The simplifications from the hybrid execution model to the abstract model can be applied to all protocols Π F 1 , … , F k . Similarly, they can be applied to Π 0 F 0 for the protocol Π 0 that only calls F 0 once per instance and returns all results to Π e . Note that Π 0 is output-isolated, and the transformation is allowed as long as F 0 satisfies the rules we have set for the canonical ideal functionalities inside the protection domain.
In the context of Section 2.3, our current transformations define ϕ 1 , and the same ϕ 1 can be applied to adversaries against Π 0 F 0 . Therefore, in order to prove that Π F 1 , … , F k is as secure as Π 0 F 0 , we can use ϕ 1 for both A 1 and A 2 . In order to prove that Π F 1 , … , F k is as secure as F 0 , we need a similar transformation ϕ 2 . However, note that for coherent adversaries the functionality F 0 is equivalent to Π 0 F 0 ; the details can be found in Appendix B. The main intuition is that Π 0 F 0 also contains the machines of the parties, but a coherent adversary corrupts them coherently with the parties in Π e and thereby gains no access that it would not have against the single machine F 0 . Hence, combining the equivalence of F 0 and Π 0 F 0 with ϕ 1 yields the required transformation ϕ 2 . It remains to argue that security in the abstract model indeed suffices for security in the hybrid execution model. The following theorem achieves this by putting the results of this paper into the relevant context.
Theorem 13.
Let F 0 be an ideal functionality and let Π be a protocol constructed on top of a hybrid protection domain F 1 , … , F k with canonical (Definition 5) F p that have tight scheduling (Definition 12), and with meaningful (Definition 6) and transparent (Definition 23) local functionalities. The storage domains in the protection domain are hiding (Definition 2), modification-aware (Definition 3) and have limited control (Definition 4). If the protocol Π satisfies all abstraction assumptions:
  • is well-formed (Definition 18) and in a canonical form (Definition 21),
  • is robust against malformed inputs (Definition 14),
  • is secure against rushing (Definition 16),
  • is output-isolated (Definition 24),
and the environment class E is embeddable into E abs for the class of adversaries A , and every abstract environment Env abs ∈ E abs also belongs to E , then security in the abstract model is necessary and sufficient for security in the hybrid model.
Proof. 
Necessity. The trivial Π e that just forwards the inputs and outputs between Π and Env has to be a valid protocol. Therefore, E abs is a valid class of environments for Π , as each of its members is formed by the trivial Π e and a generic environment in E . If Env abs ∈ E , then by the security definition the protocol must be secure against E abs among all other environments, and by Corollary 4 the protocol running with an environment from E abs in the hybrid model can be transformed to the abstract model.
Sufficiency. Sufficiency follows from the reversibility of the series of transformations we have made from the hybrid to the abstract world. If all environments of interest are embeddable, then we can apply Lemma 13, showing the equivalence of the abstract execution model and the limited control model. The rest follows from Corollary 5, showing that we can move from the limited control model to the hybrid model. Hence, in total, a security proof in the abstract model can be translated to a security argument for the same protocol in the hybrid model. □

4. Discussion

With the transformations in Section 3, we have established the required transformations ϕ 1 , ϕ 2 and ψ from Section 2.3 and shown their semi-inverses. Therefore, we have defined the abstract execution model and shown that it suffices to prove security in the abstract model, provided that a series of assumptions about the protection domain and the protocol hold.
On the side of the protection domain, we assume that all ideal functionalities are in a canonical form (Definition 5) and have tight scheduling (Definition 12), that local functionalities are meaningful (Definition 6) and transparent (Definition 23), and that secure storage domains are hiding (Definition 2), modification-aware (Definition 3) and have limited control (Definition 4). These are mostly reasonable properties; however, they should be explicitly verified for the protection domain (the framework for programmable MPC) that is used. On the other hand, one can simply state these properties as the requirements that a new secure computation algorithm places on the framework. If no other preconditions are made, then the algorithm can be securely implemented on any framework meeting these requirements. However, note that it is often reasonable to make additional requirements on the protection domain, for example, to specify the data types for which the algorithm works.
On the protocol side, we assume that it is well-formed (Definition 18) and in a canonical form (Definition 21), robust against malformed inputs (Definition 14), secure against rushing (Definition 16) and output-isolated (Definition 24). We have argued that some of these are natural or easily achievable requirements on the protocol or its concrete implementation. Security against rushing is achievable for protocols where some honest party is always included in the subprotocols (Theorem 5). The main open question is output isolation, which we have shown to hold for several special cases; for some protocols this property has to be proven in addition to the security proof in the abstract model. Nevertheless, we have shown that secure protocols are output-isolated (Theorem 12), thus such a proof must exist for every protocol that is secure at all, both in order to use our framework and to prove security in the first place. In addition, most primitive functionalities return outputs as soon as they are computed and are therefore always output-isolated (Lemma 5).
We can consider the proposed sorting algorithm in Algorithm 2 as a concrete example of these properties and their use. This algorithm assumes a protection domain that can shuffle values, compute comparisons and publish values. We expect all of these to be represented as canonical ideal functionalities. A common way to read this protocol is that the parties execute these operations together. We also expect that the parties only start the computations once they have the input values and that they write only the outputs of the computations to the derived variables. In addition, they continue with the next instruction only when they have completed the previous one. Therefore, when reading this protocol description we already assume that it is executed so that it is well-formed. It is also in a canonical form, because the conditional decision in the output is based on a public value, and we can assume that the implementation ensures that the memory is aligned. We do not usually think of robustness against malformed inputs; however, we indirectly assume that the inputs are correct and that all errors arising from incorrect formats are handled by the computation implementation outside the algorithm itself. In secure multiparty computation, we are concerned with cases where some participant is honest; thus, we also achieve security against rushing when executing this algorithm. The main problem with this algorithm is output isolation. It is not immediately clear that it is output-isolated, since the values [ [ k ] ] and [ [ m ] ] are computed several steps before they are returned. Hence, we should either modify the algorithm to include a step that re-randomises the secure representation of the values right before returning them or prove output-isolation independently.
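The re-randomisation fix can be as small as masking each output with a fresh sharing of zero. A toy sketch follows; the ToyABB class is our illustrative stand-in for the protection domain, not the formal ABB or Algorithm 2 itself.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus

class ToyABB:
    """Toy additive-sharing 'arithmetic black box', for illustration only."""
    def __init__(self, n=3):
        self.n = n
    def share(self, x):
        sh = [secrets.randbelow(P) for _ in range(self.n - 1)]
        return sh + [(x - sum(sh)) % P]
    def fresh_zero(self):
        return self.share(0)
    def add(self, a, b):  # local share-wise addition
        return [(u + v) % P for u, v in zip(a, b)]

def rerandomise(abb, v):
    """Mask with a fresh sharing of zero: the secret stays intact, but the
    returned shares are distributed exactly like fresh shares (cf. Lemma 7)."""
    return abb.add(v, abb.fresh_zero())

abb = ToyABB()
k_out = rerandomise(abb, abb.share(10))  # safe to return: output-isolated form
assert sum(k_out) % P == 10
```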
The abstract model is quite restricted: the adversary can manage the timing of the ideal functionalities F p used by the protocol Π as well as the input and output timing of Π . In addition, the adversary has access to the corrupted parties' values in M 0 , and to other values depending on the storage domain. For reasonable schemes, either we consider a passive adversary that does not modify M 0 , or we use a robust or verifiable scheme that ensures that the adversary has limited control over the shared values as long as the set of corrupted parties satisfies the bounds set by the storage domain. This matches the intuition that security proofs of the protocol should focus on the values that are made available to some party and can therefore also be seen by the adversary corrupting that party. Note that this also means that, in fact, the part of the protocol that needs to be analysed is the semantics of the published values, and not really the cryptographic properties of the secure computation framework where the protocol is executed. One could say that we are left with security proofs without any cryptography.
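As a concrete instance of such limited control (assuming plain additive sharing, which is our illustrative choice), the adversary's influence is exactly additive: shifting one corrupted share by Δ shifts the reconstructed value by Δ, and the extractor can report this Δ as the coherent modification of the value.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus

def reconstruct(shares):
    return sum(shares) % P

sh = [secrets.randbelow(P) for _ in range(2)]
sh.append((7 - sum(sh)) % P)               # additive sharing of x = 7

delta = 5
sh[0] = (sh[0] + delta) % P                # the only modification A can make
assert reconstruct(sh) == (7 + delta) % P  # value shifted by exactly delta
```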
If a protocol has one round, i.e., it receives some inputs, then gives outputs and finishes its work, as is common for protocols implementing arithmetic operations, then it is usually output-isolated. If, in addition, the execution of subprotocols within one instance of the protocol has sequential scheduling, then the adversary cannot alter the order in which the values are published. Therefore, being able to simulate all visible values is sufficient. Note that most protocols for secure computation fall into this category, and therefore the intuitive proofs are easy to turn into formal proofs. However, if the scheduling is concurrent or there are many rounds, then at the very least the values should be simulated within the rounds in which they are computed, and their order may depend on the scheduling. More details might be required depending on the model.

Author Contributions

Conceptualisation, S.L. and P.P.-R.; Methodology, S.L. and P.P.-R.; Writing—original draft, S.L. and P.P.-R.; Writing—review and editing, S.L. and P.P.-R. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Estonian Centre of Excellence in IT (EXCITE) and by the Estonian Personal Research Grants PRG49 and PRG920.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MDPI Multidisciplinary Digital Publishing Institute
MPC Secure multiparty computation
ABB Arithmetic black box
RSIM Reactive simulatability
UC Universal composability
[ [ x ] ] Secret sharing of value x
Env Environment
E Class of environments
A Adversary
A Class of adversaries
Π Protocol
Π e Inner environment representing computations in the secure computation framework
P Class of protocols
P i Protocol participant i
I i Code interpreter of P i
Z i Corruption manager for P i
F Secure setup functionality
F p Ideal functionality
F p d Ideal functionality of a protection domain
F io Input–output functionality
G p , i Local functionality p for P i
G p 0 Local functionality on values
S Secret sharing functionality
R Reconstruction functionality
T S Machine inside F p that manages timing of S
T R Machine inside F p that manages timing of R
T M Machine inside F p that manages access to M
E Extractor
L Storage of values in a storage domain
L * Storage of shares in a storage domain
M Memory
M 0 Memory holding only values
s Internal state of the protocol participant
gs Global state combining s of all participants
s 0 State kept in M 0
m Protocol message
ϕ Transformations of the adversary
ψ Transformations of the environment
ρ Transformation defined in the security proof
δ Storage domain called δ
A δ Adversary structure for storage domain δ
δ Modification operator in storage domain δ
⊥ Failure symbol denoting invalid values
F i Configuration i

Appendix A. Buffers Leaking Message Annotations

We specify leaky buffers using standard RSIM components. The heart of the construction is a tag-leaking machine T and three buffers for input, output and leakage; see Figure A1. The machine T accepts pairs of strings as inputs. When T receives ( m , t ) from the buffer b 1 , the pair ( m , t ) is written to an output buffer b 2 and the annotation t is written to the sender-clocked buffer b 3 . There are two ports for clocking the leaky buffer. The first port clk 1 determines when the annotation arrives in b 3 . The second port clk 2 controls when and in which order message pairs ( m , t ) are written to the port out .
Figure A1. Collection implementing a leaky buffer and the corresponding notation.
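A rough behavioural sketch of the construction follows; the queue semantics and method names are our assumptions, while the actual construction is the RSIM collection of Figure A1.

```python
from collections import deque

class LeakyBuffer:
    """Tag-leaking machine T with staging for buffers b2 (pairs) and b3 (tags)."""
    def __init__(self):
        self.b2 = {}        # staged (m, t) pairs, addressable by index
        self.b3 = deque()   # staged annotations awaiting clk_1
        self.count = 0

    def send(self, m, t):
        """On input (m, t) from b1: stage the pair for b2 and the tag for b3."""
        self.b2[self.count] = (m, t)
        self.b3.append(t)
        self.count += 1

    def clk1(self):
        """Deliver the next annotation to the clocking party (port clk_1)."""
        return self.b3.popleft()

    def clk2(self, i):
        """Deliver pair i on port out; timing and order are chosen by the clocker."""
        return self.b2.pop(i)
```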

Appendix B. Two Ways to Specify Ideal Functionalities

Two equivalent ways to depict an ideal functionality are given in Figure A2. Commonly, one specifies the ideal functionality as a single machine F 0 . Alternatively, we can specify the ideal functionality as a special case of the hybrid protocol execution from Section 3.1.1 with just one ideal functionality. The environment provides inputs to the parties P 1 , … , P n , who forward them directly to F 0 . Whenever F 0 sends a message to P i , it forwards the message to the environment. Let F 1 and F 2 denote these alternative configurations.
Figure A2. Two alternatives for specifying ideal functionality.
Lemma A1.
For coherent adversaries, the configurations F 1 and F 2 in Figure A2 are equivalent.
Proof. 
In both configurations, F 0 gets exactly the same inputs, including tags, and the adversary controls when F 0 receives its inputs. However, a message travels through two leaky buffers in the configuration F 2 instead of one in F 1 . Adaptation of an adversary A 1 from F 1 to F 2 is straightforward: the new adversary clocks the buffers between P i and F 0 as soon as possible and then clocks the buffers to Π * in the same way as A 1 .
It is straightforward to simulate the clocking of A 2 in F 1 . However, in F 1 there is no way to corrupt the party directly in F 0 . For a coherent adversary, corrupting P i in F 2 also means corrupting P i * in Π * ; hence, any modifications can be done in P i * in F 1 . □

Appendix C. Combined Interpreter with Simplified Clocking

The new memory-isolated model makes many clocking signals redundant. Adversarial control over buffer clocking is necessary only if it allows the adversary to control the execution order of the protocols or provides a time slot for carrying out adversarial actions. Therefore, we simplify the model further by replacing all interpreters I i with a joint interpreter I that combines some buffers. The simplest construction is the one where the interpreter I is just the collection of the interpreters I i . For most protocol specifications, this model can be further simplified to the configuration depicted on the left of Figure A3, and most sequential protocol specifications to the configuration depicted on the right.
As the original F p contains the modules T R and T S , which combine and broadcast DmaCall-s and potentially interact with the adversary, we can extract machines C p and D p that only combine or broadcast DmaCall-s. We push these into the interpreter I . The use of sender-clocked buffers forces us to add dummy buffers for passing control from C p to I i and from I i to D p . However, the corresponding changes are straightforward and guarantee equivalence for passive adversaries who ignore leaks.
Figure A3. Joining the interpreters by merging outgoing and incoming buffers.
In many cases, the adversary can simulate the leaks of b i , p and mimic the effect of multiple clockings with a single buffer. The adversary must always know what the next DmaCall is when I i receives an input and when C p inside F p is going to start a new round of computations. This is clearly true for sequential protocol implementations, but it also holds for many concurrent implementations. We can introduce D p if we additionally show that the outcome of the execution cannot be influenced by the clockings of c i , p . This is evident for sequential protocol implementations, as only a single subprotocol instance is active at any time and thus delays in clockings just pause the protocol execution.

References

  1. Bogdanov, D.; Laur, S.; Willemson, J. Sharemind: A Framework for Fast Privacy-Preserving Computations. In Lecture Notes in Computer Science, Proceedings of the Computer Security—ESORICS 2008, 13th European Symposium on Research in Computer Security, Málaga, Spain, 6–8 October 2008; Jajodia, S., López, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5283, pp. 192–206.
  2. Damgård, I.; Pastro, V.; Smart, N.P.; Zakarias, S. Multiparty Computation from Somewhat Homomorphic Encryption. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO 2012—32nd Annual Cryptology Conference, Santa Barbara, CA, USA, 19–23 August 2012; Safavi-Naini, R., Canetti, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7417, pp. 643–662.
  3. Bogdanov, D. Sharemind: Programmable Secure Computations with Practical Applications. Ph.D. Thesis, University of Tartu, Tartu, Estonia, 2013.
  4. Demmler, D.; Schneider, T.; Zohner, M. ABY—A Framework for Efficient Mixed-Protocol Secure Two-Party Computation. In Proceedings of the 22nd Annual Network and Distributed System Security Symposium, NDSS 2015, San Diego, CA, USA, 8–11 February 2015.
  5. Alexandra Institute. FRESCO—A Framework for Efficient Secure Computation. Available online: http://github.com/aicis/fresco (accessed on 20 August 2021).
  6. KU Leuven. SCALE-MAMBA Software. Available online: https://github.com/KULeuven-COSIC/SCALE-MAMBA (accessed on 20 August 2021).
  7. Keller, M. MP-SPDZ: A Versatile Framework for Multi-Party Computation. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security—CCS’20, Virtual Event, 9–13 November 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1575–1590.
  8. Bogetoft, P.; Christensen, D.L.; Damgård, I.; Geisler, M.; Jakobsen, T.P.; Krøigaard, M.; Nielsen, J.D.; Nielsen, J.B.; Nielsen, K.; Pagter, J.; et al. Secure Multiparty Computation Goes Live. In Lecture Notes in Computer Science, Proceedings of the Financial Cryptography and Data Security, 13th International Conference, FC 2009, Accra Beach, Barbados, 23–26 February 2009; Revised Selected Papers; Dingledine, R., Golle, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5628, pp. 325–343.
  9. Mohassel, P.; Zhang, Y. SecureML: A System for Scalable Privacy-Preserving Machine Learning. In Proceedings of the 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, 22–26 May 2017; pp. 19–38.
  10. Bogdanov, D.; Kamm, L.; Kubo, B.; Rebane, R.; Sokk, V.; Talviste, R. Students and Taxes: A Privacy-Preserving Study Using Secure Computation. PoPETs 2016, 2016, 117–135.
  11. Archer, D.W.; Bogdanov, D.; Lindell, Y.; Kamm, L.; Nielsen, K.; Pagter, J.I.; Smart, N.P.; Wright, R.N. From Keys to Databases—Real-World Applications of Secure Multi-Party Computation. Comput. J. 2018, 61, 1749–1771.
  12. Mohassel, P.; Rindal, P. ABY3: A Mixed Protocol Framework for Machine Learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS 2018, Toronto, ON, Canada, 15–19 October 2018; Lie, D., Mannan, M., Backes, M., Wang, X., Eds.; ACM: New York, NY, USA, 2018; pp. 35–52.
  13. Laud, P.; Pankova, A. Privacy-preserving record linkage in large databases using secure multiparty computation. BMC Med. Genom. 2018, 11, 35–55.
  14. Canetti, R. Universally Composable Security: A New Paradigm for Cryptographic Protocols. In Proceedings of the 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, Las Vegas, NV, USA, 14–17 October 2001; pp. 136–145.
  15. Damgård, I.; Nielsen, J.B. Universally Composable Efficient Multiparty Computation from Threshold Homomorphic Encryption. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO 2003, 23rd Annual International Cryptology Conference, Santa Barbara, CA, USA, 17–21 August 2003; Boneh, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2729, pp. 247–264.
  16. Lipmaa, H.; Toft, T. Secure Equality and Greater-Than Tests with Sublinear Online Complexity. In Lecture Notes in Computer Science, Proceedings of the Automata, Languages, and Programming—40th International Colloquium, ICALP 2013, Riga, Latvia, 8–12 July 2013; Fomin, F.V., Freivalds, R., Kwiatkowska, M.Z., Peleg, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7966, pp. 645–656.
  17. Escudero, D.; Ghosh, S.; Keller, M.; Rachuri, R.; Scholl, P. Improved Primitives for MPC over Mixed Arithmetic-Binary Circuits. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO 2020—40th Annual International Cryptology Conference, CRYPTO 2020, Santa Barbara, CA, USA, 17–21 August 2020; Micciancio, D., Ristenpart, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12171, pp. 823–852.
  18. Damgård, I.; Escudero, D.; Frederiksen, T.K.; Keller, M.; Scholl, P.; Volgushev, N. New Primitives for Actively-Secure MPC over Rings with Applications to Private Machine Learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy, SP 2019, San Francisco, CA, USA, 19–23 May 2019; pp. 1102–1120.
  19. Kamm, L.; Willemson, J. Secure Floating-Point Arithmetic and Private Satellite Collision Analysis. Int. J. Inf. Secur. 2015, 14, 531–548.
  20. Veugen, T.; Abspoel, M. Secure integer division with a private divisor. Proc. Priv. Enhancing Technol. 2021, 2021, 339–349.
  21. Catrina, O.; de Hoogh, S. Improved Primitives for Secure Multiparty Integer Computation. In Lecture Notes in Computer Science, Proceedings of the Security and Cryptography for Networks, 7th International Conference, SCN 2010, Amalfi, Italy, 13–15 September 2010; Garay, J.A., Prisco, R.D., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6280, pp. 182–199.
  22. Nishide, T.; Ohta, K. Multiparty Computation for Interval, Equality, and Comparison Without Bit-Decomposition Protocol. In Lecture Notes in Computer Science, Proceedings of the Public Key Cryptography—PKC 2007, 10th International Conference on Practice and Theory in Public-Key Cryptography, Beijing, China, 16–20 April 2007; Okamoto, T., Wang, X., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4450, pp. 343–360.
  23. Damgård, I.; Fitzi, M.; Kiltz, E.; Nielsen, J.B.; Toft, T. Unconditionally Secure Constant-Rounds Multi-party Computation for Equality, Comparison, Bits and Exponentiation. In Lecture Notes in Computer Science, Proceedings of the Theory of Cryptography, Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, 4–7 March 2006; Halevi, S., Rabin, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3876, pp. 285–304.
  24. Canetti, R.; Rabin, T. Universal Composition with Joint State. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO 2003, 23rd Annual International Cryptology Conference, Santa Barbara, CA, USA, 17–21 August 2003; Boneh, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2729, pp. 265–281.
  25. Pfitzmann, B.; Waidner, M. A Model for Asynchronous Reactive Systems and its Application to Secure Message Transmission. In Proceedings of the 2001 IEEE Symposium on Security and Privacy—SP’01, Oakland, CA, USA, 14–16 May 2001; pp. 184–200.
  26. Backes, M.; Pfitzmann, B.; Waidner, M. A General Composition Theorem for Secure Reactive Systems. In Lecture Notes in Computer Science, Proceedings of the Theory of Cryptography, First Theory of Cryptography Conference, TCC 2004, Cambridge, MA, USA, 19–21 February 2004; Naor, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 2951, pp. 336–354.
  27. Backes, M.; Pfitzmann, B.; Waidner, M. The reactive simulatability (RSIM) framework for asynchronous systems. Inf. Comput. 2007, 205, 1685–1720.
  28. Goldreich, O. The Foundations of Cryptography—Volume 2: Basic Applications; Cambridge University Press: Cambridge, UK, 2004.
  29. Canetti, R. Security and Composition of Multiparty Cryptographic Protocols. J. Cryptol. 2000, 13, 143–202.
  30. Micali, S.; Rogaway, P. Secure Computation (Abstract). In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO’91, 11th Annual International Cryptology Conference, Santa Barbara, CA, USA, 11–15 August 1991; Feigenbaum, J., Ed.; Springer: Berlin/Heidelberg, Germany, 1991; Volume 576, pp. 392–404.
  31. Beaver, D. Secure Multiparty Protocols and Zero-Knowledge Proof Systems Tolerating a Faulty Minority. J. Cryptol. 1991, 4, 75–122.
  32. Canetti, R.; Feige, U.; Goldreich, O.; Naor, M. Adaptively Secure Multi-Party Computation. In Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, PA, USA, 22–24 May 1996; Miller, G.L., Ed.; ACM: New York, NY, USA, 1996; pp. 639–648.
  33. Zikas, V.; Hauser, S.; Maurer, U.M. Realistic Failures in Secure Multi-party Computation. In Lecture Notes in Computer Science, Proceedings of the Theory of Cryptography, 6th Theory of Cryptography Conference, TCC 2009, San Francisco, CA, USA, 15–17 March 2009; Reingold, O., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5444, pp. 274–293.
  34. Gordon, S.D.; Katz, J. Complete Fairness in Multi-party Computation without an Honest Majority. In Lecture Notes in Computer Science, Proceedings of the Theory of Cryptography, 6th Theory of Cryptography Conference, TCC 2009, San Francisco, CA, USA, 15–17 March 2009; Reingold, O., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5444, pp. 19–35.
  35. Cohen, R.; Lindell, Y. Fairness Versus Guaranteed Output Delivery in Secure Multiparty Computation. J. Cryptol. 2017, 30, 1157–1186.
  36. Kiraz, M.; Schoenmakers, B. A protocol issue for the malicious case of Yao’s garbled circuit construction. In Proceedings of the 27th Symposium on Information Theory in the Benelux, Noordwijk, The Netherlands, 8–9 June 2006; pp. 283–290.
  37. Mohassel, P.; Franklin, M.K. Efficiency Tradeoffs for Malicious Two-Party Computation. In Lecture Notes in Computer Science, Proceedings of the Public Key Cryptography—PKC 2006, 9th International Conference on Theory and Practice of Public-Key Cryptography, New York, NY, USA, 24–26 April 2006; Yung, M., Dodis, Y., Kiayias, A., Malkin, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3958, pp. 458–473.
  38. Aumann, Y.; Lindell, Y. Security Against Covert Adversaries: Efficient Protocols for Realistic Adversaries. In Lecture Notes in Computer Science, Proceedings of the Theory of Cryptography, 4th Theory of Cryptography Conference, TCC 2007, Amsterdam, The Netherlands, 21–24 February 2007; Vadhan, S.P., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4392, pp. 137–156.
  39. Küsters, R.; Datta, A.; Mitchell, J.C.; Ramanathan, A. On the Relationships between Notions of Simulation-Based Security. J. Cryptol. 2008, 21, 492–546.
  40. Goyal, V.; Gupta, D.; Sahai, A. Concurrent Secure Computation via Non-Black Box Simulation. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO 2015—35th Annual Cryptology Conference, Santa Barbara, CA, USA, 16–20 August 2015; Gennaro, R., Robshaw, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9216, pp. 23–42.
  41. Kiyoshima, S. Non-black-box Simulation in the Fully Concurrent Setting, Revisited. J. Cryptol. 2019, 32, 393–434.
  42. Pass, R. Simulation in Quasi-Polynomial Time, and Its Application to Protocol Composition. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—EUROCRYPT 2003, International Conference on the Theory and Applications of Cryptographic Techniques, Warsaw, Poland, 4–8 May 2003; Biham, E., Ed.; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2656, pp. 160–176.
  43. Barak, B.; Sahai, A. How To Play Almost Any Mental Game Over The Net—Concurrent Composition via Super-Polynomial Simulation. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), Pittsburgh, PA, USA, 23–25 October 2005; pp. 543–552.
  44. Oren, Y. On the Cunning Power of Cheating Verifiers: Some Observations about Zero Knowledge Proofs (Extended Abstract). In Proceedings of the 28th Annual Symposium on Foundations of Computer Science, Los Angeles, CA, USA, 27–29 October 1987; pp. 462–471.
  45. Canetti, R. Universally Composable Security: A New Paradigm for Cryptographic Protocols. Cryptology ePrint Archive, Report 2000/067. 2000. Available online: https://eprint.iacr.org/2000/067 (accessed on 20 August 2021).
  46. Maurer, U.; Renner, R. Abstract Cryptography. In Proceedings of the Innovations in Computer Science—ICS 2011, Beijing, China, 7–9 January 2011; Chazelle, B., Ed.; Tsinghua University Press: Beijing, China, 2011; pp. 1–21.
  47. Küsters, R. Simulation-Based Security with Inexhaustible Interactive Turing Machines. In Proceedings of the 19th IEEE Computer Security Foundations Workshop, (CSFW-19 2006), Venice, Italy, 5–7 July 2006; pp. 309–320.
  48. Hofheinz, D.; Shoup, V. GNUC: A New Universal Composability Framework. J. Cryptol. 2015, 28, 423–508.
  49. Böhl, F.; Unruh, D. Symbolic universal composability. J. Comput. Secur. 2016, 24, 1–38.
  50. Camenisch, J.; Krenn, S.; Küsters, R.; Rausch, D. iUC: Flexible Universal Composability Made Simple. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—ASIACRYPT 2019—25th International Conference on the Theory and Application of Cryptology and Information Security, Kobe, Japan, 8–12 December 2019; Galbraith, S.D., Moriai, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11923, pp. 191–221.
  51. Barak, B.; Canetti, R.; Lindell, Y.; Pass, R.; Rabin, T. Secure Computation Without Authentication. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO 2005: 25th Annual International Cryptology Conference, Santa Barbara, CA, USA, 14–18 August 2005; Shoup, V., Ed.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3621, pp. 361–377.
  52. Canetti, R.; Cohen, A.; Lindell, Y. A Simpler Variant of Universally Composable Security for Standard Multiparty Computation. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO 2015—35th Annual Cryptology Conference, Santa Barbara, CA, USA, 16–20 August 2015; Gennaro, R., Robshaw, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9216, pp. 3–22.
  53. Yao, A.C. Protocols for Secure Computations (Extended Abstract). In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science, Chicago, IL, USA, 3–5 November 1982; pp. 160–164.
  54. Beaver, D. Foundations of Secure Interactive Computing. In Lecture Notes in Computer Science, Proceedings of the Advances in Cryptology—CRYPTO’91, 11th Annual International Cryptology Conference, Santa Barbara, CA, USA, 11–15 August 1991; Feigenbaum, J., Ed.; Springer: Berlin/Heidelberg, Germany, 1991; Volume 576, pp. 377–391.
  55. Bellare, M.; Rogaway, P. Robust Computational Secret Sharing and a Unified Account of Classical Secret-Sharing Goals. In Proceedings of the 14th ACM Conference on Computer and Communications Security—CCS’07, Alexandria, VA, USA, 31 October–2 November 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 172–184.
  56. Damgård, I.; Nielsen, J.B. Adaptive versus static security in the UC model. In Proceedings of the International Conference on Provable Security, Hong Kong, China, 9–10 October 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 10–28.
  57. Genkin, D.; Ishai, Y.; Prabhakaran, M.; Sahai, A.; Tromer, E. Circuits resilient to additive attacks with applications to secure computation. In Proceedings of the Symposium on Theory of Computing, STOC 2014, New York, NY, USA, 31 May–3 June 2014; Shmoys, D.B., Ed.; ACM: New York, NY, USA, 2014; pp. 495–504.
Figure 1. Notation for machines B 1 and B 2 communicating through various buffers with C clocking the leaky buffer.
Figure 2. Equivalence guarantees and their relations to ρ and ρ * .
Figure 3. Configurations defining security properties of a storage domain.
Figure 4. Internal structure of two-party functionalities.
Figure 5. Protection domain extension with protocols and ideal functionalities where Π e is carried out by P i * and Π is carried out by P i .
Figure 6. Internal structure of a two-party protocol Π calling out two protocols F 1 and F 2 .
Figure 7. Decomposed interpreters for protocol Π calling out protocols F 1 and F 2 .
Figure 8. Encapsulation of local computations involving two parties.
Figure 9. Separating memory to M 0 for values and M for shares, F 1 .
Figure 10. Separating a value modification module E from M , collection F 2 .
Figure 11. Memory models for input–output isolation.
Figure 12. Limited control execution model for two-party protocols.
Figure 13. Abstract execution model for two-party protocols, F 7 .