TrueBit – White Paper


A scalable verification solution for blockchains

Jason Teutsch TrueBit Establishment

Christian Reitwießner Ethereum Foundation

November 16, 2017




Bitcoin and Ethereum, whose miners arguably collectively comprise the most powerful computational resource in the history of mankind, offer no more power for processing and verifying transactions than a typical smart phone. The system described herein bypasses this bottleneck and brings scalable computation to Ethereum. Our new system consists of a financial incentive layer atop a dispute resolution layer where the latter takes the form of a versatile “verification game.” In addition to secure outsourced computation, immediate applications include decentralized mining pools whose operator is an Ethereum smart contract, a cryptocurrency with scalable transaction throughput, and a trustless means for transferring currency between disjoint cryptocurrency systems.


1. Securing computations with economics
2. How TrueBit works
  2.1. System properties
  2.2. Assumptions
  2.3. Attacker model
3. Dispute resolution layer
  3.1. Bottleneck: The Verifier’s Dilemma
  3.2. Solution: The verification game
  3.3. Detailed protocol
  3.4. Runtime and security analysis
4. Incentive layer





Securing computations with economics

Every Nakamoto consensus-based cryptocurrency (e.g. Bitcoin or Ethereum) offers something that everyone in the world agrees on: a definitive, public ledger of financial transactions, or blockchain. This consensus technology enables basic Bitcoin transactions, which transfer currency from one party to another, while Ethereum transactions perform financial and database operations contingent on the evaluation of more complex computational scripts. Anonymous miners, who freely join and leave cryptocurrency networks, determine the validity of these transactions and thereby establish who owns which coins. Remarkably, this verification scheme requires no central authority to operate securely.

In practice, miners can successfully maintain blockchain integrity so long as the computational burden of verification remains minimal. Indeed, Bitcoin and Ethereum, whose miners arguably collectively comprise the most powerful computational resource in the history of mankind, offer no more power for processing and verifying transactions than a typical smart phone. One cannot simply increase the volume or complexity of transactions flowing into the blockchain without risking inclusion of invalid transactions due to the so-called Verifier’s Dilemma [50] (see Section 3.1). In 2015, the Verifier’s Dilemma manifested itself in the form of the July 4 Bitcoin fork [35] which temporarily derailed the entire network. The Ethereum community also witnessed a related exploit in its 2016 denial-of-service attacks [16].

A consensus computer [50] permits users to outsource computations to the Ethereum network and receive correct answers in exchange for payment so long as the effort required to verify solutions does not exceed the threshold induced by the Verifier’s Dilemma. Thus the trustless consensus computer offers a small but reliable kernel of semantic truth.

Our contribution. We present here a system, called TrueBit, which amplifies the consensus computer’s capabilities. TrueBit enables trustless smart contracts, in theory, to securely perform any computation task. Moreover, TrueBit vastly reduces the number of redundant network node computations used in traditional Ethereum smart contracts. Presently every Ethereum miner has to independently replicate each smart contract action in its entirety, whereas TrueBit outsources most computation work to a handful of entities. In this way, TrueBit makes secure computing affordable.

Outsourced computation

Let us ponder for a moment why cryptocurrencies might offer an especially convenient framework for secure outsourced computation, the core application of TrueBit. In traditional cloud models, users must trust that the machine hardware, software, and cloud administrator all perform as expected. A wide range of things can go wrong, particularly when one wishes to tie the results of such computations to monetized entities such as smart contracts. Proper economic incentives, the cornerstone of any cryptocurrency, can deter many types of errors from occurring in ways that simple task repetition cannot. Furthermore, in contrast to a cloud whose configuration details may not be visible to users, any systemic network errors that might occur on a blockchain-based system like TrueBit would appear in plain sight to the entire community. Cryptocurrencies indeed provide an excellent starting point as they already achieve several desirable properties.

      1. As witnessed by the Ethereum consensus computer, Nakamoto consensus grants networks the ability to trustlessly perform small computations correctly.
      2. Blockchain public ledgers provide perfect transparency and immutability. Ethereum smart contracts inherit these characteristics.
      3. Currency incentives, which in cryptocurrencies are inextricably tied to computational processes, can be used to recruit and reward participants.

In general, economic forces greatly reduce the scope of possible network errors. We can safely assume that errors which incur economic penalties are much less likely to occur than those which do not.

Practical impact

A market for work related to computationally-intensive smart contracts already exists. The Golem Project, which crowdfunded 820,000 ETH on the morning of November 11, 2016 (∼$8.6 million on that date), already cites TrueBit as a verification mechanism for their forthcoming outsourced computation network [14]. TrueBit has the potential to support many practical new applications. Here are some examples.

  • Outsourced computation. TrueBit functions as a worldwide computation market. Anyone in the world can post a computational task, and anyone else can receive a reward for completing it. The system’s incentive structure guarantees correctness of returned solutions.
  • Truly decentralized mining. Centralized mining pools pose security threats. For any Nakamoto consensus-based cryptocurrency, one can use TrueBit to build a practical, efficient, and trustless mining pool managed by an Ethereum smart contract (see also [51]).
  • Trustless cryptocurrency exchange. Currently users must find trusted exchange partners to transfer currency (or messages) between blockchains. TrueBit can facilitate secure transfer, for example, of dogecoins to the Ethereum blockchain and back (modulo some new opcodes for dogecoin) [8].
  • Scalable blockchain. By decoupling verification for miners into a separate protocol, we can achieve high transaction throughput without facing a Verifier’s Dilemma.
  • Scalable “on-chain” storage. Swarm [28] is developing a platform for incentivized, peer-to-peer storage. TrueBit can make Swarm data accessible to Ethereum smart contracts.

We discuss these ideas further in Section 7.

Smart contracts

Smart contracts, introduced in 1994 by Szabo [61] and first realized in the Ethereum [11] cryptocurrency in 2015, are uncensorable programs that live on Ethereum’s blockchain and have their own executable code and internal states, including storage for variable values and ether currency balance. Proposed applications for smart contracts include outsourced computation and storage [13,28,50], prediction markets [2], decentralized venture funds [29], and Bitcoin mining pools [27,51]. Ethereum’s variable gasLimit restricts the number of computation steps that smart contracts can perform.

TrueBit itself is an Ethereum smart contract which allows users to call TrueBit contracts for trusted, computationally-intensive applications. Traditional Ethereum smart contracts can call a TrueBit contract as a subroutine, thereby effectively bypassing Ethereum’s gasLimit. In other words, TrueBit increases the per-block work that the Ethereum network can process correctly.

How TrueBit works

TrueBit’s primary purpose is to realize correct, trustless computations despite miners’ limited computation bandwidth. Intuitively, we wish to reward participants who correctly perform computational tasks, but who decides whether these tasks were done correctly? In absence of a dispute, the party who performs a computational task on behalf of a TrueBit contract simply receives a reward. On the other hand, if a dispute does occur, we must rely on the only trusted resource, the limited network of miners, to resolve it. Dispute resolution occurs as a “verification game” subroutine in TrueBit, and we will discuss its details in Section 3.

Who initiates a dispute in the case of an incorrect computation, and how can we ensure that such a dispute will actually occur? One option is to assume that each party with some stake in the contract brings their own trusted (although not necessarily mutually trusted) verifier, and flags for a challenge if needed. This trusted verifier approach suffices for some applications, but in general parties may need to rely on the results of a previously executed TrueBit contract ex post facto even if they themselves did not exist at the time of its creation. Like Ethereum smart contracts, TrueBit contracts must have universal validity.

Since there exist no trusted parties on Ethereum’s network, by symmetry we must allow any party to be hired to solve any computational task, and similarly anyone should be able to challenge a Solver’s outcome of a computational task. The latter requirement ensures that TrueBit operates by unanimous consensus. Assuming that an honest and motivated Challenger exists who will challenge any wrong answer given by the Solver, the system’s incentives for the Solver are straightforward: reward correct solutions. But how can we make sure that such a motivated Challenger exists?

Idea 1. Offer rewards for checking computations.

While this strategy may encourage participation, it provides no incentive for correct verification. We have no a priori reason to trust that the Verifier will substantively inspect the computation task, and we gain no additional assurance by increasing her reward for checking.

Idea 2. Offer rewards for finding bugs.

While a bug bounty may incentivize a Verifier to correctly perform a check, it only does so when the Verifier has some reason to believe that a bug might actually exist. Unless a potential Verifier believes that she has some real chance to find a bug, in practice we cannot expect her to participate in the system. In summary, we need both an incentive to partake in verification and an incentive to perform checks correctly. This leads us to the following proposition.

Idea 3. Offer a bug bounty and provide expectation that bugs will exist.

TrueBit takes this third approach by occasionally forcing Solvers to submit incorrect solutions. During “forced errors” TrueBit reverses the normal system incentives: the Solver gets paid for submitting an incorrect solution and penalized for submitting a correct one. We discuss the details of TrueBit’s forced error protocol in Section 4.

Let us return to our dispute resolution subroutine. At the heart of TrueBit’s protocol lies an interactive “verification game” which decides whether or not a contested computational task was performed correctly (see Section 3). The verification game proceeds through a series of rounds, where each round recursively checks a smaller and smaller subset of the computation. A trusted network, in our case the Ethereum platform [11], merely enforces the rules of the game and therefore does not bear the bulk of the verification burden. Anyone on the network can generate tasks, compute, or verify in exchange for rewards. TrueBit does not require “honest” or “altruistic” nodes for correct operation but rather runs under the assumption that each node wishes to maximize its own profit. We shall discuss the details of TrueBit’s incentives for participation in Section 4.

System properties

Any outsourced computation system should be fair in the sense that parties who perform computational tasks indeed receive compensation for their work and reliable in the sense that paid computations are performed correctly. In addition to these properties, TrueBit also ensures accessibility through easy joining or exiting of the verification ecosystem. Any anonymous party can play the role of Task Giver, Solver, or Verifier, where a Verifier is a party who checks solutions and becomes a Challenger whenever she reports an error. In particular, TrueBit does not trust or rely on the reputation of its participants. Everyone who puts a deposit into the system has a fair chance to be hired for a given computational task.

TrueBit offers several novel advantages over traditional cloud computing and verifiable computing models. Verifiable computing ensures a correct answer for an outsourced computation by forcing the cloud to provide a short proof which witnesses the correctness of its computation. The idea is that this “short proof” should be much easier to check than performing the computation oneself. Researchers have achieved much progress on this method in recent years; however, the cryptographic setup costs and computational overhead for the cloud in state-of-the-art systems make these methods unsuitable for most real-world applications. Moreover, many of the proof-based systems to date, including Zaatar, Pinocchio, Ginger, and TinyRAM, require one to run thousands of instances of a single function before breaking even on the overall verification time for a 128 × 128 matrix multiplication puzzle [65]. The new cryptocurrency Zcash [34] successfully makes use of verifiable computing machinery, albeit only for a very restricted class of computation tasks. Below we contrast TrueBit with verifiable computing and traditional cloud computing.

      1. Incentives. Unlike traditional cloud computing, where the user simply trusts the cloud to provide correct answers, TrueBit provides financial incentives to ensure correctness.
      2. Transparency. The entire inner workings of TrueBit’s interpreter sit on the blockchain and are open for inspection (see Section 6). Furthermore, the user community can democratically update the interpreter as needed.
      3. Efficiency. Solvers in TrueBit have low computational overhead and minimal initial setup costs. The verification game (Section 3) does introduce some extra work, but in practice, due to high penalties for wrong answers and bogus challenges, we expect participants to appeal to the verification game only rarely, if at all.
      4. Simplicity. TrueBit’s operation is relatively straightforward. Unlike traditional verifiable computing, TrueBit avoids deep probabilistically checkable proofs (PCPs), succinct non-interactive arguments of knowledge (SNARKs) [37], and exotic cryptographic assumptions (e.g. those used in zkSNARKs [38]). The standard cryptography used in TrueBit, namely hash functions and digital signatures, sits in the underlying blockchain network and does not surface in the TrueBit protocol itself.
      5. Adaptability. TrueBit runs on top of Ethereum’s current protocol without impacting functionality.
      6. Keyless entry. Participants do not need to manage cryptographic keys beyond those used in Ethereum. TrueBit establishes identities through financial deposits alone, and therefore the system avoids risks from cryptographic trapdoors.


Assumptions

Traditional distributed systems models focus on tolerance against arbitrary, Byzantine errors. In Ethereum and other Nakamoto consensus-based cryptocurrencies, however, we have no reason to expect arbitrary errors—nodes generally deviate from protocol due to financial incentives. Our system model makes two basic assumptions about the underlying network.

  (i) There exists a trusted network (i.e. Ethereum) that correctly performs very small computation tasks.
  (ii) Participants are rational in the sense that they act to maximize their individual profits. In particular, we expect that CPUs will show up to do computational tasks if and only if they expect fair compensation for their work.

The consensus computer [50] shows how one can use the incentives in Ethereum to establish assumption (i) above both in theory and in practice. The task of our new system will be to amplify this small amount of correct computation in order to handle larger tasks using assumption (ii). Even though solo Ethereum miners rarely earn mining rewards, expected long-term returns suffice to merit their participation in the system (via mining pools). As we shall discuss in Section 3.1, anonymous parties may not necessarily perform correct computations when economic incentives, including conservation of CPU cycles, pressure them to do otherwise. This observation justifies assumption (ii). We emphasize that we do not find the assumption that there exists a single honest (and non-lazy) participant [46] to be realistic. A person who loyally contributes CPU cycles without incentive to do so most likely does not exist in a large, general purpose, public system.

This paper takes a somewhat simplified view of Nakamoto consensus. In some cryptocurrencies, such as Bitcoin, miners can selectively censor a class of transactions by deciding not to include its members in their blocks. In the case of TrueBit, censorship of challenges (see Section 4) or transactions in the verification game (Section 3.2) could result in catastrophic, bogus solutions being accepted on the blockchain. In Ethereum, however, censoring individual transactions is difficult as miners cannot easily see where an Ethereum transaction might call without executing it. Indeed Ethereum users can obfuscate the function of their transactions, making it computationally costly for miners to hand-pick transactions for their blocks. Hence we shall assume that censorship of individual transactions does not happen. In Section 5.1, we analyze an attack on TrueBit in which miners withhold entire blocks and show that it is not profitable.

Attacker model

TrueBit’s security relies on two basic assumptions about the behavior and capabilities of potential attackers.

  (i) Attackers cannot subvert the underlying consensus computer. The Judges in the verification game (Section 3) always adjudicate correctly, as do the Referees in the incentive layer (Section 4).
  (ii) Attackers are rational. We assume that attackers have limited financial resources and will not deviate from protocol unless it is profitable for them to do so.

While an adversary with sufficient computational power can derail a Nakamoto consensus-based cryptocurrency [56], assumption (i) above neatly sweeps this issue outside our domain. TrueBit itself does not utilize proof-of-work. Although we have chosen to implement TrueBit in the Ethereum platform for convenience, we could easily replace the “Judges” from (i) with any other universally trusted computing mechanism without affecting the system’s functionality.

Due to anonymity of all participants, Sybil attacks, in which a single party deviantly assumes multiple identities on the network, pose special threats to TrueBit (see Section 5.1 and Section 5.2). We will assume that an attacker may create as many identities as she likes, including simultaneously playing the roles of Task Giver, Solver, and Verifier, appearing as two distinct Verifiers, or managing multiple identities via pooled financial resources (see Section 5.3). Using assumption (ii) above, we shall argue that TrueBit resists such attacks in Section 5.

The “forced errors” described in the beginning of this section (Section 2) pose a special challenge to TrueBit due to opportunistic attacks. As the prize for discovering forced errors is necessarily enormous, attackers may seek ways to obtain these large prizes without performing the intended verification tasks (Section 5.4). For this reason, the occurrence of forced errors must appear truly random to all prospective Verifiers. Moreover, Solvers must be disincentivized from prematurely sharing the fact that their error was forced or from claiming that an error was forced when it really wasn’t. In general, we want consistent, not sporadic, verification. At least one Verifier should be present for every task. The attraction of forced errors represents the “Achilles’ heel” of TrueBit, and we devote the incentive structure in Section 4 to defending this potential attack surface.

Dispute resolution layer

In this section we present and analyze TrueBit’s dispute resolution mechanism. TrueBit relies on network “Judges” with limited computational power who adjudicate all disputes, and we show how such Judges can correctly resolve disputes over complex computations.

Bottleneck: The Verifier’s Dilemma

Ethereum’s incentive structure severely restricts the amount of computation work that smart contracts can accurately enforce. Users interact with smart contracts by broadcasting transactions to the network. Such transactions contain program scripts which require computational work to verify their validity prior to miners’ acceptance on the blockchain. Each miner must not only locally verify but also remember the state of every single smart contract. On the surface, this redundant work limits smart contracts’ computational scope due to inefficiency, but the true bottleneck lies deeper at the incentive level.

Let us recall miners’ incentives for participating in Ethereum. In Nakamoto consensus-based cryptocurrencies [56], new transactions enter the blockchain through a process called mining. Participating miners maintain the integrity of the blockchain data by racing to solve computation-intensive, proof-of-work puzzles in exchange for rewards. The first miner who successfully broadcasts a solution to the current proof-of-work puzzle proves that she has spent the necessary computation power to merit appending her new set of transactions to the blockchain, and this step awards the miner a set of newly minted coins. Then the race begins again on top of this new block.

When an Ethereum user broadcasts a transaction to the network, the miner who appends her valid transaction to the blockchain receives a transaction fee for the service. On the other hand, Ethereum’s mining protocol dictates that other miners should verify this transaction gratis, for the “common good” so to speak. More specifically, the mining protocol dictates that the longest chain consisting of valid transactions is the correct one, and miners should mine on top of it. If the time to verify new blocks is negligible, miners may indeed agree to check them, but if the verification work becomes substantial, then miners risk falling behind in the mining race by following protocol and actually verifying. In order to save time, rational miners might simply skip the verification step, leaving the blockchain open to invalid transactions. On the other hand, by skipping verification, a miner might also waste her CPU cycles by mining on top of a stale chain which the rest of the community deems invalid. Either way, the miner risks forfeiting rewards. A rational miner doesn’t know what to do!

Any substantial verification burden placed on miners thus results in a Verifier’s Dilemma [50]. Because of the Verifier’s Dilemma, smart contracts whose verifications require non-trivial computational efforts will fail to execute correctly as rational miners, who may choose to deviate from the suggested protocol, stop donating their limited resources to thankless verification tasks. In short, Ethereum lacks a scalable verification mechanism and remains vulnerable to serious attacks [50] when miners have to verify more than a tiny amount.
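The dilemma can be made concrete with a toy expected-payoff model (a sketch of our own: the linear race penalty, the function name, and all numbers below are illustrative assumptions, not figures from this paper):

```python
# Toy model of the Verifier's Dilemma: expected per-block reward for a
# miner who either verifies incoming blocks or skips verification.
# (Illustrative assumptions only; not a model taken from this paper.)

def expected_reward(verify: bool, t_verify: float, p_invalid: float,
                    block_reward: float = 1.0,
                    race_penalty: float = 0.5) -> float:
    """Expected reward for one block interval.

    t_verify     -- fraction of the block interval spent verifying (0..1)
    p_invalid    -- probability that the latest block is invalid
    race_penalty -- assumed linear reward loss per unit time not spent mining
    """
    if verify:
        # Verifying costs time in the mining race but never wastes work
        # on top of a chain the community will reject.
        return block_reward * (1 - race_penalty * t_verify)
    # Skipping keeps the miner racing, but with probability p_invalid she
    # mines on top of a stale, invalid chain and forfeits the reward.
    return block_reward * (1 - p_invalid)

# Negligible verification time: checking is the better strategy.
print(expected_reward(True, 0.01, 0.05), expected_reward(False, 0.01, 0.05))
# Substantial verification time: the rational choice flips to skipping.
print(expected_reward(True, 0.40, 0.05), expected_reward(False, 0.40, 0.05))
```

In either regime some strategy forfeits expected rewards, which is precisely the quandary described above: once verification stops being cheap, rational miners stop verifying.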

TrueBit contracts securely execute computationally-intensive tasks for use in Ethereum smart contracts. The system’s core consists of a smart contract interface where a user can request a solution to a computational task or puzzle, and anyone can solve it in exchange for a reward. Our interactive verification game, described below, empowers users to both outsource arbitrary computations in a decentralized manner and receive back correct solutions. To achieve this goal, we will not only reduce the redundancy of verification work but also refine the incentive structure responsible for the Verifier’s Dilemma.

Solution: The verification game

The goal of TrueBit’s interactive verification game is to resolve a given dispute between a Solver, who has provided a solution to some computational task, and a Challenger, who disagrees with the solution and therefore calls the verification game to be played. The outer layer of TrueBit (Section 4) uses the verification game as a subroutine. The roles in the verification game include a Solver, who offers a solution to a given task, and a Challenger who disagrees with the Solver’s solution. The final party, the Judges, always perform computations correctly but possess extremely limited computational bandwidth. The Judges in TrueBit are the entire community of Ethereum miners who reach verdicts through Nakamoto consensus.

The verification game proceeds in a series of rounds, each of which narrows down the portion of the computation in dispute. In the first round, the Challenger forces the Solver to commit to a pair of computation steps delimiting some time interval. In the next round, the Challenger iteratively challenges a subset of the Solver’s computation over this time interval, and next she challenges a subset of that subset, etc., until in the final round the challenge is sufficiently trivial that the Judges can make a final ruling on whether the challenge was justified. The Judges also enforce that the Solver and Challenger follow the rules of the game. At the end of the verification game, either the cheating Solver will be discovered and punished in the outer layer of TrueBit (Section 4), or the Challenger will pay for the resources consumed by the false alarm.

Example (matrix multiplication [46]). We give an illustrative example of a verification game. Suppose that the given task is to compute the product of two matrices A and B. The Solver claims that A × B = C. In the first round, the Challenger must claim an error in one of the entries in C, and outputs coordinates i, j such that

c_{i,j} ≠ ∑_{k=1}^{n} a_{i,k} · b_{k,j}

with corresponding evidence consisting of partial sums d_0, d_1, . . . , d_n where

d_m = ∑_{k=1}^{m} a_{i,k} · b_{k,j}.

The Judges only verify that i, j are coordinates and that d_n ≠ c_{i,j} and that d_0 = 0. The Challenger loses if this fails to hold. In the second round, the Solver tries to defend himself by providing a k such that d_k ≠ d_{k−1} + a_{i,k} · b_{k,j}. If the Judges can verify this claim then the Solver wins the game, and otherwise the Challenger wins.
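The two rounds of this example can be sketched in code (a simplification: the function names are ours, and the Judges here search over all k on the Solver’s behalf, whereas in the game the Solver names a single step k and the Judges check only that one):

```python
# Sketch of the matrix-multiplication verification game described above.
# (Function names and the Judges' loop over k are our simplifications.)

def solver_product(A, B):
    """The Solver's claimed product C = A x B (computed honestly here)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def challenger_evidence(A, B, C, i, j):
    """Partial sums d_0..d_n supporting a challenge to entry C[i][j]."""
    n = len(A)
    d = [0]
    for k in range(n):
        d.append(d[-1] + A[i][k] * B[k][j])
    return d  # Challenger claims d[n] != C[i][j]

def judges_rule(A, B, C, i, j, d):
    """Return which party wins, following the two rounds in the text."""
    n = len(A)
    # Round 1: Judges check only the endpoints of the evidence.
    if not (d[0] == 0 and d[n] != C[i][j]):
        return 'solver'  # malformed challenge: the Challenger loses
    # Round 2: the Solver wins iff some step of the evidence is bad,
    # i.e. d_k != d_{k-1} + a_{i,k} * b_{k,j} for some k.
    for k in range(1, n + 1):
        if d[k] != d[k - 1] + A[i][k - 1] * B[k - 1][j]:
            return 'solver'
    return 'challenger'

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
C = solver_product(A, B)              # honest solution
bad = [row[:] for row in C]
bad[0][0] += 1                        # a cheating Solver's solution
```

Against the cheating solution, honest evidence survives both rounds and the Challenger wins; against the honest solution, any challenge fails the Judges’ endpoint check and the Solver wins.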

Detailed protocol

We can use a variation of the Example above to check arbitrary computations by creating a binary search to pinpoint the erroneous (or not erroneous) step in a computation. The idea for the following verification game is essentially due to Canetti, Riva, and Rothblum [39,40] and was later independently discovered by the authors of this paper [46,58]. Canetti, Riva, and Rothblum did not consider this game in the context of blockchains but rather a simpler scenario in which a user outsources a computation to k different clouds under the assumption that at least 1 out of k of the clouds will perform the computation correctly. The assumption of at least one trusted Solver/Challenger cloud is too strong for our purposes since in a purely rational network, such a Solver/Challenger may not exist. Section 4 describes our workaround for the smart contract setting.

For purposes of reducing the number of rounds of interaction and space demands on the blockchain, our protocol roughly follows [39,40] which combines the (parametrized) binary search optimization from [46, Proposition 9] with the probabilistic Merkle tree construction from [58]. Recall that a Merkle tree is a binary tree in which each node is the hash of the concatenation of its children nodes. In general, the leaves of a Merkle tree will collectively contain some data of interest, and the root is a single hash value which acts as a certificate commitment for the leaf values in the following sense. If one knows only the root of a Merkle tree and wants to confirm that some data x sits at one of the leaves, the holder of the original data can provide a path from the root to the leaf containing x together with the children of each node traversed in the Merkle tree. Such a path is difficult to fake because one needs to know the child preimages for each hash in the path, so with high likelihood the data holder will supply a correct path if and only if x actually sits at the designated leaf.
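A minimal version of this commitment scheme can be sketched as follows (SHA-256 stands in for whatever hash function the underlying blockchain uses, we assume a power-of-two number of leaves, and the helper names are ours):

```python
# Minimal Merkle-tree commitment and path verification, as described above.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a complete binary Merkle tree (len(leaves) a power of two)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, idx):
    """Sibling hashes from leaf `idx` up to the root."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        path.append(level[idx ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify_path(root, leaf, idx, path):
    """Check that `leaf` sits at position `idx` under commitment `root`."""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root
```

A verifier holding only the root can thus confirm any single leaf from a logarithmic-size path, which is what lets the Judges check individual machine states without storing entire configurations.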

For clarity, we will present the verification game in the familiar and standard language of Turing machines, but in actuality TrueBit uses the Google Lanai architecture to simulate steps in the computation (see Section 6). The verification game consists of three parties:

  • a Solver who has allegedly performed some computational task and claims some solution,
  • a Challenger who disputes the correctness of the Solver’s solution, and
  • Judges with bounded computational power, who will rule on whether the output is correct.

TrueBit fills the roles of Solver and Challenger according to the incentive layer discussed in Section 4. The Judges are the set of miners in Ethereum who collectively record their decisions on the blockchain. The incentive layer (Section 4) determines the financial consequences of the Judges’ verdict.

To start, the Solver and Challenger each privately compile a tableau of Turing configurations mapping each time step of the task to its full internal state. Assume that the task runs in time t and space s, where s bits suffice to include the complete state of the Turing machine at any given time, including tape contents, head position, and machine state. The game protocol includes a fixed, integer parameter c > 1 which says how many configurations the Solver broadcasts to the blockchain in each round of the verification game. More configurations per round mean fewer total rounds needed; however, it also means that the total number of configurations sent to the blockchain increases. In particular, we reach diminishing returns as the data sent per round reaches Ethereum’s per block capacity.
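This tradeoff is easy to quantify under illustrative assumptions (256-bit Merkle roots and a task of t = 2^40 steps; these numbers are ours, not the paper’s):

```python
# Rounds vs. on-chain data as a function of c (illustrative assumptions).

import math

def rounds(t: int, c: int) -> int:
    """Main-loop iterations needed to narrow t steps down to one."""
    return math.ceil(math.log2(t) / math.log2(c))

def bits_per_round(c: int, sigma: int = 256) -> int:
    # c Merkle roots of sigma bits from the Solver, plus the Challenger's
    # response index i (about log2(c) bits).
    return c * sigma + math.ceil(math.log2(c))

t = 2**40
for c in (2, 16, 256):
    total = rounds(t, c) * bits_per_round(c)
    print(f"c={c:3d}: {rounds(t, c):2d} rounds, {total} bits on chain")
```

Raising c from 2 to 256 cuts the rounds from 40 to 5 but inflates the total on-chain data roughly sixteenfold, illustrating the diminishing returns just mentioned.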

The verification game protocol proceeds as follows.

Main loop. The following steps are done iteratively to pinpoint the source of disagreement. The protocol determines timeout periods within which Solvers and Challengers must respond. Failure to respond within time bounds results in immediate loss for the non-responding party.

      1. The Solver selects c configurations equally spaced apart in time across the current range of the dispute. In the initial iteration, for example, the Solver selects configurations across the entire tableau at the following time steps:

t/c, 2t/c, 3t/c, . . . , ct/c.

He then produces c Merkle trees, each with s leaves, where the leaves constitute the complete machine state at each of these times, and broadcasts each of the roots of these Merkle trees to the blockchain.

      2. The Challenger responds with a number i ≤ c, indicating the first time step in this list that differs from her own, and broadcasts this number i to the blockchain.
      3. The Judges check that the Solver indeed posted c Merkle roots to the blockchain and that the Challenger’s value i satisfies 1 ≤ i ≤ c. If either of these checks fails, the Solver, or respectively the Challenger, loses immediately.
      4. The next iteration of the game continues similarly, but restricted to configurations between the (i − 1)-st and i-th indexed configurations. Here we interpret index 0 as the computation’s initial configuration.

Main stage aside, the game must end with an explicit check. Final stage. After log t/ log c rounds, the loop above converges to the first disputed computational step, and the Judges explicitly check this step of the computation as follows. Suppose that the first disagreement occurs at time e. The Solver provides paths from the Merkle root for time e to its leaves containing:

  • the location of the machine head at time e,
  • the content of the tape square under the machine head, the tape contents of its left and right neighbors, and
  • the machine state at time e.

The Solver also provides the analogous paths for time e−1. The Solver loses the dispute if the Judges find that any of these paths fail to be valid. Finally, using this information and the original task code (which existed on the blockchain prior to the verification game), the Judges check the computational step at time e and rule whether the challenge was justified.
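The Judges’ path checks in this final stage amount to ordinary Merkle proof verification. A minimal sketch, again using SHA-256 as a stand-in hash and assuming leaves are indexed left to right:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()   # stand-in for the protocol hash

def verify_path(root: bytes, leaf: bytes, index: int, siblings: list) -> bool:
    """Check a path from `leaf` (at position `index`) up to a committed root.

    `siblings` holds the sibling hash at each level, bottom to top. A Solver
    whose supplied path fails this check loses the dispute outright.
    """
    node = h(leaf)
    for sibling in siblings:
        if index % 2 == 0:                 # current node is a left child
            node = h(node + sibling)
        else:                              # current node is a right child
            node = h(sibling + node)
        index //= 2
    return node == root
```

For a four-leaf tree with root h(h(h(a)+h(b)) + h(h(c)+h(d))), the path for leaf c consists of h(d) and then h(h(a)+h(b)).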

Runtime and security analysis

We now show that the work done by the Judges is small compared to the work required to perform the task, and that the number of interaction rounds is modest. Note that the majority of effort from the Judges simply involves recording data onto the blockchain. The Judges’ work is split over many transactions and therefore avoids any chance of a Verifier’s Dilemma.

We fix σ as the size of each hash in the Merkle trees, and p as the space required to store a single machine state for the computation. For computational tasks that run in time t and space s with c configuration roots posted per round, the protocol requires log t/ log c rounds of interaction on the main loop, writes cσ + log c bits to the blockchain during each iteration of the main loop, and writes 2 · ((2σ + 1) log s + 5 + p) bits in the final round (assuming that tape squares can contain either 0, 1, or be blank, so that 3 · log 3/ log 2 ≈ 5 bits suffice to describe three of them). Summing these parts, the total time required for community-wide verification from the Judges is then

O( (log t / log c) · (cσ + log c) + 2 · ((2σ + 1) log s + 5 + p) )

where the hidden constant depends on the implementation of this protocol on the Judges’ local machines. This estimate does not include the space and time required to store the instructions for the initial computation task. We assume that this data already exists on the blockchain before the verification game begins.
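To get a feel for the magnitudes involved, one can evaluate the bound above directly. The numbers below are purely illustrative choices, not parameters prescribed by the protocol:

```python
import math

def judges_bits(t, s, c, sigma, p):
    """Bits written to the blockchain per the bound above (hidden constant
    and the initial task instructions excluded)."""
    rounds = math.ceil(math.log2(t) / math.log2(c))   # main-loop iterations
    per_round = c * sigma + math.ceil(math.log2(c))   # c Merkle roots + index i
    final = 2 * ((2 * sigma + 1) * math.ceil(math.log2(s)) + 5 + p)
    return rounds * per_round + final

# A task running 2^40 steps in 2^30 bits of space, 256-bit hashes,
# c = 16 roots per round, and a 64-bit machine state:
total = judges_bits(t=2**40, s=2**30, c=16, sigma=256, p=64)
# Roughly 72 kilobits on chain, versus ~10^12 steps of local recomputation.
```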

In comparison with other proposed blockchain-based computation systems, or even PCP-based verifiable computing systems [65], the work imposed on the Judges by the verification game is small [12], as is the computational overhead required to join the system [63]. Moreover, the economic costs to the Task Giver, who pays only a Solver and a Challenger rather than all Judges to process the full task, are modest, as are the computational overheads for the Solver and Challenger.

Note that larger values for σ reduce the chances of hash collisions. Hash collisions might allow a dishonest Solver to substitute configurations in his computation tableau with fake ones sharing the same Merkle roots. Two consecutive configurations in the tableau whose Merkle roots happen to agree with the Merkle roots of two consecutive configurations that the dishonest Solver wishes to apply as substitutes suffice to disrupt the integrity of the verification game protocol. Indeed, the Challenger will agree with both Merkle roots, and the Judges will confirm that the transition between them is valid and therefore incorrectly reject the challenge. For a fixed size σ, such a collision eventually becomes likely, even by accident, for sufficiently enormous tasks.

One could entirely eliminate the security risk discussed in the previous paragraph by posting complete machine states on the blockchain rather than just Merkle roots, but this makes the protocol more expensive. Alternatively, one could either increase the parameter σ or the number of states checked by the Judges in the final step. In this sense, the choice of σ bounds the maximum complexity of secure computations in TrueBit. We will also see in Section 4.1 that the capital held by the TrueBit contract as a “jackpot” for finding a forced error poses an additional constraint on computational capacity, but this value can scale as well. Finally, the effectiveness of TrueBit’s verification game may degrade for extremely complex tasks due to the computational limitations of the Challenger. If the expected execution time of a verification game exceeds a human life expectancy, for example, then potential Challengers may lack motivation to participate despite financial incentives. Note that even when a task itself is feasible, its corresponding verification game may not be, as the verification game carries significant overhead. We anticipate that future versions of TrueBit may optimize the verification game for certain kinds of tasks so as to reduce this discrepancy (see Section 7.4).

Remark. The verification game provides a significant privacy-protecting effect in that only disputed parts of a given computation must touch the public eye [44].

Incentive layer

We now discuss the financial incentives which encourage anonymous parties to both contribute CPU cycles and perform requested computational tasks correctly. We use the dispute resolution layer from Section 3 as a subroutine and connect its role to the present construction. A Verifier is a potential Challenger, that is, someone who checks submitted solutions and calls for a challenge (see Section 3) if and when she detects an error. By assumption (ii) in Section 2.2, Verifiers must receive adequate payment for verification tasks. As argued in the beginning of Section 2, however, Verifiers can only receive rewards when they actually find bugs. It follows that the reward for discovering errors must encompass amortized payment for all verification tasks checked, including those in which no bugs were found.

Any Ethereum user who offers a reward through TrueBit for performing a computational task is called a Task Giver. A party which offers a solution for performing these tasks in exchange for a reward is called a Solver, and, as indicated in the previous paragraph, Verifiers check that Solvers’ solutions are correct. Solvers and Verifiers play the roles of “Solvers” and “Challengers” respectively from the dispute resolution layer (Section 3) whenever a Verifier initiates a verification game. Referees enforce the rules of the TrueBit protocol in the incentive layer. While Ethereum miners enforce protocol rules both in the role of “Judges” from Section 3 and in the role of “Referees” in the present section, we use distinct labels because these roles serve different functions in different layers. The primary role of Judges is to interactively resolve disputes, whereas Referees primarily enforce that Solvers and Verifiers submit appropriate data in the incentive layer in a timely fashion.

In a nutshell. The main steps of a TrueBit contract are as follows. Items in quotes will be thoroughly explained in due course.

      1. A Task Giver announces a task and offers a reward for its solution.
      2. A Solver is elected by lottery, and he prepares both a “correct” and an “incorrect” solution to the task.
         (a) If a “forced error” is in effect, the Solver reveals the incorrect solution on the blockchain.
         (b) Otherwise the Solver reveals the correct one.
      3. Verifiers check the Solver’s solution. They win a large “jackpot” payout if both:
         (a) they correctly identify the solution as erroneous, and
         (b) a forced error was in effect.
      4. If no Verifier signals an error, then the system accepts the solution. Otherwise acceptance depends on the outcome of a verification game.

In the rest of this section, we discuss the security measures in TrueBit which ensure that participants comply with the skeletal procedure outlined above. A more complete description of the protocol appears in Section 4.6.


TrueBit periodically imposes forced errors in which a Solver must offer a wrong solution to a task (see Section 2). This ensures that Verifiers who diligently search for errors will eventually find them. Accordingly, Verifiers who correctly report forced errors receive substantial jackpot payouts. By design, Verifiers cannot predict when forced errors will occur and therefore have incentive to check all tasks carefully. Forced errors occur only rarely, and we expect Solvers to make few, if any, other errors. Rewards for identifying unforced errors may be modest compared to jackpot payouts, and we consider any such payments incidental, rather than fundamental, to TrueBit’s secure incentive structure.

The jackpot payout effectively bounds the complexity of computation tasks that TrueBit can perform securely. The Verifier must, on average, receive a payment which fairly compensates her for the task at hand, which means that the jackpot payout should be at least the fair compensation for the current task divided by the forced error rate. In this way, the jackpot cap bounds the Verifier’s expected per-task compensation which, by assumption (ii) in Section 2.2, restricts Verifiers’ available CPU cycles per task.

We fix a rate for forced errors among tasks. This fraction should not be so low as to discourage Verifier participation through excessively infrequent rewards, but neither should it be so high as to run the risk of Referee bias (see Section 5.1). We set forced errors to occur, on average, once every thousand tasks.
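The break-even arithmetic behind these two constraints is straightforward. A sketch with illustrative integer amounts (denominated in any fixed currency unit):

```python
TASKS_PER_FORCED_ERROR = 1000   # forced errors occur once per thousand tasks

def minimum_jackpot(fair_compensation_per_task: int) -> int:
    """Smallest jackpot for which a Verifier breaks even on average: she is
    paid only on forced errors, so the jackpot must amortize the roughly
    1000 unpaid checks between consecutive forced errors."""
    return fair_compensation_per_task * TASKS_PER_FORCED_ERROR

# If fair pay for checking one task is 20 units, the jackpot must be >= 20000:
assert minimum_jackpot(20) == 20_000
```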


Given that a jackpot repository must exist, we now describe the mechanism for funding it. We assume that a generous philanthropist deposits some initial funds into the repository, but thereafter TrueBit will be self-sustaining through taxes. Any Task Giver that calls a TrueBit contract must pay not only for the computational work done by the Solver but also for the work done by the Verifier(s) (excluding unforced errors and bogus challenges), as well as the work done by Referees and Judges. We refer to these latter costs as the verification tax. We ensure that the jackpot repository never disappears entirely by placing a cap on the jackpot size. To this end, we set the maximum jackpot payout for a forced error to be one third of the total repository size.

While a single, attentive Verifier suffices to ensure correctness and achieves ideal tax efficiency, in practice the verification tax requires a substantial cushion. We estimate the necessary verification tax to be 500% – 5000% of the cost of performing the given task. As we shall see in Section 5.2, there is a quantitative tradeoff between tax efficiency, Solver deposits, and security of computations, so Solvers could potentially absorb some of this tax burden by contributing higher deposits. Our tax rate estimate incorporates Verifiers’ incentives for participation and the fact that both the Task Giver and the Solver for a given task each have incentive to perform verification. Participation from these parties may necessitate higher taxes because the total jackpot payoff decreases exponentially as the number of challenges increases (see Section 5.3). Expected jackpot payoffs must be sufficiently high to consistently attract at least one Verifier per task. The required tax amounts may also depend on peculiarities of human behavior. Indeed, we may have to pay Verifiers more than Solvers per CPU cycle because Verifier rewards have the human-undesirable property of being sporadic, whereas Solvers always receive immediate rewards. Thus the optimal tax rate must, at least in part, be determined experimentally.


TrueBit requires deposits from Solvers and Verifiers in order to thwart Sybil attacks (see Section 5.1) and incentivize correct computations. We set these deposits to be more than the expected jackpot payout for a given task plus the cost of playing a verification game. In particular, the deposits must be large enough to:

      1. pay for the (expensive) cost of a verification game, including all rewards and penalties for the Solver and Challengers and the work done by Judges,
      2. discourage Solvers and Verifiers from sacrificing deposits in order to obtain jackpots without performing verification (see Section 5.1 and Section 5.3),
      3. discourage Task Givers who collude with Solvers in an effort to get bogus solutions onto the blockchain (see Section 5.2),
      4. refund taxes to the Task Giver in case the Solver causes an unforced error,
      5. deter Solvers from randomly guessing solutions to obtain task rewards instead of actually performing computations (as might be profitable for binary decision tasks with very high task rewards), and
      6. deter external, temporal pathologies.

Note that the currency used to pay Solvers and Verifiers need not be the same as the currency used to pay Judges and Referees, but for simplicity of presentation, we fix ether (ETH) as the unique underlying currency.

As an example of the second type, consider a situation where the Solver deposit is small (say 10 ETH) but the expected jackpot payout per task is high (say 1000 ETH). An individual playing both the role of the Solver and Verifier could offer a bogus solution and then challenge his own answer, hypothetically netting, on average, 1000 − 10 = 990 ETH without providing any useful service. Such an action would degrade other Verifiers’ incentive to participate.

As an example of the last case, if the outcome of a time-sensitive TrueBit contract controlled a 1000 ETH payout but only required a 10 ETH Solver deposit, then the party who stands to lose 1000 ETH from the TrueBit contract could attempt to cause a delay by posing as a Solver to the TrueBit contract and giving a bogus answer. We leave it to the Task Giver to determine appropriate minimum deposit values for such specific situations, as such contextual determinations lie outside of the scope of TrueBit itself.

Generating forced errors

In order to motivate verification of all tasks and to guard the jackpot repository against swindle, forced errors must appear unpredictably. TrueBit uses strings of random bits to determine whether or not a forced error occurs for a given task. The system derives its unpredictability via the following properties.

      1. The Task Giver does not know the random bits at the time that she announces a task.
      2. The Solver does not know the random bits until after he has committed his solution.
      3. Verifiers do not know the random bits until after they decide whether to challenge.

The first property makes it difficult for a Task Giver to create a task designed to swindle the jackpot repository, and the second discourages Solvers from being lazy and only volunteering to solve tasks which have forced errors. In order to satisfy these three properties, TrueBit combines random bits from the following two sources:

(a)“private” random bits from the Solver, and

(b)the hash of the block mined immediately after the block containing the Solver’s solution.

By hash of the block, or block hash, we more precisely mean the hash of the block’s header, a roughly 200-byte piece of data that contains the timestamp, nonce, previous block hash, and Merkle root hash of the transactions occurring in the block [32]. Source (b) achieves properties 1. and 2. above, and source (a) achieves property 3.

TrueBit generates Solvers’ “private” random bits using a method reminiscent of RANDAO’s random generator mechanism [25]. Before participating in a TrueBit contract, a Solver must privately generate a string of random bits r and publish its hash on the blockchain. This action commits the Solver to using r in the protocol without revealing the value r to others.

The Solver establishes property 2. above by committing both a hash of a “correct” solution and a hash of an “incorrect” solution prior to the broadcast of the block hash from item (b). At the time that this block hash is revealed, only the Solver, who has access to the value r, knows whether or not a forced error is in effect. He publicly designates one of his two hashed solutions for evaluation, however potential Verifiers do not know a priori whether the Solver intended his solution to be “correct” or “incorrect.” Only after the timeout for challenges has elapsed do the Verifiers witness the Solver’s private random bits in the clear and learn whether a forced error was in effect. In case the Solver’s solution is challenged, the Solver must reveal r as well as his second “correct” solution in order to prove that the error was forced and avoid a penalty (see Section 5.4). Thus, in the case of a forced error, the Task Giver still obtains a correct solution. In case of an unforced error, i.e. when the Solver evidently fails to provide a correct solution, the Task Giver receives a full refund of her task reward and taxes (see Section 4.3 and Section 4.6).
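The commit-reveal flow and the forced-error trigger can be sketched as follows. SHA-256 stands in for the chain’s hash function, and the threshold test for “small” is one plausible realization of the forced-error rate rather than the protocol’s prescribed encoding:

```python
import hashlib
import os

TASKS_PER_FORCED_ERROR = 1000          # target forced-error rate of 1/1000

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Commit phase: the Solver publishes hash(r) on chain, keeping r private.
r = os.urandom(32)
commitment = h(r)

def forced_error(r: bytes, block_hash: bytes) -> bool:
    """A forced error is in effect when hash(r ++ block_hash) is small."""
    digest = int.from_bytes(h(r + block_hash), "big")
    return digest < (1 << 256) // TASKS_PER_FORCED_ERROR

# Reveal phase: Referees recheck the commitment before trusting r.
assert h(r) == commitment
# Neither party controls the outcome alone: the Solver fixed r before the
# block hash existed, and miners do not know r when mining the block.
```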

Although in theory one could securely run this part of the protocol without hashing the Solver’s solutions, hashing makes it easier for the Solver to provide a convincing “incorrect” solution which appears, upon casual inspection, indistinguishable from a “correct” one. The Solver can effectively use any randomly selected “incorrect” solution during a forced error because he never has to reveal its preimage.

Solver and Verifier election

The system effectively chooses Solvers by lottery. When a task is announced, Solvers broadcast their interest in solving it to the Referees in the form of Ethereum transactions. Referees, or more specifically miners, choose one of these transactions to include in the next block, thereby electing the Solver for that given task. In case the miner who chooses the transactions for the current block happens to also hold a lottery ticket, he may bias his chances of winning the lottery. This bias does not affect the security of TrueBit, however, since the Solver must still provide a correct solution.

In contrast to Solver selection, TrueBit does not impose a limit on the number of Verifiers, and we describe in Section 5.3 how multiple Verifiers who participate in a task must split their rewards after a successful challenge. Due to the computational work involved in verification, rational Verifiers will only verify tasks when they expect to receive adequate compensation for their work (assumption (ii), Section 2.2). Thus the number of Verifiers per task will remain low due to the balance between computation costs and incentives (see Section 5.2 and Section 5.3).

Protocol overview

In this section we present an overview of the TrueBit protocol. We will discuss how our chosen parameters enable system security in Section 5. Throughout the description below, Solvers and Verifiers who deviate from protocol by broadcasting malformed data or failing to respond within timeout bounds forfeit their security deposits to the jackpot repository.

Preprocessing steps. The following must be done prior to commencing TrueBit operation:

      1. A substantial jackpot repository must be established for the TrueBit contract prior to commencement of any interactions (see Section 4.1 for details).
      2. Solvers and Verifiers that wish to participate must commit deposits to the TrueBit smart contract (see Section 4.3). The deposits may be placed at any time during the protocol so long as the underlying contract has sufficient time to confirm receipt of the funds.
      3. Solvers must also generate private random bits and commit their respective hashes to the blockchain. We denote by r the private random bits of the unique Solver selected to perform the task below (see Section 4.4 for more details).
      4. A universal tax rate T must be established (see Section 4.2).

Main algorithm. The protocol steps are as follows.

      1. A Task Giver provides the following:
         (a) a computational task,
         (b) the timeOut for accepting bids, performing the computation, and waiting for a challenge. In the protocol below, we do not distinguish between these various timeouts with distinct notation, and we colloquially use “timeOut” to refer to both events and lengths of time. In all cases, timeOut must be long enough to avoid microforks during which Referees temporarily disagree on current parameters.
         (c) a reward for a correct output, which must be at least the cash equivalent of the task difficulty d based on timeOut (see Section 5.5), plus a total tax of T · d. The reward is held in escrow by the TrueBit contract while the taxes are immediately deposited into the jackpot repository.
         (d) the minimum deposit, minDeposit, needed to participate as a Solver or Verifier (see Section 4.3, Section 5.1, Section 5.2, and Section 5.3).

      2. Solvers who have the requisite minDeposit and random bits can bid to take on the task until the bidding timeOut. At most one Solver is selected (Section 4.5), and if no Solver takes on the task in the allotted time, the task request is canceled and the Task Giver receives a full refund.
      3. The Solver privately computes task. In case of a timeOut, the Solver forfeits his deposit to the jackpot repository and the protocol terminates.
         (a) Solver commits two distinct hashes to the blockchain, thereby committing both a “correct” and an “incorrect” solution.
         (b) The hash of the next mined block is revealed, and then Solver knows whether or not there is a forced error (see Section 4.4).
         (c) Solver designates one of the two hashes as the hash of his solution.


      4. Verifiers who have posted minDeposit can challenge (the hash of) solution until timeOut. Prior to timeOut, the Verifier must broadcast the hash of an even integer to the blockchain in order to commit to a challenge. Hashing an odd number in case of no challenge is optional and may be used to camouflage real challenges from other Verifiers (see Section 5.3). After timeOut, the Verifier broadcasts this number to the blockchain in the clear to reveal her action.
         (a) If no Verifier challenges solution, then
             i. Solver reveals r to prove that there was no forced error (i.e. the criterion in Step 4(b)i below fails),
             ii. Solver reveals solution in the clear on the blockchain,
             iii. Solver receives the task reward, and then
             iv. the protocol terminates.
         (b) Otherwise, some Verifier challenges solution.
             i. Solver reveals his private random string r on the blockchain, and Referees check it against his commitment from preprocessing step 3. If the hash of the concatenation of r and the block hash following the solution announcement from 3(b) is small (as determined by the forced error rate, see Section 4.1), then a forced error is in effect (see Section 4.4).
             ii. If the value r reveals that Solver’s error was forced, then Solver must reveal his secondary solution in the clear (see Section 5.4).
                A. If no Verifier challenges Solver’s secondary solution before timeOut, then the Verifier wins a fraction of the maximum jackpot amount J, scaled for task difficulty. In case of challenges from multiple Verifiers, the jackpot is split among them. In more detail, if there are k distinct challenges, then each participating Verifier receives J/2^(k−1) times the ratio of the task’s difficulty to the system’s maximum task difficulty (see Sections 4.1, 5.3, and 5.4 for further discussion).
                B. Otherwise the Solver must play the verification game with the challenging Verifier(s). Verifier penalties, Verifier rewards, Solver penalties, and refunds to the Task Giver here are the same as described in the next step. In case the Solver successfully defends himself against all challenges, however, the jackpot payouts proceed as in Step 4(b)iiA above.
             iii. Otherwise the error was not forced. Solver reveals solution in the clear, and then Solver and Verifier must play a verification game (Section 3.2). In case of challenges from multiple Verifiers, the steps below are repeated until either some Verifier wins a challenge or Solver defeats all Verifier challenges. The Solver collects reward only once he wins all of the challenges, and if this does not happen, then the Task Giver receives a refund on his reward, and tax payments are reimbursed through Solver’s deposit as described below.
                A. If Solver wins, then Verifier forfeits half of her deposit to the jackpot repository and the other half to the Solver, where the latter suffices to compensate the Solver for playing the verification game (see Section 4.3).
                B. Otherwise Verifier wins. Solver pays at most half of his deposit to the Verifier(s), according to the distribution scheme in Step 4(b)iiA above, pays back the Task Giver’s taxes out of this amount, and forfeits the remaining funds to the jackpot repository (see Section 5.1 and Section 5.2).

End of protocol.
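The jackpot division in Step 4(b)ii rewards a lone challenge far more than pile-ons: the total paid out, k · J/2^(k−1), never exceeds J and shrinks once k exceeds 2. A sketch with integer arithmetic (the amounts are illustrative):

```python
def verifier_share(J: int, k: int, difficulty: int, max_difficulty: int) -> int:
    """Per-Verifier jackpot share when k distinct challenges hit a forced
    error, scaled by the task's difficulty (Step 4(b)ii)."""
    return (J // 2 ** (k - 1)) * difficulty // max_difficulty

J = 3000
# A lone challenger on a maximum-difficulty task takes the whole jackpot:
assert verifier_share(J, k=1, difficulty=8, max_difficulty=8) == 3000
# Three challengers get J/4 = 750 each, a combined 2250 < J, so flooding
# a forced error with extra identities strictly lowers everyone's payout.
assert verifier_share(J, k=3, difficulty=8, max_difficulty=8) == 750
```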

Note that Solver does not receive reward when a forced error occurs in Step 4(b)ii. In such situations, we expect the Solver to challenge his own “mistake” and receive a jackpot payout much greater than reward. TrueBit makes this payout automatic since the Solver would lose the verification game anyway if he and Verifier were to play it out.

Sanity check

The table below recaps the parameters used in the TrueBit protocol and hints at how to estimate their exact values. Each parameter is either a fixed constant, an input from a participant or something already on the blockchain, or can be computed from the elements which appear above it in the table. By inspecting the list below from top to bottom, one can confirm that all notions are well-defined.


We now analyze the security of the incentive layer from Section 4. TrueBit’s security relies on the presence of at least one Verifier to check each task performed. In this section we show that available incentives suffice to guarantee the existence of such Verifier(s) according to the network assumptions in Section 2.2 and the attacker model from Section 2.3. TrueBit defends against shortcuts which could divert verification incentives to non-verifying parties. In particular, we show that TrueBit resists Sybil attacks, collusion

parameter: dependency

dispute layer parameters p, c, and σ (Section 3): fixed constants.
tax rate (Section 4.2, Section 5.2): fixed constant (500% – 5000%).
forced error rate: fixed constant (1/1000).
maximum jackpot payout (Section 4.2): 1/3 of current jackpot repository.
cash equivalent of CPU cycles (Section 5.5): based on external markets.
maximum task difficulty: maximum jackpot payout divided by cash equivalent of a CPU cycle.

The following parameters are chosen by the Task Giver, with minimums as described:

timeouts (Section 4.6): long enough to avoid microforks.
task difficulty: determined by timeouts.
Solver reward (Section 5.5): no less than the cash equivalent of the task difficulty.
expected jackpot payout (Section 4.1, Section 5.2): based on the forced error rate, maximum jackpot payout, cash equivalent of task difficulty, and number of active Verifier deposits.
Solver & Verifier deposits (Section 4.3, Section 5.1, Section 5.2, Section 5.3): more than the cost of a verification game plus the expected jackpot payout; also depends on tax rate.

actual number of Verifiers (Section 4.5): as many as join (incentives limit over-participation).
jackpot payout for challenging a forced error (Section 5.3): based on maximum jackpot payout, actual number of Verifiers, and ratio of task difficulty to maximum task difficulty.
payout for detecting an unforced error (Section 5.1, Section 5.2): at most half of Solver deposit, split among all Verifiers (rest goes to jackpot repository).

Table: Relations between parameters used in TrueBit.

pools, opportunistic attacks related to jackpot payoffs, and certain external threats.

Pairwise Sybil attacks

In a Sybil attack, an adversary assumes multiple identities on the network in order to execute an exploit. Identities on TrueBit include Task Givers, Solvers, Verifiers, Judges, and Referees. By our assumption (Section 2.3), Judges and Referees always function as intended, and we provide additional justification for this axiom here. In this subsection, we consider all pairs among the remaining identity types and show that pairwise cooperation does not harm operation of the system. While parties can freely join and leave TrueBit, each identity must make a deposit in order to participate. This deposit alone is a general deterrent against Sybil attacks; however, as we shall see, multiple identities do not provide much cheating advantage.

Judges and Referees. Recall that Ethereum miners play the roles of Judges and Referees in TrueBit. Our analyses in Section 3.4 and Section 4.6 show that the work done by these parties is small, and hence they are not vulnerable to a Verifier’s Dilemma (Section 3.1). Nakamoto consensus [56] therefore ensures that miners will not post bogus transactions, i.e. enforce rules incorrectly, lest other miners reject their block and cost them a block reward. Therefore the only threat from miners is block withholding, specifically with regard to random number generator bias (see Section 4.4).

While in theory miners could discard blocks, along with their mining reward and associated random bits for TrueBit, in practice this may never happen. A miner who drops a block must expect in exchange some income greater than the usual mining reward in order to make this action worthwhile. If such income were to come as a bribe from a TrueBit participant, it would have to come from a Solver since only the Solver, who has unique access to his own private random bits, knows how to bias miners’ random bits towards or away from a forced error. In short, miners cannot disturb randomness in TrueBit without a tip-off from a Solver. The Solver, however, has no reason to bias against a forced error because this would prevent him from challenging his own answer and winning the jackpot, or at the very least would invite others to share it with him, thereby decreasing the total jackpot payout (see Section 5.3). Moreover, the Solver is unlikely to succeed in biasing towards a forced error since miners have little control over their own block hashes. This “one-sided” effect of block withholding, which can lock away jackpot funds but (almost) never release them, makes block hashes a safe source of randomness in TrueBit. Hence the Solver’s potential reward for challenging his own solution under a forced error is not merely an artifact of TrueBit’s incentive structure: it guarantees unbiased Referees.

Task Giver and Solver. A Task Giver who also acts as a Solver does not benefit from solving her own task. One idea for an attack vector would be to create a task such that the Solver’s private random bits force an error for the given task. Since the Task Giver cannot predict the random bits from the block hash at the time the task is created (Section 4.4, part (b)), the Task Giver cannot create such a task. Moreover, the Task Giver cannot afford to flood TrueBit with useless tasks and solve them herself in the hopes of eventually, by chance, encountering a forced error. The Task Giver would pay more taxes to the jackpot repository than she would expect to win from the jackpot payout (Section 4.2), even taking into consideration any rewards she might win by correctly solving her own tasks. Her losses are amplified through payments to other Solvers, Verifiers, and Judges who choose to participate in the protocol.

Solver and Verifier. The Solver’s burned deposit always exceeds the Verifier’s income for successfully challenging an unforced error (see Section 4.3). Hence the Solver has no incentive to challenge himself as a Verifier in such cases. Similarly, a Verifier can only lose her deposit by posing bogus challenges. In certain situations it is conceivable that a Solver–Verifier pair could benefit from submitting a false solution and then challenging it due to temporal constraints external to TrueBit itself (as mentioned in Section 4.3), and in such cases the Task Giver must determine the necessary deposits to deter such actions.

In the case of a forced error, we expect the Solver will challenge himself in order to win the jackpot. Nevertheless, TrueBit’s tax rate (Section 4.2), and hence its jackpot payout (Section 4.1), suffices to incentivize an independent Verifier to also check the solution. As we shall see in Section 5.3, the Solver lacks incentive to claim the same jackpot more than once, hence the Solver’s self-verification does not spoil other Verifiers’ motivation to participate.

Task Giver and Verifier. A Task Giver can certainly verify the solution she receives. This checking can only improve the reliability of the system! We cannot, however, assume that the Task Giver will always perform such checks due to the Task Giver’s possible computational and financial resource constraints or lack of motivation to involve herself in the details of a TrueBit contract.


The trifecta

Up until this point, we have implicitly assumed that the attacker’s goal was to extract money from the jackpot without performing verification. However, there is another possibility: the attacker wishes to get a bogus solution accepted onto the blockchain. No penalty scheme can deter all such attacks since TrueBit, as a closed system, has no way to estimate the true economic impact of a bogus solution on Ethereum’s blockchain. Hence we now consider scenarios in which a party is willing to sacrifice a jackpot payout (or more) in order to skew computational results.

Scaring off Verifiers. Suppose that a Task Giver wishes to obtain an incorrect solution for a task. Posing as a Solver, she gives a wrong solution to her own task. Now the only thing that could stand in the way of her success would be a pesky Verifier. In order to dodge this obstacle, the attacker poses as a regular Verifier prior to posting her malicious task. She checks every task that comes along until eventually she encounters a forced error. At this point, the attacker challenges not once but a hundred times, which according to Step 4(b)ii in Section 4.6 essentially wipes out the jackpot payout for herself and any other Verifiers who also challenged this solution. The upshot is that legitimate Verifiers no longer wish to participate because TrueBit failed to deliver their well-deserved jackpot payout, and more to the point, they have no way to estimate their income from future jackpots. So when the attacker finally broadcasts her malicious task, legitimate Verifiers simply ignore it.

As Verifier participation, and hence security, depends entirely on the expected value of jackpot payouts (assumption (ii) in Section 2.2), TrueBit must ensure sufficient expected value for these payouts. We therefore offer the following default strategy for all Verifiers. New tasks are submitted to TrueBit at various and overlapping times. A prospective Verifier who wishes to earn rewards for verification should count the number of active Verifier deposits and compare this to the average number of simultaneous tasks occurring on the network. If the tax rate and task flow profitably support an additional Verifier, then the prospective Verifier submits a deposit, randomly chooses an active task, and begins verifying. As soon as she finishes verifying this task, she randomly selects another active task and begins verifying that next one. Thus a Verifier with either more powerful hardware or better algorithms can gain a market advantage in TrueBit. Such a Verifier can check more solutions per unit time and therefore increase her expected jackpot rewards per unit time. Verifiers are free to leave the network at any time, for any reason. If the current number of active Verifiers exceeds the number supported by the tax rate and task flow, then the probabilistic default strategy tells each Verifier to exit the system with the minimal probability that one would expect to restore equilibrium if all Verifiers were following the default strategy.
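The exit rule in the default strategy can be made concrete. The sketch below is our own illustration, not part of the protocol: when the number of active Verifiers V exceeds the number T supported by the tax rate and task flow, each Verifier exits independently with probability (V − T)/V, the minimal probability leaving exactly T Verifiers in expectation.

```python
import random

def default_strategy_exit(active_verifiers, supported_verifiers):
    """Exit decision under the default strategy: with V active Verifiers
    and support for T of them, each exits independently with probability
    (V - T) / V, so the expected number remaining is exactly T."""
    V, T = active_verifiers, supported_verifiers
    if V <= T:
        return False  # at or below equilibrium: stay and keep verifying
    return random.random() < (V - T) / V
```

For example, with V = 100 active Verifiers and support for T = 60, each exits with probability 0.4, and 60 remain in expectation.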

Verifiers who choose tasks to verify at random according to the default strategy, as opposed to changing their behavior based on guesses as to who-solved-what before, can reasonably estimate their compensation for participation. When a significant number of Verifiers (not necessarily a majority) follow the default strategy, the adversary wishing to establish a bogus solution can neither predict which tasks will be verified, nor can he influence which tasks default-strategy Verifiers will verify. According to assumption (ii) in Section 2.2, there exists some bound on the adversary’s financial means. If not, he could simply flood the Verifier pool with deposits and begin verifying, which in turn would incentivize legitimate Verifiers to leave the system. Therefore the adversary can only post a bogus solution and hope that, by chance, it gets ignored. In this way, the default Verifier strategy spoils the scare-off strategy above.

Let us now estimate the chances that the adversary succeeds in posting a bogus solution which all legitimate Verifiers fail to notice. We make the worst-case assumption that the adversary Solver always gets elected to solve his own task. Suppose that the tax rate (see Section 4.2) supports at least 6 Verifiers per task, and that the adversary holds at most 1/6 of all Verifier deposits. Then effectively there are an average of 5 Verifiers per task. Suppose n is the average number of simultaneous tasks on the network. Then the number of Verifiers is approximately 5n, and so the chance that none of them check a particular solution is the chance that each of them independently and randomly chooses other tasks, or [(n − 1)/n]^{5n}. Therefore the probability that the adversary gets caught, regardless of n, is

1 − (1 − 1/n)^{5n} > 1 − e^{−5} > 99%.

The leftmost inequality follows from the standard limit definition of the constant e. We conclude that the adversary will most likely lose his Solver deposit by attempting such an attack.
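The bound can be checked numerically. The sketch below (the function name is ours) evaluates the catch probability for several values of n and confirms it always exceeds 1 − e^{−5}:

```python
import math

BOUND = 1 - math.exp(-5)  # the lower bound 1 - e^-5 (about 0.9933)

def catch_probability(n):
    """Chance that at least one of roughly 5n default-strategy Verifiers,
    each choosing among n simultaneous tasks at random, checks the
    adversary's bogus solution."""
    return 1 - (1 - 1 / n) ** (5 * n)
```

The catch probability decreases toward 1 − e^{−5} as n grows but never reaches it; for n = 100 it is still slightly above 99.3%.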

By raising taxes further, we can make such an attack even more costly. In this sense the tax rate, together with the minimums for Solver deposits, bounds the mandatory “honesty” of the network. If bogus solutions never occurred, then there would be no need for taxes. On the other hand, higher taxes make it more expensive to cheat. There is a tradeoff between overhead tax expenses for Task Givers, minimum deposits for Solvers, and security of computations.

High-stake tasks. For some high-stakes applications it is possible to fall back entirely on the dispute resolution layer of Section 3. In a high-stakes situation, it is reasonable to expect that other parties who have a significant financial stake in the outcome of a given task will make an effort to verify the solution regardless of expected jackpot payouts. This motivation acts as a secondary deterrent against the “Scaring off Verifiers” attack above, as does the fact that the Verifier receives a fraction of the Solver’s deposit in case of positive error detection (TrueBit burns part of the Solver’s deposit in order to avoid collusion between Solvers and Verifiers).

Collusion pools

In this section, we analyze the potential effects of pooling computational, informational, and financial resources among Verifiers and Solvers. Participants in TrueBit may voluntarily form a Cartel which may in turn use external smart contracts to enforce cooperation among its (mutually distrusting) members. However, Solver and Verifier deposits in TrueBit alone prevent many potentially harmful financial collusions designed to extract jackpot funds without exerting computational effort. Moreover, carefully chosen jackpot payout amounts, together with Solver and Verifier incentives and private commitment channels, prevent financial gain from unintended signaling of private random bits or challenges.

Rich but powerless. First, consider a Cartel of Solvers with limited CPU bandwidth but deep pockets which volunteers to solve each and every task that comes along. The Cartel makes no attempt to provide correct solutions to any of these tasks, but instead intends to absorb lost Solver deposits until a forced error comes along, at which point it splits the jackpot among its members. By construction (see Section 4.3), Solver deposits exceed the expected jackpot payout per task. Therefore, in the long run, such a Cartel strategy loses money. If members of the Cartel instead decide to start producing correct solutions, this does not harm security because that is what they are supposed to do anyway. We remark that, also by construction (Section 4.4), a Solver cannot simply decide to accept only tasks which have forced errors because the Solver does not know whether or not a forced error is in effect until after he has committed his solution to the blockchain.

Similarly, a Cartel which challenges every single task in the hopes of eventually obtaining a jackpot sustains losses in the long run due to lost Verifier deposits.

A flood of Challengers. Timely information about forced errors has economic value. While Solvers can earn jackpots by challenging their own forced errors, they risk sharing jackpot payouts by divulging information about such errors. It follows that Solvers have incentive to keep their knowledge of forced errors private and their fake solutions convincing. Information about active challenges also has economic value because a challenge posted to the blockchain could be a tip-off about a forced error. For this reason, Verifiers have incentive to announce “fake” challenges rather than remaining silent on tasks in which they don’t detect errors (see Step 5 in Section 4.6). “Fake” challenges serve as noise to mask real challenges and protect Verifiers’ jackpot revenue from potential copycats who might challenge just because they see someone else doing it.

If the jackpot for a given task were a fixed amount and equally divided among challenging participants, then one could potentially flood the set of participants with aliases to obtain more than one’s share of the jackpot. A Cartel in which members inform each other about forced errors could therefore have the economic means to flood the network with challenges and monopolize jackpot payouts. This would dilute jackpots for legitimate Verifiers, which would in turn degrade Verifier incentives for participation. While such a Cartel might successfully detect forced errors, it might also cause tasks with unforced errors to go unverified.

In order to dissuade a Cartel from flooding the network with harmful challenges, we must reduce the total jackpot payout based on the number of challenges that occur. The total jackpot paid out on a forced error with k challengers must not exceed


J/2^{k−1}, (5.1)

where J denotes the total payout when there is exactly one challenger.

The reason is as follows. We want a Challenger’s net reward to decrease if she superfluously clones herself via the Cartel. Assume that there are n legitimate challenges plus a Cartel which initially contributes k challenges to the pool and is considering adding another alias. We want the Cartel’s total share of the jackpot with n + k challenges to exceed what it would be with n + k + 1 challenges, regardless of k and n. Let J_i denote the total reward paid out when there are exactly i challenges. Then we want

k/(n + k) · J_{n+k} > (k + 1)/(n + k + 1) · J_{n+k+1},

or equivalently

J_{n+k+1} < J_{n+k} · (n + k + 1)/(n + k) · k/(k + 1),

where the first factor exceeds 1 for all n ≥ 0 and the second factor is at least 1/2 for all k ≥ 1. Thus it suffices to set J_{n+k+1} = J_{n+k}/2, or by induction J_k = J_1/2^{k−1}. In fact we cannot do better than this because the case k = 1 asymptotically forces a matching bound.
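The halving schedule just derived can be sanity-checked numerically. In the sketch below (the jackpot figure is illustrative), a Cartel holding k of n + k equal challenge shares always lowers its total take by adding one more alias:

```python
def total_jackpot(i, J1=1_000_000):
    """Total reward J_i paid out when exactly i challenges occur,
    following the halving schedule J_i = J_1 / 2**(i - 1)."""
    return J1 / 2 ** (i - 1)

def cartel_take(n, k):
    """Cartel's total share when it holds k of n + k equal challenge shares."""
    return (k / (n + k)) * total_jackpot(n + k)

# Adding one more alias strictly decreases the Cartel's take for every n, k.
for n in range(1, 20):
    for k in range(1, 20):
        assert cartel_take(n, k) > cartel_take(n, k + 1)
```

The inequality holds because the Cartel's share factor grows by strictly less than a factor of 2 when one alias is added, while the jackpot halves.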

The upper bound in (5.1) does not take into consideration the Verifier’s cost of playing the verification game. In the case where the Solver’s solution has been found to have an error through one of the challenges, there is no need to repeat the verification game for the other challengers. The Verifier who actually plays the verification game with the Solver receives a bonus from the Solver’s deposit to compensate her for the CPU cycles spent during the challenge.

On low-hanging fruit

We analyze the security implications of Verifiers and Solvers who opportunistically disappear for hard tasks and reappear for easy ones, or who take advantage of solution discrepancies.

Easy winners. If the jackpot for all tasks were equal, rational Verifiers might choose to verify only simple tasks and ignore complex ones. For this reason, the jackpot for each task scales proportionally with the task’s complexity (Step 5 in Section 4.6). Scaling ensures that a Verifier’s expected jackpot payout per CPU cycle remains constant across all tasks, and it equally incentivizes Verifiers to inspect simple and complex tasks. Solvers always appear to perform tasks since the minimum task reward suffices to compensate them for their work (see Section 4.6 and assumption (ii) in Section 2.2).

Multiple solvers. For security reasons, TrueBit explicitly does not allow Task Givers to hire redundant Solvers for a single task (Main Algorithm, Step 2, Section 4.6). Suppose that two Solvers provided solutions to a single task and exactly one of them receives a forced error. Any Observer who notices a difference between the two solutions on the blockchain could, without verifying anything, challenge both solutions. By playing two verification games and sacrificing one deposit, such an Observer could potentially win a jackpot at negligible cost, thereby degrading the overall incentives for verification.

Forced error in disguise. What happens if a Solver has a forced error but gives a correct solution anyway? The Solver could then challenge his own solution while other Verifiers ignore it, resulting in both the Solver receiving a bigger share of the jackpot and damage to the system’s verification mechanism. Therefore, when the Solver reveals that a forced error was in effect, the protocol also forces him to reveal his committed “correct” solution, which must be distinct from the “incorrect” solution that he showed at first. If the second solution revealed by the Solver is correct and distinct from his first solution, then by uniqueness of deterministic processes, the first solution must have been incorrect (as desired). Verifiers have an opportunity to challenge this second solution. If an error is detected in it via a verification game, the Verifier(s) split the Solver’s deposit rather than the jackpot payout according to Step 4(b)ii in Section 4.6. Since the Verifier(s) presumably already performed the task themselves when checking the first solution, no additional CPU cycles are required for this second check. As the Solver loses both a jackpot payout and a deposit by following this course, by assumption (ii) in Section 2.2, the situation described in this paragraph should never arise.

A cash equivalence problem

Consider the following scenario. A Task Giver wishes to get a bogus solution onto the blockchain. She offers a minimal reward for a difficult task so as to ensure that no legitimate Solvers or Verifiers volunteer to inspect it. Acting as a Solver, she then provides a bogus solution, and the system accepts her solution because no one bothers to check it. It follows that for security purposes, the system must require that Task Givers compensate fairly based on the difficulty of the task. But how can TrueBit estimate the true cost of executing a given task in its native currency?

While one could potentially establish a long-term lower bound on the cost of a CPU cycle relative to a stable currency like the US dollar or Euro, calculating a lower bound relative to the value of a cryptocurrency token is another matter. Cryptocurrencies are extremely volatile. Moreover, TrueBit lives on the blockchain and does not have access to a newspaper with current exchange rates.

In the first iteration of TrueBit, we will manually update the internal cash equivalent of a CPU cycle based on a live feed (e.g. [4,23,26]). Ultimately, however, we would like to input these prices in a decentralized way without relying on a third party. Later versions of the protocol may make use of Augur [2], a decentralized prediction market which ports outside information sources onto the Ethereum blockchain. As of this writing, Augur is in beta testing. Alternatively, we may build an independent blockchain for reporting prices whose “transactions” consist of exchange rate updates.


Formally, TrueBit is a smart contract in Ethereum which uses Ethereum’s existing smart contract framework to bootstrap the creation of computationally-intensive TrueBit contracts. Tasks in TrueBit take the form of C, C++, or Rust code, but the user must pass this code as input to the Google Lanai architecture [15] (discussed below) prior to submission to TrueBit. This latter step guarantees consistency of simulated architecture and allows Judges to adjudicate fairly.

Google’s Lanai interpreter. The theoretical framework for the verification game requires a fixed computer architecture to be used for all verification tasks. In [40], the authors used the regular Intel X86 architecture for this purpose, but we believe that this architecture is far too complicated to be used in TrueBit. All participants of the verification game, including the Judges, have to simulate the behavior of the entire architecture. Even a slight difference in these implementations could result in catastrophic loss of funds for one of the parties. Moreover, a simpler architecture costs less to simulate on the blockchain, and because of its simplicity and the fact that there now exists an efficient and trusted LLVM compiler for it, we have chosen to use Google’s Lanai architecture [15]. The full Google Lanai interpreter will be available as a smart contract in Ethereum so that it can be used by Judges. In all TrueBit contracts, the Lanai bytecode will be authoritative.

TrueBit’s on-chain interpreter runs in Solidity. For efficiency reasons, tasks will not be run using the interpreters except in case of dispute. In general, users will run tasks written in native programming languages and running on regular hardware. In the majority of cases where tasks do not require the Lanai interpreter, Solvers and Verifiers can privately optimize implementation of task executions and potentially gain a market advantage over other participants.


TrueBit is more than just an outsourced computation system. It is designed for use in trustless smart contracts. We present some examples of possible use cases.

Practical decentralized pooled mining

Mining rewards are extremely competitive. A typical desktop computer might only mine on average one Bitcoin block every thousand years. To reduce income variance, miners often join mining pools which share both computing resources and mining rewards. Each pool has an operator who distributes computing tasks and rewards to pool members. This centralization degrades the security of the system by giving the operator undue influence to censor transactions [51]. In extreme cases where an operator controls more than half of the network’s hash rate, as has been the case with DwarfPool [9] in Ethereum and [30] in Bitcoin, and could happen again with Bitmain’s proposed gigantic mining center [3], the operator can even withdraw cleared transactions and double-spend money by way of a 51% attack [48].

SmartPool [27] introduces mining pools whose operators are Ethereum smart contracts. As decentralized pool operators, smart contracts have many virtues. In addition to counteracting the censorship issues described in the previous paragraph, they can operate at low cost relative to centralized pools and do not rely on a social contract to ensure fairness. Unlike other decentralized mining pools, which either have higher payout variance [24] that negates incentives for joining the pool or require a change in the proof-of-work protocol [54], SmartPool offers low variance payouts and retrofits existing cryptocurrencies, and it can handle a huge number of participants with a wide range of computing power.

SmartPool’s proposed Ethereum mining pool minimizes verification work by precomputing the 1 GB data sets needed to check Ethereum’s proof-of-work [10,51]. While this shortcut may help in checking Ethereum’s proof-of-work, not all cryptocurrencies have this lucky feature. Fortunately TrueBit contracts can check any proof-of-work, which means that with TrueBit, we can build a smart contract-based mining pool for any Nakamoto consensus-based cryptocurrency. At the time of this writing, for example, the effort required to check a Zcash proof-of-work [33] appears to exceed Ethereum’s gasLimit capacity by a factor of 50. TrueBit is an option for bringing this task within reach of Ethereum smart contracts. Project Alchemy [17], which aims to bring smart contract functionality to Zcash in the spirit of the Dogecoin–Ethereum bridge below, may also benefit from TrueBit’s ability to check Zcash’s proof-of-work.

Dogecoin–Ethereum bridge

We can use TrueBit to build a two-way peg between Dogecoin [6] and Ethereum, enabling Dogecoin users to move dogecoins between Dogecoin’s blockchain and Ethereum’s blockchain without transmitting currency through a third-party exchange and effectively adding smart contract functionality to Dogecoin. The Dogecoin community maintains an active interest in such a bridge [8], and currently offers a prize of more than 6000 ETH for its construction [5,7].

TrueBit contracts can check Dogecoin’s Scrypt-based proof-of-work, whereas traditional Ethereum smart contracts cannot. If Dogecoin were to enhance its scripting language with an operation indicating currency transfer to Ethereum addresses, a TrueBit contract could then confirm such transfers. The newly created dogecoin token could then be passed around Ethereum, and a final signed transaction in Ethereum could send the dogecoin back onto Dogecoin’s blockchain, assuming that Dogecoin miners are willing to treat the Ethereum blockchain as authoritative for such transfers.
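As a sketch of what such a proof-of-work check involves, the following uses Python's hashlib.scrypt with Litecoin-family parameters (N = 1024, r = 1, p = 1, 32-byte output, little-endian comparison). These parameters and the placeholder header are assumptions for illustration, not a Dogecoin consensus implementation:

```python
import hashlib

def check_scrypt_pow(header: bytes, target: int) -> bool:
    """Illustrative Scrypt proof-of-work check. The block header serves as
    both password and salt, with Litecoin-family parameters N=1024, r=1,
    p=1 (an assumption here); the 32-byte digest, read as a little-endian
    integer, must not exceed the difficulty target."""
    digest = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)
    return int.from_bytes(digest, "little") <= target

# An easy illustrative target accepts essentially any header;
# a target of zero rejects it.
header = b"\x00" * 80  # placeholder for an 80-byte block header
assert check_scrypt_pow(header, 2**256 - 1)
assert not check_scrypt_pow(header, 0)
```

The point of the example is the cost: even this single Scrypt evaluation is expensive to perform on-chain, which is why a TrueBit task, rather than an ordinary contract, would carry it out.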

Scalable transaction throughput

Building a system that can securely meet even the modest transaction volume demands of current Bitcoin users remains a formidable challenge [41]. Proposed Byzantine “sharding” [36,42,47,49,53,55,57] and miner-based “serializing” [43,60] protocols exist which aim to distribute verification work, but here we take a simple and fundamentally different approach to decouple the two tasks which miners perform, namely

  1. selecting which transactions to include in the blockchain, and
  2. verifying that blockchain transactions are valid.

Using TrueBit, one can construct a verification protocol whose incentives guarantee that task 2 is correctly performed by off-chain Solvers and Verifiers (with some help from Judges and Referees), while miners continue to perform task 1. In this way, complex transactions can securely reach the blockchain without overburdening miners.

Towards a big data system

In order to perform as a truly scalable cloud computer, TrueBit must have access to a scalable data storage system. Ethereum’s blockchain alone does not suffice, as storing even moderate amounts of data directly on Ethereum’s blockchain is prohibitively expensive. TrueBit can securely access and use portions of massive data sets so long as the data is stored somewhere publicly and permanently, for example in Swarm [28] or on another blockchain. Parties who wish to rely on such data in a TrueBit contract must be sure that Verifiers have access to the full data set.

To use TrueBit on external data, one need only store a Merkle root of the massive data set on the blockchain and add non-deterministic steps in the verification game in which the Solver can “guess” the original data set represented by the Merkle root. While Solvers and Verifiers must have access to the full data, Judges and Referees do not. Indeed, if we modify the verification game so as to permit tasks for nondeterministic Turing machines, then the Solver can nondeterministically guess the certificate data as a step in the TrueBit contract. Only in cases of disputes would the Solver have to reveal external certificate data to the Judges via the blockchain. In some applications, the Solver might even be able to reveal to the Judges a privacy-preserving zkSNARK rather than the data itself. zkSNARKs have the potential to enable privacy for many different kinds of systems on Ethereum [59].
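As an illustration of the mechanism, here is a minimal Merkle commitment sketch (SHA-256 and a power-of-two leaf count are simplifying assumptions of ours): only the root needs to live on-chain, while a membership proof reveals just logarithmically many sibling hashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a binary Merkle root over a power-of-two list of leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_leaf(root, leaf, index, proof):
    """Check a claimed data block against the committed Merkle root."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

In TrueBit terms, the Task Giver publishes merkle_root on-chain; a Solver who must reveal a data block to the Judges supplies the block together with its merkle_proof, and anyone can run verify_leaf.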

While in theory TrueBit’s scalable protocol can process arbitrarily complex tasks, in practice the verification game is inefficient, and therefore the security of TrueBit computations degrades as tasks reach the level of big data. For big data applications, TrueBit may not be able to rely on a one-size-fits-all verification game. Therefore we anticipate optimizing the verification game for certain classes of tasks. For example, Section 3.2 gives an example of an efficient, specialized verification game for matrix multiplication. In future versions of TrueBit, Task Givers might broadcast not only tasks but also an indication of the corresponding verification game to be played in case of disputes.

Remark. We conclude with a caveat: TrueBit may expose latent security vulnerabilities in the underlying Ethereum network as a result of new kinds of interactions between smart contracts and miners. By allowing smart contracts to check proofs-of-work, for example, TrueBit may facilitate 38.2% attacks [62].

Acknowledgments. We thank Vitalik Buterin and Vlad Zamfir for suggesting the use of forced errors in the TrueBit protocol and Eli Bendersky for introducing us to Google Lanai. We also thank Loi Luu and Julia Koch for useful discussions.


In the months following the initial release of this whitepaper, new work and feedback informed TrueBit’s development roadmap and helped refine the protocol itself. We discuss a few recent developments.

Security patches

A TrueBit adversary has one of two goals:

  1. get bogus computations onto the blockchain, or
  2. extract jackpot funds without performing verification.

We investigate three attacks of the first type followed by two of the second.

Premature disclosure of random bits (Zack Lawrence). A Solver can dissuade Verifier participation by publicly broadcasting his private random bits prior to the designated reveal time. Verifiers then know immediately whether or not a forced error is in effect. Since Verifiers expect to gain little from checking solutions without forced errors, they may decide not to verify, thereby offering an opportunity to get bogus solutions onto the blockchain.

1protocol’s [1] random number generator protocol, Arbit, solves this problem by instituting penalties for Solvers who prematurely reveal private random bits and rewarding users who report them. When a user correctly reports a premature reveal to TrueBit, the following occurs.

  1. The Solver’s solutions are discarded, and a new Solver lottery takes place. This re-incentivizes Verifiers to participate in the task while voiding the Solver’s incentive to reveal private information.
  2. Half of the Solver’s deposit gets burned. This makes the above attack expensive for the Solver.
  3. The other half of the Solver’s deposit goes to the party who reported the Solver’s random bits. This incentivizes Observers to report the Solver’s prematurely revealed random bits.

Incorrect secondary solution (Sina Habibian and Harley Swick [22]). Suppose that a forced error is in effect and that the Solver submits two incorrect solutions. When the Solver reveals his “correct” secondary solution in Step 4(b)ii of the protocol (Section 4.6), Verifiers ignore it because there is no chance of a jackpot payout. Indeed, the only “reward” for correctly challenging this secondary solution is to play a verification game. Hence one of the Solver’s bogus solutions ends up on the blockchain.

We eliminate the incorrect secondary solution vulnerability as follows. Denote the Solver’s two solutions by A and B. At the beginning of Step 4, rather than signaling for a challenge with the hash of an even integer, the Verifier hashes an integer whose value modulo 3 the protocol interprets as follows:

  0 mod 3: challenge solution A,
  1 mod 3: challenge solution B,
  2 mod 3: challenge both A and B.

The Verifiers indicate their choice without knowing which of the two solutions the Solver puts forth as an answer. The protocol hides this information from Verifiers via the following process. The Solver commits to either solution A or solution B by hashing either A or B paired with his private random bits, where the Solver’s private random bits serve as “noise” which prevents Verifiers from guessing which option the Solver chose. The Solver has incentive not to share his private random bits due to the “Premature disclosure of random bits” patch above as well as the fact that the Solver risks reducing his jackpot share by exposing this private information. Finally, once the timeout for challenges has passed, the Solver reveals his random bits in the clear, thereby indicating his commitment to either solution A or solution B. Challenges and forced errors then proceed as usual. In case the protocol forces the Solver to reveal his second “correct” solution, Verifiers who earlier committed to a challenge against this second solution are obligated to play a verification game. In this way, Verifiers catch errant secondary solutions just as they catch errant primary ones.
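The commit-reveal mechanics above can be sketched as follows, with SHA-256 standing in for the protocol's hash function and all names being our own illustration:

```python
import hashlib

def commit(solution: bytes, random_bits: bytes) -> bytes:
    """Solver's commitment: the private random bits act as noise so
    Verifiers cannot guess which of the two solutions was chosen."""
    return hashlib.sha256(solution + random_bits).digest()

def open_commitment(commitment: bytes, solution: bytes, random_bits: bytes) -> bool:
    """After the challenge timeout, the Solver reveals his random bits and
    chosen solution; anyone can recompute and check the commitment."""
    return commit(solution, random_bits) == commitment

def challenge_choice(committed_int: int) -> str:
    """Interpret a Verifier's committed integer modulo 3, per the patch."""
    return {0: "challenge A", 1: "challenge B", 2: "challenge both"}[committed_int % 3]
```

Because the commitment binds the Solver to one solution while revealing nothing about which, Verifiers must decide their challenge choice blind, which is exactly what the patch requires.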

In case a forced error is not in effect, broadcasting a pair of incorrect solutions poses a cost to the Solver in the form of a lost deposit. Indeed, Verifiers have proper incentives to check the Solver’s primary answer. Since forced errors occur rarely and unpredictably, the Solver expects to sacrifice several deposits in order to mount an “incorrect secondary solution” attack. This loss offsets the Solver’s eventual jackpot gain from challenging his own forced error solution. We implicitly assume that the chance to win a jackpot sufficiently motivates a Verifier to challenge whenever a Solver submits a pair of incorrect solutions; any Verifier who challenges both submitted solutions must play a verification game.

The fix above has a potentially useful side effect of publicly indicating how many Verifiers are monitoring a given task. Indeed, a Verifier broadcasts one of the three commitment forms above if and only if she is paying attention. The option to signal “no challenge” is no longer needed for camouflage because broadcasting challenges no longer indicates presence of a forced error. Moreover, if the Solver were to submit two correct solutions, the smart contract could immediately recognize them as identical and penalize the Solver accordingly.

An adversary could potentially exploit the monitoring feature in the previous paragraph by broadcasting multiple challenge commitments from Sybil identities, thereby reducing the total payout in case of a forced error and discouraging other rational Verifiers from participating. For this reason, the protocol must prevent Verifiers from revealing which task they are challenging until the final phase of the protocol. Since each Verifier performs the given computational task without distraction from others’ commitments, an adversary cannot deter Verifier participation via Sybil attack.

Program abort (Yaron Velner). A task must explicitly specify an upper bound on the number of steps for which the verification game can run, and the Solver’s solution should return an “error” if and only if the task computation exceeds this bound. Indeed, the computation steps for a given task must form a deterministic sequence in order for Judges to determine their correctness. Ideally, one would like to run tasks directly in a native language like C, C++, or Rust; however, this approach requires accurate metering of the number of computation steps. We can reduce the metering overhead by processing steps with a compiler rather than an interpreter.
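The metering requirement can be sketched abstractly. In the toy model below (the interfaces are our own, not TrueBit's), a runner executes a deterministic step function under an explicit bound and returns an "error" answer exactly when the bound is exceeded:

```python
def metered_run(program, inputs, max_steps):
    """Execute a deterministic step function at most max_steps times,
    returning ("error", steps) if and only if the bound is exceeded, so
    the step sequence a Judge replays is fully determined."""
    state, steps = program.init(inputs), 0
    while not program.halted(state):
        if steps >= max_steps:
            return ("error", steps)  # bound exceeded: the only valid answer
        state = program.step(state)
        steps += 1
    return ("ok", program.output(state))

class CountDown:
    """Toy task: count n down to zero, one unit of work per step."""
    def init(self, n): return n
    def halted(self, state): return state == 0
    def step(self, state): return state - 1
    def output(self, state): return "done"
```

A run that halts within the bound yields its output; the same task with a smaller bound must yield "error", and every honest party replaying the steps reaches the same verdict.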

Jackpot balloon attack (Clément Lesaege). In the “flood of Challengers” attack (Section 5.3), a Cartel could artificially inflate the jackpot repository by repeatedly challenging a single forced error solution and thereby reduce the jackpot payout. Eventually, the Cartel could offset the cost of this attack by cashing in on an extra large jackpot. This action could recruit membership for the Cartel at low cost by removing incentives for Verifiers who do not belong to the Cartel.

We mitigate the benefit of this attack by placing a fixed, absolute bound on the jackpot payout, regardless of the actual jackpot repository balance. Extra revenue accumulated in the jackpot repository only becomes available for payout after a protocol upgrade. Note that the maximum jackpot amount does not necessarily pay out at each forced error; the actual payout depends on the difficulty of the task.
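The capped payout rule might look like the following sketch. The constants and the linear difficulty scaling are illustrative assumptions, not protocol parameters:

```python
def jackpot_payout(repository_balance, max_jackpot, task_difficulty, max_difficulty):
    """Payout is capped at a fixed bound regardless of repository balance,
    then scaled by task difficulty (scaling rule is illustrative)."""
    capped = min(repository_balance, max_jackpot)
    return capped * min(task_difficulty / max_difficulty, 1.0)

# Even after a balloon attack inflates the repository to 10M, the payout
# for a forced error stays bounded by the fixed cap.
assert jackpot_payout(10_000_000, 50_000, 800, 1000) == 40_000.0
```

Because the cap is absolute, inflating the repository buys the Cartel nothing beyond the fixed maximum.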

Incentivizing block withholding (Clément Lesaege). The attacker, who is a Solver, deploys a smart contract which pays money to miners who withhold (or intentionally uncle) blocks whose random seed fails to yield a forced error. See [64] for possible implementations. This attack is profitable for tasks in which the expected jackpot payout times the probability of a forced error exceeds a block reward. Through repetitions, this attack could drain the jackpot repository.

Arbit’s security mechanism, discussed in “Premature disclosure of ran- dom bits” above, defends against this attack. In order to execute the present “incentivizing block withholding” attack, the Solver either has to reveal his bits publicly or has to provide them to miners through private channels. If the Solver’s penalty for a premature reveal exceeds the expected jackpot payout times the probability of a forced error, then this attack results in an expected net loss.
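The profitability condition in the last two paragraphs reduces to a simple expected-value comparison; a Python sketch with illustrative numbers:

```python
def attack_expected_profit(jackpot, p_forced_error, block_reward):
    """Expected gain from bribing a miner to withhold one block, per the
    model in the text: jackpot * P(forced error) minus the block reward."""
    return jackpot * p_forced_error - block_reward

def reveal_deterred(penalty, jackpot, p_forced_error):
    """The premature-reveal penalty deters the attack once it exceeds the
    expected jackpot payout."""
    return penalty > jackpot * p_forced_error

# Illustrative numbers: the raw attack is profitable (positive expectation),
# but a sufficiently large reveal penalty turns it into an expected net loss.
assert attack_expected_profit(1000.0, 0.01, 5.0) == 5.0
assert reveal_deterred(penalty=20.0, jackpot=1000.0, p_forced_error=0.01)
```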

The TrueBit Virtual Machine

Section 3.3 describes a verification game which operates with respect to a Turing machine. In practice, however, no compiler exists which can transform C++ code into something as simple as Turing machine language. In Section 6, we proposed Google Lanai as a practical compromise in the absence of Turing machine architecture. Due to Lanai’s complexity, the fact that Google controls its codebase, and that progress on its development appears to have slowed, we have since migrated development away from Google Lanai.

In addition to executing exactly the same computation steps regardless of hardware configuration, the compiler architecture underlying the verification game, or TrueBit Virtual Machine (TVM), must satisfy the following simplicity properties.

1. A single computation step on the TVM runs comfortably within Ethereum’s gas limit, and

2. the space required to describe a TVM state change fits inside a single Ethereum transaction.

WebAssembly architecture [31] comes sufficiently close to matching these properties so as to make TVM execution practical today. WebAssembly has become increasingly ready-to-use due to contributions and testing by Apple, Google, Microsoft, and Mozilla [45]. Several cryptocurrency projects have begun to develop on WebAssembly, including Dfinity, eWASM, and Parity, due to the platform’s machine independence and relative simplicity.

The TVM consists of two parts:

  1. an off-chain interpreter which enumerates a list of states for a given computation, and
  2. an on-chain stepper which, given a state, can compute the next state.

Solvers and Challengers use the interpreter to create a Merklized list of states for a computation. Once the Solver and Challenger have determined the first step at which they disagree, the Judges use the stepper to run this step and rule on the outcome of the verification game.
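The interplay between the off-chain interpreter and the on-chain stepper can be illustrated with a toy bisection search. The sketch below is a simplification in Python: the real TVM operates over Merklized WebAssembly states, whereas here a state is a bare integer and the "digest" is a plain hash:

```python
import hashlib

def run_states(step, s0, n):
    """Off-chain interpreter: enumerate n+1 machine states."""
    states = [s0]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

def digest(s):
    """Stand-in for a Merklized state commitment."""
    return hashlib.sha256(repr(s).encode()).hexdigest()

def first_disagreement(solver_states, challenger_states):
    """Binary search for the first step at which the parties' state
    digests diverge (both are assumed to agree on the initial state)."""
    lo, hi = 0, len(solver_states) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if digest(solver_states[mid]) == digest(challenger_states[mid]):
            lo = mid
        else:
            hi = mid
    return hi

def judge(step, challenger_states, i):
    """On-chain stepper: rerun only the disputed step i and rule on it."""
    return digest(step(challenger_states[i - 1])) == digest(challenger_states[i])

step = lambda s: s + 1                    # toy deterministic machine
honest = run_states(step, 0, 8)           # states 0, 1, ..., 8
cheat = honest[:5] + [99, 100, 101, 102]  # solver deviates at step 5
i = first_disagreement(cheat, honest)
assert i == 5
assert judge(step, honest, i)             # judges side with the challenger
```

Only the single disputed step is ever executed on chain, which is what keeps the verification game cheap regardless of the task's total length.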

Since native WebAssembly does not entirely satisfy the simplicity properties above, the interpreter must either further compile input code into a simpler architecture, or it must divide each WebAssembly instruction into substeps. In order to reduce the chances of inducing compiler error, the TVM follows the latter strategy. The TVM’s off-chain interpreter parses WebAssembly instructions into an OCaml execution which in turn creates registers describing WebAssembly suboperations. Each OCaml-based substep only accesses one dynamic data structure at a time.

Additional applications

Finally, we mention a few more illustrative applications.

Video broadcasting. Livepeer [18] offers a new platform for decentralized, live streaming video. Users can broadcast, watch, or get paid for performing the computationally intensive process of transcoding video into different codecs and formats. TrueBit ensures that transcoders perform this work correctly, while Swarm [28] guarantees that video data remains available to TrueBit during the necessary verification period.

Autonomous machine learning. McConaghy’s ArtDAO [52] generates art, sells it, and then uses its revenue to improve its own code. A TrueBit-based ArtDAO would allow a financed piece of code on the blockchain to access computational resources in such a way that no one can “pull its plug.” We may eventually see other blockchain-based machine learning applications, like computer vision, as well.

Data marketplace. Hedge fund Numerai [19] crowdsources modeling problems to data scientists and then executes trades based upon their work. Numerai rewards successful models; however, contributing data scientists must trust Numerai to both test their work and compensate them fairly. TrueBit enables autonomous data markets. Open Mined [21], paired with TrueBit, opens the possibility of trustless remuneration based on streaming flows of models and data. The Ocean Protocol [20], which facilitates contribution and sharing of data, also requires a verification mechanism.

Staking and random numbers. 1protocol [1] allows users who have either computing resources or capital, but not necessarily both, to participate as TrueBit Solvers and Verifiers by decoupling security deposits from work done. In addition, 1protocol’s Arbit protocol uses interactive verification to generate random numbers in a decentralized way.

Other applications. Please check the TrueBit website for other current ideas in progress!


[1] 1protocol.
[2] Augur.
[16] I thikn the attacker is this miner—today he made over $50k. the_attacker_is_this_miner_today_he_made/.
[17] Introducing Project Alchemy. project-alchemy.html.
[18] Livepeer.
[19] Numerai.
[20] Ocean Protocol.
[21] Open Mined.
[22] Open problems. Developer-Resources/wiki/Open-Problems.
[23] Oraclize.
[24] P2Pool.
[25] RANDAO.
[26] Reality Keys.
[27] SmartPool.
[28] Swarm.
[29] The DAO.
[30] Warning: is nearing 51% – leave the pool. warning-ghash-io-nearing-51-leave-pool/.
[31] WebAssembly.
[32] White paper. White-Paper.
[33] Why Equihash?
[34] Zcash.
[35] Some miners generating invalid blocks. alert/2015-07-04-spv-mining, July 2015.
[36] Ittai Abraham, Dahlia Malkhi, Kartik Nayak, Ling Ren, and Alexander Spiegelman. Solidus: An incentive-compatible cryptocurrency based on permissionless Byzantine consensus. 02916, 2016.
[37] Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer, and Madars Virza. SNARKs for C: Verifying program executions succinctly and in zero knowledge. In Ran Canetti and Juan A. Garay, editors, Advances in Cryptology – CRYPTO 2013: 33rd Annual Cryptology Conference, Santa Barbara, CA, USA, August 18–22, 2013. Proceedings, Part II, pages 90–108, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
[38] Nir Bitansky, Ran Canetti, Omer Paneth, and Alon Rosen. Indistinguishability obfuscation vs. auxiliary-input extractable functions: One must fall.
[39] Ran Canetti, Ben Riva, and Guy N. Rothblum. Practical delegation of computation using multiple servers. In Proceedings of the 18th ACM Conference on Computer and Communications Security, CCS ’11, pages 445–454, New York, NY, USA, 2011. ACM.
[40] Ran Canetti, Ben Riva, and Guy N. Rothblum. Refereed delegation of computation. Information and Computation, 226:16–36, 2013. Special Issue: Information Security as a Resource.
[41] Kyle Croman, Christian Decker, Ittay Eyal, Adem Efe Gencer, Ari Juels, Ahmed Kosba, Andrew Miller, Prateek Saxena, Elaine Shi, Emin Gün Sirer, Dawn Song, and Roger Wattenhofer. On scaling decentralized blockchains (a position paper). In Financial Cryptography and Data Security 2016 BITCOIN Workshop, volume 9604 of Lecture Notes in Computer Science, pages 106–125. Springer Berlin Heidelberg, February 2016.
[42] Christian Decker, Jochen Seidel, and Roger Wattenhofer. Bitcoin meets strong consistency. In Proceedings of the 17th International Conference on Distributed Computing and Networking, ICDCN ’16, pages 13:1–13:10, New York, NY, USA, 2016. ACM.
[43] Ittay Eyal, Adem Efe Gencer, Emin Gün Sirer, and Robbert Van Renesse. Bitcoin-NG: A scalable blockchain protocol. In 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16), pages 45–59, Santa Clara, CA, March 2016. USENIX Association.
[44] Tim Goddard. AdversariallyVerifiableMachine. https://verification_of_c_programs/d7ey41n/.
[45] Andreas Haas, Andreas Rossberg, Derek L. Schuff, Ben L. Titzer, Michael Holman, Dan Gohman, Luke Wagner, Alon Zakai, and JF Bastien. Bringing the web up to speed with WebAssembly. In Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2017, pages 185–200, New York, NY, USA, 2017. ACM.
[46] Sanjay Jain, Prateek Saxena, Frank Stephan, and Jason Teutsch. How to verify computation with a rational network. abs/1606.05917, June 2016.
[47] Eleftherios Kokoris Kogias, Philipp Jovanovic, Nicolas Gailly, Ismail Khoffi, Linus Gasser, and Bryan Ford. Enhancing Bitcoin security and performance with strong consistency via collective signing. In 25th USENIX Security Symposium (USENIX Security 16), pages 279–296, Austin, TX, 2016. USENIX Association.
[48] Joshua A. Kroll, Ian C. Davey, and Edward W. Felten. The economics of Bitcoin mining, or Bitcoin in the presence of adversaries. KrollDaveyFeltenWEIS2013.pdf, June 2013.
[49] Loi Luu, Viswesh Narayanan, Chaodong Zheng, Kunal Baweja, Seth Gilbert, and Prateek Saxena. A secure sharding protocol for open blockchains. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS ’16, pages 17–30, New York, NY, USA, 2016. ACM.
[50] Loi Luu, Jason Teutsch, Raghav Kulkarni, and Prateek Saxena. Demystifying incentives in the consensus computer. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS 2015), pages 706–719, New York, NY, USA, 2015. ACM.
[51] Loi Luu, Yaron Velner, Jason Teutsch, and Prateek Saxena. SmartPool: Practical decentralized pooled mining. docs/smartpool.pdf.
[52] Trent McConaghy. Wild, wooly AI DAOs.
[53] Silvio Micali. ALGORAND: the efficient and democratic ledger. 2016.
[54] Andrew Miller, Ahmed Kosba, Jonathan Katz, and Elaine Shi. Nonoutsourceable scratch-off puzzles to discourage Bitcoin mining coalitions. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS ’15, pages 680–691, New York, NY, USA, 2015. ACM.
[55] Andrew Miller, Yu Xia, Kyle Croman, Elaine Shi, and Dawn Song. The honey badger of BFT protocols. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS ’16, pages 31–42, New York, NY, USA, 2016. ACM.
[56] Satoshi Nakamoto. Bitcoin P2P e-cash paper. November 2008.
[57] Rafael Pass and Elaine Shi. Hybrid consensus: Efficient consensus in the permissionless model. 2016.
[58] Christian Reitwiessner. From smart contracts to courts with not so smart judges. smart-contracts-courts-not-smart-judges/.
[59] Christian Reitwiessner. zkSNARKs in a nutshell. December 2016.
[60] Yonatan Sompolinsky, Yoad Lewenberg, and Aviv Zohar. SPECTRE: A fast and scalable cryptocurrency protocol., 2016.
[61] Nick Szabo. The idea of smart contracts. smart_contracts_idea.html, 1997.
[62] Jason Teutsch, Sanjay Jain, and Prateek Saxena. When cryptocurrencies mine their own business. In Financial Cryptography and Data Security: 20th International Conference (FC 2016), Christ Church, Barbados, pages 499–514. Springer Berlin / Heidelberg, 2017.
[63] Jelle van den Hooff, M. Frans Kaashoek, and Nickolai Zeldovich. VerSum: Verifiable computations over large public logs. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS ’14, pages 1304–1316, New York, NY, USA, 2014. ACM.
[64] Yaron Velner, Jason Teutsch, and Loi Luu. Smart contracts make Bitcoin mining pools vulnerable. To appear in 4th Workshop on Bitcoin and Blockchain Research (BITCOIN 2017).
[65] Michael Walfish and Andrew J. Blumberg. Verifying computations without reexecuting them. Communications of the ACM, 58(2):74–84, January 2015.



The first scalable, high performance private blockchain

Will Martino <> Revision v1.0

August 2016



This paper introduces Kadena, the first private/permissioned blockchain technology to achieve high performance at scale. Kadena is an implementation of the novel ScalableBFT consensus protocol, which draws inspiration from the Tangaroa protocol as well as practical engineering realities. Until now, private blockchain technologies have been able to provide either high performance or scalability but not both. Rarely deployed into production, BFT-Consensus algorithms achieve high performance initially, but exhibit drastic performance degradation as cluster size increases. Mining (or “proof of work”) is the only workable alternative: it has practically no scaling limit. However, being a probabilistic mechanism, mining is necessarily slow and thus unable to attain the performance demanded by enterprise use-cases.

The ability to perform at scale, without sacrificing the guarantees that blockchains provide, has been a major unresolved problem in the implementation of private blockchains. Scaling is critical for industrial adoption. Blockchain applications must be able to maintain acceptable performance for a successful deployment to sustain wide and growing participation.

Common Features of Private Blockchains

The term “blockchain” is imprecise, often deployed as an umbrella term to describe a wide array of blockchain-like applications. It is even less well-defined for private or permissioned blockchains: industrial applications, which unlike Bitcoin or Ethereum, have identified participants operating nodes in the cluster. To address this imprecision, we enumerate a core feature-set that any private blockchain solution must provide.

Distributed: participating entities must be able to host their participating nodes from whichever locality or data-center they choose.

Full Replication: if a node is lost or goes offline, the cluster must be able to fully recover that node and/or replay to new nodes, restoring full functionality and data.

Immutability: once a transaction is committed to the blockchain it can neither be changed nor removed. This by extension requires all transactions to be fully verifiable via cryptographic techniques such as Merkle trees or incremental hashes.
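The immutability requirement can be checked with a standard Merkle tree over transaction hashes. A minimal Python sketch (a generic illustration of the technique, not Kadena's implementation):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash over transaction hashes; changing any committed
    transaction changes the root, making tampering detectable."""
    level = [h(tx) for tx in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"alice->bob:10", b"bob->carol:4", b"carol->dave:1"]
root = merkle_root(txs)
tampered = [b"alice->bob:99", b"bob->carol:4", b"carol->dave:1"]
assert merkle_root(tampered) != root  # any edit is detectable from the root
```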

Privacy: transactions must be able to be hidden from all but the intended counterparties as required by the use-case.

Byzantine-Fault-Tolerant (BFT) Consensus: the blockchain must employ a deterministic mechanism for achieving fault-tolerant consensus, instead of probabilistic and incentive-based approaches like mining.

High Performance: able to achieve superior throughput and latency (by orders of magnitude) than what current mining-based approaches offer.

Scalability: able to maintain high performance as the number of nodes increases, well into the hundreds if not thousands.

Strong, Durable Cryptographic Identification: the author of every submitted transaction must be cryptographically verifiable in a storable and reproducible way.

Kadena: High Performance at Scale

Kadena, our proprietary implementation of the ScalableBFT protocol, has achieved an industry first: a deterministic blockchain consensus that provides high performance and low latency, scales flat when customers need it to, and is uncompromising on the guarantees that blockchains are meant to provide. The chart below demonstrates Kadena’s expected and real-world performance as it scales. Kadena executes over 7000 transactions per second, with latency in the low milliseconds, with consistent performance on cluster sizes of up to 256 nodes. We forecast steady performance well into the 1000s of nodes.


To design ScalableBFT we returned to the first principles of the Tangaroa (Copeland/Zhong) and Juno (Martino/Popejoy) BFT-Consensus protocols to develop a predictive model of how Kadena, our implementation of ScalableBFT, should perform. The prediction was as surprising as it was accurate: well before we finished our first cut of Kadena, our model predicted it would support flat scaling over very large node counts.

This model provided the groundwork for upgrading Juno/Tangaroa into what is now the ScalableBFT protocol, while also guiding the design of the Kadena application.

The scalability of Kadena is, of course, not infinite. Our model predicts that between 5,000 and 15,000 nodes (the limit is hardware and network dependent) Kadena will begin to exhibit linear performance degradation to some limit of acceptable performance, after which nodes can no longer be added.

We maintain that even for widely-adopted private applications, node counts under 10,000 are sufficient. Assuming that every entity participating in a private blockchain hosts 3 nodes for resiliency, Kadena would enable the cluster to scale up to three hundred participants (900 nodes) with no performance degradation.

Subject to the aforementioned scaling constraints, Kadena’s performance is dictated by hardware and network capabilities. Cluster latency is a function of the median network latency, and cluster throughput is a function of both median bandwidth and median host-server CPU capacity. These are “good problems to have,” resolved by purchasing better capabilities, instead of facing fundamental algorithmic limitations, as we’ll discuss later.

It is also worth noting that Kadena uses available bandwidth very efficiently. All private blockchains must replicate new transactions before they come to consensus, which requires some amount of initial bandwidth to complete. Kadena only requires an additional 0.1% of this initial bandwidth to then reach consensus.

An Overview of BFT-Consensus in Blockchain

The private blockchain scalability problem is subtle and easily misunderstood. It behooves us to briefly cover the current set of mechanisms that are under consideration for implementing consensus for private blockchains.

Mining and Private Blockchains

Mining represents a master-stroke of design as a solution to BFT consensus for an anonymous, public blockchain. Though a full description of how and why mining works is outside the purview of this whitepaper, we would like to point out some of the advantages, limitations, and unresolved issues of using mining for private blockchains.

Mining’s biggest advantage is its ability to provide BFT in a trustless environment. This advantage, however, is useless in a private setting, as by definition all participant identities are known ahead of time. Mining’s biggest applicable advantage is its ability to massively scale: the throughput and latency of a mining-based system is not related to the number of participating entities.

Mining is a probabilistic process that necessarily depends on time for its BFT guarantees. One cannot simply decrease mining difficulty to increase performance, as confirmation of a transaction is probabilistic. Waiting the customary “6 blocks” for confirmation in Bitcoin is based on the probability that a longer fork could be created approaching zero. For a public blockchain like Bitcoin, these are excellent tradeoffs, as in return we obtain a BFT consensus that is capable of public, anonymous deployment.
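The "6 blocks" convention comes from Nakamoto's analysis in the Bitcoin white paper (section 11): the probability that an attacker ever overtakes the honest chain from z blocks behind falls off roughly exponentially in z. A direct Python transcription of that estimate:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Nakamoto's estimate of the probability that an attacker with
    hash-power fraction q ever overtakes the honest chain from z
    blocks behind (Bitcoin white paper, section 11)."""
    p = 1.0 - q
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

# With 10% attacker hash power, six confirmations already make a
# successful reorganization very unlikely.
assert attacker_success(0.1, 6) < 0.001
```

This is why confirmation in a mining-based system is inherently a waiting game: the guarantee is probabilistic and only tightens with elapsed blocks, not with any deterministic round of messages.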

When used for private blockchains, mining’s tradeoffs are no longer acceptable, as a public, anonymous system is no longer desired. Indeed, mining’s very consensus mechanism becomes questionable in private settings. One of the foundational pillars of mining is providing meaningful incentives to non-byzantine miners, usually in the form of fees and/or the issuance of new crypto-coins. Behaving in a byzantine (“traitorous”) way must not be as financially viable as behaving in a non-byzantine (“loyal”) way for miners. This pillar of incentivization crumbles when mining is used for private blockchains: the reward of coins/fees cannot hope to compete with the significant business advantage that arises from the ability to retroactively invalidate transactions by mining a longer chain.

Proposed solutions to the incentivization problem usually revolve around registering all transactions with a third party oracle, and/or legal recourse for invalidating transactions. Such solutions are unsatisfactory, as they significantly detract from the promise of private blockchains: they are not distributed, instead relying on a central 3rd party oracle, and are not robust, relying on legal recourse.

BFT-Consensus Algorithms, Past and Present

BFT Consensus Algorithms (BFT-Consensus) have been a subject of academic and industrial research for decades. Historically BFT-Consensus has found production use in air- and spacecraft-control systems, where a small set of computers, connected via fast and reliable networks, come to consensus about control decisions, as a critical safety feature. In contrast, private blockchain applications demand large, globally-distributed clusters capable of maintaining high performance over real-world internet infrastructure.

General-purpose BFT-Consensus algorithms include Practical Byzantine Fault Tolerance (PBFT) and SmartBFT. They provide the same core guarantees and basic features:

A consensus mechanism robust against malicious and faulty nodes

High performance (both throughput and latency)

Deterministic (as opposed to probabilistic) confirmation

Unfortunately, this family of consensus algorithms was never designed for performance at scale, yet performance at scale is precisely what is needed to take private blockchains into production. Pilot projects using non-scalable algorithms can demonstrate performance that trounces mining. As solutions are rolled out, however, scalability comes to the forefront: performance degradation spikes as node count climbs. This issue is fundamental to this family of algorithms, and thus unavoidable: they were simply not designed with 1,000+ node clusters in mind.
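A rough way to see why all-to-all protocols degrade is to compare per-round message counts. The formulas below are illustrative approximations (they ignore client replies, batching, and view changes), not measurements of any specific implementation:

```python
def pbft_messages(n: int) -> int:
    """Approximate per-round message count for classic PBFT: a leader
    pre-prepare broadcast plus all-to-all prepare and commit phases."""
    return (n - 1) + 2 * n * (n - 1)

def linear_messages(n: int) -> int:
    """A leader-based protocol without all-to-all phases sends on the
    order of a constant number of messages per follower (e.g. one
    append plus one acknowledgement)."""
    return 2 * (n - 1)

# The quadratic term is what makes large clusters degrade: growing the
# cluster 64x multiplies PBFT's traffic by thousands, not by 64.
assert pbft_messages(256) / pbft_messages(4) > 1000
assert linear_messages(256) / linear_messages(4) < 100
```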


Challenges Facing Private Blockchains

We are witnessing the birth of a paradigm-shifting technology, akin to the shift from mainframes to databases. Currently we are in the “whatever works” discovery phase due to the immaturity of the technology: low throughput, ill-designed smart-contract layers, and compromises of cryptographic security in the name of “performance” are the norm. Performance at scale is often left undiscussed.

As the market matures and adoption of this technology increases, the focus on performance, scalability, security, and usability will increase. Though a perfect forecast of this evolution is impossible, new problems will arise that will demand solutions.

Parameters of Performance

All blockchain applications must, at some point:

  1. Replicate new transactions received from clients.
  2. Confirm the successful replication of the new transactions to the cluster (i.e. come to consensus).
  3. Validate the authorship of each transaction.
  4. Execute each transaction.

The speed at which a private blockchain can execute these four steps dictates its fundamental performance, while its efficiency, specifically the number of inter-server messages and the bandwidth needed for these post-replication messages, dictates its scalability. Though there are additional steps, these four form the vast majority of the workload.

If one accepts that mining is inadequate for consensus in private settings, due to its unfixable performance issues and the breakdown of mining incentives, then performance and scalability become inescapably intertwined for private blockchains.

Cryptography Demands on Private Blockchains

The third fundamental step, verifying authorship, places some of the most stringent bounds on the performance of private blockchains because of the cryptographic work required. There are several approaches to accomplishing this verification: Public-Private Key signatures (signed transactions), threshold encryption, and secured channels (TLS).

Public-key signatures

Message signing, the method that both Kadena and Bitcoin use, is by far the most computationally demanding approach. In return it provides durable security and auditability guarantees. A signed transaction is permanently secured – the transaction cannot be changed without compromising the key needed to construct it.

Threshold Encryption

Threshold encryption is a newer and potentially fast method that can provide durable security. However, it is a distributed affair, requiring cluster nodes to work together to decrypt each command. This increases the bandwidth demands post-replication, while also coupling the “threshold required” to cluster size.

TLS/Secured Channels

Secured channels are the most computationally efficient approach, as the very existence of the channel itself affords the authorship guarantee. However, this method only provides ephemeral security, which cannot be persisted to disk. It is not possible to analyze past transactions to prove authorship, which significantly degrades the security guarantees that blockchains promise.

This verification model has major performance implications. For example, Kadena spends the vast majority of CPU time verifying signatures. When Kadena is configured to employ ephemeral security via TLS, its performance spikes to nearly 20k transactions per second. Unfortunately, we at Kadena cannot accept the tradeoffs imposed by ephemeral security.
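To illustrate why signed transactions are durable where TLS is not, here is a toy Lamport one-time signature built from nothing but a hash function. This is purely illustrative: production systems typically use standard schemes such as Ed25519, and Lamport keys may only sign a single message safely. The point is that the stored (transaction, signature) pair remains verifiable against the public key at any later time, even after the system is shut down:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    """256 pairs of random secrets; the public key is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    """Reveal one secret per bit of the message digest."""
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    """Anyone holding pk can re-check authorship, now or years later."""
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
tx = b"transfer 10 from alice to bob"
sig = sign(sk, tx)
assert verify(pk, tx, sig)                                   # durable: re-verifiable from storage
assert not verify(pk, b"transfer 99 from alice to bob", sig) # tampered body fails
```

Nothing analogous can be persisted from a TLS session: once the channel closes, the authorship evidence is gone.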

Questions for a Private Blockchain Implementation

Despite the hype and astronomical market-size estimates concerning private blockchains, the fundamental technology must (a) actually work and (b) be better than current centralized, database-oriented solutions. Discerning which solutions hold the most promise will, of course, be tricky. Though the following list is by no means complete, it contains many of the critical questions that, over the coming years, interested parties should be asking of any private blockchain solution they find:

  1. How many rounds of messages are required to commit a single transaction?
  2. Is this number a function of cluster size?
  3. Can the solution batch transactions?
  4. How is the number related to the number of transactions being replicated?
  5. Does the solution provide ephemeral or durable security?
  6. Can we audit the blockchain and prove authorship of historical transactions?
  7. If we shut down the blockchain can we still audit transactions?
  8. Is transaction authorization logical or crypto-verifiable?
  9. Would a transaction with CEO-level authorization require a special key to create or would it be like any other command except that the “from” field says “CEO”?
  10. Does the security model used impact bandwidth demands?
  11. Does it impact messaging rounds?
  12. Is it impacted by the cluster size?
  13. How much bandwidth, in addition to the initial replication’s bandwidth, is required by each node? By the cluster?
  14. How is it related to cluster size?
  15. How is it related to newly replicated commands?
  16. What would an attacker need to do to masquerade as someone else and submit a malicious command?
  17. If an attacker fully compromises a single node/server participating in the private blockchain, how much damage could the attacker do without the cluster being aware something was wrong?
  18. Could the attacker submit malicious transactions? Change the body of a committed transaction? Reorder the log of transactions?
  19. How do these answers change if the attacker compromises 33%, 34%, 50%, 51%, 67%, or 100% of the entire cluster?



CryptoNote v 2.0

Nicolas van Saberhagen
October 17, 2013


“Bitcoin” [1] has been a successful implementation of the concept of p2p electronic cash. Both professionals and the general public have come to appreciate the convenient combination of public transactions and proof-of-work as a trust model. Today, the user base of electronic cash is growing at a steady pace; customers are attracted to low fees and the anonymity provided by electronic cash, and merchants value its predictable and decentralized emission. Bitcoin has effectively proved that electronic cash can be as simple as paper money and as convenient as credit cards.

Unfortunately, Bitcoin suffers from several deficiencies. For example, the system’s distributed nature is inflexible, preventing the implementation of new features until almost all of the network users update their clients. Some critical flaws that cannot be fixed rapidly deter Bitcoin’s widespread propagation. In such inflexible models, it is more efficient to roll-out a new project rather than perpetually fix the original project.

In this paper, we study and propose solutions to the main deficiencies of Bitcoin. We believe that a system taking into account the solutions we propose will lead to a healthy competition among different electronic cash systems. We also propose our own electronic cash, “CryptoNote”, a name emphasizing the next breakthrough in electronic cash.


Bitcoin drawbacks and some possible solutions

Traceability of transactions

Privacy and anonymity are the most important aspects of electronic cash. Peer-to-peer payments seek to be concealed from a third party’s view, a distinct difference when compared with traditional banking. In particular, T. Okamoto and K. Ohta described six criteria of ideal electronic cash, which included “privacy: relationship between the user and his purchases must be untraceable by anyone” [30]. From their description, we derived two properties which a fully anonymous electronic cash model must satisfy in order to comply with the requirements outlined by Okamoto and Ohta:

Untraceability: for each incoming transaction all possible senders are equiprobable.

Unlinkability: for any two outgoing transactions it is impossible to prove they were sent to the same person.

Unfortunately, Bitcoin does not satisfy the untraceability requirement. Since all the transactions that take place between the network’s participants are public, any transaction can be unambiguously traced to a unique origin and final recipient. Even if two participants exchange funds in an indirect way, a properly engineered path-finding method will reveal the origin and final recipient.

It is also suspected that Bitcoin does not satisfy the second property. Some researchers stated ([33, 35, 29, 31]) that a careful blockchain analysis may reveal a connection between the users of the Bitcoin network and their transactions. Although a number of methods are disputed [25], it is suspected that a lot of hidden personal information can be extracted from the public database.

Bitcoin’s failure to satisfy the two properties outlined above leads us to conclude that it is not an anonymous but a pseudo-anonymous electronic cash system. Users were quick to develop solutions to circumvent this shortcoming. Two direct solutions were “laundering services” [2] and the development of distributed methods [3, 4]. Both solutions are based on the idea of mixing several public transactions and sending them through some intermediary address; which in turn suffers the drawback of requiring a trusted third party.

Recently, a more creative scheme was proposed by I. Miers et al. [28]: “Zerocoin”. Zerocoin utilizes cryptographic one-way accumulators and zero-knowledge proofs which permit users to “convert” bitcoins to zerocoins and spend them using anonymous proof of ownership instead of explicit public-key based digital signatures. However, such knowledge proofs have a constant but inconvenient size – about 30kb (based on today’s Bitcoin limits), which makes the proposal impractical. The authors admit that the protocol is unlikely to ever be accepted by the majority of Bitcoin users [5].


The proof-of-work function
Bitcoin creator Satoshi Nakamoto described the majority decision making algorithm as "one-CPU-one-vote" and used a CPU-bound pricing function (double SHA-256) for his proof-of-work scheme. Since users vote for a single order of the transaction history [1], the reasonableness and consistency of this process are critical conditions for the whole system.
The security of this model suffers from two drawbacks. First, it requires 51% of the network's mining power to be under the control of honest users. Secondly, the system's progress (bug fixes, security fixes, etc.) requires the overwhelming majority of users to support and agree to the changes (this occurs when the users update their wallet software) [6]. Finally, this same voting mechanism is also used for collective polls about the implementation of some features [7].
This permits us to conjecture the properties that must be satisfied by the proof-of-work pricing function. Such a function must not enable a network participant to have a significant advantage over another participant; this requires parity between common hardware and high-cost custom devices. From recent examples [8], we can see that the SHA-256 function used in the Bitcoin architecture does not possess this property, as mining becomes more efficient on GPUs and ASIC devices when compared to high-end CPUs.
Therefore, Bitcoin creates favourable conditions for a large gap between the voting power of participants, as it violates the "one-CPU-one-vote" principle: GPU and ASIC owners possess a much larger voting power when compared with CPU owners. It is a classical example of the Pareto principle where 20% of a system's participants control more than 80% of the votes.
One could argue that such inequality is not relevant to the network's security, since it is not the small number of participants controlling the majority of the votes but the honesty of these participants that matters. However, such an argument is somewhat flawed, since it is rather the possibility of cheap specialized hardware appearing, rather than the participants' honesty, which poses a threat. To demonstrate this, let us take the following example. Suppose a malevolent individual gains significant mining power by creating his own mining farm through the cheap hardware described previously. Suppose the global hashrate then decreases significantly, even for a moment: he can now use his mining power to fork the chain and double-spend. As we shall see later in this article, it is not unlikely for the previously described event to take place.


Irregular emission

Bitcoin has a predetermined emission rate: each solved block produces a fixed amount of coins. Approximately every four years this reward is halved. The original intention was to create a limited smooth emission with exponential decay, but in fact we have a piecewise linear emission function whose breakpoints may cause problems to the Bitcoin infrastructure.

When the breakpoint occurs, miners start to receive only half of the value of their previous reward. The absolute difference between 12.5 and 6.25 BTC (projected for the year 2020) may seem tolerable. However, the 50 to 25 BTC drop that took place on November 28, 2012 felt inappropriate to a significant number of members of the mining community. Figure 1 shows a dramatic decrease in the network's hashrate at the end of November, exactly when the halving took place. This event could have been the perfect moment for the malevolent individual described in the proof-of-work function section to carry out a double spending attack [36].


Fig. 1. Bitcoin hashrate chart (source:


Hardcoded constants

Bitcoin has many hard-coded limits, where some are natural elements of the original design (e.g. block frequency, maximum amount of money supply, number of confirmations) whereas others seem to be artificial constraints. It is not so much the limits, as the inability of quickly changing them if necessary, that causes the main drawbacks. Unfortunately, it is hard to predict when the constants may need to be changed, and replacing them may lead to terrible consequences.

A good example of a hardcoded limit change leading to disastrous consequences is the block size limit set to 250kb¹. This limit was sufficient to hold about 10000 standard transactions. In early 2013, this limit had almost been reached, and an agreement to increase the limit was reached. The change was implemented in wallet version 0.8 and ended with a 24-blocks chain split and a successful double-spend attack [9]. While the bug was not in the Bitcoin protocol but rather in the database engine, it could have been easily caught by a simple stress test if there had been no artificially introduced block size limit.

Constants also act as a source of centralization. Despite the peer-to-peer nature of Bitcoin, an overwhelming majority of nodes use the official reference client [10] developed by a small group of people. This group makes the decisions to implement changes to the protocol, and most people accept these changes irrespective of their "correctness". Some decisions have caused heated discussions and even calls for boycott [11], which indicates that the community and the developers may disagree on some important points. It therefore seems logical to have a protocol with user-configurable and self-adjusting variables as a possible way to avoid these problems.


Bulky scripts

The scripting system in Bitcoin is a heavy and complex feature. It potentially allows one to create sophisticated transactions [12], but some of its features are disabled due to security concerns and some have never even been used [13]. The script (including both the sender's and the receiver's parts) for the most popular transaction type in Bitcoin looks like this:

    <sig> <pubKey> OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG

The script is 164 bytes long whereas its only purpose is to check if the receiver possesses the secret key required to verify his signature.


The CryptoNote Technology

Now that we have covered the limitations of the Bitcoin technology, we will concentrate on presenting the features of CryptoNote.


Untraceable Transactions

In this section we propose a scheme of fully anonymous transactions satisfying both untraceability and unlinkability conditions. An important feature of our solution is its autonomy: the sender is not required to cooperate with other users or a trusted third party to make his transactions; hence each participant produces a cover traffic independently.


Literature review

Our scheme relies on the cryptographic primitive called a group signature. First presented by D. Chaum and E. van Heyst [19], it allows a user to sign his message on behalf of the group. After signing the message the user provides (for verification purposes) not his own single public key, but the keys of all the users of his group. A verifier is convinced that the real signer is a member of the group, but cannot exclusively identify the signer.

¹This is the so-called "soft limit" — the reference client's restriction for creating new blocks. The hard maximum of the possible block size was 1 MB.

The original protocol required a trusted third party (called the Group Manager), and he was the only one who could trace the signer. The next version, called a ring signature, introduced by Rivest et al. in [34], was an autonomous scheme without a Group Manager and anonymity revocation. Various modifications of this scheme appeared later: the linkable ring signature [26, 27, 17] allows one to determine if two signatures were produced by the same group member, while the traceable ring signature [24, 23] limits excessive anonymity by providing the possibility to trace the signer of two messages with respect to the same metainformation (or "tag" in terms of [24]).

A similar cryptographic construction is also known as an ad-hoc group signature [16, 38]. It emphasizes arbitrary group formation, whereas group/ring signature schemes rather imply a fixed set of members.

For the most part, our solution is based on the work "Traceable ring signature" by E. Fujisaki and K. Suzuki [24]. In order to distinguish between the original algorithm and our modification, we will call the latter a one-time ring signature, stressing the user's capability to produce only one valid signature under his private key. We weakened the traceability property and kept the linkability only to provide one-timeness: the public key may appear in many foreign verifying sets, while the private key can be used for generating a unique anonymous signature. In case of a double spend attempt these two signatures will be linked together, but revealing the signer is not necessary for our purposes.




Elliptic curve parameters

As our base signature algorithm we chose to use the fast scheme EdDSA, which was developed and implemented by D.J. Bernstein et al. [18]. Like Bitcoin's ECDSA it is based on the elliptic curve discrete logarithm problem, so our scheme could also be applied to Bitcoin in the future.

Common parameters are:

q: a prime number; q = 2^255 − 19;

d: an element of Fq; d = −121665/121666;

E: an elliptic curve equation; −x^2 + y^2 = 1 + dx^2y^2;

G: a base point; G = (x, 4/5);

l: a prime order of the base point; l = 2^252 + 27742317777372353535851937790883648493;

Hs: a cryptographic hash function {0, 1}* → Fq;

Hp: a deterministic hash function E(Fq) → E(Fq).
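The parameters above are those of Curve25519/Ed25519; the following Python sketch sanity-checks them by recovering the base point's x-coordinate from y = 4/5 and verifying the curve equation (an illustration only; real implementations should rely on an audited library such as libsodium):

```python
# Sanity check of the Ed25519 parameters listed above (a sketch; real
# implementations should rely on an audited library such as libsodium).
q = 2**255 - 19
d = (-121665 * pow(121666, -1, q)) % q    # d = -121665/121666 in F_q
l = 2**252 + 27742317777372353535851937790883648493

# Recover the base point's x from G = (x, 4/5) and the curve equation
# -x^2 + y^2 = 1 + d*x^2*y^2  =>  x^2 = (y^2 - 1) / (d*y^2 + 1)
y = (4 * pow(5, -1, q)) % q
xx = (y * y - 1) * pow(d * y * y + 1, -1, q) % q

x = pow(xx, (q + 3) // 8, q)               # square root for q = 5 (mod 8)
if (x * x - xx) % q != 0:
    x = (x * pow(2, (q - 1) // 4, q)) % q  # multiply by sqrt(-1) if needed
if x % 2 != 0:
    x = q - x                              # Ed25519 picks the even root

assert (-x * x + y * y) % q == (1 + d * x * x * y * y) % q  # G lies on E
```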



Enhanced privacy requires a new terminology which should not be confused with Bitcoin entities.

private ec-key is a standard elliptic curve private key: a number a ∈ [1, l − 1];

public ec-key is a standard elliptic curve public key: a point A = aG;

one-time keypair is a pair of private and public ec-keys;

private user key is a pair (a, b) of two different private ec-keys;

tracking key is a pair (a, B) of private and public ec-keys (where B = bG and a ≠ b);

public user key is a pair (A, B) of two public ec-keys derived from (a, b);

standard address is a representation of a public user key as a human-friendly string with error correction;

truncated address is a representation of the second half (point B) of a public user key as a human-friendly string with error correction.

The transaction structure remains similar to the structure in Bitcoin: every user can choose several independent incoming payments (transaction outputs), sign them with the corresponding private keys and send them to different destinations.

Contrary to Bitcoin's model, where a user possesses a unique private and public key, in the proposed model a sender generates a one-time public key based on the recipient's address and some random data. In this sense, an incoming transaction for the same recipient is sent to a one-time public key (not directly to a unique address) and only the recipient can recover the corresponding private part to redeem his funds (using his unique private key). The recipient can spend the funds using a ring signature, keeping his ownership and actual spending anonymous. The details of the protocol are explained in the next subsections.


Unlinkable payments

Classic Bitcoin addresses, once published, become unambiguous identifiers for incoming payments, linking them together and tying them to the recipient's pseudonyms. If someone wants to receive an "untied" transaction, he should convey his address to the sender by a private channel. If he wants to receive different transactions which cannot be proven to belong to the same owner, he should generate all the different addresses and never publish them under his own pseudonym.





Fig. 2. Traditional Bitcoin keys/transactions model: each of Bob's public addresses corresponds to its own private key.

We propose a solution allowing a user to publish a single address and receive unconditionally unlinkable payments. The destination of each CryptoNote output (by default) is a public key, derived from the recipient's address and the sender's random data. The main advantage over Bitcoin is that every destination key is unique by default (unless the sender uses the same data for each of his transactions to the same recipient). Hence, there is no such issue as "address reuse" by design, and no observer can determine if any transactions were sent to a specific address or link two addresses together.




Fig. 3. CryptoNote keys/transactions model: payments to Bob's single address are sent to distinct one-time keys.

First, the sender performs a Diffie-Hellman exchange to get a shared secret from his data and half of the recipient’s address. Then he computes a one-time destination key, using the shared secret and the second half of the address. Two different ec-keys are required from the recipient for these two steps, so a standard CryptoNote address is nearly twice as large as a Bitcoin wallet address. The receiver also performs a Diffie-Hellman exchange to recover the corresponding secret key.

A standard transaction sequence goes as follows:


Alice wants to send a payment to Bob, who has published his standard address. She unpacks the address and gets Bob’s public key (A, B).


Alice generates a random r ∈ [1, l − 1] and computes a one-time public key P = Hs(rA)G + B.




Alice uses P as a destination key for the output and also packs value R = rG (as a part of the Diffie-Hellman exchange) somewhere into the transaction. Note that she can create other outputs with unique public keys: different recipients’ keys (Ai, Bi) imply different Pi even with the same r.



Fig. 4. Standard transaction structure: the transaction carries a public key R = rG (the sender's random data) and an output whose destination key is P = Hs(rA)G + B, derived from the receiver's public key (A, B).


Alice sends the transaction.


Bob checks every passing transaction with his private key (a, b) and computes P′ = Hs(aR)G + B. If Alice's transaction with Bob as the recipient was among them, then aR = arG = rA and P′ = P.




Bob can recover the corresponding one-time private key: x = Hs(aR) + b, so that P = xG. He can spend this output at any time by signing a transaction with x.

Fig. 5. Incoming transaction check: from his private key (a, b) and the transaction public key R, Bob computes the one-time public key P′ = Hs(aR)G + bG and compares it with the output's destination key P; if they match, the one-time private key is x = Hs(aR) + b.

As a result Bob gets incoming payments, associated with one-time public keys which are unlinkable for a spectator. Some additional notes:

When Bob "recognizes" his transactions (see step 5) he practically uses only half of his private information: (a, B). This pair, also known as the tracking key, can be passed to a third party (Carol), to whom Bob can delegate the processing of new transactions. Bob doesn't need to explicitly trust Carol, because she can't recover the one-time secret key x without Bob's full private key (a, b). This approach is useful when Bob lacks bandwidth or computation power (smartphones, hardware wallets etc.).

In case Alice wants to prove she sent a transaction to Bob’s address she can either disclose r or use any kind of zero-knowledge protocol to prove she knows r (for example by signing the transaction with r).


If Bob wants to have an audit-compatible address where all incoming transactions are linkable, he can either publish his tracking key or use a truncated address. That address represents only one public ec-key B, and the remaining part required by the protocol is derived from it as follows: a = Hs(B) and A = Hs(B)G. In both cases every person is able to "recognize" all of Bob's incoming transactions, but, of course, no one can spend the funds enclosed within them without the secret key b.
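The Diffie-Hellman exchange above can be sketched in Python over a toy cyclic group. We use the multiplicative subgroup of quadratic residues mod a small safe prime instead of Curve25519, purely to illustrate the algebra (on the curve P = Hs(rA)G + B; here, multiplicatively, P = g^Hs(A^r) · B); all names, group parameters, and the stand-in hash are our own, not from the paper:

```python
import hashlib
import random

# Toy illustration of CryptoNote's one-time keys, using the multiplicative
# group of quadratic residues mod a safe prime instead of Curve25519.
# This is NOT secure -- it only demonstrates the key-derivation algebra.
q = 5003                  # prime order of the subgroup (the paper's l)
p = 2 * q + 1             # safe prime, p = 10007
g = 4                     # generator of the order-q subgroup (the paper's G)

def Hs(*args) -> int:
    """Scalar hash, a stand-in for the paper's Hs."""
    h = hashlib.sha256(repr(args).encode()).digest()
    return int.from_bytes(h, "big") % q

rng = random.Random(7)

# Bob's user key: private (a, b), public (A, B)
a, b = rng.randrange(1, q), rng.randrange(1, q)
A, B = pow(g, a, p), pow(g, b, p)

# Alice derives the one-time destination key P and tx public key R
r = rng.randrange(1, q)
R = pow(g, r, p)                           # R = rG
P = pow(g, Hs(pow(A, r, p)), p) * B % p    # P = Hs(rA)G + B

# Bob scans the transaction: recomputes P' from (a, b) and R
P_check = pow(g, Hs(pow(R, a, p)), p) * B % p
assert P_check == P                        # aR = arG = rA, so P' = P

# Bob recovers the one-time private key x with xG = P
x = (Hs(pow(R, a, p)) + b) % q
assert pow(g, x, p) == P
```

The two assertions hold because A^r = g^(ar) = R^a, exactly the shared-secret identity aR = rA used in the protocol.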


One-time ring signatures

A protocol based on one-time ring signatures allows users to achieve unconditional unlinkability. Unfortunately, ordinary types of cryptographic signatures make it possible to trace transactions to their respective senders and receivers. Our solution to this deficiency lies in using a different signature type than those currently used in electronic cash systems.

We will first provide a general description of our algorithm with no explicit reference to electronic cash.

A one-time ring signature contains four algorithms (GEN, SIG, VER, LNK):

GEN: takes public parameters and outputs an ec-pair (P, x) and a public key I.

SIG: takes a message m, a set S′ of public keys {Pi}i≠s, a pair (Ps, xs) and outputs a signature σ and a set S = S′ ∪ {Ps}.

VER: takes a message m, a set S, a signature σ and outputs "true" or "false".

LNK: takes a set I = {Ii}, a signature σ and outputs "linked" or "indep".

The idea behind the protocol is fairly simple: a user produces a signature which can be checked by a set of public keys rather than a unique public key. The identity of the signer is indistinguishable from the other users whose public keys are in the set until the owner produces a second signature using the same keypair.


Fig. 6. Ring signature anonymity: one of the private keys signs, but the verifier sees only the whole set of public keys.



GEN: The signer picks a random secret key x ∈ [1, l − 1] and computes the corresponding public key P = xG. Additionally he computes another public key I = xHp(P ), which we will call the "key image".

SIG: The signer generates a one-time ring signature with a non-interactive zero-knowledge proof using the techniques from [21]. He selects a random subset S′ of n public keys Pi from the other users, together with his own keypair (x, P ) and key image I. Let 0 ≤ s ≤ n be the signer's secret index in S (so that his public key is Ps).

He picks random {qi | i = 0 … n} and {wi | i = 0 … n, i ≠ s} from (1 … l) and applies the following transformations:

Li = qiG, if i = s
Li = qiG + wiPi, if i ≠ s

Ri = qiHp(Pi), if i = s
Ri = qiHp(Pi) + wiI, if i ≠ s

The next step is getting the non-interactive challenge:

c = Hs(m, L0, …, Ln, R0, …, Rn)

Finally the signer computes the response:

ci = wi, if i ≠ s
cs = (c − Σi≠s ci) mod l

ri = qi, if i ≠ s
rs = (qs − cs·x) mod l

The resulting signature is σ = (I, c0, …, cn, r0, …, rn).

VER: The verifier checks the signature by applying the inverse transformations:

L′i = riG + ciPi
R′i = riHp(Pi) + ciI

Finally, the verifier checks if

Σi=0…n ci ?= Hs(m, L′0, …, L′n, R′0, …, R′n) mod l

If this equality holds, the verifier runs the algorithm LNK. Otherwise the verifier rejects the signature.


LNK: The verifier checks if I has been used in past signatures (these values are stored in the set I). Multiple uses imply that two signatures were produced under the same secret key.

The meaning of the protocol: by applying the L-transformations the signer proves that he knows such x that at least one Pi = xG. To make this proof non-repeatable we introduce the key image as I = xHp(P ). The signer uses the same coefficients (ri, ci) to prove almost the same statement: he knows such x that at least one Hp(Pi) = I · x^−1.

If the mapping x → I is an injection:


Nobody can recover the public key from the key image and identify the signer;


The signer cannot make two signatures with different I’s and the same x. A full security analysis is provided in Appendix A.
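The four algorithms can be sketched end to end in Python over a toy group. As above, we substitute the quadratic-residue subgroup mod a small safe prime for the real curve (so aG becomes g^a and point addition becomes multiplication); the code is insecure by construction and all names are ours, but the algebra of SIG/VER/LNK is exactly the one described in this section:

```python
import hashlib
import random

# Toy one-time ring signature (GEN/SIG/VER/LNK) over the quadratic-residue
# subgroup mod a safe prime -- NOT the real Ed25519 group and not secure;
# it only demonstrates the algebra of the scheme described above.
q = 5003                          # prime group order (the paper's l)
p = 2 * q + 1                     # safe prime 10007
g = 4                             # generator (the paper's G)

def Hs(*args) -> int:             # scalar hash
    return int.from_bytes(hashlib.sha256(repr(args).encode()).digest(), "big") % q

def Hp(point: int) -> int:        # deterministic group-element hash E -> E
    return pow(g, 1 + Hs("Hp", point) % (q - 1), p)

def gen(rng):
    x = rng.randrange(1, q)
    P = pow(g, x, p)
    return P, x, pow(Hp(P), x, p)          # public key, secret key, image I

def sig(m, ring, s, x, I, rng):
    n = len(ring)
    qs = [rng.randrange(1, q) for _ in range(n)]
    ws = [rng.randrange(1, q) for _ in range(n)]
    L = [pow(g, qs[i], p) if i == s
         else pow(g, qs[i], p) * pow(ring[i], ws[i], p) % p for i in range(n)]
    R = [pow(Hp(ring[i]), qs[i], p) if i == s
         else pow(Hp(ring[i]), qs[i], p) * pow(I, ws[i], p) % p for i in range(n)]
    c = Hs(m, L, R)
    cs = list(ws)
    cs[s] = (c - sum(w for i, w in enumerate(ws) if i != s)) % q
    rs = list(qs)
    rs[s] = (qs[s] - cs[s] * x) % q
    return I, cs, rs

def ver(m, ring, sigma):
    I, cs, rs = sigma
    L = [pow(g, rs[i], p) * pow(ring[i], cs[i], p) % p for i in range(len(ring))]
    R = [pow(Hp(ring[i]), rs[i], p) * pow(I, cs[i], p) % p for i in range(len(ring))]
    return sum(cs) % q == Hs(m, L, R)

rng = random.Random(42)
keys = [gen(rng) for _ in range(4)]
ring = [P for P, _, _ in keys]
P, x, I = keys[2]                          # signer's secret index s = 2
sigma = sig("pay 1 coin", ring, 2, x, I, rng)
assert ver("pay 1 coin", ring, sigma)
assert not ver("pay 2 coins", ring, sigma)     # tampered message fails
# LNK: a second spend under the same key yields the same key image
assert sig("another tx", ring, 2, x, I, rng)[0] == I
```

Note how the check works at the secret index: rs[s] = qs[s] − cs[s]·x makes g^rs[s] · P^cs[s] collapse back to g^qs[s], so the verifier reconstructs exactly the L and R values hashed by the signer.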


Standard CryptoNote transaction

By combining both methods (unlinkable public keys and untraceable ring signatures) Bob achieves a new level of privacy in comparison with the original Bitcoin scheme. It requires him to store only one private key (a, b) and publish (A, B) to start receiving and sending anonymous transactions. While validating each transaction Bob additionally performs only two elliptic curve multiplications and one addition per output to check if a transaction belongs to him. For each of his outputs Bob recovers a one-time keypair (pi, Pi) and stores it in his wallet. Any inputs can be circumstantially proved to have the same owner only if they appear in a single transaction. In fact this relationship is much harder to establish due to the one-time ring signature.

With a ring signature Bob can effectively hide every input among somebody else's; all possible spenders will be equiprobable, and even the previous owner (Alice) has no more information than any observer.

When signing his transaction Bob specifies n foreign outputs with the same amount as his output, mixing all of them without the participation of other users. Bob himself (as well as anybody else) does not know if any of these payments have been spent: an output can be used in thousands of signatures as an ambiguity factor and never as a target of hiding. The double spend check occurs in the LNK phase when checking against the used key images set.

Bob can choose the ambiguity degree on his own: n = 1 means that the probability he has spent the output is 50%, while n = 99 gives 1%. The size of the resulting signature increases linearly as O(n + 1), so the improved anonymity costs Bob extra transaction fees. He can also set n = 0 and make his ring signature consist of only one element, but this will instantly reveal him as the spender.


Fig. 7. Ring signature generation in a standard transaction: the sender's one-time keypair (x, P ) yields the key image I = xHp(P ), and the ring signature covers his own output's destination key together with destination keys taken from foreign transactions.


Egalitarian Proof-of-work

In this section we propose and justify a new proof-of-work algorithm. Our primary goal is to close the gap between CPU (majority) and GPU/FPGA/ASIC (minority) miners. It is acceptable for some users to have a certain advantage over others, but their investments should grow at least linearly with their power. More generally, producing special-purpose devices should be as unprofitable as possible.


Related works

The original Bitcoin proof-of-work protocol uses the CPU-intensive pricing function SHA-256. It mainly consists of basic logical operators and relies solely on the computational speed of the processor, and is therefore perfectly suitable for multicore/pipelined implementation.

However, modern computers are not limited by the number of operations per second alone, but also by memory size. While some processors can be substantially faster than others [8], memory sizes are less likely to vary between machines.

Memory-bound price functions were first introduced by Abadi et al and were defined as “functions whose computation time is dominated by the time spent accessing memory” [15]. The main idea is to construct an algorithm allocating a large block of data (“scratchpad”) within memory that can be accessed relatively slowly (for example, RAM) and “accessing an unpredictable sequence of locations” within it. A block should be large enough to make preserving the data more advantageous than recomputing it for each access. The algorithm also should prevent internal parallelism, hence N simultaneous threads should require N times more memory at once.

Dwork et al [22] investigated and formalized this approach, leading them to suggest another variant of the pricing function: "Mbound". One more work belongs to F. Coelho [20], who proposed the most effective solution: "Hokkaido".

To our knowledge the latest work based on the idea of pseudo-random searches in a big array is the algorithm known as "scrypt" by C. Percival [32]. Unlike the previous functions it focuses on key derivation, not on proof-of-work systems. Despite this fact scrypt can serve our purpose: it works well as a pricing function in the partial hash conversion problem, just as SHA-256 does in Bitcoin.

By now scrypt has already been applied in Litecoin [14] and some other Bitcoin forks. How- ever, its implementation is not really memory-bound: the ratio “memory access time / overall time” is not large enough because each instance uses only 128 KB. This permits GPU miners to be roughly 10 times more effective and continues to leave the possibility of creating relatively cheap but highly-efficient mining devices.




Moreover, the scrypt construction itself allows a linear trade-off between memory size and CPU speed due to the fact that every block in the scratchpad is derived only from the previous one. For example, you can store every second block and recalculate the others in a lazy way, i.e. only when it becomes necessary. The pseudo-random indexes are assumed to be uniformly distributed, hence the expected value of the additional blocks' recalculations is 1/2 · N, where N is the number of iterations. The overall computation time increases less than by half because there are also time-independent (constant time) operations such as preparing the scratchpad and hashing on every iteration. Saving 2/3 of the memory costs 1/3 · N + 1/3 · 2 · N = N additional recalculations; saving 9/10 results in 1/10 · N + … + 1/10 · 9 · N = 4.5N. It is easy to show that storing only 1/s of all blocks increases the time less than by a factor of (s − 1)/2. This in turn implies that a machine with a CPU 200 times faster than the modern chips can store only 320 bytes of the scratchpad.
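The (s − 1)/2 estimate is easy to confirm with a small simulation (the function name and parameters are ours): storing only every s-th scratchpad block means a uniformly random access at index i costs i mod s extra recomputations, which averages (s − 1)/2.

```python
import random

# Simulate the scrypt trade-off sketched above: store only every s-th
# scratchpad block; a random access then costs (index mod s) extra
# recomputations, which averages (s - 1) / 2 per access.
def expected_extra_recalcs(s: int, n_blocks: int, n_accesses: int, seed: int = 1) -> float:
    rng = random.Random(seed)
    total = 0
    for _ in range(n_accesses):
        idx = rng.randrange(n_blocks)
        total += idx % s          # walk back to the nearest stored block
    return total / n_accesses

for s in (2, 3, 10):
    avg = expected_extra_recalcs(s, n_blocks=100_000, n_accesses=200_000)
    assert abs(avg - (s - 1) / 2) < 0.05 * s   # matches the (s-1)/2 estimate
```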


The proposed algorithm

We propose a new memory-bound algorithm for the proof-of-work pricing function. It relies on random access to a slow memory and emphasizes latency dependence. As opposed to scrypt, every new block (64 bytes in length) depends on all the previous blocks. As a result a hypothetical "memory-saver" would have to increase his calculation speed exponentially.

Our algorithm requires about 2 Mb per instance for the following reasons:


It fits in the L3 cache (per core) of modern processors, which should become mainstream in a few years;


A megabyte of internal memory is an almost unacceptable size for a modern ASIC pipeline;


GPUs may run hundreds of concurrent instances, but they are limited in other ways: GDDR5 memory is slower than the CPU L3 cache and remarkable for its bandwidth, not random access speed.


Significant expansion of the scratchpad would require an increase in iterations, which in turn implies an overall time increase. “Heavy” calls in a trust-less p2p network may lead to serious vulnerabilities, because nodes are obliged to check every new block’s proof-of-work. If a node spends a considerable amount of time on each hash evaluation, it can be easily DDoSed by a flood of fake objects with arbitrary work data (nonce values).


Further advantages


Smooth emission

The upper bound for the overall amount of CryptoNote digital coins is MSupply = 2^64 − 1 atomic units. This is a natural restriction based only on implementation limits, not on intuition such as "N coins ought to be enough for anybody".

To ensure the smoothness of the emission process we use the following formula for block rewards:

BaseReward = (MSupply − A) >> 18,

where A is the amount of previously generated coins.
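The emission rule above is a one-liner; a short sketch (variable names are ours) shows that rewards decay smoothly and the generated amount never exceeds the cap:

```python
# Emission sketch: each block's reward is (MSupply - A) >> 18, where A is
# the number of atomic units already generated, so rewards decay smoothly
# toward the 2^64 - 1 cap.
MSUPPLY = 2**64 - 1

def base_reward(already_generated: int) -> int:
    return (MSUPPLY - already_generated) >> 18

A = 0
rewards = []
for _ in range(5):
    r = base_reward(A)
    rewards.append(r)
    A += r

assert rewards[0] == MSUPPLY >> 18                 # first block's reward
assert rewards == sorted(rewards, reverse=True)    # smoothly decreasing
assert A < MSUPPLY                                 # cap is never exceeded
```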


Adjustable parameters



CryptoNote contains a targeting algorithm which changes the difficulty of every block. This decreases the system's reaction time when the network hashrate is growing or shrinking rapidly, preserving a constant block rate. The original Bitcoin method calculates the ratio of the actual and target time-spans between the last 2016 blocks and uses it as the multiplier for the current difficulty. Obviously this is unsuitable for rapid recalculations (because of large inertia) and results in oscillations.

The general idea behind our algorithm is to sum all the work completed by the nodes and divide it by the time they have spent. The measure of work is the corresponding difficulty value in each block. But due to inaccurate and untrusted timestamps we cannot determine the exact time interval between blocks. A user can shift his timestamp into the future, and the next time intervals might be improbably small or even negative. Presumably there will be few incidents of this kind, so we can simply sort the timestamps and cut off the outliers (i.e. 20%). The range of the remaining values is the time which was spent for 80% of the corresponding blocks.
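A simplified sketch of this retargeting idea (not the exact CryptoNote algorithm; the function name, 10%-per-side cut, and the proportional attribution of work to the kept blocks are our own simplifications):

```python
# Simplified sketch of the retargeting idea above (not the exact
# CryptoNote algorithm): sort the window's timestamps, cut 10% of
# outliers from each end (20% total), and divide the work done by the
# time spanned by the remaining 80% of blocks.
def next_difficulty(timestamps, difficulties, target_seconds=60):
    n = len(timestamps)
    cut = n // 10                                    # drop 10% on each side
    kept = sorted(timestamps)[cut:n - cut]
    time_span = max(kept[-1] - kept[0], 1)
    work = sum(difficulties) * (n - 2 * cut) // n    # work of the kept 80%
    return work * target_seconds // time_span

# Constant hashrate, one block per 60 s at difficulty 1000:
ts = [60 * i for i in range(100)]
d = next_difficulty(ts, [1000] * 100)
assert abs(d - 1000) < 30     # the estimate stays near the true difficulty
```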


Size limits

Users pay for storing the blockchain and shall be entitled to vote for its size. Every miner deals with the trade-off between balancing the costs and the profit from the fees, and sets his own "soft-limit" for creating blocks. Also, a core rule for the maximum block size is necessary to prevent the blockchain from being flooded with bogus transactions; however, this value should not be hard-coded.


Let MN be the median value of the last N block sizes. Then the "hard-limit" for the size of accepted blocks is 2 · MN. It prevents the blockchain from bloating but still allows the limit to grow slowly with time if necessary.

Transaction size does not need to be limited explicitly. It is bounded by the size of a block; and if somebody wants to create a huge transaction with hundreds of inputs/outputs (or with a high ambiguity degree in ring signatures), he can do so by paying a sufficient fee.
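The median-based acceptance rule is trivially expressed in code (function names and the sample sizes are ours):

```python
import statistics

# The "hard-limit" rule above: a new block is accepted only if its size is
# at most twice the median size of the last N blocks.
def accepted(block_size: int, recent_sizes) -> bool:
    mn = statistics.median(recent_sizes)   # MN, the median of the last N sizes
    return block_size <= 2 * mn

recent = [290, 295, 300, 305, 310]         # last N block sizes (in kb)
assert accepted(550, recent)               # 550 <= 2 * 300: accepted
assert not accepted(700, recent)           # 700 >  2 * 300: rejected
```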


Excess size penalty

A miner still has the ability to stuff a block full of his own zero-fee transactions up to its maximum size 2 · MN. Even though only the majority of miners can shift the median value, there is still a possibility to bloat the blockchain and produce an additional load on the nodes. To discourage malevolent participants from creating large blocks we introduce a penalty function:

NewReward = BaseReward · (1 − (BlkSize/MN − 1)^2)

This rule is applied only when BlkSize is greater than the minimal free block size, which should be close to max(10kb, MN · 110%). Miners are permitted to create blocks of "usual size" and even exceed it with profit when the overall fees surpass the penalty. But fees are unlikely to grow quadratically, unlike the penalty value, so there will be an equilibrium.
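The quadratic penalty can be sketched as follows (the function name is ours, and for brevity the minimal free block size threshold max(10kb, MN · 110%) is folded into a simple "no penalty at or below the median" check):

```python
# The penalty rule sketched above: for BlkSize in (MN, 2*MN] the base
# reward is scaled by 1 - (BlkSize/MN - 1)^2, so there is no penalty at
# the median size and no reward at all at twice the median.
def new_reward(base_reward: float, blk_size: float, mn: float) -> float:
    if blk_size <= mn:                   # blocks of "usual size": no penalty
        return base_reward
    return base_reward * (1 - (blk_size / mn - 1) ** 2)

assert new_reward(100.0, 1000, 1000) == 100.0   # at MN: full reward
assert new_reward(100.0, 1500, 1000) == 75.0    # halfway: 1 - 0.5^2
assert new_reward(100.0, 2000, 1000) == 0.0     # at 2*MN: reward vanishes
```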



Transaction scripts

CryptoNote has a very minimalistic scripting subsystem. A sender specifies an expression Φ = f(x1, x2, …, xn), where n is the number of destination public keys {Pi}, i = 1 … n. Only five binary operators are supported: min, max, sum, mul and cmp. When the receiver spends this payment, he produces 0 ≤ k ≤ n signatures and passes them to the transaction input. The verification process simply evaluates Φ with xi = 1 if there is a valid signature for the public key Pi, and xi = 0 otherwise. A verifier accepts the proof iff Φ > 0.

Despite its simplicity this approach covers every possible case:




Multi-/Threshold signature. For the Bitcoin-style "M-out-of-N" multi-signature (i.e. the receiver should provide at least 0 ≤ M ≤ N valid signatures), Φ = x1 + x2 + … + xN ≥ M (for clarity we are using common algebraic notation). The weighted threshold signature (some keys can be more important than others) could be expressed as Φ = w1 · x1 + w2 · x2 + … + wN · xN ≥ wM. A scenario with a master-key corresponds to Φ = max(M · x, x1 + x2 + … + xN) ≥ M. It is easy to show that any sophisticated case can be expressed with these operators, i.e. they form a basis.

Password protection. Possession of a secret password s is equivalent to the knowledge of a private key, deterministically derived from the password: k = KDF(s). Hence, a receiver can prove that he knows the password by providing another signature under the key k. The sender simply adds the corresponding public key to his own output. Note that this method is much more secure than the “transaction puzzle” used in Bitcoin [13], where the password is explicitly passed in the inputs.

Degenerate cases. Φ = 1 means that anybody can spend the money; Φ = 0 marks the output as not spendable forever.
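The evaluation rule is simple enough to sketch in a few lines of Python. The nested-tuple expression encoding and the function names are our own illustration; the paper only fixes the operator set {min, max, sum, mul, cmp} and the acceptance rule Φ > 0:

```python
# Sketch of the five-operator script evaluation described above. The
# expression encoding (nested tuples) is our own illustration.
def evaluate(expr, xs):
    op, *args = expr
    if op == "sig":                    # x_i: 1 iff signature i is valid
        return xs[args[0]]
    if op == "const":
        return args[0]
    vals = [evaluate(a, xs) for a in args]
    if op == "sum":
        return sum(vals)
    if op == "mul":
        prod = 1
        for v in vals:
            prod *= v
        return prod
    if op == "min":
        return min(vals)
    if op == "max":
        return max(vals)
    if op == "cmp":                    # cmp(a, b) = 1 iff a >= b
        return 1 if vals[0] >= vals[1] else 0
    raise ValueError(op)

def accepts(phi, xs) -> bool:
    return evaluate(phi, xs) > 0       # a verifier accepts iff Phi > 0

# "2-out-of-3" multisignature: Phi = cmp(x1 + x2 + x3, 2)
phi = ("cmp", ("sum", ("sig", 0), ("sig", 1), ("sig", 2)), ("const", 2))
assert accepts(phi, [1, 1, 0])         # two valid signatures: accepted
assert not accepts(phi, [1, 0, 0])     # only one: rejected
```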

In the case when the output script combined with public keys is too large for a sender, he can use a special output type, which indicates that the recipient will put this data in his input while the sender provides only a hash of it. This approach is similar to Bitcoin's "pay-to-hash" feature, but instead of adding new script commands we handle this case at the data structure level.



We have investigated the major flaws in Bitcoin and proposed some possible solutions. These advantageous features and our ongoing development make the new electronic cash system CryptoNote a serious rival to Bitcoin, outclassing all its forks.

Nobel prize laureate Friedrich Hayek in his famous work argues that the existence of concurrent independent currencies has a huge positive effect. Each currency issuer (or developer, in our case) tries to attract users by improving his product. Currency is like a commodity: it can have unique benefits and shortcomings, and the most convenient and trusted currency has the greatest demand. Suppose we had a currency excelling Bitcoin: it means that Bitcoin would develop faster and become better. The biggest support for an open source project comes from its own users, who are interested in it.

We do not consider CryptoNote as a full replacement for Bitcoin. On the contrary, having two (or more) strong and convenient currencies is better than having just one. Running two or more different projects in parallel is the natural flow of electronic cash economics.

Digitally signed by Nicolas van Saberhagen, 2013.10.17 14:44:50


A Security

We shall give a proof for our one-time ring signature scheme. In places it coincides with parts of the proof in [24], but we decided to rewrite those parts here rather than force the reader to rush back and forth between papers.

These are the properties to be established:



Linkability. Given all the secret keys {xi | i = 1, . . . , n} for a set S it is impossible to produce n + 1 valid signatures σ1, σ2, . . . , σn+1 such that all of them pass the LNK phase (i.e. with n + 1 different key images Ii). This property implies the double-spending protection in the context of CryptoNote.


Exculpability. Given a set S, at most n − 1 corresponding private keys xi (excluding i = j) and the key image Ij of xj it is impossible to produce a valid signature σ with Ij. This property implies the theft protection in the context of CryptoNote.



Unforgeability. Given only a set of public keys it is impossible to produce a valid signature σ.





Anonymity. Given a signature σ and the corresponding set S it is impossible to determine the secret index j of the signer with a probability p > 1/n.
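Operationally, the linkability property above is what the LNK phase enforces: verifiers keep a set of used key images and reject any signature whose image has been seen before. A minimal sketch (the names are illustrative, not CryptoNote's actual data structures):

```python
# Double-spend protection via key images: each secret key yields exactly
# one image I = x * Hp(x * G), so respending an output reuses its image.
used_images = set()

def lnk_check(key_image: bytes) -> bool:
    """Accept a signature only if its key image is fresh."""
    if key_image in used_images:
        return False              # same output spent twice: reject
    used_images.add(key_image)
    return True

assert lnk_check(b"I_1")          # first spend: accepted
assert not lnk_check(b"I_1")      # double-spend attempt: rejected
assert lnk_check(b"I_2")          # a different output: accepted
```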


Theorem 1. Our one-time ring signature scheme is linkable under the random oracle model.

Proof. Suppose an adversary can produce n + 1 valid signatures σi with key images Ii ≠ Ij for any i, j ∈ [1 . . . n + 1]. Since #S = n, at least one Ii ≠ xjHp(Pj) for every j ∈ [1 . . . n]. Consider the corresponding signature σ = (I, c1, . . . , cn, r1, . . . , rn). VER(σ) = “true”, which means that

L′i = riG + ciPi

R′i = riHp(Pi) + ciI

ci = Hs(m, L′1, . . . , L′n, R′1, . . . , R′n) mod l

The first two equalities imply

logG L′i = ri + cixi

logHp(Pi) R′i = ri + ci · logHp(Pi) I

where logA B informally denotes the discrete logarithm of B to the base A.



As in [24] we note that ∀i : xi ≠ logHp(Pi) I implies that all ci’s are uniquely determined. The third equality forces the adversary to find a pre-image of Hs to succeed in the attack, an event whose probability is considered to be negligible.


Theorem 2. Our one-time ring signature scheme is exculpable under the discrete logarithm assumption in the random oracle model.

Proof. Suppose an adversary can produce a valid signature σ = (I, c1, . . . , cn, r1, . . . , rn) with I = xjHp(Pj) given {xi | i = 1, . . . , j − 1, j + 1, . . . , n}. Then we can construct an algorithm A which solves the discrete logarithm problem in E(Fq).


Suppose inst = (G, P ) ∈ E(Fq) is a given instance of the DLP and the goal is to get s such that P = sG. Using the standard technique (as in [24]), A simulates the random and signing oracles and makes the adversary produce two valid signatures with Pj = P in the set S: σ = (I, c1, . . . , cn, r1, . . . , rn) and σ′ = (I, c′1, . . . , c′n, r′1, . . . , r′n).

Since I = xjHp(Pj) in both signatures, we compute

xj = logHp(Pj) I = (rj − r′j) / (c′j − cj) mod l

A outputs xj because Lj = rjG + cjPj = r′jG + c′jPj and Pj = P.
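The extraction step in the proof can be checked with toy arithmetic in the additive group Zl (a stand-in for E(Fq); the constants are arbitrary and carry no security): two accepting transcripts with the same commitment Lj but different challenges reveal the discrete log.

```python
l = 8191            # small prime standing in for the group order
G = 5               # toy "base point" in the additive group Z_l
x = 1234            # the secret the reduction extracts
P = x * G % l       # public key

k = 777             # same randomness in both forked transcripts
L = k * G % l       # shared commitment L_j
c1, c2 = 29, 4021   # two different challenges from the rewound oracle
r1 = (k - c1 * x) % l
r2 = (k - c2 * x) % l

# Both transcripts verify: L = r*G + c*P
assert (r1 * G + c1 * P) % l == L
assert (r2 * G + c2 * P) % l == L

# x_j = (r_j - r'_j) / (c'_j - c_j) mod l, as in the proof
x_rec = (r1 - r2) * pow(c2 - c1, -1, l) % l
assert x_rec == x
```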


It has been shown in [24] that unforgeability is just an implication of both linkability and exculpability.

Theorem 3. If a one-time ring signature scheme is linkable and exculpable, then it is unforgeable.

Proof. Suppose an adversary can forge a signature for a given set S: σ0 = (I0, . . .). Consider all valid signatures (produced by the honest signers) for the same message m and the set S: σ1, σ2, . . . , σn. There are two possible cases:


1. I0 ∈ {Ii | i = 1, . . . , n}, which contradicts exculpability.

2. I0 ∉ {Ii | i = 1, . . . , n}, which contradicts linkability.


Theorem 4. Our one-time ring signature scheme is anonymous under the decisional Diffie-Hellman assumption in the random oracle model.

Proof. Suppose an adversary can determine the secret index j of the signer with probability p = 1/n + ε. Then we can construct an algorithm A which solves the decisional Diffie-Hellman problem in E(Fq) with probability 1/2 + ε/2.

Let inst = (G1, G2, Q1, Q2) ∈ E(Fq) be the instance of DDH and the goal to determine if logG1 Q1 = logG2 Q2. A feeds the adversary with a valid signature σ′ = (I, . . .), where Pj = xjG1 = Q1 and I = Q2, and simulates the oracle Hp, returning G2 for the query Hp(Pj).





The adversary returns k as his guess for the index i : I = xiHp(Pi). If k = j, then A returns 1 (for “yes”), otherwise a random bit r ∈ {0, 1}. The probability of the right choice is computed as in [24]:

1/2 + Pr(1 | inst ∈ DDH) − Pr(1 | inst ∉ DDH)

= 1/2 + Pr(k = j | inst ∈ DDH) + Pr(k ≠ j | inst ∈ DDH) · Pr(r = 1) − Pr(k = j | inst ∉ DDH) − Pr(k ≠ j | inst ∉ DDH) · Pr(r = 1)

= 1/2 + 1/n + ε + (1 − 1/n − ε) · 1/2 − 1/n − (1 − 1/n) · 1/2 = 1/2 + ε/2



In fact, the result should be reduced by the probability of a collision in Hs, but this value is considered to be negligible.

Notes on the hash function Hp

We defined Hp as a deterministic hash function E(Fq) → E(Fq). None of the proofs demands Hp to be an ideal cryptographic hash function. Its main purpose is to get a pseudo-random base Hp(xG) for the key image I = xHp(xG) in some determined way.

With a fixed base (I = xG2) the following scenario is possible:


Alice sends two standard transactions to Bob, generating the one-time tx-keys: P1 = Hs(r1A)G + B and P2 = Hs(r2A)G + B.


Bob recovers the corresponding one-time private tx-keys x1 and x2 and spends the outputs with valid signatures and key images I1 = x1G2 and I2 = x2G2.


Now Alice can link these signatures, checking the equality I1 − I2 ?= (Hs(r1A) − Hs(r2A)) · G2.

The problem is that Alice knows the linear correlation between the public keys P1 and P2, and in the case of the fixed base G2 she gets the same correlation between the key images I1 and I2. Replacing G2 with Hp(xG), which does not preserve linearity, fixes that flaw.
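The linkage flaw can be demonstrated with toy arithmetic in the additive group Zl (illustrative constants, no security): with a fixed base G2 the images inherit Alice's known linear correlation, while a per-key base Hp(xG) destroys it. The quadratic Hp below is an arbitrary stand-in for a non-linear hash to the group.

```python
l = 8191                  # small prime standing in for the group order
G, G2 = 5, 11             # toy bases for public keys and key images
b = 4242                  # Bob's private component (unknown to Alice)
h1, h2 = 1001, 2002       # Hs(r1*A), Hs(r2*A): both known to Alice

x1, x2 = (h1 + b) % l, (h2 + b) % l   # Bob's one-time private keys

# Fixed base: I = x * G2 is linear in x, so b cancels and Alice links.
I1, I2 = x1 * G2 % l, x2 * G2 % l
assert (I1 - I2) % l == (h1 - h2) * G2 % l   # Alice's check succeeds

# Per-key base: I = x * Hp(x*G) is no longer linear in x.
def Hp(point):            # arbitrary non-linear stand-in hash
    return (point * point + 7) % l

J1 = x1 * Hp(x1 * G % l) % l
J2 = x2 * Hp(x2 * G % l) % l
assert (J1 - J2) % l != (h1 - h2) * G2 % l   # the linkage check fails
```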

For constructing a deterministic Hp we use the algorithm presented in [37].



Martín Abadi, Michael Burrows, and Ted Wobber. Moderately hard, memory-bound functions. In NDSS, 2003.


Ben Adida, Susan Hohenberger, and Ronald L. Rivest. Ad-hoc-group signatures from hijacked keypairs. In DIMACS Workshop on Theft in E-Commerce, 2005.


Man Ho Au, Sherman S. M. Chow, Willy Susilo, and Patrick P. Tsang. Short linkable ring signatures revisited. In EuroPKI, pages 101–115, 2006.


Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang. High-speed high-security signatures. J. Cryptographic Engineering, 2(2):77–89, 2012.


David Chaum and Eugène van Heyst. Group signatures. In EUROCRYPT, pages 257–265, 1991.


Fabien Coelho. Exponential memory-bound functions for proof of work protocols. IACR Cryptology ePrint Archive, 2005:356, 2005.


Ronald Cramer, Ivan Damgård, and Berry Schoenmakers. Proofs of partial knowledge and simplified design of witness hiding protocols. In CRYPTO, pages 174–187, 1994.


Cynthia Dwork, Andrew Goldberg, and Moni Naor. On memory-bound functions for fighting spam. In CRYPTO, pages 426–444, 2003.


Eiichiro Fujisaki. Sub-linear size traceable ring signatures without random oracles. In CT-RSA, pages 393–415, 2011.


Eiichiro Fujisaki and Koutarou Suzuki. Traceable ring signature. In Public Key Cryptography, pages 181–200, 2007.


Jeff Garzik. Peer review of “Quantitative analysis of the full bitcoin transaction graph”, 2012.


Joseph K. Liu, Victor K. Wei, and Duncan S. Wong. Linkable spontaneous anonymous group signature for ad hoc groups (extended abstract). In ACISP, pages 325–335, 2004.


Joseph K. Liu and Duncan S. Wong. Linkable ring signatures: Security models and new schemes. In ICCSA (2), pages 614–623, 2005.


Ian Miers, Christina Garman, Matthew Green, and Aviel D. Rubin. Zerocoin: Anonymous distributed e-cash from bitcoin. In IEEE Symposium on Security and Privacy, pages 397–411, 2013.


Micha Ober, Stefan Katzenbeisser, and Kay Hamacher. Structure and anonymity of the bitcoin transaction graph. Future Internet, 5(2):237–250, 2013.


Tatsuaki Okamoto and Kazuo Ohta. Universal electronic cash. In CRYPTO, pages 324–337, 1991.


Marc Santamaria Ortega. The bitcoin transaction graph — anonymity. Master’s thesis, Universitat Oberta de Catalunya, June 2013.


Colin Percival. Stronger key derivation via sequential memory-hard functions. Presented at BSDCan’09, May 2009.


Fergal Reid and Martin Harrigan. An analysis of anonymity in the bitcoin system. CoRR, abs/1107.4524, 2011.


Ronald L. Rivest, Adi Shamir, and Yael Tauman. How to leak a secret. In ASIACRYPT, pages 552–565, 2001.


Dorit Ron and Adi Shamir. Quantitative analysis of the full bitcoin transaction graph. IACR Cryptology ePrint Archive, 2012:584, 2012.


Meni Rosenfeld. Analysis of hashrate-based double-spending. 2012.


Maciej Ulas. Rational points on certain hyperelliptic curves over finite fields. Bulletin of the Polish Academy of Sciences. Mathematics, 55(2):97–104, 2007.


Qianhong Wu, Willy Susilo, Yi Mu, and Fangguo Zhang. Ad hoc group signatures. In IWSEC, pages 120–135, 2006.