Tag archive for: PoS

This article explains how someone will be able to become a validator in Ethereum 2.0. The new Ethereum will replace the mining process seen in the current Ethereum with Proof of Stake consensus, where validators are the ones maintaining the network. Those validators’ attestations are written on the Beacon chain. However, we won’t get into those technical details, and you don’t even have to know all of that to take part in validating blocks.

What’s the current status of Ethereum 2.0 development?

The specification is there, and seven different node clients are actively working on their implementations. The reason for this is that they are written in different languages and will have different specializations, i.e. being focused on the browser or on resource-constrained devices. Also, not all of them will survive (but that’s ok). In the current Ethereum (Eth1), the survivors are Geth and Parity. The currently active Eth2 clients are: Lodestar, Nimbus, Lighthouse, Prysm, Trinity, and Harmony + Artemis, which should merge together.

They all have their own testnets, but in September they gathered at an interoperability event and created a multi-client testnet. Here is the historic tweet, and we are glad that we could be there! Thanks, Consensys.

First launch of the Ethereum 2.0 clients, tweet by Joseph Delong

The good news is that in the latest Eth2 spec release (v0.9.0), the official deposit contract has been declared finished!

All initial deposits, which happen on the Eth1 chain, will be used by the Eth2 chain to secure the network upon launch by leveraging the security pool and value of existing Ether.

How much can you earn by being a validator?

What a great question! Yes, you get a reward if your attestations get included in a block, but the reward depends on the whole state of the network, i.e. how many validators are online. The economics of this are still being examined and tested. The latest estimates are that validators can expect 4.6% – 10.3% in annualized rewards. However, the spec is still being updated, which keeps shifting these estimates, so we can only recommend the following links for a better understanding:

  – Examining the Proposed Validator Economics of Ethereum 2.0
  – Eth 2.0 Economics

Basically, the math for validator return on investment (ROI) is:

ROI = Validator rewards + Network fees - Cost to run a validator node

The goal is to encourage people to become validators and to have as many as possible securing the network. Therefore, the whole PoS system is a collective rewards scheme: the more people online, the more everyone earns, and vice versa, the fewer people online, the less everyone earns. That is why there is a slight penalty if your validator client goes offline at any point. For example, if the current interest rate is 5%, you would lose 0.0137% of your deposit for every day you are offline, but gain that much for every day you’re online. In case of a bigger issue where 33% of validators are offline and you’re among them, you can lose 60% in 18 days. If at any point your deposit drops below 16 ETH, you will be removed from the validator set entirely.
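To make the arithmetic above concrete, here is a minimal Python sketch of that simplified model, using the illustrative 5% annual rate from the example; the actual Eth2 reward and penalty curves are more involved and still changing, so treat this purely as an illustration.

# A minimal sketch of the simplified offline-penalty arithmetic above.
# The real Eth2 reward/penalty formulas are more complex and still changing.

DEPOSIT_ETH = 32.0
ANNUAL_RATE = 0.05                 # illustrative 5% interest rate from the example

daily_rate = ANNUAL_RATE / 365     # ~0.000137, i.e. ~0.0137% of the deposit per day

def balance_after(days_online: int, days_offline: int) -> float:
    """Gain the daily rate for each day online, lose it for each day offline."""
    return DEPOSIT_ETH * (1 + daily_rate * (days_online - days_offline))

# Example: a 30-day month with two offline days.
print(balance_after(days_online=28, days_offline=2))   # ~32.11 ETH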

Unfortunately, the numbers above and in the linked articles will most probably change, as the Eth2 spec is constantly being updated.

What do you need to become a validator?

Basically, here are the minimum requirements for being a validator:

  – have 32 ETH,
  – run a validator node 24/7 (this can be your PC, a remote server, a Raspberry Pi or similar) with an Internet connection,
  – have access to a beacon node.

Hardware requirements for running the node will be better determined during testnet activities. For the validator client alone, something like a Raspberry Pi will be enough, but if you are running your own Beacon node, you’ll need a more powerful CPU and more storage space.

Keep in mind that your uptime, and therefore your Internet connection, are the most important things, as your stake gets slashed if you are offline, meaning that you are losing money.

How to become a validator?

The validator setup requires some technical knowledge and understanding. However, we want to enable everyone to easily become a validator and to know how well they perform. This will be possible with the desktop app that we are working on – ChainGuardian. More info about that soon; you can track progress on our NodeFactory Twitter and the ChainGuardian GitHub, or join our Discord! This app is being built thanks to MolochDAO.

ChainGuardian is a Windows/Linux/Mac app that will be a one-stop shop for validators. As a user, you will be able to fully onboard as a validator, which includes: making an ETH deposit, generating or importing your required key pairs (which only you own!), and running a validator client and a Beacon node. Most importantly, you will be able to observe the performance of your node and your return on investment, and get notified if your node is down so you don’t lose your earnings!

Here are a couple of sneak peeks below. We plan to expand these features much further, but currently we are focused on making a release where you can make an ETH deposit and effortlessly become a validator.

ChainGuardian desktop application previews

Basically, to become a validator, here are the steps that one needs to take:

  1. Install one of the previously listed Eth2 clients.
  2. Get Ether. In the testnet case, that’s Görli ETH. We understand this is not so easy to get, so we will provide a faucet that will get you this ETH and submit your deposit transaction for you.
  3. Generate a validator public and private key pair (used for signing your claims as a validator); see the key-generation sketch after this list.
  4. Start your validator client along with Beacon chain. You can use your Beacon chain node or some existing public server.
  5. Make the ETH deposit (stake) to Eth1.
  6. Wait to get assigned as a validator. Once your validator client is up and running, you just have to wait for its activation. This takes a few minutes (or probably hours in the case of mainnet) because of a voting period in which new deposits are added to the running chain by the other validators.
  7. Watch your validator create, vote and attest for blocks as well as earn rewards!
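As a rough illustration of step 3, the sketch below generates a BLS key pair in Python, assuming the py_ecc library’s BLS12-381 implementation. The official deposit tooling does considerably more (key derivation, withdrawal credentials, deposit data), so treat this only as a stand-in for the signing key pair itself.

import secrets

# Assumes the py_ecc library (pip install py_ecc); this is an illustrative
# stand-in, not the official deposit tooling.
from py_ecc.bls import G2ProofOfPossession as bls

# BLS12-381 curve order; valid private keys are integers in [1, CURVE_ORDER).
CURVE_ORDER = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def generate_keypair():
    privkey = secrets.randbelow(CURVE_ORDER - 1) + 1   # keep this secret and backed up!
    pubkey = bls.SkToPk(privkey)                       # 48-byte compressed public key
    return privkey, pubkey

privkey, pubkey = generate_keypair()
print("validator public key:", pubkey.hex())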

Once again, those steps will be a part of the ChainGuardian app onboarding.

Note about your key pairs

What’s important to understand when handling your validator node are the validator keys. You have a signing key, which is a hot wallet – an unlocked account that the client app uses for voting on and proposing blocks. You also need a withdrawal key, a separate cold wallet which will be used to withdraw funds in case you want to stop being a validator or your signing key gets compromised.

Conclusion

This is Ethereum 2.0 (Serenity) Phase 0, which includes the launch of the Beacon chain that manages the Casper Proof of Stake protocol for itself and all of the shard chains. Being Phase 0 only, we won’t have all the new features of Ethereum 2.0 just yet. For example, there is currently no way to withdraw deposited Ether from Eth2, as it is effectively burned on Eth1. However, although transfers weren’t planned for Phase 0, this is still open to discussion and change.

Once Phase 0 is complete, there will be two active Ethereum chains – the Eth1 chain (current) and the Eth2 chain (Beacon chain). They will operate in parallel during Phase 1 and Phase 2 as well. However, an Eth1 to Eth2 transition is planned.

For the Beacon chain to start, a minimum amount of staked ETH is needed. It is defined in the deposit contract and is currently set to 16,384 validators (524,288 ETH).

It may seem that this phase is not that significant, as we won’t be able to use everything from Ethereum that we got used to, but it is the foundation of the entire system. If we compare this phase to the beginning of the mining period, we can certainly draw conclusions about the advantages and profits of being first in line. There are all kinds of risks, but seeing the community around this and the efforts of the core developers, it’s hard not to be positive about how things will roll out.

Source: https://www.nodefactory.io/blog/how-to-become-a-validator-in-the-new-ethereum-2-proof-of-stake-system/

Blockchains are beginning to turn green. This post describes some of the IC3 research in this direction.

The glorious view from our windows at Cornell Tech takes in 432 Park Avenue, the tallest residential building in the world. This tower is a monument to many things. Above all, for a student of financial systems, it epitomizes ways to store wealth with breathtaking waste. (Fittingly, it was inspired by a wastepaper basket.) Buildings like it are sprouting up around NYC as investment vehicles for the ultra-wealthy, and their owners don’t actually live much in them. 432 Park and its ilk are essentially hollow vaults.

Something similar can be said for Bitcoin. As a concept and technological inspiration, Bitcoin is a marvelous thing. And unquestionably like 432 Park, it does see legitimate and valuable uses (and some shady ones). As a currency, though, Bitcoin serves in no small degree as a wasteful and ecologically damaging way for people to park their money.

There are a number of ways to substantiate this claim. One is in terms of its electricity consumption. Estimates vary, but it is likely that the Bitcoin network consumes roughly as much electricity as a nuclear reactor produces, about a third of the entire electricity consumption of Ireland. (See our back-of-the-envelope calculations in the blog notes.) To view this in another light, a recent IC3 paper estimated the cost per confirmed transaction at as much as $6.20 in capital costs and electricity. (Transaction rates have been rising, and today the figure is substantially lower, but still high.) That’s $6.20 in resources per transaction to move money between accounts in the same system.

Bitcoin proponents argue that this is simply the cost of decentralization. A credit-card network doesn’t provide the pseudonymity, freedom from government interference, portability, and other features of Bitcoin, so it isn’t comparable. This is true. But it isn’t a law of nature that a system like Bitcoin should be so resource-intensive. Researchers at IC3 believe that the many benefits of Bitcoin can be had without the waste. In a few papers released over the past month or so, we’ve outlined three different approaches to the development of greener alternatives:

  1. PieceWork is a tweak to standard cryptocurrency PoWs that enables recycling of mining effort.
  2. Resource-Efficient Mining (REM) repurposes innately useful workloads for mining effort. It relies on use of a trusted hardware technology called Intel SGX.
  3. Snow White is a Proof-of-Stake system with rigorous security guarantees.

PieceWork: Recycling PoWs


If we can’t reduce waste at the source, why not recycle? That’s the premise of the first and simplest idea, called PieceWork. PieceWork involves a slight modification to the standard Proof-of-Work (PoW) construction, decomposing it into two layers. One layer produces small PoWs called puzzlets that play a critical role in the mining process and can also, as we shall show, serve useful non-mining purposes.

Consider a standard cryptocurrency, abstracting away into a single value X the details of what gets hashed into a PoW (transactions, the previous block, etc.). A miner’s task then is simply to search for an input (“nonce”) n ∈ N for which

H(X, n) ≤ Z,

where Z is a threshold representing the difficulty of the PoW.

To decompose a PoW into two layers, we instead construct it as follows:

H(X, n) = Fout (X, Fin (X, n; rin ), rout ),

where rin = H0(r) and rout = H1(r) for distinct hash functions H0, H1 and a secret value r. (These two values are a technical requirement to prevent what are called block withholding attacks. See the blog notes.)

A valid solution is a value n such that

Fin (∙, n, ∙) < Zin and Fout (∙, n, ∙) < Zout.

To solve this puzzle or PoW, a miner must first find an n such that Fin (∙, n, ∙) < Zin. This inner puzzle is what we call a puzzlet. To solve the whole PoW, a miner must find a puzzlet solution. The puzzlet solution must additionally satisfy Fout (∙, n, ∙) < Zout, meaning that a miner must in general come up with many puzzlet solutions to solve the PoW as a whole. By setting Zin + Zout = Z, one obtains a PoW with the same difficulty as the original one above.

What’s the benefit of this two-layered structure? A puzzlet, i.e., the task of finding a solution n to Fin (∙, n ,∙) < Zin, can be outsourced by a miner or mining pool operator to a worker, and put to any of several non-cryptocurrency goals. DoS prevention for TLS is one example. TLS requires computationally intensive crypto operations from a server to set up connections. Thus it’s a natural target for DoS attacks, prompting the idea of requiring clients to solve PoWs if a server comes under attack, an idea now floated in an IETF proposal. These PoWs used for DoS mitigation can themselves be puzzlets. The effect is that the server becomes a mining pool operator, and its clients become workers. And a DoS attacker effectively showers the victim HTTPS server with cryptocurrency. (Of course, a server can also dispense puzzlets and make money even when it’s not under attack…) Other examples of puzzlet uses are spam prevention (the original PoW goal proposed by Dwork and Naor), MicroMint, and Tor relay payments.
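To make the two-layer structure and the outsourcing concrete, here is a small Python sketch of the worker/outsourcer split; the hash functions, thresholds, and encodings are illustrative assumptions rather than the parameters from the PieceWork paper.

import hashlib
import os

def h(*parts):
    """Hash a tuple of byte strings to an integer."""
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

MAX = 2 ** 256
Z_IN = MAX // 2 ** 12    # puzzlet (inner) threshold: easy, many solutions
Z_OUT = MAX // 2 ** 8    # outer threshold: only some puzzlet solutions complete the PoW

# The outsourcer keeps r secret, derives r_in and r_out with distinct hashes,
# and hands only r_in to workers, so they cannot tell which puzzlet solutions
# also solve the full PoW (the block-withholding protection described above).
r = os.urandom(32)
r_in = hashlib.sha256(b"\x00" + r).digest()
r_out = hashlib.sha256(b"\x01" + r).digest()

def solve_puzzlet(X, start=0):
    """Worker: find n with F_in(X, n; r_in) < Z_in."""
    n = start
    while h(X, n.to_bytes(8, "big"), r_in) >= Z_IN:
        n += 1
    return n

def completes_pow(X, n):
    """Outsourcer: does this puzzlet solution also satisfy the outer condition?"""
    inner = h(X, n.to_bytes(8, "big"), r_in)
    if inner >= Z_IN:
        return False
    return h(X, inner.to_bytes(32, "big"), r_out) < Z_OUT

X = b"block contents"
n = solve_puzzlet(X)
print(n, completes_pow(X, n))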

In summary, PieceWork requires only a small modification to standard cryptocurrency PoWs. It turns them into dual-use computational problems and recycles wasted mining effort. How much recycling it can feasibly accomplish is an open research question. PieceWork benefits from a number of previous, related ideas. Our short paper on it can be found here. PieceWork will be presented in April at BITCOIN 2017.

Resource-Efficient Mining (REM): Using Innately Useful Work as Mining Effort


A very different approach to minimizing waste is embraced in our second project, a system called REM. Rather than relying on hash-based PoWs, it makes use of an entirely different type of PoW, in which the W, i.e., the work, is useful. We call this concept Proof of Useful Work (PoUW).

Of course, traditional PoWs have several useful properties, prime among them the ease with which solutions can be verified. Most workloads don’t have this property. To enable verification of work on arbitrary useful workloads, REM relies on a new technology: Intel SGX.

Intel’s SGX (Software Guard eXtensions) is a trusted execution environment technology. In a nutshell, SGX enables the execution of an application in a hardware-protected environment, called an enclave, that is isolated from the operating system and other applications. It thus protects the application against tampering, even by the owner of the machine on which it’s running. SGX also enables generation of an attestation that proves to a remote party that a particular application was running in an enclave. SGX is already supported in many recent-model Intel CPUs.

A good way to see how SGX can facilitate mining is to consider an elegant mining scheme proposed by Intel called PoET (Proof of Elapsed Time). The idea behind PoET is simple. If miners use SGX, then they can be forced to use only a sanctioned piece of mining software that simulates PoWs. Standard PoWs have solution times that are exponentially distributed. A PoET client can thus sample a solution time from an exponential distribution, simply sit idle until this time elapses, and then wake up with a block in hand. The first client to awake gets to publish its block. An SGX attestation proves to others in the system that the client idled as it should have.
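Here is a toy Python sketch of that idea; the block interval is an arbitrary assumption, and in real PoET the sampling and idling happen inside an SGX enclave that then issues an attestation.

import random
import time

MEAN_BLOCK_INTERVAL_S = 12.0   # illustrative target block interval

def poet_round(client_id):
    # Sample an exponentially distributed "solution time", as a real PoW would have...
    wait = random.expovariate(1.0 / MEAN_BLOCK_INTERVAL_S)
    # ...then simply idle instead of burning electricity on hashing.
    time.sleep(wait)
    print(f"{client_id} wakes up after {wait:.2f}s with a block in hand")
    return wait

poet_round("client-1")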

PoET has several nice features. Foremost among them is the fact that (at first glance) it’s virtually energy-waste-free. Clients idle instead of hashing. Block solution times can be tuned to mimic those of a standard mining regime, like Bitcoin or Ethereum mining. Thus PoET can effectively be plugged into such schemes. It is also relatively egalitarian in that it achieves precisely one vote per CPU. PoET, though, has two technical challenges. We call these the broken chips and stale chips problems.

First, the broken chips problem. SGX security is imperfect and, as with any trusted hardware, it’s to be expected that a well-resourced adversary can break it. Thus, it’s to be expected that some SGX CPUs will be broken. In the basic PoET scheme, a broken chip has a devastating effect, as it enables a miner to simulate zero mining times and win every consensus round, i.e., publish all blocks. Intel has proposed a statistical testing regime to detect breaks, but details aren’t published, and formal foundations are needed for a good analysis.

REM faces the same challenge. To address it, we have developed a rigorous statistical testing regime with formal foundations and shown analytically and empirically that it is highly effective: it can strictly limit the gains of adversaries that have broken chips while minimizing incorrect rejection of blocks from honest miners.

The stale chips problem is more subtle. Our economic analysis shows that in many practical settings in PoET and related systems, it will be advantageous for a miner to buy old (“stale”) SGX CPUs and cobble them together into “farms” that mine cheaply. Such farms reinstate a fraction of the waste that PoET is trying to avoid to begin with. This is where REM’s Proof of Useful Work (PoUW) approach comes into play. In a nutshell, with PoUW, miners run whatever workloads they consider to be useful—protein-folding computations and ML classification algorithms are a couple of examples considered in our work. Miners can prove that they did work on these problems using SGX. The probability of a miner mining a block is proportional to the amount of work she does. Thus, REM turns otherwise useful work into mining effort. Making PoUW work is technically challenging. It requires that workloads be themselves compiled and instrumented using SGX to prove correctness, an innovation of independent interest introduced in REM.

The biggest objection lodged against SGX-based mining is the fact that it places Intel in charge, undermining the decentralization at the heart of permissionless ledgers. Of course, Intel is already a trust anchor. But we’d view this another way, and characterize REM and PoET as partially decentralized. You can read about REM here, in a paper under submission.

Snow White: Proof of Stake with Rigorous Security Guarantees


Our final approach to reducing cryptocurrency waste is one that has been proposed and studied by many projects in the cryptocurrency community since the inception of Bitcoin. The idea is called proof of stake, and it revolves around a simple change of premise: rather than mining simulating a lottery in which your chance of finding a block is proportional to your computing power, it simulates a lottery in which your chance of finding a block is proportional to the number of coins (or “stake”) you hold in the system.
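A minimal sketch of that lottery in Python: the chance of being chosen to produce a block is proportional to stake rather than to hash power (the stake table below is made up for illustration).

import random

stakes = {"alice": 40.0, "bob": 25.0, "carol": 35.0}   # coins held in the system

def pick_block_producer(stakes):
    """Draw a producer with probability proportional to stake."""
    holders = list(stakes)
    weights = [stakes[h] for h in holders]
    return random.choices(holders, weights=weights, k=1)[0]

print(pick_block_producer(stakes))   # alice wins ~40% of the time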

A key roadblock to the adoption and deployment of proof of stake systems involves questions around the security guarantees that they provide. This continues to be an ongoing source of controversy and debate in the community, with sources like the Bitcoin Wiki claiming that “Proof of Stake alone is considered to be an unworkable consensus mechanism” and efforts like Ethereum’s Casper project studying how to design a maximally useful and relevant proof of stake protocol for the next generation of cryptocurrencies.

Despite its potential shortfalls, we believe proof of stake represents a critical new development and direction in both the blockchain and distributed consensus fields. With this in mind, we set out to apply previous work by Rafael Pass (an IC3 member) and others, in which a model for analyzing and proving consistency, chain growth, and restrictions on adversarial chain impact for proof of work blockchains was developed.

To more accurately model the nature of blockchain distributed consensus and the implications of network delays, we propose a new model for consensus called the sleepy model. This model more accurately mimics the operation of permissionless blockchains and naturally captures their design. In the sleepy model, a user (node or miner) can leave or join the protocol at will. This is modeled by giving (non-crashed) users in the protocol the ability to “sleep”, i.e., go offline and rejoin the network at some unspecified later date with their original state unmodified. The key question then becomes: how can we design a useful consensus protocol in the sleepy model when at least half of all online nodes (or stake) are honestly following the protocol?

The guarantees of consistency and availability are rigorously defined in this new model, more accurately capturing the guarantees users expect from blockchain protocols. The analogues of proof-of-work-style guarantees like chain growth (availability) and chain quality (integrity) are also discussed. We believe this new class of consensus protocols in the “sleepy” model represents one of the fundamental contributions of blockchains to the distributed consensus space. None of the asynchronous, partially synchronous, or synchronous models, in either a permissioned or permissionless setting, proves sufficient to model or reason about these new consensus protocols or the probabilistic and often economic guarantees they provide.

To that end, we are working on two protocols proven in the Sleepy model: Sleepy and Snow White.

Sleepy is a simple protocol intended to achieve the guarantees of chain quality, chain growth, and consistency/agreement with 51% of online nodes being honest. This protocol is intended for deployment in a permissioned context, and assumes stake assigned or instantiated by some trusted source. This makes Sleepy ideal for bankchains or other permissioned deployments, in which the set of stakeholders is known a priori but the blockchain guarantees of robust, auditable distributed consensus remain desirable. Every second, every member of the committee is eligible to “mine” a new block in the system, which involves a standard block mining solution with a public source of entropy as the nonce. Standard difficulty adjustments retarget the block interval to a desired target, as in Bitcoin and Ethereum today. The challenges of choosing an appropriate, ungameable mining function and source of entropy are tackled in the work, and proof is given that no committee member can manipulate the protocol to their advantage.
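As a rough sketch of that per-second eligibility check (not the paper’s exact construction; the hash choice, encoding, and target here are illustrative assumptions):

import hashlib

TARGET = 2 ** 256 // 1_000   # illustrative difficulty target; retargeted in practice

def eligible(member_id, public_entropy, timestep):
    """A committee member may mine in this time step if its hash falls below TARGET."""
    digest = hashlib.sha256(
        public_entropy + member_id + timestep.to_bytes(8, "big")
    ).digest()
    return int.from_bytes(digest, "big") < TARGET

entropy = b"public-randomness-for-this-epoch"
committee = [b"node-a", b"node-b", b"node-c"]
print([m for m in committee if eligible(m, entropy, 1_000_000)])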

Snow White, on the other hand, is an extension of Sleepy intended to provide the same rigorous blockchain-derived guarantees in a permissionless setting, such as in the deployment of a public cryptocurrency. Obviously, this is substantially more difficult: choosing appropriate committee members for the block lottery, as well as ensuring that no coalition of these committee members (of bounded size) can game the protocol for more than a negligible advantage, are highly nontrivial. The resulting protocol is actually quite simple: each step, a committee mines as in Sleepy, with a shared source of entropy h0. With sufficiently many bits of entropy in h0 and an appropriately selected committee weighted on stake, it is possible to prove the desired result of chain quality, growth, and consistency in the Sleepy model. Choosing both the committee and h0 such that no adversary or non-majority coalition of adversaries gain substantial advantage by deviating from the protocol is the key to the construction and concrete parameters of the protocol, which are discussed further in our full publication.

Sleepy and Snow White represent the first rigorously justified and proven blockchain consensus protocols in the permissioned and permissionless proof of stake settings, respectively. It is our belief that the rigorous proofs of security are valuable both as theoretical efforts and as guides for protocol development and deployment. Both the proofs and the concrete parameterization of these protocols are highly non-obvious, and while heuristic protocols designed elsewhere in the community (with only informal justification) may operate in a similar manner to Sleepy, there is no guarantee that subtle network-level, timing, committee/stake-poisoning, and other attacks are not present in those protocols. In our work, we assume an optimal adversary with the ability to delay network messages up to some arbitrary time, a very strong notion of attacker that makes our protocols the most rigorous yet conceived in the space.

You can read about the papers in prepublication manuscripts we have uploaded for release on ePrint: Snow White, Sleepy. Further conference or journal publications with implementation details of these systems, full proofs, simulation results, and experimental comparisons to existing cryptocurrencies are currently in development. We hope to share more exciting news about these new protocols soon.

[It is worth noting that our willingness to assume that the majority of online coins are honestly following the protocol is an assumption that has been challenged (https://blog.ethereum.org/2016/12/06/history-casper-chapter-1/) by the Ethereum Foundation. We do not necessarily agree with these criticisms or this model; we believe that the ε-Nash equilibrium achievable in Snow White is sufficient for the design of a robust, decentralized coin. Nonetheless, we believe developing and proving protocols secure in this context is valuable: both as the most natural model for private blockchain deployments, and to illuminate common pitfalls in proof of stake protocol design that may lead to attacks in naive protocols. We look forward to a full specification of Ethereum’s Casper, and to comparing both its assumptions and attack surface with those of Snow White.]

Notes

Back-of-the-envelope Bitcoin electricity consumption calculation

There are many estimates of the electricity consumption of the Bitcoin network, but we don’t find them convincing. For example, this widely cited one derives an upper bound of 10 GW (in 2014!). As we’ll see from a simple calculation below, that would imply that miners were losing huge amounts of money. So here’s our crack at a crude estimate.

Using the technique in this paper, to obtain a lower bound on electricity consumption, let’s take the Antminer S9 to represent the state of the art in mining rigs. It consumes 0.098 W/GH. The current mining rate of the Bitcoin network is about 3,330,000 TH/s. Thus, were all miners using Antminer S9s, the electricity consumption of the network would be about 326 MW. (Of course, many miners are probably using less efficient rigs, so this is a loose lower bound.)

To obtain an upper bound on electricity consumption, assume that miners are rational, i.e., won’t mine if it causes them to lose money. At the current price of about $1000 / BTC, given a 12.5 BTC mining reward and block production rate of about 6 blocks per hour, the global mining reward per hour is about $72,500. A common, extremely cheap form of electricity used by miners is Chinese hydroelectric power; the very low end of the levelized cost of such electricity is $0.02 / kWh. Thus rational miners will consume no more than 3.625 GW of electricity. (Of course, this estimate disregards the capital costs of mining, and is therefore probably a quite loose upper bound.)

Taking the log-average of these two bounds yields an estimate of 1.075 GW, about the output of a single unit in a nuclear power station. Ireland’s average electricity consumption is about 3.25 GW (as derived from this 2013 figure).
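For reference, the whole back-of-the-envelope calculation fits in a few lines of Python, using the figures quoted above:

import math

# Lower bound: assume every miner runs an Antminer S9 (0.098 W/GH).
w_per_gh = 0.098
network_hashrate_ghs = 3_330_000 * 1_000          # 3,330,000 TH/s in GH/s
lower_bound_w = w_per_gh * network_hashrate_ghs   # ~326 MW

# Upper bound: rational miners spend no more on electricity than they earn.
reward_per_hour_usd = 72_500                      # global mining reward per hour
usd_per_kwh = 0.02                                # very cheap hydroelectric power
upper_bound_w = reward_per_hour_usd / usd_per_kwh * 1_000   # kW -> W, ~3.6 GW

# Log-average (geometric mean) of the two bounds, close to the ~1.075 GW figure above.
estimate_w = math.sqrt(lower_bound_w * upper_bound_w)
print(f"lower ~{lower_bound_w / 1e6:.0f} MW, upper ~{upper_bound_w / 1e9:.2f} GW, "
      f"estimate ~{estimate_w / 1e9:.2f} GW")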

Again, this is a crude estimate, but we believe it’s probably within a factor of 2 of the real one.

Why use rin and rout in PieceWork?

It is possible to outsource mining with the standard cryptocurrency PoW H(X,n) ≤ Z, simply by declaring a puzzlet to be the problem of finding an n such that H(X,n) ≤ Z_{easy}, for some Z_{easy} > Z. In other words, a worker can be asked to find a solution to a PoW easier than the target. But with some probability, a solution to H(X,n) ≤ Z_{easy} will also be a solution to H(X,n) ≤ Z, i.e., will solve the outsourcing miner’s PoW.

The problem with this approach is that a malicious worker can mount a block withholding attack. If it happens to find a solution to H(X,n) ≤ Z, it can simply not submit it. Or it can demand a bribe from the outsourcer. Use of rin and rout conceals from a worker whether or not a puzzlet solution is also a solution to the global PoW, preventing such attacks.

Source: The Greening of Blockchains