Special thanks to Robert Sams for the development of Seignorage Shares and insights regarding how to correctly value volatile coins in multi-currency systems

One of the main problems with Bitcoin for ordinary users is that, while the network may be a great way of sending payments, with lower transaction costs, much more expansive global reach, and a very high level of censorship resistance, Bitcoin the currency is a very volatile means of storing value. Although the currency has by and large grown by leaps and bounds over the past six years, past performance is no guarantee (and, by the efficient market hypothesis, not even an indicator) of future results, especially in financial markets, and the currency also has an established reputation for extreme volatility; over the past eleven months, Bitcoin holders have lost about 67% of their wealth, and the price quite often moves up or down by as much as 25% in a single week. Seeing this concern, there is growing interest in a simple question: can we get the best of both worlds? Can we have the full decentralization that a cryptographic payment network offers, but at the same time have a higher level of price stability, without such extreme upward and downward swings?

Last week, a team of Japanese researchers made a proposal for an “improved Bitcoin”, which was an attempt to do just that: whereas Bitcoin has a fixed supply, and a volatile price, the researchers’ Improved Bitcoin would vary its supply in an attempt to mitigate the shocks in price. However, the problem of making a price-stable cryptocurrency, as the researchers realized, is much different from that of simply setting up an inflation target for a central bank. The underlying question is more difficult: how do we target a fixed price in a way that is both decentralized and robust against attack?

To resolve the issue properly, it is best to break it down into two mostly separate sub-problems:

  1. How do we measure a currency’s price in a decentralized way?
  2. Given a desired supply adjustment to target the price, to whom do we issue and how do we absorb currency units?

Decentralized Measurement

For the decentralized measurement problem, there are two known major classes of solutions: exogenous solutions, mechanisms which try to measure the price with respect to some precise index from the outside, and endogenous solutions, mechanisms which try to use internal variables of the network to measure price. As far as exogenous solutions go, so far the only reliable known class of mechanisms for (possibly) cryptoeconomically securely determining the value of an exogenous variable are the different variants of Schellingcoin – essentially, have everyone vote on what the result is (using some set chosen randomly based on mining power or stake in some currency to prevent sybil attacks), and reward everyone that provides a result that is close to the majority consensus. If you assume that everyone else will provide accurate information, then it is in your interest to provide accurate information in order to be closer to the consensus – a self-reinforcing mechanism much like cryptocurrency consensus itself.
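As a rough illustration, here is a minimal sketch of a SchellingCoin-style reward rule. The specific rule below (only voters whose submission lands within 1% of the median share the reward) is a simplifying assumption, not any particular proposal's exact mechanism:

```python
import statistics

def schelling_rewards(votes, total_reward=100.0, tolerance=0.01):
    """votes: dict of voter_id -> submitted price.
    Voters within `tolerance` (1%) of the median submission split the reward
    equally; everyone else gets nothing. (A simplifying assumption; real
    proposals differ in how "close to the consensus" is defined.)"""
    median = statistics.median(votes.values())
    winners = [v for v, p in votes.items() if abs(p - median) <= tolerance * median]
    payout = total_reward / len(winners) if winners else 0.0
    return {v: (payout if v in winners else 0.0) for v in votes}

# Four honest voters near $350 split the reward; the $500 outlier earns nothing.
print(schelling_rewards({"a": 349, "b": 350, "c": 351, "d": 352, "e": 500}))
```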


The main problem with Schellingcoin is that it’s not clear exactly how stable the consensus is. Particularly, what if some medium-sized actor pre-announces some alternative value to the truth that would be beneficial for most actors to adopt, and the actors manage to coordinate on switching over? If there was a large incentive, and if the pool of users was relatively centralized, it might not be too difficult to coordinate on switching over.

There are three major factors that can influence the extent of this vulnerability:

  1. Is it likely that the participants in a schellingcoin actually have a common incentive to bias the result in some direction?
  2. Do the participants have some common stake in the system that would be devalued if the system were to be dishonest?
  3. Is it possible to “credibly commit” to a particular answer (ie. commit to providing the answer in a way that obviously can’t be changed)?

(1) is rather problematic for single-currency systems, as if the set of participants is chosen by their stake in the currency then they have a strong incentive to pretend the currency price is lower so that the compensation mechanism will push it up, and if the set of participants is chosen by mining power then they have a strong incentive to pretend the currency’s price is too high so as to increase the issuance. Now, if there are two kinds of mining, one of which is used to select Schellingcoin participants and the other to receive a variable reward, then this objection no longer applies, and multi-currency systems can also get around the problem. (2) is true if the participant selection is based on either stake (ideally, long-term bonded stake) or ASIC mining, but false for CPU mining. However, we should not simply count on this incentive to outweigh (1).

(3) is perhaps the hardest; it depends on the precise technical implementation of the Schellingcoin. A simple implementation, where values are just submitted to the blockchain, is problematic because submitting one’s value early is itself a credible commitment. The original SchellingCoin used a mechanism of having everyone submit a hash of the value in the first round, and the actual value in the second round, sort of a cryptographic equivalent to requiring everyone to put down a card face down first, and then flip it at the same time; however, this too allows credible commitment by revealing (even if not submitting) one’s value early, as the value can be checked against the hash.
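A minimal sketch of that commit-reveal pattern (using SHA-256 and a random salt as assumptions); note that it also illustrates the weakness just described, since showing the value and salt to others early is itself a credible commitment:

```python
import hashlib, os

def commit(value):
    """Round 1: publish only the hash of (value, salt); keep value and salt private."""
    salt = os.urandom(16).hex()
    digest = hashlib.sha256(f"{value}:{salt}".encode()).hexdigest()
    return digest, salt

def reveal_matches(digest, value, salt):
    """Round 2: everyone reveals (value, salt); anyone can check it against the hash."""
    return hashlib.sha256(f"{value}:{salt}".encode()).hexdigest() == digest

digest, salt = commit(350.0)                 # submitted in round one
print(reveal_matches(digest, 350.0, salt))   # True -- but privately showing
                                             # (350.0, salt) to others before round
                                             # two is already a credible commitment
```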

A third option is requiring all of the participants to submit their values directly, but only during a specific block; if a participant does release a submission early they can always “double-spend” it. The 12-second block time would mean that there is almost no time for coordination. The creator of the block can be strongly incentivized (or even, if the Schellingcoin is an independent blockchain, required) to include all submissions, to discourage or prevent the block maker from picking and choosing answers. A fourth class of options involves some secret sharing or secure multiparty computation mechanism, using a collection of nodes, themselves selected by stake (perhaps even the participants themselves), as a sort of decentralized substitute for a centralized server solution, with all the privacy that such an approach entails.

Finally, a fifth strategy is to do the schellingcoin “blockchain-style”: every period, some random stakeholder is selected and told to provide their vote as an [id, value] pair, where value is the actual value and id is an identifier of the previous vote that looks correct. The incentive to vote correctly is that only votes that remain in the main chain after some number of blocks are rewarded, and future voters will not attach their vote to a vote that is incorrect, fearing that if they do, the voters after them will reject their vote.

Schellingcoin is an untested experiment, and so there is legitimate reason to be skeptical that it will work; however, if we want anything close to a perfect price measurement scheme it’s currently the only mechanism that we have. If Schellingcoin proves unworkable, then we will have to make do with the other kinds of strategies: the endogenous ones.

Endogenous Solutions

To measure the price of a currency endogenously, what we essentially need is to find some service inside the network that is known to have a roughly stable real-value price, and measure the price of that service inside the network as measured in the network’s own token. Examples of such services include:

  • Computation (measured via mining difficulty)
  • Transaction fees
  • Data storage
  • Bandwidth provision

A slightly different, but related, strategy is to measure some statistic that correlates indirectly with price, usually a metric of the level of usage; one example of this is transaction volume.

The problem with all of these services, however, is that none of them are very robust against rapid changes due to technological innovation. Moore’s Law has so far guaranteed that most forms of computational services become cheaper at a rate of 2x every two years, and it could easily speed up to 2x every 18 months or slow down to 2x every five years. Hence, trying to peg a currency to any of those variables will likely lead to a system which is hyperinflationary, and so we need some more advanced strategies for using these variables to determine a more stable metric of the price.

First, let us set up the problem. Formally, we define an estimator to be a function which receives a data feed of some input variable (eg. mining difficulty, transaction cost in currency units, etc) D[1], D[2], D[3]…, and needs to output a stream of estimates of the currency’s price, P[1], P[2], P[3]… The estimator obviously cannot look into the future; P[i] can be dependent on D[1], D[2] … D[i], but not D[i+1]. Now, to start off, let us graph the simplest possible estimator on Bitcoin, which we’ll call the naive estimator: difficulty equals price.
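In code, the estimator interface and the naive estimator might look like the following sketch (the scaling constant is an assumption; in practice it would be fitted on the training data described below):

```python
def naive_estimator(D, scale=1.0):
    """D: the data feed D[1], D[2], ... (here, mining difficulty).
    Returns price estimates P[1], P[2], ...; each P[i] uses only D[1..i],
    and for the naive estimator simply P[i] = scale * D[i]."""
    return [scale * d for d in D]
```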


Unfortunately, the problem with this approach is obvious from the graph and was already mentioned above: difficulty is a function of both price and Moore’s law, and so it gives results that depart from any accurate measure of the price exponentially over time. The first immediate strategy to fix this problem is to try to compensate for Moore’s law, using the difficulty but artificially reducing the price by some constant per day to counteract the expected speed of technological progress; we’ll call this the compensated naive estimator. Note that there are an infinite number of versions of this estimator, one for each depreciation rate, and all of the other estimators that we show here will also have parameters.
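A sketch of one plausible reading of the compensated naive estimator, discounting the difficulty by a fixed percentage per day (the 0.48%-per-day figure is the value the parameter search below settles on):

```python
def compensated_naive_estimator(D, daily_drop=0.0048):
    """Discount the raw daily difficulty readings by `daily_drop` per day to
    cancel out the assumed pace of technological progress; whatever remains
    is treated as the price estimate."""
    return [d * (1.0 - daily_drop) ** i for i, d in enumerate(D)]
```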

The way that we will select the parameter for our version is by using a variant of simulated annealing to find the optimal values, using the first 780 days of the Bitcoin price as “training data”. The estimators are then left to perform as they would for the remaining 780 days, to see how they would react to conditions that were unknown when the parameters were optimized (this technique, known as “cross-validation”, is standard in machine learning and optimization theory). The optimal value for the compensated estimator is a drop of 0.48% per day, leading to this chart:


The next estimator that we will explore is the bounded estimator. The way the bounded estimator works is somewhat more complicated. By default, it assumes that all growth in difficulty is due to Moore’s law. However, it assumes that Moore’s law cannot go backwards (ie. technology getting worse), and that Moore’s law cannot go faster than some rate – in the case of our version, 5.88% per two weeks, or roughly quadrupling every year. Any growth outside these bounds it assumes is coming from price rises or drops. Thus, for example, if the difficulty rises by 20% in one period, it assumes that 5.88% of it is due to technological advancements, and the remaining 14.12% is due to a price increase, and thus a stabilizing currency based on this estimator might increase supply by 14.12% to compensate. The theory is that cryptocurrency price growth to a large extent happens in rapid bubbles, and thus the bounded estimator should be able to capture the bulk of the price growth during such events.
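Here is a sketch of the bounded estimator over a bi-weekly difficulty series; note that this version attributes growth multiplicatively, whereas the 20% = 5.88% + 14.12% example above splits it additively for simplicity:

```python
def bounded_estimator(D, max_tech_growth=0.0588):
    """D: difficulty readings, one per two-week period. Growth within
    [0, max_tech_growth] per period is attributed to Moore's law (assumed
    never to run backwards or faster than roughly 4x per year); any growth
    outside those bounds is attributed to price movements."""
    tech = 1.0
    estimates = [D[0] / tech]
    for prev, cur in zip(D, D[1:]):
        growth = cur / prev - 1.0
        tech_growth = min(max(growth, 0.0), max_tech_growth)  # clamp to [0, 5.88%]
        tech *= 1.0 + tech_growth
        estimates.append(cur / tech)   # leftover growth shows up as price
    return estimates
```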


There are more advanced strategies as well; the best strategies should take into account the fact that ASIC farms take time to set up, and also follow a hysteresis effect: it’s often viable to keep an ASIC farm online if you already have it even when under the same conditions it would not be viable to start up a new one. A simple approach is looking at the rate of increase of the difficulty, and not just the difficulty itself, or even using a linear regression analysis to project difficulty 90 days into the future. Here is a chart containing the above estimators, plus a few others, compared to the actual price:

Note that the chart also includes three estimators that use statistics other than Bitcoin mining: a simple and an advanced estimator using transaction volume, and an estimator using the average transaction fee. We can also split up the mining-based estimators from the other estimators:

 

 

See https://github.com/ethereum/economic-modeling/tree/master/stability for the source code that produced these results.

Of course, this is only the beginning of endogenous price estimator theory; a more thorough analysis involving dozens of cryptocurrencies will likely go much further. The best estimators may well end up using a combination of different measures; seeing how the difficulty-based estimators overshot the price in 2014 and the transaction-based estimators undershot the price, the two combined could end up being substantially more accurate. The problem is also going to get easier over time as we see the Bitcoin mining economy stabilize toward something closer to an equilibrium where technology improves only as fast as the general Moore’s law rule of 2x every 2 years.

The other issue that all of these estimators have to contend with is exploitability: if transaction volume is used to determine the currency’s price, then an attacker can manipulate the price very easily by simply sending very many transactions. The average transaction fees paid in Bitcoin total about $5000 per day; at that cost, in a stabilized currency, an attacker would be able to halve the price. Mining difficulty, however, is much more difficult to exploit simply because the market is so large. If a platform does not want to accept the inefficiencies of wasteful proof of work, an alternative is to build in a market for other resources, such as storage, instead; Filecoin and Permacoin are two efforts that attempt to use a decentralized file storage market as a consensus mechanism, and the same market could easily be dual-purposed to serve as an estimator.

The Issuance Problem

Now, even if we have a reasonably good, or even perfect, estimator for the currency’s price, we still have the second problem: how do we issue or absorb currency units? The simplest approach is to simply issue them as a mining reward, as proposed by the Japanese researchers. However, this has two problems:

  1. Such a mechanism can only issue new currency units when the price is too high; it cannot absorb currency units when the price is too low.
  2. If we are using mining difficulty in an endogenous estimator, then the estimator needs to take into account the fact that some of the increases in mining difficulty will be a result of an increased issuance rate triggered by the estimator itself.

If not handled very carefully, the second problem has the potential to create some rather dangerous feedback loops in either direction; however, if we use a different market as an estimator and as an issuance model then this will not be a problem. The first problem seems serious; in fact, one can interpret it as saying that any currency using this model will always be strictly worse than Bitcoin, because Bitcoin will eventually have an issuance rate of zero and a currency using this mechanism will have an issuance rate always above zero. Hence, the currency will always be more inflationary, and thus less attractive to hold. However, this argument is not quite true; the reason is that when a user purchases units of the stabilized currency, they can be more confident that the units are not already overvalued at the time of purchase and about to decline. Alternatively, one can note that extremely large swings in price are justified by changing estimations of the probability that the currency will become thousands of times more expensive; clipping off this possibility will reduce the upward and downward extent of these swings. For users who care about stability, this risk reduction may well outweigh the increased general long-term supply inflation.

BitAssets

A second approach is the (original implementation of the) “bitassets” strategy used by Bitshares. This approach can be described as follows:

  1. There exist two currencies, “vol-coins” and “stable-coins”.
  2. Stable-coins are understood to have a value of $1.
  3. Vol-coins are an actual currency; users can have a zero or positive balance of them. Stable-coins exist only in the form of contracts-for-difference (ie. every negative stable-coin is really a debt to someone else, collateralized by at least 2x the value in vol-coins, and every positive stable-coin is the ownership of that debt).
  4. If the value of someone’s stable-coin debt exceeds 90% of the value of their vol-coin collateral, the debt is cancelled and the entire vol-coin collateral is transferred to the counterparty (“margin call”)
  5. Users are free to trade vol-coins and stable-coins with each other.

And that’s it. The key piece that makes the mechanism (supposedly) work is the concept of a “market peg”: because everyone understands that stable-coins are supposed to be worth $1, if the value of a stable-coin drops below $1, then everyone will realize that it will eventually go back to $1, and so people will buy it, so it actually will go back to $1 – a self-fulfilling prophecy argument. And for a similar reason, if the price goes above $1, it will go back down. Because stable-coins are a zero-total-supply currency (ie. each positive unit is matched by a corresponding negative unit), the mechanism is not intrinsically unworkable; a price of $1 could be stable with ten users or ten billion users (remember, fridges are users too!).

However, the mechanism has some rather serious fragility properties. Sure, if the price of a stable-coin goes to $0.95, and it’s a small drop that can easily be corrected, then the mechanism will come into play, and the price will quickly go back to $1. However, if the price suddenly drops to $0.90, or lower, then users may interpret the drop as a sign that the peg is actually breaking, and will start scrambling to get out while they can – thus making the price fall even further. At the end, the stable-coin could easily end up being worth nothing at all. In the real world, markets do often show positive feedback loops, and it is quite likely that the only reason the system has not fallen apart already is because everyone knows that there exists a large centralized organization (BitShares Inc) which is willing to act as a buyer of last resort to maintain the “market” peg if necessary.

Note that BitShares has now moved to a somewhat different model involving price feeds provided by the delegates (participants in the consensus algorithm) of the system; hence the fragility risks are likely substantially lower now.

SchellingDollar

An approach vaguely similar to BitAssets that arguably works much better is the SchellingDollar (so called because it was originally intended to work with the SchellingCoin price detection mechanism, but it can also be used with endogenous estimators), defined as follows:

  1. There exist two currencies, “vol-coins” and “stable-coins”. Vol-coins are initially distributed somehow (eg. pre-sale), but initially no stable-coins exist.
  2. Users may have only a zero or positive balance of vol-coins. Users may have a negative balance of stable-coins, but can only acquire or increase their negative balance of stable-coins if they have a quantity of vol-coins equal in value to twice their new stable-coin balance (eg. if a stable-coin is $1 and a vol-coin is $5, then if a user has 10 vol-coins ($50) they can at most reduce their stable-coin balance to -25)
  3. If the value of a user’s negative stable-coins exceeds 90% of the value of the user’s vol-coins, then the user’s stable-coin and vol-coin balances are both reduced to zero (“margin call”). This prevents situations where accounts exist with negative-valued balances and the system goes bankrupt as users run away from their debt.
  4. Users can convert their stable-coins into vol-coins or their vol-coins into stable-coins at a rate of $1 worth of vol-coin per stable-coin, perhaps with a 0.1% exchange fee. This mechanism is of course subject to the limits described in (2).
  5. The system keeps track of the total quantity of stable-coins in circulation. If the quantity exceeds zero, the system imposes a negative interest rate to make positive stable-coin holdings less attractive and negative holdings more attractive. If the quantity is less than zero, the system similarly imposes a positive interest rate. Interest rates can be adjusted via something like a PID controller, or even a simple “increase or decrease by 0.2% every day based on whether the quantity is positive or negative” rule.

Here, we do not simply assume that the market will keep the price at $ 1; instead, we use a central-bank-style interest rate targeting mechanism to artificially discourage holding stable-coin units if the supply is too high (ie. greater than zero), and encourage holding stable-coin units if the supply is too low (ie. less than zero). Note that there are still fragility risks here. First, if the vol-coin price falls by more than 50% very quickly, then many margin call conditions will be triggered, drastically shifting the stable-coin supply to the positive side, and thus forcing a high negative interest rate on stable-coins. Second, if the vol-coin market is too thin, then it will be easily manipulable, allowing attackers to trigger margin call cascades.
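For concreteness, here is a minimal sketch of rules (3) and (5) above, using the simple fixed-step interest rate rule rather than a PID controller:

```python
def adjust_interest_rate(rate, total_stable_supply, step=0.002):
    """Rule (5): if the aggregate stable-coin supply is above zero, lower the
    interest rate by 0.2% per day to discourage holding stable-coins; if it is
    below zero, raise it. A PID controller could replace this simple rule."""
    if total_stable_supply > 0:
        return rate - step
    if total_stable_supply < 0:
        return rate + step
    return rate

def margin_call_due(stable_debt_value, vol_collateral_value):
    """Rule (3): a user whose negative stable-coin position is worth more than
    90% of their vol-coin collateral has both balances reduced to zero."""
    return stable_debt_value > 0.9 * vol_collateral_value
```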

Another concern is, why would vol-coins be valuable? Scarcity alone will not provide much value, since vol-coins are inferior to stable-coins for transactional purposes. We can see the answer by modeling the system as a sort of decentralized corporation, where “making profits” is equivalent to absorbing vol-coins and “taking losses” is equivalent to issuing vol-coins. The system’s profit and loss scenarios are as follows:

  • Profit: transaction fees from exchanging stable-coins for vol-coins
  • Profit: the extra 10% in margin call situations
  • Loss: situations where the vol-coin price falls while the total stable-coin supply is positive, or rises while the total stable-coin supply is negative (the first case is more likely to happen, due to margin-call situations)
  • Profit: situations where the vol-coin price rises while the total stable-coin supply is positive, or falls while it’s negative

Note that the second profit is in some ways a phantom profit; when users hold vol-coins, they will need to take into account the risk that they will be on the receiving end of this extra 10% seizure, which cancels out the benefit to the system from the profit existing. However, one might argue that because of the Dunning-Kruger effect users might underestimate their susceptibility to eating the loss, and thus the compensation will be less than 100%.

Now, consider a strategy where a user tries to hold on to a constant percentage of all vol-coins. When x% of vol-coins are absorbed, the user sells off x% of their vol-coins and takes a profit, and when new vol-coins equal to x% of the existing supply are released, the user increases their holdings by the same portion, taking a loss. Thus, the user’s net profit is proportional to the total profit of the system.
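A hypothetical worked example of that constant-fraction strategy, with the vol-coin price held constant purely to keep the arithmetic visible; the holder's profit and loss comes out proportional to the system's net absorption of vol-coins:

```python
fraction = 0.01                       # the holder targets 1% of all vol-coins
supply, holdings, pnl = 1_000_000.0, 10_000.0, 0.0
vol_price = 5.0                       # assumed constant for illustration only

for supply_change in (-20_000, +50_000, -10_000):   # absorb, issue, absorb
    supply += supply_change
    target = fraction * supply
    pnl += (holdings - target) * vol_price          # sell surplus / buy shortfall
    holdings = target

print(pnl)   # -1000.0 = -(net issuance of 20,000) * 1% * $5, mirroring the system
```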

Seignorage Shares

A fourth model is “seignorage shares”, courtesy of Robert Sams. Seignorage shares is a rather elegant scheme that, in my own simplified take on the scheme, works as follows:

  1. There exist two currencies, “vol-coins” and “stable-coins” (Sams uses “shares” and “coins”, respectively)
  2. Anyone can purchase vol-coins for stable-coins or stable-coins for vol-coins from the system at a rate of $1 worth of vol-coin per stable-coin, perhaps with a 0.1% exchange fee

Note that in Sams’ version, an auction was used to sell off newly-created stable-coins if the price went too high, and to buy stable-coins back if it went too low; this mechanism has basically the same effect, except using an always-available fixed price in place of an auction. However, the simplicity comes at the cost of some degree of fragility. To see why, let us make a similar valuation analysis for vol-coins. The profit and loss scenarios are simple:

  • Profit: absorbing vol-coins to issue new stable-coins
  • Loss: issuing vol-coins to absorb stable-coins

The same valuation strategy applies as in the other case, so we can see that the value of the vol-coins is proportional to the expected total future increase in the supply of stable-coins, adjusted by some discounting factor. Thus, here lies the problem: if the system is understood by all parties to be “winding down” (eg. users are abandoning it for a superior competitor), and thus the total stable-coin supply is expected to go down and never come back up, then the value of the vol-coins drops below zero, so vol-coins hyperinflate, and then stable-coins hyperinflate. In exchange for this fragility risk, however, vol-coins can achieve a much higher valuation, so the scheme is much more attractive to cryptoplatform developers looking to earn revenue via a token sale.

Note that both the SchellingDollar and seignorage shares, if they are on an independent network, also need to take into account transaction fees and consensus costs. Fortunately, with proof of stake, it should be possible to make consensus cheaper than transaction fees, in which case the difference can be added to profits. This potentially allows for a larger market cap for the SchellingDollar’s vol-coin, and allows the market cap of seignorage shares’ vol-coins to remain above zero even in the event of a substantial, albeit not total, permanent decrease in stable-coin volume. Ultimately, however, some degree of fragility is inevitable: at the very least, if interest in a system drops to near-zero, then the system can be double-spent and estimators and Schellingcoins exploited to death. Even sidechains, as a scheme for preserving one currency across multiple networks, are susceptible to this problem. The question is simply (1) how do we minimize the risks, and (2) given that risks exist, how do we present the system to users so that they do not become overly dependent on something that could break?

Conclusions

Are stable-value assets necessary? Given the high level of interest in “blockchain technology” coupled with disinterest in “Bitcoin the currency” that we see among so many in the mainstream world, perhaps the time is ripe for stable-currency or multi-currency systems to take over. There would then be multiple separate classes of cryptoassets: stable assets for trading, speculative assets for investment, and Bitcoin itself may well serve as a unique Schelling point for a universal fallback asset, similar to the current and historical functioning of gold.

If that were to happen, and particularly if the stronger version of price stability based on Schellingcoin strategies could take off, the cryptocurrency landscape may end up in an interesting situation: there may be thousands of cryptocurrencies, of which many would be volatile, but many others would be stable-coins, all adjusting prices nearly in lockstep with each other; hence, the situation could even end up being expressed in interfaces as a single super-currency, but where different blockchains randomly give positive or negative interest rates, much like Ferdinando Ametrano’s “Hayek Money”. The true cryptoeconomy of the future may have not even begun to take shape.

The post The Search for a Stable Cryptocurrency appeared first on ethereum blog.


Hi, I’m Stephan Tual, and I’ve been responsible for Ethereum’s adoption and education since January as CCO. I’m also leading our UK ÐΞV hub, located at Co-Work in Putney (South West London).

I feel really privileged to be able to lead the effort on the communication strategy at ÐΞV. For the very first time, we’re seeing the mainstream public take a genuine interest in the potential of decentralisation. The feeling of excitement about what ‘could be’ when I first read Vitalik’s whitepaper on that fateful Christmas afternoon is now shared by tens of thousands of technologists, developers and entrepreneurs.

Thanks to the Ether sale, a group of smart, hardworking individuals is now able to work full time on solving core technical and adoption challenges, and to deliver a solution at 10x the speed an equivalent garage-based initiative would have taken. With Ethereum’s APIs supporting Gav’s vision for web 3, it is finally within the reach of the community to build decentralized applications without middlemen. By democratizing access to programmable blockchain technology, Ethereum empowers software developers and entrepreneurs to make a major impact on not only the decentralisation of the economy, but also social structures, voting mechanisms and so much more. It’s a very ambitious project, and everyone at ÐΞV feels a strong sense of duty to deliver on this vision.

As part of our efforts, technology is – of course – key, but so is adoption. Ethereum without dapps (decentralized apps) would be akin to a video game console without launch titles, and, just like any protocol, we expect the applications to be the real stars of the show. Here’s how we plan to spread the word and support developers in their efforts.

Education

Building a curriculum: We’re building an extensive curriculum adapted to both teachers and self-learners at home, at hackathons and in universities around the globe. Consisting of well-defined modules that progress in complexity over time, the curriculum aims to establish a learning standard that will of course be completely free of charge and 100% open source.

Content aggregation: at the moment we are aware that information on how to ‘get into’ Ethereum is a little bit fragmented between forums, multiple wikis and various 3rd party sites. A subdomain to our website will be created during the course of the next few months to access this valuable information easily and in one place.

Produce tutorials, videos and articles: tutorials are key to learning a new set of languages and tools. By producing both video and text-based tutorials, we intend to give the community an insider’s view on best practices, from structuring contract storage to leveraging the new Whisper P2P messaging system, for example.

CodeAcademy-like site: not everyone likes to learn within a classroom environment, and some feel constrained by linear tutorials. With a release date coinciding with the launch of Ethereum, we’re partnering with a US-based company to build a CodeAcademy-like site within a gamified environment, where you’ll be able to learn at your own pace how to build dapps and their backend contracts.

University chapters: Vitalik and I recently gave a presentation at Cambridge University, and Ethereum will participate in the Hackathon on Transparency on November 26 at the University of Geneva. Encouraged by the enthusiasm we’ve witnessed in the academic world, we are working to directly support the Oxbridge Blocktech Network (OBN) in their efforts to build a network of chapters, first within the UK and then throughout Europe.

I’m happy to announce that Ken Kappler has joined the UK team to help with these educational efforts. Many of you in London know Ken as he’s been a semi-permanent fixture at all our meetups and hackathons, kindly helping behind the scenes. Ken, known as BlueChain on IRC, is also the writer behind http://dappsforbeginners.wordpress.com/ which will soon merge with our own education site.

Ken will lead a weekly ‘Ethereum Clinic’ on IRC to answer any questions you might have with your current project. Times will be posted on our forums.

Meetups

Encouraging the creation of new meetups: we now have an extensive network of 85 meetups worldwide, which is an amazing achievement but not sufficient to handle the overwhelming demand for regular catchups in a format that’s appropriate for local needs and culture. We intend to encourage the creation of new meetups in almost every country and major urban hub.

Tooling and support: in order to drive the effort to create and maintain such a large network of international meetups, we will be providing tools for meetup leaders to interact with each other, gain access to the core dev team for video-conference or physical interventions, and exchange information about speakers. These tools will of course be free to use and access.

Collaterals and venues: for the meetups that are the most active, Ethereum is considering, where appropriate, the use of small bursaries so that meetup leaders in these ‘core locales’ do not have to contend with the full costs of collaterals and venues. We will also work with our partners to help meetups secure sponsorships and access to free locations to hold their mini-conferences.

Global Hackathon: Starting this week, the Ethereum workshops are going to slowly transform into proper hackathons. We are working with wonderful locations around the globe, the vast majority of which started off as Ethereum meetups, to organize a worldwide hackathon with some great ETH prizes for the best dapps.

We’re very lucky to welcome Anthony D’onofrio to drive these very important initiatives. Anthony starts on the 10th of this month and will also cover the North American region from a community perspective – you probably already know him as ‘Texture’, his handle on most forums and channels.

Community

I’m incredibly proud that Ethereum’s exposure in the community has been entirely organic since day one. This has been the result of major, time-consuming efforts to identify Ethereum projects in the wild, reaching out directly and building a strong relationship with our user base. We have achieved several key milestones, including over 10,000 followers on Twitter, 100,000 page views per month on our website and similar numbers on our forums, and the growth trend only continues to accelerate.

Historically, Ethereum has never used PR as a tool to increase adoption, relying instead on word of mouth, meetups and conferences to spread the word. As the media attention is now intensifying rapidly, I’m pleased to welcome Freya Stevens to the team as PR/Marketing lead. Freya will help us build a shared database of media leads, write articles and make complex technology palatable to the general public while identifying strong story angles. Freya is based out of Cambridge, UK.

Also, with a view to scaling up these initiatives, we’re proud to welcome George Hallam, AKA thehighfiveghost, to the team; he recently posted a survey so you can let Ethereum know how well we’re doing our job as custodians and developers of the platform. George, as a key supporter of the London community, will already be familiar to many.

As part of these efforts, expect to see a lot more interactions on Reddit, IRC, Discuss and of course our very own forums. George will also help me identify key Ethereum-based projects and make contact to see how we can best help with information, connections and inclusion as guests in our weekly video updates, shot at our Putney Studio.

In order to produce very high quality content, we are also welcoming Ian Meikle to the London Hub. Ian is the creator of most of the video materials you might have seen relating to Ethereum, including the superb video loop that has been a staple at many Ethereum meetups. Ian will leverage the equipment at our studio to create explainer videos, interview key players in the space, and record panels led by Vinay Gupta, who joins the comms team as Strategic Consultant.

In London, and above and beyond our existing panels, socials and hackathons, regular ‘show and tell’ sessions are being scheduled for dapp developers to present their work and receive feedback, a model we intend to promote internationally within the month.

And of course, last but not least, expect a major refresh to our website, with beautiful, clean content, practical examples of dapps, a dynamic meetup map and links to all our newly created assets and community points of contacts.

In conclusion

The question we’re going to continue asking ourselves every day is how we can support you, the community, in building kick-ass dapps and being successful in your venture on our platform. I hope the above gives you a quick intro to our plans. I’ll be issuing regular updates both on this blog and on our YouTube channel.

Stephan ()

The post Ethereum Community and Adoption Update – Week 1 appeared first on ethereum blog.


I thought it was about time I’d give an update on my side of things for those interested in knowing how we’re doing on the Dutch side. My name is Jeff, a founder of Ethereum and one of the three directors (alongside Vitalik and Gavin) of Ethereum ÐΞV, the development entity building Ethereum and all the associated tech.

Over the past months I’ve been looking for a suitable office space to host the Amsterdam Hub. Unfortunately it takes more work than I initially anticipated and I have nothing to show for it so far. I’ve passed this task on to my good friend and now colleague Maran. Maran will do all he can to find the most suitable place for the Ams hub and the development of Mist. Those of you who are using the Ethereum programming language Mutan may want to switch over to Serpent or the soon-to-be de-facto programming language Solidity; I’ve made the sensible decision to focus my attention on more pressing matters such as the development of the protocol and the browser. Perhaps in the future I’ll have time to pick up its development again.

ÐΞV Amsterdam

The lawyers have finally, after 2 months, gotten around to setting up the company here in Amsterdam (ugh, the Dutch and their bureaucracy eh) and we’ve found a bank that is willing to accept us as their loyal customers (…). At the moment we have a few options for our office space and I’ll write about them as soon as I know something more concrete.

The Team

It’s about time the Ams team got a proper introduction. These guys do some serious good work!

The first that joined the Ams team is Alex van de Sande (aka avsa). Alex is a gifted UX engineer and he’s been with us for quite a while. It was only a matter of time before he became an official member of the ÐΞV team. Alex has taken up the role of UI developer and UX expert and is prototyping the latest version of the Web3 browser.

The second that joined the team is Viktor Trón. Vik is a crazy math-head and is currently hacking away at the new DEVP2P and testing it rigorously. I’ve known Vik all the way back since the start of the project somewhere in Jan/Feb; he’s a great guy and a real asset to this team.

The third that joined the team is Felix Lange. Felix is a die-hard Gopher (yay!) and the first thing he pointed out to me was that I had done a bad job looking after my go routines and there were a lot of race conditions, so nice of him (-; Felix is going to work on the Whisper implementation once the spec has been formally finalised. Felix is a super star gopher and has the ability to become a true Ethereum Core Dev.

The fourth that joined the team is Daniel Nagy. Daniel has a history in crypto and security, and his first task is to create a comprehensive spec for our DHT implementation and then develop it.

Last but certainly not least is Maran Hidskes. Maran has been on this team before but in a completely different role. Maran used to work on the protocol but after spawning a crying, peeing, pooping machine of a new family member he decided to take some time off. Now his main role is to look after a daunting task: the administration of ÐΞV Amsterdam.

Even though they are not on anyone’s team, I’d like to thank Nick, Caktux and Joris for their ongoing effort in developing our build systems. I’d also like to thank Nick specifically for pointing out the inconsistencies between our implementations: Nick, you truly are a great pain in my ass (-;

Onwards

While we are marching towards the next instalment in the Proof of Concept (PoC-7) we still have got quite some work ahead of us.

Recently I’ve started to build a toolset so we can test out Christoph’s (he’s on the Berlin team) awesome test suite. Christoph has put a tremendous amount of work into developing a proper testing suite for the Ethereum protocol. I never knew people could enjoy writing tests as much as you do; you’ve got my utmost respect.

I’ve also started a cross-implementation JavaScript framework called ethereum.js. Ethereum.js is quickly gaining adoption from the rest of the Ether Hackers and is already in use by the Go websocket & JSON RPC implementation, C++ JSON RPC implementation and the Node.js implementation. Ethereum.js is a true ÐΞV cross implementation team effort.

Our Polish partners at IMAPP (Paweł and Artur) have completed their first implementation of the JIT-compiled LLVM-based EVM implementation and have agreed to create a Go bridge so that Mist may benefit from the speed increase in running Ethereum contracts mentioned earlier in Gav’s update.

Finally, the UX and UI refinement of Mist, our flagship consumer product and next-generation browser, continues at great pace. An early ‘preview’ of the Mist interface can be downloaded here. It’s going to be impossible to deliver that type of behemoth of a browser for the first version but it will certainly be the end-goal.

Fin

I shall try to keep writing blog posts with updates regarding Mist, the protocol and ÐΞV in general, so stay tuned!

Jeff ()

The post Jeff’s Ethereum ÐΞV Update I appeared first on ethereum blog.


Well… what a busy two weeks. I thought it about time to make another update for any of you who might be interested in how we’re doing. If you don’t already know, I’m Gavin, a founder of Ethereum and one of the three directors (alongside Vitalik and Jeffrey) of Ethereum ÐΞV, the development entity building Ethereum and all the associated technology.

After doing some recruitment on behalf of DEV in Bucharest with the help of Mihai Alisie and the lovely Roxanna Sureanu I spent the last week at my home (and coincidentally, the Ethereum HQ) in Zug, Switzerland. During this time I was able to get going on the first prototype of Whisper, our secure identity-based communications protocol, finishing with a small IRC-like ÐApp demonstrating how easy it is to use. For those interested, there is more information on Whisper in the Ethereum Github wiki and a nice little screenshot on my twitter feed. In addition to this I’ve been helping finalise the soon-to-be-announced PoC-7 specification and working towards a PoC-8 (final). Finally, during our brief time together in Zug, Jeffrey, Vitalik and I drafted our strategy concerning identity and key management; this will be developed further during the coming weeks.

ÐΞVHUB Berlin

In Berlin, Sarah has been super-busy with the builders getting the hub ready. Here are a couple of pictures of the work in progress that is the Berlin hub. It might not look like much yet, but we’re on target to be moved in by mid-November. I’m particularly happy with Sarah’s efforts to find a genuine ’70s barista espresso machine (-:

I’m excited to announce that Christian Vömel is joining the team in Berlin to be the Office Manager of ÐΞVHUB Berlin. Christian has many years’ experience, including having worked in an international environment, and has even taught office management! He’ll be taking some of the load from our frankly much-overworked company secretary Aeron Buchanan.

The Team Grows

We’ve finalised a number of new hires over the past couple of weeks: Network engineer Lefteris Karapetsas will be joining the Berlin team imminently. Having considerable experience with state-of-the-art network traffic analysis and deep-packet inspection systems, he’ll be helping audit our network protocols; however, being (like much of our team) truly multidisciplinary, he’ll also be working on NatSpec, the code name for our Natural Language Formal Contract Specification system, a cornerstone of our transaction security model.

I’m happy to announce that Ian Meikle, the accomplished videographer who co-authored the impressive “Koyaanis-glitchy” Ethereum brand video has been moved to ÐΞV to help with the communications team. He who shall be known only as Texture has also joined the comms side with Stephan to help with the strategy stateside and coordinate the worldwide meetup and hackathon network. Great to see such a capable and passionate designer on the team; I know he has a good few ideas for ÐApps!

Two more hires under Stephan in the comms team include Ken Kappler, handling the developer education direction, hackathons, ethereum curriculum and university partnerships. George Hallam has also been employed to evangelize ethereum to startups and partners, boost the reach of our formal network and generally help Stephan in the quest of having everybody know what Ethereum is and how it can help them.

Jeff’s team has also been expanded recently too; he’ll be telling you about his developments in an imminent post.

Further Developments

Aside from the aforementioned progress with Whisper and PoC-7, Christoph has been continuing his great work with the tests repository. Christian has been making great progress with the Solidity language having recently placed the first Solidity-compiled program onto the testnet block chain only a few days ago.

Marek has studiously been moving C++ over to a JSON-RPC and Javascript front-end fundamentally unified and bound to the Go client. Alex meanwhile has been grappling with the C++ crypto back-end and has done a great job of reducing bloat and extraneous dependencies.

Of late, the comms team has some good news brewing; in particular, it is in contact with some world-class educational establishments regarding the possibility of education partnerships and the formation of a network of chapters both in the UK and internationally. Watch this space (-:

Finally, our Polish partners at IMAPP (Paweł and Artur) have completed their first implementation of the JIT-compiled LLVM-based EVM implementation. They are reporting an average of 30x speedup (as high as 100x!) for non-external EVM instructions over the already best-in-class basic C++-based EVM implementation. Brilliant work and we’re looking forward to more improvements and optimisations yet.

And the rest…

So much to come; there are a couple of announcements (including a slew of imminent hires) I’d love to make but they need to be finalised before I can write about them here. Look out for the next update!

Gav ().

The post Gav’s Ethereum ÐΞV Update II appeared first on ethereum blog.


Special thanks to Vlad Zamfir, Chris Barnett and Dominic Williams for ideas and inspiration

In a recent blog post I outlined some partial solutions to scalability, all of which fit into the umbrella of Ethereum 1.0 as it stands. Specialized micropayment protocols such as channels and probabilistic payment systems could be used to make small payments, using the blockchain either only for eventual settlement, or only probabilistically. For some computation-heavy applications, computation can be done by one party by default, but in a way that can be “pulled down” to be audited by the entire chain if someone suspects malfeasance. However, these approaches are all necessarily application-specific, and far from ideal. In this post, I describe a more comprehensive approach, which, while coming at the cost of some “fragility” concerns, does provide a solution which is much closer to being universal.

Understanding the Objective

First of all, before we get into the details, we need to get a much deeper understanding of what we actually want. What do we mean by scalability, particularly in an Ethereum context? In the context of a Bitcoin-like currency, the answer is relatively simple; we want to be able to:

  • Process tens of thousands of transactions per second
  • Provide a transaction fee of less than $0.001
  • Do it all while maintaining security against at least 25% attacks and without highly centralized full nodes

The first goal alone is easy; we just remove the block size limit and let the blockchain naturally grow until it becomes that large, and the economy takes care of itself to force smaller full nodes to continue to drop out until the only three full nodes left are run by GHash.io, Coinbase and Circle. At that point, some balance will emerge between fees and size, as excessive size leads to more centralization which leads to more fees due to monopoly pricing. In order to achieve the second, we can simply have many altcoins. To achieve all three combined, however, we need to break through a fundamental barrier posed by Bitcoin and all other existing cryptocurrencies, and create a system that works without the existence of any “full nodes” that need to process every transaction.

In an Ethereum context, the definition of scalability gets a little more complicated. Ethereum is, fundamentally, a platform for “dapps”, and within that mandate there are two kinds of scalability that are relevant:

  • Allow lots and lots of people to build dapps, and keep the transaction fees low
  • Allow each individual dapp to be scalable according to a definition similar to that for Bitcoin

The first is inherently easier than the second. The only property that the “build lots and lots of alt-Etherea” approach does not have is that each individual alt-Ethereum has relatively weak security; at a size of 1000 alt-Etherea, each one would be vulnerable to a 0.1% attack from the point of view of the whole system (that 0.1% is for externally-sourced attacks; internally-sourced attacks, the equivalent of GHash.io and Discus Fish colluding, would take only 0.05%). If we can find some way for all alt-Etherea to share consensus strength, eg. some version of merged mining that makes each chain receive the strength of the entire pack without requiring the existence of miners that know about all chains simultaneously, then we would be done.

The second is more problematic, because it leads to the same fragility property that arises from scaling Bitcoin the currency: if every node sees only a small part of the state, and arbitrary amounts of BTC can legitimately appear in any part of the state originating from any part of the state (such fungibility is part of the definition of a currency), then one can intuitively see how forgery attacks might spread through the blockchain undetected until it is too late to revert everything without substantial system-wide disruption via a global revert.

Reinventing the Wheel

We’ll start off by describing a relatively simple model that does provide both kinds of scalability, but provides the second only in a very weak and costly way; essentially, we have just enough intra-dapp scalability to ensure asset fungibility, but not much more. The model works as follows:

Suppose that the global Ethereum state (ie. all accounts, contracts and balances) is split up into N parts (“substates”) (think 10 <= N <= 200). Anyone can set up an account on any substate, and one can send a transaction to any substate by adding a substate number flag to it, but ordinary transactions can only send a message to an account in the same substate as the sender. However, to ensure security and cross-transmissibility, we add some more features. First, there is also a special “hub substate”, which contains only a list of messages, of the form [dest_substate, address, value, data]. Second, there is an opcode CROSS_SEND, which takes those four parameters as arguments, and sends such a one-way message en route to the destination substate.

Miners mine blocks on some substate s[j], and each block on s[j] is simultaneously a block in the hub chain. Each block on s[j] has as dependencies the previous block on s[j] and the previous block on the hub chain. For example, with N = 2, the chain would look something like this:

The block-level state transition function does three things:

  1. Processes state transitions inside of s[j]
  2. If any of those state transitions creates a CROSS_SEND, adds that message to the hub chain
  3. If any messages are on the hub chain with dest_substate = j, removes the messages from the hub chain, sends the messages to their destination addresses on s[j], and processes all resulting state transitions
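A toy sketch of steps (2) and (3) above, with hypothetical Hub and Substate containers standing in for the real hub chain and Patricia-tree state; step (1), executing the ordinary transactions inside s[j], is assumed to have already produced the list of CROSS_SEND messages:

```python
class Hub:
    """The hub substate: an ordered list of cross-substate messages."""
    def __init__(self):
        self.messages = []   # each message is (dest_substate, address, value, data)

class Substate:
    """A toy substate tracking only balances, standing in for full account state."""
    def __init__(self):
        self.balances = {}

    def deliver(self, address, value, data):
        self.balances[address] = self.balances.get(address, 0) + value

def apply_block(substate, hub, cross_sends, j):
    """Append this block's CROSS_SEND messages to the hub chain (step 2), then
    consume any hub messages addressed to substate j and deliver them (step 3)."""
    hub.messages.extend(cross_sends)
    for msg in [m for m in hub.messages if m[0] == j]:
        hub.messages.remove(msg)
        substate.deliver(address=msg[1], value=msg[2], data=msg[3])
```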

From a scalability perspective, this gives us a substantial improvement. All miners only need to be aware of two out of the total N + 1 substates: their own substate, and the hub substate. Dapps that are small and self-contained will exist on one substate, and dapps that want to exist across multiple substates will need to send messages through the hub. For example a cross-substate currency dapp would maintain a contract on all substates, and each contract would have an API that allows a user to destroy currency units inside of one substate in exchange for the contract sending a message that would lead to the user being credited the same amount on another substate.

Messages going through the hub do need to be seen by every node, so these will be expensive; however, in the case of ether or sub-currencies we only need the transfer mechanism to be used occasionally for settlement, doing off-chain inter-substate exchange for most transfers.

Attacks, Challenges and Responses

Now, let us take this simple scheme and analyze its security properties (for illustrative purposes, we’ll use N = 100). First of all, the scheme is secure against double-spend attacks up to 50% of the total hashpower; the reason is that every sub-chain is essentially merge-mined with every other sub-chain, with each block reinforcing the security of all sub-chains simultaneously.

However, there are more dangerous classes of attacks as well. Suppose that a hostile attacker with 4% hashpower jumps onto one of the substates, thereby now comprising 80% of the mining power on it. Now, that attacker mines blocks that are invalid – for example, the attacker includes a state transition that creates messages sending 1000000 ETH to every other substate out of nowhere. Other miners on the same substate will recognize the hostile miner’s blocks as invalid, but this is irrelevant; they are only a very small part of the total network, and only 20% of that substate. The miners on other substates don’t know that the attacker’s blocks are invalid, because they have no knowledge of the state of the “captured substate”, so at first glance it seems as though they might blindly accept them.

Fortunately, the solution here is more complex, but still well within the reach of what we currently know works: as soon as one of the few legitimate miners on the captured substate processes the invalid block, they will see that it’s invalid, and see exactly where it is invalid. From there, they will be able to create a light-client Merkle tree proof showing that that particular part of the state transition was invalid. To explain how this works in some detail, a light client proof consists of three things:

  1. The intermediate state root that the state transition started from
  2. The intermediate state root that the state transition ended at
  3. The subset of Patricia tree nodes that are accessed or modified in the process of executing the state transition

The first two “intermediate state roots” are the roots of the Ethereum Patricia state tree before and after executing the transaction; the Ethereum protocol requires both of these to be in every block. The Patricia state tree nodes provided are needed in order for the verifier to follow along with the computation themselves, and see that the same result is arrived at in the end. For example, if a transaction ends up modifying the state of three accounts, the set of tree nodes that will need to be provided might look something like this:

Technically, the proof should include the set of Patricia tree nodes that are needed to access the intermediate state roots and the transaction as well, but that’s a relatively minor detail. Altogether, one can think of the proof as consisting of the minimal amount of information from the blockchain needed to process that particular transaction, plus some extra nodes to prove that those bits of the blockchain are actually in the current state. Once the whistleblower creates this proof, it will then be broadcast to the network, and all other miners will see the proof and discard the defective block.
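A hedged sketch of how a miner on another substate might check such a proof: re-execute the single disputed transaction against only the supplied Patricia-tree nodes and compare the result to the block's claimed post-state root. The helpers root_of and execute_tx are stand-ins for real Patricia-tree machinery, not actual client APIs:

```python
def fraud_proven(pre_root, claimed_post_root, tx, provided_nodes,
                 root_of, execute_tx):
    """Returns True if the proof demonstrates that the block's state transition
    for `tx` was invalid."""
    if root_of(provided_nodes) != pre_root:
        return False                    # supplied nodes don't match the pre-state root
    nodes_after = execute_tx(provided_nodes, tx)
    return root_of(nodes_after) != claimed_post_root   # mismatch => the block lied
```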

The hardest class of attack of all, however, is what is called a “data unavailability attack”. Here, imagine that the miner sends out only the block header to the network, as well as the list of messages to add to the hub, but does not provide any of the transactions, intermediate state roots or anything else. Now, we have a problem. Theoretically, it is entirely possible that the block is completely legitimate; the block could have been properly constructed by gathering some transactions from a few millionaires who happened to be really generous. In reality, of course, this is not the case, and the block is a fraud, but the fact that the data is not available at all makes it impossible to construct an affirmative proof of the fraud. The 20% honest miners on the captured substate may yell and squeal, but they have no proof at all, and any protocol that did heed their words would necessarily fall to a 0.2% denial-of-service attack where the miner captures 20% of a substate and pretends that the other 80% of miners on that substate are conspiring against him.

To solve this problem, we need something called a challenge-response protocol. Essentially, the mechanism works as follows:

  1. Honest miners on the captured substate see the header-only block.
  2. An honest miner sends out a “challenge” in the form of an index (ie. a number).
  3. If the producer of the block can submit a “response” to the challenge, consisting of a light-client proof that the transaction at the given index was executed legitimately (or a proof that the given index is greater than the number of transactions in the block), then the challenge is deemed answered.
  4. If a challenge goes unanswered for a few seconds, miners on other substates consider the block suspicious and refuse to mine on it (the game-theoretic justification for why is the same as always: because they suspect that others will use the same strategy, and there is no point mining on a substate that will soon be orphaned)

Note that the mechanism requires a few added complexities in order to work. If a block is published alongside all of its transactions except for a few, then the challenge-response protocol could quickly go through them all and discard the block. However, if a block was published truly header-only, then if the block contained hundreds of transactions, hundreds of challenges would be required. One heuristic approach to solving the problem is that miners receiving a block should privately pick some random indices, send out a few challenges for those indices to some known miners on the potentially captured substate, and if responses to all challenges do not come back immediately, treat the block as suspect. Note that the miner does NOT broadcast the challenge publicly – that would give an opportunity for an attacker to quickly fill in the missing data.
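A minimal sketch of that private-probing heuristic might look as follows, assuming hypothetical request_execution_proof and verify_execution_proof helpers for the networking and proof-checking steps:

```python
import random
import time

def probe_header_only_block(header, claimed_tx_count, peers,
                            request_execution_proof, verify_execution_proof,
                            num_probes=5, timeout=2.0):
    """Privately sample a few transaction indices from a header-only block and
    challenge known miners on the possibly-captured substate to prove them.
    The chosen indices are never broadcast, so an attacker cannot tell which
    pieces of withheld data it would need to conjure up in time."""
    indices = random.sample(range(claimed_tx_count),
                            min(num_probes, claimed_tx_count))
    deadline = time.time() + timeout
    for idx in indices:
        proof = request_execution_proof(peers, header, idx, deadline)
        if proof is None or not verify_execution_proof(header, idx, proof):
            return "suspect"   # refuse to mine on top of this block
    return "ok"
```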

The second problem is that the protocol is vulnerable to a denial-of-service attack in which attackers publish very many challenges against legitimate blocks. To solve this, making a challenge should have some cost – however, if this cost is too high then the act of making a challenge will require a very high “altruism delta”, perhaps so high that when an attack eventually comes, no one will challenge it. Although some may be inclined to solve this with a market-based approach that places responsibility for making the challenge on whatever parties end up robbed by the invalid state transition, it is worth noting that it’s possible to come up with a state transition that generates new funds out of nowhere, stealing from everyone very slightly via inflation while also compensating wealthy coin holders, creating a theft where there is no concentrated incentive to challenge it.

For a currency, one “easy solution” is capping the value of a transaction, making the entire problem have only very limited consequence. For a Turing-complete protocol the solution is more complex; the best approaches likely involve both making challenges expensive and adding a mining reward to them. There will be a specialized group of “challenge miners”, and the theory is that they will be indifferent as to which challenges to make, so even the tiniest altruism delta, enforced by software defaults, will drive them to make correct challenges. One may even try to measure how long challenges take to be answered, and more highly reward the ones that take longer.

The Twelve-Dimensional Hypercube

Note: this is NOT the same as the erasure-coding Borg cube. For more info on that, see here: https://blog.ethereum.org/2014/08/16/secret-sharing-erasure-coding-guide-aspiring-dropbox-decentralizer/

We can see two flaws in the above scheme. First, the justification that the challenge-response protocol will work is rather iffy at best, and has poor degenerate-case behavior: a chain takeover attack combined with a denial of service attack preventing challenges could potentially force an invalid block into a chain, requiring an eventual day-long revert of the entire chain when (if?) the smoke clears. There is also a fragility component here: an invalid block in any substate will invalidate all subsequent blocks in all substates. Second, cross-substate messages must still be seen by all nodes. We start off by solving the second problem, then proceed to show a possible defense to make the first problem slightly less bad, and then finally get around to solving it completely, and at the same time getting rid of proof of work.

The second flaw, the expensiveness of cross-substate messages, we solve by converting the blockchain model from this:

To this:

Except the cube should have twelve dimensions instead of three. Now, the protocol looks as follows:

  1. There exist 2^N substates, each of which is identified by a binary string of length N (eg. 0010111111101). We define the Hamming distance HD(S1, S2) as the number of digits that are different between the IDs of substates S1 and S2 (eg. HD(00110, 00111) = 1, HD(00110, 10010) = 2, etc).
  2. The state of each substate stores the ordinary state tree as before, but also an outbox.
  3. There exists an opcode, CROSS_SEND, which takes 4 arguments [dest_substate, to_address, value, data], and registers a message with those arguments in the outbox of S_from where S_from is the substate from which the opcode was called
  4. All miners must “mine an edge”; that is, valid blocks are blocks which modify two adjacent substates S_a and S_b, and can include transactions for either substate. The block-level state transition function is as follows:
    • Process all transactions in order, applying the state transitions to S_a or S_b as needed.
    • Process all messages in the outboxes of S_a and S_b in order. If the message is in the outbox of S_a and has final destination S_b, process the state transitions, and likewise for messages from S_b to S_a. Otherwise, if a message is in S_a and HD(S_b, msg.dest) < HD(S_a, msg.dest), move the message from the outbox of S_a to the outbox of S_b, and likewise vice versa.
  5. There exists a header chain keeping track of all headers, allowing all of these blocks to be merge-mined, and keeping one centralized location where the roots of each state are stored.

Essentially, instead of travelling through the hub, messages make their way around the substates along edges, and the constantly reducing Hamming distance ensures that each message always eventually gets to its destination.
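To make the routing rule concrete, here is a small self-contained sketch of the per-edge outbox processing; the Message and Substate classes are just in-memory stand-ins, not the real state objects:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    dest: str      # destination substate ID, e.g. "0011"
    payload: dict

@dataclass
class Substate:
    id: str
    outbox: List[Message] = field(default_factory=list)
    inbox: List[Message] = field(default_factory=list)   # delivered messages

def hd(a: str, b: str) -> int:
    """Hamming distance between two substate IDs, e.g. hd('00110', '00111') == 1."""
    return sum(x != y for x, y in zip(a, b))

def process_edge(s_a: Substate, s_b: Substate) -> None:
    """Outbox processing for a block mined on the edge (s_a, s_b): deliver
    messages destined for the other endpoint, and forward any message for which
    crossing this edge strictly reduces the Hamming distance to its destination."""
    for src, dst in ((s_a, s_b), (s_b, s_a)):
        remaining = []
        for msg in src.outbox:
            if msg.dest == dst.id:
                dst.inbox.append(msg)        # final delivery
            elif hd(dst.id, msg.dest) < hd(src.id, msg.dest):
                dst.outbox.append(msg)       # hop one dimension closer
            else:
                remaining.append(msg)        # wait for a more useful edge
        src.outbox = remaining
```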

The key design decision here is the arrangement of all substates into a hypercube. Why was the cube chosen? The best way to think of the cube is as a compromise between two extreme options: on the one hand the circle, and on the other hand the simplex (basically, the 2^N-dimensional version of a tetrahedron). In a circle, a message would need to travel on average a quarter of the way around the circle before it gets to its destination, meaning that we make no efficiency gains over the plain old hub-and-spoke model.

In a simplex, every pair of substates has an edge, so a cross-substate message would get across as soon as a block between those two substates is produced. However, with miners picking random edges it would take a long time for a block on the right edge to appear, and more importantly users watching a particular substate would need to be at least light clients on every other substate in order to validate blocks that are relevant to them. The hypercube is a perfect balance – each substate has a logarithmically growing number of neighbors, the length of the longest path grows logarithmically, and block time of any particular edge grows logarithmically.

Note that this algorithm has essentially the same flaws as the hub-and-spoke approach – namely, that it has bad degenerate-case behavior and the economics of challenge-response protocols are very unclear. To add stability, one approach is to modify the header chain somewhat.

Right now, the header chain is very strict in its validity requirements – if any block anywhere down the header chain turns out to be invalid, all blocks in all substates on top of that are invalid and must be redone. To mitigate this, we can require the header chain to simply keep track of headers, so it can contain both invalid headers and even multiple forks of the same substate chain. To add a merge-mining protocol, we implement exponential subjective scoring but using the header chain as an absolute common timekeeper. We use a low base (eg. 0.75 instead of 0.99) and have a maximum penalty factor of 1 / 2N to remove the benefit from forking the header chain; for those not well versed in the mechanics of ESS, this basically means “allow the header chain to contain all headers, but use the ordering of the header chain to penalize blocks that come later without making this penalty too strict”. Then, we add a delay on cross-substate messages, so a message in an outbox only becomes “eligible” if the originating block is at least a few dozen blocks deep.

Proof of Stake

Now, let us work on porting the protocol to nearly-pure proof of stake. We’ll ignore nothing-at-stake issues for now; Slasher-like protocols plus exponential subjective scoring can solve those concerns, and we will discuss adding them in later. Initially, our objective is to show how to make the hypercube work without mining, and at the same time partially solve the fragility problem. We will start off with a proof of activity implementation for multichain. The protocol works as follows:

  1. There exist 2^N substates identified by binary strings, as before, as well as a header chain (which also keeps track of the latest state root of each substate).
  2. Anyone can mine an edge, as before, but with a lower difficulty. However, when a block is mined, it must be published alongside the complete set of Merkle tree proofs so that a node with no prior information can fully validate all state transitions in the block.
  3. There exists a bonding protocol where an address can specify itself as a potential signer by submitting a bond of size B (richer addresses will need to create multiple sub-accounts). Potential signers are stored in a specialized contract C[s] on each substate s.
  4. Based on the block hash, a random 200 substates s[i] are chosen, and a search index 0 <= ind[i] < 2^160 is chosen for each substate. Define signer[i] as the owner of the first address in C[s[i]] after index ind[i]. For the block to be valid, it must be signed by at least 133 of the set signer[0] ... signer[199].
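A rough sketch of the sampling in step 4, using plain Python structures for the bond registries; hashlib.sha3_256 stands in for Ethereum’s Keccak here, and the seed derivation is illustrative rather than the exact protocol rule:

```python
import hashlib

def sample_consensus_group(block_hash: bytes, bonded: dict,
                           n_samples: int = 200) -> list:
    """bonded maps substate index -> sorted, non-empty list of
    (address_as_int, owner) bond entries in contract C[s]. Returns the 200
    sampled signers; a block is valid only if at least 133 of them sign it."""
    num_substates = len(bonded)
    signers = []
    for i in range(n_samples):
        seed = hashlib.sha3_256(block_hash + i.to_bytes(4, "big")).digest()
        s = int.from_bytes(seed[:4], "big") % num_substates        # substate s[i]
        ind = int.from_bytes(seed[4:24], "big") % 2**160           # search index ind[i]
        entries = bonded[s]
        # signer[i] is the owner of the first bonded address at or after ind[i],
        # wrapping around to the start of the list if none is larger.
        owner = next((o for addr, o in entries if addr >= ind), entries[0][1])
        signers.append(owner)
    return signers
```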

To actually check the validity of a block, the consensus group members would do two things. First, they would check that the initial state roots provided in the block match the corresponding state roots in the header chain. Second, they would process the transactions, and make sure that the final state roots match the final state roots provided in the header chain and that all trie nodes needed to calculate the update are available somewhere in the network. If both checks pass, they sign the block, and if the block is signed by sufficiently many consensus group members it gets added to the header chain, and the state roots for the two affected substates in the header chain are updated.

And that’s all there is to it. The key property here is that every block has a randomly chosen consensus group, and that group is chosen from the global state of all account holders. Hence, unless an attacker has at least 33% of the stake in the entire system, it will be virtually impossible (specifically, a 2^-70 probability, which with the 2^30 proof of work falls well into the realm of cryptographic impossibility) for the attacker to get a block signed. And without 33% of the stake, an attacker will not be able to prevent legitimate miners from creating blocks and getting them signed.
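As a back-of-the-envelope sanity check on that figure (not part of the protocol), the probability that an attacker holding a third of the bonded stake controls 133 or more of 200 randomly sampled signer slots can be computed directly:

```python
from math import comb, log2

n, need, p = 200, 133, 1 / 3   # 200 sampled signers, 133 needed, 33% attacker stake

tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))
print(f"P(attacker controls a consensus group) ~ 2^{log2(tail):.0f}")
# prints roughly 2^-70, i.e. negligibly small
```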

This approach has the benefit that it has nice degenerate-case behavior; if a denial-of-service attack happens, then chances are that almost no blocks will be produced, or at least blocks will be produced very slowly, but no damage will be done.

Now, the challenge is, how do we further reduce proof of work dependence, and add in blockmaker and Slasher-based protocols? A simple approach is to have a separate blockmaker protocol for every edge, just as in the single-chain approach. To incentivize blockmakers to act honestly and not double-sign, Slasher can also be used here: if a signer signs a block that ends up not being in the main chain, they get punished. Schelling point effects ensure that everyone has the incentive to follow the protocol, as they guess that everyone else will (with the additional minor pseudo-incentive of software defaults to make the equilibrium stronger).

A full EVM

These protocols allow us to send one-way messages from one substate to another. However, one-way messages are limited in functionality (or rather, they have as much functionality as we want them to have because everything is Turing-complete, but they are not always the nicest to work with). What if we could make the hypercube simulate a full cross-substate EVM, so you can even call functions that are on other substates?

As it turns out, you can. The key is to add to messages a data structure called a continuation. For example, suppose that we are in the middle of a computation where a contract calls a contract which creates a contract, and we are currently executing the code that is creating the inner contract. Thus, the place we are in the computation looks something like this:

Now, what is the current “state” of this computation? That is, what is the set of all the data that we need to be able to pause the computation, and then using the data resume it later on? In a single instance of the EVM, that’s just the program counter (ie. where we are in the code), the memory and the stack. In a situation with contracts calling each other, we need that data for the entire “computational tree”, including where we are in the current scope, the parent scope, the parent of that, and so forth back to the original transaction:

This is called a “continuation”. To resume an execution from this continuation, we simply resume each computation and run it to completion in reverse order (ie. finish the innermost first, then put its output into the appropriate space in its parent, then finish the parent, and so forth). Now, to make a fully scalable EVM, we simply replace the concept of a one-way message with a continuation, and there we go.
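A minimal sketch of such a continuation as a data structure, with run_to_completion passed in as a stand-in for a single-scope EVM interpreter (the field names here are illustrative assumptions, not the actual client layout):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Frame:
    """One scope in the computational tree: enough information to pause the
    scope and resume it later (code, program counter, memory, stack)."""
    code: bytes
    pc: int
    stack: List[int] = field(default_factory=list)
    memory: bytes = b""
    parent: Optional["Frame"] = None
    return_slot: Optional[int] = None   # stack slot in the parent awaiting our output

def resume(innermost: Frame, run_to_completion: Callable[[Frame], int]) -> int:
    """Resume a paused cross-substate execution: finish the innermost scope
    first, plug its output into the parent's stack, then finish the parent,
    and so on back up to the original transaction."""
    frame = innermost
    output = run_to_completion(frame)
    while frame.parent is not None:
        # Every non-root frame is assumed to know where its result belongs.
        frame.parent.stack[frame.return_slot] = output
        frame = frame.parent
        output = run_to_completion(frame)
    return output
```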

Of course, the question is, do we even want to go this far? First of all, going between substates, such a virtual machine would be incredibly inefficient; if a transaction execution needs to access a total of ten contracts, and each contract is in some random substate, then the process of running through that entire execution will take an average of six blocks per transmission, times two transmissions per sub-call, times ten sub-calls – a total of 120 blocks. Additionally, we lose synchronicity; if A calls B then C, and B and C both call D, it’s entirely possible for C’s call of D to reach D before B’s does. Finally, it’s difficult to combine this mechanism with the concept of reverting transaction execution if transactions run out of gas. Thus, it may be easier to not bother with continuations, and rather opt for simple one-way messages; because the language is Turing-complete continuations can always be built on top.

As a result of the inefficiency and instability of cross-chain messages no matter how they are done, most dapps will want to live entirely inside of a single sub-state, and dapps or contracts that frequently talk to each other will want to live in the same sub-state as well. To prevent absolutely everyone from living on the same sub-state, we can have the gas limits for each substate “spill over” into each other and try to remain similar across substates; then, market forces will naturally ensure that popular substates become more expensive, encouraging marginally indifferent users and dapps to populate fresh new lands.

Not So Fast

So, what problems remain? First, there is the data availability problem: what happens when all of the full nodes on a given sub-state disappear? If such a situation happens, the sub-state data disappears forever, and the blockchain will essentially need to be forked from the last block where all of the sub-state data actually is known. This will lead to double-spends, some broken dapps from duplicate messages, etc. Hence, we need to essentially be sure that such a thing will never happen. This is a 1-of-N trust model; as long as one honest node stores the data we are fine. Single-chain architectures also have this trust model, but the concern increases when the number of nodes expected to store each piece of data decreases – as it does here by a factor of 2048. The concern is mitigated by the existence of altruistic nodes including blockchain explorers, but even that will become an issue if the network scales up so much that no single data center will be able to store the entire state.

Second, there is a fragility problem: if any block anywhere in the system is mis-processed, then that could lead to ripple effects throughout the entire system. A cross-substate message might not be sent, or might be re-sent; coins might be double-spent, and so forth. Of course, any such problem would inevitably be detected eventually, and it could be solved by reverting the whole chain from that point, but it’s entirely unclear how often such situations will arise. One fragility solution is to have a separate version of ether in each substate, allowing ethers in different substates to float against each other, and then add message redundancy features to high-level languages, accepting that messages are going to be probabilistic; this would allow the number of nodes verifying each header to shrink to something like 20, allowing even more scalability, though much of that would be absorbed by an increased number of cross-substate messages doing error-correction.

A third issue is that the scalability is limited; every transaction needs to be in a substate, and every substate needs to be in a header that every node keeps track of, so if the maximum processing power of a node is N transactions, then the network can process up to N^2 transactions. An approach to add further scalability is to make the hypercube structure hierarchical in some fashion – imagine the block headers in the header chain as being transactions, and imagine the header chain itself being upgraded from a single-chain model to the exact same hypercube model as described here – that would give N^3 scalability, and applying it recursively would give something very much like tree chains, with exponential scalability – at the cost of increased complexity, and making transactions that go all the way across the state space much more inefficient.

Finally, fixing the number of substates at 4096 is suboptimal; ideally, the number would grow over time as the state grew. One option is to keep track of the number of transactions per substate, and once the number of transactions per substate exceeds the number of substates we can simply add a dimension to the cube (ie. double the number of substates). More advanced approaches involve using minimal cut algorithms such as the relatively simple Karger’s algorithm to try to split each substate in half when a dimension is added. However, such approaches are problematic, both because they are complex and because they involve unexpectedly massively increasing the cost and latency of dapps that end up accidentally getting cut across the middle.

Alternative Approaches

Of course, hypercubing the blockchain is not the only approach to making the blockchain scale. One very promising alternative is to have an ecosystem of multiple blockchains, some application-specific and some Ethereum-like generalized scripting environments, and have them “talk to” each other in some fashion – in practice, this generally means having all (or at least some) of the blockchains maintain “light clients” of each other inside of their own states. The challenge there is figuring out how to have all of these chains share consensus, particularly in a proof-of-stake context. Ideally, all of the chains involved in such a system would reinforce each other, but how would one do that when one can’t determine how valuable each coin is? If an attacker has 5% of all A-coins, 3% of all B-coins and 80% of all C-coins, how does A-coin know whether it’s B-coin or C-coin that should have the greater weight?

One approach is to use what is essentially Ripple consensus between chains – have each chain decide, either initially on launch or over time via stakeholder consensus, how much it values the consensus input of each other chain, and then allow transitivity effects to ensure that each chain protects every other chain over time. Such a system works very well, as it’s open to innovation – anyone can create new chains at any point with arbitrary rules, and all the chains can still fit together to reinforce each other; quite likely, in the future we may see such an inter-chain mechanism existing between most chains, with some large chains, perhaps including older ones like Bitcoin and architectures like a hypercube-based Ethereum 2.0, resting on their own simply for historical reasons. The idea here is for a truly decentralized design: everyone reinforces each other, rather than simply hugging the strongest chain and hoping that it does not fall prey to a black swan attack.

The post Scalability, Part 2: Hypercubes appeared first on ethereum blog.


Who are you?

I’m Gav – together with Jeffrey Wilcke and Vitalik Buterin, I’m one third of the ultimate leadership of Ethereum ÐΞV. ÐΞV is a UK software firm that is under a non-profit-making agreement with the Ethereum Foundation to create version 1.0 of the Web Three software stack. We three directors—who are ultimately responsible for ensuring the software is built and works—are the same three developers who designed and implemented the first working versions of the Ethereum clients.

ÐΞV is geographically split between London (where our comms operation is based) and Berlin (which hosts the main hub of ÐΞV). Though I’m based in Zug, Switzerland (being an Ethereum employee), I have been involved most recently in putting together the Berlin side of things.

Since its inception in summer, we have been working to set up the technical side of the project, under which we include our communications, education and adoption team led by Stephan Tual and helped by Mathias Grønnebæk for the organisation of operations.

A Berlin Who’s Who

Aeron Buchanan, though originally brought on as a mathematical modeller, has been very successful in coordinating Berlin’s various operations including helping set up the arduous process of getting a bank account, recruitment, financial juggling to get people paid, technical interviews and other tedious administration tasks; more recently he has also been helping sort out the UK side of things, too.

I must acknowledge Brian Fabian Crane who helped connect us while in Berlin and made it possible for us to have a legal structure in place quickly. At present, the operation in Berlin is directed by our major PyEthereum contributor, Heiko Hees, with Aeron being the essential point of control for all operations. Over time, we expect Aeron to get back to modelling and to find a suitable candidate for the day-to-day management of the hub.

During our time in Berlin we’ve been very active in hiring (which as a process is considerably more arduous than you might think): Alex Leverington was our first hire and he flew to Berlin all the way from Texas to join the team. Alex has been engaged helping out with the Mac builds and making volunteer contributions since early in the year, so it’s great that he wanted to step forward into a permanent role. Now Alex has been working on some of the internals of the C++ client (specifically the client multiplexing, allowing multiple Web Three applications to coexist on the same physical machine).

Over the past few months we’ve recruited a few more people: Dr. Christian Reitwiessner and Christoph Jentzsch joined not long ago. Christian, who holds a PhD in Multiobjective Optimization and Language Equations, is now engaged in prototyping and implementing the new domain-specific contract-authoring language that I proposed a while ago, Solidity. It didn’t take me long to realise that Christoph, currently finishing his PhD in physics and who utterly loves writing unit tests, would be a great hire for sorting out our clients’ interoperability issues. He has been leading our recent surge in getting the protocol into alignment across all clients through a comprehensive code-covering set of unit tests for the virtual machine operation.

Our newest recruit, Marek Kotewicz, journeying to Berlin from Poland, was an early Ethereum volunteer and enjoyed making contracts on some early C++ client prototypes. Coming from a Web-technology background (though being perfectly competent in C++), he has now started working on our C++/Javascript API, aiming towards full node.js integration to facilitate backend integration with existing web sites. Working alongside Marek is Marian Oancea, the feathers in whose cap include much of the technical prowess behind the highly successful ether sale. He has been developing some of the first Web apps to use Ethereum as their backend.

I look forward to welcoming three more hires in the coming weeks, including some personnel with rather impressive and uniquely relevant backgrounds. More news on that next time.

And More…

Back in London, we’ve hired design outfit Proof-of-Work, headed by Louis Chang, to put together our new website and brand. We’re ecstatic with how things are coming along there and look forward to unveiling it soon. Once this is in place we’ll have a much clearer way of getting our updates and information out regarding what’s happening at ÐΞV.

Externally to ÐΞV but supported by it are a number of other individuals and projects: I am very grateful to Tim Hughes, who continues to consult on our efforts at an ASIC-resistant proof-of-work algorithm, also implementing it in C++. Similarly, Caktux, an early volunteer and maintainer of the Ncurses-based C++ Ethereum front-end neth, has been invaluable (alongside Joris and Nick Savers) in getting a continuous integration system up and running. We are pleased to support both of them in their endeavours to make this project a success.

Furthermore, the guys at IMAPP, a software firm in Warsaw specialising in advanced languages and compilers, deserve a great nod for their on-going efforts at using their considerable expertise to implement a just-in-time (JIT) compiled version of the Ethereum virtual machine, making computationally-complex contracts a reasonably affordable possibility.

Finally, I must thank the EthereumJ (Java client) volunteer developers Roman Mandeleil and Nick Savers, both of whom have visited us in our prototype hub here in Berlin, and who work tirelessly to find different and innovative new ways of interpreting the formal protocol specification.

The California Connection

Over in Silicon Valley, we have made two hires, Joseph Chow and Martin Becze; Joseph will be leading the efforts there and concentrating on developing some of our core Ðapps that will help demonstrate the potential of Ethereum. Martin is leading the effort to create a pure Javascript implementation of Ethereum, a lofty goal, and thus all the more impressive that the project now has a core that is compatible with PoC-6.

We are also looking forward to working with the Agreemint Foundation (ie. Mintchalk), with their effort to create an online contract development environment, to provide a simple and highly accessible interface for the beginner and intermediate level users to learn about contract development and create and deploy Web Three Ðapps.

In the future we hope to expand our operations there, particularly over January and February when Vitalik and I will be staying there. We in particular look forward to spending some time discussing the future of data sharing and online publication with Juan of IPFS, and are optimistic about the possibility of finding some synergy between our projects.

On Go-ing Development

Though I’m sure Jeff will make his own post on the goings-on over at his Golang-orientated end, I will say that on a personal note I’m very happy that Alex van de Sande (aka avsa) has joined us on a permanent basis. Alex is well known on the Ethereum forums and his mockups of what Web Three could look like were simply incredible in insight, technical knowledge and polish. As an accomplished UI & UX engineer, he’ll be joining Jeff in taking Mist, the Web Three browser, forward and making it into what I am sure will simultaneously be the most revolutionary and pleasing to use piece of new software in a very long time.

So what’s happening in Berlin then?

When we arrived we first needed somewhere to be based out of: thanks to Brian, we were invited to the Rainmaking Loft, an excellent space for tech startups that need somewhere to spread their rug prior to world domination. Since August we’ve had a nice big desk there for our developers to work alongside our inimitable location scout, hub outfitter, project manager and interior designer rolled into one: Sarah O’Neill.

Sarah has worked tirelessly in finding our perfect location, our perfect contractors and our perfect fixtures and fittings and making it actually work. Right now as I write this at 4am EEST, she’s probably up on eBay looking for a decent deal for office chairs or costing a well-placed dry wall. And what a job she has done thus far. We will be based in probably the most perfect place we could hope for. Walking distance to two U-bahn stations, we’re located on a quiet street adjacent to Oranienstraße and a central point of Kreuzberg. We’re a short cycle ride from the centre of Berlin’s mass and, in the opposite direction, from the beautiful canal and Neukölln. We have some lovely quiet bars and cafés on our sexy little street and the bustling new-tech area that is Kreuzberg at the end of it.

Our new hub, designed and outfitted by her, will be a 250m² cross between office, homely relaxation environment and (self-service) café—a new (and German-building-law-friendly) twist on the notion of the holon. We’ll be able to host meetups and events, have a great area for working and have ample collaboration space for any other Ethereum-aligned operations that would prefer not to pay coffee-tax for their power & wifi.

Not to be forgotten, helping Aeron and me with administration, procurements and organisation, not to mention general German-speaking tasks, Lisa Ottosson has been invaluable during this period.

And what have we been doing?

Since the beginning, ÐΞV’s time has inevitably been wast^H^H^H^H spent wisely in bureaucracy, administration and red tape. It is impressive how much of a pain doing business in a perfectly well developed nation like Germany can be. Slowly (and thanks in no small part to Aeron) this tediousness is starting to let up. When not engaged in such matters, we’ve been pressing to get our most recent proof-of-concept releases out, PoC-5 and PoC-6. PoC-5 brought with it a number of important alterations to the Ethereum virtual machine and the core protocol. PoC-6 brought a 4-second block time (this is just for stress-testing; for the mainnet we’re aiming for a 10 second block time) and wonderfully fast parallel block-chain downloading. Furthermore we’ve been talking with various potential technology partners concerning the future of Swarm, our data distribution system, including with our good friend Juan Batiz-Benet (Vitalik & I got to know him while staying at his house in Silicon Valley for a week back in March).

Speaking at a few meetings and conferences has taken time also. In my case, being the keynote speaker at both Inside Bitcoin and Latin America’s popular tech-fest Campus Party was an honour, as was the invitation to address the main hall at the wonderful University San Francisco of Quito. I hesitate to imagine the number of such engagements Vitalik has done during the same time period.

In addition to his impressive public speaking schedule, Vitalik has been putting in considerable efforts into research on potential consensus algorithms. Together with Vlad Zamfir, a number of potential approaches have been mooted over the past few weeks. Ultimately, we decided to follow the advice of some in our community, like Nick Szabo, who have urged us to focus on getting a working product off the ground and not try to make every last detail perfect before launching. In that regard, we’ve decided to move many of our more ambitious changes, including native extensions, auto-triggering events and proof of stake, into a planned future upgrade to happen around mid-to-late 2015.

However, during a two-week visit to London Vitalik made major progress working with Vlad on developing stable proof-of-stake consensus algorithms, and we have a few models that we think are likely to work and solve all of the problems inherent in current approaches. The two have also begun more thoroughly laying the plans for our upcoming upgrades in scalability.

More recently, I have been hard at work rewriting much of the networking code and altering the network protocol to truly split off the peer-to-peer portion of the code to make an abstract layer for all peer-to-peer applications, including those external to the Web Three project that wish to piggyback on the Ethereum peer network. I’ve also been getting PoC-7 up to scratch and more reliable, as well as upgrading my team’s development processes which predictably were becoming a little too informal for an increasingly large team. We’ll be moving towards a peer-reviewed (rather than Gav-reviewed) commit review process, we have a much more curated GitHub issue tracker, alongside an increasingly scrum-oriented project management framework (a switch to Pivotal Tracker is underway – everything public, of course). Most recently I’ve been working on the Whisper project, designing, developing, chewing things over and prototyping.

Finally, we’ve also been making inroads into some well-known and some other not-so-well-known firms that can help us make our final core software as safe and secure as humanly possible. I’m sorry I can’t go into anything more specific now, but rest assured, this is one of our priorities.

So there you have it. What’s been happening.

And what’s going to happen?

Aside from the continuing hiring process and our inroads into setting up a solid security audit, we will very soon be instituting a more informal manner for volunteers and contributors to be supported by the project. In the coming days we will be launching a number of ÐΞV schemes to make it possible for dedicated and productive members of the Ethereum and Web Three community to apply for bursaries and expenses for visiting us at one of our hub locations. Watch this space.

In terms of coding, ÐΞV, at present, has one mission: the completion of version 1.0 of the Ethereum client software which will enable the release of the genesis block. This will be done as soon as possible, though we will release the genesis block only when we (and many others in the security world) are happy that it is safe to do so: we are presently aiming to have it out sometime during this winter (i.e. between December 21st and March 21st). This will include at least a basic contract development environment (the focus of the work here in Berlin under myself), an advanced client based around Google’s Chromium browser technology and several core Ðapps (the focus of the work under Jeff), and various command-line tools.

In specifics, after we have PoC-7 out, we’ll be making at most one more proof-of-concept release before freezing the protocol and moving into our alpha release series. The first alpha will signal the end of our core refactoring & optimisation process and the beginning of our security audit; we aim to have this under way within the next 4-6 weeks. The security audit will involve a number of people and firms, both internal and external, both hired and incentivised, analysing the design and implementations looking for flaws, bugs and potential attack vectors. Only once all parties involved have signed off on all aspects of the system will we move to organise a coordinated release of the final block chain. We expect the auditing process to take 2-3 months, with another couple of weeks to coordinate the final release.

During this process we will be developing out the other parts of the project, including the Whisper messaging protocol, the contract development environment and Solidity, the Ethereum browser, Mist and the core Ðapps, all in readiness for the genesis block release.

We will take a fluid attitude to software development & release and incrementally roll out updates and improvements to our core suite of software over time. We don’t want to keep you waiting for the release of the blockchain and so that is our development priority. So you may be assured, it will be released just as soon as it is ready.

So hold on to your hats! You’ll be coding contracts and hacking society into new forms before you know it.

Gav.

The post Gav’s ÐΞV Update I: Where Ethereum’s at appeared first on ethereum blog.


Special thanks to Vlad Zamfir and Zack Hess for ongoing research and discussions on proof-of-stake algorithms and their own input into Slasher-like proposals

One of the hardest problems in cryptocurrency development is that of devising effective consensus algorithms. Certainly, relatively passable default options exist. At the very least it is possible to rely on a Bitcoin-like proof of work algorithm based on either a randomly-generated circuit approach targeted for specialized-hardware resistance, or failing that simple SHA3, and our existing GHOST optimizations allow for such an algorithm to provide block times of 12 seconds. However, proof of work as a general category has many flaws that call into question its sustainability as an exclusive source of consensus; 51% attacks from altcoin miners, eventual ASIC dominance and high energy inefficiency are perhaps the most prominent. Over the last few months we have become more and more convinced that some inclusion of proof of stake is a necessary component for long-term sustainability; however, actually implementing a proof of stake algorithm that is effective is proving to be surprisingly complex.

The fact that Ethereum includes a Turing-complete contracting system complicates things further, as it makes certain kinds of collusion much easier without requiring trust, and creates a large pool of stake in the hands of decentralized entities that have the incentive to vote with the stake to collect rewards, but which are too stupid to tell good blockchains from bad. What the rest of this article will show is a set of strategies that deal with most of the issues surrounding proof of stake algorithms as they exist today, and a sketch of how to extend our current preferred proof-of-stake algorithm, Slasher, into something much more robust.

Historical Overview: Proof of stake and Slasher

If you’re not yet well-versed in the nuances of proof of stake algorithms, first read: https://blog.ethereum.org/2014/07/05/stake/

The fundamental problem that consensus protocols try to solve is that of creating a mechanism for growing a blockchain over time in a decentralized way that cannot easily be subverted by attackers. If a blockchain does not use a consensus protocol to regulate block creation, and simply allows anyone to add a block at any time, then an attacker or botnet with very many IP addresses could flood the network with blocks, and particularly they can use their power to perform double-spend attacks – sending a payment for a product, waiting for the payment to be confirmed in the blockchain, and then starting their own “fork” of the blockchain, substituting the payment that they made earlier with a payment to a different account controlled by themselves, and growing it longer than the original so everyone accepts this new blockchain without the payment as truth.

The general solution to this problem involves making a block “hard” to create in some fashion. In the case of proof of work, each block requires computational effort to produce, and in the case of proof of stake it requires ownership of coins – in most cases, it’s a probabilistic process where block-making privileges are doled out randomly in proportion to coin holdings, and in more exotic “negative block reward” schemes anyone can create a block by spending a certain quantity of funds, and they are compensated via transaction fees. In any of these approaches, each chain has a “score” that roughly reflects the total difficulty of producing the chain, and the highest-scoring chain is taken to represent the “truth” at that particular time.

For a detailed overview of some of the finer points of proof of stake, see the above-linked article; for those readers who are already aware of the issues I will start off by presenting a semi-formal specification for Slasher:

  1. Blocks are produced by miners; in order for a block to be valid it must satisfy a proof-of-work condition. However, this condition is relatively weak (eg. we can target the mining reward to something like 0.02x the genesis supply every year)
  2. Every block has a set of designated signers, which are chosen beforehand (see below). For a block with valid PoW to be accepted as part of the chain it must be accompanied by signatures from at least two thirds of its designated signers.
  3. When block N is produced, we say that the set of potential signers of block N + 3000 is the set of addresses such that sha3(address + block[N].hash) < block[N].balance(address) * D2, where D2 is a difficulty parameter targeting 15 signers per block (ie. D2 is adjusted up or down after each block depending on whether fewer or more than 15 signers appeared). Note that the set of potential signers is very computationally intensive to fully enumerate, and we don’t try to do so; instead we rely on signers to self-declare.
  4. If a potential signer for block N + 3000 wants to become a designated signer for that block, they must send a special transaction accepting this responsibility and that transaction must get included between blocks N + 1 and N + 64. The set of designated signers for block N + 3000 is the set of all individuals that do this. This “signer must confirm” mechanism helps ensure that the majority of signers will actually be online when the time comes to sign. For blocks 0 … 2999, the set of signers is empty, so proof of work alone suffices to create those blocks.
  5. When a designated signer adds their signature to block N + 3000, they are scheduled to receive a reward in block N + 6000.
  6. If a signer signs two different blocks at height N + 3000, then if someone detects the double-signing before block N + 6000 they can submit an “evidence” transaction containing the two signatures, destroying the signer’s reward and transferring a third of it to the whistleblower.
  7. If there is an insufficient number of signers to sign at a particular block height h, a miner can produce a block with height h+1 directly on top of the block with height h-1 by mining at an 8x higher difficulty (to incentivize this, but still make it less attractive than trying to create a normal block, there is a 6x higher reward). Skipping over two blocks has higher factors of 16x diff and 12x reward, three blocks 32x and 24x, etc.
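As a small, self-contained illustration of the eligibility condition in step 3 (using hashlib.sha3_256 purely as a stand-in for Ethereum’s Keccak-256):

```python
import hashlib

def is_potential_signer(address: bytes, block_n_hash: bytes,
                        balance_at_block_n: int, d2: int) -> bool:
    """An address is a potential signer of block N + 3000 iff
    sha3(address + block[N].hash) < block[N].balance(address) * D2.
    If this returns True, the owner must still self-declare between
    blocks N + 1 and N + 64 to become a designated signer."""
    h = int.from_bytes(hashlib.sha3_256(address + block_n_hash).digest(), "big")
    return h < balance_at_block_n * d2
```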

Essentially, by explicitly punishing double-signing, Slasher in a lot of ways, although not all, makes proof of stake act like a sort of simulated proof of work. An important incidental benefit of Slasher is the non-revert property. In proof of work, sometimes after one node mines one block some other node will immediately mine two blocks, and so some nodes will need to revert back one block upon seeing the longer chain. Here, every block requires two thirds of the signers to ratify it, and a signer cannot ratify two blocks at the same height without losing their gains in both chains, so assuming no malfeasance the blockchain will never revert. From the point of view of a decentralized application developer, this is a very desirable property as it means that “time” only moves in one direction, just like in a server-based environment.

However, Slasher is still vulnerable to one particular class of attack: long-range attacks. Instead of trying to start a fork from ten blocks behind the current head, suppose that an attacker tries to start a fork starting from ten thousand blocks behind, or even the genesis block – all that matters is that the depth of the fork must be greater than the duration of the reward lockup. At that point, because users’ funds are unlocked and they can move them to a new address to escape punishment, users have no disincentive against signing on both chains. In fact, we may even expect to see a black market of people selling their old private keys, culminating with an attacker single-handedly acquiring access to the keys that controlled over 50% of the currency supply at some point in history.

One approach to solving the long-range double-signing problem is transactions-as-proof-of-stake, an alternative PoS solution that does not have an incentive to double-sign because it’s the transactions that vote, and there is no reward for sending a transaction (in fact there’s a cost, and the reward is outside the network); however, this does nothing to stop the black key market problem. To properly deal with that issue, we will need to relax a hidden assumption.

Subjective Scoring and Trust

For all its faults, proof of work does have some elegant economic properties. Particularly, because proof of work requires an externally rivalrous resource, something which exists and is consumed outside the blockchain, in order to generate blocks (namely, computational effort), launching a fork against a proof of work chain invariably requires having access to, and spending, a large quantity of economic resources. In the case of proof of stake, on the other hand, the only scarce value involved is value within the chain, and between multiple chains that value is not scarce at all. No matter what algorithm is used, in proof of stake 51% of the owners of the genesis block could eventually come together, collude, and produce a longer (ie. higher-scoring) chain than everyone else.

This may seem like a fatal flaw, but in reality it is only a flaw if we implicitly accept an assumption that is made in the case of proof of work: that nodes have no knowledge of history. In a proof-of-work protocol, a new node, having no direct knowledge of past events and seeing nothing but the protocol source code and the set of messages that have already been published, can join the network at any point and determine the score of all possible chains, and from there the block that is at the top of the highest-scoring main chain. With proof of stake, as we described, such a property cannot be achieved, since it’s very cheap to acquire historical keys and simulate alternate histories. Thus, we will relax our assumptions somewhat: we will say that we are only concerned with maintaining consensus between a static set of nodes that are online at least once every N days, allowing these nodes to use their own knowledge of history to reject obvious long-range forks using some formula, and new nodes or long-dormant nodes will need to specify a “checkpoint” (a hash of a block representing what the rest of the network agrees is a recent state) in order to get back onto the consensus.

Such an approach is essentially a hybrid between the pure and perhaps harsh trust-no-one logic of Bitcoin and the total dependency on socially-driven consensus found in networks like Ripple. In Ripple’s case, users joining the system need to select a set of nodes that they trust (or, more precisely, trust not to collude) and rely on those nodes during every step of the consensus process. In the case of Bitcoin, the theory is that no such trust is required and the protocol is completely self-contained; the system works just as well between a thousand isolated cavemen with laptops on a thousand islands as it does in a strongly connected society (in fact, it might work better with island cavemen, since without trust collusion is more difficult). In our hybrid scheme, users need only look to the society outside of the protocol exactly once – when they first download a client and find a checkpoint – and can enjoy Bitcoin-like trust properties starting from that point.

In order to determine which trust assumption is the better one to take, we ultimately need to ask a somewhat philosophical question: do we want our consensus protocols to exist as absolute cryptoeconomic constructs completely independent of the outside world, or are we okay with relying heavily on the fact that these systems exist in the context of a wider society? Although it is indeed a central tenet of mainstream cryptocurrency philosophy that too much external dependence is dangerous, arguably the level of independence that Bitcoin affords us in reality is no greater than that provided by the hybrid model. The argument is simple: even in the case of Bitcoin, a user must also take a leap of trust upon joining the network – first by trusting that they are joining a protocol that contains assets that other people find valuable (eg. how does a user know that bitcoins are worth $380 each and dogecoins only $0.0004? Especially with the different capabilities of ASICs for different algorithms, hashpower is only a very rough estimate), and second by trusting that they are downloading the correct software package. In both the supposedly “pure” model and the hybrid model there is always a need to look outside the protocol exactly once. Thus, on the whole, the gain from accepting the extra trust requirement (namely, environmental friendliness and security against oligopolistic mining pools and ASIC farms) is arguably worth the cost.

Additionally, we may note that, unlike Ripple consensus, the hybrid model is still compatible with the idea of blockchains “talking” to each other by containing a minimal “light” implementation of each other’s protocols. The reason is that, while the scoring mechanism is not “absolute” from the point of view of a node without history suddenly looking at every block, it is perfectly sufficient from the point of view of an entity that remains online over a long period of time, and a blockchain certainly is such an entity.

So far, there have been two major approaches that followed some kind of checkpoint-based trust model:

  1. Developer-issued checkpoints – the client developer issues a new checkpoint with each client upgrade (eg. used in PPCoin)
  2. Revert limit – nodes refuse to accept forks that revert more than N (eg. 3000) blocks (eg. used in Tendermint)

The first approach has been roundly criticized by the cryptocurrency community for being too centralized. The second, however, also has a flaw: a powerful attacker can not only revert a few thousand blocks, but also potentially split the network permanently. In the N-block revert case, the strategy is as follows. Suppose that the network is currently at block 10000, and N = 3000. The attacker starts a secret fork, and grows it by 3001 blocks faster than the main network. When the main network gets to 12999, and some node produces block 13000, the attacker reveals his own fork. Some nodes will see the main network’s block 13000, and refuse to switch to the attacker’s fork, but the nodes that did not yet see that block will be happy to revert from 12999 to 10000 and then accept the attacker’s fork. From there, the network is permanently split.

Fortunately, one can actually construct a third approach that neatly solves this problem, which we will call exponentially subjective scoring. Essentially, instead of rejecting forks that go back too far, we simply penalize them on a graduating scale. For every block, a node maintains a score and a “gravity” factor, which acts as a multiplier to the contribution that the block makes to the blockchain’s score. The gravity of the genesis block is 1, and normally the gravity of any other block is set to be equal to the gravity of its parent. However, if a node receives a block whose parent already has a chain of N descendants (ie. it’s a fork reverting N blocks), that block’s gravity is penalized by a factor of 0.99^N, and the penalty propagates forever down the chain and stacks multiplicatively with other penalties.

That is, a fork which starts 1 block ago will need to grow 1% faster than the main chain in order to overtake it, a fork which starts 100 blocks ago will need to grow 2.718 times as quickly, and a fork which starts 3000 blocks ago will need to grow 12428428189813 times as quickly – clearly an impossibility with even trivial proof of work.
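A tiny calculation reproducing those figures from the per-block 0.99 gravity penalty:

```python
def required_speedup(revert_depth: int, base: float = 0.99) -> float:
    """How much faster a fork starting `revert_depth` blocks back must grow
    to overtake the main chain, given that each of its blocks' gravity is
    multiplied by base ** revert_depth."""
    return 1 / base ** revert_depth

for depth in (1, 100, 3000):
    print(depth, required_speedup(depth))
# -> about 1.01, 2.7 and 1.2e13, matching the figures above
```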

The algorithm serves to smooth out the role of checkpointing, assigning a small “weak checkpoint” role to each individual block. If an attacker produces a fork that some nodes hear about even three blocks earlier than others, those two chains will need to stay within 3% of each other forever in order for a network split to maintain itself.

There are other solutions that could be used aside from, or even alongside ESS; a particular set of strategies involves stakeholders voting on a checkpoint every few thousand blocks, requiring every checkpoint produced to reflect a large consensus of the majority of the current stake (the reason the majority of the stake can’t vote on every block is, of course, that having that many signatures would bloat the blockchain).

Slasher Ghost

The other large complexity in implementing proof of stake for Ethereum specifically is the fact that the network includes a Turing-complete financial system where accounts can have arbitrary permissions and even permissions that change over time. In a simple currency, proof of stake is relatively easy to accomplish because each unit of currency has an unambiguous owner outside the system, and that owner can be counted on to participate in the stake-voting process by signing a message with the private key that owns the coins. In Ethereum, however, things are not quite so simple: if we do our job promoting proper wallet security right, the majority of ether is going to be stored in specialized storage contracts, and with Turing-complete code there is no clear way of ascertaining or assigning an “owner”.

One strategy that we looked at was delegation: requiring every address or contract to assign an address as a delegate to sign for them, and that delegate account would have to be controlled by a private key. However, there is a problem with any such approach. Suppose that a majority of the ether in the system is actually stored in application contracts (as opposed to personal storage contracts); this includes deposits in SchellingCoins and other stake-based protocols, security deposits in probabilistic enforcement systems, collateral for financial derivatives, funds owned by DAOs, etc. Those contracts do not have an owner even in spirit; in that case, the fear is that the contract will default to a strategy of renting out stake-voting delegations to the highest bidder. Because attackers are the only entities willing to bid more than the expected return from the delegation, this will make it very cheap for an attacker to acquire the signing rights to large quantities of stake.

The only solution to this within the delegation paradigm is to make it extremely risky to dole out signing privileges to untrusted parties; the simplest approach is to modify Slasher to require a large deposit, and slash the deposit as well as the reward in the event of double-signing. However, if we do this then we are essentially back to entrusting the fate of a large quantity of funds to a single private key, thereby defeating much of the point of Ethereum in the first place.

Fortunately, there is one alternative to delegation that is somewhat more effective: letting contracts themselves sign. To see how this works, consider the following protocol:

  1. There is now a SIGN opcode added.
  2. A signature is a series of virtual transactions which, when sequentially applied to the state at the end of the parent block, results in the SIGN opcode being called. The nonce of the first VTX in the signature must be the prevhash being signed, the nonce of the second must be the prevhash plus one, and so forth (alternatively, we can make the nonces -1, -2, -3 etc. and require the prevhash to be passed in through transaction data so as to be eventually supplied as an input to the SIGN opcode).
  3. When the block is processed, the state transitions from the VTXs are reverted (this is what is meant by “virtual”) but a deposit is subtracted from each signing contract and the contract is registered to receive the deposit and reward in 3000 blocks.
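
To illustrate, here is a rough Python sketch of how a client might verify such a virtual-transaction signature. The transaction format, the apply_transaction helper and the deposit size are hypothetical stand-ins for client internals, not a real API.

```python
# Rough sketch of verifying a virtual-transaction signature under the
# protocol above; every name here is a hypothetical stand-in.

from copy import deepcopy

SIGNING_DEPOSIT = 1500 * 10**18   # illustrative deposit size
PAYOUT_DELAY = 3000               # deposit + reward returned 3000 blocks later

def verify_signature(parent_state, prevhash, vtxs, apply_transaction):
    """Return the signing address, or None if the signature is invalid.

    parent_state: state at the end of the parent block (a dict here)
    prevhash: the hash being signed, treated as an integer
    vtxs: the list of virtual transactions making up the signature
    apply_transaction: hypothetical executor returning (new_state, sign_caller)
    """
    state = deepcopy(parent_state)   # all transitions stay "virtual"
    signer = None
    for i, vtx in enumerate(vtxs):
        # Rule 2: nonces must run prevhash, prevhash + 1, prevhash + 2, ...
        if vtx["nonce"] != prevhash + i:
            return None
        state, sign_caller = apply_transaction(state, vtx)
        if sign_caller is not None:
            signer = sign_caller
    if signer is None:               # the SIGN opcode was never reached
        return None
    # Rule 3: discard the virtual state changes, but charge the deposit
    # against the real parent state and schedule the deposit + reward payout.
    parent_state["balances"][signer] -= SIGNING_DEPOSIT
    parent_state["payouts"].append((signer, PAYOUT_DELAY))
    return signer
```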

Basically, it is the contract’s job to determine the access policy for signing, and the contract does this by placing the SIGN opcode behind the appropriate set of conditional clauses. A signature now becomes a set of transactions which together satisfy this access policy. The incentive for contract developers to keep this policy secure, and not dole it out to anyone who asks, is that if it is not secure then someone can double-sign with it and destroy the signing deposit, taking a portion for themselves as per the Slasher protocol. Some contracts will still delegate, but this is unavoidable; even in proof-of-stake systems for plain currencies such as NXT, many users end up delegating (eg. DPOS even goes so far as to institutionalize delegation), and at least here contracts have an incentive to delegate to an access policy that is not likely to come under the influence of a hostile entity – in fact, we may even see an equilibrium where contracts compete to deliver secure blockchain-based stake pools that are least likely to double-vote, thereby increasing security over time.
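
As a toy illustration of such an access policy, the following Python simulation (not real contract code) sketches a contract that only triggers SIGN once two of three designated keys have approved the prevhash, so that leaking any single key is not enough to cause a deposit-destroying double-sign.

```python
# Python simulation of a 2-of-3 access policy in front of the SIGN opcode.
# This is an illustrative model, not actual contract code.

class TwoOfThreeSigner:
    def __init__(self, keys):
        self.keys = set(keys)          # the three trusted approvers
        self.approvals = {}            # prevhash -> set of approving keys

    def approve(self, sender, prevhash):
        # Record an approval, but only from one of the designated keys.
        if sender in self.keys:
            self.approvals.setdefault(prevhash, set()).add(sender)

    def sign(self, prevhash):
        # Stand-in for invoking the SIGN opcode from inside the contract:
        # only sign once at least two approvals have been recorded.
        if len(self.approvals.get(prevhash, set())) >= 2:
            return ("SIGN", prevhash)
        return None
```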

However, the virtual-transactions-as-signatures paradigm does impose one complication: it is no longer trivial to provide an evidence transaction showing two signatures by the same signer at the same block height. Because the result of a transaction execution depends on the starting state, in order to ascertain whether a given evidence transaction is valid one must process everything up to the block in which the second signature was given. Thus, one must essentially “include” the fork of a blockchain inside of the main chain. To do this efficiently, a relatively simple proposal is a sort of “Slasher GHOST” protocol, where one can include side-blocks in the main chain as uncles. Specifically, we declare two new transaction types:

  1. [block_number, uncle_hash] – this transaction is valid if (1) the block with the given uncle_hash has already been validated, (2) the block with the given uncle_hash has the given block number, and (3) the parent of that uncle is either in the main chain or was included earlier as an uncle. During the act of processing this transaction, if addresses that double-signed at that height are detected, they are appropriately penalized.
  2. [block_number, uncle_parent_hash, vtx] – this transaction is valid if (1) the block with the given uncle_parent_hash has already been validated, (2) the given virtual transaction is valid at the given block height with the state at the end of uncle_parent_hash, and (3) the virtual transaction shows a signature by an address which also signed a block at the given block_number in the main chain. This transaction penalizes that one address.

Essentially, one can think of the mechanism as working like a “zipper”, with one block from the fork chain at a time being zipped into the main chain. Note that for a fork to start, there must exist double-signers at every block height; there is no situation where a double-signer first appears 1500 blocks into a fork, forcing a whistleblower to “zip” 1499 innocent blocks into the chain before getting to the target block – rather, even if 1500 blocks do need to be added, each one of them notifies the main chain about five separate malfeasors that double-signed at that height. One somewhat complicated property of the scheme is that the validity of these “Slasher uncles” depends on whether or not the node has validated a particular block outside of the main chain; to facilitate this, we specify that a response to a “getblock” message in the wire protocol must include the uncle-dependencies for a block before the actual block. Note that this may sometimes lead to a recursive expansion; however, the denial-of-service potential is limited since each individual block still requires a substantial quantity of proof-of-work to produce.
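
For concreteness, here is a rough sketch of how a client might process the two transaction types above; all of the chain-access helpers are hypothetical, and real validation would need to handle many more edge cases.

```python
# Rough sketch of the two Slasher GHOST transaction types, using
# hypothetical chain-access helpers.

def process_uncle_header(chain, block_number, uncle_hash):
    """Type 1: [block_number, uncle_hash]."""
    uncle = chain.get_validated_block(uncle_hash)
    if uncle is None or uncle.number != block_number:
        return False
    if not (chain.in_main_chain(uncle.parent_hash)
            or chain.is_included_uncle(uncle.parent_hash)):
        return False
    # Anyone who signed both this uncle and the main-chain block at the
    # same height gets slashed.
    main_block = chain.main_block_at(block_number)
    for addr in set(uncle.signers) & set(main_block.signers):
        chain.slash(addr)
    return True

def process_uncle_signature(chain, block_number, uncle_parent_hash, vtx):
    """Type 2: [block_number, uncle_parent_hash, vtx]."""
    parent = chain.get_validated_block(uncle_parent_hash)
    if parent is None:
        return False
    # Check the virtual-transaction signature against the state at the end
    # of uncle_parent_hash, as in the SIGN-opcode protocol above.
    signer = chain.verify_vtx_signature(parent.post_state, block_number, vtx)
    if signer is None:
        return False
    if signer not in chain.main_block_at(block_number).signers:
        return False
    chain.slash(signer)
    return True
```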

Blockmakers and Overrides

Finally, there is a third complication. In the hybrid-proof-of-stake version of Slasher, if a miner has an overwhelming share of the hashpower, then the miner can produce multiple versions of each block, and send different versions to different parts of the network. Half the signers will see and sign one block, half will see and sign another block, and the network will be stuck with two blocks with insufficient signatures, and no signer willing to slash themselves to complete the process; thus, a proof-of-work override will be required, a dangerous situation since the miner controls most of the proof-of-work. There are two possible solutions here:

  1. Signers should wait a few seconds after receiving a block before signing, and only sign stochastically in some fashion that ensures that a random one of the blocks will dominate.
  2. There should be a single “blockmaker” among the signers whose signature is required for a block to be valid. Effectively, this transfers the “leadership” role from a miner to a stakeholder, eliminating the problem, but at the cost of adding a dependency on a single party who now has the ability to substantially inconvenience everyone, whether intentionally by not signing or unintentionally by being the target of a denial-of-service attack. Such behavior can be disincentivized by having the signer lose part of their deposit if they do not sign, but even then the block time will be rather jumpy if the only way to get around an absent blockmaker is a proof-of-work override.

One possible solution to the problem in (2) is to remove proof of work entirely (or almost entirely, keeping a minimal amount for anti-DDoS value), replacing it with a mechanism that Vlad Zamfir has coined “delegated timestamping”. Essentially, every block must appear on schedule (eg. at 15-second intervals), and when a block appears the signers vote 1 if the block was on time, or 0 if the block was too early or too late. If the majority of the signers vote 0, then the block is treated as invalid – it is kept in the chain in order to give the signers their fair reward, but the blockmaker gets no reward and the state transition gets skipped over. Voting is incentivized via SchellingCoin: the signers whose vote agrees with the majority get an extra reward, so assuming that everyone else is going to be honest, everyone has the incentive to be honest, in a self-reinforcing equilibrium. The theory is that a 15-second block time is too fast for signers to coordinate on a false vote (the astute reader may note that the signers were decided 3000 blocks in advance, so this is not really true; to fix this we can create two groups of signers, one pre-chosen group for validation and another group chosen at block creation time for timestamp voting).
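
A small sketch of the vote-and-tally logic may help. The 15-second schedule comes from the text and the 7.5-second acceptance window matches the fuller protocol below; the reward unit and function names are illustrative.

```python
# Small sketch of delegated timestamping: each timestamper votes 1 if it saw
# the block close to its scheduled time, and majority voters are rewarded.

TARGET_INTERVAL = 15      # seconds between blocks
WINDOW = 7.5              # a vote of 1 means "seen within 7.5s of schedule"
TIMESTAMP_REWARD = 1      # illustrative reward unit

def vote(seen_at, genesis_time, height):
    scheduled = genesis_time + height * TARGET_INTERVAL
    return 1 if abs(seen_at - scheduled) <= WINDOW else 0

def tally(votes):
    """votes: dict mapping timestamper address -> 0 or 1."""
    ones = sum(votes.values())
    majority = 1 if ones * 2 > len(votes) else 0   # ties count as "late"
    rewards = {addr: TIMESTAMP_REWARD
               for addr, v in votes.items() if v == majority}
    block_valid = (majority == 1)   # blockmaker is rewarded only if on time
    return block_valid, rewards
```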

Putting it all Together

Taken together, we can thus see something like the following working as a functional version of Slasher:

  1. Every block has a designated blockmaker, a set of designated signers, and a set of designated timestampers. For a block to be accepted as part of the chain it must be accompanied by virtual-transactions-as-signatures from the blockmaker, two thirds of the signers and 10 timestampers, and the block must have some minimal proof of work for anti-DDoS reasons (say, targeted to 0.01x per year)
  2. During block N, we say that the set of potential signers of block N + 3000 is the set of addresses such that sha3(address + block[N].hash) < block[N].balance(address) * D2, where D2 is a difficulty parameter targeting 15 signers per block (ie. if block N has fewer than 15 signers, D2 goes up, otherwise it goes down; see the sketch after this list).
  3. If a potential signer for block N + 3000 wants to become a signer, they must send a special transaction accepting this responsibility and supplying a deposit, and that transaction must get included between blocks N + 1 and N + 64. The set of designated signers for block N + 3000 is the set of all individuals that do this, and the blockmaker is the designated signer with the lowest value for sha3(address + block[N].hash). If the signer set is empty, no block at that height can be made. For blocks 0 … 2999, the blockmaker and only signer is the protocol developer.
  4. The set of timestampers of block N + 3000 is the set of addresses such that sha3(address + block[N].hash) < block[N].balance(address) * D3, where D3 is targeted such that there is an average of 20 timestampers per block (ie. if block N has fewer than 20 timestampers, D3 goes up, otherwise it goes down).
  5. Let T be the timestamp of the genesis block. When block N + 3000 is released, timestampers can supply virtual-transactions-as-signatures for that block, and have the choice of voting 0 or 1 on the block. Voting 1 means that they saw the block within 7.5 seconds of time T + (N + 3000) * 15, and voting 0 means that they received the block when the time was outside that range. Note that nodes should detect if their clocks are out of sync with everyone else’s clocks on the blockchain, and if so adjust their system clocks.
  6. Timestampers who voted along with the majority receive a reward, other timestampers get nothing.
  7. The designated signers for block N + 3000 have the ability to sign that block by supplying a set of virtual-transactions-as-a-signature. All designated signers who sign are scheduled to receive a reward and their returned deposit in block N + 6000. Signers who skipped out are scheduled to receive their returned deposit minus twice the reward (this means that it’s only economically profitable to sign up as a signer if you actually think there is a chance greater than 2/3 that you will be online).
  8. If the majority timestamper vote is 1, the blockmaker is scheduled to receive a reward and their returned deposit in block N + 6000. If the majority timestamper vote is 0, the blockmaker is scheduled to receive their deposit minus twice the reward, and the block is ignored (ie. the block is in the chain, but it does not contribute to the chain’s score, and the state of the next block starts from the end state of the block before the rejected block).
  9. If a signer signs two different blocks at height N + 3000, then if someone detects the double-signing before block N + 6000 they can submit an “evidence” transaction containing the two signatures to either or both chains, destroying the signer’s reward and deposit and transferring a third of it to the whistleblower.
  10. If there is an insufficient number of signers to sign or the blockmaker is missing at a particular block height h, the designated blockmaker for height h + 1 can produce a block directly on top of the block at height h - 1 after waiting for 30 seconds instead of 15.
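
As referenced in step 2, here is a small sketch of the eligibility check and the accompanying difficulty adjustment for D2 (the same logic applies to D3 and the timestampers). Plain SHA-256 stands in for sha3 to keep the example self-contained, and the adjustment step size is an arbitrary illustrative choice.

```python
# Sketch of the signer-eligibility rule and D2 adjustment from steps 2 and 4.
# SHA-256 is a stand-in for sha3; the 1% adjustment step is illustrative.

import hashlib

TARGET_SIGNERS = 15

def hash_int(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def is_eligible_signer(address: bytes, block_hash: bytes,
                       balance: int, d2: int) -> bool:
    # Eligible if sha3(address + block hash) < balance * D2.
    return hash_int(address + block_hash) < balance * d2

def adjust_d2(d2: int, signer_count: int) -> int:
    # Fewer signers than the target -> make eligibility easier (raise D2);
    # more than the target -> make it harder (lower D2).
    step = max(d2 // 100, 1)
    if signer_count < TARGET_SIGNERS:
        return d2 + step
    if signer_count > TARGET_SIGNERS:
        return max(d2 - step, 1)
    return d2
```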

After years of research, one thing has become clear: proof of stake is non-trivial – so non-trivial that some even consider it impossible. The issues of nothing-at-stake and long-range attacks, and the lack of mining as a rate-limiting device, require a number of compensatory mechanisms, and even the protocol above does not address the issue of how to randomly select signers. With a substantial proof of work reward, the problem is limited, as block hashes can be a source of randomness and we can mathematically show that the gain from holding back block hashes until a miner finds a hash that favorably selects future signers is usually less than the gain from publishing the block hashes. Without such a reward, however, other sources of randomness such as low-influence functions need to be used.

For Ethereum 1.0, we consider it highly desirable to both not excessively delay the release and not try too many untested features at once; hence, we will likely stick with ASIC-resistant proof of work, perhaps with non-Slasher proof of activity as an addon, and look at moving to a more comprehensive proof of stake model over time.
