Special thanks to Vlad Zamfir and Jae Kwon for many of the ideas described in this post

Aside from the primary debate around weak subjectivity, one of the important secondary arguments raised against proof of stake is the issue that proof of stake algorithms are much harder to make light-client friendly. Whereas proof of work algorithms involve the production of block headers which can be quickly verified, allowing a relatively small chain of headers to act as an implicit proof that the network considers a particular history to be valid, proof of stake is harder to fit into such a model. Because the validity of a block in proof of stake relies on stakeholder signatures, the validity depends on the ownership distribution of the currency in the particular block that was signed, and so it seems, at least at first glance, that in order to gain any assurances at all about the validity of a block, the entire block must be verified.

Given the sheer importance of light client protocols, particularly in light of the recent corporate interest in “internet of things” applications (which must often necessarily run on very weak and low-power hardware), light client friendliness is an important feature for a consensus algorithm to have, and so an effective proof of stake system must address it.

Light Clients in Proof of Work

In general, the core motivation behind the “light client” concept is as follows. By themselves, blockchain protocols, with the requirement that every node must process every transaction in order to ensure security, are expensive, and once a protocol gets sufficiently popular the blockchain becomes so big that many users are no longer able to bear that cost. The Bitcoin blockchain is currently 27 GB in size, and so very few users are willing to continue to run “full nodes” that process every transaction. On smartphones, and especially on embedded hardware, running a full node is outright impossible.

Hence, there needs to be some way for a user with far less computing power to still get a secure assurance about various details of the blockchain state – what is the balance/state of a particular account, did a particular transaction process, did a particular event happen, etc. Ideally, it should be possible for a light client to do this in logarithmic time – that is, squaring the number of transactions (eg. going from 1000 tx/day to 1000000 tx/day) should only double a light client’s cost. Fortunately, as it turns out, it is quite possible to design a cryptocurrency protocol that can be securely evaluated by light clients at this level of efficiency.

Basic block header model in Ethereum (note that Ethereum has a Merkle tree for transactions and accounts in each block, allowing light clients to easily access more data)

In Bitcoin, light client security works as follows. Instead of constructing a block as a monolithic object containing all of the transactions directly, a Bitcoin block is split up into two parts. First, there is a small piece of data called the block header, containing three key pieces of data:

  • The hash of the previous block header
  • The Merkle root of the transaction tree (see below)
  • The proof of work nonce

Additional data like the timestamp is also included in the block header, but this is not relevant here. Second, there is the transaction tree. Transactions in a Bitcoin block are stored in a data structure called a Merkle tree. The nodes on the bottom level of the tree are the transactions, and then going up from there every node is the hash of the two nodes below it. For example, if the bottom level had sixteen transactions, then the next level would have eight nodes: hash(tx[1] + tx[2]), hash(tx[3] + tx[4]), etc. The level above that would have four nodes (eg. the first node is equal to hash(hash(tx[1] + tx[2]) + hash(tx[3] + tx[4]))), the level above has two nodes, and then the level at the top has one node, the Merkle root of the entire tree.
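To make the construction concrete, here is a minimal sketch in Python. It uses a single SHA-256 over raw transaction bytes for simplicity; Bitcoin itself double-hashes and builds the tree over transaction IDs, but the tree-building logic is the same:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list) -> bytes:
    # Bottom level of the tree: the (hashes of the) transactions
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        # Each parent node is the hash of its two children concatenated
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]  # the Merkle root

root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
```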

The Merkle root can be thought of as a hash of all the transactions together, and has the same properties that you would expect out of a hash – if you change even one bit in one transaction, the Merkle root will end up completely different, and there is no way to come up with two different sets of transactions that have the same Merkle root. The reason why this more complicated tree construction needs to be used is that it actually allows you to come up with a compact proof that one particular transaction was included in a particular block. How? Essentially, just provide the branch of the tree going down to the transaction:

The verifier will verify only the hashes going down along the branch, and thereby be assured that the given transaction is legitimately a member of the tree that produced a particular Merkle root. If an attacker tries to change any hash anywhere going down the branch, the hashes will no longer match and the proof will be invalid. The size of each proof is equal to the depth of the tree – ie. logarithmic in the number of transactions. If your block contains 2**20 (ie. ~1 million) transactions, then the Merkle tree will have only 20 levels, and so the verifier will only need to compute 20 hashes in order to verify a proof. If your block contains 2**30 (ie. ~1 billion) transactions, then the Merkle tree will have 30 levels, and so a light client will be able to verify a transaction with just 30 hashes.
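A verifier for such a branch, reusing the sha256 helper from the sketch above, might look like this (the (sibling, side) encoding of the branch is an illustrative assumption, not Bitcoin’s wire format):

```python
def verify_branch(leaf_hash: bytes, branch: list, root: bytes) -> bool:
    # branch: (sibling_hash, side) pairs from the leaf level up to the root,
    # where side is 'L' if the sibling is the left child at that level
    h = leaf_hash
    for sibling, side in branch:
        h = sha256(sibling + h) if side == 'L' else sha256(h + sibling)
    # Only len(branch) hashes are computed: logarithmic in the tx count
    return h == root
```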

Ethereum extends this basic mechanism with two additional Merkle trees in each block header, allowing nodes to prove not just that a particular transaction occurred, but also that a particular account has a particular balance and state, that a particular event occurred, and even that a particular account does not exist.

Verifying the Roots

Now, this transaction verification process all assumes one thing: that the Merkle root is trusted. If someone proves to you that a transaction is part of a Merkle tree that has some root, that by itself means nothing; membership in a Merkle tree only proves that a transaction is valid if the Merkle root is itself known to be valid. Hence, the other critical part of a light client protocol is figuring out exactly how to validate the Merkle roots – or, more generally, how to validate the block headers.

First of all, let us determine exactly what we mean by “validating block headers”. Light clients are not capable of fully validating a block by themselves; protocols exist for doing validation collaboratively, but this mechanism is expensive, and so in order to prevent attackers from wasting everyone’s time by throwing around invalid blocks we need a way of first quickly determining whether or not a particular block header is probably valid. By “probably valid” what we mean is this: if an attacker gives us a block that is determined to be probably valid, but is not actually valid, then the attacker needs to pay a high cost for doing so. Even if the attacker succeeds in temporarily fooling a light client or wasting its time, the attacker should still suffer more than the victims of the attack. This is the standard that we will apply to proof of work, and proof of stake, equally.

In proof of work, the process is simple. The core idea behind proof of work is that there exists a mathematical function which a block header must satisfy in order to be valid, and it is computationally very intensive to produce such a valid header. If a light client was offline for some period of time, and then comes back online, then it will look for the longest chain of valid block headers, and assume that that chain is the legitimate blockchain. The cost of spoofing this mechanism, providing a chain of block headers that is probably-valid-but-not-actually-valid, is very high; in fact, it is almost exactly the same as the cost of launching a 51% attack on the network.

In Bitcoin, this proof of work condition is simple: sha256(block_header) < 2**187 (in practice the “target” value changes, but once again we can dispense with this in our simplified analysis). In order to satisfy this condition, miners must repeatedly try different nonce values until they come upon one such that the proof of work condition for the block header is satisfied; on average, this consumes about 2**69 computational effort per block. The elegant feature of Bitcoin-style proof of work is that every block header can be verified by itself, without relying on any external information at all. This means that the process of validating the block headers can in fact be done in constant time – download 80 bytes and run a hash of it – even better than the logarithmic bound that we have established for ourselves. In proof of stake, unfortunately we do not have such a nice mechanism.
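In code, the simplified check is a single hash and comparison (a sketch only: real Bitcoin double-hashes the header and derives the target from the difficulty bits encoded in it):

```python
TARGET = 2**187  # simplified fixed target, as in the text

def header_probably_valid(header: bytes) -> bool:
    # Constant time, no external data: hash the 80-byte header, compare to target
    return int.from_bytes(sha256(header), 'big') < TARGET
```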

Light Clients in Proof of Stake

If we want to have an effective light client for proof of stake, ideally we would like to achieve the exact same complexity-theoretic properties as proof of work, although necessarily in a different way. Once a block header is trusted, the process for accessing any data from the header is the same, so we know that it will take a logarithmic amount of time in order to do. However, we want the process of validating the block headers themselves to be logarithmic as well.

To start off, let us describe an older version of Slasher, which was not particularly designed to be explicitly light-client friendly:

  1. In order to be a “potential blockmaker” or “potential signer”, a user must put down a security deposit of some size. This security deposit can be put down at any time, and lasts for a long period of time, say 3 months.
  2. During every time slot T (eg. T = 3069120 to 3069135 seconds after genesis), some function produces a random number R (there are many nuances behind making the random number secure, but they are not relevant here). Then, suppose that the set of potential signers ps (stored in a separate Merkle tree) has size N. We take ps[sha3(R) % N] as the blockmaker, and ps[sha3(R + 1) % N], ps[sha3(R + 2) % N] ... ps[sha3(R + 15) % N] as the signers (essentially, using R as entropy to randomly select a blockmaker and 15 signers; a sketch of this selection follows the list)
  3. Blocks consist of a header containing (i) the hash of the previous block, (ii) the list of signatures from the blockmaker and signers, and (iii) the Merkle root of the transactions and state, as well as (iv) auxiliary data like the timestamp.
  4. A block produced during time slot T is valid if that block is signed by the blockmaker and at least 10 of the 15 signers.
  5. If a blockmaker or signer legitimately participates in the blockmaking process, they get a small signing reward.
  6. If a blockmaker or signer signs a block that is not on the main chain, then that signature can be submitted into the main chain as “evidence” that the blockmaker or signer is trying to participate in an attack, and this leads to that blockmaker or signer losing their deposit. The evidence submitter may receive 33% of the deposit as a reward.
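A sketch of the selection rule in step 2 above, with Python’s standard SHA3-256 standing in for Ethereum’s sha3 (which is historically Keccak-256, a slightly different function):

```python
import hashlib

def sha3_int(x: int) -> int:
    digest = hashlib.sha3_256((x % 2**256).to_bytes(32, 'big')).digest()
    return int.from_bytes(digest, 'big')

def select_validators(R: int, ps: list):
    # ps is the set of potential signers; R is the random value for this slot
    N = len(ps)
    blockmaker = ps[sha3_int(R) % N]
    signers = [ps[sha3_int(R + i) % N] for i in range(1, 16)]
    return blockmaker, signers
```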

Unlike proof of work, where the incentive not to mine on a fork of the main chain is the opportunity cost of not getting the reward on the main chain, in proof of stake the incentive is that if you mine on the wrong chain you will get explicitly punished for it. This is important; because a very large amount of punishment can be meted out per bad signature, a much smaller number of block headers are required to achieve the same security margin.

Now, let us examine what a light client needs to do. Suppose that the light client was last online N blocks ago, and wants to authenticate the state of the current block. What does the light client need to do? If a light client already knows that a block B[k] is valid, and wants to authenticate the next block B[k+1], the steps are roughly as follows:

  1. Compute the function that produces the random value R during block B[k+1] (computable in either constant or logarithmic time, depending on implementation)
  2. Given R, get the public keys/addresses of the selected blockmaker and signer from the blockchain’s state tree (logarithmic time)
  3. Verify the signatures in the block header against the public keys (constant time)
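Putting the three steps together, a per-block check might look roughly like the following sketch, where verify is a stand-in for whatever signature scheme the chain uses and select_validators is the selection sketch from earlier:

```python
def validate_header(header: dict, signer_set: list, R: int, verify) -> bool:
    # Steps 1-2: derive the blockmaker and the 15 signers from R and the state
    blockmaker, signers = select_validators(R, signer_set)
    # Step 3: check the blockmaker's signature...
    if not verify(blockmaker, header["hash"], header["maker_sig"]):
        return False
    # ...and require at least 10 of the 15 selected signers to have signed
    good = sum(1 for pub, sig in zip(signers, header["signer_sigs"])
               if sig is not None and verify(pub, header["hash"], sig))
    return good >= 10
```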

And that’s it. Now, there is one gotcha. The set of potential signers may end up changing during the block, so it seems as though a light client might need to process the transactions in the block before being able to compute ps[sha3(R + k) % N]. However, we can resolve this by simply saying that it’s the potential signer set from the start of the block, or even a block 100 blocks ago, that we are selecting from.

Now, let us work out the formal security assurances that this protocol gives us. Suppose that a light client processes a set of blocks, B[1] ... B[n], such that all blocks starting from B[k + 1] are invalid. Assuming that all blocks up to B[k] are valid, and that the signer set for block B[i] is determined from block B[i - 100], this means that the light client will be able to correctly deduce the signature validity for blocks B[k + 1] ... B[k + 100]. Hence, if an attacker comes up with a set of invalid blocks that fool a light client, the light client can still be sure that the attacker will have to pay ~1100 security deposits for the first 100 invalid blocks (each block carries at least 11 deposit-backed signatures: the blockmaker plus at least 10 of the 15 signers, and 100 × 11 = 1100). For future blocks, the attacker will be able to get away with signing blocks with fake addresses, but 1100 security deposits is assurance enough, particularly since the deposits can be variably sized and thus hold many millions of dollars of capital altogether.

Thus, even this older version of Slasher is, by our definition, light-client-friendly; we can get the same kind of security assurance as proof of work in logarithmic time.

A Better Light-Client Protocol

However, we can do significantly better than the naive algorithm above. The key insight that lets us go further is that of splitting the blockchain up into epochs. Here, let us define a more advanced version of Slasher, that we will call “epoch Slasher”. Epoch Slasher is identical to the above Slasher, except for a few other conditions:

  1. Define a checkpoint as a block such that block.number % n == 0 (ie. every n blocks there is a checkpoint). Think of n as being somewhere around a few weeks long; it only needs to be substantially less than the security deposit length.
  2. For a checkpoint to be valid, 2/3 of all potential signers have to approve it. Also, the checkpoint must directly include the hash of the previous checkpoint.
  3. The set of signers during a non-checkpoint block should be determined from the set of signers during the second-last checkpoint.

This protocol allows a light client to catch up much faster. Instead of processing every block, the light client would skip directly to the next checkpoint, and validate it. The light client can even probabilistically check the signatures, picking out a random 80 signers and requesting signatures for them specifically. If the signatures are invalid, then we can be statistically certain that thousands of security deposits are going to get destroyed.
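A sketch of that probabilistic check, where fetch_sig and verify are hypothetical stand-ins for the network request and the signature check:

```python
import random

def spot_check_checkpoint(claimed_approvers: list, checkpoint_hash: bytes,
                          fetch_sig, verify, k: int = 80) -> bool:
    # Sample k of the validators claimed to have approved the checkpoint and
    # demand their signatures; a substantially false 2/3-approval claim fails
    # this test with overwhelming probability.
    for pub in random.sample(claimed_approvers, min(k, len(claimed_approvers))):
        sig = fetch_sig(pub)
        if sig is None or not verify(pub, checkpoint_hash, sig):
            return False
    return True
```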

After a light client has authenticated up to the latest checkpoint, the light client can simply grab the latest block and its 100 parents, and use a simpler per-block protocol to validate them as in the original Slasher. If those blocks end up being invalid or on the wrong chain, then because the light client has already authenticated the latest checkpoint, and because by the rules of the protocol the deposits at that checkpoint remain active until at least the next checkpoint, the light client can once again be sure that at least 1100 deposits will be destroyed.

With this latter protocol, we can see that not only is proof of stake just as capable of light-client friendliness as proof of work, but moreover it’s actually even more light-client friendly. With proof of work, a light client synchronizing with the blockchain must download and process every block header in the chain, a process that is particularly expensive if the blockchain is fast, as is one of our own design objectives. With proof of stake, we can simply skip directly to the latest block, and validate the last 100 blocks before that to get an assurance that if we are on the wrong chain, at least 1100 security deposits will be destroyed.

Now, there is still a legitimate role for proof of work in proof of stake. In proof of stake, as we have seen, it takes a logarithmic amount of effort to probably-validate each individual block, and so an attacker can still cause light clients a logarithmic amount of annoyance by broadcasting bad blocks. Proof of work alone can be effectively validated in constant time, and without fetching any data from the network. Hence, it may make sense for a proof of stake algorithm to still require a small amount of proof of work on each block, ensuring that an attacker must spend some computational effort in order to even slightly inconvenience light clients. However, the amount of computational effort required to compute these proofs of work will only need to be minuscule.


Back in November, we created a quick survey for the Ethereum community to help us gauge how we’re doing, what can be improved, and how best we can engage with you all as we move forward towards the genesis block release in March. We feel it’s very important to enable the community to interact with Ethereum as well as itself, and we hope to offer new and exciting tools to do so using the survey results for guidance.

The survey itself consisted of 14 questions split into two sections: Ethereum as an “Organisation” and Ethereum as a “Technology”. There was a total of 286 responses. This represents 7.8% of the current Ethereum reddit population, or 2.4% of the current @ethereumproject followers.

What country do you currently reside in?

Ethereum World

So, this is where everybody lives. To sum it up by continent – of the 286 respondents there are 123 (43%) in North America, 114 (40%) in Europe, 30 (10%) in Asia, 13 (5%) in Oceania and 6 (2%) in South America. No surprises there, though it does show how we – and the crypto space in general – have much work to do in areas south of the Brandt Line. One way to go about this is to seed more international Ethereum meetups. You can see a map of all the current Ethereum meetups here (we have 81 in total all over the world, from London to New York to Tehran, with over 6000 members taking part). If you’d like to start one yourself, please do message us and we can offer further assistance.


It’s understood that our transparency is very important to the community. To that end, we strive to make much of our internal workings freely available on the internet. As indicated in the chart, most people agree that we are doing just that. However, more can always be done. We’re currently working on a refresh of the ethereum.org website ready for the release of the genesis block. Expect much more content and information as we complete this towards the end of January. In the meantime, have a look at the Ethereum GitHub Repository, or head over to the new ΞTH ÐΞV website for a greater understanding of the entity that is delivering Ethereum 1.0, as well as its truly incredible team.



We’ve always tried to give the community as much information about our financial situation as possible, and from the results it seems like a lot of you agree. For further information on how Ethereum intends to use the funds raised in the Ether sale as we move forward, check out the Road Map and the ĐΞV PLAN. To learn more about the Ether Sale itself, have a look at Vitalik’s Ether Sale Introduction, the Ethereum Bitcoin Wallet, or the Ether Sale Statistical Overview.


Though most people agree Ethereum’s use cases in society are clear, I wouldn’t be so sure we’ve figured them all out just yet. Every day we’re speaking with developers and entrepreneurs via Skype or on IRC (join in your browser – #ethereum / #ethereum-dev) who have thought of new and exciting ideas that they are looking to implement on top of Ethereum – many of which are brand new to us. For a brief overview of some of the use cases we’ve encountered, check out Stephan Tual’s recent presentation at NewFinance.


We’re doing our best to keep everyone updated with the plethora of changes, updates and general progression of the project that’s been taking place over the recent months. Gavin Wood and Jeff Wilcke especially have written some excellent blog updates on how things are going in their respective Berlin and Amsterdam ÐΞV Hubs. You can see all of the updates in the Project category of the Ethereum blog.


ΞTH ÐΞV’s mission statement is now proudly presented on the ΞTH ÐΞV website for all to see. In detail, it explains what needs to be achieved as time goes on, but can be summed up as “To research, design and build software that, as best as possible, facilitates, in a secure, decentralised and fair manner, the communication and automatically-enforced agreement between parties.”


Much like the crypto space in general, Ethereum is somewhat difficult to initially get your head around. No doubt about that, and it’s our job to make the process of gaining understanding and enabling participation as easy and intuitive as possible. As mentioned previously, the new-look ethereum.org website will be an invaluable tool in helping people access the right information that is applicable to their own knowledge and skill set. Also, in time we aim to create a Udemy/Codecademy-like utility which will allow people with skills ranging from none to Jedi Master to learn how Ethereum works and how to implement their ideas. In the meantime, a great place to start for those wanting to use Ethereum is Ken Kappler’s recent Tutorials.

Of the following aspects, do you think we should be focusing more, less, or about the same on them?

This was an important question as it gave a lot of perspective on what aspects needed to be focused on before genesis, and what (though useful) could be developed afterwards. From a UI point of view, the Go team in Amsterdam is working towards the creation of Mist, Ethereum’s “Ðapp Navigator”. Mist’s initial design ideas are presented by the Lead UI Designer, Alex Van de Sande in this video.

Ease of installation will factor greatly in user adoption – we can’t very well have people recompiling the client every time a new update is pushed! So binaries with internal update systems are in the pipeline. Client reliability (bugs) is being addressed by Jutta Steiner, the Manager of our internal and external security audits. We expect the community bug bounty project to be live by the middle of January, so stay tuned and be ready for epic 11-figure Satoshi rewards, leaderboards and more “1337” prizes.

Developer tools are on the way too. Specifically, project “Mix”. Mix supports some rather amazing features, including documentation, a compiler, and debugger integration for writing information on code health, valid invariants, code structure and code formatting, as well as variable values and assertion truth annotations. It’s a long-term project expected to be delivered in the next 12-18 months; right now we are very much focused on completing the blockchain. Once complete, we can reallocate our resources to other important projects. You can find out more in the Mix presentation from ÐΞVcon-0. For now, documentation is constantly being generated on the Ethereum GitHub Wiki.

The blog and social media interaction will continue to deliver Ethereum content on relevant channels with the aim of reaching the widest range of people possible.



With more people owning smartphones than computers already, imagine how prolific they will be as time goes on. This will especially be the case in emerging markets such as India and Nigeria, where it’s likely they’ll leapfrog computers to some extent and gain wide adoption very quickly. A mobile light client will be greatly important to the usability of Ethereum. As part of IBM and Samsung’s joint project “Adept” (an IoT platform which is currently being unveiled at CES 2015), an Android version of the Ethereum Java client, ethereumj, is going to be open-sourced on GitHub. This will go a long way to getting Ethereum mobile!


It’s interesting to see a very mixed bag of responses for this question. As was said previously, Ethereum’s use cases are as wide as they are varied, and it’s great to see how many different types of services people are looking to implement on top of Ethereum. The emphasis on governance based Ðapps highlights Ethereum’s ability to facilitate interactions between the digital and physical world and create autonomously governed communities that can compete with both governments and corporations. Primavera De Filippi and Raffaele Mauro investigate this further in the Internet Policy Review Journal.

Which would be your favourite OS development environment?

This chart shows a reasonably even spread; we’ve done our best to make the various clients available on different operating systems. You can find the AlethZero binaries here, and the Mist binaries here. These however become obsolete very quickly and may not connect to the test net as development continues, so if you’re considering using Ethereum before release, it’s well worthwhile checking the client building tutorials to get the most up-to-date versions of the clients.


Which Ethereum clients do you use?

With Mist (Go), AlethZero (C++), Pythereum (Python), Node-Ethereum (Node.js), and Ethereumj (Java), Ethereum already has a plethora of clients available. The Yellow Paper written by Gavin Wood is a great reference for the community to create its own clients, as seen with those still under development such as the Clojure and Objective-C iterations.

Which language do you prefer to write contracts in?

As Gavin Wood has mentioned in a previous blog post, Mutan and LLL as smart contract languages will be mothballed. Serpent will continue to be developed by Vitalik and his team, and Solidity will continue as the primary development language for Ethereum contracts. You can try Solidity in your browser, or watch the recent vision and roadmap presentation by Gavin Wood and Vitalik Buterin at ÐΞVcon-0.

Thanks to Alex Van de Sande for helping with the implementation of the survey and chart graphics. Icons retrieved from icons8. If anyone would like a copy of the raw survey results, feel free to email us.


First of all, happy new year! What a year it has been. With a little luck we’ll surpass last year with an even more awesome year. It’s been too long since I’ve given an update on my side of things and that of the Go team, mostly due to a lack of time. I’ve been so incredibly busy, and so many things have happened these past 2 months, that I’ve hardly had time to sit down and assess it all.

As you may be well aware, the audit is looming around the corner and my little baby (go-ethereum!) will undergo its full inspection very, very soon. The audit teams will tear it apart and see if the repo contains anything incorrectly implemented, as well as search for any major security flaws in the design and implementation. We’ve been pretty solid on tests, testing implementation details as well as consensus tests (thanks to Christoph), and will continue to add more tests over time. We’ll see how they hold up during the audit (though I’m confident we’ll be fine, it’s still a little bit scary (-:)

Development

PoC-7 has been released now for about a week and has been quite stable (and growing in size!). We’re already hard at work finalising PoC-8, which includes numerous small changes:

  • Adjusted block time back to 12s (was 4s)
  • Op code PREVHASH has become BLOCKHASH(N), and therefore PREVHASH = BLOCKHASH(NUMBER - 1)
  • We’ve added an additional pre-compiled contract at address 0x04 which returns the given input (acts like copy / memcpy)

Ongoing

P2P

Felix has been hard at work on our new P2P package, which has now entered into v0.1 (PoC-7) and will soon undergo its first upgrade for PoC-8. Felix has done an amazing job on the design of the package and it’s a real pleasure to work with. Auto-generated documentation can be found at GoDoc.

Whisper

A month or so back I finished the first draft of Whisper for the Go implementation and it’s now passing whisper messages nicely around the network and uses the P2P package mentioned earlier. The Go API is relatively easy and requires almost zero setup.

Backend

The backend stack of ethereum has also received its first major (well deserved) overhaul. Viktor’s been incredibly hard at work reimplementing the download manager and the ethereum sub protocol.

Swarm

Since the first day Dani joined the team he’s passionately been working on the peer selection algorithm and distributed preimage archive. The DPA will be used for our Swarm tech. The spec is about 95% complete and roughly 50% has been implemented. Progress is going strong!

Both go-ethereum/p2p and go-ethereum/whisper have been developed in such a way that neither require ethereum to operate. If you’re developing in Go and your application requires a P2P network or (dark) messaging try out the packages. An example sub protocol can be found here and an example on how to use Whisper can be found here.

Ams Hub

Now that the hub is finally set up you’re free to drop by and grab a coffee with us. You can find us in the rather posh neighbourhood of Amsterdam Zuid near Museumplein (Alexander Boerstraat 21).

In my next post I hope I’ll have a release candidate for PoC-8 and perhaps even a draft implementation of swarm. But until then, happy whispering and mining!


One of the criticisms that many people have made about the current direction of the cryptocurrency space is the increasing amount of fragmentation that we are seeing. What was earlier perhaps a more tightly bound community centered around developing the common infrastructure of Bitcoin is now increasingly a collection of “silos”, discrete projects all working on their own separate things. There are a number of developers and researchers who are either working for Ethereum or working on ideas as volunteers and happen to spend lots of time interacting with the Ethereum community, and this set of people has coalesced into a group dedicated to building out our particular vision. Another quasi-decentralized collective, Bitshares, has set their hearts on their own vision, combining their particular combination of DPOS, market-pegged assets and vision of blockchain as decentralized autonomous corporation as a way of reaching their political goals of free-market libertarianism and a contract free society. Blockstream, the company behind “sidechains”, has likewise attracted their own group of people and their own set of visions and agendas – and likewise for Truthcoin, Maidsafe, NXT, and many others.

One argument, often raised by Bitcoin maximalists and sidechains proponents, is that this fragmentation is harmful to the cryptocurrency ecosystem – instead of all going our own separate ways and competing for users, we should all be working together and cooperating under Bitcoin’s common banner. As Fabian Brian Crane summarizes:

One recent event that has further inflamed the discussion is the publication of the sidechains proposal. The idea of sidechains is to allow the trustless innovation of altcoins while offering them the same monetary base, liquidity and mining power of the Bitcoin network.
For the proponents, this represents a crucial effort to rally the cryptocurrency ecosystem behind its most successful project and to build on the infrastructure and ecosystem already in place, instead of dispersing efforts in a hundred different directions.

Even to those who disagree with Bitcoin maximalism, this seems like a rather reasonable point, and even if the cryptocurrency community should not all stand together under the banner of “Bitcoin” one may argue that we need to all stand together somehow, working to build a more unified ecosystem. If Bitcoin is not powerful enough to be a viable backbone for life, the crypto universe and everything, then why not build a better and more scalable decentralized computer instead and build everything on that? Hypercubes certainly seem powerful enough to be worth being a maximalist over, if you’re the sort of person to whom one-X-to-rule-them-all proposals are intuitively appealing, and the members of Bitshares, Blockstream and other “silos” are often quite eager to believe the same thing about their own particular solutions, whether they are based on merged-mining, DPOS plus BitAssets or whatever else.

So why not? If there truly is one consensus mechanism that is best, why should we not have a large merger between the various projects, come up with the best kind of decentralized computer to push forward as a basis for the crypto-economy, and move forward together under one unified system? In some respects, this seems noble; “fragmentation” certainly has undesirable properties, and it is natural to see “working together” as a good thing. In reality, however, while more cooperation is certainly useful, and this blog post will later describe how and why, desires for extreme consolidation or winner-take-all are to a large degree exactly wrong – not only is fragmentation not all that bad, but rather it’s inevitable, and arguably the only way that this space can reasonably prosper.

Agree to Disagree

Why has fragmentation been happening, and why should we continue to let it happen? To the first question, and also simultaneously to the second, the answer is simple: we fragment because we disagree. Particularly, consider some of the following claims, all of which I believe in, but which are in many cases a substantial departure from the philosophies of many other people and projects:

  • I do not think that weak subjectivity is all that much of a problem. However, I am still not comfortable with much higher degrees of subjectivity and intrinsic reliance on extra-protocol social consensus.
  • I consider Bitcoin’s $600 million/year wasted electricity on proof of work to be an utter environmental and economic tragedy.
  • I believe ASICs are a serious problem, and that as a result of them Bitcoin has become qualitatively less secure over the past two years.
  • I consider Bitcoin (or any other fixed-supply currency) to be too incorrigibly volatile to ever be a stable unit of account, and believe that the best route to cryptocurrency price stability is by experimenting with intelligently designed flexible monetary policies (ie. NOT “the market” or “the Bitcoin central bank“). However, I am not interested in bringing cryptocurrency monetary policy under any kind of centralized control.
  • I have a substantially more anti-institutional/libertarian/anarchistic mindset than some people, but substantially less so than others (and am incidentally not an Austrian economist). In general, I believe there is value to both sides of the fence, and believe strongly in being diplomatic and working together to make the world a better place.
  • I am not in favor of there being one-currency-to-rule-them-all, in the crypto-economy or anywhere.
  • I think token sales are an awesome tool for decentralized protocol monetization, and that everyone attacking the concept outright is doing a disservice to society by threatening to take away a beautiful thing. However, I do agree that the model as implemented by us and other groups so far has its flaws and we should be actively experimenting with different models that try to align incentives better.
  • I believe futarchy is promising enough to be worth trying, particularly in a blockchain governance context.
  • I consider economics and game theory to be a key part of cryptoeconomic protocol analysis, and consider the primary academic deficit of the cryptocurrency community to be not ignorance of advanced computer science, but rather economics and philosophy. We should reach out to http://lesswrong.com/ more.
  • I see one of the primary reasons why people will adopt decentralized technologies (blockchains, whisper, DHTs) in practice to be the simple fact that software developers are lazy, and do not wish to deal with the complexities of maintaining a centralized website.
  • I consider the blockchain-as-decentralized-autonomous-corporation metaphor to be useful, but limited. Particularly, I believe that we as cryptocurrency developers should be taking advantage of this perhaps brief period in which cryptocurrency is still an idealist-controlled industry to design institutions that maximize utilitarian social welfare metrics, not profit (no, they are not equivalent, primarily because of these).

There are probably very few people who agree with me on every single one of the items above. And it is not just myself that has my own peculiar opinions. As another example, consider the fact that the CTO of OpenTransactions, Chris Odom, says things like this:

What is needed is to replace trusted entities with systems of cryptographic proof. Any entity that you see in the Bitcoin community that you have to trust is going to go away, it’s going to cease to exist … Satoshi’s dream was to eliminate [trusted] entities entirely, either eliminate the risk entirely or distribute the risk in a way that it’s practically eliminated.

Meanwhile, certain others feel the need to say things like this:

Put differently, commercially viable reduced-trust networks do not need to protect the world from platform operators. They will need to protect platform operators from the world for the benefit of the platform’s users.

Of course, if you see the primary benefit of cryptocurrency as being regulation avoidance then that second quote also makes sense, but in a way completely different from the way its original author intended – but that once again only serves to show just how differently people think. Some people see cryptocurrency as a capitalist revolution, others see it as an egalitarian revolution, and others see everything in between. Some see human consensus as a very fragile and corruptible thing and cryptocurrency as a beacon of light that can replace it with hard math; others see cryptocurrency consensus as being only an extension of human consensus, made more efficient with technology. Some consider the best way to achieve cryptoassets with dollar parity to be dual-coin financial derivative schemes; others see the simpler approach as being to use blockchains to represent claims on real-world assets instead (and still others think that Bitcoin will eventually be more stable than the dollar all on its own). Some think that scalability is best done by “scaling up“; others believe the ultimately superior option is “scaling out“.

Of course, many of these issues are inherently political, and some involve public goods; in those cases, live and let live is not always a viable solution. If a particular platform enables negative externalities, or threatens to push society into a suboptimal equilibrium, then you cannot “opt out” simply by using your own platform instead. There, some kind of network-effect-driven or even in extreme cases 51%-attack-driven censure may be necessary. In some cases, the differences are related to private goods, and are primarily simply a matter of empirical beliefs. If I believe that SchellingDollar is the best scheme for price stability, and others prefer Seignorage Shares or NuBits then after a few years or decades one model will prove to work better, replace its competition, and that will be that.

In other cases, however, the differences will be resolved in a different way: it will turn out that the properties of some systems are better suited for some applications, and other systems better suited for other applications, and everything will naturally specialize into those use cases where it works best. As a number of commentators have pointed out, for decentralized consensus applications in the mainstream financial world, banks will likely not be willing to accept a network managed by anonymous nodes; in this case, something like Ripple will be more useful. But for Silk Road 4.0, the exact opposite approach is the only way to go – and for everything in between it’s a cost-benefit analysis all the way. If users want networks specialized to performing specific functions highly efficiently, then networks will exist for that, and if users want a general purpose network with a high network effect between on-chain applications then that will exist as well. As David Johnston points out, blockchains are like programming languages: they each have their own particular properties, and few developers religiously adhere to one language exclusively – rather, we use each one in the specific cases for which it is best suited.

Room for Cooperation

However, as was mentioned earlier, this does not mean that we should simply go our own way and try to ignore – or worse, actively sabotage – each other. Even if all of our projects are necessarily specializing toward different goals, there is nevertheless a substantial opportunity for much less duplication of effort, and more cooperation. This is true on multiple levels. First, let us look at a model of the cryptocurrency ecosystem – or, perhaps, a vision of what it might look like in 1-5 years time:

Ethereum has its own presence on pretty much every level:

  • Consensus: Ethereum blockchain, data-availability Schelling-vote (maybe for Ethereum 2.0)
  • Economics: ether, an independent token, as well as research into stablecoin proposals
  • Blockchain services: name registry
  • Off-chain services: Whisper (messaging), web of trust (in progress)
  • Interop: BTC-to-ether bridge (in progress)
  • Browsers: Mist

Now, consider a few other projects that are trying to build holistic ecosystems of some kind. Bitshares has at the least:

  • Consensus: DPOS
  • Economics: BTSX and BitAssets
  • Blockchain services: BTS decentralized exchange
  • Browsers: Bitshares client (though not quite a browser in the same sense)

Maidsafe has:

  • Consensus: SAFE network
  • Economics: Safecoin
  • Off-chain services: Distributed hash table, Maidsafe Drive

BitTorrent has announced their plans for Maelstrom, a project intended to serve a rather similar function to Mist, albeit showcasing their own (not blockchain-based) technology. Cryptocurrency projects generally all build a blockchain, a currency and a client of their own, although forking a single client is common for the less innovative cases. Name registration and identity management systems are now a dime a dozen. And, of course, just about every project realizes that it has a need for some kind of reputation and web of trust.

Now, let us paint a picture of an alternative world. Instead of having a collection of cleanly disjoint vertically integrated ecosystems, with each one building its own components for everything, imagine a world where Mist could be used to access Ethereum, Bitshares, Maidsafe or any other major decentralized infrastructure network, with new decentralized networks being installable much like the plugins for Flash and Java inside of Chrome and Firefox. Imagine that the reputation data in the web of trust for Ethereum could be reused in other projects as well. Imagine StorJ running inside of Maelstrom as a dapp, using Maidsafe for a file storage backend, and using the Ethereum blockchain to maintain the contracts that incentivize continued storage and downloading. Imagine identities being automatically transferrable across any crypto-networks, as long as they use the same underlying cryptographic algorithms (eg. ECDSA + SHA3).

The key insight here is this: although some of the layers in the ecosystem are inextricably linked – for example, a single dapp will often correspond to a single specific service on the Ethereum blockchain – in many cases the layers can easily be designed to be much more modular, allowing each product on each layer to compete separately on its own merits. Browsers are perhaps the most separable component; most reasonably holistic lower level blockchain service sets have similar needs in terms of what applications can run on them, and so it makes sense for each browser to support each platform. Off-chain services are also a target for abstraction; any decentralized application, regardless of what blockchain technology it uses, should be free to use Whisper, Swarm, IPFS or any other service that developers come up with. On-chain services, like data provision, can theoretically be built so as to interact with multiple chains.

Additionally, there are plenty of opportunities to collaborate on fundamental research and development. Discussion on proof of work, proof of stake, stable currency systems and scalability, as well as other hard problems of cryptoeconomics can easily be substantially more open, so that the various projects can benefit from and be more aware of each other’s developments. Basic algorithms and best practices related to networking layers, cryptographic algorithm implementations and other low-level components can, and should, be shared. Interoperability technologies should be developed to facilitate easy exchange and interaction between services and decentralized entities on one platform and another. The Cryptocurrency Research Group is one initiative that we plan to initially support, with the hope that it will grow to flourish independently of ourselves, with the goal of promoting this kind of cooperation. Other formal and informal institutions can doubtlessly help support the process.

Hopefully, in the future we will see many more projects existing in a much more modular fashion, living on only one or two layers of the cryptocurrency ecosystem and providing a common interface allowing any mechanism on any other layer to work with them. If the cryptocurrency space goes far enough, then even Firefox and Chrome may end up adapting themselves to process decentralized application protocols as well. A journey toward such an ecosystem is not something that needs to be rushed immediately; at this point, we have quite little idea of what kinds of blockchain-driven services people will be using in the first place, making it hard to determine exactly what kind of interoperability would actually be useful. However, things slowly but surely are taking their first few steps in that direction; Eris’s Decerver, their own “browser” into the decentralized world, supports access to Bitcoin, Ethereum, their own Thelonious blockchains as well as an IPFS content hosting network.

There is room for many projects that are currently in the crypto 2.0 space to succeed, and so having a winner-take-all mentality at this point is completely unnecessary and harmful. All that we need to do right now to set off the journey on a better road is to live with the assumption that we are all building our own platforms, tuned to our own particular set of preferences and parameters, but at the end of the day a plurality of networks will succeed and we will need to live with that reality, so might as well start preparing for it now.

Happy new year, and looking forward to an exciting 2015 007 Anno Satoshii.



The crypto 2.0 industry has been making strong progress in the past year developing blockchain technology, including the formalization and in some cases realization of proof of stake designs like Slasher and DPOS, various forms of scalable blockchain algorithms, blockchains using “leader-free consensus” mechanisms derived from traditional Byzantine fault tolerance theory, as well as economic ingredients like Schelling consensus schemes and stable currencies. All of these technologies remedy key deficiencies of the blockchain design with respect to centralized servers: scalability knocks down size limits and transaction costs, leader-free consensus reduces many forms of exploitability, stronger PoS consensus algorithms reduce consensus costs and improve security, and Schelling consensus allows blockchains to be “aware” of real-world data. However, there is one piece of the puzzle that all approaches so far have not yet managed to crack: privacy.

Currency, Dapps and Privacy

Bitcoin brings to its users a rather unique set of tradeoffs with respect to financial privacy. Although Bitcoin does a substantially better job than any system that came before it at protecting the physical identities behind each of its accounts – better than fiat and banking infrastructure because it requires no identity registration, and better than cash because it can be combined with Tor to completely hide physical location – the presence of the Bitcoin blockchain means that the actual transactions made by the accounts are more public than ever – neither the US government, nor China, nor the thirteen year old hacker down the street even need so much as a warrant in order to determine exactly which account sent how much BTC to which destination at what particular time. In general, these two forces pull Bitcoin in opposite directions, and it is not entirely clear which one dominates.

With Ethereum, the situation is similar in theory, but in practice it is rather different. Bitcoin is a blockchain intended for currency, and currency is inherently a very fungible thing. There exist techniques like merge avoidance which allow users to essentially pretend to be 100 separate accounts, with their wallet managing the separation in the background. Coinjoin can be used to “mix” funds in a decentralized way, and centralized mixers are a good option too especially if one chains many of them together. Ethereum, on the other hand, is intended to store intermediate state of any kind of processes or relationships, and unfortunately it is the case that many processes or relationships that are substantially more complex than money are inherently “account-based”, and large costs would be incurred by trying to obfuscate one’s activities via multiple accounts. Hence, Ethereum, as it stands today, will in many cases inherit the transparency side of blockchain technology much more so than the privacy side (although those interested in using Ethereum for currency can certainly build higher-privacy cash protocols inside of subcurrencies).

Now, the question is, what if there are cases where people really want privacy, but a Diaspora-style self-hosting-based solution or a Zerocash-style zero-knowledge-proof strategy is for whatever reason impossible – for example, because we want to perform calculations that involve aggregating multiple users’ private data? Even if we solve scalability and blockchain data assets, will the lack of privacy inherent to blockchains mean that we simply have to go back to trusting centralized servers? Or can we come up with a protocol that offers the best of both worlds: a blockchain-like system which offers decentralized control not just over the right to update the state, but even over the right to access the information at all?

As it turns out, such a system is well within the realm of possibility, and was even conceptualized by Nick Szabo in 1998 under the moniker of “God protocols” (though, as Nick Szabo pointed out, we should not use that term for the protocols that we are about to describe here as God is generally assumed or even defined to be Pareto-superior to everything else and as we’ll soon see these protocols are very far from that); but now with the advent of Bitcoin-style cryptoeconomic technology the development of such a protocol may for the first time actually be viable. What is this protocol? To give it a reasonably technically accurate but still understandable term, we’ll call it a “secret sharing DAO”.

Fundamentals: Secret Sharing

Secret computation networks rely on two fundamental primitives to store information in a decentralized way. The first is secret sharing. Secret sharing essentially allows data to be stored in a decentralized way across N parties such that any K parties can work together to reconstruct the data, but K-1 parties cannot recover any information at all. N and K can be set to any values desired; all it takes is a few simple parameter tweaks in the algorithm.

The simplest way to mathematically describe secret sharing is as follows. We know that two points make a line:



So, to implement 2-of-N secret sharing, we take our secret S, generate a random slope m, and create the line y = mx + S. We then give the N parties the points on the line (1, m + S), (2, 2m + S), (3, 3m + S), etc. Any two of them can reconstruct the line and recover the original secret, but one person can do nothing; if you receive the point (4, 12), that could be from the line y = 2x + 4, or y = -10x + 52, or y = 305445x - 1221768. To implement 3-of-N secret sharing, we just make a parabola instead, and give people points on the parabola:

Parabolas have the property that any three points on a parabola can be used to reconstruct the parabola (and no one or two points suffice), so essentially the same process applies. And, more generally, to implement K-of-N secret sharing, we use a degree K-1 polynomial in the same way. There is a set of algorithms for recovering the polynomial from a sufficient set of points in all such cases; they are described in more detail in our earlier article on erasure coding.
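A minimal K-of-N implementation over a prime field, using Lagrange interpolation at zero for recovery (a sketch only; a real deployment would add the verifiable secret sharing layer mentioned later):

```python
import random

P = 2**127 - 1  # a prime; all arithmetic is over the field of integers mod P

def make_shares(secret: int, k: int, n: int) -> list:
    # Random degree-(k-1) polynomial whose constant term is the secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list) -> int:
    # Lagrange interpolation at x = 0 from any k of the n points
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(42, k=3, n=5)
assert recover(shares[:3]) == 42  # any 3 of the 5 shares recover the secret
```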

This is how the secret sharing DAO will store data. Instead of every participating node in the consensus storing a copy of the full system state, every participating node in the consensus will store a set of shares of the state – points on polynomials, one point on a different polynomial for each variable that makes up part of the state.

Fundamentals: Computation

Now, how does the secret sharing DAO do computation? For this, we use a set of algorithms called secure multiparty computation (SMPC). The basic principle behind SMPC is that there exist ways to take data which is split among N parties using secret sharing, perform computations on it in a decentralized way, and end up with the result secret-shared between the parties, all without ever reconstituting any of the data on a single device.

SMPC with addition is easy. To see how, let’s go back to the two-points-make-a-line example, but now let’s have two lines:



Suppose that the x=1 point of both lines A and B is stored by computer P[1], the x=2 point is stored by computer P[2], etc. Now, suppose that P[1] computes a new value, C(1) = A(1) + B(1), and P[2] computes C(2) = A(2) + B(2). Now, let’s draw a line through those two points:



So we have a new line, C, such that C = A + B at points x=1 and x=2. However, the interesting thing is, this new line is actually equal to A + B on every point:



Thus, we have a rule: sums of secret shares (at the same x coordinate) are secret shares of the sum. Using this principle (which also applies to higher dimensions), we can convert secret shares of a and secret shares of b into secret shares of a+b, all without ever reconstituting a and b themselves. Multiplication by a known constant value works the same way: k times the ith secret share of a is equal to the ith secret share of a*k.
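Reusing the make_shares and recover helpers from the sketch above, the homomorphism is easy to check:

```python
a_shares = make_shares(20, k=3, n=5)
b_shares = make_shares(22, k=3, n=5)

# Each party adds its own two shares locally; a and b are never reconstructed
c_shares = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert recover(c_shares[:3]) == 42  # shares of the sum 20 + 22

# Multiplying every share by a known constant gives shares of a*k
d_shares = [(x, y * 10 % P) for x, y in a_shares]
assert recover(d_shares[:3]) == 200
```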

Multiplication of two secret shared values, unfortunately, is much more involved. The approach will take several steps to explain, and because it is fairly complicated in any case it’s worth simply doing for arbitrary polynomials right away. Here’s the magic. First, suppose that there exist values a and b, secret shared among parties P[1] ... P[n], where a[i] represents the ith share of a (and same for b[i] and b). We start off like this:



Now, one option that you might think of is, if we can just make a new polynomial c = a + b by having every party store c[i] = a[i] + b[i], can’t we do the same for multiplication as well? The answer is, surprisingly, yes, but with a serious problem: the new polynomial has a degree twice as large as the original. For example, if the original polynomials were y = x + 5 and y = 2x - 3, the product would be y = 2x^2 + 7x - 15. Hence, if we do multiplication more than once, the polynomial would become too big for the group of N to store.

To avoid this problem, we perform a sort of rebasing protocol where we convert the shares of the larger polynomial into shares of a polynomial of the original degree. The way it works is as follows. First, party P[i] generates a new random polynomial, of the same degree as a and b, which evaluates to c[i] = a[i]*b[i] at zero, and distributes points along that polynomial (ie. shares of c[i]) to all parties.



Thus, P[j] now has c[i][j] for all i. Given this, P[j] calculates c[j] as a linear combination of the c[i][j], weighted by the Lagrange coefficients that evaluate the higher-degree product polynomial at zero (recall that a*b, the value of that polynomial at zero, is exactly such a linear combination of the products a[i]*b[i]). The result is that everyone has secret shares of c, on a polynomial with the same degree as a and b.
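
Here is a runnable Python sketch of that degree-reduction step, under toy parameters (the helper names and the field size are illustrative assumptions):

import random

P = 2**31 - 1
N, T = 5, 2                      # 5 parties, degree-2 (3-of-5) sharing
XS = list(range(1, N + 1))       # party i holds the evaluations at x = i

def share(secret, deg):
    coefs = [secret] + [random.randrange(P) for _ in range(deg)]
    return [sum(c * pow(x, e, P) for e, c in enumerate(coefs)) % P for x in XS]

def lagrange_at_zero(xs):
    # weights L[i] such that f(0) = sum of L[i] * f(xs[i]) for deg(f) < len(xs)
    out = []
    for xi in xs:
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        out.append(num * pow(den, P - 2, P) % P)
    return out

a_sh, b_sh = share(7, T), share(9, T)
# step 1: each party multiplies locally, giving a share of a*b on a degree-2T polynomial
# step 2: each party reshares its product with a fresh degree-T polynomial
resh = [share(a_sh[i] * b_sh[i] % P, T) for i in range(N)]
# step 3: party j combines the pieces it received with the Lagrange weights,
# obtaining a degree-T share of a*b (2T+1 = 5 points are enough here)
L = lagrange_at_zero(XS)
c_sh = [sum(L[i] * resh[i][j] for i in range(N)) % P for j in range(N)]
assert sum(L[j] * c_sh[j] for j in range(N)) % P == 63   # 7 * 9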



To do this, we used a clever trick of secret sharing: because the secret sharing math itself involves nothing more than additions and multiplications by known constants, the two layers of secret sharing are commutative: if we apply secret sharing layer A and then layer B, then we can take layer A off first and still be protected by layer B. This allows us to move from a higher-degree polynomial to a lower degree polynomial but avoid revealing the values in the middle – instead, the middle step involved both layers being applied at the same time.

With addition and multiplication over 0 and 1, we have the ability to run arbitrary circuits inside of the SMPC mechanism. We can define:

  • AND(a, b) = a * b
  • OR(a, b) = a + b - a * b
  • XOR(a, b) = a + b - 2 * a * b
  • NOT(a) = 1 - a
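
As a quick sanity check, these encodings can be verified over all 0/1 inputs in plain Python (inside the SMPC itself, each product would of course cost one multiplication round):

for a in (0, 1):
    for b in (0, 1):
        assert a * b == (a and b)             # AND
        assert a + b - a * b == (a or b)      # OR
        assert a + b - 2 * a * b == (a ^ b)   # XOR
    assert 1 - a == (not a)                   # NOT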

Hence, we can run whatever programs we want, although with one key limitation: we can’t do secret conditional branching. That is, if we had a computation if (x == 5) <do A> else <do B> then the nodes would need to know whether they are computing branch A or branch B, so we would need to reveal x midway through.

There are two ways around this problem. First, we can use multiplication as a “poor man’s if” – replace something like if (x == 5) <y = 7> with y = (x == 5) * 7 + (x != 5) * y, using either circuits or clever protocols that implement equality checking through repeated multiplication (eg. if we are in a finite field we can check if a == b by using Fermat’s little theorem on a-b). Second, as we will see, if we implement if statements inside the EVM, and run the EVM inside SMPC, then we can resolve the problem, leaking only the information of how many steps the EVM took before computation exited (and if we really care, we can reduce the information leakage further, eg. round the number of steps to the nearest power of two, at some cost to efficiency).
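
For instance, here is a plaintext sketch of the equality check and the “poor man’s if” (the prime is a toy value; inside SMPC the exponentiation in eq would itself be built out of repeated share multiplications):

P = 101  # a small prime field for illustration

def eq(a, b):
    # Fermat's little theorem: (a-b)^(P-1) mod P is 1 unless a == b, in which case it is 0
    return (1 - pow(a - b, P - 1, P)) % P

def cond_set(x, y):
    # branchless form of: if x == 5: y = 7
    e = eq(x, 5)
    return (e * 7 + (1 - e) * y) % P

assert cond_set(5, 3) == 7
assert cond_set(6, 3) == 3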

The secret-sharing based protocol described above is only one way to do relatively simple SMPC; there are other approaches, and to achieve security there is also a need to add a verifiable secret sharing layer on top, but that is beyond the scope of this article – the above description is simply meant to show how a minimal implementation is possible.

Building a Currency

Now that we have a rough idea of how SMPC works, how would we use it to build a decentralized currency engine? The general way that a blockchain is usually described in this blog is as a system that maintains a state, S, accepts transactions, agrees on which transactions should be processed at a given time and computes a state transition function APPLY(S, TX) -> S' OR INVALID. Here, we will say that all transactions are valid, and if a transaction TX is invalid then we simply have APPLY(S, TX) = S.

Now, since the blockchain is not transparent, we might expect the need for two kinds of transactions that users can send into the SMPC: get requests, asking for some specific information about an account in the current state, and update requests, containing transactions to apply onto the state. We’ll implement the rule that each account can only ask for balance and nonce information about itself, and can withdraw only from itself. We define the two types of requests as follows:

SEND: [from_pubkey, from_id, to, value, nonce, sig]
GET: [from_pubkey, from_id, sig]

The database is stored among the N nodes in the following format:

Essentially, the database is stored as a set of 3-tuples representing accounts, where each 3-tuple stores the owning pubkey, nonce and balance. To send a request, a node constructs the transaction, splits it off into secret shares, generates a random request ID and attaches the ID and a small amount of proof of work to each share. The proof of work is there because some anti-spam mechanism is necessary, and because account balances are private there is no way to tell whether the sending account has enough funds to pay a transaction fee. The nodes then independently verify the shares of the signature against the share of the public key supplied in the transaction (there are signature algorithms that allow you to do this kind of per-share verification; Schnorr signatures are one major category). If a given node sees an invalid share (due to bad proof of work or a bad signature), it rejects it; otherwise, it accepts it.
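
A minimal sketch of what constructing such a request might look like on the client side; the packet format, proof-of-work target and helper names are all illustrative assumptions, and signature handling is omitted:

import hashlib, os, random

P = 2**31 - 1
N, T = 5, 2
XS = list(range(1, N + 1))

def share(value):
    # standard Shamir sharing of one field element, as described earlier
    coefs = [value] + [random.randrange(P) for _ in range(T)]
    return [sum(c * pow(x, e, P) for e, c in enumerate(coefs)) % P for x in XS]

def pow_nonce(data, target=2**240):
    # hashcash-style anti-spam stamp: find a nonce whose hash falls below a target
    nonce = 0
    while int.from_bytes(hashlib.sha256(data + nonce.to_bytes(8, 'big')).digest(), 'big') >= target:
        nonce += 1
    return nonce

def make_request(fields):
    request_id = os.urandom(8).hex()
    per_node = zip(*[share(v) for v in fields])   # one share of every field per node
    packets = []
    for node, node_shares in zip(XS, per_node):
        body = repr((request_id, node, node_shares)).encode()
        packets.append((request_id, node, node_shares, pow_nonce(body)))
    return packets

# a SEND, with every field encoded as a field element (sig omitted for brevity)
packets = make_request([1234, 0, 1, 50, 7])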

Transactions that are accepted are not processed immediately, much like in a blockchain architecture; at first, they are kept in a memory pool. At the end of every 12 seconds, we use some consensus algorithm – it could be something simple, like a random node from the N deciding as a dictator, or an advanced neo-BFT algorithm like that used by Pebble – to agree on which set of request IDs to process and in which order (for simplicity, simple alphabetical order will probably suffice).

Now, to fulfill a GET request, the SMPC will compute and reconstitute the output of the following computation:

owner_pubkey = R[0] * (from_id == 0) + R[3] * (from_id == 1) + ... + R[3*n] * (from_id == n)
valid = (owner_pubkey == from_pubkey)
output = valid * (R[2] * (from_id == 0) + R[5] * (from_id == 1) + ... + R[3*n + 2] * (from_id == n))

So what does this formula do? It consists of three stages. First, we extract the owner pubkey of the account that the request is trying to get the balance of. Because the computation is done inside of an SMPC, and so no node actually knows what database index to access, we do this by simply taking all the database indices, multiplying the irrelevant ones by zero and taking the sum. Then, we check that the request is trying to get data from an account which it actually owns (remember that we checked the validity of from_pubkey against the signature in the first step, so here we just need to check the account ID against the from_pubkey). Finally, we use the same database-reading primitive to get the balance, and multiply the balance by the validity to get the result (ie. invalid requests return a balance of 0, valid ones return the actual balance).
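
On plaintext values, the read primitive is just the following (in the real system R and from_id are secret-shared, and each equality check is itself an SMPC computation):

# database as a flat list of (pubkey, nonce, balance) triples
R = [11, 0, 500,   # account 0
     22, 3, 120,   # account 1
     33, 1, 777]   # account 2
n = len(R) // 3

def oblivious_read(offset, from_id):
    # touch every cell, so no observer learns which account was read
    return sum(R[3 * i + offset] * (1 if i == from_id else 0) for i in range(n))

owner_pubkey = oblivious_read(0, 1)   # -> 22
balance = oblivious_read(2, 1)        # -> 120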

Now, let’s look at the execution of a SEND. First, we compute the validity predicate, consisting of checking that (1) the public key of the targeted account is correct, (2) the nonce is correct, and (3) the account has enough funds to send. Note that to do this we once again need to use the “multiply by an equality check and add” protocol, but for brevity we will abbreviate R[0] * (x == 0) + R[3] * (x == 1) + ... with R[x * 3].

valid = (R[from_id * 3] == from_pubkey) * (R[from_id * 3 + 1] == nonce) * (R[from_id * 3 + 2] >= value) 

We then do:

R[from_id * 3 + 2] -= value * valid
R[from_id * 3 + 1] += valid
R[to * 3 + 2] += value * valid

For updating the database, R[x * 3] += y expands to the set of instructions R[0] += y * (x == 0), R[3] += y * (x == 1) .... Note that all of these can be parallelized. Also, note that to implement balance checking we used the >= operator. This is once again trivial using boolean logic gates, but even if we use a finite field for efficiency there do exist some clever tricks for performing the check using nothing but additions and multiplications.
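
The corresponding write primitive, again sketched on plaintext values for clarity (the helper is hypothetical; in the SMPC every += runs over shares):

def oblivious_add(R, offset, acct_id, y):
    # every cell is written; the irrelevant ones just get y * 0 added
    for i in range(len(R) // 3):
        R[3 * i + offset] += y * (1 if i == acct_id else 0)

R = [11, 0, 500, 22, 3, 120]
valid, value = 1, 50
oblivious_add(R, 2, 1, -value * valid)   # debit the sender's balance
oblivious_add(R, 1, 1, valid)            # bump the sender's nonce
assert R == [11, 0, 500, 22, 4, 70]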

In all of the above we saw two fundamental limitations in efficiency in the SMPC architecture. First, reading and writing to a database has an O(n) cost as you pretty much have to read and write every cell. Doing anything less would mean exposing to individual nodes which subset of the database a read or write was from, opening up the possibility of statistical memory leaks. Second, every multiplication requires a network message, so the fundamental bottleneck here is not computation or memory but latency. Because of this, we can already see that secret sharing networks are unfortunately not God protocols; they can do business logic just fine, but they will never be able to do anything more complicated – even crypto verifications, with the exception of a select few crypto verifications specifically tailored to the platform, are in many cases too expensive.

From Currency to EVM

Now, the next problem is, how do we go from this simple toy currency to a generic EVM processor? Well, let us examine the code for the virtual machine inside a single transaction environment. A simplified version of the function looks roughly as follows:

def run_evm(block, tx, msg, code):
    pc = 0
    gas = msg.gas
    stack = []
    stack_size = 0
    exit = 0
    while 1:
        op = code[pc]
        gas -= 1
        if gas < 0 or stack_size < get_stack_req(op):
            exit = 1
        if op == ADD:
            x = stack[stack_size]
            y = stack[stack_size - 1]
            stack[stack_size - 1] = x + y
            stack_size -= 1
        if op == SUB:
            x = stack[stack_size]
            y = stack[stack_size - 1]
            stack[stack_size - 1] = x - y
            stack_size -= 1
        ...
        if op == JUMP:
            pc = stack[stack_size]
            stack_size -= 1
        ...

The variables involved are:

  • The code
  • The stack
  • The memory
  • The account state
  • The program counter

Hence, we can simply store these as records, and for every computational step run a function similar to the following:

op = code[pc] * alive + 256 * (1 - alive)
gas -= 1

# one block of candidate outputs per possible opcode; this one adds the top two stack items
stack_p1[0] = 0
stack_p0[0] = 0
stack_n1[0] = stack[stack_size] + stack[stack_size - 1]
stack_sz[0] = stack_size - 1
new_pc[0] = pc + 1

# this one multiplies them
stack_p1[1] = 0
stack_p0[1] = 0
stack_n1[1] = stack[stack_size] * stack[stack_size - 1]
stack_sz[1] = stack_size - 1
new_pc[1] = pc + 1
...
# a jump: the new program counter comes off the stack
stack_p1[86] = 0
stack_p0[86] = 0
stack_n1[86] = stack[stack_size - 1]
stack_sz[86] = stack_size - 1
new_pc[86] = stack[stack_size]
...
# entry 256: the do-nothing "opcode" selected once the VM is no longer alive
stack_p1[256] = 0
stack_p0[256] = 0
stack_n1[256] = 0
stack_sz[256] = 0
new_pc[256] = 0

pc = new_pc[op]
stack[stack_size + 1] = stack_p1[op]
stack[stack_size] = stack_p0[op]
stack[stack_size - 1] = stack_n1[op]
stack_size = stack_sz[op]
alive *= (gas > 0) * (stack_size > 0)

Essentially, we compute the result of every single opcode in parallel, and then pick the correct one to update the state. The alive variable starts off at 1, and if the alive variable at any point switches to zero, then all operations from that point simply do nothing. This seems horrendously inefficient, and it is, but remember: the bottleneck is not computation time but latency. Everything above can be parallelized. In fact, the astute reader may even notice that the entire process of running every opcode in parallel has only O(n) complexity in the number of opcodes (particularly if you pre-grab the top few items of the stack into specified variables for input as well as output, which we did not do for brevity), so it is not even the most computationally intensive part (if there are more accounts or storage slots than opcodes, which seems likely, the database updates are). At the end of every N steps (or for even less information leakage every power of two of steps) we reconstitute the alive variable and if we see that alive = 0 then we halt.

In an EVM with many participants, the database will likely be the largest overhead. To mitigate this problem, there are likely clever information leakage tradeoffs that can be made. For example, we already know that most of the time code is read from sequential database indices. Hence, one approach might be to store the code as a sequence of large numbers, each large number encoding many opcodes, and then use bit decomposition protocols to read off individual opcodes from a number once we load it. There are also likely many ways to make the virtual machine fundamentally much more efficient; the above is meant, once again, as a proof of concept to show how a secret sharing DAO is fundamentally possible, not anything close to an optimal implementation. Additionally, we can look into architectures similar to the ones used in scalability 2.0 techniques to highly compartmentalize the state to further increase efficiency.
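
For example, here is a sketch of the packing itself (CHUNK and the helper names are hypothetical; inside the SMPC, read_op’s byte extraction would be replaced by a bit-decomposition protocol over shares):

CHUNK = 16  # one-byte opcodes packed per stored word

def pack_code(ops):
    words = []
    for i in range(0, len(ops), CHUNK):
        word = 0
        for j, op in enumerate(ops[i:i + CHUNK]):
            word |= op << (8 * j)
        words.append(word)
    return words

def read_op(words, pc):
    # one (expensive) oblivious database access covers CHUNK sequential opcodes
    return (words[pc // CHUNK] >> (8 * (pc % CHUNK))) & 0xFF

code = [0x60, 0x05, 0x60, 0x03, 0x01]   # PUSH1 5, PUSH1 3, ADD
words = pack_code(code)
assert [read_op(words, i) for i in range(len(code))] == code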

Updating the N

The SMPC mechanism described above assumes a fixed set of N parties, and aims to be secure against any minority of them (or in some designs any minority smaller than 1/4 or 1/3) colluding. However, blockchain protocols need to theoretically last forever, and so stagnant economic sets do not work; rather, we need to select the consensus participants using some mechanism like proof of stake. To do this, an example protocol would work as follows:

  1. The secret sharing DAO's time is divided into "epochs", each perhaps somewhere between an hour and a week long.
  2. During the first epoch, the participants are set to be the top N participants during the genesis sale.
  3. At the end of an epoch, anyone has the ability to sign up to be one of the participants in the next round by putting down a deposit. N participants are randomly chosen, and revealed.
  4. A "decentralized handoff protocol" is carried out, where the old N participants simultaneously split their shares among the new N, and each of the new N reconstitutes their share from the pieces that they received - essentially, the exact same resharing protocol as was used for multiplication. Note that this protocol can also be used to increase or decrease the number of participants.

All of the above handles decentralization assuming honest participants; but in a cryptocurrency protocol we also need incentives. To accomplish that, we use a set of primitives called verifiable secret sharing, which allow us to determine whether a given node was acting honestly throughout the secret sharing process. Essentially, this process works by doing the secret sharing math in parallel on two different levels: using integers, and using elliptic curve points (other constructions also exist, but because cryptocurrency users are most familiar with the secp256k1 elliptic curve we'll use that). Elliptic curve points are convenient because they have a commutative and associative addition operator - in essence, they are magic objects which can be added and subtracted much like numbers can. You can convert a number into a point, but not a point into a number, and we have the property that number_to_point(A + B) = number_to_point(A) + number_to_point(B). By doing the secret sharing math on the number level and the elliptic curve point level at the same time, and publicizing the elliptic curve points, it becomes possible to verify malfeasance. For efficiency, we can probably use a Schellingcoin-style protocol to allow nodes to punish other nodes that are malfeasant.
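
To illustrate, here is a Feldman-style verifiable sharing sketch in Python. For brevity it substitutes a multiplicative group mod a small prime for secp256k1, so number_to_point(x) is g^x and "adding points" corresponds to multiplying group elements; all parameters are toy assumptions:

import random

Q = 2039   # group modulus, a safe prime: Q = 2*P + 1
P = 1019   # prime order of the subgroup; shares and coefficients live mod P
G = 4      # generator of the order-P subgroup (a nontrivial square mod Q)

def number_to_point(x):
    # stand-in for the elliptic curve map; the homomorphism here is
    # number_to_point(a + b) == number_to_point(a) * number_to_point(b) (mod Q)
    return pow(G, x, Q)

def share_with_commitments(secret, k, n):
    coefs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = [(x, sum(c * pow(x, e, P) for e, c in enumerate(coefs)) % P)
              for x in range(1, n + 1)]
    commitments = [number_to_point(c) for c in coefs]   # published to everyone
    return shares, commitments

def verify_share(x, y, commitments):
    # g^f(x) must equal the product over e of commitments[e]^(x^e)
    expected = 1
    for e, C in enumerate(commitments):
        expected = expected * pow(C, pow(x, e, P), Q) % Q
    return pow(G, y, Q) == expected

shares, comm = share_with_commitments(secret=123, k=3, n=5)
assert all(verify_share(x, y, comm) for x, y in shares)
assert not verify_share(1, (shares[0][1] + 1) % P, comm)   # tampering is caught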

Applications

So, what do we have? If the blockchain is a decentralized computer, a secret sharing DAO is a decentralized computer with privacy. The secret sharing DAO pays dearly for this extra property: a network message is required per multiplication and per database access. As a result, gas costs are likely to be much higher than in Ethereum proper, limiting the computation to only relatively simple business logic, and barring the use of most kinds of cryptographic calculations. Scalability technology may be used to partially offset this weakness, but ultimately there is a limit to how far you can get. Hence, this technology will probably not be used for every use case; instead, it will operate more like a special-purpose kernel that will only be employed for specific kinds of decentralized applications. Some examples include:

  • Medical records - keeping the data on a private decentralized platform can potentially open the door for an easy-to-use and secure health information system that keeps patients in control of their data. Particularly, note that proprietary diagnosis algorithms could run inside the secret sharing DAO, allowing medical diagnosis as a service based on data from separate medical checkup firms without running the risk that they will intentionally or unintentionally expose your private details to insurers, advertisers or other firms.
  • Private key escrow - a decentralized M-of-N alternative to centralized password recovery; could be used for financial or non-financial applications
  • Multisig for anything - even systems that do not natively support arbitrary access policies, or even M-of-N multisignature access, now will: as long as they support cryptography, you can stick the private key inside of a secret sharing DAO.
  • Reputation systems - what if reputation scores were stored inside a secret sharing DAO so you could privately assign reputation to other users, and have your assignment count towards the total reputation of that user, without anyone being able to see your individual assignments?
  • Private financial systems - secret sharing DAOs could provide an alternative route to Zerocash-style fully anonymous currency, except that here the functionality could be much more easily extended to decentralized exchange and more complex smart contracts. Business users may want to leverage some of the benefits of running their company on top of crypto without necessarily exposing every single one of their internal business processes to the general public.
  • Matchmaking algorithms - find employers, employees, dating partners, drivers for your next ride on Decentralized Uber, etc, but doing the matchmaking algorithm computations inside of SMPC so that no one sees any information about you unless the algorithm determines that you are a perfect match.

Essentially, one can think of SMPC as offering a set of tools roughly similar to that which it has been theorized would be offered by cryptographically secure code obfuscation, except with one key difference: it actually works on human-practical time scales.

Further Consequences

Aside from the applications above, what else will secret sharing DAOs bring? Particularly, is there anything to worry about? As it turns out, just like with blockchains themselves, there are a few concerns. The first, and most obvious, issue is that secret sharing DAOs will substantially increase the scope of applications that can be carried out in a completely private fashion. Many advocates of blockchain technology often base a large part of their argument on the key point that while blockchain-based currencies offer an unprecedented amount of anonymity in the sense of not linking addresses to individual identities, they are at the same time the most public form of currency in the world because every transaction is located on a shared ledger. Here, however, the first part remains, but the second part disappears completely. What we have left is essentially total anonymity.

If it turns out to be the case that this level of anonymity allows for a much higher degree of criminal activity, and the public is not happy with the tradeoff that the technology brings, then we can predict that governments and other institutions in general, perhaps even alongside volunteer vigilante hackers, will try their best to take these systems down, and perhaps they would even be justified. Fortunately for these attackers, however, secret sharing DAOs do have an inevitable backdoor: the 51% attack. If 51% of the maintainers of a secret sharing DAO at some particular time decide to collude, then they can uncover any of the data that is under their supervision. Furthermore, this power has no statute of limitations: if a set of entities who formed over half of the maintaining set of a secret sharing DAO at some point many years ago collude, then even then the group would be able to unearth the information from that point in time. In short, if society is overwhelmingly opposed to something being done inside of a secret sharing DAO, there will be plenty of opportunity for the operators to collude to stop or reveal what's going on.

A second, and subtler, issue is that the concept of secret sharing DAOs drives a stake through a cherished fact of cryptoeconomics: that private keys are not securely tradeable. Many protocols explicitly, or implicitly, rely on this idea, including non-outsourceable proof of work puzzles, Vlad Zamfir and Pavel Kravchenko's proof of custody, economic protocols that use private keys as identities, any kind of economic status that aims to be untradeable, etc. Online voting systems often have the requirement that it should be impossible to prove that you voted with a particular key, so as to prevent vote selling; with secret sharing DAOs, the problem is that now you actually can sell your vote, rather simply: by putting your private key into a contract inside of a secret sharing DAO, and renting out access.

The consequences of this ability to sell private keys are quite far reaching - in fact, they go so far as to almost threaten the security of the strongest available system underlying blockchain security: proof of stake. The potential concern is this: proof of stake derives its security from the fact that users have security deposits on the blockchain, and these deposits can potentially be taken away if the user misacts in some fashion (double-voting, voting for a fork, not voting at all, etc). Here, private keys become tradeable, and so security deposits become tradeable as well. We must ask the question: does this compromise proof of stake?

Fortunately, the answer is no. First of all, there are strong lemon-theoretic arguments for why no one would actually want to sell their deposit. If you have a deposit of $10, to you that's worth $10 minus the tiny probability that you will get hacked. But if you try to sell that deposit to someone else, they will have a deposit which is worth $10, unless you decide to use your private key to double-vote and thus destroy the deposit. Hence, from their point of view, there is a constant overhanging risk that you will act to take their deposit away, and you personally have no incentive not to do that. The very fact that you are trying to sell off your deposit should make them suspicious. Hence, from their point of view, your deposit might only be worth, say, $8. You have no reason to sacrifice $10 for $8, so as a rational actor you will keep the deposit to yourself.

Second, if the private key was in the secret sharing DAO right from the start, then by transferring access to the key you would personally lose access to it, so you would actually transfer the authority and the liability at the same time - from an economic standpoint, the effect on the system would be exactly the same as if one of the deposit holders simply had a change of personality at some point during the process. In fact, secret sharing DAOs may even improve proof of stake, by providing a more secure platform for users to participate in decentralized stake pools even in protocols like Tendermint, which do not natively support such functionality.

There are also other reasons why the theoretical attacks that secret sharing DAOs make possible may in fact fail in practice. To take one example, consider the case of non-outsourceable puzzles, computational problems which try to prove ownership of a private key and a piece of data at the same time. One kind of implementation of a non-outsourceable puzzle, used by Permacoin, involves a computation which needs to "bounce" back and forth between the key and the data hundreds of thousands of times. This is easy to do if you have the two pieces of data on the same piece of hardware, but becomes prohibitively slow if the two are separated by a network connection - and over a secret sharing DAO it would be nearly impossible due to the inefficiencies. As a result, one possible conclusion of all this is that secret sharing DAOs will lead to the standardization of a signature scheme which requires several hundred million rounds of computation - preferably with lots and lots of serial multiplication - to compute, at which point every computer, phone or internet-of-things microchip would have a built-in ASIC to do it trivially, secret sharing DAOs would be left in the dust, and we would all move on with our lives.

How Far Away?

So what is left before secret sharing DAO technology can go mainstream? In short, quite a bit, but not too much. First, there is certainly a moderate amount of technical engineering involved, at least on the protocol level. Someone needs to formalize an SMPC implementation, together with how it would be combined with an EVM implementation, probably with many restrictions for efficiency (eg. hash functions inside of SMPC are very expensive, so Merkle tree storage may disappear in favor of every contract having a finite number of storage slots), a punishment, incentive and consensus framework and a hypercube-style scalability framework, and then release the protocol specification. From that point, it's a few months of development in Python (Python should be fine, as by far the primary bottleneck will be network latency, not computation), and we'll have a working proof of concept.

Secret sharing and SMPC technology have been out there for many years, and academic cryptographers have been talking about how to build privacy-preserving applications using M-of-N-based primitives and related technologies such as private information retrieval for over a decade. The key contribution made by Bitcoin, however, is the idea that M-of-N frameworks in general can be much more easily bootstrapped if we add in an economic layer. A secret sharing DAO with a currency built in would provide incentives for individuals to participate in maintaining the network, and would bootstrap it until the point where it could be fully self-sustaining on internal applications. Thus, altogether, this technology is quite possible, and not nearly so far away; it is only a matter of time until someone does it.

The post Secret Sharing DAOs: The Other Crypto 2.0 appeared first on ethereum blog.


Once again there is sand in the gears of the Ethereum IPO. The fundraising has been stopped, so it will not be possible to invest starting February 1; it has been postponed to an unspecified later date.
ethereum – Google Blog Search

Hi, I’m Jutta! As some of you might have read in earlier posts, I’ve recently been busy setting up a security audit prior to the Ethereum genesis block release. Ethereum will launch following a world-class review by experts in IT security, cryptography and blockchain technology. Prior to the launch, we will also complete a bug bounty program – a major cornerstone of our approach to achieving security.

The bug bounty program will rely on the Ethereum community and all other motivated bug bounty hunters out there. We’ll soon release the final details of the program, currently under development by Gustav. A first glimpse:

  • 11 figure total Satoshi rewards.
  • Leaderboard listing top contributors by total hunting score.
  • Other l33t ideas for rewards soon to be announced… just wondering if a contract in our genesis block could be the perfect place to eternalize the hall of fame (:

Get ready for hunting down flaws in the protocols, Go implementation and network code. For those looking for a distraction over the holidays, please note that the protocols and code-base are currently still subject to change. Please also note that rewards will only be given to submissions received after the official launch of the bounty program. Detailed submission guidelines, rules and other info on the exact scope will soon be published on http://ethdev.com.

The post A call to all the bug bounty hunters out there… appeared first on ethereum blog.


OK so a minor update about what we are (and are not) doing here at Ethereum DEV.

We are, first and foremost, developing a robust quasi-Turing-complete blockchain. This is known as Ethereum. Aside from having quasi-Turing-completeness, it delivers on a number of other important considerations, stemming from the fact that we are developing entirely new blockchain technology, including:

  • speedy, through a 12 second blocktime;
  • light-client-friendly through the use of Merkle roots in headers for compact inclusion/state proofs and DHT integration to allow light clients to host & share small parts of the full chain;
  • ÐApp-friendly, even for light-clients, through the use of multi-level Bloom filters and transaction receipt Merkle tries to allow for lightweight log-indexing and proofs;
  • finite-blockchain-friendly – we designed the core protocol to facilitate upgrading to this technology, further reducing light-client footprint and helping guarantee mid-term scalability;
  • ASIC-unfriendly – through the (as yet unconfirmed) choice of PoW algo and the threat we’ll be upgrading to PoS in the Not-Too-Distant future.

It is robust because:

  • it is unambiguously formally defined, allowing a highly tractable analysis, saturation tests and formal auditing of implementations;
  • it has an extensive, and ultimately complete, set of tests for providing an exceptionally high degree of likelihood a particular implementation is conformant;
  • modern software development practices are observed including a CI system, internal unit tests, strict peer-reviewing, a strict no-warnings policy and automated code analysers;
  • its mesh/p2p backend (aka libp2p) is built on well-tested secure foundations (technology stemming from the Kademlia project);
  • official implementations undergo a full industry-standard security audit;
  • a large-scale stress test network will be instituted for profiling and testing against likely adverse conditions and attacks prior to final release.

Secondly (and at an accordingly lower priority), we are developing materials and tools to make use of this unprecedented technology possible. This includes:

  • developing a single custom-designed CO (contract-orientated) language;
  • developing a secure natural language contract specification format and infrastructure;
  • formal documentation for help coding contracts;
  • tutorials for help coding contracts;
  • sponsoring web-based projects in order to get people into development;
  • developing a block chain integrated development environment.

Thirdly, to facilitate adoption of this technology, gain testers and spur further development we are developing, collaborating over and sponsoring a number of force-multiplying technologies that leverage pre-existing technology including:

  • a graphical client “browser” (leveraging drop-in browser components from the Chromium project and Qt 5 technology);
  • a set of basic contracts and ÐApps, including for registration, reputation, web-of-trust and accounting (leveraging the pre-existing compilers and development tech);
  • a hybrid multi-DHT/messaging system, codenamed Whisper (leveraging the pre-existing p2p back end & protocols);
  • a simple reverse-hash lookup DHT, codenamed Swarm (also leveraging the pre-existing p2p back end & protocols), for which there is an ongoing internal implementation, but which could end up merging or being a collaboration with the IPFS project.

We are no longer actively targeting multiple languages (LLL and Mutan are mothballed, Serpent is continued as a side project). We are not developing any server technology. And, until there is a working, robust, secure and effective block chain alongside basic development tools, other parts of this overall project have substantially lower priority.

Following on from the release of the Ethereum block chain, expect the other components to get increasingly higher amounts of time dedicated to them.

The post Ethereum ÐΞV: What are we doing? appeared first on ethereum blog.


Time for another update! So quite a bit has happened following ÐΞVcon-0, our internal developer’s conference. The conference itself was a great time to get all the developers together and really get to know each other, disseminate a lot of information (back to back presentations for 5 days!) and chat over a lot of ideas. The comms team will be releasing each of the presentations as fast as Ian can get them nicely polished.

During the time since the last update, much has happened including, finally, the release of the Ethereum ÐΞV website, ethdev.com. Though relatively simple at present, there are great plans to extend this into a developer’s portal in which you’ll be able to browse the bug bounty programme, look at and, ultimately, follow tutorials, look up documentation, find the latest binaries for each platform and see the progress of builds.

As usual I have been mostly between Switzerland, the UK and Berlin during this time. Now that ÐΞV-Berlin is settled in the hub, we have a great collaboration space in which volunteers can work, collaborate, bond and socialise alongside our more formal hires. Of late, I have been working to finish up the formal specification of Ethereum, the Yellow Paper, and make it up to date with the latest protocol changes so that the security audit can get underway. Together we have been putting the finishing touches on the seventh, and likely final, proof-of-concept code, delayed largely due to a desire to make it the final PoC release for protocol changes. I’ve also been doing some nice core refactoring and documentation, specifically removing two long-standing dislikes of mine, the State::create and State::call methods, and making the State class nicer for creating custom states, useful when developing contracts. You can expect to see the fruits of this work in Milestone II of Mix, Ethereum’s official IDE.

Ongoing Recruitment

On that note, I’m happy to announce that we have hired Arkadiy Paronyan, a talented developer originally from Russia who will be working with Yann on the Mix IDE. He’s got off to a great start in his first week, helping on the front-end with the second milestone. I’m also very pleased to announce that we hired Gustav Simonsson. An expert Erlang developer with Go experience and considerable expertise in network programming and security reviewing, he will initially be working with Jutta on the Go code base security audit before joining the Go team.

We also have another two recruits: Dimitri Khoklov and Jason Colby. I first met Jason in the fateful week back last January when the early Ethereum collaborators got together for a week before the North American Bitcoin conference where Vitalik gave the first public talk about Ethereum. Jason, who has moved to Berlin from his home in New Hampshire, is mostly working alongside Aeron and Christian to help look after the hub and handle various bits of administration that need to be done. Dimitri, who works from Tver in Russia, is helping flesh out our unit tests with Christoph, ultimately aiming towards full code coverage.

We have several more recruits that I’d love to mention but can’t announce quite yet – watch this space… (:

Ongoing Projects

I’m happy to say that after a busy weekend, Marek, Caktux, Nick and Sven have managed to get the Build Bot, our CI system, building on all three platforms cleanly again. A special shout goes out to Marek who tirelessly fought with CMake and MSVC to bend the Windows platform to his will. Well done to all involved.

Christian continues to power through on the Solidity project, aided now by Lefteris who focuses more on the documentation side. The latest feature to be added allows for the creation of new contracts in a beautiful manner with the new keyword. Alex and Sven are beginning to work on the project of introducing network well-formedness into the p2p subsystem using the salient parts of the well-proven Kademlia DHT design. We should begin seeing some of this stuff in the code base before the year-end.

I’m also happy to announce that the first successful message was sent between Go & C++ clients on our messaging/hash-table hybrid system, codenamed Whisper. Though only at an early proof-of-concept stage, the API is reasonably robust and fixed, so largely ready to prototype applications on.

New Projects

Marian is the lucky guy who has been tasked with developing out what will be our awesome web-based C&C deck. This will provide a public website whose back-end connects to a bunch of nodes around the world and displays real-time information on network status including chain length and a chain-fork early warning system. Though accessible by anyone, we will of course have a dedicated monitor on at all times for this page at the hub.

Sven, Jutta and Heiko have also begun a most interesting and important project: the Ethereum stress-testing project. Designed to study and test the network in a range of real-life adverse situations prior to release, they will construct infrastructure allowing the setup of many (10s, 100s, even 1000s of) nodes, each individually remote-controllable and able to simulate circumstances such as ISP attacks, net splits, rogue clients, and the arrival and departure of large amounts of hash-power, and to measure attributes like block & transaction propagation times and patterns, uncle rates and fork lengths. A project to watch out for.

Conclusions

The next time I write this I hope to have released PoC-7 and be on the way to the alpha release (not to mention have the Yellow Paper out). I expect Jeff will be doing an update concerning the Go side of things soon enough. Until then, watch out for the PoC-7 release and mine some testnet Ether!

The post Gav’s Ethereum ÐΞV Update IV appeared first on ethereum blog.
