
Consumers are demanding faster payments in the UK. Photo: Flickr

This is part 1 of our 2-part interview series with Jeremy Light on the future of payments.

“Real-time payments is the hottest topic in Europe, if not globally,” says Jeremy Light, a managing director at Accenture and payments industry expert.

When you think of influential authorities in the payments space, Jeremy instantly comes to mind. As head of Accenture Payment Services in Europe, Africa, and Latin America, Jeremy focuses on strategy, systems integration, and outsourcing.

A prolific thought leader, Jeremy regularly produces groundbreaking industry publications—such as his report, “Digital Payment Transformation”—and also maintains a blog. He was named Accenture’s Inventor of the Year 2015 after being awarded a mobile payments patent in 2014 along with his colleagues.

We recently caught up with Jeremy to get his take on the future of payments.

Ripple Labs: Everyone seems to be talking about real-time payments these days.

Jeremy Light: I would say probably the hottest topic in Europe if not globally is real-time payments. The Euro Banking Association (EBA) is focused on it. So is the European Central Bank (ECB). At the end of last year, the ECB issued a challenge to the banking industry in Europe—for them to develop a cohesive vision for building real-time payments infrastructure. They didn’t want to see disparate systems being set up around Europe that didn’t work together.

The EBA, which also performs a clearing function, set up a forum earlier this year across Europe to spark a discussion on real-time payments. But even without the extra nudge, we’re seeing progress in quite a few countries around the world, where banks are investigating how they can set up a real-time clearing system.

Last year, Finland issued a request for information (RFI), which described the national payment system there as quickly becoming obsolete. The Netherlands just announced that they’re going to set up real-time payments. Dutch banks plan to have real-time solutions set up by 2019. The EBA itself has just issued a blueprint for a pan-European instant payment infrastructure to be implemented from 2016 to 2018.

RL: So real-time is clearly on everyone’s minds. What are the biggest challenges moving forward? What are the major roadblocks?

Jeremy: Well, one major challenge is getting banks not only to adopt new technology and infrastructure, but also to offer these products to their customers. Even now, there are banks around Europe that will say that their customers aren’t asking for real-time payments, that same-day or next-day payments are good enough.

RL: It kind of sounds like how Internet providers here in the US will say that their customers don’t want faster broadband speeds.

Jeremy: Right. Consider the UK, where we’ve had the Faster Payments System since 2008, which, by the way, is now growing very strongly. Despite having the capabilities all these years, banks haven’t really promoted or marketed real-time payments. That’s not necessarily the fault of the banks either. A lot of small businesses, until recently, have been more than content with same day payments. But I would say that in the past 18 months, there’s been a big shift in attitude. We can’t pinpoint exactly why, but customers are now demanding real-time payments.

As a result, those banks that can’t offer guaranteed real-time payments 24/7/365, they’re starting to notice that their customers are complaining because they see this service being offered by competitors. They see others doing it, that it’s possible and that it’s happening. So there’s definitely been a wave of change regarding expectations in the last year or so.

Here in the UK, we have a lot of upstart banks, called challenger banks, encouraged by the government’s drive to open up competition in the banking industry. These new banks realize that they have to offer a real-time payments proposition because if they don’t, they’ll be at a huge disadvantage.

RL: Which makes sense. Everything else in our world today is on demand. The Internet has wired us for instant gratification.

Jeremy: Exactly. We suspect that this shift in expectations is part of the rise of the digital age. People’s lives are governed by their smartphones and the experiences they get with Google, Amazon, and Apple, where everything is immediate and instantaneous. It’s pushing banks to change their stance.

Consider a scenario in which a bank in Europe wants to offer mobile payments because its customers are demanding it. If I send you a payment by mobile, you get a message—Jeremy just sent you $15.

But then you look at your bank account and you don’t see the money. It becomes a source of uncertainty and anxiety. You don’t know when your money will arrive or even if it will arrive. And banks get that. They realize that they can’t offer mobile payments unless the payments arrive in real time, and it’s a sea change that’s occurring across Europe.

So most banks, most countries are realizing that they need to implement these systems because if you think about it, it’s the natural progression. It’s evolution. Like you said, everyone is used to real-time in every other aspect of their lives. It just makes sense. When people send you an email, they want an instant response. If I ask you a question, I want an answer. I don’t want it tomorrow. I want it immediately.

RL: Given all of that, where do you think distributed ledgers fit into the equation?

Jeremy: It’s an interesting question. The distributed consensus ledger technology that Ripple offers and that blockchain technologies offer is coming along at just the right time, when banks are looking at real-time payments. So there’s an immense amount of interest in what these innovations have to offer.

The caveat, of course, is that these technologies still need to mature, particularly if we’re talking about the blockchain. If you look at Bitcoin, the maximum throughput right now is maybe around 3 or 4 transactions per second. Then you look at Visa, which can process around 45,000 transactions per second. Bitcoin also isn’t technically real-time, since it takes at least ten minutes to confirm a transaction. That isn’t to say these issues can’t be addressed, but right now the technology hasn’t matured, and in terms of businesses offering real solutions, I haven’t seen any credible candidates. I don’t see it as this incredible technology that offers a solution for the immediate future. On the other hand, Ripple confirms transactions in seconds, which, I expect, is what banks are looking for.

RL: You make a great point about Visa. A lot of people will say, look, we can already do real-time with a centralized ledger and so they don’t understand the appeal of a distributed system.

Jeremy: That’s a key question. Why distributed ledgers? For one, a distributed solution enforces integrity and commonality versus one central ledger that everyone can access.

At the same time, central clearing and settlement is very efficient. Most clearing and settlement systems today can handle very high volume and they rarely have issues. Apart from very occasional hiccups, such as in the UK wire system last year, we haven’t really had issues regarding resilience.

Still, having these central, proprietary systems adds a lot of complexity and cost on the bank side. There’s a lot of different systems and these systems don’t interoperate with each other. This adds to operational issues and operational processes. So if we consider long term costs, a distributed ledger could conceivably be a better, more efficient solution, and there’s certainly the potential to do that. That’s why these technologies are so interesting for banks.

We’re still in the discovery phase around cryptotechnologies, distributed consensus ledgers. What will be intriguing to banks and financial institutions is to figure out how they can leverage the advantages of these innovations.

RL: Maybe this is an obvious question, but what are the immediate benefits of real-time payments, given that customers are now demanding it as a service?

Jeremy: From a consumer point of view, there are two main advantages. The first is that you have real-time availability of funds. If I give my daughter a pound coin, she can immediately go and buy an ice cream, rather than in a few hours or the next morning. When you have real-time availability of funds, you’re much more flexible with what you can do.

The second advantage is the ability to provide a seamless customer experience. We’ve noticed that in the UK, if a bank has delays of even just 10 minutes, customer service call volumes go up significantly because someone was sent money, but when they go to the ATM, it hasn’t arrived yet.

People enjoy a sense of certainty. They like knowing that a job is done so they can forget about it. Today, because PayPal is connected to the Faster Payments System, you can move money between your bank account and PayPal account in real time. The same goes for people who have accounts at different institutions: you can send money from one bank to another instantaneously. If you initiate a transaction and see that it’s left one account but hasn’t arrived in the other, it’s painful.

That’s the customer proposition. It’s instant gratification, essentially cashless cash. If you give cash to someone, a $20 bill, that’s it. The transaction is done and you can move on. That’s the beauty of real-time payments.

And this is exactly the trend we are seeing. In the UK, we predict volumes on the Faster Payments System to double or triple in the next few years, and they are already at over a billion payments per year.


Special thanks to Vlad Zamfir and Jae Kwon for many of the ideas described in this post

Aside from the primary debate around weak subjectivity, one of the important secondary arguments raised against proof of stake is the issue that proof of stake algorithms are much harder to make light-client friendly. Whereas proof of work algorithms involve the production of block headers which can be quickly verified, allowing a relatively small chain of headers to act as an implicit proof that the network considers a particular history to be valid, proof of stake is harder to fit into such a model. Because the validity of a block in proof of stake relies on stakeholder signatures, the validity depends on the ownership distribution of the currency in the particular block that was signed, and so it seems, at least at first glance, that in order to gain any assurances at all about the validity of a block, the entire block must be verified.

Given the sheer importance of light client protocols, particularly in light of the recent corporate interest in “internet of things” applications (which must often necessarily run on very weak and low-power hardware), light client friendliness is an important feature for a consensus algorithm to have, and so an effective proof of stake system must address it.

Light clients in Proof of Work

In general, the core motivation behind the “light client” concept is as follows. By themselves, blockchain protocols, with the requirement that every node must process every transaction in order to ensure security, are expensive, and once a protocol gets sufficiently popular the blockchain becomes so big that many users are no longer able to bear that cost. The Bitcoin blockchain is currently 27 GB in size, and so very few users are willing to continue running “full nodes” that process every transaction. On smartphones, and especially on embedded hardware, running a full node is outright impossible.

Hence, there needs to be some way for a user with far less computing power to still get a secure assurance about various details of the blockchain state – what is the balance/state of a particular account, did a particular transaction process, did a particular event happen, etc. Ideally, it should be possible for a light client to do this in logarithmic time – that is, squaring the number of transactions (eg. going from 1000 tx/day to 1,000,000 tx/day) should only double a light client’s cost. Fortunately, as it turns out, it is quite possible to design a cryptocurrency protocol that can be securely evaluated by light clients at this level of efficiency.

Basic block header model in Ethereum (note that Ethereum has a Merkle tree for transactions and accounts in each block, allowing light clients to easily access more data)

In Bitcoin, light client security works as follows. Instead of constructing a block as a monolithic object containing all of the transactions directly, a Bitcoin block is split up into two parts. First, there is a small piece of data called the block header, containing three key pieces of data:

  • The hash of the previous block header
  • The Merkle root of the transaction tree (see below)
  • The proof of work nonce

Additional data like the timestamp is also included in the block header, but this is not relevant here. Second, there is the transaction tree. Transactions in a Bitcoin block are stored in a data structure called a Merkle tree. The nodes on the bottom level of the tree are the transactions, and then going up from there every node is the hash of the two nodes below it. For example, if the bottom level had sixteen transactions, then the next level would have eight nodes: hash(tx[1] + tx[2]), hash(tx[3] + tx[4]), etc. The level above that would have four nodes (eg. the first node is equal to hash(hash(tx[1] + tx[2]) + hash(tx[3] + tx[4]))), the level above has two nodes, and then the level at the top has one node, the Merkle root of the entire tree.
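As a concrete illustration, here is a minimal Python sketch of the pairwise-hashing construction just described. It is a toy, not Bitcoin’s exact encoding: `h` is plain SHA-256 (Bitcoin actually double-hashes), and an odd-length level is handled by duplicating its last node.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Plain SHA-256 for the sketch; Bitcoin uses SHA-256 applied twice.
    return hashlib.sha256(data).digest()

def merkle_root(txs):
    """Hash the transactions pairwise, level by level, up to a single root."""
    level = [h(tx) for tx in txs]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
```

Changing a single transaction changes every hash on the path above it, and therefore the root.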

The Merkle root can be thought of as a hash of all the transactions together, and has the same properties that you would expect out of a hash – if you change even one bit in one transaction, the Merkle root will end up completely different, and there is no way to come up with two different sets of transactions that have the same Merkle root. The reason why this more complicated tree construction needs to be used is that it actually allows you to come up with a compact proof that one particular transaction was included in a particular block. How? Essentially, just provide the branch of the tree going down to the transaction:

The verifier will verify only the hashes going down along the branch, and thereby be assured that the given transaction is legitimately a member of the tree that produced a particular Merkle root. If an attacker tries to change any hash anywhere going down the branch, the hashes will no longer match and the proof will be invalid. The size of each proof is equal to the depth of the tree – ie. logarithmic in the number of transactions. If your block contains 2^20 (ie. ~1 million) transactions, then the Merkle tree will have only 20 levels, and so the verifier will only need to compute 20 hashes in order to verify a proof. If your block contains 2^30 (ie. ~1 billion) transactions, then the Merkle tree will have 30 levels, and so a light client will be able to verify a transaction with just 30 hashes.
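The branch proof can be sketched in a few lines of Python. Again an illustrative toy rather than Bitcoin’s wire format: `h` is plain SHA-256, odd levels are padded by duplicating the last node, and each proof element records the sibling hash plus whether it sits to the right of our node.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """All levels of the Merkle tree, from leaf hashes up to the root."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        level = levels[-1][:]
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd levels
        levels.append([h(level[i] + level[i + 1]) for i in range(0, len(level), 2)])
    return levels

def make_proof(leaves, index):
    """Collect the sibling hash at each level along the branch from leaf `index`."""
    proof = []
    for level in build_levels(leaves)[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sib = index ^ 1  # the node paired with ours at this level
        proof.append((sib > index, level[sib]))
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    """Recompute hashes up the branch; work is logarithmic in the leaf count."""
    node = h(leaf)
    for sibling_on_right, sibling in proof:
        node = h(node + sibling) if sibling_on_right else h(sibling + node)
    return node == root
```

With 8 leaves the proof has 3 sibling hashes; with 2^20 leaves it would have 20.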

Ethereum extends this basic mechanism with two additional Merkle trees in each block header, allowing nodes to prove not just that a particular transaction occurred, but also that a particular account has a particular balance and state, that a particular event occurred, and even that a particular account does not exist.

Verifying the Roots

Now, this transaction verification process all assumes one thing: that the Merkle root is trusted. If someone proves to you that a transaction is part of a Merkle tree that has some root, that by itself means nothing; membership in a Merkle tree only proves that a transaction is valid if the Merkle root is itself known to be valid. Hence, the other critical part of a light client protocol is figuring out exactly how to validate the Merkle roots – or, more generally, how to validate the block headers.

First of all, let us determine exactly what we mean by “validating block headers”. Light clients are not capable of fully validating a block by themselves; protocols exist for doing validation collaboratively, but this mechanism is expensive, and so in order to prevent attackers from wasting everyone’s time by throwing around invalid blocks we need a way of first quickly determining whether or not a particular block header is probably valid. By “probably valid” what we mean is this: if an attacker gives us a block that is determined to be probably valid, but is not actually valid, then the attacker needs to pay a high cost for doing so. Even if the attacker succeeds in temporarily fooling a light client or wasting its time, the attacker should still suffer more than the victims of the attack. This is the standard that we will apply to proof of work, and proof of stake, equally.

In proof of work, the process is simple. The core idea behind proof of work is that there exists a mathematical function which a block header must satisfy in order to be valid, and it is computationally very intensive to produce such a valid header. If a light client was offline for some period of time, and then comes back online, then it will look for the longest chain of valid block headers, and assume that that chain is the legitimate blockchain. The cost of spoofing this mechanism, providing a chain of block headers that is probably-valid-but-not-actually-valid, is very high; in fact, it is almost exactly the same as the cost of launching a 51% attack on the network.

In Bitcoin, this proof of work condition is simple: sha256(block_header) < 2**187 (in practice the “target” value changes, but once again we can dispense with this in our simplified analysis). In order to satisfy this condition, miners must repeatedly try different nonce values until they come upon one such that the proof of work condition for the block header is satisfied; on average, this consumes about 2^69 hashes of computational effort per block. The elegant feature of Bitcoin-style proof of work is that every block header can be verified by itself, without relying on any external information at all. This means that the process of validating the block headers can in fact be done in constant time – download 80 bytes and run a hash of it – even better than the logarithmic bound that we have established for ourselves. In proof of stake, unfortunately, we do not have such a nice mechanism.
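The asymmetry is easy to see in a Python sketch using the simplified fixed target from the text (a real client recomputes the target from the difficulty rules, and hashes a structured 80-byte header rather than an arbitrary byte string):

```python
import hashlib

def pow_valid(header: bytes, target: int = 2 ** 187) -> bool:
    # Verification is constant time: one hash, no external data needed.
    return int.from_bytes(hashlib.sha256(header).digest(), "big") < target

def mine(prefix: bytes, target: int = 2 ** 187) -> bytes:
    # Production is expensive on purpose: grind nonces until the condition
    # holds. With the real target this costs ~2^69 hashes on average, so
    # any demo should use a much easier target.
    nonce = 0
    while True:
        header = prefix + nonce.to_bytes(8, "big")
        if pow_valid(header, target):
            return header
        nonce += 1
```

For example, `mine(b"block data", target=2**252)` succeeds after ~16 attempts on average, while verifying the result is always a single hash.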

Light Clients in Proof of Stake

If we want to have an effective light client for proof of stake, ideally we would like to achieve the exact same complexity-theoretic properties as proof of work, although necessarily in a different way. Once a block header is trusted, the process for accessing any data from the header is the same, so we know that it will take a logarithmic amount of time in order to do. However, we want the process of validating the block headers themselves to be logarithmic as well.

To start off, let us describe an older version of Slasher, which was not particularly designed to be explicitly light-client friendly:

  1. In order to be a “potential blockmaker” or “potential signer”, a user must put down a security deposit of some size. This security deposit can be put down at any time, and lasts for a long period of time, say 3 months.
  2. During every time slot T (eg. T = 3069120 to 3069135 seconds after genesis), some function produces a random number R (there are many nuances behind making the random number secure, but they are not relevant here). Then, suppose that the set of potential signers ps (stored in a separate Merkle tree) has size N. We take ps[sha3(R) % N] as the blockmaker, and ps[sha3(R + 1) % N], ps[sha3(R + 2) % N] … ps[sha3(R + 15) % N] as the signers (essentially, using R as entropy to randomly select a blockmaker and 15 signers)
  3. Blocks consist of a header containing (i) the hash of the previous block, (ii) the list of signatures from the blockmaker and signers, and (iii) the Merkle root of the transactions and state, as well as (iv) auxiliary data like the timestamp.
  4. A block produced during time slot T is valid if that block is signed by the blockmaker and at least 10 of the 15 signers.
  5. If a blockmaker or signer legitimately participates in the blockmaking process, they get a small signing reward.
  6. If a blockmaker or signer signs a block that is not on the main chain, then that signature can be submitted into the main chain as “evidence” that the blockmaker or signer is trying to participate in an attack, and this leads to that blockmaker or signer losing their deposit. The evidence submitter may receive 33% of the deposit as a reward.
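The selection rule in step 2 can be sketched directly in Python; `sha3_num` is a hypothetical helper standing in for the post’s `sha3()`, hashing a 32-byte integer encoding with SHA3-256 and reading the digest back as an integer.

```python
import hashlib

def sha3_num(x: int) -> int:
    # Hypothetical stand-in for sha3(): hash an integer, return an integer.
    return int.from_bytes(hashlib.sha3_256(x.to_bytes(32, "big")).digest(), "big")

def select_validators(ps, R):
    """Use randomness R to pick the blockmaker ps[sha3(R) % N]
    and the 15 signers ps[sha3(R + i) % N] for i in 1..15."""
    N = len(ps)
    blockmaker = ps[sha3_num(R) % N]
    signers = [ps[sha3_num(R + i) % N] for i in range(1, 16)]
    return blockmaker, signers
```

Because the selection depends only on R and the deposit set, any observer who knows both can recompute exactly who was supposed to sign a given slot.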

Unlike proof of work, where the incentive not to mine on a fork of the main chain is the opportunity cost of not getting the reward on the main chain, in proof of stake the incentive is that if you mine on the wrong chain you will get explicitly punished for it. This is important; because a very large amount of punishment can be meted out per bad signature, a much smaller number of block headers are required to achieve the same security margin.

Now, let us examine what a light client needs to do. Suppose that the light client was last online N blocks ago, and wants to authenticate the state of the current block. What does the light client need to do? If a light client already knows that a block B[k] is valid, and wants to authenticate the next block B[k+1], the steps are roughly as follows:

  1. Compute the function that produces the random value R during block B[k+1] (computable in either constant or logarithmic time depending on the implementation)
  2. Given R, get the public keys/addresses of the selected blockmaker and signer from the blockchain’s state tree (logarithmic time)
  3. Verify the signatures in the block header against the public keys (constant time)

And that’s it. Now, there is one gotcha. The set of potential signers may end up changing during the block, so it seems as though a light client might need to process the transactions in the block before being able to compute ps[sha3(R + k) % N]. However, we can resolve this by simply saying that it’s the potential signer set from the start of the block, or even a block 100 blocks ago, that we are selecting from.
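The three-step check might be sketched as follows. This toy models a “signature” as simple set membership in the header’s `signed_by` field; a real light client would verify cryptographic signatures against the public keys fetched in step 2, and `sha3_num` is the same hypothetical integer-hashing helper as in the post’s `sha3(R + i) % N` notation.

```python
import hashlib
from dataclasses import dataclass

def sha3_num(x: int) -> int:
    return int.from_bytes(hashlib.sha3_256(x.to_bytes(32, "big")).digest(), "big")

@dataclass
class Header:
    slot: int
    signed_by: frozenset  # toy stand-in for the signature list

def validate_header(validators, R, header, threshold=10):
    """Light-client check for one block: derive the expected blockmaker and
    15 signers from R (steps 1-2), then check their signatures (step 3)."""
    N = len(validators)
    blockmaker = validators[sha3_num(R) % N]
    signers = [validators[sha3_num(R + i) % N] for i in range(1, 16)]
    if blockmaker not in header.signed_by:
        return False
    return sum(1 for s in signers if s in header.signed_by) >= threshold
```

Note that no transactions are replayed: the only chain data touched is the (Merkle-provable) validator set and the header itself.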

Now, let us work out the formal security assurances that this protocol gives us. Suppose that a light client processes a set of blocks, B[1] ... B[n], such that all blocks starting from B[k + 1] are invalid. Assuming that all blocks up to B[k] are valid, and that the signer set for block B[i] is determined from block B[i - 100], this means that the light client will be able to correctly deduce the signature validity for blocks B[k + 1] ... B[k + 100]. Hence, if an attacker comes up with a set of invalid blocks that fool a light client, the light client can still be sure that the attacker will have to forfeit ~1100 security deposits (11 signatures per block across the first 100 invalid blocks). For blocks beyond that, the attacker will be able to get away with signing blocks with fake addresses, but 1100 security deposits is assurance enough, particularly since the deposits can be variably sized and thus hold many millions of dollars of capital altogether.

Thus, even this older version of Slasher is, by our definition, light-client-friendly; we can get the same kind of security assurance as proof of work in logarithmic time.

A Better Light-Client Protocol

However, we can do significantly better than the naive algorithm above. The key insight that lets us go further is that of splitting the blockchain up into epochs. Here, let us define a more advanced version of Slasher, that we will call “epoch Slasher”. Epoch Slasher is identical to the above Slasher, except for a few other conditions:

  1. Define a checkpoint as a block such that block.number % n == 0 (ie. every n blocks there is a checkpoint). Think of n as being somewhere around a few weeks long; it only needs to be substantially less than the security deposit length.
  2. For a checkpoint to be valid, 2/3 of all potential signers have to approve it. Also, the checkpoint must directly include the hash of the previous checkpoint.
  3. The set of signers during a non-checkpoint block should be determined from the set of signers during the second-last checkpoint.

This protocol allows a light client to catch up much faster. Instead of processing every block, the light client would skip directly to the next checkpoint, and validate it. The light client can even probabilistically check the signatures, picking out a random 80 signers and requesting signatures for them specifically. If the signatures are invalid, then we can be statistically certain that thousands of security deposits are going to get destroyed.
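The spot check might look like the following sketch, where `signature_valid` is an assumed hook that fetches and verifies one signer’s checkpoint signature:

```python
import random

def spot_check(claimed_signers, signature_valid, k=80):
    """Probabilistically audit a checkpoint: sample k of the signers who
    supposedly approved it and verify only their signatures."""
    sample = random.sample(claimed_signers, min(k, len(claimed_signers)))
    return all(signature_valid(s) for s in sample)
```

If a fraction f of the claimed signatures are actually fake, an all-valid sample of 80 occurs with probability (1 - f)^80; even at f = 0.1 that is roughly 0.9^80 ≈ 0.02%, so a forged checkpoint is caught almost surely, and every fake signature detected is a deposit destroyed.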

After a light client has authenticated up to the latest checkpoint, it can simply grab the latest block and its 100 parents, and use a simpler per-block protocol to validate them as in the original Slasher. If those blocks end up being invalid or on the wrong chain, then because the light client has already authenticated the latest checkpoint – and by the rules of the protocol the deposits at that checkpoint remain active until at least the next checkpoint – the light client can once again be sure that at least 1100 deposits will be destroyed.

With this latter protocol, we can see that not only is proof of stake just as capable of light-client friendliness as proof of work, but moreover it’s actually even more light-client friendly. With proof of work, a light client synchronizing with the blockchain must download and process every block header in the chain, a process that is particularly expensive if the blockchain is fast, as is one of our own design objectives. With proof of stake, we can simply skip directly to the latest block, and validate the last 100 blocks before that to get an assurance that if we are on the wrong chain, at least 1100 security deposits will be destroyed.

Now, there is still a legitimate role for proof of work in proof of stake. In proof of stake, as we have seen, it takes a logarithmic amount of effort to probably-validate each individual block, and so an attacker can still cause light clients a logarithmic amount of annoyance by broadcasting bad blocks. Proof of work alone can be effectively validated in constant time, and without fetching any data from the network. Hence, it may make sense for a proof of stake algorithm to still require a small amount of proof of work on each block, ensuring that an attacker must spend some computational effort in order to even slightly inconvenience light clients. However, the amount of computational effort required to compute these proofs of work need only be minuscule.

The post Light Clients and Proof of Stake appeared first on .