Special thanks to Andrew Miller for coming up with this attack, and to Zack Hess, Vlad Zamfir and Paul Sztorc for discussion and responses

One of the more interesting surprises in cryptoeconomics in recent weeks came from an attack on SchellingCoin conceived by Andrew Miller earlier this month. Although it has always been understood that SchellingCoin, and similar systems (including the more advanced Truthcoin consensus), rely on what is so far a new and untested cryptoeconomic security assumption – that one can safely rely on people acting honestly in a simultaneous consensus game just because they believe that everyone else will – the problems that have been raised so far have to do with relatively marginal issues like an attacker’s ability to exert small but increasing amounts of influence on the output over time by applying continued pressure. This attack, on the other hand, shows a much more fundamental problem.

The scenario is described as follows. Suppose that there exists a simple Schelling game where users vote on whether or not some particular fact is true (1) or false (0); say in our example that it’s actually false. Each user can either vote 1 or 0. If a user votes the same as the majority, they get a reward of P; otherwise they get 0. Thus, the payoff matrix looks as follows:

               | You vote 0 | You vote 1
Others vote 0  | P          | 0
Others vote 1  | 0          | P

The theory is that if everyone expects everyone else to vote truthfully, then their incentive is to also vote truthfully in order to comply with the majority, and that’s the reason why one can expect others to vote truthfully in the first place; a self-reinforcing Nash equilibrium.
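
To make the equilibrium structure concrete, here is a minimal sketch (my own illustration, not code from any implementation; the value of P is an arbitrary assumption) that enumerates best responses in this game:

```python
# A minimal sketch of the two-outcome Schelling game. P is an arbitrary
# reward value; note that the actual truth never appears in the payoff at all.
P = 10

def payoff(my_vote, majority_vote):
    # You are paid P only for matching whatever the majority votes.
    return P if my_vote == majority_vote else 0

def best_response(majority_vote):
    return max((0, 1), key=lambda v: payoff(v, majority_vote))

for majority in (0, 1):
    print(f"majority votes {majority} -> best response: vote {best_response(majority)}")
# The best response always equals the majority vote, so "everyone votes 0" and
# "everyone votes 1" are both self-reinforcing equilibria; only the expectation
# that others vote truthfully anchors the outcome to the truth.
```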

Now, the attack. Suppose that the attacker credibly commits (eg. via an Ethereum contract, by simply putting one’s reputation at stake, or by leveraging the reputation of a trusted escrow provider) to pay out X to voters who voted 1 after the game is over, where X = P + ε if the majority votes 0, and X = 0 if the majority votes 1. Now, the payoff matrix looks like this:

               | You vote 0 | You vote 1
Others vote 0  | P          | P + ε
Others vote 1  | 0          | P

Thus, it’s a dominant strategy for anyone to vote 1 no matter what you think the majority will do. Hence, assuming the system is not dominated by altruists, the majority will vote 1, and so the attacker will not need to pay anything at all. The attack has successfully managed to take over the mechanism at zero cost. Note that this differs from Nicolas Houy’s argument about zero-cost 51% attacks on proof of stake (an argument technically extensible to ASIC-based proof of work) in that here no epistemic takeover is required; even if everyone remains dead set in a conviction that the attacker is going to fail, their incentive is still to vote to support the attacker, because the attacker takes on the failure risk themselves.
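
The dominance argument can be checked mechanically. The following sketch (again my own illustration, with arbitrary values for P and ε) recomputes the payoff matrix with the bribe in place:

```python
# A sketch of the P + epsilon bribe: the attacker pays P + eps to 1-voters
# only in the case where the majority votes 0. Values are illustrative.
P, eps = 10, 0.01

def payoff(my_vote, majority_vote):
    base = P if my_vote == majority_vote else 0
    bribe = (P + eps) if (my_vote == 1 and majority_vote == 0) else 0
    return base + bribe

for majority in (0, 1):
    print(f"majority={majority}: vote 0 pays {payoff(0, majority)}, "
          f"vote 1 pays {payoff(1, majority)}")
# majority=0: vote 0 pays 10, vote 1 pays 10.01 -> voting 1 is strictly better
# majority=1: vote 0 pays 0,  vote 1 pays 10    -> voting 1 is strictly better
# Voting 1 dominates, so the majority votes 1 and the bribe is never paid out.
```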

Salvaging Schelling Schemes

There are a few avenues that one can take to try to salvage the Schelling mechanism. One approach is that instead of round N of the Schelling consensus itself deciding who gets rewarded based on the “majority is right” principle, we use round N + 1 to determine who should be rewarded during round N, with the default equilibrium being that only people who voted correctly during round N (both on the actual fact in question and on who should be rewarded in round N – 1) should be rewarded. Theoretically, this requires an attacker wishing to perform a cost-free attack to corrupt not just one round, but also all future rounds, making the capital deposit that the attacker must commit unbounded.

However, this approach has two flaws. First, the mechanism is fragile: if the attacker manages to corrupt some round in the far future by actually paying up P + ε to everyone, regardless of who wins, then the expectation of that corrupted round causes an incentive to cooperate with the attacker to back-propagate to all previous rounds. Hence, corrupting one round is costly, but corrupting thousands of rounds is not much more costly.

Second, because of discounting, the required deposit to overcome the scheme does not need to be infinite; it just needs to be very very large (ie. inversely proportional to the prevailing interest rate). But if all we want is to make the minimum required bribe larger, then there exists a much simpler and better strategy for doing so, pioneered by Paul Sztorc: require participants to put down a large deposit, and build in a mechanism by which the more contention there is, the more funds are at stake. At the limit, where slightly over 50% of votes are in favor of one outcome and 50% in favor of the other, the entire deposit is taken away from minority voters. This ensures that the attack still works, but the bribe must now be greater than the deposit (roughly equal to the payout divided by the discounting rate, giving us equal performance to the infinite-round game) rather than just the payout for each round. Hence, in order to overcome such a mechanism, one would need to be able to prove that one is capable of pulling off a 51% attack, and perhaps we may simply be comfortable with assuming that attackers of that size do not exist.
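
The claim that the required bribe is roughly the payout divided by the discounting rate is just the closed form of a geometric series; a quick sketch with illustrative numbers:

```python
# A sketch of the discounting argument (P and r are assumptions, not values
# from the text). The present value of receiving P in every future round at
# interest rate r is P/(1+r) + P/(1+r)^2 + ... = P/r.
P, r = 10.0, 0.05

present_value = sum(P / (1 + r) ** t for t in range(1, 10000))
print(round(present_value, 6))  # ~200.0
print(P / r)                    # 200.0, the closed form
# A deposit of roughly P/r therefore buys the same security as the
# infinite-round game, without the recursive reward machinery.
```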

Another approach is to rely on counter-coordination; essentially, somehow coordinate, perhaps via credible commitments, on voting A (if A is the truth) with probability 0.6 and B with probability 0.4, the theory being that this will allow users to (probabilistically) claim the mechanism’s reward and a portion of the attacker’s bribe at the same time. This (seems to) work particularly well in games where instead of paying out a constant reward to each majority-compliant voter, the game is structured to have a constant total payoff, adjusting individual payoffs as needed to accomplish this goal. In such situations, from a collective-rationality standpoint it is indeed the case that the group earns the highest profit by having 49% of its members vote B to claim the attacker’s reward and 51% vote A to make sure the attacker’s reward is paid out.



However, this approach itself suffers from the flaw that, if the attacker’s bribe is high enough, even from there one can defect. The fundamental problem is that given a probabilistic mixed strategy between A and B, the return to each individual always changes (almost) linearly with the probability parameter. Hence, if, for the individual, it makes more sense to vote for B than for A, it will also make more sense to vote with probability 0.51 for B than with probability 0.49 for B, and voting with probability 1 for B will work even better.
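
A sketch of this linearity, with assumed payoff values: holding everyone else's strategy fixed, the individual's expected return is a straight line in their own mixing probability, so the maximum always sits at a corner:

```python
# Holding everyone else fixed, an individual's expected return is linear in
# their own probability q of voting B, so the optimum is always at q = 0 or
# q = 1 -- never at a coordinated interior point like 0.4 or 0.49.
ret_A, ret_B = 10.0, 10.5   # assumed expected returns of voting A vs. B

def expected_return(q):
    return (1 - q) * ret_A + q * ret_B

for q in (0.0, 0.4, 0.49, 0.51, 1.0):
    print(q, expected_return(q))
# Monotonically increasing in q whenever ret_B > ret_A: always voting B beats
# any mixed strategy, so the counter-coordination unravels.
```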



Hence, everyone will defect from the “49% for B” strategy by simply always voting for B, and so B will win and the attacker will have succeeded in the costless takeover. The fact that such complicated schemes exist, and come so close to “seeming to work”, suggests that perhaps in the near future some complex counter-coordination scheme will emerge that actually does work; however, we must be prepared for the eventuality that no such scheme will be developed.

Further Consequences

Given the sheer number of cryptoeconomic mechanisms that SchellingCoin makes possible, and the importance of such schemes in nearly all purely “trust-free” attempts to forge any kind of link between the cryptographic world and the real world, this attack poses a potentially serious threat – although, as we will later see, Schelling schemes as a category are ultimately partially salvageable. However, what is more interesting is the much larger class of mechanisms that don’t look quite like SchellingCoin at first glance, but in fact have very similar sets of strengths and weaknesses.

Particularly, let us point to one very specific example: proof of work. Proof of work is in fact a multi-equilibrium game in much the same way that Schelling schemes are: if there exist two forks, A and B, then if you mine on the fork that ends up winning you get 25 BTC and if you mine on the fork that ends up losing you get nothing.

                 | You mine on A | You mine on B
Others mine on A | 25            | 0
Others mine on B | 0             | 25

Now, suppose that an attacker launches a double-spend attack against many parties simultaneously (this requirement ensures that there is no single party with very strong incentive to oppose the attacker, opposition instead becoming a public good; alternatively the double spend could be purely an attempt to crash the price with the attacker shorting at 10x leverage), and call the “main” chain A and the attacker’s new double-spend fork B. By default, everyone expects A to win. However, the attacker credibly commits to paying out 25.01 BTC to everyone who mines on B if B ends up losing. Hence, the payoff matrix becomes:

                 | You mine on A | You mine on B
Others mine on A | 25            | 25.01
Others mine on B | 0             | 25

Thus, mining on B is a dominant strategy regardless of one’s epistemic beliefs, and so everyone mines on B, and so the attacker wins and pays out nothing at all. Particularly, note that in proof of work we do not have deposits, so the level of bribe required is proportional only to the mining reward multiplied by the fork length, not the capital cost of 51% of all mining equipment. Hence, from a cryptoeconomic security standpoint, one can in some sense say that proof of work has virtually no cryptoeconomic security margin at all (if you are tired of opponents of proof of stake pointing you to this article by Andrew Poelstra, feel free to link them here in response). If one is genuinely uncomfortable with the weak subjectivity condition of pure proof of stake, then it follows that the correct solution may perhaps be to augment proof of work with hybrid proof of stake by adding security deposits and double-voting-penalties to mining.
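
For a rough sense of scale, here is a back-of-the-envelope sketch (the fork length and margin are illustrative assumptions) of what the attacker must commit versus what they actually pay:

```python
# The committed bribe scales with block reward * fork length, plus a small
# margin -- not with the capital cost of 51% of mining hardware.
block_reward = 25.0    # BTC per block
fork_length = 6        # assumed number of blocks the attacker must out-commit
eps = 0.01

committed_bribe = fork_length * (block_reward + eps)
print(f"{committed_bribe} BTC committed")   # 150.06 BTC
# And if miners follow the dominant strategy and B wins, the payout is 0 BTC.
```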

Of course, in practice, proof of work has survived despite this flaw, and indeed it may continue to survive for a long time still; it may just be the case that there’s a high enough degree of altruism that attackers are not actually 100% convinced that they will succeed – but then, if we are allowed to rely on altruism, naive proof of stake works fine too. Hence, Schelling schemes too may well simply end up working in practice, even if they are not perfectly sound in theory.

The next part of this post will discuss the concept of “subjective” mechanisms in more detail, and how they can be used to theoretically get around some of these problems.


The Federal Reserve Bank has released its highly anticipated strategy report for improving the U.S. payment system.

The report follows calls for industry feedback in late 2013—which Ripple Labs participated in (letter, response)—during which the Federal Reserve acknowledged the payment system’s contribution to not only the country’s financial stability but also U.S. economic growth. The need to improve the nation’s underlying infrastructure had reached a critical juncture, the Fed concluded.

To put the significance of the Fed’s strategy report into context, this is the central bank’s first major initiative to upgrade the domestic payment system since the creation of ACH in the 1970s. This is a big deal—and the goal is clear.

The report’s executive summary overviews the current situation:

The Federal Reserve believes that the U.S. payment system is at a critical juncture in its evolution. Technology is rapidly changing many elements that support the payment process. High-speed data networks are becoming ubiquitous, computing devices are becoming more sophisticated and mobile, and information is increasingly processed in real time. These capabilities are changing the nature of commerce and end-user expectations for payment services.
Meanwhile, payment security and the protection of sensitive data, which are foundational to public confidence in any payment system, are challenged by dynamic, persistent and rapidly escalating threats. Finally, an increasing number of U.S. citizens and businesses routinely transfer value across borders and demand better payment options to swiftly and efficiently do so.

It’s also a call to arms:

Responses to the Federal Reserve’s 2013 Payment System Improvement – Public Consultation Paper (Consultation Paper) indicate broad agreement with the gaps, opportunities and desired outcomes discussed in that paper. Recent stakeholder dialogue has advanced significantly, and momentum toward common goals has increased.
Many payment stakeholders are now independently initiating actions to discuss payment system improvements with one another—especially the prospect of increasing end-to-end payment speed and security. We believe these developments illustrate a rare confluence of factors that create favorable conditions for change. Through this Strategies for Improving the U.S. Payment System paper, the Federal Reserve is calling on all stakeholders to seize this opportunity and join together to improve the payment system.

Of particular note are the potential solutions outlined by the report. Of the four solutions suggested, Ripple is the enabling technology described in option two (page 40). Ripple provides neutral payment infrastructure, and its users (banks, networks) set their own rules and governance in accordance with regulations set in their jurisdictions (e.g. the Fed in the U.S.).

Option 2: Facilitate direct clearing between financial institutions on public IP networks using protocols and standards for sending and receiving payments.
A distributed architecture for messaging between financial institutions over public IP networks has the potential to lower costs compared to clearing transactions over a hub-and-spoke network architecture. A central authority would establish common protocols for messaging standards, communication, security and logging transactions.

The Fed also made a statement about the design options it decided to exclude from further consideration, which included all proposals to evolve existing infrastructure such as ACH, wire transfers, and checks. They also decided to forego leveraging telecom infrastructure, a popular route in developing economies following the phenomenal success of M-Pesa.

The other options either involve leveraging the existing ATM/PIN debit infrastructure—which presents numerous operational challenges such as “the high variability on implementation feasibility” and the issue of “silos that often exist between the retail and commercial units of financial institutions”—or building new infrastructure from the ground up, which, while theoretically ideal as “a potential longer-term objective,” involves “potentially high cost.”

The Fed highlighted one of the primary weaknesses of the current status quo—that standards and protocols had failed to catch up to evolving needs as disparate networks and industry members failed to consistently reach consensus on new rulesets. The Fed pledged its commitment toward further industry coordination and cooperation to address this issue—which in our view underlines the unique advantage and responsibility of the central bank.

That’s also why we see Ripple technology as such a compelling solution within the components defined by the Fed that compose a payment system—technology, rules, risk management, and the messaging standard. As an efficient, inexpensive, ruleset-agnostic solution, Ripple provides the technological layer while the Fed and other industry members can play to their strengths and provide complementary components such as rulesets.

Ripple Labs designed the Ripple protocol as such because we believe that local jurisdictions are best suited to define their own standards in connecting fragmented payment networks given the complexity of financial regulation. By applying jurisdiction-specific rulesets on top of a common technical infrastructure like Ripple, the various national payment systems around the world would benefit from increased interoperability and a significant improvement in the speed and cost of cross border payments.

Having analyzed the Fed’s consultation papers over the past two years, we believe Ripple comprehensively achieves many of the desired outcomes outlined by the Fed (pages 8-15) along with addressing many of the existing weaknesses highlighted (page 34).

In general, we applaud the Fed’s ongoing initiative to provide a safe, efficient, and broadly accessible payment network. Their active and inclusive approach provides us further confidence in the work we are doing at Ripple Labs.

 


Warning: this post contains crazy ideas. Myself describing a crazy idea should NOT be construed as implying that (i) I am certain that the idea is correct/viable, (ii) I have an even >50% probability estimate that the idea is correct/viable, or that (iii) “Ethereum” endorses any of this in any way.

One of the common questions that many in the crypto 2.0 space have about the concept of decentralized autonomous organizations is a simple one: what are DAOs good for? What fundamental advantage would an organization have from its management and operations being tied down to hard code on a public blockchain, that could not be had by going the more traditional route? What advantages do blockchain contracts offer over plain old shareholder agreements? Particularly, even if public-good rationales in favor of transparent governance, and guaranteed-not-to-be-evil governance, can be raised, what is the incentive for an individual organization to voluntarily weaken itself by opening up its innermost source code, where its competitors can see every single action that it takes or even plans to take, while themselves operating behind closed doors?

There are many paths that one could take to answering this question. For the specific case of non-profit organizations that are already explicitly dedicating themselves to charitable causes, one can rightfully say that the lack of individual incentive is not a problem; they are already dedicating themselves to improving the world for little or no monetary gain to themselves. For private companies, one can make the information-theoretic argument that a governance algorithm will work better if, all else being equal, everyone can participate and introduce their own information and intelligence into the calculation – a rather reasonable hypothesis given the established result from machine learning that much larger performance gains can be made by increasing the data size than by tweaking the algorithm. In this article, however, we will take a different and more specific route.

What is Superrationality?

In game theory and economics, it is a very widely understood result that there exist many classes of situations in which a set of individuals have the opportunity to act in one of two ways, either “cooperating” with or “defecting” against each other, such that everyone would be better off if everyone cooperated, but regardless of what others do each individual would be better off by themselves defecting. As a result, the story goes, everyone ends up defecting, and so people’s individual rationality leads to the worst possible collective result. The most common example of this is the celebrated Prisoner’s Dilemma game.

Since many readers have likely already seen the Prisoner’s Dilemma, I will spice things up by giving Eliezer Yudkowsky’s rather deranged version of the game:

Let’s suppose that four billion human beings – not the whole human species, but a significant part of it – are currently progressing through a fatal disease that can only be cured by substance S.

However, substance S can only be produced by working with [a strange AI from another dimension whose only goal is to maximize the quantity of paperclips] – substance S can also be used to produce paperclips. The paperclip maximizer only cares about the number of paperclips in its own universe, not in ours, so we can’t offer to produce or threaten to destroy paperclips here. We have never interacted with the paperclip maximizer before, and will never interact with it again.

Both humanity and the paperclip maximizer will get a single chance to seize some additional part of substance S for themselves, just before the dimensional nexus collapses; but the seizure process destroys some of substance S.

The payoff matrix is as follows:

              | Humans cooperate                     | Humans defect
AI cooperates | 2 billion lives saved, 2 paperclips  | 3 billion lives, 0 paperclips
AI defects    | 0 lives, 3 paperclips                | 1 billion lives, 1 paperclip

From our point of view, it obviously makes sense from a practical, and in this case moral, standpoint that we should defect; there is no way that a paperclip in another universe can be worth a billion lives. From the AI’s point of view, defecting always leads to one extra paperclip, and its code assigns a value to human life of exactly zero; hence, it will defect. However, the outcome that this leads to is clearly worse for both parties than if the humans and AI both cooperated – but then, if the AI was going to cooperate, we could save even more lives by defecting ourselves, and likewise for the AI if we were to cooperate.
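
The dominance reasoning can be verified directly from the payoff matrix above; a small sketch (payoffs in billions of lives and paperclips, respectively):

```python
# Checking dominance in the payoff matrix above.
# Keys are (human move, AI move); C = cooperate, D = defect.
human = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}
ai    = {("C", "C"): 2, ("C", "D"): 3, ("D", "C"): 0, ("D", "D"): 1}

for ai_move in ("C", "D"):
    better = max(("C", "D"), key=lambda h: human[(h, ai_move)])
    print(f"if AI plays {ai_move}, humans prefer {better}")
for h_move in ("C", "D"):
    better = max(("C", "D"), key=lambda a: ai[(h_move, a)])
    print(f"if humans play {h_move}, AI prefers {better}")
# Both parties prefer D regardless of the other's move, yet (D, D) yields
# (1 billion lives, 1 paperclip) -- worse for both than (C, C)'s (2, 2).
```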

In the real world, many two-party prisoner’s dilemmas on the small scale are resolved through the mechanism of trade and the ability of a legal system to enforce contracts and laws; in this case, if there existed a god who has absolute power over both universes but cared only about compliance with one’s prior agreements, the humans and the AI could sign a contract to cooperate and ask the god to simultaneously prevent both from defecting. When there is no ability to pre-contract, laws penalize unilateral defection. However, there are still many situations, particularly when many parties are involved, where opportunities for defection exist:

  • Alice is selling lemons in a market, but she knows that her current batch is low quality and once customers try to use them they will immediately have to throw them out. Should she sell them anyway? (Note that this is the sort of marketplace where there are so many sellers you can’t really keep track of reputation). Expected gain to Alice: $5 revenue per lemon minus $1 shipping/store costs = $4. Expected gain to society: $5 revenue minus $1 costs minus $5 wasted money from customer = -$1. Alice sells the lemons.
  • Should Bob donate $1000 to Bitcoin development? Expected gain to society: $10 * 100000 people – $1000 = $999000, expected gain to Bob: $10 – $1000 = -$990, so Bob does not donate.
  • Charlie found someone else’s wallet, containing $500. Should he return it? Expected gain to society: $500 (to recipient) – $500 (Charlie’s loss) + $50 (intangible gain to society from everyone being able to worry a little less about the safety of their wallets). Expected gain to Charlie: -$500, so he keeps the wallet.
  • Should David cut costs in his factory by dumping toxic waste into a river? Expected gain to society: $1000 savings minus $10 average increased medical costs * 100000 people = -$999000, expected gain to David: $1000 – $10 = $990, so David pollutes.
  • Eve developed a cure for a type of cancer which costs $500 per unit to produce. She can sell it for $1000, allowing 50,000 cancer patients to afford it, or for $10000, allowing 25,000 cancer patients to afford it. Should she sell at the higher price? Expected gain to society: -25,000 lives (including Eve’s profit, which cancels out the wealthier buyers’ losses). Expected gain to Eve: $237.5 million profit instead of $25 million = $212.5 million, so Eve charges the higher price (the arithmetic is checked in the sketch after this list).
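
A quick check of the arithmetic in the Eve example, using the figures above:

```python
# Verifying the Eve example's numbers.
unit_cost = 500

low_price, low_buyers = 1_000, 50_000
high_price, high_buyers = 10_000, 25_000

profit_low = (low_price - unit_cost) * low_buyers     # $25,000,000
profit_high = (high_price - unit_cost) * high_buyers  # $237,500,000
print(profit_high - profit_low)                       # $212,500,000 extra for Eve
print(low_buyers - high_buyers)                       # 25,000 patients priced out
# Eve's private gain is $212.5 million; society's loss is 25,000 lives.
```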

Of course, in many of these cases, people sometimes act morally and cooperate, even though doing so makes them personally worse off. But why do they do this? We were produced by evolution, which is generally a rather selfish optimizer. There are many explanations. One, and the one we will focus on, involves the concept of superrationality.

Superrationality

Consider the following explanation of virtue, courtesy of David Friedman:

I start with two observations about human beings. The first is that there is a substantial connection between what goes on inside and outside of their heads. Facial expressions, body positions, and a variety of other signs give us at least some idea of our friends’ thoughts and emotions. The second is that we have limited intellectual ability–we cannot, in the time available to make a decision, consider all options. We are, in the jargon of computers, machines of limited computing power operating in real time.
Suppose I wish people to believe that I have certain characteristics–that I am honest, kind, helpful to my friends. If I really do have those characteristics, projecting them is easy–I merely do and say what seems natural, without paying much attention to how I appear to outside observers. They will observe my words, my actions, my facial expressions, and draw reasonably accurate conclusions.
Suppose, however, that I do not have those characteristics. I am not (for example) honest. I usually act honestly because acting honestly is usually in my interest, but I am always willing to make an exception if I can gain by doing so. I must now, in many actual decisions, do a double calculation. First, I must decide how to act–whether, for example, this is a good opportunity to steal and not be caught. Second, I must decide how I would be thinking and acting, what expressions would be going across my face, whether I would be feeling happy or sad, if I really were the person I am pretending to be.
If you require a computer to do twice as many calculations, it slows down. So does a human. Most of us are not very good liars.
If this argument is correct, it implies that I may be better off in narrowly material terms–have, for instance, a higher income–if I am really honest (and kind and …) than if I am only pretending to be, simply because real virtues are more convincing than pretend ones. It follows that, if I were a narrowly selfish individual, I might, for purely selfish reasons, want to make myself a better person–more virtuous in those ways that others value.
The final stage in the argument is to observe that we can be made better–by ourselves, by our parents, perhaps even by our genes. People can and do try to train themselves into good habits–including the habits of automatically telling the truth, not stealing, and being kind to their friends. With enough training, such habits become tastes–doing “bad” things makes one uncomfortable, even if nobody is watching, so one does not do them. After a while, one does not even have to decide not to do them. You might describe the process as synthesizing a conscience.

Essentially, it is cognitively hard to convincingly fake being virtuous while being greedy whenever you can get away with it, and so it makes more sense for you to actually be virtuous. Much ancient philosophy follows similar reasoning, seeing virtue as a cultivated habit; David Friedman simply did us the customary service of an economist and converted the intuition into more easily analyzable formalisms. Now, let us compress this formalism even further. In short, the key point here is that humans are leaky agents – with every second of our action, we essentially indirectly expose parts of our source code. If we are actually planning to be nice, we act one way, and if we are only pretending to be nice while actually intending to strike as soon as our friends are vulnerable, we act differently, and others can often notice.

This might seem like a disadvantage; however, it allows a kind of cooperation that was not possible with the simple game-theoretic agents described above. Suppose that two agents, A and B, each have the ability to “read” whether or not the other is “virtuous” to some degree of accuracy, and are playing a symmetric Prisoner’s Dilemma. In this case, the agents can adopt the following strategy, which we assume to be a virtuous strategy:

  1. Try to determine if the other party is virtuous.
  2. If the other party is virtuous, cooperate.
  3. If the other party is not virtuous, defect.

If two virtuous agents come into contact with each other, both will cooperate, and get a larger reward. If a virtuous agent comes into contact with a non-virtuous agent, the virtuous agent will defect. Hence, in all cases, the virtuous agent does at least as well as the non-virtuous agent, and often better. This is the essence of superrationality.
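
As a toy model, the strategy can be written out explicitly. In the sketch below, `virtuous` and `read_accuracy` are modeling assumptions standing in for the imperfect "mind reading" described above:

```python
# A toy model of the "read the other agent, then decide" strategy.
import random

class Agent:
    def __init__(self, virtuous, read_accuracy=0.9):
        self.virtuous = virtuous
        self.read_accuracy = read_accuracy

    def read(self, other):
        # Noisy read of the other agent's disposition.
        if random.random() < self.read_accuracy:
            return other.virtuous
        return not other.virtuous

    def move(self, other):
        if not self.virtuous:
            return "defect"        # non-virtuous agents always defect
        return "cooperate" if self.read(other) else "defect"

a, b = Agent(virtuous=True), Agent(virtuous=True)
print(a.move(b), b.move(a))   # usually ("cooperate", "cooperate")
c = Agent(virtuous=False)
print(a.move(c))              # usually "defect": virtue does at least as well
```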

As contrived as this strategy seems, human cultures have some deeply ingrained mechanisms for implementing it, particularly relating to mistrusting agents who try hard to make themselves less readable – see the common adage that you should never trust someone who doesn’t drink. Of course, there is a class of individuals who can convincingly pretend to be friendly while actually planning to defect at every moment – these are called sociopaths, and they are perhaps the primary defect of this system when implemented by humans.

Centralized Manual Organizations…

This kind of superrational cooperation has arguably been an important bedrock of human cooperation for the last ten thousand years, allowing people to be honest with each other even in those cases where simple market incentives might instead drive defection. However, perhaps one of the main unfortunate byproducts of the modern birth of large centralized organizations is that they allow people to effectively cheat others’ ability to read their minds, making this kind of cooperation more difficult.

Most people in modern civilization have benefited quite handsomely from, and have also indirectly financed, at least some instance of someone in some third world country dumping toxic waste into a river to build products more cheaply for them; however, we do not even realize that we are indirectly participating in such defection; corporations do the dirty work for us. The market is so powerful that it can arbitrage even our own morality, placing the most dirty and unsavory tasks in the hands of those individuals who are willing to absorb their conscience at lowest cost and effectively hiding it from everyone else. The corporations themselves are perfectly able to have a smiley face produced as their public image by their marketing departments, leaving it to a completely different department to sweet-talk potential customers. This second department may not even know that the department producing the product is any less virtuous and sweet than they are.

The internet has often been hailed as a solution to many of these organizational and political problems, and indeed it does do a great job of reducing information asymmetries and offering transparency. However, as far as the decreasing viability of superrational cooperation goes, it can also sometimes make things even worse. Online, we are much less “leaky” even as individuals, and so once again it is easier to appear virtuous while actually intending to cheat. This is part of the reason why scams online and in the cryptocurrency space are more common than offline, and is perhaps one of the primary arguments against moving all economic interaction to the internet a la cryptoanarchism (the other argument being that cryptoanarchism removes the ability to inflict unboundedly large punishments, weakening the strength of a large class of economic mechanisms).

A much greater degree of transparency, arguably, offers a solution. Individuals are moderately leaky, current centralized organizations are less leaky, but organizations where information is constantly being randomly released to the world left, right and center are even more leaky than individuals are. Imagine a world where if you start even thinking about how you will cheat your friend, business partner or spouse, there is a 1% chance that the left part of your hippocampus will rebel and send a full recording of your thoughts to your intended victim in exchange for a $7500 reward. That is what it “feels” like to be the management board of a leaky organization.

This is essentially a restatement of the founding ideology behind Wikileaks, and more recently an incentivized Wikileaks alternative, slur.io, came out to push the envelope further. However, Wikileaks exists, and yet shadowy centralized organizations continue to exist and are in many cases still quite shadowy. Perhaps incentivization, coupled with prediction-market-like mechanisms for people to profit from outing their employers’ misdeeds, is what will open the floodgates for greater transparency, but at the same time we can also take a different route: offer a way for organizations to make themselves voluntarily, and radically, leaky and superrational to an extent never seen before.

… and DAOs

Decentralized autonomous organizations, as a concept, are unique in that their governance algorithms are not just leaky, but actually completely public. That is, while with even transparent centralized organizations outsiders can get a rough idea of what the organization’s temperament is, with a DAO outsiders can actually see the organization’s entire source code. Now, they do not see the “source code” of the humans that are behind the DAO, but there are ways to write a DAO’s source code so that it is heavily biased toward a particular objective regardless of who its participants are. A futarchy maximizing the average human lifespan will act very differently from a futarchy maximizing the production of paperclips, even if the exact same people are running it. Hence, not only is it the case that the organization will make it obvious to everyone if they start to cheat, but rather it’s not even possible for the organization’s “mind” to cheat.

Now, what would superrational cooperation using DAOs look like? First, we would need to see some DAOs actually appear. There are a few use-cases where it seems not too far-fetched to expect them to succeed: gambling, stablecoins, decentralized file storage, one-ID-per-person data provision, SchellingCoin, etc. However, we can call these DAOs type I DAOs: they have some internal state, but little autonomous governance. They cannot ever do anything but perhaps adjust a few of their own parameters to maximize some utility metric via PID controllers, simulated annealing or other simple optimization algorithms. Hence, they are in a weak sense superrational, but they are also rather limited and stupid, and so they will often rely on being upgraded by an external process which is not superrational at all.

In order to go further, we need type II DAOs: DAOs with a governance algorithm capable of making theoretically arbitrary decisions. Futarchy, various forms of democracy, and various forms of subjective extra-protocol governance (ie. in case of substantial disagreement, the DAO clones itself into multiple parts with one part for each proposed policy, and everyone chooses which version to interact with) are the only ones we are currently aware of, though other fundamental approaches and clever combinations of these will likely continue to appear. Once DAOs can make arbitrary decisions, then they will be able to not only engage in superrational commerce with their human customers, but also potentially with each other.

What kinds of market failures can superrational cooperation solve that plain old regular cooperation cannot? Public goods problems may unfortunately be outside the scope; none of the mechanisms described here solve the massively-multiparty incentivization problem. In this model, the reason why organizations make themselves decentralized/leaky is so that others will trust them more, and so organizations that fail to do this will be excluded from the economic benefits of this “circle of trust”. With public goods, the whole problem is that there is no way to exclude anyone from benefiting, so the strategy fails. However, anything related to information asymmetries falls squarely within the scope, and this scope is large indeed; as society becomes more and more complex, cheating will in many ways become progressively easier and easier to do and harder to police or even understand; the modern financial system is just one example. Perhaps the true promise of DAOs, if there is any promise at all, is precisely to help with this.


Continuing our dialogue with regulators about efforts to build more efficient, safer payment systems, Ripple Labs recently detailed the benefits and implications of a distributed network to the UK’s new Payment System Regulator (PSR).

In most cases, payments are included under a general framework for financial regulations. Yet the UK has taken a unique approach in designating a new regulator to build a more competitive, innovative and inclusive payment system.

Ahead of becoming fully operational in April 2015, the PSR called for industry input on its regulatory approach and initial priorities. Ripple Labs commends the PSR’s transparency, thoughtfulness, and inclusiveness in its call for input, and is grateful for the opportunity to submit a letter.

Our recommendations reflect ongoing discussions with regulators—such as our recent correspondence with New York Department of Financial Services (NYDFS) and the BitLicense proposal—and along those lines, represent our core perspective on regulations. That is, we believe the following four points to be essential to not only the PSR’s success, but regulatory frameworks in general:

  • Ensure regulations account for the new technologies that will be necessary for creating a more competitive, innovative, and inclusive payment system. Generally, existing regulations assume the use of a centralized operator. However, new technologies such as open protocols and distributed networks may not rely on a central operator. Regulators should ensure their rules account for technology with alternative governance models to best leverage their benefits in the payments system.
  • Enable startups and smaller companies to contribute to the payment system. We encourage a flexible regulatory framework that is inclusive of startups and smaller companies—typically the drivers of innovation. We commend both the PSR for recognizing this need in its proposals and the NYDFS for its decision to include a two-year transitional operating license, giving startups and small businesses an opportunity to compete with established players.
  • Take a holistic view of risk and consider the cumulative impact of regulations. New technologies present new risks, yet many of these risks are known and can be mitigated. Ripple Labs urges regulators to also consider the risk of continued reliance on antiquated infrastructure. These risks grow over time, are often underappreciated, and may have systemic consequences. Further, regulators should take a coordinated approach when implementing new rules, being mindful of their cumulative impact.
  • Consider how new infrastructure technology can minimize payment, operational, and systemic risks while improving anti-money laundering (AML) efforts. Novel approaches to infrastructure improvements can also go a long way in optimizing compliance capabilities and mitigating structural risks. In the case of distributed networks, the shared ledger lowers the cost of compliance by providing improved funds traceability and AML oversight.

In this case, we also included an overview of how Ripple benefits regulators, government agencies, and central banks. As an innovative approach to funds transfer, Ripple is an opportunity to improve today’s payment systems and minimize or even eliminate structural inefficiencies.

Unlike existing systems, Ripple is an Internet protocol-based technology, which means it is neutral and can maintain a record of balances without a central counterparty. The result is a competitive market for funds exchange and delivery.

A payment system powered by Ripple has numerous benefits:

  1. Reduces fragmentation and concentration; increases competition.
  2. Enables fund traceability and transaction visibility.
  3. Reduces systemic risk: no single point of failure.
  4. Reduces the possibility of conflicts of interest as a neutral infrastructure layer.
  5. Improves capital efficiency and liquidity management.
  6. Decreases operational and settlement risk.
  7. Enables new products and improved consumer experience.
  8. Improves information security and reduces cyber threats.

In all, Ripple Labs supports and shares the PSR’s objectives of fostering a competitive, innovative, and inclusive payment system. Indeed, we believe the Ripple protocol embodies many of the PSR’s goals. We look forward to continuing our proactive engagement with the PSR and other regulators in the future.

For a more in-depth overview of how Ripple Labs approaches regulations, you can view the entirety of our response to PSR CP14/1 here (PDF): Ripple Labs response to PSR


Ripple Labs EVP of Business Development Patrick Griffin at Sibos 2013 in Dubai

Ripple Labs will be in Boston at the end of the month for Sibos (Sept. 29 – Oct. 2), the annual financial services conference hosted by SWIFT, where 7000 industry members and thought leaders will gather to contemplate and help shape the future of payments and trade.

Running concurrently throughout the conference is the Innotribe program, a SWIFT initiative focused on innovation at the convergence of finance and technology—of which Ripple will play a prominent part. (Check out our interview with Kosta Peric, Innotribe co-founder and former Head of Innovation of SWIFT.)

Ripple Labs CEO and co-founder Chris Larsen will be presenting at the following sessions on Monday, September 29th:

  • Future of Money: The Rise of Cryptocurrencies (9:30AM ET)
  • Disruption: Cryptocurrencies (12:30PM ET)

If you’re interested in learning how Ripple is driving down cross-border transaction costs for banks like Fidor, please contact us to schedule a meeting with a Ripple Labs representative.


To help accelerate the creation of strong, reliable, and compliant gateways, Ripple Labs will be providing XRP incentives and extended technical support for gateways that meet criteria considered to be critical for the success of a gateway.

Ripple Labs wants every gateway to achieve a gold standard in business planning, technical reliability and stability, regulatory compliance, and liquidity. The Ripple protocol enables the federation and interoperability of many independent payment systems.

As such, we’re actively developing the specifications for Gateway Services APIs and are eager to help gateways with implementation. In the meantime, here are some of the steps and assistance provided by Ripple Labs to help get your gateway to that point.

Gateway business plan development

Successful businesses start with a concept that can be concisely summarized and executed upon. To get things started on the right foot, here is a business plan template for gateways that is freely available. This plan was developed in consultation with new gateways that were exploring the business opportunities on Ripple, so it’s tailored to the needs of an early stage operator.

The template encourages you to carefully consider who your customer is and what value they’ll derive from your service. Simplifying their experience and making the deposit and withdrawal of assets frictionless is critical to driving volume and subsequent revenue.

Serious endeavors should contact Ripple Labs to coordinate possible assistance and business planning.

Gatewayd support

Gatewayd has been designed to make deploying a gateway as easy as possible.

It provides the basic functionality to link assets represented in the Ripple network to those held in the outside world. It includes a core database to track deposits and withdrawals and utilizes Ripple REST to issue assets to customer wallets.

Gatewayd plugins

If your gateway needs a custom deposit/withdrawal plugin for an external payment system (such as PayPal, AliPay, etc.), Ripple Labs may consider funding a bounty to create that plugin or build it for you. Plugins are custom pieces of code that are used to monitor and submit transactions to and from external payment systems so that gatewayd can take appropriate action. You can see examples of these kinds of plugins in the repos under gatewayd on GitHub.

Services implementation

Gateway Services APIs make gateways interoperable and provide straightforward calls that clients can use to route payments appropriately. Gateway Services rely on existing web standards like host-meta and webfinger, while making certain functions of the REST API more robust. Please contact us for assistance if you decide to implement these services at your gateway.

XRP for customers of KYC/AML compliant gateways

Ripple Labs may assist with customer acquisition by providing gateways with XRP that can be used to activate Ripple wallets of new accounts. Customers who provide a baseline level of KYC information may be eligible to receive XRP upon registration and making a deposit at your gateway.

Compliance resources

Ripple Labs regularly issues Gateway Bulletins as new features are released or on topics related to compliance and risk. Those bulletins are shared with the developer community including gateway operators and IRBA members. In addition to Gateway Bulletins, Ripple Labs publishes Compliance Resources that may be helpful for gateway operators in understanding local and global standards on KYC/AML policies, as well as opinions or guidance on virtual currency.

Since rules on KYC/AML policies and guidance on virtual currency vary by jurisdiction, gateways should obtain legal advice on how these rules apply to their business and country of operation. Be aware that regulatory standards are evolving rapidly. While Ripple Labs makes every effort to update the Gateway Bulletins and Compliance Resources regularly, gateways should seek legal advice and stay abreast of regulatory changes, as these may vary based on geography and the products offered.

Generating liquidity

Ripple Labs understands that it may be difficult for new gateways to generate the liquidity needed to provide a compelling service to their customers. Meeting the aforementioned technical and compliance standards is an important step toward becoming a popular, well-capitalized gateway. Transaction volume drives liquidity, so Ripple Labs may facilitate introductions for operational gateways to market makers who can enable assets issued by your gateway to trade freely at competitive exchange rates.

Feedback is welcome

The Ripple protocol’s success will be largely determined by the ecosystem of gateways that are providing the onramps and off-ramps for value. As such, Ripple Labs continues to support gateway developers and entrepreneurs in their projects to build gateways.

We’d love to hear your feedback on what’s most useful and other tools that you’d like to see. We look forward to working alongside you to build the value web!



Ripple Labs is thrilled to have signed its first two U.S. banks to use the Ripple protocol for real-time, cross-border payments.

Cross River Bank, an independent transaction bank based in New Jersey, and CBW Bank, a century-old institution founded in Kansas, join Fidor Bank on the Ripple network, which continues to grow.

Both banks are excited to leverage the technology in order to provide greater efficiency and innovation to their customers.

“Our business customers expect banking to move at the speed of the Web, but with the security and confidence of the traditional financial system,” said Gilles Gade, president and CEO of Cross River Bank.

“Ripple will help make that a reality, enabling our customers to instantly transfer funds internationally while meeting all compliance requirements and payments rules. We are excited to be amongst the very first banks in the U.S. to deploy Ripple as a faster, more affordable and compliant payment rail for our customers.”

“Today’s banks offer the equivalent of 300-year-old paper ledgers converted to an electronic form – a digital skin on an antiquated transaction process,” said Suresh Ramamurthi, chairman and CTO of CBW Bank.

“Ripple addresses the structural problem of payments as IP-based settlement infrastructure that powers the exchange of any types of value. We’ll now be one of the first banks in the world to offer customers a reliable, compliant, safe and secure way to instantly send and receive money internationally.  As part of our integration with Ripple, we are rolling out Yantra’s cross-border, transaction-specific compliance, risk-scoring, monitoring and risk management system.”

But these new partnerships aren’t just great for Cross River Bank and CBW Bank customers; they’re great for everyone in the U.S. and Europe, essentially opening up a corridor between ACH and SEPA. Any U.S. bank can now use Cross River or CBW Bank as a correspondent to move funds in real-time to any other institution in Europe via Germany-based Fidor.

The deals will also help expand liquidity and trade volume on the protocol and generally improve the network effects of the system—which will continue to make Ripple more attractive for both market makers and developers.

Ultimately, this announcement is the culmination of many months of hard work and further validation for the Ripple Labs vision. The most exciting part? This is only just the beginning.


Chris Larsen (Co-founder and CEO) and Greg Kidd (Chief Risk Officer)—imagery courtesy of Money2020

In less than two weeks, Ripple Labs will be joining thousands of industry and thought leaders at Money20/20 in Las Vegas, Nevada.

Of the 7,000+ attendees, there will be “670 CEOs, from over 2,300 companies and 60 countries.” The team is looking forward to building on the success of Sibos earlier this month, where the Ripple narrative really picked up momentum toward industry acceptance.

Speaking schedule:

  • Greg Kidd (Chief Risk Officer): “Cryptocurrencies & Consumer Protection Issues”—Sunday, Nov. 2 at 1:00-1:45pm
  • Chris Larsen (Co-founder and CEO): “Remittances: Retail, Electronic & Cryptocurrencies”—Sunday, Nov. 2 at 3:00-3:45pm

If you’re interested in learning how Ripple is driving down cross-border transaction costs for banks like Fidor, please contact us to schedule a meeting with a Ripple Labs representative.


Last Friday we released ripple-rest version 1.3.0 to master. Externally only a few things have changed, but the substantial additions in 1.3.0 are improved stability and verbose error handling. If you’ve been following the commits on [github](https://github.com/ripple/ripple-rest), you’ll have noticed that we’ve also vastly improved test coverage and simplified the project by removing the need for Postgres.

 

Below is a list of some of the major changes and an explanation of the decisions we made for this last release.

 

  • Improved error handling: Error handling logic has been rewritten to provide clearer feedback for all requests. Prior to 1.3.0, a failed request could still receive a 200-299 range HTTP status code, indicating only that the ripple-rest server was able to respond, not that the request itself succeeded. This put the burden on developers to parse through the response body to determine whether something was successful or not. In version 1.3.0, ripple-rest will only return a “success” (200-299 range) status when the actual request is successful, and developers can expect that the response body will match what a successful request looks like. For actual errors, ripple-rest will now include an error_type (a short code identifying the error), an error (a human-readable summary), and an optional message (a longer explanation of the error if needed); see the client sketch after this list. Details [here](http://dev.ripple.com/ripple-rest.html#errors).

 

  • DB support for SQLite on disk, and removal of Postgres support: Version 1.3.0 now directly supports both SQLite in memory and on disk. We’ve removed support for Postgres based on feedback that the installation has been a huge burden for the minimal amount of data that is stored in ripple-rest. The installation with SQLite is now much leaner and configuring a new database is as simple as pointing to a flat file location in the config.json. In the future, we may revisit adding additional database connectors for clustered and high availability deployments, but we’re much more keen on the usability and simplicity of only supporting SQLite at this point.

 

  • Config.json 2.0: The previous config.json 1.0.1 was confusing: disabling features like SSL required removing lines from the config file, while environment variables could be set to override config file values. We’ve cleaned up a lot of that messiness and modified the new config.json so that all configurations are fully transparent. SSL can be disabled simply by setting “ssl_enabled” to false, and to switch to SQLite in memory the “db_path” should be set to “:memory:” instead of pointing to a flat file. Lastly, as a reminder for folks who didn’t know, ripple-rest does support a multi-server configuration via the array of “rippled_servers”. Documentation on the config file can be found [here](https://github.com/ripple/ripple-rest/blob/develop/docs/server-configuration.md)

 

  • /v1/wallet/new endpoint: Easy and simple way to generate ripple wallets! No explanation needed!
  • Removed /v1/tx/{:hash} and /v1/transaction/{:hash}: Use `/v1/transactions/{:hash}`. This change serves to provide consistency with REST standards.

 

  • Removed /v1/payments: Use `/v1/accounts/{source_address}/payments` to submit a payment. This change serves to provide consistency in the payment flow.
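
To make the new error contract concrete, here is a minimal client sketch in Python (the base URL is an assumption about a local deployment; the error fields are the ones described in the error-handling item above):

```python
# A sketch of a client against ripple-rest 1.3.0's error contract: a 2xx
# status now reliably means success, and error bodies carry error_type,
# error, and an optional message.
import requests

BASE = "http://localhost:5990"   # assumed local ripple-rest instance

def get_transaction(tx_hash):
    resp = requests.get(f"{BASE}/v1/transactions/{tx_hash}")
    if 200 <= resp.status_code < 300:
        return resp.json()       # a 2xx now always means the request succeeded
    body = resp.json()
    raise RuntimeError(
        f"{body.get('error_type')}: {body.get('error')}"
        f" -- {body.get('message', 'no further details')}"
    )
```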

 

We appreciate the continued feedback from those of you building integrations with ripple-rest, and all the support you’ve given us so far.

 


Nearly 300 Ripple enthusiasts attended Around the World in 5 Seconds.

Despite pouring rain, nearly three hundred guests attended Around the World in 5 Seconds, a special night of demos and celebration at the Ripple Labs office in downtown San Francisco, an event meant to engage the local community and share our vision of Ripple’s potential.

Attendees ranged from engineers, product managers, and senior executives from blue-chip tech, banking and consulting companies to entrepreneurs bootstrapping their own ventures.

 

Signing in.

A series of product demos provided developers, investors, and industry leaders with a tangible, hands-on experience of how the Ripple protocol facilitates faster, cheaper, and more frictionless global payments than ever before.

 

Learning about the intricacies of real-time settlement and the internet-of-value.

One demo station was manned by Marco Montes, who you might recognize from the newly re-designed Ripple.com homepage. Marco is the founder and CEO of Saldo.mx, a novel remittance service that allows US customers to pay bills back in Mexico using the Ripple protocol.

 

 

Ripple Labs CTO Stefan Thomas and software engineer Evan Schwartz delivered two back-to-back tech talks on Codius, an ecosystem for developing distributed applications that utilizes smart contracts, to two jam-packed and enthusiastic crowds.

 

Stefan and Evan explain Codius.

The presentation was the first in a series of talks, part of our mission to better educate the broader community about Ripple technology and behind-the-scenes developments, as well as to share our take on the industry at large.

A warm thank you to all those who weathered the storm and helped make this inaugural event a resounding success. It surely won’t be the last, so we look forward to seeing you at the next one, along with those who weren’t able to make it out this time.

 

It was a packed house. See you next time!

Check out the Ripple Labs Facebook page for more photos of the event—courtesy of Ripple Labs senior software engineer and “head of photography,” Vahe Hovhannisyan. (You should also check out his Instagram.)
