In its comprehensive report on the impact of emerging payment schemes, the message of the World Economic Forum (WEF) is clear—the industry must integrate legacy systems with new technologies in order to leverage the best of both worlds.

The report, which had a mandate to “explore the transformative potential of new entrants and innovations,” is the culmination of “extensive outreach and dialogue with the financial services community, innovation community, academia and a large number of financial technology startups” over the course of fifteen months.

The release of the WEF report follows a string of analyses of innovation in the payments space by various industry organizations, from the European Banking Association to Santander bank in Spain.

As part of the report’s key findings for payments, the WEF concluded that the greatest potential for “decentralized and non-traditional payment schemes” such as distributed ledgers “may be to radically streamline the transfer of value, rather than as store of value”—thus creating “competitive pressure for the value transfer rails to become faster, cheaper and more borderless.”

In other words, decentralized payment schemes are all but declared heir apparent to legacy banking structures, allowing for the possibility that nontraditional payments networks could rival, disrupt, or be assimilated into the traditional financial network.

The report does not frame this as a question of if, but as a series of predictions of who, when, and how. Specifically, the WEF foresees three possible outcomes:

 

  1. Compete with an alternative network of financial providers
  2. Facilitate alternative payment schemes as complements
  3. Provide leaner, faster payment options within the existing network

In the first outcome, the two systems—the traditional and more modern financial systems—would remain disparate and have limited interaction with one another. This scenario is predicted to drive innovation, but possibly also expose consumers to unfamiliar risks.

The third outcome is considered the least likely—that incumbent institutions might transform their own payment and settlement systems, responding to competition with innovation to match. The financial industry is a slow-moving beast. Still, it appears many institutions get the picture. Banks like UBS, Deutsche Bank, and Citi are all betting big on fintech, launching so-called “innovation labs” around the world to experiment with new ideas.

But the WEF views the second outcome as the most productive, where the traditional banking structures adopt and integrate innovative technologies, fostering an ecosystem that combines the speed and ease of use of newer tech with the established identities of long-standing banks and improving the connectivity of historically siloed financial institutions.

The examples the WEF provides (Fidor with Ripple and CIC with M-Pesa) represent this blending of establishment and disruptors, offering a combination that, in the report’s words, “could be easily used for real-time payment and settlement between these institutions with no automated clearing house or correspondent banks required.”

This reality, the WEF suggests, represents the best of both worlds; the positioning and consumer confidence of legacy banking combined with the improved efficiency and compatibility with the real-time world that technologies can offer.

Read the World Economic Forum’s full report here

 

 


 

Ripple


An open marketplace has expanded both supply and demand for taxi services. Photo: Daniel Horacio Agostini

One unique feature of Ripple is the open nature of the network, which has numerous benefits for banks, market makers, regulators and ultimately consumers.

Given that the idea of an open value network is something of a new paradigm, it’s worth going over exactly what it might mean for the payment ecosystem’s various stakeholders.

The benefit of an open network for banks

Managing information is pivotal for banks. As part of their daily operations, banks need to manage an endless flow of payment data, which also includes customer information. Not only do customers expect and demand that this information be kept private; regulators require it. Moreover, it’s important for banks to maintain confidentiality since transaction information—such as the volume or the currency—is considered competitive intelligence.

Throughout our numerous conversations with banks and financial institutions, a common question was whether or not an open network could facilitate both privacy and confidentiality. At first glance, “open” may seem concerning because banks naturally expect payment networks to be private.

In reality, Ripple satisfies both privacy and confidentiality. While transaction information on the ledger is public, payment information is not. It’s difficult for anyone to associate transaction information with any specific bank.

Finally, there are benefits to transparency, especially for cross-border (out-of-network) payments, which have traditionally been relatively opaque. End-to-end traceability will not only reduce risk and delays, it should also reduce the cost of compliance, allowing banks to lower the costs of fee disclosures and regulatory reporting.

The benefit of an open network for market makers

The market for settling payments is huge. The heart of the issue is that it isn’t necessarily accessible, which undermines both efficiency and competition. Meanwhile, market makers already specialize in managing capital and the associated risks. As we’ve discussed previously, Ripple essentially gives market makers access to a marketplace for float, where they can compete to provide liquidity, which lowers costs for banks and businesses.

One way to understand the impact of expanding accessibility is to look at how Uber affected the marketplace for taxis. In San Francisco, the taxi market was about $140 million per year, according to Uber CEO Travis Kalanick. But Uber is already making roughly three times that, with revenues of $500 million per year. This means that competition doesn’t necessarily cannibalize existing revenue; it can help the entire pie grow much larger. In the case of Uber, by expanding the supply of drivers and offering a far better experience, many more customers decided to use taxi services rather than other modes of transportation, expanding the marketplace and eventually spurring on both innovation and competition.

The benefit of an open network for regulators

The role of regulators is to safeguard the integrity of payment systems, primarily because payments can be used to finance activity that is deemed detrimental to society—such as crime and terrorism. As a result, being able to track payments is fundamental to regulators doing their job.

With the way things are today, it’s extremely difficult to monitor transaction activity given disparate systems, networks, and platforms plus the continued prevalence of physical cash. As a result, transaction monitoring is a highly manual and operationally intensive process, which means that regulators incur high costs simply to do their job or in some cases, they aren’t able to do their job as effectively as they would like.

For regulators, the open nature of Ripple provides further transparency and payment traceability, thereby reducing their costs while allowing them to do their jobs more effectively. If regulators and banks are able to automate compliance processes with Ripple, it reduces costs for everyone in the ecosystem.

In the end, however, the real winners are consumers, who benefit from a safer, more open, and competitive ecosystem that provides a platform for innovation, better access, and lower costs.


Ripple


Photo: Christina/Flickr

Since the beginning of recorded history, the process of standardization has set the stage for immense gains in collaboration, productivity, and innovation. Standards allow us to find collective harmony within a society that grows increasingly complex.

Naturally, the first standards were ways of measuring time and space—from the Mayan Calendar to King Henry I of England’s preferred unit of measurement in 1120 AD—the length of his arm—which he instituted as the “ell.”

While early standards often existed in part as a vehicle for increasing the prestige and power of rulers and regulators that created them, they would—as expectations evolved—become a source of individual empowerment. Following the French Revolution, a new system of measurement was promoted as “a way to erase the arbitrary nature of local rule,” writes Andrew Russell, author of the book, Open Standards and the Digital Age: History, Ideology, and Networks. The argument being—How could citizens truly be free, independent, and self-reliant if they weren’t able to make calculations and measurements on their own?

Indeed, it was broad standardization that paved the way for the Industrial Revolution. Interchangeable parts dramatically reduced costs, allowing for easy assembly of new goods, cheap repairs, and most of all, they reduced the time and skill required for workers. Or consider how those manufactured products are then shipped—likely by train. Prior to the standardization of the railroad gauge, cargo traveling between regions would have to be unloaded and moved to new trains because the distance between rails no longer matched the train’s wheels.

 


Photo: Flickr

On the other end of the spectrum, the failure to enact proper standards isn’t just inefficient and costly, it can prove disastrous—such as in 1904, when a vicious fire broke out in Baltimore. New York, Philadelphia, and Washington, DC quickly sent support, but found their efforts to be in vain as their fire hoses weren’t compatible with local fire hydrants. The fire would burn for over 30 hours and destroy 2,500 buildings.

While the situation with today’s payment systems isn’t nearly as dangerous, the lack of a universal standard for transacting value is implicitly costly and serves as a persistent bottleneck toward true financial innovation.

In the U.S., the last time there was broad consensus on a new payments standard was with the creation of the Automated Clearing House in the 1970s, an electronic system meant to replace paper checks. That system, which still essentially enables all domestic payments, has remained relatively unchanged in four decades. The primary reason is that achieving consensus for new standards isn’t easy, especially in an industry as far-reaching and as fundamental to the economy as payments, where there is a wide range of constituents whose incentives don’t always align. So even as the Federal Reserve pushes for real-time payments, effecting actual change remains elusive, and the technology becomes increasingly antiquated.

While payment standards find themselves stuck in time, standards everywhere else have continued to evolve.

The latter half of the 20th century saw the rise of the concept of the open standard. While there’s no set definition for an open standard, there are a few commonly accepted properties, such as its availability to the general public while being unencumbered by patents.

Early manifestations of an open standard were physical, the quintessential embodiment being the creation of the shipping container. Conceptualized by Malcom McLean in the 1950s and later standardized by the U.S. Maritime Administration and the International Standards Organization in the 1960s, the shipping container became a universal standard for moving goods.

As the standard became widely accepted and used, shipping boomed and costs spiraled downward. In other words, the birth of globalization began with a standard. Such is the ubiquity of shipping containers today that they’re used for low-cost housing on the outskirts of Berlin and as a beer garden in trendy parts of San Francisco.

 


Photo: Håkan Dahlström/Flickr

As it turned out, open standards wouldn’t just facilitate transportation of goods, they’d also enable the efficient and cheap sharing of information through the internet.

Before the rise of open standards, it was physically impossible to connect different computers. Even if you could connect them, they each required proprietary information to understand one another. The creation of standards like Ethernet, TCP/IP, and HTML allowed an unprecedented level of interoperability and simplicity when it came to transporting data. “As we know in hindsight, each of these open standards created an explosion of innovation,” tech luminary Joi Ito wrote in 2009.

And Internet standards are still evolving—from Creative Commons for copyrighted material to OAuth for online authorization.

While open standards have liberated the movement of physical goods and digital information, moving dollars and cents has been disappointingly left behind. It’s one of the primary reasons that there are still 2.5 billion people who lack access to the global economy.

In many cases, serving the unserved starts with setting a standard. One place where that idea has taken hold is Peru, which has one of the lowest rates of inclusion in all of South America—8 out of 10 working adults don’t have access to proper financial services.

When the country initially investigated how to provide more people with access, officials assumed the problem was mostly technological. They soon discovered that technology was only a small piece of the pie and that, in order for financial inclusion efforts to truly move forward, regulators would have to create a clear regulatory framework that standardized new technologies while promoting innovation and competition.

In 2013, the Peruvian government did just that, enacting e-money legislation that would blaze a path for serving those living in poverty. It wasn’t long before major financial institutions were onboard. Today, Peru serves as an international model for taking on inclusion.

The U.S. appears to be following suit. A recent report from the Federal Reserve highlighted four paths to modernizing the U.S. payment system. Tellingly, “option 2” of the report details the development of “protocols and standards for sending and receiving payments.”

That the U.S. central bank has acknowledged the potential for a new payments standard is momentous. Intelligently crafted standards create the potential for a common language, a universal platform where innovation and economics can flourish.


Ripple

We’re proud to announce the first release of our new Gateway Guide, a comprehensive manual to operating a gateway in the Ripple network. Whether you’re trying to understand how a gateway makes revenue, or how to use the authorized accounts feature, or even just what a warm wallet is, the gateway guide has you covered.

The guide comes with step-by-step, diagrammed explanations of typical gateway operations, a hefty list of precautions to make your gateway safer, and concrete examples of all the API calls you need to perform in order to get your gateway accounts set up and secure.

We’re proud of all the work we’ve done to make the business of running a gateway easier, but there’s still more work to do. If you have any questions, comments, or ideas, please send feedback to – or post it on our forums. We’d love to hear from you!

Ripple

So I’m not sure if this kind of development methodology has ever been applied to such an extreme before, so I figured I’d document it. In a nutshell, it’s sort of like test-driven triplet-programming development.

While speed-developing our alpha codebase, four of us sat around a table in the office in Berlin. Three of us (Vitalik, Jeff and me) were each coding our own clean-room implementation of the Ethereum protocol. The fourth was Christoph, our master of testing.

Our target was to have three fully compatible implementations as well as an unambiguous specification by the end of three days of substantial development. Over distance, this process normally takes a few weeks.

This time we needed to expedite it; our process was quite simple. First, we discuss the various consensus-breaking changes and formally describe them as best we can. Then we each individually crack on with coding up the changes simultaneously, popping our heads up about possible clarifications to the specification as needed. Meanwhile, Christoph devises and codes tests, populating the results either manually or with the farthest-ahead of the implementations (C++, generally :-P).

After a milestone’s worth of changes are coded up and the tests written, each clean-room implementation is tested against the common test data that Christoph compiled. Where issues are found, we debug in a group. So far, this has proved to be an effective way of producing well-tested code quickly, and perhaps more importantly, in delivering clear unambiguous formal specifications.
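To make that workflow concrete, here is a rough sketch of what such a cross-client test harness could look like. The file name, fixture format, and client commands are hypothetical stand-ins, not the actual Ethereum test suite; the point is simply that every implementation is run against the same vectors and any divergence is flagged.

```python
import json
import subprocess

# Hypothetical shared fixtures: each case maps an input to the expected consensus result.
with open("consensus_tests.json") as f:
    cases = json.load(f)

# Hypothetical per-client test runners; each reads a case on stdin and prints its result.
clients = {
    "cpp": ["./eth-test-runner"],
    "go":  ["./geth-test-runner"],
    "py":  ["python", "pyeth_tests.py"],
}

for name, case in cases.items():
    expected = case["expected"]
    for client, cmd in clients.items():
        out = subprocess.run(cmd, input=json.dumps(case["input"]),
                             capture_output=True, text=True, check=True)
        actual = json.loads(out.stdout)
        if actual != expected:
            print(f"MISMATCH {name}: {client} returned {actual}, expected {expected}")
```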

Are there any more examples of such techniques taken to the extreme?

The post The Ethereum Development Process appeared first on .

 

I’m Vinay Gupta, the newly minted release coordinator for Ethereum. I’ve been working with the comms team on strategy, and have now come aboard to help smooth the release process.

I’ll be about 50/50 on comms and on release coordination. A lot of that is going to be about keeping you updated on progress: new features, new documentation, and hopefully writing about great new services you can use, so it’s in the hinterland between comms and project management. In theory, once I’m up to speed, I should be providing you with the answers to the question: “what’s going on?” But give me some time, because getting up to speed on all of this is nontrivial. We have a very large development team working with very advanced and often quite complex new technology, and keeping everybody up to date on that simultaneously is going to be tricky. To do that well, I have to actually understand what’s going on at quite a technical level first. I have a lot to wrap my head around. I was a 3D graphics programmer through the 1990s, and have a reasonably strong grounding in financial cryptography (I was, and I am not ashamed to admit it, a cypherpunk in those days). But we have a 25-30 person team working in parallel on several different aspects of Ethereum, so… patience please while I master the current state of play, so that I can communicate about what’s changing as we move forwards. It’s a lot of context to acquire, as I’m sure you all know – if there’s an occasional gaffe as I get oriented, forgive me!

I’ve just come back from Switzerland, where I got to meet a lot of the team, my “orientation week” being three days during the release planning meetings. Gav writes in some detail about that week here, so rather than repeat Gav, read his post, and I’ll press on to tell you what was on that release white board.

There is good news, there is bad news, but above all, there is a release schedule.

There will be another blog post with much more detail about the release schedule for the first live Ethereum network shortly – likely by the end of this week, as the developer meeting that Gav mentions in his post winds up and the conclusions are communicated. That’s the post which will give you timelines you can start firing up your mining rigs to, feature lists, and so on. Until then, let me lay out roughly what the four major steps in the release process will look like and we can get into detail soon.

Let’s lay out where we are first: Ethereum is a sprawling project with many teams in many countries implementing the same protocol in several different language versions so it can be integrated into the widest possible range of other systems/ecologies, and to provide long term resilience and future-proofing. In addition to that broad effort, there are several specific applications/toolchains to help people view, build and interact with Ethereum: Mist, Mix, Alethzero and so on. Starting quite soon, and over the next few months, a series of these tools will be stood up as late alpha, beta, ready for general use and shipped. Because the network is valuable, and the network is only as secure as the software we provide, this is going to be a security-led not schedule-led process. You want it done right, we want it done right, and this is one of the most revolutionary software projects ever shipped. 

While you’re waiting for the all-singing, all-dancing CERN httpd + NCSA Mosaic combo, the “we have just launched the Future of the Internet” breakthrough system, we will actually be releasing the code and the tools in layers. We are standing up the infrastructure for a whole new web a piece at a time: server first, plus tool chain, and then the full user experience rich client. This makes sense: a client needs something to connect to, so the server infrastructure has to come first. An internet based on this metacomputer model is going to be a very different place, and getting a good interface to that is going to present a whole new set of challenges. There’s no way to simply put all the pieces together and hope it clips into place like forming an arch by throwing bricks in the air: we need scaffolding, and precise fit. We get that by concentrating on the underlying technical aspects for a while, including mining, the underlying network and so on, and then as that is widely deployed, stable and trusted, we will be moving up the stack towards the graphical user interface via Mist in the next few months. None of these pieces stand alone, either: the network needs miners and exchanges, and it takes people time to get organized to do that work properly. The Mist client needs applications, or it’s a bare browser with nothing to connect to, and it takes people time to write those applications. Each change, each step forwards, involves a lot of conversations and support as we get people set up with the new software and help them get their projects off the ground: the whole thing together is an ecology. Each piece needs its own time, its own attention. We have to do this in phases for all of these reasons, and more.

It took bitcoin, a much less complex project, several years to cover that terrain: we have a larger team, but a more complex project. On the other hand, if you’re following the github repositories, you can see how much progress is being made, week by week, day by day, so… verify for yourself where we are.

So, now that we’re all on the same page about real-world software engineering, let’s actually look at the phases of this release process!

Release Step One: Frontier

Frontier takes a model familiar to Bitcoiners, and stands it up for our initial release. Frontier is the Ethereum network in its barest form: an interface to mine Ether, and a way to upload and execute contracts. The main use of Frontier on the launch trajectory is to get mining operations and Ether exchanges running, so the community can get their mining rigs started, and to start to establish a “live” environment where people can test DApps and acquire Ether to upload their own software into Ethereum.

This is “no user interface to speak of” command line country, and you will be expected to be quite expert in the whole Ethereum world model, as well as to have substantial mastery of the tools at your disposal.

However, this is not a test net: this is a frontier release. If you are equipped, come along! Do not die of dysentery on the way.

Frontier showcases three areas of real utility:

  • you can mine real Ether, at 10% of the normal Ether issuance rate (0.59 Ether per block reward), which can be spent to run programs or exchanged for other things, as normal – this is real Ether.
  • you can exchange Ether for Bitcoin, or with other users, if you need Ether to run code etc.
  • if you already bought Ether during the crowd sale, and you are fully conversant with the frontier environment, you can use it on the frontier network.
  • we do not recommend this, but have a very substantial security-and-recovery process in place to make it safer – see below 

We will migrate from Frontier to Homestead once Frontier is fully stable in the eyes of the core devs and the auditors:

  • when we are ready to move to Homestead, the release after Frontier, the Frontier network will be shut down; Ether values in wallets will be transferred, but state in contracts will likely be erased (more information to follow on this in later blog posts)
  • switchover to the new network will be enforced by “TheBomb”

This is very early release software: feature complete within these boundaries, but with a substantial risk of unexpected behaviours unseen in either the test net or the security review. And it’s not just us that will be putting new code into production: contracts, exchanges, miners, everybody else in the ecosystem will be shipping new services. Any one of those components getting seriously screwed up could impact a lot of users, and we want to shake bugs out of the ecosystem as a whole, not simply our own infrastructure: we are all in this together.

However, to help you safeguard your Ether, we have the following mechanisms planned (more details from the developers will follow soon as the security model is finalised):

  • if you do not perform any transactions, we guarantee 100% your Ether will not be touched and will be waiting for you once we move beyond Frontier
  • if you perform transactions, we guarantee 100% that any Ether you did not spend will not be touched and will be available to you once we move beyond Frontier
  • Ether you spend will not fall through cracks into other people’s pockets or vanish without a trace: in the unlikely event that this happens, you have 24 hours to inform us, and we will freeze the network, return to the last good state, and start again with the bug patched
  • yes, this implies a real risk of network instability: everything possible has been done to prevent this, but this is a brand new aeroplane – take your parachute!
  • we will periodically checkpoint the network to show that neither user report nor automated testing has reported any problems. We expect the checkpoints will be around once daily, with a mean of around 12 hours of latency
  • exchanges etc. will be strongly encouraged to wait for checkpoints to be validated before sending out payments in fiat or bitcoin. Ethereum will provide explicit support to aid exchanges in determining what Ether transactions have fully cleared

Over the course of the next few weeks several pieces of software have to be integrated to maintain this basket of security features so we can allow genesis block Ether on to this platform without unacceptable risks. Building that infrastructure is a new process, and while it looks like a safe, sane and conservative schedule, there is always a chance of a delay as the unknown unknown is discovered either by us, the bug bounty hunters or by the security auditors. There will be a post shortly which goes through this release plan in real technical detail, and I’ll have a lot of direct input from the devs on that post, so for now take this with a pinch of salt and we will have hard details and expected dates as soon as possible. 

Release Step Two: Homestead

Homestead is where we move after Frontier. We expect the following three major changes.

  • Ether mining will be at 100% rather than 10% of the usual reward rate
  • checkpointing and manual network halts should never be necessary, although it is likely that checkpointing will continue if there is a general demand for it
  • we will remove the severe risk warning from putting your Ether on the network, although we will not consider the software to be out of beta until Metropolis

Still command line, so much the same feature set as Frontier, but this one we tell you is ready to go, within the relevant parameters.

How long will there be between Frontier and Homestead? Depends entirely on how Frontier performs: best case is not less than a month. We will have a pretty good idea of whether things are going smoothly or not from network review, so we will keep you in the loop through this process.

Release Step Three: Metropolis

Metropolis is when we finally officially release a relatively full-featured user interface for non-technical users of Ethereum, and throw the doors open: Mist launches, and we expect this launch to include a DApp store and several anchor tenant projects with full-featured, well-designed programs to showcase the full power of the network. This is what we are all waiting for, and working towards.

In practice, I suspect there will be at least one, and probably two as-yet-unnamed steps between Homestead and Metropolis: I’m open to suggestions for names (write to vinay[at]ethdev.com). Features will be sensible checkpoints on the way: specific feature sets inside of Mist would be my guess, but I’m still getting my head around that, so I expect we will cross those bridges after Homestead is stood up.

Release Step Four: Serenity

There’s just one thing left to discuss: mining. Proof of Work implies the inefficient conversion of electricity into heat, Ether and network stability, and we would quite like to not warm the atmosphere with our software more than is absolutely necessary. Short of buying carbon offsets for every unit of Ether mined (is that such a bad idea?), we need an algorithmic fix: the infamous Proof of Stake. 

Switching the network from Proof of Work to Proof of Stake is going to require a substantial switch, a transition process potentially much like the one between Frontier and Homestead. Similar rollback measures may be required, although in all probability more sophisticated mechanisms will be deployed (e.g. running both mechanisms together, with Proof of Work dominant, and flagging any cases where Proof of Stake gives a different output.)

This seems a long way out, but it’s not as far away as all that: the work is ongoing.

Proof of Work is a brutal waste of computing power – like democracy*, the worst system except all the others (*voluntarism etc. have yet to be tried at scale). Freed from that constraint, the network should be faster, more efficient, easier for newcomers to get into, and more resistant to cartelization of mining capacity etc. This is probably going to be almost as big a step forwards as putting smart contracts into a block chain in the first place, by the time all is said and done. It is a ways out. It will be worth it. 

Timelines

As you have seen since the Ether Sale, progress has been rapid and stable. Code on the critical path is getting written, teams are effective and efficient, and over-all the organization is getting things done. Reinventing the digital age is not easy, but somebody has to do it. Right now that is us.

We anticipate roughly one major announcement a month for the next few months, and then a delay while Metropolis is prepared. There will also be DEVcon One, an opportunity to come, learn the practical business of building and shipping DApps, meet fellow developers, potential investors, and understand the likely shape of things to come.

We will give you information about each release in more detail as each release approaches, but I want to give you the big overview of how this works and where we are going, fill in some of the gaps, highlight what is changing, both technically and in our communications and business partnership, and present you with an overview of what the summer is going to be like as we move down the path towards Serenity, another world changing technology.

I’m very glad to be part of this process. I’m a little at sea right now trying to wrap my head around the sheer scope of the project, and I’m hoping to actually visit a lot of the development teams over the summer to get the stories and put faces to names. This is a big, diverse project and, beyond the project itself, the launch of a new sociotechnical ecosystem. We are, after all, a platform effort: what’s really going to turn this into magic is you, and the things you build on top of the tools we’re all working so hard to ship. We are making tools for tool-makers.

Vinay signing off for now. More news soon!

 

The post The Ethereum Launch Process appeared first on .

 

I was woken by Vitalik’s call at 5:55 this morning; pitch black outside, nighttime was still upon us. Nonetheless, it was time to leave and this week had best start on the right foot.

The 25-minute walk in darkness from the Zug-based headquarters to the train station was wet. Streetlights reflecting off the puddles on the clean Swiss streets provided a picturesque, if quiet, march into town. I couldn’t help but think the rain running down my face was a very liquid reminder of the impending seasonal change, and then, on consideration, how fast the last nine months had gone.

Solid Foundations

The last week was spent in Zug by the Ethereum foundation board and ÐΞV leadership (Vitalik, Mihai and Taylor, who officially form the foundation’s board; Anthony and Joseph as the other official advisors; and Aeron & Jutta as the ÐΞV executive, joined by Jeff and myself wearing the multiple hats of ÐΞV and advisory). The chief outcome of this was the dissemination of Vitalik’s superb plan to reform the foundation and turn it into a professional entity. The board will be recruited from accomplished professionals with minimal conflicts of interest; the present set of “founders” will officially retire from those positions and a professional executive will be recruited, the latter process led by Joseph. Anthony will take a greater ambassadorial role for Ethereum in China and North America. Conversely, ÐΞV will function much more as a department of the Foundation’s executive than as a largely independent entity. Finally, I presented the release strategy to the others, an event after which I’ve never seen quite so many photos taken of a whiteboard. Needless to say, all was well received by the board and advisors. More information will be coming soon.

As I write this, I’m sitting on a crowded early commuter train, Vinay Gupta in tow, who took on a much more substantive role this week as release coordinator. He’ll be helping with release strategy and with keeping you informed of our release process. This week, which might rather dramatically be described as ‘pivotal’ in the release process, will see Jeff, Vitalik and me sit around a table and develop all the PoC-9 changes, related unit tests, and integrations in three days, joined by our indomitable Master of Testing, Christoph. The outcome of this week will inform our announcement later this week outlining in clear terms what we will be releasing and when.

I’m sorry it has been so long without an update. The last two months have been somewhat busy, choked up with travel and meetings, with the remaining time soaked up by coding, team-leading and management. The team is now substantially formed; the formal security audit started four weeks ago; the bounty programme is running smoothly. The latter processes are in the exceedingly capable hands of Jutta and Gustav. Aeron, meanwhile, will be stepping down as the ÐΞV head of finance and operations and assuming the role he was initially brought aboard for: system modelling. We’ll hopefully be able to announce his successors next week (yes, that was plural; he has been doing the jobs of 2.5 people over the last few months).

We are also in the process of forming partnerships with third parties in the industry, with George, Jutta and myself managing this process. I’m happy to announce that at least three exchanges will be supporting Ether from day one on their trading platforms (details of which we’ll announce soon), with more exchanges to follow. Marek and Alex are providing technical support there, with Marek going so far as to make a substantial reference exchange implementation.

I also finished the first draft of ICAP, the Ethereum Inter-exchange Client Address Protocol, an IBAN-compatible system for referencing and transacting to client accounts, aimed at streamlining the process of transferring funds worry-free between exchanges and, ultimately, making KYC and AML pains a thing of the past. The IBAN compatibility may even provide the possibility of easy integration with existing banking infrastructure at some point in the future.
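For the curious, the IBAN-compatibility piece rests on the standard ISO 7064 mod-97-10 check digits that every IBAN carries. The sketch below shows only that generic computation; the country-code prefix and account body used in the example are illustrative assumptions, not the normative ICAP field layout.

```python
def mod97_value(s: str) -> int:
    """Convert letters to numbers (A=10 ... Z=35) and reduce modulo 97, as in ISO 7064."""
    digits = "".join(str(int(c, 36)) for c in s.upper())
    return int(digits) % 97

def with_check_digits(country: str, body: str) -> str:
    """Compute the two IBAN-style check digits for a country code and account body."""
    check = 98 - mod97_value(body + country + "00")
    return f"{country}{check:02d}{body}"

def is_valid(identifier: str) -> bool:
    """An IBAN-style identifier is valid when the rearranged string reduces to 1 mod 97."""
    s = identifier.replace(" ", "")
    return mod97_value(s[4:] + s[:4]) == 1

# Illustrative only: the "XE" prefix and the body are placeholders, not the real ICAP layout.
example = with_check_digits("XE", "EXAMPLECLIENTACCOUNT01")
assert is_valid(example)
```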

Developments

Proof-of-Concept releases VII and VIII were released. NatSpec, the “natural language specification format” and the basis of our transaction security, was prototyped and integrated. Under Marek’s watch, now helped by Fabian, ethereum.js is truly coming of age, with near source-level compatibility with Solidity for contract interaction and support for the typed ABI with calling and events, the latter providing hassle-free state-change reporting. Mix, our IDE, underwent its first release and, after some teething issues, is getting good use thanks to the excellent work done by Arkadiy and Yann. Solidity had numerous features added and is swiftly approaching 1.0 status, with Christian, Lefteris and Liana to thank. Marian’s work goes ever forward on the network monitoring system, while Sven and Heiko have been working diligently on the stress testing infrastructure which analyses and tests peer network formation and performance. They’ll soon be joined by Alex and Lefteris to accelerate this programme.

So one of the major things that needed sorting for the next release was the proof-of-work algorithm that we’ll use. This had a number of requirements, two of which were actually pulling in opposite directions, but basically it had to be a light-client-friendly algorithm whose speed of mining is proportional to IO bandwidth and which requires a considerable amount of RAM to do so. There was a vague consensus that we (well… Vitalik and Matthew) head in the direction of a Hashimoto-like algorithm (a proof-of-work designed for the Bitcoin blockchain that aims to be IO-bound, meaning, roughly, that to make it go any faster you’d need to add more memory rather than just sponsoring a smaller/faster ASIC). Since our blockchain has a number of important differences with the Bitcoin blockchain (mainly in transaction density), stemming from the extremely short 12s block time we’re aiming for, we would have to use not the blockchain data itself, as Hashimoto does, but rather an artificially created dataset, generated with an algorithm known as Dagger (yes, some will remember it as Vitalik’s first and flawed attempt at a memory-hard proof-of-work).
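As a toy illustration of the IO-bound idea (the parameters, names, and hash choice below are mine, not the real Dagger-Hashimoto), the defining property is that each nonce attempt forces a chain of pseudo-random reads into a large dataset, so mining speed is bounded by memory bandwidth rather than by raw hashing:

```python
import hashlib

WORDS = 1 << 16      # toy dataset size; the real dataset is vastly larger
ACCESSES = 64        # pseudo-random dataset reads per nonce attempt

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def make_dataset(seed: bytes) -> list:
    """Deterministically expand a small seed into a large dataset (the Dagger-style step)."""
    return [h(seed, i.to_bytes(4, "big")) for i in range(WORDS)]

def pow_value(dataset: list, header: bytes, nonce: int) -> int:
    """Mix header and nonce with ACCESSES dataset reads; each read index depends on the last mix."""
    mix = h(header, nonce.to_bytes(8, "big"))
    for _ in range(ACCESSES):
        index = int.from_bytes(mix[:4], "big") % WORDS
        mix = h(mix, dataset[index])
    return int.from_bytes(mix, "big")

def mine(dataset: list, header: bytes, target: int) -> int:
    nonce = 0
    while pow_value(dataset, header, nonce) >= target:
        nonce += 1
    return nonce

# Usage: in this toy, each dataset word is derived directly from the seed, so a verifier
# can recompute just the words it needs rather than storing the whole dataset.
dataset = make_dataset(b"example-seed")
nonce = mine(dataset, b"block-header", 1 << 248)   # deliberately easy toy target
assert pow_value(dataset, b"block-header", nonce) < 1 << 248
```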

While this looked like a good direction to be going in, a swift audit of Vitalik and Matt’s initial algorithm by Tim Hughes (ex-Director of Technology at Frontier Developments and expert in low-level CPU and GPU operation and optimisation) showed major flaws. With his help, they were able to work together to devise a substantially more watertight algorithm that, we are confident to say, should make the job of developing an FPGA/ASIC sufficiently difficult, especially given our determination to switch to a proof-of-stake system within the next 6-12 months.

Last, but not least, the new website was launched. Kudos to Ian and Konstantin for mucking in and getting it done. Next stop will be the developer site, which will be loosely based on the excellent resource at qt.io, the aim being to provide a one-stop extravaganza of up-to-date reference documentation, curated tutorials, examples, recipes, downloads, issue tracking, and build status.

Onwards

So, as Alex, our networking maestro, might say, these are exciting times. When deep in the nitty-gritty of development you sometimes forget quite how world-altering the technology you’re creating is, which is probably just as well, since the gravity of the matter at hand would be continually distracting. Nonetheless, when one starts considering the near-term alterations that we can really bring, one realises that the wave of change is at once unavoidable and heading straight for you. For what it’s worth, I find an excellent accompaniment to this crazy life is the superb music of Pretty Lights.

The post Gav’s Ethereum ÐΞV Update V appeared first on .

 

One of the issues inherent in many kinds of consensus architectures is that although they can be made to be robust against attackers or collusions up to a certain size, if an attacker gets large enough they are still, fundamentally, exploitable. If attackers in a proof of work system have less than 25% of mining power and everyone else is non-colluding and rational, then we can show that proof of work is secure; however, if an attacker is large enough that they can actually succeed, then the attack costs nothing – and other miners actually have the incentive to go along with the attack. SchellingCoin, as we saw, is vulnerable to a so-called P + epsilon attack in the presence of an attacker willing to commit to bribing a large enough amount, and is itself capturable by a majority-controlling attacker in much the same style as proof of work.

One question that we may want to ask is, can we do better than this? Particularly if a pseudonymous cryptocurrency like Bitcoin succeeds, and arguably even if it does not, there doubtlessly exists some shadowy venture capital industry willing to put up the billions of dollars needed to launch such attacks if they can be sure that they can quickly earn a profit from executing them. Hence, what we would like to have is cryptoeconomic mechanisms that are not just stable, in the sense that there is a large margin of minimum “size” that an attacker needs to have, but also unexploitable – although we can never measure and account for all of the extrinsic ways that one can profit from attacking a protocol, we want to at the very least be sure that the protocol presents no intrinsic profit potential from an attack, and ideally a maximally high intrinsic cost.

For some kinds of protocols, there is such a possibility; for example, with proof of stake we can punish double-signing, and even if a hostile fork succeeds the participants in the fork would still lose their deposits (note that to properly accomplish this we need to add an explicit rule that forks that refuse to include evidence of double-signing for some time are to be considered invalid). Unfortunately, for SchellingCoin-style mechanisms as they currently are, there is no such possibility. There is no way to cryptographically tell the difference between a SchellingCoin instance that votes for the temperature in San Francisco being 4000000000’C because it actually is that hot, and an instance that votes for such a temperature because the attacker committed to bribe people to vote that way. Voting-based DAOs, lacking an equivalent of shareholder regulation, are vulnerable to attacks where 51% of participants collude to take all of the DAO’s assets for themselves. So what can we do?

Between Truth and Lies

One of the key properties that all of these mechanisms have is that they can be described as being objective: the protocol’s operation and consensus can be maintained at all times using solely nodes knowing nothing but the full set of data that has been published and the rules of the protocol itself. There is no additional “external information” (eg. recent block hashes from block explorers, details about specific forking events, knowledge of external facts, reputation, etc) that is required in order to deal with the protocol securely. This is in contrast to what we will describe as subjective mechanisms – mechanisms where external information is required to securely interact with them.

When there exist multiple levels of the cryptoeconomic application stack, each level can be objective or subjective separately: Codius allows for subjectively determined scoring of oracles for smart contract validation on top of objective blockchains (as each individual user must decide for themselves whether or not a particular oracle is trustworthy), and Ripple’s decentralized exchange provides objective execution on top of an ultimately subjective blockchain. In general, however, cryptoeconomic protocols so far tend to try to be objective where possible.

Objectivity has often been hailed as one of the primary features of Bitcoin, and indeed it has many benefits. However, at the same time it is also a curse. The fundamental problem is this: as soon as you try to introduce something extra-cryptoeconomic, whether real-world currency prices, temperatures, events, reputation, or even time, from the outside world into the cryptoeconomic world, you are trying to create a link where before there was absolutely none. To see how this is an issue, consider the following two scenarios:

  • The truth is B, and most participants are honestly following the standard protocol through which the contract discovers that the truth is B, but 20% are attackers or accepted a bribe.
  • The truth is A, but 80% of participants are attackers or accepted a bribe to pretend that the truth is B.

From the point of view of the protocol, the two are completely indistinguishable; between truth and lies, the protocol is precisely symmetrical. Hence, epistemic takeovers (the attacker convincing everyone else that they have convinced everyone else to go along with an attack, potentially flipping an equilibrium at zero cost), P + epsilon attacks, profitable 51% attacks from extremely wealthy actors, etc, all begin to enter the picture. Although one might think at first glance that objective systems, with no reliance on any actor using anything but information supplied through the protocol, are easy to analyze, this panoply of issues reveals that to a large extent the exact opposite is the case: objective protocols are vulnerable to takeovers, and potentially zero-cost takeovers, and standard economics and game theory quite simply have very bad tools for analyzing equilibrium flips. The closest thing that we currently have to a science that actually does try to analyze the hardness of equilibrium flips is chaos theory, and it will be an interesting day when crypto-protocols start to become advertised as “chaos-theoretically guaranteed to protect your grandma’s funds”.

Hence, subjectivity. The power behind subjectivity lies in the fact that concepts like manipulation, takeovers and deceit, not detectable or in some cases even definable in pure cryptography, can be understood by the human community surrounding the protocol just fine. To see how subjectivity may work in action, let us jump straight to an example. The example supplied here will define a new, third, hypothetical form of blockchain or DAO governance, which can be used to complement futarchy and democracy: subjectivocracy. Pure subjectivocracy is defined quite simply:

  1. If everyone agrees, go with the unanimous decision.
  2. If there is a disagreement, say between decision A and decision B, split the blockchain/DAO into two forks, where one fork implements decision A and the other implements decision B.

All forks are allowed to exist; it’s left up to the surrounding community to decide which forks they care about. Subjectivocracy is in some sense the ultimate non-coercive form of governance; no one is ever forced to accept a situation where they don’t get their own way, the only catch being that if you have policy preferences that are unpopular then you will end up on a fork where few others are left to interact with you. Perhaps, in some futuristic society where nearly all resources are digital and everything that is material and useful is too-cheap-to-meter, subjectivocracy may become the preferred form of government; but until then the cryptoeconomy seems like a perfect initial use case.
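As a toy model of the rule above (the data structures here are mine, purely for illustration), a subjectivocratic mechanism never overrules anyone; it simply multiplies forks on disagreement and leaves the choice between them to the surrounding community:

```python
from dataclasses import dataclass, field

@dataclass
class Fork:
    """One branch of a subjectivocratic DAO: a label plus the decisions it has adopted."""
    label: str
    decisions: list = field(default_factory=list)

def resolve(forks: list, question: str, votes: dict) -> list:
    """Pure subjectivocracy: unanimous answers are adopted; disagreement splits every fork."""
    answers = set(votes.values())
    if len(answers) == 1:
        answer = answers.pop()
        for f in forks:
            f.decisions.append((question, answer))
        return forks
    # Disagreement: each existing fork splits into one branch per answer.
    return [
        Fork(f"{f.label}/{ans}", f.decisions + [(question, ans)])
        for f in forks
        for ans in sorted(answers)
    ]

# Which forks survive is left to the community, not the protocol.
forks = [Fork("genesis")]
forks = resolve(forks, "adopt proposal 1?", {"alice": "yes", "bob": "yes"})
forks = resolve(forks, "temperature in SF?", {"alice": "20C", "bob": "4000000000C"})
print([f.label for f in forks])   # ['genesis/20C', 'genesis/4000000000C']
```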

For another example, we can also see how to apply subjectivocracy to SchellingCoin. First, let us define our “objective” version of SchellingCoin for comparison’s sake:

  1. The SchellingCoin mechanism has an associated sub-currency.
  2. Anyone has the ability to “join” the mechanism by purchasing units of the currency and placing them as a security deposit. Weight of participation is proportional to the size of the deposit, as usual.
  3. Anyone has the ability to ask the mechanism a question by paying a fixed fee in that mechanism’s currency.
  4. For a given question, all voters in the mechanism vote either A or B.
  5. Everyone who voted with the majority gets a share of the question fee; everyone who voted against the majority gets nothing.
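A minimal sketch of steps 4 and 5 above (the joining and fee mechanics of steps 1-3 are assumed to happen elsewhere, and paying the fee out pro-rata by deposit is my own reading of “a share”):

```python
def resolve_objective(votes: dict, deposits: dict, fee: float):
    """Objective SchellingCoin (toy): weight votes by deposit, pay the fee to majority voters.

    votes: voter -> 'A' or 'B'; deposits: voter -> stake; fee: fee paid by the question asker.
    """
    weight = {"A": 0.0, "B": 0.0}
    for voter, answer in votes.items():
        weight[answer] += deposits[voter]
    majority = "A" if weight["A"] >= weight["B"] else "B"   # ties broken toward A for simplicity
    winners = [v for v, a in votes.items() if a == majority]
    total = sum(deposits[v] for v in winners)
    payouts = {v: (fee * deposits[v] / total if v in winners else 0.0) for v in votes}
    return majority, payouts
```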

Note that, as mentioned in the post on P + epsilon attacks, there is a refinement by Paul Sztorc under which minority voters lose some of their coins, and the more “contentious” a question becomes the more coins minority voters lose, right up to the point where at a 51/49 split the minority voters lose all their coins to the majority. This substantially raises the bar for a P + epsilon attack. However, raising the bar for us is not quite good enough; here, we are interested in having no exploitability (once again, we formally define “exploitability” as “the protocol provides intrinsic opportunities for profitable attacks”) at all. So, let us see how subjectivity can help. We will elide unchanged details:

  1. For a given question, all voters in the mechanism vote either A or B.
  2. If everyone agrees, go with the unanimous decision and reward everyone.
  3. If there is a disagreement, split the mechanism into two on-chain forks, where one fork acts as if it chose A, rewarding everyone who voted A, and the other fork acts as if it chose B, rewarding everyone who voted B.

Each copy of the mechanism has its own sub-currency, and can be interacted with separately. It is up to the user to decide which one is more worth asking questions to. The theory is that if a split does occur, the fork specifying the correct answer will have increased stake belonging to truth-tellers, the fork specifying the wrong answer will have increased stake belonging to liars, and so users will prefer to ask questions to the fork where truth-tellers have greater influence.
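And a correspondingly minimal sketch of the forking variant (again illustrative; the reward sizing and bookkeeping are assumptions): on a split, each fork pays only the voters who agreed with it, so the distribution of that fork’s sub-currency shifts toward whoever voted its way.

```python
def resolve_subjective(votes: dict, balances: dict, reward: float) -> list:
    """Subjective SchellingCoin (toy): unanimity resolves in place; disagreement forks the mechanism.

    Returns a list of (chosen_answer, balances_on_that_fork) pairs.
    """
    answers = set(votes.values())
    if len(answers) == 1:
        answer = answers.pop()
        return [(answer, {v: b + reward for v, b in balances.items()})]
    forks = []
    for answer in sorted(answers):
        fork_balances = {
            v: b + (reward if votes.get(v) == answer else 0.0)
            for v, b in balances.items()
        }
        forks.append((answer, fork_balances))
    return forks

# On the fork whose answer matches reality, truth-tellers end up holding a larger share of that
# fork's sub-currency - the signal users (and, later, markets) read when choosing a fork.
```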

If you look at this closely, you can see that this is really just a clever formalism for a reputation system. All that the system does is essentially record the votes of all participants, allowing each individual user wishing to ask a question to look at the history of each respondent and then from there choose which group of participants to ask. A very mundane, old-fashioned, and seemingly really not even all that cryptoeconomic approach to solving the problem. Now, where do we go from here?

Moving To Practicality

Pure subjectivocracy, as described above, has two large problems. First, in most practical cases, there are simply far too many decisions to make in order for it to be practical for users to decide which fork they want to be on for every single one. In order to prevent massive cognitive load and storage bloat, it is crucial for the set of subjectively-decided decisions to be as small as possible.

Second, if a particular user does not have a strong belief that a particular decision should be answered in one way or another (or, alternatively, does not know what the correct decision is), then that user will have a hard time figuring out which fork to follow. This issue is particularly strong in the context of a category that can be termed “very stupid users” (VSUs) – think not Homer Simpson, but Homer Simpson’s fridge. Examples include internet-of-things/smart property applications (eg. SUVs), other cryptoeconomic mechanisms (eg. Ethereum contracts, separate blockchains, etc), hardware devices controlled by DAOs, independently operating autonomous agents, etc. In short, machines that have (i) no ability to get updated social information, and (ii) no intelligence beyond the ability to follow a pre-specified protocol. VSUs exist, and it would be nice to have some way of dealing with them.

The first problem, surprisingly enough, is essentially isomorphic to another problem that we all know very well: the blockchain scalability problem. The challenge is exactly the same: we want to have the strength equivalent to all users performing a certain kind of validation on a system, but not require that level of effort to actually be performed every time. And in blockchain scalability we have a known solution: try to use weaker approaches, like randomly selected consensus groups, to solve problems by default, only using full validation as a fallback to be used if an alarm has been raised. Here, we will do a similar thing: try to use traditional governance to resolve relatively non-contentious issues, only using subjectivocracy as a sort of fallback and incentivizer-of-last-resort.

So, let us define yet another version of SchellingCoin:

  1. For a given question, all voters in the mechanism vote either A or B.
  2. Everyone who voted with the majority gets a share of the question fee (which we will call P); everyone who voted against the majority gets nothing. However, deposits are frozen for one hour after voting ends.
  3. A user has the ability to put down a very large deposit (say, 50*P) to “raise the alarm” on a particular question that was already voted on – essentially, a bet saying “this was done wrong”. If this happens, then the mechanism splits into two on-chain forks, with one answer chosen on one fork and the other answer chosen on the other fork.
  4. On the fork where the chosen answer is equal to the original voted answer, the alarm raiser loses the deposit. On the other fork, the alarm raiser gets back a reward of 2x the deposit, paid out from incorrect voters’ deposits. Additionally, the rewards for all other answerers are made more extreme: “correct” answerers get 5*P and “incorrect” answerers lose 10*P.

If we make a maximally generous assumption and assume that, in the event of a split, the incorrect fork quickly falls away and becomes ignored, the (partial) payoff matrix starts to look like this (assuming truth is A):

                                                | You vote A | You vote B | You vote against consensus, raise the alarm
Others mainly vote A                            | P          | 0          | -50P - 10P = -60P
Others mainly vote A, N >= 1 others raise alarm | 5P         | -10P       | -10P - (50 / (N + 1)) * P
Others mainly vote B                            | 0          | P          | 50P + 5P = 55P
Others mainly vote B, N >= 1 others raise alarm | 5P         | -10P       | 5P + (50 / (N + 1)) * P

The strategy of voting with the consensus and raising the alarm is clearly self-contradictory and silly, so we will omit it for brevity. We can analyze the payoff matrix using a fairly standard repeated-elimination approach:

  1. If others mainly vote B, then the greatest incentive is for you to raise the alarm.
  2. If others mainly vote A, then the greatest incentive is for you to vote A.
  3. Hence, no individual will ever vote B. Knowing this, we know that everyone will vote A, and so everyone’s incentive is to vote A.
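The elimination argument can be checked mechanically against the payoff table. The sketch below encodes the single-alarm-raiser rows (N = 0) and confirms each best response; values are in units of P, per the assumptions above.

```python
# Payoffs from the table above (truth is A), for the case where you are the only alarm raiser.
# Strategies: vote A, vote B, or vote against the majority and raise the alarm.
P = 1.0
payoff = {
    "others_mainly_vote_A": {"vote_A": P,   "vote_B": 0.0, "against_majority_raise_alarm": -60 * P},
    "others_mainly_vote_B": {"vote_A": 0.0, "vote_B": P,   "against_majority_raise_alarm": 55 * P},
}

def best_response(others: str) -> str:
    row = payoff[others]
    return max(row, key=row.get)

assert best_response("others_mainly_vote_A") == "vote_A"                        # step 2
assert best_response("others_mainly_vote_B") == "against_majority_raise_alarm"  # step 1
# "vote_B" is never a best response, so a B-majority cannot persist; everyone voting the
# truth (A) is the unique equilibrium, matching step 3 of the analysis.
```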

Note that, unlike the SchellingCoin game, there is actually a unique equilibrium here, at least if we assume that subjective resolution works correctly. Hence, by relying on what is essentially game theory on the part of the users instead of the voters, we have managed to avoid the rather nasty set of complications involving multi-equilibrium games and instead have a clearer analysis.

Additionally note that the “raise the alarm by making a bet” protocol differs from other approaches to fallback protocols that have been mentioned in previous articles here in the context of scalability; this new mechanism is superior to and cleaner than those other approaches, and can be applied in scalability theory too.

The Public Function of Markets

Now, let us bring our cars, blockchains and autonomous agents back into the fold. The reason why Bitcoin’s objectivity is so valued is to some extent precisely because the objectivity makes it highly amenable to such applications. Thus, if we want to have a protocol that competes in this regard, we need to have a solution for these “very stupid users” among us as well.

Enter markets. The key insight behind Hayek’s particular brand of libertarianism in the 1940s, and Robin Hanson’s invention of futarchy half a century later, is the idea that markets exist not just to match buyers and sellers, but also to provide a public service of information. A prediction market on a datum (eg. GDP, unemployment, etc) reveals the information of what the market thinks will be value of that datum at some point in the future, and a market on a good or service or token reveals to interested individuals, policymakers and mechanism designers how much the public values that particular good or service or token. Thus, markets can be thought of as a complement to SchellingCoin in that they, like SchellingCoin, are also a window between the digital world and the “real” world – in this case, a window that reveals just how much the real world cares about something.

So, how does this secondary “public function” of markets apply here? In short, the answer is quite simple. Suppose that there exists a SchellingCoin mechanism, of the last type, and after one particular question two forks appear. One fork says that the temperature in San Francisco is 20’C; the other fork says that the temperature is 4000000000’C. As a VSU, what do you see? Well, let’s see what the market sees. On the one hand, you have a fork where the larger share of the internal currency is controlled by truth-tellers. On the other hand, you have a fork where the larger share is controlled by liars. Well, guess which of the two currencies has a higher price on the market…

In cryptoeconomic terms, what happened here? Simply put, the market translated the human intelligence of the intelligent users of an ultimately subjective protocol into a pseudo-objective signal that allows the VSUs to join the correct fork as well. Note that the protocol itself is not objective; even if the attacker manages to successfully manipulate the market for a brief period of time and massively raise the price of token B, the users will still place a higher valuation on token A, and when the manipulator gives up, token A will go right back to being the dominant one.

Now, what are the robustness properties of this market against attack? As was brought up in the Hanson/Moldbug debate on futarchy, in the ideal case a market will provide the correct price for a token for as long as the economic weight of the set of honestly participating users exceeds the economic weight of any particular colluding set of attackers. If some attackers bid the price up, an incentive arises for other participants to sell their tokens and for outsiders to come in and short it, in both cases earning an expected profit and at the same time helping to push the price right back down to the correct value. In practice, manipulation pressure does have some effect, but a complete takeover is only possible if the manipulator can outbid everyone else combined. And even if the attacker does succeed, they pay dearly for it, buying up tokens that end up being nearly valueless once the attack ends and the fork with the correct answer reasserts itself as the most valuable fork on the market.
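
In rough quantitative terms, the takeover condition and the cost of a failed manipulation can be written down directly. This is my own back-of-the-envelope sketch with illustrative numbers, not a formula from the text.

```python
# Back-of-the-envelope sketch of the robustness condition above. All numbers
# are illustrative assumptions, not parameters of any real market.

def takeover_possible(attacker_weight: float, honest_weights: list[float]) -> bool:
    """A complete takeover requires outbidding everyone else combined."""
    return attacker_weight > sum(honest_weights)

def attacker_loss(tokens_bought: float, avg_purchase_price: float,
                  post_attack_price: float) -> float:
    """What a failed manipulator pays once the correct fork reasserts itself."""
    return tokens_bought * (avg_purchase_price - post_attack_price)

print(takeover_possible(40.0, [30.0, 25.0, 20.0]))  # False: honest weight wins
print(attacker_loss(100_000, 1.20, 0.05))           # 115000.0
```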

Of course, the above is only a sketch of how quasi-subjective SchellingCoin may work; in reality a number of refinements will be needed to disincentivize asking ambiguous or unethical questions, handle linear and not just binary bets, and optimize the non-exploitability property. However, if P + epsilon attacks, profit-seeking 51% attacks, or any other kind of attack ever actually do become a problem with objective SchellingCoin mechanisms, the basic model stands ready as a substitute.

Listening to Markets and Proof of Work

Earlier in this post, and in my original post on SchellingCoin, I posited a sort of isomorphism between SchellingCoin and proof of work – in the original post reasoning that because proof of work works, so will SchellingCoin, and above reasoning that because SchellingCoin is problematic, so is proof of work. Here, let us expand on this isomorphism further in a third direction: if SchellingCoin can be saved through subjectivity, then perhaps so can proof of work.

The key argument is this: proof of work, at the core, can be seen in two different ways. One way of seeing proof of work is as a SchellingCoin contest, an objective protocol where the participants that vote with the majority get rewarded 25 BTC and everyone else gets nothing. The other approach, however, is to see proof of work as a sort of constant ongoing “market” between a token and a resource that can be measured purely objectively: computational power. Proof of work is an infinite opportunity to trade computational power for currency, and the more interest there is in acquiring units in a currency the more work will be done on its blockchain. “Listening” to this market consists simply of verifying and computing the total quantity of work.
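
As a rough sketch of what “listening” amounts to in code, the observer verifies each proof and sums the work it implies. The header structure below is a simplified stand-in of my own, not Bitcoin’s actual serialization.

```python
# Rough sketch of "listening" to the proof-of-work market: the observer does
# not interpret prices, it just verifies headers and adds up the work they
# represent. `Header` is a simplified stand-in, not Bitcoin's real format.

from dataclasses import dataclass

@dataclass
class Header:
    target: int    # a valid proof-of-work hash must be at or below this value
    pow_hash: int  # the header's proof-of-work hash, as an integer

def block_work(h: Header) -> int:
    """Expected number of hash attempts this block represents."""
    return 2**256 // (h.target + 1)

def total_work(chain: list[Header]) -> int:
    """Sum of verified work; blocks with invalid proofs contribute nothing."""
    return sum(block_work(h) for h in chain if h.pow_hash <= h.target)
```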

Having seen the description in the previous section of how our updated version of SchellingCoin might work, you may have been inclined to propose a similar approach for cryptocurrency: if a cryptocurrency gets forked, one can check the price of both forks on an exchange, and if the exchange prices one fork much more highly, that implies that that fork is the legitimate one. However, such an approach has a problem: determining the validity of a crypto-fiat exchange is subjective, and so the problem is beyond the reach of a VSU. But with proof of work as our “exchange”, we can actually get much further.

Here is the equivalence: exponential subjective scoring. In ESS, the “score” that a client attaches to a fork depends not just on the total work done on the fork, but also on the time at which the fork appeared; forks that come later are explicitly penalized. Hence, the set of always-online users can see that a given fork came later, and therefore that it is a hostile attack, and so they will refuse to mine on it even if its proof of work chain grows to have much more total work done on it. Their incentive to do this is simple: they expect that eventually the attacker will give up, and so they will continue mining and eventually overtake the attacker, making their fork the universally accepted longest one again; hence, mining on the original fork has an expected value of 25 BTC and mining on the attacking fork has an expected value of zero.
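
One stylized way to write down such a score is to weight each block’s work by how late the client first saw it relative to its claimed timestamp. The exact penalty schedule is not specified here, so the exponential discount constant below is purely illustrative.

```python
# Stylized sketch of exponential subjective scoring. The 0.99-per-second
# discount is an arbitrary illustrative constant, not a value from the text.

def ess_score(blocks: list[dict], discount_per_second: float = 0.99) -> float:
    """Score a fork by total work, discounted by how late each block arrived.

    Each block is a dict with:
      'work'     - work the block represents (e.g. 2**256 // (target + 1))
      'claimed'  - the timestamp the block claims for itself
      'received' - the time this client first saw the block
    """
    score = 0.0
    for b in blocks:
        lateness = max(0.0, b["received"] - b["claimed"])
        score += b["work"] * (discount_per_second ** lateness)
    return score

# A fork published long after the fact is heavily discounted, so always-online
# clients keep preferring the original chain even if the attacker's chain
# accumulates more raw work.
```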

VSUs that are not online at the time of a fork will simply look at the total proof of work done; this strategy is equivalent to the “listen to the child with the higher price” approach in our version of SchellingCoin. During an attack, such VSUs may of course temporarily be tricked, but eventually the original fork will win and so the attacker will have massively paid for the treachery. Hence, the subjectivity once again makes the mechanism less exploitable.

Conclusion

Altogether, what we see is that subjectivity, far from being an enemy of rigorous analysis, in fact makes many kinds of game-theoretic analysis of cryptoeconomic protocols substantially easier. However, if this kind of subjective algorithm design becomes accepted as the most secure approach, it has far-reaching consequences. First of all, Bitcoin maximalism, or any kind of single-cryptocurrency maximalism generally, cannot survive. Subjective algorithm design inherently requires a kind of loose coupling, where the higher-level mechanism does not actually control anything of value belonging to a lower-level protocol; this condition is necessary in order to allow higher-level mechanism instances to copy themselves.

In fact, in order for the VSU protocol to work, every mechanism would need to contain its own currency which would rise and fall with its perceived utility, and so thousands or even millions of “coins” would need to exist. On the other hand, it may well be possible to enumerate a very specific set of mechanisms that actually need to be subjective – perhaps basic consensus on block data availability, validation and timestamping, and consensus on facts – with everything else built objectively on top. As is often the case, we have not even begun to see substantial actual attacks take place, and so it may well be over a decade until anything close to a final judgement needs to be made.


 

When the first personal computers reached a broader user base, many assumed that the benefit to us humans over the following years and decades would be enormous. People were supposed to be at the center of it all, and computers were supposed to simplify and enrich their lives. That, at least, was the utopian expectation at the time.

In the meantime, things have developed in such a way “…that we are the product of the corporations offering the software that supposedly supports us. We willingly hand over intimate data, over whose whereabouts we subsequently lose all control. We are well on the way to total surveillance and exploitation of the masses. And the only ones earning from this are the large companies that produce these products…” (Jaron Lanier, Who Owns the Future?, 2014)

It is similar with crowdfunding: the basic approach is a good one, but the implementation is not always consistently human-centered.
Take Kickstarter as an example: you back projects and in return receive some gimmick or other. You get a mention in the credits of the film you supported, a lunch with the founders, or the very first copy of the funded book. In 2012, Oculus Rift raised US$2.4 million this way from backers on Kickstarter, who got T-shirts, posters or, in other words, a pat on the back. Two years later, Oculus Rift was acquired by Facebook for a cool US$2 billion, and the original backers were left out. Oculus Rift is hardly to blame here, but the case illustrates the problem very well. Had the backers held a stake in the company in proportion to the size of their contributions, they would all have shared in the sale of the company as well.
Instead, all they were left with was the T-shirt confirming their part in Oculus Rift. Wew…

In contrast, various promising approaches are growing, slowly but surely, that fuse crowdfunding with blockchain currencies.
Backers buy the company’s own crypto-coins and thereby participate directly in the company. Through this bond they become more than just financial supporters; their personal involvement in the company is encouraged as well. SWARM is, as far as I know, the current pioneer here.

The future, then, should not be one in which a few large corporations skim off the big money while not even participating honestly in the fiscal system, but one in which every individual can share in the success of an idea, monetarily as well as in real terms.

Shareable has an excellent article on this:

Owning Together Is the New Sharing

Companies and startups are aspiring toward an economy, and an Internet, that is more fully ours with the use of cooperatives, “commons-based peer production,” and cryptocurrencies.
VC-backed sharing economy companies like Airbnb and Uber have caused trouble for legacy industries, but gone is the illusion that they are doing it with actual sharing. Their main contribution to society has been facilitating new kinds of transactions — for a fee, of course, to pay back to their investors. “The sharing economy has become the on-demand economy,” laments Antonin Léonard, co-founder of the Paris-based network OuiShare, which connects sharing-economy entrepreneurs around the world.
Source: http://www.shareable.net/blog/owning-is-the-new-sharing
A former FBI agent describes how he was able to trace more than 3,500 Bitcoin transactions

Everything can be hacked one way or another: fingerprint scanners, cars, or even Bitcoin.

Its supposed advantage, anonymity via the blockchain, can be undermined as soon as someone knows a user’s Bitcoin address. But that, in my view, is exactly the point.
Of course the parties involved in a transaction know each other’s addresses, but outsiders generally do not. Then again, the same is true of the commonly used IBAN/BIC combination: only the parties involved know it.

The difference is that BTC transactions can be traced publicly on blockchain.info. That is precisely the big advantage of blockchain-based fiat currencies.
You just do not get to choose who ends up tracing the transaction.

All the more interesting, then, what WIRED writes:

Prosecutors Trace $13.4M in Bitcoins From the Silk Road to Ulbricht’s Laptop

If anyone still believes that bitcoin is magically anonymous internet money, the US government just offered what may be the clearest demonstration yet that it’s not. A former federal agent has shown in a courtroom that he traced hundreds of thousands of bitcoins from the Silk Road anonymous marketplace for drugs directly to the personal computer of Ross Ulbricht, the 30-year-old accused of running that contraband bazaar. In Ulbricht’s trial Thursday, former FBI special agent Ilhwan Yum described how he traced 3,760 bitcoin transactions over 12 months ending in late August 2013 from servers seized in the Silk Road investigation to Ross Ulbricht’s Samsung 700z laptop, which the FBI seized at the time of his arrest in October of that year. In all, he followed more than 700,000 bitcoins along the public ledger of bitcoin transactions, known as the blockchain, from the marketplace to what seemed to be Ulbricht’s personal wallets. Based on exchange rates at the time of each transaction, Yum calculated that the transferred coins were worth a total of $13.4 million.

Full article on WIRED.com