So I’m not sure whether this kind of development methodology has ever been applied to such an extreme before, so I figured I’d document it. In a nutshell, it’s sort of like test-driven triplet-programming development.

While speed-developing our alpha codebase, four of us sat around a table in the office in Berlin. Three of us (Vitalik, Jeff and me) were each the coder of our own clean-room implementation of the Ethereum protocol. The fourth was Christoph, our master of testing.

Our target was to have three fully compatible implementations, as well as an unambiguous specification, by the end of three days of substantial development. Working remotely, this process normally takes a few weeks.

This time we needed to expedite it; our process was quite simple. First, we discuss the various consensus-breaking changes and formally describe them as best we can. Then we each crack on coding up the changes simultaneously, popping our heads up to ask for clarifications to the specification as needed. Meanwhile, Christoph devises and codes tests, populating the expected results either manually or with the farthest-ahead of the implementations (C++, generally :-P).

After a milestone’s worth of changes are coded up and the tests written, each clean-room implementation is tested against the common test data that Christoph compiled. Where issues are found, we debug as a group. So far, this has proved to be an effective way of producing well-tested code quickly and, perhaps more importantly, of delivering clear, unambiguous formal specifications.
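For the curious, here is a minimal sketch of what such a cross-client fixture run can look like; the client command names and the `expectedPostState` fixture key are stand-ins for illustration, not our actual tooling:

```python
import json
import subprocess

# Hypothetical cross-client harness: every clean-room implementation runs
# the same shared JSON fixture and must report the same post-state.
CLIENTS = {
    "cpp": ["eth-run-fixture"],      # placeholder command names
    "go": ["geth-run-fixture"],
    "python": ["pyeth-run-fixture"],
}

def check_fixture(path):
    with open(path) as f:
        # The fixture carries the expected post-state compiled by the tester.
        expected = json.dumps(json.load(f)["expectedPostState"], sort_keys=True)
    failures = []
    for name, cmd in CLIENTS.items():
        # Each client executes the fixture and prints its resulting state.
        out = subprocess.run(cmd + [path], capture_output=True, text=True)
        got = json.dumps(json.loads(out.stdout or "{}"), sort_keys=True)
        if got != expected:
            failures.append(name)
    return failures  # an empty list means all implementations agree

if __name__ == "__main__":
    print(check_fixture("stateTests/poc8_example.json") or "all clients agree")
```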

Are there any more examples of such techniques taken to the extreme?


Ripple Labs is thrilled to join the International Payments Framework Association (IPFA), which provides rule sets, best practices, and guidelines to improve cross-border payments.

Composed of over 25 prominent members in the payments space—including the likes of ACH, NACHA, and SWIFT—the IPFA promotes a grand vision for creating a global payments framework that facilitates interoperability and efficient cross-border payment processing.

IPFA is one of a series of membership groups and industry associations that Ripple Labs has joined in order to further our vision of transforming payments. Ripple Labs recently joined the Center for Financial Services Innovation Network and became a member of the NACHA Payment Innovation Alliance in June.

“IPFA rules—when they are appropriately modified for Ripple—help us create a complete, real-time, cross-border payment system,” said Nilesh Dusane, director of business development at Ripple Labs.

“We’re very excited to join this network,” he said.


I’m Vinay Gupta, the newly minted release coordinator for Ethereum. I’ve been working with the comms team on strategy, and have now come aboard to help smooth the release process.

I’ll be about 50/50 on comms and on release coordination. A lot of that is going to be about keeping you updated on progress: new features, new documentation, and hopefully writing about great new services you can use, so it’s in the hinterland between comms and project management. In theory, once I’m up to speed, I should be providing you with the answers to the question: “what’s going on?”

But give me some time, because getting up to speed on all of this is nontrivial. We have a very large development team working with very advanced and often quite complex new technology, and keeping everybody up to date on that simultaneously is going to be tricky. To do that well, I have to actually understand what’s going on at quite a technical level first. I have a lot to wrap my head around.

I was a 3D graphics programmer through the 1990s, and have a reasonably strong grounding in financial cryptography (I was, and I am not ashamed to admit it, a cypherpunk in those days). But we have a 25-30 person team working in parallel on several different aspects of Ethereum, so… patience please while I master the current state of play, so that I can communicate about what’s changing as we move forwards. It’s a lot of context to acquire, as I’m sure you all know – if there’s an occasional gaffe as I get oriented, forgive me!

I’ve just come back from Switzerland, where I got to meet a lot of the team, my “orientation week” being three days during the release planning meetings. Gav writes in some detail about that week here, so rather than repeat him, read his post, and I’ll press on to tell you what was on that release whiteboard.

There is good news, there is bad news, but above all, there is a release schedule.

There will be another blog post with much more detail about the release schedule for the first live Ethereum network shortly – likely by the end of this week, as the developer meeting that Gav mentions in his post winds up and the conclusions are communicated. That’s the post which will give you timelines you can start firing up your mining rigs to, feature lists, and so on. Until then, let me lay out roughly what the four major steps in the release process will look like and we can get into detail soon.

Let’s lay out where we are first: Ethereum is a sprawling project with many teams in many countries implementing the same protocol in several different language versions so it can be integrated into the widest possible range of other systems/ecologies, and to provide long-term resilience and future-proofing. In addition to that broad effort, there are several specific applications/toolchains to help people view, build and interact with Ethereum: Mist, Mix, Alethzero and so on. Starting quite soon, and over the next few months, a series of these tools will be stood up as late alpha, beta, ready for general use and shipped. Because the network is valuable, and the network is only as secure as the software we provide, this is going to be a security-led, not schedule-led, process. You want it done right, we want it done right, and this is one of the most revolutionary software projects ever shipped.

While you’re waiting for the all-singing, all-dancing CERN httpd + NCSA Mosaic combo, the “we have just launched the Future of the Internet” breakthrough system, we will actually be releasing the code and the tools in layers. We are standing up the infrastructure for a whole new web a piece at a time: server first, plus tool chain, and then the full user experience rich client. This makes sense: a client needs something to connect to, so the server infrastructure has to come first. An internet based on this metacomputer model is going to be a very different place, and getting a good interface to it is going to present a whole new set of challenges. There’s no way to simply put all the pieces together and hope it clips into place, like forming an arch by throwing bricks in the air: we need scaffolding, and a precise fit. We get that by concentrating on the underlying technical aspects for a while, including mining, the underlying network and so on, and then, as that is widely deployed, stable and trusted, we will move up the stack towards the graphical user interface via Mist in the next few months. None of these pieces stands alone, either: the network needs miners and exchanges, and it takes people time to get organized to do that work properly. The Mist client needs applications, or it’s a bare browser with nothing to connect to, and it takes people time to write those applications. Each change, each step forwards, involves a lot of conversations and support as we get people set up with the new software and help them get their projects off the ground: the whole thing together is an ecology. Each piece needs its own time, its own attention. We have to do this in phases for all of these reasons, and more.

It took bitcoin, a much less complex project, several years to cover that terrain: we have a larger team, but a more complex project. On the other hand, if you’re following the github repositories, you can see how much progress is being made, week by week, day by day, so… verify for yourself where we are.

So, now that we’re all on the same page about real-world software engineering, let’s actually look at the phases of this release process!

Release Step One: Frontier

Frontier takes a model familiar to Bitcoiners, and stands it up for our initial release. Frontier is the Ethereum network in its barest form: an interface to mine Ether, and a way to upload and execute contracts. The main use of Frontier on the launch trajectory is to get mining operations and Ether exchanges running, so the community can get their mining rigs started, and to start to establish a “live” environment where people can test DApps and acquire Ether to upload their own software into Ethereum.

This is “no user interface to speak of” command line country, and you will be expected to be quite expert in the whole Ethereum world model, as well as to have substantial mastery of the tools at your disposal.

However, this is not a test net: this is a frontier release. If you are equipped, come along! Do not die of dysentery on the way.

Frontier showcases three areas of real utility:

  • you can mine real Ether, at 10% of the normal Ether issuance rate (a 0.59 Ether block reward), which can be spent to run programs or exchanged for other things, as normal – this is real Ether.
  • you can exchange Ether for Bitcoin, or with other users, if you need Ether to run code etc.
  • if you already bought Ether during the crowdsale, and you are fully conversant with the Frontier environment, you can use it on the Frontier network.
  • we do not recommend this, but have a very substantial security-and-recovery process in place to make it safer – see below 

We will migrate from Frontier to Homestead once Frontier is fully stable in the eyes of the core devs and the auditors:

  • when we are ready to move to Homestead, the release after Frontier, the Frontier network will be shut down; Ether values in wallets will be transferred, but state in contracts will likely be erased (more information to follow on this in later blog posts)
  • switchover to the new network will be enforced by “TheBomb” – a sketch of the idea follows this list
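To make “TheBomb” concrete, here is a hypothetical sketch of how a difficulty bomb of this kind could work: a difficulty term that grows exponentially past a chosen block height, making continued mining on the old network impractical. The formula, constants and switchover height are illustrative guesses, not a published specification.

```python
# Hypothetical difficulty bomb: all constants below are illustrative only.
BOMB_BLOCK = 1_000_000        # assumed switchover height
BASE_DIFFICULTY = 2**32

def frontier_difficulty(block_number):
    bomb = 0
    if block_number > BOMB_BLOCK:
        # Doubles every 10,000 blocks past the switchover point, soon
        # dwarfing the base difficulty and stalling the old chain.
        bomb = 2 ** ((block_number - BOMB_BLOCK) // 10_000)
    return BASE_DIFFICULTY + bomb

for n in (BOMB_BLOCK, BOMB_BLOCK + 100_000, BOMB_BLOCK + 500_000):
    print(n, frontier_difficulty(n))
```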

This is very early release software: feature complete within these boundaries, but with a substantial risk of unexpected behaviours unseen in either the test net or the security review. And it’s not just us that will be putting new code into production: contracts, exchanges, miners, everybody else in the ecosystem will be shipping new services. Any one of those components getting seriously screwed up could impact a lot of users, and we want to shake bugs out of the ecosystem as a whole, not simply our own infrastructure: we are all in this together.

However, to help you safeguard your Ether, we have the following mechanisms planned (more details from the developers will follow soon as the security model is finalised):

  • if you do not perform any transactions, we guarantee 100% that your Ether will not be touched and will be waiting for you once we move beyond Frontier
  • if you perform transactions, we guarantee 100% that any Ether you did not spend will not be touched and will be available to you once we move beyond Frontier
  • Ether you spend will not fall through the cracks into other people’s pockets or vanish without a trace: in the unlikely event that this happens, you have 24 hours to inform us, and we will freeze the network, return to the last good state, and start again with the bug patched
  • yes, this implies a real risk of network instability: everything possible has been done to prevent this, but this is a brand new aeroplane – take your parachute!
  • we will periodically checkpoint the network to show that neither user report nor automated testing has reported any problems. We expect the checkpoints will be around once daily, with a mean of around 12 hours of latency
  • exchanges etc. will be strongly encouraged to wait for checkpoints to be validated before sending out payments in fiat or bitcoin. Ethereum will provide explicit support to aid exchanges in determining what Ether transactions have fully cleared

Over the course of the next few weeks several pieces of software have to be integrated to maintain this basket of security features so we can allow genesis block Ether on to this platform without unacceptable risks. Building that infrastructure is a new process, and while it looks like a safe, sane and conservative schedule, there is always a chance of a delay as the unknown unknown is discovered either by us, the bug bounty hunters or by the security auditors. There will be a post shortly which goes through this release plan in real technical detail, and I’ll have a lot of direct input from the devs on that post, so for now take this with a pinch of salt and we will have hard details and expected dates as soon as possible. 

Release Step Two: Homestead

Homestead is where we move after Frontier. We expect the following three major changes.

  • Ether mining will be at 100% rather than 10% of the usual reward rate
  • checkpointing and manual network halts should never be necessary, although it is likely that checkpointing will continue if there is a general demand for it
  • we will remove the severe risk warning from putting your Ether on the network, although we will not consider the software to be out of beta until Metropolis

Still command line, with much the same feature set as Frontier, but this is the one we tell you is ready to go, within the relevant parameters.

How long will there be between Frontier and Homestead? Depends entirely on how Frontier performs: best case is not less than a month. We will have a pretty good idea of whether things are going smoothly or not from network review, so we will keep you in the loop through this process.

Release Step Three: Metropolis

Metropolis is when we finally officially release a relatively full-featured user interface for non-technical users of Ethereum, and throw the doors open: Mist launches, and we expect this launch to include a DApp store and several anchor tenant projects with full-featured, well-designed programs to showcase the full power of the network. This is what we are all waiting for, and working towards.

In practice, I suspect there will be at least one, and probably two as-yet-unnamed steps between Homestead and Metropolis: I’m open to suggestions for names (write to vinay[at]ethdev.com). Features will be sensible checkpoints on the way: specific feature sets inside of Mist would be my guess, but I’m still getting my head around that, so I expect we will cross those bridges after Homestead is stood up.

Release Step Four: Serenity

There’s just one thing left to discuss: mining. Proof of Work implies the inefficient conversion of electricity into heat, Ether and network stability, and we would quite like to not warm the atmosphere with our software more than is absolutely necessary. Short of buying carbon offsets for every unit of Ether mined (is that such a bad idea?), we need an algorithmic fix: the infamous Proof of Stake. 

Switching the network from Proof of Work to Proof of Stake is going to be a substantial transition, potentially much like the one between Frontier and Homestead. Similar rollback measures may be required, although in all probability more sophisticated mechanisms will be deployed (e.g. running both mechanisms together, with Proof of Work dominant, and flagging any cases where Proof of Stake gives a different output).
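As a sketch of that dual-running idea, here is a toy block processor in which Proof of Work stays authoritative while Proof of Stake runs in shadow mode, with disagreements logged for study. Both verification functions are stand-ins invented for illustration.

```python
import hashlib

TARGET = 2**255          # toy PoW target
STAKE_QUORUM = 2         # hypothetical number of staker signatures required

def pow_accepts(block):
    digest = hashlib.sha256(block["header"]).digest()
    return int.from_bytes(digest, "big") < TARGET

def pos_accepts(block):
    return len(block["stake_signatures"]) >= STAKE_QUORUM

disagreements = []

def process_block(block):
    pow_ok, pos_ok = pow_accepts(block), pos_accepts(block)
    if pow_ok != pos_ok:
        # Recorded for tuning the PoS design; does not affect consensus yet.
        disagreements.append(block)
    return pow_ok   # Proof of Work remains the deciding mechanism

print(process_block({"header": b"demo", "stake_signatures": ["s1", "s2"]}))
```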

This seems a long way out, but it’s not as far away as all that: the work is ongoing.

Proof of Work is a brutal waste of computing power – like democracy*, the worst system except all the others (*voluntarism etc. have yet to be tried at scale). Freed from that constraint, the network should be faster, more efficient, easier for newcomers to get into, and more resistant to cartelization of mining capacity etc. This is probably going to be almost as big a step forwards as putting smart contracts into a block chain in the first place, by the time all is said and done. It is a ways out. It will be worth it. 

Timelines

As you have seen since the Ether Sale, progress has been rapid and stable. Code on the critical path is getting written, teams are effective and efficient, and over-all the organization is getting things done. Reinventing the digital age is not easy, but somebody has to do it. Right now that is us.

We anticipate roughly one major announcement a month for the next few months, and then a delay while Metropolis is prepared. There will also be DEVcon One: an opportunity to come learn the practical business of building and shipping DApps, meet fellow developers and potential investors, and understand the likely shape of things to come.

We will give you information about each release in more detail as each release approaches, but I want to give you the big overview of how this works and where we are going, fill in some of the gaps, highlight what is changing, both technically and in our communications and business partnership, and present you with an overview of what the summer is going to be like as we move down the path towards Serenity, another world changing technology.

I’m very glad to be part of this process. I’m a little at sea right now trying to wrap my head around the sheer scope of the project, and I’m hoping to actually visit a lot of the development teams over the summer to get the stories and put faces to names. This is a big, diverse project and, beyond the project itself, the launch of a new sociotechnical ecosystem. We are, after all, a platform effort: what’s really going to turn this into magic is you, and the things you build on top of the tools we’re all working so hard to ship. We are making tools for tool-makers.

Vinay signing off for now. More news soon!

 


 

I was woken by Vitalik’s call at 5:55 this morning; pitch black outside, nighttime was still upon us. Nonetheless, it was time to leave and this week had best start on the right foot.

The 25-minute walk in darkness from the Zug-based headquarters to the train station was wet. Streetlights reflecting off the puddles on the clean Swiss streets provided a picturesque, if quiet, march into town. I couldn’t help but think the rain running down my face was a very liquid reminder of the impending seasonal change, and then, on consideration, how fast the last nine months had gone.

Solid Foundations

The last week was spent in Zug by the Ethereum Foundation board and ÐΞV leadership (Vitalik, Mihai and Taylor, who officially form the foundation’s board; Anthony and Joseph as the other official advisors; and Aeron & Jutta as the ÐΞV executive, joined by Jeff and myself wearing the multiple hats of ÐΞV and advisory). The chief outcome of this was the dissemination of Vitalik’s superb plan to reform the foundation and turn it into a professional entity. The board will be recruited from accomplished professionals with minimal conflicts of interest; the present set of “founders” will officially retire from those positions and a professional executive will be recruited, the latter process led by Joseph. Anthony will take a greater ambassadorial role for Ethereum in China and North America. Conversely, ÐΞV will function much more as a department of the Foundation’s executive rather than as a largely independent entity. Finally, I presented the release strategy to the others; an event after which I’ve never seen quite so many photos taken of a whiteboard. Needless to say, all was well received by the board and advisors. More information will be coming soon.

As I write this, I’m sitting on a crowded early commuter train, Vinay Gupta in tow, who took on a much more substantive role this week as release coordinator. He’ll be helping with release strategy and keeping you informed of our release process. This week, which might rather dramatically be described as ‘pivotal’ in the release process, will see Jeff, Vitalik and me sit around a table and develop all the PoC-9 changes, related unit tests, and integrations in three days, joined by our indomitable Master of Testing, Christoph. The outcome of this week will inform our announcement, coming later this week, outlining in clear terms what we will be releasing and when.

I’m sorry it has been so long without an update. The last two months have been somewhat busy, choked up with travel and meetings, with the remaining time soaked up by coding, team-leading and management. The team is now substantially formed; the formal security audit started four weeks ago; the bounty programme is running smoothly. The latter processes are in the exceedingly capable hands of Jutta and Gustav. Aeron, meanwhile, will be stepping down as the ÐΞV head of finance and operations and assuming the role he was initially brought aboard for: system modelling. We’ll hopefully be able to announce his successors next week (yes, that was plural; he has been doing the jobs of 2.5 people over the last few months).

We are also in the process of forming partnerships with third parties in the industry, with George, Jutta and myself managing this process. I’m happy to announce that at least three exchanges will be supporting Ether from day one on their trading platforms (details of which we’ll announce soon), with more exchanges to follow. Marek and Alex are providing technical support there, with Marek going so far as to make a substantial reference exchange implementation.

I also finished the first draft of ICAP, the Ethereum Inter-exchange Client Address Protocol: an IBAN-compatible system for referencing and transacting to client accounts, aimed at streamlining the process of transferring funds, worry-free, between exchanges and, ultimately, at making KYC and AML pains a thing of the past. The IBAN compatibility may even provide the possibility of easy integration with existing banking infrastructure at some point in the future.
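To illustrate the IBAN compatibility, here is a minimal sketch of ISO 13616-style mod-97-10 checksum handling; the “XE” pseudo-country code and the uppercase base-36 account field follow the draft’s general direction, but treat the details here as illustrative rather than as the final specification.

```python
import string

def _to_digits(s):
    # IBAN letter substitution: A=10, B=11, ..., Z=35.
    return "".join(str(string.ascii_uppercase.index(c) + 10) if c.isalpha() else c
                   for c in s)

def check_digits(country, account):
    # Chosen so that the rearranged string is congruent to 1 (mod 97).
    n = int(_to_digits(account + country + "00"))
    return "{:02d}".format(98 - n % 97)

def make_icap(account_base36):
    return "XE" + check_digits("XE", account_base36) + account_base36

def is_valid(icap):
    # Standard IBAN validation: move the first four characters to the end.
    return int(_to_digits(icap[4:] + icap[:4])) % 97 == 1

icap = make_icap("38O073KYGTWWZN0F2WZ0R8PX5ZPPZS")  # arbitrary account field
print(icap, is_valid(icap))
```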

Developments

Proof-of-Concept releases VII and VIII were released. NatSpec, the “natural language specification format” and the basis of our transaction security, was prototyped and integrated. Under Marek’s watch, now helped by Fabian, ethereum.js is truly coming of age, with near source-level compatibility with Solidity on contract interaction and support for the typed ABI with calling and events, the latter providing hassle-free state-change reporting. Mix, our IDE, underwent its first release and, after some teething issues, is getting good use thanks to the excellent work done by Arkadiy and Yann. Solidity had numerous features added and is swiftly approaching 1.0 status, with Christian, Lefteris and Liana to thank. Marian’s work goes ever forward on the network monitoring system, while Sven and Heiko have been working diligently on the stress-testing infrastructure which analyses and tests peer network formation and performance. They’ll soon be joined by Alex and Lefteris to accelerate this programme.

So one of the major things that needed sorting for the next release is the proof-of-work algorithm that we’ll use. This had a number of requirements, two of which were actually pulling in opposite directions: basically, it had to be a light-client-friendly algorithm whose mining speed is proportional to IO bandwidth, but which requires a considerable amount of RAM to mine. There was a vague consensus that we (well… Vitalik and Matthew) head in the direction of a Hashimoto-like algorithm (a proof of work designed for the Bitcoin blockchain that aims to be IO-bound, meaning, roughly, that to make it go any faster you’d need to add more memory rather than just sponsoring a smaller/faster ASIC). Since our blockchain has a number of important differences from the Bitcoin blockchain (mainly in transaction density), stemming from the extremely short 12s block time we’re aiming for, we would have to use not the blockchain data itself, as Hashimoto does, but rather an artificially created dataset, generated with an algorithm known as Dagger (yes, some will remember it as Vitalik’s first and flawed attempt at a memory-hard proof of work).
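As a rough illustration of what “IO-bound” means here, below is a toy, Hashimoto-flavoured proof of work in which each candidate nonce forces a series of pseudorandom dataset reads. The dataset is filled from a simple hash chain as a stand-in for the Dagger-generated structure, and all sizes and parameters are illustrative only.

```python
import hashlib

DATASET_ITEMS = 1 << 16    # real parameters would be vastly larger
READS_PER_NONCE = 64

def h(*parts):
    return hashlib.sha3_256(b"".join(parts)).digest()

# Stand-in dataset; in the real scheme this would be built by Dagger.
dataset = []
item = h(b"seed")
for _ in range(DATASET_ITEMS):
    item = h(item)
    dataset.append(item)

def pow_value(header, nonce):
    mix = h(header, nonce.to_bytes(8, "big"))
    for _ in range(READS_PER_NONCE):
        index = int.from_bytes(mix[:4], "big") % DATASET_ITEMS
        mix = h(mix, dataset[index])   # every step forces a dataset read
    return int.from_bytes(mix, "big")

def mine(header, target):
    nonce = 0
    while pow_value(header, nonce) >= target:
        nonce += 1
    return nonce

print(mine(b"block header", 2**256 // 4096))
```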

While this looked like a good direction to be going in, a swift audit of Vitalik and Matt’s initial algorithm by Tim Hughes (ex-Director of Technology at Frontier Developments and an expert in low-level CPU and GPU operation and optimisation) showed major flaws. With his help, they devised a substantially more watertight algorithm that, we are confident in saying, should make the job of developing an FPGA/ASIC sufficiently difficult, especially given our determination to switch to a proof-of-stake system within the next 6-12 months.

Last, but not least, the new website was launched. Kudos to Ian and Konstantin for mucking in and getting it done. Next stop will be the developer site, loosely based on the excellent resource at qt.io, the aim being to provide a one-stop extravaganza of up-to-date reference documentation, curated tutorials, examples, recipes, downloads, issue tracking, and build status.

Onwards

So, as Alex, our networking maestro, might say: these are exciting times. When deep in the nitty-gritty of development you sometimes forget quite how world-altering the technology you’re creating is, which is probably just as well, since the gravity of the matter at hand would be continually distracting. Nonetheless, when one starts considering the near-term alterations that we can really bring, one realises that the wave of change is at once unavoidable and heading straight for you. For what it’s worth, I find an excellent accompaniment to this crazy life is the superb music of Pretty Lights.


One of the issues inherent in many kinds of consensus architectures is that although they can be made to be robust against attackers or collusions up to a certain size, if an attacker gets large enough they are still, fundamentally, exploitable. If attackers in a proof of work system have less than 25% of mining power and everyone else is non-colluding and rational, then we can show that proof of work is secure; however, if an attacker is large enough that they can actually succeed, then the attack costs nothing – and other miners actually have the incentive to go along with the attack. SchellingCoin, as we saw, is vulnerable to a so-called P + epsilon attack in the presence of an attacker willing to commit to bribing a large enough amount, and is itself capturable by a majority-controlling attacker in much the same style as proof of work.

One question that we may want to ask is, can we do better than this? Particularly if a pseudonymous cryptocurrency like Bitcoin succeeds, and arguably even if it does not, there doubtlessly exists some shadowy venture capital industry willing to put up the billions of dollars needed to launch such attacks if they can be sure that they can quickly earn a profit from executing them. Hence, what we would like to have is cryptoeconomic mechanisms that are not just stable, in the sense that there is a large margin of minimum “size” that an attacker needs to have, but also unexploitable – although we can never measure and account for all of the extrinsic ways that one can profit from attacking a protocol, we want to at the very least be sure that the protocol presents no intrinsic profit potential from an attack, and ideally a maximally high intrinsic cost.

For some kinds of protocols, there is such a possibility; for example, with proof of stake we can punish double-signing, and even if a hostile fork succeeds the participants in the fork would still lose their deposits (note that to properly accomplish this we need to add an explicit rule that forks that refuse to include evidence of double-signing for some time are to be considered invalid). Unfortunately, for SchellingCoin-style mechanisms as they currently are, there is no such possibility. There is no way to cryptographically tell the difference between a SchellingCoin instance that votes for the temperature in San Francisco being 4000000000°C because it actually is that hot, and an instance that votes for such a temperature because the attacker committed to bribe people to vote that way. Voting-based DAOs, lacking an equivalent of shareholder regulation, are vulnerable to attacks where 51% of participants collude to take all of the DAO’s assets for themselves. So what can we do?

Between Truth and Lies

One of the key properties that all of these mechanisms have is that they can be described as being objective: the protocol’s operation and consensus can be maintained at all times using solely nodes knowing nothing but the full set of data that has been published and the rules of the protocol itself. There is no additional “external information” (eg. recent block hashes from block explorers, details about specific forking events, knowledge of external facts, reputation, etc) that is required in order to deal with the protocol securely. This is in contrast to what we will describe as subjective mechanisms – mechanisms where external information is required to securely interact with them.

When there exist multiple levels of the cryptoeconomic application stack, each level can be objective or subjective separately: Codius allows for subjectively determined scoring of oracles for smart contract validation on top of objective blockchains (as each individual user must decide for themselves whether or not a particular oracle is trustworthy), and Ripple’s decentralized exchange provides objective execution on top of an ultimately subjective blockchain. In general, however, cryptoeconomic protocols so far tend to try to be objective where possible.

Objectivity has often been hailed as one of the primary features of Bitcoin, and indeed it has many benefits. However, at the same time it is also a curse. The fundamental problem is this: as soon as you try to introduce something extra-cryptoeconomic, whether real-world currency prices, temperatures, events, reputation, or even time, from the outside world into the cryptoeconomic world, you are trying to create a link where before there was absolutely none. To see how this is an issue, consider the following two scenarios:

  • The truth is B, and most participants are honestly following the standard protocol through which the contract discovers that the truth is B, but 20% are attackers or accepted a bribe.
  • The truth is A, but 80% of participants are attackers or accepted a bribe to pretend that the truth is B.

From the point of view of the protocol, the two are completely indistinguishable; between truth and lies, the protocol is precisely symmetrical. Hence, epistemic takeovers (the attacker convincing everyone else that they have convinced everyone else to go along with an attack, potentially flipping an equilibrium at zero cost), P + epsilon attacks, profitable 51% attacks from extremely wealthy actors, etc, all begin to enter the picture. Although one might think at first glance that objective systems, with no reliance on any actor using anything but information supplied through the protocol, are easy to analyze, this panoply of issues reveals that to a large extent the exact opposite is the case: objective protocols are vulnerable to takeovers, and potentially zero-cost takeovers, and standard economics and game theory quite simply have very bad tools for analyzing equilibrium flips. The closest thing that we currently have to a science that actually does try to analyze the hardness of equilibrium flips is chaos theory, and it will be an interesting day when crypto-protocols start to become advertised as “chaos-theoretically guaranteed to protect your grandma’s funds”.

Hence, subjectivity. The power behind subjectivity lies in the fact that concepts like manipulation, takeovers and deceit, not detectable or in some cases even definable in pure cryptography, can be understood by the human community surrounding the protocol just fine. To see how subjectivity may work in action, let us jump straight to an example. The example supplied here will define a new, third, hypothetical form of blockchain or DAO governance, which can be used to complement futarchy and democracy: subjectivocracy. Pure subjectivocracy is defined quite simply:

  1. If everyone agrees, go with the unanimous decision.
  2. If there is a disagreement, say between decision A and decision B, split the blockchain/DAO into two forks, where one fork implements decision A and the other implements decision B.

All forks are allowed to exist; it’s left up to the surrounding community to decide which forks they care about. Subjectivocracy is in some sense the ultimate non-coercive form of governance; no one is ever forced to accept a situation where they don’t get their own way, the only catch being that if you have policy preferences that are unpopular then you will end up on a fork where few others are left to interact with you. Perhaps, in some futuristic society where nearly all resources are digital and everything that is material and useful is too-cheap-to-meter, subjectivocracy may become the preferred form of government; but until then the cryptoeconomy seems like a perfect initial use case.
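To make the rule concrete, here is a toy model of pure subjectivocracy; the fork representation and vote handling are, of course, drastic simplifications for exposition.

```python
from dataclasses import dataclass

@dataclass
class Fork:
    history: tuple = ()   # the decisions this fork has adopted so far

def decide(forks, votes):
    # Unanimity yields one successor per fork; disagreement yields one
    # successor fork per distinct decision.
    return [Fork(fork.history + (option,))
            for fork in forks
            for option in sorted(set(votes))]

forks = [Fork()]
forks = decide(forks, ["A", "A", "A"])   # unanimous: still a single fork
forks = decide(forks, ["A", "B", "B"])   # disagreement: the mechanism splits
print([f.history for f in forks])        # [('A', 'A'), ('A', 'B')]
```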

For another example, we can also see how to apply subjectivocracy to SchellingCoin. First, let us define our “objective” version of SchellingCoin for comparison’s sake:

  1. The SchellingCoin mechanism has an associated sub-currency.
  2. Anyone has the ability to “join” the mechanism by purchasing units of the currency and placing them as a security deposit. Weight of participation is proportional to the size of the deposit, as usual.
  3. Anyone has the ability to ask the mechanism a question by paying a fixed fee in that mechanism’s currency.
  4. For a given question, all voters in the mechanism vote either A or B.
  5. Everyone who voted with the majority gets a share of the question fee; everyone who voted against the majority gets nothing (a toy sketch of this mechanism follows below).
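Here is the promised sketch of steps 4-5, with deposit handling and fee routing simplified to their essentials:

```python
def settle_question(votes, deposits, fee):
    """votes: {voter: 'A' or 'B'}; deposits: {voter: stake}. Returns payouts."""
    weight = {"A": 0.0, "B": 0.0}
    for voter, v in votes.items():
        weight[v] += deposits[voter]
    majority = "A" if weight["A"] >= weight["B"] else "B"
    winners = [v for v in votes if votes[v] == majority]
    total = sum(deposits[w] for w in winners)
    # The question fee is shared among majority voters, weighted by deposit.
    return {v: (fee * deposits[v] / total if v in winners else 0.0)
            for v in votes}

print(settle_question({"p1": "A", "p2": "A", "p3": "B"},
                      {"p1": 10, "p2": 5, "p3": 20}, fee=3.0))
```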

Note that, as mentioned in the post on P + epsilon attacks, there is a refinement by Paul Sztorc under which minority voters lose some of their coins, and the more “contentious” a question becomes the more coins minority voters lose, right up to the point where at a 51/49 split the minority voters lose all their coins to the majority. This substantially raises the bar for a P + epsilon attack. However, merely raising the bar is not quite good enough for us; here, we are interested in having no exploitability at all (once again, we formally define “exploitability” as “the protocol provides intrinsic opportunities for profitable attacks”). So, let us see how subjectivity can help. We will elide unchanged details:

  1. For a given question, all voters in the mechanism vote either A or B.
  2. If everyone agrees, go with the unanimous decision and reward everyone.
  3. If there is a disagreement, split the mechanism into two on-chain forks, where one fork acts as if it chose A, rewarding everyone who voted A, and the other fork acts as if it chose B, rewarding everyone who voted B.

Each copy of the mechanism has its own sub-currency, and can be interacted with separately. It is up to the user to decide which one is more worth asking questions to. The theory is that if a split does occur, the fork specifying the correct answer will have increased stake belonging to truth-tellers, the fork specifying the wrong answer will have increased stake belonging to liars, and so users will prefer to ask questions to the fork where truth-tellers have greater influence.
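Continuing the toy model, here is a sketch of the forking settlement; balances and fork bookkeeping are again heavily simplified:

```python
def settle_subjective(votes, balances, reward):
    outcomes = sorted(set(votes.values()))
    if len(outcomes) == 1:    # unanimity: a single mechanism survives
        return [{v: balances[v] + reward for v in votes}]
    # Disagreement: one fork per answer, each rewarding its own side.
    return [{v: balances[v] + (reward if votes[v] == answer else 0)
             for v in votes}
            for answer in outcomes]

# Truth-tellers end up with more stake on one fork, liars on the other;
# users decide which fork is worth asking future questions of.
print(settle_subjective({"p1": "A", "p2": "A", "p3": "B"},
                        {"p1": 10, "p2": 5, "p3": 20}, reward=2))
```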

If you look at this closely, you can see that this is really just a clever formalism for a reputation system. All that the system does is essentially record the votes of all participants, allowing each individual user wishing to ask a question to look at the history of each respondent and then from there choose which group of participants to ask. A very mundane, old-fashioned, and seemingly really not even all that cryptoeconomic approach to solving the problem. Now, where do we go from here?

Moving To Practicality

Pure subjectivocracy, as described above, has two large problems. First, in most practical cases, there are simply far too many decisions to make in order for it to be practical for users to decide which fork they want to be on for every single one. In order to prevent massive cognitive load and storage bloat, it is crucial for the set of subjectively-decided decisions to be as small as possible.

Second, if a particular user does not have a strong belief that a particular decision should be answered in one way or another (or, alternatively, does not know what the correct decision is), then that user will have a hard time figuring out which fork to follow. This issue is particularly strong in the context of a category that can be termed “very stupid users” (VSUs) – think not Homer Simpson, but Homer Simpson’s fridge. Examples include internet-of-things/smart property applications (eg. SUVs), other cryptoeconomic mechanisms (eg. Ethereum contracts, separate blockchains, etc), hardware devices controlled by DAOs, independently operating autonomous agents, etc. In short, machines that have (i) no ability to get updated social information, and (ii) no intelligence beyond the ability to follow a pre-specified protocol. VSUs exist, and it would be nice to have some way of dealing with them.

The first problem, surprisingly enough, is essentially isomorphic to another problem that we all know very well: the blockchain scalability problem. The challenge is exactly the same: we want to have the strength equivalent to all users performing a certain kind of validation on a system, but not require that level of effort to actually be performed every time. And in blockchain scalability we have a known solution: try to use weaker approaches, like randomly selected consensus groups, to solve problems by default, only using full validation as a fallback to be used if an alarm has been raised. Here, we will do a similar thing: try to use traditional governance to resolve relatively non-contentious issues, only using subjectivocracy as a sort of fallback and incentivizer-of-last-resort.

So, let us define yet another version of SchellingCoin:

  1. For a given question, all voters in the mechanism vote either A or B.
  2. Everyone who voted with the majority gets a share of the question fee (which we will call P); everyone who voted against the majority gets nothing. However, deposits are frozen for one hour after voting ends.
  3. A user has the ability to put down a very large deposit (say, 50*P) to “raise the alarm” on a particular question that was already voted on – essentially, a bet saying “this was done wrong”. If this happens, then the mechanism splits into two on-chain forks, with one answer chosen on one fork and the other answer chosen on the other fork.
  4. On the fork where the chosen answer is equal to the original voted answer, the alarm raiser loses the deposit. On the other fork, the alarm raiser gets back a reward of 2x the deposit, paid out from incorrect voters’ deposits. Additionally, the rewards for all other answerers are made more extreme: “correct” answerers get 5*P and “incorrect” answerers lose 10*P.

If we make a maximally generous assumption and assume that, in the event of a split, the incorrect fork quickly falls away and becomes ignored, the (partial) payoff matrix starts to look like this (assuming truth is A):

|  | You vote A | You vote B | You vote against consensus, raise the alarm |
| --- | --- | --- | --- |
| Others mainly vote A | P | 0 | -50P - 10P = -60P |
| Others mainly vote A, N >= 1 others raise alarm | 5P | -10P | -10P - (50 / (N + 1)) * P |
| Others mainly vote B | 0 | P | 50P + 5P = 55P |
| Others mainly vote B, N >= 1 others raise alarm | 5P | -10P | 5P + (50 / (N + 1)) * P |

The strategy of voting with the consensus and raising the alarm is clearly self-contradictory and silly, so we will omit it for brevity. We can analyze the payoff matrix using a fairly standard repeated-elimination approach:

  1. If others mainly vote B, then the greatest incentive is for you to raise the alarm.
  2. If others mainly vote A, then the greatest incentive is for you to vote A.
  3. Hence, no individual will ever vote B; knowing this, everyone will vote A, and so everyone’s incentive is to vote A (a quick numeric check of this follows below).
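Here is that quick numeric check, with the table’s values transcribed as-is (truth = A, P = 1):

```python
P = 1.0
N = 1   # number of other alarm raisers, where relevant

payoffs = {
    "others vote A":           {"A": P,     "B": 0.0,    "B+alarm": -60 * P},
    "others vote A, alarm up": {"A": 5 * P, "B": -10 * P,
                                "B+alarm": -10 * P - 50 * P / (N + 1)},
    "others vote B":           {"A": 0.0,   "B": P,      "A+alarm": 55 * P},
    "others vote B, alarm up": {"A": 5 * P, "B": -10 * P,
                                "A+alarm": 5 * P + 50 * P / (N + 1)},
}

for scenario, row in payoffs.items():
    best = max(row, key=row.get)
    print(f"{scenario:>25}: best response = {best}")

# Whenever others vote B, the best response raises the alarm; whenever
# others vote A, voting A is best. B never survives elimination, so A is
# the unique equilibrium (assuming subjective resolution favours truth).
```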

Note that, unlike the SchellingCoin game, there is actually a unique equilibrium here, at least if we assume that subjective resolution works correctly. Hence, by relying on what is essentially game theory on the part of the users instead of the voters, we have managed to avoid the rather nasty set of complications involving multi-equilibrium games and instead have a clearer analysis.

Additionally note that the “raise the alarm by making a bet” protocol differs from other approaches to fallback protocols that have been mentioned in previous articles here in the context of scalability; this new mechanism is superior to and cleaner than those other approaches, and can be applied in scalability theory too.

The Public Function of Markets

Now, let us bring our cars, blockchains and autonomous agents back into the fold. The reason why Bitcoin’s objectivity is so valued is to some extent precisely because the objectivity makes it highly amenable to such applications. Thus, if we want to have a protocol that competes in this regard, we need to have a solution for these “very stupid users” among us as well.

Enter markets. The key insight behind Hayek’s particular brand of libertarianism in the 1940s, and Robin Hanson’s invention of futarchy half a century later, is the idea that markets exist not just to match buyers and sellers, but also to provide a public service of information. A prediction market on a datum (eg. GDP, unemployment, etc) reveals what the market thinks the value of that datum will be at some point in the future, and a market on a good or service or token reveals to interested individuals, policymakers and mechanism designers how much the public values that particular good or service or token. Thus, markets can be thought of as a complement to SchellingCoin in that they, like SchellingCoin, are also a window between the digital world and the “real” world – in this case, a window that reveals just how much the real world cares about something.

So, how does this secondary “public function” of markets apply here? In short, the answer is quite simple. Suppose that there exists a SchellingCoin mechanism, of the last type, and after one particular question two forks appear. One fork says that the temperature in San Francisco is 20°C; the other fork says that the temperature is 4000000000°C. As a VSU, what do you see? Well, let’s see what the market sees. On the one hand, you have a fork where the larger share of the internal currency is controlled by truth-tellers. On the other hand, you have a fork where the larger share is controlled by liars. Well, guess which of the two currencies has a higher price on the market…

In cryptoeconomic terms, what happened here? Simply put, the market translated the human intelligence of the intelligent users in what is an ultimately subjective protocol into a pseudo-objective signal that allows the VSUs to join onto the correct fork as well. Note that the protocol itself is not objective; even if the attacker manages to successfully manipulate the market for a brief period of time and massively raise the price of token B, the users are still going to have a higher valuation for token A, and when the manipulator gives up token A will go right back to being the dominant one.

Now, what are the robustness properties of this market against attack? As was brought up in the Hanson/Moldbug debate on futarchy, in the ideal case a market will provide the correct price for a token for as long as the economic weight of the set of honestly participating users exceeds the economic weight of any particular colluding set of attackers. If some attackers bid the price up, an incentive arises for other participants to sell their tokens and for outsiders to come in and short it, in both cases earning an expected profit and at the same time helping to push the price right back down to the correct value. In practice, manipulation pressure does have some effect, but a complete takeover is only possible if the manipulator can outbid everyone else combined. And even if the attacker does succeed, they pay dearly for it, buying up tokens that end up being nearly valueless once the attack ends and the fork with the correct answer reasserts itself as the most valuable fork on the market.

Of course, the above is only a sketch of how quasi-subjective SchellingCoin may work; in reality a number of refinements will be needed to disincentivize asking ambiguous or unethical questions, handle linear and not just binary bets, and optimize the non-exploitability property. However, if P + epsilon attacks, profit-seeking 51% attacks, or any other kind of attack ever actually do become a problem with objective SchellingCoin mechanisms, the basic model stands ready as a substitute.

Listening to Markets and Proof of Work

Earlier in this post, and in my original post on SchellingCoin, I posited a sort of isomorphism between SchellingCoin and proof of work – in the original post reasoning that because proof of work works, so will SchellingCoin, and above that because SchellingCoin is problematic, so is proof of work. Here, let us expand on this isomorphism further in a third direction: if SchellingCoin can be saved through subjectivity, then perhaps so can proof of work.

The key argument is this: proof of work, at the core, can be seen in two different ways. One way of seeing proof of work is as a SchellingCoin contest, an objective protocol where the participants that vote with the majority get rewarded 25 BTC and everyone else gets nothing. The other approach, however, is to see proof of work as a sort of constant ongoing “market” between a token and a resource that can be measured purely objectively: computational power. Proof of work is an infinite opportunity to trade computational power for currency, and the more interest there is in acquiring units in a currency the more work will be done on its blockchain. “Listening” to this market consists simply of verifying and computing the total quantity of work.

Seeing the description in the previous section of how our updated version of SchellingCoin might work, you may have been inclined to propose a similar approach for cryptocurrency, where if a cryptocurrency gets forked one can see the price of both forks on an exchange, and if the exchange prices one fork much more highly, that implies that that fork is legitimate. However, such an approach has a problem: determining the validity of a crypto-fiat exchange is subjective, and so the problem is beyond the reach of a VSU. But with proof of work as our “exchange”, we can actually get much further.

Here is the equivalence: exponential subjective scoring. In ESS, the “score” that a client attaches to a fork depends not just on the total work done on the fork, but also on the time at which the fork appeared; forks that come later are explicitly penalized. Hence, the set of always-online users can see that a given fork came later, and therefore that it is a hostile attack, and so they will refuse to mine on it even if its proof of work chain grows to have much more total work done on it. Their incentive to do this is simple: they expect that eventually the attacker will give up, and so they will continue mining and eventually overtake the attacker, making their fork the universally accepted longest one again; hence, mining on the original fork has an expected value of 25 BTC and mining on the attacking fork has an expected value of zero.
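A toy sketch of ESS scoring; the penalty constant, work units and timings are illustrative assumptions:

```python
import math

PENALTY_PER_SECOND = 0.0001   # illustrative discount rate

def ess_score(total_work, seconds_late):
    # Forks first seen later are discounted exponentially.
    return total_work * math.exp(-PENALTY_PER_SECOND * seconds_late)

original = {"total_work": 1_000_000, "seconds_late": 0}
attacker = {"total_work": 1_200_000, "seconds_late": 86_400}  # a day late

for name, fork in (("original", original), ("attacker", attacker)):
    print(name, ess_score(**fork))

# An always-online client keeps the original fork (far higher ESS score),
# even though a client comparing raw total work alone would briefly follow
# the attacker's heavier chain.
```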

VSUs that are not online at the time of a fork will simply look at the total proof of work done; this strategy is equivalent to the “listen to the child with the higher price” approach in our version of SchellingCoin. During an attack, such VSUs may of course temporarily be tricked, but eventually the original fork will win and so the attacker will have massively paid for the treachery. Hence, the subjectivity once again makes the mechanism less exploitable.

Conclusion

Altogether, what we see is that subjectivity, far from being an enemy of rigorous analysis, in fact makes many kinds of game-theoretic analysis of cryptoeconomic protocols substantially easier. However, if this kind of subjective algorithm design becomes accepted as the most secure approach, it has far-reaching consequences. First of all, Bitcoin maximalism, or any kind of single-cryptocurrency maximalism generally, cannot survive. Subjective algorithm design inherently requires a kind of loose coupling, where the higher-level mechanism does not actually control anything of value belonging to a lower-level protocol; this condition is necessary in order to allow higher-level mechanism instances to copy themselves.

In fact, in order for the VSU protocol to work, every mechanism would need to contain its own currency which would rise and fall with its perceived utility, and so thousands or even millions of “coins” would need to exist. On the other hand, it may well be possible to enumerate a very specific number of mechanisms that actually need to be subjective – perhaps, basic consensus on block data availability validation and timestamping and consensus on facts, and everything else can be built objectively on top. As is often the case, we have not even begun to see substantial actual attacks take place, and so it may well be over a decade until anything close to a final judgement needs to be made.


When the first personal computers reached a broader user base, many assumed that the benefit to us humans over the following years and decades would be great. People were supposed to stand at the centre of all this, and computers were supposed to simplify and brighten their lives. Such, at the time, was the utopian expectation.

Since then, the whole thing has developed such “…that we are the product of the corporations offering the software that supposedly supports us. We willingly hand over intimate data, over whose whereabouts we then lose all control. We are well on the way to total surveillance and exploitation of the masses. And the only ones earning from this are the big companies that produce these products…” (Jaron Lanier – Who Owns the Future? – 2014)

It is similar with crowdfunding: the approach is in principle a good one, but the implementation is not always consistently human-centred.
Take Kickstarter: you back projects and in return get some gimmick or other. You are mentioned in the credits of the film you backed, you get lunch with the founders, or you receive the very first copy of the funded book. In 2012, Oculus Rift collected US$2.4 million from backers on Kickstarter, who got T-shirts, posters or, put another way, a damp handshake in return. Two years later, Oculus Rift was acquired by Facebook for a casual US$2 billion, and the original backers were left standing outside. This is certainly no reproach of Oculus Rift, but it illustrates the problem very well. Had the backers participated in the company on equal terms (that is, in proportion to the size of their contribution), they would all have had a share in the company’s sale.
As it was, they were left with nothing but the T-shirt confirming their part in Oculus Rift. Wew…

Against this, various good approaches to fusing crowdfunding with blockchain currencies are growing, slowly but surely.
Backers buy the respective company’s cryptocoins and participate directly in the company. Through this bond they become more than just monetary supporters; their personal integration into the company is fostered too. SWARM is, to my knowledge, the current pioneer here.

The future, then, should be one in which it is not individual large corporations that skim off the big money (while not even participating honestly in the fiscal system), but one in which every individual can share in the success of an idea. In monetary terms as well as real ones.

shareable has a great article on this:

Owning Together Is the New Sharing

Companies and startups are aspiring toward an economy, and an Internet, that is more fully ours with the use of cooperatives, “commons-based peer production,” and cryptocurrencies.
VC-backed sharing economy companies like Airbnb and Uber have caused trouble for legacy industries, but gone is the illusion that they are doing it with actual sharing. Their main contribution to society has been facilitating new kinds of transactions — for a fee, of course, to pay back to their investors. “The sharing economy has become the on-demand economy,” laments Antonin Léonard, co-founder of the Paris-based network OuiShare, which connects sharing-economy entrepreneurs around the world.
Source: http://www.shareable.net/blog/owning-is-the-new-sharing

Ripple Labs is honored to be recognized as the world’s fourth Most Innovative Company in Money for 2015 by Fast Company.

“We are thrilled to be named as one of the most innovative companies in money and to be included among such respected brands,” said Ripple Labs co-founder and CEO Chris Larsen.

The inclusion by Fast Company in its annual list builds on continuing recognition of the work we’re doing at Ripple Labs by the media and the industry at large.

“Our vision is to transform the world of finance in a way that benefits everyone from banks to governments to innovative developers to merchants, consumers and the financially underserved,” Larsen said. “It is gratifying to know that others share a desire for this change and recognize our work.”

Most Innovative Companies is Fast Company’s highly anticipated annual ranking of the world’s leading enterprises and rising newcomers that exemplify the best in business and innovation. You can check out the full list here.

The list of leading companies in money includes:

1. Inventure

2. Stripe

3. Behaviosec

4. Ripple

5. Expensify

6. Apple Pay

7. Braintree

8. Nice Systems

9. Premise

10. Bluevine


Former FBI agent describes how he traced over 3,500 Bitcoin transactions

Everything can somehow be hacked: fingerprint scanners, cars, even Bitcoins.

Their supposed advantage – anonymity through the blockchain – can be undermined, provided you know the Bitcoin address of the user in question. But that, in my view, is exactly the point.
Of course the parties involved in a transaction know each other’s addresses, but for outsiders that is generally not the case. Then again, it is no different with the commonly used IBAN/BIC combination. Those, too, are known only to the parties involved.

Except that BTC transactions can be publicly traced at blockchain.info. That is exactly the great advantage of blockchain-based fiat currencies.
You just don’t get to choose who then traces the transaction.

All the more interesting, then, what WIRED writes:

Prosecutors Trace $13.4M in Bitcoins From the Silk Road to Ulbricht’s Laptop

If anyone still believes that bitcoin is magically anonymous internet money, the US government just offered what may be the clearest demonstration yet that it’s not. A former federal agent has shown in a courtroom that he traced hundreds of thousands of bitcoins from the Silk Road anonymous marketplace for drugs directly to the personal computer of Ross Ulbricht, the 30-year-old accused of running that contraband bazaar. In Ulbricht’s trial Thursday, former FBI special agent Ilhwan Yum described how he traced 3,760 bitcoin transactions over 12 months ending in late August 2013 from servers seized in the Silk Road investigation to Ross Ulbricht’s Samsung 700z laptop, which the FBI seized at the time of his arrest in October of that year. In all, he followed more than 700,000 bitcoins along the public ledger of bitcoin transactions, known as the blockchain, from the marketplace to what seemed to be Ulbricht’s personal wallets. Based on exchange rates at the time of each transaction, Yum calculated that the transferred coins were worth a total of $13.4 million.

Full article at WIRED.com

 

Chris Kanaan joined Ripple Labs two months ago to become the company’s VP of Engineering. The Yelp alum and Stanford grad is working closely with Ripple Labs CTO Stefan Thomas to oversee the engineering department, making sure teams are aligned with each other as well as the company’s product needs and vision. With headcount at nearly ninety, Chris’s arrival is timely, to say the least.

One reason the search took so long is that it was important to find someone who was not only qualified but also the right cultural fit. Chris wholeheartedly checks both boxes and we’re all incredibly excited to have him on the team.

I sat down with him for a brief interview to learn more about the man named Kanaan and also to check in and see how his first eight weeks have been.


Tell us about yourself!

Chris: After college, I moved to Kansas City, Missouri to take a very specific job doing 3D medical imaging at a huge 6,000-person multinational. I realized quickly that it wasn’t for me. They’d just paved their parking lot, so I’d bring my skateboard in on the weekends to work. I’d code. Then I’d skateboard and think. And I just remember getting badgered by campus cops even though the entire lot was empty. It clearly wasn’t a good fit. I ended up getting an EMT license and working on an ambulance at night. I was on a rotational program and ended up in London for a bit, but the change of scenery didn’t help.

I’d always dreamed of moving to San Francisco—ever since I saw it for the first time as a little kid. I had this picture of me standing below the Transamerica building in my dorm room. I thought it’d be the coolest place to be. So I got a one-way ticket to the U.S. and stayed on a friend’s couch in Berkeley.

I ended up at a company called Quantcast after reading a Craigslist ad. They do real-time bidding for advertising.  It was just a few people in a mostly empty room. Now it’s grown past 600. After Quantcast, I ended up at Yelp, where I worked for two years. I was an engineering manager, overseeing one team, then two.  After Yelp, I took some time to be with my family and spent lots of quiet time exploring technologies I had been interested in, but hadn’t had time to investigate.

A few years back, I had mined some Bitcoin. I thought—This is so different from any of the other ideas I’ve heard. It wasn’t just another app. Plus, Bitcoin had this very mysterious character in Satoshi Nakamoto.

Fast forward to today, I had moved on from Bitcoin and forgotten my wallet. That’s when I came across Ripple. Something just clicked. It seemed like this was absolutely the next chapter in this movement. It took the best aspects of Bitcoin—like the ledger—but improved on the concept, such as the system for closing ledgers and the ability to support all currencies. I stared into the face of everyone on the “About” page on the company website, and I wondered, “Do I think these people can do it?” The answer was a resounding “Yes.”

I was lucky enough to join.

What are you working on at Ripple Labs?

Last year was about finding product-market fit, and we figured out we needed to focus on liquidity first, integrating with the existing financial system before we could focus on end consumers. As a result, I’m very mindful of developing best practices to improve stability and our software development process, which will be key to becoming a true enterprise company.

What exactly are the responsibilities of a VP of Engineering?

I think Stefan (CTO at Ripple Labs) takes care of forward thinking and R&D. I complement him by growing the team and making sure everyone is on the same page—communicating well across teams as well as within them. My job is to make sure we are all working together efficiently, productively, and most of all collectively. So I’m working closely with product, with all the engineering teams, and with HR and recruiting.

I’m also keeping tabs on what the business development team has up next for integration so we can adjust our product roadmap accordingly. It’s important that our products can scale, that they are applicable to a wide range of integration clients.

We also need to continue to maintain the engine that powers the entire network, so I was very keen to meet everyone on the rippled team, many of whom are remote.

How’s it been so far?

I’ve been here eight weeks so far and it’s definitely been great. It’s excellent—not only what people have already done but also what we’re working on for the future.

It’s important that engineering is clearly focused now. Last year was grounded in exploration and experimentation. Now we have to drive that to completion if we want a strong, stable value network and partners using our technology across the world.

Chris Kanaan with Monica Long, VP of Marketing and Communications

How would you describe the team?

The way I view the rippled team, they are probably part of the top one percent of C++ developers. Each one individually was probably the smartest person at the previous company that they worked at. So it’s a big draw for them to work together—whereas at their previous job, their peers could only give them a rubber stamp because they couldn’t quite understand the scope and subtlety of their work. Now there is wide discussion across topics by compelling characters with diverse opinions. Above all, they’re working on a product that is completely fascinating.

In terms of the rest of the engineering team, they’re truly exceptional. It’s a nice mix of people who have worked in payments and finance as well as the startup world, so there’s a good combination of experiences, and the energy is apparent here. Everyone has a really good vibe, a buzz. Everyone is smiling. There’s an excitement but also this great attitude. As advertised, the culture has been humble and inclusive. And of course, as a young company, we have quite a bit of youth.

Can you tell us a little bit about who the real Chris Kanaan is?

I’m really interested in backpacking. The way I got into it—I had hiked the John Muir Trail with a friend, in the winter. We didn’t see another human being for a month in these snowy mountains.

Toward the end of the trip, we saw two people. One woman was trying to set the speed record for the Pacific Crest Trail (PCT). The other was a man named Scott, who was trying to do what he described as “the yo-yo,” a back-and-forth between Mexico and Canada on the PCT. I was just blown away.

The whole experience just opened my eyes. It was so cool, so surreal—not seeing anyone, then to see these people take it even further. I knew I had to try it myself one day. So my brother and I are planning to do it in 2020 or 2022.

Oh yeah, I also like to surf.

You’re just the quintessential Cali bro, aren’t you?

I was actually born in the mountains overlooking Beirut. I moved here when I was nine.

Do you ever visit?

I go every couple of years. My dad’s side of the family is there so I always have to keep my Arabic up. But my mom is actually Swedish-American.

What about school, what did you study—given your wide range of interests?

I went to Stanford University and studied computer science. Later, I got a Master’s in sociology.

My advisor was on the board of Friendster, and I got interested in understanding networks of people and representing them through code. I scraped company websites to build a network of investors and C-level employees, looking for patterns in their social relationships. I wanted to see if you could use social distance to predict investment outcomes.
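
That kind of analysis is simple to sketch in outline. Something like the following (invented names and edges, assuming the networkx library; not the actual research code) captures what “social distance” means in such a network:

```python
# Illustrative sketch: model people as graph nodes, professional ties as
# edges, and "social distance" as the shortest-path length between an
# investor and a company's officers. All names and edges are invented.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("investor_a", "ceo_x"),   # e.g., served on a board together
    ("ceo_x", "cto_y"),        # e.g., co-founded an earlier company
    ("investor_b", "cto_y"),
])

def social_distance(graph, a, b):
    """Shortest-path length between two people, or infinity if unconnected."""
    try:
        return nx.shortest_path_length(graph, a, b)
    except nx.NetworkXNoPath:
        return float("inf")

print(social_distance(g, "investor_a", "cto_y"))  # 2
```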

Any final thoughts?

Stay positive!

 

Follow Ripple on Twitter

Ripple

I already reported on the Ethereum IPO a few days ago. Now, unfortunately, the first problems are emerging. The developer team around “Wunderkind” Vitalik Buterin suffered a bitter setback when a …
ethereum – Google Blogsuche