Friday, November 18, 2016

Ways Sheep Can Die

Source: Marti Leimbach via Marginal Revolution.

Some of the ways sheep can die:

  • Getting stuck on their backs and dying of suffocation
  • Attacked by flies
  • Eaten by maggots
  • Being attacked by dogs or any other living creature
  • Being frightened into a heart attack by imagining the dog is going to attack, even though it is not 
  • Drowning (Are we surprised sheep cannot swim?)
  • Suffocating in snow (surprisingly common)
  • Hoof infections that poison the blood
  • Almost exploding with grass because they have eaten too much and are unable to pass wind
  • If they get too hot
  • If they get too cold

Thursday, October 1, 2015

Onwards and upwards

I want to thank BitPay for being a notable sponsor of bitcoin core development for over two years.  BitPay is truly a leader in open source, with bitcore and Copay being two notable examples.

It was an exciting and transformative time at BitPay, and I'm now transitioning to become a member of the BitPay Advisory Board.  I'll be focusing most of my time on building new and interesting things in the bitcoin space.

Other members of BitPay's Advisory Board include Arthur Levitt and Gavin Andresen, so I'm excited to continue to support BitPay.  My email will remain active, as I remain a BitPay advisor.

Update: FAQs answered here.

Tuesday, September 29, 2015

Decoupling Financial Indices with Decentralized Bitcoin Fact Generators

Financial indices such as the Dow Jones Industrial Average or the S&P 500 are well known.  In the age of ETFs and ETNs, a core index is a requirement of the investment product.

In the age of decentralized software, this will be further decoupled into networks of fact generators and verified algorithms.

Creating an index such as the S&P 500 requires two primary components:  Input data (stock prices), and an algorithm (criteria for selecting stock X with proportional weight Y).

Using technologies such as cryptographic hash functions, merkle trees, bitcoin blockchain timestamping, and bitcoin oracles, a better, more secure, more transparent financial index system may be developed.  Let's call it "Index-NG."

In the Index-NG system, the algorithm - the software - that turns volumes of input data into "The S&P 500 closing price" or "current gold price at 12:01pm" would transform from a clunky Excel spreadsheet (yes, really) or proprietary S&P software into

  • Open source software
  • Written in a smart contract language such as bitcoin script, Moxie, or ethereum.
  • Secured against corruption and tampering via blockchain hash
  • If not entirely in-chain (bitcoin script, ethereum), processed by a network of oracles run by separate businesses/individuals.
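
The "secured against corruption and tampering via blockchain hash" point can be sketched in a few lines.  The algorithm text and the anchored value below are hypothetical stand-ins, and only Python's standard library is assumed:

```python
import hashlib

def anchor_hash(data: bytes) -> str:
    """SHA-256 digest, hex-encoded -- the small value one would
    timestamp into the blockchain."""
    return hashlib.sha256(data).hexdigest()

# The open source index algorithm, published as text (toy example).
algorithm_source = b"def index(prices): return sum(prices) / len(prices)"

# The hash is published once, at release time.
published_anchor = anchor_hash(algorithm_source)

# Anyone auditing later recomputes and compares; any tampering,
# even a single byte, changes the digest.
assert anchor_hash(algorithm_source) == published_anchor
assert anchor_hash(algorithm_source + b"!") != published_anchor
```

Because the digest is tiny and the source is public, the audit step costs almost nothing for any third party.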
This index algorithm architecture increases transparency and reduces the level of trust we place in any one organization or developer.  The level of peer review is greatly increased.  Auditing is a breeze.

To further decentralize and reduce trust required in the index algorithm, the index's input data is now considered.  The collection of raw data becomes a key act in a decentralized world.

Raw field data is collected by data sensors, and securely stored in the blockchain:  Stock price data, climate station weather data, air pollution data, and more.  Hashing and merkle trees are used to aggregate large volumes of data into small, secure blockchain anchors.
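
The hashing-and-merkle-tree step can be sketched as follows.  This is a minimal illustration of the technique (duplicating the last node on odd-sized levels, as bitcoin does), with made-up price data, not production code:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, raw 32-byte digest."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold any number of data items into a single 32-byte root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # odd count: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A day's worth of readings -- however many -- anchors as one small hash.
prices = [b"AAPL:110.06", b"MSFT:43.89", b"XOM:82.15"]
root = merkle_root(prices)
print(root.hex())
```

Only the 32-byte root needs to go on-chain; the full dataset can live anywhere, and any single altered leaf changes the root.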

The software and actors that collect the raw data and securely store it in the blockchain are fact generators.  Fact generators are the second half of a decentralized financial index.  Their role is best illustrated with some examples, and is central to the security of the entire system.

Creating an index such as the S&P 500 requires building a secure digital loop for publishing its data and algorithms:
  • NYSE and NASDAQ publish digitally-signed intraday or closing prices, hashed into the blockchain.  Publish this hash in the New York Times and Wall Street Journal stock sections, too!  NYSE and NASDAQ play the role of fact generators, here.
  • Standard and Poor's publishes a digitally-signed S&P 500 algorithm, hashed into the blockchain.
  • Any bank, government agency, individual or machine-based agent may then independently generate the S&P500 index at any time, secured against tampering, with two simple pieces of information:  The hash of the algorithm, and the hash of the data summary.
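That regeneration loop can be sketched end to end.  The prices, the divisor, and the toy algorithm here are hypothetical stand-ins for the signed exchange data and the published S&P algorithm:

```python
import hashlib, json

def anchor(data: bytes) -> str:
    """Hex SHA-256 -- the value timestamped into the blockchain."""
    return hashlib.sha256(data).hexdigest()

# Fact generators (NYSE/NASDAQ in the example) publish signed closing
# prices; a hash of the canonical serialization is anchored on-chain.
price_feed = json.dumps(
    {"AAPL": 110.06, "MSFT": 43.89, "XOM": 82.15}, sort_keys=True
).encode()
data_anchor = anchor(price_feed)

# The index provider publishes its algorithm; the source hash is
# anchored the same way (omitted here -- same pattern as the data).
def toy_index(prices: dict, divisor: float = 1.0) -> float:
    """Hypothetical divisor-style, price-weighted index."""
    return round(sum(prices.values()) / divisor, 2)

# Any bank, agency, or machine agent can now regenerate the index:
assert anchor(price_feed) == data_anchor   # data is untampered
value = toy_index(json.loads(price_feed))
print(value)   # -> 236.1
```

The `sort_keys=True` canonicalization matters: everyone must serialize the data identically, or honest parties would compute different anchors for the same facts.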
Creating an ETF, then, becomes a second layer of decentralized algorithms which trigger trades in an ETF's primary markets.  ETFs can exist and be run 100% human-free.  With bitcoin as the value token, both the stock price and the value exist on the blockchain as digitally provable values, ensuring an autonomous agent or DAC can prove with 100% certainty that certain trades should/should not be executed.

Another example is measuring air quality or climate data, a dataset perhaps more subject to manipulation (or accusations thereof).  One can imagine
  • 1st layer: A network of Beijing air quality sensors or US-based climate temperature sensors securely timestamps their data into the blockchain.
  • 1st layer: Satellite infrared and smog imagery is securely hashed into the blockchain.
  • 1st layer: Bitcoin/USD exchange rate data is digitally signed by each bitcoin exchange, and securely hashed into the blockchain.
  • 2nd layer: 10 governments and NGOs around the world publish their assessments of this data - and the models/algorithms used to achieve the assessments.
  • 3rd layer:  IMF and other agencies run automated agents which transfer bitcoin value based on the pollution/climate assessments, modified by bitcoin/USD exchange rate to eliminate volatility.
In this example, the fact generators - air quality sensors - are mostly untrusted.  A 2nd layer of software - also fact generators, generating derivative facts - achieves a consensus or quorum over untrusted data.  The 3rd layer of software then acts upon that quorum of derived facts.
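
The 2nd-layer consensus over untrusted sensors can be sketched with a simple median-plus-quorum rule.  The two-thirds threshold and the tolerance below are illustrative assumptions, not taken from any deployed system:

```python
from statistics import median

def quorum_fact(readings, tolerance):
    """Derive a consensus value from untrusted sensor readings.

    Take the median, then require a quorum (more than 2/3 of sensors)
    to agree with it within `tolerance`; return None if no quorum.
    """
    m = median(readings)
    agreeing = [r for r in readings if abs(r - m) <= tolerance]
    if len(agreeing) * 3 > len(readings) * 2:
        return median(agreeing)
    return None

# Nine honest air-quality sensors plus one manipulated outlier:
pm25 = [88.1, 87.4, 90.2, 89.0, 88.8, 87.9, 90.5, 88.3, 89.6, 250.0]
print(quorum_fact(pm25, tolerance=5.0))   # -> 88.8
```

The single tampered sensor is outvoted; if too many sensors disagree, the function refuses to emit a derived fact at all, which is the safer failure mode for a 3rd layer that moves real value.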

In a decentralized world, the gathering of raw data, signing, hashing and synthesizing it - fact generation - becomes the key act upon which software will automatically trigger further actions - including real world actions such as hiring humans, moving shipping containers from point A to B, delivering groceries and more.

Decentralized software - using secured digital facts, running on blockchains (bitcoin, ethereum) or networks of oracles - will form an ecosystem that makes the entire world operate on a more transparent, more efficient, less corruptible basis.

The essence of smart contracts is executing a series of actions (and inactions) based on computer processing of digital facts.

Tuesday, December 16, 2014

Open development processes and reddit kerkluffles

It can be useful to review open source development processes from time to time.  This reddit thread[1] serves both as a case study and as an introduction to OSS process for newbies.

Dirty Laundry

When building businesses or commercial software projects, outsiders typically hear little about the internals of project development.  The public only hears what the companies release, which is prepped and polished. Internal disagreements, schedule slips, engineer fistfights are all unseen.

Open source development is the opposite.  The goal is radical transparency.  Inevitably there is private chatter (0day bugs etc.), but the default is openness.  This means that it is normal practice to "air dirty laundry in public."  Engineers will disagree, sometimes quietly, sometimes loudly, sometimes rudely and with ad hominem attacks.  On the Internet, there is a pile-on effect, where informed and uninformed supporters add their 0.02 BTC.

Competing interests cloud the issues further.  As a technology matures, engineers are typically employed by organizations.  Those organizations have different strategies and motivations.  These organizations will sponsor work they find beneficial.  Sometimes those orgs are non-profit foundations, sometimes for-profit corporations.  Sometimes that work is maintenance ("keep it running"), sometimes that work is developing new, competitive features the company feels will give it a better market position.  In a transparent development environment, all parties are hyperaware of these competing interests.  Internet natterers painstakingly document and repeat every conspiracy theory about Bitcoin Foundation, Blockstream, BitPay, various altcoin developers, and more as a result of these competing interests.

Bitcoin and altcoin development adds an interesting new dimension.  Sometimes engineers have a more direct conflict of interest, in that the technology they are developing is also potentially their road to instant $millions.  Investors, amateur and professional, have direct stakes in a certain coin or coin technology.  Engineers also have an emotional stake in technology they design and nurture.  This results in incentives where supporters of a non-bitcoin technology work very hard to thump bitcoin.  And vice versa.  Even inside bitcoin, you see "tree chains vs. side chains" threads of a similar stripe.  This can lead to a very skewed debate.

That should not distract from the engineering discussion.  Starting from first principles, Assume Good Faith[2].  Most engineers in open source tend to mean what they say.  Typically they speak for themselves first, and their employers value that engineer's freedom of opinion.  Pay attention to the engineers actually working on the technology, and less attention to the noise bubbling around the Internet like the kindergarten game of grapevine.

Being open and transparent means engineering disagreements happen in public.  This is normal.  Open source engineers live an aquarium life[3].

What the fork?

In this case, a tweet suggests consensus bug risks, which reddit account "treeorsidechains" hyperbolizes into a dramatic headline[1].  However, the headline would seem to be the opposite of the truth.  Several changes were merged during 0.10 development which move snippets of source code into new files and new sub-directories.  The general direction of this work is creating a "libconsensus" library that carefully encapsulates consensus code in a manner usable by external projects.  This is a good thing.

The development was performed quite responsibly:  Multiple developers would verify each cosmetic change, ensuring no behavior changes had been accidentally (or maliciously!) introduced.  Each pull request receives a full multi-platform build + automated testing, over and above individual dev testing.  Comparisons at the assembly language level were sometimes made in critical areas, to ensure zero before-and-after change.  Each transformation gets the Bitcoin Core codebase to a more sustainable, more reusable state.

Certainly zero-change is the most conservative approach. Strictly speaking, that has the lowest consensus risk.  But that is a short term mentality.  Both Bitcoin Core and the larger ecosystem will benefit when the "hairball" pile of source code is cleaned up.  Progress has been made on that front in the past 2 years, and continues.   Long term, combined with the "libconsensus" work, that leads to less community-wide risk.

The key is balance.  Continue software engineering practices -- like those just mentioned above -- that enable change with the least consensus risk.  Part of those practices is review at each step of the development process:  social media thought bubble, mailing list post, pull request, git merge, pre-release & release.  It probably seems chaotic at times.  In effect, git[hub] and the Internet enable a dynamic system of review and feedback, where each stage provides a check-and-balance for bad ideas and bad software changes.  It's a human process, designed to acknowledge and handle that human engineers are fallible and might make mistakes (or be coerced/under duress!).  History and field experience will be the ultimate judge, but I think Bitcoin Core is doing well on this score, all things considered.

At the end of the day, while no change is without risk, version 0.10 work was done with attention to consensus risk at multiple levels (not just short term).

Technical and social debt

Working on the Linux kernel was an interesting experience that combined git-driven parallel development and a similar source code hairball.  One of the things that quickly became apparent is that cosmetic patches, especially code movement, were hugely disruptive.  Some even termed them anti-social.  To understand why, it is important to consider how modern software changes are developed:

Developers work in parallel on their personal computers to develop XYZ change, then submit their change "upstream" as a github pull request.  Then time passes.  If code movement and refactoring changes are accepted upstream before XYZ, then the developer is forced to update XYZ (typically trivial fixes), re-review XYZ, and re-test XYZ to ensure it remains in a known-working state.

Seemingly cosmetic changes such as code movement have a ripple effect on participating developers, and wider developer community.  Every developer who is not immediately merged upstream must bear the costs of updating their unmerged work.

Normally, this is expected.  Encouraging developers to build on top of "upstream" produces virtuous cycles.

However, a constant stream of code movement and cosmetic changes may produce a constant stream of disruption to developers working on non-trivial features that take a bit longer to develop before going upstream.  Trivial changes become encouraged, and non-trivial changes face a binary choice of (a) be merged immediately or (b) bear added rebase, re-review, and re-test costs.

Taken over a timescale of months, I argue that a steady stream of cosmetic code movement changes serves as a disincentive to developers working with upstream.  Each upstream breakage has a ripple effect to all developers downstream, and imposes some added chance of newly introduced bugs on downstream developers.  I'll call this "social debt", a sort of technical debt[4] for developers.

As mentioned above, the libconsensus and code movement work is a net gain.  The codebase needs cleaning up.  Each change however incurs a little bit of social debt.  Life is a little bit harder on people trying to get work into the tree.  Developers are a little bit more discouraged at the busy-work they must perform.  Non-trivial pull requests take a little bit longer to approve, because they take a little bit more work to rebase (again).

A steady flow of code movement and cosmetic breakage into the tree may be a net gain, but it also incurs a lot of social debt.  In such situations, developers find that tested, working out-of-tree code repeatedly stops working during the process of trying to get that work in-tree.  Taken over time, it discourages working on the tree.  It is rational to sit back, not work on the tree, let the breakage stop, and then pick up the pieces.

Paradox Unwound

Bitcoin Core, then, is pulled in opposite directions by a familiar problem.  It is generally agreed that the codebase needs further refactoring.  That's not just isolated engineer nit-picking.  However, for non-trivial projects, refactoring is always anti-social in the short term.  It impacts projects other than your own, projects you don't even know about. One change causes work for N developers.  Given these twin opposing goals, the key, as ever, is finding the right balance.

Much like "feature freeze" in other software projects, developing a policy that opens and closes windows for code movement and major disruptive changes seems prudent.  One week of code movement & cosmetics followed by 3 weeks without, for example.  Part of open source parallel development is social signalling:  Signal to developers when certain changes are favored or not, then trust they can handle the rest from there.

While recent code movement commits themselves are individually ACK-worthy, professionally executed and moving towards a positive goal, I think the project could strike a better balance when it comes to disruptive cosmetic changes, a balance that better encourages developers to work on more involved Bitcoin Core projects.

Friday, December 12, 2014

Survey of largest Internet companies, and bitcoin

Status report: Internet companies & bitcoin

Considering the recent news of Microsoft accepting bitcoin as payment for some digital goods, it seemed worthwhile to make a quick status check.  Wikipedia helpfully supplies a list of the largest Internet companies.  Let's take that list on a case-by-case basis.

Amazon.  As I blogged earlier, it seemed likely Amazon will be a slower mover on bitcoin.

Google.  Internally, there is factional interest.  Some internal fans, some internal critics.  Externally, very little.  Eric Schmidt has said good things about bitcoin.  Core developer Mike Hearn worked on bitcoin projects with the approval of senior management.

eBay.  Actively considering bitcoin integration.  Produced an explainer video on bitcoin.

Tencent. Nothing known.  Historical note:  Tencent, QQ, and bitcoin (CNN)

Alibaba.  Seemingly hostile, based on government pressure.  "Alibaba bans Bitcoin"

Facebook.  Nothing known.

Rakuten.  US subsidiary accepts bitcoin.

Priceline.  Nothing known.  Given that competitors Expedia and CheapAir accept bitcoin, it seems like momentum is building in that industry.

Baidu.  Presumed bitcoin-positive.  Briefly flirted with bitcoin, before government stepped in.

Yahoo.  Nothing known at the corporate level.  Their finance product displays bitcoin prices.

Salesforce.  Nothing known.  Third parties such as AltInvoice provide bitcoin integration through plugins.

Yandex. Presumed bitcoin-positive.  They launched a bitcoin conversion tool before their competitors.  Some critics suggest Yandex Money competes with bitcoin.

By my count, 6 out of 12 of the largest Internet companies have publicly indicated some level of involvement with bitcoin.

Similar lists may be produced by looking at the largest technology companies, and excluding electronics manufacturers.  Microsoft and IBM clearly top the list, both moving publicly into bitcoin and blockchain technology.

Wednesday, November 5, 2014

Prediction: GOP in 2014, Democrat WH in 2016

Consider today's US mid-term election results neutrally:

  • When one party controls both houses of Congress, that party will become giddy with power and over-reach.
  • When one party reaches minority status in both houses, that party resorts to tactics it previously condemned ("nuclear option").
  • It gets ugly when one party controls Congress, and another party controls the White House.

The most recent example is Bush 43 + Democrats, but that is only the latest example.

Typical results from this sort of situation:
  • An orgy of hearings.
  • A raft of long-delayed "red meat for the base" bills will be passed in short order.
  • Political theatre raised one level:  Congress will pass bills it knows the President will veto (and knows cannot achieve a veto override).
  • A 2-house minority party becomes the obstructionist Party Of No.
A party flush with power simply cannot resist over-reach.  Democrats and Republicans both have proven this true time and again.

As such, we must consider timing.  GOP won the mid-terms, giving them two years to over-reach before the 2016 general election.  Voters will be tired of the over-reach, and the pendulum will swing back.

Predicted result:  Democrats take the White House in 2016.

If the 2014 mid-term elections had been the 2016 election, we would be looking at a full sweep, with GOP in House, Senate and White House.

Losing could be the best thing Democrats did for themselves in 2014.

P.S. Secondary prediction:  ACA will not be repealed.  ACA repeal bill will be voted upon, but will not make it to the President's desk.

Monday, July 7, 2014

What should the news write about, today?

I have always wanted to be a journalist.  As a youth, it seemed terribly entertaining, jet-setting around the world chasing stories by day.  Living like Ernest Hemingway by night.  Taking [some of the first in the world] web monkey jobs at Georgia Tech's Technique and CNN gave me the opportunity to learn the news business from the inside.

Fundamentally, from an engineering perspective, the news business is out of sync with actual news events.

News events happen in real time.  There are bursts of information.  Clusters of events happen within a short span of time.  There is more news on some days, less news on others.

The news business demands content, visits, links, shares, likes, follows, trends.  Assembly line production demands regularized schedules and deadlines.  Deadlines imply a story must be written, even if there is no story to write.

Today's news business is incredibly cut throat.  Old dinosaurs are thrashing about.  Young upstarts are too.  My two young children only know of newspapers from children's storybooks, and icons on their Android tablets.  Classified ads, once a traditional revenue driver, have gone the way of the Internet.  Many "newspapers" are largely point-and-click template affairs, with a little local reporting thrown in.  Robots auto-post every press release.  Content stealing abounds.

All these inherent barriers exist for those brave few journalists left on the robot battlefield.  As usual with any industry that is being automated, the key to staying ahead is doing things that humans are good at, but robots are not: creativity, inventiveness, curiosity, detective work.  Avoiding herds, cargo cults, bike shedding, conventional wisdom.

In my ideal world, news sites would post more news, look and feel a bit different, on days and weeks where there is a lot of news.  On slow news days, the site/app should feel like it's a slow news day.

The "if it bleeds, it leads" pattern is worn out, and must be thrown in the rubbish.

Every day, every week, when a reporter is met with the challenge of meeting a deadline to feed the content beast, the primary question should be:  What recent trends/events impact the biggest percentage of your audience?

Pick any "mainstream" news site.  How many stories impact those beyond the immediate protagonists/antagonists/victims/authorities involved?

The news business, by its very nature, obscures and disincentivizes reporting on deep, impactful, and probably boring trends shaping our lives.  The biggest changes that happen to the human race are largely apparent in hindsight, looking back over the decades or hundreds or thousands of years.

The good stories are always the hardest to find.  Every "news maker" has the incentive to puff their accomplishments, and hide their failures.  Scientists have the same incentives (sadly):  Science needs negative feedback ("this theory/test failed!") yet there are few incentives to publish that.  Reporters must seek and tell the untold story, not the story everyone already knows.

Journalists of 20+ years ago were information gateways.  Selecting which bit of information to publish, or not, was a key editorial power.  Now, with the Internet, the practice is "publish all, sift later."  Today's journalists must reinvent themselves as modern detectives, versus the information gateways and "filters" of past decades.