On the intellectual origin of blockchain technology (I). Early forerunners

In my previous contribution to this blog I talked about certain intellectual obstacles that can trip up jurists when dealing with the definition of smart contracts and blockchain technology. The first of these is a deficit in technology training. One of the particular features of this technology, now a worldwide talking point because of its multiple applications and disruptive potential, is precisely that a high intellectual threshold must be crossed to gain access to it, because it is hard to explain and understand.

In this and the following post I am going to try to bring a little perspective to the subject, which may help readers make out the signal (the significant points) among all the media noise that is currently causing so much interference.

Specifically, in the two posts I have planned, I am going to mention a few figures who made important intellectual contributions along the path that has led to both cryptocurrencies and blockchain technology, distinguishing between early forerunners and more recent forerunners.

As an early forerunner I could mention the German philosopher Leibniz, who at the end of the seventeenth century, besides building a mechanical universal calculator, conceived the idea of a machine that stored and handled information codified in binary digital code. But I am going to focus on two figures closer in time, who are regarded as the founding fathers of computing: the Briton Alan Turing and the Hungarian-born John von Neumann, who later became a US citizen. And why these two? Not just because in the nineteen thirties and forties they laid the intellectual foundations, in mathematics and logic, that gave rise to the development of computing and with it the digital universe we now inhabit, but also because their ideas and visions foresaw a large part of the transformation we are experiencing right now.

Alan Turing came into the public eye with a 2014 film (The Imitation Game) about his work during the Second World War in the British intelligence services, where he contributed to deciphering the codes of Enigma, the famous encrypting machine used by the German navy and army in their communications. Our interest in him does not arise from his connection with cryptography, however. I especially want to talk about the work that made him famous, published in 1936 in the prestigious Proceedings of the London Mathematical Society: “On Computable Numbers, with an Application to the Entscheidungsproblem”.


The Entscheidungsproblem, or decidability problem, is an arduous logical and mathematical question that kept a number of logicians and philosophers occupied in the early twentieth century, from the moment the German mathematician David Hilbert posed it in his writings of the early 1900s as one of the challenges remaining for the century that was then beginning: can mathematics provide an answer of a demonstrative type to every problem it poses? Or, in other words, is it possible to axiomatize mathematics fully and reconstruct it as a complete and self-sufficient system? Hilbert, the leading figure in what is known as formalism, considered that it was, and Russell and Whitehead believed they had achieved it with their Principia Mathematica. However, an introverted logic professor called Kurt Gödel proved it was not possible in a difficult and revolutionary article published in 1931, in which he formulated what is known as Gödel’s incompleteness theorem.

Following Gödel, in the paper mentioned above Turing, as part of an argument about the limits of computability, conceived the first stored-program machine, later known as the universal Turing machine, which at that time was only a theoretical construct: a machine whose memory stores not only data but also the very program that handles or computes those data, a machine that can be reprogrammed and can compute anything that is computable; in other words, what we now regard as a computer. I must also mention his early interest in artificial intelligence, to the point that the so-called “Turing test” is still used to assess how intelligent a device is.
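For readers who like to see the stored-program idea made concrete, here is a minimal sketch in Python (my own illustration, not anything from Turing’s paper): the executing routine is general-purpose, and the particular “machine” it runs is just data, a table of transitions that could be swapped for any other.

```python
# A minimal sketch of the stored-program idea: the routine below is general-purpose,
# and the "machine" it executes is just data (a transition table). The toy table
# shown, which appends a 1 to a run of 1s, is purely illustrative.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Execute the machine described by `transitions` on the given tape.

    `transitions` maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right) or 0 (stay). Stops on state "halt".
    """
    cells = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells[head] if 0 <= head < len(cells) else blank
        state, write, move = transitions[(state, symbol)]
        if head >= len(cells):          # grow the tape to the right if needed
            cells.append(blank)
        elif head < 0:                  # or to the left
            cells.insert(0, blank)
            head = 0
        cells[head] = write
        head += move
    return "".join(cells)

# Toy machine: scan right over the 1s, write one more 1 at the first blank, halt.
increment = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}

print(run_turing_machine(increment, "111"))  # -> "1111"
```

The point is that swapping in a different transition table reprograms the “machine” without touching the routine that executes it, which is precisely the universality Turing was after.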

And so, when the Russian-Canadian prodigy Vitalik Buterin, who had proposed Ethereum in 2013 at the tender age of 19, brought that second-generation blockchain into operation in 2015, he told us that it was a blockchain using a Turing-complete programming language, one aspiring to become the universal programming machine, the World Computer. This brought Turing’s original idea into a new dimension: it was no longer a question of creating an individual reprogrammable machine capable of universal computation, but of a network of computers which, besides simultaneously recording simple messages like cryptocurrency transactions, also allows any programmable transaction within its capacity to be executed on it, while every step in the process and its result is stored on a distributed, transparent and incorruptible record with universal access. Put another way, its universal nature does not relate only to the programmable object (as with Turing’s abstract machine and our current computers), but also to the agents or devices that operate it, in that the program is executed and the result recorded simultaneously by a vast number of computers throughout the world.

And then there is John von Neumann, one of the great scientific geniuses of the twentieth century, on a par with Einstein. In relation to our subject, von Neumann designed the logical structure of one of the first high-speed electronic digital stored-program computers, the first physical incarnations of the imagined universal Turing machine. The machine built along these lines at the end of the nineteen forties at the Institute for Advanced Study in Princeton (USA), following the architecture von Neumann had set out in his famous 1945 report on the EDVAC, served as an instrument to perform the complex and extremely laborious mathematical calculations required for the design and control of nuclear weapons, in particular the first hydrogen bombs. In fact, even today the structure of every computer we use is based on what is known as the “von Neumann architecture”, which consists of a memory, a processing unit, a central control unit and input/output elements for communicating with the outside world.

Beyond the fact that practically every computing application or development owes its existence (despite his premature death at the age of 53) to the visions and ideas of von Neumann (including artificial intelligence, to which he devoted his last ideas in works such as the “Theory of Self-Reproducing Automata”), I would like to take a look here at two ideas of this great pioneer.


Firstly, in the years following the end of the Second World War almost everything was in short supply, and to build the Princeton machine they had to use leftover equipment and materials from the weapons manufacturing industry. Making a machine put together from these components work properly was a veritable challenge, which von Neumann confronted with the idea that a reliable machine had to be built from thousands of unreliable parts. This idea, which he developed theoretically in two papers of 1951 and 1952 (“Reliable Organisms from Unreliable Components” and “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components”), links up with the formulation, later in the eighties and in relation to the reliability of computer networks created for defense uses, of what is known as the “Byzantine Generals’ Problem”, which is usually mentioned in explanations of blockchain. It is also related to “resilience”, one of today’s buzzwords, and is at the very core of blockchain design: how to create the most reliable and transparent recording system that has ever existed out of individual agents any one of which could be faulty or dishonest.
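As a toy illustration of that idea (my own sketch, not taken from von Neumann’s papers, with arbitrary figures), the following Python fragment shows how simple redundancy with majority voting turns components that are individually wrong a fifth of the time into an aggregate answer that is practically never wrong.

```python
# Reliability from unreliable parts: each "component" returns the correct bit only
# 80% of the time, yet the majority vote of a few hundred of them is almost never wrong.

import random

def unreliable_component(correct_bit: int, p_correct: float) -> int:
    """Return the correct bit with probability p_correct, the wrong bit otherwise."""
    return correct_bit if random.random() < p_correct else 1 - correct_bit

def majority_vote(correct_bit: int, n_components: int, p_correct: float) -> int:
    """Ask n_components unreliable components and take the majority answer."""
    ones = sum(unreliable_component(correct_bit, p_correct) for _ in range(n_components))
    return 1 if ones > n_components / 2 else 0

random.seed(0)
trials = 2_000
single_errors = sum(unreliable_component(1, 0.8) != 1 for _ in range(trials))
voted_errors = sum(majority_vote(1, 301, 0.8) != 1 for _ in range(trials))
print(f"single component error rate: {single_errors / trials:.3f}")   # around 0.20
print(f"majority of 301 error rate:  {voted_errors / trials:.3f}")    # effectively 0
```

A public blockchain pushes the same logic further: the honest majority of nodes plays the role of the redundant components, so that the record as a whole can be trusted even though no single participant needs to be.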


In relation to blockchain design we can also trace the footprint of another great intellectual contribution from von Neumann. Gifted with an extraordinarily broad intelligence, besides taking an interest in and revolutionizing set theory, quantum physics and computer science, he also forayed into economic science, where he was no less revolutionary: he pioneered game theory, co-authoring with Oskar Morgenstern in 1944 the work entitled “Theory of Games and Economic Behavior”. Much of game theory, which analyzes the rationality of the strategic decisions of individual agents operating in an economy on the basis of the likely behavior of other agents, is also present in the ingenious design explained by the enigmatic Satoshi Nakamoto in his 2008 paper. After all, the design of a public blockchain, such as the one on which Bitcoin is based, rests on the idea that the pursuit of individual gain by a few agents (the “miners”) results in the general reliability of the system, and on the idea that it makes little sense to defraud a system in order to obtain an asset whose economic value depends directly on the general belief in the reliability of that system.

(I would recommend the following books to anyone with a taste for more on these subjects: “Turing’s Cathedral: The Origins of the Digital Universe”, by George Dyson, Pantheon, ISBN-13: 978-037542275; “Turing: Pioneer of the Information Age”, by B. Jack Copeland, Oxford University Press, ISBN-13: 978-0198719182; and “Incompleteness: The Proof and Paradox of Kurt Gödel (Great Discoveries)”, by Rebecca Goldstein, W. W. Norton & Company, ISBN: 978-0393327601.)

Taxation of the digital economy: the European package.

On March 21, 2018, the European Commission published a set of proposed new rules and measures on the taxation of the digital economy, in an attempt to set a starting point for the expected international negotiations on this matter at a time when the OECD has preferred to acknowledge the absence of sufficient consensus.

The key elements of this package are two proposals for directives: one containing rules on the corporate taxation of businesses with a significant digital presence; the other, a proposal for a directive on a common system for a tax on digital services, charged on revenues from the provision of certain digital services. In other words, a proposal for the creation of a new tax in every member state on the revenues from those services, as a transitional solution until the directive on significant digital presence can be approved, which is not currently possible owing to the failure to reach a consensus within Europe and internationally. The third element completing the package is a Commission Recommendation approved on the same date, March 21, suggesting that member states include in their tax treaties with non-Union states the guiding principles proposed in the European Union regarding the idea of a significant digital presence. All of this is rounded off with a Communication from the Commission to the European Parliament and the Council on the background and reasons for the proposed reform.

Through this package the European Commission has taken a further step towards its goal of harmonizing corporate income tax and has moved the global debate on by proposing a two-stage plan. Faced with the absence of a consensus on the taxation of this income, the Commission proposes first implementing a tax that in actual fact is charged at 3 percent on revenues. This new digital services tax would be levied on revenues from the provision of certain digital services, and only where they are provided by certain companies earning sizable revenues. In the terminology used by the proposed directive, taxable services are those consisting of the placing on a digital interface of advertising, those which allow users to find other users and to interact with them, and the transmission of data collected about users and generated from users’ activities on digital interfaces. These services are taxable where the provider’s worldwide revenues exceed €750 million and the total taxable revenues obtained within the Union exceed €50 million.
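Purely for illustration, the basic mechanics can be sketched as follows in Python, using only the two figures quoted above; the names and the example amounts are my own, and the directive’s actual rules (notably the allocation of the taxable base among member states) are far more detailed.

```python
# Illustrative sketch only of the digital services tax thresholds and rate quoted
# above; amounts are in euros. The allocation among member states is omitted.

WORLDWIDE_REVENUE_THRESHOLD = 750_000_000   # total worldwide revenues
EU_TAXABLE_REVENUE_THRESHOLD = 50_000_000   # taxable digital-service revenues in the Union
DST_RATE = 0.03                             # 3 percent on taxable revenues

def digital_services_tax(worldwide_revenue: float, eu_taxable_revenue: float) -> float:
    """Rough DST due for a company, or 0 if it falls below either threshold."""
    in_scope = (worldwide_revenue > WORLDWIDE_REVENUE_THRESHOLD
                and eu_taxable_revenue > EU_TAXABLE_REVENUE_THRESHOLD)
    return DST_RATE * eu_taxable_revenue if in_scope else 0.0

# A hypothetical platform with €900m of worldwide revenues, €60m of them taxable
# digital-service revenues obtained within the Union:
print(digital_services_tax(900_000_000, 60_000_000))  # 1800000.0, i.e. €1.8m
print(digital_services_tax(900_000_000, 40_000_000))  # 0.0, below the Union threshold
```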

The Commission acknowledges, however, that this tax is only a provisional solution, and it therefore also proposes a directive on the corporate income taxation of companies with a significant digital presence, a concept that involves an ad hoc reformulation of the old concept of permanent establishment.

This proposal for a directive starts from the premise that the application of the current corporate income tax rules to companies in the digital economy “has led to a misalignment between the place where the profits are taxed and the place where the value is created”. Consequently, it is clearly acknowledged that a reform of the principles of international taxation is necessary to adapt them to an economy in which intangible assets and the value of data are fundamental elements, without losing sight of the goal of taxing income where wealth is generated. It is admitted that the traditional rules fail to tax the income of a nonresident in the absence of a physical presence, and acknowledged that the principles governing the transfer pricing system lead to an underpricing of the functions and risks associated with the digital economy. Not even the CCCTB rules would ensure that the states where the users of this digital economy are located receive a greater share of the taxation of the income arising from the new economy.

Faced with this challenge, the proposal for a directive puts forward the idea of a “significant digital presence” as a new element broadening the concept of permanent establishment. It only captures, however, income arising not from the digital economy as a whole but from the provision of certain digital services: services delivered over the internet or over an electronic network, the nature of which renders their supply essentially automated, involving minimal human intervention, and impossible without information technology. The proposal details a number of services included in this definition, such as the supply of digitized services generally, services providing or supporting a business or personal presence on an electronic network, or those generated automatically from a computer via the internet or an electronic network in response to specific data input by the recipient, in addition to those listed in annex III, which basically relate to services delivered over the internet or to the sale of goods or other services facilitated by the use of the internet.

This significant digital presence also requires certain thresholds to be met in the member state concerned, notably a number of users greater than 100,000 or a number of business contracts for the provision of those digital services higher than 3,000.
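Again purely as an illustration of how the test reads (my own sketch, limited to the two thresholds quoted above; the proposal contains further criteria, such as a revenue threshold, that are omitted here):

```python
# Illustrative sketch only of the user/contract thresholds for a "significant
# digital presence" in a given member state; other criteria in the proposal are omitted.

USER_THRESHOLD = 100_000        # users of the digital services in the member state
CONTRACT_THRESHOLD = 3_000      # business contracts for those digital services there

def significant_digital_presence(users: int, contracts: int) -> bool:
    """Exceeding either threshold in the member state is enough."""
    return users > USER_THRESHOLD or contracts > CONTRACT_THRESHOLD

print(significant_digital_presence(users=150_000, contracts=500))  # True
print(significant_digital_presence(users=20_000, contracts=100))   # False
```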

Besides altering the concept of permanent establishment, the proposal for a directive recognizes that the problem lies also, or above all, in the profit attribution rules, and it therefore sets out its own attribution rules for this case, including among the relevant functions and risks the economically significant activities performed through a digital interface, especially in relation to data or users, which are relevant to the exploitation of the company’s intangible assets. The profit split method is the preferred method for determining the attributable profits.

In short, one of the main problems with this tax package is simply the lack of consensus or international agreement, which makes it very difficult to apply these principles and definitions in member states’ relationships with non-Union countries. The Commission Recommendation is well-intentioned in trying to carry this solution into relationships with non-Union countries through the negotiation of tax treaties which, it is admitted, would otherwise prevail and prevent a harmonized solution from being applied globally. And any attempt to achieve that solution in the OECD or any other forum will come up against the obstacles posed by the differing interests of the countries concerned, which depend on the types of companies they host.