The new disclosure obligation on tax intermediaries

After a particularly quick procedure, the European Union has finally approved the Directive requiring so-called tax intermediaries to supply specific information on cross-border transactions with tax relevance. We are talking about Council Directive (EU) 2018/822 of 25 May 2018 (Official Journal, June 5), amending Directive 2011/16/EU as regards mandatory automatic exchange of information in the field of taxation in relation to reportable cross-border arrangements.

1.- Background to the Directive

Some countries had already tried out reporting mechanisms for transactions that could involve aggressive tax planning. Examples are the tax shelter disclosure system in the U.S. and the DOTAS (disclosure of tax avoidance schemes) regime in the UK. These experiences spread to other countries and were the inspiration for BEPS Action 12 on mandatory disclosure rules. In the final report on this action, the OECD called for the use of these disclosure regimes in relation to the “promoters” of standard schemes identified through hallmarks. Disclosure would have the dual aim of providing immediate information to the authorities and deterring the offering of abusive planning schemes.

Taking up these ideas from the BEPS project, the European Commission submitted a proposal for a directive on 21 June 2017. Following the political agreement reached by ECOFIN on March 13, 2018, the final wording of Directive 2018/822 contains notable differences with respect to the initial proposal, especially the broadening of the personal scope of the reporting obligation itself. The Directive amends Directive 2011/16/EU, on automatic exchange of information between the member states in the field of taxation, which is why it is known as DAC6.

2.- Content of the Directive

The contents of the Directive are easily summarized. So-called tax intermediaries must report to their tax authorities specific information on any cross-border arrangements in which they take part, where those arrangements display any of the hallmarks listed in the Directive itself. The member states will then automatically exchange that information and thus have prompt knowledge of abusive or potentially abusive planning arrangements.

The information must relate to a cross-border arrangement (“dispositifs” and “mecanismos” in the French and Spanish versions). There is no reporting obligation for purely domestic arrangements that do not affect any other state, although a member state may unilaterally include those transactions in the scope of its mandatory reporting regime.

No definition is given of “arrangement”. It must be interpreted as meaning any dealing or transaction or set of dealings. These arrangements are mandatorily reportable where they display any of the characteristics or hallmarks set out in the new Annex IV to Directive 2011/16. The hallmarks relate to different objectives. The first two of the five hallmark categories concern the typical standard tax planning arrangements, usually involving a tax purpose combined with a fee for the promoter and a confidentiality clause. The third category is targeted at arrangements leading to a no-tax scenario by taking advantage of certain tax regimes, including the absence of any corporate income tax or a zero or “almost zero” rate. The fourth category is designed to deter arrangements that may have an impact on the automatic exchange of information between countries and the identification of beneficial ownership. And the last category is perhaps the most controversial, because it relates to transfer pricing matters. It includes arrangements linked to the transfer of hard-to-value intangibles and certain reorganizations between companies in the same group involving transfers of functions, risks or assets, if the projected annual earnings before interest and taxes (EBIT) of the transferor over the three-year period after the transfer are less than 50% of the projected annual EBIT had the transfer not been made.
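
Purely by way of illustration, the arithmetic of this last test can be reduced to a few lines of code. This is a minimal sketch under assumptions of my own (the function name and the sample figures are hypothetical; the actual application of the hallmark will depend on the facts and on how the projections are prepared):

```python
def ebit_hallmark_triggered(ebit_with_transfer, ebit_without_transfer):
    """Sketch of the EBIT test in the last hallmark category: compare the
    transferor's projected annual EBIT over the three years after the
    transfer with the projection had the transfer not been made."""
    avg_with = sum(ebit_with_transfer) / len(ebit_with_transfer)
    avg_without = sum(ebit_without_transfer) / len(ebit_without_transfer)
    return avg_with < 0.5 * avg_without

# Example: projected EBIT falls from ~100 to ~40 after the intra-group transfer
print(ebit_hallmark_triggered([40, 42, 38], [100, 98, 102]))  # True -> reportable
```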

It is the intermediaries of a member state, not the taxpayers, that in principle have the obligation to report these arrangements. The Directive works with a very broad definition of intermediary. It encompasses anyone who designs, markets, organizes, makes available for implementation or manages the implementation of a reportable cross-border arrangement. But it also covers anyone who knows, or could reasonably be expected to know, that they have undertaken to provide, directly or through others, aid, assistance or advice with respect to designing, marketing, organizing, making available for implementation or managing the implementation of a reportable cross-border arrangement. Where more than one intermediary is involved, the reporting obligation falls on all of them, unless the same information has already been filed by another of those intermediaries. The relevant taxpayer bears the reporting obligation only if there is no intermediary, because the arrangement was devised and implemented in house, or where the national rules on legal professional privilege relieve all the intermediaries of the obligation.

The reportable information appears to relate only to identifying the transactions, their characteristics and values.

3.- Conclusions

This Directive plays a crucial part in the move to review tax planning practices, but it suffers from a basic lack of definition: it mixes up information gathering with the combating and prevention of tax fraud, without clarifying the limits separating them, and it shies away from any attempt to make the system it sets out serve greater legal certainty. On the contrary, it warns that the reporting of this information does not provide any degree of advance certainty as to the validity the tax authorities will attribute to these arrangements. Its implementation in the various states may therefore be confused, and could ironically aid tax competition between them, in addition to placing obstacles to the functioning of the internal market by leaving out purely domestic arrangements.

In Spain’s case, the transposition of the Directive will without a doubt rekindle old problems that have never been resolved: how to define tax advisors and the meaning and scope of their legal professional privilege. Moreover, by somehow singling out so-called tax planning, it will affect the internal organization of the profession.

On the intellectual origin of blockchain (III). More on the recent forerunners

I ended the previous post on the subject of David Chaum and how his DigiCash did not lead to a proper break with traditional cash. The disruptive leap in this respect, even if still only in a theoretical or speculative realm, is attributable to the following two characters in this story.

The first of these characters actually worked with David Chaum at the unsuccessful DigiCash company. I am talking about a U.S. citizen with a Hungarian surname: Nick Szabo. A multi-talented man: computer science graduate (1989) from the University of Washington, cryptographer and jurist. Besides working at DigiCash, he was the designer of bit gold, a digital currency project that was a forerunner of Bitcoin and blockchain. Many have in fact said that the real person behind the pseudonym “Satoshi Nakamoto” –the bitcoin creator– is Szabo, something he has always denied. British writer and journalist Dominic Frisby said: “I’ve concluded there is only one person in the whole world that has the sheer breadth but also the specificity of knowledge and it is this chap…”. There is even a subunit of the Ether cryptocurrency (the currency running on the Ethereum platform) named after him (the szabo).

Szabo’s first great contribution on this subject was a paper entitled “Smart contracts: building blocks for digital markets”, published in 1996 in a Californian futurist and transhumanist journal called Extropy. In this visionary article, Szabo, computer engineer, cryptographer and jurist, asks how the Internet, combined with modern cryptographic protocols (asymmetric or double-key cryptography, blind signature systems such as those devised by Chaum, multiple signature systems, mixing protocols), could revolutionize traditional contract law, by enabling such a basic part of the law as the contract, the basis of the whole of our market economy, to meet the requirements of online trading. It was in this paper that the term and idea of the “smart contract” –now part of everyone’s vocabulary– was created: a software program through which obligations that are both agreed and programmed are enforced automatically, giving rise to a contract that executes itself, aided by computer technology. This is ideal particularly for a contract not just between absent parties but between strangers who have no grounds for trusting each other. This was also where we first saw the term “smart property”, used to refer to a smart contract incorporated into a physical object (a vehicle, the lock of a house), so that the physical availability of that object is also programmable according to the terms of a specific agreement.

This first paper on smart contracts was revised and extended in a 1997 publication entitled “Formalizing and securing relationships on public networks”. Here we already find an allusion to the idea of distributed trust, in other words, to how the participation of several agents in the monitoring and recording of a transaction is a guarantee of certainty and protection against fraud.

This idea was explored further and gained importance in publications such as “Secure Property Titles with Owner Authority”, a paper published in 1998 in which, faced with the problems of political uncertainty and discretion –especially in less developed countries– associated with centralized property record systems, it was proposed to keep a titles database distributed or replicated across a public network (a record system that –we are told– would be able to survive a nuclear war). This involves the creation of a kind of property club on the Internet that gets together and decides to keep track of the ownership of some kind of property. The title held by each owner is authenticated with the electronic signature of the previous owner, a process that is reproduced with each successive owner, forming a chain. And the record of the chain of titles, which shows the current owner of each item of property, is based on a consensus of the majority of the participants, given that it is unlikely that they will all come to an agreement to commit fraud. As we shall see, here lies the core of the ownership recording system for the bitcoin.
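
For readers who like to see the mechanics, here is a minimal sketch of such a chain of titles. It is only an illustration under assumptions of my own: a real system would use public-key signatures, whereas here an HMAC computed with each owner's secret key merely stands in for a digital signature.

```python
import hashlib
import hmac

def sign(owner_secret: bytes, message: bytes) -> str:
    # Stand-in for a real digital signature (illustration only)
    return hmac.new(owner_secret, message, hashlib.sha256).hexdigest()

def transfer(chain: list, asset: str, new_owner: str, prev_owner_secret: bytes) -> None:
    prev_hash = chain[-1]["hash"] if chain else ""
    record = f"{prev_hash}|{asset}|to:{new_owner}"
    chain.append({
        "record": record,
        "signature": sign(prev_owner_secret, record.encode()),  # previous owner authorizes
        "hash": hashlib.sha256(record.encode()).hexdigest(),    # links the next transfer
    })

title_chain: list = []
transfer(title_chain, "plot-42", "alice", b"registry-secret")  # initial registration
transfer(title_chain, "plot-42", "bob", b"alice-secret")       # alice signs over to bob
print(title_chain[-1]["record"])
```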

Another important paper exploring these ideas is “Advances in Distributed Security”, published in 2003, where Szabo proposes leaving behind the unattainable idea of absolute certainty and settling for systems with a high probability of certainty, such as that provided by cryptography. In this context, he proposes processes such as distributed time-stamping, the use of hashes as a means of identifying time-stamped messages or files, the creation of “Byzantine-resilient” replication systems, etc.
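
As a small illustration of what linked time-stamping means (again a sketch of my own, not Szabo's code), each entry can commit to the document's hash, the time and the hash of the previous entry, so that back-dating any record would break every later link:

```python
import hashlib
import time

def timestamp(log: list, document: bytes) -> dict:
    prev = log[-1]["entry_hash"] if log else ""
    doc_hash = hashlib.sha256(document).hexdigest()
    payload = f"{prev}|{doc_hash}|{time.time()}"
    entry = {"doc_hash": doc_hash,
             "entry_hash": hashlib.sha256(payload.encode()).hexdigest()}
    log.append(entry)
    return entry

log: list = []
timestamp(log, b"contract v1")
timestamp(log, b"contract v2")  # the second entry is chained to the first
```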

Alongside his concern over alternative systems to ensure compliance with contracts and the chain of ownership using the Internet, software programming and cryptography, Szabo also turned his attention to the specific subject of money, going much further than the ideas explored by David Chaum. What concerned Chaum, as we have seen, was privacy: how acting as intermediaries in our electronic payments gives financial institutions knowledge of essential information about our lives. Szabo confronted another issue as well: placing the value of the money we use at the discretion of political authorities; the problem of discretionary inflation, in other words. This is where the impact of his 1998 proposal for bit gold lay, which appeared at the same time as another very similar idea: b-money, belonging to Wei Dai.

Wei Dai is a cryptographer and a fellow computer science graduate from the University of Washington. In 1998 he published a very short paper entitled “B-money: an anonymous, distributed electronic cash system” on the Cypherpunks mailing list, which was later quoted as a reference work in the whitepaper by Satoshi Nakamoto (no work by Szabo was ever quoted as such). Like any good cryptoanarchist, Dai was driven basically by the desire for the opacity of cash transactions, and the terminology was perhaps a little too eloquent: “b-money”. An interesting fact is that the smallest unit of the Ether cryptocurrency is called the “wei”, named after that forerunner.

The idea put forward in these proposals (which tie in with the most radical visions of the cryptoanarchism of Tim May, whom Dai explicitly quotes at the beginning of his paper) is not to represent existing legal tender in a new electronic format so as to achieve the anonymity of electronic payments, but rather to replace money originating from the government with a new type of money created by the users themselves, assisted by the web and cryptography. This intention –with, as we can see, much more radical political significance, because it questions one of the key attributes of state sovereignty, the issuing of money– poses a problem that goes beyond the simple accounting control of the circulation of money (that is, avoiding the double availability of a digital asset): how to control the creation of this money so as to avoid discretion, ensure its scarcity and make it somehow a reflection of economic activity or value.

Wei Dai proposed a type of regular online auction among the system participants to determine the amount placed into circulation in new digital coins.

Szabo’s approach was different. He had for some time been mulling over the idea of how to make a simple bit string (a given number of zeros and ones) into something of value in itself. He was looking for a digital object that could work like gold. The instrument he devised for this –an application of the hashcash algorithm created by Adam Back to prevent email spam, mentioned also by Nakamoto- was a computational proof-of-work, a solution that could be given an economic meaning similar to gold, through the effort and use of resources required for its extraction; the use of computation cycles, in this case. This electronic money devised by Szabo is therefore managed through a program on the web which puts a given mathematical challenge or problem to the system participants. This mathematical problem or puzzle is related to the cryptographic function known as hashing, and may only be solved using “computational brute force”, in other words, by trial and error using different figures until a string is found that fits. When this result is obtained, in the form of a given bit string, it becomes the system’s first unit of currency. The program rewards the first participant to find that string by giving them the unit of currency, which can then be used by this participant to make payments to other users, and so the unit of currency and its fractions begin circulating. This first bit string, obtained by solving the problem, is the starting point for the next challenge, which the program then poses. This is how new currency units are added to the system regularly and in a programmed way.
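
The mechanics can be illustrated in a few lines. This is a minimal hashcash-style sketch of my own (the difficulty and the encoding are arbitrary choices): find a nonce such that the hash of the challenge plus the nonce starts with a given number of zeros; the winning hash then becomes the next round's challenge, as in bit gold.

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int = 4) -> tuple:
    target = "0" * difficulty
    for nonce in count():  # computational brute force: try nonce after nonce
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, proof = solve("genesis-challenge")
print(nonce, proof)  # the proof string seeds the next challenge
```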

This proposal was perhaps a little primitive, owing its existence to a metal-based and therefore materialistic idea of money as a thing that must be given an intrinsic value, rather than simply as a symbol of value. It was arguably misguided too, because the value we give to gold does not arise only from its scarcity and the difficulty of obtaining it, but also from its intrinsic properties as a substance, which can never be said of a sequence of zeros and ones, no matter how difficult they are to obtain.

Neither Szabo’s bit gold nor Dai’s b-money would ever be put into practice, but they are the most direct forerunners of the bitcoin.

On the intellectual origin of blockchain (II). Recent forerunners

In the first post in this series on the intellectual origin of blockchain technology, I talked about two figures I consider to be early forerunners: Alan Turing and John von Neumann. In this second installment, I will look at two more recent figures: Tim May and David Chaum (and in a third post, I will touch on Nick Szabo and Wei Dai).

After the gloomy years of World War II and the Cold War, we now turn to the 1980s and 1990s, with their very different political, socio-economic and technological context. After the fall of the Berlin Wall and the break-up of the Eastern Bloc, the buzzwords became the “end of history” (in the sense that Marxist utopias had been left behind) and globalization. In the technology field, we were seeing increasingly powerful microprocessors, the internet and mobile phones.

As readers may remember, when the internet first came on the scene, many heralded that it would bring about massive cultural, social and political changes. The lofty ideals of the Age of Enlightenment seemed closer than ever before: thanks to this new technology, a world compartmentalized into aggressive nation-states and under the thumb of large multinationals could give way to a true universal human community, united through communication, the free and direct flow of information and easy access to knowledge. Of course, this cyberspace-based community would have its economic facet as well: universal trade, which would spread prosperity throughout the planet.

However, while the internet and mobile communications have indeed drastically reshaped our social customs, it didn’t take long to see that new technologies were not engendering the political changes some people hoped for. Nation-states are not a thing of the past; rather, they have found digital technologies useful as weapons for controlling their citizens, citing dangers from radicalism and globalized terrorism, which, in turn, have also co-opted technology for their own purposes, the polar opposite of the enlightened ideals the internet was supposed to bring with it. On the economic side, we have seen that globalization does not in fact mean global prosperity. Different multinationals have come and gone but, far from the expected distribution of wealth, economic power is even more tightly concentrated in the hands of a few. Today, a handful of companies born from new technologies, namely Apple, Microsoft, Google, Facebook and Amazon, have attained unprecedented popularity, wielding more power to control and influence users than ever seen before.

This situation, which first started to take shape in the 1990s, has pushed the somewhat visionary and utopian mindset of the internet’s early days into a type of resistance movement characterized by activism and an anti-establishment and anti-system ideology. The difference is that, this time, the insurgents are striking from within, using digital tools for their own purposes. This includes groups such as hackers (who, like all pirates and rebels, are rather romanticized), cyberpunks and the topic of this article, the cypherpunks and cryptoanarchists leading an ideological movement to put cryptography and information encryption techniques in the hands of individuals and to thwart national security agencies’ attempts to monopolize the use of this technology.


The cryptography I am referring to in this post is modern cryptography, heavily based on mathematical theory, computer science and the application of electronics to computing, i.e., something that only began to be developed in the 1950s. Before then, encryption techniques were much more rudimentary: from the simplest codes, like the substitution cipher Julius Caesar used for military messages, to the electro-mechanical encryption of telegraph and radio messages by the German military during World War II, using a series of Enigma machines that created a polyalphabetic substitution through variable-position rotors. By placing the rotors of the deciphering machine in the same positions as in the ciphering machine, the recipient could decode messages. This was a symmetric-key, mechanics-based encryption system, where letter substitutions changed every so often, but the problem was that the system only worked if the sender had previously given the recipient the key (i.e., the specific placement of the rotors) to be used at a certain time. Yet this information could be intercepted by the enemy and potentially used to decode messages if it also had the same model of Enigma machine. In contrast, mathematical cryptography is based on arithmetic operations or mathematical calculations applied to digitalized messages, that is, messages that have been previously converted into numbers. Thanks to computers – machines that operate at electronic speed, close to the speed of light – practical use can be made of encryption and decryption techniques that require making highly complex numeric calculations very quickly (generally involving very large prime numbers, the product of which is difficult to factorize).
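
To make the contrast concrete, here is a toy version of the symmetric substitution cipher attributed to Caesar (a sketch for illustration only): every letter is shifted by a fixed key, and sender and receiver must share that key in advance, the same weakness, writ small, as the Enigma rotor settings.

```python
def caesar(text: str, shift: int) -> str:
    # Shift each letter by `shift` positions, wrapping around the alphabet
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

secret = caesar("attack at dawn", 3)   # -> "dwwdfn dw gdzq"
print(caesar(secret, -3))              # decrypt with the shared key
```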


Initially, governments, in particular the United States through its National Security Agency (NSA), attempted to keep the knowledge and use of this technology – so critical during wartime – to themselves, standing in the way of commercial use and use by the general public. However, in the mid-1970s, following a bitter battle with the NSA, IBM registered its Data Encryption Standard (DES) algorithm with the National Bureau of Standards. This algorithm was later made available to financial sector companies, which needed it to develop their automatic teller machine networks. Knowledge and public release of this algorithm sparked a growing interest in modern mathematical cryptography beyond the closed field of national security. In that same decade, the public also learned about revolutionary asymmetric (or public-key) cryptography: the Diffie-Hellman algorithm, followed by the RSA (Rivest, Shamir and Adleman) algorithm, first developed in 1977. Asymmetric cryptography resolved the formidable vulnerability issue I mentioned above, namely that, in traditional symmetric cryptography systems, private keys must be shared between sender and receiver. This development was absolutely vital for enabling secure communications between parties that do not know one another, which in turn is essential for carrying out financial transactions via the internet.
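
The trick that makes this possible can be shown with a toy Diffie-Hellman exchange (a sketch with deliberately tiny numbers; real deployments use primes hundreds of digits long). Each party combines its own private value with the other's public value, and both arrive at the same shared secret without ever transmitting it:

```python
p, g = 23, 5            # public modulus and generator (toy sizes)
a, b = 6, 15            # Alice's and Bob's private values

A = pow(g, a, p)        # Alice publishes g^a mod p
B = pow(g, b, p)        # Bob publishes g^b mod p

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob
print(shared_alice)     # both hold the same secret: 2
```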


By the 1990s, the crypto-libertarian movement was beginning to take real shape thanks to Timothy C. May, better known as Tim May. In the early digital days of 1992, this California hippie-techie, a well-respected electronic engineer and scientist at Intel, wrote The Crypto Anarchist Manifesto, a short text heralding that computer technology was “on the verge of providing the ability for individuals and groups to communicate and interact with each other in a totally anonymous manner. Two persons may exchange messages, conduct business, and negotiate electronic contracts without ever knowing the True Name, or legal identity, of the other. Interactions over networks will be untraceable, via extensive re-routing of encrypted packets and tamper-proof boxes which implement cryptographic protocols with nearly perfect assurance against any tampering. Reputations will be of central importance, far more important in dealings than even the credit ratings of today. These developments will alter completely the nature of government regulation, the ability to tax and control economic interactions, the ability to keep information secret, and will even alter the nature of trust and reputation (…) The methods are based upon public-key encryption, zero-knowledge interactive proof systems, and various software protocols for interaction, authentication, and verification. The focus has until now been on academic conferences in Europe and the U.S., conferences monitored closely by the National Security Agency. But only recently have computer networks and personal computers attained sufficient speed to make the ideas practically realizable…”


In the ensuing years, Tim May was the force behind, and the main contributor to, the cryptography and cryptoanarchy internet forum known as the Cypherpunks electronic mailing list and its document The Cyphernomicon. He is one of the leading intellectual references among the designers of the first cryptocurrencies, such as Nick Szabo and Wei Dai.

The next figure we look at, also from California, had a less political, and more technical and business, focus: David Lee Chaum, a brilliant mathematician and computer scientist who from the 1980s on created novel cryptographic protocols applicable in online commerce, payments and even voting. He is credited with the cryptographic protocol known as the blind signature, which is the digital equivalent of the physical act of a voter enclosing a completed anonymous ballot in a special envelope that has the voter’s credentials pre-printed on the outside.

Chaum’s motivation for creating this type of protocol was his concern for the privacy of financial transactions, which was being eroded as electronic payment means became more widely used. One of the essential properties of traditional cash bills and coins is their anonymous nature, as money simply held by the bearer. In contrast, all the new ways of sending money represented in bank accounts (bank transfers, credit cards, web-based payment gateways, etc.) allowed for tracking of who paid what amount to whom, and why. This meant that an increasing amount of data revealing a lot about a consumer’s preferences and spending habits and, in short, the type of person they are, was being recorded and stored in digital databases completely beyond the consumer’s control.

In this blog post, I will not go into a detailed technical explanation of the blind signature protocol invented by Chaum, which he presented at a 1982 conference under the title “Blind signatures for untraceable payments.” In a nutshell, however, it involves a person or authority verifying, with their electronic signature, a specific digital item generated by another person. The authority knows the identity of the sender and what type of item is involved, but it does not know the content of the signed item. This technology has many uses. It is crucial when a third-party verifier is needed, such as an election authority certifying that the person casting an online vote is allowed to vote and only submits one ballot, but where, to safeguard the secrecy of ballots, this authority must not know the content of the validated vote. In the case of electronic money, the third-party verifier is the bank that holds the account of the person wishing to generate a digital monetary item, or token, because it is the bank that must verify that the sender has sufficient funds and then debit the corresponding amount from the account to avoid double-spending. The blind signature protocol is very useful in this case. Once the bank has verified that the sender (the payer) is indeed authorized to make the payment, it can sign the digital token without knowing the specific serial number individually identifying that token, while still monitoring the transaction. Accordingly, when the recipient (the payee) presents a digital token with a specific serial number to that same bank to exchange it for cash, the bank does not know which of its clients sent the token and, consequently, who made the payment in question.


How can a bank sign a digital monetary token without knowing its individual serial number? This is where Chaum’s mathematical ingenuity came into play: the payer (i.e., the bank client wishing to make a payment with electronic cash) generates a random serial number x and, prior to sending it to the bank with its payment order, disguises it by multiplying it by a factor known only to the payer. Put differently, the payer gives the bank the serial number encrypted with a commutative blinding function C(x), which can be reversed by the inverse operation C’, in this case dividing by the same factor. After verifying the payer’s standing to make the payment, the bank electronically signs the blinded serial number with its private key S’ and returns the result, S’(C(x)), to the payer. The payer then undoes the multiplication through which the serial number was disguised, computing C’(S’(C(x))), and obtains S’(x), that is, the original serial number now electronically signed by the bank. The payment can now be made without the bank knowing who generated that specific note. This is clearly analogous to issuing bank notes against cash deposits, but in a digital environment.
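
For the curious, here is a toy RSA blind signature in the spirit of Chaum's construction, with deliberately tiny numbers (a sketch of my own, not production code). One refinement over the prose above: with RSA, the blinding factor r is raised to the bank's public exponent before multiplying, so that it survives the signing step and can be divided out afterwards.

```python
n, e = 3233, 17          # bank's public RSA key (n = 61 * 53, toy size)
d = 2753                 # bank's private signing exponent (the S' above)

x = 1234                 # the note's random serial number (payer's secret)
r = 7                    # blinding factor known only to the payer

blinded = (x * pow(r, e, n)) % n                    # C(x): all the bank sees
signed_blinded = pow(blinded, d, n)                 # S'(C(x)): bank signs blindly
signature = (signed_blinded * pow(r, -1, n)) % n    # C'(...): unblind -> S'(x)

assert pow(signature, e, n) == x                    # bank's signature verifies on x
print(signature)
```

(The modular inverse pow(r, -1, n) requires Python 3.8 or later.)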


In 1989, David Chaum attempted to put this idea into practice by founding the Amsterdam-based electronic money company DigiCash, but the business did not flourish. In fact, only two banks supported DigiCash systems, the Missouri-based Mark Twain Bank and Deutsche Bank. The business only had 5,000 clients and total payment volume never passed $250,000. Chaum later explained that it was hard to get enough merchants to accept the payment method so that enough consumers would use it, or vice versa. Although David Chaum became a hero for cryptoanarchists, the problem was that the average consumer was not that concerned about the privacy of their transactions. In the end, DigiCash filed for bankruptcy in 1998 and sold off all its assets.

What I find most interesting about this attempt is that the electronic cash David Chaum invented still depended on ordinary legal tender (because the process always started and ended at a bank account in dollars, euros or another national currency), and the accounting control that avoided double-spending was still in the hands of the traditional banking system, given that it was a bank that verified the sender’s ability to pay and then debited the amount of electronic money sent from the traditional account.

Bitcoin, which appeared later, was a radical departure from this model.

(The pioneering works on mathematical cryptography are Claude Shannon’s article “Communication Theory of Secrecy Systems”, published in the Bell System Technical Journal in 1949, and the book by Shannon and Warren Weaver titled “The Mathematical Theory of Communication.”

An explanation of how the Enigma machine worked and stories of the adventures and exploits on the cryptographic front of WWII can be found in a book cited in a previous post: “Alan Turing. Pioneer of the Information Age,” by B. Jack Copeland, published in Spain by Turner Noema (Madrid), 2012, pages 51 and thereafter.

For more on the battles between the first cryptoanarchists and the NSA, see “Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age,” by Steven Levy, published in Spain by Alianza Editorial, 2001.)

On the intellectual origin of blockchain technology (I). Early forerunners

In my previous contribution to this blog I talked about certain intellectual obstacles that can trip up jurists when dealing with the definitions of smart contracts and blockchain technology. The first of these is a deficit in technology training. One of the particular features of this technology, now a worldwide talking point due to its multiple applications and disruptive potential, is precisely that a high intellectual threshold is required to gain access to it, because it is hard to explain and understand.

In this and the following posts I am going to try to bring a little perspective to the subject, which may help make out the signal –the significant points– among all the media noise that is currently causing so much interference.

Specifically, in the two posts I have planned, I am going to mention a few figures who have made important intellectual contributions on the path that has led to both cryptocoins and blockchain technology. On this subject, I will draw your attention to the early forerunners and more recent forerunners.

As an early forerunner I could mention the German philosopher Leibniz, who at the end of the seventeenth century, besides making a mechanical universal calculator, conceived the idea of a machine that stored and handled information codified in binary digital code. But I am going to focus on two figures closer in time, who are regarded as the founding fathers of computing: the Briton Alan Turing and John von Neumann, a Hungarian who later became a US citizen. And why these two? Not just because in the nineteen thirties and forties they laid the intellectual foundations, in math and logic, that gave rise to the development of computing and with it the digital universe we now inhabit, but also because their ideas and visions foresaw a large part of the transformation we are experiencing right now.

Alan Turing came into the public eye in a recent film (The Imitation Game, released in 2014) about his work for British intelligence during the second world war, where he contributed to deciphering the codes of the famous encrypting machine Enigma used by the German navy and army in their communications. Our interest in him does not arise from his connection with cryptography, however. I especially want to talk about the work that made him famous, published in 1936 in the prestigious Proceedings of the London Mathematical Society: “On Computable Numbers, with an Application to the Entscheidungsproblem”.


The Entscheidungsproblem or decidability problem is an arduous logical and mathematical question that kept a number of logicians and philosophers occupied from the moment German mathematician David Hilbert posed it in his writings at the beginning of the 1900s as one of the remaining challenges for the century that was then beginning: can mathematics provide an answer of a demonstrative type to every problem it poses? Or, in other words, is the full axiomatization of mathematics possible, so as to reconstruct it as a complete and self-sufficient system? Hilbert, the leading figure in what is known as formalism, considered it was, and Russell and Whitehead believed they had achieved it with their work Principia Mathematica. However, an introverted logic professor called Kurt Gödel proved it was not possible, in a difficult and revolutionary article published in 1931 in which he formulated what is known as Gödel’s incompleteness theorem.

Following Gödel, in the work mentioned above, Turing (as a simple discursive device on the limits of computability) conceived the first stored-program machine, later known as the universal Turing machine and at that time only a theoretical construction: a machine whose memory stores not only data but also the program for handling or computing those data, a machine that can be reprogrammed and can compute anything computable; in other words, what we now consider a computer. I must also mention his early interest in artificial intelligence, to the point where the so-called “Turing test” is still used to assess the greater or lesser intelligence of a device.
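
The stored-program idea can be made concrete in a few lines. Here is a minimal sketch of my own: the “machine” is just a transition table held in memory and interpreted by a generic loop, the same data/program duality Turing described. This toy program flips the bits of its input tape.

```python
def run(program: dict, tape: list, state: str = "start") -> list:
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = program[(state, symbol)]
        if head == len(tape):
            tape.append("_")      # extend the tape on demand
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# The stored program: a table (state, symbol) -> (write, move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip_bits, list("10110")))  # ['0', '1', '0', '0', '1', '_']
```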

And so, when in 2014 the Russian-Canadian child prodigy Vitalik Buterin, at the tender age of 19, brought into operation that second-generation blockchain called Ethereum, he would tell us that it was a blockchain using a Turing-complete programming language, aspiring to become the universal programming machine, the World Computer. This brought Turing’s original idea into a new dimension. It is no longer a question of creating an individual reprogrammable machine for universal computation, but of a network of computers which, besides simultaneously recording those simple messages that are bitcoin transactions, also allows any programmable transaction within its capability to be carried out, with every step in the process and its result stored on a distributed, transparent and incorruptible record with universal access. Put another way, its universal nature does not relate only to the programmable object –like Turing’s virtual machine and our current computers– but also to the agents or devices that operate it, in that the program is executed and the result recorded simultaneously by a multitude of computers throughout the world.

And then there is John von Neumann, one of the great scientific geniuses of the twentieth century, on a par with Einstein. In relation to our subject, von Neumann created the logical structure of the first high-speed electronic stored-program digital computers, a structure he set out in his famous 1945 report on the EDVAC. The machine built under his direction at the end of the nineteen forties at the Princeton Institute for Advanced Study (USA) was among the first physical incarnations of the imagined universal Turing machine, made as an instrument to perform the complex and extremely laborious mathematical calculations required for the design and control of the first atomic bombs. In fact, even today the structure of every computer we use is based on what is known as the “von Neumann architecture”, which consists of a memory, an arithmetic processing unit, a central control unit and elements for communicating with the exterior for entering and receiving data.

Besides the fact that every computer application or development owes its existence (despite his premature death at 53) to the visions and ideas of von Neumann (including artificial intelligence, the subject of his final ideas in works such as the “Theory of Self-Reproducing Automata”), I would like to take a look here at two ideas of this great pioneer.


Firstly, in the years following the end of the second world war, almost everything was in short supply, and to build that computer they had to use leftover equipment and materials from the weapons manufacturing industry. Making a machine put together from these components work properly was a veritable challenge, which von Neumann confronted with the idea that a reliable machine had to be built from thousands of unreliable parts. This idea, which he developed theoretically in two articles in 1951 and 1952 (“Reliable Organisms from Unreliable Components” and “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components”), links up with the later formulation, in the eighties and in relation to the reliability of computer networks created for defense uses, of what is known as the “Byzantine Generals’ Problem”, which is usually mentioned in explanations of blockchain. It is also related to “resilience”, one of today’s buzzwords, and is at the very core of blockchain design: how to create the most reliable and transparent recording system that has ever existed based only on particular individual agents, any of which could be false.
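
Von Neumann's intuition can be illustrated with a simple calculation (a sketch of my own, using majority voting as a stand-in for his multiplexing schemes): if each replica of a component is correct with probability p, the probability that a majority of n replicas is correct follows the binomial distribution, and it climbs rapidly toward certainty as n grows.

```python
from math import comb

def majority_reliability(p: float, n: int) -> float:
    # Probability that a strict majority of n independent components,
    # each correct with probability p, gives the right answer
    k = n // 2 + 1
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for n in (1, 9, 99):
    print(n, round(majority_reliability(0.9, n), 6))
# 1 -> 0.9, 9 -> ~0.9991, 99 -> ~1.0: a reliable whole from unreliable parts
```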


In relation to blockchain design we can also trace the footprint of another great intellectual contribution from von Neumann. Gifted with an exceptionally broad intelligence, besides taking an interest in and revolutionizing set theory, quantum physics and computer science, he also forayed into economic science, where he was no less revolutionary. He pioneered the theory of games, co-authoring with Oskar Morgenstern in 1944 a work entitled “Theory of Games and Economic Behavior”. Much of the theory of games, which analyzes the rationality of the strategic decisions of individual agents operating in an economy based on the likely behaviors of other agents, is also present in the ingenious design explained by the enigmatic Satoshi Nakamoto in his 2008 paper. After all, the design of a public blockchain, such as that on which Bitcoin is based, rests on the idea that the pursuit of individual gain by a few agents –the “miners”– results in the general reliability of the system, and on the idea that it makes little sense to defraud a system to obtain an asset whose economic value depends directly on the general belief in the reliability of that system.

(I would recommend the following books to anyone who has a taste for more on these subjects: “Turing’s Cathedral: The Origins of the Digital Universe”, by George Dyson, Pantheon, ISBN-13: 978-037542275; “Turing. Pioneer of the Information Age”, by B. Jack Copeland, Oxford University Press, ISBN-13: 978-0198719182; and “Incompleteness: The Proof and Paradox of Kurt Gödel” (Great Discoveries), by Rebecca Goldstein, W.W. Norton & Company, ISBN: 978-0393327601.)

Taxation of the digital economy: the European package

On March 21, 2018, the European Commission published a set of proposed new rules and measures on taxation of the digital economy, in an attempt to set a starting point for the expected international negotiations on this matter, at a time when the OECD has preferred to acknowledge the absence of sufficient consensus.

The key elements of this package are two proposals for a directive. One contains rules on the taxation of businesses with a significant digital presence. The other is a proposal for a directive on the common system of a tax on digital services, charged on revenues from the provision of certain digital services. In other words, it proposes the creation of a new tax in every member state on the revenues from those services, as a transitional solution until the directive on significant digital presence can be approved, which is not currently possible due to the failure to reach a consensus within Europe and internationally. The third element completing the package is a Commission Recommendation approved on the same date, March 21, suggesting that member states include in their tax treaties with non-Union states the principles guiding the European Union’s idea of significant digital presence. All of this is rounded off with a Communication from the Commission to the European Parliament and the Council on the background and reasons for the proposed reform.

Through this package the European Commission has taken a step further towards its goal of harmonizing corporate income tax, and has moved the global debate on by proposing a two-stage plan. Faced with the absence of a consensus over the taxation of this income, the Commission has proposed first implementing a tax that in actual fact taxes revenues at 3 percent. This new digital services tax would be charged on revenues from the provision of certain digital services, and only where they are provided by certain companies earning sizable revenues. In the terminology used by the proposal, taxable services are those consisting in the placing of advertising on a digital interface, those which allow users to find other users and to interact with them, and the transmission of data collected about users which has been generated from such users’ activities on digital interfaces. These services are taxable where the provider’s worldwide revenues exceed €750 million and its taxable revenues obtained within the Union exceed €50 million.
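
Reduced to arithmetic, the scope test and the charge work as follows (an illustrative sketch only; the function name is mine and the figures, in millions of euros, simply restate the thresholds described above):

```python
DST_RATE = 0.03  # proposed rate: 3% on taxable revenues

def dst_due(worldwide_revenues: float, eu_taxable_revenues: float) -> float:
    # Both thresholds must be met for the company to fall within scope
    in_scope = worldwide_revenues > 750 and eu_taxable_revenues > 50
    return DST_RATE * eu_taxable_revenues if in_scope else 0.0

print(dst_due(900, 120))   # 3.6 -> in scope, tax due (EUR million)
print(dst_due(600, 120))   # 0.0 -> below the worldwide revenue threshold
```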

The Commission acknowledges however that this tax is only a provisional solution and proposes a directive on corporate income tax for companies with a significant digital presence, a concept that involves an ad hoc reformulation of the old concept of permanent establishment.

This proposal for a directive assumes that the application of current corporate income tax rules to companies in the digital economy “has led to a misalignment between the place where the profits are taxed and the place where the value is created”. Consequently, it is clearly acknowledged that a reform of the principles of international taxation is necessary to adapt them to an economy in which intangible assets and the value of data are fundamental elements, without losing sight of the goal of taxing income where wealth is generated. It is admitted that the traditional rules fail to tax the income of a nonresident in the absence of a physical presence, and acknowledged that the principles governing the transfer pricing system lead to an underpricing of the functions and risks associated with the digital economy. Not even the CCCTB rules could ensure that the states where the users of this digital economy are located receive recognition of a greater share in the taxation of the income arising from the new economy.

Faced with this challenge, the proposal for a directive puts forward the idea of a “significant digital presence” as a new element broadening the concept of permanent establishment. It only captures, however, income arising not from the digital economy at large but from the provision of certain digital services: services delivered over the internet or over an electronic network, the nature of which renders their supply essentially automated, involving minimal human intervention, and impossible without information technology. The proposal details a number of services included in this definition, such as the supply of digitized services generally, services providing or supporting a business or personal presence on an electronic network, or services generated automatically from a computer via the internet or an electronic network in response to specific data input by the recipient, in addition to those listed in Annex III, which basically relate to services delivered over the internet or the sale of goods or other services facilitated by the use of the internet.

This digital presence also requires certain thresholds to be met in the member state concerned, notably more than 100,000 users or more than 3,000 business contracts for the provision of those digital services.

Besides altering the concept of permanent establishment, the proposal for a directive recognizes that the problem also, or especially, lies in the profit attribution rules, and therefore sets out its own profit attribution rules for this case, including among the relevant risks and functions any economically significant activities performed through a digital interface, especially in relation to data or users, which are relevant to the exploitation of the company’s intangible assets. The profit split method is the preferred method for determining attributable profits.

In short, one of the main problems associated with this tax package is simply the lack of consensus or international agreement, which makes it very difficult to apply these principles and definitions in member states’ relationships with non-Union countries. The Commission Recommendation is well-intentioned in trying to carry this solution into relationships with non-Union countries through the negotiation of tax treaties which, it is admitted, would otherwise prevail and prevent a harmonized global solution from being applied. And any attempt to achieve that solution in the OECD or any other context will come up against the obstacles posed by the differing interests of the countries concerned, according to the types of companies they have.

Digital Economy and Taxation

There has recently been a resurgence of the debate on the consequences of the digital economy for taxation or, perhaps better expressed, on how to adapt international taxation to the digitalization of the economy. Commissioner Moscovici has said that the European Commission will present its new proposals in March, in line with similar declarations from the French government.

This concern over the consequences of digitalization for taxation is nothing new. Leaving aside earlier precedents, such as the OECD’s work on the digital permanent establishment and the French Collin & Colin report of 2013, the key milestones in this debate are:

  1. Firstly, the OECD’s BEPS project devoted Action 1 to the challenges of the digital economy. The final report, in October 2015, was not particularly precise when it came to offering solutions, but it framed the debate by determining the need to respond to the consequences of digitalization either in connection with the concept of permanent establishment, by accepting the idea of “significant economic presence”, or by creating new tax concepts, in the form of new “withholding tax” scenarios or of taxes on certain types of transactions, to fill the gap left by the lower taxation of companies focused particularly on the digital economy.

In 2017, the OECD launched a new phase in this analysis with the document published on September 22 which should give rise to a specific report appearing soon.

  2. Later, on September 21, 2017, the European Commission published its Communication on a fair and efficient tax system in the European Union for the Digital Single Market.


This Communication is connected with the VAT package launched in 2017 which included the announcement in December 2017 of a new VAT system for online cross-border sales.


  3. Lastly, various EU countries have approved measures or published important documents. The reform of Italian legislation, which took effect on January 1, 2018, broadened the definition of permanent establishment, to take in economic presences without the backing of a physical presence, and taxed certain digitally supplied services. And in November 2017, the Treasury of the United Kingdom published a detailed “position paper” that is of particular interest.

But what does this movement towards adapting taxation to the digital economy broadly mean? According to the reasons given by the European Commission, the current rules no longer fit the modern context and are resulting in a number of technology giants not paying the taxes they should be paying. As a result, international taxation should allow income to be taxed where it is generated. This standpoint has focused the debate on direct taxation, although it has always been thought that the treatment of the digital economy requires a global view that considers both direct taxes on the income of companies and indirect taxes, in particular VAT on the supply of digital goods or services, or digitally supplied goods or services. From another angle, that approach has concealed the greater complexity of the underlying problem.

When the European Commission accepts in its Communication that digitalization affects all companies, but in varying degrees, it is admitting that in actual fact its proposals will be directed at particular business models, which are listed in the Communication itself: basically, online retail platforms, social media models, audiovisual digital services and the so-called collaborative economy. In relation to these sectors, the aim is to alter the rules determining the taxing powers of each state, by introducing the concept of a significant commercial presence even where there is no physical presence, and especially to alter the rules on calculating the tax relating to each state. Beyond taking the opportunity to offer the Common Consolidated Corporate Tax Base as a global solution, the Commission accepts the difficulties associated with the task, the setbacks associated with unilateral solutions, and the need to offer alternatives to the broadening of the definition of permanent establishment, solutions such as a tax on insufficiently taxed income, a withholding tax on income from certain types of transactions, or a specific tax.

Indeed, as the UK government’s position paper recognizes, the current debate questions the validity of the rules developed a century ago, but only to a certain extent, or to the extent that those rules are now having effects that certain countries reject as unfair. As opposed to what we sometimes hear, it is not a question of companies paying taxes where they sell, because now, as then, most states argue that a company must pay taxes where it designs, produces and sells its products, regardless of where its customers or consumers are located. What happens is that this principle, which remains at the core of the consensus, is seen as unfair or inadequate in relation to the taxation of certain activities based (and here countries’ views vary) on the use of their users’ data, on digital advertising services targeted at those users from another country, or on certain digital intermediary platforms. And besides, these issues must be seen in light of other problems arising from the age-old transfer pricing principles when trying to determine what portion of income relates to certain intangibles, above all when those intangibles are in low-tax jurisdictions.

All in all, the difficulty now lies in how to justify this differentiation in the taxation of certain activities, and how to do so without treading on the toes of any of the states with differing positions.