Speech by Félix Plaza, Director of Centro de Estudios Garrigues, at the Inauguration of the 2018/2019 Academic Year

Rector of the Antonio de Nebrija University, Juan Cayón,

Director-General of the State Tax Agency, Jesús Gascón Catalán,

Senior Partner of Garrigues, Ricardo Gómez-Barreda,

Dear teaching staff of Centro de Estudios Garrigues,

Dear students of the 2018/2019 academic year,

Dear friends:

Today we officially inaugurate the 2018/2019 academic year.

During the first few days of class I had the opportunity to speak to most of you, with a view to welcoming you to Centro de Estudios Garrigues and introducing you to a number of essential principles and values on which our institution is based, such as teamwork, solidarity, effort …, but above all our three “E’s”:

  • Ética (we strive to be ethical)
  • Excelencia (we seek excellence)
  • Exigencia (we are exacting)

Being exacting is probably no more than a manifestation of excellence, since it is impossible to attain excellence without demanding the most from oneself, but excellence is also supported by other pillars, such as rigor (understood as appropriateness and precision) and knowledge.

And it is here, in knowledge, that the thoughts I wish to share with you now, at the commencement of a new academic year, truly begin.

Back when I was at university, there was no Internet. Researching any subject took time: time to go to the library, time to locate the books or papers that might have a bearing on the subject, time to analyze or review databases and indices before requesting what one wished to consult. Today, what used to take hours takes only minutes (little more than the click of a mouse and a few minutes of processing the information to be analyzed). But once all that is done, the only way to continue is, and always has been, careful study.

I have heard it said that Internet years are like dog years, because one year on the Internet is like seven years in the “real world”.

I have always thought that this technical evolution is good.  I still think it is.  But, as with everything, it can have side effects …

The other day, via WhatsApp, I was sent an article by Javier Paredes, Professor of Contemporary History at the University of Alcalá, published in the digital newspaper Hispanidad, in which, while analyzing another matter, he wrote the following:

“When the information society supplants the knowledge-based society, ignorance sprouts even in the keenest of minds. The problem is that some think that the two societies are one and the same. No, one has nothing to do with the other. The information society merely watches television and, at best, occasionally reads something short. The knowledge-based society reads, studies and seldom or never watches television. Accordingly, we would be wise to consult those who have studied the essence of Spain”.

This article drew my attention to something that is, in my opinion, essential: whether today, living as we do in a time in which we have access to more information than ever before, society is becoming capable on all fronts of transforming this information into greater knowledge, or whether, on the contrary, rapid access to information is having the effect that we no longer look deeply into things and that, in fact, we know a little about a lot of things, and a lot about very little.

José María Sanz-Magallón, Subdirector-General of Internal Communication and Knowledge Management at Telefónica S.A., in an article published in Nueva Revista affirmed that “A daily issue of the New York Times contains more information than an average citizen of the 17th century would have had in his entire life. More information has been generated in the last five years than in the last 5,000 years, and this information doubles every five years”.

But are we capable of converting all that information into knowledge? Or can an excess of information have an adverse effect on society in general when it comes to deepening knowledge? And all of this without asking ourselves a question that is just as important: who is monitoring the truth and rigor of all this information?

As Sanz-Magallón notes in the article I just mentioned, “It is clear that, thanks to the development of modern information storage, processing and transfer technologies, human beings can cope with and work with the enormous amounts of data produced. Nonetheless, as Julio Linares indicates, ‘the more the information generated by society, the greater the need to turn it into knowledge’”.

Faculty members Zoia Bozul and José Castro Herrera, in their paper “University Faculty in the Knowledge-Based Society: Professional Teaching Skills”, take the view that:

“The knowledge-based society is not something that exists now; rather, it is a final stage of an evolutionary phase toward which society is moving, a stage subsequent to the current information era, which will be reached through the opportunities represented by the information and communication technology (ICT) of current societies.

Based on this, a need is perceived to train people who are capable of selecting, updating and using knowledge in a specific context, who are capable of learning in different contexts and modes throughout their life and who are able to understand the potential of what they are learning so as to adapt their knowledge to new situations”.

Nonetheless, the concepts of “information society” and “knowledge-based society” are frequently confused or even treated as the same thing.  I believe, however, that today, more than ever, it is necessary to distinguish clearly between information and knowledge, even if information is an integral part of knowledge.

In the previously-mentioned article, José María Sanz-Magallón defines the knowledge-based society as “that in which citizens have practically unlimited and immediate access to information, and in which information, its processing and transfer, serve as key factors in all the activities of individuals, from their economic relationships to leisure and public life.”


University of Barcelona professor Karsten Krüger, in his paper “The concept of ‘knowledge-based society’”, notes that “the current concept of ‘knowledge-based society’ does not focus on technological progress, but rather regards it as one factor of social change among others, such as, for example, the expansion of education. According to this focus, knowledge will increasingly serve as the basis for social processes in various functional areas of societies. The importance of knowledge as an economic resource will grow, thus entailing the need to learn throughout one’s lifetime. But awareness of ‘not knowing’ and awareness of the risks of modern society will also grow”.


Along these lines, José Luis Mateo, former Vice President of the CSIC, in his paper “Knowledge-based society”, states that: “knowledge has therefore always played an important role, although it is the rate of its generation that undoubtedly creates major differences from one era to another.

Every so often, our current society is referred to as a “learning society” and, doubtless, this name reflects the reality, although it would be advisable to qualify or add that this is mainly a result of the rapid production and generation of knowledge, which requires ongoing learning to avoid one’s knowledge of the matter in question becoming obsolete. The learning society is therefore a consequence of the knowledge-based society. In other words, the most recent generations of professionals and those to come will never cease to be students”.

In the light of all of the foregoing, I believe it is necessary to understand that the information society should not be confused with the knowledge-based society, although it will lead us (it is leading us) inexorably towards it.

But in the same way, it is necessary, now more than ever, not to be superficial or frivolous, not to be “informedly uninformed”. Our future, the future of society, as has always been the case, depends on knowledge and on our capacity to turn information into knowledge.

The evolution of technology has given us the tools (unlimited access to information); it is now up to us to put these tools to use.  It is our job to transform information into knowledge.

In the words of Kofi Annan: “Knowledge is power. Information is liberating. Education is the premise of progress, in every society, in every family”.

At Centro de Estudios Garrigues, we want you to truly stand out, to understand that you are called on to take the reins in the transformation of society and that, in this transformation, the most important tool, the one that will set you apart from the rest, is knowledge.

Allow us to help you in this process, to help you build a better society, a better future; allow us to give you the tool that will enable you to change the world: knowledge.

Today you have been duly informed of your responsibility.

Thank you.

The LLM in International Transactions begins at Fordham University in New York

The LLM constitutes training in line with the current international business environment and is aimed at Garrigues associates, offering these young lawyers valuable learning opportunities that will contribute to their personal and professional development at Garrigues.

This week saw the inauguration of the third LL.M. in International Transactions. The program, developed as part of Garrigues’ International Training Program (ITP), is run through Centro de Estudios Garrigues in collaboration with Universidad de Nebrija and Fordham University School of Law, one of the oldest and most prestigious educational institutions in the US.

The program offers training in line with the current international business environment and is aimed at Garrigues associates, offering these young lawyers valuable learning opportunities that will contribute to their personal and professional development at the firm. The international teaching staff is made up of noted academics and expert lawyers in different fields of the law, as well as professionals from public institutions and multinational groups.

The program offers a multidisciplinary grounding in legal and business matters as well as the soft skills necessary to advise on the negotiation and implementation of corporate and commercial transactions and on the resolution of the related disputes. The training has an international dimension since many transactions involve more than one legal system and entail a combination of continental civil law systems and common law institutions.

The first stage of the training has begun in New York (USA). During two intensive weeks of study, students will attend classes at Fordham Law School in order to familiarize themselves with the US legal system and the main common law institutions.

The program will continue online until April, during which time theoretical and practical aspects of different areas such as corporate governance, insolvency, antitrust law, financial markets and privacy will be studied from a comparative perspective. The LL.M. ends with various weeks of study at Centro de Estudios Garrigues in Madrid, focusing on the resolution of multidisciplinary case studies and various talks on relevant topics for international lawyers (negotiation techniques, lobbying and advocacy, ICT tools for lawyers, disruptive technologies affecting law firms, digital transformation, etc.).

The new disclosure obligation on tax intermediaries

The European Union has finally approved, after a particularly quick procedure, the Directive requiring so-called tax intermediaries to supply specific information on transnational transactions with tax relevance. We are talking about Directive 2018/822 of 25 May 2018 (Official Journal, June 5), amending Directive 2011/16/EU as regards mandatory automatic exchange of information in the field of taxation in relation to reportable cross-border arrangements.

1.- Background to the Directive

Some countries had already tried out reporting mechanisms for transactions that could involve aggressive tax planning. Examples are the tax shelter disclosure system in the U.S. and the DOTAS (disclosure of tax avoidance schemes) regime in the UK. These experiences spread to other countries and were the inspiration for BEPS Action 12 on mandatory disclosure rules. In the final report on this action, the OECD called for the use of these disclosure regimes in relation to the “promoters” of standard schemes identified through hallmarks. Disclosure would have the dual aim of providing immediate information to the authorities and of deterring the offering of abusive planning schemes.

Taking up these ideas from the BEPS project, the European Commission submitted a Proposal for a Directive on 21 June 2017 and, following the political agreement reached by ECOFIN on March 13, 2018, the wording of Directive 2018/822 was settled, containing notable differences with respect to the initial proposal, especially the broadening of the personal scope of the reporting obligation itself. The Directive amends Directive 2011/16, on automatic exchange of information between the member states in the field of taxation, which is why it is known as DAC6.

2.- Content of the Directive

The contents of the Directive are easily summarized. So-called tax intermediaries must report to their tax authorities specific information on any cross-border arrangements in which they take part, where those arrangements have any of the hallmarks listed in the Directive itself. The member states will then automatically exchange that information and therefore have prompt knowledge of abusive or potentially abusive planning arrangements.

The information must relate to a cross-border arrangement (“dispositifs” and “mecanismos” in the French and Spanish versions). There are no reporting obligations for purely domestic arrangements, not affecting any other state, although a member state may unilaterally include those transactions in the scope of the mandatory reporting regime.

No definition is given of “arrangement”. It must be interpreted as meaning any dealing or transaction or set of dealings. These arrangements are mandatorily reportable where they have any of the characteristics or hallmarks set out in the new Annex IV to Directive 2011/16. The hallmarks relate to different objectives. The first two of the five hallmark categories relate to typical standard tax planning arrangements, usually involving a tax purpose combined with a fee for the promoter and a confidentiality clause. The third category is targeted at arrangements leading to a no-tax scenario by taking advantage of certain tax regimes, including the absence of any corporate income tax or a zero or “almost zero” rate. The fourth category is designed to deter arrangements that may have an impact on the automatic exchange of information between countries and the identification of beneficial ownership. And the last category is perhaps the most controversial, since it relates to transfer pricing matters. It includes arrangements linked to the transfer of hard-to-value intangibles and certain reorganizations between companies in the same group involving transfers of functions, risks or assets, if the projected annual earnings before interest and taxes (EBIT) of the transferor over the three-year period after the transfer are less than 50% of the projected annual EBIT had the transfer not been made.
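
Purely by way of illustration (the function and the figures below are mine, not the Directive’s), the 50% EBIT test in that last category works like this:

    # Hypothetical illustration of the transfer pricing hallmark described above:
    # an intra-group transfer is reportable if the transferor's projected annual
    # EBIT over the three years after the transfer is less than 50% of the
    # projected annual EBIT had the transfer not been made. Figures are invented.

    def is_reportable(ebit_with_transfer, ebit_without_transfer):
        """Each argument: projected annual EBIT (in millions) over three years."""
        avg_with = sum(ebit_with_transfer) / len(ebit_with_transfer)
        avg_without = sum(ebit_without_transfer) / len(ebit_without_transfer)
        return avg_with < 0.5 * avg_without

    # A transferor projects EBIT of 10, 11 and 12 without the transfer, but only
    # 4, 4 and 5 after moving functions, risks and assets to another group company:
    print(is_reportable([4, 4, 5], [10, 11, 12]))  # True -> reportable arrangement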

It is the intermediaries of a member state, not the taxpayers, that in principle have the obligation to report these arrangements. And the Directive works with a very broad definition of intermediary. It encompasses anyone who designs, markets, organizes, makes available for implementation or manages the implementation of a reportable cross-border arrangement. But it also means anyone who knows, or could reasonably be expected to know, that they have undertaken to provide, directly or through others, aid, assistance or advice with respect to designing, marketing, organizing, making available for implementation or managing the implementation of a reportable cross-border arrangement. Where more than one intermediary is involved, the reporting obligation falls on all of them, unless the same information has already been filed by another of those intermediaries. The relevant taxpayer has the reporting obligation only if there is no intermediary, because the arrangement was devised and implemented in house, or where the national rules on legal professional privilege relieve all the intermediaries of this obligation.

The reportable information appears to relate only to identifying the transactions, their characteristics and values.

3.- Conclusions

This Directive plays a crucial part in the move to review tax planning practices, but it suffers from a defect of definition: it mixes up information gathering with the combatting and prevention of tax fraud, without clarifying the limits separating them, and it shies away from any attempt to make the system it sets out serve to give greater legal certainty. On the contrary, it warns that the reporting of this information does not provide any degree of advance certainty as to the validity that the tax authorities will give to these arrangements. Its implementation in the various states may therefore be confused and could, ironically, aid tax competition between them, in addition to placing obstacles in the way of the functioning of the internal market by leaving out purely domestic arrangements.

In Spain’s case, the transposition of the Directive will without a doubt rekindle old problems that have never been resolved: how to define tax advisors and the meaning and scope of their legal professional privilege. Moreover, by somehow singling out so-called tax planning, it will affect the internal organization of the profession.

On the intellectual origin of blockchain (III). More on the recent forerunners

I ended the previous post on the subject of David Chaum and how his DigiCash did not lead to a proper break with traditional cash. The disruptive leap in this respect, even if still only in a theoretical or speculative realm, is attributable to the following two characters in this story.

The first of these characters actually worked with David Chaum at the unsuccessful DigiCash company. I am talking about a U.S. citizen with a Hungarian surname: Nick Szabo. A multi-talented man: computer science graduate (1989) from the University of Washington, cryptographer and jurist. Besides working at DigiCash, he was the designer of bit gold, a digital currency project, forerunner of Bitcoin and blockchain. Many have said in fact that the real person behind the pseudonym “Satoshi Nakamoto” –the bitcoin creator – is Szabo, something he has always denied. British writer and journalist Dominic Frisby said: “I’ve concluded there is only one person in the whole world that has the sheer breadth but also the specificity of knowledge and it is this chap…”. There is even a subunit of the Ether cryptocurrency (the currency running on the Ethereum platform) that was given his name (the szabo).

Szabo’s first great contribution on this subject was a paper entitled “Smart contracts: building blocks for digital free markets”, published in 1996 in Extropy, a Californian futurist and transhumanist journal. In this visionary article, Szabo, computer engineer, cryptographer and jurist, asks how the Internet, combined with modern cryptographic protocols (asymmetric or double-key cryptography, blind signature systems such as those devised by Chaum, multiple signature systems, mixing protocols), could revolutionize traditional contract law, by enabling such a basic part of the law as the contract, the basis of the whole of our market economy, to meet the requirements of online trading. It was in this paper that the term and idea of the “smart contract”, now part of everyone’s vocabulary, was created: a software program through which obligations that are both agreed and programmed are enforced automatically, giving rise to a contract that executes itself, aided by computer technology. This is ideal, in particular, for contracts not just between absent parties but between strangers who have no grounds for trusting each other. This was also where we first saw the term “smart property”, used to refer to a smart contract incorporated into a physical object (a vehicle, the lock of a house), so that the physical availability of that object is also programmable according to the terms of a specific agreement. A sketch of the idea follows.
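
Purely as an illustration (the example is mine, not Szabo’s, and involves no blockchain), a self-executing agreement can be thought of as obligations encoded as programmed conditions:

    # A minimal, purely illustrative sketch of a self-executing agreement:
    # performance is enforced automatically by the software itself once the
    # programmed condition (payment of the agreed price) is met.

    class VendingContract:
        """Pay at least the agreed price and the good is released automatically."""

        def __init__(self, price: float, good: str):
            self.price = price
            self.good = good

        def pay(self, amount: float) -> str:
            if amount >= self.price:
                return self.good  # performance enforced by the program
            raise ValueError("insufficient payment: the contract does not execute")

    contract = VendingContract(price=2.0, good="soda can")
    print(contract.pay(2.0))  # "soda can": the contract executes itself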

This first paper on smart contracts was revised and extended in a 1997 publication entitled “Formalizing and securing relationships on public networks”. Here we already find an allusion to the idea of distributed trust, in other words, to how the participation of several agents in the monitoring and recording of a transaction is a guarantee of certainty and protection against fraud.

This idea was explored further and started to gain importance in publications such as “Secure Property Titles with Owner Authority”, a paper published in 1998 in which, faced with the problems of political uncertainty and discretionality, especially in less developed countries, that are associated with centralized property record systems, it was proposed to have a titles database distributed or replicated across a public network (a record system that, we are told, would be able to survive a nuclear war). This involves the creation of a kind of property club on the Internet, which gets together and decides to keep track of the ownership of some kind of property. The title held by each member is authenticated with the electronic signature of the previous owner, a process that is reproduced with each successive owner, forming a chain. And the record of the chain of titles, which shows the current owner of each item of property, is based on a consensus of the majority of the participants, given that it is unlikely that they will all come to an agreement to commit fraud. As we shall see, here lies the core of the ownership recording system for the bitcoin. The sketch below illustrates the chain-of-title idea.
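
A minimal sketch of the chained-title idea (my own simplification; a real system would use asymmetric digital signatures, for which a hash stands in here):

    # Each transfer record is linked to the entire prior history, standing in
    # for the previous owner's digital signature over the transfer.
    import hashlib

    def h(data: str) -> str:
        return hashlib.sha256(data.encode()).hexdigest()

    class TitleChain:
        def __init__(self, asset: str, first_owner: str):
            self.asset = asset
            self.records = [{"owner": first_owner, "prev": None}]  # genesis title

        def transfer(self, new_owner: str):
            link = h(str(self.records[-1]) + new_owner)  # binds record to history
            self.records.append({"owner": new_owner, "prev": link})

        def current_owner(self) -> str:
            return self.records[-1]["owner"]

    chain = TitleChain("plot-42", "Alice")
    chain.transfer("Bob")
    chain.transfer("Carol")
    print(chain.current_owner())  # Carol; anyone can replay and verify the chain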

Another important paper exploring these ideas is “Advances in distributed security” published in 2003, where Szabo proposes leaving behind the unattainable idea of absolute certainty, to settle for systems with a high probability of certainty such as that provided by cryptography. In this context, he proposes processes such as distributed time-stamping, the use of hashes as a means of identifying the time-stamped messages or files, the creation of “Byzantine-resilient” replication systems, etc.

Alongside his concern over alternative systems to ensure compliance with contracts and the chain of ownership using the Internet, software programming and cryptography, Szabo also turned his attention to the specific subject of money, going much further than the ideas explored by David Chaum. What concerned Chaum, as we have seen, was the subject of privacy: how the fact of acting as intermediaries in our electronic payments gives financial institutions knowledge of essential information about our lives. Szabo also confronted another issue: that the value of the money we use is left to the discretion of political authorities; the problem of discretionary inflation, in other words. This is where the impact of his 1998 proposal for bit gold lay, which appeared at the same time as another very similar idea: b-money, by Wei Dai.

Wei Dai is a cryptographer and a fellow computer science graduate of the University of Washington. In 1998 he published on the Cypherpunks mailing list a very short paper entitled “B-money: an anonymous, distributed electronic cash system”, which was later quoted as a reference work in the whitepaper by Satoshi Nakamoto (no work by Szabo was ever quoted as such). The driving force behind Dai’s work, like that of any good cryptoanarchist, was basically the opacity of cash transactions, and the terminology was perhaps a little too eloquent: “b-money”. An interesting fact is that the smallest unit of the Ether cryptocurrency is called the “wei”, named after this forerunner.

The idea put forward in these proposals (which tie in with the most radical visions of the cryptoanarchism of Tim May, whom Dai explicitly quotes at the beginning of his paper) is not to represent the existing money that is legal tender in a new electronic format, so as to achieve the anonymity of electronic payments, but to replace money originating from the government with a new type of money created by the users themselves, assisted by the web and cryptography. This intention, which as we can see has a much more radical political significance, because it questions one of the key attributes of state sovereignty, the printing of money, poses a problem that goes beyond the simple accounting-record issue of controlling the circulation of money (that is, avoiding the dual availability of a digital asset): how to control the creation of this money, so as to avoid discretionality, ensure its scarcity, and make it somehow a reflection of an economic activity or value.

Wei Dai proposed a type of regular online auction among the system participants to determine the amount placed into circulation in new digital coins.

Szabo’s approach was different. He had for some time been mulling over the idea of how to make a simple bit string (a given number of zeros and ones) into something of value in itself. He was looking for a digital object that could work like gold. The instrument he devised for this –an application of the hashcash algorithm created by Adam Back to prevent email spam, mentioned also by Nakamoto- was a computational proof-of-work, a solution that could be given an economic meaning similar to gold, through the effort and use of resources required for its extraction; the use of computation cycles, in this case. This electronic money devised by Szabo is therefore managed through a program on the web which puts a given mathematical challenge or problem to the system participants. This mathematical problem or puzzle is related to the cryptographic function known as hashing, and may only be solved using “computational brute force”, in other words, by trial and error using different figures until a string is found that fits. When this result is obtained, in the form of a given bit string, it becomes the system’s first unit of currency. The program rewards the first participant to find that string by giving them the unit of currency, which can then be used by this participant to make payments to other users, and so the unit of currency and its fractions begin circulating. This first bit string, obtained by solving the problem, is the starting point for the next challenge, which the program then poses. This is how new currency units are added to the system regularly and in a programmed way.
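
By way of illustration, here is a minimal hashcash-style proof-of-work sketch (my own toy code, not Szabo’s actual design): the program searches by brute force for a number whose hash, combined with the current challenge, falls below a target, and each solution seeds the next challenge.

    # Minimal hashcash-style proof-of-work (illustration only).
    import hashlib

    def proof_of_work(challenge: str, difficulty_bits: int = 16) -> int:
        """Brute-force search for a nonce whose hash meets the difficulty target."""
        target = 2 ** (256 - difficulty_bits)   # smaller target = harder puzzle
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce                    # the costly-to-find "winning" string
            nonce += 1

    nonce = proof_of_work("genesis-challenge")
    print(nonce)  # the solution; its hash would seed the next challenge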

This proposal was perhaps a little primitive, owing its existence to a metal-based and therefore materialistic idea of money as a thing that must be given an intrinsic value rather than simply serve as a symbol of value. It was arguably misguided too, because the intrinsic value we give to gold does not arise only from its scarcity and the difficulty of obtaining it, but also from its intrinsic properties as a substance, which can never be said of a sequence of zeros and ones, no matter how difficult they are to obtain.

Neither Szabo’s bit gold nor Dai’s b-money would ever be put into practice, but they are the most direct forerunners of the bitcoin.

On the intellectual origin of blockchain (II). Recent forerunners

In the first post in this series on the intellectual origin of blockchain technology, I talked about two figures I consider to be early forerunners: Alan Turing and John von Neumann. In this second installment, I will look at two more recent figures: Tim May and David Chaum (and in a third post, I will touch on Nick Szabo and Wei Dai).

After the gloomy years of World War II and the Cold War, we now turn to the 1980s and 1990s, with their very different political, socio-economic and technological context. After the fall of the Berlin Wall and the break-up of the Eastern Bloc, the buzzwords became the “end of history” (in the sense that Marxist utopias had been left behind) and globalization. In the technology field, we were seeing increasingly powerful microprocessors, the internet and mobile phones.

As readers may remember, when internet first came on the scene, many heralded that it would bring about massive cultural, social and political changes. The lofty ideals of the Age of Enlightenment seemed closer than ever before: thanks to this new technology, a world compartmentalized into aggressive nation-states and under the thumb of large multinationals could give way to a true universal human community, united through communication, the free and direct flow of information and easy access to knowledge. Of course, this cyberspace-based community would have its economic facet as well: universal trade, which would spread prosperity throughout the planet.

However, while internet and mobile communications have indeed drastically reshaped our social customs, it didn’t take long to see that new technologies were not engendering the political changes some people hoped for. Nation-states are not a thing of the past; rather, they have found digital technologies useful as weapons for controlling their citizens, citing dangers from radicalism and globalized terrorism, which, in turn, have also co-opted technology for their own purposes, the polar opposite of the enlightened ideals internet was supposed to bring with it. On the economic side, we have seen that globalization does not in fact mean global prosperity. Different multinationals have come and gone, but, far from the expected distribution of wealth, economic power is even more tightly concentrated in the hands of a few. Today, a handful of companies born from new technologies, namely Apple, Microsoft, Google, Facebook and Amazon, have attained unprecedented popularity, wielding more power to control and influence users than ever seen before.

This situation, which first started to take shape in the 1990s, has pushed the somewhat visionary and utopian mindset of the internet’s early days into a type of resistance movement characterized by activism and an anti-establishment and anti-system ideology. The difference is that, this time, the insurgents are striking from within, using digital tools for their own purposes. This includes groups such as hackers (who, like all pirates and rebels, are rather romanticized), cyberpunks and the topic of this article, the cypherpunks and cryptoanarchists leading an ideological movement to put cryptography and information encryption techniques in the hands of individuals and to thwart national security agencies’ attempts to monopolize the use of this technology.


The cryptography I am referring to in this post is modern cryptography, heavily based on mathematical theory, computer science and the application of electronics to computing, i.e., something that only began to be developed in the 1950s. Before then, encryption techniques were much more rudimentary: from the simplest codes, like the substitution cipher Julius Caesar used for military messages, to the electro-mechanical encryption of telegraph and radio messages by German military intelligence services during World War II, using the Enigma series of machines, which created a polyalphabetic substitution through variable-position rotors. By placing the rotors of the deciphering machine in the same positions as in the enciphering machine, the recipient could decode messages. This was a symmetric-key, mechanics-based encryption system, where letter substitutions changed every so often, but the problem was that the system only worked if the sender had previously given the recipient the key (i.e., the specific placement of the rotors) to be used at a certain time. Yet this information could be intercepted by the enemy and potentially used to decode messages if it also had the same model of Enigma machine. In contrast, mathematical cryptography is based on arithmetic operations or mathematical calculations applied to digitalized messages, that is, messages that have been previously converted into numbers. Thanks to computers, machines that operate at electronic speed, close to the speed of light, practical use can be made of encryption and decryption techniques that require making highly complex numeric calculations very quickly (generally involving very large prime numbers, the product of which is difficult to factorize).
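
As a purely illustrative aside (the tiny numbers are mine and offer no real security), the principle behind these prime-based public-key schemes can be sketched in a few lines:

    # Toy RSA-style key generation and encryption (insecure, illustration only).
    # Security in real systems rests on n being the product of two very large
    # primes that is infeasible to factorize. Needs Python 3.8+ for pow(e, -1, m).

    p, q = 61, 53            # two toy primes, kept secret
    n = p * q                # public modulus: 3233
    phi = (p - 1) * (q - 1)  # computable only by someone who can factor n
    e = 17                   # public exponent
    d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

    message = 42
    ciphertext = pow(message, e, n)    # anyone may encrypt with the public key (e, n)
    decrypted = pow(ciphertext, d, n)  # only the private-key holder can decrypt
    print(decrypted == message)        # True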


Initially, governments, in particular the United States through its National Security Agency (NSA), attempted to keep the knowledge and use of this technology, so critical during wartime, to themselves, standing in the way of commercial use and use by the general public. However, in the mid-1970s, following a bitter battle with the NSA, IBM registered its Data Encryption Standard (DES) algorithm with the National Bureau of Standards. This algorithm was later made available to financial sector companies, which needed it to develop their automatic teller machine networks. Knowledge and public release of this algorithm sparked a growing interest in modern mathematical cryptography beyond the closed field of national security. In that same decade, the public also learned about revolutionary asymmetric (or public-key) cryptography: the Diffie-Hellman algorithm, followed by the RSA (Rivest, Shamir and Adleman) algorithm, first developed in 1977. Asymmetric cryptography resolved the formidable vulnerability issue I mentioned above, namely that, in traditional symmetric cryptography systems, private keys must be shared between sender and receiver. This development was absolutely vital for enabling secure communications between parties that do not know one another, which in turn is essential for carrying out financial transactions via internet.


By the 1990s, the crypto-libertarian movement was beginning to take real shape thanks to Timothy C. May, better known as Tim May. In the early digital days of 1992, this California hippie-techie, a well-respected electronic engineer and scientist at Intel, wrote The Crypto Anarchist Manifesto, a short text heralding that computer technology was “on the verge of providing the ability for individuals and groups to communicate and interact with each other in a totally anonymous manner. Two persons may exchange messages, conduct business, and negotiate electronic contracts without ever knowing the True Name, or legal identity, of the other. Interactions over networks will be untraceable, via extensive re-routing of encrypted packets and tamper-proof boxes which implement cryptographic protocols with nearly perfect assurance against any tampering. Reputations will be of central importance, far more important in dealings than even the credit ratings of today. These developments will alter completely the nature of government regulation, the ability to tax and control economic interactions, the ability to keep information secret, and will even alter the nature of trust and reputation (…) The methods are based upon public-key encryption, zero-knowledge interactive proof systems, and various software protocols for interaction, authentication, and verification. The focus has until now been on academic conferences in Europe and the U.S., conferences monitored closely by the National Security Agency. But only recently have computer networks and personal computers attained sufficient speed to make the ideas practically realizable…”


In the ensuing years, Tim May was the force behind, and the main contributor to, the cryptography and cryptoanarchy internet forum known as the Cypherpunks electronic mailing list and its document The Cyphernomicon. He is one of the leading intellectual references among the designers of the first cryptocurrencies, such as Nick Szabo and Wei Dai.

The next figure we look at, also from California, had a less political, and more technical and business, focus: David Lee Chaum, a brilliant mathematician and computer scientist who from the 1980s on created novel cryptographic protocols applicable in online commerce, payments and even voting. He is credited with the cryptographic protocol known as the blind signature, which is the digital equivalent of the physical act of a voter enclosing a completed anonymous ballot in a special envelope that has the voter’s credentials pre-printed on the outside.

Chaum’s motivation for creating this type of protocol was his concern for the privacy of financial transactions, which was being eroded as electronic means of payment became more widely used. One of the essential properties of traditional cash bills and coins is their anonymous nature, as money simply held by the bearer. In contrast, all the new ways of sending money represented in bank accounts (bank transfers, credit cards, web-based payment gateways, etc.) allowed for tracking of who paid what amount to whom, and why. This meant that an increasing amount of data revealing a lot about consumers’ preferences and spending habits and, in short, the type of person they are, was being recorded and stored in digital databases completely beyond the consumers’ control.

In this blog post, I will not go into a detailed technical explanation of the blind signature protocol invented by Chaum, which he presented at a 1982 conference under the title “Blind signatures for untraceable payments.” In a nutshell, however, it involves a person or authority verifying, with their electronic signature, a specific digital item generated by another person. The authority knows the identity of the sender and what type of item is involved, but it does not know the content of the signed item. This technology has many uses. It is crucial when a third-party verifier is needed, such as an election authority certifying that the person casting an online vote is allowed to vote and only submits one ballot, but where, to safeguard the secrecy of ballots, this authority must not know the content of the validated vote. In the case of electronic money, the third-party verifier is the bank that holds the account of the person wishing to generate a digital monetary item, or token, because it is the bank that must verify that the sender has sufficient funds and then debit the corresponding amount from the account to avoid double-spending. The blind signature protocol is very useful in this case. Once the bank has verified that the sender (the payer) is indeed authorized to make the payment, it can sign the digital token without knowing the specific serial number that individually identifies that token, while still monitoring the transaction. Accordingly, when the recipient (the payee) presents a given digital token with that specific serial number to the same bank to exchange it for cash money, the bank does not know which of its clients sent the token and, consequently, who made the payment in question.


How can a bank sign a digital monetary token without knowing its individual serial number? This is where Chaum’s mathematical ingenuity came into play: the payer (i.e., the bank client wishing to make a payment with electronic cash) generates a random serial number x and, prior to sending it to the bank with its payment order, disguises it by multiplying it by a factor known only to the payer. Put differently, the payer gives the bank the serial number encrypted using a commutative blinding function C(x), which can be reversed by applying the inverse operation C’, in this case dividing by the same factor. After verifying the payer’s standing to make the payment, the verifying bank electronically signs that encrypted serial number with its private key S’ and returns the result, S’(C(x)), to the payer. The payer then applies the inverse operation C’ to undo the earlier multiplication, C’(S’(C(x))), and, because the blinding and the signature commute, obtains S’(x), that is, the original serial number now electronically signed by the bank. The payment can now be made without the bank knowing who generated that specific note. This is clearly analogous to issuing bank notes against cash deposits, but in a digital environment.
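
For the mathematically curious, here is a toy numerical sketch of this blinding mechanism, using an RSA-style signature and deliberately tiny, insecure numbers of my own choosing:

    # Toy Chaum-style blind signature (illustration only; Python 3.8+).
    # The bank signs a blinded serial number without ever seeing the original;
    # the client unblinds the result and holds a valid signature on the serial.
    import math

    p, q = 61, 53                      # the bank's toy RSA key
    n = p * q
    e = 17                             # public verification exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # the bank's private signing key S'

    serial = 1234                      # the note's random serial number x
    r = 7                              # the payer's secret blinding factor
    assert math.gcd(r, n) == 1

    blinded = (serial * pow(r, e, n)) % n        # C(x): the payer blinds the serial
    blind_sig = pow(blinded, d, n)               # S'(C(x)): the bank signs blindly
    signature = (blind_sig * pow(r, -1, n)) % n  # C'(S'(C(x))) = S'(x): unblinding

    # Anyone can verify the bank's signature on the original serial number:
    print(pow(signature, e, n) == serial)        # True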


In 1989, David Chaum attempted to put this idea into practice with the creation of the Amsterdam-based electronic money corporation DigiCash, but the business did not flourish. In fact, only two banks supported DigiCash systems, the Missouri-based Mark Twain Bank and Deutsche Bank. The business only had 5,000 clients and total payment volume never exceeded $250,000. Chaum later explained that it was hard to get enough merchants to accept the payment method so that enough consumers would use it, or vice versa. Although David Chaum became a hero for cryptoanarchists, the problem was that the average consumer was not that concerned about the privacy of their transactions. In the end, in 1998, DigiCash filed for bankruptcy and sold off all its assets.

What I find most interesting about this attempt is that the electronic cash David Chaum invented still depended on ordinary legal tender (because the process always started and ended at a bank account in dollars, euros or another national currency), and the accounting control that avoided double-spending was still in the hands of the traditional banking system, given that it was a bank that verified the sender’s ability to pay and then debited the amount of electronic money sent from the traditional account.

Bitcoin, which would appear later, was a radical departure from this model.

(The pioneering works on mathematical cryptography are Claude Shannon’s article “Communication Theory of Secrecy Systems”, published in the Bell System Technical Journal in 1949, and the book by Shannon and Warren Weaver titled “The Mathematical Theory of Communication”.

An explanation of how the Enigma machine worked, and stories of the adventures and exploits on the cryptographic front of WWII, can be found in a book cited in a previous post: “Alan Turing. Pioneer of the Information Age,” by B. Jack Copeland, published in Spain by Turner Noema (Madrid), 2012, page 51 et seq.

For more on the battles between the first cryptoanarchists and the NSA, see “Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age,” by Steven Levy, published in Spain by Alianza Editorial, 2001.)

Crossroads for unions: domestic or international?

A noticeable change in recent labor union history is the unions’ gradual departure from their original internationalist ideals, increasingly narrowing their focus to local or domestic matters. The old idea of international proletarianism, and the aspiration to defend workers’ interests worldwide, or at least beyond domestic boundaries, are giving way to union policies heavily conditioned by more immediate interests, often clashing with those defended by labor union organizations in other countries. From the grand labor union statements preaching the fight for workers’ rights in a context of international solidarity, and from organizational activities on the European and world stage in an attempt to be present in the major global debates, we have moved towards framing the protection of interests preferably within the territory where each labor organization operates.

What lies behind this change? Obviously confusion, perplexity, and a failure to analyze and understand the new economic and social context associated with globalization and with the new labor relations. But that is not all: the way in which labor relations are evolving in an increasingly open and globalized world has turned the labor unions into increasingly domestic organizations, whereas companies are tending to broaden their horizons. Companies are more international and the interplay of economic forces is increasingly global, whereas the unions are finding it harder to push forward proposals and approaches that go beyond domestic boundaries.

In the current economic dynamics, in a great many company disputes the unions are finding it increasingly difficult to adopt the role of international player, and are being compelled to restrict themselves to defending local interests, which often come into conflict with the interests of workers and labor union organizations in other countries. In processes of industrial restructuring and the shifting of production, the only union weapon able to exert any real pressure is cross-border industrial action creating a coordinated pressure and negotiation front internationally. Only the concerted pressure of the various business units would make it possible to have representatives with a real ability to influence companies’ decisions. But this comes up against an insurmountable obstacle: the interests of the workers, and of their unions, in the various countries are for the most part not the same. The shifting of production has an adverse effect on the country it leaves, but benefits the workers in the recipient country. The problem becomes more acute in a single market, where it is not even correct to talk about “shifting” to refer to the transfer of production to another country in the single market.

There have been recent examples of all this: cases involving closures of business units by multinational companies with facilities in various European Union countries, in which the labor unions’ first goal was to seek the solidarity of the unions in those other countries, by asking them to take action to pressure the company into reconsidering its intentions. And the response could not have been more disappointing: closure of the factory was going to increase production and strengthen other factories, so their workers were not going to join any action that could adversely affect their interests.

And this situation is reproduced, in bolder hues, within the confines of groups of companies. A good example is the automotive industry, where a firm’s announcement of the manufacture of a new model or of the launch of a new product pits the factories in the various countries against each other in a race to see who will win the contract. Naturally this race is influenced, and heavily so, by the working conditions offered by the labor unions to secure production and, on many occasions, the very survival of the plant. In these circumstances, it is unrealistic to propose common labor union positions, and each labor union is compelled to strictly defend the interests of its own workers, in stark contrast to those of other countries’ workers. The union is therefore forced to “think” in domestic or regional terms whereas the company approaches and develops its strategies globally. This is what I mean when I say that the labor unions are increasingly domestic while companies are becoming more international. More global, in other words.

This course of events in turn has considerably changed the dynamics of labor relations and the relationship of forces among their main players. From one angle, due to the characteristics of the new production system, the unions’ ability to exert pressure in a local context and in the short term has risen. Many companies cannot withstand a stoppage, not even a short one, in production. This is particularly the case at companies supplying others, which must operate on a just-in-time basis, and face fierce competition. In all these cases, the battles are usually won by the unions and that explains the demise of collective bargaining, and the appearance of collective labor agreements with a rising number of concessions. The terms of collective labor agreements often make us wonder how they were ever granted by companies. And the reply is simply this inability to withstand a stoppage in production. But whereas the battles are won by the unions, the wars are won by the companies. A labor union’s advantage in the short term becomes the company’s overriding power in the medium to long term. If the labor unions cannot properly administer their ability to exert pressure in the short term, they run the risk of triggering a scenario where the company, which is a global, rather than a local, player, must reorganize production and use processes for shifting production, or assign the workload to other business units. In Spain we have seen examples of union action which have gone, to paraphrase Groucho Marx, from victory to victory until the final defeat. We have seen how a brilliant string of victories in negotiations can lead to closure of the factory and to production being shifted to other countries.

The unions ought to seek a way to turn that tide and try to find a new role on the global stage where the most important decisions are taken. Burying themselves in local causes is the surest route to insignificance and would condemn the unions to a strictly corporate role, confined to protecting the professional interests of certain core groups of workers, at times no longer against employers but against other core groups of workers, and a long way from the intermediary roles at the levels that really matter for decision-making and from having even the slightest amount of influence on the organization of corporate relations (that political role, beyond strictly corporate matters, that the unions have always sought).

On the intellectual origin of blockchain technology (I). Early forerunners

In my previous contribution to this blog I talked about certain intellectual obstacles that can trip up jurists when dealing with the definition of smart contract and blockchain technology. The first of these is a deficit in technology training. One of the particular features of this technology, now a worldwide talking point due to its multiple applications and disruptive potential, is precisely that a high intellectual threshold is required to gain access to it, because it is hard to explain and understand.

In this and the following post I am going to try to bring a little perspective to the subject, which may help us make out the signal (the significant points) among all the media noise that is currently causing so much interference.

Specifically, in the two posts I have planned, I am going to mention a few figures who have made important intellectual contributions on the path that has led to both cryptocurrencies and blockchain technology, looking first at the early forerunners and then at the more recent ones.

As an early forerunner I could mention the German philosopher Leibniz, who at the end of the seventeenth century, besides making a mechanical universal calculator, conceived the idea of a machine that stored and handled codified information in binary digital code. But I am going to focus on two figures closer in time, who are regarded as the founding fathers of computing: the Briton Alan Turing and John von Neumann, a Hungarian who later became a US citizen. And why these two? Not just because in the nineteen-thirties and forties they laid the intellectual foundations, in math and logic, that gave rise to the development of computing and with it the digital universe we now inhabit, but also because their ideas and visions foresaw a large part of the transformation we are experiencing right now.

Alan Turing recently came into the public eye with a film (The Imitation Game, released in 2014) about his work during the Second World War in the British intelligence services, where he contributed to deciphering the codes of the famous encrypting machine Enigma, used by the German navy and army in their communications. Our interest in him does not arise from his connection with cryptography, however. I especially want to talk about the work that made him famous, published in 1936 in the prestigious Proceedings of the London Mathematical Society: “On Computable Numbers, with an Application to the Entscheidungsproblem”.


The Entscheidungsproblem, or decidability problem, is an arduous logical and mathematical question that kept a number of logicians and philosophers occupied at the beginning of the twentieth century, ever since German mathematician David Hilbert posed it in his writings of the early 1900s as one of the remaining challenges for the century that was then beginning: can mathematics provide an answer of a demonstrative type to every problem it poses? Or, in other words, is the full axiomatization of mathematics possible, so as to reconstruct it as a complete and self-sufficient system? Hilbert, the leading figure in what is known as formalism, considered that it was, and Russell and Whitehead believed they had achieved it with their work Principia Mathematica. However, an introverted logic professor called Kurt Gödel proved it was not possible in a difficult and revolutionary article published in 1931, in which he formulated what is known as Gödel’s incompleteness theorem.

Following Gödel, in the work mentioned above, Turing (as a simple discursive device in an argument about the limits of computability) created the first stored-program machine, later known as the universal Turing machine, which at that time was only a theoretical construction: in other words, a machine with a memory that, besides storing data, holds the program itself for handling or computing those data; a machine that can be reprogrammed and can compute anything computable (in other words, what we now consider a computer). I must also mention his early interest in artificial intelligence, to the point where the so-called “Turing test” is still used to assess the greater or lesser intelligence of a device.

And so, when in 2014 the Russian-Canadian child prodigy Vitalik Buterin, at the tender age of 19, brought into operation that second-generation blockchain called Ethereum, he would tell us that it was a blockchain using a Turing-complete programming language, aspiring to become the universal programming machine, the World Computer. This brought Turing’s original idea into a new dimension: it was no longer a question of creating an individual reprogrammable machine for universal computation, but of a network of computers which, besides simultaneously recording those simple messages that are bitcoin transactions, also allows any programmable transaction within its capability to be carried out on it, with every step in the process and its result stored on a distributed, transparent and incorruptible record with universal access. Put another way, its universal nature relates not only to the programmable object, like Turing’s virtual machine and our current computers, but also to the agents or devices that operate it, in that the program is executed and the result is recorded simultaneously by a vast number of computers throughout the world.

And then there is John von Neumann, one of the great scientific geniuses of the twentieth century, on a par with Einstein. In relation to our subject, von Neumann created the logical structure of one of the first high-speed electronic digital stored-program computers, the first physical incarnation of the imagined universal Turing machine. That computer, the IAS machine (its logical design descended from von Neumann’s famous 1945 report on the EDVAC), was built at the end of the nineteen-forties at the Princeton Institute for Advanced Study (USA) as an instrument to perform the complex and extremely laborious mathematical calculations required for the design and control of the first atomic bombs. In fact, even today the structure of every computer we use is based on what is known as the “von Neumann architecture”, which consists of a memory, a processor, a central control unit and elements for communicating with the exterior to enter and receive data.

Besides the fact that every computer application or development owes its existence (despite his premature death at the age of 53) to the visions and ideas of von Neumann (including artificial intelligence, the subject of his last ideas in works such as the “Theory of Self-Reproducing Automata”), I would like to take a look here at two ideas of this great pioneer.


Firstly, in the years following the end of the Second World War almost everything was in short supply, and to build the machine they had to use leftover equipment and materials from the weapons manufacturing industry. Making a machine put together from these components work properly was a veritable challenge, which von Neumann confronted with the idea that a reliable machine had to be built from thousands of unreliable parts. This idea, which he developed theoretically in two articles in 1951 and 1952 (“Reliable Organisms from Unreliable Components” and “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components”), links up with the formulation, later in the eighties and in relation to the reliability of computer networks created for defense uses, of what is known as the “Byzantine Generals’ Problem”, which is usually mentioned in explanations of blockchain. It is also related to “resilience”, one of today’s buzzwords, and is at the very core of blockchain design: how to create the most reliable and transparent recording system that has ever existed based only on particular individual agents, any of which could be false.

In blockchain design we can also trace the footprint of another great intellectual contribution from von Neumann. Gifted with an extraordinarily broad intelligence, besides taking an interest in and revolutionizing set theory, quantum physics and computer science, he also forayed into economic science, where he was no less revolutionary: he pioneered game theory, co-authoring with Oskar Morgenstern, in 1944, the work entitled “Theory of Games and Economic Behavior”. Much of game theory, which analyzes the rationality of the strategic decisions of individual agents operating in an economy on the basis of the likely behavior of other agents, is also present in the ingenious design explained by the enigmatic Satoshi Nakamoto in his 2008 paper. After all, the design of a public blockchain, such as the one on which Bitcoin is based, rests on the idea that the pursuit of individual gain by a few agents (the “miners”) results in the general reliability of the system; and on the idea that it makes little sense to defraud a system in order to obtain an asset whose economic value depends directly on the general belief in the reliability of that system.
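That incentive argument can even be reduced to a couple of lines of arithmetic. The figures in the following sketch are purely hypothetical, not real network parameters; the point is only the comparison: a successful attack collapses belief in the system, and with it the value of whatever is stolen.

```python
def expected_gain(reward, price, attack_success_prob=0.0, price_after_attack=None):
    """Expected value of one block reward, in currency units (toy model)."""
    if price_after_attack is None:
        return reward * price                      # honest mining
    # Attacking: even if it succeeds, belief in the system collapses,
    # and with it the price of the very asset being captured.
    return attack_success_prob * reward * price_after_attack

honest = expected_gain(reward=12.5, price=6000)
attack = expected_gain(reward=12.5, price=6000,
                       attack_success_prob=0.3, price_after_attack=300)
assert honest > attack  # a rational miner prefers honesty
```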

(I would recommend the following books to anyone who has a taste for more on these subjects: “Turing’s Cathedral: The Origins of the Digital Universe”, by George Dyson, Pantheon, ISBN-13: 978-0375422775; “Turing: Pioneer of the Information Age”, by B. Jack Copeland, Oxford University Press, ISBN-13: 978-0198719182; and “Incompleteness: The Proof and Paradox of Kurt Gödel” (Great Discoveries), by Rebecca Goldstein, W.W. Norton & Company, ISBN-13: 978-0393327601.)

Taxation of the digital economy: the European package

On March 21, 2018, the European Commission published a set of proposed new rules and measures on the taxation of the digital economy, in an attempt to set a starting point for the expected international negotiations on this matter at a time when the OECD had preferred simply to acknowledge the absence of sufficient consensus.

The key elements of this package are two proposals for a directive. One contains rules on the taxation of businesses with a significant digital presence. The other is a proposal for a directive on a common system for a tax on digital services, charged on revenues from the provision of certain digital services: in other words, the creation of a new tax in every member state on the revenues from those services, as a transitional solution until the directive on significant digital presence can be approved, which is not currently possible owing to the failure to reach a consensus within Europe and internationally. The third element completing the package is a Commission Recommendation, approved on the same date, March 21, suggesting that member states include in their tax treaties with non-Union states the guiding principles upheld in the European Union on the idea of significant digital presence. All of this is rounded off with a Communication from the Commission to the European Parliament and the Council on the background to, and reasons for, the proposed reform.

Through this package the European Commission has taken a step further towards its goal of harmonizing corporate income tax, and has moved the global debate on by proposing a two-stage plan. Faced with the absence of a consensus over the taxation of this income, the Commission proposes first implementing a tax that, in actual fact, taxes revenues at 3 percent. This new digital services tax would be charged on revenues from the provision of certain digital services, and only where they are provided by companies earning sizable revenues. In the terminology used by the proposed directive, taxable services are those consisting in the placing on a digital interface of advertising, those which allow users to find other users and to interact with them, and the transmission of data collected about users which has been generated from such users’ activity on digital interfaces. These services are taxable where the provider’s worldwide revenues exceed €750 million and its taxable revenues obtained within the Union exceed €50 million.
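The mechanics just described can be summarized in a few lines; the following is a sketch of the rate and thresholds mentioned above, not of the directive’s actual drafting:

```python
def digital_services_tax(worldwide_revenues_eur, eu_taxable_revenues_eur):
    """Proposed DST: 3% on revenues from taxable digital services,
    only for providers above both thresholds (per the March 2018 proposal)."""
    WORLDWIDE_THRESHOLD = 750_000_000   # EUR, worldwide revenues
    EU_THRESHOLD = 50_000_000           # EUR, taxable revenues in the Union
    RATE = 0.03
    if (worldwide_revenues_eur > WORLDWIDE_THRESHOLD
            and eu_taxable_revenues_eur > EU_THRESHOLD):
        return eu_taxable_revenues_eur * RATE
    return 0.0

# A group with €900m worldwide revenues and €60m of EU taxable revenues:
print(digital_services_tax(900e6, 60e6))  # 1,800,000.0
```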

The Commission acknowledges, however, that this tax is only a provisional solution, and therefore also proposes a directive on the corporate income taxation of companies with a significant digital presence, a concept that involves an ad hoc reformulation of the old concept of permanent establishment.

This proposal for a directive starts from the premise that the application of current corporate income tax rules to companies in the digital economy “has led to a misalignment between the place where the profits are taxed and the place where the value is created”. Consequently, it clearly acknowledges that a reform of the principles of international taxation is necessary to adapt them to an economy in which intangible assets and the value of data are fundamental elements, without losing sight of the goal of taxing income where wealth is generated. It admits that the traditional rules fail to tax the income of a nonresident in the absence of a physical presence, and acknowledges that the principles governing the transfer pricing system lead to an underpricing of the functions and risks associated with the digital economy. Not even the CCCTB (common consolidated corporate tax base) rules could ensure, for the states where the users of this digital economy are located, recognition of a greater share in the taxation of the income arising from the new economy.

Faced with this challenge, the proposal for a directive puts forward the idea of a “significant digital presence” as a new element broadening the concept of permanent establishment. It captures, however, only income arising not from the digital economy at large but from the provision of certain digital services: services delivered over the internet or over an electronic network, the nature of which renders their supply essentially automated, involving minimal human intervention, and impossible without information technology. The proposal details a number of services included in this definition, such as the supply of digitized services generally, services providing or supporting a business or personal presence on an electronic network, and services generated automatically from a computer via the internet or an electronic network in response to specific data input by the recipient, in addition to those listed in annex III, which basically relate to services delivered over the internet, or to the sale of goods or other services facilitated by the use of the internet.

This digital presence also requires certain thresholds to be met in the member state concerned: notably, a number of users greater than 100,000 or a number of business contracts for the provision of those digital services higher than 3,000.
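Again purely as a sketch, and using only the two thresholds just quoted, the test reduces to:

```python
def significant_digital_presence(users_in_member_state, service_contracts_in_member_state):
    """Sketch of the member-state test described above: presence exists
    if either of the two quoted thresholds is exceeded."""
    return (users_in_member_state > 100_000
            or service_contracts_in_member_state > 3_000)

assert significant_digital_presence(users_in_member_state=250_000,
                                    service_contracts_in_member_state=0)
```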

Besides altering the concept of permanent establishment, the proposal for a directive recognizes that the problem lies also, or especially, in the profit attribution rules, and therefore sets out its own profit attribution rules for this case, including among the relevant risks and functions any economically significant activities performed through a digital interface, especially in relation to data or users, which are relevant to the exploitation of the company’s intangible assets. The profit split method is the preferred method for determining attributable profits.

In short, one of the main problems associated with this tax package is simply the lack of international consensus or agreement, which makes it very difficult to apply these principles and definitions in member states’ relationships with non-Union countries. The Commission Recommendation is well-intentioned in trying to extend this solution to relationships with non-Union countries through the negotiation of tax treaties which, it is admitted, would otherwise prevail, preventing a harmonized global solution from being applied. And to achieve that solution, in the OECD or any other context, we will come up against the obstacles posed by the differing interests of the countries concerned, according to the types of companies they host.

Jurists and smart contracts

Jurists tend to come up against two major obstacles when dealing with smart contracts:

– The first problem is coming to grips with the technology: both the specific technology involved in the architecture and mechanics of a blockchain, and general computing technology. On the first front: what exactly is a peer-to-peer network? How does asymmetric, or public-key, cryptography work? What are hashes, or proof of work? How is node consensus achieved? What is a fork? And so on. And on the second: what is an algorithm? Or a bit string? What is programming? What is “code” in computing? What is involved in “compiling”, “editing” or “executing” a program? (A minimal sketch of two of these notions follows below.) Without a minimum familiarity with all these concepts, any effort to analyze or form a legal opinion on smart contracts is in vain, because, put simply, we have no idea what we are talking about.
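To give just two of those notions some concrete shape, here is a minimal Python sketch of a hash and of proof of work (illustrative only; real networks use different data and parameters):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """A hash: a short, fixed-length fingerprint of arbitrary data."""
    return hashlib.sha256(data).hexdigest()

def proof_of_work(block_data: bytes, difficulty: int = 4) -> int:
    """Find a nonce whose hash together with the data starts with
    `difficulty` zeros: costly to find, trivial for any node to verify."""
    nonce = 0
    while not sha256_hex(block_data + str(nonce).encode()).startswith("0" * difficulty):
        nonce += 1
    return nonce

nonce = proof_of_work(b"some transactions")
print(nonce, sha256_hex(b"some transactions" + str(nonce).encode()))
```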

So, when we define a smart contract as a “self-executing” contract, we should bear in mind that the only thing a computer program does, in principle, is handle information: it performs operations on data, following rules or instructions, to produce new data. The relationship between this and compliance with, performance of, or practical enforcement of a contract is not immediately obvious.
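A sketch may make the point clearer. The following toy “contract” (entirely hypothetical, nothing like production smart-contract code) does nothing but transform data; calling that transformation of records “performance” of a contract is precisely the conceptual leap at issue:

```python
class ToyEscrow:
    """A toy 'self-executing' agreement: release a digital balance
    to the seller only once delivery has been confirmed."""
    def __init__(self, buyer_deposit: int):
        self.balance = buyer_deposit
        self.delivered = False

    def confirm_delivery(self):
        self.delivered = True

    def settle(self) -> dict:
        # The "execution" is nothing more than updating records.
        if self.delivered:
            return {"seller": self.balance, "buyer": 0}
        return {"seller": 0, "buyer": self.balance}

escrow = ToyEscrow(buyer_deposit=100)
escrow.confirm_delivery()
print(escrow.settle())  # {'seller': 100, 'buyer': 0}
```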

The putting into practice of the idea of a smart contract is linked to the creation of those curious “assets” known as cryptocoins, which are made purely of digital information, meaning that the “placing into circulation” of these assets is programmable: it can be fully controlled by a computer program whose output (who is the current holder of a given bitcoin sum) is simply recorded on a digital database. In relation to other assets made purely of digital matter (a sound or image file containing a work subject to intellectual property rights), it is easy to see how their purchase and delivery may be computer-programmed (because a purely digital asset can be made available on remote media). But if our smart contract relates to other types of assets, such as the ownership or use of tangible property, collection rights against a given party, or corporate rights or interests in a company, it will first be necessary to “tokenize” those assets or rights, that is, to represent them through programmable digital files. This obviously poses the problem (now essentially a legal one) of how far our legal system is able to recognize the legal validity of that form of placing tokenized assets or rights into circulation, where the rights concerned need to be made enforceable in practice in the real world, outside the memory of the device or devices (or network, in some cases) executing a given program. Put another way, to what extent is the “authentication” provided by the code executed on the blockchain also legally valid outside the network? (This problem does not arise with cryptocoins, which exist, operate and produce all their effects only on the network.)
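“Tokenizing” an off-chain asset can likewise be sketched in a few lines (hypothetical names throughout; whether this record binds anyone in the real world is exactly the legal question raised above):

```python
# A minimal token ledger: the "asset" is just an entry mapping a
# token identifier to its current holder.
ledger = {"token-42": "alice"}   # token-42 "represents", say, a parcel of land

def transfer(ledger, token_id, current_holder, new_holder):
    """On-ledger transfer: valid only if the sender holds the token.
    Whether the land itself changes hands is for the legal system to say."""
    if ledger.get(token_id) != current_holder:
        raise ValueError("sender does not hold this token")
    ledger[token_id] = new_holder

transfer(ledger, "token-42", "alice", "bob")
print(ledger)  # {'token-42': 'bob'}
```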

Insofar as the development of the internet of things floods the market with a whole range of articles equipped with electronic devices enabling them to connect and communicate with the internet, and to be automatically controlled and programmed, the smart contract may achieve a more effective and self-sufficient automatic self-execution in relation to these articles, less in need of the support of the traditional legal enforcement mechanisms and therefore less dependent (in principle, at least) on recognition of its legal authentication.

This first remark is designed to draw your attention to the fact that in matters related to smart contracts, knowledge and an understanding of technology, of what it is able or not able to do in practice, must come before any legal judgment.

– This brings me to the second major problem, which concerns not so much knowledge as approach: we come to the subject armed with all our legal prejudices, and this can seriously distort our perception. Basically, we are confused by the unclear meaning of the term “smart contract”, which immediately brings to mind our legal concept of a contract and everything associated with it. So we leap too soon into wondering about legal validity or invalidity, legal enforceability or unenforceability, about whether or not the requirements for obtaining legal recognition are satisfied, or even about evidence and its use in litigation; and we fail to realize that this is not really the crux of the matter. At the heart of the matter lies a deeper question, which must not be allowed to go unnoticed as a result of that legal preconception of ours. The question is not whether we face a more or less well-defined new concept seeking to be accommodated in our legal system, but rather something originally conceived as an alternative to our legal system as a whole.

A smart contract, strictly speaking, is not intended to be a legal contract, because it does not need to be one, in the same way as Bitcoin (in the mind of its creators) is not intended to be legally recognized money, or legal tender, but rather money for a society that has already left far behind, as unnecessary, the notions of national state, of laws and of national jurisdictions.

We need the support of a jurisdiction, of the courts of a given country, and, as a precondition for this, recognition of the legal meaning and value of a given arrangement or understanding by the legislation of that country, to the extent that, de facto, compliance with, or practical performance of, the agreed terms depends on the intention of a human being. So, when that intention fails, or becomes inaccessible or hard to enforce, we seek help from the forces of the state. If, however, technology gives us the ability to have that agreed arrangement implemented mechanically or automatically, with complete independence from the intentions of an “obliged” party, then the concept of contract, along with the whole legislative and institutional apparatus belonging to what we know as “contract law”, becomes irrelevant.

Clearly, this approach belongs to the intellectuals and ideologues who were the forerunners of this whole system, the crypto-anarchists: a technological utopia according to which certain problems related to economic exchange and cooperation, which until now have been organized in a very unsatisfactory way (slow, expensive, complicated, unsafe) through the traditional legal systems, may be handled much more efficiently through the simple intervention of technological tools already within our reach.

From this starting point, the real issue that warrants our attention as jurists is twofold. First, whether what is being sought is actually possible in purely practical terms, to what extent (in all areas of human relationships until now covered by contract law, or only in some of them), and how it may be possible. And secondly, whether this alternative way of doing things proves acceptable and advisable, and in which areas it may or may not be, judging it in the light of all the potential interests at play: not just pure economic efficiency and the speed and safety of transactions, but also the need to protect the weaker parties in economic relationships, particularly vulnerable property or vital interests, and the interests of social solidarity which are supposed to form the basis of taxation. All the while remaining very much aware that we are confronting a phenomenon that largely exceeds our forces, the forces of a national state, which may easily be overtaken by events in any attempt to put gates on an open field.

The future of work. Changes in the world of employment

The changes we are experiencing in the world of employment are fast and continual. If we ask what work will be like, and how employment relationships will be structured in the future, the only certain thing we will find is that they will change continually, and we must forget any attempt to find a stable framework to accommodate the new scenarios. In the words of Yuval Noah Harari, author of “Sapiens”: “any attempt to define the characteristics of modern society is akin to defining the color of a chameleon. The only characteristic of which we can be certain is the incessant change”. And the problem is that we often try to understand the changing employment scenario using the conceptual structures of the past. That explains the disconcerted reaction to the changes and the improvised, if well-intentioned, nature of many of the measures proposed to bring order to the new scenario.

Above all, the world of work is experiencing the impact of automation and robotics, which is affecting both the number and the characteristics of the jobs required. The number of jobs is affected because the processes of automation, with the use of increasingly sophisticated robots, are reducing labor needs, more drastically in some industries than in others. The destruction of jobs is vast, though it must be said that the same process that is destroying jobs is also creating new employment opportunities. The problem is two-sided. From one angle, as we have seen in every past change in the production system, there is an inevitable time lag between the destruction of jobs and the appearance of new ones on the market. This puts pressure on unemployment in the short term and means that some generations of workers suffer the consequences of change more than others.

From another angle, and here we link up with the other impact of automation, on the characteristics of employment, the new jobs carry very different training requirements (making it hard for them to be performed by the workers displaced by automation) and may not fall within the traditional formats of the world of employment, with forms of performing work that may differ to a large extent from those traditionally in place.

It also has to be considered that the consequences of automation are felt very differently across the various sectors of the labor force. More qualified (and highly paid) employees in skilled jobs with a high cognitive element are, for the time being, little affected by the use of robots: their work has a low manual component and is not repetitive, and is therefore not easy to automate (for the time being, I must stress, until we have robots with cognitive skills, able to take decisions independently). Similarly, people in lower qualified (and lower paid) jobs, who are increasingly joining the ranks of personal services, are also withstanding the devastation caused by automation, because their tasks are manual but not repetitive. Automation has had the greatest impact on jobs with medium to high qualifications and average to high pay which are manual and repetitive, and therefore easy to replace with a robot. The full effect of this is felt in the manufacturing industry, by the more traditional and unionized components of the working population.

That explains various changes that are taking place in the world of employment. Among other things, it is the root cause, along with other factors, naturally, of the widening salary gap, with a greater difference between the higher and lower paid employees (due precisely to the impact on employees in the middle qualification and pay range). And it also explains, or helps to explain, some of the changes causing the most confusion among analysts who remain within the conceptual frameworks of the past. The rebirth of self-employed work, for example. Leaving aside clearly fraudulent mechanisms seeking only to evade employment legislation, such as that of the false self-employed, a great many new jobs in advanced technology sectors, which make intensive use of information technologies, arise or are entered into as self-employed work. We are increasingly seeing the appearance of new opportunities for self-employed work, dependent to a greater or lesser extent (and, therefore, with a greater or lesser need for protection) but in all cases clearly distinct from the traditional types of self-employed work.

Elsewhere, the new jobs arising in the more advanced sectors of the economy are often linked to specific projects and are therefore temporary. We still think of temporary jobs as precarious contracts, often used simply to avoid the economic and legal costs associated with an indefinite contract, but many of the new jobs created by changes in the production system are genuinely temporary, meaning they require a temporary rather than an indefinite employment contract. The recent introduction in France, by the Macron reform, of project-based contracts is a good example of these new scenarios.

If we try to continue seeing temporary employment exclusively as a bad thing, or at least as an exceptional arrangement in employment contracts, we will never be able to give an appropriate response to the new scenarios in the world of employment. Similarly, both employment and social protection legislation must take into account the new central role that self-employment is gaining, and will increasingly have in the future.

Lastly, a no less important change in labor relations arises from the trend, an unavoidable consequence of the factors described above, towards more individual employment relationships. Individual rules on working conditions will gradually become more important, and the employment contract will regain room for determining working conditions. This creates considerable friction with the traditional collective bargaining system and with the unions’ goal of retaining their monopoly over the rules on working conditions.

All of this makes for a changed and changing world of employment. It will be no use trying to ignore the changes or to stamp them out with legal prohibitions. We must search for new legislative answers to the new circumstances we face, instead of trying to ignore the changes or force them back into the structures of the past.