CJEU Judgments

The Court of Justice of the European Union attempts to clarify the concept of tax avoidance

On February 26, 2019, the Court of Justice of the European Union (CJEU) handed down its judgments in the cases of T Danmark (ECLI:EU:C:2019:135) and N Luxembourg 1[1] (ECLI:EU:C:2019:134), which once again address the concept of abuse of rights as a legal basis for CJEU case law on tax avoidance, this time regarding the use of holding companies within investment holding structures.

In the two cases joined under the T Danmark judgment, the Court looks at the interpretation of Directive 90/435/EEC (the Parent-Subsidiary Directive) in its wording at the time the facts judged took place. Certain ultimate shareholders resident outside the European Union acquired a company with tax residence in Denmark and structured the investment through a chain of entities that included a holding company resident in an EU Member State (Luxembourg and Cyprus, in the two respective cases). The question was whether the dividends the Danish entity distributed to its ultimate investors through this intermediary holding company could be tax exempt under the Parent-Subsidiary Directive, or whether they would be taxed in Denmark due to application of the anti-abuse rule set out in article 1.2 of the Directive.

In the joined cases ruled on in the N Luxembourg 1 decision, which involved certain investments that were also analyzed in the other judgment, the Court looked at application of Directive 2003/49, on a common system of taxation applicable to interest and royalty payments, to a flow of interest payments derived from a chain of loans involving an intermediary lender in a Member State that directed the flow to ultimate lenders outside of the European Union. The CJEU was asked to interpret the anti-abuse rule set out in article 5 of this Directive, in light of article 11 of the OECD Model Tax Convention.

Both judgments were issued as preliminary rulings based on requests submitted by the same Danish court. The Danish Government requested that these cases be heard by the Grand Chamber of the Court and suggested that the Court organize a joint hearing of all the cases. The Court granted the Danish Government’s requests. Undoubtedly, this procedural approach was aimed at highlighting the importance of the matters and laying the groundwork for a judgment that could qualify the CJEU’s previous case law. The Court constructed a common response in both cases, although certain additional aspects were present in the N Luxembourg 1 case. In that case, the Court alluded to the very meaning of the withholding on interest payments, which it claimed was consistent with the free movement of capital; however, the Court did not address when the expenses related to those interest payments would be deductible.

In essence, the CJEU answered the following question: When does the use of an intermediary company in a Member State other than the State of residence of the acquired entity or of the ultimate investors become an abuse of rights that warrants application of the corresponding anti-abuse rule?  The answer is structured into four different components.

Firstly, the CJEU defined the legal basis for understanding fraud or abuse to exist. To that end, the Court reiterated its prior case law on abuse of rights in taxation matters, beginning with the judgments of February 21, 2006 and September 12, 2006 in the Halifax and Cadbury Schweppes cases. The Court only specifies that the principle of prohibiting abusive practices is a general European Union law principle that can be applied even if the anti-abuse rule contained in a directive has not been explicitly transposed into national law, as it had already stated in its judgment of December 18, 2014 (Italmoda).

On the basis of this general principle, the CJEU analyzed whether, in light of its case law, such an abuse existed in the matters before it, focusing on whether an “artificial legal construction” exists (although, in the English version, it later refers to this as an “artificial arrangement” and, in the French, as “un montage artificiel” or artificial set-up). In order to know whether an artificial arrangement exists in the matters at hand, the Court looks at a series of indications, although it clearly leaves the final assessment in the hands of the national court that submitted the question for preliminary ruling. These indications center less on the substance of the intermediary company and more on the circumstances of the flow of dividend or interest payments. Accordingly, the Court considers whether the party is contractually or legally bound to pass the funds on to another party, or whether there are any other circumstances that evidence that the intermediary company, “in substance”, did not have the right to use and enjoy the funds it receives.

On this point, the CJEU, in accordance with the thinking of the referring court, examines, in both judgments, what would happen if the beneficial owner of the income were a resident of a third country with which the State of the first subsidiary had concluded a double tax convention that would also exempt the dividends or interest payments from withholding tax. The Court does not give a clear answer to this, overlooking the fact that the Parent-Subsidiary Directive does not address the concept of beneficial owner. Nevertheless, the Court notes that if the beneficial owner resides in a third country, the exemption can be refused, although if the result were the same in any case, the arrangement should not be thought of as having been created with a fraudulent or abusive aim. In the N Luxembourg 1 case, the Court looks more deeply at the idea of beneficial owner, which is in fact addressed in Directive 2003/49. For the Court, this EU law concept has the same meaning as given in article 11 of the OECD Model Tax Convention, although it adds that the exemption would still apply if the ultimate owner were resident in another Member State and met the requirements set out in the Directive.

Lastly, the CJEU addresses the question of the burden of proving the abuse of rights, stating that the tax authority has the task of establishing the existence of elements constituting abuse or fraud but not the task of identifying the beneficial owners of the dividends or interest payments for which the exemption is being refused.

These judgments are perhaps even more important because of what the Court does not expressly state. Indeed, the Court says nothing of the precedent set by its September 7, 2017 judgment in the Eqiom case (C-6/16) or of the Advocate General’s opinion in all these matters, which the Court shared in Eqiom but now distances itself from. In fact, when summarizing its case law, the Court does not even mention the Eqiom judgment, even though, in it, the Court insisted on avoiding general assumptions of fraud, on the fact that it is not feasible to understand the presence of ultimate shareholders resident outside the European Union to be an indication of fraud, and on the fact that placing an intermediary holding company in an EU Member State is consistent with the freedom of establishment. For these reasons, in the Opinion delivered on March 1, 2018, the Advocate General had insisted on a “substance-over-form” approach for the holding entity as the only way to determine whether an artificial arrangement exists, given that in any other case the parent company would be the beneficial owner of the income (unless the structure was devised primarily for tax reasons or was tarnished by the location of the ultimate parents in certain territories).

Now, the CJEU apparently prefers to take a more cautious approach and focuses on the financial flow of the subject payments, without referring to the question of economic substance of the intermediary company located in the European Union. The Court seems to be moving closer to the concept of beneficial owner of the income within the meaning given in articles 10 and 11 of the OECD Model Tax Convention. What the CJEU possibly leaves unsaid is up to what point it will question structures such as those described, with the qualifications it has introduced into its case law.

[1] In joined cases C-116/16 and C-117/16, and in joined cases C-115/16, C-118/16, C-119/16 and C-299/16.

Working hours: limits and timekeeping

Historically, the main concern about work hours has been to cap the number of hours employees can work in a given period. On the back of past struggles to achieve an eight-hour workday, as a general rule, we now have legal limits on daily working hours. Barring certain notable exceptions (Japan), the majority of western countries – and international laws – respect a maximum workday of eight hours, although each country has its own ways of counting weekly, monthly and yearly time worked. Almost all jurisdictions have passed laws to safeguard the eight-hour day, to adapt it (and reduce it) through collective labor agreements, and to limit the amount of overtime people can be asked to work. One of the central issues these countries have dealt with in doing so is how to monitor the time effectively worked, both to ensure compliance with rules on the right to rest and to punish any violations of the law.

In recent years, we have seen the topic broaden to encompass a number of new issues: in particular, the question of the right to a balance between personal/family life and work life, which relates more to how we manage our work time than to how long we work (and encompasses efforts to bring about greater gender equality in the workplace), and issues deriving from new technologies and the new ways of organizing and conducting work they make possible. This latter aspect notably includes attempts to limit “off-the-clock work” by affording employees a new right, the right to disconnect from after-work digital communications. However, things are still rather nebulous and abstract in this arena, perhaps because the problem is a new one and because it is hard to design regulatory solutions to fix it. A good example of this is the recent Law on Data Protection and the Safeguard of Digital Rights (Organic Law 3/2018, of December 5), which enshrines the right to digital disconnection from the workplace but leaves it to companies to regulate the right through internal policies, subject to the collective bargaining agreements they reach with workers.

In both these issues, although certainly more in the first, it is critical to identify a system for monitoring the workday actually worked. And therein lies the question – a salient one today following publication of Royal Decree-Law 8/2019, of March 8, 2019 – of keeping track of hours worked. Under the Workers’ Statute, work hours need only be logged for part-time employees and in order to calculate overtime. Although the National Appellate Court considered that, despite lawmakers’ silence on the issue, daily records of hours worked were necessary for monitoring compliance with the established limits, the Supreme Court held that there was no valid argument for straying from the letter of the law and requiring timekeeping in cases other than those expressly envisaged therein (part-time work and overtime). Against this backdrop (in which the Court of Justice of the European Union will soon take a stance, since the National Appellate Court submitted the question for a preliminary ruling), Royal Decree-Law 8/2019, reflecting input from labor unions, established that “companies must ensure that daily records are kept of the time employees work, including the exact start and end times”. This is a sea change, because it legally enshrines companies’ obligation to guarantee that work hours are logged on a daily basis and that start and end times are noted.

Will this solve the difficulties in monitoring compliance with laws on working hours? Probably not, and, more to the point, it will most likely create new problems. The terms and objectives of the new regulation are clearly set out, but the regulation itself is simplistic and therefore hard to reconcile with the complex and diverse features of today’s production system (which is no longer predominantly industrial), the many ways work can be organized and the growing influence information technologies have in that regard. Not to mention that even though timekeeping is already mandatory for part-time workers, this has not eradicated the fraud that abounds in these records.

Moreover, the law includes several caveats to the timekeeping requirement that strip it of the sought-after simplicity and give rise to a bounty of tricky interpretation problems.

Firstly, the timekeeping requirement is guaranteed “without prejudice to any flexibility established” in the law. What does that mean? To what extent would spreading out a workday in an unusual way, for example, affect the obligation to clock in and out each day? And how would the timekeeping requirement be affected by an agreement to distribute work time in some way other than the nine-hour per-day maximum? Or by the right to adapt work hours and to spread out the workday to achieve a work/life balance? If, as lawmakers intend, the obligation to record work hours each day cannot prejudice the flexibility that the very law envisages, it would be necessary to specify in what terms and with what scope this flexibility can warrant a timekeeping system different from that established as a general rule.

We also have to take into account that the Decree-Law envisages “special rules on mandatory timekeeping in those sectors, jobs and professional categories whose particular features so warrant”. Can the sectors and jobs with unique working hours and timekeeping aspects, particularly those envisaged in the Decree on special working hours, wait for the government to issue these special rules? The Decree-Law’s reference to “professional categories” is even trickier, as there is absolutely no clarification of what these categories are. Are professional categories with unique working hours and timekeeping features expected to wait for the secondary legislation? We are faced with a quagmire of doubts and uncertainties, with the resulting erosion of legal certainty. Perhaps it would have been advisable to establish that, for the sectors, jobs and professional categories with special features, the new law would not enter into force until the special dispensations were duly regulated.

Thirdly, daily timekeeping must be organized and documented pursuant to collective bargaining or corporate agreements, or, failing that, at the employer’s discretion after consulting with workers’ representatives. Here again we find a veritable minefield of interpretative doubts. Does timekeeping have to be organized and documented pursuant to a company-specific labor agreement, or will a sector labor agreement suffice? The most reasonable answer would be that a company-specific labor agreement should be used, as it would reflect the business’s actual characteristics and features. Then, there is the question of whether collective bargaining can, or should, be done for each workplace, since the circumstances surrounding timekeeping could vary from one location to another. Moreover, if there is no official workers’ representation at a company, is the employer still required to decide how to organize and document the timekeeping?

Fourthly, can there even be a timekeeping obligation if there is no agreement or decision on how to organize and document the records? Is the entry into force of the timekeeping obligation conditional on having such an agreement or decision? Is the two-month deadline for implementing a timekeeping system the period granted for collective bargaining to reach an agreement, whereby if no agreement is reached by that deadline, it then falls to the employer to decide how to organize and document the timekeeping? Furthermore, in the collective bargaining process, can the parties mutually agree to extend the two-month period envisaged in the final provision, so as to postpone application of the timekeeping obligation until the negotiation is finished?

Lastly, “infringement of the rules” in respect of timekeeping is regulated as a violation of labor regulations. Looking at the interpretative landscape summarized above, it does not seem that the new regulations respect the principle that punishable infractions must be specifically defined. What is the actual infraction? Letting the two-month vacatio legis elapse without agreeing on or deciding how to organize the timekeeping? Refusing to negotiate on how to organize the timekeeping? Do the public authorities have oversight on the content of the agreement on organizing and documenting timekeeping? Can the public authorities oppose an agreement that adapts timekeeping needs to flexible working hours? It seems that the only clear infraction would be that of failing to comply with the system for recording working hours, once that system (in terms of organization and documentation) has been determined through collective bargaining or an employer’s decision. It remains to be seen how much oversight the administration will want to take on in this matter, which by all indications cannot be easily reduced to simplistic systems.

One final point to ponder. Is there really an extraordinary and urgent need for this regulation? Can we really understand an extraordinary and urgent need to exist if there is a two-month vacatio legis and if implementation of the regulation is made at least partly conditional on secondary regulations and collective bargaining?

Document integrity in the digital environment: from depositing the media to the hash function: Exploring the Digital Frontier III

In the previous post in this series, I tried to explain how digitization is a unique way of converting information to a binary digital code (a “linguistic” question of how data is coded) so that management of that information can be mechanized electronically (a technological question).

Going one step further, I would now like to introduce the idea that, as jurists, we have a very special relationship with information or, what amounts to the same thing, with language. For us, language is not just a way of communicating or transmitting information about the world or of expressing and sharing our feelings or emotions. Rather, with language, we do things: we enter into contracts and commitments, we waive rights, we organize what is to be done with our assets when we die, we get married, we enact laws, issue judgments and impose penalties, etc. All these specific actions form part of what language philosophers call performative language: actions that are only performed by uttering or writing certain words and that have an effect in that particular sphere of reality that is the sphere of legal validity or effectiveness.

Precisely because of this, for us, “documentation” – establishing the certainty of who said exactly what and when, so that, if we forgot it, it can be recalled and evidenced at any time – is not a trivial matter. Indeed, it lies at the very core of our field of activity and concerns.

Also because of this, documentation has become a matter of great legal importance now that it must be carried out in the digital, paperless environment to which we are inexorably moving.

A document system must meet three basic requirements: integrity, authorship and time stamp (as I said above: what was said, by whom and when).

In the world of paper, integrity is guaranteed through the inextricable physical link between the ink used to draw certain alphanumeric characters and the fibers with which a specific sheet of paper is made. Because of this, a document, in the classical sense of the word, is always an object belonging to the material world. It is identified as that individual specimen that comprises certain specific written pages. It is something we can destroy, but is not something that can be easily adulterated, or at least not in a way that is hard to detect (inserting, correcting or eliminating a word or figure to change the original written meaning).

As far as authorship is concerned, that is, the possibility of attributing responsibility for what has been said to a specific person or persons, the quintessential tool used in the paper universe has always been a written signature. It is a sign that is recognizably or verifiably linked to a certain person, who must physically intervene in creating the signature. Its inclusion at the end of a written text carries the legal meaning of voluntary assent and assumption of ownership of the statements contained in the text (something which in itself can be independent from both the intellectual and material ownership of the document).

As for the time stamp, the document itself and its signature can reveal how old it is. However, recording of the precise moment that a legally relevant document was created and signed has usually been entrusted to a reputable official or public authority (a notary or a public registrar).

How do we satisfy these same needs for certainty of a document—which are vital from a legal standpoint—once we abandon paper and replace it with something as evanescent as an electronic computer file? How do we identify the exact contents of a specific file? How do we attribute authorship to a specific person? How do we evidence its date beyond all shadow of a doubt?

In this post, I will look at some answers to the first of these questions, the problem of integrity.

I must start by saying that there is no one answer or solution to the question, but rather many possible ones, some of which are more rudimentary than others.

The first solution (the least sophisticated) is to identify the file (that is, the fragment of digitalized information) by the storage media. The file whose content we wish to be certain about is copied onto a specific physical storage media, and that media is placed in the hands of a trustworthy agent: par excellence, a notary. For several years now, notaries have been taking deposit of floppy disks, CD-ROMs, DVDs, pen drives and even hard drives and entire computers, as a way to be able to subsequently attest to the content of a specific file or files recorded on that media.

The procedure is very rudimentary because, ultimately, it is based on safeguarding a particular storage media, on placing it with a trusted third party, and on a chain of custody that is broken the very moment the media is returned by the notary with whom it was deposited or is handed off to any other person. The only exception is if that same notary, before doing so, accesses the content of the media and prints and witnesses a paper copy of the corresponding content (which means, in short, having to return once again to the world of paper and to the authentication tools that inhabit it).

A second solution entails establishing a reference to information recorded on a trustworthy website. This system is used to verify the authenticity of certificates, permits, licenses and other types of electronic documents issued by certain public authorities and agencies, such as certifications of reserved company names issued electronically by the Spanish Central Commercial Registry. The text of the file includes a secure verification code, which is an alphanumeric code identifying the document and making it searchable in the online repository of the issuing authority or entity.

This concept is referred to in article 18.1.b of Law 11/2007 on Electronic Access by Citizens to Public Services, as follows: “Secure verification code linked to the public administration, body or entity and, as the case may be, to the person signing the document, allowing in all cases the integrity of the document to be verified through access to the corresponding online site”. Article 30.5 of the same law states that “Printed copies of public administrative documents issued through electronic means and signed electronically will be considered authentic copies provided they bear an electronically-generated printed code or the mark of other verification systems through which authenticity can be verified by accessing the electronic files of the issuing public administration, body or entity”.

In a system such as this, the integrity of the document is verified by contrasting the electronic (or printed) copy presented as authentic against the file accessible on the corresponding website; the system therefore depends entirely on the reliability of the website and its online access system. If the copy stored on that official website and used for the verification disappears, there is no way of knowing whether or not any other purported electronic or paper copy of the same file is authentic and complete.

These secure verification codes are randomly generated alphanumeric strings that have nothing to do with the content of the document itself (in contrast to what we will see next, for the hash function). Rather, they are a way of safeguarding the confidential nature of the document repository that must necessarily be accessible to the public on the corresponding website. In order to access a particular document on the website, one has to enter a given piece of “metadata”, namely that particular secure verification code, which only a person who is already looking at a purported copy of the corresponding file would know (because it is transcribed thereon).
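The key point is that the code is an unguessable lookup key, not a summary of the document. A minimal Python sketch of how such a content-independent code could be generated; the code length, alphabet and function name are illustrative assumptions of this sketch, not any authority’s actual specification:

```python
import secrets
import string

# Alphabet of characters used for the code (an assumption for this sketch).
ALPHABET = string.ascii_uppercase + string.digits

def make_verification_code(length: int = 16) -> str:
    """Generate a random secure verification code.

    The code says nothing about the document's content; it is simply an
    unguessable key under which the issuing body files the document, so
    that only someone already holding a copy (where the code is printed)
    can look it up in the online repository.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The issuing body would store the document in its repository under this key and transcribe the code on every copy it issues.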

The third solution devised to ensure the integrity of computerized files is much more technical: the hash function.

The term “hash” offers a natural analogy with its non-technical meaning (to “chop” or “scramble” something). A hash function is a mathematical algorithm that, applied to a file or any other digital item, yields a fixed-length string of alphanumeric characters (40 of them, in the case of SHA-1). In reality, this string is a number, usually expressed in hexadecimal rather than decimal notation (that is, using 16 digits, namely the numbers 0 through 9 and the first six letters of the Latin alphabet, a through f). An example of a hash value (SHA-1 formula) is as follows: 8b9248a4e0b64bbccf82e7723a3734279bf9bbc4.

The hash function has the amazing property that whenever it is applied to the same file, the resulting hash value will always be the same, yet by changing a single bit of the file, the resulting hash value will be completely different. Moreover, the likelihood that two different files would yield the same hash (called a collision) is very remote. Bear in mind that while there are infinite possible inputs in the algorithm (any string of digits, however long), there is a finite number of possible outputs: since the hash is fixed in length, there are many possible combinations of 0s and 1s, but not an endless amount. Accordingly, by definition, two different inputs could potentially yield the same hash.
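These two properties, determinism and extreme sensitivity to the slightest change, are easy to observe in practice. A minimal Python sketch using the standard hashlib module; the file contents are of course invented for the example:

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Return the SHA-1 hash of the input as a 40-character hexadecimal string."""
    return hashlib.sha1(data).hexdigest()

original = b"This contract is worth 1,000 euros."
altered = b"This contract is worth 9,000 euros."  # a single character changed

# The same input always produces the same hash value...
assert sha1_hex(original) == sha1_hex(original)

# ...while changing one character yields a completely different value.
assert sha1_hex(original) != sha1_hex(altered)

print(sha1_hex(original))
print(sha1_hex(altered))
```

Running the sketch prints two 40-character hexadecimal strings that bear no visible resemblance to each other, even though the two inputs differ by a single character.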

However, another extremely important property of the hash algorithm is that it only goes in one direction; that is, it cannot be reversed. In other words, you cannot use a hash value to reconstruct the original file.

This property has two very significant consequences.

Firstly, a hash value on its own does not mean or symbolize anything; it does not have semantics, and it does not transmit or store any information because, as I already said, you cannot reconstruct the original from a hash value. The hash value can only be used to ensure that a specific file has not been altered. To be clear, the hash value does not prevent a file from being altered, but it does allow us to detect such alteration; therefore, it can be used to evidence that no changes have been made.

If at a given time the hash value of a certain file is generated and recorded in a reliable, trustworthy manner, we can determine whether any purported new image or copy of the same file presented at a later date corresponds exactly to the original file. To do so, we simply need to generate the hash value of the new file being presented: if the new hash is the same as the hash obtained previously from the original file, then the file has not been altered and has the exact same content.

The hash function cannot be used to save and store information (if the original file is lost or destroyed, having its hash value does not help us at all) or to evidence where information came from (the hash function is anonymous, anyone can apply it to a file). But it does ensure the integrity of the file and evidence that its content has not been altered, provided, of course, that we are certain we have the hash value that corresponded to the original file. Therefore, for the purely technological guarantee the hash function provides to be truly effective, someone must certify which hash value was initially obtained from a given file. Without this legally reliable certification, any comparison of hash values could be very secure from a mathematical standpoint, but pointless from a legal perspective.
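The verification procedure just described (hash the original once, record the value reliably, and later hash any purported copy and compare the two values) can be sketched in Python; the choice of SHA-256 and the function names are assumptions of this sketch:

```python
import hashlib

def file_hash(path: str) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks so that
    even very large files can be processed without loading them whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: str, recorded_hash: str) -> bool:
    """Check whether a purported copy matches the hash recorded for the
    original. A match shows the content is identical; a mismatch shows
    the copy has been altered (or is a different file altogether)."""
    return file_hash(path) == recorded_hash
```

Note that the comparison is only as good as the certification of the recorded hash: if the reference value itself cannot be trusted, the mathematical check proves nothing from a legal perspective.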

The second consequence of the one-way nature, or computational asymmetry, of the hash function is the security it offers against deliberate attempts to create a hash collision. Given a particular file, a computer can obtain its hash value in an instant. However, there is no formula or algorithm for reconstructing the original file from the hash value. This can only be done through “computational brute force”, that is, trying, one by one, the endless combinations of bits that, as an input, would generate a given hash. This feature is absolutely essential for the security of this tool, because it is what precludes or extraordinarily hinders intentional generation of a hash collision. If somebody could deliberately generate a file with the exact same hash as a different file (but that is sufficiently similar so as to allow one file to be mistaken for the other), the uniqueness of the hash metadata would be jeopardized and, with it, all the security the hash function tool has to offer. However (and this is the most important thing, and where the computational asymmetry comes into play), it is one thing for a collision to be theoretically possible, and quite another to be able to intentionally create a collision for a given file, which is what would allow someone to maliciously manipulate the information stored or certified using the hashing tool. The computational difficulty of a maneuver such as this would be astronomical, and the computational time complexity would not be polynomial but rather exponential.


The OECD’s position on taxation of the digital economy

The OECD recently published a policy note followed by a public consultation document addressing the tax challenges of the digitalization of the economy. Through the public consultation, the OECD seeks comments from different economic and social sectors affected by or interested in how the process is being developed under the Inclusive Framework on BEPS. Perhaps, some time in the future, when looking at how international taxation has evolved post-BEPS, the enormous importance of this OECD document and the step forward it represents will be clearly seen in retrospect. For now, the OECD’s position coincides with criteria such as those recently put forth by Joseph Stiglitz in calling for a debate on the very suitability of the current transfer pricing system.[i]

Although the OECD has yet to adopt a categorical position, the importance of the document lies in the breadth of the alternatives the organization is working with. In the document, the OECD maintains that digitalization of the economy requires fundamental changes in the rules governing international taxation, in particular the definition of permanent establishment and the related profit allocation rules. Moreover, the organization states that this transformation may also require a radical rethinking of the transfer pricing system, passing over traditional systems in favor of profit distribution systems (at least for residual income), with the possibility of using mechanisms to distribute global income or the residual income of a multinational group. The OECD could be seen as straying from the goal of finding a technically reliable solution and instead seeking a consensus among the major member states. In some cases it appears to merely take note of the reforms enacted in these countries, putting forward the United Kingdom’s proposals or those contained in the 2017 US tax reform as global solutions.

The OECD’s document examines proposals involving two pillars. The first pillar addresses the consequences of the digitalization of the economy, assuming that these impacts require a substantial modification of both the nexus and the profit-allocation rules for permanent establishments. The second pillar, while not specific to the digital economy, is no less important. It is based on the idea of ensuring that the global income of multinational groups is taxed at a minimum rate, following in the wake of the recent United States tax reform.

Under the first pillar, the OECD highlights three possible solutions for addressing the tax challenges of the digitalization of the economy. The organization assumes that any of them will entail a significant change in the rules that have governed international taxation for the past century. In all cases, these proposals entail changes in the understanding of “permanent establishment” and the related profit allocation rules, envisaging that a nexus can exist at source even if there is no physical presence whatsoever. In summary, these proposed solutions are as follows:

  1. User participation proposal: This first proposal is inspired by European solutions and focuses on attributing profits based on the value created by certain highly digitalized businesses through developing an active and engaged user base. The uniqueness of this proposal lies in its limited scope, given that it attempts to limit the change to social media platforms, search engines and online marketplaces. Profit derived from user participation in these specific business models would be determined through a residual profit split approach, which could involve objective formulas.
  2. Marketing intangibles proposal: Undoubtedly, this option could bring more extensive consequences than the others. In short, the proposal, which the US could support, is to shift taxation to the applicable market or destination jurisdiction, not only for digital business models but also in many other cases in which marketing intangibles, such as the business’s brands, are considered to have been created, to some extent, by the very market in which they are used. This would apply even (or particularly) where limited-risk distributors exist in the market jurisdiction and contribute, to some extent, to creating certain intangibles. Again, the non-routine or residual profit would be allocated to the different market jurisdictions and would then be split among them according to a predetermined formula. Naturally, this adjustment would not extend to income attributable to technology-related intangibles, which would continue to correspond to the jurisdiction where the technology was developed.
  3. Significant economic presence: Lastly, the OECD document introduces the third proposal, which would entail amending article 5 of the OECD Model Tax Convention, whereby a taxable presence in a jurisdiction would arise when a non-resident enterprise has a significant economic presence. Although the OECD does not provide a full explanation of the proposal in the document, it does recognize that in accordance with BEPS Action 1, the proposal could contemplate the possible imposition of a withholding tax as a collection mechanism.

The OECD recognizes the difficulties inherent in any of these proposals and their effects on what is already somewhat of a transfer pricing minefield. However, in addition to relying on the corresponding technical solutions, the OECD continues to trust that an effective alternative solution can be found to the conflicts that arise.

As mentioned above, the second part of the OECD document goes beyond the digital economy, given that it reflects proposals to avoid tax evasion by multinational groups, envisaging minimum taxation that, as the OECD itself acknowledges, is in line with the solutions already put into practice in the United States. Two formulas are proposed to avoid this base erosion. Firstly, a new type of international tax transparency regarding income of branches or subsidiaries that is not sufficiently and effectively taxed in the source state; this proposal is in line with the Global Intangible Low-Taxed Income (GILTI) system developed under the US tax reform. Secondly, denying tax credits or deductions envisaged in tax treaties where certain payments can be considered to erode the domestic tax base – in line, again, with the base erosion and anti-abuse tax (BEAT) – when such payments give rise to income that is not effectively and sufficiently taxed in another jurisdiction. Here, too, the OECD understands that such new rules would require amendment of the Model Tax Convention.

In conclusion, the OECD’s public consultation document is extremely important, as it broadens the scope of the discussion on the future of international taxation. We should therefore keep a close eye on how these proposals fare and the debates surrounding all of them.

[i] Stiglitz, J., “How Can We Tax Footloose Multinationals?”, Project Syndicate, February 13, 2019.

There’s no such thing as public money

Over a decade ago, a top-level politician allegedly made the now-infamous statement that “nobody owns public money”. Although we do not know whether the politician actually said this and in what context, or if it is an apocryphal story, the phrase is often dredged up to criticize the underlying belief (that public authorities can do whatever they want with public money). That notion is usually countered with the idea that public money belongs to all of us and that public authorities should be accountable to everyone for how it is used, with the requisite transparency and diligence. From my experience in public administration, I recall that, at times, when I had misgivings about authorizing a particular expense, my counterparts would make two arguments. First, that the expense in question was within the law. To this, I would reply that I was not questioning the legality of the expense (otherwise, the whole point would have been moot), but rather whether it was a good idea to make it at that time. That is, besides just being allowed by law, an expense must be justified or appropriate. This would bring them to the second argument: do you think it’s your money? Or, more to the point: it is not your money. My reply came easily: of course it is my money, it is all our money, which is why we cannot be haphazard in our spending decisions and, moreover, why our very power to make spending choices is severely limited.

Now, I would like to take this one step further. Understanding public money to be res nullius is nowadays pretty laughable, and almost everyone is on board with the thinking that public funds belong to all of us and should therefore be used carefully and under strict control. But, beyond that, I believe we should question the very concept of “public money”.

I’ll go ahead and put it out there: I don’t believe there is such a thing as public money. What there is, however, is public management of money, which is quite a different thing. All money is private, and it comes from the individuals and companies that are out there creating wealth. When recessions hit, you can still hear people asking “Why don’t they just print more money?” A French labor organizer made headlines a few years ago by openly arguing that to put an end to the financial crisis, we needed to do just that. But, as we have witnessed many times, the cogs of our economic and tax structures somehow keep turning and, in one way or another, these structures remain afloat. Lawmakers do not have a “money-making machine”; rather, our society creates money by carrying out wealth-building business activities. Part of the wealth or money created by a society is taken from individuals and companies (through our tax mechanisms) and put in the hands of public authorities, who are entrusted with managing it so that the public receives all the services it needs to live together in society (which entails, to a greater or lesser degree, some redistribution of wealth).

Public management of money, therefore, is not the same as public money. The distinction is not trivial, and the practical consequences of looking at it in one way or the other can be major. For example, think about social security. In essence (and forgive the simplification), social security is when our governments establish and provide a set of benefits to cover citizens in their times of need. In particular, citizens’ loss of income, whether temporary (unemployment, temporary disability) or definitive (retirement), is covered through a public system that replaces lost income with other benefits. These benefits, and the system, are funded through contributions from employers and workers (sometimes, only by employers, such as in the case of occupational contingencies and workplace accidents and illness). These mandatory contributions are taken through the corresponding mechanisms and placed in the hands of the institutions managing the system. We often hear talk (even from those very institutions and the Court of Auditors) about public funds or social security funds, but they forget that these funds come from companies and workers and are only entrusted to social security in order to cover the public needs identified. The nuance is important, as we will see.

Think about companies that are allowed to opt out of the traditional social security system and instead self-insure. Under the law, for some contingencies, such as occupational ones, companies that meet certain criteria can choose to self-insure, directly covering benefits for their employees. In exchange, they do not have to contribute those amounts to the social security system. The logic behind this arrangement is that the companies paying the benefits themselves (even directly, out of their own funds, if the premiums withheld from employees fall short) and being released from making the corresponding social security contributions (although not entirely – they still pay 31% to the social security system to “support public services” and to “help defray overheads and other public welfare needs”) can enjoy the fruits of efficiently managing the funds. If the self-insuring company is inefficient or if, for any reason, outgoing benefits exceed the premiums withheld, the company must use its own funds to defray the cost of the benefits (which must always be guaranteed). If the company (which, as noted above, already gave 31% to social security to sustain public services and overheads) manages these funds efficiently and, after covering all benefits (and funding the required stabilization reserve), ends up saving money, it should be able to use those savings for itself. If you think that the money managed by these self-insured companies is actually social security’s money, and that the government just let these companies use it to help manage occupational contingencies, you would argue that any surplus generated belongs to social security. 
However, if you believe that the funds never actually “belonged” to the social security system, but were just entrusted to it through company contributions in order to cover public needs, then it would make sense to you that if companies retain part of the funds they usually contribute and assume responsibility for paying out the related benefits (remember, the benefits are guaranteed even if the premiums retained were not sufficient), then those companies should get to keep any surplus they generate by efficiently managing the funds (bearing in mind that the companies already paid social security 31% of contributions corresponding to the benefits they manage, and that they are required to fund stabilization reserves). Sooner or later, this will be the question under debate. Your stance will depend, ultimately, on whether you believe that the money is public or whether you think that the government simply manages our money (which is always private, never public).

Legal digitalization

On electrons, bits and code: exploring the digital frontier II

When we attorneys move about the digital world with a specifically law-based mindset (that is, not as mere tech users), the first thing we run up against is the very conceptual framework of the digital world or, better yet, its terminology. We start throwing around certain words – computerized, digital, electronic, telematic, file – whose exact meaning we have yet to nail down.


Accordingly, when attempting to give an “electronic document” or “electronic contract” an equivalent or at least analogous legal meaning to what jurists have always understood, in layman’s terms, as “document” and as “contract” or, even more so, when thinking of “digital assets” that can be exclusively owned, such as cryptocurrencies or tokens, we have to first get clear about certain concepts.

The first thing we need to clarify is that we are talking about “information” or “messages”, both of which have two distinguishable elements: the information in and of itself, as the idea-based or cognitive content or meaning (something we think), and the signals on a material or imprintable media through which we represent this meaning so that it can be conveyed from one human mind to another (or from one system or device to another system or device that is capable of perception) and, as the case may be, stored outside of a human memory.

  1. From electronic to digital

To put it simply, computer science is a technology we have developed to manage information (organize it, store it and transmit it) in a mechanized or automated way. This is possible thanks to some very impressive machines that, firstly, work with electricity (that is, they are electric devices) and, secondly, operate by directing tiny currents of electrons through complex circuits (what we understand as “electronics”). In short, in this information technology, this flow of microscopic electrons or other electrically charged particles is used to process specific “signals”.

Although the concept of “digital” is related to the concept of electronics, it is actually quite different. Digital does not refer to a given technology or to certain types of devices, but rather to how the signals and information are encoded and stored.

The term “digital” derives from the word “digit”, from the Latin digitus, “finger or toe”. However, what it really means here is “numeric”, because human beings began counting on our fingers (indeed, our numeral system is a decimal, or base 10, system because that is how many fingers we have to count on). Now, digitizing or, to invent a word, “numberizing”, is nothing more than coding or representing any piece of information by using digits, i.e., the ten numerals represented through the figures 0 to 9 – and, more precisely, with only two of those figures, 0 and 1 (a “bit” is a space that can be occupied by either a 0 or a 1), in what is known as binary code. Binary code was used as far back as the ancient Chinese divination text I Ching or, relatively more recently, in certain late 17th century works by the German philosopher Leibniz. And although good old Leibniz had conceived, way back then, of a machine that could store and handle data codified in binary digital code, the coding and binary arithmetic exercises he carried out in his day required no more technical instruments than a clean sheet of paper, a quill pen and a bit of ink. Because, as I have said, digitalization, in the strictest sense, is not a technological operation but simply one of applying a code.

  2. From digits to code

Accordingly, “digitizing” a text, just like I am doing as I type these words, means converting each letter of the alphabet, punctuation sign and blank space into a specific binary number according to a pre-determined code (something like converting letters to a specific sequence of dashes and dots in Morse code).

In particular, using the American Standard Code for Information Interchange (ASCII), which is the code our computers most often work with, each uppercase and lowercase letter, each figure from 0 to 9, and certain punctuation signs and control characters are represented through a given eight-bit chain, known as a byte. For example, the letter “a” is replaced by or represented with the binary sequence 01100001.

It is important to realize that digital coding of this type is 100% a matter of agreed convention. We could, for example, have agreed that the same sequence of 0s and 1s would represent the letter “b”. In any case, the takeaway from all this is that to use a string of 0s and 1s as an instrument to store and transmit information, we will always need not only something that can “record” these two symbols or signs, but also a “code” whereby each combination of 0s and 1s is assigned a specific meaning or a correspondence with a symbol from another language, such as a letter in the Roman alphabet.
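The purely conventional character of this coding can be made tangible with a short Python sketch (the function names are hypothetical, chosen only for this illustration): each character is mapped to its eight-bit ASCII code, and the same agreed table is what allows the string of 0s and 1s to be turned back into text.

```python
def to_bits(text: str) -> str:
    """Encode each character as its 8-bit ASCII binary code."""
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits: str) -> str:
    """Decode a space-separated string of 8-bit codes back into text."""
    return "".join(chr(int(byte, 2)) for byte in bits.split())

encoded = to_bits("abc")
print(encoded)             # 01100001 01100010 01100011
print(from_bits(encoded))  # abc
```

Nothing in the bits themselves says that 01100001 “is” the letter “a”; only the shared code – the agreed convention – gives the sequence that meaning, which is precisely why decoding is impossible without knowing which code was used.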

  3. Where electronics meet digital technology

So what do electronics have to do with digital technology? The answer is very simple: any switch of a circuit through which an electric current passes can be in one of two positions: open or closed, on or off. If we assign the value 0 to one of these two possible positions and the value 1 to the other (which is yet another agreed convention of coding at a most basic level), we can “record” in an electronic circuit any information that we have previously “digitalized”, i.e., converted into a specific sequence of 0s and 1s.

While some of the electronic systems and devices that existed before we had modern-day electronic digital coding have recently been revamped with digital technology (audio players, radio, television, telephones), computer science and computers, that is, the devices humans invented to process information on an automated basis, have always been tied to digitalization of information.

What distinguishes digital electronics from analog electronics is that the former encodes a piece of information using only two positions (0 or 1) through two clearly differentiated or discrete levels of electric voltage, while the latter encodes an infinite number of information positions through continuous or gradual voltage changes. Precisely for this reason, the digital transmission and reproduction of a signal or piece of information (which has been previously simplified or schematized) is much cleaner and more precise, and it is not encumbered by the distortion or deterioration of signals that can occur with analog transmission and reproduction.

  4. Analogy with the legal realm

This technical difference in terms of the fidelity or accuracy of a representation or reproduction of a piece of information can be extremely relevant from a legal standpoint. It is the ultimate foundation of the identity of any piece or unit of information we handle in this particular medium, and also of the concept of “document” we have in this computerized world, which is much more abstract and nebulous than our traditional, more tangible concept of document. In the realm of information recorded on paper, an information unit (what we understand as a “document”, such as a bill of exchange or a deed of sale and purchase of a building) is identified based on the individual material nature of a specific piece of paper on which something has been written in ink. In the world of digitalized information, identity is purely a matter of coding: a piece of information is a specific chain of 0s and 1s, regardless of the material or physical media on which it is recorded.

We will need to come back to this later. For now, we just need to understand that, if natural human language always entails a first layer of coding (assigning certain meanings to certain sounds or combinations of sounds produced by using the human voice) and even a second layer of coding when written down (the relationship between certain sounds and certain shapes/letters), then the digital electronic technology used in computer science means adding an even more intense layer of intermediation.

For one thing, we have to depend on machines. Anyone who can see and who knows how to read (and knows the natural language written) can be privy to the thoughts and ideas represented when we write natural language on a piece of paper, on the pages of a book, or in a traditional document. At most, if it is dark out, they might need a lamp or a candle. This is why we usually say that a feature of a document is that it can be directly understood.

However, in order to recover information recorded on a pen drive or on a CD, we have no choice but to use a machine. Without a highly sophisticated engineered device able to “read” a CD and to transform the signals it contains into sound, images or alphanumeric characters on a screen that we can look at, the CD is nothing more than a piece of plastic with some faint grooves etched on it.

Breaking it down further, physical storage media for digitalized or computerized information generally comes in one of two types: magnetic or optical. On magnetic storage media (hard drives, digital tapes), information is stored by applying an electromagnet to a surface coated with electromagnetically sensitive particles (iron oxide). This gives each particle a magnetic charge; the direction of that charge, which it retains, determines whether it represents a 1 or a 0 (i.e., represents the data codified as bits). Information stored on magnetic media is read by again applying an electromagnet to detect the magnetization patterns.

In contrast, optical storage media (CDs, DVDs, etc.) do not use magnets, but rather lasers to “burn” data patterns onto the disk and to later read the reflections. In particular, microscopic grooves are burned into the flat surface of a disk, which is usually coated with aluminum. The recorded disk is read by again passing a laser over the surface. The surface grooves (i.e., how light is reflected or not), make the laser beam respond in different ways, thereby determining whether it’s a 1 or a 0 and “reading” the information stored on the disk.

  5. Nothing without its interpretation

To read from storage media, we need not only a machine that can perceive these optical or magnetic signals that in turn represent 0s or 1s, but one that can reconvert what has been codified as a sequence of binary digits back into sound, images or text in a natural language written, for example, with Roman alphabet characters displayed on a screen. Without software that can perform this conversion, digital information is impenetrable and meaningless.

Moreover, both machines and the software we run on them become obsolete at a dizzying pace. Therefore, it is not enough to have a machine and a program that can read a piece of information recorded on digital media at the time it is recorded; we also need a machine and software able to read that same type of media later on, when we want to recover the information stored. Most of us, for example, don’t have a tape recorder on hand to play those old audio cassette tapes we might still have stuffed in a drawer somewhere.

Lastly, and no less importantly, this machine on which I am typing, which is able to do all this so perfectly, only works if it is plugged into a power supply. If the electricity goes out, it is completely useless. A power outage would render all the information stored using this type of code and media completely inaccessible.

Lest this happen, I will be sure to hit “save”, so that my words move from the computer’s volatile memory onto the hard drive, and I will attach this word processing file to an email I am sending to the editor of the blog. In the next blog, we will take it one step further and look at the question – highly relevant from a legal standpoint – of how to ensure authenticity and completeness in this peculiar world of digitalized information.

Rethinking the inspection procedure in the age of digitalization

The rules governing the so-called “inspection procedures and proceedings”, which are set out in the Spanish General Taxation Law (GTL), continue to reflect the legal and administrative traditions of the Tax Inspectorate, with few changes having been made to the provisions by which such processes have long been governed. Tax management procedures have undergone a profound evolution, and tax collection procedures, despite retaining the same basic structure, have been transformed in terms of their administrative organization and the powers of the bodies involved. The regulation of tax inspections, however, continues to be based on the existence of a single procedure centered on ex-post verification using what are essentially accounting and documentary audit techniques.

There have, of course, been some changes. These include the creation of two bodies which have become essential: the Large Taxpayers’ Central Office and the National Office of International Taxation. Along these same lines, we have seen a general shift towards the internationalization of proceedings, with the mutual assistance regime now being regulated by the GTL. Inspection proceedings more often have an investigative bias, as seen from the number of cases in which the commencement of the proceedings is marked by the entity’s registered office being entered and searched. These changes, however, are often inadequately regulated, whereas in other cases, they have to co-exist alongside a basic structure consisting of a single verification and investigation procedure which is different from the management verification procedure; this takes place on an ex-post basis, the objective being to carry out a complete audit, or one of general scope, covering various taxes and periods.

Perhaps the time has now come to rethink the legal bases supporting the inspection procedure, or at least to begin considering the possibility of doing so. A reform of this type will be complex and sensitive, although it has to an extent been made unavoidable by the way in which our social, economic and international reality has evolved. In any event, our aim here is merely to put out a few basic ideas, focusing more closely on certain areas in which, in our view, the possibility of such a reform could at least be considered.

Firstly, we need to look at the very essence of the current system, based on the distinction between an inspection procedure and a variety of management procedures, all of which are in any event classed as verification procedures. The existence of a variety of management procedures may give rise to artifice and prove overly contentious in relation to the suitability of the procedure chosen in each case by the acting administrative body. The separation between management procedures and inspection procedures – with the verification of accounting records being reserved for the latter – is an organizational tradition of the State Administration which is worthy of respect. The fact is, however, that the exact demarcation is unclear, giving rise to no end of problems regarding the legitimacy of actions taken by management bodies, in addition to which more relevant criteria, based on the type and size of the taxpayer and the needs of the different tax Administrations, are overlooked. One possibility to be considered would therefore be the existence of a single verification procedure of which there would be different models in terms of scope of the proceedings, depending on the way in which each Administration is organized, the type of taxpayer and the tax being verified.

Secondly, any regulations governing inspection proceedings would need to address two aspects which currently lack an adequate and precise legal basis: proceedings carried out by the tax Administration which are purely investigative, and the link between inspection proceedings and so-called cooperative compliance or other formulas deriving from comparative and international experience. In addition, there is a need to regulate the classic inspection procedure in a manner which reflects what it is in reality, i.e. a computer audit procedure carried out on accounting data which is stored in data processing equipment.

As regards the purely investigative powers of the Tax Inspectorate, it already carries out proceedings of this kind and it is logical that it should do so. What is lacking, however, is proper regulation of such powers, which takes into account the rights of taxpayers. Such regulations would need to address a variety of issues, such as the power to obtain information, or to probe or monitor a particular taxpayer prior to the actual commencement of the inspection procedure. Similarly, the entering and searching of a domicile cannot continue to be based on nothing more than the existence of a legal rule which envisages judicial authorization of such actions where necessary in order to execute an administrative decision which, in reality, does not exist in this case. There need to be regulations which establish when such investigative proceedings are legitimate, how they are to be carried out, and what subsequent monitoring is to take place, based on the doctrine of the ECHR. Tax law should also reflect the consequences of the data protection regime, since in today’s world, the processing of data affects not only the bases of international taxation but also the bases which establish the limits applicable to actions by public authorities which are intrusive or affect a person’s private life. Finally, the particular characteristics of the Customs administration and how it links up with the Tax Agency as a whole need to be addressed.

What is more urgent, however, is the groundwork required to enable the Tax administration to function in a manner which reflects adequately the concept of cooperative compliance. The inspection procedure is based on a full ex-post verification procedure carried out according to a tax control plan and a system for the selection of taxpayers, whereas we need to move towards another type of model based on on-going relations and frequent communication between the Administration and major corporations, in which the inspection proceedings carried out are sporadic, partial, and based on an assessment of the tax risks. For this transition to work, companies must be transparent, the Administration must accept that it needs to respond to companies’ consultations in a manner which is clear and objective and respects the principle of legitimate expectation; and above all, we need to cease to assess how well the Administration functions based only on the purported cases of fraud discovered and the debts settled as a result of inspection assessments. This model will also require full inspection proceedings to be carried out in some cases, and the classing of taxpayers may well prove problematic in aspects such as objectivity, transparency, equality and even competition in the market.

It must be assumed, in relation to the changes referred to above, that the foreseeable evolution of any organization over the coming years will be characterized above all by internationalization and digitalization. Regarding internationalization, we have already seen the increasing importance of procedures relating to the verification of transnational groups or transfer pricing, and a spectacular increase in the amount of data being passed on through international exchanges of information. Concepts which were all but unknown not so long ago have become commonplace, as we see with mutual agreement procedures, the importance of which – if the OECD’s predictions are anything to go by – is sure to increase. Inspections taking place simultaneously with proceedings carried out by another Administration, and the exchange of information with other Administrations, will become more frequent, but we will also see a normalization of informal relations between civil servants from different Administrations, all of which will require the establishment of clear criteria, a change in training and mentality, and the involvement of the companies themselves in the process of change.

Finally, the digitalization of the economy and of organizations will be reflected in an inspection procedure in which access to systems for the digital storage of information, primarily accounting data, is essential. More precise rules are therefore required on how these systems are accessed, what data is obtained, how and for how long such data is to be kept, and the conditions in which it can be examined, with the involvement of the taxpayer or its representatives and advisors. A good starting point for anyone wishing to find out more about the legal requirements to which these changes will give rise is the judgment of the ECHR of March 14, 2013 in the case Bernh Larsen Holding and Others v. Norway, which sets out the bases required to support an inspection procedure that relies on access to data stored in digital form.

Speech by Félix Plaza, Director of Centro de Estudios Garrigues, at the Inauguration of the 2018/2019 Academic Year

Rector of the Antonio de Nebrija University, Juan Cayón,

Director-General of the State Tax Agency, Jesús Gascón Catalán,

Senior Partner of Garrigues, Ricardo Gómez-Barreda,

Dear teaching staff of Centro de Estudios Garrigues,

Dear students of the 2018/2019 academic year,

Dear friends:

Today we officially inaugurate the 2018/2019 academic year.

During the first few days of class I had the opportunity to speak to most of you, with a view to welcoming you to Centro de Estudios Garrigues and introducing you to a number of essential principles and values on which our institution is based, such as teamwork, solidarity, effort …, but above all our three “E’s”:

  • Ética (we strive to be ethical)
  • Excelencia (we seek excellence)
  • Exigencia (we are exacting)

Being exacting is probably no more than a manifestation of excellence, since it is impossible to attain excellence without demanding the most from oneself, but excellence is also supported by other pillars, such as rigor (understood as appropriateness and precision) and knowledge.

And it is here, in knowledge, that the thoughts I wish to share with you now, at the commencement of a new academic year, truly begin.

Back when I was at university, there was no Internet.  Researching any subject took time:  time to go to the library, time to locate the books or papers that might have a bearing on the subject, time to analyze or review databases and indices before requesting what one wished to consult.  Today, what used to take hours takes only minutes (little more than the click of a mouse and a few minutes of processing the information to be analyzed).  But when all that is done, the only way to continue is, and always has been, careful study.

I have heard that, in Internet terms, “years” refers to “dog years”, because one year on the Internet is like seven years in the “real world”.

I have always thought that this technical evolution is good.  I still think it is.  But, as with everything, it can have side effects …

The other day, via WhatsApp, I was sent an article by Javier Paredes, Professor of Contemporary History at the University of Alcalá, published in the digital newspaper Hispanidad, in which, while analyzing another matter, he wrote the following:

“When the information society supplants the knowledge-based society, ignorance sprouts even in the keenest of minds. The problem is that some think that the two societies are one and the same.  No, one has nothing to do with the other. The information society merely watches television and, at best, occasionally reads something short. The knowledge-based society reads, studies and seldom or never watches television.  Accordingly, we would be wise to consult those who have studied the essence of Spain”.

This article drew my attention to something that is, in my opinion, essential, and that is whether today, when we are living in a time in which we have access to more information than ever before, society is becoming capable on all fronts of transforming this information into greater knowledge, or whether, on the contrary, rapid access to information is having the effect that we no longer look deeply into things, and that, in fact, we know a little about a lot of things, and a lot about very little.

José María Sanz-Magallón, Subdirector-General of Internal Communication and Knowledge Management at Telefónica S.A., in an article published in Nueva Revista affirmed that “A daily issue of the New York Times contains more information than an average citizen of the 17th century would have had in his entire life. More information has been generated in the last five years than in the last 5,000 years, and this information doubles every five years”.

But are we capable of converting all that information into knowledge? Or can excess information have an adverse effect on society in general when it comes to deepening knowledge? And all of this without asking ourselves something that is just as important: who is monitoring the truth and rigor of all this information?

As Sanz-Magallón notes in the article I just mentioned “It is clear that, thanks to the development of modern information storage, processing and transfer technologies, human beings can cope with and work with the enormous amounts of data produced.  Nonetheless, as Julio Linares indicates, ‘the more the information generated by society, the greater the need to turn it into knowledge’”.

Faculty members Zoia Bozul and José Castro Herrera, in their paper “University Faculty in the Knowledge-Based Society: Professional Teaching Skills” take the view that:

“The knowledge-based society is not something that exists now, rather it is a final stage of an evolutionary phase toward which society is moving, a stage subsequent to the current information era, which will be reached through the opportunities represented by the information and communication technology (ICT) of current societies.


Based on this, a need is perceived to train people who are capable of selecting, updating and using knowledge in a specific context, who are capable of learning in different contexts and modes throughout their lives and who are able to understand the potential of what they are learning so as to adapt their knowledge to new situations”.

Nonetheless, the concepts of “information society” and “knowledge-based society” are frequently confused or even treated as the same thing.  I believe, however, that today, more than ever, it is necessary to distinguish clearly between information and knowledge, even if information is an integral part of knowledge.

In the previously-mentioned article, José María Sanz-Magallón defines the knowledge-based society as “that in which citizens have practically unlimited and immediate access to information, and in which information, its processing and transfer, serve as key factors in all the activities of individuals, from their economic relationships to leisure and public life.”


University of Barcelona professor Karsten Krüger, in his paper “The Concept of ‘Knowledge-Based Society’”, notes that “the current concept of ‘knowledge-based society’ does not focus on technological progress, but rather regards it as one factor of social change among others, such as, for example, the expansion of education.  According to this focus, knowledge will increasingly serve as the basis for social processes in various functional areas of societies.  The importance of knowledge as an economic resource will grow, thus entailing the need to learn throughout one’s lifetime.  But awareness of ‘not knowing’ and awareness of the risks of modern society will also grow”.


Along these lines, José Luis Mateo, former Vice President of the CSIC, in his paper “Knowledge-Based Society”, states that: “knowledge has therefore always played an important role, although it is the rate of its generation that undoubtedly creates major differences from one era to another.

Every so often, our current society is referred to as a ‘learning society’ and, doubtless, this name reflects the reality, although it would be advisable to qualify or add that this is mainly a result of the rapid production and generation of knowledge, which requires ongoing learning to avoid one’s knowledge of the matter in question becoming obsolete. The learning society is therefore a consequence of the knowledge-based society. In other words, the most recent generations of professionals and those to come will never cease to be students”.

In the light of all of the foregoing, I believe it is necessary to understand that the information society should not be confused with the knowledge-based society, although it will lead us (it is leading us) inexorably towards it.

But in the same way, it becomes necessary, now more than ever, not to be superficial, frivolous, and not to be “informedly uninformed”.  Our future, the future of society, as has always been the case, depends on knowledge and on our capacity to turn information into knowledge.

The evolution of technology has given us the tools (unlimited access to information); it is now up to us to put these tools to use.  It is our job to transform information into knowledge.

In the words of Kofi Annan: “Knowledge is power. Information is liberating. Education is the premise of progress, in every society, in every family”.

At Centro de Estudios Garrigues, we want you to stand out, to understand that you are called upon to take the reins in the transformation of society and that, in this transformation, the most important tool, the one that will set you apart from the rest, is knowledge.

Let us help you in this process, to help you build a better society, a better future; let us give you the tool that will enable you to change the world: knowledge.

Today you have been duly informed of your responsibility.

Thank you.

The LLM in International Transactions begins at Fordham University in New York

The LLM constitutes training in line with the current international business environment and is aimed at Garrigues associates, offering these young lawyers valuable learning opportunities that will contribute to their personal and professional development at Garrigues.

This week saw the inauguration of the third LL.M. in International Transactions. The program, developed as part of Garrigues’ International Training Program (ITP), is run through Centro de Estudios Garrigues in collaboration with Universidad de Nebrija and Fordham University School of Law, one of the oldest and most prestigious educational institutions in the US.

The program offers training in line with the current international business environment and is aimed at Garrigues associates, offering these young lawyers valuable learning opportunities that will contribute to their personal and professional development at the firm. The international teaching staff is made up of noted academics and expert lawyers in different fields of the law, as well as professionals from public institutions and multinational groups.

The program offers a multidisciplinary grounding in legal and business matters as well as the soft skills necessary to advise on the negotiation and implementation of corporate and commercial transactions and on the resolution of the related disputes. The training has an international dimension since many transactions involve more than one legal system and entail a combination of continental civil law systems and common law institutions.

The first stage of the training has begun in New York (USA). During two intensive weeks of study, students will attend classes at Fordham Law School in order to familiarize themselves with the US legal system and the main common law institutions.

The program will continue online until April, during which time theoretical and practical aspects of different areas such as corporate governance, insolvency, antitrust law, financial markets and privacy will be studied from a comparative perspective. The LL.M. ends with various weeks of study at Centro de Estudios Garrigues in Madrid, focusing on the resolution of multidisciplinary case studies and various talks on relevant topics for international lawyers (negotiation techniques, lobbying and advocacy, ICT tools for lawyers, disruptive technologies affecting law firms, digital transformation, etc.).

The new disclosure obligation on tax intermediaries

The European Union has finally approved, after a particularly quick procedure, the Directive requiring so-called tax intermediaries to supply specific information on tax-relevant cross-border transactions. We are talking about Directive 2018/822 of 25 May 2018 (Official Journal, June 5), amending Directive 2011/16/EU as regards mandatory automatic exchange of information in the field of taxation in relation to reportable cross-border arrangements.

1.- Background to the Directive

Some countries had tried out reporting mechanisms for transactions that could involve aggressive tax planning. Examples are the tax shelter disclosure system in the U.S. and the DOTAS (disclosure of tax avoidance schemes) regime in the UK. These experiences spread to other countries and inspired BEPS Action 12 on mandatory disclosure rules. In the final report on that action, the OECD called for the use of these disclosure regimes in relation to the “promoters” of standard schemes identified through hallmarks. Disclosure would have the dual aim of providing immediate information to the authorities and of deterring the offering of abusive planning schemes.

Taking up these ideas from the BEPS project, the European Commission submitted a Proposal for a Directive on 21 June 2017 and, following the political agreement reached by ECOFIN on March 13, 2018, the wording of Directive 2018/822 was settled, containing notable differences with respect to the initial proposal, especially in relation to the broadening of the personal scope of the reporting obligation itself. The Directive amends Directive 2011/16, on automatic exchange of information between the member states in the field of taxation, which is why it is known as DAC6.

2.- Content of the Directive.

The contents of the Directive are easily summarized. So-called tax intermediaries must report to their tax authorities specific information on any cross-border arrangements in which they take part, where those arrangements have any of the hallmarks listed in the Directive itself. The member states will then automatically exchange that information and thereby have prompt knowledge of abusive or potentially abusive planning arrangements.

The information must relate to a cross-border arrangement (“dispositifs” and “mecanismos” in the French and Spanish versions). There are no reporting obligations for purely domestic arrangements, not affecting any other state, although a member state may unilaterally include those transactions in the scope of the mandatory reporting regime.

No definition is given of “arrangement”. It must be interpreted as meaning any dealing or transaction, or set of dealings. These arrangements are mandatorily reportable where they have any of the characteristics or hallmarks set out in the new Annex IV to Directive 2011/16. These hallmarks relate to different objectives. The first two of the five hallmark categories cover the typical standard tax planning arrangements, usually involving a tax purpose combined with a fee for the promoter and a confidentiality clause. The third category targets arrangements leading to a no-tax scenario by taking advantage of certain tax regimes, including the absence of any corporate income tax or a zero or “almost zero” rate. The fourth category is designed to deter arrangements that may have an impact on the automatic exchange of information between countries and the identification of beneficial ownership. The last category is perhaps the most controversial, as it relates to transfer pricing matters: it includes arrangements linked to the transfer of hard-to-value intangibles and certain reorganizations between companies in the same group involving transfers of functions, risks or assets, if the projected annual earnings before interest and taxes (EBIT) of the transferor over the three-year period after the transfer are less than 50% of the projected annual EBIT had the transfer not been made.
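The 50% EBIT condition in that last hallmark category is, in substance, a simple numeric comparison. The following is a minimal illustrative sketch of that test; the function name and the figures are purely hypothetical and are not taken from the Directive, which only states the threshold:

```python
def ebit_hallmark_triggered(projected_ebit_after_transfer,
                            projected_ebit_without_transfer):
    """Illustrative check of the transfer pricing hallmark: the
    transferor's projected annual EBIT over the three years after an
    intra-group transfer of functions, risks or assets is compared
    against 50% of what the projected EBIT would have been had the
    transfer not taken place.

    Each argument is a sequence of three annual EBIT projections.
    """
    total_after = sum(projected_ebit_after_transfer)
    total_without = sum(projected_ebit_without_transfer)
    # Hallmark applies where post-transfer EBIT falls below the 50% threshold
    return total_after < 0.5 * total_without

# Hypothetical figures: the transferor projects EBIT of 30, 25 and 20
# after the transfer, versus 60 per year had it kept the functions.
print(ebit_hallmark_triggered([30, 25, 20], [60, 60, 60]))  # True: 75 < 90
```

The comparison is made on the projections over the full three-year period; in practice, of course, the difficulty lies in producing and defending those projections, not in the arithmetic itself.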

It is the intermediaries of a member state, not the taxpayers, that in principle have the obligation to report these arrangements. The Directive adopts a very broad definition of intermediary. It encompasses anyone who designs, markets, organizes, makes available for implementation or manages the implementation of a reportable cross-border arrangement. But it also covers anyone who knows, or could reasonably be expected to know, that they have undertaken to provide, directly or through others, aid, assistance or advice with respect to designing, marketing, organizing, making available for implementation or managing the implementation of a reportable cross-border arrangement. Where more than one intermediary is involved, the reporting obligation falls on all of them, unless the same information has already been filed by another of those intermediaries. The relevant taxpayer has the reporting obligation only if there is no intermediary, because the arrangement was devised and implemented in house, or where the national rules on legal professional privilege relieve all the intermediaries of this obligation.

The reportable information appears to relate only to identifying the transactions, their characteristics and values.

3.- Conclusions

This Directive plays a crucial part in the move to review tax planning practices, but it suffers from a defect of definition: it mixes up the reporting of information with the combatting and prevention of tax fraud, without clarifying the limits separating them, and it shies away from any attempt to make the system it sets out serve to provide greater legal certainty. On the contrary, it warns that the reporting of this information does not confer any degree of advance certainty as to the validity the tax authorities will attribute to these arrangements. Its implementation in the various states may therefore be confused and could, ironically, aid tax competition between them, in addition to placing obstacles in the way of the functioning of the internal market by leaving out purely domestic arrangements.

In Spain’s case, the transposition of the Directive will without a doubt rekindle old problems that have never been resolved: how to define tax advisors, and the meaning and scope of their legal professional privilege. Moreover, by somehow singling out so-called tax planning, it will affect the internal organization of the profession.