On electrons, bits and code: exploring the digital frontier II

18 January 2019

When we attorneys move about the digital world with a specifically law-based mindset (that is, not as mere tech users), the first thing we run up against is the digital world's very concepts or, better yet, its terminology. We start throwing around certain words – computerized, digital, electronic, telematic, file – whose exact meaning we have yet to nail down.

Accordingly, when attempting to give an “electronic document” or “electronic contract” an equivalent or at least analogous legal meaning to what jurists have always understood, in layman’s terms, as “document” and as “contract” or, even more so, when thinking of “digital assets” that can be exclusively owned, such as cryptocurrencies or tokens, we have to first get clear about certain concepts.

The first thing we need to clarify is that we are talking about “information” or “messages”, both of which have two distinguishable elements: the information in and of itself, as the idea-based or cognitive content or meaning (something we think), and the signals on a material or imprintable media through which we represent this meaning so that it can be conveyed from one human mind to another (or from one system or device to another system or device that is capable of perception) and, as the case may be, stored outside of a human memory.

  1. From electronic to digital

To put it simply, computer science is a technology we have developed to manage information (organize it, store it and transmit it) in a mechanized or automated way. This is possible thanks to some very impressive machines that, firstly, work with electricity (that is, they are electric devices) and, secondly, operate by directing tiny currents of electrons through complex circuits (what we understand as “electronics”). In short, in this information technology, this flow of microscopic electrons or other electrically charged particles is used to process specific “signals”.

Although the concept of “digital” is related to that of electronics, it is actually quite different. Digital does not refer to a given technology or to certain types of devices, but rather to how signals and information are represented and stored.

The term “digital” derives from the word “digit”, from the Latin digitus, “finger or toe”. However, what it really means here is “numeric”, because human beings began counting on our fingers (indeed, our numeral system is a decimal, or base 10, system because that is how many fingers we have to count on). Now, digitizing or, to invent a word, “numberizing”, is nothing more than coding or representing any piece of information by using digits, i.e., the ten numerals represented through the figures 0 to 9 – and, more precisely, with only two of these figures, 0 and 1 (a “bit” is a space that can be occupied by either a 0 or a 1), in what is known as binary code. Binary code was used as far back as the ancient Chinese divination text I Ching or, relatively more recently, in certain late 17th century works by the German philosopher Leibniz. And although good old Leibniz had conceived, way back then, of a machine that could store and handle data codified in binary digital code, the coding and binary arithmetic exercises that he carried out in his day did not require any more technical instruments than a clean sheet of paper, a quill pen and a bit of ink. Because, as I have said, digitalization, in the strictest sense, is not a technological operation but rather simply one of applying a code.
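By way of illustration, here is a minimal sketch in Python (the choice of language is arbitrary) of this purely code-based operation: representing an ordinary decimal number as a string of 0s and 1s and recovering it again, with no special hardware involved.

```python
# A minimal sketch of "digitizing" in the strictest sense: representing a
# number in binary code. No machine is required; this is pure coding.

def to_binary(n: int, width: int = 8) -> str:
    """Represent a non-negative integer as a fixed-width string of 0s and 1s."""
    bits = ""
    for _ in range(width):
        bits = str(n % 2) + bits  # take the lowest-order binary digit
        n //= 2
    return bits

def from_binary(bits: str) -> int:
    """Recover the integer that a string of 0s and 1s represents."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(to_binary(10))            # -> '00001010'
print(from_binary("00001010"))  # -> 10
```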

  2. From digits to code

Accordingly, “digitizing” a text, just like I am doing as I type these words, means converting each letter of the alphabet, punctuation sign and blank space into a specific binary number according to a pre-determined code (something like converting letters to a specific sequence of dashes and dots in Morse code).

In particular, using the American Standard Code for Information Interchange (ASCII), which is the code our computers most often work with, each uppercase and lowercase letter, each figure from 0 to 9, and certain punctuation signs and control characters are represented through a given eight-bit chain, known as a byte. For example, the letter “a” is replaced by or represented with the binary sequence 01100001.
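To make this concrete, here is a short Python sketch that simply asks the ASCII code for the byte assigned to each letter of a short text and prints it as an eight-bit pattern.

```python
# Illustrative sketch: each character of a short text is encoded as ASCII and
# shown as the eight-bit pattern (one byte) that the code assigns to it.

text = "abc"
for byte in text.encode("ascii"):
    print(f"{chr(byte)!r} -> {byte:08b}")
# 'a' -> 01100001
# 'b' -> 01100010
# 'c' -> 01100011
```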

It is important to realize that digital coding of this type is 100% a matter of agreed convention. We could, for example, have agreed that the same sequence of 0s and 1s would represent the letter “b”. In any case, the takeaway from all this is that to use a string of 0s and 1s as an instrument to store and transmit information, we will always need not only something that can “record” these two symbols or signs, but also a “code” whereby each combination of 0s and 1s is assigned a specific meaning or a correspondence with a symbol from another language, such as a letter in the Roman alphabet.
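Since the point here is that the code table is pure convention, the following toy example (an invented code, not any real standard) assigns the very bit pattern that ASCII uses for “a” to the letter “b” instead; the same string of 0s and 1s then means two different things depending on which code is applied.

```python
# A toy, invented code table, to stress that the mapping from bit patterns to
# letters is a matter of agreed convention rather than anything inherent.

toy_code = {
    "01100001": "b",   # the pattern ASCII reserves for "a"
    "01100010": "a",   # the pattern ASCII reserves for "b"
}

bits = "01100001"
print("Read with the ASCII code:", chr(int(bits, 2)))  # -> a
print("Read with the toy code:  ", toy_code[bits])     # -> b
```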

  3. Where electronics meet digital technology

So what do electronics have to do with digital technology? The answer is very simple: any switch of a circuit through which an electric current passes can be in one of two positions: open or closed, on or off. If we assign the value 0 to one of these two possible positions and the value 1 to the other (which is yet another agreed convention of coding at a most basic level), we can “record” in an electronic circuit any information that we have previously “digitalized”, i.e., converted into a specific sequence of 0s and 1s.

While some of the electronic systems and devices that existed before we had modern-day electronic digital coding have recently been revamped with digital technology (audio players, radio, television, telephones), computer science and computers, that is, the devices humans invented to process information on an automated basis, have always been tied to digitalization of information.

What distinguishes digital electronics from analog electronics is that the former encodes a piece of information using only two positions (0 or 1) through two clearly differentiated or discrete levels of electric voltage, while the latter encodes an infinite number of information positions through continuous or gradual voltage changes. Precisely for this reason, the digital transmission and reproduction of a signal or piece of information (which has been previously simplified or schematized) is much cleaner and more precise, and it is not encumbered by the distortion or deterioration of signals that can occur with analog transmission and reproduction.
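A small Python sketch can illustrate why (the voltage figures are invented for the example): because the receiver only has to decide between two widely separated levels, a modest amount of noise on the line does not change a single recovered bit.

```python
# Sketch of why digital transmission is robust (hypothetical voltage values):
# the receiver only decides between two widely separated levels, so small
# amounts of noise do not alter the recovered bits at all.

import random

LOW, HIGH = 0.0, 5.0            # the two agreed voltage levels (assumed values)
bits = [0, 1, 1, 0, 1, 0, 0, 1]

# "Transmit": each bit becomes a voltage, slightly corrupted by noise.
received = [(HIGH if b else LOW) + random.uniform(-0.4, 0.4) for b in bits]

# "Receive": anything above the midpoint is read as 1, anything below as 0.
recovered = [1 if v > (HIGH + LOW) / 2 else 0 for v in received]

print(recovered == bits)  # True: the noise disappears on recovery
```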

  4. Analogy with the legal realm

This technical difference in terms of fidelity or accuracy of a representation or reproduction of a piece of information can be extremely relevant from a legal standpoint. It is the ultimate foundation of the identity of any piece or unit of information we handle in this particular medium, and also of the concept of “document” we have in this computerized world, which is much more abstract and nebulous than our traditional, more tangible concept of a document. In the realm of information recorded on paper, an information unit (what we understand as a “document”, such as a bill of exchange or a deed of sale and purchase of a building) is identified based on the individual material nature of a specific piece of paper on which something has been written in ink. In the world of digitalized information, identity is purely a matter of how the information is coded: a piece of information is a specific chain of 0s and 1s, regardless of the material or physical media on which it is recorded.
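As a rough illustration (the file names and the document text are invented), the following sketch writes the same bytes to two different files, standing in for two different physical media, and confirms that what comes back is bit-for-bit the same “document”.

```python
# Illustrative sketch: the "identity" of a digital document is the exact
# sequence of bits, not the physical medium it happens to sit on.

content = "Deed of sale: ...".encode("utf-8")  # hypothetical document content

with open("copy_on_medium_a.bin", "wb") as f:
    f.write(content)
with open("copy_on_medium_b.bin", "wb") as f:
    f.write(content)

with open("copy_on_medium_a.bin", "rb") as a, open("copy_on_medium_b.bin", "rb") as b:
    print(a.read() == b.read())  # True: bit-for-bit the same document
```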

We will need to come back to this later. For now, we just need to understand that, if natural human language always entails a first layer of coding (assigning certain meanings to certain sounds or combinations of sounds produced by using the human voice) and even a second layer of coding when written down (the relationship between certain sounds and certain shapes/letters), then the digital electronic technology used in computer science means adding an even more intense layer of intermediation.

For one thing, we have to depend on machines. Anyone who can see and who knows how to read (and knows the natural language in which it is written) can be privy to the thoughts and ideas represented when we write natural language on a piece of paper, on the pages of a book, or in a traditional document. At the most, if it is dark out, they might need a lamp or a candle. This is why we usually say that a feature of a document is that it can be directly understood.

However, in order to recover information recorded on a pen drive or on a CD, we have no choice but to use a machine. Without a highly sophisticated engineered device able to “read” a CD and to transform the signals it contains into sound, images or alphanumeric characters on a screen that we can look at, the CD is nothing more than a piece of plastic with some faint grooves etched on it.

Breaking it down further, the classic physical storage media for digitalized or computerized information come in one of two types: magnetic or optical. On magnetic storage media (hard drives, digital tapes), information is stored by applying an electromagnet to a surface coated with electromagnetically sensitive particles (iron oxide). This gives each particle a magnetic charge; the direction of that charge, which it retains, determines whether it represents a 1 or a 0 (i.e., represents the data codified as bits). Information stored on magnetic media is read by again applying an electromagnet to detect the magnetization patterns.

In contrast, optical storage media (CDs, DVDs, etc.) do not use magnets, but rather lasers to “burn” data patterns onto the disk and to later read the reflections. In particular, microscopic grooves are burned into the flat surface of a disk, which is usually coated with aluminum. The recorded disk is read by again passing a laser over the surface. The pattern of grooves on the surface (i.e., whether light is reflected or not) makes the laser beam respond in different ways, thereby determining whether each position is a 1 or a 0 and “reading” the information stored on the disk.

  5. Nothing without its interpretation

To read from storage media, we need not only a machine that can perceive these optical or magnetic signals that in turn represent 0s or 1s, but one that can reconvert what has been codified as a sequence of binary digits back into sound, images or text in a natural language written, for example, with Roman alphabet characters displayed on a screen. Without software that can perform this conversion, digital information is impenetrable and meaningless.
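A final, merely illustrative sketch shows that last conversion step: the bits recovered from the media are just numbers until software applies the agreed code and turns them back into readable text.

```python
# Illustrative sketch: the raw bits read from the media are meaningless
# numbers until software applies the agreed code to turn them back into text.

stored = bytes([0b01100001, 0b01100010, 0b01100011])  # bits as read from the media

print(list(stored))            # [97, 98, 99] - mere numbers, no meaning yet
print(stored.decode("ascii"))  # 'abc' - readable once the ASCII code is applied
```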

Moreover, both machines and the software we run on them become obsolete at a dizzying pace. Therefore, it is not enough to have a machine and a program that can read a piece of information recorded on digital media at the time it is recorded; we also need a machine and software able to read that same type of media later on, when we want to recover the information stored. Most of us, for example, don’t have a tape recorder on hand to play those old audio cassette tapes we might still have stuffed in a drawer somewhere.

Lastly, and no less importantly, this machine on which I am typing, which is able to do all this so perfectly, only works if it is plugged into a power supply. If the electricity goes out, it is completely useless. A power outage would render all the information stored using this type of code and media completely inaccessible.

Lest this happen, I will be sure to hit “save”, so that my words are written to the hard drive rather than held only in the computer’s volatile RAM, and I will attach this word processing file to an email I am sending to the editor of the blog. In the next post, we will take it one step further and look at the question – highly relevant from a legal standpoint – of how to ensure authenticity and completeness in this peculiar world of digitalized information.
