Mastering Computing Magic: My quest for the right to know

Arthur C. Clarke, one of the grand old masters of science fiction, famously wrote that any sufficiently advanced technology is indistinguishable from magic. I would like to think that I understand how the x86 architecture, the Linux operating system, and the internet work, but I have to admit that I have no clue how my PC works at most levels. It magically boots up after I hit the power button and presents me with a pretty little screen which lets me write documents, browse the internet and send emails. It is a wonderfully magical device, but unlike 99% of the population, I have never been content to just use this little black box. I have always wanted to understand how it works, so I have spent the last 20 years of my life trying to figure out how my PC works.

As a history major in college, the program that I spent the most time learning how to use was a word processor, since I had to crank out many a late-night paper. But I never liked MS Word, which most of the other students were using. Instead, I used WordPerfect, because it had the Reveal Codes function and I could see every code that went into making a document. After I created a document for one of my history classes, I would open up Reveal Codes and marvel at all the codes that created that document.

In the old days of MS-DOS programs, we couldn’t just flip through the menus and do Google searches on the internet to learn how to use a program. We had to actually sit down with a software manual and learn all the arcane function keys. WordPerfect 5.1 for DOS came with a blue three-ring binder of documentation, and I often found myself flipping through it. The summer before my junior year in college, I had a clerical job which forced me to do the same repetitive tasks over and over in WordPerfect. One day I stumbled upon the section in the manual about macros. The idea that I could enter a series of codes into my word processor and make it do something over and over was fascinating. The clerical tasks to which I was assigned that summer were so utterly mind-numbing that any diversion was welcome, even reading the WordPerfect 5.1 manual on macros. The language in the manual was terse and filled with abstruse jargon, since WordPerfect Corp. needed to limit the size of its already thick manual. I didn’t understand 90% of the terminology, such as “bitwise operator”, but I understood enough to copy the examples and start playing with the code. After that I was hooked and I wanted to learn more.

I had no idea what I was doing when I bought a book on C. I just knew that the geeks at my university talked about C as the most powerful programming language, so I decided to learn it. With the intrepid audacity of the novice, I figured that it couldn’t be that hard. In many ways I was fortunate, because C is an old-school language created back when computer programmers had to understand how the hardware worked. It was designed by the geniuses at Bell Labs to create the UNIX operating system, so it assumed that the programmer would need to do low-level tasks like pointer operations, bit fiddling and memory allocation. The circle of researchers at Bell Labs in the early 70s (Ken Thompson, Dennis Ritchie, Brian Kernighan and, later, Rob Pike) worked very close to the metal, so they designed a cryptic language that compiled to lean code which was lightning fast and consumed few resources. For the inquisitive mind, learning C can be like discovering how magic works, because it forces the programmer to understand how the computer actually works, rather than learn a bunch of abstractions and APIs.

I was fortunate, because the book that I bought to learn C was written in the 1980s, when the command line was still deemed important and programmers still had to understand what was going on under the hood. I recall that there were sections on converting decimal to binary and vice versa, using bit masks on the 16 interrupts in an 8086, and detecting line endings in UNIX (“\n”), MS-DOS (“\r\n”) and Mac OS (“\r”). The author spent a lot of time on basic concepts that have largely fallen by the wayside in modern programming, such as registers, how pointers work, and sorting and searching functions. The last chapter was on how to create a binary search tree. An introductory book on C today would not bother to explain how to create a complex data structure beyond a simple linked list, because it assumes that everyone is either using a database or a library with a built-in B-tree implementation. Back in the 1980s, programmers didn’t have these tools at their fingertips for free like today, so they often had to create them themselves.
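
To give a flavor of those exercises, here is a minimal sketch of the line-ending detector, written from memory rather than taken from the book. It reads the file in binary mode so the C library doesn’t translate the line breaks behind your back:

    #include <cstdio>

    // Classify a text file as UNIX ("\n"), MS-DOS ("\r\n") or
    // classic Mac OS ("\r") by inspecting its first line break.
    const char *detect_line_ending(const char *path) {
        FILE *f = fopen(path, "rb");   // "rb": no line-ending translation
        if (!f) return "unreadable";
        const char *result = "no line break found";
        int c;
        while ((c = fgetc(f)) != EOF) {
            if (c == '\n') { result = "UNIX (\\n)"; break; }
            if (c == '\r') {
                result = (fgetc(f) == '\n') ? "MS-DOS (\\r\\n)" : "Mac OS (\\r)";
                break;
            }
        }
        fclose(f);
        return result;
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            printf("%s: %s\n", argv[1], detect_line_ending(argv[1]));
        return 0;
    }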

I remember reading that 750-page book on C programming like it was a philosophy text. Back in those days, C compilers were expensive, so I contented myself with reading the text and trying to work through the difficult concepts, without ever compiling a single program. I carefully did the exercises in the book on a notepad, but mostly I learned the concepts. Now this may sound strange, and it goes against all advice you will ever read on programming, but it was actually a good way to learn the basics about computers, because I was paying less attention to the syntax of the programming language and more attention to the fundamental concepts. Unlike most introductory texts on a programming language, the book spent a lot of time explaining the fundamentals, and its code examples were low-level operations which forced me to think like a computer. Like learning a new kind of philosophy, programming was a sort of mental exercise that played with new conceptual frameworks in my head.

I had already studied a couple of human languages, so programming seemed like another type of linguistics. The language was very small, but each word had grammatical rules about how it could be used, and it was utterly logical, unlike human languages. Based upon a few simple keywords, I could define new words (function and structure names) and start building my own language.

Most people see the computer as a tool, and they approach programming from its most practical and utilitarian aspect. Most programming manuals focus on how to do particular tasks, rather than allowing the reader to ponder the fundamental concepts behind those tasks or the underlying structure that makes those tasks possible. I understand why many people find computer programming so boring, because how-to manuals are boring. They don’t expect you to learn anything beyond the task at hand, and they don’t help you figure out how things work.

Reading that first book on the C language suddenly opened up a new world for me. That black box which was a PC suddenly became more knowable. In short, it made the transition from ineluctable magic to explicable technology in my head. It was no longer an unknowable tool that magically worked, but rather a new, fascinating world with its own rules of logic which could be explored. After C, I learned C++, PHP, Python and JavaScript, and I read books on half a dozen other programming languages whose syntax I have since forgotten. I loved the addictive power of writing a snippet of code, then refining it and adding tweaks here and there to make it do exactly what I wanted. Very little of what I wrote had any practical purpose, but it was the ability to explore and create that fascinated me.

At one point, I became fascinated by how computers can add, subtract, multiply and divide with the simple binary operations OR, AND and XOR. I recall being enthralled by the sight of a diagram of logic gates strung together to create a half adder. If I had been an engineering major, perhaps I would have started designing my own circuits, but I didn’t have access to circuit-design software. All I had was a Borland C/C++ compiler that I found at Best Buy for $70. So I started writing C++ code to recreate addition, subtraction, multiplication and division with an arbitrary number of bits. Since my processor was limited to 32 bits at the time, I thought that it would be fun to create functions that could be precise up to any number of bits. My processor might be limited to 32 bits, but my code could do precise mathematical operations up to 4096 bits if I wanted. I used operator overloading in C++, so I could redefine the -, +, *, / and % operators to carry out these operations. Never mind the fact that my code was way too slow to ever have any practical application in the real world. Never mind the fact that I never finished. I was playing around with the basic concepts of computing and constantly learning. The exploration was more fun than any practical utility that might come out of it.
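
That code is long gone, but the heart of it was the same trick a chain of half adders performs in hardware. Here is a minimal sketch of the idea, reduced to a single 32-bit word (the old version spread the same loop across an array of words to get the arbitrary widths): XOR produces the sum bits, AND produces the carry bits, and the carries are shifted left and fed back in until none remain.

    #include <cstdint>
    #include <cstdio>

    struct Word { uint32_t bits; };  // a toy stand-in for a CPU register

    // Addition with no '+' anywhere: XOR is the half adder's sum output,
    // AND is its carry output, and the loop propagates the carries.
    Word operator+(Word a, Word b) {
        while (b.bits != 0) {
            uint32_t carry = a.bits & b.bits;  // where both inputs are 1
            a.bits ^= b.bits;                  // sum without the carries
            b.bits = carry << 1;               // carries move one bit left
        }
        return a;
    }

    int main() {
        Word x{25}, y{17};
        printf("%u\n", (x + y).bits);  // prints 42
        return 0;
    }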

Most people who program for fun start at an early age. I, however, started when I was a senior in college, almost ready to finish my BA in history. I often think that if I had started programming when I was 15, rather than 21, I would have studied computer science in college rather than history. Since I studied history, however, I was taught to think about human society, and my philosophy classes forced me to think in more abstract terms. In the spring of my senior year in college, I attended a lecture about electronic commerce. During the Q&A session afterwards, one of the computer science professors attending the lecture started talking about Richard Stallman and the GNU project. I don’t recall exactly what that professor said, but I grasped the basic notion that people were volunteering their time to produce software that could be freely shared and modified, so that they could create a better and freer society. At university, I was exposed to many new ideas, but most of them slipped through my head like blades of grass running through my fingers. These ideas were entrancing for the moment and they reached up to be grasped, but I was exposed to so many new and wonderful ideas that I grasped onto very few of them. For some reason, however, the idea of free software captivated me, so after that talk I logged on to the university system and pointed Netscape to www.gnu.org and www.fsf.org.

Most people who become promoters of free software start by using a particular piece of free software, which convinces them of its utility and its elegance. Then they use another and another, until they start thinking about how much better the world would be if all their software were free. I, however, didn’t start with the software, because all the software listed on the www.gnu.org website was for UNIX, an operating system I didn’t know anything about, except for the fact that there was a computer laboratory filled with Sun machines where all the geeks hung out late at night. I had a passing acquaintance with many of these geeks on a social level, but I was too intimidated to ask any of them for help when I sat down in front of one of those UNIX machines.

In a way, I was lucky that I couldn’t use the software, because I then wandered over to the “Philosophy” section of the Free Software Foundation’s website. What I found there appealed to me far more than any computer program ever could. The idea that free code could be an instrument of social transformation captured my attention. Of course, it wasn’t presented in quite that way, but after reading Stallman’s short story “The Right to Read,” I grasped immediately that we could live in a freer and more egalitarian society if everyone used free software instead of proprietary software. I was thrilled by the notion that we could resist the encroaching control of corporations and governments by sharing and modifying software. Without ever using free software, I became a convert to the free software movement.

I was raised in a family where my parents discussed how to make a better society around the dinner table. During the 1980s, my mother had been an activist against the war in El Salvador and for the rights of the refugees from that war. Perhaps something of that same activist spirit infused me as well when I adopted free software as a guiding philosophy for how computers ought to work. It was purely philosophy in my case, since I had no idea how to use any of the software. The gnu.org and fsf.org websites made no mention of how to run free software on my humble PC. I heard that the Sun machines in our university computer laboratory had the GNU tools installed on them, but I had no idea how to use UNIX, much less its command line. In the spring of 1996, Linux was still an underground phenomenon that few had heard of outside of hacker circles, so I had no idea that there might be free software that could run on my 386 laptop running Windows 3.1. I had geek friends who probably had heard of Linux, but somehow word of it never reached my ears before I graduated. After that I went off to Mexico, and I only learned of Linux when I returned to the United States. I was living in Austin, Texas, when I met an engineering student who was doing a summer internship at IBM. I started preaching to him about the benefits of free software. He didn’t say much, but I ran into him a couple weeks later and he told me that he had just installed “Linux” on his PC. I remember being utterly confused as he rambled on about how much difficulty he had configuring Linux. Finally, I stopped him and asked, “What’s Linux?” I had been preaching to him about free software, but I had no idea how to use it, or even how to install it.

At that point I was working in a homeless shelter and didn’t have much expendable income to buy a PC after my 386 laptop died. It was almost a year later, when I got a job through nepotism at a tech company, that I scraped together enough money to buy a new laptop and install Linux. I found a copy of Mandrake in the back of a thick book at a bookstore, which I read for several weeks before I built up the courage to try to install it. I shrank the Windows 98 partition and used LILO to dual boot. I recall writing up a screen of text proclaiming that using Linux was a political act, which displayed every time the laptop booted up. Unfortunately, my ThinkPad contained a Winmodem that wasn’t compatible with Linux, so every time I wanted to check my email or browse the internet, I had to reboot into Windows. Every time I ran into a problem using Linux, I had to reboot into Windows and hunt through websites to find a solution. I would carefully jot down the solution in a notebook, then boot into Linux and carefully type the solution into the command line.

It was fun learning how to install Linux, but in 1999 Linux didn’t even have a decent free word processor, so I could see no practical utility in the system, since I couldn’t use it to connect to the internet or write text. The installation CD also didn’t contain GCC, and without an internet connection, I had no idea how to install it. Despite the proud proclamation on my bootup screen pronouncing the benefits of free software for society, I rarely booted into my Mandrake partition. Besides, I was too busy at the time flipping through books on Verilog, Java and x86 assembly, languages which I have since forgotten.

After that, I spent 14 months wandering around South America and another 5 months in Central America, where I rarely touched a computer, aside from an occasional stop at a cybercafe to check my email. It was only when I was back in the US, studying history in graduate school, that I installed Linux again. This time I installed Red Hat, but it didn’t run WordPerfect, which I used to write all my papers for graduate school, and the modem didn’t work either. I finally managed to get the modem working, after buying 3 different modems, but I never found a satisfactory replacement for WordPerfect, so I continued to use Windows when I needed to get work done.

It was only after graduate school that I began exploring Linux, and thus exploring my computer. Unlike Windows, which hides most operations from the user, Linux is predicated upon the use of plain text files which can be easily read and configured by the user. Every process is assigned a number and represented by a directory of plain text files under /proc. Every device, whether an optical drive or an external USB keyboard, is treated as a file under /dev. The user can interact with every process and device through commands entered into the terminal. In other words, the entire system is designed to be monitored and controlled by the user, so that the entire machine is transformed from an opaque black box into a knowable device which is utterly transparent to the curious.
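
As a small illustration of that transparency, here is a sketch of a program that reads its own kernel-maintained record from /proc; the same fields can be read just as easily from a terminal with any text tool:

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // Each running process appears under /proc/<pid>/; the kernel
        // lets a process refer to its own entry as /proc/self.
        std::ifstream status("/proc/self/status");
        std::string line;
        while (std::getline(status, line)) {
            // Print a few of the plain text fields the kernel exposes.
            if (line.rfind("Name:", 0) == 0 ||
                line.rfind("Pid:", 0) == 0 ||
                line.rfind("VmRSS:", 0) == 0)
                std::cout << line << '\n';
        }
        return 0;
    }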

In the hands of the cognoscenti, a Linux box thus makes the leap from magic to understandable technology. In Windows, users confronted with a problem can do little more than reboot and pray to the gods of Redmond that the problem goes away. If that fails, the solution is to reinstall or buy another program. Windows systems that behave in unexpected ways are subjected to antivirus upgrades, and when that fails, they are universally wiped and reinstalled from scratch. Problems with free software can be similarly frustrating, but the user has ways to monitor what is happening and dozens of configuration options to try. There are forums and mailing lists bristling with tips, and bug trackers to glean for information. As a last resort, the source code can be downloaded and studied to figure out what the program is doing and why it isn’t working. The intrepid can insert a few lines to see if a quick fix can be found. The even more intrepid can fork the code and program their own solution to the problem. The line between user and developer blurs as developers rely on users for bug fixes and users often transition into developers to solve their own problems. The developers are no longer the only sorcerers holding the spell books. In the world of free software, everyone has access to the spell books, so it no longer seems like magic when the whole system works.

After years of using free software, my PC has made the transition from ineluctable magic to knowable technology in my hands. I now have some idea how most of the hardware in my PC interacts with the operating system and with individual applications. After reading a book on programming with GTK+, even the way that graphical interfaces interact with the lower layers of the operating system has become more transparent to me.

I have never learned to program in Java, despite having picked up a number of books on the language, for the simple fact that it hides so many details from the programmer. Some part of me has always shied away from devoting too much time to a language which encourages its programmers to avoid all the messy details behind its functions. The magical way that memory gets allocated, garbage is collected and graphical interfaces appear may save the programmer a great deal of time, but the thaumaturgy underpinning such actions is rarely investigated by the blissfully unconcerned. The attitude that “it simply works, so why question it?” never seemed very satisfactory to me.

Unlike my PC, the internet has always seemed to function more like magic to me. It is built on interlocking layers of protocols and hardware which are complicated to untangle. Nonetheless, after working for 4 years as the community lead for a business web application, the web and its servers have also begun to lose their magical qualities, although I still get lost in all the layers of protocols and glue which hold it together.

I love the elegance of the Python language, despite the fact that it hides so much from the programmer. PHP always seemed like a loose pile of rubbish thrown haphazardly together, but it has so many useful functions for every occasion and looks so much like C in its syntax that it can’t help but delight me at times. The hit-or-miss nature of JavaScript, depending on the vicissitudes of the browser, constantly keeps me on my toes, and it incorporates enough elements of functional programming to engage me intellectually. Over time, I have come to enjoy web programming; nonetheless, coding for the web has always seemed to me like an act of crossing my fingers and hoping that everything works together. There are so many moving parts in a web application that everything has to be fault tolerant to an extreme. Interpreted languages are designed to deal with the error-prone nature of the web, and the messages generated by interpreted languages and interactive applications such as databases and browsers have helped me to understand how most of the pieces and parts of a web application fit together.

Still, I can’t help but believe that some prestidigitation is involved when I click on a button in a web page and it gets processed on a server thousands of miles away, then a stream of data comes back to be rendered on the screen of my web browser. The ability to pull data from disparate sources through SOAP packets and a plethora of server APIs means that data streams into my PC from all over the globe to appear in a single web page in my browser. The tinge of magic in that prestidigitation becomes the full-blown blast of a spell when that data comes from a company like Google or Facebook, which has hundreds of thousands of computers linked together in networks to serve up a deluge of data to hundreds of millions of people at a time. While I can grasp how an individual web server like Apache or a database like MySQL works, I cannot conceive how hundreds of thousands of machines running web servers and databases can be interconnected to splash data across my browser window in seconds. The massively parallel nature of all that processing and all that data boggles the limited scope of my imagination. I have read articles about how it works, but none of it really makes sense to me, so my circumscribed mind relegates it to the category of magic, along with all the other things which simply can’t be comprehended.

For many years, I have been content to live with magic in my midst. I use that magic every day when I check my webmail at yahoo.com, when I do a search at google.com and when I check up on the antics of my friends at facebook.com. Nonetheless, I have always been a bit leery of this magic, despite the way that it has insinuated its way into my daily life. The free software activist in me has always distrusted what is happening on all those corporate servers. I would boast to my friends that Yahoo, Google and Facebook were using free software to construct such data-processing monstrosities, but I always had a pit of unease in my stomach at the way that those companies were gobbling up the mundane details of our lives behind their corporate firewalls.

Rumors and innuendos began to leak out in dribs and drabs that the NSA was sifting through all that data. An occasional whistleblower would appear on a show like Democracy Now! to denounce the ways that the government was violating constitutional rights to privacy, but it happened rarely enough that the alarm would quickly dissipate and I could blithely ignore it. Maybe I wasn’t fully comfortable with the sorcerous powers which I invoked every time I conducted a web search or checked my email, but the familiarity of this magic made it appear less threatening and almost controllable. I might have known that the NSA was warehousing exabytes of data about our lives in Utah, but I was able to ignore it until Glenn Greenwald began publishing a torrent of revelations from Edward Snowden. Then the danger of the familiar magic in my life was undeniable and I could no longer deceive myself. What made the revelations even worse was the fact that the details were obfuscated in such a way that it has been impossible to determine exactly how the NSA has been carrying out the surveillance. It was frightening to know that the NSA had the power to build a profile of me from the websites which I visited, but it was even more frightening not to be able to know how the NSA was able to do such a feat. Was it watching the traffic over the internet servers in the US? The traffic over the cables running under the ocean? Had it written spyware that infected people’s computers? Some things I could forestall, such as spyware infections, but other things left me powerless. Knowing what the NSA was doing, but not knowing how, terrified me, because there was no way to be sure how to prevent it.

The unknowable but familiar magic in my daily life suddenly became the darkest sorcery to be avoided. The magic which had once so amazed me began to take on other connotations, for what we don’t understand, we goggle at either in wonder or in fear. For a while I stopped using Facebook, which was no great loss, since 95% of the messages on Facebook are utter rubbish, but it cut me off from family and friends, so I have been reeled back in to keep abreast of what the people in my extended life are doing. I found that I couldn’t live without the ineluctable magic in my life, despite the fact that that magic was being used to build up profiles of millions of people on the planet.

NSA surveillance needs to be resisted, since it might become the first step toward a world of Big Brother and governmental control over our lives. For many years, I believed that surveillance by Little Brother, as the many companies collecting data on our lives have been dubbed, was the greatest threat to our freedom, but governmental control in the form of Big Brother is a far more frightening prospect. The surveillance of the KGB in the former USSR doesn’t hold a candle to the amount of data that the NSA potentially has about our daily lives. We are told not to worry, since all this surveillance is directed toward stopping foreign terrorism, but the information being gathered has already been used to spy on tax cheats, journalists and foreign heads of state, and it is being shared with other governmental agencies outside the NSA and with some US allies. I have written to my representatives in Congress and to the president to express my concerns, but I don’t see any mass movement of citizens in the offing to derail the mission creep that is NSA surveillance. The potential for abuse hangs over all our heads, yet I see so few people banding together and organizing to counteract the threat. We need a political movement to change the politics, yet the few groups, such as the Electronic Frontier Foundation, which are trying to create such a movement are so dispersed that the effort feels futile and quixotic. I know that I shouldn’t take that attitude, since a small number of people can have a great effect and all political activism starts small, but I currently reside outside of the US, so I can’t get involved physically, where my participation could have the most impact. To be honest, I am already committed to too many activist causes, so I don’t feel ready to take on this one as well.

If political mobilization appears out of the question for me, perhaps I can help create a little point of resistance. I know that I should start encrypting everything, to make it harder for the government to spy on anyone. If millions of ordinary people start using GPG to encrypt their email and start using the Tor network to anonymize their traffic, then it will be far harder for the NSA to single out any one person, and the whole system of surveillance will eventually break down. Currently, however, few of my friends have any clue how to encrypt an email. If I gave them my public key, they wouldn’t know what to do with it. As for using Tor, there are so few Tor relays that it is frightfully slow. In Bolivia, using Tor to browse the internet is so slow that I often give up in frustration. Besides, the NSA is actively targeting the traffic coming out of Tor exit nodes, so it becomes almost counterproductive to use Tor at this point, unless you want to make a political statement. I considered setting up a Tor relay, but the provider of my VPS informed me that it was not permitted.

The second way to start creating foci of individual resistance is to migrate away from the companies which are collaborating with the NSA. I give kudos to Twitter for attempting to defy the NSA, but it is depressing to observe how many of the big internet companies have compliantly rolled over and given the NSA access to their servers. They could have chosen to fight it in the secret courts as Twitter did, but it would seem that they have a financial and political interest in collaborating with the government. The best way to fight this compliant attitude among internet companies is to set up mail servers and web applications outside the companies which comply with the NSA. At first it appeared that the NSA was only targeting the data collected by the big companies such as Microsoft, Google, Yahoo! and Apple, but it has now become apparent that even small providers of email are being targeted for surveillance, if we can judge by what happened at XXX.

In the end, I decided that I would take individual action and create my own little pocket of resistance by setting up an email server for a couple of activist groups in Bolivia. It is sad to say, but not only is the Bolivian government almost certainly being surveilled by the NSA, Bolivian environmental activists also worry that their own government might be monitoring their activities online. In a country like Bolivia, where there are few jobs outside the governmental sphere, students and professionals in the environmental sciences fear to speak out against their government, since it might mean that they will never find employment.

The idea of setting up a web server and an email list server for a couple of activist groups in Bolivia seemed like a good idea, until I confronted my old nemesis: technology so far beyond my comprehension that it becomes barely distinguishable from magic. Setting up a secure web server is not difficult, and millions of people do it. Setting up a secure email server is a whole other ballgame, and I’m stepping up to the plate not even knowing how to swing the bat. All the equipment that I need to play is lying around to be picked up for free: Postfix for a mail transfer agent, Dovecot for handling IMAP and POP3, OpenSSL for encryption, ClamAV for antivirus detection, SpamAssassin for culling spam, milter-greylist for greylisting suspect senders, Sympa for managing email lists, RoundCube for webmail, MySQL for managing a database of email accounts, and all of these parts running, of course, on top of a Linux server with an Apache web server and phpMyAdmin to administer the database. The task of learning how to use half a dozen new programs didn’t seem so daunting, until I tried it. Each piece isn’t so hard to set up individually, but configuring a server to make them all work together seems to enter the ineluctable realm of magic.
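
To give a taste of that glue, here is a heavily abridged sketch of the kind of Postfix main.cf fragment that ties a few of the pieces together. The hostname and file paths are placeholders of my own invention, and any real setup needs far more than this, so treat it as illustrative, not canonical:

    # /etc/postfix/main.cf (abridged sketch; paths are placeholders)
    myhostname = mail.example.org

    # Virtual mailbox domains and accounts looked up in MySQL
    virtual_mailbox_domains = mysql:/etc/postfix/mysql-domains.cf
    virtual_mailbox_maps = mysql:/etc/postfix/mysql-mailboxes.cf

    # Hand accepted mail over to Dovecot for delivery via LMTP
    virtual_transport = lmtp:unix:private/dovecot-lmtp

    # TLS certificates from OpenSSL for incoming connections
    smtpd_tls_cert_file = /etc/ssl/certs/mail.pem
    smtpd_tls_key_file = /etc/ssl/private/mail.key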

Like the intrepid apprentice who opens a master’s spell book for the first time, I have copied arcane commands from how-to articles on the internet and pasted them into the terminal of my VPS, having no idea what these spells really do. Then I crossed my fingers and restarted the services, hoping to invoke the right magic to make it all work. Thus far most of the spells have misfired, leaving me an awful mess to clean up afterwards. Doggedly I keep trying new spells from different articles on the internet, hoping the next spell will work better than the last one. I make enough painful progress to tantalize me into not giving up, but I am playing with magic that I don’t truly understand.

I keep mixing and matching the pieces, hoping that one spell component will work better than the last. I started with Nginx, entranced by its speed and efficient memory usage. I managed to get Drupal and phpMyAdmin installed on top of Nginx, but couldn’t get both of them to work together correctly, so I replaced Nginx with Apache. I planned to install Zarafa, which would handle all the troublesome configuration of webmail, email server, spam and antivirus catching, mail filtering, email account management, etc., but Zarafa doesn’t run on Debian Wheezy. Rather than reinstall my server with an older version of Debian, I decided to try to figure out how to install all the pieces and parts myself. I removed Exim and replaced it with Postfix, which seemed like a good choice, until I tried to configure it with Sympa. It took me a day to figure out how to install and configure Sympa, but now I can’t get it to display the interface in Spanish. I started with Courier, but then switched to Dovecot once I realized how antiquated Courier was. Unfortunately, the spell book I was following was written for Dovecot 1.x, and Debian Wheezy uses Dovecot 2.1. The invocations (i.e., configuration options) for Dovecot 1.x which I blithely pasted into Dovecot 2.1 caused it to crash spectacularly.
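
To show the sort of incompatibility that bit me: the certificate paths below are placeholders, but the syntax change between the two versions is real. Dovecot 1.x took the SSL certificate as a bare file path, while Dovecot 2.x renamed the options and expects a leading “<” telling it to read the file’s contents:

    # Dovecot 1.x (what my outdated spell book gave):
    ssl_cert_file = /etc/ssl/certs/mail.pem
    ssl_key_file = /etc/ssl/private/mail.key

    # Dovecot 2.x (what Wheezy's Dovecot 2.1 expects):
    ssl_cert = </etc/ssl/certs/mail.pem
    ssl_key = </etc/ssl/private/mail.key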

Now I’m pondering 3 different spell books, none of which seem to be using the same type of magic as I am. One magician uses RBLs (Real-time Blackhole Lists) instead of milter-greylist. Another uses Nginx instead of Apache. A third explains the spells better than the others, but commenters note that there are errors in parts of his spells. The magician whose spell book seems most complete is working on a system that is halfway between Debian Squeeze and Wheezy, so I am leery of trying his spells, for fear that they might explode in spectacular failure on my system.

I would like to manage all my virtual email accounts using plain text files, since they have less chance of exploding in my unskillful fingers. Nonetheless, the spell books all seem to be pointing me towards using a database such as MySQL or PostgreSQL to manage the virtual accounts. Since I am playing with magic that I don’t comprehend, I will gamely follow the spell books; nonetheless, I wonder at the wisdom of the magicians who wrote them.

I am beginning to despair at the prospect of ever mastering this magic, yet I persist, knowing that it is possible to turn this ineluctable magic into knowable technology. I know that behind every email server is not an occult acolyte of magic, but rather a skilled server administrator who has mastered this technology. I am determined to no longer be a slave to the magic of big internet companies whose inner workings I will never be permitted to glimpse. If I want to free myself and a handful of fellow activists, I must start by turning magic into knowable technology. I must dedicate the time to learning how all the spell components work, so that they are no longer ineluctable magic. It is a hard magic to master, but it can be done, and this magic can help fight the magic of the big internet companies, which menaces not only my right to know, but also my digital rights and the rights of millions around the planet.
