
Reflections on building my own solar panels

Over the last decade, solar energy has gone from the hobby of oddball engineers and rich eccentrics to a viable way of generating energy for millions of people. Unfortunately, I live in Bolivia, a country where almost nobody uses solar electricity and it is difficult and expensive to import solar panels. Out of curiosity, I wondered whether I could get solar energy by building my own solar panels. I spent a couple of weeks investigating online how to make my own solar panels, and I would like to share what I found with anyone else who is thinking of building do-it-yourself (DIY) panels.

The idea of being able to generate my own carbon-free energy is very enticing. I live in a country where solar energy comprises only 0.25% of the national grid’s electrical capacity and bad public policy is currently deepening the country’s dependence on fossil fuels. Perhaps my desire to build a solar panel is born out of my sense of frustration at the powerlessness I feel to change the dirty development and environmentally-destructive policies being promulgated by the Bolivian government. I feel like I have to do something, however small it may be, to resist the relentless march toward the destruction of the planet and humanity’s role in that destruction. In this context, the idea of being able to build my own solar panels and participate in the democratization of energy is very empowering.


The global production of electronic devices over the last decade

Electronic devices increasingly dominate the way humanity interacts and creates, so understanding what is happening in the electronics industry as a whole is a key component to understanding humanity’s future. Whether most humans will be interacting through desktop PCs, wearable smart devices or processors embedded in buildings and cars in the future will have a big impact on human society and how it functions.

Software, networks, communication protocols, media and everything else which runs on electronics are increasingly redefining and becoming embedded in human culture. This phenomenon is not new. To take just one example, look at how the evolution of electronics has transformed human politics. The advent of radio allowed political leaders such as Roosevelt and Hitler to transmit their words directly into people’s homes, so they became a personal presence in people’s lives. The advent of the color television made people intimately aware of the visual features of politicians, so a youthful, telegenic man like John F. Kennedy could win a televised debate. Statistical analysis and number crunching by computers and software models have transformed how political campaigns are waged and who is targeted by those campaigns. The rise of social media and billions of mobile devices made it possible for left-wing candidates such as Jeremy Corbyn and Bernie Sanders to bypass the traditional media and appeal directly to their base, but it has also given voice to ultra-nationalism and bigotry on the right.

More overlooked is the fact that electronics is an enormous consumer of energy and resources. Despite the small size of its components, the fabrication and use of electronics has an alarming impact on the environment, far beyond its physical size. It is easy for humans to grasp the environmental significance of construction, transportation, agriculture or extractive industries, because buildings, automobiles, fields and mining pits are tangible, large in size and easy to visualize. It is not easy to visualize the movement of electrons through circuits or the generation of those electrons in distant power plants. As electronics becomes increasingly nanoscaled and its processing moves to remote server farms away from the public eye, it becomes easier to overlook the impact of electronics on the environment.

In an effort to better grasp the scope of these impacts, both societal and environmental, it is necessary to first ask how much the global electronics industry is producing and what the trends in its production are. These basic questions are remarkably hard to answer, because most electronics firms do not release production numbers out of fear that they will negatively impact their stock prices or reveal too much information to their competitors. It is telling that the only significant maker of phones, tablets and PCs to consistently release its production numbers is Apple, which enjoys a protected niche where it controls its own hardware and software, so it is shielded from competition. The producers of game consoles used to release their production numbers, since the producers of games needed to know the potential market size for their games. Now, Sony and Microsoft only sporadically release the total lifetime number of gaming consoles as part of an occasional press release, so production is impossible to track year to year or quarter to quarter.

Most of the production numbers in the electronics industry are compiled by market intelligence firms such as International Data Corporation (IDC), Gartner, IHS, etc., which are loath to release too much to the public. Instead, they release just enough information to garner headlines on tech news sites and to convince people to fork over thousands of dollars for market reports, whose details they are legally forbidden to share. What is publicly released provides little historical context, since the press releases generally focus only on one quarter or year and its growth rate compared to the previous time period. By stringing together a whole series of these press releases, it may be possible to construct an idea of change over time, but the market intelligence firms often change their definitions of what is being counted and delete old press releases from their web sites.

Trying to piece together the puzzle with publicly accessible information can be a very frustrating task. The rivalry between Gartner and IDC to be the premier intelligence firm for PCs, smartphones and tablets leads them to consistently publish the number of units shipped every quarter, but other sectors of the electronics industry only merit an occasional press release every couple of years. Often these press releases contain a growth rate or an expected production number, without providing a single datum of historical production. Nonetheless, there is often enough to piece together a sequence over time with some interpolation and educated guesses.

The overwhelming trend of the electronics industry since its inception has been growth based on a smaller and often cheaper form factor displacing most of the market for the previous form factor. Hulking mainframes were displaced by mini-computers and terminals in the late 60s and early 70s. Those in turn were displaced by personal computers and networks in the late 70s and early 80s.

On those personal computers, the bulky RS-232, DB-25 and VGA ports were replaced by smaller FireWire, USB, DisplayPort, HDMI and Thunderbolt ports, which in turn are now being replaced by even smaller micro-USB, micro-HDMI, Lightning and USB Type-C ports, the last of which threatens to replace them all.



Given how loath these firms are to release information publicly, most of what is known about the global electronics industry is fragmentary, so I started to compile the available numbers myself in order to understand global production.

The global production of advanced electronic devices dropped in 2016 for the first time since the economic downturn of 2008-9. The number of smartphones, smart wearables (such as the Apple Watch), camcorders and handheld game consoles grew in 2016, but the production of 2,817.3 million electronic devices in 12 different categories was 2.8% less than in 2015.


Over the last decade smartphones have eaten away at the market for most of the types of electronics listed in the table above. Once smartphones began to be produced on a massive scale starting in 2007, they largely replaced the market for PDAs, cameras, camcorders, portable media players, GPS devices and handheld game consoles. Global production peaked in 2008 for portable media players, handheld game consoles and portable GPS devices, and in 2010 for cameras and camcorders. These devices have largely been relegated to niche items for specialty markets.

The cheap point-and-shoot cameras which were so popular a decade ago have mostly disappeared from the market. Most cameras being sold today are more expensive models with better zooms, sensors and image processors than found in a standard smartphone. According to CIPA, only 6.7% of digital cameras produced in 2006 contained an interchangeable lens, whereas that percentage had grown to 47.8% a decade later in 2016.

Likewise, the market for standard camcorders has also largely disappeared, as most consumers now have a smartphone for low-quality filming. There is still a good market for professional-quality camcorders, but almost all the growth in recent years has been in action cameras, known as “action-cams,” that are waterproof and can be worn unobtrusively on the body. Frost & Sullivan estimates that 62% of the camcorders produced in 2016 were action-cams.

The same relegation to niches is occurring for GPS devices. According to IHS iSuppli, global production of GPS devices peaked in 2008 at 42.08 million devices. For many consumers, the maps on their cell phones provided by Google Maps, Waze, Apple Maps or OpenStreetMap are good enough to avoid buying a dedicated GPS device from a manufacturer such as Garmin or TomTom. GPS devices have been forced to increase the quantity and quality of their offline maps in order to differentiate themselves from the free online maps that come with most smartphones and tablets. The need for greater offline storage capacity and higher resolution screens in these devices has increased their manufacturing costs, so they often cost as much as a mid-range smartphone with less functionality. There is still a niche market for people who need a navigation device to drive in places with cellular dead zones or who have limited cellular data plans, but it will become increasingly difficult to justify a dedicated GPS device in the future as cellular data plans continue to get cheaper and the data collection in online services such as Google Maps provides better real-time information about traffic and road closings.

Although Garmin remains the leader in the shrinking car navigation market, most of Garmin’s focus today is on the growing market for wearable GPS devices that can also track biometric information such as heartbeats, running steps, golf swing speed, swimming strokes, etc. While Garmin can charge a premium for these fitness wearables, the market is limited, and cheaper devices from companies like Fitbit are encroaching on it. Smartphones are also incorporating biometric sensors and becoming thinner and more waterproof, so it may be just a matter of time before they absorb this market as well. Like camera and camcorder manufacturers, GPS device makers have been forced to retreat to the high end of the market, and it will become increasingly difficult for them to compete with the network effects of online mapping services.

Further analysis will follow, but for now here is the data:


The short-sighted missteps of the server companies

Apologists for Capitalism are wont to wax eloquent about the creative destruction they see in the tech industry. They see the vertiginous rise and fall of tech companies in Silicon Valley as a beautiful system that weeds out the laggards who aren’t nimble enough to keep adapting, while rewarding the creative innovators with huge payoffs.

Frankly, I see the skyrocketing stocks and crashing failures of the tech industry as a condemnation of how modern Capitalism functions. The erratic fortunes of the tech companies generate a tremendous amount of stress in the lives of the people who work in these companies. The directors of tech companies often make decisions based on short-term profit margins, raising the stock prices or cashing out those stocks, rather than producing a quality product or service and working toward long-term goals that will help the company grow in the future and provide stable employment for the employees.

We can see this destructive dynamic playing out currently in the server business. Fifteen years ago, IBM was the undisputed leader in the server business. It had a long tradition of offering quality servers, which were pricey, but its engineers were known for the high quality of their support and services. IBM was also renowned for offering the best line of PCs for enterprise, which came with excellent support and long-term warranties. IBM’s ThinkPad and ThinkCentre lines were highly sought-after PCs, due to their engineering excellence and sturdy construction. The ThinkPad laptops generated a special kind of brand loyalty among engineers and geeks, who took exceptional pride in owning the coveted boxy, black devices. Unfortunately, PCs were turning into mass market devices with slim profit margins under 3%, so IBM’s PC business wasn’t earning the company very much.

Still, as the inventor of the PC, with a long tradition of quality engineering and reliability, IBM found that its PCs added a certain cachet to the reputation of the company. IBMers knew that HP and Dell might move more PCs, but they could take pride in the fact that they offered quality PCs and people trusted them to provide the best support in the industry. More importantly, IBM’s PC business gave the company an entryway into businesses to sell them more lucrative contracts in other areas. The support contracts for the PCs were a vehicle for Big Blue to talk to companies about its other IT services, where IBM did earn large profit margins. Having a PC business allowed IBM to offer comprehensive IT services for companies and helped keep its competitors HP and Dell away from its clients.

Rather than thinking about PCs as an essential piece that helped enable its server and software businesses, the directors of IBM fixated on the fact that PCs were being commoditized with low profit margins. They decided that IBM should only focus on areas with high profit margins, so in 2004/5 they sold the PC business to Lenovo, a Chinese original design manufacturer that had been building IBM’s ThinkPads since 2002.

IBM essentially shot itself in the foot, although it would take a while for that fact to become evident, so the managers at IBM could pat themselves on the back for increasing their profit margins and shedding many costly employees in North America and Europe, whom they passed to Lenovo. In addition, they gained entry to the growing Chinese market, because Lenovo promised to direct its Chinese customers toward IBM’s server business. It looked like a great decision on paper, but in the long term, divesting from the PC business helped to undermine IBM’s profitable server business. Not only did IBM help establish Lenovo as a major provider of PCs to enterprise, but it also gave Lenovo a vehicle to start offering its own servers to many clients of IBM and to become a major competitor which undercut IBM in the x86 server market. By no longer providing PCs, IBM lost contact with many potential new clients for its server business, and it gave its existing clients a reason to start talking to HP, Dell and its new competitor Lenovo for their IT services, since IBM could no longer offer a comprehensive IT solution for businesses.

After selling its PC division to Lenovo, IBM gradually lost market share in its server business, especially among x86 servers, where all the growth in the industry was occurring. IBM’s biggest profit margins lay in mainframes and in AIX on the POWER architecture, but the market share of both mainframes and UNIX servers was already in long-term decline, and that decline further accelerated after the economic crisis of 2008/9, as many companies sought to reduce their IT budgets by switching to cheaper x86 servers running Linux or Windows, by reducing the number of servers through virtualization, and by outsourcing their servers to third-party clouds.

While IBM maintained its formidable advantages in big iron, only a select number of companies and governments now needed mainframes. Much of the computation formerly conducted on mainframes moved to distributed networks of low-end x86 servers. High performance computing is increasingly moving to the cloud, where IBM certainly competes, but cloud computing is a cut-throat business dominated by Amazon, Google and Microsoft. Moreover, many of the remaining mainframe customers were now located in China, where the government was eager to promote national companies, so firms found fewer and fewer reasons to buy old-style mainframes from IBM.




Questioning the moral authority of the Catholic church

The Catholic Archdiocese of Kansas City in Kansas recently decided to sever its ties with the Girl Scouts. Instead, the Archdiocese will support the American Heritage Girls troops, which is a Christian-based scouting program. It is decisions like this that alienate me from my Catholic faith and make me question why I should invest much time or energy in organized religion in general.


The rise of the Debian distro family of Linux over time

DistroWatch currently ranks Linux Mint, Debian GNU/Linux and Ubuntu as the first, second and third most popular distributions, respectively, based on the number of times their pages are visited on the DistroWatch web site. Ubuntu takes a snapshot of the Debian unstable repository every 6 months and adds its changes on top. Mint adds a few packages, but the rest of its packages it gets directly from the Ubuntu repositories. In other words, Ubuntu is the child of Debian and Linux Mint is its grandchild, but they are all part of the same distribution family.

It is very difficult to measure the use of a Linux distribution, since few installations of Linux have paid for a support contract or license from Red Hat, the SUSE division of Micro Focus, Canonical or one of the other Linux companies. Annual reader surveys, such as the poll in Reddit’s r/Linux group, sample readers who are probably not representative of the average Linux user. For example, Slackware was selected as the desktop distro of the year for 2017 in one such survey, and Arch won the last r/Linux poll conducted in August 2015. There is likely a self-selection bias in these polls, since both Slackware and Arch users are reputed to be more hardcore and dedicated than the average Linux user, so they are more likely to participate in these online forums and to take the time to vote in these polls.

People go to DistroWatch to find out information about a distribution and to see which version of software is installed in each release of that distribution. The number of page hits for each distro at DistroWatch is probably the best measure currently available of a distro’s relative popularity, since no scientific polls have been conducted on Linux usage and the only reliable data covers limited areas such as Linux images in Amazon’s Elastic Compute Cloud (EC2) or downloads of Vagrant boxes.

Unfortunately, DistroWatch does not have pages for most of the embedded distros, such as Android, Sailfish OS, OpenWRT, and OpenEmbedded-Core, nor does it cover all the netbook distros like Chrome OS and its derivatives. DistroWatch’s statistics are probably a little biased against commercial distributions, because people seeking information about these distros are more likely to go straight to their web pages rather than to DistroWatch. When searching for “Red Hat Linux”, “SUSE Linux”, “Oracle Linux”, and “Ubuntu Linux”, Google offers their DistroWatch page as the 15th, 146th, 5th and 109th option in the search results, respectively. In contrast, when searching for non-commercial distros, such as “Debian Linux”, “Mint Linux”, “Solus Linux”, “Slackware Linux”, “Gentoo Linux” and “Arch Linux”, Google offers their DistroWatch page as the 3rd, 5th, 5th, 6th, 6th and 7th option, respectively. It appears that SUSE and Ubuntu (and maybe Red Hat) are using techniques to get news about their distros listed higher in Google searches, so DistroWatch appears later in the list, whereas Oracle doesn’t bother gaming the search results.

With these caveats in mind about the DistroWatch statistics, it is still interesting to observe how the relative position of the Linux families have changed over time. In 2002, when DistroWatch first started keeping statistics, Mandrake (a derivative of Red Hat) and Red Hat were the first and second most popular distros. Red Hat and its derivative distros received 34.4% of all page hits at DistroWatch, and the rpm family in general received 55.4%. In contrast, the Debian family received just 13.5% of page hits.


Today, the relative position of these two families has entirely reversed. In 2016, the Debian family received 50.8% of page hits, compared to 9.2% for Red Hat and its derivatives. The rpm family as a whole, which includes all the derivative distros from Red Hat, SUSE, Mandriva, Caldera and a few independent distros that use rpm packages, received just 16.9% of page hits.

Examining the relative popularity of each of the rpm branches shows that Red Hat is still the leader in the rpm family, but it has lost significant ground over time. Between 2002 and 2006, when Red Hat Linux split into Red Hat Enterprise Linux (RHEL) and Fedora Core, the popularity of the Red Hat subfamily dropped dramatically from 34.4% to 14.4% of page hits on DistroWatch. Since that time, both RHEL and Fedora, as well as their derivatives, have gradually lost ground in the DistroWatch rankings.

The Red Hat derivative, Mandrake, which was renamed Mandriva after it merged with Conectiva, was the top ranked distro in 2002 – 2004, but then it slowly dropped in the rankings, falling to number 10 in 2011, when Mandriva S.A. went bankrupt. Mandriva’s community-based derivatives, PCLinuxOS, Mageia and OpenMandriva Lx enjoyed some popularity after the collapse of Mandriva, but they have all gradually declined in popularity over the last 5 years.

At the same time that Red Hat’s popularity crashed between 2002 and 2006, it was largely replaced by Debian derivatives. Between 2002 and 2003, KNOPPIX, which is a live CD based on Debian, skyrocketed from 21st to third in the DistroWatch ranking. Although there had been Linux live CDs before, such as Yggdrasil, KNOPPIX did it far better than previous attempts. It incorporated excellent hardware detection, plus it included proprietary firmware, networking and system recovery tools, so KNOPPIX became an essential tool for checking whether it was possible to install Linux and for fixing a borked system. KNOPPIX inspired many derivatives and convinced many Linux users to switch to the Debian family, but its own popularity waned once its live CD functionality was incorporated into other distributions.

KNOPPIX was followed by another Debian derivative named Ubuntu, which was founded by the South African multi-millionaire Mark Shuttleworth. In 2005, just a year after its founding, Ubuntu had become the number 1 distro, according to DistroWatch. Linux Mint followed more slowly in the footsteps of its parent, Ubuntu, but by 2008 it was occupying the third spot in the rankings, which it held for the next 3 years. Ubuntu’s switch to the Unity desktop in 2011 alienated many of its users, who departed en masse for Linux Mint, which was developing the MATE and Cinnamon desktops for people who resisted the radical change of the GNOME 3 Shell and Ubuntu’s Unity. Due to its promotion of a traditional desktop that was familiar to most Linux users, Linux Mint jumped to the number 1 spot in 2011 and has held it ever since. What is surprising is the fact that stodgy Debian also overtook Ubuntu in 2015 and has been the number two distro ever since.


Another way to measure the popularity of distributions is to count how many derivative distributions are created from them. In 2002, 49% of the active distros tracked by DistroWatch were based on Red Hat, and the rpm family represented 60% of all distros. In comparison, Debian and its derivatives accounted for just 13% of active distros in 2002.


Between 2002 and 2006, there was an enormous surge in the creation of new distros. According to DistroWatch, the number of active Linux distros during this period grew from 96 to 335, and almost half of these new distros were derivatives of Debian. The Debian family grew from 12 to 131 distros between 2002 and 2006, and that number has remained steady ever since. In contrast, the number of distros based on Red Hat grew from 47 in 2002 to 75 in 2006, but it gradually fell to 33 in 2016. The Mandriva-based distros peaked at 18 in 2007 and have since fallen to 6. The number of SUSE-based distros also peaked at 7 in 2007 and fell to 3 in 2013. Nonetheless, SUSE Linux Enterprise and especially openSUSE have enjoyed a startling resurgence in recent years. Five new distros based on openSUSE appeared in 2016, which has helped arrest some of the decline in the rpm family.


The growing dominance of the Debian family is due partly to the missteps of the companies Red Hat, SuSE and Mandriva. Red Hat essentially gave up on the Linux desktop when it split Red Hat Linux into RHEL and Fedora. The RHEL kernels were too out of date for modern hardware, and its repositories were too limited and out-of-date to supply the software needed by desktop users. Fedora was too bleeding edge and not user-friendly enough to be an adequate distro for desktop users. The purpose of Fedora was to be a testing ground for software that would eventually find its way into RHEL, not to provide a compelling user experience and grow the total number of Linux users.

Red Hat grew to be the biggest Linux distro in the mid-1990s by creating the rpm package manager, which made it easy to install and uninstall software, and the Anaconda installer, which auto-detected the hardware and provided a user-friendly graphical interface to install Linux. By focusing on making Linux easy to use and providing a good desktop experience, plus acquiring the largest assemblage of Linux engineers and consultants, Red Hat established itself as the most important distro and the largest Linux company, but it wasn’t generating much profit. The dot com bust of 2000-2002 wiped out most of the new Linux companies, and the trauma of that experience turned Red Hat into a conservative company that focused exclusively on short-term profits and the sectors that were generating revenues, such as servers and software for compilers, Java and internet infrastructure. Rather than taking a long-term gamble on the Linux desktop and trying to promote Linux as an alternative to Windows and Mac OS, Red Hat started taking measures in 2002 that made it harder to use its distro on servers without paying the company. In 2003, the company created a new server distro in which only paying customers could access its repositories. By 2003, Red Hat was generating profits again, and its revenue has grown roughly 15% a year ever since. Red Hat’s exclusive focus on the profitable sectors of the Linux stack has turned it into a tech giant with 10,000 employees and a market capitalization of over 15 billion dollars. It also gave Red Hat the resources to hire a drove of talented programmers who have beavered away on many of the essential programs that make the Linux ecosystem work. The Linux kernel, GTK+, GNOME and hundreds of other free/open source programs have benefited from Red Hat engineering over the last two decades.

With the revenues Red Hat was generating from servers, the company had the resources to be the biggest evangelist for Linux on the desktop. It could have turned Fedora into an effective alternative to Windows. It could have contacted every school and offered to install Linux for free. It could have lobbied every PC company to sell machines with Fedora pre-installed. It could have built a Linux industry coalition that lobbied governments around the world to enforce anti-monopoly regulations on Microsoft. It could have used that coalition to cajole or force every hardware manufacturer to either create free/open source drivers or to hand over the specs so Red Hat could create them. Red Hat could have turned the Linux Foundation into an organization that advocated for Linux on the desktop and for the rights of users, rather than just advancing it as a tool to build servers and embedded devices. It could have expanded the Open Invention Network to cover more user interface patents and software applications.

If Red Hat had put its resources into promoting Linux on the desktop, it would have been a less profitable company, at least in the short term, but it would have established its distro as the standard Linux that every proprietary software maker could target.





Nonetheless, it is hard to criticize Red Hat on financial grounds, since Red Hat Enterprise Linux proved enormously successful.


Red Hat was superseded to some degree in 1998 by the French distro Linux-Mandrake, which originally used Red Hat’s repositories, installer and package manager, but added easier configuration tools called Drakes and a more compelling desktop on top of Red Hat, in the same way that Linux Mint today adds its Cinnamon desktop on top of Ubuntu. The popularity of Linux-Mandrake, which was later renamed Mandrake Linux and finally Mandriva Linux, helped to draw new Linux users to the rpm family, and many of them ended up using Red Hat servers and learning the Red Hat way of doing things. Red Hat’s decision to focus exclusively on the profitable server market is understandable, but it meant that Red Hat stopped dedicating resources to winning over new Linux users. After the dot com bust of 2000-2002, SuSE couldn’t raise enough capital, so it was bought up by Novell, which mismanaged the company. Nonetheless, Novell also bought Ximian in 2003 and continued to invest in Linux for the enterprise desktop, unlike Red Hat, which improved the popularity of the green lizard among Linux users. In 2006, when Novell made a deal with Microsoft, openSUSE was ranked number 2 by DistroWatch, but paying Microsoft for its intellectual property alienated many Linux users, who worried that SUSE was establishing a precedent that would damage Linux as a whole. openSUSE has bounced between number 3 and 5 in the rankings ever since the deal with Microsoft, but it has generated remarkably few derivative distros.

[more to come]

The problem of installing the Rust compiler

Most of the time Linux “just works,” but sometimes all the pieces don’t play together nicely. After installing the Rust compiler, I couldn’t get it to run without typing out the full path to the program. If you are coming from the Windows world, this is the expected behavior because Windows is a brain-damaged OS, but in the Linux/UNIX world we expect commands to work everywhere without including the path. So I filed a bug report with the LightDM display manager and with the Rust compiler to ask them to set the PATH correctly.
My prediction is that LightDM will ignore my bug report (since it is developed by Ubuntu which tends to ignore bug reports) and the people at Mozilla who develop Rust will say, “It’s not our problem if LightDM doesn’t want to conform to the convention of putting session configuration in $HOME/.profile.”
And I will get annoyed that I just wasted an hour of my time tracking down this bug and reporting it, rather than getting some extra sleep that would make me a much happier camper tomorrow.
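In the meantime, the conventional workaround is to set the PATH myself in $HOME/.profile, which login shells and most display managers source. Here is a sketch, assuming Rust’s binaries landed in the default ~/.cargo/bin location that rustup uses:

```shell
# Add the Rust toolchain directory to the PATH via ~/.profile.
RUST_BIN="$HOME/.cargo/bin"   # default rustup install location (assumption)

# Append the export line only if it isn't there already
# (-s keeps grep quiet when ~/.profile doesn't exist yet).
if ! grep -qs 'cargo/bin' "$HOME/.profile"; then
    printf '\nexport PATH="%s:$PATH"\n' '$HOME/.cargo/bin' >> "$HOME/.profile"
fi

# Apply it to the current shell without logging out:
export PATH="$RUST_BIN:$PATH"
```

Note that LightDM only reads ~/.profile at login, so the change takes effect for graphical sessions after logging out and back in.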

A preliminary review of the Rust programming language

The Mozilla Foundation has been developing an exciting new programming language named Rust, which is designed to be a low-level language capable of matching the performance of C/C++, but with the safety of Java, the concurrency of Go, and many of the modern features of high-level languages like Erlang, Haskell, and OCaml. After reading the documentation and playing with bits of the language, I find myself struggling with some of the concepts of the language.
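The concept that gives me the most trouble is ownership: every value has a single owner, and plain assignment moves it rather than copying it. A minimal sketch of my own (not taken from the official docs) to illustrate:

```rust
// Ownership: a String is moved, not copied, on assignment.
fn take_ownership() -> String {
    let s = String::from("hello"); // s owns the heap-allocated buffer
    let t = s;                     // ownership moves to t; s is no longer usable
    // println!("{}", s);          // compile error: borrow of moved value `s`
    t                              // ownership moves out to the caller
}

fn main() {
    let greeting = take_ownership();
    let r = &greeting;              // an immutable borrow does not move the value
    println!("{} {}", greeting, r); // prints "hello hello"
}
```

The borrow in `main` is what makes the last line legal: `r` only references the value, so `greeting` remains valid alongside it.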