Free Software EULAs?

Ubuntu is now being forced to show a EULA before letting users run Firefox, on pain of losing the rights to the Firefox trademark.  (You know, End User License Agreements: those pop-ups Windows and Mac users have to put up with all the time, with the big “I Accept” button at the bottom.)  Mark Shuttleworth, Ubuntu top dog, weighs in on the bug:

Please feel free to make constructive suggestions as to how we can meet Mozilla’s requirements while improving the user experience. It’s not constructive to say “WTF?”, nor is it constructive to rant and rave in allcaps. Your software freedoms are built on legal grounds, as are Mozilla’s rights in the Firefox trademark. To act as though your rights are being infringed misses the point of free software by a mile.

This is a bit surprising, and a bit disappointing.  Both the decision itself, and Mark’s take on it, are quite wrong.

One of the most important benefits of free software is the legal environment you work in.  You don’t have to agree to some long contract every time you need to do something new on your system, or sometimes even when you get a “critical update” to something you’re already doing.  You don’t have to read pages of legalese, or go through some long process with your company’s legal department, or just click the “make it go away” button with this vague unease that you’ve just signed your first-born child away to the Devil.

Most importantly, you feel like you actually own your computer when you run free software on it.  When you enter a situation where you always have to ask permission to do things, and have to be constantly reminded of the rules, you don’t feel comfortable.  Clearly, the thing in front of you is not yours, whatever your credit card bill might say; if it were, there wouldn’t be all this stress over you doing something the real owners don’t like.  Free software returns your computer to you, by guaranteeing that you don’t have to enter into all these contracts before you can use it.

Well, unless that “free” software is Firefox 3.0.2 or later, it seems.

It’s “free” by a technical definition (you can strip the Firefox trademark rather easily, and get rid of the EULA as well).  But when users fire up Ubuntu, and decide to do some browsing, and get confronted with pages of legal garbage and ALL CAPS, they will ask: “What’s so different about this open source stuff?  I thought I was getting rid of all this legal crap.”  And, suddenly, they’re slogging through the same drudgery they had to endure with every Windows service pack, and they wonder what they’ve gained.

Perhaps there is a price we should be willing to pay to help Mozilla preserve their trademarks, but this price is too great.  Mozilla should never have asked this of us, and Ubuntu should never have decided, on our behalf, that this price was acceptable.

Debian has already turned its back on Firefox, and I have yet to have a problem with Iceweasel (the branding Debian chose for its Firefox-alike) that was caused by the branding change.  But I’m tempted to bring it back, in Debian’s “non-free” software repository.  Perhaps we could provide Firefox, complete with nasty EULA, but launch Iceweasel instead of Firefox if the user clicks “No”.  There are probably all kinds of reasons why this is a bad idea, but I’m still drawn to the idea of illustrating how silly and useless click-through EULAs are.
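
For what it’s worth, the mechanics would be trivial; a purely hypothetical wrapper along these lines would do, where the EULA path, the stamp file, and the xmessage dialog are all inventions for illustration:

    #!/bin/sh
    # Hypothetical launcher: show the EULA once, fall back to Iceweasel on "Decline".
    # None of these paths or packaging arrangements are real; illustration only.
    EULA=/usr/share/doc/firefox/EULA.txt
    STAMP="$HOME/.firefox-eula-accepted"
    if [ ! -e "$STAMP" ]; then
        if xmessage -buttons "Accept:0,Decline:1" -file "$EULA"; then
            touch "$STAMP"
        else
            exec iceweasel "$@"
        fi
    fi
    exec firefox "$@"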

But it would be much more productive for Mozilla to back down, and not ask us to sacrifice such a large part of our identity on the altar of their sacred mark.

UPDATE: First, I notice I was remiss in not giving a hat tip to Slashdot.

Second, Mark has posted another comment on the bug.  I encourage people to read the whole comment, but here’s a telling part:

For example, at the moment, we’re in detailed negotiations with a company that makes a lot of popular hardware to release their drivers as free software – they are currently proprietary. It would not be possible to hold those negotiations if every step of the way turned into a public discussion. And yet, engaging with that company both to make sure Ubuntu works with its hardware and also to move them towards open source drivers would seem to be precisely in keeping with our community values.

In this case, we have been holding extensive, sensitive and complex conversations with Mozilla. We strongly want to support their brand (don’t forget this is one of the few companies that has successfully taken free software to the dragons lair) and come to a reasonable agreement. We want to do that in a way which is aligned with Ubuntu’s values, and we have senior representatives of the project participating in the dialogue and examining options for the implementation of those agreements. Me. Matt Zimmerman. Colin Watson. Those people have earned our trust.

On the one hand, yes, I believe that the Canonical people have earned our trust, and I do appreciate the utility of quiet persuasion with a proprietary software company that doesn’t understand our community.  On the other hand, I had been under the impression that Mozilla was not a proprietary software company, and didn’t need persuasion and secret negotiations to see our point of view.

Is Mozilla still a free software company, or not?

UPDATE 2: Cautious optimism is appropriate, I think.  Mitchell Baker, Mozilla chair:

We (meaning Mozilla) have shot ourselves in the foot here given the old, wrong content.  So I hope we can have a discussion on this point, but I doubt we’ll have a good one until we fix the other problems.

The actual changes aren’t available yet, and I wonder how much of this had been communicated to Canonical beforehand.  Still, it’s a good sign.

The End

I was a little surprised to see a message of thanks to me and my old Progeny colleagues. Unfortunately, the news at Progeny’s home page was not good:

We are sorry to inform you that Progeny Linux Systems, Inc. ceased operations April 30, 2007.

It’s always a little sad to see a former employer go away, even when you feel the company brought its troubles onto itself. Imagine how much worse it is to see something die that you thought had a lot of potential, with fabulous co-workers, above-average management, and really good ideas. It’s often been said that competence and vision are not sufficient for success; without getting into the details, Progeny is now Exhibit A in making that case for me.

I am grateful for having worked there, and am proud of what we accomplished. It wasn’t easy surviving the dot-com bust and building a new business model for ourselves. And it’s certain that I wouldn’t be where I am today without the opportunities Progeny gave me.

I wish my former colleagues well as they find new jobs. Nearly everyone who passed through Progeny was top-notch, and would make excellent hires.

New Debian Release

The old testing release is now Debian 4.0:

The Debian Project is pleased to announce the official release of Debian GNU/Linux version 4.0, codenamed etch, after 21 months of constant development. Debian GNU/Linux is a free operating system which supports a total of eleven processor architectures and includes the KDE, GNOME and Xfce desktop environments. It also features cryptographic software and compatibility with the FHS v2.3 and software developed for version 3.1 of the LSB.

That last bit needs to be proven, which I’ll be doing this week.

Getting the Message Out

From a Fluendo employee:

Are we evil that we don’t take more hours out of our day to build on glibc 2.3 ? You bet, we are cold heartless bastards. But in reality 90% of the people on glibc 2.3 are users that have an upgrade path to a more recent version of their distro; the other people are future Debian Etch users. I’m sure the Etch releasers have convinced themselves of the usefulness of not releasing with a glibc 2.4 that is more than 15 months old, and instead opt for an even older series, even before they actually release. But I am starting to wonder more and more who the people are that are waiting for a release like this.
Realistically speaking, it is possible that we may add glibc 2.3 plugins in the future if we see that more than just Debian is affected. We are not against taking your money for giving you a service that works. But the hours in our day are just as scarce as they are in yours. I just wanted to explain this to people that want to know, to take away your incentive to complain about a nameless faceless Company being Evil to you.

Elsewhere, we learn why Debian is so “backward”. In sum: upgrades from the current stable would break with 2.4, and not all Debian architectures are supported well by 2.4.

I suspect the market for Linux multimedia plugins isn’t a huge one, and Debian is still popular both for end users and as the basis for other efforts. Given that, doesn’t it make sense not to artificially exclude a whole chunk of your potential market?

Of course, I think I know of someone who could help here…

What Do We Want From Microsoft?

Jason Matusow of Microsoft wants to know:

That said, the real voice of the community is…well…from those of you I don’t know. I have to tell you that the issues with getting this covenant right are incredibly complex and there are real concerns on all sides. Our design goal is to get language in place that allows individual developers to keep developing.

(This is in response to the recent patent deal between Microsoft and Novell, and the poor reception it’s getting from the free software community.)

Unfortunately, he got GrokLaw-ed, and his comment system isn’t taking the heat well. So, here’s my feedback; hopefully, he’s paying attention to views outside his comments.

The big problem, if you ask me, is the distinction between “commercial” and “non-commercial” that Matusow (and everyone else I hear from Microsoft) is making.

In our world, that distinction is a lot less important than the distinction between “proprietary” and “open”. For us, “commercial” is just another way software can be used, and restrictions on commercial use are like restrictions on use by women, or by people in Illinois, or by people who have ever picked their nose in public. Why are businessmen any less deserving of our software as a class than housewives, or Haitians, or other free software developers?

Matusow claims not to be interested in any of this:

We are not interested in providing carte blanche clearance on patents to any commercial activity – that is a separate discussion to be had on a per-instance basis. As you comment, please keep in mind that we are talking about individuals, not .orgs, not .com, not non-profits, not…well, not anyone other than individual non-commercial coders.

Dialogue often means meeting the other person where they’re at, not where you want them to be. They would, presumably, not take us seriously if we insisted on a blanket patent license as a condition for any kind of conversation. Fair enough; but then why should we take them seriously when they insist on us turning our backs on one of our bedrock principles?

But does the conversation have to be either-or? I’m betting that Matusow’s blog post is evidence that it doesn’t. People at his level are not the types to waste time on wild goose chases.

And is it all that strange to think there might be value in the conversation? There’s a mighty thin line between “proprietary” and “commercial”, so thin even we get them confused sometimes. Does Microsoft really care all that much about for-profit use and improvement of free and open tech? If so, they’re prominent members of a small and shrinking club. If not, then it seems to me that we have a lot of common ground for discussion.

Blog Update

Well, it’s been over a month since the last entry. So much for posting more often!

Today, I’ve updated the blog to WordPress 2.0.3, and installed a new theme. I wasn’t too happy with the old Steam theme, but it was a variable-width theme, and I can’t stand fixed-width themes. (Why buy a better monitor if all the Web pages are forced to 600 pixels?) But with the new and improved theme support in 2.0, there are some nice themes that use your whole browser window.

(Posted to all known aggregators, too; I hope Planet doesn’t decide all my posts are new now.)

UPDATE: Well, that was fun; the nice-looking theme happens to be completely invalid. Expect theme changes over the next short while.

UPDATE: Wow, that’s depressing; the state of valid XHTML in WordPress themes is, uh, underwhelming. So I switched back to the nice theme, and edited it to be valid XHTML 1.0 Transitional and valid CSS. I’ve set up a Bazaar-NG repository for my changes.
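
Setting that up is about as simple as bzr gets; roughly this, with the theme directory name being just an example:

    cd wp-content/themes/my-theme
    bzr init
    bzr add
    bzr commit -m "Make theme valid XHTML 1.0 Transitional and valid CSS"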

Clocks Change, World Does Not End

Today, Indiana joined the rest of the country and “sprang forward” to Daylight Saving Time.

The technology world may be experiencing a few glitches. Anything that’s aware of both location and time may have the wrong time as of today, including many computers. The easiest fix is to change the timezone to New York or plain Eastern time instead of “East-Indiana” or some such variant.
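
On a Debian-style system, for example, the fix amounts to something like this (a sketch; other distros have their own tools for the same job):

    cp /usr/share/zoneinfo/America/New_York /etc/localtime
    echo America/New_York > /etc/timezone
    # sanity check: make sure your zone data knows the new Indiana rules
    zdump -v America/Indiana/Indianapolis | grep 2006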

Surprisingly, Sprint phones haven’t all picked up the time change. The update seems to be both late and iffy; my phone still reports the wrong time, while my mother-in-law’s phone has already fixed itself.

Debian 3.1 appears to still have the old timezone information, while testing (“etch”) seems to be correct. I wonder if this isn’t something we should update in stable.

Autopackage Goes Insane

A while back, I wrote about a system called Autopackage, which attempted to solve some of the problems with software installation on Linux. I had some praise and a few criticisms of the project, and some of the autopackage people came by and discussed some of them. I still get new comments on that post every so often, mostly of the “if you don’t like autopackage, don’t use it” variety.

Autopackage has attracted a lot more criticism over time, and it seems that criticism has driven at least one autopackage person completely batty. Apparently, nearly everything violates their idea of how the world should work: package managers, Python, C++, the standard C library, the ELF executable file format, and the dynamic linker, at least.

Others have observed their poor attitude, and have pointed out inaccuracies.

The whole incident is frustrating. Autopackage does some things well. Their efforts to solve binary compatibility problems, for example, have resulted in some seriously cool utilities. But they seem to have an inflated opinion of themselves, and it closes their minds to working with others. With me, it was the idea that distributor support could possibly be desirable for users. This seemed to be a totally alien concept to them.

I do want to emphasize Klik, though (from Erich’s link). It appears to solve many of the same problems, but without insisting that the entire software infrastructure behind Linux adapt to it.

LSB Distro Testing

I’ve seen several requests for a simple set of instructions that test a distribution against the LSB. I wrote some Debian-specific instructions in a mailing list post back in October, and thought they’d do better as more general instructions.

Before you start, you have to find out two things about the distribution you want to test:

  • How to install the LSB on your distribution. At minimum, your distro should provide something like an “lsb” package. Make sure that’s installed.
  • How to install LSB packages on your distribution. RPM-based distros have it easy here, since the procedure is likely to be the same as installing any other package on your system. Debian users (and users of Debian derivatives) need to use alien; I’ve found that you get the best results with alien -ick [package]. Other distributions will have their own ways, possibly involving alien as well.

Go to the LSB download page and start downloading everything under the Runtime Tests for the LSB version and architecture you want to test. Make sure you don’t get the betas unless that’s what you want to test. You will also need the lsb-python package from the Application Battery list. Install all of these packages.
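
On Debian or a derivative, that works out to something like this (the download directory name is just an example):

    apt-get install lsb alien
    cd ~/lsb-tests      # wherever you saved the runtime test RPMs
    alien -ick *.rpm    # convert and install in one step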

Once all of the packages are installed, run the tests:

  • /opt/lsb/bin/lsblibchk
  • /opt/lsb/bin/lsbcmdchk
  • Add /opt/lsb/appbat/bin to your PATH, run gcc -dumpmachine to get your machine triplet, and run /opt/lsb/share/test/qmtest_libstdcpp_[version]/run_tests. Answer the questions as appropriate.
  • For the X test suites, install enough of XFree86 or X.org on your system for proper client support, as well as the Xvfb X server. Debian users can install the x-window-system-core and xvfb packages, but see below about problems with Xvfb. After this is done, go to the directory /opt/lsb/test/vsw4 and run the run_vsw4.sh script found there.
  • For the main runtime test, make sure the loop driver is loaded. (This is mostly a problem on systems using udev.) After the test package is installed, set passwords for the “vsx0”, “vsx1”, and “vsx2” users, log in as vsx0 at a system console, and run run_tests. Logging in is required; su or sudo won’t work. The default answers to the questions asked are mostly OK, but a few don’t have good defaults. In particular, you’ll have to tell the script the right passwords for the vsx users, and you should make sure the test block device is /home/tet/test_sets/nonexistb. Be prepared to type the root password several times to set up various things the tests need. (A rough consolidation of these preliminaries follows this list.)
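
Consolidated, the preliminaries for the runtime test look roughly like this (the vsx users are created when the test package is installed):

    modprobe loop                                  # mostly needed on udev systems
    for u in vsx0 vsx1 vsx2; do passwd "$u"; done  # set the three test passwords
    # now log in as vsx0 at a console (su/sudo won't work) and run:
    run_tests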

Most of the tests are quick, usually taking less than 15 minutes each. The runtime test takes somewhere in the neighborhood of six hours, and often looks like it has hung. Don’t assume the runtime test has hung until it’s run overnight. If you’re using an emulator, don’t give up on runtime until it’s run for at least 24 hours.

The tests create two kinds of files: journal files, and the official runtime report (created by the runtime test). Where you ran the tests, look for files with names starting with “journal” in the current directory, or for paths like results/0001e/journal. The C++ tests create a directory called “qmtest_libstdcpp_[version]”. There’s a handy utility called tjreport in /opt/lsb/bin; run that on the journal to get a quick summary of the results. If you want to post results, use tjreport. The official runtime report (in /home/tet/test_sets/results) has some additional information: a list of FIP (Further Information Provided) results.
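
For example (the journal path is illustrative, and I’m assuming tjreport takes the journal file as its argument):

    /opt/lsb/bin/tjreport results/0001e/journal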

What to do with failures: First, make sure it’s really a failure of the environment, and not a failure of the tests. The LSB publishes a list of official waivers for test failures; check that your failures aren’t on that list. For failures not on that list, Google is your best resource; most likely, someone else has experienced the same failure, and will have more information about it. If you really can’t figure it out, come over to one of the LSB’s mailing lists and ask around.

On Xvfb: The Xvfb versions shipped with most distributions have bugs that can cause problems. Anything running XFree86 will likely not be able to complete the X tests, and anything running X.org may see a number of failures. Because Xvfb is not required by the LSB, you can replace it with a version that doesn’t have these bugs. Recent Debian xvfb packages for etch don’t have these bugs, so Debian and derivatives can use them for testing. (The sarge backport of Xvfb can be installed on vanilla sarge without upgrading any of the other XFree86 packages; missing packages can also be pulled from the sarge backport without affecting the packages that do ship.)

Please post questions, problems, criticisms, etc. in the comments. A version of this might end up on the LSB site someday, so any improvements would be appreciated.

New Job

Now that the right people have been told, I can make it public: as of January, I’ll be a full-time developer for the Free Standards Group, producers of the LSB.

This is, perhaps, one of the most difficult job decisions I’ve had to make. In every other case where I’ve changed jobs so far, I’ve done so only when it became clear that the old job wasn’t going anywhere: the company was going under, getting radically reorganized, treating me poorly, or otherwise being a dead end. None of that is true here. I still believe in what Progeny is doing. The co-workers are superb (yes, even that one), and management has always treated me well, even in difficult circumstances.

But sometimes, opportunities are too good to pass up. I think that the next year or two will be pivotal to the future of free standards, and I’ll be in a unique position to influence the direction those standards go. Plus, I’ll be able to work with another group of brilliant and talented people, with the hope that some of that brilliance and talent will rub off. I’m also positive about working primarily from home.

So, you can expect more blogging (he says, a month after his last post!), especially about standards in the free software world. I’ve created a new category for that topic, in case you’re interested in following just that conversation.

Yes, the LSB Has Value

Ulrich Drepper slams the LSB, suggesting that binary compatibility is a red herring and that the LSB is incompetent.

I think it best to respond to the substance of the allegations here. To do that, you’ve got to filter out the ad hominem slurs (“…they buy into the advertisement of the people who have monetary benefits from the existence of the specification, they don’t do any research, and they generally don’t understand ABI issues.”), ego (“After they added the 100+ reports I sent and those others sent the test suite is a somewhat good reflection of who a Linux implementation should behave…”), and contradictions (even though the LSB people are incompetent, their experience shows that their goal is unattainable; one would think you need to try with competent people before deciding that something is impossible).

So what are you left with?

  • Test suites have bugs, even embarrassing ones. Are we supposed to be surprised by this? Yes, part of the thread test bug Drepper highlights is pretty stupid. But who can claim to have never written stupid code? Certainly not I, and certainly not Drepper.
  • The tests are incomplete. This is partly because of incomplete specifications and partly because of bugs that cause discrepancies from the specs. By this logic, all software testing is useless for the same reasons. In the cases where the tests are right, can we not depend on them? “Useful” need not imply “perfect”, as all users of glibc today can testify.
  • Waivers are signs of incompatibility. Drepper’s complaint: “The result of all this is that you can have a program which is certified for LSBv3 which doesn’t run on all LSBv3 certified systems, depending on whether the LSB environment worked around the broken test or not.” But software vendors can easily retrieve a complete list of waivers, which allows them to anticipate what discrepancies they’re likely to encounter. Vendors may be ignorant of the issues, but that is hardly the LSB’s fault.
  • Having separate runtime environments for LSB issues is bad. First of all, Drepper seems to think that people doing separate LSB runtimes (like me) are doing it to work around LSB test problems. In Debian, however, 100% of the bugs I work around with the dynamic linker are bugs in Debian, not bugs in the tests, and only one of those bugs is not a glibc bug. Second, if the separate LSB runtime environment is complete, why should it matter that it’s different from the default, as long as it follows LSB behavior?

As a final point, I would look at Drepper’s recommendations for a replacement of the LSB: source specifications, and identical binaries whenever ABI compatibility is an issue. He doesn’t answer the question of whose binaries those should be, probably because he’s happy with the current de facto answer: the binaries provided by his employer, Red Hat. No doubt he would prefer a world without competition, but should the rest of us?

(Seen via Slashdot.)

Installers

Joey Hess is happy to see us using debian-installer for the DCC Core, and speculates about whether Progeny will be moving away from Anaconda.

First of all, I certainly hope we can make useful contributions to d-i. As a first effort, the long-stalled debian-installer module for picax (started at DebConf 4) is now in good shape, and was what we used to build DCC 3.0 PR1. And I can honestly say that d-i has, so far, exceeded our expectations.

On the other hand, there’s no reason why we can’t have more than one installer for Debian. FAI has been around for a long time, after all. Some people like Anaconda’s UI better than d-i’s, and probably will continue to even after d-i goes graphical. As long as people want it, there’s no reason why Anaconda for Debian should die off.

LSB Dynamic Linker Available

It’s now available, and mostly works. There are two source packages and four binary packages available here and here.

One source package builds the dynamic linker itself, as well as the fixed libc. The other fixes a bug in PAM; the pam_unix module returns success instead of the proper error under some circumstances. See Debian bug 323982 for the details. If necessary, the PAM package itself could be patched, but as I’m focused on Debian stable’s LSB compatibility, I’m assuming that there will be resistance to the idea of patching stable in this way.

With these packages installed on top, nearly all of Debian’s LSB problems are resolved. Some exceptions may be found in this post.

Intruder Alert

Well, I dropped off the net for a short while. What happened? I got hacked, that’s what.

So, this is a brand-new installation of WordPress on a brand-new installation of Debian 3.1. The old hard drive is still around for forensics and careful restoration work. So far, I see no sign that the hacker got any farther than my hosted box, which is good.

Not everything is back yet. I hosted a friend’s site, which is still down; hopefully, I’ll get that working again quickly. Things may be a little strange. If you notice anything, leave me a comment.

Off To Finland

I will be traveling to Finland for DebConf 5.

Tomorrow I take off for Helsinki, Finland, to attend DebConf 5, a conference for Debian developers. I should be very well-connected, but on a slightly different schedule due to the time difference, and definitely will be very busy if past DebConf conferences are any indication.

If I have time, I’ll post a little on what’s going on.

A New Approach to the LSB (part 2)

For background, read part 1. This post is going to get a bit technical, so if the first part made your eyes gloss over, you might want to skip this part.

So our goal is to provide an LSB-compatible environment for LSB programs, and an environment compatible with Debian 3.1 for the rest. It seems that we can’t do this using the same system libraries and programs, so we need to use different ones. But how do we convince one environment or the other to load when we need them, and not to load when we don’t?

The key is the dynamic linker: that magic code that finds the shared libraries for the programs we run and puts everything together so those programs can run.  It turns out that the LSB insists on having its own dynamic linker, separate from the rest of the system; runtime environments can’t be LSB-compliant without it, and programs can’t be LSB-compliant unless they use the special LSB linker.  The linker doesn’t have to act any differently from the normal one, so standard procedure is to symlink the regular dynamic linker to the name the LSB requires.
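
You can see the current arrangement on any system with the lsb package installed (the binary name below is made up, and the exact ld-lsb version number depends on the LSB release):

    readelf -l ./some-lsb-app | grep interpreter   # LSB apps request an ld-lsb.so.* interpreter
    ls -l /lib/ld-lsb.so.*                         # today, normally just a symlink to the stock linker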

But the possibility has always been there to use an entirely different linker for LSB programs than for non-LSB programs, and even to use different linkers for different LSB versions. So, the solution is obvious: instead of symlinking the standard linker, we provide a slightly modified linker for LSB programs.

What do we mean by “slightly modified”?  One possibility: cutting-edge versions of libraries could be stored in a separate location on the system, where no regular application will see them.  Our LSB linker could then prefer libraries from the special paths to the normal ones.  Another: programs which must run differently under the LSB can be compiled differently and live in their own special location, which is preferred to the normal one in the LSB context.
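
As a rough approximation of the first idea, you can already ask a linker to prefer an alternate library directory by invoking it directly; the /opt/lsb/lib path here is an assumption, not something that exists yet:

    /lib/ld-linux.so.2 --library-path /opt/lsb/lib:/usr/lib:/lib /path/to/lsb-app

A patched LSB linker would effectively bake that preference in, so LSB programs would pick up the newer libraries without any such incantation.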

So far, this is vaporware. I’m still wrestling with the source code to glibc to see how difficult it would be to do this. Once done, however, LSB compliance and compatibility with Debian might no longer be conflicting goals.

A New Approach to the LSB

Debian cannot easily keep up with the Linux Standard Base (LSB), because it changes too quickly. So I know Debian has to provide a way to run LSB programs.

One of my responsibilities at work is the status of Debian (and Progeny’s Debian-derivative distributions, such as Componentized Linux) with regards to compliance with the Linux Standard Base (LSB). This has been very frustrating at times.

Many of the problems occur because the LSB has a less conservative position regarding core updates than Debian does. For example, the current LSB standard (2.0) pretty much requires version 2.3.3 of the standard core library (“glibc”), one version newer than the one shipped in Debian 3.1 last week. The prerelease standard (3.0) brings a whole new set of problems: it requires glibc 2.3.4, the new tests for the C++ programming language seem to have problems with the standard C++ library, and the tests for the graphical system won’t even run on Debian.

While many of the particular problems are new, other problems have plagued previous Debian releases. Debian 3.0 was never able to achieve LSB compliance by itself, because of problems similar to these. Most of the problems from that era have been fixed in Debian 3.1, but new problems have arisen to take their places. And in some cases, the problems have persisted over a long time, such as the problem with international patches to some programs that have been rejected by upstream authors.

All of these problems have important implications for distributions that are based on Debian. Now that Debian 3.1 has been released, we want to use that as a baseline for compatibility between various distributions derived from Debian. But if we need to upgrade our distributions to comply with LSB requirements, we tend to break that compatibility. Will Progeny stuff work on Ubuntu, or Xandros stuff on regular Debian? It might not, if we don’t have some common ground. My boss has been giving Ubuntu a hard time over this already; it wouldn’t be good for us to criticize them and then follow their example.

So, my current research into the problem has focused not on making Debian adhere to the LSB standard, but on allowing Debian to provide a compatibility environment for LSB programs, without incorporating huge changes that would break compatibility with the current stable version of Debian.

Fortunately, the LSB provides us with a pretty big hook I think I can exploit. This post is already long enough, so I’ll describe it in a subsequent post.

Debian 3.1 Released

Late on Monday, the happy news came through: Debian 3.1 (“sarge”) has been officially released.

(I’d have posted about it earlier, but I’ve been busy upgrading.)

Upgrade Time

Good news! The next version of Debian has been frozen for release. I upgraded my mail-serving machine, and had only one problem that was both serious and previously unknown.

Happy news! According to the release team, the next version of Debian has been frozen. This means that a release, indefinitely postponed for what seems like an eternity, is now imminent.

Among other things, the new release (Debian 3.1, codename “sarge”) now has a security infrastructure, making it possible for daring souls to upgrade to it and not leave security behind. I’ve been really wanting to do some things with software not available in the current release (Debian 3.0, “woody”), so off I went and upgraded. The victim: the main Web and mail server for my domain.
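
For anyone following along, the core of a woody-to-sarge upgrade is roughly this (assuming sources.list already points at sarge; the official release notes cover the full procedure):

    apt-get update
    apt-get dist-upgrade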

Since you’re reading this, you know it went well.

The upgrade was really smooth; only two serious bugs and one minor bug showed up. Of the serious bugs, one was already known and mentioned in the announcement (the perl bug). The other: I decided to upgrade the Web server to Apache 2. Since this is a WordPress blog, and since WordPress is written in something called PHP, I needed the PHP module for Apache 2 (version 4). This module, unfortunately, doesn’t install properly without some manual tweaking. The bug has been filed.

The last bug: imagemagick was held back to the woody version. This wasn’t a big deal, and was easy to fix (“apt-get install imagemagick”), but might be worth looking into to make sure the upgrade goes well for others.

Autopackage Considered Harmful

[eo] La "Autopackage" projekto promesas pli facila instalilon por programoj, sed ĝi kauzas pli multaj da problemoj ol ĝi solvis.

Via Slashdot, we learn of the advent of Autopackage, a project to make it easy to install third-party software onto Linux systems in a distribution-neutral fashion. What’s not to like?

Well, there’s plenty to like. The goal is certainly laudable; it is too difficult to get software installed that your distro vendor doesn’t support. Furthermore, the Autopackage team have wisely chosen not to fight the distros; they emphasize that their system is a complement, not a replacement, for the distro’s package manager. The file structure doesn’t look too bad. They seem to at least have a clue about security, even if their current security story isn’t all that great.

Unfortunately, they’ve yielded to the temptation towards short-term fixes. As a consequence of at least one short-term fix, I predict that distro vendors are going to start seeing support requests from Autopackage users that may, in some cases, be tough to fix. Were I responsible for supporting a Linux distro, I would tell my users that use of Autopackage breaks the support contract, or (alternately) that such support would cost extra.

What’s the problem? The big one: Autopackage installs to /usr, according to a comment by someone involved with the project. If something is installed by Autopackage, and later that same thing is shipped by Debian, the two packages will barf all over each other, causing both packages to fail (despite their unsubstantiated claim otherwise). Telling users to just avoid the Debian package won’t work, because package dependencies change over time, and any popular package stands a very good chance of being added to a meta-package eventually. The same thing is likely just as true for Red Hat, Mandrake, and the like, though obviously the details may differ.
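
A quick way to see the collision coming is to ask dpkg whether it knows about a given file (“foo” here is a made-up name):

    dpkg -S /usr/bin/foo
    # "no path found matching pattern" means dpkg does not own the file, so a
    # Debian package that later ships the same path will clobber it without warning.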

It’s particularly interesting that the software allows the option to install to other places, such as $HOME, /usr/local, and so on. Supposedly, /usr is supported because:

…there are many broken distributions that don’t setup paths for /usr/local correctly.

Yet, in their FAQ, they talk about cooperating with the various distributions to create something like a “Desktop LSB” for handling library dependencies that their tool isn’t good at handling yet. Of course, getting the distros to support /usr/local properly is a much easier task than getting the distros to agree to and implement a new standard. Why blow off the easy thing, and assume the hard thing?

This isn’t the only problem, but it is the biggest one. The other problems are probably easier to fix, especially if they keep their promise for full package-manager integration in the next version. I’m curious how they handle the conflicting library problem, or newer libraries with new symbols that don’t require soname upgrades, but I’m sure they’ve had to deal with those problems to get this far.

Ultimately, I think the Autopackage people would do well to include some traditional distro people in the conversation, and work to integrate well within the parameters the distros set. As they already acknowledge, they aren’t going to get anywhere without some buy-in from the distros. What I wonder about is why they didn’t get that buy-in from the beginning, or if they did, why they aren’t talking more about it.

UPDATE: Joey Hess takes a closer look at the technology; to say he doesn’t like it is an understatement. And Mike from Autopackage responds in the comments to both of us (sorta).

UPDATE (2005-03-31): After a little heat and a little light in the comments, Adam Williamson of Mandrake is bringing the issue up on the Cooker list. His initial message is also posted in our comments, and you should be able to read the full thread here.

UPDATE (2005-04-02): Ubuntu takes up the question, starting with this message. A bug has been filed and dismissed in Ubuntu’s BTS as well.