My Heart Bleeds (or, What’s Going On With Heartbleed)

One of the big news stories of the week has been “the Heartbleed bug”.  If you know a techie person, you might have noticed that person looking a bit more stressed and tired than usual since Monday (that was certainly true of me).  Some of the discussion might seem a bit confusing and/or scary; what’s worse, the non-tech press has started getting some of the details wrong and scare-mongering for readers.

So here’s my non-techie guide to what all the fuss is about.  If you’re a techie, this advice isn’t for you; chances are, you already know what you should be doing to help fix this.

(If you’re a techie and you don’t know, ask!  You might just need a little education on what needs to happen, and there’s nothing wrong with that, but you’ll be better off asking and possibly looking foolish than you will be if you get hacked.)

If you’re not inclined to read the whole thing, here are the important points:

  • Don’t panic!  There are reports of people cleaning out their bank accounts, cutting off their Internet service, buying new computers, etc.  If you’re thinking about doing anything drastic because you’re scared of Heartbleed, don’t.
  • You’ll probably need to change a lot of your passwords on various sites, but wait until each site you use tells you to.
  • This is mostly a problem for site servers, not PCs or phones or tablets.  Unless you’re doing something unusual (and you’d know if you were), you’re fine as long as you update your devices like you usually do.  (You do update your devices, right?)

So what happened?

There’s a notion called a “heartbeat signal”, where two computers talking to each other say “Hey, you there?” every so often. This is usually done by computer #1 sending some bit of data to computer #2, and computer #2 sending it back. In this particular situation, the two computers actually send both a bit of data and the length of that bit of data.

Some of you might be asking “so what happens if computer #1 sends a little bit of data, but lies and says the data is a lot longer than that?” In a perfect world, computer #2 would scold computer #1 for lying, and that’s what happens now with the bug fix. But before early this week, computer #2 would just trust computer #1 in one very specific case.

Now, computers use memory to keep track of stuff they’re working on, and they’re constantly asking for memory and then giving it back when they’re done, so it can be used by something else.  So, when you ask for memory, the bit of memory you get might have the results of what the program was doing just a moment ago–things like decrypting a credit card using a crypto key, or checking a password.

This isn’t normally a problem, since it’s the same program getting its own memory back.  But if it’s using this memory to keep track of these heartbeats, and it’s been tricked into thinking it needs to send back “the word HAT, which is 500 characters long”, then characters 4 and onward are likely to be memory used for something else just a moment ago.

Most of that “recycled memory” would be undecipherable junk.  But credit cards, crypto keys, and passwords tend to be fairly easy to pick out, unfortunately.
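For the curious, here’s a toy sketch of the over-read in Python.  This is purely illustrative: it is not OpenSSL’s actual code, and the “recycled memory” contents are made up.

```python
# Toy model of the Heartbleed over-read.  Illustrative only; the
# "recycled" string stands in for leftover data in server memory.
recycled = "password=hunter2;key=3f7a19bc;card=4111111111111111"

def heartbeat(payload, claimed_len, fixed=True):
    memory = payload + recycled  # payload stored next to old data
    if fixed and claimed_len > len(payload):
        return None              # post-fix: reject the lying request
    return memory[:claimed_len]  # pre-fix: trust the claimed length

print(heartbeat("HAT", 3))                # an honest heartbeat: "HAT"
print(heartbeat("HAT", 40, fixed=False))  # "HAT" plus 37 leaked characters
print(heartbeat("HAT", 40))               # None: the fix rejects the lie
```

An attacker just repeats the lying request over and over, collecting a fresh slice of recycled memory each time.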

And that, by the way, is where the name comes from: the heartbeat signal bleeds data, so “Heartbleed”.  There’s been some fascinating commentary on how well this bug has been marketed, by the way; hopefully, we in the techie community will learn something about how to explain problems like this for future incidents.

Does this affect every site?

No.  Only sites using certain newer versions of cryptographic software called “OpenSSL” are affected by this.  OpenSSL is very popular; I’ve seen estimates that anywhere from a third to a half of all secure Internet sites use it.  But not all of those sites will have the bug, since it was only introduced in the last two years.

How do we know this?  OpenSSL is open source, and is developed “in public”.  Because of that, we know the exact moment when the bug was introduced, when it was released to the world, and when it was fixed.

(And, just for the record, it was an honest mistake.  Don’t go and slam on the poor guy who wrote the code with the bug.  It should have been caught by a number of different people, and none of them noticed it, so it’s a lot more complicated than “it’s his fault!  pitchforks and torches!”)

What should I do?

Nothing, yet.  Right now, this is mostly a techie problem.

Remember that bit about crypto keys?  That’s the part which puts the little lock icon next to the URL in your browser when you go to your bank’s Web site, or to Amazon to buy things, or whatever.  The crypto keys make sure that your conversation with your bank about your balance is just between you and your bank.

That’s also the part which is making techies the world over a little more stressed and tired.  You see, we know that the people who found the bug were “good guys” and helped to get the bug fixed, but we don’t know if any “bad guys” found the bug before this week.  And if a “bad guy” used the bug to extract crypto keys, they would still have those crypto keys, and could still use them even though the original bug is fixed.  That would mean that a “bad guy” could intercept your conversation with your bank / Amazon / whoever.

Since we don’t know, we have to do the safe thing, and assume that all our keys were in fact stolen.  That means we have to redo all our crypto keys.  That’s a lot of work.

And because your password is likely protected with those same crypto keys, if a “bad guy” has Amazon’s key, they’d be able to watch you change your password at Amazon.  Maybe they didn’t even have your old password, but now they have your new one.  Oops.  You’re now less secure than you were.

Now, it’s important to make sure we’re clear: we don’t know that this has happened.  There’s really no way of knowing, short of actually catching a “bad guy” in the act, and we haven’t caught anyone–yet.  So, this is a safety measure.

Thus, the best thing to do is: don’t panic.  Continue to live life as usual.  It might be prudent to put off doing some things for a few days, but I wouldn’t even worry so much about that.  If you pay your bills online, for example, don’t risk paying a bill late out of fear.  Remember: so far, we have no evidence yet that anyone’s actually doing anything malicious with this bug.

At some point, a lot of sites are going to post a notice that looks a lot like this:

We highly recommend our users change the password on their Linux Foundation ID—which is used for the logins on most Linux Foundation sites, including our community site, Linux.com—for your own security and as part of your own comprehensive effort to update and secure as many of your online credentials as you can.

(That’s the notice my employer posted once we had our site in order.)

That will be your cue that they’ve done the work to redo their crypto keys, and that it’s now safe to change your password.

A lot of sites will make statements saying, essentially, “we don’t have a problem”.  They’re probably right.  Don’t second-guess them; just exhale, slowly, and tick that site off your list of things to worry about.

Other sites might not say anything.  That’s the most worrying part, because it’s hard to tell if they’re OK or not.  If it’s an important site to you, the best course of action might be to just ask, or search on Google / Bing / DuckDuckGo / wherever for some kind of statement.

What about your site?

Yup, I use OpenSSL, and I was vulnerable.  But I’m the only person who actually logs in to anything on this site.  I’ve got the bugfix, but I’m still in the process of creating new keys.

Part of the problem is that everyone else is out there creating new keys at the same time, which creates a bit of a traffic jam.

So yeah, if you were thinking of posting your credit card number in a comment, and wanted to make sure you did it securely… well, don’t do that.  EVER.  And not because of Heartbleed.

Old Keys Never Die

Encryption is in the news a lot these days for some reason.  I’ve been doing encryption using the PGP family of encryption systems for quite a while now, but hadn’t been paying close attention until a recent reminder landed in my inbox from the Debian project.  They warn about “1024D” GnuPG keys being weak, which is a fancy way of saying “the way all the cool kids created keys back in the late ’90s”.  Including yours truly.  Oops!

So, it’s time to replace my key.  I’ve uploaded the new one to the key servers and created a transition statement per the guidelines in this fine document, with some changes inspired by others doing the same.  The details are in the transition statement, so I won’t bore you with long strings of hexadecimal numbers here.

The next step is to get signatures for the new key.  I’ll be at the Linux Foundation Collaboration Summit next week, and would greatly appreciate meeting with people in person to do key signings.  If there are any key signing parties happening, please invite!

Sorry for everyone who’s wondering what I’m talking about.  We all have secrets to keep, and conversations we wouldn’t want spread around; encryption gives you a little more control over that.  Plus, encryption lets you “authenticate” people, which is a fancy way of saying “is that you, George?” when you get messages from people, and letting them say “is that you, Jeff?” when you send messages back.  If you want to learn more about taking control of your communication, post a comment, email me, or search for “PGP”, “GnuPG”, or “encryption” in your favorite search engine.

Linux Is Hard, Except When It Isn’t

Online tech news site Ars Technica (which I recommend, by the way) recently reviewed the Dell XPS 13 Developer Edition.  Its unique feature: it ships with Ubuntu Linux as the default operating system.  This preload deal had a few unique properties:

  • It’s from a major system vendor, not a no-name or third-party integrator.
  • It’s a desktop-oriented product, not a server.
  • Most notably, the vendor actually put effort into making it work well.

That last point deserves some explanation.  A few vendors have grabbed a Windows computer they sell and allowed the option to preload Linux on it, but without support; you’re on your own if it doesn’t work in some way, which is likely.  Essentially, they save you the time of wiping Windows off the box and doing a fresh install, but not much more.  But this laptop comes out of Dell’s Project Sputnik, a project to put out Linux machines for developers with a “DevOps” flavor, and they felt the machine had to work as well as their regular products.  So they actually put effort and testing into getting the laptop to run Ubuntu well, with all the drivers configured properly and tweaked to support the machine’s quirks, just like they do for Windows.

And so, the reviewer is surprised to learn that Ubuntu on the XPS 13, well, just works!  It’s even in the title of the review.  Here are reviewer Lee Hutchinson’s observations:

I’ve struggled before with using Linux as my full-time operating environment both at work and at home. I did it for years at work, but it was never quite as easy as I wanted it to be—on an older Dell laptop, keeping dual monitor support working correctly across updates required endless fiddling with xorg.conf, and whether or not it was Nvidia’s fault was totally irrelevant to swearing, cursing Past Lee, trying desperately to get his monitors to display images so he could make his 10am conference call without having to resort to running the meeting on the small laptop screen.

And thence comes the astonishment: on this Linux laptop, everything just works.  Most of the review is spent on the kinds of hardware features that distinguish this from other laptops: the keyboard is like this, the screen is that resolution, it has this CPU and this much RAM and so on.  Some space is devoted to impressions of the default Ubuntu 12.04 install, and some space is given to the special “DevOps” software, which helps the developer reproduce the software environment on the laptop when deploying apps.

But before all that, Hutchinson has to put in a dig:

It’s an impressive achievement, and it’s also a sad comment on the overall viability of Linux as a consumer-facing operating system for normal people. I don’t think anyone is arguing that Linux hasn’t earned its place in the data center—it most certainly has—but there’s no way I’d feel comfy installing even newbie-friendly Ubuntu or Mint on my parents’ computers. The XPS 13 DE shows the level of functionality and polish possible with extra effort, and that effort and polish together means this kind of Linux integration is something we won’t see very often outside of boutique OEMs.

Of course, Windows is actually worse than Linux on the hardware front–when you don’t get it pre-installed.  Imagine if more vendors put as much effort into preinstalled Linux as they did into preinstalled Windows.  In that alternate reality, I imagine people would react more like this:

“Isn’t that what you’re looking for in a mainstream product?” Rick chided. “In 1996 it was: ‘Wow look at this, I got Linux running on xxxxxxxx.’ Even in 2006 that was at times an accomplishment… When was the last time you turned on an Apple or Windows machine and marveled that it ‘just worked?’ It should be boring.”

Which was, of course, the reaction Hutchinson got when discussing the review with a Linux-using friend.

With Microsoft being less of a friend to the hardware vendors every day, here’s a case study more of them should be paying attention to.

Time Flies

18 years ago, I carried a baby out of a delivery room. MY baby.  What a rush.

Looking down on him in the baby warmer, amazement and fear dominated my thoughts, clamoring for my attention. I was a father. What would I do now? My life was REALLY not just my own anymore; I had this little one that was counting on me.  Was I up to the challenge?

And what about when he wasn’t a little one anymore? What would he be like as an adult? Would he be a good person? What would he care about? When he turned 18, what would we do, and what would his plans be for the future?

That day was something I thought about often in that nursery all those years ago.  And now, that day has arrived.

Jon is now a young adult.  And looking at the ultimate result of the last 18 years of worry, I feel immeasurably proud.  He has made his mistakes, and no doubt will make more mistakes in the future.  But he has not let those mistakes dampen his confident optimism, or drag down his sense of what’s right.  More importantly, he has a heart for others that expresses itself with everyone he’s around.  Often, the topics of our disagreements center around his fierce protective instinct, and on more than one occasion, he’s challenged me to improve myself.

I have not been a perfect father.  At times, I’ve been far from perfect.  But I am grateful that I’ve been a part of raising a young man I can admire and, yes, even learn from.

Happy 18th birthday, Jon.  Have an excellent life.  I’ll cherish the rest of the time you’re still at home, miss you when the time comes for you to leave, and always be there for you as long as I live.

Your mom and I are your biggest fans; never forget that.

FHS Refresh

I’ve been busy tonight spamming mailing lists and otherwise getting the word out: the LSB workgroup is preparing to update the FHS, the Filesystem Hierarchy Standard.  This update has been a long time in coming; FHS 2.3 (the current version) was released back in 2004.  Since then, a lot has happened, and it’s starting to look like the FHS is holding things back due to the lack of updates.

For the longest time, the FHS was cared for by its original editors: Dan Quinlan, Rusty Russell, and Chris Yeoh.  We should all be grateful that they created a useful and well-written standard–one that has been resilient enough to remain useful for six years without changes.  Even though it’s time to move on, we should not forget that we are building on a strong foundation they laid for us.

So, you may be asking: how can I help?  Glad you asked!

  • First of all, get the word out!  If you know people who might be interested (developers for Linux distributions, standards people, etc.), point them to this post or to the LSB announcement linked earlier.
  • We have set up the usual open-source project infrastructure: a bug tracker, version control (using Bazaar), a mailing list, and a wiki (of sorts; it’s actually a page on the LSB wiki).  Come and join in!  Subscribe to the mailing list, post comments on the wiki, check out the source and submit patches.
  • The bug tracker deserves special mention.  We hosted it for the old FHS project, and so we’re continuing to use it.  In particular, we’ll be doing triage on the old bugs there, as well as any new bugs filed.  So go ahead and file bugs, or add comments to old bugs; we’ll be taking those into account for the new update.  If you file new bugs, please file them against the “FHS” product.

We’re tentatively shooting for a goal of releasing FHS 3.0 before July, though that’s not written in stone.  But we don’t want to wait much longer than we’ve already had to.

Blog Refresh: Family Health Scare

Some of you may remember that my wife has a genetic condition called Marfan syndrome.  If you do, you might remember that the syndrome can cause serious problems with the eyes and heart.  Both are treatable with surgery; in an ideal world, you’d deal with each problem as it comes up, and spread the surgeries over at least a period of several years.

Unfortunately, Tami didn’t get to experience that ideal world.

About this time last year, she experienced sudden vision loss in one eye while working, which didn’t clear up on its own.  We went in, and found that her eyes had deteriorated to the point that she needed surgery to preserve her vision.  Although only one eye was not working right, the other was on the verge of failing in the same way.

Then came the normally routine pre-surgery checkups.  This time, however, was anything but routine; the cardiologist declared that she had entered the “danger zone” for heart complications.  This would require open-heart surgery to fix.

All ended well.  Five surgeries later (three on the heart, plus one on each eye), she’s back to normal, and even has the best vision she’s ever experienced.  But I don’t recommend doing so much so quickly (four months from the first to the last).

Triumphant Return

“When you don’t update a blog, it gets stale fast.” — Tim Bray

Of course, I didn’t intend to violate this basic rule of blogging.  It just happened–one thing leads to another, and pretty soon you notice just how little your front page has changed in the past two-and-a-half years.  So, I shall begin again.

Quite a bit has changed:

  • It’s especially ironic, given the previous post, that our family has given in and replaced the main television with an HDTV.  Not that I’ve changed my mind much; it’s just that I’ve decided to live with the limitations of the technology, and have figured out how to work around some of them.
  • Although my suspicion of the cloud remains, my participation has greatly increased.  I’m now on Twitter, Facebook, LinkedIn, and piles of Google services.
  • There’s been a major health scare in the family, which is now behind us.

All of these will get their own posts in the very near future.  In the meantime, enjoy the new look.  (Especially on mobile!)

HDTV Still Not Ready Yet

So you put off buying a high-def TV for years, because you weren’t sure they had gotten all the standards right.  You recently gave in, thinking that the coming shut-off of analog broadcast TV in February meant that they had to have their technology figured out by now.

Of course, you were wrong:

CableCARD devices have generally supported only one-way access to cable systems, but their long, winding journey toward full two-way communications is finally coming to an end. Panasonic has announced that it is at last shipping new HDTVs enabled with tru2way technology to the two US markets where they can actually be used.

So what’s the main thing you’re supposed to get with tru2way?

This means that you can walk out of a retail store with a tru2way-enabled HDTV, plug it in at home, and have immediate access to basic features like an on-screen guide and on-demand content.

In other words, we are just now starting to see HDTVs that can just plug into the cable jack and work, without an add-on cable box and all the limitations that implies, right?

Well, not really.

All tru2way-compatible devices will have a CableCARD slot built into them to facilitate the decryption of protected content, though details are still sketchy as to how this system will work with devices like PVRs. Physical CableCARDs will apparently not be needed to access basic two-way services and non-encrypted channels.

Meaning that, in order to get anything you can’t get already with broadcast TV (“non-encrypted”), you still need a cable company tech to come out and install the CableCARD.  And they don’t know how all of this will integrate with the new video recorders like TiVo.

Why is this so hard?  It’s producer paranoia.  If they don’t play these games, you might watch some show for free, or share it so others can watch it for free, instead of… well, watching it for free live.  And you might cut the commercials out, instead of… cutting the commercials out by getting up for more chips during the commercial breaks.  (But that’s stealing, so you shouldn’t do that either.)

Our family keeps edging closer to deciding to get an HDTV.  But then I see stuff like this, and notice that the old tube TV still works fine…

Free Software EULAs?

Ubuntu is now being forced to show a EULA before letting users run Firefox, on pain of losing the rights to the Firefox trademark.  (You know, End User License Agreements: those pop-ups Windows and Mac users have to put up with all the time, with the big “I Accept” button at the bottom.)  Mark Shuttleworth, Ubuntu top dog, weighs in on the bug:

Please feel free to make constructive suggestions as to how we can meet Mozilla’s requirements while improving the user experience. It’s not constructive to say “WTF?”, nor is it constructive to rant and rave in allcaps. Your software freedoms are built on legal grounds, as are Mozilla’s rights in the Firefox trademark. To act as though your rights are being infringed misses the point of free software by a mile.

This is a bit surprising, and a bit disappointing.  Both the decision itself, and Mark’s take on it, are quite wrong.

One of the most important benefits of free software is the legal environment you work in.  You don’t have to agree to some long contract every time you need to do something new on your system, or sometimes even when you get a “critical update” to something you’re already doing.  You don’t have to read pages of legalese, or go through some long process with your company’s legal department, or just click the “make it go away” button with this vague unease that you’ve just signed your first-born child away to the Devil.

Most importantly, you feel like you actually own your computer when you run free software on it.  When you enter a situation where you always have to ask permission to do things, and have to be constantly reminded of the rules, you don’t feel comfortable.  Clearly, the thing in front of you is not yours, whatever your credit card bill might say; if it were, there wouldn’t be all this stress over you doing something the real owners don’t like.  Free software returns your computer to you, by guaranteeing that you don’t have to enter into all these contracts before you can use it.

Well, unless that “free” software is Firefox 3.0.2 or later, it seems.

It’s “free” by a technical definition (you can strip the Firefox trademark rather easily, and get rid of the EULA as well).  But when users fire up Ubuntu, and decide to do some browsing, and get confronted with pages of legal garbage and ALL CAPS, they will ask: “What’s so different about this open source stuff?  I thought I was getting rid of all this legal crap.”  And, suddenly, they’re slogging through the same drudgery they had to endure with every Windows service pack, and they wonder what they’ve gained.

Perhaps there is a price we should be willing to pay to help Mozilla preserve their trademarks, but this price is too great.  Mozilla should never have asked this of us, and Ubuntu should never have decided, on our behalf, that this price was acceptable.

Debian has already turned its back on Firefox, and I have yet to have a problem with Iceweasel (the branding Debian chose for its Firefox-alike) that was caused by the branding change.  But I’m tempted to bring it back, in Debian’s “non-free” software repository.  Perhaps we could provide Firefox, complete with nasty EULA, but launch Iceweasel instead of Firefox if the user clicks “No”.  There are probably all kinds of reasons why this is a bad idea, but I’m still drawn to the idea of illustrating how silly and useless click-through EULAs are.

But it would be much more productive for Mozilla to back down, and not ask us to sacrifice such a large part of our identity on the altar of their sacred mark.

UPDATE: First, I notice I was remiss in not giving a hat tip to Slashdot.

Second, Mark has posted another comment on the bug.  I encourage people to read the whole comment, but here’s a telling part:

For example, at the moment, we’re in detailed negotiations with a company that makes a lot of popular hardware to release their drivers as free software – they are currently proprietary. It would not be possible to hold those negotiations if every step of the way turned into a public discussion. And yet, engaging with that company both to make sure Ubuntu works with its hardware and also to move them towards open source drivers would seem to be precisely in keeping with our community values.

In this case, we have been holding extensive, sensitive and complex conversations with Mozilla. We strongly want to support their brand (don’t forget this is one of the few companies that has successfully taken free software to the dragons lair) and come to a reasonable agreement. We want to do that in a way which is aligned with Ubuntu’s values, and we have senior representatives of the project participating in the dialogue and examining options for the implementation of those agreements. Me. Matt Zimmerman. Colin Watson. Those people have earned our trust.

On the one hand, yes, I believe that the Canonical people have earned our trust, and I do appreciate the utility of quiet persuasion with a proprietary software company that doesn’t understand our community.  On the other hand, I had been under the impression that Mozilla was not a proprietary software company, and didn’t need persuasion and secret negotiations to see our point of view.

Is Mozilla still a free software company, or not?

UPDATE 2: Cautious optimism is appropriate, I think.  Mitchell Baker, Mozilla chair:

We (meaning Mozilla) have shot ourselves in the foot here given the old, wrong content.  So I hope we can have a discussion on this point, but I doubt we’ll have a good one until we fix the other problems.

The actual changes aren’t available yet, and I wonder how much of this had been communicated to Canonical beforehand.  Still, it’s a good sign.

Election Time: Republicans Win

It’s silly season again in America: a Presidential election year.  If you don’t know that, you must really be living under a rock.

As I did four years ago, I’ll post my thoughts about how I vote online for all the elections I can participate in: national, Congressional, Indiana-wide, and local.  That way, you can do more than curse the ignorant Americans for their choices; you can possibly influence at least one.

Local races look to be more boring than usual, because neither of Indiana’s Senators is running this year.  Fishers trends strongly Republican, too, which makes a lot of the other local races uncompetitive.

But that’s not why I said the Republicans win.  I figured I’d be able to watch the speeches from the conventions at my own convenience online, so just now I tried both sites.  Here’s what the Democratic convention site told me:

We’re sorry, but the Democratic Convention video web site isn’t compatible with your operating system and/or browser. Please try again on a computer with the following:

Compatible operating systems:
Windows XP SP2, Windows Vista, or a Mac with Tiger (OS 10.4) or Leopard (OS 10.5).
Compatible browsers:
Internet Explorer (version 6 or later), Firefox (version 2), or, if you are on a Mac, Safari (version 3.1) also works.

That’s because the Democrats chose Microsoft as their official technology provider, and Microsoft chose to deliver all video using their Silverlight technology, which doesn’t work on Linux (yet).

And what are the Republicans using?  Good ol’ YouTube.

Yes, I’ll probably be able to find the important Democrat speeches on YouTube.  But how easy will that be?  And how many of the obscure Democrat speeches will I be drawn into watching just out of curiosity?  I’ve already listened to portions of Fred Thompson’s and Joe Lieberman’s speeches–because it was so easy.

Advantage: Republicans.

Comment Policy Updated: No More CAPTCHA

The comment policy has changed; check the page links for the details.  The big change: I’ve turned off the CAPTCHA page that would be presented for comments judged to be “borderline” spam by the spam filter software.

For those not aware, CAPTCHA is the name given to the funny letters and numbers on weird backgrounds that you sometimes have to type in to do things on certain web sites.  The idea was that computers couldn’t read those letters and numbers, but humans could; thus, each solved CAPTCHA was proof that a human had done whatever it was that had been done.

CAPTCHAs had issues even from the beginning.  They present obvious problems for the blind, and many were simple enough to be read by modern OCR software.  Because of this, I never turned CAPTCHA on for every comment, and any comment rejected because of it just went into the moderation queue.  But I’m now convinced that CAPTCHA has reached the end of its useful life.

So when a commenter on my last post expressed his dissatisfaction with my CAPTCHA, I decided it was time to turn it off.  And so, references to it have been expunged from my comment policy.

The Esperanto translation of my comment policy has also been updated, in the hopes that I might someday post a little more often in that language.  It’s also been moved to a page.

Internet Speed Hype

Reportedly, the USA is falling behind the rest of the world in bandwidth:

The 2008 median real-time download speed in the U.S. is a mere 2.3 megabits per second. This represents a gain of only 0.4 mbps over last year’s median download speed. It compares to an average download speed in Japan of 63 mbps, the survey reveals.

US also trails South Korea at 49 mbps, Finland at 21 mbps, France at 17 mbps, and Canada at 7.6 mbps, and the median upload speed was just 435 kilobits per second (kbps), far too slow for patient monitoring or to transmit large files such as medical records.

But don’t tell Chris Blizzard’s commenters.  He writes about Comcast’s announcement of a 250GB/month bandwidth cap, and gets an earful from commenters from Canada and Europe:

A boo hoo hoo. Major Canadian ISPs have had a limit of 60 GB for months, if not years.

Oh wait… probably the same way as most of the world manages on 10-20GB, for far more money than you’re paying for $250. Not a lot of sympathy from this corner…

Yep, no sympathy from here either — in Australia, with the only _independant_ ISP left, $280 AUD gets you 100GB.  $50 with a major telco (the rest of the ISPs here) gets you 5GB.

eg with my current ISP, a 8 MB line with a 300 GB monthly cap costs 20 GBP/month. A 8 MB line with unlimited bandwidth costs 160 GBP/month. Quite a difference!

I pay the equivalent of $40 a month for 30GB, and extra GB on top are $3 each. That’s with Plus Net (http://www.plus.net).

I’m in South Africa paying about $130 for a 10GB cap.

So who’s really better off?  By my calculations, if a Canadian ISP provides 7.8 Mbps with a 60 GB cap, that’s about 17.5 hours per month of sustained maximum bandwidth before you’ve blown your limit.  By contrast, an American ISP with 2.3 Mbps and a 250 GB cap gives you about 247 hours per month of sustained maximum bandwidth.
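For anyone who wants to check my arithmetic, here’s the back-of-the-envelope calculation (assuming 1 GB = 1024 MB, which is how I got my figures):

```python
def hours_at_full_speed(cap_gb: float, speed_mbps: float) -> float:
    """Hours per month of sustained maximum bandwidth before hitting the cap.

    Assumes 1 GB = 1024 MB = 8192 megabits.
    """
    cap_megabits = cap_gb * 1024 * 8
    return cap_megabits / speed_mbps / 3600  # seconds -> hours

print(round(hours_at_full_speed(60, 7.8), 1))   # Canada: 17.5
print(round(hours_at_full_speed(250, 2.3), 1))  # USA: 247.3
```

The punchline survives any reasonable choice of units: the slower American connection delivers far more usable data per month.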

Perhaps part of the answer is that only one country–Canada–shows up in the list of “faster countries” and in the comments section of Chris’s post.  That could explain the apparent disconnect; maybe Great Britain and Australia are worse off than the USA, while Finland and Japan are better off.

Still, this does bring the question to mind: which is better, raw speed, or the ability to actually use it without fear?

Standards and Conversations, Part 2

Picking up where we left off last time:

The LSB spec invents things without consulting distros. Like the whole init scripts thing. But that’s not as bad as depending on RPM or requiring a specific layout.

What can be very frustrating is that we do reach out to all the major distros, and a number of the less major ones.  But we don’t talk to every single person on every single distro; we can’t.  We also try to follow best practices for an open project: open version control, open IRC, open mailing lists.

Part of the problem may be that we also talk to independent software developers, and sometimes, distro people aren’t prepared to hear what developers are saying.  So, it looks like we’re pushing things on them, like predictable directory layouts, hooks for working with the user environment, different options for software installation, and the like.

We used to just listen to distros and do what they wanted.  Part of the reason there’s still a lingering perception that “the LSB failed” is that software developers saw us as irrelevant.  And they were right: we were irrelevant, because we only listened to the distros.  So now we listen to both sides, and try to get them to talk to each other, and act as a go-between when they don’t seem able to.

I had an eye-opening experience in Berlin in 2006.  We had talked to packaging people about the need for cooperation between package managers and third-party installer tools, and a lot of them thought that was a bad idea.  So we got them together with some major ISVs in Berlin, and told them to figure it out.  And they did figure something out, and surprise!  Communication between package managers and third-party installers became a good thing, at least if done right.

And we don’t have a problem with the “done right” part, either.  We made a few attempts at proposals for that communication system, some of which came under sharp criticism, and someone has since created an independent implementation.  We’re cool with that; happy, in fact, that the idea got attention.

So if you want to find out what’s going on with us, and what terrible things we’re going to make you do in the future, check out our project plan, sign up for our mailing list, or just come by our IRC channel (irc.linuxfoundation.org, #lsb) and ask some questions.  We try to be friendly and helpful.

Standards and Conversations, Part 1

So it looks like the project I’ve been laboring on has been getting some attention:

Ever thought it was difficult to write software for Linux? For multiple distros? InternetNews reports that the LSB is making a push for their next release (due out later this year) that should help make all that much easier.

They even link to our project status page.  Cool!

Of course, good publicity invites criticism.  This time, there seem to be two themes.  William Pitcock seems to have the most succinct summary:

To put things simply, the LSB sucks. Here’s why:

  • The LSB spec depends on RPM. I mean, come on. Seriously. Why do they need to require a specific package manager? If package handling is really required, then why not create a simple package format that can be converted on demand into the system package format? Or why care about packages at all?
  • The LSB spec invents things without consulting distros. Like the whole init scripts thing. But that’s not as bad as depending on RPM or requiring a specific layout.

(See also Scott James Remnant.)

Let’s take this one part at a time.  Today’s topic: packaging.

Part of William’s problem may be that he doesn’t understand the spec.  The LSB doesn’t require a specific package manager, or a specific package format.  It doesn’t even require that the distribution be set up using package management at all!

The spec only requires that LSB-compliant software be distributed so that any LSB-compliant distribution can install it.  That could be tarballs with POSIX-compliant install scripts, an LSB-compliant install binary, a shar archive, a Python script with embedded base64 binaries, whatever.  One of the options allowed is an RPM package, with a number of restrictions.

The restrictions are key, because they effectively define a subset of RPM that acts as, to quote William again:

…a simple package format that can be converted on demand into the system package format…

The difference being, of course, that we didn’t reinvent the wheel and create our own; we used a popular format as the basis for ours.

Scott raises another concern:

While much of the LSB can be hacked into a different distribution through compatibility layers and tools, such as alien, what ISV or other vendor wants to provide a support contract against a distribution that has such kludges?

I’m not sure if he’s referring specifically to packaging or to the standard in general.  As regards packaging: the reason we specify a strict subset is because we can test that subset, and we’ve tailored it to the needs of tools such as alien.  The theory goes that alien isn’t a kludge when it comes to LSB packages.

But, as already mentioned, if vendors aren’t comfortable with supporting RPM, they have a number of other options.  As it turns out, most of them are doing just that; the feedback we’re getting from most ISVs is that packaging (whether LSB-subset RPM, full RPM, or Debian) is just not worth the effort.

Coming up: part 2

Damned If You Do

JROBI, a chess blogger, on energy policy:

A large study in Europe concluded that it takes more gas and oil to produce a bottle of bio-fuel than it does to produce a bottle of gas. What does this mean? It means that Bio-Fuel is more damaging to the environment in the long run, and on top of that it is driving up the cost of basic food supplies. Millions and millions around the world in a number of countries are unable to afford the rising food costs for basic staples like Corn, and for what?

If Bio-Fuel is not better for the environment, why are politicians and environmentalists getting behind this growing industry? I think it’s because it seems to be the “trendy” thing to do, and we all know what happens when the media promotes a new trend. We get tons of media coverage telling us why it’s a good thing, and hardly any coverage of the negative impacts. Already people from the Bio-Fuel industry are getting on television shouting out that there are many factors contributing to rising food prices, trying to deflect the fact that their destruction of food to fuel vehicles is the main culprit.

Actually, I suspect the emphasis on biofuels in the USA and Europe has to do with the fact that it’s the only alternative to fossil-based motor fuels proven to be sustainable and scalable:

The success of FFFVs, together with the mandatory use of E25 blend of gasoline throughout the country, allowed Brazil to get more than 40% of its automobile fuels from sugar cane-based ethanol in 2007.

I see no link to the European study in question, but previous studies have suffered from various faults; for example, the assumption that trucks transporting fuel cannot themselves shift to biofuels. I’m sure better analysis of the study is on its way.

But that’s not the most interesting thing, to me. More interesting: my general impression that a lot of the climate-change hysteria is just that.

If we hear what science seems to be telling us about the environment, and we think that something needs to be done, then we should do things that will actually work. One thing that really works is conservation: use less of the bad stuff we’re using. But we’ve done quite a bit on that front, only to hear that much, much more is required to make a difference. I’m not sure there’s much, much more benefit for us to realize in conservation, at least in the short term.

So, to make a real difference, we have to make more radical changes. Can we change our motor fuel?  Sure; starting with something that pollutes less, and that even absorbs some of that same pollutant in its production, sounds like a winner.

JROBI, again:

It makes no sense whatsoever to create Bio-Fuel when there are much better options on the table – for instance Hydrogen vehicles. When was the last time you heard someone on the news talk about Hydrogen initiatives?

I hear it every so often. But most talk, today, focuses on the very real problems with hydrogen as a motor fuel. There are many; just look at the discussions of hydrogen fuel tank technology for a sample. But one of the biggest problems is that of developing an infrastructure for delivering fuel to the customer.

No one talks about the problems of setting up an ethanol infrastructure. We already have it. Brazil has demonstrated that the current gasoline infrastructure can easily be adapted to deliver ethanol instead, and that there is a viable migration plan for gradually moving people off fossil fuels.

Now, this isn’t to say that the world of ethanol is hunky-dory. It’s arguable that, while ethanol may be sustainable, the corn-based system the USA has adopted isn’t. Some people are talking about sensible tweaks that may solve the food problems while continuing to support biofuels–removing our silly tariff on Brazilian ethanol, for example, or developing alternative feedstocks for ethanol production.

The problem is that hysteria seems to be breeding hysteria. Global warming is so severe, we are told, that we need solutions, and we need them immediately. So we develop solutions we can use immediately. But no! These solutions cost; we need something else, and we need it immediately, and we need it cost-free.

Practically, this kind of insistence on perfection–that we deploy solutions with no drawbacks, only benefits–has the effect of dampening our enthusiasm for environmental solutions. We tried, our leaders will tell us, but nothing was good enough, so we gave up. And so, rather than do something that helps, or even something that lays the foundation for helping, we continue our use of fossil fuels.

Perhaps ethanol is the wrong solution. But if it is, we should resign ourselves to the inevitability of the future, as foretold by science, or fervently hope that the global warming deniers are right, because other solutions will arrive too late to do much good.

It’s Not Like You Care About Your Documents

Recently, as part of the many antitrust/anti-competition legal actions they’re suffering under, Microsoft released specifications for the old Office binary file formats. As expected, they’re big and complex. Joel Spolsky (a former member of the Excel team) had some thoughts on their size and complexity:

With a little bit of digging, I’ll show you how those file formats got so unbelievably complicated, why it doesn’t reflect bad programming on Microsoft’s part, and what you can do to work around it.

The digging turns up reasons that make some sense: the limitations of older computers, feature creep, a complete lack of attention to the future. But it’s hard to see some of these reasons as “why it doesn’t reflect bad programming on Microsoft’s part”. Carelessness is common, sure, but we don’t call it a virtue because everybody does it.

And these are problems that should have been on someone’s radar at Microsoft. It’s one thing for a grunt programmer to hack a feature to meet a deadline; it’s another for the management to simply go along with it, or to not order a rethink when the problems come to light. When you read about hacks like the following, everything sounds nice and reasonable, until you remember what the end result is: that Microsoft Excel doesn’t have a standard format for storing and manipulating dates!

There are two kinds of Excel worksheets: those where the epoch for dates is 1/1/1900 (with a leap-year bug deliberately created for 1-2-3 compatibility that is too boring to describe here), and those where the epoch for dates is 1/1/1904. Excel supports both because the first version of Excel, for the Mac, just used that operating system’s epoch because that was easy, but Excel for Windows had to be able to import 1-2-3 files, which used 1/1/1900 for the epoch. It’s enough to bring you to tears. At no point in history did a programmer ever not do the right thing, but there you have it.

It may not have been the wrong decision, in the sense that it enabled them to ship, and shipping is everything in some circles. But as a design decision, how can anyone defend such inconsistency?
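The mess is easy to demonstrate.  Here’s a small sketch of the two date systems; this is my own illustration of the published behavior, not anything from Microsoft’s spec:

```python
from datetime import date, timedelta

def excel_serial_to_date(serial: int, epoch_1904: bool = False) -> date:
    """Convert an Excel date serial number to a calendar date.

    Two epochs exist: the 1900 system (Windows) and the 1904 system
    (classic Mac).  The 1900 system also pretends 1900 was a leap year
    for Lotus 1-2-3 compatibility, so serials past the phantom
    February 29, 1900 have to be adjusted by one.
    """
    if epoch_1904:
        return date(1904, 1, 1) + timedelta(days=serial)
    # 1900 system: serial 1 is January 1, 1900
    if serial >= 60:          # skip the nonexistent Feb 29, 1900
        serial -= 1
    return date(1899, 12, 31) + timedelta(days=serial)

# The same serial means two different days, depending on the workbook:
print(excel_serial_to_date(40000))                    # 2009-07-06
print(excel_serial_to_date(40000, epoch_1904=True))   # 2013-07-07
```

(Serial 60, the phantom February 29, 1900, has no real calendar date at all; the sketch above quietly maps it to February 28.)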

Business information technology was able to move forward in the early ’90s because older document formats like 1-2-3 and WordPerfect were simple enough to import easily into Microsoft Office. Today, when we talk about moving to open-source suites like OpenOffice or online systems like Google Docs, detractors left and right cite the pain of document conversion as a reason to hold back. But if Joel is right about the old binary formats, the pain of transition is like the pain of changing your oil: you can pay now, or you can pay a lot more later. Even Microsoft is having trouble opening its own files from long ago, with “long ago” being a period measured in years, not decades.

Maybe you didn’t write anything a decade ago you’d care to read again today; maybe you can’t imagine any of your stuff being worth reading a decade from now. Do you want to take that chance?

Thankfully, I was a geek, and kept most of my documents in plain text. Today, I take care to save important documents in formats and encodings designed for the long haul, like Unicode, ODF, and PDF. It helps that I avoid Microsoft software like the plague. (If you think they’ve changed since the bad old days, just surf the web in Firefox on Linux sometime, and see how many badly-rendered pages look much better when you switch their text encoding from Unicode to “Windows-1252”.)

If you have a lot of Office documents, even if you’re happy with Office, you might consider whether you care about opening those documents ten years from now, and whether you’d rather take the time to future-proof them while you still can.

Christmas Gadgets: Creative Zen, LCD Monitor

So it’s a few days after Christmas, and like most of us tech-heads, I’ve got a few more gadgets to play with.

First up: the Creative Zen 4GB. This one was a little bit of a saga.

Last year, we got the kids no-name MP3 players, on the theory that we didn’t want to spend megabucks on something they wouldn’t use. They made valiant attempts to use them, but the little machines just weren’t up to the job. So, it seemed prudent to buy them iPods this year.

Well, except for Apple’s attempts to break all non-iTunes iPod software, which had the side effect of making the devices unusable under Linux. Still, this was what they wanted, and they had been good this year, and very patient with my ever-more-convoluted schemes to get the old players working. So, iPod Nano 3Gs for both of them. My heart sank as I watched some of my hard-earned money go to reward such behavior.

As part of the deal, I vowed to find a non-Apple player that would be good for when the iPods gave up the ghost or became “uncool”. And my dear wife, upon hearing this, went online, did some research, and bought me the aforementioned Creative Zen 4GB.

From a Linux perspective, it’s not quite ready for prime time. Rhythmbox and Banshee are working on support; I tried a prerelease of Rhythmbox, and found its support to be very unstable. The only usable app is Gnomad2, which has a terrible UI and also occasionally crashes, but manages to upload audio, video, and photos without too much hassle. Still, this is a matter of fine-tuning, not a hostile hardware vendor; I’m confident that these devices will be well-supported in the near future.

The Zen is picky about what video files it will play, but I managed to figure it out: DivX or XviD video, 320×200 or smaller image size, encoded at a 480 kbit/sec video bitrate or less. Other video files might work, too, but you’ll have to find them on your own.

My Zen has a little problem with the button locking feature: after unlocking, the screen comes up to all-white, and you have to power-cycle it to get the display back. I’m assuming this is a firmware bug, as the screen is still visible for a short time after engaging the lock. Other than this, the Zen is a delight, and every bit as functional as the iPod.

The other nice gadget: a 24-inch LCD from Envision, bought after Christmas with a combination of gift cards, exchanges, and some of my own money. It was an open-box, and I saved about $80 for that; the only problem turns out to be a single dead pixel in the corner of the screen which is barely visible. It does 1920×1200 in very nice, bright color.

Here, too, an improvement on my life only came after some effort. Debian 4.0’s drivers for the Intel graphics chipset are not capable of driving a widescreen LCD; the best I could get was 1600×1200, a normal-width resolution stretched across the wide display. I booted an Ubuntu Gutsy live CD to verify that the problem wasn’t with the monitor, and then set to the task of backporting everything I needed from lenny. Happily, before I started, I found that someone (Holger Levsen, to be exact) had done the work for me.

Things are now about 90% there. The new drivers still don’t have everything figured out for running both Compiz desktop effects and XVideo acceleration at the same time, so I’ve had to turn XVideo off. My computer can render video without hardware support, but the quality isn’t as high. But, I have my nice wide screen, with crisp fonts and lots of room. I figure I’ll live with what I have until lenny releases, and then see what progress has been made.

Rest In Peace, CompUSA

I’m very surprised about the popularity of an old post of mine, regarding my experiences with CompUSA. It continues to collect horror story comments, the last one coming less than three weeks ago. While any company has its detractors (especially any company dealing directly with the public), it seems odd to me that people continue to be motivated enough to post to my blog, of all places, their tales of woe.

For me, life has been very CompUSA-less of late. Indianapolis now has a Fry’s, one of only two east of the Mississippi as of this writing, and for someone in the relatively tech-starved Midwest, it is a godsend. (People from the west coast: please stifle your laughter as best you can.) And evidently enough of these horror stories have been passed around that they felt the need to close over half their stores in February.

The Indy store was spared that time, but not for long.

The electronics retailer decided to finish what it had started earlier this year, announcing that it would sell or close the remainder of its stores in the US after the holiday season. The company, controlled by Mexican retail management company Grupo Sanborns since 1999, has been sold to Gordon Brothers Group, a restructuring firm that will be responsible for selling off the remainder of its assets.

In an abstract sense, less competition in the electronic retail business isn’t ever good. But it’s arguable that we’ve never had so much competition in the electronic retail business if you count the Internet stores that have sprung up all over. And I’m certainly happy to see an outfit that will slander people for profit go belly-up.

“This Is Not An Oops.”

Carver County, Minnesota, is in big trouble. (via buzz.mn)

Eric Mattson was not surprised that the small vacant lot he bought last year near the shores of Lake Waconia was increasing in value.

What shocked him was the $189 million market value the Carver County assessor’s office came up with for the 55- by 80-foot lot, making it the most valuable property in Waconia and possibly the county.

Of the resulting $2.5 million tax windfall, about $900,000 had already been spent by the time Mattson got the bill and came in to complain. They’re now looking at spending cuts and new taxes to pay for the shortfall.

“This is not an ‘oops.’ This is a major error that affects an awful lot of people,” said Mark Lundgren, director of the Carver County division that oversees the assessor’s office.

So how could someone make such an egregious error?

Lundgren said the trouble began in August when a clerk went into Mattson’s file to change the designation of the property, at 233 Lake St. E., from homestead to non-homestead to reflect its change in status after its sale.

The clerk filled in the $18,900 proposed valuation, but then mistakenly hit the key to exit the program. The computer added four zeros to fill out the nine numerical spaces required by the software, thus indicating the value was $189,000,000.

So many things come to mind, most of them probably too snarky to print. But a few observations are worth making:

  • Don’t just pin this on the clerk. The major mistake was with the programmers, whose software did such an unexpected thing, and on the auditors, who missed a $2.5 million mistake. (Oddly, given that audit failure was an issue, the only solution worth mentioning in the article was “more auditing”.)
  • Programmers, cherish your input. Do not auto-munge it without at least user review! And, I’d argue, don’t auto-munge it at all if the result is at all valuable. Validate it, sure, but don’t change it; force the user to fix his or her own mistakes. After all, if your program was so smart as to know what the user “meant”, why does it need manual data entry at all?
  • Use modern tools! What kind of data store today requires zero-padding? MySQL is a free download, and very popular; for all its perceived faults, it can at least store numbers of variable sizes correctly.
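To make the second point concrete, here’s a hypothetical sketch (not the county’s actual software, obviously) of the difference between munging a fixed-width field and validating it:

```python
def pad_to_field(value: str, width: int = 9) -> str:
    """What the assessor's software apparently did: silently fill out
    the rest of a fixed-width numeric field with zeros."""
    return value.ljust(width, "0")

def validate_field(value: str, width: int = 9) -> str:
    """What it should have done: refuse to guess, and make the user
    fix his or her own input."""
    if len(value) != width:
        raise ValueError(f"expected {width} digits, got {len(value)}")
    return value

entered = "18900"              # the clerk's $18,900 valuation
print(pad_to_field(entered))   # 189000000: a $189 million vacant lot
```

The padding version never complains, which is exactly the problem; the validating version turns a nine-figure tax error into an error message at data-entry time.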

LSB 3.2 Beta

Today, we released the first beta of LSB 3.2. If all goes well, it will be the only beta.

We’ve been working on 3.2 for a while, and we’re really excited about it. We’ve added quite a few interfaces, based on feedback from application vendors and others. There are whole new sections: printing support, Perl and Python, FreeType, Qt 4, and trial use support (our new name for “optional”) for Xrender, Xft, and the ALSA API.

Betas can only be as good as the people participating; more feedback means a better standard. So please go check out the beta. Look at the whole thing, or just parts you’re interested in. Read the spec, or check out the tests, or try building your favorite open-source app with our SDK.

We’re hoping for a release before Christmas, but that depends on the feedback we get, of course. And we’d rather hear about that really big issue we forgot and delay the release than find out about it afterward. So get cracking!