Archive for May, 2006
Ladies and gentlemen, I give you: Diebold!
“For there to be a problem here, you’re basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software,” [David Bear, a spokesman for Diebold Election Systems,] said. “I don’t believe these evil elections people exist.”
(Originally from here, if you can read it.)
Another interesting topic for standardization in the LSB involves multimedia. It’s clear that we need to give developers a good story on how to do multimedia on Linux in their applications; what’s less clear is what that story should be.
There’s been an interesting conversation regarding GStreamer, and its status in the KDE desktop. Apparently, burned by their experiences with their previous sound framework, the KDE folks are writing a new system, called Phonon. The idea is that Phonon would provide a clean, stable API layer for KDE apps to use for the vast majority of simple multimedia-ish things, like playing a sound clip.
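The promise is that the common cases become trivial. Phonon's API is still being designed, so any example here is a C++-flavored pseudocode sketch of the *kind* of one-call interface they're aiming for — the class and function names below are illustrative guesses, not the settled API:

```cpp
// Hypothetical sketch only: names are illustrative, not Phonon's final API.
#include <phonon/mediaobject.h>

void playNotificationSound()
{
    Phonon::MediaObject *clip =
        Phonon::createPlayer(Phonon::NotificationCategory,
                             Phonon::MediaSource("/usr/share/sounds/ding.ogg"));
    clip->play();
    // The actual engine (GStreamer, xine, whatever) is a backend chosen at
    // runtime, so the application never touches the underlying framework's API.
}
```

The point of the design is the comment in that last line: KDE apps code against the thin stable layer, and the framework underneath can churn without breaking them.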
Christian Schaller, a GStreamer hacker, isn’t too thrilled with this, and posted an unflattering analysis of Phonon to his blog. This prompted the kind of response you’d expect, including criticisms of GStreamer:
All other arguments aside, GStreamer doesn’t offer a stable API. I can understand why that’s the case, but as such, because of the (sane) library policies within the KDE project on binary compatibility we cannot simply use a GStreamer binding as our multimedia solution. Period. I was a little surprised by Christian’s posting because we’ve talked about this multiple times.
This piqued my interest, because there’s been talk within the LSB to add a multimedia framework, and GStreamer is one of the candidates. So Christian’s response is very important:
I consider Scott a friend and I think his entry is well considered. My general response is that the bigger and more complex an API gets the chances of getting it right the first time goes down.
I’m not sure how to react to this. If GStreamer’s ABI is still in flux, it may not be a good candidate for inclusion in the LSB. On the other hand, are there credible alternatives? Phonon’s scope is too limited, and it will likely be tied strongly to KDE, which makes it less desirable. There are other frameworks, but I’m not seeing that any of them have the credibility of GStreamer.
That sounds like an endorsement of the Phonon approach, and in a sense it is. But we have to be careful that we don’t create another source of complexity for the Linux desktop. If Phonon encourages all of us to play around with two or three separate multimedia frameworks, to the point that we can’t really have multimedia on our desktop without having to mess with more than one, then the Phonon supporters will have done the Linux desktop a disservice.
Now that LSB 3.1 is out, there’s been some discussion of future directions for the LSB to take. Not surprisingly, desktop componentry beyond the graphical toolkits (GTK+, Qt) has been of interest.
If you’re interested in this, the results of Sun’s evaluation of the GNOME interfaces for inclusion in Solaris provide a lot of good information about what parts of GNOME are stable enough for inclusion.
Some candidates for standardization were left out due to uncertainty over their status as standards:
We would also like to add the icon integration specification as “Stable”, but the fact that the FreeDesktop Standards website makes a weak stability claim by saying, “freedesktop.org is not a standards body” makes us a bit unsure which specifications should be considered Stable. It would be good, I think, if the FreeDesktop community could make a stronger claim about the specific specifications that are needed for desktop integration, such as those recommended for use in the GNOME System Admin Guide, and they should probably be referenced on the GNOME Developer Standards page as well.
Well, if the freedesktop folks are nervous about being a standards body, perhaps they could work with a standards body to codify the things they consider standards. I know of at least one candidate for that position…
No, not really. But have you ever wondered how people in the South could have twisted their heads into thinking that the institution of slavery, with all its brutality, could possibly be a good thing, or how some Southerners, even poor whites, could have been so heated in their defense of their “peculiar institution”?
I have. The antebellum South has always seemed deluded to me. Often, their struggle against the North was framed in terms of freedom, liberty, and so on, even as they denied freedom and liberty to a whole class of their people. But I’ve never been satisfied with this conclusion. Most of the time, “delusional” thinking on someone else’s part is more correctly described as ignorance on your part–ignorance of some factor that, while possibly incorrect, at least brings that thinking within the realm of rationality.
So this article on Winds of Change has been a revelation to me. Its context is more modern: how does modern American society preserve public virtue today? (Or, more pointedly, does modern American society preserve public virtue at all?) But it makes its point by reference to the different theories of public virtue which held in the North and South before the Civil War, and how those views are still expressed today, although in different ways:
But North and South diverged on how best to keep the tree of public virtue well-watered and flowering. The puritan republicans upheld personal morality as the solution: A virtuous people could not help but be a virtuous republic.
And the South?
Rigorous private moral virtue was not necessary in the agrarian republican model — and was little esteemed among men in the South. Instead, jealousy of power and careful attention to governance would keep the flame of public virtue alive. Govern well, put men of pure virtues and total leisure in power, guard against demagogues and tyrants, and live as well as you please.
Callimachus coins the phrases “totalitarian liberty” and “aristocratic liberty” to describe the respective approaches taken by the North and South. While the North sought to preserve public virtue by forcing private virtue on its citizens, in the South public virtue was preserved by an orderly class hierarchy. Slavery was essential to preserving this hierarchy, as the wealth of the higher classes was supported by the labor of the lower classes.
And where did the South get this idea of public virtue? From history:
As odious as much of the old South is to modern attitudes, it had the approval of history. The Spartan, Athenian, and Roman republics — the principal examples available to the Founders — all were built on essentially the same social and economic model, with a mass of slaves at the bottom.
Thus, attacking the institution of slavery was seen as a way of attacking the foundations of the Republic at its base, drawing forth the stirring defenses of liberty you often see from such folks as John C. Calhoun.
They would have been right, of course, except that they didn’t notice the alternate path ahead of them. The North managed to preserve public virtue with a much flatter and less stratified view of society. The excesses of slavery didn’t look to Northerners like the bedrock of civil society; they just looked like needless brutality, certainly nothing that should be defended. And in the end, Northern victory did not bring about the end of democratic civil society, as so many Southerners thought it would.
But in all this moralizing, we have to recognize that the Southerners were right about some things, even if they were wrong about one particular detail. What if the North had been able to convince the South (without warfare) that industry could substitute for slavery in preserving that lowest level of society, and that it could do so without brutalizing whole classes of people? Perhaps today, we would have a better appreciation of some of those Southern values of days gone by: limited government, non-interference in personal affairs, and eternal vigilance as the price of liberty.
The Los Angeles Times has a story on the unintended consequences of government meddling in China.
To sum up, Renhe is a small, formerly rural town on the edge of Chongqing, a rapidly growing commercial center. To accommodate growth, the government is in the habit of seizing small farms for development, compensating the farmers with small apartments in the new city. Most people there are not stupid; they recognize that they are being taken for a ride, and thus do not hesitate to cheat the government in return.
And, it seemed, an opportunity presented itself: married couples got one two-bedroom apartment, while singles got one one-bedroom apartment. So, if a married couple fakes a divorce, they can get two apartments instead of one, and make some money off the second apartment once they remarry. The divorce rate in Renhe soared to 98% after the government seizure.
Of course, the government was not stupid, either; they cut the second apartment out of the deal. Couples who genuinely want to divorce can no longer afford to, since separation now means one of the partners will end up homeless. Farm families who secured their right to a second apartment before the rule change ended up on a waiting list, since too few apartments had been built to accommodate them all. The promised development is still in progress, so there are no jobs for the displaced farmers, who cannot pay for food and utilities. Worse, they have found that not all the divorces were shams:
Meanwhile, most of the former marriages are in tatters. Considering the prospect of a future without financial security, remarrying now simply seems too much of a hassle. Promises are souring. Stunned villagers are watching their life partners drift off. Some have found new love. Others are deciding to try out freedom from a marriage they never thought they wanted to leave.
Arguably, the whole process started with state seizure of the farms without adequate compensation, but the state is playing coy about the problem they caused:
“In the face of the law, there is no such thing as a fake divorce,” said Xue Xiang, an officer at the local marriage registry who oversaw the wave of divorces. At its height late last year, up to a hundred couples showed up at the office every day. “Every citizen has the right to marry and divorce. As long as it’s voluntary, we have to follow the rules and grant them their wish. We can’t help it if some people have ulterior motives.”
Much has been made of China’s liberalization and resulting success. Few recognize that China has been dragged kicking and screaming into these policies, and that the Chinese leadership still resists loosening their grasp of power. Incidents like this one may be small, but they illuminate just how fragile Chinese society is. Will the single-parent children of Renhe become the criminals and dissidents of China’s future? Time will tell.
Once upon a time, there was Windows, MacOS, and Linux. MacOS was a joke, so we won’t talk about it for now. Windows was easy to use, but not very stable and quite insecure. Linux was more difficult to use, but was also a lot more stable and secure.
This seemed like an interesting correlation: more security leads to more difficulty, and vice versa. Was this necessarily so? Both sides said no; Linux users claimed they would achieve ease-of-use without sacrificing security, while Microsoft claimed they could eliminate the stability and security problems of Windows while still keeping it easy to use. And with that, each side went to work.
We’ve been seeing one side of that work–the Linux side–gradually manifest itself. There’s no question that Linux has improved tremendously in ease of use. As the new technology has been developed, it hasn’t really affected stability more than usual; the main problem is that the new usability features are in high demand, and thus are more likely to be deployed before they’re ready.
Now the other side of that work is starting to come into focus with the recent betas of Windows Vista. So far, it seems that things are not going well:
Let’s say you have a 250GB external USB drive packed with music files, videos, pictures, and backed-up documents. When you plug it into your new computer, Vista assigns it the drive letter F:. You have no trouble viewing those pictures and playing those music tracks. But as soon as you start organizing your files into new folders, Windows Vista begins prompting you for permission to perform file operations. You have to click Continue, switch to the Secure Desktop, and then click Continue in the Consent dialog box to complete each operation. Why? Because the default permissions on that external drive give Full Control to the Administrators group, but only Read permissions to Users. And remember, you’re running with the process token of a standard user, unlike Windows XP, which gave you full credit for logging on as an administrator.
This sounds like a major blunder, but it’s not. Long-time Linux users will recognize the problem immediately: how do you secure removable media like USB sticks or CD-ROMs? We went through several iterations of that problem before arriving at a sensible solution: by default, the user who inserts the media gets full permissions to work with it, and no one else does. It doesn’t sound like Microsoft has been learning from our experiences so far.
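In practice the Linux answer comes down to mount options: filesystems like VFAT carry no Unix permission bits at all, so the mounting layer assigns ownership to the console user at mount time. A hand-written /etc/fstab line to the same effect might look like this (the device name and uid are examples, not anything universal):

```
# Example fstab entry: give the desktop user (uid/gid 1000) exclusive
# ownership of a VFAT USB stick, which has no permission bits of its own
/dev/sda1  /media/usbdisk  vfat  noauto,user,uid=1000,gid=1000,umask=077  0  0
```

Modern desktops automate the same idea, plugging the logged-in user's identity into the mount instead of hardcoding it.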
Slashdot has an article on Vista’s new security system, which has motivated some interesting analyses in the comments:
The new Windows ‘protection’ scheme will browbeat the user until they disable the security system (in some way or another). That way, when the inevitable virus and spyware hits the system, Microsoft can wash their hands and say that it’s all the user’s fault for making use of their computer bearable.
Here are the simple solutions all the windows experts are missing:
1. Set yourself up as the owner of all files on the drive.
2. Set full permissions to all files to the “user” group.
Oh gosh gee. I don’t know how we could have been so stupid. Please forgive us for doubting the security, power, and flexibility of Microsoft operating systems.
Dear Microsoft “experts”: You just permanently lost the user privilege security argument, and you probably don’t even know why.
“Granted, I have to set the ACLs on both directories and registry settings, but it’s never been very hard.”
Your Momma.
As in, ask Your Momma to do that.
From that review, it seems that running as a regular user will be easier under Ubuntu today than under Windows whenever it is released. There’s no excuse for that.
It’s interesting to note that Mac OS X–the successor to the previously-dismissed MacOS–is now cited as a model for implementing usable security, and that they’ve done so by building on a Unix base.
Yesterday, I finally achieved a goal I’d been working on for a long time: getting MythTV to display on our family room TV.
So what changed that made the impossible possible? One thing changed: the video card in the computer by the TV. It’s now a cheapo NVidia GeForce 4 MX card, instead of a super-expensive (at the time) ATI All-In-Wonder Radeon.
Windows users aren’t used to the troubles Linux users often endure getting hardware to work. When it works, it usually works very well, better even than in Windows. When it doesn’t just work, it’s usually a huge effort to get working, and sometimes there’s just nothing you can do except dump the hardware on eBay and get something else.
When I bought the ATI card, I had been reading some enthusiastic reviews of the card. ATI was, at the time, the most Linux-cooperative graphics card company, and while no support existed yet for the card’s cool TV recording and TV-out features, everyone assumed it would be just a matter of time.
Well, several years have passed since then, and in the meantime ATI changed the way they did Linux support. The documentation they normally released to open-source driver writers never came. There were efforts to reverse-engineer the card, with varying success. Eventually, ATI announced that they would be providing their own proprietary driver for newer cards, making their Linux support worse than NVidia’s. (NVidia also does proprietary drivers, but theirs are at least decent and support all of the card’s functionality; with the ATI drivers, for example, you can’t have both accelerated 3D and accelerated video support at the same time.) Video capture was not, and still isn’t, available; ATI actually sends people to the reverse-engineering project above for that. And my card was too old to be supported by the proprietary driver.
To get an idea of the impact of ATI’s new Linux support policies, compare this page, which documents video input for ATI cards based on the older Mach64 and Rage chips, with this page, which documents video input for ATI cards based on the newer Radeon chips. The process for the older chips is simple, as Linux driver support goes: download the module, build it, load it, use it. By contrast, the Radeon process has sections for “conservative”, “advanced”, and “adventurous”, where “adventurous” means “using TV-out”, and everything depends on using their special program for doing video input. Forget about using MythTV with this.
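For anyone who hasn't done it, the "simple" older-chip procedure is the classic out-of-tree kernel module dance. Roughly (the tarball and module names here are placeholders, not the actual project's):

```
tar xzf km-capture.tar.gz    # unpack the downloaded driver source
cd km-capture
make                         # build against the running kernel's headers
sudo insmod km.ko            # load the capture module
```

Four steps, and anything that speaks Video4Linux, MythTV included, can then see the capture device.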
So, after months of very frustrating episodes of trying to get TV-out working on my ATI card (never mind video input), I finally broke down and bought the NVidia card. Total time to get TV-out working on the NVidia: about three hours, most of which was occupied in getting my thick S-Video cable past a metal bar in my case.
Lessons learned: avoid “do-everything” integrated hardware in favor of single-purpose hardware; never, ever, buy hardware without knowing that it will work that day; and stay away from ATI, at least for now.
Every so often, I see computer setups with multiple monitors hooked to a single computer, usually set up as a single very long desktop. You move the mouse to the edge of one monitor, and keep moving; the mouse then jumps to the other monitor. This can be really handy for some specific goals; for example, there’s no better way to create an immersive experience for a simulator. Most of the time, though, multi-monitor is used just to give the user a bigger screen.
I don’t generally have a problem with screen room. (Virtual desktops are very handy in that regard.) But I do have a problem controlling several computers, and switching between sets of keyboard, mouse, and monitor to use them.
So I was very intrigued when several people began blogging their experiences with Synergy, a little utility that links the desktops of several computers together into one, such that the desktops look a lot like multi-monitor. It even handles cut-n-paste across the desktops; I can cut or copy on one machine, sweep my mouse across to the other computer, and paste. It’s cross-platform, too, running on Unix/X11 systems, Windows, and Mac OS X.
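On the server side, configuration is a small text file describing each screen and how their edges connect. A minimal synergy.conf for a two-machine setup (the host names here are made up) looks roughly like this:

```
# synergy.conf -- host names are examples; use your machines' actual names
section: screens
    workstation:
    laptop:
end

section: links
    workstation:
        right = laptop
    laptop:
        left = workstation
end
```

The links section is what makes the mouse "jump": sliding off the right edge of workstation lands on laptop, and vice versa.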
So right now, I’m typing this blog entry on my main workstation’s keyboard, but into a browser running on my laptop. And instead of having a zillion tabs on my browser to keep track of pages I want to reference in my blog, I can just zip over to my workstation’s browser with a flick of the mouse, get to the page I want, copy the URL for it, zip back over to my laptop, and paste it into my blog post. Sweet.
If you find yourself using more than one computer at a time, you should check Synergy out.