Archive for the 'Software' Category

Recursive Satisfaction

Gruber writes about a recent clang milestone.

“Recursive satisfaction.” I love that description.

I would add that any program that takes, as its input, another program of the same type has this potential. Simulators can simulate themselves. Program instrumentation or analysis tools — ditto. I’m fairly certain I’ve even debugged a debugger with itself once.

But, yeah, it’s cool.

Incidentally, if you don’t read Daring Fireball — you’re the only one left. Start now.

WebKit Never Gets Slower

I’m going to break radio silence to discuss this statement posted by the WebKit team on the subject of software performance. Summary: WebKit is fast because we’ve got performance tests and we never allow a regression:

Common excuses people give when they regress performance are, “But the new way is cleaner!” or “The new way is more correct.” We don’t care. No performance regressions are allowed, regardless of the reason. There is no justification for regressing performance. None.

I love WebKit (it’s blazing fast), and the team’s statement is very reassuring in that take-a-stand way. Unfortunately, I have to call bullshit.

Firstly, it’s important to note that a policy like this is only as good as your performance test suite. It’s very easy to accept a change which appears to have 100% positive benefit but is, in fact, a trade-off that you cannot measure because your test suite doesn’t tell the other side of the story. The WebKit team’s article admits as much, asking people to run their own performance tests and notify the team if badness occurs.

Even with an iron-clad test suite, we know software has bugs and that bugs must be fixed. One funny thing about high-performance software is that, often, things can go really, really fast when they are incorrect. I can optimize the hell out of any software provided it doesn’t have to get the right answer. To put it another way, bug fixes often regress performance. You can bet that the WebKit team fixes bugs. The product would not be useful otherwise.

Lastly, on what scale is the “no performance regressions” rule enforced? At most it can be per-check-in. It doesn’t take a genius to extrapolate from there. I’m sure WebKit has had plenty of check-ins which do more than one thing. Consider this imaginary check-in comment:

This change set refactors the Widget rendering code to make it more logical and less of a bug farm. It also memoizes the GetBestWidget function, improving performance on WidgetBench by 30%.

A cynical person (ahem) might say that this type of check-in actually does two separable things, and that perhaps if we had simply tolerated the original crappy bug-farm rendering code and added only the optimization, we’d have seen 35% gains.
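
(For anyone unfamiliar with the technique named in that imaginary comment, here’s a rough C++ sketch of memoization. GetBestWidget, Widget, and WidgetKey are made-up names echoing the hypothetical check-in, not real WebKit code.)

    #include <map>

    // Made-up stand-ins echoing the imaginary check-in; not real WebKit code.
    struct Widget { int quality; };
    using WidgetKey = int;

    Widget ComputeBestWidget(WidgetKey key)      // the expensive original
    {
        return Widget{ key * 42 };               // stand-in for real work
    }

    // Memoized wrapper: compute each answer once, serve repeats from a cache.
    const Widget& GetBestWidget(WidgetKey key)
    {
        static std::map<WidgetKey, Widget> cache;
        auto it = cache.find(key);
        if (it == cache.end())
            it = cache.emplace(key, ComputeBestWidget(key)).first;
        return it->second;
    }

Note that the cache trades memory for speed, which is exactly the kind of one-sided win a wall-clock benchmark will happily report without mentioning the cost.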

Bottom line: things are never quite as simple as they seem.

The End of Architecture

The End of Architecture
Burton Smith, Tera Computer Company
17th Annual Symposium on Computer Architecture
Seattle, Washington
May 29, 1990

(Thanks, Wendy!)

Santaniello’s Law

Here’s my contribution to the lore of software development:

Any piece of software larger than a screenful is a steaming pile of crap.

Pessimistic, yes, but perhaps also liberating in a way. Think about it.

Update:
Apparently I’m not the only one with this sentiment.

Chris Hecker is Wrong About OoO Execution

Here is a quote from Chris Hecker at GDC 2005:

Modern CPUs use out-of-order execution, which is there to make crappy code run fast. This was really good for the industry when it happened, although it annoyed many assembly language wizards in Sweden.

I first heard this when Chris was quoted by Pete Isensee (from the Xbox 360 team) in his NWCPP talk a year ago. Maybe Chris was kidding. I don’t know. What I do know is:

  1. He is wrong
  2. Smart people are believing him
  3. It’s time to set the record straight

Processors implement dynamic scheduling because sometimes the ideal order for a given sequence of instructions can only be known at runtime. In fact, the ideal order can change each time the instructions are executed.

Imagine your binary contains the following very simple code:


     mov rax, [foo]     ; load #1: may hit or miss the cache
     mov rbx, [bar]     ; load #2: independent of the first

Two loads — that’s all. Let’s assume that each of the loads misses the cache 10% of the time. Often, one will miss but the other will hit. If you have an in-order machine, and the first load misses, you are forced to wait — you cannot proceed to the second load, and you cannot hide any of the miss latency.

No matter how much of an uber assembly coder you are, you are going to be forced to choose an order for these two loads. More likely, your compiler will make this choice for you. Either way, that choice will be wrong at least some of the time.
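
For concreteness, here is the same pair of loads at the source level, as a minimal C++ sketch; foo and bar are hypothetical globals standing in for the labels in the assembly above:

    // Hypothetical globals standing in for [foo] and [bar] above.
    long foo = 0;
    long bar = 0;

    long sum_foo_bar()
    {
        long a = foo;   // may hit or miss the cache on any given call
        long b = bar;   // may hit or miss too, independently of the load above
        // Neither the programmer nor the compiler can know at build time which
        // of these loads will miss this time around; only the hardware finds
        // out, and only at runtime.
        return a + b;
    }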

An OoO processor can do the right thing every time.

ZFS FTW!

If you haven’t heard of Sun’s ZFS file system, the short version is that it’s a copy-on-write file system with end-to-end checksumming (much more on both below).

For slightly more info, check out these screencasts, this slide deck, or this set of vids.

I can’t even begin to describe how much I lust for Linux support (currently difficult due to CDDL incompatibility with GPL). With a little effort, one could combine ZFS and something like S3 or rsync.net to get easy off-site backup on the cheap.

Update:
Much of the power behind ZFS comes from its copy-on-write philosophy. The parallels with software transactional memory are striking. One outstanding question about transaction size: is it limited by free disk space? Traditional file systems use in-place modification, but it sounds as though ZFS may require additional free storage proportional to the size of the change.
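
For a toy picture of why that might be, here is a C++ sketch of copy-on-write in the small. This is not ZFS’s on-disk format, just the general shape of the idea: an update allocates fresh copies, and the old version stays intact until the new root is published.

    #include <memory>
    #include <vector>

    // Toy copy-on-write "tree": a root pointing at two data blocks.
    // This is not ZFS's on-disk layout, just the general shape of the idea.
    struct Block { std::vector<char> data; };

    struct Root {
        std::shared_ptr<Block> left;
        std::shared_ptr<Block> right;
    };

    // "Modify" the left block: allocate a new block plus a new root that points
    // at it. The old root and old block are untouched until the caller publishes
    // the new root, so the update needs fresh space proportional to the size of
    // the change before any old space can be reclaimed.
    std::shared_ptr<Root> write_left(const std::shared_ptr<Root>& old_root,
                                     std::vector<char> new_data)
    {
        auto new_block = std::make_shared<Block>(Block{ std::move(new_data) });
        return std::make_shared<Root>(Root{ new_block, old_root->right });
    }

In this little model, at least, the answer to the free-space question would be yes: you cannot make the change without enough room to hold the new copies alongside the old ones.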

Update 2:
I just learned about the clone capability in ZFS and my eyes just about fell out of my head. I mean seriously, this is constant-time file copying. I’d have half a mind to alias cp to zfs clone.

Update 3:
Man, the ZFS hits just keep on coming. I’ve got a RAID-5 array at home, but I’d never heard of the “RAID-5 write hole” until I read Jeff Bonwick’s blog article. Don’t miss this war story featuring ZFS’s end-to-end checksumming.

Console Resize Utility for Windows

For you command-line junkies, I present size.exe. You can use it to resize a console window programmatically. Here’s an example:

  C:\scripts >type vimdiff.cmd
  @echo off
  size 60 120
  vim -d %*
  size 60 80

I wrote this in C++ using Boost. If you’re interested, the source is available here.

Update 4/8/2008:
Turns out that Windows actually does have built-in support for this; it’s just hidden in a dusty corner, as usual. Good thing, too, because my program didn’t work worth a damn. Here’s the official method:

   mode 120,60

(Credit to Sahil Malik)

Foxit

Despite the cheesy website, Foxit is a great PDF reader.

It starts up freaking instantly. Multiple documents open in multiple windows by default. Better still, there is no installation required because it’s just a single .EXE.

This is what Acrobat would be if it weren’t too busy moonlighting as the poster child for software bloat. I wish I had discovered this a long time ago.

No More Volatile Memory?

What would a system with no volatile memory look like? Imagine a PC without DRAM or an HDD — instead, what if we could have a terabyte hunk of very fast nonvolatile flash, or something?

I’ve been thinking about this question, on and off, for a few years now.

I think software would change a lot. What would it mean to install software? To load a program? To open a file?

Cool Application of AJAX

MarketWatch.com uses AJAX to embed real-time stock quotes* into its articles. For example, see this MarketWatch article about an analyst downgrade of AMD.

I’m not an expert on web development, but I think this sort of thing may be the killer app for AJAX. There was really no reasonable way to do this so well before AJAX. It would have been obscene to use a Java applet, for example.

* Technically the quotes are delayed 20 minutes, but they update in real-time.