The Social Impacts of Software Choices

I only mentioned this in passing in my post about accountability the other day, but the choices all of us make when creating software, or when finding new ways to use it, select for certain behaviors. The implications of this are tremendous, even though the effects are very hard to predict and even harder to change once they’ve begun.

Take, for example, PageRank. Lots of people are thinking about it again, thanks to yesterday’s nofollow announcement. As originally conceived (and there’s even some debate about that), Google’s ranking system was designed to confer relevance ranking on sites based on the inbound links to those sites. But PageRank made a few assumptions, either explicitly or implicitly, that reflected the realities of the web when it was created but don’t reflect the web as it is today. In fact, a strong argument could be made that part of the reason the web is the way it is today is due to some of these choices.

PageRank, when created, didn’t assume that content on a web page, especially links, would be generated by someone other than the publisher of that page. PageRank was not based on the assumption that the rankings would have monetary value. And PageRank is based on the assumption that site editors choose their content, particularly their links, based primarily on merit.
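To make those assumptions concrete, here’s a toy sketch of the link-based ranking idea in Python. This is an illustration only, not Google’s actual algorithm; the example graph, damping factor, and iteration count are all made up for the demonstration.

```python
# Toy power-iteration sketch of the PageRank idea: a page's rank is
# built up from the ranks of the pages that link to it, which is why
# the algorithm implicitly trusts that publishers choose their links
# on merit. Not Google's real implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outbound links spreads its rank evenly.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# "a" is linked to by both other pages, so it ends up ranked highest.
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
```

The point of the sketch is the feedback loop: every link a publisher emits transfers a share of that page’s standing, which is exactly the behavior comment spam exploits.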

One of the things the nofollow initiative does is attempt to restore PageRank to its creators’ intent. Naturally, this meets some resistance. The first voices I saw complaining were ones who I think are either comment spammers themselves or are affiliated with comment spammers. I’d link to them and out them, but then I’d just get sued, and it’s not worth the hassle. Suffice it to say, my experience with the scumbags in the dark side/link-spamming part of the SEO industry (I still get emails about the SEO competition every other day or so) showed me exactly how nefarious these people are.

But there’s also some resistance from real bloggers, who are fretting now that their comments won’t confer PageRank on their blogs. It’s worth noting here that PageRank isn’t a true currency, in the sense that giving it to others doesn’t leave me with less myself, but the reality is that links are worth money. The question is whether commenters are supposed to get their ranking improved just for posting a comment.

The “no rank improvement” school has an easy justification: Only a site owner can confer PageRank, since the owner controls the site and is responsible for its content. This is especially true since commenters can create their own links in many contexts, which amounts to a user being able to give himself legitimacy instead of earning it.

The “improve their ranking” school (which, I must confess, I think I’m sympathetic towards) says that a comment that includes a link shouldn’t be filtered out. Most of the people advancing this argument are saying it because of some obtuse “the rich get richer” feeling that their only shot at improving ranking is going away. Get over it, shlubs. Real ranking comes from people linking to you in their posts, anyway, so write stuff that’s worth linking to and promote it well.

If you want to make a real argument, you can say that your comment, being cogent and articulate, increases the value of the page it’s on. Therefore, you should be compensated for your contribution, and PageRank is a currency in which you accept compensation. I haven’t seen anybody advance that argument, which is disappointing.

Now, as mentioned in the Professional Network post about nofollow, it would definitely make sense to give site owners the option of not placing the “nofollow” link type on links from commenters that were authenticated or approved. Giving people choices in general is a good idea. But the interesting thing here is that some site owners will interpret this intent differently, and the choice as to whether commenters are rewarded, and under what circumstances, will return to the site owner.
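As a sketch of how that owner-controlled choice might look inside a publishing tool: the function name, the trusted-commenter set, and the naive regex rewrite below are all my own invention for illustration, not Six Apart’s implementation.

```python
# Hypothetical sketch of the policy described above: the tool adds
# rel="nofollow" to links in comments, except when the commenter is
# authenticated or the comment was explicitly approved by the owner.
import re

TRUSTED_COMMENTERS = {"alice"}  # e.g. commenters who authenticated

def mark_comment_links(html, commenter, approved=False):
    """Add rel="nofollow" to <a> tags unless the comment is trusted."""
    if commenter in TRUSTED_COMMENTERS or approved:
        return html  # trusted comments still confer PageRank
    # Naive rewrite for the sketch; a real tool would use an HTML parser.
    return re.sub(r'<a\b(?![^>]*\brel=)', '<a rel="nofollow"', html)

untrusted = mark_comment_links('<a href="http://example.com/">me</a>', "spammer")
trusted = mark_comment_links('<a href="http://example.com/">me</a>', "alice")
```

The interesting design question is entirely in that `if`: each site owner gets to decide what counts as trusted, which is exactly where the interpretation of intent diverges.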

Finally, aside from all the social implications, the nofollow effort has been fascinating to watch. Robots.txt evolved over a long period of time, and still isn’t really built into publishing tools. (TypePad does the right thing when you make your blog private, but that’s still fairly rare.) And there’s never been a public announcement of support for robots.txt from any of the major vendors, as far as I’ve seen. It’s barely even been touched by the W3C. And nofollow, which is a roughly analogous initiative, came together in just a few days, with the involvement of dozens of different people. Pretty amazing.

I’m also incredibly impressed with our Six Apart team. We didn’t just announce, we shipped, on multiple platforms in multiple countries, in an incredibly short period of time. That’s just awesome to watch: our strength as a blogging company is having the resources to pull that off, while our strength in not being one of the behemoths like the search companies is being nimble enough to just ship. Kick ass.

Also, since he’d never pimp it himself, Brad Choate thought about this stuff 3 years ago. He’s talking about something even more sophisticated (which a lot of people are interested in now), where sections of a page can be marked as untrusted, presumably with a “follow” attribute for exceptions within that range. I’m sure we’ll get there someday, but if you’re one of those folks who’s a stickler for crediting inventiveness, it’s worth linking to Brad.