People often forget that the Internet wasn’t created in a dramatic flash of omnipotence by an all-knowing God, but rather patched together by thousands of squabbling humans over decades. A piece here, a piece there. Lift that server, tote that fibre. Let’s call the process, oh, ‘evolution’.

Over the weekend two events highlighted this ramshackle nature, and highlighted yet another risk of secret ISP-level Internet censorship. If you did a Google search between 1.30am and 2.25am Sunday AEST, the message “This site may harm your computer” accompanied every single search result. Huh? Surely not every single website is infected with malware? Of course not. It was a mistake.

Google gets its list of possibly-infected websites from a US- and UK-based non-profit. Due to a “human error”, the list of bad websites supplied to Google contained a line with a single slash character “/”. Unfortunately, every URL contains a slash, so the pattern matched everything, and every website was marked as dangerous.

To Google’s credit, they found and fixed the problem in under an hour. On a weekend. Impressive. Of course, people could ignore the warning and click through, and we could still use any of the other search engines like Microsoft Live or Yahoo.

But if a similar mistake happened to our fancy-pants ISP-level Internet filters, it wouldn’t just be a warning. It could shut down the entire Internet across Australia for hours. Alarmist? Hear me out.

Conroy’s Rabbit-Proof Firewall has an externally-supplied list too: the ACMA blacklist. But unlike Google’s list, the ACMA blacklist will presumably be encrypted to preserve its secrecy. Otherwise, several thousand ISP systems administrators could get their hands on the filth list. The filters which are about to be trialled in ISPs come in three main flavours.

The simplest requires all traffic to be routed through the black-box computers running the filter software. That slows everything down, so my guess is it won’t be chosen except for the very smallest ISPs.

The other two use a split approach. First, all traffic is checked against a list of potentially-bad Internet addresses. Most is legit, and passes straight through with negligible slowdown. Only a small proportion of traffic is routed to the computer running the detailed filtering system.

(For the hypergeeks, BGP tells the ISP’s core routers which traffic to route to the filter box. That box does DPI to determine if the HTTP request contains a bad URL, or whatever is appropriate for non-web traffic, and it responds in one of two ways. In one, it denies the connection. In the other, it fires three TCP RST packets to each end of the connection to kill it. Serious Network Engineers are welcome to pick apart this overly-simplistic explanation in the comments.)
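To make the RST trick concrete, here is a minimal sketch of what such a reset packet looks like on the wire. This is illustration only: the port and sequence numbers are invented, the checksum is left as zero, and a real injector would have to compute the checksum over the IP pseudo-header and match the sequence number of the live connection it wants to kill.

```python
import struct

def tcp_rst_header(sport, dport, seq):
    """Build a minimal 20-byte TCP header with only the RST flag set.

    Checksum is a placeholder zero here; a real injector must compute
    it, and must use a seq value the victim connection will accept.
    """
    offset_flags = (5 << 12) | 0x004   # data offset = 5 words; RST bit
    return struct.pack("!HHIIHHHH",
                       sport, dport,
                       seq,    # sequence number, must match the stream
                       0,      # ack number (unused without ACK flag)
                       offset_flags,
                       0,      # window size
                       0,      # checksum (placeholder)
                       0)      # urgent pointer

hdr = tcp_rst_header(80, 54321, 1000)          # invented values
flags = struct.unpack("!H", hdr[12:14])[0]
print((flags & 0x004) != 0)                    # → True (RST bit is set)
```

Each endpoint that receives a matching RST tears down the connection immediately, which is why firing a few at both ends is such a cheap way to kill a flow.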

But what happens if the list is wrong? What if the routers send all traffic to the filter box, which is then overloaded? What if the filter box starts blocking all traffic?

Unlike the Google glitch, where the warning was merely advisory, this system blocks the traffic itself. And here’s the irony: systems engineers can’t connect to the routers to reset them, because all the traffic is being blocked. That means systems engineers across Australia would have to physically visit every router to reset it. They’d also need a new ACMA blacklist. Where do we get one of those on a Sunday?

Maybe I’m paranoid. So I ran this past A Very Experienced ICT Security Specialist Who Cannot Be Named.

“Broadly speaking, you’re right. The network would have to be extremely well designed to recover from an error like that, and that’s most unlikely,” he said.

So how well are our networks designed? Try these two examples.

On 24 February last year, Pakistan tried to block the evil YouTube using the very BGP technique I described — and killed YouTube globally.

And only yesterday, swathes of Victoria and Tasmania were without the Internet for hours because power failed at a data centre. There were standby power generators, but, well, best-laid plans, etc.