Viggly Vorm
Jan. 27th, 2003 11:58 am
We probably all know about the worm that ate the Internet by now. The technical details are simple enough for even me to grasp: it's a worm which copies itself into the memory of unprotected SQL servers and then generates enough Internet traffic to swamp the network.
So far, so good. The thing I fail to comprehend is: if a patch for this vulnerability has been available for the last six months, why have so few people got round to installing it? I realise that the majority of the people who read this are likely to be around 3000% more techie than me, so can you explain this to me? It seems self-evident that these sorts of security patches should be a first priority for anybody operating a server.
But what do I know? I'm only a hack.
no subject
Date: 2003-01-27 04:10 pm (UTC)
Priorities are an issue - should the sysadmin take the system down (even for a reboot) to install a patch for something that isn't necessarily a problem? The patch had been available for months before this attack, so there was no real sense of urgency when it was issued. Where I work, unless there is an immediate threat, the core systems (upon which our business relies in order to function) do not get taken down during office hours. So there's another issue - resourcing the overtime (or flexitime) to install patches at off-peak times.
And what if the patches don't work, or screw your system, or conflict with software you're running? Microsoft, for example, are not a shining bastion of testing software before releasing it to the public. Why should their security patches be any different? Many sysadmins don't want to be at the bleeding edge - they want their computers to work, so they either wait to hear that a particular patch isn't going to screw them, or they install it onto a test server before touching the live server.
Information about how you install patches (not the mechanics, but the procedure and timing) is often difficult to come by. Several patches won't work properly on IIS, for example, if you don't install them in the correct order - and finding the correct order to install them in is a non-trivial task.
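For illustration only - the hotfix IDs and the ordering below are invented, not real Microsoft ones - here's a rough sketch of the kind of check you end up writing yourself, because nothing in the patches encodes the ordering for you:

# Sketch of a hand-rolled ordering check. REQUIRED_ORDER is a hypothetical
# list of hotfix IDs that must be applied in this sequence; the real IDs and
# sequence have to be dug out of the documentation by hand.
REQUIRED_ORDER = ["Q111111", "Q222222", "Q333333"]

def install_order_ok(planned):
    """True if the planned install list applies the known-ordered hotfixes in sequence."""
    positions = [planned.index(h) for h in REQUIRED_ORDER if h in planned]
    return positions == sorted(positions)

print(install_order_ok(["Q111111", "Q222222", "Q333333"]))  # True - correct order
print(install_order_ok(["Q222222", "Q111111", "Q333333"]))  # False - out of order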
All these things make keeping machines secure an ongoing and thankless task, especially when TPTB don't support regular maintenance - it's not a revenue generator. (Luckily, that's not the case where I currently work, but it has been in places I've worked in the past.)
Yes, sysadmins goof off. Probably just as much as journos. But it's hardly the only explanation.
no subject
Date: 2003-01-28 03:40 am (UTC)
http://www.nextgenss.com/advisories/mssql-udp.txt
Please note the section:
Network Based Denial of Service
*************************************
When an SQL Server receives a single byte packet, 0x0A, on UDP port 1434 it
will reply to the sender with 0x0A. A problem arises as SQL Server will
respond, sending a 'ping' response to the source IP address and source port.
This 'ping' is a single byte UDP packet - 0x0A. By spoofing a packet from
one SQL Server, setting the UDP port to 1434, and sending it to a second
SQL Server, the second will respond to the first's UDP port 1434. The first
will then reply to the second's UDP port 1434 and so on. This causes a storm
of single byte pings between the two servers. Only when one of the servers
is disconnected from the network or its SQL service is stopped will the
storm stop. This is a simple network based DoS, reminiscent of the echo and
chargen DoSes discussed back in 1996
(http://www.cert.org/advisories/CA-1996-01.html). When in this state, the
load on each SQL Server is raised to c. 40 - 60 % CPU time.
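To make that concrete, here is a minimal sketch of the probe the advisory is describing: send the single byte 0x0A to UDP port 1434 and see whether it comes straight back. The address below is a placeholder for a test machine you control; per the advisory, an unpatched SQL Server echoes the byte, while a patched or filtered one should stay silent.

import socket

# Send the single-byte 0x0A probe described above to UDP port 1434 and
# wait briefly for the single-byte 'ping' reply.
SERVER = "192.0.2.10"   # placeholder address for a test machine you control
PORT = 1434             # SQL Server resolution service

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
try:
    sock.sendto(b"\x0a", (SERVER, PORT))
    reply, addr = sock.recvfrom(1024)
    print("reply from %s: %r" % (addr[0], reply))   # unpatched servers echo 0x0A
except socket.timeout:
    print("no reply - patched, filtered, or nothing listening")
finally:
    sock.close()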
There is no excuse for not patching your server when something like this is at stake. Yes, even we in the Unix world let a few things go unpatched, but only after we've weighed the potential ramifications against the immediate problems. I help maintain security on approximately 300 customized servers which have next to no standard way to patch when issues come up, but when either the network or a server is at risk, we make sure to get things patched as quickly as possible - even if that means rebuilding and retuning Apache and all of its dependent packages for each customized server on our network.
Yes, it's a lot of work when we have other projects we need to get done, but it's also prevented a lot of major catastrophes here. Such is the life of a sysadmin: if we couldn't hack it this way, we wouldn't be in this job.
I can accept that if something requires a lot of reworking (as I'm told this patch really did require), it might take a bit longer than "yesterday" to get applied. However, six months is just ridiculous and horrible work on any administrator's part.
Gwendolyn R. Schmidt, SysAdmin, SAGE member.