[Grey-Walter] Fw: Benevolent Worms
txopi at sindominio.net
Tue Sep 16 09:19:15 CEST 2003
This article is taken from the newsletter of Bruce Schneier (an American
cryptographer):
http://www.counterpane.com/crypto-gram.html
I'm not subscribed to the list, but I figured it would interest you :)
Regards,
Txopi.
Benevolent Worms
================
A week after Blaster infected computers across the Internet, a
"benevolent" worm started spreading in its wake. Called Blast.D or
Nachi, it infects computers through the same vulnerability that Blaster
did. When it infects a computer, it finds and deletes Blaster, and
then applies the Microsoft patch to the computer so that the
vulnerability is closed and Blaster cannot reinfect. It then scans the
network for other infected machines and repairs them, too.
Blast.D represents a cool-sounding idea, and one that surfaces from
time to time. Why don't we use worms (or viruses) for good instead of
evil? Worms contain two parts: a propagation mechanism and a
payload. The propagation mechanism spreads the code from computer to
computer. The payload is what it does once it gets to a computer. As
long as they're infecting everyone's computers, why don't we use them to
patch vulnerabilities, update systems, and improve security? Why don't
we create worms with beneficial payloads, and then let them propagate
unchecked?
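The two-part anatomy described above (propagation mechanism plus payload) can be illustrated with a toy simulation. This is a hypothetical sketch, not code from the essay: it models hosts on a network, a propagation step that scans random addresses for a shared vulnerability, and a Nachi-style "benevolent" payload that closes that vulnerability behind the worm. All parameter names and values are illustrative.

```python
import random

def simulate_worm(n_hosts=1000, vulnerable_fraction=0.5,
                  scans_per_step=3, steps=20, seed=42):
    """Toy model: worm = propagation mechanism + payload.

    The 'payload' here just marks a host as patched (as Blast.D/Nachi
    did), which also closes the hole the worm used to get in.
    """
    rng = random.Random(seed)
    vulnerable = set(rng.sample(range(n_hosts),
                                int(n_hosts * vulnerable_fraction)))
    infected = {rng.choice(tuple(vulnerable))}  # patient zero
    history = []
    for _ in range(steps):
        newly = set()
        for _host in infected:
            # Propagation mechanism: probe random addresses for the hole.
            for _ in range(scans_per_step):
                target = rng.randrange(n_hosts)
                if target in vulnerable and target not in infected:
                    newly.add(target)
        infected |= newly
        # Payload: "patch" every infected host, i.e. remove it from the
        # vulnerable set so it cannot be reinfected.
        vulnerable -= infected
        history.append(len(infected))
    return history

curve = simulate_worm()
print(curve)  # infected-host count per step
```

Note what the simulation makes visible: the payload is a single line, easily swapped for something malicious, while the propagation loop is identical either way. The spread is also entirely automatic once patient zero exists, which is exactly the property the rest of the essay argues against.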
This is tempting for several reasons. One, it's poetic: turning a
weapon against itself. Two, it lets ethical programmers share in the
fun of designing worms. And three, it sounds like a promising
technique to solve one of the nastiest online security problems:
patching or repairing computers' vulnerabilities.
Everyone knows that patching is in shambles. Users, especially home
users, don't do it. The best patching techniques involve a lot of
negotiation, pleading, and manual labor...things that nobody enjoys
very much. Beneficial worms look like a happy solution. You turn a
Byzantine social problem into a fun technical problem. You don't have
to convince people to install patches and system updates; you use
technology to force them to do what you want.
And that's exactly why it's a terrible idea. Patching other people's
machines without annoying them is good; patching other people's
machines without their consent is not. A worm is not "bad" or "good"
depending on its payload. Viral propagation mechanisms are inherently
bad, and giving them beneficial payloads doesn't make things better. A
worm is not a tool for any rational network administrator, regardless of
intent.
A good software distribution mechanism has the following characteristics:
1) People can choose the options they want.
2) Installation is adapted to the host it's running on.
3) It's easy to stop an installation in progress, or uninstall the
software.
4) It's easy to know what has been installed where.
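The four properties above can be made concrete in a short sketch of a consent-based, centrally controlled rollout. Everything here is hypothetical (the `Host` type, the patch names, the `distribute` function are not from the essay); the point is that each property maps to an explicit piece of code, none of which a worm can have.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    consented: bool       # 1) people choose whether to install
    platform: str         # 2) installation adapts to the host
    installed: list = field(default_factory=list)  # 4) auditable record

# Per-platform packages (illustrative names).
PATCHES = {"linux": "patch-1.2-linux", "windows": "patch-1.2-win"}

def distribute(hosts, abort=lambda: False):
    """Push a patch to consenting hosts, keeping a full log."""
    log = []
    for h in hosts:
        if abort():                 # 3) easy to stop a rollout mid-way
            break
        if not h.consented:         # never install without consent
            log.append((h.name, "skipped"))
            continue
        pkg = PATCHES.get(h.platform)
        if pkg is None:             # unknown platform: refuse, don't guess
            log.append((h.name, "unsupported"))
            continue
        h.installed.append(pkg)     # the record also makes uninstall possible
        log.append((h.name, pkg))
    return log
```

For example, `distribute([Host("a", True, "linux"), Host("b", False, "linux")])` patches host `a` and logs host `b` as skipped. Every one of these safeguards (the consent check, the abort hook, the log) is overhead a worm author would strip out, which is the incompatibility the next paragraphs describe.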
A successful worm, on the other hand, runs without the consent of the
user. It has a small amount of code, and once it starts to spread, it
is self-propagating, and will keep going automatically until it's halted.
These characteristics are simply incompatible. Giving the user more
choice, making installation flexible and universal, allowing for
uninstallation -- all of these make worms harder to
propagate. Designing a better software distribution mechanism makes
it a worse worm, and vice versa. On the other hand, making the worm
quieter and less obvious to the user, making it smaller and easier to
propagate, and making it impossible to contain, all make for bad
software distribution.
All of this makes worms easy to get wrong and hard to recover
from. Experimentation, most of it involuntary, proves that worms are
very hard to debug successfully: in other words, once a worm starts
spreading, it's hard to predict exactly what it will do. Some viruses
were written to propagate harmlessly, but did damage -- ranging from
crashed machines to clogged networks -- because of bugs in their
code. Many worms were written to do damage and turned out to be
harmless (which is even more revealing).
Intentional experimentation by well-meaning system administrators
proves that in your average office environment, the code that
successfully patches one machine won't work on another. Indeed,
sometimes the results are worse than any threat of external
attack. Combining a tricky problem with a distribution mechanism
that's impossible to debug and difficult to control is fraught with
danger. Every system administrator who's ever distributed software
automatically on his network has had the "I just automatically, with
the press of a button, destroyed the software on hundreds of machines
at once!" experience. And that's with systems you can debug and
control; self-propagating systems don't even let you shut them down
when you find the problem. Patching systems is fundamentally a human
problem, and beneficial worms are a technical solution that doesn't work.
On the other hand, automatic update functions are sometimes a good
idea. Corporate network administrators often hate them, for all the
right reasons, but there's no other way to patch many home-user
systems. There are legions of computer users who cannot administer
their own computers. For them, I strongly recommend automatic
updates. It won't be perfect. It'll occasionally break their
system. And sooner or later someone will figure out how to install
malware using the automatic update system. But it's a much better
solution than the alternative, which is that these systems never get
patched.
(An earlier version of this essay was written with Elizabeth Zwicky in
2000, and appeared in "The Industry Standard.")
Blast.D Stories:
<http://www.washingtonpost.com/ac2/wp-dyn/A9531-2003Aug18>
<http://www.computerworld.com/printthis/2003/0,4814,84126,00.html>
<http://news.com.com/2102-1002_3-5065117.html?tag=ni_print>