In confronting malware, there is nothing innovative about new strains of Klez,
Yaha, SirCam and Code Red. Yet all of these worms have demonstrated unprecedented
staying power on the Internet despite the existence of patches, anti-virus signatures,
personal firewall protection and Intrusion Detection technology. Why are these
threats so prolific and why do new threats gain traction so quickly if all they
amount to is retread malicious code?
This paper analyzes the patterns of emerging malware and presents a strategy
to assist network and security administrators in addressing "new"
yet old threats.
It's easy to dismiss old news, and in the case of Code Red, nobody wants to
look in the rear-view mirror. We want to forget about a malware invasion
that required IT staff overtime to rebuild and patch machines and to audit
the network to make sure our respective environments were clean. However, by examining
the history and effects of the Code Red outbreak from its inception in the summer
of 2001 up through the present, we can learn a great deal. First and foremost,
the experience should remind us not to downplay or give up for dead any malicious
code in the wild. Even before it was finished wreaking havoc, Code Red was estimated
by the United States General Accounting Office to have caused upwards of 2.4
billion dollars' worth of damage, with hundreds of thousands of MS Internet Information
Services (IIS) servers infected.
Who even remembers that Code Red was considered so severe a threat at the time
that it brought Microsoft and the FBI together to brainstorm solutions?
Unfortunately, wake-up calls have a short shelf life. We are all driven by new
priorities every day, and if there is no perceived immediate danger, it's
only natural to forge ahead with the tasks of the workday that require immediate
attention. Still, this should not preclude you from maintaining a diligent asset
protection program with ongoing patch and change management processes. There
is tremendous value in keeping an eye on early warning reports of new malware
threats no matter how retread they may seem, testing these new threats and exploits
whenever feasible, and making your environment as invulnerable as you can
possibly make it. This involves more than sending an advisory
e-mail to your user-base regarding new threats and where to download a patch.
Ongoing preventive maintenance involves written procedures based on notes you
have taken and information collected in preparation for the day we all hope
never arrives: the day of reckoning when the unforeseen happens and your network
is ripped to shreds by a malware attack.
Lightning does strike out of the blue and, contrary to popular belief, it can
even happen twice. Network and security technicians can never overlook seemingly
innocuous details. Perhaps you are already familiar with that sinking feeling
of discovering a compromised box on your network. That should be motivation
enough to maintain a preventive maintenance program but if this notion reflected
reality, we would not even be having a discussion on yet another re-emergence
of Code Red.
The first step in preventive maintenance is adopting a proactive rather than
a reactive approach to combating Internet threats. We tend to think the most
important details involve retracing what we have already done to address the
last outbreak: our machines are patched, and we've upgraded our gateways,
desktops and laptops with the latest anti-virus signatures. What can possibly
go wrong? To find out, let's look at the pattern of what occurred
when Code Red first emerged. The original Code Red attacked an IIS buffer overflow
vulnerability that had been disclosed in June 2001. It took about a month for
the worm's author(s) to develop their code, send the worm into the wild and
for it to gather steam; the impact was not immediate. As we know, some worms
have the ability to propagate very rapidly, but this is not the case all of the
time, so it's not a good idea to be fooled by so-called "low risk"
worms. All worms have the ability to become larger problems.
In the case of Code Red, a patch was available, but by the time most administrators
applied it, the worm had cascaded across the Internet and the damage was done. Part
of the cleanup process was a collective industry effort to discuss the
Code Red problem and how best to prevent it from happening again. The IT industry
was fixated on girding for an even greater and more sophisticated malware threat
"in the future." Well, the future is here, and it seems our preventive
maintenance procedures haven't changed all that much. Code Red is back
in 2003, following the exact same pattern it did in 2001. Maybe it won't
repeat the scale of menace it once posed, but clearly the concept is applicable
to any new threat. For example, a new Yaha strain emerges every other month
it seems. SirCam just won't go away, and Klez retains a stranglehold as
the hardiest malware the Internet has ever seen.
All this being said, are malware techniques becoming more sophisticated? Are
the propagation methods any different? Not really. We're actually looking
at the exact same patterns emerging, in many cases through the exact same
malware. There are a few differences here and there, but by and large, it's all old
hat, and we're just as vulnerable to a network shredding now as we were
back in 2001.
Aside from negligence in not keeping up with our best intentions for preventive
maintenance, what are we doing wrong? We're more sensitive to impressing
security measures upon end-users. We have a stronger appreciation for taking
network maintenance seriously. We have improved protection at the gateways and
other vectors into a network. Security expenditures have increased from year
to year within most companies. The industry is more open than ever before when
it comes to disclosure of vulnerabilities as well as development and distribution
of patches. Even Microsoft has made the commitment to greater security as they
lumber toward another platform release. Will Windows Server 2003 and IIS 6 solve
security issues or bring with them a new set of problems to be dealt with? It
all remains to be seen.
The age of polymorphic malware is upon us yet we can expect more of the same:
intelligent algorithms to identify IP addresses, back doors sending broadcasts
to other servers with the same vulnerabilities as the infected host. Even if
the malware is not successful in locating suitable new hosts, the replication
process itself causes the most harm; it is this scanning traffic that bottlenecks the Internet.
The experts predicted that worms of the future would leave us with no lead time
to respond to new threats after a vulnerability is published. To an extent, that
prediction has come true. It's not uncommon for the speed of saturation to be
extraordinarily rapid. In the case of SQL Slammer, for example, the worm sought
targets by sending requests to random IP addresses as rapidly as the infected
host's bandwidth allowed. The worm itself was only applicable to MS SQL Server,
yet the rate of infection was high even though Microsoft had had a patch available
six months prior.
Granted, it's not possible to stop every worm outbreak, but the record
over the last two years clearly shows that new approaches are needed to deal with
the proliferation of these patterned malware attacks. This is especially true with regard
to repeat offenders that have no business cropping up every few months with a
new strain. There may only be subtle differences among strains, but malware is
still a sophisticated and intelligent menace. The only way to understand the
threat is to see it in action and study its behavior in a contained environment.
Challenges for IT Staff
As network and security administrators, it's not in your best interest
to shy away from testing suspicious programs to gauge their impact on your network.
Take note of patterns in file names associated with particular malware and utilize
security software that makes use of MD5. MD5 is an algorithm that produces a 128-bit
message digest effectively unique to every application; finding two applications
with the same MD5 signature is considered computationally infeasible. Therefore,
MD5 can be used to verify data authenticity and serve as the primary instrument
of file comparison and detection, as well as a determinant of file corruption and tampering.
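To make this concrete, here is a minimal sketch in Python of such a digest check,
assuming you have already recorded a known-good MD5 value for the file in question
(the file path and expected digest supplied on the command line are placeholders):

    import hashlib
    import sys

    def md5_digest(path, chunk_size=65536):
        # Read the file in chunks so large binaries do not exhaust memory.
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(path, known_good):
        # Compare the current digest against the value recorded when the file was known clean.
        actual = md5_digest(path)
        if actual == known_good.lower():
            print("%s: OK (%s)" % (path, actual))
            return True
        print("%s: MISMATCH - expected %s, got %s" % (path, known_good, actual))
        return False

    if __name__ == "__main__":
        # Usage: python md5check.py <file> <expected-md5>
        verify(sys.argv[1], sys.argv[2])

The same routine can be pointed at image builds in your repositories to confirm
they have not been altered since they were created.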
Replicating user experiences in a safe environment is invaluable to your own
education and will come into play as you continue to flesh out the priorities
of your defense strategies. Speed and accuracy are critical. Having a test environment
ready can win the day. If you really want to get serious about malware testing,
build a lab, segregate it from your company network and use it exclusively
to test malware, spyware and adware.
Dealing with malware early will save a great deal of problems and frustration
later. There is no substitute for adopting an ongoing preventive maintenance
attitude. While there may never be an absolute magic bullet, leave nothing to
chance. The following are some suggestions to use in building or adding to your
malware response framework.
- Devise rapid response checklists and workflows that anyone can follow.
The hardest part of this is finding the time to keep them updated. Structured
documentation goes a long way.
- Have a GHOST server or another disk-imaging server at the ready to warehouse
the most up-to-date operating systems, service packs and security fixes. Most
importantly, make sure the builds are clean. If you suspect an image build
is compromised, err on the side of caution and build it again.
- Maintain a secure FTP server with backups of image builds, diagnostic tools,
bookmarks, etc. Have everything you need ready for rapid re-deployment in
case disaster strikes and your main repository is cut off.
- When you install a patch, do you also test its effectiveness? This is an
extra step that most technicians don't take. It can be time consuming,
but it's too easy to place our faith in a vendor to fix a problem by
simply installing a patch. As evidenced by the strength of malware, patches
sometimes don't fully resolve a problem; they just cover it up. Second-wave
vulnerability discoveries are common, and it takes more than one layer of
shielding to thwart some of the more resilient malware. A simple first-pass
check is sketched after this list.
- End-users are going to be independent, but don't let that stop you
from training and educating them on effective desktop security practices. The more
handy tips you impress upon them, the less prone they will be to making mistakes
that can impact your network. Familiarity with new security policies has to
be reinforced on an ongoing basis.
- Don't take compromises personally. You're not going to win every
battle. Take careful notes and make the effort not to repeat the mistakes
of the past. Become a stronger technician with each experience.
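On the patch-verification point above, the following is a minimal first-pass sketch
in Python, assuming the patched service is a web server; the host name is a placeholder,
and an unchanged Server banner is only a hint that the fix did not take, not proof either
way. A conclusive test means re-running the exploit against the host inside your isolated lab.

    import socket

    def grab_http_banner(host, port=80, timeout=5):
        # Send a bare HEAD request and return the Server: header, if the host discloses one.
        request = ("HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n" % host).encode("ascii")
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(request)
            response = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                response += chunk
        for line in response.decode("latin-1").splitlines():
            if line.lower().startswith("server:"):
                return line.split(":", 1)[1].strip()
        return None

    if __name__ == "__main__":
        # "webserver.example.com" stands in for the host you just patched.
        banner = grab_http_banner("webserver.example.com")
        print("Server banner: %s" % (banner or "not disclosed"))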