Posted by Parks, Raymond
URL: http://friam.383.s1.nabble.com/Diversity-and-Stability-in-Food-Webs-tp520801p520805.html
I suppose that everyone has a hot button that pushes them to respond
even when such a response is inappropriate or unusual. Mine is academic
computer security researchers who, as a group, seem to be out of touch
with the state of things in the wild. I wrote this brief introduction
when I decided that my comments below were somewhat inflammatory;
still, I wanted to get the points across to those on the list who are
not knowledgeable about this subject. I would need to see or hear more
than the abstract before I could safely say that all of these comments
apply to Dr. Pucella's research, but I can comfortably say that they
apply in a more general sense to similar research.
Dan Kunkle quoted Professor Riccardo Pucella's ABSTRACT:
>
> Computers that execute the same program risk being vulnerable to the
> same attacks. This explains why the Internet, whose machines
> typically have much software in common, is so susceptible to viruses,
> worms, and other forms of malware.
This is true, and probably in more ways than Dr. Pucella was
thinking. The genetic inheritance involved in software is an interesting
field of study.
> It is also a reason that
> replication of servers does not necessarily enhance the availability
> of a service subject to attack.
This depends upon the type of attack. After the first major set of
DDoS attacks on commercial services, those services turned to redundant
servers spread across the geography of the Internet. A new kind of
server management software was required so that clients could receive
subsequent web pages, and updates to those pages, from any of the
distributed servers. DDoS works by clogging a communication channel;
distributing the servers ensures that no attacker can clog enough
channels to deny the availability of the service.
In the case of worms, the same measure is at least partially
effective depending upon the distribution of the servers and the
propagation algorithm of the worm.
In fact, the only way that redundant, identical servers don't help
security is if one foolishly sets them up side by side on the same
network pipe, in which case they are effectively a single server.
> Diversity is an obvious defense. A set of replicas is diverse in so
> far as all implement the same functionality but differ in their
> implementation details.
This is true of attacks on software infrastructure (web servers,
other services, etc.). It is not true of application-layer attacks. If
a web site that uses IIS on Winders is susceptible to SQL injection or
cross-site scripting, the same site code is likely to be susceptible
when run through Apache on NetBSD. Diversity is no help at the
application layer, which is becoming the most popular (since it is the
easiest) layer to attack.
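A minimal sketch of why (lookup_user and the elided database call are
hypothetical, purely for illustration): a C fragment that builds a
query by pasting in user input carries its flaw to any server or OS it
runs on.

    /* Hypothetical CGI fragment: the injection flaw lives in the
     * application code and follows it to any web server or OS. */
    #include <stdio.h>

    void lookup_user(const char *name)  /* name arrives from the request */
    {
        char query[256];

        /* VULNERABLE: name is spliced into the SQL unescaped.  A value
         * such as  x' OR '1'='1  rewrites the query whether this runs
         * under IIS on Windows or Apache on NetBSD. */
        snprintf(query, sizeof(query),
                 "SELECT * FROM users WHERE name = '%s'", name);

        puts(query);  /* database call elided; print the query instead */
    }

    int main(void)
    {
        lookup_user("x' OR '1'='1");
        return 0;
    }
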
> Diverse replicas are less prone to having
> vulnerabilities in common, because attacks typically depend on memory
> layout and/or instruction sequence specifics.
This is only true of unsophisticated buffer/heap overflow types of
attacks. Even the simple differences between language versions of
Windows will thwart poorly written malware. Unfortunately, the malware
in the wild has been more sophisticated than that for several years.
Nearly all new exploits, even the proof-of-concept variety, use
universal offsets and methods. Attack frameworks, such as Metasploit,
have taken advantage of this to separate the payload from the delivery
mechanism.
> But building multiple distinct versions of a program is expensive,
This is not necessarily true, even for architecture- and
platform-specific languages like C. The Debian Linux distribution is
available for 11 different hardware platforms; Debian is free and
supported by volunteers.
Some code is platform-independent. Some depends upon the software
platform but is OS- and hardware-independent. Even code that must have
different versions for different hardware is a well-understood
situation, and there are mechanisms to deal with it, ranging from
simple #ifdefs in C through configure scripts.
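A minimal sketch of the first of those mechanisms (the preprocessor
symbols shown are ones common compilers really define):

    /* One source file, a different executable per platform, selected
     * by symbols the compilers define themselves. */
    #include <stdio.h>

    const char *platform_name(void)
    {
    #if defined(_WIN32)
        return "Windows";
    #elif defined(__linux__)
        return "Linux";
    #elif defined(__NetBSD__)
        return "NetBSD";
    #else
        return "unknown platform";
    #endif
    }

    int main(void)
    {
        printf("Built for: %s\n", platform_name());
        return 0;
    }
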
Dr. Pucella is right if one means writing code to the same
functional requirements but with genuinely different executables. That
can be very difficult. Code obfuscators try to achieve it but are
frequently foiled by the sophistication of optimizing compilers; it is
entirely possible for two different source texts to compile to the
same executable.
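A trivial illustration of that last point: an optimizing compiler will
normally emit identical machine code for these textually different
functions, the same normalization that defeats obfuscators.

    /* Three different source texts; an optimizing compiler (gcc -O2,
     * for instance) normally reduces all three to the same single
     * instruction, erasing the textual diversity. */
    int double_a(int x) { return x + x; }
    int double_b(int x) { return x * 2; }
    int double_c(int x) { return x << 1; }

Compile with gcc -O2 -S and compare the assembly for the three
functions to see the effect.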
> so researchers have
> turned to mechanical means for creating diverse replicas via
> transformations: relocation and/or padding the run-time stack by
> random amounts,
And the bad guys have already found ways to bypass most of these
mechanisms for buffer overflows; in any case, such mechanisms are
useless against application-layer attacks.
> re-arranging basic blocks and code within basic blocks,
Code obfuscation is more a defensive mechanism of malware writers and
copy-protection zealots than a protection against overflows. And it
doesn't work.
> and randomly changing the names of system calls or instruction
> opcodes.
Thereby losing any benefits of standardization, such as POSIX
compliance, and absolutely requiring multiple versions of software for
the same platform. Ergo, this will never be commercially viable.
> Different classes of transformations are more or less effective in
> defending against different classes of attacks. Although knowing this
> correspondence is important when designing a set of defenses for a
> given threat model, knowing the correspondences is not the same as
> knowing the overall power of mechanically-generated diversity as a
> defense. In this talk, I explore that latter, broader, issue,
> investigating two complementary points:
> (1) a formal characterization of what attacks cannot be blunted by
> mechanically-generated diversity, and
Well, that's easy: any attack that doesn't depend on specific stack
or heap layouts. That leaves in nearly all current attack mechanisms,
including most current stack and heap overflows.
> (2) a rigorous comparison of mechanically-generated diversity to
> type systems, another commonly advocated defense.
I'm not sure whether Dr. Pucella is referring to lambda-calculus type
theory, conventional data types (related, but not the same thing), or
mandatory access controls. Since a recent paper he wrote with Matthew
Fluet discusses the data-type specializations within the type system of
Standard ML, I will assume the first option.
My math chops are worse than Owen's, especially when it comes to the
lambda calculus, but my understanding of type theory is that data types
defined through this formal method provide an inherent defense against
overflow attacks. Put practically, a function will not accept a string
argument that is longer than its declared type allows. If the defense
can be generalized beyond academic languages, it would work against
overflows such as Aleph1 describes in his seminal "Smashing the Stack
for Fun and Profit". However, that defense seems problematic against
off-by-ones, heap overflows, and pretty much anything else that has
come along since 1996.
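For reference, the flaw Aleph1 described reduces to a few lines of C
(a sketch of the vulnerable pattern, not a working exploit; greet is a
hypothetical name):

    /* The classic stack-smashing setup: a fixed buffer filled by a
     * length-unaware copy.  Input longer than 64 bytes overruns buf
     * and can overwrite the saved return address. */
    #include <string.h>

    void greet(const char *input)
    {
        char buf[64];
        strcpy(buf, input);   /* VULNERABLE: no length check */
        /* ... use buf ... */
    }
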
I'm afraid that the only real defense is to follow the programming
practice drilled into me very early in my career: write all software to
validate any external input before using that input. The real problem
with low-level attacks like buffer overflows and high-level (in the ISO
OSI sense) attacks like SQL injection is that they all result from a
failure to validate input.
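In that spirit, the vulnerable sketch above is fixed by a bounds check
where the external input arrives, not by diversity (again a
hypothetical sketch):

    /* The same function with its input validated before use. */
    #include <string.h>

    int greet_checked(const char *input)
    {
        char buf[64];

        if (input == NULL || strlen(input) >= sizeof(buf))
            return -1;          /* reject over-long external input */

        strcpy(buf, input);     /* now provably within bounds */
        /* ... use buf ... */
        return 0;
    }
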
--
Ray Parks                      rcparks at sandia.gov
IDART Project Lead             Voice: 505-844-4024
IORTA Department               Fax: 505-844-9641
http://www.sandia.gov/idart    Pager: 800-690-5288