Defensive programming

Defensive programming is a form of defensive design intended to ensure the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety or security is needed.

Defensive programming is an approach to improve software and source code, in terms of general quality (reducing the number of software bugs and problems), making the source code comprehensible (readable and understandable, so that it passes a code audit), and making the software behave in a predictable manner despite unexpected inputs or user actions.

Overly defensive programming, however, introduces code to guard against errors that cannot happen, yet that code still has to be executed at runtime and maintained by the developers, increasing both runtime and maintenance costs. There is also the risk that the code catches or prevents too many exceptions; in those cases the error is suppressed and goes unnoticed, while the result is still wrong.
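
For example (a minimal sketch; the function and its context are illustrative and not taken from the article), a catch-all default silently hides bad input, so the caller receives a plausible-looking but wrong result:

#include <stdio.h>
#include <stdlib.h>

int parse_port(const char *text) {
    char *end;
    long value = strtol(text, &end, 10);
    if (*end != '\0' || value < 1 || value > 65535) {
        return 80; // overly defensive: swallow the error and "recover" with a default
    }
    return (int)value;
}

int main(void) {
    printf("%d\n", parse_port("8o80")); // a typo for "8080": prints 80, and nobody notices
    return 0;
}

A version that reports the invalid input instead of substituting a default keeps the error visible to the caller.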

Secure programming

Main article: Secure coding

Secure programming is the subset of defensive programming concerned with computer security. That is to say, security is the concern, not necessarily safety or availability (the software may be allowed to fail in certain ways). As with all kinds of defensive programming, avoiding bugs is a primary objective; however, the motivation is not so much to reduce the likelihood of failure in normal operation (as it would be if safety were the concern) as to reduce the attack surface. The programmer must assume that the software might be actively misused to reveal bugs, and that bugs could be exploited maliciously. For example:

void risky_programming(char *input) {
  char str[1000+1];     // one more for the null character
  // ...
  strcpy(str, input);   // copy input
  // ...
}

If the input is longer than 1000 characters, strcpy writes past the end of the buffer: the function may crash, or it may silently corrupt adjacent memory. Some novice programmers may not feel that this is a problem, supposing that no user will enter such a long input. This particular bug demonstrates a vulnerability which enables buffer overflow exploits. Here is a solution to this example:

void secure_programming(char *input) {
  char str[1000];
  // ...
  strncpy(str, input, sizeof(str)); // copy input without exceeding the length of the destination
  str[sizeof(str) - 1] = '\0'; // if strlen(input) >= sizeof(str) then strncpy won't NUL terminate
  // ...
}
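
A common alternative, not shown in the article (a sketch, assuming C99 and that truncating over-long input is acceptable), is to let snprintf perform the bounded copy and the NUL termination in a single step:

#include <stdio.h>

void secure_programming_snprintf(const char *input) {
  char str[1000+1];     // one more for the null character
  // ...
  snprintf(str, sizeof(str), "%s", input); // copies at most 1000 characters and always NUL-terminates
  // ...
}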

Offensive programming

Main article: Offensive programming

Offensive programming can be considered a category of defensive programming, with the added emphasis that certain errors should not be handled defensively. In this practice, only errors from outside the program's control (such as user input) are handled; the software itself, as well as data from within the program's line of defense, is trusted.

Trusting internal data validity

Overly defensive programming
const char* trafficlight_colorname(enum trafficlight_color c) {
    switch (c) {
        case TRAFFICLIGHT_RED:    return "red";
        case TRAFFICLIGHT_YELLOW: return "yellow";
        case TRAFFICLIGHT_GREEN:  return "green";
    }
    return "black"; // To be handled as a dead traffic light.
}
Offensive programming
const char* trafficlight_colorname(enum trafficlight_color c) {
    switch (c) {
        case TRAFFICLIGHT_RED:    return "red";
        case TRAFFICLIGHT_YELLOW: return "yellow";
        case TRAFFICLIGHT_GREEN:  return "green";
    }
    assert(0); // Assert that this section is unreachable.
}

Trusting software components

Overly defensive programming
if (is_legacy_compatible(user_config)) {
    // Strategy: Don't trust that the new code behaves the same
    old_code(user_config);
} else {
    // Fallback: Don't trust that the new code handles the same cases
    if (new_code(user_config) != OK) {
        old_code(user_config);
    }
}
Offensive programming
// Trust that the new code has no new bugs
new_code(user_config);

Techniques

Here are some defensive programming techniques:

Intelligent source code reuse

If existing code is tested and known to work, reusing it may reduce the chance of bugs being introduced.

However, reusing code is not always good practice, particularly when business logic is involved; reuse in such cases may cause serious business-process bugs.

Legacy problems

Before reusing old source code, libraries, APIs, configurations and so forth, it must be considered whether the old work is valid for reuse, or whether it is likely to be prone to legacy problems.

Legacy problems are the problems that arise when old designs are expected to work with today's requirements, especially when the old designs were not developed or tested with those requirements in mind.

Many software products have experienced problems with old legacy source code. For example, legacy code may not have been designed under a defensive programming initiative, and it may have been written and tested under conditions that no longer apply, such as code designed for ASCII input that now receives UTF-8, or code compiled and tested on 32-bit architectures that runs into new arithmetic problems when rebuilt for 64-bit ones, as sketched below.
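
A minimal sketch of the 32-bit-to-64-bit case (the code is illustrative and not from the article): a habit that was harmless when the code was written can silently truncate values under today's conditions.

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *text = "legacy";

    // Legacy habit: store a strlen() result in an int. Harmless for short strings,
    // but on 64-bit systems object sizes can exceed INT_MAX and the value truncates.
    int legacy_len = (int)strlen(text);

    // Requirement-aware version: size_t matches what the library actually returns.
    size_t len = strlen(text);

    printf("%d %zu\n", legacy_len, len);
    return 0;
}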

Notable examples of the legacy problem include BIND 9, which Paul Vixie and David Conrad presented as a complete rewrite with security as a key design consideration, and the Windows Metafile vulnerability, which stemmed from features added to the WMF format at a time when far more code was treated as trusted.

Secure input and output handling

Canonicalization

Crackers are likely to invent new kinds of representations of incorrect data.

For example, if you check that a requested file is not "/etc/passwd", a cracker might pass another variant of the same file name, such as "/etc/./passwd".

To avoid bugs due to non-canonical input, employ canonicalization libraries.
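
As a concrete illustration (a minimal sketch; the use of POSIX realpath() and the helper name check_not_passwd are assumptions, not from the article), the requested path is canonicalized before it is compared:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Returns 1 only if the requested path does NOT resolve to /etc/passwd.
int check_not_passwd(const char *requested) {
    char resolved[PATH_MAX];
    if (realpath(requested, resolved) == NULL) {
        return 0; // cannot canonicalize: reject rather than guess
    }
    return strcmp(resolved, "/etc/passwd") != 0;
}

int main(void) {
    printf("%d\n", check_not_passwd("/etc/./passwd")); // prints 0: the variant is caught after canonicalization
    return 0;
}

Rejecting paths that cannot be canonicalized at all, as above, is itself a defensive choice: an unresolvable name is treated as suspect rather than waved through.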

Low tolerance against "potential" bugs

Assume that code constructs that appear to be problem-prone (similar to known vulnerabilities, etc.) are bugs and potential security flaws. The basic rule of thumb is: "I'm not aware of all types of security exploits. I must protect against those I do know of and then I must be proactive!".
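
For example (a sketch; the code is illustrative and not from the article), a construct that merely resembles a known vulnerability, such as reading a line with no length bound, is rewritten proactively even if no concrete exploit is known for this particular program:

#include <stdio.h>
#include <string.h>

int main(void) {
    char line[128];

    // Problem-prone pattern: gets(line) reads without any bound and matches a known exploit class.
    // Proactive replacement: a bounded read plus explicit failure handling.
    if (fgets(line, sizeof(line), stdin) == NULL) {
        return 1; // treat unexpected end-of-input as an error rather than silence
    }
    line[strcspn(line, "\n")] = '\0'; // strip the trailing newline, if any
    printf("read: %s\n", line);
    return 0;
}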

Other techniques

Other commonly cited defensive techniques include treating all input data as tainted until it has been verified, regarding all code as insecure until proven otherwise, avoiding fixed-size structures for dynamically sized data (the classic source of buffer overflows), and checking the result of every function call that can fail.
