I get asked this a lot.
I believe (i.e. waiting to be contradicted here!) that there's bugger all you can do about it, and here's how I come to that conclusion:
Any texture loaded in, unless specified otherwise, is brightened/darkened by various factors (e.g. gamma, specific brightness/overbrightness settings).
What starts out as a smooth set of RGB values then gets multiplied/divided/added to etc. as these brightness parameters are applied. The rounding happens either at each stage or once at the end (probably at each stage; I doubt anyone would bother converting the RGB bytes to float and back, but I could well be wrong). The problem is that, at the end, your values have to be rounded down/off (no idea which, doesn't really matter) to end up back in plain old 8-bit RGB format. And you get rounding errors (are they called quantization errors?).
As an example:
0 0 0 = black
1 1 1 = dark grey
127 127 127 = mid grey
Because of gamma they get multiplied by, say, 1.4. In floats we might have:
black becomes 0.0 0.0 0.0
dark grey becomes 1.4 1.4 1.4
mid grey becomes 177.8 177.8 177.8
When they get converted back to integer RGB values, they become:
black = 0 0 0 (no change)
dark grey = 1 1 1 (no change)
mid grey = 178 178 178 (rounded from 177.8)
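To make the rounding concrete, here's a quick C sketch (my own illustration, nothing engine-specific; the 1.4 factor is just the example value from above):

#include <stdio.h>

/* Apply a brightness factor to one 8-bit channel, then quantize
   back to 8 bits by rounding to nearest and clamping. */
static unsigned char scale_channel(unsigned char c, float factor)
{
    float v = c * factor;              /* work in float */
    if (v > 255.0f) v = 255.0f;        /* clamp to the 8-bit range */
    return (unsigned char)(v + 0.5f);  /* round to nearest on the way back */
}

int main(void)
{
    unsigned char greys[] = { 0, 1, 127 };
    for (int i = 0; i < 3; i++)
        printf("%3u -> %3u\n", greys[i], scale_channel(greys[i], 1.4f));
    return 0;  /* prints 0 -> 0, 1 -> 1, 127 -> 178 */
}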
I've used extreme values to illustrate how discrepancies occur when scaling up, but the same (or worse) is true when scaling down, because then adjacent values 'collapse' into the same value as their neighbours, which is often what you see as banding.
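Here's the same idea going the other way (again just my own sketch; 0.7 is an arbitrary darkening factor). Adjacent input values land on the same output value, so a smooth gradient develops flat steps, i.e. bands:

#include <stdio.h>

int main(void)
{
    /* Darken a run of adjacent grey levels by an arbitrary 0.7 factor. */
    for (unsigned int c = 10; c <= 14; c++) {
        float v = c * 0.7f;
        unsigned char out = (unsigned char)(v + 0.5f); /* round to nearest */
        printf("%3u -> %3u\n", c, out);
    }
    return 0;  /* 11 and 12 both collapse to 8 -- a band */
}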
Now, I don't actually know that any scaling down is possible (assuming gamma etc. are greater than default?), but I do remember reading somewhere that multiply texture layers use a 2× multiply. This may account for why all default textures are so dark, I'm not sure, but it's slightly possible (i.e. not likely) that textures get darkened by default, which may account for some of the banding.
Or, in short!
Any time a smooth gradient gets altered by multiplying or dividing (here through brightness settings), some banding may occur.
Now, I await the 'wrong!' posts