

 F000-related things

For the purposes of calculating a conventional Fourier synthesis, both the presence and the value of F000 can safely be ignored. The reason is, of course, that in a conventional synthesis F000 is simply a constant term added to the electron density distribution: changing its value changes the mean electron density and nothing more. Given that most macromolecular crystallographers prefer to contour their maps with the first contour at the mean electron density plus some multiple of the r.m.s. deviation, it has become the macromolecular norm to set F000 to zero, so that the first contour of the maps is always at (something) x (r.m.s. deviation).
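If you want to convince yourself of this, a few lines of Python will do. The following sketch (my own illustration, not part of GraphEnt; the indices and amplitudes are made up) builds a one-dimensional synthesis twice, once with F000 = 0 and once with F000 = 100, and confirms that the two maps differ by a constant everywhere:

  import numpy as np

  N = 64                                   # grid points along a 1-D "cell"
  x = np.arange(N) / N                     # fractional coordinates
  h = np.array([1, 2, 3])                  # made-up reflection indices
  F = np.array([10.0, 5.0, 2.5])           # made-up (real) structure factors

  def synthesis(F000):
      rho = np.full(N, F000)               # the h = 0 term: a constant, F000
      for hi, Fi in zip(h, F):
          rho += 2 * Fi * np.cos(2 * np.pi * hi * x)   # Friedel pair h, -h
      return rho

  diff = synthesis(100.0) - synthesis(0.0)
  print(np.allclose(diff, 100.0))          # True: maps differ by a constant
  print(synthesis(0.0).mean())             # ~0: mean density is F000/V (V=1)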

Setting F000=0 is bound to fail with maxent maps. Let me illustrate this with an example. The following graphs show the distribution of density along a line containing the origin peak of a Patterson function projection17, both for the conventional synthesis and for a number of GraphEnt maps calculated with different values of the F000 term (all scaled to 999.0). I am probably taking the fun out of it, but I think it is worth mentioning that this is a Harker line for a single-site platinum derivative: the signal is the major non-origin peak; the other peaks do not arise from the heavy-atom structure.

Figure: Conventional map

Figure: F000=38000e

Figure: F000=5000e

Figure: F000=1000e

Figure: F000=100e

Figure: F000=10e

Taking the trends apparent from these graphs to their extreme, you could argue that as the value of F000 tends to 0.0 e-, the peaks in the map will tend towards delta functions. This line of reasoning immediately warns you that by ``adjusting'' the value of F000 you can make your map look as sharp as you please, although your data (meaning the data that you have actually measured) stay the same. The point is, of course, that F000 is NOT an adjustable quantity: the sharpness of these maps is not required by the data that you measured, but by the value that you arbitrarily decided to assign to F000. What GraphEnt will give you is (or, I hope, is) what is required by the data (including the assignment of F000). If you tell the program that F000 = 10.0 e-, then GraphEnt will give you peaks as sharp as needed for the sum of electron density over the unit cell to be 10.0 e-. The result is that noise will also appear as sharp peaks, and you are bound to misinterpret your map18.
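To see why a small F000 must force sharp peaks, consider the simplest maxent problem I can think of (a toy model of my own, not the algorithm GraphEnt uses): a one-dimensional cell of unit volume with a single measured amplitude F1. Maximising the entropy subject to the two constraints (total density F000, first Fourier coefficient F1) gives rho(x) = c exp(lambda cos(2 pi x)), with lambda fixed by F1/F000 = I1(lambda)/I0(lambda). The sketch below (Python with scipy; all numbers invented) shows the full width at half maximum of the peak collapsing as F000 is lowered towards F1:

  import numpy as np
  from scipy.special import ive           # exponentially scaled Bessel I_v
  from scipy.optimize import brentq

  F1 = 100.0                              # the "measured" amplitude, held fixed

  def fwhm(F000):
      # Solve I1(lam)/I0(lam) = F1/F000 for lam (ive avoids overflow for
      # large lam), then return the full width at half maximum of the
      # peak exp(lam*cos(2*pi*x)) in fractions of the cell edge.
      target = F1 / F000                  # must stay below 1 for a positive map
      lam = brentq(lambda l: ive(1, l) / ive(0, l) - target, 1e-6, 1e3)
      # Half maximum is reached where cos(2*pi*x) = 1 - ln(2)/lam.
      return 2 * np.arccos(1 - np.log(2) / lam) / (2 * np.pi)

  for F000 in (500.0, 300.0, 200.0, 120.0, 101.0):
      print(F000, fwhm(F000))             # the peak sharpens as F000 -> F1

Since I1/I0 < 1 for any positive density, the ratio F1/F000 can never reach 1: as F000 approaches F1 from above, lambda diverges and the peak tends towards a delta function.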

The one and only consistent way of doing the calculation is to give F000 its correct value. This sounds very nice, but in real life things are not so straightforward: what should the F000 value be for an isomorphous difference Patterson calculation using acentric terms (in which case even knowing the number of substitution sites beforehand doesn't help, because |FPH| - |FP| <> |FH|)? What should the F000 value be for a (2mFo - DFc)exp(iPHI_c) difference map phased from an incomplete poly-alanine model? Should the F000 include the number of electrons due to bulk solvent, although I only have data to 8Å (and some strong data are missing because they were overloaded)? Etc. For these reasons, and in order to keep the procedure of running GraphEnt automatic (at least for a first run), I have resorted to the following unjustified and arbitrary assumptions about your F000s:

A pragmatist's view: if your GraphEnt maps look unjustifiably sharp, increase F000. If they look too smooth, decrease F000 up to the point where you can still ``interpret'' the features that you see.

Please note: the value of F000 is used only for the calculation of the initial uniform map; it is not used to constrain the sum of densities in the GraphEnt maps that follow. In other words, do not expect the F000 calculated from the GraphEnt map to be identical to the value that you defined.
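In code terms, my reading of this behaviour is as follows (assumed bookkeeping, not an excerpt from the GraphEnt source):

  import numpy as np

  def initial_map(F000, V, grid_shape):
      # Uniform density whose integral over the cell equals F000 electrons.
      return np.full(grid_shape, F000 / V)

  def implied_F000(rho, V):
      # Zeroth Fourier coefficient of a sampled map: mean density times volume.
      return rho.mean() * V

  V = 125000.0                            # made-up cell volume, Å^3
  rho0 = initial_map(38000.0, V, (32, 32, 32))
  print(implied_F000(rho0, V))            # 38000.0 for the starting map only;
                                          # the iterated maps will drift from it

One extra re-scaling of the final densities would be enough to enforce F000 on the output map, which is the point made in the next paragraph.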

Quoting from Gull & Daniell (1978): ``... Exact fitting also implies the existence of numerous separate constraints, resulting mathematically in an unwieldy proliferation of Lagrange multipliers and preventing calculation of the solution in all but the simplest cases''. In the case of F000 things may not be that complex (I would think that one additional re-scaling step is all that is required), but given the difficulties with estimating F000 for Patterson and difference Fourier syntheses, I thought it better to leave F000 unconstrained.


Footnotes

... projection17
This is the line v = 0.5 from the example Patt_projection.in included with the distribution of GraphEnt.
... map18
You can actually see one of the artifacts of having too small a value of F000 in the last two graphs. If you look carefully, you will see that it is not only the major peak that is beginning to show line splitting, but also the origin peak. The splitting of the origin peak is only indirectly due to the F000 being too small: as the peaks in the GraphEnt map tend towards delta functions, the amplitudes of the transform of the GraphEnt map tend to a set of normalised E-values with aver(E^2) = 1 for all resolution shells. Now, because you are sampling data that extend to infinity on a finite grid (i.e., the grid of your map), the power of the transform that lies outside the limits of your finite grid folds back within those limits (this is usually called ``aliasing''). The most notable result is that some of the Patterson function coefficients will effectively become negative, and the origin peak will start developing a hole in the middle.
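If you want to see this fold-back in isolation, here is a generic one-dimensional illustration (plain numpy; the grid size and peak width are invented, and this is not a re-run of the Patterson example above): the DFT of a sharply peaked function sampled on N points differs from the true Fourier coefficients by exactly the power folded back from beyond the grid's limits.

  import numpy as np

  N = 32                                  # map grid points
  sigma = 0.01                            # peak width in cell fractions
  x = np.arange(N) / N

  # Sharp periodic Gaussian at the origin (a few periodic images suffice).
  rho = sum(np.exp(-0.5 * ((x - m) / sigma) ** 2) for m in (-1, 0, 1))

  # The DFT of the samples gives, at index h, the sum of the true
  # coefficients F(h + m*N) over all folds m (the sampling theorem).
  F_sampled = np.fft.rfft(rho).real / N
  h = np.arange(N // 2 + 1)
  # True (continuous) coefficients of the same Gaussian.
  F_true = np.sqrt(2 * np.pi) * sigma * np.exp(-2 * (np.pi * sigma * h) ** 2)

  # The discrepancy is the aliased power; the sharper the peak (smaller
  # sigma), the worse the fold-back.
  print(np.max(np.abs(F_sampled - F_true)))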

NMG, Nov 2002