We have a large PHP project. After rolling out a new version, peak memory usage on the server (measured with memory_get_peak_usage() at the end of script execution) doubled.
Alarmed, we started digging.
With eAccelerator disabled, the old and new versions consume the same amount of memory.
With eAccelerator enabled, the twofold difference reappears.
1. Since memory consumption of the same script differs markedly with eAccelerator on versus off, we concluded that when eAccelerator is running, the reported memory does not include the script's own opcode, because it resides in shared memory. Is that actually the case? If not, how else can the enormous difference in memory_get_peak_usage() under otherwise identical conditions be explained?
2. And most importantly: what could reasonably explain the behavior described above? With eAccelerator disabled, memory consumption did not change (and we are inclined to trust that figure), yet with it enabled, the new version shows a completely unrealistic increase in consumption.
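For reference, this is roughly how we take the measurement. A minimal sketch (assuming it can be dropped into the entry script of both versions): it logs both the default memory_get_peak_usage() figure used above and the figure with the real_usage flag, which reports memory actually mapped from the OS rather than memory handed out by PHP's internal allocator. Comparing the two across cache-on and cache-off runs may help localize where the difference comes from.

```php
<?php
// Log peak memory at the very end of the request, in both modes:
//  - memory_get_peak_usage()      : peak handed out by PHP's allocator (emalloc),
//                                   the figure our original numbers are based on;
//  - memory_get_peak_usage(true)  : peak actually allocated from the OS.
register_shutdown_function(function () {
    $peak     = memory_get_peak_usage();
    $peakReal = memory_get_peak_usage(true);
    error_log(sprintf(
        'peak = %d bytes, peak (real_usage) = %d bytes',
        $peak,
        $peakReal
    ));
});
```

If the two modes diverge only when the opcode cache is enabled, that would point at the accelerator's allocator behavior rather than at the new code itself.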